2304.10098 | Two-Memory Reinforcement Learning | Zhao Yang, Thomas M. Moerland, Mike Preuss, Aske Plaat | 2023-04-20T05:39:25Z | http://arxiv.org/abs/2304.10098v2

# Two-Memory Reinforcement Learning
###### Abstract
While deep reinforcement learning has shown important empirical success, it tends to learn relatively slowly due to slow propagation of reward information and slow updates of parametric neural networks. Non-parametric episodic memory, on the other hand, provides a faster learning alternative that does not require representation learning and uses maximum episodic return as state-action values for action selection. Episodic memory and reinforcement learning both have their own strengths and weaknesses. Notably, humans can leverage multiple memory systems concurrently during learning and benefit from all of them. In this work, we propose a method called the Two-Memory reinforcement learning agent (\(2M\))\({}^{1}\) that combines episodic memory and reinforcement learning to distill both of their strengths. The \(2M\) agent exploits the speed of the episodic memory part and the optimality and the generalization capacity of the reinforcement learning part to complement each other. Our experiments demonstrate that the 2M agent is more data efficient and outperforms both pure episodic memory and pure reinforcement learning, as well as a state-of-the-art memory-augmented RL agent. Moreover, the proposed approach provides a general framework that can be used to combine any episodic memory agent with other off-policy reinforcement learning algorithms.
episodic control, reinforcement learning, memory, Atari
Footnote 1: Code will be available after the authors' notification.
## I Introduction
Deep reinforcement learning (DRL) achieves impressive results in a wide range of domains. It reaches super-human performance in games such as Atari [1], Go [2] and Gran Turismo [3]. Recently, it also has shown promise in scientific applications such as controlling nuclear plasma fusion [4] and discovering new matrix multiplication algorithms [5]. However, DRL is well-known for being data inefficient, since back-propagation of reward signals and learning updates (including representation learning) can be slow.
In contrast to such parametric learning approaches, non-parametric episodic memory approaches maintain a memory buffer to store high-rewarded trajectories for either action selection [6, 7] or for enhancing other reinforcement learning methods [8, 9, 10]. These approaches have been demonstrated to outperform conventional reinforcement learning methods in certain tasks, such as Atari [11] and Labyrinth [12]. In episodic memory, reward signals are back-propagated considerably faster than in the one-step temporal difference (TD) learning that is commonly used in reinforcement learning. In addition, one-step TD learning is further slowed down by representation learning and the use of function approximation. A potential problem of episodic memory is that its fast back-propagation is problematic in stochastic tasks, and the lack of learnable feature representations can make generalization difficult. The question therefore becomes: can we combine the best of both approaches in a single algorithm?
Evidence from neuroscience shows that multiple memory systems are activated when humans are learning, and these also interact with each other [13, 14]. Previous research [8, 9, 10] has shown that integrating episodic memory and reinforcement learning can improve overall performance. In these works, episodic memory is mainly used to provide learning signals for DRL methods, but they again face the same challenges that are inherent to DRL. To fully capitalize on the advantages of episodic memory and reinforcement learning, we propose a novel approach called Two-Memory reinforcement learning (\(2M\)), in which both approaches complement each other.
The workflow of the \(2M\) agent is shown in Fig. 1. The \(2M\) agent maintains two memories, namely 'episodic control' (episodic memory for control) (\(2M\)-\(EC\)) and 'reinforcement learning' (\(2M\)-\(RL\)). In the beginning, the \(2M\) agent decides which memory to employ for action selection in the upcoming episode. Then the collected episodic data is pushed into the experience replay buffer, from which data for training \(2M\)-\(RL\) is sampled occasionally. Meanwhile, episodic trajectories are used to update \(2M\)-\(EC\).
The intuition is that after \(2M\)-\(EC\) discovers high-reward trajectories, the _2M_ agent is able to retrieve these trajectories quickly although they might be suboptimal. However, at this stage, _2M-RL_ still has not learned anything substantial due to the one-step reward backup and the slow updates required with function approximation (in DRL). Thus, the _2M_ agent should prefer to use _2M-EC_ initially. As the amount of data collected increases and training continues, _2M-RL_ becomes increasingly good at dealing with stochasticity and developing better feature representations; the _2M_ agent should then gradually switch to _2M-RL_. This is the conceptual approach we study in this work.

Fig. 1: The workflow of the \(2M\) agent. Before the episode starts, the \(2M\) agent selects one type of memory for action selection in the next episode, which could either be episodic control (EC) or parametric reinforcement learning (RL). The data subsequently collected is used to update the EC solution (directly), and also enters an experience replay buffer (\(B\)) for future updating of the parametric RL solution.
In short, our work makes two main contributions:
* We propose a novel framework called _2M_ that combines episodic control and reinforcement learning methods, exploiting the strengths from both sides. The framework can be used with any type of EC method and any type of off-policy RL method.
* We conduct experiments showing that _2M_ outperforms state-of-the-art baselines, and we also include ablation studies that examine when the _2M_ works well, and where it may still be improved.
## II Background
We will first introduce the formal problem definition and the two main approaches combined in this paper: parametric reinforcement learning (in the form of deep Q-learning) and non-parametric episodic memory.
### _Markov Decision Process_
We follow standard definitions [15] and define decision making problems as a Markov Decision Process (MDP) represented by \(\langle S,A,R,P,\gamma\rangle\). \(S\) denotes the state space, \(A\) denotes the action space, \(R\) denotes the reward function and \(P\) denotes the transition dynamics. \(\gamma\) is the discount factor, which is usually close to 1. The agent interacts with the environment by taking action \(a_{t}\in A\) according to some policy \(\pi\) given the current state \(s_{t}\in S\) at time step \(t\); then the next state \(s_{t+1}\sim P(\cdot|s_{t},a_{t})\) and reward \(r_{t}=R(s_{t},a_{t},s_{t+1})\) are returned by the environment. A policy \(\pi\) maps a state to a distribution over actions. The agent will then take the next action \(a_{t+1}\) based on the new state \(s_{t+1}\). This process repeats until the terminal time step \(T\). The goal of the agent is to learn a policy \(\pi\) that maximizes the expected cumulative reward: \(\mathbb{E}_{s_{t+1}\sim P(\cdot|s_{t},a_{t}),a_{t}\sim\pi(\cdot|s_{t}),r_{t} \sim R(\cdot|s_{t},a_{t})}[\sum_{t=0}^{T}\gamma^{t}\cdot r_{t}]\).
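As a concrete illustration of this interaction loop, the following minimal Python sketch collects a single episode and accumulates the discounted return \(\sum_{t}\gamma^{t}r_{t}\); the `reset()`/`step()` interface is assumed to follow the Gymnasium convention, which the paper does not prescribe.

```python
def rollout(env, policy, gamma=0.99, max_steps=1000):
    """Run one episode and return its discounted return.

    Assumes a Gymnasium-style environment: reset() -> (state, info),
    step(action) -> (next_state, reward, terminated, truncated, info).
    """
    state, _ = env.reset()
    discounted_return, discount = 0.0, 1.0
    for _ in range(max_steps):
        action = policy(state)                      # a_t ~ pi(.|s_t)
        state, reward, terminated, truncated, _ = env.step(action)
        discounted_return += discount * reward      # accumulate gamma^t * r_t
        discount *= gamma
        if terminated or truncated:
            break
    return discounted_return
```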
### _Deep Q-Learning_
Define the state-action value function \(Q(s_{t},a_{t})\) as the expected cumulative reward the agent will receive by starting from state \(s_{t}\) and taking action \(a_{t}\). Q-learning is an algorithm to learn the optimal Q value function; it updates the state-action value function by iteratively applying the Bellman equation. The update rule of Q-learning is:
\[Q(s_{t},a_{t})\gets Q(s_{t},a_{t})+\alpha(r_{t}+\gamma\max_{a^{\prime} \in A}Q(s_{t+1},a^{\prime})-Q(s_{t},a_{t}))\]
where \(\alpha\) is the learning rate and \(\gamma\) is the discount factor. The solution of Q-learning is generally stored in a table. When the state space is large, storing all possible states becomes infeasible. We may then use a neural network to approximate the state-action value function. The update rule of the deep Q-network [2] is:
\[y(s_{t})\gets r_{t}+\gamma\max_{a^{\prime}\in A}Q(s_{t+1},a^{\prime})\]

\[\operatorname*{minimize}\ \mathbb{E}_{(s_{t},a_{t},r_{t},s_{t+1})\sim D}\left(y(s_{t})-Q(s_{t},a_{t})\right)^{2} \tag{1}\]
where \(D\) is the sampled data for training. After we have the state-action value function, the policy is extracted by greedily taking the action with the highest state-action value in every state.
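To make Eq. 1 concrete, here is a hedged PyTorch sketch of a single deep Q-learning update on a sampled minibatch. The separate target network and the masking of terminal transitions are standard DQN practice rather than details stated above, and the batch layout is an assumption.

```python
import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One gradient step on the squared TD error of Eq. 1."""
    states, actions, rewards, next_states, dones = batch  # tensors of shape (B, ...)
    with torch.no_grad():
        # y(s_t) = r_t + gamma * max_a' Q(s_{t+1}, a'), with no bootstrap at terminals
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * next_q
    q_taken = q_net(states).gather(1, actions.long().unsqueeze(1)).squeeze(1)
    loss = F.mse_loss(q_taken, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```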
### _Episodic Control_
Episodic control refers to a class of methods that directly use non-parametric episodic memory for action selection [6, 7]. It is generally implemented as a table (\(Q^{ec}\)), and rows represent different actions while columns represent different states. Each entry is associated with a state-action \((s_{t},a_{t})\) pair and denotes the highest encountered episodic return (\(G_{t}=\sum_{k=t}^{T}\gamma^{k-t}r_{k}\)) after taking action \(a_{t}\) in the state \(s_{t}\). The update only occurs after one episode has terminated, which is similar to tabular Monte-Carlo back-up but with a more aggressive update rule. Instead of incrementally updating state-action values, episodic control replaces the stored episodic return with the best value observed so far. The update rule of episodic control is:
\[Q^{ec}(s_{t},a_{t})\leftarrow\begin{cases}G_{t}&\text{if }(s_{t},a_{t})\notin Q^{ec},\\ \max\{G_{t},Q^{ec}(s_{t},a_{t})\}&\text{otherwise}\end{cases} \tag{2}\]
During action selection, if the queried state already has \(Q\) values for all actions, we take the action with the highest \(Q\) value. Otherwise, missing \(Q\) values are estimated by averaging over the \(Q\) values of the K-nearest neighbors. When the memory is full, the least recently updated entry is dropped from the table.
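A minimal Python sketch of such a memory is given below; it assumes each state is summarized by a fixed (e.g. randomly projected) feature vector whose tuple form serves as the table key. The update, kNN fallback and removal rule follow the description above, while the concrete data structure is an illustrative choice.

```python
import numpy as np

class EpisodicControl:
    """MFEC-style table (Eq. 2): per action, feature key -> best observed return."""

    def __init__(self, num_actions, k=5, capacity=100_000):
        self.num_actions, self.k, self.capacity = num_actions, k, capacity
        self.tables = [dict() for _ in range(num_actions)]  # key -> (return, last update)
        self.step = 0

    def update(self, key, action, episodic_return):
        """After an episode ends, keep the maximum episodic return per (s, a)."""
        self.step += 1
        table = self.tables[action]
        best = max(episodic_return, table.get(key, (-np.inf, 0))[0])
        table[key] = (best, self.step)
        if len(table) > self.capacity:  # drop the least recently updated entry
            del table[min(table, key=lambda s: table[s][1])]

    def value(self, key, action):
        """Exact lookup if stored, otherwise average over the k nearest neighbours."""
        table = self.tables[action]
        if key in table:
            return table[key][0]
        if not table:
            return 0.0
        keys = list(table.keys())
        dists = [np.linalg.norm(np.asarray(key) - np.asarray(other)) for other in keys]
        nearest = np.argsort(dists)[: self.k]
        return float(np.mean([table[keys[i]][0] for i in nearest]))

    def act(self, key):
        return int(np.argmax([self.value(key, a) for a in range(self.num_actions)]))
```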
## III Related Work
We will first discuss previous work on episodic control, and how it has been used as a training target for deep reinforcement learning. This last approach differs from our work, where EC is used for action selection (see Fig. 1). Afterwards, we also briefly discuss related work on experience replay, since it plays a central role in our approach as well (see Fig. 1).
### _Episodic Control_
Model Free Episodic Control (MFEC) [6] is the first episodic control method and it is implemented as a table with untrainable features. Thus, it enjoys limited generalization over either a randomly projected or pre-trained feature space. Neural Episodic Control (NEC) [7] solves this limitation by maintaining a differentiable table to learn features while using the same update and action selection rule (shown in Eq. 2) as MFEC but achieves better performance. Since Monte-Carlo returns must be stored for each state-action pair, episodic
control methods cannot naturally deal with continuous action spaces. Continuous Episodic Control (CEC) [16] extends episodic memory to select actions directly in continuous action spaces. The _2M_ agent integrates MFEC as a core component, and partially uses it for action selection.
### _Episodic Memory for Learning_
While using episodic memory for control is fast, reinforcement learning methods might still be preferred in the long run due to their strengths. Many approaches use the returns stored in episodic memory to provide richer learning targets for reinforcement learning methods. Episodic Memory Deep Q Network (EMDQN) [8] uses returns in episodic memory to enhance learning targets in deep Q-learning. Episodic Memory Actor-Critic (EMAC) [9] and Generalizable Episodic Memory (GEM) [10] use episodic returns to enhance learning targets in actor-critic methods to solve tasks with continuous action spaces. Episodic Backward Update (EBU) [17] utilizes structural information of states that are in the same episode and executes a one-step backup for each state along the trajectory. The aforementioned methods take advantage of the richer learning signals of episodic memory, but the underlying neural network training still progresses slowly. Consequently, the benefit of the EC solution will not affect action selection quickly. In contrast, the _2M_ agent does use episodic memory for action selection, which may give it a fast head-start. As a second difference, in _2M_ the collected data is also used to train the _2M-RL_ agent (in contrast to previous methods).
### _Experience Replay_
Experience replay was originally proposed to improve data efficiency and break correlations of training data for off-policy reinforcement learning methods. Uniform sampling is the most naive and commonly used way to sample data from the replay buffer for training, where transitions are sampled at the same frequency as they were experienced regardless of their significance [18]. To address this limitation, Prioritized Experience Replay (PER) [18] prioritizes transitions that have larger TD errors during training, and samples these transitions more often because larger TD errors indicate there is more information to learn. Hindsight Experience Replay (HER) [19] is proposed for the multi-goal/goal-conditioned RL setting; it treats states that the agent actually achieves as desired goals and learns from failures. Since not all failures are equally important, Curriculum-guided Hindsight Experience Replay (CHER) [20] adaptively replays failed experiences according to their similarities to true goals. An event table [21] is defined as a buffer to store transitions related to important events. The authors theoretically proved that sampling more data from the event table will lead to better performance. Although the _2M_ agent does not employ any special sampling strategies to sample from the replay buffer, the data stored in the buffer is actually from two different sources (_2M-EC_ and _2M-RL_). We can thus vary sampling preference by pushing different amounts of data from different sources.
## IV Two-Memory Reinforcement Learning (_2M_)
We will now formally introduce the _2M_ framework. It consists of two 'memories', where one represents a fast learning memory (the episodic control agent) and the other represents a slow learning memory (the reinforcement learning agent; in our work we use 1-step (deep) Q-learning). Intuitively, we should prefer to use the fast (but sub-optimal) learning memory initially, then gradually switch to the slow (but with better asymptotic performance) learning memory. In this section, we will first motivate this intuition using a very simple example (Sec. IV-A). Then we explain the design of the proposed approach. Since we combine two different methods and want to switch between them, we also need to discuss a scheduling mechanism to decide when to switch (Sec. IV-B) and need to decide how to utilize the collected data (Sec. IV-C). The overall algorithm is detailed in Alg. 1.
### _A Motivating Example_
We first consider a very simple example domain, shown in Fig. 2, where circles with numbers represent different states and smaller circles represent actions. There are seven states in the state space, \(S=\{s^{1},s^{2},s^{3},s^{4},s^{5},s^{6},s^{7}\}\), and two actions (left and right) in the action space, \(A=\{a^{1},a^{2}\}\). Most transitions are deterministic, except in state \(s^{2}\): after taking action \(a^{2}\), there is a \(50\%\) probability that the next state is \(s^{5}\) and a \(50\%\) probability that it is \(s^{6}\), i.e. \(P(s^{2},a^{2},s^{5})=P(s^{2},a^{2},s^{6})=0.5\). Leaf nodes \((s^{4},s^{5},s^{6},s^{7})\) are terminal states where the agent receives a reward: \(R(s^{2},a^{1},s^{4})=+10\), \(R(s^{2},a^{2},s^{5})=-10\), \(R(s^{2},a^{2},s^{6})=+20\), and \(R(s^{3},a^{2},s^{7})=-20\). Color shading indicates the decisions that different agents will make after a single visit to every possible trajectory (orange for episodic control and grey for 1-step Q-learning). We can see that after the agent discovers all possible trajectories, the episodic control agent is already able to follow a sub-optimal trajectory starting from the initial state. However, the 1-step Q-learning agent only back-propagates the reward signal from terminal states to their predecessors, which means the 1-step Q-learning agent is still not able to make decisions in the initial state. The optimal policy for this MDP is to take \(a^{1}\) in \(s^{1}\) and take \(a^{1}\) in \(s^{2}\) as well. By definition, episodic control will converge to a sub-optimal policy that always takes \(a^{2}\) in \(s^{2}\) (shown in Eq. 3). With more updates, 1-step Q-learning will learn the optimal state-action value (shown in Eq. 4) for each state-action pair, which will result in the optimal policy.

Fig. 2: A motivating example with seven states in the state space and two actions in the action space. After the agent discovers all possible trajectories (from \(s^{1}\) to all possible terminal states), EC (orange) finds the sub-optimal solution while one-step off-policy RL (grey) only back-propagates reward signals to direct predecessor states and thus does not change the estimates at the root state. However, after many iterations, off-policy RL will converge to the true optimal values (preferring action left followed by action left), while EC will commit to a suboptimal solution (action left followed by action right). Colour shading indicates the preferred solution path for each method after a single visit to every possible path.
\[Q^{ec}(s^{2},a^{1})=10,Q^{ec}(s^{2},a^{2})=20 \tag{3}\]
\[Q^{*}(s^{2},a^{1})=10,Q^{*}(s^{2},a^{2})=5 \tag{4}\]
Thus, we conclude that episodic control is fast but sub-optimal, and reinforcement learning (1-step Q-learning in our case) is slow but optimal. There should exist an intersection point between these two different methods where reinforcement learning surpasses episodic memory. One might ask why we do not compare with (full-episode) Monte-Carlo backup. The 1-step back-up can utilize off-policy learning to learn from data collected by other policies, which is more data efficient. We therefore aim to combine the fast learning of EC with the data efficiency of 1-step off-policy learning.
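The values claimed in Eqs. 3 and 4 follow from a two-line computation; the plain-Python check below (with \(\gamma=1\)) only makes the arithmetic explicit.

```python
# Returns observable from s^2 in the example of Fig. 2 (gamma = 1):
# +10 via a^1; -10 or +20 via a^2, each with probability 0.5.
returns_a1 = [10.0]
returns_a2 = [-10.0, +20.0]

# Episodic control stores the best return seen so far (Eq. 3) ...
Q_ec = {"a1": max(returns_a1), "a2": max(returns_a2)}          # {'a1': 10.0, 'a2': 20.0}
# ... while Q-learning converges to the expected return (Eq. 4).
Q_star = {"a1": 10.0, "a2": 0.5 * (-10.0) + 0.5 * 20.0}        # {'a1': 10.0, 'a2': 5.0}

assert max(Q_ec, key=Q_ec.get) == "a2"      # EC commits to the sub-optimal action
assert max(Q_star, key=Q_star.get) == "a1"  # RL eventually prefers the optimal action
```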
### _Switching_
The previous example highlighted that EC may learn a solution faster, while RL may eventually converge to a better solution. Therefore, we ideally want a switching mechanism, that transitions from EC action selection to RL action selection. We need to decide both **when** and **how** to switch between the two different'memories'.
Regarding the 'when', we propose to decide which memory to use for every episode. This way, we ensure that action selection within the episode stays consistent. To determine which memory we will use in a particular episode, we define a probability \(p^{ec}\) of using EC in the next episode; _2M-RL_ is then selected with probability \(1-p^{ec}\). Since we favor the use of _2M-EC_ at the beginning and _2M-RL_ near the end, we gradually decay \(p^{ec}\) from a high value to a lower value according to the equation:
\[p^{ec}\gets p^{e}+(p^{s}-p^{e})\cdot e^{-i/\tau} \tag{5}\]
where \(p^{s}\) and \(p^{e}\) are the starting and end values of \(p^{ec}\), \(i\) is the number of steps the agent has taken so far, and \(\tau\) is the temperature that controls the speed of decay. A smaller \(\tau\) decays \(p^{ec}\) faster while a larger \(\tau\) decays it slower. In the ablation experiments, we also experiment with different scheduling mechanisms.
Fig. 3: The five MinAtar games used in the experiments (from left to right): Breakout, Space Invaders, Asterix, Seaquest, Freeway.
During evaluation, we exploit knowledge stored in both memories in a greedy manner. However, we still need to decide which memory to use during evaluation. The scores _2M-RL_ and _2M-EC_ obtained during training are used as metrics to evaluate their respective performances, and the memory with the higher recent score is selected for action selection during evaluation. More specifically, we keep track of the cumulative rewards of both memories during training, and the score (\(s^{rl}\) and \(s^{ec}\) for _2M-RL_ and _2M-EC_, respectively) is defined as the average cumulative reward over the last \(n\) episodes per memory method. We fix \(n=50\) in this work. Thus, the _2M_ agent will choose between the two memories using Eq. 6.
\[\begin{split} 2M\leftarrow\begin{cases}\text{2M-RL}&\text{if }s^{rl} \geq s^{ec},\\ \text{2M-EC}&\text{otherwise}\end{cases}\end{split} \tag{6}\]
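The scheduling and selection rules of Eqs. 5 and 6 amount to only a few lines of code. In the sketch below the values of \(p^{s}\), \(p^{e}\) and \(\tau\) are illustrative placeholders, not the tuned settings of Tab. I.

```python
import numpy as np

def p_ec(step, p_start=1.0, p_end=0.05, tau=20_000):
    """Decayed scheduling of the EC-selection probability (Eq. 5)."""
    return p_end + (p_start - p_end) * np.exp(-step / tau)

def pick_training_memory(step, rng=np.random):
    """Per-episode choice of the memory used for action selection during training."""
    return "EC" if rng.random() < p_ec(step) else "RL"

def pick_evaluation_memory(rl_returns, ec_returns, n=50):
    """Greedy choice between memories for evaluation (Eq. 6), using recent scores."""
    s_rl = np.mean(rl_returns[-n:]) if rl_returns else -np.inf
    s_ec = np.mean(ec_returns[-n:]) if ec_returns else -np.inf
    return "RL" if s_rl >= s_ec else "EC"
```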
### _Learning_
To foster mutual improvement of the two memories, the collected data is shared between _2M-EC_ and _2M-RL_. Regardless of which memory the data is collected by, it is used to update _2M-EC_ according to Eq. 2, ensuring that _2M-EC_ is always up to date. On the other hand, since we use off-policy reinforcement learning methods, data collected by _2M-EC_ can also be used to update _2M-RL_. This is implemented by maintaining a replay buffer, and all data collected during the training will be pushed into it. _2M-RL_ is then trained (Eq. 1) every \(x\) timesteps (we use \(x=10\)) by sampling minibatches from the buffer. It should be noted that the value of \(p^{ec}\) indirectly determines the amount of data in the replay buffer originating from _2M-EC_, and thereby the proportion of such data used for training _2M-RL_.
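Putting the pieces together, one possible high-level form of the training loop (cf. Alg. 1, which is not reproduced here) is sketched below. The helper `featurize`, the replay-buffer interface and the attribute `rl.gamma` are placeholders for whichever state featurization, buffer and off-policy learner are plugged into the framework.

```python
def train_2m(env, ec, rl, buffer, total_steps, train_every=10, batch_size=32):
    """Sketch of the 2M loop: per-episode memory choice, shared replay data."""
    step = 0
    while step < total_steps:
        memory = pick_training_memory(step)        # Eq. 5 (see the sketch above)
        state, _ = env.reset()
        episode, done = [], False
        while not done:
            key = featurize(state)                 # placeholder state featurization
            action = ec.act(key) if memory == "EC" else rl.act(state)
            state_next, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            episode.append((key, action, reward))
            buffer.add(state, action, reward, state_next, done)   # shared buffer
            if step % train_every == 0 and len(buffer) >= batch_size:
                rl.update(buffer.sample(batch_size))              # Eq. 1
            state = state_next
            step += 1
        # Monte-Carlo backup of the finished episode into the EC table (Eq. 2)
        g = 0.0
        for key, action, reward in reversed(episode):
            g = reward + rl.gamma * g
            ec.update(key, action, g)
```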
## V Experiments
We first present experimental results on a simple WindyGrid environment to demonstrate the efficiency of the proposed _2M_ agent. Subsequently, we perform extensive experiments on five MinAtar [22] games, namely Breakout, SpaceInvaders, Asterix, Seaquest, Freeway, as illustrated in Fig. 3. These games present diverse challenges, such as exploration (Seaquest and Freeway), classical control tasks (Breakout and SpaceInvaders), and so on. MinAtar is a simplified version of Atari Learning Environment [11], which simplifies representations of games while still capturing general mechanics. This allows the agent to focus on behavioural challenges rather than representation learning. Finally, an ablation study is conducted to investigate the crucial choices of the proposed method.
### _Proof of Concept Under Tabular RL Setting_
The _2M_ agent integrates tabular 1-step Q-learning with episodic control to solve a WindyGrid task (shown in Fig. 4). The WindyGrid instance we use here is a \(7\times 10\) grid, with a stochastic wind in the \(7^{th}\) row and a trap where the agent receives a large penalty. The agent needs to navigate from the initial state \((3,0)\) to the terminal state \((3,7)\). The results presented in Fig. 5 demonstrate the performance of the various agents. The left figure shows the returns the agent obtains during evaluation, while the right figure shows the learned Q-values summed across the whole state-action space. The orange line represents the _EC_ agent, which achieves quick learning and decent performance from the start but only has sub-optimal asymptotic performance. In contrast, the grey line corresponds to the _RL_ agent, which initially performs poorly but gradually improves and eventually converges to the optimal solution. The blue line corresponds to the _2M_ agent, which learns faster at the beginning compared to _RL_ and achieves better asymptotic performance compared to _EC_. The background colors indicate the use of different memories by the _2M_ agent during the evaluation, with the agent using _2M-EC_ (orange) for evaluation at the beginning and then switching to _2M-RL_ (grey) after approximately \(25k\) steps.
Since the memories _2M-RL_ and _2M-EC_ share data, and _2M-RL_ is trained on data collected by both of them, we investigate whether this approach has a positive impact compared to training solely on data collected by pure _RL_. As Varun et al. [21] demonstrated theoretically and experimentally, learning from data correlated to the optimal policy results in lower complexity (which is an intuitive result). If we assume the EC data is at least somewhat correlated to the optimal policy, we expect that the use of EC data in RL updates will lead to faster learning. We experimentally test this idea in Fig. 5. The right panel of Fig. 5 shows that using mixed data to train the _RL_ agent (_2M-RL_) can indeed result in faster learning of state-action values compared to using data solely collected by _RL_. This suggests EC data may actually improve RL sample efficiency.

Fig. 4: An illustration of the WindyGrid environment. The agent (smiley face) needs to navigate from the starting point to the given goal (star). There is a stochastic wind (black column with up arrows) that blows the agent up, and the agent gets a penalty when it reaches the trap (red cell).

Fig. 5: Results on WindyGrid under tabular settings. **Left**: Evaluation returns of different agents. EC learns very fast but converges to a local optimum, while RL learns slowly but converges to the global optimum. 2M learns faster (compared to RL) at the beginning and has better asymptotic performance (compared to EC). Colors in the background indicate the memory used during evaluation; orange and grey represent the use of _2M-EC_ and _2M-RL_, respectively. **Right**: Learned Q-values summed across the entire state-action space; _2M-RL_ learns Q-values faster than pure RL, which is trained using only data collected by RL.
### _Results on MinAtar Games_
Next, we perform experiments on five MinAtar games, which also vary in the amount of stochasticity. We optimize hyper-parameters over the values shown in Tab. I, where the settings used for the final results are highlighted in bold.
In the results presented in Fig. 6, the top row depicts the memory the _2M_ agent selected during evaluation, whereas the bottom row displays the returns the agents obtain. The performance of _EC_ (orange lines) and _DQN_ (grey lines) is consistent with the findings observed in the toy example, where _EC_ learns quickly at the beginning but converges to sub-optimal solutions while _DQN_ learns slowly but has better asymptotic performance. _2M_ agents (blue lines) perform the best (comparably to _EC_) at the beginning on all five games, and in most games they also exhibit better (or at least equally good) asymptotic performance compared to the other baselines (except maybe for Asterix). In Asterix, the _2M_ agent underperforms _EMDQN_ (green lines), but is still better than _DQN_ and _EC_.
During evaluation, most _2M_ agents demonstrate a preference for using _2M-EC_ initially and then switching to _2M-RL_ over time. In Freeway, the agent continually switches between _2M-EC_ and _2M-RL_, suggesting a mutually beneficial relationship between these two memories. However, in Breakout, the agent consistently favors using _2M-EC_. This may be attributed to the fact that stochastic dynamic transitions have a less pronounced impact on performance in this game, with the most critical time step being the one at which the paddle must bounce the ball. We hypothesize that _2M-RL_ can help _2M-EC_ escape local optima and that _2M-EC_ can then rapidly learn improved solutions and continue to progress. To test this hypothesis, we need to check whether a _2M_ agent that does not share collected data between both memories indeed 1) has worse performance, and 2) prefers RL in the long run, instead of getting to a more optimal EC solution.
Fig. 7 shows the performance of a _2M_ agent with data sharing enabled and disabled. With data sharing (2Mw/DaS), the agent mostly prefers to use EC during evaluation (top-left figure), as expected from Fig. 6. When we deactivate data sharing (2Mw/oDaS, i.e. each memory is only trained using data collected by that memory), the _2M_ agent only prefers _2M-EC_ at the beginning and then sticks to _2M-RL_ (bottom-left graph of the figure). The performance graph on the right of the figure confirms these results. Without data sharing, 2M does not reach the same performance (the blue line stays above the orange line). The circles show the performance of _2M-EC_ at the end of training for both methods. Without data sharing, _2M-EC_ (the orange circle in Fig. 7) converges to a sub-optimal solution. With data sharing enabled, _2M-EC_ (the blue circle in Fig. 7) achieves a much higher performance. This observation provides evidence to support the aforementioned notion that _2M-RL_ and _2M-EC_ complement each other.
### _Ablation Study_
In this section, we conduct two groups of ablation experiments to study the design choices in our work. First, we would like to investigate the impact of data sharing. Deactivating _data sharing_ (2Mw/oDS) results in _2M-RL_ being solely trained on data collected by _2M-RL_ and _2M-EC_ being solely trained on data collected by _2M-EC_. This transforms our proposed method into a 'scheduler' that schedules the training of two distinct models and uses the better one for evaluation. Second, we aim to study different ways of scheduling \(p^{ec}\). Specifically, we examine three different scheduling approaches: decayed scheduling (2Mw/DS), constant scheduling (2Mw/CS) and increased scheduling (2Mw/IS).
Intuitively, data sharing can be helpful since each memory will get different data from the other memory and, hopefully, they can also learn from each other. It should result in better performance compared to only training each agent on its own collected data. In fact, such sharing has different impacts on different games, as shown in Fig. 8. Data sharing improves the _2M_ agent's performance in Asterix, but harms it in Seaquest. To understand the reasons for these opposite impacts, we separately track the performance of _2M-EC_ and _2M-RL_ during training; the final performance is represented by circles and triangles in Fig. 8, where a larger size represents more use of _2M-EC_ (larger \(p^{s}\)) during training. In Asterix, data sharing pulls down the performance of _2M-RL_ (blue triangles are always below orange ones), and training _2M-RL_ on more data collected by _2M-EC_ leads to even worse performance (the large blue triangle is far below the large orange one), indicating that data collected by _2M-EC_ is actually harmful for training _2M-RL_ in this game. Conversely, in Seaquest, data sharing improves the performance of _2M-EC_ (blue circles are always above orange ones), which again indicates that _2M-RL_ can help _2M-EC_ escape from local optima. However, overusing data collected by _2M-EC_ to train _2M-RL_ also leads to worse performance (the large blue triangle is far below the large orange one). All in all, _2M-RL_ should not use too much data collected by _2M-EC_. Although we show that such training is helpful for learning the optimal state-action values when the collected data is correlated to the optimal policy, this assumption is not always satisfied. Meanwhile, _2M-RL_ can help _2M-EC_ escape from local optima, but not always. We presume it helps when stochastic transitions have fewer negative impacts; then _2M-EC_ is able to pick up the improved solutions provided by _2M-RL_. For example, in Asterix, there are many enemies and the agent dies immediately when it touches them, meaning the agent is more likely to die if a single wrong (sub-optimal) action is made. Therefore, once _2M-EC_ discovers a sub-optimal trajectory with a very high return (luckily managing to survive for a long time, like ending up in \(s^{6}\) in the motivating example in Fig. 2) with a tiny probability, it will stick to it. Then it is difficult for _2M-EC_ to escape from this local optimum even though improved solutions are provided. We leave more systematic investigations of this phenomenon for future work.
Next we examine how different scheduling mechanisms affect performance. The _2M_ agent with decayed scheduling (_2Mw/DS_) initially gives a higher preference to using _2M-EC_ for data collection during training, then gradually shifts towards _2M-RL_. On the contrary, increased scheduling (_2Mw/IS_) starts with a strong preference for using _2M-RL_, then gradually switches to _2M-EC_. Constant scheduling maintains a constant preference for _2M-EC_ and _2M-RL_ throughout the training process. Given that _2M_ agents with _2Mw/DS_ perform best or among the best on all games, we only present the performance on Seaquest as a representative in Fig. 9. The results demonstrate that the agent with _2Mw/DS_ clearly outperforms the other two scheduling mechanisms.
Our experimental results demonstrate that the _2M_ agent surpasses its constituent components, i.e. a pure reinforcement learning approach (Q-learning in tabular settings and DQN [23] in deep RL settings) and a pure episodic memory approach (MFEC [6] in both tabular and deep RL settings). Furthermore, our proposed method exhibits better performance than a state-of-the-art memory-augmented reinforcement learning method, EMDQN [8]. Since the proposed framework allows for the integration of various pure EC and RL methods, we only compare the performance of the _2M_ agent with those methods that are integrated into it in this work. Therefore, we do not compare our results with other pure episodic memory and reinforcement learning approaches. Lastly, we conduct ablation studies to investigate the impact of two essential design choices: the utilization of data sharing for the mutual improvement of the memories, and the scheduling of \(p^{ec}\) for switching between the two memories.

Fig. 6: Results on MinAtar games: Breakout, SpaceInvaders, Asterix, Seaquest, Freeway. The top row shows the relevant memory that the _2M_ agent chooses for evaluation: orange represents _2M-EC_ and grey represents _2M-RL_. The bottom row shows the returns obtained during training (running independent evaluation episodes). The _2M_ agent either outperforms or is on par with all baseline comparisons (EC, DQN and EMDQN). EC generally learns fast, but then reaches a plateau. _2M_ learns equally fast initially, but then adopts the better long-term performance of RL.

Fig. 7: **Left**: Switching schedule of the _2M_ agent with data sharing (top) and without data sharing (bottom) during evaluation. **Right**: Returns the two different _2M_ agents are able to obtain in evaluation. The final performance of _2M-EC_ is represented by coloured circles. We see that with data sharing, EC reaches a return of 6, while without data sharing, EC only manages to reach a return of 3.

Fig. 8: Performance of 2M agents with and without data sharing on Asterix (left) and Seaquest (right). Results are averaged over 2 different settings, one with a larger \(p^{ec}\) and one with a smaller \(p^{ec}\). Data sharing has different impacts on these two games. Circles represent the final performance of _2M-EC_ while triangles represent the performance of _2M-RL_. A larger size means a larger value of \(p^{ec}\) during training.
## VI Conclusion and Future Work
In this work, we proposed a novel approach, termed the **Two-Memory (\(2M\)) reinforcement learning** agent, which integrates two distinct learning methods, namely non-parametric episodic memory and (parametric) reinforcement learning. This approach capitalizes on the advantages of both methods by leveraging episodic memory's rapid start and reinforcement learning's superior generalization and optimality to create an agent that achieves higher data efficiency than the individual methods. Experimental results show that \(2M\) matches or outperforms both DQN and EMDQN on a range of MinAtar games. In addition, we show that these two distinct memory modules may complement each other, leading to even better final performance.
For future work, it would be interesting to automatically determine when data sharing is useful, for example based on the discrepancy between both memories. Another clear direction to improve the \(2M\) agent could be an adaptive scheduling mechanism to switch between _2M-EC_ and _2M-RL_, instead of the hand-designed decay schedules used in this work. Moreover, combining stronger episodic memory methods (such as NEC) with off-policy reinforcement learning methods could lead to further improvements in performance. In our work, we uniformly replay data from the replay buffer to train the _2M-RL_ component, but a more sophisticated replay strategy, such as prioritized experience replay (PER), may further enhance performance. Overall, our proposed approach provides a general framework for combining two fundamentally different approaches to sequential decision-making, exploiting their respective strengths.
2305.16918 | Confinement and Deconfinement in Gauge Theories: A Quantum Field Theory | A. P. Balachandran | 2023-05-26T13:31:39Z | http://arxiv.org/abs/2305.16918v2

# Confinement and Deconfinement in Gauge Theories
###### Abstract
After a brief discussion of small and large gauge transformations and the nature of observables, we discuss superselection sectors in gauge theories. There are an infinity of them, classified by large gauge transformations. Gauge theory sectors are labelled by the eigenvalues of a complete commuting set (CCS) of these transformations.
In QED, the standard chemical potential is one such operator generating global U(1). There are many more given by the moments of the electric field on the sphere at infinity. In QCD, the CCS are constructed from the two commuting generators spanning a Cartan subalgebra.
We show that any element of a large gauge transformation can be added to the standard Hamiltonian as a chemical potential without changing field equations and that in QCD, they lead to confined and deconfined phases. A speculation about the physical meaning of these chemical potentials is also made.
**Comment:** This note is based on seminars by the author. So only a limited number of references are given, from which further literature can be traced.
A paper is under preparation.
## Introduction
We propose a new mechanism for confinement-deconfinement transitions in gauge theories. It is based on a chemical potential in the Hamiltonian which was discussed briefly by Balachandran et al. [1] and which is associated with 'large gauge transformations'. It defines superselection sectors and, in non-abelian gauge theories, their evolution as well, and it suggests several new phenomena.
Let us first review the basics of gauge transformations as formulated by our group and reviewed in [2]. In later sections, we will develop the announced results.
The spacetime, unless otherwise stated, is 3+1-dimensional Minkowski space \(M^{3,1}\), and the metric is \(\mathrm{diag}(1,-1,-1,-1)\). The gauge group \({\cal G}\) is the group of maps from spacetime to a compact Lie group \(G\). On a spatial slice, if \(g\) belongs to \({\cal G}\), then \(g(\vec{x})\) approaches an element \(h(\hat{x})\) of \(G\) as \(|\vec{x}|\) goes to infinity.
The case where the limit of \(g(\vec{x})\) as \(|\vec{x}|\) goes to infinity does not exist is not considered here.
The Lie algebra-valued connection will be denoted by \(A_{\mu}\) and its conjugate electric field will be \(E^{\mu}\). If \(\lambda_{\alpha}\) form a (Hermitian) basis for the Lie algebra of \(G\) with the normalisation \({\rm tr}\lambda_{\alpha}\lambda_{\beta}=2\delta_{\alpha,\beta}\) in a defining representation, an \(N\times N\) one for \(SU(N)\), then we can write \(A_{\mu}=A_{\mu}^{\alpha}\lambda_{\alpha}\) and \(E_{\mu}=E_{\mu}^{\alpha}\lambda_{\alpha}\). For \(U(1)\), we can replace \(\lambda_{\alpha}\) by the electric charge.
In a gauge theory, an infinitesimal gauge transformation can be written on a spatial slice as
\[Q(\Xi)=\int_{-\infty}^{+\infty}d^{3}x\langle\left(D_{i}E_{i}+J_{0}\right)\Xi\rangle\]
where
a) \(\langle\cdots\rangle\) indicates the trace over Lie algebra indices and \(D_{i}\) is the covariant derivative. In the \(U(1)\) case, there is no need for a trace and \(D_{i}\) becomes the ordinary derivative \(\partial_{i}\).
b) \(\Xi\) is a Lie algebra-valued test function. In the non-abelian case, we can write it at a point \(\vec{x}\) as \(\Xi^{\alpha}(\vec{x})\lambda_{\alpha}\) where the functions \(\Xi^{\alpha}\) approach functions of \(\hat{\vec{x}}=\vec{x}/|\vec{x}|\) as \(|\vec{x}|\) goes to infinity. The test function can be any smooth function so that all local gauge transformations can be generated.
If all \(\Xi^{\alpha}\) go to zero at infinity adequately fast, we can partially integrate without surface terms and write
\[Q(\Xi)=\int_{-\infty}^{+\infty}d^{3}x\langle(-E_{i}D_{i}\Xi+J_{0}\Xi)\rangle\]
and that is the standard Gauss law. So it vanishes on the quantum states and maps to zero in the GNS construction.
But if the \(\Xi\) do not vanish at infinity, \(Q(\Xi)\) need not vanish on quantum states. They generate the Sky group of Balachandran and Vaidya. In either case, regardless of the asymptotic behaviour of \(\Xi\), they commute with the algebra of all observables \({\cal A}\) due to locality.
We have argued for this fact many times elsewhere (Chern-Simons papers, [2]). This result has many consequences, as we outline below.
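The role of the asymptotic behaviour of \(\Xi\) in the two cases above can be made explicit by a partial integration that keeps the surface term:

\[Q(\Xi)=\int_{-\infty}^{+\infty}d^{3}x\langle\left(D_{i}E_{i}+J_{0}\right)\Xi\rangle=\int_{-\infty}^{+\infty}d^{3}x\langle(-E_{i}D_{i}\Xi+J_{0}\Xi)\rangle+\oint_{S^{2}_{\infty}}dS_{i}\,\langle E_{i}\Xi\rangle.\]

When \(\Xi\) vanishes at infinity, the surface integral drops out and \(Q(\Xi)\) reduces to the Gauss law constraint; when \(\Xi\) approaches a non-zero function of \(\hat{x}\), the surface term survives, and it is this term that distinguishes the Sky generators from the pure Gauss law constraints.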
## Novel Chemical Potentials
Let \(H_{0}\) be the standard full gauge theory Hamiltonian without any sort of chemical potential which generates equations of motion. Pick a \(\Xi\) and a constant \(\mu\) with energy dimension and consider the Hamiltonian
\[H=H_{0}+\mu Q(\Xi):=H_{0}+H_{1}.\]
The new term will not affect the equations of motion, as it commutes with all local observables: it changes the fields in the equations of motion only by local gauge transformations. It is the non-abelian chemical potential.
Non-abelian chemical potentials have been discussed in previous work on finite temperature field theories [3].
Later we will interpret it as the coupling of the experimental setup with the system and suggest a scheme to resolve the observation problem in quantum physics. For \(U(1)\) and constant \(\Xi\), it becomes the standard chemical potential. The constant \(\mu\) sets the scale at which \(Q(\Xi)\) becomes significant.
There is no point in making \(\mu\) a function of \(\vec{x}\), as the \(\Xi\) are already spatial functions.
The exponentials of \(Q(\Xi)\) for different test functions generate the Sky group \({\cal G}\) and its group algebra \(\hat{\cal G}\).
For non-abelian target group \(G\) such as \(SU(3)\), this algebra is non-commutative since
\[[Q(\Xi_{1}),Q(\Xi_{2})]=Q([\Xi_{1},\Xi_{2}]).\]
As Dirac taught us, an irreducible representation of \({\cal A}\) is characterised by a vector state diagonal in a complete commuting set. This vector state may or may not be preserved under the time evolution induced by the term \(\mu Q(\Xi)\) in \(H\) (\(H_{0}\) will not affect it). We will interpret the former as a colour-deconfining state and the latter as a colour-confining state, for reasons given below. In the latter case, for generic situations, the orbit of the vector state is ergodic, as we shall also see below.
The term \(Q(\Xi)\) is already present in the Hamiltonian when it is derived from the gauge theory Lagrangian by Legendre transformation, as was observed in an earlier paper (Balachandran, Nair, Pinzul, Reyes-Lega, Vaidya). It is the term
\[\int_{-\infty}^{+\infty}d^{3}x\,\langle(-D_{i}A_{0}E_{i}+A_{0}J_{0})\rangle\]
where \(\langle\cdots\rangle\) denotes the trace over the Lie algebra indices as usual and \(J_{0}\) is the Lie algebra valued charge density.
It is usually discarded by treating \(A_{0}\) as a classical field vanishing fast towards infinity and doing a partial integration. Then it becomes Gauss's law and hence zero as an operator.
But \(A_{0}\) need not vanish at infinity. It has become \(\Xi\) in the current notation.
The extra term in \(H\) will not spoil the commutativity of the spatial translation operators with \(H\). But with \(H\) as \(P_{0}\) and with the usual unaltered spatial translation generators, \(P_{\mu}+\mu Q(\Xi)\delta_{\mu,0}\) will not transform as a four-vector under Lorentz transformations: Lorentz symmetry is broken at the operator level. But as the extra term does not affect the equations of motion, the latter will maintain their covariance.
However, it has long been known that Lorentz invariance is broken by infrared effects and gauge transformations [Frohlich, Morchio and Strocchi; Mund, Rehren and Schroer, and references therein], so that there is no need for concern about Lorentz breaking from this chemical potential.
## Dynamics from \(H\)
This is determined by the expectation values in the vector states of the observables. The latter in turn are classified by the eigenvalues of the superselection operators on the vector states. In a particular superselection sector,
a complete set of commuting superselection operators (complete commuting set, CCS) is diagonal.
The CCS is an abelian subalgebra, preferably a maximal abelian subalgebra, of the Sky algebra with generators \(Q(\Xi)\). For example, in QED, the conventional choice for \(Q(\Xi)\) is a multiple of the electric charge operator \(Q\). The test function \(\Xi\) in this case goes to a constant on the celestial sphere \(S^{2}\) (with coordinates \(\hat{x}\)).
But it can also go to any smooth function of \(\hat{x}\). Then the set of superselected operators becomes infinite dimensional.
In QCD and in non-abelian gauge theories, the CCS is much richer. First we choose a CCS from the enveloping algebra of the \(SU(3)\) Lie algebra. It is spanned by its quadratic and cubic Casimir operators, a Casimir of an 'isospin' \(SU(2)\subset SU(3)\), a 'third component' \(I_{3}\) of this isospin and the hypercharge \(Y\) commuting with \(I_{3}\). Then in addition to the above Casimirs, we can diagonalise the commuting Sky operators \(Q(\tilde{\Xi})\) where \(\tilde{\Xi}\) is a linear combination of \(I_{3}\) and \(Y\) with coefficients becoming functions of \(\hat{x}\) at infinity.
For now, let us consider the case where the coefficient functions are constants at infinity. Then the superselection sector is labelled by \(I_{3}\) and \(Y\) in an irreducible representation ( IRR) of \(SU(3)\) such as a triplet or octet. But a generic \(\mu Q(\Xi)=H_{1}\) will not commute with \(Q(\tilde{\Xi})\) and will change the superselection sector. Local observables and evolution by the standard \(H_{0}\) will preserve it, but not so the chemical potential.
This is the new feature in QCD: a generic non-abelian chemical potential will not preserve the superselection sector. We will discuss the orbit of the superselection sector below, indicate its ergodic features and argue that the mean values of coloured observables in this situation are all 0. This result substitutes for colour confinement. But the expectation values of Casimir operators of \(G\) are constants of motion for \(H\) and can certainly be measured. Hence the representation of the coloured sector, whether it is \(\mathbf{3},\mathbf{3}^{*},\mathbf{8},\cdots\), can be determined.
But we can also choose a \(\Xi\) commuting with \(\tilde{\Xi}\), the diagonalised CCS. It will affect evolution by an \(\hat{x}\)-dependent phase, but will not change the superselection sector at all. We can say that such states characterise colour-deconfined phases.
We need explicit examples to which we now turn.
## The Colour-Confined and Deconfined Phases
The title is misleading. Presumably in the popular literature, confinement is supposed to mean that coloured states are not in the domain of the Hamiltonian. It is not clear if numerical work based on Wilson lines accomplishes such a result.
What emerges from our analysis leads to a related result: the mean values of coloured observables averaged over the interaction time are 0 in this phase. All the same, as Casimirs are colour singlets, they can be determined as previously remarked.
Here is a worked ergodic example for \(SU(3)\). The superselection sector is one with \(\lambda_{3,8}\) (and Casimirs) diagonal and in the triplet representation. Their joint eigenvalues are \((1,1)\), \((-1,1)\) and \((0,-2)\). Any one of them gives a density matrix on the coloured observables. Their time evolution is given by the chemical potential, assumed time independent.
If the picked vector state is an eigenstate of the chemical potential, it will be preserved in time, and coloured expectation values too will be preserved in time : these are the sectors with colour deconfinement.
So let us pick a chemical potential which does not preserve them, say
\[H_{1}=Q(\Xi),\Xi=\mu_{1}\lambda_{1}+\mu_{8}\sqrt{3}\lambda_{8},\qquad\mu_{1}/ \mu_{8}\quad\mbox{irrational}.\]
The \(\mu\)'s have energy dimension which sets the scale of this term. What sets this scale? It seems to be the same effect which sets the scale of the standard chemical potential in finite temperature field theories.
This operator evolves colour (but not local observables) and is our \(H_{1}\).
In this example, the two terms in \(H\) commute, making computations easy. But \(|(0,-2)\rangle\) is an eigenstate of \(H_{1}\), so ergodicity can show up only in the remaining states. But that is fine to illustrate the phenomenon.
A calculation shows that
\[e^{it\Xi}=e^{it\mu_{1}\lambda_{1}}e^{it\mu_{8}\sqrt{3}\lambda_{8}},\]
which gives the action of \(e^{itH_{1}}\) on a chosen vector state.
The vector \((0,0,1)\) in the basis indicated above is an eigenstate of \(H_{1}\) and defines a deconfined phase. \(H_{1}\) has eigenvalue \(-2\mu_{8}\) and \(e^{itH_{1}}\) has periodicity \(\pi/\mu_{8}\). But under time evolution, the other two vectors never come back to their starting values, as claimed.
That is because \(\frac{\mu_{1}}{\mu_{8}}\) is irrational by choice. So too are the ratios of periods.
If the ratios are rational, the orbits of the vector states are periodic.
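To see this explicitly in the defining representation, note that \(\sqrt{3}\lambda_{8}=\mathrm{diag}(1,1,-2)\) while \(\lambda_{1}\) mixes only the first two basis vectors, so that

\[e^{it\Xi}=e^{it\mu_{8}}\begin{pmatrix}\cos\mu_{1}t&i\sin\mu_{1}t&0\\ i\sin\mu_{1}t&\cos\mu_{1}t&0\\ 0&0&e^{-3it\mu_{8}}\end{pmatrix}.\]

Either of the first two basis vectors returns to itself only if both \(\mu_{1}t\) and \(\mu_{8}t\) are integer multiples of \(\pi\), which never happens for \(t>0\) when \(\mu_{1}/\mu_{8}\) is irrational, while \((0,0,1)\) only acquires the phase \(e^{-2it\mu_{8}}\) with period \(\pi/\mu_{8}\).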
The strong interaction time is about \(10^{-23}\) secs. No experiment can probe the evolution of the system during this time. What is observed is the average of an operator during this time. But such an average is expected to be zero, especially if the scales \(\mu_{i}\) are large compared to QCD scales.
For the vector state, we can also choose an eigenstate of \(H_{1}\) such as the one given by the choice \((0,0,1)\). It changes just by a phase under time evolution and hence the corresponding density matrix does not change at all. Expectation values of colored observables are then time-independent and there is no theoretical issue in observing them. So we call them colour-deconfined phases.
## Open Question
It is reasonable to ask if the colour-deconfined phases are attractors for the confined ones. That will involve perturbing them and observing if they will relax back to the deconfined phase.
But in quantum theory, a unitary perturbation of short duration seems inadequate for this task. After it is switched off, the system will keep evolving unitarily after the switch-off time from wherever it finds itself. A perturbation with a POVM also seems not to help. A better formulation of the question seems needed. Maybe one should switch on the perturbation adiabatically and switch it off in the same way.
## An Interpretation of the Chemical Potential
One supposes that it has an interpretation similar to the standard one, \(\lambda\times N\), where \(N\) is the number operator. In QED, it is the generator of \(U(1)\) gauge transformations, so it commutes with all observables; adding it to the QED Hamiltonian \(H_{0}\) as the \(H_{1}\) above will not affect the equations of motion. It is like the abelian \(Q(\Xi)\) of QED discussed above. It is in fact what one gets for the choice \(\Xi(x)=\Lambda\) for all \(x\).
When \(\Xi\) has an \(\hat{x}\)-dependent limit for \(|\vec{x}|\) going to \(\infty\), \(Q(\Xi)\) generates the Sky group, yielding more general chemical potentials. One can consider \(\Xi\) approaching definite combinations of spherical harmonics at infinity, getting any number of novel chemical potentials. Just like the phase diagram of temperature \(T\) versus \(\lambda\), one can also consider multidimensional phase plots.
In the non-abelian case, the nature of the chemical potential changes : the non-abelian coloured sector is labelled by the eigenvalues of a basis of a Cartan subalgebra of the Sky group.
But \(H_{1}\) can be any element of the Sky group and need not commute with the diagonalised element. So in general it will generate an ergodic orbit in the space of states as discussed above.
Note that the evolution is via superselection sectors. It is like the problem studied in [4] and the more recent one in [5].
Elsewhere we have argued that quantum observations are done by coupling to experimental operators which form an _abelian_ algebra. We next show that the superselection sectors naturally generate an abelian algebra and that our states from the CCS are on this abelian algebra. It is this abelian algebra that is observed.
When \(H_{1}\) is switched off, the superselection sector no longer evolves and the expectation values of the abelian algebra must be giving information about the system. The suggestion here is similar to that of Fiorini and Immirzi, Wightman and probably others like Bohr and Landsman.
Therefore we can suggest that \(H_{1}\) is the evolution Hamiltonian for superselection sectors and reflects the coupling of the experimental apparatus to the system.
Thus let us suppose that the experiment is initiated at time \(t=0\). At this time, when the state is mixed in colour with zero mean value for coloured observables, which the experiment knows, \(H_{1}\) makes its appearance in \(H\). After a time \(T\), when the experiment ceases and \(H_{1}\) becomes 0, the evolved mixed state no longer evolves. Colour singlets can then be measured, and the evolution of their expectation values from 0 to \(T\) should reveal information about the operator \(H_{0}\) and what has happened to the original state.
It has been assumed that the original state is mixed in colour. If it is pure in colour and evolves at time \(T\) to a pure state, that would be appropriate for a colour-deconfined state.
We have to show the emergence of a commutative algebra from superselection operators. The conjectures regarding them now follow.
## The Commutative Algebra at Infinity in QED
Let us first recall how to create a charge \(q\) state localised at \(e_{\infty}\) in QED from the vacuum vector \(|\Omega\rangle\), assumed to be invariant under all gauge transformations.
Let \(W(x,e)\) be the Wilson line from \(\vec{x}\) to infinity along \(e\); it is \(\exp(iq\phi(x,e))\) in terms of the escort field \(\phi\).
Let \(\psi(x)\) be any charge \(q\) field. Then
\[\exp(iq\phi(x,e))\psi(x)|\Omega\rangle\qquad(1)\]
has no local charges and a charge \(q\) blip at \(e_{\infty}\). That is the case for any \(\vec{x}\).
If \(\pi_{q,e}({\cal A})\) is the charge \(q\) representation of the algebra \({\cal A}\) of local observables realised on (1), it acts on (1) by left multiplication.
The vector states of \(H_{q,e_{\infty}}\) are \(\pi_{q,e}(a)\exp(iq\phi(x,e))\psi(x)|\Omega\rangle\) for \(a\) in \({\cal A}\).
Under a large gauge transformation \(exp(iq\Lambda)\), every vector in this sector transforms with the fixed phase \(exp(iq\Lambda(e_{\infty}))\) where this has been defined as
\[exp(iq\lim_{\tau\rightarrow\infty}\Lambda(x+\tau e)).\]
So, this charge \(q\) superselection sector is labelled by the representation \(Q(\Xi\rightarrow\Xi(e_{\infty}))\) where the latter is defined as in above for \(\Lambda\).
For
\[exp(iq\phi(x,e^{\prime}))\psi(x)|\Omega\rangle,\qquad e^{\prime}\neq e,\qquad \qquad(2)\]
since there are certainly \(\Xi\) with different values at \(e\) and \(e^{\prime}\), no small gauge transformation or local observable can map the superselection sector defined by (1) to that defined by (2).
Hence
\[\pi_{q,e}(a)\exp(iq\phi(x,e))\psi(x)|\Omega\rangle\qquad(3)\]
\[\pi_{q,e^{\prime}}(b)\exp(iq\phi(x,e^{\prime}))\psi(x)|\Omega\rangle\qquad(4)\]
for any \(a,b\) in \({\cal A}\) are eigenvectors with different eigenvalues for such \(Q(\Xi)\) and hence orthogonal. The Hilbert spaces \(H_{q,e}\) and \(H_{q,e^{\prime}}\) built on (3) and (4) are orthogonal.
But \(e\)'s can be any points on a de Sitter space and take continuous values.
So we conclude that the direct integral over \(e\) of these Hilbert spaces is non-separable.
(See https://math.stackexchange.com/questions/2163261/non-seperable-hilbert-spaces.)
Let \(P_{q,e}\) be the 'projection' operator for \(H_{q,e}\), like \(|x\rangle\langle x|\) for position in quantum mechanics. Then
\[P_{q,e}P_{q,e^{\prime}}=0\qquad\mbox{if}\ \ e\neq e^{\prime},\qquad(5)\]
The set of \(e\)'s form a de Sitter space.
Hence
\[P_{q,e}P_{q,e^{\prime}}=\delta(e^{\prime},e)\,P_{q,e}\qquad(6)\]
where \(\delta(e^{\prime},e)\) is the Dirac delta function on de Sitter space defined by
\[\int de\ f(e)\delta(e^{\prime},e)=f(e^{\prime})\qquad(7)\]
where \(de\) is the Lorentz invariant volume on the de Sitter space and \(f\) is any smooth function thereon.
It follows that
\[I(f)=\int de\ f(e)P_{q,e}\qquad(8)\]
obeys the abelian algebra
\[I(f)I(g)=I(f\,g)\qquad(9)\]
where \(f\,g\) is defined by the pointwise multiplication of \(f\) and \(g\).
With \(*\) as complex conjugation and the _sup norm_, we get a commutative C*-algebra \({\cal C}\) whose spectrum is the de Sitter space. Elements of this algebra label the superselection sectors associated with a charge \(q\) sector.
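The relation (9) is easy to check in a discretised toy model: replace the directions at infinity by finitely many bins and each \(P_{q,e}\) by a diagonal projection with mutually disjoint support, so that the integral over \(e\) becomes a finite sum. The sketch below is only such an illustration; the code and all names in it are ours and not part of the original argument.

```python
import numpy as np

# Discretised toy model: 5 "directions at infinity" e_0..e_4, each represented
# by a rank-1 diagonal projection with mutually disjoint support, mimicking
# P_{q,e} P_{q,e'} = 0 for e != e'.
n_dirs = 5
projections = [np.diag([1.0 if i == k else 0.0 for i in range(n_dirs)])
               for k in range(n_dirs)]

def I(f):
    """Discrete analogue of I(f) = \\int de f(e) P_{q,e} (integral -> finite sum)."""
    return sum(f[k] * projections[k] for k in range(n_dirs))

rng = np.random.default_rng(0)
f = rng.normal(size=n_dirs)
g = rng.normal(size=n_dirs)

# Commutative algebra relation: I(f) I(g) = I(f*g) with pointwise product of f and g.
assert np.allclose(I(f) @ I(g), I(f * g))
assert np.allclose(I(f) @ I(g), I(g) @ I(f))   # commutativity
print("I(f)I(g) = I(fg) holds in the discretised model")
```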
## The Commutative Algebra at Infinity in QCD
The remarks above can be adapted to any non-abelian gauge theory.
### Epilogue
There are many issues that remain open and to be addressed.
## Acknowledgement
I have benefited from discussions with Manolo Asorey, Bruno Carneiro da Cunha, Arshad Momen, Parameswaran Nair, Sasha Pinzul, Amilcar Queiroz, Babar Qureshi and Sachin Vaidya. Much of this work was done at the Institute of Mathematical Sciences. I thank my hosts Ravindran, Sanatan Digal and other colleagues there for their wonderful hospitality.
|
2307.00903 | Magnetic lump motion in saturated ferromagnetic films | In this paper, we study in detail the nonlinear propagation of magnetic
soliton in a ferromagnetic film. The sample is magnetized to saturation by an
external field perpendicular to film plane. A new generalized (2+1)-dimensional
short-wave asymptotic model is derived. The bilinear-like forms of this
equation are constructed, and exact magnetic line soliton solutions are
exhibited. It is observed that a series of stable lumps can be generated by an
unstable magnetic soliton under Gaussian disturbance. Such magnetic lumps are
highly stable and can maintain their shapes and velocities during evolution or
collision. The interaction between lump and magnetic soliton, as well as
interaction between two lumps, are numerically investigated. We further discuss
the nonlinear motion of lumps in ferrites with Gilbert-damping and
inhomogeneous exchange effects. The results show that the Gilbert-damping
effects make the amplitude and velocity of the magnetic lump decay
exponentially during propagation. And the shock waves are generated from a lump
when quenching the strength of inhomogeneous exchange. | Xin-Wei Jin, Shi-Jie Shen, Zhan-Ying Yang, Ji Lin | 2023-07-03T09:55:53Z | http://arxiv.org/abs/2307.00903v1 | # Magnetic lump motion in saturated ferromagnetic films
###### Abstract
In this paper, we study in detail the nonlinear propagation of magnetic soliton in a ferromagnetic film. The sample is magnetized to saturation by an external field perpendicular to film plane. A new generalized (2+1)-dimensional short-wave asymptotic model is derived. The bilinear-like forms of this equation are constructed and exact magnetic line soliton solutions are exhibited. It is observed that a series of stable lumps can be generated by an unstable magnetic soliton under Gaussian disturbance. Such magnetic lumps are highly stable and can maintain their shapes and velocities during evolution or collision. The interaction between lump and magnetic soliton, as well as interaction between two lumps, are numerically investigated. We further discuss the nonlinear motion of lumps in ferrites with Gilbert-damping and inhomogeneous exchange effects. The results show that the Gilbert-damping effects make the amplitude and velocity of the magnetic lump decay exponentially during propagation. And the shock waves are generated from a lump when quenching the strength of inhomogeneous exchange.
## I Introduction
The propagation of electromagnetic waves in ordered magnetic materials, especially in a ferromagnetic medium, plays a vital role in faster and higher-density storage technologies [1; 2; 3]. In particular, the magnetic soliton (MS), which exists in both ferro- and antiferro-magnets, is becoming a very promising information carrier because of its particle-like behavior and maneuverability [4; 5; 6; 7; 8; 9]. In the past few decades, a wide range of soliton-type propagation phenomena has been theoretically predicted [10; 11; 12; 13], and some of them have been confirmed experimentally [14; 15].
Indeed, wave propagation in ferromagnetic media is well known to be a highly nonlinear problem. A complete description of all types of nonlinear excitations is governed by the Maxwell equations coupled with the Landau-Lifschitz equation. At present, a fully nonlinear theory has not been developed, but the linear theory for sufficiently small amplitudes has been established and validated experimentally [16]. In order to obtain results valid in nonlinear, or at least weakly nonlinear, regimes, one has to resort to intermediate models (by introducing a small perturbative parameter related to the soliton wavelength) [17]. These models include the long-wave model [18; 19; 20], the modulational asymptotic model [21], and the short-wave model [22; 23; 24; 25]. Both the long-wave model and the modulational asymptotic model are mainly used to explain and predict the behavior of large-scale phenomena owing to their long-wave-type approximation condition [26]. However, this condition is not always applicable because magnetic materials and devices are becoming ever more refined and sophisticated. Moreover, the main practical interest of ferrites is that they propagate microwaves [27; 28]. From the viewpoint of applied physics, the short-wave-type approximation is therefore much more relevant to available experiments than the former ones.
Since Kraenkel et al. first proposed the short-wave model [29], quite a few related nonlinear evolution equations have been derived, which belong to the Kraenkel-Manna-Merle (KMM) system [22; 30; 31; 32]. Some significant works have been devoted to finding and explaining different excitation patterns of ferromagnetic insulators. For the (1+1)-dimensional KMM system, the existence of multi-valued waveguide channel solutions has been verified, and the nonlinear interaction properties of the localized waves were investigated alongside the depiction of their energy densities [22]. By applying the Hirota bilinear transformation method, the one- and two-soliton solutions were constructed while studying in detail the soliton scattering properties [23]. This system is also solvable using the inverse scattering method [25]. It is noteworthy that this system possesses loop-soliton and spike-like soliton solutions [33; 34], and the magnetic loop-soliton dynamics have been extensively studied [35; 36; 37]. The propagation of electromagnetic waves in higher-dimensional ideal ferromagnets has also been studied, corresponding to the (2+1)-dimensional KMM system [38; 39; 26; 31]. The analytical one-line-soliton solution as well as its transverse stability have been reported [26]. It has been shown that these structures are stable under certain conditions.
On the other hand, most previous studies have only focused on the propagation of MS in ideal ferrites, which means some important properties of the magnetic material were neglected. The main reason is that the nonlinear wave equation describing the propagation of electromagnetic waves in non-ideal ferromagnetic materials is no longer integrable. However, the Gilbert-damping and inhomogeneous exchange effects are essential features in a real ferromagnetic film, and their connection with MS motion is an important issue that has not been explored so far. In this paper, we aim to investigate theoretically and numerically the dynamics of the MS in a ferromagnetic film including damping and the inhomogeneous exchange effect. The rest of this paper is organized as follows. In Section 2, we review the physical background and derive a new (2+1)-dimensional short-wave asymptotic model in ferromagnetic media. In Section 3, the bilinear-like form of the reduced system is constructed and the analytical MS solutions are acquired. In Section 4, the transmission stability of the magnetic soliton is numerically explored. The results show that an unstable MS will split to some magnetic lumps by a small perturbation. The motions of these lumps under the influence of damping and inhomogeneous exchange are analysed in detail. We end this work in Section 5 with a brief conclusion and perspectives.
## II Physical background
### Basic equations
The physical system under consideration is a saturated magnetized ferrite film lying in the \(x-y\) plane, as shown in Fig. 1. Different from Ref. [32], we consider the external field \(\mathbf{H}_{0}^{\infty}\) perpendicular to the film, i.e., \(\mathbf{M}_{0}=(0,0,m)\). So the transverse drift is avoided. The typical thickness of the film is about 0.5mm, and the width is approximately 10mm. We assume the propagation distance is large enough with regard to the wavelength, say more than 50cm. The evolution of the magnetic field \(\mathbf{H}\) and the magnetization density \(\mathbf{M}\) is governed by the Maxwell equations coupled with Landau-Lifschitz-Gilbert equation, which read as
\[-\nabla(\nabla\cdot\mathbf{H})+\Delta\mathbf{H}=\frac{1}{c^{2}} \frac{\partial^{2}}{\partial t^{2}}(\mathbf{H}+\mathbf{M}), \tag{1a}\] \[\frac{\partial}{\partial t}\mathbf{M}=-\gamma\mu_{0}\mathbf{M} \times\mathbf{H}_{\text{eff}}+\frac{\sigma}{M_{s}}\mathbf{M}\times\frac{ \partial}{\partial t}\mathbf{M}, \tag{1b}\]
where \(c=1/\sqrt{\mu_{0}\xi}\) is the speed of light with the scalar permittivity \(\xi\) of the medium, \(\gamma\) is the gyromagnetic ratio, \(\mu_{0}\) being the magnetic permeability of the vacuum, \(\sigma\) is the damping constant, and \(M_{s}\) is the saturation magnetization. The effective field \(\mathbf{H}_{\text{eff}}\) is given by [30]
\[\mathbf{H}_{\text{eff}}=\mathbf{H}-\beta\mathbf{n}(\mathbf{n}\cdot\mathbf{M})+ \alpha\Delta\mathbf{M}. \tag{2}\]
Here \(\alpha\) and \(\beta\) are the constants of the inhomogeneous exchange and the magnet anisotropy (\(\beta>0\) corresponds to the easy-plane case), respectively. For a simple tractability, the unit vector \(\mathbf{n}\) of the anisotropy axis is assumed to be along the \(z\) axis (i.e., \(\mathbf{n}\equiv\epsilon_{z}\)). In order to transform the above systems to dimensionless equation, we rescale the quantities \(\mathbf{M}\), \(\mathbf{H}\), and \(t\) into \(\mu_{0}\gamma\mathbf{M}/c\), \(\mu_{0}\gamma\mathbf{H}/c\), and \(ct\). Thus, the constants \(\mu_{0}\gamma/c\) and \(c\) in Eqs.(2) and (3) are replaced by 1, \(M_{s}\) by \(m=\mu_{0}\gamma M_{s}/c\), and \(\sigma\) by \(\bar{\sigma}=\sigma/\mu_{0}\gamma\)[30].
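Equation (1b) is implicit in \(\partial_{t}\mathbf{M}\); for numerical purposes it is standard to rewrite it in the explicit Landau-Lifshitz form \((1+\bar{\sigma}^{2})\,\partial_{t}\mathbf{M}=-\mathbf{M}\times\mathbf{H}_{\text{eff}}-(\bar{\sigma}/m)\,\mathbf{M}\times(\mathbf{M}\times\mathbf{H}_{\text{eff}})\) in the dimensionless variables introduced above, assuming \(|\mathbf{M}|=m\) is conserved. The following minimal sketch of that right-hand side is our own illustration, not code from the paper.

```python
import numpy as np

def llg_rhs(M, H_eff, sigma, m):
    """Explicit Landau-Lifshitz form of the dimensionless LLG equation (1b):
    (1 + sigma^2) dM/dt = -M x H_eff - (sigma/m) M x (M x H_eff),
    obtained from the implicit Gilbert form under the assumption |M| = m."""
    precession = -np.cross(M, H_eff)
    damping = -(sigma / m) * np.cross(M, np.cross(M, H_eff))
    return (precession + damping) / (1.0 + sigma**2)

# Tiny usage example: a single spin relaxing towards a static field along z.
m, sigma, dt = 1.0, 0.1, 1e-3
M = np.array([0.6, 0.0, 0.8]) * m
H = np.array([0.0, 0.0, 2.0])
for _ in range(20000):          # forward Euler, good enough for illustration
    M += dt * llg_rhs(M, H, sigma, m)
    M *= m / np.linalg.norm(M)  # renormalise to keep |M| = m
print(M)  # approaches (0, 0, m): the damping aligns M with the field
```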
### Linear analysis
To study the linear regime we look at a small perturbation of a given solution. Equations (1) are linearized about the steady state:
\[\mathbf{M}_{0}=(0,0,m),\ \ \mathbf{H}_{0}=\mu\mathbf{M}_{0}. \tag{3}\]
where \(\mu\) is the strength of the internal magnetic field. Before proceeding further we assume that the ferromagnetic materials have weak damping, of order \(\epsilon\). The exchange interaction parameter \(\alpha\) and anisotropy parameter \(\beta\) are of order \(\epsilon^{2}\) and \(\epsilon^{3}\), respectively (i.e. \(\bar{\alpha}=\epsilon^{2}\alpha,\bar{\beta}=\epsilon^{3}\beta\)). Let us seek a plane-wave perturbation solution propagating along the \(x\)-direction of the form
\[\mathbf{M}=\mathbf{M}_{0}+\epsilon\mathbf{m}\exp[i(kx+ly-\omega t )], \tag{4}\] \[\mathbf{H}=\mathbf{H}_{0}+\epsilon\mathbf{h}\exp[i(kx+ly-\omega t )],\]
where \(k\) and \(l\) are the wave numbers in the \(x\) and \(y\) directions, and \(\omega\) is the frequency. The components of \(\mathbf{m}=(m^{x},m^{y},m^{z})\) and \(\mathbf{h}=(h^{x},h^{y},h^{z})\) are arbitrary real scalar quantities.
Substituting Eq. (4) into (1) and (2) in the linear limit, it is reduced to
\[\left(\begin{array}{cccccc}\omega^{2}&0&0&\omega^{2}-l^{2}&kl&0\\ 0&\omega^{2}&0&kl&\omega^{2}-k^{2}&0\\ 0&0&\omega^{2}&0&0&\omega^{2}-k^{2}-l^{2}\\ -i\omega&m\mu&0&0&-m&0\\ -m\mu&-i\omega&0&m&0&0\\ 0&0&-i\omega&0&0&0\\ \end{array}\right)\cdot\left(\begin{array}{c}m_{x}\\ m_{y}\\ m_{z}\\ h_{x}\\ h_{y}\\ h_{z}\\ \end{array}\right)=0\]
Then we obtain the following dispersion relation
\[m^{2}(\mu+1)\left[\mu(k^{2}+l^{2}-\omega^{2})-\omega^{2}\right]-\omega^{2}(k^ {2}+l^{2}-\omega^{2})=0 \tag{5}\]
Note that we focus on the short-wave approximation \(k\rightarrow\infty\)[2]. The wave number is written \(k\sim k_{0}\epsilon^{-1}\) in terms of a small parameter \(\epsilon\ll 1\) linked to the magnitude of the wavelength. Consequently, the frequency expands accordingly as
\[\omega=\omega_{-1}\epsilon^{-1}+\omega_{1}\epsilon+\omega_{3}\epsilon^{3}+.... \tag{6}\]
This assumption guarantees the phase velocity \(\omega(k)/k\) and the group velocity \(\partial\omega/\partial k\) are always bounded [3]. Now, replacing Eq. (6) into the dispersion relation above, we obtain a set of equations:
* At order \(\epsilon^{-4}\): \(\omega_{-1}=\pm k_{0}\)
* At order \(\epsilon^{-2}\): \(\omega_{1}=\left[(\mu+1)m^{2}+l^{2}\right]/2k_{0}\)
* higher order equations which determine \(\omega_{3}\), \(\omega_{5}\),...
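These leading-order terms can be checked numerically. With \(w=\omega^{2}\) and \(K^{2}=k^{2}+l^{2}\), relation (5) is the quadratic \(w^{2}-[K^{2}+m^{2}(\mu+1)^{2}]w+m^{2}\mu(\mu+1)K^{2}=0\); the sketch below (our own, not from the paper) solves it and compares the largest branch with \(\omega\approx k+[(\mu+1)m^{2}+l^{2}]/(2k)\) at large \(k\).

```python
import numpy as np

def omega_largest(k, l, m, mu):
    """Largest positive root of the dispersion relation (5).
    With w = omega^2 and K2 = k^2 + l^2, (5) rearranges to
        w^2 - [K2 + m^2 (mu+1)^2] w + m^2 mu (mu+1) K2 = 0."""
    K2 = k**2 + l**2
    b = -(K2 + m**2 * (mu + 1)**2)
    c = m**2 * mu * (mu + 1) * K2
    w = (-b + np.sqrt(b**2 - 4.0 * c)) / 2.0
    return np.sqrt(w)

# Short-wave check: omega ~ k + [(mu+1) m^2 + l^2] / (2k) for large k.
m, mu, l = 1.0, 0.5, 0.3
for k in (10.0, 100.0, 1000.0):
    exact = omega_largest(k, l, m, mu)
    approx = k + ((mu + 1) * m**2 + l**2) / (2.0 * k)
    print(k, exact, approx, abs(exact - approx))   # difference shrinks as k grows
```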
The direction of the wave propagation is assumed to be close to the \(x\) axis, thus the \(y\) variable only accounts for a slow transverse deviation [40; 41]. Therefore \(l\) is assumed to be very small with respect to \(k\), and we write \(l=l_{0}\), of order 0 with respect to \(\epsilon\). The phase up to order \(\epsilon\) is thus \((x-t)/\epsilon+l_{0}y-\epsilon\omega_{1}t\), which motivates the introduction of new variables:
\[\zeta=\frac{1}{\epsilon}(x-Vt),\ \ y=y,\ \ \tau=\epsilon t. \tag{7}\]
The variable \(\zeta\) describes the shape of the wave propagating at speed \(V\); it assumes a short wavelength about \(1/\epsilon\). The slow time variable \(\tau\) accounts for the propagation during very long time on very large distances with regard to the wavelength. The transverse variable \(y\) has an intermediate scale, as in KPP-type expansions [26; 41]
### Multiple scale approach
In order to derive the nonlinear model, fields \(\mathbf{M}\) and \(\mathbf{H}\) are expanded in power series of \(\epsilon\) as
\[\mathbf{M} =\mathbf{M}_{0}+\epsilon\mathbf{M}_{1}+\epsilon^{2}\mathbf{M}_{2}+ \epsilon^{3}\mathbf{M}_{3}+..., \tag{8}\] \[\mathbf{H} =\mathbf{H}_{0}+\epsilon\mathbf{H}_{1}+\epsilon^{2}\mathbf{H}_{2}+ \epsilon^{3}\mathbf{H}_{3}+....\]
Figure 1: Ferrite film under consideration. The sample is magnetized to saturation by long strong magnetic field \(\mathbf{H}_{0}^{\infty}\) applied in the \(z\)-direction. The \(x\)-direction of the short wave propagation is perpendicular to the direction of static magnetization.
where \(\mathbf{M}_{0}\), \(\mathbf{H}_{0}\), \(\mathbf{M}_{1}\), \(\mathbf{H}_{1}\), ... are functions of \((\zeta,y,\tau)\). We consider the boundary conditions: \(\lim\limits_{\zeta\rightarrow-\infty}\mathbf{M}_{0}=\left(0,0,m\right)\), \(\lim\limits_{\zeta\rightarrow-\infty}\mathbf{M}_{j}=\lim\limits_{\zeta\rightarrow-\infty}\mathbf{H}_{j}=0,(j\neq 0)\). We derive the following expressions by substituting Expansions (8) into equation (1):
* At order \(\varepsilon^{-2}\): \(\mathbf{M}_{0}\) is a constant vector \(\mathbf{M}_{0}\)=(0,0,m),
* At order \(\varepsilon^{-1}\): \(H_{0}^{z}=0,\;\;M_{1}^{z}=0\),
* At order \(\varepsilon^{0}\): \(M_{1\zeta}^{x}=mH_{0}^{y}\), \(M_{2\zeta\zeta}^{x}=-H_{2\zeta}^{x}-H_{1\zeta\tau}^{y}\), \(M_{2\zeta\zeta}^{x}=-H_{1\zeta\zeta}^{z}+H_{0\zeta y}^{z}\), \(M_{2\zeta\zeta}^{z}=H_{2\zeta\tau}^{z}+H_{2\zeta}^{x}\),
* At order \(\varepsilon^{1}\): \(M_{2\zeta}^{x}=-mH_{1}^{y}\), \(M_{2\zeta}^{x}=m\delta M_{1\zeta\zeta}^{x}+\sigma M_{1\zeta}^{x}-M_{1}^{x}H_{0}^{z}+mH_{1}^{x}\), \(M_{2\zeta}^{z}=M_{1}^{x}H_{0}^{y}\).

Let us introduce new independent variables \(X\), \(Y\) and \(T\) defined as \(X=-m\zeta/2\), \(Y=my\), \(T=m\tau\).
After eliminating \(\mathbf{H}_{2}\) and \(\mathbf{M}_{2}\), we finally obtain the (2+1)-dimensional KMM equation:
\[\begin{split} C_{XT}&=-BB_{X}+C_{YY},\\ B_{XT}&=BC_{X}+B_{YY}-sB_{X}+\rho B_{XX},\end{split} \tag{9}\]
where observables \(B,C\) and constants \(s\), \(\rho\) are defined by
\[\begin{split} C&=-X-\int^{X}\left(H_{0}^{z}/m\right) dX,\;\;B=M_{1}^{x}/2m,\\ s&=-\sigma/2,\;\;\rho=\alpha m^{2}/4.\end{split} \tag{10}\]
This equation is new; it describes the evolution of the magnetization field \(\mathbf{M}\) and the magnetic field \(\mathbf{H}\) within a ferrite film in the presence of Gilbert damping and inhomogeneous exchange. The quantities \(H_{0}\) and \(M_{1}\) refer to the zeroth- and first-order expansion coefficients of the external magnetic field and the magnetization, respectively. For simplicity, in what follows the independent variables \(X\), \(Y\) and \(T\) will be written in lower case as \(x\), \(y\) and \(t\), respectively.
## III Hirota's bilinearization and soliton solutions of the (2+1)-dimensional KMM equation
To explore soliton solutions for the (2+1)-dimensional KMM equation (9), we consider a specific dependent variable transformation
\[B=\frac{G}{F},\;\;C=\delta x-2(\ln F)_{t}-2(\ln F)_{y}, \tag{11}\]
where \(\delta\) is an arbitrary constant. Consequently, the bilinear-like forms of the (2+1)-dimensional KMM equation can be derived as follows
\[F\cdot(D_{x}D_{t}+sD_{x}-D_{y}^{2})G\cdot F+G\cdot(D_{x}D_{y}+D_{y}^{2})F\cdot F =\delta F^{2}G \tag{12a}\] \[\partial_{x}\bigg{[}\frac{G^{2}}{2F^{2}}-\frac{(D_{y}D_{t}+D_{t}^{2})F\cdot F }{F^{2}}\bigg{]}+\partial_{y}\left[\frac{(D_{y}D_{t}+D_{t}^{2})F\cdot F}{F^{2} }\right]= 0 \tag{12b}\]
where \(G\), \(F\) are differential functions of \((x,y,t)\) to be determined. The symbols \(D_{x},D_{t}\) refer to Hirota's bilinear operators with respect to the variables \(x\), \(t\), respectively. In order to construct solitary wave solutions of Eq. (9), we expand \(G\) and \(F\) with respect to a formal expansion parameter as \(G=\varepsilon G_{1}+\varepsilon^{3}G_{3}+\varepsilon^{5}G_{5}+...,F=1+ \varepsilon^{2}F_{2}+\varepsilon^{4}F_{4}+\varepsilon^{6}F_{6}+...\), in which \(\varepsilon\) is a perturbation parameter and functions \(G_{i},F_{i},(i=1,2,3,...)\) are expansion coefficients of the above series. The one-soliton solution could be constructed by truncating the perturbation expansion of \(G\) and \(F\) as follows
\[G=e^{\eta_{1}},\;\;F=1+\frac{k^{2}A^{2}}{16\delta^{2}}e^{2\eta_{1}}. \tag{13}\]
Substituting these expressions into Eq.(9) and solving the bilinear system recursively, in the absence of damping, the analytical one-soliton solution of the (2+1)-dimensional KMM equation can be transformed into
\[B=\frac{2\delta}{k}\operatorname{sech}(\eta_{1}+\eta_{0}),\;C=\delta x-\frac{2 \delta}{k}\left[\tanh(\eta_{1}+\eta_{0})+1\right], \tag{14}\]
where \(\eta_{1}=kx+ly+[(l^{2}+\delta)/k]t\), \(\eta_{0}=\ln(k/4\delta)\), and \(k\) and \(l\) are arbitrary real constants. It should be noted that this soliton solution exists only when the damping is neglected \((s=0)\).
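Solution (14) can be verified directly by substituting it into system (9) with \(s=\rho=0\) and checking that the residuals vanish up to finite-difference error. The short sketch below is our own check, not code from the paper; the parameter values are chosen only so that \(\eta_{0}=\ln(k/4\delta)\) is real.

```python
import numpy as np

# Parameters of the one-soliton solution (14); delta and k with the same sign.
k, l, delta = 1.0, 0.4, 0.5
omega = (l**2 + delta) / k            # coefficient of t in eta_1

def eta(x, y, t):
    return k * x + l * y + omega * t + np.log(k / (4.0 * delta))

def B(x, y, t):
    return (2.0 * delta / k) / np.cosh(eta(x, y, t))

def C(x, y, t):
    return delta * x - (2.0 * delta / k) * (np.tanh(eta(x, y, t)) + 1.0)

def d(f, var, pt, h=1e-3):
    """Central first derivative of f(x, y, t) with respect to one variable."""
    i = {"x": 0, "y": 1, "t": 2}[var]
    p, m = list(pt), list(pt)
    p[i] += h; m[i] -= h
    return (f(*p) - f(*m)) / (2.0 * h)

def dd(f, v1, v2, pt, h=1e-3):
    """Second (possibly mixed) derivative by nesting central differences."""
    return d(lambda *q: d(f, v2, q, h), v1, pt, h)

# Residuals of system (9) with s = rho = 0 at a few sample points.
for pt in [(-1.0, 0.5, 0.3), (0.2, -0.7, 1.1), (1.5, 0.0, -0.4)]:
    r1 = dd(C, "x", "t", pt) + B(*pt) * d(B, "x", pt) - dd(C, "y", "y", pt)
    r2 = dd(B, "x", "t", pt) - B(*pt) * d(C, "x", pt) - dd(B, "y", "y", pt)
    print(f"residuals at {pt}: {r1:.2e}, {r2:.2e}")  # both at the ~1e-6 discretisation level
```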
\[G = A_{1}e^{\xi_{1}}+A_{2}e^{\xi_{2}}+C_{12}e^{\xi_{1}+2\xi_{2}}+C_{2 1}e^{2\xi_{1}+\xi_{2}}, \tag{15a}\] \[F = 1+B_{11}e^{2\xi_{1}}+B_{22}e^{2\xi_{2}}+B_{12}e^{\xi_{1}+\xi_{2}} +E_{12}e^{2\xi_{1}+2\xi_{2}}, \tag{15b}\]
where \(A_{1},A_{2},k_{1},k_{2}\) are real constants, \(\xi_{i}=k_{i}x+l_{i}y+\left[(l_{i}^{2}+\delta)/k_{i}\right]t,(i=1,2)\), and the remaining parameters have the following forms:
\[\begin{split} B_{ii}&=\frac{A_{i}^{2}k_{i}^{2}}{16 \delta^{2}},B_{12}=\frac{A_{1}A_{2}}{2\delta^{2}}\frac{k_{1}^{2}k_{2}^{2}}{k_{ +}^{2}},k_{1}l_{2}=k_{2}l_{1},\\ C_{ij}&=\frac{A_{i}A_{j}^{2}}{16\delta^{2}}\frac{k_{ j}^{2}k_{-}^{2}}{k_{+}^{2}},E_{12}=\frac{A_{1}^{2}A_{2}^{2}}{256\delta^{4}} \frac{k_{1}^{2}k_{2}^{2}k_{-}^{4}}{k_{+}^{4}},\end{split} \tag{16}\]
where \(k_{+}=k_{1}+k_{2},\;k_{-}=k_{1}-k_{2}\). Parameters \(A_{i}\), \(A_{j}\), \(k_{i}\), \(k_{j}\) and \(l_{i},(i=1,2,j=3-i)\) are arbitrary real constants.
## IV Numerical investigation of line-soliton and magnetic Lumps
### Unstable MS splits into lumps
We now turn to the stability and interactions between MSs in a ferromagnetic film. The initial data is an MS perturbed by a position-dependent Gaussian wave packet of the following form:
\[f=b\exp\left[-\left(\frac{x-x_{0}}{x_{r}}\right)^{2}-\left(\frac{y}{y_{r}} \right)^{2}\right], \tag{17}\]
where \(b,x_{r}\) and \(y_{r}\) correspond to the shape of the wave packet and \(x_{0}\) is related to the perturbation position.
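A minimal sketch (ours) of how such perturbed initial data can be assembled on a grid follows; the grid and the parameter values are chosen only for illustration and differ from those used in Fig. 2.

```python
import numpy as np

# Illustrative grid; k and delta chosen with the same sign so eta_0 is real.
x = np.linspace(-40.0, 40.0, 800)
y = np.linspace(-20.0, 20.0, 400)
X, Y = np.meshgrid(x, y, indexing="ij")

k, l, delta = 1.0, 0.0, 0.5
eta0 = np.log(k / (4.0 * delta))
B0 = (2.0 * delta / k) / np.cosh(k * X + l * Y + eta0)   # line soliton (14) at t = 0

# Gaussian disturbance (17), localised around x0.
b, x0, xr, yr = 0.1, -29.0, 1.5, 2.5
f = b * np.exp(-((X - x0) / xr) ** 2 - (Y / yr) ** 2)

B_init = B0 + f   # perturbed initial data handed to the time-stepping scheme
print(B_init.shape, float(B_init.max()))
```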
The time evolution results clearly show the instability of the MS. For small \(b\), the MS will break up and eventually
evolve into some stable two-dimensionally localized _lumps_, as displayed in Figs. 2(a) and 2(b). We observe that most of the energy is always propagated as a lump, even if its speed may differ from the input. Such a magnetic lump is a solitary wave packet that maintains its shape and speed during propagation or collision.
A complete single lump of magnetic field component \(H^{z}\) (component \(H^{y}\)) is circled in red (black) in Fig.2. The enlarged views (see Figs.2(c) and 2(d)) provide a clear picture of the shape and contour map of the lump. It can be found that component \(H^{z}\) is a dipole-mode lump, whereas component \(H^{y}\) is a standard KP-lump. We also show the vector field of the magnetic lump in Fig.3(a). Note that magnetic field component \(H^{x}\) is zero, the magnetic field is always in the \(y-z\) plane, hence the lump can be regarded as a \(360^{\circ}\) domain wall localized in \(x\) and \(y\) directions. Fig.3(b) presents the magnetic field along \(y\!=\!0\). The blue and red arrows correspond to the magnetic field intensity of component \(H^{z}\), \(H^{y}\), respectively. The rest of this work is concerned with the propagation and interaction behavior of these lumps in ferrite medium.
### Lump motion in ferromagnets with damping or inhomogeneous exchange effects
The evolution behavior of the magnetic lump in the ideal ferrite is quite simple and predictable. Each lump maintains its shape while it travels at a constant speed. However, in most real ferromagnetic materials, we have to take the Gilbert-damping into account. For instance, the dimensionless damping constant \(s\) ranges from \(0.048\) to about \(0.385\) in garnet ferrite films. Here we are going to study the dynamics of a magnetic lump in a damped ferrite film. The typical ferromagnetic film under consideration is a garnet ferrite film with the dimensionless damping constant \(s=0.1\). For a clearer view of the change in shape of the lump, we define \(\mathcal{H}\) and \(\mathcal{W}\) as the height and width of the lump, which are the vertical distance between the highest point and the lowest point and the horizontal distance along the propagation direction, respectively. All of these are summarized in Fig.4.
The propagation of a lump on the garnet ferrite film is presented in Fig.5. As shown in Fig.5(a), the lump travels forward a visible distance in the damped ferrite. Beyond that, comparing the lump profiles at \(t\!=\!0\) and \(t\!=\!10\), we clearly observe that the lump becomes smaller and narrower. Fig.5(b) shows that the lump height and width exhibit a tendency of exponential decay. The solid blue line is the exponential fitting curve to \(\mathcal{H}(t)\), with the function expression being \(\mathcal{H}(t)=A_{0}e^{-st}\). We confirm that the above-mentioned amplitude attenuation law is universal by simulating the motion of lumps in ferrites with various damping factors. Moreover, a definite relationship between the amplitude and the localization region of solitons is important for the soliton excitations. We analyze different sizes of numerical lumps and mark the width and height of lumps in the phase diagram (see Fig. 5(c)). The results show that for a magnetic lump excitation, its width and height obey a linear relationship within the error range (\(\mathcal{W}/\mathcal{H}\sim 0.305\)). So the lump excitation, upon decay, retains a soliton form. Therefore, in this system, the Gilbert-damping plays the role of dissipating energy during the motion of magnetic lumps, and it is characterized by decreasing the amplitude and width of the lump.
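The exponential fit \(\mathcal{H}(t)=A_{0}e^{-st}\) mentioned above can be reproduced with a standard least-squares routine; the sketch below fits synthetic height data and is our own illustration, not the paper's measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic lump heights sampled during propagation in a damped film
# (illustration only; the paper fits heights measured from its simulation).
s_true, A0_true = 0.1, 1.2
t = np.linspace(0.0, 10.0, 40)
H = A0_true * np.exp(-s_true * t) + 0.005 * np.random.default_rng(1).normal(size=t.size)

def decay(t, A0, s):
    return A0 * np.exp(-s * t)

(A0_fit, s_fit), _ = curve_fit(decay, t, H, p0=(1.0, 0.05))
print(f"fitted A0 = {A0_fit:.3f}, fitted s = {s_fit:.3f}")  # close to 1.2 and 0.1
```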
Inhomogeneities, otherwise referred to as deformities, are inevitable in real magnetic materials, and they can be caused either by external fields or by the presence of defects, voids and gaps in the material. It has already been reported that the MS may be deformed by the presence of inhomogeneities, in particular
Figure 3: (a) The vector field of the magnetic lump. (b) The magnetic lump along \(y\!=\!0\). The blue and red arrows correspond to the magnetic field intensity of components \(H^{z}\), \(H^{y}\), respectively.
Figure 2: Propagation of MS perturbed by a Gaussian disturbance. (a) Component \(H^{z}\), (b) Component \(H^{y}\), (c) and (d) are enlarged views of the indicated areas circled in red and black, respectively. The parameters are chosen as \(A_{1}=A_{2}=1,\delta=-1,l_{1}=l_{2}=0,k_{1}=1,k_{2}=2,x_{0}=-29,b=0.1,x_{r}=1.5, y_{r}=2.5\) in (16) and (17).
its structure and speed [35; 42]. In the present system, the inhomogeneous exchange process cannot be ignored when the wavelength of the lump is comparable to the characteristic exchange length.
We now move to study the lump motion in the presence of the inhomogeneous exchange effect. The initial data is the stable magnetic lump shown in Fig.5. As can be observed from Figs. 6(a) and 6(b), in a ferrite without exchange interaction, the lump solution propagates at a constant speed along its previous path. We then consider the non-equilibrium dynamics of the lump by performing a sudden interaction quench. The pictures of the magnetic field component at dimensionless times \(t=2\) and \(t=4.5\) are shown in Figs. 6(c) and 6(d). As we see, for a quench from the non-interacting case to a ferrite film with strong inhomogeneous exchange, the lump oscillates rapidly and diffracts along the propagation direction. A two-dimensional shock wave is generated and propagates forward. The shock wave front continues to propagate in the negative direction along the \(x\)-axis. Finally, the energy of the lump is dissipated into numberless small waves. Accordingly, since the lump would be destroyed by the inhomogeneous exchange process, one has to keep its wavelength away from the characteristic exchange length in lump-based microwave applications.
### Some examples of excitations and interactions
The evolution pattern given in Fig.2 reveals that the lump moves at a larger velocity than the broken MS during the propagation. The reason is that the velocity of a soliton solution is proportional to its amplitude. During the formation of the lump, the original MS is destroyed, and most of the energy is concentrated in certain centers, which causes the amplitude (and velocity) of the lump to be greater than those of the MS. These lumps with various speeds enable us to explore the interaction between lump and soliton, as well as between two lumps.
A typical example of a lump-MS collision is shown in Fig.7(a). The MS begins to break up at around \(t=4\). Subsequently, the splitting lump catches up and collides with the front MS. After the collision, the front MS is destroyed and broken into several lumps of various sizes. It is remarkable that the lump keeps its localized form almost unchanged before and after the collision. This phenomenon implies that such two-component lumps are natural solutions of these nonlinear propagation equations. Further simulation shows that these lump structures can be generated by an MS with a random disturbance. Fig.7(b) depicts a characteristic inelastic collision between two lumps. We initially generate two adjoining lumps. They are emitted by the MS at dimensionless time \(t=6.5\). The merging process can be described as follows. From \(t=7.5\) to \(t=9.5\), the two lumps merge together and give birth to a new lump whose amplitude is significantly greater than the amplitudes of the previous lumps. Obviously there is a weak attraction between the two lumps which results in their fusion. In addition to the fusion of the two lumps, we also observe an extraordinary peak at a specific moment (about \(t=9.5\)), which looks like a
Figure 5: Evolution of a magnetic lump in a damped ferrite film with dimensionless damping constant \(s=0.1\). (a) Comparison picture of lump wave at \(t=0\) and \(t=10\). (b) The variation of lump height \(\mathcal{H}\), lump width \(\mathcal{W}\) and velocity V. (c) Numerical relationship between the width and height of magnetic lump.
Figure 6: Propagation of lump with and without the inhomogeneous interaction, respectively.
second-order rogue wave. It appears to be the result of the interaction between the ripples surrounding the two lumps. After the fusion, the rogue wave-like structure disappears and the dynamics of the output is determined mainly by a single high-amplitude lump.
## V Conclusion
As a conclusion, the nonlinear propagation of MS in a saturation-magnetized ferromagnetic thick film is studied in detail. As a starting point, we derive the (2+1)-dimensional KMM system that governs the evolution of short MS waves in a saturated ferromagnetic film. The bilinear form of the KMM system is constructed and the MS solutions are obtained analytically.
After that, numerical simulations are performed to analyse the evolution behaviours of the MS. A significant observation is that the unstable MS can be destroyed by a Gaussian perturbation and broken into stable magnetic lumps. These lumps exhibit high stability during the propagation. Furthermore, some examples are given to analyse the collision behaviours between a lump and an MS, and the interaction between two lumps. It is found that the lump keeps its shape and speed in the collision with the MS. The results confirm that the lump is a stable propagation mode in this system and, more to the point, the velocity of the lump can be adjusted through its amplitude. Their robustness and controllability provide possibilities for future information memory and logic devices. We also study the propagation of such a lump in ferrites subjected to the influence of damping and inhomogeneous exchange effects. When the Gilbert-damping of the ferrite is considered, the lumps undergo the following changes: the amplitude and the speed of the lump decrease, and the width of the lump along the propagation direction narrows. Quenching the interaction strength causes a strong diffraction of the lump.
We hope our work will stimulate follow-up experimental studies of lump-based microwave applications. Additionally, since only the one- and two-line-soliton solutions are obtained, the integrability of the (2+1)-dimensional Kraenkel-Manna-Merle (KMM) system remains an open issue. The existence of the higher-dimensional evolution system as well as the bulk polariton solution is an intriguing avenue for future exploration.
## Acknowledgment
This work was supported by the National Natural Science Foundation of China under Grant Nos. 11835011, 11675146, and 11875220.
|
2303.03725 | Fuzzy Logic and Markov Kernels | Fuzzy logic is a way to argue with boolean predicates for which we only have
a confidence value between 0 and 1 rather than a well defined truth value. It
is tempting to interpret such a confidence as a probability. We use Markov
kernels, parametrised probability distributions, to do just that. As a
consequence we get general fuzzy logic connectives from probabilistic
computations on products of the booleans, stressing the importance of joint
confidence functions. We discuss binary logic connectives in detail and recover
the "classic" fuzzy connectives as bounds for the confidence for general
connectives. We push multivariable logic formulas as far as being able to
define fuzzy quantifiers and estimate the confidence. | Rogier Brussee | 2023-03-07T08:16:55Z | http://arxiv.org/abs/2303.03725v1 | # Fuzzy logic and Markov kernels
###### Abstract.
Fuzzy logic is a way to argue with boolean predicates for which we only have a confidence value between 0 and 1 rather than a well defined truth value. It is tempting to interpret such a confidence as a probability. We use Markov kernels, parametrised probability distributions, to do just that. As a consequence we get general fuzzy logic connectives from probabilistic computations on products of the booleans, stressing the importance of joint confidence functions. We discuss binary logic connectives in detail and recover the "classic" fuzzy connectives as bounds for the confidence for general connectives. We push multivariable logic formulas as far as being able to define fuzzy quantifiers and estimate the confidence.
Key words and phrases:Fuzzy Logic, Markov Kernel, Probability Theory
## 1. Introduction
The notion of fuzzy sets tries to formalise the idea that if we have a set of things \(X\) and a property \(P\) for an element \(x\in X\), (i.e. a predicate function \(P:X\rightarrow\mathbb{B}\)), we can express a level of confidence in the truth of \(P(x)\), traditionally called "belief", as a function \(p:X\rightarrow[0,1]\). A predicate is in one to one correspondence with a subset \(X_{P}=\{x\in X\ |\ P(x)\}\subset X\), so the belief function \(p(x)\) can be interpreted as a level of confidence of membership of \(x\) in \(X_{P}\), a "fuzzy" membership function. It is tempting to interpret this membership function as a probability. On second thought this is a bit problematic, however, since \(p(x)\) is a function on \(X\) and certainly not a probability distribution on \(X\) as there is no way to "add up" confidences at different points \(x\in X\) to a confidence 1, and even for discrete spaces there is no reason why they should. Nonetheless the interpretation of fuzzy logic and its relation to probability theory has been a continued discussion since the inception of fuzzy logic [Ga][DP1][DP2][LSBDW][Za]. Here we will discuss a direct interpretation of fuzzy logic in terms of probability theory and more particularly in the Markov category. It interprets the belief function as a thinly disguised Markov kernel with values in the Booleans, i.e. instead of interpreting the belief function as a probability distribution _on_\(X\) we interpret the confidence function as defining a probability distribution on the _booleans_ which is _parametrised_ by \(X\). Somewhat similar ideas seem to have been proposed by Loginov [Lo] based on conditional probability rather than Markov kernels.
The main contribution of this paper is that this interpretation gives a straightforward probabilistic interpretation of fuzzy logic and its properties, which become a direct consequence of the properties of traditional Boolean logic and standard probabilistic computations. In particular we immediately get a family of fuzzy logic connectives like "and" and "or" parametrised by the confidence that (n)either predicate is true. More conceptually, in this interpretation logic connectives are only properly defined if we have a _joint_ confidence in the predicates, i.e. a joint probability distribution on two booleans rather than just the belief functions separately, which only determine the marginals. This makes sense intuitively: even if we only have 50% confidence in a predicate \(P\), we must still have 0% confidence in \(P\wedge\neg P\) and can have 100% confidence in \(P\vee\neg P\). Of course the strong correlation between the two predicates is reflected in the joint distribution of \((P,\neg P)\), but the joint confidence can also express more subtle "if \(P_{1}\) then \(P_{2}\) is more or less likely" correlations. The current approach makes it possible to make use of joint confidences/beliefs for correlated predicates in a natural way. In fact, the same argument that gives a fuzzy/probabilistic interpretation for binary logic relations also works for more general systems of logic formulas. We can even extend this framework to quantification over infinite sets. However, while this Markov interpretation is useful and precise, it also requires specifying joint confidence functions, which have \(2^{n}-1\) degrees of freedom for \(n\) predicates. Thus it is useful to have upper and lower bounds for logic connectives, or results making assumptions like independence. It turns out that sensible bounds and independence simplifications exist and are given by connectives in their "traditional" fuzzy forms (plural!), "explaining" why an approach with logic flavour often works.
The organisation of this paper is as follows: in section 2 we define Markov kernels and the Markov category, as a quick recap and to fix notation. In section 3 we introduce fuzzy logic, again mainly to fix notation and to specify what form of fuzzy logic we use. In section 4 we reformulate the belief function in terms of Markov kernels and use them to define binary logic connectives. As an application of this probabilistic interpretation, we describe the relation to "classical" logic connectives and prove a de Morgan law. We finally discuss the general case and quantifiers in section 5.
In the rest of this paper we use the standard notations \(\mathbb{R},\mathbb{N}\) for the real numbers, and nonnegative integers, \([a,b]\), \((a,b]\) or \((a,b)\) for the closed or (half) open interval and the less standard \(\mathbb{B}=\{\mathbf{F},\mathbf{T}\}\) for the Booleans.
### competing interest
The author gratefully acknowledges support from the RNOB Regio deal Noord Oost Brabant, but otherwise declares no competing interest in this research.
## 2. Markov kernels
A Markov kernel is a parametrised version of a probability distribution. Just like a probability distribution formalises a random value, a Markov
kernel formalises a function with random values. The formal definition leans on the language of measure-theoretic probability theory, but it is mainly used as a solid foundation supporting a well-oiled machinery. This section is mainly needed because we liberally use its language, in particular that of functoriality of the pushforward. Some readers may prefer to take only a cursory look at the definitions of Markov kernel and pushforward, and go back to the other definitions as needed.
Let \(X=(X,\Cal{A})\) and \(Y=(Y,\Cal{B})\) be two measurable spaces, i.e. a set together with a \(\sigma\)-algebra of measurable subsets [Ru, Ch 1].
**Definition 1**.: _A Markov kernel [Do, Appendix VI]\(F:X\to Y\) is a map \(\Cal{B}\times X\to[0,1]\) such that_
* _for each_ \(B\in\Cal{B}\)_, the map_ (1) \[F(B,\text{-}):X \to[0,1]\] (2) \[x \mapsto F(B,x)\] _is a measurable function on_ \(X\)_,_
* _for each_ \(x\in X\) _the map_ (3) \[F(\text{-},x):\Cal{B} \to[0,1]\] (4) \[B \mapsto F(B,x)\] _is a probability measure on_ \(Y\)_._
As a convenient (manifestly asymmetric!) notation we write \(F(dy|x)\) for the probability measure on \(Y\) defined by the point \(x\in X\). We likewise write \(F(B|x)=\int_{B}F(dy|x)\) for \(F(B,x)\). The notation also suggests conditional probability. While this is not a bad way to think about it, the two concepts are related but technically and conceptually slightly different: a conditional probability is a Markov kernel defined almost surely with respect to a probability measure on the "universe of discourse" \(X\). This background measure on \(X\) can become a block in interpretation, particularly if one liberally identifies measures with functions on discrete spaces. It is also the way people denote parametrised probability distribution, e.g. \(G(d^{n}x|\Sigma,\mu)\) for a Gaussian distribution on \(R^{n}\) that is parametrised by its covariance matrix \(\Sigma\) and mean \(\mu\).
The \(\sigma\)-algebras and measurability of the function \(F(B|\text{-})\) are a convenient technical setup allowing us to define integration without worries, and depending on the \(\sigma\) algebra disallow pathological constructions using the axiom of choice (e.g. if \(X\) and \(Y\) are topological spaces and the \(\sigma\)-algebras are the corresponding Borel algebras, all continuous functions are allowed) or give built in adaptation to some "pixelation" of the space. The meat of the definition is the second part: it allows us to think of Markov kernels as probabilistic "functions" that assign a probability distribution \(F(dy|x)\) on
\(Y\) rather than a specific value for every \(x\) in \(X\). Conversely, every normal measurable function \(f:X\to Y\) defines a Markov kernel1 by \(F(dy|x)=\delta_{f(x)}(dy)\), i.e.
Footnote 1: If the \(\sigma\)-algebra \(\mathcal{B}\) on \(Y\) does not separate points, different functions may define the same kernel.
\[F(B|x)=\begin{cases}1&\text{ if }f(x)\in B\\ 0&\text{ otherwise}\end{cases} \tag{5}\]
In particular, the identity function on \(X\) has the Markov kernel
\[1_{X}(dx^{\prime}|x)=\delta_{x}(dx^{\prime}) \tag{6}\]
Like functions, Markov kernels can be composed: if \(F:X\to Y\) and \(G:Y\to Z\) are Markov kernels, then the composition of the kernels \(G\circ F\) is defined by
\[(G\circ F)(dz|x)=\int_{Y}G(dz|y)F(dy|x) \tag{7}\]
i.e. for each measurable subset \(C\subset Z\) and \(x\in X\) we define
\[(G\circ F)(C|x)=\int_{Y}G(C|y)F(dy|x) \tag{8}\]
One checks that \(G\circ F\) is indeed a Markov kernel, that \(\circ\) is associative and \(1_{X}\) is the identity on \(X\). This gives measurable spaces and Markov kernels the structure of a category [McL, section I.2], the Markov category [Pa], (aka Lawvere category [La], aka Giry monad[Gi] aka the Stochastic Category [Vo]). The category has an initial object, the empty set \(\emptyset\), and a final object, the one point set \(\star=\{*\}\). A probability measure \(\mathbb{P}\) on a measurable set \(X\) is "the same thing" as a Markov kernel \(P:\star\to X\) because the Markov kernel is fully determined by the probability measure \(P(dx|*)=\mathbb{P}(dx)\). Hence a probability space \((X,\mathcal{A},\mathbb{P})\) is "the same thing" as a pointed space \(\star\to X\) in the Markov category and we will identify them. More generally, if \(X\) comes with a family of probability distributions parametrised by \(S\), (i.e. a Markov kernel \(\mathbb{P}:S\to X\)) then for every kernel \(F:X\to Y\) we have a pushforward \(F_{*}\mathbb{P}=F\circ\mathbb{P}:S\to Y\). The pushforward2: is a covariant functor, i.e. for \(G:Y\to Z\) we have
Footnote 2: There is also a functorial contravariant pullback: a kernel \(\phi:Y\to T\) gives a kernel \(F^{*}\phi:X\to T\) by \(F^{*}\phi=\phi\circ F\) which satisfies \((G\circ F)^{*}=F^{*}\circ G^{*}\)
\[(F\circ G)_{*}=F_{*}G_{*} \tag{9}\]
This _functoriality_ is just expressing the fact that \(\circ\) is associative here. If a kernel comes from a measurable function \(f:X\to Y\) the pushforward is very explicit. For \(B\in\mathcal{B}\)
\[(f_{*}P)(B|s)=P(f^{-1}(B)|s) \tag{10}\]
i.e. it is just the pushforward of measures but parametrised with \(s\in S\). Given \(F_{1}:X\to Y_{1}\) and \(F_{2}:X\to Y_{2}\) there does not, in general, exist a _unique_ Markov kernel
\[F_{12}:X\to Y_{1}\times Y_{2} \tag{11}\]
with marginals \(\pi_{1*}F_{12}=F_{1}\) and \(\pi_{2*}F_{12}=F_{2}\), where \(\pi_{1},\pi_{2}\) are the projections on the first and second coordinate. This is because marginal probability distributions do not, in general, determine a joint probability distribution even though the marginals may heavily restrict the joint distribution. We can, however, define a conditionally independent (i.e. independent for every \(x\)) joint kernel \(F_{1}\otimes F_{2}:X\to Y_{1}\otimes Y_{2}\) defined by the product measure [Ru, Definition 7.7]
\[(F_{1}\otimes F_{2})(d(y_{1},y_{2})|x)=F_{1}(dy_{1}|x)F_{2}(dy_{2}|x). \tag{12}\]
On measurable subsets \(B_{1}\times B_{2}\subset Y_{1}\times Y_{2}\), it is given by \((F_{1}\otimes F_{2})(B_{1}\times B_{2}|x)=F_{1}(B_{1}|x)F_{2}(B_{2}|x)\) which is then uniquely extended to the product \(\sigma\)-algebra using the Caratheodory extension theorem.
To explain what we are going to do with logic connectives \(\wedge\) and \(\vee\) it is useful to start with the more familiar situation of adding random variables. Therefore first assume that \((X,\mathcal{A})\) is not just a measurable space but a probability space \((X,\mathcal{A},\mathbb{P})\) for some probability measure \(\mathbb{P}\). Recall that in this setup a random variable or stochast3 is a measurable map \(f:X\to\mathbb{R}\). The law of the stochast \(f\) is the pushforward measure \(f_{*}\mathbb{P}\), which is a probability distribution on \(\mathbb{R}\). Concretely, for an interval \((-\infty,b]\subset\mathbb{R}\) the pushforward measure gives the cumulative distribution
Footnote 3: In this paper uppercase “end of alphabet” latin letters denote (measure) spaces rather than stochasts.
\[(f_{*}\mathbb{P})((-\infty,b])=\mathbb{P}(f^{-1}(-\infty,b])=\mathbb{P}(f(x) \leq b). \tag{13}\]
For \(F:X\to\mathbb{R}\) a Markov kernel, we have the pushforward probability measure \(F_{*}\mathbb{P}=F\circ\mathbb{P}\) with a cumulative distribution
\[(F_{*}\mathbb{P})((-\infty,b])=\int_{X}F((-\infty,b]|x)\mathbb{P}(dx) \tag{14}\]
For two stochasts \(f_{1},f_{2}:(X,\mathcal{A},\mathbb{P})\to\mathbb{R}\), we have a well defined sum function \((f_{1}+f_{2})(x)=f_{1}(x)+f_{2}(x)\). The law of their sum \(f_{1}+f_{2}\) is the probability measure \((f_{1}+f_{2})_{*}\mathbb{P}\). However, it is well known that, in general, the law of the sum is _not_ determined by the laws \(f_{1*}\mathbb{P}\) and \(f_{2*}\mathbb{P}\) of the stochasts separately. For example if \(f\) is normally distributed with mean \(0\) and standard deviation \(1\), and \(f_{1}=f\), \(f_{2}=f\), then \(f_{1}\) and \(f_{2}\) are also normally distributed with standard deviation \(1\) and \(f_{1}+f_{2}=2f\) is normally distributed with standard deviation \(2\), while if \(f_{1}^{\prime}=f\), \(f_{2}^{\prime}=-f\) then \(f_{1}^{\prime}\) and \(f_{2}^{\prime}\) are again normally distributed with standard deviation \(1\), but obviously \(f_{1}^{\prime}+f_{2}^{\prime}=0\). What _does_ determine the law of the sum is the _joint_ law \((f_{1}\times f_{2})_{*}\mathbb{P}\) on \(\mathbb{R}^{2}=\mathbb{R}\times\mathbb{R}\). This follows from functoriality since if we
define \(\mathbf{add}(y_{1},y_{2})=y_{1}+y_{2}\) then \(f_{1}+f_{2}=\mathbf{add}\circ(f_{1}\times f_{2})\), and the law of the sum is
\[(f_{1}+f_{2})_{*}\mathbb{P}=(\mathbf{add}\circ(f_{1}\times f_{2}))_{*}\mathbb{P }=\mathbf{add}_{*}\left((f_{1}\times f_{2})_{*}\mathbb{P}\right). \tag{15}\]
For Markov kernels \(F_{1},F_{2}:X\to\mathbb{R}\) there is a very similar complication: they do not define a well defined kernel which one would reasonably call "\(F_{1}+F_{2}\)". In order to compute the distribution of the sum, we need a _joint_ Markov kernel, i.e. a kernel \(F_{12}:X\to\mathbb{R}^{2}\) with marginals \(\pi_{1*}F_{12}=F_{1}\) and \(\pi_{2*}F_{12}=F_{2}\), where again \(\pi_{1},\pi_{2}\) are the projections on the first and second coordinate. Such a joint kernel certainly exists since we can always assume \(F_{1}\) and \(F_{2}\) are conditionally independent (i.e. independent for every \(x\)) and take \(F_{12}=F_{1}\otimes F_{2}\) on \(\mathbb{R}^{2}\). However, as mentioned earlier, even though \(F_{12}\) is restricted by the marginals \(F_{1}\) and \(F_{2}\), there is, in general, no uniquely determined choice. _Given_ a joint Markov kernel, however, we have a Markov kernel that by abuse of notation we may call the sum
\[F_{1}+_{F_{12}}F_{2}:=\mathbf{add}_{*}F_{12}. \tag{16}\]
Here the notation \(+_{F_{12}}\) is supposed to indicate that we use the _joint_ Markov kernel \(F_{12}\) to define addition. This notation is a bit silly because \(F_{12}\) already determines \(F_{1}\) and \(F_{2}\), but if no confusion seems possible we write
\[F_{1}+_{12}F_{2}:=F_{1}+_{F_{12}}F_{2}. \tag{17}\]
Since \(F_{12}\), although not uniquely determined by the marginals, may still be substantially restricted, \(F_{1}+_{12}F_{2}\) is restricted as well.
**Example 1**.: _Consider two Markov kernels \(F_{1},F_{2}:\Delta\to\mathbb{R}\) on the interval \(\Delta_{1}=[0,1]\) with values in \(\{\pm 1\}\subset\mathbb{R}\), representing the loss or gain for two parties in two Bernoulli experiments each with a chance \(p\in\Delta\) for heads (\(h\)) and \((1-p)\) for tails (\(t\)). Party 1 wins on heads, Party 2 on tails. What we do not know is the correlation between the two Bernouilly experiments. We then have_
\[F_{1}(dy_{1}|p) =(p\delta_{+1}+(1-p)\delta_{-1})(dy_{1}) \tag{19}\] \[F_{2}(dy_{2}|p) =((1-p)\delta_{+1}+p\delta_{-1})(dy_{2}) \tag{18}\]
_An equivalent way to write down this kernel is as probabilities of the different outcomes_
\[F_{1}(+1|p) =p, F_{2}(+1|p) =(1-p) \tag{21}\] \[F_{1}(-1|p) =1-p F_{2}(-1|p) =p\] (22) \[F_{1}(B|p) =0 F_{2}(B|p) =0\text{ if }\pm 1\notin B \tag{20}\]
_where we used the usual abuse of notation in the probability distributions of discrete sets to write e.g. \(F_{1}(+1|p)=F_{1}(\{-1\}|p)\) which we will continue to
do. The joint Markov kernels can then be parametrised as_
\[F_{12}((+1,+1)|p) =p_{ht}, F_{12}((+1,-1)|p) =p_{hh} \tag{24}\] \[F_{12}((-1,+1)|p) =p_{tt} F_{12}((-1,-1)|p) =p_{th}\] (25) \[F_{12}(B|p) =0\text{ if }(\pm 1,\pm 1)\notin B \tag{23}\]
_where the condition that \(F_{12}\) has \(F_{1}\) and \(F_{2}\) as marginals gives_
\[p_{hh}+p_{ht} =p p_{hh}+p_{th} =p \tag{27}\] \[p_{th}+p_{tt} =1-p p_{ht}+p_{tt} =1-p \tag{26}\]
_with the inequalities \(p_{ab}\geq 0\). The system of equations is equivalent to_
\[p_{hh}+p_{ht}+p_{th}+p_{tt} =1 \tag{29}\] \[p_{hh}+p_{ht} =p\] (30) \[p_{ht}+p_{tt} =1-p \tag{28}\]
_and the solutions are of the form_
\[p_{hh} =q p_{ht} =p-q \tag{32}\] \[p_{th} =p-q p_{tt} =1-2p+q \tag{31}\]
_with \(\max(2p-1,0)\leq q\leq p\). Given this joint kernel, the sum kernel is_
\[(F_{1}+_{12}F_{2})(+2|p) =p_{ht}=p-q \tag{33}\] \[(F_{1}+_{12}F_{2})(0|p) =p_{hh}+p_{tt}=1-2p+2q \tag{34}\] \[(F_{1}+_{12}F_{2})(-2|p) =p_{th}=p-q \tag{35}\]
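Example 1 can be checked numerically: for a given \(p\) and an admissible \(q\) one builds the joint kernel from (31)-(32), verifies the marginals and pushes it forward along \(\mathbf{add}\). The helper names in the sketch below are ours.

```python
import numpy as np

def joint_kernel(p, q):
    """Joint distribution of the two payoffs (+-1, +-1) from (31)-(32)."""
    assert max(2 * p - 1, 0.0) <= q <= p, "q outside the admissible range"
    return {(+1, +1): p - q,          # p_ht
            (+1, -1): q,              # p_hh
            (-1, +1): 1 - 2 * p + q,  # p_tt
            (-1, -1): p - q}          # p_th

def sum_kernel(p, q):
    """Pushforward along add(y1, y2) = y1 + y2, i.e. F1 +_{12} F2 as in (33)-(35)."""
    out = {}
    for (y1, y2), prob in joint_kernel(p, q).items():
        out[y1 + y2] = out.get(y1 + y2, 0.0) + prob
    return out

p, q = 0.6, 0.3
F12 = joint_kernel(p, q)
# Marginals reproduce the Bernoulli kernels F1(+1|p) = p and F2(+1|p) = 1 - p.
assert np.isclose(F12[(+1, +1)] + F12[(+1, -1)], p)
assert np.isclose(F12[(+1, +1)] + F12[(-1, +1)], 1 - p)
print(sum_kernel(p, q))   # {2: p-q, 0: 1-2p+2q, -2: p-q}
```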
What we are going to do for fuzzy logic is much the same except rather than adding numbers, we are going to apply "and" or "or" to boolean values.
## 3. Fuzzy Logic
For a set \(X\), given the belief (confidence) \(p:X\to[0,1]\) in a predicate \(P:X\to\mathbb{B}\) the belief in \(\neg P(x)\) is \(1-p(x)\).
\[(\neg p)=1-p \tag{36}\]
For two properties \(P_{1},P_{2}\) with belief functions \(p_{1},p_{2}\), we consider the belief (confidence) in \(P_{1}(x)\wedge P_{2}(x)\) or equivalently the belief (confidence) in \(x\in X_{P_{1}}\cap X_{P_{2}}\) as a function of \(x\). Several natural choices for a fuzzy "and" that only depend on the values \(p_{1}(x)\) and \(p_{2}(x)\) have been proposed [ATV], in particular:
\[p_{1}\underline{\wedge}p_{2} =\max(0,p_{1}+p_{2}-1) \tag{37}\] \[p_{1}\curlywedge p_{2} =p_{1}p_{2} \tag{38}\] \[p_{1}\overline{\wedge}p_{2} =\min(p_{1},p_{2}). \tag{39}\]
Note that all choices reduce to the usual "and" connective \(p_{1}\wedge p_{2}\), if \(p_{1}\) and \(p_{2}\) only take the value \(0\) or \(1\) and that
\[p\underline{\wedge}q\leq p\curlywedge q\leq p\overline{\wedge}q \tag{40}\]
Likewise consider the belief (confidence) in \(P_{1}(x)\lor P_{2}(x)\) i.e. the confidence in \(x\in X_{P_{1}}\cup X_{P_{2}}\) for which
\[p_{1}\underline{\vee}p_{2} =\max(p_{1},p_{2}) \tag{41}\] \[p_{1}\curlyvee p_{2} =p_{1}+p_{2}-p_{1}p_{2} \tag{42}\] \[p_{1}\overline{\vee}p_{2} =\min(1,p_{1}+p_{2}). \tag{43}\]
has been proposed. Note again that all choices reduce to the usual "or" connective \(p_{1}\lor p_{2}\) if \(p_{1}\) and \(p_{2}\) only take the values \(0\) or \(1\) and that
\[p_{1}\underline{\vee}p_{2}\leq p_{1}\curlyvee p_{2}\leq p_{1}\overline{\vee}p_{2}. \tag{44}\]
We will see later that \(\underline{\wedge}\) and \(\overline{\wedge}\) are extremals of a \(1\)-parameter family of such connectives with \(\curlywedge\) as an intermediate value, and similarly for \(\underline{\vee}\), \(\overline{\vee}\) and \(\curlyvee\).
Given the proposed fuzzy "and" and "or" connectives, the classical de Morgan law
\[\neg(a\lor b)=(\neg a)\wedge(\neg b),\ \forall a,b\in\mathbb{B} \tag{45}\]
for Boolean truth values have fuzzy analogues
\[p_{1}\underline{\vee}p_{2} =\neg((\neg p_{1})\overline{\wedge}(\neg p_{2})) \tag{46}\] \[p_{1}\curlyvee p_{2} =\neg((\neg p_{1})\curlywedge(\neg p_{2})) \tag{47}\] \[p_{1}\overline{\vee}p_{2} =\neg((\neg p_{1})\underline{\wedge}(\neg p_{2})) \tag{48}\]
We will later see that there is a de Morgan law for a whole \(1\)-parameter family with formulas (46), (47) and (48) as special cases.
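The connectives of this section and the identities (40), (44) and (46)-(48) are easy to verify numerically on random belief values; the sketch below (function names are ours) does exactly that.

```python
import numpy as np

neg     = lambda p: 1.0 - p
and_lo  = lambda p1, p2: np.maximum(0.0, p1 + p2 - 1.0)   # underlined wedge (37)
and_ind = lambda p1, p2: p1 * p2                           # curly wedge (38)
and_hi  = lambda p1, p2: np.minimum(p1, p2)                # overlined wedge (39)
or_lo   = lambda p1, p2: np.maximum(p1, p2)                # underlined vee (41)
or_ind  = lambda p1, p2: p1 + p2 - p1 * p2                 # curly vee (42)
or_hi   = lambda p1, p2: np.minimum(1.0, p1 + p2)          # overlined vee (43)

rng = np.random.default_rng(0)
p1, p2 = rng.random(10_000), rng.random(10_000)

# Ordering of the "and" and "or" connectives, inequalities (40) and (44).
assert np.all(and_lo(p1, p2) <= and_ind(p1, p2)) and np.all(and_ind(p1, p2) <= and_hi(p1, p2))
assert np.all(or_lo(p1, p2) <= or_ind(p1, p2)) and np.all(or_ind(p1, p2) <= or_hi(p1, p2))

# De Morgan identities (46)-(48).
assert np.allclose(or_lo(p1, p2),  neg(and_hi(neg(p1), neg(p2))))
assert np.allclose(or_ind(p1, p2), neg(and_ind(neg(p1), neg(p2))))
assert np.allclose(or_hi(p1, p2),  neg(and_lo(neg(p1), neg(p2))))
print("inequalities and de Morgan identities verified on random samples")
```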
## 4. Fuzzy logic connectives and Markov kernels
With the notion of Markov kernel at hand, we can interpret a fuzzy set or fuzzy predicates on a measurable set \((X,\mathcal{A})\) as a predicate kernel \(P:X\to\mathbb{B}\). A belief function \(p:X\to[0,1]\) as in section 3 is equivalent to a predicate kernel with
\[P(\mathbf{T}|x) =p(x) \tag{50}\] \[P(\mathbf{F}|x) =1-p(x). \tag{49}\]
Conversely the kernel predicate defines a belief function in this way. Again we abused notation in the probability distributions of discrete sets to write e.g. \(P(\mathbf{T}|x)\) for \(P(\{\mathbf{T}\}|x)\) here. This interpretation will allow us to think of fuzzy logic as a "shadow" of ordinary logic by suitably applying functoriality. In the process we will quickly find the need to use joint confidence functions or joint kernels. As explained in section 2, the introduction of a \(\sigma\)-algebra of subsets and assuming that \(p\) is measurable is a technical detail. In fact for most of what follows we could take \(\mathcal{A}=\mathcal{P}(X)\), the full power set of \(X\).
### logical connective "not"
As a warmup, consider the belief function of the "not" connective \(\neg\). There is an associated function
\[\mathbf{not}:\mathbb{B} \to\mathbb{B} \tag{52}\] \[t \mapsto\neg t \tag{51}\]
The Markov kernel for \(\neg P:X\to\mathbb{B}\) is then defined functorially on predicate kernels
\[(\neg P):=(\mathbf{not}_{*}P). \tag{53}\]
Its corresponding belief function is
\[(\neg p)(x) =(\neg P)(\mathbf{T}|x) \tag{55}\] \[=(\mathbf{not}_{*}P)(\mathbf{T}|x)\] (56) \[=P(\mathbf{not}^{-1}(\mathbf{T})|x)\] (57) \[=P(\{\mathbf{F}\}|x)\] (58) \[=1-P(\mathbf{T}|x)\] (59) \[=1-p(x) \tag{54}\]
just as expected.
### logical connective "and"
Now, just as the sum of two Markov kernels with values in \(\mathbb{R}\) turned out not to be well defined without a joint kernel, to define a logical "and" connective of predicate Markov kernels \(P_{1}\) and \(P_{2}\) (Markov predicates for short), we need a _joint_ Markov predicate \(P_{12}\) lifting \(P_{1}\) and \(P_{2}\). Spelled out: given Markov predicates \(P_{1},P_{2}:X\to\mathbb{B}\) uniquely defined by \(P_{i}(\mathbf{T}|x)=p_{i}(x)\), we need a joint Markov predicate \(P_{12}:X\to\mathbb{B}^{2}\) with conditional marginals \(\pi_{1*}P_{12}=P_{1}\) and \(\pi_{2*}P_{12}=P_{2}\) where \(\pi_{i}\) is projection onto the \(i\)-th coordinate. Again, such a lift exists because again as a special case of (12) we can take the joint Markov predicate \(P_{12}=P_{1}\otimes P_{2}\) assuming independence of \(P_{1}\) and \(P_{2}\). Here this becomes very explicit:
\[(P_{1}\otimes P_{2})((a,b)|x)=P_{1}(a|x)P_{2}(b|x). \tag{60}\]
However, since \(\mathbb{B}^{2}\) is small we can easily write down the general Markov predicate lifting \(P_{1}\) and \(P_{2}\):
\[P_{12}((\mathbf{F},\mathbf{F})|x) =p_{FF}(x) P_{12}((\mathbf{F},\mathbf{T})|x) =p_{FT}(x) \tag{62}\] \[P_{12}((\mathbf{T},\mathbf{F})|x) =p_{TF}(x) P_{12}((\mathbf{T},\mathbf{T})|x) =p_{TT}(x) \tag{61}\]
where the \(p_{ab}(x)\geq 0\) for all \(x\) and
\[p_{FF}(x)+p_{FT}(x) =1-p_{1}(x) p_{FF}(x)+p_{TF}(x) =1-p_{2}(x) \tag{64}\] \[p_{TF}(x)+p_{TT}(x) =p_{1}(x) p_{FT}(x)+p_{TT}(x) =p_{2}(x). \tag{63}\]
These equations are equivalent to
\[p_{FF}(x)+p_{FT}(x)+p_{TF}(x)+p_{TT}(x) =1 \tag{66}\] \[p_{TF}(x)+p_{TT}(x) =p_{1}(x)\] (67) \[p_{FT}(x)+p_{TT}(x) =p_{2}(x) \tag{65}\]
which has solutions
\[p_{FF}(x) =q(x) p_{TF} =(1-p_{2}(x))-q(x) \tag{69}\] \[p_{FT}(x) =(1-p_{1}(x))-q(x) p_{TT} =p_{1}(x)+p_{2}(x)-1+q(x). \tag{68}\]
Thus in this parametrisation (there can be others) the joint Markov predicate is determined by a "belief function" \(q(x)\) for _both_ predicates to be wrong. The inequalities \(p_{ab}(x)\geq 0\) translate into the inequalities
\[0\leq q_{\min}(x)\leq q(x)\leq q_{\max}(x)\leq 1 \tag{70}\]
for \(q(x)\) where
\[q_{\min}(x) :=\max(0,1-(p_{1}(x)+p_{2}(x))) \tag{72}\] \[q_{\max}(x) :=1-\max(p_{1}(x),p_{2}(x)). \tag{71}\]
We also define
\[q_{\text{indep}}(x):=(1-p_{1}(x))(1-p_{2}(x)). \tag{73}\]
Given the joint Markov predicate we can now _by abuse of notation_ define a Markov predicate \(P_{1}\wedge_{12}P_{2}\) for the "and" connective by functoriality
\[P_{1}\wedge_{12}P_{2}:=P_{1}\wedge_{P_{12}}P_{2}:=\mathbf{and}_{*}P_{12}. \tag{74}\]
As for the sum kernel in (17), the notation is a bit silly because \(P_{1}\) and \(P_{2}\) are already determined by \(P_{12}\), while \(P_{1}\) and \(P_{2}\) do not determine the joint Markov predicate \(P_{12}\). The connective \(P_{1}\wedge_{12}P_{2}\) is completely determined by its belief function
\[(P_{1}\wedge_{12}P_{2})(\mathbf{T}|x) :=(\mathbf{and}_{*}P_{12})(\mathbf{T}|x) \tag{76}\] \[=P_{12}(\mathbf{and}^{-1}(\mathbf{T})|x)\] (77) \[=P_{12}(\{(\mathbf{T},\mathbf{T})\}|x)\] (78) \[=p_{TT}(x)\] (79) \[=p_{1}(x)+p_{2}(x)+q(x)-1 \tag{75}\]
i.e. the belief function for the fuzzy "and" defined by the joint kernel is
\[p_{1}(x)\wedge_{q(x)}p_{2}(x):=(P_{1}\wedge_{12}P_{2})(\mathbf{T}|x)=p_{1}(x)+ p_{2}(x)+q(x)-1. \tag{80}\]
By the inequalities (70) for \(q(x)\) we get
\[\max(0,p_{1}(x)+p_{2}(x)-1)\leq p_{TT}(x)\leq\min(p_{1}(x),p_{2}(x)) \tag{81}\]
and these inequalities are sharp if \(q(x)=q_{\min}(x)\) (for the lower bound), or \(q(x)=q_{\max}(x)\) (for the upper bound). By definitions (37), (39)
\[p_{1}(x)\underline{\wedge}p_{2}(x) =p_{1}(x)\wedge_{q_{\min}}p_{2}(x) \tag{83}\] \[p_{1}(x)\overline{\wedge}p_{2}(x) =p_{1}(x)\wedge_{q_{\max}}p_{2}(x) \tag{82}\]
and so
\[p_{1}(x)\underline{\wedge}p_{2}(x)\leq p_{1}(x)\wedge_{q(x)}p_{2}(x)\leq p_{1} \overline{\wedge}p_{2}(x). \tag{84}\]
In case \(P_{1}\) and \(P_{2}\) are conditionally independent and \(P_{12}=P_{1}\otimes P_{2}\) we have
\[p_{FF}(x) =(1-p_{1}(x))(1-p_{2}(x))=q_{\text{indep}} \tag{86}\] \[p_{TT}(x) =p_{1}(x)p_{2}(x)=p_{1}(x)+p_{2}(x)+q_{\text{indep}}-1 \tag{85}\]
hence comparing with (38) we find
\[p_{1}(x)\curlywedge p_{2}(x)=p_{1}(x)\wedge_{q_{\text{indep}}}p_{2}(x) \tag{87}\]
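As a quick numeric sanity check (our own illustrative sketch, not part of the paper), the following code evaluates the \(q\)-parametrised fuzzy "and" of (80) at the three distinguished values \(q_{\min}\), \(q_{\text{indep}}\) and \(q_{\max}\) from (71)-(73), and confirms the bounds (81)/(84).

```python
def q_min(p1, p2):
    return max(0.0, 1.0 - (p1 + p2))

def q_max(p1, p2):
    return 1.0 - max(p1, p2)

def q_indep(p1, p2):
    return (1.0 - p1) * (1.0 - p2)

def fuzzy_and(p1, p2, q):
    # belief of the "and" connective defined by the joint kernel, eq. (80)
    return p1 + p2 + q - 1.0

p1, p2 = 0.6, 0.7
lo = fuzzy_and(p1, p2, q_min(p1, p2))     # max(0, p1 + p2 - 1) = 0.3
mid = fuzzy_and(p1, p2, q_indep(p1, p2))  # p1 * p2 = 0.42
hi = fuzzy_and(p1, p2, q_max(p1, p2))     # min(p1, p2) = 0.6
assert lo <= mid <= hi                    # the inequalities (84)
```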
### logical connective "or"
We can proceed similarly for the "or" connective. Using the same abuse of notation as for "and" we define a Markov predicate
\[P_{1}\vee_{12}P_{2}:=P_{1}\vee_{P_{12}}P_{2}:=\mathbf{or}_{*}P_{12} \tag{88}\]
with belief function \(p_{1}(x)\vee_{q(x)}p_{2}(x):=P_{1}\vee_{P_{12}}P_{2}(\mathbf{T}|x)\). We then find
\[p_{1}(x)\vee_{q(x)}p_{2}(x):=(\mathbf{or}_{*}P_{12})(\mathbf{T}|x)=1-q(x). \tag{89}\]
By the inequalities (70) for \(q(x)\) we get inequalities
\[\max(p_{1},p_{2})\leq 1-q(x)\leq\min(1,p_{1}(x)+p_{2}(x)) \tag{90}\]
which are sharp if \(q(x)=q_{\max}(x)\) (for the lower bound) and \(q(x)=q_{\min}(x)\) (for the upper bound). By (41) and (43) the inequalities (90) then translate to
\[p_{1}(x)\underline{\vee}p_{2}(x)\leq p_{1}(x)\vee_{q(x)}p_{2}(x)\leq p_{1}(x) \overline{\vee}p_{2}(x). \tag{91}\]
Finally for the conditionally independent case \(P_{12}=P_{1}\otimes P_{2}\) we have
\[(\mathbf{or}_{*}(P_{1}\otimes P_{2}))(\mathbf{T}|x)=1-(1-p_{1})(1-p_{2})=p_{1 }(x)+p_{2}(x)-p_{1}(x)p_{2}(x), \tag{92}\]
and by (42) and (73) we find
\[p_{1}(x)\vee_{q_{\text{indep}}}p_{2}(x)=p_{1}(x)\vee p_{2}(x) \tag{93}\]
### logical implication
We can do exactly the same with other logic connectives in particular implication. For \(a,b\in\mathbb{B}\) we have
\[\mathbf{impl}(a,b)=a\underline{\rightarrow}b=\neg a\lor b=\neg(a\wedge \neg b). \tag{94}\]
We therefore define a Markov predicate
\[P_{1}\mathbf{\rightarrow}_{12}P_{2}:=P_{1}\mathbf{\rightarrow}_{P_{12}}P_{2}: =\mathbf{impl}_{*}P_{12} \tag{95}\]
with a belief function
\[p_{1}(x)\mathbf{\rightarrow}_{q(x)}p_{2}(x)=(\mathbf{impl}_{*}P_{12})(\mathbf{T}|x)=1-p_{TF}(x)=p_{2}(x)+q(x). \tag{96}\]
Now defining
\[p_{1}(x)\underline{\rightarrow}p_{2}(x) =p_{2}(x)+q_{\min}(x)=\max(p_{2}(x),1-p_{1}(x)) \tag{98}\] \[p_{1}(x)\rightsquigarrow p_{2}(x) =p_{2}(x)+q_{\text{indep}}(x)=1-p_{1}(x)+p_{1}(x)p_{2}(x)\] (99) \[p_{1}(x)\overline{\rightarrow}p_{2}(x) =p_{2}(x)+q_{\max}(x)=\min(1,1+p_{2}(x)-p_{1}(x)) \tag{97}\]
we have
\[p_{1}(x)\underline{\to}p_{2}(x)\leq p_{1}(x)\underline{\to}_{q(x)}p_{2}(x)\leq p _{1}(x)\overline{\to}p_{2}(x) \tag{100}\]
### de Morgan's law
It is easy to prove a general fuzzy de Morgan's law using direct computation. We will also give a more conceptual proof showing it is a consequence of the classical de Morgan's law (45) and functoriality. We have the identity
\[q=(1-p_{1})+(1-p_{2})+(p_{1}+p_{2}+q-1)-1=(\neg p_{1})+(\neg p_{2})+(p_{1} \wedge_{q}p_{2})-1. \tag{101}\]
In other words, the general fuzzy analogue of the de Morgan's law (45) is
\[\neg\left(p_{1}(x)\vee_{q(x)}p_{2}(x)\right)=(\neg p_{1}(x))\wedge_{(p_{1}(x) \wedge_{q(x)}p_{2}(x))}(\neg p_{2}(x)) \tag{102}\]
or
\[p_{1}(x)\vee_{q(x)}p_{2}(x)=\neg\left((\neg p_{1}(x))\wedge_{(p_{1}(x)\wedge_{q(x)}p_{2}(x))}(\neg p_{2}(x))\right). \tag{103}\]
For a more conceptual derivation of (102) rewrite the classical Boolean de Morgan's law (45) as
\[\mathbf{not}\circ\mathbf{or}=\mathbf{and}\circ(\mathbf{not}\times\mathbf{not}). \tag{104}\]
Then by functoriality
\[\mathbf{not}_{*}\mathbf{or}_{*} =(\mathbf{not}\circ\mathbf{or})_{*} \tag{106}\] \[=(\mathbf{and}\circ(\mathbf{not}\times\mathbf{not}))_{*}\] (107) \[=\mathbf{and}_{*}(\mathbf{not}\times\mathbf{not})_{*}. \tag{105}\]
Thus we get
\[\mathbf{not}_{*}(\mathbf{or}_{*}P_{12})=\mathbf{and}_{*}\left((\mathbf{not} \times\mathbf{not})_{*}P_{12}\right), \tag{108}\]
and so
\[(\mathbf{not}_{*}(\mathbf{or}_{*}P_{12}))(\mathbf{T}|x)=\mathbf{and}_{*}\left( (\mathbf{not}\times\mathbf{not})_{*}P_{12}\right)(\mathbf{T}|x) \tag{109}\]
But the "q" of the joint kernel \((\mathbf{not}\times\mathbf{not})_{*}P_{12}\) is
\[((\mathbf{not}\times\mathbf{not})_{*}P_{12})((\mathbf{F},\mathbf{F})|x)=P_{12}((\mathbf{T},\mathbf{T})|x)=p_{1}(x)\wedge_{q(x)}p_{2}(x) \tag{110}\]
whereas the marginals "\(p_{1}\)" and "\(p_{2}\)" of \((\mathbf{not}\times\mathbf{not})_{*}P_{12}\) are
\[(\pi_{i*}(\mathbf{not}\times\mathbf{not})_{*}P_{12})(\,\mathbf{T} |x) =(\mathbf{not}_{*}\pi_{i*}P_{12})\,(\mathbf{T}|x) \tag{112}\] \[=(\mathbf{not}_{*}P_{i})(\mathbf{T}|x)\] (113) \[=\neg p_{i}(x) \tag{111}\]
Thus writing the left- and right-hand sides of (109) in terms of belief functions we get de Morgan's law (102).
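A direct numeric check of the parametrised de Morgan's law (102) (again our own illustrative sketch): with \(\wedge_{q}\) and \(\vee_{q}\) given by (80) and (89), negating the "or" reproduces the "and" of the negations with parameter \(p_{1}\wedge_{q}p_{2}\).

```python
def fuzzy_and(p1, p2, q):
    return p1 + p2 + q - 1.0   # eq. (80)

def fuzzy_or(p1, p2, q):
    return 1.0 - q             # eq. (89)

p1, p2, q = 0.6, 0.7, 0.2      # any q between q_min = 0.0 and q_max = 0.3 here
lhs = 1.0 - fuzzy_or(p1, p2, q)
rhs = fuzzy_and(1.0 - p1, 1.0 - p2, fuzzy_and(p1, p2, q))
assert abs(lhs - rhs) < 1e-12  # de Morgan's law (102)
```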
## 5. Systems of logic expressions
It should now be clear that in order to use the Markov predicate interpretation to define fuzzy logic predicates in multiple variables, we require, in general, a joint probability distribution for the logic variables. A set of general logic formulas can be considered as a map \(L:\mathbb{B}^{n}\rightarrow\mathbb{B}^{m}\), \(0\leq n,m<\infty\). In fact a single map \(L_{0}:\mathbb{B}^{n}\rightarrow\mathbb{B}\) can always be considered as evaluating an element \(\tilde{L}_{0}\) in \(\mathbb{B}[t_{1},...,t_{n}]\), the free boolean algebra on \(n\) letters. For notational simplicity consider the \(n=2\) case, the general case being similar. We can define
\[\tilde{L}_{0}(t_{1},t_{2})=L_{0}(\mathbf{T},\mathbf{T})t_{1}t_{2}\lor L_{0}( \mathbf{F},\mathbf{T})\bar{t}_{1}t_{2}\lor L_{0}(\mathbf{T},\mathbf{F})t_{1} \bar{t}_{2}\lor L_{0}(\mathbf{F},\mathbf{F})\bar{t}_{1}\bar{t}_{2} \tag{114}\]
where juxtaposition is \(\wedge\) and \(\bar{t}=\neg t\). Since every logic formula in \(\mathbb{B}[t_{1},...,t_{n}]\) can be written uniquely in normal form as monomials of \(t\)'s and \(\bar{t}\)'s "or"ed together, we see that the map \(L_{0}\) defines \(\tilde{L}_{0}\) and vice versa. We will therefore no longer make a notational difference.
Suppose then that we are given _joint_ Markov predicates \(P_{\{1,...,n\}}:X\rightarrow\mathbb{B}^{n}\) or equivalently a full set of \(2^{n}\)_joint_ belief (confidence) functions
\[p_{a_{1}...a_{n}}(x)=P_{\{1,...,n\}}((a_{1}\dots a_{n})|x) \tag{115}\]
one for each "bitstring" \((a_{1},\dots a_{m})\in\mathbb{B}^{n}\). They are subject to the usual conditions for a probability distribution on a discrete set
\[\sum_{(a_{1},...a_{n})}p_{a_{1},...a_{n}}(x)=1\text{ and }p_{a_{1},...a_{n}}(x) \geq 0\ \forall(a_{1},\dots,a_{n})\in\mathbb{B}^{n}. \tag{116}\]
Thus, we have \(2^{n}-1\) degrees of freedom subject to \(n\) conditions, one for each of the belief (confidence) functions \(p_{1}(x)\dots p_{n}(x)\) from the marginals \(P_{i}=\pi_{i*}P_{\{1,...n\}}\). Of course one can also consider intermediate marginals, e.g. for every pair. By the above \(n=2\) case, these are defined by the belief functions \(p_{i}(x)\) and, for every pair, the belief
\[q_{ij}=\pi_{ij*}P_{\{1,...n\}}((\mathbf{F},\mathbf{F})|x)=q_{ji} \tag{117}\]
that neither is true, giving \(n+n(n-1)/2\) conditions. 4
Footnote 4: Further assumptions that may be natural depending on the application may reduce the degrees of freedom further. E.g. if we can assume conditional independence for \(P_{1},\dots,P_{n}\) we have \(p_{a_{1}...a_{n}}(x)=\prod_{i=1}^{n}p_{a_{i}}(x)\). Invariance of the joint Markov predicate under permutations ensures that \(p_{a_{1}...a_{n}}\) only depends on the number of \(\mathbf{T}\)’s in the “bitstring” \(a_{1},\dots,a_{n}\), giving \((n+1)-1=n\) degrees of freedom, with \(1\) condition for the marginal \(p_{1}(x)=\dots=p_{n}(x)\).
In any case, by functoriality, given a system of joint Markov predicates \(P_{\{1,...,n\}}:X\rightarrow\mathbb{B}^{n}\) and a system of logic formulas \(L:\mathbb{B}^{n}\rightarrow\mathbb{B}^{m}\) we can define a fuzzy version of \(L\) applied to \(P_{\{1,...,n\}}\) as a joint Markov predicate \(X\rightarrow\mathbb{B}^{m}\)
\[L_{1...n}(P_{1},\dots P_{n}):=L_{P_{1...n}}(P_{1},\dots P_{n}):=L_{*}P_{1...n} \tag{118}\]
Again it is a silly _abuse of notation_ to pretend it is a fuzzy logic predicate of the Markov predicates \(P_{1},\dots P_{n}\) separately. The Markov predicate is completely determined by the joint belief (confidence) functions
\[p_{L,a_{1}\dots a_{m}}(x)=(L_{*}P_{\{1,\dots,n\}})((a_{1}\dots a_{m})|x) \tag{119}\]
one for each "bitstring" \((a_{1},\dots a_{m})\in{\mathbb{B}}^{m}\), which are likewise subject to the conditions for a probability distribution \(\sum_{(a_{1},\dots a_{m})}p_{L,a_{1},\dots a_{m}}(x)=1\) and \(p_{L,a_{1},\dots a_{m}}(x)\geq 0\).
### Fuzzy quantification
The general setup, assuming \(n,m\) finite, still does not capture all natural examples. In particular we want to use the logical quantifiers \(\exists_{i\in I}P_{i}=\vee_{i\in I}P_{i}\) and \(\forall_{i\in I}P_{i}=\wedge_{i\in I}P_{i}\). In the case of finite sets this is covered by the above, but already in the case of countable sets we have an extension. However, in the countable case quantifiers still define a map \(\mathbb{B}^{\mathbb{N}}\to\mathbb{B}\). The space \(\mathbb{B}^{\mathbb{N}}\) is a (non-trivial) measurable space 5 and we study measurable "logical connectives": \(\mathbb{B}^{\mathbb{N}}\to\mathbb{B}\). Unsurprisingly, \(\exists\) and \(\forall=\neg\circ\exists\circ(\neg^{\mathbb{N}})\) are measurable. We will only discuss the \(\exists\) quantifier, to which the \(\forall\) quantifier can be reduced using de Morgan's law or treated similarly, _mutatis mutandis_. See also [LK]. Our goal will be to derive sensible lower and upper bounds for its belief function that only require finite marginals and are therefore easier to handle.
Footnote 5: the measurable sets are the \(\sigma\)-algebra generated by the inverse images of subsets of \({\mathbb{B}}^{M}\), \(M<\infty\) under the projection \({\mathbb{B}}^{\mathbb{N}}\to B^{M}\)
Consider the countably infinite case first
\[\mathbf{or}_{\mathbb{N}}(t_{1},t_{2},\dots)=\vee_{i\in\mathbb{N}}t_{i} \tag{120}\]
with, for each \(I\subset\mathbb{N}\), a counterpart \(\mathbf{or}_{I}((t_{i})_{i\in I})=\vee_{i\in I}t_{i}\). A joint Markov predicate \(P_{\mathbb{N}}:X\to\mathbb{B}^{\mathbb{N}}\) gives marginals \(P_{I}:X\to\mathbb{B}^{I}\) and for \(I_{1}\subseteq I_{2}\subseteq\mathbb{N}\)
\[(\mathbf{or}_{I_{1}*}P_{I_{1}})(\mathbf{T}|x) =1-P_{I_{1}}((\mathbf{F})_{i\in I_{1}}|x) \tag{122}\] \[\leq 1-P_{I_{2}}((\mathbf{F})_{i\in I_{2}}|x)\] (123) \[=(\mathbf{or}_{I_{2}*}P_{I_{2}})(\mathbf{T}|x). \tag{121}\]
In particular from the subsets \(I=\{i\}\) we get the lower bound.
\[\sup_{i}P_{i}(\mathbf{T}|x)=\sup_{i}p_{i}(x)\leq(\mathbf{or}_{\mathbb{N}*}P_{\mathbb{N}})(\mathbf{T}|x) \tag{124}\]
and for \(I=\{i,j\}\) the lower bound
\[\sup_{ij}(\mathbf{or}_{ij*}P_{ij})(\mathbf{T}|x)=\sup_{ij}p_{i}(x)\vee_{q_{ij}(x)}p_{j}(x)\leq(\mathbf{or}_{\mathbb{N}*}P_{\mathbb{N}})(\mathbf{T}|x) \tag{125}\]
A sensible upper bound is a little harder to come by. Of course we have the trivial bound
\[(\mathbf{or}_{\mathbb{N}*}P_{\mathbb{N}})(\mathbf{T}|x)\leq 1 \tag{126}\]
We get potentially better bounds as follows: Define
\[C_{I}=\{t_{i}={\bf{T}}\ \text{for some}\ i\in I\}\subset{\mathbb{B}}^{\mathbb{N}}. \tag{127}\]
Starting from a disjoint partition \(\mathbb{N}=\coprod_{n}I_{n}\) we then have
\[\mathbb{B}^{\mathbb{N}}=\{(\mathbf{F},\mathbf{F},\ldots)\}\coprod\cup_{n}C_{I_{n}} \tag{128}\]
and therefore
\[1=P_{\mathbb{N}}(\mathbb{B}^{\mathbb{N}}|x) \leq\mathbb{P}_{\mathbb{N}}((\mathbf{F},\mathbf{F},\ldots)|x)+\sum _{n}P_{\mathbb{N}}(C_{I_{n}}|x) \tag{130}\] \[=\mathbb{P}_{\mathbb{N}}((\mathbf{F},\mathbf{F},\ldots)|x)+\sum_{ n}P_{I_{n}}(C_{I_{n}}|x) \tag{129}\]
where we used \(C_{I_{n}}\) for both \(\{t_{i}=\mathbf{T}\text{ for some }i\in I_{n}\}\subset\mathbb{B}^{I_{n}}\) and \(\{t_{i}=\mathbf{T}\text{ for some }i\in I_{n}\}\subset\mathbb{B}^{\mathbb{N}}\). We then have
\[(\mathbf{or}_{\mathbb{N}*}P_{\mathbb{N}})(\mathbf{T}|x)\leq\sum_{n}(\mathbf{ or}_{I_{n}*}P_{I_{n}})(\mathbf{T}|x) \tag{131}\]
in particular for \(I_{n}=\{n\}\)
\[(\mathbf{or}_{\mathbb{N}*}P_{\mathbb{N}})(\mathbf{T}|x)\leq\sum_{n}\mathbb{ P}_{\{n\}}(\mathbf{T}|x)=\sum_{n}p_{n}(x) \tag{132}\]
and for \(I_{n}=\{i_{n},j_{n}\}\)
\[(\mathbf{or}_{\mathbb{N}*}P_{\mathbb{N}})(\mathbf{T}|x)\leq\inf_{\mathbb{N}=\cup_{n}\{i_{n},j_{n}\}}\sum_{n}p_{i_{n}}(x)\vee_{q_{i_{n}j_{n}}(x)}p_{j_{n}}(x) \tag{133}\]
Of course these bounds can be rather poor. Better bounds can be obtained from decomposing \(\mathbb{B}^{\mathbb{N}}\) in disjoint pieces. E.g. we can decompose as
\[\mathbb{B}^{\mathbb{N}}=\{(\mathbf{F},\mathbf{F},\ldots)\}\coprod C_{1}\coprod\coprod_{n\geq 2}C_{n}\setminus C_{n-1}. \tag{134}\]
However, we can then no longer reduce to marginal joint Markov predicates involving only a finite number of predicates. Additional assumptions, such as a Markov predicate for predicate \(n\) conditional on the values of predicates \(1,\ldots,n-1\), would determine the \(P_{\mathbb{N}}(C_{n}\setminus C_{n-1}|x)\), but we will not pursue this further. Finally, the finite marginals do determine the infinite "or": defining \(C_{M}=C_{\{1,\ldots,M\}}\), we have
\[C_{\mathbb{N}}=\cup_{M}C_{M} \tag{135}\]
and therefore
\[(\mathbf{or}_{\mathbb{N}*}P_{\mathbb{N}})(\mathbf{T}|x)=P_{\mathbb{N}}(C_{ \mathbb{N}}|x)=\lim_{M\to\infty}P_{\mathbb{N}}(C_{M}|x)=\lim_{M\to\infty}( \mathbf{or}_{M*}P_{\{1\ldots,M\}})(\mathbf{T}|x) \tag{136}\]
by the monotone convergence theorem.
Where this countably infinite case shows up is in trying to quantify over the (not necessarily countable) set \(X\), i.e. making fuzzy sense of a classical predicate expression like
\[\exists x\in X:P(x)=\vee_{x\in X}P(x) \tag{137}\]
From the above, for a sensible fuzzy definition we should start with a lift to a joint Markov predicate that lifts the (possibly uncountably many)
\(P(\mathbf{T}|x)_{x\in X}\). Since \(P(\mathbf{T}|x)\) is a measurable function of \(x\), it is the supremum of a countable sequence of simple functions, each taking on only a finite number of values; quantification over \(X\) can thus be approached through countable sequences of points. This suggests that we should consider a lift
\[P_{\mathbb{N}}:X^{\mathbb{N}}\to\mathbb{B}^{\mathbb{N}} \tag{138}\]
such that
\[(\pi_{i*}P_{\mathbb{N}})(\mathbf{T}|(x_{1},x_{2},\ldots))=P_{\mathbb{N}}(\{t_{i}=\mathbf{T}\}|(x_{1},x_{2},\ldots))=P(\mathbf{T}|x_{i}) \tag{139}\]
Again such a lift can be constructed using a conditional independence construction, but is not necessarily unique. We can then follow the above and define the most general fuzzy version of the \(\exists\) quantification as
\[p_{\exists x:P(x)}=\sup_{(x_{1},x_{2},\ldots)\in X^{\mathbb{N}}}(\mathbf{or}_ {\mathbb{N}*}P_{\mathbb{N}})(\mathbf{T}|(x_{1},x_{2},\ldots)) \tag{140}\]
We can now play the game of getting useful bounds by going to marginals. The case \(n=1\) gives
\[\sup_{x\in X}p(x)=\sup_{(x,x_{2},\ldots)}(\pi_{1*}P_{\mathbb{N}})(\mathbf{T}|(x,x_{2},\ldots))\leq p_{\exists x:P(x)} \tag{142}\] \[\sum_{x\in X}p(x)=\sup_{(x_{1},x_{2},\ldots)}\sum_{i}(\pi_{i*}P_{\mathbb{N}})(\mathbf{T}|(x_{1},x_{2},\ldots))\geq p_{\exists x:P(x)} \tag{141}\]
The \(n=2\) case is more interesting. Spelled out, it means that in addition to the belief function \(p(x)\) we have a measurable function \(q(x_{1},x_{2}):X^{2}\to[0,1]\) such that \(q_{\min}(p(x_{1}),p(x_{2}))\leq q(x_{1},x_{2})\leq q_{\max}(p(x_{1}),p(x_{2}))\), which together with the belief function \(p(x)\) defines a lift \(P_{1,2}:X^{2}\to\mathbb{B}^{2}\) such that (compare (68), (69))
\[p_{FF}(x_{1},x_{2}) =q(x_{1},x_{2}) p_{FT}(x_{1},x_{2}) =(1-p(x_{1}))-q(x_{1},x_{2}) \tag{144}\] \[p_{TF}(x_{1},x_{2}) =(1-p(x_{2}))-q(x_{1},x_{2}) p_{TT}(x_{1},x_{2}) =p(x_{1})+p(x_{2})-1+q(x_{1},x_{2}). \tag{143}\]
We then have upper and lower bounds
\[\sup_{(x_{1},x_{2})\in X^{2}}p(x_{1})\vee_{q(x_{1},x_{2})}p(x_{2} )\leq p_{\exists x:P(x)} \tag{146}\] \[\sup_{(x_{1},x_{2},\ldots)\in X^{\mathbb{N}}}\inf_{\mathbb{N}= \cup_{n}\{i_{n},j_{n}\}}\sum_{n}p(x_{i_{n}})\vee_{q(x_{i_{n}},x_{jn})}p(x_{j_ {n}})\geq p_{\exists x:P(x)} \tag{145}\]
A final note: the way we defined the fuzzy existential quantifier fails to take into account the way we sample the space \(X\), or more precisely \(X^{\mathbb{N}}\). Fortunately this is easy to incorporate in the Markov kernel framework and in fact makes things simpler. Choose a probability distribution \(\mathbb{P}:\star\to X^{\mathbb{N}}\), i.e. a sampling strategy. Then the composition
\[\star\stackrel{{\mathbb{P}}}{{\to}}X^{\mathbb{N}}\stackrel{{ P_{\mathbb{N}}}}{{\to}}\mathbb{B}^{\mathbb{N}}\stackrel{{ \mathbf{or}_{\mathbb{N}}}}{{\to}}\mathbb{B} \tag{147}\]
is a probability distribution on \(\mathbb{B}\), which gives a single number \(p_{\mathbb{P}}\) for the probability of \(\mathbf{T}\)
\[p_{\mathbb{P}} =\int_{X^{\mathbb{N}}}(\mathbf{or}_{\mathbb{N}*}P_{\mathbb{N}})(\mathbf{T}|(x_{1},x_{2},\ldots))\,\mathbb{P}(d(x_{1},x_{2},\ldots)) \tag{149}\] \[=\sum_{(a_{1},a_{2},\ldots):\,\vee_{i}a_{i}=\mathbf{T}}\int_{X^{\mathbb{N}}}P_{\mathbb{N}}((a_{1},a_{2},\ldots)|(x_{1},x_{2},\ldots))\,\mathbb{P}(d(x_{1},x_{2},\ldots)) \tag{148}\]
This number has a different interpretation than existence: it is the expected confidence to find a tuple \((x_{1},x_{2},\ldots)\) such that the predicate \(P\) is true for at least one \(x_{i}\), _given our sampling strategy_ \(\mathbb{P}\). It does give a lower bound
\[p_{\mathbb{P}}\leq\operatorname{ess\,sup}_{\mathbb{P}}(\mathbf{or}_{\mathbb{N}*}P_{\mathbb{N}})(\mathbf{T}|(x_{1},\ldots))\leq\sup(\mathbf{or}_{\mathbb{N}*}P_{\mathbb{N}})(\mathbf{T}|(x_{1},\ldots))=p_{\exists x:P(x)} \tag{150}\]
and may well be more useful than \(p_{\exists x:P(x)}\) for many purposes.
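Under the conditional-independence lift \(P_{\mathbb{N}}=\bigotimes_{i}P(\cdot|x_{i})\), the integrand defining \(p_{\mathbb{P}}\) above is \(1-\prod_{i}(1-p(x_{i}))\), so \(p_{\mathbb{P}}\) can be estimated by plain Monte Carlo over the sampling strategy \(\mathbb{P}\). The sketch below is our own illustration; the belief function and the sampler are placeholders, not objects from the paper.

```python
import random

def p(x):
    """Placeholder belief function P(T|x) on X = [0, 1]."""
    return max(0.0, min(1.0, x))

def sample_tuple(n=20):
    """Placeholder sampling strategy: truncate the sequence to n i.i.d. uniform draws."""
    return [random.random() for _ in range(n)]

def or_belief(xs):
    # belief that the predicate holds for at least one x_i, assuming conditional independence
    prod = 1.0
    for x in xs:
        prod *= (1.0 - p(x))
    return 1.0 - prod

estimate = sum(or_belief(sample_tuple()) for _ in range(10_000)) / 10_000
print(estimate)  # Monte Carlo estimate of p_P for this particular sampler
```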
## 6. Conclusion
We have described fuzzy logic as a functorial "shadow" of ordinary boolean logic. All the subtlety comes from applying boolean logic not to boolean predicates but to boolean-valued Markov kernels (Markov predicates) that express the confidence in the truth of a predicate. However, this means that logic formulas become sensitive to the joint confidence in the predicates. We have described how this works out in the simple cases of a binary logic relation and arbitrary finite relations, and for simple infinite ones like the existential quantifier.
## 7. Acknowledgements
The author is grateful for useful discussions with Prof. Uzay Kaymak and Dr. Hans Onvlee.
|
2305.18921 | Large Car-following Data Based on Lyft level-5 Open Dataset: Following
Autonomous Vehicles vs. Human-driven Vehicles | Car-Following (CF), as a fundamental driving behaviour, has significant
influences on the safety and efficiency of traffic flow. Investigating how
human drivers react differently when following autonomous vs. human-driven
vehicles (HV) is thus critical for mixed traffic flow. Research in this field
can be expedited with trajectory datasets collected by Autonomous Vehicles
(AVs). However, trajectories collected by AVs are noisy and not readily
applicable for studying CF behaviour. This paper extracts and enhances two
categories of CF data, HV-following-AV (H-A) and HV-following-HV (H-H), from
the open Lyft level-5 dataset. First, CF pairs are selected based on specific
rules. Next, the quality of raw data is assessed by anomaly analysis. Then, the
raw CF data is corrected and enhanced via motion planning, Kalman filtering,
and wavelet denoising. As a result, 29k+ H-A and 42k+ H-H car-following
segments are obtained, with a total driving distance of 150k+ km. A diversity
assessment shows that the processed data cover complete CF regimes for
calibrating CF models. This open and ready-to-use dataset provides the
opportunity to investigate the CF behaviours of following AVs vs. HVs from
real-world data. It can further facilitate studies on exploring the impact of
AVs on mixed urban traffic. | Guopeng Li, Yiru Jiao, Victor L. Knoop, Simeon C. Calvert, J. W. C. van Lint | 2023-05-30T10:24:57Z | http://arxiv.org/abs/2305.18921v2 | # Large Car-following Data Based on Lyft level-5 Open Dataset: Following Autonomous Vehicles vs. Human-driven Vehicles
###### Abstract
Car-Following (CF), as a fundamental driving behaviour, has significant influences on the safety and efficiency of traffic flow. Investigating how human drivers react differently when following autonomous vs. human-driven vehicles (HV) is thus critical for mixed traffic flow. Research in this field can be expedited with trajectory datasets collected by Autonomous Vehicles (AVs). However, trajectories collected by AVs are noisy and not readily applicable for studying CF behaviour. This paper extracts and enhances two categories of CF data, HV-following-AV (H-A) and HV-following-HV (H-H), from the open Lyft level-5 dataset. First, CF pairs are selected based on specific rules. Next, the quality of raw data is assessed by anomaly analysis. Then, the raw CF data is corrected and enhanced via motion planning, Kalman filtering, and wavelet denoising. As a result, 29k+ H-A and 42k+ H-H car-following segments are obtained, with a total driving distance of 150k+ km. A diversity assessment shows that the processed data cover complete CF regimes for calibrating CF models. This open and ready-to-use dataset provides the opportunity to investigate the CF behaviours of following AVs vs. HVs from real-world data. It can further facilitate studies on exploring the impact of AVs on mixed urban traffic.
Car-following, trajectory dataset, autonomous vehicle, driving behaviour
## I Introduction
Autonomous vehicles (AVs) have been rapidly developing in recent years, bringing potential benefits such as enhancing traffic safety [1], reducing congestion [2], and increasing mobility accessibility [3]. However, the extent of these improvements remains unclear, which depends not only on the performance of AVs but also on human drivers' reactions to AVs. Clarifying the impact of AVs on human driving behaviour is thus crucial for safe and efficient integration of AVs into transportation systems [4, 5].
Car-following (CF), which refers to one vehicle following another, is the most common driving behaviour. CF plays a critical role in maintaining smooth traffic flow and reducing congestion [6, 7]. The presence of AVs may reshape CF behaviours and thus mixed traffic flow. How an AV follows its leading vehicle is determined by the specific driving algorithm, which is continuously being improved. Therefore, it is more important to examine how human-driven vehicles (HVs) react differently when following an AV vs. an HV.
In the literature, the influence of AVs on the CF behaviours of human drivers has mainly been studied through field experiments, driving simulators, and real-world AV datasets. In field experiments, participants are asked to follow a real or seemingly-real AV in different scenarios [8, 9, 10]. Similar experiments can also be carried out in a virtual environment by using driving simulators [11]. Field tests and simulations are controllable so researchers can focus on specific points of interest. However, due to cost limitations, these two approaches cannot provide comprehensive and large data covering diverse scenarios.
Recently, the release of autonomous driving datasets, such as Waymo [12], nuScenes [13], and Lyft5 [14], has enabled researchers to study AVs' impacts on traffic with real-world data. Hu et al. [15] offer the first attempt to process a CF dataset from the Waymo dataset. However, because AVs are not marked in the entire dataset of Waymo, only 274 HV-following-AV (H-A) pairs and 1032 HV-following-HV (H-H) pairs are extracted. The limited number of samples leads to contradictory findings. For example, Wen et al. [16] conclude that, compared with H-H, H-A has lower driving volatility, smaller time headways, and higher Time-to-Collision (TTC); while Hu et al. [17] found no significant difference between H-H and H-A, except for smaller spacing during congestion. To reduce the biases when using small datasets, a larger, balanced, and comparative CF dataset comprising both scenarios is indispensable.
In this paper, the Lyft level-5 open dataset is processed. We select, assess, and enhance 29k+ HV-following-AV pairs and 42k+ HV-following-HV pairs in similar environments. The dataset covers diverse CF regimes, and the enhanced trajectories provide smooth, ready-to-use motion information for CF model calibration/training. The contribution of this paper is twofold. First, we propose a processing procedure and demonstrate its validity. Second, the processed data is openly shared as the first large CF dataset that allows comparing the behaviours of following AV vs. HV. It is expected to help better evaluate the influence of AVs in mixed traffic. The dataset is available at [https://github.com/RomainLITUD/Car-Following-Dataset-HV-vs-AV](https://github.com/RomainLITUD/Car-Following-Dataset-HV-vs-AV), including detailed instructions on reading and filtering desired CF pairs.
## II Lyft level-5 dataset
### _Dataset description_
The Lyft level-5 dataset [14] is a large-scale dataset of high-resolution sensor data collected by a fleet of 20 self-driving cars. The dataset includes 1000+ hours of perception and motion data collected over a 4-month period from urban and suburban environments along a fixed route in Palo Alto, California. The route is shown in Fig.1.
The motion prediction dataset comprises about 170,000 scenes, with each scene spanning approximately \(25\,\mathrm{s}\).
These scenes may be collected continuously or intermittently. For each scene, the track ids of agents are re-numbered (from 1). Each scene includes the movement states (e.g. position, yaw angle, size, speed) of perceived vehicles, cyclists, and pedestrians, as well as the position and orientation of the AV. Information about the driving environment, including high-definite maps and traffic light status, is also provided. The dataset is available from the website: [https://woven.toyota/en/prediction-dataset](https://woven.toyota/en/prediction-dataset). The material used in this paper includes the full training and validation datasets, the semantic map file, and the python toolkit l5kit ([https://woven-planet.github.io/l5kit/](https://woven-planet.github.io/l5kit/)) provided by Lyft developers.
### _Data Processing Framework_
The flowchart in Fig.2 presents the procedure of CF data selection, assessment, and enhancement. In section III, CF pairs and their raw trajectories are first selected from the unlabelled dataset based on certain rules. Next, the raw data quality is assessed from the perspective of anomaly analysis. In section IV, the raw data is enhanced. For AVs (only position \(x\) is given) and HVs (speed \(v\) and position \(x\) are given), we use two different methods to fill in the missing segments and estimate smooth \(x\) and \(v\). The acceleration \(a\) is estimated by the combination of the Kalman Filter and wavelet denoising methods. After the processing procedure, in section V, the enhanced data is assessed again, including both anomaly analysis and the diversity of CF regimes. In the following sections, we will introduce each step in detail.
## III CF pair selection and assessment
### _CF pair selection_
The first step is identifying CF pairs from unlabelled data. Previous studies about the rules of extracting CF pairs have been reviewed by Wen et al. [16]. We refer the readers to this recent paper for more details.
Considering the large size of the dataset (1000+ hours), we make two groups of rules and propose a two-step procedure. The first step is quick screening. The entire dataset is scanned second by second (every 1-5 seconds) based on the first group of rules to roughly identify possible CF events. Next, the second group of rules rigorously checks each CF event frame by frame. The rules are listed in Table.I. The extracted CF pairs are categorized based on the type of the leading vehicle (AV or HV). Statistics of the CF dataset are presented in Table.II. The total duration of this CF dataset spans 460+ hours, covering a total distance of 15,000+ km.
For each CF pair, the initial position of the leading vehicle is set as the origin. The straight road lane defines the driving direction. Because the angle between the yaw of the vehicles and the lanes is small (as stated in Rules 2.1 and 2.2), it is reasonable to assume that the lateral movement is
Fig. 1: The road maps of Palo Alto, California (in meters). The blue lines mark the fixed route in the Lyft level-5 dataset.
Fig. 2: Flowchart of the data processing.
independent of the car-following behaviours. Therefore, this paper will only focus on the longitudinal movement going forward.
### _Raw CF data assessment_
Calibrating CF models requires highly accurate and consistent position, speed, and acceleration data. It is necessary to first assess the quality of the raw CF data. For HV (either in H-A or H-H dataset), we assess (1) anomalies that violate constraints of vehicle kinematics, and (2) missing data. For AV, because only \(x\) is given, the anomalies of vehicle kinematics are assessed only. According to Punzo et al. [18], 3 constraints on acceleration \(a\) and jerk \(j\) must be satisfied:
* \(a\in[-8,5]\ \mathrm{m/s^{2}}\)
* \(j\in[-15,15]\ \mathrm{m/s^{3}}\)
* The jerk's sign cannot be inverse more than once in \(1\,\mathrm{s}\) (denoted as _Jerk Sign Inversion_, JSI).
For HV, \(a\) and \(j\) are directly computed by the 1st- and 2nd-order differentiation of \(v\) (\(v\)-based), while for AV, \(a\) and \(j\) are derived from the 2nd- and 3rd-order differentiation of \(x\) (\(x\)-based). Violating one of the 3 constraints is identified as an anomaly. The result is shown in Table.III, which is also compared with the CF data in the Waymo dataset (evaluated in [15]). The \(v\)-based jerk and JSI anomaly proportion for HVs in Lyft are on-par with the Waymo dataset. For \(x\)-based AVs, Lyft shows higher quality than Waymo (lower anomaly proportion).
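A minimal sketch of these three checks is given below. This is our own illustration rather than the authors' code; the variable names are ours, and the \(0.1\,\mathrm{s}\) sampling step follows the dataset description given later in the paper.

```python
import numpy as np

def anomaly_flags(speed, dt=0.1):
    """Flag samples violating the acceleration, jerk, and jerk-sign-inversion constraints."""
    a = np.gradient(speed, dt)                   # v-based acceleration
    j = np.gradient(a, dt)                       # jerk
    bad_a = (a < -8.0) | (a > 5.0)               # a outside [-8, 5] m/s^2
    bad_j = np.abs(j) > 15.0                     # |j| above 15 m/s^3
    # jerk sign inversion: more than one sign change within any sliding 1 s window
    changes = np.abs(np.diff(np.sign(j))) > 0
    win = int(round(1.0 / dt))
    bad_jsi = np.array([changes[max(0, k - win):k].sum() > 1
                        for k in range(1, len(changes) + 1)])
    return bad_a, bad_j, bad_jsi
```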
Next, abnormal segments are further investigated. Fig.3 shows an example that compares the given speed and \(x\)-based speed of an HV. The result demonstrates that the speed provided by Lyft is smoothed from the measurements of position. However, there also exist some errors. At the beginning and the end of each scene (\(25\,\mathrm{s}\) duration), there is a 0-speed value, which seems to have been added artificially and was not excluded in smoothing. We guess that these 0-values are perhaps used to assist in segmenting scenes (for deep-learning purposes) but were for some reason not removed. Therefore, the first 0.5 to \(1.5\,\mathrm{s}\) of data in these scenes are unreliable.
The assessment of the raw CF data is summarized as follows:
* The given position data of AVs is of high quality.
* Data are missing at the beginning and end of each scene.
* For both AVs and HVs, acceleration and jerk derived by differentiation are not smooth enough for calibrating CF models.
In the next section, the raw data will be enhanced to address these problems.
## IV CF Data enhancement
### _Missing data filling_
Before further processing raw HV data, the missing part around the 0-speed points must be filled in first. In this study, we remove the speed and position data within \(1.5\,\mathrm{s}\) of the last 0-value timestamp. This segment of motion is estimated by the polynomial of degree seven (7-DOP) jerk-minimization method. The position of the vehicle is assumed to have the form of a polynomial of degree seven with 8 unknown coefficients:
\[x(t)=\sum_{i=0}^{7}p_{i}t^{i} \tag{1}\]
The initial and final positions, speeds, and accelerations (derived from speed) pose 6 boundary conditions (constraints). The objective function to minimize is the jerk in the duration \(T\):
\[J=\int_{0}^{T}[x^{\prime\prime\prime}(t)]^{2}dt \tag{2}\]
which is a typical quadratic programming problem that can be easily solved. Notice that only the position and speed are estimated. The acceleration data will be derived later.
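A minimal sketch of this constrained jerk minimisation is shown below. It is our own illustration of the idea, not the authors' implementation: it solves the quadratic programme numerically with a generic optimiser instead of in closed form, and all function and variable names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def min_jerk_fill(T, x0, v0, a0, xT, vT, aT):
    """Fit x(t) = sum_i p_i t^i (degree 7) minimising the integrated squared jerk."""
    def basis(t, order):
        # order-th derivative of the monomial basis [1, t, ..., t^7] evaluated at t
        return np.array([np.prod(range(i - order + 1, i + 1)) * t ** (i - order)
                         if i >= order else 0.0 for i in range(8)])

    # 6 boundary conditions: position, speed, acceleration at t = 0 and t = T
    A = np.vstack([basis(0.0, k) for k in range(3)] + [basis(T, k) for k in range(3)])
    b = np.array([x0, v0, a0, xT, vT, aT])

    ts = np.linspace(0.0, T, 200)
    def jerk_cost(p):
        jerk = np.array([basis(t, 3) @ p for t in ts])
        return float(np.mean(jerk ** 2)) * T        # approximates the integral in (2)

    res = minimize(jerk_cost, np.zeros(8),
                   constraints=[{"type": "eq", "fun": lambda p: A @ p - b}])
    return res.x                                    # coefficients p_0, ..., p_7 of (1)
```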
One example of the estimation results is shown in Fig.4. The time interval is not always uniform. Clearly, the 0-value influences the given speed at around \(3.0\,\mathrm{s}\) (the red line). The removed segment is filled in by the black-star curves (the timestamp series are not changed), which is a 6-degree polynomial. We see that the estimated speed profile is smoother due to the vehicle's kinematic constraints.
Fig. 4: Use the 7-DOP method to fill in the incorrect data segment.
Fig. 3: Comparison between the speed data and the position-derived speed.
### _Kalman filtering for speed estimation_
Because the speed of AVs is not provided, to facilitate the following acceleration estimation and smoothing steps, we need to derive the speed of AVs from position data first. The well-known Kalman Filter (KF) [19] is used. In this Kalman Filter, we employ the constant-speed model. For each time interval (mostly \(\Delta t=0.1\,\mathrm{s}\)), the state transition equation is:
\[\begin{bmatrix}x(t_{i+1})\\ v(t_{i+1})\end{bmatrix}=\begin{bmatrix}1&\Delta t_{i}\\ 0&1\end{bmatrix}\cdot\begin{bmatrix}x(t_{i})\\ v(t_{i})\end{bmatrix} \tag{3}\]
The given position and its differentiation are regarded as the measurement of \(x\) and \(v\). The process covariance matrix \(\mathbf{Q}_{1}\) and the measurement covariance matrix \(\mathbf{R}_{1}\) control the trade-off between accuracy and smoothness. Considering the range of acceleration in Eq.III-B, their values are set as follows by trial and error:
\[\mathbf{Q}_{1}=\begin{bmatrix}0.2&0\\ 0&0.8\end{bmatrix}^{2},\;\;\;\mathbf{R}_{1}=\begin{bmatrix}0.5&0\\ 0&1.1\end{bmatrix}^{2} \tag{4}\]
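A minimal constant-speed Kalman filter along the lines of (3)-(4) could look as follows. This is our own sketch, not the authors' code; the covariance values are taken from the text, everything else is an assumption for illustration.

```python
import numpy as np

def constant_speed_kf(x_meas, t, Q=np.diag([0.2, 0.8]) ** 2, R=np.diag([0.5, 1.1]) ** 2):
    """Estimate smooth position/speed from raw positions with a constant-speed model."""
    dts = np.diff(t)
    v_meas = np.diff(x_meas) / dts                 # differentiated speed used as a measurement
    state = np.array([x_meas[0], v_meas[0]])
    P, H = np.eye(2), np.eye(2)                    # both x and v are "measured"
    states = [state.copy()]
    for k, dt in enumerate(dts):
        F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition, eq. (3)
        state, P = F @ state, F @ P @ F.T + Q      # predict
        z = np.array([x_meas[k + 1], v_meas[k]])
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        state = state + K @ (z - H @ state)        # update
        P = (np.eye(2) - K @ H) @ P
        states.append(state.copy())
    return np.array(states)                        # columns: smoothed x and v
```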
The next step is estimating the acceleration. In principle, this can also be accomplished by a Kalman filter that gives both speed and acceleration. However, tuning the process and measurement covariance is difficult and usually the acceleration covariance changes with time for a CF event. Therefore, we propose a 2-step method. First, continuing in this subsection, a second KF is applied for both AVs and HVs to derive an over-smoothed acceleration. Next in subsection IV-C, wavelet denoising will be used to smooth the \(v\)-based (differentiation) acceleration. For the second KF, the state transition equation is:
\[\begin{bmatrix}x(t_{i+1})\\ v(t_{i+1})\\ a(t_{i+1})\end{bmatrix}=\begin{bmatrix}1&\Delta t_{i}&\frac{1}{2}(\Delta t_{i})^{2}\\ 0&1&\Delta t_{i}\\ 0&0&1\end{bmatrix}\cdot\begin{bmatrix}x(t_{i})\\ v(t_{i})\\ a(t_{i})\end{bmatrix} \tag{5}\]
The \(\mathbf{Q}_{2}\) and \(\mathbf{R}_{2}\) are set as follows. We set a high value of measurement error for acceleration to over-smooth it.
\[\mathbf{Q}_{2}=\begin{bmatrix}0.2&0&0\\ 0&0.4&0\\ 0&0&1.5\end{bmatrix}^{2},\;\;\;\mathbf{R}_{2}=\begin{bmatrix}0.5&0&0\\ 0&1&0\\ 0&0&10\end{bmatrix}^{2} \tag{6}\]
Denote the \(v\)-based acceleration as \(a_{v}\) and the over-smoothed acceleration given by KF as \(a_{k}\), their RMSE, \(\sigma_{a}\), is regarded as an approximation of the total noise. This estimated hyper-parameter will be used in wavelet denoising to further smooth acceleration.
### _Wavelet denoising for acceleration smoothing_
Wavelet denoising [20] is a robust technique to remove noise from signals by transforming the time series into wavelet space and then thresholding the high-frequency coefficients while preserving the important information in the low-frequency coefficients. Compared with KF, wavelet denoising is non-parametric. It is effective for removing different types of noise, including Gaussian noise, impulsive noise, and mixed noise. Especially, wavelet denoising can adapt to non-stationary noise characteristics, which meets the requirements of acceleration estimation and smoothing.
In this study, we use the wavelet denoising tool in skimage python package. The noise standard deviation is set as \(\sigma_{a}\) from the result of KF. The type of wavelet is the Daubechies family with 6 vanishing moments ('db6'), which can effectively capture both short-term and long-term features. A soft threshold method and up to 4 wavelet decomposition levels are used.
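Assuming the standard scikit-image restoration API and that `denoise_wavelet` accepts a one-dimensional signal, the step described above could be sketched as follows (our own illustration; the function name `smooth_acceleration` is ours, the parameter choices follow the text):

```python
import numpy as np
from skimage.restoration import denoise_wavelet

def smooth_acceleration(a_v, sigma_a):
    """Smooth the speed-differentiated acceleration with 'db6' soft-threshold denoising."""
    return denoise_wavelet(np.asarray(a_v, dtype=float), sigma=sigma_a,
                           wavelet='db6', mode='soft', wavelet_levels=4)
```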
Fig.5 presents a \(1\,\mathrm{min}\) H-A car-following example. The silver lines are raw position, speed, \(v\)-based acceleration and jerk of the following HV. This CF pair approaches an intersection, decelerates, stops and waits, and then accelerates to a desired speed. The missing segments at \(24\,\mathrm{s}\) and at \(49\,\mathrm{s}\) are filled in. Processed acceleration and jerk profiles are significantly smoother.
### _Vehicle size processing_
Besides motion information, the length of the vehicle is also important, especially for calculating some safety and efficiency metrics, such as bump-to-rear gap and TTC. The size of the AVs is a given, fixed value (\(4.87\,\mathrm{m}\) length, \(1.85\,\mathrm{m}\) width). However, the size of perceived HVs is not always stable. It varies with time due to perception errors (such as shading). Many datasets, including the Waymo CF data [15], ignore this step.
Because we excluded vehicles that are not passenger cars when selecting CF pairs, we set a maximum length of \(6.5\,\mathrm{m}\)
Fig. 5: Comparison between the raw data and the processed data in a \(60\,\mathrm{s}\) CF event.
Fig. 6: Distribution of HV lengths.
(pick-up truck) and a minimum length of \(4\,\mathrm{m}\) (small vehicle). The following rules are used to estimate the size. (1) For each HV, if the variance of the length time series \(<0.3\,\mathrm{m}\), then we choose the mean value. (2) Otherwise, perceived length series are clamped between \(3.5\,\mathrm{m}\) and \(6.5\,\mathrm{m}\), then we choose the percentile at 0.95 (95%). The distribution of estimated HV lengths is shown in Fig.6. The mean value is \(4.38\,\mathrm{m}\), which is close to the average length of passenger cars in the US [21].
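The two rules can be sketched as follows (our own illustration of the stated rules, with assumed variable names):

```python
import numpy as np

def estimate_length(lengths):
    """Estimate a single HV length from a noisy perceived-length time series."""
    lengths = np.asarray(lengths, dtype=float)
    if lengths.var() < 0.3:                  # rule (1): perception is stable, use the mean
        return float(lengths.mean())
    clipped = np.clip(lengths, 3.5, 6.5)     # rule (2): clamp, then take the 95th percentile
    return float(np.percentile(clipped, 95))
```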
By now, the entire data enhancement procedure is finished. Next, we will assess the quality of enhanced data.
## V Enhanced dataset evaluation
The enhanced dataset is split into two groups, following AV (denoted as H-A) vs. following HV (denoted as H-H). They will be assessed based on anomaly analysis and diversity evaluation. We will show that the enhanced trajectories have higher quality and the dataset covers diverse regimes for calibrating CF models.
### _Vehicle kinematics anomalies_
For anomaly detection, we use the same rules as stated in section III-B. The result is shown in Table.IV. After processing, the anomaly percentage is significantly reduced, especially abnormal jerk sign inversion. The enhanced data is smoother and better conforms to the vehicle's kinematic constraints.
### _CF regime diversity_
Next, we will evaluate the regime diversity in both H-H and H-A subsets. Here a regime refers to a driving situation experienced by the following vehicle (usually restricted by its leader). A higher diversity of regimes means that the dataset is not just 'big' but also informative. For some CF models, e.g. Intelligent Driver Models (IDM), certain parameters cannot be calibrated if some specific regimes are missing in the data [22]. Insufficient regime diversity in CF data may also result in unrepresentative or over-fitted models [23], and thus contradictory conclusions about driving behaviours.
To evaluate the regime diversity in the enhanced dataset, we adopt the identification algorithm proposed in [24]. The algorithm consists of three steps: 1) segmenting the follower's speed profile into various sections, 2) categorizing the sections into car-following (CF) and free-following (FF), and 3) determining regimes based on the acceleration within these sections. For the second step, a threshold is selected based on the mean and variance of the distribution of time gaps that are calibrated from Newell's car-following model [25] (we refer the readers to [24] for more details). Given that the followers in H-H and H-A may exhibit different time gap distributions, we perform two separate threshold selections. Fig.7 compares the distributions of time gaps for H-H and H-A, where a clear difference is shown between following an HV vs. an AV.
Sharma et al. [24] classify regimes based on the previous studies by Treiber et al. [22], where standstill and deceleration are excluded from free-flowing. This consideration is reasonable for highways because the traffic flow on highways is highly continuous. However, vehicles face more interruptions in urban environments, such as vulnerable road agents (cyclists, pedestrians) and traffic signals. Therefore, this paper considers the following 7 regimes: free acceleration (**Fa**), free deceleration not caused by the leading vehicle (**Fd**), cruising at a desired speed (**C**), acceleration following a leading vehicle (**A**), deceleration following a leading vehicle (**D**), constant speed following (**F**), and standstill (**S**).
Fig.8 presents the proportion of accumulated duration of the 7 regimes in H-A and H-H subsets. The regimes F, D, and A constitute around 74% and 70% of the total duration in H-A and H-H, respectively. This suggests that the followers' behaviours depend on their leaders most of the time. This dataset is therefore suitable for studying how followers react differently to the leading AV vs. HV.
According to Sharma et al. [23], to calibrate all parameters in IDM, at least 3 regimes, A, D, and F, must be included in a CF event. Based on this principle, we categorize all CF pairs into two groups. One is 'ADF+\(n\)', which means all ADF regimes are included and there are \(n\) extra regimes. The other is 'others', which means at least one of ADF regimes is missing (no matter how many regimes they have). The results in Fig.9 show that 64% and 56% CF pairs fall in the
Fig. 8: Time proportion of car-following regimes.
Fig. 7: Distribution comparison between H-H and H-A time gaps calibrated from Newell’s car-following model.
ADF+\(n\) group in H-A and H-H subset, respectively. These pairs support calibrating IDMs. Meanwhile, we would like to emphasize that 'others' do not mean that these CF pairs are useless. They are not suitable for calibrating IDM but they are still informative for calibrating other more complex car-following models, e.g. deep-neural-networks-based methods.
In summary, this section shows that the enhanced dataset has fewer anomalies and high car-following regime diversity.
## VI Conclusion
This paper proposes a car-following trajectory data processing procedure. This procedure has been applied to an openly available dataset and validated by anomaly analysis and regime assessment. The Lyft level-5 dataset, which contains information about both autonomous vehicles and human-driven vehicles, has been processed with this technique, and the enhanced car-following trajectories are publicly available. The initial dataset is now processed into a high-quality, ready-to-use dataset. It contains human drivers following autonomous as well as human-driven vehicles in diverse scenarios. The processing procedure is essential for further analysis. The published enhanced car-following dataset is expected to help researchers better understand the impact of AVs on traffic flow and to develop safer and more effective AV systems in the future.
## Acknowledgment
This research is sponsored by the NWO/TTW project MiRRORS with grant agreement number 16270.
|
2302.02706 | When the Ground Truth is not True: Modelling Human Biases in Temporal
Annotations | In supervised learning, low quality annotations lead to poorly performing
classification and detection models, while also rendering evaluation
unreliable. This is particularly apparent on temporal data, where annotation
quality is affected by multiple factors. For example, in the post-hoc
self-reporting of daily activities, cognitive biases are one of the most common
ingredients. In particular, reporting the start and duration of an activity
after its finalisation may incorporate biases introduced by personal time
perceptions, as well as the imprecision and lack of granularity due to time
rounding. Here we propose a method to model human biases on temporal
annotations and argue for the use of soft labels. Experimental results in
synthetic data show that soft labels provide a better approximation of the
ground truth for several metrics. We showcase the method on a real dataset of
daily activities. | Taku Yamagata, Emma L. Tonkin, Benjamin Arana Sanchez, Ian Craddock, Miquel Perello Nieto, Raul Santos-Rodriguez, Weisong Yang, Peter Flach | 2023-02-06T11:08:25Z | http://arxiv.org/abs/2302.02706v1 | # When the Ground Truth is not True: Modelling Human Biases in Temporal Annotations
###### Abstract
In supervised learning, low quality annotations lead to poorly performing classification and detection models, while also rendering evaluation unreliable. This is particularly apparent on temporal data, where annotation quality is affected by multiple factors. For example, in the post-hoc self-reporting of daily activities, cognitive biases are one of the most common ingredients. In particular, reporting the start and duration of an activity after its finalisation may incorporate biases introduced by personal time perceptions, as well as the imprecision and lack of granularity due to time rounding. Here we propose a method to model human biases on temporal annotations and argue for the use of soft labels. Experimental results in synthetic data show that soft labels provide a better approximation of the ground truth for several metrics. We showcase the method on a real dataset of daily activities.
## I Introduction
The development of systems designed for detection and monitoring of everyday human activities (e.g. movement, cooking, hygiene, sleep) benefits from accurate labelled (annotated) data. Diarising methods (participant self-reporting of events), often used for self-annotation, display several limitations. Reliability is limited, e.g. Moller et al [1] found that subjects in a 6-week study of smartphone use reported at most only 70% of detected events, perhaps due to forgetfulness. Intentional misreporting may also be a factor in recording of some activities, e.g. fitness [2]. Since the contemporaneous annotation of a task usually requires interruption of a participant's activity, it is also likely that participants will not record events contemporaneously, to retain goal focus and avoid excessive task switching [3] and for practical reasons, e.g. it is impractical to input or write down annotations during a task such as vacuuming or taking a shower. As a consequence, participant-contributed annotations may be characterised as unreliable and incomplete, in terms of _event logging_, _event start/end time_ and _event duration_.
Uncertainty in the estimation of time reflects participant use of coarse temporal units to describe time and duration (e.g. an event began around a certain time, estimated with a certain granularity ['around half-past'] and a preference for 'prototypical' times [4, 5], and lasted an estimated duration). Self-reporting of event duration has been studied in linguistics (often via large corpora [6]) and in music [7]. Precision of estimation of time of day reflects participant time awareness. Time perception is complex, elastic [8] and varies according to demographics such as age [9] and context, e.g. time of day [10]. It is also worth noting that disrupted time awareness is a clinical feature in various diagnoses, such as Alzheimer's disease [11]. At present, the characteristics of participant temporal annotation are not widely modelled in analysis of participant-labelled sensor data.
In summary, while a thorough discussion is beyond the scope of this paper, unless supported by electronic time-stamping, participants' timings for activity start and duration are often inexact; we give an example of this in Section 2. This quality issue may be compared with Kwon et al.'s 'label jitter' [12]. However, jitter in post-hoc annotation (e.g. of video) is likely to be of the order of fractions of a second. As we see in Section 2, participant estimation of activity times is found to have large uncertainty. Much of the work on both evaluating label quality [13] and learning from weak labels [14] originates from post-hoc annotation, in which multiple-annotator performance can be compared (e.g. via statistical measures) and probability distributions built accordingly.
This paper explores a means of modeling temporal uncertainty when making use of participant contributed labels describing time-series annotation tasks, and provides a first step in exploring the method's suitability for particular applications (e.g. evaluation of predictions, training models); it is organised as follows. Section II presents a motivating case study. Section III describes our approach to estimating the annotation ambiguity and computing soft labels. Section IV demonstrates the characteristics of the soft labels with artificial and real-world datasets. Section V presents critical discussions of our approach. Finally, Section VI concludes with a summary and possible future work.
## II Case study: SRM-17 recording of activities of daily living
During the SPHERE 100-homes project, a subset of participants were encouraged to provide daily annotations via a modified Social Rhythm Metric (SRM-17), designed to measure daily lifestyle regularity [15], self-administered by participants for up to seven days, resulting in approximately 100 days of data. Analysis of this small dataset demonstrated many features identified in the introduction. Taking the example of shower/bath events, as can be seen in Figure 1, participants' contributed annotations fall on particular times within the hour, e.g. on the hour or half past. Data often appears to be reported to the nearest five minutes. This does not reflect instructions given to participants, who were
given no specific guidance regarding precision of time reporting; researchers expected participants to report time read from a watch or clock. That said, in the clinical domain it is common for instructions to mandate a relatively low precision of time reporting. For example, the frequently-used Hauser home diary to assess functional status in Parkinson's disease asks participants to fill out diaries every half hour [16]. Therefore, we may envisage several circumstances in which data analysts working with clinical data may find themselves in one of two conditions: annotation reporting is constrained by _diary frequency_ (e.g. 'during the last half hour'); or by _participants' chosen granularity of time approximation_ (e.g. to the nearest five minutes, half hour, etc).
While we would like to evaluate how well predictions made on the basis of sensor data correlate to the annotations provided by participants, sensor data itself may give inaccurate timings. Activity and event predictions rely on signals that vary due to incidental characteristics about a home e.g. measurement of heat from cooking varies by stove type, placement, response time and sensor placement [17]. Hence, [17] apply a 'thermal lag' parameter to account for the time it takes for a temperature increase to be detected. Similarly, humidity sensors used to predict when shower or bath events occurred exhibit a 'humidity lag', due to characteristics of the bathroom appliances (e.g. shower, bath), configuration (e.g. closed shower stall, curtain, no enclosure from the bathroom), and architecture (e.g. room size, window placement and ventilation such as extractor fans).
## III Method
We propose to tackle the uncertainty in time-series annotations in two stages. In the first step, we predict the time resolution each annotator uses (ambiguity) with a Bayesian approach. In the second stage, we produce soft labels based on the predicted annotation's time resolution. It is also important to consider how to evaluate a target model with soft labels (i.e. how to use the soft labels), but this is outside the current paper's scope.
### _Predicting Annotator's Habit and Annotation's Uncertainties_
First, we introduce predefined categories (\(c_{n}\)) for annotation time resolutions. Each category has a subset of possible annotation values (\(\mathbb{C}_{n}\)) that indicate how the person who belongs to the category tends to annotate. For example, if we define the 1st category (\(c_{1}\)) as 30 minutes time resolution, then the possible annotation times would be 0 and 30 minutes (\(\mathbb{C}_{1}=\{0,30\}\)). We use the following five categories to demonstrate our method throughout this paper.
\begin{tabular}{l l l l} \(c_{1}\) & : & 30 mins. res. & \(\mathbb{C}_{1}=\{0,30\}\) \\ \(c_{2}\) & : & 15 mins. res. & \(\mathbb{C}_{2}=\{0,15,30,45\}\) \\ \(c_{3}\) & : & 10 mins. res. & \(\mathbb{C}_{3}=\{0,10,20,30,40,50\}\) \\ \(c_{4}\) & : & 5 mins. res. & \(\mathbb{C}_{4}=\{0,5,10,15,\ldots,55\}\) \\ \(c_{5}\) & : & 1 mins. res. & \(\mathbb{C}_{5}=\{0,1,2,3,4,\ldots,59\}\) \\ \end{tabular}
Then, we introduce two random variables. One is the annotator's habit (\(H_{j}\)), which indicates which time resolution category the annotator tends to use, and the other is the category (\(C_{j,i}\)) that is actually used for each annotation. The indices \(j\) and \(i\) are the annotator and annotation indices, respectively. \(C_{j,i}\) is defined for each annotation, and \(H_{j}\) is defined for each annotator. Now, we introduce our method to derive the posterior probabilities of the two random variables (\(H_{j}\) and \(C_{j,i}\)). For simplicity, we drop the annotator index \(j\) in the rest of the paper, as the following process can be repeated for each annotator separately. We present the notation below.
\begin{tabular}{l l} \(\mathcal{D}=\{d_{i}\}_{i=1}^{N}\) & Set of annotations \\ \(N\) & Number of annotations \\ \(d_{i}^{*}\) & True value for an annotation \(d_{i}\) \\ \(H\) & Annotator's habit \\ \(C_{i}\) & i-th annotation category \\ \(c_{n}\) & Categories for \(H\) and \(C_{i}\) \\ \(|c_{n}|\) & Number of annotations in category \(c_{n}\) \\ \(T_{c_{n}}\) & Time resolution for category \(c_{n}\) \\ \(N_{c}\) & Number of the categories \\ \end{tabular}
We assume that the \(i\)-th annotations (\(d_{i}\)) are generated from \(C_{i}\) and \(d_{i}^{*}\), and \(C_{i}\) is generated from \(H\). Figure 3 shows the probabilistic graphical model relating the variables. To infer the annotator's \(H\) and annotations' \(C_{i}\), we compute their posterior probabilities given annotations \(\mathcal{D}\) i.e. \(P(H|\mathcal{D})\) and \(P(C_{i}|\mathcal{D})\).
\[P(H|\mathcal{D})=\frac{P(H)}{P(\mathcal{D})}\prod_{i=1}^{N}\sum_{C_{i}}P(C_{i} |H)P(d_{i}|C_{i}) \tag{1}\]
Figure 1: Histogram of granularity patterns found on SRM-17 shower/bath annotations for dataset SPHERE 100-Homes, coloured by annotator.
Figure 2: Examples of biases added during the annotation process. The bottom graph shows the time in which the person took a 37 minutes shower. The graph above shows a possible annotation **B** estimating that it took about 30 minutes at 8am, while the graph above that shows a different annotation **A** estimating a shower of an hour at 8am. Each annotation indicates the true (and false) positives (and negatives) that would result from comparison with the ground truth.
Eq. 1 shows how we can compute \(P(H|\mathcal{D})\). Its detailed derivations are in Appendix A. The term \(P(d_{i}|C_{i})\) is the probability of getting the annotation \(d_{i}\) with the given annotation category \(C_{i}\). Here, we assume a uniform distribution for \(P(d_{i}^{*})\); hence we define it as Eq. 2.
\[P(d_{i}|C_{i})=\begin{cases}\frac{1}{|C_{i}|},&\text{if }d_{i}\in C_{i}\\ 0,&\text{otherwise}\end{cases} \tag{2}\]
where \(|C_{i}|\) is the number of possible annotation values in the category \(C_{i}\). For example, when \(C_{i}=c_{1}\), \(|C_{i}|=2\) as \(c_{1}\) has two members \(\mathbb{C}_{1}=\{0,30\}\). Hence the probability of \(d_{i}=0\) or \(30\) when \(C_{i}=c_{1}\) is \(0.5\) in Eq. 2. The term \(P(C_{i}|H)\) is the probability of using the annotation category \(C_{i}\) when the annotator's habit is \(H\). We use the following \(P(C_{i}|H)\) in this work:
\[P(C_{i}|H)=\begin{cases}1-\delta,&\text{if }C_{i}=H\\ \frac{\delta}{N_{c}-1},&\text{otherwise}\end{cases} \tag{3}\]
where \(\delta\in[0,1]\) is the probability of taking a different annotation category to the annotator's habit \(H\). In this paper, we use \(\delta=0.1\). Further discussion regarding the choice of \(\delta\) value is in Section V. We assume the prior of \(H\) is the uniform distribution, as we do not assume any prior knowledge about the annotator's habit. Now, we compute the posterior of \(C_{i}\), which can be derived as follows (again, the detailed derivations are in Appendix B):
\[P(C_{i}|\mathcal{D})=\sum_{H}\frac{P(d_{i}|C_{i})P(C_{i}|H)}{P(d_{i}|H)}P(H| \mathcal{D}), \tag{4}\]
where we can compute \(P(d_{i}|C_{i})\) from Eq. 2, \(P(C_{i}|H)\) from Eq. 3, \(P(H|\mathcal{D})\) from Eq. 1 and \(P(d_{i}|H)=\sum_{C_{i}}P(d_{i}|C_{i})P(C_{i}|H)\). Once we have computed the posteriors, we use them to produce the soft labels. It is possible to prepare the soft labels for all categories and combine them according to the posteriors (a fully Bayesian approach). This work instead takes the maximum a posteriori (MAP) estimate of \(C_{i}\) and generates the soft label from it.
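For concreteness, the following minimal Python sketch implements the inference of Eqs. 1-4 with the five categories above and \(\delta=0.1\); the function and variable names are ours, and the snippet is an illustration rather than the implementation used in our experiments.

```python
import numpy as np

# Resolution categories (in minutes) and their possible annotation values, as in Sec. III-A.
CATEGORIES = {30: {0, 30},
              15: {0, 15, 30, 45},
              10: set(range(0, 60, 10)),
              5:  set(range(0, 60, 5)),
              1:  set(range(0, 60, 1))}
RES = list(CATEGORIES.keys())          # c_1 ... c_5
N_C = len(RES)
DELTA = 0.1                            # probability of deviating from the habit (Eq. 3)

def p_d_given_c(d, c):
    """Eq. 2: uniform over the |C_i| values compatible with category c."""
    members = CATEGORIES[c]
    return 1.0 / len(members) if d in members else 0.0

def p_c_given_h(c, h):
    """Eq. 3: stay with the habit w.p. 1 - delta, otherwise pick another category uniformly."""
    return 1.0 - DELTA if c == h else DELTA / (N_C - 1)

def posterior_habit(annotations):
    """Eq. 1: P(H | D), assuming a uniform prior over H."""
    post = np.ones(N_C)
    for d in annotations:
        post *= [sum(p_d_given_c(d, c) * p_c_given_h(c, h) for c in RES) for h in RES]
    return post / post.sum()

def posterior_category(d, annotations):
    """Eq. 4: P(C_i | D) for a single annotation with minute value d."""
    p_h = posterior_habit(annotations)
    post = np.zeros(N_C)
    for ci, c in enumerate(RES):
        for hi, h in enumerate(RES):
            p_d_given_h = sum(p_d_given_c(d, cc) * p_c_given_h(cc, h) for cc in RES)
            post[ci] += p_d_given_c(d, c) * p_c_given_h(c, h) / p_d_given_h * p_h[hi]
    return post

# Example: mostly half-hour annotations with one five-minute one.
minutes = [0, 30, 30, 0, 30, 25]
print(dict(zip(RES, posterior_habit(minutes).round(3))))
print(dict(zip(RES, posterior_category(25, minutes).round(3))))  # MAP gives that annotation's resolution
```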
### _Generating Soft Labels_
The annotations for time-series data have been produced by recording an event's start and end time. We introduce a probability distribution over the timings based on the category \(C_{i}\) detected in the previous subsection and compute soft labels that indicate the probability of the event happening at a given time. We adopt a uniform distribution for the true start and end timings \(d_{s}^{*}\) and \(d_{e}^{*}\) because we assume the annotations are generated by rounding the actual time. (Again, it is interesting for future work to study different distributions.) Each distribution is uniform across the inferred annotation interval and centred at the annotated timing, where \(d_{s}\) and \(d_{e}\) are the annotations for the start and end timing. For example, if the start timing annotation category \(C_{s}\) is \(c_{3}\), the distribution is uniform between \(d_{s}-5\) and \(d_{s}+5\), as the category \(c_{3}\) has annotations at \(10\)-minute intervals.
\[\begin{split} p(d_{s}^{*})&=\text{U}(d_{s}-T_{C_{s} }/2,d_{s}+T_{C_{s}}/2),\\ p(d_{e}^{*})&=\text{U}(d_{e}-T_{C_{e}}/2,d_{e}+T_{C_{ e}}/2),\end{split} \tag{5}\]
where \(T_{C_{s}}\) and \(T_{C_{e}}\) are the annotation time intervals for the inferred annotation categories \(C_{s}\) and \(C_{e}\).1 Then, we compute the probabilities that the event has started, \(P(d_{s}^{*}\leq t)\), and has not yet ended, \(P(d_{e}^{*}>t)\), at time \(t\) by integrating Eq. 5.
Footnote 1: These probabilities must be conditioned on \(C_{i}\) and \(d_{i}\) like \(p\left(d_{s}^{*}|C_{s},d_{s}\right)\). We drop the conditioning for simplicity here.
\[\begin{split} P(d_{s}^{*}\leq t)&=\int_{-\infty}^{t }p(d_{s}^{*})\leavevmode\nobreak\ dd_{s}^{*},\\ P(d_{e}^{*}>t)&=\int_{t}^{+\infty}p(d_{e}^{*}) \leavevmode\nobreak\ dd_{e}^{*}.\end{split} \tag{6}\]
Finally, we compute the soft label (\(P^{\text{(label)}}(t)\)), which is the probability of the event at time \(t\), by multiplying \(P(d_{s}^{*}\leq t)\) and \(P(d_{e}^{*}>t)\). Here, we assume the start and end timings are statistically independent. This simplifies the following derivations and helps convey the idea of our approach. It is a strong assumption, and it does not hold in some cases. For example, if the start and end timings are close relative to the annotation time resolution, then we need to consider the dependency between the start and end timings - the end timing must be later than the start timing. We want to extend our model to support such a scenario in the future.
\[\begin{split} P^{\text{(label)}}(t)&=P(d_{s}^{*} \leq t\wedge d_{e}^{*}>t)\\ &=P(d_{s}^{*}\leq t)P(d_{e}^{*}>t).\end{split} \tag{7}\]
Figure 4 shows the above soft label computation process with uniform distributions. Starting from the given start/end time annotations (\(d_{s}\) and \(d_{e}\)) and the estimated time resolutions, we assume a probability distribution over the actual start/end times (\(d_{s}^{*}\) and \(d_{e}^{*}\)). Then, we compute the probabilities that the event has already started and that it has already ended for each time slot. Finally, we compute the probability that the event has started but not ended for each time slot, which becomes the soft label.
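As an illustration, the following sketch computes such a soft label on a one-minute grid from given start/end annotations and their inferred time resolutions (Eqs. 5-7); the example values and function names are ours.

```python
import numpy as np

def uniform_cdf(t, centre, width):
    """CDF of U(centre - width/2, centre + width/2), evaluated elementwise at t."""
    lo, hi = centre - width / 2.0, centre + width / 2.0
    return np.clip((t - lo) / (hi - lo), 0.0, 1.0)

def soft_label(times, d_s, d_e, t_cs, t_ce):
    """Eqs. 5-7: P(event started by t) * P(event not yet ended at t)."""
    p_started = uniform_cdf(times, d_s, t_cs)           # P(d_s* <= t), Eq. 6
    p_not_ended = 1.0 - uniform_cdf(times, d_e, t_ce)   # P(d_e* > t), Eq. 6
    return p_started * p_not_ended                      # independence assumption of Eq. 7

# Example: event annotated 08:00-08:30 (minutes since midnight); the start annotation is
# inferred to be at 30-minute resolution, the end at 10-minute resolution; 1-minute grid.
times = np.arange(7 * 60, 9 * 60 + 1)
label = soft_label(times, d_s=8 * 60, d_e=8 * 60 + 30, t_cs=30, t_ce=10)
```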
## IV Experiments
In this section, we compare the hard and soft labels with artificial examples and real-world datasets. The hard labels have either true or false values and are set to true for the time slots between the start and end time annotations. The artificial examples allow us to demonstrate the characteristics of the soft labels in a controlled environment. We show the actual use case with real-world datasets.
Figure 3: Probabilistic graphical model relating the variables for annotations. We assume that the annotation \(d_{i}\) is generated from the ground truth \(d_{i}^{*}\) and the annotation time resolution \(C_{i}\). The annotation time resolution depends upon the annotator’s habit \(H\). The index \(i\) is the annotation’s index.
### _Simple Task_
First, we compare the soft and conventional hard labels in a simple synthetic example. It consists of a series of tasks with start and end events (e.g. sleeping, cooking or showering). Here, we randomly generate the true start and end times (ground truth), then we make the annotations by rounding the times based on a given time resolution (e.g. 5, 10, 15 and 30 minutes). We consider the rounded start and end times as the provided hard labels, and apply the proposed method to obtain the soft labels as described in Sec. III. Finally, we evaluate the hard and soft labels by comparing them to the ground truth. Figure 5 shows the mean squared error (MSE) of the ground truth with respect to the hard and soft labels, with the time resolution of the annotations on the X-axis. The MSEs are measured within a \(\pm\)15 minutes range around the ground truth start and end times. The result suggests that the hard label has a progressively larger error than the soft label as the resolution interval of the time annotations increases. This indicates that the proposed soft labels deviate less from the ground truth than the hard labels.
Next, we perform a similar analysis with the F1 score, as in most detection scenarios the positive class is more important than the negative one. The left plot of Figure 6 shows the results with the annotation time resolution on the X-axis. This suggests the hard label is better (higher F1 score) than the soft label. This is the opposite of the MSE result and counter-intuitive, as the soft label accurately reflects the degree of ambiguity by taking values between zero and one, whereas the hard label takes only zero or one. The F1 score (or any metric based on the confusion matrix) is penalised when labels take values between zero and one (as soft labels do). Also, this is the best-case scenario for the hard label, as the annotation is produced by rounding; hence, the expected ground truth matches the hard label. We are interested in exploring evaluation metrics that do not penalise the prediction or ground truth ambiguity. However, this is left for future work.
We also evaluate the annotations produced by applying a bias (offset) and rounding to the ground truth. This is useful to simulate a case in which the annotator does not remember accurately the beginning and end of the event, which adds a possible bias on top of the rounding error. With this bias, the expected ground truth no longer matches the hard label. We set the bias to half of the annotation resolution and measure the F1 score for the hard and soft labels. The result (the right plot of Figure 6) suggests that the soft label is better than the hard label in this case. We can see that the hard label results are degraded due to the bias in the annotation, and the soft label results stay the same as before. This is because we shift the uniform distribution for producing the soft label (Eq. 5) based on the estimated annotation resolution.
### _Humidity Event Detection_
We test the soft and hard labels for evaluating a shower event detection model. The model uses the humidity sensor reading
Figure 4: Soft label for time series data. Given start/end time annotations (\(d_{s}\) and \(d_{e}\)) and the estimated time resolutions, we assume the probability distribution of the actual start/end time (\(d_{s}^{*}\) and \(d_{e}^{*}\)) – top of the figure. Then, we compute the probabilities that the event has already started and ended for each time slot. Finally, we compute the probability that the event has started but not ended for each time slot, becoming the soft label.
Figure 5: MSE for the hard and soft labels against the ground truth. The X-axis is the annotation time resolution. The MSE measured around the ground truth start and end time \(\pm\)15 minutes range. The results show the soft label is better (smaller MSE) than the hard label.
Figure 6: F1 score for the hard and soft labels against the ground truth with different annotation time resolutions. The annotations are produced from the ground truth by applying a bias and rounding. Here, we use zero and half of the annotation time resolution as the bias (offset). The results suggest that the soft label performs better than the hard label with the bias.
and predicts whether someone is taking a shower at each time slot, i.e., a binary classification task. The model used is a hidden Markov model. It takes the humidity level as the observations and treats the shower status (on/off) as the hidden variable. We assume that the predictive model is already provided, as the focus of this paper is the evaluation rather than the learning process.
We use the SRM-17 dataset [15] with self-reported annotations. We pick the dataset for ID:4 (Figure 1) as it has a coarse annotation time resolution. First, we generate the hard and soft labels based on self-report annotations and then compare them against the prediction model output. Table I shows the confusion matrices for the evaluation results with the hard and soft labels. We do not know the ground truth; hence we cannot directly compare the performance of these labels. We can say that the soft and hard labels give different results. We discuss approaches to further detailed characterisation and evaluation in the future work section of this paper. We can also see that the results with the soft label are slightly worse than those with the hard labels (slightly fewer true-positives and true-negatives and more false-positives and false-negatives). This is because such confusion-matrix-based metrics penalise the ambiguity of the soft labels; as in Section IV-A, the suitability of these metrics for comparative evaluation is left for further discussion.
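One natural convention for accumulating a confusion matrix against soft labels, which the sketch below assumes (it is an illustration and not necessarily the exact bookkeeping behind Table I), is to add fractional counts per time slot; with hard 0/1 inputs it reduces to the usual counts.

```python
import numpy as np

def soft_confusion(pred, label):
    """Confusion-matrix entries when predictions and/or labels take values in [0, 1].

    pred, label: arrays of per-time-slot probabilities (or 0/1 indicators) of the event.
    Returns (tp, fp, fn, tn) as fractional counts.
    """
    pred, label = np.asarray(pred, float), np.asarray(label, float)
    tp = np.sum(pred * label)
    fp = np.sum(pred * (1.0 - label))
    fn = np.sum((1.0 - pred) * label)
    tn = np.sum((1.0 - pred) * (1.0 - label))
    return tp, fp, fn, tn
```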
## V Discussion
In this section, we first discuss the findings from synthetic data and then present implications and limitations for real-world examples.
Accuracy of the category estimation: Our method estimates the posteriors of the random variables \(H\) and \(C_{i}\). It is designed to prefer coarser time resolution categories over finer ones when both are compatible with the annotations. This is intuitively correct: if the annotations only ever fall on 30-minute marks, it is more natural to assume the annotator uses a 30-minute resolution than a finer resolution (such as 15 or 10 minutes) that also happens to include 0 and 30 minutes. Our method wrongly estimates the annotation categories if all (or most) annotations of a finer-resolution annotator happen to land on coarser-resolution times. The likelihood of such an error diminishes as more annotations become available.
Figure 7 shows the error rate of the annotation category estimation \(\arg\max_{C_{i}}P(C_{i}|\mathcal{D})\). As the plot suggests, the error rate decreases quickly as more annotations are received. Here we assume all annotations come from a single time resolution category. Each line corresponds to one annotation category. There is no line for the 30-minute resolution as its error rate is always zero.
Choice of \(\delta\) in Eq. 3: The parameter \(\delta\) indicates how likely the annotator is to use an annotation category different from the habit (\(H\)). \(\delta=0.1\) means that the annotator uses a different annotation category once in ten times. It affects both posteriors \(P(H|\mathcal{D})\) and \(P(C_{i}|\mathcal{D})\). If \(\delta\) is small (close to zero), then \(H\) would be the category compatible with all annotations. For example, if all but one annotation fall on 30-minute marks (e.g. 9:30) and one has 1-minute time resolution (e.g. 13:16), then \(H\) would be \(c_{5}\) (1-minute resolution) when \(\delta\) is too small. On the other hand, if \(\delta\) is too large (\(\sim 1.0\)), then the posterior of \(C_{i}\) might ignore \(H\) and pick the category based only on the annotation value \(d_{i}\). For example, if all annotations have 5-minute time resolution, then \(H\) would correctly be \(c_{4}\) (5-minute resolution). However, with too large a \(\delta\), \(C_{i}\) would wrongly be \(c_{1}\) whenever \(d_{i}\in\mathbb{C}_{1}\). We pick \(\delta=0.1\) as it strikes a good balance for picking the right \(H\) and \(C_{i}\).
Choice of \(p(d_{i}^{*}|d_{i},C_{i})\): In this paper, we use the uniform distribution for \(p(d_{i}^{*}|d_{i},C_{i})\) (Eq. 2) as we assume a simple annotator behaviour - just rounding the actual time. However, it may be more plausible to use gradually increasing and decreasing distributions, e.g. Gaussian or trapezoidal distributions. This is left for future work.
Learning from soft labels: Our proposed method can potentially be applied during the learning process of a classifier/regressor as well. For example, by setting the start and end of the events with the proposed soft labels, we can interpret the augmented regions as weak labels [14, 18, 19]. Two algorithms that could be used in this scenario are pseudo-labeling [20] and Optimistic Superset Learning [21], which are iterative learning methods that consider the model's predictions among the candidate labels as correct, and retrain the model with those labels. It is also possible to only add those samples for which the model is more confident (e.g. exceeding a certain threshold).
We could also consider the proposed soft labels as probabilities coming from a Bernoulli distribution, or as prior beliefs of belonging to the positive class. Some algorithms that use similar soft labels are label smoothing [22] and an Expectation
Table I: Confusion matrix for the shower event detection model with soft and hard labels.
Figure 7: Error rate of the annotation’s category estimation. The error rates are measured on synthetically generated annotations. Each line is for the annotation category. The X-axis is the number of annotations. The error rate decrease as it has more annotations.
Maximization method proposed by Jin and Ghahramani [23].
_Evaluation metrics for soft labelling:_ The commonly used evaluation metrics for classification tasks (such as accuracy, precision, recall and F1 score) are penalised by the ambiguity in the labels or the predictions (the ambiguity reduces these scores). In this paper, we show that MSE does not have such an issue and correctly shows the benefit of the soft labels in our experiment. Also, the MSE is similar to the Brier score, which is commonly used to evaluate the accuracy of probabilistic predictions. This may suggest that MSE is an appropriate evaluation metric for soft labels. However, we need further study to understand which metric is appropriate in which scenario (objective) with the soft labels.
_Towards real-world comparison of soft and hard labels:_ Real-world comparison of soft and hard labels in contemporaneous self-annotation by participants is complicated by the lack of wholly reliable ground truth, particularly in environments in which it is not possible to rely on data collection suitable for post-hoc annotation methods (e.g. video is unlikely to be appropriate in domestic bathrooms). However, we may look toward other data sources to resolve or reduce some of the ambiguities identified. For example, we might look to sensor fusion, referencing other sensors for associated information such as presence, temperature or power use e.g. to detect operating times of the appliance, or to get accurate bounds of a person's entrance into and departure from the room. This helps to resolve the confounding question of sensor lag (e.g. time taken before sensor detects temperature or humidity rises), giving us a greater insight into real-world timings.
_Human performance in temporal estimation:_ Time estimation performance and bias is a complex topic and the underlying mechanisms and their causes are largely beyond the scope of this paper. However, we hope that discussion of this mechanism may spark further exploration of the characterisation and handling of this aspect of self-reported data.
## VI Conclusion
We proposed a method to identify the annotator's approach to the task and the ambiguity that comes with it. Also, we devised a way to generate soft labels based on the estimated ambiguity. Our evaluation results suggest that the soft label achieves a lower mean squared error than the hard label. However, the soft labels show worse results than the hard labels in terms of F1 score, because metrics like the F1 score inherently penalise ambiguity. We consider many avenues for future work, namely improving the model of human annotations, designing new evaluation metrics for soft labels, and using soft labels in the learning stage. We also hope to report on the variance in granularity of participant annotation of other types of events in a future publication.
## Acknowledgment
This work was supported by the SPHERE Next Steps Project funded by EPSRC under Grant EP/R005273/1. RSR is funded by the UKRI Turing AI Fellowship EP/V024817/1.
|
2306.01920 | Context-Aware Bayesian Network Actor-Critic Methods for Cooperative
Multi-Agent Reinforcement Learning | Executing actions in a correlated manner is a common strategy for human
coordination that often leads to better cooperation, which is also potentially
beneficial for cooperative multi-agent reinforcement learning (MARL). However,
the recent success of MARL relies heavily on the convenient paradigm of purely
decentralized execution, where there is no action correlation among agents for
scalability considerations. In this work, we introduce a Bayesian network to
inaugurate correlations between agents' action selections in their joint
policy. Theoretically, we establish a theoretical justification for why action
dependencies are beneficial by deriving the multi-agent policy gradient formula
under such a Bayesian network joint policy and proving its global convergence
to Nash equilibria under tabular softmax policy parameterization in cooperative
Markov games. Further, by equipping existing MARL algorithms with a recent
method of differentiable directed acyclic graphs (DAGs), we develop practical
algorithms to learn the context-aware Bayesian network policies in scenarios
with partial observability and various difficulty. We also dynamically decrease
the sparsity of the learned DAG throughout the training process, which leads to
weakly or even purely independent policies for decentralized execution.
Empirical results on a range of MARL benchmarks show the benefits of our
approach. | Dingyang Chen, Qi Zhang | 2023-06-02T21:22:27Z | http://arxiv.org/abs/2306.01920v1 | Context-Aware Bayesian Network Actor-Critic Methods for Cooperative Multi-Agent Reinforcement Learning
###### Abstract
Executing actions in a correlated manner is a common strategy for human coordination that often leads to better cooperation, which is also potentially beneficial for cooperative multi-agent reinforcement learning (MARL). However, the recent success of MARL relies heavily on the convenient paradigm of purely decentralized execution, where there is no action correlation among agents for scalability considerations. In this work, we introduce a Bayesian network to inaugurate correlations between agents' action selections in their joint policy. Theoretically, we establish a theoretical justification for why action dependencies are beneficial by deriving the multi-agent policy gradient formula under such a Bayesian network joint policy and proving its global convergence to Nash equilibria under tabular softmax policy parameterization in cooperative Markov games. Further, by equipping existing MARL algorithms with a recent method of differentiable directed acyclic graphs (DAGs), we develop practical algorithms to learn the context-aware Bayesian network policies in scenarios with partial observability and various difficulty. We also dynamically decrease the sparsity of the learned DAG throughout the training process, which leads to weakly or even purely independent policies for decentralized execution. Empirical results on a range of MARL benchmarks show the benefits of our approach. The code is available at [https://github.com/dchen48/BNPG](https://github.com/dchen48/BNPG).
Machine Learning, ICML
## 1 Introduction
Cooperative multi-agent reinforcement learning (MARL) methods equip a group of autonomous agents with the capability of planning and learning to maximize their joint utility, or reward signals in the reinforcement learning (RL) literature, which provides a promising paradigm for a range of real-world applications, such as traffic control (Chu et al., 2019), coordination of multi-robot systems (Corke et al., 2005), and power grid management (Callaway & Hiskens, 2010). As a key distinction from the single-agent setting, multi-agent joint action spaces grow exponentially with the number of agents, which imposes significant scalability issues. As a convenient and commonly adopted solution, most existing cooperative MARL methods only consider _product policies_, i.e., each agent selects its local action independently given the state or its observations. Restricting to product policies, however, does come at a cost for cooperative tasks: consider an example where cars wait at a crossroads, it would be hard for the cars to coordinate their movements without knowing others' intentions, potentially resulting in a crash or congestion. Intuitively, optimizing over the smaller joint policy space of all product policies can lead to suboptimal joint policies compared to optimizing over the entire set of joint policies that also includes _correlated policies_ where the local actions of all agents are sampled together in a potentially correlated manner.
The research question then arises naturally: how can we introduce correlations for cooperative multi-agent joint policies, while taming the scalability issues? Noting that a joint policy is a joint distribution (over agents' local actions), a straightforward yet underexplored solution idea is to use a Bayesian network (BN) that represents conditional dependencies between agents' local actions via a directed acyclic graph (DAG), where a desirable DAG topology captures the important dependencies, ideally among a sparsely connected set of agents. As our first contribution, we formalize this solution idea of BN joint policies in the cooperative Markov game framework (Boutilier, 1999; Peshkin et al., 2001), derive its associated BN policy gradient formula, and then prove the global convergence of its gradient ascent to Nash equilibria under the tabular policy parameterization.
As our second contribution, we then adapt existing multi-agent actor-critic methods such as MAPPO (Yu et al., 2021) to incorporate BN joint policies. For practicality and efficiency, our algorithm features the following two key design choices: (i) The DAG topology of our BN joint policy is learned to be context-aware based on the environment state or the agents' joint observations, leveraging a recently developed technique for differentiable DAG learning. (ii) To execute a BN joint policy, the agents need to communicate their intended actions to their children in the BN, unless the BN's DAG topology reduces to product policies, and the corresponding communication overhead is directly determined by the DAG's denseness/sparseness. To encourage sparse communication during execution, we develop a learning strategy that dynamically increases the sparsity of the learned DAG, where full sparsity (i.e., product policies) can be achieved at the last stage of the training process and therefore the learned joint policy can be executed in a purely decentralized manner, making our algorithm compatible with the centralized training, decentralized execution (CTDE) paradigm (Lowe et al., 2017). Empirical results show the benefits of our algorithm equipped with these two design choices.
The rest of this paper is structured as follows: Section 2 reviews closely related work; Section 3 introduces preliminaries of cooperative Markov games and solution concepts therein; Section 4 formulates our novel notion of Bayesian network joint policy, followed by the theoretical results in Section 5.1; Section 6 describes our practical algorithm, followed by the empirical results in Section 7; Section 8 concludes the paper.
## 2 Related Work
Convergence of policy gradient in cooperative MGs. Cooperative MGs are an important subclass of Markov games, where each agent has the same reward function. Recent work has established convergence guarantees of policy gradient in cooperative MGs to Nash policies under the tabular setting with direct parameterization (Leonardos et al., 2021), and with softmax parameterization (Zhang et al., 2022; Chen et al., 2022).
Policy correlations in MARL. Some prior work has noticed the limitation of purely decentralized execution and made some efforts to introduce correlations among policies. The value-based method of (Rashid et al., 2018), which follows the CTDE training paradigm, has been combined with coordination graphs (Bohmer et al., 2020) to introduce pairwise correlations. However, the optimization process requires Max-Sum (Rogers et al., 2009), which is computationally intensive when the coordination graph is dense. (Wang et al., 2022) proposes a rule-based pruning method to generate a sparse coordination graph that can speed up the Max-Sum algorithm without harming the performance. However, extending pairwise correlations to more complicated ones is not trivial. There are also some policy-based algorithms augmented with correlated execution. (Ruan et al., 2022) combines MAPPO (Yu et al., 2021) with a graph generator outputting a Bayesian network that determines action dependencies. The graph generator is optimized by maximizing the cumulative rewards subject to constraints on the depth and DAGness of the output graph. However, there is no theoretical justification for why using Bayesian networks is reasonable, and the output graph is not guaranteed to be a DAG, which requires some non-differentiable rule-based pruning and can harm performance. Moreover, the existing methods do not produce fully decentralized policies at the end of training, which increases the execution time when the model is deployed.
Differentiable DAG learning. The goal is to learn an adjacency matrix that helps the actors coordinate better. However, generating a DAG requires non-differentiable operations due to its discreteness and acyclicity, which prevents end-to-end training. Fortunately, recent work (Charpentier et al., 2022) proposes a simple, fully differentiable DAG learning algorithm. Every DAG can be decomposed into the multiplication of a permutation matrix determining the topological ordering and an upper triangular matrix (edge matrix) determining the DAG structure. We can use neural networks to learn the logits for the permutation and edge matrices, and use Gumbel-Softmax (Jang et al., 2016) and Gumbel-Sinkhorn (Mena et al., 2018) to differentiably transform them into the corresponding discrete matrices.
## 3 Preliminaries
Cooperative Markov game.We consider a cooperative Markov game (MG) \(\langle\mathcal{N},\mathcal{S},\mathcal{A},P,r,\mu\rangle\) with \(N\) agents indexed by \(i\in\mathcal{N}=\{1,...,N\}\), state space \(\mathcal{S}\), action space \(\mathcal{A}=\mathcal{A}^{1}\times\cdots\times\mathcal{A}^{N}\), transition function \(P:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\), (team) reward function \(r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) shared by all agents \(i\in\mathcal{N}\), and initial state distribution \(\mu\in\Delta(\mathcal{S})\), where we use \(\Delta(\mathcal{X})\) to denote the set of probability distributions over \(\mathcal{X}\). For ease of exposition, we assume full observability, i.e., each agent observes the global state \(s\in\mathcal{S}\), until Section 6 where we introduce our practical algorithm that incorporates partial observability. Under full observability, we consider general _joint policy_, \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\), which maps from the state space to distributions over the joint action space. As the size of action space \(\mathcal{A}\) grows exponentially with \(N\), the commonly used joint policy subclass is the _product policy_, \(\pi=(\pi^{1},\cdots,\pi^{N}):\mathcal{S}\rightarrow\times_{i\in\mathcal{N}} \Delta(\mathcal{A}^{i})\), which is factored as the product of local policies \(\pi^{i}:\mathcal{S}\rightarrow\Delta(\mathcal{A}^{i})\), \(\pi(a|s)=\prod_{i\in\mathcal{N}}\pi^{i}(a^{i}|s)\), each mapping the state space only to the action space of an individual agent. Define the
discounted return at time step \(t\) as \(G_{t}=\sum_{l=0}^{\infty}\gamma^{l}r_{t+l}\), where \(r_{t}:=r(s_{t},a_{t})\) is the team reward at time step \(t\). Joint policy \(\pi\) induces a value function defined as \(V_{\pi}(s_{t})=\mathbb{E}_{s_{t+1:\infty},a_{t:\infty}\sim\pi}[G_{t}|s_{t}]\), and an action-value function \(Q_{\pi}(s_{t},a_{t})=\mathbb{E}_{s_{t+1:\infty},a_{t+1:\infty}\sim\pi}[G_{t}|s_{t},a_{t}]\). Following policy \(\pi\), the cumulative team reward, i.e., the value function, starting from \(s_{0}\sim\mu\) is denoted as \(V_{\pi}(\mu):=\mathbb{E}_{s_{0}\sim\mu}[V_{\pi}(s_{0})]\). The (unnormalized) _discounted state visitation measure_ induced by following policy \(\pi\) after starting at \(s_{0}\sim\mu\) is defined as
\[d_{\mu}^{\pi}(s):=\mathbb{E}_{s_{0}\sim\mu}\left[\sum_{t=0}^{\infty}\gamma^{t }\mathrm{Pr}^{\pi}(s_{t}=s|s_{0})\right]\]
where \(\mathrm{Pr}^{\pi}(s_{t}=s|s_{0})\) is the probability that \(s_{t}=s\) when starting at state \(s_{0}\) and following \(\pi\) subsequently.
As all agents share a team reward, cooperative MARL considers the same objective as single-agent RL of optimizing the joint policy from experience to maximize its value, i.e., \(\max_{\pi}V_{\pi}(\mu)\). For product policies, we will also consider the weaker solution concept of the Nash policy, as formally defined below.
**Definition 3.1** (Nash policy).: Product policy \(\pi=(\pi^{1},\cdots,\pi^{N})=(\pi^{i},\pi^{-i})\) is a Nash policy if
\[\forall i\in\mathcal{N},\forall\bar{\pi}^{i}\in\Delta(\mathcal{A}^{i}),V_{\bar{\pi}^{i},\pi^{-i}}(\mu)\leq V_{\pi}(\mu)\]
where \(\pi^{-i}\) denotes the local policies of the agents excluding \(i\).
For a Nash policy, each agent \(i\) maximizes the value function given fixed local policies of other agents.
## 4 Bayesian Network Joint Policy
Most existing cooperative MARL methods optimize only over product policies, rather than the more general set of joint policies. This is mainly because product policies can conveniently deal with the scalability issue of the joint action space. Another justification is that restricting to product policies incurs no optimality gap, since it is well-known that there is always an optimal joint policy that is deterministic and therefore a product policy. However, the existence of an optimal product policy does not guarantee that we can search it out easily. In fact, existing theoretical and empirical results (Leonardos et al., 2021; Zhang et al., 2022; Chen et al., 2022), including ours in this paper, have shown that restricting the search to product policies via gradient ascent can only find local optima such as Nash policies, even in the noiseless tabular setting.
As the key notion in this work, we now formally introduce a class of joint policies that is more general than product policies by introducing action dependencies captured by a Bayesian network (BN). We specify a BN by a directed acyclic graph (DAG) \(\mathcal{G}=(\mathcal{N},\mathcal{E})\) with vertex set \(\mathcal{N}\) and directed edge set \(\mathcal{E}\subseteq\{(i,j):i,j\in\mathcal{N},i\neq j\}\). Denote the parents of agent \(i\) as \(\mathcal{P}^{i}:=\{j:(i,j)\in\mathcal{E}\}\), and the corresponding parent actions as \(a^{\mathcal{P}^{i}}\in\times_{j\in\mathcal{P}^{i}}\mathcal{A}^{j}\), as illustrated in Figure 1. Under full observability and with BN \(\mathcal{G}\), we consider a _BN (joint) policy_, \((\pi,\mathcal{G})=(\pi^{1},\cdots,\pi^{N},\mathcal{G}):\mathcal{S}\to\Delta( \mathcal{A})\). Similar to product policies, BN policies are also factored as the product of local policies given the state and action dependencies determined by \(\mathcal{G}\), i.e., \(\pi^{i}:\mathcal{S}\times(\times_{j\in\mathcal{P}^{i}}\mathcal{A}^{j})\to \Delta(\mathcal{A}^{i})\), and thus joint action \(a=(a^{1},\cdots,a^{N})\) is sampled as \(\pi(a|s)=\prod_{i\in\mathcal{N}}\pi^{i}(a^{i}|s,a^{\mathcal{P}^{i}})\).
We make two remarks on BN policies: (i) With the factorization of a BN policy into its local policies, the concept of Nash policy in Definition 3.1 is also applicable to BN policies. (ii) BN policies naturally interpolate product policies and general joint policies, including them as two extremes: BN policies reduce to product policies when DAG \(\mathcal{G}\) is an empty graph (Figure 1 (left)) and can model general joint policies when \(\mathcal{G}\) is dense (Figure 1 (right)).
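To make the execution of such a factorization concrete, the following sketch samples a joint action agent-by-agent along a topological order of the DAG; the function names and the two-agent example are ours, and the snippet only illustrates the idea.

```python
import numpy as np

def sample_joint_action(state, local_policies, parents, rng):
    """Sample a joint action a = (a^1, ..., a^N) from a BN joint policy.

    parents[i] lists the parents of agent i in the DAG; local_policies[i](state, pa)
    returns a probability vector over agent i's local actions given the state and the
    tuple pa of parent actions. Agents are assumed to be indexed in a topological
    order of the DAG, so every parent is sampled before its children.
    """
    actions = []
    for i, policy in enumerate(local_policies):
        pa = tuple(actions[j] for j in parents[i])   # a^{P^i}, already sampled
        probs = policy(state, pa)                    # pi^i(. | s, a^{P^i})
        actions.append(rng.choice(len(probs), p=probs))
    return actions

# Example with N = 2: agent 1 is a parent of agent 2 (a line-correlated DAG).
rng = np.random.default_rng(0)
pi1 = lambda s, pa: np.array([0.5, 0.5])
pi2 = lambda s, pa: np.array([0.9, 0.1]) if pa and pa[0] == 0 else np.array([0.1, 0.9])
a = sample_joint_action(state=0, local_policies=[pi1, pi2], parents=[[], [0]], rng=rng)
```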
## 5 Convergence of the Tabular Softmax BN Policy Gradient in Cooperative MGs
In this section, we consider optimizing BN policies through policy gradient ascent under the tabular softmax parameterization. Under the same assumptions, we are able to extend existing convergence results from product policies to BN policies, asserting that optimizing BN policies through gradient ascent can indeed find global optima (rather than only Nash policies) when the BN's DAG is dense.
Formally, the local policies in the BN policy are parameterized in the tabular softmax manner from the global state and parent actions, i.e., we have, for each agent \(i\), its policy parameter
\[\theta^{i}=\left\{\theta^{i}_{s,a^{\mathcal{P}^{i}},a^{i}}\in\mathbb{R}:s\in \mathcal{S},a^{\mathcal{P}^{i}}\in\times_{j\in\mathcal{P}^{i}}\mathcal{A}^{j}, a^{i}\in\mathcal{A}^{i}\right\}\]
and induced softmax local policy
\[\pi^{i}_{\theta^{i}}\left(a^{i}|s,a^{\mathcal{P}^{i}}\right)\propto\exp\left( \theta^{i}_{s,a^{\mathcal{P}^{i}},a^{i}}\right) \tag{1}\]
with the BN policy parameterized as \(\pi_{\theta}=(\pi^{1}_{\theta^{1}},\cdots,\pi^{N}_{\theta^{N}})\).
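As a concrete illustration of this parameterization, a minimal sketch of the induced local policy is given below; the dictionary layout, names, and the example logits are ours and purely illustrative.

```python
import numpy as np

def tabular_softmax(theta_i, s, parent_actions):
    """pi^i_{theta^i}(. | s, a^{P^i}) of Equation (1); theta_i[(s, parent_actions)]
    holds the logits theta^i_{s, a^{P^i}, a^i} for every local action a^i."""
    logits = np.asarray(theta_i[(s, parent_actions)], dtype=float)
    z = np.exp(logits - logits.max())   # subtract max for numerical stability
    return z / z.sum()

# Example: two local actions, conditioned on state s = 0 and parent actions (1,).
theta_i = {(0, (1,)): [0.3, -0.2]}
probs = tabular_softmax(theta_i, 0, (1,))
```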
Figure 1: Illustration of various DAG topologies.
In Lemma 5.1, we derive the policy gradient form for the BN policy as parameterized in Equation (1), which will be used to establish our convergence results in this section.
It will be also convenient to introduce a few shorthands before stating Lemma 5.1. Consider a subset \(\mathcal{M}\subseteq\mathcal{N}\) of all agents and its complement \(-\mathcal{M}\), such that a joint action can be decomposed as \(a=(a^{\mathcal{M}},a^{-\mathcal{M}})\). Let
\[\pi^{\mathcal{M}}(a^{\mathcal{M}}|s,a^{-\mathcal{M}}):=\frac{\pi(a^{\mathcal{ M}},a^{-\mathcal{M}}|s)}{\sum_{\bar{a}^{\mathcal{M}}}\pi(\bar{a}^{\mathcal{M}},a^{- \mathcal{M}}|s)}\]
be the conditional for \(a^{\mathcal{M}}\) under \(\pi\). Let
\[Q_{\pi}(s,a^{\mathcal{M}}):=\mathbb{E}_{a^{-\mathcal{M}}\sim\pi^{-\mathcal{M}} (\cdot|s,a^{\mathcal{M}})}\left[Q_{\pi}(s,a^{\mathcal{M}},a^{-\mathcal{M}}) \right].\]
Let \(\mathcal{P}^{i}_{+}:=\mathcal{P}^{i}\cup\{i\}\) denote the set of agent \(i\) and its parents. We will also abbreviate \(V_{\pi_{\theta}}\), \(Q_{\pi_{\theta}}\) as \(V_{\theta}\), \(Q_{\theta}\), respectively.
**Lemma 5.1** (Tabular softmax BN policy gradient form, proof in Appendix A.4).: _For the tabular softmax BN policy parameterized as in Equation (1), we have:_
\[\frac{\partial V_{\theta}(\mu)}{\partial\theta^{i}_{s,a^{\mathcal{P}^{i}},a^{ i}}}=\frac{1}{1-\gamma}d_{\mu}^{\pi_{\theta}}(s,a^{\mathcal{P}^{i}})\pi_{ \theta^{i}}^{i}(a^{i}|s,a^{\mathcal{P}^{i}})A_{\theta}^{i}(s,a^{\mathcal{P}^{ i}},a^{i})\]
_where \(d_{\mu}^{\pi_{\theta}}(s,a^{\mathcal{P}^{i}}):=d_{\mu}^{\pi_{\theta}}(s)\sum_{a^{-\mathcal{P}^{i}}}\pi_{\theta}(a^{-\mathcal{P}^{i}},a^{\mathcal{P}^{i}}|s)\), \(A_{\theta}^{i}(s,a^{\mathcal{P}^{i}},a^{i}):=Q_{\theta}(s,a^{\mathcal{P}^{i}_{+}})-Q_{\theta}(s,a^{\mathcal{P}^{i}})\)._
The policy gradient form in Lemma 5.1 generalizes its counterpart for single-agent policies (Agarwal et al., 2021) and for multi-agent product policies (Zhang et al., 2022; Chen et al., 2022) under the tabular softmax policy parameterization, which enables us to extend the convergence results to the BN joint policies.
Below we state the assumptions that have been used in (Zhang et al., 2022; Chen et al., 2022) to establish the convergence results for product policies, i.e., \(\mathcal{G}=(\mathcal{N},\emptyset)\).
**Assumption 5.2**.: For any \(\pi\) and any state \(s\) of the Markov game, \(d_{\mu}^{\pi}(s)>0\).
**Assumption 5.3** (Reward function is bounded).: The reward function \(r\) is bounded in the range \([r_{\min},r_{\max}]\), such that the value function \(V\) is bounded as \(V_{\min}\leq V_{\pi}(s)\leq V_{\max}\ \forall s,\pi\).
**Assumption 5.4**.: Following the policy gradient dynamics (2), the policy of every agent \(i\) converges asymptotically, i.e., \(\pi^{i}_{\theta^{i}_{t}}\rightarrow\pi^{i}_{\theta^{i}_{\infty}}\) as \(t\rightarrow\infty,\ \forall i\).
Assumptions 5.2 and 5.3 are standard assumptions used in (Agarwal et al., 2021; Zhang et al., 2022; Chen et al., 2022), which ensure sufficient coverage of all states and boundedness of the reward function, respectively. Assumption 5.4 is a stronger assumption used in (Zhang et al., 2022; Chen et al., 2022). A sufficient condition for Assumption 5.4, given by (Fox et al., 2022), is that the fixed points of the dynamics induced by the gradient in Lemma 5.1 are isolated. The purpose of Assumption 5.4 is to establish the convergence of \(A_{\theta}^{i}(s,a^{\mathcal{P}^{i}},a^{i})\) whenever \(d_{\mu}^{\pi_{\theta}}(s,a^{\mathcal{P}^{i}})\) is positive. Otherwise, it can be the case that both \(\pi_{\theta^{i}}^{i}(a^{i}|s,a^{\mathcal{P}^{i}})\) and \(A_{\theta}^{i}(s,a^{\mathcal{P}^{i}},a^{i})\) diverge while the gradient converges to zero.
We next present our convergence results for the standard policy gradient dynamics in Section 5.1, where Assumptions 5.2 - 5.4 hold, with proofs in Appendix A.
### Asymptotic Convergence of the Tabular Softmax BN Policy Gradient Dynamics
In Theorem 5.5, we establish, under the tabular softmax BN policy parameterization, the asymptotic convergence to a Nash policy in a MPG of the standard policy gradient dynamics:
\[\theta_{t+1}^{i}=\theta_{t}^{i}+\eta\nabla_{\theta^{i}}V_{\theta_{t}}(\mu) \tag{2}\]
where \(\eta\) is the fixed stepsize and the update is performed by every agent \(i\in\mathcal{N}\).
For each agent \(i\), parent actions \(a^{\mathcal{P}^{i}}\), and local action \(a^{i}\), Equation (2) becomes
\[\theta_{s,a^{\mathcal{P}^{i}},a^{i}}^{i,t+1}=\theta_{s,a^{\mathcal{P}^{i}},a^{i}}^{i,t}+\eta\nabla_{\theta^{i}_{s,a^{\mathcal{P}^{i}},a^{i}}}V_{\theta_{t}}(\mu) \tag{3}\]
**Theorem 5.5** (Asymptotic convergence of BN policy gradient, proof in Appendix A.17).: _Under Assumptions 5.2 - 5.4, suppose every agent \(i\) follows the policy gradient dynamics (2), which results in the update dynamics (3) for each agent \(i\), parent actions \(a^{\mathcal{P}^{i}}\), and local action \(a^{i}\), with \(\eta\leq\frac{(1-\gamma)^{3}}{8N(r_{\max}-r_{\min})}\); then the converged BN policy \((\pi^{1}_{\theta^{1}_{\infty}},\cdots,\pi^{N}_{\theta^{N}_{\infty}},\mathcal{G})\) is a Nash policy._
The main trick of our proof for Theorem 5.5 is to view the parent actions \(a^{\mathcal{P}^{i}}\) as part of the state, i.e., \(d_{\mu}^{\pi_{\theta}}(s,a^{\mathcal{P}^{i}})\) becomes the new state visitation measure for the augmented state \((s,a^{\mathcal{P}^{i}})\). After this transformation, the update dynamics in Lemma 5.1 resembles the one for the product policy, i.e., \(\mathcal{G}:=(\mathcal{N},\emptyset)\), and we can thus straightforwardly generalize the existing results for product joint policies to BN policies. However, the problem with this formulation of the new state \((s,a^{\mathcal{P}^{i}})\) is that \(d_{\mu}^{\pi_{\theta}}(s,a^{\mathcal{P}^{i}})=d_{\mu}^{\pi_{\theta}}(s)\sum_{a^{-\mathcal{P}^{i}}}\pi_{\theta}(a^{-\mathcal{P}^{i}},a^{\mathcal{P}^{i}}|s)\) can be zero even if the state visitation measure \(d_{\mu}^{\pi_{\theta}}(s)\) is strictly positive. This is the main reason we cannot establish results stronger than the ones obtained in (Zhang et al., 2022), even for the fully connected Bayesian network with \(N(N-1)/2\) edges, which intuitively behaves similarly to the single-agent setting (Agarwal et al., 2021) and should therefore result in an optimal policy rather than only a Nash policy.
**Assumption 5.6**.: Any augmented state \((s,a^{\mathcal{P}^{i}})\) has positive visitation measure, i.e., \(d_{\mu}^{\pi_{\theta}}(s,a^{\mathcal{P}^{i}})>0\).
**Definition 5.7** (Fully-correlated BN policy).: A BN policy \((\pi,(\mathcal{N},\mathcal{E}))\) is fully-correlated if \(|\mathcal{E}|=N(N-1)/2\), the maximum number of edges in a DAG.
**Corollary 5.8** (Asymptotic convergence of BN policy gradient to optimal fully-correlated BN joint policy, proof in Appendix A.18).: _Under Assumptions 5.2 - 5.4 and the additional Assumption 5.6 that assumes positive visitation measure for any augmented state, suppose every agent \(i\in\mathcal{N}\) follows the policy gradient dynamics (2), which results in the update dynamics (3) for each agent \(i\), parent actions \(a^{\mathcal{P}^{i}}\), and local action \(a^{i}\), with \(\eta\leq\frac{(1-\gamma)^{3}}{8N(r_{\max}-r_{\min})}\); then the converged fully-correlated BN policy \((\pi^{1}_{\theta^{1}_{\infty}},\cdots,\pi^{N}_{\theta^{N}_{\infty}},\mathcal{G})\) is an optimal policy._
## 6 Practical Algorithm
The convergence guarantee in Theorem 5.5 relies on global observability and the availability of the oracle value function, which is hard to satisfy in more complicated scenarios. In this section, we relax those assumptions and propose an end-to-end training framework which can augment any multi-agent actor-critic method with a differentiable Bayesian network determining action dependencies among agents' local policies. Figure 2 presents an overview of our proposed neural architecture, consisting of the differentiable Bayesian network and the actor-critic networks as its main components, which we describe below.
### Differentiable Bayesian Network
The graph model \(G\) takes the joint partial observation \(o=\{o^{i}\}_{i\in\mathcal{N}}\) as input, and outputs a DAG \(G(o)\) represented by an adjacency matrix, i.e., \(G(o)[j,i]=1\) if and only if \(j\in\mathcal{P}^{i}\). Using the same decomposition of a DAG into the product of a permutation matrix and an upper triangular matrix (Charpentier et al., 2022) described in Section 2, \(G\) consists of two sub-modules, Permutation Net \(\Pi\) and Edge Net \(U\), which both take the joint partial observation \(o\) as input and output the logits \(l_{\Pi}\) and \(l_{U}\) for the permutation matrix and the upper triangular matrix, respectively. We use the reparameterization tricks Straight-Through Gumbel-Softmax (Jang et al., 2016) and Gumbel-Sinkhorn (Mena et al., 2018) to differentiably transform \(l_{\Pi}\) and \(l_{U}\) into the corresponding permutation matrix \(\Pi(o)\) and upper triangular matrix \(U(o)\). The resulting DAG is \(G(o)=\Pi^{T}(o)U(o)\Pi(o)\), where \(\Pi(o)\) determines the topological ordering of the agents and \(U(o)\) determines the structure of the output DAG.
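For illustration, a simplified PyTorch sketch of this construction is given below. It is a sketch of the idea rather than our released implementation: the network sizes are placeholders, the Sinkhorn output is kept as a soft (doubly stochastic) matrix rather than being rounded to a hard permutation with a straight-through estimator, and the class and function names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gumbel_sinkhorn(log_alpha, tau=1.0, n_iter=20):
    """Soft permutation via Gumbel noise plus Sinkhorn normalisation (Mena et al., 2018)."""
    gumbel = -torch.log(-torch.log(torch.rand_like(log_alpha) + 1e-20) + 1e-20)
    log_p = (log_alpha + gumbel) / tau
    for _ in range(n_iter):                        # alternate row/column normalisation in log space
        log_p = log_p - torch.logsumexp(log_p, dim=-1, keepdim=True)
        log_p = log_p - torch.logsumexp(log_p, dim=-2, keepdim=True)
    return log_p.exp()                             # approximately doubly stochastic

class DAGLearner(nn.Module):
    """Maps the joint observation o to a (soft) DAG adjacency G(o) = Pi^T U Pi."""
    def __init__(self, obs_dim, n_agents, hidden=64):
        super().__init__()
        self.n = n_agents
        self.perm_net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, n_agents * n_agents))
        self.edge_net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, n_agents * n_agents))

    def forward(self, o, tau=1.0):
        n = self.n
        perm_logits = self.perm_net(o).view(-1, n, n)
        edge_logits = self.edge_net(o).view(-1, n, n)
        P = gumbel_sinkhorn(perm_logits, tau)      # soft permutation (a hard one would use matching)
        # Per-edge straight-through Gumbel-Softmax over {edge, no edge}, kept strictly upper triangular.
        two_way = torch.stack([edge_logits, torch.zeros_like(edge_logits)], dim=-1)
        U = F.gumbel_softmax(two_way, tau=tau, hard=True)[..., 0]
        U = torch.triu(U, diagonal=1)
        return P.transpose(-1, -2) @ U @ P         # adjacency G(o)
```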
### Actor and Critic Networks
Our communication network is compatible with any multi-agent actor-critic architecture. Our experiments mainly explore discrete actors which sample actions conditioned on the local observation \(o^{i}\) and the parent actions \(a^{\mathcal{P}^{i}}\), i.e., \(a^{i}\sim\pi^{i}(\cdot|o^{i},a^{\mathcal{P}^{i}})\). The critic takes the joint local observation or the environment-provided global state as input. Both actors and critics are implemented as deep neural networks, with details in the appendix.
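For illustration, a minimal actor that consumes the local observation together with zero-padded parent-action slots (so that the input dimensionality stays fixed regardless of which agents are currently parents) could look as follows; the hidden size is a placeholder and the exact architecture is given in the appendix.

```python
import torch
import torch.nn as nn

class LocalActor(nn.Module):
    """Discrete actor pi^i(a^i | o^i, a^{P^i}) with zero-padded non-parent action slots."""
    def __init__(self, obs_dim, n_agents, n_actions, hidden=64):
        super().__init__()
        in_dim = obs_dim + n_agents * n_actions   # o^i plus one one-hot action slot per agent
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, obs_i, parent_actions_onehot):
        # parent_actions_onehot: (batch, n_agents, n_actions); rows of agents that are
        # not parents of agent i are all zeros, keeping the input size fixed.
        x = torch.cat([obs_i, parent_actions_onehot.flatten(1)], dim=-1)
        return torch.distributions.Categorical(logits=self.net(x))
```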
### Training
Critic \(Q\) is trained to minimize the TD error \(\mathcal{L}_{\mathrm{TD}}=\mathbb{E}_{o_{t},a_{t},r_{t},o_{t+1}}[(Q(o_{t},a_{t})-y_{t})^{2}]\), where \(o_{t}:=(o^{1}_{t},...,o^{N}_{t})\), \(a_{t}:=(a^{1}_{t},...,a^{N}_{t})\), and \(y_{t}:=r_{t}+\gamma Q(o_{t+1},a_{t+1})\) is the TD target. Actor \(\pi^{i}\) can be updated by any multi-agent policy gradient algorithm, such as MAPPO, e.g., \(\mathcal{L}^{i}_{\mathrm{actor}}=\mathbb{E}_{o_{t},a_{t}}[\log\pi^{i}(a^{i}|o^{i},a^{\mathcal{P}^{i}})A(o_{t},a_{t})]\). Due to the differentiability enabled by Gumbel-Softmax and Gumbel-Sinkhorn, the gradient can flow from \(\pi^{i}\) to the DAG \(G\), and then to its sub-modules Permutation Net \(\Pi\) and Edge Net \(U\). The DAG density of \(G\) is defined as \(\rho(G):=\frac{2}{N(N-1)}\sum_{i,j\in\mathcal{N}}G[j,i]\), and is regularized by the term \(\alpha|\rho(G)-\eta|\), which constrains the density of the learned DAG to be close to the target rate \(\eta\).
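A compact sketch of these training terms is shown below; for brevity the actor term is the vanilla policy-gradient surrogate written above rather than MAPPO's clipped objective, and all names are ours.

```python
import torch

def dag_density(G):
    """rho(G) = 2 / (N (N - 1)) * sum_{i,j} G[j, i], for a batch of adjacency matrices."""
    n = G.shape[-1]
    return G.sum(dim=(-2, -1)) * 2.0 / (n * (n - 1))

def total_loss(q_values, td_targets, log_probs, advantages, G, alpha, eta):
    """Critic TD loss + policy-gradient actor loss + DAG-density regulariser alpha * |rho(G) - eta|.

    log_probs: sum over agents of log pi^i(a^i | o^i, a^{P^i}) for each sample;
    alpha, eta: weight and target density of the penalty term.
    """
    critic_loss = ((q_values - td_targets.detach()) ** 2).mean()
    actor_loss = -(log_probs * advantages.detach()).mean()
    density_loss = alpha * (dag_density(G) - eta).abs().mean()
    return critic_loss + actor_loss + density_loss
```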
## 7 Experiments
Theorem 5.5 only guarantees that the policy gradient ascent converges to a Nash policy, but does not guarantee the solution quality of the convergent policy. Our experiments in the tabular softmax BN setting aim to see how well different (fixed) DAG topologies of the BN policy perform empirically and the reasons behind it. Then, in the sample-based
Figure 2: Our architecture for the BN joint policy, which includes each agent \(i\)’s policy \(\pi^{i}\) and a differentiable DAG learner. DAG \(G(o)=\Pi^{T}(o)U(o)\Pi(o)\) is generated by sending the joint local observation \(o\) to Permutation Net \(\Pi\) and Edge Net \(U\). Based on \(G(o)\), agent \(i\) requests actions \(a^{\mathcal{P}^{i}}=(a^{j},a^{k},a^{l})\) from its parents \((j,k,l)\), which, together with local observation \(o^{i}\), are taken as input into agent \(i\)’s local policy \(\pi^{i}\) to output \(a^{i}\). During training, the gradient (shown in the red dotted lines) flows from \(\pi^{i}\) to \(G\), then to \(\Pi\) and \(U\).
setting, we want to see: 1) how well our algorithm proposed in Section 6 performs against baselines and ablations, and 2) what the potential meaning is of the DAG learned by the context-aware differentiable DAG learner.
### Environments
Our environments include (1) Coordination Game, a small-size domain where we can afford computing exact policy gradient under tabular parameterization, (2) Aloha, a domain where action correlations are intuitively helpful, and (3) StarCraft II Micromanagement (SMAC), a common cooperative MARL benchmark that is more complicated.
**Coordination Game.** We use the version in (Chen et al., 2022) with \(N=2,3,5\) agents. The state space and action space are \(\mathcal{S}=\mathcal{S}^{1}\times\cdots\times\mathcal{S}^{N},\mathcal{A}=\mathcal{A}^{1}\times\cdots\times\mathcal{A}^{N}\), respectively, where \(\forall i\leq N,\mathcal{S}^{i}=\{0,1\},\mathcal{A}^{i}=\{0,1\}\). It is a cooperative setting with the same reward for all the agents, which favors having more agents in the same local state. The transition function for each agent \(i\)'s local state only depends on the local action: \(P(s^{i}=0|a^{i}=0)=1-\epsilon\), \(P(s^{i}=0|a^{i}=1)=\epsilon\), where \(\epsilon=0.1\). The performance of the learned joint policy is measured by the _price of anarchy_ (POA) (Roughgarden, 2015), \(\frac{V_{\pi}(\mu)}{\max_{\pi}V_{\pi}(\mu)}\), which is bounded in the range \([0,1]\). The convergence rate is captured by the _Nash-Gap_, defined as \(\text{Nash-gap}(\pi):=\max_{i}\big{(}\max_{\pi^{i}}V_{\pi^{i},\pi^{-i}}(\mu)-V_{\pi}(\mu)\big{)}\), where a Nash policy has a Nash-Gap of zero.
**Aloha.** We use the version in (Wang et al., 2022) with 10 agents (islands). The 10 islands are arranged in a \(2\times 5\) grid, each of which has a backlog of messages to send. At each timestep, agents can either choose to send or not send. The goal is to send as many messages as possible without colliding with the ones sent by the neighboring islands. At each timestep, with a probability of 0.6, a new message is generated for each agent. All agents receive a reward of 0.1 for each message sent successfully without collision, and a reward of -10 for each collision.
**StarCraft II Micromanagement (SMAC).** SMAC (Samvelyan et al., 2019) has become one of the most popular MARL benchmarks. We choose the _Super Hard_ scenarios 6h_vs_8z and MMM2 to evaluate our proposed algorithm, which have 6 agents and 10 agents, respectively.
### Baselines
As baselines to compare against our context-aware DAG topology learning to bring in correlations between local policies, we consider the following DAGs that are fixed during training (i.e., no context-awareness). The **Fully-correlated** baseline has DAG \((\mathcal{N},\{(j,i)|i>j\})\), which has the maximum number \(N(N-1)/2\) of edges for any DAG. **Uncorrelated** has DAG \((\mathcal{N},\emptyset)\), i.e., a product policy. **Line-correlated** has DAG \((\mathcal{N},\{(j,i)|i=j+1\})\). The DAGs of all baselines have a topological ordering of \((1,2,\cdots,N)\), i.e., \(\Pi\) defined in Section 6.1 is fixed as the identity matrix.
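In adjacency-matrix form, with \(A[j,i]=1\) meaning that \(j\) is a parent of \(i\), these fixed baselines can be written as in the small sketch below (names are ours).

```python
import numpy as np

def baseline_dag(n_agents, kind):
    """Adjacency matrices for the fixed-topology baselines."""
    A = np.zeros((n_agents, n_agents), dtype=int)
    if kind == "fully_correlated":      # all (j, i) with i > j: N(N-1)/2 edges
        A[np.triu_indices(n_agents, k=1)] = 1
    elif kind == "line_correlated":     # (j, i) with i = j + 1
        A[np.arange(n_agents - 1), np.arange(1, n_agents)] = 1
    elif kind == "uncorrelated":        # empty graph: product policy
        pass
    return A
```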
### Results of Fixed DAG Topologies with Tabular Exact Policy Gradients
Figure 3 presents the POA and the Nash-gap of the algorithms under the tabular softmax parameterization with different DAG topologies. The results demonstrate that the Nash-gap indeed decreases and converges close to zero as proved in Theorem 5.5. Fully-correlated consistently outperforms Line-correlated and Uncorrelated but does not converge to an optimal policy with a POA of \(1\), because Assumption 5.6 is violated, i.e., the visitation measure of some \((s,a^{\mathcal{P}^{i}})\) converges to zero. On the other hand, the convergence rate of Fully-correlated is the slowest, and one possible reason is that it has the largest number of parameters. Line-correlated has a similar performance to Fully-correlated in the scenarios with \(N=2,5\) agents, but it has poor performance in the scenario with 3 agents. This illustrates the fact that a fixed DAG topology is not desirable in all scenarios and can degenerate to the performance of Uncorrelated.
### Results of Context-Aware DAG Topology Learning with Multi-Agent Actor-Critic Methods
In this section, we run experiments to compare our context-aware DAG topology against the baselines in the Coordination Game, where we assume global observability, and in Aloha and SMAC, where we assume partial observability. We relax the requirement of only sharing local actions to also include local observations when found beneficial, based on the context-aware DAG. Specifically, based on the context-aware DAG, the experiments in Aloha share both local ac
Figure 3: POA (top) and Nash-gap (bottom) under the tabular softmax BN policy gradient dynamics with various BN DAG topologies (means and standard errors over 50 random seeds). Note that with \(N=2\) agents, Line-correlated and Fully-correlated are the same and thus have overlapping curves.
tions and observations, whereas the ones in SMAC only share local actions. We implement the algorithms based on MAPPO without recurrency, i.e., the model only incorporates information from the current timestep instead of from the whole trajectory, in both the actor and the critic. To obtain the uniform input dimensionality required by the MLP-based actor, we pad the action slots of the agents that are not selected as parents by the DAG with dummy zero vectors.
#### 7.4.1 Coordination Game
We run the experiments in the Coordination Game with \(N=2,5\) under full observability, and no regularization (i.e., \(\alpha=0\)), plotted in Figure 4(top). Remarkably, the result for \(N=2\) shows that our context-aware DAG learning outperforms Fully-correlated. One possible explanation is that the dynamic graph leads to sufficient exploration of the augmented state defined in Assumption 5.6, and thus results in better performance. The context-aware DAG topology performs similarly to Fully-correlated for \(N=5\), and both outperform Uncorrelated. As shown in Figure 4(bottom), the density of the unregularized learned context-aware DAG is increasing in both the \(N=2\) and \(N=5\) scenarios, from 50% to 55% and 50% to 53%, respectively.
#### 7.4.2 Aloha
We run the experiments in Aloha with \(N=10\) under partial observability (each agent observes the backlog of its own messages), and no sparsity regularization (i.e., \(\alpha=0\)). The results in Figure 4(top) show that our context-aware DAG learning performs comparably to Fully-correlated, and both outperform Uncorrelated. Note that the initial policy at timestep \(0\) with a random initialization will generate collisions resulting in large negative rewards. The policy soon learns to avoid collisions, and we only show the performance once the policy can generate positive rewards. As shown in Figure 4(bottom), the density of the unregularized learned context-aware DAG also increases, from 50% to 62%, which is larger than that learned in the Coordination Game. This suggests that the action dependencies in Aloha may be more important than those in the Coordination Game.
Figure 4: _Top:_ Performance of the learned context-aware-correlated against the Fully-correlated and Uncorrelated baselines (means and standard errors over 50 random seeds for the Coordination Game, 10 random seeds for Aloha, and 5 random seeds for 6h_vs_8z and MMM2). _Bottom:_ Changes in the DAG density of the learned context-aware BN policy during training with no regularization.

**Analysis: Learned DAG topologies.** In the timestep shown in Figure 5, each agent has a backlog of one message to send. For Context-Aware-Correlated, guided by the learned topology, agent 7 obtains extra parent action (and observation) dependencies from nearby agents 2 and 8, which do not send, and far-away agent 3, which sends but causes no collision. Agent 7 is therefore more confident to send its message. For Uncorrelated, agents need to be more careful to avoid collisions: both agents 2 and 7 choose not to send in this case to make it safer for agents 3 and 6 to send. This results in one less message sent among the shown agents.
#### 7.4.3 SMAC
We run the experiments in the _Super Hard_ SMAC scenarios 6h_vs_8z and MMM2 under partial observability with no last actions stored, plotted in Figure 4(top). 6h_vs_8z and MMM2 are noisier than the Coordination Game and Aloha, and we find no benefit in using the permutation matrix \(\Pi\) to change the topological ordering. Therefore, we use a fixed topological ordering where \(\Pi\) is the identity matrix. The action dependencies in both scenarios are crucial: as Figure 4(bottom) shows, the unregularized context-aware graph degenerates to an almost Full-Dependency graph in 6h_vs_8z and to a densely correlated graph with around 70% DAG density in MMM2. We therefore regularize it to control the DAG density with an annealing strategy, which gradually decreases the threshold \(\eta\) and increases the regularization weight \(\alpha\) (a schematic of the schedule is given below). Specifically, in the first \(a\)% of training steps, the sparsity threshold \(\eta\) is set to 1, which encourages the agents to learn that the action dependencies are useful. Then, from \(a\)% to \(b\)% of the total training steps, we decrease the sparsity threshold \(\eta\) from 1 to 0 uniformly in \(l_{\eta}\) steps. From \(b\)% to \(c\)% of the total training steps, we uniformly increase the regularization weight \(\alpha\) in \(l_{\alpha}\) steps, from 0.1 to 1 in 6h_vs_8z and from 0.05 to 0.5 in MMM2.
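The following sketch illustrates one possible implementation of this multi-phase schedule; the phase boundaries \(a\), \(b\), \(c\) and the step counts \(l_{\eta}\), \(l_{\alpha}\) are hyperparameters whose actual values are not specified here.

```python
def annealing_schedule(step, total_steps, a=0.2, b=0.5, c=0.8,
                       l_eta=10, l_alpha=10, alpha_lo=0.1, alpha_hi=1.0):
    """Return (eta, alpha) at the current training step (illustrative sketch).

    Phase 1 [0, a):  eta = 1, alpha kept at its initial value.
    Phase 2 [a, b):  eta decreased uniformly from 1 to 0 in l_eta discrete steps.
    Phase 3 [b, c):  alpha increased uniformly from alpha_lo to alpha_hi in l_alpha steps.
    """
    frac = step / total_steps
    if frac < a:
        return 1.0, alpha_lo
    if frac < b:
        k = min(int((frac - a) / (b - a) * l_eta) + 1, l_eta)
        return 1.0 - k / l_eta, alpha_lo
    if frac < c:
        k = min(int((frac - b) / (c - b) * l_alpha) + 1, l_alpha)
        return 0.0, alpha_lo + k / l_alpha * (alpha_hi - alpha_lo)
    return 0.0, alpha_hi
```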
As shown in Figure 6, from 0% to a% of the total training steps, the performance is similar to the Fully-correlated baseline in both scenarios, with the DAG density quickly becoming close to 1. From a% to b%, as we decrease the sparsity threshold \(\eta\), the performance fluctuates but remains much better than Uncorrelated. From b% to c%, we increase the regularization weight \(\alpha\). For 6h_vs_8z, the performance quickly drops close to Uncorrelated, but then recovers quickly to be better than Uncorrelated. For MMM2, the transition is smoother and the performance consistently beats the Uncorrelated baseline. This multi-phase regularization strategy ends with purely uncorrelated policies, as shown in Figure 4(bottom), yet achieves better performance than Uncorrelated, which is trained with purely uncorrelated policies during the whole training phase, as shown in Figure 6.
**Analysis: Visibility.** A dependency on allies that are not visible is meaningless. Since the action dependencies in both 6h_vs_8z and MMM2 are important, one simple strategy an agent can learn to maintain good performance while decreasing the DAG density is to only depend on visible agents. This is indeed the case: the percentage of visible agents that an agent chooses to depend on increases almost consistently in Figure 7.
**Analysis: Average health.** As shown in Figure 8, for 6h_vs_8z, agents 5 and 6 tend to depend on the actions of agents with a relatively low health bar, while agent 2 tends to depend on the actions of agents with relatively high health. This may be because we fixed the topological ordering \((1,\cdots,6)\): agents 5 and 6 can potentially have more dependencies, and they learn to depend on the actions of agents with low health, whereas agent 2 can only depend on the action of agent 1, which does not always have low health. For MMM2, health does not differentiate the agents' selections of parent actions until the middle of the a%-to-b% phase, where the increase of the regularization causes all agents to depend on the actions of agents with relatively high health, with agent 7 being the most extreme.
Figure 5: Learned DAG topology in Aloha. Only \(\mathcal{P}^{7}\) is shown.

Figure 6: The performance of the sparsity-regularized context-aware-correlated (with density annealing) in 6h_vs_8z and MMM2, against the Fully-correlated and Uncorrelated baselines. The black lines show the changes in DAG density of the sparsity-regularized context-aware-correlated (with density annealing) during training.

Figure 7: Visibility of allies during training (context-aware-correlated with density annealing) in 6h_vs_8z and MMM2.

**Analysis: Average distance.** As shown in Figure 9, in both scenarios, agent 6 tends to depend on the actions of agents at relatively long distances. In 6h_vs_8z, agent 2 tends to depend on the actions of agents at relatively short distances, while distance is relatively irrelevant to the selection of parent actions for agents other than agent 6. This may also be because we fixed the topological ordering of agents \((1,\cdots,N)\): agent 6 can potentially have more dependencies, so it learns to depend on the actions of agents that are far away, whereas agent 2 can only depend on the action of agent 1, which may sometimes be nearby. For MMM2, agent 8 consistently depends on the actions of agents at relatively long distances, whereas agent 5 behaves the opposite way.
**Analysis: Emergence of multi-modality in the BN policy.** Previous works (Baker et al., 2019; Lowe et al., 2019; Tang et al., 2021) show that the emergence of diverse behaviors is prevalent in many multi-agent reinforcement learning problems, and a recent work (Fu et al., 2022) shows the benefit of learning a multi-modal policy. Here we analyze the emergence of multi-modality in the BN policy learned with the annealing strategy. To quantify multi-modality, we measure the KL divergence between the distribution of the BN policy and the distribution of the same BN policy with an empty DAG. The result in Figure 10 shows that in 6h_vs_8z, agent 6, which has the most possible parent actions, has the largest multi-modality, whereas agent 2, which can only have agent 1 as a parent, has the smallest. In MMM2, agent 8 emerges with the largest multi-modality, whereas agent 2 again has the smallest. In both scenarios, increasing the DAG density regularization also decreases multi-modality, and the purely decentralized policy has zero multi-modality.
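A sketch of this measurement for a single agent is shown below; the actor signature, tensor shapes and the use of zeroed parent actions to emulate an empty DAG are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def agent_multimodality(actor_i, obs_i, parent_action_samples):
    """KL-based multi-modality score for agent i (illustrative sketch).

    actor_i(obs, parents) is assumed to return action logits; obs_i has shape
    (1, obs_dim) and parent_action_samples has shape (n_samples, parent_dim).
    The score is KL(pi_BN || pi_empty), i.e. the divergence between the policy
    conditioned on sampled parent actions and the same policy with an empty DAG,
    averaged over the sampled parent contexts.
    """
    n = parent_action_samples.shape[0]
    obs = obs_i.expand(n, -1)
    log_p_bn = F.log_softmax(actor_i(obs, parent_action_samples), dim=-1)
    log_p_empty = F.log_softmax(actor_i(obs, torch.zeros_like(parent_action_samples)), dim=-1)
    kl = (log_p_bn.exp() * (log_p_bn - log_p_empty)).sum(dim=-1)
    return kl.mean()
```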
## 8 Conclusion
In this paper, we have motivated action correlations for cooperative MARL and proposed the notion of a BN joint policy to introduce such correlations. We have then derived the BN policy gradient formula and proved asymptotic convergence to a Nash policy under the tabular softmax BN policy parameterization. Further, we have proposed a practical algorithm to adapt any multi-agent actor-critic method to realize the BN joint policy and empirically demonstrated the benefits of the proposed method.
## Acknowledgments
Dingyang Chen acknowledges funding support from NSF award IIS-2154904. Qi Zhang acknowledges funding support from NSF award IIS-2154904 and NSF CAREER award 2237963. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the views of the sponsors.
|
2303.15153 | Remarks on compressible convection in Super-Earths | The radial density of planets increases with depth due to compressibility,
leading to impacts on their convective dynamics. To account for these effects,
including the presence of a quasi-adiabatic temperature profile and entropy
sources due to dissipation, the compressibility is expressed through a
dissipation number, $\mathcal{D}$, proportional to the planet's radius and
gravity. In Earth's mantle, compressibility effects are moderate, but in large
rocky or liquid exoplanets (Super-Earths), the dissipation number can become
very large. This paper explores the properties of compressible convection when
the dissipation number is significant. We start by selecting a simple Murnaghan
equation of state that embodies the fundamental properties of condensed matter
at planetary conditions. Next, we analyze the characteristics of adiabatic
profiles and demonstrate that the ratio between the bottom and top adiabatic
temperatures is relatively small and probably less than 2. We examine the
marginal stability of compressible mantles and reveal that they can undergo
convection with either positive or negative superadiabatic Rayleigh numbers.
Lastly, we delve into simulations of convection performed using the exact
equations of mechanics, neglecting inertia (infinite Prandtl number case), and
examine their consequences for Super-Earths dynamics. | Yanick Ricard, Thierry Alboussière | 2023-03-27T12:39:58Z | http://arxiv.org/abs/2303.15153v1 | # Remarks on compressible convection in Super-Earths
###### Abstract
The radial density of planets increases with depth due to compressibility, leading to impacts on their convective dynamics. To account for these effects, including the presence of a quasi-adiabatic temperature profile and entropy sources due to dissipation, the compressibility is expressed through a dissipation number, \(\mathcal{D}\), proportional to the planet's radius and gravity. In Earth's mantle, compressibility effects are moderate, but in large rocky or liquid exoplanets (Super-Earths), the dissipation number can become very large. This paper explores the properties of compressible convection when the dissipation number is significant. We start by selecting a simple Murnaghan equation of state that embodies the fundamental properties of condensed matter at planetary conditions. Next, we analyze the characteristics of adiabatic profiles and demonstrate that the ratio between the bottom and top adiabatic temperatures is relatively small and probably less than 2. We examine the marginal stability of compressible mantles and reveal that they can undergo convection with either positive or negative superadiabatic Rayleigh numbers. Lastly, we delve into simulations of convection performed using the exact equations of mechanics, neglecting inertia (infinite Prandtl number case), and examine their consequences for Super-Earths dynamics.
## 1 Adiabatic conditions inside a convective planet
It is well known that convection of a compressible fluid at high Rayleigh number brings the average radial profiles of density, temperature and pressure close to their adiabatic and hydrostatic values (\(\rho_{a}\), \(T_{a}\), \(P_{a}\)) according to
\[\frac{\mathrm{d}\ln\rho_{a}}{\mathrm{d}z}+\frac{\alpha_{a}g}{\Gamma _{a}C_{P}^{a}} =0, \tag{1a}\] \[\frac{\mathrm{d}\ln T_{a}}{\mathrm{d}z}+\frac{\alpha_{a}g}{C_{P}^ {a}} =0,\] (1b) \[\frac{\mathrm{d}P_{a}}{\mathrm{d}z}+\rho_{a}g =0, \tag{1c}\]
\(z\) being the vertical coordinate (directed against gravity \(\mathbf{g}=-g\mathbf{e}_{z}\)). In these equations, \(\alpha\) is the thermal expansivity, \(C_{P}\) is the heat capacity (or specific heat) at constant pressure and \(\Gamma\) is the Gruneisen parameter
\[\Gamma=\frac{\alpha K_{T}}{\rho C_{V}}=\frac{1}{\rho C_{V}}\left(\frac{\partial P }{\partial T}\right)_{V}. \tag{2}\]
The superscript or subscript 'a' in equations (1a)-(1b)-(1c) indicates that the various quantities are computed along the adiabat itself.
A reasonable equation of state (EoS) for a condensed planet is based on the observation that the Gruneisen parameter is essentially a function of density (Anderson, 1979) according to
\[\Gamma=\Gamma_{0}\left(\frac{\rho_{0}}{\rho}\right)^{q}, \tag{3}\]
where \(q\) is around 1, \(\rho_{0}\) and \(\Gamma_{0}\) are the density and Gruneisen parameter at standard conditions that we choose to be at the surface of the planet. On average, the Gruneisen parameter is between 1 and 2 in the mantle (e.g. Stacey and Davis, 2004) or in the core (Alfe et al., 2002). In the following we will use \(q=1\). In this case the EoS appropriate
for condensed materials becomes
\[P=\frac{K_{T}^{0}}{n}\left[\left(\frac{\rho}{\rho_{0}}\right)^{n}-1\right]+ \alpha_{0}K_{T}^{0}(T-T_{0}), \tag{4}\]
where at reference temperature \(T_{0}\), the pressure-density relation is given by a Murnaghan expression (Murnaghan, 1951) and \(n\approx 3-4\) for solid silicates and for liquid silicates or metals. In the equation (4), \(\alpha_{0}\) and \(K_{T}^{0}\) are the thermal expansivity and the isothermal incompressibility at reference conditions. Although EoS (4) is simple and empirical, it encapsulates the typical properties of solids and fluids and gives a very good fit to the radial density of the Earth assuming its adiabaticity, away from the transition zone discontinuity (Ricard et al., 2022). This EoS implies direct relations between thermal expansivity and incompressibility with density which are
\[\alpha=\alpha_{0}\left(\frac{\rho_{0}}{\rho}\right)^{n}, \tag{5}\]
\[K_{T}=K_{T}^{0}\left(\frac{\rho}{\rho_{0}}\right)^{n}, \tag{6}\]
and again these two expressions provide realistic descriptions of the properties measured in laboratory experiments. Since the Gruneisen parameter (3) is only density-dependent, one gets a simple relation between the adiabatic temperature and the adiabatic density by combining (1a) and (1b)
\[T_{a}=T_{a}^{t}\exp\left[\Gamma_{0}\left(\frac{\rho_{0}}{\rho_{a}^{t}}-\frac{ \rho_{0}}{\rho_{a}}\right)\right], \tag{7}\]
where \(T_{a}^{t}\) and \(\rho_{a}^{t}\) are the surface (\(t\) stands for top) adiabatic temperature and density.
We make two additional approximations when deriving analytical expressions for the adiabatic profiles (those approximations will not be used in the numerical computations as they would invalidate them).
* We assume the heat capacities at constant volume and constant pressure, \(C_{V}\) and \(C_{P}\), are equal and constant. The heat capacities are indeed very close when \(\alpha T\ll 1\), which is the case for condensed material, and they are both close to the Dulong and Petit value \(C_{V}\approx C_{P}\approx 3\mathcal{R}\) in J K\({}^{-1}\) mol\({}^{-1}\) (\(\mathcal{R}\) is the gas constant) at large temperatures (Petit and Dulong, 1819).
* The surface adiabatic temperature is likely different from the surface temperature; however, as \(\alpha(T_{a}^{t}-T_{0})\ll 1\), the surface adiabatic density and the reference density can also be identified, \(\rho_{a}^{t}\approx\rho_{0}\) (in other words, the density of planets is a function of pressure, not temperature).
A last assumption is made on the variation of gravity with depth in a generic planet. For simplicity, we assume that gravity is uniform, which is basically the case in Earth's mantle.
With these hypotheses it is easy to solve for the adiabatic conditions in a layer where \(z\) varies between 0 and \(H\) (e.g., a mantle of thickness \(H\), \(z=0\) being at the core-mantle boundary), and we get
\[\rho_{a} = \rho_{0}\left(1+\frac{H-z}{h}\right)^{1/(n-1)}, \tag{8a}\] \[P_{a} = \frac{n-1}{n}\rho_{0}gh\left[\left(\frac{\rho_{a}}{\rho_{0}} \right)^{n}-1\right],\] (8b) \[T_{a} = T_{a}^{t}\exp\left[\Gamma_{0}\left(1-\frac{\rho_{0}}{\rho_{a}} \right)\right] \tag{8c}\]
where
\[h=\frac{1}{n-1}\frac{K_{T}^{0}}{\rho_{0}g}=\frac{1}{n-1}\frac{\Gamma_{0}}{ \mathcal{D}}H. \tag{9}\]
In the last equality, we have introduced the dissipation number \(\mathcal{D}\) defined by
\[\mathcal{D}=\frac{\alpha_{0}gH}{C_{V}}. \tag{10}\]
This surface dissipation number is expressed only from quantities known at the surface, which seems to be the only possible choice when exploring a new planet. In the Earth, the dissipation number is around \(\mathcal{D}_{\Earth}=0.71\) in the mantle and \(0.56\) in the liquid core (using \(\alpha_{0}=3\times 10^{-5}\) K\({}^{-1}\), \(H=2900\) km and \(C_{V}=1200\) J K\({}^{-1}\) kg\({}^{-1}\) in the mantle, \(\alpha_{0}=1.8\times 10^{-5}\) K\({}^{-1}\) (Murphy et al., 2013), \(H=2300\) km and \(C_{V}=715\) J K\({}^{-1}\) kg\({}^{-1}\) (Gubbins et al., 2003) in the liquid core, with \(g=9.8\) m s\({}^{-2}\)).
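A two-line numerical check of these quoted values, using only the numbers listed above:

```python
# D = alpha_0 * g * H / C_V, with the surface values quoted above.
g = 9.8                                    # m s^-2
D_mantle = 3.0e-5 * g * 2.9e6 / 1200.0     # alpha_0 [K^-1], H [m], C_V [J K^-1 kg^-1]
D_core   = 1.8e-5 * g * 2.3e6 / 715.0
print(D_mantle, D_core)                    # ~0.71 and ~0.57 (quoted as 0.56)
```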
In geophysical textbooks (see, e.g., Schubert et al., 2001), \(\mathcal{D}\) is defined with \(C_{P}\) in the denominator, which does not make much practical difference as the difference between the heat capacities is always neglected in the geology literature. However, we prefer to define the dissipation number with \(C_{V}\), as in the definition of the Gruneisen parameter. We are free to assume that one of the two heat capacities is constant (here \(C_{V}\) is chosen constant), but assuming the constancy of both heat capacities leads to inconsistencies in the energy conservation (Alboussiere and Ricard, 2013) because their difference is directly related to the EoS through Mayer's relation.
The previous equations (8a)-(8b)-(8c) can be used to discuss the possible characteristics of the adiabatic profiles of large planets. From the variety of masses and radii of exoplanets that have been detected, it seems that many of them are rocky at least up to a radius of order 2.5 times the Earth's radius (Otegi et al., 2020). Their observed mass \(M\) increases roughly as a power 3.45 of their radius \(R\) (their large internal pressures increase their average densities as \(\approx R^{0.45}\)). We will use this observation to scale the gravity in our equations with \(g\varpropto M/R^{2}\approx R^{1.5}\), and we consider that the thickness of the convective layers is proportional to \(R\). With these scalings, the dissipation number \(\mathcal{D}\) varies like \(gH\varpropto R^{2.5}\). Therefore, according to Otegi et al. (2020), dissipation numbers up to \(2.5^{2.5}\approx 10\) times that of the Earth can be expected in fairly common rocky planets, and in the following we will explore dissipation numbers up to \(\mathcal{D}=10\). We will use \(\mathcal{D}=\mathcal{D}_{\Earth}(R/R_{\Earth})^{2.5}\) when, to set ideas, we discuss in terms of planetary radii instead of dissipation numbers; a Super-Earth with a radius twice that of the Earth (resp. 3
times) would therefore be assumed to have a mantle with a dissipation number around 4.0 (resp. 11.1) and a core with a dissipation number around 3.2 (resp. 8.7).
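The radius scaling used here can be made explicit with a short script; the Earth anchor values and the exponent 2.5 are those quoted above.

```python
# D scales as g*H, i.e. roughly as R^2.5, anchored to the Earth values.
D_earth_mantle, D_earth_core = 0.71, 0.56

for R in (1.0, 2.0, 2.5, 3.0):                       # planetary radius in Earth radii
    s = R ** 2.5
    print(R, round(D_earth_mantle * s, 1), round(D_earth_core * s, 1))
# R = 2 -> mantle ~4.0, core ~3.2;  R = 3 -> mantle ~11.1, core ~8.7
```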
## 2 Adiabatic conditions in a Super-Earth
### The adiabatic density and temperature profiles
As the incompressibility increases very significantly with density and therefore with pressure, the adiabatic density and temperature only increase moderately as a function of the planetary radius. In Figure 1, we depict the adiabatic temperature normalized with its value at the surface according to (8c). We use \(\mathcal{D}=\mathcal{D}_{\bigoplus}=0.6\) (black), \(\mathcal{D}=2\) (red), \(\mathcal{D}=10\) (green).
Figure 1: Normalized adiabatic temperature for \(\mathcal{D}=0.6\), 2, 10 (black, red, green). In the case \(\mathcal{D}=10\), the dashed lines are conductive profiles that will be discussed below: one with \(\Delta T=\Delta T_{a}\) (dark green), one with a gradient equal to the bottom adiabatic gradient (orange), and one (blue) that carries at the surface the same heat flow as the adiabatic gradient at the surface.

The ratio of the adiabatic density between the bottom and the top of a convecting mantle is, according to (8a) and (9),
\[\frac{\rho_{a}^{b}}{\rho_{a}^{t}}=\left(1+(n-1)\frac{\mathcal{D}}{\Gamma_{0}} \right)^{1/(n-1)}. \tag{11}\]
This ratio is plotted in Figure 2a as a function of \(\mathcal{D}\) (bottom axis) and as a function of \(R\) (top axis, assuming \(\mathcal{D}\varpropto R^{2.5}\)). The \(\bigoplus\) symbol indicates the position of the Earth, where an adiabatic density ratio of 1.52 is predicted through the mantle (due to the phase changes in the transition zone, the observed density change in Earth's mantle is rather 1.70).
This adiabatic density ratio controls the adiabatic temperature ratio according to (8c) (see Figure 2b). For the Earth, this ratio should be 1.41 through the mantle (indicated by the symbol \(\bigoplus\), say from 1600 K on top to 2256 K at the bottom). The maximum bottom temperature \(T_{a}^{b}\) is, at any rate, bounded when \(\rho_{a}\rightarrow\infty\) by
\[\frac{T_{a}^{b}}{T_{a}^{t}}\leq e^{\Gamma_{0}}\approx 2.72. \tag{12}\]
Even in very large silicated Super-Earths, the bottom adiabatic temperature should remain moderate, hardly more than a factor of 2 larger than the surface adiabatic temperature (Figure 2b).
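These ratios are easily evaluated numerically from (11) and (8c); the sketch below uses the same parameters as Figure 2 (\(n=3.3\), \(\Gamma_{0}=1\)).

```python
import numpy as np

def bottom_top_ratios(D, n=3.3, Gamma0=1.0):
    """Adiabatic bottom/top density and temperature ratios from (11) and (8c)."""
    rho_ratio = (1.0 + (n - 1.0) * D / Gamma0) ** (1.0 / (n - 1.0))
    T_ratio = np.exp(Gamma0 * (1.0 - 1.0 / rho_ratio))
    return rho_ratio, T_ratio

for D in (0.71, 2.0, 10.0):
    print(D, bottom_top_ratios(D))
# D = 0.71 (Earth-like): density ratio ~1.52, temperature ratio ~1.41;
# the temperature ratio tends to exp(Gamma0) ~ 2.72 as D becomes very large.
```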
### The adiabatic temperature gradient
According to (1b), the surface adiabatic gradient is simply
\[\left.\frac{dT_{a}}{d\tilde{z}}\right|_{t}=\mathcal{D}T_{a}^{t} \tag{13}\]
where \(\tilde{z}=z/H\) is the normalized height in the convective layer. Obviously the heat carried out near the surface, along the adiabat, increases with \(\mathcal{D}\). However the adiabatic
gradient near the bottom is
\[\left.\frac{dT_{a}}{d\tilde{z}}\right|_{b}=\mathcal{D}\left(\frac{\rho_{a}^{t}}{ \rho_{a}^{b}}\right)^{n}T_{a}^{b} \tag{14}\]
In this expression, \(T_{a}^{b}\) is bounded by equation (12) and the thermal expansivity (related to the \(\left(\rho_{a}^{t}/\rho_{a}^{b}\right)^{n}\) term) decreases faster than \(1/\mathcal{D}\). This means that the adiabatic gradient at depth (in absolute value) initially increases and then decreases with the dissipation number (inspection of (11) shows that the adiabatic gradient at depth decreases as \(\mathcal{D}^{-1/(n-1)}\approx\mathcal{D}^{-0.43}\)). This is visible in Figure 1 (compare the adiabatic gradients at the bottom of the three curves). It implies that, as the dissipation number increases, the effects of compressibility become confined to shallow depths, while deeper, the fluid appears more and more incompressible. Unexpectedly, when the effects of compression increase (when the planet radius increases), the deep convection appears more and more incompressible!
Figure 2: Ratio between bottom and top adiabatic density (panel a) and temperature (panel b) in a compressible planet as a function of dissipation \(\mathcal{D}\) (\(n=3.3\), \(\Gamma_{0}=1\)). The symbol \(\bigoplus\) indicates the situation for the Earth. The horizontal axis is labelled either in dissipation numbers (bottom) or in Super-Earth radii (top).

Another way to understand this is to consider that the compressibility effects on convection are not related to \(\mathcal{D}\), which is based on the reference thermal expansivity, but to \(\overline{\mathcal{D}}=\int_{0}^{1}\mathcal{D}\,\mathrm{d}\tilde{z}\), where the dissipation number (10), evaluated with the local thermal expansivity, is averaged over the thickness of the layer. Using the expressions (5) and (8a), as shown in Ricard et al. (2022), one gets
\[\overline{\mathcal{D}}\leq\Gamma_{0}, \tag{15}\]
i.e., the average dissipation is never larger than the Gruneisen parameter which is about 1.
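One way to verify the bound (15) under the stated assumptions (uniform gravity, constant \(C_{V}\)) is to integrate the local dissipation number \(\alpha gH/C_{V}\) directly, using (5), (8a) and (9) with the substitution \(x=(H-z)/h\):

\[\overline{\mathcal{D}}=\mathcal{D}\int_{0}^{1}\left(\frac{\rho_{0}}{\rho_{a}}\right)^{n}\mathrm{d}\tilde{z}=\mathcal{D}\,\frac{h}{H}\int_{0}^{H/h}(1+x)^{-n/(n-1)}\,\mathrm{d}x=\mathcal{D}\,\frac{(n-1)h}{H}\left(1-\frac{\rho_{0}}{\rho_{a}^{b}}\right)=\Gamma_{0}\left(1-\frac{\rho_{0}}{\rho_{a}^{b}}\right)\leq\Gamma_{0},\]

so that the average dissipation approaches \(\Gamma_{0}\) only in the limit of infinite compression at the bottom (\(\rho_{a}^{b}\rightarrow\infty\)).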
## 3 Convection in the mantle of super-Earths
### Compressible convection
In situations where the compressibility is important and where the physical parameters vary strongly with depth, the use of a simple Boussinesq convection model and the correction of the results by the a posteriori addition of an adiabatic contribution are not sufficient. Using anelastic formulations (Ogura and Phillips, 1962; Jarvis and McKenzie, 1980; Braginsky and Roberts, 1995; Lantz and Fan, 1999) may also be tricky, as it is easy to inadvertently contradict the basic thermodynamic rules (see e.g., Leng and Zhong, 2008; Alboussiere and Ricard, 2013). In a previous paper (Ricard et al., 2022), we explained how we can solve the fully compressible equations without approximations when inertia is neglected (the infinite Prandtl number approximation), which is appropriate for mantle convection, i.e., how to solve
\[\frac{\mathrm{D}\rho}{\mathrm{D}t}+\rho\mathbf{\nabla}\cdot\mathbf{u} =0, \tag{16a}\] \[\eta\mathbf{\nabla}^{2}\mathbf{u}+\frac{\eta}{3}\mathbf{\nabla}\mathbf{ \nabla}\cdot\mathbf{u}-\mathbf{\nabla}P+\rho\mathbf{g} =0,\] (16b) \[\rho T\frac{\mathrm{D}\mathcal{S}}{\mathrm{D}t}=\dot{\varepsilon} :\tau{+}k\nabla^{2}T, \tag{16c}\]
where the viscosity \(\eta\) and thermal conductivity \(k\) are assumed uniform. Following exactly the rules of thermodynamics and starting from the EoS (4), the entropy can be obtained by integration of \(T\mathrm{d}\mathcal{S}=C_{V}\mathrm{d}T-\alpha K_{T}T\mathrm{d}\rho/\rho^{2}\) and writes
\[\mathcal{S}=C_{V}\ln\frac{T}{T_{a}}+\alpha_{0}K_{T}^{0}\left(\frac{1}{\rho}- \frac{1}{\rho_{a}}\right). \tag{17}\]
which cancels out, as it should, when the density and temperature are those of the adiabatic conditions. The adiabatic density and temperature are computed from (1a) and (1b) (see Ricard et al., 2022, for details), where the heat capacity at constant pressure that appears in the adiabatic profile is exactly given by Mayer's relation
\[C_{P}=C_{V}\left[1+\Gamma_{0}\alpha_{0}T\left(\frac{\rho_{0}}{\rho}\right)^{n+ 1}\right]. \tag{18}\]
### Marginal stability and Schwarzschild criterion
When heated from below, a fluid starts to convect when two conditions are met. First, the local temperature gradient \(|dT/dz|\) must be larger than the adiabatic gradient \(|dT_{a}/dz|\). This is the Schwarzschild criterion (Schwarzschild, 1906): the temperature of a rapidly rising fluid parcel follows the adiabatic gradient, and the parcel must become warmer (i.e., less dense) than its surroundings to be gravitationally unstable. This criterion defines a necessary condition for convection. Second, the total temperature drop \(\Delta T=T^{b}-T^{t}\) across the convective layer must be large enough so that a dimensionless number, the Rayleigh number, exceeds some critical value. In the simple case where the top and bottom boundaries are free slip, Rayleigh (1916) proved that this last condition can be expressed in the form
\[\mathrm{Ra}^{c}=\frac{\alpha_{0}\rho_{0}^{2}C_{P}gH^{3}\Delta T}{\eta k}\geq \frac{27}{4}\pi^{4}=657.24 \tag{19}\]
This sufficient condition for convection was obtained in the Boussinesq approximation where all the parameters \(\alpha_{0}\), \(\rho_{0}\), \(\eta\) and \(k\) are uniform. In a compressible fluid, \(\alpha\) and \(\rho\) are depth dependent and (19) is not necessarily in agreement with the Schwarzschild criterion. As we are used to thinking that the adiabatic gradient is more or less uniform (this would be exactly true if the fluid were a perfect gas, and it is roughly true in the Earth's mantle), a superadiabatic Rayleigh number \(\mbox{Ra}_{sa}\) is generally defined in which the temperature drop \(\Delta T\) is replaced by the superadiabatic temperature drop \(\Delta T_{sa}=\Delta T-\Delta T_{a}\), the temperature drop in excess of the adiabatic temperature drop
\[\mbox{Ra}_{sa}=\frac{\alpha_{0}\rho_{0}^{2}C_{P}gH^{3}\Delta T_{sa}}{\eta k}=\mbox{Ra}\frac{\Delta T_{sa}}{\Delta T}, \tag{20}\]
where \(\alpha_{0}\) and \(\rho_{0}\) are now some characteristic values of the depth-dependent thermal expansivity and density. With this definition, \(\mbox{Ra}_{sa}^{c}\geq 657.24\) is in agreement both with what is found in the Boussinesq approximation and with the Schwarzschild criterion (Malkus, 1954; Grossmann and Lohse, 2001), at least if one assumes that \(T_{a}\) varies linearly with depth (i.e., if one assumes that \(dT_{a}/dz\) is uniform with \(dT_{a}/dz=-\Delta T_{a}/H\)). Various relations obtained in the Boussinesq approximation, for example between the heat flow and the Rayleigh number, are often considered to hold in the compressible case when the superadiabatic Rayleigh number is used. We will now show that the situation is more complex in super-Earth cases where the adiabatic gradient is large and strongly curved.
To first discuss a simple case where the adiabatic gradient is constant, we consider a perfect gas with EoS \(P=\rho{\cal R}T\). This EoS is certainly not appropriate for a planetary mantle but it corresponds to the prototype case of compressible convection. For a perfect gas, \(\alpha T=1\) and \(C_{P}\) is constant in (1b), so that \(dT_{a}/dz=-\alpha_{a}T_{a}g/C_{P}=-g/C_{P}\). Using the surface temperature \(T_{0}\) and the height \(H\) of the convecting layer to non-dimensionalize the variables, one simply has \(dT_{a}/dz=-{\cal D}/\gamma\) where \(\gamma=C_{P}/C_{V}\) is the heat
capacity ratio (also known as adiabatic index or Laplace's coefficient). We then consider that a bottom temperature \(rT_{0}\) is imposed, in which case the conductive geotherm is \(dT/dz=-(r-1)\). The Schwarzschild criterion imposes therefore that convection cannot exist when \(\mathcal{D}\geq\gamma(r-1)\).
In a previous paper, we gave the general equations verified by the marginally stable solution and showed how to compute the critical Rayleigh number for any EoS (equations 5.5-5.8 in Alboussiere and Ricard (2017) and the following comments). We therefore calculate the critical Rayleigh number \(\text{Ra}^{c}(\mathcal{D},r)\) for Rayleigh-Benard convection of a perfect gas and plot the result in Figure 3a. As expected, convection can only occur below the \(\mathcal{D}=\gamma(r-1)\) line. The cyan line corresponds to the criterion \(\text{Ra}^{c}=657.24\), and indeed for \(\mathcal{D}\to 0\) and \(r\to 1\), the critical value obtained for the Boussinesq case is recovered. Increasing the temperature jump \(r-1\) decreases the critical Rayleigh number; increasing the dissipation \(\mathcal{D}\) increases the critical Rayleigh number.
The situation is obviously very different in a planet where the adiabatic gradient appearing in the Schwarzschild criterion depends on depth. Inspection of Figure 1 for \(\mathcal{D}=10\) (green solid line) shows that convection cannot occur when the diffusive gradient \(|dT/dz|\) is lower than that of the dashed orange line (tangent to the adiabatic profile at the bottom, where the adiabatic gradient is minimal in absolute value). However, until \(|dT/dz|\) reaches the value corresponding to the dark green dashed line (corresponding to \(\Delta T=\Delta T_{a}\)), convection can start in the deep layers although \(\Delta T\leq\Delta T_{a}\), i.e., although the superadiabatic Rayleigh number is negative. This is confirmed by the computation of the critical Rayleigh number shown in Figure 3b. There is a large domain, in blue, where the conductive temperature gradient is in between that of the dark green curve and that of the orange curve of Figure 1, and where convection can start in the deep layers while the critical superadiabatic Rayleigh number is negative. Notice that again, when \(\mathcal{D}\to 0\) and \(r\to 1\), the critical value computed for the Boussinesq case is recovered (\(\text{Ra}^{c}_{sa}\to 657.24\), a value shown by a cyan line). This limit is indeed independent of the
chosen EoS.
### Convection simulations
With the same numerical code as in Ricard et al. (2022), we solve the mass, momentum and energy conservation system (16a)-(16b)-(16c) with the software Dedalus (Burns et al., 2020), which handles coupled differential equations that are solved iteratively using a spectral decomposition. In our simulations, the surface temperature is \(T_{t}=T_{0}\) and the bottom temperature \(T_{b}=rT_{0}\). We assume that the surface pressure is \(P_{t}=P_{0}=0\) (hence that the surface density is \(\rho_{0}\)).
Figure 3: Critical superadiabatic Rayleigh number as a function of the surface dissipation number \(\mathcal{D}\) and the ratio \(r\) between the bottom and top temperatures. In the left panel, the EoS of the convective fluid is that of a perfect gas (with \(\gamma=5/3\)); in the right panel, we consider the Murnaghan EoS, appropriate for condensed matter. While for the perfect gas a large dissipation \(\mathcal{D}\) decreases the domain where convection can occur, for the Murnaghan solid convection can easily start in the deep layers even when the imposed temperature difference is lower than the adiabatic temperature difference. The dashed green and orange lines correspond to the conductive profiles of Figure 1. The cyan lines in both panels are for \(\text{Ra}^{c}_{sa}=657.24\), the value obtained by Lord Rayleigh in the Boussinesq case.
As discussed in Ricard et al. (2022), it is tricky to work with a compressible fluid on a fixed numerical grid. Indeed, we do not know a priori what the initial mass in the convective layer (defined by an initial density profile) must be to ensure that, when convection is well established, the surface pressure is zero. We therefore perform our simulations for given Ra, \(r\) and \(\mathcal{D}\), starting from different initial masses (different initial assumptions on the density profile) until the average surface pressure is statistically zero (the local surface pressure itself remains a function of space and/or time and is commonly interpreted as expressing the dynamic topography induced by convection).
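Schematically, this outer loop could be organized as below; the bisection strategy and the function names are illustrative assumptions, not a description of the actual Dedalus script.

```python
def calibrate_initial_mass(run_to_steady_state, m_lo, m_hi, tol=1e-3, max_iter=20):
    """Adjust the initial mass until the time-averaged surface pressure vanishes.

    Illustrative sketch: run_to_steady_state(initial_mass) is assumed to run the
    convection simulation to a statistically steady state and return the mean
    surface pressure; more mass is assumed to give a larger surface pressure.
    """
    m = 0.5 * (m_lo + m_hi)
    for _ in range(max_iter):
        m = 0.5 * (m_lo + m_hi)
        p_surf = run_to_steady_state(initial_mass=m)
        if abs(p_surf) < tol:
            break
        if p_surf > 0.0:
            m_hi = m      # too much mass: remove some
        else:
            m_lo = m      # too little mass: add some
    return m
```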
### Heat flux in compressible convection
For compressible convection at infinite Pr number, the superadiabatic heat flux (the surface heat flux \(Q\) minus the surface adiabatic heat flux \(Q_{a}\)) and the superadiabatic Rayleigh number are related by
\[\mathrm{Nu}_{sa}=\frac{Q-Q_{a}}{\Delta T-\Delta T_{a}}\propto\mathrm{Ra}_{sa}^ {1/3}, \tag{21}\]
where \(\mathrm{Nu}_{sa}\) is the superadiabatic Nusselt number (in equation (21), the dimensionless heat flows are normalized by \(kT_{0}/H\) and the temperatures by \(T_{0}\)). This scaling law with the superadiabatic Rayleigh number is in agreement with what is found in the Boussinesq approximation (Malkus, 1954; Grossmann and Lohse, 2001). This expression is usually proposed in situations where \(\Delta T_{a}\) is smaller and often much smaller than \(\Delta T\) which is not necessarily verified when \(\mathcal{D}\) is large, as convection can occur even with \(\Delta T_{a}\geq\Delta T\). Notice also that, as shown by the blue dashed line in Figure 1, the adiabatic heat flow at the surface may be very large and \(Q-Q_{a}\) may be negative even in the case where \(\Delta T_{a}\leq\Delta T\). The "adiabatic" Nusselt number, \(\mathrm{Nu}_{a}=Q_{a}/\Delta T_{a}=\mathcal{D}T_{a}^{t}/\Delta T_{a}\) (see (13)) is of order \(\mathcal{D}\) as \(T_{a}^{t}/\Delta T_{a}\approx 1\). For \(\mathcal{D}=10\), this is already a large heat flow which requires a Rayleigh number of \(10^{5}-10^{6}\) in the Boussinesq case.
This situation, where the convective heat flow can be lower than the adiabatic heat flow, is not unknown and may actually happen in the Earth's core (although inertial, electromagnetic and rotational effects not accounted for in our model become crucial in the core). The conductivity of iron is large enough (Stacey and Loper, 2007; de Koker et al., 2012; Gomi et al., 2013) that in the top part of the core, the heat transported along the adiabat may be larger than that carried by convection (Labrosse et al., 1997; Lister and Buffett, 1998). This would imply the presence of a stratified layer in which the non-adiabatic temperature carries the heat flow downward to balance the upward transport along the adiabat. How the deep convection interacts with a shallow layer with a large adiabatic gradient can now be discussed with a few numerical simulations. There is a very large number of quantities of interest that could be computed from these numerical simulations. Here, we will simply discuss the general pattern of convection, the average temperature profiles, and how the energy is transported through the whole layer.
In the Boussinesq approximation used as a reference model, the heat flow through the fluid is simply
\[Q_{Bo}=\overline{\rho wC_{V}T}-k\frac{d\overline{T}}{dz}, \tag{22}\]
where \(w\) is the vertical velocity and \(T\) the total temperature. The overbar indicates an average of the various quantities over the horizontal direction and time; this heat flow is constant with depth. In the Boussinesq approximation, the density \(\rho\) is a constant, just like the heat capacity \(C_{V}\), and the heat capacities at constant volume and constant pressure are not distinguished. In a compressible fluid (see Ricard et al., 2022, for details), the relevant quantity advected by the flow is the enthalpy \(\mathcal{H}\), which can be deduced from (4) by integration of \(\mathrm{d}\mathcal{H}=T\mathrm{d}\mathcal{S}+\mathrm{d}P/\rho\)
\[\mathcal{H}=\left(C_{V}+\frac{\alpha_{0}K_{T}^{0}}{\rho}\right)T+\frac{K_{T}^{ 0}}{(n-1)}\frac{\rho^{n-1}}{\rho_{0}^{n}}, \tag{23}\]
and the heat flow can then be written as
\[Q=\overline{\rho wC_{V}T_{sa}}-k\frac{dT_{sa}}{dz}+\overline{\rho w(\mathcal{H}-C_ {V}T_{sa})}-k\frac{dT_{a}}{dz}-\overline{u\tau_{xz}+w\tau_{zz}}, \tag{24}\]
where we consider separately the adiabatic \(T_{a}\) and superadiabatic \(T_{sa}=T-T_{a}\) temperature.
The first two terms converge to their counterparts of (22) when the adiabatic temperature is a constant. The third term (which can obviously be combined with the first one) is a correction due to the fact that the enthalpy, rather than the specific heat content \(C_{V}T\), is transported. The fourth term is the conduction along the adiabat and the last one is the work flow (\(\tau_{xz}\) and \(\tau_{zz}\) are the deviatoric stresses, \(u\) the horizontal velocity). Had we used dimensionless variables, the ratio between this last term and the first one would be \(\mathcal{D}\Delta T/(\mathrm{Ra}T_{0})\), and it would indeed become negligible in the Boussinesq approximation when \(\mathcal{D}\to 0\).
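As an illustration, the five contributions to (24) can be evaluated from snapshot fields as sketched below; the array layout, the horizontal-only averaging and the normalization \(C_{V}=1\) are assumptions made for brevity.

```python
import numpy as np

def heat_flow_profiles(rho, w, u, T, T_a, H_enth, tau_xz, tau_zz, k, z, C_V=1.0):
    """Depth profiles of the five contributions to the heat flow (24).

    Fields are snapshots on a (nz, nx) grid; T_a is the adiabatic temperature
    profile of shape (nz,); the overbar of (24) is taken here as a horizontal
    average only (a time average would be added in practice).
    """
    T_sa = T - T_a[:, None]
    adv_specific = np.mean(rho * w * C_V * T_sa, axis=1)             # rho w C_V T_sa
    cond_sa = -k * np.gradient(T_sa.mean(axis=1), z)                 # -k dT_sa/dz
    adv_enthalpy = np.mean(rho * w * (H_enth - C_V * T_sa), axis=1)  # enthalpy correction
    cond_adiabat = -k * np.gradient(T_a, z)                          # -k dT_a/dz
    work_flow = -np.mean(u * tau_xz + w * tau_zz, axis=1)            # work flow
    return adv_specific, cond_sa, adv_enthalpy, cond_adiabat, work_flow
```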
### Convection at small dissipation and large temperature drop (\(\mathcal{D}=1\), \(r=10\))
In a first simulation at moderate Rayleigh number \(\mathrm{Ra}_{sa}=10^{8}\), we consider a situation in which the adiabatic temperature drop is small compared to the imposed temperature difference, with \(\mathcal{D}=1\) and \(r=10\) which corresponds more or less to terrestrial conditions. A typical snapshot of the non-adiabatic temperature field is depicted in Figure 4. This situation remains relatively close to the classical Rayleigh-Benard convection in the Boussinesq approximation but some differences are notable. Due to dissipation, the descending and ascending plumes are rather discontinuous. They tend to form clusters, which is consistent with the suggestion by Schubert et al. (2004) that the two super-plume regions observed in the deep Earth's mantle may be clusters of smaller plumes whose heads have merged into a large region of hot and buoyant material.
The time-averaged temperature profile is depicted in blue in Figure 5. The dots along the temperature profile correspond to the nodes of the Chebyshev polynomials used by the software Dedalus. The top and bottom boundary layers have comparable thicknesses. The adiabatic profile (red) is quasi-linear and provides a close approximation of the real temperature (Figure 5). The total mass under the actual temperature profile and under the adiabatic profile is the same and leads to a zero average pressure at the surface when the convection is statistically steady.
Figure 4: Snapshot of the superadiabatic temperature in the convective layer with \(\mathrm{Ra}_{sa}=10^{8}\), \(\mathcal{D}=1\) and \(r=10\). The downwellings and upwellings are less stable and less continuous with depth than in the Boussinesq approximation. The plumes are not homogeneously distributed but tend to form clusters.

In this simulation the time-averaged heat flow is \(Q=398\), \(Q_{a}=4.41\), \(\Delta T_{sa}=6.78\), the Nusselt number is \(\mathrm{Nu}_{sa}=27.37\), and therefore \(\mathrm{Nu}_{sa}=0.13\,\mathrm{Ra}_{sa}^{1/3}\). The prefactor is in agreement with other simulations performed in the Boussinesq regime (Sotin and Labrosse, 1999) or in the fully compressible case with an ideal gas EoS (Curbelo et al., 2019). We depict in Figure 6 the profiles of the various components of the heat flow (see (24)). In the left panel (a), we plot the total heat flow (green), the transport of specific heat (red) and the conduction along the non-adiabatic temperature profile (blue), i.e., the total heat flow \(Q\) and the first two terms of equation (24). The total heat flow (green) would be depth-independent when averaged over a very long time. The red and blue components (the specific heat transport and the conductive term) are the only terms carrying heat in the Boussinesq approximation (see (22)). Due to compressibility, other minor contributors to the energy transport are present in panel 6b: the transport of \(\mathcal{H}-C_{V}T_{sa}\) (red), the conduction along the adiabat (blue), and the work flow (green), i.e., the third, fourth and fifth terms of (24). These different terms tend to increase the energy transport near the surface and decrease it at depth. The compressibility does not affect the energy transport very much, as \(\mathcal{D}\) remains small compared to the applied total temperature difference across the layer, and the minor components of panel (b) have an amplitude of at most \(\approx 5\%\) of the major components of panel (a).
Figure 5: Temperature profile (blue) and adiabatic (red) in a convection model with \(\text{Ra}_{sa}=10^{8}\), \(\mathcal{D}=1\) and \(r=10\). The mass of the fluid that can be computed from the adiabatic and hydrostatic profile is by construction the total mass of the fluid. The pressure, averaged at the surface and over time, is zero. The two boundary layers have similar thicknesses. The temperature overshoot near the hot bottom boundary layer is slightly more pronounced than that under the lithosphere. The total temperature jump across the layer is \(r-1=9\) and the adiabatic temperature jump is \(2.22\).
### Convection at large dissipation and large temperature drop (\(\mathcal{D}=r=10\))
Figure 6: Energy flow across the convective layer. In panel (a), we plot the total energy flow, averaged over time, in green. This flux would become independent of depth if the averaging were performed over an infinitely long time window. The two major contributors to the energy flow are the advection of specific heat (red) and the conduction along the non-adiabatic thermal profile (blue); the former is efficient in the bulk of the fluid, the latter in the boundary layers. The other, minor contributors to the energy transport (panel (b)) are the conduction along the adiabat (blue), the work flow (green), and the contribution due to the fact that enthalpy rather than specific heat is transported (red). Notice the difference in scale between the two panels.

We can now consider the case where the dissipation number becomes comparable to the temperature ratio, \(\mathcal{D}=r=10\). A typical snapshot of the non-adiabatic temperature is depicted in Figure 7. At large dissipation number, the cold plumes can hardly cross the convective layer continuously (Alboussiere et al., 2022). As already observed in Hansen et al. (1993), the hot instabilities gain buoyancy while they rise in the mantle as the thermal expansivity increases. They are stronger and more stationary than the cold instabilities, which lose buoyancy with depth. The temperature profiles are shown in Figure 8. Due to the large heat flow carried along the adiabat, the cold boundary layer (the lithosphere) is poorly defined and its thickness increases compared to the Boussinesq case. On the contrary, the hot deep boundary layer is much less affected, as the adiabatic gradient is minimal there. The adiabatic profile (red) and the actual temperature (blue) have roughly similar curvatures. We recall that both thermal profiles correspond to the same total mass in the mantle; the red adiabatic curve has not been computed to provide a best fit to the observed temperature. Furthermore, the idea that the actual temperature profile should be adiabatic is based on the assumption that the dissipation is negligible, which is not the case when \(\mathcal{D}\) is large.
In this simulation the heat flow is \(Q=270\), \(Q_{a}=30.75\), \(\Delta T_{sa}=5.56\), and the Nusselt number is \(\mathrm{Nu}_{sa}=43.03\), therefore \(\mathrm{Nu}_{sa}=0.09\,\mathrm{Ra}_{sa}^{1/3}\). The prefactor of this relation is smaller than usually found: the Nusselt number, which is inversely related to the thickness of the cold boundary layer, is small because this thickness is increased by the large flux carried by conduction along the adiabat.
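The quoted scaling can be recovered directly from these numbers:

```python
Q, Q_a, dT_sa, Ra_sa = 270.0, 30.75, 5.56, 1e8
Nu_sa = (Q - Q_a) / dT_sa
print(Nu_sa, Nu_sa / Ra_sa ** (1.0 / 3.0))   # ~43.0 and a prefactor of ~0.09
```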
Similarly to Figure 6, the various components of the heat flow are depicted in Figure 9 when the dissipation is now \(\mathcal{D}=10\). The transport of specific heat (red, panel (a)), as in the previous case, underestimates the energy transport in the upper part of the mantle and overestimates it in the lower part. The heat conducted along the non-adiabatic gradient (blue, panel (a)) is lower across the top cold boundary, where a significant part of the heat (12% of the surface heat flow) is carried along the adiabat (blue curve, panel (b)). The energy transfer due to the difference between enthalpy and specific heat (red, panel (b)) and the work flow (green, panel (b)) are also significant. These three minor components, added to the two major components of panel (a), lead to a global heat flow (green, panel (a)) independent of depth.
Figure 7: Snapshot of the superadiabatic temperature in the convective layer with \(\mathrm{Ra}_{sa}=10^{8}\), \(\mathcal{D}=10\) and \(r=10\). The downwellings are still less stable and less continuous with depth than the hot plumes but for both hot and cold plumes, crossing the mantle becomes difficult.
Figure 8: Temperature profile (blue) and adiabatic (red) in a convection model with \(\text{Ra}=10^{8}\), \(\mathcal{D}=10\) and \(r=10\). The mass of the fluid that can be computed from the adiabatic profile is by construction the total mass of the fluid. The average pressure at the surface, averaged over time, is zero. The thickness of the top boundary layer is strongly affected by the ability of the fluid to carry heat along the adiabat.
Figure 9: Energy flow across the convective layer. In panel (a), we plot the total energy flow, averaged over time, in green. The other, minor contributors to the energy transport are the conduction along the adiabat (blue, panel (b)), the work flow (green, panel (b)), and the contribution due to the fact that the enthalpy transport is not restricted to the \(C_{V}T\) term (red, panel (b)). Notice the difference in scale between the two panels.
### Convection at large dissipation and small temperature drop (\({\cal D}=10\) and \(r=2\))
As we discussed, convection can also occur with a negligible superadiabatic temperature jump \(\Delta T_{sa}\). We therefore perform a simulation, still at \({\rm Ra}_{sa}=10^{8}\), but with \({\cal D}=10\) and \(r=2\). In that case \(\Delta T_{sa}=0.01\). A temperature snapshot is shown in Figure 10. Rising plumes are strong and vigorous but no downwelling currents are visible. In fact, no cold boundary layer exists; on the contrary, the surface corresponds to a maximum of superadiabatic temperature. These characteristics are also obvious in Figure 11, depicting the average temperature profile (blue). The adiabatic temperature (red) has a lower surface temperature than the real surface temperature.
The various components of the energy transport are shown in Figure 12. The balance between the various terms is very different from the previous cases (see Figures 6 and 9). In this simulation the heat flow is \(Q=4\), \(Q_{a}=8.9\), and the superadiabatic Nusselt number as defined in (21) would be negative, \({\rm Nu}_{sa}=-490\). When the adiabatic gradient has a large curvature with depth, the usual Ra-Nu relation cannot be used. Instead of transferring heat to the surface, conduction along the superadiabatic temperature profile (panel (a), blue) drives the energy downward. The advection of specific heat (red, panel (a)) transports the energy in the deep convective layer, but the main transport occurs at shallow depth, along the adiabat (blue, panel (b)). The work flow (green) appears negligible and the distinction between enthalpy and specific heat (red, panel (b)) only transports a small excess of energy.
### Convection with negative superadiabatic Rayleigh number (\({\cal D}=5\) and \(r=1.5\))
Figure 10: Snapshot of the superadiabatic temperature in the convective layer (\(\mathcal{D}=10\) and \(r=2\)). The hot rising plumes spread under an even hotter (in terms of non-adiabatic temperature) top boundary layer.

Figure 11: Temperature profile (blue) and adiabatic (red) in a convection model with \(\mathrm{Ra}=10^{8}\), \(\mathcal{D}=10\) and \(r=2\). Except for a bottom boundary layer, the non-adiabatic average temperature (blue) follows the general curvature of the adiabat (red).

As the marginal stability analysis suggests, convection can also occur with a negative superadiabatic Rayleigh number. This is the case when \({\cal D}=5\) and \(r=1.5\), which are parameters that belong to the blue shaded area of the phase diagram of Figure 3. We choose a negative Rayleigh number, \(\text{Ra}=-10^{8}\), smaller (more negative) than the negative critical number computed in Figure 3. The resulting convection pattern is typically that shown in Figure 13. The shallow layer now appears as a warm (in terms of non-adiabatic temperature) stable layer beneath which faint rising plumes vanish, warmer than the deep average mantle but colder than the stable surface lid. The temperature profile of Figure 14 (blue) is far from the adiabat (red): it is linear in the stable conductive layer (\(0.7\leq z\leq 1\)), but a weak hot boundary layer still remains at the bottom. The heat transport components (Figure 15) are dominated by the conductive terms along the superadiabatic profile (panel (a)) and along the adiabat (panel (b)). These two terms largely cancel each other out but together extract at the surface the heat flow transported by convection in the deep layers (red, panel (a)). The work flow and the difference between enthalpy and specific heat are totally negligible. In this case the Nusselt number is positive, with a negative numerator and a negative denominator, \(\text{Nu}=12.5\) (\(Q=0.85\), \(Q_{a}=3.98\), \(\Delta T=0.5\), \(\Delta T_{a}=0.76\)), but with a negative Rayleigh number.
## 4 Conclusions
Figure 13: Snapshot of the superadiabatic temperature in a convection model with a negative Rayleigh number \(\mathrm{Ra}=-10^{8}\), \(\mathcal{D}=5\) and \(r=1.5\). As shown in Figure 3, convection occurs with these parameters. The color scale is chosen to emphasize the weak rising plumes and saturates near the surface.

Figure 14: Temperature profile (blue) and adiabatic (red) in a convection model with \(\mathrm{Ra}=-10^{8}\), \(\mathcal{D}=5\) and \(r=1.5\). The top to bottom temperature difference is now significantly smaller than the adiabatic temperature difference, and convection occurs with a negative Rayleigh number. A conductive layer in which the temperature gradient is constant fills the upper 20% of the convective layer, but a weak bottom boundary layer allows the emergence of plumes.

Figure 15: The conduction terms along the non-adiabatic temperature, in blue in panel (a), and along the adiabat, in blue in panel (b), largely cancel each other. At depth, the energy is transported by convection (red, panel (a)). The other terms of energy transport are negligible.

In this paper we have examined how convection is affected by compressibility when the dissipation number, which is a measure of the non-Boussinesq effects, is large. This situation occurs in Super-Earths, i.e., in solid or liquid planets whose radius is much larger than that of the Earth. Such planets are quite common, and the surprisingly large variety of exoplanets that have been found so far suggests that planets even larger than those considered here (say up to 3 times the radius of the Earth) do exist. The characteristics of compressible flows are controlled by an adiabatic state that provides an approximate reference for the thermodynamic values during developed convection. In a first section we discussed some properties of the adiabatic conditions when the fluid obeys a simple Murnaghan EoS with a Gruneisen parameter decreasing inversely with density. This EoS is quite simple but it faithfully reproduces the behavior of solids and liquids at high pressure and temperature. The consequences of this EoS for the adiabatic density or temperature profiles are quite independent of the nature of the fluid (solid silicate or liquid iron properties can be matched by this generic EoS). This EoS imposes that incompressibility increases rapidly with pressure. The effects of compression are therefore concentrated in the upper layers while the deep layers can hardly be further compressed. This imposes a strong curvature on the adiabatic density and temperature. The bottom adiabatic temperature is unlikely to be more than twice the adiabatic temperature at the surface, even in large Super-Earths. Although the adiabatic temperature gradient, which plays a major role in convection, may be very large at the surface, it remains moderate at depth and, surprisingly, it decreases, rather than increases, with the dissipation number \(\mathcal{D}\). Another way to think about this is to notice that although \(\mathcal{D}\) estimated from the radius of a planet can be very large at its surface, the Murnaghan EoS requires its mean to be less than the Gruneisen parameter, i.e., typically less than about 1.
We then explore the marginal stability of compressible convection. For our chosen EoS, the strong curvature of the adiabat facilitates convection in the deep layers and promotes conductive heat transport in the shallow layers. Convection can develop in the deep layers even when the total temperature difference between the bottom and top surfaces is equal to or less than the adiabatic temperature difference. Convection with a negative superadiabatic Rayleigh number is therefore possible.
We then explore various cases of developed convection. Our computations are performed with a moderate superadiabatic Rayleigh number (\(10^{8}\) or \(-10^{8}\)), various dissipation numbers and various bottom to top temperature ratios. The simulations are performed for a fluid without inertia which is only valid for the creeping convection of planetary mantles. There is of course no indication of which Rayleigh numbers and temperature ratios are appropriate for Super-Earths. This temperature ratio (and the
Rayleigh number) depends on the surface and bottom conditions of the planet's mantle. The former is controlled by the atmospheric composition, the distance to the star, and the internal dynamics of the planet. The latter depends on the mechanisms of formation of the planet, the segregation of the core, the radioactive content of the mantle and the planet's formation age. It is probably unlikely that the imposed temperature ratio across silicated mantles is as small as in the cases of sections 3.7 or 3.8 (\({\cal D}\gg r\approx 1\)), but there may be more strange situations than are dreamt of in our philosophy. A more common situation might be provided by the case of section 3.6 (\({\cal D}\approx r\gg 1\)). In that case the top and bottom boundary layers are of very different thicknesses, and the concept of a top boundary layer, a lithosphere, may become meaningless, as significant conduction along the adiabat can suppress or dampen convection. Heat transport in the case of large dissipation can occur through various combinations of enthalpy transport (which cannot be limited to specific heat), conduction along the adiabatic gradient and along the average superadiabatic temperature gradient, and a work flow term.
In this paper, viscosity has been taken uniform throughout the mantle. The various behaviours that have been described, according to the temperature ratio \(r\) and the dissipation number \({\cal D}\), are only due to the response of the EoS to thermal forcing. This means that we do not expect the large-scale behaviour of convection to depend on the exact rheology. Of course, the rheology can be different - non-Newtonian and non-uniform - and the precise small-scale flow structures will depend on it; however, the overall picture of convection is mainly governed by the EoS.
Although our simulations without inertia are only valid for creeping convection, i.e., for solid-state convection, some of our conclusions hold for the case where convection occurs in liquids, including liquid metals. First, as already mentioned, the chosen EoS is probably a much better starting point for correctly expressing the thermodynamical equations that control the flow than what is usually done. The characteristics of the adiabatic properties discussed in this paper remain valid for fluids. Of course, other terms, such as inertia, rotation and electromagnetic effects, would possibly have to be added. For these cases, anelastic approximations should be used for numerical modeling. Starting from a realistic EoS and carefully checking the consistency of the approximations remains necessary. Situations like those in sections 3.7 or 3.8, where the dissipation becomes much larger than the temperature ratio, are probably common, and might prevail in Earth's core. Note that in these cases, an estimate of the extracted heat flow based on the adiabatic gradient is meaningless because convection does not develop near the surface (see the difference between the adiabatic transport, blue curve of Figure 15(b), and the total energy flow, green curve of Figure 15(a)). The superadiabatic temperature drives the heat flow down (blue curve of Figure 15(a)) and the surface heat flow remains comparable to what deep convection (red curve of Figure 15(a)) is able to carry.
|
2307.08393 | On the application of Large Language Models for language teaching and
assessment technology | The recent release of very large language models such as PaLM and GPT-4 has
made an unprecedented impact in the popular media and public consciousness,
giving rise to a mixture of excitement and fear as to their capabilities and
potential uses, and shining a light on natural language processing research
which had not previously received so much attention. The developments offer
great promise for education technology, and in this paper we look specifically
at the potential for incorporating large language models in AI-driven language
teaching and assessment systems. We consider several research areas and also
discuss the risks and ethical considerations surrounding generative AI in
education technology for language learners. Overall we find that larger
language models offer improvements over previous models in text generation,
opening up routes toward content generation which had not previously been
plausible. For text generation they must be prompted carefully and their
outputs may need to be reshaped before they are ready for use. For automated
grading and grammatical error correction, tasks whose progress is checked on
well-known benchmarks, early investigations indicate that large language models
on their own do not improve on state-of-the-art results according to standard
evaluation metrics. For grading it appears that linguistic features established
in the literature should still be used for best performance, and for error
correction it may be that the models can offer alternative feedback styles
which are not measured sensitively with existing methods. In all cases, there
is work to be done to experiment with the inclusion of large language models in
education technology for language learners, in order to properly understand and
report on their capacities and limitations, and to ensure that foreseeable
risks such as misinformation and harmful bias are mitigated. | Andrew Caines, Luca Benedetto, Shiva Taslimipoor, Christopher Davis, Yuan Gao, Oeistein Andersen, Zheng Yuan, Mark Elliott, Russell Moore, Christopher Bryant, Marek Rei, Helen Yannakoudakis, Andrew Mullooly, Diane Nicholls, Paula Buttery | 2023-07-17T11:12:56Z | http://arxiv.org/abs/2307.08393v1 | # On the application of Large Language Models for language teaching and assessment technology
###### Abstract
The recent release of very large language models such as PaLM and GPT-4 has made an unprecedented impact in the popular media and public consciousness, giving rise to a mixture of excitement and fear as to their capabilities and potential uses, and shining a light on natural language processing research which had not previously received so much attention. The developments offer great promise for education technology, and in this paper we look specifically at the potential for incorporating large language models in AI-driven language teaching and assessment systems. We consider several research areas - content creation and calibration, assessment and feedback - and also discuss the risks and ethical considerations surrounding generative AI in education technology for language learners. Overall we find that larger language models offer improvements over previous models in text generation, opening up routes toward content generation which had not previously been plausible. For text generation they must be prompted carefully and their outputs may need to be reshaped before they are ready for use. For automated grading and grammatical error correction, tasks whose progress is checked on well-known benchmarks, early investigations indicate that large language models on their own do not improve on state-of-the-art results according to standard evaluation metrics. For grading it appears that linguistic features established in the literature should still be used for best performance, and for error correction it may be that the models can offer alternative feedback styles which are not measured sensitively with existing methods. In all cases, there is work to be done to experiment with the inclusion of large language models in education technology for language learners, in order to properly understand and report on their capacities and limitations, and to ensure that foreseeable risks such as misinformation and harmful bias are mitigated.
large language models, education technology, natural language processing, question difficulty estimation, text generation, automated assessment, grammatical error correction, responsible AI
1ALTA Institute & Computer Laboratory, University of Cambridge
2King's College London
3Cambridge University Press & Assessment
4Writer, Inc.
5Imperial College London
6English Language iTutoring (ELiT)
## 1 Introduction
The training of _large language models_ (LLMs) - also known as _pre-trained language models_ or _foundation models_ - has had a transformative effect on the fields of natural language processing (NLP) and artificial intelligence (AI) more broadly. LLMs are 'large' because they are neural networks made up of billions or trillions of parameters. The networks are Transformers [1] trained on huge swathes of text from the World Wide Web, using language modelling objectives such as predicting omitted (or, 'masked') words and sentence pairs [2], or predicting the next token in a sequence [3]. Furthermore, in the few-shot learning paradigm, LLMs can be directed towards new tasks without large quantities of task-specific data [4, 5], the collection of which tends to be time-consuming and costly. Overall, LLMs also offer great potential for educational applications. One previous paper has already provided an overview of some of the possible applications of LLMs to educational technology as a whole [6], across subjects. Our distinct contribution is to focus on the language learning and assessment domain specifically. In this paper, we describe some of the uses for LLMs in the context of language learning, discuss the state of the art or work in progress, and consider practical, societal and ethical implications.
We set out a number of uses for LLMs in the language learning domain, relating to content creation and calibration, automated assessment of written texts, and personalised feedback. In each case the general principles of the approach are well established thanks to previous work with pre-existing LLMs such as BERT [2] - language models with millions of parameters have existed for several years already. We look at the opportunities presented by the recent and rapid steps taken by OpenAI in releasing new variants from the 'generative pre-training' (GPT) model series, along with some newly published pre-prints relating to LLMs and the language learning research field. We refer to some LLM-driven language learning applications already in use, and outline the variety of LLMs available besides GPT. It is a fast evolving research field, one being driven by industry developments. We perceive some possible risks in this research trajectory, which include but are not limited to the absence of proper safeguards on education technology, the lack of public understanding as to how LLMs are trained and how they can confidently assert incorrect information, and the harm to the advancement of education technology as a whole if it is considered 'solved' by investors and research councils - not to mention the ethical issues that are already well known, such as data protection [7], examination malpractice [8, 9]1, environmental impact [11, 12], and internet addiction [13, 14, 15], among others.
Footnote 1: Note that the text-matching tool Turnitin [10], which is commonly used to detect plagiarism, has developed a module to detect the use of AI in essays: [https://www.turnitin.com/blog/the-launch-of-turnitins-ai-writing-detector-and-the-road-ahead](https://www.turnitin.com/blog/the-launch-of-turnitins-ai-writing-detector-and-the-road-ahead)
## 2 Large Language Models & Language Learning EdTech
At the time of writing, one of the most prominent LLMs is OpenAI's GPT-4 [16] released in March 2023 after six months of pre-launch work improving safety and fact-checking, also for product development with selected partners including the education technology (EdTech) firms Duolingo2 and Khan Academy3. This built on the prior success and notoriety of GPT-3, released in June 2020, along with its related chatbot application, ChatGPT4. Recently, we have seen more focus on the effects and implications of using chatbots for creating interactions with language learners [17, 18].
Footnote 4: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
Of most relevance here is the partnership between OpenAI and Duolingo, the language learning application developer, which resulted in the subscription service Duolingo Max [19]. Duolingo Max presents two new features: 'Role Play' and 'Explain My Answer'. The former involves some limited conversation towards a goal such as ordering food, which starts with a pre-scripted prompt but then proceeds over several open-ended chat turns between user and chatbot. The latter is an option for additional feedback on grammatical points, involving a limited dialogue with pre-specified responses for the user to guide the conversation (e.g. "Yes, I'm all set", "Can I see an example?", "No, please elaborate"). Another limitation is that the service is currently available only in selected countries and for a few languages.
Nevertheless, this development points towards further opportunities in AI-driven education technology for language learning, as discussed below. It should be noted that there are many alternatives to the GPT models, including the 'text-to-text Transformer' (T5) [20], PaLM (Pathways Language Model) [21] and LaMDA (Language Model for Dialogue Applications) [22] by Google; LLaMA [23] and Open Pre-trained Transformers (OPT) [24] by Meta AI; and DeepMind's Gopher [25]. At least a few of these models are 'multilingual', having been trained on corpora from multiple languages, albeit with a strong bias towards English5. In addition there are models which have been trained on bilingual data, notably Chinese-English [26] and Russian-English6. Alongside LLM developments by large technology companies, we also note the various open-source efforts to train on known datasets (e.g. EleutherAI's GPT-X [27] and _The Pile_ [28]), or as massive research collaborations (e.g. BLOOM: the BigScience Large Open-science Open-access Multilingual Language Model [29]), or to democratise LLMs for wider use in web applications involving natural language interfaces (e.g. langchain7, Cohere AI8, and Transformers [30]). There are also open-source alternatives to ChatGPT, such as Open Assistant9 and StableVicuna10. Finally we highlight efforts to transparently evaluate LLMs in comprehensive and varied ways, for instance in the HELM project (Holistic Evaluation of Language Models) [31].
Footnote 5: e.g. See the distribution of languages in the training data for GPT-3: [https://github.com/openai/gpt-3/blob/master/dataset_statistics/languages_by_document_count.csv](https://github.com/openai/gpt-3/blob/master/dataset_statistics/languages_by_document_count.csv)
Footnote 6: [https://github.com/yandex/YaLM-100B](https://github.com/yandex/YaLM-100B)
Footnote 7: [https://python.langchain.com/en/latest/index.html](https://python.langchain.com/en/latest/index.html)
Footnote 8: [https://cohere.com/](https://cohere.com/)
Footnote 9: [https://open-assistant.io/](https://open-assistant.io/)
Footnote 10: [https://stability.ai/blog/stablevicuna-open-source-rlhf-chatbot](https://stability.ai/blog/stablevicuna-open-source-rlhf-chatbot)
## 3 Content Creation: Creating Assessment Items and Teaching Materials
Pre-trained transformer models are being explored to generate exam texts and items for educational purposes like in the Duolingo English Test [32], or in Google's quantitative reasoning application, Minerva [33]. Variations of GPT models are best known to the wider public as being able to create common forms of language assessment tests. However, there are other successful
pre-trained LLMs such as variations of BERT [2], BART [34], or T5 [20] which are popular among NLP scientists. Since these models are trained with different target task objectives, they should be prompted differently for various kinds of text generation. None of them has been trained with a storytelling objective to generate fluent long texts as GPT\({}^{*}\) models have. Nevertheless, since the target tasks are better defined, evaluation of the performance of these models is more thorough and explainable. BERT-based models have shown impressive results in filling the gaps in sequences of text. BART achieves state-of-the-art results in generating parallel sentences as in machine translation or grammatical error correction [34, 35]. T5 modelled \(24\) tasks as text-to-text generation and proved very successful in question answering and text summarisation [20].
These models are widely used for narrower tasks like question generation [36], for reading comprehension exercises [37], or prompt generation for writing and speaking. For example, Felice _et al._[38] use BERT and ELECTRA [39] to predict the position of the gaps for designing high-quality cloze tests for language learners, while various pre-trained language models are used for generating distractors [40, 41, 42]. In addition LLMs have been put to use for the purpose of text simplification, which is relevant for language learners in the context of reading comprehension exercises and adapting texts automatically to an appropriate level. Notably, a GPT-3 based solution by the UniHD team was the winning entry for the English track of the TSAR-2022 Shared Task on Multilingual Lexical Simplification [43, 44].
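As an illustrative sketch (not the approach of Felice _et al._ [38] or any system cited above; the checkpoint name and example sentence are arbitrary), a masked language model can be queried for candidate fillers of a cloze gap, and the returned scores can then inform gap selection or distractor generation:

```python
from transformers import pipeline

# Query a masked language model for candidate fillers of a cloze gap.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

sentence = "She has lived in Paris [MASK] 2015."
masked = sentence.replace("[MASK]", fill_mask.tokenizer.mask_token)

for candidate in fill_mask(masked):
    print(f"{candidate['token_str']:>10s}  score = {candidate['score']:.3f}")
```

High-probability but incorrect candidates are natural distractor material, whereas a very flat score distribution may indicate a gap that is too unconstrained to make a reliable test item.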
**Datasets and Evaluation.** With the emergence of LLMs, large-scale datasets are required for evaluation. Text generation methods are evaluated using text-similarity based metrics like BLEU [45], ROUGE [46], METEOR [47], and more recently BERTScore [48], or learned evaluation metrics [49, 50] which assess the correlation between generated texts (e.g. the question) and the ones originally written by human experts. Automatic evaluations require top-quality and expert-designed datasets. Available datasets for language learning exams for NLP research include RACE [51], SCDE [52] and CLOTH [53]. However, there are smaller-scale datasets such as CEPOC [54] for Cloze test creation, and the Teacher-Student Chatroom Corpus [55], which can be used as test sets to evaluate zero-shot or few-shot learning models. Evaluation approaches for open-ended text generation are still far from ideal. Human-centric evaluations involve ranking the generated texts based on different factors, such as fluency and coherence of the generated texts, or their relevance to the context document and the answer (where available) [56]. In this new era of increasingly large language models, human evaluation is more difficult and time-consuming, leading researchers to design comparison datasets that contain human-labelled comparisons between outputs of different systems [57].
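For reference, the snippet below (with invented sentences) shows how a surface-overlap metric such as BLEU is computed with the `sacrebleu` package; BERTScore and learned metrics follow the same hypothesis-versus-reference pattern but compare contextual embeddings rather than n-grams.

```python
import sacrebleu

# One hypothesis (e.g. a generated question) against one human-written reference
hypotheses = ["Students read the passage and then answer three questions about it."]
references = [["The students read the passage and answer three questions about it."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score:.1f}")
```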
**Human-in-the-loop content generation.** As an exploratory study, we have worked with publicly available GPT-3 models to generate open-ended texts and evaluate their suitability as a basis for low-stakes, self-study language learning exercises. Having a human-in-the-loop policy in mind, the prompts are engineered by a human expert, with post-generation text refinement and question authoring also carried out by experts. Such an approach can be seen as helping to mitigate various known risks associated with the output of LLMs (e.g., hallucinations, offensive content, stereotyping, etc).
All definable model parameters (Temperature, Frequency Penalty, Presence Penalty and Max Length) are kept at fixed levels throughout to limit the number of variables across the dataset. Input prompts containing target genre and key content points are designed by the human expert in order to provide a basis for possible testing foci at the target level. The key content points also help generate similar enough output texts from a single (or slightly modified) input prompt, to allow for collation of the best elements from multiple output versions.
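A call of the kind used in such a workflow is sketched below, assuming the legacy completions endpoint of the `openai` Python package (pre-1.0); the model name, prompt and parameter values are illustrative placeholders rather than the exact settings used in this study.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="text-davinci-003",      # assumed GPT-3-family model name
    prompt=(
        "Write a short article (about 180 words) for B2-level English learners "
        "about a family visiting a science museum. Mention what they saw and "
        "one small problem that was solved by the end of the visit."
    ),
    temperature=0.7,               # kept fixed across all generations
    max_tokens=400,
    frequency_penalty=0.0,
    presence_penalty=0.0,
)

print(response["choices"][0]["text"].strip())
```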
For this research, the generated texts are intended to support single B2 CEFR level11 multiple choice reading comprehension questions with 3 answer options. The generated texts are reviewed by the human expert and given an 'accept' or 'reject' status based on their appropriateness for the target proficiency level and relevance to the content points. Accepted texts are added to a content pool, also containing fully human-authored texts. Another group of human experts (question writers) approach the accepted texts as they would any other content. For openness and transparency, question writers are informed in advance that the pool contains AI-generated content, but not which texts are AI-generated and which have been written by human authors. Question writers select and edit the texts, writing one 3-option multiple choice question per text. The annotations of this dataset, including the accept/reject status and the measures of quality of the generated texts assessed by question writers (based on the amount of editing made to the texts), can be used to train models which can automatically assess the generated texts in future. The dataset can also be used to train reward functions for further fine-tuning of the generative models.
Footnote 11: The Common European Framework of Reference for Languages (CEFR) organises language proficiency in six levels, A1 to C2, which represent Basic User, Independent User and Proficient User.
ChatGPT has been trained using a combination of supervised fine-tuning and reinforcement learning from human feedback (RLHF) [58; 59]. It uses InstructGPT [57] and includes the steps: pre-training, fine-tuning, and reward learning. The reward function used to fine-tune InstructGPT is trained using a dataset of pairs of generated texts with a human-labelled judgement on which text is better, with the objective to maximise the score difference between the 'winning' and 'losing' texts. The purpose of collecting coarse human labelling is to mitigate any mismatch between the true objective and the preferences of human annotators, thus increasing inter-annotator agreement [59]. Nevertheless, relying solely on general annotations, as in the case of InstructGPT, results in a reward function that fails to shed light on the quality of texts across various aspects, making it too broad to apply in narrower tasks and fields. To address this limitation, we can take advantage of the existing high-quality annotations available to us from skilled and experienced professional human annotators. By exploiting their expertise, we can train a more nuanced reward function that offers fine-grained evaluation and provides interpretable scores, aligning more effectively with our specific research goals. Finally, we can evaluate different methods of content generation on our reading practice platform, Read&Improve12[60]. By collecting both implicit user feedback - which texts they engage with more by spending longer reading them, clicking on definitions, completing the tasks, _etc_ - and explicit user feedback (e.g. by asking them to rate texts and express opinions) we can assess which LLM-driven systems are most successful.
Footnote 12: [https://readandimprove.englishlanguageitutoring.com/](https://readandimprove.englishlanguageitutoring.com/)
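The pairwise objective described above can be written compactly. The sketch below is a minimal PyTorch illustration of this Bradley-Terry-style preference loss (not the InstructGPT implementation); a fine-grained reward model of the kind proposed here would derive the scalar scores from expert annotations such as edit effort, but the objective can keep this form.

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(r_winning: torch.Tensor, r_losing: torch.Tensor) -> torch.Tensor:
    """Preference loss: -log sigmoid(r_w - r_l), averaged over a batch of comparisons.
    Minimising it pushes the reward of the preferred text above the rejected one."""
    return -F.logsigmoid(r_winning - r_losing).mean()

# toy scalar rewards from a hypothetical reward model for three text pairs
r_win = torch.tensor([1.2, 0.3, 0.8])
r_lose = torch.tensor([0.7, 0.5, -0.1])
print(pairwise_reward_loss(r_win, r_lose))   # smaller when preferred texts score higher
```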
## 4 Calibrating Assessment Items and Teaching Materials
In addition to content creation, LLMs can potentially be leveraged for the evaluation and calibration of existing learning content: test items and teaching content. An example of this is question difficulty estimation (QDE) from text, which has received increasing research interest in recent years [61, 62]. QDE from text offers a way to overcome the limitations of traditional approaches such as manual calibration and statistical analysis from pre-testing. These traditional approaches are either subjective or introduce a long delay between item creation and deployment due to the complexities of pre-testing on sizeable and representative populations.
QDE from text is a regression task, where the model is asked to provide, given the text of the question, a numerical estimation of its difficulty on a given scale. It can be either a supervised or unsupervised task, depending on whether a dataset of already calibrated exam questions is available, and LLMs have been used in both scenarios. Regarding supervised estimation, LLMs that leverage transfer learning - specifically, BERT [2] and DistilBERT [63] - are the current state of the art [64, 65] and have been shown to outperform other approaches using traditional NLP-derived features [66]. There has been less work on unsupervised estimation but, even in this scenario, LLMs have been shown to be helpful for estimating question difficulty from text [67].
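A minimal sketch of supervised QDE as text regression is given below, assuming the Hugging Face `transformers` API; the checkpoint name, example questions and difficulty values are placeholders, and a real system would fine-tune on a calibrated item bank.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=1,                 # single scalar output: a difficulty estimate
    problem_type="regression",    # use an MSE loss on that scalar
)

questions = [
    "Choose the word that best completes the sentence: She ___ to school every day.",
    "Read the passage and infer the author's attitude towards remote work.",
]
difficulties = torch.tensor([[-0.8], [1.3]])   # e.g. pre-test calibrated difficulties

batch = tokenizer(questions, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=difficulties)
print(outputs.loss, outputs.logits.squeeze(-1))  # training loss and current predictions
```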
All the models proposed in previous research require some kind of transfer learning starting from the publicly available pre-trained models, which might be expensive and not feasible in all scenarios. Bigger LLMs, such as the aforementioned GPT models, could be used for zero-shot or few-shot difficulty estimation from text, which is yet to be explored. As an example, ChatGPT can be asked to rank given questions by difficulty, and it can also provide an indication of the specific difficulty level (e.g. _Easy_, _Medium_, _Hard_). Crucially, the difficulty of a pool of questions depends on the specific student population that is assessed with them, and it is difficult to provide the LLM with all the information required to describe the specific pool of learners that will be assessed with the items. The model seems to be - at least partially - capable of distinguishing between different CEFR levels, since the same question can be assigned different levels depending on whether the model is asked to consider learners of level A1 or C1. However, extensive experiments should be carried out to better evaluate this, as the model sometimes produces counterintuitive estimates: in our preliminary experiments, for instance, ChatGPT sometimes estimated a question to be more difficult for C1-level learners (i.e., "Proficient") than for A1-level learners (i.e., "Basic").
## 5 Automated Assessment of Language Learners
Automated assessment has long been a prominent task in educational applications research: for instance assessing learner English speech [68, 69, 70, 71] and writing [72, 73, 74, 75, 76]. Here we focus on writing and the task of 'automated essay scoring' (AES). Whereas previous systems have involved feature engineering - typically centred around informative sequences of characters, words, part-of-speech tags, as well as phrase structures from a parser and automatically detected errors [73, 77] - more recent research systems have involved neural models for assessment [74, 75, 76]. These models tend to be carefully crafted and evaluated, since language assessment
can be a task with major consequences for the learner, including education and career prospects. Therefore any involvement of LLMs in assessment systems must be approached cautiously and its impact measured on existing benchmarks. Deployment of LLM-based assessment models should be restricted to human-in-the-loop low-stakes contexts first, including practice applications [77, 60] or placement tests such as Linguaskill13.
Footnote 13: [https://www.cambridgeenglish.org/exams-and-tests/linguaskill/](https://www.cambridgeenglish.org/exams-and-tests/linguaskill/)
The idea of using ChatGPT for assessing students' answers was put forward by Jeon & Lee [78]. Further practical steps were taken by Mizumoto & Eguchi [79] who experimented with GPT-3.5 for AES on 12,000 essays from the ETS Corpus of Non-Native Written English (TOEFL11) [80], compared the scores to benchmark levels on a 0-9 scale, and concluded that a GPT-only model only achieves weak agreement with the reference scores (\(.388\) quadratic weighted kappa). The authors furthermore compare the GPT scorer with several models involving various combinations of 45 linguistic features - related to lexical diversity, lexical sophistication, syntactic complexity and dependency, and cohesion - and observe that although the GPT baseline is outperformed by the linguistic features on their own, the best results are obtained by combining the two approaches (\(.605\) QWK). This is a finding similar to previous research, as the previous state-of-the-art performance was obtained by combining BERT-style neural models and feature-based models [76, 81].
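Agreement figures such as those quoted above (\(.388\) and \(.605\)) are quadratic weighted kappa (QWK) values, which penalise disagreements by the square of their distance on the score scale; the snippet below (with invented scores) shows the standard computation via scikit-learn.

```python
from sklearn.metrics import cohen_kappa_score

# hypothetical integer band scores on a 0-9 scale for six essays
reference_scores = [5, 6, 7, 4, 8, 6]
predicted_scores = [5, 5, 7, 5, 7, 6]

qwk = cohen_kappa_score(reference_scores, predicted_scores, weights="quadratic")
print(f"quadratic weighted kappa = {qwk:.3f}")
```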
One potential use for LLMs regarding assessment that, to the best of our knowledge, has not been thoroughly explored is for explaining assessment predictions. _Explainable AI_ is an emerging research topic, a regulatory prospect [82], and a challenge for NLP models dependent on 'black box' neural networks. One possibility is to adopt the 'chain-of-thought' prompting style [83] - as opposed to zero-shot or few-shot prompting [5] - to elicit explanations about assessment decisions from LLMs. Exactly how to engineer a series of prompts for the LLM in the chain-of-thought style is a matter for investigation, but for instance they could be similar to the following (albeit longer to elicit better explanations):
A class of students has been given the essay prompt: <essay_prompt>. This student essay -- <example_text> -- was given a score of <example_score>. Explanation: the use of language and grammatical resource are advanced but there is a spelling error in the first sentence and a grammatical error in the final sentence. This student essay -- <target_text> -- has been given a score of <predicted_score>. Please give an explanation why the text was given this score.
The aim of such an approach would be to obtain explanations specific to the essay, pinpointing relevant sections from the text if possible, and grounded in marking criteria for the learner's target level so that the explanation is relevant and useful. In common with other tasks described in this paper, further research with LLMs and proper evaluation on existing benchmarks and by human experts is needed before we can definitively conclude that this is a research avenue worth exploring.
Finally we note that there are concerns around LLMs being used in fraudulent ways by learners, but that plagiarism concerns are long-standing in computer-based exam settings. If proctoring software is set up to prevent text import from elsewhere (e.g. disabling copy-and-paste keyboard shortcuts) or to detect bursty text insertion through keystroke logging, then this is one defence against exam malpractice from LLM text generation or any other online source. In this way, LLMs are an extension of a threat we are already familiar with. Furthermore, automatic detection of LLM-generated text is the subject of the AuTexTification (Automated Text Identification) shared task, part of IberLEF 2023 (the 5th Workshop on Iberian Languages Evaluation Forum) co-located with the SEPLN Conference this year14. It appears that the level of performance from submitted systems has been high, outdoing a logistic regression baseline in many cases, with system descriptions to be presented in September. It may be that such systems can be employed as an additional line of defence against exam malpractice involving LLM-generated text.
Footnote 14: [https://sites.google.com/view/autextification](https://sites.google.com/view/autextification)
## 6 Providing Feedback to Language Learners
As a precursor to providing automatic lexico-syntactic feedback to language learners, one requirement is to first carry out the NLP task of _grammatical error detection_ (GED) or _grammatical error correction_ (GEC). These tasks have a rich pedigree involving various benchmark corpora [84, 85, 86, 87], research papers [88, 89, 90, 91, 92] and shared tasks [93, 94, 95] - all of which enable us to establish that the current state-of-the-art approach to GED and GEC tends to involve supervised fine-tuning of neural network language models using carefully annotated training data [96]. The recent emergence of LLMs, however, offers the prospect of developing GED or GEC models which are largely unsupervised other than some examples for few-shot learning15.
Footnote 15: Note that LLM training is often described as ‘self-supervised’ due to the human-authored training data, but for the purpose of GED/GEC, we say ‘unsupervised’ because in this context no task-specific training data is required.
The challenge ahead, in common with the application of LLMs to other tasks described in this paper, is to properly benchmark LLM-based models for GED and GEC on existing corpora, so that their performance can be compared to previous models. Some preliminary work has been done towards this aim, as described in a recent survey of GEC [96]. For instance, Wu _et al._[97] and Coyne & Sakaguchi [98] present preliminary results applying LLMs to GEC. The former compares ChatGPT to Grammarly and GECToR [99], a previous state-of-the-art GEC system, and the latter compares GPT-3.516 to two other GEC systems [100, 101]. Both approaches find the GPT\({}^{*}\) models perform worse than existing systems when measured using automatic evaluation techniques on existing benchmark corpora (namely CoNLL-2014 [94], JFLEG [102], BEA-2019 [95]). The authors ascribe this to the model's tendency to _over-correct_ learner text; by inserting additional text or re-structuring phrases, the corrected text moves further from the original text and is penalised by the automatic scorers. However, both works carry out human evaluation to rate the output from each system and find a preference for the GPT\({}^{*}\) output because the corrected sentences tend to be more fluent. At the same time, they found instances of _under_-correction in the human-generated reference sentence: in other words
the GPT\({}^{*}\) models caught and corrected errors which had not been corrected by the expert annotators.
While both approaches are preliminary and human evaluation tentative - based on only small samples of 100 sentences at a time from each test set - overly fluent corrections present a challenge for automatic evaluation methods as they are much more open-ended than minimal edits targeting grammatical errors rather than stylistic choices. Furthermore, while fluent corrections may at times be preferred by human evaluators, they may not aid language learners if they drift too far from the original text. Existing annotation guides for error correction state that edits should be as minimal as possible so that the learner can be helped to express what they are trying to say, rather than told how to express it differently: that is, how to amend an error rather than avoid it [103]. The issue is not a new one [104] but remains a matter for further investigation under the new conditions presented by more capable LLMs.
Another potential use of LLMs in this area is providing automatically-generated feedback comments to learners to explain linguistic concepts, grammatical points or semantic nuance. Indeed there was a recent shared task on _feedback comment generation_[105] where, when presented with an erroneous sentence such as, "He agrees the opinion", the task was to produce a comment such as: The verb agree is an intransitive verb and cannot take direct objects: add the appropriate preposition17. Participants in the shared task were able to outperform the baseline system ('an encoder-decoder with a copy mechanism based on a pointer generator network'18) through careful feature extraction from parsers and GEC models, combined with prominent LLMs at the time such as T5 or GPT-Neo [106] (e.g. Babakov _et al._[107] achieved second place in the shared task; developers of the first-placed entry have not published a system description to the best of our knowledge). It remains to be seen whether current LLMs can be tuned towards even better performance on this task: it may be that the pre-processing of texts to obtain additional linguistic information and the incorporation of pre-defined templates will continue to be vital for accurate and sensible feedback comment generation, even with ever-larger LLMs involved [108]. These are methods we can trial through A/B testing of different feedback models on our essay-writing practice platform, Write&Improve19[73, 77].
Footnote 17: [https://fcg.sharedtask.org/](https://fcg.sharedtask.org/)
Footnote 18: [https://github.com/k-hanawa/fcg_genchal2022_baseline](https://github.com/k-hanawa/fcg_genchal2022_baseline)
Footnote 19: [https://writeandimprove.com/](https://writeandimprove.com/)
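To indicate how such a system can be framed, the sketch below casts feedback comment generation as a text-to-text task with T5; this is an illustration rather than the shared-task baseline or any submitted system, the input format shown is an assumption, and the off-the-shelf `t5-base` checkpoint would need fine-tuning on pairs of erroneous sentences and expert-written comments before producing useful feedback.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# hypothetical input format: task prefix + learner sentence + marked error span
source = "generate feedback: He agrees the opinion. <error: 'agrees the opinion'>"
input_ids = tokenizer(source, return_tensors="pt").input_ids

output_ids = model.generate(input_ids, max_new_tokens=40, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```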
Other applications of LLMs for language learning feedback include chatbot interaction to explain linguistic concepts - akin to the 'Explain My Answer' feature in Duolingo Max, but also going beyond this with dialogue which is adaptive to the learner level [18] - word suggestion, paraphrasing and translation to aid learners with essay writing, and document-level feedback on, for instance, inter-sentence coherence markers, co-reference and anaphoric reference, maintaining tense and aspect consistently, argumentation structure, task completion and more. Key desiderata are that the feedback should be accurate, based on evidence, personalised, inoffensive and preferably linked to teaching materials so that the learner may continue to benefit from EdTech applications for language.
## 7 Risks & Ethical Considerations
We advocate for a cautious approach to the incorporation of LLMs in EdTech for language learning, in which the training process, performance and limitations, and pathway to delivery are well documented and the risks of misapplication of such technology are understood. There are general concerns about AI for NLP and education which are recorded in the literature and continue to be relevant, perhaps more so, as LLMs come to the fore. Firstly there is a bias towards English, and specific genres of English, due to a combination of commercial pressures, training data availability, and data sourcing from the World Wide Web: even though several models have been trained in multilingual ways, the general trend with LLMs has exacerbated this pre-existing bias [109, 110, 111]. As LLMs grow, so does their climate impact: an issue which interacts with societal and infrastructure complexities but which we should nevertheless bear in mind and attempt to mitigate [11, 12]. In addition, LLMs are known to exhibit certain biases [112] - both _representational_ (language use around demographic groups) [113, 114] and _allocational_ (how a system distributes resources or opportunities) [115, 113] - which need to be debiased or otherwise controlled [116, 117].
Suresh & Guttag identified various sources of harm in the 'machine learning life cycle' [115]: historical bias, representation bias, measurement bias, learning bias, aggregation bias, evaluation bias, deployment bias. They note that effects cascade downstream and cycle around ML systems. They provide some mitigation strategies and reference previous work in this area [118, 119]. Kasneci _et al._[6] also point to copyright issues with output from LLMs which are largely unresolved, as well as concerns about pedagogical and learning effects: namely that both teachers and students "may rely too heavily on the model", and that it may be difficult to "distinguish model-generated from student-generated answers" [120, 121, 122]. In addition they raise data privacy and security issues which require firmer regulation and auditing of EdTech firms, the problem of false information issued by LLMs, and the challenge of designing appropriate application interfaces which are both engaging and beneficial to end-users. It is worth noting that NLP researchers have made some attempts at using LLMs to assess the trustworthiness of generated texts, which could go some way towards mitigating the false information problem [123, 124, 125].
Regarding AIED and language learning, LLMs present specific risks relating to generated outputs which may be inaccurate, confusing, offensive, and so on - risks which are present in human teachers too, but made no less harmful as a result. For this reason the most successful systems may be human-machine hybrids, with humans in-the-loop or similar, where LLMs are viewed as assistive technology for human experts rather than replacements for them - performing the more mundane and mechanical tasks while experts provide the inputs characteristic of human interaction [126]. Another way that humans can monitor LLM outputs is through evaluation, and feedback mechanisms for systems in production, so that problematic outputs may be flagged.
We can also look at standards for 'responsible AI' published by technology firms and research institutes [127, 128, 129]20. For example, Duolingo [130] sets out its approach to responsible AI under Validity & Reliability, Fairness, Privacy & Security, and Accountability & Transparency - all of which have been touched on in this paper. Regarding the last attribute in particular, it is apparent from recent media stories that more can be done in this area in terms of educating the general public about how LLMs are trained, how trustworthy they may or may not be, and how best to interact with them. This is a general problem but one which nonetheless presents a challenge for EdTech applications.
## 8 Conclusion
In this paper, we have explored the opportunities for language-learning EdTech offered by 'generative AI' through LLMs. We conclude that preliminary indications are promising, but that the best systems may still require human intervention and/or the inclusion of well-established linguistic features. It may well be that LLMs can enhance language-learning EdTech, if we can establish the following through further empirical work:
1. that models enhanced by LLMs perform better than existing models on established benchmarks, or on alternative evaluation metrics which need to be defined in order to properly probe LLM capabilities for language teaching and assessment [131] - moreover that performance is _sufficiently_ better to justify the additional costs in computing and environmental terms;
2. that LLM-enhanced technology is of benefit to language learners, whether that is measured through engagement, enjoyment, learning outcomes or some combination of the three;
3. that LLM-enhanced technology does not disadvantage relevant groups (learners, teachers, writers and editors of materials, examiners) whether through bias, misinformation, or adversely affecting student progress - instead, the technology should be assistive to all groups in some regard.
Finally, we note that LLMs should not be over-hyped as an AI revolution, but rather understood as an evolutionary step in neural network models - the inevitable result of the inexorable growth in network size since the Transformer was first applied to language tasks in 2017 [1]. LLMs represent a milestone on an evolutionary path which has been unfolding for many years and thus is well documented in open access publications and open source code repositories. If we maintain this tradition - by close inspection of proprietary models, or opting to use models trained in open ways - it will be of benefit both to future researchers and scientific development and to users of AI applications who require some transparency regarding the technology. Harmful bias and other risks remain an ongoing challenge for developers of AI systems, and LLMs deployed in language learning EdTech may only exacerbate these. Therefore, proper mitigations should be put in place to address the issues which have been identified in this paper and elsewhere.
Nevertheless, LLMs present a great opportunity to continue improving EdTech for language learning, including novel ways to generate content, provide feedback, and deal with other linguistic features which hitherto have not been commonly attempted: for instance, chatting in open-ended ways at the level of the learner [18], providing document-level assessment and feedback [132], handling code-switching or 'plurilingual' learning [133].
## Acknowledgments
This work was supported by Cambridge University Press & Assessment. We thank Dr Nick Saville and Professor Michael McCarthy for their support. We are grateful to the anonymous reviewers for their helpful comments.
|
2310.02517 | The unresolved stochastic background from compact binary mergers
detectable by next-generation ground-based gravitational-wave observatories | The next generation of ground-based gravitational-wave detectors will look
much deeper into the Universe and have unprecedented sensitivities and
low-frequency capabilities. Especially alluring is the possibility of detecting
an early-Universe cosmological stochastic background that could provide
important insights into the beginnings of our Universe and fundamental physics
at extremely high energies. However, even if next-generation detectors are
sensitive to cosmological stochastic backgrounds, they will be masked by more
dominant astrophysical backgrounds, namely the residual background from the
imperfect subtraction of resolvable compact binary coalescences (CBCs) as well
as the CBC background from individually unresolvable CBCs. Using our latest
knowledge of masses, rates, and delay time distributions, we present a
data-driven estimate of the unresolvable CBC background that will be seen by
next-generation detectors. Accounting for statistical and systematic errors,
this estimate quantifies an important piece in the CBC noise budget for
next-generation detectors and can help inform detector design and subtraction
algorithms. We compare our results with predictions for backgrounds from
several cosmological sources in the literature, finding that the unresolvable
background will likely be a significant impediment for many models. This
motivates the need for simultaneous inference methods or other statistical
techniques to detect early-Universe cosmological backgrounds. | Darsan S. Bellie, Sharan Banagiri, Zoheyr Doctor, Vicky Kalogera | 2023-10-04T01:32:31Z | http://arxiv.org/abs/2310.02517v2 | The unresolved stochastic background from compact binary mergers detectable by next-generation ground-based gravitational-wave observatories
###### Abstract
The next generation of ground-based gravitational-wave detectors will look much deeper into the Universe and have unprecedented sensitivities and low-frequency capabilities. Especially alluring is the possibility of detecting an early-Universe cosmological stochastic background that could provide important insights into the beginnings of our Universe and fundamental physics at extremely high energies. However, even if next-generation detectors are sensitive to cosmological stochastic backgrounds, they will be masked by more dominant astrophysical backgrounds, namely the residual background from the imperfect subtraction of resolvable compact binary coalescences (CBCs) as well as the CBC background from individually unresolvable CBCs. Using our latest knowledge of masses, rates, and delay time distributions, we present a data-driven estimate of the unresolvable CBC background that will be seen by next-generation detectors. Accounting for statistical and systematic errors, this estimate quantifies an important piece in the CBC noise budget for next-generation detectors and can help inform detector design and subtraction algorithms. We compare our results with predictions for backgrounds from several cosmological sources in the literature, finding that the unresolvable background will likely be a significant impediment for many models. This motivates the need for simultaneous inference methods or other statistical techniques to detect early-Universe cosmological backgrounds.
## I Introduction
In 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO) [1] directly detected gravitational waves (GWs) for the first time by capturing the coalescence of two stellar-mass binary black holes (BBHs) [2]. This was eventually followed by the detection of GWs from binary neutron stars (BNSs) [3] and neutron-star black-hole binaries (NSBHs) [4] by the LIGO and Virgo observatories [5]. Constituting the category of compact binary coalescences (CBCs), these sources have heralded the arrival of GW astronomy with nearly a hundred detected candidates already [6; 7; 8].
While we have only detected GWs from CBCs so far, several other sources have been theorized and expected. In particular, several models postulate the presence of "cosmological" GWs, predominantly showing up as a cosmological stochastic gravitational wave background (SGWB) [9; 10; 11]. These might be accessible by next-generation ground-based GW detectors (henceforth known as XG detectors) like Cosmic Explorer (CE) and Einstein Telescope (ET) [12; 13; 14; 15]. Cosmological SGWBs originating from the early Universe might carry imprints of the physics of the earliest epochs in the Universe, inaccessible through any other channels. Detection of such a background would therefore drastically expand our understanding of the first instants in the Universe and unlock fundamental physics at very high energies well beyond the reach of particle accelerators [9].
However, such cosmological SGWBs can be shielded by the GWs arising from various astrophysical sources [16; 17; 18] and in particular CBCs of stellar origin since they are expected to be the largest contributor to the GW power in the frequency band of XG detectors. The GWs from CBCs will pose a major challenge for the capability of XG detectors to detect cosmological backgrounds. There are two components of such astrophysical shielding, coming from "resolved" and "unresolved" CBCs. Resolved sources, here, refer to those sources with GW signals that can be individually and confidently detected within the detector noise.
The proposed XG detectors are expected to have a high redshift reach to astrophysical CBC sources owing to their better sensitivities and low-frequency capabilities of up to \(\sim 5\) Hz [12; 19; 20]. It is expected that XG detectors would be able to resolve nearly all BBHs of stellar origin in the Universe as well as many BNSs and NSBHs up to redshifts of a few [12; 19; 20]. To minimize their impact on searches for cosmological SGWBs, significant ongoing work is devoted to developing techniques to subtract resolvable CBCs from GW data [21; 22; 23; 24; 25; 26]. However, the subtraction residue from imperfect subtraction could still be a challenge for XG detectors [21; 22].
In addition to the resolvable CBC signals, we also expect an astrophysical SGWB originating from the incoherent superposition of unresolvable CBC sources. Such an astrophysical SGWB is a key detection target for current-generation ground-based detectors [27; 28; 29], but it can also shield cosmological SGWBs. Crucially, this unresolved CBC background represents a noise source that is independent of the fidelity and efficacy of subtraction. Accurately characterizing this background will enable us
to understand the accessibility of cosmological SGWBs to XG detectors, will inform us of the necessary levels of subtraction of resolvable signals, and will provide an important data point in comparing detector designs and networks. Therefore, in this paper, we provide a data-driven estimate of the levels of the unresolvable CBC SGWB expected for different configurations of XG detector networks. We draw upon several models used in the population inference with LIGO-Virgo-KAGRA (LVK) observations and Galactic binary neutron stars; by utilizing multiple models, we provide robust estimates that account for both statistical and systematic uncertainty.
The rest of the paper is organized as follows. In Sec. II we briefly motivate the paper and place our study in the context of previous literature for XG detectors. In Sec. III, we mathematically define the SGWB and describe how an astrophysical SGWB can be calculated from a set of sources. Sec. IV describes our assumptions on the astrophysical merger rates, mass distributions, and redshift distributions which we use to compute the unresolved background, and Sec. V describes our choices of detector network configurations. In Sec. VI, we detail our formalism to numerically compute the unresolved background from CBC sources and in Sec. VII, we discuss the cosmological SGWB models that we use. Sec. VIII presents our results, and Sec. IX discusses their implications and comments on future work needed.
Throughout this paper, \(G\) is the gravitational constant, \(c\) the speed of light, \(H_{0}\) the Hubble constant, and we define our cosmology using the \(\Lambda\)CDM model with cosmological parameters taken from Planck 2018 [30].
## II The unresolved SGWB from CBCs
The total GW energy-density spectrum \(\Omega_{\rm GW}\) as seen by a GW detector is the sum of the GW energy-density spectrum from cosmological sources \(\Omega_{\rm cosmo}\) and from astrophysical sources\({}^{1}\) \(\Omega_{\rm astro}\) [24],
Footnote 1: i.e. of stellar origin.
\[\Omega_{\rm GW}=\Omega_{\rm astro}+\Omega_{\rm cosmo}. \tag{1}\]
In general, the astrophysical component can mask the cosmological component if \(\Omega_{\rm astro}>\Omega_{\rm cosmo}\). For the rest of this paper, we assume that \(\Omega_{\rm astro}=\Omega_{\rm cbc}\) and that the GW energy from other sources can be neglected so that
\[\Omega_{\rm GW}\approx\Omega_{\rm cbc}+\Omega_{\rm cosmo}. \tag{2}\]
Various methods have been developed to subtract resolvable CBC signals in order to minimize their impact on searches for \(\Omega_{\rm cosmo}\)[21; 22; 23; 24; 25; 26]. Such techniques, however, are always imperfect, as they depend on parameter estimation results and can also suffer from waveform systematics. The imperfect subtraction leaves behind a residue \(\Omega_{\rm cbc,\;residue}\) that contributes to the effective SGWB from CBCs [21; 24].
In addition to the resolved CBC signals, the incoherent superposition of the unresolvable CBC signals gives rise to an astrophysical SGWB that we call \(\Omega_{\rm cbc,\;unres}\). Therefore, effectively,
\[\Omega_{\rm cbc}=\Omega_{\rm cbc,\;unres}+\Omega_{\rm cbc,\;residue}. \tag{3}\]
While some studies find that \(\Omega_{\rm cbc,\;residue}\) remains a major challenge for detecting cosmological SGWBs [21; 22], others find that \(\Omega_{\rm cbc,\;unres}\) will provide the noise floor [23; 26]. Since \(\Omega_{\rm cbc,\;unres}\) is a purely stochastic noise, it will have to be fit simultaneously with any cosmological SGWB.
While some constraints were previously placed on \(\Omega_{\rm cbc,\;unres}\) for XG detectors by Refs. [14; 21; 23; 24; 31; 32], they were limited either by the dearth of data before O3, by simple population models, or both. Ref. [22] in particular accounts for the uncertainties in the latest LVK BBH and BNS rate inferences in their calculation of the residue, but not for \(\Omega_{\rm cbc,\;unres}\). Furthermore, papers in the literature generally do not include the contribution from NSBHs, whose unresolved SGWB, we find, is comparable with that of BNSs 2.
Footnote 2: Although Ref. [32] includes NSBHs in their estimate, this was not data-informed.
In our study, we implement a data-driven estimate of \(\Omega_{\rm cbc,\;unres}\) for XG detectors, using the latest inferences on the CBC population from LVK data. Wherever possible, we consistently incorporate the uncertainty from the rate and from the population modeling. In addition, we consider several population models to propagate the systematic uncertainty stemming from parameterized GW analyses to present uncertainty envelopes for \(\Omega_{\rm cbc,\;unres}\).
## III Calculating the SGWB from CBCs
The SGWB from any type of GW source is generally characterized by the dimensionless GW energy-density spectrum [33]
\[\Omega_{\rm GW}(f)=\frac{f}{\rho_{c}}\frac{d\rho_{\rm GW}}{df}(f), \tag{4}\]
where \(f\) is the observed GW frequency, \(\rho_{c}=3H_{0}^{2}c^{2}/8\pi G\) is the critical density needed to close the Universe, and \(\rho_{\rm GW}\) is the GW energy density at \(f\).
The GW energy density \(\rho_{\rm GW}\) arising from astrophysical sources is given by
\[\frac{d\rho_{\rm GW}}{df}(f)=\frac{1}{c}F(f), \tag{5}\]
where \(F\) is the GW energy flux in the observer frame [34]. We define the energy flux by summing up the individual
fluxes in a population of astrophysical sources between redshifts \(z_{\rm low}\) to \(z_{\rm up}\) and with an associated set of source parameters \(\theta\) as [16; 25]
\[F(f)=\int_{\theta}p(\theta)d\theta\int_{z_{\rm low}}^{z_{\rm up}}\frac{dR_{o}(z) }{dz}\frac{\frac{dE_{\rm sw}(f_{\rm s},\theta)}{df_{\rm s}}(1+z)^{2}}{4\pi d_{L }^{2}}dz, \tag{6}\]
where \(p(\theta)\) is the distribution of the source parameters and \(d_{L}\) is the luminosity distance. The source-frame GW energy spectrum emitted by each astrophysical source is given by \(dE_{\rm sw}(f_{\rm s},\theta)/df_{\rm s}\), where \(f_{\rm s}=f(1+z)\) is the source-frame frequency.
For astrophysical sources distributed isotropically across the Universe, the rate of observed signals \(R_{o}\) in some redshift slice \(dz\) is
\[\begin{split}\frac{dR_{o}(z)}{dz}=\frac{dN(z)}{dz\,dt_{o}}=& \frac{dN(z)}{dV_{c}\,dt_{s}}\frac{dt_{s}}{dt_{o}}\frac{dV_{c}}{dz}(z)\\ =& R_{\nu}(z)\frac{1}{1+z}\frac{dV_{c}}{dz}(z),\end{split} \tag{7}\]
where \(N\) specifies the number of events occurring in a given cosmic slice and \(dt_{s}/dt_{o}=1/(1+z)\) accounts for time dilation between the source and observer. \(R_{\nu}(z)=dN(z)/dV_{c}dt_{s}\) is the source-frame rate density per comoving volume \(V_{c}\) of a specific type of astrophysical source \(\nu\). The differential comoving volume element is given by [35]
\[\frac{dV_{c}}{dz}(z)=\frac{4\pi d_{L}^{2}}{(1+z)^{2}}\frac{c}{H_{0}E(z)}, \tag{8}\]
where, for a flat \(\Lambda\)CDM cosmology and ignoring radiation density, \(E(z)=\sqrt{\Omega_{M}(1+z)^{3}+\Omega_{\Lambda}}\)[16; 25]. Bringing this all together, we arrive at the energy-density spectrum for a SGWB arising from astrophysical sources distributed isotropically across the Universe between some redshifts \(z_{\rm low}\) to \(z_{\rm up}\):
\[\Omega_{\rm GW}(f)=\frac{f}{\rho_{c}H_{0}}\int_{\theta}\int_{z_{\rm low}}^{z_ {\rm up}}\frac{R_{\nu}(z)\frac{dE_{\rm sw}(f_{\rm s},\theta)}{df_{\rm s}}p( \theta)}{(1+z)E(z)}d\theta dz. \tag{9}\]
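As a numerical illustration of the cosmological factors entering Eqs. 7-9, the sketch below evaluates \(E(z)\) and the sky-integrated \(dV_{c}/dz\) with `astropy`'s Planck 2018 cosmology; this is an illustration only and not the pipeline used for the results in this paper.

```python
import numpy as np
from astropy import units as u
from astropy.cosmology import Planck18   # Planck 2018 flat LambdaCDM, as in Sec. I

z = np.linspace(0.01, 10.0, 200)

# E(z) = sqrt(Omega_M (1+z)^3 + Omega_Lambda) for flat LambdaCDM, ignoring radiation
E_of_z = Planck18.efunc(z)

# differential comoving volume dV_c/dz integrated over the full sky (Eq. 8)
dVc_dz = (Planck18.differential_comoving_volume(z) * 4.0 * np.pi * u.sr).to(u.Gpc**3)

# redshift weighting 1 / ((1+z) E(z)) appearing in the integrand of Eq. 9
weight = 1.0 / ((1.0 + z) * E_of_z)

print(f"dV_c/dz at z = 1: {np.interp(1.0, z, dVc_dz.value):.0f} Gpc^3")
print(f"integrand weight at z = 1: {np.interp(1.0, z, weight):.3f}")
```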
Finally, the spectral energy \(E_{\rm sw}\) emitted by any astrophysical source can be related to the amplitudes of the plus (\(+\)) and cross (\(\times\)) GW polarizations \(\tilde{h}_{+}\) and \(\tilde{h}_{\times}\) via [36]
\[\frac{dE_{\rm sw}}{df_{s}}=\frac{2\pi^{2}c^{3}d_{L}^{2}f^{2}}{G(1+z)^{2}} \bigg{\langle}|\tilde{h}_{+}(f,\theta)|^{2}+|\tilde{h}_{\times}(f,\theta)|^{ 2}\bigg{\rangle}_{\Omega_{s}}, \tag{10}\]
where the right-hand side is averaged over the source orientations \(\Omega_{s}\).
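For orientation, at leading (Newtonian) order the orientation-averaged inspiral energy spectrum takes the closed form \(dE_{\rm gw}/df_{s}=(\pi^{2/3}/3)\,G^{2/3}\mathcal{M}^{5/3}f_{s}^{-1/3}\), which depends only on the chirp mass \(\mathcal{M}\). The snippet below evaluates this simple approximation; it is an illustration only, neglects the merger and ringdown, and is not necessarily the waveform treatment used for the results in this paper.

```python
import numpy as np
from astropy import constants as const
from astropy import units as u

def dE_df_inspiral(f_source, m1, m2):
    """Leading-order (Newtonian) inspiral energy spectrum dE_gw/df_s of a circular
    binary; merger, ringdown and post-Newtonian corrections are ignored here."""
    chirp_mass = (m1 * m2) ** (3.0 / 5.0) / (m1 + m2) ** (1.0 / 5.0)
    return (np.pi ** (2.0 / 3.0) / 3.0) * const.G ** (2.0 / 3.0) * \
        chirp_mass ** (5.0 / 3.0) * f_source ** (-1.0 / 3.0)

spectrum = dE_df_inspiral(100.0 * u.Hz, 1.4 * u.Msun, 1.4 * u.Msun).to(u.J / u.Hz)
print(spectrum)   # of order 1e43 J/Hz for a 1.4 + 1.4 Msun binary at 100 Hz
```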
## IV The CBC population models
In the previous sections, we provided the motivation and the methods to calculate \(\Omega_{\rm cbc,\;unres}\), the unresolved SGWB from CBCs. Now we describe the models we use for the astrophysical population of CBCs, focusing in particular on detectability by XG GW detectors. Several studies have shown that almost all of the BBHs will be individually resolvable by these detectors [20; 21; 12] and that the unresolvable background from BBHs will be several orders of magnitude smaller than the corresponding background from BNSs. Therefore, we assume that the unresolvable SGWB arising from the BBH population is negligible and limit ourselves to the NSBH and BNS populations.
Since XG detectors will likely be able to detect all compact binaries that merge at low redshifts, such events will not contribute to the unresolved CBC SGWB. On the other hand, mergers at high redshifts will often not be individually resolvable, making them important contributors to the unresolved CBC SGWB. Therefore, we incorporate realistic star-formation-based redshift distribution models paired with a model of merger delay-time distributions. This is described further in Sec. IV.1.
For all considered astrophysical populations, we assume that tidal effects and any eccentricity effects are negligible as they will be subdominant. The GW signal of such binaries can then be described by a set of 15 source parameters \(\theta\), of which 8 are intrinsic and 7 are extrinsic. Intrinsic parameters include the component masses \(m_{1}\) and \(m_{2}\) and the three-dimensional spin vectors \(\vec{\chi}_{1}\) and \(\vec{\chi}_{2}\). Since spins are expected to have a subdominant effect on the SGWB, we also set them to zero [21; 37]. Extrinsic parameters include the redshift \(z\), the right ascension and declination \((\alpha,\delta)\), the polarization angle \(\psi\), the inclination angle \(\iota\), the coalescence phase \(\phi_{c}\), and the coalescence time \(t_{c}\)3. We draw \(\cos\iota\), \(\cos\delta\), \(\alpha\), \(\psi\), and \(\phi_{c}\) from uniform distributions.
Footnote 3: Any choice of \(t_{c}\) does not affect the calculation of \(\Omega_{\rm cbc,\;unres}\), since it depends only on the GW power.
The rate and mass models are described in the following subsections and the full distributions are summarized in Tab. 1.
### The redshift distributions
Compact binaries experience a delay time \(t_{d}\) between their formation at \(z_{f}\) and merger at \(z\), where \(z_{f}\) is the zero-age main-sequence (ZAMS) "formation" redshift. We calculate the delay time as the difference between the cosmological lookback time [38] at \(z_{f}\) and \(z\),
\[t_{d}=t_{L}(z_{f})-t_{L}(z). \tag{11}\]
We assume a delay time distribution of \(p(t_{d})\propto t_{d}^{-1}\) as suggested by stellar evolution and population synthesis models [39; 40]. The GW and Galactic pulsar observations [41; 42] are also consistent with such a distribution, although Galactic populations likely harbor an excess of
sources that will merge rapidly. An analysis using localized short gamma-ray bursts potentially finds steeper time-delay distributions, but it does not include selection effects [43]. We set the maximum delay time \(t_{d}^{max}\) to the Hubble time to limit ourselves to binaries that will merge in the age of the Universe. We set a fiducial minimum delay time \(t_{d}^{min}\) of 20 Myr, which is approximately how long massive binaries take to evolve into two neutron stars [44; 45; 46]. It is also consistent with observations of binary pulsar merger times and of short gamma-ray bursts in both late- and early-type galaxies [47; 44].
We convolve the delay-time distribution with a star formation rate (SFR) model \(R_{f}(z_{f})\) to calculate the source-frame CBC merger rate density per comoving volume [41]
\[R_{m}(z)\propto\int_{t_{d}^{min}}^{t_{d}^{max}}R_{f}(\tilde{z}[t_{L}(z)+t_{d}]) p(t_{d})dt_{d}, \tag{12}\]
where \(\tilde{z}[t]\) denotes the redshift corresponding to lookback time \(t\), so that \(\tilde{z}[t_{L}(z)+t_{d}]=z_{f}\) is the formation redshift. We normalize Eq. 12 as
\[R_{\nu,cbc}(z)=\frac{\mathcal{R}_{0}}{R_{m}(z=0)}R_{m}(z), \tag{13}\]
such that \(R_{\nu,cbc}(z=0)=\mathcal{R}_{0}\) is the inferred local source-frame CBC merger rate density per comoving volume obtained from population analysis (see Sec. IV.2) for CBCs of type \(\nu\). Using Eq. 7, we calculate the observed differential CBC merger rate
\[\frac{dR_{o,cbc}(z)}{dz}=R_{\nu,cbc}(z)\frac{1}{1+z}\frac{dV_{c}}{dz}(z). \tag{14}\]
We simulate CBCs up to redshift \(z=10\) as we expect minimal star formation beyond this redshift and therefore no CBCs [48]. Our fiducial SFR model is the Madau-Fragos SFR [49],
\[R_{f}(z_{f})\propto\frac{(1+z_{f})^{2.6}}{1+(\frac{1+z_{f}}{3.2})^{6.2}}. \tag{15}\]
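To illustrate how the convolution in Eq. 12 can be carried out numerically, the sketch below combines the Madau-Fragos SFR of Eq. 15 with \(p(t_{d})\propto t_{d}^{-1}\) and normalizes the result to a local rate as in Eq. 13. The lookback-time routine, grid choices, and the local rate value are illustrative assumptions, not the exact implementation used in this work.

```python
import numpy as np
from scipy.integrate import quad

Omega_M, Omega_L = 0.31, 0.69
H0_inv_Gyr = 14.4                     # 1/H0 in Gyr (illustrative)

def E(z):
    return np.sqrt(Omega_M * (1 + z)**3 + Omega_L)

def lookback_time(z):
    """Cosmological lookback time t_L(z) in Gyr."""
    return H0_inv_Gyr * quad(lambda zp: 1 / ((1 + zp) * E(zp)), 0, z)[0]

def sfr(zf):
    """Madau-Fragos star formation rate shape (Eq. 15), arbitrary normalization."""
    return (1 + zf)**2.6 / (1 + ((1 + zf) / 3.2)**6.2)

# Precompute t_L(z) on a grid and build the inverse mapping z(t_L)
z_grid = np.linspace(0, 15, 600)
tL_grid = np.array([lookback_time(z) for z in z_grid])
z_of_tL = lambda t: np.interp(t, tL_grid, z_grid)

def R_m_unnorm(z, td_min=0.02, td_max=13.8):
    """Un-normalized merger rate density (Eq. 12) with p(t_d) proportional to 1/t_d."""
    tL_z = lookback_time(z)
    td_max_eff = min(td_max, tL_grid[-1] - tL_z)   # formation must lie within the grid
    if td_max_eff <= td_min:
        return 0.0
    integrand = lambda td: sfr(z_of_tL(tL_z + td)) / td
    return quad(integrand, td_min, td_max_eff, limit=200)[0]

# Normalize to an illustrative local merger rate R0 (Eq. 13)
R0 = 300.0                                          # Gpc^-3 yr^-1
norm = R0 / R_m_unnorm(0.0)
print("R_m(z=1) =", norm * R_m_unnorm(1.0), "Gpc^-3 yr^-1")
```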
### The mass distributions and rates
We now describe the mass distributions and local merger rate densities \(\mathcal{R}_{0}\) we use to simulate the CBC population. We consider several population models, fit to LVK data up until the end of the third observing run via Bayesian inference. We also consider models of mass distribution fit to Galactic double neutron stars. Considering several models enables us to characterize systematic modeling uncertainties that could come from using strongly parameterized models. Each model has a statistical uncertainty on the population that we marginalize over. Additionally in each model, a single set of hyperparameters \(\Lambda_{\rm GW}\) characterizes a single CBC population. We assume that only the overall rate, not the mass distribution, evolves with redshift according to Eq. 12.
For each model, we self-consistently include the rates inferred by it when available [50]. We refer to such models as "rate-inclusive" models. For models where such information is not available - which we refer to as "rate-marginalized" models - we randomly pair each hyperparameter sample with a \(\mathcal{R}_{0}\) posterior from an LVK O3b population model [51] that estimates rates. For clarity, we rewrite Eq. 14 for cases where \(\mathcal{R}_{0}\) depends on \(\Lambda_{\rm GW}\) (i.e., \(\mathcal{R}_{0}(\Lambda_{\rm GW})\)) as
\[\frac{dR_{o,cbc}(z,\Lambda_{\rm GW})}{dz}=R_{\nu,cbc}(z,\Lambda_{\rm GW})\frac{1}{1+z}\frac{dV_{c}}{dz}(z). \tag{16}\]
The various mass-rate model configurations are described below and summarized in Tab. 1.
#### iv.2.1 Neutron-star black-hole models
We model the NSBH mass distribution using the Bayesian population analyses from Ref. [54]. We assume that the distribution of the primary mass \(m_{1}\) follows the truncated power-law [57]
\[\pi(m_{1}|\gamma,m_{1,\rm min},m_{1,\rm max})\propto\\ \begin{cases}m_{1}^{-\gamma},\text{ if }m_{1,\rm min}\leq m_{1} \leq m_{1,\rm max}\\ 0,\text{ otherwise}\end{cases}, \tag{17}\]
with a power-law index \(\gamma\), minimum \(m_{1}\) cutoff \(m_{1,\rm min}\), and maximum \(m_{1}\) cutoff \(m_{1,\rm max}\).
We consider two pairing functions to get the distribution of the secondary mass. The first is a power-law pairing function [58], which we refer to as the nsbh-pl model
\[\pi(q|\beta,m_{1},m_{2,\rm max})\propto\\ \begin{cases}q^{\beta},\text{ if }q_{\rm min}(m_{1})\leq q\leq q_{\rm max}(m_{1},m_{2,\rm max})\\ 0,\text{ otherwise}\end{cases}, \tag{18}\]
where \(q=m_{2}/m_{1}\) is the mass ratio and \(\beta\) is a power-law index. The second is a truncated-Gaussian pairing function [54], which we refer to as the nsbh-g model
\[\pi(q|\mu,\sigma,m_{1},m_{2,\rm max})\propto\\ \begin{cases}\mathcal{N}(q|\mu,\sigma),\text{ if }q_{\rm min}(m_{1})\leq q\leq q_{\rm max}(m_{1},m_{2,\rm max})\\ 0,\text{ otherwise}\end{cases}, \tag{19}\]
where \(\mathcal{N}(q|\mu,\sigma)\) is a Gaussian with mean \(\mu\) and standard deviation \(\sigma\). For both models, we set the minimum NS mass \(m_{2,\rm min}=1\)\(M_{\odot}\) so that the minimum mass ratio cutoff \(q_{\rm min}=1/m_{1}\). The maximum mass ratio cutoff is set as \(q_{\rm max}=\text{min}(m_{2,\rm max}/m_{1},1)\), where the maximum NS mass \(m_{2,\rm max}\) is a free parameter drawn uniformly between 1.97 \(M_{\odot}\) and 2.7 \(M_{\odot}\). While the ranges of \(m_{1}\) and \(m_{2}\) differ based on the particular hyperparameter
values, the broadest possible range is \(m_{1}\in[2,20]\)\(M_{\odot}\) and \(m_{2}\in[1,2.7]\)\(M_{\odot}\). We refer the reader to Ref. [54] for a more detailed overview of the models.
Both the nsbh-pl and nsbh-g models are rate-inclusive models that self-consistently calculate \(\mathcal{R}_{0}\). Hence, for each hyperparameter sample in both analyses, we set \(\mathcal{R}_{0}\) to the associated posterior \(\mathcal{R}_{0}(\Lambda_{\text{GW}})\).
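As a concrete illustration of how an NSBH population can be drawn from Eqs. 17 and 18, the sketch below samples primary masses from the truncated power law and mass ratios from the power-law pairing function by inverse-CDF sampling. The specific hyperparameter values are placeholders, not draws from the posteriors of Ref. [54].

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_truncated_powerlaw(alpha, lo, hi, size, rng):
    """Draw x ~ x^(-alpha) on [lo, hi] by inverse-CDF sampling (alpha != 1)."""
    u = rng.uniform(size=size)
    a = 1.0 - alpha
    return (lo**a + u * (hi**a - lo**a))**(1.0 / a)

def sample_nsbh(n, gamma=2.0, m1_min=2.5, m1_max=20.0, beta=1.5, m2_max=2.5, rng=rng):
    """Sample (m1, m2) from an nsbh-pl-like model: Eq. 17 for m1, Eq. 18 for q."""
    m1 = sample_truncated_powerlaw(gamma, m1_min, m1_max, n, rng)
    q_min = 1.0 / m1                                # minimum NS mass of 1 Msun
    q_max = np.minimum(m2_max / m1, 1.0)
    # q ~ q^beta on [q_min(m1), q_max(m1)]: inverse-CDF sampling (beta != -1)
    u = rng.uniform(size=n)
    b = beta + 1.0
    q = (q_min**b + u * (q_max**b - q_min**b))**(1.0 / b)
    return m1, q * m1

m1, m2 = sample_nsbh(100000)
print(m1.mean(), m2.mean())
```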
#### iv.2.2 Power Law + Dip + Break model
We next consider the Power Law + Dip + Break (PDB) model used in the LVK GWTC-3 analysis [51; 52; 53]. The PDB model describes the full compact-binary mass spectrum with a broken power law featuring a dip in the mass distribution between the neutron-star and black-hole mass ranges, and therefore simultaneously captures the BNS and NSBH populations. We use the variant with a power-law pairing in mass ratio, which we refer to as pdb-pl; the random-pairing variant is referred to as pdb-ind. Both are rate-inclusive, so for each hyperparameter sample we use the self-consistently inferred rate posterior \(\mathcal{R}_{0}(\Lambda_{\rm GW})\), corrected to include only the fraction of systems in the relevant mass range.

#### iv.2.3 Binary neutron star models

Our first BNS mass model, which we refer to as bns-g, is based on the mass distribution inferred from Galactic double neutron stars in Ref. [55].
While the analysis in Ref. [55] relies on observations of only Galactic BNSs, we assume in our usage of the bns-g model that it is extendable to all redshifts and metallicities. Note, however, that the BNS mass distribution inferred with Galactic observations is potentially inconsistent with GW observations. This is especially highlighted by the BNS merger GW190425 [59], which has a total mass heavier than that of typical Galactic double neutron stars and could potentially have formed through different formation channels [60].
Hence, we include a second model, which we call bns-pl, based on the BNS power-law mass distribution inferred from all the neutron star systems detected through GWs [51; 56; 8]. In the bns-pl model, both the primary and secondary masses are paired randomly after each being drawn from a power-law distribution
\[\pi(m|\gamma,m_{\rm min},m_{\rm max})\propto m^{\gamma}, \tag{24}\]
with a power-law index \(\gamma\) and minimum (maximum) mass cutoff \(m_{\rm min}\) (\(m_{\rm max}\)). While the neutron star power model is used in Ref. [51] to infer the mass distribution from both BNSs and NSBHs, we only consider the BNS case here.
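A corresponding minimal sketch for the bns-pl model draws both component masses independently from the power law of Eq. 24 and pairs them randomly, relabeling so that \(m_{1}\geq m_{2}\). The power-law index and mass cutoffs below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_powerlaw(gamma, m_min, m_max, size, rng):
    """Draw m ~ m^gamma on [m_min, m_max] by inverse-CDF sampling (gamma != -1)."""
    u = rng.uniform(size=size)
    g = gamma + 1.0
    return (m_min**g + u * (m_max**g - m_min**g))**(1.0 / g)

def sample_bns_pl(n, gamma=-2.0, m_min=1.0, m_max=3.0, rng=rng):
    """Random pairing of two independent draws from Eq. 24, with m1 >= m2."""
    a = sample_powerlaw(gamma, m_min, m_max, n, rng)
    b = sample_powerlaw(gamma, m_min, m_max, n, rng)
    return np.maximum(a, b), np.minimum(a, b)

m1, m2 = sample_bns_pl(100000)
print(m1.mean(), m2.mean())
```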
Since our BNS models are all rate-marginalized, we model their rate distributions by drawing \(\mathcal{R}_{0}\) posteriors from the rate-inclusive PDB random-pairing model (which we refer to as pdb-ind) [51; 52] fit to LVK GWTC-3 data. Similar to Sec. IV.2.2, we correct the rates inferred by pdb-ind to include only the fraction of systems that are BNSs. This means restricting the masses to \(m_{1},m_{2}\in[1,3]\,M_{\odot}\) for the bns-pl model, but to \(m_{1},m_{2}\in[1,2]\,M_{\odot}\) for the bns-g model to be consistent with the mass ranges from Refs. [51; 55].
## V Detector networks
We consider four different possible networks of XG detectors, using the latest detector designs described in Refs. [61; 20]. Using the GW simulation package Gwbench[62], we consider the CE-40 and CE-20 options for the proposed 40-km arm and 20-km arm CE sensitivities respectively [61; 20], the A# option for the proposed 4-km arm A# sensitivity [63], and the ET-10-XYL option for the proposed triangular "xylophone" 10-km arm ET sensitivity [64]. Figure 1 shows the amplitude spectral density \(\sqrt{S_{n}(f)}\) of each detector considered.
Since an SGWB is detected by cross-correlating data observed separately by at least two detectors, we consider only multiple-detector networks [65]. We consider the CE-A (coast of Washington, USA), CE-B (coast of Texas, USA), ETS (slightly South of Virgo's current location at Cascina, Italy), and LLO (current location of the LIGO-Livingston Observatory at Livingston, Louisiana, USA) facility locations specified in Tab. 2 of Ref. [61], which we point to for the details.
We consider four different networks of detectors:
* A _fiducial_ three-detector network including one CE-40 at CE-A, one CE-20 at CE-B, and one ET-10-XYL at ETS.
* An _alternate_ three-detector network including one CE-40 at CE-A, one ET-10-XYL at ETS, and one A#-upgraded detector at LLO.
* An _alternate_ two-detector network including one CE-40 at CE-A, and one CE-20 at CE-B.
* An _alternate_ two-detector network including one CE-40 at CE-A, and one ET-10-XYL at ETS.
We choose the same three-detector networks as those proposed in Refs. [61; 20] to ease comparison with existing and future literature. While the future development of ET has been officially confirmed, we specifically explore a two-detector configuration that does not include ET, in order to evaluate the ability of CE facilities alone to detect astrophysical and cosmological SGWBs and to aid in the design of the CE detectors.
Throughout the paper, we set a minimum frequency of 5 Hz, corresponding to the proposed lower limit of CE and A# design sensitivities [61; 20; 63] as shown in Fig. 1. In addition, we set a maximum frequency of 2000 Hz. For each network, we use the same \(3\sigma\) power-law integrated (PI) curves [66; 21] as Ref. [61]4 in order to measure the ability of the network to detect SGWB signals. We refer the reader to Ref. [61] for the overlap reduction functions [66; 21] of the various detector pairs comprising the networks described in this section.
Footnote 4: Obtained through private communication with the authors.
## VI Simulating the SGWB : A Monte-Carlo approach
In the previous sections, we have described the population models and the detector networks that we use. In this section, we describe the simulation and the calculation of the unresolved SGWB using Eq. 9 and Eq. 10, for which we adopt a Monte-Carlo approach [67; 68; 14; 25; 44]. We use the Gwbench simulation platform [62] to generate GW waveforms for non-spinning quasi-circular binaries, neglecting tidal effects. Gwbench naturally accounts for the Earth's rotation and its impact on the antenna patterns, which is important since BNSs can last for several hours in the observing band of XG detectors [62]. We use IMRPhenomD [69; 70] as our waveform approximant 5.
Footnote 5: Note that the choice of waveform approximant is expected to be subdominant in calculating the SGWB [21].
For each population configuration described in Sec. IV and summarized in Tab. 1, we are interested in calculating
\(\Omega_{\rm cbc,\ unres}\). To account for astrophysical uncertainties, we draw 2000 hyperparameter samples from the inferred population distributions and estimate \(\Omega_{\rm cbc,\ unres}\) for each sample.
To do this in a computationally tractable way, we first simulate \(10^{5}\) waveforms each for NSBHs and BNSs drawn from broad fiducial population distributions 6. We then apply rejection sampling to draw a population corresponding to any particular hyperparameter draw. We estimate the mean number of sources needed for a CBC model \(\nu\) with a set of hyperparameters \(\Lambda_{\rm GW}\) and observation time \(T\) as
Footnote 6: In this fiducial population, for NSBHs we uniformly draw \(m_{1}\sim[2,50]\) and \(m_{2}\sim[1,3]\) and for BNSs we uniformly draw both \(m_{1},m_{2}\sim[0.5,3]\).
\[\langle N_{\nu}(\Lambda_{\rm GW})\rangle=\int_{0}^{z_{\rm max}}R_{\nu}(z, \Lambda_{\rm GW})\frac{dV_{c}}{dz}\frac{dz}{1+z}\ T, \tag{25}\]
and draw the actual number of sources through a Poisson draw
\[N_{\nu}(\Lambda_{\rm GW})\sim\texttt{Poiss}(\lambda=\langle N_{\nu}(\Lambda_ {\rm GW})\rangle). \tag{26}\]
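The sketch below shows how Eqs. 25 and 26 can be evaluated numerically: the mean number of mergers in an observation time \(T\) is obtained by integrating the observed rate density over redshift, and the realized number is then drawn from a Poisson distribution. The rate density and cosmological values are placeholder assumptions reused from the earlier sketches.

```python
import numpy as np
from scipy.integrate import quad

Omega_M, Omega_L = 0.31, 0.69
H0 = 67.9 / 3.086e19               # Hubble constant in s^-1
c = 2.998e8                        # m/s
Gpc = 3.086e25                     # m

def E(z):
    return np.sqrt(Omega_M * (1 + z)**3 + Omega_L)

def d_C(z):
    """Comoving distance in m."""
    return (c / H0) * quad(lambda zp: 1 / E(zp), 0, z)[0]

def dVc_dz(z):
    """Comoving volume element dV_c/dz in Gpc^3 (Eq. 8, using d_L = (1+z) d_C)."""
    return 4 * np.pi * d_C(z)**2 * (c / (H0 * E(z))) / Gpc**3

def R_nu(z):
    """Placeholder source-frame rate density in Gpc^-3 yr^-1."""
    return 300.0 * (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

def expected_N(T_yr=1.0, z_max=10.0):
    """Eq. 25: mean number of mergers in an observation time T."""
    integrand = lambda z: R_nu(z) * dVc_dz(z) / (1 + z)
    return T_yr * quad(integrand, 0, z_max, limit=200)[0]

rng = np.random.default_rng(1)
N_mean = expected_N()
N_draw = rng.poisson(N_mean)        # Eq. 26
print(N_mean, N_draw)
```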
In order to estimate \(\Omega_{\rm cbc,\ unres}\), we first need to extract only the unresolvable signals from this population. To do this, we first compute the matched-filter signal-to-noise ratio (SNR) \(\rho_{j}^{\rm mf}\) of the signal, defined for detector \(j\) with noise PSD \(S_{n,j}(f)\) as [71]:
\[\rho_{j}^{\rm mf}=\left[4\int_{0}^{\infty}\frac{|\tilde{h}_{j}(f)|^{2}}{S_{n, j}(f)}df\right]^{1/2}. \tag{27}\]
This is, however, the optimal matched-filter SNR; in order to account for the measurement uncertainty due to detector noise, we correct the SNR as [72; 48; 73]
\[\rho_{j}^{\rm obs}=\rho_{j}^{\rm mf}+\mathcal{N}(0,1). \tag{28}\]
We then define the observed network SNR \(\rho_{\rm net}^{\rm obs}\) for a network of \(D\) detectors by summing the individual observed SNRs in quadrature as
\[\rho_{\rm net}^{\rm obs}=\sqrt{\sum_{j=1}^{D}(\rho_{j}^{\rm obs})^{2}}. \tag{29}\]
A signal is then labeled as resolved if \(\rho_{\rm net}^{\rm obs}\) is greater than some frequency-independent threshold \(\rho_{\rm thresh}\).
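A minimal sketch of the resolvability criterion in Eqs. 27-29 is given below for a toy inspiral-like signal and flat placeholder PSDs; the waveform amplitude, PSD values, and detector list are illustrative assumptions and do not correspond to the Gwbench waveforms or the actual CE/ET noise curves.

```python
import numpy as np

rng = np.random.default_rng(7)
freqs = np.linspace(5.0, 2000.0, 4000)             # Hz
df = freqs[1] - freqs[0]

def htilde_abs(f, A=1e-25):
    """Toy frequency-domain amplitude |h(f)| with an inspiral-like f^(-7/6) slope."""
    return A * (f / 25.0)**(-7.0 / 6.0)

def matched_filter_snr(hf_abs, psd, df):
    """Eq. 27: optimal matched-filter SNR in a single detector (Riemann sum)."""
    return np.sqrt(4.0 * np.sum(hf_abs**2 / psd) * df)

# Placeholder (frequency-independent) PSDs for a 3-detector network
psds = [1e-48, 2e-48, 1.5e-48]

rho_obs = []
for Sn in psds:
    rho_mf = matched_filter_snr(htilde_abs(freqs), Sn, df)
    rho_obs.append(rho_mf + rng.normal())          # Eq. 28: add N(0,1) scatter
rho_net = np.sqrt(np.sum(np.square(rho_obs)))      # Eq. 29: quadrature sum

rho_thresh = 12.0
print("resolved" if rho_net > rho_thresh else "unresolved", rho_net)
```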
Once we have a population \(N_{\nu}^{\rm unres}(\Lambda_{\rm GW})\) of unresolvable sources with respect to \(\rho_{\rm thresh}\), we calculate the GW energy flux (Eq. 6) using a Monte-Carlo sum over our simulated population [14; 21; 68]:
\[F_{\nu}(f;\Lambda_{\rm GW})\approx\frac{\pi c^{3}f^{2}}{2GT}\sum_{i=1}^{N_{\nu }^{\rm unres}(\Lambda_{\rm GW})}\biggl{[}|\tilde{h}_{+}^{i}(f,\theta^{i})|^{2 }+|\tilde{h}_{\times}^{i}(f,\theta^{i})|^{2}\biggr{]}. \tag{30}\]
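A sketch of the Monte-Carlo sum in Eq. 30 over the unresolved sub-population is given below; the per-source polarization amplitudes are placeholder arrays standing in for the Gwbench waveforms.

```python
import numpy as np

G = 6.674e-11
c = 2.998e8
T = 3.154e7                                        # observation time of 1 yr in s

def flux_unresolved(freqs, hp_abs2, hc_abs2):
    """Eq. 30: F_nu(f) from the unresolved sources.

    hp_abs2, hc_abs2: arrays of shape (N_unres, len(freqs)) holding |h_+|^2 and |h_x|^2.
    """
    return (np.pi * c**3 * freqs**2 / (2.0 * G * T)) * (hp_abs2 + hc_abs2).sum(axis=0)

# Toy example: 1000 unresolved sources with inspiral-like spectra
freqs = np.linspace(5.0, 2000.0, 500)
amps = 1e-25 * (1.0 + 0.3 * np.random.default_rng(3).standard_normal(1000))[:, None]
hp_abs2 = (amps * (freqs / 25.0)**(-7.0 / 6.0))**2
hc_abs2 = 0.8 * hp_abs2                            # assume |h_x| slightly smaller than |h_+|
F = flux_unresolved(freqs, hp_abs2, hc_abs2)
print(F[:3])
```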
Figure 1: Amplitude spectral densities \(\sqrt{S_{n}}\) for the detectors in our networks.
Figure 2: This plot shows the \(\Omega_{\rm cbc,\,unres}\) estimate for the various BNS and NSBH models for the fiducial 3-detector network described in Sec. V. The purple band in both plots is the pdb-pl model that simultaneously incorporates both the NSBH and BNS systems. Also overlaid is the 3\(\sigma\) PI curve that shows that the unresolved CBC backgrounds will likely be very loud.
## VII Cosmological SGWB models
To gauge the impact of the \(\Omega_{\rm{cbc,\;unres}}\), we consider several models of early Universe cosmological SGWBs that could be accessible with XG detectors in principle. These models include SGWBs from domain walls [74], first-order phase transition (FOPT) sound waves [75, 76], FOPT bubble collisions [75], stiff equation of state [77], and preheating [61, 78]. We also use the model spectrum for Nambu-Goto oscillating cosmic string loops (the model C1 from Ref.[79]).
We note that XG detectors will not be sensitive to standard slow-roll inflation, for which \(\Omega_{\rm{GW}}\sim 10^{-15}-10^{-17}\) depending on the model [22, 9]. In general, these various SGWBs are highly sensitive to the choice of model parameters, but in our plots, we chose illustrative curves for \(\Omega_{\rm{cosmo}}\).
## VIII Results
As described in Sec. VI, for each population model in Tab. 1, we have 2000 different draws for the unresolved CBC SGWB. These draws incorporate both the uncertainty in the rate of mergers and the astrophysical uncertainty from the population model. In Fig. 2, we show the 90% credible bands for the unresolved SGWB for the various BNS-only and NSBH-only population models, for the fiducial 3-detector network from Sec. V using a threshold SNR of 12. We overlay them on top of the 3\(\sigma\) PI curve for this network. We assume an observation time of \(T=1\) year throughout the paper. The purple band in both plots is from the pdb-pl model and is therefore higher than either of the BNS-only and NSBH-only models.
The unresolved SGWB from CBCs is above the 3\(\sigma\) PI curve, implying that it will very likely be detectable by XG detectors with one year of observing. In general, the width of each band in Fig. 2 - which represents the total uncertainty in our estimate of the unresolved SGWB for that model - spans about an order of magnitude and the bands fall out of detectability at \(\sim 80\) Hz even with XG detectors. The combined pdb-pl background, however, might be significantly detectable up to \(\sim 200\) Hz. Nevertheless, it would seem that we will only observe the SGWB due to the inspiral phase of the mergers. Similarly, while the simulations show a clear turnover for the NSBH SGWB - the morphology of which probably depends on the waveform used - it clearly happens at high frequencies and XG detectors will probably not be able to probe this.
In addition to the pdb-pl model, which naturally incorporates both BNS and NSBH mergers, we create a maximum-uncertainty envelope, which we refer to as the joint-envelope, as a means of accounting for model systematics. This is generated by combining the BNS-only and NSBH-only models (i.e., nsbh-pl, nsbh-g, bns-pl, bns-g) in Tab. 1 in all possible combinations and choosing the widest band at each frequency bin.

Figure 3: A comparison of various cosmological SGWB models along with \(\Omega_{\rm cbc,\;unres}\) estimates for the fiducial network described in Sec. V. The blue band shows the pdb-pl model while the pink band shows the joint-envelope constructed from the various BNS-only and NSBH-only models in Tab. 1. Both bands represent 90% credible intervals. The solid black curve is the 3\(\sigma\) PI curve.
Figure 3 shows the 90% credible band of the joint-envelope along with the pdb-pl estimate of \(\Omega_{\rm cbc,\;unres}\) for our fiducial 3-detector network. We contrast these bands with several cosmological SGWB models, demonstrating that the unresolved background will likely be a major source of noise for accessing these cosmological signals. Since the effective CBC background is the sum of unresolved and resolved components (see Eq. 3), it also depends on the efficacy of subtraction of resolved signals. Because the subtraction residue could contribute significantly to the effective CBC background (see, e.g., [23; 26]), the unresolved background we show in Fig. 3 represents the floor of the noise from CBC sources. Either way, a joint simultaneous analysis of the CBC background and the cosmological background will be necessary for the detection of the latter. We note again that the cosmological SGWB curves themselves can change depending on the choice of parameters. In Appendix A, we present similar results for the three alternate detector networks described in Sec. V.
Further exploring the uncertainty in the \(\Omega_{\rm cbc,\;unres}\), we see remarkable consistency between the two NSBH-only models, and similarly between the two BNS-only models. This suggests that the uncertainty in the rate is dominant, as opposed to the astrophysical uncertainty on the mass distribution. This is further confirmed by Fig. 4, where we plot the SGWB from different BNS models for three different rates. The consistency in SGWB between models at any given frequency shows that the astrophysical uncertainty is a much smaller contributor to the SGWB uncertainty than the uncertainty in CBC rate. We find similar results for NSBHs as well. Since the number of BNSs and NSBHs observed thus far is fairly small [6; 7; 8], this means that any future detections in O4 and O5 can substantially lower the uncertainty of these bands. This is also consistent with studies such as Ref. [80], which show that the monopole of the CBC SGWB is particularly sensitive to the local merger rate.
To explore how SFR models impact our estimates of \(\Omega_{\rm cbc,\;unres}\), we also used an SFR extracted from long gamma-ray burst rates following [21; 81]:
\[R_{f}(z_{f})\propto\nu\frac{ae^{b(z_{f}-z_{p})}}{a-b+be^{a(z_{f}-z_{p})}}, \tag{31}\]
where \(\nu=0.146\,M_{\odot}\;\mathrm{yr}^{-1}\;\mathrm{Mpc}^{-3}\), \(z_{p}=1.72\), \(a=2.80\), and \(b=2.46\). As the pdb-pl bands in Fig. 5 show, we find that the impact of the SFR model is minimal as long as the minimum delay times are small. A minimum delay time of 1 Gyr gives a significantly smaller, albeit still loud, SGWB. Note, however, that studies of Galactic systems have actually found an excess of systems with small delay times [42], meaning that the value of 1 Gyr is very conservative. We therefore conclude that the key observation of this paper, that the unresolved background will significantly impact searches for cosmological backgrounds, is likely robust to uncertainties in the SFR. We have also tested our assumption of \(z_{\mathrm{max}}=10\) and find that the GW power from higher redshifts is very small.

Figure 4: The SGWB for different BNS models evaluated at three different rates using a threshold SNR of 12. We use the following NS-system rates (corrected BNS rates for bns-pl and bns-g): Rate 1 = 45.5 (32.2) \(\rm Gpc^{-3}\;yr^{-1}\), Rate 2 = 209.3 (186.6) \(\rm Gpc^{-3}\;yr^{-1}\), Rate 3 = 377.0 (362.6) \(\rm Gpc^{-3}\;yr^{-1}\). While there is a significant difference between the curves for the various rates, the differences between the models are minimal.
The LVK population inference at the end of O3 tested multiple versions of the power model [51] including only confident events, or confident and marginal events, or confident events and GW190814 7. While we use the last version in all our plots as the bns-pl model, we find that the impact of this choice is negligible. In particular, our estimates of \(\Omega_{\mathrm{bns,\;unres}}\) for our implemented version differs from those for the other two versions at most by 3% at 25 Hz, demonstrating that our estimates are robust with respect to this uncertainty.
Footnote 7: which is a population outlier [82]
Finally, we also estimate the CBC SGWB at multiple \(\rho_{\mathrm{thresh}}\), which determines what we define as a resolvable signal. We note that our fiducial threshold of \(\rho_{\mathrm{thresh}}=12\) is somewhat conservative for the purposes of the unresolved SGWB. Figure 6 shows the pdb-pl and the joint envelopes for two different SNR choices. The threshold \(\rho_{\mathrm{thresh}}=20\), in particular, was shown to be the frequency-independent optimal threshold for minimizing the effective background from BNS systems, including both the subtraction residue and the unresolved background [21].
## IX Implications and Conclusions
In this paper, we have presented a data-driven estimate of the strength of the unresolvable CBC background seen by XG detectors, accounting for both statistical and systematic uncertainties. A robust implication from our study is that \(\Omega_{\mathrm{cbc,unres}}\) will be an impediment for the direct detection of many cosmological SGWB models, independent of the fidelity and efficacy of the subtraction of resolved CBC signals. Some manner of simultaneous inference of the astrophysical and cosmological background (e.g., see Refs. [83; 84; 85]) will be required at a minimum.
Given the large number of resolvable CBC signals that we expect, it is also possible that their population inference can be used as a strong prior for \(\Omega_{\rm cbc,unres}\). While this could make inferring multiple backgrounds easier, this effort could be hindered by the efficacy of subtraction. Moreover, using the resolvable signals to form a prior could create a systematic bias because the resolvable and unresolvable sources come from different redshifts [86].

Figure 5: 90% credible intervals of \(\Omega_{\mathrm{cbc,\;unres}}\) for the pdb-pl model with three different SFR assumptions. While the green and the saffron bands are for a gamma-ray burst based SFR and the Madau-Fragos SFR respectively, the pink band is for a conservative 1 Gyr minimum time delay.
However, this also makes \(\Omega_{\rm cbc,\;unres}\) astrophysically interesting in its own right by allowing us to study compact binary populations at high redshifts [87, 48]. Since we will confidently detect \(\Omega_{\rm cbc,unres}\) with XG detectors, this represents a clear science case that can inform detector design. For this purpose, in Tab. 2 we provide figures-of-merit of the level of \(\Omega_{\rm cbc,\;unres}\) for various detector networks at two reference frequencies.
Our results also help inform the target efficacy of subtraction techniques for XG detectors. The optimal level of subtraction is one that leaves a residue not significantly higher than \(\Omega_{\rm cbc,unres}\). At the same time, a subtraction technique that leaves a residue substantially lower than \(\Omega_{\rm cbc,unres}\) will not be ideal from a statistical point of view and will likely be computationally expensive. Our figures-of-merit for the various networks in Tab. 2 will again be useful in informing the optimal efficacy of subtraction that algorithms should target.
Instead of subtraction, one can also infer the parameters of CBC signals without any threshold. This Bayesian "global fit" technique was first developed in Ref. [88], with various extensions studied in Refs. [89, 90, 91, 92]. Ref. [92] in particular showed how a cosmological (non-CBC) background can be incorporated into this formalism for simultaneous inference. While this statistical formalism holds much promise because it removes both the need for subtracting the resolved CBC signals and for separately inferring the unresolved CBC background, it is also computationally very expensive, which can make running it on long stretches of data untenable. Moreover, the method is susceptible to subtle systematics [90] and can struggle to deal with overlapping signals, both of which can make an application on real XG data challenging.
Finally, we discuss some of the caveats of this study. One of the most important assumptions we made is the specific form of the SFR distribution. While we have tried two different SFR models, as shown in Fig. 5, uncertainties on the rate naturally increase at higher redshift. For instance, observations with the James Webb Space Telescope show heightened star formation at high redshifts, which might imply a louder CBC background, and more observations in the near future might make the picture clearer [93, 94]. Another related effect that we did not consider here is metallicity [95, 96, 97, 98, 99, 100]. This would again predominantly impact the rate at higher redshifts.
Figure 6: 90% credible intervals of \(\Omega_{\rm cbc,\;unres}\) for the PDB-PL and the joint-envelope models with two different \(\rho_{\rm thresh}\) values. The filled bands show the credible intervals with \(\rho_{\rm thresh}=12\) and the dashed-dotted lines bound the intervals for \(\rho_{\rm thresh}=20\).
On the GW waveform side, we have assumed zero spins and tidal effects. While these are all subdominant compared to the mass distributions, it is possible that the spins in particular might have a noticeable impact on \(\Omega_{\rm cbc,unres}\) given the lengths of the signals in XG detectors.
## X Acknowledgements
We thank Salvatore Vitale, Emanuele Berti, Vuk Mandic, Haowen Zhong, and Vishal Baibhav for helpful discussions. We thank Amanda Farah and Philippe Landry for their help with the LVK population analyses and Sylvia Biscoveanu for her help with the NSBH population analyses and for providing PI curves for XG detectors. D.S.B. is supported by the National Science Foundation (NSF) Graduate Research Fellowship Program under grant DGE-2234667. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. D.S.B. was supported by a CIERA Post-Baccalaureate Research Fellowship during the majority of this research. D.S.B. acknowledges partial support for this project from NASA through a NASA Illinois Space Grant. S.B. acknowledges support from the NSF grant PHY-2207945. Z.D. is supported by the CIERA Board of Visitors Research Professorship. V.K. is partially supported through a CIFAR Senior Fellowship, a Guggenheim Fellowship, the Gordon and Betty Moore Foundation (grant award GBMF8477), and from Northwestern University, including the Daniel I. Linzer Distinguished University Professorship fund.
This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. This research has made use of data obtained from the Gravitational Wave Open Science Center (gwosc.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States NSF as well as the STFC of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO 600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Isti
| Network | Model | \(f\) | \(\rho_{\rm thresh}=12\) | \(\rho_{\rm thresh}=20\) | \(\rho_{\rm thresh}=40\) |
|---|---|---|---|---|---|
| 40 km CE A, 20 km CE B, 10 km ET | joint-envelope | 25 Hz | \(-10.8^{+0.4}_{-0.4}\) | \(-10.3^{+0.4}_{-0.4}\) | \(-10.0^{+0.3}_{-0.4}\) |
| | | 250 Hz | \(-10.3^{+0.5}_{-0.4}\) | \(-9.9^{+0.4}_{-0.4}\) | \(-9.6^{+0.4}_{-0.4}\) |
| | pdb-pl | 25 Hz | \(-10.4^{+0.4}_{-0.5}\) | \(-10.0^{+0.4}_{-0.4}\) | \(-9.8^{+0.4}_{-0.4}\) |
| | | 250 Hz | \(-9.8^{+0.4}_{-0.5}\) | \(-9.5^{+0.4}_{-0.5}\) | \(-9.3^{+0.4}_{-0.4}\) |
| 40 km CE A, 10 km ET, 4 km A# LLO | joint-envelope | 25 Hz | \(-10.6^{+0.4}_{-0.4}\) | \(-10.3^{+0.4}_{-0.4}\) | \(-10.0^{+0.4}_{-0.4}\) |
| | | 250 Hz | \(-10.2^{+0.4}_{-0.4}\) | \(-9.8^{+0.4}_{-0.4}\) | \(-9.6^{+0.4}_{-0.4}\) |
| | pdb-pl | 25 Hz | \(-10.3^{+0.4}_{-0.5}\) | \(-10.0^{+0.4}_{-0.5}\) | \(-9.8^{+0.4}_{-0.5}\) |
| | | 250 Hz | \(-9.8^{+0.4}_{-0.5}\) | \(-9.4^{+0.4}_{-0.5}\) | \(-9.3^{+0.4}_{-0.5}\) |
| 40 km CE A, 20 km CE B | joint-envelope | 25 Hz | \(-10.6^{+0.4}_{-0.4}\) | \(-10.2^{+0.4}_{-0.4}\) | \(-10.0^{+0.4}_{-0.3}\) |
| | | 250 Hz | \(-10.1^{+0.4}_{-0.4}\) | \(-9.8^{+0.4}_{-0.4}\) | \(-9.6^{+0.4}_{-0.3}\) |
| | pdb-pl | 25 Hz | \(-10.2^{+0.4}_{-0.5}\) | \(-9.9^{+0.4}_{-0.5}\) | \(-9.8^{+0.4}_{-0.4}\) |
| | | 250 Hz | \(-9.7^{+0.4}_{-0.5}\) | \(-9.4^{+0.4}_{-0.5}\) | \(-9.2^{+0.4}_{-0.4}\) |
| 40 km CE A, 10 km ET | joint-envelope | 25 Hz | \(-10.6^{+0.4}_{-0.4}\) | \(-10.3^{+0.4}_{-0.4}\) | \(-10.0^{+0.4}_{-0.4}\) |
| | | 250 Hz | \(-10.2^{+0.4}_{-0.4}\) | \(-9.8^{+0.4}_{-0.4}\) | \(-9.6^{+0.4}_{-0.4}\) |
| | pdb-pl | 25 Hz | \(-10.3^{+0.4}_{-0.5}\) | \(-10.0^{+0.4}_{-0.5}\) | \(-9.8^{+0.4}_{-0.5}\) |
| | | 250 Hz | \(-9.8^{+0.4}_{-0.5}\) | \(-9.4^{+0.4}_{-0.5}\) | \(-9.3^{+0.4}_{-0.5}\) |

Table 2: \(\Omega_{\rm cbc,\,unres}\) for different networks at two reference frequencies for the pdb-pl model and the joint-envelope. The quoted numbers correspond to median values with 90% uncertainties. We present these estimates as figures of merit for both detector design and subtraction algorithms.
tuto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. KAGRA is supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) in Japan; National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea; Academia Sinica (AS) and National Science and Technology Council (NSTC) in Taiwan. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by NSF Grants PHY-0757058 and PHY-0823459. Lastly, we acknowledge the efforts of the CE Consortium and ET Collaboration in the planning and development of XG GW detectors. This manuscript carries an LSC DCC number LIGO-P2300334.
|
2308.10359 | Central Bank Digital Currency with Collateral-constrained Banks | We analyze the risks to financial stability following the introduction of a
central bank digital currency (CBDC). The CBDC competes with commercial bank
deposits as the household's source of liquidity. We revisit the result in the
literature regarding the equivalence of payment systems by introducing a
collateral constraint for banks when borrowing from the central bank. When
comparing two equilibria with and without the CBDC, the central bank can ensure
the same equilibrium allocation and price system by offering loans to banks.
However, to access loans, banks must hold government bonds as collateral at the
expense of extending credit to firms, and the central bank assumes part of the
credit-extension role. Thus, in the equivalence analysis, while the CBDC
introduction has no real effects on the economy, it does not guarantee full
neutrality as it affects banks' business models. We also analyze the dynamics
of an increase in the CBDC and show that the CBDC not only does not cause bank
disintermediation and financial instability but may foster an expansion of bank
credit to firms. | Hanfeng Chen, Maria Elena Filippin | 2023-08-20T20:24:49Z | http://arxiv.org/abs/2308.10359v4 | # Unveiling the Interplay between Central Bank Digital Currency and Bank Deposits+
###### Abstract
We extend the Real Business Cycle model in Niepelt (2022) to analyze the risk to financial stability following the introduction of a central bank digital currency (CBDC). CBDC competes with commercial bank deposits as households' source of liquidity. We consider different degrees of substitutability between payment instruments and review the equivalence result in Niepelt (2022) by introducing a collateral constraint banks must respect when borrowing from the central bank. When CBDC and deposits are perfect substitutes, the central bank can offer loans to banks that render the introduction of CBDC neutral to the real economy. We show that the optimal level of the central bank's lending rate depends on the restrictiveness of the collateral constraint: the tighter it is, the lower the loan rate the central bank needs to post. However, when CBDC and deposits are imperfect substitutes, the central bank cannot make banks indifferent to the competition from CBDC. It follows that the introduction of CBDC has real effects on the economy.
**JEL codes**: E42, E58, G21
**Keywords**: Central bank digital currency, collateral-constrained banks, equivalence, substitutability
## 1 Introduction
Digital currencies have been around for a while, but their potential significance in the global economy has increased recently due to a growing demand for digital payment methods for retail purposes and the gradual decline of the use of cash for transactions in many economies. Besides the private digital means of payment currently in circulation, many central banks have been investigating the possibility of launching a central bank digital currency (CBDC). CBDCs are central bank liabilities denominated in an existing unit of account that serves as a medium of exchange and a store of value.1 Were CBDCs to be issued to retail customers (i.e., households), they would likely be a digital form of cash that shares features with central bank reserves, as they are universally accessible, like banknotes, but in a digital form.
Footnote 1: As defined by the Committee on Payments and Market Infrastructures (CPMI) of the Bank of International Settlements.
Alongside an intense policy debate, a growing academic literature on the broader economic implications of CBDCs has emerged. A primary concern for central banks when considering the issuance of a CBDC is the risk to financial stability as a potential cause of financial crises. In this work, the risk of financial instability is intended as the risk of CBDC disintermediating the banking sector, potentially leading to a financial crisis. Introducing a CBDC will likely alter the equilibrium in the real economy, as it will represent a novel payment option alternative to cash and commercial bank deposits. The macroeconomic consequences of introducing a CBDC will impact individuals and financial institutions. This paper analyses the implications of introducing a retail CBDC, particularly concerning its relationship with bank deposits.
In recent work, Niepelt (2022) provides a macroeconomic analysis of the introduction of a retail CBDC. The paper addresses several normative questions, from optimality in terms of the monetary system and policy to the effect of the introduction of a CBDC on equilibrium outcomes. In particular, the author extends findings in Brunnermeier and Niepelt (2019) and establishes an equivalence result between different funding sources.2 Introducing CBDC has no real consequences if the private and the public sectors are equally efficient in operating payment systems. For this to happen, the central bank must refinance banks at a lending interest rate that supports the banks' original portfolio position so that central bank funding exactly replaces the lost deposits for banks.
Footnote 2: The two authors consider the simplified scenario without reserves and resource costs of providing liquidity.
In Niepelt's framework, central bank loans are extended against no collateral. The collateral requirement imposed by the central bank is important for how introducing a CBDC may affect the banking sector and the real economy.3 Central banks' lending to commercial banks (i.e., discount window lending) is a monetary policy instrument meant to support the
liquidity and stability of the banking system. The liquidity provided by central banks helps financial institutions to manage their liquidity risks efficiently. These loans are issued at an administered discount rate and must be collateralized to the satisfaction of the issuing central bank. In the euro area, banks can obtain loans resorting to the marginal lending facility, which enables banks to obtain overnight liquidity from the European Central Bank against the presentation of sufficient eligible assets. In the United States, the Federal Reserve offers different types of discount window credit from the regional Federal Reserve Banks. The lending rates are the same across all Reserve Banks, and all discount window loans must be collateralized. The discount window mechanism has become increasingly important after the Global Financial Crisis.
Additionally, Niepelt's equivalence analysis only considers CBDC and deposits that are perfect substitutes. However, CBDC will likely not be a perfect substitute for bank deposits [see, e.g., Bacchetta and Perazzi (2022)]. In what follows, we will extend the equivalence result considering different degrees of substitutability between CBDC and deposits.
This work addresses the potential risk to financial stability following the introduction of a CBDC and investigates the impact of the substitutability between CBDC and bank deposits on this risk. Specifically, we investigate if the issuance of CBDC leads to disruptions in financial markets, thereby positing a risk to financial stability due to bank disintermediation rather than bank runs. To address this concern, we develop a model with a CBDC and deposits issued by banks subject to a collateral requirement when borrowing from the central bank. The framework builds on Niepelt (2022) and is an extension of the model by Sidrauski (1967) that embeds banks, deposits, government bonds, reserves, and a CBDC into the baseline Real Business Cycle model. Households value goods, leisure, and the liquidity services deposits and CBDC provide. Non-competitive banks invest in capital, reserves, and government bonds and fund themselves through either deposits or borrowing from the central bank. Firms produce using labor and physical capital. Finally, the consolidated government collects taxes, pays deposit subsidies, invests in capital, lends to banks against collateral, and issues CBDC and reserves.
In the first part of the paper, we review the equivalence analysis between operating payment systems by introducing a collateral constraint for the central bank lending to banks. We study the problem under different degrees of substitutability between CBDC and bank deposits. First, we consider the case of perfect substitutes, as in Niepelt (2022), and then the more general case of imperfect substitutability between different means of payments.
Our findings reveal that when CBDC and deposits are perfect substitutes, the introduction of CBDC has no real consequences as long as (1) the resource cost per unit of effective real balances is the same for CBDC and deposits and (2) the central bank offers a loan
rate that induces non-competitive banks to maintain the same balance sheet positions as before the introduction of CBDC. Our equivalent loan rate is lower than the one obtained in Niepelt (2022) because of the collateral requirement banks must respect when borrowing from the central bank. In particular, when the collateral constraint facing banks becomes more restrictive, the equivalent loan rate must be lower. Below a threshold, the collateral constraint becomes so restrictive that the central bank needs to pay banks to take these loans. This scenario is unlikely, however, as it would require an implausibly large haircut.
However, when CBDC and deposits are imperfect substitutes, the central bank cannot make banks indifferent to the competition from CBDC. This is because there is no loan rate that the central bank can offer such that it leaves unchanged banks' profits. It follows that the new policy does not guarantee the same equilibrium allocations as before, implying that the introduction of CBDC has real effects on the economy.
In the second part of the paper, we explore how an increase in the demand for CBDC affects the real economy and if it could potentially lead to the crowding out of deposits. To do so, we study the economy's responses to changes in the households' relative preferences for CBDC over bank deposits. We consider an economy where households hold CBDC and deposits, and we assume that the payment instruments are imperfect substitutes. First, we look at how the economy reacts to a positive shock to the liquidity share of CBDC and then to a negative shock to the substitutability between payments. In both cases, we find that the demand for CBDC increases without crowding out deposits, but banks' profit drops due to the lower market power of banks.
Related LiteratureThe contribution of our work to the existing literature is threefold. First, we complement the recent literature examining the impact of the introduction of CBDC on commercial banks.
For instance, Chiu et al. (2019) develop a micro-founded general equilibrium model calibrated to the U.S. economy and find that CBDC expands bank intermediation when the price of CBDC falls within a certain range while leading to disintermediation if its interest rate exceeds the upper limit of that range.
In another study, Burlon et al. (2022) construct a quantitative euro area dynamic stochastic general equilibrium (DSGE) model, where banks must post government bonds as collateral to borrow from the central bank. They investigate the transmission channels of the issuance of CBDC to bank intermediation, finding a bank disintermediation effect with central bank financing replacing deposits and government bonds displacing reserves and loans.
Along a parallel trajectory, Assenmacher et al. (2021) use a DSGE model to investigate the macroeconomic effects of CBDC when the central bank administrates the CBDC rate
and collateral and quantity requirements. Their findings indicate that a more ample supply of CBDC reduces bank deposits, while stricter collateral or quantitative constraints reduce welfare but can potentially contain bank disintermediation. The latter effect is particularly true when the elasticity of substitution between bank deposits and CBDC is low.
Williamson (2022), on the other hand, explores the effects of the introduction of CBDC using a model of multiple means of payment. In his model, CBDC is a more efficient payment instrument than cash, but it lengthens the central bank's balance sheet, creating collateral scarcity in the economy.
Differently from these works, our study investigates the implications of CBDC issuance on bank intermediation using a Real Business Cycle model. Specifically, we extend the framework proposed in Niepelt (2022) by incorporating a collateral constraint for central bank lending to banks. Our findings indicate that increased demand for CBDC leads to a drop in banks' profit.
Second, our study contributes to the literature on the equivalence of payment systems.
Existing works by Brunnermeier and Niepelt (2019) and Niepelt (2022) propose a compensation mechanism whereby households' shift from deposits to CBDC can be offset by central bank lending to banks.
However, these models abstract from modeling the collateral constraint for central bank lending to banks. Notably, Piazzesi and Schneider (2022) show that when banks are required to hold liquid assets to back their deposits and face asset management costs, the equivalence between alternative payment instruments breaks down, even if banks can be refinanced directly by the central bank.
In light of this, we review the equivalence result established in Niepelt (2022) and extend the analysis by incorporating a collateral constraint for banks. We derive a new central bank lending rate that depends on the restrictiveness of the collateral requirement. Our findings reveal that the more binding the collateral constraint, the lower the loan rate the central bank must post.
Finally, we contribute to the literature on the relationship between CBDC and bank deposits by examining the effects of the introduction of CBDC considering different degrees of substitutability across means of payments [see, e.g., Bacchetta and Perazzi (2022), Barrdear and Kumhof (2022) and Kumhof and Noone (2021)].
The paper is organized as follows. Section 2 describes the model. Section 3 presents and discusses the analysis of the equivalence between operating payment systems. Section 4 characterizes the general equilibrium in which the household holds CBDC and deposits and discusses the dynamic effects of households' preferences shocks. Section 5 concludes.
## 2 Model
The model is based on Niepelt (2022) and is an extended Sidrauski (1967) model that embeds banks, deposits, government bonds, reserves, and CBDC into the baseline Real Business Cycle model. There is a continuum of mass one of homogeneous infinitely-lived households who own a succession of two-period-lived banks and of one-period-lived firms. The consolidated government determines monetary and fiscal policy.
### Households
The representative household wants to maximize the discounted felicity function \(\mathscr{U}\), which is increasing, strictly concave and satisfies Inada conditions. It takes prices, \(w_{t}\) and \(r_{t}\); returns on asset \(i\), \(R_{t}^{i}\); profits, \(\pi_{t}\); and taxes, \(\tau_{t}\) as given and solves
\[\max_{\{c_{t},x_{t},k_{t+1}^{h},m_{t+1},n_{t+1}\}_{t=0}^{\infty}} \mathbb{E}_{0}\sum_{t=0}^{\infty}\beta^{t}\mathscr{U}(c_{t},x_{t},z_{t+1}) \tag{1}\] \[\text{s.t.}\qquad c_{t}+k_{t+1}^{h}+m_{t+1}+n_{t+1}+\tau_{t}=w_{t }(1-x_{t})+\pi_{t}+k_{t}^{h}R_{t}^{k}+m_{t}R_{t}^{m}+n_{t}R_{t}^{n},\] \[k_{t+1}^{h},m_{t+1},n_{t+1}\geq 0,\]
where \(\beta\in(0,1)\) is the positive discount factor, \(c_{t}\) and \(x_{t}\) denote household consumption of the good and leisure at date \(t\), respectively; \(k_{t+1}^{h}\) is capital at date \(t+1\); and \(z_{t+1}=z(m_{t+1},n_{t+1})\) are effective real balances carried from date \(t\) to \(t+1\). Effective real balances are a function of both CBDC, \(m_{t+1}\), and bank deposits, \(n_{t+1}\).
Households value liquidity, as suggested by the _money in the utility function_ specification. In this setting, it only matters that households demand liquidity services, not why they do. The budget constraint in (1) states that the household finances consumption, taxes, investment in capital, and real balances out of labor income (the wage times the labor supply), distributed profits, and the gross return on its portfolio.
We assume interior solutions for capital, CBDC, and deposits, and we define the stochastic discount factor as \(\Lambda_{t+s}\equiv\beta\mathscr{U}_{c}(c_{t+s},x_{t+s},z_{t+1+s})/\mathscr{U }_{c}(c_{t},x_{t},z_{t+1}).\) To express the Euler equations for CBDC and deposits in a more compact form, we define the risk-free rate and the liquidity premium on asset \(i\), respectively:
\[R_{t+1}^{f}\equiv\frac{1}{\mathbb{E}_{t}\Lambda_{t+1}},\qquad\mathscr{X}_{t+ 1}^{i}\equiv 1-\frac{R_{t+1}^{i}}{R_{t+1}^{f}}\text{ for }i\in\{m,n\}.\]
Assuming that the interest rates on CBDC and deposits are risk-free, we can summarize
the Euler equations as
\[x_{t}: \mathcal{U}_{x}(c_{t},x_{t},z_{t+1})=\mathcal{U}_{c}(c_{t},x_{t},z_{ t+1})w_{t}, \tag{2}\] \[k_{t+1}^{h}: 1=\mathbb{E}_{t}R_{t+1}^{k}\Lambda_{t+1},\] (3) \[m_{t+1}: \mathcal{U}_{c}(c_{t},x_{t},z_{t+1})\mathcal{X}_{t+1}^{m}= \mathcal{U}_{z}(c_{t},x_{t},z_{t+1})z_{m}(m_{t+1},n_{t+1}),\] (4) \[n_{t+1}: \mathcal{U}_{c}(c_{t},x_{t},z_{t+1})\mathcal{X}_{t+1}^{n}= \mathcal{U}_{z}(c_{t},x_{t},z_{t+1})z_{n}(m_{t+1},n_{t+1}). \tag{5}\]
Combining equations (4) and (5) we get that
\[z_{m}(m_{t+1},n_{t+1})\mathcal{X}_{t+1}^{n}=z_{n}(m_{t+1},n_{t+1})\mathcal{X}_ {t+1}^{m}.\]
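To illustrate the combined optimality condition \(z_{m}\mathcal{X}^{n}=z_{n}\mathcal{X}^{m}\), the sketch below assumes, purely for illustration, a CES liquidity aggregator \(z(m,n)\) and solves for the implied CBDC-to-deposit ratio given the two liquidity premia; the functional form and parameter values are assumptions, not the specification used later in the paper.

```python
def cbdc_deposit_ratio(chi_m, chi_n, alpha=0.3, eps=5.0):
    """Solve z_m / z_n = chi^m / chi^n for m/n under a CES aggregator.

    Assumed CES form: z(m, n) = (alpha*m**rho + (1-alpha)*n**rho)**(1/rho),
    with rho = 1 - 1/eps, so that z_m/z_n = (alpha/(1-alpha)) * (m/n)**(rho-1).
    """
    rho = 1.0 - 1.0 / eps
    required = chi_m / chi_n                     # required marginal-liquidity ratio z_m/z_n
    return ((1.0 - alpha) / alpha * required)**(1.0 / (rho - 1.0))

# A lower liquidity premium on CBDC (a higher CBDC rate) tilts holdings toward CBDC
print(cbdc_deposit_ratio(chi_m=0.01, chi_n=0.02))   # m/n rises relative to...
print(cbdc_deposit_ratio(chi_m=0.02, chi_n=0.02))   # ...the equal-premium benchmark
```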
### Banks
Within the literature, a motivation for introducing CBDC is when banks have some market power [e.g., Andolfatto (2021)]. The main implication is that banks can reduce deposit rates to extract rents, and households accept this markdown as they value the liquidity service provided by deposits. In particular, we follow Niepelt (2022) and assume that each bank is a monopsonist in its regional deposit market, such that households in a region can only access the regional bank.
A bank at date \(t\) issues deposits, \(n_{t+1}\), borrows from the central bank, \(l_{t+1}\), and collects government subsidies on deposits at rate \(\theta_{t}\). It invests in reserves, \(r_{t+1}\), government bonds, \(b_{t+1}\), and capital.4 Without loss of generality, we abstract from bank equity. We follow Burlon et al. (2022) and assume that banks are subject to a collateral requirement such that the loans they obtain from the central bank cannot exceed a fraction \(\theta_{b}\) of their government bond holdings. In this setting, government bonds are the only assets that can be pledged as collateral. For simplicity, we abstract from collateralized interbank loans. Holding government bonds gives liquidity benefits to banks since they can use their holdings to obtain funding from the central bank. In other words, banks are willing to forego a spread on the risk-free rate because of the collateral benefits of holding government bonds (i.e., the convenience yield). The convenience yield of government bonds reflects the additional benefits banks derive from holding these bonds beyond their financial yield. Therefore, government bonds are remunerated at a slightly lower rate than the risk-free rate.
Footnote 4: Bank’s capital is defined as \(k_{t+1}^{b}=n_{t+1}+l_{t+1}-r_{t+1}-b_{t+1}\). Alternatively, the bank can invest in loans to firms that eventually fund physical capital accumulation.
The operating costs in the retail payment system, \(\nu_{t}\), are a negative function of the reserve-to-deposit ratio, \(\zeta_{t+1}\). This is analogous to a binding minimum reserve requirement, as larger reserve holdings relative to deposits lower the bank's operating costs. We also allow \(\nu_{t}\) to vary with the stock of reserves and deposits of other banks, \(\bar{\zeta}_{t+1}\), so as to capture positive externalities of reserve holdings. Differently from Niepelt (2022), we use a unit resource cost of managing deposit-based payments of the form \(\nu_{t}(\zeta_{t+1},\bar{\zeta}_{t+1})\). To simplify the analysis, we make assumptions which imply that in equilibrium \(\zeta_{t+1}=\bar{\zeta}_{t+1}\), and that reserves are strictly positive if and only if deposits are strictly positive: when a bank holds no deposits, its operating costs are null, and when all other banks have no deposits, the bank's operating costs are large but bounded. In this way, we rule out asymmetric equilibria in the bank's deposits and other banks' deposits. Otherwise, the operating cost function, \(\nu_{t}(\zeta_{t+1},\bar{\zeta}_{t+1})\), is strictly decreasing in both arguments, strictly convex, and satisfies \(\nu_{\zeta\bar{\zeta},t}=0\) or \(\nu_{\zeta\zeta,t}\geq\nu_{\bar{\zeta}\bar{\zeta},t}\), as well as \(\lim_{\zeta_{t+1}\to 0}\nu_{\zeta,t}=\infty\).
The bank takes the risk-free rate, as well as the rates of return on capital, reserves, and government bonds, the households' stochastic discount factor, and the subsidy rate as given. In contrast, the bank chooses the quantity of deposits and central bank loans subject to the deposit funding schedule of households.5 Since the bank acts as a monopsonist in its regional deposit market, it takes the deposit funding schedule (rather than the deposit and the central bank loan rates) as given.
Footnote 5: In the model, the central bank’s loan funding schedule mirrors the deposit funding schedule. This assumption plays a crucial role in the context of the equivalence analysis.
The program of the bank at date \(t\) reads
\[\max_{n_{t+1},l_{t+1},r_{t+1},b_{t+1}}\biggl{\{}\pi^{b}_{1,t}+ \mathbb{E}_{t}\left[\Lambda_{t+1}\pi^{b}_{2,t+1}\right]\biggr{\}}\] \[\text{s.t.} \pi^{b}_{1,t}=-n_{t+1}\left(\nu_{t}(\zeta_{t+1},\bar{\zeta}_{t+1 })-\theta_{t}\right),\] \[\pi^{b}_{2,t+1}=(n_{t+1}+l_{t+1}-r_{t+1}-b_{t+1})R^{k}_{t+1}+r_{ t+1}R^{r}_{t+1}+b_{t+1}R^{b}_{t+1}-n_{t+1}R^{n}_{t+1}-l_{t+1}R^{l}_{t+1},\] \[l_{t+1}\leq\theta_{b}\frac{b_{t+1}}{R^{l}_{t+1}},\] \[n_{t+1},l_{t+1},b_{t+1}\geq 0.\]
where
\[\zeta_{t+1}\equiv\frac{r_{t+1}}{n_{t+1}},\text{ and }\bar{\zeta}_{t+1}\equiv \frac{\bar{r}_{t+1}}{\bar{n}_{t+1}},\]
and \(\pi^{b}_{1,t}\), \(\pi^{b}_{2,t+1}\) denote the cash flow generated in the first and second period of the bank's operations, respectively.
We assume interior solutions for deposits, loans, and government bonds, and we make use of the household's Euler equation for capital and the definition of the risk-free rate. Also, we
define the elasticity of asset \(i\) with respect to the rate of return on \(i\) as
\[\eta_{i,t+1}=\frac{\partial i_{t+1}}{\partial R_{t+1}^{i}}\frac{R_{t+1}^{i}}{i_{t+ 1}},\qquad\text{ for }i\in\{n,l\},\]
and the liquidity premium as
\[\mathcal{X}_{t+1}^{i}=1-\frac{R_{t+1}^{i}}{R_{t+1}^{f}},\qquad\text{ for }i\in\{n,l,r,b\}.\]
We follow Burlon et al. (2022) and assume that the collateral constraint is binding in equilibrium:6
Footnote 6: See Appendix A for checking under which condition the collateral constraint binds.
\[\mu_{t}>0,\qquad l_{t+1}=\theta_{b}\frac{b_{t+1}}{R_{t+1}^{l}}.\]
We can rewrite the bank's optimality conditions as
\[n_{t+1}: \mathcal{X}_{t+1}^{n}-\left(\nu_{t}(\zeta_{t+1},\bar{\zeta}_{t+1} )-\theta_{t}-\nu_{\zeta}(\zeta_{t+1},\bar{\zeta}_{t+1})\zeta_{t+1}\right)= \frac{1}{\eta_{n,t+1}}\frac{R_{t+1}^{n}}{R_{t+1}^{f}}, \tag{6}\] \[r_{t+1}: -\nu_{\zeta}(\zeta_{t+1},\bar{\zeta}_{t+1})=\mathcal{X}_{t+1}^{r},\] (7) \[l_{t+1}: \mathcal{X}_{t+1}^{l}-\mu_{t}\left(1+\frac{1}{\eta_{l,t+1}} \right)=\frac{1}{\eta_{l,t+1}}\frac{R_{t+1}^{l}}{R_{t+1}^{f}},\] (8) \[b_{t+1}: \mu_{t}\frac{\theta_{b}}{R_{t+1}^{l}}=\mathcal{X}_{t+1}^{b}. \tag{9}\]
The optimal conditions for deposits and reserves are analogous to the ones derived in Niepelt (2022). We first comment on the liability side of the bank's balance sheet, starting with deposits. The left-hand side (LHS) of equation (6) represents the marginal profit from issuing deposits, which is given by the difference between the bank's gain from the positive deposit liquidity premium and the marginal cost associated with increased deposit issuance. The right-hand side (RHS) equals the profit loss of inframarginal deposits, as higher deposit issuance is associated with increased interest rates on deposits. Similarly, the condition for central bank's loans, equation (8), states that the sum of the bank's marginal benefits of taking on more central bank's loans and the gain coming from the positive loan liquidity premium should be equal to the profit loss from the marginal cost associated with central bank's loans. In fact, higher loan holdings are associated with an increase in the interest rate on the central bank's loans.
Turning to the asset side of the bank's balance sheet, equation (7) equalizes the bank's
gain from lower operating costs with the loss due to the bank's lower return coming from a positive spread on reserves. Looking at equation (9), the optimal choice of government bonds is when the bank's marginal costs of bond holdings are equal to the loss coming from the bank's lower return with a positive spread on government bonds.
We can combine the optimality conditions and comment on the implications. First, the combination of equations (6) and (7) gives:
\[\mathcal{X}_{t+1}^{n}-\big{[}\left(\nu_{t}(\zeta_{t+1},\bar{\zeta}_{t+1})- \theta_{t}\right)+\mathcal{X}_{t+1}^{r}\zeta_{t+1}\big{]}=\frac{1}{\eta_{n,t+ 1}}\frac{R_{t+1}^{n}}{R_{t+1}^{f}}. \tag{6a}\]
This result implies that the bank's net benefit of issuing more deposits must equal the inframarginal cost of deposits.
Combining equations (8) and (9) results in the following relation:
\[\mathcal{X}_{t+1}^{l}-\mathcal{X}_{t+1}^{b}\frac{R_{t+1}^{l}}{\theta_{b}} \left(1+\frac{1}{\eta_{l,t+1}}\right)=\frac{1}{\eta_{l,t+1}}\frac{R_{t+1}^{l} }{R_{t+1}^{f}}. \tag{8a}\]
The marginal cost of taking on more central bank loans must equal the bank's net benefit of taking on more loans. This is given by the difference between the liquidity benefits given by the central bank loans and the marginal cost associated with the collateral constraint.
### Firms
Neoclassical firms rent capital, \(k_{t}\), and labor, \(\ell_{t}\), to produce the output good to maximize the profit \(\pi_{t}^{f}\). The representative firm takes wages, \(w_{t}\); the rental rate of capital, \(R_{t}^{k}+\delta-1\); and the good price as given and solves
\[\max_{k_{t},\ell_{t}}\pi_{t}^{f}\] s.t. \[\pi_{t}^{f}=f_{t}(k_{t},\ell_{t})-k_{t}(R_{t}^{k}+\delta-1)-w_{t} \ell_{t},\]
where \(f_{t}\) is the neoclassical production function.
The first-order conditions read
\[k_{t}: f_{k}(k_{t},\ell_{t})=R_{t}^{k}+\delta-1, \tag{10}\] \[\ell_{t}: f_{l}(k_{t},\ell_{t})=w_{t}. \tag{11}\]
### Consolidated government
The consolidated government collects taxes, subsidizes deposits, lends to banks against collateral, \(l_{t+1}\), invests in capital, \(k_{t+1}^{g}\), and issues CBDC and reserves. The government budget constraint reads
\[k_{t+1}^{g}+l_{t+1}-b_{t+1}-m_{t+1}-r_{t+1}= k_{t}^{g}R_{t}^{k}+l_{t}R_{t}^{l}-b_{t}R_{t}^{b}-m_{t}R_{t}^{m}-r_{t}R_{t}^ {r}+\tau_{t}+\] \[-n_{t+1}\theta_{t}-m_{t+1}\mu^{m}-r_{t+1}\rho, \tag{12}\]
where \(\mu^{m}\) and \(\rho\) are the unit resource costs of managing CBDC and reserves payments, respectively.
### Market clearing
Market clearing in the labor market requires that firms' labor demand equals the household's labor supply:
\[\ell_{t}=1-x_{t}. \tag{13}\]
Market clearing for capital requires that firms' demand for capital equals capital holdings of the household, banks, and the government:
\[k_{t}=k_{t}^{h}+(n_{t}+l_{t}-r_{t}-b_{t})+k_{t}^{g}. \tag{14}\]
Profits distributed to the household must equal the sum of the banks' and firms' profits:
\[\pi_{t}=\pi_{1,t}^{b}+\pi_{2,t}^{b}+\pi_{t}^{f}. \tag{15}\]
By Walras' law, market clearing on labor and capital markets and the budget constraints of households, banks, firms, and the consolidated government imply market clearing on the goods market.
To derive the aggregate resource constraint for the economy, we plug equation (15) into the household's budget constraint, equation (1), and we impose market clearing conditions (13) and (14). Then, in combination with the government's budget constraint, equation (12), the resulting expression is the aggregate resource constraint:
\[k_{t+1}=f_{t}(k_{t},1-x_{t})+k_{t}(1-\delta)-c_{t}\Omega_{t}^{rc}, \tag{16}\]
where we defined
\[\Omega_{t}^{rc}=1+\frac{m_{t+1}}{c_{t}}\mu^{m}+\frac{n_{t+1}}{c_{t}}( \nu_{t}(\zeta_{t+1},\bar{\zeta}_{t+1})+\zeta_{t+1}\rho). \tag{17}\]
### Functional Form Assumptions
In our analysis of the general equilibrium, we will impose the following functional form assumptions for households' preferences and real balances, operating costs for deposit-based payments, and firms' production function, respectively:
\[\mathcal{U}(c_{t},x_{t},z_{t+1})=\frac{\left((1-\mathbf{\upsilon})c_{t}^{1-\psi}+\mathbf{\upsilon}z_{t+1}^{1-\psi}\right)^{\frac{1-\sigma}{1-\psi}}}{1-\sigma}x_{t}^{\upsilon}, \tag{18}\]
\[z_{t+1}(m_{t+1},n_{t+1})=\left(\lambda_{t}m_{t+1}^{1-\epsilon_{ t}}+n_{t+1}^{1-\epsilon_{t}}\right)^{\frac{1}{1-\epsilon_{t}}}, \tag{19}\]
\[\nu_{t}(\zeta_{t+1},\bar{\zeta}_{t+1})=\phi_{1}\zeta_{t+1}^{1- \varphi}+\phi_{2}\bar{\zeta}_{t+1}^{1-\varphi}, \tag{20}\]
\[f_{t}(k_{t},\ell_{t})=k_{t}^{\alpha}\ell_{t}^{1-\alpha}, \tag{21}\]
where \(\mathbf{\upsilon},\psi\in(0,1);\sigma>0,\neq 1;\epsilon_{t}>0;\phi_{1}>0,\phi_{2} \geq 0;\varphi>1\). The parameter \(\lambda_{t}>0\) represents the liquidity benefits of holding CBDC relative to the liquidity benefits of holding deposits.
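As an illustration, the functional forms (18)-(21) can be written down as short Python functions. This is a minimal sketch for the reader's convenience; the default parameter values are simply taken from Table 1, and the function and argument names are our own choices rather than part of the model.

```python
def utility(c, x, z, v_liq=0.012, psi=0.6, sigma=0.5, upsilon=0.85):
    """Household preferences over consumption, leisure and real balances, eq. (18)."""
    bundle = (1 - v_liq) * c ** (1 - psi) + v_liq * z ** (1 - psi)
    return bundle ** ((1 - sigma) / (1 - psi)) / (1 - sigma) * x ** upsilon


def real_balances(m, n, lam=1.0, eps=1 / 6):
    """Effective real balances aggregating CBDC and deposits, eq. (19)."""
    return (lam * m ** (1 - eps) + n ** (1 - eps)) ** (1 / (1 - eps))


def operating_cost(zeta, zeta_bar, phi1=0.00061, phi2=0.00061, varphi=2.00924):
    """Unit resource cost of deposit-based payments, eq. (20)."""
    return phi1 * zeta ** (1 - varphi) + phi2 * zeta_bar ** (1 - varphi)


def production(k, labor, alpha=1 / 3):
    """Cobb-Douglas production function, eq. (21)."""
    return k ** alpha * labor ** (1 - alpha)
```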
## 3 Equivalence between operating payment systems
In this section, we study the equivalence between operating payment systems. We study the problem under different degrees of substitutability between CBDC and deposits. First, we consider the case of perfect substitutes, as in Niepelt (2022), then the more general case of imperfect substitutability between different means of payment, assuming constant elasticity of substitution (CES) for real balances.
### Perfect substitutability between payment instruments
We start by analyzing the equivalence of operating payment systems considering the case of perfect substitutability between payment instruments, as in Niepelt (2022). We do so by letting \(\epsilon\) be zero in the household's real balances function, equation (19).
Consider a policy that implements an equilibrium with deposits, \(n_{t+1}\); reserves, \(r_{t+1}\); no central bank loans, \(l_{t+1}=0\); and no government bonds, \(b_{t+1}=0\). We want to analyze whether there exists another policy, indicated by circumflexes, that in equilibrium guarantees fewer deposits, \(\hat{n}_{t+1}\), and reserves, \(\hat{r}_{t+1}\); more CBDC, \(\hat{m}_{t+1}\); central bank loans, \(\hat{l}_{t+1}\); government bonds, \(\hat{b}_{t+1}\); a different ownership structure of capital; possibly households taxes, \(\hat{T}_{1,t}\) and \(\hat{T}_{2,t+1}\); but the same equilibrium allocation and price system.
Denoting by \(\Delta\) the intervention, the two policies and equilibrium described above coincide, except that
\[\hat{n}_{t+1}-n_{t+1}=-\Delta,\qquad\hat{r}_{t+1}-r_{t+1}=-\zeta_{t+1}\Delta,\]
\[\hat{m}_{t+1}-m_{t+1}=\frac{1}{\lambda_{t}}\Delta,\qquad\hat{l}_{t+1}-l_{t+1}= (1-\zeta_{t+1})\Delta,\qquad\hat{b}_{t+1}-b_{t+1}=\frac{\hat{l}_{t+1}R_{t+1}^{ l}}{\theta_{b}},\]
\[\hat{k}_{t+1}^{g}-k_{t+1}^{g}=-\left(1-\frac{1}{\lambda_{t}}\right)\Delta+\hat {b}_{t+1},\qquad\hat{k}_{t+1}^{h}-k_{t+1}^{h}=\left(1-\frac{1}{\lambda_{t}} \right)\Delta,\]
where \(l_{t+1}\) and \(b_{t+1}\) are normalized to zero.7
Footnote 7: To guarantee the non-negativity of deposits, capital holdings, and reserves, \(\Delta\) must not be too large. Specifically, we impose
\[\Delta\leq n_{t+1},\qquad\zeta_{t+1}\Delta\leq r_{t+1},\]
\[\left(1-\frac{1}{\lambda_{t}}\right)\Delta\leq k_{t+1}^{g},\qquad\left(1- \frac{1}{\lambda_{t}}\right)\Delta\geq-k_{t+1}^{h}.\]
Note that the proposed household portfolio change does not alter effective real balances, the aggregate capital stock, or the reserves-to-deposits ratio:

\[\hat{z}_{t+1}=z_{t+1},\qquad\hat{k}_{t+1}=k_{t+1},\qquad\hat{\zeta}_{t+1}=\zeta_{t+1}.\]
The cash flows generated in the first and second period of the bank's operations are, respectively:
\[\pi_{1,t}^{b}=-n_{t+1}\left(\nu_{t}(\zeta_{t+1},\bar{\zeta}_{t+1})-\theta_{t} \right),\]
\[\pi_{2,t+1}^{b}=(n_{t+1}+l_{t+1}-r_{t+1}-b_{t+1})R_{t+1}^{k}+r_{t+1}R_{t+1}^{ r}+b_{t+1}R_{t+1}^{b}-n_{t+1}R_{t+1}^{n}-l_{t+1}R_{t+1}^{l}.\]
Dropping the arguments of the \(\nu_{t}\) function, the changes in bank profits at date \(t\) and at date \(t+1\) are, respectively:
\[\hat{\pi}_{1,t}^{b}-\pi_{1,t}^{b} =\Delta\left(\nu_{t}(\dots)-\theta_{t}\right),\] \[\hat{\pi}_{2,t+1}^{b}-\pi_{2,t+1}^{b} =\Delta\left(R_{t+1}^{n}-\zeta_{t+1}R_{t+1}^{r}-(1-\zeta_{t+1}) \left(1+\frac{R_{t+1}^{k}-R_{t+1}^{b}}{\theta_{b}}\right)R_{t+1}^{l}\right).\]
Let \(\hat{T}_{1,t}\) be a tax at date \(t\) that compensates for the reduced bank losses borne by households. Also, let \(\hat{T}_{2,t+1}\) be a tax at date \(t+1\) that compensates for the change in the households portfolio's return as well as for the change in bank profits that households collect at date \(t+1\).
We define these quantities as
\[\hat{T}_{1,t} =\hat{\pi}_{1,t}^{b}-\pi_{1,t}^{b}\] \[=\Delta\left(\nu_{t}(\dots)-\theta_{t}\right),\] \[\hat{T}_{2,t+1} =(\hat{k}_{t+1}^{h}-k_{t+1}^{h})R_{t+1}^{k}+(\hat{n}_{t+1}-n_{t+1 })R_{t+1}^{n}+(\hat{m}_{t+1}-m_{t+1})R_{t+1}^{m}+\hat{\pi}_{2,t+1}^{b}-\pi_{2,t +1}^{b}\] \[=\left(\Delta-\left[\frac{1}{\lambda_{t}}\left(n_{t+1}^{1-\epsilon _{t}}-\hat{n}_{t+1}^{1-\epsilon_{t}}\right)+m_{t+1}^{1-\epsilon_{t}}\right]^{ \frac{1}{1-\epsilon_{t}}}+m_{t+1}\right)R_{t+1}^{k}-\Delta R_{t+1}^{n}\] \[+\left(\left[\frac{1}{\lambda_{t}}\left(n_{t+1}^{1-\epsilon_{t}} -\hat{n}_{t+1}^{1-\epsilon_{t}}\right)+m_{t+1}^{1-\epsilon_{t}}\right]^{\frac{ 1}{1-\epsilon_{t}}}-m_{t+1}\right)R_{t+1}^{m}\] \[+\Delta\left(R_{t+1}^{n}-\zeta_{t+1}R_{t+1}^{r}-(1-\zeta_{t+1}) \left(1+\frac{R_{t+1}^{k}-R_{t+1}^{b}}{\theta_{b}}\right)R_{t+1}^{l}\right).\]
Using conditions from the household's optimization problem, we can write the market value of the two taxes at time \(t\) as
\[\hat{T}_{1,t}+\mathbb{E}_{t}\Lambda_{t+1}\hat{T}_{2,t+1}\] \[=\Delta\left\{(\nu_{t}(\dots)-\theta_{t})+\frac{R_{t+1}^{n}- \zeta_{t+1}R_{t+1}^{r}-(1-\zeta_{t+1})\left(1+\frac{R_{t+1}^{k}-R_{t+1}^{b}}{ \theta_{b}}\right)R_{t+1}^{l}}{R_{t+1}^{f}}\right\}.\]
For the intervention to be neutral to the economy, the market value of taxes must be zero. We can derive the central bank's loan rate that ensures this is true:
\[R_{t+1}^{l}=\frac{R_{t+1}^{n}+\left(\nu_{t}(\dots)-\theta_{t}\right)R_{t+1}^{ f}-\zeta_{t+1}R_{t+1}^{r}}{(1-\zeta_{t+1})\left(1+\frac{R_{t+1}^{k}-R_{t+1}^{b}}{ \theta_{b}}\right)}. \tag{22}\]
The market value of the changes in bank profits is
\[(\hat{\pi}_{1,t}^{b}-\pi_{1,t}^{b})+\mathbb{E}_{t}\Lambda_{t+1}( \hat{\pi}_{2,t+1}^{b}-\pi_{2,t+1}^{b})\] \[=\Delta\left(\nu_{t}(\dots)-\theta_{t}\right)+\mathbb{E}_{t} \Lambda_{t+1}\Delta\left(R_{t+1}^{n}-\zeta_{t+1}R_{t+1}^{r}-(1-\zeta_{t+1}) \left(1+\frac{R_{t+1}^{k}-R_{t+1}^{b}}{\theta_{b}}\right)R_{t+1}^{l}\right).\]
Substituting in equation (22) and the definition of the risk-free rate, we get:
\[(\hat{\pi}_{1,t}^{b}-\pi_{1,t}^{b})+\mathbb{E}_{t}\Lambda_{t+1}(\hat{ \pi}_{2,t+1}^{b}-\pi_{2,t+1}^{b})\] \[=\Delta\left(\nu_{t}(\dots)-\theta_{t}\right)+\frac{1}{R_{t+1}^{f }}\Delta\Bigg{(}R_{t+1}^{n}-\zeta_{t+1}R_{t+1}^{r}+\] \[-(1-\zeta_{t+1})\left(1+\frac{R_{t+1}^{k}-R_{t+1}^{b}}{\theta_{b}} \right)\frac{R_{t+1}^{n}+\left(\nu_{t}(\dots)-\theta_{t}\right)R_{t+1}^{f}- \zeta_{t+1}R_{t+1}^{r}}{(1-\zeta_{t+1})\left(1+\frac{R_{t+1}^{k}-R_{t+1}^{b}}{ \theta_{b}}\right)}\Bigg{)}\] \[=0.\]
It follows that if the central bank offers the lending rate derived in equation (22), the market values of the taxes and of the changes in bank profits are zero.
As we did for households, we now show that the government's dynamic and intertemporal budget constraints also continue to be satisfied with the modified portfolios and policy. The consolidated government budget constraint at date \(t\) reads:
\[k_{t+1}^{g}-m_{t+1}-r_{t+1}=k_{t}^{g}R_{t}^{k}-m_{t}R_{t}^{m}-r_{t}R_{t}^{r}+ \tau_{t}-n_{t+1}\theta_{t}-m_{t+1}\mu^{m}-r_{t+1}\rho.\]
The government budget constraint at date \(t\), after the portfolio's change, is
\[\hat{k}_{t+1}^{g}+\hat{l}_{t+1}-\hat{m}_{t+1}-\hat{r}_{t+1}-\hat{b}_{t+1}=k_{t }^{g}R_{t}^{k}-m_{t}R_{t}^{m}-r_{t}R_{t}^{r}+\tau_{t}-\hat{n}_{t+1}\theta_{t}- \hat{m}_{t+1}\mu^{m}-\hat{r}_{t+1}\rho+\hat{T}_{1,t}.\]
Rearranging, simplifying, and collecting terms:
\[k_{t+1}^{g}-m_{t+1}-r_{t+1}+\Delta\left\{\frac{\mu^{m}}{\lambda_{t}}-\left( \nu_{t}(\dots)+\rho\zeta_{t+1}\right)\right\}=k_{t}^{g}R_{t}^{k}-m_{t}R_{t}^{ m}-r_{t}R_{t}^{r}+\tau_{t}-n_{t+1}\theta_{t}-m_{t+1}\mu^{m}-r_{t+1}\rho.\]
Assume that the resource cost per unit of effective real balances is the same for CBDC and deposits:
\[\frac{\mu^{m}}{\lambda_{t}}=\nu_{t}(\zeta_{t+1},\bar{\zeta}_{t+1})+\zeta_{t+1 }\rho. \tag{23}\]
Under condition (23), the constraint after the intervention is identical to the constraint before the portfolio's change.
The government budget constraint at date \(t+1\), after the portfolio's change, is
\[k_{t+2}^{g}-m_{t+2}-r_{t+2} =\hat{k}_{t+1}^{g}R_{t+1}^{k}+\hat{l}_{t+1}R_{t+1}^{l}-\hat{m}_{t+ 1}R_{t+1}^{m}-\hat{r}_{t+1}R_{t+1}^{r}-\hat{b}_{t+1}R_{t+1}^{b}+\tau_{t+1}+\] \[-n_{t+2}\theta_{t+1}-m_{t+2}\mu_{t+1}^{m}-r_{t+2}\rho_{t+1}+\hat{ T}_{2,t+1}.\]
Rearranging, simplifying, and collecting terms:
\[k_{t+2}^{g}-m_{t+2}-r_{t+2} =k_{t+1}^{g}R_{t+1}^{k}-m_{t+1}R_{t+1}^{m}-r_{t+1}R_{t+1}^{r}+\tau_{ t+1}+\] \[-n_{t+2}\theta_{t+1}-m_{t+2}\mu_{t+1}^{m}-r_{t+2}\rho_{t+1}.\]
Equation (22) implies that the constraint at date \(t+1\) is equivalent to the constraint before the portfolio's change.
We claimed initially that the proposed intervention does not change the price system. In this case, firms' optimal production decisions and profits are unchanged. Lastly, we must show that the modified bank's portfolio is still optimal.
Before the intervention, the bank's choice set is determined by the cost function, the subsidy rate, the household's stochastic discount factor, rates on returns on capital and reserves, and the deposit funding schedule. The intervention leaves unchanged the cost function, the subsidy rate, the stochastic discount factor, and the rates on returns on capital and reserves. After the intervention, as households hold more CBDC, there is a modified deposit funding schedule, together with a central bank's loan funding schedule. To induce non-competitive banks to go along with the equivalent balance sheet positions as before the intervention, the central bank needs to post an appropriate loan funding schedule. Subject to this schedule, the bank chooses loans that make up for the reduction in funding from households, net of reserves, at the same effective price. Previously, we have derived the central bank's lending rate that makes the market values of the changes in bank profit equal to zero:
\[R_{t+1}^{l}=\frac{R_{t+1}^{n}+\left(\nu_{t}(\dots)-\theta_{t}\right)R_{t+1}^{ f}-\zeta_{t+1}R_{t+1}^{r}}{\left(1-\zeta_{t+1}\right)\left(1+\frac{R_{t+1}^{k}-R_{t+1} ^{b}}{\theta_{b}}\right)}.\]
We can demonstrate that the term in parentheses in the denominator on the RHS is positive. From the households' problem, we know that \(R_{t+1}^{k}\leq R_{t+1}^{f}\); assuming that the rate of return on capital is not risky, we can approximate \(R_{t+1}^{k}\simeq R_{t+1}^{f}\). We also know that, due to the liquidity benefits banks have from holding government bonds, \(R_{t+1}^{b}<R_{t+1}^{f}\). Recalling that \(\theta_{b}\in[0,1]\), it follows that the additional term is positive.
The rate we derived differs from the one derived in Niepelt (2022) precisely by the additional term in parentheses in the denominator on the RHS. It follows that our equivalent loan rate is lower than the one in Niepelt (2022). The intuition is that when banks are not collateral constrained, they can borrow as much as they want from the central bank. With a collateral requirement, the central bank needs to offer a lower lending rate to incentivize
banks to borrow the same quantity as in the absence of the constraint, such that their balance sheet remains unaffected. The equivalent loan rate we derived depends on how restrictive is the collateral constraint: the more binding it is, the lower the lending rate the central bank needs to offer. The equivalent loan rate is at its highest when all bonds held by the banks can be pledged, i.e., \(\theta_{b}=1\).
### Relationship between the equivalent loan rate and the collateral constraint level
In Section 3.1, we derived the central bank's lending rate that makes the modified bank portfolio optimal. Next, we investigate the relationship between this equivalent loan rate and the level of the collateral constraint, which is proxied by the fraction of government bonds that can be pledged as collateral.
Figure 1 shows the equivalent loan rate we derive above, equation (22), as a function of the fraction of government bonds pledged as collateral. The maximum equivalent loan rate, the blue dashed line, is reported next to the orange line, representing the maximum value of the equivalent loan rate obtained in Niepelt (2022), which we call \(\tilde{R}^{l}_{t+1}\).8 It was previously noted that the equivalent rate we obtain is below Niepelt's and is logarithmically increasing in the fraction of government bonds pledged as collateral. Values of \(R^{l}_{t+1}\) equal or lower than unity imply that the central bank needs to pay banks to be able to lend to them, as the
Figure 1: Equivalent loan rate, \(R^{l}_{t+1}\), as function of the collateralized government bonds, \(\theta_{b}\)
collateral constraint is so restrictive that banks do not want to borrow from the central bank. This scenario is unlikely, as the corresponding values of \(\theta_{b}\) in the shaded grey area are too low to be seen. In fact, empirically the value of \(\theta_{b}\) is around \(0.995\)(see, e.g., Burlon et al. (2022)).
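To make the mapping from \(\theta_{b}\) to the equivalent loan rate concrete, the sketch below evaluates equation (22) on a few values of \(\theta_{b}\). Apart from \(\beta=0.99\), \(\zeta=0.25\) and \(R^{r}=1\), which echo the calibration, the rates and the unit deposit cost are illustrative assumptions; this is not the code used to produce Figure 1.

```python
beta, zeta, theta = 0.99, 0.25, 0.0               # discount factor, reserve ratio, subsidy
R_f = R_k = 1 / beta                              # risk-free rate, approximated return on capital
R_r, R_n, R_b, nu = 1.0, 1.005, 1.008, 0.005      # illustrative rates and unit deposit cost


def equivalent_loan_rate(theta_b):
    """Equivalent central bank lending rate, eq. (22), for a given collateral fraction."""
    numerator = R_n + (nu - theta) * R_f - zeta * R_r
    denominator = (1 - zeta) * (1 + (R_k - R_b) / theta_b)
    return numerator / denominator


for theta_b in (0.25, 0.5, 0.75, 0.995, 1.0):
    print(f"theta_b = {theta_b:5.3f} -> R^l = {equivalent_loan_rate(theta_b):.4f}")
```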
We can ask what would happen if the central bank offered a lending rate slightly above or below the equivalent loan rate. Nevertheless, the equivalence analysis has to be seen as a comparison of two steady states: one in which there is no CBDC (or a very little quantity in circulation) and the other in which both CBDC and bank deposits are used as means of payment. We can conjecture that, with a central bank's lending rate around the equivalent loan rate, we would still be in a situation very close to the optimum, and our conclusions would not change.
### Imperfect substitutability between payment instruments
Imperfect substitutability between CBDC and bank deposits has been widely assumed in the existing literature.9 However, there is little guidance on the degree of substitutability between the two payment methods. Barrdear and Kumhof (2022) offer a possible approach by calibrating the elasticity of substitution between CBDCs and bank deposits based on the elasticity of substitution across retail deposit accounts at different banks. They argue that the level of substitutability is low, given that even with high variability of interest rates offered on instant-access accounts by different banks, most households tend to remain with their current providers. They suggest that people tend to stick with what they know and are familiar with.
Footnote 9: See, e.g., Bacchetta and Perazzi (2022), Barrdear and Kumhof (2022) and Kumhof and Noone (2021).
Despite the lack of empirical evidence on the relationship between CBDC and bank deposits, CBDC would most likely not be a perfect substitute for bank deposits (see, e.g., Bacchetta and Perazzi (2022)). In this section, we study the equivalence of operating payment systems assuming that the household values CBDC and deposits according to equation (19), such that CBDC and deposits are imperfect substitutes.
As in Section 3.1, consider a policy that implements an equilibrium with deposits, \(n_{t+1}\); reserves, \(r_{t+1}\); no central bank loans, \(l_{t+1}=0\); and no government bonds, \(b_{t+1}=0\). We want to analyze whether there exists another policy, indicated by circumflexes, that in equilibrium guarantees fewer deposits, \(\hat{n}_{t+1}\), and reserves, \(\hat{r}_{t+1}\); more CBDC, \(\hat{m}_{t+1}\); central bank loans, \(\hat{l}_{t+1}\); government bonds, \(\hat{b}_{t+1}\); a different ownership structure of capital; possibly households taxes, \(\hat{T}_{1,t}\) and \(\hat{T}_{2,t+1}\); but same allocation and price system.
The two policies and equilibrium coincide, except that
\[\hat{n}_{t+1}-n_{t+1}=-\Delta,\qquad\hat{r}_{t+1}-r_{t+1}=-\zeta_{t+1}\Delta,\]
\[\hat{m}_{t+1}-m_{t+1}=\left[\frac{1}{\lambda_{t}}\left(n_{t+1}^{1-\epsilon_{t}}- \hat{n}_{t+1}^{1-\epsilon_{t}}\right)+m_{t+1}^{1-\epsilon_{t}}\right]^{\frac{1 }{1-\epsilon_{t}}}-m_{t+1},\]
\[\hat{l}_{t+1}-l_{t+1}=(1-\zeta_{t+1})\Delta,\qquad\hat{b}_{t+1}-b_{t+1}=\frac {\hat{l}_{t+1}R_{t+1}^{l}}{\theta_{b}},\]
\[\hat{k}_{t+1}^{g}-k_{t+1}^{g}=-\left(\hat{k}_{t+1}^{h}-k_{t+1}^{h}\right)+\hat {b}_{t+1},\qquad\hat{k}_{t+1}^{h}-k_{t+1}^{h}=\Delta-\left(\hat{m}_{t+1}-m_{t+ 1}\right),\]
where \(l_{t+1}\) and \(b_{t+1}\) are normalized to zero.
We still observe that the proposed household portfolio change does not alter effective real balances, the aggregate capital stock, and the reserves-to-deposits ratio. We define the two taxes \(\hat{T}_{1,t}\) and \(\hat{T}_{2,t+1}\) as in Section 3.1. Using the household's Euler equation for capital and the definition of the risk-free rate, we can express the market value of taxes at time \(t\) as \(\hat{T}_{1,t}+\mathbb{E}_{t}\Lambda_{t+1}\hat{T}_{2,t+1}\). For the intervention to be neutral to the economy, the market value of taxes must be zero. This is true when the central bank's lending rate is equal to:10
Footnote 10: In deriving the central bank’s loan rate, we use the condition \(\frac{m_{t+1}}{n_{t+1}}=\left(\lambda_{t}\frac{\chi_{t+1}^{n}}{\chi_{t+1}^{m}} \right)^{\frac{1}{\epsilon_{t}}}\) from the household’s optimization problem.
\[R_{t+1}^{l}=\frac{\lambda_{t}\left(\frac{\hat{m}_{t+1}-m_{t+1}}{\Delta}\right) \left(\frac{n_{t+1}}{m_{t+1}}\right)^{\epsilon_{t}}R_{t+1}^{n}-\zeta_{t+1}R_{t +1}^{r}+\left(\nu_{t}(\dots)-\theta_{t}+1-\lambda_{t}\left(\frac{\hat{m}_{t+1 }-m_{t+1}}{\Delta}\right)\left(\frac{n_{t+1}}{m_{t+1}}\right)^{\epsilon_{t}} \right)R_{t+1}^{f}}{\left(1-\zeta_{t+1}\right)\left(1+\frac{R_{t+1}^{k}-R_{t +1}^{b}}{\theta_{b}}\right)}.\]
The market value of the changes in bank profits is
\[\left(\hat{\pi}_{1,t}^{b}-\pi_{1,t}^{b}\right)+\mathbb{E}_{t} \Lambda_{t+1}(\hat{\pi}_{2,t+1}^{b}-\pi_{2,t+1}^{b})\] \[=\Delta\left(\nu_{t}(\dots)-\theta_{t}\right)+\mathbb{E}_{t} \Lambda_{t+1}\Delta\left(R_{t+1}^{n}-\zeta_{t+1}R_{t+1}^{r}-(1-\zeta_{t+1}) \left(1+\frac{R_{t+1}^{k}-R_{t+1}^{b}}{\theta_{b}}\right)R_{t+1}^{l}\right).\]
Substituting in the expression we derived for \(R_{t+1}^{l}\), the last expression becomes
\[\left(\hat{\pi}_{1,t}^{b}-\pi_{1,t}^{b}\right)+\mathbb{E}_{t} \Lambda_{t+1}(\hat{\pi}_{2,t+1}^{b}-\pi_{2,t+1}^{b})\] \[=\mathbb{E}_{t}\Lambda_{t+1}\Delta\left(R_{t+1}^{n}-\lambda_{t} \left(\frac{\hat{m}_{t+1}-m_{t+1}}{\Delta}\right)\left(\frac{n_{t+1}}{m_{t+1} }\right)^{\epsilon_{t}}R_{t+1}^{n}-\left(1-\lambda_{t}\left(\frac{\hat{m}_{t+ 1}-m_{t+1}}{\Delta}\right)\left(\frac{n_{t+1}}{m_{t+1}}\right)^{\epsilon_{t}} \right)R_{t+1}^{f}\right),\]
which does not reduce to zero for any central bank's lending rate. In other words, the central
bank cannot make banks indifferent to the competition from CBDC. In fact, a change in banks' profitability implies that the new policy does not guarantee the same allocations as before, implying that the introduction of CBDC has real effects on the economy.
## 4 General Equilibrium
In this section, we analyze an equilibrium in which the household holds both CBDC and deposits. Note that to pin down the demands for CBDC and deposits, we assume the functional form for real balances as in equation (19).
### Equilibrium conditions
The set of equilibrium conditions resembles a Real Business Cycle model augmented with some "pseudo wedges".
The Euler equation, leisure choice, and resource constraint are, respectively:
\[c_{t}^{-\sigma}x_{t}^{v}\Omega_{t}^{c} =\beta\mathbb{E}_{t}\left[c_{t+1}^{-\sigma}x_{t+1}^{v}\Omega_{t+1 }^{c}R_{t+1}^{k}\right], \tag{24}\] \[\frac{c_{t}^{1-\sigma}}{1-\sigma}vx_{t}^{v-1}\Omega_{t}^{x} =w_{t}c_{t}^{-\sigma}x_{t}^{v}\Omega_{t}^{c},\] (25) \[k_{t+1} =k_{t}^{\alpha}(1-x_{t})^{1-\alpha}+k_{t}(1-\delta)-c_{t}\Omega_{t }^{rc}, \tag{26}\]
where the return on capital and the real wage are
\[R_{t+1}^{k} =1-\delta+\alpha\left(\frac{k_{t+1}}{1-x_{t+1}}\right)^{\alpha-1}, \tag{27}\] \[w_{t} =(1-\alpha)\left(\frac{k_{t}}{1-x_{t}}\right)^{\alpha}. \tag{28}\]
The demand for effective real balances, CBDC, and bank deposits are, respectively:
\[z_{t+1}=c_{t}\left(\frac{\mathbf{\upsilon}}{1-\mathbf{\upsilon}}\frac{1}{\chi_{t+1}}\right)^{\frac{1}{\psi}}, \tag{29}\] \[m_{t+1}=z_{t+1}\left(\lambda_{t}\frac{\chi_{t+1}}{\chi_{t+1}^{m}}\right)^{\frac{1}{\epsilon_{t}}}, \tag{30}\] \[n_{t+1}=z_{t+1}\left(\frac{\chi_{t+1}}{\chi_{t+1}^{n}}\right)^{\frac{1}{\epsilon_{t}}}. \tag{31}\]
The cost of liquidity reads
\[\chi_{t+1}=\frac{\chi_{t+1}^{m}\chi_{t+1}^{n}}{\left(\lambda_{t}^{ \frac{1}{\epsilon_{t}}}\left(\chi_{t+1}^{n}\right)^{\frac{1-\epsilon_{t}}{ \epsilon_{t}}}+\left(\chi_{t+1}^{m}\right)^{\frac{1-\epsilon_{t}}{\epsilon_{t }}}\right)^{\frac{\epsilon_{t}}{1-\epsilon_{t}}}}. \tag{32}\]
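The following minimal sketch evaluates the money-demand block, eqs. (29)-(32): given the CBDC and deposit spreads, it computes the overall cost of liquidity and the implied holdings of effective real balances, CBDC and deposits. The spread values and the consumption level used at the bottom are assumptions chosen only to make the snippet run; the liquidity weight defaults to the calibrated value from Table 1.

```python
def liquidity_cost(chi_m, chi_n, lam=1.0, eps=1 / 6):
    """Overall cost of liquidity, eq. (32)."""
    inner = lam ** (1 / eps) * chi_n ** ((1 - eps) / eps) + chi_m ** ((1 - eps) / eps)
    return chi_m * chi_n / inner ** (eps / (1 - eps))


def money_demands(c, chi_m, chi_n, lam=1.0, eps=1 / 6, v_liq=0.012, psi=0.6):
    """Demands for effective real balances, CBDC and deposits, eqs. (29)-(31)."""
    chi = liquidity_cost(chi_m, chi_n, lam, eps)
    z = c * (v_liq / (1 - v_liq) / chi) ** (1 / psi)
    m = z * (lam * chi / chi_m) ** (1 / eps)
    n = z * (chi / chi_n) ** (1 / eps)
    return z, m, n


z, m, n = money_demands(c=1.0, chi_m=0.0199, chi_n=0.0136)   # illustrative spreads
print(f"z = {z:.4f}, m = {m:.4f}, n = {n:.4f}, m/n = {m / n:.3f}")
```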
From the bank's optimality condition:
\[\chi_{t+1}^{n}-\chi_{t+1}^{n}\left(\frac{1-s_{t}}{\psi}+\frac{s_ {t}}{\epsilon_{t}}\right)^{-1}=\left(\phi_{1}\varphi+\phi_{2}\right)\left( \frac{\chi_{t+1}^{r}}{\phi_{1}(\varphi-1)}\right)^{\frac{\varphi-1}{\varphi}} -\theta_{t}, \tag{33}\]
where
\[s_{t}=\frac{\lambda_{t}^{\frac{1}{\epsilon_{t}}}\left(\chi_{t+1 }^{n}\right)^{\frac{1-\epsilon_{t}}{\epsilon_{t}}}}{\lambda_{t}^{\frac{1}{ \epsilon_{t}}}\left(\chi_{t+1}^{n}\right)^{\frac{1-\epsilon_{t}}{\epsilon_{t }}}+\left(\chi_{t+1}^{m}\right)^{\frac{1-\epsilon_{t}}{\epsilon_{t}}}}. \tag{34}\]
The CBDC and reserve spreads are, respectively:
\[\chi_{t+1}^{m}=1-\frac{R_{t+1}^{m}}{R_{t+1}^{f}}, \tag{35}\] \[\chi_{t+1}^{r}=1-\frac{R_{t+1}^{r}}{R_{t+1}^{f}}. \tag{36}\]
Recall the definition of the risk-free rate:
\[R_{t+1}^{f}=\frac{1}{\mathbb{E}_{t}[\Lambda_{t+1}]}, \tag{37}\]
where
\[\Lambda_{t+1}=\beta\frac{c_{t+1}^{-\sigma}x_{t+1}^{v}\Omega_{t+1 }^{c}}{c_{t}^{-\sigma}x_{t}^{v}\Omega_{t}^{c}}. \tag{38}\]
Finally, the auxiliary variables are:
\[\Omega_{t}^{c}=\left(1-\mathbf{v}\right)^{\frac{1-\sigma}{1-\psi}} \left(1+\left(\frac{\mathbf{v}}{1-\mathbf{v}}\right)^{\frac{1}{\psi}}\chi_{t+1}^{1- \frac{1}{\psi}}\right)^{\frac{\psi-\sigma}{1-\psi}}, \tag{39}\] \[\Omega_{t}^{x}=\left(1-\mathbf{v}\right)^{\frac{1-\sigma}{1-\psi}} \left(1+\left(\frac{\mathbf{v}}{1-\mathbf{v}}\right)^{\frac{1}{\psi}}\chi_{t+1}^{1- \frac{1}{\psi}}\right)^{\frac{1-\sigma}{1-\psi}},\] (40) \[\Omega_{t}^{rc}=1+\frac{m_{t+1}}{c_{t}}\mu^{m}+\frac{n_{t+1}}{c_{ t}}\left(\left(\phi_{1}+\phi_{2}\right)\left(\frac{\chi_{t+1}^{r}}{\phi_{1}( \varphi-1)}\right)^{\frac{\varphi-1}{\varphi}}+\left(\frac{\chi_{t+1}^{r}}{ \phi_{1}(\varphi-1)}\right)^{-\frac{1}{\varphi}}\rho\right). \tag{41}\]
### Dynamic effects of households' preferences shocks
After characterizing the general equilibrium, we aim to investigate the potential threat to financial stability should CBDC crowds out deposits. We address this concern by studying the economy's responses to changes in the households' relative preferences for CBDC over bank deposits.
Specifically, we first assess the effects of a positive shock to the liquidity share of CBDC, \(\lambda_{t}\), and assume it follows a log AR(1) process of the form
\[\log(\lambda_{t})=(1-\rho^{\lambda})\log(\lambda)+\rho^{\lambda}\log(\lambda_{ t-1})+e_{t}^{\lambda}, \tag{42}\]
where \(\rho^{\lambda}\) is the persistence parameter, \(\lambda\) is the steady-state value, and \(e_{t}^{\lambda}\) is the exogenous one-time shock.
Secondly, we evaluate the impact of a positive shock to the inverse of the elasticity of substitution, \(\epsilon_{t}\), assuming it also follows a log AR(1) process of the form
\[\log(\epsilon_{t})=(1-\rho^{\epsilon})\log(\epsilon)+\rho^{\epsilon}\log( \epsilon_{t-1})+e_{t}^{\epsilon}, \tag{43}\]
where \(\rho^{\epsilon}\) is the persistence parameter, \(\epsilon\) is the steady-state value, and \(e_{t}^{\epsilon}\) is the exogenous one-time shock. This shock corresponds to a negative shock to the substitutability between CBDC and deposits. Were a CBDC to be issued, the substitutability parameter would represent the willingness of households to substitute away from one payment method toward the other.
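The two processes (42) and (43) generate deterministic impulse-response paths once a one-time innovation hits at \(t=0\). The sketch below is an illustrative implementation of that recursion; the horizon length and the way the shock is fed in are our own choices.

```python
import numpy as np


def simulate_log_ar1(steady_state, rho, shock, T=40):
    """Impulse-response path of a log AR(1) process, eqs. (42)-(43),
    after a one-time innovation `shock` at t = 0."""
    log_path = np.empty(T)
    log_path[0] = np.log(steady_state) + shock
    for t in range(1, T):
        log_path[t] = (1 - rho) * np.log(steady_state) + rho * log_path[t - 1]
    return np.exp(log_path)


lam_path = simulate_log_ar1(steady_state=1.0, rho=0.9, shock=np.log(1.1))     # 10% shock to lambda_t
eps_path = simulate_log_ar1(steady_state=1 / 6, rho=0.9, shock=np.log(1.1))   # 10% shock to epsilon_t
```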
Bacchetta and Perazzi (2022) study the effect of the elasticity of substitution between CBDC and bank deposits on the demand for CBDC, conditioning on the relative level of interest paid by CBDC with respect to the interest paid by bank deposits. They find that the relationship between the interest rate on CBDC and demand for it is affected by the elasticity of substitution between CBDC and deposits. When the interest rate on CBDC is below that of deposits, demand for CBDC decreases in the elasticity of substitution. On the other hand, when the CBDC interest rate is higher but not significantly above that of deposits, the same effect is seen but in the opposite direction, meaning as the substitutability increases, demand for CBDC increases. However, when the CBDC interest rate is significantly above that of deposits, the holdings of CBDC decrease as the substitutability between the two instruments increases. The intuition is that when the two instruments are easily substitutable, people are less willing to use the more expensive option, but when they are less substitutable, it takes more of one to replace the other.
### Parameters
The model is quarterly. We assume that in the steady state, the household perceives CBDC and deposits as equally useful for liquidity purposes, i.e., \(\lambda=1\). The household's discount factor \(\beta\) is set to the standard value of \(0.99\). Additionally, we assume that in the steady state the inverse substitutability between the two liquid assets, \(\epsilon\), is \(1/6\). This corresponds to a medium degree of substitutability following Bacchetta and Perazzi (2022). We follow Niepelt (2022) and set the inverse intertemporal elasticity of substitution, \(\sigma\), to \(0.5\). The leisure function coefficient, \(\upsilon\), is set to \(0.85\) to match a steady-state labor supply of approximately \(1/3\). We assume that consumption and liquidity services are complements. Therefore, the inverse intratemporal elasticity of substitution between the two, \(\psi\), is set higher than \(\sigma\) and equal to \(0.6\). The capital share of output, \(\alpha\), and the rate of capital depreciation, \(\delta\), are set to the standard values of \(1/3\) and \(0.025\), respectively.
Throughout this section, we assume that the government does not extend subsidies to banks, i.e., \(\theta_{t}=0\). We follow Niepelt (2022) and set the government's marginal cost of providing reserves, \(\rho\), to \(0.01\). We follow the standard and set the persistence parameters in the log AR(1) processes, \(\rho^{\epsilon}\) and \(\rho^{\lambda}\), to \(0.9\).
Appendix C describes the model calibration of the remaining parameters. For simplicity, we assume that \(\phi_{1}=\phi_{2}=\phi\). We calibrate the banks' cost function coefficients, \(\phi\) and \(\varphi\), as well as the utility weight of liquidity, \(\mathbf{\upsilon}\), to match three steady-state quantities: CBDC-to-deposits ratio, \(m/n\), reserve-to-deposits ratio, \(\zeta\), and consumption velocity, \(c/z\). To minimize the compositional effect on the resource cost of liquidity provision, we set the unit resource costs of managing CBDC, \(\mu\), equal to the total resource cost of providing deposits.
Table 1 summarizes the parameter values.
### Impulse responses
Figure 2 shows the impulse responses as deviations from the steady state to a 10% increase in the liquidity share of CBDC, \(\lambda_{t}\).
| Parameter | Value | Source |
| --- | --- | --- |
| \(\lambda\) | 1 | Assumption |
| \(\beta\) | 0.99 | Standard |
| \(\epsilon\) | 1/6 | Bacchetta and Perazzi (2022) |
| \(\sigma\) | 0.5 | Niepelt (2022) |
| \(\upsilon\) | 0.85 | Assumption (Match steady-state labor supply \(\approx 1/3\)) |
| \(\psi\) | 0.6 | Assumption (Ensure \(\psi>\sigma\)) |
| \(\alpha\) | 1/3 | Standard |
| \(\delta\) | 0.025 | Standard |
| \(\theta_{t}\) | 0 | Assumption |
| \(\rho\) | 0.01 | Niepelt (2022) |
| \(\rho^{\epsilon},\rho^{\lambda}\) | 0.9 | Standard |
| \(\phi\) | 0.00061 | Model |
| \(\varphi\) | 2.00924 | Model |
| \(\mathbf{\upsilon}\) | 0.01200 | Model |
| \(\mu\) | 0.00745 | Model |

Table 1: Model Parameters
As the liquidity share of CBDC increases, the spreads on both CBDC and deposits decrease. Since the CBDC rate is kept constant, the marginal decrease in CBDC spread is due to a slight reduction in the risk-free rate.11 On the other hand, the deposit spread decreases by a much larger magnitude. An increase in \(\lambda_{t}\) means CBDC is becoming a more attractive source of liquidity for households; thus, CBDC demand increases significantly. Banks, facing tougher competition from CBDC, decrease the price of deposits by a large margin to stem the potential deposit outflows. Intuitively, a more attractive CBDC diminishes the market power of banks. Thus, the markup on deposit spread (over the marginal cost of deposit issuance) that banks impose on households is reduced, and bank profits drop.
Footnote 11: The risk-free rate response is not plotted in the figure, but it is analogous to that of the reserve spread.
With both spreads decreasing, households' (average) cost of liquidity becomes lower, which induces them to hold more liquidity services and increase consumption. This is because a lower cost of liquidity increases households' current marginal utility of consumption. In other words, the opportunity cost of savings has increased, incentivizing households to save less and consume more. At the same time, households' marginal benefit of leisure becomes greater
Figure 2: Impulse responses to 10% increase in the liquidity share of CBDC, \(\lambda_{t}\)
than the marginal cost, inducing them to decrease labor supply. Lastly, with the increased liquidity holdings by households, the societal resource costs associated with liquidity provision are higher and thus reduce capital accumulation.
Figure 3 shows the impulse responses, as deviations from steady state, to a 10% increase in inverse elasticity of substitution between CBDC and deposits, \(\epsilon_{t}\). The impulse responses are essentially the same as with an increase in \(\lambda_{t}\).
### Robustness checks
An aspect of uncertainty regarding any practical implementation of CBDC would be the households' perception of its usefulness relative to bank deposits. Therefore, we test the robustness of our results by changing the liquidity weight of CBDC and the substitutability between CBDC and deposits in the steady state.
Firstly, we change the steady-state liquidity weight of CBDC, \(\lambda\), to 0.5 and 1.5. Figures 4 and 5 in the Appendix D show the impulse responses to a 10% increase in \(\lambda_{t}\) when its steady
Figure 3: Impulse responses to 10% increase in inverse elasticity of substitution between CBDC and deposits, \(\epsilon_{t}\)
state value is 0.5 and 1.5, respectively. Comparing these responses to the main specification in Figure 2, we see that the results remain the same. The shapes of the impulse responses are identical, while there are some minor differences in magnitudes. Figures 8 and 9 in the Appendix D show the impulse responses to a 10% increase in \(\epsilon_{t}\) when steady-state \(\lambda\) is 0.5 and 1.5, respectively. Comparing these to Figure 3, we see that the main takeaways from the previous section stand. The only major difference is in Figure 8 when the steady-state CBDC liquidity weight is 0.5, the response of deposits is of the opposite sign. All other responses remain identical in shape and similar in magnitude.
Secondly, we change the elasticity of substitution between CBDC and deposits to 1/9 and 1/3. Figures 6 and 7 show the impulse responses to a 10% increase in \(\lambda_{t}\) when steady-state \(\epsilon\) is 1/9 and 1/3, respectively. Figures 10 and 11 show the impulse responses to a 10% increase in \(\epsilon_{t}\) when its steady-state value is 1/9 and 1/3, respectively. Comparing these impulse responses to their main counterparts in Figures 2 and 3, we see that they are all identical in shape and similar in magnitude.
## 5 Conclusion
Our work highlights the importance of considering the degree of substitutability between CBDC and bank deposits when evaluating the potential risk to financial stability resulting from the introduction of a retail CBDC. We find that, when CBDC and deposits are perfect substitutes, as long as they have the same resource cost per unit of effective real balances, the central bank can offer loans to banks such that they maintain the same balance sheet positions as before the introduction of CBDC, making it neutral to the real economy. Additionally, it is crucial to account for the collateral requirement that banks must respect when borrowing from the central bank, as the central bank's lending rate depends on how restrictive the collateral constraint is. The tighter the constraint is (the lower the fraction of banks' bond holdings that can be pledged as collateral), the lower the central bank's loan rate should be to keep the allocations unchanged when introducing a CBDC.
However, when CBDC and deposits are imperfect substitutes, issuing a CBDC changes banks' profitability irrespective of the central bank's lending rate, meaning that the central bank cannot make banks indifferent to the competition from CBDC. In fact, a change in banks' profitability implies that the new policy does not guarantee the same allocations as before, implying that the introduction of CBDC has real effects on the economy. Nevertheless, based on our dynamic effects analysis, there seems to be no risk of CBDC crowding out deposits when considering changes in households' preferences for CBDC over deposits.
Overall, our findings can help policymakers and central bankers design and implement CBDCs to minimize the risk of financial instability. The natural next step in the analysis of the equivalence result is to investigate the transition from the steady state with no CBDC to the steady state after CBDC has been introduced and identify the driving forces governing the transition between the two.
## Appendix A Check under which condition the collateral constraint binds
Assuming interior solutions, the bank's optimality conditions for loans and bonds are, respectively:
\[\mathbb{E}_{t}\left[\Lambda_{t+1}(R_{t+1}^{k}-R_{t+1}^{l}-l_{t+1} \frac{\partial R_{t+1}^{l}}{\partial l_{t+1}})\right] =\mu_{t}\left(1+\theta_{b}\frac{b_{t+1}}{R^{l}_{\;t+1}}\frac{ \partial R_{t+1}^{l}}{\partial l_{t+1}}\right),\] \[\mathbb{E}_{t}\left[\Lambda_{t+1}(R_{t+1}^{k}-R_{t+1}^{b})\right] =\mu_{t}\frac{\theta_{b}}{R_{t+1}^{l}}.\]
Subtracting the condition for bonds from the one for loans:
\[\mathbb{E}_{t}\left[\Lambda_{t+1}(R_{t+1}^{b}-R_{t+1}^{l}-l_{t+1} \frac{\partial R_{t+1}^{l}}{\partial l_{t+1}})\right]=\mu_{t}\left(1-\frac{ \theta_{b}}{R_{t+1}^{l}}+\theta_{b}\frac{b_{t+1}}{R^{l}_{\;t+1}}\frac{ \partial R_{t+1}^{l}}{\partial l_{t+1}}\right). \tag{44}\]
To determine the sign of the RHS, recall that \(\theta_{b}\in[0,1]\); since the gross rate of return on central bank loans is positive, and we assumed interior solutions, all the terms are positive.
We define the elasticity of central bank loans with respect to their rate of return as
\[\eta_{l,t+1}=\frac{\partial l_{t+1}}{\partial R_{t+1}^{l}}\frac{R_{t+1}^{l}}{ l_{t+1}},\]
such that we can rewrite the last term on the LHS as
\[\frac{1}{\eta_{l,t+1}}R_{t+1}^{l}.\]
Expression (44) says that the collateral constraint is binding if:
\[R_{t+1}^{b}-R_{t+1}^{l}>\frac{1}{\eta_{l,t+1}}R_{t+1}^{l}.\]
Rearranging
\[R_{t+1}^{b}>R_{t+1}^{l}+\frac{1}{\eta_{l,t+1}}R_{t+1}^{l}.\]
We can conclude that the collateral constraint is binding if the sum of the cost of borrowing from the central bank and the bank's cost of taking on more loans is cheaper than the return
banks get from government bonds:12
Footnote 12: We replicated the same analysis in the setting by Burlon et al. (2022), and we got an analogous result.
\[\mu_{t}>0,\qquad l_{t+1}=\theta_{b}\frac{b_{t+1}}{R_{t+1}^{l}} \qquad\text{ iff }R_{t+1}^{b}>R_{t+1}^{l}+\frac{1}{\eta_{l,t+1}}R_{t+1}^{l}.\]
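As a quick numerical illustration of this condition, the check below compares the bond return with the total marginal cost of central bank funding; all numbers are made up for the example and carry no calibration content.

```python
def collateral_constraint_binds(R_b, R_l, eta_l):
    """Appendix A condition: the constraint binds iff R^b > R^l * (1 + 1 / eta_l)."""
    return R_b > R_l * (1 + 1 / eta_l)


print(collateral_constraint_binds(R_b=1.008, R_l=1.003, eta_l=1000.0))  # True for these numbers
```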
## Appendix B Steady State
Variables without time subscripts denote the steady-state values. The discount factor determines the return on capital and the risk-free rate:
\[R^{k}=R^{f}=\frac{1}{\beta}.\]
Conditional on the policy rates, \(R^{m}\) and \(R^{r}\), we know CBDC and reserve spreads, \(\chi^{m}\) and \(\chi^{r}\), respectively. We find the deposit spread, \(\chi^{n}\), using equilibrium condition (33). Knowing the above-mentioned spreads, we derive \(\chi^{z}\), \(\Omega^{c}\), \(\Omega^{x}\). We find the capital-labor ratio from the expression for the return on capital, (27):
\[\frac{k}{1-x}=\left(\frac{1}{\alpha}\left(R^{k}-1+\delta\right) \right)^{\frac{1}{\alpha-1}}.\]
Next, we divide the resource constraint (26) by the labor supply to get the consumption-labor ratio:
\[\frac{c}{1-x}=\left(\left(\frac{k}{1-x}\right)^{\alpha}-\delta \left(\frac{k}{1-x}\right)\right)\frac{1}{\Omega^{rc}}.\]
From the household's leisure choice, condition (25), we get the consumption-leisure ratio:
\[\frac{c}{x}=\frac{(1-\sigma)w}{\upsilon}\frac{\Omega_{c}}{\Omega _{x}},\]
where the wage rate is given by condition (28):
\[w=(1-\alpha)\left(\frac{k}{1-x}\right)^{\alpha}.\]
Lastly, we combine the consumption-leisure and consumption-labor ratios to derive the steady-state leisure:
\[x=\frac{c}{1-x}\left(\frac{c}{1-x}+\frac{c}{x}\right)^{-1}.\]
Having derived the above-mentioned steady-state values, it is straightforward to find all other quantities.
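For illustration, the chain of steady-state relations above can be evaluated directly. In the minimal sketch below the auxiliary terms \(\Omega^{c}\), \(\Omega^{x}\) and \(\Omega^{rc}\) are set to one as placeholders (in the full solution they follow from the spreads), so the resulting numbers are indicative only.

```python
def steady_state(beta=0.99, alpha=1 / 3, delta=0.025, sigma=0.5, upsilon=0.85,
                 Omega_c=1.0, Omega_x=1.0, Omega_rc=1.0):
    """Steady-state chain of Appendix B with placeholder Omega terms."""
    R_k = 1 / beta                                            # return on capital = risk-free rate
    k_l = ((R_k - 1 + delta) / alpha) ** (1 / (alpha - 1))    # capital-labor ratio
    c_l = (k_l ** alpha - delta * k_l) / Omega_rc             # consumption-labor ratio
    w = (1 - alpha) * k_l ** alpha                            # wage
    c_x = (1 - sigma) * w / upsilon * Omega_c / Omega_x       # consumption-leisure ratio
    x = c_l / (c_l + c_x)                                     # steady-state leisure
    return {"R_k": R_k, "k/(1-x)": k_l, "c/(1-x)": c_l, "x": x}


print(steady_state())   # leisure close to 2/3, i.e. labor supply close to 1/3
```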
## Appendix C Calibration
The household's first-order conditions for CBDC and deposits imply the following relation:
\[\chi^{n}=\frac{1}{\lambda}\left(\frac{m}{n}\right)^{\epsilon}\chi^{m},\]
where \(m/n\) is the desired steady-state ratio of CBDC to deposits. To simulate a situation with limited CBDC adaptation, we set this value equal to \(1/10\).13
Footnote 13: To find the steady-state level of the CBDC spread, \(\chi^{m}\), we set the steady-state CBDC rate, \(R^{m}\), equal to \(0.99\), to simulate a negative net return in real terms. We make this conjecture since most of the central banks’ research projects indicate that CBDCs may offer low or no nominal interests.
We use the last relation to derive \(\chi^{z}\) and, consequently, the left-hand side of the bank's optimality condition (33), denoted by \(LHS\). Having derived \(LHS\), we again use expression (33) to find \(\varphi\) as
\[\varphi=\frac{LHS+\chi^{r}\zeta}{LHS-\chi^{r}\zeta},\]
where \(\zeta\) is the desired steady-state reserves-to-deposits ratio. This ratio is set to \(0.25\) to align with U.S. data.14
Footnote 14: To find the steady-state level of the reserve spread, \(\chi^{r}\), we set the steady-state reserve rate, \(R^{r}\), equal to \(1.0\).
We use the right-hand side of expression (33) to find \(\phi\):
\[\phi=\frac{\chi^{r}}{\zeta^{-\varphi}(\varphi-1)}.\]
Rearranging the household's demand for effective real balances, expression (29), we derive \(\mathbf{\upsilon}\) as

\[\mathbf{\upsilon}=\frac{\left(\frac{z}{c}\right)^{\psi}\chi^{z}}{1+\left(\frac{z}{c}\right)^{\psi}\chi^{z}}.\]
where \(c/z\) is the consumption velocity, targeted at 1.147 to align with U.S. data.
Lastly, to minimize the compositional effect on the resource cost of liquidity provision, we set the unit resource costs of managing CBDC, \(\mu\), to be equal to the total resource cost of providing deposits:
\[\mu=2\phi\zeta^{1-\varphi}+\zeta\rho.\]
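The calibration steps above can be chained into a short script. The sketch below backs out \(\varphi\), \(\phi\), \(\mathbf{\upsilon}\) and \(\mu\) from the stated targets (\(m/n=0.1\), \(\zeta=0.25\), \(c/z=1.147\)) and the assumed policy rates (\(R^{m}=0.99\), \(R^{r}=1\)); up to rounding it should reproduce the "Model" entries of Table 1, but it is only an illustrative reimplementation, not the authors' code.

```python
def calibrate(beta=0.99, R_m=0.99, R_r=1.0, lam=1.0, eps=1 / 6, psi=0.6,
              rho=0.01, m_over_n=0.1, zeta=0.25, c_over_z=1.147):
    """Back out (varphi, phi, upsilon, mu) from steady-state targets, following Appendix C."""
    R_f = 1 / beta
    chi_m, chi_r = 1 - R_m / R_f, 1 - R_r / R_f
    # Deposit spread implied by the CBDC-to-deposits target.
    chi_n = (1 / lam) * m_over_n ** eps * chi_m
    # Cost of liquidity, eq. (32), and liquidity share s, eq. (34).
    denom = lam ** (1 / eps) * chi_n ** ((1 - eps) / eps) + chi_m ** ((1 - eps) / eps)
    chi_z = chi_m * chi_n / denom ** (eps / (1 - eps))
    s = lam ** (1 / eps) * chi_n ** ((1 - eps) / eps) / denom
    # Left-hand side of the bank optimality condition (33) with theta = 0.
    lhs = chi_n - chi_n / ((1 - s) / psi + s / eps)
    varphi = (lhs + chi_r * zeta) / (lhs - chi_r * zeta)
    phi = chi_r / (zeta ** (-varphi) * (varphi - 1))
    zc = (1 / c_over_z) ** psi * chi_z
    upsilon_liq = zc / (1 + zc)
    mu = 2 * phi * zeta ** (1 - varphi) + zeta * rho
    return {"varphi": varphi, "phi": phi, "upsilon_liq": upsilon_liq, "mu": mu}


print(calibrate())
```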
## Appendix D Robustness checks

### Impulse responses to 10% increase in \(\lambda_{t}\) with alternative specifications
Figure 5: Higher steady-state \(\lambda=1.5\)
Figure 6: Lower steady-state \(\epsilon=1/9\)
Figure 7: Higher steady-state \(\epsilon=1/3\)
### Impulse responses to 10% increase in \(\epsilon_{t}\) with alternative specifications
Figure 8: Lower steady-state \(\lambda=0.5\)
Figure 9: Higher steady-state \(\lambda=1.5\)
Figure 10: Lower steady-state \(\epsilon=1/9\)
Figure 11: Higher steady-state \(\epsilon=1/3\) |
2310.04166 | Autoregressive Neural Quantum States with Quantum Number Symmetries | Neural quantum states have established themselves as a powerful and versatile
family of ansatzes for variational Monte Carlo simulations of quantum many-body
systems. Of particular prominence are autoregressive neural quantum states
(ANQS), which enjoy the expressibility of deep neural networks, and are
equipped with a procedure for fast and unbiased sampling. Yet, the
non-selective nature of autoregressive sampling makes incorporating quantum
number symmetries challenging. In this work, we develop a general framework to
make the autoregressive sampling compliant with an arbitrary number of quantum
number symmetries. We showcase its advantages by running electronic structure
calculations for a range of molecules with multiple symmetries of this kind. We
reach the level of accuracy reported in previous works with more than an order
of magnitude speedup and achieve chemical accuracy for all studied molecules,
which is a milestone unreported so far. Combined with the existing effort to
incorporate space symmetries, our approach expands the symmetry toolbox
essential for any variational ansatz and brings the ANQS closer to being a
competitive choice for studying challenging quantum many-body systems. | Aleksei Malyshev, Juan Miguel Arrazola, A. I. Lvovsky | 2023-10-06T11:29:06Z | http://arxiv.org/abs/2310.04166v1 | # Autoregressive neural quantum states with quantum number symmetries
###### Abstract
Neural quantum states have established themselves as a powerful and versatile family of ansatzes for variational Monte Carlo simulations of quantum many-body systems. Of particular prominence are autoregressive neural quantum states (ANQS), which enjoy the expressibility of deep neural networks, and are equipped with a procedure for fast and unbiased sampling. Yet, the non-selective nature of autoregressive sampling makes incorporating quantum number symmetries challenging. In this work, we develop a general framework to make the autoregressive sampling compliant with an arbitrary number of quantum number symmetries. We showcase its advantages by running electronic structure calculations for a range of molecules with multiple symmetries of this kind. We reach the level of accuracy reported in previous works with more than an order of magnitude speedup and achieve chemical accuracy for all studied molecules, which is a milestone unreported so far. Combined with the existing effort to incorporate space symmetries, our approach expands the symmetry toolbox essential for any variational ansatz and brings the ANQS closer to being a competitive choice for studying challenging quantum many-body systems.
## I Introduction
The variational approach has been key to numerous advances in computational quantum many-body physics, and the quest for better, more expressive and compact ansatzes is still ongoing. Carleo and Troyer pioneered neural networks as an ansatz for variational Monte Carlo (VMC) studies of quantum many-body systems [1]. They employed a veteran neural network, the restricted Boltzmann machine with complex weights, and demonstrated that it could closely describe the ground states of the transverse field Ising and antiferromagnetic Heisenberg Hamiltonians.
This result jumpstarted the field of neural quantum states (NQS). NQS were proven to be more expressive than tensor network states [2], to support volume law entanglement [3; 4], and to exactly represent ground states of gapped Hamiltonians [5; 6]. The domain of NQS applications has been extended to quantum state tomography [7; 8], study of open quantum systems [9; 10], and classical simulation of quantum computing [11; 12]. In a separate line of research, various modern neural network architectures, such as feedforward [13], convolutional [14; 15], recurrent [16] and transformer [17; 18], were adopted as bases for NQS.
One widely exploited family of NQS is _autoregressive neural quantum states (ANQS)_, introduced by Sharir _et al_. [19] to avoid expensive Metropolis-Hastings sampling, which was typical for early work on NQS-based optimisation. Instead, ANQS rely on fast and unbiased _autoregressive sampling_. In addition, they share the expressive power of deep neural networks, which made them a preferred ansatz in many subsequent studies [17; 18; 20; 21; 22; 10; 23].
Yet, existing ANQS research did not take full advantage of symmetries diagonal in the computational basis, which we refer to as quantum number symmetries. These symmetries partition the Hilbert space of a system into non-interacting symmetry sectors and thus allow one to reduce the computational space of a problem. For example, in the case of variational ground state search only the basis vectors from the correct symmetry sector can contribute to the ground state.
When an NQS is optimised via Metropolis-Hastings sampling, the proposal step of the latter can be easily adjusted so that irrelevant basis vectors never appear during the optimisation. However, the standard autoregressive sampling procedure does not have a built-in proposal step. Namely, instead of sampling from an \(N\)-variable probability distribution directly, one samples variables one by one (_locally_), and each local probability distribution depends on the previous sampling outcomes, resulting in a tree-like sampling process. While samples in the correct symmetry sector can be postselected, this approach is inefficient and computationally expensive. It is hence appealing to check whether a _partially_ sampled basis vector has a possible continuation belonging to the correct symmetry sector. If not, one would deem it as unphysical and discard it, thus terminating the sampling early and not wasting compute power.
In this paper, we propose an algorithm to identify unphysical partial basis vectors for a system that possesses an arbitrary number of quantum number symmetries satisfying two mild assumptions. This is in contrast to existing works that either relied on _ad hoc_ solutions to address a small number of specific symmetries [20; 21; 22], or considered quantum number symmetries of less generic nature [24]. We apply our algorithm to calculations of molecular electronic structure, where we identify a class of symmetries that had previously not been considered in NQS-based quantum chemistry calculations -- specifically, the \(\mathbb{Z}_{2}\) symmetries encoding the spatial symmetries of a molecule. We apply our treatment to these symmetries in addition to particle number and spin projection symmetries considered in previous works. This results in
dramatically improved accuracy and computational efficiency of the variational optimisation. In particular, we reach chemical accuracy for all studied molecules, which is a milestone unreported so far, and do this with an order of magnitude speedup compared to previous ANQS-based electronic structure calculations.
## II Background
### Variational Monte Carlo
We consider the problem of finding the ground state of a system of \(N\) interacting qubits governed by a Hamiltonian \(\hat{H}\). The state of the system is described by a superposition of \(2^{N}\) computational basis vectors \(\mathbf{x}\in\{0,1\}^{\otimes N}\):
\[\ket{\psi}=\sum_{\mathbf{x}=0}^{2^{N}-1}\psi(\mathbf{x})\ket{\mathbf{x}};\ \psi(\mathbf{x})\in\mathbb{C}. \tag{1}\]
To tame the exponential scaling of this problem we resort to variational Monte Carlo approach. One starts by choosing an _ansatz_ -- a certain class of quantum states \(\ket{\psi_{\theta}}\) dependent on a parameter set \(\theta\) of polynomial size. In this case finding the ground state can be formulated as a problem of variational minimisation:
\[\arg\min_{\theta}E(\theta)=\arg\min_{\theta}\frac{\bra{\psi_{\theta}}\hat{H} \ket{\psi_{\theta}}}{\bra{\psi_{\theta}}\psi_{\theta}}. \tag{2}\]
To solve this problem, VMC utilises the state-dependent diagonal operator of _local energy_:
\[\hat{H}_{\mathrm{loc}}(\mathbf{x})=\frac{\sum_{\mathbf{x}^{\prime}}\hat{H}_{ \mathbf{x}\mathbf{x}^{\prime}}\psi(\mathbf{x}^{\prime})}{\psi(\mathbf{x})}, \tag{3}\]
as an unbiased estimator of the energy (we omit dependence on \(\theta\) for brevity). That is, the energy \(E(\theta)\) of a quantum state can be calculated as the expectation value of the local energy operator \(\mathbb{E}\left[\hat{H}_{\mathrm{loc}}(\mathbf{x})\right]\) taken with respect to the underlying Born distribution of the ansatz [1]. In practice one evaluates this expectation after sampling a finite batch of \(N_{\mathrm{s}}\) basis vectors from \(p(\mathbf{x})\):
\[E_{\mathrm{est}}(\theta)=\sum_{l=1}^{N_{\mathrm{unq}}}\hat{H}_{\mathrm{loc}} \left(\mathbf{x}^{(l)}\right)\cdot\frac{n^{(l)}}{N_{\mathrm{s}}}, \tag{4}\]
where we suppose that \(N_{\mathrm{unq}}\) unique samples were produced, and \(n^{(l)}\) is the number of occurrences for the \(l\)-th unique basis vector. To seek a locally optimal set of parameters, one can employ gradient descent according to [1]:
\[\nabla_{\theta}E=2\operatorname{Re}\left\{\mathbb{E}\left[\hat{H} _{\mathrm{loc}}(\mathbf{x})\cdot\nabla_{\theta}\ln\psi_{\theta}^{*}(\mathbf{ x})\right]\right.\\ -\left.\mathbb{E}\left[\hat{H}_{\mathrm{loc}}(\mathbf{x})\right] \cdot\mathbb{E}\left[\nabla_{\theta}\ln\psi_{\theta}^{*}(\mathbf{x})\right] \right\}. \tag{5}\]
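To make Eqs. (4) and (5) concrete, the sketch below assembles the energy and gradient estimators from a batch of unique samples, their occurrence counts, the corresponding local energies, and the per-sample log-derivatives \(\nabla_{\theta}\ln\psi_{\theta}^{*}(\mathbf{x})\). It is a minimal NumPy illustration; in an actual implementation these inputs would come from the ansatz and an automatic-differentiation framework.

```python
import numpy as np


def energy_estimate(e_locs, counts):
    """Monte Carlo energy estimate from sampling statistics, Eq. (4)."""
    counts = np.asarray(counts)
    return np.sum(np.asarray(e_locs) * counts) / counts.sum()


def energy_gradient(e_locs, counts, log_derivs):
    """Stochastic estimate of the energy gradient, Eq. (5).

    log_derivs[l, k] is the k-th component of grad_theta ln psi*(x_l) for the
    l-th unique sample; e_locs[l] is the corresponding local energy.
    """
    e_locs = np.asarray(e_locs)
    p = np.asarray(counts) / np.sum(counts)              # empirical probabilities
    e_mean = np.sum(p * e_locs)
    weighted = np.einsum("l,l,lk->k", p, e_locs, log_derivs)
    baseline = e_mean * np.einsum("l,lk->k", p, log_derivs)
    return 2 * np.real(weighted - baseline)
```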
### Autoregressive neural quantum states
In Ref. [1] Carleo and Troyer put forward neural networks as a footing for potent ansatz parameterisation thanks to their ability to compactly and accurately represent complex high-dimensional quantum states. Many works on NQS used the celebrated Metropolis-Hastings algorithm to sample from the corresponding Born distribution. This algorithm is not without shortcomings: it produces unbiased samples from the target probability distribution only when the underlying Markov chain process has equilibrated, which in practice results in long autocorrelation times [19]. In addition, it struggles to adequately sample multimodal distributions, and in many cases the success of sampling hinges on a lucky initialisation of the underlying Markov chain.
Sharir _et al._ proposed the ANQS ansatz free of these drawbacks [19]. The ANQS wave function is expressed as a product of single-qubit normalised conditional wave functions:
\[\psi(\mathbf{x})=\prod_{i=1}^{N}\psi_{i}(x_{i}|\mathbf{x}_{<i}), \tag{6}\]
where \(\mathbf{x}_{<i}\coloneqq(x_{1},x_{2},\ldots,x_{i-1})\). The product rule (6) enables fast and unbiased sampling from the Born distribution: instead of sampling the whole basis vector at once, one can sequentially sample \(N\) one-dimensional Bernoulli probability distributions \(p_{i}(x_{i}|\mathbf{x}_{<i})\coloneqq|\psi_{i}(x_{i}|\mathbf{x}_{<i})|^{2}\) in a tree-like manner as depicted in Fig. 1A. One starts at the root with an empty configuration string \(\mathbf{x}_{<1}=\varnothing\) and samples the first variable \(x_{1}\) according to the unconditional probability distribution \(p_{1}(x_{1}|\varnothing)=|\psi_{1}(x_{1}|\varnothing)|^{2}\), where \(\psi_{1}(x_{1}|\varnothing)\) is produced by the ansatz. Based on the obtained value, one forms a partially sampled basis vector \(\mathbf{x}_{<2}=x_{1}\) and proceeds to sampling from the respective conditional probability distribution \(p_{2}(x_{2}|\mathbf{x}_{<2})\). The process is continued until the full basis vector \(\mathbf{x}=x_{1}x_{2}\ldots x_{N}\) is sampled. As a result, after a single traversal of the sampling tree one obtains an unbiased sample from \(p(\mathbf{x})\) -- this is in stark contrast with Metropolis-Hastings sampling, which might require multiple evaluations of the unnormalised probability due to possible rejections at the accept/reject step.
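As an illustration, the sketch below (our own; `conditional_prob` is a hypothetical stand-in for the ansatz subnetworks) draws a single basis vector by walking the sampling tree from the root to a leaf.

```python
import random

def sample_basis_vector(conditional_prob, n_qubits, rng=random):
    """Draw one basis vector x = (x_1, ..., x_N) autoregressively.

    conditional_prob(prefix) is assumed to return the Bernoulli parameter
    p_i(x_i = 1 | x_<i) produced by the ansatz for the next qubit.
    """
    x = []
    for _ in range(n_qubits):
        p_one = conditional_prob(tuple(x))
        x.append(1 if rng.random() < p_one else 0)  # sample x_i and extend x_<i
    return tuple(x)

# toy usage: a fake "ansatz" that always returns p = 0.3
print(sample_basis_vector(lambda prefix: 0.3, n_qubits=6))
```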
### Autoregressive statistics sampling
As described, autoregressive sampling is already able to provide a substantial speed up over the Metropolis-Hastings sampling. However, in the context of VMC calculations a further improvement can be achieved by _autoregressive statistics sampling_ proposed by Barrett _et al._[21]. It relies on the observation that to calculate \(E_{\mathrm{est}}(\theta)\) according to Eq. (4), one needs to know only the _sampling statistics_, i.e. the set of pairs \(\left\{\left(\mathbf{x}^{(l)},n^{(l)}\right)\right\}_{l=1}^{N_{\mathrm{unq}}}\). Hence, instead of sampling a Bernoulli random variable at the tree root \(N_{\mathrm{s}}\) times, one might sample a single
random _binomial_ variable \(B\left(N_{\mathrm{s}},p_{1}(x_{1}|\varnothing)\right)\) corresponding to \(N_{\mathrm{s}}\) trials of the underlying Bernoulli distribution \(p_{1}(x_{1}|\varnothing)\). In practice this can be done substantially faster than sampling \(N_{\mathrm{s}}\) individual Bernoulli random variables. The sampling produces two integer occurrence numbers \(n_{\mathrm{L}}\) and \(n_{\mathrm{R}}=N_{\mathrm{s}}-n_{\mathrm{L}}\) indicating how many times one has to proceed to the left and to the right child trees correspondingly. Then one applies the same trick to the second level of the sampling tree, and samples two random binomial variables \(B\left(n_{\mathrm{L}},p_{2}(x_{2}|0)\right)\) and \(B\left(n_{\mathrm{R}},p_{2}(x_{2}|1)\right)\) to figure out how many times each conditional distribution at the third level of the sampling tree should be sampled. This process is repeated recursively until one reaches the \(N\)-th level of the sampling tree. If for some node the sampled occurrence number is zero, it is discarded from further sampling, thus preventing exponential complexity growth.
As a result, the neural network is evaluated on only \(N_{\mathrm{unq}}\) configurations, as opposed to the direct sampling where the neural network is invoked \(N_{\mathrm{s}}\) times. This proved to be beneficial for the electronic structure calculations with highly peaked ground state wave functions. In particular, Ref. [21] reports emulating sampling statistics for \(N_{\mathrm{s}}\) as big as \(10^{12}\), which is many orders of magnitude larger than batch sizes typical for standard NQS calculations.
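A minimal recursive sketch of statistics sampling (ours, not the authors' implementation; `conditional_prob` is again a hypothetical ansatz call) could look as follows. A single binomial draw per visited node replaces \(N_{\mathrm{s}}\) individual Bernoulli draws, and subtrees receiving zero samples are never expanded.

```python
import numpy as np

def sample_statistics(conditional_prob, n_qubits, n_samples, seed=0):
    """Emulate N_s autoregressive samples; return {basis_vector: occurrence_count}."""
    rng = np.random.default_rng(seed)
    stats = {}

    def recurse(prefix, n_in):
        if n_in == 0:
            return                           # empty subtrees are discarded
        if len(prefix) == n_qubits:
            stats[prefix] = n_in             # reached a leaf: record n^(l)
            return
        p_one = conditional_prob(prefix)     # p_i(x_i = 1 | x_<i)
        n_right = rng.binomial(n_in, p_one)  # one binomial draw instead of n_in Bernoullis
        recurse(prefix + (0,), n_in - n_right)
        recurse(prefix + (1,), n_right)

    recurse((), n_samples)
    return stats

# toy usage: the number of unique leaves stays small even for N_s = 10**6
print(len(sample_statistics(lambda prefix: 0.3, n_qubits=4, n_samples=10**6)))
```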
## III Quantum number symmetries
### Symmetry-aware sampling
Consider a set \(\left\{\hat{S}_{m}\right\}_{m=1}^{N_{\mathrm{sym}}}\) of operators, which commute with the Hamiltonian and are diagonal in the computational basis, i.e., \(\forall m\ \hat{S}_{m}=\sum_{\mathbf{x}}s_{m}(\mathbf{x})\left|\mathbf{x}\right\rangle \left\langle\mathbf{x}\right|\); here \(s_{m}(\mathbf{x})\in\mathbb{C}\) is the eigenvalue of \(\hat{S}_{m}\) corresponding to the basis vector \(\left|\mathbf{x}\right\rangle\). We refer to such a set of operators as the set of _Hamiltonian quantum number symmetries_, and provide a few examples -- such as the particle number and total magnetisation -- in Appendix A. Since operators \(\hat{S}_{m}\) are diagonal in the computational basis, they also commute pairwise, and therefore any Hamiltonian eigenstate will be an eigenstate of _all_ symmetry operators with the associated ordered set of eigenvalues \(\mathbf{s}=\left(s_{1},s_{2},\ldots,s_{N_{\mathrm{sym}}}\right)\) specifying the symmetry sector of the Hamiltonian. An important consequence is that the decomposition of the ground state \(\left|\psi_{\mathrm{GS}}\right\rangle\) into the computational basis contains only the vectors with the same set of symmetry eigenvalues, which we denote \(\mathbf{s}_{\mathrm{GS}}\).
It is of benefit to incorporate the latter observation into VMC optimisation. The simplest approach would be to compute the vector of symmetry eigenvalues \(\mathbf{s}(\mathbf{x})\) for each generated sample and postselect samples in the correct symmetry sector. However, this wastes compute, since samples outside the correct symmetry sector are produced as well. During the initial iterations of the optimisation, the fraction of relevant samples can be extremely small, making this approach highly inefficient. Hence, we are interested in designing a sampling procedure which automatically produces only basis vectors from the correct symmetry sector. We call such sampling procedures _symmetry-aware_.
One can make the autoregressive sampling symmetry-aware by pruning the sampling tree on-the-fly -- that is, avoiding "unphysical" subtrees that have no leaves in the correct symmetry sector, as illustrated in Fig. 1B. In other words, a partially sampled vector \(\mathbf{x}_{<i}\) is discarded as soon as it becomes apparent that it has no physical continuations.
Bruteforce checking all possible leaves of a given \(\mathbf{x}_{<i}\) is exponentially costly. For some symmetries, there exist _ad hoc_ ways to circumvent this hurdle: e.g. for the particle number symmetry with the target eigenvalue \(n_{\mathrm{e}}\), the subtree \(\mathbf{x}_{<i}\) is unphysical if \(\sum_{j=1}^{i}x_{j}>n_{\mathrm{e}}\). Such techniques are however not readily generalisable. An additional complication is that, even if a simple rule existed for each individual symmetry, testing them one-by-one may miss a node that is unphysical because its left and right subtrees are rendered unphysical by different symmetries. Hence it is desirable to find a physicality check algorithm that (a) has polynomial complexity with respect to \(N\) and (b) checks all symmetries in combination.
### Physicality evaluation algorithm
We provide such an algorithm for an arbitrary number of symmetries as long as each of them satisfies the following two requirements:
1. _Local decomposability_. The eigenvalues of \(\hat{S}\) can be obtained as a composition of local eigenvalues calculated on individual qubits, i.e., \(s\left(\mathbf{x}\right)=\odot_{i=1}^{N}s_{i}(x_{i})\), where \(\odot\) is some binary composition operation. For example, in the case of the particle number symmetry, \(s\left(\mathbf{x}\right)=\sum_{i}x_{i}\), and therefore \(\forall i\ s_{i}(x)\equiv x\) and \(\odot\) is merely the addition operation. For symmetries with this property, we can define partial eigenvalues as \(s_{<i}\coloneqq s(\mathbf{x}_{<i})=\odot_{j=1}^{i}s_{j}(x_{j})\).
2. _Polynomially-sized spectrum_. The number of unique eigenvalues of \(\hat{S}\), either partial or not, is bounded from above by \(\mathcal{O}(\mathrm{poly}(N))\).
Common quantum number symmetries -- such as those discussed in Appendix A -- do satisfy both of these requirements. An important consequence of the local decomposability property is the following proposition.
**Proposition 1**.: _Consider two sampling subtrees defined by the partial basis vectors \(\mathbf{x}_{<i}\) and \(\mathbf{x}_{<i}^{\prime}\). If their vectors of partial eigenvalues are equal, i.e., \(\mathbf{s}_{<i}(\mathbf{x}_{<i})=\mathbf{s}_{<i}(\mathbf{x}_{<i}^{\prime})\), then the subtrees are either both physical or not._
Proof.: Suppose \(\mathbf{x}_{<i}\) is physical. This means that \(\exists\mathbf{x}_{\geq i}:\mathbf{s}(\mathbf{x}_{<i}\mathbf{x}_{\geq i})=\mathbf{s}_{<i}(\mathbf{x}_{<i})\odot\mathbf{s}_{\geq i}(\mathbf{x}_{\geq i})=\mathbf{s}_{\mathrm{GS}}\). In other words,
the subtree \(\mathbf{x}_{<i}\) has a physical leaf \(\mathbf{x}_{<i}\mathbf{x}_{\geq i}\). But then the same continuation path \(\mathbf{x}_{\geq i}\) will lead to a physical leaf of \(\mathbf{x}^{\prime}_{<i}\). Indeed, thanks to the local decomposability \(\mathbf{s}(\mathbf{x}^{\prime}_{<i}\mathbf{x}_{\geq i})=\mathbf{s}_{<i}( \mathbf{x}^{\prime}_{<i})\odot\mathbf{s}_{\geq i}(\mathbf{x}_{\geq i})=\mathbf{s }_{<i}(\mathbf{x}_{<i})\odot\mathbf{s}_{\geq i}(\mathbf{x}_{\geq i})=\mathbf{s }_{\text{GS}}\). Hence \(\mathbf{x}^{\prime}_{<i}\) is physical too.
Therefore for a given \(\mathbf{x}_{<i}\) it is enough to enquire only about the physicality of the corresponding \(\mathbf{s}_{<i}\). Let us define a function IsPhys\((i;\mathbf{s}_{<i}=\mathbf{s}(\mathbf{x}_{<i}))\) which returns True if \(\mathbf{s}_{<i}\) is physical, and False otherwise. This function can be calculated recursively using the following considerations. First, a subtree is physical as soon as at least one of its child subtrees is physical. Second, the partial eigenvalue vectors for the left and right child subtrees are \(\mathbf{s}_{<i}\odot\mathbf{s}_{i}(0)\) and \(\mathbf{s}_{<i}\odot\mathbf{s}_{i}(1)\) correspondingly. Hence, IsPhys satisfies the following recurrent relation:
\[\text{IsPhys}(i;\mathbf{s}_{<i})=\text{OR}\left[\begin{array}{l}\text{IsPhys} (i+1;\mathbf{s}_{<i}\odot\mathbf{s}_{i}(0))\\ \text{IsPhys}\left(i+1;\mathbf{s}_{<i}\odot\mathbf{s}_{i}(1)\right).\end{array}\right. \tag{7}\]
The recursion should terminate at \(i=N+1\) with IsPhys\((N+1,\mathbf{s}_{<N+1})=\) True when \(\mathbf{s}_{<N+1}=\mathbf{s}_{\text{GS}}\) and False otherwise. We provide the corresponding pseudocode in Algorithm 1.
```
Input:  i — the index of the current sampling tree level; s_<i — a vector of partial eigenvalues.
Output: True if s_<i is physical, False otherwise.

function IsPhys(i; s_<i)
    if (i, s_<i) is not in Lookup then
        if i = N + 1 then
            if s_<i = s_GS then
                Lookup(i, s_<i) ← True
            else
                Lookup(i, s_<i) ← False
        else
            Lookup(i, s_<i) ← IsPhys(i + 1; s_<i ⊙ s_i(0)) OR IsPhys(i + 1; s_<i ⊙ s_i(1))
    return Lookup(i, s_<i)
```
**Algorithm 1** Physicality evaluation
It might seem that due to binary branching at each recursion level the runtime of the algorithm is exponential; however, one can resort to caching -- that is, store each calculated value of IsPhys\((\cdot)\) in a lookup table. Since the spectra of the symmetries considered are assumed polynomially-sized, at each level of the sampling tree we can have at most \(\mathcal{O}\left(\left(\text{poly}(N)\right)^{N_{\text{sym}}}\right)=\mathcal{O }\left(\text{poly}(N)\right)\) different \(\mathbf{s}_{<i}\), in contrast to the exponential number of subtrees. As a result, the domain of IsPhys\((\cdot)\) is polynomially-sized, and so is the runtime of the algorithm. Finally, let us note that the proposed algorithm allows targeting _any_ symmetry sector of the Hilbert space -- one just has to substitute \(\mathbf{s}_{\text{GS}}\) with the desired vector of eigenvalues \(\mathbf{s}_{\text{ref}}\) in the termination condition for IsPhys.
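For concreteness, here is a minimal Python sketch of Algorithm 1 with memoisation (our own illustration, restricted to symmetries whose composition operation \(\odot\) is ordinary addition, such as the particle number and spin projection of Appendix A; all names are ours).

```python
from functools import lru_cache

def make_is_phys(local_eigs, s_target, n_qubits):
    """Build IsPhys for additively composable symmetries.

    local_eigs[m][i][x] is the local eigenvalue s_i(x) of the m-th symmetry
    for qubit i (0-based) taking value x in {0, 1}; s_target is s_GS.
    """
    n_sym = len(local_eigs)

    @lru_cache(maxsize=None)          # plays the role of the Lookup table
    def is_phys(i, s_partial):
        if i == n_qubits + 1:
            return s_partial == s_target
        return any(
            is_phys(i + 1, tuple(s_partial[m] + local_eigs[m][i - 1][x]
                                 for m in range(n_sym)))
            for x in (0, 1))

    return lambda i, s_partial: is_phys(i, tuple(s_partial))

# toy usage: particle number symmetry (s_i(x) = x) on 4 qubits with n_e = 2
is_phys = make_is_phys([[[0, 1]] * 4], s_target=(2,), n_qubits=4)
print(is_phys(1, (0,)))   # True: the root has physical leaves
print(is_phys(5, (3,)))   # False: a full vector with 3 electrons is unphysical
```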
### Pruning strategies
Suppose \(n_{\text{in}}\) samples enter the node containing the local probability distribution \(p_{i}(x_{i}|\mathbf{x}_{<i})\). After statistics sampling, we obtain two numbers \(n_{\text{L}}\) and \(n_{\text{R}}\) (with \(n_{\text{L}}+n_{\text{R}}=n_{\text{in}}\)), which should be passed to the left and right child subtrees, respectively (Fig. 1C.i). Suppose, however, that IsPhys shows the left child subtree to be unphysical. One can think of two strategies to handle this situation. The first approach would be to pass \(n_{\text{R}}\) to the right subtree as planned, and pass no samples to the left subtree. We refer to such a strategy as Discard-Unphysical (DU, Fig. 1C.ii). Its shortcoming is loss of samples: the occurrence numbers of final samples \(n^{(l)},\ l\in\{1,\ldots,N_{\text{unq}}\}\) might not add up to \(N_{\text{s}}\).
An alternative strategy was adopted in Ref. [21] and Ref. [20]; we refer to it as Mask-Unphysical (MU, Fig. 1C.iii). This strategy prescribes passing all \(n_{\text{in}}\) "virtual" samples to the right subtree so that no samples are ever lost at any level of the sampling tree. However, this biases the sampling and the empirical probabilities \(\frac{n(\mathbf{x})}{N_{\text{s}}}\) do not correspond to the probability distribution \(|\psi(\mathbf{x})|^{2}\) yielded by the ansatz. To restore the correspondence, one has to modify the conditional wave functions too: if a subtree \(\mathbf{x}_{<i}0\) of the node \(\mathbf{x}_{<i}\) is unphysical, the conditional wave function must be manually modified (or _masked_) so that \(\psi_{i}^{\text{MU}}(0|\mathbf{x}_{<i})=0\) and \(\psi_{i}^{\text{MU}}(1|\mathbf{x}_{<i})=1\) regardless of their initial values. These modified amplitudes are then used in all subsequent calculations.
Unfortunately, masking might bring unforeseen side effects affecting the ansatz expressibility. For example, for a particle number symmetry the value of \(x_{N}\) is completely defined by the sampled values of \(\mathbf{x}_{<N}\), and therefore for _all_ \(\mathbf{x}_{<N}\) the conditional wave function \(\psi_{N}(x_{N}|\mathbf{x}_{<N})\) will be masked. As a result, the subnetwork encoding \(\psi_{N}(x_{N}|\mathbf{x}_{<N})\) is rendered useless. A possible remedy is to apply MU to the first \(N-d\) levels of the tree, and then use DU for the last \(d\) levels (where \(d\) is a hyperparameter). We denote such a family of strategies as MU-\(d\). The purpose of MU-\(d\) strategies is to maintain a higher level of variational freedom at the cost of sample loss in a controllable way.
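The difference between the strategies at a single tree node can be sketched as follows (a toy illustration of Fig. 1C, not the authors' code; all names are ours).

```python
import numpy as np

def split_counts(n_in, p_one, left_ok, right_ok, strategy="MU",
                 rng=np.random.default_rng(0)):
    """Distribute n_in samples between the x_i = 0 and x_i = 1 children.

    left_ok / right_ok report whether the respective subtrees are physical
    (as returned by IsPhys).  Returns (n_left, n_right).
    """
    if strategy == "MU":
        # Mask-Unphysical: force the amplitude of an unphysical child to zero
        # (and the other to one), so all n_in samples are passed on.
        if not right_ok:
            p_one = 0.0
        elif not left_ok:
            p_one = 1.0
    n_right = int(rng.binomial(n_in, p_one))
    n_left = n_in - n_right
    if strategy == "DU":
        # Discard-Unphysical: samples routed to an unphysical child are lost.
        if not left_ok:
            n_left = 0
        if not right_ok:
            n_right = 0
    return n_left, n_right

print(split_counts(1000, 0.4, left_ok=False, right_ok=True, strategy="MU"))  # (0, 1000)
print(split_counts(1000, 0.4, left_ok=False, right_ok=True, strategy="DU"))  # (0, ~400)
```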
## IV Results
### System description
The above discussion of symmetry-aware sampling is system agnostic: it applies to any system of interacting qubits. In this section, we showcase the advantages of our approach on the problem of molecular electronic structure calculation, since molecules usually possess multiple quantum number symmetries.
We work in the Born-Oppenheimer approximation and in the second quantisation approach, which means that the Hilbert space is spanned by \(N\) spin-orbitals, each of which can be occupied by one of \(n_{\mathrm{e}}\) electrons. The system wave function is then represented as a linear combination of Slater determinants (SDs) comprised of \(n_{\mathrm{e}}\) spin-orbitals out of \(N\). The system Hamiltonian is a conventional fermionic Hamiltonian including one- and two-body interactions:
\[\hat{H}_{\mathrm{SQ}}=\sum_{ij}h_{ij}\hat{a}_{i}^{\dagger}\hat{a}_{j}+\sum_{ ijkl}h_{ijkl}\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}\hat{a}_{k}\hat{a}_{l}. \tag{8}\]
We obtain the basis of molecular spin-orbitals with the Hartree-Fock (HF) procedure. In this case the mean-field solution (the HF state) is represented as a single Slater determinant, in which the first \(n_{\mathrm{e}}\) lowest energy orbitals are filled and the rest are empty [25]. We use the Jordan-Wigner encoding to map this fermionic system into qubit form. This preserves the occupation number interpretation, so the HF state is represented by the basis vector \(|\mathbf{x}_{\mathrm{HF}}\rangle\coloneqq|\underbrace{11\ldots 1}_{n_{\mathrm{e}}}\underbrace{0\ldots 0}_{N-n_{\mathrm{e}}}\rangle\).
#### Molecular symmetries
We consider three types of quantum number symmetries inherent to molecules. Two of them -- the total number of electrons \(\hat{n}_{\mathrm{e}}\) and their total spin projection \(\hat{S}_{z}\) -- were incorporated in previous research [21; 22] and are discussed in Appendix A. The third type are \(\mathbb{Z}_{2}\) symmetries encoding spatial symmetries of a molecule; they are considered for the first time in the context of NQS-based quantum chemistry calculations.
One usually accounts for spatial symmetries by considering irreducible representations (irreps) of the molecular point group, which is the group of spatial transformations preserving the nuclear positions. It is known that the ground state of a molecule belongs to one of the irreps of the largest abelian subgroup of the point group. At the same time, each Slater determinant belongs to one of the irreps too, and therefore only the SDs within the same irrep as the ground state contribute to the latter.
To include this selection rule into ANQS-based calculations, we build on the observation made by Setia _et al._[27]. It states that after the Jordan-Wigner transform every spatial symmetry in the largest abelian subgroup of the point group would translate into a Hamiltonian \(\mathbb{Z}_{2}\) symmetry. A \(\mathbb{Z}_{2}\) symmetry is an operator \(\hat{S}\) which commutes with \(\hat{H}\) and is an \(N\)-fold tensor product of either the identity or Pauli \(\hat{Z}\) matrices. For example, if \(\hat{H}=\hat{X}_{1}\hat{X}_{2}+0.5\hat{Z}_{1}\hat{Z}_{2}\), then \(\hat{Z}_{1}\hat{Z}_{2}\) is its \(\mathbb{Z}_{2}\) symmetry, while \(\hat{I}_{1}\hat{Z}_{2}\) and \(\hat{Z}_{1}\hat{I}_{2}\) are not. All \(\mathbb{Z}_{2}\) symmetries of a Hamiltonian could be found in an automated way using the algorithm outlined by Bravyi _et al._[28].
An important consequence is that a basis vector belongs to the ground state irrep only if for each \(\mathbb{Z}_{2}\) symmetry it has the same eigenvalue as the ground state.
Figure 1: Symmetry-aware sampling. (A) One starts with an ordinary autoregressive sampling tree. (B) Then, one feeds the information about quantum number symmetries and their reference eigenvalues into the IsPhys algorithm, which identifies the unphysical subtrees (shown in red), which are pruned from further sampling. The example is for the particle number symmetry with \(n_{\mathrm{e}}=2\). (C) Possible strategies for pruning unphysical subtrees.
However, at the start of calculations the ground state and its symmetry sector are unknown. Therefore, to fix the symmetry sector we assume that the HF Slater determinant will necessarily contribute to the ground state, which is a valid premise for a vast number of molecules. Consequently, for the purpose of running the IsPhys algorithm, we fix \(\mathbf{s}_{\text{GS}}\) to be equal to \(\mathbf{s}(\mathbf{x}_{\text{HF}})\).
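As a toy illustration (the helper names and the qubit ordering convention are ours, not the paper's), the target sector \(\mathbf{s}_{\text{GS}}=\mathbf{s}(\mathbf{x}_{\text{HF}})\) can be fixed by evaluating each symmetry on the Hartree-Fock bitstring.

```python
def hf_bitstring(n_qubits, n_electrons):
    """HF determinant in occupation-number form: n_e ones followed by zeros."""
    return tuple([1] * n_electrons + [0] * (n_qubits - n_electrons))

def particle_number(x):
    return sum(x)

def spin_projection(x):
    # assumes alternating spin-up / spin-down ordering of spin-orbitals
    return (sum(x[0::2]) - sum(x[1::2])) / 2

def z2_eigenvalue(x, z_positions):
    # eigenvalue of a tensor product of Pauli-Z factors acting on `z_positions`
    return (-1) ** sum(x[i] for i in z_positions)

x_hf = hf_bitstring(n_qubits=12, n_electrons=4)          # LiH from Table 1
s_gs = (particle_number(x_hf), spin_projection(x_hf),
        z2_eigenvalue(x_hf, z_positions=[0, 2, 4, 6]))   # hypothetical Z-string
print(x_hf, s_gs)
```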
### Numerics
#### iv.2.1 General results
In the first set of experiments we test our ANQS on a large set of molecules. The reference information for the molecules studied, including the number of \(\mathbb{Z}_{2}\) symmetries and the corresponding computational space sizes is presented in Table 1. The \(\mathbb{Z}_{2}\) symmetries were calculated for each qubit Hamiltonian using the implementation of the Bravyi _et al._ algorithm [28] provided by the PennyLane software library [29; 30]. For each molecule we employ five randomly initialised instances of ANQS corresponding to different seeds of the underlying pseudorandom number generator. We test both the MU-2 and DU pruning strategies because our initial ablation studies (Appendix C) showed their performance to be comparable and substantially surpassing that of MU-0.
Each optimisation lasts for \(3\times 10^{4}\) iterations. We vary the sample batch size \(N_{s}\) across the simulation: to reduce the computational burden of the first iterations when many basis vectors might be sampled, we start with \(N_{\text{s}}=10^{5}\); we increase it afterwards in a stepwise manner several times during the optimisation until it reaches the final value of \(10^{8}\). After the gradient calculation, the ANQS parameters are updated with the ADAM optimiser in a default configuration of hyperparameters. More details and the description of the ansatz architecture are provided in Appendix B.
The results are given in Fig. 2, where we measure the difference between the minimum variational energy achieved in the course of optimisation and the full configuration interaction energy \(E_{\text{FCI}}\), which is a quantum chemistry parlance for exact diagonalisation. We also plot reference energy difference values obtained with such conventional quantum chemistry methods as CISD, CCSD and CCSD(T) (see Ref. [25] for an overview).
We compare our results with the minimum energy errors obtained in the existing works [21] and [22] (which did not account for the \(\mathbb{Z}_{2}\) symmetries and used the MU strategy). Another difference is that both of the above references aimed to keep the final \(N_{\text{unq}}\) within some range and thus the initial batch size \(N_{s}\) was adaptively chosen at each iteration after multiple possible resamplings. Their motivation was to sample not too few unique basis vectors -- in which case the gradient becomes too noisy -- but also not too many, since the local energy calculations might become prohibitively expensive. In our experience, there is no need for adaptive batch sizes so long as \(N_{\text{s}}\) is large enough.
For all molecules, our method with both pruning strategies demonstrates median errors below the chemical accuracy of 1.6 mHa -- this is a milestone unreported so far. We also observe that the largest gain in accuracy (up to an order of magnitude compared with previous research) is achieved for molecules with higher spatial symmetry such as LiCl, N\({}_{2}\), LiF, Li\({}_{2}\)O and C\({}_{2}\). We note that our method appears to underperform with respect to Barrett _et al._ on LiCl molecule. However, the experimental point shown for Ref. [21] in Fig. 2 corresponds to the best performance amongst multiple seeds; the average energy of the Barrett _et al._ optimisation on LiCl is above the median value obtained in this work.
| Molecule | \(N\) | \(n_{\text{e}}\) | Number of \(\mathbb{Z}_{2}\) symmetries | Computational space size (SDs), with \(\mathbb{Z}_{2}\) | Computational space size (SDs), without \(\mathbb{Z}_{2}\) |
| --- | --- | --- | --- | --- | --- |
| LiH | 12 | 4 | 4 | 69 | 225 |
| H\({}_{2}\)O | 14 | 10 | 3 | 261 | 441 |
| NH\({}_{3}\)\({}^{a}\) | 16 | 10 | 2 (3) | 3,136 (1,576) | 3,136 |
| CH\({}_{4}\)\({}^{a}\) | 18 | 10 | 2 (4) | 15,876 (4,076) | 15,876 |
| N\({}_{2}\) | 20 | 14 | 5 | 1,824 | 14,400 |
| C\({}_{2}\) | 20 | 12 | 5 | 5,612 | 44,100 |
| LiF | 20 | 12 | 4 | 11,124 | 44,100 |
| PH\({}_{3}\)\({}^{a}\) | 24 | 18 | 2 (3) | 48,400 (24,202) | 48,400 |
| LiCl | 28 | 20 | 4 | 250,581 | 1,002,001 |
| Li\({}_{2}\)O\({}^{a}\) | 30 | 14 | 4 (5) | 10,355,569 (5,179,569) | 41,409,225 |
| NaCl | 36 | 28 | 4 | 2,341,648 | 9,363,600 |

Table 1: Reference information for studied molecules. The “asymmetric geometry” superscript (\({}^{a}\)) indicates molecules which exist in geometries with a larger number of spatial symmetries compared to the geometries provided by the PubChem database [26] and studied in Ref. [21] and [22]. For such molecules we provide in parentheses the figures corresponding to the most symmetric geometries.
#### iv.2.2 Time to CCSD
While the final accuracy achieved during the variational optimisation is deemed to be the primary figure of merit, it is also important to analyse the computational cost of the method. To that end, we follow Ref. [22] and extract two additional metrics from the experiments held in the previous section -- the time required to achieve the CCSD level of accuracy and the total time spent on \(3\times 10^{4}\) iterations; we present the results in Fig. 3 together with the numbers from Ref. [22].
As can be seen, the per-iteration runtime of our method is similar to that of previous approaches. However, for highly symmetric N\({}_{2}\) and C\({}_{2}\) molecules our method converges to the desired level of accuracy much faster than the existing ones: we observe a speedup of more than an order of magnitude. Another example is Li\({}_{2}\)O: the ANQS of Ref. [21] required 45.6 hours to achieve the accuracy of \(1.8\cdot 10^{-3}\) Ha, which is above the chemical accuracy (not shown on the plot, we quote the figure given in Ref. [21]). In contrast, our method required 5.1 hours on average to reach the chemical accuracy.
#### iv.2.3 Loss of samples
In the experiments described so far, both MU-2 and DU strategies performed on par. To reveal the difference between them, we investigate how the total number of samples and the number of unique samples produced per iteration change in the course of optimisation.

Figure 3: Comparison of the computational performance of the proposed symmetry-aware ANQS-based optimisation and the existing ANQS variants. Data points labelled “CCSD” correspond to the time required to achieve the CCSD level of accuracy; data points labelled “30K” correspond to the time spent on \(3\times 10^{4}\) variational optimisation iterations.

Figure 2: Comparison of the variational energies achieved by our symmetry-aware ANQS-based optimisation and the existing ANQS variants studied in the literature [21; 22] (denoted as “NADE” and “MADE” respectively). Ref. [21] and Ref. [22] studied the C\({}_{2}\) molecule at two different geometries; we present results for both of them. The black bold line on the box bodies corresponds to the median value and whiskers stretch from the minimum to maximum value in the distribution of results. The shadowed area spans energies below the chemical accuracy benchmark. For better visibility we plot the reference energies of different methods as continuous curves, even though they belong to different molecules and are not related. For C\({}_{2}\) (MADE), LiF, LiCl and Li\({}_{2}\)O molecules CCSD(T) energies are below the corresponding FCI energies, and therefore the CCSD(T) curve “dips” due to logarithmic scale of the energy error axis.
In Fig. 4 we take N\({}_{2}\) as a model molecule and show how a typical optimisation unfolds for both strategies. At the very start, the ANQS is equally likely to sample any physical basis vector, and therefore the first iterations feature large \(N_{\text{unq}}\). In addition, a substantial number of samples is lost by both strategies since the probability mass assigned to the correct symmetry sector is roughly equal to the fraction of the unmasked Hilbert space it occupies. Yet, the ansatz quickly learns the peaked structure of the molecular wave function, and as the variational energy passes the Hartree-Fock reference value (which can be achieved with only one basis vector contributing to the state), \(N_{\text{unq}}\) reaches a minimum. At this stage, a large portion of the probability mass is located inside the correct symmetry sector, and therefore few samples are lost. Finally, as the optimisation proceeds, \(N_{\text{unq}}\) gradually increases, which reflects how the ANQS seeks to decrease the energy by adding more and more basis vectors to the quantum state.
While both strategies perform qualitatively similarly, there is a quantitative difference between them: the DU strategy is more likely to lose samples, whether unique or not -- sometimes it produces as few unique samples as one. One might not consider this a major drawback: neither the total number of samples nor the number of unique samples is a primary figure of merit for the variational optimisation; in the end, DU achieves lower variational energies on the N\({}_{2}\) molecule than MU-2. However, we see this as a disadvantage of _practical_ importance: for bigger molecules the DU strategy is more likely to produce no samples in the correct symmetry sector at early stages of optimisation, and thus stall the whole process. Even though this can be mitigated by carefully scheduling \(N_{\text{s}}\), we believe this puts MU-2 forward as a more robust and practical pruning strategy.
## V Conclusion
We proposed a systematic approach to include an arbitrary set of quantum number symmetries into ANQS-based optimisation. We used it to carry out electronic structure calculations on a set of molecules and achieved previously unattainable variational energies with an order of magnitude speedup. In addition, we showed that \(\mathbb{Z}_{2}\) symmetries initially conceived in the context of variational quantum algorithms are also beneficial in the context of NQS. They allow one to reduce the computational space of the problem to the level handled by conventional quantum chemistry methods.
Now that both space [31] and quantum number symmetries are in the toolbox of ANQS-based optimisation, it would be interesting to see how much further it is possible to push the latter. In that regard, one might follow the ideas described by Sharir _et al._[23] to reduce the computational burden of the local energy calculations, which remained the bottleneck in our experiments.
## Acknowledgements
AM thanks Davide Castaldo, Matija Medvidovic, Alain Delgado Gran and Soran Jahangiri for the fruitful discussions and kind responses given promptly to any request.
Figure 4: Comparison of the MU-2 and DU pruning strategies with respect to the total number of samples and the number of unique samples produced at every iteration. The solid lines represent the median values obtained during five runs of randomly initialised ANQS, while shaded regions span from minimum to maximum values. The main plots focus on the first 500 iterations when the variational energy passes the HF reference value. The insets show the performance of both strategies over the whole course of the optimisation. Overall, the MU-2 strategy loses fewer samples and produces more unique samples at each iteration.
2306.16284 | Cartesian institutions with evidence: Data and system modelling with
diagrammatic constraints and generalized sketches | Data constraints are fundamental for practical data modelling, and a
verifiable conformance of a data instance to a safety-critical constraint
(satisfaction relation) is a corner-stone of safety assurance. Diagrammatic
constraints are important as both a theoretical concepts and a practically
convenient device. The paper shows that basic formal constraint management can
well be developed within a finitely complete category (hence the reference to
Cartesianity in the title). In the data modelling context, objects of such a
category can be thought of as graphs, while their morphisms play two roles: of
data instances and (when being additionally labelled) of constraints.
Specifically, a generalized sketch $S$ consists of a graph $G_S$ and a set of
constraints $C_S$ declared over $G_S$, and appears as a pattern for typical
data schemas (in databases, XML, and UML class diagrams). Interoperability of
data modelling frameworks (and tools based on them) very much depends on the
laws regulating the transformation of satisfaction relations between data
instances and schemas when the schema graph changes: then constraints are
translated co- whereas instances contra-variantly. Investigation of this
transformation pattern is the main mathematical subject of the paper | Zinovy Diskin | 2023-06-27T17:20:45Z | http://arxiv.org/abs/2306.16284v1 | Cartesian institutions with evidence: Data and system modelling with diagrammatic constraints and generalized sketches
###### Abstract
Data constraints are fundamental for practical data modelling, and a verifiable conformance of a data instance to a safety-critical constraint (satisfaction relation) is a corner-stone of safety assurance. Diagrammatic constraints are important as both a theoretical concepts and a practically convenient device. The paper shows that basic formal constraint management can well be developed within a finitely complete category (hence the reference to Cartesianity in the title). In the data modelling context, objects of such a category can be thought of as graphs, while their morphisms play two roles: of data instances and (when being additionally labelled) of constraints. Specifically, a generalized sketch \(S\) consists of a graph \(G_{S}\) and a set of constraints \(C_{S}\) declared over \(G_{S}\), and appears as a pattern for typical data schemas (in databases, XML, and UML class diagrams). Interoperability of data modelling frameworks (and tools based on them) very much depends on the laws regulating the transformation of satisfaction relations between data instances and schemas when the schema graph changes: then constraints are translated co- whereas instances contra-variantly. Investigation of this transformation pattern is the main mathematical subject of the paper.
## 1 Introduction
Constraints are fundamental for data modelling: they keep data integrity and ensure safety. If \(D\) is a data instance over schema \(S\), then adding a _constraint_\(c\) to \(S\) classifies instances into _valid_ (and then we write \(D\models c\)) or (otherwise) invalid. Many constraint specification languages were developed, e.g., FOL and its fragments, database dependencies, the OCL (Object Constraint Language) widely used in the UML/EML software development ecosystem [35], or the diagrammatic language of lifting constraints [33] popular within the ACT community. These languages are successfully employed within their own ecosystems, but create severe interpretability problems when used in a heterogeneous environment [28, 19]. Indeed, any systematic approach to model management would be centred around the notion of model mapping and traceability (cf. [7, 32, 14]), but what is a mapping between a full-fledged UML class diagram and a full-fledged relational schema? Unification via XML allows solving the problem for only simple constraints and does not help for complex constraints modelling complex requirements often appearing in system engineering. We need a unifying abstract framework for data and system modelling encompassing different schema specification languages including complex constraints.
Developing such a framework can be traced back to so-called _abstract model theory_[6] and, more recently, Goguen & Burstall's _institutions_[18]. The latter are practically a bare-bone framework for describing translation of a binary satisfaction relation (further called _Sat_), \(\models_{\Sigma}\subseteq Mod_{\Sigma}\times Sen_{\Sigma}\), between sets \(Mod_{\Sigma}\) of \(\Sigma\)_-models_ and \(Sen_{\Sigma}\) of \(\Sigma\)_-sentences_ (i.e., constraints), when the logical signature \(\Sigma\) (of, say, relation and function symbols) changes. If such changes are specified by a signature morphism \(f\colon\Sigma\to\Sigma^{\prime}\), then the main and, in fact, the only substantial postulate of the institution theory states that
Sat is preserved in a twisted way: for models with a function \(f^{*}\colon Mod_{\Sigma^{\prime}}\to Mod_{\Sigma}\) in the direction opposite to \(f\), for sentences with \(f_{*}\colon Sen_{\Sigma}\to Sen_{\Sigma^{\prime}}\) in the same direction, and for all \(m^{\prime}\in Mod_{\Sigma^{\prime}}\) and all \(\phi\in Sen_{\Sigma}\), the _twisted Sat condition_
\[f^{*}(m^{\prime})\models_{\Sigma}\phi\text{ iff }m^{\prime}\models_{\Sigma^{ \prime}}f_{*}(\phi) \tag{1}\]
holds. Logicians refers to the Sat-axiom above as to 'The invariance of truth under change of notation'; it may also be termed technologically as 'The main law of Sat-interoperability'.
There is a major deficiency of the institution framework wrt. its application to software interoperability, where models become many-sorted data instances, and sentences are constraints. To wit: institutions ignore the _local_ nature of Sat appearing in practice: to find if a model \(m\) satisfies a constraint \(c\), normally a small part of \(m\) is to be examined and fixed if the constraint is violated. For industrial instances/models comprising thousands of elements over dozens of sorts, locality of constraints is their crucial feature (e.g., see [21] for how it can be used to make inter-model consistency checking more effective) and needs to be included into a principled framework. It is here where the _Diagram Constraint Logic (DCL)_ and _generalized sketches_ (further just _sketches_) presented in the paper come on the stage and place the constraints' locality at the very centre of the specification machinery, but otherwise strive to keep the framework as abstract as possible. We will briefly outline the main ideas.
In DCL, both models (instances) and sentences (constraints) are arrows in a given category \(\mathbb{G}\) with pullbacks, whose objects may be thought of as graphs to guide intuition (and we will call them graphs), but existence of pullbacks is the only assumption about \(\mathbb{G}\) we need. A _constraint_ over \(\mathbb{G}\) is a triple \((c,G_{c},\llbracket c\rrbracket)\) of a _constraint name_ \(c\), its _arity_ graph \(G_{c}\in\mathsf{Ob}\mathbb{G}\), and a category \(\llbracket c\rrbracket\hookrightarrow\mathbb{G}/G_{c}\) of _valid_ instances and their morphisms (typically, a full subcategory but it's not a must); we require \(\llbracket c\rrbracket\) to be closed under isomorphisms. Then any graph \(G\) determines a _Cartesian Sat_ \(\models_{G}\subset\mathbb{G}/G\times\mathbb{G}(G_{c},G)\) by claiming conformance \(t\models_{G}\mathsf{b}_{c}\) for an instance \(t\colon\cdot\to G\) and a constraint \(\mathsf{b}_{c}\colon G_{c}\to G\) iff pulling \(t\) back along \(\mathsf{b}_{c}\) results in \(t\mathbin{\restriction}_{\mathsf{b}_{c}}\in\mathsf{Ob}\llbracket c\rrbracket\). If \(G\) changes by a \(\mathbb{G}\)-morphism \(f\colon G\to G^{\prime}\), then instances/models change contravariantly via the pullback functor \(f^{*}\colon\mathbb{G}/G^{\prime}\to\mathbb{G}/G\), while constraints change covariantly via the post-composition functor \(f_{*}\colon\mathbb{G}(G_{c},G)\to\mathbb{G}(G_{c},G^{\prime})\). Now the PB lemma implies the Sat-axiom (1) for Cartesian \(\models_{G}\) with \(G\) ranging over \(\mathsf{Ob}\mathbb{G}\). A major result of the paper (Theorem 1 on p. 26) states that any constraint signature (i.e., a collection of constraints with their arities and valid instances as described above) gives rise to an institution with sound logical inference. Thus, the Cartesian Sat underlying the DCL is a well-defined logical framework, which is much more concrete than institutions but much more abstract than the constraint specification languages mentioned above. Indeed, in DCL, the semantics of a constraint is postulated to exist as a category \(\llbracket c\rrbracket\) of valid \(c\)-instances but its implementation is not specified. This fine tuning has already allowed for several successful applications of DCL and sketches in diagrammatic domain and data modelling [11, 9] and software engineering [30, 3, 29]; see also applications of sketches to an important Model-Driven Engineering (MDE) problem of model consistency [21, 33]. In this sense, the present paper provides a new mathematical underpinning for a methodology already employed in practice and, moreover, enriches the framework with several new features briefly outlined below.
The paper makes special efforts to make DCL closer to system engineering (SE) practice. In the latter, a central role is played by the conformance relation between products and requirements. They can be modelled by, resp., instances and constraints/sentences, which reduces product-requirement conformance to the logical Sat discussed above. However, it is normal that, at a given logical time moment, different requirements are modelled by the same constraint, and different products by the same instance. This means that we need a theory of _indexed_ Sat dealing with multisets rather than sets of logical constructs (e.g., a conjunctive theory/sketch becomes a multiset of atomic constraints if different requirements are modelled by the same constraint). Yet another new feature of DCL developed in the paper is considering instance
morphisms as _spans_ in \(\mathbb{G}/G\), which model product updates, e.g., a span of monics \(\mathsf{t}\stackrel{{ i}}{{\leftarrow}}\cdot\stackrel{{ j}}{{\rightarrow}}\mathsf{t}^{\prime}\) models a deletion of \(\mathsf{t}\)'s part followed by addition of a new part resulting in instance \(\mathsf{t}^{\prime}\).
Most importantly, the DCL of this paper transits from binary Sat to ternary _eSat_, whose third component refers to a piece of evidence supporting the claim \(t\models_{G}\mathsf{b}_{c}\), and thus models a fundamental concept of safety assurance. In a bit more detail, as soon as we index valid \(c\)-instances by a mapping \(\mathsf{t}\colon\llbracket c\rrbracket\rightarrow\mathbb{G}/G_{c}\), for an instance \(t\in\mathbb{G}/G\) and constraint \(\mathsf{b}_{c}\in\mathbb{G}(G_{c},G)\), we can (and should) write \(t\models_{G}^{e}\mathsf{b}_{c}\) when \(t\mathbin{\restriction}_{\mathsf{b}_{c}}=\mathsf{t}_{e}\) for index \(e\in\llbracket c\rrbracket\). We will also discuss and motivate the categorical setting for eSat, in which the latter is a ternary span of functors, whose apex and feet are categories. This leads to the notion of an _institution with evidence (e-institution)_, which generalizes ordinary institutions for the eSat relation between models and sentences. Moreover, we will argue that our twisted eSat-axiom is closer to the intuition underlying Sat-interoperability than the usual binary Sat-axiom (see Remark 2 on p. 15). Main Theorem 1 mentioned above actually states that any constraint signature gives rise to a sound e-institution.
Our plan for the rest of the paper is as follows. Section 2 is a motivational background demonstrating the importance, and possibly intricate nature, of constraints appearing in practice. Sect.3 provides a mild introduction to sketches to give the reader a basic intuition for the main concepts. Sect.4 presents a formal theory of twisted morphisms, including a novel framework of e-institutions, and Sect.5 exhibits the formal framework of Cartesian institutions. Sect.6 defines sketches and demonstrates that the sketch framework can be seen as a correct implementation of a very general specification framework for data and system modelling (Theorem 2 and its discussion). In Sect.7 related work is briefly discussed and Sect.8 concludes. Some auxiliary material is placed in the Appendix.
###### Contents
* 1 Introduction
* 2 Example: Constraints are essential
* 2.1 A sample database schema.
* 2.2 Graph instances over the schema.
* 2.3 Schema instances.
* 3 Diagram constraint logic (DCL) and sketches: An introduction
* 3.1 Possible instances in DCL
* 3.2 Abstract diagrammatic constraints and their implementation
* 3.3 DCL and sketches: an outline
* 4 Institutions with evidence
* 4.1 Background: Why evidential institutions
* 4.1.1 Homogeneous Sat: Model updates and evidence
* 4.1.2 Homogeneous Sat: Sentence dependencies and evidence
* 4.1.3 Heterogeneous Sat: a signature changes along a signature morphism \(f\colon\Sigma\rightarrow\Sigma^{\prime}\)
* 4.2 Abstract institutions without elements
* 4.2.1 Spans
* 4.2.2 eSat spans
* 4.2.3 Twisted morphisms of eSats
* 4.3 Concrete institutions with elements
* 4.3.1 The eSats
* 5 Diagram constraint logic (DCL) as a Cartesian institution
* 5.1 Cartesian Interoperability 1: Basics
* 5.2 Cartesian Interoperability 2: Slice categories with delta morphisms
* 5.3 Constraint signatures and constraints
* 5.4 Cartesian interoperability 3: Slice categories with labelled objects
* 6 Generalized sketches
* 6.1 Basic notions and constructs
* 6.2 Sketches at work: Discussion
* 6.2.1 A general stage
* 6.2.2 Delta lenses and sketches
* 7 Related Work
* 8 Conclusions
* A Injectivity primer
* B Arrow categories
* B.1 Several facts about arrow categories
b) Set \(C_{S}\) consists of multiple constraint declarations, each of which has its scope in graph \(G_{S}\) shown with red dotted edges. For example, the left constraint \(\Rightarrow\) is declared for arrows has\({}_{\sf dr}\) and has. Numerical constraints (so-called _multiplicities_) placed near arrow heads are declared for those arrows (e.g., [1] for arrow 'of' or [1..4,6] for arrow has) and their scope is clear without scope lines (but, of course, they are recorded in the tool storing the schema). These constraints say that a vehicle may have 1, 2, or 4 driving wheels amongst (due to constraint \(\Rightarrow\)) its 1..4 or 6 wheels (e.g., a 3-wheel motorcycle may have 1 or 2 driving wheels). The scope of constraint \(c_{\sf wh}\) includes three associations and two wheel attributes.
There are several other constraints in set \(C_{S}\), whose presence in the schema is not explicit in the visual diagram in Fig. 1 but can be restored based on syntactical conventions -- we will discuss them later.
### Graph instances over the schema.
Graph \(G_{S}\) is a template for data populating the database, which we call _data instances_ over \(G_{S}\) as they change with time. A fragment of a possible instance is shown in a cloud window on the right side of the figure. It shows objects \(d\), \(V\), \(L\) of types Driver, _etc_ specified after colons, and values "Bob" _etc_ shown in ovals, whose type is implicit (String for Bob, Date for 01/01/95, _etc_).
For an instance \(D\), the database should store sets \([\![\mbox{Vehicle}]\!]^{D}\), \([\![\mbox{Driver}]\!]^{D}\)_etc_ of _objects_ for classes, and sets \([\![\mbox{drives}]\!]^{D}\), \([\![\mbox{of}]\!]^{D}\)_etc_ of directed _references_ for associations, e.g., a Driver-object \(d\in[\![\mbox{Driver}]\!]^{D}\) may have multiple or none (e.g., if driver \(d\) is on leave of absence) references to Vehicle-objects (note the grey multiplicity [0..1] attached to arrow drives). References can be implemented in different ways, but after all, any drives-reference in \(D\) is a special object that has exactly one object in \([\![\mbox{Driver}]\!]^{D}\) as its _source_ (which, in the UML speak, _owns_ the reference), and exactly one object in \([\![\mbox{Vehicle}]\!]^{D}\) as its _target_. We can thus model the set of drives-references in \(D\) by a span
\[[\![\mbox{Driver}]\!]^{D}\longleftarrow[\![\mbox{drives}]\!]^{D} \longrightarrow[\![\mbox{Vehicle}]\!]^{D}\]
whose unnamed legs are the source and the target functions. In general, reference spans may not be jointly monic (jm). Consider, e.g., an instance fragment in the cloud window on the right, which shows two reference links \(c,c^{\prime}\) of type covers between the same pair of objects. This may happen because the fact of covering a vehicle type by a license is regulated by normative documents, and it happened that two
Figure 1: Sample data schema \(S\) and a fragment (in the rightmost cloud) of its instance
such documents establish that license \(L\) covers the vehicle type \(V\). In a finer model, we would equip covers-references with references to such documents, but this would change the structure of both the schema and the instance graphs (we would need arrows from arrows to nodes), and it is often reasonable to keep graphs simple but admit multiple links of the same type between the same pair of objects. The UML does allow interpreting associations by non-jm spans, but in cases when the span is required to be jm, the association is labelled by a keyword _unique_ (see a detailed discussion in [26]). For our schema in Fig. 1, we may adopt the default assumption that if an association has a multiplicity declared for it, then the span is assumed to be jm.
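To make the span-based reading of associations concrete, the following Python sketch (purely illustrative; the class and field names are ours and not part of any modelling tool discussed here) stores an association's extension as a span and tests whether it is jointly monic:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Span:
    """Extension of an association: a set of link identifiers together with
    source and target functions into the participating object sets."""
    links: frozenset   # link identifiers, e.g. {"c", "c2"}
    src: dict          # link -> source object
    tgt: dict          # link -> target object

    def is_jointly_monic(self) -> bool:
        # jm means no two distinct links share the same (source, target) pair
        pairs = [(self.src[l], self.tgt[l]) for l in self.links]
        return len(pairs) == len(set(pairs))

# Two covers-links between the same license L and vehicle type V,
# as in the instance fragment of Fig. 1: the span is not jm.
covers = Span(frozenset({"c", "c2"}), {"c": "L", "c2": "L"}, {"c": "V", "c2": "V"})
print(covers.is_jointly_monic())   # False
```

The jm test simply asks whether a link is determined by its end objects, which is exactly the condition violated by the two covers-links above.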
In contrast to classes, sets of literal values \(\llbracket\mathsf{String}\rrbracket\), \(\llbracket\mathsf{Date}\rrbracket\)_etc_ are defined by their types (\(\mathsf{String}\), \(\mathsf{Date}\), _etc_) rather than stored in the database (i.e., are defined intensionally rather than extensionally). Attributes are thus implemented differently, but we can still model them by spans, e.g., \(\llbracket\mathsf{name}\rrbracket^{D}:\llbracket\mathsf{Driver}\rrbracket^{D} \rightarrow\llbracket\mathsf{String}\rrbracket\) (and indeed, in a semi-structured data storage, e.g., XML-based, two different elements may have the same name and the same value).
Thus, an instance \(D\) can be seen as a graph morphism \(\llbracket\mathsf{.}\rrbracket^{D}\colon G_{S}\rightarrow|\mathsf{Span}|\) into the graph underlying the category of spans between sets. Equivalently, an instance \(D\) is a graph morphism \(t_{D}\colon G_{D}\to G_{S}\), where \(G_{D}\) is the graph of all objects and values involved in \(D\) as nodes, and all references and attributes involved in \(D\) as arrows (below we will collectively refer to them as _links_), and the typing mapping \(t_{D}\) maps each element in \(G_{D}\) to its type in \(G_{S}\). We will discuss this equivalence later in Section 3.1.
### Schema instances.
Not every \(G_{S}\)-instance as considered above is a _valid schema instance_: the latter must also satisfy all constraints in set \(C_{S}\). We have already considered multiplicity constraints, whose semantics is clear. We also assume that the absence of an arrow's multiplicity means constraint [1..*] for that arrow by default; a genuine absence of any multiplicity constraint is then to be explicated by a grey pseudo-constraint note [0..*]. Similarly, we assume all attributes have multiplicity [2] by default, whereas any other multiplicity is to be shown explicitly, including the grey no-constraint note. Constraint [key] over Driver's attributes states that each driver is uniquely identified by her name and birthdate, and an instance violating this condition is to be considered invalid.
Constraint \([\Rightarrow]\) formalizes an obvious property of the Vehicles-Wheels ontology that driving wheels of a vehicle are to be amongst its wheels, and as both spans are jm, we have subsetting \(\llbracket\mathsf{has_{dr}}\rrbracket^{D}\subseteq\llbracket\mathsf{has}\rrbracket^{D}\) to hold for any instance \(D\). Similarly, constraint \([\Rightarrow_{4}]\), which declares subsetting \(\llbracket\mathsf{drives}\rrbracket^{D}\subseteq\llbracket\mathsf{ledBy}\rrbracket^{D}\cdot\llbracket\mathsf{covers}\rrbracket^{D}\), formalizes an important safety requirement that a driver can drive a vehicle of type \(V\) only if she is licenced to drive \(V\)-type vehicles.2 If the subsetting above holds for all \(d\in\llbracket\mathsf{Driver}\rrbracket^{D}\), we say that instance \(D\) satisfies the constraint and write \(D\models[\Rightarrow_{4}]\).
Footnote 2: Of course, it’d be better to declare binary \([\Rightarrow]\) for composed associations introduced directly into the schema, but it needs having the query machinery in our schema formalism, which is beyond the paper’s scope (but see [15]) and we thus hide composition inside the constraint \([\Rightarrow_{4}]\).
Finally, we discuss the semantics of constraint \(c_{\mathsf{wh}}\). Suppose there is a safety requirement \(R\) that demands that all wheels of a vehicle satisfy certain conditions depending on the vehicle type and the wheels' parameters encoded in their code-string. We can specify this by setting an admissible range \([R]_{V}\) (depending on the vehicle type \(V\)) for some complex vehicle attribute \(\mathsf{whData}\) so that a vehicle conforms to \(R\) if the complex value of its \(\mathsf{whData}\)-attribute is within the range, i.e., \(\llbracket\mathsf{whData}\rrbracket\in[R]_{V}\). More formally, if \(D\) is an instance of our schema, then we consider \(D\models c_{\mathsf{wh}}\) iff for any vehicle \(v\in\llbracket\mathsf{Vehicle}\rrbracket^{D}\), we have \(v.\llbracket\mathsf{whData}\rrbracket^{D}\in[R]_{V}\) where \(V=v.\llbracket\mathsf{of}\rrbracket^{D}\). Clearly, we can rewrite this as \(D\models c_{\mathsf{wh}}\) iff \(v\models c_{\mathsf{wh}}^{\prime}\) for all \(v\in\)
\(\llbracket\mathsf{Vehicle}\rrbracket^{D}\), and \(v\models c^{\prime}_{\mathsf{wh}}\) iff the data collection \(\left((w.\llbracket\mathsf{code}\rrbracket^{D})_{w\in v.\llbracket\mathsf{has}\rrbracket^{D}},\,(w.\llbracket\mathsf{code}\rrbracket^{D})_{w\in v.\llbracket\mathsf{has_{dr}}\rrbracket^{D}},\,v.\llbracket\mathsf{of}\rrbracket^{D}\right)\) is within some predefined range \(\llbracket c^{\prime}_{\mathsf{wh}}\rrbracket\). The latter can be specified by a table in some normative document (or a complex formula if the description of wheel wear is somehow formalized), having which the constraint monitoring system (CMS) can provide a definitive answer to whether vehicle \(v\) satisfies \(c^{\prime}_{\mathsf{wh}}\) or not. In principle, it is possible that the CMS would ask a human expert (e.g., if the normative document suggests it for \(v\)'s configuration of values in the table). Irrespective of the implementation, the CMS provides a certain Boolean value for the claim \(D\models c_{\mathsf{wh}}\), and optionally, a reference to a normative document or expert supporting the claim. In fact, for safety-critical constraints such a reference is a must and is usually called (a piece of) _evidence_ supporting the claim. Then conformance becomes a ternary relation and we should write \(D\models^{e}c\), where \(e\) refers to an object providing evidence.
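As an illustration of how such checks might be mechanized, here is a small Python sketch (our own toy encoding of an instance as dictionaries of typed links; the predicate `in_range` stands in for the normative table or expert decision and is not part of the formalism):

```python
def link_pairs(instance, assoc):
    """Set of (source, target) pairs interpreting association `assoc`."""
    return {(s, t) for (s, t) in instance[assoc]}

def satisfies_subsetting(instance, sub, sup):
    """D |= [=>]: every link of `sub` also occurs as a link of `sup`
    (adequate for jm spans, where a link is determined by its end objects)."""
    return link_pairs(instance, sub) <= link_pairs(instance, sup)

def satisfies_wheel_constraint(instance, codes, in_range):
    """D |= c_wh iff, for every vehicle v, the collection of code-strings of
    its wheels and driving wheels together with its type lies in the
    admissible range; `in_range` returns a (verdict, evidence) pair."""
    verdicts = {}
    for v, vtype in instance["of"]:
        wheels = tuple(sorted(codes[w] for (x, w) in instance["has"] if x == v))
        dr_wheels = tuple(sorted(codes[w] for (x, w) in instance["has_dr"] if x == v))
        verdicts[v] = in_range((wheels, dr_wheels, vtype))
    ok = all(verdict for verdict, _ in verdicts.values())
    return ok, verdicts

# Toy instance: a 3-wheel motorcycle m with one driving wheel.
D = {"of": [("m", "Motorcycle")],
     "has": [("m", "w1"), ("m", "w2"), ("m", "w3")],
     "has_dr": [("m", "w1")]}
codes = {"w1": "front-120", "w2": "rear-130", "w3": "rear-130"}
print(satisfies_subsetting(D, "has_dr", "has"))                        # True
print(satisfies_wheel_constraint(D, codes, lambda data: (True, "doc-42")))
```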
To summarize, each constraint \(c\in C_{S}\) has its scope graph \(G_{c}\subset G_{S}\), and checking whether \(D\models c\) only depends on the corresponding part \(D\!\restriction_{G_{c}}\) of the instance over \(G_{c}\):
\[D\models_{G_{S}}c\ \ \text{iff (by definition) }D\!\restriction_{G_{c}}\models_{G_{c}}c,\text{ i.e., }D\!\restriction_{G_{c}}\in\llbracket c\rrbracket\]
where the subindex near \(\models\) points to the scope of conformance checking. These considerations will be made accurate in the next section.
## 3 Diagram constraint logic (DCL) and sketches: An introduction
The main ingredients of a semantically-oriented logical framework are _possibly valid_ schema instances (or _pre_instances), constraints, and the satisfaction relation (Sat) between them, which defines a subclass of _valid_ (indeed) instances amongst preinstances. In this section, we will discuss and define these ingredients for our Diagram Constraint Logic (DC-logic or DCL). We begin with preinstances in Section 3.1, and then proceed to constraints in Section 3.2. Specifically, the diversity of possible implementations of constraints' semantics will be discussed in some detail: understanding the difference between an abstract diagrammatic constraint and its concrete implementation in a given constraint specification language is, perhaps, a major stumbling block for understanding the generalized sketches idea and its applications. Finally, in Section 3.3, we define a pullback-based Sat between preinstances and constraints, and give the notion of a sketch and its valid instances.
### Possible instances in DCL
#### 3.1.1 Fibrational vs. indexed.
Let \(S=(G_{S},C_{S})\) be a schema as above. Its _possible instance_, or _pre-instance_, or an _instance over graph_ \(G_{S}\), is a graph morphism \(t\colon G_{t}\to G_{S}\), where \(G_{t}\) is the graph of all objects and values involved in the instance as nodes, and all references and attributes involved in the instance as arrows, and _typing_ \(t\) maps each element in \(G_{t}\) to its type in \(G_{S}\). This setting will be referred to as _fibred_ (or _fibrational_) semantics for graph \(G_{S}\). As multiple \(G_{t}\)-arrows between the same two nodes \(d,d^{\prime}\) in \(G_{t}\) can be typed by the same arrow \(a\colon N\to N^{\prime}\) in \(G_{S}\), inverting \(t\) will map arrow \(a\) to a span \(\llbracket a\rrbracket^{t}\colon\llbracket N\rrbracket^{t}\to\llbracket N^{\prime}\rrbracket^{t}\) between sets, and gives us a graph morphism \(\llbracket.\rrbracket\colon G_{S}\to|\mathsf{Span}|\) into the graph underlying the category of spans between sets. Having such a morphism is referred to as an _indexed_ semantics for \(G_{S}\). The Grothendieck construction transforms an arbitrary graph morphism \(\llbracket.\rrbracket\colon G_{S}\to|\mathsf{Span}|\) into a global graph of elements supplied with a projection \(t_{\llbracket.\rrbracket}\) to \(G_{S}\) such that
\(\llbracket a\rrbracket^{t_{\llbracket.\rrbracket}}\simeq\llbracket a\rrbracket\) (in the vertical category of the double category \(\mathbf{Span}\) of functions and spans between sets) for any arrow \(a\) in \(G_{S}\).
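Operationally, the passage from the fibred to the indexed view amounts to grouping instance links by their types. A minimal Python sketch of this inversion (our own illustrative encoding of graphs as plain dictionaries, not part of the formalism):

```python
def invert_typing(schema_arrows, inst_arrows, typing):
    """Fibred -> indexed: for each schema arrow a, collect the span of
    instance links typed over a, with their source and target functions.

    schema_arrows: {arrow_name: (src_node, tgt_node)}
    inst_arrows:   {link_name: (src_elem, tgt_elem)}
    typing:        {link_name: arrow_name}   (the arrow component of t)
    """
    spans = {a: {"links": [], "src": {}, "tgt": {}} for a in schema_arrows}
    for link, (s, t) in inst_arrows.items():
        a = typing[link]
        spans[a]["links"].append(link)
        spans[a]["src"][link] = s
        spans[a]["tgt"][link] = t
    return spans

# Two drives-links typed over the same schema arrow yield a two-element span.
schema = {"drives": ("Driver", "Vehicle")}
links = {"d1": ("bob", "car7"), "d2": ("ann", "car7")}
typing = {"d1": "drives", "d2": "drives"}
print(invert_typing(schema, links, typing)["drives"]["links"])   # ['d1', 'd2']
```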
Extending this equivalence to morphisms between instances needs care and a move to categories and functors rather than graphs and graph morphisms. Graphs can always be replaced by the categories they freely generate, but the point is that in practice schemas are often non-free categories, i.e., are presentations of non-free categories given by their underlying graphs and path equations, i.e., commutativity constraints declared over the corresponding diagrams. For schemas as categories, we have the following general Grothendieck equivalence: \(\mathsf{Cat}/S\simeq\mathsf{Lax}(S,\mathcal{Span})\), where \(\mathcal{Span}\) is the bicategory of spans between sets and \(S\) is a category considered as a 2-discrete bicategory. Indeed, laxity does matter in data modelling: consider, e.g., adding an association \(\mathsf{drives}^{\prime}\) from \(\mathsf{Driver}\) to \(\mathsf{VehType}\) to the schema \(S\) of Fig. 1. In general, data links over this association can contain non-composed links, i.e., \(\llbracket\mathsf{drives}^{\prime}\rrbracket^{\prime}\subseteq\llbracket\mathsf{drives}\rrbracket^{\prime}\cdot\llbracket\mathsf{of}\rrbracket^{\prime}\) (composition is in \(\mathsf{Span}\)), if the database keeps information about driving vehicles of some type in the past but not currently.
Below we will keep our graph-based fibrational setting as the main one: (pre)instances are graph morphisms \(t\colon G_{t}\to G_{S}\) and instance morphisms are morphisms \(u\colon G_{t}\to G_{t^{\prime}}\) such that \(u.t^{\prime}=t\). However, we will freely use the inversion of \(t\) into \(\llbracket.\rrbracket^{t}\) for a given \(t\); specifically, it is convenient for specifying the semantics of many constraints.
|
2302.04377 | ER network heterogeneity guides diffusive transport and kinetics | The endoplasmic reticulum (ER) is a dynamic network of interconnected sheets
and tubules that orchestrates the distribution of lipids, ions, and proteins
throughout the cell. The impact of its complex, dynamic morphology on its
function as an intracellular transport hub remains poorly understood. To
elucidate the functional consequences of ER network structure and dynamics, we
quantify how the heterogeneity of the peripheral ER in COS7 cells affects
diffusive protein transport. In vivo imaging of photoactivated ER membrane
proteins demonstrates their non-uniform spreading to adjacent regions, in a
manner consistent with simulations of diffusing particles on extracted network
structures. Using a minimal network model to represent tubule rearrangements,
we demonstrate that ER network dynamics are sufficiently slow to have little
effect on diffusive protein transport. Furthermore, stochastic simulations
reveal a novel consequence of ER network heterogeneity: the existence of 'hot
spots' where sparse diffusive reactants are more likely to find one another.
Intriguingly, ER exit sites are disproportionately found in these highly
accessible regions. Combining in vivo experiments with analytic calculations,
quantitative image analysis, and computational modeling, we demonstrate how
structure guides diffusive protein transport and reactions in the ER. | Zubenelgenubi C. Scott, Katherine Koning, Molly Vanderwerp, Lorna Cohen, Laura M. Westrate, Elena F. Koslover | 2023-02-08T23:52:12Z | http://arxiv.org/abs/2302.04377v1 | # Heterogeneity in ER network structure guides diffusive transport and kinetics
###### Abstract
The endoplasmic reticulum (ER) is a dynamic network of interconnected sheets and tubules that orchestrates the distribution of lipids, ions, and proteins throughout the cell. The impact of its complex, dynamic morphology on its function as an intracellular transport hub remains poorly understood. To elucidate the functional consequences of ER network structure and dynamics, we quantify how the heterogeneity of the peripheral ER in COS7 cells affects diffusive protein transport. _In vivo_ imaging of photoactivated ER membrane proteins demonstrates their non-uniform spreading to adjacent regions, in a manner consistent with simulations of diffusing particles on extracted network structures. Using a minimal network model to represent tubule rearrangements, we demonstrate that ER network dynamics are sufficiently slow to have little effect on diffusive protein transport. Furthermore, stochastic simulations reveal a novel consequence of ER network heterogeneity: the existence of 'hot spots' where sparse diffusive reactants are more likely to find one another. Intriguingly, ER exit sites are disproportionately found in these highly accessible regions. Combining _in vivo_ experiments with analytic calculations, quantitative image analysis, and computational modeling, we demonstrate how structure guides diffusive protein transport and reactions in the ER.
The endoplasmic reticulum (ER) is the largest organelle in the eukaryotic cell, forming a web of interconnected hollow tubules and sheets. The ER is central to the transport of many cellular components such as lipids, ions, and proteins. However, the impact of the ER's complex network architecture on these transport processes remains opaque. Using live-cell experiments and simulations, we demonstrate that structural heterogeneity leads to non-uniform transport of proteins to nearby regions of the ER. As a consequence, certain regions of the network function as 'hot spots' where diffusive reactants are more likely to find each other. In live cells, sites of protein export are preferentially localized to these regions.
## Introduction
The eukaryotic cell contains a myriad of complex structures and compartments, each serving a specialized functional role. These include the tortuous interior of interconnected mitochondria [(1)], the stacked sheets [(2)] and tubular networks [(3, 4)] of the perinuclear and peripheral endoplasmic reticulum (ER), and the intertwined actin and microtubule networks of the cytoskeleton [(5, 6, 7)].
The morphology of these intracellular structures modulates the long-range active and passive transport of particles within them [(8)]. For example, the winding cristae of mitochondria slow down the long-range spread of particles [(1)], while spiral dislocations connecting ER sheets facilitate more rapid diffusive transport [(2, 9)].
A number of theoretical studies have demonstrated that the architecture of the domain can play an important role in determining reaction rates, a general phenomenon described as 'geometry-controlled kinetics' [(10)]. Emergent kinetic behaviors such as ultrasensitivity, bistability, and proofreading can be promoted or suppressed when enzyme and reactant diffusion is perturbed by crowding or by association with cellular structures [(11, 12, 13)]. Additional effects arise when the domain structure is dynamic, leading to time-varying effective diffusivity [(14, 15)] and broadening the distribution of search times [(16)].
One important class of intracellular geometries includes network structures, consisting of effectively one-dimensional edges connected at junction nodes. The transport properties of spatial networks [(17)] have been studied in a variety of contexts, from
porous media [(18)], to neuronal maintenance [(19; 20)]. For instance, particles diffusing through networks of tubes and containers have been shown to exhibit novel transport properties such as wavelike concentration fluctuations [(21)], as well as enhanced reaction rates [(22)].
The peripheral ER [(23; 24; 25)] and the mitochondrial networks of yeast and mammalian cells [(26; 27)] can both be described as spatial networks of interconnected hollow tubules. Studies of search kinetics in these networks have highlighted the importance of network connectivity, as described by the number of loops within the network [(26; 28)]. The connectivity can be biologically perturbed by mutations in ER morphogens [(3; 25)] and mitochondrial fusion and fission proteins [(26; 27)]. Prior studies have focused largely on global network architecture and transport properties, such as mean first-passage times averaged over the entire network. Cellular networks, however, are not homogeneous lattices, implying that a significant amount of variability should be expected in local transport to specific regions [(29)]. This variability has the potential to modulate encounter kinetics and dispersal to different regions of the cell.
The dynamic, interconnected web of the ER plays an important biological role as a delivery network for proteins, ions, and lipids throughout the cell [(30; 31; 32; 33)]. For example, phospholipids manufactured in the ER must diffuse through its membrane to contact sites with lipid droplets, mitochondria, and other organelles in order to be transferred to their eventual cellular destinations [(33; 34)]. Additionally, alteration of network structure through modulating expression of ER morphogens has been shown to affect the magnitude of calcium release, possibly due to altered transport through the ER lumen [(30)].
The ER also serves as a quality-control hub for newly synthesized proteins destined for secretion [(35; 36)]. These proteins are co- or post-translationally inserted into the ER lumen or membrane, interact with a variety of ER-resident chaperones to ensure correct folding, and exit the organelle after encountering an ER exit site (ERES). The ERES are punctate, persistent structures which package secretory cargo into COPII-coated vesicles for subsequent transport to the Golgi apparatus [(37; 38; 39; 40)]. While in the ER, the proteins engage in diffusive transport to encounter their chaperone binding partners and to find the exit sites. Furthermore, certain steps in the protein quality control pathways are thought to occur in specialized local regions of the ER [(41; 42)], necessitating transport of proteins into and out of these regions. Given that many of the biological functions of the ER rely on its ability to serve as a topologically isolated transport network throughout the cell, understanding how network architecture modulates particle transport and encounter kinetics forms an important problem in cell biology.
In this work, we focus on the spatial heterogeneity of the peripheral ER network in mammalian (COS7) cells. We demonstrate that structural variability across individual ER networks translates to heterogeneous diffusive accessibility for different ER regions within the same cell. Live-cell imaging data is used to show that locally photoactivated membrane proteins spread non-uniformly to nearby regions of the ER, in agreement with simulation results that predict preferential transport to better-connected regions of the network. The contribution of dynamic ER network rearrangements is quantified using a minimal network model [(23; 43)], and shown to have little effect on membrane protein spreading. Furthermore, with the aid of stochastic simulations we demonstrate that the heterogeneity of the ER leads to the formation of 'hot spots' where diffusing reactants are more likely to find each other, and show that ER exit sites appear to be preferentially localized to such spots in the network. By examining the impact of ER network heterogeneity on diffusion-limited reactions and local protein spread, this work sheds light on the structure-function relationship of a biologically crucial organelle.
## Materials and Methods
### DNA plasmids
ER plasmids (mCherry_KDEL, KDEL_Venus or BFP_KDEL) were described previously [(44; 45; 46)]. Plasmids expressing fluorescently tagged COPII proteins (GFP_Sec16s, GFP_Sec23A, GFP_Sec24D, and EYFP_Sec31A) were acquired from Addgene (gifts from Benjamin Glick #15775, David Stephens #66609 and #66613, Henry Lester #32678) [(47; 48; 49)]. Generation of PAGFP_Calnexin was performed by PCR amplification of calnexin from mEmerald_Calnexin (gift from Michael Davidson, Addgene #54021) using iProof high-fidelity DNA Master mix (Bio-Rad) and primers flanked with Xho1 or BamH1 recognition sites (Primer Fwd: 5'-AGATCTCGAGCTATGAAAGGGAAGTGGTGTGTGCTG-3' and Primer Rev: 5'-CCGATGATCCCCCTTCGGCTTTCGTGTTTCT-3') according to manufacturer instructions. Amplified DNA was purified using the Monarch Gel Extraction Kit (New England Biolabs) according to the manufacturer protocol and digested with Xho1 and BamH1 (NEB). The digested calnexin was then ligated into PAGFP_N1 (gift from Jennifer Lippincott-Schwartz, Addgene #11909) [(50)] using T4 DNA ligase (NEB) according to manufacturer protocols. Bacterial clones were screened for insertion of the calnexin sequence and confirmed by sequencing.
### Photoactivation experiment
COS7 cells were purchased from ATCC and cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS) and 1% penicillin/streptomycin. For all imaging experiments, COS7 cells were seeded in six-well,
plastic-bottom dishes at \(7.5\times 10^{4}\) cells/ml about 16 hours before transfection. Plasmid transfections were performed using Lipofectamine 3000, as described previously [(51)]. The following standard DNA amounts were transfected per mL: 0.2 \(\mu\)g mCherry_KDEL, 0.2 \(\mu\)g BFP_KDEL and 0.4\(\mu\)g PAGFP_Calnexin. Cells were transferred to 35mm imaging dishes (CellVis) at least 16 hours before imaging.
All photoactivation experiments were performed at the Van Andel Institute Optical Microscopy Core on a Zeiss LSM 880, equipped with an Axio Observer 7 inverted microscope body, stage surround incubation, Airyscan detector, two liquid cooled MA_PMT confocal detectors and one 32-channel GaAsP array confocal detector. Images were acquired with a Plan-Apochromat 63x (NA 1.4) oil objective with 3x optical zoom using Zeiss Zen 2.3 Black Edition software. Photoactivated target ROIs (60x60 pixel ROI) in the peripheral ER network were stimulated with 405nm light (single pass with 0.51\(\mu\)sec pixel dwell) to selectively activate defined regions within the peripheral ER. Cells were tracked for at least 2 minutes after stimulation with constant acquisition (0.629 sec/frame) to track diffusion of photoactivated signal into the surrounding ER network.
### Image analysis and network structure extraction
The machine-learning segmentation toolkit ilastik [(52)] is employed to segment ER network structures from live-cell images using the mCherry_KDEL marker. A custom-written skeleton tracing subroutine in Matlab [(53)] is used to extract a network structure from the probability file output by ilastik. This code is publicly provided at [https://github.com/lenafabr/networktools](https://github.com/lenafabr/networktools) and includes data structures for storing the network morphology as a set of nodes connected by edges with curved spatial paths. The networks are manually curated (using a network editing GUI provided as part of the networktools package) to remove unphysical terminal nodes arising from skeletonization artifacts.
For the calculations shown in Fig. 1A,C,D,F, and G a circular region of radius 20\(\mu\)m is cut out from the extracted network structure. In Fig. 1B, eight sections of peripheral ER with radius 8.5\(\mu\)m are used from three different cells. These eight extracted ER networks are again used in the pairwise reaction simulations (Fig. 4B-E) and in the Supplemental Material.
### Analysis of photoactivated spreading data
Imaging datasets for 9 individual cells are selected for analysis, each of which has a photoactivation region surrounded by a well-defined tubular network structure with primarily 3-way junctions. The net signal over time is computed in 10 distinct wedge regions comprising an annulus around the photoactivation region with inner radius 3.5\(\mu\)m and outer radius 6\(\mu\)m. The photoactivated signal in wedge \(j\) at time \(i\) is defined as \(w_{ij}^{\text{PAGFP}}\). The fractional signal is then given by \(f_{ij}^{\text{PAGFP}}=w_{ij}^{\text{PAGFP}}/p_{0}^{\text{PAGFP}}\) where \(p_{0}^{\text{PAGFP}}\) is the total initial signal within the photoactivated zone. We find the slope ('signal arrival rate') of the fractional signal via a linear fit over the first 10 seconds of imaging time following photoactivation.
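A minimal Python sketch of this computation (array names and shapes are our own assumptions, chosen to be consistent with the description above):

```python
import numpy as np

def arrival_rates(w_pagfp, p0, times, window=10.0):
    """Fractional signal f_ij = w_ij / p0 per wedge, and its slope over the
    first `window` seconds after photoactivation (the 'signal arrival rate').

    w_pagfp: (n_times, n_wedges) photoactivated signal in each wedge
    p0:      total initial signal in the photoactivated zone (scalar)
    times:   (n_times,) frame times in seconds, starting at 0
    """
    f = w_pagfp / p0
    mask = times <= window
    rates = np.array([np.polyfit(times[mask], f[mask, j], 1)[0]
                      for j in range(f.shape[1])])
    return f, rates
```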
Two rounds of filtering are applied to ensure a meaningful relationship between the photoactivated signal dynamics and the observed network structure. The first filter removes regions with extremely rapid and/or large fluctuations in the ER signal. We calculate the time-variance in the fractional signal in each wedge as
\[V_{j}=\text{var}_{i}(w_{ij}^{\text{PAGFP}}/m_{i}^{\text{PAGFP}}), \tag{1}\]
where \(m_{i}^{\text{PAGFP}}=\sum_{j}w_{ij}^{\text{PAGFP}}\) is the total signal in the annular region at time \(i\). Given the distribution of these time-variances, a threshold of \(2.5\times(\text{MAD}\times 1.4826)\), where MAD is the median absolute deviation, is used to define outliers with extreme ER dynamics, in keeping with commonly used outlier detection methods [(54)].
The second round of filtering removes instances where network extraction does not accurately capture the underlying ER morphology. For example, small peripheral sheet regions, expanded junctions, or dense tubular matrices [(4)] can complicate the extraction of a well-defined network structure. Outlier regions are defined as those where the extracted total tubule length and the ER marker (mCherry_KDEL) signal levels are mismatched. Specifically, we compute the time-averaged fractional mCherry_KDEL signal for each wedge region in each cell as \(s_{j}=\left\langle w_{ij}^{\text{mCherry}}/m_{i}^{\text{mCherry}}\right\rangle_{i}\). A linear fit is performed relating \(s_{j}\) with the total extracted network length for each wedge, averaged over time. Any wedge with a residual above \(2.5\times(\text{MAD}\times 1.4826)\) is filtered out of the analysis. Wedge regions filtered out due to either criterion are shown as gray dots in Fig. 2E-G.
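Both filters rely on the same scaled-MAD outlier rule; a short sketch of one common reading of that rule (flagging values farther than 2.5 scaled MADs from the median) is given below:

```python
import numpy as np

def mad_outliers(values, n_mads=2.5):
    """Flag values farther than n_mads * (MAD * 1.4826) from the median.
    The 1.4826 factor makes the MAD a consistent estimator of the standard
    deviation for normally distributed data."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    return np.abs(values - med) > n_mads * (mad * 1.4826)
```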
### Simulations of photoactivation on static networks
For each frame in a photoactivation video, the ER network structure is extracted from the mCherry_KDEL fluorescence channel, as described in the Image Analysis section. On each individual network structure, simulations of diffusing particles are conducted via a kinetic Monte Carlo method, as described in prior work [(29)]. Briefly, analytically computed propagator functions are used to sample the time required for each particle to transition between neighboring nodes and edges, obviating
any artifacts associated with a fixed time discretization. This method allows the particle to propagate in larger timesteps than would be achievable through classic Brownian dynamics simulations on a network.
Batches of \(N=10000\) particles are initiated within the experimentally photoactivated region, a \(3\times 3\mu\)m patch in the peripheral ER. Particles propagate through the network with a diffusivity of \(D=1\mu\text{m}^{2}/\text{s}\), consistent with previous measurements of ER membrane protein diffusivity [(24)]. All particle positions are saved at a frame rate matching the experimental imaging rate, \(dt=0.629\text{s}\).
To process the simulated data, we define individual wedge regions of the same size and location as in the experimental images and analyze the number of particles in each. Note that each simulation is run on a static network extracted from a single frame (\(k\)) of the experimental image. The simulated signal in each wedge (\(w_{kij}^{\text{sim}}\)) is then defined as the total number of particles in wedge \(j\) and time point \(i\) on the network extracted from frame \(k\), and the fractional signal (plotted in Fig. 2D) is \(f_{kij}^{\text{sim}}=w_{kij}^{\text{sim}}/N\).
We next average this fractional signal over the different networks, defining \(f_{ij}^{\text{sim}}=\left\langle f_{kij}^{\text{sim}}\right\rangle_{k}\) (see Supplemental Material Fig. S2A.iii for example averaged signal vs time curves). The resulting values are used to find the signal arrival rate (slope over first 10 seconds), exactly as for experimental data. Alternative methods for incorporating the time-varying ER network structure are considered in the Supplemental Material.
The simulations make it possible to incorporate a range of values for the particle diffusivity. We accomplish this by rescaling the simulation time in our analysis, which leads to a rescaling of the diffusivity (assuming static networks). For example, to test whether \(D_{\text{eff}}=0.5\mu\text{m}^{2}/\text{s}\) is a better representation of the protein diffusivity than \(D_{\text{orig}}=1\mu\text{m}^{2}/\text{s}\), we can find the slope of \(f_{ij}^{\text{sim}}\) over time \(T_{\text{scale}}=\frac{D_{\text{eff}}}{D_{\text{orig}}}\times 10\text{s}=5\text{s}\). The slope is then multiplied by \(\frac{D_{\text{orig}}}{D_{\text{eff}}}\) to arrive at a simulated arrival rate with an effective diffusivity of \(D_{\text{eff}}=0.5\mu\text{m}^{2}/\text{s}\).
We perform a linear fit of the rescaled simulated rates to the experimental protein arrival rates (slopes over 10s). Repeating over a range of effective diffusivities, the value of \(D_{\text{eff}}\) with the optimal fit indicates the best estimate of ER membrane protein diffusivity given the photoactivated spreading data.
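One possible implementation of this scan is sketched below (variable names are ours; the candidate diffusivity is selected by the correlation between experimental and rescaled simulated rates, and any region-independent rescaling of the simulated rates leaves that correlation unchanged):

```python
import numpy as np

def best_effective_diffusivity(times, f_sim, exp_rates, d_orig=1.0,
                               d_candidates=np.linspace(0.2, 3.0, 29),
                               window=10.0):
    """Scan candidate D_eff values: shrink or stretch the fitting window to
    (D_eff / D_orig) * window, recompute simulated arrival rates, and keep
    the D_eff whose rates correlate best with the experimental ones.

    times:     (n_times,) simulation frame times [s]
    f_sim:     (n_times, n_regions) ensemble-averaged fractional signal
    exp_rates: (n_regions,) experimental arrival rates for matched regions
    """
    scores = []
    for d_eff in d_candidates:
        t_scale = (d_eff / d_orig) * window
        mask = times <= t_scale
        slopes = np.array([np.polyfit(times[mask], f_sim[mask, j], 1)[0]
                           for j in range(f_sim.shape[1])])
        r = np.corrcoef(slopes, exp_rates)[0, 1]
        scores.append(r ** 2)
    scores = np.array(scores)
    return d_candidates[np.argmax(scores)], scores
```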
### Minimal model for dynamic ER networks
To estimate the effects of ER network rearrangements on particle spreading, we conduct Brownian dynamics simulations on synthetic dynamic networks. To represent the dynamic network, we use a modified version of the previously published'minimal network model' [(23; 43)]. In this model, the network consists of mobile nodes connected by edges, where the node positions \(\mathbf{x}_{i}(t)\) obey an overdamped Langevin equation
\[\frac{d\mathbf{x}_{i}}{dt}=-b\nabla f(\mathbf{x}_{i})+\sqrt{2D_{n}}\mathbf{\eta}( t), \tag{2}\]
where \(D_{n}\approx 10^{-3}\mu\text{m}^{2}/\text{s}\) [(23)] is the node diffusivity, \(b\) is the node mobility in units of \(\mu\text{m}/\text{s}\), and \(f(\mathbf{x}_{i})\) is the total edge length attached to each node. Specifically, \(f(\mathbf{x})=\sum_{j=1}^{d}\left|\mathbf{x}-\mathbf{y}_{j}\right|\), where the sum is over neighbor nodes and \(\mathbf{y}_{j}\) are the neighbor positions. The stochastic variable \(\mathbf{\eta}(t)\) is a Gaussian-distributed noise term with mean zero and standard deviation 1. This model represents a network of edges under constant tension, driving a minimization of their length.
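A single explicit update of Eq. (2), using an Euler-Maruyama discretization of our own choosing, can be sketched as follows:

```python
import numpy as np

def step_nodes(pos, neighbors, b=0.05, D_n=1e-3, dt=1e-3):
    """One Euler-Maruyama step of Eq. (2) for all mobile junction nodes.

    pos:       (n_nodes, 2) node coordinates [um]
    neighbors: list of integer index lists, neighbors[i] = nodes linked to i
    The drift is -b * grad f(x_i); since f is the total length of attached
    edges, grad f is the sum of unit vectors pointing from each neighbor
    toward node i (each edge pulls with constant tension).
    """
    new_pos = pos.copy()
    for i, nbrs in enumerate(neighbors):
        if len(nbrs) == 0:
            continue
        diff = pos[i] - pos[nbrs]                          # neighbor -> i vectors
        dist = np.linalg.norm(diff, axis=1, keepdims=True)
        grad = np.sum(diff / np.maximum(dist, 1e-12), axis=0)
        noise = np.sqrt(2.0 * D_n * dt) * np.random.randn(2)
        new_pos[i] = pos[i] - b * grad * dt + noise
    return new_pos
```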
As the edges of the network shrink, neighboring nodes approach each other. When two nodes are sufficiently close together, topological rearrangements of network connectivity can occur. If the two nodes are both degree 3 junctions, they undergo a T1 rearrangement [(55)] if and only if this decreases the total edge length. If one of the nodes has degree 2, or if they are connected by two edges (forming a short loop), then the two nodes can fuse together into a single node. The combination of these processes allows for ring-closure events, as observed in live-cell imaging of ER dynamics [(56; 32)].
To maintain a steady-state network structure, new edges are generated by a tube spawning and growth process. A new tube spawns at a fixed growth rate per existing total edge length (\(k\), units of \(\text{s}^{-1}\mu\text{m}^{-1}\)). The new tube location is uniformly selected along existing edges. The nascent tube grows at a right angle from the parent edge, with fixed velocity \(v=2\mu\text{m}/\text{s}\), comparable to rapid rates observed in dynamic ER images [(40; 57)]. When the tip of a nascent tube crosses an existing tube, it stops growing and fuses to form a new junction node.
Similar to [(43)], the balance between new tubule growth and shrinking due to length minimization enables the dynamic network to reach a stable steady state. Of the parameters in the model, the node diffusivity is sufficiently low (\(D_{n}\ll v\ell\), where \(\ell\approx 1\mu\text{m}\) is the characteristic edge length) to have little effect on network structure. Additionally, the tubule growth speed (\(v\gg b\)) is high enough that newly spawned tubules fuse much more quickly than the node rearrangement timescales. The network structure is thus largely determined by the remaining two parameters \(k\), \(b\), which are set to match observations of COS7 ER in live-cell images.
The approximate growth rate \(k\) is extracted from videos of COS7 peripheral ER, labeled with 0.2\(\mu\)g KDEL_Venus (transfected as described above) and imaged on the Zeiss LSM 880 at a frame rate of 0.315s, by manually counting new growth events (Supplemental Video 3). The number of growth events in a region of size 10x10\(\mu\)m is manually counted over time interval 63s. This number is normalized by the time interval and the time-averaged total segmented ER length within the region, giving: \(k\approx 0.005\mu\)m\({}^{-1}\)s\({}^{-1}\).
Dimensional analysis indicates that the average steady-state edge length in the network scales as \(\ell\sim\sqrt{b/k}\). We tune the node mobility \(b\) to set a typical ER network edge length \(\ell\sim 1\mu\)m (24), corresponding to an estimate of \(b=0.05\mu\)m/s.
The resulting dynamic network model thus has tubule lengths and turnover timescales that approximately represent those of the COS7 ER ('normal ER model'). For comparison, we consider also a model where \(b\) and \(k\) are both increased by 2\(\times\), allowing for more rapid turnover but the same steady-state structure ('fast ER model').
### Simulating photoactivated spread on the dynamic network model
Diffusive particles (\(N=10000\)) are simulated on the dynamic network using Brownian dynamics, with particles moving along the network edges in discrete timesteps \(dt=10^{-3}\)s, with diffusivity \(D=1\mu\)m\({}^{2}\)/s, and network structure updated at each timestep. After each network update, the particle position is projected to the closest location in the new network. The network architecture is first evolved for a total time of 1000s to allow it to reach steady state prior to initiating the diffusive particles. The particles are placed within a square 3x3\(\mu\)m region of the network and the joint simulations of particle and network evolution then proceed for an additional 15s of simulated time.
The number of particles arriving in each wedge region surrounding the starting center is analyzed on both the dynamically evolving network and on each individual static network structure extracted at 0.6s intervals from the simulation. The signal arrival rates are obtained as described in the previous section.
### Paired particle simulations
Simulations of reactive particle pairs are run using the propagator-based approach, which enables particles to hop rapidly from node to node of the network until they come within the same neighborhood of each other. Details of the methodology, including the appropriate propagator functions for two reactive particles on the same edge, are provided in prior work (29). Each simulation is run until the two particles encounter each other, and the reaction position on the network is recorded. A total of \(N=160000\) particles are simulated on each network structure. The ER network structures are extracted from COS7 cell imaging data as described in the Image Analysis section.
Two other families of network are also analyzed. Eight circular honeycomb networks are generated, each with the same diameter and total edge length as one of the eight ER structures analyzed. Mikado networks are generated by scattering \(N_{\text{rod}}\) randomly oriented rods of length \(L_{\text{rod}}\) in a square of size \(L_{\text{space}}\) x \(L_{\text{space}}\). The intersections of these rods define the nodes of the network, the segments of rods between intersections define the edges of the network. This algorithm generates highly heterogeneous networks with a density that is tunable by changing any of the three input parameters **[CITE]**. However, Mikado networks tend to have degree 4 junctions, whereas ER (and honeycomb) networks are composed of mostly degree 3 junctions. Our modification to the Mikado networks is thus to remove degree 4 nodes by iteratively removing 1 random edge from a randomly chosen degree 4 node until all nodes in the network have degree 3 or less. The Mikado parameters are chosen to be \(N_{\text{rod}}=80\), \(L_{\text{rod}}=12\mu\)m, and \(L_{\text{space}}=24\mu\)m and a circular portion of the network is extracted with diameter 18\(\mu\)m, matching the circular ER networks. We generate many copies of these circular modified Mikado networks and select for analysis only those which have a total length within 5% of the corresponding ER network.
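A simplified sketch of this modified Mikado construction (Python; straight rods, pairwise segment intersections, edges between consecutive intersections along each rod, and iterative pruning of degree-4 nodes; parameter defaults follow the values quoted above):

```python
import numpy as np

def seg_intersection(p1, p2, q1, q2):
    """Intersection point of segments p1-p2 and q1-q2, or None."""
    d1, d2, r = p2 - p1, q2 - q1, q1 - p1
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None                                   # parallel rods: ignore
    t = (r[0] * d2[1] - r[1] * d2[0]) / denom
    u = (r[0] * d1[1] - r[1] * d1[0]) / denom
    return p1 + t * d1 if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0 else None

def mikado_network(n_rod=80, l_rod=12.0, l_space=24.0, seed=0):
    """Nodes (rod intersections) and edges (rod pieces between consecutive
    intersections); degree-4 nodes are then pruned by deleting one random
    incident edge at a time until every node has degree <= 3."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0, l_space, size=(n_rod, 2))
    angles = rng.uniform(0, np.pi, size=n_rod)
    direc = 0.5 * l_rod * np.c_[np.cos(angles), np.sin(angles)]
    ends = np.stack([centers - direc, centers + direc], axis=1)
    nodes, hits = [], [[] for _ in range(n_rod)]   # hits: (position along rod, node id)
    for i in range(n_rod):
        for j in range(i + 1, n_rod):
            x = seg_intersection(ends[i, 0], ends[i, 1], ends[j, 0], ends[j, 1])
            if x is not None:
                k = len(nodes)
                nodes.append(x)
                hits[i].append((np.linalg.norm(x - ends[i, 0]), k))
                hits[j].append((np.linalg.norm(x - ends[j, 0]), k))
    edges = []
    for rod_hits in hits:
        rod_hits.sort()
        edges += [(a, b) for (_, a), (_, b) in zip(rod_hits, rod_hits[1:])]
    degree = np.zeros(len(nodes), dtype=int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    while np.any(degree > 3):
        n4 = rng.choice(np.flatnonzero(degree > 3))
        incident = [e for e in edges if n4 in e]
        a, b = incident[rng.integers(len(incident))]
        edges.remove((a, b))
        degree[a] -= 1
        degree[b] -= 1
    return np.array(nodes), edges
```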
### ERES localization on ER network
COS7 cells were seeded in plastic 6-well dishes and transfected as described in the photoactivation experiment section. Cells were then imaged as previously described (40). The following standard amounts of DNA were transfected per mL: 0.1\(\mu\)g GFP_Sec16s, 0.1\(\mu\)g GFP_Sec23a, 0.1\(\mu\)g GFP_Sec24d, 0.1\(\mu\)g EYFP_Sec31a, 0.2\(\mu\)g mCherry_KDEL. Images were acquired on an inverted fluorescence microscope (TE2000-U; Nikon) equipped with a 100x oil objective (NA 1.4) on an electron-multiplying charge-coupled device (CCD) camera (Andor). Live-cell imaging was performed at 37\({}^{\circ}\)C following a media change to pre-warmed imaging media (FluoroBrite DMEM (Invitrogen) + 10% FBS).
Images of 23 different COS7 cells are analyzed as described in the Image Analysis section to extract the peripheral ER network structure. The perinuclear region is manually excised from each one. ERES locations are identified as puncta in the GFP or EYFP fluorescent signal using a previously published implementation of the standard particle localization algorithm by Crocker and Grier (58, 59). The ERES positions are projected onto the nearest point along the extracted network structure. The global mean first passage time (GMFPT) to each projected ERES location is computed analytically as previously described (29).
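The projection step can be sketched as follows (Python; edges are treated here as straight segments between node coordinates, a simplification of the curved edge paths stored by the network tools):

```python
import numpy as np

def project_onto_network(point, node_pos, edges):
    """Closest point on any network edge to `point`.

    node_pos: (n_nodes, 2) node coordinates
    edges:    list of (i, j) node index pairs (straight-segment approximation)
    Returns (edge index, fractional position along the edge, projected point).
    """
    p = np.asarray(point, dtype=float)
    best, best_dist = (None, None, None), np.inf
    for k, (i, j) in enumerate(edges):
        a, b = node_pos[i], node_pos[j]
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        proj = a + t * ab
        d = np.linalg.norm(p - proj)
        if d < best_dist:
            best, best_dist = (k, t, proj), d
    return best
```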
A total of 1466 exit site positions are analyzed. For comparison, 1000 target points are selected uniformly at random along the edges of each network structure, and the GMFPT is computed to each of those points individually.
## Results and Discussion
### ER network structures exhibit spatially heterogeneous accessibility
The peripheral endoplasmic reticulum forms an intricate web of tubules, with primarily three-way junctions scattered at varying densities across the cell periphery. We aim to characterize the heterogeneity of the network structure and its effects on the accessibility of different regions by particles diffusing on the network.
ER network morphologies are extracted from confocal images of the peripheral ER in cultured COS7 cells (Fig. 1A), where these network structures are largely planar. The network structure is simplified into effectively one-dimensional edges (not necessarily straight) connecting point-like nodes. Although more complex peripheral structures, including hole-studded sheets [(60)] and dense localized matrices [(4)], have been observed, we focus here on regions composed primarily of well-defined tubules and junctions.
The ER density in different spatial regions can be characterized by computing the local edge length \(L_{\text{loc}}(x;\sigma)\), defined as the total length of network tubules that falls within distance \(\sigma=5\mu\)m of position \(x\). We sample local edge length for random points scattered across the domain of an example network (shown in Fig. 1A). The values of \(L_{\text{loc}}(x;\sigma)\) span one order of magnitude, demonstrating substantial spatial heterogeneity in the ER density (Fig. 1B). Notably, spatial variation in the local edge length within a single network is comparable to the variation between networks extracted from different cells.
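For concreteness, the local edge length can be estimated from the extracted edge paths as in the following sketch (the resampling resolution is our own choice):

```python
import numpy as np

def local_edge_length(x, edge_paths, sigma=5.0, ds=0.05):
    """Total network length within distance `sigma` of point `x`.

    edge_paths: list of (n_pts, 2) arrays tracing each edge's spatial path
    ds:         resampling resolution [um] for approximating the integral
    """
    x = np.asarray(x, dtype=float)
    total = 0.0
    for path in edge_paths:
        seg = np.diff(path, axis=0)
        seg_len = np.linalg.norm(seg, axis=1)
        for start, d, length in zip(path[:-1], seg, seg_len):
            n = max(int(np.ceil(length / ds)), 1)
            mids = start + np.outer((np.arange(n) + 0.5) / n, d)
            inside = np.linalg.norm(mids - x, axis=1) < sigma
            total += inside.sum() * (length / n)
    return total
```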
While local edge length provides a purely structural metric of heterogeneity, we further consider the consequences of network variability on the diffusive transport of particles within the ER. One useful metric for quantifying search efficiency on spatial networks is the global mean first passage time (GMFPT) [(28)], which gives the mean first passage time for a diffusing particle to reach a given node in the network, averaged over all starting nodes. This quantity can be computed analytically from the edge lengths and topology of the network [(29)].
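As a numerical stand-in for the analytic computation of ref. (29), the GMFPT to a chosen node can be approximated by discretizing each edge into short hops and solving the standard linear system for the mean first-passage times of the resulting random walk (a sketch; the uniform hop length and the unweighted average over start nodes are our simplifications):

```python
import numpy as np

def gmfpt_to_target(node_pos, edges, target, D=1.0, h=0.1):
    """Approximate GMFPT to node `target`: discretize every edge into hops of
    length ~h, then solve T_i = tau + mean_j T_j for the hopping walk, with
    tau = h^2 / (2 D) the mean waiting time per hop."""
    adj = [[] for _ in range(len(node_pos))]          # original nodes first
    for i, j in edges:
        length = np.linalg.norm(node_pos[j] - node_pos[i])
        prev = i
        for _ in range(max(int(round(length / h)), 1) - 1):
            adj.append([])                            # internal lattice point
            new = len(adj) - 1
            adj[prev].append(new)
            adj[new].append(prev)
            prev = new
        adj[prev].append(j)
        adj[j].append(prev)
    n = len(adj)
    tau = h ** 2 / (2.0 * D)
    A = np.eye(n)
    rhs = np.full(n, tau)
    for i in range(n):
        if i == target:
            rhs[i] = 0.0                              # absorbing target node
            continue
        for j in adj[i]:
            A[i, j] -= 1.0 / len(adj[i])
    T = np.linalg.solve(A, rhs)
    return T[:len(node_pos)].mean()   # average over the original network nodes
```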
The GMFPTs for different nodes in a single ER network can vary substantially (Fig. 1C). Nodes near the boundary have a higher GMFPT, whereas more centrally located nodes and those in denser regions of the network exhibit the lowest GMFPTs.
Figure 1: (A) Confocal image of COS7 cell expressing fluorescent ER marker (KDEL_Venus, gray) with extracted network structure overlay (green). (B) The distribution of local edge lengths in one cell (left) is similar to the corresponding distribution across multiple cells (right). (C) Global mean first passage times (GMFPTs) to nodes on the example COS7 ER network shown in A. (D) GMFPT scales inversely with local edge length, color denotes radial position from center of network. (E) MFPT to each network node for particles diffusing outward from the center of a circular honeycomb network. (F) MFPTs to all network nodes for particles diffusing outward from the center of the example ER network. Vertical lines highlight heterogeneity in a ring from \(3.5-6\mu\)m around the center. (G) MFPTs for nodes in the ring from \(3.5-6\mu\)m, as highlighted in F.
Some of the variation in GMFPT can be explained by the local edge length surrounding a node, as well as by its proximity to the edge of the domain (Fig. 1D), both of which serve as measures of centrality within the network (17). However, even nodes with similar local edge lengths and radial positions can have GMFPTs that vary by a factor of 2. We note that the ER networks form a highly looped structure composed primarily of 3-way junctions, with less than 5% terminal nodes. Thus, although network search times are known to vary with node degree (61), the degree of each node is insufficient to account for the observed variability of the GMFPTs.
Individual mean first passage times (MFPTs) between pairs of nodes in the network can be used to further assess heterogeneity in local transport processes. The MFPT for a particle diffusing outward from a central point to each possible target node in a uniform honeycomb network exhibits a characteristic scaling with distance, as shown in Fig. 1E. Unsurprisingly, nodes that are located farther from the source tend to have higher MFPTs, with the search time increasing exponentially for the most distant population of nodes. This particular scaling of the diffusive search time relative to distance has also been observed for particles that hop actively across edges in planar network structures (62). Similar scaling is found in diffusive search for targets on the ER (Fig. 1F). However, the heterogeneity of the ER network structure gives rise to a broad range of mean search times for nodes at similar distances from the source. A factor of 3 difference is observed for nodes that fall within a ring from \(3.5\mu\text{m}-6\mu\text{m}\) from the center (Fig. 1F, G).
Overall, we use analytic mean first passage times as a measure of accessibility for different regions of the ER, either by particles starting throughout the network, or those originating from a localized source. This accessibility is shown to vary between different regions of an ER network, due to the heterogeneous density and connectivity patterns of the tubules.
### Network morphology governs the nonuniform spread of photoactivated proteins
Although mean first passage times are a convenient, easily computed metric of diffusive accessibility, they are difficult to probe experimentally. To directly observe the heterogeneity of diffusive spreading within the ER, we consider instead the short-time rate of arrival to nearby regions surrounding a particle source. This process is visualized by photoactivating ER membrane-associated proteins within a localized region of the network and watching their spread into surrounding regions.
Cultured COS7 cells are transfected with PAGFP_Calnexin, a membrane-bound ER protein with a photoactivatable fluorescent tag, as well as mCherry_KDEL as a general marker for ER structure. A single-pass photoactivating pulse is applied in a \(3\mu\text{m}\times 3\mu\text{m}\) square of the peripheral ER. Several frames from an example video (Supplemental Video 1) are shown in Fig. 2A with mCherry_KDEL in red and PAGFP_Calnexin in green. The initial dense bolus of photoactivated proteins can be seen spreading outward through the network. We track the signal in individual small regions located equidistant from the photoactivation site. Diffusive spreading of particles over a homogeneous continuum would be expected to yield similar time-courses of signal arrival to each of these regions. However, the observed PAGFP fluorescence signal over time varies substantially between the individual wedges in a single cell (Fig. 2B). This variability can be attributed to the heterogeneous distribution and connectivity of the ER tubules. Intuitively, the blue region contains dense, highly connected tubules and has the strongest and fastest-growing photoactivation signal. By contrast, the orange region is poorly connected to the activation site and exhibits the smallest initial signal growth.
To account for the observed differences in signal arrival rates due to ER morphology, we extracted the ER network structure in the vicinity of the photoactivation site and carried out agent-based simulations of diffusing particles initiated at the site (Fig. 2C, Supplemental Video 2). Quantifying the number of simulated particles accumulating in each region over time allows for a direct comparison between simulated and observed fluorescent signal. For both the experimental and simulated data, we normalize the measured signal in each region by the initial total signal within a disc of \(3.5\mu\text{m}\) radius centered on the photoactivation zone (inner circle in Fig. 2A,C). Thus, the reported signal traces are given in terms of the fraction of initially photoactivated particles present in a given region at a given time. The normalized simulated signal (Fig. 2C) exhibits similar behavior to the experimental results, with well-connected dense regions receiving more signal faster than poorly-connected and sparse ER regions.
To partially incorporate the effect of ER network rearrangement over time, the photoactivation simulations are run on network structures extracted for every frame of the experimental movie (at time interval 0.6s). The signal over time is then averaged across the ensemble of simulations on all of these different network structures. This ensemble-averaged simulated signal is used in the subsequent analysis. Analogous results using only a single network structure can be found in the Supplemental Material.
To quantitatively compare protein arrival rates in the experimental and simulated ER networks, we extract the slope of the normalized signal curves up to 10 seconds following photoactivation. These slopes (referred to as 'arrival rates') serve as a simple metric that provides information about the spatial heterogeneity of protein spreading around the photoactivation site. Because our simulations are carried out on network structures extracted from the experimental images, it is possible to directly compare the rate of signal arrival in matched regions between experimental and simulated data (Fig. 2E). Regions where the extracted network length was a poor match for the observed ER marker (mCherry_KDEL) fluorescence, or where the
Figure 2: Spreading of localized bolus of particles over the ER network. (A) ER membrane protein PAGFP_Calnexin (green) is pulse-activated in a local region, while ER luminal marker mCherry_KDEL (red) serves to visualize network structure. Equidistant surrounding regions (colored wedges) are used to analyze signal spread. (B) Photoactivated signal arriving in each analyzed region, normalized by initial total signal in photoactivated zone. (C) Snapshots of simulations on frozen ER structures extracted from first frame in A. (D) Simulated particle counts arriving in individual analysis regions, normalized by total number of particles. (E) Correlation between signal arrival rates (slopes of signal vs time curves) for experimental and simulated data. Color indicates cell (N=9). Inset: simulated protein arrival rates best match experimental arrival rates when effective diffusivity is scaled from \(D_{\mathrm{orig}}=1\,\mu\mathrm{m}^{2}/\mathrm{s}\) to \(D_{\mathrm{eff}}=1.3\mu\mathrm{m}^{2}/\mathrm{s}\) (dashed line). (F) Correlation of experimental signal arrival rate in individual regions versus the fraction of ER marker signal in that region. (G) Correlation of experimental signal arrival rate with the number of edges intersecting the boundary of each region. Regions removed due to filtering are shown in gray in E-G.
ER marker showed large fluctuations over time, were filtered out of the analysis (gray dots; see Methods for details). Notably, the variability of measured rates between regions within each individual cell (same color dots) is comparable to the inter-cell variability (different color dots), indicating that the arrival rates are similarly heterogeneous in all the observed cells. The experimental and simulated arrival rates show a direct correlation: \(R^{2}=0.68\), obtained from a linear fit. The high correlation implies that diffusive particle motion over an ER network is a good predictor of signal arrival to different regions.
Notably, the simulation time can be rescaled to effectively represent particles of different diffusivity (see Methods for details). We compare the correlation of signal arrival rates between experimental measurements and simulations with different time-scaling. The simulations which best correlate with experimental values correspond to a particle diffusivity of \(D_{\text{eff}}\approx 1.3\mu\text{m}^{2}/\text{s}\) (Fig. 2E, inset), a value that is similar in magnitude to previous measurements of diffusivity via single-particle tracking for other ER membrane proteins [(24)]. This result demonstrates that particle diffusivity in the ER can be measured by quantifying signal arrival rates to different structural regions of the network, all located relatively close to the photoactivated zone, without the need for tracking longer-range spread across the cell [(30)].
In order to test whether particle diffusion simulations are more predictive than simpler metrics of network structure, we also compare the experimental arrival rates to the mCherry_KDEL ER signal in each region. A linear fit (Fig. 2F) demonstrates there is some correlation between the two (\(R^{2}=0.44\)), but variation in the ER volume within each region (as measured by mCherry_KDEL signal) cannot capture the full variability in protein spreading rates. A simple metric for local connectivity, the number of edges crossing the boundary of each wedge region, is shown to be roughly correlated (Fig. 2G, \(R^{2}=0.2\)), but also does not provide a strong predictor of protein arrival rates. Thus, the distribution of protein spreading rates in live cells is best modeled by simulations which take into account not just the local ER density in a region, but also the connectivity of the surrounding network together with the dynamics of diffusive particles moving through this network.
Figure 3: ER network dynamics does not substantially affect particle spreading. (A) Snapshots from simulation of diffusing particles spreading from a local region, on a minimal network dynamic model. Network edges are blue and a subset (500) of the simulated particles (\(D=1\mu\text{m}^{2}/\text{s}\)) are shown in orange. White arrows highlight several new edges that grew between first and second snapshots. (B) Correlation of signal arrival rate (slope of signal-vs-time curves) to individual regions, comparing simulations on a single static network structure and on dynamic minimal network model with turnover timescales comparable to ER dynamics (blue) or \(2\times\) faster (red). (C) Comparison of signal arrival rates for simulation on a dynamic network versus simulations of particles diffusing on a static network, averaged over static structures from individual snapshots of the minimal network. The average static rates are obtained in the same manner as in the analysis of experimental data.
### Slow ER network dynamics have little effect on particle spreading
The ER network in a living cell is itself a dynamic structure, with network rearrangements occurring over tens-of-second timescales as a result of attachment to motile organelles, molecular motors, and growing microtubule tips [32, 57, 63]. In comparing the measured rates of protein spread to simulations of diffusing particles (Fig. 2E), we account for time variation in ER architecture by averaging over network structures extracted from each frame.
To gain a better sense of how ER tubule dynamics may contribute to the spread of photoactivated proteins, we incorporate network rearrangements directly into our simulations, by treating the ER as a 'minimal network' with tubules subject to growth and constant tension [43]. These synthetic dynamic networks (described in the Methods) mimic the rearrangements of the ER over time, including new tubule growth, junction sliding, and the merging of junctions.
The two parameters primarily responsible for determining the equilibrium properties are node mobility (units of \(\mu\)m/s, sets speed with which nodes rearrange) and new tubule growth rate (units of \(\mu\)m\({}^{-1}\)s\({}^{-1}\), rate at which new tubules are pulled out of existing tubules). Other parameters, such as node diffusivity and new tubule growth speed play a secondary role in the parameter regimes considered here. Input parameters to the model are set so that network properties at equilibrium match the ER network in COS7 cells. Specifically, the modeled networks match the measured rate of new tubule formation (Supplemental Video 3) and the steady-state average edge length in the network (see Methods for details). For comparison, we also ran simulations of networks that exhibit faster dynamics, with both tubule growth rate and node mobility increased by a factor of 2. These faster networks have the same steady-state network structure but rearrange twice as rapidly.
For each of these dynamic networks, 16 separate photoactivation events are simulated in different regions of the network. Particles are initiated within \(3\times 3\mu\)m patches and allowed to diffuse through the structure either on a static network or concurrently with the network dynamics (Fig. 3A, Supplemental Video 4). We compare the rate of particles arriving to equidistant regions surrounding the initiation zone both with and without network dynamics.
When the particle simulations are run on a single static network structure, the arrival rates are moderately well-correlated (\(R^{2}=0.87\)) with the rates observed for simulations on dynamic networks (Fig. 3B). Faster dynamics in the synthetic networks reduces this correlation to \(R^{2}=0.78\). Intuitively, as the rearrangements occur more quickly, diffusive particles encounter more extensive changes in structure during the 10-second timescale of the measurement.
Notably, even when network dynamics are twice as rapid as the experimentally observed dynamics of the ER network, the static network approximation is a good predictor for particle arrival rates. We can estimate the importance of active network rearrangements versus the diffusive motion of the particles by considering an effective Peclet number for the system. The mobility parameter for the dynamic networks (\(b=0.05\mu\)m/s) sets a typical velocity for tension-driven sliding. Over a length scale of \(10\mu\)m (corresponding to the diameter of the analyzed region), the corresponding Peclet number for a protein within the network is \(\mathrm{Pe}=vL/D\approx 0.5\). Doubling the rate of ER rearrangement doubles this Peclet number. Because this dimensionless quantity is close to or below \(\mathrm{Pe}=1\), the motion of the particles is dominated by their diffusivity rather than by the tubule rearrangement dynamics.
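For reference, the Peclet estimate quoted above follows directly from the stated numbers; the short calculation below simply reproduces it.

```python
# Peclet number from the values quoted in the text: mobility b = 0.05 um/s,
# analyzed-region diameter L = 10 um, particle diffusivity D = 1 um^2/s.
v = 0.05   # um/s, typical tension-driven sliding speed
L = 10.0   # um
D = 1.0    # um^2/s

Pe = v * L / D
print(f"Pe = {Pe:.2f}")                        # ~0.5: diffusion-dominated
print(f"Pe, 2x faster dynamics = {2 * Pe:.2f}")
```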
The moderate effect of network dynamics on particle spreading can be partly accounted for by running simulations on many individual static network structures extracted at different points in time. We perform this analysis using snapshots of our simulated dynamic networks and averaging the signal in each region at each time point. For this ensemble-averaged data, the arrival rates on static and dynamic networks become more closely correlated (Fig. 3C), even in the case of rapid network rearrangements (\(R^{2}=0.91\)). Thus, the effect of network dynamics is almost entirely accounted for by averaging multiple static simulations on consecutive network structures. These results on synthetic dynamic networks validate the use of the same ensemble averaging approach when analyzing experimental data.
### ER structure directs reaction locations
The endoplasmic reticulum does more than simply serve as a transport hub for proteins, lipids and ions; it also plays a role in protein synthesis and quality control [42, 35], as well as forming functionally important contact sites with other organelles [33]. The formation of reactive complexes, exit sites for protein export, and contact site assemblies requires multiple intra-ER particles to find each other within the network. In order to better understand how diffusion-mediated biochemical reactions are impacted by ER morphology, we simulate reactive particle pairs diffusing on extracted ER network structures (Fig. 4A). From these simulations, both the spatial locations of reactions on the network as well as the distribution of reaction times are extracted.
Many previous studies of diffusive processes on networks have focused on the temporal properties of reactions or exit times (e.g. MFPTs, extreme statistics, and full FPT distributions [26, 29, 16]), without considering in detail where those reactions occur. Here, we provide fresh insight by analyzing the spatial locations, as well as temporal distributions, of pair-wise reactions on the ER. Pairs of particles are distributed randomly across the network to begin the simulation. Each pair diffuses along the edges of the network until the two particles come into contact with one another. At this point, they react, and the position and time of the reaction are recorded. The network edges are meshed into segments of length \(\ell\approx 0.2\mu\)m, and the normalized reaction
density in mesh cell \(i\) is defined as:
\[\gamma_{i}=\frac{\text{\# of reactions in cell $i$}}{\text{\# of particle pairs}}\times\frac{\text{total network length}}{\ell}. \tag{3}\]
When averaged over an entire network, \(\langle\gamma\rangle=1\). Simulations on the ER network structure demonstrate that paired particle reaction locations are heterogeneous (Fig. 4B, middle panel), with some regions showing a particularly high reaction density \(\gamma\). Certain tubule segments are more likely to serve as the reaction site, due to their enhanced connectivity to the rest of the network. The normalized reaction density correlates with the inverse of the GMFPT (Fig. 4C), indicating that these highly reactive regions are in fact easier to find by diffusing particles.
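The normalized reaction density of Eq. (3) is straightforward to tally from simulation output. The sketch below assumes reactions have already been assigned to mesh cells; the array names and toy example are illustrative and not taken from the study's code.

```python
import numpy as np

def normalized_reaction_density(reaction_cells, n_pairs, cell_lengths):
    """Eq. (3): fraction of reactions in each mesh cell, normalized by the
    fraction of total network length in that cell (length-weighted mean ~ 1).

    reaction_cells : (n_pairs,) int array, mesh-cell index of each pair reaction
    n_pairs        : total number of simulated particle pairs
    cell_lengths   : (n_cells,) array of mesh segment lengths (~0.2 um each)
    """
    counts = np.bincount(reaction_cells, minlength=len(cell_lengths))
    total_length = cell_lengths.sum()
    return (counts / n_pairs) * (total_length / cell_lengths)

# Toy example: 5 equal-length cells with reactions concentrated in cell 0.
cells = np.array([0, 0, 0, 1, 2, 3, 4, 0, 1, 2])
lengths = np.full(5, 0.2)
gamma = normalized_reaction_density(cells, n_pairs=len(cells), cell_lengths=lengths)
print(gamma, "length-weighted mean:", np.average(gamma, weights=lengths))
```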
These simulations imply that heterogeneity in ER structure and accessibility is expected to result in diffusive particle reactions becoming concentrated within certain regions. For comparison, we repeat the simulations on two synthetic network structures: a homogeneous honeycomb network (Fig. 4B, left panel) and a highly heterogeneous modified Mikado network (64) (Fig. 4B, right panel; see Methods for details). Both of these networks have the same spatial extent and total network length as the extracted ER networks. This allows for a quantitative comparison of the reaction density and reaction time distributions between all three families of networks.
As expected, reaction locations are more uniformly distributed on the honeycomb network. Within this homogeneous network, reactions are slightly more likely to occur at junction nodes rather than along the edges, in keeping with past work
Figure 4: ER heterogeneity leads to hot spots of paired particle encounters. (A) Schematic of paired particle simulations. Pairs of particles (pink and blue circles) diffuse through the network (dashed lines indicate trajectories) until they encounter and react. (B) Normalized reaction density on three example networks, each with similar total edge length and spatial size. For each discretized segment of network, the fraction of simulated reactions occurring within that segment is normalized by the fraction of total edge length contained within that segment. Left panel, a homogeneous honeycomb network with the same average edge length as the ER network in the middle panel. Middle panel, ER network is extracted from a section of COS7 peripheral ER, and exhibits regions of higher reaction density than the homogeneous honeycomb (bright yellow segments). In the right panel, the normalized reaction density on a highly heterogeneous synthetic, Mikado-like network, exhibiting even more pronounced hot spots than the ER. (C) Paired reaction density on each segment of the ER is roughly correlated with the inverse of the global mean first passage time (GMFPT) to that segment. (D) Distribution of reaction rate densities for all discretized segments in honeycomb, ER, and Mikado-like networks, showing increasing heterogeneity in the densities. Insets show long tail of distribution plotted on log-log axes. Mean for each distribution is one, red overlay denotes standard deviation. (E) Distribution of paired reaction times in the three network structures. Dashed lines show fit to an exponential distribution. Dotted lines mark the mean reaction time. The target-averaged GMFPT on the ER networks indicated by the purple arrow is more than twice the mean pair reaction time.
showing random walkers are more likely to encounter each other at higher-degree network nodes [(61)]. There is also a dearth of reactions at the network boundary, mirroring the increased GMFPT in the boundary region (Fig. 1C). The ER networks show a similar drop-off in reaction density along edges as compared to junctions, as well as at the boundary. Moreover, due to the heterogeneous network density and connectivity, reactions are more concentrated into certain junctions within the network, with a higher maximum reaction density at these select junctions than is observed in the more uniform honeycomb. Reactions are further concentrated in the modified Mikado networks, demonstrating that more heterogeneous networks exhibit a broader range of reaction densities. This effect is quantified in Fig. 4D, where a longer tail is visible in the distribution of normalized reaction densities for ER and Mikado networks, as compared to the honeycomb. The morphology and connectivity of a network can thus tune the spatial distribution of reaction locations.
Network structure is not only responsible for shaping the spatial profile of reaction density, but can also affect the overall reaction time [(26, 28)]. The distribution of pair-wise reaction times on each network exhibits exponential scaling (Fig. 4E), as for a Poisson process with a single dominant time-scale. As noted in previous work, the mean reaction time on the ER (dashed purple line) is less than half of the target-averaged GMFPT (purple arrow) [(29)]. Even though there are higher spatial reaction densities in the more heterogeneous networks, mean reaction time is lowest in the homogeneous honeycombs and highest in the modified Mikado networks (Fig. 4E). Thus, there is a trade-off between locally concentrating reactions in space versus minimizing overall reaction time.
Given that our simulation results demonstrate the presence of disproportionately reactive regions within the peripheral ER network, we sought to examine whether certain ER-associated protein assemblies may be more likely to localize to such regions. Specifically, we explore the distribution of ER exit sites (ERES), which serve as the export hubs for newly synthesized proteins in the ER [(39, 40)]. The mechanism underlying the distribution of exit sites on the network is not well understood, although prior work has suggested they may arise from a process of confined diffusive aggregation [(65)]. The process of ERES formation is not modeled explicitly here. Instead we investigate whether these structures are more likely to be found within highly connected and reactive network regions (as measured by search rate, defined as 1/GMFPT).
Figure 5: Concentration of ERES in regions of the peripheral ER with high search rate. (A) Image of the ER (magenta, mCherry_KDEL) and ERES (green, GFP_Sec24d) of a COS7 cell. (B) Extracted network structure of peripheral ER (excluding nucleus and perinuclear sheet regions). Junctions are colored by their effective search rate (inverse GMFPT, units of \(1/\mathrm{s}\)). ERES positions are shown in green. (C) ERES positions exhibit higher effective search rates than random points on the network. Distributions consist of 1466 exit sites and 23k random points extracted from the peripheral ER of 23 different COS7 cells.
We extract ERES puncta locations using several different markers for the exit sites (Fig. 5A, details in Methods) and project these locations onto the extracted ER network structure (Fig. 5B). We next calculate the search rate to each ERES position, and compare the distribution of these rates to that expected for randomly selected locations along the networks (Fig. 5C). The distribution of search rates at the exit sites (mean \(\pm\) std: \(6.8\pm 2.8\times 10^{-3}\)s\({}^{-1}\)) is shifted to higher values as compared to the randomized control (mean \(\pm\) std: \(6.1\pm 2.7\times 10^{-3}\)s\({}^{-1}\)). Given the large numbers of exit sites and random points sampled (1466 exit sites, 23k random points), this difference is statistically significant, \(p\ll 10^{-6}\) by a one-sided Student's t-test. This indicates that the ERES are disproportionately likely to be found in highly connected regions of the ER network.
These results indicate a potential structure-function relationship for the peripheral ER network. Structural heterogeneity in the network translates to heterogeneous locations for reactions of diffusing particles. In turn, certain multi-protein assemblies within the ER network appear to be localized to the more highly reactive regions, where they can be more easily reached by other diffusive particles.
## Conclusion
In this work, we highlight the heterogeneous connectivity of the tubular ER network and its consequences for diffusive particle transport. We extract peripheral ER network structures from live-cell confocal images of COS7 cells and analytically compute mean first passage times (MFPTs) for particles diffusing over these networks. These calculations allow us to quantify the variability in diffusive accessibility within individual ER architectures. The global MFPT to individual nodes within the network is found to vary by up to 4-fold due to the heterogeneous connectivity of the network.
We then directly visualize the local spreading of ER membrane proteins from an initial region of pulsed photoactivation. Signal arrival rates to distinct regions equidistant from the photoactivated center show marked disparities (varying by more than a factor of 4 within a single cell). We compare these measurements to simulations of diffusing particles on the visualized ER network structure, and show that the simulated rates of arrival to distinct regions show strong agreement with experimental data. These results demonstrate the importance of network structure in guiding the observed heterogeneity in protein spread.
By modifying and extending a model for 'minimal networks' driven by membrane tension and new tubule growth [43], we assess the effect of ER network rearrangements on protein spread. The substantial separation of timescales between network dynamics and protein diffusivity leads to only a marginal predicted effect of tubule rearrangement on the motion of proteins within the ER.
Additionally, we simulate pairs of reactive particles diffusing through the ER and demonstrate that the structural heterogeneity of the network gives rise to effective hot spots where encounters are more likely to occur. Intriguingly, visualization and analysis of ERES positions across the peripheral ER indicates that these structures are more likely to be found in diffusively accessible hot-spot regions.
We note that the ER models in this study are intentionally highly simplified, reducing the complex membrane-enclosed geometry of the ER to a network of effectively one-dimensional tubules. These simplifications make it possible to focus on the role of cellular-scale network connectivity and rearrangements in particle transport. Notably, the simple structural model is sufficient to reproduce the observed heterogeneous protein arrival rates to different network regions in photoactivation experiments. More detailed structural models could include variability in tubule diameter [(66; 67)] as well as scattered peripheral sheets [(68)], which may themselves be perforated with holes [(60)] or composed of dense tubular matrices [(4)]. Exploring the effect of these structures on particle transport forms a potentially interesting avenue for future work.
The network dynamics model employed here aims to isolate the key important features governing ER rearrangements - namely, the formation of new tubules and the tension-driven movement of junctions [(32; 43; 69)]. Although network dynamics are shown to have little impact on protein diffusion, they are expected to play a greater role in the motion of larger and slower-moving ER-associated bodies such as the ERES [(70)]. Furthermore, it is possible that directed flows of luminal and / or membrane contents may be associated with the growth and shrinking of ER tubules, as implied by recent evidence that new tubule growth is followed by a delayed widening and infilling with Climp63 spacer proteins [(67)]. Although the spatial extent and magnitude of such flows is not currently established, they could more extensively contribute to modulating intra-ER protein motion.
The transport, quality control and export of proteins in the ER are essential biological processes in the early secretory pathway. These processes require a variety of encounters between newly manufactured proteins, chaperones, and regulatory factors. The structural heterogeneity of the ER network implies that certain regions may allow for more efficient encounters between binding partners. Notably, however, the effect of morphology becomes important only in the regime of diffusion-limited kinetics when the particles are sparsely scattered over the network [(26)]. The sequestration of some quality control machinery to specific regions of the ER [(42)] implies that long-range diffusive search by proteins through the network may be an important factor in the kinetics of such pathways.
In addition to protein transport, the results described here apply to any diffusive particles contained in the membrane or lumen of the peripheral ER network. This includes ions such as calcium, as well as the buffer proteins which bind to them.
In particular, we would expect the demonstrated structural heterogeneity of the ER to lead to more rapid calcium release in better-connected regions of the network. Given that calcium homeostasis and signaling is one of the key functional roles of the ER, heterogeneous transport could thus provide an important link between physical structure and biological function. Furthermore, it would be interesting to explore whether contacts between the peripheral ER and other cellular structures, such as mitochondria, tend to preferentially occur at highly connected regions, which may facilitate the delivery of lipids or ions across these contacts [71, 72, 73].
Through the use of experiments paired with quantitative image analysis and computational modeling, our results demonstrate how morphology guides particle transport and reactions in the ER, with broad implications for diffusive transport in any intracellular network structure.
## Author Contributions
ZCS, LMW, and EFK conceived and designed the research and wrote the manuscript. ZCS and EFK developed and implemented the model for both particle diffusion and network dynamics. ZCS analyzed imaging and simulation data. KK, MV, LC, and LMW generated experimental data and performed imaging studies.
## Acknowledgments
We thank Dr. Edward Avezov and Dr. Eric Arnoys for helpful discussions and the Van Andel Institute Optical Imaging Core (RRID:SCR_021968), especially Corinne Esquibel for their assistance with the Zeiss LSM 880. Funding was provided by the National Science Foundation (Grant ID #2034482 to EFK and #2034486 to LMW), as well as a Cottrell Scholar Award from the Research Corporation for Science Advancement to EFK.
|
2308.11047 | Harmonization Across Imaging Locations(HAIL): One-Shot Learning for
Brain MRI | For machine learning-based prognosis and diagnosis of rare diseases, such as
pediatric brain tumors, it is necessary to gather medical imaging data from
multiple clinical sites that may use different devices and protocols. Deep
learning-driven harmonization of radiologic images relies on generative
adversarial networks (GANs). However, GANs notoriously generate pseudo
structures that do not exist in the original training data, a phenomenon known
as "hallucination". To prevent hallucination in medical imaging, such as
magnetic resonance images (MRI) of the brain, we propose a one-shot learning
method where we utilize neural style transfer for harmonization. At test time,
the method uses one image from a clinical site to generate an image that
matches the intensity scale of the collaborating sites. Our approach combines
learning a feature extractor, neural style transfer, and adaptive instance
normalization. We further propose a novel strategy to evaluate the
effectiveness of image harmonization approaches with evaluation metrics that
both measure image style harmonization and assess the preservation of
anatomical structures. Experimental results demonstrate the effectiveness of
our method in preserving patient anatomy while adjusting the image intensities
to a new clinical site. Our general harmonization model can be used on unseen
data from new sites, making it a valuable tool for real-world medical
applications and clinical trials. | Abhijeet Parida, Zhifan Jiang, Syed Muhammad Anwar, Nicholas Foreman, Nicholas Stence, Michael J. Fisher, Roger J. Packer, Robert A. Avery, Marius George Linguraru | 2023-08-21T21:13:30Z | http://arxiv.org/abs/2308.11047v1 | # Harmonization Across Imaging Locations (HAIL): One-Shot Learning for Brain MRI
###### Abstract
For machine learning-based prognosis and diagnosis of rare diseases, such as pediatric brain tumors, it is necessary to gather medical imaging data from multiple clinical sites that may use different devices and protocols. Deep learning-driven harmonization of radiologic images relies on generative adversarial networks (GANs). However, GANs notoriously generate pseudo structures that do not exist in the original training data, a phenomenon known as "hallucination". To prevent hallucination in medical imaging, such as magnetic resonance images (MRI) of the brain, we propose a one-shot learning method where we utilize neural style transfer for harmonization. At test time, the method uses one image from a clinical site to generate an image that matches the intensity scale of the collaborating sites. Our approach combines learning a feature extractor, neural style transfer, and adaptive instance normalization. We further propose a novel strategy to evaluate the effectiveness of image harmonization approaches with evaluation metrics that both measure image style harmonization and assess the preservation of anatomical structures. Experimental results demonstrate the effectiveness of our method in preserving patient anatomy while adjusting the image intensities to a new clinical site. Our general harmonization model can be used on unseen data from new sites, making it a valuable tool for real-world medical applications and clinical trials.
Keywords:Image Harmonization Domain Adaptation One-shot Learning Style Transfer Adaptive Instance Normalization Magnetic Resonance Imaging
## 1 Introduction
Deep learning (DL)-based models trained on large radiologic data with high-quality labels are effective for clinical diagnosis and trials. However, to achieve clinically useful outcomes for rare diseases such as pediatric brain tumors, data collection requires collaboration between multiple clinical centers. Only then can the amount of data generally required to effectively train such models be
made available. Since clinical centers use different imaging equipment and often varying acquisition protocols, we are presented with significant challenges for the analysis and interpretation of radiological imaging data such as magnetic resonance imaging (MRI). Since there is no underlying standardized unit in MRIs, they may have different intensities and anatomical resolutions. Further, MRIs are subject to domain shifts arising from a wide range of scanning parameters and differences in populations across clinical centers. Such domain shifts between training and testing data (e.g. new unseen site) could lead to increased errors in clinical tasks performed using machine learning algorithms [6, 28, 29]. Therefore, multi-site data must be pre-processed with harmonization to obtain a uniform appearance and allow machine learning algorithms to be effectively trained [21]. However, such intensity harmonizations could adversely affect anatomical information in a scan, if not properly managed.
The diversity in medical imaging data poses challenges to traditional but limited intensity harmonization methods, such as histogram matching [18, 23]. Deep learning approaches that map an image from a source to a target domain have the additional benefit of combining spatial and anatomical feature information to achieve intensity harmonization. These methods typically rely on types of generative adversarial networks (GANs) [5], such as conditional GANs [16] that translate between domains using paired images. However, it is rare to find medical images from the same patient acquired at multiple sites. Alternatives like CycleGAN [16] learn two GANs by enforcing cycle consistency, thus forgoing the need for paired data. In addition, unsupervised image-to-image translation (UNIT) [13] combines GANs with a variational autoencoder [10] and uses a shared latent space for harmonization. The UNIT model has been applied to MRI data to generate a harmonized optimal domain, but exclusively for segmentation [26]. Unfortunately, GAN-based methods do not enforce structural consistency to preserve patient anatomy during image transformation. Conserving patient anatomy is paramount for accurate diagnosis and treatment. Without structural consistency, the generated images could lose clinically relevant details [30].
Therefore, we focus on intensity harmonization for MRI data, while preserving patient-specific anatomical information. The first inspiration for our work is neural style transfer (NST), a technique that uses neural networks to generate images by combining the anatomy of an input image and the intensity of a target image [8]. The main assumption is that the patient anatomy in an MRI scan remains the same, regardless of the imaging site [14]. The differences in MRI appearance are due to changes in scanners or protocols. An adaptive instance normalization (AdaIN) module [8] aligns the distribution of the anatomical features with that of the target features to achieve harmonized features. For 2D image harmonization, NST methods employ pre-trained VGG models [24] as feature extractors. The advantages of using such a pre-trained model diminish when the new task deviates from the task for which the model was initially trained [11]. Therefore, for an optimal MRI feature extractor, we need to train a 3D feature extractor specific to the downstream data. NST methods, such as [14, 15, 27], jointly minimize two losses for the prediction: a content loss from the input image and a
style loss from the target image. We design an NST framework to handle 3D data using the 3D feature extractor and AdaIN to minimize style and content losses.
The second inspiration for the study is the one-shot learning technique, which learns from a limited set of data, making it a valuable tool for rare diseases [19]. While few-shot learning for image-to-image translation has been used in image registration [7], for image harmonization, we must translate the intensity while preserving the anatomy. One training strategy used for one-shot learning is called meta-learning [3, 22], which learns a model in two stages: an unrelated training stage (the meta-learning phase) and a task-specific learning stage [9, 19]. Convolutional Siamese networks are common one-shot learning architectures [11], which have branched networks to learn highly discriminative representations of the inputs, even with limited training data [25]. Branched networks can predict outcomes on unseen data by enforcing similarity at test time, which is an advantage for medical imaging tasks.
To address these requirements for medical image harmonization, we propose the harmonization across imaging locations (HAIL) framework illustrated in Figure 1. Our novel method has four major contributions:
1. Novel modular NST framework that harmonizes 3D medical images.
2. One-shot learning image harmonization framework that learns broad features, thus generalizing to data from unseen test sites using one target sample.
3. Novel metrics for measuring intensity harmonization and preservation of anatomical structures to allow future methods to be fairly compared.
4. Evaluation of the effectiveness of the proposed approach for harmonizing multi-site MRI data from rare diseases, i.e., pediatric brain tumors.
## 2 Method & Experimental Setup
### Image Harmonization
The proposed framework, HAIL, uses one-shot learning and has two phases: 1) pre-training a feature extractor via meta-learning and 2) learning a task-specific 3D NST model with AdaIN [8]. We use four different losses for model training: reconstruction loss, consistency loss, style loss, and content loss. The reconstruction loss is used to train the 3D feature extractor. The content loss ensures similarity in the activation of the higher layers for the input and predicted images [8], whereas the style loss ensures similar feature statistics for the prediction and target images [8]. We introduce a consistency loss into the loss landscape to prevent harmonization when the target and input images are already similar.
#### 2.1.1 Phase 1: Pre-training a feature extractor.
In this meta-learning phase, we trained an encoder-decoder architecture (Appendix A) to compress and reconstruct an image (Figure 1). The training is governed by reconstruction loss
and in the process, a latent space is generated which is used by the decoder for image reconstruction. Later, we froze the encoder parameters and used them to extract features and compute the content and style losses in phase 2.
**Implementation details:** Images from all three sites A, B, and C were divided into training and validation sets using an 80:20 split. The input image was 3D cropped to a \(64\times 64\times 64\) patch. The algorithms were implemented in the Lightning [2] framework and trained on an NVIDIA RTX A5000 using half-precision (FP16). The encoder-decoder was optimized to minimize the reconstruction loss using the AdamW optimizer, a batch size of 48, and a learning rate of \(1e^{-4}\). The reconstruction loss was a combination of L1 and structural similarity (SSIM) [17] losses with equal weights. The model was trained for 1,000 epochs and the best validation model was saved for phase 2.
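A minimal sketch of the equally weighted L1 + SSIM reconstruction objective described above is given below. The `ssim_loss` callable is assumed to be supplied by a library implementation (for example the SSIM loss shipped with MONAI [17]); whether "equal weights" means coefficients of 1 or 0.5 is not specified, so the defaults here are an assumption.

```python
import torch.nn as nn

class ReconstructionLoss(nn.Module):
    """Equal-weight combination of L1 and SSIM losses for phase-1 training.

    `ssim_loss` is any callable returning a structural-similarity *loss*
    (i.e. 1 - SSIM) for 3D volumes, e.g. a library implementation.
    """
    def __init__(self, ssim_loss, w_l1=0.5, w_ssim=0.5):
        super().__init__()
        self.l1 = nn.L1Loss()
        self.ssim_loss = ssim_loss
        self.w_l1, self.w_ssim = w_l1, w_ssim

    def forward(self, recon, target):
        return self.w_l1 * self.l1(recon, target) + self.w_ssim * self.ssim_loss(recon, target)

# Illustrative use (encoder/decoder definitions omitted):
#   recon = decoder(encoder(patch))            # patch: (B, 1, 64, 64, 64)
#   loss = ReconstructionLoss(my_ssim_loss)(recon, patch)
```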
#### 2.1.2 Phase 2: Training a style transfer model for one-shot learning.
Figure 1: **Harmonization across imaging locations (HAIL) framework.** The input and target MRIs each pass through a 3D feature extractor to produce latent representations. These representations are then passed through a 3D adaptive instance normalization (AdaIN) module, which translates them for the decoder to produce the predicted image-harmonized MRI. The loss function includes a consistency loss, which serves as a regularizer to prevent over-correction during image harmonization. The style loss and content loss are calculated based on features extracted by the layers of the pre-trained 3D feature extractor.
To learn the task-specific and dataset-agnostic style transfer between the 3D images, we used a Convolutional Siamese network (Figure 1). The twin network with identical weights reused the frozen encoder from phase 1 to extract the input and target image features. The target image acted as the single example for the one-shot image harmonization. The input features are translated to the target site using AdaIN [8]. The decoder, with the same architecture as in phase 1, takes the translated features and generates a stylized image corresponding to the intensity-harmonized image.
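The AdaIN translation step itself is compact. The sketch below follows the standard formulation of [8], extended to 5D tensors (batch, channel, and three spatial dimensions); it is an illustrative implementation rather than the exact module used in HAIL.

```python
import torch

def adain_3d(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5):
    """Adaptive instance normalization for 3D feature maps.

    content_feat, style_feat : (B, C, D, H, W) encoder features of the input
    and target images. The content features are normalized per channel and
    re-scaled to the channel-wise mean/std of the style (target) features.
    """
    dims = (2, 3, 4)  # spatial dimensions
    c_mean = content_feat.mean(dim=dims, keepdim=True)
    c_std = content_feat.std(dim=dims, keepdim=True) + eps
    s_mean = style_feat.mean(dim=dims, keepdim=True)
    s_std = style_feat.std(dim=dims, keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

# Usage sketch: harmonized = decoder(adain_3d(encoder(input_mri), encoder(target_mri)))
```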
**Implementation details:** The images from sites A and B were divided into training, validation, and testing sets using a 70:20:10 split. Site C was reserved to test the generalizability of the HAIL framework in one-shot learning. Each instance in a batch has a pair of images: an input and a target. The input image was 3D cropped to a \(64\times 64\times 64\) patch. The paired target image patch was created by cropping the corresponding location of the target image. One instance of the batch used a site A image as the input and a site B image as the target; the next instance used the site B image as the input and the site A image as the target. In this way, we learned harmonization between sites (A \(\rightarrow\) B and B \(\rightarrow\) A) simultaneously. This, combined with the random sampling of image pairs during training, helps prevent overfitting and trains one-shot learners. The decoder was optimized to minimize the content [8], style [8], and consistency loss functions using the AdamW optimizer with an initial learning rate of \(1e^{-4}\) and a batch size of 32. The learning rate decayed by a factor of 0.8 when the validation loss plateaued. The consistency loss was a combination of the L1 and SSIM losses with equal weights. The weights (\(\lambda\)) between the style, content, and consistency losses were \(\lambda_{style}=100\), \(\lambda_{content}=150\), and \(\lambda_{consistency}=200\), respectively; they were chosen to bring the losses to the same order of magnitude (\(10^{-1}\)). The model was trained for 1,000 epochs and the best validation model was saved to report metrics.
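A sketch of how these weighted losses can be combined is shown below. The style loss matches channel-wise feature means and standard deviations across encoder layers and the content loss compares the prediction's deepest features to the AdaIN-translated features, following the recipe of [8]; exactly which image pair enters the consistency term is not spelled out above, so pairing the prediction with the input image here is an assumption, as are all variable names.

```python
import torch.nn.functional as F

def feat_stats(feat, eps=1e-5):
    # Channel-wise mean and std over the spatial axes of a (B, C, D, H, W) tensor.
    dims = (2, 3, 4)
    return feat.mean(dim=dims), feat.std(dim=dims) + eps

def style_loss(pred_feats, target_feats):
    # Match feature statistics layer by layer (style loss of [8]).
    loss = 0.0
    for p, t in zip(pred_feats, target_feats):
        pm, ps = feat_stats(p)
        tm, ts = feat_stats(t)
        loss = loss + F.mse_loss(pm, tm) + F.mse_loss(ps, ts)
    return loss

def content_loss(pred_deep_feat, adain_feat):
    # Predicted deep features should match the AdaIN-translated features.
    return F.mse_loss(pred_deep_feat, adain_feat)

def total_loss(pred_feats, target_feats, adain_feat, pred_img, input_img,
               consistency_loss, w_style=100.0, w_content=150.0, w_consistency=200.0):
    # consistency_loss: an L1 + SSIM combination, as in phase 1.
    return (w_style * style_loss(pred_feats, target_feats)
            + w_content * content_loss(pred_feats[-1], adain_feat)
            + w_consistency * consistency_loss(pred_img, input_img))
```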
### Image Harmonization Evaluation
The evaluation strategy assesses 1) intensity harmonization, i.e., whether the appearance of the predicted image matches that of the target image, and 2) anatomy preservation, i.e., whether the structures in the input image are preserved even after harmonization. To this end, we propose using the Wasserstein distance (WD) [20] to evaluate intensity harmonization by measuring the movement of intensity histograms. We chose WD over the Jensen-Shannon (JS) or Kullback-Leibler (KL) divergences, since the JS divergence is a fixed value for non-overlapping distributions, and the KL divergence is not defined for non-overlapping distributions [12]. We define \(WD(i,t)\), the WD between the input (\(i\)) and target (\(t\)) images, as the upper bound for the model prediction performance. To make the metric agnostic to the magnitude of the intensity scales at different sites and comparable between sites, we report the normalized WD, defined as
\[nWD(i,p)\%=\frac{WD(i,p)}{WD(i,t)}\times 100\quad\text{and}\quad nWD(t,p)\%= \frac{WD(t,p)}{WD(i,t)}\times 100, \tag{1}\]
where \(WD(i,p)\) is the WD between the input (\(i\)) and prediction (\(p\)), and \(WD(t,p)\) that between the target (\(t\)) and prediction (\(p\)). For good performance in intensity harmonization, we expect a large \(nWD(i,p)\) and a small \(nWD(t,p)\).
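A sketch of how the metrics in Eq. (1) can be computed is shown below. It uses SciPy's one-dimensional `wasserstein_distance` on flattened voxel intensities; treating the volumes as 1D intensity samples (and optionally subsampling them for speed) is an assumption about how the histogram movement is measured.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def nwd_metrics(input_img, target_img, pred_img):
    """Normalized Wasserstein distances of Eq. (1), in per cent.

    Each argument is an intensity volume; distances are computed between
    the flattened voxel-intensity distributions.
    """
    i, t, p = (np.asarray(x).ravel() for x in (input_img, target_img, pred_img))
    wd_it = wasserstein_distance(i, t)            # upper bound: input vs. target
    nwd_ip = 100.0 * wasserstein_distance(i, p) / wd_it
    nwd_tp = 100.0 * wasserstein_distance(t, p) / wd_it
    return nwd_ip, nwd_tp                         # want nwd_ip large, nwd_tp small
```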
To evaluate anatomy preservation, we propose using a method that automatically segments anatomical structures in the input and the predicted image
for comparison. This also checks whether the output is suitable to be used for a downstream DL-based task, such as segmentation. Since minor changes in clinical information may be critical, we propose using the relative absolute volume difference (rAVD) for comparing the segmentation results.
\[rAVD\%=\frac{|vol(p)-vol(i)|}{vol(i)}\times 100, \tag{2}\]
where \(vol(i)\) and \(vol(p)\) denote input and prediction volumes for a structure. For good performance in anatomy preservation, we expect a small \(rAVD\).
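The corresponding anatomy-preservation metric of Eq. (2) is equally simple to compute from the segmentation masks; treating the masks as boolean voxel arrays with a known voxel volume is an assumption about the segmentation output format.

```python
import numpy as np

def ravd_percent(mask_input, mask_pred, voxel_volume_mm3=1.0):
    """Relative absolute volume difference of Eq. (2), in per cent.

    mask_input, mask_pred : boolean arrays marking one structure (e.g. GM or WM)
    in the input image and in the harmonized prediction, respectively.
    """
    vol_i = np.count_nonzero(mask_input) * voxel_volume_mm3
    vol_p = np.count_nonzero(mask_pred) * voxel_volume_mm3
    return 100.0 * abs(vol_p - vol_i) / vol_i
```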
**Implementation details:** For calculating \(nWD\) when harmonizing from A \(\rightarrow\) B, we picked one sample from site B (the example for one-shot learning) as the target and made predictions on test samples from site A as inputs. To segment anatomy, we used a robust model from Freesurfer v7 [4] to segment the brain gray matter (GM) and white matter (WM). Freesurfer models have been trained on large datasets and are robust to a wide range of data shifts. Most importantly, they are publicly accessible and their performance is regarded as acceptable by the community.
## 3 Results
### Data and Pre-processing
We collected full head MRIs of pediatric brain tumor patients from three clinical sites: A, B, and C. Each site provided \(n=60\) 3D T1-weighted MRIs using different scanners and acquisition protocols (details in Table 1). We applied N4 bias field correction and, using an MRI from site A as the reference, performed inter-subject rigid registration with advanced normalization tools (ANTs) [1].
Due to computational resource limitations and the fact that we focus only on intensity harmonization, the MRI resolution was changed to \(1\times 1\times 1\ mm^{3}\) and the images were resized to \(256\times 256\times 256\) voxels. All voxels were re-scaled to \([0,1]\) using min-max normalization. The inverse transforms were stored to convert the images back to clinically meaningful values.
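A sketch of the invertible min-max step is shown below: the per-image extrema are stored so the harmonized output can be mapped back to the original, clinically meaningful intensity range. The function names are illustrative.

```python
import numpy as np

def minmax_normalize(img):
    """Rescale voxel intensities to [0, 1] and return the parameters needed
    to undo the transform after harmonization."""
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / (hi - lo), (lo, hi)

def minmax_denormalize(img01, params):
    lo, hi = params
    return img01 * (hi - lo) + lo

# Usage sketch:
#   norm, params = minmax_normalize(mri)
#   pred = harmonize(norm)                     # model inference in [0, 1]
#   pred_clinical = minmax_denormalize(pred, params)
```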
### Image Harmonization
**Intensity harmonization:** As shown in Table 2, HAIL achieves a much higher \(nWD(i,p)\) (average 94.6%) than \(nWD(t,p)\) (average 11.9%), so the prediction has moved away from the input intensity domain and is closer to the
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline & MANUFACTURE & \begin{tabular}{c} ACQUISITION \\ PLANE \\ \end{tabular} & \begin{tabular}{c} ECHO \\ TIME (\(ms\)) \\ \end{tabular} & \begin{tabular}{c} REPETITION \\ TIME (\(ms\)) \\ \end{tabular} &
\begin{tabular}{c} IN-PLANE (\(mm^{2}\)) \\ \end{tabular} & SLICE (\(mm\)) \\ \hline SITE A & General Electric & Axial & 10.5 & 600 & \(0.41\times 0.41\) & 0.6 \\ SITE B & Siemens & Sagittal & 2.5 & 1900 & \(0.82\times 0.82\) & 0.9 \\ SITE C & Phillips & Coronal & 3.8 & 8.23 & \(0.94\times 0.94\) & 1.0 \\ \hline \end{tabular}
\end{table}
Table 1: **Dataset summary** displays the acquisition protocols for pediatric brain MRIs at each site.
target. This is visually confirmed in Fig. 2, where the predicted intensity resembles the target intensity. Further, \(nWD(t,p)\) is low for both seen and unseen sites, indicating that HAIL is not specific to the style transfer A \(\rightarrow\) B \(\rightarrow\) A, but can be used for transfers between any pairs of sites. To test this outcome, we added data from an independent site C, and used a single target image to demonstrate generalizability of the one-shot harmonization strategy (Table 2 unseen sites).
**Anatomy preservation:**
Visual inspection of Fig. 2 for both seen and unseen sites shows the input and prediction have similar shapes, sizes and structures. Quantitatively, as seen in Table 2, the perceived anatomical change due to harmonization is rAVD = 7.06% for GM and rAVD = 13.42% for WM. Thus, HAIL preserves the anatomy well within the clinically acceptable margin of error.
### Comparison with State-of-the-Art
We compared the performance of HAIL with a GAN-based NST approach [14], which harmonized 2D images and aggregated them to generate a 3D output. As shown in Table 2, for the seen sites, the model in [14] performs similarly for \(nWD(i,p)\) and better for \(nWD(t,p)\) by \(\sim 2\%\) when compared with HAIL. However, the HAIL strategy achieved a better \(rAVD\), which is clinically a meaningful metric. Further, for the unseen sites, we had two observations. First, the GAN-based model failed to converge and produce meaningful output for two samples during the A \(\rightarrow\) C harmonization, while HAIL converged on all data. Second, HAIL significantly outperformed the approach in [14] on \(nWD(i,p)\), \(nWD(t,p)\), and \(rAVD(WM)\) (\(p\leq 0.05\)) by an average margin of 11%, 7%, and 16%, respectively. The performance was similar for \(rAVD(GM)\), where the improvement was 2%. This suggests that HAIL generalizes better than the GAN-based model when learning from a small training dataset and with the addition of a new site.
Figure 2: **Qualitative results of image harmonization.** We show axial, sagittal, and coronal slices of the 3D input, target, and predicted MRIs. The predicted MRI preserved the anatomical structures from the input MRI, while the intensities are aligned with those of the target MRI. The image shows good harmonization of the model for data from both seen and unseen sites.
### Impact of Consistency Loss
We hypothesized that the consistency loss in HAIL acts as a regularizer and aids better image harmonization performance in a one-shot manner. To investigate this, the model was retrained using the exact same parameters and seeds but with \(\lambda_{consistency}=0\). For intensity harmonization, the model trained with the consistency loss had lower performance on the seen sites in terms of \(nWD\) (\(p\leq 0.05\)), as seen in Table 3. However, the performance with the consistency loss was better for the unseen sites by a margin of 9% for \(nWD(i,p)\) and 10% for \(nWD(t,p)\) (\(p\leq 0.05\) for both). For anatomy preservation, the model with the consistency loss performed better on both seen data (2% for \(rAVD(GM)\) and 6% for \(rAVD(WM)\), \(p\leq 0.05\) for both) and unseen data (6% for \(rAVD(GM)\) and 8% for \(rAVD(WM)\), \(p\leq 0.05\) for both). These findings have implications for the design and optimization of image harmonization models, as they demonstrate that incorporating a consistency loss is important for generalizability to unseen sites.
## 4 Conclusion
Rare diseases present unique challenges for clinical trial design and implementation due to limited data availability. In our study, we suggest that using a deep learning framework for image harmonization (HAIL) can improve the quality of multi-site data and increase the statistical power of analyses. We showed how a neural style transfer model can achieve good intensity harmonization for 3D medical scans by learning generic features, which allows training generic image harmonization models. These methods are one-shot learners as they can adapt
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c} \hline \multirow{2}{*}{Sites} & \multicolumn{2}{c|}{nWD(i,p) \%} & \multicolumn{2}{c|}{nWD(t,p) \%} & \multicolumn{2}{c|}{rAVD(GM) \%} & \multicolumn{2}{c}{rAVD(WM) \%} \\ \cline{3-10} & & HAIL & Liu et. al.[14] & HAIL & Liu et. al.[14] & HAIL & Liu et. al.[14] & HAIL & Liu et. al.[14] \\ \hline \multirow{3}{*}{\begin{tabular}{c} A \(\rightarrow\) B \\ B \(\rightarrow\) A \\ \end{tabular} } & A \(\rightarrow\) B & **92.27\(\pm\)2.03** & 90.63\(\pm\)3.88 & 15.71\(\pm\)0.31 & **12.76\(\pm\)0.63** & **6.99\(\pm\)16.76** & 12.05\(\pm\)19.31 & **18.86\(\pm\)35.28** & 43.94\(\pm\)26.64 \\ & B \(\rightarrow\) A & 96.16\(\pm\)2.01 & **96.75\(\pm\)3.31** & 9.04\(\pm\)0.21 & **7.47\(\pm\)0.68** & 6.78\(\pm\)4.71 & **6.27\(\pm\)4.71** & **7.69\(\pm\)8.47** & 21.40\(\pm\)12.70 \\ \hline \multirow{3}{*}{\begin{tabular}{c} avg \\ A \(\rightarrow\) C \\ B \(\rightarrow\) C \\ \end{tabular} } & **94.22** & 93.69 & 12.38 & **10.12** & **6.89** & 9.16 & **13.28** & 32.67 \\ & A \(\rightarrow\) C & **94.81\(\pm\)2.28** & 73.04\(\pm\)4.27 & **9.43\(\pm\)0.15** & **27.31\(\pm\)0.43** & **12.50\(\pm\)12.55** & 21.99\(\pm\)0.47** & **19.97\(\pm\)19.58** & 68.45\(\pm\)46.36 \\ & C \(\rightarrow\) A & **97.99\(\pm\)2.69** & 85.83\(\pm\)5.13 & 13.87\(\pm\)0.18 & **11.64\(\pm\)0.75** & **4.16\(\pm\)2.77** & 4.17\(\pm\)4.67 & **9.03\(\pm\)5.90** & 17.07\(\pm\)19.92 \\ & B \(\rightarrow\) C & **94.59\(\pm\)2.25** & 85.11\(\pm\)3.60 & **7.12\(\pm\)0.14** & 15.45\(\pm\)0.36 & 9.73\(\pm\)9.32 & **6.79\(\pm\)7.73** & 21.79\(\pm\)15.45 & **18.88\(\pm\)13.07** \\ & C \(\rightarrow\) B & **91.92\(\pm\)3.92** & 88.90\(\pm\)7.36 & **16.06\(\pm\)0.71** & 19.21\(\pm\)0.53 & **21.71\(\pm\)2.16** & 3.86\(\pm\)3.56 & **3.16\(\pm\)2.92** & 8.58\(\pm\)22.92 \\ \hline \multirow{3}{*}{
\begin{tabular}{c} avg \\ Overall \\ \end{tabular} } & **94.83** & 83.27 & **11.62** & 18.40 & **7.14** & 9.20 & **13.49** & 28.23 \\ \cline{2-8} & Overall & **94.63** & 88.46 & **11.87** & 14.26 & **7.06** & 9.18 & **13.42** & 30.45 \\ \hline \end{tabular}
\end{table}
Table 2: **Quantitative results** for image harmonization calculated for various sites, the metrics are presented as avg\(\pm\)std across all test samples in the dataset. Higher \(nWD(i,p)\) compared to \(nWD(t,p)\) indicates good harmonization of intensities, while low \(rAVD\) means anatomies are preserved during the harmonization. ”\(\star\)” shows significant(\(p<=0.05\)) performance differences between HAIL and Liu et al. method [14] using the Wilcoxon signed-rank test.
an input image to a target intensity domain by using only one image from unseen data at test time. We also proposed metrics that would allow future methods for medical image harmonization to be fairly compared. Our results demonstrated that HAIL improved the consistency of multi-site, multi-protocol data and could lead to better generalizability of deep learning models.
\begin{table}
\begin{tabular}{c|c c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Sites} & \multicolumn{3}{c|}{nWD(i,p) \%} & \multicolumn{3}{c|}{nWD(t,p) \%} & \multicolumn{3}{c|}{rAVD(GM) \%} & \multicolumn{3}{c}{rAVD(WM) \%} \\ \cline{2-11} & \multicolumn{3}{c|}{with loss} & \multicolumn{1}{c|}{without loss} & \multicolumn{1}{c|}{with loss} & \multicolumn{1}{c|}{without loss} & \multicolumn{1}{c|}{with loss} & \multicolumn{1}{c|}{with loss} & \multicolumn{1}{c|}{without loss} & \multicolumn{1}{c}{with loss} & \multicolumn{1}{c}{without loss} \\ \hline \(\mathbf{x}\) & \(\Lambda\rightarrow\) B & 92.27\(\pm\)2.03 & **96.32\(\pm\)1.91\({}^{*}\)** & 15.71\(\pm\)0.31 & **9.58\(\pm\)0.16\({}^{*}\)** & **6.99\(\pm\)16.76** & 8.18\(\pm\)14.49\({}^{*}\) & **18.86\(\pm\)35.28** & 23.53\(\pm\)54.64 \\ \(\mathbf{B}\rightarrow\) A & **96.16\(\pm\)2.01** & 96.09\(\pm\)1.91 & 9.04\(\pm\)0.21 & **7.24\(\pm\)0.12\({}^{*}\)** & **6.78\(\pm\)4.71** & 8.35\(\pm\)5.71\({}^{*}\) & **7.69\(\pm\)8.47** & 15.42\(\pm\)10.51\({}^{*}\) \\ \hline \(\mathbf{w}\) avg & 94.22 & **96.21** & 12.38 & **8.41** & **6.89** & 8.27 & **13.28** & 19.47 \\ \hline \(\mathbf{A}\rightarrow\) C & **94.81\(\pm\)2.28** & 68.07\(\pm\)1.96\({}^{*}\) & **9.43\(\pm\)0.15** & 35.41\(\pm\)0.23\({}^{*}\) & **12.50\(\pm\)12.55** & 28.20\(\pm\)14.36\({}^{*}\) & **19.97\(\pm\)19.58** & 34.54\(\pm\)20.01\({}^{*}\) \\ \(\mathbf{C}\rightarrow\) A & 97.99 \(\pm\)2.69 & **99.83\(\pm\)2.64** & 13.87\(\pm\)0.18 & **12.22\(\pm\)0.13** & **4.16\(\pm\)2.77** & 4.93\(\pm\)3.68 & **9.03\(\pm\)5.90** & 15.07\(\pm\)9.92\({}^{*}\) \\ \(\mathbf{B}\rightarrow\) C & **94.59\(\pm\)2.25** & 79.69\(\pm\)1.83\({}^{*}\) & **7.12\(\pm\)0.14** & 25.51\(\pm\)0.19\({}^{*}\) & **9.73\(\pm\)9.32** & 17.15\(\pm\)10.36\({}^{*}\) & **21.79\(\pm\)15.45** & 29.32\(\pm\)18.67\({}^{*}\) \\ \(\mathbf{C}\rightarrow\) B & 91.92\(\pm\)3.92 & **95.25\(\pm\)3.92\({}^{*}\)** & 16.06\(\pm\)0.17 & **14.47\(\pm\)0.19\({}^{*}\)** & **2.17\(\pm\)2.16** & 2.22\(\pm\)2.13 & **3.16\(\pm\)2.92** & 5.04\(\pm\)5.29\({}^{*}\) \\ \hline avg & **94.83** & 85.71 & **11.62** & 21.90 & **7.14** & 13.13 & **13.49** & 21.18 \\ \hline Overall & **94.63** & 90.96 & **11.87** & 15.16 & **7.06** & 10.70 & **13.42** & 20.33 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Impact of consistency loss** on image harmonization, the metrics are presented as avg\(\pm\)std across all test samples in the dataset. ”\(\star\)” shows significant (\(p<=0.05\)) performance differences between HAIL with and without the consistency loss using the Wilcoxon signed-rank test.
## 5 Acknowledgments
This work was possible due to the support from the National Cancer Institute (Grant No: UG3CA236536) and US Department of Defense (Grant No : W81XWH1910376).
## References
* [1] A. Avants, N. Tustison, M. Stauffer, G. Song, B. Wu, and J. Gee (2014) The insight toolkit image registration framework. Frontiers in Neuroinformatics 8, pp. 44.
* [2] W. Falcon and The PyTorch Lightning team (2019) PyTorch Lightning.
* [3] C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pp. 1126-1135.
* [4] B. Fischl (2012) Freesurfer. Neuroimage 62 (2), pp. 774-781.
* [5] I. Goodfellow (2016) NIPS 2016 tutorial: generative adversarial networks. arXiv preprint arXiv:1701.00160.
* [6] H. Guan and M. Liu (2023) DomainAT: domain adaptation toolbox for medical data analysis. NeuroImage, pp. 119863.
* [7] Y. He, T. Li, R. Ge, J. Yang, Y. Kong, J. Zhu, H. Shu, G. Yang, and S. Li (2021) Few-shot learning for deformable medical image registration with perception-correspondence decoupling and reverse teaching. IEEE Journal of Biomedical and Health Informatics 26 (3), pp. 1177-1187.
* [8] X. Huang and S. Belongie (2017) Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1501-1510.
* [9] R. Khadka, D. Jha, S. Hicks, V. Thambawita, M. A. Riegler, S. Ali, and P. Halvorsen (2022) Meta-learning with implicit gradients in a few-shot setting for medical image segmentation. Computers in Biology and Medicine 143, pp. 105227.
* [10] D. P. Kingma, M. Welling, et al. (2019) An introduction to variational autoencoders. Foundations and Trends in Machine Learning 12 (4), pp. 307-392.
* [11] G. Koch, R. Zemel, R. Salakhutdinov, et al. (2015) Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, Vol. 2, Lille.
* [12] S. Kolouri, P. E. Pope, C. E. Martin, and G. K. Rohde (2018) Sliced-Wasserstein autoencoder: an embarrassingly simple generative model. arXiv preprint arXiv:1804.01947.
* [13] M. Y. Liu, T. Breuel, and J. Kautz (2017) Unsupervised image-to-image translation networks. In NIPS'17, pp. 700-708.
* [14] M. Liu, P. Maiti, S. Thomopoulos, A. Zhu, Y. Chai, H. Kim, and N. Jahanshad (2021) Style transfer using generative adversarial networks for multi-site MRI harmonization. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 313-322.
* [15] S. Liu and P. T. Yap (2021) Learning multi-site harmonization of magnetic resonance images without traveling human phantoms.
* [16] Mirza, M., Osindero, S.: Conditional generative adversarial nets. arXiv:1411.1784 (2014)
* [17] MONAI Consortium: MONAI: Medical Open Network for AI (9 2022), [https://github.com/Project-MONAI/MONAI](https://github.com/Project-MONAI/MONAI)
* [18] Nyul, L., Udupa, J., Zhang, X.: New variants of a method of mri scale standardization. IEEE Transactions on Medical Imaging **19**(2), 143-150 (2000). [https://doi.org/10.1109/42.836373](https://doi.org/10.1109/42.836373)
* [19] Parida, A., Tran, A., Navab, N., Albarqouni, S.: Learn to segment organs with a few bounding boxes. CoRR **abs/1909.07809** (2019), [http://arxiv.org/abs/1909.07809](http://arxiv.org/abs/1909.07809)
* [20] Peyre, Remi: Comparison between w2 distance and 1 norm, and localization of wasserstein distance. ESAIM: COCV **24**(4), 1489-1501 (2018). [https://doi.org/10.1051/cocv/2017050](https://doi.org/10.1051/cocv/2017050), [https://doi.org/10.1051/cocv/2017050](https://doi.org/10.1051/cocv/2017050)
* [21] Pomponio, R., Erus, G., Habs, M., Doshi, J., Srinivasan, D., Mamourian, E., Bashyam, V., Nasrallah, I.M., Satterthwaite, T.D., Fan, Y., et al.: Harmonization of large mri datasets for the analysis of brain imaging patterns throughout the lifespan. NeuroImage **208**, 116450 (2020)
* [22] Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning. In: International conference on learning representations (2017)
* [23] Shah, M., Xiao, Y., Subbanna, N., Francis, S., Arnold, D., Collins, D., Arbel, T.: Evaluating intensity normalization on mris of human brain with multiple sclerosis. Medical Image Analysis **15**(2), 267-282 (2011). [https://doi.org/https://doi.org/10.1016/j.media.2010.12.003](https://doi.org/https://doi.org/10.1016/j.media.2010.12.003)
* [24] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
* [25] Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H., Hospedales, T.M.: Learning to compare: Relation network for few-shot learning. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 1199-1208 (2018)
* [26] Tor-Diez, C., Porras, A.R., Packer, R.J., Avery, R.A., Linguraru, M.G.: Unsupervised mri homogenization: application to pediatric anterior visual pathway segmentation. In: International Workshop on Machine Learning in Medical Imaging. pp. 180-188. Springer (2020)
* [27] Torbati, M.E., Tudorascu, D.L., Minhas, D.S., Maillard, P., DeCarli, C.S., Hwang, S.J.: Multi-scanner harmonization of paired neuroimaging data via structure preserving embedding learning. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops. pp. 3284-3293 (October 2021)
* [28] Torralba, A., Efros, A.A.: Unbiased look at dataset bias. In: CVPR 2011. pp. 1521-1528. IEEE (2011)
* [29] Wilson, G., Cook, D.J.: A survey of unsupervised deep domain adaptation. ACM Transactions on Intelligent Systems and Technology (TIST) **11**(5), 1-46 (2020)
* [30] Yang, H., Sun, J., Carass, A., Zhao, C., Lee, J., Xu, Z., Prince, J.: Unpaired brain mr-to-ct synthesis using a structure-constrained cyclegan. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. pp. 174-182 (2018)
## Appendix 0.A Network Architectures
Figure 3: **Encoder architecture.** The image shows the various layers of the encoder architecture for the proposed HAIL framework. Convolution(.) refers to the Convolution implementation in monai.networks.blocks. Convolution(1, 64, ’RELU’) means it is a convolution layer with \(spatial\_dims=3\), \(in\_channels=1\), \(out\_channels=64\), \(kernel\_size=3\), \(stride=1\), \(padding=1\), followed by a \(ReLU\) non-linearity and normalization as \(None\). MaxPool3d(.) refers to the MaxPool3d implementation in torch.nn. MaxPool3d(k=2, s=2) means a 3D max pooling operation with \(kernel\_size=2\) and \(stride=2\). The features from layer 4 for the input and target are passed into the AdaIN module. Each layer of the encoder is used to extract features for the calculation of the style and content losses. |
2301.02256 | Dynamical Data Mining Captures Disc-Halo Couplings that Structure
Galaxies | Studying coupling between different galactic components is a challenging
problem in galactic dynamics. Using basis function expansions (BFEs) and
multichannel singular spectrum analysis (mSSA) as a means of dynamical data
mining, we discover evidence for two multi-component disc-halo dipole modes in
a Milky-Way-like simulated galaxy. One of the modes grows throughout the
simulation, while the other decays throughout the simulation. The
multi-component disc-halo modes are driven primarily by the halo, and have
implications for the structural evolution of galaxies, including observations
of lopsidedness and other non-axisymmetric structure. In our simulation, the
modes create surface density features up to 10 per cent relative to the
equilibrium model stellar disc. While the simulated galaxy was constructed to
be in equilibrium, BFE+mSSA also uncovered evidence of persistent periodic
signals incited by aphysical initial conditions disequilibrium, including rings
and weak two-armed spirals, both at the 1 per cent level. The method is
sensitive to distinct evolutionary features at and even below the 1 per cent
level of surface density variation. The use of mSSA produced clean signals for
both modes and disequilibrium, efficiently removing variance owing to estimator
noise from the input BFE time series. The discovery of multi-component
halo-disc modes is strong motivation for application of BFE+mSSA to the rich
zoo of dynamics of multi-component interacting galaxies. | Alexander Johnson, Michael S. Petersen, Kathryn V. Johnston, Martin D. Weinberg | 2023-01-05T19:00:02Z | http://arxiv.org/abs/2301.02256v1 | # Dynamical Data Mining Captures Disc-Halo Couplings that Structure Galaxies
###### Abstract
Studying coupling between different galactic components is a challenging problem in galactic dynamics. Using basis function expansions (BFEs) and multichannel singular spectrum analysis (mSSA) as a means of dynamical data mining, we discover evidence for two multi-component disc-halo dipole modes in a Milky-Way-like simulated galaxy. One of the modes grows throughout the simulation, while the other decays throughout the simulation. The multi-component disc-halo modes are driven primarily by the halo, and have implications for the structural evolution of galaxies, including observations of lopsidedness and other non-axisymmetric structure. In our simulation, the modes create surface density features up to 10 per cent relative to the equilibrium model stellar disc. While the simulated galaxy was constructed to be in equilibrium, BFE+mSSA also uncovered evidence of persistent periodic signals incited by aphysical initial conditions disequilibrium, including rings and weak two-armed spirals, both at the 1 per cent level. The method is sensitive to distinct evolutionary features at and even below the 1 per cent level of surface density variation. The use of mSSA produced clean signals for both modes and disequilibrium, efficiently removing variance owing to estimator noise from the input BFE time series. The discovery of multi-component halo-disc modes is strong motivation for application of BFE+mSSA to the rich zoo of dynamics of multi-component interacting galaxies.
## 1 Introduction
The structures of galaxies are manifestations of how the laws that govern dynamics combine with the nature of matter. Understanding galaxies strengthens our understanding of fundamental physics. There are tremendous opportunities to deepen that understanding: a rich legacy of analytic descriptions of galactic dynamics; community investment in high resolution simulations; large scale, high dimensional surveys of billions of stars and galaxies; and the emergence of the vital field of data science to robustly mine and characterise both simulated and real data sets.
Yet recent years have revealed the limits to our conception of our home galaxy, long thought to be a quiet backwater in the Universe. Maps of the positions and motions of billions of stars from the _Gaia_ satellite (Gaia Collaboration et al., 2016, 2018, 2022) have revealed a Milky Way in disarray, with abundant signatures of action and reaction - past and ongoing (e.g. Antoja et al., 2018; Trick et al., 2019; Friske and Schonrich, 2019; Helmi, 2020). These represent significant departures from the descriptions of equilibrium and mild perturbations on which the field of Galactic Dynamics has been built (Binney and Tremaine, 2008). Simulations are capable of capturing such complexities but robustly linking the features to theoretical descriptions and identifying their physical origins remains challenging.
Recent work by Weinberg and Petersen (2021) suggests one approach to this challenge centred around two mathematical tools: Basis Function Expansions (BFE) and Multi-Channel Singular Spectrum Analysis (mSSA). **BFEs** represent a distribution as a linear combination of basis functions, with half a century of application to galactic dynamics (e.g. Clutton-Brock, 1972, 1973; Kalnajs, 1976; Polyachenko and Shukhman, 1981; Weinberg, 1989, 1999; Petersen et al., 2022). When representing a simulation with a fixed set of basis functions, one obtains time series of coefficients that encode the dynamics in a compressed representation. **mSSA** is a method for identifying temporal correlations. Together, they form a powerful analysis tool for studying galaxy simulations. The method does not require prior information and thus can be considered a form of unsupervised learning. Applying mSSA to BFE time series, Weinberg and Petersen (2021) analysed barred-galaxy simulations. They found that BFE+mSSA could autonomously extract the dominant space- and time-correlated features and disentangle the different phases of bar formation and evolution recovered through more traditional analysis (Petersen et al., 2021).
In this paper, we build on the success of Weinberg and Petersen (2021) in characterising the evolution of a known feature and explore the use of BFE+mSSA
as a dynamical discovery tool. We do so through the analysis of a model galaxy comprised of a stellar disc, stellar bulge, and dark matter halo that is designed to be in equilibrium and hence featureless (described in Section 2). Studying such a galaxy serves as a 'control' sample for future work with more feature-rich discs, with features from in situ (i.e. spiral arms) or ex situ (i.e. minor mergers) sources. With a control model, we want to answer the following questions about BFE+mSSA as a dynamical data mining tool:
1) Can BFE+mSSA _separate_ distinct features that overlap in time and are not distinct by eye (real astrophysical signals, phase mixing, and \(N\)-body noise)?
2) Can BFE+mSSA _connect_ features within or across components by identifying their shared spatial and temporal structure?
The answer, as we shall see, is yes to both questions. BFE+mSSA isolates features and allows them to be interpreted independently, while also isolating interactions between components independent of the presence of other interactions.
While analysing the disc in the present study, it became clear that the model was not the perfect featureless system we intended. By applying BFE+mSSA to the disc, and then the combination of disc+halo, we identify two dynamical causes of features: phase-mixing from initial conditions, and interactions between the disc and halo. We identify multiple distinct dynamical signals in each, and examine the dynamical signals in detail (Section 3). We find that the signals are likely to be generic features in disc+halo systems, and can have real impact on galaxies in the real Universe.
This study is a key step in understanding and exploring the strengths and limitations of BFE+mSSA in multi-component systems (see Section 4). In partnership, BFE+mSSA has great potential beyond simulation analysis. Much of analytic linear theory is also built on BFEs. Moreover, BFEs may be used to describe observational data sets. Hence BFEs provide a common dynamical language to quantitatively connect theory, simulations, observations and data science while providing rigorous physical interpretations of dynamical processes. We conclude in Section 5 with a discussion of how our results impact galaxy evolution more generally, and how BFE+mSSA fits in a larger program of dynamical data mining.
## 2 Methods
We first review the rationale and overarching goals for BFE+mSSA analysis in dynamical systems in Section 2.1, and then describe the construction of a model isolated disc+bulge+halo galaxy in Section 2.2. Two appendices provide specifics of the expansions used in our analysis (Appendix B) and an overview of mSSA (Appendix C).
### Rationale for BFE+mSSA analysis
All self-gravitating stellar systems, like ionised plasma, have a spectrum of both _continuous_ and _point_ modes (Krall & Trivelpiece, 1973; Ichimaru, 1973; Ikeuchi et al., 1974). Here, we define a _mode_ to be a superposition of oscillations that lead to a self-similarly growing or damping response to a perturbation1.
Footnote 1: Mathematically, we are referring to the set of solutions to the collisionless Boltzmann equation at a specific complex frequency. These are the solutions to the response operator that generalise eigenfunctions in a finite vector space. In plasma physics, these solutions are usually called ‘modes’, although there is some disagreement.
**Continuous modes** are excited by perturbations with a continuous range of frequencies, for example a single encounter with a satellite. Other sources of disequilibrium, whether physical or aphysical, also drive continuous response. This continuous response appears as phase mixing in galaxies. These modes are also transient: since the response is not dominated by a single frequency, the mode quickly loses coherence and therefore is not self-sustaining. We expect that mSSA will efficiently detect a plethora of signals of varying strength owing to continuous modes. These signals will appear with relatively broad frequency support. As the modes are transient, few theoretical approaches exist capable of predicting the existence or evolution of these modes, making BFE+mSSA an efficient tool to study them.
**Point modes** are excited by specific frequencies. They have model-dependent self-similar shapes and well defined frequencies and can therefore be reinforced by their own gravity. The point modes are damped (growing) for stable (unstable) systems. The most commonly known point mode is the Jeans' instability in a homogeneous sea of stars (e.g. Binney & Tremaine, 2008). Fluctuations from environmental disturbances such as satellite encounters or Poisson noise from \(N\)-body distributions may excite these weakly self-gravitating features. We expect that some of the results recovered by mSSA will be the phase space manifestation of these modes, appearing as distinct frequency peaks. Calculations for unstable evolutionary modes in galactic discs
Figure 1: Circular (black) and radial (red) frequency curves as a function of radius for the \(T=0\) equilibrium model. Both frequencies are computed using the epicyclic approximation, in the plane of the disc (\(z=0\)). Three frequency values have been marked to guide the eye (\(\Omega=0.6\), \(\Omega=1.5\), and \(\Omega=6.6\) cycles/Gyr), corresponding to spatial scales near the peak disc circular velocity (\(2.2R_{d}=7.7\)) and multiples of the halo scale length (\(a=52\) kpc).
have found evidence for point modes supported in various analytic geometries (e.g. Fouvry et al., 2015; De Rijcke et al., 2019). While we do not have explicit theoretical results for damped modes at many azimuthal orders in discs, \(N\)-body simulations seem to suggest that the amplitude is largest at \(m=2\) and decreases for \(m>2\). Crucially for the problem at hand (a disc+halo system), we have no analytic predictions for the modal spectra, owing to the complexity of approaching such a problem analytically. BFE+mSSA gives us a means to detect these modes amongst a sea of other signals.
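To fix notation for what follows (this schematic form is ours, not a result quoted from the analytic literature), a point mode at azimuthal harmonic \(m\) evolves self-similarly at a single complex frequency, so its surface density contribution can be written

\[
\delta\Sigma(R,\phi,t) \;\propto\; A(R)\, e^{\gamma t}\, \cos\!\left[m\left(\phi-\Omega_{p}t\right)+\phi_{0}\right],
\]

with fixed radial shape \(A(R)\), pattern speed \(\Omega_{p}\), and growth (damping) for \(\gamma>0\) (\(\gamma<0\)); a continuous response, by contrast, superposes a range of frequencies and phase mixes away.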
### Model Galaxy
#### 2.2.1 Simulation Overview
We design an isolated model Milky-Way-like galaxy for our study of the compressive power2 of BFE and the dynamical information one can extract with mSSA. We draw the model from components in the merger simulation of Laporte et al. (2018): a Hernquist profile dark matter halo with a mass of \(10^{12}M_{\odot}\) and a scale length of 52 kpc; an exponential stellar disc with a mass of \(6\times 10^{10}M_{\odot}\), a scale length of 3.5 kpc, and a scale height of 0.53 kpc; a Hernquist stellar bulge with a mass of \(10^{10}M_{\odot}\) and a scale length of 0.7 kpc. The halo has \(40\times 10^{6}\) particles, the disc has \(5\times 10^{6}\) particles, and the bulge has \(10^{6}\) particles. Unlike Laporte et al. (2018), we do not introduce a satellite perturber so that our model galaxy evolves in isolation. The initial circular and radial frequency curves in the disc plane are shown in Figure 1: as we shall see below, we are able to use these frequencies to inform our mSSA analysis. We evolve the model with Gadget-4 (Springel et al., 2021) for 5.49 Gyr, saving snapshots every 0.01 Gyr, for a total of 549 snapshots. The total simulation requires approximately 800 GB of computer disk storage.
Footnote 2: Here, ‘compression’ refers to the amount of information one needs to store. A straightforward metric is the total computer disk space. We provide specifics to our simulation, but the scale of compression should be similar in other simulations.
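For reference, the curves in Figure 1 follow from the standard epicyclic relations evaluated in the disc plane (our summary of the textbook definitions, e.g. Binney & Tremaine 2008):

\[
\Omega(R)=\frac{v_{c}(R)}{R}, \qquad \kappa^{2}(R)=R\,\frac{\mathrm{d}\Omega^{2}}{\mathrm{d}R}+4\,\Omega^{2},
\]

where \(v_{c}\) is the circular velocity of the combined disc+bulge+halo potential, \(\Omega\) is the circular frequency (black curve), and \(\kappa\) is the radial frequency (red curve).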
#### 2.2.2 BFE representation
To compactly describe the simulation, we represent each component in each snapshot with a BFE designed to provide compression and create a continuous representation from the particles. Further information regarding the BFEs used may be found in Appendix B. In a BFE, a target distribution is represented as the linear sum of some chosen basis functions, with weighting on each of the basis functions (_coefficients_). If the basis functions are selected well, the distribution will be described by a small number of functions and corresponding coefficients, \(C_{\mu}\), where \(\mu\) is a tag that indexes each basis function. The coefficients then are a measure of the importance of each basis function to representing the overall distribution. To facilitate representing the distribution with the smallest number of functions, we choose expansions whose lowest-order function resembles the target equilibrium.
For a principally two-dimensional structure, the stellar disc, we use a Fourier-Laguerre expansion3. The Fourier-Laguerre basis for expanding disc surface density was introduced in Weinberg and Petersen (2021). Given the exponential weighting of Laguerre polynomials, they serve as a natural radial basis element for exponential discs. If the scale lengths are chosen to match, the equilibrium disc is well-represented by the lowest-order Laguerre polynomials. The scale length of our Fourier-Laguerre expansion is 3.5 kpc, matching the scale length of the modelled disc. To capture angular structure, we expand in Fourier terms \(\cos m\phi\) and \(\sin m\phi\). We index the Fourier azimuthal terms with \(m\), and the Laguerre radial terms with \(n\), creating \((2m-1)\times n\) total coefficients, each tagged with a unique \((m,n)\), written \(C_{mn}\). We find that, as expected, \(C_{00}\) dominates by multiple orders of magnitude. We expand the disc to \(m_{\rm max}=6,\ n_{\rm max}=6\), making \(2\times(m_{\rm max}+1)\times n_{\rm max}=84\) coefficients for the disc. The choice of maximum radial order is motivated by a desire to probe specific spatial scales. The \(n=6\) radial Laguerre density function has nodes at 0.9, 3.1, 6.8, 12.1, 19.7, and 30.9 kpc, thus ensuring that the majority of the nodes are within 18 kpc of the disc centre (where 90% of the particles are located).
Footnote 3: Another option is presented in Weinberg and Petersen (2021): the use of 3d basis functions designed to resemble the exponential disc. In this work, we use the 2d Fourier-Laguerre expansion owing to the straightforward generalisation to the expansion of velocity fields, which will be the subject of future works.
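Schematically, and deferring the normalisation and the precise exponential radial weighting to Appendix B and Weinberg and Petersen (2021), the disc surface density is then represented as

\[
\Sigma(R,\phi,t) \;\simeq\; \sum_{m=0}^{m_{\max}}\sum_{n=0}^{n_{\max}} \left[a_{mn}(t)\cos m\phi + b_{mn}(t)\sin m\phi\right]\mathcal{L}_{n}(R),
\]

where \(\mathcal{L}_{n}(R)\) denotes the \(n\)-th exponentially weighted Laguerre radial function with a 3.5 kpc scale length, and the pairs \((a_{mn},b_{mn})\) are the coefficients referred to in the text as \(C_{mn}\).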
The dark matter halo4 is efficiently described through the empirical orthogonal function basis approach introduced in Weinberg (1999) and most recently updated in Petersen et al.
Figure 2: Disc coefficients over time for the first three harmonic orders (\(m=0,1,2\)) and all corresponding radial orders (\(n\in[0,6]\)). The coefficients have been detrended by subtracting the mean and dividing out the variance. The coefficient series are dominated by apparent noise, though some trends may be discerned: a steady decrease in some \(m=0\) coefficients (upper panel), elevated amplitude towards the end of the simulation in \(m=1\), and some periodicity in \(m=2\). The origin of these features is difficult to interpret owing to the coefficient series’ noisy appearance across multiple basis functions. Any spatial features encoded in the basis are all but impossible to determine.
(2022). Beginning with the equilibrium distributions, we design a 1d radial model that matches the initial spherically symmetric density profile. From this one-dimensional model, we construct an empirical orthogonal function basis whose lowest-order member perfectly matches the input initial density profile. Higher-order terms are generated as eigenfunctions of the Sturm-Liouville equation with the input equilibrium potential-density model and appropriate boundary conditions. The three-dimensional structure of the spherical components is described by a spherical harmonic expansion in the angular coordinates. Each term in the expansion is represented by three numbers: the spherical harmonic indices \(\ell\) and \(|m|\leq\ell\) and the index of the radial basis function \(n\). In total, we have \((\ell_{\rm max}+1)^{2}\times n_{\rm max}\) coefficients per snapshot. For the halo, we expand to \(\ell_{\rm max}=2\), \(n_{\rm max}=11\). The expansions, for the entire simulation, only require approximately 12 MB of storage: a more than 60000\(\times\) compression, with the benefit of encoding the dynamics. In practice, we will often consolidate the same-integer positive and negative spherical harmonic \(m\) indices when describing the coefficient amplitudes such that a quoted \((\ell,m)\) tag contains both \(\pm m\). As expected, the \(C_{\ell mn}=C_{000}\) term is the largest by multiple orders of magnitude, with \(C\) generally decreasing as either \((\ell,m)\) or \(n\) increases.
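In the same schematic notation (exact conventions again in Appendix B and Petersen et al. 2022), the halo density field is expanded as

\[
\rho(r,\theta,\phi,t) \;\simeq\; \sum_{\ell=0}^{\ell_{\max}}\sum_{m=-\ell}^{\ell}\sum_{n=0}^{n_{\max}} C_{\ell mn}(t)\, u_{\ell n}(r)\, Y_{\ell}^{m}(\theta,\phi),
\]

where the \(u_{\ell n}(r)\) are the empirical orthogonal radial functions whose lowest-order member matches the initial Hernquist profile, and the \(Y_{\ell}^{m}\) are spherical harmonics.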
## 3 Evolution of a near-equilibrium galaxy
Our isolated disc+bulge+halo galaxy was constructed to be in a completely stable equilibrium. However, the model is not in equilibrium, for reasons both physical and unphysical. Figure 2 shows the raw BFE coefficients for the low-order disc harmonics derived from the simulation snapshots. While it is clear that the coefficient time-series are noisy, inspection by eye suggests that there exist lower-frequency coherent signals buried in the higher-frequency noise: early evolution in \(m=0\); modestly elevated power at late times in \(m=1\); and a periodic signal in \(m=2\).
To explore dynamical evolution in our simulation, we performed mSSA decompositions of various combinations of BFE coefficients. These decompositions revealed clean, persistent features in the individual low-order disc harmonics (\(m=0,1,2\)), which we concentrate on understanding in this section. We also augment the analysis of the low-order disc harmonics with mSSA analysis of halo coefficients, joins of disc and halo coefficients, and higher-order disc harmonics (\(m>2\)). These multi-component mSSA analyses prove to be the most fruitful in identifying the causes of different features. The full results of all our analyses are presented in Appendix A.
Section 3.1 describes how the results of the mSSA analysis can be used to group coefficients into separate dynamical features, characterise the properties of these features and come to a physical understanding of their nature. The following subsections illustrate these ideas by dividing our own analysis of the disc+bulge+halo simulation into three classifications: initial conditions disequilibrium (Section 3.2), secular evolution signals (Section 3.3), and fluctuations and other uninterpretable features (Section 3.4).
### Interpreting the results of the mSSA analysis
We use several diagnostics (denoted below in slanted text) to describe the character and understand the nature of the features identified in the mSSA analysis. Each diagnostic has a corresponding section in Appendix C describing the mathematical details.
Applied to multiple BFE time series, mSSA identifies temporally correlated signals in the BFE coefficient series as an ensemble. Briefly, mSSA builds the autocorrelation matrix of time-lagged copies of the input series and performs an eigenanalysis to find
\begin{table}
\begin{tabular}{c c c c c c} & mSSA & & DFT peak & contrast & SV \\ name & decomposition & PCs & (Gyr\({}^{-1}\)) & (\(R<R_{d}\)) & fraction \\ \hline \hline \multicolumn{6}{c}{Disequilibrium Signal 1: halo profile readjustment (slow decay)} \\ \hline Group \(m0\)-\(1\) & disc \(m=0\) & 0,1 & 0.2 & 0.031 & 0.641 \\ Group \(l0\)-\(1\) & halo \(l=0\) & 0,1,2,3 & 0.4 & - & 0.944 \\ Group \(m0l0\)-\(1\) & disc \(m=0\), halo \(l=0\) & 0,1,2,3 & 0.2 & 0.054 & 0.832 \\ \hline \hline \multicolumn{6}{c}{Disequilibrium Signal 2: phase mixing of disc initial conditions (fast decay)} \\ \hline Group \(m0\)-\(2\) & disc \(m=0\) & 2,3,4,5 & 6.4 & 0.006 & 0.084 \\ Group \(l0\)-\(2\) & halo \(l=0\) & 4,5 & 6.6 & - & 0.028 \\ Group \(m0l0\)-\(2\) & disc \(m=0\), halo \(l=0\) & 4,5 & 6.6 & 0.007 & 0.037 \\ Group \(m1\)-\(3\) & disc \(m=1\) & 4,5 & 6.9 & 0.002 & 0.057 \\ Group \(m2\)-\(1\) & disc \(m=2\) & 0,1 & 6.6 & 0.006 & 0.201 \\ Group \(m4\)-\(1\) & disc \(m=4\) & 0,1 & 14.2 & 0.004 & 0.086 \\ Group \(m6\)-\(1\) & disc \(m=6\) & 0,1 & 20.2 & 0.001 & 0.036 \\ Group \(m2\)-\(m6\)-\(1\) & disc \(m=2\), 4,6 & 0,1 & 6.6 & 0.010 & 0.072 \\ Group \(m11\)-\(3\) & disc \(m=1\), halo \(l=1\) & 6,7 & 6.9 & 0.002 & 0.040 \\ Group \(m1m21\)-\(2\) & disc \(m=1\), 2, halo \(l=1\) & 2,3 & 6.6 & 0.003 & 0.082 \\ \end{tabular}
\end{table}
Table 1: Summary of two different signals identified in our mSSA decompositions as associated with initial disequilibrium. The first signal results from halo disequilibrium, and the appearance in the disc is primarily manifest in the central surface density. The second signal is present in myriad decompositions, but appears to be seeded first by disequilibrium in the disc \(m=0\), which then persists in other harmonics. Disc feature strengths are reported in surface density to give a measure of ‘visual contrast’, defined as max (\(|\Delta_{\Sigma}|\)) within a disc scale length (see equation 10). Contrasts have an approximate error of 0.001, estimated from grid size adjustments. Owing to simulation sampling rates (0.01 Gyr), the DFT peak is only accurate to 0.1.
dominant trends. Each time series is detrended by its mean and normalised by its variance so that variations in the different coefficient series can be intercompared. The eigenvectors describing these trends are usually called _principal components_ (PCs). As we always find that multiple PCs contribute to a single dynamical feature in our analysis (see the 'PCs' column in the Tables), we will refer to each feature as a 'Group' (of PCs), labelling the strongest group (ordered by PC variance) as the first group. We also denote the particular decomposition by the input coefficient harmonic in the group name. For example, the strongest group in the \(m=0\) disc analysis will be labelled 'Group \(m0\)-1', and the strongest group in the \(l=1\) halo analysis will be
\begin{table}
\begin{tabular}{c c c c c c} & mSSA & & DFT peak & contrast & singular value \\ name & decomposition & PCs & (Gyr\({}^{-1}\)) & (\(R<R_{d}\)) & fraction \\ \hline \hline \multicolumn{6}{c}{Point Mode 1: slow growth} \\ \hline Group \(m1\)-1 & disc \(m=1\) & 0,1 & 0.6 & 0.007 & 0.201 \\ Group \(l1\)-1 & halo \(l=1\) & 0,1,2,3 & 0.4 & - & 0.272 \\ Group \(m1l1\)-1 & disc \(m=1\), halo \(l=1\) & 0,1,2,3 & 0.6 & 0.008 & 0.244 \\ \hline \hline \multicolumn{6}{c}{Point Mode 2: slow decay} \\ \hline Group \(m1\)-2 & disc \(m=1\) & 2,3 & 1.7 & 0.003 & 0.064 \\ Group \(l1\)-2 & halo \(l=1\) & 4,5 & 1.5 & - & 0.035 \\ Group \(m1l1\)-2 & disc \(m=1\), halo \(l=1\) & 4,5 & 1.5 & 0.003 & 0.048 \\ \end{tabular}
\end{table}
Table 2: The coupled disc+halo dipole modes appearing in different mSSA decompositions. Both modes appear in multiple mSSA decompositions, and that they both appear in disc-only, halo-only, and disc-halo decompositions strongly suggests that they are both joint modes. In the table, disc harmonics are denoted with \(m\), halo harmonics are denoted with \(l\). Columns are the same as in Table 1.
Figure 3: An analysis of two monopole signals resulting from distinct sources of initial disequilibrium. The left panels show the reconstructed coefficient amplitudes over time for each signal (identified as Groups 1 and 2 in both disc-only, halo-only, and disc+halo analyses). The right panels show the power spectra of the reconstructed coefficients for each group. The first signal is a slow rearrangement owing to the halo settling in the presence of the disc, manifest by eye in the disc primarily as a change in the central surface density (cf. Figure 4). We show the appearance of this signal in the disc and halo as the upper two rows. The second signal is ringing in the disc resulting from the initial velocity disequilibrium of the disc. While the signal decays rapidly in the monopole component, the disequilibrium seeds long-lasting persistent periodic features in other harmonics: see entries under ‘Disequilibrium Signal 2’ in Table 1. We show the appearance of this signal in the disc and halo as the lower two rows. In each left-hand panel, we show two thicknesses of curves: the thick lines are for the components when analysed separately and the thin lines are for the components when analysed jointly. That the different thicknesses of lines, for the same radial order, are not particularly different, is strong evidence that the features are correlated between the disc and halo.
labelled 'Group \(l1\)-\(1\)'. As PC groups capture trends in basis function coefficients that are correlated over snapshots, they describe how spatial features dynamically evolve.
Mathematically descriptive (but often difficult to interpret beyond the most significant few), the mSSA decomposition returns _singular values_ (SVs) as measurements of the contribution of each PC to the total decomposition. Larger SVs indicate which PCs represent more of the net change in time of the distribution. This property greatly helps the robust identification of features that represent true dynamical evolution. PCs which correspond to random fluctuations due to (e.g.) numerical noise are by nature uncorrelated. They have very low SV even as they may be the dominant source of variations in the surface density. Conversely, PCs which describe evolution in coefficient series that are coherent over time will have high SV even though they may be (orders of magnitude) below the inherent noise. We report the singular value fraction5 attributable to a given group in the Tables.
Footnote 5: To compute the relative contribution, we normalise each singular value corresponding to a particular principal component by the sum of all singular values. Then, we can say that some per cent of the signal is represented by that principal component (or group). We will call this the contribution of a principal component (or group); it may be interpreted as a measure of signal robustness.
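To make the procedure concrete, the following is a minimal numpy sketch of one standard mSSA variant: lagged embedding of each channel, an SVD in place of the eigenanalysis of the lag-covariance matrix, grouping of PCs, and reconstruction by averaging over the lagged copies. The function names, window length, and toy data are ours; this is illustrative only and is not the exp implementation used in this paper.

```python
import numpy as np

def mssa_sketch(series, window):
    """Minimal multichannel SSA. `series` has shape (n_channels, n_times),
    e.g. a set of detrended BFE coefficient time series; `window` is the
    embedding (lag) length."""
    n_ch, n_t = series.shape
    k = n_t - window + 1
    # Joint trajectory matrix: stack `window` lagged copies of every channel.
    traj = np.vstack([series[c, i:i + k]
                      for c in range(n_ch) for i in range(window)])
    # SVD of the trajectory matrix; the singular values measure how much
    # temporally correlated variance each principal component (PC) carries.
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)

    def reconstruct(group):
        """Contribution of the PCs listed in `group` to each input channel,
        recovered by averaging over the lagged copies of each channel."""
        low_rank = (U[:, group] * s[group]) @ Vt[group, :]
        out = np.zeros((n_ch, n_t))
        for c in range(n_ch):
            block = low_rank[c * window:(c + 1) * window, :]
            total, count = np.zeros(n_t), np.zeros(n_t)
            for i in range(window):
                total[i:i + k] += block[i]
                count[i:i + k] += 1
            out[c] = total / count
        return out

    return s, reconstruct

# Toy usage with two noisy channels sharing one slow oscillation.
rng = np.random.default_rng(0)
t = np.arange(549) * 0.01                    # Gyr, matching the snapshot cadence
clean = np.vstack([np.cos(2 * np.pi * 0.6 * t), np.sin(2 * np.pi * 0.6 * t)])
noisy = clean + 0.5 * rng.standard_normal(clean.shape)
s, reconstruct = mssa_sketch(noisy, window=100)
group_1 = reconstruct([0, 1])                # the strongest 'group' of PCs
sv_fraction = s[[0, 1]].sum() / s.sum()      # its singular value fraction
```

In the full analysis, `series` would hold the detrended coefficient series for a chosen harmonic (for example, all radial orders of disc \(m=1\)), and PCs would be grouped by the similarity of their reconstructed power spectra, as described next.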
We examine the _coefficient reconstructions_ from a group of PCs for physical insight. From the coefficient reconstructions, we can also construct _power spectra_ using a Discrete Fourier Transform (DFT) of the reconstructed coefficients; these give insight into the frequencies (and time scales) that characterise the time evolution of a feature. Approximately equal dominant frequencies in the power spectra of the coefficient reconstructions of different PCs from mSSA of the same component suggest that they describe different aspects of the same feature and may be grouped together. If equal values occur across different components, the components may be mutually interacting. See the 'DFT peak' entry in the Tables, which reports the frequency value where the DFT is maximised.
We can also calculate _contrast_ in the disc from the reconstructions6. Calculating the average of the fractional deviation in surface density within one disc scale length gives a measure of the 'detectability' of a feature (by eye or algorithm). See the 'contrast' entry in the Tables. Relatedly, the location in the galaxy where the dominant frequencies found in the power spectrum match the circular frequency of the unperturbed galaxy can indicate the spatial scales of any interactions taking place (refer to Figure 1).
Footnote 6: We do not look at the contrast in the halo, as this is not straightforwardly measured in real galaxies. Therefore, the contrast columns do not contain entries for halo-only mSSA analyses.
In general, identified features evolve in one of the following ways (noted in the Tables): decaying, where a feature peaks at the beginning of the simulation and decays in importance; growing, where the feature grows and then saturates in amplitude (later saturation times imply slower growth rates); or consistent with no evolution. By comparing the evolution type across different components, one may also infer causality: the relative growth or decay may indicate when one component is driving another.
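As an illustration of how the 'DFT peak' and 'contrast' diagnostics listed in the Tables might be computed from a group's reconstruction, here is a minimal sketch with our own function names and conventions; the paper's exact definitions (the radial window, and whether the deviation is maximised or averaged) follow the Table 1 caption, this section, and Appendix C.

```python
import numpy as np

def dft_peak(recon, dt=0.01):
    """Frequency (cycles/Gyr) at which the power spectrum of a reconstructed
    coefficient series peaks; `recon` is 1D, `dt` is the snapshot spacing in Gyr."""
    power = np.abs(np.fft.rfft(recon))**2
    freqs = np.fft.rfftfreq(recon.size, d=dt)
    return freqs[np.argmax(power[1:]) + 1]   # ignore the zero-frequency bin

def contrast(delta_sigma, sigma_background):
    """Fractional surface density deviation of a reconstructed feature relative
    to the background model, evaluated on a common spatial grid (maximum taken
    here; an average within a chosen radius is an alternative convention)."""
    return np.max(np.abs(delta_sigma) / sigma_background)

# Toy example: a decaying oscillation whose spectrum peaks near 6.4 cycles/Gyr.
t = np.arange(549) * 0.01
series = np.exp(-t) * np.cos(2 * np.pi * 6.4 * t)
print(dft_peak(series))                      # approximately 6.4
```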
### Initial Conditions Disequilibrium Uncovered Through Disc \(m=0\) Analysis
We start our investigation with perhaps the most striking feature in the raw coefficients apparent in the top panel of Figure 2 which shows the evolution of the \(m=0\) (monopole) disc coefficients. The figure suggests the simulation suffers from a disequilibrium that is typical in disc-halo initial conditions: outwardly propagating rings in surface density. This section reports the insights into this apparent evolution afforded by mSSA, starting from its application to the \(m=0\) disc coefficients alone (3.2.1). The properties of the features identified in this preliminary analysis provide a template for further applications of mSSA both to the halo (separately and combined with the disc, see 3.2.2) and higher order disc terms (see 3.2.3). Table 1 summarises the properties of all these analyses.
#### 3.2.1 Grouping into Dynamical Features
The mSSA analysis of the \(m=0\) disc coefficients reveals that PCs (0,1) and PCs (2,3,4,5) have distinct power spectra, suggesting natural groupings. This suggests the presence of _two_ distinct dynamical features within the signal in Figure 2. The properties of these two groups quoted below are summarised in Table 1, with the rows labelled 'Group \(m0\)-\(1\)' and 'Group \(m0\)-\(2\)' corresponding to this first mSSA analysis.
Two more figures illustrate our results. Figure 3 shows the
Figure 4: Disc monopole (\(m=0\)) surface density as a function of radius and time, computed from the full coefficient series (upper panel), showing a largely featureless disc. The surface density has been normalised by the central surface density. The remaining panels show the contribution to the surface density deviations for two groups of \(m=0\) principal components, identified as two disequilibrium signals (see Table 1). The surface density deviations are computed relative to the \(m=0,n=0\) background, and are of the order a few per cent (excepting the outer disc, where the low densities mean that variations naturally result in a larger per cent variation).
amplitude (left hand panels) and DFTs of the coefficient reconstructions for Groups \(m0\)-\(1\) and \(m0\)-\(2\), revealing their distinct temporal characteristics. In Figure 4, we show the \(m=0\) surface density amplitude reconstruction as a function of disc radius (y-axis) and time (x-axis) from the unprocessed coefficients (top panel), as well as the surface density deviations relative to a smooth monopole background, constructed from the two \(m=0\) PC groups.
Overall, we find the following characteristics.
Group \(m0\)-\(1\) represents a dynamical feature that shows weak evolution over the entire simulation, with a surface density contrast of approximately \(3\) per cent. The slow decay of Group \(m0\)-\(1\) produces power at a range of very low
Figure 5: An analysis of two groups obtained from the disc-only \(m=1\) mSSA decomposition. Each group corresponds to a distinct point mode, discussed in the text as ‘Mode 1’ and ‘Mode 2’. The left panels show the reconstructed \(m=1\) coefficient amplitudes over time for Groups \(m1\)-\(1\) and \(m1\)-\(2\). The right panels show the power spectra of the reconstructed \(m=1\) coefficients for each group. Both modes have well-defined slow patterns – significantly slower than any frequency associated with stars in the disc – and show evolving behaviour: the first mode is unstable and grows with time, while the second mode is damped and decays with time. The mode summaries are listed in Table 2.
Figure 6: Normalised face-on \((x,y)\) disc surface density deviation determined for two groups in the \(m=1\) decomposition. Each group corresponds to a distinct point mode, discussed in the text as ‘Mode 1’ and ‘Mode 2’. The panels show reconstructions of snapshots for either Group \(m1\)-\(1\) (upper row) or Group \(m1\)-\(2\) (lower row) in the disc-only \(m=1\) decomposition (cf. Figure 5). Both groups are retrograde with respect to the disc rotation (rotation direction of the pattern is marked with an arrow). The mode shown in the upper panels grows in amplitude over the course of the simulation; the mode shown in the lower panels decays in amplitude over the course of the simulation, evident from the surface density features. Neither pattern strongly winds; both show largely self-similar evolution, despite being fairly tightly wound.
frequencies, peaked at \(0.2{\rm Gyr}^{-1}\).
Group \(m0\)-\(2\) shows outwardly propagating rings in surface density that start at the beginning of the simulation and disappear after \(\approx 1\) Gyr, losing speed as they move to larger radii. While this is a sub-1 per cent effect within a disc scale length, at larger radii, the surface density deviation is obvious by eye as ringing features. The periodic nature of Group \(m0\)-\(2\) corresponds to a frequency peak at \(6.4{\rm Gyr}^{-1}\).
_We conclude that mSSA has cleanly separated two distinct evolutionary processes operating simultaneously within one harmonic term._ The next two subsections explore the nature of both of these features.
#### 3.2.2 Group 1: Halo-driven disequilibrium?
The appearance of Group \(m0\)-\(1\), at low frequency, suggests that its origin may be connected to the halo, where timescales are naturally long. Specifically, the frequency \(0.2~{}{\rm Gyr}^{-1}\) corresponds to a circular orbit at \(R~{}\sim~{}50~{}{\rm kpc}\) (see Figure 1). This motivated us to apply mSSA to the \(l=0\) coefficients representing the halo component in the simulation to explore this connection further. We run analyses of both the halo \(l=0\) alone and in combination with the disc \(m=0\) coefficients.
The results of the analysis of the halo alone are shown in the lower panels of Figure 3 and summarised in the second row of Table 1. These demonstrate that the readjustment of the halo component's radial profile is even more significant than
Figure 8: Description of the strongest principal component group for halo and disc decompositions: a growing multi-component point mode. The upper panel shows the detrended and normalised amplitude of the reconstructed cosine component of the \(m=1\) (disc; grey curves) or \(l=1\) (halo; black curves) \(n=0\) coefficient versus time. The solid curves are for mSSA decompositions run on each component alone (Group \(m1\)-\(1\) and Group \(l1\)-\(1\)). The dashed curves are for the joint halo+disc mSSA decomposition (Group \(m1l1\)-\(1\)). The lower panel shows the power spectrum (DFT amplitude vs frequency), for the four series shown in the upper panel. The relative similarity of the curves and power spectra suggests that the patterns are correlated between the disc and halo. The slow growth of the disc amplitude over time relative to the larger halo amplitude at the outset of the simulation suggests that the halo is responsible for driving the mode.
Figure 7: Amplitude and phase as a function of radius and time for the disc-only \(m=1\) decomposition for the first two groups identified in the mSSA analysis. Each group corresponds to a distinct point mode. From top to bottom, we show the amplitude and phase for the unprocessed \(m=1\) coefficient streams, the reconstructed coefficients of Group \(m1\)-\(1\), and the reconstructed coefficients of Group \(m1\)-\(2\). The density is shown as the log of the absolute value of the density. Both groups show coherent phases identifiable in the seemingly random phase information of the unprocessed coefficients. The growing (decaying) nature of Group \(m1\)-\(1\) (Group \(m1\)-\(2\)) is also evident in the amplitudes.
the disc radial profile, with a signal amplitude twice as strong as that of the disc (compare the detrended amplitudes in Figure 3). Such halo-driven disequilibrium is also a common feature of numerical realisations of multi-component galaxies, since their combined equilibrium properties have only been approximated, for example through Jeans modelling or adiabatic contraction corrections. Thus the mass distribution of the halo adjusts to full equilibrium in the presence of the disc, and vice versa.
In Table 1, a comparison of rows 1 (analysis of the disc coefficients alone), 2 (halo coefficients alone) and 3 (disc and halo coefficients combined) confirms: (i) all three mSSA analyses have similar temporal structures, corresponding to the dynamical timescales at several tens of kpc in the system; (ii) the joint disc/halo analysis identifies the same coherent features in the disc, and at greater contrast (0.054 vs 0.031), than the disc analysis alone; (iii) the driver of the combined evolution is likely the halo, given the larger amplitude of its coherent changes relative to random fluctuations for that component.
_The above results demonstrate the ability of mSSA to successfully identify the mutual readjustment of the coupled disc-halo system from a mild disequilibrium state._
#### 3.2.3 Group 2: Disc-driven disequilibrium
The strength of Group \(m0\)-2 in the analysis inspired an investigation as to whether this disequilibrium could also seed other features in the simulation. Examination of other mSSA decompositions for different coefficient combinations finds many similar-frequency signals (see lower rows of Table 1). The even disc harmonics (\(m=2,4,6\)) show a persistent signal in the most important PCs (0 and 1) with a pattern speed of \(\sim 3.3\) cycles/Gyr that is equal to half the Group \(m0\)-2 frequency peak of 6.6 cycles/Gyr7. Note that the joint analysis of all even disc harmonics (\(m=2,4,6\)) returns essentially the same results as the \(m=2\) only decomposition. In the case of harmonic orders \(m>2\), this result likely owes to the need for higher order harmonics to fully represent the feature being described.
Footnote 7: The pattern speed of a harmonic is the number of cycles per Gyr divided by the harmonic number. That is, the pattern speed of the disc-only decomposition of Group 2 \(m\) harmonic coefficients is \(\Omega_{m}=\Omega_{\rm DFT}/m\) cycles/Gyr.
The remaining rows of Table 1 demonstrate that the Group \(m0\)-2 disc disequilibrium signal is also evident at a lower level (i.e. higher PC numbers, lower contrast in the disc and smaller SV) in both the disc \(m=1\) and halo \(l=1\) decompositions when comparing frequency structure of the groups. While the peak surface density deviation is near the outset of the simulation for \(m=0\), in higher harmonic orders the signal does not completely fade over the simulation, with peak measured contrasts coming at later times. _Our findings show the utility of mSSA in detecting evolution incited across different harmonic orders._
Figure 9: Normalised face-on \((x,y)\) halo \(z=0\) plane density deviation reconstruction snapshots for Group \(m1l1\)-1 (upper panels) and Group \(m1l1\)-2 (lower panels) in the halo-and-disc \(l=1+m=1\) decomposition. Each group corresponds to a distinct point mode. The patterns extend to large radii in the halo and are retrograde with respect to the disc rotation. The halo reconstructions exhibit significantly less ordered behaviour compared to the disc owing to the three-dimensional nature of the mode, which also tips relative to the \(z=0\) plane. However, the bulk properties are similar to the disc (cf. Figure 6). The mode summaries are listed in Table 2. That the joint decomposition of the halo and disc returns the same groups, with similar behaviour, is strong evidence for the mutual mode nature of the features. The large spatial scale of the modes in the halo, coupled with their relatively early coherence, suggests that the modes are induced by the halo.
#### 3.2.4 Key insights
In this section, BFE+mSSA has been used to increase our understanding of a dynamical simulation by:
(i) separating distinct evolutionary pathways within a single harmonic;
(ii) identifying coupling between multiple components;
(iii) detecting features across different harmonics within a single component.
These results emphasise that initial conditions for near-equilibrium studies of galaxy evolution need to be dynamically relaxed (or virialised) by evolving in isolation for tens of halo dynamical times (i.e. much longer than the equivalent timescale in the disc) prior to any studies of interactions, in order to truly isolate signatures of the external perturbation. While the perturbation in our study is a numerical artifact, the distinct adjustments to density profiles and couplings within and across components uncovered by BFE+mSSA represent the drivers of the evolution of galaxies seeded by any perturbation.
### Secular Evolution Signals Uncovered Through Disc \(m=1\) Analysis
Examination of the PCs from the mSSA decomposition of the dipole disc harmonic (\(m=1\)) revealed two groups, with properties summarised in Table 2 and contributing coefficients and power spectra visualised in the left and right panels of Figure 5. Examination of the power spectra shows that these features are distinct in nature from the disequilibrium-seeded \(m=0\)-dominated Groups \(m0\)-\(1\) and \(m0\)-\(2\) described in the previous section in that they have clear, well-defined frequencies, rather than a broad spectrum. This indicates that each of these groups may be a _point mode_ present in the system. As discussed in Section 2.1, point modes are a result of the fundamental properties of the underlying phase-space distribution. They have single-valued real and imaginary frequencies (hence the descriptive _point_) that describe the periodicity and growth or decay of the features they support. These modes drive secular, self-sustained evolution distinct from that of a transient response to an external driver (e.g. the disequilibrium initial conditions in the previous section) that phase mixes away. Hence we refer to these groups as 'Mode 1' and 'Mode 2', and examine their nature in the following subsections. In the disc \(m=1\) analysis, these are Groups \(m1\)-\(1\) and \(m1\)-\(2\).
#### 3.3.1 Appearance of modes in the disc
We augment the information about the two modes summarised in Figure 5 and Table 2 with visualisations of their appearance in Figures 6 and 7. Figure 6 shows selected face-on disc surface density reconstructions to demonstrate that both modes create spiral patterns that are retrograde relative to the rotation of the disc. Figure 7 illustrates the radial (y-axis) and time (x-axis) evolution of the surface density (upper panel in each pair) and phase (lower panel in each pair) over the full time sequence, indicating both the growth/decay and periodicity. Inspection of these figures and the table provides the full characterisation of the modes.
**Mode 1** groups \(m=1\) PCs 0 and 1, reconstructing a slowly rotating, growing mode. Referring to Figure 1, the frequency of the signal (\(\Omega=0.6\) cycles/Gyr) is located near the scale radius of the halo, well outside the disc8. Mode 1 grows significantly in amplitude over the simulation, with the peak surface density signal coming near the end of the simulation. Computing the contrast in the outer, low-density disc (\(r>12\) kpc), the surface density deviation amplitude reaches 10 per cent, detectable as lopsidedness in deep imaging of disc galaxies.
Footnote 8: For \(m>0\) harmonics, PC groupings frequently occur in pairs that describe both the amplitude and phase of a feature. In the left hand panels of Figure 5 only the cosine terms in the coefficients are plotted to allow the reader to infer both amplitude and periodicity.
**Mode 2** groups \(m=1\) PCs 2 and 3, reconstructing a slowly rotating, slowly decaying mode. The frequency of the signal (\(\Omega=1.7\) cycles/Gyr) is located closer to the Galactic centre, but also beyond the bulk of the disc mass. Mode 2 decays from the outset of the simulation, and is significantly weaker than the first mode, with a peak contrast of order 0.1 per cent within a scale length.
#### 3.3.2 Connection between the disc and halo
Since the frequencies of the two modes are consistent with halo frequencies we naturally suspect that the halo is supporting the modes. To test this, we perform additional mSSA decompositions: first with the \(l=1\) halo coefficients alone, and then with the \(l=1\) halo coefficients jointly with the \(m=1\) disc coefficients9. The results of the runs are summarised in Table 2. We find sets of PCs in the halo-only decompositions corresponding to Modes 1 and 2, which we associate by means of their similar frequencies. We also find corresponding PCs in the joint disc-halo decomposition. The joint analysis in particular suggests that the modes are multi-component in nature, owing to the similar properties between all decompositions.
Footnote 9: To find correlated features between the halo and disc we choose halo coefficients that can describe features with meaningful projections into the disc plane. To this end we choose only the \(Y_{1}^{\pm 1}\) terms of the halo expansion, excluding the \(Y_{1}^{0}\) term. In addition we use the same number of coefficients from each component to avoid introducing the prior of unequal representation.
Figure 8 provides an example visualisation for a single radial coefficient (\(n=0\)) contributing to Mode 1 to verify this interpretation. Comparing the coefficients reconstructed from the groups identified in the independent analyses of the disc and halo (solid lines), as well as the joint disc-halo decomposition (dashed lines), we find the same features are identified in both the combined and independent analyses: the curves in the upper panel of Figure 8 are unchanged whether the decomposition is performed on a per-component basis, or jointly. This implies that the same principal component can describe the evolution in both the disc and halo, and that the signal is strong enough in both components to be identified in per-component analyses. This is a strong indication of a correlated multi-component signal. In general the same features will not be recovered from combined analysis of different components because the _inter_-component decomposition need not match the _intra_-component decomposition. In contrast, our joint analysis finds a single PC group may be used to reconstruct the modes in _both_ the disc and the halo, identifying them as a mutual mode.
For both modes, we can examine and compare timescales and amplitudes to try to understand the driver of the evolution. Comparing between components, the feature strength is higher in the halo at earlier times in each mode (of order 1% density contrast in the halo, but well below that in the disc), implying that the halo is responsible for starting each mode at large radii (compare Figures 6 and 9). For the growing Mode 1, estimating the growth rate from the modulus of the coefficients at early times also reveals the growth of the halo feature to be twice that of the disc. The saturation point of the halo is also measurably earlier than that of the disc (\(T=2.2\) Gyr in the halo vs \(T=3.2\) Gyr in the disc).
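The growth-rate comparison described above can be sketched as a simple fit to the logarithm of the coefficient modulus at early times (our own construction; the fitting window and the toy amplitudes below are illustrative, not simulation output):

```python
import numpy as np

def growth_rate(amplitude, times, t_max=1.5):
    """Exponential growth rate (Gyr^-1) from a linear fit to log|amplitude| over
    times <= t_max, where `amplitude` is the modulus of a reconstructed
    coefficient (e.g. the n=0 dipole term of the disc or the halo)."""
    mask = (times <= t_max) & (amplitude > 0)
    slope, _ = np.polyfit(times[mask], np.log(amplitude[mask]), 1)
    return slope

# Toy stand-ins (not simulation output) in which the halo grows twice as fast.
t = np.arange(549) * 0.01
halo_amp = 0.010 * np.exp(1.0 * t)
disc_amp = 0.002 * np.exp(0.5 * t)
print(growth_rate(halo_amp, t), growth_rate(disc_amp, t))   # ~1.0 and ~0.5
```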
The comparison of the disc and halo features in the previous paragraph suggests that the modes may arise from a fundamental dynamical property of the halo component. Figure 9 shows snapshots of the halo feature during the simulation at times corresponding to Figure 6. The features are both slow retrograde patterns which build and/or damp over time. They bear hallmarks - a slow dipole pattern at relatively large scales - of the weakly damped \(l=1\) modes in spherical systems that have been studied using linear perturbation theory. These were first identified by Weinberg (1994), and later additionally reported by Heggie et al. (2020), Fouvry and Prunet (2021), and Weinberg (2022).
_We conclude that BFE+mSSA has allowed us to detect and characterise slow, secular evolution of our isolated simulated galaxy due to the nature of the underlying equilibrium._
#### 3.3.3 Key insights
The results in this section provide additional illustrations of the ability of BFE+mSSA to separate evolutionary pathways in a single harmonic and to detect coupling across components.
Most significantly, BFE+mSSA allowed the detection of slow, low-level secular evolution in our simulation that had been predicted in analytic work (Weinberg, 1994; Fouvry and Prunet, 2021) and recently observed in star cluster and dark-matter-halo-only simulations (Heggie et al., 2020; Weinberg, 2022). The analytic work suggests that spherical systems, such as dark matter halos, generically exhibit dipole point modes. The common existence of these modes has important implications for understanding lopsidedness in galaxies: the halo and disc mutually open dynamical avenues that cannot be taken by either component independently; therefore many dynamical features are simply inexplicable without an understanding of the interplay between components. However, making a clear connection between the theory and observed galaxies has been hampered by the technical challenge of applying analytic work to multi-component systems. Moreover, while numerical simulations routinely represent multi-component systems, the description of the results is typically limited to visualisations and statistical analyses that can only qualitatively be connected to dynamical drivers.
BFE+mSSA has bridged this gap by clearly showing an \(l=1\) mode in our simulated halo driving lopsidedness in our simulated disc. These results speak to the promise of BFE+mSSA for forging the missing connection between theory, simulations, and observations needed to interpret galactic properties in terms of our fundamental dynamical understanding of secular evolution.
### Fluctuations and other uninterpretable features
In the two previous sections, we identified interpretable signals in various harmonics of both the disc and halo coefficients in groups of low-order PCs using BFE+mSSA. However, inspection of the last column of Tables 1 and 2 shows that these PC groups only contain a fraction of the total singular values (which are normalised to total unity): most of the groups represent less than 20 per cent of the variance in the coefficients being analysed10. The rest of the signal is spread over many (many!) higher-order PCs with lower SVs. These are PCs with very weak self-gravity. We refer to these remaining terms as the _nullity_, owing to its uninterpretable nature: it will contain numerical noise, but may also contain signals too weak to be included in our analysis.
Footnote 10: The exceptions are some of the PCs associated with the monopole, which encode the equilibrium. These PCs are responsible for upwards of 60 per cent of the singular value signal, cf. Table 1.
To understand the properties of the nullity, we collect all uninterpretable PCs for a given mSSA decomposition and analyse their reconstructions, summarising the results for low-order disc harmonics in Figure 10 and for all decompositions in Table 3. Figure 10 shows the reconstructed coefficients and corresponding power spectra for the PCs assigned to the nullity for low-order disc harmonics. Comparing this to the corresponding Figures 3 and 5 for lower order PCs, the difference is clear. The bottom panels for the \(m=2\) nullity do have hints of a signal in the form of low-level systematic evolution in the left hand panel and some clear peaks in the right panel. We discuss future strategies to hunt for weak signals in Section 4.1. However, in general, there is a lack of periodic or systematic evolution in the left hand panels and flat spectra of frequencies in the right hand panels, characteristic of noise. A comparison of the contrast columns of Tables 3, A1 and 2 shows that the fluctuations in the surface density derived from the nullity are mostly stronger than the coherent signals in this particular simulation: our BFE+mSSA analysis has supported insights that would otherwise be inaccessible.
## 4 Looking ahead
### Essential Future Work - assessment of weak feature significance
Our analyses of simulations of bar formation (Weinberg and Petersen, 2021; Petersen et al., 2022) and an isolated disc galaxy (this paper) amply illustrate the facility of BFE+mSSA to learn about both significant and expected as well as subtle and unanticipated dynamical evolution. The results are very promising for general applications to a wide variety of dynamical systems. However, our work so far has involved close supervision of BFE+mSSA to both interpret and understand the significance of the features it has identified.
In particular, the interpretative ambiguity we encountered in the higher order terms in this paper outlines the current limit of BFE+mSSA. This limit motivates the need for a rigorous statistical analysis of significance for mSSA-identified signals. Many of the well-known approaches from statistical
analysis would be suitable for this purpose. For example, let us take as a test case the hypothesis that the signal observed at \(m=2,3,4\) is consistent with background noise. That is, our null hypothesis is that our simulation can generate a signal with the same properties as the signal in question without inherent self-gravity. To test this, we need to generate a simulation with the same noise spectrum as the full simulation but without any self-gravitating features on the spatial and temporal scales of our putative signal. Let us assume that we know how to perform such simulations (we propose an exp-enabled approach below). An ensemble of these null-hypothesis simulations can be run and analysed using mSSA. From the ensemble of simulations, one may construct prediction intervals for the singular values under the null hypothesis. Then, if the singular value corresponding to the signal in question is beyond the prediction intervals, the corresponding principal component is considered significant. In such a case, the signal can be reliably reconstructed. This approach is often called Monte Carlo SSA (MC-SSA; see Allen and Smith, 1996).
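A minimal sketch of such a surrogate test is shown below, assuming a simple AR(1) noise null hypothesis fitted to a single coefficient series (the full MC-SSA procedure, including matching the autoregressive model to the reconstructed coefficient covariance and replaying fields with exp, is described in the following paragraphs and deferred to future work):

```python
import numpy as np

def ar1_surrogate(series, rng):
    """One AR(1) surrogate with approximately the same lag-1 autocorrelation and
    variance as `series` (a single detrended coefficient time series)."""
    x = series - series.mean()
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]                 # lag-1 autocorrelation
    sigma = np.std(x) * np.sqrt(max(1.0 - phi**2, 1e-12))  # innovation amplitude
    out = np.zeros_like(x)
    for i in range(1, x.size):
        out[i] = phi * out[i - 1] + sigma * rng.standard_normal()
    return out

def leading_singular_value(series, window=100):
    """Leading singular value of the lag-embedded (trajectory) matrix."""
    k = series.size - window + 1
    traj = np.stack([series[i:i + k] for i in range(window)])
    return np.linalg.svd(traj, compute_uv=False)[0]

# Is the leading singular value of the data larger than expected under the null?
rng = np.random.default_rng(1)
t = np.arange(549) * 0.01
data = 0.5 * np.cos(2 * np.pi * 1.5 * t) + rng.standard_normal(t.size)  # toy series
sv_data = leading_singular_value(data)
sv_null = [leading_singular_value(ar1_surrogate(data, rng)) for _ in range(200)]
print(sv_data > np.percentile(sv_null, 99))   # True flags a significant component
```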
Analyses of this sort are particularly well-suited to the exp framework described in Petersen et al. (2022). We can use the mSSA analysis to construct a realistic reconstruction of the coefficient series from the self-gravitating simulation _without_ the self-gravitating features of interest by removing the groups corresponding to the signal in question. In the study presented here, this would be akin to retaining only the nullity reconstructions of the coefficients. We can generate new
Figure 10: An analysis of the content in the nullity for \(m=0\) (upper panels), \(m=1\) (middle panels), and \(m=2\) (lower panels). The left panels show the reconstructed nullity coefficient amplitudes over time for \(m=0,1,2\) (top to bottom). The right panels show the power spectra of the reconstructed nullity coefficients for each harmonic. Both \(m=0\) and \(m=1\) show no discernible signals. The \(m=2\) harmonic shows some periodicity, but the power spectrum suggests the frequencies are broad and not strongly coherent. Therefore, we are confident that we are not throwing away interpretable signal in the nullity in any harmonics. These reconstructions may be compared to the unprocessed coefficients, Figure 2, for a quantitative analysis of what signals are part of coherent signal groups.
\begin{table}
\begin{tabular}{c c c c c} mSSA & PCs & DFT peak & contrast & SV \\ decomposition & & (\(\mathrm{Gyr}^{-1}\)) & (\(R<R_{d}\)) & fraction \\ \hline \hline disc \(m=0\) & 6+ & - & 0.005 & 0.275 \\ disc \(m=1\) & 6+ & - & 0.008 & 0.678 \\ disc \(m=2\) & 2+ & - & 0.017 & 0.799 \\ disc \(m=3\) & 2+ & - & 0.012 & 0.936 \\ disc \(m=4\) & 2+ & - & 0.010 & 0.914 \\ disc \(m=5\) & 2+ & - & 0.006 & 0.954 \\ disc \(m=6\) & 2+ & - & 0.005 & 0.964 \\ disc \(m=1,3,5\) & 2+ & - & 0.027 & 0.933 \\ disc \(m=2,4,6\) & 2+ & - & 0.037 & 0.928 \\ halo \(l=0\) & 6+ & - & - & 0.028 \\ halo \(l=1\) & 6+ & - & - & 0.693 \\ disc \(m=0\), halo \(l=0\) & 6+ & - & 0.021 & 0.130 \\ disc \(m=1\), halo \(l=1\) & 8+ & - & 0.010 & 0.667 \\ disc \(m=1,2\), halo \(l=1\) & 4+ & - & 0.012 & 0.801 \\ \end{tabular}
\end{table}
Table 3: Summary of principal components assigned the nullity in our decompositions. We refer to each collection of PCs here as the ‘Nullity’, rather than a PC group. Disc harmonics are denoted with \(m\), halo harmonics are denoted with \(l\). Columns are the same as in Table 1.
coefficient series from an autoregressive model11 consistent with the coefficient covariance from the mSSA reconstruction. Then, exp allows the potential fields from the reconstructed coefficients to be replayed for a new ensemble of particles with very little computational effort. The resulting expansion coefficient series are gathered automatically for analysis by mSSA, and can be analysed for significance of detected features. A detailed description of the MC-SSA approach in the exp context will be given in a later contribution.
Footnote 11: Autoregressive noise models are typically used for null hypotheses in MC-SSA because SSA provides good estimates for the frequencies and exponential factors of processes generated by the related linear recurrence relations (Golyandina & Zhigljavsky, 2013, Section 3).
### Prospects for applications to simulations
Despite the limitations, there are a multitude of prospects for immediate, supervised applications of BFE+mSSA to simulations of galaxies, whether isolated, interacting or evolving in the full cosmological context.
**Dynamical analyses of simulations of galactic evolution.**
Recent surveys (Majewski et al., 2017; Steinmetz et al., 2020; Gaia Collaboration et al., 2022) demonstrate that the Milky Way continues to evolve through satellite interaction. \(N\)-body simulations have explained some key observational signatures (Laporte et al., 2019; Petersen & Penarrubia, 2020; Garavito-Camargo et al., 2021; Vasiliev et al., 2021; Hunt et al., 2022). However, interpretation of these simulations is challenging since many actors contribute simultaneously. The BFE+mSSA knowledge discovery approach is capable of separating, characterising and dissecting the signatures of the mutual interactions of each component in simulations by non-parametrically correlating temporal and spatial scales. BFE+mSSA promises detailed predictions and identification of features in current stellar data sets (see Petersen & Penarrubia, 2021; Garavito-Camargo et al., 2021; Lilleengen et al., 2022, for some recent results) and confident mapping of the dark matter halo's global structure and _distortions_ to that structure. This goal was unimaginable even 5 years ago.
**Structural characterisation and correlation of fields.** This paper demonstrated the discovery of two-dimensional features in disc density resulting from internal (disequilibrium-related) dynamics and halo interactions. However, BFE+mSSA can be applied to any field in any number of dimensions. For example, Weinberg & Petersen (2021) illustrated a three-dimensional disc BFE. The exp library already enables joint BFE+mSSA investigations of any number of three-dimensional density and potential fields. These may be augmented by kinematic fields as in Weinberg & Petersen (2021) or some other field such as star formation rates and implied local metallicity. If the additional fields encode spatial information (e.g. they are BFE coefficients or even radial and azimuthal bins), their temporal and spatial scales will be correlated with the density and potential fields. BFE+mSSA can adapt to new observational tools and _windows_ as new surveys become available.
Understanding of noise.There have been many years of debate on the effect of noise in conclusions drawn from dynamical simulations, from bar-halo interactions (Weinberg & Katz, 2007), through dynamical friction (Weinberg, 2001), to satellite disruption (Errani & Penarrubia, 2020). BFE+mSSA clearly separates the correlated, quasi-periodic signals resulting from dynamical interaction and coupling from the fluctuating forces resulting from finite particle number stochastic effects. We expect that couplings in orbital dynamics have frequencies near or smaller than the characteristic orbital frequencies. Since the individual PCs describe the temporal behaviour of components assigned to the noise field and the power spectrum describes their characteristic frequencies, mSSA provides a natural classification of signal and noise. Investigations of test-particle orbits with and without the noise component provide a diagnostic tool for the reliability of features in simulations and the role of fluctuations more generally.
## 5 Conclusions
### Near-equilibrium evolution: the importance of multi-component modes
We applied BFE+mSSA to a simulation of an isolated Milky Way like galaxy. The BFE+mSSA combination allows us to _automatically_ identify the main features in the model galaxy _and their origins_. Most remarkably, BFE+mSSA achieved this in the challenging case of an isolated, multi-component galaxy that had specifically been constructed to _not_ evolve and where the dynamical signatures were below the level of the noise. Our work complements a prior investigation (Weinberg & Petersen, 2021) which used BFE+mSSA to characterise significant evolution of known nature in a simulation which formed a galactic bar.
In our near-equilibrium model, we identified - for the first time - two multi-component (disc-halo) dipole point modes (Figures 5-8) which evolve over time (one growing, one damped; Table 2). This discovery is enabled by the BFE+mSSA methodology; such dynamical effects are at a level such that other methods, such as Fourier analyses, will not be able to recover the signals. Halo modes are expected from linear perturbation theory (Weinberg, 1994; Fouvry & Prunet, 2021), and are observed in simulations of star clusters (Heggie et al., 2020) and dark matter halos (Weinberg, 2022), but the coupling of a spheroid to a disc has not been discovered to date. The BFE+mSSA methodology makes the identification of point modes straightforward, and provides several avenues for corroboration. We employed several different mSSA decompositions to validate our findings. The existence of point modes in these isolated simulations demonstrates the fundamental contribution of component interactions to the dynamical evolution of galaxies. We expect that the existence of such multi-component disc-halo modes is a generic feature of such systems, possibly including the Milky Way. These modes likely have influence over the structural evolution of disc galaxies. For instance, our results immediately suggest that low dipole modes will be most detectable at large radii (e.g. \(R>20\) kpc in the Milky Way), where density contrasts can exceed 10 per cent relative to a smooth disc.
In addition to point modes, we identified the long-lived results of initial conditions disequilibrium, resulting from individual halo and disc disequilibrium features. Starting with rings encoded in the monopole \(m=0\) coefficients, we found correlations with many other harmonics, including a persistent \(m=2\) signal. We uncovered an aphysical 'settling' of the halo in response to the presence of the disc at the outset of the simulation. Future work modelling idealised galaxies must take care to ensure that disequilibria in the initial conditions, and the resulting persistent features, are not treated as real dynamics.
Finally, we quantified the remaining signal that we classified as the nullity, and put limits on the magnitude of unexplained surface density fluctuations in the disc. The desire to push even deeper in the decomposition of simulations motivates the essential future work, but also inspires prospects for future applications.
### Dynamical Data Mining as the future of Galactic Dynamics
Galactic Dynamics is a mature field with elegant descriptions of equilibrium systems, estimates for scales of the processes involved in the interactions that are known to affect them, and sophisticated analytic methods that describe evolution in the linear regime. While we can understand the basic governing principles with detailed mathematical models from Hamiltonian perturbation theory, the inter-component and environmental interactions that produce this morphology are hard, if not impossible, to study from modal analysis alone. BFE methods both underpin our dynamical data mining technique and are often used in analytic perturbation work. Thus they provide a natural bridge between theoretical work and numerical simulation. The combination of BFE representation of the possibly unknown dynamics in simulations with a machine-based knowledge acquisition tool such as mSSA allows for identification of couplings that may be too hard to predict otherwise. This natural synergy between mathematical theory and simulation is the main motivation for our approach. Series of BFE can also be used to characterise _observed_ fields in galaxies.
A galaxy's picturesque morphological structure is a historical summary of its evolution. Cosmological predictions for the frequency of galactic interactions explain the abundant signatures of disequilibrium observed. The detail of our picture of disequilibrium is rapidly advancing, in terms of resolution in the Galaxy (e.g. Hunt et al., 2022) and of occurrence rate in other galaxies (e.g. Pearson et al., 2022). Simulated realisations of galaxies in disequilibrium are similarly advancing in resolution and scale. However, the tools to take full advantage of this twin onslaught of data, simulated and real, are currently lacking. Such tools must be capable of modelling galaxies in disequilibrium, making quantitative and dynamically meaningful connections between simulated and observed galaxies, and connecting with analytic work in the linear regime.
As outlined in Section 4.2, our results suggest the tremendous promise of BFE+mSSA for the field of Galactic Dynamics, with a myriad of envisioned applications. Many of these applications can be undertaken now by adopting the supervised learning approach to using BFE+mSSA. These include detailed analyses of galactic components for galaxies in both isolated and cosmological settings.
Nor is there any reason to limit the BFE+mSSA analysis to galaxies. BFE+mSSA can be used to characterise the structure and dynamical evolution of self-gravitating, interacting systems more generally and in any context, from binary asteroids in the solar system (Quillen et al., 2022), through protoplanetary discs (Cadman et al., 2021), to nuclear star clusters in the centres of galaxies (Fouvry et al., 2022).
The remaining and key challenge to be solved is to understand how to confidently assess the significance of all the features that BFE+mSSA recovers in an _unsupervised_ way. Once this is developed it will be possible to broadly apply BFE+mSSA to large samples of systems: both simulated and real.
## Acknowledgements
We warmly thank Chervin Laporte for sharing the initial conditions from his simulation with us. We acknowledge support from the Center for Computational Astrophysics (CCA) at the Flatiron Institute in the form of access to their computational resources which allowed us to create our simulation and generate the associated data. In addition, we thank CCA leadership and staff for hosting the Beyond-BFE collaboration meetings. We thank members of the B-BFE collaboration and the Dynamics Group at CCA for numerous conversations during development of this paper. MSP's contributions were partially supported by grant Segal ANR-19-CE31-0017 of the French Agence Nationale de la Recherche as well as a UKRI Stephen Hawking Fellowship. KVJ and AJ's contributions were supported by NSF grant AST-1715582.
## Data Availability
The code, data, and simulation used to generate the results in this article will be made available upon reasonable request to the appropriate author.
|
2302.03418 | Consistency of higher derivative couplings to matter fields in
scalar-tensor gravity | Recently, a generalization of invertible disformal transformations containing
higher-order derivatives of a scalar field has been proposed in the context of
scalar-tensor theories of gravity. By applying this generalized disformal
transformation to the Horndeski theory, one can obtain the so-called
generalized disformal Horndeski theories which are more general healthy
scalar-tensor theories than ever. However, it is unclear whether or not the
generalized disformal Horndeski theories can be coupled consistently to matter
fields because introducing a matter field could break the degeneracy conditions
of higher-order scalar-tensor theories and hence yield the unwanted
Ostrogradsky ghost. We investigate this issue and explore the conditions under
which a minimal coupling to a matter field is consistent in the generalized
disformal Horndeski theories without relying on any particular gauge such as
the unitary gauge. We find that all the higher derivative terms in the
generalized disformal transformation are prohibited to avoid the appearance of
the Ostrogradsky ghost, leading to the conclusion that only the theories that
are related to the Horndeski theory through a conventional disformal
transformation remain ghost-free in the presence of minimally coupled matter
fields. | Tact Ikeda, Kazufumi Takahashi, Tsutomu Kobayashi | 2023-02-07T12:05:30Z | http://arxiv.org/abs/2302.03418v2 | # Consistency of higher derivative couplings to matter fields in scalar-tensor gravity
###### Abstract
Recently, a generalization of invertible disformal transformations containing higher-order derivatives of a scalar field has been proposed in the context of scalar-tensor theories of gravity. By applying this generalized disformal transformation to the Horndeski theory, one can obtain the so-called generalized disformal Horndeski theories which are more general healthy scalar-tensor theories than ever. However, it is unclear whether or not the generalized disformal Horndeski theories can be coupled consistently to matter fields because introducing a matter field could break the degeneracy conditions of higher-order scalar-tensor theories and hence yield the unwanted Ostrogradsky ghost. We investigate this issue and explore the conditions under which a minimal coupling to a matter field is consistent in the generalized disformal Horndeski theories without relying on any particular gauge such as the unitary gauge. We find that all the higher derivative terms in the generalized disformal transformation are prohibited to avoid the appearance of the Ostrogradsky ghost, leading to the conclusion that only the theories that are related to the Horndeski theory through a conventional disformal transformation remain ghost-free in the presence of minimally coupled matter fields.
+
Footnote †: preprint: RUP-23-2, YITP-23-11
## I Introduction
The mechanism of the accelerated expansion of the late Universe is yet unknown, which motivates the active study of modified gravity as an alternative to dark energy. A modification of gravity at high energies is also strongly motivated because general relativity is considered to be a low-energy effective theory and it is thus interesting to explore physics beyond general relativity in a strong gravity regime. Furthermore, modified gravity models are useful for comparison with general relativity in the context of testing gravity (see, e.g., Refs. [1; 2; 3] for reviews). Modified gravity is described, at least effectively, by theories equipped with new gravitational degree(s) of freedom (DOFs) on top of the massless graviton DOFs, and as such scalar-tensor theories are often studied where a scalar DOF is taken into account. Once one goes beyond general relativity, one may naturally consider higher derivative terms in the gravitational Lagrangian, but higher-derivative theories are plagued by the Ostrogradsky instability in general [4; 5; 6]. Therefore, in generalizing theories of gravity, one must be careful not to induce such an instability arising from higher derivatives. Even if the Lagrangian itself contains higher derivatives, this instability can be avoided as long as the field equations are intrinsically of second order. The Horndeski theory [7; 8; 9] is the most general scalar-tensor theory having second-order Euler-Lagrange equations, and due to this merit it has been studied extensively over the recent years. However, the Horndeski theory is not the most general scalar-tensor theory that is free from the Ostrogradsky instability. The point here is that, if the system is degenerate, higher-order field equations contain less number of DOFs than anticipated from the order of derivatives [10; 11; 12; 13; 14]. Degenerate higher-order scalar-tensor (DHOST) theories [15; 16; 17] exploit this loophole and extend the Horndeski theory to a large family of higher-derivative scalar-tensor theories having a single scalar and two tensor DOFs (see Refs. [18; 19] for reviews).
Given that two theories related via invertible field redefinition have the same number of dynamical DOFs [20; 21], an invertible metric redefinition is a convenient and useful way of generating nontrivial class of healthy scalar-tensor theories from existing ones that are manifestly ghost-free. In particular, derivative-dependent transformations yield higher-derivative theories that are nevertheless free from the Ostrogradsky ghost. For example, by applying to the Horndeski theory a disformal transformation, which is a general metric redefinition involving the first derivative of the scalar DOF [22], we can obtain a certain subset of DHOST theories [23]. Indeed, the first example of scalar-tensor theories beyond Horndeski was obtained in that way [24]. It is important to note that, among subclasses of DHOST theories that are systematically constructed by imposing the degeneracy conditions, only the subclass generated via invertible disformal transformation from the Horndeski theory is physically interesting because cosmological solutions can be stable (and tensor perturbations remain dynamical) only in that subclass [25; 26].1 Such disformally generated DHOST theories include, e.g., the so-called "beyond Horndeski" or GLPV theories [29; 30] as specific cases.
Footnote 1: A noninvertible subclass of disformal transformations can also be used to generate a certain subclass of DHOST theories, but this subclass does not accommodate stable cosmological solutions or otherwise the tensor perturbations are nondynamical [27; 28].
Recently, a generalization of invertible disformal transformations involving second (and higher) derivatives of the scalar DOF was proposed [31]. By applying this novel class of generalized disformal transformations to the Horndeski theory, we can derive yet more general higher-order scalar-tensor theories than ever constructed, which we call generalized disformal Horndeski (GDH) theories [32]. Since two theories thus related via invertible disformal transformation are equivalent, GDH theories are also free from the Ostrogradsky instability (even though the field equations are apparently of higher order). However, this statement should be taken with care. The equivalence holds only in vacuum, and the Horndeski theory with minimally coupled matter and GDH theories with minimally coupled matter are _not_ equivalent. Therefore, the inclusion of (minimally coupled) matter fields in GDH theories could result in extra ghost DOFs. The possible appearance of extra DOFs can be understood by moving back to the "Horndeski frame" where the gravity sector is described by the Horndeski theory and the matter sector is coupled to a generalized disformal metric involving second derivatives of the scalar DOF. In the Horndeski frame, the matter fields are thus coupled with second derivatives of the scalar DOF, possibly breaking the degeneracy conditions. This issue has been pointed out already in the context of DHOST theories [33; 34]. Motivated by this concern, the consistency of matter couplings in GDH theories has been investigated in Refs. [35; 36; 32]. However, in those previous studies, the unitary gauge is taken in which the scalar DOF is homogeneous on constant-time hypersurfaces. The conditions that remove extra ghost DOFs derived in Refs. [35; 36; 32] should therefore be weaker than those that would be derived without assuming any particular gauge. In other words, upon imposing the degeneracy conditions validated only in the unitary gauge, there would be the Ostrogradsky ghost away from the unitary gauge. Such a gauge-dependent Ostrogradsky ghost would be a nonpropagating "shadow mode," and hence harmless [37; 38]. Nevertheless, since such theories are rather tricky and the shadow mode requires a careful treatment, we are interested mostly in theories without the shadowy mode. Also, by removing the Ostrogradsky ghost in any gauge, one may safely consider, e.g., static stars and black holes dressed with a static scalar profile. Along this line of thought, in this paper, we explore, without assuming any particular gauge, the conditions under which matter couplings to GDH theories are consistent and there is no Ostrogradsky instability.
This paper is organized as follows. In the next section, we briefly review the generalized disformal transformation [31] and introduce the action for GDH theories by applying a generalized disformal transformation to the Horndeski theory. In Sec. III, we study the consistency of matter couplings in the GDH theory. We review the previous results obtained in the unitary gauge [35; 36; 32], and then investigate the conditions for the matter couplings to be consistent away from the unitary gauge. We finally draw our conclusions in Sec. IV.
## II Generalized disformal transformations
### Invertible disformal transformations with higher-order derivatives
Let us consider a generalized disformal transformation defined as [31]
\[\bar{g}_{\mu\nu}=F_{0}g_{\mu\nu}+F_{1}\phi_{\mu}\phi_{\nu}+2F_{2}\phi_{(\mu}X_ {\nu)}+F_{3}X_{\mu}X_{\nu}, \tag{1}\]
where \(\phi_{\mu}\coloneqq\nabla_{\mu}\phi\) and \(X_{\mu}\coloneqq\nabla_{\mu}X\) with \(X\coloneqq\phi^{\mu}\phi_{\mu}\). Here, \(F_{i}\) (\(i=0,1,2,3\)) are functions of \((\phi,X,Y,Z)\), where \(Y\coloneqq\phi^{\mu}X_{\mu}\) and \(Z\coloneqq X^{\mu}X_{\mu}\). Restricting the form of the functions to be \(F_{0}=F_{0}(\phi,X)\), \(F_{1}=F_{1}(\phi,X)\) and \(F_{2}=F_{3}=0\), we obtain the conventional disformal transformation [22]
\[\bar{g}_{\mu\nu}=F_{0}(\phi,X)g_{\mu\nu}+F_{1}(\phi,X)\phi_{\mu}\phi_{\nu}. \tag{2}\]
In this sense, the transformation (1) involves the conventional one (2).
Following Ref. [31], we summarize the conditions under which the transformation (1) is invertible. The essential ingredient of the invertible generalized disformal transformation is the requirement that a set of generalized disformal transformations forms a group under the following two operations:
\[\left(\bar{g}\cdot\hat{g}\right)_{\mu\nu}\coloneqq\bar{g}_{\mu\alpha}g^{\alpha \beta}\hat{g}_{\beta\nu},\quad\text{(Matrix product)} \tag{3}\]
and
\[\left(\bar{g}\circ\hat{g}\right)_{\mu\nu}\left[g,\phi\right]\coloneqq\bar{g}_ {\mu\nu}[\hat{g},\phi].\quad\text{(Functional composition)} \tag{4}\]
We are then allowed to construct the inverse of the transformed metric and the inverse transformation. Note that, in contrast to the case of conventional disformal transformation (2), the closedness under the functional composition for the generalized disformal transformation (1) is nontrivial as the transformation law involves the derivative of
the metric. For the set of generalized disformal transformations to form a group, it is sufficient that the following conditions are satisfied [31]:
\[F_{0}\neq 0,\quad\mathcal{F}\neq 0,\quad\bar{X}_{X}\neq 0,\quad\bar{X}_{Y}=\bar{X} _{Z}=0,\quad\left|\frac{\partial(\bar{Y},\bar{Z})}{\partial(Y,Z)}\right|\neq 0, \tag{5}\]
where
\[\bar{X}\coloneqq\bar{g}^{\mu\nu}\phi_{\mu}\phi_{\nu},\quad\bar{Y}\coloneqq\bar {g}^{\mu\nu}\phi_{\mu}\bar{X}_{\nu},\quad\bar{Z}\coloneqq\bar{g}^{\mu\nu}\bar{ X}_{\mu}\bar{X}_{\nu}, \tag{6}\]
and
\[\mathcal{F}(\phi,X,Y,Z) \coloneqq F_{0}^{2}+F_{0}(XF_{1}+2YF_{2}+ZF_{3})+(Y^{2}-XZ)(F_{2}^{ 2}-F_{1}F_{3}). \tag{7}\]
Note that, among the conditions in Eq. (5), the one that guarantees the closedness under the functional composition is \(\bar{X}_{Y}=\bar{X}_{Z}=0\). Suppose that these conditions are satisfied. The inverse metric \(\bar{g}^{\mu\nu}\) of \(\bar{g}_{\mu\nu}\) is then given by
\[\bar{g}^{\mu\nu}=f_{0}g^{\mu\nu}+f_{1}\phi^{\mu}\phi^{\nu}+2f_{2}\phi^{(\mu}X^ {\nu)}+f_{3}X^{\mu}X^{\nu}, \tag{8}\]
where
\[f_{0}\coloneqq\frac{1}{F_{0}},\quad f_{1}\coloneqq-\frac{F_{0}F_{1}-Z\left(F _{2}^{2}-F_{1}F_{3}\right)}{F_{0}\mathcal{F}},\quad f_{2}\coloneqq-\frac{F_{ 0}F_{2}+Y\left(F_{2}^{2}-F_{1}F_{3}\right)}{F_{0}\mathcal{F}},\quad f_{3} \coloneqq-\frac{F_{0}F_{3}-X\left(F_{2}^{2}-F_{1}F_{3}\right)}{F_{0}\mathcal{ F}}. \tag{9}\]
The following formula is also useful for reconstructing the barred metric \(\bar{g}_{\mu\nu}\) [Eq. (1)] when its inverse is given in the form of Eq. (8):
\[F_{0}\coloneqq\frac{1}{f_{0}},\quad F_{1}\coloneqq-\frac{f_{0}f_{1}-Z\left( f_{2}^{2}-f_{1}f_{3}\right)}{f_{0}\mathcal{H}},\quad F_{2}\coloneqq-\frac{f_{0}f_{2}+Y \left(f_{2}^{2}-f_{1}f_{3}\right)}{f_{0}\mathcal{H}},\quad F_{3}\coloneqq- \frac{f_{0}f_{3}-X\left(f_{2}^{2}-f_{1}f_{3}\right)}{f_{0}\mathcal{H}}. \tag{10}\]
Here, we have defined
\[\mathcal{H}(\phi,X,Y,Z)\coloneqq f_{0}^{2}+f_{0}(Xf_{1}+2Yf_{2}+Zf_{3})+(Y^{2}-XZ)(f_{2}^{2}-f_{1}f_{3}), \tag{11}\]
which is obtained simply by replacing \(F_{i}\) in Eq. (7) by \(f_{i}\). Having constructed the inverse metric explicitly, let us next look at the inverse transformation. The inverse transformation of (1) is expressed as
\[g_{\mu\nu}[\bar{g},\phi]=\frac{1}{F_{0}}\bar{g}_{\mu\nu}-\frac{\bar{X}_{X}^{2} F_{1}-2\bar{X}_{\phi}\bar{X}_{X}F_{2}+\bar{X}_{\phi}^{2}F_{3}}{\bar{X}_{X}^{2}F_{0}} \phi_{\mu}\phi_{\nu}-2\frac{\bar{X}_{X}F_{2}-\bar{X}_{\phi}F_{3}}{\bar{X}_{X}^ {2}F_{0}}\phi_{(\mu}\bar{X}_{\nu)}-\frac{F_{3}}{\bar{X}_{X}^{2}F_{0}}\bar{X}_ {\mu}\bar{X}_{\nu}, \tag{12}\]
where \(F_{i}\)'s in the right-hand side are given as functions of \((\phi,\bar{X},\bar{Y},\bar{Z})\). Thanks to the group structure of the set of generalized disformal transformations, one can thus obtain the inverse metric and inverse transformation.
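A quick numerical cross-check of Eqs. (8), (9) and (16) is straightforward. The Python/numpy sketch below uses a randomly perturbed Minkowski metric, random gradients \(\phi_{\mu}\) and \(X_{\mu}\), and constant values of \(F_{i}\), all of which are arbitrary choices made only for the test; it verifies that the proposed \(\bar{g}^{\mu\nu}\) inverts \(\bar{g}_{\mu\nu}\) and that \(\det\bar{g}/\det g=F_{0}^{2}\mathcal{F}\), i.e., the square of Eq. (16).

```python
import numpy as np

rng = np.random.default_rng(1)

# Random near-Minkowski metric, gradients, and constant transformation functions (example values).
sym = rng.standard_normal((4, 4))
g = np.diag([-1.0, 1.0, 1.0, 1.0]) + 0.05 * (sym + sym.T)
ginv = np.linalg.inv(g)
p = 0.3 * rng.standard_normal(4)            # phi_mu
q = 0.3 * rng.standard_normal(4)            # X_mu
F0, F1, F2, F3 = 1.3, 0.4, 0.2, -0.3

X, Y, Z = p @ ginv @ p, p @ ginv @ q, q @ ginv @ q

# Eq. (1): the generalized disformal metric.
gbar = F0 * g + F1 * np.outer(p, p) + F2 * (np.outer(p, q) + np.outer(q, p)) + F3 * np.outer(q, q)

# Eqs. (7) and (9): calF and the coefficients of the inverse metric.
calF = F0**2 + F0 * (X * F1 + 2 * Y * F2 + Z * F3) + (Y**2 - X * Z) * (F2**2 - F1 * F3)
f0 = 1.0 / F0
f1 = -(F0 * F1 - Z * (F2**2 - F1 * F3)) / (F0 * calF)
f2 = -(F0 * F2 + Y * (F2**2 - F1 * F3)) / (F0 * calF)
f3 = -(F0 * F3 - X * (F2**2 - F1 * F3)) / (F0 * calF)

P, Q = ginv @ p, ginv @ q                    # phi^mu and X^mu, indices raised with g
gbar_inv = f0 * ginv + f1 * np.outer(P, P) + f2 * (np.outer(P, Q) + np.outer(Q, P)) + f3 * np.outer(Q, Q)

print(np.allclose(gbar @ gbar_inv, np.eye(4)))                           # Eq. (8) with Eq. (9)
print(np.isclose(np.linalg.det(gbar) / np.linalg.det(g), F0**2 * calF))  # square of Eq. (16)
```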
### Generalized disformal Horndeski theories
We now define a scalar-tensor theory obtained by applying a generalized disformal transformation to the metric in the Horndeski theory. The Horndeski theory (in vacuum) is described by the action [7; 8; 9]
\[S_{\text{Hor}}[g_{\mu\nu},\phi]\coloneqq\int\mathrm{d}^{4}x\sqrt{-g}\, \mathcal{L}_{\text{Hor}}[g_{\mu\nu},\phi], \tag{13}\]
where
\[\mathcal{L}_{\text{Hor}} \coloneqq G_{2}(\phi,X)+G_{3}(\phi,X)\Box\phi+G_{4}(\phi,X)R-2G_{4,X}( \phi,X)[(\Box\phi)^{2}-\phi^{\mu\nu}\phi_{\mu\nu}] \tag{14}\] \[+G_{5}(\phi,X)G^{\mu\nu}\phi_{\mu\nu}+\frac{1}{3}G_{5,X}(\phi,X) [(\Box\phi)^{3}-3\Box\phi\phi^{\mu\nu}\phi_{\mu\nu}+2\phi_{\mu\nu}\phi^{\mu \rho}\phi_{\rho}^{\nu}],\]
with \(\phi_{\mu\nu}\coloneqq\nabla_{\mu}\nabla_{\nu}\phi\). Performing a generalized disformal transformation satisfying the invertibility conditions (5), we obtain a new action for a scalar-tensor theory [32]
\[S_{\text{GDH}}[g_{\mu\nu},\phi]\coloneqq S_{\text{Hor}}[\bar{g}_{\mu\nu},\phi] =\int\mathrm{d}^{4}x\sqrt{-g}\,\mathcal{J}\mathcal{L}_{\text{Hor}}[\bar{g}_{\mu \nu},\phi], \tag{15}\]
where we have defined
\[\mathcal{J}\coloneqq\frac{\sqrt{-\bar{g}}}{\sqrt{-g}}=F_{0}\mathcal{F}^{1/2}=f_{0 }^{-1}\mathcal{H}^{-1/2}. \tag{16}\]
This theory was dubbed the generalized disformal Horndeski (GDH) theory [32]. Since the generalized disformal transformation is just a field redefinition, \(S_{\rm{GDH}}[g_{\mu\nu},\phi]\) and \(S_{\rm{Hor}}[\bar{g}_{\mu\nu},\phi]\) are mathematically equivalent as long as the transformation satisfies the invertibility conditions (5). This in particular means that there are one scalar and two tensor degrees of freedom in the (vacuum) GDH theory as in the (vacuum) Horndeski theory, even though the field equations in the former theory contain higher derivatives in general. However, in the presence of matter fields, things become subtle and the relation between the two theories must be examined carefully.
To see this point more closely, let us add matter field(s) (collectively denoted by \(\Psi\)) minimally coupled to the GDH theory,
\[S_{\rm{GDH}}[g_{\mu\nu},\phi]+S_{\rm{m}}[g_{\mu\nu},\Psi]=S_{\rm{Hor}}[\bar{g }_{\mu\nu},\phi]+S_{\rm{m}}[g_{\mu\nu},\Psi]. \tag{17}\]
We see that the matter fields are coupled with the generalized disformal metric (12) in the Horndeski frame (the right-hand side) in which the gravitational part of the action is written manifestly in the Horndeski form in terms of the metric \(\bar{g}_{\mu\nu}\). Since the generalized disformal metric (12) contains higher-order derivatives of \(\phi\), there is no guarantee that the coupling to matter is consistent, i.e., no unwanted degree of freedom appears as an Ostrogradsky ghost through this coupling. If such an additional dangerous degree of freedom were to appear, then the GDH theory in the presence of (minimally coupled) matter would be inconsistent, albeit healthy in vacuum.2 In the next section, we will study this point in detail.
Footnote 2: We expect that the mass of the Ostrogradsky mode would be proportional to some negative power of the energy density of the matter field. Therefore, from the EFT point of view, the ghost would be irrelevant if we push its mass above the cutoff scale.
## III Consistency of matter coupling
### Unitary gauge
In this subsection, we briefly review the consistency of matter coupling in the GDH theory under the unitary gauge, where the scalar field is spatially uniform. Though it is not always justified, one is allowed to take the unitary gauge at least in the context of cosmology where the scalar field is supposed to have a timelike gradient. The case of bosonic matter fields was discussed in Refs. [35; 36; 32]. As we saw in Sec. II.2, in the Horndeski frame, the matter fields are coupled to the generalized disformal metric (with respect to the metric that describes the gravity sector), and hence the matter action involves higher-order derivatives of \(\phi\). This indicates that the matter action yields the time derivative of the lapse function under the unitary gauge, which generically makes the (otherwise nondynamical) lapse function dynamical. Thus, the GDH theory would give rise to the Ostrogradsky ghost in general in the presence of matter fields. Fortunately, one can remove the Ostrogradsky ghost by restricting the generalized disformal metric to the following form [35; 36; 32]:
\[\bar{g}_{\mu\nu}=\tilde{F}_{0}g_{\mu\nu}+\tilde{F}_{1}\phi_{\mu}\phi_{\nu}+2 \tilde{F}_{2}\phi_{(\mu}\mathcal{X}_{\nu)}+\tilde{F}_{3}\mathcal{X}_{\mu} \mathcal{X}_{\nu},\qquad\mathcal{X}_{\mu}\coloneqq\left(\delta_{\mu}^{\alpha }-\frac{\phi_{\mu}\phi^{\alpha}}{X}\right)\partial_{\alpha}X, \tag{18}\]
where \(\mathcal{X}_{\mu}\) is the derivative of \(X\) projected onto a constant-\(\phi\) hypersurface and \(\tilde{F}_{i}=\tilde{F}_{i}(\phi,X,\mathcal{Z})\), with
\[\mathcal{Z}\coloneqq\mathcal{X}^{\mu}\mathcal{X}_{\mu}=Z-\frac{Y^{2}}{X}. \tag{19}\]
The point is that the object \(\mathcal{X}_{\mu}\) does not contain the time derivative of the lapse function under the unitary gauge where \(\phi=\phi(t)\). Note that Eq. (18) is embedded in the original generalized disformal transformation (1) as
\[F_{0}=\tilde{F}_{0},\qquad F_{1}=\tilde{F}_{1}-\frac{2Y}{X}\tilde{F}_{2}+\frac {Y^{2}}{X^{2}}\tilde{F}_{3},\qquad F_{2}=\tilde{F}_{2}-\frac{Y}{X}\tilde{F}_{ 3},\qquad F_{3}=\tilde{F}_{3}. \tag{20}\]
On the other hand, the case of fermionic matter fields needs a separate treatment as the matter action is written in terms of the tetrad rather than the metric itself. The authors of Ref. [36] developed the transformation law for the tetrad to show that the consistency of fermionic matter coupling requires an additional condition \(\tilde{F}_{3}=0\)[36].
Here, it should be emphasized that all these results were obtained in the unitary gauge. Away from the unitary gauge, there would be an apparent Ostrogradsky mode. It could be true that such an apparent Ostrogradsky ghost is harmless because it would be a nonpropagating "shadowy mode" that satisfies a three-dimensional elliptic differential equation on a spacelike hypersurface, leading to a configuration completely determined by boundary conditions. (See Refs. [37; 38; 39] for a more detailed discussion in the context of U-DHOST theories.) Having said that, we need a careful treatment for the shadowy mode, and hence theories without the shadowy mode are still of primary interest to us. By removing the Ostrogradsky mode in any gauge, one may safely consider, for example, static stars and black holes dressed with a static scalar field. In what follows, we investigate the consistency of matter coupling away from the unitary gauge and derive degeneracy conditions without either Ostrogradsky or shadowy mode.
### Away from the unitary gauge
Let us now explore the consistency of the matter coupling away from the unitary gauge. We start with the Horndeski-frame Lagrangian
\[\mathcal{L}[g_{\mu\nu},\phi]=\mathcal{L}_{\text{Hor}}[g_{\mu\nu},\phi]+ \mathcal{L}_{\text{m}}[\bar{g}_{\mu\nu},\psi], \tag{21}\]
where \(\bar{g}_{\mu\nu}\) is defined in Eq. (1). Note that we have interchanged the roles of \(g_{\mu\nu}\) and \(\bar{g}_{\mu\nu}\) as compared to those in Eq. (17). In any case, so long as the (generalized) disformal transformation is invertible, the barred metric is some disformal transformation of the unbarred metric and vice versa, and hence this is just a matter of convention. Note, however, that the Lagrangian (17) makes sense as a theory of gravity nonminimally coupled to matter even if the disformal transformation is noninvertible, though one cannot move to the equivalent description in the GDH (or Jordan) frame in this case. Therefore, in this subsection, we study a system described by the Lagrangian (17) without imposing the invertibility conditions for the disformal transformation from the outset.3 For simplicity, we assume that the matter sector is described by a massless scalar field \(\psi\),
Footnote 3: Precisely speaking, we assume the first two conditions in Eq. (5) to guarantee the existence of the barred inverse metric \(\bar{g}^{\mu\nu}\), but do not necessarily impose the last three conditions. The authors of Ref. [35] used a similar approach to specify the degeneracy conditions under the unitary gauge without imposing the invertibility conditions from the outset.
\[S_{\text{m}}[\bar{g}_{\mu\nu},\psi]\coloneqq-\frac{1}{2}\int\mathrm{d}^{4}x \sqrt{-\bar{g}}\,\bar{g}^{\mu\nu}\psi_{\mu}\psi_{\nu}=-\frac{1}{2}\int\mathrm{ d}^{4}x\sqrt{-g}\,\mathcal{J}\bar{g}^{\mu\nu}\psi_{\mu}\psi_{\nu}, \tag{22}\]
where \(\psi_{\mu}\coloneqq\nabla_{\mu}\psi\) and \(\bar{g}^{\mu\nu}\) is the inverse metric associated with \(\bar{g}_{\mu\nu}\) defined in Eq. (8).
We expect that the system described by the Lagrangian (17) has four physical degrees of freedom, where three come from the gravity sector (\(g_{\mu\nu}\) and \(\phi\)) and one from the matter scalar field \(\psi\). In order to avoid an unwanted fifth (would-be Ostrogradsky) degree of freedom, terms with the highest time derivatives must possess a degenerate structure. In order to study the kinetic structure of the Lagrangian (17) in detail, let us introduce the Arnowitt-Deser-Misner (ADM) variables as
\[g_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}=-N^{2}\mathrm{d}t^{2}+h_{ij}( \mathrm{d}x^{i}+N^{i}\mathrm{d}t)(\mathrm{d}x^{j}+N^{j}\mathrm{d}t), \tag{23}\]
where \(N\) is the lapse function, \(N^{i}\) is the shift vector, and \(h_{ij}\) is the induced metric. Note that we do not choose the unitary gauge, and hence the timelike unit normal vector \(n_{\mu}=-N\delta^{0}_{\mu}\) associated with a constant-\(t\) hypersurface is not proportional to \(\phi_{\mu}\). The extrinsic curvature is written in terms of the ADM variables as
\[K_{ij}=\frac{1}{2N}\left(\dot{h}_{ij}-\mathrm{D}_{i}N_{j}-\mathrm{D}_{j}N_{i} \right), \tag{24}\]
where a dot denotes the time derivative and \(\mathrm{D}_{i}\) denotes the covariant derivative associated with \(h_{ij}\). We also define the variables associated with the first and second time derivatives of \(\phi\) and the first time derivative of \(\psi\) as follows:
\[A_{*}\coloneqq n^{\mu}\nabla_{\mu}\phi,\qquad X_{*}\coloneqq n^{\mu}\nabla_{ \mu}X,\qquad\psi_{*}\coloneqq n^{\mu}\nabla_{\mu}\psi. \tag{25}\]
The kinetic structure of the Lagrangian (17) can be captured by the Hessian matrix \(\mathbb{H}\) of the Lagrangian (21) with respect to \(K_{ij}\), \(X_{*}\), and \(\psi_{*}\). Written explicitly, one has
\[\mathbb{H}=\begin{pmatrix}\mathcal{K}^{ij,kl}&0&0\\ 0&\mathcal{A}&\mathcal{M}\\ 0&\mathcal{M}&\mathcal{P}\end{pmatrix}, \tag{26}\]
with
\[\mathcal{K}^{ij,kl}\coloneqq\frac{\partial^{2}\mathcal{L}}{\partial K_{ ij}\partial K_{kl}},\qquad\mathcal{A}\coloneqq\frac{\partial^{2}\mathcal{L}}{ \partial X_{*}^{2}},\qquad\mathcal{M}\coloneqq\frac{\partial^{2}\mathcal{L}}{ \partial X_{*}\partial\psi_{*}},\qquad\mathcal{P}\coloneqq\frac{\partial^{2} \mathcal{L}}{\partial\psi_{*}^{2}}. \tag{27}\]
Note that there is no kinetic mixing between the gravitational and matter sectors: The gravitational (Horndeski) sector concerns only \(\mathcal{K}^{ij,kl}\), while the matter sector concerns only \(\mathcal{A}\), \(\mathcal{M}\), and \(\mathcal{P}\). In order to kill the unwanted degree of freedom revived due to the matter coupling, we require that the \(2\times 2\) lower-right submatrix of \(\mathbb{H}\) is degenerate, i.e.,
\[\mathcal{D}\coloneqq\mathcal{AP}-\mathcal{M}^{2}=0. \tag{28}\]
The quantity \(\mathcal{D}\) can be rewritten in the form of a polynomial in \(\{A_{*},X_{*},\psi_{*},\mathcal{Q}_{1},\mathcal{Q}_{2},\mathcal{Q}_{3}\}\) as
\[\mathcal{D}=\sum_{i,j,k,l,m,n\geq 0}d_{ijklmn}(\phi,X,Y,Z)\,A_{*}^{i}\,X_{ *}^{j}\,\psi_{*}^{k}\,\mathcal{Q}_{1}^{l}\,\mathcal{Q}_{2}^{m}\,\mathcal{Q}_ {3}^{n}, \tag{29}\]
with
\[\mathcal{Q}_{1}\coloneqq g^{\mu\nu}\psi_{\mu}\psi_{\nu},\quad \mathcal{Q}_{2}\coloneqq g^{\mu\nu}\psi_{\mu}\phi_{\nu},\quad\mathcal{Q}_{3} \coloneqq g^{\mu\nu}\psi_{\mu}X_{\nu}. \tag{30}\]
Note that the coefficients \(d_{ijklmn}\) depend only on the functions \(f_{i}\) characterizing the generalized disformal transformation. In order for \(\mathcal{D}\) to vanish for any configuration of \(\phi\) and \(\psi\), we shall fix \(f_{i}\)'s so that all \(d_{ijklmn}\)'s vanish.4 Since the full expression of \(\mathcal{D}\) is extremely involved, we proceed step by step: Among nonvanishing \(d_{ijklmn}\), we first focus on the simplest one(s) to read off condition(s) that \(f_{i}\)'s should satisfy. We then substitute the condition(s) back into \(\mathcal{D}\), which simplifies some of \(d_{ijklmn}\)'s. With simplified \(d_{ijklmn}\)'s, we follow the same steps, selecting the simplest one(s) to find additional condition(s) on \(f_{i}\)'s. Repeating this procedure, we finally obtain a set of conditions on \(f_{i}\)'s under which all \(d_{ijklmn}\)'s vanish, i.e., \(\mathcal{D}=0\). In what follows, we apply this strategy to fix the functional form of \(f_{i}\)'s. It should be noted that we assume \(F_{0}\neq 0\) and \(\mathcal{J}\neq 0\) throughout the following discussion because otherwise one cannot define the barred inverse metric \(\bar{g}^{\mu\nu}\).
Footnote 4: Under the unitary gauge, we have \(A_{*}=(-X)^{1/2}\), \(X_{*}=-Y(-X)^{-1/2}\), and \(\psi_{*}=-(-X)^{-1/2}\mathcal{Q}_{2}\). Hence, we obtain a weaker condition that the coefficients in front of \(\mathcal{Q}_{1}^{l}\mathcal{Q}_{2}^{p}\mathcal{Q}_{3}^{n}\) vanish, i.e., \(\sum_{i,j,k,m\geq 0}(-1)^{j+k}(-X)^{(i-j-k)/2}\,Y^{j}\,\delta_{k+m,p}\,d_{ijklmn}(\phi,X,Y,Z)=0\) for all \(l,n,p(\geq 0)\).
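To make the coefficient-matching strategy concrete, the following toy Python/sympy sketch collects the coefficients of a polynomial in stand-in kinematic variables and demands that they all vanish. The expression `D_toy` is invented purely for illustration and is not the actual degeneracy quantity \(\mathcal{D}\) of Eq. (29).

```python
import sympy as sp

Astar, Xstar, psistar = sp.symbols('A_star X_star psi_star')   # stand-ins for A_*, X_*, psi_*
c1, c2 = sp.symbols('c1 c2')                                    # stand-ins for the free functions f_i

# Invented toy expression playing the role of D in Eq. (29).
D_toy = (c1 - c2) * Astar**2 * psistar + (c1 * c2 - 1) * Xstar * psistar**2

# The analogue of the d_{ijklmn}: coefficients of the polynomial in the kinematic variables.
d_coeffs = sp.Poly(D_toy, Astar, Xstar, psistar).coeffs()
print(d_coeffs)                                   # [c1 - c2, c1*c2 - 1]
print(sp.solve(d_coeffs, [c1, c2], dict=True))    # conditions under which D_toy vanishes identically
```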
The condition that the coefficient of \(\psi_{*}^{2}\) vanishes yields
\[f_{3}=0. \tag{31}\]
Likewise, from the coefficients of \(A_{*}^{4}\mathcal{Q}_{3}^{2}\) and \(X_{*}^{4}\mathcal{Q}_{2}^{2}\), we find that \(f_{2}\) must be of the form
\[\mathcal{J}f_{2}=\alpha_{2}(\phi,X), \tag{32}\]
where \(\alpha_{2}\) is a function of \(\phi\) and \(X\) which is arbitrary at this step. From the coefficients of \(\mathcal{Q}_{1}\) and \(A_{*}^{2}\mathcal{Q}_{1}\), we see that \(f_{0}\) must take the form
\[\mathcal{J}f_{0}=\alpha_{0}(\phi,X)+\beta_{0}(\phi,X)Y, \tag{33}\]
where \(\alpha_{0}\) and \(\beta_{0}\) are arbitrary functions of \(\phi\) and \(X\). Then, from the coefficient of \(A_{*}^{2}\psi_{*}^{2}\), we find that \(\alpha_{2}\) must be related to \(\beta_{0}\) by
\[\alpha_{2}=-\beta_{0}. \tag{34}\]
The coefficient of \(A_{*}X_{*}^{3}\mathcal{Q}_{2}^{2}\) leads us to the following relation:
\[\beta_{0}(\mathcal{J}f_{1})_{,ZZ}=0. \tag{35}\]
We now have the two branches of solutions, \(\beta_{0}=0\) and \((\mathcal{J}f_{1})_{,ZZ}=0\).
Let us first consider the branch \(\beta_{0}=0\). In this case, from the coefficients of \(\mathcal{Q}_{2}^{2}\), \(A_{*}^{2}\mathcal{Q}_{2}^{2}\), and \(A_{*}^{4}\mathcal{Q}_{2}^{2}\), we see that \(f_{1}\) must take the form
\[\mathcal{J}f_{1}=\alpha_{1}(\phi,X), \tag{36}\]
where \(\alpha_{1}\) is an arbitrary function of \(\phi\) and \(X\). Combining the conditions obtained so far, we obtain the following relations:
\[f_{0}=\frac{\alpha_{0}}{\mathcal{J}},\qquad f_{1}=\frac{\alpha_{1}}{\mathcal{J} },\qquad f_{2}=f_{3}=0. \tag{37}\]
On top of these, we have \(\mathcal{J}^{-1}=f_{0}\mathcal{H}^{1/2}\) with \(\mathcal{H}\) defined in Eq. (11), which yields
\[\mathcal{J}=\sqrt{\alpha_{0}^{3}(\alpha_{0}+\alpha_{1}X)}. \tag{38}\]
We now know \(f_{i}\)'s as functions of \((\phi,X)\). Written explicitly, we have
\[f_{0}=\frac{\alpha_{0}}{\sqrt{\alpha_{0}^{3}(\alpha_{0}+\alpha_{1}X)}},\qquad f _{1}=\frac{\alpha_{1}}{\sqrt{\alpha_{0}^{3}(\alpha_{0}+\alpha_{1}X)}},\qquad f _{2}=f_{3}=0. \tag{39}\]
By use of Eq. (10), the barred metric can be reconstructed as
\[\bar{g}_{\mu\nu}=\sqrt{\alpha_{0}(\alpha_{0}+\alpha_{1}X)}\,g_{\mu\nu}-\frac {\alpha_{0}\alpha_{1}}{\sqrt{\alpha_{0}(\alpha_{0}+\alpha_{1}X)}}\phi_{\mu} \phi_{\nu}. \tag{40}\]
Note that both coefficients are now functions of \((\phi,X)\), and hence this is nothing but a conventional disformal transformation.
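These relations can be confirmed symbolically. In the Python/sympy sketch below, \(\alpha_{0}\) and \(\alpha_{1}\) are taken positive purely so that the square roots are real; it checks that Eq. (38) solves \(\mathcal{J}^{-1}=f_{0}\mathcal{H}^{1/2}\) and that the reconstruction formula (10) reproduces the coefficients appearing in Eq. (40).

```python
import sympy as sp

a0, a1, X, Y, Z = sp.symbols('alpha_0 alpha_1 X Y Z', positive=True)

J = sp.sqrt(a0**3 * (a0 + a1 * X))                                  # Eq. (38)
f0, f1, f2, f3 = a0 / J, a1 / J, sp.Integer(0), sp.Integer(0)       # Eq. (39)

# Eq. (11) and the relation J**(-1) = f0 * sqrt(H)
H = f0**2 + f0 * (X * f1 + 2 * Y * f2 + Z * f3) + (Y**2 - X * Z) * (f2**2 - f1 * f3)
print((f0 * sp.sqrt(H) - 1 / J).equals(0))                          # True

# Reconstruction via Eq. (10), compared with the coefficients of Eq. (40)
F0 = 1 / f0
F1 = -(f0 * f1 - Z * (f2**2 - f1 * f3)) / (f0 * H)
print((F0 - sp.sqrt(a0 * (a0 + a1 * X))).equals(0))                 # True
print((F1 + a0 * a1 / sp.sqrt(a0 * (a0 + a1 * X))).equals(0))       # True
```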
Let us now study the other branch of solutions for Eq. (35), i.e., \((\mathcal{J}f_{1})_{,ZZ}=0\). After straightforward manipulations, one can show that all \(d_{ijklmn}\)'s vanish if and only if
\[f_{0}=\frac{\alpha_{0}+\beta_{0}Y}{\mathcal{J}},\qquad f_{1}=\frac{\alpha_{0} \alpha_{1}+\beta_{0}^{2}Z}{\mathcal{J}(\alpha_{0}+\beta_{0}Y)},\qquad f_{2}=- \frac{\beta_{0}}{\mathcal{J}},\qquad f_{3}=0,\qquad\mathcal{J}^{2}=\alpha_{0} (\alpha_{0}+\alpha_{1}X)(\alpha_{0}+\beta_{0}Y)^{2}, \tag{41}\]
with \(\alpha_{0}(\neq 0)\), \(\alpha_{1}\), and \(\beta_{0}\) being functions of \((\phi,X)\). By use of Eq. (10), the coefficient functions of the barred metric can be reconstructed from Eq. (41) as
\[F_{0}=\sqrt{\alpha_{0}(\alpha_{0}+\alpha_{1}X)},\qquad F_{1}=-\frac{\alpha_{0} \alpha_{1}}{F_{0}},\qquad F_{2}=\frac{\alpha_{0}\beta_{0}}{F_{0}},\qquad F_{3 }=\frac{X\beta_{0}^{2}}{F_{0}}, \tag{42}\]
or written explicitly,
\[\bar{g}_{\mu\nu}=\sqrt{\alpha_{0}(\alpha_{0}+\alpha_{1}X)}\left[g_{\mu\nu}- \frac{\alpha_{1}}{\alpha_{0}+\alpha_{1}X}\phi_{\mu}\phi_{\nu}+\frac{2\beta_{0 }}{\alpha_{0}+\alpha_{1}X}\phi_{(\mu}X_{\nu)}+\frac{X\beta_{0}^{2}}{\alpha_{0} (\alpha_{0}+\alpha_{1}X)}X_{\mu}X_{\nu}\right]. \tag{43}\]
Interestingly, one can check that the generalized disformal transformation (43) satisfies the degeneracy condition even in the presence of a k-essence matter scalar field whose Lagrangian is written as a general function of \(\psi\) and \(\bar{g}^{\mu\nu}\psi_{\mu}\psi_{\nu}\). Note, however, that the above result does not satisfy a part of the invertibility conditions (specifically, \(\bar{X}_{Y}=\bar{X}_{Z}=0\)) in general. Indeed, for the above choice of \(f_{i}\)'s, we have
\[\bar{X}=X\left(f_{0}+Xf_{1}+2Yf_{2}+\frac{Y^{2}}{X}f_{3}\right)=\frac{X\left[ \alpha_{0}^{2}+X\alpha_{0}\alpha_{1}-\beta_{0}^{2}(Y^{2}-XZ)\right]}{ \mathcal{J}(\alpha_{0}+\beta_{0}Y)}, \tag{44}\]
which has a nontrivial dependence on \(Y\) and \(Z\) unless \(\beta_{0}=0\). If \(\beta_{0}=0\), the generalized disformal transformation (43) reduces to Eq. (40), i.e., the conventional one.
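The \(Y\)- and \(Z\)-dependence of \(\bar{X}\) claimed here is easy to confirm with a short symbolic computation. The Python/sympy sketch below (with the free functions taken positive only to keep \(\mathcal{J}\) real) reproduces Eq. (44) and shows that the dependence on \(Y\) and \(Z\) disappears when \(\beta_{0}=0\).

```python
import sympy as sp

a0, a1, b0, X, Y, Z = sp.symbols('alpha_0 alpha_1 beta_0 X Y Z', positive=True)

J = (a0 + b0 * Y) * sp.sqrt(a0 * (a0 + a1 * X))               # from J**2 in Eq. (41)
f0 = (a0 + b0 * Y) / J
f1 = (a0 * a1 + b0**2 * Z) / (J * (a0 + b0 * Y))
f2 = -b0 / J
f3 = sp.Integer(0)

Xbar = X * (f0 + X * f1 + 2 * Y * f2 + Y**2 / X * f3)          # first equality of Eq. (44)
target = X * (a0**2 + X * a0 * a1 - b0**2 * (Y**2 - X * Z)) / (J * (a0 + b0 * Y))
print(sp.simplify(Xbar - target))                              # 0
print(Xbar.subs(b0, 0).has(Y, Z))                              # False: no Y, Z dependence when beta_0 = 0
```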
So far, we have found that there exists a nontrivial family of generalized disformal metrics described by Eq. (43) that allows for consistent coupling of a k-essence scalar field without either Ostrogradsky or shadowy mode. As mentioned above, this family does not satisfy a part of the invertibility conditions in general, meaning that one cannot move to the equivalent description in the Jordan frame. The only exception is the case \(\beta_{0}=0\), where Eq. (43) reduces to the conventional disformal metric (2). Nevertheless, even if \(\beta_{0}\neq 0\), the theory makes sense as gravity nonminimally coupled to matter, and hence we could keep it in our consideration. However, as we shall show in the Appendix, this family does not allow for consistent coupling of fermionic matter fields unless \(\beta_{0}=0\). Therefore, all the higher-derivative terms in the generalized disformal metric are prohibited when we require that both bosonic and fermionic matter couplings do not introduce an Ostrogradsky or shadowy mode, even if we do not impose the invertibility conditions. Our analysis shows that, if one considers the generalized disformal transformation with nontrivial higher-derivative terms, one has to live with either Ostrogradsky or shadowy mode. As clarified in Refs. [37; 38], the shadowy mode itself is harmless as it satisfies a three-dimensional elliptic differential equation on a spacelike hypersurface and hence does not propagate. On the other hand, if an Ostrogradsky mode is revived due to matter coupling, the theory can be trusted only up to an energy scale well below the mass of the Ostrogradsky ghost.
## IV Conclusions
In this work, we have investigated the degeneracy conditions of generalized disformal Horndeski (GDH) theories in the presence of a minimally coupled matter field, which is represented by a canonical scalar field. We have started with the Horndeski-frame Lagrangian (21) where the gravitational action is given by the Horndeski one while the matter field is coupled to the generalized disformal metric (1). We have rewritten the total Lagrangian in terms of the \(3+1\) language and constructed the Hessian matrix so that we can investigate the kinetic structure of the theory. The degeneracy conditions are required for the matter coupling to be consistent, giving the conditions that the determinant of the Hessian matrix vanishes. The degeneracy conditions have been written in the form (29) as a polynomial in \(\{A_{*},X_{*},\psi_{*},\mathcal{Q}_{1},\mathcal{Q}_{2},\mathcal{Q}_{3}\}\), which are independent functions of spacetime constructed out of the derivatives of the gravitational scalar field \(\phi\) and the matter scalar field \(\psi\) [see Eqs. (25) and (30)]. In order for the degeneracy conditions to be satisfied for arbitrary configurations of \(\phi\) and \(\psi\), all the coefficients \(d_{ijklmn}\) of the polynomial must vanish. We have thus arrived at the following conclusions: (i) if one sticks to the invertible transformations satisfying the conditions (5) so that the equivalent GDH theory in the Jordan frame exists, only the conventional disformal metric can be consistently coupled to the kinetic term of the matter scalar field; (ii) however, if one gives up the invertibility conditions and just considers a scalar field coupled nonminimally to the gravitational scalar degree of freedom \(\phi\) through the generalized disformal metric, a nontrivial coupling (containing second derivatives of \(\phi\)) given by Eq. (41) [or, equivalently, Eqs. (42) and (43)] is allowed.
We note that our analysis in the present paper is based on the Horndeski-frame Lagrangian, which itself makes sense even if the metric to which the matter fields are minimally coupled is not associated with invertible disformal transformations. In this regard, it would be intriguing to take into account more general higher-order derivatives that are not covered by our transformation law (1) (e.g., \(\square\phi\)) to study what types of higher derivative couplings to matter fields can survive. It would also be interesting to investigate the consistency of matter coupling in a class of scalar-tensor theories with a nondynamical scalar field, i.e., the cuscuton [40] or its extension [41]. These issues will be left for future work.
###### Acknowledgements.
We thank the authors of Ref. [35] for their helpful correspondence. The work of TI was supported by the Rikkyo University Special Fund for Research. The work of KT was supported by JSPS KAKENHI Grant No. JP21J00695. The work of TK was supported by JSPS KAKENHI Grant No. JP20K03936 and MEXT-JSPS Grant-in-Aid for Transformative Research Areas (A) "Extreme Universe", No. JP21H05182 and No. JP21H05189.
## Appendix A Consistency of fermionic matter coupling
In Sec. III.2, we showed that the generalized disformal transformation (1), with which we define the GDH theory, is restricted to be of the conventional form (2) in order for a matter scalar field to be consistently coupled without reviving the Ostrogradsky ghost, provided that the invertibility conditions (5) are satisfied. On the other hand, our analysis was based on the Horndeski-frame Lagrangian (21), which itself makes sense as a theory of gravity nonminimally coupled to matter field(s) even if the disformal transformation is noninvertible. Interestingly, if we relax the invertibility conditions, there is a nontrivial family of generalized disformal transformations given by Eq. (43) that allows for consistent coupling of a matter scalar field, where the deviation of Eq. (43) from the conventional disformal transformation is characterized by the function \(\beta_{0}=\beta_{0}(\phi,X)\). However, it remains unclear whether the transformation (43) with \(\beta_{0}\neq 0\) accommodates consistent coupling of fermionic matter fields. In this Appendix, following the discussion in Ref. [36], we argue that further imposing the consistency of spinorial matter coupling leads to \(\beta_{0}=0\), i.e., we are again left with the conventional disformal transformation.
For this purpose, one needs to study the transformation law for the tetrad under the generalized disformal transformation, since the action of fermions in curved spacetime is written in terms of the tetrad. The authors of Ref. [36] developed the tetrad transformation law for the class of generalized disformal transformations defined by Eq. (18), which we repeat here for convenience:
\[\bar{g}_{\mu\nu}=\tilde{F}_{0}g_{\mu\nu}+\tilde{F}_{1}\phi_{\mu}\phi_{\nu}+2\tilde{F}_{2}\phi_{(\mu}\mathcal{X}_{\nu)}+\tilde{F}_{3}\mathcal{X}_{\mu}\mathcal{X}_{\nu},\qquad\mathcal{X}_{\mu}:=\left(\delta_{\mu}^{\alpha}-\frac{\phi_{\mu}\phi^{\alpha}}{X}\right)\partial_{\alpha}X, \tag{56}\]
where \(\mathcal{X}_{\mu}\) is the derivative of \(X\) projected onto a constant-\(\phi\) hypersurface and \(\tilde{F}_{i}=\tilde{F}_{i}(\phi,X,\mathcal{Z})\), with \(\mathcal{Z}:=\mathcal{X}^{\mu}\mathcal{X}_{\mu}\). The reason why they focused on this particular type of generalized disformal transformation is that it trivially
accommodates consistent bosonic matter coupling under the unitary gauge [32; 35]. The analysis in Ref. [36] shows that \(\tilde{F}_{3}=0\) is necessary to avoid the revival of the Ostrogradsky ghost for fermionic matter coupling.
One can recast the generalized disformal metric (43) into the form
\[\bar{g}_{\mu\nu}=\sqrt{\alpha_{0}(\alpha_{0}+\alpha_{1}X)}\bigg{[} g_{\mu\nu}+\frac{\beta_{0}^{2}Y^{2}+2\alpha_{0}\beta_{0}Y-\alpha_{0} \alpha_{1}X}{X\alpha_{0}(\alpha_{0}+\alpha_{1}X)}\phi_{\mu}\phi_{\nu}\] \[+\frac{2\beta_{0}(\alpha_{0}+\beta_{0}Y)}{\alpha_{0}(\alpha_{0}+ \alpha_{1}X)}\phi_{(\mu}\mathcal{X}_{\nu)}+\frac{X\beta_{0}^{2}}{\alpha_{0}( \alpha_{0}+\alpha_{1}X)}\mathcal{X}_{\mu}\mathcal{X}_{\nu}\bigg{]}, \tag{57}\]
but this is not of the form (56) because some of the coefficient functions depend on \(Y\), which cannot be written in terms of \((\phi,X,\mathcal{Z})\). Nevertheless, the discussion in Ref. [36] itself applies even if the coefficient functions \(\tilde{F}_{i}\) in Eq. (56) had \(Y\)-dependence, as we shall see below.
In what follows, let us promote the coefficient functions \(\tilde{F}_{i}\) in Eq. (56) as functions of \((\phi,X,Y,Z)\). The transformation law for the tetrad \(e^{a}_{\mu}\) associated with the generalized disformal transformation can be written in the form [36]
\[\bar{e}^{a}_{\mu}=\left(E_{0}\delta^{\alpha}_{\mu}+E_{1}\phi_{\mu}\phi^{\alpha }+E_{2}\phi_{\mu}\mathcal{X}^{\alpha}+E_{3}\mathcal{X}_{\mu}\mathcal{X}^{ \alpha}\right)e^{a}_{\alpha}, \tag{58}\]
with
\[E_{0}=\sqrt{\tilde{F}_{0}},\qquad E_{1}=\frac{\sqrt{X/\bar{X}}-\sqrt{\tilde{F}_{0}}}{X},\qquad E_{2}=\frac{\tilde{F}_{2}}{\sqrt{\tilde{F}_{0}+\mathcal{Z}\tilde{F}_{3}}},\qquad E_{3}=\frac{\sqrt{\tilde{F}_{0}+\mathcal{Z}\tilde{F}_{3}}-\sqrt{\tilde{F}_{0}}}{\mathcal{Z}}. \tag{59}\]
Indeed, it is straightforward to verify that \(\bar{g}_{\mu\nu}=\eta_{ab}\bar{e}^{a}_{\mu}\bar{e}^{b}_{\nu}\). Note that one could add a term \(\mathcal{X}_{\mu}\phi^{\alpha}\) inside the parentheses in Eq. (58), but it can always be absorbed into a local Lorentz transformation [36]. Having introduced the tetrad transformation law, let us consider the generalized disformal transformation of the action for a fermionic matter field represented by a free massless Dirac spinor \(\lambda\), i.e.,
\[S_{\text{m}}[e^{a}_{\mu},\lambda]=\int\mathrm{d}^{4}x\,e\left(-\frac{1}{2} \lambda^{\dagger}\mathrm{i}\gamma^{\hat{0}}e^{\mu}_{a}\gamma^{a}\nabla_{\mu} \lambda+\text{c.c.}\right), \tag{60}\]
where \(e\coloneqq\det e^{a}_{\mu}\), c.c. denotes the complex conjugate, and \(\gamma^{a}\) denotes the gamma matrices in the Minkowski spacetime such that \(\gamma^{a}\gamma^{b}+\gamma^{b}\gamma^{a}=2\eta^{ab}\mathbb{1}\), with \(\mathbb{1}\) being the identity matrix in the spinor indices. Note that we put hats on local Lorentz indices (\(a,b,\cdots=\{\hat{0},\hat{1},\hat{2},\hat{3}\}\)). The covariant derivative acting on the Dirac field is defined by
\[\nabla_{\mu}\lambda\coloneqq\left(\mathbb{1}\partial_{\mu}+\frac{1}{4}\omega_{ \mu}{}^{ab}\gamma_{ab}\right)\lambda. \tag{61}\]
Here, \(\gamma_{ab}\coloneqq(\gamma_{a}\gamma_{b}-\gamma_{b}\gamma_{a})/2\) and the (torsion-free) spin connection \(\omega_{\mu}{}^{ab}\) is defined by
\[\omega_{\mu}{}^{a}{}_{b}=-e^{\nu}_{b}\left(\partial_{\mu}e^{a}_{\nu}-\Gamma^{ \alpha}_{\mu\nu}e^{a}_{\alpha}\right), \tag{62}\]
where \(\Gamma^{\lambda}_{\mu\nu}\) is the Christoffel symbol associated with the metric. We now consider the generalized disformal transformation of the spinor action (60). Since we are only interested in the degeneracy structure of the action (60), let us focus on terms that involve time derivatives [36]:
\[S_{\text{m}}[e^{a}_{\mu},\lambda]\supset\int\mathrm{d}^{4}x\sqrt{h}\left(\frac{\mathrm{i}}{2}\lambda^{\dagger}\dot{\lambda}-\frac{\mathrm{i}}{2}\dot{\lambda}^{\dagger}\lambda+\frac{\mathrm{i}}{4}\lambda^{\dagger}\,{}^{(3)}\mathrm{e}^{k}_{\hat{i}}\,{}^{(3)}\dot{\mathrm{e}}_{\hat{j}k}\,\gamma^{\hat{i}\hat{j}}\lambda\right), \tag{63}\]
where \({}^{(3)}\mathrm{e}^{\hat{i}}_{k}\) denotes the triad such that \(h_{kl}=\delta_{\hat{i}\hat{j}}{}^{(3)}\mathrm{e}^{\hat{i}}_{k}{}^{(3)}\mathrm{e}^{\hat{j}}_{l}\) and \(h\coloneqq\det h_{kl}=(\det{}^{(3)}\mathrm{e}^{\hat{i}}_{k})^{2}\). Replacing the tetrad by the barred one, we obtain [36]
\[S_{\text{m}}[\bar{e}^{a}_{\mu},\lambda]\supset\int\mathrm{d}^{4}x\sqrt{h}\,E_{0}^{2}(E_{0}+\mathcal{Z}E_{3})\bigg{[}\frac{\mathrm{i}}{2}\lambda^{\dagger}\dot{\lambda}-\frac{\mathrm{i}}{2}\dot{\lambda}^{\dagger}\lambda+\frac{\mathrm{i}}{4}\lambda^{\dagger}\,{}^{(3)}\mathrm{e}^{k}_{\hat{i}}\,{}^{(3)}\dot{\mathrm{e}}_{\hat{j}k}\,\gamma^{\hat{i}\hat{j}}\lambda\] \[-\frac{\mathrm{i}}{4}\frac{\mathcal{Z}E_{3}^{2}}{E_{0}(E_{0}+\mathcal{Z}E_{3})}\mathcal{X}_{k}\dot{\mathcal{X}}_{l}\lambda^{\dagger}\,{}^{(3)}\mathrm{e}^{k}_{\hat{i}}\,{}^{(3)}\mathrm{e}^{l}_{\hat{j}}\,\gamma^{\hat{i}\hat{j}}\lambda\bigg{]}. \tag{64}\]
As detailed in Ref. [36], under the unitary gauge, the last term inside the square brackets leads to nondegenerate higher-order derivatives in the equations of motion for the lapse function and the spinor field. Therefore, one needs
to impose \(E_{3}=0\), i.e., \(\tilde{F}_{3}=0\) in order to avoid the Ostrogradsky ghost. Note that this condition obtained under the unitary gauge should be a necessary condition for the fermionic matter coupling to be consistent in an arbitrary coordinate system. Since the transformation (57) has \(\tilde{F}_{3}\propto\beta_{0}^{2}\), the condition \(\tilde{F}_{3}=0\) requires \(\beta_{0}=0\).
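As a sanity check on the tetrad transformation law (58) and (59), one can verify \(\bar{g}_{\mu\nu}=\eta_{ab}\bar{e}^{a}_{\mu}\bar{e}^{b}_{\nu}\) numerically. In the Python/numpy sketch below, the unbarred metric, the gradients, and the constant values of \(\tilde{F}_{i}\) are arbitrary choices made only for this test.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
# A Lorentzian unbarred metric and its tetrad e^a_mu with g = eta_ab e^a_mu e^b_nu.
g = eta + 0.05 * np.array([[0.0, 0.3, -0.1, 0.2],
                           [0.3, 0.4,  0.1, 0.0],
                           [-0.1, 0.1, -0.2, 0.3],
                           [0.2, 0.0,  0.3, 0.5]])
lam, O = np.linalg.eigh(g)                       # ascending order: the single negative eigenvalue first
e = np.sqrt(np.abs(lam))[:, None] * O.T          # tetrad, rows labelled by the local Lorentz index a
assert np.allclose(e.T @ eta @ e, g)

ginv = np.linalg.inv(g)
p = np.array([0.4, 0.1, -0.1, 0.05])             # phi_mu (example values)
w = np.array([0.0, 0.3, 0.2, -0.1])
X = p @ ginv @ p
calX = w - p * (p @ ginv @ w) / X                # curly-X_mu of Eq. (56), orthogonal to phi^mu
calZ = calX @ ginv @ calX

Ft0, Ft1, Ft2, Ft3 = 1.2, 0.3, 0.2, 0.4          # constant tilde-F_i, chosen only for the test
gbar = (Ft0 * g + Ft1 * np.outer(p, p)
        + Ft2 * (np.outer(p, calX) + np.outer(calX, p)) + Ft3 * np.outer(calX, calX))
Xbar = p @ np.linalg.inv(gbar) @ p

# Eq. (59)
E0 = np.sqrt(Ft0)
E1 = (np.sqrt(X / Xbar) - np.sqrt(Ft0)) / X
E2 = Ft2 / np.sqrt(Ft0 + calZ * Ft3)
E3 = (np.sqrt(Ft0 + calZ * Ft3) - np.sqrt(Ft0)) / calZ

# Matrix form of Eq. (58): bar-e^a_mu = M_mu^alpha e^a_alpha.
M = (E0 * np.eye(4) + E1 * np.outer(p, ginv @ p)
     + E2 * np.outer(p, ginv @ calX) + E3 * np.outer(calX, ginv @ calX))
ebar = e @ M.T
print(np.allclose(ebar.T @ eta @ ebar, gbar))    # True: eta_ab bar-e^a_mu bar-e^b_nu = bar-g_mu-nu
```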
|
2307.00917 | On the second largest distance eigenvalue of graphs | The distance spectrum of a connected graph is the spectrum of its distance
matrix. We characterize those bicyclic graphs and split graphs with the second
largest distance eigenvalue less than $-\frac{1}{2}$. Any such graph is
chordal. | Haiyan Guo, Bo Zhou | 2023-07-03T10:33:22Z | http://arxiv.org/abs/2307.00917v4 | # On the second largest distance eigenvalue of graphs
###### Abstract
The distance spectrum of a connected graph is the spectrum of its distance matrix. We characterize those bicyclic graphs and split graphs with the second largest distance eigenvalue less than \(-\frac{1}{2}\). Any such graph is chordal.
**AMS classifications:** 05C50, 05C12
**Key words:** second largest distance eigenvalue, chordal graphs, bicyclic graphs, split graphs
## 1 Introduction
The eigenvalues of a graph are the eigenvalues of its adjacency matrix. The second largest eigenvalue of graphs has been widely studied by many mathematicians. Cao and Hong [6] characterized the simple graphs with the second largest eigenvalue less than \(\frac{1}{3}\). Wu et al. [21] determined the simple connected graphs with the second largest eigenvalue less than \(\frac{1}{2}\). Cheng et al. [4] considered graphs with three eigenvalues and second largest eigenvalue at most \(1\). Liu et al. [17] determined all connected \(\{K_{1,3},K_{5}-e\}\)-free graphs whose second largest eigenvalue does not exceed \(1\). Zhang et al. [26] classified the \(2\)-partially distance-regular graphs whose each local graph has second largest eigenvalue at most \(1\). Recently, multiplicity of the second largest eigenvalue of graphs has also received much attention, see [5, 12].
The distance eigenvalues of a connected graph are the eigenvalues of its distance matrix. The distance eigenvalues of graphs were first studied by Graham and Pollak [10]. They established a relationship between the number of negative distance eigenvalues of trees and
the addressing problem in data communication systems. Thereafter, and in particular in the past 15 years, the distance eigenvalues have attracted much more attention. However, the focus was more on the largest distance eigenvalue (also known as the distance spectral radius); see the survey of Aouchiche and Hansen [2] and [7, 14, 16, 19, 23, 20].
As far as we know, the only studies on the second largest distance eigenvalue of graphs are as follows. Lin [15] showed that the second largest distance eigenvalue of a graph \(G\) is less than the number of triangles in \(G\) when the independence number is less than or equal to two, confirming a conjecture in [8]. Xing and Zhou [24] characterized all connected graphs with the second largest distance eigenvalue less than \(-2+\sqrt{2}\) and all trees with the second largest distance eigenvalue less than \(-\frac{1}{2}\). Besides, they also considered unicyclic graphs, with a few exceptions, whose second largest distance eigenvalue is less than \(-\frac{1}{2}\). In [25], they obtained sharp lower bounds for the second largest distance eigenvalue of the \(k\)-th power of a connected graph and determined all trees and unicyclic graphs \(G\) such that the second largest distance eigenvalue of their squares is less than \(\frac{\sqrt{5}-3}{2}\). Liu et al. [18] proved that the graphs with the second largest distance eigenvalue less than \(\frac{17-\sqrt{329}}{2}\approx-0.5692\) are determined by their distance spectra, among other results. Xue et al. [22] characterized all block graphs whose second largest distance eigenvalue is less than \(-\frac{1}{2}\). Alhevaz et al. [1] gave some upper and lower bounds for the second largest eigenvalue of the generalized distance matrix of graphs in terms of some graph parameters.
By Cauchy's interlacing theorem and the distance spectrum of a cycle (see Lemmas 2.2 and 2.3 below), a connected graph whose second largest distance eigenvalue is less than \(-\frac{1}{2}\) must be chordal. A connected graph on \(n\) vertices with \(n+1\) edges is called a bicyclic graph. A graph \(G\) is a split graph if both \(G\) and \(\overline{G}\) are chordal. In this paper, we characterize all bicyclic graphs and split graphs with the second largest distance eigenvalue less than \(-\frac{1}{2}\).
## 2 Preliminaries
All graphs considered in this paper are simple and connected. Let \(G\) be a graph with vertex set \(V(G)\) and edge set \(E(G)\). For \(v\in V(G)\), we use \(N_{G}(v)\) to denote the neighborhood of \(v\) in \(G\), and let \(d_{G}(v)=|N_{G}(v)|\) be the degree of \(v\) in \(G\). For a nonempty vertex subset \(S\), let \(G[S]\) be the subgraph of \(G\) induced by \(S\). For a connected graph \(G\) with \(n\) vertices and \(m\) edges, if \(m=n+c-1\), then \(G\) is called a \(c\)-cyclic graph. In particular, a \(c\)-cyclic graph with \(c=0,1,2\) is known as a tree, a unicyclic graph and a bicyclic graph, respectively.
As usual, we denote by \(K_{n}\) the complete graph on \(n\) vertices, and \(K_{r,s}\) the complete bipartite graph with bipartite sizes \(r\) and \(s\). Let \(S_{n}=K_{1,n-1}\). Denote by \(C_{n}\) the cycle on \(n\) vertices. Let \(\overline{G}\) be the complement of a graph \(G\).
A clique of a graph is a set of pairwise adjacent vertices, and a maximum clique is a clique with maximum cardinality. An independent set is a set of pairwise non-adjacent vertices.
A graph is chordal if every cycle of length at least 4 has a chord, that is, an edge joining two non-adjacent vertices of the cycle. A graph is a split graph if its vertex set can be partitioned into a clique and an independent set. A graph \(G\) is a split graph if and only if both \(G\) and \(\overline{G}\) are chordal, or equivalently, it does not have an induced \(C_{4},\overline{C_{4}}\), or \(C_{5}\).
Assume that \(V(G)=\{v_{1},\ldots,v_{n}\}\). The distance matrix of \(G\) is defined as the \(n\times n\) matrix \(D(G)=(d_{G}(v_{i},v_{j}))\), where \(d_{G}(v_{i},v_{j})\) denotes the distance between \(v_{i},v_{j}\) in \(G\), which is the length of a shortest path from \(v_{i}\) to \(v_{j}\) in \(G\). Since \(D(G)\) is symmetric, the distance eigenvalues of \(G\) are all real. Denote by \(\lambda_{2}(G)\) the second largest distance eigenvalue of \(G\).
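As a small numerical illustration (ours, not part of the original paper), \(\lambda_{2}(G)\) can be evaluated directly from this definition; the minimal Python sketch below uses networkx and numpy, and the helper name is chosen here for convenience.

```python
# Minimal sketch (our illustration): second largest distance eigenvalue of a
# connected graph, computed directly from the definition above.
import networkx as nx
import numpy as np

def second_largest_distance_eigenvalue(G: nx.Graph) -> float:
    """Return lambda_2(G), the second largest eigenvalue of the distance matrix D(G)."""
    nodes = list(G.nodes)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    D = np.array([[dist[u][v] for v in nodes] for u in nodes], dtype=float)
    return float(np.linalg.eigvalsh(D)[-2])  # eigvalsh returns eigenvalues in ascending order

# Example: the star S_5 = K_{1,4} is a tree, and its lambda_2 indeed lies below -1/2.
print(second_largest_distance_eigenvalue(nx.star_graph(4)))  # approx -0.61
```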
A block of a given graph is a maximal connected subgraph that has no cut vertex. A connected graph \(G\) is called a block graph (also known as clique tree) if all of its blocks are cliques. A block star is a block graph whose blocks contain a common vertex. A block graph \(G\) is loose if for each vertex \(v\in V(G)\), the number of blocks of \(G\) which contain the vertex \(v\) is at most \(2\). Let \(BG(p,q,3,2,2)\) with \(p,q\geq 2\) and \(BGA\) be two block graphs as shown in Figure 1.
**Lemma 2.1**.: _[_22_]_ _Let \(G\) be a block graph with second largest distance eigenvalue \(\lambda_{2}(G)\). Then \(\lambda_{2}(G)<-\frac{1}{2}\) if and only if_
* \(G\) _is a block star, or_
* \(G\) _is a loose block graph, or_
* \(G\) _is a nontrivial connected induced subgraph of_ \(BG(p,q,3,2,2)\)_, or_
* \(G\) _is a nontrivial connected induced subgraph of_ \(BGA\)_._
Let \(\mathcal{G}\) denote the set of unicyclic graphs with at least \(5\) vertices obtained from \(C_{3}=uvw\) by attaching a pending path at \(u,v,w\), respectively. Note that any \(G\) in \(\mathcal{G}\) is a loose block graph, so we have \(\lambda_{2}(G)<-\frac{1}{2}\) by Lemma 2.1. In [24], all unicyclic graphs \(G\) outside \(\mathcal{G}\) with \(\lambda_{2}(G)<-\frac{1}{2}\) are determined. By this result of [24], we have the following.
**Theorem 2.1**.: _Let \(G\) be a unicyclic graph. Then \(\lambda_{2}(G)<-\frac{1}{2}\) if and only if \(G\cong S_{n}^{+}\) for \(n\geq 3\), \(G\in\mathcal{G}\), or one of the six graphs \(U_{1},\ldots,U_{6}\) shown in Figure 2, where \(S_{n}^{+}\) is the \(n\)-vertex unicyclic graph obtained by adding an edge to the star \(S_{n}\)._
For an \(n\times n\) real symmetric matrix \(M\), let \(\rho_{1}(M)\geq\ldots\geq\rho_{n}(M)\) be the eigenvalues of \(M\). The following lemma is the classical Cauchy's interlacing theorem, see [13, Theorem 4.3.28] or [11].
**Lemma 2.2** (Cauchy's interlacing theorem).: _Let \(A\) be an \(n\times n\) symmetric matrix. If \(B\) is an \(m\times m\) principal submatrix of \(A\), then \(\rho_{i}(A)\geq\rho_{i}(B)\geq\rho_{n-m+i}(A)\)._
Figure 1: Graphs \(BG(p,q,3,2,2)\) and BGA.
If \(H\) is a connected induced subgraph of \(G\) and \(d_{H}(u,v)=d_{G}(u,v)\) for all \(\{u,v\}\subseteq V(H)\), then we say \(H\) is a distance-preserving subgraph of \(G\). In this case, \(D(H)\) is a principal submatrix of \(D(G)\), so \(\lambda_{2}(G)\geq\lambda_{2}(H)\) by Lemma 2.2. Specially, if \(G\) is a connected graph with \(\lambda_{2}(G)<-\frac{1}{2}\), then \(\lambda_{2}(H)<-\frac{1}{2}\) for any distance-preserving subgraph \(H\) of \(G\). In this paper, we are concerned with the graphs \(G\) with \(\lambda_{2}(G)<-\frac{1}{2}\). So, we call graph \(H\) a forbidden subgraph of \(G\) if \(H\) is a distance-preserving subgraph of \(G\) but \(\lambda_{2}(H)\geq-\frac{1}{2}\). We show forbidden subgraphs \(F_{1},\ldots,F_{10}\) in Figure 3, where the second largest distance eigenvalue is also listed below the corresponding graph.
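Whether a candidate induced subgraph is distance-preserving is also easy to test by computer; a short sketch (our own illustration, the function name is ours) that can be combined with the helper above to confirm interlacing bounds of the form \(\lambda_{2}(G)\geq\lambda_{2}(H)\):

```python
# Sketch (illustration only): test whether H = G[S] is a distance-preserving
# subgraph of G, i.e. d_H(u,v) = d_G(u,v) for all u, v in S.
import networkx as nx

def is_distance_preserving(G: nx.Graph, S) -> bool:
    H = G.subgraph(S)
    if len(S) == 0 or not nx.is_connected(H):
        return False
    dG = dict(nx.all_pairs_shortest_path_length(G))
    dH = dict(nx.all_pairs_shortest_path_length(H))
    return all(dH[u][v] == dG[u][v] for u in S for v in S)

# If is_distance_preserving(G, S) holds and lambda_2(G[S]) >= -1/2, then G[S] is a
# forbidden subgraph in the sense above, and by interlacing lambda_2(G) >= -1/2 as well.
```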
The following lemma may be checked easily [9].
**Lemma 2.3**.: _For the cycle \(C_{n}\) on \(n\geq 3\) vertices,_
\[\lambda_{2}(C_{n})=\begin{cases}0&\text{if $n$ is even,}\\ -\frac{1}{4}\sec^{2}\frac{\pi}{n}&\text{if $n$ is odd.}\end{cases}\]
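A quick numerical cross-check of this closed form (illustrative only, reusing the helper sketched above):

```python
# Compare lambda_2(C_n) computed numerically with the formula of Lemma 2.3.
import networkx as nx
import numpy as np

for n in range(3, 11):
    numeric = second_largest_distance_eigenvalue(nx.cycle_graph(n))
    closed = 0.0 if n % 2 == 0 else -0.25 / np.cos(np.pi / n) ** 2
    print(n, round(numeric, 6), round(closed, 6))
```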
Figure 3: Forbidden subgraphs \(F_{1},\ldots,F_{10}\).
Figure 2: Graphs \(U_{1},\ldots,U_{6}\).
A path \(u_{0}\ldots u_{r}\) with \(r\geq 1\) in a graph \(G\) is called a pending path of length \(r\) at \(u_{0}\) if \(\mathrm{d}_{G}(u_{0})\geq 3\), the degrees of \(u_{1},\ldots,u_{r-1}\) (if any exist) are all equal to \(2\) in \(G\), and \(\mathrm{d}_{G}(u_{r})=1\). In this case, we also say that \(G\) is obtained from \(G-\{u_{1},\ldots,u_{r}\}\) by attaching a pending path of length \(r\) at \(u_{0}\). For \(v\in V(G)\), the graph obtained from \(G\) by attaching a pending path of length \(0\) at \(v\) is \(G\) itself. A pending path of length \(1\) at \(u_{0}\) is called a pending edge at \(u_{0}\).
## 3 Bicyclic graphs \(G\) with \(\lambda_{2}(G)<-\frac{1}{2}\)
For integers \(p\), \(q\) and \(s\) with \(p\geq 3\), \(q\geq 3\) and \(s\geq 0\), let \(\infty(p,q,s)\) be the bicyclic graph obtained from the cycles \(C_{p}=u_{1}u_{2}\ldots u_{p}u_{1}\), \(C_{q}=v_{1}v_{2}\ldots v_{q}v_{1}\) and the path \(P_{s+1}=p_{0}p_{1}\ldots p_{s}\) by identifying \(u_{1}\) with \(p_{0}\) and identifying \(v_{1}\) with \(p_{s}\). In particular, \(\infty(p,q,0)\) consists of two cycles of lengths \(p\) and \(q\) respectively with precisely one vertex in common.
For positive integers \(p\), \(q\) and \(s\), where at most one of \(p\), \(q\), \(s\) is equal to one, let \(\theta(p,q,s)\) be the bicyclic graph obtained from three paths \(P_{p+1}=u_{0}u_{1}\ldots u_{p-1}u_{p}\), \(P_{q+1}=v_{0}v_{1}\ldots v_{q-1}v_{q}\) and \(P_{s+1}=w_{0}w_{1}\ldots w_{s-1}w_{s}\) by identifying \(u_{0}\), \(v_{0}\) and \(w_{0}\) to a new vertex \(x\) and identifying \(u_{p}\), \(v_{q}\) and \(w_{s}\) to a new vertex \(y\).
Graphs \(\infty(p,q,s)\) and \(\theta(p,q,s)\) are depicted in Figure 4.
For a bicyclic graph \(G\), if \(\infty(p,q,s)\) is an induced subgraph of \(G\) for some \(p,q,s\), then we say \(G\) is a \(\infty\)-bicyclic graph. Otherwise, \(G\) contains \(\theta(p^{\prime},q^{\prime},s^{\prime})\) as an induced subgraph for some \(p^{\prime},q^{\prime},s^{\prime}\), so we say \(G\) is a \(\theta\)-bicyclic graph.
To state the results, we define several families of bicyclic graphs.
Let \(u_{1}\), \(u_{2}\), \(u_{3}\), \(u_{4}\) be the four vertices with degree two in the cycles of \(\infty(3,3,s)\) with \(u_{1}u_{2},u_{3}u_{4}\in E(\infty(3,3,s))\), where \(s\geq 0\). We use \(B(s;h_{1},h_{2},h_{3}\), \(h_{4})\) to denote the graph obtained from \(\infty(3,3,s)\) by attaching a pending path of length \(h_{i}\) at \(u_{i}\), respectively, where \(h_{i}\geq 0\) for \(1\leq i\leq 4\), see Figure 5.
Figure 4: Graphs \(\infty(p,q,s)\) and \(\theta(p,q,s)\).
Let \(B_{t}^{\infty}\) be the graph obtained from \(\infty(3,3,0)\) by attaching \(t\) pending edges at the vertex with maximum degree, where \(t\geq 0\), as depicted in Figure 6 left. We use \(B_{k}^{\theta}\) to denote the graph obtained from \(\theta(2,2,1)\) by attaching \(k\) pending edges at a vertex of degree three in \(\theta(2,2,1)\), where \(k\geq 0\), as depicted in Figure 6 right.
**Theorem 3.1**.: _Let \(G\) be a bicyclic graph. Then \(\lambda_{2}(G)<-\frac{1}{2}\) if and only if_
* \(G\cong B_{1},\ldots,B_{7}\) _as displayed in Figure_ 7_, or_
* \(G\cong B(s;h_{1},h_{2},h_{3},h_{4})\)_, where_ \(s\geq 0\) _and_ \(h_{i}\geq 0\) _for_ \(1\leq i\leq 4\)_, or_
* \(G\cong B_{t}^{\infty}\) _with_ \(t\geq 0\)_, or_
* \(G\cong B_{k}^{\theta}\)_, where_ \(k\geq 0\)_._
To prove Theorem 3.1, it suffices to show the following two theorems.
**Theorem 3.2**.: _Let \(G\) be a \(\infty\)-bicyclic graph. Then \(\lambda_{2}(G)<-\frac{1}{2}\) if and only if \(G\cong B_{1},\ldots,B_{5}\), \(G\cong B(s;h_{1},h_{2},h_{3},h_{4})\), where \(s\geq 0\) and \(h_{i}\geq 0\) for \(1\leq i\leq 4\), or \(G\cong B_{t}^{\infty}\) with \(t\geq 0\)._
**Theorem 3.3**.: _Let \(G\) be a \(\theta\)-bicyclic graph. Then \(\lambda_{2}(G)<-\frac{1}{2}\) if and only if \(G\cong B_{6},B_{7}\), or \(G\cong B_{k}^{\theta}\), where \(k\geq 0\)._
Firstly, we give the proof of Theorem 3.2.
Proof of Theorem 3.2.: Let \(G\) be an \(\infty\)-bicyclic graph with \(\lambda_{2}(G)<-\frac{1}{2}\). Suppose without loss of generality that \(C_{p}=u_{1}u_{2}\ldots u_{p}u_{1}\) and \(C_{q}=v_{1}v_{2}\ldots v_{q}v_{1}\) are the two cycles of \(G\), and \(P_{s+1}=p_{0}p_{1}\ldots p_{s}\) is the shortest path connecting a vertex of \(C_{p}\) and a vertex of \(C_{q}\), say \(u_{1}=p_{0}\), \(p_{s}=v_{1}\), and \(p\geq q\). We claim that \(p=3\). Otherwise, \(C_{p}\) is an induced distance-preserving subgraph of \(G\), where \(p\geq 4\). Then, from Lemmas 2.2 and 2.3, we have \(\lambda_{2}(G)\geq\lambda_{2}(C_{p})=0\) for even \(p\), and \(\lambda_{2}(G)\geq-\frac{1}{4}\sec^{2}\frac{\pi}{p}>-\frac{1}{2}\) for odd \(p\geq 5\), which is a contradiction. It thus follows that \(p=q=3\).
**Case 1.**\(s\geq 2\).
In this case, we have \(d_{G}(p_{i})=2\) for \(1\leq i\leq s-1\). Otherwise \(F_{3}\) is an induced distance-preserving subgraph of \(G\), contradicting that \(F_{3}\) is a forbidden subgraph of \(G\). Suppose without loss of generality that \(d_{G}(u_{1})\geq d_{G}(v_{1})\). Since \(F_{1}\) is a forbidden subgraph, we have \(d_{G}(u_{1})\leq 4\). That is, \(d_{G}(u_{1})=3,4\).
Suppose first that \(d_{G}(u_{1})=4\). Then \(s\leq 2\) since \(F_{4}\) is a forbidden subgraph. So \(s=2\). Let \(w\) be the unique vertex in \(N_{G}(u_{1})\setminus\{u_{2},u_{3},p_{1}\}\). We have \(d_{G}(w)=1\) as \(F_{3}\) is a forbidden subgraph. Similar argument leads to \(d_{G}(u_{2})=d_{G}(u_{3})=2\). Since \(F_{4}\) is a forbidden subgraph, we have \(d_{G}(v_{2})=d_{G}(v_{3})=2\). Thus, if \(d_{G}(v_{1})=4\), then \(G\cong B_{2}\), and if \(d_{G}(v_{1})=3\), then \(G\cong B_{3}\).
Suppose next that \(d_{G}(u_{1})=3\). As \(d_{G}(u_{1})\geq d_{G}(v_{1})\geq 3\), one gets \(d_{G}(v_{1})=3\). Since \(F_{4}\) is a forbidden subgraph, we have \(\max\{d_{G}(u_{2}),d_{G}(u_{3})\}\leq 3\), and if \(d_{G}(u_{i})=3\) for \(i=2,3\), then there is a pending path at \(u_{i}\). Similarly, \(\max\{d_{G}(v_{2}),d_{G}(v_{3})\}\leq 3\), and if \(d_{G}(v_{i})=3\) for \(i=2,3\), then there is a pending path at \(v_{i}\). Hence, \(G\cong B(s;h_{1},h_{2},h_{3},h_{4})\), where \(s\geq 2\) and \(h_{i}\geq 0\) for \(1\leq i\leq 4\).
**Case 2.**\(s=1\).
Since \(F_{5}\) is a forbidden subgraph, we have \(d_{G}(u_{1})=d_{G}(v_{1})=3\). Assume that \(\max\{d_{G}(u_{2})\), \(d_{G}(u_{3}),d_{G}(v_{2}),d_{G}(v_{3})\}=d_{G}(u_{2})\). Since \(F_{1}\) is a forbidden subgraph, we have \(d_{G}(u_{2})\leq 4\).
Suppose that \(d_{G}(u_{2})=4\). Let \(N_{G}(u_{2})\setminus\{u_{1},u_{3}\}=\{x_{1},x_{2}\}\). We have \(d_{G}(x_{1})=d_{G}(x_{2})=1\) as \(F_{3}\) is a forbidden subgraph. Since \(F_{6}\) is a forbidden subgraph, we have \(d_{G}(u_{3})=2\). Since \(F_{4}\) is a forbidden subgraph, we have \(d_{G}(v_{2})=d_{G}(v_{3})=2\). Hence, \(G\cong B_{1}\).
If \(d_{G}(u_{2})=3\), then for any \(w\in V(G)\setminus\{u_{1},u_{2},u_{3},v_{1},v_{2},v_{3}\}\), one has \(d_{G}(w)=1,2\) since \(F_{4}\) is a forbidden subgraph. Hence, \(G\cong B(1;h_{1},h_{2},h_{3},h_{4})\), where \(h_{1}\geq 1\) and \(h_{i}\geq 0\) for \(2\leq i\leq 4\).
If \(d_{G}(u_{2})=2\), then \(G\cong\infty(3,3,1)\cong B(1;0,0,0,0)\).
**Case 3.**\(s=0\).
If \(d_{G}(u_{1})=4\), then \(\max\{d_{G}(u_{2}),d_{G}(u_{3}),d_{G}(v_{2}),d_{G}(v_{3})\}\leq 3\) as \(F_{5}\) is a forbidden subgraph, and every vertex not on the cycles is of degree one or two due to the fact that \(F_{4}\) and \(F_{5}\) are both forbidden subgraphs. This implies that \(G\cong B(0;h_{1},h_{2},h_{3},h_{4})\), where \(h_{i}\geq 0\) for \(1\leq i\leq 4\).
Suppose that \(d_{G}(u_{1})\geq 5\). Let \(N_{G}(u_{1})\setminus\{u_{2},u_{3},v_{2},v_{3}\}=\{x_{1},\ldots,x_{t}\}\), where \(t\geq 1\). Since \(F_{7}\) is a forbidden subgraph, one has \(d_{G}(x_{1})=\cdots=d_{G}(x_{t})=1\). Moreover, \(\max\{d_{G}(u_{2}),d_{G}(u_{3}),d_{G}(v_{2}),d_{G}(v_{3})\}\leq 3\) as \(F_{5}\) is a forbidden subgraph.
If \(d_{G}(u_{1})\geq 6\), then, since \(F_{1}\) is a forbidden subgraph, we have \(d_{G}(u_{2})=d_{G}(u_{3})=d_{G}(v_{2})=d_{G}(v_{3})=2\), which implies that \(G\cong B_{t}^{\infty}\), where \(t\geq 2\).
Suppose that \(d_{G}(u_{1})=5\). Assume that \(\max\{d_{G}(u_{2}),d_{G}(u_{3}),d_{G}(v_{2}),d_{G}(v_{3})\}=d_{G}(u_{2})\). Since \(F_{5}\) is a forbidden subgraph, we have \(d_{G}(u_{2})\leq 3\). Then, if \(d_{G}(u_{2})=2\), then \(G\cong B_{1}^{\infty}\). Suppose that \(d_{G}(u_{2})=3\). Since \(F_{3}\) is a forbidden subgraph, we have \(d_{G}(v_{2})=d_{G}(v_{3})=2\). As \(F_{6}\) is a forbidden subgraph, we have \(d_{G}(u_{3})=2\). Let \(w_{1}\) denote the neighbor of \(u_{2}\) not on the cycles. Since \(F_{5}\) is a forbidden subgraph, we have \(d_{G}(w_{1})\leq 2\). Then \(G\cong B_{5}\) if \(d_{G}(w_{1})=1\). If \(d_{G}(w_{1})=2\), then denoting by \(w_{2}\) the neighbor of \(w_{1}\) different from \(u_{2}\), one has \(d_{G}(w_{2})=1\) as \(F_{4}\) is a forbidden subgraph, which implies that \(G\cong B_{4}\). Hence, \(G\cong B_{4},B_{5}\), or \(B_{1}^{\infty}\).
Combining Cases 1-3, we have \(G\cong B_{1},\ldots,B_{5}\), or \(G\cong B(s;h_{1},h_{2},h_{3},h_{4})\), where \(s\geq 0\) and \(h_{i}\geq 0\) for \(1\leq i\leq 4\), or \(G\cong B_{t}^{\infty}\) with \(t\geq 0\).
Conversely, suppose that \(G\cong B_{1},\ldots,B_{5}\), or \(G\cong B(s;h_{1},h_{2},h_{3},h_{4})\), where \(s\geq 0\) and \(h_{i}\geq 0\) for \(1\leq i\leq 4\), or \(G\cong B_{t}^{\infty}\) with \(t\geq 0\). By a direct calculation, we have \(\lambda_{2}(B_{1})\approx-0.5110<-\frac{1}{2}\) and \(\lambda_{2}(B_{4})\approx-0.5023<-\frac{1}{2}\). Since \(B_{5}\) is an induced distance-preserving subgraph of \(B_{4}\), Lemma 2.2 gives \(\lambda_{2}(B_{5})\leq\lambda_{2}(B_{4})<-\frac{1}{2}\). Note that \(B_{2}\cong BGA\) and \(B_{3}\) is an induced distance-preserving subgraph of \(BGA\). By Lemmas 2.1 and 2.2, we have \(\lambda_{2}(B_{3})\leq\lambda_{2}(B_{2})<-\frac{1}{2}\). If \(G\cong B(s;h_{1},h_{2},h_{3},h_{4})\), where \(s\geq 0\) and \(h_{i}\geq 0\) for \(1\leq i\leq 4\), then \(G\) is a loose block graph; if \(G\cong B_{t}^{\infty}\) with \(t\geq 0\), then \(G\) is a block star. In either case, we have by Lemma 2.1 that \(\lambda_{2}(G)<-\frac{1}{2}\).
Next, we give the proof of Theorem 3.3.
Let \(J_{s\times t}\) be the \(s\times t\) matrix of all 1's, and \(I_{s}\) the identity matrix of order \(s\). For convenience, let \(J_{s}=J_{s\times s}\).
**Lemma 3.1**.: _Let \(G=B_{k}^{\theta}\), where \(k\geq 0\). Then \(\lambda_{2}(G)<-\frac{1}{2}\)._
Proof.: Let \(G=B_{k}^{\theta}\) with \(n=k+4\) vertices. If \(k=0\), then \(G\cong\theta(2,2,1)\). By a direct calculation, we have \(\lambda_{2}(\theta(2,2,1))\approx-0.5616<-\frac{1}{2}\). Suppose in the following that \(k\geq 1\); then \(n\geq 5\). Let \(V_{1}\) be the set of the two vertices of degree 2 and \(V_{2}\) the set of vertices of degree one. So we partition \(V(G)\) as \(V(G)=\{w\}\cup\{z\}\cup V_{1}\cup V_{2}\), where \(w\) is the vertex with maximum degree and \(z\) is the vertex with degree 3. Under this partition, we have
\[D(G)+2I_{n}=\left(\begin{array}{cccc}2&1&J_{1\times 2}&J_{1\times(n-4)}\\ 1&2&J_{1\times 2}&2J_{1\times(n-4)}\\ J_{2\times 1}&J_{2\times 1}&2J_{2}&2J_{2\times(n-4)}\\ J_{(n-4)\times 1}&2J_{(n-4)\times 1}&2J_{(n-4)\times 2}&2J_{n-4}\end{array} \right).\]
It is easily seen that \(D(G)+2I_{n}\) is of rank 4, which implies that 0 is an eigenvalue of \(D(G)+2I_{n}\) with multiplicity \(n-4\).
Note that the above partition for \(D(G)+2I_{n}\) is equitable, thus the eigenvalues of its quotient matrix \(Q\) are also the eigenvalues of \(D(G)+2I_{n}\), see [3, Lemma 2.3.1], where
\[Q=\left(\begin{array}{cccc}2&1&2&n-4\\ 1&2&2&2(n-4)\\ 1&1&4&2(n-4)\\ 1&2&4&2(n-4)\end{array}\right).\]
Let \(f(\lambda)=\det(\lambda I_{4}-Q)\). By a direct calculation,
\[f(\lambda)=\lambda^{4}-2n\lambda^{3}+3(n+1)\lambda^{2}+4(n-6)\lambda-6n+24.\]
Note that \(f(+\infty)>0\), \(f(7)=2404-517n<0\), \(f(\frac{3}{2})=-\frac{3}{16}<0\), \(f(0)=24-6n<0\), \(f(-\infty)>0\), \(f(\frac{3n-1}{2n})=\frac{1}{16n^{4}}(n^{4}-12n^{3}+70n^{2}-12n+1)\). Let \(g(x)=x^{4}-12x^{3}+70x^{2}-12x+1\). Then \(g^{\prime}(x)=4(x^{3}-9x^{2}+35x-3)\), \(g^{\prime\prime}(x)=4(3x^{2}-18x+35)\). Since \(g^{\prime\prime}(x)>0\) for all \(x\), \(g^{\prime}(x)\geq g^{\prime}(5)=288>0\) for \(x\geq 5\), which implies that \(g(x)\geq g(5)=816>0\). Thus \(f(\frac{3n-1}{2n})>0\). It follows that the second largest root \(\lambda^{(2)}\) of \(f(\lambda)=0\) satisfies \(\frac{3n-1}{2n}<\lambda^{(2)}<\frac{3}{2}\). By the above argument, \(\lambda^{(2)}\) is the second largest eigenvalue of \(D(G)+2I_{n}\), so \(\lambda_{2}(G)=\lambda^{(2)}-2<-\frac{1}{2}\).
From the proof of Lemma 3.1, we have \(-\frac{n+1}{2n}<\lambda_{2}(G)<-\frac{1}{2}\). Thus, the limit of \(\lambda_{2}(G)\) as \(n\) approaches \(+\infty\) is \(-\frac{1}{2}\).
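Since \(B_{k}^{\theta}\) is simply \(K_{4}\) minus one edge (this is \(\theta(2,2,1)\)) with \(k\) pendant edges attached at one of its two degree-three vertices, Lemma 3.1 can also be spot-checked numerically; a small sketch (our illustration, the construction and names are ours, reusing the helper from Section 2):

```python
# Spot-check of Lemma 3.1: build B_k^theta and evaluate lambda_2 for small k.
import networkx as nx

def B_theta(k: int) -> nx.Graph:
    G = nx.complete_graph(4)   # vertices 0, 1, 2, 3; removing edge {2,3} gives theta(2,2,1)
    G.remove_edge(2, 3)        # vertices 0 and 1 now have degree 3
    for i in range(k):
        G.add_edge(0, 4 + i)   # k pendant edges attached at the degree-3 vertex 0
    return G

for k in range(6):
    print(k, second_largest_distance_eigenvalue(B_theta(k)))   # all values lie below -1/2
```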
Proof of Theorem 3.3.: Assume that \(\theta(p,q,s)\) is an induced subgraph of \(G\), where \(p\geq q\geq s\). Let \(P_{p+1}=u_{0}u_{1}\ldots u_{p}\), \(P_{q+1}=v_{0}v_{1}\ldots v_{q}\) and \(P_{s+1}=w_{0}w_{1}\ldots w_{s}\) be three paths of length \(p\), \(q\) and \(s\) respectively, where \(u_{0}=v_{0}=w_{0}=x\), \(u_{p}=v_{q}=w_{s}=y\). Suppose that \(s\geq 2\). It is evident that two of the lengths \(p,q,s\) of the three paths are of the same parity. Note that the subgraph induced by the vertices of such two paths is an induced distance-preserving even cycle, written \(C^{\prime}\). By Lemmas 2.2 and 2.3, we have \(\lambda_{2}(G)\geq\lambda_{2}(C^{\prime})=0\), a contradiction. Hence, \(s=1\). Similarly, both \(p\) and \(q\) are even. Suppose that \(p\geq 4\). Then the length of the cycle \(C^{\prime\prime}\) induced by the vertices of \(P_{p+1}\) and \(P_{s+1}\) is \(p+1\). By Lemmas 2.2 and 2.3, we have \(\lambda_{2}(G)\geq\lambda_{2}(C^{\prime\prime})=-\frac{1}{4}\sec^{2}(\frac{\pi}{p+1})>-\frac{1}{2}\), a contradiction. Thus, \(p=q=2\). Since \(F_{9}\) is a forbidden subgraph, one gets \(d_{G}(u_{1})=d_{G}(v_{1})=2\). Since \(F_{10}\) is a forbidden subgraph, there can only be some pending edges at \(x\) or \(y\). Assume that \(d_{G}(x)\geq d_{G}(y)\). Then \(d_{G}(y)\leq 4\), as otherwise, there would be a forbidden subgraph \(F_{2}\).
If \(d_{G}(y)=4\), then, since \(F_{1}\) is a forbidden subgraph, we have \(d_{G}(x)\leq 5\), so, \(G\cong B_{6},B_{7}\). If \(d_{G}(y)=3\), then \(G\cong B_{k}^{\theta}\), where \(k\geq 0\).
Conversely, from Lemma 3.1, we have \(\lambda_{2}(G)<-\frac{1}{2}\) if \(G\cong B_{k}^{\theta}\). By a direct calculation, we have \(\lambda_{2}(B_{6})\approx-0.5578<-\frac{1}{2}\), \(\lambda_{2}(B_{7})\approx-0.5119<-\frac{1}{2}\), as desired.
## 4 Split graphs \(G\) with \(\lambda_{2}(G)<-\frac{1}{2}\)
In the following, we view a clique \(K\) of cardinality \(s\) of a graph \(G\) as \(K_{s}\), the subgraph of \(G\) induced by \(K\).
For nonnegative integer \(t\), let \(SP^{t}\) be the split graph consisting of a clique \(K_{4}\) and an independent set \(I=\{x_{1},\dots,x_{t},w\}\) so that \(x_{1},\dots,x_{t}\) have a unique neighbor \(u\in V(K_{4})\) and \(w\) has exactly two neighbors \(u,v\in V(K_{4})\). In particular, \(SP^{0}\) is the split graph with a clique \(K_{4}\) and an independent set \(I=\{w\}\) so that \(w\) has exactly two neighbors in \(V(K_{4})\).
**Lemma 4.1**.: _Let \(G=SP^{t}\) with \(t\geq 0\). Then \(\lambda_{2}(G)<-\frac{1}{2}\)._
Proof.: Note that \(SP^{t}\) is a distance-preserving induced subgraph of \(SP^{t+1}\). From Lemma 2.2, we have \(\lambda_{2}(SP^{t})\leq\lambda_{2}(SP^{t+1})\), which implies that \(\{\lambda_{2}(SP^{t}):t=0,1,\dots\}\) is a non-decreasing sequence. So it suffices to show that \(\lambda_{2}(G)<-\frac{1}{2}\) for large enough \(t\).
Let \(G=SP^{t}\). Then \(|V(G)|=t+5\). We partition \(V(G)\) as \(V(G)=V_{1}\cup V_{2}\cup\{u\}\cup\{v\}\cup\{w\}\), where \(V_{1}=K_{4}\setminus\{u,v\}\), \(V_{2}=\{x_{1},\dots,x_{t}\}\). Under this partition, we have
\[D(G)=\left(\begin{array}{ccccc}J_{2}-I_{2}&2J_{2\times t}&J_{2\times 1}&J_{2 \times 1}&2J_{2\times 1}\\ 2J_{t\times 2}&2(J_{t}-I_{t})&J_{t\times 1}&2J_{t\times 1}&2J_{t\times 1}\\ J_{1\times 2}&J_{1\times t}&0&1&1\\ J_{1\times 2}&2J_{1\times t}&1&0&1\\ 2J_{1\times 2}&2J_{1\times t}&1&1&0\end{array}\right).\]
The first two rows of \(-I_{t+5}-D(G)\) are equal, implying that \(-1\) is a distance eigenvalue of \(G\) with multiplicity at least \(1\), and in \(-2I_{t+5}-D(G)\), there are \(t\) equal rows, implying that \(-2\) is a distance eigenvalue of \(G\) with multiplicity at least \(t-1\).
Let \(Q\) be the quotient matrix of \(D(G)\) with respect to the above partition on \(V(G)\). Then
\[Q=\left(\begin{array}{ccccc}1&2t&1&1&2\\ 4&2(t-1)&1&2&2\\ 2&t&0&1&1\\ 2&2t&1&0&1\\ 4&2t&1&1&0\end{array}\right).\]
Note that the above partition is equitable. Thus the spectrum of \(Q\) is contained in the distance spectrum of \(G\), see [3, Lemma 2.3.1]. Let \(f(\lambda)=\det(\lambda I_{5}-Q)\). By a direct calculation,
\[f(\lambda)=\lambda^{5}-(2t-1)\lambda^{4}-(15t+17)\lambda^{3}-(33t+49)\lambda^{ 2}-(23t+44)\lambda-5t-12.\]
Note that \(f(+\infty)>0\), \(f(-0.5)\approx-0.094<0\), \(f(-\frac{t+1}{2t})=\frac{1}{32t^{5}}(t^{5}+19t^{4}-142t^{3}+62t^{2}-3t+1)>0\) for large enough \(t\), \(f(-1)=-2t<0\), \(f(-3)=10t-24>0\) for \(t\geq 4\), \(f(-\infty)<0\). It follows that the second largest root \(\lambda^{(2)}\) of \(f(\lambda)=0\) satisfies \(-\frac{t+1}{2t}<\lambda^{(2)}<-\frac{1}{2}\) for large enough \(t\). By the above argument, \(\lambda^{(2)}\) is the second largest distance eigenvalue of \(G\), i.e., \(\lambda_{2}(G)<-\frac{1}{2}\).
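The same kind of numerical spot-check works here (our illustration; the graph construction follows the definition of \(SP^{t}\) given above, names are ours, reusing the helper sketched in Section 2):

```python
# Spot-check of Lemma 4.1: build SP^t and evaluate lambda_2 for small t.
import networkx as nx

def SP(t: int) -> nx.Graph:
    G = nx.complete_graph(4)             # clique K_4 on vertices 0 (=u), 1 (=v), 2, 3
    G.add_edges_from([(0, 4), (1, 4)])   # w = 4 has exactly the two neighbors u and v
    for i in range(t):
        G.add_edge(0, 5 + i)             # x_1, ..., x_t have the unique neighbor u
    return G

for t in range(6):
    print(t, second_largest_distance_eigenvalue(SP(t)))   # stays below -1/2, approaching it
```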
Let \(s\) be an integer with \(s\geq 2\). Let \(K_{s}(x_{1},\dots,x_{r})\) be the split graph obtained from \(K_{s}\) with vertex set \(\{v_{1},\dots,v_{s}\}\) by attaching \(x_{i}\) pending edges at \(v_{i}\) for \(i=1,\dots,r\), where \(1\leq r\leq s\). In particular, \(K_{2}(1,1)\cong P_{4}\) and \(K_{2}(t)\cong S_{t+2}\).
**Lemma 4.2**.: _Let \(G=K_{s}(2,1)\), where \(s\geq 2\). Then \(\lambda_{2}(G)<-\frac{1}{2}\)._
Proof.: Let \(G=K_{s}(2,1)\). Then \(|V(G)|=s+3\). Since \(K_{s}(2,1)\) is a distance-preserving induced subgraph of \(K_{s+1}(2,1)\), it follows from Lemma 2.2 that \(\lambda_{2}(K_{s}(2,1))\leq\lambda_{2}(K_{s+1}(2,1))\). Then the sequence \(\{\lambda_{2}(K_{s}(2,1)):s=2,3,\dots\}\) does not decrease with \(s\). So it suffices to show that \(\lambda_{2}(G)<-\frac{1}{2}\) for large enough \(s\).
Let \(I=\{w_{1},w_{2},w_{3}\}\) be the independent set of \(K_{s}(2,1)\). We use \(u\) to denote the only common neighbor of \(w_{1}\) and \(w_{2}\), and \(v\) to denote the neighbor of \(w_{3}\) in \(K_{s}(2,1)\). Then we may partition \(V(G)\) as \(V(G)=V_{1}\cup V_{2}\cup\{u\}\cup\{v\}\cup\{w_{3}\}\), where \(V_{1}=K_{s}\setminus\{u,v\}\), \(V_{2}=\{w_{1},w_{2}\}\). Under this partition, we have
\[D(G)=\left(\begin{array}{cccc}J_{s-2}-I_{s-2}&2J_{(s-2)\times 2}&J_{(s-2) \times 1}&J_{(s-2)\times 1}&2J_{(s-2)\times 1}\\ 2J_{2\times(s-2)}&2(J_{2}-I_{2})&J_{2\times 1}&2J_{2\times 1}&3J_{2\times 1}\\ J_{1\times(s-2)}&J_{1\times 2}&0&1&2\\ J_{1\times(s-2)}&2J_{1\times 2}&1&0&1\\ 2J_{1\times(s-2)}&3J_{1\times 2}&2&1&0\end{array}\right).\]
It is easy to see that \(-1\) is a distance eigenvalue of \(G\) with multiplicity at least \(s-3\) and \(-2\) is a distance eigenvalue of \(G\) with multiplicity at least \(1\).
Let \(Q\) be the quotient matrix of \(D(G)\) with respect to the above partition on \(V(G)\). Then
\[Q=\left(\begin{array}{cccc}s-3&4&1&1&2\\ 2(s-2)&2&1&2&3\\ s-2&2&0&1&2\\ s-2&4&1&0&1\\ 2(s-2)&6&2&1&0\end{array}\right).\]
As the above partition is equitable, the eigenvalues of \(Q\) are also distance eigenvalues of \(G\). Let \(f(\lambda)=\det(\lambda I_{5}-Q)\). Then
\[f(\lambda)=\lambda^{5}-(s-1)\lambda^{4}-12(s+1)\lambda^{3}-(40s+2)\lambda^{2} -(37s-10)\lambda-10s+4.\]
Note that \(f(+\infty)>0\), \(f(0)=-10s+4<0\), \(f(-\frac{1}{2})=\frac{1-2s}{32}<0\), \(f(-\frac{s+1}{2s})=\frac{1}{32s^{5}}(2s^{6}-89s^{5}+233s^{4}-170s^{3}-44s^{2}+3 s+1)>0\) for large enough \(s\), \(f(-1)=4-2s<0\) for \(s\geq 3\), \(f(-4)=2(5s-34)>0\) for \(s\geq 7\), \(f(-\infty)<0\). It follows that the second largest root \(\lambda^{(2)}\) of \(f(\lambda)=0\) satisfies \(-\frac{s+1}{2s}<\lambda^{(2)}<-\frac{1}{2}\) for large enough \(s\). Thus, \(\lambda^{(2)}\) is the second largest distance eigenvalue of \(G\), i.e., \(\lambda_{2}(G)<-\frac{1}{2}\).
For a graph \(G\) and its subgraph \(H\) and a vertex \(v\) of \(G\) outside \(H\), let \(N_{H}(v)=N_{G}(v)\cap V(H)\) and \(d_{H}(v)=|N_{H}(v)|\).
**Theorem 4.1**.: _Let \(G\) be a split graph. Then \(\lambda_{2}(G)<-\frac{1}{2}\) if and only if_
* \(G\cong SP_{1},SP_{2}\) _as displayed in Figure_ 8_, or_
* \(G\cong B_{6},B_{7}\)_, or_
* \(G\cong SP^{t}\)_, where_ \(t\geq 0\)_, or_
* \(G\cong B_{k}^{\theta}\)_, where_ \(k\geq 0\)_, or_
* \(G\cong K_{s}(t)\)_, where_ \(s\geq 2\) _and_ \(t\geq 0\)_, or_
* \(G\cong K_{s}(a,1)\)_, where_ \(s\geq 2\) _and_ \(a=1,2\)_, or_
* \(G\cong K_{s}(\underbrace{1,\ldots,1}_{t})\)_, where_ \(s\geq 3\) _and_ \(t\geq 3\)_._
Proof.: Let \(G\) be a split graph of order \(n=s+t\). Let \(K_{s}\) be the maximum clique and \(I\) be the independent set in \(G\) of size \(t\). Then \(s\geq 2\).
If \(I=\emptyset\), then \(G\cong K_{s}\) and \(\lambda_{2}(G)=-1<-\frac{1}{2}\).
Suppose that \(I\neq\emptyset\).
Suppose first that \(s=2\). Let \(V(K_{2})=\{u,v\}\). Assume that \(d_{G}(u)\geq d_{G}(v)\). Since \(F_{2}\) is a forbidden subgraph, we have \(d_{G}(v)=1,2\). If \(d_{G}(v)=2\), then \(d_{G}(u)\leq 3\) due to \(F_{1}\) being forbidden. Thus \(G\cong K_{2}(1,1)\) or \(K_{2}(2,1)\). If \(d_{G}(v)=1\), then \(G\cong K_{2}(t)\).
By a direct calculation, we have \(\lambda_{2}(K_{2}(2,1))\approx-0.5120<-\frac{1}{2}\). From Lemma 2.1, we have \(\lambda_{2}(K_{2}(1,1))<-\frac{1}{2}\) and \(\lambda_{2}(K_{2}(t))<-\frac{1}{2}\).
Suppose next that \(s\geq 3\). Let \(ID_{G}=\{z\in I:d_{K_{s}}(z)\geq 2\}\).
**Claim.**\(|ID_{G}|=0,1\).
Otherwise, there exist \(z_{1}\) and \(z_{2}\) in \(I\) with \(d_{K_{s}}(z_{1})\geq 2\) and \(d_{K_{s}}(z_{2})\geq 2\). Let \(x_{1},x_{2}\in N_{K_{s}}(z_{1})\) and \(y_{1},y_{2}\in N_{K_{s}}(z_{2})\). There are three possibilities.
Suppose that \(|\{x_{1},x_{2}\}\cap\{y_{1},y_{2}\}|=2\), i.e., \(\{x_{1},x_{2}\}=\{y_{1},y_{2}\}\). Note that \(s\geq 3\). Let \(H_{1}\) be the subgraph of \(G\) induced by \(\{x_{1},x_{2},x_{3},z_{1},z_{2}\}\), where \(x_{3}\) is any vertex in \(K_{s}\) different from \(x_{1},x_{2}\). By a direct calculation, we have \(\lambda_{2}(H_{1})\approx-0.3723>-\frac{1}{2}\), a contradiction.
Suppose that \(|\{x_{1},x_{2}\}\cap\{y_{1},y_{2}\}|=1\). Without loss of generality, let \(y_{1}=x_{1},y_{2}\neq x_{2}\). Then it is easily seen that for the subgraph \(H_{2}\) induced by \(\{x_{1},x_{2},y_{2},z_{1},z_{2}\}\), \(\lambda_{2}(H_{2})\approx-0.3820>-\frac{1}{2}\), a contradiction.
Suppose that \(|\{x_{1},x_{2}\}\cap\{y_{1},y_{2}\}|=0\). Let \(H_{3}\) be the subgraph of \(G\) induced by \(\{x_{1},x_{2},y_{1},y_{2},z_{1},z_{2}\}\). By a direct calculation, we have \(\lambda_{2}(H_{3})\approx-0.2679>-\frac{1}{2}\), also a contradiction.
Therefore, the claim follows.
Figure 8: Graphs \(SP_{1},SP_{2}\).
By the above claim, \(|ID_{G}|=0,1\).
**Case 1.**\(|ID_{G}|=1\).
Let \(ID_{G}=\{z\}\).
If \(s=3\), then \(G\) is a \(\theta\)-bicyclic graph, so by Theorem 3.3, we have \(G\cong B_{6},B_{7},B_{k}^{\theta}\).
Assume that \(s\geq 4\). Then \(d_{K_{s}}(z)=2\). Otherwise, there is a distance-preserving subgraph isomorphic to \(K_{5}-e\) and \(\lambda_{2}(K_{5}-e)\approx-0.4495>-\frac{1}{2}\), a contradiction. Suppose that \(s\geq 5\). Then there is a distance-preserving subgraph, say \(H\), induced by \(V(K_{5})\cup\{z\}\). By a direct calculation, we have \(\lambda_{2}(H)\approx-0.4913>-\frac{1}{2}\), a contradiction. It thus follows that \(s=4\). Let \(N_{K_{s}}(z)=\{u,v\}\) with \(d_{G}(u)\geq d_{G}(v)\). Since \(F_{2}\) is a forbidden subgraph, we have \(d_{G}(v)=4,5\). If \(d_{G}(v)=5\), then, since \(F_{1}\) is a forbidden subgraph, we have \(d_{G}(u)=5,6\). So \(G\cong SP_{1}\) if \(d_{G}(u)=6\) and \(G\cong SP_{2}\) if \(d_{G}(u)=5\). If \(d_{G}(v)=4\), then \(G\cong SP^{t}\).
By a direct calculation, we have \(\lambda_{2}(SP_{1})\approx-0.5106<-\frac{1}{2}\). Since \(SP_{2}\) is an induced distance-preserving subgraph of \(SP_{1}\), \(\lambda_{2}(SP_{2})\leq\lambda_{2}(SP_{1})<-\frac{1}{2}\). By Lemma 4.1, \(\lambda_{2}(SP^{t})<-\frac{1}{2}\).
**Case 2.**\(|ID_{G}|=0\).
If there is exactly one vertex in \(K_{s}\) with degree not less than \(s\), then \(G\cong K_{s}(t)\), which is a block star.
Suppose that there are exactly two vertices in \(K_{s}\), say \(u,v\) with \(d_{G}(u)\geq d_{G}(v)\geq s\). Since \(F_{2}\) is a forbidden subgraph, we have \(d_{G}(v)\leq s\). So \(d_{G}(v)=s\). Furthermore, since \(F_{1}\) is a forbidden subgraph, we have \(d_{G}(u)\leq s+1\). Then \(G\cong K_{s}(2,1)\) if \(d_{G}(u)=s+1\), and \(G\cong K_{s}(1,1)\) if \(d_{G}(u)=s\). From Lemma 4.2, \(\lambda_{2}(K_{s}(2,1))<-\frac{1}{2}\). Since \(K_{s}(1,1)\) is an induced distance-preserving subgraph of \(K_{s}(2,1)\), \(\lambda_{2}(K_{s}(1,1))\leq\lambda_{2}(K_{s}(2,1))<-\frac{1}{2}\).
If there are \(t\geq 3\) vertices in \(K_{s}\) with degree not less than \(s\), then, since \(F_{6}\) is a forbidden subgraph, we have \(G\cong K_{s}(\underbrace{1,\ldots,1}_{t})\), which is a loose block graph. From Lemma 2.1, we have \(\lambda_{2}(K_{s}(\underbrace{1,\ldots,1}_{t}))<-\frac{1}{2}\).
**Acknowledgements.** This work was supported by the National Natural Science Foundation of China (Nos. 12101249 and 12071158).
|
2308.01254 | Generating ultra compact boson stars with modified scalar potentials | The properties of selfinteracting boson stars with different scalar
potentials going beyond the commonly used $\phi^4$ ansatz are studied. The
scalar potential is extended to different values of the exponent $n$ of the
form $V \propto \phi^n$. Two stability mechanism for boson stars are
introduced, the first being a mass term and the second one a vacuum term. We
present analytic scale-invariant expressions for these two classes of equations
of state. The resulting properties of the boson star configurations differ
considerably from previous calculations. We find three different categories of
mass-radius relation: the first category resembles the mass-radius curve of
selfbound stars, the second one those of neutron stars and the third one is the
well known constant radius case from the standard $\phi^4$ potential. We
demonstrate that the maximal compactness can reach extremely high values going
to the limit of causality $C_\text{max} = 0.354$ asymptotically for
$n\to\infty$. The maximal compactnesses exceed previously calculated values of
$C_\text{max}=0.16$ for the standard $\phi^4$-theory and $C_\text{max}=0.21$
for vector-like interactions and is in line with previous results for solitonic
boson stars. Hence, boson stars even described by a simple modified scalar
potential in the form of $V \propto \phi^n$ can be ultra compact black hole
mimickers where the photon ring is located outside the radius of the star. | Sarah Louisa Pitz, Jürgen Schaffner-Bielich | 2023-08-02T16:15:40Z | http://arxiv.org/abs/2308.01254v3 | # Generating ultra compact boson stars with modified scalar potentials
###### Abstract
The properties of selfinteracting boson stars with different scalar potentials going beyond the commonly used \(\phi^{4}\) ansatz are studied. The scalar potential is extended to different values of the exponent \(n\) of the form \(V\propto\phi^{n}\). Two stability mechanism for boson stars are introduced, the first being a mass term and the second one a vacuum term. We present analytic scale-invariant expressions for these two classes of equations of state. The resulting properties of the boson star configurations differ considerably from previous calculations. We find three different categories of mass-radius relation: the first category resembles the mass-radius curve of selfbound stars, the second one those of neutron stars and the third one is the well known constant radius case from the standard \(\phi^{4}\) potential. We demonstrate that the maximal compactness can reach extremely high values going to the limit of causality \(C_{\rm max}=0.354\) asymptotically for \(n\to\infty\). The maximal compactnesses exceed previously calculated values of \(C_{\rm max}=0.16\) for the standard \(\phi^{4}\)-theory and \(C_{\rm max}=0.21\) for vector-like interactions and is in line with previous results for solitonic boson stars. Hence, boson stars even described by a simple modified scalar potential in the form of \(V\propto\phi^{n}\) can be ultra compact black hole mimickers where the photon ring is located outside the radius of the star.
## I Introduction
Dark matter plays a crucial role in explaining certain phenomena in cosmology and astrophysics on large scales, where self-interacting dark matter provides an explanation for small-scale structure observations [1]. By including self-interactions dark matter can form compact objects, such as boson stars. Boson stars are self-gravitating spheres, described by complex scalar fields, see [2] for a review. These kinds of compact objects were first discussed by Wheeler in 1955 [3] as self-gravitating, non-interacting spheres made of bosons, which he named geons. However, these configurations turned out to be unstable. In order to obtain stable stars one needs time-dependent solutions of the Klein-Gordon equation [4]. These stable boson stars were usually of microscopic size in the non-interacting case [5; 6]. Boson stars of astrophysical sizes were found for solitonic boson stars [7; 8; 9; 10], by introducing a self-interaction potential of the form \(V=\lambda\phi^{4}\)[11], and for repulsive vector-like self-interactions proportional to the density squared [12]. Generic self-interactions have been considered in [13]. The properties of boson stars with self-interactions have been investigated in detail using a \(\phi^{4}\)-potential or vector-like self-interactions [14; 15; 16; 17]. Boson stars can be built via standard structure formation from the early universe [18].
So far the LIGO-Virgo collaboration has only detected one event of a neutron star-neutron star merger, GW170817 [19], confirmed by a gamma-ray burst and the optical afterglow. A significant fraction of the gravitational wave sources measured by the LIGO-Virgo collaboration is located in the mass gap between the lightest black hole of \(5M_{\odot}\) and the heaviest neutron star [20]. These compact objects could possibly be exotic stars since the most massive neutron stars measured so far have masses of around \(2M_{\odot}\), constrained by observations of radio pulsars [21; 22; 23], and \(2.35M_{\odot}\), constrained from optical observations of black widow pulsars [24]. For example, in the gravitational wave event GW190814 one compact object has a mass of \(2.6M_{\odot}\), exceeding all known neutron star masses considerably [25]. Boson stars can be observed by the emission of gravitational waves from a merger of two boson stars or of one boson star with a neutron star. Moreover, boson star-boson star mergers can be distinguished from other mergers by their gravitational wave echoes [26; 27; 28; 29; 30; 31; 32; 33]. Also collisions of neutron stars with boson stars can send out gravitational waves that are detectable by future telescopes [34]. Another possible class of gravitational wave sources are ordinary neutron stars containing a bosonic dark matter fluid [35; 36; 37; 38; 39; 40; 41].
These two-fluid compact objects could contain dark matter cores which influence the macroscopic properties of the star. Accreting boson stars can also be distinguished from black holes by radio observations of supermassive black holes [42].
In our work we extend the standard \(\phi^{4}\) self-interaction potential of bosons to a general one of the form \(\phi^{n}\) with arbitrary values of the power \(n\). By choosing this generalized self-interaction potential we find that compact stars with massive bosons can have different forms of their mass-radius relations, with some of them being similar to those of neutron stars or selfbound stars. We find curves that are not constrained to the constant-radius behavior of the \(\phi^{4}\)-potential and of vector-like self-interactions. We will also show that their compactness can exceed that of neutron stars and goes asymptotically to the limit of causality with a compactness of \(C=0.354\) for large values of \(n\). These extreme values of compactness are similar to the ones found in a recent work on solitonic boson stars [43] and lead to new opportunities in the search for self-interacting boson stars via boson star mergers and thus in the search for self-interacting dark matter. For a first investigation of ultra compact solitonic boson star mergers we refer to [33]. The outline of the paper is as follows: first the theoretical basis from classical field theory for complex scalar fields and general relativity is summarized. Then equations of state (EOS) with different stability mechanisms for boson stars are introduced. We discuss two stabilizing mechanisms: a standard mass term in the Lagrangian and a vacuum energy in the potential without a mass term. Finally the mass-radius curves as well as the compactness are presented.
## II Theoretical framework
### Scale-invariant equation of state from a classical scalar potential
Assuming a complex scalar field for the description of a bosonic matter, a suitable Lagrangian reads as follows:
\[\mathcal{L}=\partial_{\mu}\phi^{*}\partial^{\mu}\phi-m^{2}\phi^{*}\phi-V \tag{1}\]
where \(V\) represents the potential, \(m\) the mass of the boson and \(\phi\) the complex scalar field. The equation of motion is then given by
\[(\partial_{\mu}\partial^{\mu}+m^{2})\phi=-\frac{\partial V}{\partial\phi^{*}}. \tag{2}\]
For the investigation of the potential we are going to present analytic general equations of state which only depend on the exponent \(n\).
In this work we assume an ideal fluid for the bosons and calculate the EOS adopting a flat space-time. Assuming flat space-time is justifiable in a local density approach. Without any interaction potential the radius of curvature of the boson star is of the same order as the Compton wavelength of the massive boson (\(r\propto M_{P}/m_{b}^{2}\) with the Planck mass \(M_{P}\) and the mass of the boson \(m_{b}\)) [11], which means space-time is strongly curved. The opposite holds for a strong interaction potential with interaction strength \(\lambda\). The radius of curvature increases with \(M_{P}/m_{b}\sqrt{\lambda}\)[11], which allows one to consider flat space-time. Moreover the scalar field only varies on a large scale, so the gradient of the field can be neglected. This is the case when \(M_{P}/m_{b}\sqrt{\lambda}\gg 1\)[11]. We start from the energy-momentum tensor to calculate the equation of state
\[T_{\mu\nu}=-\eta_{\mu\nu}{\cal L}+\sum_{\phi,\phi^{*}}\frac{\partial{\cal L}}{ \partial(\partial^{\mu}\phi_{i})}\partial_{\nu}\phi_{i}. \tag{3}\]
Together with equation (1) we get
\[T_{\mu\nu}=-\eta_{\mu\nu}{\cal L}+\partial_{\mu}\phi^{*}\partial_{\nu}\phi+ \partial_{\mu}\phi\partial_{\nu}\phi^{*}+\frac{\partial V}{\partial(\partial ^{\mu}\phi)}\partial_{\nu}\phi+\frac{\partial V}{\partial(\partial^{\mu} \phi^{*})}\partial_{\nu}\phi^{*}. \tag{4}\]
The last two terms in equation (4) vanish since the potentials used in this work do not depend on the derivatives of the scalar field. The energy-momentum tensor reduces to the form for an ideal fluid:
\[T_{\nu}^{\mu}=\begin{pmatrix}\varepsilon&0&0&0\\ 0&p&0&0\\ 0&0&p&0\\ 0&0&0&p\end{pmatrix} \tag{5}\]
with the energy density \(\varepsilon\) and the pressure \(p\). Calculating \(T_{00}\) and \(T_{ii}\) and making use of the ansatz \(\phi=\phi_{0}e^{-i\omega t}\) leads to the following equations:
\[T_{00} = \phi_{0}^{2}\omega^{2}+m^{2}\phi_{0}^{2}+V \tag{6}\] \[T_{ii} = \phi_{0}^{2}\omega^{2}-m^{2}\phi_{0}^{2}-V \tag{7}\]
where \(\omega\) and \(t\) denote energy and time respectively. Using the ansatz together with equation (2) gives a relation between \(\omega\) and \(m\):
\[\omega^{2}\phi_{0}^{2}=\phi_{0}^{2}m^{2}+\frac{\partial V}{\partial\phi^{*}} \phi^{*}. \tag{8}\]
Making use of this relation leads to the desired expressions for \(\varepsilon\) and \(p\):
\[T_{00} = \varepsilon=2m^{2}\phi_{0}^{2}+\frac{\partial V}{\partial\phi^{*}} \phi^{*}+V \tag{9}\] \[T_{ii} = p=\frac{\partial V}{\partial\phi^{*}}\phi^{*}-V. \tag{10}\]
To obtain dimensionless quantities we divide equations (9) and (10) by \(m^{4}\):

\[\varepsilon^{\prime} = 2\phi_{0}^{\prime 2}+\frac{\partial V^{\prime}}{\partial \phi^{\prime*}}\phi^{\prime*}+V^{\prime} \tag{11}\] \[p^{\prime} = \frac{\partial V^{\prime}}{\partial\phi^{\prime*}}\phi^{\prime*}-V^{\prime} \tag{12}\]

with \(\varepsilon^{\prime}=\varepsilon/m^{4}\), \(V^{\prime}=V/m^{4}\), \(p^{\prime}=p/m^{4}\) and \(\phi^{\prime}=\phi/m\).
### Tolman-Oppenheimer-Volkoff (TOV) equations
By solving the TOV equations one obtains the mass and the radius of an compact object, which is necessary in order to calculate the mass-radius curves:
\[\frac{dp}{dr} = -G\frac{m_{r}(r)\varepsilon(r)}{r^{2}}\bigg{(}1+\frac{p(r)}{ \varepsilon(r)}\bigg{)}\bigg{(}1+\frac{4\pi r^{3}p(r)}{m_{r}(r)}\bigg{)} \bigg{(}1-\frac{2Gm_{r}(r)}{r}\bigg{)}^{-1} \tag{13}\] \[\frac{dm_{r}(r)}{dr} = 4\pi r^{2}\varepsilon(r) \tag{14}\]
with the pressure \(p(r)\), radius \(r\), the gravitational constant \(G\), the mass \(m_{r}\) inside a sphere of radius \(r\) and the energy density \(\varepsilon(r)\). Since all the calculations are dimensionless, the TOV equations need to be rescaled. Applying the following scaling relations for the mass and the radius
\[m_{r} = (G^{3}\cdot\varepsilon_{0})^{-1/2}m_{r}^{\prime} \tag{15}\] \[r = (G\cdot\varepsilon_{0})^{-1/2}r^{\prime} \tag{16}\]
leads to the following form:
\[\frac{dp^{\prime}}{dr^{\prime}} = -\frac{m_{r}^{\prime}\varepsilon^{\prime}}{r^{\prime 2}}\bigg{(}1+ \frac{p^{\prime}}{\varepsilon^{\prime}}\bigg{)}\bigg{(}1+\frac{4\pi r^{\prime 3}p^{ \prime}}{m_{r}^{\prime}}\bigg{)}\bigg{(}1-\frac{2m_{r}^{\prime}}{r^{\prime}} \bigg{)}^{-1} \tag{17}\] \[\frac{dm_{r}^{\prime}}{dr^{\prime}} = 4\pi r^{\prime 2}\varepsilon^{\prime}. \tag{18}\]
with \(\varepsilon_{0}\) being a constant with dimension of an energy density. Also the pressure and the energy density need to be rescaled:
\[p^{\prime} = \frac{p}{\varepsilon_{0}} \tag{19}\] \[\varepsilon^{\prime} = \frac{\varepsilon}{\varepsilon_{0}}. \tag{20}\]
Please note that in natural units the energy density and the pressure have the same dimension. For the case of, e.g., non-interacting massive bosons with mass \(m\) one would choose \(\varepsilon_{0}=m^{4}\).
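A minimal numerical sketch of this procedure is given below (our own illustration, not the authors' code; the function name, starting radius, tolerances and the surface criterion are assumptions). It integrates the dimensionless equations (17) and (18) outward from the center for a given dimensionless EOS \(\varepsilon^{\prime}(p^{\prime})\).

```python
# Sketch: integrate the dimensionless TOV equations (17)-(18) with scipy.
import numpy as np
from scipy.integrate import solve_ivp

def tov_mass_radius(eps_of_p, p_central, r_max=100.0):
    """Return (R, M) in code units for a given EOS eps_of_p and central pressure."""
    def rhs(r, y):
        p = max(y[0], 1e-30)            # guard: keep p positive for fractional powers in the EOS
        m = y[1]
        eps = eps_of_p(p)
        dpdr = -(m * eps / r**2) * (1 + p / eps) * (1 + 4 * np.pi * r**3 * p / m) \
               / (1 - 2 * m / r)
        dmdr = 4 * np.pi * r**2 * eps
        return [dpdr, dmdr]

    def surface(r, y):                  # stop once the pressure has essentially vanished
        return y[0] - 1e-8 * p_central
    surface.terminal = True

    r0 = 1e-6                           # start slightly off-center to avoid division by zero
    m0 = 4.0 / 3.0 * np.pi * r0**3 * eps_of_p(p_central)
    sol = solve_ivp(rhs, (r0, r_max), [p_central, m0], events=surface,
                    rtol=1e-8, atol=1e-12)
    return sol.t[-1], sol.y[1, -1]      # (radius R', mass M') in the rescaled units above
```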
### Dimensionless Equation of State
Since all calculations are independent on units, we need to derive an equation of state that satisfies this requirement. We use in the following two different kind of generalized scalar potentials: one with a mass term and one with a vacuum term but without a mass term. It turns out that these two kinds of potentials lead to stable compact star configurations. We note in passing that without a mass and a vacuum term the compact star configurations are unstable. For the two different stability mechanisms the equations of state differ considerably as discussed below.
The starting point for the following calculations is equation (9), respectively equation (10). For the scalar potential of the form
\[V=\frac{\lambda}{2^{n/2}}\left(\phi^{*}\phi\right)^{n/2} \tag{21}\]
we get
\[p^{\prime} = \frac{\lambda^{\prime}}{2^{n/2}}(\phi^{\prime}_{0})^{n}\left( \frac{n}{2}-1\right) \tag{22}\] \[\varepsilon^{\prime} = 2\phi^{\prime 2}_{0}+\frac{\lambda^{\prime}}{2^{n/2}}(\phi^{ \prime}_{0})^{n}\left(\frac{n}{2}+1\right) \tag{23}\]
with the dimensionless coupling strength \(\lambda^{\prime}=\lambda/m^{4-n}\) and the dimensionless scalar field \(\phi^{\prime}_{0}=\phi_{0}/m\). In order to obtain an analytic expression for the EOS, we can express the scalar field in terms of the pressure and insert it into the expression for the energy density. The EOS can be further simplified by rescaling the pressure and the energy density with a factor \(\lambda^{\prime}(n/2-1)\) to the form
\[\varepsilon^{\prime}=p^{\prime 2/n}+\frac{n+2}{n-2}\,p^{\prime}. \tag{24}\]
It is evident that the EOS (24) is restricted to \(n>2\).
For the second form of the EOS studied we introduce a vacuum term \(V_{0}\). We start with equation (9) and (10) but setting the mass term to zero. The potential in this case is given by
\[V=\frac{\lambda}{2^{n/2}}\left(\phi^{*}\phi\right)^{n/2}+V_{0}. \tag{25}\]
We obtain for the pressure and energy density
\[\varepsilon = \frac{\lambda}{2^{n/2}}\phi_{0}^{n}\left(\frac{n}{2}+1\right)+V_ {0} \tag{26}\] \[p = \frac{\lambda}{2^{n/2}}\phi_{0}^{n}\left(\frac{n}{2}-1\right)-V_ {0}. \tag{27}\]
Combining these two equations gives an EOS which is independent on the interaction strength \(\lambda\)
\[\varepsilon=\frac{n+2}{n-2}\,p+\frac{2n}{n-2}\,V_{0}. \tag{28}\]
Interestingly, this EOS is the one for selfbound stars of the form
\[p=c_{s}^{2}\left(\varepsilon-\varepsilon_{\rm vac}\right) \tag{29}\]
where the pressure vanishes at a nonvanishing vacuum energy density \(\varepsilon_{\rm vac}\). The prefactor \(c_{s}^{2}\) is the speed of sound squared. Indeed, rewriting eq. (28) as \(p=\frac{n-2}{n+2}\left(\varepsilon-\frac{2n}{n-2}\,V_{0}\right)\) identifies \(c_{s}^{2}=\frac{n-2}{n+2}\) and \(\varepsilon_{\rm vac}=\frac{2n}{n-2}\,V_{0}\). As one can see, different values of \(c_{s}^{2}\), i.e. different stiffnesses of the EOS, emerge from the chosen value of the power \(n\) in the scalar potential. Note that the selfbound EOS here has been derived from interacting bosonic matter. One arrives at the MIT bag EOS by setting \(n=4\) in eq. (28) so that \(c_{s}^{2}=1/3\). These similar EOS are based on completely different descriptions of the matter, demonstrating that the EOS is composition blind so that the results from general relativity do not depend on the underlying microphysics, as dictated by the strong equivalence principle.
\[\varepsilon^{\prime}=\frac{n+2}{n-2}\,p^{\prime}+1. \tag{30}\]
This EOS is quite similar to the one with the mass term (see equation (24)). The part linear in \(p\) stays the same but the second part differs and is now a constant.
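Both dimensionless EOS classes, eqs. (24) and (30), can be encoded as simple functions and passed to the TOV sketch of the previous subsection; a short illustration (function names are ours):

```python
# The two dimensionless EOS classes as plain functions of the (rescaled) pressure.
def eps_mass_term(p, n):
    """EOS with a mass term, eq. (24): eps = p^(2/n) + (n+2)/(n-2) * p, valid for n > 2."""
    return p ** (2.0 / n) + (n + 2.0) / (n - 2.0) * p

def eps_vacuum_term(p, n):
    """EOS with a vacuum term, eq. (30): eps = (n+2)/(n-2) * p + 1."""
    return (n + 2.0) / (n - 2.0) * p + 1.0

# Example (one configuration for n = 6 and central pressure 0.01):
# R, M = tov_mass_radius(lambda p: eps_mass_term(p, 6), 0.01)
```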
## III Results: mass-radius curves and compactness
By solving the TOV equations together with the derived equations of state, we obtain the corresponding mass-radius curves.
Figures 1 and 2 show the mass-radius curves for the case with a mass term for different values of the power \(n\). The case \(n=3\) is plotted in a separate figure, see Fig. 1, due to a different magnitude of the maximum mass and the corresponding radius. Additionally this curve differs in its shape compared to curves with larger values of \(n\), since the mass decreases with increasing radius, in a fashion known for e.g. neutron stars. Nevertheless its compactness is lower than the ones for larger values of the power \(n\). Note that the solutions to the left of the maximum are unstable as depicted in Fig. 1. The mass-radius curves for \(n=4\) starts for vanishing mass at a nonvanishing radius, as it is typical for a mass-radius relation with a radius independent on the mass. On the other hand, the mass-radius curves for the cases with \(n=5\) and larger start at the origin, i.e. at vanishing mass and radius. The shape of these mass-radius curves look like the ones of selfbound stars where the mass increases with \(R^{3}\). However, the underlying EOS does not exhibit a nonvanishing energy density at vanishing pressure.
Figure 1: Mass-radius curve for \(V\propto\phi^{3}\) with a mass term in the Lagrangian in dimensionless units. The mass decreases with increasing radius in the stable branch as \(M\propto R^{-1}\).
In summary three types of solutions are identifiable, for \(n=3\), \(n=4\) and \(n>4\). The curves differ in their behaviour in the limit of small masses. The curve with \(n=3\) goes to infinite radius, the curve with \(n=4\) goes to a constant value and the curves with \(n>4\) are going to zero for small masses. In order to understand their behaviour it is useful to consider the limit of small pressure \(p\) for the EOS. Equation (24) then simplifies to:
\[\varepsilon\approx p^{2/n}. \tag{31}\]
These different shapes can be understood by having a look at the mass-radius relation of a sphere in hydrostatic equilibrium for a polytropic EOS of the form \(p\propto\rho^{\Gamma}\), with \(\rho\) being the mass density and \(\Gamma\) a constant. In the nonrelativistic limit at low density, the energy density is simply given by the mass density \(\varepsilon=\rho\). The mass \(M\) and the radius \(R\) for a polytrope are related by (see e.g. [44]):
\[M^{2-\Gamma}\cdot R^{3\Gamma-4}\propto\text{const}. \tag{32}\]
Equation (31) describes a polytropic EOS and thus gives the possibility to make use of
Figure 2: Mass-radius curves for \(V\propto\phi^{n}\) for \(n=4\) up to \(n=100\) with a mass term in the scalar potential, shown in dimensionless units. The maximum mass and radius decrease with \(n\). The mass-radius curve for \(n=4\), i.e. for the standard \(\phi^{4}\)-potential, has a different shape compared to the curves with \(n>4\) since the radius goes to a constant value when the mass goes to zero. The other mass-radius curves resemble those of selfbound stars as they start at the origin.
equation (32) by setting \(\Gamma=n/2\).
In Table 1 we list the mass-radius relations obtained for different values of \(n\). We can reproduce that the mass decreases in the limit of large radii in the case of \(n=3\) as \(M\propto R^{-1}\). Furthermore, Table 1 confirms a constant value of the radius \(R\) for small pressures, i.e. small masses, for the case \(n=4\). The remaining cases (with \(n>4\)) are described by slightly different mass-radius relations. However, the mass vanishes in all these cases in the limit of \(R\to 0\), making them look like the mass-radius curves of selfbound stars. In the limit of \(n\to\infty\) one recovers the mass-radius relation of an incompressible fluid, i.e. \(M/R^{3}=\text{const.}\), as for selfbound stars. Please note that these relations are valid for small pressure \(p\), which corresponds to small masses. For higher pressures \(p\) the first term in equation (24) dominates so that the mass-radius relations shown in Table 1 do not hold anymore. As seen in Figs. 1 and 2, this is the case for the configurations close to the maximum mass.
Another way to understand the shape of the curves is by having a look at their slope. By rearranging equation (32) we obtain the relation
\[\frac{\text{d}\log M}{\text{d}\log R}=\frac{3\Gamma-4}{\Gamma-2}. \tag{33}\]
We can now identify the right-hand side of the equation with the slope of the curve \(m\). For \(n=3\) one finds \(m=-1\), for \(n=4\) the slope goes to infinity (\(m\to\infty\)) and the case \(n>4\) gives constant positive values for \(m\). Again we can confirm the shape of the curves with our numerical results. As already mentioned before the curves for \(n>4\) look similar to those of selfbound stars, where gravity is not needed to ensure stability. Selfbound stars are characterized by a nonvanishing value of the energy density at zero pressure. The shape of the mass-radius relation is determined by a constant energy density, ensuring that \(M\propto R^{3}\) holds. This explains why the mass-radius curves of those stars start at the origin. However, we stress that the stars here are not purely selfbound, since they need gravity to
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \(\mathbf{n}\) & \(\mathbf{3}\) & \(\mathbf{4}\) & \(\mathbf{5}\) & \(\mathbf{6}\) & \(\mathbf{7}\) & \(\mathbf{8}\) & \(\mathbf{\infty}\) \\ \hline \(\mathbf{\Gamma}\) & \(\frac{3}{2}\) & 2 & \(\frac{5}{2}\) & 3 & \(\frac{7}{2}\) & 4 & \(\infty\) \\ \hline \(\mathbf{M}\)**-\(\mathbf{R}\) relation** & \(M\propto R^{-1}\) & \(R\propto\text{const.}\) & \(M\propto R^{7}\) & \(M\propto R^{5}\) & \(M\propto R^{13/3}\) & \(M\propto R^{4}\) & \(M\propto R^{3}\) \\ \hline \end{tabular}
\end{table}
Table 1: Mass-radius relations for different values of \(n\) derived by using equation (32).
remain stable.
The mass-radius curves for the scalar potential without a mass term but with a vacuum term are depicted in Fig. 3. Contrary to the mass term case, these mass-radius curves only lead to one type of solution. The mass vanishes for small radii and increases with increasing radius. The mass-radius curves for the different values of \(n\) lie on top of each other for small radii. This feature originates from the dominant behaviour of the vacuum term in the EOS at low densities, which is independent of the value of \(n\). The energy density stays nearly constant as it is just given by the vacuum energy density, resulting in a mass-radius curve of the form \(M/R^{3}=\text{const.}\), i.e. the familiar one for selfbound stars. The maximum masses and radii increase with higher values of \(n\).
For both types of EOS we also investigated their dependence on the speed of sound squared \(c_{s}^{2}\), which is defined as the derivative of the pressure with respect to the energy density:
\[c_{s}^{2}=\frac{\partial p}{\partial\varepsilon}. \tag{34}\]
This gives the following simple relation between the equation of state and \(c_{s}^{2}\) for the vacuum
Figure 3: Mass-radius curves for \(V\propto\phi^{n}\) stabilized with a vacuum constant in dimensionless units. All curves have a similar shape and the same behavior in the limit of small masses, i.e. the radius vanishes for small masses. The maximum mass and radius increase with \(n\).
term case:
\[c_{s}^{2} = \frac{n-2}{n+2} \tag{35}\]
while for the case with a mass term one arrives at the relation:
\[c_{s}^{2}=\left(\frac{2}{n}\,p^{\frac{2-n}{n}}+\frac{n+2}{n-2}\right)^{-1}. \tag{36}\]
One realizes that the EOS with a vacuum term results in a constant \(c_{s}^{2}\). The EOS with a mass term has an increasing \(c_{s}^{2}\) with increasing pressure (strictly speaking for \(n>2\) which is the case in our studies). In the limit of \(p\rightarrow\infty\) one recovers the speed of sound squared of the case with a vacuum term. Hence, for a given power \(n\), \(c_{s}^{2}\) will always be larger for the EOS with a vacuum term compared to the one with a mass term. Thereby, the EOS with a vacuum term will be stiffer, for a fixed value of \(n\), compared to the one with a mass term. We also see, that \(c_{s}^{2}\) increases monotonically with the power \(n\) reaching \(c_{s}^{2}=1\) in the limit \(n\rightarrow\infty\).
In figure 4 we plot the results for the maximal value of \(c_{s}^{2}\) in the center of the maximum mass configuration, compared for both cases of the EOS. We find that the values of \(c_{s}^{2}\) are in good agreement but not identical as expected. This feature indicates that the maximum
Figure 4: The speed of sound squared \(c_{s}^{2}\) plotted against the exponent of the potential \(n\). \(c_{s}^{2}\) increases with increasing \(n\) and goes asymptotically to \(c_{s}^{2}=1\). This is the stiffest possible equation of state, where the speed of sound is equal to the speed of light.
mass configuration for the EOS with a mass term has a \(c_{s}^{2}\) close to its asymptotic limit for large pressures. For large values of \(n\) the speed of sound squared reaches the causal limit \(c_{s}^{2}=1\), where the speed of sound equals the speed of light.
In addition to \(c_{s}^{2}\), we also studied the compactness, in particular the maximum compactness \(C_{\rm max}\), of the resulting compact star configurations, defined as:
\[C=\frac{M}{R}, \tag{37}\]
with \(M\) and \(R\) representing the dimensionless quantities calculated by solving the TOV equations. In the following we demonstrate that \(C_{\rm max}\) can exceed the maximum value for quark stars (\(C_{\rm max}=0.271\)), as well as the compactness needed to place the photon orbit outside the star (\(C_{\rm max}=1/3\)) and that the maximum compactness goes asymptotically to the limit of causality \(C_{\rm max}=0.354\) for large values of the power \(n\).
The results for both cases of the EOS are depicted in figure 5. We find that for higher values of \(n\) the maximum compactness increases monotonically. The steepest slope of the
Figure 5: The maximum compactness for the \(V\propto\phi^{n}\) potential with a mass and a vacuum term plotted against \(n\). Furthermore different limits for compact objects are included in the plot. The red line represents the limit for the photon sphere being outside the radius of the star (\(C=1/3\)). The green line displays the maximum compactness for causal equations of state (\(C=0.354\)) (causal in the sense that the speed of sound cannot exceed the speed of light). The blue line represents the maximum compactness for selfbound stars with a conformal EOS where \(c_{s}^{2}=1/3\) (\(C=0.271\)).
curves is in the range of \(n=3\) up to \(n=10\). From that point on the maximum compactness goes asymptotically to an upper limit. This boundary is given by the limit of causality \(c_{s}^{2}=1\) with \(C=0.354\), where the speed of sound is equal to the speed of light.
This feature implies that the compactness can be greater than \(C=1/3\). The photon sphere for the Schwarzschild metric lies at \(R=3M\), with a corresponding minimal compactness of \(C=M/R=1/3\). According to Fig. 5 one realizes that the photon orbit can lie outside of the boson star for sufficiently large values of \(n\), producing a light ring. A similar feature has been seen for solitonic boson stars, where the maximum compactness also asymptotically reaches the one for the causal limit [43]. We conclude that solitonic boson stars as well as the self-interacting boson stars studied here can constitute black hole mimickers with a maximal compactness in excess of \(C=1/3\).
Figure 5 shows that boson stars can reach higher \(C_{\rm max}\) than expected. These values exceed those of standard neutron stars (without a phase transition typically \(C_{\rm max}=0.2\) to \(0.3\)) or other exotic compact objects like quark stars where \(C=0.271\) (see e.g. the discussions in [44; 45]). We find that the maximal compactness for \(n=4\) in the vacuum term case is \(C=0.271\), which is in agreement with the values for quark stars quoted above. This was expected since equation (28) for this value of \(n\) is \(\varepsilon=3p+{\rm const.}\), which is the equation of state for an ultrarelativistic ideal gas of quarks with nonvanishing vacuum energy, given by the MIT bag constant, or, more generally, a conformal EOS with \(c_{s}^{2}=1/3\).
## IV Summary and conclusions
We investigated the properties of boson stars with a generalized scalar potential by extending the power-law potential \(V\propto\lambda\phi^{n}\) to an arbitrary value of the exponent \(n\). We derived analytic expressions for the equation of state as input to the TOV equations for static spheres of fluids. We introduced two ways to stabilize the boson star configurations: by including a mass term for the bosons and by including a vacuum term in the scalar potential without a mass term. For both cases we calculated the mass-radius curves, the speed of sound, as well as their maximal compactness. We found three different categories of the mass-radius curves for the EOS with a mass term. They differ in their behaviour in the limit of small masses. The radius \(R\) for the case \(n=3\) goes to infinity when the mass \(M\) goes to zero. For the classical \(\phi^{4}\)-potential, i.e. for \(n=4\), the radius goes to a constant value for small
masses. The mass-radius curves for larger values of \(n\) resemble the mass-radius curves of selfbound stars as their mass vanishes when the radius goes to zero. Nevertheless, these compact star configurations are not selfbound because gravity is needed to stabilize them. However, we find that the mass-radius curves for the EOS with a vacuum term turn out to be like those of selfbound stars. The mass vanishes for small radii but in this case the EOS has a nonvanishing energy density for a vanishing pressure so that these compact star configurations are bound without gravity, i.e. they constitute selfbound star configurations.
For both types of EOSs studied, we find that the speed of sound squared \(c_{s}^{2}\) goes asymptotically to the limit of causality \(c_{s}^{2}=1\). The case of the EOS with a vacuum constant and the one with a mass term differ slightly in their absolute values for the same value of the power \(n\), while the former turns out to be always stiffer than the latter. Both cases show an increase of the speed of sound squared with the power \(n\), so that higher powers in \(n\) lead to a stiffer EOS. We also calculated the compactness for several values of the exponent \(n\) in the range between \(n=3\) and \(n=100\). We demonstrate that the maximal compactness \(C_{\rm max}\) increases continuously with the power \(n\). The highest increase occurs between \(n=3\) and \(n=10\). For higher values of \(n\) the maximal compactness goes asymptotically to the limit of causality for both types of EOS, in line with the increase of the speed of sound squared \(c_{s}^{2}\) with \(n\) seen before. The highest values of \(C_{\rm max}\) for a given power \(n\) are reached with the EOS with a vacuum term. Even for low values of \(n\) the compactness can already be close to the conformal limit, where \(c_{s}^{2}=1/3\). Specifically, for the EOS with a vacuum term and \(n=4\) we reproduce the result from the equation of state of an ultrarelativistic ideal gas of quarks with a vacuum term, the MIT bag model, with a compactness of \(C=0.271\). Furthermore, for the EOS with a mass term and the one with a vacuum term the maximal compactness can reach and even surpass a compactness of \(C=1/3\), marking the compactness needed to place the photon ring outside of the compact object. This makes boson stars described by a power-law scalar potential with \(n\gtrsim 20\) black hole mimickers.
In summary, we establish that by extending the scalar potential \(V=\lambda\phi^{n}\) to an arbitrary value of the exponent \(n\) we change the properties, in particular the compactness, of the resulting compact configurations drastically. We point out that we have investigated properties like mass and radius of the boson star independently of the mass of the scalar field boson \(m\) as well as the coupling strength \(\lambda\), which enables a generic approach. For a
given dark matter model of selfinteracting bosonic dark matter, our results can be rescaled to physical units by the simple rescaling laws given in the derivation of the EOS. By choosing a suitable mass scale and interaction strength one can obtain boson stars with masses and radii comparable to (supermassive) black holes and neutron stars, for example. Our results could now be used to study boson star mergers with the generalized scalar potential. By choosing different values of the power of the scalar potential one is able to study, within the same setting, boson star configurations with entirely different properties concerning the mass-radius relation, the speed of sound squared, and the maximal compactness up to black hole mimickers, and to delineate the impact on the pattern of emitted gravitational waves and possible signals for their detection in present and future gravitational wave observatories.
###### Acknowledgements.
The authors acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 'Strong-interaction matter under extreme conditions'- project number 315477589 - TRR 211.
|
2308.13888 | Neural Implicit Morphing of Face Images | Face morphing is a problem in computer graphics with numerous artistic and
forensic applications. It is challenging due to variations in pose, lighting,
gender, and ethnicity. This task consists of a warping for feature alignment
and a blending for a seamless transition between the warped images. We propose
to leverage coord-based neural networks to represent such warpings and
blendings of face images. During training, we exploit the smoothness and
flexibility of such networks by combining energy functionals employed in
classical approaches without discretizations. Additionally, our method is
time-dependent, allowing a continuous warping/blending of the images. During
morphing inference, we need both direct and inverse transformations of the
time-dependent warping. The first (second) is responsible for warping the
target (source) image into the source (target) image. Our neural warping stores
those maps in a single network dismissing the need for inverting them. The
results of our experiments indicate that our method is competitive with both
classical and generative models under the lens of image quality and
face-morphing detectors. Aesthetically, the resulting images present a seamless
blending of diverse faces not yet usual in the literature. | Guilherme Schardong, Tiago Novello, Hallison Paz, Iurii Medvedev, Vinícius da Silva, Luiz Velho, Nuno Gonçalves | 2023-08-26T14:12:19Z | http://arxiv.org/abs/2308.13888v4 | # Neural Implicit Morphing of Face Images
###### Abstract
**Face morphing** is one of the seminal problems in computer graphics, with numerous artistic and forensic applications. It is notoriously challenging due to pose, lighting, gender, and ethnicity variations. Generally, this task consists of a **warping** for feature alignment and a **blending** for a seamless transition between the warped images. We propose to leverage **coordinate-based neural networks** to represent such warpings and blendings of face images. During training, we exploit the smoothness and flexibility of such networks, by combining energy functionals employed in classical approaches without discretizations. Additionally, our method is **time-dependent**, allowing a continuous warping, and blending of the target images. During warping inference, we need both direct and inverse transformations of the time-dependent warping. The first is responsible for morphing the target image into the source image, while the inverse is used for morphing in the opposite direction. Our neural warping stores those maps in a single network due to its **inversible property**, dismissing the hard task of inverting them. The results of our experiments indicate that our method is competitive with both classical and data-based neural techniques under the lens of face-morphing detection approaches. Aesthetically, the resulting images present a seamless blending of diverse faces not yet usual in the literature.
## 1 Introduction
_Image warping_ is a continuous transformation mapping points of the image support to points in a second domain. The process of warping an image has applications ranging from correcting image distortions caused by lens or sensor imperfections [20] to creating new distortions for artistic or scientific purposes [3].
Image warping finds a special application in creating _image morphings_[8], where the warping is used to align corresponding features of the images. By gradually transforming one image into another using the aligned features, we can produce a smooth transition between them.
A smooth image warping is often necessary when we require smooth distortions. To achieve this, we can assume that the warping is defined by a parametric form. This allows us to use its derivatives to constrain the deformation, such as approximating it as a minimum of a given _variational problem_. Feature alignment can be specified using _landmarks_ to establish correlations between two images.
In this work, we use _coordinate-based neural networks_, which we call _neural warpings_, to parametrize image warpings. This approach enables us to calculate the derivatives in a closed form, eliminating the need for discretization. Additionally, we employ a time parameter, to represent smooth transitions. By incorporating the derivatives into the loss function, we can regularize the network and easily add
constraints by summing additional terms. To train a neural warping, we propose a _loss function_ that consists of two main terms. First, a _data constraint_ is used to ensure that the warping fits the given keypoint correspondences. Second, we _regularize_ the neural warping using the _thin-plate_ energy to minimize distortions.
We utilize neural warping to model a _time-dependent_ morphing between images, with a specific focus on face images. This employs neural warping to align the image features over time; then, we explore two approaches for defining continuous image morphing. Firstly, we can simply blend the resulting aligned image warpings using point-wise interpolation. Alternatively, we propose blending them in the _gradient-domain_ by utilizing their derivatives, which can be obtained in closed form. To achieve this, we introduce another neural network to represent the morphing and train it to satisfy the corresponding variational problem, which blends the image-warping gradients.
Our contributions can be summarized as follows:
* The introduction of a time-dependent **neural warping** which encodes in a single network the _direct_ and _inverse_ transformations needed to align two images along time. We use warping to transport the images and their derivatives from the initial states to intermediate times.
* The proposed network is **smooth,** both in space and time, which enables the use of its derivatives in the loss function. We exploit it to define an implicit regularization based on the classic _thin-plate_ energy which penalizes distortions. Since the warping is regularized over time, the features follow a minimum-energy path instead of a linear one, as in classical approaches. Moreover, we have the flexibility to add other constraints during training.
* The neural warping model is **compact**, as it can be represented by small networks. In our experiments, we achieved accurate warping by employing a network with a single hidden layer with \(128\) neurons.
* We blend the resulting neural warpings to define a time-dependent **smooth morphing** of images, distinguishing it from current methods that focus on a single blend. Furthermore, we use a neural network, which we call **neural morphing**, to learn the blending. This takes into account the feature alignment provided by the warping and trains the network in the gradient domain.
## 2 Related Works
The first face morphing algorithms were simple _cross-dissolves_, i.e., pixel interpolation between the target images [26]. However, the resulting morphings are substandard unless the images are aligned, resulting in artifacts. To overcome this, _mesh-based_ alignment was employed by researchers before the interpolation stage to solve these issues, shifting the complexity from the interpolation to the image alignment. Beier and Neely [1] further refined the process using line correspondences and a user interface to align facial features. Liao et al. [11] exploited halfway domains, well-known _thin-plate_ splines, and _structural similarity_ to create a discrete vector field to warp the images. Other morphing approaches are listed by [8].
The above techniques are landmark-based morphings [4], and our work fits this category. However, there is one crucial distinction: our method is _continuous_ both in space and time. Furthermore, unlike traditional approaches that operate directly on pixel values, our method operates on a _smooth representation_ of the underlying images. Therefore, we eliminate the need for interpolation and image resampling. Another advantage is that our method provides the warping _derivatives in closed-form_ through automatic differentiation. This allows for efficient gradient computation, facilitating training and analysis. A third distinction lies in incorporating the time variable as an input to the neural warping. Combined with the advantages above, this enables the creation of continuous, smooth, and compact warpings. Furthermore, this approach allows us to constrain the landmark paths over time by minimizing distortions, unlike classical interpolating methods.
Recently, _generative-adversarial networks_ (GANs) have been tailored for face morphing. These approaches create a latent space of images. Thus, the morphings are created by interpolating the latent codes. In this category, StyleGAN 2 [9] produces morphings virtually indistinguishable from real faces. Recently, _diffusion models_ have been exploited [18]; however, there are still challenges when it comes to state-of-the-art (SOTA) face recognition models [14]. GAN and diffusion models use a _discrete_ representation of the data and rely on interpolations. While the interpolated representations
blend the target images when properly calibrated, they lack explicit user control and can be challenging to interpret. Our approach uses _coordinate-based neural networks_ to represent the image morphings continuous in space and time, capitalizing on their smoothness during the warping/blending training.
This aspect distinguishes our method from data-based neural networks and classical approaches, which rely on discrete pixel values as input. Our experiments (Section 5) demonstrate that our process yields comparable results to StyleGAN and traditional morphing methods.
From an artistic perspective, these approaches yield means to create believable faces of non-existing people. However, it also presents serious security issues, facilitating the forgery of documents and digital media for extortion purposes. Such misuses of face-morphing tools are known in the literature as **face-morphing attacks** and have also attracted the attention of the biometrics research community, leading to a series of works on detecting such attacks expediently and efficiently.
## 3 Background and Notation
In this work, we opt for representing an _image_ by a function \(\mathbf{I}:\Omega\subset\mathbb{R}^{2}\to\mathcal{C}\), where \(\Omega\) is the image _support_ and \(\mathcal{C}\) is the _color space_, and parametrize it using a (coordinate-based) neural network \(\mathbf{I}_{\beta}:\mathbb{R}^{2}\to\mathcal{C}\) with parameters \(\beta\). To optimize the _neural image_ \(\mathbf{I}_{\beta}\) such that it approximates \(\mathbf{I}\), we can simply use the loss function \(\mathcal{F}(\beta)=\int_{\Omega}\left(\mathbf{I}-\mathbf{I}_{\beta}\right)^{2}dx\). This work explores _coordinate-based neural networks_ to transform _neural images_ using novel neural _warping_ and _morphing_ approaches.
Along the text, we assume that a coordinate-based neural network is a _sinusoidal_ multilayer perceptron (MLP) \(f_{\theta}(p):\mathbb{R}^{n}\to\mathbb{R}^{m}\) which is defined as the composition \(f_{\theta}(x)\!=\!W_{d+1}\circ f_{d}\circ\cdots\circ f_{1}(x)+b_{d+1}\) of \(d\)_sinusoidal layers_\(f_{i}(x_{i})\!=\!\sin(W_{i}x_{i}+b_{i})\!=\!x_{i+1}\), where \(W_{i}\in\mathbb{R}^{n_{i+1}\times n_{i}}\) are the weight matrices, and \(b_{i}\!\in\!\mathbb{R}^{n_{i+1}}\) are the biases. The union of these parameters defines \(\theta\). The integer \(d\) is the _depth_ of \(f_{\theta}\) and the dimensions \(n_{i}\) are the layers _widths_.
The sinusoidal MLP \(f_{\theta}\) is a smooth function because its layers are composed of smooth maps, and we can compute its derivatives in closed form using automatic differentiation. This property plays an important role in our method because it allows us to consider the network derivatives for implicit regularization of the desired warpings and morphings in the underlying neural image.
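As an illustration, a minimal PyTorch sketch of such a sinusoidal MLP is given below; the frequency factor \(\omega_{0}\) and the initialization bounds follow the common SIREN recipe of [24] and are illustrative choices rather than values prescribed by this section.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One sinusoidal layer f_i(x) = sin(W_i x + b_i), with a frequency scale w0."""
    def __init__(self, n_in, n_out, w0=30.0, first=False):
        super().__init__()
        self.linear, self.w0 = nn.Linear(n_in, n_out), w0
        with torch.no_grad():
            bound = 1.0 / n_in if first else (6.0 / n_in) ** 0.5 / w0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

class SinusoidalMLP(nn.Module):
    """f_theta = W_{d+1} o f_d o ... o f_1 + b_{d+1}; smooth in all of its inputs."""
    def __init__(self, n_in=3, hidden=128, n_out=2, depth=1):
        super().__init__()
        layers = [SineLayer(n_in, hidden, first=True)]
        layers += [SineLayer(hidden, hidden) for _ in range(depth - 1)]
        self.body = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, n_out)

    def forward(self, x):
        return self.head(self.body(x))

# e.g. a warping network T: (x, y, t) -> (x', y') with a single hidden layer
T = SinusoidalMLP(n_in=3, hidden=128, n_out=2, depth=1)
```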
## 4 Neural Morphing
This section introduces the _neural morphing_ of two images. Roughly speaking, it consists of a _neural warping_ to align the image features and a _neural blending_ of the resulting warped images.
Specifically, let \(\mathbf{I}_{0},\mathbf{I}_{1}:\mathbb{R}^{2}\to\mathcal{C}\) be two neural images, we propose to represent their _neural morphing_ using a neural network \(\mathbf{I}:\mathbb{R}^{2}\times[0,1]\to\mathcal{C}\) subject to \(\mathbf{I}(\cdot,i)=\mathbf{I}_{i}(\cdot)\). Thus, for each \(t\) we have an image \(\mathbf{I}(\cdot,t)\), and varying \(t\) results in a video interpolating \(\mathbf{I}_{i}\). To define the morphing \(\mathbf{I}\), we take two common steps. First, we construct a transformation (_warping_) to align the corresponding _features_ of \(\mathbf{I}_{i}\) along the time. Then, the morphing is obtained by blending the resulting aligned warped images.
For the warping, we use pairs of _landmarks_\(\{p_{i},q_{i}\}\) sampled from the domains of \(\mathbf{I}_{0}\) and \(\mathbf{I}_{1}\) providing feature correspondences. Then, we seek a warping \(\mathbf{T}\!:\!\mathbb{R}^{2}\!\times\![-1,1]\!\to\!\mathbb{R}^{2}\) satisfying the _data constraints_:
* The curves \(\mathbf{T}(p_{i},t)\) and \(\mathbf{T}(q_{i},t-1)\), with \(t\in[0,1]\), have \(p_{i}\) and \(q_{i}\) as end points.
* For each \(t\in(0,1)\), we require \(\mathbf{T}(p_{i},t)=\mathbf{T}(q_{i},t-1)\).
Thus, the values \(\mathbf{I}_{0}(p_{i})\) and \(\mathbf{I}_{1}(q_{i})\) can be blended along the path \(\mathbf{T}(p_{i},t)\). At points \(x\neq p_{i}\), we employ the well-known _thin-plate_ energy to force the transformations to be as affine as possible. The resulting network \(\mathbf{T}\) deforms \(\mathbf{I}_{i}\) along time, resulting in the _warpings_\(\mathbf{I}_{i}\!:\!\mathbb{R}^{2}\!\times\![0,1]\!\to\!\mathcal{C}\) defined as:
\[\mathbf{I}_{0}(x,t):=\mathbf{I}_{0}\big{(}\mathbf{T}(x,-t)\big{)}\text{ and }\mathbf{I}_{1}(x,t):=\mathbf{I}_{1}\big{(}\mathbf{T}(x,1-t)\big{)}. \tag{1}\]
Figure 1 illustrates the warpings \(\mathbf{I}_{i}\). Given a point \((x,t)\), to evaluate \(x\) in image \(\mathbf{I}_{i}\) we move it to time \(t=i\), for \(i=0,1\), which is done by \(x_{i}:=\mathbf{T}(x,i-t)\). Note that for \(x_{0}/x_{1}\) we need the inverse/direct transformations of \(\mathbf{T}\) (in red/blue) since it employs negative/positive time values. Then we obtain the image values by evaluating \(\mathbf{I}_{i}(x_{i})\). Moreover, if we have a vector \(v_{i}\) at \(x_{i}\) we can move it to \(x\) at time \(t\) by considering the matrix product \(v_{i}\cdot\text{Jac}(\mathbf{T}(x,i-t))\), where \(\text{Jac}\) is the Jacobian
operator. In Section 4.2, we use this property and consider \(v_{i}=\nabla\mathbf{I}_{i}(x_{i})\) to blend the images in the _gradient domain_.
We blend the resulting aligned warpings \(\mathbf{I}_{i}\) to define the desired morphing \(\mathbf{I}:\mathbb{R}^{2}\times[0,1]\to\mathcal{C}\). A naive blending would be an interpolation \(\mathbf{I}\!=\!\!(1\!-\!t)\mathbf{I}_{0}\!+\!t\mathbf{I}_{1}\). Section 4.2 presents the classic and our neural blending approaches.
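Assuming \(\mathbf{I}_{0}\), \(\mathbf{I}_{1}\), and \(\mathbf{T}\) are trained coordinate networks as above, a minimal sketch of how Eq. (1) and this naive blend would be evaluated in PyTorch is:

```python
import torch

# Minimal sketch of Eq. (1) and the naive linear blend, assuming I0 and I1 are
# trained neural images (R^2 -> colors) and T is the neural warping network
# (R^2 x R -> R^2); these are illustrative stand-ins for the trained models.
def warp_image(I, T, x, t, i):
    """Evaluates I_i(x, t) = I_i(T(x, i - t)); x is a (batch, 2) tensor, t a scalar."""
    t_col = torch.full((x.shape[0], 1), float(i) - t)
    return I(T(torch.cat([x, t_col], dim=-1)))

def linear_morph(I0, I1, T, x, t):
    return (1.0 - t) * warp_image(I0, T, x, t, 0) + t * warp_image(I1, T, x, t, 1)
```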
The following steps summarize the procedure of morphing two images \(\mathbf{I}_{i}\):
* Extract **key points**\(\{p_{i},q_{i}\}\) in the domains of \(\mathbf{I}_{0}\) and \(\mathbf{I}_{1}\), providing feature correspondence. Since we are assuming \(\mathbf{I}_{i}\) to contain faces, we use DLib [10; 21] for face landmark detection;
* Define and train the **neural warping**\(\mathbf{T}:\mathbb{R}^{2}\times\mathbb{R}\to\mathbb{R}^{2}\) to align the image key points while penalizing distortions using the thin-plate energy. This produces the warpings \(\mathbf{I}_{i}\) that align the features of the images \(\mathbf{I}_{i}\) along time;
* Blending the warpings \(\mathbf{I}_{i}\) results in the **morphing \(\mathbf{I}:\mathbb{R}^{2}\times\mathbb{R}\to\mathcal{C}\)** of \(\mathbf{I}_{i}\). We represent \(\mathbf{I}\) by a sinusoidal MLP and exploit the flexibility of such network to define a training framework for training \(\mathbf{I}\) in the _gradient domain_.
### Neural warping
This section presents the _neural warping_, a neural network that maps points of the image support without changing their colors. Precisely, we parametrize the warping using a sinusoidal MLP \(\mathbf{T}:\mathbb{R}^{2}\times[-1,1]\to\mathbb{R}^{2}\), and require the following properties:
* \(\mathbf{T}(\cdot,0)\) is the _identity_;
* For each \(t\in[-1,1]\), we have that \(\mathbf{T}_{-t}\) is the _inverse_ of \(\mathbf{T}_{t}\).
The corresponding deformation of an image \(\mathbf{I}:\mathbb{R}^{2}\to\mathcal{C}\) by \(\mathbf{T}\) is defined using \(\mathbf{I}(\cdot,t)=\mathbf{I}\circ\mathbf{T}(\cdot,-t)\), which uses the inverse \(\mathbf{T}_{-t}\) of \(\mathbf{T}_{t}\). That is one of the reasons we require the inverse property. In fact, if \(\mathbf{T}\) holds such a property, there is no need for inverting the _direct_ warp \(\mathbf{T}_{t}\), which is a hard task in general. For simplicity, we abuse the notation by calling \(\mathbf{I}\) a _warping_ of \(\mathbf{I}\). Note that at \(t=0\), we have \(\mathbf{I}(\cdot,0)=\mathbf{I}\) because \(\mathbf{T}(\cdot,0)=\text{Id}\). Thus, \(\mathbf{I}\) evolves the initial image \(\mathbf{I}\) along time.
We could avoid using the inverse map \(\mathbf{T}_{-t}\) by considering a discretization of \(\mathbf{I}\) given by a sampling \(\{\mathbf{I}_{ij}\}\) on a uniform grid \(\{x_{ij}\}\) of the image support. Then, \(\{\mathbf{I}_{ij}\}\) are samples of the desired image \(\mathbf{I}\circ\mathbf{T}_{-t}\) at points \(\{\mathbf{T}_{t}(x_{ij})\}\). However, this representation has the drawbacks of resampling \(\mathbf{I}\circ\mathbf{T}_{-t}\) on a new regular grid, which can result in _holes_, and of relying on interpolation techniques. Our warping \(\mathbf{T}\) avoids such problems since it is trained to fit the property \(\mathbf{T}_{t}\circ\mathbf{T}_{-t}=\text{Id}\) for all \(t\in[-1,1]\).
Observe that, for each \(t\), the map \(\mathbf{T}_{t}\) approximates a _diffeomorphism_: it is a sinusoidal MLP and therefore smooth, and its inverse is also given by a sinusoidal MLP \(\mathbf{T}_{-t}\) since \(\mathbf{T}_{t}\circ\mathbf{T}_{-t}=\text{Id}\).
#### 4.1.1 Loss function
Let \(\mathbf{I}_{0},\mathbf{I}_{1}:\mathbb{R}^{2}\to\mathcal{C}\) be two neural images and \(\{p_{i},q_{i}\}\) be the _source_ and _target_ points sampled from the supports of \(\mathbf{I}_{0}\) and \(\mathbf{I}_{1}\) providing their feature correspondences. Let \(\mathbf{T}:\mathbb{R}^{2}\times\mathbb{R}\to\mathbb{R}^{2}\) be a sinusoidal MLP; we train its parameters \(\theta\) such that \(\mathbf{T}\) approximates a warping aligning the key points \(p_{i}\) and \(q_{i}\) along time. For this, we consider the following loss functional:
\[\mathscr{L}(\theta)=\mathscr{W}(\theta)+\mathscr{D}(\theta)+\mathscr{S}(\theta). \tag{2}\]
Figure 1: Schematic illustration of the neural warping \(\mathbf{T}\) being used to aligning the initial images \(\mathbf{I}_{i}\)
Here \(\mathscr{W}(\theta)\), \(\mathscr{D}(\theta)\), and \(\mathscr{S}(\theta)\) are the _warping_, _data_, and _thin-plate_ constraints, respectively.
\(\mathscr{W}(\theta)\) requires the network \(\mathbf{T}\) to satisfy the identity and inverse properties of the warping definition.
\[\mathscr{W}(\theta)=\underbrace{\int\limits_{\mathbb{R}^{2}}\left\|\mathbf{ T}(x,0)-x\right\|^{2}dx}_{\text{Identity constraint}}+\underbrace{\int\limits_{\mathbb{R}^{2}\times\mathbb{R}}\left\|\mathbf{T} \Big{(}\mathbf{T}(x,t),-t\Big{)}-x\right\|^{2}dxdt}_{\text{Inverse constraint}}. \tag{3}\]
The _identity_ constraint forces \(\mathbf{T}_{0}=\text{Id}\) and the second term is the _inverse_ constraint, which asks for \(\mathbf{T}_{-t}\) to be the inverse of \(\mathbf{T}_{t}\) for all \(t\in\mathbb{R}\). Optimizing \(\mathscr{W}\) forces \(\mathbf{T}\) to hold the warping properties.
The _data constraint_ \(\mathscr{D}(\theta)\) is responsible for forcing the warping \(\mathbf{T}\) to move the source points \(p_{i}\) to the corresponding target points \(q_{i}\) such that their paths match along time. For this, we simply consider:
\[\mathscr{D}(\theta)=\sum_{i}\int\limits_{[0,1]}\left\|\mathbf{T}(p_{i},t)-\mathbf{T}(q_{i},1-t)\right\|^{2}dt \tag{4}\]
Observe that \(\mathscr{D}\) is asking for \(\mathbf{T}(p_{i},1)=q_{i}\) and \(\mathbf{T}(q_{i},-1)=p_{i}\) because, at the same time, \(\mathscr{W}\) is forcing the identity property. Moreover, it forces \(\mathbf{T}(p_{i},t)=\mathbf{T}(q_{i},1-t)\) along time; thus, as observed at the beginning of this section, this is the required property for the key points \(\{p_{i},q_{i}\}\) to be aligned along time. Since we are assuming \(\mathbf{T}\) to be a sinusoidal MLP, the resulting warping provides a smooth deformation that moves the source points to the target points.
However, \(\mathscr{D}\) does not add any restriction on points other than the source and target points. Even assuming \(\mathbf{T}\) to be smooth, the resulting warping still needs some regularization, such as minimizing distortions. For this, we propose a _regularization term_ which penalizes distortions of the transformations \(\mathbf{T}_{t}\) using the well-known _thin-plate_ energy [2; 7].
\[\mathscr{S}\left(\theta\right)=\int\limits_{\mathbb{R}^{2}\times\mathbb{R}} \left\|\mathbf{Hess}\left(\mathbf{T}\right)(x,t)\right\|_{F}^{2}dxdt \tag{5}\]
The constraint \(\mathscr{S}\) regularizes the warping function \(\mathbf{T}\) and works like a bending energy term, penalizing excessive deformation at each point \((x,t)\) based on the derivatives of \(\mathbf{T}\). This helps eliminate global effects that may arise from considering only the data and warping constraints. Note that we have incorporated the time variable into the thin-plate energy \(\mathscr{S}\). By using a sinusoidal MLP to represent \(\mathbf{T}\) and training it with the loss \(\mathscr{L}\), which includes this thin-plate regularization, we achieve robust warpings; see Figure 2 for an alignment between two images and the experiments in Sec. 5 for more detail.
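A possible PyTorch sketch of this training objective, with the integrals estimated by Monte Carlo sampling, is given below; the sampling ranges, the loop over landmarks, and the weight on the thin-plate term are illustrative choices rather than the exact settings of our experiments.

```python
import torch
from torch.func import jacrev, vmap

def warping_loss(T, src, dst, n_pts=1024, n_t=16):
    """Monte Carlo estimate of L = W + D + S (Eqs. 2-5).
    T: network mapping (x, y, t) -> (x', y'); src, dst: (k, 2) landmark tensors."""
    x = 2.0 * torch.rand(n_pts, 2) - 1.0                     # points in [-1, 1]^2
    t = 2.0 * torch.rand(n_pts, 1) - 1.0                     # times in [-1, 1]

    # Warping constraints (Eq. 3): identity at t = 0 and invertibility.
    zeros = torch.zeros_like(t)
    identity = (T(torch.cat([x, zeros], -1)) - x).pow(2).sum(-1).mean()
    fwd = T(torch.cat([x, t], -1))
    inverse = (T(torch.cat([fwd, -t], -1)) - x).pow(2).sum(-1).mean()

    # Data constraint (Eq. 4): landmark paths must agree along time.
    ts = torch.rand(n_t, 1)
    data = 0.0
    for p, q in zip(src, dst):
        a = T(torch.cat([p.expand(n_t, 2), ts], -1))
        b = T(torch.cat([q.expand(n_t, 2), 1.0 - ts], -1))
        data = data + (a - b).pow(2).sum(-1).mean()

    # Thin-plate term (Eq. 5): Frobenius norm of the space-time Hessian of T.
    hess = vmap(jacrev(jacrev(T)))(torch.cat([x, t], -1))    # (n_pts, 2, 3, 3)
    thin_plate = hess.pow(2).sum((-1, -2, -3)).mean()

    return identity + inverse + data + 1e-4 * thin_plate     # illustrative weight
```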
### Neural Blending
Let \(\mathbf{I}_{0},\mathbf{I}_{1}:\mathbb{R}^{2}\rightarrow\mathcal{C}\) be two neural images and \(\mathbf{T}:\mathbb{R}^{2}\times\mathbb{R}\rightarrow\mathbb{R}^{2}\) be a trained neural warping aligning their features. Specifically, the images \(\mathbf{I}_{i}\) are deformed by \(\mathbf{T}\) along time and Equation 1 gives the corresponding warpings \(\mathbf{I}_{i}(x,t)=\mathbf{I}_{i}\big{(}\mathbf{T}(x,i-t)\big{)}\). With \(\mathbf{I}_{i}\) in hands, we can blend them or their derivatives to construct a morphing \(\mathbf{I}:\mathbb{R}^{2}\times\mathbb{R}\rightarrow\mathcal{C}\) of the initial images \(\mathbf{I}_{i}\). A naive blending approach could be defined directly from \(\mathbf{I}_{i}\) by interpolating using \(\mathbf{I}(x,t)=(1-t)\mathbf{I}_{0}(x,t)+t\mathbf{I}_{1}(x,t)\). Thus, at \(t=0\) and \(t=1\), we obtain \(\mathbf{I}_{0}\) and \(\mathbf{I}_{1}\), respectively (See Fig 2).
However, only interpolating \(\mathbf{I}_{i}\) does not allow us to keep certain parts of one of the images unchanged during the morphing process. For instance, it may be desirable to preserve a region of \(\mathbf{I}_{0}\) (such as the
Figure 2: A neural warping \(\mathbf{T}\) continuously aligning two face images along time. We use \(\mathbf{T}\) to create their aligned warpings \(\mathbf{I}_{i}\). The morphing \((1-t)\mathbf{I}_{0}+t\mathbf{I}_{1}\) was sampled at \(t=0,0.25,0.5,0.75,1\).
face complement region) throughout the morphing. To address these issues, inspired by the _Poisson image editing_ technique [17], we propose to blend the images by solving a _boundary value problem_ in the domain \(\mathbb{R}^{2}\times\mathbb{R}\) to handle smooth animations and parameterize \(\mathbf{I}\) by a neural network.
We use the Jacobians \(\text{Jac}(\mathbf{I}_{i})\) of the warpings \(\mathbf{I}_{i}\) to train \(\mathbf{I}\) in the gradient domain. For convenience, we restrict the morphing support to \(S:=[-1,1]^{2}\times[0,1]\), with \([-1,1]^{2}\) representing the image domain and \([0,1]\) the time interval used to navigate between the images. Let \(\Omega\subset S\) be an open set used for blending \(\mathbf{I}_{i}\), such as the interior of the faces, and let \(\mathbf{I}^{*}:S\rightarrow\mathbb{R}\) be a known function on \(S-\Omega\) (it could be either \(\mathbf{I}_{0}\) or \(\mathbf{I}_{1}\)). Finally, let \(U\) be a matrix field obtained by blending \(\text{Jac}(\mathbf{I}_{i})\), for example, \(U=(1-t)\text{Jac}(\mathbf{I}_{0})+t\text{Jac}(\mathbf{I}_{1})\). A common way to extend \(\mathbf{I}^{*}\) to \(\Omega\) is by solving:
\[\min\int\limits_{\Omega}\|\text{Jac}(\mathbf{I})-U\|^{2}\,dxdt\text{ subject to }\mathbf{I}|_{S-\Omega}=\mathbf{I}^{*}|_{S-\Omega}. \tag{6}\]
We propose to use this variational problem to define a loss functional to train the parameters \(\theta\) of \(\mathbf{I}\).
\[\mathcal{U}(\theta)=\underbrace{\int\limits_{\Omega}\left\|\text{Jac}(\mathbf{I})-U\right\|^{2}dxdt}_{\mathcal{R}(\theta)}+\underbrace{\int\limits_{S-\Omega}\left(\mathbf{I}-\mathbf{I}^{*}\right)^{2}dxdt}_{\mathcal{B}(\theta)}. \tag{7}\]
The _cloning term_ \(\mathcal{R}(\theta)\) fits \(\mathbf{I}\) to the primitive of \(U\) in \(\Omega\), and \(\mathcal{B}(\theta)\) is a _boundary constraint_ fitting \(\mathbf{I}\) to \(\mathbf{I}^{*}\) in \(S-\Omega\). Thus, the resulting loss functional \(\mathcal{U}\) trains \(\mathbf{I}\) to clone the primitive of \(U\) into \(\mathbf{I}^{*}\) in \(\Omega\). In other words, this approach seamlessly blends the two images while preserving continuity across \(\partial\Omega\) (_seamless cloning_). Unlike traditional approaches that rely on direct pixel manipulation, seamless cloning operates on the image gradients. Here, we consider it directly in the training of \(\mathbf{I}\).
When the images \(\mathbf{I}_{i}\) represent faces and \(\mathbf{T}\) aligns their features, we can define \(\Omega\) as the path of the facial region over time. Specifically, let \(\Omega_{0}\) be the region containing the face in \(\mathbf{I}_{0}\), define \(\Omega\) by warping \(\Omega_{0}\) along time using \(\mathbf{T}\), i.e., \(\Omega=\cup_{t\in[0,1]}\mathbf{T}_{t}(\Omega_{0})\). Note that the deformation of \(\Omega_{0}\) uses the direct deformation \(\mathbf{T}_{t}\) while the warped image \(\mathbf{I}_{0}\) uses the inverse \(\mathbf{T}_{-t}\). The use of both inverse and direct deformations encoded in our neural warping avoids the need for additional processing to compute inverses at inference time. Finally, for each \(t\), \(\mathbf{T}\) aligns the faces \(\mathbf{I}_{i}\) in the region \(\mathbf{T}_{t}(\Omega_{0})\). Thus, \(\mathcal{U}\) trains \(\mathbf{I}\) to morph the face in \(\mathbf{I}_{0}\) into the face in \(\mathbf{I}_{1}\) while cloning the result to \(\mathbf{I}_{0}\) on \(S-\Omega\).
Besides choosing the matrix field \(U\) as a linear interpolation of the Jacobians \(\text{Jac}(\mathbf{I}_{i})\), we could simply choose \(U=\text{Jac}(\mathbf{I}_{1})\) and \(\mathbf{I}^{*}=\mathbf{I}_{0}\). Thus, the resulting loss function \(\mathcal{U}\) forces \(\mathbf{I}\) to _seamless clone_ the face in \(\mathbf{I}_{1}\) to the corresponding region of \(\mathbf{I}_{0}\). We call the first case an _averaged seamless cloning_.
Sometimes it is desirable to combine features of \(\mathbf{I}_{i}\). However, a linear interpolation of \(\text{Jac}(\mathbf{I}_{i})\) can lead to washing out of details. To avoid such problems, we apply the approach proposed in [17], which allows for mixing the features of both images. At each \((x,t)\), we retain the stronger of the variations in the warpings by choosing \(U=\text{Jac}(\mathbf{I}_{0})\) if \(\|\text{Jac}(\mathbf{I}_{0})\|>\|\text{Jac}(\mathbf{I}_{1})\|\), and \(U=\text{Jac}(\mathbf{I}_{1})\), otherwise. The corresponding loss function \(\mathcal{U}\) forces \(\mathbf{I}\) to learn a _mixed seamless clone_ of \(\mathbf{I}_{i}\). Figure 3 presents examples of neural blending using the end images of the morphing in Figure 2.
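The sketch below illustrates how the loss \(\mathcal{U}\) of Eq. (7) could be assembled for the averaged seamless cloning case; the helper `inside` (deciding membership in \(\Omega\)), the warped-image wrappers, and the choice \(\mathbf{I}^{*}=\mathbf{I}_{0}\) are illustrative assumptions, not part of a released implementation.

```python
import torch
from torch.func import jacrev, vmap

# Minimal sketch of the blending loss (Eq. 7), assuming trained networks I0, I1
# (neural images R^2 -> colors) and T (the neural warping), a morphing network M
# (R^3 -> colors) to be trained, and a user-supplied `inside(p)` boolean mask for
# points p = (x, y, t) in Omega.  Both regions are assumed to be sampled.
def warped(I, T, i):
    """The warped image as a function of a single point p = (x, y, t), via Eq. (1)."""
    def f(p):
        x, t = p[:2], p[2:]
        return I(T(torch.cat([x, float(i) - t])))
    return f

def blending_loss(M, I0, I1, T, inside, n_pts=2048):
    xy = 2.0 * torch.rand(n_pts, 2) - 1.0
    t = torch.rand(n_pts, 1)
    p = torch.cat([xy, t], -1)
    mask = inside(p)                                   # boolean, shape (n_pts,)

    # Averaged seamless cloning target: U = (1 - t) Jac(I0_warp) + t Jac(I1_warp).
    J0 = vmap(jacrev(warped(I0, T, 0)))(p)             # (n_pts, channels, 3)
    J1 = vmap(jacrev(warped(I1, T, 1)))(p)
    U = (1.0 - t)[..., None] * J0 + t[..., None] * J1

    Jm = vmap(jacrev(M))(p)
    cloning = (Jm - U)[mask].pow(2).sum((-1, -2)).mean()

    # Boundary constraint: match I* = I0 outside Omega.
    boundary = (M(p) - I0(xy))[~mask].pow(2).sum(-1).mean()
    return cloning + boundary
```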
## 5 Experiments and Discussions
In the experiments, we use small sinusoidal MLPs consisting of a single hidden layer with \(128\) neurons. This shows that our neural warping approach is both compact and robust for representing time-dependent warpings. The initialization of MLP parameters follows the definitions in [24].
### Quantitative comparisons
The quantitative comparison results show that our method is on par with state-of-the-art face morphing generation approaches when compared using morphing-attack detection techniques.
We use the state-of-the-art morphing attack detection (MAD) algorithm proposed by Medvedev et al. [14]. The target images were obtained from the FRLL dataset [5]. We also obtained morphed images from the FRLL-Morphs dataset to use as baseline in our comparisons [22; 23]. MAD receives a face image and outputs the probability of it being a Bona-fide face, i.e., without morphing. For more details on the morphing detection approach see [14].
Figure 4 (left) displays the distribution of the Bona-fide probability histogram obtained from the morphing-attack detector. In these experiments, we selected time instants \(t=0.2,0.4,0.6,0.8\) and employed three blending methods: linear interpolation, Poisson seamless mix, and Poisson seamless cloning. Notably, the probability distribution mean is centered around \(0.7\), with a standard deviation of approximately \(0.15\). This indicates that our morphing model has strong performance in eluding the morphing attack detection model. Compared with other SOTA and classic face-morphing generation methods, our approach presents comparable performance, as shown in Figure 4 (right). The images for comparison were created using StyleGAN 2 [9], WebMorph [6; 25], FaceMorpher [19], and OpenCV [13].
Figure 4: Histogram of bona-fide probability values for faces created using our approach (left), and box-and-whiskers plot of said values for each blending type compared to other methods in the literature (right).
Figure 3: Comparing different neural blendings of two face images \(\text{I}_{i}\), the end images of Figure 2. Line 1/2 shows examples of cloning the half-space/face region of \(\text{I}_{1}\) into \(\text{I}_{0}\). In Column 1 we do not align the image landmarks, the remaining columns use our neural warping for the alignment. Column 2 considers \(U=\text{Jac}(\text{I}_{1})\) and \(\text{I}^{*}=\text{I}_{0}\) in the neural blending, Column 3 applies the mixed seamless cloning, and Column 4 employs the normal seamless clone.
### Qualitative comparisons
We provide a qualitative comparison between our morphing method and the aforementioned techniques. To conduct this comparison, we once again employ the target face images shown in Figure 2 as the basis for generating the morphs. Figure 5 showcases the resulting morphings. In the first row, we present our morphings, at time \(t=0.5\), using a linear interpolation and blendings in the gradient domain with mixed/average seamless cloning. The second row showcases morphings generated using OpenCV, WebMorph, and StyleGAN 2. Notably, our blending technique achieves better feature alignment compared to OpenCV and WebMorph. Additionally, it is observed that StyleGAN removes skin texture details present in both target images and introduces blur to the pupil region.
Our approach also handles target images with varying genders and ethnicities, resulting in high-quality morphings, as shown in Figure 6. It is noteworthy how our method learns effective alignments, enabling seamless blendings using both linear interpolation (second column) and mixed seamless cloning (third column) to preserve the details present in the target images. Typically, morphings between different genders and ethnicities are challenging due to difficulties in landmark alignment, and blending different skin colors and textures. However, our approach proves to be flexible and capable of overcoming these challenges, yielding high-quality morphings.
**Hardware used.** The target images and morphing networks were trained using an NVIDIA GeForce RTX 3090 GPU, with 24GB of memory. The system has an AMD Ryzen Threadripper PRO 5965WX CPU and 256GB of DDR4 memory.
**Ethical issues.** One of the main issues with face-morphing in general, including our approach, is that it may be employed to create fake appearances for official purposes or defamation of individuals. This naturally raises several concerns both in the community and among the authors. We hope that by exposing our method to the community, we ensure that other colleagues can create detection models to counteract such threats.
Figure 5: Example morphings using different methods. The top row exhibits different blending methods, starting with a linear interpolation of the target images (end images from Figure 2) on the left, followed by seamless mix in the middle, and seamless cloning on the right. The bottom row displays morphings created using OpenCV on the left, WebMorph in the middle, and StyleGAN 2 on the right. It is worth mentioning that the morphing produced by FaceMorpher was very similar to that of OpenCV, hence we have chosen not to present it.
**Limitations.** Our method builds a functional representation of the warping to align the features of two neural images. It encodes the direct and inverse transformations required during the morphing procedure in a single network. Therefore, requesting the learning of a non-invertible transformation may lead to inconsistencies. For example, if a particular region of the image collapses during the warping, it cannot be inverted. Nevertheless, we can still represent such a transformation with the inverse part of the neural warping or using its direct counterpart.
## 6 Conclusions
We proposed an approach for morphing face images by leveraging coordinate-based neural networks. We exploited their smoothness to combine energy functionals that warp and blend target images seamlessly, without the need for derivative discretizations.
Our method ensures continuity in both spatial and temporal coordinates, resulting in a visually pleasing and natural transition between target images. By operating on a smooth representation of the underlying images, we eliminate the need for pixel-level interpolation and resampling, preserving the integrity of fine details and textures. The use of coordinate-based neural networks allows us to capture and encode the intricate spatial relationships between facial features, enabling precise and accurate alignment during the morphing process. The seamless blending of the target images is achieved through the integration of energy functionals, ensuring a harmonious fusion of their respective attributes. The resulting morphs exhibit a high level of visual fidelity and maintain the overall structure and appearance of the original faces, even when morphing between different genders or ethnicities. Our approach offers a versatile and robust framework for face morphing, opening up possibilities for applications in computer graphics, animation, and digital entertainment.
In future works, we aim to investigate initialization strategies specifically designed for sinusoidal MLPs in the context of face morphing and image warping. We hypothesize that a customized initialization scheme could lead to faster training and the ability to encode the morphing using smaller networks. By achieving these objectives, we can achieve real-time training of face morphings while utilizing fewer computational resources. Additionally, we plan to leverage our findings to enhance the performance of morphing-attack detector models. By training these models with insights gained from our work, we can enhance their ability to recognize and identify our morphings, thereby bolstering their effectiveness in security applications. Additionally, we can extend our method to operate on tridimensional morphing and interpolation of neural implicit surfaces [16; 12; 15; 24] to improve the current state-of-the-art in this topic as well.
Figure 6: Morphings between persons of different ethnicities (top row) and genders (bottom row). The left and right columns show the target faces. The middle columns show morphings using linear interpolation (second column from the left), and seamless mix (third column from the left). In both morphings we employed the neural warping for feature alignment.
## References
* Beier and Neely [1992] T. Beier and S. Neely. Feature-based image metamorphosis. _ACM SIGGRAPH computer graphics_, 26(2):35-42, 1992.
* Bookstein [1989] F. L. Bookstein. Principal warps: Thin-plate splines and the decomposition of deformations. _IEEE Transactions on pattern analysis and machine intelligence_, 11(6):567-585, 1989.
* Carroll et al. [2010] R. Carroll, A. Agarwala, and M. Agrawala. Image warps for artistic perspective manipulation. In _ACM SIGGRAPH 2010 papers_, pages 1-9. 2010.
* Damer et al. [2023] N. Damer, M. Fang, P. Siebke, J. N. Kolf, M. Huber, and F. Boutros. Mordiff: Recognition vulnerability and attack detectability of face morphing attacks created by diffusion autoencoders, 2023. URL [https://arxiv.org/abs/2302.01843](https://arxiv.org/abs/2302.01843).
* DeBruine and Jones [2017] L. DeBruine and B. Jones. Face research lab london set, May 2017.
* DeBruine [2018] L. M. DeBruine. Webmorph (version v0. 0.0. 9001). zenodo, 2018.
* Glasbey and Mardia [1998] C. A. Glasbey and K. V. Mardia. A review of image-warping methods. _Journal of applied statistics_, 25(2):155-171, 1998.
* Gomes et al. [1999] J. Gomes, L. Darsa, B. Costa, and L. Velho. _Warping & morphing of graphical objects_. Morgan Kaufmann, 1999.
* Karras et al. [2020] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila. Analyzing and improving the image quality of stylegan. In _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8107-8116. Computer Vision Foundation / IEEE, 2020. doi: 10.1109/CVPR42600.2020.00813.
* Kazemi and Sullivan [2014] V. Kazemi and J. Sullivan. One millisecond face alignment with an ensemble of regression trees. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2014.
* Liao et al. [2014] J. Liao, R. S. Lima, D. Nehab, H. Hoppe, P. V. Sander, and J. Yu. Automating image morphing using structural similarity on a halfway domain. _ACM Transactions on Graphics (TOG)_, 33(5):1-12, 2014.
* Liu et al. [2022] H.-T. D. Liu, F. Williams, A. Jacobson, S. Fidler, and O. Litany. Learning smooth neural functions via lipschitz regularization. In _ACM SIGGRAPH 2022 Conference Proceedings_, SIGGRAPH '22. Association for Computing Machinery, 2022. doi: 10.1145/3528233.3530713.
* Mallick [2016] S. Mallick. Face morph using opencv, 2016. URL [https://learnopencv.com/face-morph-using-opencv-cpp-python/](https://learnopencv.com/face-morph-using-opencv-cpp-python/).
* Medvedev et al. [2023] I. Medvedev, F. Shadmand, and N. Gonçalves. MorDeephy: Face morphing detection via fused classification. In _Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM_, pages 193-204. INSTICC, SciTePress, 2023. ISBN 978-989-758-626-2. doi: 10.5220/0011606100003411.
* Novello et al. [2022] T. Novello, G. Schardong, L. Schirmer, V. da Silva, H. Lopes, and L. Velho. Exploring differential geometry in neural implicities. _Computers & Graphics_, 108:49-60, 2022. ISSN 0097-8493. doi: [https://doi.org/10.1016/j.cag.2022.09.003](https://doi.org/10.1016/j.cag.2022.09.003). URL [https://www.sciencedirect.com/science/article/pii/S0097849322001649](https://www.sciencedirect.com/science/article/pii/S0097849322001649).
* Novello et al. [2023] T. Novello, V. da Silva, G. Schardong, L. Schirmer, H. Lopes, and L. Velho. Neural implicit surface evolution, 2023.
* Perez et al. [2003] P. Perez, M. Gangnet, and A. Blake. Poisson image editing. In _ACM SIGGRAPH 2003 Papers_, pages 313-318. 2003.
* Preechakul et al. [2022] K. Preechakul, N. Chatthee, S. Wizadwongsa, and S. Suwajanakorn. Diffusion autoencoders: Toward a meaningful and decodable representation. In _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2022.
* Quek et al. [2015] A. Quek, J. v. Loenen, E. Goode, and G. Rakholia. Face morpher, 2015. URL [https://github.com/yaopang/FaceMorpher](https://github.com/yaopang/FaceMorpher).
* Roberto et al. [2020] R. Roberto, D. Perazzo, J. P. Lima, V. Teichrieb, J. P. Quintino, F. Q. da Silva, A. L. Santos, and H. Pinho. Using local refinements on 360 stitching from dual-fisheye cameras. In _VISIGRAPP (5: VISAPP)_, pages 17-26, 2020.
* Sagonas et al. [2016] C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic. 300 faces in-the-wild challenge: database and results. _Image and Vision Computing_, 47:3-18, 2016. ISSN 0262-8856. doi: [https://doi.org/10.1016/j.imavis.2016.01.002](https://doi.org/10.1016/j.imavis.2016.01.002). URL [https://www.sciencedirect.com/science/article/pii/S0262885616000147](https://www.sciencedirect.com/science/article/pii/S0262885616000147). 300-W, the First Automatic Facial Landmark Detection in-the-Wild Challenge.
* Sarkar et al. [2020] E. Sarkar, P. Korshunov, L. Colbois, and S. Marcel. Vulnerability analysis of face morphing attacks from landmarks and generative adversarial networks. _arXiv preprint_, Oct. 2020. URL [https://arxiv.org/abs/2012.05344](https://arxiv.org/abs/2012.05344).
* Sarkar et al. [2022] E. Sarkar, P. Korshunov, L. Colbois, and S. Marcel. Are GAN-based morphs threatening face recognition? In _2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, pages 2959-2963, 2022. doi: 10.1109/ICASSP43922.2022.9746477. URL [https://doi.org/10.1109/ICASSP43922.2022.9746477](https://doi.org/10.1109/ICASSP43922.2022.9746477).
* Sitzmann et al. [2020] V. Sitzmann, J. Martel, A. Bergman, D. Lindell, and G. Wetzstein. Implicit neural representations with periodic activation functions. _Advances in Neural Information Processing Systems_, 33, 2020.
* Tiddeman et al. [2006] B. Tiddeman, M. Stirrat, and D. Perrett. Towards realism in facial prototyping: results of a wavelet mrf method. In _Proc. Theory and Practice of Computer Graphics_, volume 1, pages 20-30, 2006.
* Wolberg [1998] G. Wolberg. Image morphing: a survey. _The visual computer_, 14(8):360-372, 1998.
# Neural Implicit Morphing of Face Images
- Supplementary Material -
Guilherme Schardong
U Coimbra
Tiago Novello
IMPA
Daniel Perazzo
IMPA
Hallison Paz
IMPA
Iurii Medvedev
U Coimbra
Luiz Velho
IMPA
Nuno Goncalves
U Coimbra
## 1 Ablation of Parameters
This section presents experiments with neural face morphing using different values for the network's width. For these experiments, we employed the landmarks provided with the FRLL dataset [1]. Additionally, we used the initialization procedure proposed in [4].
### Neural Warping
For the neural warping network, we varied the width of the only hidden layer from 32 to 128 nodes, in increments of 32 neurons. Figure 1 shows sample reconstructions using mixed seamless cloning at varying values of \(t\) and directions of morphing (either I\({}_{0}\) to I\({}_{1}\) or I\({}_{1}\) to I\({}_{0}\)).
We've employed a single hidden layer for all our experiments, since it provided an adequate trade-off between training speed and morphing performance. Our remaining experiments used a network with hidden layer width of 128 nodes. This configuration enhanced training convergence when aligning the landmarks on intermediate times.
## 2 Additional Results
This section presents additional results not shown in the main paper due to space constraints. We show additional variations of ethnicities and genders to illustrate the flexibility of our approach (Sec. 2.1). Furthermore, we show experiments using automatic face landmark detection via DLib (Sec. 2.2) and how the morphings may be improved with additional manual adjustments to these landmarks (Sec. 2.2.1).
### Variations of Gender and Ethnicity
One of the main limitations of the current state-of-the-art methods is generating credible morphings of people of different genders and ethnicities. Figure 2 shows results of these morphings using our approach. We employed the landmarks provided with the FRLL dataset for the neural warping.
### Employing Automatic Landmark Detection
The landmarks used for the experiments were provided with the dataset, however, other approaches for facial landmark detection may be employed and attain equally good results. For the experiments below, we used DLib with the 68 landmark model [2; 3]. Figure 3 shows the landmarks overlaid on sample images of the FRLL dataset, while Figure 4 shows the results of morphings using our proposed neural warping with said landmarks.
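For reference, a minimal DLib snippet for extracting the 68 landmarks could look as follows; the model file is the standard pretrained predictor distributed separately by DLib and must be downloaded before use.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_landmarks(image):
    """Return a (68, 2) array of landmark coordinates for the first detected face.

    `image` is a numpy array, e.g. loaded with dlib.load_rgb_image(path).
    """
    faces = detector(image, 1)          # upsample once to help with small faces
    shape = predictor(image, faces[0])
    return np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
```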
#### 2.2.1 Manual Doctoring of Landmarks
In addition to automatic landmark detection, we may also manually edit landmarks to constrain other image areas during the warping, such as clothing, and hairline. This approach improves the morphing on areas other than the face. Figure 5 shows a visual comparison between morphing using only automatic landmark detection and using manual landmark adjustments.
|
2303.17744 | Detecting Nanometer-Scale New Forces with Coherent Neutron Scattering | Significant effort has been devoted to searching for new fundamental forces
of nature. At short length scales (below approximately 10 nm), the strongest
experimental constraints come from neutron scattering from individual nuclei in
gases. The leading experiments at longer length scales instead measure forces
between macroscopic test masses. We propose a hybrid of these two approaches:
scattering neutrons off of a target that has spatial structure at nanoscopic
length scales. Such structures will give a coherent enhancement to small-angle
scattering, where the new force is most significant. This can considerably
improve the sensitivity of neutron scattering experiments for new forces in the
0.1 - 100 nm range. We discuss the backgrounds due to Standard Model
interactions and a variety of potential target structures that could be used,
estimating the resulting sensitivities. We show that, using only one day of
beam time at a modern neutron scattering facility, our proposal has the
potential to detect new forces as much as two orders of magnitude beyond
current laboratory constraints at the appropriate length scales. | Zachary Bogorad, Peter W. Graham, Giorgio Gratta | 2023-03-30T23:34:10Z | http://arxiv.org/abs/2303.17744v2 | # Detecting Nanometer-Scale New Forces with Coherent Neutron Scattering
###### Abstract
Significant effort has been devoted to searching for new fundamental forces of nature. At short length scales (below approximately \(10\) nm), the strongest experimental constraints come from neutron scattering from individual nuclei in gases. The leading experiments at longer length scales instead measure forces between macroscopic test masses. We propose a hybrid of these two approaches: scattering neutrons off of a target that has spatial structure at nanoscopic length scales. Such structures will give a coherent enhancement to small-angle scattering, where the new force is most significant. This can considerably improve the sensitivity of neutron scattering experiments for new forces in the \(0.1-100\) nm range. We discuss the backgrounds due to Standard Model interactions and a variety of potential target structures that could be used, estimating the resulting sensitivities. We show that, using only one day of beam time at a modern neutron scattering facility, our proposal has the potential to detect new forces as much as four orders of magnitude beyond current laboratory constraints at the appropriate length scales.
###### Contents
* I Introduction
* II Overview
* III Scattering from Single Materials
* III.1 Possible Target Materials
* III.2 Separating Scattering Contributions
* III.3 Sensitivity Projections
* IV Scattering from Two Materials
* IV.1 Possible Target Materials
* IV.2 Sensitivity Projections
* V Conclusion
* A Neutron Scattering from Atoms
* A.1 Nuclear Scattering
* A.2 Electromagnetic Scattering
* A.2.1 The Scattering
* A.2.2 The Scattering
* A.2.3 New Force Scattering from Atoms
* A.3 Scattering from Structured Materials
* A.3.1 Structure Factors of Simple Geometries
* A.3.2 Coherent and Incoherent Scattering
* A.3.3 Structure Factors of Nanotube Forests
* D Separating Scattering Contributions with Two-Material Targets
* E Multiple Scattering Events
* F Thermal Effects
* G Atomic Interaction Effects
* G.1 Interactions at the Grain Surface
* G.2 Interactions Among Noble Atoms
* H Instrument Parameters
* H.1 Target Geometry
* H.2 Neutron Beam Parameters
* H.3 X-Ray Beam Parameters
## I Introduction
While the Standard Model has been fantastically successful at describing much of the observable universe, several outstanding questions--the nature of dark matter, the Higgs hierarchy problem, and the quantum description of gravity, to name a few--render it necessarily incomplete. Theories that attempt to resolve these problems generally involve the addition of new fields, often leading to a variety of new associated phenomenology. In particular, though the Standard Model includes only four fundamental forces, extensions to it can include a range of additional interactions.
One way that such new forces can arise is via additional gauged \(U(1)\) symmetries, such as baryon (\(B\)) or baryon minus lepton (\(B-L\)) number [1; 2; 3; 4]. The resulting gauge bosons will generically mix with the \(Z\) boson, leading to a force proportional to some combination of baryon number, lepton number, and hypercharge. Alternatively, new finite-range forces appear in many models with compact extra dimensions [5; 6; 7], for example due to messenger fields living in the bulk of such extra dimensions. Other motivations for new forces include proposals to resolve the cosmological constant problem [8], vector models of dark matter [9; 10] and various new scalar fields [11]. More comprehensive reviews of these various motivations can be found in, for example, [12; 13].
In this work, we consider new forces independent of the spins of the interacting particles. Such interactions are generally described by a Yukawa potential [14],
\[V(\mathbf{r})=-\frac{g^{2}Q_{1}Q_{2}}{4\pi|\mathbf{r}|}e^{-\mu|\mathbf{r}|} \tag{1}\]
with \(Q_{1,2}\) the charges of the two interacting particles, separated by \(\mathbf{r}\), with \(g\) the coupling to the new force mediator and \(\mu\) the mediator's mass.
In most of this work, we will further assume for simplicity that the new force couples to mass, such that the charges of the two particles are simply their respective masses. Since such a new force acts as a short-range modification to gravity, it is conventional to parametrize the new force's strength by its ratio \(\alpha\) to that of gravity: \(\alpha=g^{2}m_{\rm Pl}^{2}/(4\pi)\) with \(m_{\rm Pl}\) the Planck mass. Extending our discussion to forces coupled to other charges (e.g. baryon number) is generally simply a matter of rescaling, so long as the interaction remains a Yukawa potential (1).
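For concreteness, the conversions implied by this parametrization are easy to script; the following minimal sketch (our own illustration, not code from this work) translates a coupling \(g^{2}\) in GeV\({}^{-2}\) into the gravity-relative strength \(\alpha\), and a mediator mass in eV into a force range in nm:

```python
import numpy as np

# Minimal sketch (illustrative only): unit conversions for the parametrization above,
# using hbar*c = 197.327 eV nm and m_Pl = 1.22089e19 GeV in natural units.
HBARC_EV_NM = 197.327      # hbar * c in eV * nm
M_PL_GEV = 1.22089e19      # Planck mass in GeV

def alpha_from_g2(g2_per_gev2):
    """alpha = g^2 m_Pl^2 / (4 pi) for a mass-coupled Yukawa force, with g^2 in GeV^-2."""
    return g2_per_gev2 * M_PL_GEV**2 / (4.0 * np.pi)

def range_nm(mu_ev):
    """Force range lambda = 1/mu in nm for a mediator mass mu in eV."""
    return HBARC_EV_NM / mu_ev

print(alpha_from_g2(1e-24))   # ~1.2e13, i.e. ~10^13 times gravity
print(range_nm(10.0))         # ~20 nm range for a 10 eV mediator
```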
In this work, we will be focused on mediator masses around \(10^{0}-10^{4}\) eV, corresponding to force ranges \(\lambda\) of roughly \(10^{-2}-10^{2}\) nm (we will use \(\lambda=1/\mu\); note that some sources instead define \(\lambda=2\pi/\mu\)). This regime is uniquely interesting because it lies around the boundary of two dramatically distinct approaches to new force detection: macroscopic test masses, and neutron scattering. Longer-range interactions can be effectively detected by measuring forces between macroscopic objects [15; 16; 17; 18; 19; 20; 21], with the collection of atoms in one test mass seeing the coherently-summed potential of the other. Conversely, shorter-range interactions are typically probed through the angular distribution of neutrons scattered from a target [22; 23; 24; 25]. These experiments rely on the drastically smaller charge radius of the neutron compared to atomic matter, reducing the backgrounds from electromagnetic interactions and Casimir forces [26] which plague force measurements below the \(\mu\)m scale.
This work's proposal is a combination of these two approaches, using spatial structure such that individual neutrons scatter coherently from collections of many atoms. Such an approach should allow for significantly superior sensitivity
to new forces at these length scales. A different technique for detecting new forces with \(\lambda\gtrsim 1\) nm has recently been proposed in [27].
The remainder of this work is organized as follows: We begin by presenting a summary of our proposal in Section II. We then specialize to the relatively simple version of our approach that can be performed on targets consisting of only a single element in Section III, before addressing targets consisting of two different materials in Section IV. In both of these sections, we discuss candidate materials and provide projected sensitivities; the former section also includes an explanation of how to separate the effects of a new force from those of structure using X-ray scattering, with the corresponding two-material discussion left for Appendix D. Finally, we summarize our results and offer some concluding remarks in Section V.
Because our proposal blends two largely distinct fields--the study of new interactions familiar to particle and nuclear physicists, and scattering techniques used largely for material analysis--we have also included a range of background information, as well as various technical details, in the appendices. We discuss the theory of neutron scattering from single atoms in Appendix A. Appendix B provides an introduction to X-ray scattering from atoms and photoabsorption, as X-ray scattering is necessary in order to normalize the neutron scattering distribution of structured targets. Appendix C describes how scattering is modified for targets with structure on length scales comparable to the inverse momentum transfer of the scattering process; while our discussion in this appendix is focused on neutron scattering, the ideas are applicable to scattering of any particle. As we noted above, Appendix D describes how a new force can be distinguished from a modification of this sort of target structure in two-material targets.
The next several appendices describe a variety of systematic effects that must be controlled in our proposal. Appendix E describes the impact of multiple scattering events, in which a neutron is scattered multiple times before its detection. Appendix F considers the effects of finite target temperatures on our proposal. The effects of interactions between atoms within the target are then considered in Appendix G.
Some additional information about neutron and X-ray scattering instruments, including the realistically achievable parameters of instruments that are relevant to our proposal, are presented in Appendix H. Finally, in Appendix I, we describe our numerical approach to translating predicted scattering distributions into projections for sensitivity to new forces.
## II Overview
The general principle behind neutron scattering-based searches for new forces is straightforward: a beam of neutrons is scattered off of a target, and the resulting angular distribution of scattered neutrons is measured; any significant deviation from the Standard Model prediction for that distribution is then an indication of new physics. A simplified sketch of such an experiment is shown in Figure 1.
There are two significant Standard Model sources of neutron scattering: the strong nuclear interaction of neutrons with target nuclei, and the electromagnetic interaction of neutrons with atomic electric and magnetic fields. Nuclear scattering can be treated as hard sphere scattering at the \(10^{-2}-10^{2}\) nm length scales that we consider, so accounting for it is a matter of a single, angle-independent fit parameter. Electromagnetic scattering, on the other hand, can be more difficult to model precisely, as it arises from a combination of several different effects and depends sensitively on the target atoms' electronic states. This is frequently circumvented by conducting new force searches using targets composed of noble gases (most often xenon) with zero spin and orbital angular momentum, in which case electromagnetic scattering is far more predictable.
The limiting factors for this procedure are then statistical: though the Standard Model backgrounds are well-understood, the finite number of neutrons scattered from the target sets a minimum strength for a new force that can be detected. This problem is exacerbated by the need to select neutrons scattered with very small momentum transfers in order to detect new forces of interest. Taking the Yukawa force that is the focus of this work as an example, the neutron scattering distribution from a noble gas can be written as (see (12))
\[\frac{d\sigma}{d\ln\theta}\propto|b_{0}|^{2}\left(1+2\kappa_{\rm EM}f(q_{T}( \theta))+\frac{2\kappa_{\rm new}}{1+(q_{T}(\theta)/\mu)^{2}}\right)\theta\sin 2\theta \tag{2}\]
where \(b_{0}\) is a characteristic, angle-independent scattering length (primarily due to nuclear scattering, although it receives an electromagnetic correction), \(\kappa_{\rm EM}\) and \(\kappa_{\rm new}\) are some measures of the relative strength of electromagnetic and new force scattering relative to nuclear scattering (typical values for this work are \(\kappa_{\rm EM}\sim 10^{-2}\) and, at our sensitivity goal, \(\kappa_{\rm new}\sim 10^{-6}\)), \(q_{T}(\theta)\) is the momentum transfer for a scattering angle of \(2\theta\) (i.e. \(q_{T}(\theta)=2q_{0}\sin\theta\) for incident neutrons of momentum \(q_{0}\); see Figure 1), \(f(q)\) is a form factor for the atom, and \(\mu\) is the new force mediator's mass. The three terms in this distribution are plotted in Figure 5 (although note that that figure uses \(dp/d\Omega\) rather
than \(dp/d\ln\theta\)). The new force contribution to this distribution is best resolved when \(q_{T}(\theta)\sim\mu\), such that the new force term is not yet heavily suppressed by \((q_{T}/\mu)^{-2}\) but is no longer an angle-independent offset that cannot be distinguished from the nuclear force, when \(q_{T}/\mu\ll 1\). In terms of the new force's mass coupling \(g\), we have
\[\kappa_{\rm new}=\frac{m_{n}^{3}g^{2}A}{2\pi\mu^{2}b_{0}} \tag{3}\]
(see Appendix A.3); \(\kappa_{\rm EM}\) is defined by (14).
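To make the relative sizes of the three terms in (2) concrete, the short sketch below (our own; the values of \(\kappa_{\rm EM}\), \(\kappa_{\rm new}\) and the form-factor scale are placeholder magnitudes of the size quoted above, not fitted quantities) evaluates the distribution for 0.6 nm neutrons and a force of roughly 10 nm range:

```python
import numpy as np

def dsigma_dlntheta(theta, q_beam, mu, kappa_em=1e-2, kappa_new=1e-6, q_ff=42.0):
    """Schematic evaluation of Eq. (2), up to the overall |b_0|^2 normalization.
    theta is half the scattering angle (rad); all momenta are in nm^-1.  q_ff stands
    in for the atomic form-factor scale q_0 ~ 11 Z^(1/3) nm^-1 (placeholder Z ~ 54)."""
    qT = 2.0 * q_beam * np.sin(theta)
    f_atom = 1.0 / np.sqrt(1.0 + (qT / q_ff) ** 2)        # leading-order form factor
    new_force = 1.0 / (1.0 + (qT / mu) ** 2)
    return (1.0 + 2.0 * kappa_em * f_atom + 2.0 * kappa_new * new_force) \
        * theta * np.sin(2.0 * theta)

q_beam = 2.0 * np.pi / 0.6    # 0.6 nm neutrons
mu = 0.1                      # mediator mass in nm^-1, i.e. ~10 nm force range
for theta in (3e-3, 1e-2, 1e-1):
    print(theta, dsigma_dlntheta(theta, q_beam, mu))
```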
The former condition--that the momentum transfer not be too large--can be accomplished through some combination of two approaches: by using colder neutrons, and by considering scattering at small angles. Both methods are statistically costly. "Cold" neutrons, with wavelengths conventionally in the \(0.4-3\) nm range, are generally produced by thermalizing neutrons in a cryogenic moderator [38]; see Appendix H.2. Neutrons with longer wavelengths ("ultra-cold" neutrons, or "UCNs"), however, are produced via momentum selection of cold neutrons, reducing the available neutron flux. Restricting to small-angle scattering also impacts the statistics, by requiring a more precisely collimated neutron beam. This further reduces the neutron flux, since neutron beams are collimated primarily by rejecting neutrons outside of the chosen phase space. A straightforward optimization shows that, for the application discussed here, looking at small-angle scattering of thermal neutrons is preferable to employing UCNs.
In addition, in experiments done to-date on nuclei of conventional materials, the small-angle scattering is also suppressed by the \(\theta\sin 2\theta\) term in (2), corresponding to the limited phase space available for small-angle scattering. In this work, we present a method of circumventing this problem using coherent scattering from structured targets to enhance the fraction of incident neutrons that are scattered at the desired momentum transfers (typically to order unity, in fact). In particular, coherent scattering changes the scattering distribution (2) to (schematically) [39; 40; 41]
\[\frac{d\sigma}{d\ln\theta}\propto|b_{0}|^{2}\left(1+2\kappa_{\rm EM}f(q_{T}( \theta))+\frac{2\kappa_{\rm new}}{1+(q_{T}(\theta)/\mu)^{2}}\right)S(q_{T}( \theta))\,\theta\sin 2\theta \tag{4}\]
where \(S(q)\) is the structure factor of the target, which gives the coherent enhancement of scattering at a given momentum transfer. Then, by employing targets such that \(S(q_{T}(\theta))\,\theta\sin 2\theta\) is maximal at \(q_{T}(\theta)\sim\mu\), neutrons can be made to scatter primarily at angles where the new force is most observable, effectively increasing the neutron count available for the measurement.
The structure factor for a collection of identical target atoms, assuming incident plane wave neutrons, is given by
Figure 1: A simplified sketch of a neutron scattering experiment of the type discussed in this work, illustrating the key components of such an experiment; for details, see Appendix H.2. Neutrons are produced from a reactor [28; 29; 30; 31; 32; 33] or via spallation [34; 35], and then cooled in a moderator. A subpopulation of smaller velocity spread is then selected using either a rotating helical passage [36] or a series of rotating disks [37], and a collimated beam is formed by passing the neutrons through two or more apertures. This beam is then incident on the target material—which, in this work, will typically have some internal structure—with scattered neutrons detected at some distance beyond the target.
(see [39; 40; 41], or the discussion in Appendix C.1)
\[S(q_{T})=\frac{1}{N}\left|\sum_{j=1}^{N}e^{i\mathbf{q}_{T}\cdot\mathbf{r}_{j}} \right|^{2}, \tag{5}\]
with the sum over the \(N\) atoms in the target and \(\mathbf{r}_{j}\) the position of atom \(j\). For an ideal gas of spatial extent much larger than \(q_{T}^{-1}\), the positions are effectively uncorrelated, so one expects \(N\) atoms to give \(S(q_{T})\sim 1\). However, if one considers a cluster of \(N\) atoms over some length scale \(R\) with \(q_{T}R\ll 1\), one instead expects \(S(q_{T})\sim N\), giving a factor of \(N\) enhancement in the differential scattering cross-section at this momentum transfer. This is the central idea behind this work's proposal: using targets with structures at length scales comparable to \(\mu^{-1}\), such that scattering is coherent at small momentum transfers but becomes incoherent at large ones.
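The coherent enhancement can be verified directly from (5); the toy sketch below (our own illustration) evaluates the structure factor of \(N\) atoms placed at random inside a sphere of radius \(R\), recovering \(S\approx N\) for \(q_{T}R\ll 1\) and \(S\approx 1\) for \(q_{T}R\gg 1\):

```python
import numpy as np

rng = np.random.default_rng(0)

def structure_factor(positions, qT, n_dirs=200):
    """Eq. (5), averaged over random orientations of the momentum transfer q_T."""
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    phases = np.exp(1j * qT * dirs @ positions.T)          # shape (n_dirs, N_atoms)
    return np.mean(np.abs(phases.sum(axis=1)) ** 2) / positions.shape[0]

# N atoms distributed uniformly inside a sphere of radius R (arbitrary length units).
N, R = 2000, 10.0
pts = rng.normal(size=(N, 3))
pts = R * pts / np.linalg.norm(pts, axis=1, keepdims=True) * rng.random((N, 1)) ** (1.0 / 3.0)

for qT in (0.01, 0.1, 1.0, 10.0):          # qT * R spans 0.1 to 100
    print(qT, structure_factor(pts, qT))
# Expect S ~ N for qT * R << 1, falling towards S ~ 1 for qT * R >> 1.
```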
As we noted previously, it is generally preferable to perform neutron scattering from noble gases, in order to both reduce and simplify the electromagnetic scattering background. Forming nanometer- to micrometer-scale structures from noble elements alone is likely to be difficult, though perhaps not impossible, as we discuss in Section III.1. A more straightforward option is to employ a combination of two materials: a granular or porous solid and a noble liquid or gas which fills in the gaps in the solid. (In most of this work, we will refer to the noble component of a two-material target as a "gas," although we will ultimately be interested in fluids near liquid density. Distinctions between gases, liquids, and supercritical fluids other than density will generally not be significant for our purposes; see Appendix G.2.) We will consider several candidates for such two-material targets in this work, though we will not attempt to catalogue them exhaustively and better options than what we discuss are likely to exist.
Realistic targets' structure factors cannot be predicted _a priori_ with sufficient accuracy to remove them from the measured neutron scattering distribution alone. Thus, when using a structured target, a low-angle bump in the scattering distribution cannot be attributed to a new force because it may instead correspond to some additional target structure at that scale. This issue can be circumvented by employing another type of scattering, most probably of X-rays. In the single-material case, the ratio of the neutron to X-ray scattering distributions is then target structure-independent, and remains well predictable within the Standard Model, so a deviation of this ratio from its prediction signals the presence of new physics. Thus, while we will generally focus on neutron scattering--as it is in many ways more technically difficult, and is where many new forces are likely to appear--practical experiments will require both X-ray and neutron scattering and treat them on mostly equal footing.
An additional complication arises in the case of two-material scattering, due to the interference of the solid and noble gas scattering amplitudes. If not for this interference, the solid scattering contribution could simply be measured separately and subtracted out. Dealing with the interference term, however, requires making measurements using at least two, and possibly three, distinct noble elements; we discuss this procedure in Appendix D. Nonetheless, while more involved, two-material scattering can still be used to constrain new forces.
## III Scattering from single materials
We begin by considering the more straightforward implementation of our proposal using targets consisting of only a single noble element. Whether such a target could be produced with appropriate structure is unclear: we discuss several potential approaches to doing so below, and there may exist others, but the viability of these target candidates will need to be tested experimentally. Even if none of these approaches can be implemented, however, the single-material version of our proposal is useful as a simple illustration of how neutron and X-ray measurement can be combined to look for new forces, before considering the far more involved analysis required when using two-material targets.
### Possible Target Materials
The neutron's magnetic moment leads to significant, angle-dependent scattering from atoms with non-zero total orbital angular momenta, total electron spins, or nuclear spins; see Appendix A. The uncertainty in the Standard Model predictions for these scattering contributions acts as a background for any neutron scattering search for new forces. Similarly, the electromagnetic interactions between atoms in molecules or solids are likely to induce significant (at the required \(\kappa_{\mathrm{new}}\sim 10^{-6}\) level) magnetic moments even in atoms that do not otherwise have them, creating an analogous background. Avoiding these two effects makes noble elements particularly attractive target materials [42], as they have no magnetic moments and form nonmolecular gases.
While other elements (e.g. mercury) may in principle make for usable targets, we will focus on noble gases exclusively, considering them alone in this section, and in the presence of a solid in Section IV. Of the noble elements, xenon is likely the most promising candidate, and has historically been the most used for new force searches, as its large atomic weight enhances the new force scattering contribution of typical models. All of our discussion in this work should hold for any (stable) noble element, however. There should likewise be no qualitative distinction between isotopes of those elements, except through their different nuclear spins, which lead to a small electromagnetic background. (In fact, we will generally focus on isotopes with zero nuclear spin, but this is not critical; see Appendix A.) We note, however, that different isotopes of a single element can have wildly different neutron scattering lengths; see, for example, [43; 44].
We consider three possible approaches to creating structured targets from a single noble element: noble solids, aerosols, and boiling liquids.
While the solid states of most noble elements are reasonably achievable in laboratory conditions [45], forming granular structures of such solids may be significantly more difficult. Xenon can form a "snow-like" state under appropriate cooling conditions [46]. We are not aware of any systematic studies of this state, but it may be possible to create xenon snow with structure on length scales appropriate for our purposes. Similarly, there may or may not be ways to produce snow from other noble elements. Substantial density changes have been discovered when decreasing the temperature of noble solids below a certain critical temperature [47], although this may be due to phase transitions in the solid without changes in homogeneity.
It may also be possible to create granular structures from noble liquids. One way to do this is through aerosolization of a noble liquid. As with the possibility of xenon snow discussed above, we are not aware of any analyses of achievable droplet size distributions for noble elements, but the sub-micrometer sizes we are interested in are fairly typical for generic aerosols [48; 49]. Since such an aerosol would be unlikely to remain airborne or maintain a constant particle size distribution, this option would require continuous production and extraction of the aerosol in the target chamber. This does not meaningfully change the measurement strategy, however: in the case of a time-varying target, every appearance of the structure factor in the separation of scattering contributions procedure described below can simply be replaced by its average, so variation in the structure factor does not affect final sensitivity. Note that, in this case, the structure factor must remain constant (to a precision of order \(\kappa_{\rm new}\)) between neutron and X-ray scattering measurements; this should be possible, however, for example by performing these measurements simultaneously [50].
Finally, scattering could be performed from noble liquids in the process of boiling, with the granular structure formed by the gaseous bubbles that appear during this process. It appears unlikely that the resulting bubbles would be sufficiently small or consistent [51; 52; 53; 54; 55], however, so we leave serious consideration of this approach to future work.
For the rough sensitivity projections of this work, we assume that single-material targets consist of isolated granular spheres of approximately equal radii, separated by vacuum; see Appendix C for a more precise description of our assumptions. This should be a reasonable approximation of aerosol geometry, but may appear less appropriate for snow (which does not consist of spherical grains) or boiling liquids (which have liquid between the grains). However, as Appendix C further discusses, the general behavior of structure factors is determined solely by the structure's dimensionality and length scale, precluding any large corrections from the differing geometry of snow. Similarly, boiling liquids' structure factors are suppressed by the limited density contrast between the liquid and gaseous states, but are not otherwise affected. All three cases should therefore be approximately described by the same structure factor, to the order-unity precision we desire in this work (see e.g. [40; 41] and Appendix C):
\[S(q_{T})\approx\frac{12\pi}{9+2(q_{T}\overline{R})^{4}}n\overline{R}^{3}+1, \tag{6}\]
where \(\overline{R}\) is the typical radius of the grains in the material and \(n\) is the number density of the noble atoms. Example structure factors for liquid xenon grains of various size are plotted in Figure 6.
The limiting behaviors of this structure factor are easily understood. For \(q_{T}R\ll 1\), \(S(q_{T})\rightarrow(4\pi/3)n\overline{R}^{3}\): scattering is coherent over individual grains, and thus the scattering distribution is enhanced by the number of atoms per grain. Conversely, for \(q_{T}R\gg 1\), \(S(q_{T})\to 1\), corresponding to fully incoherent scattering, with cross-sections simply summed over all atoms. Accounting for the variation in phase space available at different scattering angles (see (4) and the preceding discussion) thus gives a scattering distribution peaked at \(q_{T}\sim R^{-1}\). Granular materials therefore provide a means of increasing scattering probabilities at chosen momentum transfers. In particular, using materials with \(R\sim\mu^{-1}\) will allow us to increase experimental sensitivity to a new force of range \(\mu^{-1}\).
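As a numerical orientation, the sketch below (our own, using the liquid-xenon number density from Table 1) evaluates (6) for a few grain radii, showing the low-\(q_{T}\) plateau at roughly the number of atoms per grain and the fall-off once \(q_{T}\overline{R}\) exceeds unity:

```python
import numpy as np

def S_grain(qT, R, n=14.0):
    """Eq. (6) for isolated spherical grains: qT in nm^-1, R in nm, n in nm^-3
    (n = 14 nm^-3 is roughly liquid xenon; see Table 1)."""
    return 12.0 * np.pi * n * R**3 / (9.0 + 2.0 * (qT * R) ** 4) + 1.0

for R in (1.0, 10.0, 100.0):
    # columns: S at the minimum visible q_T ~ (30 nm)^-1, at qT * R = 1, and at qT * R = 10
    print(R, S_grain(1.0 / 30.0, R), S_grain(1.0 / R, R), S_grain(10.0 / R, R))
```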
### Separating Scattering Contributions
While it is possible to calculate the structure factors of targets based on their geometric properties (see Appendix C), such estimates will not be exactly correct for any realistic targets due to variation in their constituent grain size
and shape, as well as due to impurities. As a result, it is not sufficient to measure the neutron scattering distribution from a structured target in order to search for a new force, as any bump in low-angle scattering could indicate a bump in the structure factor rather than in the atomic scattering distribution. Circumventing this requires an additional set of measurements to extract the structure factor of the target alone. We now describe how this can be done for a single material, in which case the procedure is relatively straightforward and can be described analytically.
For simplicity, we restrict further to targets consisting of only a single phase or density of the noble element (e.g. xenon snow). Two-phase targets such as boiling liquids require a slightly modified analysis to account for the density contrast between the two phases, as we discuss for the two-material analysis in Appendix D. (Note that this is the only change required, however; the majority of Appendix D is devoted to removing uncertain electromagnetic backgrounds, which are not a concern for targets containing only noble atoms.)
The key fact allowing the scattering distributions of individual atoms to be disentangled from the structure factor of the target as a whole is that structure factors are independent of the scattered particle, so long as the scattering lengths of each atom are equal for the scattered particle. For a noble gas, this is likely to hold quite generally, since all of the lowest-order electromagnetic properties (the total electron orbital momentum, the total electron spin, etc.) are zero. Thus, the structure factor can be obtained by performing scattering with X-rays. That structure factor can then be used to extract the neutron scattering distribution from individual atoms.
X-ray scattering is discussed in more detail in Appendix B; here we will merely cite the corresponding scattering distribution for noble atoms:
\[\frac{d\sigma_{X}}{d\Omega}=\left(\frac{Ze^{2}}{4\pi m_{e}}\right)^{2}\left( f(q_{T}(\theta))-\frac{Zm_{e}}{m_{\rm nuc}}\right)^{2}\frac{1+\cos^{2}2\theta}{2}, \tag{7}\]
where \(m_{\rm nuc}\) is the mass of the atomic nucleus, \(m_{e}\) is the mass of the electron, \(Z\) is the atomic number of the target atoms, and we have averaged over incident polarizations and summed over outgoing ones. Note in particular two features of this distribution: it approaches a constant comparable (or equal) to its maximal value at small angles, and it is fully described by precisely-known parameters except for its dependence on the atomic form factor.
The ratio of the X-ray scattering probability distribution for the structured target to that of a uniform target of the same material is
\[\frac{dp_{\rm X,s}/dq_{T}}{dp_{\rm X,u}/dq_{T}}=S(q_{T}), \tag{8}\]
with the 's' and 'u' subscripts referring to structured and uniform targets, respectively, and we use \(q_{T}\) in place of \(\Omega\) or \(\theta\) in order to emphasize that it is the momentum transfer, and not the angle, that should be compared between X-ray and neutron measurements. Here, we have switched to probability rather than cross-section distributions in order to account for normalization (or, equivalently, target thickness): we will generally assume that target thickness is selected so that 10% of neutrons are scattered above some minimum angle (see Appendix E), requiring different target thicknesses for different target structures. We will therefore want to compare these normalized scattering probabilities, rather than cross-sections.
The unstructured neutron scattering distribution can then be reconstructed from these two X-ray measurements, combined with a structured neutron measurement:
\[\frac{dp_{\rm n,u}}{dq_{T}}=\frac{dp_{\rm n,s}}{dq_{T}}\left(\frac{dp_{\rm X,u}/dq_{T}}{dp_{\rm X,s}/dq_{T}}\right). \tag{9}\]
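A minimal sketch of this combination (our own illustration, assuming the three measurements are binned identically in \(q_{T}\) and that bin counts are independent Poisson variables) is:

```python
import numpy as np

def reconstruct_uniform_neutron(n_struct, x_struct, x_uniform):
    """Eq. (9) applied to binned event counts, with leading-order Poisson error
    propagation.  Returns the reconstructed shape of dp_n,u/dq_T (up to an overall
    normalization) and its absolute uncertainty in each bin."""
    n_s = np.asarray(n_struct, dtype=float)
    x_s = np.asarray(x_struct, dtype=float)
    x_u = np.asarray(x_uniform, dtype=float)
    shape = n_s * x_u / x_s
    rel_err = np.sqrt(1.0 / n_s + 1.0 / x_s + 1.0 / x_u)
    return shape, shape * rel_err

# Toy bins: when the X-ray counts dominate, the relative error is ~1/sqrt(n_struct).
shape, err = reconstruct_uniform_neutron([1e6, 4e5, 1e5], [5e9, 2e9, 5e8], [1e9, 1e9, 1e9])
print(err / shape)
```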
Crucially, this combination of measurements can lead to smaller uncertainties at small angles than a single, direct measurement of neutron scattering from a uniform target would, due to the latter's poor statistics at small angles. This requires, up to \(\mathcal{O}(1)\) factors,
\[N_{n}\frac{dp_{\rm n,s}}{dq_{T}}\gtrsim N_{n}\frac{dp_{\rm n,u}}{dq_{T}} \tag{10a}\]
\[\frac{N_{X}}{2}\frac{dp_{\rm X,s}}{dq_{T}}\gtrsim N_{n}\frac{dp_{\rm n,u}}{dq_{T}} \tag{10b}\]
\[\frac{N_{X}}{2}\frac{dp_{\rm X,u}}{dq_{T}}\gtrsim N_{n}\frac{dp_{\rm n,u}}{dq_{T}} \tag{10c}\]
over the range of momentum transfers useful for detecting the new force (\(q_{T}\sim\mu\)), where \(N_{n}\) is the total incident neutron count given a fixed available neutron beam time, and \(N_{X}\) is the analogous total X-ray count. (The included factors of 2 conservatively account for dividing X-ray beam time equally between the structured and unstructured
measurements, although an unequal division may be more efficient.) The first condition holds whenever the structure factor is greater than its average,
\[\overline{S}=\frac{1}{\cos^{2}\theta_{\rm min}}\int_{\theta_{\rm min}}^{\pi/2}S(q _{T}(\theta))\sin(2\theta)d\theta, \tag{11}\]
where \(\theta_{\rm min}\) is the smallest angle observed. Note that this is a stronger condition than merely \(S(q_{T})>1\), due to the differing normalization of structured and uniform targets needed to keep the total scattering probability constant. As long as this is the case, the third condition also implies the second.
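In practice \(\overline{S}\) is most easily obtained numerically; a small sketch (our own) implementing (11) for an arbitrary structure factor is:

```python
import numpy as np
from scipy.integrate import quad

def S_average(S_of_q, q_beam, theta_min):
    """Eq. (11): the phase-space-weighted average structure factor above theta_min,
    for incident neutrons of momentum q_beam, so that q_T = 2 q_beam sin(theta)."""
    integrand = lambda th: S_of_q(2.0 * q_beam * np.sin(th)) * np.sin(2.0 * th)
    value, _ = quad(integrand, theta_min, np.pi / 2.0)
    return value / np.cos(theta_min) ** 2

# Sanity check: a structureless target (S = 1 everywhere) averages to 1.
print(S_average(lambda q: 1.0, q_beam=2.0 * np.pi / 0.6, theta_min=3e-3))
```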
In fact, we will be able to satisfy a somewhat stronger condition: that the error on the neutron scattering distribution from the noble element (i.e. from the uniform target) is dominated by the error in the structured neutron scattering distribution rather than by the pair of X-ray measurements. This is the case whenever the number of X-rays scattered in a given angular range from the uniform target is greater than the number of neutrons scattered at those angles from the structured one, i.e. whenever
\[\frac{N_{X}}{2}\frac{dp_{\rm X,u}}{dq_{T}}\gtrsim N_{n}\frac{dp_{n,s}}{dq_{T}}; \tag{12}\]
since X-ray scattering is approximately angle-independent at small momentum transfers (see (7)), this is approximately equivalent to the requirement that
\[\frac{N_{X}}{2N_{n}}\gtrsim\frac{S(q_{T})}{\overline{S}}>1. \tag{13}\]
The assumption that \(S(q_{T})>\overline{S}\) will never hold at all momentum transfers: increased scattering at small angles corresponds to decreased scattering at large angles, when total scattering probability is held constant. Depending on the particular measurement, this may be irrelevant (if \(\mu\) is small enough that only the enhanced small angles are useful for detecting a new force), or it may indicate that the optimal measurement strategy is to spend some neutron beam time on the uniform target measurement in order to reduce uncertainties at large angles. In this work we will restrict to neutron measurements using only structured targets; we leave a more thorough analysis of optimal measurement strategies to future work, though our results suggest that there is little advantage to neutron scattering from uniform targets (see Figure 2).
As we discuss in Appendix H, achievable fluxes for X-ray beams exceed those of neutron beams by a factor of at least \(10^{6}\). Since we will not consider structure factors in excess of approximately \(10^{5}\) (see Figure 6), this is sufficient to ensure that (13) should always hold.
It is worth noting, however, that the condition (13) is likely too stringent, as it assumes no knowledge of the atomic form factor \(f(q)\). In fact, atomic form factors can be calculated numerically from Standard Model parameters (see e.g. [56; 57; 58; 59]), though it is unclear if this can be done with the precision necessary for our purposes. A complete prediction for \(f(q)\) is unnecessary, however: since the momentum scale over which \(f(q)\) varies (\(q_{0}\sim 11\)\(Z^{1/3}\) nm\({}^{-1}\); see Appendix A) is known to be much larger than the momentum transfers of interest, the X-ray scattering distribution (7) can be accurately described by a combination of known parameters and a series expansion of \(f(q_{T})\) in powers of \(q_{T}/q_{0}\). Doing so contributes a few additional degrees of freedom to the fitting procedure, but, crucially, it does not eliminate the signal, as there is no way for such an expansion to replicate the \(1/(1+(q_{T}/\mu)^{2})\) behavior of the new force scattering length contribution once \(q_{T}>\mu\). (This is essentially the same reason why electromagnetic effects have little impact on our sensitivity projections, as we discuss in the next subsection.)
Using this separation of scales, only the structured measurements are necessary, and the condition for the neutron measurement to dominate the final uncertainty becomes considerably weaker:
\[\frac{N_{X}}{N_{n}}\gg 1. \tag{14}\]
Along with being easily satisfied by a wide range of X-ray instruments, this condition has the added benefit of being intuitively understandable: in this measurement approach, the experiment consists simply of looking at the ratio of the neutron to X-ray scattering distributions of a structured target. The shared dependence of these distributions on the structure factor is eliminated in the ratio, leaving only a measurement of the ratio of scattering distributions of individual atoms; any deviation of this ratio from the Standard Model prediction is then interpreted as a signal of a new force. Since this approach is fully symmetric between the neutron and X-ray measurements, the dominant uncertainty is determined simply by whichever included fewer scattering events.
### Sensitivity Projections
In the absence of systematics, a new force is detectable if it increases the number of small-angle scattering events by more than the corresponding Poisson error (summed over bins). Accounting for the uncertainties in the nuclear scattering length, electromagnetic scattering length scale, and atomic form factor complicates this criterion: a scattering distribution that includes the new force must be fit with Standard Model parameters, and only new forces for which no combination of these parameters leads to a sufficiently good fit can be detected. A precise statistical description of this criterion is presented in Appendix I; here we will merely summarize our approach.
Given a pair of new force parameters (\(\mu\) and \(g\)), it is straightforward to generate a predicted total neutron scattering distribution from an assumed target. This distribution can then be fit with Standard Model parameters alone, or with the addition of the two new force parameters. The fits can then be compared using an F-test (see Appendix I); a significant improvement in the fit when including the new force parameters indicates the presence of such a new force. We assume errors from X-ray scattering (i.e. in our knowledge of the structure factor) to be subdominant. We include three Standard Model parameters in our single-material fits: an overall normalization \(\mathcal{N}\) (corresponding to the angle-independent scattering length of the target atom \(b_{0}\)), the magnitude of electromagnetic scattering \(\kappa_{\mathrm{EM}}\), and the momentum scale of electromagnetic scattering \(q_{0}\) (see Appendix A). Our two fit functions are therefore given by
\[\frac{d\sigma}{d\ln\theta}=\mathcal{N}^{\mathrm{(fit)}}\left(1+\frac{2\kappa _{\mathrm{EM}}^{\mathrm{(fit)}}}{\sqrt{1+\left(q_{T}(\theta)/q_{0}^{\mathrm{ (fit)}}\right)^{2}}}\left[+\frac{2\kappa_{\mathrm{new}}^{\mathrm{(fit)}}}{1+ \left(q_{T}(\theta)/\mu^{\mathrm{(fit)}}\right)^{2}}\right]\right)S(q_{T}( \theta))\,\theta\sin 2\theta, \tag{15}\]
with and without the bracketed term; all labeled fit parameters are allowed to vary freely. This ignores any existing constraints on these quantities, but this is unlikely to be particularly conservative: nuclear scattering lengths are generally not known to the required precision, and including the electromagnetic fit parameters had only a small effect on our sensitivity projections.
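A rough sketch of this fit-comparison procedure (our own, not the analysis code used for the projections below; for simplicity the trial mediator mass \(\mu\) is scanned externally rather than fitted, so the F-test is for a single added parameter) could look like:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def model(theta, norm, k_em, q0_em, k_new, mu, S_of_q, q_beam):
    """Eq. (15), with a user-supplied structure factor S_of_q and beam momentum q_beam."""
    qT = 2.0 * q_beam * np.sin(theta)
    em = 2.0 * k_em / np.sqrt(1.0 + (qT / q0_em) ** 2)
    new = 2.0 * k_new / (1.0 + (qT / mu) ** 2)
    return norm * (1.0 + em + new) * S_of_q(qT) * theta * np.sin(2.0 * theta)

def new_force_p_value(theta, counts, mu_trial, S_of_q, q_beam):
    """Fit binned counts with and without the new-force term (kappa_new) at a fixed
    trial mediator mass, and return the F-test p-value for the added parameter.
    Assumes all bins are populated, so that sqrt(N) Poisson errors are nonzero."""
    counts = np.asarray(counts, dtype=float)
    err = np.sqrt(counts)
    sm = lambda t, n, k, q0: model(t, n, k, q0, 0.0, mu_trial, S_of_q, q_beam)
    bsm = lambda t, n, k, q0, knew: model(t, n, k, q0, knew, mu_trial, S_of_q, q_beam)
    p_sm, _ = curve_fit(sm, theta, counts, p0=[counts.max(), 1e-2, 40.0], sigma=err)
    p_bsm, _ = curve_fit(bsm, theta, counts, p0=[*p_sm, 0.0], sigma=err)
    rss_sm = np.sum(((counts - sm(theta, *p_sm)) / err) ** 2)
    rss_bsm = np.sum(((counts - bsm(theta, *p_bsm)) / err) ** 2)
    dof = len(theta) - 4
    F = (rss_sm - rss_bsm) / (rss_bsm / dof)
    return f_dist.sf(F, 1, dof)

# Usage: new_force_p_value(theta_bins, observed_counts, mu_trial=0.1,
#                          S_of_q=lambda q: 1.0, q_beam=2.0 * np.pi / 0.6)
```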
While this fitting procedure omits higher-order corrections to the atomic form factor (see Appendix A), such terms are unlikely to be significant for \(\mu\ll q_{0}\) given the minimal effect of the leading-order electromagnetic term. This is the result of the same separation of momentum scales discussed in the previous subsection: the large ratio of \(q_{0}/\mu\) precludes a modified electromagnetic term from effectively imitating the new force contribution's momentum dependence. Note as well that the atomic form factor can be empirically determined from X-ray scattering from a uniform target alone, if the approximate analytic form that we employ is insufficient; see Appendix B. To be conservative, we do not show projections for \(\lambda<10^{-1}\) nm since \(\mu\) begins to approach \(q_{0}\) at these length scales; an accurate treatment of this regime is beyond the scope of this work.
Our projections are based on a fiducial beamline with a flux of \(10^{8}\) cm\({}^{-2}\) s\({}^{-1}\) neutrons over a target area of 10 cm\({}^{2}\), a typical wavelength of 0.6 nm, and a minimum resolvable angle of 3 mrad (i.e. a minimum visible momentum transfer of approximately (30 nm)\({}^{-1}\)). We assume an integration time of 28 hours, giving a total of \(10^{13}\) scattered neutrons (i.e. \(10^{14}\) incident neutrons; see Appendix E); sensitivities for other neutron counts are easily estimated from \(g_{\mathrm{min}}^{2}\propto N^{-1/2}\). Neutron beam properties relevant to our proposal are discussed further in Appendix H.2.
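These fiducial numbers, and the quoted \(g_{\rm min}^{2}\propto N^{-1/2}\) scaling, are easy to reproduce; a short sketch of the arithmetic (our own) is:

```python
import numpy as np

flux = 1e8                    # neutrons cm^-2 s^-1
area = 10.0                   # cm^2
exposure = 28.0 * 3600.0      # 28 hours, in seconds
incident = flux * area * exposure
scattered = 0.1 * incident    # 10% scattering fraction (see Appendix E)
print(f"{incident:.1e} incident, {scattered:.1e} scattered")   # ~1e14 and ~1e13

def rescale_g2_min(g2_min_fiducial, n_scattered, n_fiducial=1e13):
    """Rescale a projected sensitivity to a different exposure using g^2_min ~ N^(-1/2)."""
    return g2_min_fiducial * np.sqrt(n_fiducial / n_scattered)
```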
We illustrate the projected sensitivities for several single-material targets in Figure 2. We focus primarily on scattering from xenon, as its large atomic weight makes it the optimal target in this case, though we illustrate an achievable sensitivity for argon as well for comparison. Both noble elements are assumed to be at liquid densities, and the number of grains in each target is adjusted to reach a scattering fraction of 0.1 into angles above the minimum visible angle (see Appendix E). We assume angular acceptance of \(2\theta\) from \(3\times 10^{-3}\) to \(\pi/4\) radians. For xenon, we consider targets with spherical grains of typical radii 1, 10, and 100 nanometers as well as with no structure at all; the spatial density of these grains has no impact on the sensitivity, so long as their positions are uncorrelated (and assuming that the total scattering fraction is held fixed). The no-structure curve illustrates the gain in sensitivity over existing experiments that arises simply from assuming more neutrons scattered from the target (as well as from the inevitably somewhat simplified analysis of our estimate compared to actual experiments), as opposed to the benefits of coherent low-angle scattering. As Figure 2 illustrates, the use of structured targets can potentially extend a neutron scattering experiment's sensitivity to new forces by nearly two orders of magnitude in \(g^{2}\) at ranges of at least 10 nm, with gains decreasing at shorter ranges until achieving parity at a few angstroms.
It is worth briefly considering some properties of Figure 2 in order to confirm that our projections are reasonable. First, the no-structure curve can be reasonably compared to the results of [23], as the primary difference between the assumptions of our curve and the parameters of that work should be the number of scattered neutrons. We assume approximately 400 times more scattered neutrons than were used in [23] (noting that the scattering event count reported in that work is restricted to small-angle scattering events), in part due to a higher assumed integrated incident flux and, in part, because the target depth in [23] was insufficient to scatter 10% of incident neutrons, as we
assume. This should lead to a sensitivity improved by a factor of approximately 20, quite close to our projection at their optimal sensitivity, given the coarseness of this comparison.
Second, it is straightforward to understand the limiting behavior and grain radius dependence of the curves in Figure 2. In the high-\(\mu\) limit, all of the sensitivity curves approach \(g_{\rm min}^{2}\propto\mu^{4}\). This is the result of a combination of two effects: the \(\mu^{2}\) suppression in \(\kappa_{\rm new}\) (see (3)), and the fact that the new force scattering distribution is angle-independent up to \((q_{\rm max}/\mu)^{2}\) corrections; the angle-independent component is therefore absorbed into the nuclear scattering length fit. At small masses \(\mu\), the advantages of larger grain radii (10 nm rather than 1 nm) become apparent, as longer-range new forces can be coherent over larger grains. Conversely, increasing the grain radius from 10 to 100 nm is not beneficial with a minimum accepted momentum transfer of \((30\;{\rm nm})^{-1}\), as the scattering that would be enhanced at these radii is below the minimum visible angle; a sensitivity advantage appears only if one observes scattering at smaller angles, and only at force ranges near the upper limit considered. (Note as well that targets with 100 nm grains may or may not suffer from significant multiple scattering backgrounds that would worsen their sensitivity; see the discussion in Appendix E.)
The inferior sensitivity of an argon-based experiment is also easily interpretable: the visibility of the new force is given by its relative strength \(\kappa_{\rm new}\propto A/b_{0}\), which is larger for xenon than any other noble element (see Table 1). We include argon in Figure 2 both to illustrate the large advantage of xenon over other elements and because argon will be a more promising candidate gas in the two-material experiments discussed below.
## IV Scattering from two materials
We now turn to scattering from targets consisting of two materials: one structured solid, providing a framework with the required non-uniformity scale, and one noble element filling the spaces within the solid. Crucially, while the neutron scattering length of the noble element is analytically tractable, we do not assume any particular behavior
Figure 2: A comparison of projected sensitivities to new mass-coupled Yukawa forces for experiments following the design described in this work, using several different single-material target candidates. Detailed assumptions for each projection are discussed in the main text. Shown is scattering from simple xenon gas (the target of many previous experiments) as well from xenon targets with spherical grains of radius 1, 10 and 100 nm and from argon targets with 10 nm radius grains. (The spacing between these spheres does not affect the sensitivity, so long as their positions remain uncorrelated, as we discuss in the main text.) Also shown are the regions of parameter space already excluded by previous experiments, namely [21, 22, 23, 24, 15], and the line corresponding to \(\kappa_{\rm new}=10^{-6}\) for xenon, as an illustration of the target systematic error we use throughout this work. Astrophysical constraints at these masses lie below the bottom edge of the plot, but are somewhat model-dependent; see Section V.
for the scattering length of the solid, whose electronic structure may lead to difficult-to-predict electromagnetic scattering. It is therefore necessary to combine several measurements--including, at minimum, measurements with two different noble gases--from such targets in order to eliminate this background. This procedure lacks a simple analytic description of the kind we gave for the single-material case above, so we limit ourselves to demonstrating that there are enough independent measurements to constrain the pertinent degrees of freedom; this analysis is presented in Appendix D.
As we discuss below, two-material targets have generically inferior sensitivity compared to single-material targets, as a result of systematic effects deriving from the backgrounds due to the solids, as well as of the loss in statistics occurring when a fixed number of neutrons is divided between measurements with different targets. However, unlike the speculative single-material targets of the previous section, the two-material combinations we consider here are readily available. The resulting sensitivity projections are therefore a lower bound on the potential of this proposal.
### Possible Target Materials
The solid materials that could be appropriate for our purposes can be loosely divided into two categories: porous materials whose pores are filled with a noble gas or liquid, and granular materials whose interstitial volume can be filled with the noble element. We will generally remain agnostic to the particular noble element used in the examples below; as we discuss in Appendix D, two-material targets will generally require two or even three noble elements in order to separate the different scattering contributions, but the procedure does not otherwise depend on the specific element.
Different solid materials may, however, be more or less compatible with particular noble elements. One reason for this is the enhancement of the scattering probability with increasing difference in scattering length density (SLD, i.e. the product of atomic number density and scattering length, summed over constituent atoms) between the solid and the gas (see Appendix C): solid materials whose SLD is much higher than that of the gas will scatter too much, potentially forcing an experiment to use impractically thin targets in order to avoid excessive multiple scattering (see Appendix E). Note that, since scattering lengths of different isotopes of a single element can differ by large factors (see [43, 44]), we will focus on particular isotopes of noble gases in this section; see Table 1. We assume natural abundances for all solids.
Even if thickness is not a concern, however, large solid SLDs lead to a loss of sensitivity because they reduce the fraction of scattering events that come from the noble gas, which are the only events for which a new force can be resolved from the electromagnetic background.
For both of these reasons, solids whose SLD is not much greater than the maximum achievable value for a given noble element--essentially the SLD for its liquid form, given the incompressibility of liquids--are preferable. Table 1 lists approximate SLDs for all of the stable noble liquids and several example solid materials. As this list illustrates, finding solids with sufficiently small SLDs (and the necessary granular structure) may be challenging, motivating much of our discussion below.
Note that we do not include electromagnetic scattering lengths in Table 1 or in our calculations, as these depend on the detailed electronic structure of solids and are thus difficult to predict. While electromagnetic scattering lengths of atoms with non-zero total spins or orbital angular momenta can be comparable to their nuclear scattering lengths, they sum incoherently (see Appendix C), and thus should have little impact on the sensitivity projections, as the expected scattering distributions are dominated by small-angle coherent scattering.
Several example candidate solid materials are presented in this section, but this is likely not a complete list and better materials may result from a more thorough search. The first is silica, SiO\({}_{2}\), which we use as our primary benchmark for sensitivity projections. Silica may be a candidate material in either of two forms: as a porous ceramic, or as a collection of spherical grains. Silica gels (in particular aerogels and xerogels) can have very high porosities, and can be produced with a variety of pore sizes, including some appropriate for our purposes (for a review, see, for example, [66]). Silica can also be manufactured in the form of small spheres, typically via the Stober process (see e.g. [67, 68]). Silica has a reasonably low SLD of 420 \(\mathrm{fm\;nm}^{-3}\) (see Table 1): lower than the maximum of argon, though not of any other noble element.
Cerium oxide (CeO\({}_{2}\)) has a very similar SLD of 410 \(\mathrm{fm\;nm}^{-3}\), and is widely available in powdered form due to its application in surface polishing [69]. Note, however, that the typical grain size of common cerium oxide powders is too large for our purposes.
Alumina, Al\({}_{2}\)O\({}_{3}\), can be produced as a porous ceramic with pores of variable sizes [70], as it is used in filtration and various industrial processes; it cannot, to the authors' knowledge, be produced in the form of appropriately-sized beads. However, its SLD is 580 \(\mathrm{fm\;nm}^{-3}\), somewhat higher than that of silica.
Carbon nanotubes (CNTs) can provide appropriate granularities but, unlike the preceding materials, in a non-isotropic geometry. The structure factors of forests of CNTs, with incident neutrons parallel to the average nanotube
direction, are estimated in Appendix C.3. While not as sharply peaked as the spherical grain structure factor, the CNT result nonetheless accommodates the same low-angle scattering enhancement. Forests of multi-walled CNTs with appropriate radii (\(\mathcal{O}\)(10 nm)) have been produced [71; 72; 73; 74], though it is unclear whether they can be produced with appropriate thickness, density, substrate, etc. for this work's procedure.
Finally, we broadly consider the potential of some alloys to be effective targets. Alloys can be attractive solid targets because they can be chosen to have small or even vanishing SLD: since some elements (e.g. hydrogen, lithium, titanium [43; 44]) have negative coherent scattering lengths, it is possible for the total coherent scattering length to be suppressed at length scales greater than the inverse interatomic spacing. (A degree of this effect can be achieved even with non-metals, as the two titanium-containing entries in Table 1 illustrate; the advantage of alloys is the greater freedom to adjust their atomic compositions.) This can potentially reduce the solid-background issues discussed above. We note that we are not aware of any materials with this characteristic that can be produced with the necessary granular structure, but the attractiveness of such a target means that this may be a direction worth exploring in future work.
Our projections for the two-material case assume the same geometry as the single-material projections--a collection of uncorrelated spherical grains of gas--but now with the space between those grains filled with a solid material rather than vacuum. As we noted in Section III.1, the exact shape of the grains should only modify our projections by order-unity factors, leaving us to compute only a single structure factor. The structure factor in this case is defined analogously to the single-material definition (5), but now including the contributions of both materials:
\[S(q_{T})=\frac{1}{\sum_{j}N_{j}|b_{j}(\mathbf{q}_{T})|^{2}}\left|\sum_{j}\sum _{k=1}^{N_{j}}b_{j}(\mathbf{q}_{T})e^{i\mathbf{q}_{T}\cdot\mathbf{r}_{j,k}} \right|^{2}, \tag{16}\]
where the target contains \(N_{j}\) atoms of the j'th element, each with scattering length \(b_{j}(q_{T})\). As we show in Appendix C.1, this is well-approximated for our geometry by
\[S(q_{T})\approx\frac{12\pi\overline{R}^{3}}{9+2(q_{T}\overline{R})^{4}}\left( \frac{f\left|\Delta\mathcal{S}\right|^{2}}{fn_{g}|b_{g}(\mathbf{q}_{T})|^{2}+( 1-f)\sum_{j}n_{s,j}|b_{s,j}(\mathbf{q}_{T})|^{2}}\right)+1. \tag{17}\]
where \(f\) is the volume fraction of the target occupied by the gas, \(n_{g}\) (\(n_{s,j}\)) is the number density of the gas (the j'th element in the solid), \(b_{g}(\mathbf{q}_{T})\) (\(b_{s,j}(\mathbf{q}_{T})\)) is the neutron scattering length of the gas (the j'th solid element), and \(\Delta\mathcal{S}\) is the difference in SLDs between the two materials. The key difference from the single-material case is the dependence on this SLD contrast: coherent scattering vanishes in the limit of equal SLDs, as the neutrons then see the two materials as essentially equivalent.
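A small numerical sketch of (17) (our own; it uses the per-formula-unit scattering lengths and densities of Table 1, neglects the \(q_{T}\)-dependence of the atomic scattering lengths, and takes liquid Xe-136 in silica with 10 nm pores at gas volume fraction \(f=0.5\)) illustrates the role of the SLD contrast:

```python
import numpy as np

def S_two_material(qT, R, f, n_gas, b_gas, solid_components):
    """Eq. (17): qT in nm^-1, R in nm, number densities in nm^-3, scattering lengths
    in fm.  solid_components is a list of (number density, scattering length) pairs."""
    sld_gas = n_gas * b_gas
    sld_solid = sum(n * b for n, b in solid_components)
    delta_sld = sld_gas - sld_solid
    incoherent = f * n_gas * b_gas**2 + (1.0 - f) * sum(n * b**2 for n, b in solid_components)
    return 12.0 * np.pi * R**3 / (9.0 + 2.0 * (qT * R) ** 4) \
        * (f * delta_sld**2 / incoherent) + 1.0

# Liquid Xe-136 (n = 14 nm^-3, b = 9.0 fm) in silica (per formula unit: n = 27 nm^-3, b = 16 fm).
for qT in (1.0 / 30.0, 0.1, 1.0):
    print(qT, S_two_material(qT, R=10.0, f=0.5, n_gas=14.0, b_gas=9.0,
                             solid_components=[(27.0, 16.0)]))
```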
\begin{table}
\begin{tabular}{|c|c c c|}
\hline Material & \(b_{c}\) (fm) & \(n_{\text{liquid}}\) (nm\({}^{-3}\)) & SLD\({}_{\text{liquid}}\) (fm nm\({}^{-3}\)) \\
\hline He-4 & 3.3 & 22 & 72 \\
Ne-20 & 4.6 & 37 & 170 \\
Ar-36 & 25 & 21 & 530 \\
Kr-86 & 8.1 & 18 & 140 \\
Xe-136 & 9.0 & 14 & 120 \\
\hline Material & \(b_{c}^{\text{unit}}\) (fm) & \(n^{\text{unit}}\) (nm\({}^{-3}\)) & SLD\({}_{\text{max}}\) (fm nm\({}^{-3}\)) \\
\hline SiO\({}_{2}\) & 16 & 27 & 420 \\
Al\({}_{2}\)O\({}_{3}\) & 24 & 24 & 580 \\
Al\({}_{2}\)Ti\({}_{3}\)O\({}_{9}\) & 49 & 5.6 & 275 \\
BaTiO\({}_{3}\) & 19 & 15 & 290 \\
CeO\({}_{2}\) & 16 & 25 & 410 \\
CNTs & 6.7 & 100 & 670 \\
\hline
\end{tabular}
\end{table}
Table 1: The coherent scattering lengths, densities, and scattering length densities of all of the stable noble elements and of some candidate solid materials: alumina, silica, and carbon nanotubes (“CNTs”). Noble element densities are given for their liquid state, as an upper bound; the optimal isotope was chosen for each. For the solid ceramics, the scattering lengths and densities are for the full “molecule,” e.g. they treat all five atoms of Al\({}_{2}\)O\({}_{3}\) as one unit; note that this is an inaccurate measure of coherent scattering once the inverse momentum transfer is comparable to the interatomic spacing. The CNT number density assumes a skeletal mass density of 2 g/cm\({}^{3}\), though estimates for the true value vary; see, for example, [60; 61; 62; 63]. All of the solid results assume a natural mixture of isotopes. Nuclear incoherent scattering and absorption should not significantly affect measurements using any of these materials, so we do not include them here. Values for scattering lengths are from [43; 44], while those for densities are from [64; 65].
### Sensitivity Projections
Sensitivity projection is considerably more involved for two-material targets; we describe our approximation of it in Appendix I.3. We assume the same neutron beam parameters as for the single-material case; see Section III.3 and Appendix H.2. The two-material analysis requires several different neutron scattering measurements, so, in this case, we assume that the \(10^{13}\) total scattered neutrons are divided evenly between them. We choose the porosity of each composite target such that 10% of neutrons are scattered by the minimum accepted angle (\(3\times 10^{-3}\) radians) or more, when both solid and gas are present, in a thickness of 0.1 cm. (The one exception to this is xenon and argon in 1 nm radius-grain silica, for which 0.1 cm thickness is never sufficient; in that case we assume a porosity of 0.5 and increase the thickness to reach 10% scattering.) The resulting sensitivity projections are shown in Figure 3.
We focus on the most promising of the readily-available target candidates: granular silica, with measurements taken from both xenon and argon within it, showing the resulting sensitivities for several grain radii. Sensitivities for the 10 nm grain case with helium in place of xenon and in place of argon are also shown for reference; notably, the latter performs comparably to the argon option. This is the result of two competing effects: helium's low scattering length density makes it difficult to resolve above the solid background, but its small atomic weight means that comparison to it removes less of the new force contribution of xenon than comparison to argon does; see Appendix I.3.
Figure 3 illustrates many of the same features discussed in the single-material case in Section III.3. However, the projected sensitivities are notably worse in the two-material case, due to the loss in statistics from dividing neutron flux between measurements and from eliminating the solid background. Nonetheless, scattering from silica with 10 nm radius grains surrounded by xenon and argon has the potential to surpass the reach of traditional, xenon-only scattering experiments by a factor of several for forces with ranges at or slightly above 10 nm; see Figure 4.
Sensitivity projections for solid materials other than silica were calculated but are not shown as they differed from the silica results only by order-unity factors. This is consistent with the comparable SLDs of all solids we considered
Figure 3: A comparison of projected sensitivities to new mass-coupled Yukawa forces for experiments following the design described in this work, using several different two-material target candidates. As discussed in Appendix D, the two-material measurement requires at least two different noble elements, so each projection is labeled by two noble elements and one porous solid. Detailed assumptions for each projection are discussed in the main text. Shown is scattering from xenon and argon within silica with spherical pores of radius 1, 10 and 100 nm, as well as from the 10 nm case with helium replacing either of the gases. Also shown are the regions of parameter space already excluded by previous experiments, namely [22, 23, 24, 15, 25], and the line corresponding to \(\kappa_{\rm new}=10^{-6}\) for xenon, as an illustration of the target systematic error we use throughout this work. Astrophysical constraints at these masses lie below the bottom edge of the plot, but are somewhat model-dependent; see Section V.
(see Table 1). We emphasize, however, that our projections in Figure 3 should be interpreted as conservative estimates, as there may exist better solid material candidates (i.e. granular solids with smaller SLDs) than we have considered. Thus the true potential reach of two-material neutron scattering likely lies somewhere between our one- and two-material projections.
## V Conclusion
We have presented an improved approach to constraining short-range forces using neutron scattering. By taking advantage of the enhancement of scattering at length scales comparable to target structures, neutrons can be made to preferentially scatter at small angles, where a new force may be most visible. The effects of such substructure can then be separated from those of a new force by combining measurements with different targets and by using X-ray scattering. The technique we describe could be implemented using a variety of different targets, including both single-element targets--which offer superior sensitivity but may be significantly more difficult to produce--and two-material targets.
Our estimates for spin-independent forces proportional to mass, assuming approximately one day of neutron beam time, are summarized in Figure 4; these projections can be generalized to other couplings (e.g. couplings to baryon number or baryon minus lepton number) simply by rescaling. We do not consider parametrically different forces (e.g. spin-dependent interactions) in this work. Such forces can be detected in scattering measurements using polarized neutrons, but we leave consideration of such approaches to future work.
Figure 4, like the two preceding figures, also shows the parameter space excluded by previous experiments, including both other neutron-based experiments [22, 23, 24, 15] and searches for Casimir forces between many-atom test masses [25]. A variety of other experiments have obtained limits that are now subdominant to those plotted for the force
Figure 4: A summary of the sensitivity to new mass-coupled Yukawa forces for the most promising single- and two-material targets considered in this work (see Figures 2 and 3). Here, the solid black region indicates the portion of parameter space excluded by previous experiments [22, 23, 24, 25, 15, 21] (exclusion from astrophysical observations lies below the bottom edge of the plot; see text), the two gray lines show the mass-coupling relations expected for new forces arising from \(U(1)\) gauge symmetries broken at the electroweak scale or at 10 TeV [2, 3], while the dotted black line corresponds to an estimate of the sensitivity that could be obtained using the conventional, uniform targets of previous experiments but assuming the \(10^{13}\) scattered neutrons that we use for our projections. Any parameter space below this dotted line reachable using structured targets then corresponds to the benefits of coherently enhanced low-angle scattering. Note that, while the additional achievable parameter space for the silica-based target is fairly small, this projection is quite conservative: significantly better sensitivity may be achievable using porous or granular solids with smaller SLDs; see Section IV.1.
ranges we consider; see e.g. [75; 76; 77; 24; 78; 79]. Other proposals to explore similar parameter space include [27; 80].
Our focus in this work has been on experimental searches for new forces, but we note that typical sources of new interactions are strongly constrained by astrophysical observations. In particular, measurements of stellar cooling generally restrict \(g^{2}\lesssim 10^{-24}\) GeV\({}^{-2}\) for all mediator masses we consider [81], excluding most models to which our proposal is sensitive. Note, however, that astrophysical bounds are generically quite model-dependent: for example, stellar cooling constraints on forces coupled to \(B-L\) are several orders of magnitude stronger, due to their interactions with electrons [81]; conversely, modifications to gravity due to extra dimensions will generally evade cooling constraints entirely. For a variety of particle physics models that can avoid standard astrophysical bounds, see [82; 83; 84; 85; 86; 87; 88]; although none of these models are immediately applicable to the new forces we consider, they illustrate the general model-dependence of such constraints. See also [89] for a discussion of a broader class of generally weaker astrophysical bounds on new forces.
Notably, the techniques discussed here are expected to achieve significantly improved sensitivity in the \(1-100\) nm regime even using only existing facilities and materials. Substantial additional sensitivity improvement over the entire \(0.1-100\) nm range should be possible through the development of appropriate granular materials.
###### Acknowledgements.
The authors would like to thank Daniel Hussey, Young Lee, and Thomas Weiss for helpful discussions about SANS and SAXS techniques and instruments. This work was supported by the Simons Investigator Award No. 824870, NSF Grant No. PHY-2014215, DOE HEP QuantISED Award No. 100495, the Gordon and Betty Moore Foundation Grant No. GBMF7946, and the U.S. Department of Energy (DOE), Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract No. DE-AC02-07CH11359. ZB is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1656518 and by the Robert and Marvel Kirby Fellowship and the Dr. HaiPing and Jianmei Jin Fellowship from the Stanford Graduate Fellowship Program. GG is supported, in part, by DoE grant DE-SC0017970.
## Appendix A Neutron Scattering from Atoms
In this section, we tabulate leading contributions to the neutron-atom scattering length, including both Standard Model backgrounds and the potential fifth force contribution. As discussed in the main text, we break these contributions into three categories: nuclear scattering, which arises due to quantum chromodynamics (QCD) effects; electromagnetic scattering, due to quantum electrodynamics (QED) effects; and new force scattering, due to an assumed new spin-independent force coupling neutrons to nucleons. A plot of these three scattering contributions is shown in Figure 5. Scattering due to weak interactions is negligible, so we will not discuss it here. More detailed reviews of neutron-atom interactions can be found in [90; 91].
Since we will be interested in the interference of scattering contributions (both within individual atoms and between distinct atoms), it is most convenient to work in terms of scattering lengths \(b(\mathbf{q}_{T})\), where \(\mathbf{q}_{T}\) is the momentum transfer and the differential cross-section is
\[\left.\frac{d\sigma}{d\Omega}\right|_{\mathbf{q}_{T}(\theta)=\mathbf{q}}=|b( \mathbf{q})|^{2}. \tag{10}\]
This will be appropriate when incident neutrons are accurately described as plane waves; we discuss the alternative in Appendix C.
### Nuclear Scattering
Nuclear scattering of neutrons from atoms arises due to the strong force, which has a range of only \(\mathcal{O}(1-10\) fm). Since the maximum momentum transfers that we consider in this work are \(\mathcal{O}(10\) nm\({}^{-1})\), we can model nuclear scattering as scattering from a delta function potential (the "Fermi pseudopotential") up to corrections of order \(\mathcal{O}((|\mathbf{q}_{T}|b_{\mathrm{nuc}})^{2})\lesssim\mathcal{O}(10^{-8})\) for momentum transfer \(\mathbf{q}_{T}\) and nuclear scattering length scale \(b_{\mathrm{nuc}}\)[92; 93; 94; 95; 96; 97]. This is sufficiently small that we will not be concerned with corrections to the delta function form in this work. Subject to this approximation, the nuclear scattering length is therefore angle-independent.
This scattering length can, however, depend on the orientation of the neutron's spin with respect to the nuclear spin. In particular, the most general expression we can write for it is
\[b_{\rm nuc}({\bf q}_{T})=b_{\rm nuc,c}+\sqrt{I(I+1)}b_{\rm nuc,i}\mathbf{\sigma}\cdot{ \bf I} \tag{10}\]
with \(\mathbf{\sigma}\) the neutron's spin, \({\bf I}\) the nuclear spin, and \(b_{\rm nuc,c}\) and \(b_{\rm nuc,i}\) constants that must be determined empirically. In this work, we focus in large part on noble element isotopes with zero nuclear spin, in which case the second term vanishes and nuclear scattering is fully isotropic.
Here, the subscripts \(c\) and \(i\) in the two components of the nuclear scattering length mark these as being coherent and incoherent contributions, respectively. As the name suggests, incoherent scattering contributions do not generally combine coherently in bulk targets, since different atoms' nuclear spins should not be significantly correlated in any systems we consider. This is discussed in more detail in Appendix C.
### Electromagnetic Scattering
Electromagnetic scattering of neutrons is significantly more complex than nuclear scattering. Here, we will not attempt to provide a complete description of the electromagnetic contributions to neutron-atom scattering, but will simply summarize the results. For a more detailed discussion, see, for example, [90; 96; 98].
The largest source of electromagnetic neutron-atom scattering is the interaction of the neutron with atoms' magnetic dipole moments. Including the contributions from electron spin, electron orbital angular momentum, and nuclear spin, this gives a total scattering length of
\[b_{\rm dipole}({\bf q}_{T})=\frac{g_{n}e^{2}}{8\pi m_{e}}\mathbf{\sigma}\cdot({ \bf 1}-\hat{\bf q}_{T}\hat{\bf q}_{T})\cdot\left(g_{e}f_{S}({\bf q}_{T}){\bf S}+f_ {L}({\bf q}_{T}){\bf L}+\frac{m_{e}}{m_{n}}g_{I}{\bf I}\right) \tag{11}\]
Figure 5: An illustration of the relative sizes of the three scattering probability contributions in (2)—nuclear, electromagnetic, and new force—for xenon gas, assuming a new force with \(\mu^{-1}=1\) nm and coupling \(g^{2}=10^{-19}\), near our projected sensitivity at this range. Also shown is a linear combination of the nuclear and electromagnetic contributions that attempts to reproduce the new force’s behavior, illustrating the inability of the new force contribution to be absorbed into the Standard Model fit parameters. Since the total scattering distribution is proportional to the target density and depth, the three solid lines shown here have been normalized so that \(dp/d\Omega=1\) for the nuclear contribution (at all angles, since it is angle-independent). Note that the electromagnetic and new force contributions shown here correspond to the interference terms between those forces’ contributions and the nuclear contribution, since this is their dominant effect.
where \(g_{n}\) is the neutron g-factor (\(g_{n}\approx-3.8\)[99]), \(\mathbf{\sigma}\) is the neutron's spin (with magnitude \(1/2\)), \(\mathbf{q}_{T}\) is the momentum transfer, \(\hat{\mathbf{q}}_{T}\) is a unit vector along the direction of the momentum transfer, \(g_{e}\) is the electron g-factor (\(g_{e}\approx 2\)), \(g_{I}\) is the nuclear g-factor of the target atom, \(\mathbf{S}\), \(\mathbf{L}\) and \(\mathbf{I}\) are the atom's total electron spin, electron orbital angular momentum, and nuclear spin, respectively, and the functions \(f_{S,L}(\mathbf{q})\) are form factors, defined below. (Note that we work in units where \(\varepsilon_{0}=1\), which leads to a factor of \(4\pi\) difference relative to, for example, [90].)
The form factors \(f_{S,L}(\mathbf{q})\) account for the spatial correlations of electron spins and angular momenta, which affect the relative phases of scattering contributions from different electrons. (Form factors are entirely analogous to structure factors, discussed in the main text and in Appendix C.) Thus, while scattering from all of the electrons will add coherently in low momentum transfer scattering (\(q_{T}r_{\text{atom}}\ll 1\)), this will not be the case at momentum transfers comparable to or larger than the inverse atomic radius. For spin, this form factor is defined by
\[f_{S}(\mathbf{q})\mathbf{S}=\left\langle\sum_{j}\mathbf{s}_{j}e^{i\mathbf{q} \cdot\mathbf{r}_{j}}\right\rangle \tag{10}\]
where the expectation value is over one atom, the sum is over the electrons, and the \(j\)'th electron has spin \(\mathbf{s}_{j}\) and position \(\mathbf{r}_{j}\). The definition of \(f_{L}\) is somewhat more involved [100] and we will not be concerned with its form in this work. Analogous form factors for the nucleus are not relevant for our purposes, as the nuclear radius is far smaller than the inverse momentum transfers we consider (see the discussion of nuclear scattering above).
Since the neutron is moving with respect to the atom, the magnetic field it sees also acquires a contribution from the Lorentz transformation of the atom's electric field. This leads to an additional scattering contribution known as the "Schwinger term," given by [101]
\[b_{\text{Schwinger}}(\mathbf{q}_{T})=-i\frac{g_{n}Ze^{2}}{8\pi m_{n}}(1-f( \mathbf{q}_{T}))\mathbf{\sigma}\cdot\hat{\mathbf{n}}\cot\theta \tag{11}\]
where \(Z\) is the atomic number of the atom, \(\hat{\mathbf{n}}\) is a unit vector along the cross product of the incident and outgoing neutron momenta, and \(f(\mathbf{q})\) is the atomic form factor,
\[f(\mathbf{q})Z=\left\langle\sum_{j}e^{i\mathbf{q}\cdot\mathbf{r}_{j}}\right\rangle. \tag{12}\]
This atomic form factor is reasonably well-approximated by
\[f(q)\approx\frac{1}{\sqrt{1+(q/q_{0})^{2}}} \tag{13}\]
where \(q_{0}\approx 11\)\(Z^{1/3}\) nm\({}^{-1}\). Note that, while \(\cot\theta\) diverges at small scattering angles, \(1-f(\mathbf{q}_{T}(\theta))\) approaches \(0\) as \(\theta\) goes to zero quickly enough (\(\propto\theta^{2}\)) for the Schwinger term to also go to zero at small angles [90].
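As a quick numerical illustration (ours, not part of the original analysis), the following Python sketch evaluates the approximate form factor above and the small-angle suppression \(1-f(q)\); the choice \(Z=54\) (xenon) and the sampled momentum transfers are assumptions made for the example.

```python
import numpy as np

def atomic_form_factor(q_nm_inv, Z):
    """Approximate atomic form factor f(q) ~ 1/sqrt(1 + (q/q0)^2),
    with q0 ~ 11 Z^(1/3) nm^-1, as quoted above."""
    q0 = 11.0 * Z ** (1.0 / 3.0)  # nm^-1
    return 1.0 / np.sqrt(1.0 + (q_nm_inv / q0) ** 2)

# Illustrative values for xenon (Z = 54): at small momentum transfer,
# 1 - f(q) ~ (q/q0)^2 / 2, which is what suppresses the Schwinger and
# charge-radius contributions at small angles.
for q in [0.1, 1.0, 10.0, 100.0]:  # nm^-1
    print(q, 1.0 - atomic_form_factor(q, 54))
```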
Neutrons can also scatter from purely electric fields, due to their internal charge distribution. Though the neutron is charge-neutral to extremely high precision and has a vanishing or negligible electric dipole moment [99], positive and negative charge densities within it may still be physically separated. Radial dependence of the charge density then leads to a potential depending on the Laplacian of the electric potential, and thus to a scattering contribution of the form [90; 98]
\[b_{E}(\mathbf{q}_{T})=-\frac{m_{n}Z}{3a_{0}m_{e}}\left\langle r_{n}^{2} \right\rangle(1-f(\mathbf{q}_{T})) \tag{14}\]
where \(a_{0}\) is the Bohr radius and \(\left\langle r_{n}^{2}\right\rangle\sim 10^{-1}\) fm\({}^{2}\)[99] is the neutron mean-square charge radius, defined as
\[\left\langle r_{n}^{2}\right\rangle=\int r^{2}\rho(\mathbf{r})\ d^{3}\mathbf{r} \tag{15}\]
with \(\rho(\mathbf{r})\) the charge density within the neutron.
We note that, historically, it was common to include one additional source of neutron scattering of the same form, known as the "Foldy term" (see e.g. [90; 102; 96]), though this is now understood to be incorrect [103; 104]. This change in understanding is of no phenomenological importance, however, as the Foldy term took a form identical to that of (14), and could thus be absorbed into the value of the (empirically determined) neutron charge radius.
Finally, there is one more noteworthy contribution to neutron-atom scattering, which arises due to the electric polarizability of the neutron. Though the neutron's electric dipole moment is known to be extremely small (or zero)
in the absence of external electric fields [99], it may acquire one in their presence. This leads to additional scattering of neutrons from electric fields, which can be shown to take the form [90]
\[b_{P}({\bf q}_{T})=-\sqrt{\frac{3}{\pi}}\frac{m\alpha_{n}(Ze)^{2}}{4\pi\sqrt{ \langle r_{n}^{2}\rangle}}\left(1+{\cal O}\left(\frac{\sqrt{\langle r_{n}^{2} \rangle}}{r_{A}}\right)+{\cal O}\left(q_{T}\sqrt{\langle r_{n}^{2}\rangle} \right)\right) \tag{10}\]
where \(\alpha_{n}\) is the neutron's electric polarizability (\(\sim 10^{-3}\) fm\({}^{3}\)[99]) and \(r_{A}\) is the atomic radius. Note that both \(\sqrt{\langle r_{n}^{2}\rangle}/r_{A}\) and \(q_{T}\sqrt{\langle r_{n}^{2}\rangle}\) are \(\lesssim 10^{-5}\), while the overall magnitude of \(b_{P}\) is around \(10^{-4}\)\(b_{\rm nuc}\). We can therefore ignore both of these terms at our systematic error target of \(10^{-6}\) (see Section I.1), in which case the polarizability scattering term is angle-independent and is indistinguishable from a change in the nuclear scattering length.
Throughout this discussion, we have omitted terms at higher order in \(m_{e}/m_{n}\). Since \((m_{e}/m_{n})^{2}\sim 3\times 10^{-7}\), such terms should generally be too small to matter for our purposes. However, the large value of \(Z\) for many targets of interest could potentially result in some such terms becoming significant. We leave the calculation of such higher-order terms to future work, noting that these are purely electromagnetic effects and should therefore be precisely calculable if necessary.
### New Force Scattering
The scalar-scalar fifth forces that we consider in this work correspond to a Yukawa potential of [14]
\[V(r)=-\frac{g^{2}M_{1}M_{2}}{r}e^{-\mu r} \tag{11}\]
with \(g\) the new force coupling, \(M_{1,2}\) the masses of the interacting particles, and \(\mu\) the mediator mass. The resulting neutron scattering length of an atom of atomic weight \(A\) (i.e. mass \(Am_{n}\)) is (see e.g. [22])
\[b_{\rm new}({\bf q}_{T})=\frac{m_{n}^{3}g^{2}A}{2\pi}\frac{1}{\mu^{2}+q_{T}^{2 }}. \tag{12}\]
It is convenient to define a relative strength
\[\kappa_{\rm new}=\frac{m_{n}^{3}g^{2}A}{2\pi\mu^{2}b_{0}} \tag{13}\]
of the new force scattering length \(b_{\rm new}\) to the sum of angle-independent Standard Model scattering lengths \(b_{0}\), where the latter is dominated by nuclear scattering but also receives contributions from electromagnetic scattering. (There is, of course, some ambiguity in this definition, since we can always subtract a constant from the angle-dependent scattering length and add it to \(b_{0}\). We define \(b_{0}\) explicitly after our discussion of how scattering is simplified for noble gases in Appendix C; see (13).) In terms of \(\kappa_{\rm new}\), we have
\[b_{\rm new}({\bf q}_{T})=\frac{\kappa_{\rm new}b_{0}}{1+(q_{T}/\mu)^{2}}. \tag{14}\]
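To make the size of this relative strength concrete, the short Python sketch below evaluates \(\kappa_{\rm new}\) and \(b_{\rm new}(q)/b_{0}\) directly from the expressions above, handling units via \(\hbar c\). The specific inputs (a xenon-like \(A=131\), \(b_{0}\approx 4.9\) fm, and a 1 nm-range force with \(g^{2}=10^{-19}\) GeV\({}^{-2}\)) are illustrative choices of ours rather than values taken from the text.

```python
import numpy as np

HBARC_GEV_NM = 1.9732698e-7   # hbar*c in GeV*nm
HBARC_GEV_FM = 0.19732698     # hbar*c in GeV*fm
M_N_GEV = 0.93957             # neutron mass in GeV

def kappa_new(g2_gev2, A, force_range_nm, b0_fm):
    """Relative new-force strength kappa_new = m_n^3 g^2 A / (2 pi mu^2 b0),
    with the mediator mass mu set by the force range and all quantities
    converted to natural (GeV) units."""
    mu = HBARC_GEV_NM / force_range_nm    # GeV
    b0 = b0_fm / HBARC_GEV_FM             # GeV^-1
    return M_N_GEV ** 3 * g2_gev2 * A / (2.0 * np.pi * mu ** 2 * b0)

def b_new_over_b0(q_nm_inv, kappa, force_range_nm):
    """Angular dependence b_new(q)/b0 = kappa_new / (1 + (q/mu)^2)."""
    return kappa / (1.0 + (q_nm_inv * force_range_nm) ** 2)

# Illustrative (assumed) inputs: xenon-like A = 131, b0 ~ 4.9 fm,
# a 1 nm-range force with g^2 = 1e-19 GeV^-2.
kappa = kappa_new(1e-19, 131, 1.0, 4.9)
print(kappa)                            # roughly 2e-6
print(b_new_over_b0(10.0, kappa, 1.0))  # suppressed ~100x at q = 10 nm^-1
```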
## Appendix B X-Ray Scattering from Atoms
The elastic X-ray scattering distribution from the electrons of a single atom, ignoring near-resonance effects, is given by [105; 106; 107]
\[\left.\frac{d\sigma}{d\Omega}\right|_{X}=\left(\frac{e^{2}}{4\pi m_{e}}\right) ^{2}\left|\left\langle f\left|\sum_{j}e^{i{\bf q}_{T}\cdot{\bf r}_{j}}\right| i\right\rangle\hat{\varepsilon}\cdot\hat{\varepsilon}^{\prime}-\frac{iE}{m_{e}} \left\langle f\left|\sum_{j}e^{i{\bf q}_{T}\cdot{\bf r}_{j}}\left(\frac{i{\bf q }_{T}\times{\bf p}_{j}}{E^{2}}\cdot{\bf A}+{\bf s}_{j}\cdot{\bf B}\right) \right|i\right\rangle\right|^{2} \tag{15}\]
where \(E\) is the photon energy, \(\varepsilon\) and \(\varepsilon^{\prime}\) are the incident and outgoing photon polarizations, \({\bf r}_{j}\), \({\bf p}_{j}\) and \({\bf s}_{j}\) are the position, momentum, and spin of the \(j\)'th charge, \(i\) (\(f\)) is the initial (final) atomic state, and \(A\) and \(B\) are matrices depending on \(\varepsilon\) and \(\varepsilon^{\prime}\) whose exact forms will not matter for our purposes. Scattering from the nucleus is described
by analogous terms [108]. As in the case of neutron scattering, these expressions simplify considerably for noble atoms, leaving only
\[\left.\frac{d\sigma}{d\Omega}\right|_{X}=\left(\frac{e^{2}}{4\pi m_{e}}Zf(q_{T})- \frac{Z^{2}e^{2}}{4\pi m_{\rm nuc}}\right)^{2}\left(\hat{\varepsilon}\cdot\hat{ \varepsilon}^{\prime}\right)^{2} \tag{10}\]
where \(f(q_{T})\) is the usual atomic form factor (see Appendix A), \(m_{\rm nuc}\) is the mass of the nucleus, and we are ignoring the finite size of the nucleus. If we assume unpolarized incident X-rays and sum over outgoing polarizations, the observed scattering distribution becomes
\[\left.\frac{d\sigma}{d\Omega}\right|_{X}=\left(\frac{e^{2}}{4\pi m_{e}}Zf(q_{T})-\frac{Z^{2}e^{2}}{4\pi m_{\rm nuc}}\right)^{2}\frac{1+\cos^{2}2\theta}{2}. \tag{11}\]
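As a rough numerical companion to this expression (our own sketch, with the approximate atomic form factor of Appendix A substituted in), the following Python snippet evaluates the unpolarized elastic X-ray cross-section for a closed-shell atom; the xenon inputs and the chosen scattering angle (the full angle between incident and outgoing directions) are assumptions made for illustration.

```python
import numpy as np

ALPHA = 1.0 / 137.036
HBARC_GEV_FM = 0.19732698
R_E_FM = 2.8179403          # classical electron radius, alpha/m_e, in fm

def xray_dsigma_domega_fm2(q_nm_inv, scattering_angle, Z, A):
    """Unpolarized elastic X-ray cross-section (fm^2/sr) for a closed-shell
    atom, combining electronic scattering (with the approximate form factor
    of Appendix A) and the nuclear Thomson term, and ignoring near-resonance
    effects and the finite nuclear size."""
    q0 = 11.0 * Z ** (1.0 / 3.0)                        # nm^-1
    f = 1.0 / np.sqrt(1.0 + (q_nm_inv / q0) ** 2)
    m_nuc_gev = A * 0.9315                              # rough nuclear mass
    b_fm = R_E_FM * Z * f - Z ** 2 * ALPHA / m_nuc_gev * HBARC_GEV_FM
    return b_fm ** 2 * (1.0 + np.cos(scattering_angle) ** 2) / 2.0

# Illustrative value for xenon (Z = 54, A ~ 131) at a small scattering angle.
print(xray_dsigma_domega_fm2(0.1, 3e-3, 54, 131))
```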
X-ray scattering in practice is significantly complicated by photoabsorption, with typical attenuation lengths of 10-1000 \(\mu\)m for 10 keV X-rays; higher-weight elements typically lead to stronger absorption [109]. While attenuation lengths at the higher end of this range are unlikely to be a problem for our proposal, the much shorter attenuation length of, for example, xenon would lead to essentially complete absorption of 10 keV X-rays for the targets we consider. X-ray attenuation lengths generically increase rapidly with photon energy, however, so this issue can be largely circumvented using higher-energy X-ray sources, at the cost of requiring measurements at smaller scattering angles in order to study the same momentum transfers. At 40 keV, X-rays have an attenuation length of several hundred micrometers in liquid xenon [109]; using X-rays at or above this energy should therefore be sufficient for our purposes.
## Appendix C Scattering from Structured Materials
In this appendix, we present a largely pedagogical introduction to structure factors; other sources on this topic include [39; 40; 41]. We begin by summarizing the structure factors of several target material configurations, beginning with relatively simple targets before considering the structured targets that are the focus of this work. In the first subsection of this appendix, we ignore the distinction between the coherent and total scattering lengths of an atom (i.e. the incoherent scattering length) for simplicity; while this is a reasonable approximation for noble gases, it is not for many other elements, so we return to the more general case in the second subsection. Finally, we present an estimate of the structure factor of a carbon nanotube forest: this serves both as an illustrative example of the behavior of structure factors, and provides an alternative (albeit likely inferior) structure to the grain-based targets that are the focus of this work.
We emphasize that the structure factors computed below are intended only as rough predictions in order to estimate the projected sensitivities of our proposal. The structure factors of actual targets will need to be measured in order to separate their effects from those of a new force, as discussed in Section III.2 and Appendix D.
### Structure Factors of Simple Geometries
Our focus in this work is scattering from noble gases, which have zero total electron spin and angular momentum. We further assume zero nuclear spin; while this is not true of all noble element isotopes, it holds for all of the isotopes we consider. Moreover, at the target temperatures relevant to our proposal, there are no significant excited state populations for any atoms we consider. Thus, in the absence of any internal state variation, scattering from individual noble atoms depends exclusively on the momentum transferred during the scattering process. We therefore begin by considering scattering from targets where every atom has the same scattering length, before returning to the more general case in the next subsection of this appendix.
Scattering lengths of different atoms within the target sum, though the differing path lengths corresponding to scattering from different target atoms lead to relative phase factors. In the limit of large distances to the neutron source and detector, the resulting total scattering length is then
\[b_{\rm tot}(\mathbf{q}_{T})=\sum_{j}b(\mathbf{q}_{T})e^{i\mathbf{q}_{T}\cdot \mathbf{r}_{j}}, \tag{12}\]
where the sum is over atoms at positions \(\mathbf{r}_{j}\).
The simplest relevant geometry for which we can evaluate the result of (12) is a large volume of ideal gas. Here, the "largeness" requirement is satisfied if all length scales of the target volume are much larger than the inverse
momentum transfers \(q_{T}^{-1}\) considered. If this is the case, the phase factors associated with scattering from each atom are independent and uniformly distributed, such that the expected total scattering length of the target is 0. The expected total cross-section, however, is given by the variance of this distribution,
\[\left.\frac{d\sigma}{d\Omega}\right|_{\text{tot}}=N|b(\mathbf{q}_{T})|^{2}, \tag{10}\]
with \(N\) the total number of atoms. The structure factor for this case is thus \(S(q_{T})=1\). This is the usual incoherent sum of scattering cross-sections, and is plotted in Figure 6 as "(No Structure)."
Note that, up to this point, we have implicitly assumed that the incident and outgoing neutrons are plane waves, such that the momentum transfer and scattering cross-section are well-defined. In practice, the finite momentum spread (or, equivalently, finite spatial extent) of neutrons complicates this result. Consider a neutron propagating along \(\hat{z}\) with transverse wavefunction
\[\psi_{\perp}(\mathbf{r}_{\perp})=\frac{1}{\sqrt{2\pi\Delta r_{\perp}^{2}}}e^{ -\mathbf{r}_{\perp}^{2}/(4\Delta r_{\perp}^{2})}; \tag{11}\]
we will ignore any \(z\)-dependence of the neutron wavefunction throughout this discussion since it does not affect the result in the limit of small momentum transfers, where \(\mathbf{q}_{T}\) is orthogonal to \(\hat{z}\); generalizing to larger angles is straightforward. The scattering probability is then
\[\left.\frac{dp}{d\Omega}\right|_{\text{tot}}=\left|\sum_{j}\psi_{\perp}( \mathbf{r}_{j,\perp})b(\mathbf{q}_{T})e^{i\mathbf{q}_{T}\cdot\mathbf{r}_{j}} \right|^{2}. \tag{12}\]
Figure 6: A comparison of the neutron scattering distributions of different targets, including a target with no structure, three targets consisting of spherical grains of xenon of radius 1, 10 or 100 nm, and three targets consisting of xenon or argon within spherical 10 nm-radius pores in silica or alumina. All of the targets are assumed to have depths that lead to 10% scattering at or above \(3\times 10^{-3}\) radians for a neutron wavelength of 0.6 nm, with the two-material targets’ porosities set to make that depth 0.1 cm, consistent with our assumptions in Section IV. The left and right limits of the plot correspond to scattering angles (\(2\theta\)) of \(3\times 10^{-3}\) and \(\pi\) radians, respectively.
To calculate the coherent prediction for this value, we take the standard continuum limit, such that
\[\begin{split}\left.\frac{dp}{d\Omega}\right|_{\text{coh}}&=\left|\int\psi_{\perp}(\mathbf{r}_{j,\perp})b(\mathbf{q}_{T})e^{i\mathbf{q}_{T}\cdot\mathbf{r}_{j}}\,dN\right|^{2}\\ &=8\pi|b(\mathbf{q}_{T})|^{2}\left(\Delta r_{\perp}nL\right)^{2}e^{-2q_{T}^{2}\Delta r_{\perp}^{2}},\end{split} \tag{100}\]
with \(n\) the atomic number density and \(L\) the depth of the target. Accounting for incoherent scattering, subject to our usual assumption that the resulting phase factors are entirely random, gives the usual contribution of \(nL|b(\mathbf{q}_{T})|^{2}\), for a total scattering distribution of
\[\left.\frac{dp}{d\Omega}\right|_{\text{tot}}=nL|b(\mathbf{q}_{T})|^{2}\left(1+ 8\pi(nL\Delta r_{\perp}^{2})e^{-2q_{T}^{2}\Delta r_{\perp}^{2}}\right). \tag{101}\]
In particular, while the scattering distribution of Gaussian neutrons is unchanged when \(q_{T}\Delta r_{\perp}\gg 1\), it is enhanced by an additional factor of \(8\pi nL\Delta r_{\perp}^{2}\) (effectively the number of atoms seen by a single neutron) for \(q_{T}\Delta r_{\perp}\ll 1\): scattering is "fully coherent" in this regime, even for targets with no structure whatsoever. We will assume that \(q_{T}\Delta r_{\perp}\gg 1\) throughout most of this work, but we note that this requirement may (or may not) constrain realistic implementations of our proposal due to the impact of multiple scattering events; this is discussed in Appendix E.
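The following Python sketch (ours) evaluates the enhancement factor in parentheses above for illustrative, assumed beam and target parameters, showing the crossover at \(q_{T}\Delta r_{\perp}\sim 1\).

```python
import numpy as np

def coherent_enhancement(q_nm_inv, n_nm3, L_nm, dr_perp_nm):
    """Factor multiplying n L |b|^2 in the structureless-target scattering
    probability for a neutron with Gaussian transverse width dr_perp:
    1 + 8 pi (n L dr_perp^2) exp(-2 q^2 dr_perp^2)."""
    return 1.0 + 8.0 * np.pi * n_nm3 * L_nm * dr_perp_nm ** 2 \
        * np.exp(-2.0 * (q_nm_inv * dr_perp_nm) ** 2)

# Hypothetical inputs: a gas with n ~ 1 nm^-3, depth L = 1 mm, and a
# 100 nm transverse coherence width.  The enhancement is enormous for
# q << 1/dr_perp and switches off exponentially once q * dr_perp >~ 1.
for q in [1e-3, 1e-2, 1e-1]:  # nm^-1
    print(q, coherent_enhancement(q, 1.0, 1e6, 100.0))
```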
The more typical source of non-trivial structure factors is the arrangement of the atoms in the target. Most of the geometries of interest in this work will be characterized by regions with similar length scales in all directions (the sole exception, carbon nanotubes, is discussed later in this appendix). Such isotropic geometries can be qualitatively understood by considering scattering from a sphere; the exact structure factors of generic geometries must be computed numerically.
The structure factor for a sphere of monatomic ideal gas with radius \(R\) is (see, for example, [40; 41]; we also derive this result, including corrections from incoherent scattering, in the next subsection)
\[S(q_{T})=\left(\frac{3(\sin(q_{T}R)-q_{T}R\cos(q_{T}R))}{(q_{T}R)^{3}}\right)^ {2}N+1, \tag{102}\]
with \(N\) the total number of atoms in the sphere. Here, the first term corresponds to coherent scattering from the sphere as a whole, while the (frequently omitted) second term accounts for incoherent scattering due to random variation in atom positions as discussed in the large-volume case above.
All targets that we consider will consist of many distinct grains. We will generally assume that these grains' positions are essentially uncorrelated on the scale of \(q_{T}^{-1}\), such that we can treat their sum in the same way as an ideal gas; we consider deviations from this assumption below. Thus, the structure factor of many, randomly-positioned spheres is still (102), with \(N\) understood to be the number of atoms per sphere rather than the total number of atoms in the target.
In practice, the grains of realistic targets are unlikely to have identical radii (or even, in most cases, identical shapes). It will therefore be convenient for our purposes to work with a version of (102) averaged over a small spread of radii:
\[S(q_{T})\approx\frac{12\pi}{9+2(q_{T}\overline{R})^{4}}n\overline{R}^{3}+1, \tag{103}\]
with \(n\) the number density of atoms within each grain. This expression is accurate to within a few percent for relevant \(q_{T}\) and \(R\), assuming 10% variation in \(R\) around its average \(\overline{R}\). This simplified form will be preferable for the approximate sensitivity projections we make in this paper; more precise calculations, using measured distributions of \(R\), can be performed numerically.
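As a numerical illustration of this approximate form (our own sketch, with assumed grain parameters), the following Python snippet evaluates it for hypothetical 10 nm xenon-like grains, showing the plateau near the number of atoms per grain at small \(q_{T}\) and the \(q_{T}^{-4}\) falloff at large \(q_{T}\).

```python
import numpy as np

def S_grain(q_nm_inv, R_nm, n_nm3):
    """Radius-averaged single-material grain structure factor quoted above:
    S(q) ~ 12 pi n R^3 / (9 + 2 (qR)^4) + 1."""
    x = q_nm_inv * R_nm
    return 12.0 * np.pi * n_nm3 * R_nm ** 3 / (9.0 + 2.0 * x ** 4) + 1.0

# Hypothetical xenon-like grains: n ~ 13.5 nm^-3 (roughly liquid-xenon
# density), R = 10 nm.  The coherent enhancement saturates near the number
# of atoms per grain, ~(4 pi / 3) n R^3, at small q and falls off as q^-4
# once q R >> 1.
for q in [0.03, 0.1, 0.3, 1.0, 3.0]:  # nm^-1
    print(q, S_grain(q, 10.0, 13.5))
```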
Since creating isolated grains of noble atoms is potentially quite difficult (though perhaps not impossible; see Section III.1), many of the targets we consider in this work consist of two separate materials: a solid that creates the granular structure, and a noble gas filling in the unoccupied space within that solid. There are two general categories of solids that could be used for this: porous materials, consisting of a single solid block with holes that are filled by the noble gas, and piles of grains, where the noble gas fills the spaces between the grains. Notably, these are largely equivalent from the perspective of coherent scattering: since coherent scattering from a uniform target is negligible for \(q_{T}\Delta r_{\perp}\gg 1\) (see (101)), the coherent scattering cross-section of a collection of grains filled with an ideal gas is equal to the coherent scattering cross-section from a target where the same ideal gas occupies all of space except those grains.
Predicting scattering distributions from solid targets is significantly more challenging, due to the non-trivial correlations between atomic positions. In the limiting case of scattering plane wave neutrons from a perfect crystal lattice,
scattering is infinitely peaked at momentum transfers that are integer multiples of the inverse lattice spacing, generally less than 1 nm. Since our interest in this work is in maximizing scattering at much smaller momentum transfers, we will instead focus on amorphous solids, whose lack of regular structure should dramatically reduce this source of coherence. Nonetheless, some degree of short-range order, and consequently some amount of coherent enhancement, is likely to persist. We will ignore this effect below, as it should not have a significant impact on our sensitivity projections so long as it is smaller than the coherent enhancement caused by the target's granular structure; while this appears likely, it should be checked empirically for any particular target solid candidate.
For targets consisting of multiple elements, the definition of the structure factor (5) must be generalized to
\[S(q_{T})=\frac{1}{\sum_{j}N_{j}|b_{j}(\mathbf{q}_{T})|^{2}}\left|\sum_{j}\sum_{k=1}^{N_{j}}b_{j}(\mathbf{q}_{T})e^{i\mathbf{q}_{T}\cdot\mathbf{r}_{j,k}}\right|^{2} \tag{109}\]
where the target contains \(N_{j}\) atoms of the j'th element, each with scattering length \(b_{j}(q_{T})\). Ignoring the effects of regular solid structure discussed above, this can be rewritten as (see e.g. [40; 41], or, again, the derivation in the next subsection)
\[S(q_{T})=\frac{\left|\int\limits_{R_{\text{gas}}}\left(\mathcal{S}_{\text{gas }}(\mathbf{q}_{T})-\mathcal{S}_{\text{solid}}(\mathbf{q}_{T})\right)e^{i \mathbf{q}_{T}\cdot\mathbf{r}}d^{3}\mathbf{r}\right|^{2}}{\sum_{j}N_{j}|b_{j}( \mathbf{q}_{T})|^{2}}+1, \tag{110}\]
where \(R_{\text{gas}}\) is the region filled with gas and \(\mathcal{S}_{\text{solid}}\) (\(\mathcal{S}_{\text{gas}}\)) is the scattering length density or "SLD" of the solid (gas), defined as \(\mathcal{S}(q_{T})=\sum_{j}n_{j}b_{j}(q_{T})\) with the sum over the distinct elements making up the solid (gas). In particular, coherent scattering from two-material targets is dependent on the difference between the SLDs of the two materials, and vanishes when they are equal. Taking as an example our usual geometry of isolated spherical grains, but now with a different material between the grains, we have
\[S(q_{T})\approx\frac{12\pi\overline{R}^{3}}{9+2(q_{T}\overline{R})^{4}}\frac{ f\left|\Delta\mathcal{S}\right|^{2}}{fn_{g}|b_{g}(\mathbf{q}_{T})|^{2}+(1-f) \sum_{j}n_{s,j}|b_{s,j}(\mathbf{q}_{T})|^{2}}+1, \tag{111}\]
where \(\Delta\mathcal{S}\) is the difference of the two materials' SLDs, \(f\) is the fraction of the total volume taken up by the gas, and the remaining sum is over the distinct elements making up the solid. (The apparent asymmetry in \(f\leftrightarrow 1-f\) here is a consequence of our assumption that the grain locations are uncorrelated, which can only hold exactly in the \(f\ll 1\) limit. Realistic geometries will have \(f\sim 1-f\), giving order-one corrections to this result.) Approximate scattering distributions for a selection of two-material targets of interest in this work are illustrated in Figure 6.
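The sketch below (ours) evaluates this two-material structure factor for one illustrative configuration; the xenon and silica inputs are rough values we assume for the example rather than entries taken from Table 1.

```python
import numpy as np

def S_two_material(q_nm_inv, R_nm, f, sld_gas, sld_solid,
                   n_b2_gas, n_b2_solid):
    """Two-material grain/pore structure factor quoted above.  SLDs are in
    fm/nm^3 and the incoherent normalizations n|b|^2 in fm^2/nm^3."""
    x = q_nm_inv * R_nm
    geometry = 12.0 * np.pi * R_nm ** 3 / (9.0 + 2.0 * x ** 4)
    contrast = f * (sld_gas - sld_solid) ** 2
    norm = f * n_b2_gas + (1.0 - f) * n_b2_solid
    return geometry * contrast / norm + 1.0

# Rough, illustrative inputs (not taken from Table 1): liquid-density xenon
# (n ~ 13.5 nm^-3, b ~ 4.9 fm) filling 10 nm pores in amorphous silica
# (SLD ~ 350 fm/nm^3, sum over n_j b_j^2 ~ 1900 fm^2/nm^3), porosity 0.5.
n_xe, b_xe = 13.5, 4.9
print(S_two_material(0.1, 10.0, 0.5,
                     sld_gas=n_xe * b_xe, sld_solid=350.0,
                     n_b2_gas=n_xe * b_xe ** 2, n_b2_solid=1900.0))
```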
### Coherent and Incoherent Scattering
The discussion of scattering above assumed that the scattering length of every atom in the sphere was equal. At minimum, this requires all of the atoms in the target to be the same isotope. Even for a single isotope, however, electromagnetic scattering depends on the electronic (and nuclear spin) state of the atom, which will vary from atom to atom in every system we consider; see Appendix A.2. It is easy to see that a uniform mixture of isotopes and states can be handled simply by replacing \(b\) and \(|b|^{2}\) in (109) by their average values; we consider effects that might lead to spatially correlated states in later appendices.
Throughout this section, we will restrict to scattering of unpolarized neutrons, for which the neutron spin \(\mathbf{\sigma}\) is uniformly distributed. Averaging over spins will therefore allow us to simplify our scattering distributions significantly.
For an isotropic medium, the expectation values of \(\mathbf{L}\), \(\mathbf{S}\) and \(\mathbf{I}\) are all zero. The spin average of \(\mathbf{\sigma}\cdot\hat{\mathbf{n}}\cot\theta\) is also zero for every scattering direction (but note that it is generally non-zero for any particular neutron, so Schwinger scattering is enhanced by structure factors; averaging over neutrons merely eliminates any interference between it and coherent scattering). Then the expectation value of the scattering length is given by the remaining terms from Appendix A:
\[\begin{split} b_{c}(\mathbf{q}_{T})&=\left\langle b _{n}(\mathbf{q}_{T})\right\rangle=b_{\text{nuc},c}+b_{E}(\mathbf{q}_{T})+b_{P} +b_{\text{new}}(\mathbf{q}_{T})\\ &=\left(b_{\text{nuc},c}-\sqrt{\frac{3}{\pi}}\frac{m\alpha_{n}( Ze)^{2}}{4\pi\sqrt{\left\langle r_{n}^{2}\right\rangle}}-\frac{m_{n}Z}{3a_{0}m_{e}} \left\langle r_{n}^{2}\right\rangle\right)+\left(\frac{m_{n}Z}{3a_{0}m_{e}} \left\langle r_{n}^{2}\right\rangle\right)f(q_{T})+\frac{m_{n}^{3}g^{2}A}{2\pi \mu^{2}}\frac{1}{1+(q_{T}/\mu)^{2}}\\ &=b_{0}\left(1+\kappa_{\text{EM}}f(q_{T})+\frac{\kappa_{\text{new }}}{1+(q_{T}/\mu)^{2}}\right)\end{split} \tag{112}\]
where we have split the expectation value of the scattering length (i.e. the coherent scattering length) into an angle-independent contribution
\[b_{0}=b_{\text{nuc,c}}-\sqrt{\frac{3}{\pi}}\frac{m\alpha_{n}(Ze)^{2}}{4\pi\sqrt{ \langle r_{n}^{2}\rangle}}-\frac{m_{n}Z}{3a_{0}m_{e}}\left\langle r_{n}^{2}\right\rangle \tag{103}\]
and two angle-dependent components: one from electromagnetic interactions, and one from any new force. Here,
\[\kappa_{\text{EM}}=\frac{m_{n}Z}{3a_{0}m_{e}b_{0}}\left\langle r_{n}^{2}\right\rangle \tag{104}\]
parametrizes the relative strength of electromagnetic scattering compared to nuclear scattering (the noble elements we consider typically have \(\kappa_{\text{EM}}\sim 10^{-2}\)), and \(\kappa_{\text{new}}\) (defined by (103)) does the same for the new force. The contributions of the three terms in (102) are plotted in Figure 5, although we do not absorb the angle-independent component of electromagnetic scattering into \(b_{0}\) in that figure (i.e. we plot the physical contribution \(b_{0}\kappa_{\text{EM}}(1-f(q_{T}))\) of electromagnetism, rather than the \(b_{0}\kappa_{\text{EM}}f(q_{T})\) that we use in most of the text for convenience).
For an anisotropic medium, the coherent scattering length will pick up additional terms proportional to the expectation values of \(\mathbf{L}\), \(\mathbf{S}\) and \(\mathbf{I}\). We will never need these terms, however: as we discuss in the main text, we do not expect scattering from solids to be sufficiently predictable anyway, so we will always use combinations of measurements in which solid scattering cancels out. This leaves only scattering from the gas (or perhaps liquid) component, which should be sufficiently isotropic on its own. We demonstrate that solids will not lead to significant anisotropy in the gas or liquid near their surface in Appendix G.
Polarized neutrons are less inherently problematic, but require somewhat more tedious calculations: preferential polarization along some direction leads to Schwinger scattering that depends on the angle of scattering around the beam axis. Since we focus on unpolarized neutron beams, however, we omit this contribution.
Incoherent scattering is significantly more complicated, since the various electromagnetic terms generally do not average to zero. It will be helpful to organize the scattering length contributions as follows:
\[b_{n}(\mathbf{q}_{T})=b_{c}(\mathbf{q}_{T})+b_{\text{Sch}}(\mathbf{q}_{T}) \boldsymbol{\sigma}\cdot\hat{\mathbf{n}}+b_{i,S}(\mathbf{q}_{T})\boldsymbol{ \sigma}\cdot(\mathbf{1}-\hat{\mathbf{q}}_{T}\hat{\mathbf{q}}_{T})\cdot\mathbf{ S}+b_{i,L}(\mathbf{q}_{T})\boldsymbol{\sigma}\cdot(\mathbf{1}-\hat{\mathbf{q}}_{T} \hat{\mathbf{q}}_{T})\cdot\mathbf{L}+b_{i,I}\boldsymbol{\sigma}\cdot(a\mathbf{ 1}-\hat{\mathbf{q}}_{T}\hat{\mathbf{q}}_{T})\cdot\mathbf{I} \tag{105}\]
where
\[b_{\text{Sch}}(\mathbf{q}_{T}) =-i\frac{g_{n}Ze^{2}}{8\pi m_{n}}(1-f(\mathbf{q}_{T}))\cot\theta \tag{106a}\] \[b_{i,S}(\mathbf{q}_{T}) =\frac{g_{n}g_{e}e^{2}}{8\pi m_{e}}f_{S}(\mathbf{q}_{T})\] (106b) \[b_{i,L}(\mathbf{q}_{T}) =\frac{g_{n}e^{2}}{8\pi m_{e}}f_{L}(\mathbf{q}_{T})\] (106c) \[b_{i,I} =\frac{g_{n}g_{I}e^{2}}{8\pi m_{n}}\] (106d) \[a =1+\sqrt{I(I+1)}\frac{b_{\text{nuc,}i}}{b_{i,I}} \tag{106e}\]
with the various constants and functions defined in Appendix A.2. Note that \(a\) can be much larger than \(1\) for some atoms, but is exactly equal to one for atoms with zero nuclear spin, which includes all of the most promising target gases considered in this work.
The expectation value for the norm squared of the scattering length receives contributions from the square of each of these, as well as from any cross-terms that are non-zero. Fortunately, many of these terms turn out to be negligible for the situations we consider. Isotropic targets and unpolarized neutrons make all of the cross-terms including \(b_{c}\) zero. (This is precisely what we assumed when discussing the coherent scattering length above.) The same holds for cross-terms involving \(b_{\text{Sch}}(\mathbf{q}_{T})\). This leaves only cross-terms from terms that depend on the target atoms' electronic and nuclear spin states.
The largest of these cross-terms is
\[2b_{i,S}(\mathbf{q}_{T})b_{i,L}(\mathbf{q}_{T})\left\langle(\boldsymbol{ \sigma}\cdot(\mathbf{1}-\hat{\mathbf{q}}_{T}\hat{\mathbf{q}}_{T})\cdot \mathbf{S})\left(\boldsymbol{\sigma}\cdot(\mathbf{1}-\hat{\mathbf{q}}_{T}\hat {\mathbf{q}}_{T})\cdot\mathbf{L})\right\rangle=\frac{1}{9}b_{i,S}(\mathbf{q}_{ T})b_{i,L}(\mathbf{q}_{T})\left\langle\mathbf{S}\cdot\mathbf{L}\right\rangle \tag{107}\]
where the right hand side has been averaged over neutron spin and assumes an isotropic target, leaving only the expectation value over that target. (Recall that \(|\boldsymbol{\sigma}|=1/2\) in our conventions.) There are analogous terms for the
cross-terms including \(b_{i,I}\), but in practice the expectation values of \(\left\langle\mathbf{S}\cdot\mathbf{I}\right\rangle\) and \(\left\langle\mathbf{L}\cdot\mathbf{I}\right\rangle\) are suppressed by the small value of the hyperfine coupling compared to the target temperature, which renders these cross-terms negligible for the targets we consider.
With these simplifications, and performing the rest of the spin averages, we find, for a single atom,
\[\begin{split}\left|b_{i}(\mathbf{q}_{T})\right|^{2}=\left\langle \left|b_{n}(\mathbf{q}_{T})\right|^{2}\right\rangle-\left|b_{c}(\mathbf{q}_{T })\right|^{2}-\frac{1}{12}\left|b_{\mathrm{Sch}}(\mathbf{q}_{T})\right|^{2}=& \frac{1}{18}\sqrt{S(S+1)}b_{i,S}(\mathbf{q}_{T})^{2}+\frac{1}{18} \sqrt{L(L+1)}b_{i,L}(\mathbf{q}_{T})^{2}\\ &+\frac{3a^{2}-2a+1}{36}\sqrt{I(I+1)}b_{i,I}^{2}+\frac{1}{9}b_{i,S}(\mathbf{q}_{T})b_{i,L}(\mathbf{q}_{T})\left\langle\mathbf{S}\cdot\mathbf{ L}\right\rangle.\end{split} \tag{103}\]
Note that the \(b_{\mathrm{Sch}}(\mathbf{q}_{T})^{2}\) term is suppressed by \((Zm_{e}/m_{N})^{2}\) relative to the leading-order electromagnetic terms. While this makes it a small correction to the total scattering length, the large values of \(Z\) for target atoms we consider, and the fact that it is coherently enhanced in structured materials, make it potentially non-negligible.
In much of this work, we restrict to scattering from noble gases with zero nuclear spin. In this case, the incoherent scattering length is simply zero, which is why we generally ignore the distinction between coherent and total scattering length. Note, however, that the scattering lengths throughout this text should be understood to include the small Schwinger correction,
\[\Delta b(\mathbf{q}_{T})=\frac{1}{\sqrt{12}}b_{\mathrm{Sch}}(\mathbf{q}_{T})= -i\frac{g_{n}Ze^{2}}{16\sqrt{3}\pi m_{n}}(1-f(\mathbf{q}_{T}))\cot\theta, \tag{104}\]
which behaves like a coherent scattering length but must always be added to scattering probabilities in quadrature after averaging over neutron polarizations (as well as due to its relative factor of \(i\)).
We now return to scattering from a collection of spherical grains. Our estimate above of the resulting scattering distribution (the radius-averaged grain structure factor) did not distinguish between coherent and incoherent scattering lengths. We can correct this by more carefully evaluating the total scattering length of a sphere of ideal gas:
\[\left\langle\left|b_{\mathrm{sphere}}(\mathbf{q}_{T})\right|^{2}\right\rangle =\left|\left\langle b_{\mathrm{sphere}}(\mathbf{q}_{T})\right\rangle \right|^{2}+\mathrm{Var}\left(b_{\mathrm{sphere}}(\mathbf{q}_{T})\right). \tag{105}\]
As discussed above, the expectation value of the total scattering length of one sphere can be evaluated in terms of the integrated SLD:
\[\begin{split}\left\langle b_{\mathrm{sphere}}(\mathbf{q}_{T}) \right\rangle&=\int\mathcal{S}(\mathbf{q}_{T})e^{i\mathbf{q}_{T} \cdot\mathbf{r}}d^{3}\mathbf{r}\\ &=\mathcal{S}(\mathbf{q}_{T})\iint\limits\,2\pi r^{2}\sin\theta \ e^{i\mathbf{q}_{T}\cdot\mathbf{r}}\ dr\ d\theta\\ &=\frac{4\pi(\sin(q_{T}R)-q_{T}R\cos(q_{T}R))}{q_{T}^{3}}\mathcal{ S}(\mathbf{q}_{T}).\end{split} \tag{106}\]
Note that, since this corresponds to coherent scattering, only the coherent scattering length should be included in the SLD, i.e. \(\mathcal{S}(\mathbf{q}_{T})=nb_{c}(\mathbf{q}_{T})\).
The variance of the total scattering length of the sphere conversely corresponds to incoherent scattering and, as discussed previously, should simply be given by the sum, over all of the atoms within it, of the squared norm of each atom's total scattering length. Thus
\[\left\langle\left|b_{\mathrm{sphere}}(\mathbf{q}_{T})\right|^{2}\right\rangle =\left(\frac{4\pi(\sin(q_{T}R)-q_{T}R\cos(q_{T}R))}{q_{T}^{3}}n\right)^{2}|b_{ c}(\mathbf{q}_{T})|^{2}+\frac{4\pi}{3}nR^{3}|b_{n}(\mathbf{q}_{T})|^{2}. \tag{107}\]
We are interested in the structure factor of a target containing many such spheres; assuming that these sum incoherently, the total scattering lengths of each sphere should add in quadrature, which does not affect the structure factor of the target. Armed with this result, and performing the same radius-averaging as above, correcting the grain structure factor quoted earlier is straightforward:
\[S(q_{T})\approx\frac{12\pi}{9+2(q_{T}\overline{R})^{4}}n\overline{R}^{3}\frac {|b_{c}(\mathbf{q}_{T})|^{2}}{|b_{n}(\mathbf{q}_{T})|^{2}}+1. \tag{108}\]
When grains are closely spaced (e.g. in highly porous materials) and for small \(q_{T}\) (e.g. \(q_{T}^{-1}\sim 30\) nm, near the limit of what we consider), there may be some degree of coherence even between the grains of the target, as their positions
become correlated on scales of \(q_{T}^{-1}\). Similarly to the effect of short-range correlations in amorphous solids discussed previously, this is likely to have some effect on the total scattering distribution but should not significantly change our conclusions so long as it does not prevent an order-unity fraction of neutrons from scattering at small angles.
We can now extend this approach to two-material targets in which one of the materials is arranged in spherical grains or pores; above, we merely cited the result (105). The expectation value of the total scattering length is now
\[\begin{split}\langle b_{\rm tot}(\mathbf{q}_{T})\rangle& =\int\limits_{R_{\rm gas}}\mathcal{S}_{\rm gas}(\mathbf{q}_{T})e^{i \mathbf{q}_{T}\cdot\mathbf{r}}d^{3}\mathbf{r}+\int\limits_{R_{\rm solid}} \mathcal{S}_{\rm solid}(\mathbf{q}_{T})e^{i\mathbf{q}_{T}\cdot\mathbf{r}}d^{3 }\mathbf{r}\\ &=\int\limits_{R_{\rm gas}}\left(\mathcal{S}_{\rm gas}(\mathbf{q }_{T})-\mathcal{S}_{\rm solid}(\mathbf{q}_{T})\right)e^{i\mathbf{q}_{T}\cdot \mathbf{r}}d^{3}\mathbf{r}+\int\limits_{R_{\rm total}}\mathcal{S}_{\rm solid }(\mathbf{q}_{T})e^{i\mathbf{q}_{T}\cdot\mathbf{r}}d^{3}\mathbf{r}.\end{split} \tag{106}\]
As discussed above, the last term is suppressed as \(\exp(-q_{T}^{2}\Delta r_{\perp}^{2}/2)\) for neutrons with gaussian transverse profile of width \(\Delta r_{\perp}\). We assume throughout this work that this renders it irrelevant for all momentum transfers we wish to observe; the potentially significant effects of this term through multiple scattering events are the subject of Appendix E. Then
\[\langle b_{\rm tot}(\mathbf{q}_{T})\rangle=\int\limits_{R_{\rm gas}}\left( \mathcal{S}_{\rm gas}(\mathbf{q}_{T})-\mathcal{S}_{\rm solid}(\mathbf{q}_{T} )\right)e^{i\mathbf{q}_{T}\cdot\mathbf{r}}d^{3}\mathbf{r}. \tag{107}\]
The variance in the total scattering length can be broken into three contributions: the variance of the total scattering length of the gas, the variance of the total scattering length of the solid, and the covariance of those two scattering lengths. We assume, as usual, that the first two are given simply by the sums of the squares of the atomic scattering lengths; the third is discussed in Appendix G, where we show that it should be negligible. We can then immediately write down the structure factor of the two-material target:
\[S(q_{T})\approx\frac{12\pi\overline{R}^{3}}{9+2(q_{T}\overline{R})^{4}}\left( \frac{f\left|\Delta\mathcal{S}\right|^{2}}{fn_{g}|b_{g}(\mathbf{q}_{T})|^{2}+( 1-f)\sum_{j}n_{s,j}|b_{s,j}(\mathbf{q}_{T})|^{2}}\right)+1. \tag{108}\]
This looks identical to the form we obtained previously (104), but it should now be understood that the SLDs in the numerator include only the coherent scattering lengths, while the scattering lengths in the denominator are the total scattering lengths (which may be quite different for non-noble elements).
### Structure Factors of Nanotube Forests
There is one other geometry considered in this work: a collection of tubes, such as that of a carbon nanotube forest. We will therefore be interested in calculating the structure factor for this geometry as well. We will calculate it for the single-material case; generalizing to the (more applicable) two-material case can be straightforwardly done as above.
Consider an array of long, thin tubes approximately aligned with the neutron beam axis \(z\). Let one tube have length \(L\) and a circular cross-section of radius \(R\), and let its center line be described by the function \(x(z)\hat{\mathbf{x}}+y(z)\hat{\mathbf{y}}\). If we assume that the tubes are, on average, radially symmetric about \(z\), we can take \(x\) to be the momentum transfer direction without loss of generality, in which case the total coherent scattering length from one tube is approximately
\[\langle b_{\rm tube}(\mathbf{q}_{T})\rangle=\iiint dz\;rdr\;d\varphi\;ne^{iq_{T }(x(z)+r\sin\varphi)}b_{c}(\mathbf{q}_{T}). \tag{109}\]
Note that the effects of the tube's transverse size are completely separable from the effects of the variation in the centerline position:
\[\langle b_{\rm tube}(\mathbf{q}_{T})\rangle=nb_{c}(\mathbf{q}_{T})\iint rdr \;d\varphi\;e^{iq_{T}r\sin\varphi}\int dz\;e^{iq_{T}x(z)}. \tag{110}\]
The first factor, from the tube cross-section, is analytically expressible in terms of a Bessel function:
\[\iint rdr\;d\varphi\;e^{iq_{T}r\sin\varphi}=\frac{2\pi R}{q_{T}}J_{1}\left(q_{ T}R\right). \tag{111}\]
The second factor, however, is strongly dependent on the details of the tube shape. In general, this must be calculated numerically for a chosen target geometry. For the purpose of obtaining an estimate of the sensitivities we can achieve, however, it is helpful to compute an approximate result for it.
If the variation in \(x(z)\) is large compared to the inverse momentum transfers considered, we can approximate the center line integral using the stationary phase approximation. In this case,
\[\int dz\ e^{iq_{T}x(z)}\approx\sum_{z_{0}\in\Sigma}e^{iq_{T}x(z_{0})}e^{i\frac{ \pi}{4}\text{sign}(x^{\prime\prime}(z_{0}))}\sqrt{\frac{2\pi}{q_{T}x^{\prime \prime}(z_{0})}} \tag{126}\]
where \(\Sigma\) is the set of points \(z_{0}\) satisfying \(x^{\prime}(z_{0})=0\). Intuitively, this corresponds to assuming that the rapidly varying phases along slanted portions of the tube average to zero, leaving only contributions from around the points where the tube is temporarily parallel to the \(z\) axis.
At this point, it is helpful to switch to evaluating the norm squared of the coherent scattering length, \(\left|\langle b_{\text{tube}}(q_{T})\rangle\right|^{2}\), which allows us to simplify these expressions further. Averaging the first factor squared over some variation in \(R\) (just as we did for spheres) allows us to make the approximate replacement
\[\left|\iint rdr\ d\varphi\ e^{iq_{T}r\sin\varphi}\right|^{2}\approx\frac{(\pi R ^{2})^{2}}{1+0.7(q_{T}R)^{3}}. \tag{127}\]
While not exact, this expression is convenient and will be good enough for the order-of-magnitude sensitivity estimate we wish to obtain.
Since the \(x\) coordinates of the points \(z_{0}\) are essentially random on the scale of \(q_{T}^{-1}\), the contributions from each point add in quadrature (i.e. incoherently) even for the coherent scattering contribution. (This is analogous to our assumption of spheres summing incoherently above, with similar caveats about small \(q_{T}\).) The second factor then simplifies to
\[\left|\int dz\ e^{iq_{T}x(z)}\right|^{2}\approx\sum_{z_{0}\in\Sigma}\frac{2\pi }{q_{T}x^{\prime\prime}(z_{0})}. \tag{128}\]
This is still a somewhat awkward expression, so it is helpful to rewrite it in terms of more intuitive variables. If we let \(\lambda\) and \(A\) be the typical wavelength and amplitude of the tube's center line undulation, the integral can be approximated, very roughly, by
\[\left|\int dz\ e^{iq_{T}x(z)}\right|^{2}\sim\frac{L\lambda}{q_{T}A}. \tag{129}\]
Combining these two approximations, we have a total coherent scattering length per tube of
\[\left|\langle b_{\text{tube}}(q_{T})\rangle\right|^{2}\sim\frac{(\pi R^{2})^{ 2}}{1+0.7(q_{T}R)^{3}}\frac{L\lambda}{q_{T}A}(nb_{c}(q_{T}))^{2}. \tag{130}\]
If carbon nanotubes' atomic structure were highly irregular, the incoherent scattering contribution would be given by a sum of \(|b_{n}(\mathbf{q}_{T})|^{2}\) over all of the atoms in the tube. For a forest of tubes, we assume, just as for the sphere case, that the individual tubes' phases are uncorrelated. Then the structure factor of the full forest would be the same as that of a single tube,
\[S(\mathbf{q}_{T})\sim\frac{\pi nR^{2}\lambda}{1+0.7(q_{T}R)^{3}}\frac{1}{q_{T }A}\frac{|b_{c}(\mathbf{q}_{T})|^{2}}{|b_{n}(\mathbf{q}_{T})|^{2}}+1. \tag{131}\]
In fact, carbon nanotubes are likely to have highly regular atomic structures, which may significantly modify this result. Accounting for this is beyond the scope of this work (and is likely to require measurements of particular nanotube forests), but we note that, as usual, this should not meaningfully affect our conclusions unless it changes the dominant angular scale of scattering from the forest.
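For a rough sense of scale, the Python sketch below (ours) evaluates this order-of-magnitude estimate for a purely hypothetical set of forest parameters; none of the numerical inputs are taken from the text.

```python
import numpy as np

def S_forest(q_nm_inv, R_nm, lam_nm, A_nm, n_nm3, bc2_over_bn2=1.0):
    """Order-of-magnitude nanotube-forest structure factor quoted above:
    S(q) ~ pi n R^2 lambda / ((1 + 0.7 (qR)^3) q A) * |b_c/b_n|^2 + 1."""
    x = q_nm_inv * R_nm
    coherent = (np.pi * n_nm3 * R_nm ** 2 * lam_nm
                / ((1.0 + 0.7 * x ** 3) * q_nm_inv * A_nm))
    return coherent * bc2_over_bn2 + 1.0

# Purely hypothetical forest parameters: 5 nm tube radius, undulation
# wavelength 100 nm and amplitude 20 nm, carbon number density ~100 nm^-3
# (corresponding to a ~2 g/cm^3 skeletal density, as assumed in Table 1).
for q in [0.03, 0.1, 0.3, 1.0]:  # nm^-1
    print(q, S_forest(q, 5.0, 100.0, 20.0, 100.0))
```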
## Appendix D Separating Scattering Contributions with Two-Material Targets
Many of the targets that we consider in this work combine a solid responsible for the target's granular structure with a noble liquid or gas within that structure. This significantly complicates the task of separating features of the
scattering distribution that are the result of the target structure from any that are the result of a new force. We will not attempt to illustrate this process explicitly (though a crude approximation is presented in Appendix A), as we did for the single-material case above; in practice this will need to be done numerically. In this section, we demonstrate merely that this is possible: that is, that enough parameters can be measured to constrain every pertinent degree of freedom, including the two (coupling and mass) for the new force.
The generic neutron scattering distribution from such a solid-gas combination is (see Appendix A)
\[\begin{split}\frac{d\sigma_{n,2}}{d\theta}=&\Bigg{(} \left|B_{0}(q_{T})+b_{0}\left(1+\kappa_{\text{EM}}f(q_{T})+\kappa_{\text{new}} \frac{1}{1+(q_{T}/\mu)^{2}}\right)W(q_{T})\right|^{2}\\ &+\ \left|(\mathbf{B}_{S}(q_{T})-ib_{S,0}(1-f(q_{T}))\cot\theta\ W(q_{T}) \mathbf{\hat{n}})\cdot\mathbf{\sigma}\right|^{2}\Bigg{)}\sin\theta\end{split} \tag{104}\]
where
\[W(q_{T})=\sum_{j=1}^{N_{\text{gas}}}e^{i\mathbf{q}_{T}\cdot\mathbf{r}_{j}} \tag{105}\]
is the sum of the scattering phases from the atoms in the gas (such that, if the gas remained in place but there was no solid, the structure factor would be \(S^{\prime}(q_{T})=|W(q_{T})|^{2}/N_{\text{gas}}\)),
\[b_{S,0}=\frac{g_{n}Ze^{2}}{8\pi m_{n}} \tag{106}\]
is the magnitude of the gas's Schwinger scattering length, \(B_{0}(q_{T})\) is an unknown total spin-independent scattering length for the solid (already summed over all the atoms) and \(\mathbf{B}_{S}(q_{T})\cdot\mathbf{\sigma}\) is the analogous spin-dependent total scattering length, which sums coherently with the spin-dependent Schwinger scattering length of the noble gas. We do not assume that either \(B_{0}\) or \(B_{S}\) takes any particular form or value, as they are likely to both depend heavily on the complicated interactions within the solid. We do, however, assume that the electronic structures of the noble gas atoms are not significantly affected by the presence of the solid (or by the likely high pressure of the gas); this is confirmed in Appendix G.
For simplicity, we begin by assuming that the spin-dependent terms are negligible; we reintroduce them below. In this simplified case, we have
\[\begin{split}\frac{d\sigma_{n,2}}{d\theta}=&\left(\left|B_{0}(q_{T})\right|^{2}+b_{0}^{2}\left(1+\kappa_{\text{EM}}f(q_{T})+\kappa_{\text{new}}\frac{1}{1+(q_{T}/\mu)^{2}}\right)^{2}\left|W(q_{T})\right|^{2}\right.\\ &+\ 2\text{Re}\left(B_{0}(q_{T})W(q_{T})^{*}\right)b_{0}\left(1+\kappa_{\text{EM}}f(q_{T})+\kappa_{\text{new}}\frac{1}{1+(q_{T}/\mu)^{2}}\right)\right)\sin\theta.\end{split} \tag{107}\]
We can separate the various parameters in this expression by taking a series of measurements. In particular, consider measurements performed using \(P\) different particles (neutrons, X-rays, electrons, etc.; we will generally restrict to \(P=2\) in this work, assuming neutron and X-ray scattering) and \(Q\) different noble gases, with a shared solid. Then we can measure the following set of scattering distributions:
* \(PQ\) distributions for each of the \(P\) particles scattered from the solid filled with each of the \(Q\) noble elements;
* \(P\) distributions scattering each particle from the solid alone; and
* \((P-1)Q\) distributions for each of the \(P\) particles, except neutrons, scattered from the \(Q\) noble elements alone.
We do not include neutron scattering from the gas alone here since, as we discussed for the single-material case in Section III.2, it suffers from poor statistics at small angles; it is, however, still potentially useful at large angles, as we discuss below. We can thus take a total of
\[N_{\text{measure}}=2PQ+P-Q \tag{108}\]
independent measurements.
This should be compared to the number of degrees of freedom in the scattering distributions above:
* One sum of phase factors \(W(q_{T})\). This sum is complex, but there is an arbitrary overall phase corresponding to the choice of origin (i.e. only the relative phases between \(W\) and each \(B\) appear in scattering distributions), so we can take \(W(q_{T})\) to be real without loss of generality.
* \(Q\) real atomic form factors \(f(q_{T})\) for the noble elements.
* \(Q\) real neutron nuclear scattering lengths for the noble elements.
* \(P\) complex solid scattering lengths (\(B_{n,0}(q_{T})\), \(B_{X,0}(q_{T})\), etc.).
* One real, angle-independent electromagnetic scattering length scale for neutrons (i.e. \(b_{0}\kappa_{EM}\)).
* \(P-1\) angle-independent scattering length scales for each of the other particles. We will assume that each of these particles' scattering distributions is fully described by some combination of this length scale, the atomic form factor, and the scattering angle, as is the case for X-ray scattering: \[\left.\frac{d\sigma}{d\Omega}\right|_{X}=\left(\frac{Ze^{2}}{4\pi m_{e}}\right) ^{2}\left(f(q_{T})-\frac{Zm_{e}}{m_{\rm nuc}}\right)^{2}\frac{1+\cos^{2}2 \theta}{2}.\] (100) Generalizing this discussion to more complicated scattering lengths is straightforward, but we will find below that neutron and X-ray scattering alone should be sufficient for our purposes.
* Two angle-independent parameters describing the new force: the coupling \(g\) and the mass \(\mu\).
We thus find a total of
\[N_{{\rm dof},\theta}=2P+Q+1 \tag{101}\]
degrees of freedom per angular bin, plus the \(P+Q+2\) angle-independent parameters. Of these, the \(Q\) neutron scattering lengths can be measured using the otherwise-unhelpful gas-only neutron scattering measurements discussed above, as they are both inherently angle-independent and not suppressed by other effects at large angles.
Assuming we have at least \(P+2\) bins per measurement, we can conservatively treat the remaining angle-independent parameters as one additional angle-dependent degree of freedom. Note that at least 2 of these bins must be at small angles, since the new force parameters are suppressed at larger momentum transfers; this minimum may be increased if any of the scattered particles have distributions that are similarly peaked. We will assume that there are sufficient bins for this to be the case; see Appendix H.1.
In order to detect a new force, we need \(N_{\rm measure}\geq N_{{\rm dof},\theta}+1\), with the 1 accounting for these angle-independent parameters. This can be achieved with at least 2 particles and at least 2 noble gases. The most promising candidates are likely neutron and X-ray scattering from xenon and argon (see Section IV.1), though there may be circumstances in which other options are preferable.
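As a quick cross-check of this counting, the short sketch below (in Python; the choices of \(P\) and \(Q\) are purely illustrative) tabulates the number of available measurements against the required degrees of freedom, including the pessimistic case with the extra spin-dependent cross-term discussed below:

```python
# Cross-check of the counting above: N_measure = 2PQ + P - Q available
# measurements versus N_dof_theta = 2P + Q + 1 angle-dependent degrees of
# freedom per bin, plus one effective extra degree of freedom for the
# remaining angle-independent parameters. The optional flag adds the one
# additional per-bin degree of freedom from the spin-dependent cross-term.

def n_measure(P, Q):
    return 2 * P * Q + P - Q

def n_dof_theta(P, Q, spin_cross_term=False):
    return 2 * P + Q + 1 + (1 if spin_cross_term else 0)

for P in (1, 2, 3):
    for Q in (1, 2, 3):
        ok = n_measure(P, Q) >= n_dof_theta(P, Q) + 1
        ok_spin = n_measure(P, Q) >= n_dof_theta(P, Q, spin_cross_term=True) + 1
        print(f"P={P}, Q={Q}: N_measure={n_measure(P, Q):2d}, "
              f"separable={ok}, with spin cross-term={ok_spin}")
```

In particular, two particles and two noble gases suffice in the optimistic case, while the pessimistic case requires a third particle or a third gas, as discussed below.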
We now return to the full scattering distribution (100), including spin-dependent scattering. This leads to three new terms, which can be handled in different ways:
* After averaging over neutron polarizations, \(\left|{\bf B}_{S}(q_{T})\cdot\boldsymbol{\sigma}\right|^{2}\) acts simply as a correction to the pure-solid \(|B_{0}(q_{T})|^{2}\) term; it can therefore be absorbed into the spin-independent description by a replacement of \(B_{0}\) with \(B_{0}^{\prime}\) such that \(|B_{0}^{\prime}(q_{T})|^{2}=|B_{0}(q_{T})|^{2}+\overline{\left|{\bf B}_{S}(q_{ T})\cdot\boldsymbol{\sigma}\right|^{2}}\) while \(2{\rm Re}\left(B_{0}(q_{T})W(q_{T})^{*}\right)=2{\rm Re}\left(B_{0}^{\prime}(q _{T})W(q_{T})^{*}\right)\). This term therefore does not require any modification to the analysis above.
* \(\left|ib_{S,0}(1-f(q_{T}))\cot\theta\;W(q_{T})\hat{\bf n}\cdot\boldsymbol{\sigma}\right|^{2}\) is the corresponding gas-only term; it is suppressed relative to spin-independent scattering by two powers of the small Schwinger scattering length \(b_{S,0}\), typically of order \(10^{-3}\) times the nuclear scattering length \(b_{0}\), generally rendering it irrelevant at our systematic error target. Even if this is insufficient, note that, at small angles, it is additionally suppressed by \((1-f(q_{T}))^{2}\cot^{2}\theta\propto\theta^{2}\).
* This leaves \(2{\rm Re}\left(i\,{\bf B}_{S}(q_{T})\cdot\boldsymbol{\sigma}\;b_{S,0}(1-f(q_{T}))\cot\theta\;W(q_{T})\,\hat{\bf n}\cdot\boldsymbol{\sigma}\right)\), the cross-term from these two effects. This is suppressed at small angles by a factor of \(\theta b_{S,0}/b_{0}\), or approximately \(10^{-5}\) for the smallest angles we consider in this work. There may be additional suppression relative to the spin-independent scattering contributions from the magnitude of \({\bf B}_{S}(q_{T})\), though this is uncertain: the magnitude of spin-dependent scattering from individual, non-noble atoms is generally comparable to that of spin-independent scattering (see Appendix A), but the spin dependence may limit coherence at small angles.
One additional source of suppression is created by the target structure. If the atoms in the solid had random spins and angular momenta, their spin-dependent contributions would never sum coherently; in practice, there will
be some correlations between nearby atoms due to interactions within the solid, leading to coherent scattering up to the correlation length scale \(\xi\). Nonetheless, if \(\xi<R\), coherent spin-dependent scattering will still be suppressed relative to spin-independent scattering when \(q_{T}\xi\lesssim 1\lesssim q_{T}R\), which may be sufficient to make this cross-term insignificant at small angles. Otherwise, this term produces one additional degree of freedom per angular bin.
If we assume, optimistically, that the cross-term is insignificant at small angles, none of the spin-independent discussion above needs to be modified and we can still separate all scattering contributions with 2 scattered particles and 2 noble gases.
If not, however, we now have one additional degree of freedom; requiring \(N_{\rm measure}\geq N^{\prime}_{\rm dof,\theta}+1\) then requires either a third scattered particle (e.g. electrons) or, likely more simply, a third noble gas. Thus, even in this pessimistic scenario, new force scattering can be separated from generic solid backgrounds, though doing so requires a combination of several measurements.
## Appendix E Multiple Scattering Events
Throughout most of this work, we have worked in terms of scattering probabilities. While this is a good approximation when scattering events are rare, it is not sufficient at the level of precision we require. In fact, the various scattering probabilities estimated throughout this work should be interpreted as expected numbers of scatterings per neutron, with the scattering count per neutron Poisson distributed. This has an unfortunate consequence: a neutron that is observed to have scattered with some momentum transfer \({\bf q}_{T}\) may in fact have scattered two or more times with momentum transfers that summed to \({\bf q}_{T}\). This becomes especially problematic when low-angle scattering is coherently enhanced, as this coherent enhancement will extend not only to the small angles we want to measure, but also to even smaller angles. This results in the observed small-angle scattering distribution being enhanced more than expected, due to combinations of even-smaller angle scatterings, which could simulate a new force signal. In this appendix, we estimate the magnitude of this effect and discuss how one can account for it.
For concreteness, we continue to use the collection of spherical grains model that we use throughout much of this work; we begin with the single-material case for simplicity. It is helpful to rewrite the resulting scattering distribution (given by (100), plus the fully-coherent contribution analogous to (101)) in terms of the logarithm of the scattering angle:
\[\frac{dp}{d\ln\theta}=\left(fnL\left(\frac{12\pi}{9+2(q_{T}R)^{4}}nR^{3}+1 \right)+8\pi\left(fnL\Delta r_{\perp}\right)^{2}e^{-{\bf q}_{T}^{2}\Delta r_{ \perp}^{2}/2}\right)\left|b(q_{T})\right|^{2}2\pi\theta\sin 2\theta \tag{104}\]
where \(f\) is the fraction of the target volume occupied by the grains, each with atomic number density \(n\) and average radius \(R\), and \(L\) is the thickness of the target. We can rewrite this as the sum of three terms, which dominate at different angles:
\[\frac{dp}{d\ln\theta}=A_{\rm inc}\theta\sin 2\theta+A_{\rm pc}\frac{(\theta/ \theta_{\rm pc})^{2}}{1+(\theta/\theta_{\rm pc})^{4}}+A_{\rm coh}\left(\frac{ \theta}{\theta_{\rm coh}}\right)^{2}e^{-(\theta/\theta_{\rm coh})^{2}} \tag{105}\]
where \(A_{\rm inc}\), \(A_{\rm pc}\) and \(A_{\rm coh}\) are constant prefactors for incoherent, partially-coherent (i.e. coherent over grains but not over the full target), and fully-coherent scattering, respectively, and \(\theta_{\rm pc}\) and \(\theta_{\rm coh}\) are the angular scales of partial and full coherence (i.e. the angle above which scattering from individual spheres becomes incoherent and the angle above which the neutron size no longer matters). Here we use \(q_{T}\propto\theta\) rather than the exact form \(q_{T}\propto\sqrt{2-2\cos(2\theta)}=2\sin(\theta)\) for simplicity, since we are interested only in small-angle effects in this section. The resulting scattering distribution is illustrated schematically in Figure 7.
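To make the shape of this three-term distribution concrete, the following sketch evaluates it for assumed, purely illustrative prefactors and angular scales (placeholders, not values for any real target) and locates the local minimum \(\theta_{\rm min}\) used below:

```python
import numpy as np

# Evaluate the schematic three-term small-angle distribution above and locate
# the local minimum theta_min between the fully- and partially-coherent peaks.
# All parameter values are illustrative placeholders, not real target values.
A_inc, A_pc, A_coh = 1e-3, 0.1, 10.0      # prefactors (assumed)
theta_pc, theta_coh = 3e-3, 3e-5          # angular scales in radians (assumed)

theta = np.logspace(-6, -1, 4000)
dp_dlntheta = (A_inc * theta * np.sin(2 * theta)
               + A_pc * (theta / theta_pc)**2 / (1 + (theta / theta_pc)**4)
               + A_coh * (theta / theta_coh)**2 * np.exp(-(theta / theta_coh)**2))

window = (theta > theta_coh) & (theta < theta_pc)
theta_min = theta[window][np.argmin(dp_dlntheta[window])]
print(f"theta_min ~ {theta_min:.1e} rad, between theta_coh and theta_pc")
```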
In a typical experiment, \(\theta_{\rm pc}\) is likely to be comparable to the smallest scattering angle at which neutrons can be observed, since the entire purpose of employing structured targets is to maximize small-angle scattering. The question, then, is whether neutrons observed to scatter by an angle around \(\theta_{\rm pc}\) are sourced by a single significant scattering of order that angle, or by a series of smaller-angle scatterings that summed to it. In particular, any experiment must ensure that the observed scattering distribution is not significantly affected by scatterings at angles too small to study using X-ray scattering, since there is no way to determine the structure factor at such small momentum transfers.
To establish conditions for this to be the case, divide angles below \(\theta_{\rm pc}\) into two regimes: those large enough to be measured using X-ray scattering (\(\theta>\theta_{X}\)), and those below that threshold. At least some of the former will certainly affect the final observed scattering distribution, but their effects can be numerically predicted once the structure factor at these angles is determined with X-ray scattering. The effects of angles below \(\theta_{X}\), on the other hand, should be insignificant as long as \(\theta_{X}/\theta_{\rm pc}\) is sufficiently small, as we show below.
Now, subdivide the angles below \(\theta_{X}\) further into two ranges: those above the local minimum in the scattering distribution at \(\theta_{\rm min}\) (see Figure 7), and those below it. We begin with the former. The number of scatterings per neutron from the \(\theta_{\rm min}<\theta<\theta_{X}\) range is upper bounded by a Poisson distribution with expected value \(\alpha=dp/d\ln\theta|_{\theta=\theta_{X}}\ln(\theta_{X}/\theta_{\rm min})\), with each scattering by an angle upper bounded by \(\theta_{X}\) by assumption; we will conservatively use both of these upper bounds.
Under these assumptions, the probability of \(N\) scatterings from this range is
\[p(N)=\frac{e^{-\alpha}\alpha^{N}}{N!}. \tag{101}\]
The minimum number of scattering events from this angular range needed to create an observed scattering by at least \(\theta_{\rm pc}\) is \(N_{\rm min}=\lceil\theta_{\rm pc}/\theta_{X}\rceil\), though contributions from larger numbers of scatterings may dominate when \(N_{\rm min}\gg 1\) or \(N_{\rm min}-\theta_{\rm pc}/\theta_{X}\ll 1\) as it is unlikely for all of the small-angle scatterings to be in the same direction. Even this weak lower bound is likely sufficient for our purposes, however: the probability of at least \(N_{\rm min}\) scatterings is at most [110]
\[p(N\geq N_{\rm min})\leq\frac{(e\alpha)^{N_{\rm min}}e^{-\alpha}}{(N_{\rm min} )^{N_{\rm min}}} \tag{102}\]
and we have
\[\frac{dp}{d\ln\theta}\propto\theta^{2} \tag{103}\]
in this angle range, so this bound becomes
\[p(N\geq N_{\rm min})\leq\frac{\left(eA_{\rm pc}\frac{\theta_{X}^{2}}{\theta_{ \rm pc}^{2}}\right)^{N_{\rm min}}e^{-A_{\rm pc}\theta_{X}^{2}/\theta_{\rm pc }^{2}}}{(N_{\rm min})^{N_{\rm min}}}. \tag{104}\]
Taking a typical value of \(A_{\rm pc}\sim 0.1\) to balance maximizing statistics with not being swamped by multiple scatterings at angles around the peak, and assuming \(\ln(\theta_{X}/\theta_{\rm min})\sim 1\), this gives
\[p(N\geq 3)\lesssim 10^{-6} \tag{105}\]
so we should only need to perform X-ray scattering down to an angle of around one third (perhaps one fourth) of the minimal neutron scattering angle in order to be able to accurately numerically predict the effects of multiple scatterings in the \(\theta_{\rm min}<\theta<\theta_{X}\) range. As we discuss in Appendix H.3, this should not be particularly difficult.
Figure 7: A schematic illustration of the scattering distribution (100). The locations and values of local extrema indicated on the plot are approximate; we have omitted various \(\mathcal{O}(1)\) factors here for simplicity. Also marked is the minimum angle \(\theta_{X}\) accessible via X-ray scattering (conservatively assumed to be larger than \(\theta_{\rm min}\)).
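A minimal numerical check of the bound above, taking \(A_{\rm pc}=0.1\) and \(\ln(\theta_{X}/\theta_{\rm min})\sim 1\) as assumed:

```python
import math

# The Poisson/Chernoff-style bound above, with alpha ~ A_pc*(theta_X/theta_pc)^2
# (taking ln(theta_X/theta_min) ~ 1) and N_min = ceil(theta_pc/theta_X).
def p_at_least(n_min, alpha):
    return (math.e * alpha) ** n_min * math.exp(-alpha) / n_min ** n_min

A_pc = 0.1
for inv_ratio in (3, 4):                       # theta_pc / theta_X
    alpha = A_pc / inv_ratio ** 2
    n_min = inv_ratio                          # minimum scatterings to reach theta_pc
    print(f"theta_X = theta_pc/{inv_ratio}: p(N >= {n_min}) <= "
          f"{p_at_least(n_min, alpha):.1e}")
```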
We can similarly bound the effects of scattering at angles below \(\theta_{\rm min}\), which are enhanced by total coherence over the neutron's transverse extent. In this case, the minimum number of scatterings to reach the first angular bin outside of the beam is \(\lceil\theta_{\rm pc}/\theta_{\rm coh}\rceil\), which should be at least of order 10 for realistic experiments. The fully-coherent scattering peak height \(A_{\rm coh}\) can be estimated using (101); relative to the partially-coherent peak, it is
\[\frac{A_{\rm coh}}{A_{\rm pc}}\sim\frac{fL}{\sqrt{2\pi}R}. \tag{102}\]
This is likely to give \(A_{\rm coh}\gg 1\), so we can instead estimate the result of these fully coherent scatters by the sum of \(A_{\rm coh}\) random 2-dimensional vectors of length \(\theta_{\rm coh}\). This, in turn, is well approximated by a 2-dimensional Gaussian, which gives a probability for the fully coherent scatters to sum to an observable angle of
\[\begin{split} p(\theta_{\rm coh,total}\geq\theta_{\rm pc})& \sim\exp\left(-\frac{3(nb\Delta r_{\perp}\lambda_{0})^{2}}{4\sqrt{2}A_{\rm pc }^{2}}\right)\\ &\sim\exp\left(-53\left(\frac{n}{10\ {\rm nm}^{-3}}\right)^{2} \left(\frac{b}{10\ {\rm fm}}\right)^{2}\left(\frac{\Delta r_{\perp}}{10\ \mu{\rm m}}\right)^{2}\left(\frac{\lambda_{0}}{10\ {\rm\AA}}\right)^{2}\left(\frac{0.1}{A_{\rm pc}}\right)^{2}\right)\end{split} \tag{103}\]
where \(\lambda_{0}=2\pi/q_{0}\) is the incident neutron wavelength. This probability needs to be smaller than \(10^{-6}\) in order to reach our desired control over systematic errors (see Section I.1). Assuming near-liquid densities, this is always satisfied for argon-36 (see Table 1), but may not be the case for xenon-136 if the neutron transverse size is less than approximately 7 \(\mu\)m (for a wavelength of 0.6 nm); previous experiments have measured values from roughly \(1-100\ \mu\)m [111, 112, 113, 114].
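The sketch below evaluates this estimate using the reference normalization values quoted in the expression above (not the xenon- or argon-specific parameters of Table 1), reproducing the factor of roughly 53 in the exponent and showing the sharp dependence on the neutron transverse size:

```python
import math

# The fully-coherent multiple-scattering estimate above,
# p ~ exp(-3*(n*b*dr_perp*lambda0)^2 / (4*sqrt(2)*A_pc^2)),
# evaluated in angstrom units (n in A^-3, lengths in A).
def p_coherent(n, b, dr_perp, lambda0, A_pc=0.1):
    x = n * b * dr_perp * lambda0
    return math.exp(-3 * x**2 / (4 * math.sqrt(2) * A_pc**2))

# Reference values from the expression above: n = 10 nm^-3, b = 10 fm,
# dr_perp = 10 um, lambda0 = 10 A; the exponent's magnitude should be ~53.
x_ref = 10e-3 * 10e-5 * 10e4 * 10.0
print("exponent ~", 3 * x_ref**2 / (4 * math.sqrt(2) * 0.1**2))

# Scan the neutron transverse size with the other reference values held fixed.
for dr_um in (3, 5, 7, 10):
    p = p_coherent(10e-3, 10e-5, dr_um * 1e4, 10.0)
    print(f"dr_perp = {dr_um:2d} um: p ~ {p:.1e}")
```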
Our discussion to this point has assumed scattering from a single material. However, as we discuss in the main text, it is likely that a realistic experiment would combine two materials: a solid lattice to form the granular structure, and a noble liquid or gas within it whose scattering distribution has simpler electromagnetic contributions. This leads to a suppression of the partially-coherent scattering peak by the fractional difference between the two materials' scattering length densities, since partial coherence depends on the SLD contrast (see (102)), and an enhancement of the fully-coherent and incoherent scattering peaks, since both materials contribute to them. While this does not affect the multiple scattering contribution of the intermediate angular range, \(\theta_{\rm min}<\theta<\theta_{X}\), it does enhance the effect of small angles \(\theta<\theta_{\rm min}\). This may prohibit certain material combinations and porosities, depending on the value of \(\Delta r_{\perp}\). In particular, for two-material targets, the argument of the exponential in (103) acquires a factor \(f(\mathcal{S}_{\rm gas}-\mathcal{S}_{\rm solid})/(f\mathcal{S}_{\rm gas}+(1-f)\mathcal{S}_{\rm solid})\), accounting for the different scattering length densities relevant for fully- and partially-coherent scattering.
Finally, we note that the frequency of multiple scattering events can be further increased by short-range order, which tends to enhance scattering at momentum transfers comparable to the inverse length scale of that order. This is unlikely to be significant for short-range order within amorphous solids, as the expected length scales should be shorter than those of the target grains, leading to additional peaks at larger rather than smaller angles. Any correlations in grain positions, however, may be more consequential, as the necessarily large associated length scales would enhance scattering at angles below \(\theta_{\rm pc}\). The magnitude of this effect will depend on the detailed structure of a particular target, but it appears unlikely that it would significantly affect the discussion above except in highly ordered targets. We will therefore not consider this effect further in this work.
## Appendix F Thermal Effects
The observed neutron scattering distribution depends on the temperature of the target in several ways. At high temperatures, the velocity of atoms in the target leads to a significant difference between the center-of-mass velocity of the scattering event and the lab-frame velocity of the neutron, creating an apparent enhancement of the cross-section at low neutron energies [115, 116]. Additionally, higher temperatures lead to larger populations of excited states of atoms, which can have different neutron scattering lengths due to electromagnetic effects.
As discussed in Appendix A, nuclear scattering of neutrons is angle-independent in the center-of-mass frame of the scattering event. When considering scattering from a bulk target, however, it is more convenient to work in the laboratory frame, in which individual atoms within the target have a Maxwellian velocity distribution. Since the target atoms are no lighter (and typically much heavier) than the incident neutron, this leads to large differences in the center-of-mass velocity of the neutron. In particular, the apparent scattering cross-section of neutrons slower
than the atoms in the target is enhanced because the scattering becomes the result of atoms striking an essentially stationary neutron. This leads to an enhancement of the low-energy cross section as \(\sigma_{\rm lab}\propto 1/v\), with \(v\) the neutron velocity, whenever this velocity is lower than the thermal velocity of atoms in the target, as well as a modification of the angular distribution of neutrons after scattering in the lab frame.
As we discuss in Appendix H.2, typical neutron wavelengths at beamlines that may be appropriate for our purposes are around 0.4-0.8 nm. Targets with atomic weight \(A\) then have thermal velocities equal to the neutron speed at temperature
\[T\sim A\left(\frac{\lambda}{0.4\ {\rm nm}}\right)^{-2}\times 10\ {\rm K}. \tag{109}\]
Most of the targets we consider in this work have \(A\gg 1\), and can therefore likely be cooled below this temperature with little difficulty. In this case, frame differences will only lead to small corrections in the observed scattering distribution, which should not meaningfully affect final sensitivities. If this is not the case (for example if using helium, or for argon at larger neutron wavelengths), low-velocity enhancement may become more significant. While this should not prevent measurements of the sort we describe in this work from being performed, it may have a more significant effect on the final sensitivity, which we do not attempt to estimate.
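For reference, evaluating the crossover-temperature estimate above for a few representative atomic weights over the 0.4-0.8 nm wavelength range:

```python
# The crossover-temperature estimate above, T ~ A*(lambda/0.4 nm)^-2 * 10 K,
# below which target-atom thermal motion is slower than the incident neutrons.
def t_crossover_kelvin(A, wavelength_nm):
    return A * (wavelength_nm / 0.4) ** -2 * 10.0

for A, name in ((4, "helium-4"), (36, "argon-36"), (136, "xenon-136")):
    for wavelength_nm in (0.4, 0.8):
        print(f"{name:9s} lambda = {wavelength_nm} nm: "
              f"T ~ {t_crossover_kelvin(A, wavelength_nm):6.0f} K")
```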
The other effect of high target temperatures on cold neutron scattering is the enhancement of populations of excited atomic states. In particular, our modeling of neutron scattering from noble elements throughout this work assumes their ground state electron configuration, with zero total spin or orbital angular momentum. These assumptions no longer hold for excited states of the atoms.
Fortunately, the excitation energies of noble elements are too high for this to be a significant effect: the lowest excited state of xenon is at an energy of 1.3 eV [117; 118; 119], corresponding to an excited state population fraction of order \(10^{-22}\) even at room temperature; the excited states of other noble elements are even more suppressed. This is far below the \(10^{-6}\) benchmark that we use throughout this work, so any effects of excited electronic states in the noble gas should be negligible. Any thermal effects in the solid component of a target should be accounted for when separating scattering contributions, so they do not affect our analysis further; see Appendix D.
## Appendix G Atomic Interaction Effects
In this appendix, we discuss the effects of interactions between atoms on structured targets' scattering distributions. We consider both interactions between atoms within the noble gas (or liquid; while we have ignored the distinction in most of this work, the high density of the noble fluid will be relevant here) and between those noble atoms and the atoms in the solid. Interactions among atoms within the solid are not relevant for our purposes since we are generally agnostic to the total scattering length of the solid.
### Interactions at the Grain Surface
There are two ways in which interactions at the surface between the solid component of the target and the liquid or gas within it can affect the observed neutron scattering distribution. First, they can modify the distribution of the noble atoms within their grain, modifying the structure factor. For example, an attractive potential near the surface could increase the density of the noble gas near the surface, whereas our estimates in Appendix C assumed a uniform distribution of the gas within each grain. Second, they can modify the electron orbitals of the noble atoms, potentially changing the electromagnetic scattering length. In particular, we have assumed throughout this work that the noble atoms have zero net electron spin and angular momentum, but this may cease to be the case in the presence of electromagnetic fields induced by the solid.
The former effect does not significantly affect us, so long as it does not lead to an order-unity change in the structure factor, since it is independent of the scattered particle and thus will be accounted for when combining measurements, as described in Appendix D. This condition should certainly be satisfied for typical materials, considering both the weakness of Van der Waals interactions and that optimal sensitivity is generally achieved for near-liquid densities of the noble element, in which case there is little room for atoms to deviate from uniform packing.
Significant modification of the noble atoms' electronic structure could be more difficult to handle, however. In the presence of an inhomogeneous magnetic field, the atomic Hamiltonian should acquire off-diagonal terms of order \(|\mathbf{\mu}||\Delta\mathbf{B}|\), with \(\mu\sim\mu_{B}\) the characteristic magnetic moment of atomic electrons and \(\Delta\mathbf{B}\) the variation in the magnetic field over the extent of the atom; we ignore any numerical factors here, given the considerable uncertainties in the
discussion below. This perturbation then leads to a mixing of the ground state by a fraction
\[\frac{\Delta H}{\Delta E}\sim\frac{\mu_{B}|\Delta{\bf B}|}{\Delta E} \tag{127}\]
of the excited state, with \(\Delta E\) the energy of that state, and thus gives a correction to the neutron scattering length of order
\[|\Delta b|\sim\frac{|g_{n}|e^{2}}{8\pi m_{e}}\frac{\mu_{B}|\Delta{\bf B}|}{ \Delta E}; \tag{128}\]
see Appendix A.
The maximum magnetic field resulting from a single atom on the solid surface is of order
\[|{\bf B}|_{\rm max}\sim\frac{\mu_{B}}{R_{\rm atom}^{3}}, \tag{129}\]
with \(R_{\rm atom}\sim 0.1\) nm the characteristic size of the atoms. The neutron scattering length of such an atom is then modified by approximately
\[|\Delta b|\sim\frac{|g_{n}|e^{2}\mu_{B}^{2}}{8\pi m_{e}R_{\rm atom}^{3}\Delta E }\lesssim 3\times 10^{-3}\ {\rm fm}, \tag{130}\]
where we use the lowest excited state energy of a noble element (1.3 eV for xenon [117; 118; 119]) for this upper bound. This should be compared to nuclear scattering lengths of order 10 fm; see Table 1.
Such a correction to every atom would be well above our systematic error target of one part in \(10^{6}\), if it were not further suppressed by two additional effects. First, the rapid decay of dipole magnetic fields with distance from the dipole means that only an order-unity number of atoms near the surface dipole will be affected; only a fraction of order \(R_{\rm atom}/R_{\rm grain}\) of the atoms in a grain of radius \(R_{\rm grain}\) are therefore affected. Second, since we are not considering ferromagnetic targets, we expect the magnetic field directions, and thus the signs of the neutron scattering length changes for a given momentum transfer, to vary within (as well as among) grains. Assuming that any correlations in the alignment of surface dipoles occur on length scales \(\xi<R_{\rm grain}\), we expect a total of approximately \((R_{\rm grain}/\xi)^{2}\) independently chosen directions over the extent of the grain surface.
The average change in scattering length is then
\[\left|\overline{\Delta b}\right|\sim\frac{|g_{n}|e^{2}\mu_{B}^{2}}{8\pi m_{e} R_{\rm atom}^{3}\Delta E}\left(\frac{R_{\rm atom}}{R_{\rm grain}}\right)\frac{1}{ \sqrt{(R_{\rm grain}/\xi)^{2}}}\lesssim\left(\frac{R_{\rm atom}\xi}{R_{\rm grain }^{2}}\right)3\times 10^{-3}\ {\rm fm}, \tag{131}\]
which should be below our target of order \(10^{-5}\) fm for all targets we consider.
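A rough numerical evaluation of these two estimates is sketched below; the grain radius and dipole correlation length in the last step are illustrative assumptions rather than values taken from the text:

```python
import math

# Rough numerical evaluation of the induced scattering-length estimates above,
# in natural units (hbar = c = 1, e^2 = 4*pi*alpha, mu_B = e/(2*m_e)), so that
# |db| ~ g_n*e^2*mu_B^2/(8*pi*m_e*R_atom^3*dE) = g_n*pi*alpha^2/(2*m_e^3*R_atom^3*dE).
alpha = 1 / 137.036                  # fine-structure constant
g_n   = 3.826                        # magnitude of the neutron g-factor
m_e   = 0.511e6                      # electron mass [eV]
hbarc = 1.97327e8                    # hbar*c [eV*fm]

R_atom_fm = 1e5                      # 0.1 nm characteristic atomic size, in fm
dE        = 1.3                      # lowest excitation energy used above [eV]

R_atom = R_atom_fm / hbarc           # convert to eV^-1
db_fm = g_n * math.pi * alpha**2 / (2 * m_e**3 * R_atom**3 * dE) * hbarc
print(f"|db| ~ {db_fm:.1e} fm")      # ~3e-3 fm, as quoted above

# Geometric suppression (R_atom * xi / R_grain^2), with assumed (illustrative)
# grain radius and dipole correlation length of 50 nm and 5 nm:
R_grain_fm, xi_fm = 50e6, 5e6
db_avg_fm = db_fm * R_atom_fm * xi_fm / R_grain_fm**2
print(f"|<db>| ~ {db_avg_fm:.1e} fm (systematics target: ~1e-5 fm)")
```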
Note that this average change in the scattering length is the correct quantity to consider (as opposed to, for example, its root mean square change) even for incoherent scattering: since the maximal change in the scattering length of one noble atom is much smaller than its nuclear scattering length, its non-negligible effect comes from the interference between nuclear and electromagnetic scattering (i.e. from the product of the nuclear and electromagnetic scattering lengths). This term is linear in the electromagnetic scattering length, and thus the change in the total scattering probability depends precisely on \(\left|\overline{\Delta b}\right|\).
### Interactions Among Noble Atoms
We now turn to the effects of interactions within the noble liquid or gas itself. Similarly to the surface interactions described above, the effects of interactions between the atoms in a fluid can affect the observed scattering distribution both by modifying the target's structure factor and by inducing changes in the electromagnetic scattering length of the atoms.
The high density of liquids leads to significant correlations between the positions of atoms and their neighbors, i.e. a non-trivial pair distribution function [120; 121; 122; 123; 124; 125; 126]. These correlations will then lead to a modification of the structure factor at inverse momentum transfers comparable to the atomic spacing in the liquid. Fortunately, these are not an issue for the measurements we propose: since this effect is independent of the scattered particle, it will automatically be accounted for when combining measurements (see Section III.2), and it should have little effect on achievable sensitivity since it occurs at much larger momentum transfers than the focus of this work.
Interactions between the atoms within the liquid may, in principle, also lead to changes in their electronic states, which would then modify the electromagnetic scattering lengths of the target atoms. Since we are considering noble elements, whose ground states have zero magnetic moment, there are no apparent mechanisms to induce significant changes in the atoms' electronic states. Thus this should not be a significant effect for liquids or gases. It is less clear whether spontaneous magnetization could appear in noble solids, but we leave consideration of this to future work.
## Appendix H Instrument Parameters
In this appendix, we summarize the key features of neutron and X-ray scattering instruments and describe the parameters of such instruments that we assume for our projections in the main text. We begin with an outline of small-angle neutron scattering instruments as a whole, before focusing on the properties of neutron sources and beams in particular, which are the limiting parameters for our proposal's reach. We then consider the analogous properties of X-ray beams, justifying our assumption in the main text that these contribute subdominant uncertainties.
### Target Geometry
A sketch of a simplified neutron scattering experiment layout is shown in Figure 1. In this section, we consider a few aspects of this layout and their impact on the sensitivity of such an experiment to the new forces we consider.
The importance of the distance between the neutron source and the scattering target for collimation is discussed in the next section of this appendix. We begin instead with the distance beyond the target, between it and the neutron detectors, which is significant in comparison to three other length scales: the pixel size of the neutron detector, and the transverse and longitudinal target dimensions.
Modern neutron detectors can achieve resolutions approaching 1 \(\mu\)m [127; 128; 129; 130; 131; 132], though this enormously exceeds our needs; as we discuss below, there is little benefit to resolutions significantly below the target's transverse size. The ratio of the pixel size to the distance between the target and the neutron detector sets the maximum angular resolution of the experiment. This angular resolution should certainly be no greater than the minimum scattering angle we wish to measure, and there is likely to be some benefit to an angular resolution a few times better, in order to better resolve the new force peak. Our projections below assume a minimum scattering angle of 3 mrad, requiring a detector distance of at least 300 times the pixel size.
The minimum useful pixel size, in turn, is set by the target's transverse length scale. Below this length scale, scattering events at the same angle can appear in different pixels if they occurred at different locations within the target. While this is not intrinsically problematic, there is little purpose to using much smaller pixels, since the angular distribution of the neutrons will be washed out by the target size at these scales. Note as well that, when the target transverse size is larger than the pixel size, it is the ratio of the detector distance to the former that sets the maximum resolution. Thus, the 10 cm\({}^{2}\) targets that we assume in our projections require a target to detector distance of approximately 10 m.
There is a similar effect from the target's longitudinal size (i.e. its depth), though it is likely to be sub-dominant for our purposes due to our focus on small-angle scattering. There are three other constraints on this depth, however. First, for a given set of target materials, the target depth sets the fraction of neutrons that are scattered into observable angles; as we discuss in Appendix E, we assume that this is set to 0.1, in order to maintain control over multiple scattering events. Second, as we noted in Section IV.1, excessively thin targets may be difficult to work with; combined with the maximum depth from multiple scattering, this may preclude certain material combinations.
The third effect of finite target depth is a suppression of fully coherent scattering at large angles. Our calculation of fully coherent scattering in Appendix C assumed that the direction of momentum transfer was perpendicular to the neutron's incident direction, but this is only true in the limit of zero scattering angle. At larger angles, neutrons scattered from one end of the target become incoherent with those scattered from the other end, reducing the fully-coherent scattering contribution at these angles. This is completely insignificant for all parameters we consider, however, since full coherence is only significant at very small angles to begin with.
Finally, we consider the effects of the cell containing the target material(s), which we have heretofore ignored. The additional scattering contribution from the neutron beam passing through the walls of the cell must be separated in order to isolate the scattering distribution of the ideal gas within. However, the separation of contributions procedure discussed in Appendix D is not immediately applicable to this issue because scattering from the ideal gas cannot be performed without the cell. Note, however, that this is only a concern if there is coherent scattering from the cell walls combined with the cell contents (i.e. the cross-term discussed in Appendix D); otherwise the cell's scattering distribution can simply be measured separately and then subtracted out of the final distribution. Our projections assume that the cell walls lack any structure at length scales comparable to the inverse momentum transfer, such that this should be the case, leaving characterization of cell wall roughness to future work.
### Neutron Beam Parameters
Neutrons used in small-angle neutron scattering (SANS) experiments are typically produced in nuclear reactors (e.g. [28; 29; 30; 31; 32; 33]) or neutron generators (e.g. [34; 35]). Since such sources produce neutrons at much higher energies
than desirable for SANS experiments, they are then cooled by passing through one or more cold moderators (e.g. water and liquid hydrogen), resulting in an uncollimated collection of neutrons with a broad (though not necessarily thermal) distribution of energies.
The simplest approach to forming a neutron beam is simply to reject all neutrons that do not pass through two small, widely separated apertures. Neutrons can also be transported via waveguides and focused with optics, though we will not review such devices here (see e.g. [133; 134]). The key feature of neutron collimation for our purposes is simply that it is statistically costly, with neutron count proportional to accepted phase space.
Neutron energy distributions are, similarly, narrowed primarily through rejection of velocities other than those desired. A typical design for a neutron velocity selector is described in [36]: a rotating cylinder of absorbing material with helical channels, such that neutrons are absorbed unless they have the right velocity to pass through in a straight line without striking any surfaces. An alternative approach to velocity selection using slotted disks is described in [37]. The statistical cost of these procedures depends somewhat on the particular neutron source used and the corresponding energy distribution after moderation.
A variety of existing neutron scattering instruments may be able to accommodate our proposal: the NIST Center for Neutron Research's VSANS instrument [135], Oak Ridge National Laboratory's EQ-SANS diffractometer [136], the Institut Laue-Langevin's D22 diffractometer [137], and various instruments at J-PARC's Materials and Life Science Experimental Facility [138], among others. Since this work is not specific to any particular source, our projections assume a set of parameters approximately representative of these optimal sources: a flux of \(10^{8}\) cm\({}^{-2}\) s\({}^{-1}\) neutrons of 0.6 nm wavelength over a target area of 10 cm\({}^{2}\), with a minimum resolvable angle (including collimation, detector pixel size, and detector distance to target size ratio) of 3 mrad, corresponding to a momentum transfer of approximately (30 nm)\({}^{-1}\).
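For reference, the momentum transfer and target-to-detector distance implied by these assumed parameters are easily checked:

```python
import math

# Momentum transfer at the minimum resolvable angle, and the target-to-detector
# distance implied by the ~3 cm transverse size of a 10 cm^2 target.
wavelength_nm = 0.6
theta_min = 3e-3                                      # rad

q_min = (2 * math.pi / wavelength_nm) * theta_min     # nm^-1
print(f"q_min ~ ({1 / q_min:.0f} nm)^-1")             # ~ (30 nm)^-1

target_side_m = math.sqrt(10.0) / 100                 # 10 cm^2 -> ~3.2 cm side
print(f"detector distance ~ {target_side_m / theta_min:.1f} m")   # ~10 m
```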
Other parameters of the neutron beam should be less important for our sensitivity projections. In particular, energy spread acts only to "wash out" the low-angle peak that would indicate the presence of a new force; while this increases the uncertainty on a detected new force's mediator mass \(\mu\), it has little effect on the total number of new force scattering events within that peak and therefore has minimal impact on our ability to detect a new force's presence (see Appendix I). Similarly, our proposal should not require particularly good angular resolution, so long as sufficiently small angles can be observed.
### X-Ray Beam Parameters
Small-angle X-ray scattering (SAXS) sources are typically based on undulation of an electron beam by a series of magnets of alternating polarity [139]. SAXS instruments are available over a wide range of X-ray energies, including well above the keV momentum scales of the neutron sources we consider. Using such high-energy sources is likely to be necessary for most targets of interest, due to X-ray absorption: as discussed in Appendix B, low-energy X-rays are rapidly attenuated within dense materials, especially at large atomic weights. For the xenon targets that are the focus of this work, energies of at least 40 keV are likely desirable.
A variety of SAXS facilities could potentially meet the requirements of our proposal; see, for example, [140; 141]. Just as for neutron sources, we do not choose a particular source in this work, and in fact we are generally agnostic to the parameters of the source used so long as they are sufficient for the dominant uncertainties in the final measurement to arise from neutron scattering; we assume that this holds for all of our projections. The precise X-ray photon counts required for this to hold were discussed in Section III.2: an X-ray count of \(10^{5}\) times the neutron count should always be sufficient, with as little as a few times the neutron count sufficient at the smaller mediator masses we focus on. Even the former condition can be satisfied by instruments such as the European Synchrotron Radiation Facility's ID15A [141], and an enormous variety of X-ray sources can meet the weaker flux requirement.
There is one other property of X-ray sources that complicates our proposal somewhat: their beamsize. Typical X-ray scattering instrument beams are far narrower than the centimeter-scale targets that are necessary to maximize neutron count. Since targets may not be spatially uniform over their transverse extent, it is necessary to scan the X-ray scattering measurement over the target in order to obtain a spatially-averaged structure factor applicable to the neutron scattering measurement. Note that the neutron beam may not be spatially uniform either, so the spatial distribution of neutrons must be measured as well. This has a negligible impact on the total neutron count needed for our experiment, however, as the absence of a scattering target for this measurement eliminates the usual factor of 10 loss in flux from most of the beam passing through a target without scattering (see Appendix E).
It is similarly critical that the X-ray and neutron beamlines be precisely coaxial in order to see the same effective target thickness and structures. This may pose challenges for the use of recently developed techniques for simultaneous X-ray and neutron scattering measurements [50].
Other parameters of X-ray sources are generally not a concern for our proposal. As discussed in Appendix E, the X-ray collimation requirement is a factor of a few stronger (in terms of transverse momentum) than the neutron
requirement, due to the need to measure the structure factor at smaller momentum transfers in order to predict the impact of multiple scattering events; this is easily satisfied by many X-ray instruments. Similarly, the distribution of incident X-ray energies is generally far narrower than that of neutrons, so energy spread should not be a meaningful constraint for our purposes.
## Appendix I Statistics
In this appendix, we describe our approach to estimating the potential sensitivity of the various target materials considered in the main text. We first tabulate the various systematic errors faced by any implementation of our proposal. We then describe our approach to calculating the statistical reach of single-material targets, before explaining our approximation of the statistical error for two-material targets.
### Systematic Errors
We begin by summarizing the systematic errors that limit the achievable sensitivity of new force searches implementing our proposed strategy. Most of the effects we consider in this section have been discussed elsewhere in this work; our goal here is to tabulate their respective magnitudes before we calculate realistically achievable experimental sensitivities.
A relatively fundamental limit on searches for new neutron-atom interactions is our limited ability to predict strong nuclear interactions. In most of this work, we have described nuclear scattering as entirely angle-independent, but this is an approximation: significant deviations from angle-independence occur at momentum transfers comparable to the inverse strong force range, i.e. inverse femtometers [93; 94; 95; 96; 97]; see Appendix A. Corrections to the angle-independence of nuclear scattering are therefore suppressed by \(\mathcal{O}((q_{T}b_{\mathrm{nuc}})^{2})\lesssim 10^{-8}\) even at the largest momentum transfers we consider, leaving them far below our systematic target. In fact, a more significant systematic error may arise from the contribution of the nuclear charge form factor to the neutron's electric polarizability scattering length, which is suppressed by \(\mathcal{O}(q_{T}\sqrt{\langle r_{h}^{2}\rangle}b_{P}/b_{\mathrm{nuc}})\sim 10 ^{-9}\) (see Appendix A.2). This error may be reducible using knowledge of this form factor, but we will not explore this here, given that we do not expect this to be a limiting error for our proposal.
Errors related to modeling of electromagnetic scattering may be more significant, though these depend considerably on the exact target used. As we discuss in Section III.3, contributions from the new force and from electromagnetic scattering become difficult to distinguish once \(q_{\mathrm{o}}\sim\mu\); we avoid this issue in our projections by conservatively restricting to \(\lambda>10^{-1}\) nm. Moreover, even for ideal noble gases, our description of electromagnetic scattering (see Appendix A) ignored various terms suppressed by \(m_{e}/m_{n}\), \(r_{N}/r_{A}\), or \(q_{T}r_{N}\), with \(r_{N}\) the nuclear radius and \(r_{A}\) the atomic radius. All three of these factors are of order \(10^{-6}\), but may be enhanced sufficiently to be relevant by the large atomic numbers of the target atoms we consider. We note, however, that these higher-order terms can likely be worked out more precisely if this is useful for future experiments, since, unlike the nuclear corrections described above, they are the result of well-understood electromagnetism.
Scattering from realistic targets leads to a number of other electromagnetic corrections, however. Non-noble elements generally have non-zero magnetic dipole moments, leading to additional neutron scattering (see Appendix A), and even noble atoms may have induced magnetic moments due to interatomic interactions (see Appendix G). The separation of scattering contributions explained in Appendix D allows non-noble elements' contributions to be removed, so they do not lead to a systematic error (up to small caveats discussed below). Magnetic moments induced in the target noble atoms, however, cannot be separated out. We show in Appendix G that magnetic moments induced by surface interactions should lead to scattering length corrections of no more than roughly \(3\times 10^{-3}(R_{\mathrm{atom}}\xi/R_{\mathrm{grain}}^{2})\) fm when correlations in the magnetic dipole moments within the solid have length scale \(\xi\); effects of interactions within noble liquids (or dense noble gases) should be negligible by comparison.
Separation of contributions may fail to entirely remove systematic electromagnetic backgrounds from non-noble elements: any changes to the target between neutron and X-ray scattering, or between different measurements of one type, will inhibit the accuracy of this separation. One potential cause of such changes is material degradation from the scattering processes themselves, though this is likely to be minimal for the low energies we consider. In the case of materials consisting of piles of grains, the structure may also change simply due to motion of the target between measurements (though this may be circumventable by, for example, rotating such targets continuously during measurements and using the resulting average structure, as one would need to do for aerosols or boiling liquids). Xenon snow may be so unstable that it degrades even if not moved. Since all of these effects are strongly dependent on the specific materials used, we will not include them in our sensitivity projections below, but we note that they may be significant in some circumstances.
A related systematic effect arises due to the differing energies of appropriate neutron and X-ray sources, which change their respective correspondences between momentum transfer and scattering angle. As a result, neutrons and X-rays at a fixed momentum transfer take different paths through the target and do not, in fact, see identical target structures, potentially changing their respective structure factors. While we expect this effect to be small, given the generally thin targets and small angles that we are most interested in, it will likely need to be simulated numerically or tested with additional measurements given our desired precision. We leave a more detailed treatment of this effect to future work.
Systematic errors may also arise if materials' compositions change between measurements. This could occur due to finite noble gas purity, or due to imperfect separation of materials, for example from adsorption of atoms by the solid components of a target. Again, these depend heavily on details beyond the general principles outlined in this work, so we will not include these effects below.
A final, more generic source of systematic error is low-angle multiple scattering, discussed in Appendix E. As we note in that appendix, multiple scattering should not be an issue for scattering from argon, but may or may not be a significant constraint on xenon-based targets. The error introduced by multiple scattering grows exponentially as a function of various experimental parameters (see (100)), so it is generally either enormous or irrelevant. In particular, this error depends on the transverse size of individual neutrons, which is not currently known sufficiently well for any apparatus to definitively determine which material combinations are compatible with a scattering fraction of 0.1; reducing the scattering fraction suppresses this effect. We assume in most of this work that neutrons have sufficient transverse sizes to use the materials we consider with scattering fractions of 0.1 without significant multiple scattering errors, but we note that some solids discussed in Section IV.1 may in actuality require reduced statistics.
### Projecting Sensitivity for Single-Material Targets
We can now estimate the sensitivity of a neutron scattering experiment implementing our proposal. Since we expect to have many neutrons observed in each angular bin, we can approximate the exact maximum-likelihood analysis of the data with an F-test [142], comparing the \(\chi^{2}\) values obtained when fitting the observed data including (\(\chi^{2}_{\rm with}\)) and not including (\(\chi^{2}_{\rm without}\)) a new force. More precisely, let
\[F=\frac{\left(\chi^{2}_{\rm without}-\chi^{2}_{\rm with}\right)/2}{\chi^{2}_ {\rm with}/N^{\rm dof}_{\rm with}}, \tag{11}\]
for \(N^{\rm dof}_{\rm with}\) degrees of freedom in the with-new force fit (and assuming two degrees of freedom for the new force: \(\mu\) and \(g\)). Then the distribution of \(F\) values will follow an \(F\)-distribution with \(d_{1}=2\) and \(d_{2}=N^{\rm dof}_{\rm with}\) degrees of freedom. We can then constrain any new force for which the resulting \(F\) is expected to exceed some threshold value.
In the case of single-material scattering, we can straightforwardly compute the expected values of \(\chi^{2}\) both with and without the new force included in the fit as follows. We generically expect a \(\chi^{2}\) contribution of 1 for each degree of freedom in a fit, simply from Poisson statistics within each angular bin, independent of which fit is used. In the presence of a new force, we expect an additional number of scatterings into the bin at angle \(\theta\) given by \(2\kappa_{\rm new}N_{\rm expected}(\theta)/(1+(q_{T}(\theta)/\mu)^{2})\), with \(N_{\rm expected}\) the expected number of neutrons scattered into this bin by nuclear scattering. In the absence of any fit corrections--that is, using the fit parameters that would be optimal in the absence of a new force--this leads to an expected \(\chi^{2}_{\rm without}\) contribution given by \(4\kappa^{2}_{\rm new}N_{\rm expected}(\theta)/(1+(q_{T}(\theta)/\mu)^{2})^{2}\), but no contribution to \(\chi^{2}_{\rm with}\). Note that this total expected contribution is independent of the number of bins in the limit that the nuclear scattering distribution (including coherence) is constant within each bin:
\[\Delta\chi^{2}_{\rm no-fit}=\sum_{\rm bins}\frac{4\kappa^{2}_{\rm new}N_{\rm expected}(\theta)}{(1+(q_{T}(\theta)/\mu)^{2})^{2}}\to N_{\rm scattered}\int d\theta\frac{4\kappa^{2}_{\rm new}}{(1+(q_{T}(\theta)/\mu)^{2})^{2}}\frac{dp}{d\theta} \tag{12}\]
for \(N_{\rm scattered}\) total scattered neutrons and (nuclear, coherence-enhanced) scattering distribution \(dp/d\theta\), normalized to integrate to unity over angles above some minimum. This additional \(\chi^{2}\) contribution is reduced after fitting empirically-determined Standard Model scattering parameters: fitting over the angle-independent nuclear scattering length, for example, eliminates the average number of additional scatters per bin (of constant solid angle), leaving only the variation in new force scattering over different angles.
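The bin-independence noted above can be checked directly with a small numerical sketch; the scattering distribution, neutron count, coupling, and mediator scale used here are arbitrary placeholders:

```python
import numpy as np

# The no-fit excess summed over bins approaches the integral form in (12) as
# the binning is refined; every parameter value here is an arbitrary placeholder.
N_scattered = 1e7                       # total scattered neutrons (assumed)
kappa_new   = 1e-3                      # assumed new-force strength
theta_lo, theta_hi = 3e-3, 3e-1         # observed angular window [rad] (assumed)
mu_angle    = 3e-3                      # angle at which q_T = mu (assumed)

def suppression(theta):                 # Yukawa factor 1/(1 + (q_T/mu)^2), q_T ~ theta
    return 1.0 / (1.0 + (theta / mu_angle) ** 2)

for n_bins in (10, 100, 1000):
    edges   = np.geomspace(theta_lo, theta_hi, n_bins + 1)
    centers = np.sqrt(edges[:-1] * edges[1:])
    n_exp   = N_scattered * np.diff(edges) / (theta_hi - theta_lo)   # flat dp/dtheta
    dchi2   = np.sum(4 * kappa_new**2 * suppression(centers)**2 * n_exp)
    print(f"{n_bins:5d} bins: delta chi^2 (no fit) = {dchi2:.3e}")
```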
In terms of this additional post-fit \(\chi^{2}\) contribution, (11) predicts
\[F=1+\frac{1}{2}\Delta\chi^{2}_{\rm fit}. \tag{13}\]
For \(N_{\rm with}^{\rm dof}\gg 1\), the cumulative distribution function is well approximated at \(F\) of no more than a few by
\[{\rm CDF}_{2,N_{\rm with}^{\rm dof}}(F)\approx 1-e^{-F}, \tag{14}\]
so we expect to be able to detect a new force at 95% confidence whenever \(1+\Delta\chi_{\rm fit}^{2}/2>\ln(20)\approx 3.0\), i.e. when \(\Delta\chi_{\rm fit}^{2}>4.0\). Since the finite true number of angular bins weakens this somewhat, we conservatively require \(\Delta\chi_{\rm fit}^{2}>5.0\) for the projections in this work.
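A quick check of this threshold against the exact \(F\)-distribution (a sketch using scipy):

```python
import numpy as np
from scipy import stats

# The exact F-distribution CDF with d1 = 2 approaches 1 - exp(-F) for many fit
# degrees of freedom, and delta chi^2_fit = 4 then corresponds to ~95% confidence.
F = np.linspace(0.5, 5.0, 10)
for n_dof in (20, 100, 1000):
    diff = np.max(np.abs(stats.f.cdf(F, 2, n_dof) - (1 - np.exp(-F))))
    print(f"N_dof = {n_dof:4d}: max |CDF - (1 - exp(-F))| = {diff:.3f}")

F_expected = 1 + 4.0 / 2                                 # delta chi^2_fit = 4
print("p-value:", 1 - stats.f.cdf(F_expected, 2, 1000))  # ~0.05
```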
### Projecting Sensitivity for Two-Material Targets
In the previous subsection, we explained our method for estimating the sensitivity of a single-material neutron scattering experiment following the approach discussed in this work. An exact analysis of the two-material case is necessarily numerical. However, in this subsection, we describe an approximate, analytic approach to estimating the two-material sensitivity, which avoids the computational cost of the numerical analysis while offering additional insight into the parameter dependence of the sensitivity.
Note that, throughout this appendix, we assume that the spin-dependent scattering cross-term discussed in Appendix D is not significant, such that measurements from two noble gases are sufficient. Including the spin-dependent cross-term complicates the analysis further, but we will not consider it in detail in this work; in any case it is unlikely to have more than an order-unity effect on the final sensitivity.
In the single-material case, the effects of a new force can be resolved from those of similar-length scale structure simply by looking at ratios of neutron to X-ray scattering. Unfortunately, this fails in the two-material case due to interference between scattering from the solid and gas, which differs for neutrons and X-rays. In particular, the total neutron scattering distribution in this case can be written as (see (118))
\[\left.\frac{dp}{d\Omega}\right|_{2-{\rm material}}=\left.\frac{dp}{d\Omega} \right|_{\rm s,inc}+\left.\frac{dp}{d\Omega}\right|_{\rm g,inc}+\left|\left<B_ {0}({\bf q}_{T})\right>+b_{g}({\bf q}_{T})\left<W({\bf q}_{T})\right>\right|^ {2} \tag{15}\]
where the first two terms on the right-hand side are the incoherent scattering distributions from the solid alone and the gas alone, while the last term is the total coherent scattering length, including both the unknown solid contribution \(B_{0}({\bf q}_{T})\) and the gas contribution, which is given by the product of the gas's single-atom scattering length \(b_{g}({\bf q}_{T})\) and the target phase sum \(W({\bf q}_{T})\). Note that we have taken the expectation values of \(B_{0}({\bf q}_{T})\) and \(W({\bf q}_{T})\) above in order to separate the coherent and incoherent scattering contributions to the scattering distribution.
We can eliminate all of the terms not enhanced by the gas's structure by making three measurements--the two materials together, the solid alone, and the gas alone (which gives precisely the incoherent gas scattering distribution above)--and taking the following linear combination:
\[\left.\frac{dp}{d\Omega}\right|_{\rm difference} =\left.\frac{dp}{d\Omega}\right|_{2-{\rm material}}-\left. \frac{dp}{d\Omega}\right|_{\rm s~{}only}-\left.\frac{dp}{d\Omega}\right|_{\rm g ~{}only} \tag{16}\] \[=2\,\,{\rm Re}\left(\left<B_{0}^{*}(\theta)\right>b_{g}(\theta) \left<W(\theta)\right>\right)+\left|b_{g}(\theta)\left<W(\theta)\right>\right|^ {2}.\]
Now suppose that we have obtained \(W({\bf q}_{T})\) from X-ray scattering measurements. This is not a precise description of the two-material analysis, since the X-ray scattering distribution similarly suffers from interference, but it should act as a reasonable approximation of the process, since the purpose of the X-ray measurements is precisely to distinguish any new force from the (shared) structure factor.
Then, if we ignore the electromagnetic and new force contributions to the last term, we can predict it from a combination of the gas-only measurement of \(b_{g}(\theta)\approx b_{0}\) and the X-ray measurement of \(W(\theta)\); including an estimate of the electromagnetic contribution (from other measurements, or from theoretical calculations) can make this prediction even more precise. Similarly to the handling of the phase sum above, this is not necessary in the true numerical analysis, but is useful for our simple estimate here; note that this approximation is likely to be conservative, as we are discarding part of the effect of the new force. Then let
\[\left.\frac{dp}{d\Omega}\right|_{\rm cross}=\left.\frac{dp}{d\Omega}\right|_ {2-{\rm material}}-\left.\frac{dp}{d\Omega}\right|_{\rm s~{}only}-\left.\frac{ dp}{d\Omega}\right|_{\rm g~{}only}-\left|b_{g}(\theta)\left<W(\theta)\right> \right|_{\rm predicted}^{2}. \tag{17}\]
Since the only complex component to the single-atom scattering length comes from the irrelevant Schwinger scattering length (see Appendix D), we can also write this measurement combination as
\[\left.\frac{dp}{d\Omega}\right|_{\rm cross}=2\,\,{\rm Re}\left(\left<B_{0}^{*}(\theta)\right>\left<W(\theta)\right>\right)b_{g}(\theta). \tag{18}\]
Crucially, this is now fully factored into a gas-independent term \(2\ \mathrm{Re}\left(\left\langle B_{0}^{*}(\theta)\right\rangle\left\langle W( \theta)\right\rangle\right)\) and a gas-specific scattering length \(b_{g}(\theta)\). Thus, if we perform this process for two different noble elements, we can take the ratio
\[\frac{dp/d\Omega|_{\mathrm{cross},1}}{dp/d\Omega|_{\mathrm{cross},2}}=\frac{b_{ g,1}(\theta)}{b_{g,2}(\theta)}, \tag{19}\]
which is now independent of both the solid and the structure factor.
We can now use this to detect a new force by detecting a difference in the angle-dependence of the two elements' single-atom scattering lengths. In particular, using (12), we have
\[\frac{dp/d\Omega|_{\mathrm{cross},1}}{dp/d\Omega|_{\mathrm{cross},2}}=(\mathrm{ const.})\left(1+\kappa_{\mathrm{EM},1}f_{1}(q_{T})-\kappa_{\mathrm{EM},2}f_{2}(q_{T})+ \frac{\Delta\kappa_{\mathrm{new}}}{1+(q_{T}/\mu)^{2}}+\mathcal{O}(\kappa^{2} )\right) \tag{10}\]
where \(\Delta\kappa_{\mathrm{new}}=\kappa_{\mathrm{new},1}-\kappa_{\mathrm{new},2}\). From here, the fitting procedure is much the same as for the single-material case, with seven free parameters (the constant prefactor, two electromagnetic parameters for each noble element, and two new force parameters). The four electromagnetic parameters were not included in the fits for the rough projections of this work, as this significantly reduced computational expense. We do not expect this to have a meaningful impact on the final sensitivity, however, as electromagnetic scattering has a significantly different angular dependence from new force scattering; this was also confirmed numerically for the single-material case.
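A minimal sketch of such a reduced fit (constant prefactor plus the two new-force parameters only) is given below; the \(q_{T}\) grid, injected values, and noise level are invented for illustration and do not correspond to the projections quoted in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def ratio_model(qT, C, dkappa, mu):
    # Constant prefactor times the Yukawa-like new-force term; EM terms dropped.
    return C * (1.0 + dkappa / (1.0 + (qT / mu) ** 2))

# Hypothetical noisy data with an injected signal (illustrative numbers only).
qT = np.linspace(0.01, 1.0, 60)
y_true = ratio_model(qT, C=1.0, dkappa=1e-3, mu=0.2)
y = y_true + 1e-4 * np.random.default_rng(1).normal(size=qT.size)

popt, pcov = curve_fit(ratio_model, qT, y, p0=[1.0, 0.0, 0.3])
print("fitted (C, dkappa, mu):", popt)
print("1-sigma uncertainties :", np.sqrt(np.diag(pcov)))
```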
As we note in the main text, the dependence of this final ratio only on the difference \(\Delta\kappa_{\mathrm{new}}\) makes it advantageous to use noble gases with very different atomic weights, in order to maximize this difference. However, this must be balanced against the weakness of the gas's scattering contribution relative to that of the solid, given the low value of helium's SLD in particular (see Table 1), leading to the approximate parity between argon- and helium-based scattering seen in Figure 3.
|
2309.02474 | Symmetry dictated universal helicity redistribution of Dirac fermions in
transport | Helicity is a fundamental property of Dirac fermions. Yet, the general rule
of how it changes in transport is still lacking. We uncover, theoretically, the
universal spinor state transformation and consequently helicity redistribution
rule in two cases of transport through potentials of electrostatic and mass
types, respectively. The former is dictated by Lorentz boost and its complex
counterpart in Klein tunneling regime, which establishes miraculously a unified
yet latent connection between helicity, Klein tunneling, and Lorentz boost. The
latter is governed by an abstract rotation group we construct, which reduces to
SO(2) when acting on the plane of effective mass and momentum. They generate
invariant submanifolds, i.e., leaves, that foliate the Hilbert space of Dirac
spinors. Our results provide a basis for unified understanding of helicity
transport, and may open a new window for exotic helicity-based physics and
applications in mesoscopic systems. | Jun-Yin Huang, Rui-Hua Ni, Hong-Ya Xu, Liang Huang | 2023-09-05T12:13:59Z | http://arxiv.org/abs/2309.02474v2 | # Symmetry dictated universal helicity redistribution of Dirac fermions in transport
###### Abstract
Helicity is a fundamental property of Dirac fermions. Yet, the general rule of how it changes in transport is still lacking. We uncover, theoretically, the universal spinor state transformation and consequently helicity redistribution rule in two cases of transport through potentials of electrostatic and mass types, respectively. The former is dictated by Lorentz boost and its complex counterpart in Klein tunneling regime, which establishes miraculously a unified yet latent connection between helicity, Klein tunneling, and Lorentz boost. The latter is governed by an abstract rotation group we construct, which reduces to SO(2) when acting on the plane of effective mass and momentum. They generate invariant submanifolds, i.e., leaves, that foliate the Hilbert space of Dirac spinors. Our results provide a basis for unified understanding of helicity transport, and may open a new window for exotic helicity-based physics and applications in mesoscopic systems.
_Introduction._--Helicity, the projection of the spin onto the direction of momentum, is known as an intrinsic and measurable property of Dirac fermions in relativistic quantum mechanics, and plays a crucial role in understanding the nature of fundamental particles. For instance, the helicity nature of massless Dirac particles turns out to be responsible for a highly anisotropic tunneling, i.e., chiral tunneling, through an electrostatic potential barrier [1; 2; 3; 4; 5]. Flip and conservation of helicity have been recognized as a key issue in addressing scattering of massive Dirac particles by a magnetic monopole [6; 7] and an Aharonov-Bohm potential [8; 9; 10]. As an independent and exploitable degree of freedom, it has recently attracted attention for electronic applications in mesoscopic systems [11; 12]. However, before moving forward, it is crucial to understand the redistribution rules of helicity in typical scattering, tunneling, and transport processes.
Generally, the transport process can be abstracted as an operation on the Hilbert space of Dirac spinors mapping the initial to the final spinor states characterized by energy and momentum. Yet, the helicity degree of freedom is undetermined as the coefficients of the two helicity components can still take different values, which typically depends on the transport details. One fundamental question is then: under commonly encountered transport situations, how does the helicity change?
By utilizing the (3+1)-D Dirac equation formalism in the single-particle framework, we consider two typical transport processes, i.e., through piecewise-constant electrostatic [13; 14; 15; 16; 17] or mass potentials [18; 19; 20; 21; 22; 23; 24]. For each process, as the potential of the concerned region changes, the final spinor evolves out a curve started from the given initial state. We find, strikingly, that the final spinor depends only on the potential height of this region, i.e., process-independent. Thus the corresponding abstract operation forms a one-parameter transformation group, whose exertion on any initial state generates a one-dimensional invariant submanifold, which forms leaves foliating the Hilbert space of Dirac spinors.
Our main analytical results are shown in Fig. 1. The left panels plot the schematics of the two processes, where the potentials are along the \(x\)-axis, and the perpendicular momentum component \(p_{{}_{\perp}}\) is conserved. The final state \(|\psi_{f}\rangle\) is chosen in the middle region to simply include the reflected wavefunction. The middle and right panels show the foliation structures. For transport through an electrostatic potential with a given initial energy \(E\) [Fig. 1(a)], the Hilbert space of spinors can be fully parameterized by the effective energy \(E-V_{2}\), momentum \(p_{x}\) and \(p_{{}_{\perp}}\), and helicity polarization \(P\)[25]. As the potential \(V_{2}\) changes, from any initial state \(|\psi_{i}\rangle\), a one-dimensional leaf grows out in the Hilbert space, i.e., a curve in the four dimensional parameter space. Particularly, the projection of this leaf into the (\(E-V_{2}\), \(p_{x}\), \(p_{{}_{\perp}}\)) subspace is a hyperboloid characterized by \(E\) [Fig. 1(b)]. Different \(p_{{}_{\perp}}\), as determined by the incident angle, corresponds to different sections, i.e., hyperbolic curves. The projection onto the (\(p_{x}/p\), \(P\)) plane is shown in Fig. 1(c), where \(p=\sqrt{p_{x}^{2}+p_{\perp}^{2}}\). Surprisingly, the one-parameter transformation group for the spinors and consequently the redistribution of helicity is just the spinor representation of Lorentz boost \(\Lambda(w)\) along the \(x\)-axis. When \(V_{2}\) is large enough, the branch for negative energy states (\(E-V_{2}<0\)) associated with Klein tunneling emerges, which can be mapped from the original branch by a complex Lorentz boost \(\Lambda(\mathrm{i}\pi)\). This builds miraculously a latent, unified connection between different essential properties, i.e., helicity, Klein tunneling, and Lorentz boost, of Dirac fermions. The universal redistribution rule of the helicity is corroborated by extensive numerics in more general cases.
For the mass potential case with initial energy \(E\) [Fig. 1(d)], the effective mass \(m+U_{2}\) plays a similar role of the effective energy. The leaf can then be projected onto the (\(p_{x}\), \(m+U_{2}\)) plane, which, in the natural units (\(c=1\)), is a circle. All these circles for possible \(p_{{}_{\perp}}\)s construct a
sphere in the (\(m+U_{2}\), \(p_{x}\), \(p_{{}_{\perp}}\)) space [Fig. 1(e)]. The projection on the (\(p_{x}\), \(P\)) plane is shown in Fig. 1(f). As \(U_{2}\) varies, the one-parameter transformation group \(\Gamma(\mu)\) acting on the (\(p_{x}\), \(m+U_{2}\)) plane can be conveniently described by an SO(2) rotation. Its spinor representation for the abstract spinor state transformation has also been constructed.
These findings uncover universal redistribution rules of helicity, dictated by symmetries described by Lorentz boost and an abstract rotation group, respectively, in transport processes through electrostatic or mass potentials. Our results provide conveniently a modulation strategy of the helicity by simply configuring the electrostatic or mass potential profiles, holding great promise to exploit the new degree of freedom for novel applications.
_Model._--The transport problem of the (3+1)-D massive Dirac fermions can be described by the Dirac Hamiltonian in the position representation [26]
\[\hat{H}=\mathbf{\alpha}\cdot\hat{\mathbf{p}}+\beta[m+U(x)]+V(x), \tag{1}\]
where \(\mathbf{\alpha}=\begin{pmatrix}0&\mathbf{\sigma}\\ \mathbf{\sigma}&0\end{pmatrix}\) with Pauli matrices \(\mathbf{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\), \(\hat{\mathbf{p}}\) is the 3-momentum operator, \(\beta=\begin{pmatrix}\mathds{1}_{2}&0\\ 0&-\mathds{1}_{2}\end{pmatrix}\), \(m\) is the Dirac mass, \(U\) is the mass potential, and \(V\) is the electrostatic potential. Natural units \(c=\hbar=e=1\) are employed. For the sake of simplicity, we consider two typical cases, i) \(U(x)=0\), and ii) \(V(x)=0\). We exploit a piecewise-constant potential with \(n\) regions of width \(\{d_{j}\}\) along the \(x\)-axis, where \(d_{1}=d_{n}=\infty\), and \(V_{1}=U_{1}=0\), which, in the large \(n\) limit, can approach arbitrary continuous potential profile.
Note that the whole Hamiltonian \(\hat{H}\) does not commute with \(\hat{h}\). Nevertheless, in each region \(j\), the corresponding Hamiltonian \(\hat{H}_{j}\) commutes with the helicity operator [26]
\[\hat{h}=\begin{pmatrix}\mathbf{\sigma}&0\\ 0&\mathbf{\sigma}\end{pmatrix}\cdot\hat{\mathbf{p}}/p_{j}, \tag{2}\]
where \(p_{j}=\sqrt{(E-V_{j})^{2}-(m+U_{j})^{2}}\), and the incident energy \(E>m\). The transmitted wavefunctions (chosen as the common eigenstates of \(\hat{H}\), \(\hat{\mathbf{p}}\), and \(\hat{h}\)) are plane wave solutions [27]
\[\psi_{j}^{(h)}=\begin{pmatrix}\chi_{j}^{(h)}\\ hk_{j}\chi_{j}^{(h)}\end{pmatrix}e^{\mathrm{i}(\mathbf{p}_{j}\cdot\mathbf{x}-Et)}, \tag{3}\]
where \(k_{j}=p_{j}/(E-V_{j}+m+U_{j})\), \(h=\pm 1\) is the eigenvalue of \(\hat{h}\), and \(\chi_{j}^{(+)}=[\cos\frac{\theta_{j}}{2}e^{-\mathrm{i}\varphi_{j}/2},\sin \frac{\theta_{j}}{2}e^{\mathrm{i}\varphi_{j}/2}]^{T}\), \(\chi_{j}^{(-)}=[-\sin\frac{\theta_{j}}{2}e^{-\mathrm{i}\varphi_{j}/2},\cos \frac{\theta_{j}}{2}e^{\mathrm{i}\varphi_{j}/2}]^{T}\)[27], with \(\mathbf{p}_{j}=(p_{j,x},p_{y},p_{z})=p_{j}(\sin\theta_{j}\cos\varphi_{j}, \sin\theta_{j}\sin\varphi_{j},\cos\theta_{j})\) as \(p_{y}\) and \(p_{z}\) are conserved in different regions [13; 28], \(\theta_{j}=\cos^{-1}(p_{1}\cos\theta_{1}/p_{j})\), \(\varphi_{j}=\cos^{-1}(\lambda_{j}\sqrt{p_{j}^{2}-p_{y}^{2}-p_{z}^{2}}/p_{j}\sin \theta_{j})\), and \(\lambda_{j}=\mathrm{sgn}(E-V_{j})\) is for positive or negative energy states. Due to the rotational symmetry with respect to the \(x\)-axis, the
Figure 1: Schematics of the two transport processes and the rule of spinor state transformations. (a) Transport through a piecewise-constant electrostatic potential, with momentum \(\mathbf{p}_{j}\) and relative helicity components \(|t_{j}^{(\pm)}|^{2}\) (red for positive, blue for negative) in each region \(j\) demonstrated. \(|\psi_{i}\rangle\) is the initial (incident) spinor state with completely positive helicity, and \(|\psi_{f}\rangle\) is the final state in region 2. (b,c) Visualizing the Lorentz boost \(\Lambda(w)\) dictated transformation rule we uncovered from \(|\psi_{i}\rangle\) (hollow circle) to \(|\psi_{f}\rangle\)s. Projection of the leaf consisting of all \(|\psi_{f}\rangle\)s onto subspaces parameterized by effective energy \(E-V_{2}\), momentum \(p_{x}\) and \(p_{{}_{\perp}}\) (b), and by \(p_{x}/p\) and helicity polarization \(P\) (c). Solid (dashed) curves are for transmitted (reflected) parts. Red and blue indicate positive and negative effective energy branches, respectively, which are connected by \(\Lambda(\mathrm{i}\pi)\). (d-f) Mass potential case. The transformation rule is now governed by an abstract rotation group \(\Gamma(\mu)\) we identified. (e) Projection of the leaf in the (\(m+U_{2}\), \(p_{x}\), \(p_{{}_{\perp}}\)) subspace, and (f) on the (\(p_{x}/p\), \(P\)) plane. Red and yellow curves are for positive and negative effective mass \(m+U_{2}\), respectively.
angle \((\theta_{j},\varphi_{j})\) can be characterized by a single angle \(\varrho_{j}=\cos^{-1}(\sin\theta_{j}\cos\varphi_{j})\) between \(\mathbf{p}_{j}\) and the positive \(x\)-axis. For the reflected plane wave \(\overline{\psi}_{j}^{(h)}\), one only needs to change \(p_{j,x}\) to \(-p_{j,x}\) and \(\varphi_{j}\) to \(\pi-\varphi_{j}\). Without loss of generality, we assume that the incident Dirac fermions are completely positively helicity-polarized plane waves.
The wavefunction in each region can be decomposed into the eigenmodes in terms of Eq. (3) and its reflected counterpart, i.e., \(\psi_{j}=t_{j}^{(h)}\psi_{j}^{(h)}+r_{j}^{(h)}\overline{\psi}_{j}^{(h)}\). For transmitted flow, the helicity polarization is defined as [29; 30]
\[P_{j}=(J_{j}^{(+)}-J_{j}^{(-)})\big{/}(J_{j}^{(+)}+J_{j}^{(-)}), \tag{4}\]
where \(J_{j}^{(h)}=|t_{j}^{(h)}|^{2}|\psi_{j}^{(h)\dagger}\mathbf{\alpha}\psi_{j}^{(h)}|\) is the magnitude of the probability current. For reflected flow, \(\overline{J}_{j}^{(h)}\) and \(\overline{P}_{j}\) can be defined similarly. The transmission and reflection coefficients \((t_{j}^{(h)},r_{j}^{(h)})\) are then numerically calculated via the transfer matrix method by matching them at the boundary of each region [31; 32; 33]. The overall transmission probability is then
\[T^{(h)}=|t_{n}^{(h)}|^{2}k_{n}\sin\theta_{n}\cos\varphi_{n}/k_{1}\sin\theta_{1 }\cos\varphi_{1}.\]
As a test, by approximating a smooth linear potential with large \(n\), our numerical calculation matches perfectly with the analytical result [34] [Supplemental Material (SM) Sec. I].
_Theory._--We derive the redistribution rules of helicity in transport as follows.
_Case_ i), only consider the electrostatic potential, i.e., for any \(j\in[1,n]\), \(U_{j}=0\). We have
\[\hat{H}_{j}\psi_{j}^{(h)}=\lambda_{j}|E-V_{j}|\psi_{j}^{(h)},\ \hat{h}\psi_{j}^{(h)}=h\psi_{j}^{(h)}. \tag{5}\]
For \(V_{j}\in(V_{-},V_{+})\) with \(V_{\pm}=E\pm\sqrt{p_{y}^{2}+p_{z}^{2}+m^{2}}\), the transmitted part has an imaginary \(p_{j,x}\), which is evanescent and no longer has a well-defined helicity. Therefore we only consider \(V_{j}<V_{-}\) scattering or \(V_{j}>V_{+}\) Klein tunneling.
For \(n=3\), the normalized transmission and reflection coefficients \(t_{j}^{(h)}\) (\(j=2,3\)) and \(r_{j}^{(h)}\) (\(j=1,2\)) can be obtained analytically (SM Sec. II), e.g.,
\[\begin{pmatrix}t_{j}^{(+)}\\ t_{j}^{(-)}\end{pmatrix}=|\mathcal{M}|^{-1/2}\begin{pmatrix}\mathcal{M}_{(+,+)}& \mathcal{M}_{(+,-)}\\ \mathcal{M}_{(-,+)}&\mathcal{M}_{(-,-)}\end{pmatrix}\cdot\begin{pmatrix}t_{1} ^{(+)}\\ t_{1}^{(-)}\end{pmatrix}, \tag{6}\]
where \(\mathcal{M}_{(h^{\prime},h)}\equiv\langle\psi_{j}^{(h^{\prime})}|\,\alpha_{x}\,|\psi_{1}^{(h)}\rangle\) with the normalized ket \(|\psi_{j}^{(h)}\rangle=\psi_{j}^{(h)}/|\psi_{j}^{(h)}|\). For a positively helicity-polarized initial state, \(t_{1}^{(+)}=1\), \(t_{1}^{(-)}=0\). Plugging into Eq. (4), the exact expression of the helicity polarization for the transmitted flow is
\[P_{j}=(\eta_{j}-1)\big{/}(\eta_{j}+1), \tag{7}\]
where
\[\eta_{j}=\Big{|}t_{j}^{(+)}\big{/}t_{j}^{(-)}\Big{|}^{2}=\left(\frac{k_{1}+k_{ j}}{k_{1}-k_{j}}\right)^{2}\cdot\frac{1+\cos(\varrho_{1}+\varrho_{j})}{1- \cos(\varrho_{1}+\varrho_{j})}. \tag{8}\]
For reflected flow, one only needs to change \(\varrho_{j}\) to \(\pi-\varrho_{j}\) to get \(\overline{P}_{j}\).
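A minimal numerical sketch of Eqs. (7)-(8) for the electrostatic case is given below; it is restricted to the ordinary scattering regime \(V_{j}<V_{-}\) with positive effective energy, the incident parameters are those of Fig. 3, and the loop values of \(V_{j}\) are illustrative only.

```python
import numpy as np

def helicity_polarization(E, m, rho1, V):
    """Transmitted helicity polarization P_j of Eqs. (7)-(8) for a fully
    positively polarized incident wave and an electrostatic potential V in
    region j; ordinary scattering regime only (natural units)."""
    p1 = np.sqrt(E**2 - m**2)            # incident momentum magnitude
    p_perp = p1 * np.sin(rho1)           # conserved transverse momentum
    Eeff = E - V                         # effective energy in region j
    pj = np.sqrt(Eeff**2 - m**2)         # momentum magnitude in region j
    k1, kj = p1 / (E + m), pj / (Eeff + m)
    rhoj = np.arcsin(p_perp / pj)        # propagation angle in region j
    eta = ((k1 + kj) / (k1 - kj))**2 \
        * (1 + np.cos(rho1 + rhoj)) / (1 - np.cos(rho1 + rhoj))
    return (eta - 1) / (eta + 1)

# Incident parameters as in Fig. 3 (E = 1.08 m, incident angle 2*pi/5).
m, E, rho1 = 1.0, 1.08, 2 * np.pi / 5
for V in [-2.0, -1.0, -0.5, -0.1]:
    print(f"V = {V:+.1f} m  ->  P_j = {helicity_polarization(E, m, rho1, V):+.3f}")
```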
Remarkably, the spinor state \(\psi_{j}^{(h)}\) in region \(j\) is determined completely by \(V_{j}\) and \((E,\mathbf{p}_{1})\) [Eq. (3)], and is irrelevant to the potential in other regions. So do the matrix \(\mathcal{M}\) and the helicity polarization \(P_{j}\). Regarding the transformation from \(|\psi_{1}\rangle\) to \(|\psi_{j}\rangle\) as an abstract operation, it is additive and forms a one-parameter (\(V_{j}\)) transformation group (SM Sec. III-A), generating leaves in the spinor Hilbert space.
Alternatively, disregard the actual transport process, only consider the two sets of energy and momentum of the initial and final states. Now imagine that they are for the same state but observed in two different inertial reference frames, i.e., \(O\) and \(O^{\prime}\) in Fig. 2, and are thus connected by a Lorentz boost \(\Lambda(w)\) with rapidity \(w\) (defined by \(\cosh w=(1-v^{2})^{-1/2}\) with \(v\) being the relative velocity of the two frames),
\[\begin{pmatrix}E-V_{j}\\ p_{j,x}\end{pmatrix}=\begin{pmatrix}\cosh w&-\sinh w\\ -\sinh w&\cosh w\end{pmatrix}\cdot\begin{pmatrix}E\\ p_{1,x}\end{pmatrix}. \tag{9}\]
In this regard, the spinor state will be transformed to \(\hat{S}\,|\psi_{1}\rangle\) with
\[\hat{S}[\Lambda(w)]=\cosh(w/2)\mathds{1}_{4}-\sinh(w/2)\alpha_{x}, \tag{10}\]
being the spinor representation of the Lorentz boost [26]. Surprisingly, we find \(\hat{S}\,|\psi_{1}\rangle\) from the imagined inertial frame transformation is the same as the final state \(|\psi_{j}\rangle\) in the actual transport problem. As such, the one-parameter transformation group characterizing the transport process is just the Lorentz boost. Retrospectively, as the electrostatic potential only shifts the energy of the Dirac fermion, the equation structure is preserved, and the corresponding abstract operation is equivalent to the inertial frame transformation (Lorentz boost). This simple, universal behavior unveils yet another concealed attribute of the miraculous Dirac equation.
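The kinematic part of this statement, Eq. (9), can be checked directly: since \((E-V_{j})^{2}-p_{j,x}^{2}=m^{2}+p_{\perp}^{2}\) holds in every region, the initial and final energy-momentum pairs are always connected by a boost along \(x\). A minimal sketch with illustrative numbers (it checks only the kinematic map, not the spinor coefficients themselves) is:

```python
import numpy as np

# Illustrative values in the ordinary scattering regime (natural units).
m, E, rho1, V = 1.0, 1.08, 2 * np.pi / 5, -1.0

p1 = np.sqrt(E**2 - m**2)
p_perp, p1x = p1 * np.sin(rho1), p1 * np.cos(rho1)
Eeff = E - V
pjx = np.sqrt(Eeff**2 - m**2 - p_perp**2)

M = np.sqrt(m**2 + p_perp**2)                      # boost-invariant transverse mass
w = np.arccosh(E / M) - np.arccosh(Eeff / M)       # rapidity, w = w_1 - w_j

boost = np.array([[np.cosh(w), -np.sinh(w)],
                  [-np.sinh(w), np.cosh(w)]])
print(boost @ np.array([E, p1x]))                  # ~ [Eeff, pjx], i.e. Eq. (9)
print(np.array([Eeff, pjx]))
```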
For Klein tunneling (\(V_{j}>V_{+}\)), we will take the convention \(p_{j,x}<0\), so that the \(x\)-component of the group velocity \(v_{x}=\lambda_{j}p_{j,x}/|E-V_{j}|\) is always positive for transmitted part [35; 36; 37]. Conventional Lorentz boost fails to link the positive and negative energy states. By replacing \(w\) with \(\mathrm{i}\pi\), the operation \(\Lambda(\mathrm{i}\pi)\) flips the sign of effective energy and \(p_{x}\), as from \(O^{\prime}\) to \(\widetilde{O}\) in Fig. 2. \(\hat{S}[\Lambda(\mathrm{i}\pi)]=-\mathrm{i}\alpha_{x}\) flips the helicity polarization, exactly the same as that in the actual tunneling process. The combined operation from \(O\) to \(\widetilde{O}\) is then \(\Lambda(\mathrm{i}\pi)\Lambda(w)\) (SM Sec. III-B), which is traceable [38; 39; 40; 41; 42]. The complex Lorentz boost \(\Lambda(\mathrm{i}\pi)\) enforces an antisymmetry between the helicity polarization and the effective energy \(E-V\) [Fig. 3(a)].
Indeed, for Dirac spinors, introducing \(\hat{S}[\Lambda(\mathrm{i}\pi)]\) is required by the \(\mathcal{PCT}\) symmetry. The \(\mathcal{PCT}\) symmetry interchanges the positive and negative energy states, with \(\mathcal{P}=\mathcal{P}_{x}\mathcal{P}_{y}\mathcal{P}_{z}\), \(\mathcal{C}\), \(\mathcal{T}\) being the parity, charge conjugation, and time-reversal operations. Due to the rotational symmetry with respect to the \(x\)-axis, \(\mathcal{P}_{y}\mathcal{P}_{z}=\mathds{1}\), the \(\mathcal{PCT}\) symmetry reduces to \(\mathcal{P}_{x}\mathcal{C}\mathcal{T}\), which equals \(\hat{S}[\Lambda(\mathrm{i}\pi)]\) up to an unobservable phase (SM Sec. III-B).
The above results do not explicitly depend on the fact of \(n=3\), and are valid for arbitrary \(n\) regions.
_Case_ ii), the mass potential. Requiring \(p_{j,x}\) to be real leads to \(U_{j}\in(U_{-},U_{+})\) with \(U_{\pm}=-m\pm\sqrt{E^{2}-p_{y}^{2}-p_{z}^{2}}\). Note that the negative mass potential has been widely used in models of nuclear physics and condensed matter physics [18; 19; 20; 21; 22; 23]. Similarly, Eqs. (6-8) have the same form except that now \(\psi_{j}^{(h)}\), \(k_{j}\), and \(\varrho_{j}\) are determined by \(U_{j}\). The corresponding map from \(|\psi_{1}\rangle\) to \(|\psi_{j}\rangle\) also forms a one-parameter transformation group, denoted as \(\Gamma(\mu)\) (SM Sec. IV). In the plane of effective mass \(m+U_{j}\) and \(x\)-momentum, the action of \(\Gamma(\mu)\) reduces to SO(2) rotation
\[\begin{pmatrix}m+U_{j}\\ p_{j,x}\end{pmatrix}=\begin{pmatrix}\cos\mu&-\sin\mu\\ \sin\mu&\cos\mu\end{pmatrix}\cdot\begin{pmatrix}m\\ p_{1,x}\end{pmatrix}, \tag{11}\]
where \(\mu\) is an angle parameter determined by \(U_{j}\). The transformation of the spinor states from \(|\psi_{1}\rangle\) to \(|\psi_{j}\rangle\) in the actual transport process can be constructed as \(|\psi_{j}\rangle=\hat{S}_{m}\,|\psi_{1}\rangle\), with
\[\hat{S}_{m}[\Gamma(\mu)]=\cos(\mu/2)\mathds{1}_{4}-\sin(\mu/2)\beta\alpha_{x} \tag{12}\]
being the spinor representation of \(\Gamma(\mu)\). Thus the redistribution rule of the helicity is derived, and is completely different from that due to electrostatic potentials. To the best of our knowledge, this group \(\Gamma(\mu)\) and its spinor representation have not been reported before.
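The corresponding kinematic check for Eq. (11) is analogous: with \(E\) and \(p_{\perp}\) fixed, \((m+U_{j})^{2}+p_{j,x}^{2}=E^{2}-p_{\perp}^{2}\) is constant, so the map is an SO(2) rotation in the \((m+U_{j},p_{x})\) plane. A minimal sketch with illustrative numbers (\(U_{j}\) chosen inside the allowed window \((U_{-},U_{+})\)):

```python
import numpy as np

m, E, rho1, U = 1.0, 1.08, 2 * np.pi / 5, -0.5     # illustrative values (natural units)

p1 = np.sqrt(E**2 - m**2)
p_perp, p1x = p1 * np.sin(rho1), p1 * np.cos(rho1)
meff = m + U
pjx = np.sqrt(E**2 - p_perp**2 - meff**2)          # (m+U)^2 + p_x^2 is conserved

mu = np.arctan2(pjx, meff) - np.arctan2(p1x, m)    # rotation angle of Gamma(mu)
rot = np.array([[np.cos(mu), -np.sin(mu)],
                [np.sin(mu),  np.cos(mu)]])
print(rot @ np.array([m, p1x]))                    # ~ [meff, pjx], i.e. Eq. (11)
print(np.array([meff, pjx]))
```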
_Numerical verification._--To be as general as possible, we consider a piecewise-constant potential model with \(n=10\) and random widths and potential heights for each region: \(d_{j}\in(0,1/m)\), \(V_{j}\) or \(U_{j}\in(-10m,10m)\). We then systematically vary the potential value in a randomly chosen region, say, \(j=5\), and plot the helicity polarization of the transmitted and reflected waves in Fig. 3 as symbols. They agree with the theory well for both of the electrostatic and mass potentials. The numerical simulation corroborates clearly that the helicity redistribution only depends on the potential in this region, but is independent of the transport processes before or after this region.
_Potential applications._--Figure 4 demonstrates the variation of the helicity polarization \(P\) vs the key parameters. For electrostatic potential, \(P\) can be reversed from \(1\) (incident) to \(-1\) (output). For mass potential, with a non-polarized incident wave of \(P=0\), the output can be modulated to either \(1\) or \(-1\). That is, complete polarization of helicity can be generated from completely non-polarized incident waves by applying a properly adjusted mass potential. Due to the inherent connection to spin polarization, the helicity, with the unveiled redistribution rule, may lead to appealing applications for spin-based electronics.
Figure 3: Plots of helicity polarization for transmitted (\(P_{j}\), solid curves) and reflected (\(\overline{P}_{j}\), dashed) waves. (a) vs effective energy \(E-V_{j}\) in units of \(E\) for electrostatic potential, and (b) vs effective mass \(m+U_{j}\) in units of \(m\) for mass potential. The incident energy and angle: \(E=1.08m\), \(\varrho_{1}=2\pi/5\). The curves are from Eqs. (7-8). The circles are numerical results from the random piecewise-constant potential model with \(n=10\) and \(j\) a randomly chosen region, e.g., \(j=5\).
_Discussion and conclusion._-- Another transport setup is through magnetic barriers. In this case, the helicity is conserved [43; 44; 45; 46] as it commutes with the (3+1)-D Dirac Hamiltonian in a static magnetic field [47; 48; 49; 8]. Thus the redistribution rule is trivial.
Helicity is one of the most fundamental and intriguing properties of Dirac fermions. Due to the complexity of its combined nature of spin and momentum, a unified understanding of how it changes in transport is still lacking. We tackle this issue and have uncovered the universal helicity redistribution rule, from which process-independent helicity polarizations arise. The underlying principle of the universality is unravelled by abstracting the physical transport of helicity into imagined transforming operations on the spinor states. In this way, we identify one-parameter transformation groups as well as their spinor representations, which are described by the Lorentz boost \(\Lambda(w)\) for electrostatic potentials, and the abstract rotation group \(\Gamma(\mu)\) for mass potentials. This explicitly reveals a previously unknown and highly nontrivial connection between the physical rule of helicity in transport and the underlying symmetries of the Dirac equation. Our results are not only fundamental to the understanding of relativistic quantum scattering, tunneling, and transport, but also offer the promise of generating desired helicity polarization for exotic applications in mesoscopic systems [50; 51; 52; 12].
_Acknowledgments._-- We thank Profs. Yong-Shi Wu, Zhong-Zhou Ren, Li-Sheng Geng for insightful discussions. This work was supported by NSFC under Grants No. 12175090, No. 12105125, No. 11775101, and No. 12247101, and by the 111 Project under Grant No. B20063.
|
2307.14962 | Probing an ultralight QCD axion with electromagnetic quadratic
interaction | The axion-gluon coupling is the defining feature of the QCD axion. This
feature induces additional and qualitatively different interactions of the
axion with standard model particles -- quadratic couplings. Previously,
hadronic quadratic couplings have been studied and experimental implications
have been explored especially in the context of atomic spectroscopy and
interferometry. We investigate additional quadratic couplings to the
electromagnetic field and electron mass. These electromagnetic quadratic
couplings are generated at the loop level from threshold corrections and are
expected to be present in the absence of fine-tuning. While they are generally
loop-suppressed compared to the hadronic ones, they open up new ways to search
for the QCD axion, for instance via optical atomic clocks. Moreover, due to the
velocity spread of the dark matter field, the quadratic nature of the coupling
leads to low-frequency fluctuations in any detector setup. These distinctive
low-frequency fluctuations offer a way to search for heavier axions. We provide
an analytic expression for the power spectral density of this low-frequency
background and briefly discuss experimental strategies for a low-frequency
background search. | Hyungjin Kim, Alessandro Lenoci, Gilad Perez, Wolfram Ratzinger | 2023-07-27T15:55:37Z | http://arxiv.org/abs/2307.14962v2 | # Probing an ultralight QCD axion with electromagnetic quadratic interaction
###### Abstract
The axion-gluon coupling is the defining feature of the QCD axion. This feature induces additional and qualitatively different interactions of the axion with standard model particles - quadratic couplings. Previously, hadronic quadratic couplings have been studied and experimental implications have been explored especially in the context of atomic spectroscopy and interferometry. We investigate additional quadratic couplings to the electromagnetic field and electron mass. These electromagnetic quadratic couplings are generated at the loop level from threshold corrections and are expected to be present in the absence of fine-tuning. While they are generally loop-suppressed compared to the hadronic ones, they open up new ways to search for the QCD axion, for instance via optical atomic clocks. Moreover, due to the velocity spread of the dark matter field, the quadratic nature of the coupling leads to low-frequency fluctuations in any detector setup. These distinctive low-frequency fluctuations offer a way to search for heavier axions. We provide an analytic expression for the power spectral density of this low-frequency background and briefly discuss experimental strategies for a low-frequency background search.
Footnote †: preprint: DESY-23-110
## I Introduction
The axion solution of the strong CP problem requires the axion field to couple to the strong sector [1; 2; 3; 4; 5; 6; 7; 8]. Couplings between ultralight spin-0 fields and the strong sector occur also in models addressing other theoretical questions, such as the quark-flavor puzzle and electroweak hierarchy problem, or in phenomenological models like the Higgs-portal [9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. If the spin-0 field is a scalar, these couplings are severely constrained by equivalence-principle and fifth-force bounds [19; 20]. If the spin-0 field is a pseudo-scalar field, long-range forces do not appear at leading order, and therefore, the corresponding bounds are dramatically weaker. A similar trend in the strength of bounds is obtained in cases where the spin-0 field is assumed to be ultralight dark matter (ULDM); the bound on scalar DM is more than 10 orders of magnitude stronger than that of a pseudo-scalar for the same field content [21].
Axion searches are mostly based on its anomalous coupling to the photon or axial-vector couplings to standard model particles. Below the confinement scale, however, strong confining dynamics generate sizable quadratic couplings of the QCD axion to SM scalar operators in the strong sector1, offering new directions for axion searches. For instance, the quadratic couplings can change the potential structure of the axion in a finite density environment, which can be examined in extreme stellar environments such as white dwarfs and neutron stars [22; 23; 24; 25].
Footnote 1: In generic axion models these are suppressed by the axion mass [21]
Moreover, these hadronic quadratic couplings induce small time-oscillations of nuclear parameters, if the axion constitutes the observed dark matter. Such small oscillations of nuclear parameters can be probed by atomic spectroscopy and/or interferometry, for instance by atomic clocks [26]. While atomic spectroscopy provides an interesting way to probe axion dark matter, it is still challenging to probe the axion via hadronic quadratic couplings since atomic clocks are in general less sensitive to the variation of nuclear parameters.
In this work, we explore another kind of quadratic interaction of the QCD axion - the quadratic interactions with the electromagnetic field and the electron. The former arises from one-loop corrections, while the latter is induced by two-loop corrections. Although such couplings are generally smaller than their hadronic counterpart, they allow us to probe the QCD axion with a wider range of experimental setups, some of which are more sensitive. These couplings are induced and dominated by loops involving IR states, and therefore are expected to be naturally present in the theory regardless of the details of the microscopic theory. The main objective of this work is, therefore, to study the interplay between current and near-future experimental sensitivity of the quadratic axion couplings to the strong and electromagnetic sectors.
This work is organized as follows. In Section II, we briefly discuss hadronic quadratic couplings from the axion-gluon interaction. We then show that the hadronic quadratic couplings generate quadratic interactions with the photon and the electron at one and two-loop levels, respectively. In Section III, we discuss the implications of electromagnetic quadratic interactions in axion searches with atomic spectroscopy and gravitational wave detectors. In Section IV, we discuss in more detail the signal spectrum of axion dark matter generated by quadratic interactions. We show that the quadratic nature of the coupling leads to low-frequency stochastic fluctuations of observables besides the coherent harmonic signals at
frequencies corresponding to two times the axion mass. We further discuss possibilities to constrain and probe such stochastic signals in an experimental setup with a single detector and multiple detectors. We conclude in Section V. We use natural units \(c=\hbar=1\) throughout this work.
## II Quadratic couplings
We start from the axion coupling to the standard model gluon field,
\[\mathcal{L}=\frac{g_{s}^{2}}{32\pi^{2}}\frac{\phi}{f_{\phi}}G_{\mu\nu}^{a} \widetilde{G}^{a\mu\nu}, \tag{1}\]
where \(f_{\phi}\) is the axion decay constant, \(g_{s}\) is the strong coupling, \(G_{\mu\nu}^{a}\) and \(\widetilde{G}_{\mu\nu}^{a}\) are the gluon field strength and its dual. We do not take into account any other couplings in this work; i.e. we consider KSVZ-like models where axion couplings to the axial vector currents of SM fields are absent at UV scales. Model-dependent couplings will not change our analysis, but they may lead to additional bounds on the axion parameter space.
The axion-gluon coupling (1) naturally leads to hadronic quadratic couplings below the QCD scale. For instance, the pion mass can be found from the chiral Lagrangian as \(m_{\pi}^{2}(\theta)=B(m_{u}^{2}+m_{d}^{2}+2m_{u}m_{d}\cos\theta)^{1/2}\) with \(B=-\langle\bar{q}q\rangle_{0}/f_{\pi}^{2}\) and \(\theta=\phi/f_{\phi}\)[40]. Expanding the pion mass around \(\theta=0\), we find a quadratic coupling to pions, \(\mathcal{L}\supset\theta^{2}\pi^{2}\). Furthermore, the nucleon mass depends on the pion mass through \(\mathcal{L}\supset 4c_{1}m_{\pi}^{2}(\theta)\bar{N}N\) with \(c_{1}=-1.1\,\mathrm{GeV}^{-1}\)[41], which leads to a quadratic interaction between the axion and nucleons as well.
The hadronic quadratic interactions introduce time oscillations of the nuclear parameters if the axion is the
Figure 1: Summary of constraints. Constraints with microwave clocks are represented in a red tone: Rb/Cs fountain clocks (**Rb/Cs**) [27], a H-maser with a Si cavity (**H/Si**) [28], and strontium and cesium clocks (**Sr/Cs**) [29]. They receive the dominant contribution from the variation of nuclear parameters [26]. Searches based on optical clock transitions are represented with a green color scheme: Yb\({}^{+}\) and Sr (Yb/Sr) [29], Sr with a Si cavity (**Sr/Si**) [28], Al\({}^{+}\), Hg\({}^{+}\), Yb, and Sr (**Al**/**Hg, Al/Yb, Yb/Sr**) [30], and the electric-octupole (E3) and the electric-quadrupole (E2) transitions of Yb\({}^{+}\) ion (Yb\({}^{+}\)E3/E2), and Yb\({}^{+}\) (E3) and Sr (Yb\({}^{+}\)E3/Sr) [31]. The region bounded by the green dashed line is excluded by comparing measured frequency uncertainties in Yb\({}^{+}\)E3/E2 with the low-frequency fluctuations of the axion DM (see Section IV for details). Other constraints and projections are shown as follows: co-magnetometer and NASDUCK (pink, projection as dashed pink) [32; 33], molecular iodine I\({}_{2}\) spectroscopy (**dark blue**), MAGIS-100/MAGIS-km (**dot-dashed**/**dotted blue**), CASPEr-electric (**red dashed**) [34], the AURIGA resonant bar gravitational wave experiment (emerald) [35], oscillating neutron EDM (brown) [36], supernova 1987A (orange) [37], axion superradiance constraints (gray) [38], \({}^{229}\)Th nuclear isomer transition (gray dashed), and strontium monohydroxide SrOH (violet dashed). The diagonal line in the bottom right is the minimal QCD axion line (olive), \(m^{2}f_{\phi}^{2}\simeq m_{\pi}^{2}f_{\pi}^{2}\). Spectroscopy bounds above the cyan solid line must be taken carefully as the axion could develop a static profile around the Earth (cyan) [22]. In addition, we show the reaches of MAGIS-km and \({}^{229}\)Th nuclear clock as a thick blue and gray line in scenarios where the DM density in the solar system is enhanced via capture processes [39].
dark matter in the present universe. As it is clear from the chiral Lagrangian, the pion and nucleon mass will receive a small oscillating component, e.g.
\[\frac{\delta m_{\pi}^{2}}{m_{\pi}^{2}}=-\frac{z}{2(1+z)^{2}}\theta^{2}(t) \tag{2}\]
with \(z=m_{u}/m_{d}\). The amplitude of the oscillating \(\theta^{2}(t)\) is proportional to the dark matter density \(\rho\approx\theta^{2}(t)m^{2}f_{\phi}^{2}\), where \(m\) denotes the axion mass. Other nuclear parameters that depend on the pion mass, such as the nuclear \(g\)-factor will also receive such a time-oscillating component. As a consequence, atomic energy levels oscillate and, in addition, any object whose mass receives QCD contributions experiences an acceleration due to the axion dark matter background.
Based on this observation, it was shown in Ref. [26] that spectroscopic and interferometric measurements, such as atomic clocks and gravitational wave interferometers, can be used to search the axion at the low mass range. In particular, clock comparison tests using hyperfine transitions are considered since hyperfine transitions are directly affected by the variation of nuclear parameters. In addition, gravitational wave interferometers are considered as most mass of the test bodies comes from QCD, and therefore, they fluctuate inevitably in the axion dark matter background.
As discussed in the introduction, the hadronic interactions also lead to EM quadratic couplings at low energy scales through loop corrections. Below, we detail how these EM couplings are induced and show that all of these effects are due to the variation of the pion mass.
### Quadratic interaction with the electromagnetic field
We first consider the quadratic coupling to the electromagnetic field,
\[\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{C_{\gamma}}{4}\frac{\phi^{2 }}{f_{\phi}^{2}}F_{\mu\nu}F^{\mu\nu} \tag{3}\]
at energy scales below the pion mass. The coefficient \(C_{\gamma}\) is given as
\[C_{\gamma}=-\frac{z}{24(1+z)^{2}}\frac{\alpha}{\pi}\bigg{(}1+8\frac{\sigma_{ \pi N}}{m_{N}}\bigg{)}\simeq-3\times 10^{-5}. \tag{4}\]
Here \(z=m_{u}/m_{d}\simeq 0.46\) and \(\sigma_{\pi N}=\partial m_{N}/\partial\ln m_{\pi}^{2}\sim\mathcal{O}(50)\, \mathrm{MeV}\). The coefficient \(C_{\gamma}\) can be directly obtained from the one-loop computation of \(\phi\phi\rightarrow\gamma\gamma\) via a pion loop or a nucleon loop. Alternatively, it can be read off from the threshold correction to the running of the fine structure constant with respect to the variation of the pion and the nucleon masses.
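A one-line numerical evaluation of Eq. (4), with the nucleon mass set to the standard value \(m_{N}\simeq 939\,\mathrm{MeV}\) (an input assumed here rather than quoted above), reproduces the quoted size:

```python
import numpy as np

alpha = 1 / 137.036
z = 0.46
sigma_piN, m_N = 50.0, 939.0   # MeV; m_N is an assumed standard value

C_gamma = -z / (24 * (1 + z)**2) * (alpha / np.pi) * (1 + 8 * sigma_piN / m_N)
print(C_gamma)                 # ~ -3e-5, consistent with Eq. (4)
```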
Consider the running of the electromagnetic coupling \(1/e^{2}\) from \(\Lambda_{\mathrm{UV}}\) to \(\Lambda_{\mathrm{IR}}\). Let us assume a single charged particle whose mass \(m_{\pi}\) is in between these scales. The gauge coupling runs as \(1/e^{2}(\Lambda_{\mathrm{IR}})-1/e^{2}(\Lambda_{\mathrm{UV}})=\int_{\Lambda_{\mathrm{IR}}}^{\Lambda_{\mathrm{UV}}}d\ln\mu\,[2\beta_{e}(\mu)/e^{3}]\) where \(\beta_{e}(\mu)=(be^{3}/8\pi^{2})\), \(b=\frac{2}{3}\sum_{f}Q_{f}^{2}+\frac{1}{6}\sum_{s}Q_{s}^{2}\), and the sums over \(f\) and \(s\) account for Dirac fermions and complex scalars, respectively. For a fixed UV value of the gauge coupling, one finds that the gauge coupling at low energy depends on the mass of the charged particle,
\[\delta\bigg{(}\frac{1}{e_{\mathrm{IR}}^{2}}\bigg{)}=-\frac{\Delta b}{4\pi^{2} }\frac{\delta m_{\pi}}{m_{\pi}}. \tag{5}\]
Here \(\Delta b\) is the change of the beta function coefficient at the threshold; \(\Delta b=2Q^{2}/3,\ Q^{2}/6\) for a Dirac fermion and a complex scalar field of charge \(Q\), respectively. In the effective Lagrangian, this dependence is incorporated by
\[\mathcal{L}=\frac{\Delta be^{2}}{(4\pi)^{2}}\frac{\delta m_{\pi}}{m_{\pi}}F_{ \mu\nu}F^{\mu\nu}.\]
From this, we obtain
\[\frac{\delta\alpha}{\alpha}=C_{\gamma}\theta^{2}=\frac{\alpha}{\pi}\sum_{i} \Delta b_{i}\frac{\delta m_{i}}{m_{i}}=\frac{\alpha}{12\pi}\left[1+\frac{8 \sigma_{\pi N}}{m_{N}}\right]\frac{\delta m_{\pi}^{2}}{m_{\pi}^{2}}, \tag{6}\]
where we include the variation of the nucleon mass; the fine structure constant fluctuates as the pion mass changes due to the background axion dark matter.
### Quadratic interaction with electron mass
Furthermore, the hadronic quadratic couplings lead to a quadratic coupling to the electron mass,
\[\mathcal{L}=-C_{e}m_{e}\frac{\phi^{2}}{f_{\phi}^{2}}\bar{e}e, \tag{7}\]
where the coefficient \(C_{e}\) is
\[C_{e}\simeq\frac{3\alpha}{4\pi}C_{\gamma}\ln\frac{m_{\pi}^{2}}{m_{e}^{2}}. \tag{8}\]
This effect arises at two-loop order; it is suppressed by \((\alpha/\pi)^{2}\).
We estimate the coefficient \(C_{e}\) in the following way. Below the QCD scale, the dim-6 operator (3) contributes to the running of the electron mass. In QED, the correction to the electron mass due to running from the QCD scale to the electron mass is given by \((\delta m_{e}/m_{e})=(3\alpha/4\pi)\ln m_{\pi}^{2}/m_{e}^{2}\). Since \(\theta^{2}(t)\) oscillates at a frequency much smaller than the electron mass or the QCD scale, we can effectively take \(C_{\gamma}\theta^{2}\) as a constant and absorb it by rescaling the gauge field \(A_{\mu}\rightarrow(1+C_{\gamma}\theta^{2})^{1/2}A_{\mu}\). This is equivalent to taking \(e^{2}\to e^{2}(1+C_{\gamma}\theta^{2})\). Using the QED result, one finds that the dim-6 operator contributes to the running as
\[\frac{\delta m_{e}}{m_{e}}\simeq\frac{3\alpha}{4\pi}C_{\gamma}\theta^{2}\ln(m_ {\pi}^{2}/m_{e}^{2})=C_{e}\theta^{2}\]
from which we estimate \(C_{e}\) as in (8). An explicit diagrammatic computation leads to the same result. We do not consider the variation of electron mass further, however, as its effect on observables is usually much smaller than the variation of the fine structure constant and nuclear parameters.
## III Implications
The quadratic couplings to the electromagnetic field and the electron mass offer alternative ways to search for the QCD axion. Previously, Ref. [26] focused on the quadratic coupling to hadrons. Assuming that the axion constitutes DM, it was pointed out that those lead to variations of the nucleon mass and nuclear \(g\)-factor over time and that atomic clocks based on hyperfine transitions could pick up the axion DM-induced signals. When additionally taking into account quadratic couplings to the electromagnetic field and the electron mass, a wider range of experiments becomes sensitive, including atomic clocks based on electronic transitions. Since such optical clocks usually have a shorter averaging time and better sensitivities, one can expect to possibly probe a wider range of parameter space.
For clarification, let us briefly recall the argument from Ref. [42] of how a clock comparison test is sensitive to a coupling of ultralight dark matter, such as the one introduced in (6). Consider two stable frequency standards \(f_{A}\) and \(f_{B}\). Suppose that each frequency standard has a slightly different dependence on the fine structure constant, \(f_{A}\propto\alpha^{\xi_{A}}\) and \(f_{B}\propto\alpha^{\xi_{B}}\) with \(\xi_{A}\neq\xi_{B}\). Due to the time-variation of the fine structure constant caused by the DM field, the ratio of these two frequencies fluctuates as
\[\frac{\delta(f_{A}/f_{B})}{(f_{A}/f_{B})}=(\xi_{A}-\xi_{B})\frac{\delta\alpha }{\alpha}\propto\theta^{2}(t),\]
where \(\theta^{2}(t)=(\rho_{0}/m^{2}f_{\phi}^{2})\cos(2mt)\) is related to \(\delta\alpha/\alpha\) by (6). By monitoring the frequency ratio and investigating if the time series contains any harmonic signal at \(\omega=2m\), one can probe the QCD axion.
Generically the fractional frequency deviation arising from a quadratic coupling can be written as
\[\frac{\delta f_{A}}{f_{A}}=K_{A}\theta^{2}(t), \tag{9}\]
where \(K_{A}\) is the sensitivity coefficient that depends on the atomic species and transition. It takes all the effects (hadronic and electromagnetic) into account. A list of the coefficient \(K_{A}\) for different atom species is available in Appendix A.
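As a toy illustration of such a signal, the sketch below generates a synthetic frequency-ratio time series \(\delta(f_{A}/f_{B})/(f_{A}/f_{B})=(K_{A}-K_{B})\,\theta^{2}(t)\) and confirms via a periodogram that the power sits at \(\omega=2m\); the sensitivity coefficients, amplitude, and units are hypothetical placeholders, and the constant part of \(\theta^{2}\) (which only shifts the mean ratio) is dropped.

```python
import numpy as np

m = 1.0                       # axion mass, sets the frequency unit (illustrative)
K_A, K_B = 1e-4, -2e-2        # hypothetical sensitivity coefficients
theta2_amp = 1e-10            # hypothetical amplitude rho_0 / (m^2 f_phi^2)

t = np.linspace(0.0, 200 * 2 * np.pi / m, 20000)
ratio = (K_A - K_B) * theta2_amp * np.cos(2 * m * t)   # oscillating part of Eq. (9)

spec = np.abs(np.fft.rfft(ratio))**2
omega = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])
print("peak at omega =", omega[np.argmax(spec[1:]) + 1], "(expect 2m = 2.0)")
```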
Any stable frequency standard can be used to search for the QCD axion. Ref. [26] only used hyperfine transitions as only hadronic quadratic couplings were considered in that work and hadronic couplings do not affect the electronic transition to leading order. Possible variations of electronic transition caused by oscillations of the nuclear charge radius were investigated in Ref. [43]. Due to the electromagnetic quadratic couplings described above, electronic transition levels now change directly as the background DM oscillates. Although these new quadratic couplings are at least one-loop suppressed, they still lead to competitive bounds compared to microwave clocks as optical clocks have orders of magnitude smaller frequency uncertainties. Also, due to a smaller required averaging time of the spectroscopic measurement, optical clocks probe a slightly higher mass range than microwave clocks as shown in Figure 1.
The QCD axion also changes the length of objects through its electromagnetic quadratic couplings. Since the length of any object is proportional to the size of its atoms, i.e. the Bohr radius, \(L\propto(m_{e}\alpha)^{-1}\), the QCD axion induces a strain \(\Delta L/L=-(\delta\alpha/\alpha+\delta m_{e}/m_{e})\). This small strain can be effectively probed by resonant bar gravitational detectors, such as AURIGA [35], which is also shown in Figure 1. We also present the projected sensitivities of the atom interferometry MAGIS-100 and MAGIS-km experiments [34]. Atom interferometers compare the phase accumulated by two delocalized atom clouds. The presence of a background ULDM field can affect the phase of the atomic cloud in two ways: (i) via the value of the internal energy splitting, like an atomic clock [44]; (ii) via the exertion of an additional force on the atomic clouds, causing acceleration [45].
## IV Spectrum
The quadratic axion DM signal discussed so far is the harmonic signal, \(s(t)\propto\theta^{2}(t)\propto\cos(2mt)\), at the frequency twice the dark matter mass, \(\omega=2m\). By investigating if the detector output has an oscillating component at \(\omega=2m\) via matched filtering, it is possible to probe or constrain interactions of ULDM with SM particles.
The quadratic operator exhibits not only coherent harmonic oscillations but also distinctive low-frequency stochastic fluctuations at \(\omega\lesssim mv^{2}\), where \(v\) denotes the DM velocity. This offers another opportunity to test the QCD axion. Recently, Masia-Roig et al [46] showed that a network of sensors can be used to probe such low-frequency stochastic background in the context of non-gravitational quadratic interactions of ULDM with SM particles. Flambaum and Samsonov [47] argued that, by directly comparing the low-frequency background with experimentally measured uncertainties, it is possible to set limits on the QCD axion parameter space at higher masses.
We provide below the analytic spectrum of the low-frequency fluctuation of the axion dark matter from its quadratic interactions and project the sensitivity of different detector networks.
To see how this low-frequency stochastic noise arises from the quadratic operator, let us assume that the signal
is proportional to the quadratic operator as follows,
\[s(t)=K\theta^{2}(t),\]
with arbitrary constant \(K\). Once we expand the field as2
Footnote 2: See Appendix B and Ref. [48; 49] for more detailed discussions on this statistical description of wave dark matter.
\[\phi(t,x)=\sum_{i}\frac{1}{\sqrt{2mV}}\left[\alpha_{i}e^{-ik_{i}\cdot x}+\alpha_{ i}^{*}e^{ik_{i}\cdot x}\right] \tag{10}\]
with complex random numbers \((\alpha_{i},\alpha_{i}^{*})\), it is clear that the quadratic operator contains the sum, \(\omega_{i}+\omega_{j}\), and the difference, \(\omega_{i}-\omega_{j}\) of two frequencies in the field,
\[\phi^{2}(t,0)\supset\alpha_{i}\alpha_{j}e^{-i(\omega_{i}+\omega_{j})t}+\alpha_ {i}\alpha_{j}^{*}e^{-i(\omega_{i}-\omega_{j})t}+\text{h.c.}.\]
In the non-relativistic limit, the first term \(\omega_{i}+\omega_{j}\simeq 2m\) provides the harmonic signal at \(\omega=2m\). The second term, on the other hand, provides a low-frequency fluctuation at \(\omega\lesssim mv^{2}\).
A more careful investigation is possible via the power spectrum of the quadratic operator. The one-sided power spectral density (PSD) of the signal, \(P_{s}(f)\), is defined as
\[\langle\tilde{s}(f)\tilde{s}^{*}(f^{\prime})\rangle=\delta(f-f^{\prime})\frac {1}{2}P_{s}(f), \tag{11}\]
where \(s(t)=\int df\,e^{-2\pi ift}\,\tilde{s}(f)\). Following Ref. [49], for a normal DM velocity distribution \(n(\vec{v})=[(\rho_{0}/m)/(2\pi\sigma^{2})^{3/2}]\exp(-v^{2}/2\sigma^{2})\) with the mean dark matter density \(\rho_{0}\) and the velocity dispersion \(\sigma\), one finds the signal PSD as
\[P_{s}(f)=K^{2}\frac{\theta_{0}^{4}}{4}\tau_{\phi}\Big{[}A(f)+B(f)\Big{]} \tag{12}\]
where \(\tau_{\phi}=1/m\sigma^{2}\) is the coherence time and
\[A(f) = \pi\frac{\bar{v}^{4}}{\sigma^{4}}e^{-\bar{v}^{2}/\sigma^{2}}\theta (\bar{v}^{2}) \tag{13}\] \[B(f) = 4\bar{\omega}K_{1}(\bar{\omega}). \tag{14}\]
Here \(\bar{v}^{2}=2\pi f/m-2\), \(\bar{\omega}=2\pi f/(m\sigma^{2})\), \(K_{n}(x)\) is the modified Bessel function of the second kind, and \(\theta(x)\) is the step function. The expression shows two distinctive frequency components: \(A(f)\) represents the harmonic signal at \(\omega=2\pi f=2m\), and \(B(f)\) represents the low-frequency stochastic fluctuation. For a detailed derivation, see Appendix B.
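The bracketed spectral shape \(A(f)+B(f)\) can be evaluated directly; the sketch below (with an illustrative velocity dispersion \(\sigma=10^{-3}\) and the axion mass setting the frequency unit) shows the flat low-frequency plateau \(B\to 4\) for \(\omega\ll m\sigma^{2}\) and the narrow line just above \(\omega=2m\).

```python
import numpy as np
from scipy.special import kv            # modified Bessel function of the second kind

def psd_shape(f, m, sigma):
    """A(f) + B(f) of Eqs. (12)-(14); the prefactor K^2 theta_0^4 tau_phi / 4
    is omitted. omega = 2*pi*f; sigma is the velocity dispersion in units of c."""
    x = np.clip((2 * np.pi * f / m - 2.0) / sigma**2, 0.0, None)   # theta(vbar^2)
    A = np.pi * x**2 * np.exp(-x)                                  # line near omega = 2m
    wbar = 2 * np.pi * f / (m * sigma**2)
    B = 4 * wbar * kv(1, wbar)                                     # tail at omega <~ m sigma^2
    return A + B

m, sigma = 1.0, 1e-3
f_tail = np.array([1e-8, 1e-7, 1e-6]) * m                              # stochastic tail
f_line = (2 + np.array([0.5, 2.0, 8.0]) * sigma**2) * m / (2 * np.pi)  # around the 2m line
print(psd_shape(f_tail, m, sigma))
print(psd_shape(f_line, m, sigma))
```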
The low-frequency stochastic background behaves similarly to white noise and is therefore difficult to distinguish from other random noises in a detector. If it is somehow possible to arrange the output data in a way that it is insensitive to the axion DM signal, then it could be possible to calibrate the noise and therefore detect the axion with only one experiment. Alternatives are the following two approaches: (i) the reported stability of clocks can be used to constrain the parameter space as the low-frequency stochastic DM background would lead to larger fluctuations than the ones observed; (ii) to possibly detect the axion, one relies on the cross-correlation between multiple experiments for which the individual experiment-intrinsic noise cancels while the axion signal persists. In the following, we discuss these two aspects in more detail.
### Single detector setup
As already demonstrated in Ref. [47], by comparing the low-frequency fluctuations with the measured uncertainty of clocks, one can place lower limits on the decay constant \(f_{\phi}\). In a repeated measurement of a given frequency standard, there will be varying fluctuations due to the experiment's intrinsic effects as well as possibly the axion signal. Since the low-frequency part of the signal has very similar properties to white noise, we expect the two to be hardly distinguishable. Even if we consider the axion signal as just another component of the noise, we can still constrain the axion by requiring that the noise due to the axion is smaller than the total observed one.
To illustrate this further, we choose the measurement of the frequency ratio of Yb\({}^{+}\) electric-octupole (E3) and electric-quadrupole (E2) clock transitions [31]. As explained above, one way to extract the constraints on \(1/f_{\phi}\) is to directly compare the low-frequency noise \(P_{s}\sim K^{2}\theta_{0}^{4}\tau_{\phi}\) (12) with the measured clock frequency uncertainties. The fluctuations of the measured frequency ratio in Ref. [31] are consistent with white noise of \(P_{n}(f)=\sigma_{n}^{2}\simeq(10^{-14}/\sqrt{\text{Hz}})^{2}\). A direct comparison leads to the constraint on \(1/f_{\phi}\) as \(f_{\phi}^{-1}=[m^{2}\sigma_{n}/(2K\rho_{0}\sqrt{\tau_{\phi}})]^{1/2}\) with \(K\simeq 10^{-4}\).
More precisely, we compute the Allan deviation caused by the axion DM and compare it to the experimentally reported value. The Allan deviation is defined in (14) and the expected value for the axion DM is provided in Appendix B.2. We find
\[\frac{1}{f_{\phi}}=\left[\frac{m^{4}\sigma_{n}^{2}(\tau)}{8K^{2}\rho_{0}^{2} \mathcal{I}(\tau/2\tau_{\phi})}\right]^{1/4} \tag{15}\]
where \(\sigma_{n}(\tau)\) is the reported Allan deviation with an averaging time \(\tau\). The detailed derivation and the function \(\mathcal{I}(x)\) are given in Appendix B.2. This constraint is shown by the green dashed region in Figure 1 and 2.
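For orientation, the simpler direct comparison quoted above can be evaluated numerically. The sketch below converts everything to natural units and uses standard halo assumptions (\(\rho_{0}\simeq 0.4\,\mathrm{GeV/cm^{3}}\), \(\sigma_{v}\simeq 10^{-3}c\)) together with \(K\simeq 10^{-4}\) and the white-noise level quoted above; it is a rough order-of-magnitude estimate only and ignores the \(\mathcal{O}(1)\) spectral factors and the Allan-deviation treatment of Eq. (15).

```python
import numpy as np

eV_per_Hz = 6.582e-16            # hbar in eV s: 1 Hz corresponds to 6.582e-16 eV
inv_cm_in_eV = 1.973e-5          # hbar c in eV cm: 1/cm corresponds to 1.973e-5 eV

rho0 = 0.4e9 * inv_cm_in_eV**3   # 0.4 GeV/cm^3 in eV^4 (assumed local DM density)
sigma_v = 1e-3                   # assumed halo velocity dispersion in units of c
K = 1e-4                         # sensitivity coefficient quoted in the text
sigma_n = 1e-14 * np.sqrt(1.0 / eV_per_Hz)   # 1e-14/sqrt(Hz) converted to eV^(-1/2)

for m in [1e-18, 1e-16, 1e-14]:  # axion masses in eV (illustrative)
    tau_phi = 1.0 / (m * sigma_v**2)          # coherence time in eV^-1
    inv_f = np.sqrt(m**2 * sigma_n / (2 * K * rho0 * np.sqrt(tau_phi)))
    print(f"m = {m:.0e} eV  ->  1/f_phi ~ {inv_f * 1e9:.1e} GeV^-1")
```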
### Multi-detector setup
If two or more detectors are available, it is possible to distinguish the axion DM signal from the detector's noise by cross-correlating multiple detector outputs. Suppose we have two detector outputs \(d_{1,2}(t)=s_{1,2}(t)+n_{1,2}(t)\).
If we now consider the correlation between the two outputs \(\langle d_{1}d_{2}\rangle\), we expect the noises in the two detectors to be uncorrelated amongst themselves, \(\langle n_{1}n_{2}\rangle\sim 0\), while the signal is \(\langle s_{1}s_{2}\rangle\neq 0\) as long as the two detectors are placed within one coherence length \(L<\lambda\approx 1/(mv)\). In practice, this is done by constructing an observable as \(Y=\int dt\int dt^{\prime}\,s_{1}(t)s_{2}(t^{\prime})Q(t-t^{\prime})\) with some real filter function \(Q(t-t^{\prime})\). The signal and noise are computed as \(S=\langle Y\rangle\) and \(N^{2}=[\langle Y^{2}\rangle-\langle Y\rangle^{2}]_{s=0}\), respectively. The maximum signal-to-noise ratio is [50]
\[\frac{S}{N}=\left[2T\int_{f_{l}}^{f_{u}}df\,\frac{|P_{\rm cross}|^{2}}{P_{n}^{ 2}(f)}\right]^{1/2} \tag{16}\]
where \(f_{u,l}\) is the highest and lowest frequency where \(P_{n}(f)\) is available, \(T\) is the total observation time scale, \(P_{n}(f)=[P_{n_{1}}(f)P_{n_{2}}(f)]^{1/2}\) is the noise PSD, and \(P_{\rm cross}\) is the cross-correlation defined as
\[\langle\tilde{s}_{1}(f)\tilde{s}_{2}^{*}(f^{\prime})\rangle=\delta(f-f^{ \prime})\frac{1}{2}P_{\rm cross}(f). \tag{17}\]
For \(N_{\rm det}\) detectors, the above expression is modified as \(T\to[N_{\rm det}(N_{\rm det}-1)/2]T\) assuming that the noise PSD in all detectors is more or less the same.
The cross-correlation PSD from the axion DM can be computed straightforwardly with the formulation described above. For a normal velocity distribution with zero mean velocity, we find
\[P_{\rm cross}(f,\vec{L})=K_{1}K_{2}\frac{\theta_{0}^{4}\tau_{\phi}}{4}B_{\rm cross}(f,\vec{L}) \tag{18}\]
where
\[B_{\rm cross}(f,\vec{L})=2\int_{-\infty}^{\infty}dx\frac{e^{-i \bar{\omega}x}}{(1+x^{2})^{3/2}}\exp\left[-\frac{(m\sigma L)^{2}}{1+x^{2}} \right]. \tag{19}\]
Note that \(K_{1,2}\) is the sensitivity coefficient defined as \(s_{i}=K_{i}\theta^{2}(t)\) and \(L\) is the detector separation. The detailed derivation and more general expressions with dark matter mean velocity are given in Appendix B. Note that the above expression coincides with (14) in the \(L\to 0\) limit.
In Figure 2, we choose optical clock systems to investigate to which extent they can probe the QCD axion parameter space at a higher mass range. Assuming only white noise and \(m\sigma L\ll 1\) such that \(B_{\rm cross}\approx B\), one finds the projected sensitivity on \(1/f_{\phi}\) as
\[\frac{1}{f_{\phi}}\approx\left[\frac{P_{n}}{4K_{1}K_{2}}\frac{S}{N}\frac{m^{ 4}}{\rho_{0}^{2}}\right]^{\frac{1}{4}}\left[\frac{\pi}{T\tau_{\phi}\min(1,2\pi f _{u}\tau_{\phi})}\right]^{\frac{1}{8}} \tag{20}\]
We choose \(K_{1,2}=10^{-4}\), measurement frequency \(f_{u}=1\,{\rm Hz}\), and \(P_{n}^{1/2}(f)=10^{-16}/\sqrt{\rm Hz}\). Unlike the single detector setup in the previous section, the signal-to-noise ratio and the projection on \(1/f_{\phi}\) show a mild improvement as a function of observation time and the number of detectors. This can be seen by comparing the projection of a network with \(T=100\,{\rm days}\) and \(N_{\rm det}=10\) detectors shown in Fig. 2 as a red line, with the green dashed region showing the Yb\({}^{+}\) (E3)/(E2) constraint from the previous section in a single detector setup. Crucially the multi-detector setup allows for the detection of the axion since the non-vanishing cross-correlation can distinguish the signal from detector noise.
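For completeness, Eq. (20) can be transcribed directly, reusing the unit conversions and halo assumptions of the single-detector sketch above and the network parameters quoted here; the result is again only an order-of-magnitude sketch and is not meant to reproduce the Fig. 2 curve.

```python
import numpy as np

eV_per_Hz = 6.582e-16                 # hbar in eV s
inv_cm_in_eV = 1.973e-5               # hbar c in eV cm
rho0 = 0.4e9 * inv_cm_in_eV**3        # assumed local DM density in eV^4
sigma_v = 1e-3                        # assumed velocity dispersion in units of c

K1 = K2 = 1e-4
Pn = (1e-16)**2 / eV_per_Hz           # (1e-16/sqrt(Hz))^2 converted to eV^-1
fu = 1.0 * eV_per_Hz                  # 1 Hz in eV
SN = 3.0
T = 100 * 86400 / eV_per_Hz           # 100 days in eV^-1
T_eff = (10 * 9 / 2) * T              # N_det = 10 detectors: T -> N(N-1)/2 * T

for m in [1e-16, 1e-15, 1e-14]:       # axion masses in eV (illustrative)
    tau_phi = 1.0 / (m * sigma_v**2)
    factor1 = (Pn / (4 * K1 * K2) * SN * m**4 / rho0**2) ** 0.25
    factor2 = (np.pi / (T_eff * tau_phi * min(1.0, 2 * np.pi * fu * tau_phi))) ** 0.125
    print(f"m = {m:.0e} eV  ->  1/f_phi ~ {factor1 * factor2 * 1e9:.1e} GeV^-1")
```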
## V Conclusion
In this work, we have considered the quadratic interactions of the QCD axion with the electromagnetic field and the electron mass. These quadratic interactions naturally arise as long as the axion couples to the gluon field of the standard model. Similar to the quadratic interaction with pions and nucleons, such interactions lead to oscillating atomic energy levels. Contrary to the hadronic coupling, the electromagnetic interaction directly affects the electronic energy levels, making systems that depend on these energy levels sensitive to axion DM. As examples of such systems, we studied optical clocks, resonant-bar gravitational wave detectors, and atom interferometers.
We have summarized existing constraints and projected sensitivities of future nuclear clocks in Fig. 1. While they are still far from the minimal QCD axion parameter space, they provide alternative ways to search for the QCD axion. Moreover, the quadratic nature inevitably introduces a low-frequency stochastic background. We have derived an analytic expression for the low-frequency spectrum of the ultralight DM-induced signal. By directly comparing the axion DM-induced low-frequency fluctuations with measured clock uncertainties, we show that the Yb\({}^{+}\) (E3) and (E2) comparison can also probe heavier axions than those considered in previous work [31]. In addition, with several assumptions, we have also projected the sensitivity of a network of detectors, which could probe this higher mass range further.
Figure 2: A projection for cross-correlation with optical clock systems (red line). We choose \(T=100\,{\rm days}\), \(N_{\rm det}=10\) detectors, and \(S/N\)=3.
###### Acknowledgements.
We would like to thank Abhishek Banerjee for useful discussions and for providing us with tabulated data of the Yb\({}^{+}\) comparison test. The work of HK and AL was supported by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC 2121 Quantum Universe - 390833306. The work of GP is supported by grants from BSF-NSF, Friedrich Wilhelm Bessel research award of the Alexander von Humboldt Foundation, ISF, Minerva, SABRA - Yeda-Sela - WRC Program, the Estate of Emile Mimran, and the Maurice and Vivienne Wohl Endowment.
_Note added:_ While this work was being finalized, a related work [51] appeared on the arXiv, which shares some of the points discussed above.
## Appendix A Clock comparison test
We list the sensitivity coefficient for the QCD axion for different atomic species. Let us consider the frequency standards based on hyperfine and electronic transitions. The transition frequency is parameterized as
\[f_{\rm hfs} =g\frac{m_{e}^{2}}{m_{p}}\alpha^{4}F_{\rm hfs}(\alpha), \tag{10}\] \[f_{\rm elec} =m_{e}\alpha^{2}F_{\rm elec}(\alpha), \tag{11}\]
where \(g\) is the nuclear \(g\)-factor, and \(F(\alpha)\) is the relativistic correction.
There are four parameters in total, \(\{g,m_{e},m_{p},\alpha\}\), each of which varies in time. The transition frequency can be conveniently written as \(f_{A}=g_{A}^{K_{g}}m_{e}^{K_{m_{e}}}m_{p}^{K_{m_{p}}}\alpha^{K_{\alpha}}\). Since the effect of the QCD axion always arises through the variation of the pion mass, the fractional frequency change can be written as
\[\frac{\delta f_{A}}{f_{A}}=\sum_{i}K_{i}\frac{\partial\ln A_{i}}{\partial\ln m _{\pi}^{2}}\frac{\delta m_{\pi}^{2}}{m_{\pi}^{2}} \tag{12}\]
where the index runs over all four parameters. \(K_{g}=1,0\), \(K_{m_{e}}=2,1\), and \(K_{m_{p}}=-1,0\) for hyperfine and electronic transition, respectively. The values for \(K_{\alpha}\) can be found in Refs. [52; 53]. The dependence of each parameter on the pion mass is
\[\frac{\partial\ln m_{p}}{\partial\ln m_{\pi}^{2}} =\frac{\sigma_{\pi N}}{m_{N}}\simeq 0.06 \tag{13}\] \[\frac{\partial\ln\alpha}{\partial\ln m_{\pi}^{2}} =\frac{\alpha}{12\pi}\left(1+\frac{8\sigma_{\pi N}}{m_{N}}\right) \simeq 3\times 10^{-4}\] (14) \[\frac{\partial\ln m_{e}}{\partial\ln m_{\pi}^{2}} =\frac{3\alpha^{2}}{48\pi^{2}}\left(1+\frac{8\sigma_{\pi N}}{m_{N }}\right)\ln\frac{m_{\pi}^{2}}{m_{e}^{2}}\simeq 6\times 10^{-6}. \tag{15}\]
where \(\sigma_{\pi N}=\partial m_{N}/\partial\ln m_{\pi}^{2}\). For the \(g\)-factor, one finds \(\partial\ln g_{p}/\partial\ln m_{\pi}^{2}=-(g_{A}^{2}/g_{p})[m_{N}m_{\pi}/(8 \pi f_{\pi}^{2})]\simeq-0.17\) for the hydrogen atom, \(\partial\ln g_{\rm Rb}/\partial\ln m_{\pi}^{2}=-0.024\) for \({}^{87}\)Rb, and \(\partial\ln g_{\rm Cs}/\partial\ln m_{\pi}^{2}=0.011\) for \({}^{133}\)Cs [26]. For the nuclear clock transition in \({}^{229}\)Th, the hadronic quadratic coupling is dominant, \(\delta f_{A}/f_{A}\simeq(2\times 10^{5})\times\delta m_{\pi}^{2}/m_{\pi}^{2}\)[26].
The above expression can be written in a more compact form:
\[\frac{\delta f_{A}}{f_{A}}=K_{A}\theta^{2}. \tag{16}\]
The sensitivity coefficient \(K_{A}\) for each atom is listed in Table 1. The sensitivity coefficient of the frequency ratio of any pair of atomic transitions is simply the difference of the two respective sensitivity coefficients.
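As a worked illustration of Eq. (12), the snippet below assembles \(d\ln f_{A}/d\ln m_{\pi}^{2}\) for the hydrogen ground-state hyperfine transition from the derivatives quoted above. The value \(K_{\alpha}\approx 4\) used for hydrogen is an assumption made here for concreteness, and turning the result into the coefficient \(K_{A}\) of Table 1 additionally requires the relation between \(\delta m_{\pi}^{2}/m_{\pi}^{2}\) and \(\theta^{2}\) from the main text, which is not reproduced in this appendix.

```python
# Evaluate Eq. (12): d ln f_A / d ln m_pi^2 = sum_i K_i * d ln A_i / d ln m_pi^2,
# using the numbers quoted in this appendix.  K_alpha ~ 4 for hydrogen is an
# illustrative assumption (alpha^4 scaling plus a small relativistic correction).
dln_dlnmpi2 = {
    "g_p":   -0.17,    # hydrogen g-factor
    "m_e":    6e-6,
    "m_p":    0.06,
    "alpha":  3e-4,
}
K_hyperfine = {"g_p": 1, "m_e": 2, "m_p": -1, "alpha": 4}  # f_hfs = g (m_e^2/m_p) alpha^4 F(alpha)

dlnf = sum(K_hyperfine[k] * dln_dlnmpi2[k] for k in K_hyperfine)
print(f"d ln f_H / d ln m_pi^2 = {dlnf:+.3f}")  # dominated by the g-factor and proton-mass terms
# Multiplying by d ln m_pi^2 / d theta^2 (given in the main text) yields K_A of Table 1.
```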
## Appendix B Quadratic spectrum
Here we provide a detailed computation of the low-frequency power spectrum, following Ref. [49].
We expand the field as
\[\phi(t,x)=\sum_{i}\frac{1}{\sqrt{2mV}}\left[\alpha_{i}e^{-ik_{i}x}+\alpha_{i} ^{*}e^{ik_{i}\cdot x}\right]. \tag{17}\]
Here \((\alpha_{i},\alpha_{i}^{*})\) are complex random numbers. The underlying probability distribution of this complex random number is given by [54; 55; 56; 48]
\[p(\alpha_{i})=\frac{1}{\pi n_{i}}\exp\left[-\frac{|\alpha_{i}|^{2}}{n_{i}} \right], \tag{18}\]
where \(n_{i}\) is the mean occupation number of the mode \(i\). In this description, the field \(\phi\) is a Gaussian random field.
The mean occupation number \(n_{i}\) is given by the dark matter velocity distribution. For simplicity, we assume a normal distribution
\[n(\vec{v})=\frac{\rho_{0}/m}{(2\pi\sigma^{2})^{3/2}}\exp\left[-\frac{(\vec{v}- \vec{v}_{0})^{2}}{2\sigma^{2}}\right] \tag{19}\]
| System | Transition | \(K_{A}\) |
| --- | --- | --- |
| H | Ground state hyperfine | \(+1.2\times 10^{-2}\) |
| Cs | Ground state hyperfine | \(+2.6\times 10^{-3}\) |
| Rb | Ground state hyperfine | \(+4.5\times 10^{-3}\) |
| Si | cavity | \(-1.5\times 10^{-5}\) |
| Sr | \({}^{1}S_{0}\rightarrow\,^{3}P_{0}\) | \(-3.2\times 10^{-5}\) |
| Al\({}^{+}\) | \({}^{1}S_{0}\rightarrow\,^{3}P_{0}\) | \(-3.1\times 10^{-5}\) |
| Hg\({}^{+}\) | \({}^{2}S_{1/2}\rightarrow\,^{2}D_{5/2}\) | \(+1.4\times 10^{-5}\) |
| Yb | \({}^{1}S_{0}\rightarrow\,^{3}P_{0}\) | \(-3.5\times 10^{-5}\) |
| Yb\({}^{+}\) (E2) | \({}^{2}S_{1/2}\rightarrow\,^{2}D_{3/2}\) | \(-4.6\times 10^{-5}\) |
| Yb\({}^{+}\) (E3) | \({}^{2}S_{1/2}\rightarrow\,^{2}F_{7/2}\) | \(+6.0\times 10^{-5}\) |
| Th | nuclear | \(-2.2\times 10^{4}\) |

Table 1: Sensitivity coefficients \(K_{A}\) for the QCD axion. The electromagnetic quadratic interactions provide the dominant effect for the optical clock transitions, while the hadronic couplings provide the dominant effects for hyperfine and nuclear clock transitions.
where \(\rho_{0}\) is the mean dark matter density, \(\vec{v}_{0}\) is the velocity of the dark matter wind relative to the experiment, and \(\sigma\) is the velocity dispersion.
We focus on the case where the signal in the detector is of the following form:
\[s(t)=K\theta^{2}(t)\, \tag{10}\]
where \(K\) is a sensitivity coefficient and \(\theta=\phi/f_{\phi}\). The power spectral density \(P_{s}(f)\) is defined as
\[\langle\tilde{s}(f)\tilde{s}^{*}(f^{\prime})\rangle=\delta(f-f^{\prime})\frac{ 1}{2}P_{s}(f). \tag{11}\]
We choose the following convention for the Fourier transformation, \(s(t)=\int dfe^{-2\pi ift}\tilde{s}(f)\).
The signal power spectral density is related to the PSD of the axion field as
\[P_{s}(f)=K^{2}P_{\delta\theta^{2}}(f) \tag{12}\]
where
\[\langle\widetilde{\delta\theta^{2}}(f)\widetilde{\delta\theta^{2}}^{*}(f^{ \prime})\rangle=\delta(f-f^{\prime})\frac{1}{2}P_{\delta\theta^{2}}(f). \tag{13}\]
We have introduced \(\delta\theta^{2}=\theta^{2}-\langle\theta^{2}\rangle\). This subtracts an unobservable constant shift in \(\theta^{2}\). Note that the above power spectrum is one-sided; we only consider \(f\geq 0\).
### Power spectral density
The Fourier component of the quadratic operator is
\[\widetilde{\theta^{2}}(\omega) =\frac{1}{f_{\phi}^{2}}\frac{1}{2mV}\sum_{i,j}\] \[\times \Big{[}\alpha_{i}\alpha_{j}e^{i(\vec{k}_{i}+\vec{k}_{j})\cdot\vec {x}}(2\pi)\delta(\omega-\omega_{i}-\omega_{j})\] \[+\alpha_{i}\alpha_{j}^{*}e^{i(\vec{k}_{i}-\vec{k}_{j})\cdot\vec{x }}(2\pi)\delta(\omega-\omega_{i}+\omega_{j})\] \[+\alpha_{i}^{*}\alpha_{j}e^{-i(\vec{k}_{i}-\vec{k}_{j})\cdot\vec {x}}(2\pi)\delta(\omega+\omega_{i}-\omega_{j})\] \[+\alpha_{i}^{*}\alpha_{j}^{*}e^{-i(\vec{k}_{i}+\vec{k}_{j})\cdot \vec{x}}(2\pi)\delta(\omega+\omega_{i}+\omega_{j})\Big{]} \tag{14}\]
To compute \(\langle\widetilde{\delta\theta^{2}}(\omega)\widetilde{\delta\theta^{2}}^{*}( \omega^{\prime})\rangle\), the following expression is useful
\[\langle\alpha_{i}\alpha_{j}\alpha_{k}^{*}\alpha_{l}^{*}\rangle=n_{i}n_{j}( \delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}). \tag{15}\]
The angle bracket denotes an ensemble average, defined as
\[\langle\mathcal{O}\rangle=\int\bigg{[}\prod_{i}d^{2}\alpha_{i}\,p(\alpha_{i}) \bigg{]}\mathcal{O}. \tag{16}\]
After a straightforward computation, we find
\[P_{\delta\theta^{2}}(\omega) =\frac{1}{m^{2}f_{\phi}^{4}}\int d^{3}v_{1}d^{3}v_{2}\,n(\vec{v}_ {1})n(\vec{v}_{2})\] \[\times\Big{[}(2\pi)\delta(\omega-\omega_{1}-\omega_{2})+(2\pi) \delta(\omega+\omega_{1}+\omega_{2})\] \[+(2\pi)\delta(\omega-\omega_{1}+\omega_{2})+(2\pi)\delta(\omega+ \omega_{1}-\omega_{2})\Big{]}, \tag{17}\]
where we took the continuum limit in the velocities. This is a general expression, which holds for an arbitrary velocity distribution as long as the probability distribution for \(\alpha_{i}\) is given as (11).
Given a normal velocity distribution (10), we find that the power spectrum of the scalar quadratic operator is
\[P_{\delta\theta^{2}}(f)=\frac{1}{4}\theta_{0}^{4}\tau_{\phi}\Big{[}A(f)+B(f) \Big{]}, \tag{18}\]
where \(\theta_{0}=\sqrt{2\rho_{0}}/mf_{\phi}\), \(\tau_{\phi}=1/m\sigma^{2}\) is the coherence time, and
\[A(f) =\frac{2\pi\bar{v}^{2}}{v_{0}^{2}}\exp\left[-\frac{\bar{v}^{2}+v_{0}^{2}}{\sigma^{2}}\right]I_{2}\left(\frac{2v_{0}\bar{v}}{\sigma^{2}}\right)\Theta(\bar{v}^{2}), \tag{19}\] \[B(f) =\frac{2\sigma}{v_{0}}\int_{0}^{\infty}dv_{e}\,e^{-\frac{\bar{\omega}^{2}}{4v_{e}^{2}}}\left[e^{-(v_{e}-\frac{v_{0}}{\sigma})^{2}}-e^{-(v_{e}+\frac{v_{0}}{\sigma})^{2}}\right]. \tag{20}\]
Here \(I_{n}(x)\) is the modified Bessel function of the first kind and \(\Theta(x)\) is the unit step function. For notational simplicity, we have introduced
\[\bar{v}^{2}=\frac{2\pi f}{m}-2,\quad\text{and}\quad\bar{\omega}=\frac{2\pi f }{m\sigma^{2}}.\]
The spectral function \(A(f)\) represents the coherent harmonic oscillation at \(\omega=2m\). The spectral function \(B(f)\) represents the low-frequency background at \(\omega<m\sigma^{2}\). Note that the above PSD is valid for \(f>0\). The low-frequency spectrum \(B(f)\) is still valid for \(f<0\), but \(A(f)\) changes to \(A(-f)\).
These expressions are further simplified in the isotropic limit \(v_{0}\to 0\). In this case, we find
\[A(f) =\pi\frac{\bar{v}^{4}}{\sigma^{4}}e^{-\bar{v}^{2}/\sigma^{2}} \Theta(\bar{v}^{2}), \tag{21}\] \[B(f) =4\bar{\omega}K_{1}(\bar{\omega}), \tag{22}\]
Figure 3: The spectrum of the quadratic operator \(\delta\theta^{2}\) in the isotropic limit. We choose \(\sigma=0.1\) for demonstration. The narrow peak at \(\omega=2m\) represents the harmonic oscillations, while the plateau at \(\omega<m\sigma^{2}\) gives the low-frequency stochastic background.
where \(K_{n}(x)\) is the modified Bessel function of the second kind and \(\Theta(x)\) is the step function. Note that both functions, \(A\) and \(B\), are normalized such that \(\tau_{\phi}\int_{0}^{\infty}df\,A(f)=\tau_{\phi}\int_{0}^{\infty}df\,B(f)=1\). The spectrum in this case is shown in Figure 3.
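For reference, the isotropic-limit spectral functions are easy to evaluate numerically. The sketch below (with illustrative parameter choices) computes \(A(f)\) and \(B(f)\) from Eqs. (21)-(22) and checks the normalization \(\tau_{\phi}\int df\,A=\tau_{\phi}\int df\,B=1\) quoted above.

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import trapezoid

sigma = 0.1  # velocity dispersion used for the demonstration in Fig. 3 (units of c)

def A(omega, m=1.0):
    """Coherent peak at omega = 2m: A = pi (vbar^2/sigma^2)^2 exp(-vbar^2/sigma^2) Theta(vbar^2),
    with vbar^2 = omega/m - 2 and omega = 2 pi f."""
    vbar2 = omega / m - 2.0
    return np.where(vbar2 > 0,
                    np.pi * (vbar2 / sigma**2) ** 2 * np.exp(-vbar2 / sigma**2),
                    0.0)

def B(omega, m=1.0):
    """Low-frequency plateau: B = 4 wbar K_1(wbar), with wbar = omega/(m sigma^2)."""
    wbar = omega / (m * sigma**2)
    return 4.0 * wbar * kv(1, wbar)

# Both pieces are normalized to 1/tau_phi; quick numerical check (m = 1).
omega = np.linspace(1e-6, 4.0, 500_000)
f = omega / (2 * np.pi)
tau_phi = 1.0 / sigma**2
print(tau_phi * trapezoid(A(omega), f), tau_phi * trapezoid(B(omega), f))  # both ~ 1
```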
### Allan deviation
Let us consider a single clock comparison test in which the axion causes a signal \(s(t)=K\ \delta\theta^{2}(t)\). If this signal cannot be distinguished from the noise, it still contributes to the total observed variation of the frequencies commonly characterized by the Allan deviation. In terms of the fractional frequency shift, the Allan variance over a period \(\tau=n\cdot\Delta t\), where \(\Delta t\) is the time between measurements, is defined as [57]
\[\sigma_{s}^{2}(\tau)=\frac{1}{2(M-1)}\,\sum_{i=1}^{M-1}\left|\langle s(\tau) \rangle_{i+1}-\langle s(\tau)\rangle_{i}\right|^{2}\, \tag{14}\]
where \(\langle s(\tau)\rangle_{i}\) denotes the \(i\)-th measurement of \(s(t)\) over the period \(\tau\),
\[\langle s(\tau)\rangle_{i}=\frac{1}{\tau}\int_{t_{i}}^{t_{i}+\tau}dt\ s(t)=K\overline{\delta\theta^{2}}(t_{i}). \tag{15}\]
In the second step, we defined \(\overline{\delta\theta^{2}}(t_{i})\) as the average value of \(\delta\theta^{2}\) over this period. The ensemble average of the Allan variance then becomes
\[\langle\sigma_{s}^{2}(\tau)\rangle =\frac{K^{2}}{2(M-1)} \tag{16}\] \[\times\sum_{i=1}^{M-1}\left\langle\left|\int df\left(e^{-2\pi if( t_{i}+\tau)}-e^{-2\pi ift_{i}}\right)\widetilde{\delta\theta^{2}}(f)\right|^{2}\right\rangle\] \[=2K^{2}\int_{0}^{\infty}df\sin^{2}(\pi f\tau)P_{\overline{\delta \theta^{2}}}(f). \tag{17}\]
Here the angle bracket denotes an ensemble average. The Fourier transformation and power spectrum of \(\overline{\delta\theta^{2}}\) are defined analogously to the ones of \(\delta\theta^{2}\). To find the relation between these quantities let us consider the Fourier transformation
\[\widetilde{\overline{\delta\theta^{2}}}(f) =\int dt\ e^{2\pi ift}\left[\frac{1}{\tau}\int_{t}^{t+\tau}dt^{\prime}\ \delta\theta^{2}(t^{\prime})\right] \tag{18}\] \[=e^{-\pi if\tau}\,\text{sinc}(\pi f\tau)\,\widetilde{\delta\theta^{2}}(f)\, \tag{19}\]
where \(\text{sinc}(x)=\sin(x)/x\). The two power spectra are therefore simply related by a factor \(\text{sinc}^{2}(\pi f\tau)\), i.e. \(P_{\overline{\delta\theta^{2}}}(f)=\text{sinc}^{2}(\pi f\tau)P_{\delta\theta ^{2}}(f)\). Using this, we find
\[\langle\sigma_{s}^{2}(\tau)\rangle=2K^{2}\int_{0}^{\infty}dfP_{\delta\theta^{ 2}}(f)\frac{\sin^{4}(\pi f\tau)}{(\pi f\tau)^{2}}. \tag{20}\]
From this expression, the Allan deviation caused by the quadratic coupling can be computed using the coupling coefficients that can be found in Tab. 1 and the power spectral density from the last section. In particular, in the isotropic limit, we find
\[\langle\sigma_{s}^{2}(\tau)\rangle=2K^{2}\theta_{0}^{4}\mathcal{I}(\tau/2\tau _{\phi}) \tag{21}\]
with the integral \(\mathcal{I}(x)\) defined as
\[\mathcal{I}(x)=\int_{0}^{\infty}\frac{d\bar{\omega}}{2\pi}\,\bar{\omega}K_{1} (\bar{\omega})\frac{\sin^{4}(\bar{\omega}x)}{(\bar{\omega}x)^{2}}. \tag{22}\]
The constraint on \(1/f_{\phi}\) is therefore obtained as
\[\frac{1}{f_{\phi}}=\left[\frac{m^{4}\sigma_{s,\text{obs}}^{2}(\tau)}{8K^{2} \rho_{0}^{2}\mathcal{I}(\tau/2\tau_{\phi})}\right]^{1/4} \tag{23}\]
where \(\sigma_{s,\text{obs}}(\tau)\) is the experimentally measured Allan deviation with an averaging time \(\tau\).
We obtain the bound shown in Figs. 1 and 2 as a green dashed region, by requiring that the noise caused by the coupling of the axion is below the \(1\sigma\) upper bound on the Allan deviation shown in Fig. 1 of [31] for all given values of \(\tau\).
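To make the use of Eqs. (22)-(23) concrete, the sketch below evaluates \(\mathcal{I}(x)\) by numerical quadrature and converts an observed Allan deviation into a bound on \(1/f_{\phi}\). The function names are placeholders, and all dimensionful inputs must be supplied in one consistent unit system, which is left to the user.

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

def I(x):
    """Eq. (22): I(x) = Int_0^inf dw/(2 pi) * w K_1(w) * sin^4(w x) / (w x)^2."""
    integrand = lambda w: w * kv(1, w) * np.sin(w * x) ** 4 / (w * x) ** 2 / (2 * np.pi)
    val, _ = quad(integrand, 0, np.inf, limit=500)
    return val

def f_phi_inverse_bound(m, K, sigma_obs, tau, rho0, sigma_v=1e-3):
    """Eq. (23): 1/f_phi = [m^4 sigma_obs^2 / (8 K^2 rho0^2 I(tau/2 tau_phi))]^(1/4).
    All quantities must be given in one consistent unit system (assumed, not enforced);
    tau_phi = 1/(m sigma_v^2)."""
    tau_phi = 1.0 / (m * sigma_v**2)
    return (m**4 * sigma_obs**2 / (8 * K**2 * rho0**2 * I(tau / (2 * tau_phi)))) ** 0.25

# The dimensionless integral alone, for a few averaging times relative to 2 tau_phi:
for x in (0.1, 1.0, 10.0):
    print(x, I(x))
```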
### Cross-correlation
Above we computed the correlation between \(\delta\theta^{2}(\omega)\) evaluated at the same spatial position. For the cross-correlation of displaced detectors, we must evaluate \(\delta\theta^{2}(\omega)\) at different spatial positions. In particular, we are interested in
\[\langle\widetilde{\delta\theta^{2}}_{a}(\omega)\widetilde{\delta\theta^{2}}_ {b}^{*}(\omega^{\prime})\rangle=(2\pi)\delta(\omega-\omega^{\prime})\frac{1}{ 2}P_{\delta\theta^{2}}^{\text{cross}}(\omega,\vec{L}). \tag{24}\]
where \(\widetilde{\delta\theta^{2}}_{a}(\omega)=\widetilde{\delta\theta^{2}}(\omega, \vec{x}_{a})\) and \(\vec{L}=\vec{x}_{a}-\vec{x}_{b}\) is the distance between two detectors. Following the same line of computation, we find
\[P_{\delta\theta^{2}}^{\text{cross}}(\omega,\vec{L}) =\frac{1}{m^{2}f_{\phi}^{4}}\int d^{3}v_{1}d^{3}v_{2}\,n(\vec{v}_{1 })n(\vec{v}_{2})\] \[\times\Big{[}(2\pi)\delta(\omega-\omega_{1}-\omega_{2})e^{+i( \vec{k}_{1}+\vec{k}_{2})\cdot\vec{L}}\] \[\quad+(2\pi)\delta(\omega+\omega_{1}+\omega_{2})e^{-i(\vec{k}_{1}+ \vec{k}_{2})\cdot\vec{L}}\] \[\quad+(2\pi)\delta(\omega-\omega_{1}+\omega_{2})e^{+i(\vec{k}_{1}- \vec{k}_{2})\cdot\vec{L}}\] \[\quad+(2\pi)\delta(\omega+\omega_{1}-\omega_{2})e^{-i(\vec{k}_{1}- \vec{k}_{2})\cdot\vec{L}}\Big{]} \tag{25}\]
Assuming the normal velocity distribution (17), we find
\[P_{\delta\theta^{2}}^{\text{cross}}(f,\vec{L})=\frac{1}{4}\theta_{0}^{4}\tau\Big{[}A_{\text{cross}}(f,\vec{L})+B_{\text{cross}}(f,\vec{L})\Big{]} \tag{26}\]
where the two spectral functions are given by
\[A_{\text{cross}}(f) =\frac{2\pi(\bar{v}/\sigma)^{2}}{X^{2}}\exp\left[-\frac{\bar{v}^{2}+ v_{0}^{2}}{\sigma^{2}}\right]I_{2}\left(2X\frac{\bar{v}}{\sigma}\right)\theta(\bar{v}^{2}) \tag{30}\] \[B_{\text{cross}}(f) =2\int_{-\infty}^{\infty}ds\frac{e^{-i\bar{\omega}s}}{(1+s^{2})^ {3/2}}\exp\left[-\frac{(\vec{L}_{\lambda}+s\frac{\bar{v}_{0}}{\sigma})^{2}}{1+ s^{2}}\right] \tag{31}\]
Here \(\vec{X}=\vec{v}_{0}/\sigma+i\vec{L}_{\lambda}\) with \(\vec{L}_{\lambda}=m\sigma\vec{L}\) is introduced. These expressions reduce to (21)-(22) in the isotropic \(v_{0}\to 0\) and small-distance \(L\to 0\) limit. Again, this expression is valid for \(f>0\). The low-frequency spectrum \(B_{\text{cross}}(f,\vec{L})\) is valid also for \(f<0\), but the other component \(A_{\text{cross}}(f,\vec{L})\) changes for \(f<0\) to \(A_{\text{cross}}(f,\vec{L})\to A_{\text{cross}}(-f,-\vec{L})\).
|
2301.06030 | Self-recovery of memory via generative replay | A remarkable capacity of the brain is its ability to autonomously reorganize
memories during offline periods. Memory replay, a mechanism hypothesized to
underlie biological offline learning, has inspired offline methods for reducing
forgetting in artificial neural networks in continual learning settings. A
memory-efficient and neurally-plausible method is generative replay, which
achieves state of the art performance on continual learning benchmarks.
However, unlike the brain, standard generative replay does not self-reorganize
memories when trained offline on its own replay samples. We propose a novel
architecture that augments generative replay with an adaptive, brain-like
capacity to autonomously recover memories. We demonstrate this capacity of the
architecture across several continual learning tasks and environments. | Zhenglong Zhou, Geshi Yeung, Anna C. Schapiro | 2023-01-15T07:28:14Z | http://arxiv.org/abs/2301.06030v1 | # Self-recovery of memory via generative replay
###### Abstract
A remarkable capacity of the brain is its ability to autonomously reorganize memories during offline periods. Memory replay, a mechanism hypothesized to underlie biological offline learning, has inspired offline methods for reducing forgetting in artificial neural networks in continual learning settings. A memory-efficient and neurally-plausible method is generative replay, which achieves state of the art performance on continual learning benchmarks. However, unlike the brain, standard generative replay does not self-reorganize memories when trained offline on its own replay samples. We propose a novel architecture that augments generative replay with an adaptive, brain-like capacity to autonomously recover memories. We demonstrate this capacity of the architecture across several continual learning tasks and environments.
## 1 Introduction
The brain strengthens and reorganizes its own memories during offline periods, especially sleep [13; 24; 33]. A mechanism hypothesized to underlie this capacity is the replay of neural activity associated with awake experience [5], which has been identified across species and shown to facilitate memory [5; 16]. Theories posit that re-organizing memories offline is the brain's key to continually learning new tasks without disrupting existing knowledge across a lifetime [17].
In artificial neural networks (ANNs), learning new tasks without forgetting previous tasks (i.e., continual learning) remains a core challenge: In non-stationary settings where the data from previous tasks are no longer accessible when learning a new task, incorporating new information can catastrophically damage existing knowledge [18; 6]. Inspired by neuroscience, an approach to continual learning is to replay data that reflect prior tasks when learning a new task [21; 30; 34; 3; 8; 9; 15; 26; 27; 28; 29; 25] Replaying memories in a way that comprehensively reflects prior experience allows information across temporally-separated experiences to be integrated gracefully. The most straightforward implementation of this approach stores veridical inputs of past tasks in a memory buffer, from which data can be drawn for subsequent replay [26; 28; 3]. However, replaying veridical inputs is unlikely to scale in real-world scenarios (e.g., when a general-purpose robot has to learn a large number of tasks across a lifetime) and is biologically implausible [11; 32; 2]. A promising extension of the approach is generative replay [30; 34], in which a generative model learns to reconstruct previous inputs, avoiding storage of exact inputs. This form of replay is considered more neurally-plausible [34]; the brain does not store exact copies of experience, and biological replay resembles noisy samples from a generative model of the environment [11; 32]. On continual learning benchmarks in which a set of tasks have to be learned sequentially, generative replay shows state of the art performance [35; 34].
Thus, generative replay appears to be a useful mechanism for both biological and artificial learning. However, despite its similarity to neural replay, generative replay lacks a brain-like capacity to adaptively self-reorganize memories offline. In this work, we first highlight a crucial difference between how biological and artificial replay are typically assessed. We point out how, when assessed in a way similar to empirical studies of offline processing, standard generative replay does not benefit
behavior. We propose a novel architecture that endows generative replay with a brain-like capacity of self-reorganization: The ability to self-repair damaged memories through offline processing. We demonstrate this capacity of the model on MNIST [14] and CIFAR-100 [12].
## 2 Standard generative replay does not self-reorganize offline
Psychology and neuroscience research has investigated the ways in which the brain autonomously reorganizes memories offline to enhance behavior. This work suggests that the offline brain can go beyond what is learned during wakefulness, giving rise to useful changes in behavior through spontaneous processes [13; 24; 33]. These changes are indexed by differences in behavior before versus after an offline period (Fig. 1b). For example, after learning a sequence of tasks, a period of sleep can repair memories that have been damaged due to interference in a preceding wakeful period [19; 20; 1; 4] -- an adaptive capacity that biologically detailed models of sleep have explored [7; 22; 31].
In contrast, standard continual learning evaluations in machine learning settings do not assess offline algorithms' ability to re-organize memories in an offline period: The effectiveness of algorithms in these settings is quantified by how well they help retain past memories as ANNs sequentially learn multiple tasks (Fig. 1a). Therefore, it is unclear if these algorithms can benefit behavior in the same way the offline brain does. We highlight that, although generative replay effectively reduces forgetting in such settings and has been proposed to mimic biological replay, it affords no learning when trained on its own replay samples (Fig. 1c). Standard generative replay [30] consists of two components: a generator that learns to reconstruct inputs \(x\) and a solver that learns to map inputs to their target outputs \(y\). To replay input-output pairs \((x^{\prime},y^{\prime})\) that reflect the model's knowledge of past tasks, the generator samples \(x^{\prime}\) that reflect its learned reconstruction of past inputs and the solver labels \(x^{\prime}\) as \(y^{\prime}\) according to its learned mapping. When mixed with data from a new task, \((x^{\prime},y^{\prime})\) can serve to preserve the knowledge of past tasks. However, when trained exclusively on \((x^{\prime},y^{\prime})\), the model shows no change in behavior, because \((x^{\prime},y^{\prime})\) is fully consistent with the solver's learned input-output mapping. As a result, standard generative replay does not self-reorganize memories offline.
## 3 An architecture that self-recovers memory
In this section, we propose a novel architecture that augments generative replay with a brain-like capacity to reorganize memories offline: The ability to self-recover damaged memories. We assess this capacity of the model using two continual learning benchmarks: split-MNIST [36] and a split CIFAR-100 task. After the model learns all tasks, we include an offline period where the model learns exclusively from its own generated replay samples. To assess autonomous offline learning, we measure the change in the model's performance on testing data before and after this offline period.
Figure 1: Differences in the assessment of artificial and biological offline learning. **a**. In standard continual learning settings, offline methods are assessed on their ability to help preserve memories as ANNs sequentially learn a set of tasks. **b**. In neuroscience and psychology, studies of offline learning examine how behavior changes across an offline period in the absence of external inputs. **c**. When assessed as in empirical studies, standard generative replay does not benefit behavior, whereas our model shows an improvement in performance through offline learning. Each dot shows the performance of a model initialized with a unique random seed. The pre- and post- offline accuracy of each model instantiation is connected by a grey line. Error bars represent \(\pm\) 1 SEM. *** indicates p<0.001.
### The architecture of the model
Our proposed model (Fig. 2a) is a generative model trained to concurrently reconstruct a representation of each input \(x\) and produce a target output \(y\). We implemented the model as a variational autoencoder with feedforward connections mapping the reconstruction layer (i.e., the layer that outputs the reconstructed representation of an input \(x\)) to an output layer of the same dimension as \(y\) for classification. A neurally-plausible feature of the architecture is that, unlike standard generative replay, the model does not employ separate pathways for generating input patterns and labels. As in prior work [34], we trained the model to minimize the sum of a generative loss (i.e., the sum of a reconstruction loss and a variational loss) and a cross-entropy classification loss. During offline replay, the model first samples \((x^{\prime},y^{\prime})\) pairs via the decoder. Generated samples \(x^{\prime}\) then go through a full forward pass through the model, producing the model's reconstruction and classification of \(x^{\prime}\): \((x^{\prime\prime},y^{\prime\prime})\). For offline learning, we consider \((x^{\prime},y^{\prime})\) as the target outputs and \((x^{\prime\prime},y^{\prime\prime})\) as the model's outputs (Fig. 2b). The offline model is trained to minimize the sum of a generative loss and a distillation loss (see section A.2) computed with respect to \((x^{\prime},y^{\prime})\) and \((x^{\prime\prime},y^{\prime\prime})\).
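A minimal PyTorch sketch of this architecture and of the offline replay step is given below. The fully connected layers, their widths, the temperature-scaled distillation of the replayed labels, and the omission of the training loop and optimizer are simplifications chosen here for illustration; the sketch shows the structure described above rather than the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplayVAE(nn.Module):
    """VAE whose reconstruction layer is mapped by feedforward connections to class logits."""
    def __init__(self, x_dim=784, z_dim=100, h_dim=400, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())
        self.classifier = nn.Linear(x_dim, n_classes)  # sits on top of the reconstruction
        self.z_dim = z_dim

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        x_rec = self.dec(z)
        return x_rec, self.classifier(x_rec), mu, logvar

    @torch.no_grad()
    def replay(self, n):
        """Sample replay pairs (x', y') from the decoder and the classification head."""
        x_rep = self.dec(torch.randn(n, self.z_dim))
        y_rep = self.classifier(x_rep)        # logits used later as distillation targets
        return x_rep, y_rep

def online_loss(model, x, y):
    """Task learning: generative loss (reconstruction + KL) plus cross-entropy classification."""
    x_rec, logits, mu, logvar = model(x)
    rec = F.binary_cross_entropy(x_rec, x, reduction="sum") / x.size(0)
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return rec + kld + F.cross_entropy(logits, y)

def offline_loss(model, temperature=2.0):
    """Offline period: regenerate (x', y'), pass x' through the full model to get (x'', y''),
    and minimize a generative loss plus a distillation loss between y'' and y'."""
    x_rep, y_rep = model.replay(n=128)
    x_rec, logits, mu, logvar = model(x_rep)
    rec = F.binary_cross_entropy(x_rec, x_rep, reduction="sum") / x_rep.size(0)
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x_rep.size(0)
    distill = F.kl_div(F.log_softmax(logits / temperature, dim=1),
                       F.softmax(y_rep / temperature, dim=1),
                       reduction="batchmean") * temperature ** 2
    return rec + kld + distill
```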
### Experiments on MNIST
In the first set of experiments, we tested the model on the split-MNIST task, in which the MNIST dataset is divided into five two-way classification tasks. During training, the model sequentially learned the five tasks one-at-a-time. For training, we considered three continual learning protocols that vary in terms of whether the task identity is provided at the time of test (see section A.4): task-incremental (i.e., Task-IL), domain-incremental (i.e., Domain-IL), and class-incremental learning (i.e., Class-IL) [35]. After learning, the model learns from its own replayed samples offline. Our main interest was whether the model would self-improve performance through this offline period. To allow room for performance improvement, we omitted replay in between tasks in the Task-IL and Domain-IL scenarios to avoid ceiling performance prior to the offline period.
Figure 2: The architecture of our model and its offline recovery of hidden representations. **a**. We implemented the model as a variational autoencoder with feedforward connections that map the reconstruction layer to a classification output. Each shaded trapezoid represents a stack of hidden layers. During training, the model receives input-output pairs \((x,y)\) and is trained to output \((x^{\prime},y^{\prime})\) that approximate \((x,y)\). For CIFAR-100 experiments, the model (not illustrated here) learns to output \((h^{\prime},y^{\prime})\) that approximate \((h,y)\), where \(h\) is a hidden representation of \(x\). **b**. After the model is trained on all available tasks, it learns from its own replayed samples offline. The model first generates \((x^{\prime},y^{\prime})\) pairs as offline training data (left). Given \(x^{\prime}\), the model outputs \((x^{\prime\prime},y^{\prime\prime})\) and learns to minimize the discrepancy between \((x^{\prime},y^{\prime})\) and \((x^{\prime\prime},y^{\prime\prime})\). In CIFAR-100 experiments, the model (not illustrated here) minimizes the difference between \((h^{\prime},y^{\prime})\) and \((h^{\prime\prime},y^{\prime\prime})\), which are, respectively, the model’s replayed samples and the model’s outputs given replayed hidden representations \(h^{\prime}\) as inputs. **c**. Analyses of the similarity between hidden layers’ representations of Task 1 inputs across stages of learning suggest that the model can recover prior hidden representations, even those corresponding to the very first task learned, through offline replay. The figure shows an example of this analysis for a hidden layer — the layer that sits directly on top of the latent layer z.
To measure change in performance across the offline period, we tested the model on the same set of held-out testing data both prior to and subsequent to the offline period. Across all scenarios and runs of the model initialized with different random seeds, performance improves through the offline period (Table 1). In contrast, neither standard generative replay nor a recently proposed modification of generative replay, BI-R [34], shows reliable performance improvement through offline replay (Table 1). We performed a control experiment by randomly shuffling the pairings of \(x^{\prime}\) and \(y^{\prime}\) in the replayed data. In this control experiment, we observed impaired performance through the offline period (Table 1), suggesting that the pairings of \(x^{\prime}\) and \(y^{\prime}\) were essential to the model's offline self-recovery of memory. In sum, our results demonstrate that the model self-recovers impaired memories through offline replay.
### Tracking changes in hidden representations across tasks and offline learning
To gain insight into how offline replay promotes memory recovery in the model, we measured the change in the model's hidden representations across tasks and offline learning in split-MNIST experiments. For this analysis, we withheld a set of inputs from Task 1. We computed the similarity between each layer's representations of these inputs at different stages of learning using centered kernel alignment (CKA) [10; 23]. We observed that offline learning facilitated the recovery of all layers' initial representations of Task 1 inputs (i.e., each layer's hidden representation at the end of Task 1 learning): Across all layers, initial representations are more similar to post-offline representations than to pre-offline representations (i.e., p<0.05 for post- vs. pre- offline improvement in CKA similarity across all layers in all three scenarios; an example is shown in Fig. 2c).
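For reference, the linear CKA similarity used in this analysis can be computed in a few lines of NumPy; the variable names in the usage comment are placeholders for the activation matrices described above.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation matrices
    X (n_samples, d1) and Y (n_samples, d2)."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Example usage: reps_task1_end, reps_pre_offline, reps_post_offline would be (n, d)
# activation arrays for the held-out Task-1 inputs at the three stages of learning.
# The recovery signature reported above corresponds to
#   linear_cka(reps_task1_end, reps_post_offline) > linear_cka(reps_task1_end, reps_pre_offline)
```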
### Extending the model to CIFAR-100
To identify whether the memory recovery capacity of the model extends to more challenging settings, we tested the model on the CIFAR-100 dataset consisting of naturalistic image categories. We trained the model to sequentially learn 10 tasks, each of which is a ten-way classification task with ten image classes from CIFAR-100. Due to the difficulty of CIFAR-100, for simulations with this dataset, we adapted BI-R, a modification of generative replay that shows robust learning on this task [34]. As in the model we used for MNIST experiments, we added feedforward connections on top of the reconstruction layer for mapping reconstructed representations to classification outputs. Similar to our experiments on MNIST, we disabled replay in the Task-IL and Domain-IL scenarios, and trained the model on its replayed samples after all task learning. On CIFAR-100, we again observed improved performance through the offline period across training scenarios. These results suggest that the memory-recovery capacity of the model generalizes to more complex tasks.
## 4 Conclusion
In the brain, offline replay is generative and appears to benefit memories in the absence of external data. In contrast, standard generative replay in artificial systems does not facilitate performance when trained on its internally-generated replay data. We introduce an architecture that endows generative replay with a capacity of self-reorganization: Self-recovery of damaged memories. The architecture is a generative model augmented with feedforward connections for generating labels. After sequentially learning a set of tasks on split-MNIST and CIFAR-100, our proposed architecture self-recovers damaged memories when trained on its internally-generated data. By contrast, we do not observe this capacity in standard generative replay, in a recent extension of generative replay, or in our model when its replayed labels are shuffled.
| | Dataset | Task-IL | Domain-IL | Class-IL |
| --- | --- | --- | --- | --- |
| Our model | split-MNIST | 3.24 (\(\pm 0.42\)) | 23.30 (\(\pm 0.57\)) | 23.91 (\(\pm 1.28\)) |
| BI-R | split-MNIST | -1.50 (\(\pm 1.12\)) | 0.13 (\(\pm 0.06\)) | 1.55 (\(\pm 0.22\)) |
| Standard generative replay | split-MNIST | 0.00 (\(\pm 0.00\)) | 0.00 (\(\pm 0.00\)) | 0.00 (\(\pm 0.00\)) |
| Control with shuffled targets | split-MNIST | -32.71 (\(\pm 1.17\)) | -8.68 (\(\pm 1.84\)) | -26.48 (\(\pm 0.89\)) |
| Our model | CIFAR-100 | 3.30 (\(\pm 0.33\)) | 6.55 (\(\pm 0.24\)) | 8.59 (\(\pm 0.38\)) |

Table 1: Mean post- vs. pre-offline accuracy change. For each experiment, each model was run 10 times with different random seeds, with mean (\(\pm\) SEM) accuracy change across runs shown above.
These results suggest that including task labels as part of the generative process could enable generative replay models to learn from and self-reorganize existing representations.
|
2304.03796 | Loss-tolerant architecture for quantum computing with quantum emitters | We develop an architecture for measurement-based quantum computing using
photonic quantum emitters. The architecture exploits spin-photon entanglement
as resource states and standard Bell measurements of photons for fusing them
into a large spin-qubit cluster state. The scheme is tailored to emitters with
limited memory capabilities since it only uses an initial non-adaptive
(ballistic) fusion process to construct a fully percolated graph state of
multiple emitters. By exploring various geometrical constructions for fusing
entangled photons from deterministic emitters, we improve the photon loss
tolerance significantly compared to similar all-photonic schemes. | Matthias C. Löbl, Stefano Paesani, Anders S. Sørensen | 2023-04-07T18:00:25Z | http://arxiv.org/abs/2304.03796v3 | # Loss-tolerant architecture for quantum computing with quantum emitters
###### Abstract
We develop an architecture for measurement-based quantum computing using photonic quantum emitters. The architecture exploits spin-photon entanglement as resource states and standard Bell measurements of photons for fusing them into a large spin-qubit cluster state. The scheme is tailored to emitters with limited memory capabilities since it only uses an initial non-adaptive (ballistic) fusion process to construct a fully percolated graph state of multiple emitters. By exploring various geometrical constructions for fusing entangled photons from deterministic emitters, we improve the photon loss tolerance significantly compared to similar all-photonic schemes.
**Introduction.** Measurement-based quantum computing requires the generation of large graph states (cluster states) followed by measurements on them [1; 2; 3]. This approach is particularly promising for photonic systems since most of the operations can be implemented with linear optics. The required photonic cluster states can be created from small resource states by connecting them through so-called fusion processes [4; 5; 6]. This approach, however, has several challenges: first, fusions are probabilistic and consume photonic qubits [7]. Furthermore, photons travel at immense speed, which necessitates long delay lines to implement conditional feedback operations [8]. Most critically, however, photons are easily lost, which puts stringent bounds on the required photon efficiency. Recent schemes require efficiencies \(>97\%\) [5; 9], which is above the typical values of photonic platforms [10; 11; 12; 13]. Furthermore, schemes using few-photon resource states require boosting [14; 15] the fusion success probability with ancillary photons [5; 6; 9], which further complicates the experimental implementation. Improved architectures have been developed based on larger initial resource states [9], but even few-photon states can only be made with a low probability using the typically employed parametric down-conversion sources [10; 11; 16]. New approaches are thus required to make the generation of large-scale cluster states experimentally feasible.
Fortunately, there are promising new methods to generate large resource states [17; 18]. In particular quantum emitters, such as quantum dots, even enable generating them in a deterministic and thus scalable way [19; 20; 21; 22; 23]. At the same time, these emitters have high photon efficiencies on-chip [24; 25] and end-to-end [12; 13]. Indeed, the largest photonic resource states that have ever been generated are GHZ-states made with a quantum emitter [26]. Several resource states that can be generated with a single emitter are shown in Fig. 1(a). Fig. 1(b) illustrates a lattice on which photonic resource states are geometrically arranged and fused. Besides purely photonic resource states, quantum emitters can also generate states where a stationary spin is entangled with the photons [27; 28; 29]. In this letter, we exploit these properties to construct an architecture, which uses star-shaped resource states (locally equivalent to GHZ states) with a central spin qubit. The spin is entangled with several photonic _leaf_ qubits where the connections represent the entanglement properties (see Fig. 1(c)). From these resource states, we generate a large spin cluster state via rotated type-II fusions (Bell measurements on the photons) [30]. We propose an implementation with semiconductor quantum dots [31; 32], but the scheme can also be applied to atoms [26] or color centers [33]. By using spin qubits as the building blocks of the cluster state, we remove the need to implement feedback on flying qubits as well as unheralded loss of the qubits in the final cluster state. The latter poses a challenge to purely photonic approaches [6; 34]. The proposed hybrid approach combines the advantages of spin-based and photon-based platforms: spins in quantum dots are excellent photon emitters with coherence times [35] much longer than qubit initialization, readout, and manipulation [36; 37; 38], but so far no clear strategy existed for how to scale these systems. In our proposal, the photons provide a fast link between the static spins enabling full-scale quantum computing.
Previous proposals for generating large spin-spin entanglement use a repeat-until-success strategy [39; 40; 41] that creates an immense overhead in the number of qubits [39]. This overhead can be circumvented to some extent with more than one qubit per emitter, one for storage and one for generating the entanglement [42; 43; 44]. However, very long coherence times are required due to the overhead in the running time of the protocol. Our cluster-state generation does not require qubits with long coherence times as no repeat-until-success is required and all fusions are performed in one shot (ballistically) [5], also enabling a high overall clock speed. In contrast to previously proposed ballistic schemes [5; 6], our architecture can operate loss-tolerantly without boosting [14; 15] the fusion success probability with ancillary photons. It only requires standard rotated type-II fusions [30; 45] where success, failure, and photon loss are heralded on the detection pattern. All these features keep the experimental overhead low and make our approach particularly feasible. To optimize the tolerance to photon loss, we explore several lattices on which star-shaped resource states are arranged and fused. Since there are no locality constraints for entanglement generated by Bell measurements on photons,
we consider lattices in several dimensions and search for ideal lattices with a discrete optimization algorithm.
**Building lattices by fusing resource states.** Our approach starts by arranging star-shaped resource states on a fusion lattice as illustrated in Fig. 1(c). These states are defined as [47]:
\[\prod_{j=1}^{N}C_{0j}\left(\ket{+}_{s}\otimes\ket{+}^{\otimes N}\right), \tag{1}\]
where the first qubit is a spin (\(s\)) and the \(N\) other qubits are photons. The state \(\ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})\) is the \(+1\) eigenstate of the Pauli X operator (represented here in the Pauli Z basis with eigenstates \(\ket{0}\), \(\ket{1}\)). \(C_{0j}\) represents a controlled phase gate between the spin (index 0) and photon number \(j\). The central spin qubit of the star-shaped state will be part of the final cluster state and the photons on the _leaves_ are used to perform fusions with other resource states. The rotated type-II fusion that we consider succeeds with a probability of \(p_{s}=0.5\)[45; 30] in which case a connection between two central qubits is established [5]. When the fusion fails, there is no connection. When enough fusions succeed, a large connected entangled state is created, which is a resource for measurement-based quantum computing [1; 4]. The required success probability of the fusions can be quantified by a so-called percolation threshold [4; 5; 6]: when the success probability of fusion is below the bond percolation threshold of the lattice, the generated graph state consists of many small pieces and is useless for quantum computing (that applies for instance to the two-dimensional honeycomb lattice [48]). Above the percolation threshold, a cluster state with a giant connected component spanning the entire lattice is generated (see Fig. 1(d)). Such a graph state can then be renormalized into a universal lattice [46] by local Pauli measurements [49; 50] as illustrated in Fig. 1(d).
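A small statevector computation (an illustration, not part of the original text) makes Eq. (1) concrete for \(N=3\) photonic leaves and verifies the local equivalence to a GHZ state mentioned above: Hadamard gates on the leaf qubits map the star-shaped graph state onto \((\ket{0\ldots 0}+\ket{1\ldots 1})/\sqrt{2}\).

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def kron(*ops):
    return reduce(np.kron, ops)

def cz(n, a, b):
    """Controlled-phase gate between qubits a and b on n qubits (qubit 0 = most significant bit)."""
    U = np.eye(2 ** n)
    for idx in range(2 ** n):
        bits = format(idx, f"0{n}b")
        if bits[a] == "1" and bits[b] == "1":
            U[idx, idx] = -1
    return U

n_leaves = 3
n = n_leaves + 1                                   # spin (qubit 0) plus photonic leaves
plus_all = np.full(2 ** n, 1 / np.sqrt(2 ** n))    # |+>^{(n)}
star = reduce(lambda psi, j: cz(n, 0, j) @ psi, range(1, n), plus_all)   # Eq. (1)

# Local-Clifford check: Hadamards on the leaves map the star state to a GHZ state.
ghz = np.zeros(2 ** n); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
H_leaves = kron(I2, *([H] * n_leaves))
print(np.allclose(H_leaves @ star, ghz))           # True
```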
However, photon losses make it challenging to generate a large-scale percolated cluster state as they leave the graph state in a mixed state [50]. To retain a pure state, the neighborhood of every lost qubit is removed from the graph state by measurements in the Z-basis [5]. When fusion photons are lost, the fusion heralds the loss but not which of the two fusion qubits was lost. Therefore, the neighborhoods of both fusion qubits are removed from the graph state [5] (see Fig. 2(a)). Every edge of the fusion lattice has two fusion photons, and such a loss therefore occurs with probability \(1-(1-p_{loss})^{2}\) per edge, where \(p_{loss}\) is the probability that an individual photon is lost. In contrast, fusion success occurs with probability \((1-p_{loss})^{2}\cdot p_{s}\) and fusion failure with \((1-p_{loss})^{2}\cdot(1-p_{s})\) (both without loss). With this model, we compute percolation thresholds for photon loss: a percolated graph state with a large connected component is created only when the photon efficiency \(\eta=1-p_{loss}\) exceeds the percolation threshold \(\lambda_{c}^{loss}\).
**Lattice construction.** The loss tolerance of the constructions described above depends on which lattice the resource states are geometrically arranged and fused. Therefore, we consider various fusion lattices and analyze their tolerance to photon loss. Our lattice construction is based on the \(d\)-dimensional hypercubic lattice \(\mathbb{Z}^{d}\)[51; 52]. Every point of the lattice represents the central qubit of a star-shaped resource state and the edges represent which fusions are performed. Fig. 1(b) corresponds to a simple cubic fusion lattice for instance. We represent all the connections to the neighbors of a lattice point (qubit) by integer vectors \(\vec{z}\in\mathbb{Z}^{d}\) that illustrate the corresponding geometric differences. We consider graphs with a local neighborhood such that for every connection vector \(\vec{z}\), its maximum integer value per dimension is restricted. Initially, just one integer step per dimension is allowed (\(\vec{z}\in\left\{\,0,\pm 1\right\}^{d}\setminus\left\{\,0\,\right\}\)). Lattices of this type are for instance the hypercubic or the fcc lattice. Further lattices such as the \(d\)-dimensional brickwork representation of the diamond lattice [5; 51] can be obtained by removing particular edges from these lattices. A more detailed description
Figure 1: **(a)** A quantum emitter in a waveguide is used to generate different types of resource states [19; 21]. **(b)** Several star-shaped photonic resource states are arranged on a simple cubic lattice. Fusions of the resource states are performed by using the photons on the _leaves_ of the resource states. Sufficiently many successful fusions generate a large connected cluster state. **(c)** Several star-shaped resource states with a quantum emitter spin as the central qubit can be fused into a distributed spin graph state. **(d)** As the fusions succeed with a finite probability, parts of the desired graph state are missing. The state can be renormalized into a cluster state with a well-defined lattice (here a square lattice) via Y and Z measurements [46].
of all these lattices together with classical site percolation simulations can be found in Refs. [51; 52] and the supplemental material [47].
Note that, in practice, the construction of these high-dimensional lattices is obtained in a standard set-up by collapsing them in the (3+1)-dimensional space. Such a construction is suitable for the photonic platform as the long-range links that are generated in such a collapse can be implemented with minimal loss via optical fibers.
**Loss simulations.** As a metric for the loss tolerance of a graph state construction, we use the percolation probability (a cluster spanning from one edge of the simulated lattice to the other) when fusion failures and photon losses are probabilistically applied (see Fig. 2(a)). A corresponding percolation simulation of a three-dimensional lattice is shown in Fig. 2(b). Associated numerical results for the size of the largest connected cluster component are given in supplemental material [47]. In all simulations, we consider the emitter to be the central qubit of the star-shaped resource (see Fig. 1(c)). This type of qubit is not lost, in contrast to the fused photonic qubits for which we assume a uniform loss probability \(p_{\mathrm{loss}}\). The percolation thresholds for several multi-dimensional lattices are shown in Fig. 2(c), and the corresponding values can be found in the supplemental material [47]. For the best lattices, the percolation thresholds \(\lambda_{c}^{loss}\) are below 0.94 showing that a photon loss probability \(p_{loss}\) of about \(1-\lambda_{c}^{loss}>6\%\) can be tolerated. A detailed description of the algorithm that we use to efficiently perform the percolation simulations for photon loss will be presented elsewhere [53]. For completeness, we also simulate the loss tolerance when using purely photonic resource states where the central qubits can also suffer unheralded loss [34], reported as dashed lines in Fig. 2(c). Note that, in contrast to our spin-based approach, for the all-photonic case the obtained loss thresholds only provide necessary conditions for successful lattice renormalization [4; 46].
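The counting described above, fusion success with probability \((1-p_{loss})^{2}p_{s}\), failure with \((1-p_{loss})^{2}(1-p_{s})\), and removal of both central qubits whenever a fusion photon on an edge is lost, can be illustrated with a simple classical Monte Carlo estimate. The sketch below is a simplified stand-in with open boundaries and a spanning criterion along one axis; it is not the algorithm of Ref. [53], and the lattice size and number of samples are kept deliberately small.

```python
import itertools, random

def percolates(L, d, vectors, eta, p_fuse=0.5):
    """One Monte Carlo sample of the fusion lattice: star resource states sit on the sites
    of an L^d block, and edges (fusion attempts) are given by the integer connection
    `vectors` (one representative per +/- pair).  An edge becomes a bond with probability
    eta^2 * p_fuse; with probability 1 - eta^2 a fusion photon is lost and both adjacent
    central qubits are removed (Z-measured out).  Returns True if an alive, connected
    path links the x=0 and x=L-1 faces."""
    sites = list(itertools.product(range(L), repeat=d))
    index = {s: i for i, s in enumerate(sites)}
    parent = list(range(len(sites)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    alive = [True] * len(sites)
    bonds = []
    for s in sites:
        for v in vectors:
            t = tuple(a + b for a, b in zip(s, v))
            if t in index:                       # open boundaries
                i, j = index[s], index[t]
                r = random.random()
                if r > eta ** 2:                 # heralded photon loss on this fusion
                    alive[i] = alive[j] = False
                elif r > eta ** 2 * (1 - p_fuse):
                    bonds.append((i, j))         # successful fusion
    for i, j in bonds:
        if alive[i] and alive[j]:
            union(i, j)
    left = {find(index[s]) for s in sites if s[0] == 0 and alive[index[s]]}
    right = {find(index[s]) for s in sites if s[0] == L - 1 and alive[index[s]]}
    return bool(left & right)

# Simple cubic fusion lattice (Fig. 1(b)): connection vectors are the unit steps.
cubic = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
for eta in (0.90, 0.94, 0.97, 1.00):
    runs = [percolates(L=12, d=3, vectors=cubic, eta=eta) for _ in range(50)]
    print(eta, sum(runs) / len(runs))
```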
For the considered lattices, Fig. 2(c) shows that the hypercubic lattices perform best in dimensions 3 and 4, whereas the diamond lattices are ideal for higher dimensions. A remarkable feature is the dependence of the percolation thresholds on the dimension. For the hypercubic (hc) lattices and the lattices fcc+hc, bcc+hc, we observe an optimum dimension where the corresponding fusion lattice has the lowest percolation threshold. For the diamond lattice, it is not obvious where the optimum is, but we expect such an optimum: with increasing dimension, the vertex degree of the corresponding lattices increases. On the one hand, a higher vertex degree is desirable because more fusion attempts lead to a higher chance to establish connections in the cluster state. On the other hand, in the presence of photon loss, too high a vertex degree (more fusion photons) is problematic since the loss of a fusion qubit makes the central qubit useless (it must be erased by measuring in the Z-basis, see Fig. 2(a)). We further analyze this point in the supplemental material [47] in the context of the largest connected cluster state component. A dimension where the percolation threshold of a certain lattice type reaches a minimum is a feature of the fragile nature of entanglement. This differs from classical bond- and site-percolation, where the percolation thresholds always decrease when adding more bonds to the same lattice.
Figure 2: **(a)** Loss on a fusion lattice where star-shaped resource states are geometrically arranged and fused. When a fusion photon is lost, the neighborhoods of both fusion photons are removed from the graph state by Z-measurements to retain a pure quantum state. **(b)** Percolation simulation for estimating the tolerance to photon loss. The simulated lattice is three-dimensional and has been optimized for a low percolation threshold. Every curve is an average of over \(10^{3}\) simulations. Above a certain threshold, the probability for a connection between the edges of a \(d\)-dimensional lattice of size \(L^{d}\) approaches unity as \(L\rightarrow\infty\). **(c)** Percolation thresholds \(\lambda_{c}^{loss}\) (minimum required value for \(\eta=1-p_{loss}\)). The thresholds are obtained by extrapolating the simulation results of lattices with finite sizes \(L\) (see part (b)) towards infinity using the method from Ref. [51]. The considered lattices are hypercubic (hc), diamond, body-centered cubic combined with the (bcc+hc), face-centered cubic combined with the (fcc+hc), as well as lattices obtained by a discrete optimization algorithm (\(L_{2d}^{u}\), \(L_{3d}^{u}\)). The simulations are plotted for all-photonic star-shaped resource states (dashed lines) as well as resource states with a quantum emitter spin as a static (st) central qubit (solid lines).
**Lattice optimization.** Fig. 2(c) illustrates that different lattices show quite different performances under photon loss. Therefore, we search for ideal lattices in a given dimension by a discrete optimization algorithm. As before, we virtually place vertices on a hypercubic lattice and represent the connections by integer vectors \((\vec{z}\in\{\,-n,...,n\,\}^{d}\setminus\{\,0\,\})\). We only consider lattices where all nodes have an identical neighborhood, so the presence of \(\vec{z}\) implies that there is also a connection to a neighbor in direction \(-\vec{z}\). Therefore, all lattices that we consider have an even vertex degree. Our algorithm starts from a lattice where every node only has two connection vectors \((\vec{z}\) and \(-\vec{z})\) that are randomly chosen from \(\{\,-n,...,n\,\}^{d}\setminus\{\,0\,\}\). Next, two additional vectors \((\vec{a}\) and \(-\vec{a})\) are randomly selected and added to the lattice. If the percolation threshold improves by adding these vectors, the algorithm adds further vectors. In contrast, if the percolation threshold gets worse, the algorithm removes the two most recently added vectors and adds another pair of vectors \((\vec{b}\) and \(-\vec{b})\) instead. If adding any vector just makes the percolation threshold worse, the algorithm terminates. Note that one could instead start from a lattice with all possible vectors and remove vectors randomly. However, when starting from a lattice with a very high vertex degree, the algorithm often does not succeed. The reason is that the percolation threshold can only be determined with limited accuracy, which can cause the algorithm to terminate too early.
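A schematic version of this search is sketched below, reusing the `percolates` helper from the Monte Carlo sketch above. The grid-based threshold estimate, the small lattice size, and the stopping rule based on a fixed number of unsuccessful candidate pairs are simplifications introduced here; they stand in for the more careful threshold extrapolation described in the text.

```python
import itertools, random

def estimate_threshold(vectors, d, L=10, runs=40, grid=None):
    """Crude grid estimate of the loss threshold: smallest eta on the grid for which at
    least half of the Monte Carlo samples percolate (uses `percolates` from above)."""
    grid = grid or [0.90 + 0.005 * k for k in range(21)]
    for eta in grid:
        if sum(percolates(L, d, vectors, eta) for _ in range(runs)) >= runs / 2:
            return eta
    return 1.01   # never percolated on the grid

def greedy_lattice_search(d, n=2, seed=0):
    """Sketch of the discrete search: start from a single +/- pair of connection vectors,
    repeatedly try to add a random pair, keep it only if the estimated threshold improves,
    and stop after a fixed number of unhelpful candidates."""
    rng = random.Random(seed)
    candidates = [v for v in itertools.product(range(-n, n + 1), repeat=d) if any(v)]
    vectors = [rng.choice(candidates)]
    best = estimate_threshold(vectors, d)
    stalled = 0
    while stalled < 20:
        trial = rng.choice(candidates)
        if trial in vectors or tuple(-x for x in trial) in vectors:
            stalled += 1
            continue
        new = estimate_threshold(vectors + [trial], d)
        if new < best:
            vectors.append(trial)
            best, stalled = new, 0
        else:
            stalled += 1
    return vectors, best
```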
In two and three dimensions, we find lattices that show better performance in comparison to the simple lattices studied above. These lattices (\(L_{2d}^{a}\) and \(L_{3d}^{a}\)) are shown in Fig. 3(a,b) and the vectors representing the different lattices are given in the supplemental material [47]. They can already tolerate up to \(6.5\%\) photon loss when the standard type-II fusions only succeed with \(p_{s}=0.5\). The lattice \(L_{2d}^{a}\) has an interesting spider structure with some local connection plus some long legs making connections to nodes that are further away. We believe that this lattice has a high loss tolerance because it imitates properties of higher dimensional lattices. In a high-dimensional lattice, a loss may break a connection along a certain dimension, but connections in the other dimensions find a way around it. The long connections of the spider-lattice may play a similar role by bridging local interruptions caused by photon loss. In four dimensions and with a photon as the central qubit, the algorithm finds a lattice with a performance that is almost identical to the 4d hypercubic lattice. Remarkably, when the central qubit is a spin, the algorithm finds exactly the 4d hypercubic lattice (or a lattice that is equivalent up to distortions).
**Discussion.** We have proposed a scheme for generating giant spin cluster states for measurement-based quantum computing. The basic building block is star-shaped resource states that can be generated on demand by quantum emitters with a spin and are thus experimentally in reach [26, 20]. We fuse these resource states performing all fusions simultaneously (ballistically). This allows a fast clock rate and can even be done with limited spin coherence times. We only use non-boosted rotated type-II fusion, and find fusion lattices in two and higher dimensions that can tolerate about \(6-8\%\) photon loss. This loss tolerance is a significant improvement compared to recent all-photonic schemes [5, 9] even though some of these schemes assume more complicated resource states and boosted fusion [54, 9]. Our results can guide experimental work towards scalable quantum computing with quantum emitters and provide performance thresholds that such emitters need to meet.
To further reduce the experimental requirements, we see several possible extensions of our work: (1) We only consider a certain type of lattice, and more general classes of lattices could be simulated by using combinatorial tiling theory [55] or quasi-periodic tilings [56]. (2) In our approach, lattices are built from star-shaped (GHZ) states. More complex resource states enable the exploitation of loss- and error-tolerant sub-spaces [57]. However, generating such states is more involved and might require direct spin-spin gates between emitters [58, 59, 60]. (3) Here we focus on loss tolerance. Integrating the approach with techniques for quantum error correction against logical errors can yield a fully fault-tolerant architecture. In this regard, it is encouraging that fault tolerance relies on similar percolation concepts [61, 62], facilitating its integration with the current approach. Furthermore, error correction benefits from high-dimensional structures, which also favors the developed architecture [63, 64, 65].
_Acknowledgements._ We are grateful for discussions with Love A. Pettersson, Yu-Xiang Zhang, and Peter Lodahl. S.P. acknowledges funding from the Cisco University Research Program Fund (nr. 2021-234494), from the Marie Sklodowska-Curie Fellowship project QSun (nr. 101063763), from the VILLUM FONDEN research grant VIL50326, and support from the NNF Quantum Computing Programme.
Figure 3: **(a)** Two-dimensional fusion lattice with high loss tolerance. **(b)** Optimized fusion lattice in three dimensions. |
2303.08664 | Grzegorczyk and Whitehead points: the story continues | One of the main goals of region-based theories of space is to formulate a
geometrically appealing definition of points. The paper is devoted to the
analysis of two such seminal definitions: Alfred N. Whitehead's (1929) and
Andrzej Grzegorczyk's (1960). Relying on the work of Loredana Biacino and
Giangiacomo Gerla (1996), we improve their results, solve some open problems
concerning the mutual relationship between Whitehead and Grzegorczyk points,
and put forward open problems for future investigation. | Rafał Gruszczyński, Santiago Jockwich Martinez | 2023-03-15T14:51:11Z | http://arxiv.org/abs/2303.08664v2 | # Grzegorczyk and Whitehead points: the story continues
###### Abstract.
One of the main goals of region-based theories of space is to formulate a geometrically appealing definition of _points_. The paper is devoted to the analysis of two such seminal definitions: Alfred N. Whitehead's (1929) and Andrzej Grzegorczyk's (1960). Relying on the work of Loredana Biacino and Giangiacomo Gerla (1996), we improve their results, solve some open problems concerning the mutual relationship between Whitehead and Grzegorczyk points, and put forward open problems for future investigation.
MSC: 00A30, 03G05, 06E25.
Keywords: Boolean contact algebras; region-based theories of space; point-free theories of space; points; spatial reasoning; Grzegorczyk; Whitehead
## Introduction
Alfred N. Whitehead (1929) was one of the first thinkers who--following ideas to be found in the seminal paper of de Laguna (1922)--proposed a geometrically appealing definition of _point_ in terms of _regions of space_ and the _contact_ relation. His construction was inventive and elegant yet lacked mathematical rigor. In the 1960s, the Polish logician Andrzej Grzegorczyk put forward one of the first mathematically satisfactory systems of region-based topology, in which he formulated a different, yet also geometrically motivated, construction of points. The comparison of the two approaches was carried out by Loredana Biacino and Giangiacomo Gerla (1996), who--under some reasonable assumptions--demonstrated that the two notions of _point_ coincide.
The seminal paper by Biacino and Gerla (1996) is the foundation for our work. The two main results of the paper were Theorems 5.1 and 5.3. The former establishes that every Grzegorczyk representative of a point is a Whitehead representative; the latter shows that the reverse inclusion holds for those Whitehead representatives that can be represented by countable families of regions.
To prove the first inclusion Biacino and Gerla work with the second-order theory of Grzegorczyk (1960). We show that the specific axioms can be eliminated in favour of the standard first-order mereotopological postulates. Moreover, we prove that the second-order monadic statement 'every Grzegorczyk representative is a Whitehead representative' is equivalent (in the subclass of Boolean weak contact algebras in the sense of Duntsch and Winter (2006) in which every region has a non-tangential part) to the first-order statement 'there are no atoms'. For completeness of presentation we show that no part of this equivalence holds in the (general) class of Boolean weak contact algebras.
As for the second inclusion, we identify a gap in the proof of Theorem 5.3, and we show that it cannot be carried out without assuming an additional axiom postulating coherence, a mereotopological counterpart of the connectedness property. We also improve the original result by addressing an open problem
from (Biacino and Gerla, 1996). That is, we show that the countability assumption about Whitehead representatives can be eliminated, if we assume a stronger second-order version of the standard mereotopological interpolation axiom.
Moreover, we prove that in complete structures, purely mereological notions are too weak to guarantee the existence of Whitehead representatives of points. The English logician himself envisaged this, but no general proof of this fact exists in the literature so far.1
Footnote 1: We elaborate on this further on p. 11.
We also provide various examples of Whitehead points within algebraic structures. This provides evidence for the claim that Whitehead points are mathematically tractable.
More or less from the beginning of the 21st century, Boolean contact algebras (see e.g., Bennett and Duntsch, 2007) have provided the standard mathematical framework for doing region-based topology. This is a comfortable situation that allows for the unification and comparison of different approaches to point-free theories of space. For this reason, in this paper, we also use the aforementioned algebras. This approach is different from the original approaches of Whitehead and Grzegorczyk, as the former used a _contact_ relation as the only primitive, and the latter worked in mereology (the theory of the _part of_ relation) extended with contact. From a technical point of view, these differences are irrelevant. At the same time, the unified well-established environment of Boolean contact algebras allows for a precise and clear presentation of both approaches to region-based theories.
The paper is organized as follows: In Section 1, we review some preliminaries and introduce the main objects of study, viz., Boolean (weak) contact algebras. In Section 2, we present two formal accounts of Grzegorczyk points, i.e., Grzegorczyk points defined in terms of equivalence classes and Grzegorczyk points understood as filters. Moreover, we show that these two definitions are equivalent in the context of Boolean weak contact algebras. Section 3 addresses a formal account of Whitehead points. We study some of their properties and provide examples of such points within regular open algebras. This section also contains a proof of the insufficiency of purely mereological notions for the existence of Whitehead points. Section 4 studies the minimal constraints that a Boolean weak contact algebra has to satisfy to guarantee that every Grzegorczyk representative of a point is a Whitehead representative. In particular, in this section we strengthen Theorem 5.1 of Biacino and Gerla (1996). In Section 5 we fill the mentioned gap in the original proof of Theorem 5.3 of Biacino and Gerla (1996) and study the logical status of the second-order condition 'every Whitehead representative is a Grzegorczyk representative'. In Section 6, we generalize Theorem 5.3 to Whitehead points of any size.
## 1. Weak-contact and contact algebras
As usual, \(\neg\), \(\wedge\), \(\vee\), \(\longrightarrow\), \(\longleftrightarrow\), \(\forall\) and \(\exists\) denote the standard logical constants of negation, conjunction, disjunction, material implication, material equivalence, universal and the existential quantifier. We use '\(\nexists\)' as an abbreviation for '\(\neg\exists\)'. Moreover, \(:\longleftrightarrow\) means _equivalent by definition_, and \(:=\) means _equal by definition_. We use \(\omega\) to denote the set of natural numbers understood as von Neumann ordinals. For a fixed space \(X\) and \(x\subseteq X\), \(\complement x:=X\setminus x\) is the set-theoretical complement of \(x\) in \(X\). \(|X|\) is the cardinal number of a set \(X\), and \(\mathcal{P}(X)\) is its power set.
Moreover, let:
\[\mathfrak{B}=\langle\boldsymbol{B},\cdot,+,-,\boldsymbol{0},\boldsymbol{1}\rangle\]
be a Boolean algebra (BA for short) with the operations of, respectively, meet, join, and boolean complement; and with the two distinguished elements: the minimum
\(\mathsf{0}\) and the maximum \(1\). Elements of the domain will be called _regions_. The class of all Boolean algebras will be denoted by '\(\mathsf{BA}\)'. We will often refer to the domain of BA via its name '\(\mathfrak{B}\)'. Notice that this convention will not lead to any ambiguities.
In \(\mathfrak{B}\) we define two standard order relations:
\[\begin{split}\mathrm{(df\leq)}&\qquad\qquad\qquad x \leq y\;:\longleftrightarrow\;x\cdot y=x\,,\\ \mathrm{(df<)}&\qquad\qquad\qquad x<y\;:\longleftrightarrow \;x\leq y\wedge x\neq y\,.\end{split}\]
In the former case we say that \(x\) is _part_ of \(y\) or that \(x\) is _below_\(y\), in the latter that \(x\) is _proper part_ of \(y\) or that \(x\) is _strictly below_\(y\).
Any Boolean algebra \(\mathfrak{B}\) is turned into a _Boolean contact algebra_ (BCA for short) by extending it to a structure \(\langle\boldsymbol{B},\cdot,+,-,\mathsf{0},\mathsf{1},\mathsf{C}\rangle\) where \(\mathsf{C}\subseteq\boldsymbol{B}^{2}\) is a _contact_ relation which satisfies the following five axioms:
(C0) \[\neg(\mathsf{0}\;\mathsf{C}\;x)\,,\]
(C1) \[x\leq y\wedge x\neq\mathsf{0}\longrightarrow x\;\mathsf{C}\;y\,,\]
(C2) \[x\;\mathsf{C}\;y\longrightarrow y\;\mathsf{C}\;x\,,\]
(C3) \[x\leq y\longrightarrow\forall_{z\in B}(z\;\mathsf{C}\;x\longrightarrow z\;\mathsf{C}\;y)\,,\]
(C4) \[x\;\mathsf{C}\;(y+z)\longrightarrow x\;\mathsf{C}\;y\lor x\;\mathsf{C}\;z\,.\]
The complement of \(\mathsf{C}\) will be denoted by '\(\not\mathrel{\mathsf{C}}\)', and in the case \(x\not\mathrel{\mathsf{C}}y\) we say that \(x\) is _separated from_ \(y\). The class of all Boolean contact algebras will be denoted by '\(\mathsf{BCA}\)'. If \(\mathsf{C}\) satisfies (C0)-(C3), it is called--after Duntsch and Winter (2006)--a _weak contact_ relation and the corresponding structure bears the name of a Boolean _weak contact_ algebra (BWCA for short). The class of all weak contact algebras will be denoted by '\(\mathsf{BWCA}\)'.
We introduce the convention according to which given a class \(\mathsf{K}\) of structures and some conditions \(\varphi_{1},\ldots,\varphi_{n}\) put upon elements of \(\mathsf{K}\), \(\mathsf{K}+\varphi_{1}+\ldots+\varphi_{n}\) (or \(\mathsf{K}+\{\varphi_{1},\ldots,\varphi_{n}\}\)) is the subclass of \(\mathsf{K}\) in which every structure satisfies all \(\varphi_{1},\ldots,\varphi_{n}\), e.g.,
\[\mathsf{BCA}=\mathsf{BWCA}+\mathsf{(C4)}\,.\]
In \(\mathfrak{B}\in\mathsf{BWCA}\) we define an auxiliary relation of _non-tangential_ inclusion (or _way below_, _well-inside_) relation:
\[x\ll y\;:\longleftrightarrow\;x\not\mathrel{\mathsf{C}}-y\,.\]
We also define \(x\circ y\) to mean that \(x\cdot y\neq\mathsf{0}\), and take \(\bot\subseteq\mathfrak{B}\times\mathfrak{B}\) to be the set-theoretical complement of \(\circ\). In the former case we say that \(x\) _overlaps_ \(y\), in the latter, that \(x\) _is disjoint from_ \(y\) or \(x\) _is incompatible with_ \(y\). A structure \(\langle\boldsymbol{B},\cdot,+,-,\mathsf{0},\mathsf{1},\circ\rangle\) is a standard example of a BCA.2 The most well-known interpretation of contact is the _topological_ one. For a fixed space \(\langle X,\mathscr{O}\rangle\) we take the underlying algebra to be either the complete algebra \(\mathrm{RO}(X)\) of all regular open subsets of \(X\), or its subalgebra \(B\). The Boolean operations3 are:
Footnote 2: The overlap relation is actually the smallest contact relation on a BCA, see (Düntsch and Winter, 2004).
Footnote 3: Int and \(\mathrm{Cl}\) are the standard topological _interior_ and _closure_ operators.
\[x\cdot y :=x\cap y\] \[x+y :=\operatorname{Int}\operatorname{Cl}(x\cup y)\] \[-x :=\operatorname{Int}\operatorname{\mathbbm{C}}x\]
and the contact relation is given by:
\[x\,\mathsf{C}_{\mathrm{T}}\,y\;:\longleftrightarrow\;\operatorname{Cl}x \cap\operatorname{Cl}y\neq\emptyset\,.\]
Moreover, we have:
\[x\ll_{\mathrm{T}}y\longleftrightarrow\mathrm{Cl}\,x\subseteq y\,.\]
The relation \(\mathsf{C}_{\mathrm{T}}\) satisfies axioms (C0)-(C4), so any topological contact algebra is in the class \(\mathsf{BCA}\).
We may use a similar interpretation on the whole power set algebra of \(X\), i.e., \(\langle\mathcal{P}(X),\mathsf{C}_{\mathrm{T}}\rangle\) is a Boolean contact algebra (provided \(X\) is equipped with a topology, of course). Observe that despite the algebra being atomic, the contact relation does not collapse to the overlap relation. For example, in the case of \(\mathds{R}\) with the standard topology, the open intervals \((0,1)\) and \((1,2)\) are disjoint, yet they are in contact since their closures share an atom. However, we may look upon \(\mathsf{C}_{\mathrm{T}}\) as a form of an overlap relation since in this special case of the power set algebra, we have:
\[x\,\mathsf{C}_{\mathrm{T}}\,y\longleftrightarrow\mathrm{Cl}\,x\circ\mathrm{ Cl}\,y\,,\]
which usually is not true when we take into account regular open algebras. From this, it follows that closed sets are in contact only if they overlap, and thus the contact between atoms reduces to identity, if the underlying topology is \(T_{1}\).
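The following Python sketch (ours, and merely a finite sanity check rather than part of the formal development) illustrates the two interpretations just discussed: the overlap relation as the smallest contact on a power set algebra, and the closure-based contact \(\mathsf{C}_{\mathrm{T}}\) on the power set of a small non-\(T_{1}\) space, under which disjoint regions, and even distinct atoms, can be in contact. The three-point space and its topology are our own choice.

```python
from itertools import combinations

def subsets(universe):
    elems = sorted(universe)
    return [frozenset(c) for r in range(len(elems) + 1) for c in combinations(elems, r)]

def is_contact(B, zero, join, C):
    """Brute-force check of (C0)-(C4) for a relation C on a finite Boolean algebra B."""
    return (all(not C(zero, x) for x in B)                                          # (C0)
            and all(C(x, y) for x in B for y in B if x != zero and x <= y)          # (C1)
            and all(C(y, x) for x in B for y in B if C(x, y))                       # (C2)
            and all(C(z, y) for x in B for y in B if x <= y for z in B if C(z, x))  # (C3)
            and all(C(x, y) or C(x, z)
                    for x in B for y in B for z in B if C(x, join(y, z))))          # (C4)

# 1. Overlap as contact on the power set algebra of {0, 1, 2}.
B1 = subsets({0, 1, 2})
overlap = lambda x, y: bool(x & y)
assert is_contact(B1, frozenset(), lambda y, z: y | z, overlap)

# 2. Closure-based contact C_T on the power set of X = {a, b, c} with the
#    (non-T_1) topology whose open sets are {}, {a}, {c}, {a, c}, X.
X = frozenset({"a", "b", "c"})
OPENS = [frozenset(s) for s in [set(), {"a"}, {"c"}, {"a", "c"}, {"a", "b", "c"}]]
CLOSED = [X - u for u in OPENS]

def closure(s):
    """Smallest closed set containing s."""
    result = X
    for c in CLOSED:
        if s <= c:
            result &= c
    return result

B2 = subsets(X)
contact_T = lambda x, y: bool(closure(x) & closure(y))
assert is_contact(B2, frozenset(), lambda y, z: y | z, contact_T)

# Disjoint regions in contact (the finite analogue of (0,1) and (1,2) on the reals)...
assert not (frozenset({"a"}) & frozenset({"c"})) and contact_T(frozenset({"a"}), frozenset({"c"}))
# ...and two distinct atoms in contact, since the topology is not T_1.
assert contact_T(frozenset({"a"}), frozenset({"b"}))
print("both relations satisfy (C0)-(C4)")
```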
The following facts are standardly proven to hold in \(\mathsf{BWCA}\):
\[x\ll y\longrightarrow x\leq y\,, \tag{1.1}\]
\[x\ll y\wedge y\ll x\longrightarrow x=y\,, \tag{1.2}\]
\[x\ll y\wedge y\leq z\longrightarrow x\ll z\,, \tag{1.3}\]
\[x\leq y\wedge y\ll z\longrightarrow x\ll z\,, \tag{1.4}\]
\[x\ll y\wedge y\ll z\longrightarrow x\ll z\,, \tag{1.5}\]
\[x\ll y\longleftrightarrow-y\ll-x\,. \tag{1.6}\]
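As a small worked illustration (ours) of how such facts unfold from the definitions, (1.6) follows from the definition of \(\ll\) together with the symmetry axiom (C2) and the Boolean law \(-(-x)=x\) alone:
\[x\ll y\;\longleftrightarrow\;x\not\mathrel{\mathsf{C}}-y\;\longleftrightarrow\;-y\not\mathrel{\mathsf{C}}x\;\longleftrightarrow\;-y\not\mathrel{\mathsf{C}}-(-x)\;\longleftrightarrow\;-y\ll-x\,.\]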
**Definition 1**.: An _atom_ of a Boolean (contact) algebra is a non-zero region \(x\) that is minimal with respect to \(\leq\) among non-zero regions. A BWCA is _atomic_ iff its underlying BA is atomic iff every non-zero region contains an atom. A BWCA is _atomless_ iff it does not have any atoms, i.e., satisfies the following condition:
( \[\sharp\mathrm{At}\] ) \[(\forall x\in\mathfrak{B}\setminus\{\mathfrak{0}\})(\exists y\in\mathfrak{B} \setminus\{\mathfrak{0}\})\,y<x\,.\]
\(\dashv\)
## 2. Grzegorczyk points
A _Grzegorczyk representative of a point_ (for short: _G-representative_)4 in \(\mathfrak{B}\in\mathsf{BWCA}\) is a non-empty set \(Q\) of regions such that:
Footnote 4: Both the term and its abbreviation are adopted from (Biacino and Gerla, 1996).
(r0) \[\mathsf{0}\notin Q\,,\]
(r1) \[(\forall u,v\in Q)(u=v\lor u\ll v\lor v\ll u)\,,\]
(r2) \[(\forall u\in Q)(\exists v\in Q)\;v\ll u\,,\]
(r3) \[(\forall x,y\in\mathfrak{B})\big((\forall u\in Q)(u\circ x\wedge u\circ y)\longrightarrow x\;\mathsf{C}\;y\big)\,.\]
Let \(\mathbf{Q}_{G}\) be the set of all G-representatives of \(\mathfrak{B}\). The purpose of the definition is to formally grasp the intuition of a point as a system of diminishing regions determining a unique location in space. We call it a _representative_, since if we understand a point as a perfect representation of a location in space, then two different sets of regions may represent the same location (see Figure 1 for a geometrical intuition on the Cartesian plane). Further, we will identify such G-representatives to be one point. Although the definition has a strong geometrical flavor, G-representatives
may be somewhat strange entities in BCAs that have little to do with spatial intuitions. We will look at some indicative examples. But first, let us go through an example of a G-representative in a well-known setting: the reals.
_Example 1_.: Take the real line \(\mathds{R}\) with the Euclidean topology. It is a standard result that the pair \(\langle\mathrm{RO}(\mathds{R}),\mathsf{C}_{\mathrm{T}}\rangle\), where \(\mathrm{RO}(\mathds{R})\) is the complete algebra of regular open subsets of \(\mathds{R}\) and \(\mathsf{C}_{\mathrm{T}}\) is the standard topological interpretation of contact (as defined above) is a Boolean contact algebra.
Take \(0\in\mathds{R}\). Obviously, the set:
\[\{(-\nicefrac{{1}}{{n}},\nicefrac{{1}}{{n}})\,|\,n\in\omega\setminus\{0\}\}\]
is a G-representative. But also:
\[\{(-\nicefrac{{1}}{{n}},\nicefrac{{1}}{{n}})\,|\,n\in\mathds{O}\}\]
where \(\mathds{O}\subseteq\omega\) is the set of odd numbers, and
\[\{(-\nicefrac{{1}}{{r}},\nicefrac{{1}}{{r}})\,|\,r\text{ is a positive irrational}\}\]
are G-representatives standing for the same location in the one-dimensional space, i.e., number \(0\). Moreover, one can easily see that there are uncountably many such G-representatives.5
Footnote 5: The reader interested in philosophical issues related to Grzegorczyk points is asked to consult (Gruszczynski and Pietruszczak, 2009).
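For readers who like to experiment, here is a small Python sketch (ours; obviously no substitute for the definition) that checks conditions (r1) and (r2) on an initial segment of the first family above, using the characterization \(x\ll_{\mathrm{T}}y\longleftrightarrow\operatorname{Cl}x\subseteq y\); condition (r3) quantifies over all regions of \(\operatorname{RO}(\mathds{R})\) and is not checked computationally.

```python
from fractions import Fraction

def interval(n):
    """The region (-1/n, 1/n), encoded by its right endpoint 1/n."""
    return Fraction(1, n)

def way_below(u, v):
    """u <<_T v for these symmetric intervals: [-u, u] is contained in (-v, v), i.e. u < v."""
    return u < v

family = [interval(n) for n in range(1, 51)]

# (r1): any two members are equal or comparable by <<_T.
assert all(u == v or way_below(u, v) or way_below(v, u) for u in family for v in family)

# (r2): below every member listed here there is another member well inside it
# (witnessed by the next index; in the full family this holds for every n).
assert all(way_below(interval(n + 1), interval(n)) for n in range(1, 51))

print("(r1) and (r2) hold on the initial segment")
```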
**Definition 2**.: If \(X,Y\) are subsets of a BWCA, then \(Y\) _covers_ \(X\) (or \(X\) is _covered by_ \(Y\)) iff for every \(y\in Y\) there is \(x\in X\) such that \(x\leq y\). We write '\(X\trianglerighteq Y\)' meaning \(X\) covers \(Y\), and '\(X\trianglelefteq Y\)' meaning \(X\) is covered by \(Y\). Let \(\not\trianglelefteq\) be the set-theoretical complement of \(\trianglelefteq\).
For a region \(x\) of a BWCA, let \(\downarrow\!x:=\{y\in B\mid y\leq x\}\), i.e., \(\downarrow\!x\) is the set of all parts of \(x\). \(\dashv\)
The general fact that different G-representatives can represent the same location in space follows also from:
**Lemma 2.1**.: (Gruszczynski and Pietruszczak, 2018, Lemma 5.6) _If \(Q\) is a G-representative in \(\mathfrak{B}\in\mathsf{BWCA}\), then every subset of \(Q\) covered by \(Q\) is also a G-representative. In particular, for any region \(x\), \(Q\cap\!\downarrow\!x\) is a G-representative, provided \(Q\cap\!\downarrow\!x\neq\emptyset\)._
In light of the above, to speak about points we need to be able to identify different G-representatives which stand for the same locus.
Figure 1. \(Q_{1}\) and \(Q_{2}\) representing the same location in two-dimensional Euclidean space
### Grzegorczyk points as quotients
Let us begin with the definition:
(df\(\,\dot{\downarrow}\)) \[\dot{\downarrow}x:=\{y\in\mathfrak{B}\mid y\ll x\}\]
and two lemmas.
**Lemma 2.2**.: _If \(Q_{1}\) and \(Q_{2}\) are G-representatives in \(\mathfrak{B}\in\)_BWCA_, then:_
\[(\forall x\in Q_{1})(\forall y\in Q_{2})\,x\ \mathsf{C}\ y\qquad\text{iff} \qquad Q_{2}\trianglelefteq Q_{1}.\]
Proof.: (i) Assume that \(Q_{2}\) is not covered by \(Q_{1}\), i.e., there is \(x_{1}\in Q_{1}\) such that for every \(y\in Q_{2}\), \(y-x_{1}\neq\mathsf{0}\).6 By (r2), there is \(x_{0}\in Q_{1}\cap\dot{\downarrow}x_{1}\) (i.e., by definition of \(\ll\) we have \(x_{0}\not\mathrel{\mathsf{C}}-x_{1}\)). Observe that for every \(z,y\in Q_{2}\), \(z\circ y-x_{1}\). Indeed, if \(z,y\in Q_{2}\), we have that either (a) \(z\leq y\) or (b) \(y\leq z\). If (a) holds, \(z-x_{1}\leq z\) and \(z-x_{1}\leq y-x_{1}\). If (b) holds, \(y-x_{1}\leq z\). If \((\forall z\in Q_{2})\,z\circ x_{0}\), then by (r3) we obtain that \(x_{0}\ \mathsf{C}\ y-x_{1}\), a contradiction. So there is \(z_{0}\in Q_{2}\) such that \(z_{0}\perp x_{0}\). By (r2) again there is \(z_{1}\in Q_{2}\cap\dot{\downarrow}z_{0}\), so \(z_{1}\not\mathrel{\mathsf{C}}x_{0}\).
Footnote 6: '\(x-y\)' abbreviates '\(x\cdot-y\)'.
(ii) Suppose there are \(x\in Q_{1}\) and \(y\in Q_{2}\) such that \(x\not\mathrel{\mathsf{C}}y\), but \(Q_{2}\trianglelefteq Q_{1}\). Take \(z\in Q_{2}\cap\dot{\downarrow}x\). If \(z\leq y\), then \(y\circ x\), and if \(y\leq z\), then \(y\leq x\), a contradiction, as \(x\perp y\) by the fact that \(\circ\subseteq\mathsf{C}\), which is easily verified.
In consequence we have:
**Lemma 2.3**.: _Let \(\mathfrak{B}\in\)_BWCA_. If \(Q_{1}\) and \(Q_{2}\) are G-representatives and \(Q_{1}\trianglelefteq Q_{2}\), then \(Q_{2}\trianglelefteq Q_{1}\)._
Proof.: Let \(Q_{1}\trianglelefteq Q_{2}\) but \(Q_{2}\not\trianglelefteq Q_{1}\). By Lemma 2.2 applied twice, we have that for all \(x\in Q_{1}\) and \(y\in Q_{2}\): \(x\;\mathsf{C}\;y\), and there are \(x_{0}\in Q_{1}\) and \(y_{0}\in Q_{2}\) such that \(x_{0}\not\mathrel{\mathsf{C}}y_{0}\), a contradiction.
**Theorem 2.4**.: _If \(\mathfrak{B}\in\)_BWCA_, then \(\trianglelefteq\) is an equivalence relation on the set of G-representatives._
Proof.: The symmetry of \(\trianglelefteq\) follows from Lemma 2.3. The reflexivity and transitivity of \(\trianglelefteq\) follow from the reflexivity and transitivity of \(\leq\).
We are now in a position to say precisely that G-representatives \(Q_{1}\) and \(Q_{2}\) _represent the same location_ if and only if \(Q_{1}\) is covered by \(Q_{2}\) and \(Q_{2}\) is covered by \(Q_{1}\). Therefore, it is reasonable to define _points_ as equivalence classes of \(\trianglelefteq\) on the set of all G-representatives \(\mathbf{Q}_{G}\) (to emphasize the fact that \(Q_{1}\) and \(Q_{2}\) represent the same location, i.e., are mutually covered by one another, we will write '\(Q_{1}\sim Q_{2}\)'):
\[\mathbf{Eq}:=\mathbf{Q}_{G}/_{\sim}\,.\]
For any two sets of regions \(X\) and \(Y\) such that \(X\) covers \(Y\) and \(Y\) covers \(X\), we will say that \(X\) and \(Y\) are _coinitial_.
### Grzegorczyk points as filters
The second (chronologically the first) idea--used by Grzegorczyk (1960)--is to define points as filters that are generated by G-representatives:
\[\mathscr{F}\text{ is a point iff }(\exists Q\in\mathbf{Q}_{G})\,\mathscr{F}=\{x \in\mathfrak{B}\mid(\exists q\in Q)\,q\leq x\}\,.\]
By '\(\mathscr{F}_{Q}\)', we will denote a point generated by the G-representative \(Q\). These filters will be called _G-points_, and the set of all G-points will be denoted by '**Grz**', while its elements by small fraktur letters 'p', 'q' and 'r', indexed if necessary. For every G-point \(\mathscr{F}_{Q}\) we have:
\[x\in\mathscr{F}_{Q}\longleftrightarrow(\exists y\in Q)\,y\ll x\longleftrightarrow (\exists y\in Q)\,y\leq x\,. \tag{2.1}\]
Observe that for \(Q_{1},Q_{2}\in\mathbf{Q}_{G}\):
\[Q_{1}\sim Q_{2}\longleftrightarrow\left(\exists\mathfrak{p}\in\mathbf{Grz} \right)Q_{1}\cup Q_{2}\subseteq\mathfrak{p}\,. \tag{2.2}\]
Proof.: (\(\longrightarrow\)) This follows from Lemma 2.3, the definition of G-points and (2.1).
(\(\longleftarrow\)) Suppose \(Q_{1}\cup Q_{2}\subseteq\mathfrak{p}\) for some \(\mathfrak{p}\in\mathbf{Grz}\), and let \(Q\) be a G-representative such that \(\mathfrak{p}=\mathscr{F}_{Q}\). Since both \(Q_{1}\) and \(Q_{2}\) are subsets of \(\mathscr{F}_{Q}\), they must be coinitial with \(Q\), and so \(Q_{1}\sim Q_{2}\).
Thus, as we see, by considering points as filters we can recover the equivalence relation between G-representatives. The reverse transition--from equivalence classes to filters--is obvious, since for a given class \([Q]_{\sim}\) it is enough to take \(\mathscr{F}_{Q}\).
Let us conclude this section with an observation that there is a 1-1 correspondence between G-points as equivalence classes and G-points as filters:
**Lemma 2.5**.: _Let \(\mathfrak{B}\in\mathbf{BWCA}\). The function \(f\colon\mathbf{Eq}\to\mathbf{Grz}\) such that \(f([Q]_{\sim}):=\mathscr{F}_{Q}\) is a bijection._
Proof.: If \([Q_{1}]\neq[Q_{2}]\), then \(Q_{2}\not\trianglelefteq Q_{1}\). Therefore, by Lemma 2.2(i) there are \(x\in Q_{1}\) and \(y\in Q_{2}\) such that \(x\perp y\). So \(\mathscr{F}_{Q_{1}}\neq\mathscr{F}_{Q_{2}}\), since otherwise both \(x\) and \(y\) are in the same filter. Surjectivity is obvious, since every G-point is a filter \(\mathscr{F}_{Q}\), for some G-representative \(Q\).
## 3. Whitehead points
In this section, we present a mathematical analysis of (a representative of) a point as formulated by Whitehead (1929), and then used by Biacino and Gerla (1996). Roughly, the idea is that Whitehead points are minimal elements of a poset of abstractive sets of a Boolean weak contact algebra.7
Footnote 7: More about the philosophy of and motivations for Whitehead points can be found in two excellent papers by Gerla (2020) and Varzi (2020).
We begin with the crucial definition:
**Definition 3**.: A set of regions \(A\) of a BWCA is an _abstractive set_ iff it satisfies (r0), (r1) and:
(A) \[(\nexists x\in\mathfrak{B}\setminus\{\mathsf{0}\})(\forall y\in A)\,x\leq y\,.\]
The class of all abstractive sets of a given BWCA is denoted by '\(\mathbf{A}\)'. Since by (r1) and (1.1) every abstractive set is a chain w.r.t. \(\leq\), it must be the case that for every \(x\in A\) there is \(y\in A\) such that \(y<x\). So, by the Axiom of Dependent Choices, every abstractive set is infinite.
The idea behind the definition is that we can abstract geometrical objects--like lines, segments, and points--from other entities. However, unlike representatives of Grzegorczyk's, these entities do not have to represent points but might be planes, straight lines, line segments, triangles, and so on. To use a simple example, we take the algebra \(\mathrm{RO}(\mathds{R}^{2})\) and the family of regular open sets of the form:
\[\{\langle x,y\rangle\,|\,y\in(-\nicefrac{{1}}{{n}},\nicefrac{{1}}{{n}})\} \quad\text{for }n\in\omega\setminus\{0\}\,,\]
which is an abstractive set that represents the straight line \(y=0\) (i.e., some object from beyond the domain \(\mathrm{RO}(\mathds{R}^{2})\)). Of course, we easily see that it is not a G-representative, since regions:
\[\{\langle x,y\rangle\,|\,x\geqslant 1\wedge y\in(-1,1)\}\quad\text{and} \quad\{\langle x,y\rangle\,|\,x\leqslant-1\wedge y\in(-1,1)\}\]
overlap all regions from the abstractive set, but are not in contact (in the sense of \(\mathsf{C}_{\mathrm{T}}\) for the algebra \(\mathrm{RO}(\mathds{R}^{2})\)). So the set violates (r3).
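To make the failure of (r3) concrete, the sketch below (ours) represents the relevant regions as open axis-aligned boxes (for simplicity we take the open variants of the two side regions, with \(x>1\) and \(x<-1\)) and checks that both of them overlap every strip while their closures are disjoint, so they are not in contact.

```python
from math import inf

def open_overlap(a, b):
    """Do the open intervals (a[0], a[1]) and (b[0], b[1]) intersect?"""
    return max(a[0], b[0]) < min(a[1], b[1])

def closed_overlap(a, b):
    """Do the closed intervals [a[0], a[1]] and [b[0], b[1]] intersect?"""
    return max(a[0], b[0]) <= min(a[1], b[1])

def overlaps(r, s):
    """Overlap of two open boxes (products of open intervals)."""
    return open_overlap(r[0], s[0]) and open_overlap(r[1], s[1])

def contact(r, s):
    """C_T for two open boxes: their closures (products of closed intervals) intersect."""
    return closed_overlap(r[0], s[0]) and closed_overlap(r[1], s[1])

def strip(n):
    """The strip R x (-1/n, 1/n) from the abstractive set representing y = 0."""
    return ((-inf, inf), (-1.0 / n, 1.0 / n))

right = ((1.0, inf), (-1.0, 1.0))    # the region to the right: x > 1, |y| < 1
left = ((-inf, -1.0), (-1.0, 1.0))   # the region to the left: x < -1, |y| < 1

assert all(overlaps(right, strip(n)) and overlaps(left, strip(n)) for n in range(1, 101))
assert not contact(right, left)
print("both side regions overlap every strip, yet they are not in contact: (r3) fails")
```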
We use the same terminology and symbols for the _covering_ relation between abstractive sets that has been introduced for Grzegorczyk representatives. In particular, recall that \(A\) and \(B\) are coinitial in the case where \(A\) is covered by \(B\) and \(B\) is covered by \(A\).
Unlike in the case of covering relation on G-representatives, covering on abstractive sets does not have to be an equivalence relation since it is not--in general--symmetric. However, it is reflexive and transitive, so the coinitiality on abstractive sets is an equivalence relation. Following Whitehead, we will call every element of \(\mathbf{A}/_{\sim}\) a _geometrical element_. Given \(A\in\mathbf{A}\) its equivalence class w.r.t. \(\sim\) will be denoted by '\([A]\)'. If \(A_{1},A_{2}\in\mathbf{A}\), define a binary relation on \(\mathbf{A}/_{\sim}\):
\[[A_{1}]\preceq[A_{2}]\;:\longleftrightarrow\;A_{1}\trianglelefteq A_{2}\,.\]
The relation \(\preceq\) is clearly a partial order. Moreover, \(A_{1}\sim A_{2}\) and \(B_{1}\sim B_{2}\) together entail that: \(A_{1}\trianglelefteq B_{1}\) iff \(A_{2}\trianglelefteq B_{2}\), thus \(\preceq\) is well-defined.
**Definition 4**.: For \(A\in\mathbf{A}\), \([A]\) is a _Whitehead point_ (_W-point_) iff \([A]\) is minimal in \(\langle\mathbf{A}/_{\sim},\preceq\rangle\). The set of all Whitehead points will be denoted by '\(\mathbf{W}\)'. \(A\in\mathbf{A}\) is a _W-representative_ of a point iff \([A]\in\mathbf{W}\). Let \(\mathbf{Q}_{W}\) be the set of all W-representatives of a given BCA. \(\dashv\)
Alternatively, for an abstractive set \(A\) we have:
\[A\in\mathbf{Q}_{W}\longleftrightarrow\forall_{B\in\mathbf{A}}\left(B \trianglelefteq A\longrightarrow A\trianglelefteq B\right). \tag{3.1}\]
Recall that a BWCA is _atomic_ iff its underlying BA is atomic iff every region contains an atom (i.e., an element that is minimal w.r.t. the standard Boolean order). As an immediate consequence of the definition of an abstractive set we get the following:
**Corollary 3.1**.: _If \(\mathfrak{B}\in\mathbf{BWCA}\) is atomic, then \(\mathfrak{B}\) does not have any abstractive sets, and a fortiori it does not have W-representatives._
### W-representatives in regular open algebras
If \(\operatorname{RO}(X)\) is a regular open algebra and \(A\) is its W-representative, then of course \(\bigwedge A=\mathfrak{0}\). However, this does not exclude the possibility in which \(\bigcap A\neq\emptyset\), as \(\bigwedge A=\operatorname{Int}\bigcap A\). Thus we may ask about set-theoretical intersections of abstractive sets.
**Lemma 3.2**.: _If \(X\) is a topological space, and \(\langle\operatorname{RO}(X),\mathsf{C}_{\mathrm{T}}\rangle\) is its topological contact algebra, then for every abstractive set \(A\subseteq\operatorname{RO}(X)\), \(\bigcap A\) is closed. Therefore if \(\bigcap A\neq\emptyset\) and \(\bigcap A\in\operatorname{RO}(X)\), then the space \(X\) is disconnected._
Proof.: Fix an abstractive set \(A\) whose elements belong to \(\langle\operatorname{RO}(X),\mathsf{C}_{\mathrm{T}}\rangle\). According to the characterization of \(\ll_{\mathrm{T}}\), \(A\) and \(\operatorname{Cl}[A]=\{\operatorname{Cl}a\mid a\in A\}\) are coinitial. Indeed, if \(a\in A\), then by (r2) there is \(b\in A\) such that \(b\ll_{\mathrm{T}}a\), i.e., \(\operatorname{Cl}b\subseteq a\). So \(A\) covers \(\operatorname{Cl}[A]\). The other direction is obvious since \(a\subseteq\operatorname{Cl}a\). Thus, \(\bigcap A=\bigcap\operatorname{Cl}[A]\), and in consequence \(\bigcap A\) is closed in \(X\).
Since the infima in \(\operatorname{RO}(X)\) are given by the interiors of the intersections, if \(X\) is connected, \(\bigcap A\) is never an element of the algebra if non-empty.
This lemma gives rise to a philosophical interpretation of abstractive sets. If the underlying regular algebra is composed of sets that are models of objects from the physical space (spatial bodies), it usually is a sub-algebra of \(\operatorname{RO}(\mathds{R}^{n})\), where \(\mathds{R}^{n}\) is given the standard topology. Various choices are possible8, yet irrespective of these, for no abstractive set \(A\subseteq\operatorname{RO}(\mathds{R}^{n})\) is a non-empty intersection \(\bigcap A\) an element of the algebra. In this sense abstractive sets represent objects from beyond the universe of models of spatial bodies, i.e., they serve as abstraction processes introducing objects that may be called _geometrical_, _ideal_ or, precisely, _abstract_. These objects are, of course, elements of the power set algebra of \(\mathds{R}^{n}\), but the idea is that there are <<too many>> objects in \(\mathcal{P}(\mathds{R}^{n})\) from the perspective of the physical space, yet some of the elements of \(\mathcal{P}(\mathds{R}^{n})\) can be treated as approximations made via elements of subalgebras of the regular open algebra of \(\mathds{R}^{n}\).
The definition of a W-representative from the point of view of Euclidean spaces seems to be neat and grasp a certain way in which we may abstract points as higher-order objects. However, in the sequel, we will point to <<strange>> examples. But first, we prove that there are contact algebras that have W-points. Their existence stems from the following:
**Theorem 3.3**.: _Let \(\langle X,\mathscr{O}\rangle\) be a topological space. If \(A\) is an abstractive set in \(\langle\operatorname{RO}(X),\mathsf{C}_{\mathrm{T}}\rangle\) that is at the same time a local basis at a point \(p\in X\), then \(A\) is a W-representative._
Proof.: Suppose \(A\) is an abstractive set in \(\langle\operatorname{RO}(X),\mathsf{C}_{\mathrm{T}}\rangle\) and a local basis at \(p\). Let \(B\subseteq\operatorname{RO}(X)\) be an abstractive set such that \(B\trianglelefteq A\). We show that \(p\in\bigcap B\). Suppose otherwise, i.e., let \(b_{0}\in B\) be such that \(p\notin b_{0}\). Since \(B\) is an abstractive set it follows that there exists a \(b_{1}\in B\) such that \(b_{1}\ll_{\mathrm{T}}b_{0}\) i.e., \(\operatorname{Cl}b_{1}\subseteq b_{0}\) and thus \(p\notin\operatorname{Cl}b_{1}\). Therefore, we have \(p\in X\setminus\operatorname{Cl}b_{1}\), where \(X\setminus\operatorname{Cl}b_{1}\) is an open set in \(\mathscr{O}\). It follows that there exists an \(a\in A\) such that \(a\leq X\setminus\operatorname{Cl}b_{1}\) and \(p\in a\). Hence, \(a\cdot b_{1}=\mathsf{0}\). In consequence, there exists no \(b\in B\) such that \(b\leq a\), which contradicts our initial assumption that \(A\) covers \(B\).
Since \(p\in\bigcap B\) and \(A\) is a local basis at \(p\), we know that for every \(b\in B\) there exists an \(a\in A\) such that \(a\subseteq b\), so \(A\) must be covered by \(B\). Thus, \(A\) is a W-representative.
In consequence we have:
**Corollary 3.4**.: _The real line with the standard topology has a W-representative at every point of the space._
**Definition 5** (Davis, 1978).: A _lob-space_ is a topological space that at each of its points has a local basis linearly ordered by the subset relation.
**Definition 6** (Gruszczynski, 2016).: A topological space \(X\) is _concentric_ iff it is \(T_{1}\) and at every \(p\in X\) there is a local basis \(\mathscr{B}^{p}\) such that:
(R1) \[(\forall U,V\in\mathscr{B}^{p})\left(U=V\vee\operatorname{Cl}U\subseteq V \vee\operatorname{Cl}V\subseteq U\right).\qed\]
Thus, concentric spaces are those \(T_{1}\)-spaces all of whose points have local bases that satisfy the topological version of the (r1) condition for G-representatives. The theorem below demonstrates that these are a subclass of Davis's lob-spaces.
**Theorem 3.5** (Gruszczynski and Pietruszczak, 2021).: _A topological space \(X\) is concentric iff it is a regular lob-space._
**Theorem 3.6**.: _If \(X\) is a concentric space whose regular open algebra is atomless, then at every point there is a local basis that is a W-representative._
Proof.: Since \(X\) is regular, it is also semi-regular, so \(\operatorname{RO}(X)\) is its basis, which is atomless by assumption. Therefore, the local basis at any point \(p\) that satisfies (R1) must be an abstractive set. So by Theorem 3.3 the basis must be a W-representative.
Moreover, we have the following result regarding W-representatives, which shows that in the case of topological interpretation, they represent <<small>> chunks of the underlying space.
**Lemma 3.7**.: _If \(X\) is a regular topological space, and \(\langle\operatorname{RO}(X),\mathsf{C}_{\mathrm{T}}\rangle\) is its topological contact algebra, then for every W-representative \(A\subseteq\operatorname{RO}(X)\), \(\bigcap A\) is a nowhere dense subset of \(X\).9_
Footnote 9: The observation and the proof that \(\bigcap A\) is nowhere dense is due to Hart (2023).
Proof.: Assume that \(x:=\bigcap A\) has a non-empty interior. Therefore, there is a non-empty regular open set \(y\) such that \(\operatorname{Cl}y\subseteq\operatorname{Int}x\). This in particular means that \(y\ll_{\mathrm{T}}a\), for all \(a\in A\). Since the space is regular and the algebra atomless, we can construct a sequence such that \(y_{0}:=y\) and \(y_{n+1}\ll_{\mathrm{T}}y_{n}\). In consequence for \(Y:=\{y_{n}\mid n\in\omega\}\) we have that \(A\) covers \(A\cup Y\) but not vice versa, since no element of \(Y\) contains an element of \(A\). So \(A\) is not a W-representative.
What is common to all W-representatives whose existence follows from the above result is that, although their infima in the respective regular open algebras are \(\mathsf{0}\), they have a non-empty set-theoretical intersection, which is precisely the point of the space that they represent as a local basis. This raises the question of whether there is a regular open algebra that has a W-representative whose set-theoretical intersection is empty and that may be interpreted as a <<new>> point, i.e., something similar to a free ultrafilter being treated as a point of a topological space. The answer is positive, and the example is due to Klaas Pieter Hart.
_Example 2_(Hart, 2023).: Consider the ordinal space \(X:=[0,\omega_{1})\), where \(\omega_{1}\) is the first uncountable ordinal. Recall that if \(x\) and \(y\) are closed and unbounded subsets of \(X\), then \(x\cap y\neq\emptyset\). Due to this, for any open subsets \(x\) and \(y\) of \(X\), if \(\operatorname{Cl}x\subseteq y\), then either \(\operatorname{Cl}x\) is compact or \(y\) contains an interval \([\alpha+1,\omega_{1})\), for some \(\alpha<\omega_{1}\). For if \(\operatorname{Cl}x\) is not compact, then it must be unbounded, and since \(\operatorname{Cl}x\cap\complement y=\emptyset\), \(\complement y\) is bounded, i.e., there is \(\alpha<\omega_{1}\) such that \(\complement y\subseteq[0,\alpha]\). Therefore \([\alpha+1,\omega_{1})\subseteq y\). The set \(A:=\{[\alpha+1,\omega_{1})\mid\alpha<\omega_{1}\}\) consists of clopen--and therefore regular open--subsets of \(X\), and is an abstractive set. If \(B\) is also an abstractive set such that \(A\) covers \(B\), then every element \(b\in B\) must be unbounded. Consequently, the closure of every element of \(B\) is unbounded, and since for every \(b\in B\) there is \(b_{0}\) in \(B\) such that \(\operatorname{Cl}b_{0}\subseteq b\), \(b\) must contain an interval \([\alpha+1,\omega_{1})\). Thus \(B\) covers \(A\), and so \(A\) is a W-representative in \(\operatorname{RO}(X)\). Of course, \(\bigcap A=\emptyset\), and the W-point \([A]\) represents the ordinal \(\omega_{1}\) that is absent from \(X\).
This example is quite important from the point of view of the hidden assumptions behind Whitehead points. Bostock (2009, p. 30) writes that:
[...] Whitehead's construction [...] does actually have the idea of boundedness built into it: only a bounded nest10 can satisfy Whitehead's definition of a point-nest. (But I do not suppose that Whitehead recognised this.)
Footnote 10: A nest is the counterpart of an abstractive set, it is bounded if it contains only bounded regions (actually it is enough that it contains one such region to be considered bounded).
What the example shows, then, is that boundedness is not built into the idea of Whitehead points. It is built in only as long as <<natural>> spaces--like Euclidean spaces--are considered, which follows from further results and properties of Grzegorczyk points proven in (Gruszczynski, 2016; Gruszczynski and Pietruszczak, 2018, 2019). In an abstract setting, relevant for this paper, the notion of _boundedness_ does not have to be considered, as Hart's example shows. Also, this example shows that the _connectedness_ of regions that constitute W-representatives is not built into the general idea of points in the sense of Whitehead, as every region in the W-representative from the example is topologically disconnected.11
Let us make another philosophical remark at this point. It may also be the case that we do not know whether a particular subset of a regular open algebra is a W-representative due to our current state of knowledge. Consider the following example.
_Example 3_.: Let \(\mathrm{RO}(\mathds{R})\) be the complete algebra of regular open subsets of \(\mathds{R}\) and \(\left\langle\mathrm{RO}(\mathds{R}),\mathsf{C}_{\mathrm{T}}\right\rangle\) its topological contact algebra. Then, consider the following set:
\[A:=\left\{(-\nicefrac{{1}}{{p}},\nicefrac{{1}}{{p}})\mid p:=\ \max\{s,t\}\text{ where }s,t \text{ are twin primes}\right\}\]
The twin prime conjecture, i.e., the claim that there exist infinitely many twin primes, is still an unsolved problem within number theory. So we do not know whether \(A\) is finite or infinite. This implies that we also do not know whether \(A\) is an abstractive set and thus a W-representative. \(\dashv\)
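The first few members of \(A\) can of course be listed by brute force; the snippet below (ours) does so, while the question whether the list ever stops is exactly the twin prime conjecture.

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def twin_prime_maxima(bound):
    """Values p = max{s, t} for twin prime pairs (s, t) = (p - 2, p) below the bound."""
    return [p for p in range(5, bound) if is_prime(p) and is_prime(p - 2)]

# The first members of A, i.e. the intervals (-1/p, 1/p) for the listed p.
for p in twin_prime_maxima(60):
    print(f"(-1/{p}, 1/{p})")
```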
Observe that there are BCAs without any W-representatives and, therefore, without any Whitehead points.
**Definition 7**.: An ordinal number \(\alpha\) is _even_ iff there is an ordinal number \(\beta\) such that \(\alpha=2\cdot\beta\) (where \(\cdot\) is the standard ordinal multiplication). Otherwise, it is _odd_. Let \(\mathds{E}_{\kappa}\) and \(\mathds{O}_{\kappa}\) be, respectively, the set of all even and odd ordinals smaller than \(\kappa\). \(\dashv\)
**Lemma 3.8**.: _No complete \(\mathfrak{B}\in\mathsf{BCA}\) in which \(\mathsf{C}=\circ\) has W-representatives.12_
Footnote 12: If, additionally, \(\mathfrak{B}\) is atomless, then it does not have any G-representative either, which is entailed by Theorem 4.4 proven further.
Proof.: If \(\mathfrak{B}\) is finite, then it cannot have any abstractive sets, and a fortiori it cannot have W-representatives.
So suppose \(\mathfrak{B}\) is infinite, and let \(\left\langle x_{\alpha}\mid\alpha<\kappa\right\rangle\) be an abstractive set, for some limit cardinal \(\kappa\). Since we consider the case in which contact is overlap, we have that:
\[x_{0}>x_{1}>\ldots>x_{n}>x_{n+1}>\ldots>x_{\beta}>x_{\beta+1}>\ldots\]
is an abstractive set. For any \(\alpha<\kappa\) define: \(y_{\alpha}:=x_{\alpha}-x_{\alpha+1}\) and consider the antichain \(\left\langle y_{\alpha}\mid\alpha<\kappa\right\rangle\). Let \(\mathds{O}_{\kappa}\) and \(\mathds{E}_{\kappa}\) be, respectively, all odd and all even ordinals smaller than \(\kappa\). Divide the sequence into two sub-sequences:
\[\left\langle u_{\alpha}\mid\alpha\in\mathds{O}_{\kappa}\right\rangle\quad \text{and}\quad\left\langle v_{\alpha}\mid\alpha\in\mathds{E}_{\kappa}\right\rangle,\]
take the following suprema:
\[a_{\beta}:=\bigvee\left\{u_{\alpha}\mid\alpha\in\mathds{O}_{\kappa}\setminus( \mathds{O}_{\kappa}\cap(2\cdot\beta+1))\right\},\]
and gather them into \(A:=\left\{a_{\beta}\mid\beta<\kappa\right\}\). \(A\) is an abstractive set covered by \(\left\langle x_{\alpha}\mid\alpha<\kappa\right\rangle\), but does not cover this sequence. Therefore the sequence is not a W-representative. In consequence, no abstractive set is a Whitehead representative.
Let us round off this section with the following remarks. Lemma 3.8 is a mathematical embodiment of what Whitehead (1920) discovered himself: the purely mereological notion of _parthood_ is too weak to represent his concept of _point_ as a collection of regions. Some arguments for this can be found in Whitehead's book, (Bostock, 2009) and (Varzi, 2020). However, their common weaknesses are that (a) they refer to particular kinds of regions that invoke the notions of _dimension_ and of _shape_ (either explicitly or implicitly) and (b) they do not single out precise assumptions. These are arguments, not proofs in a strict mathematical sense. We present a fully-fledged proof, which is general in the sense that we consider regions as abstract elements of any Boolean algebra. What remains to be eliminated is the assumption of completeness. Thus, we put forward the following open problem:
_Problem 1_.: Is there an incomplete BCA in which \(\mathsf{C}=\circ\) and which has a W-representative?
Observe that Lemma 3.8 does not exclude such algebras, as the property of being a W-representative does not have to be preserved for completions of BAs. That is, if \(\mathfrak{B}\) is an incomplete BA that has a W-representative \(A\), and \(\overline{\mathfrak{B}}\) is the completion of \(\mathfrak{B}\), then the structure of \(A\) is preserved by the canonical embedding \(e\colon\mathfrak{B}\to\overline{\mathfrak{B}}\). In consequence, we can repeat the reasoning from Lemma 3.8 and show that \(e[A]\) is not a W-representative.
## 4. G-representatives are W-representatives (under additional assumptions)
In this section, we are occupied with two problems: (a) what are the minimal conditions for BWCAs that guarantee that every G-representative is a W-representative, and (b) what is the content of the second order monadic statement about the dependency between the two sets of representatives. Theorem 4.4 below is a stronger version of (Biacino and Gerla, 1996, Theorem 5.1). Biacino and Gerla's proof to establish that every G-representative is a W-representative uses the second-order constraints that postulate the existence of Grzegorczyk points. These are their axioms \(\mathrm{G}_{4}\) and \(\mathrm{G}_{5}\), closely related to the original Grzegorczyk axiom from his paper.13 We prove that the original result of the Italian mathematicians can be substantially improved, as we only assume the axioms for the weak contact relation plus:
Footnote 13: See (Gruszczynski and Pietruszczak, 2018a) for a comparison of the original axiomatization of Grzegorczyk’s system with the system of Biacino and Gerla’s.
(C5) \[(\forall x\neq 1)(\exists y\neq 0)\,x\not\mathrel{\mathsf{C}}y\,,\]
and the atomlessness of the underlying Boolean algebra. (C5) is known as the _disconnection_ axiom. In the class \(\mathbf{BWCA}\) it is equivalent to the _extensionality_ axiom:
\[(\forall z\in B)\,(z\not\mathrel{\mathsf{C}}x\longleftrightarrow z\not\mathrel{\mathsf{C}}y)\longrightarrow x=y\,,\]
Furthermore, in the class \(\mathbf{BCA}\) both these axioms are equivalent to:
\[(\forall x\neq 0)(\exists y\neq 0)\,y\ll x\,,\]
the so-called _non-tangential part_ axiom.
Moreover, we show that in the class \(\mathbf{BWCA}+\eqref{C5}\) the second-order monadic statement 'every G-representative is a W-representative' is equivalent to the first order condition 'there are no atoms'. Additionally, in Theorem 4.5 we demonstrate that both the implications fail if we omit the axiom (C5). Thus, via the two theorems, we provide answers to both (a) and (b) above.
The first requirement that G-representatives must meet to be W-representatives is that they are abstractive sets. In general, this does not have to be true: there are contact algebras with G-representatives that are not abstractive sets, since the former do not have to satisfy (A). Biacino and Gerla do not have to take this into account, since their definition of a G-representative contains the requirement that it is a set of regions without a minimal element. This assumption, however, is absent from the definition introduced in the original paper by Grzegorczyk.
In connection with this we have:
**Proposition 4.1** (Gruszczynski and Pietruszczak, 2016).: _If \(\mathfrak{B}\in\mathbf{BWCA}+\eqref{C5}\) and \(\mathfrak{B}\) has an atom \(a\), then \(\{a\}\in\mathbf{Q}_{G}\)._
Proof.: Fix an atom \(a\). By (C5) there exists a non-zero \(b\in B\) such that \(b\ll a\). So \(b\leq a\) by (1.1), and thus \(b=a\). From this, we can see that the conditions (r0)-(r2) are satisfied. For (r3), if \(x\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\circ$}}}a\) and \(y\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\circ$}}}a\), then \(a\leq x\) and \(a\leq y\), so \(x\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\circ$}}}y\).
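The proposition can be checked mechanically on a small finite model. In the sketch below (ours), the power set algebra of \(\{0,1,2\}\) with contact taken to be overlap satisfies (C5), and the singleton of an atom indeed satisfies (r0)-(r3).

```python
from itertools import combinations

X = frozenset({0, 1, 2})
B = [frozenset(c) for r in range(4) for c in combinations(sorted(X), r)]

def C(x, y):                  # contact = overlap
    return bool(x & y)

def way_below(x, y):          # x << y  iff  not (x C -y)
    return not C(x, X - y)

# (C5): every region other than the unity is separated from some non-zero region.
assert all(any(not C(x, y) for y in B if y) for x in B if x != X)

atom = frozenset({0})
Q = [atom]

assert frozenset() not in Q                                                     # (r0)
assert all(u == v or way_below(u, v) or way_below(v, u) for u in Q for v in Q)  # (r1)
assert all(any(way_below(v, u) for v in Q) for u in Q)                          # (r2)
assert all(C(x, y) for x in B for y in B
           if all(u & x and u & y for u in Q))                                  # (r3)
print("{atom} is a G-representative in this finite BCA")
```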
So, if a BWCA has atoms and satisfies (C5), there are G-representatives which are not abstractive sets. Thus, in general, it is not the case that \(\mathbf{Q}_{G}\subseteq\mathbf{A}\), and the natural thought is to eliminate the existence of atoms, especially since G-points generated by atoms are--in a way--not very interesting, just as principal ultrafilters are not.
Before we do this, we prove a proposition that will help us establish the main results of this section.
**Proposition 4.2**.: _Let \(\mathfrak{B}\in\mathbf{BWCA}\)._
1. _If_ \(A\in\mathbf{A}\)_, then_ \(A\) _satisfies the strong version of (_r2_):_ \[(\mathrm{r}2^{s})\qquad\qquad\qquad\qquad(\forall x\in A)(\exists y\in A)\,(y \ll x\wedge y\neq x)\,.\]
2. _If_ \(X\) _is a set of regions such that_ \(X\trianglelefteq Q\) _for some G-representative_ \(Q\)_, then_ \(X\) _satisfies (_r3_)._
3. _If_ \(A\in\mathbf{A}\)_,_ \(Q\in\mathbf{Q}_{G}\) _and_ \(A\trianglelefteq Q\)_, then_ \(A\in\mathbf{Q}_{G}\) _and_ \([A]_{\sim}=[Q]_{\sim}\)_._
Proof.: (i) For every \(x\in A\) there is a \(y\in A\) such that \(y<x\). But, by (r1), either \(x\ll y\) or \(y\ll x\). Since the former cannot hold in light of (1.1), we have the latter.
(ii) Assume that for all \(x\in X\), \(x\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\circ$}}}u\) and \(x\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\circ$}}}v\). If \(q\in Q\), then by covering of \(X\) by \(Q\) and by (C3) we have that \(q\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\circ$}}}u\) and \(q\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\circ$}}}v\). Therefore \(u\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\circ$}}}v\), by (r3) for \(Q\).
(iii) Follows from the previous items (i), (ii) and Lemma 2.3.
**Proposition 4.3**.: _If \(\mathfrak{B}\in\mathbf{BWCA}+(\mathrm{C}5)+(\sharp\mathrm{At})\), then every non-zero region of \(B\) has two proper parts that are separated from each other._
Proof.: Let \(x\) be a non-zero element of \(\mathfrak{B}\). Since the algebra has no atoms, there is a non-zero \(y\) that is a proper part of \(x\). So, by the Boolean axioms, there is another non-zero \(z<x\) that is incompatible with \(y\). But by (C5), \(z\) must have a non-zero non-tangential part \(z_{0}\). So \(z_{0}\not\mathrel{\mathsf{C}}y\), and both regions are parts of \(x\).
**Theorem 4.4**.: _If \(\mathfrak{B}\in\mathbf{BWCA}+(\mathrm{C}5)\), then \(\mathfrak{B}\) is atomless iff in \(\mathfrak{B}\) every G-representative is a W-representative._
Proof.: Suppose \(\mathfrak{B}\in\mathbf{BWCA}+(\mathrm{C}5)+(\sharp\mathrm{At})\). Observe that if \(Q\) is a G-representative of an algebra \(\mathfrak{B}\) from the class, then \(Q\) is an abstractive set: if some non-zero region \(x\) were below every element of \(Q\), then by Proposition 4.3 it would have two separated proper parts, each of which overlaps every element of \(Q\), contradicting (r3). Further, \([Q]\) is a geometrical element. Suppose \(A\in\mathbf{A}\) is such that \([A]_{\sim}\preceq[Q]\), i.e., \(A\trianglelefteq Q\). Therefore by Proposition 4.2 we have that \(A\in[Q]\), which means that \([A]=[Q]\), as required.
On the other hand, if \(\mathfrak{B}\in\mathbf{BWCA}+(\mathrm{C}5)\) and \(\mathfrak{B}\) has an atom \(a\), then by Proposition 4.1, \(\{a\}\in\mathbf{Q}_{G}\). Thus \(\mathbf{Q}_{G}\nsubseteq\mathbf{Q}_{W}\).
The axiom (C5) cannot be dropped, even in the class of Boolean contact algebras, that is:
**Theorem 4.5**.: _There is a \(\mathfrak{B}\in\mathbf{BCA}+\neg(\mathrm{C}5)\) in which there are no atoms, and in which \(\mathbf{Q}_{G}\nsubseteq\mathbf{Q}_{W}\); and there is also an algebra from the same class that has atoms and in which \(\mathbf{Q}_{G}\subseteq\mathbf{Q}_{W}\)._
To prove the first part of the theorem we provide a general method for constructing contact algebras. Given a \(\mathfrak{B}\in\mathbf{BA}\), let \(\boldsymbol{d}\) be its non-zero element that we call _distinguished_. By means of it, we define the following relation:
\[x\;\mathsf{C}_{\boldsymbol{d}}\;y\;:\longleftrightarrow\;x\circ y\vee(x\circ\boldsymbol{d}\wedge y\circ\boldsymbol{d})\,.\]
It is routine to verify that \(\mathsf{C}_{\boldsymbol{d}}\) is a contact relation, i.e., satisfies axioms (C0)-(C4). Observe that the largest contact relation on \(\mathfrak{B}\), i.e., \(\mathfrak{B}^{+}\times\mathfrak{B}^{+}\), is a special case of \(\mathsf{C}_{\boldsymbol{d}}\) in which \(\boldsymbol{d}=1\), or more generally, where \(\boldsymbol{d}\) is a _dense_ region in \(\mathfrak{B}\) (i.e., such that every non-zero \(x\) overlaps it).
We have that:
\[x\ll_{\boldsymbol{d}}y \longleftrightarrow x\leq y\wedge(x\leq-\boldsymbol{d}\lor \boldsymbol{d}\leq y)\] \[\longleftrightarrow x\leq y-\boldsymbol{d}\lor x+\boldsymbol{d} \leq y\,.\]
from which it follows immediately that:
\[\boldsymbol{d}\ll_{\boldsymbol{d}}\boldsymbol{d}\,. \tag{4.1}\]
We also have that:
**Corollary 4.6**.: _If \(x<\boldsymbol{d}\) and \(x\neq\mathfrak{0}\), then \(x\) does not have any non-tangential part. In consequence any Boolean contact algebra \(\langle\mathfrak{B},\mathsf{C}_{\boldsymbol{d}}\rangle\) fails to satisfy (C5)._
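A finite illustration may help here. In the Python sketch below (ours), we build \(\mathsf{C}_{\boldsymbol{d}}\) on the power set algebra of \(\{0,1,2,3\}\) with \(\boldsymbol{d}=\{0,1\}\), verify (C0)-(C4) by brute force, and confirm both (4.1) and the content of Corollary 4.6 for a region strictly below \(\boldsymbol{d}\).

```python
from itertools import combinations

X = frozenset({0, 1, 2, 3})
B = [frozenset(c) for r in range(5) for c in combinations(sorted(X), r)]
d = frozenset({0, 1})         # the distinguished region

def o(x, y):                  # overlap
    return bool(x & y)

def Cd(x, y):                 # x C_d y  iff  x o y, or both x and y overlap d
    return o(x, y) or (o(x, d) and o(y, d))

def wb(x, y):                 # x <<_d y  iff  not (x C_d -y)
    return not Cd(x, X - y)

# (C0)-(C4) for C_d.
assert all(not Cd(frozenset(), x) for x in B)
assert all(Cd(x, y) for x in B for y in B if x and x <= y)
assert all(Cd(y, x) for x in B for y in B if Cd(x, y))
assert all(Cd(z, y) for x in B for y in B if x <= y for z in B if Cd(z, x))
assert all(Cd(x, y) or Cd(x, z) for x in B for y in B for z in B if Cd(x, y | z))

assert wb(d, d)                                       # (4.1): d <<_d d

x = frozenset({0})                                    # a non-zero region strictly below d
assert not any(wb(y, x) for y in B if y)              # Corollary 4.6: no non-tangential part
print("C_d is a contact relation; d <<_d d; x < d has no non-zero non-tangential part")
```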
**Lemma 4.7**.: \(\{\boldsymbol{d}\}\) _is a G-representative of \(\langle B,\mathsf{C}_{\boldsymbol{d}}\rangle\), and \(\uparrow\{\boldsymbol{d}\}\) is its only G-point._
Proof.: (r0) holds by the definition of \(\boldsymbol{d}\), (r1) and (r2) by (4.1), and (r3) by the definition of \(\mathsf{C}_{\boldsymbol{d}}\). In consequence, \(\uparrow\{\boldsymbol{d}\}\) is a G-point.
Suppose there is a G-point \(\mathfrak{p}\neq\,\uparrow\{\boldsymbol{d}\}\). By (Gruszczynski, 2016, Fact 6.28) there are \(x\in\mathfrak{p}\) and \(y\geq\boldsymbol{d}\) such that \(x\not\mathrel{\mathsf{C}_{\boldsymbol{d}}}y\), i.e., \(y\ll_{\boldsymbol{d}}-x\). Therefore either \(y\leq-\boldsymbol{d}\) or \(\boldsymbol{d}\leq-x\). The first possibility is excluded by the fact that \(\boldsymbol{d}\neq\mathsf{0}\) and \(\boldsymbol{d}\) is below \(y\). Therefore the second one holds, and thus \(x\leq-\boldsymbol{d}\).
Proof of the first part of Theorem 4.5.: Take any atomless Boolean algebra \(\mathfrak{B}\), fix its distinguished element \(\boldsymbol{d}\), and expand it to the Boolean contact algebra \(\langle\mathfrak{B},\mathsf{C}_{\boldsymbol{d}}\rangle\). By Lemma 4.7, the singleton \(\{\boldsymbol{d}\}\) is a G-representative that is finite and therefore cannot be a W-representative. By Corollary 4.6, the algebra has regions without non-tangential parts, so it fails to satisfy (C5).
Proof of the second part of Theorem 4.5.: Consider the following two contact algebras, \(\langle\mathrm{RO}(\mathds{R}),\mathsf{C}_{\mathrm{T}}\rangle\) and the four element Boolean algebra \(\mathfrak{B}_{4}:=\{0,a,b,1\}\) with the full contact relation \(\mathsf{C}_{1}\) (i.e., \(\boldsymbol{d}:=1\)). Consider their product \(\mathfrak{P}:=\mathrm{RO}(\mathds{R})\times\mathfrak{B}_{4}\) as Boolean algebras (i.e., all the operations are defined coordinate-wise) but with the contact defined as:
\[\langle x,u\rangle\;\mathsf{C}\;\langle y,w\rangle\;:\longleftrightarrow\; x\,\mathsf{C}_{\mathrm{T}}\,y\lor u\;\mathsf{C}_{1}\;w\,.\]
It is routine to verify that \(\mathsf{C}\) satisfies (C0)-(C4).14 We have that:
Footnote 14: This is not, then, the product of the two algebras as the _contact_ algebras. The relation \(\langle x,w\rangle\;R\;\langle y,w\rangle\;:\longleftrightarrow\;x\,\mathsf{ C}_{\mathrm{T}}\,y\wedge u\;\mathsf{C}_{1}\;w\) is not contact, as the reader may easily convince themself.
\[\langle x,u\rangle\ll\langle y,w\rangle\longleftrightarrow x\ll_{\mathrm{T}}y\wedge u\ll_{1}w\]
The algebra has two atoms: \(\langle 0,a\rangle\) and \(\langle 0,b\rangle\). \(\mathfrak{P}\) does not satisfy (C5), as none of the two atoms is its own non-tangential part. In consequence, neither the singleton of the former nor the singleton of the latter is a G-representative.
Observe that the set of G-representatives of \(\mathfrak{P}\) contains all sets of the form \(Q\times\{\mathfrak{0}\}\), where \(Q\) is a G-representative of \(\langle\mathrm{RO}(\mathds{R}),\mathsf{C}_{\mathrm{T}}\rangle\). It is quite obvious that (r0)-(r2) are satisfied by \(Q\times\{\mathfrak{0}\}\). As for (r3), if we have pairs \(\langle x,u\rangle\) and \(\langle y,w\rangle\) that overlap every element of \(Q\times\{\mathfrak{0}\}\), then for all \(z\in Q\), \(x\circ z\) and \(y\circ z\), so \(x\,\mathsf{C}_{\mathrm{T}}\,y\), which is enough to conclude that \(\langle x,u\rangle\;\mathsf{C}\;\langle y,w\rangle\).
The only products that are G-representatives in the algebra are sets of the form \(Q\times\{\mathfrak{0}\}\), where \(Q\) is a G-representative in \(\mathrm{RO}(\mathds{R})\). Firstly, neither \(Q\times\{a\}\) nor \(Q\times\{b\}\) can be G-representatives, as no element of any of the two sets has a
non-tangential part. Secondly, any set of the form \(Q\times\{1\}\), where \(Q\) is a G-representative in \(\operatorname{RO}(\mathds{R})\), fails to satisfy (r3). To see this, take any regular open set \(x\) that overlaps every element of \(Q\) and consider the pairs \(\langle x,0\rangle\) and \(\langle 0,1\rangle\). We see that for any \(\langle y,1\rangle\in Q\times\{1\}\), \(\langle x,0\rangle\circ\langle y,1\rangle\) and \(\langle 0,1\rangle\circ\langle y,1\rangle\), yet \(\langle x,0\rangle\not\mathrel{\mathsf{C}}\langle 0,1\rangle\). Thirdly, any product \(Q\times M\), where \(M\) is an at least two-element subset of \(\mathfrak{B}_{4}\), fails to be a chain and so cannot be a G-representative. Fourthly, if we have a set \(M\times\{0\}\) where \(M\) is not a G-representative, i.e., it fails to meet one of the conditions (r0)-(r3), then, since \(\mathsf{0}\ll_{1}\mathsf{0}\), \(M\times\{0\}\) also fails to meet one of the four conditions (now with respect to the product relation \(\ll\)).
Of course, every \(Q\times\{0\}\) is an abstractive set, and in the case where it covers an abstractive set \(A\times\{0\}\), \(Q\) must cover \(A\) in \(\operatorname{RO}(\operatorname{R})\). So by Proposition 4.2 (iii) \(A\) is a W-representative in \(\operatorname{RO}(\operatorname{R})\), and so \(A\times\{0\}\) is a W-representative in \(\mathfrak{P}\).
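Since \(\operatorname{RO}(\mathds{R})\) itself cannot be enumerated, the following sketch (ours) uses a finite stand-in to illustrate the construction: the power set algebra of \(\{0,1\}\) with overlap as contact in place of \(\langle\operatorname{RO}(\mathds{R}),\mathsf{C}_{\mathrm{T}}\rangle\), paired with \(\mathfrak{B}_{4}\) carrying the full contact \(\mathsf{C}_{1}\). The disjunctively defined relation passes (C0)-(C4), while the coordinatewise conjunctive relation from footnote 14 fails (C1).

```python
from itertools import combinations, product

def subsets(xs):
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

A = subsets([0, 1])                 # stand-in for RO(R): P({0,1}) with overlap as contact
B4 = subsets(["a", "b"])            # the four-element algebra B_4 with the full contact C_1

def c_overlap(x, y):
    return bool(x & y)

def c_full(u, w):
    return bool(u) and bool(w)

P = list(product(A, B4))            # the product algebra (operations coordinatewise)
ZERO = (frozenset(), frozenset())

def leq(p, q):
    return p[0] <= q[0] and p[1] <= q[1]

def join(p, q):
    return (p[0] | q[0], p[1] | q[1])

def C(p, q):                        # the disjunctive contact used in the proof
    return c_overlap(p[0], q[0]) or c_full(p[1], q[1])

def R(p, q):                        # the conjunctive relation from footnote 14
    return c_overlap(p[0], q[0]) and c_full(p[1], q[1])

assert all(not C(ZERO, p) for p in P)                                           # (C0)
assert all(C(p, q) for p in P for q in P if p != ZERO and leq(p, q))            # (C1)
assert all(C(q, p) for p in P for q in P if C(p, q))                            # (C2)
assert all(C(r, q) for p in P for q in P if leq(p, q) for r in P if C(r, p))    # (C3)
assert all(C(p, q) or C(p, r)
           for p in P for q in P for r in P if C(p, join(q, r)))                # (C4)

# R fails (C1): a non-zero pair with second coordinate 0 is not even R-related to itself.
p = (frozenset({0}), frozenset())
assert leq(p, p) and p != ZERO and not R(p, p)
print("the disjunctive relation is a contact relation; the conjunctive one fails (C1)")
```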
## 5. W-representatives with countable coinitiality are G-representatives
In this section, we reconstruct Biacino and Gerla's proof according to which every W-representative that can be represented by an \(\omega\)-sequence (in the sense explained below) is also a G-representative. We aim to show that, with respect to the original proof from (Biacino and Gerla, 1996), the so-called _coherence_ axiom must be assumed to ensure that the machinery works properly.
To be more precise, Biacino and Gerla in Theorem 5.3 prove that if we extend the standard axiomatization for contact with the _interpolation_ axiom15:
Footnote 15: They call it the _normality_ axiom.
(IA) \[x\ll y\longrightarrow\left(\exists z\in\mathfrak{B}\right)x\ll z\ll y\]
we can prove that every Whitehead representative that can be represented as an \(\omega\)-sequence is a Grzegorczyk representative. However, to show that a certain sequence of regions is an abstractive set, they make a transition that cannot be justified without an application of the so-called _coherence_ axiom, which we introduce below. Thus, in the premises of Theorem 5.3 we explicitly assume coherence in the form of (C6) below. Coherence is a mereotopological counterpart of topological connectedness.16
Footnote 16: See, e.g., (Bennett and Duntsch, 2007) for details.
**Definition 8**.: For a given chain \(C\) let the _coinitiality_ of \(C\) be the smallest cardinal number \(\kappa\) such that there exists an antitone function \(f\colon\kappa\to C\) with \(f[\kappa]\) coinitial with \(C\). \(\dashv\)
**Definition 9**.: For a given \(\mathfrak{B}\in\mathbf{BWCA}\), let \(\mathbf{Q}_{W}^{\omega}\) be the set of all Whitehead representatives whose coinitiality is \(\omega\). \(\dashv\)
Observe that the set \(A\) from Example 2 is an instance of a W-representative whose coinitiality is \(\omega_{1}\). On the other hand, a local basis of any point \(r\in\operatorname{R}\) (with the standard topology) that satisfies (R1) is a W-representative that is an element of \(\mathbf{Q}_{W}^{\omega}\) in \(\operatorname{RO}(\operatorname{R})\).
**Definition 10**.: A Boolean weak contact algebra is _coherent17_ iff its unity is coherent, iff it satisfies the following _coherence_ axiom:
Footnote 17: The term is taken from (Roeper, 1997).
(C6) \[x\notin\{0,1\}\longrightarrow x\mathrel{\mathsf{C}}-x\,.\]
**Proposition 5.1**.: _In the class \(\mathbf{BWCA}\), (C6) is equivalent to:_
\[x\notin\{0,1\}\wedge x\ll y\longrightarrow x<y\,.\]
Proof.: \((\longrightarrow)\) If \(x\) is neither \(\mathsf{0}\) nor \(\mathsf{1}\), then \(x\mathrel{\mathsf{C}}-x\). Assume that \(x\ll y\), i.e., \(x\) is not in contact with \(-y\). Since \(x\ll y\) entails \(x\leq y\), it remains to exclude the case \(x=y\). But if \(x=y\), then \(x\) is not in contact with \(-x\), which contradicts \(x\mathrel{\mathsf{C}}-x\). Therefore \(x<y\).

\((\longleftarrow)\) Suppose \(x\notin\{\mathsf{0},\mathsf{1}\}\) and, towards a contradiction, that \(x\) is not in contact with \(-x\), i.e., \(x\ll x\). Then, by the assumed implication, \(x<x\), which is impossible. Hence \(x\mathrel{\mathsf{C}}-x\).
**Theorem 5.4**.: _If \(\mathfrak{B}\in\mathbf{BCA}+(\mathrm{IA})+(\mathrm{C5})+(\mathrm{C6})\), then \(\mathbf{Q}_{W}^{\omega}\subseteq\mathbf{Q}_{G}\subseteq\mathbf{Q}_{W}\). If additionally \(\mathfrak{B}\) satisfies the countable chain condition, then \(\mathbf{Q}_{W}=\mathbf{Q}_{G}\)._
Recall that a topological space \(X\) is _semi-regular_ iff it has a basis that consists of regular open subsets of \(X\). It is _weakly regular_ iff for every non-empty open set \(M\) there exists a non-empty open set \(K\) such that \(\operatorname{Cl}K\subseteq M\). \(X\) is _\(\kappa\)-normal18_ (or _weakly normal_) iff any pair of disjoint regular closed sets can be separated by disjoint open sets.
Footnote 18: These spaces were introduced and studied by Shchepin (1972).
By (Duntsch and Winter, 2005, Proposition 3.7) we have that for a space \(X\) and a dense subalgebra \(\mathfrak{B}\) of \(\langle\operatorname{RO}(X),\mathsf{C}_{\mathrm{T}}\rangle\) the following correspondences hold:
1. \(\mathsf{C}_{\mathrm{T}}\) satisfies (C5) iff \(X\) is weakly regular,
2. \(\mathsf{C}_{\mathrm{T}}\) satisfies (C6) iff \(X\) is connected,
3. \(\mathsf{C}_{\mathrm{T}}\) satisfies (IA) iff \(X\) is \(\kappa\)-normal.
Let \(\mathbf{TBCA}\) be the class of all _topological_ contact algebras, that is, those of the form \(\langle\mathfrak{B},\mathsf{C}_{\mathrm{T}}\rangle\), where \(\mathfrak{B}\) is a subalgebra of a regular open algebra \(\operatorname{RO}(X)\) of a topological space \(X\).

**Lemma 5.5** (Bennett and Duntsch, 2007, Lemma 3.56).: \(\mathfrak{B}\in\mathbf{TBCA}+(\mathrm{IA})+(\mathrm{C5})+(\mathrm{C6})\) _iff \(\mathfrak{B}\) is a dense subalgebra of \(\langle\operatorname{RO}(X),\mathsf{C}_{\mathrm{T}}\rangle\), where \(X\) is a \(\kappa\)-normal, connected \(T_{1}\)-space._
In consequence, by this and by Theorem 5.4 we obtain:
**Corollary 5.6**.: _If \(X\) is a \(\kappa\)-normal, connected \(T_{1}\)-space, and \(\mathfrak{B}\) is a dense subalgebra of \(\langle\operatorname{RO}(X),\mathsf{C}_{\mathrm{T}}\rangle\), then in \(\mathfrak{B}\): \(\mathbf{Q}_{W}^{\omega}\subseteq\mathbf{Q}_{G}\subseteq\mathbf{Q}_{W}\). If additionally \(X\) as a topological space satisfies the countable chain condition, then \(\mathbf{Q}_{W}=\mathbf{Q}_{G}\)._
Theorem 4.4 shows that the second-order statement '\(\mathbf{Q}_{G}\subseteq\mathbf{Q}_{W}\)' corresponds to a first-order property of atomlessness of Boolean algebras. It is then natural to ask if the statements '\(\mathbf{Q}_{W}^{\omega}\subseteq\mathbf{Q}_{G}\)' and '\(\mathbf{Q}_{W}\subseteq\mathbf{Q}_{G}\)' correspond to any familiar, not necessarily first-order, properties of BCAs, or topological BCAs. Let us focus on this, and let us prove some negative results.
Observe that the coherence axiom cannot be deduced by means of '\(\mathbf{Q}_{W}\subseteq\mathbf{Q}_{G}\)' (more so, by means of '\(\mathbf{Q}_{W}^{\omega}\subseteq\mathbf{Q}_{G}\)'), in the following sense:
**Proposition 5.7**.: \((\mathrm{C6})\) _is not true in \(\mathbf{BCA}+(\mathrm{IA})+(\mathrm{C5})+(\nexists\mathrm{At})+\mathbf{Q}_{W} \subseteq\mathbf{Q}_{G}\)._
Proof.: Take the contact algebra \(\langle\operatorname{RO}(\mathbb{R}),\odot\rangle\) and apply Lemma 3.8.
If the reader finds the reference to Lemma 3.8 somewhat sneaky (as there are no W-representatives in BCAs that satisfy the premises), we can construct an example of a BCA that has W-representatives and meets the conditions of the proposition but not (C6) by taking the topological space \(X:=[0,1]\cup[2,3]\) as the subspace of the reals with the standard topology and considering \(\langle\operatorname{RO}(X),\mathsf{C}_{\mathrm{T}}\rangle\). \(X\) is normal (more so, \(\kappa\)-normal), \(T_{1}\), and satisfies the countable chain condition. Thus \(\mathbf{Q}_{W}\subseteq\mathbf{Q}_{G}\). \(\operatorname{RO}(X)\) is obviously atomless. Moreover, \(X\) is metrizable, so it is a concentric space and thus has W-representatives by Theorem 3.6. But \(X\) has two non-trivial components, and thus \(\mathsf{C}_{\mathrm{T}}\) does not satisfy (C6). Thus, adding the assumption \(\mathbf{Q}_{W}\neq\emptyset\) does not improve the situation.
Making a suitable modification to \(X\), e.g., taking \(Y:=[0,1]\cup\{2\}\) we see that:
**Proposition 5.8**.: \((\nexists\mathrm{At})\) _is not a consequence of \(\mathbf{BCA}+(\mathrm{IA})+(\mathrm{C5})+\mathbf{Q}_{W}\subseteq\mathbf{Q}_{G}\)._
Of course, by Theorem 4.4, in \(\operatorname{RO}(Y)\) we have \(\mathbf{Q}_{G}\nsubseteq\mathbf{Q}_{W}\), specifically with \(\{\{2\}\}\) being the culprit.
As for the relation of (C5) to '\(\mathbf{Q}_{W}\subseteq\mathbf{Q}_{G}\)', observe that the axiom is not a consequence of \(\{\)(C0)-(C4), (IA), (C6), \((\nexists\mathrm{At})\}\).
**Theorem 6.2**.: _Any \(\langle\mathfrak{B},\mathsf{C}_{\boldsymbol{d}}\rangle\) satisfies_ (GIA)_._
Proof.: Take as the distinguished element of a \(\mathfrak{B}\in\mathsf{BA}\) any \(\boldsymbol{d}\neq\mathfrak{0}\) and consider \(\mathsf{C}_{\boldsymbol{d}}\). Suppose that \(A\subseteq\mathfrak{B}\), and let \(x\) be such that for all \(a\in A\), \(x\ll_{\boldsymbol{d}}a\), which means that either \(x\leq a-\boldsymbol{d}\) or \(x+\boldsymbol{d}\leq a\). In the former case, \(x\leq-\boldsymbol{d}\), so \(x\ll_{\boldsymbol{d}}x\), and we are done. In the latter, \(x+\boldsymbol{d}\ll_{\boldsymbol{d}}a\), as \(-a\) must be disjoint from \(x+\boldsymbol{d}\). Since it is always true that \(x\ll_{\boldsymbol{d}}x+\boldsymbol{d}\), we have that \(x+\boldsymbol{d}\) is the region that is strongly between \(x\) and \(a\).
In light of the above theorem, any Boolean algebra can be turned into a Boolean contact algebra that meets the generalized interpolation axiom. In particular, there will be such algebras that are either complete or incomplete, with or without any atoms, and of arbitrary cardinality. However, in light of Corollary 4.6, none of these algebras will satisfy (C5).
Having shown the consistency of (GIA) with the standard axioms for contact, we go on to prove the following:
**Theorem 6.3**.: _If \(\mathfrak{B}\in\mathsf{BCA}+(\text{GIA})+(\text{C6})\), then \(\mathbf{Q}_{W}\subseteq\mathbf{Q}_{G}\)._
Proof.: Suppose \(|B|=\kappa\). Fix an abstractive set \(A\). Since it is linearly ordered by \(\leq\), it must have a coinitial sequence \(\langle x_{\alpha}\mid\alpha<\lambda\rangle\) for a limit ordinal \(\lambda\leqslant\kappa\). Again, suppose the sequence is not a G-representative, i.e. it fails to satisfy (r3). Let \(u\) and \(v\) be regions such that each of them overlaps every \(x_{\alpha}\) from the sequence, yet they are separated, i.e., \(u\ll-v\). We construct another \(\lambda\)-sequence repeating the technique from the proof of Theorem 5.3, but applying (GIA).
If \(\lambda=\omega\), then it is enough to observe that (IA) is just a special case of (GIA) where \(Y:=\{-v\}\). So assume that \(\lambda>\omega\). Suppose \(\alpha<\lambda\) is a limit ordinal and for every \(\delta<\beta<\alpha\) we defined \(u_{\beta}\) and \(u_{\delta}\) such that:
\[u\ll u_{\beta}\ll u_{\delta}\ll-v\,.\]
Consequently, we have that \(u\) is a non-tangential part of every element of the sequence \(\langle u_{\beta}\mid\beta<\alpha\rangle\). Thus, by (GIA) we may choose \(u_{\alpha}\) to be a region \(z\) such that \((\forall\beta<\alpha)\,z\ll u_{\beta}\) and \(u\ll z\). Following this procedure we can construct the \(\lambda\)-sequence \(\langle u_{\alpha}\mid\alpha<\lambda\rangle\). We go on to show that \(\langle x_{\alpha}\cdot u_{\alpha}\mid\alpha<\lambda\rangle\) is an abstractive set. (r0) holds given that we have \(u_{\alpha}\cdot x_{\alpha}\neq\mathfrak{0}\) for any \(\alpha<\lambda\). (r1) holds due to (5.1): \(u_{\delta}\cdot x_{\delta}\ll u_{\beta}\cdot x_{\beta}\) for any \(\beta,\delta\in\lambda\) such that \(\beta<\delta\). Additionally, we have that \(u_{\alpha+1}\cdot x_{\alpha+1}\ll u_{\alpha}\cdot x_{\alpha}\) and \(u_{\alpha}\cdot x_{\alpha}\neq\mathfrak{1}\) for any \(\alpha<\lambda\). Therefore, by Proposition 5.1 the sequence \(\langle x_{\alpha}\cdot u_{\alpha}\mid\alpha<\lambda\rangle\) satisfies (A).
Notice that the sequence \(\langle x_{\alpha}\cdot u_{\alpha}\mid\alpha<\lambda\rangle\) is covered by \(\langle x_{\alpha}\mid\alpha<\lambda\rangle\). However, we also have that \(x_{\beta}\nleq u_{\delta}\cdot x_{\delta}\) for any \(\beta,\delta\in\lambda\) since \(u_{\delta}\cdot x_{\delta}\ll-v\) and \(x_{\beta}\nleq-v\). Therefore, \(\langle x_{\alpha}\mid\alpha<\lambda\rangle\) is not covered by \(\langle x_{\alpha}\cdot u_{\alpha}\mid\alpha<\lambda\rangle\). Thus, \(\langle x_{\alpha}\mid\alpha<\lambda\rangle\) is not a W-representative, and we can conclude that \(A\) is not a W-representative either.
_Problem 6_.: In Theorem 6.2 we have shown that the generalized interpolation axiom is consistent with the standard axioms for BCAs. However, what we do not know is whether there are contact algebras in which both the axioms (GIA) and (C6) hold and there is at least one W-representative. Thus we ask: _are there such BCAs?_
## Acknowledgements
This research was funded by the National Science Center (Poland), grant number 2020/39/B/HS1/00216, "Logico-philosophical foundations of geometry and topology".
For the purpose of Open Access, the authors have applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission. |
2310.19889 | Exploring Geometry of Blind Spots in Vision Models | Despite the remarkable success of deep neural networks in a myriad of
settings, several works have demonstrated their overwhelming sensitivity to
near-imperceptible perturbations, known as adversarial attacks. On the other
hand, prior works have also observed that deep networks can be under-sensitive,
wherein large-magnitude perturbations in input space do not induce appreciable
changes to network activations. In this work, we study in detail the phenomenon
of under-sensitivity in vision models such as CNNs and Transformers, and
present techniques to study the geometry and extent of "equi-confidence" level
sets of such networks. We propose a Level Set Traversal algorithm that
iteratively explores regions of high confidence with respect to the input space
using orthogonal components of the local gradients. Given a source image, we
use this algorithm to identify inputs that lie in the same equi-confidence
level set as the source image despite being perceptually similar to arbitrary
images from other classes. We further observe that the source image is linearly
connected by a high-confidence path to these inputs, uncovering a star-like
structure for level sets of deep networks. Furthermore, we attempt to identify
and estimate the extent of these connected higher-dimensional regions over
which the model maintains a high degree of confidence. The code for this
project is publicly available at
https://github.com/SriramB-98/blindspots-neurips-sub | Sriram Balasubramanian, Gaurang Sriramanan, Vinu Sankar Sadasivan, Soheil Feizi | 2023-10-30T18:00:33Z | http://arxiv.org/abs/2310.19889v1 | # Exploring Geometry of Blind Spots in Vision Models
###### Abstract
Despite the remarkable success of deep neural networks in a myriad of settings, several works have demonstrated their overwhelming sensitivity to near-imperceptible perturbations, known as adversarial attacks. On the other hand, prior works have also observed that deep networks can be under-sensitive, wherein large-magnitude perturbations in input space do not induce appreciable changes to network activations. In this work, we study in detail the phenomenon of under-sensitivity in vision models such as CNNs and Transformers, and present techniques to study the geometry and extent of "equi-confidence" level sets of such networks. We propose a Level Set Traversal algorithm that iteratively explores regions of high confidence with respect to the input space using orthogonal components of the local gradients. Given a source image, we use this algorithm to identify inputs that lie in the same equi-confidence level set as the source image despite being perceptually similar to arbitrary images from other classes. We further observe that the source image is linearly connected by a high-confidence path to these inputs, uncovering a star-like structure for level sets of deep networks. Furthermore, we attempt to identify and estimate the extent of these connected higher-dimensional regions over which the model maintains a high degree of confidence. The code for this project is publicly available at this URL.
## 1 Introduction
Deep neural networks have demonstrated remarkable success in various diverse domains including computer vision and natural language processing, surpassing the performance of classical algorithms by a significant margin. However, though they achieve state of the art results on many computer vision tasks like image classification, sometimes even exceeding human-level performance [He et al., 2015, 2016], the overall visual processing conducted by such models can deviate significantly from that effectively observed in the human visual system. Perhaps most iconic and representative of these differences lies in the overwhelming susceptibility of neural networks to near-imperceptible changes to their inputs -- commonly called adversarial attacks [Szegedy et al., 2014] -- an extraordinary failure mode that highlights the _over-sensitivity_ of such models. Indeed, an active area of research over the past few years has been focused towards analyzing adversarial perturbations under different threat models [Goodfellow et al., 2015, Tramer et al., 2018, Wong et al., 2019, Laidlaw et al., 2021, Croce and Hein, 2022] and in addressing adversarial vulnerabilities using robust training methodologies [Madry et al., 2018, Zhang et al., 2019, Singla and Feizi, 2020, Wu et al., 2020, Laidlaw et al., 2021, Levine and Feizi, 2021].
On the other hand, it has been shown that such models may also be _under-sensitive_, wherein input images that are unmistakably disparate to a human oracle induce near identical network activations or predictions (Jacobsen et al., 2018). To tractably analyze this phenomenon, Jacobsen et al. (2018) utilize a special class of neural networks that are bijective functions, called fully Invertible RevNets (Jacobsen et al., 2018, Kingma and Dhariwal, 2018), to craft large-magnitude semantic perturbations in input space that are designed to leave its corresponding network representations unchanged. Furthermore, Tramer et al. (2020) show that robust training with \(\ell_{p}\) bounded adversaries can be a source of excessive model-invariance in itself, due to the poor approximation of the true imperceptible threat model of human oracles by \(\ell_{p}\) norm-bounded balls in RGB-pixel space (Laidlaw et al., 2021). Indeed, the authors 'break' a provably robust defense on MNIST (Zhang et al., 2019) with a certified accuracy of \(87\%\), by crafting perturbations within the certified \(\ell_{\infty}\) radius of \(0.4\), that however cause model agreement with human oracles to diminish to \(60\%\).
However, these methods either rely upon special invertible network architectures or the selection of the nearest training image of another class as a target followed by a sequence of complex alignment and spectral clustering techniques, so as to semantically alter the input in a conspicuous manner that induces a change in the human assigned oracle label, while leaving the model prediction unchanged. This leads us to our research question: Is it possible to analyze the phenomenon of _under-sensitivity_ of general vision models in a systematic manner on natural image datasets, and characterize the geometry and extent of "blind spots" of such networks? Indeed, we empirically demonstrate the veracity of this claim -- in this work, we present a novel Level Set Traversal algorithm to explore the "equi-confidence" level sets of popular vision models. Given an arbitrary source and target image pair, our proposed algorithm successfully finds inputs that lie in the same level set as the source image, despite being near-identical perceptually to the target image. Furthermore, the proposed algorithm identifies a connected path between the original source image and the "blind spot" input so generated, wherein high prediction confidence with respect to the source class is maintained throughout the path. In summary, we make the following contributions in this work:
* We present a novel Level Set Traversal algorithm that iteratively uses orthogonal components of the local gradient to identify the "blind spots" of common vision models such as CNNs and ViTs on CIFAR-10 and ImageNet.
* We thereby show that there exist piecewise-linear connected paths in input space between images that a human oracle would deem to be extremely disparate, though vision models retain a near-uniform level of confidence on the same path.
* Furthermore, we show that the linear interpolant path between these images also remarkably lies within the same level set; as we observe the consistent presence of this phenomenon across arbitrary source-target image pairs, this unveils a star-like set substructure within these equi-confidence level sets.
* We demonstrate that adversarially robust models tend to be _under-sensitive_ over subsets of the input domain that lie well beyond its original threat model, and display level-sets of high-confidence that extend over a significant fraction of the triangular convex hull of a given source image and arbitrary pair of target images.
## 2 Preliminaries
**Notation:** In this paper, we primarily consider the setting of classification with access to a labelled dataset. Let \(\mathbf{x}\in\mathcal{X}\) denote \(d\)-dimensional input images, and let their corresponding labels be denoted as \(y\in\{1,\dots,N\}\). Let \(f:\mathcal{X}\rightarrow[0,1]^{N}\) denote the classification model considered, where \(f(\mathbf{x})=\big{(}f^{1}(\mathbf{x}),\dots,f^{N}(\mathbf{x})\big{)}\) represents the softmax predictions over the \(N\)-classes. Further, let \(C_{f}(\mathbf{x})\) be the argmax over the \(N\)-dimensional softmax output, representing the class predicted by the model for input \(\mathbf{x}\). For a given data sample \((x,y)\), let the cross-entropy loss achieved by the model be denoted as \(CE(f(\mathbf{x}),y)\). Given a prediction confidence value \(p\in[0,1]\), and a class \(j\in\{1,\dots,N\}\) we define the Level Set \(L_{f}(p,j)\), and Superlevel Set \(L_{f}^{+}(p,j)\) for the function \(f\) as follows:
\[L_{f}(p,j)=\{\mathbf{x}\in\mathcal{X}:f^{j}(\mathbf{x})=p\}\,,\qquad L_{f}^{+}(p,j)=\{\mathbf{x}\in\mathcal{X}:f^{j}(\mathbf{x})\geq p\}\]
Given a pair of inputs \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), we define the linear interpolant path between them as \(P(\lambda;\mathbf{x}_{1},\mathbf{x}_{2})=\lambda\cdot\mathbf{x}_{1}+(1-\lambda)\cdot\mathbf{ x}_{2}\), for \(\lambda\in[0,1]\). A given set \(S\) is thus said to be convex if \(P(\lambda;\mathbf{x}_{1},\mathbf{x}_{2})\in S,\;\forall\mathbf{x}_{1},\mathbf{x}_{2}\in S\) and \(\lambda\in[0,1]\). Further, a set \(S\) is said to be _star-like_ if there exists some \(\textbf{s}_{0}\in S\) such that \(P(\lambda;\textbf{s}_{0},\mathbf{x})\in S,\;\forall\mathbf{x}\in S\) and \(\lambda\in[0,1]\).
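As a small illustration of this notation, the following Python sketch checks membership in a superlevel set and evaluates the linear interpolant path \(P(\lambda;\mathbf{x}_{1},\mathbf{x}_{2})\); the helper names and the use of a PyTorch classifier are assumptions for illustration, not part of the paper.

```python
# Minimal sketch (assumed helpers): superlevel-set membership and the
# linear interpolant path P(lambda; x1, x2) used throughout this section.
import torch

def interpolate(x1: torch.Tensor, x2: torch.Tensor, lam: float) -> torch.Tensor:
    """P(lambda; x1, x2) = lambda * x1 + (1 - lambda) * x2."""
    return lam * x1 + (1.0 - lam) * x2

@torch.no_grad()
def in_superlevel_set(model, x: torch.Tensor, cls: int, p: float) -> bool:
    """Check whether x lies in L_f^+(p, cls): softmax confidence for cls >= p."""
    conf = torch.softmax(model(x.unsqueeze(0)), dim=1)[0, cls].item()
    return conf >= p
```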
### Conjugate Nature of Adversarial and Confidence Preserving Perturbations
Given a classification model \(f\) and a correctly classified benign input \((\mathbf{x},y)\), an adversarial image is a specially crafted image \(\widetilde{\mathbf{x}}=\mathbf{x}+\mathbf{\varepsilon}\) such that both \(\mathbf{x}\) and \(\widetilde{\mathbf{x}}\) appear near-identical to a human oracle, but the perturbation induces the network to misclassify, i.e., \(C_{f}(\mathbf{x}+\mathbf{\varepsilon})\neq C_{f}(\mathbf{x})=y\). To enforce imperceptibility in a computationally tractable manner, several adversarial threat models have been proposed, with the \(\ell_{2}\) and \(\ell_{\infty}\) norm constraint models being the most popular. Amongst the earliest adversarial attacks specific to the latter threat model was the Fast Gradient Sign Method (FGSM) attack, proposed by Goodfellow et al. (2015), wherein the adversarial perturbation is found by single-step direct ascent along the local gradient with pixel-wise clipping. A stronger multi-step variant of this attack called Iterated-FGSM (IFGSM) was later introduced by Kurakin et al. (2017), wherein iterated gradient ascent is performed alternately with projection operations onto the constraint set. A popular variant, called the Projected Gradient Descent (PGD) attack, was introduced by Madry et al. (2018); it incorporates an initial random perturbation to the clean image, which was observed to help mitigate gradient masking effects (Kurakin et al., 2016). A large class of adversarial attacks (Croce and Hein, 2020; Gowal et al., 2019; Carlini et al., 2019; Sriramanan et al., 2020) thus utilize perturbations parallel to the gradient, with appropriate projection operations to ensure constraint satisfaction.
In contrast, perturbations that leave the network prediction confidence unchanged, are locally orthogonal to the gradient direction. Indeed, for any differentiable function \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}\), denote the level set as \(L_{g}(c)\) for a given output \(c\in\mathbb{R}\). Let \(\gamma(t):[0,1]\to L_{g}(c)\) be any differentiable curve within the level set. Then, \(g(\gamma(t))=c\ \forall t\in[0,1]\). Thus, \(\frac{d}{dt}(g(\gamma(t)))=0=\langle\nabla g(\gamma(t)),\gamma^{\prime}(t)\rangle\), implying that \(\gamma^{\prime}(t)\) is orthogonal to \(\nabla g(\gamma(t)),\ \ \forall t\in[0,1]\). Since this is true for _any_ curve \(\gamma\) contained in the level set, we conclude that the gradient vector is always perpendicular to the level set. Furthermore, we can additionally show that the level set \(L_{g}(c)\) is often a differentiable submanifold, with mild additional conditions on the gradient. Indeed, a level set \(L_{g}(c)\) is said to be _regular_ if \(\nabla g(\mathbf{x})\neq\mathbf{0}\ \ \forall\mathbf{x}\in L_{g}(c)\).
**Lemma 1**.: _If \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is a continuously differentiable function, then each of its regular level sets is a \((d-1)\) dimensional submanifold of \(\mathbb{R}^{d}\)._
We present the proof of Lemma 1 in Section A of the Appendix. We thus observe that adversarial perturbations, largely parallel to the gradient, are locally orthogonal to confidence preserving directions which correspond to the \((d-1)\) dimensional tangent space of the level set. We take inspiration from this observation to develop a general framework that applies to a broad class of differentiable neural networks, as opposed to previous works (Jacobsen et al., 2018) that require the network to be invertible to identify confidence preserving perturbations.
We also remark that adversarial attacks and level set preserving perturbations are complementary from another perspective as well, as noted by prior works: the former attempts to find inputs that change model predictions without modifying human oracle assignments, while the latter attempts to keep network predictions unchanged though human oracles would likely change their original label assignment. Thus, both classes of adversarial attacks and level set preserving perturbations induce misalignment between oracle and model predictions, and cast light onto independent means of evaluating the coherence of models towards human expectations.
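The observation above suggests a simple numerical probe: project a candidate perturbation onto the orthogonal complement of the local input gradient, so that the first-order change in loss (and hence in confidence) vanishes. A hedged PyTorch sketch follows; `model`, `x`, `y`, and the step size are illustrative assumptions rather than the paper's code.

```python
# Sketch: a confidence-preserving direction is, to first order, any vector
# orthogonal to the local input gradient of the cross-entropy loss.
import torch
import torch.nn.functional as F

def orthogonal_perturbation(model, x: torch.Tensor, y: int, step: float = 0.5):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([y]))
    g = torch.autograd.grad(loss, x)[0].flatten()

    v = torch.randn_like(g)                 # arbitrary candidate direction
    v = v - (v @ g) / (g @ g) * g           # remove the component along g
    v = step * v / v.norm()                 # fixed-magnitude orthogonal step
    # First-order change in loss along v is <g, v> = 0, so the prediction
    # confidence is locally unchanged; curvature causes higher-order drift.
    return (x + v.view_as(x)).detach()
```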
## 3 Proposed Method: Level Set Traversal
We now describe our algorithm for traversing the level set, which we call the Level Set Traversal (LST) algorithm (Algorithm 1). We try to find a path from a source image to a target image such that all points on that path are classified by the model as the source class with high confidence. Given that these \((d-1)\) dimensional level set submanifolds can be potentially highly complex in their geometries, we use a discretized approximation using small yet finite step sizes to tractably explore these regions. Let the source image be \(\mathbf{x}_{s}\) with true label \(y_{s}\), and \(\mathbf{g}=\nabla_{\mathbf{x}}CE(f(\mathbf{x}),y_{s})\) be the gradient of the cross entropy loss of the model (\(f\)) prediction with respect to \(\mathbf{x}\). The key idea is to get as close as possible to a target image \(\mathbf{x}_{t}\) from some image \(\mathbf{x}\) by computing the projection of \(\Delta\mathbf{x}=\mathbf{x}_{t}-\mathbf{x}\) onto the orthogonal complement of \(\mathbf{g}\), to obtain a new image \(\mathbf{x}_{\text{new}}\) that leaves the model confidence unchanged. We can compute the projection of \(\Delta\mathbf{x}\) on the orthogonal complement by subtracting the component of \(\Delta\mathbf{x}\) parallel to \(\mathbf{g}\), that is, \(\mathbf{x}_{||\mathbf{g}}=\mathbf{g}\left(\frac{\Delta\mathbf{x}^{\top}\mathbf{g}}{\|\mathbf{g}\|^{2}}\right)\) (L6). Then, using a scale factor \(\eta\), the vector update to \(\mathbf{x}\) can be expressed as \(\Delta\mathbf{x}_{\perp}=\eta(\Delta\mathbf{x}-\mathbf{x}_{||\mathbf{g}})\) (L7), and therefore \(\mathbf{x}_{\text{new}}=\mathbf{x}+\Delta\mathbf{x}_{\perp}\). Starting from the source image \(\mathbf{x}_{s}\), we repeatedly perform this iteration to get reasonably close to the target image
while carefully ensuring that the confidence of the model prediction at any given point does not drop below a preset confidence threshold \(\delta\) compared to the source image (L10).
While the above method works well with a relatively large \(\eta\) if the curvature of \(f\) is low, it risks a non-trivial drop in model confidence (or even a change in the model prediction) if the curvature of \(f\) at \(\mathbf{x}\) is high enough. Therefore, after each step, we add a small perturbation vector \(\mathbf{x}_{||}\) in the direction of \(-\mathbf{g}\) scaled by a factor of \(\epsilon\). This step decreases the cross entropy loss, and thus increases the confidence so as to offset any confidence drops incurred due to the addition of \(\Delta x_{\perp}\). Thus, we can maintain a higher model confidence over the path. In order to ensure that the norm of this perturbation is not arbitrarily large, we bound the \(\ell_{\infty}\) norm by projecting the vector to an \(\ell_{\infty}\) ball of radius \(\epsilon\). Then, the step \(\mathbf{x}_{||}\) can be computed as \(\text{clamp}(\epsilon\mathbf{g},-\epsilon,\epsilon)\), where the function \(\text{clamp}(\mathbf{v},a,b)\) is the element-wise application of the function \(\min(\max(\cdot,a),b)\) on all elements of \(\mathbf{v}\). Thus, we modify L9 so that now \(\mathbf{x}_{\text{new}}=\mathbf{x}+\Delta\mathbf{x}_{\perp}-\mathbf{x}_{||}\). We present a pictorial schematic of Algorithm 1 in Fig 1. We further employ an exponential moving average of \(\mathbf{x}_{||}\), in order to smoothen components that potentially undulate heavily during the iterative traversal. Thus, since the frequent changes in \(\mathbf{x}_{||}\) are smoothened out, we find that the final output images are often linearly connected to the source image with high confidence over the linear interpolations (see Fig 6).
```
1:Input: Source image \(\mathbf{x}_{s}\) with label \(y\), target image \(\mathbf{x}_{t}\), model \(f\), max iterations \(m\), scale factor \(\eta\), stepsize \(\epsilon\), confidence threshold \(\delta\)
2:Initialize \(\mathbf{x}=\mathbf{x}_{s},\mathbf{x}_{||}=\mathbf{0}\)
3: for \(i=1\) to \(m\) do
4: \(\Delta\mathbf{x}=\mathbf{x}_{t}-\mathbf{x}\)
5:\(\mathbf{g}=\nabla_{\mathbf{x}}CE(f(\mathbf{x}),y)\)
6:\(c_{//}=(\mathbf{g}\cdot\Delta\mathbf{x})/||\mathbf{g}||^{2}\)
7: \(\Delta\mathbf{x}_{\perp}=\eta(\Delta\mathbf{x}-c_{//}\,\mathbf{g})\)
8:\(\mathbf{x}_{||}=\Pi_{\infty}(\mathbf{x}_{||}-\epsilon\mathbf{g},-\epsilon,\epsilon)\)
9:\(\mathbf{x}_{\text{new}}=\mathbf{x}+\Delta\mathbf{x}_{\perp}+\mathbf{x}_{||}\)
10: if \(f(\mathbf{x}_{s})[y]-f(\mathbf{x}_{\text{new}})[y]>\delta\) then
11:return\(\mathbf{x}\)
12:\(\mathbf{x}=\mathbf{x}_{\text{new}}\)
13:return\(\mathbf{x}\)
```
**Algorithm 1** Level Set Traversal (LST)
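For concreteness, the following is a minimal PyTorch sketch of the traversal loop in Algorithm 1 as we read it; it is not the authors' released implementation, and `model`, the hyperparameter defaults, and the clamp of images to \([0,1]\) are assumptions made for illustration.

```python
# Hedged sketch of the Level Set Traversal (LST) loop from Algorithm 1.
import torch
import torch.nn.functional as F

def level_set_traversal(model, x_src, y, x_tgt, m=400, eta=0.1, eps=2/255, delta=0.2):
    x = x_src.clone()
    x_par = torch.zeros_like(x)                              # accumulated parallel step
    with torch.no_grad():
        p_src = torch.softmax(model(x_src.unsqueeze(0)), 1)[0, y]
    for _ in range(m):
        xg = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(xg.unsqueeze(0)), torch.tensor([y]))
        g = torch.autograd.grad(loss, xg)[0]
        dx = x_tgt - x
        c_par = (g * dx).sum() / (g * g).sum()
        dx_perp = eta * (dx - c_par * g)                     # move toward target within the level set
        x_par = torch.clamp(x_par - eps * g, -eps, eps)      # small confidence-restoring step
        x_new = (x + dx_perp + x_par).clamp(0, 1)
        with torch.no_grad():
            p_new = torch.softmax(model(x_new.unsqueeze(0)), 1)[0, y]
        if p_src - p_new > delta:                            # stop if confidence drops too far
            break
        x = x_new.detach()
    return x
```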
We now apply this algorithm to explore the level sets of standard and robust ResNet-50 models. In Fig 2, we present the path traversed by the LST algorithm, wherein the model predicts the source 'goose' class with very high confidence over the entire path, though the target blindspot found by LST clearly appears as a 'dog' to human oracles. To substantiate the efficacy of the LST algorithm, we randomly select five images from five arbitrary ImageNet classes ('goose', 'Scottish Terrier','meerkat', 'academic gown', 'cleaver'), and compute LST blindspots for all possible source-target image pairs. We show the final images output by the LST algorithm in Fig 3 as an image-grid. If we order the five selected images, the \(i^{th}\) row and \(j^{th}\) column of the image-grid is the LST output obtained using the \(i^{th}\) image as target and \(j^{th}\) image as the source. Thus, each column represents the source image being transformed iteratively into the other target images. The confidence of the model prediction for the source class (names on top of each column) is displayed just below each image. For both the normally trained and adversarially trained model, these images are almost indistinguishable from the target while retaining high model confidence for the source class. Since adversarially robust models have
Figure 1: Pictorial representation of the LST algorithm. The contour lines represent level sets of the model confidence \(f(\mathbf{x})\), with blue representing high confidence (low loss) and red representing low confidence (high loss). At each iteration, we obtain \(\mathbf{x}_{\text{new}}\) by adding two vectors, \(\mathbf{x}_{\perp}=\eta(\Delta\mathbf{x}-c_{//}\,\mathbf{g})\) (projection of \(\Delta\mathbf{x}\) onto the orthogonal complement of \(\mathbf{g}\)) and \(\mathbf{x}_{||}\) (a small perturbation to increase the confidence and remain within the level set).
Figure 2: Intermediate images over the path traversed by the LST algorithm using a source image of a ‘goose’ and a target image of a ‘Scottish terrier’ (a dog) for a normally trained ResNet-50 model. We observe that the model predicts the ‘goose’ class with very high confidence for all images over the path, though the target blindspot found by LST clearly appears as a ‘dog’ to human oracles.
perceptually aligned gradients, we can sometimes visually notice a few traces of the source image in the final LST image; for example, the 'meerkat' image in the 3rd row, 2nd column in the right side of Fig 3 has some traces of the source 'terrier' image, but differences are usually hard to perceive.
We also examine the model confidence over linear interpolations between the source image and LST outputs for all target pairs in Fig 6. Formally, consider a source image \(\mathbf{x}_{s}\) with label \(y\) and let \(\mathbf{x}_{\text{op}}\) represent the LST output when applied toward a target image \(\mathbf{x}_{t}\). Denote the difference vector as \(\Delta\mathbf{v}=\mathbf{x}_{\text{op}}-\mathbf{x}_{s}\). Then, we observe that \(\mathbf{x}_{s}+\alpha\Delta\mathbf{v}\) is assigned high confidence with respect to class \(y\) by the model \(\forall\alpha\in[0,1]\), which represents the entire linear interpolant path between \(\mathbf{x}_{s}\) and \(\mathbf{x}_{\text{op}}\). Furthermore, we observe that the path discovered by the Level Set Traversal algorithm enjoys two key properties: (1) Uniqueness (once the target image is fixed) and (2) Extremality:
(1) _Uniqueness:_ Since the local tangent space of the level set is \((d-1)\) dimensional, several independent directions are orthogonal to the local gradient, and a priori do not yield a unique path like a gradient-flow. However, once we fix our target image, we use its difference vector with respect to the current iterate (L4) and compute its projection onto the local tangent space (L7) of the level set, thereby generating a _uniquely defined path_.
(2) _Extremality:_ Though this flow-based path may be non-linear, we additionally discover that the final output-point of this flow is surprisingly linearly connected with high-confidence to the source image after we apply discretized approximations in practical settings for common vision models etc. Formally, the LST output \(\mathbf{x}_{\text{op}}\) is _linearly extremal_ in the sense that \(\mathbf{x}_{s}+(1+\epsilon)\Delta\mathbf{v}\) is rapidly assigned low-confidence by the model even for extremely small values of \(\epsilon>0\), where \(\Delta\mathbf{v}=\mathbf{x}_{\text{op}}-\mathbf{x}_{s}\).
Thus, using the LST algorithm, we find that the level sets of common models extend outwards in an expansive, connected manner to include images of _arbitrary_ classes, which human oracles would never state as being similar. Since the linear path from any given source image to LST outputs for arbitrary target images retains high model confidence throughout, it unveils a remarkable star-like substructure for superlevel sets as shown in Fig 4, where the number of "limbs" or linear protuberances of the star-like structure is _extraordinarily large_, plausibly as large as the number of images in all other classes. Furthermore, to study the size of the level sets beyond the one-dimensional interpolant paths, we analyze the two-dimensional triangular convex hull between a given source image and two LST output blindspot images, using quantitative metrics in Section 6.
Figure 3: The images returned by LST for 5 random source and target images for normally trained (left) and adversarially trained (right) ResNet-50 models. The image in the \(i^{th}\) row and \(j^{th}\) column is the LST output obtained using the \(i^{th}\) image as target and \(j^{th}\) image as the source. The confidence of the model prediction for the source class (names on top of each column) is displayed just below each image. For example, all four LST output blindspots in the first column (highlighted), using the source ‘goose’ image, are all predicted to be of the ‘goose’ class with very high confidence. Diagonal images are unchanged, as source equals target. We observe that almost any source image can be iteratively modified using LST to resemble any target image very closely without any loss in confidence for both normal and adversarially trained ResNet-50 models.
## 4 Disconnected Nature of Standard Adversarial Attacks
At first glance, the images output by the LST algorithm may seem similar to those produced using targeted variants of standard adversarial attacks. In this section, we explore the connectivity of high-confidence paths as induced by standard adversarial attacks, and show that adversarially attacked source images are not linearly connected with high-confidence paths to their target image. In detail, let \((\mathbf{x}_{1},y_{1})\) and \((\mathbf{x}_{2},y_{2})\) be any two data samples from different classes \((y_{1}\neq y_{2})\). A targeted adversarial attack [Carlini and Wagner, 2017] on input \(\mathbf{x}_{1}\) with respect to class \(y_{2}\) is formulated as solving \(\mathbf{\varepsilon}_{12}=\arg\min_{\mathbf{\varepsilon}_{12}:\|\mathbf{\varepsilon}_{12} \|\leq\epsilon}CE(f(\mathbf{x}_{1}+\mathbf{\varepsilon}_{12}),y_{2})\). A related variant, called a feature-level targeted adversarial attack [Sabour et al., 2015] instead uses intermediate network activations to craft image perturbations. If \(f_{|L}(\mathbf{x})\) represents the network activations from hidden layer \(L\) for an input \(\mathbf{x}\), then this attack attempts to match features by optimizing \(\mathbf{\varepsilon}_{12}=\arg\min_{\mathbf{\varepsilon}_{12}:\|\mathbf{\varepsilon}_{12} \|\leq\epsilon}||f_{|L}(\mathbf{x}_{1}+\mathbf{\varepsilon}_{12})-f_{|L}(\mathbf{x}_{2})||\). The hidden layer (\(L\)) that is often selected corresponds to pre-activations of the final fully-connected layer, and thus often induces misclassification.
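For reference, a minimal sketch of a standard targeted \(\ell_{\infty}\) PGD attack is given below; it is one common way of obtaining the adversarial endpoints \(\mathbf{x}_{1}+\mathbf{\varepsilon}_{12}\) discussed here, and the radius, step size, and iteration count are illustrative defaults rather than the paper's exact settings.

```python
# Hedged sketch of a targeted l_inf PGD attack (not the authors' code).
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target_cls, eps=8/255, alpha=2/255, steps=40):
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv.unsqueeze(0)), torch.tensor([target_cls]))
        g = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * g.sign()        # descend: raise target-class confidence
        x_adv = x + (x_adv - x).clamp(-eps, eps)         # project back onto the l_inf ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```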
Using this framework, we can thus analyze the connectivity of high-confidence paths with respect to class \(y_{2}\) between the target benign sample (\(\mathbf{x}_{2}\)) and adversarial examples (such as \(\mathbf{x}_{1}+\mathbf{\varepsilon}_{12}\) targeted towards \(\mathbf{x}_{2}\)), using the linear interpolant path between the two. By doing so, we can analyze the level set with respect to class \(y_{2}\) using targeted adversarial examples alone, though the perturbation utilized is inherently norm-limited. In Fig 5, we plot the median confidence of a normally trained ResNet-50 model over the linear interpolant paths for 1000 source-target pairs. We find that though the model confidence with respect to class \(y_{2}\) is high for both the target benign input \(\mathbf{x}_{2}\) and the targeted adversarially attacked input \(\mathbf{x}_{1}+\mathbf{\varepsilon}_{12}\) (that is, at the end-points of the linear path), the model _does not_ maintain high confidence over the linear interpolant path between the two, but sharply declines to a valley of near zero-confidence over the path. For specific inputs, we often observe sharp, erratic diminutions in target-class prediction confidence over the convex combination, with a non-trivial region of near-zero confidence. This contrasts sharply with the existence of linear-connectivity observed between a source image and LST blindspot images. We thus crucially note that adversarial attacks are _standalone insufficient_ to study the structure of level sets of common vision models. However, we incorporate them into our proposed Level Set Traversal algorithm in a fruitful manner.
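The confidence profile in Fig 5 can be reproduced in spirit with a short probe that samples the linear interpolant path and records the model's confidence for a fixed class; the sketch below assumes a PyTorch classifier and is not tied to the paper's exact evaluation code.

```python
# Sketch: model confidence along the linear interpolant between two inputs.
import torch

@torch.no_grad()
def path_confidence(model, x_start, x_end, cls, n=50):
    confs = []
    for lam in torch.linspace(0, 1, n):
        x = (1 - lam) * x_start + lam * x_end
        confs.append(torch.softmax(model(x.unsqueeze(0)), 1)[0, cls].item())
    # Near-constant for source -> LST output pairs; dips sharply toward zero
    # between a targeted adversarial example and its target benign image.
    return confs
```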
## 5 Theoretical Analysis for Common Models
To better understand the geometry of level set submanifolds for a given prediction confidence threshold, we analyze the nature of confidence preserving perturbations in a few simplified model settings.
Figure 4: Schematic of Star-like set substructure of Superlevel sets: The linear interpolant paths between the source image \(\mathbf{x}_{s}\) and blindspots found using LST maintain high-confidence throughout for arbitrary target images of other classes.
Figure 5: Model Confidence over Linear Interpolant Paths: We plot the median prediction confidence with respect to class \(y_{2}\) over linear paths between the target benign sample (\(\mathbf{x}_{2}\)) and adversarial examples (such as \(\mathbf{x}_{1}+\mathbf{\varepsilon}_{12}\) targeted towards input \(\mathbf{x}_{2}\)) in input space. While model confidence falls to near-zero using standard targeted adversarial attacks, the linear path between the LST output and source image has high confidence of almost 1.0 throughout.
**1) Linear Functional:** First, consider the classification model to be a linear functional \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\), that is, \(f(\mathbf{x}_{1}+\mathbf{x}_{2})=f(\mathbf{x}_{1})+f(\mathbf{x}_{2})\) and \(f(\lambda\mathbf{x})=\lambda f(\mathbf{x})\). Then, by the Riesz Representation theorem, there exists a unique vector \(\mathbf{w}_{f}\in\mathbb{R}^{d}\) such that \(f(\mathbf{x})=\langle\mathbf{w}_{f},\mathbf{x}\rangle\;\;\forall\mathbf{x}\in\mathbb{R}^{d}\). We thus observe that for any vector \(\mathbf{v}\) orthogonal to \(\mathbf{w}_{f}\), \(f(\mathbf{x}+\mathbf{v})=\langle\mathbf{w}_{f},\mathbf{x}+\mathbf{v}\rangle=\langle\mathbf{w}_{f}, \mathbf{x}\rangle+\langle\mathbf{w}_{f},\mathbf{v}\rangle=\langle\mathbf{w}_{f},\mathbf{x}\rangle\). Thus, in this setting, the level sets of \(f\) are \((d-1)\) dimensional _linear subspaces_ spanned by the set \(\{\mathbf{v}\in\mathbb{R}^{d}:\langle\mathbf{w},\mathbf{v}\rangle=0\}\). We observe that a similar argument can be extended to affine functions of the form \(f(\mathbf{x})=\langle\mathbf{w}_{f},\mathbf{x}\rangle+c\), where \(c\in\mathbb{R}\).
We remark that by applying a first-order Taylor series approximation for general real-valued smooth functions, we observe near-affine behavior locally within a small neighborhood: \(f(\mathbf{x}+\epsilon\mathbf{a})=f(\mathbf{x})+\epsilon\langle\nabla f(\mathbf{x}),\mathbf{a} \rangle+O(\epsilon^{2}\|\mathbf{a}\|^{2})\). Thus locally, we observe that confidence-preserving perturbations can arise from a \((d-1)\) dimensional plane orthogonal to the gradient. Indeed we incorporate this implicitly in our proposed Level Set Traversal algorithm, wherein the orthogonal projection \(\Delta\mathbf{x}_{\perp}\) with respect to the local gradient is iteratively computed, and its relative success can partly be attributed to the orthogonal hyperplane having a large dimension, namely \((d-1)\).
Another related setting worthy of note is that of neural networks that utilize ReLU activations, which induces a piece-wise linear structure over tessellated subsets of the input domain. Thus, a given output neuron of a ReLU network functionally has the form \(f(\mathbf{x})=\langle\mathbf{w},\mathbf{x}\rangle+c\) within a given tessellated region, and thus has a constant gradient within the same region. Further, between two such adjacent regions, the two orthogonal \((d-1)\) dimensional hyperplanes typically intersect over an affine space over dimension at least \((d-2)\). Thus if \(\mathbf{x}_{1},\mathbf{x}_{2}\) are two inputs such that their linear interpolant path cuts across \(n\) distinct tessellated regions, the common intersection of these orthogonal hyperplanes will typically be \((d-n)\) dimensional, indicating that there exist perturbations which lie in the common null-space of the gradients as defined along each tessellated region that is crossed. Indeed in the following section, we demonstrate empirically for Residual Networks that though the exact iterative path followed by the Level Set Traversal algorithm is discretized and non-linear, the final outputs so found are often linearly connected through paths of high confidence to the source image, thereby lying within the level set of the original source class. This indicates that the overlap of different \((d-1)\) dimensional hyperplanes is non-trivial at a non-local scale in image space, whereby we observe extended connected regions of high confidence.
**2) Full-Rank Linear Transformations:** Let us now consider a setting apart from classification, such as regression, wherein the complete vector representation of the output is of principal interest. Let the model be of the form \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), \(f(\mathbf{x})=A\mathbf{x}\), where \(A\in\mathbb{R}^{d\times d}\) is a full-rank matrix. In this setting, we observe that if \(f(\mathbf{x}_{1})=f(\mathbf{x}_{2})\), i.e., \(A\mathbf{x}_{1}=A\mathbf{x}_{2}\), then \(A(\mathbf{x}_{1}-\mathbf{x}_{2})=\mathbf{0}\), the zero-vector. But since \(A\) is full-rank, this implies that \(\mathbf{x}_{1}-\mathbf{x}_{2}=\mathbf{0}\), i.e., \(\mathbf{x}_{1}=\mathbf{x}_{2}\). Thus, \(f\) is a bijective function and has a trivial null space. Thus in this setting, we necessarily have to relax the problem to be that of identifying perturbations of large magnitude to the input that _minimally_ change the function output: that is, we solve for \(\min_{\mathbf{v}\neq\mathbf{0}}\frac{\|A\mathbf{v}\|^{2}}{\|\mathbf{v}\|^{2}}\). Indeed, let the Singular Value Decomposition (SVD) of the full rank matrix \(A\) be given by \(A=U\Sigma V^{T}\), where \(U,V\) are \(d\times d\) orthogonal matrices, and \(\Sigma\) is a diagonal matrix consisting of the positive singular values \(\sigma_{i}\) for \(1\leq i\leq d\), in descending order without loss of generality. Then, \(\|A\mathbf{v}\|=\|U\Sigma V^{T}\mathbf{v}\|=\|U(\Sigma V^{T}\mathbf{v})\|=\|\Sigma V^{T}\mathbf{v}\|\) since \(U\) is an orthogonal matrix. Similarly, since \(V\) is orthogonal as well, let \(\mathbf{z}=V^{T}\mathbf{v}\), so that \(\|\mathbf{z}\|=\|\mathbf{v}\|\). Thus, \(\min_{\mathbf{v}\neq\mathbf{0}}\frac{\|A\mathbf{v}\|^{2}}{\|\mathbf{v}\|^{2}}=\min_{\mathbf{z}\neq\mathbf{0}}\frac{\|\Sigma\mathbf{z}\|^{2}}{\|\mathbf{z}\|^{2}}=\min_{\mathbf{z}}\|\Sigma\mathbf{z}\|^{2}\) such that \(\mathbf{z}^{T}\mathbf{z}=1\). But since \(\Sigma\) is diagonal, \(\|\Sigma\mathbf{z}\|^{2}=\sum_{i=1}^{d}\sigma_{i}^{2}z_{i}^{2}\), and under the constraint \(\|\mathbf{z}\|^{2}=1\) it is easy to observe that the minimum value attained is \(\sigma_{d}^{2}=\sigma_{min}^{2}\), attained when the input vector to \(f\) is the right-singular vector corresponding to the minimum singular value of \(A\).
In this setting, we remark that adversarial perturbations can, in contrast, be formulated as \(\max_{\mathbf{v}\neq\mathbf{0}}\frac{\|A\mathbf{v}\|^{2}}{\|\mathbf{v}\|^{2}}\), with the maximum value given by \(\sigma_{max}^{2}\) and attained by the right-singular vector corresponding to the maximum singular value of \(A\), highlighting the complementary nature of the two problems as indicated previously in Section 2.1. Indeed, the condition number \(\kappa\) of an invertible matrix \(A\) is defined as \(\kappa=\sigma_{max}/\sigma_{min}=\|A\|\cdot\|A^{-1}\|\). If the condition number is large with \(\kappa\gg 1\), the matrix \(A\) is said to be ill-conditioned, and induces an inescapable compromise: if \(\sigma_{max}=\|A\|\approx 1\) (as potentially expected in "robust" networks), then \(1/\sigma_{min}=\|A^{-1}\|\gg 1\) is necessarily large, thereby inducing extreme _under-sensitivity_ along some dimensions; while if \(1/\sigma_{min}=\|A^{-1}\|\approx 1\), then \(\sigma_{max}=\|A\|\gg 1\) and the model is extremely _over-sensitive_, similar to the phenomenon of adversarial vulnerability.
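A quick numerical sanity check of this argument (illustrative NumPy code, not from the paper): for a random full-rank matrix, the directions that change the output least and most are the right-singular vectors associated with \(\sigma_{min}\) and \(\sigma_{max}\), respectively.

```python
# Verify that ||A v|| is minimized/maximized by the right-singular vectors
# for the smallest/largest singular values of a full-rank matrix A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))              # full rank with probability 1
U, S, Vt = np.linalg.svd(A)

v_min, v_max = Vt[-1], Vt[0]                 # unit right-singular vectors
print(np.linalg.norm(A @ v_min), S[-1])      # ||A v_min|| equals sigma_min
print(np.linalg.norm(A @ v_max), S[0])       # ||A v_max|| equals sigma_max
print(S[0] / S[-1])                          # condition number kappa
```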
## 6 Quantifying Under-Sensitivity of Vision Models
In this paper, we primarily consider standard vision datasets such as ImageNet (Deng et al., 2009) and CIFAR-10 (Krizhevsky et al., 2009) (the latter in Section C of the Appendix). We thereby explore the "blind-spots" of popular vision models such as ResNet (He et al., 2016) and Vision Transformers (Dosovitskiy et al., 2020, Touvron et al., 2021). Furthermore, we explore the connectivity of such level sets on normally trained variants of such networks, as well as adversarially robust counterparts. For the latter, we analyze robust models trained adversarially against \((4/255)\) \(\ell_{\infty}\)-constrained adversaries, available on RobustBench (Croce et al., 2021). Specifically, we utilize a robust ResNet-50 model from Salman et al. (2020) and a robust DeiT model from Singh et al. (2023). We also fix the hyperparameters of LST for all models for a fair comparison (detailed in Section E of the Appendix).
**Image Quality Metrics:** We now proceed to quantitatively verify our observations in Section 3. First, to help quantify the deviation between target images and blindspot outputs generated by our proposed Level Set Traversal algorithm, we utilize a combination of classical image metrics such as RMSE, \(\ell_{\infty}\) and Structural Similarity Index (SSIM), and perceptual measures such as LPIPS distance using AlexNet (Zhang et al., 2018). For SSIM, a higher value indicates a closer match, while for all other metrics, a lower value indicates a closer match. To calculate these metrics, we sample around 1000 source images from ImageNet, and select five other random target images of different classes for each source image. We present the image quality metrics for blindspots discovered by LST in Table 1. Here the standard deviation is over the different randomly chosen source and target images.
**Metrics for Model Confidence:** To evaluate the extent of the model invariance over the regions between the LST outputs and the source image, we evaluate the model confidence with respect to the source class over one-dimensional linear interpolant paths and over two-dimensional subspaces as well. For the latter, we evaluate the model confidence over the triangular convex hull obtained by linear interpolation over three reference points, namely the source image and the two target blindspot images produced using LST. For example, the input at the 'centroid' of the triangle formed by a source image and pair of target blindspots is the arithmetic mean of the three images. We visualize these in Fig 6, wherein the prediction confidence (in the range \([0,1]\)) assigned by the model with respect to the source class is mapped to a continuous colorbar, with high-confidence points (close to 1.0) appearing as bright yellow, and low-confidence points (close to 0.0) appearing as dark violet. Specifically, we use the following metrics: (a) **Average Triangle (\(\Delta\)) Confidence**: the mean of the model's source class confidence over the enclosed triangle, (b) **Average Triangle (\(\Delta\)) Fraction** for various values of \(\delta\): the fraction of inputs in the triangular region for which the model confidence is greater than \(p_{\text{src}}-\delta\), averaged over all possible target blindspot pairs, where \(p_{\text{src}}\) is the confidence of the source image, (c) **Average Path Confidence**: the average model confidence over all linear paths from the source image to all LST blindspot images. The higher these metrics, the more confident, and thus invariant, the model is in this region. For computing these metrics, we use linear interpolations between the source images and the 5 LST outputs found previously for computing the distance metrics. We thus use \(\binom{5}{2}=10\) triangles for each source image, and sample this triangular area in an equispaced manner to obtain 66 images for computation of the triangle (\(\Delta\)) metrics. We present these metrics in Table 2, along with the mean (and standard deviation) of model confidence on the source class images (\(p_{\text{src}}\)) for reference. Here, the standard deviation is over the different randomly chosen source images.
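A sketch of how such triangle metrics can be computed is given below; barycentric sampling with \(k=10\) yields the 66 equispaced points mentioned above, while the helper names and the PyTorch interface are assumptions for illustration.

```python
# Sketch: sample the triangular convex hull of a source image and two LST
# outputs with barycentric weights and average the source-class confidence.
import torch

@torch.no_grad()
def triangle_confidence(model, x_src, x_a, x_b, cls, k=10):
    confs = []
    for i in range(k + 1):
        for j in range(k + 1 - i):
            a, b = i / k, j / k                          # barycentric weights, a + b <= 1
            x = (1 - a - b) * x_src + a * x_a + b * x_b
            confs.append(torch.softmax(model(x.unsqueeze(0)), 1)[0, cls].item())
    confs = torch.tensor(confs)                          # 66 points when k = 10
    return confs.mean().item(), confs
```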
We can now quantitatively confirm many of the trends we qualitatively observed in Section 3. In Table 1, we observe that the LST outputs found for the normally trained models are closer to the targets as compared to those found for the adversarially trained models. The difference is particularly stark when comparing the \(\ell_{\infty}\) distances, as \(\ell_{\infty}\) is a particularly sensitive metric and is also the threat model against which the models were trained. However, for other distance metrics like RMSE or LPIPS, the difference between normally trained and adversarially trained ResNet-50 is not as high. In particular, LPIPS
| **Models** | RMSE (\(\mu\pm\sigma\)) | \(\ell_{\infty}\) dist. (\(\mu\pm\sigma\)) | SSIM (\(\mu\pm\sigma\)) | LPIPS dist. (\(\mu\pm\sigma\)) |
| --- | --- | --- | --- | --- |
| ResNet-50 (Normal) | 0.008 \(\pm\) 0.001 | 0.046 \(\pm\) 0.020 | 0.990 \(\pm\) 0.021 | 0.002 \(\pm\) 0.004 |
| ResNet-50 (AT) | 0.029 \(\pm\) 0.008 | 0.746 \(\pm\) 0.124 | 0.915 \(\pm\) 0.041 | 0.057 \(\pm\) 0.037 |
| DeiT-S (Normal) | 0.011 \(\pm\) 0.002 | 0.116 \(\pm\) 0.030 | 0.973 \(\pm\) 0.024 | 0.024 \(\pm\) 0.017 |
| DeiT-S (AT) | 0.046 \(\pm\) 0.010 | 0.821 \(\pm\) 0.117 | 0.898 \(\pm\) 0.041 | 0.219 \(\pm\) 0.068 |
Table 1: Quantitative image distance metrics between output of Level Set Traversal and target images.
distances for both models are low, which indirectly implies that the perceived difference between the images for human oracles is relatively low. This can also be confirmed by visually inspecting the images in Fig 3. For the normally trained DeiT-S, the distance metrics are very similar to those of ResNet-50, with slightly higher RMSE and LPIPS distances. However, the adversarially trained (AT) variant is significantly less under-sensitive compared to both its normal counterpart and the ResNet-50 (AT). Specifically, the LPIPS distance metric is much greater for DeiT-S (AT), which implies there exist significant human-perceptible differences in the image. We can confirm this in Fig 7, where the LST output for the adversarially trained DeiT-S model contains visible traces of the source image. However, the LST outputs are still clearly much closer to the target class as compared to the source, which indicates that there are still some significant blind spots in DeiT-S (AT).
However, when we measure the extent of model invariance over the convex regions enclosed between the LST output and the target images, we find that adversarially trained ResNet-50 are overall _more_ invariant (or under-sensitive) as compared to the normally trained variant. For example, the average triangle confidence for ResNet-50 (AT) is higher than that of normally trained ResNet-50, even though its source confidence is much lower. We also find that a much larger fraction of the triangular convex hull lies within the superlevel set for \(p_{\text{src}}-\delta\) for ResNet-50 (AT) as compared to normal ResNet-50 for all values of \(\delta\). The average path confidence is much closer to \(p_{\text{src}}\) for ResNet-50 (AT) as compared to normally trained ResNet-50. This quantitatively verifies the observation made in Fig 6 that adversarial training demonstrably exacerbates under-sensitivity. Interestingly, these robust models are under-sensitive over subsets of the input domain that lie well beyond the original threat model used in its training. Moreover, between the normally trained DeiT-S and ResNet-50 models, the former appears to be more invariant with greater average confidence over the triangular convex hulls, despite having lower source image confidences \(p_{\text{src}}\). For the robust variant of DeiT-S however, the trend is less apparent due to the significantly lower average source image confidences \(p_{\text{src}}\). However the average relative \(\Delta\) fraction becomes higher for larger values of \(\delta\) (such as \(0.3\)), indicating that the superlevel sets are indeed expansive, albeit for lower confidence thresholds.
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multirow{2}{*}{\(p_{\text{src}}\)} & \multirow{2}{*}{Avg \(\Delta\) Conf.} & \multicolumn{4}{c}{Avg \(\Delta\) Frac. (\(\mu\pm\sigma\))} & Avg Path Conf. \\ & (\(\mu\pm\sigma\)) & (\(\mu\pm\sigma\)) & \(\delta=0.0\) & \(\delta=0.1\) & \(\delta=0.2\) & \(\delta=0.3\) & (\(\mu\pm\sigma\)) \\ \hline ResNet-50 (Normal) & 0.99 \(\pm\) 0.02 & 0.56 \(\pm\) 0.10 & 0.13 \(\pm\) 0.15 & 0.51 \(\pm\) 0.11 & 0.53 \(\pm\) 0.1 & 0.54 \(\pm\) 0.10 & 0.96 \(\pm\) 0.05 \\ ResNet-50 (AT) & 0.88 \(\pm\) 0.11 & 0.83 \(\pm\) 0.09 & 0.49 \(\pm\) 0.29 & 0.79 \(\pm\) 0.13 & 0.85 \(\pm\) 0.1 & 0.88 \(\pm\) 0.09 & 0.93 \(\pm\) 0.06 \\ DeiT-S (Normal) & 0.85 \(\pm\) 0.06 & 0.68 \(\pm\) 0.05 & 0.54 \(\pm\) 0.11 & 0.67 \(\pm\) 0.06 & 0.71 \(\pm\) 0.06 & 0.73 \(\pm\) 0.06 & 0.94 \(\pm\) 0.02 \\ DeiT-S (AT) & 0.76 \(\pm\) 0.08 & 0.59 \(\pm\) 0.07 & 0.20 \(\pm\) 0.09 & 0.43 \(\pm\) 0.14 & 0.63 \(\pm\) 0.15 & 0.76 \(\pm\) 0.12 & 0.73 \(\pm\) 0.06 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative confidence metrics over the triangular convex hull (\(\Delta\)) of a given source image and two target LST blindspot image-pairs and over linear interpolant paths between source and blindspot images. (For reference, a random classifier would have confidence of 0.001)
Figure 6: Visualization of confidence of standard (top) and robust (bottom) ResNet-50 models over the triangular convex hull of a source ‘goose’ image and two LST outputs for all pairs of target images from 4 other classes (same images as in Fig 3). In all source-target image pairs, the linear interpolant path maintains high confidence, implying that the source image is linearly connected to the LST target output in a star-like substructure within the level set. For adversarially trained models, we observe that a significant fraction of the triangular hull lies in the superlevel sets of high-confidence, thereby indicating their under-sensitivity in regions far beyond their original threat model.
## 7 Discussion
While the existence of level sets alone is not very significant in itself, using the proposed LST algorithm, we find that the level set for common vision models is remarkably expansive: large enough to contain inputs that look near-identical to arbitrary target images from other classes. Since the linear path from any given source image to the LST blindspot outputs retains high model confidence throughout, the level sets have a star-like connected substructure, where the number of 'limbs' or linear protuberances of the star-like structure is _extraordinarily large_, plausibly as large as the number of images in all other classes. This is particularly noteworthy since it indicates the hitherto unknown and unappreciated scale and extent of under-sensitivity in common vision models. Moreover, this hints at the potential difficulty of adequately mitigating this phenomenon in practical settings. For instance, if the level set for images of class \(y_{1}\) contained sizable protuberances towards only one other class \(y_{2}\) alone, the problem could perhaps be tackled by introducing a contrastive training objective that encourages the network to better discriminate between \(y_{1}-y_{2}\) image pairs by utilizing a denser sampling of related image augmentations, likely resulting in the diminution of these specific 'directed' protuberances (assuming reasonable train-test generalization). But since the star-like connected substructure uncovered by LST implies that such protuberances exist towards any generic image of any other class, such simple approaches will likely be ineffective and possibly computationally infeasible from a combinatorial perspective. Thus, based on the observations uncovered with LST, we hypothesize that addressing the pervasive issue of under-sensitivity in conventional vision models might present a significantly non-trivial challenge.
## 8 Related Work
The phenomenon of under-sensitivity in classification models was first pointed out by Jacobsen et al. (2018), wherein they utilize a class of invertible neural networks called fully Invertible RevNets Jacobsen et al. (2018); Kingma and Dhariwal (2018) to specifically craft input images that do not affect network activations at a given layer. In contrast, our proposed algorithm is applicable to general network architectures since we solely utilize input-gradients to perform the traversal over image space. Tramer et al. (2020) further demonstrated that due to the misalignment of \(\ell_{p}\) norm bounded balls and the ideal set of human-imperceptible perturbations, networks that are adversarially trained against such \(\ell_{p}\) bounded perturbations of relatively large radius are overly-smooth, and become excessively susceptible to invariance-based attacks within the same \(\ell_{p}\) radius. To find such images that lie within a given \(\ell_{p}\) threat model, but induce human oracles to change their label assignment, the authors propose to identify the training image from another class closest in image space and apply a series of semantic-preserving transformations, and additionally use techniques such as realignment and spectral clustering. Given that these operations are fairly complex, the attack algorithm is slow, with alignment alone requiring an amortized time of minutes per input example. In contrast, our proposed method relies upon gradient-backpropagation steps which are efficiently parallelized across a minibatch of input images. Furthermore, our technique is seen to be successful for arbitrary source-target image pairs, since we do not utilize near-neighbour training images as the target. On the theoretical front, Jayaram et al. (2020) analyze the problem of span-recovery of neural networks given only oracle access to the network predictions. They characterize the feasibility of span recovery, and thus approximations of the null space of networks in a provable setting, but remark that its success in practical settings is potentially limited to networks that are extremely thin.
## 9 Conclusions
In this work, we investigate the phenomenon of under-sensitivity of classification models, wherein large-magnitude semantic perturbations leave network activations unchanged. To identify such "blind spots" that occur within high-confidence level sets of common vision models, we develop a novel Level Set Traversal algorithm that iteratively perturbs a given source image to appear visually similar to an arbitrary target image by utilizing orthogonal projections with respect to the local input gradient. The proposed method is applicable to general neural networks, and helps uncover a star-like substructure for the level and superlevel sets of CNNs and ViTs on common datasets. We further observe that adversarially trained models retain a high degree of confidence over regions that lie far beyond their original threat model, with superlevel sets that extend over a significant fraction of the triangular convex hull between a given source image and an arbitrary pair of blindspot images.
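As a rough illustration of the traversal step described above, the following minimal PyTorch sketch moves a source image toward a target while projecting out the component of the update that changes the source-class confidence. It is based only on the verbal description given here: the function name, step size, iteration count, and the absence of any confidence-correction step are illustrative assumptions rather than the authors' exact procedure, and the model is assumed to return logits of shape (1, C) for images in [0, 1].

```python
import torch

def level_set_traversal(model, x_src, x_tgt, src_class, steps=400, eta=0.01):
    # Move x toward x_tgt along the component of the difference orthogonal to the
    # input gradient of the source-class confidence, so the confidence is
    # (locally) preserved along the path.
    x = x_src.detach().clone()
    for _ in range(steps):
        x.requires_grad_(True)
        conf = torch.softmax(model(x), dim=-1)[0, src_class]
        g, = torch.autograd.grad(conf, x)
        d = (x_tgt - x).detach()
        g_flat, d_flat = g.flatten(), d.flatten()
        # orthogonal projection: remove the component of d along the gradient g
        d_perp = d - (d_flat @ g_flat) / (g_flat @ g_flat + 1e-12) * g
        x = (x + eta * d_perp).clamp(0, 1).detach()
    return x
```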
Acknowledgements
This project was supported in part by a grant from an NSF CAREER AWARD 1942230, ONR YIP award N00014-22-1-2271, ARO's Early Career Program Award 310902-00001, Meta grant 23010098, HR00112090132 (DARPA/RED), HR001119S0026 (DARPA/GARD), Army Grant No. W911NF2120076, NIST 60NANB20D134, the NSF award CCF2212458, an Amazon Research Award and an award from Capital One.
|
2308.05032 | Density Crop-guided Semi-supervised Object Detection in Aerial Images | One of the important bottlenecks in training modern object detectors is the
need for labeled images where bounding box annotations have to be produced for
each object present in the image. This bottleneck is further exacerbated in
aerial images where the annotators have to label small objects often
distributed in clusters on high-resolution images. In recent days, the
mean-teacher approach trained with pseudo-labels and weak-strong augmentation
consistency is gaining popularity for semi-supervised object detection.
However, a direct adaptation of such semi-supervised detectors for aerial
images where small clustered objects are often present, might not lead to
optimal results. In this paper, we propose a density crop-guided
semi-supervised detector that identifies the cluster of small objects during
training and also exploits them to improve performance at inference. During
training, image crops of clusters identified from labeled and unlabeled images
are used to augment the training set, which in turn increases the chance of
detecting small objects and creating good pseudo-labels for small objects on
the unlabeled images. During inference, the detector is not only able to detect
the objects of interest but also regions with a high density of small objects
(density crops) so that detections from the input image and detections from
image crops are combined, resulting in an overall more accurate object
prediction, especially for small objects. Empirical studies on the popular
benchmarks of VisDrone and DOTA datasets show the effectiveness of our density
crop-guided semi-supervised detector with an average improvement of more than
2\% over the basic mean-teacher method in COCO style AP. Our code is available
at: https://github.com/akhilpm/DroneSSOD. | Akhil Meethal, Eric Granger, Marco Pedersoli | 2023-08-09T15:59:42Z | http://arxiv.org/abs/2308.05032v1 | # Density Crop-guided Semi-supervised Object Detection in Aerial Images
###### Abstract
One of the important bottlenecks in training modern object detectors is the need for labeled images where bounding box annotations have to be produced for each object present in the image. This bottleneck is further exacerbated in aerial images where the annotators have to label small objects often distributed in clusters on high-resolution images. In recent days, the mean-teacher approach trained with pseudo-labels and weak-strong augmentation consistency is gaining popularity for semi-supervised object detection. However, a direct adaptation of such semi-supervised detectors for aerial images where small clustered objects are often present, might not lead to optimal results. In this paper, we propose a density crop-guided semi-supervised detector that identifies the cluster of small objects during training and also exploits them to improve performance at inference. During training, image crops of clusters identified from labeled and unlabeled images are used to augment the training set, which in turn increases the chance of detecting small objects and creating good pseudo-labels for small objects on the unlabeled images. During inference, the detector is not only able to detect the objects of interest but also regions with a high density of small objects (density crops) so that detections from the input image and detections from image crops are combined, resulting in an overall more accurate object prediction, especially for small objects. Empirical studies on the popular benchmarks of VisDrone and DOTA datasets show the effectiveness of our density crop-guided semi-supervised detector with an average improvement of more than 2% over the basic mean-teacher method in COCO style AP. Our code is available at: [https://github.com/akhilpm/DroneSSOD](https://github.com/akhilpm/DroneSSOD)
Semi-supervised object detection, small object detection
## I Introduction
With abundant labeled data and efficient deep learning algorithms, supervised object detection has achieved impressive results in natural images [4, 9, 29, 43]. Though this progress has also resonated in aerial image object detection with images captured by drones and satellites [7, 14, 30, 45], we are yet to see large-scale annotated datasets like Pascal VOC [12], MS-COCO [24] and Open Images [17] for aerial images. Getting sufficient labeled data is difficult in aerial images, especially for instance-level recognition tasks like object detection [28, 32, 46], which limits the scalability of popular supervised detectors to aerial images. Annotating several tiny objects per image is a tedious task [49]. This increases the demand for learning object detectors with limited annotations in aerial image detection. Practical applications with aerial imagery produce large amounts of unlabeled data [3, 39, 40], but these data are simply not utilized in the learning process. Therefore, it is important to leverage the available unlabeled data to train detectors for aerial image detection.
To train a detector with limited annotated images and a large collection of unlabeled data, we focus on the semi-supervised setting. Although semi-supervised object detection (SSOD) has achieved tremendous progress in recent years on natural images [13, 16, 18, 28, 32, 38, 41, 46], we are yet to see its large-scale adoption on aerial images. When adapting the best SSOD techniques from natural images to aerial images, one should be cautious due to the large differences between these image domains. Specifically, factors like the high-resolution nature of the imagery, the small size of the objects, and their sparse distribution across the image need consideration. The number of target objects is also fairly high in aerial images compared to natural images. For example, the average numbers of objects in Pascal VOC and MS-COCO images are 3 and 7, respectively, whereas the images in the VisDrone [49] and DOTA [45] datasets, two popular benchmarks in aerial detection research, contain 53 and 67 objects on average, respectively. A naive adaptation of pseudo-label-based semi-supervised detectors, in this case, does not label enough small objects in the unlabeled images, as shown in figure 1 right. We hypothesize that these difficulties contributed to the lower adoption rate of semi-supervised detectors in aerial image detection. To the best of our knowledge, we are yet to see a large-scale study of recent semi-supervised detectors on aerial images.
In supervised settings, even with a fully annotated training set, vanilla object detectors still struggle to accurately localize small objects in high-resolution aerial images. Thus, additional techniques for performance enhancement are often used, including density-guided detection [8, 10, 19], scale-specific detectors [36, 37], feature fusion [22, 25], and attention methods [47]. Among these, density crop-based approaches are popular as they process clusters of small objects by zooming in on the crops extracted from them, resulting in better small object detection. To bring the same benefits to the semi-supervised setting, we designed our semi-supervised detector with density crop-guided training and inference. Although density crops can be used in supervised settings with external learnable modules and additional loss functions [10, 19, 48], using them in semi-supervised settings with the mean-teacher method [42] is not straightforward. The external module may need additional loss functions, and oftentimes it is trained before the detector with sufficient labeled data. Also, it is not immediately clear how to construct pseudo-labels for the density extraction module if one wants to train it in the mean-teacher setting using unlabeled images. Thus, in our case, we want to identify the density crops from the detector's
prediction itself. For this, we used a cascade zoom-in detector (CZ Detector) design that learns to predict density crops as an additional class in addition to the base class objects. The CZ detector was introduced in [31]. In this work, we extend the CZ detector in order to exploit additional unsupervised data present in a semi-supervised setting. We show that a simple adaptation of the CZ detector for semi-supervised learning would not work and that some additional care is needed. Thus, experiments on two different datasets and meaningful ablation are reported that show the advantages of the proposed approach.
With the CZ detector, density crops can be identified on both labeled and unlabeled images. For the labeled images, they are identified a priori from the available ground-truth (GT) labels. For the unlabeled images, pseudo-GT predictions are used to locate clusters of small objects, which are then labeled as density crops. The crops identified on both labeled and unlabeled images are used to augment the training set. The augmented crops provide more samples of small objects seen at higher pixel resolution, improving their detection chance, since more pseudo-labels are created on the unlabeled images (figure 1 right) compared to the vanilla mean-teacher method. The detector is then trained in the mean-teacher fashion with weak-strong augmentation consistency and pseudo-labels for the unlabeled images. At inference, detection is performed separately on the input image and on up-scaled density crops, if any are present in that image. The detections are then fused and post-processed to obtain the final results. As the training and inference with crops rely only on the detector itself, their semi-supervised adaptation becomes easier. Figure 1 left shows how density crops improve detection AP on the VisDrone dataset. It can be observed that by utilizing the density crops effectively in the semi-supervised setting, our detection accuracy increases significantly over the vanilla semi-supervised detector, just as in the fully supervised setting.
Our main contributions can be summarized as follows:
**(1)** A density crop-guided semi-supervised detection method is proposed for aerial images. It extends the vanilla mean-teacher semi-supervised detector with the capability to identify and process clusters of small objects, improving its suitability for semi-supervised training on high-resolution aerial images.
**(2)** The detector is designed in a zoom-in fashion: it identifies clusters of small objects and re-detects the small objects by upscaling those clusters. Different from other approaches, the zoom-in operation does not need a separate module; the detector itself identifies the regions to zoom in on.
**(3)** We empirically validate the benefits of our semi-supervised detection method on aerial images from drones (VisDrone) and satellites (DOTA), and observe a consistent improvement in detection accuracy on both datasets over supervised training.
## II Related works
**Object Detection.** Generic object detectors have achieved impressive progress in recent years on object detection tasks in natural images [22, 26, 33, 34]. This has resulted in their wider adoption in practical applications. For aerial images, Faster RCNN based detectors are widely used as they tend to be more accurate due to the potential object region extraction performed in the first stage [8, 19, 48]. Recently, one-stage detectors have also been explored for aerial images [47]. Anchor-free detectors [11, 43] are also getting popular these days, since they avoid the need for hand-crafted anchor box dimensions and the anchor matching process. Anchor-based matching is difficult with small objects as the overlap values are low for small bounding boxes. Though appealing,
Fig. 1: (a) Change in mAP over the epochs with and without density crops on supervised and semi-supervised settings. FS: Fully Supervised, FS+C: Fully supervised + density crops, SS: Semi-supervised (mean-teacher baseline), SS+C: Semi-supervised + density crops (on labeled and unlabeled images).(b) The average number of pseudo-GT boxes per image over training iteration. The density crop-guided mean-teacher is producing more pseudo labels compared to the vanilla mean-teacher method. This will result in more pseudo-labels for small objects.
anchor-free detectors are still challenging to use in semi-supervised settings as they produce many noisy pseudo-labels [5]. Handling this might require substantial modifications to the backbone detector and additional loss functions. To avoid further challenges in adapting the backbone detector, in this paper we use the Faster RCNN detector, which has shown excellent results in existing works [8, 10, 48].
**Semi-supervised Object Detection.** Semi-supervised object detectors are trained on a small set of labeled images and a large collection of unlabeled images [41]. They have been explored in many different forms. The dominant approaches for semi-supervised object detection in the past were based on consistency regularization or pseudo-labels [16, 38, 46]. Recently, the mean-teacher method, which combines both approaches, became very popular in object detection, yielding state-of-the-art results [13, 18, 28, 38, 41, 46]. In this setting, a student network is trained to optimize the combined supervised and unsupervised loss. The supervised loss is the standard detection loss. The unsupervised loss forces the student network to make consistent predictions for weakly and strongly augmented versions of an unlabeled image. It is implemented by using pseudo-labels from a teacher network, which temporally accumulates the student network weights in an exponential moving average (EMA) fashion. Initially successful in two-stage detectors, the mean-teacher method is now getting popular with one-stage detectors too [5, 27]. While existing semi-supervised detectors adapt the mean-teacher method to fit different requirements, we focus on adapting the mean-teacher detector to the small object detection setting. To this end, we propose a density crop-guided semi-supervised detector.
**Detection of Small Objects.** Small object detection is gaining popularity these days with the large-scale availability of drone and satellite images. Cheng et al. [6] provide a comprehensive survey of small object detection techniques, which are broadly classified into scale-aware training [21, 22, 36, 37], super-resolution methods [20], context modeling [1], and density-guided detection [19, 48, 10], among others. In aerial images, the small objects are distributed in clusters at sparse locations, so density cropping based approaches have shown excellent results [8, 10, 19, 48]. Thus, we also use density crop-guided detection to improve small object detection in this work. Different from other works, we train the density-based detector in semi-supervised settings. As most density-based detectors are trained with additional density extraction modules and loss functions, they do not fit in the mean-teacher semi-supervised framework. So we use a density-based approach that performs density crop extraction within the detector itself. By doing so, we can easily adapt the mean-teacher semi-supervised training to incorporate small object detection guided by object density.
## III Proposed Approach
In this section, we discuss the formulation of our semi-supervised zoom-in detection in detail. First, we present how we leverage density crops for small object detection within the object detector using the zoom-in design. This design avoids the need for additional modules to obtain density crops, simplifying the subsequent semi-supervised detector learning. Then we present the semi-supervised detection in detail with the teacher-student learning paradigm (mean-teacher method), showing how density crops are used in semi-supervised settings where crops are identified on both labeled and unlabeled images. While for the labeled images the original GT annotations can be used, for the unlabeled images we rely on the detector's predictions. As the majority of the images are unlabeled in the standard semi-supervised setting, without using them we would be limited to a few samples for the crop class. We utilize the pseudo-GT predictions on the unlabeled images to identify dense clusters of small objects. The detector is then trained with augmented crops from labeled and unlabeled images in a teacher-student mutual learning fashion.
### _Zoom-in Detector_
The zoom-in object detector helps us to zoom in on dense image regions containing clusters of small objects and detect them with better localization accuracy. It performs the zoom-in operation like detecting any other class in the dataset, allowing us to use the detector off the shelf. In this section, we explain the working of zoom-in detection in detail. The first component of the zoom-in detector is the density crop labeling algorithm that labels the dense image regions as "density crops" and augments the training data by adding up-scaled versions of those crops. For the labeled data, we use the available GT boxes for the crop labeling. For the unlabeled data, we rely on pseudo-labels for crop labeling. Then, the density crops are added to the set of target objects to be detected on each image. The zoom-in detector is then trained in the mean-teacher fashion. The inference is performed in two stages. In the first stage, we detect base class objects and "density crops" on the input image. In the second stage, the obtained dense image regions are up-scaled and a second detection is performed on them. This helps us to detect the small objects in crowded image regions. A fusion operation is then performed by merging the detections on the input image and its dense regions, which will be explained later.
### _Density Crop Labeling_
To label density crops, we need to locate dense image regions containing clusters of small objects and group each cluster into a single bounding box. These bounding boxes are then added to the training targets as a new class called "density crop". To locate the density crops, we identify the clusters of small objects using algorithm 1. It computes the pairwise IoU (Intersection over Union) among the GT boxes, and adjacent boxes are labeled as connected if their overlap is above a threshold \(\theta\). All connected boxes are merged into a single large box (by finding an enclosing box based on the min and max coordinates of all connected boxes), identifying a dense image region. The merging and connection labeling operations are performed iteratively. Algorithm 1 describes the procedure in detail.
First, all GT boxes \(\mathcal{B}\) are scaled by expanding their min and max coordinates by \(\sigma\) pixels (scale(\(\mathcal{B},\sigma\))). Then the pairwise IoU between the scaled boxes (pairwise_IoU(\(\mathcal{D}\))) is computed and stored in \(O\) as a \(|\mathcal{D}|\times|\mathcal{D}|\) matrix. Two boxes are deemed connected if their overlap is above a threshold \(\theta\). The matrix \(C\) stores all the pairwise connections. Then we select the box \(m^{*}\) with the maximum number of connections from \(C\). An enclosing box is computed (enclosing_box(\(C_{m^{*}}\))) by finding the min and max coordinates of all boxes connected to \(m^{*}\). The newly obtained box is added to the list of crops. Then all connections from the box \(m^{*}\) are removed by setting the row \(C_{m^{*}}\) to zero. Thus we complete the discovery of one density crop. This procedure is repeated until all connections are removed, yielding multiple dense regions in the image. Then we perform a filtering operation where crops larger than a maximum size threshold \(\pi\) are removed from the list \(\mathcal{D}\) (filter_size(\(\mathcal{D},\pi\))). This concludes one step of iterative merging. We perform this operation \(N\) times by considering the newly discovered crops as GT boxes. Without the iterative merging, there would be many redundant crops, as observed in [48]. The size threshold \(\pi\) for the crops is the ratio of the area of the crop to that of the image.
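For concreteness, a minimal NumPy sketch of this iterative crop-labeling procedure is given below. It follows the textual description of Algorithm 1; the function and helper names, the default parameter values, and the symmetric removal of connections are illustrative simplifications, not the released implementation.

```python
import numpy as np

def _area(b):
    return np.maximum(0.0, b[..., 2] - b[..., 0]) * np.maximum(0.0, b[..., 3] - b[..., 1])

def _pairwise_iou(b):
    x1 = np.maximum(b[:, None, 0], b[None, :, 0])
    y1 = np.maximum(b[:, None, 1], b[None, :, 1])
    x2 = np.minimum(b[:, None, 2], b[None, :, 2])
    y2 = np.minimum(b[:, None, 3], b[None, :, 3])
    inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
    return inter / np.maximum(_area(b)[:, None] + _area(b)[None, :] - inter, 1e-9)

def label_density_crops(gt_boxes, img_area, sigma=3, theta=0.1, pi_max=0.3, n_iter=2):
    crops, boxes = [], np.asarray(gt_boxes, dtype=float)      # boxes as [x1, y1, x2, y2]
    for _ in range(n_iter):                                   # iterative merging, N steps
        scaled = boxes + np.array([-sigma, -sigma, sigma, sigma])   # scale(B, sigma)
        conn = _pairwise_iou(scaled) > theta                        # connection matrix C
        np.fill_diagonal(conn, False)
        found = []
        while conn.any():
            m = conn.sum(1).argmax()                          # box with the most connections
            members = np.append(np.where(conn[m])[0], m)
            grp = scaled[members]
            found.append([grp[:, 0].min(), grp[:, 1].min(),
                          grp[:, 2].max(), grp[:, 3].max()])  # enclosing box
            conn[m, :] = conn[:, m] = False                   # remove connections of m
        found = [c for c in found if _area(np.array(c)) / img_area < pi_max]  # filter_size
        if not found:
            break
        crops += found
        boxes = np.array(found)          # treat newly found crops as GT for the next step
    return crops
```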
The newly obtained crops are also used for augmenting the training set. During training time, the detector is processing an up-scaled version of the density crops, and the loss is computed against the target small objects in the crop. As we have the small objects in higher pixel resolution in the up-scaled version, the detector can recognize them easily. Note that in the original input where we resize the image to the detector's maximum training resolution, the small objects from these crowded dense regions are mostly missed by the detector. But when we process the up-scaled density crops, we improve the chance of detecting them. See figure 3 for an illustration of the zoom-in detection in action.
### _Semi-supervised Training_
Semi-supervised learning takes place by distilling the weights of a detector (called a student network) during training
Fig. 2: The pipeline of our proposed density crop guided semi-supervised detection. The training data contains both labeled and unlabeled images. There are two networks that are identical copies of the backbone detector. The student network is learned via backpropagating the loss gradients, whereas the teacher network is an exponential moving average (EMA) of the student weights. The labeled images are passed through the student network and supervised loss \(\mathcal{L}_{sup}\) is calculated. Unlabeled images are passed to the teacher network, whose predictions are then filtered (we used confidence thresholding here) to get good-quality pseudo-labels. If there are dense clusters of small objects in the unlabeled image, such clusters are cropped and passed after up-scaling to the teacher network. Then pseudo-labels are computed on newly added density crops as well in a similar fashion. A strongly augmented version of the unlabeled images and their density crops are then passed to the student network. The loss \(\mathcal{L}_{unsup}\) is calculated based on the pseudo-labels obtained before. The combined loss is then backpropagated to update the student weights. Teacher weights are then updated by EMA of the student weights.
to another identical copy of the network (called the teacher network) by exponential moving average (EMA). The teacher network is generally more stable due to the slower pace at which it temporally ensembles the noisy student weights, so it is used to provide pseudo-GT for the unlabeled images [28]. The student network learns its weights by optimizing a combination of supervised and unsupervised loss. For the labeled data, we have the GT annotations to compute the supervised loss \(\mathcal{L}_{sup}\). Let the available labeled data be \(D_{s}=\{x_{i},y_{i}\}_{i=1}^{N_{s}}\), where each \(y_{i}\) consists of bounding box coordinates and a class label (\(y_{i}=(b_{i},c_{i})\)). Here \(N_{s}\) is the number of labeled samples. For the unlabeled data \(D_{u}=\{x_{i}\}_{i=1}^{N_{u}}\), we get pseudo-GT \(\hat{y}_{i}\) from the teacher network, which is used to calculate the unsupervised loss \(\mathcal{L}_{unsup}\). Here \(N_{u}\) is the number of unlabeled samples. Finally, the network is trained by optimizing the following loss
\[\mathcal{L}=\mathcal{L}_{sup}+\lambda\mathcal{L}_{unsup} \tag{1}\]
where \(\lambda\) is a hyperparameter to control the relative importance of the supervised and unsupervised loss. Figure 2 shows the overall architecture of our semi-supervised learning system. At each iteration, we sample a minibatch of labeled and unlabeled samples following a preset ratio \(d_{r}\). Each datapoint in the minibatch undergoes two types of transformation, referred to as weak and strong augmentation. The weak augmentation is simply the rescaling and horizontal flip transformation. The strong augmentation includes color jittering, grayscale, Gaussian blur, and cutout patches which perform only pixel-level transforms, thus the bounding box labels need not be transformed. We followed the scale ranges provided in [28] for the strong augmentation. The augmented images then go through the mean-teacher semi-supervised learning process. We followed [28] for the mean-teacher training implementation. We compute density crops on the unlabeled images using pseudo-labels from the teacher. This is then used to augment more crops, this time from the unlabeled images. In the following, we will describe the semi-supervised learning process in detail.
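Before detailing the individual stages, the strong augmentation described above can be sketched with a minimal torchvision example. The transform magnitudes are assumptions following common practice rather than the exact ranges of [28], and the weak geometric transforms (resize and flip) are assumed to be handled by the detector's dataloader so that box annotations remain consistent; all transforms below are pixel-level and leave the boxes unchanged.

```python
import torchvision.transforms as T

# Strong augmentation applied on top of the weakly augmented (resized, flipped) view.
strong_aug = T.Compose([
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),              # color jittering
    T.RandomGrayscale(p=0.2),                                               # grayscale
    T.RandomApply([T.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0))], p=0.5),
    T.ToTensor(),
    T.RandomErasing(p=0.7, scale=(0.02, 0.2), value="random"),              # cutout patches
])
```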
#### III-B1 Burn-in stage
To get reliable pseudo GT for the unlabeled images, the teacher network should have a good initialization. Typically, existing methods perform a supervised pre-training with the available supervised data to get this good initialization [28, 38, 41]. This supervised pre-training is called Burn-in stage. During burn-in, we optimize \(\mathcal{L}_{sup}\) only which is a sum of classification and localization losses of the detector.
\[\mathcal{L}_{sup}=\sum_{i=1}^{N_{s}}\mathcal{L}_{cls}(f_{W}(x_{i}),y_{i})+ \mathcal{L}_{reg}(f_{W}(x_{i}),y_{i}) \tag{2}\]
After burn-in, the weights of the network \(W\) are copied to the teacher (\(W\to W_{t}\)) and student network (\(W\to W_{s}\)). From this point, unsupervised data is also used in the learning process with teacher-student mutual learning.
#### III-B2 Teacher-student learning stage
The teacher-student learning process optimizes the loss in equation 1 to learn the student network (with backpropagation), whereas the teacher network is learned by temporally accumulating the student weights (with EMA). It combines consistency regularization and pseudo label-based learning - the most popular approaches for semi-supervised learning - in one framework. The consistency regularization is ensured with the weak-strong augmentation prediction consistency. Pseudo label based learning is performed by producing pseudo labels on the unlabeled images.
The weakly augmented version of the unlabeled data first goes through the teacher network, producing the instance predictions. These predictions then undergo confidence thresholding to produce pseudo-labels \(\hat{y}\). Let \(y_{j}^{pred}=(b_{j}^{pred},c_{j}^{pred},p_{j}^{pred})\) be the instance predictions containing the predicted box \(b_{j}^{pred}\), class \(c_{j}^{pred}\), and probability \(p_{j}^{pred}\), where \(y^{pred}\) is obtained as
\[y^{pred}=f_{W_{t}}(x) \tag{3}\]
The confidence thresholding considers all predictions with a class probability above a threshold \(\tau\) as foreground instances:
\[\hat{y}=\{y_{j}^{pred}|p_{j}^{pred}>\tau,\forall j\in y^{pred}\} \tag{4}\]
This is the filtering process shown in figure 2. Once we get the pseudo labels for the unlabeled images, we can compute the \(\mathcal{L}_{unsup}\). For that, the strongly augmented version of the unlabeled data is passed through the student network to get the predictions. The unsupervised loss is then applied to the classification head as follows:
\[\mathcal{L}_{unsup}=\sum_{i=1}^{N_{u}}\mathcal{L}_{cls}(f_{W_{s}}(x_{i}),\hat {y}_{i}) \tag{5}\]
\(\mathcal{L}_{unsup}\) is not applied to the localization head of the detector because confidence-thresholding-based pseudo-labeling is suitable only for obtaining confident class predictions; it carries no information about the correctness of the bounding boxes. After computing \(\mathcal{L}_{unsup}\), we update the student network weights \(W_{s}\) by optimizing equation 1. The teacher weights \(W_{t}\) are then updated by EMA as follows:
\[W_{t}=\alpha W_{t}+(1-\alpha)W_{s} \tag{6}\]
where \(\alpha\) is a hyperparameter that controls the pace at which the student weights are accumulated into the teacher weights.
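The following sketch summarizes one teacher-student update combining Eqs. (1)-(6). The detector interface (a loss dictionary in training mode, an `inference` call returning scored instances), the loss keys, and the default values of \(\tau\) and \(\lambda\) are assumptions for illustration and do not reflect the released DroneSSOD code.

```python
import torch

def ema_update(teacher, student, alpha=0.9996):
    # Eq. (6): W_t <- alpha * W_t + (1 - alpha) * W_s
    with torch.no_grad():
        for wt, ws in zip(teacher.parameters(), student.parameters()):
            wt.mul_(alpha).add_(ws, alpha=1.0 - alpha)

def train_step(student, teacher, labeled_batch, unlabeled_weak, unlabeled_strong,
               optimizer, tau=0.7, lam=2.0, alpha=0.9996):
    # Supervised loss on labeled images (Eq. 2); the detector is assumed to return
    # a dict of classification/regression losses in training mode.
    loss_sup = sum(student(labeled_batch).values())

    # Pseudo-labels from the teacher on weakly augmented unlabeled images (Eqs. 3-4).
    with torch.no_grad():
        teacher_preds = teacher.inference(unlabeled_weak)
        pseudo_labels = [p[p.scores > tau] for p in teacher_preds]  # confidence thresholding

    # Unsupervised loss on strongly augmented views against the pseudo-labels (Eq. 5);
    # only the classification terms are used, as described in the text.
    unsup = student(unlabeled_strong, targets=pseudo_labels)
    loss_unsup = unsup["loss_cls"] + unsup["loss_rpn_cls"]

    loss = loss_sup + lam * loss_unsup       # Eq. (1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student, alpha)      # Eq. (6)
    return loss.item()
```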
Fig. 3: Detection using the zoom-in detector. (a) detection with a standard Faster RCNN. (b) detection with the zoom-in Faster RCNN. The density crops are shown in red color. The zoom-in detector is detecting many small objects inside the crop regions.
### _Density Crops on Unlabeled Images_
As density crops help to process crowded image regions at higher pixel resolution and improve small object detection performance, it is useful to find them on unlabeled images as well. While for the labeled data \(D_{s}\) we have the GT labels \(y\) to run the crop-labeling algorithm 1, we do not have annotations for the unlabeled data \(D_{u}\) to produce density crops. As we have plenty of unlabeled images, we could get more augmented crops from their dense regions, also increasing the number of samples for the crop category. Thus, we expect a further improvement in performance if density crop-based learning can also be utilized on unlabeled images. To do so, we rely on the predictions of the teacher network. In particular, we utilize the pseudo-labels provided by the teacher network to label crops on the unlabeled images, again using algorithm 1.
After the semi-supervised training with labeled and unlabeled data (where crops are only augmented on labeled images) has converged, we use the final teacher model to get the predictions on the unlabeled images. These predictions are then processed to get accurate pseudo-GTs by confidence thresholding as in equation 4. Crop labeling on the unlabeled images is then performed following algorithm 1, this time with pseudo-GT boxes. The semi-supervised training is then continued as before, but with more unlabeled datapoints obtained from the clusters of small objects in the unlabeled images. As the clusters mostly remain intact on the unlabeled images at this point, it is not necessary to recompute them at every iteration. We recompute them every 10,000 iterations to keep training fast.
### _Multi-stage Inference_
The density crops are leveraged at inference by performing detection on the up-scaled crops and fusing those detections with the detections on the input image. In this way, crowded small-object regions are processed at a higher scale, facilitating better small object detection. Figure 4 bottom explains our inference process in detail. It consists of two stages. In stage one, the detector produces detections on the input image. We can then obtain density crops from these detections in two ways:
1. select the high-quality density crops based on their confidence score of the predicted crop instances.
2. perform crop labeling with the predicted confident detections to get density crops.
The empirical studies show that predicted crops are faster at inference than labeled crops. However, labeled crops are more accurate than the predicted ones. So, depending on the speed vs accuracy trade-off of the downstream application, one can select the suitable inference procedure. In stage two, the up-scaled density crops are passed through the same detector again, producing small object detections on the density crops. Finally, we re-project the detections on the crops to the original image and concatenate them with the detections on the original image. Let \(d\in\mathcal{D}\) be an up-scaled crop image of size \((I_{d}^{W},I_{d}^{H})\) defined by its bounding box coordinates \((d_{x1},d_{y1},d_{x2},d_{y2})\) in the original image. Given the scaling factors \((S_{d}^{W},S_{d}^{H})=(\frac{d_{x2}-d_{x1}}{I_{d}^{W}},\frac{d_{y2}-d_{y1}}{I_{d}^{H}})\), the re-projection box \(p_{i}\) scales down and shifts the predicted boxes \((x_{1,i},y_{1,i},x_{2,i},y_{2,i})\in\mathcal{B}^{d}\) in the crop \(d\) as:
\[p_{i}=(S_{d}^{W}x_{1,i},\,S_{d}^{H}y_{1,i},\,S_{d}^{W}x_{2,i},\,S_{d}^{H}y_{2,i})+(d_{x1},d_{y1},d_{x1},d_{y1}) \tag{7}\]
The Non-Maximal Suppression (NMS) is then applied to remove duplicate detections. All of these operations can be easily wrapped on top of a detector's inference procedure without changing its internal operations.
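A simplified sketch of this two-stage inference and the re-projection of Eq. (7) is given below. The detector is assumed to return (boxes, scores, classes) tensors, `upscale_crop` is an assumed helper that up-scales a crop region, the density-crop class index is a placeholder, and class-agnostic NMS is used for brevity (per-class NMS would be used in practice).

```python
import torch
from torchvision.ops import nms

DENSITY_CROP = 10  # placeholder index of the extra "density crop" class

def reproject(boxes, crop_box, crop_size):
    # Eq. (7): scale detections made inside an up-scaled crop back down and shift
    # them into the coordinate frame of the original image.
    dx1, dy1, dx2, dy2 = crop_box
    Iw, Ih = crop_size                               # resolution of the up-scaled crop
    sw, sh = (dx2 - dx1) / Iw, (dy2 - dy1) / Ih      # scaling factors (S_d^W, S_d^H)
    return boxes * torch.tensor([sw, sh, sw, sh]) + torch.tensor([dx1, dy1, dx1, dy1])

def two_stage_inference(detector, image, upscale_crop, iou_thr=0.5):
    # Stage 1: detect base-class objects and density crops on the full image.
    boxes, scores, classes = detector(image)
    crop_boxes = boxes[classes == DENSITY_CROP]
    keep = classes != DENSITY_CROP
    outs = [(boxes[keep], scores[keep], classes[keep])]
    # Stage 2: re-detect inside each up-scaled crop and re-project with Eq. (7).
    for crop_box in crop_boxes:
        patch, size = upscale_crop(image, crop_box)
        cb, cs, cc = detector(patch)
        m = cc != DENSITY_CROP
        outs.append((reproject(cb[m], crop_box.tolist(), size), cs[m], cc[m]))
    boxes, scores, classes = map(torch.cat, zip(*outs))
    keep = nms(boxes, scores, iou_thr)               # remove duplicate detections
    return boxes[keep], scores[keep], classes[keep]
```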
### _Semi-supervised Training Algorithm_
Algorithm 2 summarizes our density crop-guided semi-supervised training process. Given the labeled and unlabeled data \(D_{s}\) and \(D_{u}\), respectively, we first compute and label crops in \(D_{s}\) using the available ground-truth labels. The training process then begins. We load a batch of images from both the labeled and unlabeled pools. The batch loaded from the labeled pool, \(x_{s}\), is directly used to calculate \(\mathcal{L}_{sup}\). For the batch from the unlabeled pool, \(x_{u}\), strongly and weakly augmented versions are produced. The teacher processes the weakly augmented images, computing pseudo-labels \(\hat{y}_{u}\) for the images in \(x_{u}\). These are then used to compute \(\mathcal{L}_{unsup}\), where the loss is computed against the student predictions obtained from the strongly augmented images. The combined loss \(\mathcal{L}\) is backpropagated, and then the teacher weights are updated using the EMA update rule in equation 6. When this training process has converged (after a sufficient number of iterations \(n\)), crops are computed on the unlabeled images and used to further augment \(D_{u}\).
## IV Experiments
**Datasets and evaluation measures.** For the evaluation of methods, we employed two popular challenging benchmark datasets for Aerial Image Detection, namely the VisDrone [49] and DOTA [45] datasets. We used COCO style AP for assessing and comparing the performance of the methods [24]. The AP of small, medium, and large objects are also reported, in particular, to understand the performance of methods for small-scale object detection in aerial images.
Fig. 4: Multi-stage inference with density crops. During the first stage, detections are obtained on the input image. Density crops are then derived using these detections. In the second stage, the density crops are upscaled and fed to the detector again followed by a second inference. Finally, the detections on density crops are combined with the detections on the input image.
**VisDrone.** This dataset contains 8,599 drone-captured images (6,471 for training, 548 for validation, and 1,580 for testing) with a resolution of about 2000 \(\times\)1500 pixels. The objects are from ten categories with 540k instances annotated in the training set, mostly containing different categories of vehicles and pedestrians observed when the drone is flying through the streets. It has an extreme class imbalance and scale imbalance making it an ideal benchmark for studying small object detection problems. As the evaluation server is closed now, following the existing works, we used the validation set for evaluating the performance.
**DOTA.** This dataset is comprised of satellite images. The images in this dataset have a resolution ranging from 800\(\times\)800 to 4000\(\times\)4000. Around 280k annotated instances are present in the dataset. The objects are from fifteen different categories, with movable objects such as planes, ships, large vehicles, small vehicles, and helicopters. The remaining ten categories are roundabouts, harbors, swimming pools, etc. The training and validation data contain 1411 images and 458 images, respectively. Since the images of DOTA are too large to be fed to the network directly, we extracted 1500\(\times\)1500 crops from the image by shifting 1000 pixels in a sliding window fashion.
**Implementation details.** The Detectron2 toolkit [44] was used to implement our CZ detector. The backbone detector used in our study is Faster RCNN [34]. We used a Feature Pyramid Network (FPN) [22] backbone with ResNet50 [15] pre-trained on the ImageNet [35] dataset for our experimental validation. For data augmentation, we resized the shorter edge to a value randomly picked from (800, 900, 1000, 1100, 1200) and applied a horizontal flip with 50% probability. The model was trained on both datasets for 180k iterations. The initial learning rate is set to 0.01 and decayed by a factor of 10 at 70k iterations. For training, we used one NVIDIA A100 GPU with 40 GB of memory.
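For illustration, a hedged Detectron2 configuration sketch matching the stated supervised-training hyperparameters is shown below; the dataset name is a placeholder, the base config already initializes an ImageNet-pretrained ResNet-50 FPN Faster R-CNN, and the snippet does not include the mean-teacher or density-crop components.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml"))
cfg.DATASETS.TRAIN = ("visdrone_train_labeled",)           # placeholder dataset name
cfg.INPUT.MIN_SIZE_TRAIN = (800, 900, 1000, 1100, 1200)    # shorter-edge multi-scale resize
cfg.INPUT.RANDOM_FLIP = "horizontal"                       # 50% horizontal flip
cfg.SOLVER.BASE_LR = 0.01
cfg.SOLVER.MAX_ITER = 180_000
cfg.SOLVER.STEPS = (70_000,)                               # decay LR by GAMMA at 70k iterations
cfg.SOLVER.GAMMA = 0.1
```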
### _Comparison with Different Percentage of Labeled Data_
We analyzed the effectiveness of our semi-supervised learning method by using partially labeled data from the train set of VisDrone and DOTA datasets. In particular, we used 1%, 5%, and 10% randomly chosen data points from the train set as labeled data and the remaining as unlabeled for the semi-supervised training. There are five settings in the comparison; supervised baseline, supervised baseline with density crops (Supervised + Crop), semi-supervised with the mean teacher (SSOD), SSOD with density crops on labeled images (SSOD + Crop (L)), and SSOD with density crops on labeled and unlabeled images (SSOD + Crop (L + U)). These settings progressively assess the impact of the components of our density crop-guided semi-supervised object detection.
Table I presents the results for the VisDrone [49] dataset. It compares the detection average precision values obtained using the COCO evaluation protocol [24] for Intersection over Union (IoU) thresholds [0.5:0.05:0.95] (**AP**) and 0.5 (**AP\({}_{50}\)**). It can be observed that **AP** is improved by more than 6% in all cases with our density-guided SSOD over the supervised baseline. Compared to the vanilla mean-teacher method (SSOD), our density crop-guided SSOD shows an average improvement of more than 2% on all metrics. Compared to the 1% and 5% cases, which have very limited labeled samples per class, the 10% case shows a larger boost in performance when leveraging density crops with SSOD. Another interesting result is that the performance of semi-supervised learning in the 1% setting exceeds that of supervised training with 5% labels and is only 2% below supervised training with 10% labels. This is achieved with fewer than 100 labeled samples. AP\({}_{50}\) gains more than 5% compared to the vanilla mean-teacher when semi-supervised learning is performed with density crops in the 10% setting.
We also studied how the AP of small, medium, and large objects behave in the same five settings described above. Figure 5 shows the results. The trend here is similar to that of table I. Using density crops increases the detection accuracy both in supervised and semi-supervised settings. Compared to the supervised settings, the AP of all-sized objects increases by more than 5% when semi-supervised learning is performed with density crops. The improvement over the vanilla mean-teacher is more than 3% in most settings. The APs of small, medium, and large objects with fully supervised training using 100% labeled data are 25.74, 42.93, and 41.44 respectively. It can be observed that our model with 10% labeled data performs competitively with this fully supervised upper bound.
We further verified this observation by conducting the same type of study in the satellite images of the DOTA dataset. Table II shows the results. The magnitude of improvements is comparable to that of the VisDrone dataset. AP shows an average improvement above 2% compared to the mean-teacher method. AP\({}_{50}\) has a gain of more than 3% in this dataset compared to the mean-teacher. Also, the APs of small, medium, and large objects are studied in the same way as above. Figure 6 shows the results. APs of small, medium, and large objects with 100% supervised data on the DOTA
dataset are 15.66, 38.16, and 44.2 respectively. While for small objects, our method with 10% labeled data is 3% below the supervised upper bound, the gap is around 10% for medium and large objects. This implies the boost from the density-guided training is more concentrated on the small objects. All of these experiments confirm the impact of each component in our model as well. The performance gain with our density-guided semi-supervised detector over the supervised baseline is significant and consistent.
We also produced a qualitative comparison of the detection results from our semi-supervised model with those of its supervised baseline. Figure 7 shows the comparison on the DOTA (top two rows) and VisDrone (bottom two rows) datasets. Within each pair of rows, the supervised baseline is shown at the top and the semi-supervised results at the bottom. We can see that many tiny objects are detected with our density-guided semi-supervised detector. In the case of the VisDrone dataset, the baseline detector misses most of the small objects farther from the camera, whereas our method with zoom-in capability discovers them. In the DOTA dataset, misses happen at a much higher rate as the images have very high pixel resolution. In particular, objects like small cars are mostly missed by the baseline detector on the DOTA dataset, but our method shows impressive results
Fig. 5: Detection AP of small, medium, and large objects with different percentages of supervised data on the VisDrone dataset. FS: fully supervised, FS+C: fully supervised with crops, SS: vanilla mean-teacher, SS+C: mean-teacher with density crops on labeled images, SS+C+U: mean-teacher with density crops on all images.
Fig. 6: Detection AP of small, medium, and large objects with different percentages of supervised data on the DOTA dataset. FS: fully supervised, FS+C: fully supervised with crops, SS: vanilla mean-teacher, SS+C: mean-teacher with density crops on labeled images, SS+C+U: mean-teacher with density crops on all images.
in detecting them.
### _Comparison with Other Semi-supervised Detectors_
As other density-based approaches for small object detection use an external module (and multi-stage training) for crop extraction, we cannot adapt them to the semi-supervised setting with the mean-teacher. So, we chose the recently proposed scale-aware QueryDet [47], as it also accelerates small object detection within the detector itself. In particular, it proposes sparse querying on the high-resolution feature maps to improve small object detection. This is implemented on the feature pyramids within a detector, so we can wrap the mean-teacher training on top of this method. We used the VisDrone dataset with 10% labels in this study. The result is shown in table III. Our method achieves an AP more than 7% higher than the semi-supervised QueryDet detector. AP\({}_{s}\) is improved by 7%, whereas AP\({}_{m}\) and AP\({}_{l}\) are improved by more than 10%. While the semi-supervised QueryDet improves by 3% over its supervised baseline, our method improves by 8% over its supervised baseline. Note that the supervised baselines are different here because QueryDet proposed a method specific to the RetinaNet [23] detector. This study also establishes the superiority of density-based detection over scale-aware training.
### _Ablation Studies_
The improvement in performance when progressively adding density crops and semi-supervised detection is verified with the results in tables II and I. The results in these tables ablate the components of the proposed method extensively. Furthermore, the change in performance on objects of different sizes with these components is illustrated in figures 5 and 6. These observations confirm that using density crops with the semi-supervised mean-teacher detector significantly improves the results over the basic supervised detector. As the impact
Fig. 7: Qualitative comparison of detection results between supervised baseline and semi-supervised detector trained with density crops. More objects are detected with our semi-supervised zoom-in detector, especially the small ones.
of the components of the proposed method is verified with these experiments, we devote the ablation studies to consider other design choices and fine-grained analysis of the method's performance. We used the VisDrone dataset in this study.
### _Inference_
The inference with density crops can be performed in two ways: taking the crop predictions directly from the model, or running the cluster labeling algorithm on the output detections. While crop predictions are faster at inference, we observed that running the cluster labeling algorithm on the detection output is slightly more accurate. So one can choose between the two inference procedures based on the speed vs accuracy trade-off of the downstream application. In the results reported so far, we used crop predictions directly from the model. To compare the two, we performed inference in both ways and report the performance in table IV. The VisDrone dataset with 10% labels is used in this study. We can observe that while the improvement is small in AP, AP\({}_{50}\) has a gain of more than 1%. We can also see that crop-labeled inference improves the AP of small objects significantly, but at the same time the AP of medium and large objects declines. As the dataset is dominated by small objects, we still observe an overall improvement in performance. We also report the detection speed in Frames Per Second (FPS). The FPS is only reduced by 5 frames when the more expensive crop-labeled inference is used.
### _Comparison with the Supervised Upper-bound_
In table V, we compare the results of our semi-supervised model with the fully supervised upper bound where 100% of the images are labeled. The setting used here is 10% labeled images. The lower bound of the performance, obtained when only the available 10% labeled data is used, is also provided. It can be observed that our method with 10% labeled data is within approximately 6 points of the upper bound, both in AP and AP\({}_{s}\). AP\({}_{m}\) and AP\({}_{l}\) also show a similar trend. Therefore, it can be concluded that, by effectively leveraging unlabeled data, our method is able to achieve a performance close to the fully supervised upper bound, while using minimal labeled data points.
### _Computational Cost_
Using unlabeled data for mean-teacher training comes with additional training costs. The exponentially averaged teacher weights must be updated slowly, i.e., with an \(\alpha\) value close to 1, to achieve stable distillation. We used \(\alpha=0.9996\) following standard practice [28, 41]. This results in many iterations for the mean-teacher training. In table VI, we compare the training iterations and time for different settings. Inference time per image is also provided. Finding crops on unlabeled images is performed only after the pseudo-labels on unlabeled images have converged. The augmentation then adds an additional set of crops to the training process. That is why the SS+C (L+U) setting requires more iterations. For inference, the difference when using crops is due to the second detection performed on the crops. Even though there is an effective increase in training and inference time, the improvement in detection performance is significant.
### _Analysis of the Type of Errors_
To understand how the addition of semi-supervised learning and density crops affects the detector's abilities, we profiled different error types based on the TIDE [2] evaluation protocol. Figure 8 shows the comparison results. With the addition of density crops to a supervised detector, we observe that the localization error is reduced, while other types of errors remain mostly the same. With semi-supervised training using the vanilla mean-teacher method, the classification error is reduced. Using density crops with semi-supervised learning reduces the localization error, similar to the fully supervised case, while other errors remain mostly the same. Compared to fully supervised detectors, semi-supervised detectors reduce classification error, but they tend to miss more objects. This is probably due to the imbalance in object classes of this dataset, such that dominant classes get more pseudo-labels on unlabeled images. This can result in rare-class objects being missed on the unlabeled images.
## V Conclusion
We proposed an efficient adaptation of the mean-teacher semi-supervised method to high-resolution aerial images for the detection of small objects. This is achieved by identifying the clusters of small objects from labeled and unlabeled images and processing them in higher resolution. For the labeled images, the original ground-truth is used for crop identification,
whereas on unlabeled images the pseudo ground-truth labels from the mean-teacher detector are used. As crop identification happens within the detector, it is possible to wrap the mean-teacher training on top of it. The identified clusters are cropped and used to augment the training set. Training with the augmented crops produces more pseudo-labels than the vanilla mean-teacher, which translates to improved detection performance. Inference is performed on the original image and on the crops of clusters detected in it to boost small object detection. Empirical studies on popular benchmark datasets reveal the superiority of our method over supervised training and vanilla mean-teacher training. We also find that, for small object detection, density-based approaches provide a larger performance boost with the mean-teacher method than scale-aware training.
|
2306.14493 | Josephson dynamics at high transmissions: Perturbation theory | We theoretically analyze Josephson dynamics of superconducting weak links
with transmissions ${\mathcal T}$ not much smaller than unity at subgap bias
voltages $V$. Employing the effective action approach combined with the Keldysh
technique we develop a regular perturbation theory in ${\mathcal R}=1-{\mathcal
T}$ and derive the first order correction to the current across the weak link
which consists of two different contributions. One of them is negative
effectively corresponding to a decrease of the excess current at small $V$ due
to breaking of the multiple Andreev reflection cycle by normal reflection for
some subgap quasiparticles. These quasiparticles, in turn, generate the second
-- Josephson-like -- contribution to the current which increases with
decreasing $V$ down to very small voltages where the perturbation theory in
${\mathcal R}$ ceases to be valid. Some of the above features are not
reproduced within the physical picture involving Landau-Zener tunneling between
subgap Andreev states. | Artem V. Galaktionov, Andrei D. Zaikin | 2023-06-26T08:08:38Z | http://arxiv.org/abs/2306.14493v1 | # Josephson dynamics at high transmissions: Perturbation theory
###### Abstract
We theoretically analyze Josephson dynamics of superconducting weak links with transmissions \(\mathcal{T}\) not much smaller than unity at subgap bias voltages \(V\). Employing the effective action approach combined with the Keldysh technique we develop a regular perturbation theory in \(\mathcal{R}=1-\mathcal{T}\) and derive the first order correction to the current across the weak link which consists of two different contributions. One of them is negative effectively corresponding to a decrease of the excess current at small \(V\) due to breaking of the multiple Andreev reflection cycle by normal reflection for some subgap quasiparticles. These quasiparticles, in turn, generate the second - Josephson-like - contribution to the current which increases with decreasing \(V\) down to very small voltages where the perturbation theory in \(\mathcal{R}\) ceases to be valid. Some of the above features are not reproduced within the physical picture involving Landau-Zener tunneling between subgap Andreev states.
## I Introduction
Josephson dynamics beyond the tunneling limit involves a non-trivial interplay between superconductivity and non-equilibrium effects which can in general be described only with the aid of complicated many-body techniques possibly combined with numerical methods. In some special cases one can also proceed analytically. An example of such a situation has to do with the ac Josephson effect in short ballistic superconductor-normal metal-superconductor (SNS) junctions or superconducting quantum point contacts at full transmission [1; 2]. In this case, for bias voltages \(V\) well below the superconducting gap of the electrodes, \(eV\ll\Delta\), one recovers the following expressions for the time-dependent current across the system
\[I(t)\equiv I_{0}(t)=I_{c}\left|\sin eVt\right|\,\mathrm{sgn}\,V \tag{1}\]
and for the \(I-V\) curve
\[\overline{I_{0}}=\frac{2I_{c}}{\pi}\,\mathrm{sgn}\,V, \tag{2}\]
where
\[I_{c}=e\Delta\tanh\left(\frac{\Delta}{2T}\right) \tag{3}\]
is the dc Josephson critical current [3] of the structure at temperature \(T\). Note that for simplicity in Eqs. (1), (2) we neglected the growing with \(V\) term that is small in the parameter \(eV/\Delta\). Equations (1)-(3) hold for a superconducting point contact with a single transport channel at full transmission \(\mathcal{T}=1\) and are trivially generalized to the case of an arbitrary number of channels. Note that here and below \(-e\) is the electron charge, and we set the Planck and Boltzmann constants equal to unity (\(\hbar=k_{B}=1\)).
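As a quick numerical illustration of Eqs. (1)-(3), the sketch below (ours; units with \(\hbar=k_{B}=1\), \(e=\Delta=1\), and arbitrary illustrative values of \(T\) and \(V\)) evaluates \(I_{0}(t)\) over one period and checks that its time average reproduces \(2I_{c}/\pi\).

```python
import numpy as np

Delta, T, e, V = 1.0, 0.2, 1.0, 0.05      # illustrative values, e*V << Delta

I_c = e * Delta * np.tanh(Delta / (2.0 * T))         # Eq. (3)

t = np.linspace(0.0, np.pi / (e * V), 20001)         # one period of |sin(eVt)|
I0 = I_c * np.abs(np.sin(e * V * t)) * np.sign(V)    # Eq. (1)

I_avg = np.trapz(I0, t) / (t[-1] - t[0])
print(I_avg, 2.0 * I_c / np.pi)                      # should coincide, Eq. (2)
```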
The charge transfer in such superconducting contacts is essentially governed by the mechanism of multiple Andreev reflection (MAR) [4] which yields, e.g., the large excess current on the \(I-V\) curve (2) already at small voltages as well as a non-trivial current-phase relation (1). Remarkably, it was demonstrated [2; 5] that the same results can also be recovered without involving the physical picture of MAR but operating only with occupation probabilities of subgap Andreev bound states \(\pm\epsilon_{A}\) (see Fig. 1). The corresponding kinetic equation that controls dynamics of these occupation probabilities is supplemented with the boundary conditions [2; 5] for the superconducting phase difference \(\chi\) across the junction resulting from discrete Andreev levels merging with the continuum at \(\chi=0,\pm 2\pi,...\)
The description of Josephson dynamics at subgap voltages employing the effective basis of Andreev levels was also extended [2; 5] to the case of non-zero reflection coefficients \(\mathcal{R}\equiv 1-\mathcal{T}>0\), which otherwise was treated
Figure 1: A pair of Andreev levels \(\pm\epsilon_{A}(\chi)\) in a superconducting quantum point contact with \(\mathcal{T}=1,0.97\) and \(0.9\) shown respectively by green, orange and blue lines.
with the aid of numerics [6; 7]. For non-zero \(\mathcal{R}\) the gap between two Andreev levels develops (see Fig. 1), and it was suggested in Refs. [2] and [5] to include the effect of Landau-Zener tunneling between these levels with the probability \(p=\exp(-\pi\mathcal{R}\Delta/e|V|)\) as an extra boundary condition at \(\chi=\pi\mod(2\pi)\). Following this scenario one concludes that, since for \(e|V|\ll\pi\mathcal{R}\Delta\) the probability of Landau-Zener tunneling is exponentially small \(p\ll 1\), the system should remain in the lowest Andreev state and, hence, the current \(I(t)\) is essentially described by a time-dependent version of the Kulik-Omelyanchuk current-phase relation [3]. On the other hand, in the opposite limit \(e|V|\gg\pi\mathcal{R}\Delta\) (though \(e|V|\ll 2\Delta\) also implying \(\mathcal{R}\ll 1\)) the probability \(p\) becomes close to unity, both Andreev levels get occupied and the current \(I(t)\) can be evaluated perturbatively in \(1-p\). In this limit Eq. (11) of Ref. [2] yields
\[I(t)=I_{0}(t)-\delta I(t),\quad\delta I(t)=\pi\overline{\delta I}F(eVt), \tag{4}\]
where
\[\overline{\delta I}\simeq\frac{2\mathcal{R}\Delta}{eV}I_{c} \tag{5}\]
denotes the correction to the average current (i.e. \(\overline{I}=\overline{I_{0}}-\overline{\delta I}\)) and
\[F(x)=F_{LZ}(x)=\Theta\left(x-\frac{\pi}{2}\right)\sin x,\quad 0\leq x\leq\pi \tag{6}\]
is the \(\pi\)-periodic function that accounts for Landau-Zener tunneling and \(\Theta(x)\) is the Heaviside step function. Note that the current correction \(\delta I(t)\) defined in Eqs. (4), (6) becomes discontinuous at \(2eVt=\pi\mod(2\pi)\).
The above reduced description of adiabatic Josephson dynamics operating only with subgap Andreev states was later employed by many authors in different physical contexts, see, e.g., Refs. [8; 9; 10; 11] as well as many other publications. While this description is quite simple and intuitively appealing, the question remains if it is fully equivalent to the standard physical picture of the charge transfer in terms of MAR. This question will be addressed in our present work.
Note that the applicability condition of the perturbation theory in \(\mathcal{R}\ll 1\) can easily be reconstructed also within the framework of the physical picture of MAR. Indeed, in order for the current \(I(t)\) to be only weakly disturbed as compared to \(I_{0}(t)\) (1) the total normal reflection probability \(\mathcal{R}n\) within the full MAR cycle of \(n\) traverses across the weak link (see Fig. 2) should remain much smaller than unity. Since \(n\sim 2\Delta/e|V|\), we immediately recover the condition \(e|V|\gg 2\mathcal{R}\Delta\) essentially equivalent to that derived within the Landau-Zener tunneling scenario.
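To make these estimates concrete, the short check below (with illustrative values \(\mathcal{R}=0.02\) and \(eV=0.3\Delta\)) compares the total normal-reflection probability \(\mathcal{R}n\) accumulated over one MAR cycle with the Landau-Zener probability \(p=\exp(-\pi\mathcal{R}\Delta/e|V|)\).

```python
import numpy as np

Delta = 1.0
R, eV = 0.02, 0.3 * Delta            # illustrative values

n = 2.0 * Delta / eV                 # number of traverses in a full MAR cycle
print("R*n =", R * n)                # ~0.13 << 1: the perturbation theory applies
print("p   =", np.exp(-np.pi * R * Delta / eV))   # Landau-Zener probability, ~0.81
```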
Under this condition below we will develop a regular perturbation theory in \(\mathcal{R}\) and evaluate the first order correction to the current \(\delta I(t)\) both analytically and numerically. We demonstrate that the function \(\delta I(t)\) is continuous at any \(t\). Expressing it in terms of Fourier series, we have
\[\delta I(t)=\pi\overline{\delta I}\sum_{l=-\infty}^{\infty}F_{l}e^{-2ileVt}, \quad F_{-l}=F_{l}^{*}. \tag{7}\]
Within the framework of our analysis we arrive at the result for the average current \(\overline{\delta I}\) consistent with Eq. (5). At the same time, all Fourier coefficients \(F_{l}\) with \(|l|>1\) exhibit qualitatively different behavior from that found for the analogous Fourier coefficients of the function \(F_{LZ}(x)\) (6).
Our paper is organized as follows. In Section II we develop a regular perturbation theory that allows to evaluate the first order in \(\mathcal{R}\) correction \(\delta I(t)\) to the current flowing across a superconducting weak link. Section III is devoted to the calculation of \(\delta I(t)\) in the limit of small constant voltage bias \(|V|\ll 2\Delta/e\). The results are further discussed in Section IV. Additional information employed in our numerical calculation is presented in Appendix A.
## II Perturbation theory
In our subsequent analysis, we will closely follow the procedure outlined in detail in Ref. [12], i.e. we will employ the combination of the Keldysh technique and the effective action approach. In order to describe charge transfer across our superconducting weak link with arbitrary distribution of transmission probabilities \(\mathcal{T}_{k}\equiv 1-\mathcal{R}_{k}\) over its conducting channels \(k\) we routinely use
Figure 2: A typical MAR trajectory in a highly transparent superconducting weak link biased by a small constant voltage \(V\ll\Delta/e\). A quasiparticle (hole) suffers successive Andreev reflections at both NS interfaces (solid line) until it gets normally scattered at the point marked by a star. After this scattering event the quasiparticle either (a) passes the barrier with the probability \(\mathcal{T}=1-\mathcal{R}\) and completes the MAR cycle (solid line) or (b) gets out of the MAR cycle with the probability \(\mathcal{R}\) and further propagates along the time-reversed trajectory indicated by the dashed line. Since normal scattering occurs for each of \(n\) traverses across the weak link the total probability for the quasiparticle to leave the full MAR cycle is \(1-\mathcal{T}^{n}\) which equals to \(\mathcal{R}n\) as long as \(\mathcal{R}n\ll 1\).
the general expression for the effective action [13; 14] that can be written in the form
\[iS_{t}[\varphi]=\frac{1}{2}\sum_{k}\mathrm{Tr}\,\ln\left[1+\frac{ \mathcal{T}_{k}}{4}\left(\left\{\tilde{Q}_{L}(\varphi),\tilde{Q}_{R}\right\}-2 \right)\right]\] \[=\frac{1}{2}\sum_{k}\mathrm{Tr}\,\ln\left[\frac{1-\mathcal{R}_{k }}{4}\left(\tilde{Q}_{L}+\tilde{Q}_{R}\right)^{2}+\mathcal{R}_{k}\right]. \tag{8}\]
The summation over all conducting channels \(k\) is implied in Eq. (8), and \(\{...,...\}\) stands for the anticommutator. The product of the \(4\times 4\) matrices \(\tilde{Q}_{L}\) and \(\tilde{Q}_{R}\) (defined in Keldysh and Nambu spaces), describing respectively the left and the right superconducting reservoirs, implies the time convolution:
\[\left(\tilde{Q}_{L}\circ\tilde{Q}_{R}\right)(t^{\prime},t^{\prime\prime})= \int\limits_{-\infty}^{\infty}dt\tilde{Q}_{L}(t^{\prime},t)\tilde{Q}_{R}(t,t^ {\prime\prime}), \tag{9}\]
and the matrices \(\tilde{Q}_{L,R}\) obey the standard normalization condition \(\left(\tilde{Q}_{L}\circ\tilde{Q}_{L}\right)(t^{\prime},t^{\prime\prime})= \delta(t^{\prime}-t^{\prime\prime})\), where \(\delta(t)\) is the Dirac delta function. This normalization condition was directly employed in order to cast the action to the form presented in the second line of Eq. (8), which serves as a convenient starting point of our perturbation theory.
Assuming that all reflection coefficients \(\mathcal{R}_{k}\) are much smaller than unity we may formally limit our analysis to the first order in \(\mathcal{R}_{k}\) and further rewrite Eq.(8) as
\[iS_{t}[\varphi]=\frac{1}{2}\sum_{k}\mathrm{Tr}\,\ln\left[\frac{1 }{4}\bigg{(}\left(1-\frac{\mathcal{R}_{k}}{2}\right)\left(\tilde{Q}_{L}+\tilde {Q}_{R}\right)\right.\] \[\left.+2\mathcal{R}_{k}\left(\tilde{Q}_{L}+\tilde{Q}_{R}\right)^ {-1}\bigg{)}^{2}\right]. \tag{10}\]
Introducing the matrix
\[\tilde{Q}=\frac{1}{2}\tilde{I}\left(\tilde{Q}_{L}+\tilde{Q}_{R}\right),\quad \tilde{I}=\left(\begin{array}{cc}\hat{\tau}_{3}&0\\ 0&-\hat{\tau}_{3}\end{array}\right), \tag{11}\]
where \(\hat{\tau}_{3}\) stands for the Pauli matrix, we can convert the action to the form
\[iS_{t}[\varphi]=\sum_{k}\mathrm{Tr}\,\ln\left[\left(1-\frac{\mathcal{R}_{k}}{ 2}\right)\tilde{Q}+\frac{\mathcal{R}_{k}}{2}\tilde{I}\tilde{Q}^{-1}\tilde{I} \right]. \tag{12}\]
It is straightforward to check that at \(t\to t^{\prime}\) one has \(\tilde{Q}(t,t^{\prime})\approx\delta(t-t^{\prime})\), i.e. the expansion of the logarithm is properly organized.
Following [12] let us decompose the matrix \(\tilde{Q}\) as
\[\tilde{Q}=\tilde{Q}_{0}+\tilde{Q}_{1}. \tag{13}\]
The matrix \(\tilde{Q}_{0}\) reads
\[\tilde{Q}_{0}=\left(\begin{array}{cc}\hat{a}^{R}&\hat{a}^{K}\\ 0&-\hat{a}^{A}\end{array}\right), \tag{14}\]
where
\[\hat{a}^{R,A,K}(t,t^{\prime})=\left(\begin{array}{cc}g^{R,A,K}(t,t^{\prime} )\cos\left[\frac{\varphi_{+}(t)-\varphi_{+}(t^{\prime})}{2}\right]&f^{R,A,K}( t,t^{\prime})\cos\left[\frac{\varphi_{+}(t)+\varphi_{+}(t^{\prime})}{2} \right]\\ f^{R,A,K}(t,t^{\prime})\cos\left[\frac{\varphi_{+}(t)+\varphi_{+}(t^{\prime}) }{2}\right]&g^{R,A,K}(t,t^{\prime})\cos\left[\frac{\varphi_{+}(t)-\varphi_{+}( t^{\prime})}{2}\right]\end{array}\right), \tag{15}\]
\(g^{R,A,K}\) and \(f^{R,A,K}\) denote respectively normal and anomalous retarded (R), advanced (A) and Keldysh (K) quasiclassical Green functions of a superconductor [15] and - in the absence of electron-electron interactions - \(\varphi_{+}(t)=\varphi_{0}+e\int_{0}^{t}V(t^{\prime})dt^{\prime}\) is a half of the time-dependent superconducting phase difference across our weak link.
The matrix \(\tilde{Q}_{1}\) has the form
\[\tilde{Q}_{1}(t,t^{\prime})=\frac{\varphi_{-}(t)}{4}\tilde{B}(t,t ^{\prime})+ \tag{16}\] \[\left(\begin{array}{cc}0&\hat{\tau}_{3}\\ \hat{\tau}_{3}&0\end{array}\right)\tilde{B}(t,t^{\prime})\left(\begin{array}{ cc}0&\hat{\tau}_{3}\\ \hat{\tau}_{3}&0\end{array}\right)\frac{\varphi_{-}(t^{\prime})}{4},\]
where
\[\tilde{B}(t,t^{\prime})=\left(\begin{array}{cc}0&-\hat{b}^{A}(t,t^{\prime}) \\ \hat{b}^{R}(t,t^{\prime})&\hat{b}^{K}(t,t^{\prime})\end{array}\right), \tag{17}\]
\(\varphi_{-}\) is the "quantum" part of the phase difference and
\[\hat{b}^{R,A,K}(t,t^{\prime})=\left(\begin{array}{cc}g^{R,A,K}(t,t^{\prime})\sin\left[\frac{\varphi_{+}(t)-\varphi_{+}(t^{\prime})}{2}\right]&f^{R,A,K}(t,t^{\prime})\sin\left[\frac{\varphi_{+}(t)+\varphi_{+}(t^{\prime})}{2}\right]\\ f^{R,A,K}(t,t^{\prime})\sin\left[\frac{\varphi_{+}(t)+\varphi_{+}(t^{\prime})}{2}\right]&g^{R,A,K}(t,t^{\prime})\sin\left[\frac{\varphi_{+}(t)-\varphi_{+}(t^{\prime})}{2}\right]\end{array}\right). \tag{18}\]
Neglecting electron-electron interactions we can define the current across our weak link by means of the formula
\[I(t)=-e\frac{\delta S_{t}}{\delta\varphi_{-}(t)}|_{\varphi_{-}(t)=0}. \tag{19}\]
Combining this formula with the above expressions for the action \(S_{t}\), in the first order in \(\mathcal{R}_{k}\) we get
\[I(t)=\frac{ie}{4}\sum_{k}\mathrm{Tr}\left[\hat{B}\tilde{Q}_{0}^ {-1}\left(1-\mathcal{R}_{k}\left(\hat{I}\tilde{Q}_{0}^{-1}\right)^{2}\right)+\right.\] \[\left(1-\mathcal{R}_{k}\left(\tilde{Q}_{0}^{-1}\hat{I}\right)^{ 2}\right)\tilde{Q}_{0}^{-1}\hat{B}^{\prime}\right](t,t), \tag{20}\]
where we also defined
\[\tilde{B}^{\prime}(t,t^{\prime})=\left(\begin{array}{cc}0&\hat{\tau}_{3}\\ \hat{\tau}_{3}&0\end{array}\right)\tilde{B}(t,t^{\prime})\left(\begin{array}[] {cc}0&\hat{\tau}_{3}\\ \hat{\tau}_{3}&0\end{array}\right). \tag{21}\]
## III Constant Voltage Limit
While the above perturbative expression for the current (20) generally holds for an arbitrary dependence of the applied voltage \(V(t)\) on time, below we will specifically restrict our analysis to the time-independent voltage bias limit \(V(t)\equiv V\). In this special case the solution for the inverse matrix \(\tilde{Q}_{0}^{-1}\) has already been constructed earlier [12]. It is convenient to write the components of this matrix using the following expansion in terms of the voltage harmonics
\[x(t,t^{\prime})=\sum_{n=-\infty}^{\infty}\int\frac{d\epsilon}{2\pi}x( \epsilon,n)e^{-i\epsilon(t-t^{\prime})}e^{-incV(t+t^{\prime})/2}. \tag{22}\]
The retarded component of the inverse matrix \(\tilde{Q}_{0}^{-1}\) has been demonstrated to acquire the structure [12]
\[\left(\begin{array}{cc}\zeta^{R}(\epsilon,2n)&\zeta^{R}(\epsilon,2n+1)\\ \zeta^{R}(\epsilon,2n+1)&\zeta^{R}(\epsilon,2n)\end{array}\right), \tag{23}\]
i.e. the components with even index are diagonal and those with odd index are off-diagonal. We also introduce components with shifted energy arguments. Here and below for brevity we denote these components by tilde, i.e. \(\tilde{\zeta}^{R}(\epsilon,n)=\zeta^{R}\left(\epsilon+(neV/2),n\right)\). We have
\[\tilde{\zeta}^{R}\left(\epsilon+\frac{eV}{2},l\right)=\left\{\begin{array}{ l}(-1)^{l}\prod_{1\leq k\leq l}a^{R}(\epsilon+eVk),\;\mbox{if}\;l>0,\\ 1,\quad\mbox{if}\;l=0,\\ (-1)^{l}\prod_{l+1\leq k\leq 0}a^{R}(\epsilon+eVk),\;l<0,\end{array}\right. \tag{24}\]
where
\[a^{R}(\epsilon)=\frac{f^{R}(\epsilon)}{1+g^{R}(\epsilon)} \tag{25}\]
defines the Andreev reflection amplitude. This combination is also involved in the standard Riccati parametrization for the Green functions [15]
\[f^{R}=\frac{2a^{R}}{1-(a^{R})^{2}},\quad g^{R}=\frac{1+(a^{R})^{2}}{1-(a^{R})^ {2}}. \tag{26}\]
Having in mind that
\[g^{R,A}(\epsilon)=\frac{\epsilon\pm i\theta}{\xi^{R,A}(\epsilon)},\quad f^{R, A}(\epsilon)=\frac{\Delta}{\xi^{R,A}(\epsilon)}, \tag{27}\]
where \(\xi^{R,A}(\epsilon)=\pm\sqrt{(\epsilon\pm i\theta)^{2}-\Delta^{2}}\) and \(\theta\) phenomenologically controls the strength of inelastic relaxation, from Eq. (25) in the limit \(\theta\to 0\) we obtain
\[a^{R}(\epsilon)=\frac{\epsilon}{\Delta}-\frac{i\sqrt{\Delta^{2}-\epsilon^{2}} }{\Delta}=\exp\left(-i\arccos\frac{\epsilon}{\Delta}\right) \tag{28}\]
for \(|\epsilon|<\Delta\) and
\[a^{R}(\epsilon)=\frac{\mathrm{sgn}\,\epsilon}{\Delta}\left(|\epsilon|-\sqrt{ \epsilon^{2}-\Delta^{2}}\right) \tag{29}\]
for \(|\epsilon|>\Delta\).
Note that the multiplicative structure (24) is associated with the process of MAR. Formally it results from the multiplicative structure of the inverse symmetric tridiagonal matrix, as it is discussed in Ref. [16]. The advanced and Keldysh components of the inverse matrix have also been established and analyzed in Ref. [12]. Here, however, it is sufficient for our purposes to restrict our attention to the retarded matrix component.
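For illustration, a minimal numerical sketch (ours) of the Andreev amplitude \(a^{R}(\epsilon)\) of Eqs. (28)-(29) in the limit \(\theta\to 0\) and of the multiplicative MAR structure (24) could look as follows; the function names are ours.

```python
import numpy as np

Delta = 1.0

def a_R(eps):
    """Andreev reflection amplitude in the limit theta -> 0, Eqs. (28)-(29)."""
    if abs(eps) < Delta:
        return np.exp(-1j * np.arccos(eps / Delta))                              # Eq. (28)
    return np.sign(eps) * (abs(eps) - np.sqrt(eps**2 - Delta**2)) / Delta        # Eq. (29)

def zeta_tilde(eps, l, eV):
    """The MAR product of Eq. (24), i.e. tilde-zeta^R(eps + eV/2, l)."""
    if l == 0:
        return 1.0
    ks = range(1, l + 1) if l > 0 else range(l + 1, 1)
    prod = 1.0
    for k in ks:
        prod *= a_R(eps + eV * k)
    return (-1) ** l * prod

# A subgap quasiparticle is multiplied up the MAR ladder: within the gap the
# factors are pure phases, while once the ladder climbs above the gap |a_R| < 1
# and the magnitude of the product starts to decay.
print(abs(zeta_tilde(-0.9 * Delta, 3, eV=0.3 * Delta)))
print(abs(zeta_tilde(-0.9 * Delta, 8, eV=0.3 * Delta)))
```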
Making use of Eq.(20), we obtain
\[\delta I(t)=\frac{ie}{2}\mathcal{R}\left(b_{g}^{R}Y_{g}^{K}+b_{f}^{ R}Y_{f}^{K}+b_{g}^{K}Y_{g}^{A}+b_{f}^{K}Y_{f}^{A}\right.\] \[\left.+Y_{g}^{R}b_{g}^{K}-Y_{f}^{R}b_{f}^{K}-Y_{g}^{K}b_{g}^{A}+Y_ {f}^{K}b_{f}^{A}\right)_{t,t}. \tag{30}\]
Note that from here on for simplicity we only consider the case of a weak link with a single conducting channel characterized by the reflection coefficient \(\mathcal{R}\ll 1\). Generalization to an arbitrary number of channels simply amounts to substituting \(\mathcal{R}\rightarrow\sum_{k}\mathcal{R}_{k}\) in any of our final results.
The symmetric \(2\times 2\) matrices in Eq. (18) can be expressed in the form
\[\hat{b}^{R}=b_{g}^{R}\hat{1}+b_{f}^{R}\hat{\tau}_{2}, \tag{31}\]
with
\[b_{g}^{R}(t,t^{\prime})=g^{R}(t,t^{\prime})\sin\left[\frac{ \varphi_{+}(t)-\varphi_{+}(t^{\prime})}{2}\right], \tag{32}\] \[b_{f}^{R}=f^{R}(t,t^{\prime})\sin\left[\frac{\varphi_{+}(t)+ \varphi_{+}(t^{\prime})}{2}\right],\]
while the matrix \(\check{Y}\) involved in Eq. (30) is defined as
\[\check{Y}=\check{Q}_{0}^{-1}\check{I}\check{Q}_{0}^{-1}\check{I}\check{Q}_{0}^{-1}. \tag{33}\]
Let us introduce the notations
\[\check{Q}_{0}^{-1}=\left(\begin{array}{cc}\hat{X}^{R}&\hat{X}^{K}\\ 0&\hat{X}^{A}\end{array}\right),\;\hat{X}^{R}=\left(\hat{a}^{R}\right)^{-1}, \tag{34}\]
\[\hat{X}^{A}=-\left(\hat{a}^{A}\right)^{-1},\;\hat{X}^{K}=-\hat{X}^{R}\circ\hat {a}^{K}\circ\hat{X}^{A}.\]
The components of the matrix \(X^{R}\) are already specified in Eqs. (23), (24), whereas for the components of the matrix \(\check{Y}\) we have
\[Y_{g}^{R}=X_{g}^{R}X_{g}^{R}X_{g}^{R}+X_{f}^{R}X_{g}^{R}X_{f}^{R} -X_{f}^{R}X_{f}^{R}X_{g}^{R}\] \[-X_{g}^{R}X_{f}^{R}X_{f}^{R}, \tag{35}\] \[Y_{f}^{R}=X_{g}^{R}X_{g}^{R}X_{f}^{R}+X_{f}^{R}X_{g}^{R}X_{g}^{R }-X_{f}^{R}X_{f}^{R}X_{f}^{R}\] (36) \[-X_{g}^{R}X_{f}^{R}X_{g}^{R},\] \[Y_{g}^{A}=X_{g}^{A}X_{g}^{A}X_{g}^{A}+X_{f}^{A}X_{g}^{A}X_{f}^{A} -X_{f}^{A}X_{f}^{A}X_{g}^{A}\] \[-X_{g}^{A}X_{f}^{A}X_{f}^{A},\] (37) \[Y_{f}^{A}=X_{g}^{A}X_{g}^{A}X_{f}^{A}+X_{f}^{A}X_{g}^{A}X_{g}^{A }-X_{f}^{A}X_{f}^{A}X_{f}^{A}\] \[-X_{g}^{A}X_{f}^{A}X_{g}^{A}. \tag{38}\]
The expressions for \(Y^{K}\)-components turn out to be somewhat lengthier. They are
\[Y_{g}^{K}=X_{g}^{R}X_{g}^{R}X_{g}^{K}+X_{f}^{R}X_{g}^{R}X_{f}^{K} -X_{f}^{R}X_{f}^{R}X_{g}^{K}\] \[-X_{g}^{R}X_{f}^{R}X_{f}^{K}+X_{g}^{K}X_{g}^{A}X_{g}^{A}+X_{f}^{K} X_{g}^{A}X_{f}^{A}\] \[-X_{f}^{K}X_{f}^{A}X_{g}^{A}-X_{g}^{K}X_{f}^{A}X_{f}^{A}+X_{g}^{R }X_{f}^{K}X_{f}^{A}\] \[+X_{f}^{R}X_{f}^{K}X_{g}^{A}-X_{g}^{R}X_{g}^{K}X_{g}^{A}-X_{f}^{R }X_{g}^{K}X_{f}^{A} \tag{39}\]
and
\[Y_{f}^{K}=X_{g}^{R}X_{g}^{R}X_{f}^{K}+X_{f}^{R}X_{g}^{R}X_{g}^{K}-X_{f}^{R}X_{f}^{R}X_{f}^{K}\] \[-X_{g}^{R}X_{f}^{R}X_{g}^{K}+X_{g}^{K}X_{g}^{A}X_{f}^{A}+X_{f}^{K}X_{g}^{A}X_{g}^{A}\] \[-X_{f}^{K}X_{f}^{A}X_{f}^{A}-X_{g}^{K}X_{f}^{A}X_{g}^{A}+X_{g}^{R}X_{f}^{K}X_{g}^{A}\] \[+X_{f}^{R}X_{f}^{K}X_{f}^{A}-X_{g}^{R}X_{g}^{K}X_{f}^{A}-X_{f}^{R}X_{g}^{K}X_{g}^{A}. \tag{40}\]
Consider the energies \(-\Delta-eV/2<\epsilon<-\Delta+eV/2\), which provide the relevant contribution to the current harmonics. One can use the following property of the convolution \(z=fg\) with energy-shifted components,
\[\tilde{z}(\epsilon,n)=\sum_{k+l=n}\tilde{f}(\epsilon+leV,k)\tilde{g}(\epsilon, l). \tag{41}\]
Introducing the combinations \(T_{1}^{R}=X_{g}^{R}X_{g}^{R}-X_{f}^{R}X_{f}^{R}\) and \(T_{2}^{R}=X_{g}^{R}X_{f}^{R}-X_{f}^{R}X_{g}^{R}\) and limiting the summation in Eq.(41) to \(k\geq 0,l\geq 0\) one can demonstrate that the following identities hold for \(n>0\)
\[\tilde{T}_{1}(\epsilon,n)=\tilde{X}_{g}^{R}(\epsilon,n),\quad\tilde{T}_{2}( \epsilon,n)=0. \tag{42}\]
While deriving Eq. (42) we chose to neglect Andreev reflection at overgap energies, which effectively amounts to setting \(a^{R}(\epsilon)=0\) at \(|\epsilon|>\Delta\). This approximation was used, e.g., in Refs. [1] and [2] based on the fact that, deep in the subgap voltage regime of interest \(eV\ll\Delta\), higher order MAR processes (and, hence, higher powers of \(a^{R}(\epsilon)\)) determine the current across the contact. This approximation merely simplifies the calculation and - as will be demonstrated below - is by no means crucial for our results and conclusions.
Employing the relations (42), we obtain
\[\tilde{Y}_{g}^{R}(\epsilon,2n)=(1+|n|)\tilde{\zeta}^{R}(\epsilon,2n). \tag{43}\]
and
\[\tilde{Y}_{f}^{R}(\epsilon,2n+1)=(1+n)\tilde{\zeta}^{R}(\epsilon,2n+1),\] \[\tilde{Y}_{f}^{R}(\epsilon,-2n-1)=(1+n)\tilde{\zeta}^{R}( \epsilon,-2n-1) \tag{44}\]
for \(n\geq 0\). Let us emphasize the presence of an additional factor \(n\) in Eqs. (43), (44), which becomes particularly important in the limit of small voltages \(eV\ll\Delta\).
More generally, at sufficiently large \(n\) one may write \(\tilde{Y}_{g,f}^{R}(\epsilon,n)\approx\alpha(|n|/2)\tilde{\zeta}^{R}(\epsilon,n)\), where the dimensionless prefactor \(\alpha\) can be determined explicitly only if one abandons the approximation \(a^{R}(\epsilon)=0\) at \(|\epsilon|>\Delta\) employed above. We will not proceed with this (much more complicated) analysis here and simply determine the prefactor \(\alpha\) numerically (see below).
The Keldysh component of the matrix \(\check{Y}\) can be written as
\[\hat{Y}^{K}=-\hat{X}^{R}\hat{\tau}_{3}\hat{X}^{R}\hat{\tau}_{3} \hat{X}^{R}\hat{a}^{K}\hat{X}^{A} \tag{45}\] \[-\hat{X}^{R}\hat{a}^{K}\hat{X}^{A}\hat{\tau}_{3}\hat{X}^{A}\hat{ \tau}_{3}\hat{X}^{A}+\hat{X}^{R}\hat{\tau}_{3}\hat{X}^{R}\hat{a}^{K}\hat{X}^{A} \hat{\tau}_{3}\hat{X}^{A}.\]
The first two summands can be expressed in terms of \(\hat{Y}^{R}\) and \(\hat{Y}^{A}\), while the third summand can be neglected in the low voltage limit as it does not contain an additional \(n\)-factor corresponding to higher order MAR processes.
Combining all the above expressions, after some algebra we obtain the linear in \(\mathcal{R}\) correction to the current in the form (7), where
\[\overline{\delta I}=\frac{2\alpha\mathcal{R}\Delta^{2}}{\pi V}\tanh\left(\frac{ \Delta}{2T}\right), \tag{46}\]
and
\[F_{l}=\frac{1}{2\pi}\int\limits_{-1}^{1}dx(1-x)\exp(2il\arccos x). \tag{47}\]
Equations (46) and (47) combined with Eq. (7) represent the central result of our present work. They determine the correction to the current \(I(t)\) across weakly reflecting superconducting point contacts in the voltage range of interest, \(\mathcal{R}\Delta\ll eV\ll 2\Delta\).
Evaluating the integral in Eq. (47), we obtain
\[\operatorname{Re}F_{l}=\frac{1}{\pi(1-4l^{2})},\quad\operatorname{Im}F_{l}= \left\{\begin{array}{cc}\mp\frac{1}{8},&\text{if }l=\pm 1\\ 0,&\text{if }l\neq\pm 1\end{array}\right. \tag{48}\]
and recover the \(\pi\)-periodic function \(F(eVt)\) in the form
\[F(x)=\frac{1}{2}\left|\sin x\right|-\frac{1}{4}\sin(2x),\quad 0\leq x\leq\pi. \tag{49}\]
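The integral (47) and the resulting Fourier series are easy to check numerically; the following sketch (ours) reproduces the coefficients of Eq. (48) by direct quadrature and resums the series of Eq. (7) into the profile of Eq. (49).

```python
import numpy as np

def F_coeff(l, num=200001):
    """Fourier coefficient F_l of Eq. (47), evaluated by direct quadrature."""
    x = np.linspace(-1.0, 1.0, num)
    y = (1.0 - x) * np.exp(2j * l * np.arccos(x))
    return np.trapz(y, x) / (2.0 * np.pi)

for l in range(0, 4):
    print(l, F_coeff(l), 1.0 / (np.pi * (1 - 4 * l * l)))   # real part, Eq. (48)

# Resumming the series reproduces the pi-periodic profile of Eq. (49):
x = 0.7                                  # any point in [0, pi]
series = sum((F_coeff(l) * np.exp(-2j * l * x)).real for l in range(-40, 41))
print(series, 0.5 * abs(np.sin(x)) - 0.25 * np.sin(2.0 * x))
```

The truncated series converges to Eq. (49) as \(1/l^{2}\), so a few dozen harmonics already reproduce the analytic curve to good accuracy.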
In addition to the above analysis we also performed a numerical calculation of \(\delta I(t)\) employing the algorithm described in Appendix A. Our numerics clearly supports the above perturbative results and, furthermore, allows us to determine the value of the prefactor \(\alpha\) in Eq. (46) with sufficient accuracy. We obtain \(\alpha\simeq 3\), which is numerically close to \(\alpha=\pi\). With this in mind, we may conclude that our result for \(\overline{\delta I}\) is consistent with Eq. (5) that follows from the Landau-Zener-tunneling type of analysis [2].
Our numerical results for \(\overline{\delta I}\) as a function of \(V\) are displayed in Fig. 3 for several different values of \(\mathcal{R}\) and \(eV<2\Delta\). We observe that in accordance with Eq. (46) the current correction decays as \(\overline{\delta I}\propto 1/V\) roughly between \(eV\approx 0.2\Delta\) and \(eV\sim\Delta\). Also the subharmonic gap structure at \(eV=2\Delta/k\) with \(k=2,3,...\) is clearly observed, in particular for curves corresponding to lower transmissions \(\mathcal{T}\). At larger voltages \(eV\gtrsim 2\Delta\) (not shown in Fig. 3) the current correction \(\overline{\delta I}\) starts to grow with \(V\) and demonstrates the expected behavior \(\overline{\delta I}\simeq e^{2}\mathcal{R}V/\pi\) at sufficiently large \(V\).
The result of our numerical calculation for the function \(F(x)\) is presented in Fig. 4 along with the analytic formula (49) and the function \(F_{LZ}(x)\) (6). For \(\mathcal{R}\ll 1\) the numerical curve \(F(x)\) does not depend on \(\mathcal{R}\) and essentially follows the dependence (49). In fact, the blue curve is also described by a function of the form (49) provided one multiplies the last term in Eq. (49) by an extra numerical factor \(\simeq 1.19\). Such a minor difference between the two curves is, of course, of no significance and could be attributed, e.g., to neglecting Andreev reflection at overgap energies while deriving Eq. (49). On the other hand, both these curves differ substantially from the result of the Landau-Zener-tunneling type of analysis (6).
## IV Discussion
Combining our perturbative results with those of our numerical calculation we may conclude that the perturbation theory in \(\mathcal{R}\) developed here should work sufficiently well at least up to values \(\mathcal{R}\lesssim 0.1\). This perturbation theory allows us to derive microscopically the leading first order in \(\mathcal{R}\) correction to the current. Including this correction, for \(\mathcal{R}\Delta\ll e|V|\ll 2\Delta\) we have
\[I(t)\simeq\tilde{I}_{c}\left|\sin eVt\right|\,\mathrm{sgn}\,V+\frac{\alpha \mathcal{R}\Delta I_{c}}{2e|V|}\sin 2eVt, \tag{50}\]
where
\[\tilde{I}_{c}=I_{c}(1-\alpha\mathcal{R}\Delta/e|V|). \tag{51}\]
We observe two effects. Firstly, the current \(\tilde{I}_{c}\) (51) becomes smaller than \(I_{c}\) since some subgap quasiparticles are eliminated from the full MAR cycle due to the presence of weak normal reflection. Secondly, these quasiparticles produce an extra Josephson-like contribution to \(I(t)\) defined by the last term in Eq. (50). Decreasing the bias voltage down to \(e|V|\sim\mathcal{R}\Delta\) one reaches the limit beyond which normal reflection - no matter how weak it is - prevents the vast majority of subgap quasiparticles from completing the MAR cycle and the perturbation theory in \(\mathcal{R}\) ceases to be valid.
Interestingly, Eq. (50) turns out to be useful also in the latter limit of smaller \(V\). Indeed, for any bias voltage \(V\) the probability for quasiparticles to follow the trajectories (a) in Fig. 2 and complete the full MAR cycle is \(\mathcal{T}^{n}\) with \(n\sim\alpha\Delta/e|V|\). Accordingly, the contribution of such quasiparticles to \(I(t)\) can be described by the first term in the right-hand side of Eq. (50) also at \(e|V|<\mathcal{R}\Delta\) provided one replaces Eq. (51) by
\[\tilde{I}_{c}=I_{c}(1-\mathcal{R})^{\frac{\alpha\Delta}{e|V|}}. \tag{52}\]
Clearly, this contribution to \(I(t)\) will die out in the limit \(V\to 0\) for any \(\mathcal{R}>0\).
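As an illustration of Eqs. (50)-(52), the sketch below (ours) evaluates \(I(t)\) over one Josephson period in the regime \(\mathcal{R}\Delta\ll e|V|\ll 2\Delta\); we take \(\alpha=\pi\), as suggested by the numerics, and all parameter values are purely illustrative.

```python
import numpy as np

Delta, T, e = 1.0, 0.2, 1.0
alpha = np.pi                        # numerically alpha ~ 3, close to pi
R, V = 0.02, 0.3 * Delta / e         # R*Delta << e*V << 2*Delta

I_c = e * Delta * np.tanh(Delta / (2.0 * T))
I_c_tilde = I_c * (1.0 - alpha * R * Delta / (e * abs(V)))          # Eq. (51)

t = np.linspace(0.0, np.pi / (e * V), 2001)                          # one period
I = (I_c_tilde * np.abs(np.sin(e * V * t)) * np.sign(V)
     + alpha * R * Delta * I_c / (2.0 * e * abs(V)) * np.sin(2.0 * e * V * t))  # Eq. (50)

# At lower voltages, e|V| < R*Delta, Eq. (52) replaces Eq. (51):
I_c_low = I_c * (1.0 - R) ** (alpha * Delta / (e * abs(V)))
```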
Figure 4: The function \(F(x)\) evaluated numerically for \(\mathcal{R}\ll 1\) and \(\theta=0.01\Delta\) (blue curve) and analytically (Eq. (49), orange curve). For comparison we also present the function \(F_{LZ}(x)\) (6) (green curve).
Quasiparticles following the trajectories (b) may contribute to the current only provided the condition \({\cal T}^{m}\sim 1\) remains fulfilled, where \(m\) is the total number of traverses of these quasiparticles across the weak link. It follows immediately that for \(e|V|<{\cal R}\Delta\) the maximum number of such traverses is \(m\sim 1/{\cal R}\). Hence, in order to estimate the contribution of these quasiparticles to \(I(t)\) it suffices to replace the number \(n\) by \(m\) (i.e. \(\alpha\Delta/e|V|\to 1/{\cal R}\)) in the last term of Eq. (50). This simple estimate already yields the right order of magnitude for the current \(\sim I_{c}\) in the limit \(e|V|\ll{\cal R}\Delta\). The correct current-phase dependence [3] in this regime should be recovered by additionally taking into account all higher order in \({\cal R}\) processes disregarded here.
Although the analysis [2] yields the average value of the perturbative in \({\cal R}\) correction \(\overline{\delta I}\) (5) which essentially matches our Eq. (46), some other features differ substantially from those found here. Within the approach [2] the correction to the current \(\delta I(t)\) emerges due to incomplete Landau-Zener tunneling of the system to the higher Andreev level rather than due to breaking of the MAR cycle by normal reflection. As a result, the correction \(\delta I(t)\) derived from this physical picture turns out to be discontinuous (cf. Eq. (6)) and differs from zero only within half of the Josephson period.
Let us evaluate the ratio between the current correction harmonics \(\overline{\delta I}F_{l}\) and the current harmonics \(I_{l}\) corresponding to fully open quantum point contact, Eq. (1). With the aid of Eq. (47) we get
\[\frac{\overline{\delta I}F_{l}}{I_{l}}\sim\frac{{\cal R}\Delta}{e|V|}\left(1+ \frac{3\pi}{8}i\delta_{l,1}-\frac{3\pi}{8}i\delta_{l,-1}\right). \tag{53}\]
The same calculation with the function \(F_{LZ}(eVt)\) (6) yields
\[\frac{\overline{\delta I}(F_{LZ})_{l}}{I_{l}}\sim\frac{{\cal R}\Delta}{e|V|} \left(1-2il(-1)^{l}\right). \tag{54}\]
Some quantitative difference between the two above expressions is observed already for the first current harmonics. For all higher harmonics with \(|l|>1\) the difference between Eqs. (53) and (54) becomes even more pronounced, since the imaginary contribution vanishes for such \(l\) in the first of these equations, while it remains non-zero and grows with \(l\) in the second one. Thus, Eq. (53) assures that for \({\cal R}\ll 1\) and \(e|V|\gg{\cal R}\Delta\) the perturbative correction remains small for all current harmonics. In contrast, under the same conditions the perturbation theory is obviously violated for harmonics with sufficiently large \(l\) in Eq. (54).
Finally, we would also like to point out that in many cases even equilibrium charge transport in superconducting weak links cannot be described solely in terms of Andreev bound states and their filling factors. This is the case, for instance, in ballistic SNS junctions with thicknesses of the N-layer comparable to (or larger than) the superconducting coherence length or in junctions formed by two different superconductors with \(\Delta_{1}\gg\Delta_{2}\) where there exist no Andreev bound states in the range \(-\pi/2<\chi<\pi/2\) at all. Accordingly, the current cannot be derived by simply taking a derivative of the Andreev state energies with respect to \(\chi\). At the same time, a description of Josephson dynamics in terms of MAR is, of course, well possible also in those cases.
To conclude, it appears that one should be cautious when describing Josephson dynamics of superconducting weak links at high transmissions by operating only with a pair of subgap Andreev levels and including the process of Landau-Zener tunneling between them. While this reduced physical picture is intuitively appealing and might capture some of the features, it turns out to be insufficient for others, as demonstrated by our analysis.
## Appendix A Numerical procedure
It is convenient to rewrite a single channel version of the effective action in Eq. (8) in the form (cf. also [17])
\[iS_{t}=\frac{1}{2}{\rm Tr}\left[\ln\left(1+\frac{\sqrt{\cal T}}{2}\left( \check{Q}_{L}-\check{Q}_{R}\right)\right)+\ln\left(1-\frac{\sqrt{\cal T}}{2} \left(\check{Q}_{L}-\check{Q}_{R}\right)\right)\right]. \tag{55}\]
Evaluation of the current requires inverting the matrices in Eq. (55). This matrix inversion procedure follows closely that of Ref. [12], and we shall just write out the resulting expressions. The current is defined as
\[I(t)=\sum_{l=-\infty}^{\infty}I_{l}e^{-2ileVt},\quad I_{l}=\frac{e^{2}}{\pi}{ \cal T}V\delta_{l,0}+\int\limits_{-\infty}^{\infty}\frac{d\epsilon}{2\pi}\left[ {\cal I}(\epsilon,l)+{\cal I}^{*}(\epsilon,-l)\right], \tag{56}\]
where
\[\mathcal{I}(\epsilon,l)=\frac{e\sqrt{\mathcal{T}}}{4}\left\{\left( \tilde{\zeta}^{R}\left(\epsilon-\frac{eV}{2},2l\right)+\tilde{\zeta}^{R}\left( \epsilon+\frac{eV}{2},2l\right)\right)g^{K}(\epsilon)\right. \tag{10}\] \[\left.-\left(\tilde{\zeta}^{R}\left(\epsilon-\frac{eV}{2},2l+1 \right)+\tilde{\zeta}^{R}\left(\epsilon+\frac{eV}{2},2l-1\right)\right)f^{K}(\epsilon)\right.\] \[\left.+g^{K}(\epsilon)\sum_{m+n=2l}\left[Y\left(\epsilon-\frac{ eV}{2},n\right)\tilde{\zeta}^{R}\left(\epsilon-\frac{eV}{2},n\right)\tilde{ \zeta}^{R}\left(-\epsilon+\frac{eV}{2},m\right)\right.\right.\] \[\left.\left.-Y\left(\epsilon+\frac{eV}{2},n\right)\tilde{\zeta}^ {R}\left(\epsilon+\frac{eV}{2},n\right)\tilde{\zeta}^{R}\left(-\epsilon-\frac {eV}{2},m\right)\right]\right.\] \[\left.+f^{K}(\epsilon)\sum_{m+n-1=2l}Y\left(\epsilon-\frac{eV}{2},n\right)\tilde{\zeta}^{R}\left(\epsilon-\frac{eV}{2},n\right)\tilde{\zeta}^ {R}\left(-\epsilon-\frac{eV}{2},m\right)\right.\] \[\left.-f^{K}(\epsilon)\sum_{m+n+1=2l}Y\left(\epsilon+\frac{eV}{2},n\right)\tilde{\zeta}^{R}\left(\epsilon+\frac{eV}{2},n\right)\tilde{\zeta}^ {R}\left(-\epsilon+\frac{eV}{2},m\right)\right\}.\]
Both normal and anomalous Keldysh functions in Eq. (10) are defined by the standard relations
\[g^{K}(\epsilon)=2\operatorname{Re}\left[g^{R}(\epsilon)\right]\tanh\frac{ \epsilon}{2T},\quad f^{K}(\epsilon)=2\operatorname{Re}\left[f^{R}(\epsilon) \right]\tanh\frac{\epsilon}{2T} \tag{11}\]
and the function \(\tilde{\zeta}^{R}(\epsilon,n)\) takes the form
\[\tilde{\zeta}^{R}(\epsilon,n)=\left(-\frac{\sqrt{\mathcal{T}}}{2}\right)^{-n} \zeta_{0}^{R}(\epsilon,eV)\prod_{k=n}^{-1}\frac{f^{R}\left(\epsilon+\left(k+ \frac{1}{2}\right)eV\right)}{\delta_{k}(\epsilon,eV)},\quad n<0, \tag{12}\]
\[\tilde{\zeta}^{R}(\epsilon,n)=\left(\frac{\sqrt{\mathcal{T}}}{2}\right)^{n} \zeta_{0}^{R}(\epsilon,eV)\prod_{k=1}^{n}\frac{f^{R}\left(\epsilon+\left(k- \frac{1}{2}\right)eV\right)}{d_{k}(\epsilon,eV)},\quad n>0 \tag{13}\]
and
\[\tilde{\zeta}^{R}(\epsilon,n)=\zeta_{0}^{R}(\epsilon,eV),\quad n=0. \tag{14}\]
The functions \(d_{k}\), \(\delta_{k}\) (cf. Ref. [16]) are defined by the following recurrence relations
\[\delta_{-N}(\epsilon,eV)=1+\frac{\sqrt{\mathcal{T}}}{2}\left(g^{R }\left(\epsilon+\left(-N+\frac{1}{2}\right)eV\right)-1\right), \tag{15}\] \[\delta_{n}(\epsilon,eV)=1+\frac{\sqrt{\mathcal{T}}}{2}\left(g^{R }\left(\epsilon+\left(n+\frac{1}{2}\right)eV\right)-g^{R}\left(\epsilon+\left( n-\frac{1}{2}\right)eV\right)\right)+\] \[+\frac{\mathcal{T}}{4}\frac{\left[f^{R}\left(\epsilon+\left(n- \frac{1}{2}\right)eV\right)\right]^{2}}{\delta_{n-1}(\epsilon,eV)},\] \[\delta_{N}(\epsilon,eV)=1+\frac{\sqrt{\mathcal{T}}}{2}\left(1-g^ {R}\left(\epsilon+\left(N-\frac{1}{2}\right)eV\right)\right)+\frac{\mathcal{T }}{4}\frac{\left[f^{R}\left(\epsilon+\left(N-\frac{1}{2}\right)eV\right) \right]^{2}}{\delta_{N-1}(\epsilon,eV)}.\]
\[d_{N}(\epsilon,eV)=1+\frac{\sqrt{\mathcal{T}}}{2}\left(1-g^{R} \left(\epsilon+\left(N-\frac{1}{2}\right)eV\right)\right), \tag{16}\] \[d_{n}(\epsilon,eV)=1+\frac{\sqrt{\mathcal{T}}}{2}\left(g^{R} \left(\epsilon+\left(n+\frac{1}{2}\right)eV\right)-g^{R}\left(\epsilon+\left( n-\frac{1}{2}\right)eV\right)\right)+\] \[+\frac{\mathcal{T}}{4}\frac{\left[f^{R}\left(\epsilon+\left(n+ \frac{1}{2}\right)eV\right)\right]^{2}}{d_{n+1}(\epsilon,eV)},\] \[d_{-N}(\epsilon,eV)=1+\frac{\sqrt{\mathcal{T}}}{2}\left(g^{R} \left(\epsilon+\left(-N+\frac{1}{2}\right)eV\right)-1\right)+\frac{\mathcal{T} }{4}\frac{\left[f^{R}\left(\epsilon+\left(-N+\frac{1}{2}\right)eV\right) \right]^{2}}{d_{-N+1}(\epsilon,eV)}.\]
We also have
\[\zeta_{0}^{R}(\epsilon,eV)=\frac{\prod_{k=1}^{N}d_{k}(\epsilon,eV)}{\prod_{k=0}^{N }\delta_{k}(\epsilon,eV)}, \tag{30}\]
which can be rewritten as \(\zeta_{0}^{R}(\epsilon,eV)=1/X\), where
\[X(\epsilon,eV)=d_{0}(\epsilon,eV)+\delta_{0}(\epsilon,eV)-1-\frac{\sqrt{ \mathcal{T}}}{2}\left[g^{R}\left(\epsilon+\frac{eV}{2}\right)-g^{R}\left( \epsilon-\frac{eV}{2}\right)\right]. \tag{31}\]
Finally, the function \(Y(\epsilon,n)\) in Eq. (30) is defined as
\[Y(\epsilon,n)=\left\{\begin{array}{l}2d_{n}(\epsilon)-1+\sqrt{\mathcal{T}}g ^{R}\left(\epsilon+\left(n-\frac{1}{2}\right)eV\right),\;\text{if}\;n\geq 1\\ d_{0}(\epsilon)-\delta_{0}(\epsilon)+\frac{\sqrt{\mathcal{T}}}{2}\left[g^{R} \left(\epsilon+\frac{eV}{2}\right)+g^{R}\left(\epsilon-\frac{eV}{2}\right) \right],\;\text{if}\;n=0\\ -2\delta_{n}(\epsilon)+1+\sqrt{\mathcal{T}}g^{R}\left(\epsilon+\left(n+\frac{ 1}{2}\right)eV\right),\;\text{if}\;n\leq-1\end{array}\right. \tag{32}\]
The functions \(d_{n}(\epsilon)\) and \(\delta_{n}(\epsilon)\) are also used in the definition of \(\tilde{\zeta}^{R}(\epsilon,n)\) and are given by the recurrence relations above.
Our numerical procedure amounts to first choosing a sufficiently large number of Andreev reflection cycles \(N\) relevant in the limit of small bias voltages \(eV\ll 2\Delta\). This information is included in the boundary conditions given by the first lines of the recurrence relations for \(\delta_{k}\) and \(d_{k}\). As a next step, it is necessary to solve these recurrence relations and to construct the functions \(\tilde{\zeta}^{R}(\epsilon,n)\) and \(Y(\epsilon,n)\). Then, performing the energy integration in the expression for the current harmonics, one recovers all \(I_{l}\). It is also worth pointing out that in the course of our calculation we essentially employed the standard relations between the retarded and advanced Green functions \(g^{A}(\epsilon)=-\left(g^{R}(\epsilon)\right)^{*},\;f^{A}(\epsilon)=-\left(f^{ R}(\epsilon)\right)^{*}\) as well as the conditions \(g^{R}(-\epsilon)=\left(g^{R}(\epsilon)\right)^{*}\) and \(f^{R}(-\epsilon)=-(f^{R}(\epsilon))^{*}\).
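A minimal Python sketch (ours) of the backbone of this procedure (the retarded Green functions, the recurrences for \(d_{k}\) and \(\delta_{k}\), and \(\zeta_{0}^{R}=1/X\)) is given below; the variable names are ours, the branch of \(\xi^{R}\) is chosen so as to reproduce Eqs. (27)-(29) in the limit \(\theta\to 0\), and the remaining steps (assembling \(\tilde{\zeta}^{R}\), \(Y\) and the energy integration) follow the formulas above.

```python
import numpy as np

Delta, theta = 1.0, 0.01                 # gap and inelastic smearing parameter

def xi_R(eps):
    """Retarded branch of xi^R(eps); reduces to Eqs. (27)-(29) for theta -> 0."""
    return 1j * np.sqrt(Delta**2 - (eps + 1j * theta)**2)

def g_R(eps):
    return (eps + 1j * theta) / xi_R(eps)

def f_R(eps):
    return Delta / xi_R(eps)

def recurrences(eps, eV, Tr, N):
    """Solve the recurrence relations above: delta_n upward from n = -N,
    d_n downward from n = N. Tr denotes the transmission."""
    s = np.sqrt(Tr)
    delta = {-N: 1 + 0.5 * s * (g_R(eps + (-N + 0.5) * eV) - 1)}
    for n in range(-N + 1, N):
        delta[n] = (1 + 0.5 * s * (g_R(eps + (n + 0.5) * eV) - g_R(eps + (n - 0.5) * eV))
                    + 0.25 * Tr * f_R(eps + (n - 0.5) * eV) ** 2 / delta[n - 1])
    delta[N] = (1 + 0.5 * s * (1 - g_R(eps + (N - 0.5) * eV))
                + 0.25 * Tr * f_R(eps + (N - 0.5) * eV) ** 2 / delta[N - 1])

    d = {N: 1 + 0.5 * s * (1 - g_R(eps + (N - 0.5) * eV))}
    for n in range(N - 1, -N, -1):
        d[n] = (1 + 0.5 * s * (g_R(eps + (n + 0.5) * eV) - g_R(eps + (n - 0.5) * eV))
                + 0.25 * Tr * f_R(eps + (n + 0.5) * eV) ** 2 / d[n + 1])
    d[-N] = (1 + 0.5 * s * (g_R(eps + (-N + 0.5) * eV) - 1)
             + 0.25 * Tr * f_R(eps + (-N + 0.5) * eV) ** 2 / d[-N + 1])
    return d, delta

def zeta0_R(eps, eV, Tr, N):
    d, delta = recurrences(eps, eV, Tr, N)
    X = (d[0] + delta[0] - 1
         - 0.5 * np.sqrt(Tr) * (g_R(eps + 0.5 * eV) - g_R(eps - 0.5 * eV)))
    return 1.0 / X

# N must exceed the number of MAR steps needed to escape the gap, ~ 2*Delta/eV.
print(zeta0_R(eps=-0.95, eV=0.1, Tr=0.95, N=60))
```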
|
2310.09442 | Learning Agile Locomotion and Adaptive Behaviors via RL-augmented MPC | In the context of legged robots, adaptive behavior involves adaptive
balancing and adaptive swing foot reflection. While adaptive balancing
counteracts perturbations to the robot, adaptive swing foot reflection helps
the robot to navigate intricate terrains without foot entrapment. In this
paper, we manage to bring both aspects of adaptive behavior to quadruped
locomotion by combining RL and MPC while improving the robustness and agility
of blind legged locomotion. This integration leverages MPC's strength in
predictive capabilities and RL's adeptness in drawing from past experiences.
Unlike traditional locomotion controls that separate stance foot control and
swing foot trajectory, our innovative approach unifies them, addressing their
lack of synchronization. At the heart of our contribution is the synthesis of
stance foot control with swing foot reflection, improving agility and
robustness in locomotion with adaptive behavior. A hallmark of our approach is
robust blind stair climbing through swing foot reflection. Moreover, we
intentionally designed the learning module as a general plugin for different
robot platforms. We trained the policy and implemented our approach on the
Unitree A1 robot, achieving impressive results: a peak turn rate of 8.5 rad/s,
a peak running speed of 3 m/s, and steering at a speed of 2.5 m/s. Remarkably,
this framework also allows the robot to maintain stable locomotion while
bearing an unexpected load of 10 kg, or 83\% of its body mass. We further
demonstrate the generalizability and robustness of the same policy where it
realizes zero-shot transfer to different robot platforms like Go1 and AlienGo
robots for load carrying. Code is made available for the use of the research
community at https://github.com/DRCL-USC/RL_augmented_MPC.git | Yiyu Chen, Quan Nguyen | 2023-10-13T23:23:39Z | http://arxiv.org/abs/2310.09442v2 | # Learning Agile Locomotion and Adaptive Behaviors via RL-augmented MPC
###### Abstract
In the context of legged robots, adaptive behavior involves adaptive balancing and adaptive swing foot reflection. While adaptive balancing counteracts perturbations to the robot, adaptive swing foot reflection helps the robot to navigate intricate terrains without foot entrapment. In this paper, we manage to bring both aspects of adaptive behavior to quadruped locomotion by combining RL and MPC while improving the robustness and agility of blind legged locomotion. This integration leverages MPC's strength in predictive capabilities and RL's adeptness in drawing from past experiences. Unlike traditional locomotion controls that separate stance foot control and swing foot trajectory, our innovative approach unifies them, addressing their lack of synchronization. At the heart of our contribution is the synthesis of stance foot control with swing foot reflection, improving agility and robustness in locomotion with adaptive behavior. A hallmark of our approach is robust blind stair climbing through swing foot reflection. Moreover, we intentionally designed the learning module as a general plugin for different robot platforms. We trained the policy and implemented our approach on the Unitree A1 robot, achieving impressive results: a peak turn rate of 8.5 rad/s, a peak running speed of 3 m/s, and steering at a speed of 2.5 m/s. Remarkably, this framework also allows the robot to maintain stable locomotion while bearing an unexpected load of 10 kg, or 83% of its body mass. We further demonstrate the generalizability and robustness of the same policy where it realizes zero-shot transfer to different robot platforms like Go1 and AlienGo robots for load carrying. Code is made available for the use of the research community at [https://github.com/DRCL-USC/RL_augmented_MPC.git](https://github.com/DRCL-USC/RL_augmented_MPC.git)
## I Introduction
In the quest for the practical deployment of quadruped robots in real-world scenarios, the integration of adaptive behavior into their motion remains a challenge. This adaptive behavior consists of two essential dimensions: 1) real-time adaptation to external perturbations, and 2) self-adjustments such as foot reflection when a robot's foot gets stuck in an obstacle. Current advancements in legged mobility predominantly lean on Model Predictive Control (MPC) and Reinforcement Learning (RL). MPC, which employs real-time optimization over a set horizon to compute the optimal control sequence, often requires substantial computational resources and careful parameter tuning. RL, while notable for exceptional adeptness at navigating uneven terrains and unexpected disturbances, demands extensive offline computation and careful reward tuning, and often produces policies tailored to specific robots. To combine the benefits of both MPC and RL, we present an innovative approach synthesizing the strengths of model-based control and reinforcement learning. Our central objective is to bolster agility, robustness, and adaptive behavior in blind locomotion through the integration of stance foot control and swing foot reflection using RL.
MPC, as validated by multiple studies [1, 2, 3, 4, 5, 6], has gained traction in the legged robot community for its capability to handle the hybrid nature of quadrupedal locomotion under constraints. Central to the MPC paradigm is its prediction based on simplified dynamics, offering a future state estimation while preserving real-time computational feasibility. Yet, these simplified models inherently come with model uncertainties. For instance, the single rigid body (SRB) model overlooks the resultant dynamics from leg momentum and external disturbance. Further complexities
Fig. 1: Experiment result highlights. a) High-speed steering in place; b) High-speed running; c) High-speed running and steering; d) Generalization of the same policy across different robot platforms; e) Blind stair climbing with swing foot reflection.
arise when translating these optimized trajectories into joint-level commands. Existing methodologies adopt a hierarchical control structure for this conversion, using techniques like Jacobian mapping [7], control barrier functions [8], and joint-level whole body control [2, 9]. The problem of addressing uncertainties in legged motion has been approached with adaptive control methods [10, 11, 12, 13, 14, 15, 16], adjusting the control parameters online. In addition, an implicit limitation of the MPC framework is its inherent decoupling of stance foot control from swing foot control due to the intricate modeling challenges of their interplay.
Reinforcement learning offers a tantalizing alternative [17, 18, 19, 20, 21, 22, 23, 24], implicitly deciphering the dynamics interplay between the stance and swing control for all kinds of locomotion. In this paradigm, agents continually engage with environments, iteratively refining their action strategies based on the reward, resulting in the mastery of complex terrains and adaptive behavior attuned to environmental dynamics. Nevertheless, deploying RL in real-world scenarios raises legitimate concerns about its generalizability and safety. The aforementioned challenges underscore the urgency for a control framework evolution, one that concurrently addresses model uncertainties and optimizes swing foot reflection with regularized motions.
Researchers integrate MPC and reinforcement learning to combine the benefits of RL and model-based control. In [25], the RL framework utilizes model-based optimal control to generate reference motion and then leverages a motion imitation technique [26] to learn versatile legged motion. RL [27, 28] is also utilized to learn parameters or dynamics in the MPC problem. [29] proposes an RL-based control of the accelerations of an SRB model which allows robust sim-to-real transfer. [30] leverages RL to infer the set of unmodeled dynamics for the RMPC framework for adaptive locomotion. Additionally, [31] proposed an online supervised learning technique to derive a residual model to address the model uncertainties of a model-based controller. What sets our research apart from these studies is our innovative synthesis of stance foot control and swing foot reflection by leveraging RL, enabling adaptive balancing and adaptive foot reflection within one unified framework.
In this paper, we present a novel RL-augmented MPC framework tailored to enhance blind locomotion for quadruped robots. By leveraging RL, we synthesize stance foot control and swing foot reflection from the linear MPC framework [1], specifically addressing the inherent issues of model uncertainty and the pre-defined swing foot trajectory. By tackling the dual challenges of adapting to model uncertainty and optimizing foot reflection, we successfully demonstrated improved agility, robustness, and adaptive behavior in blind legged locomotion. Notably, our research introduces robot-agnostic action and observation spaces to guarantee the policy's generalizability across various robot platforms. Our proposed framework has the following contributions:
* We introduce a novel RL-augmented MPC framework designed for adaptive blind quadruped locomotion, encompassing high-speed movement, uncertain dynamics adaptation, and reactive obstacle traversal.
* Our contribution uniquely combines stance foot force control with swing foot reflection, addressing model uncertainties and bridging foot swing and force control, overcoming inherent challenges in the nominal MPC framework.
* Our framework provides a universal RL module for MPC, realizing zero-shot transfer across various robot platforms, and showcasing state-of-the-art performance on the Unitree A1, Go1, and AlienGo robots.
The paper is organized as follows: Sec. II presents our novel RL augmented MPC framework. Then, experimental validation is presented in Sec. III. Sec. IV provides concluding remarks.
## II Proposed Approach
### _System Overview_
Illustrated in Fig. 2 is the overall system architecture, which is built upon [7]. The user provides velocity commands to the robot, while an event-driven finite state machine determines the gait schedule. Within the locomotion control module, the MPC is in charge of stance foot control. In contrast, the swing foot control determines the desired foot positions \(p_{f}\). Force commands \(F\) are converted into joint torques using the Jacobian, and concurrently, the desired foot positions are mapped to corresponding joint angles through inverse kinematics. A Kalman filter facilitates state estimation, delivering proprioception data to both the locomotion control and the adaptive behavior policy.
Central to this system is our innovative adaptive behavior policy. Its primary aim is to impose supplementary actions onto the baseline MPC framework to bring adaptive behavior to the robot while ensuring performance across multiple robot platforms. This policy processes past commands, proprioception, accelerations derived from the MPC force commands, and the heuristic foot placement. The result is dynamic compensation (explained in Sec. II-B) for the MPC and an offset joint angle \(\Delta q\) for swing foot reflection (explained in Sec. II-C). The adaptive behavior policy (explained in Sec. II-D) learns to synthesize both the dynamics compensation essential for force control and the reflection of the swing foot trajectory, given the gait schedule and velocity commands. More than mere compensation, our policy amplifies the agility and robustness of the locomotion. Importantly, it achieves broad generalizability across different robot platforms without resorting to domain randomization.
### _MPC with Dynamics Compensation_
To address the challenges of model uncertainties while retaining the generalizability across different robot platforms, we build upon the linear MPC setup in [1]. Model uncertainties inherently arise from the simplifications made when abstracting away from the full-order dynamics, for instance leg dynamics and joint-level torque mapping. Furthermore, unpredictable external perturbations such as varying loads and terrain conditions introduce unknown forces and moments acting on the robot's body, contributing to the model's uncertainty.
To capture these unknown dynamics, we introduce time-varying, locally-linear acceleration terms to incorporate into the linearized continuous-time state space equation. The dynamics compensation terms are encapsulated \(\Delta\mathbf{\alpha}\) and \(\Delta\mathbf{a}\), which represent angular and linear accelerations respectively in the continuous-time state space equation:
\[\frac{d}{dt}\left[\begin{array}{c}\mathbf{\Theta}\\ \mathbf{p}\\ \mathbf{\omega}\\ \dot{\mathbf{p}}\end{array}\right]=\left[\begin{array}{cccc}\mathbf{0}_{3}&\mathbf{0}_ {3}&\mathbf{R}_{x}(\psi)&\mathbf{0}_{3}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{I}_{3}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}\end{array}\right]\left[\begin{array} []{c}\mathbf{\Theta}\\ \mathbf{p}\\ \mathbf{\omega}\\ \dot{\mathbf{p}}\end{array}\right]+\] \[\left[\begin{array}{cccc}\mathbf{0}_{3}&...&\mathbf{0}_{3}\\ \mathbf{0}_{3}&...&\mathbf{0}_{3}\\ \mathbf{\hat{I}}^{-1}[\mathbf{r}_{1}]_{\times}&...&\mathbf{\hat{I}}^{-1}[\mathbf{r}_{n}]_{ \times}\\ \mathbf{1}_{3}/m&...&\mathbf{1}_{3}/m\end{array}\right]\left[\begin{array}{c}\mathbf{F}_ {0}\\ \cdot\\ \cdot\\ \cdot\\ \mathbf{F}_{n}\end{array}\right]+\left[\begin{array}{c}0_{3\times 1}\\ 0_{3\times 1}\\ \Delta\mathbf{\alpha}\\ \Delta\mathbf{\alpha}+\mathbf{g}\end{array}\right] \tag{1}\]
where \(\mathbf{\Theta}\) represents the robot's orientation as a vector of Euler angles \([\phi,\theta,\psi]^{T}\), \(R(\psi)\) is the rotation matrix corresponding to the yaw angle \(\psi\), \(\mathbf{p}\) and \(\dot{\mathbf{p}}\) is the COM position and velocity of the robot, \(\omega\) is the angular velocity of the robot, \(r_{i}\) is the vector from the robot's COM to foot \(i\), \(F_{i}\) is the ground reaction force for leg \(i\), \(I\) is the inertia, \(m\) is the mass of the robot, and \(\mathbf{g}\) is the gravity term.
Equation (1) can be rewritten with an auxiliary state to represent the dynamics into a convenient state-space form:
\[\dot{\mathbf{x}}_{c}(t)=\mathbf{A}_{c}(\psi,\Delta\mathbf{\alpha},\Delta\mathbf{a})\mathbf{x}_{c }(t)+\mathbf{B}_{c}(\mathbf{r}_{1\ldots n},\psi)\mathbf{u}(t) \tag{2}\]
where
\[\mathbf{x}_{c}(t)=\left[\begin{array}{cccc}\mathbf{\Theta}&\mathbf{p}& \mathbf{\omega}&\dot{\mathbf{p}}&1\end{array}\right]^{T}\in\mathbb{R}^{13}\] \[\mathbf{A}_{c}(t)=\left[\begin{array}{cccc}\mathbf{0}_{3}&\mathbf{0}_{3}& \mathbf{R}_{x}(\psi)&\mathbf{0}_{3}&\mathbf{0}_{3\times 1}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{I}_{3}&\mathbf{0}_{3\times 1}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\Delta\mathbf{\alpha}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\Delta\mathbf{a}+\mathbf{g}\\ \mathbf{0}_{1\times 3}&\mathbf{0}_{1\times 3}&\mathbf{0}_{1\times 3}&\mathbf{0}_{1\times 3}&0 \end{array}\right]\in\mathbb{R}^{13\times 13} \tag{3}\]
Then, this formulation is discretized and formulated as a QP problem as in [1].
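A minimal sketch (ours, not the authors' released implementation) of how the compensation accelerations enter the linearized model of Eqs. (1)-(3), together with a simple zero-order-hold discretization, could look as follows; the yaw-rotation convention and the frame of the inertia tensor are assumptions of this sketch.

```python
import numpy as np
from scipy.linalg import expm

def skew(r):
    """Skew-symmetric matrix such that skew(r) @ v = r x v."""
    return np.array([[0.0,  -r[2],  r[1]],
                     [r[2],  0.0,  -r[0]],
                     [-r[1], r[0],  0.0]])

def yaw_rotation(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def build_state_space(psi, r_feet, inertia, m, d_alpha, d_a,
                      g=np.array([0.0, 0.0, -9.81])):
    """Continuous-time A_c (13x13) and B_c (13x3n) of Eqs. (1)-(3); the learned
    compensations d_alpha, d_a are written into the last column of A_c, i.e.
    they multiply the auxiliary constant state."""
    n = len(r_feet)
    A = np.zeros((13, 13))
    A[0:3, 6:9] = yaw_rotation(psi)        # R(psi) block mapping omega to Euler rates
    A[3:6, 9:12] = np.eye(3)               # dp/dt = v
    A[6:9, 12] = d_alpha                   # angular-acceleration compensation
    A[9:12, 12] = d_a + g                  # linear-acceleration compensation + gravity

    I_inv = np.linalg.inv(inertia)
    B = np.zeros((13, 3 * n))
    for i, r in enumerate(r_feet):
        B[6:9, 3 * i:3 * i + 3] = I_inv @ skew(r)   # torque of foot force about COM
        B[9:12, 3 * i:3 * i + 3] = np.eye(3) / m    # linear acceleration from foot force
    return A, B

def discretize(A, B, dt):
    """Zero-order-hold discretization via the augmented matrix exponential."""
    n, m_ = A.shape[0], B.shape[1]
    M = np.zeros((n + m_, n + m_))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * dt)
    return Md[:n, :n], Md[:n, n:]
```

Because the compensation terms only modify one column of \(A_{c}\), the dimensions of all matrices in the resulting QP are unchanged with respect to [1].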
Opting for accelerations over forces and moments offers a broader view of disturbances. This is particularly pertinent considering robots vary in their ability to withstand external forces and moments. Consequently, we choose a metric that stands independent of the unique mass characteristics specific to each robot, recognizing that these attributes play a pivotal role in adaptability. It is noteworthy that even though the inertia of robots is usually minimal, any oversight in compensating moments can critically impair controller efficacy. Angular acceleration \(\mathbf{\Delta\alpha}\), in this respect, offers a more intuitive and efficient mechanism to modulate the robot's orientation. To delve deeper into this nuance: the process of translating force/moment to acceleration inherently mandates knowledge of the robot's mass and inertia. This perspective inherently considers the robot's mass and inertia, allowing our approach to seamlessly apply across various robotic platforms regardless of mass and inertia differences. Moreover, this design choice also allows a compact formulation of the optimization problem as the size of all the matrices remains the same as in [1], which also facilitates onboard computation.
### _Adaptive Foot Swing Reflection_
Swing foot control in legged robots involves two main components: 1) foot placement and 2) swing trajectory. The foot placement determines the contact location, which is crucial for stance control, while the swing trajectory helps the foot reflect over obstacles. Our goal transcends the conventional scope of adaptive control that solely addresses model uncertainties. Instead, we seek to attain adaptive behavior in both the foot placement and the swing trajectory, responding reactively to external disturbances such as external loads or varying terrain.
In the baseline MPC framework, foot placement follows a predetermined heuristic[2] based on velocity commands, feedback, and stance time:
\[\mathbf{p}_{heuristic,i}=\mathbf{p}_{hip,i}+\frac{T_{stance}}{2}\mathbf{v}+k( \mathbf{v}-\mathbf{v}_{cmd})\\ +\frac{1}{2}\sqrt{\frac{z_{0}}{||g||}}\mathbf{v}\times\mathbf{\omega}_{cmd} \tag{4}\]
Fig. 2: System architecture of the proposed framework. The high-level module, framed in blue, includes the adaptive behavior policy and locomotion control module, operating at 33Hz. The low-level module, running at 1kHz, includes leg control (using Jacobian and IK), state estimation, and the robot’s hardware. The \(F/m\) block normalizes the MPC force command into accelerations as a robot-agnostic input to the adaptive behavior policy.
where \(\mathbf{p}_{hip,i}\) is the hip location in the world frame for leg \(i\), \(T_{stance}\) is the scheduled stance phase time, \(\mathbf{v}\) is the velocity of the robot's COM, \(\mathbf{v_{cmd}}\) is the velocity command, \(z_{0}\) is the nominal locomotion height, \(\mathbf{\omega}_{cmd}\) is the yaw rate command, and we use a feedback gain \(k\) of \(0.03\) in this setup. We then employ a pre-defined Bezier curve to interpolate the foot swing trajectory, outputting \(p_{f}\) as the desired foot location for the swing foot.
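A minimal sketch of the heuristic in Eq. (4), assuming all vectors are 3D and expressed in the world frame (the function name is a placeholder):

```python
import numpy as np

def heuristic_foot_placement(p_hip, v, v_cmd, omega_cmd, T_stance, z0, k=0.03, g=9.81):
    """Nominal foothold for one leg following Eq. (4)."""
    return (p_hip
            + 0.5 * T_stance * v                                  # symmetry over the scheduled stance time
            + k * (v - v_cmd)                                     # velocity-tracking feedback
            + 0.5 * np.sqrt(z0 / g) * np.cross(v, omega_cmd))     # correction for commanded turning
```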
Our methodology places a heightened emphasis on adaptive swing reflection. Unlike traditional approaches that manually adjust trajectories, our system utilizes an offset joint angle, \(\Delta q\), to modulate the nominal swing trajectory. This doesn't just modify foot placement; it also dynamically adjusts its swing trajectory over discrete obstacles. By prioritizing adaptive swing control, our system offers a more holistic and responsive solution, synchronously adjusting both the trajectory shape and final foot placement in real time, all under the guidance of one unified adaptive behavior policy. When combined with dynamic compensation, our action space enhances the robustness and agility of legged locomotion.
### _Learning to Synthesize Stance Control and Swing Control for Adaptive Behavior_
In this section, we present our novel learning framework that integrates the strengths of the traditional MPC approach with the dynamic adaptability offered by Reinforcement Learning (RL). While the baseline MPC framework excels in forward-looking predictions, RL is capable of reasoning over past experiences. Our primary goal transcends merely addressing model uncertainties; we strive to synthesize these decoupled control realms (stance foot control and swing foot control) using RL. This method unravels the intricate connections between stance foot and swing foot controls. This synthesis means that the force optimization is constantly evolving, enriched by insights from the swing foot's heuristic and past proprioception data. In parallel, the swing foot's trajectory and placement are fine-tuned based on cues from the force optimization and past proprioception data. This seamless integration and reciprocal adaptation ensure that the robot exhibits adaptive behavior under different conditions, underscoring the power and efficacy of our proposed approach.
#### Iii-D1 Action Space
Considering that the MPC problem intrinsically incorporates the robot's mass properties, we can design robot-agnostic action space to account for model uncertainty while ensuring generalizability. Our learning module computes both dynamics compensation components - namely \([\Delta\mathbf{\alpha},\Delta\mathbf{a}]\) - and swing foot reflection in the form of joint angle offset \(\Delta q\) from the nominal swing trajectory. In other words, our adaptive behavior policy seeks to derive supplementary actions that can be seamlessly layered onto the locomotion controls. This design ensures improved robustness and agility in locomotion performance and substantially facilitates the generalizability of the framework.
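Concretely, the policy output can be viewed as a residual that is split and layered onto the nominal controllers; the sketch below assumes a 12-DoF quadruped and a fixed ordering of the action vector (both assumptions, not specified above):

```python
import numpy as np

def split_policy_action(action, nominal_swing_q):
    """Split one policy output into MPC dynamics compensation and swing reflection."""
    delta_alpha = action[0:3]      # angular-acceleration compensation [rad/s^2], fed into Eq. (3)
    delta_a     = action[3:6]      # linear-acceleration compensation  [m/s^2],  fed into Eq. (3)
    delta_q     = action[6:18]     # joint-angle offsets [rad] added to the nominal swing trajectory
    swing_q_des = np.asarray(nominal_swing_q) + delta_q
    return delta_alpha, delta_a, swing_q_des
```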
#### Iii-D2 Observation Space
To ensure generalizability, the observation space must remain independent of the robot's mass properties, because the MPC problem inherently considers them. At every time \(t\), the policy obtains an observation and applies supplementary actions to the nominal MPC control framework. As presented in Fig. 2, the observation for the policy takes a history window of 5 MPC horizons, capturing parameters including the joint angles and velocities \(\mathbf{q}\) and \(\mathbf{\dot{q}}\), linear and angular velocities of the robot \(\mathbf{v}_{com}\) and \(\mathbf{\omega}_{com}\), planned contact boolean of every foot from the given gait schedule \(\mathbf{s}_{\phi}\), the actual contact state of each foot from the contact sensor data \(\mathbf{s}_{actual}\), desired COM state from user input \(\mathbf{v}_{des}\) and \(\mathbf{\omega}_{des}\), the heuristic foot placement of every foot \(\mathbf{p}_{heuristic}\), and the acceleration from the force commands of the MPC optimization \(\mathbf{F}/m\). Similar to dynamics compensation, the ground reaction force command is expressed in terms of acceleration. This consideration is pivotal, given that robots come with diverse mass properties, prompting the MPC to produce force commands on varying scales. By ensuring the observation space remains agnostic to a robot's unique internal attributes, we substantially facilitate the generalizability of our framework.
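A sketch of how such an observation could be assembled over the 5-horizon history window (feature ordering and normalization are assumptions; the class name is a placeholder):

```python
import numpy as np
from collections import deque

class ObservationBuffer:
    """Stacks the per-horizon features described above into the policy observation."""
    def __init__(self, horizon=5):
        self.frames = deque(maxlen=horizon)

    def push(self, q, dq, v_com, w_com, s_planned, s_actual, v_des, w_des,
             p_heuristic, accel_cmd):
        # accel_cmd is the MPC ground-reaction-force command normalized by mass (F/m)
        frame = np.concatenate([np.ravel(x) for x in
                                (q, dq, v_com, w_com, s_planned, s_actual,
                                 v_des, w_des, p_heuristic, accel_cmd)])
        self.frames.append(frame)

    def get(self):
        return np.concatenate(list(self.frames))
```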
#### Iii-D3 Training
In this paper, we employ PPO[32] for training and use the Unitree A1 robot in the simulation. The reward function at time \(t\) is designed to ensure velocity tracking of the robot while minimizing the energy cost.
\[R(t)=w_{1}r_{survival}+w_{2}r_{velocity}+w_{3}r_{energy}+w_{4}r_{ height} \tag{5}\]
where
\[r_{survival}=1\\ r_{velocity}=||\mathbf{v}_{des}-\mathbf{v}_{COM}||+||\mathbf{\omega}_{des}- \mathbf{\omega}||\\ r_{energy}=\sum_{i=1}^{12}||\tau_{i}\dot{q}_{i}||t\\ r_{height}=0.02-||z_{COM}-z_{des}|| \tag{6}\]
and \(w_{i}\) are the corresponding weight factors for each reward term. We use the same reward function, varying only the weights, across the three applications on which we validate our approach. In this work, we approximate the policy using an MLP with hidden layers of [256, 32, 256] neurons and tanh as the activation function. The training of the MLP is performed offline in numerical simulation with Pybullet[33].
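For clarity, a sketch of Eq. (5)-(6) as used during training; the weight values and the interpretation of the trailing \(t\) in the energy term as the control timestep are assumptions:

```python
import numpy as np

def reward(v_des, v_com, w_des, w_com, tau, dq, z_com, z_des, dt, w1, w2, w3, w4):
    """Reward of Eq. (5)-(6); tau and dq are the 12 joint torques and velocities."""
    r_survival = 1.0
    r_velocity = np.linalg.norm(v_des - v_com) + np.linalg.norm(w_des - w_com)
    r_energy   = np.sum(np.abs(tau * dq)) * dt
    r_height   = 0.02 - abs(z_com - z_des)
    return w1 * r_survival + w2 * r_velocity + w3 * r_energy + w4 * r_height
```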
### _Learning with MPC in Simulation_
In the architectural design of our training methodology, the MPC setup holds a central position. The integration of MPC becomes a computation bottleneck, prominently manifested in the form of optimization-induced latency. With the QPOASES[34] solver, the MPC problem's computation time averages \(1ms\). In comparison, a simulation step in Pybullet is computed in less than 0.1\(ms\). The implication here is clear: updating the ground reaction force at each simulation step would drastically decelerate simulation and consequently slow down training. Therefore, our strategy is to update the MPC problem at a less frequent rate, while the lower-level joint commands receive updates much more
frequently. As illustrated in Fig. 2, we update the MPC problem every 30 \(ms\) (the MPC horizon time) to generate the desired ground reaction force command, while the Jacobian maps this force command to joint torques every 1 \(ms\). The policy is set up with an update rate of 33 \(Hz\). This configuration aligns with our hardware experiments, negating the need to constantly run the MPC solver to update the ground reaction force at every control step. Thanks to this efficient setup, our approach not only expedites training but also facilitates sim-to-real transfer of the framework, as it mirrors the exact setup used in our hardware experiments.
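The multi-rate loop can be summarized by the following sketch (the callables are placeholders for the MPC solver, the learned policy, and the low-level/physics step; the rates match the setup described above):

```python
def run_episode(policy, mpc_solve, low_level_step, init_state, num_steps,
                sim_dt=0.001, mpc_dt=0.030):
    """Update MPC and the adaptive policy every 30 ms; map forces to torques at 1 kHz."""
    steps_per_mpc = int(round(mpc_dt / sim_dt))
    state = init_state
    grf_cmd, swing_q_des = None, None
    for step in range(num_steps):
        if step % steps_per_mpc == 0:
            action = policy(state)                           # ~33 Hz adaptive behavior policy
            grf_cmd, swing_q_des = mpc_solve(state, action)  # QP-based MPC with compensation terms
        state = low_level_step(state, grf_cmd, swing_q_des)  # 1 kHz Jacobian/IK torque mapping + physics
    return state
```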
## III Experimental Validation
In this section, we detail the experimental validation of our framework, highlighting the enhanced robustness, agility, and adaptive behavior in blind locomotion. Our approach is rigorously tested across various computation platforms and robotic systems, with all policy learning conducted offline using the Unitree A1 robot model. For the comparative study, we maintained uniformity by adhering to the baseline MPC specifications, which include gait patterns, foot swing height, MPC weights, and joint PD gains. Throughout the experiments, the robot was maneuvered using a joystick by the author. A seamless transfer of all policies to the actual hardware was made possible thanks to the robustness of the MPC framework and the generalizability of the learning framework.
### _High Speed Locomotion_
The primary aim here is to validate our RL-augmented MPC methodology during challenging high-speed maneuvers--both running and turning. Due to their stark dynamical distinctions, these activities provide a rigorous testbed. We use a flying trot gait for these high-speed maneuvers. The learning domain comprises dynamics compensation factors and foot placement offsets, delineated as follows: the angular acceleration within \(\pm[2.0,10.0,2.0]rad/s^{2}\), the linear acceleration bounded by \(\pm[4.0,2.0,3.0]m/s^{2}\), and the joint angle offset set to \(\pm 0.3rad\).
#### Iii-A1 High Speed Turning
We validated our approach with high-speed turning in place, achieving state-of-the-art performance in terms of turning rate. As presented in Fig. 3, we ramp up the yaw rate command to \(\pm 7rad/s\). Impressively, the A1 robot hit a peak turn rate of 8.5 \(rad/s\) in the counter-clockwise direction while maintaining an average yaw rate close to \(7rad/s\), whereas the baseline MPC could not survive this command and failed immediately. As demonstrated in the support video, the policy does not merely rely on adjusting foot placement to enhance motion; a significant contribution comes from the angular acceleration compensation in the roll direction, as shown in Fig. 1. This adjustment by the policy facilitates smoother turns, enabling us to register the swiftest turn ever recorded on the Unitree A1 robot.
#### Iii-A2 High-Speed Running and Steering
Furthermore, we conducted tests involving high-speed running and steering. In the experiments depicted in Fig. 4, when velocity commands were increased to \(3.5m/s\), the baseline MPC failed within a few running steps due to the model uncertainty at high speed. In contrast, our learned policy enabled the robot to withstand disturbances at high speeds, achieving a peak velocity of around \(3m/s\) based on state estimation data. Further tests involved steering at high speed. We first accelerated the robot to \(2.5m/s\) and then applied a yaw rate command of \(0.5rad/s\). Fig. 5 suggests that our learned policy surpassed the baseline MPC in velocity tracking. Prior to turning, our RL-augmented MPC tracked the intended velocity of \(2.5m/s\), while the baseline lagged, peaking at roughly \(2m/s\). Integrating translation and rotation dynamics, our RL-augmented MPC deftly steered at \(2.5m/s\) as highlighted in Fig. 1, while the baseline MPC decelerated to \(1m/s\) to prevent failure.
### _Walking with Significant Model Uncertainty_
In this section, we highlight our framework's effectiveness in managing model uncertainty and enhancing the robustness of locomotion. In the training setup, we randomize the velocity commands for the robot in the body frame over \(\mathbf{v}_{x}\in[-1,1]m/s\), \(\mathbf{v}_{y}\in[-0.5,0.5]m/s\) and \(\mathbf{\omega}_{z}\) within
\([-2.0,2.0]rad/s\) given a trotting gait of 0.3\(s\) gait cycle. We choose trotting as the test gait as it is more difficult than standing or quasi-static walking. Adding to this complexity, random external forces and moments are applied, challenging the robot further. Within this setting, agents learn dynamics compensation and foot placement offsets, with angular acceleration constrained to \(\pm[4.0,10.0,2.0]rad/s^{2}\), linear acceleration at \(\pm[4.0,2.0,8.0]m/s^{2}\) and the joint angle set to \(\pm 0.3rad\).
Notably, even though the policy was primarily trained on flat terrain, it showcased remarkable adaptability on soft, uneven soil terrain, when the A1 robot carries an additional 5\(kg\) load, as depicted in Fig. 1. Moreover, the generalizability of our approach is evident as the policy learned on the A1 robot realizes zero-shot transfer to different robotic platforms like Go1 and AlienGo while adapting effortlessly to different gait cycle timings (see support video). This generalizability is credited to our selected action space and observation space, which is independent of the robot's internal properties.
Our framework also excels in compensating for external moments. The exemplary adaptive behavior of our framework is evidenced in Fig. 6, where we introduce loads that impose additional moments on the robot's body. Despite these challenges, our adaptive behavior policy effectively mitigates the uncertainties. This performance can be attributed to the incorporation of angular acceleration as the key dynamics compensation term. Furthermore, our framework's computational efficiency stands out, comfortably running on a Raspberry Pi 4 board on the Go1 robot. The policy inference and MPC optimization clock in at approximately 1\(ms\) and 3\(ms\), respectively, on the Raspberry Pi 4 board.
### _Blind Walking over Discrete Terrain_
We also validated our framework in the realm of adaptive foot swing reflection, specifically on discrete terrain. We trained the policy to traverse over randomly generated discrete terrains ranging from 8 to 12 \(cm\) in height, with the robot's foot swing height target fixed at \(8cm\). The action space for this scenario comprises angular acceleration within \(\pm[4.0,10.0,2.0]rad/s^{2}\), linear acceleration within \(\pm[2.0,2.0,2.0]m/s^{2}\), and joint angle offsets of \(\pm 0.3rad\).
In this setup, dynamic compensation handles unexpected ground contact, and adaptive foot behavior prevents foot entrapment. Impressively, this policy seamlessly transitioned to blind stair climbing with a stair height of \(13cm\). As presented in Fig. 7, while the standard MPC often led to the robot's foot being trapped, our refined framework adopted adaptive swing trajectories, facilitating an immediate foot reaction upon contact with the discrete obstacle (see support video). This emergent behavior is learned by the policy, and the MPC ensures the robot's stability and movement. The seamless integration is a testament to the effectiveness of our RL-augmented MPC approach for adaptive behavior.
## IV Conclusion
In this research, we have presented an integration of Reinforcement Learning (RL) and Model Predictive Control (MPC), improving the agility and robustness of legged locomotion. Through the integration of MPC's forward-looking predictions and RL's ability to reason on past experiences, our framework has unveiled marked improvements in a robot's dexterity to traverse intricate landscapes and handle unexpected perturbations. Notably, the adoption of adaptive swing foot reflection showcases how the blend of these two methodologies can lead to real-world locomotion improvements. The extensive experimental validations presented further underscore the robustness and generalizability of our proposed RL-augmented MPC framework, setting a new benchmark for state-of-the-art robotic control systems. We envision that this synthesized approach will pave the way for future research such as perceptive locomotion, fostering even more resilient and adaptive behavior for legged robots.
Fig. 6: Comparison between baseline and proposed approach in terms of pitch compensation. The Go1 robot is carrying a load of 5\(kg\) and the A1 robot is carrying a load of 10\(kg\). They all run the same adaptive behavior policy trained on the A1 robot.
Fig. 7: Comparison between baseline and proposed approach for blind stair climbing. The stair height is approximately 13cm and the foot swing height is set to 8cm. |
2302.06560 | Large Scale Multi-Lingual Multi-Modal Summarization Dataset | Significant developments in techniques such as encoder-decoder models have
enabled us to represent information comprising multiple modalities. This
information can further enhance many downstream tasks in the field of
information retrieval and natural language processing; however, improvements in
multi-modal techniques and their performance evaluation require large-scale
multi-modal data which offers sufficient diversity. Multi-lingual modeling for
a variety of tasks like multi-modal summarization, text generation, and
translation leverages information derived from high-quality multi-lingual
annotated data. In this work, we present the current largest multi-lingual
multi-modal summarization dataset (M3LS), and it consists of over a million
instances of document-image pairs along with a professionally annotated
multi-modal summary for each pair. It is derived from news articles published
by British Broadcasting Corporation(BBC) over a decade and spans 20 languages,
targeting diversity across five language roots, it is also the largest
summarization dataset for 13 languages and consists of cross-lingual
summarization data for 2 languages. We formally define the multi-lingual
multi-modal summarization task utilizing our dataset and report baseline scores
from various state-of-the-art summarization techniques in a multi-lingual
setting. We also compare it with many similar datasets to analyze the
uniqueness and difficulty of M3LS. | Yash Verma, Anubhav Jangra, Raghvendra Kumar, Sriparna Saha | 2023-02-13T18:00:23Z | http://arxiv.org/abs/2302.06560v1 | # Large Scale Multi-Lingual Multi-Modal Summarization Dataset
###### Abstract
Significant developments in techniques such as encoder-decoder models have enabled us to represent information comprising multiple modalities. This information can further enhance many downstream tasks in the field of information retrieval and natural language processing; however, improvements in multi-modal techniques and their performance evaluation require large-scale multi-modal data which offers sufficient diversity. Multi-lingual modeling for a variety of tasks like multi-modal summarization, text generation, and translation leverages information derived from high-quality multi-lingual annotated data. In this work, we present the current largest multi-lingual multi-modal summarization dataset (M3LS), and it consists of over a million instances of document-image pairs along with a professionally annotated multi-modal summary for each pair. It is derived from news articles published by British Broadcasting Corporation(BBC) over a decade and spans 20 languages, targeting diversity across five language roots, it is also the largest summarization dataset for 13 languages and consists of cross-lingual summarization data for 2 languages. We formally define the multi-lingual multi-modal summarization task utilizing our dataset and report baseline scores from various state-of-the-art summarization techniques in a multi-lingual setting. We also compare it with many similar datasets to analyze the uniqueness and difficulty of M3LS. 1
Footnote 1: The dataset and code used in this work are made available at [https://github.com/anubhav-jangra/M3LS](https://github.com/anubhav-jangra/M3LS).
## 1 Introduction
The world we live in today is very diverse, with over 7,000+ languages spoken across the globe2. These languages have varying traits and are spoken by communities of various sizes depending upon the popularity of the language. For example, Mandarin consists of over 50,000 _hanzi_ (characters) and is spoken by over 1.117 billion people3, while there exist languages like Rotokas, which is an indigenous language spoken by about 4,320 people on the island of Bougainville, Papua New Guinea, which consists of only 12 letters4.
Footnote 2: [https://www.ethnologue.com/guides/how-many-languages](https://www.ethnologue.com/guides/how-many-languages)
Footnote 3: [https://www.berlitz.com/en-uy/blog/most-spoken-languages-world](https://www.berlitz.com/en-uy/blog/most-spoken-languages-world)
Footnote 4: [https://en.wikipedia.org/wiki/Rotokas_language](https://en.wikipedia.org/wiki/Rotokas_language)
These languages, although very valuable, restrict people to communicating their thoughts only to others who speak the same language. The gift of sight, however, is universally shared by every human being on this planet, irrespective of their culture, ethnicity, or the language that they speak. Through this work we aim to instigate research towards improving existing automatic summarization systems by leveraging information from multiple languages and visual modalities.
Various studies in the past have illustrated how unified summarization frameworks across multiple languages improve the summarization quality over mono-lingual frameworks Wang et al. (2021). Similarly, there have been works in multi-modal summarization that illustrate how multi-modal input can help improve the quality of summarization over text summarization systems Jangra et al. (2020, 2020); Chen and Zhuge (2018); Mukherjee et al. (2021).
Table 1: Size comparison of M3LS with existing multi-modal and multi-lingual summarization datasets.
Additionally, having multiple modalities in the output summary can help improve the overall satisfaction of the user Zhu et al. (2018); Jangra et al. (2021). Multiple modalities can also compensate for the inability of individual modalities to express various aspects of the summary. For instance, it is hard to express abstract concepts like "freedom" or "gravity" through images, while they can be expressed conveniently through text. Similarly, it is very difficult to describe a "Pangolin" to someone who hasn't seen one before.
Hence, in this work we propose the task of Multimodal Multi-lingual Summarization (M3LS), and also release the M3LS dataset5 to facilitate the research in this direction. The dataset comprises 1.1M news articles, spanning 20 languages comprising _English_, _Chinese_, _Spanish_, _Russian_, _French_, _Ukrainian_, _Portuguese_, _Japanese_, _Tamil_, _Hindi_, _Marathi_, _Gujarati_, _Bengali_, _Sinhala_, _Urdu_, _Pashto_, _Indonesian_, _Telugu_, _Punjabi_, and _Nepali_; making it the largest language-spanning summarization dataset. To the best of our knowledge, the proposed dataset is the largest summarization dataset for 13 languages (_Russian_, _Ukrainian_, _Tamil_, _Hindi_, _Marathi_, _Gujarati_, _Bengali_, _Sinhala_, _Urdu_, _Pashto_, _Telugu_, _Punjabi_, and _Nepali_).
Footnote 5: A sample of our dataset is available at [https://github.com/zenquiorra/M3LS](https://github.com/zenquiorra/M3LS), the complete dataset will be released in the camera ready version of the work
We hope that the proposed task and the dataset will instigate and inspire multi-modal and multi-lingual research in less-explored languages for solving various tasks including but not limited to automatic summarization Nallapati et al. (2016); See et al. (2017), article headline generation Jin et al. (2020); Gavrilov et al. (2019); Zhang et al. (2018), keyword extraction Showrov and Sobhan (2019); Lee and Kim (2008); Yao et al. (2019), image caption generation Xu et al. (2015); Bai and An (2018), multi-modal embedding generation Sun et al. (2019); Lu et al. (2019); Li et al. (2019); Zhou et al. (2020), large-scale language modeling Raffel et al. (2020); Devlin et al. (2019) etc.
The major contributions of this work are as follows - _1) We have proposed the multi-modal multi-lingual summarization_ (M3LS) _task. 2) We have released the largest multi-modal summarization dataset that spans 20 languages. 3) The proposed dataset is the largest text summarization dataset for 13 languages. 4) To the best of our knowledge, we present the first ever multi-modal cross-lingual dataset (consisting of Japanese-to-English and English-to-Japanese). 5) We have provided multi-modal summarization baseline results for our dataset and a detailed analysis of the dataset._
## 2 Related Work
The field of text summarization is more than 5 decades old Edmundson (1969), and has evolved to a great extent in recent years. Prior to the advances in sequence-to-sequence frameworks Sutskever et al. (2014), people mainly focused on extractive summarization techniques that aim to generate summary via extracting words, phrases, or sentences Mihalcea and Tarau (2004); Saini et al. (2019); Alguliev et al. (2010). See et al. (2017) proposed the Pointer-Generator Networks, an attentive recurrent neural network based framework Bahdanau et al. (2015). Recent years have seen great progress in research in automatic summarization leveraging transformer based models Zhang et al. (2020); Devlin et al. (2019) and attention mechanism Vaswani et al. (2017).
In this section we discuss the related works showcasing multi-modal datasets and multi-lingual datasets. A detailed size comparison of these datasets with M3LS is shown in Table 1.
### Multi-modal summarization datasets
Multi-modal summarization is the task of summarizing content comprising two or more input modalities. The output can be uni-modal or multi-modal depending on the task. In this section, we discuss existing large-scale multi-modal summarization datasets proposed in the community. We point the readers to Jangra et al. (2021) for a comprehensive survey.
**MSMO**: Zhu et al. (2018) proposed a multi-modal summarization dataset that consists of text and images. The dataset is obtained from the DailyMail6 website and contains 314,581 instances in English language. However, Hasan et al. (2021) illustrated that the DailyMail news highlights lack novel n-grams. Fabbri et al. (2021) also highlighted the inconsistency in quality of some reference summaries in the CNN/DailyMail dataset Nallapati et al. (2016).
Footnote 6: [https://www.dailymail.co.uk/home/index.html](https://www.dailymail.co.uk/home/index.html)
**E-Dailymail**: Chen and Zhuge (2018) proposed the E-Dailymail dataset, which contains text and images extracted from the DailyMail website. The dataset consists of 219,100 instances in English,
containing the input document, article title, images, and image captions.
**How2**: Sanabria et al. (2018) proposed a multimodal summarization dataset consisting of text, video, and audio modalities; it contains over 2000 hours of videos accompanied by the corresponding audio and speech transcriptions.
**MMSS**: Li et al. (2018) proposed a multi-modal summarization dataset consisting of text and images with the aim of proposing an image-aided sentence summarization framework. The dataset has 66K instances in English language, that is generated by extracting sentence-headline pairs from the Gigaword corpus7.
Footnote 7: [https://github.com/harvardnlp/sent-summary](https://github.com/harvardnlp/sent-summary)
**VMSMO**: To the best of our knowledge, Li et al. (2020) proposed the first large-scale asynchronous text-audio-video summarization dataset. The dataset is generated from the famous microblogging platform Sina Weibo8, and comprises of 184,920 instances in Chinese language.
Footnote 8: [http://ir.weibo.com/](http://ir.weibo.com/)
Similar trends of incorporating multiple modalities in language tasks can also be noticed in several tasks like question answering (Singh et al., 2021), translation (Elliott and Kadar, 2017), sentiment analysis (Soleymani et al., 2017), lexico-semantic classification (Jha et al., 2022), keyword extraction (Verma et al., 2022) etc.
### Multi-lingual Text Summarization Datasets
The study of leveraging multiple languages to improve summarization quality has gained popularity over the past few years. There has been a lot of research in the bi-lingual setting; however, in this work, we limit ourselves to discussing multi-lingual summarization datasets to be concise.
**MLSUM** : Scialom et al. (2020) proposed the MLSUM dataset that consists of 1.5 million news articles obtained from the Dailymail/CNN websites. The dataset spans five languages - French, German, Spanish, Russian and Turkish.
**XL-Sum**: Hasan et al. (2021) proposed the XL-Sum dataset that consists of 1.35 million articles in 44 languages obtained from BBC news, making it the most language-diverse summarization dataset to date. However, 25 of these 44 languages do not contain even 10,000 instances, making them insufficient for training any language model.
**WikiLingua**: Ladhak et al. (2020) proposed the WikiLingua dataset, which is the largest parallel multi-lingual summarization dataset to date. It consists of 770K instances in English, with parallel instances in 17 other languages for varying numbers of English articles.
**MLGSum**: Wang et al. (2021) proposed the MLGSum dataset that consists of articles from various news providers such as BBC, france243 and select faz. The dataset has five high-resource and seven low-resource languages, with a total of 1.1 million instances, and is a rich source for text summarization for German language with 500K instances.
We observe that multiple popular datasets (see Table 1) in multimodal summarization and multilingual summarization are useful for both technique evaluation and technique improvisation. However, the combined field of multilingual multimodal summarization has remained largely unexplored, and it can be attributed to the lack of dedicated high quality dataset and formalizing it as a problem statement. Hence, we formally define the M3LS task and discuss the dataset addressing the problem further.
## 3 M3LS Task
For each language \(l_{k}\in L\), where \(L\) is the set of all languages, we have data \(M^{l_{k}}=<T^{l_{k}},I^{l_{k}}>\), where \(T^{l_{k}}=\left\{t_{1}^{l_{k}},t_{2}^{l_{k}},\ldots,t_{|T|}^{l_{k}}\right\}\) is a set of documents and \(I^{l_{k}}=\left\{I^{t_{1}^{l_{k}}},I^{t_{2}^{l_{k}}},\ldots,I^{t_{|T|}^{l_{k}}}\right\}\) is a set of image collections, where \(I^{t_{j}^{l_{k}}}=\left\{i_{1},i_{2},\ldots,i_{|I|}\right\}\) denotes the set of images belonging to the document \(t_{j}^{l_{k}}\in T^{l_{k}}\) and \(|.|\) denotes the cardinality of a set.
The task is to obtain a function \(F\) that maps documents \(t_{j}^{l_{k_{1}}}\in T^{l_{k_{1}}}\) in language \(l_{k_{1}}\), along with their corresponding images \(I^{t_{j}^{l_{k_{1}}}}\in I^{l_{k_{1}}}\), to a set of multi-modal summaries in the target language \(l_{k_{2}}\), comprising text summaries (denoted by \(O^{l_{k_{2}}}\)) along with images from the input (denoted by \(I^{l_{k_{1}}}\)).
\[F:<T^{l_{k_{1}}},I^{l_{k_{1}}}>\rightarrow<O^{l_{k_{2}}},I^{l_{k_{1}}}> \tag{1}\]
When \(k_{1}\neq k_{2}\), we have multi-modal cross-lingual summarization; otherwise, the task is multi-modal mono-lingual summarization. A graphical representation of the task is shown in Figure 1.
## 4 M3LS Dataset
Through the M3LS task, we motivate the need for a multi-modal multi-lingual dataset by studying the
developments in summarization techniques such as secondary enhancements using images with multi-modal output Zhu et al. (2018), video-based multi-modal summarization Li et al. (2020), and multi-objective optimization Jangra et al. (2020). On the other hand, multi-lingual transformer-based models such as that of Xue et al. (2020) now have publicly available checkpoints fine-tuned for multiple language modelling tasks, including multi-lingual summarization.
Developing such models requires high-quality heterogeneous data, and improving models that employ multi-modal shared attention layers requires annotated image-text pairs for the specific language task. To address these issues, we present M3LS, and in this section we discuss the various steps involved in its construction.
### Dataset Construction
We explore the news domain, as it is one of the most abundant and readily available domains and covers articles in multiple topics, while describing the events and lacking extreme bias. We analyzed the structure of articles and surveyed multiple news providers before finalizing on BBC News, which provides full sentence summaries in a uniform structured format across multiple languages. The summaries are professionally created by the author which ensures the quality of the data. We explain the steps involved in creating the M3LS dataset and discuss various aspects of the data.
**BBC News**: BBC News9 is a division of British Broadcasting Corporation responsible for gathering and broadcasting current news affairs. Each BBC news article has a text summary comprising complete sentences in the present tense, avoiding opinions and sensationalism. We cover 20 different languages with summaries written in corresponding languages. We extract data from various parts of the webpage as shown in Figure 2.
Footnote 9: [https://www.bbc.com/news](https://www.bbc.com/news)
**Obtaining Articles**: We obtain links to articles from the corresponding Twitter10 pages for each BBC language news dataset. To extend the dataset, we scrape11 valid links12 obtained from the parsed articles of each language.
Footnote 11: Data is collected in accordance with the terms and conditions mentioned on the website
Footnote 12: A link is valid if it contains a BBC article summary for the corresponding domain.
The final collection of links is scraped separately13 to obtain the final dataset. Since these links are showcased on the corresponding Twitter page, they point to articles covering topics of interest and high popularity.
Figure 1: Proposed M3LS task.
Figure 2: Snapshot format of a webpage used in development of M3LS, and various features extracted during the scraping procedure
We extend the dataset by recursively extracting links from suggestions and hyperlinks within a webpage.
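A minimal sketch of this link-expansion step (the library choice, the URL pattern, and the `is_valid` predicate implementing the rule of footnote 12 are assumptions):

```python
import requests
from bs4 import BeautifulSoup
from collections import deque

def crawl_article_links(seed_links, is_valid, max_articles=1000):
    """Breadth-first expansion starting from the Twitter-sourced seed links."""
    seen, queue, articles = set(seed_links), deque(seed_links), []
    while queue and len(articles) < max_articles:
        url = queue.popleft()
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        if is_valid(url, soup):                  # keep only pages with a BBC article summary
            articles.append(url)
        for a in soup.find_all("a", href=True):
            link = a["href"]
            if link.startswith("https://www.bbc.com") and link not in seen:
                seen.add(link)
                queue.append(link)
    return articles
```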
**Structuring the Data**: We obtain various features from the webpage as shown in Figure 2 and compile them in a JSON format; we also provide a dedicated parser, instructions, and a tutorial for easy access to the features of any instance. The data is freely available for use in accordance with the terms and conditions of BBC News; we discuss this in detail at the link where our dataset is uploaded.
**Text Validation**: In order to ensure high-quality text from the source, we manually read 10 instances from each language14 from the collected links to verify if the articles are descriptive in nature and consist of text written in complete sentences.
Footnote 14: For languages unknown to the authors, we use Google Translate[https://translate.google.com](https://translate.google.com) to translate the content in English language
**Summary Validation**: We manually checked the summary quality for 100 articles each in 4 languages15 from our dataset and validated whether the given summary captures the information represented in the text. For every article, after carefully reading the text, we assign the gold summary a score between 1 and 5, with 5 denoting a summary that captures most of the important information from the article, also taking into account parameters like the summary length and the length of the article. We observe that more than 70 articles per evaluated language obtain a score above 4 out of 5 in our analysis. Given the uniformity of articles published by BBC across domains, we expect this to hold for every language in our dataset.
Footnote 15: We restrict ourselves to 4 languages (English, Hindi, Bengali and Marathi) due to the understanding of languages of the authors presenting this work
**Final Dataset**: In final dataset, each news article contains the text document, images with corresponding captions, keywords, links to related news articles, and a multi-modal summary comprising of a few sentences and an image.
**Cross-lingual Dataset**: Our cross-lingual dataset contains all features from our final dataset, along with multi-modal summaries consisting of text in another language. It is obtained from the links given by the author within each Japanese-language article to the corresponding article in English. We manually checked the information provided in both articles using Google Translate16 for 100 instances to verify the similarity of the content and summaries provided.
Footnote 16: [https://google.com/translate](https://google.com/translate)
**Train-Test-Validation split:** The dataset has 1.2 million news articles which we split into 80% training, 10% test and 10% validation for languages having \(\leq\) 50,000 articles, otherwise we select 90% data for training, 5% for testing and 5% as validation split.
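The split rule can be expressed compactly as follows (a sketch; the shuffling seed is an assumption):

```python
import random

def split_language(instance_ids, threshold=50_000, seed=0):
    """80/10/10 split for languages with at most `threshold` articles, else 90/5/5."""
    ids = list(instance_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    train_frac, test_frac = (0.8, 0.1) if n <= threshold else (0.9, 0.05)
    n_train, n_test = int(train_frac * n), int(test_frac * n)
    return ids[:n_train], ids[n_train:n_train + n_test], ids[n_train + n_test:]
```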
## 5 Dataset Analysis
### Overview
The M3LS dataset has 1.11M+ multi-lingual multi-modal instances across 20 languages and over 9K cross-lingual multi-modal instances for English-Japanese language pair. The dataset can be categorized into 8 high resource languages and 12 low resource languages17 (refer to Appendix B for more details). The chosen languages originate from different parts of the globe, and belong to 5 different language roots: _Indo-European_, _Austronesian_, _Japanic_, _Dravidian_, and _Sino-Tibetan_.
Footnote 17: The categorization is done based on a threshold value of 50K data instances.
M3LS dataset is quite diverse, with the fewest articles for Sinhala (10,148) and the most for English (376,367). The dataset becomes even more complex and challenging with different sizes of input documents for different languages, with document size varying from around 330 tokens to over 2800+ tokens. The dataset articles cover a wide time span, with articles from 2009 to 2021 (refer to Appendix A for more details).
We hope that the M3LS dataset will instigate and inspire research in less-explored languages, since 14 out of these 20 languages covered by the dataset are among the top-20 most spoken languages in the world18; this diversity helps in modelling tasks for both well-explored and less-explored languages.
Footnote 18: [https://lingua.edu/theX2D20%2DmostX2Dspoken%2Dlanguage%2DinX2Dthe%2Dworld%2DinX2D2022/](https://lingua.edu/theX2D20%2DmostX2Dspoken%2Dlanguage%2DinX2Dthe%2Dworld%2DinX2D2022/)
### Dataset Comparison
To study the size and span of our dataset, we compare M3LS with other summarization datasets extracted from the BBC News domain. We found that XSum contains 53% of the tokens in our dataset, while XL-Sum contains 58% of the tokens in our dataset across all languages present in M3LS. However, both are uni-modal in nature, and XSum is additionally uni-lingual. We observe that M3LS is considerably larger than XSum, while exceeding XL-Sum by a factor of 2-3 for almost all individual languages. Both of these datasets are used to train and fine-tune several state-of-the-art summarization models like Pegasus; hence we believe that M3LS will offer wider and better language modelling support in terms of size and diversity for the languages present in it, with the additional benefit of multi-modality.
## 6 Experiments
### Setup
Depending upon the number of instances in each language within M3LS, we perform a train:test:validation split with a ratio of 80:10:10 if the number of instances is below 50K and 90:5:5 otherwise. To conduct our experiments in a multi-lingual setting, we survey publicly available tokenizers and sentence segmenters for multiple languages, and we combine them within one dedicated package for our experiments. We further define a set of rules for sentence segmentation for languages lacking such support from external packages within our package19.
Footnote 19: [https://github.com/zenquiorra/TokSeg](https://github.com/zenquiorra/TokSeg)
We compile our package using segtok20 for Indo-European languages, IndicNLP21 for Indian languages, fugashi McCann (2020) for Japanese (ja), and chinese22 for Chinese.
Footnote 20: [https://pypi.org/project/chinese/](https://pypi.org/project/chinese/)
Footnote 21: [https://nltk.org/](https://nltk.org/)
Footnote 22: [https://github.com/explosion/spacy/tree/master/spacy/lang](https://github.com/explosion/spacy/tree/master/spacy/lang)
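As an illustration, the dispatch inside such a combined package might look like the sketch below (the language grouping and the exact API calls are assumptions based on the packages listed above; fugashi is a word tokenizer, so Japanese and Chinese are split on sentence-final punctuation in this sketch):

```python
import re

def segment_sentences(text, lang):
    """Route sentence segmentation to a language-appropriate backend."""
    if lang in {"hi", "bn", "mr", "gu", "pa", "ta", "te", "ur", "ne"}:
        from indicnlp.tokenize.sentence_tokenize import sentence_split
        return sentence_split(text, lang=lang)
    if lang in {"ja", "zh"}:
        return [s for s in re.split(r"(?<=[。!?])", text) if s.strip()]
    from segtok.segmenter import split_single          # Indo-European default
    return list(split_single(text))
```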
For data pre-processing steps such as stopword removal, we collect stopwords from the nltk23 package, and publicly available stopwords present in the spaCy24 repository for all languages in a centralized pipeline for our experiments.
Footnote 23: [https://huggingface.co/csebuentnlp/mT5_multilingual_XL_Sum](https://huggingface.co/csebuentnlp/mT5_multilingual_XL_Sum)
Footnote 24: We use the implementation provided by the authors, which is a multi-layered package, modification of which to be compatible for a multi-lingual setting isn’t feasible based on the software complexity
We evaluated the performance of various summarization techniques utilizing our dataset, including simpler techniques such as LEAD-3 and RANDOM which have proven to be quite useful in past (Ghalandari et al., 2020; Scialom et al., 2020; Sharma et al., 2019). We have also included statistics based CENTROID(Radev et al., 2004) and graph based TextRank(Mihalcea and Tarau, 2004) techniques.
To have a fair comparison across multiple languages using a shared dedicated model, we evaluate the performance of an abstractive technique in a multi-lingual setting, utilizing a pre-trained summarization checkpoint25 of the transformer-based MT5 (Xue et al., 2020) model. Finally, to explore the multi-modal aspect of our dataset, we evaluate the performance of a multi-modal encoder-decoder based technique (Zhu et al., 2018) that utilizes images and text to generate a multi-modal text summary. The publicly available implementation26 for MSMO restricts us to evaluating it only for the English language; to contextualize this score, we evaluate the performance of three state-of-the-art transformer-based summarization models compatible with the English language - Pegasus (Zhang et al., 2020), BART (Lewis et al., 2020), and T5 (Raffel et al., 2020).
Footnote 25: [https://huggingface.co/csebuentnlp/mT5_multilingual_XL_Sum](https://huggingface.co/csebuentnlp/mT5_multilingual_XL_Sum)
Since two of the pre-trained models described above are fine-tuned on the XSum and XL-Sum datasets, which are extracted from the same source (BBC News), we avoid further fine-tuning to keep the comparison fair, and we account for this when discussing the scores.
In all techniques, we set the generated summary length threshold as the average length of gold summary for the corresponding language in our corpus.
### Baselines
#### 6.2.1 Simpler Extractive Approaches
LEAD-3: In this baseline, the first three sentences of the source text are extracted as the final summary. This method is a robust baseline, as shown by (Sharma et al., 2019) for news summarization datasets.
RANDOM: We repeatedly extract words at random from the source text until the threshold summary length is reached. The aim of this baseline is to provide an unbiased point of reference against which the other baselines can be compared.
#### 6.2.2 Statistical Approach
CENTROID: We use the strategy proposed by Radev et al. (2004), which ranks sentences based on the centrality scores obtained by the words in the sentence. We use TF-IDF scores to measure each word's similarity, and extract top sentences from each ranking until the threshold summary length is obtained.
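A possible implementation of this baseline is sketched below (using scikit-learn TF-IDF; this is an illustration rather than the exact implementation used in our experiments):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def centroid_summary(sentences, max_tokens):
    """Rank sentences by TF-IDF similarity to the document centroid."""
    tfidf = TfidfVectorizer().fit_transform(sentences)   # |sentences| x |vocab|
    centroid = np.asarray(tfidf.mean(axis=0)).ravel()    # document centroid vector
    scores = tfidf @ centroid                            # similarity of each sentence to the centroid
    summary, length = [], 0
    for idx in np.argsort(-scores):
        n_tokens = len(sentences[idx].split())
        if summary and length + n_tokens > max_tokens:
            break
        summary.append(sentences[idx])
        length += n_tokens
    return " ".join(summary)
```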
#### 6.2.3 Graph Based Approach
TextRank: TextRank(Mihalcea and Tarau,
2004) is an unsupervised graph-based ranking technique based on the relevance of sentences in the source text27. We consider the sentences that are most central to the document according to this ranking as the generated summary.
Footnote 27: We use the implementation provided by the gensim[https://radimrehurek.com/gensim_3.8.3/summarization/summariser.html](https://radimrehurek.com/gensim_3.8.3/summarization/summariser.html) package and modify the segmentation and tokenizer part using our dedicated package.
#### 6.2.4 RNN Based Approach
MSMO: MSMO Zhu et al. (2018) is an encoder-decoder model trained for multi-modal summarization. It utilizes a multi-modal attention mechanism to generate multi-modal summaries utilizing text and images.
#### 6.2.5 Transformer Based Approaches
MT5: MT5 Xue et al. (2020) is a transformer-based seq2seq model pretrained for multiple natural language tasks. We use the publicly available checkpoint28 pre-trained for text summarization on the XL-Sum dataset Hasan et al. (2021) for a multi-lingual setting.
Footnote 28: [https://huggingface.co/csebuethlp/mT5_multilingual_XLSum](https://huggingface.co/csebuethlp/mT5_multilingual_XLSum)
PEGASUS: Pegasus Zhang et al. (2020) is a transformer-based model, pre-trained on a task to remove meaningful sentences from an input text, making it suitable for summarization. We used a checkpoint29 of PEGASUS model pre-trained on the XSum dataset Narayan et al. (2018) for summarization.
Footnote 29: [https://huggingface.co/google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum)
BART: BART Lewis et al. (2020) uses a standard seq2seq architecture with a bi-directional encoder and a left-to-right decoder. We use a pre-trained model trained on the DailyMail/CNN Nallapati et al. (2016) for our evaluation.
T5: T5 Raffel et al. (2020) is an encoder-decoder model trained on a mixture of natural language tasks, including translation and summarization; it converts any task into a text-to-text format. We use the pre-trained T5-large model for the summarization task.
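For reference, generating a summary with the multi-lingual checkpoint can be done with the Hugging Face `transformers` API roughly as follows (the checkpoint identifier is assumed to be the XL-Sum mT5 checkpoint referenced in the footnotes; beam-search settings and truncation length are assumptions):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ckpt = "csebuetnlp/mT5_multilingual_XLSum"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

def summarize(article, max_length=84):
    inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, num_beams=4, max_length=max_length,
                                no_repeat_ngram_size=2)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```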
## 7 Results and Discussion
We evaluate the generated summaries against the gold summaries using the ROUGE Lin (2004) evaluation metric, and report the ROUGE-1, ROUGE-2, and ROUGE-L f-scores for every baseline discussed above (refer to Tables 2 and 3). We additionally report BERTSCORE Zhang et al. (2019) for the English baselines (refer to Table 2).
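A sketch of the scoring step (using the `rouge-score` package; consistent with footnote 30, stemming is disabled, and the library choice itself is an assumption):

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=False)

def rouge_f(reference, generated):
    """Return ROUGE-1/2/L f-scores for one summary pair."""
    return {name: s.fmeasure for name, s in scorer.score(reference, generated).items()}
```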
### Multi-lingual baseline scores
We observe that the transformer-based techniques used in our experiments perform significantly better than the other techniques. However, for the "MT5" column in Table 3, we observe both very high scores and spikes of very low scores; this behavior may be caused by two factors:
* Relatively high scores can be attributed to the use of "MT5" checkpoint that is fine-tuned for the task of summarization on a dataset (XL-Sum) obtained from same source as ours.
* Very low scores for some languages can be attributed to the "ROUGE" evaluation metric, which relies on token overlap30. Many of these languages, especially the ones with Dravidian and Indo-European origins, have words that change their form significantly depending on their placement in the text and the context in which they appear; hence simple token-overlap metrics show lower scores if the root form of the word is not considered. Footnote 30: We do not apply stemming of tokens during evaluation, due to the lack of multi-lingual stemming support across the software packages we use for experimentation and to ensure an even comparison with the supported languages.
We observe that LEAD-3 performs better for the languages in which the transformer-based baseline performs poorly; this can be attributed to two factors:
* As shown by Sharma et al. (2019) that LEAD-3 performs very well for summarization tasks when we consider the news domain, suggesting the idea that top sentences capture a lot of information within a news article.
* LEAD-3 considers the top-3 sentences from the text; unlike abstractive summarization, new tokens or new forms of existing tokens are not present in the given article. Since it is an extractive technique, the chances of token overlap are higher and hence the "f-scores" are better.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**English** & **R-f1** & **R-f2** & **R-fL** & **BrS** \\ \hline BART & 0.195 & 0.031 & 0.131 & 0.863 \\ Pegasus & **0.389** & **0.181** & **0.321** & **0.910** \\ T5 & 0.197 & 0.0328 & 0.131 & 0.858 \\ MSMO & 0.217 & 0.046 & 0.158 & 0.851 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of ROUGE f-scores for summaries generated using the multi-modal baseline MSMO and the uni-modal transformer-based baselines against gold summaries from the English language dataset. "R-f1" denotes the ROUGE-1 f-score, "R-f2" the ROUGE-2 f-score, "R-fL" the ROUGE-L f-score, and "BrS" the BERTSCORE.
### Multi-modal baseline scores
Due to the lack of pre-trained multi-modal frameworks for most of the languages in the dataset, we were constrained to evaluate the multi-modal technique on the English dataset only. On comparing the f-scores of the various uni-modal techniques with the multi-modal technique, we notice that the transformer-based model Pegasus outperforms the other techniques. This is largely attributed to the fact that the pre-trained checkpoint we use to generate summaries with the Pegasus model is fine-tuned on the XSum dataset, which has data collected from the same source as ours. We observe that, among the models not fine-tuned on a dataset extracted from the same source as ours, the multi-modal technique MSMO outperforms the other techniques.
### Abstractiveness of the proposed dataset
We propose an abstractive summarization dataset in which the target summaries are manually written by human beings. The M3LS dataset demands abstractive techniques, since the percentage of novel uni-grams in the dataset is quite high (refer to the "abs.gold" column in Appendix B). This fact is also reflected in the results of the baseline techniques: MT5 performs consistently better across multiple languages, as observed in Table 3, where the abstractive baseline achieves roughly three times the ROUGE scores of the extractive baselines.
## 8 Conclusion
In this work, we release a large-scale multi-modal multi-lingual summarization dataset comprising of over 1.1M+ news articles and spanning 20 languages, and motivate the problem statement of Multi-modal Multi-lingual summarization using M3LS. To the best of our knowledge, this is the first ever multi-modal summarization data set spanning several languages. The proposed dataset is the largest summarization dataset for 13 out of 20 languages. We have evaluated the performance of various baselines to establish the quality of the proposed dataset in both multi-modal and multi-lingual settings. Through this work, we hope to instigate research in various less-explored languages in the community for various research problems including but not limited to summarization, headline generation, keyword extraction, image caption generation, multi-modal embedding generation, etc. In future works, we plan to work on shared models which address the M3LS task utilizing our dataset.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Base} & \multicolumn{3}{c}{**Random**} & \multicolumn{3}{c}{**LEAD-3**} & \multicolumn{3}{c}{**TextRank**} & \multicolumn{3}{c}{**CENTROID**} & \multicolumn{3}{c}{**MT5**} \\ Lang & R-f1 & R-f2 & R-f1 & R-f2 & R-f1 & R-f2 & R-f1 & R-f2 & R-f1 & R-f2 & R-f1 & R-f2 & R-f1 & R-f2 & R-f1 \\ \hline bn & 0.003 & 0.000 & 0.002 & 0.001 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & **0.004** & 0.001 & 0.003 \\ mr & 0.013 & 0.000 & 0.012 & 0.041 & 0.005 & 0.040 & 0.025 & 0.002 & 0.025 & 0.006 & 0.001 & 0.006 & **0.044** & 0.005 & **0.044** \\ gu & 0.014 & 0.001 & 0.014 & **0.039** & 0.005 & 0.038 & 0.014 & 0.001 & 0.014 & 0.016 & 0.002 & 0.016 & 0.036 & 0.005 & 0.036 \\ ps & 0.002 & 0.000 & 0.001 & 0.000 & 0.000 & 0.000 & 0.002 & 0.000 & 0.001 & 0.000 & 0.000 & 0.000 & **0.003** & 0.000 & 0.001 \\ uk & 0.030 & 0.002 & 0.029 & 0.062 & 0.016 & 0.061 & 0.043 & 0.010 & 0.042 & 0.032 & 0.006 & 0.032 & **0.094** & 0.025 & **0.094** \\ pt & 0.179 & 0.009 & 0.114 & 0.204 & 0.033 & 0.124 & 0.199 & 0.030 & 0.128 & 0.089 & 0.008 & 0.075 & **0.276** & 0.085 & 0.193 \\ id & 0.118 & 0.001 & 0.083 & 0.172 & 0.037 & 0.117 & 0.144 & 0.030 & 0.104 & 0.104 & 0.014 & 0.080 & **0.289** & 0.115 & 0.233 \\ ne & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ pa & 0.012 & 0.000 & 0.012 & **0.038** & 0.004 & **0.038** & 0.014 & 0.001 & 0.014 & 0.010 & 0.002 & 0.010 & 0.026 & 0.000 & 0.026 \\ si & 0.014 & 0.000 & 0.014 & 0.032 & 0.004 & 0.031 & 0.019 & 0.002 & 0.019 & 0.007 & 0.001 & 0.007 & **0.039** & 0.018 & **0.039** \\ ur & 0.006 & 0.000 & 0.006 & 0.023 & 0.001 & 0.023 & 0.006 & 0.000 & 0.005 & 0.024 & 0.001 & 0.023 & **0.044** & 0.000 & **0.044** \\ fr & 0.168 & 0.007 & 0.107 & 0.206 & 0.043 & 0.126 & 0.177 & 0.033 & 0.115 & 0.164 & 0.024 & 0.110 & **0.209** & 0.041 & 0.141 \\ ru & 0.032 & 0.001 & 0.032 & 0.071 & 0.017 & 0.069 & 0.041 & 0.012 & 0.040 & 0.036 & 0.008 & 0.036 & **0.081** & 0.011 & **0.081** \\ ja & 0.069 & 0.001 & 0.068 & 0.126 & 0.012 & 0.120 & 0.084 & 0.007 & 0.081 & 0.063 & 0.004 & 0.062 & **0.306** & 0.081 & 0.291 \\ te & 0.010 & 0.000 & 0.009 & 0.023 & 0.001 & 0.023 & 0.011 & 0.000 & 0.001 & 0.008 & 0.001 & 0.008 & **0.026** & 0.000 & **0.026** \\ ta & 0.014 & 0.001 & 0.014 & **0.034** & 0.005 & **0.034** & 0.023 & 0.003 & 0.022 & 0.012 & 0.001 & 0.012 & 0.026 & 0.000 & 0.026 \\ zh & 0.022 & 0.001 & 0.022 & **0.053** & 0.008 & 0.051 & 0.042 & 0.005 & 0.041 & 0.025 & 0.003 & 0.025 & 0.125 & 0.042 & 0.118 \\ es & 0.177 & 0.008 & 0.117 & 0.180 & 0.033 & 0.117 & 0.110 & 0.018 & 0.073 & 0.081 & 0.008 & 0.067 & **0.280** & 0.084 & 0.202 \\ hi & 0.010 & 0.000 & 0.010 & **0.018** & 0.002 & 0.018 & 0.013 & 0.001 & 0.013 & 0.005 & 0.000 & 0.005 & 0.002 & 0.000 & 0.001 \\ en & 0.146 & 0.002 & 0.102 & **0.175** & 0.026 & 0.114 & 0.100 & 0.014 & 0.071 & 0.140 & 0.016 & 0.102 & **0.427** & 0.182 & 0.345 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of various techniques for summarization against the M3LS dataset gold summaries for every language. "Lang" refers to the language code according to the ISO 639-1 standard, "R-f1" refers to the ROUGE-1 f-scores, "R-f2" refers to the ROUGE-2 f-scores, and "R-fL" refers to the ROUGE-L f-scores.
### Limitations
There are a few considerations to keep in mind in our work. **First**, the dataset currently has a multi-modal input, mapping to a textual summary. However, future work could involve annotating images to enhance the dataset with a multi-modal output. **Second**, the distribution for languages in the M3LS dataset is skewed due to the imbalanced number of articles published in BBC across languages and the late establishment of virtual print media in certain languages (as shown in Appendix A). **Third**, the current dataset uses an independent identically distributed split to create train and test sets, but more advanced techniques such as adversarial splits and likelihood splits could also be explored in future work. **Fourth**, while the current manuscript does not evaluate the dataset on both multi-modal and multi-lingual aspects simultaneously, we believe that this dataset has the potential to contribute to the development of such systems in the future.
## Acknowledgements
This publication is an outcome of the R&D work undertaken in the project under the Visvesvaraya Ph.D. Scheme of Ministry of Electronics & Information Technology, Government of India, being implemented by Digital India Corporation (Formerly Media Lab Asia).
|
2301.01987 | Energy Efficient Semantic Communication over Wireless Networks with Rate
Splitting | In this paper, the problem of wireless resource allocation and semantic
information extraction for energy efficient semantic communications over
wireless networks with rate splitting is investigated. In the considered model,
a base station (BS) first extracts semantic information from its large-scale
data, and then transmits the small-sized semantic information to each user
which recovers the original data based on its local common knowledge. At the BS
side, the probability graph is used to extract multi-level semantic
information. In the downlink transmission, a rate splitting scheme is adopted,
while the private small-sized semantic information is transmitted through
private message and the common knowledge is transmitted through common message.
Due to limited wireless resource, both computation energy and transmission
energy are considered. This joint computation and communication problem is
formulated as an optimization problem aiming to minimize the total
communication and computation energy consumption of the network under
computation, latency, and transmit power constraints. To solve this problem, an
alternating algorithm is proposed where the closed-form solutions for semantic
information extraction ratio and computation frequency are obtained at each
step. Numerical results verify the effectiveness of the proposed algorithm. | Zhaohui Yang, Mingzhe Chen, Zhaoyang Zhang, Chongwen Huang | 2023-01-05T09:47:50Z | http://arxiv.org/abs/2301.01987v1 | # Energy Efficient Semantic Communication over Wireless Networks with Rate Splitting
###### Abstract
In this paper, the problem of wireless resource allocation and semantic information extraction for energy efficient semantic communications over wireless networks with rate splitting is investigated. In the considered model, a base station (BS) first extracts semantic information from its large-scale data, and then transmits the small-sized semantic information to each user which recovers the original data based on its local common knowledge. At the BS side, the probability graph is used to extract multi-level semantic information. In the downlink transmission, a rate splitting scheme is adopted, while the private small-sized semantic information is transmitted through private message and the common knowledge is transmitted through common message. Due to limited wireless resource, both computation energy and transmission energy are considered. This joint computation and communication problem is formulated as an optimization problem aiming to minimize the total communication and computation energy consumption of the network under computation, latency, and transmit power constraints. To solve this problem, an alternating algorithm is proposed where the closed-form solutions for semantic information extraction ratio and computation frequency are obtained at each step. Numerical results verify the effectiveness of the proposed algorithm.
Rate splitting multiple access, semantic communication, energy efficient design.
## I Introduction
The rapid development of emerging applications such as digital twin, edge learning, and metaverse requires wireless networks to support high transmission data rate, ultra low latency, and seamless connectivity [1, 2, 3, 4]. However, due to limited wireless resources such as frequency and time, conventional orthogonal multiple access schemes cannot support massive connectivity concern for next-generation wireless communication networks [5]. Through using the same time or frequency resource, multiple users can be served in non-orthogonal multiple access (NOMA) [6, 7, 8, 9, 10, 11], where users can be split in the power or code domain. Since additional users can be served with superposition coding at the transmitter and successive interference cancellation (SIC) at the receiver, the spectral efficiency of NOMA is generally higher than conventional orthogonal multiple access schemes.
In downlink NOMA transmission, the receiver side decodes the interference for all received strong messages [11]. Thus, the computation capacity of NOMA decoding is generally high. To balance the decoding tradeoff of intended signal and interference signal, the concept of rate splitting multiple access (RSMA) was introduced in [12, 13, 14, 15]. For downlink RSMA transmission, the transmission message intended for each user is divided into both common and private parts. All users intend to receive the common part of the message, i.e., common message, while only part of the users wish to receive and decode the specific private part of the message, i.e., private message. At the user side, the common message is decoded first with regarding all private messages as interference, while the intended private message is decoded with only considering the private messages intended for other users as interference. Through dynamically controlling the split of private and common messages, the computation complexity of RSMA can be adjusted to achieve the specific spectral efficiency requirements. To implement RSMA for wireless communication systems, there are still many challenges, which include the resource allocation for private and common messages, decoding order optimization, system design in imperfect channel and hardware mismatch cases.
There are many contributions investigating the problems of RSMA in wireless communication systems. The general challenges of RSMA were pointed out in [14] for multiple input multiple output (MIMO) communication systems. To maximize the sum rate of all users, a distributed rate splitting technique was proposed in [16]. For a two-receiver multiple input single output (MISO) communication system with limited rate feedback, the rate analysis was investigated in [17]. Compared with NOMA and space-division multiple access (SDMA), it was shown in [18] that RSMA can achieve the best performance in terms of spectral and energy efficiency [19]. In particular, the energy efficiency optimization for RSMA and NOMA transmissions in a unmanned aerial vehicle assisted wireless communication system was investigated in [20]. Considering wireless energy transfer and information transmission, the linear precoding method for RSMA was investigated in [21]. For the case with imperfect channel state information, the sum rate maximization with partial channel
state information for RSMA was studied in [22], while a downlink MISO RSMA system with bounded channel errors was investigated in [23].
The interplay between rate splitting and emerging technologies has also been investigated. With the help of reconfigurable intelligent surfaces, the energy efficient resource allocation for reconfigurable intelligent surface assisted RSMA was investigated in [24], where the phase shift, rate allocation, and transmit beamforming were jointly scheduled. A learning based traffic prediction method was studied in [25] for an unmanned aerial vehicle enabled wireless communication system with rate splitting. A neural network was proposed in [26] to solve the user clustering problem in hierarchical rate splitting communication systems. Due to the coupling between rate and power allocation, resource allocation for RSMA usually leads to nonconvex problems, which can be solved by utilizing learning techniques such as deep reinforcement learning. Several deep learning algorithms were designed to solve various complex resource allocation problems for RSMA, including the total power minimization problem [27], the joint power control, beamforming design, and splitting optimization problem [28, 29], the power allocation problem with limited channel state information knowledge [30, 31], the joint transmit power, user clustering, and resource block allocation problem [32], and the joint design of passive precoding at the reconfigurable intelligent surface and active precoding at the transmitter [33]. In the federated learning framework [34], the authors in [35] utilized RSMA for uplink model transmission to minimize the total delay of the whole system. A model-based deep learning algorithm was developed to solve the receiver design problem of RSMA in [36].
Recently, semantic communication has attracted a lot of attention [37, 38, 39, 40, 41, 42, 43, 44, 45, 46]. For a wireless communication system characterized by Shannon capacity, the receiver needs to recover information that is exactly the same as the transmitted information. However, in emerging wireless applications such as virtual reality, personalized healthcare, autonomous driving, and the Internet-of-Everything (IoE), wireless communication systems aim to meet multimodal quality-of-experience (QoE) requirements with massive data, which makes traditional Shannon-capacity-characterized transmission infeasible. Especially in human-computer interaction scenarios, humans can control multiple IoE devices simultaneously through voice and augmented/virtual reality commands, making communication ubiquitous in small-range wireless networks and posing severe challenges to traditional bit-oriented communication. Supporting real-time human-machine interaction and machine-to-machine interaction through the use of text, speech, images, and augmented/virtual reality is important for future wireless communications. To support this interaction, what matters at the receiver is mainly the intent conveyed by the transmitted information rather than exact bit-level recovery. These applications use advanced signal processing to facilitate the development of task-oriented semantic communication [3, 47, 48]. In semantic communication, both transmitter and receiver share common knowledge, which can be used to extract small-size information at the transmitter and recover the original information at the receiver [49]. Similarly, in downlink RSMA, all users also need to receive both common information and private information. Due to the inherent similarity between common knowledge and the common message, RSMA can be utilized to enhance the system performance of downlink semantic communication. To the best of our knowledge, there are no prior works that consider the integration of semantic communication and RSMA.
The main contributions of this paper include:
* The problem of wireless resource allocation and semantic information extraction for energy efficient semantic communications over wireless networks with rate splitting is investigated. In the considered model, the BS first extracts the semantic information from its large-scale data, and then transmits the small-sized semantic information to each user which recovers the original data based on the local common knowledge.
* In the downlink transmission, the rate splitting scheme is adopted, while the private small-sized semantic information is transmitted through private message and the common knowledge is transmitted through common message. Due to limited wireless resource, both computational energy and transmission energy must be considered. This joint computation and communication problem is formulated as an optimization problem whose goal is to minimize the total energy consumption of the network under a latency constraint.
* To solve this problem, an iterative algorithm is proposed where, at every step, closed-form solutions for semantic information extraction ratio and computation frequency are derived. Numerical results show the effectiveness of the proposed algorithm.
The rest of this paper is organized as follows. The system model and problem formulation are described in Section II. The algorithm design is presented in Section III. Simulation results are analyzed in Section IV. Conclusions are drawn in Section V.
## II System Model and Problem Formulation
Consider a downlink semantic wireless communication (SWC) network with one multiple-antenna BS and \(K\) single-antenna users, as shown in Fig. 1. The BS is equipped with \(N\) antennas and the set of users is denoted by \(\mathcal{K}\). Each user \(k\) has a large-sized data \(\mathcal{D}_{k}\) to receive. Due to limited wireless resource, the BS needs to extract the small-sized semantic information from the original data \(\mathcal{D}_{k}\). In the considered model, the BS first extracts the semantic information based on directional probability graph and then transmits the semantic information via rate splitting technique.
### _Semantic Communication Model_
In this part, we utilize the directional probability graph to characterize the inherent information of the transmitted information. In the directional probability graph, each vertex represents the semantic entity with different semantic levels. The higher level the semantic level is, the more complicated
the semantic information is. The link between any two vertexes represents the probability.
To construct the directional probability graph, we use a deep neural network to train on the stored dataset, which includes three main steps. In the first step, semantic entities are recognized from the dataset, where a semantic entity means a name in text, including person names, place names, etc. The names of semantic entities are highly open (various types, flexible lengths, unregistered words), contain rich knowledge, and highlight individuality. Three common methods, i.e., the rule method, the taxonomy method, and sequence labeling [50], can be used to identify semantic entities. Each semantic entity is represented as a vertex in the directional probability graph. In the second step, the link between any two vertexes is assigned the probability that one vertex can be linked with the other vertex. Through training on the dataset, the probability between two vertexes can be calculated via convolutional neural networks. In the third step, semantic information fusion is conducted: if the link probabilities between two vertexes are higher than a predefined threshold, the two vertexes are fused into a higher-level semantic vertex. As a result, the final directional probability graph becomes a multi-tier graph, as shown in Fig. 2.
To obtain the small-size semantic communication, the extraction process includes two parts, as shown in Fig. 3. In the first part, the directional probability graph is used to extract semantic information and the output is denoted by \(\mathcal{G}(\mathcal{D}_{k})\). To efficiently transmit information, in the second part, a subset \(\mathcal{S}_{k}\) out of \(\mathcal{G}(\mathcal{D}_{k})\) is selected at user \(k\), which is used for data transmission.
At the user side, each user utilizes the shared common directional probability graph to recover the original data and the recovered data is denoted by \(\mathcal{R}(\mathcal{S}_{k})\). The semantic accuracy of the recovered data
\[u_{k}(\mathcal{D}_{k},\mathcal{S}_{k})=\frac{\sum_{i=1}^{|\mathcal{R}( \mathcal{S}_{k})|}\min\{\sigma(\mathcal{R}(\mathcal{S}_{k}),s^{\prime}_{ki}), \sigma(\mathcal{D}_{k},s^{\prime}_{ki})\}}{\sum_{i=1}^{|\mathcal{R}( \mathcal{S}_{k})|}\sigma(\mathcal{R}(\mathcal{S}_{k}),s^{\prime}_{ki})}, \tag{1}\]
where \(|\mathcal{R}(\mathcal{S}_{k})|\) is the number of bits in \(\mathcal{R}(\mathcal{S}_{k})\), \(s^{\prime}_{ki}\) denotes the \(i\)-th word in text or frame in video in \(\mathcal{R}(\mathcal{S}_{k})\), and \(\sigma(\mathcal{R}(\mathcal{S}_{k}),s^{\prime}_{ki})\) is the number of occurrences of \(s^{\prime}_{ki}\) in \(\mathcal{R}(\mathcal{S}_{k})\).
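For concreteness, the sketch below evaluates Eq. (1) for text data, treating \(\mathcal{D}_{k}\) and \(\mathcal{R}(\mathcal{S}_{k})\) as token sequences. The function name, the toy sentences, and the use of Python's `Counter` are illustrative choices rather than part of the system model.

```python
from collections import Counter

def semantic_accuracy(original_tokens, recovered_tokens):
    """Literal evaluation of Eq. (1): the sum runs over every element s'_ki of the
    recovered data R(S_k), and sigma(X, s) counts the occurrences of s in X."""
    orig = Counter(original_tokens)
    rec = Counter(recovered_tokens)
    numerator = sum(min(rec[s], orig[s]) for s in recovered_tokens)
    denominator = sum(rec[s] for s in recovered_tokens)
    return numerator / denominator if denominator else 0.0

# toy example: every recovered word also occurs in the original data, so u_k = 1.0
original = "the downlink semantic network serves many users".split()
recovered = "the semantic network serves users".split()
print(semantic_accuracy(original, recovered))
```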
### _RSMA Model_
In RSMA, the message intended for each user can be split into two parts, i.e., a common part and a private part [51]. The common parts from all users are collected and combined into a common message. Through sharing the same codebook for all users, the common message is encoded into the common stream \(s_{0}\), which all users need to decode. The private part of each user \(k\) is encoded into the private stream \(s_{k}\), which is intended for the specific user \(k\). As a result, the transmitted signal \(\boldsymbol{x}\) of the BS can be written as:
\[\boldsymbol{x}=\sqrt{p_{0}}\boldsymbol{w}_{0}s_{0}+\sum_{k=1}^{K}\sqrt{p_{k} }\boldsymbol{w}_{k}s_{k}, \tag{2}\]
where \(\boldsymbol{w}_{0}\) is the transmit beamforming of the common message \(s_{0}\), \(\boldsymbol{w}_{k}\) is the transmit beamforming of the private message \(s_{k}\) intended for user \(k\), \(p_{0}\) is the transmit power of the common message \(s_{0}\), and \(p_{k}\) is the transmit power of the private message \(s_{k}\).
Fig. 1: Illustration of the considered SWC network with rate splitting.
Fig. 3: Illustration of the SWC model.
Fig. 2: An example of the multi-level semantic information extraction.
For user \(k\), the received message can be represented by:
\[\mathbf{h}_{k}^{H}\mathbf{x}+n_{k}=\sqrt{p_{0}}\mathbf{h}_{k}^{H}\mathbf{w}_{0}s_{0}+\sum_{j=1}^{K}\sqrt{p_{j}}\mathbf{h}_{k}^{H}\mathbf{w}_{j}s_{j}+n_{k}, \tag{3}\]
where \(\mathbf{h}_{k}\) stands for the channel between user \(k\) and the BS. To decode the common message \(s_{0}\), the rate of user \(k\) can be given by:
\[c_{k}=B\log_{2}\left(1+\frac{p_{0}|\mathbf{h}_{k}^{H}\mathbf{w}_{0}|^{2}}{\sum_{j=1}^{ K}p_{j}|\mathbf{h}_{k}^{H}\mathbf{w}_{j}|^{2}+\sigma^{2}}\right). \tag{4}\]
where \(B\) is the bandwidth of the BS. Note that all users need to decode the same common message. To ensure that all users can successfully decode the common message, the rate of the common message can be set as [18]
\[c_{0}=\min_{k\in\mathcal{K}}c_{k}. \tag{5}\]
In our considered SWC with rate splitting, the common knowledge is shared by all users. Thus, the common knowledge required for semantic communication can be encoded in the common message. Besides, the common message also includes the parts that are allocated for different users, i.e., the rate in the common message allocated to user \(k\) is denoted by \(a_{k}\). As a result, the rate constraint for the common message can be given by
\[a_{0}+\sum_{k=1}^{K}a_{k}\leq c_{k},\quad\forall k\in\mathcal{K}, \tag{6}\]
where \(a_{0}\) is the rate allocated to updated common knowledge that all users need to receive. In SWC, \(a_{0}\) represents the rate of transmitting the information of updated directional probability graph.
For each user, the common message is decoded first, and then the common message can be subtracted for decoding the private message. As a result, the rate for decoding the private message for user \(k\) can be calculated as
\[r_{k}=B\log_{2}\left(1+\frac{p_{k}|\mathbf{h}_{k}^{H}\mathbf{w}_{k}|^{2}}{\sum_{j=1,j \neq k}^{K}p_{j}|\mathbf{h}_{k}^{H}\mathbf{w}_{j}|^{2}+\sigma^{2}}\right). \tag{7}\]
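The following NumPy sketch evaluates the rate expressions (4), (5), and (7) for given channels, beamformers, and powers. The random channels, the matched-filter private beams, and the simple averaged common beam are illustrative assumptions used only to produce a runnable example, not part of the proposed design.

```python
import numpy as np

def rsma_rates(H, w0, W, p0, p, B=1e6, noise_power=1e-9):
    """Evaluate Eqs. (4), (5) and (7).  H: (K, N) user channels; w0: (N,) common
    beamformer; W: (K, N) private beamformers; p0 and p[k]: transmit powers."""
    K = H.shape[0]
    c = np.zeros(K)   # rate at which user k can decode the common message, Eq. (4)
    r = np.zeros(K)   # private rate of user k after removing the common message, Eq. (7)
    for k in range(K):
        gains = np.abs(H[k].conj() @ W.T) ** 2        # |h_k^H w_j|^2 for all private beams j
        g0 = np.abs(H[k].conj() @ w0) ** 2            # |h_k^H w_0|^2
        c[k] = B * np.log2(1 + p0 * g0 / (np.sum(p * gains) + noise_power))
        interference = np.sum(p * gains) - p[k] * gains[k]
        r[k] = B * np.log2(1 + p[k] * gains[k] / (interference + noise_power))
    return c.min(), c, r                              # c0 = min_k c_k, Eq. (5)

rng = np.random.default_rng(0)
K, N = 4, 8
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
W = H / np.linalg.norm(H, axis=1, keepdims=True)      # matched-filter private beams, ||w_k|| = 1
w0 = H.mean(axis=0); w0 = w0 / np.linalg.norm(w0)     # a simple averaged common beam
c0, c, r = rsma_rates(H, w0, W, p0=0.5, p=np.full(K, 0.1))
print(c0, r)
```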
### _Transmission and Computation Model_
For each user \(k\), the computation time for extracting semantic information from data \(\mathcal{D}_{k}\) is
\[t_{1k}=\frac{y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})}{f_{k}}, \tag{8}\]
where \(y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})\) is the required number of CPU cycles for calculating \(\mathcal{S}_{k}\) out of \(\mathcal{D}_{k}\), and \(f_{k}\) is the computation capacity that the BS allocates to the task of user \(k\). The corresponding computation energy at the BS can be given by:
\[E_{1k}=\kappa y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})f_{k}^{2}, \tag{9}\]
where \(\kappa\) is a constant coefficient to measure the effective switched capacitance.
With private rate (7) and allocated common rate \(a_{k}\), the downlink transmission time for transmitting \(\mathcal{S}_{k}\) is given by
\[t_{2k1}=\frac{Z(\mathcal{S}_{k})}{r_{k}+a_{k}}, \tag{10}\]
where \(Z(\mathcal{S}_{k})\) is the data size of set \(\mathcal{S}_{k}\). To transmit the renewed information about the knowledge base, i.e., updated information of directional probability graph, the transmission time of all users can be formulated as
\[t_{0}=\frac{K_{0}}{a_{0}}, \tag{11}\]
where \(K_{0}\) is the size of updated information of directional probability graph. Combining (10) and (11), the downlink transmission time for user \(k\) is
\[t_{2k}=\max\{t_{2k1},t_{0}\}. \tag{12}\]
The transmission energy for sending \(\mathcal{S}_{k}\) is
\[E_{2k}=t_{2k1}p_{k}, \tag{13}\]
and the transmission energy for broadcasting updated information of directional probability graph is
\[E_{20}=t_{0}p_{0}. \tag{14}\]
At user \(k\), to recover the original data, the user needs to perform computation on the received semantic information \(\mathcal{S}_{k}\). The computation time of user \(k\) is
\[t_{3k}=\frac{y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})}{g_{k}}, \tag{15}\]
where \(y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})\) is the number of computation cycles for recovering \(\mathcal{D}_{k}\) from \(\mathcal{S}_{k}\) and \(g_{k}\) is the computation capacity at user \(k\). The total completion time for user \(k\) includes the computation time at the BS, the downlink transmission time, and the computation time at user \(k\), as shown in Fig. 4. The overall completion time of user \(k\), including both computation and communication, is
\[t_{k} =t_{1k}+t_{2k}+t_{3k}\] \[=\frac{y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})}{f_{k}}+\max\left\{ \frac{Z(\mathcal{S}_{k})}{r_{k}+a_{k}},\frac{K_{0}}{a_{0}}\right\}+\frac{y_{2k} (\mathcal{D}_{k},\mathcal{S}_{k})}{g_{k}}. \tag{16}\]
The energy consumption at user \(k\) is
\[E_{3k}=\kappa y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})g_{k}^{2}. \tag{17}\]
Fig. 4: Illustration of the computation and communication time.
With the above considered model, the total communication and computation energy consumption of the system is
\[E= \sum_{k=1}^{K}(E_{1k}+E_{2k}+E_{3k})+E_{20}\] \[= \sum_{k=1}^{K}\left(\kappa y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k}) f_{k}^{2}+\frac{Z(\mathcal{S}_{k})p_{k}}{r_{k}+a_{k}}+\kappa y_{2k}(\mathcal{D}_{k}, \mathcal{S}_{k})g_{k}^{2}\right)\] \[+\frac{K_{0}p_{0}}{a_{0}}. \tag{18}\]
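As a quick numerical illustration of Eqs. (16) and (18), the sketch below computes the per-user completion time and the total energy. All parameter values (cycle counts, rates, powers, and the effective capacitance \(\kappa\)) are made-up placeholders, not values taken from our simulations.

```python
import numpy as np

def completion_time(y1, y2, Z, f, g, r_k, a_k, K0, a0):
    # Eq. (16): BS-side extraction time + downlink time (never shorter than the
    # common-knowledge broadcast time K0/a0) + user-side recovery time
    return y1 / f + max(Z / (r_k + a_k), K0 / a0) + y2 / g

def total_energy(y1, y2, Z, f, g, r, a, K0, a0, p, p0, kappa=1e-28):
    # Eq. (18): computation energy at both ends plus private and common transmit energy
    per_user = kappa * y1 * f**2 + Z * p / (r + a) + kappa * y2 * g**2
    return np.sum(per_user) + K0 * p0 / a0

# illustrative placeholder values for two users (cycles, bits, Hz, bit/s, watts)
y1 = np.array([2e8, 3e8]); y2 = np.array([1e8, 1.5e8]); Z = np.array([4e5, 6e5])
f = np.array([1e9, 1e9]); g = np.array([5e8, 5e8])
r = np.array([2e6, 3e6]); a = np.array([5e5, 5e5]); p = np.array([0.2, 0.3])
print([completion_time(y1[k], y2[k], Z[k], f[k], g[k], r[k], a[k], K0=1e5, a0=1e6) for k in range(2)])
print(total_energy(y1, y2, Z, f, g, r, a, K0=1e5, a0=1e6, p=p, p0=0.5))
```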
We aim to minimize the total energy consumption of the whole system with considering the completion time, transmit information accuracy, computation capacity, rate allocation, and power allocation constraints. Mathematically, the formulated total energy minimization problem can be given by:
\[\min_{\mathcal{S},\mathbf{f},\mathbf{g},\mathbf{p},\mathbf{a},\mathbf{w}}E, \tag{19}\] \[\text{s.t.}\quad\frac{y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})}{f _{k}}+\max\left\{\frac{Z(\mathcal{S}_{k})}{r_{k}+a_{k}},\frac{K_{0}}{a_{0}}\right\}\] \[\quad+\frac{y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})}{g_{k}}\leq T,\quad\forall k\in\mathcal{K},\] (19a) \[u_{k}(\mathcal{D}_{k},\mathcal{S}_{k})\geq A_{k},\quad\forall k \in\mathcal{K},\] (19b) \[\mathcal{S}_{k}\subseteq\mathcal{G}(\mathcal{D}_{k})\quad\forall k \in\mathcal{K},\] (19c) \[a_{0}+\sum_{k=1}^{K}a_{k}\leq c_{k},\quad\forall k\in\mathcal{K},\] (19d) \[\sum_{k=1}^{K}f_{k}\leq F^{\max}\] (19e) \[\sum_{k=0}^{K}p_{0}\leq P^{\max}\] (19f) \[a_{k},f_{k},p_{k}\geq 0,\quad\forall k,\] (19g) \[\|\mathbf{w}_{k}\|=1,\quad\forall k\in\mathcal{K}\cup\{0\},\] (19h) \[0\leq g_{k}\leq g_{k}^{\max},\quad\forall k\in\mathcal{K}, \tag{19i}\]
where \(\mathcal{S}=\{\mathcal{S}_{1},\cdots,\mathcal{S}_{K}\}\), \(\mathbf{f}=[f_{1},\cdots,f_{K}]^{T}\), \(\mathbf{g}=[g_{1},\cdots,g_{K}]^{T}\), \(\mathbf{p}=[p_{1},\cdots,p_{K}]^{T}\), \(\mathbf{a}=[a_{0},a_{1},\cdots,a_{K}]^{T}\), \(\mathbf{w}=[\mathbf{w}_{0};\mathbf{w}_{1};\cdots;\mathbf{w}_{K}]\), \(T\) is the maximum communication delay of the system, \(A_{k}\) is the minimum semantic accuracy for user \(k\), \(F^{\max}\) is the maximum computation capacity at the BS, \(P^{\max}\) is the maximum transmit power of the BS, and \(g_{k}^{\max}\) is the maximum local computation capacity of user \(k\). Since both the objective function and constraints (19a)-(19c) are nonconvex, it is generally hard to solve this problem. To solve this problem, we propose an iterative algorithm using the alternating method and the successive convex approximation (SCA) approach.
## III Algorithm Design
In this section, an alternating algorithm is proposed to iteratively solve problem (19) through optimizing three subproblems, i.e., semantic information extraction subproblem, computation capacity subproblem, joint power control, rate allocation and beamforming design subproblem.
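The control flow of this alternating procedure can be summarized by the following Python outline. The three `solve_*` callbacks are placeholders for the subproblem solvers derived below, and the stopping rule based on the decrease of the energy objective is an assumed convention; the sketch only clarifies the structure of the algorithm.

```python
def alternating_algorithm(sol, solve_extraction, solve_computation, solve_power_rate_beam,
                          max_iters=50, tol=1e-4):
    """Outline of the alternating procedure for problem (19): each callback fixes
    the other blocks of variables, solves its subproblem, and returns the updated
    solution dictionary together with the current total-energy objective."""
    prev_obj = float("inf")
    obj = prev_obj
    for _ in range(max_iters):
        sol, obj = solve_extraction(sol)       # subproblem on the extraction ratios rho_k
        sol, obj = solve_computation(sol)      # subproblem on the computation capacities f_k, g_k
        sol, obj = solve_power_rate_beam(sol)  # power, rate allocation and beamforming via SCA
        if prev_obj - obj < tol:               # stop once the objective stalls
            break
        prev_obj = obj
    return sol, obj
```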
### _Semantic Information Extraction_
With given computation capacity, power control, rate allocation, and beamforming design, problem (19) can be simplified as
\[\min_{\mathcal{S}} \sum_{k=1}^{K}\left(\kappa y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k} )f_{k}^{2}+\frac{Z(\mathcal{S}_{k})p_{k}}{r_{k}+a_{k}}+\kappa y_{2k}(\mathcal{ D}_{k},\mathcal{S}_{k})g_{k}^{2}\right)\] \[+\frac{K_{0}p_{0}}{a_{0}}\] (20) s.t. \[\frac{y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})}{f_{k}}+\max\left\{ \frac{Z(\mathcal{S}_{k})}{r_{k}+a_{k}},\frac{K_{0}}{a_{0}}\right\}\] \[+\frac{y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})}{g_{k}}\leq T,\quad \forall k\in\mathcal{K}, \tag{20a}\] \[u_{k}(\mathcal{D}_{k},\mathcal{S}_{k})\geq A_{k},\quad\forall k \in\mathcal{K},\] (20b) \[\mathcal{S}_{k}\subseteq\mathcal{G}(\mathcal{D}_{k})\quad\forall k \in\mathcal{K}. \tag{20c}\]
Problem (20) is hard to solve because of two general difficulties. The first difficulty lies in the discrete value space of variable \(\mathcal{S}_{k}\), which leads to a discrete optimization problem whose complexity of finding the optimal solution is usually prohibitively high. The second difficulty is the implicit expressions of the accuracy function \(u_{k}(\mathcal{D}_{k},\mathcal{S}_{k})\) and the computation functions \(y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})\) and \(y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})\).
To handle the first difficulty, we introduce the new variable, extraction rate \(\rho_{k}\), which is defined as
\[\rho_{k}=\frac{Z(\mathcal{S}_{k})}{Z(\mathcal{G}(\mathcal{D}_{k}))}. \tag{21}\]
Clearly, the value of \(\rho_{k}\) lies in \((0,1]\). In the following, we use the variable \(\rho_{k}\) in place of \(\mathcal{S}_{k}\) for the purpose of obtaining insights about the extraction rate.
To handle the second difficulty, we first analyze the trends of the accuracy and computation functions. For the accuracy function, the accuracy always increases with the extraction rate since more information can be used to recover the original data, as shown in Fig. 5. As a result, the minimum accuracy constraint (20b) can be equivalently written as
\[\rho_{k}\geq\Gamma_{k}, \tag{22}\]
where \(\Gamma_{k}\) is the minimum extraction rate satisfying \(u_{k}(\mathcal{D}_{k},\Gamma_{k})=A_{k}\). The computation function \(y_{1k}(\mathcal{D}_{k},\rho_{k})\), which is the number of required CPU cycles for computing the
Fig. 5: Illustration of the accuracy and computation functions versus the extraction rate.
information with extraction rate \(\rho_{k}\) out of \(\mathcal{D}_{k}\), includes two parts. The first part is computing the directional probability graph, which can be modeled as a function only related to the size of \(\mathcal{D}_{k}\), i.e., \(y_{3k}(\mathcal{D}_{k})\). The second part is selecting the information with extraction rate \(\rho_{k}\) out of the directional probability graph. When \(\rho_{k}=0\) or \(\rho_{k}=1\), the selection is straightforward, which leads to the lowest number of computation cycles. Hence, the computation of the second part first increases and then decreases with the extraction rate \(\rho_{k}\). As an example, the computation function \(y_{1k}(\mathcal{D}_{k},\rho_{k})\) can be expressed as
\[y_{1k}(\mathcal{D}_{k},\rho_{k})=y_{3k}(\mathcal{D}_{k})+C_{k1}(\rho_{k}-C_{k2 })^{C_{k3}}, \tag{23}\]
where \(C_{k1}>0\), \(C_{k2}\in(0,1)\), and \(C_{k3}>0\) are constant parameters, and these parameters can be obtained through simulations. For the computation function \(y_{2k}(\mathcal{D}_{k},\rho_{k})\), the number of computation cycles decreases with \(\rho_{k}\) since more semantic information is helpful in recovering the original information. As an example, the computation function \(y_{2k}(\mathcal{D}_{k},\rho_{k})\) can be expressed as
\[y_{2k}(\mathcal{D}_{k},\rho_{k})=C_{k4}\rho_{k}^{-C_{k5}}, \tag{24}\]
where \(C_{k4}>0\) and \(C_{k5}>0\) are constant parameters obtained through simulations.
With the above variable substitution (21) and expressions (22)-(24), problem (20) can be reformulated as:
\[\min_{\mathbf{\rho}} \sum_{k=1}^{K}\left(\kappa f_{k}^{2}(y_{3k}(\mathcal{D}_{k})+C_{k 1}(\rho_{k}-C_{k2})^{C_{k3}})\right. \tag{25}\] \[+\frac{Z(\mathcal{G}(\mathcal{D}_{k}))p_{k}\rho_{k}}{r_{k}+a_{k} }+\kappa C_{k4}\rho_{k}^{-C_{k5}}g_{k}^{2}\Big{)}\] \[+\frac{K_{0}p_{0}}{a_{0}}\] s.t. \[\frac{y_{3k}(\mathcal{D}_{k})+C_{k1}(\rho_{k}-C_{k2})^{C_{k3}}}{f _{k}}\] \[+\max\left\{\frac{Z(\mathcal{G}(\mathcal{D}_{k}))\rho_{k}}{r_{k}+ a_{k}},\frac{K_{0}}{a_{0}}\right\}+\frac{C_{k4}\rho_{k}^{-C_{k5}}}{g_{k}}\leq T,\] \[\quad\forall k\in\mathcal{K},\] (25a) \[\Gamma_{k}\leq\rho_{k}\leq 1,\quad\forall k\in\mathcal{K}, \tag{25b}\]
where \(\mathbf{\rho}=[\rho_{1},\cdots,\rho_{K}]^{T}\). Since both objective function and feasible set are convex, problem (25) is a convex problem. Thus, we can apply the dual method to obtain the Karush-Kuhn-Tucker (KKT) point. To calculate the solution of problem (25), we can obtain the following theorem.
**Theorem 1**.: _The optimal solution of problem (25) is_
\[\rho_{k}^{*}=\left\{\begin{array}{ll}\rho_{k1}^{*}(\lambda_{1k1})&\text{if }\rho_{k1}^{*}(\lambda_{11})\geq\frac{K_{0}(a_{k}+r_{k})}{a_{0}Z(\mathcal{G}( \mathcal{D}_{k}))}\\ \rho_{k2}^{*}(\lambda_{1k2})&\text{if }\rho_{k2}^{*}(\lambda_{12})<\frac{K_{0}(a_{k }+r_{k})}{a_{0}Z(\mathcal{G}(\mathcal{D}_{k}))}\end{array}\right., \tag{26}\]
_where \(\rho_{k1}^{*}(\lambda_{1k})\) and \(\rho_{k2}^{*}(\lambda_{1k})\) are respectively the solutions to \(\frac{\partial\mathcal{L}_{1}(\mathbf{\rho}_{k1})}{\partial\rho_{k}}=0\) in (30) and (32), \(\lambda_{1k1}\) and \(\lambda_{1k2}\) respectively satisfy_
\[\frac{y_{3k}(\mathcal{D}_{k})+C_{k1}(\rho_{k1}^{*}(\lambda_{1k1}) |_{\Gamma_{k}}^{\dagger}-C_{k2})^{C_{k3}}}{f_{k}}+\frac{Z(\mathcal{G}( \mathcal{D}_{k}))}{r_{k}+a_{k}}\] \[+\frac{C_{k4}(\rho_{k1}^{*}(\lambda_{1k1})|_{\Gamma_{k}}^{ \dagger})^{-C_{k5}}}{g_{k}}=T, \tag{27}\]
\[\frac{y_{3k}(\mathcal{D}_{k})+C_{k1}(\rho_{k2}^{*}(\lambda_{1k2}) |_{\Gamma_{k}}^{\dagger}-C_{k2})^{C_{k3}}}{f_{k}}+\frac{K_{0}}{a_{0}}\] \[+\frac{C_{k4}(\rho_{k2}^{*}(\lambda_{1k2})|_{\Gamma_{k}}^{\dagger })^{-C_{k5}}}{g_{k}}=T, \tag{28}\]
_with \(a|_{b}^{c}=\min\{\max\{a,b\},c\}\)._
Proof.: Denoting \(\lambda_{11},\cdots,\lambda_{1K}>0\) as the Lagrange multiplier variables associated with constraint (25a), we obtain the Lagrange function of problem (25) as
\[\mathcal{L}_{1}(\mathbf{\rho},\mathbf{\lambda}_{1})=\sum_{k=1}^{K}\Big{(} \kappa f_{k}^{2}(y_{3k}(\mathcal{D}_{k})+C_{k1}(\rho_{k}-C_{k2})^{C_{k3}})\] \[+\frac{Z(\mathcal{G}(\mathcal{D}_{k}))p_{k}\rho_{k}}{r_{k}+a_{k} }+\kappa C_{k4}\rho_{k}^{-C_{k5}}g_{k}^{2}\Big{)}+\frac{K_{0}p_{0}}{a_{0}}\] \[+\sum_{k=1}^{K}\lambda_{1k}\Big{(}\frac{y_{3k}(\mathcal{D}_{k})+C _{k1}(\rho_{k}-C_{k2})^{C_{k3}}}{f_{k}} \tag{29}\] \[+\max\left\{\frac{Z(\mathcal{G}(\mathcal{D}_{k}))\rho_{k}}{r_{k}+ a_{k}},\frac{K_{0}}{a_{0}}\right\}\] \[+\frac{C_{k4}\rho_{k}^{-C_{k5}}}{g_{k}}-T\Big{)},\]
where \(\mathbf{\lambda}_{1}=[\lambda_{11},\cdots,\lambda_{1K}]^{T}\). The first derivative of (29) becomes
\[\frac{\partial\mathcal{L}_{1}(\mathbf{\rho},\mathbf{\lambda}_{1})}{ \partial\rho_{k}}=\kappa f_{k}^{2}C_{k1}C_{k3}(\rho_{k}-C_{k2})^{C_{k3}-1} \tag{30}\] \[+\frac{Z(\mathcal{G}(\mathcal{D}_{k}))p_{k}}{r_{k}+a_{k}}-\kappa C_ {k4}C_{k5}\rho_{k}^{-C_{k5}-1}g_{k}^{2}\] \[+\lambda_{1k}\Big{(}\frac{C_{k1}C_{k3}(\rho_{k}-C_{k2})^{C_{k3}-1}}{ f_{k}}+\frac{Z(\mathcal{G}(\mathcal{D}_{k}))}{r_{k}+a_{k}}\] \[-\frac{C_{k4}C_{k5}\rho_{k}^{-C_{k5}-1}}{g_{k}}\Big{)}\]
for
\[\rho_{k}\geq\frac{K_{0}(a_{k}+r_{k})}{a_{0}Z(\mathcal{G}(\mathcal{D}_{k}))}, \tag{31}\]
and
\[\frac{\partial\mathcal{L}_{1}(\mathbf{\rho},\mathbf{\lambda}_{1})}{\partial \rho_{k}}=\kappa f_{k}^{2}C_{k1}C_{k3}(\rho_{k}-C_{k2})^{C_{k3}-1} \tag{32}\] \[+\frac{Z(\mathcal{G}(\mathcal{D}_{k}))p_{k}}{r_{k}+a_{k}}-\kappa C _{k4}C_{k5}\rho_{k}^{-C_{k5}-1}g_{k}^{2}\] \[+\lambda_{1k}\Big{(}\frac{C_{k1}C_{k3}(\rho_{k}-C_{k2})^{C_{k3}-1}}{ f_{k}}-\frac{C_{k4}C_{k5}\rho_{k}^{-C_{k5}-1}}{g_{k}}\Big{)}\]
for
\[\rho_{k}<\frac{K_{0}(a_{k}+r_{k})}{a_{0}Z(\mathcal{G}(\mathcal{D}_{k}))}, \tag{33}\]
Denote the solutions of \(\frac{\partial\mathcal{L}_{1}(\mathbf{\rho},\mathbf{\lambda}_{1})}{\partial\rho_{k}}=0\) in equations (30) and (32) by \(\rho_{k1}^{*}(\lambda_{1k})\) and \(\rho_{k2}^{*}(\lambda_{1k})\), respectively.
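Independently of the closed-form characterization in Theorem 1, the convexity and per-user separability of problem (25) make it easy to sanity-check numerically. The sketch below brute-forces the per-user subproblem on a grid of extraction rates under the illustrative cost models (23) and (24); every numerical value in the example call is a placeholder, not a value used in our experiments.

```python
import numpy as np

def solve_extraction_rate(y3, C1, C2, C3, C4, C5, Zg, f, g, p, a_k, r_k, a0, K0,
                          T, Gamma, kappa=1e-28, grid=10000):
    """Brute-force check of the per-user part of problem (25): pick the extraction
    rate rho in [Gamma, 1] that minimizes user k's energy terms while meeting the
    completion-time budget T.  Cost models follow Eqs. (23) and (24)."""
    rho = np.linspace(Gamma, 1.0, grid)
    y1 = y3 + C1 * (rho - C2) ** C3                        # Eq. (23): BS-side extraction cycles
    y2 = C4 * rho ** (-C5)                                 # Eq. (24): user-side recovery cycles
    t = y1 / f + np.maximum(Zg * rho / (r_k + a_k), K0 / a0) + y2 / g
    E = kappa * f**2 * y1 + Zg * p * rho / (r_k + a_k) + kappa * y2 * g**2
    feasible = t <= T
    if not feasible.any():
        return None                                        # no rho satisfies the delay budget
    idx = np.argmin(np.where(feasible, E, np.inf))
    return rho[idx], E[idx]

print(solve_extraction_rate(y3=1e8, C1=5e7, C2=0.3, C3=2, C4=2e8, C5=1.2,
                            Zg=5e5, f=1e9, g=5e8, p=0.2, a_k=5e5, r_k=2e6,
                            a0=1e6, K0=1e5, T=1.0, Gamma=0.2))
```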
### _Optimal Computation Capacity_
With given semantic information extraction, power control, rate allocation, and beamforming design, problem (19) can be simplified as
\[\min_{\mathbf{f},\mathbf{g}} \sum_{k=1}^{K}\bigg{(}\kappa y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k} )f_{k}^{2}+\frac{Z(\mathcal{S}_{k})p_{k}}{r_{k}+a_{k}}+\kappa y_{2k}(\mathcal{D }_{k},\mathcal{S}_{k})g_{k}^{2}\bigg{)}\] (34) s.t. \[\frac{y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})}{f_{k}}+\max\left\{ \frac{Z(\mathcal{S}_{k})}{r_{k}+a_{k}},\frac{K_{0}}{a_{0}}\right\}\] \[+\frac{y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})}{g_{k}}\leq T,\quad \forall k\in\mathcal{K}, \tag{34a}\] \[\sum_{k=1}^{K}f_{k}\leq F^{\max}\] (34b) \[f_{k}\geq 0,\quad\forall k,\] (34c) \[0\leq g_{k}\leq g_{k}^{\max},\quad\forall k\in\mathcal{K}. \tag{34d}\]
The Lagrange function of problem (34) can be given by
\[\mathcal{L}_{2}(\mathbf{f},\mathbf{g},\mathbf{\lambda}_{2},\lambda_{3})=\sum _{k=1}^{K}\Big{(}\kappa y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})f_{k}^{2}+ \frac{Z(\mathcal{S}_{k})p_{k}}{r_{k}+a_{k}} \tag{35}\] \[+\kappa y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})g_{k}^{2}\Big{)}+ \frac{K_{0}p_{0}}{a_{0}}\] \[+\sum_{k=1}^{K}\lambda_{2k}\Big{(}\frac{y_{1k}(\mathcal{D}_{k}, \mathcal{S}_{k})}{f_{k}}+\max\left\{\frac{Z(\mathcal{S}_{k})}{r_{k}+a_{k}}, \frac{K_{0}}{a_{0}}\right\}\] \[+\frac{y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})}{g_{k}}-T\Big{)}+ \lambda_{3}\left(\sum_{k=1}^{K}f_{k}-F^{\max}\right),\]
where \(\mathbf{\lambda}_{2}=[\lambda_{21},\cdots,\lambda_{2K}]^{T}\) is the vector of Lagrange multipliers associated with constraint (34a) and \(\lambda_{3}>0\) is the Lagrange multiplier associated with constraint (34b). The first derivatives of (35) become
\[\frac{\partial\mathcal{L}_{2}(\mathbf{f},\mathbf{g},\mathbf{\lambda}_{2},\lambda_{3})}{\partial f_{k}}=2\kappa y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})f_{k}-\frac{\lambda_{2k}y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})}{f_{k}^{2}}+\lambda_{3}, \tag{36}\]
\[\frac{\partial\mathcal{L}_{2}(\mathbf{f},\mathbf{g},\mathbf{\lambda}_{2},\lambda_{3})}{\partial g_{k}}=2\kappa y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})g_{k}-\frac{\lambda_{2k}y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})}{g_{k}^{2}}. \tag{37}\]
Setting \(\frac{\partial\mathcal{L}_{2}(\mathbf{f},\mathbf{g},\mathbf{\lambda}_{2},\lambda_{3})}{\partial f_{k}}=0\) and \(\frac{\partial\mathcal{L}_{2}(\mathbf{f},\mathbf{g},\mathbf{\lambda}_{2},\lambda_{3})}{\partial g_{k}}=0\) yields
\[2\kappa y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})f_{k}^{3}+\lambda_{3}f_{k}^{2} -\lambda_{2k}y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})=0, \tag{38}\]
\[g_{k}=\left(\frac{\lambda_{2k}y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})}{2 \kappa y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})}\right)^{\frac{1}{3}}. \tag{39}\]
The value of \(f_{k}\) can be obtained by solving the cubic equation in (38). Having obtained the computation capacities \(f_{k}\) and \(g_{k}\), the Lagrange multipliers can be updated via the gradient method. In the \(t\)-th iteration, the values of \(\lambda_{2k}\) and \(\lambda_{3}\) are updated by
\[\lambda_{2k}(t)= \Bigg{[}\lambda_{2k}(t-1)-\upsilon(t)\Big{(}\frac{y_{1k}(\mathcal{ D}_{k},\mathcal{S}_{k})}{f_{k}}+ \tag{40}\] \[\max\left\{\frac{Z(\mathcal{S}_{k})}{r_{k}+a_{k}},\frac{K_{0}}{a_{ 0}}\right\}+\frac{y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})}{g_{k}}-T\Big{)} \Bigg{]}^{+},\]
and
\[\lambda_{3}(t)=\left[\lambda_{3}(t-1)-\upsilon(t)\left(\sum_{k=1}^{K}f_{k}-F^{ \max}\right)\right]^{+}, \tag{41}\]
where \([a]^{+}=\max\{a,0\}\) and \(\upsilon(t)>0\) is the dynamic step size. Through iteratively updating \((f_{k},g_{k})\) and \((\lambda_{2k},\lambda_{3})\), the overall procedure yields the globally optimal solution of problem (34).
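A minimal sketch of the resulting dual update loop is given below: \(f_{k}\) comes from the positive real root of the cubic equation (38) (here via `numpy.roots`), \(g_{k}\) from (39) clipped to its box constraint, and the multipliers are updated by projected subgradient steps in the spirit of (40) and (41). Step sizes and iteration counts are illustrative and would need tuning for realistic problem magnitudes.

```python
import numpy as np

def solve_computation_capacity(y1, y2, t_rest, T, F_max, g_max, kappa=1e-28,
                               iters=2000, step=1e-3):
    """Dual (sub)gradient sketch for subproblem (34).  y1, y2: per-user CPU cycle
    counts; t_rest: the fixed downlink term max{Z/(r+a), K0/a0} per user;
    g_max: per-user caps on the local computation capacity."""
    K = len(y1)
    lam2 = np.ones(K)                 # multipliers for the delay constraints (34a)
    lam3 = 1.0                        # multiplier for the BS capacity constraint (34b)
    f = np.full(K, F_max / K)
    g = np.asarray(g_max, dtype=float).copy()
    for _ in range(iters):
        for k in range(K):
            # Eq. (38): 2*kappa*y1*f^3 + lam3*f^2 - lam2*y1 = 0 -> unique positive real root
            roots = np.roots([2 * kappa * y1[k], lam3, 0.0, -lam2[k] * y1[k]])
            pos = roots.real[(np.abs(roots.imag) < 1e-6) & (roots.real > 0)]
            if pos.size:
                f[k] = pos.max()
            # Eq. (39) (after canceling y_2k) clipped to the box 0 <= g_k <= g_k^max
            g[k] = min((lam2[k] / (2 * kappa)) ** (1.0 / 3.0), g_max[k])
        # projected subgradient updates in the spirit of Eqs. (40) and (41):
        # a violated constraint pushes its multiplier up; multipliers stay nonnegative
        slack = y1 / f + t_rest + y2 / g - T
        lam2 = np.maximum(lam2 + step * slack, 0.0)
        lam3 = max(lam3 + step * (f.sum() - F_max), 0.0)
    return f, g

# step and iters are placeholders; in practice they must be scaled to the problem magnitudes
```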
### _Joint Power Control, Rate Allocation, and Beamforming Design_
With given semantic information extraction and computation capacity, problem (19) can be simplified as
\[\min_{\mathbf{p},\mathbf{a},\mathbf{w}} \sum_{k=1}^{K}\bigg{(}\kappa y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k })f_{k}^{2}+\frac{Z(\mathcal{S}_{k})p_{k}}{r_{k}+a_{k}}+\kappa y_{2k}(\mathcal{ D}_{k},\mathcal{S}_{k})g_{k}^{2}\bigg{)} \tag{42}\] \[+\frac{K_{0}p_{0}}{a_{0}},\] s.t. \[\frac{y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})}{f_{k}}+\max \left\{\frac{Z(\mathcal{S}_{k})}{r_{k}+a_{k}},\frac{K_{0}}{a_{0}}\right\}\] (42a) \[+\frac{y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})}{g_{k}}\leq T,\quad \forall k\in\mathcal{K},\] \[a_{0}+\sum_{k=1}^{K}a_{k}\leq c_{k},\quad\forall k\in\mathcal{K},\] (42b) \[\sum_{k=0}^{K}p_{0}\leq P^{\max}\] (42c) \[a_{0},a_{k},p_{k}\geq 0,\quad\forall k,\] (42d) \[\|\mathbf{w}_{k}\|=1,\quad\forall k\in\mathcal{K}\cup\{0\}, \tag{42e}\]
Problem (42) is nonconvex owing to the nonconvex objective function and constraints (42a), (42b) and (42e). To handle the nonconvexity of the objective function, we introduce a new variable \(r_{k}\) and replace the power \(p_{k}\) with the variable \(p_{k}^{2}\). Thus,
problem (42) can be equivalently transformed to
\[\min_{\mathbf{p},\mathbf{a},\mathbf{r},\mathbf{w}} \sum_{k=1}^{K}\left(\kappa y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})f _{k}^{2}+\frac{Z(\mathcal{S}_{k})p_{k}^{2}}{r_{k}+a_{k}}+\kappa y_{2k}(\mathcal{ D}_{k},\mathcal{S}_{k})g_{k}^{2}\right)\] \[+\frac{K_{0}p_{0}^{2}}{a_{0}},\] (43) s.t. \[\frac{y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})}{f_{k}}+\max\left\{ \frac{Z(\mathcal{S}_{k})}{r_{k}+a_{k}},\frac{K_{0}}{a_{0}}\right\}\] \[+\frac{y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})}{g_{k}}\leq T,\quad \forall k\in\mathcal{K}, \tag{43a}\] \[a_{0}+\sum_{k=1}^{K}a_{k}\leq B\log_{2}\left(1+\frac{p_{0}^{2}| \mathbf{h}_{k}^{H}\mathbf{w}_{0}|^{2}}{\sum_{j=1}^{K}p_{j}^{2}|\mathbf{h}_{k}^{H}\mathbf{w}_{ j}|^{2}+\sigma^{2}}\right),\] \[\quad\forall k\in\mathcal{K},\] (43b) \[r_{k}\leq B\log_{2}\left(1+\frac{p_{k}^{2}|\mathbf{h}_{k}^{H}\mathbf{w}_ {k}|^{2}}{\sum_{j=1,j\neq k}^{K}p_{j}^{2}|\mathbf{h}_{k}^{H}\mathbf{w}_{j}|^{2}+\sigma ^{2}}\right),\] \[\quad\forall k\in\mathcal{K},\] (43c) \[a_{0},a_{k},p_{k}\geq 0,\quad\forall k,\] (43d) \[\|\mathbf{w}_{k}\|\leq 1,\quad\forall k\in\mathcal{K}\cup\{0\}, \tag{43e}\]
where \(\mathbf{r}=[r_{1},\cdots,r_{K}]^{T}\), the objective function is convex, and constraint (43e) replaces the equality in (42e) with an inequality without loss of generality. In problem (43), we only need to deal with the nonconvexity of constraints (43b) and (43c). Through introducing slack variables \(\gamma_{k}\) and \(\eta_{k}\), problem (43) can be reformulated as:
\[\min_{\mathbf{p},\mathbf{a},\mathbf{r},\mathbf{w},\mathbf{\gamma},\mathbf{\eta}} \sum_{k=1}^{K}\left(\kappa y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k}) f_{k}^{2}+\frac{Z(\mathcal{S}_{k})p_{k}^{2}}{r_{k}+a_{k}}\right.\] \[\quad+\kappa y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})g_{k}^{2} \right)+\frac{K_{0}p_{0}^{2}}{a_{0}},\] (44) s.t. \[\frac{y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})}{f_{k}}+\max\left\{ \frac{Z(\mathcal{S}_{k})}{r_{k}+a_{k}},\frac{K_{0}}{a_{0}}\right\}\] \[\quad+\frac{y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})}{g_{k}}\leq T,\quad\forall k\in\mathcal{K}, \tag{44a}\] \[a_{0}+\sum_{k=1}^{K}a_{k}\leq B\log_{2}\left(1+\eta_{k}\right), \quad\forall k\in\mathcal{K},\] (44b) \[r_{k}\leq B\log_{2}\left(1+\gamma_{k}\right),\quad\forall k\in \mathcal{K},\] (44c) \[a_{0},a_{k},p_{k}\geq 0,\quad\forall k,\] (44d) \[\|\mathbf{w}_{k}\|\leq 1,\quad\forall k\in\mathcal{K}\cup\{0\},\] (44e) \[\frac{p_{k}^{2}|\mathbf{h}_{k}^{H}\mathbf{w}_{k}|^{2}}{\sum_{j=1,j\neq k} ^{K}p_{j}^{2}|\mathbf{h}_{k}^{H}\mathbf{w}_{j}|^{2}+\sigma^{2}}\geq\gamma_{k},\quad \forall k\in\mathcal{K},\] (44f) \[\frac{p_{0}^{2}|\mathbf{h}_{k}^{H}\mathbf{w}_{0}|^{2}}{\sum_{j=1}^{K}p_{j} ^{2}|\mathbf{h}_{k}^{H}\mathbf{w}_{j}|^{2}+\sigma^{2}}\geq\eta_{k},\quad\forall k\in \mathcal{K}, \tag{44g}\]
where \(\mathbf{\gamma}=[\gamma_{1},\cdots,\gamma_{K}]^{T}\) and \(\mathbf{\eta}=[\eta_{1},\cdots,\eta_{K}]^{T}\). In problem (44), the objective function is transformed into convex. Because of nonconvex constraints (44f) and (44g), problem (44) is nonconvex. In the following, we utilize the SCA method to handle these two nonconvex constraints.
For constraint (44f), it can be equivalent to
\[p_{k}^{2}|\mathbf{h}_{k}^{H}\mathbf{w}_{k}|^{2}\geq\gamma_{k}\alpha_{k}, \tag{45}\] \[\sum_{j=1,j\neq k}^{K}p_{j}^{2}|\mathbf{h}_{k}^{H}\mathbf{w}_{j}|^{2}+ \sigma^{2}\leq\alpha_{k}, \tag{46}\]
where \(\alpha_{k}\) is a nonnegative slack variable. In (45), we can always choose the term \(\mathbf{h}_{k}^{H}\mathbf{w}_{k}\) as a real value through changing the phase of beamforming \(\mathbf{w}_{k}\). Thus, constraint (45) can be rewritten as
\[\mathcal{R}(\mathbf{h}_{k}^{H}\mathbf{w}_{k})\geq\frac{\sqrt{\gamma_{k}\alpha_{k}}}{p_{ k}}, \tag{47}\]
where the left hand side is convex now. Through using the first-order Taylor series to replace the right hand side of (47), constraint (47) can be approximated by
\[\mathcal{R}(\mathbf{h}_{k}^{H}\mathbf{w}_{k})\geq\frac{\sqrt{\gamma_{k}^{ (n)}\alpha_{k}^{(n)}}}{p_{k}^{(n)}}+\frac{\sqrt{\gamma_{k}^{(n)}}}{2p_{k}^{(n)} \sqrt{\alpha_{k}^{(n)}}}(\alpha_{k}-\alpha_{k}^{(n)})\] \[+\frac{\sqrt{\alpha_{k}^{(n)}}}{2p_{k}^{(n)}\sqrt{\gamma_{k}^{(n)} }}(\gamma_{k}-\gamma_{k}^{(n)})-\frac{\sqrt{\gamma_{k}^{(n)}\alpha_{k}^{(n)}}}{(p_ {k}^{(n)})^{2}}(p_{k}-p_{k}^{(n)}), \tag{48}\]
where the superscript \((n)\) means the value of the variable in the \(n\)-th iteration. Moreover, (46) can be reformulated as
\[\sum_{j=1,j\neq k}^{K}\frac{1}{4}((p_{j}^{2}+|\mathbf{h}_{k}^{H}\mathbf{w }_{j}|^{2})^{2}-(p_{j}^{2}-|\mathbf{h}_{k}^{H}\mathbf{w}_{j}|^{2})^{2})\] \[=\sum_{j=1,j\neq k}^{K}p_{j}^{2}|\mathbf{h}_{k}^{H}\mathbf{w}_{j}|^{2}+ \sigma^{2}\leq\alpha_{k}. \tag{49}\]
Through replacing the left hand side of (49) with its first-order Taylor approximation, we can obtain
\[\sum_{j=1,j\neq k}^{K}\frac{1}{4}\Big{[}((p_{j}^{(n)})^{2}+|\mathbf{h }_{k}^{H}\mathbf{w}_{j}^{(n)}|^{2})^{2}+4((p_{j}^{(n)})^{2}\] \[+|\mathbf{h}_{k}^{H}\mathbf{w}_{j}^{(n)}|^{2})p_{j}^{(n)}(p_{j}-p_{j}^{(n) })-(p_{j}^{2}-|\mathbf{h}_{k}^{H}\mathbf{w}_{j}|^{2})^{2}\] \[+4((p_{j}^{(n)})^{2}+|\mathbf{h}_{k}^{H}\mathbf{w}_{j}^{(n)}|^{2})( \mathcal{R}(\mathbf{h}_{k}^{H}\mathbf{w}_{j}^{(n)}\mathbf{h}_{k}^{H}\mathbf{w}_{j})-|\mathbf{h}_{k}^{H} \mathbf{w}_{j}^{(n)}|^{2})\Big{]}\] \[+\sigma^{2}\leq\alpha_{k}. \tag{50}\]
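As a quick symbolic check (assuming SymPy is available), the script below verifies that the affine surrogate on the right-hand side of (48) is exactly the first-order Taylor expansion of \(\sqrt{\gamma_{k}\alpha_{k}}/p_{k}\) around the point \((\gamma_{k}^{(n)},\alpha_{k}^{(n)},p_{k}^{(n)})\).

```python
import sympy as sp

ga, al, pw = sp.symbols('gamma_k alpha_k p_k', positive=True)
ga0, al0, pw0 = sp.symbols('gamma_k0 alpha_k0 p_k0', positive=True)

f = sp.sqrt(ga * al) / pw            # right-hand side of (47)

# first-order Taylor expansion of f around (gamma_k^(n), alpha_k^(n), p_k^(n))
point = {ga: ga0, al: al0, pw: pw0}
taylor = (f.subs(point)
          + sp.diff(f, al).subs(point) * (al - al0)
          + sp.diff(f, ga).subs(point) * (ga - ga0)
          + sp.diff(f, pw).subs(point) * (pw - pw0))

# the affine surrogate used on the right-hand side of (48)
rhs_48 = (sp.sqrt(ga0 * al0) / pw0
          + sp.sqrt(ga0) / (2 * pw0 * sp.sqrt(al0)) * (al - al0)
          + sp.sqrt(al0) / (2 * pw0 * sp.sqrt(ga0)) * (ga - ga0)
          - sp.sqrt(ga0 * al0) / pw0**2 * (pw - pw0))

print(sp.simplify(taylor - rhs_48))   # prints 0, i.e., the two expressions coincide
```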
Similarly, we can introduce slack variable \(\beta_{k}\) and constraint (44g) can be rewritten as:
\[\frac{1}{4}((p_{0}^{2}+|\mathbf{h}_{k}^{H}\mathbf{w}_{0}|^{2})^{2}-(p_{0}^{2}-|\mathbf{h}_{k}^{H}\mathbf{w}_{0}|^{2})^{2})=p_{0}^{2}|\mathbf{h}_{k}^{H}\mathbf{w}_{0}|^{2}\geq\beta_{k}\eta_{k}=\frac{1}{4}((\beta_{k}+\eta_{k})^{2}-(\beta_{k}-\eta_{k})^{2}), \tag{51}\]
and
\[\sum_{j=1}^{K}p_{j}^{2}|\mathbf{h}_{k}^{H}\mathbf{w}_{j}|^{2}+\sigma^{2}\leq\beta_{k}. \tag{52}\]
Considering the first-order Taylor approximation on both sides, (51) can be transformed to
\[((p_{0}^{(n)})^{2}+|\mathbf{h}_{k}^{H}\mathbf{w}_{0}^{(n)}|^{2})^{2}+4((p_{0} ^{(n)})^{2}\] \[+ |\mathbf{h}_{k}^{H}\mathbf{w}_{0}^{(n)}|^{2})p_{0}^{(n)}(p_{0}-p_{0}^{(n)} )-(p_{0}^{2}-|\mathbf{h}_{k}^{H}\mathbf{w}_{0}|^{2})^{2}\] \[+ 4((p_{0}^{(n)})^{2}+|\mathbf{h}_{k}^{H}\mathbf{w}_{0}^{(n)}|^{2})(\mathcal{ R}(\mathbf{h}_{k}^{H}\mathbf{w}_{0}^{(n)}\mathbf{h}_{k}^{H}\mathbf{w}_{0})-|\mathbf{h}_{k}^{H}\mathbf{w}_{0 }^{(n)}|^{2})\] \[\geq (\beta_{k}+\eta_{k})^{2}-(\beta_{k}^{(n)}-\eta_{k}^{(n)})(\beta_{ k}-\eta_{k})+(\beta_{k}^{(n)}-\eta_{k}^{(n)})^{2}, \tag{53}\]
For constraint (52), we can use the similar method to handle the nonconvexity of (46). Thus, (52) can be approximated by
\[\sum_{j=1}^{K}\frac{1}{4}\Big{[}((p_{j}^{(n)})^{2}+|\mathbf{h}_{k}^{ H}\mathbf{w}_{j}^{(n)}|^{2})^{2}+4((p_{j}^{(n)})^{2}\] \[+ |\mathbf{h}_{k}^{H}\mathbf{w}_{j}^{(n)}|^{2})p_{j}^{(n)}(p_{j}-p_{j}^{(n) })-(p_{j}^{2}-|\mathbf{h}_{k}^{H}\mathbf{w}_{j}|^{2})^{2}\] \[+ 4((p_{j}^{(n)})^{2}+|\mathbf{h}_{k}^{H}\mathbf{w}_{j}^{(n)}|^{2})( \mathcal{R}(\mathbf{h}_{k}^{H}\mathbf{w}_{j}^{(n)}\mathbf{h}_{k}^{H}\mathbf{w}_{j})-|\mathbf{h}_{k }^{H}\mathbf{w}_{j}^{(n)}|^{2})\Big{]}\] \[+\sigma^{2}\leq\beta_{k}. \tag{54}\]
With the above approximations, we can approximate the nonconvex constraints (44f) and (44g) with the corresponding convex approximation terms. Thus, the original problem (44) can be approximated by the following convex one:
\[\min_{\mathbf{p},\mathbf{a},\mathbf{r},\mathbf{w},\mathbf{\gamma},\mathbf{\alpha},\mathbf{\eta},\mathbf{\beta}}\quad\sum_{k=1}^{K}\bigg{(}\kappa y_{1k}(\mathcal{D}_{k},\mathcal{S}_{k})f_{k}^{2}+\frac{Z(\mathcal{S}_{k})p_{k}^{2}}{r_{k}+a_{k}}+\kappa y_{2k}(\mathcal{D}_{k},\mathcal{S}_{k})g_{k}^{2}\bigg{)}+\frac{K_{0}p_{0}^{2}}{a_{0}}, \tag{55}\]
\[\text{s.t.}\quad(44a)-(44e),\ (48),\ (50),\ (53),\ (54). \tag{55a}\]
Since the proposed alternating algorithm may converge to a locally optimal solution, the reported results are obtained by running the proposed algorithm with 1000 initial solutions so as to yield a near globally optimal solution. Fig. 6 shows the total communication and computation energy versus the maximum transmit power of the BS. It can be seen from this figure that the total energy decreases with the maximum transmit power of the BS. This is due to the fact that a large transmit power leads to a short transmission time, which allows more time for computation and yields low total energy consumption. It is observed that the proposed RSMA outperforms FDMA and NOMA, since RSMA can achieve higher spectral efficiency than FDMA and NOMA. Compared to SDMA, RSMA still achieves lower energy consumption, in particular when the maximum transmit power is high. The reason is that SDMA is more likely to serve the users with higher channel gains, while the users with poor channel gains tend to have long transmission times and require high computation power for task computation, thus leading to higher total energy consumption than RSMA. It can also be found that the proposed RSMA achieves performance close to that of EXH-RSMA, which indicates the effectiveness of the proposed scheme.
Fig. 7 shows the total energy versus the bandwidth of the system. Based on this figure, the total communication and computation energy decreases as the bandwidth of the system increases for all schemes. This is because a high bandwidth decreases the transmission time between the users and the BS, which allows a longer computation time and consequently reduces the local computation energy consumption.
Fig. 8 illustrates the trend of the total communication and computation energy with the transmit data size of each user. It is observed that the total energy increases with the data size for all schemes. This is due to the fact that more information needs to be transmitted, thus increasing the transmission and computation power. It can also be found that the total energy of the proposed RSMA grows more slowly with the data size than that of NOMA and FDMA, which shows the robustness of RSMA.
To show how the computation capacity affects the system performance, Fig. 9 presents the total communication and computation energy versus the maximum computation capacity of each user. According to this figure, the total energy first decreases rapidly and then tends to approach a fixed value. The reason is that in the small computation capacity region, increasing the maximum computation capacity can greatly decrease the computation time so that more time can be used for transmission, thus reducing the transmit power and the total energy. In the high computation capacity region, each user has already chosen its optimal computation capacity, and a further increase of the maximum computation capacity does not affect the computation capacity allocation result, thus leading to stable energy consumption.
## V Conclusions
In this paper, the problem of wireless resource allocation and semantic information extraction for energy efficient semantic communications over wireless networks with rate splitting is investigated. In the considered model, the BS first extracts the semantic information from its large-scale
Fig. 8: Total communication and computation energy vs. transmit data size of each user.
Fig. 6: Total communication and computation energy vs. maximum transmit power.
Fig. 7: Total communication and computation energy vs. bandwidth of the system.
data, and then transmits the small-sized semantic information to each user, which recovers the original data based on the local common knowledge. In the downlink transmission, the rate splitting scheme is adopted, where the private small-sized semantic information is transmitted through the private messages and the common knowledge is transmitted through the common message. Due to limited wireless resources, both computation energy and transmission energy need to be considered. This joint computation and communication problem is formulated as an optimization problem whose goal is to minimize the total energy consumption of the network under both task completion and semantic accuracy constraints. An iterative algorithm is presented to solve this problem, where at each step the optimal solutions for the semantic information extraction ratio and the computation frequency are derived. Numerical results show the effectiveness of the proposed algorithm.
|
2302.09653 | Remote Identification Trajectory Coverage in Urban Air Mobility
Applications | As Urban Air Mobility (UAM) and Advanced Air Mobility (AAM) continue to
mature, a safety-critical system that will need to be implemented in tandem is
Remote Identification (Remote ID) for uncrewed aircraft systems (UAS). To
ensure successful and efficient deployment (e.g., maximal surveillance of UAS
trajectories), as well as to better understand secondary impacts (e.g.,
consumer privacy risks in collecting real-time UAS trajectory information), the
coverage of broadcast-receive Remote ID architectures needs to be
characterized. Motivated by this need, we examine theoretical and empirical
trajectory coverage of several common Remote ID technologies (e.g., Bluetooth,
Wi-Fi) deployed for urban package delivery missions, a commonly-cited use case
for UAM and AAM. We derive methods to explicitly compute expected coverage
proportions under idealized geometries, as well as conduct case studies with
realistic city geographies and UAS path planning algorithms. An example of
results include approximate magnitudes of Remote ID receivers needed
(approximately 500-5000 receivers needed to achieve 50-95\% coverage for
Bluetooth Legacy, and approximately 10-40 receivers needed for the same
coverage range for Wi-Fi NAN/Beacon, assuming a cruise altitude of 200 feet) to
achieve specific trajectory coverage proportions for San Francisco, California.
Our analyses, combined with complementary works related to Remote ID bandwidth
and deployment topologies, can help guide municipal authorities and AAM
stakeholders in future Remote ID system deployments and upkeep. | Hejun Huang, Billy Mazotti, Joseph Kim, Max Z. Li | 2023-02-19T18:47:57Z | http://arxiv.org/abs/2302.09653v1 | # Remote Identification Trajectory Coverage
###### Abstract
As Urban Air Mobility (UAM) and Advanced Air Mobility (AAM) continue to mature, a safety-critical system that will need to be implemented in tandem is Remote Identification (Remote ID) for uncrewed aircraft systems (UAS). To ensure successful and efficient deployment (e.g., maximal surveillance of UAS trajectories), as well as to better understand secondary impacts (e.g., consumer privacy risks in collecting real-time UAS trajectory information), the _coverage_ of broadcast-receive Remote ID architectures needs to be characterized. Motivated by this need, we examine theoretical and empirical trajectory coverage of several common Remote ID technologies (e.g., Bluetooth, Wi-Fi) deployed for urban package delivery missions, a commonly-cited use case for UAM and AAM. We derive methods to explicitly compute expected coverage proportions under idealized geometries, as well as conduct case studies with realistic city geographies and UAS path planning algorithms. An example of results include approximate magnitudes of Remote ID receivers needed (approximately 500-5000 receivers needed to achieve 50-95% coverage for Bluetooth Legacy, and approximately 10-40 receivers needed for the same coverage range for Wi-Fi NAN/Beacon, assuming a cruise altitude of 200 feet) to achieve specific trajectory coverage proportions for San Francisco, California. Our analyses, combined with complementary works related to Remote ID bandwidth and deployment topologies, can help guide municipal authorities and AAM stakeholders in future Remote ID system deployments and upkeep.
Urban Air Mobility (UAM); Advanced Air Mobility (AAM); Uncrewed Aircraft Systems (UAS); drones; Remote Identification; receiver coverage
## I Introduction
Uncrewed aircraft systems (UAS), also referred to colloquially as drones, are anticipated to play a large role in the future of aerial mobility. There is a diverse range of UAS vehicle classes, corresponding to a wide array of use cases and mission applications that previously were infeasible (i.e., unserved) or underserved [1]. This expansion of the air transportation system into new modalities and services is encompassed within Advanced Air Mobility (AAM) [2] - of particular interest is the subset of AAM focused on urban applications: Urban Air Mobility (UAM) is generally differentiated from other contexts such as Rural Air Mobility (RAM) in terms of use cases and constraints. UAM use cases include last mile-type, potentially even vendor-to-door package and food delivery services, passenger air taxi and shuttle operations [3], as well as rapid transport of time-sensitive goods such as donated organs and emergency medical equipment [4]. Constraints unique to UAM include collision risks with dense infrastructures, difficulties in forecasting urban weather and environmental conditions, and acceptance by large communities with different attitudes regarding and perceptions of UAM [5].
Given the numerous safety and security concerns revolving around UAS and AAM applications, several safety-critical adjacent systems have been proposed [6]. One of these systems - _Remote Identification_ (Remote ID) in the US [7, 8] and analogous counterparts internationally (e.g., [9]) - has reached technical, legislative, and regulatory maturity. Remote ID requires the broadcast of identifying information (e.g., drone ID), real-time position and velocities, as well as control station location from drone takeoff to shutdown [7, 8]. Through data blocks retrieved via Remote ID, the _trajectory_ of a drone can be tracked in real time, in addition to being subject to post hoc analysis. For brevity, we will refer to all remote identification-type systems as Remote ID.
Remote ID was published to the US Federal Register in 2021, and enforcement of mandatory compliance is expected prior to 2025 [7, 8]. Unsuccessful lawsuits challenging the propriety of Remote ID [10] further demonstrate commitments to and momentum for implementation: The deciding factor in favor of Remote ID rests on its safety-critical nature; this is best summarized through the following court opinion striking down a challenge to Remote ID:
* Drones are coming. Lots of them. They are fun and useful. But their ability to pry, spy, crash, and drop things poses real risks. Free-for-all drone use threatens air traffic, people and things on the ground, and even national security. [The US] Congress recognizes as much. It passed a law in 2016 requiring the Federal Aviation Administration (FAA) to "develop[... consensus standards for remotely identifying operators and owners of unmanned aircraft systems" and to "issue regulations or guidance, as appropriate, based on any standards developed." [10]
The successful initiation and maturation of UAM (and AAM more broadly) will require safety-critical systems such as Remote ID to be implemented effectively (e.g., reliability in terms of drone tracking and reporting) and efficiently (e.g., without overwhelming required physical infrastructure such as Remote ID receivers). Furthermore, unintended negative externalities from such systems should be characterized in the context of real-world operations, with resultant mitigation strategies.
### _Technical gap and research problem_
Even though Remote ID broadcast standards have been published (e.g., Bluetooth 4, Bluetooth 5, and Wi-Fi-based [11]), to our knowledge there has not been a rigorous examination of how _comprehensive_ Remote ID-type systems are for monitoring drone operations, particularly in UAM use cases. Motivated by the setting of UAM operations and focusing on Remote ID systems, in this work we conduct an analysis of Remote ID coverage in idealized settings and real-world case studies. Specifically, we are interested in determining what proportion of a given drone trajectory might be surveilled by a Remote ID receiver: Given a (fixed) Remote ID receiver with its coverage area, the trajectory of a drone flown entirely within its coverage area is considered 100% covered, i.e., a coverage proportion of \(1\).
In the idealized setting, we assume specific geometries for the Remote ID coverage area, in addition to how drone trajectories are generated: We derive analytical solutions to obtain the expected coverage proportion, and compare with Monte Carlo-based simulations to validate our solutions. We then conduct more realistic simulation-based case studies given geographic information of a major city, coverage properties (e.g., Bluetooth range), and different drone trajectory path planning techniques.
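As a preview of the Monte Carlo validation, a minimal sketch of the idealized setting is given below: trajectories are straight segments between uniformly random endpoints in a square region, a single circular receiver footprint is placed at the center, and the coverage proportion of a trajectory is the fraction of sampled trajectory points falling inside the footprint. The region size, footprint radius, and sampling resolution are illustrative placeholders, not the values used in our experiments.

```python
import numpy as np

def coverage_proportion(n_trajs=20000, side=10.0, center=(5.0, 5.0), radius=2.0,
                        samples=200, seed=0):
    """Monte Carlo estimate of the expected trajectory coverage proportion:
    straight-line trajectories with uniformly random endpoints in a side x side
    square, one circular receiver footprint of the given radius."""
    rng = np.random.default_rng(seed)
    start = rng.uniform(0.0, side, size=(n_trajs, 2))
    end = rng.uniform(0.0, side, size=(n_trajs, 2))
    t = np.linspace(0.0, 1.0, samples)                       # sample points along each segment
    pts = start[:, None, :] + t[None, :, None] * (end - start)[:, None, :]
    inside = np.linalg.norm(pts - np.asarray(center), axis=2) <= radius
    return inside.mean(axis=1).mean()                        # average covered fraction

# e.g. coverage under a single centered receiver footprint
print(coverage_proportion())
```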
The results from our work can be used to inform how Remote ID infrastructure could be optimally deployed within an urban environment. Our work also provides a framework for analyzing how important factors such as the radius of the coverage area, the Remote ID receiver distribution, and the trajectory path planning method impact the effectiveness of Remote ID systems from the perspective of trajectory coverage. Finally, better coverage - as well as a better understanding of how the extent of the coverage changes with respect to the aforementioned factors - is a critical input needed to mitigate negative externalities stemming from the Remote ID system itself (e.g., consumer privacy risks [12]) or from UAM operations (e.g., noise and visual nuisance [13], drone intrusion concerns [14]).
### _Related literature_
Within the US, the final ruling on Remote Identification for uncrewed aircraft operating beyond visual line-of-sight (BVLOS) has been published to the US Federal Register [7, 8], with associated ASTM standards documented in [11]. Additionally, research and development efforts, including large-scale testing sites for the establishment of critical criteria such as Minimum Operational Performance Standards (MOPS), have been carried out [15].
The two operating modalities for Remote ID are broadcast-based and network-based. Broadcast-based Remote ID utilizes individual ground receiver modules which receive Remote ID broadcasts from active drones and UAS [16]. In addition to Bluetooth and Wi-Fi, long-range (LoRa) radio has also been explored for broadcast-based Remote ID [17]. Our work focuses on the broadcast-based modality, as network-based Remote ID operates in a fundamentally different manner [18]. Our planar setup of ground station receivers and broadcast drone trajectories aligns with previous work in [19] and [17]; in [19], an analysis of Bluetooth and Wi-Fi coverage in one hub-and-spoke network topology was performed using simulations. Our work complements and extends [19] via an analytical derivation of expected coverage proportions, as well as random Remote ID receiver placements within a realistic UAM operating geography.
Finally, we note that the classical coverage problem, e.g., \(k\)-cover constructions wherein every point in an area is covered by at least \(k\) sensors, has been well-studied [20, 21]. However, these setups do not consider the coverage of a _trajectory_, and mobile UAS have only been factored in as sensors themselves (e.g., [22]), but not as the object of surveillance and coverage. More specifically, our setting can be considered as a trajectory-centered extension of the Boolean disc-type coverage set up; we refer interested readers to [23] for an extensive survey of Boolean disc-type coverage. Other related setups include the usage of fixed traffic sensors (e.g., stationary traffic cameras) to perform tasks such as congestion characterization, traffic flow estimation, and trajectory reconstruction [24, 25].
## II Contributions of Work
The contributions of our work in this paper are as follows:
1. We derive explicit formulas for expected coverage proportions under idealized environment and coverage area geometries. These expected coverage proportions depend on the method through which UAS trajectories are generated, as well as the Remote ID receiver technology under consideration. We validate these formulas through Monte Carlo simulations, and analyze differences in coverage proportions under different trajectory generation assumptions.
2. We conduct a simulation-based case study centered on San Francisco, California, where we examine the number of Remote ID receivers required to attain specific trajectory coverage proportions (e.g., 50% versus 95% coverage) at different UAS cruising altitudes (200 feet versus 400 feet) and using different Remote ID receiver technologies (Bluetooth Legacy, Bluetooth Long Range, Wi-Fi NAN/Beacon).
3. We provide an outline for a hybrid approach combining both the idealized analysis and realistic geographies. Additionally, we point out future directions for studying Remote ID deployment strategies in terms of broadcast-receiver architectures and coverage.
## III Methods and Data
We present an idealized analysis of the Remote ID coverage problem in Section III-A, and derive expressions for expected coverage proportions given assumptions on the environment geometry and origin-destination (OD) pair generation. We then describe the setup and data used in our urban area simulation-based case studies in Section III-B.
### _Idealized analysis_
The setup for our idealized analysis of the Remote ID coverage problem is as follows: We define the _coverage area_ of the Remote ID receiver to be a disk \(\mathcal{D}_{c}\) with radius \(r_{c}>0\) centered at \((x_{0},y_{0})\) in \(\mathbb{R}^{2}\). We define the _environment_ containing possible OD pairs to be a circle \(\mathcal{B}_{e}\) with radius \(r_{e}\geq r_{c}>0\), and we assume that the environment is centered at the same coordinate \((x_{0},y_{0})\) as \(\mathcal{D}_{c}\). Future extensions of this analysis could generalize the centers of \(\mathcal{D}_{c}\) and \(\mathcal{B}_{e}\) such that they do not coincide.
To generate an OD pair, we select two points \(A,B\in\mathcal{B}_{e}\) in the environment at random (to be formalized later in this section). Note that \(A\) and \(B\) lie on the circle \(\mathcal{B}_{e}\), and can be described by the coordinates \((r_{e}\cos(\alpha),r_{e}\sin(\alpha))\) and \((r_{e}\cos(\beta),r_{e}\sin(\beta))\), respectively, for some angles \(\alpha,\beta\in[0,2\pi]\). We can describe the midpoint \(M\) of the straight-line drone trajectory between OD pairs \(A\) and \(B\), which _may or may not_ lie in \(\mathcal{D}_{c}\), by the coordinates
\[\left(\frac{r_{e}(\cos(\alpha)+\cos(\beta))}{2},\frac{r_{e}(\sin(\alpha)+\sin (\beta))}{2}\right). \tag{1}\]
Now, if \(M\) lies in the coverage area \(\mathcal{D}_{c}\setminus\partial\mathcal{D}_{c}\), we note that the straight-line trajectory intersects the boundary \(\partial\mathcal{D}_{c}\) of the coverage area at two locations \(D\) and \(C\). In this case, we denote by \(L\) the total length of the straight-line trajectory, by \(L_{p}\) the length of its portion inside \(\mathcal{D}_{c}\), and by \(\ell\) the distance between the midpoint and \((x_{0},y_{0})\) (i.e., the length of the line segment \(\overline{OM}\) in panel (_a_) of Figure 1). We have the following proposition:
**Proposition 1** (Coverage proportion).: _Let \(P\in(0,1]\) be the coverage proportion. If the straight-line trajectory intersects the boundary \(\partial\mathcal{D}_{c}\) of the coverage area in two locations, we have that_
\[P=\frac{L_{p}}{L}=\sqrt{\frac{r_{c}^{2}-\ell^{2}}{r_{e}^{2}-\ell^{2}}}. \tag{2}\]
Proof.: Following notation in Figure 1(_a_), the ratio \(L_{p}/L\) is the same as the ratio of the lengths of line segments \(\overline{MD}\) and \(\overline{MB}\) since \(M\) is the midpoint. This latter ratio is the same as the ratio of the areas of triangles \(\Delta_{OMD}\) and \(\Delta_{OMB}\) as the two triangles share the same height \(\ell\). The expression in (2) follows after noting that the areas of \(\Delta_{OMD}\) and \(\Delta_{OMB}\) are \(\ell\sqrt{r_{c}^{2}-\ell^{2}}/2\) and \(\ell\sqrt{r_{e}^{2}-\ell^{2}}/2\), respectively.
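As a quick numerical illustration (not part of the original derivation; function and variable names are our own, and NumPy is assumed to be available), the closed-form ratio in (2) can be checked against a brute-force discretization of a single chord:

```python
# Sketch: numerically checking Proposition 1. The receiver disk D_c and the
# environment circle B_e are centered at the origin; the chord is taken
# vertical with its midpoint at distance ell from the center.
import numpy as np

def coverage_closed_form(r_c, r_e, ell):
    """Coverage proportion from Eq. (2); assumes 0 <= ell < r_c <= r_e."""
    return np.sqrt((r_c**2 - ell**2) / (r_e**2 - ell**2))

def coverage_discretized(r_c, r_e, ell, n=200_001):
    """Sample n points along the chord and count those inside the coverage disk."""
    half_len = np.sqrt(r_e**2 - ell**2)      # half-length of the chord of B_e
    y = np.linspace(-half_len, half_len, n)  # points along the chord at x = ell
    return np.mean(np.hypot(ell, y) <= r_c)  # fraction covered by D_c

r_c, r_e, ell = 0.5, 1.0, 0.3
print(coverage_closed_form(r_c, r_e, ell))   # ~0.419
print(coverage_discretized(r_c, r_e, ell))   # agrees to ~3 decimal places
```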
We now return to the key word of _random_ when selecting OD pairs to generate the straight-line trajectory for which we are interested in the coverage proportion \(P\). Given the reliance of Proposition 1 on the midpoint \(M\), we expect that, if we were to sample random straight-line trajectories, the distribution of \(M\) in the coverage area will be important. However, generating this trajectory - which is precisely a chord of \(\mathcal{B}_{e}\) - is ambiguous: This is known as the Bertrand paradox [26]. For the purposes of our Remote ID coverage analysis, we are interested in two cases: (i) Uniformly distributed endpoints, and (ii) uniformly distributed midpoints.
#### Iii-A1 Case (i): Uniformly distributed endpoints
This case hinges on the following assumption:
**Assumption 1**.: _When randomly selecting a straight-line trajectory (chord), we proceed by selecting endpoints that are uniformly distributed along \(\mathcal{B}_{e}\)._
Recall from above our endpoints \(A,B\) with coordinates \((r_{e}\cos(\alpha),r_{e}\sin(\alpha))\) and \((r_{e}\cos(\beta),r_{e}\sin(\beta))\), respectively. Under Assumption 1, we note that this is equivalent to selecting angles \(\alpha,\beta\) uniformly, i.e., \(\alpha\) and \(\beta\) are drawn identically and independently from \(\mathrm{Unif}[0,2\pi]\). By symmetry, we note that we could set one of the angles to be fixed arbitrarily (see Figure 1(_b_) for intuition); without loss of generality, we set \(\alpha=0\). The coordinates for midpoint \(M\) can now be rewritten as \((r_{e}(1+\cos(\beta))/2,r_{e}\sin(\beta)/2)\).
In Figure 1(_b_) we visualize (via green-colored trajectories with yellow-colored portions covered by the Remote ID receiver) the intuition that the straight-line trajectory will only have a non-zero coverage proportion for certain ranges of \(\beta\). However, since \(\beta\) is a random variable, the squared distance \(\ell^{2}\) from the midpoint \(M\) to \((x_{0},y_{0})\) is a derived random variable, since
\[\ell^{2} =\left(\frac{r_{e}(1+\cos(\beta))}{2}\right)^{2}+\left(\frac{r_{ e}\sin(\beta)}{2}\right)^{2} \tag{3}\] \[=\frac{r_{e}^{2}(1+\cos(\beta))}{2}.\]
Denote by \(P_{\mathrm{UDE}}\) the coverage proportion under Assumption 1. We observe that \(P_{\mathrm{UDE}}\) is also a derived random variable, as it is a function of \(\ell^{2}\) from Proposition 1. Letting \(\mathbb{E}\left[P_{\mathrm{UDE}}\right]\) be the _expected_ coverage proportion under Assumption 1, we have the following.
**Proposition 2** (Case (i) expected coverage proportion).: _Define the constant (deterministic) ratio \(\rho=r_{c}/r_{e}\). We have that_

\[\mathbb{E}\left[P_{\mathrm{UDE}}\right]=\frac{1}{2\pi}\int_{\pi-2\arcsin(\rho)}^{\pi+2\arcsin(\rho)}\sqrt{\frac{r_{c}^{2}-r_{e}^{2}\gamma(b)}{r_{e}^{2}-r_{e}^{2}\gamma(b)}}\,db, \tag{4}\]

_with \(\gamma(b)=\frac{1+\cos(b)}{2}\)._

Figure 1: _(a) Remote ID receiver coverage area \(\mathcal{D}_{c}\) within environment \(\mathcal{B}_{e}\), with points of interest annotated; (b) Fixing point \(A\) with \(\alpha=0\)._
Proof.: We omit the full proof for brevity and give a sketch: As shown in Figure 1(\(b\)), the expectation of \(P\) is obtained by averaging the coverage proportion of Proposition 1 over the uniformly distributed angle \(\beta\), with \(\ell^{2}\) given by (3). The upper and lower limits of (4) delimit the interval of \(\beta\) for which the midpoint lies inside the coverage area (i.e., \(\ell<r_{c}\)), so that coverage occurs; outside this interval the coverage proportion is zero.
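As a sanity check (our own sketch, not from the original work; NumPy and SciPy are assumed), Eq. (4) can be compared against a direct Monte Carlo estimate that samples the endpoint angle \(\beta\) uniformly; the example below assumes \(r_{e}=1\) and \(r_{c}=0.5\):

```python
# Sketch: Monte Carlo estimate under Assumption 1 versus quadrature of Eq. (4).
import numpy as np
from scipy.integrate import quad

r_e, r_c = 1.0, 0.5
rho = r_c / r_e

# Monte Carlo: fix alpha = 0 by symmetry and draw beta ~ Unif[0, 2*pi].
rng = np.random.default_rng(0)
beta = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)
ell_sq = r_e**2 * (1.0 + np.cos(beta)) / 2.0                 # Eq. (3)
p = np.zeros_like(beta)
covered = ell_sq < r_c**2                                     # zero coverage otherwise
p[covered] = np.sqrt((r_c**2 - ell_sq[covered]) / (r_e**2 - ell_sq[covered]))
print(p.mean())                                               # ~0.134

# Quadrature of Eq. (4) over the interval of beta where coverage occurs.
gamma = lambda b: (1.0 + np.cos(b)) / 2.0
integrand = lambda b: np.sqrt((r_c**2 - r_e**2 * gamma(b)) /
                              (r_e**2 - r_e**2 * gamma(b)))
lo, hi = np.pi - 2.0 * np.arcsin(rho), np.pi + 2.0 * np.arcsin(rho)
print(quad(integrand, lo, hi)[0] / (2.0 * np.pi))             # ~0.134
```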
#### Iii-A2 Case (ii): Uniformly distributed midpoints
Similar to case (i), we begin by stating the following assumption:
**Assumption 2**.: _When randomly selecting a straight-line trajectory (chord), we proceed by selecting its midpoint such that it is uniformly distributed over the disk bounded by \(\mathcal{B}_{e}\)._
We note that, excluding the case of a circle's diameter, choosing the midpoint fixes a unique chord, i.e., straight-line trajectory. We can exclude the case where the midpoint falls precisely at \((x_{0},y_{0})\in\mathcal{D}_{c}\) as this happens with zero probability (measure zero). Denote by \(\ell\) the distance of the randomly chosen midpoint to \((x_{0},y_{0})\). We note that \(\ell\) is a random variable taking values in \([0,r_{e}]\), and can write down its cumulative distribution function \(F_{\ell}(\ell^{\star})\) explicitly:
\[F_{\ell}(\ell^{\star})=\Pr\left(\ell\leq\ell^{\star}\right)=\begin{cases} \frac{\pi(\ell^{\star})^{2}}{\pi r_{e}^{2}},&\ell^{\star}\in[0,r_{e}],\\ 0,&\text{otherwise}.\end{cases} \tag{5}\]
Accordingly, the probability density function \(f_{\ell}(\ell^{\star})\) is
\[f_{\ell}(\ell^{\star})=\frac{d}{d\ell^{\star}}F_{\ell}(\ell^{\star})=\begin{cases} \frac{2\ell^{\star}}{r_{e}^{2}},&\ell^{\star}\in[0,r_{e}],\\ 0,&\text{otherwise}.\end{cases} \tag{6}\]
Denote by \(P_{\mathrm{UDM}}\) the coverage proportion under Assumption 2. We observe that \(P_{\mathrm{UDM}}\) is a derived random variable, as it is a function of \(\ell\) from Proposition 1. Letting \(\mathbb{E}\left[P_{\mathrm{UDM}}\right]\) be the _expected_ coverage proportion under Assumption 2, we have the following.
**Proposition 3** (Case (ii) expected coverage proportion).: _We have that_
\[\mathbb{E}\left[P_{\mathrm{UDM}}\right]=\int_{0}^{r_{c}}\frac{2\ell}{r_{e}^{2}}\sqrt{\frac{r_{c}^{2}-\ell^{2}}{r_{e}^{2}-\ell^{2}}}\,d\ell, \tag{7}\]

_with \(r_{c}\in(0,r_{e})\); midpoints with \(\ell\geq r_{c}\) yield zero coverage and therefore do not contribute to the integral._
Proof.: We omit the full proof for brevity and give a sketch: Since \(P_{\mathrm{UDM}}\) is a function of the random variable \(\ell\) with density (6), its expectation can be obtained directly from the definition of the expectation of a function of a continuous random variable, noting that midpoints with \(\ell\geq r_{c}\) contribute zero coverage.
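Analogously (again our own sketch, assuming NumPy and SciPy), Eq. (7) can be verified by drawing the midpoint distance via inverse-transform sampling of (5), i.e., \(\ell=r_{e}\sqrt{u}\) with \(u\sim\mathrm{Unif}[0,1]\):

```python
# Sketch: Monte Carlo estimate under Assumption 2 versus quadrature of Eq. (7).
import numpy as np
from scipy.integrate import quad

r_e, r_c = 1.0, 0.5

# Inverse-transform sampling of the midpoint distance: F(l) = l^2 / r_e^2.
rng = np.random.default_rng(0)
ell = r_e * np.sqrt(rng.uniform(size=1_000_000))
p = np.zeros_like(ell)
covered = ell < r_c                                   # trajectories with coverage
p[covered] = np.sqrt((r_c**2 - ell[covered]**2) / (r_e**2 - ell[covered]**2))
print(p.mean())                                       # ~0.088

# Quadrature of Eq. (7); midpoints with ell >= r_c contribute zero coverage.
integrand = lambda l: (2.0 * l / r_e**2) * np.sqrt((r_c**2 - l**2) / (r_e**2 - l**2))
print(quad(integrand, 0.0, r_c)[0])                   # ~0.088
```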
The difference between the two expected coverage proportions in (4) and (7) is due to the difference in the midpoint distributions within the coverage area \(\mathcal{D}_{c}\). Under the assumption of uniformly distributed endpoints (Assumption 1), the resultant midpoint distribution is denser closer to \((x_{0},y_{0})\). By comparison, under Assumption 2 the midpoints are uniformly distributed, so their restriction to \(\mathcal{D}_{c}\) is uniform as well. We plot and confirm this graphically in Figure 2 with \(r_{e}=1\) and \(r_{c}=0.5\). Evaluating (4) and (7) numerically with \(r_{e}=1\) and \(r_{c}=0.5\) gives \(\mathbb{E}\left[P_{\mathrm{UDE}}\right]\approx 0.134\) (\(13.4\)%) and \(\mathbb{E}\left[P_{\mathrm{UDM}}\right]\approx 0.088\) (\(8.8\)%).
To summarize, we examined the Remote ID coverage problem under specific geometric assumptions of the coverage area and environment, given a Remote ID receiver with coverage radius \(r_{c}\). We showed how differences in random trajectory generation can give rise to two different expected coverage proportions \(\mathbb{E}\left[P_{\mathrm{UDE}}\right]\) and \(\mathbb{E}\left[P_{\mathrm{UDM}}\right]\). Without using large-scale simulations, Remote ID location planning could be approximated using scaled-up versions of the idealized setup, with an estimate of the expected coverage proportion based on, e.g., (7). In Section IV we verify our expressions for \(\mathbb{E}\left[P_{\mathrm{UDE}}\right]\) and \(\mathbb{E}\left[P_{\mathrm{UDM}}\right]\) via Monte Carlo simulations, and also examine the difference \(\Delta\mathbb{E}[P]=\mathbb{E}\left[P_{\mathrm{UDE}}\right]-\mathbb{E}\left[P_{\mathrm{UDM}}\right]\) in expected coverage proportions under Assumption 1 versus Assumption 2.
### _Urban area simulations_
To simulate how choices in Remote ID receiver technologies, geographies, customer and vendor distributions, as well as path planning algorithms may affect overall trajectory coverage in real-world UAM settings, we use publicly available data sets for San Francisco, California, US. Additionally, for future studies, we prepared customer and vendor data sets for New York City and Los Angeles as well: We chose these cities based on their popularity in recent literature examining various UAM applications such as drone package delivery operations [27, 28, 29, 30]. In addition, these major population centers within the US have different densities of potential customers, vendors, and building structures (e.g., building heights), creating an ideal environment for our urban area simulations.
For the simulation environment, we set the origin and destination pair (OD pair) for one-way UAS flights to be located at the geolocations of real-world stores and residents. We assume that Remote ID receivers can be deployed on top of buildings, and that the centers of city building footprints serve as possible geolocations for these receivers. We retrieve the geolocation data sets used for this study from OpenStreetMap [31]. Specific geolocation data are organized by MyGeoData
[32, 33] into _themed_ data sets. We chose to represent customer locations using MyGeoData's "Residential Land Use" theme; this theme provides general areas within a city that are predominantly occupied by single houses, grouped dwellings, apartments, flats, and units. We used MyGeoData's "Shopping Centers and Department Stores" themed data set, which provides building footprints and geolocation points of general stores, department stores, malls, supermarkets, and kiosks. Lastly, we chose MyGeoData's "Buildings" theme which provided the footprints and heights of individual and connected buildings. These building footprints and associated heights represent obstacles that must be avoided by the planned path of a drone. Additionally, these building footprints from the Building-theme data set serve as potential Remote ID receiver sites.

Figure 2: Midpoint distribution within \(\mathcal{D}_{c}\) for (a) Case (i) with uniformly distributed endpoints; (b) Case (ii) with uniformly distributed midpoints.
For the urban case study experiments, we select between different coverage radii depending on the Remote ID receiver technology. This is equivalent to setting the \(r_{c}\) parameter in the idealized analysis. Future work will involve the rigorous approximation of real-world geographies via the idealized analysis, e.g., exploring set partitioning of a city into simpler environments. In addition to varying the coverage radius of a given Remote ID receiver, we also vary the following for our experiments:
* _Path planning method._ We compare between two path planning approaches commonly used in UAS traffic management research. The first is simple straight-line path planning (Slpp) between the origin and the destination (used in, e.g., [35]). We also explore rapidly exploring random trees (RRT\({}^{*}\)) as a path planning algorithm [36], which have been used in the context of UAM in, e.g., [37].
* _Cruise altitude._ We explore two different cruising altitudes for UAS, at 200 feet and at 400 feet.
* _Coverage proportion._ This parameter is equivalent to \(P,P_{\mathrm{UDE}}\), and \(P_{\mathrm{UDM}}\) defined in the idealized analysis. For example, an average coverage proportion of 50% indicates that, on average, 50% of a given trajectory (with a fixed path planning algorithm) was covered by a Remote ID receiver.
Additionally, we can explore different city geographies, given the appropriate base maps (e.g., customer and vendor distributions). We note that the altitudes we use are below current altitude maximums [38]; however, at even lower altitudes, computation time becomes more significant, as more buildings and obstacles must be considered.
## IV Idealized Analysis Results
We first verify our expressions for the expected coverage proportions under Assumptions 1 and 2 using Monte Carlo simulations, where we randomly sample straight-line trajectories with OD pairs lying in the environment \(\mathcal{B}_{e}\). We then examine the numerical differences between the expected coverage proportions under Assumption 1 versus 2.
### _Monte Carlo-based verification_
For the Monte Carlo setup, we first fix the radius of the environment \(r_{e}\in\{0.1,1,1.5,2,2.5\}\). For each fixed environment radius \(r_{e}\), we sample \(10,000\) random straight-line trajectories per \(r_{c}\) coverage area radius, where \(r_{c}\in\{kr_{e}/5\}_{k=1}^{k=5}\). The sampling method for Case (i) utilizes uniformly distributed endpoints, whereas for Case (ii) we use uniformly distributed midpoints. We compute the empirical mean of the coverage proportions across \(10,000\) trials, as well as the standard deviation, and plot them in Figure 3, overlaid with the direct evaluations of (4) and (7). We observe a good match between our analytical expressions for the expected coverage proportions and the Monte Carlo results.
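For reference, a purely geometric version of this Monte Carlo check (our own sketch, independent of the closed-form Eq. (2); names and parameters are hypothetical) discretizes each sampled chord and measures the covered fraction directly:

```python
# Sketch: geometric Monte Carlo that does not rely on the closed-form Eq. (2).
import numpy as np

rng = np.random.default_rng(0)

def covered_fraction(a, b, r_c, n=2_000):
    """Fraction of the segment from point a to point b lying within the disk
    of radius r_c centered at the origin (approximated by discretization)."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = (1.0 - t) * a + t * b
    return np.mean(np.linalg.norm(pts, axis=1) <= r_c)

def sample_chord(r_e, scheme):
    if scheme == "endpoints":                 # Case (i): uniform endpoint angles
        ang = rng.uniform(0.0, 2.0 * np.pi, size=2)
        return r_e * np.stack([np.cos(ang), np.sin(ang)], axis=1)
    # Case (ii): uniform midpoint in the environment disk; the chord is
    # perpendicular to the radius through the midpoint.
    m = rng.uniform(0.0, 2.0 * np.pi)
    ell = r_e * np.sqrt(rng.uniform())
    mid = ell * np.array([np.cos(m), np.sin(m)])
    direction = np.array([-np.sin(m), np.cos(m)])
    half = np.sqrt(r_e**2 - ell**2)
    return np.stack([mid - half * direction, mid + half * direction])

r_e, r_c, trials = 1.0, 0.5, 10_000
for scheme in ("endpoints", "midpoints"):
    est = np.mean([covered_fraction(*sample_chord(r_e, scheme), r_c)
                   for _ in range(trials)])
    print(scheme, est)   # ~0.134 for endpoints, ~0.088 for midpoints
```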
### _Numerical differences in expected coverage proportions_
Recall the earlier discussion regarding the differences between Case (i) and (ii) in terms of the midpoint distributions within \(\mathcal{D}_{c}\). However, as \(r_{c}\) varies between \(0\) and \(r_{e}\), it is not clear what the numerical difference between Case (i) and (ii) is. We would like to better understand when, e.g., the expected coverage proportion for Case (i) is _greater_ than that for Case (ii), for the same \(r_{e}\) and \(r_{c}\) values. We first note that this difference does not appear to depend on \(r_{e}\) - hence, we fix \(r_{e}=1\) and vary \(\rho=r_{c}/r_{e}\in[0,1]\). We plot the difference \(\Delta\mathbb{E}[P]=\mathbb{E}\left[P_{\mathrm{UDE}}\right]-\mathbb{E}\left[P_{\mathrm{UDM}}\right]\) versus \(\rho\) in Figure 4.

Fig. 3: Comparisons between analytical expressions for the expected coverage proportions and Monte Carlo simulations, given \(r_{e}\) and \(r_{c}\), across \(10,000\) trials.
Interestingly, the expected coverage proportion for Case (i) is larger over the lower range of \(\rho\) values, while the expected coverage proportion for Case (ii) becomes larger past approximately \(\rho=0.79\) (recalling that \(r_{e}\) is fixed at \(1\)). We observe two points where the magnitude of the difference between the two cases is maximized: first at approximately \(\rho=0.5\) (i.e., the environment radius is twice as big as the coverage radius), and again at \(\rho\approx 0.97\) (i.e., when the coverage area is almost as large as the environment). At the first extremum (\(\rho\approx 0.5\)), generating straight-line trajectories assuming uniformly distributed endpoints produces larger coverage proportions in expectation. At the second extremum (\(\rho\approx 0.97\)), generation via uniformly distributed midpoints produces larger expected coverage proportions. In the trivial cases \(\rho=0\) (no coverage area) and \(\rho=1\) (coverage area matches the environment), the difference \(\Delta\mathbb{E}[P]\) between the two expressions is 0, as expected.
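The curve in Figure 4 can be reproduced numerically (a sketch under our own naming, assuming NumPy and SciPy) by evaluating both expectations on a grid of \(\rho\) values with \(r_{e}=1\):

```python
# Sketch: evaluate E[P_UDE] - E[P_UDM] as a function of rho = r_c / r_e.
import numpy as np
from scipy.integrate import quad

r_e = 1.0

def e_p_ude(r_c):
    rho = r_c / r_e
    g = lambda b: (1.0 + np.cos(b)) / 2.0
    f = lambda b: np.sqrt((r_c**2 - r_e**2 * g(b)) / (r_e**2 - r_e**2 * g(b)))
    lo, hi = np.pi - 2.0 * np.arcsin(rho), np.pi + 2.0 * np.arcsin(rho)
    return quad(f, lo, hi)[0] / (2.0 * np.pi)

def e_p_udm(r_c):
    f = lambda l: (2.0 * l / r_e**2) * np.sqrt((r_c**2 - l**2) / (r_e**2 - l**2))
    return quad(f, 0.0, r_c)[0]

# The sign of the difference flips near rho ~ 0.79, consistent with Figure 4.
for rho in np.linspace(0.05, 0.95, 19):
    print(f"rho={rho:.2f}  diff={e_p_ude(rho * r_e) - e_p_udm(rho * r_e):+.4f}")
```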
## V Urban Area Simulation-Based Case Studies
### _Data set description_
After obtaining labeled geolocation data sets of buildings, vendors, and customers via MyGeoData and OpenStreetMap, we simplified the layouts of the San Francisco occupancy maps, origin points, and destination points to enable path planning simulations. Expanding on the discussion in Section III-B, MyGeoData breaks down OpenStreetMap data into pre-defined themes (e.g., airports, banks, cafes), and extracts all theme-associated data from a pre-defined region of interest (ROI). For our case study, we choose the MyGeoData themes of residential land use, shopping centers and department stores, and buildings in order to simulate the customers, vendors, and buildings in San Francisco.
The residential data sets highlight land containing residential dwellings, providing polygon geolocation coordinates for these areas. Customer locations are determined by randomly selecting the necessary number of geolocation coordinates within all 2D regions labeled as residential land to serve as possible drone destination locations. The data sets containing shopping centers and department stores feature land and hub centers associated with individual stores and supermarkets. These data sets provide analogous polygon and point geolocation coordinates for these commercial locations. One vendor location is assigned to each point and polygon to represent possible drone origin locations. Finally, the building data sets provide geolocation coordinates for the footprints of all buildings within the pre-defined ROI. The buildings serve as obstacles for the path planning algorithms when constructing the trajectory for an OD pair consisting of customers and vendors. We provide visualizations of the building occupancy at different altitudes in Figure 5.
We visualize the geographies of customers and vendors for each city in Figure 6. As can be seen in Figure 6, vendor and customer sites located outside of a given city's ROI boundaries are disregarded in the simulation. This is to adhere to city-specific customer population densities, as well as to ensure that UAS origins and destinations remain within the pre-defined ROI. We retain all buildings and potential Remote ID receiver locations within a convex polygon encompassing the city; we do this to ensure accurate representation of obstacles and potential coverage centers across all possible trajectories. Finally, in terms of case study environmental statistics, we note that San Francisco has a population per km\({}^{2}\) of approximately 7,200 persons [39]. Within San Francisco, there are 0.5 store sites per km\({}^{2}\), with approximately 555 buildings per km\({}^{2}\) [31, 32].

Figure 4: Numerical difference between the expected coverage proportion from Case (i) versus Case (ii), plotted against \(\rho=r_{c}/r_{e}\).

Figure 5: Building occupancy maps for (\(a\)) San Francisco at \(0\) feet altitude and (\(b\)) San Francisco at \(200\) feet altitude. Building occupancy at additional altitudes (i.e., \(400\) feet) used in the experiment is not shown for brevity.
### _Case study results_
Prior to discussing the case study results, we provide visualizations of the case study setup in Figure 7. Using data related to vendors and customers, we randomly select OD pairs for path planning. We also generate a random distribution of Remote ID receivers within the ROI. The coverage radius \(r_{c}\) depends on the selected Remote ID receiver technology (e.g., Bluetooth, Wi-Fi), and coverage areas are visualized as red circles in Figure 7. The coverage proportion of a single trajectory is given by the length of covered (i.e., detected, the red-colored portions of the trajectories in Figure 7) trajectories divided by the total length of the trajectory. We continue sampling until we achieve convergence in terms of the average coverage proportion per scenario. Recall that a _scenario_ denotes a fixed altitude, a fixed Remote ID receiver technology, and iterating through a number of receivers (for Slpp). For RRT\({}^{*}\), due to its computational intensity compared to Slpp, the number of receivers is informed by the analogous scenario under Slpp. In addition, we only perform 1 trial of 200 randomly sampled trajectories for RRT\({}^{*}\); this is because we observe good convergence in terms of the coverage proportion, and it reduces the computation time required for conducting simulations involving RRT\({}^{*}\). Finally, we also examine the differences in average coverage proportions between the two path planning methods.
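To make the coverage computation concrete, the sketch below (our own simplified stand-in for the simulation code, with hypothetical names and toy coordinates) discretizes a planned path and measures the fraction of its length within range of any receiver:

```python
# Sketch: coverage proportion of one discretized trajectory given a set of
# Remote ID receivers (2D positions) and a single coverage radius r_c.
import numpy as np

def trajectory_coverage(waypoints, receivers, r_c, samples_per_leg=200):
    """waypoints: (k, 2) polyline from origin to destination;
    receivers: (m, 2) receiver locations; returns covered length / total length."""
    covered, total = 0.0, 0.0
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        t = np.linspace(0.0, 1.0, samples_per_leg)[:, None]
        pts = (1.0 - t) * a + t * b                          # points along this leg
        dists = np.linalg.norm(pts[:, None, :] - receivers[None, :, :], axis=-1)
        in_range = dists.min(axis=1) <= r_c                  # nearest-receiver test
        leg_len = np.linalg.norm(b - a)
        covered += leg_len * in_range.mean()
        total += leg_len
    return covered / total

# Toy example: a single straight OD leg with two receivers of 250 m range.
waypoints = np.array([[0.0, 0.0], [2000.0, 0.0]])
receivers = np.array([[500.0, 100.0], [1500.0, -50.0]])
print(trajectory_coverage(waypoints, receivers, r_c=250.0))
```

The same routine applies to RRT\({}^{*}\)-planned paths by simply passing the longer list of waypoints.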
We list results for Slpp in Table II. We present the number of Remote ID receivers needed to achieve three different desired coverage proportions, selecting between three different Remote ID receiver technologies (Bluetooth Legacy - R250; Bluetooth Long Range - R1000; Wi-Fi NAN/Beacon - R2000). Note that for Slpp, we do not factor in the altitude as we are interested only in the straight-line path for an OD pair - this is not the case for RRT\({}^{*}\). We observe that the required number of Remote ID receivers can vary drastically across different broadcast technologies, although the convergence in terms of average coverage proportions is good compared to the desired average coverage proportion.
Results for RRT\({}^{*}\) are listed in Table III. Recall that for RRT\({}^{*}\), we fix the number of Remote ID receivers used to achieve a specific average coverage proportion for Slpp, and evaluate the converged average coverage proportion when using RRT\({}^{*}\) as the path planning algorithm. We note some interesting differences, such as in the case between 200 versus 400 feet for the nominally 50% coverage scenario: Even though Slpp achieved approximately 50% coverage, using RRT\({}^{*}\) results in a higher coverage proportion at higher altitudes. We also note particularly drastic cases of higher coverage under RRT\({}^{*}\) for larger radii, as compared to Slpp.
Finally, we briefly remark on observed convergences between Slpp and RRT\({}^{*}\). In Figure 8, we plot 200 randomly sampled trajectories for two scenarios: R2000 (i.e., Wi-Fi NAN/Beacon) at 75% desired coverage proportion, 200 feet cruising altitude in panel (_a_), and R250 (i.e., Bluetooth Legacy) at 95% desired coverage proportion, 400 feet cruising altitude in panel (_b_). We observe that even after only 200 randomly generated trajectories, the running average of coverage proportions appear to stabilize. We note that in Figure 8(_a_), this was for one trial (out of 20) for Slpp - the average across all trials is what is reported in Table II as 75.5%.
### _Hybrid analysis outline_
Recall from the idealized analysis that, assuming specific geometries for the environment \(\mathcal{B}_{e}\) and coverage area \(\mathcal{D}_{c}\), along with how trajectories between origins and destinations are generated, we can explicitly compute the expected coverage proportion. Given the computational intensity discussed previously in this section with respect to simulating individual trajectories, particularly if more sophisticated path planning algorithms are assumed to be used, a reasonable "hybrid" approach may be to use geographic partitioning, approximations, and repeated applications of, e.g., (4) or (7). Using Figure 9 as a visual guide, the outline for this hybrid approach is as follows:
1. Select a desired \(r_{e}\) for each individual \(\mathcal{B}_{e}\) to be used in _packing_ the ROI. This selected \(r_{e}\) should be greater than or equal to \(r_{c}\) (fixed based on the Remote ID receiver technology of interest).
2. Using a packing heuristic (e.g., [40]), pack the ROI with \(K\) environments \(\mathcal{B}_{e}^{1},\ldots,\mathcal{B}_{e}^{K}\), each with its own coverage area \(\mathcal{D}_{c}^{1},\ldots,\mathcal{D}_{c}^{K}\). Note that \(\mathcal{B}_{e}^{i}\) and \(\mathcal{D}_{c}^{i}\) are centered at \(\left(x_{0}^{i},y_{0}^{i}\right)\) for \(i=1,\ldots,K\).
3. Given a trajectory from an origin to a destination, decompose the trajectory into \(\bigcup_{i}\tau_{i}\) where each \(\tau_{i}\) is associated with a specific environment (and coverage area) \(\{\mathcal{B}_{e}^{i},\mathcal{D}_{c}^{i}\}\).
4. Compute the average coverage proportion expected per trajectory segment \(\tau_{i}\) analytically via (4) or (7), and report the final coverage proportions averaged across all possible trajectories in the ROI, along with approximation errors \(\varepsilon\) as not all portions of the trajectory may be covered by an environment \(\mathcal{B}_{e}^{1},\ldots,\mathcal{B}_{e}^{K}\).
Figure 6: Locations of customers and vendors for San Francisco.
This approach of partitioning the trajectories per environment aligns most closely with the case of uniformly distributed endpoints, assuming that OD pairs are randomly generated with no consideration of \(\mathcal{B}_{e}^{i}\) boundaries. Future analysis will be needed to determine exactly how the random process of OD generation maps to randomly sampled points on \(\mathcal{B}_{e}^{i}\) boundaries. Additionally, an open question is how overlapping environments (as is shown in Figure 9) will impact Remote ID receiver coverage estimations.
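As a rough illustration of step 3 (our own sketch; the packing heuristic itself and the mapping of OD generation onto each \(\mathcal{B}_{e}^{i}\) are left open, as noted above), a sampled trajectory can be decomposed into per-environment portions and an uncovered remainder \(\varepsilon\):

```python
# Sketch: decompose a discretized trajectory into portions tau_i that fall inside
# each environment disk B_e^i, plus the fraction covered by no environment.
import numpy as np

def decompose_trajectory(points, env_centers, r_e):
    """points: (n, 2) samples along the trajectory; env_centers: (K, 2).
    Returns the fraction of points assigned to each environment (nearest center
    within r_e) and the unassigned fraction (approximation error epsilon)."""
    d = np.linalg.norm(points[:, None, :] - env_centers[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    inside = d.min(axis=1) <= r_e
    fractions = np.array([np.mean(inside & (nearest == i))
                          for i in range(len(env_centers))])
    return fractions, 1.0 - inside.mean()

# Toy example: straight path through two overlapping environments of radius 1 km.
t = np.linspace(0.0, 1.0, 1_000)[:, None]
points = (1.0 - t) * np.array([0.0, 0.0]) + t * np.array([3000.0, 0.0])
env_centers = np.array([[800.0, 0.0], [2200.0, 0.0]])
fractions, eps = decompose_trajectory(points, env_centers, r_e=1000.0)
print(fractions, eps)   # per-environment fractions tau_i / L and the remainder
```

The expected coverage of each portion \(\tau_{i}\) could then be approximated analytically via (4) or (7), as in step 4.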
## VI Concluding Remarks
Remote ID standards for UAM and AAM applications are safety-critical, and required for procedures such as counter-UAS operations, UAS traffic management (UTM), and low-altitude airspace management. In this work, we address the problem of trajectory coverage by Remote ID receivers in urban settings: Specifically, we assume a broadcast-receive architecture for Remote ID, and conduct (1) an idealized analysis of expected coverage proportions, as well as (2) a simulation-based case study with realistic urban geographies and path planning techniques. Under simplified geometries, we derived explicit equations for the expected coverage proportion, given the coverage radius of a Remote ID receiver. For the urban case studies, we explored the number of Remote ID receivers needed to achieve specific average coverage proportions in San Francisco. In designing the San Francisco case study, we considered different path planning assumptions, vendor and customer densities, Remote ID receiver distributions and broadcast technologies, as well as cruise altitude. The results and models from our idealized and simulation-based urban case study can be used by municipal authorities and AAM stakeholders for guidance when implementing future Remote ID systems for UAM and AAM applications.
### _Limitations and future work_
For our idealized analysis, we assumed specific geometries and overlaps between the coverage area and the environment. Furthermore, for both the idealized analysis and the urban case study, we made simplifying assumptions regarding drone dynamics as well as the reception capabilities of the Remote ID receiver (e.g., [19] focuses on communication bandwidth in Remote ID setups). Readily-available extensions include performing the coverage characterization using different city geographies, such as New York City and Los Angeles (see
Figure 10). Future work includes relaxing assumptions to generalize our idealized analysis (e.g., not requiring the Remote ID receiver to be centered at the environment's center), incorporating random dropout and communication errors, along with field testing to validate our results using real drones and physical Remote ID setups.

Fig. 7: Sample OD drone paths at 200 feet cruising altitude, with 2 km radius Remote ID receivers overlaid, for San Francisco with (_a_) Slpp and (_b_) RRT\({}^{*}\).
|
2303.01126 | Speaker-Aware Anti-Spoofing | We address speaker-aware anti-spoofing, where prior knowledge of the target
speaker is incorporated into a voice spoofing countermeasure (CM). In contrast
to the frequently used speaker-independent solutions, we train the CM in a
speaker-conditioned way. As a proof of concept, we consider speaker-aware
extension to the state-of-the-art AASIST (audio anti-spoofing using integrated
spectro-temporal graph attention networks) model. To this end, we consider two
alternative strategies to incorporate target speaker information at the frame
and utterance levels, respectively. The experimental results on a custom
protocol based on ASVspoof 2019 dataset indicates the efficiency of the speaker
information via enrollment: we obtain maximum relative improvements of 25.1%
and 11.6% in equal error rate (EER) and minimum tandem detection cost function
(t-DCF) over a speaker-independent baseline, respectively. | Xuechen Liu, Md Sahidullah, Kong Aik Lee, Tomi Kinnunen | 2023-03-02T10:14:59Z | http://arxiv.org/abs/2303.01126v2 | # Speaker-Aware Anti-spoofing
###### Abstract
We address _speaker-aware anti-spoofing_, where prior knowledge of the target speaker is incorporated into a voice spoofing countermeasure (CM). In contrast to the frequently used speaker-independent solutions, we train the CM in a speaker-conditioned way. As a proof of concept, we consider a speaker-aware extension to the state-of-the-art AASIST (audio anti-spoofing using integrated spectro-temporal graph attention networks) model. To this end, we consider two alternative strategies to incorporate target speaker information at the frame and utterance levels, respectively. The experimental results on a custom protocol based on the ASVspoof 2019 dataset indicate the effectiveness of incorporating speaker information via enrollment: we obtain maximum relative improvements of 25.1% and 11.6% in equal error rate (EER) and minimum tandem detection cost function (t-DCF) over a speaker-independent baseline, respectively.
Xuechen Liu\({}^{1,2}\), Md Sahidullah\({}^{2}\), Kong Aik Lee\({}^{3}\), Tomi Kinnunen\({}^{1}\)\({}^{1}\)School of Computing, University of Eastern Finland, Joensuu, Finland
\({}^{2}\)Universite de Lorraine, CNRS, Inria, LORIA, F-54000, Nancy, France
\({}^{3}\)Institute for Infocomm Research, A\({}^{*}\)STAR, Singapore [email protected], [email protected], [email protected], [email protected]
**Index Terms**: Speaker Verification, Speaker-Aware Anti-Spoofing, ASVspoof, Deepfake, Spoofing Countermeasures.
## 1 Introduction
Thanks to recent advances in _neural vocoding_ of raw speech waveforms [1], modern _text-to-speech_ (TTS) allows flexible generation of artificial speech that sounds like natural human speech [2, 3, 4]. Combined with parallel developments in speaker information extraction through _neural speaker embeddings_ [5, 6] used to condition waveform generation [7, 8], modern TTS makes it possible, in principle, to 'put words into anyone's mouth' in the voice of a targeted person.
Despite numerous useful applications, such flexibility raises obvious concerns. First, in the context of biometric authentication, the possibility for an adversary (attacker) to spoof _automatic speaker verification_ (ASV) by masquerading as another individual (target) is well known [9]. Second, the potential negative implications of _deepfakes_ -- a combination of 'deep learning' and 'fake' based on adversarial machine learning [10] -- have recently been brought to the attention of researchers [11] and the general public [12]. We have already seen alarming examples [13], even if _speech_-related deepfakes have received less attention compared to image- and video-based deepfakes. Deepfakes used for malicious purposes may not only damage the reputation of the targeted individuals but also reduce general trust in audio-visual media and biometric technology. To retain this trust, novel protective means are required.
On the positive side, the importance of being able to differentiate 'real' inputs from 'fake' inputs was proactively recognized early on -- way before the concepts of 'adversarial machine learning' or 'deepfakes' were introduced. In particular, the biometrics research community has studied various _anti-spoofing_ methods to protect biometric systems for more than two decades [14]. _Presentation attack detection_ (PAD) systems [15], also known as _countermeasures_ (CMs), refer to methods aimed at detecting spoofed inputs.
In this study, 'CM' refers to a classifier that takes speech input(s) and produces a binary bonafide/spoof prediction. Since 2015, the ASVspoof challenge series [16] has spearheaded benchmarking of speech CMs using common data and performance metrics [17]. Despite its title, the ASVspoof challenges focus on _standalone_, speaker-independent CMs that can be integrated with ASV systems or other applications. Thanks to the common data provided by the ASVspoof challenge and other similar recent initiatives [18, 19], several standalone speech CMs have been developed ranging from early statistical methods [20, 21] to recent deep architectures [22, 23].
Unfortunately, most existing CMs are far from perfect, particularly when faced with the unknown -- be it unseen vocoders, TTS systems, data domains, or codecs [24]. The unconstrained form of the standalone speaker-independent CM task, combined with artificial speech that is already difficult for listeners to differentiate from real speech, makes CM generalization beyond training data challenging. The quest for a fully general, speaker-independent CM implies that one has to compensate for the potential confounding effects due to speaker, content, and channel variation, with limited prior knowledge.
Even if not addressed in challenges like ASVspoof, in many applications we _do_ **have prior knowledge of the target person that could readily be utilized by the CM**: spoofing attacks are typically targeted against a particular individual -- the _same_ individual whose identity we seek to verify and whom the ASV system already 'knows' based on enrollment data collected earlier. Concerning deepfakes targeted against public figures such as politicians and news anchors, it seems equally safe to assume that we know _who_ the intended target in a potential deepfake sample is. For these reasons, it seems very reasonable to inform the CM at test time of the identity of the hypothesized speaker based on the enrollment sample. To this end, we present an initial investigation on the use of target speaker information for anti-spoofing that we dub **speaker-aware anti-spoofing**.
Our study is not the first one to explore this general idea. The two prior studies [25, 26] that the authors are aware of focus either on replay attack detection with Gaussian mixture model (GMM) backend [25] or on improving the back-end of the ASV system [26]. Our work differs substantially from both studies in terms of CM solutions (statistical model [25] vs. deep learning), the type of fake data (replay attacks [25] vs. synthetic media), the experimental setup, and the evaluation in terms of protocol design and metrics. The main novelty of our work is to compare different alternative ways of integrating target speaker information into the state-of-the-art AASIST model [23]. To be
specific, this information is presented using deep speaker embedding and integrated into different parts of AASIST as illustrated in Fig. 1 and detailed in the next section.
## 2 Speaker-Aware Training of Countermeasure
### Problem Definition
We first define the problem of _speaker-aware anti-spoofing_, as a **binary classification task of discriminating between bonafide and spoofed speech conditioned on the enrolled speaker**. More specifically, it is a conditional hypothesis test defined as follows:
* \(H_{0}\): Test sample is bonafide and corresponds to the target speaker.
* \(H_{1}\): Test sample is spoofed and corresponds to the target speaker.
In practice, we address this task by incorporating additional bona fide utterances of the target speaker, as detailed next. It is worth noting that both hypotheses are conditioned on the target speaker, which makes them different from the conventional definition of anti-spoofing.
### Speaker-aware anti-spoofing
The proposed speaker-aware CM is illustrated in Fig. 1. The speaker information can be represented in various ways, from raw audio to well-established deep speaker embeddings. In this preliminary study, we focus on the latter. We feed the enrollment audio data into a pre-trained ASV model (here, ECAPA-TDNN [27]). For each enrollment speaker, we extract the corresponding speaker embeddings and average them to get one enrollment embedding: \(\varphi_{\text{enrol}}=\frac{1}{N}\sum_{i=1}^{N}\varphi_{i}\), where \(N\) is the number of utterances available for the enrollment.
We consider the recent AASIST [28] for this study. It consists of a speech encoder based on RawNet2 [29]; two heterogeneous graph attention layers operating respectively on the spectral and temporal axes; and a graph pooling layer. The pooling layer is followed by node stacking and a fully connected (FC) layer for binary decision-making. As illustrated by the blue lines in Fig. 1, we propose to integrate the enrollment embedding \(\varphi_{\text{enrol}}\) into the training by regarding it as auxiliary conditioning information. The proposed methods, along with their shorthand names, are presented in the following.
**Integration at the encoder output**: Firstly, we focus on the output of the encoder, which is a 3-dimensional feature map with channel, spectral, and temporal axes. Let us denote the shape of the map as \((d_{\text{c}},d_{\text{s}},d_{\text{t}})\). Inspired by earlier works on channel-wise extension [30, 31, 32] and speaker adaptation on spectral features [33], we expand our embedding vector into \(\mathbf{F}_{\text{em}}\) and attach it at either the channel level or the spectral level, as illustrated in Fig. 2. The \(\mathbf{F}_{\text{em}}\) in Fig. 1 can thus be either \(\mathbf{F}_{\text{chan}}\) or \(\mathbf{F}_{\text{spec}}\), where \(\mathbf{F}_{\text{chan}}\) is of shape \((d_{\text{embed}},d_{\text{s}},d_{\text{t}})\) and \(\mathbf{F}_{\text{spec}}\) is of shape \((d_{\text{c}},d_{\text{embed}},d_{\text{t}})\). Here, \(d_{\text{embed}}\) is the dimension of the enrollment vector. These two methods of attaching along the channel or spectral axis are denoted as _enc-chan_ and _enc-spec_, respectively.
**Integration at the encoder output with dimensionality reduction**: Since the dimensionality of the embedding vector differs from that of the original feature map (\(d_{\text{embed}}\) vs. \(d_{\text{c}}\) or \(d_{\text{s}}\)), one of the two may have a disproportionate impact on model predictions. Therefore, as an alternative, we consider including a transformation matrix \(\mathbf{P}_{\text{trans}}\) for dimensionality reduction, as shown in Fig. 1. \(\mathbf{P}_{\text{trans}}\) can be either \(\mathbf{P}_{\text{chan}}\) or \(\mathbf{P}_{\text{spec}}\), corresponding to channel-level and spectral-level attachment, as illustrated in Fig. 2. In the case where the enrollment vector is first reduced to \(d_{\text{c}}\) or \(d_{\text{s}}\), \(\mathbf{P}_{\text{trans}}\) is initialized from a normal distribution and jointly optimized along with the other learnable components in AASIST, with shape \((d_{\text{embed}},d_{\text{c}})\) (for \(\mathbf{P}_{\text{chan}}\)) or \((d_{\text{embed}},d_{\text{s}})\) (for \(\mathbf{P}_{\text{spec}}\)), respectively; in the case where dimensionality reduction is not carried out, \(\mathbf{P}_{\text{trans}}\) is an identity matrix of shape \((d_{\text{embed}},d_{\text{embed}})\). We denote the resulting feature maps by adding the suffix -_reduced_, so the corresponding methods are _enc-chan-reduced_ and _enc-spec-reduced_, respectively.
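To illustrate the tensor shapes involved, the following is a minimal sketch (our own, not the authors' implementation; names, the batch setup, and the exact way the reduced vector is concatenated are assumptions) of the _enc-chan_ / _enc-spec_ attachment in PyTorch, using the feature map shape \((64,23,29)\) and \(d_{\text{embed}}=192\) reported later in the experimental setup:

```python
# Sketch of attaching a repeated enrollment embedding to the encoder output.
import torch

B, d_c, d_s, d_t, d_embed = 4, 64, 23, 29, 192
feat = torch.randn(B, d_c, d_s, d_t)            # encoder output feature map
phi = torch.randn(B, d_embed)                   # enrollment embedding per trial

def attach(feat, phi, axis, proj=None):
    """axis='chan' concatenates along channels, axis='spec' along the spectral
    axis; proj is an optional (d_embed x d_target) reduction matrix P_trans."""
    v = phi if proj is None else phi @ proj     # optional dimensionality reduction
    if axis == "chan":                          # F_chan: repeat over (d_s, d_t)
        block = v[:, :, None, None].expand(-1, -1, feat.shape[2], feat.shape[3])
        return torch.cat([feat, block], dim=1)
    # F_spec: repeat over the channel and temporal axes, then attach spectrally.
    block = v[:, None, :, None].expand(-1, feat.shape[1], -1, feat.shape[3])
    return torch.cat([feat, block], dim=2)

print(attach(feat, phi, "chan").shape)          # (4, 64 + 192, 23, 29)
print(attach(feat, phi, "spec").shape)          # (4, 64, 23 + 192, 29)

P_spec = torch.nn.Parameter(torch.randn(d_embed, d_s))   # learned reduction
print(attach(feat, phi, "spec", P_spec).shape)            # (4, 64, 23 + 23, 29)
```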
**Integration at the FC layer input**: Up to this point, we have described the use of the enrollment embedding at the early layers of AASIST. As an alternative strategy, we also consider integration before the fully-connected layer, as illustrated in Fig. 1. The input of the FC layer before the decision-making is an utterance-level 160-dimensional vector, denoted as \(\varphi_{\text{test}}\). It has been extracted in earlier works [34] for joint optimization with ASV systems. Here we simply append \(\varphi_{\text{enrol}}\) to \(\varphi_{\text{test}}\), with the
input dimension of the FC layer then being \(d_{\text{embed}}+160\). We denote this means of attachment as _utterance_ when presenting the results in Section 4.

Figure 1: Illustration of speaker information integration via the enrollment vectors. Blue lines and red lines correspond to different approaches which are not applied simultaneously. Dashed lines represent the auxiliary attachment operation. Best viewed in color.

Figure 2: Illustration of the transformation of the speaker enrollment vector into channel-wise or spectral-wise attachable components. \(\mathbf{F}_{\text{em}}\) in Fig. 1 can be either \(\mathbf{F}_{\text{chan}}\) or \(\mathbf{F}_{\text{spec}}\). The transformation matrix \(\mathbf{P}_{\text{trans}}\) can respectively be either \(\mathbf{P}_{\text{chan}}\) or \(\mathbf{P}_{\text{spec}}\). \(Rep(.)\) denotes the repeating operation over the other two dimensions. Best viewed in color.
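Returning to the _utterance_ variant described above, the concatenation before the FC layer amounts to the following minimal sketch (our own, with hypothetical names; a sketch rather than the authors' code):

```python
# Sketch of the utterance-level integration before the final FC layer.
import torch
import torch.nn as nn

d_embed, d_cm = 192, 160
fc = nn.Linear(d_cm + d_embed, 2)               # bonafide / spoof logits

phi_test = torch.randn(8, d_cm)                 # utterance-level CM vectors
phi_enrol = torch.randn(8, d_embed)             # averaged enrollment embeddings
logits = fc(torch.cat([phi_test, phi_enrol], dim=-1))
print(logits.shape)                             # (8, 2)
```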
## 3 Experimental Setup
**Data**. We conduct experiments for this study based on ASVspoof 2019 LA dataset, which covers 19 types of spoofing attacks. The training data for the AASIST CM model consists of 20 speakers and covers 6 attacks (A01-A06), along with the bonafide condition. The CM evaluation data contains additionally 13 types of attacks (A07-A19). Readers are referred to [35] for more details.
**Protocol**. Recall that the assumption in the proposed speaker-aware anti-spoofing approach is that **the test utterance originates from a known target speaker, and the task is to determine whether the sample is a bonafide or a spoofed utterance**. Therefore, for each test utterance, its associated enrollment speaker embedding \(\varphi_{\text{enrol}}\) is from the same target speaker. In this case, our CM evaluation protocol is based on the original ASVspoof 2019 CM protocol ("Original" in Table 1), with test utterances that have no corresponding target speaker in the dataset removed. The speaker information of the audio samples in the evaluation protocol comes from the ASV protocol of the original metadata, but is used only as a reference to create the protocol; the removed audio samples are also selected with reference to the ASV protocol. The protocol statistics are presented in Table 1. We refer to this protocol setup as _Main_, which differs from the _Ablation_ setup described in Section 4.2.
**Model.** For the AASIST model, we adopt the solution from the open-sourced repository as the speaker-independent baseline1. Model training follows the setup of [28], except for the batch size that was reduced from 24 down to 12 due to limited computational resources. The original shape of the feature map is \((d_{c},d_{s},d_{t})=(64,23,29)\), as shown in Fig. 1. For the pre-trained ASV model, we use ECAPA-TDNN [27], which is a state-of-the-art level DNN speaker embedding extractor. We use the open-sourced pre-trained model2 to extract the speaker embedding with \(d_{\text{embed}}=192\) from the first fully-connected layer after the pooling layer for each input sentence.
Footnote 1: [https://github.com/clovai/aasist](https://github.com/clovai/aasist)
Footnote 2: [https://github.com/TaoRuijie/ECAPA-TDNN/](https://github.com/TaoRuijie/ECAPA-TDNN/)
**Evaluation.** We report _equal error rate_ (EER) and minimum _tandem detection cost function_ (t-DCF) [17]. Since EER reflects the standalone CM performance more directly than minimum t-DCF does [17, 28], we present our analysis of the results primarily in terms of EER, including the per-attack-type analysis.
## 4 Results and Analysis
### Results
Results in terms of pooled EER and minimum tDCF are presented in Table 2. While channel-wise speaker integration without dimensionality reduction only marginally improves the EER, the spectral-wise integration works nicely by achieving the lowest numbers in both metrics, outperforming the baseline by a relative 25.1% and 11.6% in terms of EER and minimum tDCF, respectively. This indicates the effectiveness of integrating the target speaker information along the spectral axis of the feature map. The comparatively weaker channel-wise integration, in turn, might be explained by noting that the original audio is single-channel and contains a rather low level of noise. Applying dimensionality reduction degrades the CM performance for both methods. Attaching the enrollment vector to the bottleneck embedding before the FC layer does not lead to improvements.
A detailed breakdown of the results per attack is shown in Table 3 for the baseline and the best speaker-aware anti-spoofing approach (per protocol). In addition to the CM results shown in the first four lines, the table also displays the EERs of the ASV system on the full CM protocol. These numbers serve to indicate the effectiveness of each attack in spoofing the ASV system (but should not be compared with the CM results). As reflected by the ASV EERs, some attacks, such as A09, A17, and A18, do not spoof the ASV model well, which means that those algorithms do not model the speaker information well.
Moving back to the CM performance, there are six types of attacks (A08, A09, A16, A17, A18, A19) where the best-performing proposed method outperforms (or reaches similar performance to) the baseline under both protocols. The ASV EER for five of them is relatively low (lower than 20%), the exception being A16. The low-ASV-EER cases indicate that the proposed speaker information integration further exploits the weakness of spoofing algorithms that fail to encode the target speakers well. Improvements can also be observed on A16, which corresponds to the highest ASV EER among all spoofing algorithms; this may reflect the ability of the proposed method to compensate for strong attacks that model speaker information well. Future work may further explore the relationship between the speaker information modeling ability of the spoofing algorithms and its compensation by the CM via such integration.

\begin{table}
\begin{tabular}{|c|c|c|} \hline Setup & dev & eval \\ \hline Original [35] & 24844 & 71237 \\ \hline _Main \& Ablation_ (Ours) & 23780 & 69252 \\ \hline \end{tabular}
\end{table}
Table 1: Number of dev/eval trials available for the full CM and customized protocols.

Figure 3: The conceptual illustration of the setups for (a) conventional speaker-independent anti-spoofing, (b) speaker-aware anti-spoofing (Main), (c) ablation study setup (Ablation). \(\varphi_{\text{sent}}\) represents an enrollment embedding from the same speaker as the input audio \(\mathbf{X}_{\text{sent}}\). \(\varphi_{\text{nour}}\) represents one from a different speaker than \(\mathbf{X}_{\text{sent}}\). Whereas Main complies with the assumption of a known target speaker, Ablation is used for assessing the impact of violating this assumption on CM performance.
### Ablation study: Mis-specified speaker identity
The evaluation setup and results described above are based on the assumption that the input audio comes from the target speaker. A natural question that arises is _what might happen if this assumption is violated?_ - i.e., how robust the CM is to modeling mis-specification in terms of mismatched speaker identities across the enrollment and test utterances. To this end, in this ablation study, we assume **that the bonafide input audio is not from its corresponding speaker**. In this case, we retain the exact same set of test utterances as in the main protocol, but replace the corresponding enrollment utterance with a randomly selected enrollment utterance from another randomly selected speaker.
The overall results for the proposed methods under this setup are shown in Table 2. For most proposed methods, compared to the _Main_ setup, the results in both metrics are degraded, but not by a large margin. The EER of the best-performing _enc-spec_ degrades by a relative 25.1%, but still matches the accuracy of the speaker-independent baseline, even under this severe modeling mis-specification (a strong violation of the modeling assumption). For _enc-chan-reduced_ and _utterance_, the results remain at about the same level. The per-attack results for _enc-spec_ under this setup are shown in Table 3. For the six types of attacks where improvements are observed under the _Main_ setup, _enc-spec_ retains its superiority over the baseline, although with marginal performance degradation from _Main_ except on A09 and A19. While such degradation indicates the usefulness of target speaker information compared to that from another speaker, the potential of such _non-target_ speaker information still deserves further investigation and extension to other scenarios.
### Ablation study: Additional bonafide training data
An enrollment vector is not only a speaker representation but also an additional container of bonafide information. Both speaker and bonafide information can be useful as prior conditions for training CM systems. Therefore, we consider an experiment on the effect of additional bonafide training data.
We implement the addition under the full CM protocol by pooling additional speech data from various datasets. We consider the VoxCeleb [36] and LibriSpeech [37] corpora. For each dataset, we vary the number of utterances added for CM training. The results are shown in Fig. 4, along with the baseline and the best-performing speaker-aware CM. The figure reveals two interesting patterns. First, larger amounts of additional bonafide-only data from either VoxCeleb1 or LibriSpeech improve performance. Second, only one case outperforms the baseline, namely when 25k additional audio samples from LibriSpeech are added. The amount of data applied there is almost equal to the total amount of CM training data (25380 utterances [35]). We may thus need a huge amount of specific or crafted data for bonafide-only information alone to outperform the baseline. This suggests a more significant benefit provided by additional speaker information, but this might also be because the ASVspoof dataset originates from VCTK\({}^{3}\), which is a very clean dataset recorded in an anechoic room. Future work may investigate this issue.
## 5 Conclusion
We have investigated the feasibility of **speaker-aware anti-spoofing** using the state-of-the-art AASIST countermeasure for synthetic spoofing attack detection. Our findings indicate that integrating the target speaker enrollment embedding as auxiliary information leads to a maximum of 25.1% relative improvement in anti-spoofing EER. Additional experiments on the effect of alternative speaker information and on augmenting the bonafide training data using auxiliary corpora suggest that the proposed speaker-aware training strategy can be more effective than simply adding bonafide training data. Confirming similar findings made in two earlier studies using completely different classifiers and datasets [25, 26], this study adds evidence of the positive impact of target speaker prior information. Future work may focus on Siamese networks to encode speaker information and make it available during the training of the CM module, along with more advanced cohort models to encode the speaker information.
Figure 4: The relationship between the additional data from different common speech processing datasets and the CM performance under Main. The green dashed line indicates the baseline performance and the pink one indicates the best-performed system. Best viewed in color.
\begin{table}
\begin{tabular}{|c|c c c c c c c c c c c c c c|} \hline & Method & A07 & A08 & A09 & A10 & A11 & A12 & A13 & A14 & A15 & A16 & A17 & A18 & A19 \\ \hline \hline \multirow{2}{*}{_Main_} & (Baseline) & 0.75 & 0.19 & 0.02 & 0.88 & 0.37 & 0.72 & 0.14 & 0.15 & 0.47 & 0.73 & 2.15 & 4.80 & 0.78 \\ \cline{2-13} & _enc-spec_ & 1.18 & **0.07** & **0.00** & 1.38 & 0.41 & 0.98 & 0.22 & 0.28 & 0.98 & **0.65** & **1.28** & **2.70** & **0.34** \\ \hline \hline _Ablation_ & _enc-spec_ & 1.57 & **0.08** & **0.00** & 1.95 & 0.47 & 1.26 & 0.28 & 0.35 & 1.30 & **0.71** & **1.79** & **3.13** & **0.30** \\ \hline \hline \multicolumn{2}{|c|}{ASV EER [34]} & 32.66 & 18.80 & 2.20 & 50.61 & 47.08 & 39.56 & 11.62 & 35.39 & 36.54 & 60.71 & 1.85 & 2.38 & 4.77 \\ \hline \end{tabular}
\end{table}
Table 3: Results in terms of per-attack-type EER(%) for the baselines and best-performing systems. Spoofing attacks in bold font indicate the acquisition of speaker information during development, according to [35]. The ASV EER is returned by the same pre-trained ECAPA-TDNN model as used in this study, as described in [34]. |
2303.01640 | Hierarchical Graph Neural Networks for Particle Track Reconstruction | We introduce a novel variant of GNN for particle tracking called Hierarchical
Graph Neural Network (HGNN). The architecture creates a set of higher-level
representations which correspond to tracks and assigns spacepoints to these
tracks, allowing disconnected spacepoints to be assigned to the same track, as
well as multiple tracks to share the same spacepoint. We propose a novel
learnable pooling algorithm called GMPool to generate these higher-level
representations called "super-nodes", as well as a new loss function designed
for tracking problems and HGNN specifically. On a standard tracking problem, we
show that, compared with previous ML-based tracking algorithms, the HGNN has
better tracking efficiency performance, better robustness against inefficient
input graphs, and better convergence compared with traditional GNNs. | Ryan Liu, Paolo Calafiura, Steven Farrell, Xiangyang Ju, Daniel Thomas Murnane, Tuan Minh Pham | 2023-03-03T00:14:32Z | http://arxiv.org/abs/2303.01640v1 | # Hierarchical Graph Neural Networks for Particle Track Reconstruction
###### Abstract
We introduce a novel variant of GNN for particle tracking--called Hierarchical Graph Neural Network (HGNN). The architecture creates a set of higher-level representations which correspond to tracks and assigns spacepoints to these tracks, allowing disconnected spacepoints to be assigned to the same track, as well as multiple tracks to share the same spacepoint. We propose a novel learnable pooling algorithm called GMPool to generate these higher-level representations called "super-nodes", as well as a new loss function designed for tracking problems and HGNN specifically. On a standard tracking problem, we show that, compared with previous ML-based tracking algorithms, the HGNN has better tracking efficiency performance, better robustness against inefficient input graphs, and better convergence compared with traditional GNNs.
## 1 Introduction
In the upcoming High Luminosity Phase of the Large Hadron Collider (HL-LHC) [1, 2], the average number of inelastic proton-proton collisions per bunch \(\langle\mu\rangle\) (pile-up) is expected to reach 200 in the new silicon-only Inner Tracker (ITk). This will pose a significant challenge in track reconstruction due to the limited computational resources [3]. Since charged particle reconstruction ("particle tracking") dominates the CPU resources dedicated to event offline reconstruction, a new and efficient algorithm for event reconstruction becomes an urgent need. The HEP.TrkX project [4] and its successor the Exa.TrkX project [5] have studied Graph Neural Networks (GNNs) for charged particle tracking, and excellent performance on the TrackML dataset [6] has been demonstrated in Refs. [7, 8] and more recently on ITk simulation, referred to as GNN4ITk [9].
However, despite the success of GNN-based tracking algorithms, there is much in these techniques that can be improved. In particular, GNN tracking suffers from two types of errors: (1) **broken tracks** (one true track split into multiple segments) and (2) **merged tracks** (a track contains spacepoints of multiple particles). In its nature, the GNN4ITk tracking pipeline prototype [9] is a process of reducing the number of edges; starting from a graph constructed for example by a multi-layer perceptron (MLP) embedding model, filter MLP and GNN edge classifiers are applied to filter out fake edges (i.e. connecting two spacepoints of distinct particles). Thus, broken tracks are more difficult to remove than merged tracks since they can only be resolved by including more edges during the graph construction stage. As such, the pipeline is very sensitive to the efficiency of the graph constructed. Furthermore, the nature of
message-passing neural networks [10] utilized in the GNN4ITk pipeline precludes the passing of information between disconnected components, such as the two ends of a broken track. Broken tracks not only limit the performance of edge-cut-based algorithms but also inhibit the full capability of the message-passing mechanism.
In this paper, we present a novel machine learning model called the Hierarchical Graph Neural Network (HGNN) 1 for particle tracking to address the aforementioned problems. Similar to the pooling operation often used in Convolutional Neural Networks (CNNs), the HGNN pools nodes into clusters called "super-nodes" to enlarge the "receptive field" of nodes and thus resolve the problem that a "flat" GNN cannot pass messages between disconnected components. Unlike the case of image processing, where pooled pixels are already arranged on a 2D grid, the pooled super-nodes cannot use a graph induced by the original graph since disconnected components would remain disconnected. Thus we propose to utilize a K-nearest-neighbors (KNN) algorithm to build the super-graph among super-nodes and facilitate message passing between them. Furthermore, the HGNN offers a new approach to track building, by defining a bipartite matching between nodes (spacepoints) and super-nodes (tracks). We measure the performance of this matching procedure against several baselines and show that it can not only recover broken tracks but also produce fewer fake tracks from merging.
Footnote 1: The code is now available on GitHub
## 2 Related Work
### The GNN4ITk Pipeline for Charged Particle Tracking
The GNN4ITk pipeline [8, 9] aims to accelerate particle tracking by utilizing geometric deep learning models. The pipeline as implemented can be divided into four steps: first, graph construction takes place to build a graph on the input point-cloud. With one possible construction technique, an MLP is trained to embed spacepoints into a high-dimensional space such that spacepoints belonging to the same particle get closer in that space; a fixed-radius graph is then built and passed to a "filter" MLP. The filter takes in spacepoint doublets and prunes the graph down by an \(O(10)\) factor in the number of edges. A graph neural network is used to prune the graph further. Finally, the tracks are built by running a connected components algorithm on the pruned graphs, and ambiguities are resolved by a walk-through algorithm based on topological sorting.
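To make the staged edge-reduction flow concrete, the sketch below strings the four steps together. It is only an illustration: `embed_mlp`, `filter_mlp`, and `edge_gnn` are placeholders for the trained models, and the radius and score cuts are arbitrary example values rather than those used in GNN4ITk.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def build_radius_graph(embeddings, radius):
    """Fixed-radius neighbor graph in the learned embedding space."""
    pairs = cKDTree(embeddings).query_pairs(r=radius, output_type='ndarray')
    return pairs  # (n_edges, 2) array of spacepoint index pairs

def run_pipeline(spacepoints, embed_mlp, filter_mlp, edge_gnn,
                 radius=0.1, filter_cut=0.5, gnn_cut=0.5):
    """Staged edge reduction: embed -> radius graph -> filter MLP -> GNN -> CC."""
    emb = embed_mlp(spacepoints)                                  # (N, d) embeddings
    edges = build_radius_graph(emb, radius)                       # candidate doublets
    edges = edges[filter_mlp(spacepoints, edges) > filter_cut]    # cheap doublet pruning
    edges = edges[edge_gnn(spacepoints, edges) > gnn_cut]         # GNN edge scores
    n = len(spacepoints)
    adj = coo_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])), shape=(n, n))
    _, labels = connected_components(adj, directed=False)         # track-candidate labels
    return labels
```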
### Graph Pooling Algorithms
As discussed in section 1, the pooling algorithm is a crucial piece of the HGNN architecture. Graph pooling has long been studied in the context of graph neural networks as generating graph
Figure 1: The HGNN can not only shorten the distance between two nodes and effectively enlarge the receptive field but also pass messages between disconnected components
representations require some global pooling operation. Ying _et al._ introduced DiffPool [11], which pools the graph by aggregating nodes according to weights generated by a GNN. DiffPool pools the graph to a fixed number of super-nodes, and the pooled graph has a dense adjacency matrix. Lee _et al._ proposed SAGPool [12], which pools a graph by selecting top-k rank nodes and uses the subgraph induced. However, SAGPool does not support soft assignment, i.e. assigning a node to multiple super-nodes. The granularity is completely defined by the hyperparameter \(k\) and thus also pools to a fixed number of super-nodes. Diehl proposed EdgePool [13], which greedily merges nodes according to edge scores. It is capable of generating a graph that is sparse and variable in size. These pooling algorithms and their features are presented in table 1, along with our proposed pooling technique, described in section 3.1.
### Hierarchical Graph Neural Networks
Hierarchical structures of graph neural networks have been studied in the context of many graph learning problems; some of them utilize deterministic pooling algorithms or take advantage of preexisting structures to efficiently create the hierarchy [14, 15, 16, 17, 18], while the others [19, 20, 21] create the hierarchy in a learnable fashion. Compared with solely graph pooling operations [11], by retaining both pooled and original representations one has the capability of simultaneously performing node predictions and learning cluster-level information. Furthermore, as shown in [20], introducing hierarchical structures can solve the long-existing problem of the incapability of capturing long-range interactions in graphs. Empirical results also show that Hierarchical GNNs have better convergence and training stability compared with traditional flat GNNs.
## 3 Model Architecture
In order to build the model, there are several challenges that must be tackled, namely, pooling the graph, message passing in the hierarchical graph, and designing a loss function for such a model. In the following section, we introduce our proposed methods for each of them.
### Gaussian Mixture Pooling
In order to provide the features in table 1, we propose a method that leverages the connected components algorithm and Gaussian Mixture Model. The algorithm takes a set of node embeddings as input. The embeddings are then used to calculate edge-wise similarities defined as \(s_{ij}=\tanh^{-1}(\vec{v}_{i}\cdot\vec{v}_{j})\). We hypothesize that the graph consists of two types of edges, in-cluster edges and out-of-cluster edges. Then, given the distribution of node similarities, we fit a Gaussian Mixture Model (GMM) to obtain the estimation of the in-cluster and out-of-cluster distributions \(p_{in}(s)\) and \(p_{out}(s)\). An example distribution is plotted in fig. 3b. We then solve for \(s_{cut}\) by \(\ln(p_{in}(s_{cut}))-\ln(p_{out}(s_{cut}))=r\), where \(r\) is a hyperparameter defining the resolution
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline Tracking Goal & Feature & DiffPool & SAGPool & EdgePool & GMPool (ours) \\ \hline Subquadratic scaling & Sparse & ✗ & ✓ & ✓ & ✓ \\ End-to-end trainable & Differentiable & ✓ & ✓ & ✓ & ✓ \\ Variable event size & Adaptive number & ✗ & ✗ & ✓ & ✓ \\ & of clusters & & & & \\ Many hits to many & Soft assignment & ✓ & ✗ & ✗ & ✓ \\ particles relationship & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Graph Pooling Algorithms
of the pooling algorithm. The \(s_{cut}\) value that gives the best separation of in- and out-of-cluster Gaussians is chosen, and edges with scores below this value are cut. The connected components algorithm follows, and the components \(C_{\alpha}\) of the cut graph are regarded as super-nodes.
To construct super-edges, super-node embeddings are first defined as the normalized centroid of each connected component in the embedding space, i.e. \(\vec{V}_{\alpha}=\frac{\vec{V}_{\alpha}^{\prime}}{\left\|\vec{V}_{\alpha}^{\prime}\right\|_{L_{2}}}\), where \(\vec{V}_{\alpha}^{\prime}=\frac{1}{N(C_{\alpha})}\sum_{i\in C_{\alpha}}\vec{v}_{i}\). To connect nodes with super-nodes, similar to the method used in [22], we maintain sparsity by constructing the bipartite graph with the k-nearest neighbors algorithm. Differentiability can be restored by weighting each of the edges according to the distance in the embedding space, i.e. \(w_{i\alpha}=\frac{\exp(v_{i}\cdot V_{\alpha})}{\sum_{\beta\in\mathcal{N}(i)}\exp(v_{i}\cdot V_{\beta})}\). Finally, node features are aggregated into super-node features according to the graph weights. The super-graph construction is identical except that the k-nearest neighbor search has the same source and destination set. Thanks to its edge-cut nature, the GMPool has sub-quadratic time complexity and runs in milliseconds on our graphs.
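The following is a minimal sketch of the GMPool steps above, assuming unit-normalized node embeddings `v` of shape `(N, d)` and an edge list `edges` of shape `(E, 2)`; the two-component Gaussian mixture is fit with scikit-learn and `r` is the resolution hyperparameter. It illustrates the procedure only and is not the implementation used for the reported results.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import norm
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def gmpool(v, edges, r=0.0, n_super_neighbors=3, eps=1e-6):
    """Sketch of GMPool: GMM edge cut -> connected components -> super-nodes."""
    # Edge similarities s_ij = atanh(v_i . v_j), clipped for numerical safety.
    dots = np.clip(np.sum(v[edges[:, 0]] * v[edges[:, 1]], axis=1), -1 + eps, 1 - eps)
    s = np.arctanh(dots)

    # Two-component GMM: the higher-mean component models in-cluster edges.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(s.reshape(-1, 1))
    means, sigmas = gmm.means_.ravel(), np.sqrt(gmm.covariances_).ravel()
    i_in, i_out = np.argmax(means), np.argmin(means)

    # s_cut solves ln p_in(s) - ln p_out(s) = r; here via a simple grid search.
    grid = np.linspace(s.min(), s.max(), 2000)
    diff = (norm.logpdf(grid, means[i_in], sigmas[i_in])
            - norm.logpdf(grid, means[i_out], sigmas[i_out]))
    s_cut = grid[np.argmin(np.abs(diff - r))]

    # Keep edges above the cut and run connected components to define super-nodes.
    kept = edges[s > s_cut]
    n = len(v)
    adj = coo_matrix((np.ones(len(kept)), (kept[:, 0], kept[:, 1])), shape=(n, n))
    _, labels = connected_components(adj, directed=False)

    # Super-node embeddings: normalized centroids of each component.
    n_super = labels.max() + 1
    centroids = np.zeros((n_super, v.shape[1]))
    np.add.at(centroids, labels, v)
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

    # Node/super-node bipartite graph via k-nearest super-nodes with softmax weights.
    sims = v @ centroids.T                                   # (N, n_super)
    nbrs = np.argsort(-sims, axis=1)[:, :min(n_super_neighbors, n_super)]
    w = np.exp(np.take_along_axis(sims, nbrs, axis=1))
    w /= w.sum(axis=1, keepdims=True)
    return labels, centroids, nbrs, w
```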
### Hierarchical Message Passing Mechanism
In general, it is possible to stack arbitrarily many pooling layers to obtain a hierarchy of arbitrary height. However, the nature of tracking problems suggests that a spacepoint-particle hierarchy will be sufficient for tracking problems. Thus, the pooling layer in this work is kept to be of two levels. For each of the nodes, we update it by aggregating adjacent edge features, super-nodes
Figure 3: (a): schematic overview of the GMPool algorithm. (b): Distribution of edge similarities. Edges connecting spacepoints of the same particle are colored in yellow and otherwise blue.
Figure 2: A schematic overview of the HGNN architecture. A flat GNN encoder is used to transform features and embed spacepoints. A pooling algorithm (GMPool) follows to build the hierarchy using the embedded vectors. Finally, hierarchical message passing is applied iteratively to obtain final representations of both nodes and super-nodes.
features weighted by bipartite graph weights, and its own features. For each of the super-nodes, it is updated by aggregating super-edge features weighted by super graph weights, node features weighted by bipartite graph weights, and its own features. For edges and super-edges, their update rule is identical to the one used in interaction networks.
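A simplified sketch of one such update is shown below, keeping only the weighted aggregations over the flat-graph edges and the node/super-node bipartite graph; the super-edge messages and the interaction-network-style edge updates are omitted, and `node_mlp`/`super_mlp` are placeholder update networks.

```python
import numpy as np

def hgnn_step(x, X, edges, e_feat, b_src, b_dst, b_w, node_mlp, super_mlp):
    """One simplified hierarchical update step (weighted aggregation only).

    x: (N, d) node features, X: (M, d) super-node features,
    edges: (E, 2) flat-graph edges with features e_feat: (E, de),
    b_src, b_dst, b_w: node -> super-node bipartite edges and weights.
    """
    N = len(x)
    agg_e = np.zeros((N, e_feat.shape[1]))
    np.add.at(agg_e, edges[:, 1], e_feat)                 # adjacent edge features
    agg_s = np.zeros_like(x)
    np.add.at(agg_s, b_src, b_w[:, None] * X[b_dst])      # weighted super-node features
    x_new = node_mlp(np.concatenate([x, agg_e, agg_s], axis=1))

    agg_n = np.zeros_like(X)
    np.add.at(agg_n, b_dst, b_w[:, None] * x[b_src])      # weighted node features
    X_new = super_mlp(np.concatenate([X, agg_n], axis=1))
    return x_new, X_new
```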
### Bipartite Classification Loss
At this point, the HGNN architecture can be trained on traditional tasks such as node embedding thanks to GMPool's differentiability. This feature is useful for apples-to-apples comparisons between flat and hierarchical GNNs under the same training regimes. However, to exploit the full potential of the HGNN, we propose a new training regime specifically for it. The most natural way of doing track labeling with the HGNN is to use super-nodes as track candidates. For each spacepoint-track pair (bipartite edge), a score is produced to determine whether the spacepoint belongs to that track. A maximum-weight bipartite matching algorithm is used to match truth tracks to super-nodes and thereby define the "truth" for each of the bipartite edges. The loss is given by the binary cross-entropy loss defined by the matched truth. An auxiliary hinge embedding loss is also used for the first warm-up epochs to help the embedding space initialize stably.
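A minimal sketch of how the per-edge truth labels could be built is given below, using SciPy's assignment solver for the maximum-weight matching; the variable names are illustrative, noise hits and weighting details are ignored, and the resulting targets would be fed to a standard binary cross-entropy loss against the predicted bipartite-edge scores.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def bipartite_targets(b_src, b_dst, particle_id, n_super):
    """Match truth particles to super-nodes and derive binary targets per bipartite edge."""
    particles = np.unique(particle_id)
    # Weight = number of shared spacepoints between particle p and super-node a.
    W = np.zeros((len(particles), n_super))
    for k, p in enumerate(particles):
        hits = np.where(particle_id == p)[0]
        mask = np.isin(b_src, hits)              # bipartite edges whose spacepoint belongs to p
        np.add.at(W[k], b_dst[mask], 1)
    rows, cols = linear_sum_assignment(-W)       # maximum-weight bipartite matching
    matched_super = dict(zip(particles[rows], cols))
    # Edge (i -> a) is "true" if spacepoint i's particle is matched to super-node a.
    target = np.array([matched_super.get(particle_id[i], -1) == a
                       for i, a in zip(b_src, b_dst)], dtype=float)
    return target
```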
## 4 Results
### Dataset
In this paper, the dataset used to report the performance of HGNN is that of the TrackML Challenge[6]. The TrackML dataset contains events of simulated proton-proton collisions at \(\sqrt{s}=14\mathrm{TeV}\) with pile-up \(\langle\mu\rangle=200\). Details can be found in [6]. The HGNN has been evaluated in two scenarios; the first scenario is called TrackML-full and contains \(2200\) filter-processed events, each with approximately \(O(7k)\) particles and \(O(120k)\) spacepoints. In addition to that, an extensive test of robustness has been done on Bipartite Classifiers, using a simplified dataset TrackML-1GeV. We take the subgraph induced by removing any track below \(p_{T}=1\mathrm{GeV}\). Such an event typically consists of \(O(1k)\) particles and \(O(10k)\) spacepoints.
### Evaluation
The evaluation metrics are tracking efficiency and purity. A particle is matched to a track candidate if **(1)**: the track candidate contains more than \(50\%\) of the spacepoints left by the particle and **(2)**: more than \(50\%\) of the spacepoints in the track candidate are left by the particle. A particle is called reconstructable if it **(1)** left more than \(5\) spacepoints in the detector and **(2)** has \(p_{T}\geq 1\,\mathrm{GeV}\). The tracking efficiency and fake rate (FR) are thus defined as:
\[\mathrm{Eff}:=\frac{N(\mathrm{matched},\mathrm{reconstructable})}{N(\mathrm{ reconstructable})}\qquad\qquad\mathrm{FR}:=1-\frac{N(\mathrm{matched})}{N(\mathrm{track\ candidates})}\]
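A schematic implementation of this double-majority matching and of the two metrics is sketched below; inputs are per-spacepoint track-candidate labels and truth particle ids, and the handling of noise hits is deliberately ignored.

```python
import numpy as np

def matching_metrics(track_label, particle_id, reconstructable):
    """Double-majority matching plus the efficiency and fake-rate definitions above.

    track_label: per-spacepoint track-candidate label
    particle_id: per-spacepoint truth particle id
    reconstructable: set of particle ids passing the n-hit and pT requirements
    """
    tracks = np.unique(track_label)
    matched_particles, n_matched_tracks = set(), 0
    for t in tracks:
        in_track = particle_id[track_label == t]
        pid, counts = np.unique(in_track, return_counts=True)
        best = pid[np.argmax(counts)]
        frac_of_track = counts.max() / len(in_track)                   # candidate purity
        frac_of_particle = counts.max() / np.sum(particle_id == best)  # particle completeness
        if frac_of_track > 0.5 and frac_of_particle > 0.5:
            n_matched_tracks += 1
            matched_particles.add(best)
    eff = len(matched_particles & reconstructable) / len(reconstructable)
    fake_rate = 1.0 - n_matched_tracks / len(tracks)
    return eff, fake_rate
```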
### Experiments
We evaluate four models on the TrackML-full dataset. **(1)**: Embedding Flat GNN (E-GNN), **(2)**: Embedding Hierarchical GNN (E-HGNN), **(3)**: Bipartite Classifier Hierarchical GNN (BC-HGNN), **(4)**: Edge Classifier Flat GNN (EC-GNN). The first two serve for apples-to-apples comparisons between flat and hierarchical GNNs - the loss function is the same as the hinge embedding loss used for the metric learning graph construction; track candidates are selected by applying a spatial clustering algorithm (H-DBSCAN). The third model represents the state-of-the-art hierarchical GNN for particle tracking; the last one is identical to the GNN4ITk pipeline and serves as a baseline. The performance of a truth-level connected-components (Truth-CC) track builder is also reported; this takes in filter-processed graphs and prunes them down with ground truth. It is a measure of the graph quality and also an upper bound on edge-classifier flat GNN performance. The timing results are obtained on a single Nvidia A100 GPU. To test
robustness against edge inefficiency, we remove 0%, 10%, 20%, 30%, 40%, and 50% of the edges and train the Bipartite Classifier model to compare it with the Truth-CC.
## 5 Conclusion
In this paper, we introduced a novel graph neural network called a hierarchical graph neural network. We also proposed a new learnable pooling algorithm called GMPool to construct the hierarchy. The architecture successfully resolved the issues of GNN being incapable of capturing long-range interactions and the GNN particle tracking pipeline being sensitive to graphs' efficiency. Creating higher-level representations both shortens the distance between distant nodes in graphs and offers new methods of building track candidates. Empirical results demonstrate that Hierarchical GNNs have superior performance compared with flat GNNs. The hierarchical GNN is available at [https://github.com/ryanliu30/HierarchicalGNN](https://github.com/ryanliu30/HierarchicalGNN) and has been integrated into the common framework of the GNN4ITk pipeline [23].
## 6 Acknowledgements
This research was supported in part by: the U.S. Department of Energy's Office of Science, Office of High Energy Physics, under Contracts No. DE-AC02-05CH11231 (CompHEP Exa.TrkX). This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Models & E-GNN & E-HGNN & BC-HGNN & EC-GNN & Truth-CC \\ \hline Efficiency & 94.61\% & 95.60\% & **97.86\%** & 96.35\% & 97.75\% \\ Fake Rate & 47.31\% & 47.45\% & **36.71\%** & 55.58 \% & 57.67\% \\ Time (sec.) & 2.17 & 2.64 & 1.07 & **0.22** & 0.07 \\ \hline \hline \end{tabular}
\end{table}
Table 2: TrackML-Full experiment results. Comparison between embedding models shows that hierarchical structure can enhance the expressiveness of GNNs. Comparing Bipartite Classifiers with the Truth CC, we can see that Bipartite Classifiers can recover some of the tracks that cannot be reconstructed by edge-based GNNs2. The timing results also show that HGNN scales to large input graphs of HL-LHC events competitively with other embedding GNNs
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Percent Edge Removed & 0\% & 10\% & 20\% & 30\% & 40\% & 50\% \\ \hline BC Efficiency & 98.55\% & 98.39\% & 97.68\% & 96.63\% & 95.10\% & 92.79\% \\ BC Fake Rate & 1.23\% & 1.55\% & 2.13\% & 3.10\% & 4.75\% & 7.31\% \\ Truth-CC Efficiency & 98.72\% & 96.21\% & 92.31\% & 85.81\% & 77.26\% & 64.81\% \\ Truth-CC Fake Rate & 5.87\% & 15.53\% & 24.40\% & 33.48\% & 42.99\% & 53.12\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: TrackML-1GeV extensive robustness test results. We can see that Bipartite Classifiers (BC) are very robust against inefficiencies, whereas edge-based GNN’s performance is strongly influenced by missing edges. |
2310.01778 | Revisiting the Scalar Leptoquark ($S_1$) Model with the Updated Leptonic
Constraints | The Standard Model, if extended to the energy scale of $\mathcal{O}(1)$ TeV,
the known particle spectrum could be augmented with a scalar leptoquark. Within
this minimally extended framework, explaining the anomalous magnetic moment and
electric dipole moment simultaneously for the three lepton generations over a
parameter space consistent with all the lepton flavor violating bounds is
possible. Such a model can be tested or falsified through the collider search
experiments and/or by probing the low-energy lepton phenomena. This work
studies the current prospects of the model in the presence of recent
experimental updates for the leptonic observables. | Bibhabasu De | 2023-10-03T03:59:55Z | http://arxiv.org/abs/2310.01778v2 | # Revisiting the Scalar Leptoquark (\(S_{1}\)) Model with the Updated Leptonic Constraints
###### Abstract
If the Standard Model is extended to the energy scale of \(\mathcal{O}(1)\) TeV, the known particle spectrum could be augmented with a scalar leptoquark. Within this minimally extended framework, it is possible to explain the anomalous magnetic moment and electric dipole moment simultaneously for the three lepton generations over a parameter space consistent with all the lepton flavor violating bounds. Such a model can be tested or falsified through collider search experiments and/or by probing low-energy lepton phenomena. This work studies the current prospects of the model in light of recent experimental updates on the leptonic observables.
Introduction
The Standard Model (SM) has already explained the color and electroweak sectors up to a high degree of testable precision. Further, the discovery of the 125 GeV Higgs boson at the Large Hadron Collider (LHC) has completed the proposed particle spectrum of the SM [1; 2]. However, certain experimental observations and theoretical issues can't be explained within the framework of the SM and thus indicate the presence of some New Physics (NP) yet to be explored. For example, the idea of gauge coupling unification hints at a more fundamental theory corresponding to a single gauge group. The SM gauge group, i.e., \(SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\) can be considered as its effective low-energy version obtained via a particular symmetry-breaking chain. The list of such Grand Unified Theories (GUT) includes \(SU(4)\)[3], \(SU(5)\)[4], \(SO(10)\)[5; 6], \(E_{6}\)[7; 8], etc. It is interesting to note that within a GUT structure, quarks and leptons can directly couple at the tree-level through a hypothetical mediator -- Leptoquark (LQ) (for recent reviews, see Refs. [9; 10; 11; 12]). Though, in principle, within a local quantum field theory LQs can either be scalar or vector, the scalar LQs are more useful to study the loop-induced Beyond Standard Model (BSM) contributions [13; 14; 15]. LQs are crucial from various phenomenological aspects. For example, an extension of the SM with a LQ can explain several B-meson anomalies [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26] or can contribute to the flavor violating processes like \(\tau\to\mu\gamma\) and \(h\to\tau\mu\)[27]. LQs may also be significant for the dark matter phenomenology [28; 29; 30] and the production of scalar particles at the LHC [31; 32; 33; 34; 35]. Note that the simplest GUT extensions assume a heavy LQ [36; 37] to evade the proton lifetime constraints, but they can't be produced at the LHC. However, there are GUT formulations that can explain the stability of proton with a TeV-scale scalar LQ [38; 39; 40; 41; 42; 43]. Thus, in this paper, the later GUT motivation will be considered as the gauge theoretical background for the new interactions, i.e., the SM will be extended to an energy scale of \(\mathcal{O}(1)\) TeV to augment the observed particle spectrum with a scalar LQ.
Recent experiments have resulted in some remarkable observations in the lepton sector, which may indicate towards a possible BSM theory yet to be discovered. For example, in 2021 a combined result from the Fermilab-based Muon \(g-2\) collaboration and Brookhaven National Laboratory (BNL) showed a \(4.2\sigma\) discrepancy between the predicted and measured values of the anomalous magnetic moment of muon [44; 45]. The result has been updated very recently on August 2023, enhancing the significance to \(5\sigma\)[46]1. Moreover, a precision measurement of the fine-structure constant using either Cesium (Cs) [50] or Rubidium (Rb) [51] indicates a similar anomaly in \((g-2)_{e}\). However, note that a relative sign between the two results leads to an experimental dispute that can't be settled with the present technologies. LQs can play a vital role in explaining the discrepancy in \((g-2)_{\mu}\)[52; 53; 54; 55; 56; 57; 58]. Moreover, in the presence of a scalar LQ, various NP signatures, e.g., the neutrino oscillation, \(W\) mass anomaly, lepton flavor violating decays and dark matter can be connected to the \((g-2)_{e,\,\mu}\) anomalies within a single BSM formulation [29; 59; 60; 61; 62; 63; 64; 65; 66; 67]. LQs can also have important implications to explain the electric dipole moment (EDM) of leptons [68; 69; 70].
Footnote 1: A recent lattice calculation of the hadronic vacuum polarization (HVP) term by the BMW collaboration [47] and a preliminary experimental update from the CMD-3 detector [48] indicate a significant tension with the present data which may result in a smaller and less significant discrepancy [49] between the predicted and observed values of \((g-2)_{\mu}\).
In this paper, a _minimal_ extension of the SM has been considered with a scalar LQ \(S_{1}(\mathbf{\bar{3}},\mathbf{1},\,1/3)\) at an energy scale of \(\mathcal{O}(1)\) TeV. Refs. [66; 71] have already shown in detail that such a simple BSM framework can easily explain all the possible NP signatures and experimental constraints in the lepton sector. However, we shall see that the scenario could be simplified further if formulated with a particular flavor ansatz. The present work will try to constrain the parameter space for all three lepton generations simultaneously, considering the current experimental updates on \((g-2)_{\ell}\) and EDM. However, due to limited experimental sensitivity, the \(\tau\)-sector is far less constraining than \(e\) and \(\mu\). For the \(e\)-sector, both experimental possibilities (i.e., the results from the Cs and Rb experiments) will be addressed through a common generic formulation. A direct consequence of augmenting the SM with a LQ is opening up the 2-body and 3-body charged lepton flavor violating (CLFV) decay channels and initiating a possibility for the lepton flavor violating Higgs decays [72; 73; 74; 27]. However, the experimental upper limits associated with the non-observation of these processes can easily be accommodated within the considered model by adjusting the lepton-quark couplings in a \(3\times 3\) flavor basis, making the parameter space consistent with the CLFV bounds. The paper is organized as follows. Sec. II introduces the new interactions arising at the TeV scale. In Sec. III, \((g-2)_{\ell}\) and EDM are defined along with their recent experimental bounds. Sec. IV elaborates on the one-loop BSM contributions to the \(\ell\ell\gamma\) vertex appearing in the presence of \(S_{1}\), whereas in Sec. V, the allowed parameter space is analyzed using numerical techniques. Finally, the outcomes are summarized in Sec. VI.
## II The model: a minimal extension of the SM
The considered model assumes a simple extension of the SM at a NP scale \(\Lambda\sim\mathcal{O}(1)\) TeV, where the known particle spectrum gets augmented with a scalar Leptoquark (LQ) of electromagnetic (EM) charge \(1/3\) -- usually labeled as \(S_{1}\equiv S_{1}(\mathbf{\bar{3}},\mathbf{1},1/3)\). Following the notations of Ref. [9], the NP Lagrangian can be cast as,
\[\begin{split}\mathcal{L}_{\Lambda}&=\left[\lambda_{L}^{ij}\left(\bar{Q}_{L}^{C\,ia\beta}\epsilon^{ab}L_{L}^{jb}\right)S_{1}^{\beta}+\lambda_{R}^{ij}\left(\bar{u}_{R}^{C\,i\beta}S_{1}^{\beta}\ell_{R}^{j}\right)+\text{h.c.}\right]+\bar{M}_{S_{1}}^{2}(S_{1}^{\dagger}S_{1})+\kappa(H^{\dagger}H)(S_{1}^{\dagger}S_{1}),\\ &=\left[\left\{\bar{u}_{L}^{C\,i\beta}\left(V^{\dagger}\lambda_{L}\right)^{ij}\ell_{L}^{j}-\bar{d}_{L}^{C\,i\beta}\lambda_{L}^{ij}\nu_{L}^{j}\right\}S_{1}^{\beta}+\lambda_{R}^{ij}\left(\bar{u}_{R}^{C\,i\beta}S_{1}^{\beta}\ell_{R}^{j}\right)+\text{h.c.}\right]+\bar{M}_{S_{1}}^{2}(S_{1}^{\dagger}S_{1})+\kappa(H^{\dagger}H)(S_{1}^{\dagger}S_{1}),\end{split}\]
where \(i,j\) are the quark and lepton flavor indices, \(\beta\) is the color index, \(a,b\) are \(SU(2)\) indices contracted with the antisymmetric tensor \(\epsilon^{ab}\), and \(V\) denotes the quark mixing matrix. Note that, when both \(\lambda_{L}\) and \(\lambda_{R}\) couplings are present, the dominant (chirality-flipping) one-loop contribution to \(\Delta a_{\ell}\) is proportional
to the mass of the virtual fermion (here, the SM quarks) appearing in the loop [see Fig. 2]. Therefore, the largest NP contribution to \(\Delta a_{\ell}\) corresponds to the \(t\)-quark loop, and within the perturbative regime of Yukawa couplings, one can easily neglect the \(u\) and \(c\) quark contributions to \(\Delta a_{\ell}\) considering the mass hierarchy among the three quark generations. Thus, to a good approximation, the mixing among the quarks can be ignored.
Following the above discussion, one may be tempted to assume a minimal flavor structure for enhancing the loop contribution to \(\Delta a_{\ell}\) (\(\ell=e,\,\mu,\,\tau\)) as follows:
\[\lambda_{L,R}=\left(\begin{array}{ccc}0&0&0\\ 0&0&0\\ \lambda_{L,R}^{\,t,e}&\lambda_{L,R}^{\,t,\mu}&\lambda_{L,R}^{\,t,\tau}\end{array}\right). \tag{4}\]
However, it can be readily understood that the parameter space presented in Eq. (4) will be strongly constrained through the 2-body and 3-body lepton flavor violating decays. For example, if one sets \(|\lambda_{L,R}^{\,t,e}|\sim\mathcal{O}(1)\), \(\text{BR}(\mu\to e\gamma)<4.2\times 10^{-13}\)[75] leads to an upper limit \(|\lambda_{L,R}^{\,t,\mu}|<10^{-8}\), making it impossible to explain \(\Delta a_{\mu}\) within the assumed parameter space. A similar argument goes for the \(\tau\)-sector. Therefore, the minimal flavor ansatz should be so chosen that it can maximize the NP contribution to \(\Delta a_{\ell}\) while explaining the non-observation of all the CLFV processes in the most economical way. Eq. (5) represents the _minimal_ Yukawa structure for this simplified model.
\[\lambda_{L,R}=\left(\begin{array}{ccc}0&0&\lambda_{L,R}^{\,u,\,\tau}\\ 0&\lambda_{L,R}^{\,c,\,\mu}&0\\ \lambda_{L,R}^{\,t,e}&0&0\end{array}\right). \tag{5}\]
For Eq. (5) one could have equivalently chosen the diagonal structure, i.e., \(\lambda_{L,R}=\text{diag}(\lambda_{L,R}^{\,u,\,e},\,\lambda_{L,R}^{\,c,\,\mu},\,\lambda_{L,R}^{\,t,\,\tau})\). Although the phenomenology of the \(\mu\) and \(\tau\) sectors would remain mostly unchanged, due to the \(u\)-quark mass suppression this diagonal Yukawa structure would require non-perturbative values of \(|\lambda_{L,R}^{\,u,\,e}|\) to explain the observed discrepancy in \((g-2)_{e}\). Note that the zeros in Eq. (5) are imposed purely from a phenomenological perspective.
## III New physics observables and experimental bounds
The most generic gauge invariant representation for the effective \(\ell\ell\gamma\) vertex corresponding to Fig. 1 is given by,
\[\Gamma_{\ell\ell\gamma}^{\mu}=\gamma^{\mu}\mathcal{F}_{1}(q^{2})+\left\{i \mathcal{F}_{2}(q^{2})+\mathcal{F}_{4}(q^{2})\,\gamma^{5}\right\}\left(\frac{ \sigma^{\mu\nu}q_{\nu}}{2m_{\ell}}\right)+\mathcal{F}_{3}(q^{2})(q^{\mu}\not{ q}-q^{2}\gamma^{\mu})\gamma^{5}, \tag{6}\]
where \(\mathcal{F}_{(1,2,3,4)}\) are the form factors and \(q\) represents the photon momentum. However, in the case of an off-shell photon, there should be additional contributions in Eq. (6). Note that the form factors \(\mathcal{F}_{3}\) and \(\mathcal{F}_{4}\) must vanish in any parity-conserving theory (e.g., QED) and can only arise through the diagrams where electroweak (EW) gauge bosons appear as the virtual particles. Thus, the renormalized vertex correction in QED results in [76],
\[\mathcal{F}_{1}(0)=0,\qquad\mathcal{F}_{2}(0)=\frac{\alpha_{\text{EM}}}{2\pi}, \tag{7}\]
where \(\mathcal{F}_{1}(0)\) corresponds to the correction in EM charge while \(\mathcal{F}_{2}(0)\) represents the QED contribution to the anomalous magnetic moment of leptons at \(\mathcal{O}(\alpha_{\text{EM}})\). However, in the presence of the weak gauge bosons, the \(\ell\ell\gamma\) vertex gets modified as [77]3,
Footnote 3: The axial vector coupling associated with the form factor \(\mathcal{F}_{3}\) vanishes for on-shell photons as a consequence of the Ward identity [78].
\[\Gamma_{\text{EW}}^{\mu}=e\Bigg{[}\left(1+\frac{\delta e}{e}\right)\gamma^{ \mu}+\frac{i\sigma^{\mu\nu}q_{\nu}}{2m_{\ell}}\left\{\frac{\alpha_{\text{EM}} }{2\pi}+\mathcal{F}_{2}^{EW}(0)\right\}+\frac{\sigma^{\mu\nu}q_{\nu}}{2m_{\ell} }\gamma^{5}\mathcal{F}_{4}(0)\Bigg{]}, \tag{8}\]
here \(\delta e\) denotes the sum of the charge correction at one-loop order and the corresponding counterterm. The additional contribution to the anomalous magnetic moment can be parametrized as [79],
\[\mathcal{F}_{2}^{EW}(0)=\frac{\mathcal{G}_{F}\,m_{\ell}^{2}}{8\sqrt{2}\,\pi^{2}} \left[\frac{5}{3}+\frac{1}{3}(1-4\sin^{2}\theta_{W})^{2}+\mathcal{O}\left( \frac{m_{\ell}^{2}}{M_{W}^{2}}\right)\right], \tag{9}\]
where, \(\mathcal{G}_{F}\), \(\theta_{W}\), and \(M_{W}\) signify the Fermi constant, weak mixing angle, and mass of the \(W\)-boson, respectively. Moreover, considering the leading order (LO) hadronic contribution, one obtains [80],
\[\mathcal{F}_{2}(0)^{\rm Had}[\text{LO}]=\left(\frac{\alpha_{\rm EM}}{\pi\sqrt{ 3}}\right)^{2}\int_{m_{\pi}^{2}}^{\infty}\frac{K(s)}{s}R^{(0)}(s)\,ds, \tag{10}\]
where, \(K(s)\) stands for the QED kernel function [81] and \(R^{(0)}(s)\) represents the ratio of electron-positron bare annihilation cross into the hadrons to the cross section of muon-pair production with center of mass energy \(\sqrt{s}\). However, this leading order hadronic contribution \(\mathcal{F}_{2}(0)^{\rm Had}[\text{LO}]\) includes a significant amount of uncertainty which might be resolved soon through the updated lattice calculations [47].
The last term in Eq. (8), i.e., \(\mathcal{F}_{4}(0)\) represents the leading order SM contribution to the electric dipole moment (\(d_{\ell}\)) of leptons. As already mentioned in Sec. I, despite considering all the SM contributions these leptonic observables exhibit a sharp discrepancy with the experimental results.
### Anomalous Magnetic Moment
The best available SM prediction for the anomalous magnetic moment of muon is given by \(a_{\mu}^{\rm SM}=116591810(43)\times 10^{-11}\)[82], whereas the recent experimental data from Muon \(g-2\) collaboration results in a world average of \(a_{\mu}^{\rm Exp}=116592059(22)\times 10^{-11}\)[46], leading to a discrepancy,
\[\Delta a_{\mu}=a_{\mu}^{\rm Exp}-a_{\mu}^{\rm SM}=(2.49\pm 0.48)\times 10^{- 9}\ (5.0\,\sigma). \tag{11}\]
As discussed, it can be one of the most remarkable signatures of a possible BSM sector. Further, in the context of electrons, experiments indicate a similar anomaly in \((g-2)_{e}\). A precision measurement of the fine structure constant through the recoil of Cs\({}^{133}\) atoms has yielded a notable contradiction between the measured and predicted values of \(a_{e}\) as [50],
\[\Delta a_{e}^{\rm(Cs)}=a_{e}^{\rm Exp\,(Cs)}-a_{e}^{\rm SM}=(-8.8\pm 3.6)\times 1 0^{-13}\ (2.4\,\sigma). \tag{12}\]
However, for the same \(a_{e}\) a Rubidium based experiment results in [51],
\[\Delta a_{e}^{\rm(Rb)}=a_{e}^{\rm Exp\,(Rb)}-a_{e}^{\rm SM}=(4.8\pm 3.0)\times 1 0^{-13}\ (1.6\,\sigma). \tag{13}\]
Figure 1: Effective \(\ell\ell\gamma\) vertex. \(p_{1,2}\) represent the external momenta, with \(q=p_{1}-p_{2}\) being the photon momentum.
Note that, despite having sizable central values, the experimental determinations of \(\Delta a_{e}\) carry large error bars. Nevertheless, this paper is able to address both of the results for \(\Delta a_{e}\), along with the non-zero value of \(\Delta a_{\mu}\), within a common BSM framework.
Unlike the first two generations, measuring the anomalous magnetic moment of \(\tau\) is extremely challenging due to its short lifetime. Thus, \(a_{\tau}^{\rm Exp}\) can only be traced back from the secondary particles produced through the decay of \(\tau\). The latest experimental bound (95% CL) can be quoted as [83; 84],
\[-0.052<a_{\tau}<0.013, \tag{14}\]
whereas, the corresponding SM prediction is given by, \(a_{\tau}^{\rm SM}=117721(5)\times 10^{-8}\)[85].
### Electric Dipole Moment
The precision measurement of the electric dipole moment of leptons can be crucial to search for the NP. EDM can be related to the form factor \(\mathcal{F}_{4}\) as, \(d_{\ell}=e.\mathcal{F}_{4}(0)/m_{\ell}\), for which the SM predicts \(|\mathcal{F}_{4}^{e}(0)|<|\mathcal{F}_{4}^{\mu}(0)|<|\mathcal{F}_{4}^{\tau} (0)|\approx 10^{-23}\)[86; 87; 88; 89], i.e., \(|d_{\tau}^{\rm SM}|\simeq 10^{-37}\)\(e\,\)cm. It is much smaller than the experimental sensitivity. Thus, any observation of lepton EDM can be treated as a direct evidence of some New Physics interaction. The experimental upper limits for the three lepton generations can be read as [90; 91; 92],
\[|d_{e}|<0.11\times 10^{-28}\;e\,\text{cm}\;(90\%\;\text{CL}),\] \[|d_{\mu}|<1.8\times 10^{-19}\;e\,\text{cm}\;(95\%\;\text{CL}),\] \[\text{Re}\,[d_{\tau}]\in[-0.220,\,0.45]\times 10^{-16}\;e\,\text{cm}\;(95\%\;\text{CL}),\] \[\text{Im}\,[d_{\tau}]\in[-0.250,\,0.08]\times 10^{-16}\;e\,\text{cm}\;(95\%\;\text{CL}). \tag{15}\]
These experimental bounds on \(\Delta a_{\ell}\) and \(d_{\ell}\) will be simultaneously considered to constrain the chosen parameter space for each lepton generation.
### CLFV Processes
In general, the CLFV decays are allowed in a \(S_{1}\)-LQ extension of the SM. However, there is no positive signal from the ongoing experiments [75; 93; 94; 95; 96; 97; 98; 99; 100; 101] supporting the lepton flavor violating processes and thus only leads to upper bounds on the Yukawa couplings. Therefore, the non-observation of the 2-body and 3-body CLFV decays can easily be accommodated in this considered model if one follows the Yukawa structure defined by Eq. (5) without any conflict with the experimental data. Thus, the minimal parameter space chosen here is automatically consistent with all the CLFV bounds.
## IV BSM contributions to \((g-2)_{\ell}\) and EDM
As already stated in Sec. I, in the presence of a scalar LQ, there can be new contributions to the \(\ell\ell\gamma\) vertex at one-loop order. Fig. 2(a) shows the case where the photon couples to the up-type quarks, while Fig. 2(b) represents the situation when photon touches the \(S_{1}\) propagator (magenta line). The former will be referred to as Type-1 diagram, while the latter will be called Type-2 for convenience.
### Type-1 Diagram
The correction term to \(\ell_{j}\ell_{j}\gamma\) vertex due to the Type-1 diagram can be computed as,
\[\Delta\Gamma_{1}^{\sigma}=iN_{C}\int\frac{d^{4}k}{(2\pi)^{4}} \left[(-\lambda_{L}^{ij}P_{L}+\lambda_{R}^{ij}P_{R})\frac{(\not{p}_{2}-\not{k}+m _{i})}{(k-p_{2})^{2}-m_{i}^{2}}(Q_{\rm EM}^{i}\gamma^{\sigma})\frac{(\not{p}_{1 }-\not{k}+m_{i})}{(k-p_{1})^{2}-m_{i}^{2}}\right.\] \[\left.\times\frac{1}{k^{2}-M_{S_{1}}^{2}}\left\{-(\lambda_{L}^{ ij})^{*}P_{R}+(\lambda_{R}^{ij})^{*}P_{L}\right\}\right]\] \[\equiv iQ_{\rm EM}^{i}N_{C}\int\frac{d^{4}k}{(2\pi)^{4}}\left[ \frac{\mathcal{N}_{1}^{\sigma}}{\mathcal{D}_{1}}\right]. \tag{16}\]
Here \(N_{C}=3\) defines the color degeneracy factor, and \(Q_{\rm EM}^{i}=2/3\) represents the EM charge of up-type quarks in the unit of electronic charge \(e\). \(m_{i}\) denotes the up-type quark masses for \(i=u,\,c,t\). The numerator can be rearranged as,
\[\mathcal{N}_{1}^{\sigma}=\frac{1}{2}\Bigg{[}\mathcal{A}_{1}\Big{\{} (\not{p}_{2}-\not{k})\gamma^{\sigma}(\not{p}_{1}-\not{k})+m_{i}^{2}\gamma^{ \sigma}\Big{\}}+\mathcal{A}_{2}m_{i}\Big{\{}(\not{p}_{2}-\not{k})\gamma^{\sigma }+\gamma^{\sigma}(\not{p}_{1}-\not{k})\Big{\}}\] \[\qquad\qquad\qquad+\mathcal{A}_{3}\gamma^{5}\Big{\{}(\not{p}_{2} -\not{k})\gamma^{\sigma}(\not{p}_{1}-\not{k})+m_{i}^{2}\gamma^{\sigma}\Big{\}} +\mathcal{A}_{4}m_{i}\gamma^{5}\Big{\{}(\not{p}_{2}-\not{k})\gamma^{\sigma}+ \gamma^{\sigma}(\not{p}_{1}-\not{k})\Big{\}}\Bigg{]}, \tag{17}\]
where,
\[\mathcal{A}_{1}=|\lambda_{R}^{ij}|^{2}+|\lambda_{L}^{ij}|^{2}\,, \mathcal{A}_{2}=-2\,\mathrm{Re}[(\lambda_{L}^{ij})^{*}\lambda_{R}^{ij}],\] \[\mathcal{A}_{3}=|\lambda_{R}^{ij}|^{2}-|\lambda_{L}^{ij}|^{2}\,, \mathcal{A}_{4}=-2\,\mathrm{Im}[(\lambda_{L}^{ij})^{*}\lambda_{R}^{ij}]. \tag{18}\]
After Feynman parametrization, the denominator can be cast as,
\[\mathcal{D}_{1}=n^{2}-\Delta_{1}(x), \tag{19}\]
where \(n=k-yp_{1}-zp_{2}\) and \(\Delta_{1}(x)=M_{S_{1}}^{2}\left[x+\rho_{i}(1-x)\right]\). \(x,\,y,\,z\) are the Feynman parameters and \(\rho_{i}=(m_{i}/M_{S_{1}})^{2}\). This calculation assumes an on-shell photon and the physically viable approximation of \((m_{\ell}/M_{S_{1}})^{2}\to 0\). \(m_{\ell}\) denotes the mass of the SM leptons. Integrating over the loop
Figure 2: BSM contributions to the \(\ell\ell\gamma\) vertex, where (a) the up-type quarks couple to the photon (Type-1 diagram), and (b) the LQ \(S_{1}\) couples to the photon (Type-2 diagram). \(p_{1}\), \(p_{2}\) represent the external momenta.
momentum \(n\), the BSM contributions to the anomalous magnetic moment (\(\Delta a_{1}^{\ell}\)) and electric dipole moment (\(d_{1}^{\ell}\)) of the SM leptons can be defined as,
\[\Delta a_{1}^{\ell} =\frac{1}{8\pi^{2}}\Bigg{[}\mathcal{A}_{1}\left(\frac{m_{\ell}}{M_ {S_{1}}}\right)^{2}G_{1}(\rho_{i})+\mathcal{A}_{2}\left(\frac{m_{\ell}\,m_{i}}{ M_{S_{1}}^{2}}\right)G_{2}(\rho_{i})\Bigg{]}, \tag{20}\] \[d_{1}^{\ell} =\frac{e}{8\pi^{2}}\Bigg{[}\mathcal{A}_{3}\left(\frac{m_{\ell}}{ M_{S_{1}}^{2}}\right)G_{1}(\rho_{i})+\mathcal{A}_{4}\left(\frac{m_{i}}{M_{S_{1}}^{2}} \right)G_{2}(\rho_{i})\Bigg{]}, \tag{21}\]
where, the functions \(G_{1}\) and \(G_{2}\) are given by,
\[G_{1}(w) =\int_{0}^{1}\left[\frac{x(1-x)^{2}}{x+(1-x)w}\right]\,dx=\frac{ 2+3w-6w^{2}+w^{3}+6w\ln w}{6(1-w)^{4}},\] \[G_{2}(w) =\int_{0}^{1}\left[\frac{(1-x)^{2}}{x+(1-x)w}\right]\,dx=\frac{-3 +4w-w^{2}-2\ln w}{2(1-w)^{3}}\,. \tag{22}\]
### Type-2 Diagram
Fig. 2 (b) contributes to the \(\ell_{j}\ell_{j}\gamma\) vertex as follows.
\[\Delta\Gamma_{2}^{\sigma} =iN_{C}\int\frac{d^{4}k}{(2\pi)^{4}}\Bigg{[}(-\lambda_{L}^{ij}P_{ L}+\lambda_{R}^{ij}P_{R})\frac{(\not{k}+m_{i})}{k^{2}-m_{i}^{2}}.\frac{1}{(k-p_{1}) ^{2}-M_{S_{1}}^{2}}\cdot Q_{\text{EM}}^{S_{1}}(p_{1}+p_{2}-2k)^{\sigma}\] \[\times\frac{1}{(k-p_{2})^{2}-M_{S_{1}}^{2}}\{-(\lambda_{L}^{ij})^ {*}P_{R}+(\lambda_{R}^{ij})^{*}P_{L}\}\Bigg{]},\] \[\equiv iQ_{\text{EM}}^{S_{1}}N_{C}\int\frac{d^{4}k}{(2\pi)^{4}} \Bigg{[}\frac{\mathcal{N}_{2}^{\sigma}}{\mathcal{D}_{2}}\Bigg{]}. \tag{23}\]
Here \(Q_{\text{EM}}^{S_{1}}=1/3\) is the EM charge of \(S_{1}\). Recasting the numerator of Eq. (23), one gets,
\[\mathcal{N}_{2}^{\sigma} =\frac{1}{2}\Big{[}\{\mathcal{A}_{1}\not{k}+\mathcal{A}_{2}m_{i} \}+\gamma^{5}\{\mathcal{A}_{3}\not{k}+\mathcal{A}_{4}m_{i}\}\Big{]}(p_{1}+p_{2 }-2k)^{\sigma}. \tag{24}\]
Feynman parametrization recasts the denominator as,
\[\mathcal{D}_{2} =(k-yp_{1}-zp_{2})^{2}-M_{S_{1}}^{2}\Big{[}x\rho_{i}+(1-x)\Big{]}\] \[=n^{2}-\Delta_{2}(x). \tag{25}\]
Thus, the NP contributions to the anomalous magnetic moment and EDM, arising from the Type-2 diagram, can be formulated as,
\[\Delta a_{2}^{\ell} =-\frac{1}{16\pi^{2}}\Bigg{[}\mathcal{A}_{1}\left(\frac{m_{\ell}} {M_{S_{1}}}\right)^{2}G_{3}(\rho_{i})+\mathcal{A}_{2}\left(\frac{m_{\ell}\,m_{ i}}{M_{S_{1}}^{2}}\right)G_{4}(\rho_{i})\Bigg{]}, \tag{26}\] \[d_{2}^{\ell} =-\frac{e}{16\pi^{2}}\Bigg{[}\mathcal{A}_{3}\left(\frac{m_{\ell} }{M_{S_{1}}^{2}}\right)G_{3}(\rho_{i})+\mathcal{A}_{4}\left(\frac{m_{i}}{M_{S_ {1}}^{2}}\right)G_{4}(\rho_{i})\Bigg{]}, \tag{27}\]
where,
\[G_{3}(w) =\int_{0}^{1}\left[\frac{x(1-x)^{2}}{xw+(1-x)}\right]\,dx=\frac{ 1-6w+3w^{2}+2w^{3}-6w^{2}\ln w}{6(1-w)^{4}},\] \[G_{4}(w) =\int_{0}^{1}\left[\frac{x(1-x)}{xw+(1-x)}\right]\,dx=\frac{1-w^{ 2}+2w\ln w}{2(1-w)^{3}}. \tag{28}\]
Therefore, within this minimally extended BSM framework, the complete NP contribution to the leptonic observables can be defined as,
\[\Delta a_{\ell}=\Delta a_{1}^{\ell}+\Delta a_{2}^{\ell}\,,\qquad|d_{\ell}|=\left| d_{1}^{\ell}+d_{2}^{\ell}\right|. \tag{29}\]
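The expressions above are straightforward to evaluate numerically. The sketch below transcribes the loop functions of Eqs. (22) and (28) and the contributions of Eqs. (20)-(21), (26)-(27) and (29), with the numerical prefactors taken exactly as displayed; the coupling values in the usage example are arbitrary illustrations, not fit results, and the EDM is returned in units of \(e\,\)GeV\({}^{-1}\) (multiply by \(\hbar c\simeq 1.973\times 10^{-14}\) GeV cm to convert to \(e\,\)cm).

```python
import numpy as np

# Loop functions of Eqs. (22) and (28).
def G1(w): return (2 + 3*w - 6*w**2 + w**3 + 6*w*np.log(w)) / (6*(1 - w)**4)
def G2(w): return (-3 + 4*w - w**2 - 2*np.log(w)) / (2*(1 - w)**3)
def G3(w): return (1 - 6*w + 3*w**2 + 2*w**3 - 6*w**2*np.log(w)) / (6*(1 - w)**4)
def G4(w): return (1 - w**2 + 2*w*np.log(w)) / (2*(1 - w)**3)

def np_contributions(lam_L, lam_R, m_l, m_q, M_S1, e=1.0):
    """Evaluate Eqs. (20)-(21), (26)-(27) and (29) for one lepton-quark pair.
    Masses in GeV; lam_L, lam_R may be complex; d_l is returned in units of e/GeV."""
    rho = (m_q / M_S1)**2
    A1 = abs(lam_R)**2 + abs(lam_L)**2
    A2 = -2.0 * np.real(np.conj(lam_L) * lam_R)
    A3 = abs(lam_R)**2 - abs(lam_L)**2
    A4 = -2.0 * np.imag(np.conj(lam_L) * lam_R)

    da1 = (A1*(m_l/M_S1)**2*G1(rho) + A2*(m_l*m_q/M_S1**2)*G2(rho)) / (8*np.pi**2)
    d1  = e*(A3*(m_l/M_S1**2)*G1(rho) + A4*(m_q/M_S1**2)*G2(rho)) / (8*np.pi**2)
    da2 = -(A1*(m_l/M_S1)**2*G3(rho) + A2*(m_l*m_q/M_S1**2)*G4(rho)) / (16*np.pi**2)
    d2  = -e*(A3*(m_l/M_S1**2)*G3(rho) + A4*(m_q/M_S1**2)*G4(rho)) / (16*np.pi**2)
    return da1 + da2, abs(d1 + d2)

# Illustrative call: muon with the c-quark coupling of Eq. (5), M_S1 = 1.5 TeV.
da_mu, d_mu = np_contributions(lam_L=1.0, lam_R=-0.05, m_l=0.105, m_q=1.275, M_S1=1500.0)
```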
## V Numerical analysis and results
In this section, we shall try to identify the allowed region of the parameter space through flavor-specific constraints. \(\Delta a_{\ell}\) and the experimental upper bound on EDM will be considered simultaneously as the constraining factors for each generation. For completeness, one can enlist the free parameters of this model as follows:
\[\left\{M_{S_{1}},\lambda_{L,R}^{u,\tau},\lambda_{L,R}^{c,\mu},\lambda_{L,R}^{t,e}\right\}.\]
Note that, respecting the LHC constraints at \(\sqrt{s}=13\) TeV one has to choose \(M_{S_{1}}\geq 1.5\) TeV [102; 103; 104; 105; 106]. However, the NP couplings can be varied freely within the bounds of perturbative unitarity [107]. Fig. 3(a) shows the allowed parameter space in the \(\lambda_{L}^{t,e}-\lambda_{R}^{t,e}\) plane for a set of four \(M_{S_{1}}\) values: \(M_{S_{1}}=1.5\) TeV (violet), \(2.0\) TeV (golden), \(2.5\) TeV (sky blue), and \(3.0\) TeV (red). The depicted region simultaneously satisfies the observed \(\Delta a_{e}^{(\text{Cs})}\) value and the experimental bound on
\(|d_{e}|\). However, it is a notable feature of the considered framework that even if one assumes the \(\Delta a_{e}^{\rm(Rb)}\) result instead of Cs, a valid parameter space can be obtained [see Fig. 3(b)]. Similarly, for the muon sector the \(\lambda_{L}^{c,\mu}-\lambda_{R}^{c,\mu}\) plane has been constrained through the \(\mu\)-specific observables, i.e., \(\Delta a_{\mu}\) and \(|d_{\mu}|\) [see Fig. 3(c)]. Note that, numerically, the same exercise can be repeated for the \(\tau\)-sector to constrain the \(\lambda_{L}^{u,\tau}-\lambda_{R}^{u,\tau}\) region. However, the present experimental sensitivity is inadequate to probe the NP effects on \(a_{\tau}\) and \(d_{\tau}\) that one obtains from Fig. 2. Thus, no significant conclusion can be drawn in this case and the entire parameter space is effectively available.
Fig. 3 leads to two interesting observations:
* _With increasing \(M_{S_{1}}\), the magnitude of the couplings shifts to higher values_. This behavior can be understood by analyzing the \(M_{S_{1}}\)-dependence of \(\Delta a_{\ell}\) and \(d_{\ell}\) for a fixed set of fermion masses. From Eqs. (20)-(21) and (26)-(27) it is clear that there is an overall \(M_{S_{1}}^{2}\) suppression. However, the complete \(M_{S_{1}}\)-dependence can only be assessed by studying the individual variation of the functions \(G_{\{1,2,3,4\}}\). Fig. 4 shows the variation of the \(G\) functions with respect to \(M_{S_{1}}\). For illustration, \(m_{\ell}=m_{\mu}=0.105\) GeV and \(m_{i}=m_{c}=1.275\) GeV have been assumed [84]. Fig. 4 clearly indicates that \(G_{1},\,G_{3},\,G_{4}\) do not exhibit any notable variation with increasing \(M_{S_{1}}\), while \(G_{2}\) shows only a slight increment. Thus, to a good approximation, one can conclude that for a given set of quark and lepton masses, \(\Delta a_{\ell}\) and \(|d_{\ell}|\) decrease quadratically with \(M_{S_{1}}\). Therefore, to compensate for this suppression, the couplings must rise to match the experimental observations.
* _In Fig. 3(a) the product \(\lambda_{L}^{t,e}\times\lambda_{R}^{t,e}\) is positive, whereas it flips to a negative value in Fig. 3(b)._ This is a direct consequence of the oppositely aligned values of \(\Delta a_{e}^{\rm(Cs)}\) and \(\Delta a_{e}^{\rm(Rb)}\). From Fig. 4, one can see that the function \(G_{2}\) produces the leading contribution over the entire parameter space. The effect is further enhanced due to the chosen flavor ansatz [see Eq. (5)] as it connects the lightest lepton with the heaviest quark and vice versa. Thus, the sign of the term \(\mathcal{A}_{2}\left(\frac{m_{\ell}\,m_{\ell}}{M_{S_{1}}^{2}}\right)G_{2}(\rho _{i})\) [see Eq. (20)], or to be more specific, the sign of \(\mathcal{A}_{2}\) effectively decides the sign of \(\Delta a_{e}\) in the theory. The same argument is valid for the negative values of \(\lambda_{L}^{c,\mu}\times\lambda_{R}^{c,\mu}\) in Fig. 3(c).
## VI Conclusions
This paper has considered a minimal extension of the Standard Model with a TeV-scale scalar Leptoquark \(S_{1}\) transforming as \((\bar{\bf 3},{\bf 1},1/3)\) under the SM gauge group. In the presence of \(S_{1}\), there can be corrections to the \(\ell\ell\gamma\) vertex at the one-loop level, which may lead to new physics contributions to the lepton \((g-2)\) and EDM. A particular flavor structure has been chosen to suppress
the CLFV processes while enhancing the BSM contributions to other low-energy lepton phenomena. The new one-loop contributions have been computed analytically, followed by a numerical scan to determine the parameter space allowed under the recent \((g-2)_{\ell}\) and EDM constraints for each of the lepton generations. Four different LQ masses have been considered to understand the phenomenological implication of the NP scale on flavor-specific low-energy observables. For the electron sector, viable parameter spaces have been found corresponding to both of the experimental results, i.e., \(\Delta a_{e}^{(\text{Cs})}\) [see Eq. (12)] and \(\Delta a_{e}^{(\text{Rb})}\) [see Eq. (13)]. Note that it is a significant feature of this work that it can explain both positive (\(\Delta a_{e}^{(\text{Rb})}\) & \(\Delta a_{\mu}\)) and negative (\(\Delta a_{e}^{(\text{Cs})}\)) discrepancies in the anomalous magnetic moment of leptons by simply rotating the parameter space while keeping the entire scenario consistent with the respective EDM discovery limits. Although the \(\tau\)-sector has also been analyzed, due to lower experimental sensitivity the complete parameter space is allowed within the perturbative bounds. However, the assumed model structure can explain any future update on \(a_{\tau}\) and/or \(d_{\tau}\) which can probe the BSM contributions to the \(\tau\) phenomenology. Collider-based experiments searching for TeV-scale scalar LQs and/or any experimental update on low-energy lepton phenomena can be used to test or falsify the proposed framework.
|
2310.16204 | Mass-based separation of active Brownian particles in an asymmetric
channel | Inertial effects should be considered for micro- and nano-swimmers moving in
a low-density medium confined by irregular structures that create entropic
barriers, where viscous effects are no longer paramount. Here, we present a
separation mechanism of self-propelled particles in a two-dimensional
asymmetric channel, which leads to the drift of particles of different masses
in opposite directions. In particular, this mechanism is based on the combined
action of the spatial asymmetry of the channel structure, the temporal
asymmetry inherent in particles dynamics, and an external static force. This
work is relevant for potential applications that can be found in the
development of lab-on-a-chip devices and artificial channels for separating
particles of different masses. | Narender Khatri | 2023-10-24T21:45:03Z | http://arxiv.org/abs/2310.16204v2 | # Mass-based separation of active Brownian particles in an asymmetric channel
###### Abstract
The inertial effects should be considered for micro- and nano-swimmers moving in a low-density medium confined by irregular structures that create entropic barriers, where viscous effects are no longer paramount. Here, we present a separation mechanism of self-propelled particles in a two-dimensional asymmetric channel, which leads to the drift of particles of different masses in opposite directions. In particular, this mechanism is based on the combined action of the spatial asymmetry of the channel structure, temporal asymmetry inherent in particles dynamics, and an external static force. This work is relevant for potential applications that can be found in the development of lab-on-a-chip devices and artificial channels for separating particles of different masses.
## I Introduction
For the most part, active matter systems consist of microscopic or sub-microscopic active agents, biological as well as synthetic, that take free energy from their environments and convert it to a persistent motion under nonequilibrium conditions [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. The nonequilibrium dynamics of such particles, also termed as micro- and nano-swimmers, are often described by the overdamped Langevin or continuum models, where viscous force is paramount and inertial effects can be neglected. However, in low-density media, inertial effects become relevant due to reduced viscous force, and a body of research [18] describes the new phenomena as a result of their inclusion. Some examples where inertial effects are important include active particle motion in low-density media such as gases [19; 20; 21], plasmas [22; 23; 24] and superfluids [25], vibrobots [26; 27; 28], active aerosols [29], and systems that support a temperature gradient across coexisting phases [30]. The inertial active particles are called micro- and nano-flyers rather than swimmers [18; 31].
Active matter very often exists in the form of a mixture with a wide distribution of sizes, masses, motilities, chiralities, and shapes. Separating wanted active particles from unwanted ones is of great fundamental and nanotechnological importance for various branches of science and engineering. So far, much effort has been devoted to the motility-based [32; 33; 34; 35] and chirality-based [36; 37; 38; 39; 40; 41; 42; 43; 44] separation of particles. However, the mass-based separation of particles is of utmost importance for micro- and nano-robotics, environmental monitoring, nanotechnological applications, etc. In particular, separating particles based on their masses can be very challenging because, typically, identically-sized particles can have different masses.
In this paper, we present a mass-based separation mechanism of active particles in a two-dimensional asymmetric channel. The irregular shape of the considered channel structure gives rise to entropic barriers [45; 46; 47; 48; 49; 50; 51] that significantly influence the diffusive behavior of particles. As well, the spatial asymmetry of the channel can induce active directed transport of particles in the absence of external forces, i.e., so-called entropic rectification [52; 31; 53]. We implement the sliding-reflecting boundary conditions [52; 53; 31; 54] for the collisional dynamics of particles with the channel walls. The results that follow show how particles of different masses can be separated in opposite directions by purely entropic means.
The rest of the paper is organized as follows: in Sec. II, we introduce the minimal underdamped Langevin model used to describe the dynamics of active particles in a two-dimensional asymmetric channel. Section III discusses the results of the mass-based separation of particles, and the main conclusions are given in Sec. IV.
## II Model
We consider a dilute suspension of active particles in a dissipative medium constrained to move in a two-dimensional asymmetric channel with periodicity \(L\) (see Fig. 1). We neglect the direct interactions among the particles under the assumption of sufficiently small particle density. The dynamics of an active particle with
Figure 1: Schematic illustration of a two-dimensional (triangular-shaped) asymmetric channel, described by Eq. (2), with periodicity \(L\) confining an active Brownian particle of mass \(M\) and moment of inertia \(I\), which is subjected to an external constant force \(\mathbf{F}=-F\mathbf{\hat{x}}\) along the negative \(x\)-direction. The local width of the channel \(2\,\mathrm{w}(x)\), active force \(\mathbf{F_{0}}=F_{0}\mathbf{\hat{n}}\), angle \(\theta\), and active torque \(T_{0}\) are indicated.
position \(\mathbf{r}=(x,y)\), orientation \(\mathbf{\hat{n}}=(\cos\theta,\sin\theta)\), mass \(M\), and moment of inertia \(I\), driven by an external static force \(\mathbf{F}=-F\mathbf{\hat{x}}\) acting along the negative \(x\)-direction, is described by the coupled Langevin equations as
\[\begin{split} M\ddot{\mathbf{r}}(t)&=-\gamma_{t}\dot{ \mathbf{r}}(t)-F\mathbf{\hat{x}}+F_{0}\mathbf{\hat{n}}(t)+\gamma_{t}\sqrt{2D_{t}}\ \mathbf{\xi}(t),\\ I\ddot{\theta}(t)&=-\gamma_{r}\dot{\theta}(t)+T_{0} +\gamma_{r}\sqrt{2D_{r}}\ \zeta(t),\end{split} \tag{1}\]
where \(\gamma_{t}\) and \(\gamma_{r}\) denote the translational and rotational friction coefficients, \(F_{0}\) and \(T_{0}\) are the active force and torque, and \(D_{t}\) and \(D_{r}\) are the translational and rotational diffusion constants, respectively. The translational and rotational Brownian fluctuations from the surrounding medium are modeled by the Gaussian white noise terms \(\mathbf{\xi}(t)\) and \(\zeta(t)\), respectively, with zero mean and unit variances given by \(\langle\mathbf{\xi}(t)\otimes\mathbf{\xi}(t^{\prime})\rangle=\delta(t-t^{\prime})\mathbf{1}\) and \(\langle\zeta(t)\zeta(t^{\prime})\rangle=\delta(t-t^{\prime})\), where \(\mathbf{1}\) is the identity matrix. As is well known, the set of coupled Langevin equations (1) serves as a model for diverse active systems, including systems where the noise is athermal. For systems subject to athermal noise [6; 28; 31; 55; 56; 57], \(\gamma_{t}\), \(\gamma_{r}\), \(D_{t}\), and \(D_{r}\) are treated as independent parameters, and \(D_{t}\) and \(D_{r}\) control the strength of the translational and rotational noises, respectively. For systems with thermal noise that satisfy the fluctuation-dissipation relation, however, the diffusion and friction constants are related by the Stokes-Einstein relations, \(D_{t}=k_{B}T/\gamma_{t}\) and \(D_{r}=k_{B}T/\gamma_{r}\). A generalization of the coupled Langevin equations (1) that accounts for asymmetric particles has been given in Ref. [58]. Using fluctuating chemohydrodynamics [59] and molecular theory [60], a set of general underdamped coupled Langevin equations has been derived for chemically-powered colloids, where active force and torque expressions are given.
For the two-dimensional asymmetric and spatially periodic channel with periodicity \(L\) depicted in Fig. 1, the channel walls at position \(x\) are described by
\[\begin{split}\mathrm{w}_{u}(x)=\begin{cases}\mathrm{w}_{\min},& x=0,\\ \mathrm{w}_{\max}-(\mathrm{w}_{\max}-\mathrm{w}_{\min})\frac{x}{L},&0<x\leq L,\end{cases}\end{split} \tag{2}\]
where \(\mathrm{w}_{u}(x)\) and \(\mathrm{w}_{l}(x)=-\mathrm{w}_{u}(x)\) are the upper and lower channel walls, \(\mathrm{w}_{\min}\) and \(\mathrm{w}_{\max}\) are the minimum and maximum half-widths of the channel, respectively, and \(2\,\mathrm{w}(x)=\mathrm{w}_{u}(x)-\mathrm{w}_{l}(x)\) is the local width of the channel. The dimensionless parameter \(\epsilon=\mathrm{w}_{\min}/\mathrm{w}_{\max}\) defines the aspect ratio of the channel, where we choose \(\mathrm{w}_{\max}=L\) and \(\epsilon=0.1\) throughout the work.
The collisional dynamics of the particle at the channel walls is modeled by sliding-reflecting boundary conditions [52; 53; 54; 31] as follows: the translational velocity \(\dot{\mathbf{r}}\) of the particle is elastically reflected [61; 62], and its orientation \(\mathbf{\hat{n}}\) is unchanged during the collision. Consequently, the active force \(\mathbf{F}_{0}=F_{0}\mathbf{\hat{n}}\) keeps pointing in the same direction; hence, the particle slides along the channel wall until a fluctuation in the orientational vector \(\mathbf{\hat{n}}\) redirects it towards the interior of the channel. While our consideration is restricted to this collision mechanism, it should be noted that rough-sphere collisions with the channel wall, like the particle's interactions with the solvent particles, would change its orientation during a collision. Such a rough-sphere collision mechanism depends on the scale of the channel-wall roughness.
Specifically, when the dynamics takes place in a periodic channel, the motion of an active particle governed by Eq. (1) is often determined by the linear velocity relaxation time \(\tau_{v}=M/\gamma_{t}\), angular velocity relaxation time \(\tau_{\omega}=I/\gamma_{r}\), reorientation time \(\tau_{r}=1/D_{r}\), spinning orientation time \(\tau_{s}=\gamma_{r}/|T_{0}|\), and characteristic diffusion time \(\tau=L^{2}/D_{t}\)[63]. It is convenient to use a dimensionless description [31] where length variables are scaled by \(L\), \(\mathbf{r}^{\prime}=\mathbf{r}/L\), and time by \(\tau\), \(t^{\prime}=t/\tau\). In the following, we shall dispense with the prime symbols for better readability. The Eq. (1) in dimensionless form reads
\[\begin{split} M^{*}\ddot{\mathbf{r}}(t)&=-\dot{\mathbf{r}}(t )-f\mathbf{\hat{x}}+f_{0}\mathbf{\hat{n}}(t)+\sqrt{2}\ \mathbf{\xi}(t),\\ I^{*}\ddot{\theta}(t)&=-\dot{\theta}(t)+t_{0}+\sqrt{2 \alpha}\ \zeta(t).\end{split} \tag{3}\]
Here the dimensionless parameters are given by \(M^{*}=\tau_{v}/\tau=MD_{t}/(\gamma_{t}L^{2})\), \(I^{*}=\tau_{\omega}/\tau=ID_{t}/(\gamma_{r}L^{2})\), \(f=FL/(D_{t}\gamma_{t})\), \(f_{0}=F_{0}L/(D_{t}\gamma_{t})\), \(t_{0}=T_{0}L^{2}/(D_{t}\gamma_{r})\), and \(\alpha=D_{r}\tau\). Notably, one could have used other choices of dimensionless units to find the number of independent parameters in the system [18].
In the following, we present results for the mass-based separation of particles in opposite directions. The observable of foremost interest for separating particles is the stationary average velocity along the principal axis of the channel (\(x\)-direction). The average velocity is calculated from simulations of the coupled Langevin equations (3) in the channel [64]. As an initial condition at time \(t=0\), the mixture of particles, with random orientations, of various masses was placed in a single cell of the channel located between \(x=0\) and \(x=1\), and the results are obtained by averaging over an ensemble of \(10^{4}\) stochastic trajectories.
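To make the numerical procedure concrete, a minimal single-trajectory sketch is given below (our illustration, not the code used to produce the reported data). It integrates the dimensionless Eq. (3) with an Euler-Maruyama scheme and applies a simplified version of the sliding-reflecting rule: specular reflection of the translational velocity at the sloped walls with the orientation left unchanged, plus a crude reflection at the vertical wall segment of the period boundary. All numerical values are illustrative, and the published curves average over the full ensemble of trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative dimensionless parameters (cf. Fig. 2); M, I stand for M*, I*
M, I = 0.1, 0.001
f, f0, t0, alpha = 1.0, 5.0, 0.0, 0.1
w_max, eps = 1.0, 0.1
w_min = eps * w_max
dt, n_steps = 1e-4, 1_000_000

def w_upper(x):
    """Upper half-width of the channel, Eq. (2), periodic with period L = 1."""
    s = x - np.floor(x)
    return w_max - (w_max - w_min) * s

x, y, theta = 0.5, 0.0, rng.uniform(0.0, 2 * np.pi)
vx, vy, omega = 0.0, 0.0, 0.0
x0 = x

for _ in range(n_steps):
    nx_, ny_ = np.cos(theta), np.sin(theta)
    # Euler-Maruyama update of Eq. (3); the static force acts along -x
    vx += dt * (-vx - f + f0 * nx_) / M + np.sqrt(2 * dt) / M * rng.standard_normal()
    vy += dt * (-vy + f0 * ny_) / M + np.sqrt(2 * dt) / M * rng.standard_normal()
    omega += dt * (-omega + t0) / I + np.sqrt(2 * alpha * dt) / I * rng.standard_normal()

    x_new, y_new = x + vx * dt, y + vy * dt
    theta += omega * dt

    # crude check of the vertical wall at the period boundary (width jumps w_min -> w_max)
    if np.floor(x_new) < np.floor(x) and abs(y) > w_min:
        vx = -vx
        x_new = x
    # sloped walls: reflect the velocity specularly, keep the orientation unchanged
    wu = w_upper(x_new)
    if abs(y_new) > wu:
        slope = -(w_max - w_min)               # dw_u/dx within one cell
        n = np.array([-slope, np.sign(y_new)]) # outward wall normal (upper/lower)
        n /= np.linalg.norm(n)
        v = np.array([vx, vy])
        if v @ n > 0:                          # only reflect if moving outwards
            v -= 2 * (v @ n) * n
            vx, vy = v
        y_new = np.sign(y_new) * wu            # place the particle back on the wall
    x, y = x_new, y_new

print("average velocity v ~", (x - x0) / (n_steps * dt))
```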
## III Particle separation
One of the best ways to separate particles is to move them in opposite directions, which in our setup is achieved by applying a small static force \(f\) that controls the transport direction. The average velocity \(v\) is plotted as a function of \(M^{*}\) in Fig. 2 for several values of the static force \(f\). In the absence of \(f\), all particles move towards the right side of the channel (\(v>0\)) due to rectification arising from the spatial asymmetry of the channel structure and the temporal asymmetry inherent in the particle dynamics. The rectification strongly depends on \(M^{*}\), and \(v\) has a peak at an optimal mass \(M^{*}_{\mathrm{op}}\). We have reported the dependence of \(M^{*}_{\mathrm{op}}\) on the rotational diffusion rate \(\alpha\), active force \(f_{0}\), aspect ratio of the channel \(\epsilon\), and active torque \(t_{0}\) in Ref. [31]. In the presence
of small \(f\), particles lighter than a given threshold follow \(f\), i.e., move towards the left (\(v<0\)), whereas particles heavier than this threshold drift towards the right. On slightly increasing \(f\), however, only particles with masses within a band of \(M^{*}\) move towards the right, and \(v\) exhibits a valley as a function of \(M^{*}\) at higher \(M^{*}\) values because, in this regime, effects due to the static force dominate over rectification effects. It is worth mentioning that \(M^{*}_{\rm op}\) is found to be independent of the static force and moment of inertia. As expected, when \(M^{*}\to\infty\), i.e., in the strongly underdamped limit, inertia dominates over the self-propulsion and static force, resulting in \(v\) tending to zero [65]. The mass-based separation of particles of the same size is illustrated schematically in Fig. 3.
The dependence of the average velocity \(v\) on the static force \(f\) is depicted in Fig. 4 for different values of \(M^{*}\). As is reflected in this figure, the mass-based separation of particles can be effectively controlled by suitably tuning \(f\). For instance, by choosing \(f=2.5\), the lighter particles of mass \(M^{*}=0.01\) move towards the left with \(v\approx-0.1\), whereas the heavier particles of mass \(M^{*}=0.1\) drift towards the right with \(v\approx 0.1\). In particular, there exists a stall force \(f_{\rm sf}\) for which \(v\) is zero. For \(f<f_{\rm sf}\), particles of mass \(M^{*}\) move towards the right, whereas the same particles drift towards the left for \(f>f_{\rm sf}\). Since \(v\) has a maximum as a function of \(M^{*}\) in the absence of the static force, \(f_{\rm sf}\) shows, as expected, a nonmonotonic dependence on \(M^{*}\) with the appearance of a peak at the optimal mass \(M^{*}_{\rm op}\) (see the inset). More interestingly, the nonmonotonic dependence of \(f_{\rm sf}\) on \(M^{*}\) implies that a single device of fixed shape can be used to continuously separate particles of any mass by suitably tuning \(f\) alone.
The dependence of the average velocity \(v\) on the self-propulsion force \(f_{0}\), rotational diffusion rate \(\alpha\), and active torque \(t_{0}\) is shown in Fig. 5 for different values of \(M^{*}\). For fixed values of \(\alpha,t_{0},\) and \(f\), the mass-based separation of particles can be controlled by suitably tuning \(f_{0}\) (see panel (a)). As is reflected from panels (b) and (c), the separation of particles can also be tuned by the values of \(\alpha\) and \(t_{0}\). We have verified that \(v\) is independent of the sign of \(t_{0}\) because our channel is symmetric about its principal axis, i.e., it has top-down symmetry. In particular, as \(f_{0}\to 0\), \(\alpha\to\infty\), or \(t_{0}\to\infty\), the motion of self-propelled particles approaches passive Brownian motion; thus, the rectification effect tends to vanish, and as a consequence, all particles follow the static force.
## IV Concluding remarks
To summarize, the paper presented a mechanism to separate self-propelled particles of different masses by purely entropic means. This mechanism relies on the entropic rectification of particles caused by the spatial asymmetry of the channel structure and temporal asymmetry inherent in their dynamics. In particular, the entropic rectification strongly depends on the mass of particles. An external static force can overcome the entropic rectification effect and lead to the mass-based
Figure 3: Schematic illustration of separating particles of the same size but different masses, placed initially in the center, in opposite directions. Under the combined action of the static force, shape of the channel structure, and active motion of particles, lighter particles are following the static force, i.e., are moving towards the left side of the channel, whereas heavier particles are drifting towards the right.
Figure 2: The plot of the average velocity \(v\) versus \(M^{*}\) for different values of the static force \(f\). Here and below, the statistical error for \(v\) is smaller than the symbol sizes, and the solid lines are guides to the eye. The set parameters are \(I^{*}=0.001\), \(\alpha=0.1\), \(f_{0}=5\), and \(t_{0}=0\).
separation of particles in opposite directions. Furthermore, the separation of particles can also be tuned by the self-propulsion force, rotational diffusion rate, and active torque. Viewed in this context, the presented mechanism has the potential to be experimentally implemented in asymmetric narrow channels and micropores, where entropic effects are prominent, to separate particles based on their masses.
## V Acknowledgment
I would like to thank Prof. Dr. Raymond Kapral for some valuable discussions. This work was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada and Compute Canada (www.computecanada.ca).
## Appendix A Choice of parameters
The presented mass-based separation mechanism can be applied to a wide variety of physical systems whose active agents are self-propelled by various mechanisms and are subject to either thermal or athermal noise. It is instructive to provide an estimate of parameters in real units, which is very useful for experimentalists. One should note that the choice of parameters places limits on the physical systems.
A class of systems that may be of interest is active aerosols [29; 66; 67], where our setup can lead to the mass-based separation of such particles in opposite directions. As an example, consider a mixture of identically-sized active particles of radii \(a\sim 200\) nm but with masses in the range \(M\sim 10^{-16}\) - \(10^{-15}\) kg and moment of inertia \(I\sim 10^{-31}\) kg m\({}^{2}\), in air at room temperature and pressure \(p=10^{4}-10^{5}\) Pa. The translational and rotational friction coefficients are given by \(\gamma_{t}\sim 10^{-11}\) kg/s and \(\gamma_{r}\sim 10^{-24}\) kg m\({}^{2}\)/s, respectively, corresponding to the viscosity of the medium \(\eta\sim 10^{-5}\) kg/(m s) (independent of pressure). The active and external static forces lie in the range \(F_{0}\sim 0.1-1\) pN and \(F\sim 0.1-1\) pN, respectively, and the active torque is taken to be zero. The asymmetric channel parameters are fixed as \(\mathrm{w_{max}}=L=10\)\(\mu\)m and \(\epsilon=0.1\).
The Stokes-Einstein relations, which hold for thermal noise, give \(D_{t}=k_{B}T/\gamma_{t}\sim 10^{-9}\) m\({}^{2}\)/s and \(D_{r}=k_{B}T/\gamma_{r}\sim 10^{3}\) s\({}^{-1}\). Correspondingly, we obtain \(\tau\sim 0.1\) s and, in dimensionless units, \(M^{*}\sim 10^{-4}-10^{-3}\), \(I^{*}\sim 10^{-6}\), \(f_{0}\sim 10^{2}-10^{3}\), \(f\sim 10^{2}-10^{3}\), and \(\alpha\sim 10^{2}\). From these parameters, we can see that the separation of particles cannot be achieved because the regime where inertial effects play a significant role cannot be accessed.
For athermal noise, we instead choose \(D_{t}\sim 10^{-8}-10^{-6}\) m\({}^{2}\)/s and \(D_{r}\sim 10^{3}\) s\({}^{-1}\). Then, we obtain \(\tau\sim 10^{-4}-10^{-2}\) s, \(M^{*}\sim 0.001-1\), \(I^{*}\sim 10^{-5}-10^{-3}\), \(f_{0}\sim 0.1-100\), \(f\sim 0.1-100\), and \(\alpha\sim 0.1-10\). The persistence length and Peclet number lie in the range \(l_{p}=v_{0}/D_{r}\sim 10-100\)\(\mu\)m and \(Pe=v_{0}/\sqrt{D_{t}D_{r}}\sim 0.3-30\), respectively. Under these conditions, the separation of particles can be achieved in opposite directions.
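For reference, a short script of the kind below (our illustration, not taken from the original work; all inputs are the order-of-magnitude values quoted above) reproduces the thermal-noise estimates and can be rerun with the athermal choices of \(D_{t}\) and \(D_{r}\).

```python
# Order-of-magnitude estimates for the thermal-noise case (illustrative helper); replace
# D_t and D_r with the athermal values quoted above to obtain the second set of numbers.
kB, T = 1.38e-23, 300.0            # Boltzmann constant (J/K), temperature (K)
L = 10e-6                          # channel period w_max = L (m)
gamma_t, gamma_r = 1e-11, 1e-24    # friction coefficients (kg/s, kg m^2/s)
M, I = 1e-16, 1e-31                # particle mass (kg), moment of inertia (kg m^2)
F0, F = 1e-13, 1e-13               # active and static forces (N), i.e. 0.1 pN

D_t = kB * T / gamma_t             # Stokes-Einstein, translational (m^2/s)
D_r = kB * T / gamma_r             # Stokes-Einstein, rotational (1/s)
tau = L**2 / D_t                   # characteristic diffusion time (s)

M_star = M * D_t / (gamma_t * L**2)
I_star = I * D_t / (gamma_r * L**2)
f0 = F0 * L / (D_t * gamma_t)
f = F * L / (D_t * gamma_t)
alpha = D_r * tau

print(f"D_t={D_t:.1e} m^2/s, D_r={D_r:.1e} 1/s, tau={tau:.2f} s")
print(f"M*={M_star:.1e}, I*={I_star:.1e}, f0={f0:.0f}, f={f:.0f}, alpha={alpha:.0f}")
```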
Figure 5: Dependence of the average velocity \(v\) on the self-propulsion force \(f_{0}\) (a), rotational diffusion rate \(\alpha\) (b), and active torque \(t_{0}\) (c) for different values of \(M^{*}\). For panel (a), \(\alpha=0.1\), \(t_{0}=0\), \(I^{*}=0.001\), and \(f=2.5\); for panel (b), \(f_{0}=5\), \(t_{0}=0\), \(I^{*}=0.001\), and \(f=1\); and for panel (c), \(f_{0}=5\), \(\alpha=0.1\), \(I^{*}=0.001\), and \(f=1\). |
2308.14659 | RESTORE: Graph Embedding Assessment Through Reconstruction | Following the success of Word2Vec embeddings, graph embeddings (GEs) have
gained substantial traction. GEs are commonly generated and evaluated
extrinsically on downstream applications, but intrinsic evaluations of the
original graph properties in terms of topological structure and semantic
information have been lacking. Understanding these will help identify the
deficiency of the various families of GE methods when vectorizing graphs in
terms of preserving the relevant knowledge or learning incorrect knowledge. To
address this, we propose RESTORE, a framework for intrinsic GEs assessment
through graph reconstruction. We show that reconstructing the original graph
from the underlying GEs yields insights into the relative amount of information
preserved in a given vector form. We first introduce the graph reconstruction
task. We generate GEs from three GE families based on factorization methods,
random walks, and deep learning (with representative algorithms from each
family) on the CommonSense Knowledge Graph (CSKG). We analyze their
effectiveness in preserving the (a) topological structure of node-level graph
reconstruction with an increasing number of hops and (b) semantic information
on various word semantic and analogy tests. Our evaluations show deep
learning-based GE algorithm (SDNE) is overall better at preserving (a) with a
mean average precision (mAP) of 0.54 and 0.35 for 2 and 3-hop reconstruction
respectively, while the factorization-based algorithm (HOPE) is better at
encapsulating (b) with an average Euclidean distance of 0.14, 0.17, and 0.11
for 1, 2, and 3-hop reconstruction respectively. The modest performance of
these GEs leaves room for further research avenues on better graph
representation learning. | Hong Yung Yip, Chidaksh Ravuru, Neelabha Banerjee, Shashwat Jha, Amit Sheth, Aman Chadha, Amitava Das | 2023-08-28T15:41:30Z | http://arxiv.org/abs/2308.14659v2 | # RESTORE: Graph Embedding Assessment Through Reconstruction
###### Abstract
Following the success of Word2Vec embeddings, graph embeddings (GEs) have gained substantial traction. GEs are commonly generated and evaluated extrinsically on downstream applications, but intrinsic evaluations of the original graph properties in terms of topological structure and semantic information have been lacking. Understanding these will help identify the deficiency of the various families of GE methods when vectorizing graphs in terms of preserving the relevant knowledge or learning incorrect knowledge. To address this, we propose RESTORE, a framework for intrinsic GEs assessment through graph reconstruction. We show that reconstructing the original graph from the underlying GEs yields insights into the relative amount of information preserved in a given vector form. We first introduce the graph reconstruction task. We generate GEs from three GE families based on factorization methods, random walks, and deep learning (with representative algorithms from each family) on the CommonSense Knowledge Graph (CSKG). We analyze their effectiveness in preserving the (a) topological structure of node-level graph reconstruction with an increasing number of hops and (b) semantic information on various word semantic and analogy tests. Our evaluations show that the deep learning-based GE algorithm (SDNE) is overall better at preserving (a) with a mean average precision (mAP) of 0.54 and 0.35 for 2 and 3-hop reconstruction, respectively, while the factorization-based algorithm (HOPE) is better at encapsulating (b) with an average Euclidean distance of 0.14, 0.17, and 0.11 for 1, 2, and 3-hop reconstruction, respectively. The modest performance of these GEs leaves room for further research avenues on better graph representation learning.
How effective is graph embedding in preserving both graph topology and semantic information when transforming a graph into a vector?
An embedding is a mapping of a discrete set of objects to a continuous vector space. Embeddings have been successful in providing effective features for many neural network models. A major attraction of vector space representations is that they represent the objects in a way that captures the relationships and patterns within the data from large unannotated corpora. For example, in Natural Language Processing (NLP), a word embedding maps words in a vocabulary to vectors in a continuous vector space such that semantically similar words are close together. As the use of graph representations rises, graph embedding has gained traction. In graph analysis, a graph embedding (GE) maps nodes in a graph to vectors in a continuous space such that nodes that are structurally similar or connected in the graph are close together in the vector space.
Vector representations, however, are linguistically opaque and non-interpretable for a human. While NLP word analogies such as the _king-queen_ analogy popularized by Word2Vec Mikolov et al. (2013) lend themselves as a standard practice for the intrinsic evaluation of word embeddings, there is a lack of consensus or standard for the intrinsic evaluation of GEs. While a plethora of GE algorithms have been proposed Ahmed et al. (2013); Tang et al. (2015); Wang et al. (2016); Ou et al. (2016); Belkin and Niyogi (2001); Perozzi, Al-Rfou, and Skiena (2014); Grover and Leskovec (2016); Kipf and Welling (2016); Roweis and Saul (2000) and evaluated based on various graph analytic tasks, obtaining a vector representation of each node of a graph that preserves the global structure of the graph and the local connections between individual nodes is challenging. If we assume that the performance improvement of current deep neural networks on downstream applications, as measured by extrinsic evaluations, is attributed to the use of GEs Makarov et al. (2021), then it is imperative that the GEs preserve the right graph structure and semantics. As discussed in Xu et al. (2017); Liu et al. (2019), the embeddings generated by the current state-of-the-art approaches can only preserve part of the topological structure. While Bollegala and Bao (2018) have shown that combining different source _(word)_ embeddings into a coherent common meta-embedding space produces a more accurate and complete representation, the GE counterpart is yet to be studied.
Orthogonally, generating GEs from large graphs is computationally expensive due to their size and real-world complexity Fu et al. (2021); Goyal and Ferrara (2018). Extracting only subgraphs relevant to the entities or communities of interest (domain-specific) is generally the strategy for learning semantically relevant embeddings Fu et al. (2021). However, determining the amount of knowledge to extract (i.e., number
of hops) for a given task is non-trivial Ribeiro et al. (2021). The degree to which the existing GE algorithms preserve the graph properties based on the number of hops is yet to be investigated.
## Our Contributions
The proposed RESTORE is an evaluation framework to assess the quality of embeddings generated for a given node by different families of GE algorithms through graph reconstruction along three dimensions:
1. The **type** of graph properties preserved: (a) topological structure and (b) semantic information between nodes.
2. Which **family** of GE algorithm(s) are **better** at preserving 1(a) versus 1(b).
3. The degree of information that is preserved (retained, added, and missed) by the various GE algorithms with increasing **number of hops**.
It is difficult to determine which GE is capable and appropriate for a specific downstream application based on the aforementioned dimensions without exhaustive experimentation. Through RESTORE, we intend to shed light on the performance of each GE family through comparative analysis. We omit the typed relations between nodes and thereof their reconstruction assessment as the task falls under the extreme classification category and beyond the scope of this paper. Our contributions can be summarized as follows:
* Understand the effectiveness of graph embeddings in retaining graph topological structure and/or semantic information with increasing graph size (number of hops).
* Reconstruct the original graph from the vector produced by graph embedding methods to understand the degree of topological/semantic information captured.
* Evaluate the semantic information captured by GEs with word similarity and analogy tests (akin to the _king - man + woman = queen_ analogy, a standard in NLP).
The paper is organized as follows. We first provide the preliminaries required to understand GE and introduce the three families of GE algorithms. We then describe the graph reconstruction task and our experimental setup in detail. Next, we assess the topological structure reconstruction accuracy and evaluate the degree of semantic information retained by the different GE algorithms with various word semantic and analogy tests. Finally, we discuss the findings and limitations of this work and draw our conclusions.
## Graph Embeddings - Definitions, Preliminaries and Background
**Graph:** A graph \(G=(V,E)\) is a collection of node set, \(V=\{v_{i}|i=1,...,n\}\) and edge set, \(E\subseteq V\times V\). In this paper, we are assessing a directed and unweighted graph such that the edge weight is uniformly 1. The adjacency matrix \(W\) of a graph \(G\) is denoted as:
\[W_{ij}=\left\{\begin{array}{ll}1&if(v_{i},v_{j})\in E\\ 0&otherwise\end{array}\right. \tag{1}\]
**Graph embedding:** Given a graph \(G\), a graph embedding, \(Y\) is a mapping \(f:v_{i}\to y_{i}\in R^{d}\forall\,i\in[n]\) such that \(d\ll|V|\) where \(d\) is the dimension size and the scoring function \(f\) maps each node to a low-dimensional feature vector space to preserve the topological structure and semantic information of the graph \(G\). That is, if there exists a link between \(v_{i}\) and \(v_{j}\), the corresponding embeddings \(y_{i}\) and \(y_{j}\) should be close to each other in the projected vector space.
### Families of Graph Embedding Algorithms
Numerous GE algorithms introduced in the past decade can be categorized into three families Goyal and Ferrara (2018): (i) factorization (e.g., Locally Linear Embedding, Laplacian Eigenmaps, HOPE), (ii) random walk (e.g., Node2vec), and (iii) deep learning-based (e.g., SDNE). Given the applicability of GEs in various downstream applications Ameer et al. (2019); Goyal and Ferrara (2018), we are interested in assessing which family of GE algorithms is better at preserving the topological structure versus information between individual nodes. Hence, we select representative algorithms from each family in our evaluation, and the following provides a background of these GE algorithms.
### Family 1: Factorization-based Algorithms.
Factorization-based algorithms represent the connections between nodes in the form of a matrix and generate embeddings by factorizing the matrix based on different matrix properties.
_Locally Linear Embedding (LLE)_Roweis and Saul (2000): This approach assumes a linear combination of the node's neighbors and is designed to preserve first-order proximity (i.e., one hop). It recovers the embedding \(Y^{N\times d}\) from the locally linear fits by minimizing \(\phi(Y)=\sum_{i}\left|Y_{i}-\sum_{j}W_{ij}Y_{j}\right|^{2}\). However, it is not scalable due to its complexity of \(O(V^{2})\) where \(V\) is the number of vertices.
_Laplacian Eigenmaps (LAP)_Belkin and Niyogi (2001): This approach preserves the first-order proximity of a network structure by minimizing \(\phi(Y)=\frac{1}{2}\sum_{i,j}||Y_{i}-Y_{j}||^{2}W_{ij}\). Similar to LLE, it has a complexity of \(O(V^{2})\).
_High-Order Proximity-preserved Embedding (HOPE)_Ou et al. (2016): This approach preserves asymmetric transitivity and higher order proximity by minimizing \(||\mathbf{S}-\mathbf{Y}_{s}\mathbf{Y}_{t}^{T}||_{F}^{2}\), where \(S\) is the similarity matrix. Asymmetric transitivity describes the correlation among directed edges in the graph. Asymmetric transitivity between nodes \(u\) and \(v\) states that if there is a directed path from \(u\) to \(v\), there is likely a directed edge from \(u\) to \(v\). HOPE uses generalized Singular Value Decomposition (SVD) Van Loan (1976) to obtain the embedding efficiently with a linear time complexity with respect to the number of edges. The downside of this GE family is that they are, in general, not capable of approximating an arbitrary function nor structural equivalence (i.e., the extent to which two nodes are connected to the same others) unless explicitly designed into their learning function \(\phi(Y)\).
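To make the factorization picture concrete, the following is a small illustrative sketch (our own toy example, not the implementation evaluated in this paper) of HOPE with the Katz proximity \(S=(\mathbf{I}-\beta A)^{-1}\beta A\) on a random directed graph; HOPE itself relies on a generalized SVD for scalability, whereas a dense SVD suffices for small graphs.

```python
import numpy as np
import networkx as nx

def hope_embedding(G, d=8, beta=0.01):
    """Source/target embeddings such that S ~ Ys @ Yt.T, with S the Katz similarity matrix."""
    A = nx.to_numpy_array(G)                              # adjacency matrix (directed)
    n = A.shape[0]
    S = np.linalg.solve(np.eye(n) - beta * A, beta * A)   # Katz proximity
    U, sigma, Vt = np.linalg.svd(S)
    U, sigma, Vt = U[:, :d], sigma[:d], Vt[:d, :]
    return U * np.sqrt(sigma), Vt.T * np.sqrt(sigma)      # Ys, Yt (each n x d)

G = nx.gnp_random_graph(30, 0.1, directed=True, seed=1)   # toy graph
Ys, Yt = hope_embedding(G)
S_reconstructed = Ys @ Yt.T                               # approximate proximity matrix
```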
### Family 2: Random Walk-based Algorithms.
Random walking is a popular approach to approximate graph properties such as betweenness Newman (2005) and similarities Fouss et al. (2007) between nodes.
_Node2Vec_[16]: This approach captures network structure by preserving the network neighborhood of each node, maximizing the probability of occurrence of subsequent nodes in fixed-length random walks. Nodes that commonly appear together are embedded closely in the embedding space. It employs a biased-random neighborhood sampling strategy that explores the neighborhoods in a breadth-first search (BFS) and depth-first search (DFS) fashion and subsequently feeds them to the Skip-Gram model. Unlike factorization methods, the mixture of community and structural equivalences can be approximated by varying the random walk parameters.
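As an illustration of this family, the sketch below (hypothetical helper names; assumes networkx and gensim version 4 or later, and is not the implementation assessed in our experiments) generates walks and feeds them to a skip-gram model. With both the return and in-out parameters equal to 1, the setting used later in this paper, the biased node2vec walk reduces to a uniform random walk.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def generate_walks(G, num_walks=10, walk_len=80, seed=0):
    """Uniform random walks (the p = q = 1 special case of node2vec's biased walks)."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in G.nodes():
            walk = [start]
            while len(walk) < walk_len:
                nbrs = list(G.successors(walk[-1])) if G.is_directed() else list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(v) for v in walk])
    return walks

G = nx.karate_club_graph()                       # toy graph for illustration
walks = generate_walks(G)
model = Word2Vec(walks, vector_size=64, window=10, min_count=0, sg=1, epochs=5)
embeddings = {v: model.wv[str(v)] for v in G.nodes()}   # node -> embedding vector
```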
### Family 3: Deep Learning-based Algorithms.
Deep learning research has seen increasing applications of deep neural networks on graphs due to their ability to approximate a wide range of functions following the universal approximation theorem [10]. One such application is deep auto-encoders [23] due to their ability to model non-linearity in graphs.
_Structural Deep Network Embeddings (SDNE)_[23]: This approach is a semi-supervised model which uses a coupled deep auto-encoder to embed graphs. It uses highly non-linear functions to capture the non-linearity in network structure by jointly optimizing the first-order and second-order proximities. It consists of an unsupervised part where an autoencoder is employed to embed the nodes such that the reconstruction error is minimized and a supervised part based on Laplacian Eigenmaps [1] where a penalty is applied when similar nodes are mapped far from one another in the embedding space. The trained weights of the auto-encoder can be interpreted as the representation of the structure of the graph.
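The following compact sketch (our simplification in PyTorch, not the original SDNE implementation) illustrates the coupled objective: a weighted reconstruction of each adjacency row (second-order proximity, with a penalty \(b>1\) on observed edges) plus a Laplacian-eigenmaps-style term that keeps linked nodes close in the embedding space (first-order proximity). Layer sizes and training settings are illustrative.

```python
import torch
import networkx as nx

G = nx.karate_club_graph()
A = torch.tensor(nx.to_numpy_array(G), dtype=torch.float32)
n, d = A.shape[0], 16
alpha, b = 1e-5, 5.0                  # weight of the first-order term, edge penalty

encoder = torch.nn.Sequential(torch.nn.Linear(n, 50), torch.nn.ReLU(), torch.nn.Linear(50, d))
decoder = torch.nn.Sequential(torch.nn.Linear(d, 50), torch.nn.ReLU(), torch.nn.Linear(50, n))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

B = torch.where(A > 0, torch.full_like(A, b), torch.ones_like(A))
L = torch.diag(A.sum(dim=1)) - A      # graph Laplacian
for _ in range(200):
    Y = encoder(A)                    # embeddings; each adjacency row is the node's input
    A_hat = decoder(Y)
    loss_2nd = (((A_hat - A) * B) ** 2).sum()     # weighted reconstruction error
    loss_1st = torch.trace(Y.T @ L @ Y)           # = 1/2 * sum_ij A_ij ||y_i - y_j||^2
    loss = loss_2nd + alpha * loss_1st
    opt.zero_grad()
    loss.backward()
    opt.step()

embeddings = encoder(A).detach()
```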
Prior studies [12, 13] explore how each GE family represents the global structure at the graph level; however, the degree to which they preserve the structural properties and information at the _node_ level based on different levels of order proximity (i.e., number of hops) has not been explored. This is one of the main motivations for our research.
## Graph Embedding Assessment Through Reconstruction
We can interpret GE as a representation that encapsulates graphical data: topological structure and local information between nodes. Thus, a "good" GE is expected to accurately reconstruct the graph (Figure 1) and retain the underlying
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multicolumn{6}{c}{**Commonsense Knowledge Graph (CSKG)**[14]} \\ \hline & \(V\) & & & 2,160,968 & \\ & \(E\) & & & 6,001,531 & \\ \hline
**Properties** & **Min. \(V\)** & **Avg. \(V\)** & **Max. \(V\)** & **Min. \(E\)** & **Avg. \(E\)** & **Max. \(E\)** \\ \hline
1-hop & 1 & 4 & 111 & 1 & 3 & 110 \\
2-hop & 1 & 215 & 10,287 & 1 & 250 & 21,395 \\
3-hop & 1 & 6,212 & 90,573 & 1 & 16,595 & 442,727 \\ \hline \hline \multicolumn{6}{c}{**Word Semantic and Analogy Datasets**} \\ \hline
**Dataset** & **No. of unique \(V\)** & **No. of \(V\) overlap with CSKG** & **Percentage overlap (\%)** \\ \hline Google Analogy [12] & 919 & 906 & 98.50 \\ MSR Analogy [12] & 982 & 869 & 88.50 \\ MEN [10] & 751 & 751 & 100.00 \\ MTurk [1] & 499 & 499 & 100.00 \\ WS353 [14] & 437 & 437 & 100.00 \\ RG65 [12] & 48 & 48 & 100.00 \\ RW [15] & 2951 & 2926 & 99.00 \\ SimLex99 [13] & 1028 & 1028 & 100.00 \\ \hline Total unique \(V\) overlap with CSKG across all datasets & - & 5703 & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Dataset statistics.**\(V\) is the collection of node set and \(E\) is the edge set.
Figure 1: **CSKG Subgraphs Reconstruction.** In this illustration example, we generate the 1, 2, and 3-hop subgraphs for the node of interest (/c/en/smartphone; c: ConceptNet, en: English) and train the corresponding GEs with _Node2Vec_. In each bounding box, the graph on the left represents the original graph and the graph on the right represents the reconstructed graph. Red dotted lines indicate missing edges and red solid lines indicate added edges compared to the original graph.
semantics.
### Topological Structure
While there are various methods to evaluate how much graph structure is preserved by embeddings [14], we are interested in assessing the notion that nodes that are structurally similar or connected in the graph lie close together in the vector space. Hence, we opt for the reconstruction assessment based on node proximity [1]. For each GE algorithm, we first reconstruct the adjacency matrix, \(W\), based on the proximity of nodes from the generated embeddings. We then rank pairs of nodes according to their normalized proximity (edge weight) score (keeping only edges with weight greater than the set threshold of _0.5_) and calculate the reconstruction precision based on the ratio of actual links in the top \(k\) predictions. Figure 1 and Algorithm 1 illustrate the process of reconstructing the original graph \(G\) from a trained GE \(Y\) with Node2Vec.
**Evaluation Metric**: We use Precision at \(k\) (Prec@\(k\)) and mean Average Precision (mAP) for evaluating the graph reconstruction. Prec@\(k\) is the fraction of correct predictions in top \(k\) predictions. It is defined as \(Prec@k=\frac{|E_{pred}(1:k)\cap E_{obs}|}{k}\), where \(E_{pred}(1:k)\) are the top \(k\) predictions and \(E_{obs}\) are the observed edges. The \(k\) used in our experiments is fractionalized over the total number of nodes, \(V\) (i.e., 0.1 denotes 10% of the reconstructed graph). mAP computes the average precision over all nodes and is defined as \(mAP=\frac{\sum_{i}AP(i)}{|V|}\).
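A minimal sketch of this scoring procedure is given below (our simplification with hypothetical names; the proximity here is a plain dot product of source and target vectors, and mAP follows analogously by averaging per-node average precision).

```python
import numpy as np

def reconstruction_precision(Y_src, Y_tgt, observed_edges,
                             ks=(0.1, 0.2, 0.4, 0.6, 0.8, 1.0), thr=0.5):
    """Prec@k of the graph reconstructed from embeddings, with k a fraction of |V|."""
    n = Y_src.shape[0]
    W = Y_src @ Y_tgt.T                                   # pairwise proximity scores
    W = (W - W.min()) / (W.max() - W.min() + 1e-12)       # normalise to [0, 1]
    cand = [(W[i, j], (i, j)) for i in range(n) for j in range(n)
            if i != j and W[i, j] >= thr]                 # keep edges above the threshold
    cand.sort(key=lambda t: -t[0])                        # rank predicted edges by score
    obs = set(observed_edges)
    prec = {}
    for frac in ks:
        k = max(1, int(frac * n))                         # k as a fraction of |V|
        prec[frac] = sum(e in obs for _, e in cand[:k]) / k
    return prec

# usage (node ids assumed to be 0..n-1): reconstruction_precision(Ys, Yt, list(G.edges()))
```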
### Semantic Information
To assess the learned semantic information between nodes, we adopt the word embeddings evaluation framework proposed by [1], which
Figure 2: **The RESTORE framework to assess the degree of topological structure and semantic information preserved by GEs during graph reconstruction. In this illustration example, our node of interest is (\(v_{1}\): /c/en/smartphone). We first generate the corresponding 1-hop node-level subgraphs, \(g_{i}\), by accounting all in and out-degrees, for every immediate 1-hop neighbors of \(v_{1}\) (\(v_{2}\): /c/en/mobile_phone/n; \(v_{3}\): /c/en/cellular_telephone/n; c: ConceptNet; en: English). We then generate the corresponding subgraph embedding, \(y_{i}\), with various families of GE algorithms. Thereafter, we reconstruct the adjacency matrix, \(w_{recon}\) by computing the pairwise edge weight between nodes (e.g., dot product of two vectors) and normalizing them. A set threshold, \(t\) is applied to preserve only edges with a probability \(\geq t\) for the reconstructed graph. To assess the learned semantic information between nodes, \(V\), we compute the pairwise Euclidean distance between the GEs. We iterate the same process for 2-hop and 3-hop graph reconstructions for the node of interest.**
consists of an array of datasets divided into two evaluation categories: _Similarity_ and _Analogy_. The _Similarity_ datasets comprise pairs of words and assigned a mean rank by human annotators, while the _Analogy_ datasets consist of quadruples: two pairs of words bounded by an intrinsic relation (e.g., play-played, make-made).
_Similarity:_ Multimodal Distributional Semantics (MEN) [1], MTurk [1], WS353 (Word Similarity) [13], RG65 [14], Rare Words (RW) [15], and Similarity Estimation (SimLex99) [15]
_Analogy:_ Google Analogy [16] and MSR Analogy [16]
**Evaluation Metric**: We assume the notion that semantically similar nodes are positioned closely to one another in the vector space. We use Euclidean Distance to compute the distance, \(D\), between two node embeddings, \(y_{1}\) and \(y_{2}\), which is expressed as \(D\left(y_{1},y_{2}\right)=\sqrt{\sum_{i=1}^{n}\left(y_{2_{i}}-y_{1_{i}}\right) ^{2}}\) where \(n\) is the size of the vector.
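In practice this evaluation is just a loop over the word pairs of each dataset; a short sketch (hypothetical names) is:

```python
import numpy as np

def average_pair_distance(embeddings, word_pairs):
    """Mean Euclidean distance over dataset word pairs whose words appear in the graph."""
    dists = [np.linalg.norm(embeddings[w1] - embeddings[w2])
             for w1, w2 in word_pairs if w1 in embeddings and w2 in embeddings]
    return float(np.mean(dists)) if dists else float("nan")
```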
## Experiments and Analysis
We re-implement and train the different GE algorithms with GEM [1] on the Commonsense Knowledge Graph (CSKG) [15]. The rationale for using CSKG is two-fold: (1) with the increasing research on incorporating relevant knowledge (i.e., in the form of infusing GE trained on extracted subgraphs onto neural networks) on various downstream tasks [13], we are interested in assessing the quality of the GEs generated from a popular knowledge graph with varying types of graph properties; and (2) the CSKG corpus contains at least 88.5% of the vocabularies present in our word semantic and analogy tests (Table 1), which allows for a wide assessment of the degree of semantic information preserved by the GEs. All experiments are performed on a CentOS Linux 7 system with a hyperthreaded Intel Xeon Platinum 8260 processor with 24 cores and a clock speed of 2.4 GHz, 200 GB RAM, and a Tesla V100 32 GB GPU.
### Dataset
CSKG combines seven popular sources into a consolidated representation: ATOMIC, ConceptNet, FrameNet, Roget, Visual Genome, Wikidata, and WordNet [15]. It covers a rich spectrum of knowledge
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline
**GE Algorithm** & **Hop** & **mAP** & **[email protected]** & **[email protected]** & **[email protected]** & **[email protected]** & **[email protected]** & **[email protected]** \\ \hline \multirow{3}{*}{Node2vec} & 1-hop & 0.53 & 0.72 & 0.65 & 0.67 & 0.37 & 0.38 & 0.38 \\ & 2-hop & 0.17 & 0.25 & 0.18 & 0.19 & 0.19 & 0.20 & 0.21 \\ & 3-hop & 0.16 & 0.24 & 0.17 & 0.16 & 0.14 & 0.13 & 0.13 \\ \hline \multirow{3}{*}{HOPE} & 1-hop & **0.92** & 1.00 & 0.99 & 0.95 & 0.91 & 0.85 & 0.83 \\ & 2-hop & 0.42 & 0.89 & 0.43 & 0.38 & 0.30 & 0.27 & 0.24 \\ & 3-hop & 0.30 & 0.99 & 0.24 & 0.18 & 0.14 & 0.11 & 0.10 \\ \hline \multirow{3}{*}{SDNE} & 1-hop & 0.67 & 0.73 & 0.69 & 0.69 & 0.66 & 0.63 & 0.63 \\ & 2-hop & **0.54** & 0.52 & 0.34 & 0.34 & 0.33 & 0.28 & 0.25 \\ & 3-hop & **0.35** & 0.71 & 0.53 & 0.34 & 0.23 & 0.18 & 0.14 \\ \hline \multirow{3}{*}{LAP} & 1-hop & 0.40 & 0.55 & 0.27 & 0.38 & 0.42 & 0.41 & 0.36 \\ & 2-hop & 0.38 & 0.69 & 0.31 & 0.34 & 0.33 & 0.33 & 0.30 \\ & 3-hop & 0.14 & 0.51 & 0.22 & 0.18 & 0.13 & 0.11 & 0.09 \\ \hline \multirow{3}{*}{LLE} & 1-hop & 0.71 & 1.00 & 0.89 & 0.83 & 0.54 & 0.51 & 0.50 \\ & 2-hop & 0.20 & 0.83 & 0.45 & 0.36 & 0.30 & 0.23 & 0.21 \\ \cline{1-1} & 3-hop & 0.13 & 0.48 & 0.20 & 0.16 & 0.14 & 0.11 & 0.10 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **mAP and precision metrics of various GE algorithms on the CSKG subgraphs reconstruction.** The higher, the better. Prec@{0.1, 0.2, 0.4, 0.6, 0.8, 1.0} denotes precision at {10%, 20%, 40%, 60%, 80%, 100%} of the reconstructed subgraph.
Figure 3: **Precision@\(k\) for 1, 2, and 3-hop graph reconstructions.** Prec@\(k\) is the fraction of correct predictions in top \(k\) predictions where \(k\) is fractionalized over the total number of nodes, \(V\) (i.e., 0.1 denotes 10% of the reconstructed subgraph).
ranging from everyday to event-centric knowledge and from taxonomies to visual knowledge. It is modeled as a _directed_, _unweighted_, and _hyper-relational_ graph with 2,160,968 nodes and 6,001,531 edges. In our assessment, we are _not_ training and evaluating the GE algorithms on the entirety of the CSKG graph due to (1) limitations in algorithmic scalability and (2) the fact that, in most downstream applications, extracting only subgraphs relevant to the entities or communities of interest (domain-specific) is generally the strategy for learning semantically relevant embeddings. Instead, for each vocabulary that is present in both CSKG and the word semantic and analogy tests (a total of 5703 unique vocabularies overlap with CSKG across all datasets), we first retrieve and generate the corresponding subgraphs of size 1, 2, and 3-hop. We then train and evaluate embeddings for these subgraphs with each GE algorithm. Table 1 shows a summary of the dataset statistics.
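As an illustration of this extraction step (the variable `cskg` and the helper name are hypothetical), the \(k\)-hop neighbourhood of a vocabulary node can be obtained with networkx by expanding over both in- and out-edges, as in Figure 2:

```python
import networkx as nx

def k_hop_subgraph(G, node, k):
    """k-hop neighbourhood of `node`, following in- and out-edges but keeping directions."""
    return nx.ego_graph(G, node, radius=k, undirected=True)

# e.g. one subgraph per hop size for a vocabulary node of interest:
# g1, g2, g3 = (k_hop_subgraph(cskg, "/c/en/smartphone", k) for k in (1, 2, 3))
```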
### Hyperparameters Selection
Given the cost, time, and increasing complexity of the GE algorithms in generating 1, 2, and 3-hop subgraphs and their corresponding GEs for each vocabulary (node of interest), we are interested in assessing the different off-the-shelf characteristics and performance of the various GE families with the commonly-used list of hyper-parameters.
**Node2Vec:** We use a _context size_ of 10; _walk length_ of 80; and both _inout_ and _return_ parameter of 1.
**HOPE:** We use the Katz index, which describes the similarity between \(v_{i}\) and \(v_{j}\), to compute the similarity matrix \(S\), with the attenuation factor \(\beta\) set to 0.01.
**SDNE:** We use 2 hidden layers with 50 and 15 hidden units for the encoder/decoder layer respectively; _alpha_ of \(1e^{-5}\); _beta_ of 5; both L1 and L2 regularization of \(1e^{-6}\); _rho_ of 0.3; _xeta_ of 0.01; and a batch size of 100.
There are no hyperparameters for _LAP_ and _LLE_. In addition, the following parameters are set _constant_ across all settings whenever applicable: the embedding sizes, \(d\), for 1-hop, 2-hop, and 3-hop graphs are set to 2, 64, and 128 respectively with 50 training iterations.
## Results
We report our assessment of the quality of embeddings generated by different families of GE algorithms through graph reconstruction with increasing levels of order proximity in Table 2 and Figure 3. Table 3 shows the average number of added, missing nodes and edges by various GE algorithms during the subgraphs reconstruction with increasing number of hops. Table 4 reports the average Euclidean distance measure of the various GEs on the array of word semantic and analogy tests. Next, we summarize our observations along each dimension across all GE algorithms.
**1. Understand the effectiveness of graph embeddings in preserving the type of graph properties: (a) topological structure and (b) semantic information between nodes with increasing graph size.** For (a), we observed in Table 2 that HOPE outperforms others in 1-hop graph reconstruction with the highest mAP of 0.92, although its precision gradually decreases as a larger fraction of the graph is reconstructed. SDNE demonstrates overall better performance at 2 and 3-hop graph reconstructions with mAP of 0.54 and 0.35, respectively. For
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**GloVe**} & \multicolumn{3}{c}{**Node2Vec**} & \multicolumn{3}{c}{**HOPE**} & \multicolumn{3}{c}{**SDNE**} & \multicolumn{3}{c}{**LAP**} & \multicolumn{3}{c}{**LLE**} \\ \cline{3-13} & & \multicolumn{3}{c}{**1-hop**} & \multicolumn{3}{c}{**2-hop**} & \multicolumn{3}{c}{**1-hop**} & \multicolumn{3}{c}{**2-hop**} & \multicolumn{3}{c}{**3-hop**} & \multicolumn{3}{c}{**1-hop**} & \multicolumn{3}{c}{**2-hop**} & \multicolumn{3}{c}{**3-hop**} & \multicolumn{3}{c}{**3-hop**} & \multicolumn{3}{c}{**3-hop**} \\ \hline Google Analogy & 2.09 & 1.28 & 1.89 & 4.26 & 0.12 & 0.17 & 0.08 & 1.92 & 2.90 & 3.01 & 0.63 & 1.15 & 1.26 & 0.48 & 0.76 & 0.96 \\ MSR & 0.63 & 1.26 & 1.79 & 4.20 & 0.10 & 0.15 & 0.07 & 4.30 & 4.69 & 4.96 & 4.96 & 0.72 & 1.21 & 1.33 & 0.18 & 1.06 & 1.25 \\ MEN & 0.51 & 1.28 & 1.79 & 4.16 & 0.12 & 0.17 & 0.09 & 4.41 & 5.18 & 5.79 & 0.73 & 1.22 & 1.35 & 0.62 & 0.65 & 0.76 \\ MTurk & 1.99 & 1.31 & 1.83 & 4.12 & 0.17 & 0.18 & 0.12 & 1.15 & 1.27 & 1.33 & 0.65 & 1.22 & 1.33 & 0.58 & 1.05 & 1.36 \\ WS353 & 2.65 & 1.35 & 1.78 & 3.98 & 0.16 & 0.17 & 0.16 & 1.08 & 1.02 & 1.74 & 0.79 & 1.15 & 1.25 & 0.66 & 1.05 & 1.47 \\ RG65 & 0.75 & 1.28 & 1.79 & 4.41 & 0.15 & 0.17 & 0.10 & 0.63 & 1.73 & 2.05 & 0.74 & 1.18 & 1.29 & 0.58 & 1.07 & 1.28 \\ RW & 0.96 & 1.27 & 1.63 & 3.82 & 0.15 & 0.16 & 0.14 & 1.88 & 2.58 & 2.67 & 0.72 & 1.17 & 1.51 & 0.50 & 1.07 & 0.94 \\ ShILEX99 & 2.31 & 1.29 & 1.75 & 3.89 & 0.16 & 0.17 & 0.12 & 2.76 & 4.82 & 4.93 & 0.67 & 1.20 & 0.92 & 0.63 & 1.00 & 1.14 \\ \hline Average & 1.49 & 1.29 & 1.78 & 4.12 & **0.14** & **0.17** & **0.11** & 2.26 & 3.04 & 3.31 & 0.70 & 1.19 & 1.28 & 0.53 & 0.96 & 1.14 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Word semantic and analogy tests based on pairwise Euclidean distance measure.** The lower, the better.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**GE Algorithm**} & \multirow{2}{*}{**Hop**} & \multicolumn{3}{c}{**Nodes, V**} & \multicolumn{3}{c}{**Edges, E**} \\ \cline{3-6} & & **Avg. no. of** & **Avg. no. of** & **Avg. no. of** & **Avg. no. of** & **Avg. no. of** & **Avg. no. of** & **Avg. no. of** \\ & & **V** & **added V** & **missing V** & **E** & **added E** & **missing E** \\ \hline \multirow{3}{*}{Node2vec} & 1-hop & 4 & 0 & 0 & 3 & 40 & 0 \\ & 2-hop & 215 & 0 & 0 & 250 & 102,370 & 15 \\ & 3-hop & 6,212 & 0 & 0 & 16,595 & 145,416 & 2,265 \\ \hline \multirow{3}{*}{HOPE} & 1-hop & 4 & 0 & 0 & 3 & 1 & 0 \\ & 2-hop & 215 & 0 & 0 & 250 & 59,889 & 15 \\ & 3-hop & 6,212 & 0 & 3,352 & 16,595 & 110,450 & 1 \\ \hline \multirow{3}{*}{SDNE} & 1-hop & 4 & 0 & 0 & 3 & 10 & 1 \\ & 2-hop & 215 & 0 & 50 & 250 & 50,239 & 134 \\ & 3-hop & 6,212 & 0 & 1,812 & 16,595 & 90,141 & 19,905 \\ \hline \multirow{3}{*}{LAP} & 1-hop & 4 & 0 & 1 & 3 & 44 & 0 \\ & 2-hop & 215 & 0 & 0 & 250 & 57,936 & 9 \\ & 3-hop & 6,212 & 0 & 8 & 16,595 & 155,256 & 21 \\ \hline \multirow{3}{*}{LLE} & 1-hop & 4 & 0 & 0 & 3 & 18 & 0 \\ & 2-hop & 215 & 0 & 0 & 250 & 91,056 & 13 \\ \cline{1-1} & 3-hop & 6,212 & 0 & 4 & 16,595 & 162,860 & 7 \\ \hline \end{tabular}
\end{table}
Table 3: **The average number of added, missing nodes and edges by various GE algorithms during reconstruction.**
(b), Table 4 shows that HOPE performs the best across all the word similarity and analogy tests and its performance does not fluctuate with the increasing number of hops (1-3), with the lowest average Euclidean distance of 0.14, 0.17, and 0.11 respectively. On the contrary, SDNE performs the worst with an average distance of 2.26, 3.04, and 3.31 with increasing order of proximity.
**2. Which _family_ of GE algorithm(s) are _better_ at preserving 1(a) versus 1(b).** For 1(a), all families of GE algorithms show a similar _downward_ trend in their reconstruction performance with increasing number of hops and graph size (Figure 3). Nonetheless, the deep learning-based algorithm (SDNE) demonstrates consistent performance in preserving the global structure of the graph across 1, 2, and 3-hop subgraphs when compared to the other GE algorithms. On the contrary, from Table 4, we observe that the factorization-based family of GE algorithms (HOPE, LAP, and LLE) is more suited for capturing 1(b), as demonstrated by their consistently low Euclidean distances when compared to the random walk-based (Node2Vec) and deep learning-based (SDNE) algorithms, as well as GloVe embeddings [2], across all word similarity and analogy tests.
**3. The degree of information that is preserved (retained, added, and missed) by the various GE algorithms with increasing _number of hops_**. From Table 3, we observe that all GE algorithms suffer from incorrectly added and missing edges. The average number of incorrectly added edges scales exponentially with the increasing number of hops across all GE algorithms (notably with 3-hop). While SDNE (which shows the best mAP score for preserving the topological structure for 2 and 3-hop subgraphs) has the lowest average number of added edges, it also has the highest number of missing edges. HOPE follows second, and Node2vec is the worst-performing GE algorithm overall when compared to LAP and LLE, which also perform relatively poorly in 3-hop graph reconstruction.
## Discussion and Future Work
**Findings.** We learn that different families of GE algorithms capture different information and there is no "one size fits all" approach to preserving the full set of original graph properties. The best mAP for 1-hop, 2-hop, and 3-hop reconstructions captures only 0.92 (HOPE), 0.54 (SDNE), and 0.35 (SDNE) of the original graph information, respectively. Dissecting the information that was supposedly _learned_, we observe that these GEs add spurious links between nodes and miss existing edges, which can plausibly translate to factually incorrect knowledge, thereby limiting the benefits of GEs.
**Significance.** If we assume that the performance improvement of current deep neural networks on downstream applications, as measured by extrinsic evaluations, is attributed to the use of GEs [1], then it is critical that the GEs preserve the right graph structure and semantics. In this work, we shed light on the effectiveness and shortcomings of GEs. We hope these insights encourage new research avenues on better graph representation learning; to this end, an auto-encoder/decoder framework that generates meta-embeddings [1] from a combination of multiple source embeddings to produce more accurate and complete GEs is a promising direction.
**Limitation and Future Work.** As mentioned earlier, this work omits the reconstruction evaluation of typed/labeled relations. We formulate the task as a graph reconstruction task instead of a link prediction task: our focus is on the reconstruction of existing links between nodes rather than on link discovery, on which we base our finding that added and/or missing links contribute to an inaccurate representation of the original graph. We acknowledge the other form of GE, Knowledge Graph Embeddings (KGEs), which treat typed relationships as first-class citizens. Performing a comparative analysis between network embeddings (used in this work) and KGEs was beyond the scope of this investigation but is part of our future work. In addition, we plan to improve the current performance of GEs, e.g., by developing novel meta-embedding techniques that consider both structure and semantic information.
## Conclusion
In summary, we proposed RESTORE, which is an intrinsic evaluation framework that aims to assess the quality and effectiveness of GEs in retaining the original graph topological structure and semantic information through reconstruction. Understanding these will help identify the deficiency and yield insights into these GEs when vectorizing graphs in terms of preserving the relevant knowledge or learning incorrect knowledge (i.e., incorrectly added and missing nodes as well as edges). Particularly, we show that deep learning-based GEs are better at preserving the global topological structure and factorization-based GEs are more suited for capturing the semantic information. Nonetheless, the modest performance of these GEs leaves room for further research avenues on better graph representation learning.
|
2310.09324 | Supercurrent-induced spin switching via indirect exchange interaction | Localized spins of single atoms adsorbed on surfaces have been proposed as
building blocks for spintronics and quantum computation devices. However,
identifying a way to achieve current-induced switching of spins with very low
dissipation is an outstanding challenge with regard to practical applications.
Here, we show that the indirect exchange interaction between spin impurities
can be controlled by a dissipationless supercurrent. All that is required is a
conventional superconductor and two spin impurities placed on its surface. No
triplet Cooper pairs or exotic material choices are needed. This finding
provides a new and accessible way to achieve the long-standing goal of
supercurrent-induced spin switching. | Chi Sun, Jacob Linder | 2023-10-13T18:00:00Z | http://arxiv.org/abs/2310.09324v1 | # Supercurrent-induced spin switching via indirect exchange interaction
###### Abstract
Localized spins of single atoms adsorbed on surfaces have been proposed as building blocks for spintronics and quantum computation devices. However, identifying a way to achieve current-induced switching of spins with very low dissipation is an outstanding challenge with regard to practical applications. Here, we show that the indirect exchange interaction between spin impurities can be controlled by a dissipationless supercurrent. All that is required is a conventional superconductor and two spin impurities placed on its surface. No triplet Cooper pairs or exotic material choices are needed. This finding provides a new and accessible way to achieve the long-standing goal of supercurrent-induced spin switching.
_Introduction_ - Electrical manipulation of spin or magnetization is crucial in the development of spintronics devices and technologies for data storage and computation [1; 2; 3]. Current-induced magnetization switching is presently used in magnetic random access memory through spin-transfer torque [4; 5]. However, the inclusion of electric current inevitably involves Joule heating and therefore high energy dissipation. An important objective is therefore to identify a way to electrically switch the directions of spins with very low dissipation. In spintronics, major efforts have been devoted to optimizing the choice of materials and hybrid structures [6; 7; 8; 9; 10] in order to reduce the power consumption for switching, making it comparable to that in present semiconductor field-effect transistors [11].
At low temperatures, obvious candidates for achieving low-dissipation electric control over magnetism are superconducting materials, due to their ability to host dissipationless supercurrents. Combining superconductivity and spintronics [12] offers possibilities to achieve supercurrent-induced magnetization dynamics, by which Joule heating and dissipation can be minimized. To achieve this, several theory papers have proposed to utilize spin-polarized triplet supercurrents [13], which have been experimentally verified in superconductor (SC)/ferromagnet (FM) Josephson junctions [14; 15; 16; 17]. It has also been theoretically shown that triplet supercurrents can induce spin-transfer torque switching [18; 19; 20] and magnetization dynamics [21; 22; 23; 24; 25; 26; 27]. However, there exists no experimental observation of supercurrent-induced torque or magnetization dynamics. Part of the challenge lies in the complexity of fabricating the appropriate SC/FM multilayered structures, in which the SC/FM interface plays an essential role in creating the triplet Cooper pairs for the spin-polarized supercurrent.
In this work, we theoretically demonstrate a new and conceptually simple way in which the goal of spin switching via singlet supercurrents can be achieved. We consider a conventional SC and two spin impurities placed on its surface (see Fig. 1). Without the requirement of triplet Cooper pairs or exotic material choices, this is a drastically simpler setup than previous studies, theoretical and experimental, that have considered magnetization dynamics in the superconducting state. By investigating the indirect exchange interaction between the two spin impurities, also known as the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction [28; 29; 30], we find that the spin orientation can be controlled by applying a supercurrent flowing through the SC. In the presence of the supercurrent, the quasiparticle bands become asymmetric in momentum space due to the broken parity symmetry, which also modulates the RKKY interaction. Further, the resulting sign change in the RKKY interaction causes the preferred spin orientation to be switched between parallel and antiparallel alignments, both by varying the magnitude of the supercurrent and by varying its direction, providing two experimental routes to observe this effect.
_Theory_ - We consider a conventional Bardeen-Cooper-Schrieffer (BCS) [31] SC with two impurity spins on its surface. For the superconducting part of the Hamilton-operator, the presence of a supercurrent can be modelled by allowing the order parameter to have a phase gradient. Thus, we may write
Figure 1: (Color online) Two impurity spins (purple small arrows) are coupled via the RKKY interaction (wavy black line) mediated by conduction band quasiparticles in the superconducting state. The picture shows a scenario where a parallel spin orientation is energetically preferred. When a supercurrent (large orange arrow) is applied, giving the Cooper pairs a finite momentum \(Q\), the quasiparticle bands become asymmetric in momentum \(k\) (upper part of the plot), due to the broken parity symmetry. This causes a change in the RKKY interaction which can now favor the opposite spin orientation, in this case antiparallel (small orange arrow). In this way, the supercurrent induces spin switching.
in real space
\[H_{\text{SC}}=\frac{\Delta_{0}}{2}\sum_{i\alpha\beta}\text{e}^{\text{i}\mathbf{Q}\cdot\mathbf{r}_{i}}(\text{i}\sigma^{y})_{\alpha\beta}c^{\dagger}_{i\alpha}c^{\dagger}_{i\beta}+\text{h.c.} \tag{1}\]
Here, \(\Delta_{0}\) is the magnitude of the superconducting order parameter, \(\mathbf{Q}\) quantifies the magnitude and direction of the supercurrent, whereas \(c^{\dagger}_{i\sigma}\) are electron creation operators at site \(i\) for spin \(\sigma\). For \(\mathbf{Q}=0\), \(H_{\text{SC}}\) reduces to the standard BCS Hamiltonian.
The full Hamilton-operator for the superconducting part, which includes a hopping term, takes the form
\[H_{0}=\frac{1}{2}\sum_{\mathbf{k}\sigma}\phi^{\dagger}_{\mathbf{k}\,\sigma}\left( \begin{matrix}\varepsilon_{\mathbf{k}}&\sigma\Delta_{0}\\ \sigma\Delta_{0}&-\varepsilon_{-\mathbf{k}-\mathbf{Q}}\end{matrix}\right)\phi_{\mathbf{k} \,\sigma}, \tag{2}\]
after performing a Fourier transformation \(c^{\dagger}_{i\sigma}=\frac{1}{\sqrt{N}}\sum_{\mathbf{k}}c^{\dagger}_{\mathbf{k}\,\sigma}\text{e}^{\text{i}\mathbf{k}\cdot\mathbf{r}_{i}}\) where \(N\) is the total number of lattice points. Above, \(\varepsilon_{\mathbf{k}}=-2t[\cos(k_{x}a)+\cos(k_{z}a)]-\mu\) is the dispersion relation, in which \(t\) is the hopping parameter, \(a\) is the lattice constant, and \(\mu\) is the chemical potential. Here a 2D model in the \(xz\)-plane is chosen for concreteness and \(\phi^{\dagger}_{\mathbf{k}\,\sigma}=(c^{\dagger}_{\mathbf{k}\,\sigma}\quad c_{-\mathbf{k}-\mathbf{Q},-\sigma})\) is the fermion basis. The two pairs of energy eigenvalues and eigenstates of the matrix in Eq. (2) are obtained as \(E^{+}_{\mathbf{k}}\) with \((u_{\mathbf{k}},\sigma v_{\mathbf{k}})^{T}\) and \(E^{-}_{\mathbf{k}}\) with \((-\sigma v_{\mathbf{k}},u_{\mathbf{k}})^{T}\), in which \(E^{\pm}_{\mathbf{k}}=\frac{1}{2}(\varepsilon_{\mathbf{k}}-\varepsilon_{-\mathbf{k}-\mathbf{Q}}\pm\sqrt{(\varepsilon_{\mathbf{k}}+\varepsilon_{-\mathbf{k}-\mathbf{Q}})^{2}+4\Delta_{0}^{2}})\) and
\[u_{\mathbf{k}}\,(v_{\mathbf{k}})=\sqrt{\frac{1}{2}(1+(-)\frac{\varepsilon_{\mathbf{k}}+ \varepsilon_{-\mathbf{k}-\mathbf{Q}}}{\sqrt{(\varepsilon_{\mathbf{k}}+\varepsilon_{-\mathbf{ k}-\mathbf{Q}})^{2}+4\Delta_{0}^{2}}})}. \tag{3}\]
Based on the eigenpairs, the Hamiltonian is diagonalized as
\[H_{0}=\frac{1}{2}\sum_{\mathbf{k}\,\sigma}(E^{+}_{\mathbf{k}}-E^{-}_{-\mathbf{k}-\mathbf{Q}}) \gamma^{\dagger}_{\mathbf{k}\,\sigma}\gamma_{\mathbf{k}\,\sigma}, \tag{4}\]
where the operators satisfy
\[\phi_{\mathbf{k}\,\sigma}=\left(\begin{matrix}c_{\mathbf{k}\,\sigma}\\ c^{\dagger}_{-\mathbf{k}-\mathbf{Q},-\sigma}\end{matrix}\right)=\left(\begin{matrix}u_{ \mathbf{k}}&-\sigma v_{\mathbf{k}}\\ \sigma v_{\mathbf{k}}&u_{\mathbf{k}}\end{matrix}\right)\left(\begin{matrix}\gamma_{ \mathbf{k}\,\sigma}\\ \gamma^{\dagger}_{-\mathbf{k}-\mathbf{Q},-\sigma}\end{matrix}\right). \tag{5}\]
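As a cross-check of the diagonalization above, the following minimal numerical sketch (not part of the paper; the parameter values \(t=1\), \(\mu=-1.8t\), \(\Delta_{0}=0.1t\) and \(\mathbf{Q}\) along \(x\) are illustrative assumptions) builds the 2\(\times\)2 block of Eq. (2) for one momentum and verifies the analytic eigenvalues and coherence factors of Eq. (3).

```python
# Minimal numerical check (not from the paper) of Eqs. (2)-(3):
# diagonalize the 2x2 block for one (kx, kz) and compare with the
# analytic E_k^{+/-} and the coherence factors u_k, v_k.
import numpy as np

t, mu, Delta0, a = 1.0, -1.8, 0.1, 1.0   # illustrative parameters (units of t)
Q = np.array([0.1, 0.0])                 # assumed Cooper-pair momentum along x

def eps(k):
    return -2.0 * t * (np.cos(k[0] * a) + np.cos(k[1] * a)) - mu

k = np.array([0.3, -0.7])                # an arbitrary test momentum
e1, e2 = eps(k), eps(-k - Q)

# 2x2 block of Eq. (2), written here for spin sigma = +1
H = np.array([[e1, Delta0],
              [Delta0, -e2]])
E_num = np.sort(np.linalg.eigvalsh(H))

# Analytic eigenvalues and coherence factors of Eq. (3)
root = np.sqrt((e1 + e2) ** 2 + 4.0 * Delta0 ** 2)
E_plus, E_minus = 0.5 * (e1 - e2 + root), 0.5 * (e1 - e2 - root)
u = np.sqrt(0.5 * (1.0 + (e1 + e2) / root))
v = np.sqrt(0.5 * (1.0 - (e1 + e2) / root))

print(np.allclose(E_num, np.sort([E_plus, E_minus])))   # True
print(np.isclose(u ** 2 + v ** 2, 1.0))                 # normalization used in Eq. (5)
```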
To model the impurity spins interacting with the SC, we consider a Hamilton-operator which is treated as a perturbation: \(\Delta H=J\sum_{j}\mathbf{S}_{j}\cdot\mathbf{s}_{j}\), in which \(J\) is the strength of the interaction between the impurity classical spin \(\mathbf{S}_{j}\) and the conduction electron spin \(\mathbf{s}_{j}=\sum_{\alpha\beta}c^{\dagger}_{j\alpha}\mathbf{\sigma}_{\alpha\beta}c _{j\beta}\) where \(\mathbf{\sigma}\) denotes the Pauli matrix vector. We set \(S\equiv|\mathbf{S}_{j}|=1\), meaning that the magnitude of the impurity spin is absorbed into the coupling constant \(J\). After Fourier-transforming and expressing the \(c\)-operators in terms of \(\gamma\)-operators described by Eq. (5), we obtain
\[\Delta H=\sum_{\mathbf{k}\mathbf{k}^{\prime}\alpha\beta}T_{\mathbf{k}\mathbf{k}^{\prime}\alpha\beta}[u^{*}_{\mathbf{k}}u_{\mathbf{k}^{\prime}}\gamma^{\dagger}_{\mathbf{k}\,\alpha}\gamma_{\mathbf{k}^{\prime}\,\beta}-\beta u^{*}_{\mathbf{k}}v_{\mathbf{k}^{\prime}}\gamma^{\dagger}_{\mathbf{k}\,\alpha}\gamma^{\dagger}_{-\mathbf{k}^{\prime}-\mathbf{Q},-\beta}\] \[-\alpha v^{*}_{\mathbf{k}}u_{\mathbf{k}^{\prime}}\gamma_{-\mathbf{k}-\mathbf{Q},-\alpha}\gamma_{\mathbf{k}^{\prime}\,\beta}+\alpha\beta v^{*}_{\mathbf{k}}v_{\mathbf{k}^{\prime}}\gamma_{-\mathbf{k}-\mathbf{Q},-\alpha}\gamma^{\dagger}_{-\mathbf{k}^{\prime}-\mathbf{Q},-\beta}], \tag{6}\]
in which \(T_{\mathbf{k}\mathbf{k}^{\prime}\,\alpha\beta}=\sum_{j}\frac{J}{N}e^{i(\mathbf{k}-\mathbf{k}^{ \prime})\cdot\mathbf{r}_{j}}\mathbf{S}_{j}\cdot\mathbf{\sigma}_{\alpha\beta}\) is defined.
We now perform a Schrieffer-Wolff transformation to obtain the RKKY interaction between the impurity spins, mediated by the SC. This is in essence a second order perturbation theory for \(\Delta H\) achieved by applying a canonical transformation \(H_{\text{eff}}=e^{\eta S}He^{-\eta S}\) for \(H=H_{0}+\Delta H\). Subsequently, one identifies \(\eta S\) so that it satisfies \(\Delta H+[\eta S,H_{0}]=0\) which projects out the first order effect of the perturbation, which does not generate any interaction between the impurity spins. This gives rise to the effective Hamiltonian
\[H_{\text{eff}}=H_{0}+\frac{1}{2}\left[\eta S,\Delta H\right], \tag{7}\]
in which one can express \(\eta S\) with the same operators as in Eq. (6): \(\eta S=\sum_{\mathbf{k}\mathbf{k}^{\prime}\alpha\beta}[A_{\mathbf{k}\mathbf{k}^{\prime}\,\alpha\beta}\gamma^{\dagger}_{\mathbf{k}\,\alpha}\gamma_{\mathbf{k}^{\prime}\,\beta}+B_{\mathbf{k}\mathbf{k}^{\prime}\alpha\beta}\gamma^{\dagger}_{\mathbf{k}\,\alpha}\gamma^{\dagger}_{-\mathbf{k}^{\prime}-\mathbf{Q},-\beta}+C_{\mathbf{k}\mathbf{k}^{\prime}\alpha\beta}\gamma_{-\mathbf{k}-\mathbf{Q},-\alpha}\gamma_{\mathbf{k}^{\prime}\,\beta}+D_{\mathbf{k}\mathbf{k}^{\prime}\alpha\beta}\gamma_{-\mathbf{k}-\mathbf{Q},-\alpha}\gamma^{\dagger}_{-\mathbf{k}^{\prime}-\mathbf{Q},-\beta}]\). The coefficients are consequently identified as
\[A_{\mathbf{k}\mathbf{k}^{\prime}\,\alpha\beta} =-\frac{2u^{*}_{\mathbf{k}}u_{\mathbf{k}^{\prime}}T_{\mathbf{k}\mathbf{k}^{\prime} \alpha\beta}}{E^{+}_{\mathbf{k}^{\prime}\,\beta}-E^{-}_{\mathbf{k}^{\prime}-\mathbf{Q},- \beta}-E^{+}_{\mathbf{k}\,\alpha}+E^{-}_{-\mathbf{k}-\mathbf{Q},-\alpha}},\] \[B_{\mathbf{k}\mathbf{k}^{\prime}\alpha\beta} =-\frac{2\beta u^{*}_{\mathbf{k}^{\prime}}v_{\mathbf{k}^{\prime}}T_{\mathbf{k} \mathbf{k}^{\prime}\alpha\beta}}{E^{+}_{\mathbf{k}\,\alpha}-E^{-}_{-\mathbf{k}-\mathbf{Q},- \alpha}-E^{+}_{-\mathbf{k}^{\prime}-\mathbf{Q},-\beta}-E^{-}_{\mathbf{k}^{\prime}\beta}},\] \[C_{\mathbf{k}\mathbf{k}^{\prime}\alpha\beta} =\frac{2\alpha v^{*}_{\mathbf{k}^{\prime}}v_{\mathbf{k}^{\prime}}\tau_{ \mathbf{k}\mathbf{k}^{\prime}\alpha\beta}}{E^{+}_{\mathbf{k}^{\prime}\,\beta}-E^{-}_{-\mathbf{k}^{ \prime}-\mathbf{Q},-\beta}+E^{+}_{-\mathbf{k}-\mathbf{Q},-\alpha}-E^{-}_{\mathbf{k}\alpha}},\] \[D_{\mathbf{k}\mathbf{k}^{\prime}\alpha\beta} =\frac{2\alpha\beta v^{*}_{\mathbf{k}^{\prime}}v_{\mathbf{k}^{\prime}}T_{ \mathbf{k}\mathbf{k}^{\prime}\alpha\beta}}{E^{+}_{-\mathbf{k}^{\prime}-\mathbf{Q},-\beta}-E^{-}_{ \mathbf{k}^{\prime}\beta}-E^{-}_{-\mathbf{k}-\mathbf{Q},-\alpha}+E^{-}_{\mathbf{k}\alpha}}. \tag{8}\]
Given \(\eta S\), the expectation value of the effective Hamiltonian given by Eq. (7) may now be evaluated to obtain the RKKY interaction.
_Results and discussion_ - Defining \(S^{\alpha\beta}_{j}\equiv\mathbf{S}_{j}\cdot\mathbf{\sigma}_{\alpha\beta}\) and using \(\Sigma_{\alpha\beta}\) [...], the RKKY interaction energy can be written in the following form
\[E_{\rm RKKY} =-(\frac{J}{N})^{2}\Sigma_{\mathbf{k}\mathbf{k}^{\prime}}e^{i(\mathbf{k}-\mathbf{k} ^{\prime})\cdot\mathbf{R}_{ij}}\left[F_{1}(\mathbf{k},\mathbf{k}^{\prime})+F_{2}(\mathbf{k},\bm {k}^{\prime})\right.\] \[\left.+\,F_{3}(\mathbf{k},\mathbf{k}^{\prime})+F_{4}(\mathbf{k},\mathbf{k}^{ \prime})\right] \tag{11}\]
where \(\mathbf{R}_{ij}=\mathbf{r}_{j}-\mathbf{r}_{i}\) and \(n(E)=(1+e^{\beta E})^{-1}\) denotes the Fermi-Dirac distribution at energy \(E\) with \(\beta=1/k_{B}T\). The above expression can be further simplified since \(E_{\rm RKKY}\) is real, and thus the exponential prefactor can be replaced with its corresponding cosine component. Subsequently, one observes that the contribution from \(F_{2}\) is the same as \(F_{3}\), which can be seen by renaming indices \(\mathbf{k}\leftrightarrow\mathbf{k}^{\prime}\) and using that \(u,v\) are real.
For \(\mathbf{Q}=0\), we regain the results studied previously in the literature for the RKKY interaction in SCs [32; 33; 34; 35], in the form of an additional antiferromagnetic, exponentially decaying term that appears in \(E_{\rm RKKY}\) along with the usual rapidly oscillating interaction. Eq. (11) can then be numerically evaluated to determine the effect of a supercurrent on the spin-spin interaction. To estimate a reasonable magnitude for the momentum \(Q=|\mathbf{Q}|\) of the Cooper pairs, we note that the critical supercurrent that a SC can sustain is provided by \(Q\xi\simeq 1\) [36] where \(\xi=\hbar v_{F}/(\pi\Delta_{0})\) is the coherence length. An analytical estimate for \(Q\) can be given for a simple 1D model. The Fermi velocity in our lattice model is defined via \(v_{F}=\frac{1}{\hbar}(d\varepsilon_{k}/dk)|_{k=k_{F}}\) where \(k_{F}\) is obtained as the momentum where \(\varepsilon_{k}=0\). To maximize the value of \(Q\) (in order to have a supercurrent which can strongly influence the RKKY interaction), one ideally needs a SC with as small \(\xi\) as possible. High-\(T_{c}\) superconductors can have \(\xi\simeq 3a\), allowing \(Q\simeq 0.3/a\). Subsequently, \(\mu\) and \(\Delta_{0}\) should be chosen to get \(\xi\simeq 3a\). Choosing \(\mu=-1.8t\), one finds from \(-2t\cos(ka)-\mu=0\) that \(k_{F}\simeq 0.5/a\), which gives \(v_{F}\simeq at/\hbar\). Then, for \(\Delta_{0}/t=0.1\), we can achieve \(\xi\simeq 3a\) which gives the upper limit \(Q\simeq 0.3/a\). Similar parameters were used in Ref. [37].
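This estimate can be reproduced with a back-of-the-envelope numerical sketch (not from the paper; units with \(\hbar=a=t=1\), and the values of \(\mu\) and \(\Delta_{0}\) quoted above); the printed numbers are consistent with the rounded values in the text.

```python
# Back-of-envelope reproduction of the estimate in the text:
# kF from -2t cos(k a) - mu = 0, vF = (1/hbar) d(eps)/dk at kF,
# xi = hbar vF / (pi Delta0), and the maximal pair momentum Q ~ 1/xi.
import numpy as np

mu, Delta0 = -1.8, 0.1                 # in units of t
kF = np.arccos(-mu / 2.0)              # -2 cos(kF a) - mu = 0  -> kF ~ 0.45/a
vF = 2.0 * np.sin(kF)                  # d/dk[-2 cos(k a)] at kF -> vF ~ 0.9 a t / hbar
xi = vF / (np.pi * Delta0)             # coherence length        -> xi ~ 2.8 a
Q_max = 1.0 / xi                       # from Q xi ~ 1            -> Q ~ 0.3-0.4 / a

print(f"kF ~ {kF:.2f}/a, vF ~ {vF:.2f} a*t/hbar, xi ~ {xi:.2f} a, Q_max ~ {Q_max:.2f}/a")
```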
The RKKY interaction results are shown in Fig. 2 for \(Q=0.1/a\). We show results both for zero supercurrent (\(Q=0\)), supercurrent flowing parallel (\(\parallel\)) and perpendicular (\(\perp\)) to the separation vector \(\mathbf{R}_{ij}\) of the two spin impurities. Here we fix \(\mathbf{Q}\) along \(\mathbf{x}\) and consider \(\mathbf{R}_{ij}\) along \(\mathbf{x}\) (\(\mathbf{z}\)) to cover the \(\parallel\) (\(\perp\)) configuration. The figure demonstrates that the RKKY interaction changes its sign within several separation distance regimes by tuning the magnitude and direction of the supercurrent. Since \(E_{\rm RKKY}<0\) causes a parallel (P) alignment of the two spin impurities while \(E_{\rm RKKY}>0\) supports an antiparallel (AP) orientation, the sign change thus induces spin switching between the P and AP states. In addition, the RKKY curves are almost the same for the \(Q=0\) and \(\perp\) cases. This can be explained by the energy dispersion symmetry breaking induced by the supercurrent, which is the strongest for the quasiparticles mediating the RKKY interaction when \(\mathbf{Q}\parallel\mathbf{R}_{ij}\) and negligible for the perpendicular case when \(Q\) is small. In Fig. 2, the black arrows denote spin switching achieved by changing the direction of supercurrent flow (between \(\mathbf{Q}\parallel\mathbf{R}_{ij}\) and \(\mathbf{Q}\perp\mathbf{R}_{ij}\)). The black arrows also indicate switching caused by turning the supercurrent on and off (between \(\mathbf{Q}\parallel\mathbf{R}_{ij}\) and \(\mathbf{Q}=0\)) since the \(Q=0\) and \(\perp\) cases essentially coincide due to the small value of \(Q\). It is clear from the arrows in the figure that the presence of supercurrent gives rise to ample opportunities for spin switching at several separation distances. Note that for each arrow, the switch occurs in a finite interval centered around the position of the arrow and not just exactly at the location of the arrow, making the switching effect more accessible.
We also show results in Fig. 3 for a slightly larger value of the supercurrent, \(Q=0.2/a\), demonstrating the robustness of the effect and that there exists an abundance of possible switching effects by either turning off the supercurrent or by changing its direction. As \(Q\) increases, compared with \(Q=0.1/a\) in Fig. 2, the difference between the \(Q=0\) and \(\perp\) cases becomes distinguishable and the additional spin switching between them becomes possible, as the red arrows show.
Finally, we plot the RKKY interaction energy at a fixed lattice site as a function of the supercurrent magnitude \(Q\) in Fig. 4
Figure 2: (Color online) Normalized RKKY interaction between two impurity spins on top of a current-carrying superconductor with \(Qa=0.1\). The inset shows a zoom-in of the main plot and the horizontal black line is a guide to the eye for where the RKKY interaction changes from P to AP. The direction of the supercurrent is along the impurity separation distance for \(\parallel\) and perpendicular to it for \(\perp\). \(E_{\rm RKKY}>0\) favors an AP alignment of the spins, whereas \(E_{\rm RKKY}<0\) favors a P alignment. The arrows show the preferred spin alignment is altered by either turning the supercurrent on and off or by changing its direction between \(\parallel\) and \(\perp\). Since the supercurrent magnitude is small for \(Qa=0.1\), the \(Q=0\) and \(\perp\) cases essentially coincide. We consider \(\mu/t=-1.8\), \(\Delta_{0}/t=0.1\), \(k_{B}T/t=0.01,Q=0.1/a\) and \(N=10^{4}\) sites.
for two site choices. The supercurrent flow starts modifying \(E_{\rm{RKKY}}\) at much smaller values of \(Q\) when it flows along \(\mathbf{R}_{ij}\) compared to when it flows perpendicular to it. The physical mechanism behind this is the directional dependence of the asymmetry in the quasiparticle bands \(E_{\mathbf{k}}\) created by \(\mathbf{Q}\). As mentioned before, the asymmetry is strongest for particles moving between the impurity spins when \(\mathbf{Q}\parallel\mathbf{R}_{ij}\), which are precisely the ones contributing the most to the RKKY interaction. The lower panel of Fig. 4 shows that at a fixed separation distance \(R_{ij}=|\mathbf{R}_{ij}|\), modulating the supercurrent magnitude \(Q\) can cause the preferred spin orientation to switch between P (\(E_{\rm{RKKY}}<0\)) and AP (\(E_{\rm{RKKY}}>0\)), which is consistent with the switching results observed in Figs. (2,3).
We also give an estimate for the effect of the supercurrent Oersted field acting on the impurity spins via a Zeeman-effect, and show that it is negligible compared to the RKKY interaction. Considering a thin superconducting film of thickness \(d\) with a critical current density \(J_{c}=10^{7}\) A/cm\({}^{2}\), the field at the surface can be approximated as \(B=\mu_{0}J_{c}d/2\) at the critical supercurrent strength. For \(d=15\) nm, this gives \(B\simeq 10^{-3}\) T, corresponding to a very small Zeeman-coupling \(E_{Z}\simeq 10^{-5}\) meV at about half of the critical current density. This can be compared to \(tE_{\rm{RKKY}}/J^{2}\) in our plots, which is typically of order \(10^{-4}\) at a separation distance of several lattice sites. Using a weak impurity spin coupling \(J=0.05t<\Delta_{0}\), as appropriate for the perturbative approach employed here, we get for \(t=500\) meV that \(E_{\rm{RKKY}}\simeq 10^{-4}\) meV which is \(\approx E_{Z}\). Although this is a rough estimate, we note that larger couplings \(J\), outside the regime of our approach, between the impurity and conduction electron spins are accessible experimentally [38]. This will make the RKKY interaction even larger, in particular compared to the Oersted-field effect. A thinner SC film decreases the Oersted field further. The effect of supercurrent flow in the strong-coupling regime could be an interesting topic for future studies where in-gap Yu-Shiba-Rusinov states [39; 40; 41] are expected to have a more prominent role. The main conclusion of this work, being the tunability of the RKKY interaction and thus the possibility to switch the ground state spin configuration, is expected to hold also in this case.
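The arithmetic of this estimate can be verified with a few lines (a sketch in SI units, using only the numbers quoted above: \(J_{c}=10^{7}\) A/cm\({}^{2}\), \(d=15\) nm).

```python
# Rough check (SI units) of the Oersted-field estimate B = mu0 * Jc * d / 2
# for a thin film, and of the corresponding Zeeman energy E_Z = mu_B * B.
mu0 = 4e-7 * 3.141592653589793   # vacuum permeability, T m / A
Jc = 1e7 * 1e4                   # critical current density: 1e7 A/cm^2 -> A/m^2
d = 15e-9                        # film thickness, 15 nm
B = mu0 * Jc * d / 2.0           # ~ 9e-4 T at the critical current
mu_B = 5.788e-5                  # Bohr magneton, eV/T
E_Z = mu_B * B * 1e3             # Zeeman energy in meV, ~ 5e-5 meV (order 1e-5 meV)
print(f"B ~ {B:.1e} T, E_Z ~ {E_Z:.1e} meV")
```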
_Concluding remarks_ - Our proposed system setup should be experimentally feasible. In Ref. [42], the RKKY interaction between Cr impurity spins coupled to a SC was studied using scanning tunneling spectroscopy. All that is required in addition to observe the supercurrent-induced spin switching is the application of a current bias to the SC. We hope that the present work will stimulate the anticipated experimental realization of supercurrent-induced spin switching.
Figure 4: (Color online) Normalized RKKY interaction as a function of supercurrent magnitude. We consider two separation distances in the top and bottom panels and consider both a supercurrent flow along (\(\parallel\)) the separation distance vector and perpendicular (\(\perp\)) to it. We set \(\mu/t=-1.8,\Delta_{0}/t=0.1,k_{B}T/t=0.01\), and \(N=10^{4}\) sites.
Figure 3: (Color online) Same as in Fig. 2, but for \(Qa=0.2\). Since the supercurrent is now larger than in Fig. 2, additional spin switching is enabled. Namely, the preferred spin alignment is switched by either turning the supercurrent on and off in the \(\parallel\) direction (blue arrows), on and off in the \(\perp\) direction (red arrows), or changing the direction of the supercurrent between \(\parallel\) and \(\perp\) (black arrows). We consider \(\mu/t=-1.8,\Delta_{0}/t=0.1,k_{B}T/t=0.01,Q=0.2/a\) and \(N=10^{4}\) sites.
_Acknowledgments_ - This work was supported by the Research Council of Norway through Grant No. 323766 and its Centres of Excellence funding scheme Grant No. 262633 "QuSpin." Support from Sigma2 - the National Infrastructure for High-Performance Computing and Data Storage in Norway, project NN9577K, is acknowledged.
|
2302.06206 | MOND as a peculiar case of the SIV theory | The scale invariant theory is preserving the fundamental physical properties
of General Relativity, while enlarging the group of invariances subtending
gravitation theory (Dirac1973; Canuto et al.1977). The Scale Invariant Vacuum
(SIV) theory assumes, as gauging condition, that:"The macroscopic empty space
is scale invariant, homogeneous and isotropic". Some basic properties in Weyl's
Integrable Geometry and cotensor calculus are examined in relation with
scalar-tensor theories. Possible scale invariant effects are strongly reduced
by matter density, both at the cosmological and local levels. The weak field
limit of SIV tends to MOND, when the scale factor is taken as constant, an
approximation valid (<1%) over the last 400 Myr. A better understanding of the
a0-parameter is obtained: it corresponds to the equilibrium point of the
Newtonian and SIV dynamical acceleration. Parameter a0 is not a universal
constant, it depends on the density and age of the Universe. As MOND is doing,
SIV theory avoids the call to dark matter, moreover the cosmological models
predict accelerated expansion. | Andre Maeder | 2023-02-13T09:26:46Z | http://arxiv.org/abs/2302.06206v1 | # MOND as a peculiar case of the SIV theory
###### Abstract
The scale invariant theory is preserving the fundamental physical properties of General Relativity, while enlarging the group of invariances subtending gravitation theory (Dirac1973; Canuto et al.1977). The Scale Invariant Vacuum (SIV) theory assumes, as gauging condition, that "The macroscopic empty space is scale invariant, homogeneous and isotropic". Some basic properties in Weyl's Integrable Geometry and cotensor calculus are examined in relation with scalar-tensor theories. Possible scale invariant effects are strongly reduced by matter density, both at the cosmological and local levels. The weak field limit of SIV tends to MOND, when the scale factor is taken as constant, an approximation valid (\(<\)1%) over the last 400 Myr. A better understanding of the \(a_{0}\)-parameter is obtained: it corresponds to the equilibrium point of the Newtonian and SIV dynamical acceleration. Parameter \(a_{0}\) is not a universal constant, it depends on the density and age of the Universe. As MOND is doing, SIV theory avoids the call to dark matter, moreover the cosmological models predict accelerated expansion.
keywords: Cosmology: theory - dark energy-dark matter
## 1 Introduction
In the context of the dark matter problem, the Modified Newtonian Dynamics (MOND) was proposed by (Milgrom, 1983); this dynamics was accounting for the flat rotation curves of galaxies. Over the following decades, this theory received a number of further extensions and applications, _e.g._Milgrom (2009, 2014a,b). The application of MOND to observations is meeting a number of positive results, see review by Famaey and McGaugh (2012). Agreement has been obtained for the flat rotation curves of spiral galaxies (Lelli et al., 2017), also for clusters of galaxies (Sanders, 2003; Milgrom, 2018), as well as in the Local Group (Pawlowski et al., 2012; Pawlowski and Kroupa, 2022) and in the Fornax Cluster (Asencio et al., 2022).
The dynamical acceleration (\(V^{2}/R\), where \(V\) is the velocity and \(R\) the galactocentric distance) is related to the baryonic gravity (\(GM/R^{2}\)) for spiral, irregular, elliptical, lenticular and spheroidal galaxies by a single, tight relation that deviates significantly from the Newtonian law at very low gravities (McGaugh et al., 2016; Lelli et al., 2017; Li et al., 2018). The uniqueness of this relation is striking, because it concerns observations made in different types of galaxies and at different distances from their centers. Such a result is difficult to account for with dark matter, and it has been recognized to correspond better to a gravity effect (Lelli et al., 2017).
In MOND, the usual Newtonian gravity law remains unmodified for gravities above a constant value \(a_{0}\approx 1.2\cdot 10^{-8}\) cm s\({}^{-2}\). In the so-called _deep-MOND limit_, for gravities much lower than \(a_{0}\), a different gravitation law applies with a gravity \(g\) given by,
\[g=\sqrt{a_{0}\,g_{\rm N}}, \tag{1}\]
where \(g_{\rm N}\) is the usual Newtonian gravity.
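As an illustration of Eq. (1) (a standard consequence of the deep-MOND limit, not a statement taken from this paper): equating the centripetal acceleration \(V^{2}/R\) to \(g=\sqrt{a_{0}\,GM/R^{2}}\) gives an \(R\)-independent rotation velocity \(V=(GMa_{0})^{1/4}\), the baryonic Tully-Fisher scaling. A minimal sketch, assuming an illustrative baryonic mass:

```python
# Illustration (not from the paper): flat rotation velocity in the deep-MOND
# limit of Eq. (1), V^4 = G * M * a0, for an assumed baryonic mass.
G = 6.674e-11               # m^3 kg^-1 s^-2
a0 = 1.2e-10                # m s^-2  (= 1.2e-8 cm s^-2)
M_sun = 1.989e30            # kg
M_baryonic = 1e10 * M_sun   # illustrative galaxy baryonic mass (assumption)

V_flat = (G * M_baryonic * a0) ** 0.25
print(f"V_flat ~ {V_flat / 1e3:.0f} km/s")   # ~ 110 km/s
```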
In attempts to generalize the theory, developments of the basic MOND have been made in a variety of theoretical directions (Milgrom, 2015): in the non-relativistic regime the _Modified Poisson Gravity_ and _Quasilinear MOND_, both space dilation invariant; in the relativistic regime the _Tensor-Vector-Scalar (TeVeS)_ theory, MOND adaptations of Einstein-Aether theories, and Bimetric MOND gravity (BIMOND) (Milgrom, 2022). Properties of General Relativity (GR), such as the weak Principle of Equivalence, the Lorentz invariance and the general covariance, become invalid in some of the MOND extensions (Milgrom, 2019).
Scale invariance was first considered by Weyl (1923) and Eddington (1923) in order to account for both gravitation and electromagnetism through the geometry of space-time, before being abandoned because the properties of a particle would have depended on its past worldline (Einstein, 1918). A revival of the theory was first brought about by Dirac (1973) and then by Canuto et al. (1977), who considered the so-called Weyl Integrable Geometry (WIG), where Einstein's criticism does not apply (see Sect. 2.2). Dirac (1973) emphasized that: _"It appears as one of the fundamental principles in Nature that the equations expressing basic laws should be invariant under the widest possible group of transformations"_. Scale invariance means that the basic equations do not change upon a transformation of the line element of the form,
\[ds^{\prime}=\lambda(x^{\mu})\,ds, \tag{2}\]
where \(\lambda(x^{\mu})\) is the scale factor, \(ds^{\prime}\) refers to GR while \(ds\) refers to the WIG space, for example. Scale invariance is present in Maxwell's equations in absence of charge and currents, as well as in General Relativity (GR) in the case of the empty space (Bondi, 1990).
Geodesics and geometrical properties of the WIG were studied by Bouvier & Maeder (1978), and the weak field limit, which shows an additional acceleration in the direction of motion, by Maeder & Bouvier (1979). Instead of the Large Number Hypothesis (Dirac, 1974), a new gauging condition is now applied to derive the cosmological equations, which naturally show an accelerated expansion; several observational tests were performed (Maeder, 2017a, b). A number of studies followed: on the rotation of galaxies and the dynamics of clusters of galaxies (Maeder, 2017c; Maeder and Gueorguiev, 2020); on the growth of density fluctuations in the early Universe, where the formation of galaxies is favoured by the additional acceleration without the need of dark matter (Maeder and Gueorguiev, 2019); a general review (Maeder and Gueorguiev, 2020a); on horizons and inflation (Maeder and Gueorguiev, 2021) and on the lunar recession (Maeder and Gueorguiev, 2022). The tests were positive and cast doubts on the need of dark components.
The aim of this work is to study the relations between MOND and SIV theories. In Section 2, we recall the basics of the SIV theory and examine the relation with scalar-tensor theories. In Section 3, some cosmological properties of the SIV theory are emphasized in view of Section 4, where the Newtonian and MOND approximations in the SIV theory are derived. The meaning and numerical value of the \(a_{0}\)-parameter are examined. Section 5 contains the conclusions.
## 2 The basic theoretical context
### Scalar-tensor versus cotensor theories
Among alternative theories of gravity, the scalar-tensor theories represent one of the most elaborated developments1. In particular, the scalar-tensor theories also offer an appropriate framework for the implementation of scale invariance. A fundamental property of these theories is the presence of an extra scalar field \(\varphi\), acting in parameter domains where General Relativity is not currently tested. The occurrence of dark components has stimulated such researches. The starting point is generally the expression of the action,
Footnote 1: See a 312 pages review by Clifton, Ferreira, Padilla and Skordis on “_Modified Gravity and Cosmology_”(Clifton et al., 2012)
\[S=\int d^{4}x\sqrt{-g}\left[\frac{1}{12}\alpha\varphi^{2}R+\frac{1}{2}\partial _{\mu}\varphi\partial^{\mu}\varphi+\lambda\varphi^{4}\right], \tag{3}\]
here in a simple version given by Ferreira & Tattersall (2020), with \(\varphi\) the scalar field. As pointed out by these authors, the theory is invariant under \(g_{\mu\nu}\rightarrow\lambda^{2}g_{\mu\nu}\) and \(\varphi\rightarrow\lambda^{-1}\varphi\) where \(\lambda\) is a constant. With \(\alpha=1\), the theory is conformally invariant. Decades ago, applications to varying \(G\) were often considered (see also Fujii (2003)), while present perspectives more concern early phases and inflation as well as advanced phases of accelerated expansion. Observational signatures of scale invariant gravity predicted in the scalar-tensor theories are quite rare, however a pertinent signature to distinguish such effects from those of General Relativity may be provided by perturbations in the ring down phase of black holes, with signatures in their resulting gravitational wave emission (Ferreira & Tattersall, 2020).
In the scale invariant vacuum (SIV) theory, an action with great similarities to the above Eq.(3) can also be expressed, see below expression (37). However, rather than a scalar-tensor theory, SIV should better be called "_a cotensor theory_". Now, some preliminaries are required. SIV applies properties of the Weyl's Geometry (Weyl, 1923; Eddington, 1923; Dirac, 1973), where the metric determination is given by the usual quadratic form \(ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}\). Moreover, the length \(\ell\) of any vector with contravariant components \(a^{\mu}\) is determined by a scale factor \(\lambda(x^{\mu})\) and the same for the line element \(ds\),
\[\ell^{2}=\lambda^{2}(x^{\mu})\,g_{\mu\nu}\,a^{\mu}a^{\nu}\,,\quad\text{and}\ ds^{\prime}=\lambda\,(x^{\mu})\,ds. \tag{4}\]
This last relation with the quadratic form implies that Weyl's space is conformally equivalent to an other space2 (defined by \(g^{\prime}_{\mu\nu}\)) through the gauge transformation,
Footnote 2: In the original Weyl’s Geometry, this other space was not necessarily General Relativity, while in WIG it is always the case.
\[g^{\prime}_{\mu\nu}=\lambda^{2}\,g_{\mu\nu}\,. \tag{5}\]
In the transport from a point \(P_{\Gamma}(x^{\mu})\) to a nearby point \(P_{\Gamma}(x^{\mu}+dx^{\mu})\), the length \(\ell\) of a vector is assumed to change by,
\[d\ell=\,\ell\,\kappa_{\nu}\,dx^{\nu}. \tag{6}\]
There, \(\kappa_{\nu}\) is called the coefficient of metrical connection, as a fundamental characteristics of the geometry alike \(g_{\mu\nu}\). If we change the standards of length \(\ell\) of any vector to \(\ell^{\prime}=\lambda(x)\,\ell\), one has
\[\ell^{\prime}+d\ell^{\prime}=(\ell+d\ell)\,\lambda(x+dx)=(\ell+d \ell)\lambda(x)+\ell\lambda_{;\mu}\,dx^{\mu}\,, \tag{7}\] \[\text{thus}\,\ d\ell^{\prime}=\lambda(x)\,d\ell+\ell\,\lambda_{; \mu}\,dx^{\mu}=\lambda(x)\,\ell\,(\kappa_{\mu}+\Phi_{;\mu})\,dx^{\mu}\,,\] \[\text{with}\,\ \Phi_{;\mu}=\frac{\lambda_{;\mu}}{\lambda}\,,\ \text{and}\ \Phi=\ln\lambda\,.\]
Thus, we get for \(d\ell^{\prime}=\ell^{\prime}\,\kappa^{\prime}_{\nu}\,dx^{\mu}\), (note that different derivatives, e.g. with respect to \(x^{\mu}\) are indicated by "\({}_{,\mu}\) ", or a by "\({}_{,\mu}\) " or even simply by "\({}_{,\mu}\)", see definitions by Dirac (1973)),
\[\kappa^{\prime}_{\nu}=\kappa_{\nu}+\Phi_{,\nu}\,. \tag{8}\]
A quantity \(Y\), scalar, vector or tensor, which in a scale transformation changes like \(Y^{\prime}=\lambda^{n}(x)Y\) is said to be a _coscalar_, _covector or cotensor_ of power \(\Pi(Y)=n\); this is called _scale covariance_. For \(n=0\), we have an _inscalar, invector or intensor_; this is the particular case of scale invariance. The derivatives \(Y_{,\mu}\) do not necessarily enjoy the _co-covariant_ (a definition by Dirac) or invariant properties, but derivatives with such properties can be defined. Let us take for example a scalar \(S\) of power \(n\). Its ordinary (covariant) derivative is \(S_{,\mu}\) (also noted \(S_{\mu}\) when no ambiguity). Following Dirac (1973), let us perform a change of scale, we have
\[S^{\prime}_{\mu}=(\lambda^{n}S)_{\mu}=\lambda^{n}S_{\mu}+n\lambda^{n-1}\lambda_{\mu}S=\lambda^{n}(S_{\mu}+n\,\Phi_{\mu}S). \tag{9}\]
With Eqs. (8) and (9), we get
\[(S_{\mu}-n\kappa_{\mu}S)^{\prime}=\lambda^{n}(S_{\mu}-n\kappa_{\mu}S)\,. \tag{10}\]
Now, we see that the covector \(S_{*\mu}=(S_{\mu}-n\kappa_{\mu}S)\) of power \(n\)
is the co-covariant derivative of scalar \(S\) (such derivatives are always indicated with "\({}_{*}\)"). This ensures that the co-covariant derivatives preserve the power of the object they are applied on, thus the co-covariant derivative is also preserving scale covariance.
For a covector of power \(n\), similar developments (Dirac, 1973) lead to the derivatives of covectors,
\[A_{\mu\nu} = \partial_{\nu}A_{\mu}-{}^{*}\Gamma^{\alpha}_{\mu\nu}A_{\alpha}-n \kappa_{\nu}A_{\mu}, \tag{11}\] \[A^{\mu}_{\nu} = \partial_{\nu}A^{\mu}+{}^{*}\Gamma^{\mu}_{\nu\alpha}A^{\alpha}-n \kappa_{\nu}A^{\mu},\] (12) \[\text{with}\;\;^{*}\Gamma^{\alpha}_{\mu\nu} = \Gamma^{\alpha}_{\mu\nu}+g_{\mu\nu}\kappa^{\alpha}-g^{\alpha}_{ \mu}\kappa_{\nu}-g^{\alpha}_{\nu}\kappa_{\mu}, \tag{13}\]
\[\text{with}\;\;\;^{*}\Gamma_{\sigma\mu\nu}=g_{\sigma\alpha}\,^{*}\Gamma^{\alpha}_{\mu\nu}\,. \tag{14}\]
There, \({}^{*}\Gamma^{\alpha}_{\mu\nu}\) is a modified Christoffel symbol, while \(\Gamma^{\alpha}_{\mu\nu}\) is the usual form. As a further example, the first derivatives of a co-tensor have the following expressions,
\[T^{\mu\nu}_{\ \ *\rho}=T^{\mu\nu}_{\ \ ,\rho}+{}^{*}\Gamma^{\mu}_{\rho\sigma}T^{\sigma\nu}+{}^{*}\Gamma^{\nu}_{\rho\sigma}T^{\mu\sigma}-n\kappa_{\rho}T^{\mu\nu}\,, \tag{15}\] \[T_{\mu\nu*\rho}=T_{\mu\nu,\rho}-{}^{*}\Gamma^{\sigma}_{\mu\rho}T_{\sigma\nu}-{}^{*}\Gamma^{\sigma}_{\nu\rho}T_{\mu\sigma}-n\kappa_{\rho}T_{\mu\nu}, \tag{16}\]
which also ensures their scale covariance.
Second co-covariant derivatives can also be expressed. As an example, the second derivative of coscalar \(S\) becomes,
\[S_{*\mu*\nu}=S_{*\mu,\nu}-(n-1)\kappa_{\nu}S_{*\mu}+\kappa_{\mu}S_{*\nu}-g_{\mu\nu}\kappa^{\sigma}S_{*\sigma}. \tag{17}\]
An appropriate expression for the second derivative of a covector can also be derived. See Canuto et al. (1977) for a summary of cotensor calculus. Operations on covectors and cotensors can also be performed. Briefly, a contravariant covector \(a^{\mu}\) with power \(m\) multiplied by a contravariant vector \(b^{\nu}\) with power \(n\) will form a cotensor \(a^{\mu}b^{\nu}\) of power (m+n). The corresponding covariant components \(a_{\mu}\) and \(b_{\nu}\) have power (m+2) and (n+2) respectively. Also, the scalar product \(\vec{a}\cdot\vec{b}=g_{\mu\nu}a^{\mu}b^{\nu}\) gives a coscalar of power (m+n+2), the angle \(\Phi\) between the two vectors is conserved in displacement on a geodesics as shown by (Bouvier and Maeder, 1978), who also studied geodesics, isometries and the Killing vectors.
### Weyl's Integrable Geometry (WIG) and relation with scalar-tensor theories
If a vector is parallelly transported along a closed loop, the total change of the length of the vector can be expressed as,
\[\Delta\ell=\ell\left(\partial_{\nu}\kappa_{\mu}-\partial_{\mu}\kappa_{\nu} \right)d\sigma^{\mu\nu}\,, \tag{18}\]
where \(d\sigma^{\mu\nu}=dx^{\mu}\wedge dx^{\nu}\) is an infinitesimal surface element. Weyl identified the tensor \(F_{\mu\nu}=\left(\kappa_{\mu,\nu}-\kappa_{\nu,\mu}\right)\) with the electromagnetic field, as its original aim was to also provide a geometrical interpretation of electromagnetism. The problem is that this formulation (if the parenthesis does not vanish) implies non-integrable lengths so that the properties of an atom, such as its emission frequencies, would be influenced by its past world line. Thus, one could not observe sharp atomic lines in the presence of an electromagnetic field. This was the key point of Einstein (1918), who criticized the use of Weyl's geometry to describe electromagnetism and gravitation.
In the line of the developments by Dirac (1973), a modified version of Weyl's Geometry called Weyl's Integrable Geometry (WIG) was proposed by Canuto et al. (1977). In WIG, the above remark by Einstein no longer applies, and it may thus form a consistent framework for the study of gravitation, as emphasized by Canuto et al. (1977), see also Bouvier and Maeder (1978). Let us consider a line element \(ds^{\prime\,2}=g^{\prime}_{\mu\nu}dx^{\mu}dx^{\nu}\), where the prime symbols apply to Riemann space, while \(ds\) refers to the WIG, which in addition to the quadratic form \(g_{\mu\nu}\) also has a scalar gauge field \(\lambda(x)\). In this case, in Riemann space we have,
\[\kappa^{\prime}_{\nu}=0\,, \tag{19}\]
since there is no change of length in Riemann space. According to Eq. (8), we thus have,
\[\kappa_{\nu}=-\Phi_{,\nu}=-\frac{\partial\ln\lambda}{\partial x^{\nu}}\,. \tag{20}\]
This means that the metrical connection \(\kappa_{\nu}\) in WIG space is the gradient of a scalar field,
\[\Phi=\ln\lambda\,, \tag{21}\]
and \(\kappa_{\nu}dx^{\nu}\) is an exact differential,
\[\partial_{\nu}\kappa_{\mu}=\partial_{\mu}\kappa_{\nu}\,. \tag{22}\]
This implies that the parallel displacement of a vector along a closed loop does not change its length, see Eq. (18). This means that the change of the length does not depend on the path followed and thus Einstein's objection does not hold in WIG. Nevertheless, the nice mathematical tools of Weyl's geometry designed for preserving scale covariance (and invariance) also work in the integrable form of this geometry and this is what we are using in this work.
At this point, it is appropriate to comment on the relation between the SIV theory and the scalar-tensor theories. Alike these theories, but in the context of WIG, SIV enjoys the property of conformal scale invariance. In addition, there is also a coupled scalar field \(\Phi\), expressed in Eqs. (20) and (21). However, there is a major difference with scalar tensor theories. In these, the scalar field is generally defined in an independent way, it has some specific coupling to standard gravitation, but is not directly determined by the scale factor. On the contrary, in SIV theory the scalar field is entirely defined by the scale factor. Thus, there is no degree of freedom to choose the scalar field and the theory is very constrained.
Now, the question is what is then constraining the scale factor \(\lambda\) in SIV. Just as the field equation of GR needs the specification of the metric (Minkowski, Schwarzschild, FLRW, etc.) characterizing the physical system under investigation, SIV needs one more specification in the form of a gauging condition, necessary for fixing the gauge \(\lambda\). Canuto et al. (1977) have chosen "The Large Number Hypothesis" (Dirac, 1974), while a different choice is made in Sect. (2.4), see also Maeder (2017).
Many expressions were originally developed by Weyl (1923), Eddington (1923), then extended by Dirac (1973) and applied by Canuto et al. (1977) in the integrable form of Weyl's Geometry. On the whole, WIG together with coTensor calculus forms a complete scale covariant framework for gravitation.
### The geodesic and the field equations
The objective of this work is to show that the Newtonian approximation in SIV leads to the MOND rules within a well-defined approximation. For this purpose we need the geodesic equation in our WIG framework. It has been obtained in several consistent ways. - 1. It was first established by Dirac (1973) from an action principle. - 2. It was obtained by the co-covariant generalisation of the usual relation \(u^{\mu}_{;\nu}u^{\nu}=0\) (Canuto et al., 1977),
\[u^{\mu}_{*\nu}u^{\nu}=0. \tag{23}\]
- 3. The geodesic in WIG is the curve between two points so that the change of the length of a vector is minimum: \(\int_{P_{0}}^{P_{1}}\delta(d\ell)=0\) (Bouvier & Maeder, 1978).
- 4. As in GR, the geodesics in WIG are also direct consequences of the Equivalence Principle (Maeder & Bouvier, 1979) in the scale invariant framework,
\[\frac{d^{2}x^{\alpha}}{ds^{2}}+{}^{*}\Gamma^{\alpha}_{\mu\nu}\frac{dx^{\mu}}{ ds}\frac{dx^{\nu}}{ds}+\kappa_{\nu}\frac{dx^{\alpha}}{ds}\frac{dx^{\nu}}{ds}=0, \tag{24}\]
with \({}^{*}\Gamma^{\alpha}_{\mu\nu}\) given by Eq.(13). Thus, we get
\[\frac{du^{\alpha}}{ds}+\Gamma^{\alpha}_{\mu\nu}u^{\mu}u^{\nu}-\kappa_{\mu}u^{ \mu}u^{\alpha}+\kappa^{\alpha}=0, \tag{25}\]
with the velocity \(u^{\mu}=dx^{\mu}/ds\). This expression satisfying the requirement of scale invariance will be used to derive the weak field approximation (Sect. 4.1). At this stage, the expression of the metrical connexion \(\kappa_{\mu}\) is still unknown, since the gauge is not yet fixed.
The developments of cotensor calculus of Sect. (2.1) can be pursued, and they are leading to a corresponding Riemann-Christoffel tensor \({}^{*}R^{\mu}_{\nu\lambda\rho}\)(see Sect. 87 by Eddington (1923); Dirac (1973)),
\[{}^{*}R^{\mu}_{\nu\lambda\rho}=\frac{\partial\,{}^{*}\Gamma^{\mu}_{\nu\lambda}}{\partial x^{\rho}}-\frac{\partial\,{}^{*}\Gamma^{\mu}_{\nu\rho}}{\partial x^{\lambda}}+{}^{*}\Gamma^{\eta}_{\nu\lambda}\,{}^{*}\Gamma^{\mu}_{\eta\rho}-{}^{*}\Gamma^{\eta}_{\nu\rho}\,{}^{*}\Gamma^{\mu}_{\eta\lambda}\,. \tag{26}\]
The contracted Riemann-Christoffel tensor, or Ricci tensor, appears to be an intensor; it takes the cotensor form (Eddington, 1923; Dirac, 1973)
\[{}^{*}R_{\mu\nu}=R^{\prime}_{\mu\nu}-\kappa_{\mu;\nu}-\kappa_{\nu;\mu}-g_{\mu\nu}\kappa^{\alpha}_{\ ;\alpha}-2\kappa_{\mu}\kappa_{\nu}+2g_{\mu\nu}\kappa^{\alpha}\kappa_{\alpha}\,, \tag{27}\]
where \(R^{\prime}_{\mu\nu}\) is the usual expression in General Relativity. The total curvature \(R\) in the scale-invariant context is,
\[{}^{*}R={}^{*}R^{\alpha}_{\alpha}=R^{\prime}-6\kappa^{\alpha}_{\ ;\alpha}+6\kappa^{\alpha}\kappa_{\alpha}, \tag{28}\]
where \(R^{\prime}\) is the total curvature in a standard Riemann geometry. All these expressions, first and second derivatives, modified Christoffel symbols \({}^{*}\Gamma^{\alpha}_{\mu\nu}\), Riemann-Christoffel tensor \({}^{*}R^{\mu}_{\nu\lambda\rho}\), Ricci tensor and total curvature are scale invariant by construction. The summation and products are keeping the cotensoral character and scale covariance. They all contain additional terms depending on \(\kappa_{\nu}\). The major difference with these former references rests on the fact that the metrical connexion \(\kappa_{\nu}\) is the gradient of the scale factor \(\lambda\) by Eq. (20).
The scale invariant field equation has also been derived in different ways (Canuto et al., 1977): parallelly to GR but with cotensor expressions taking care of co-covariant expressions, and from an action principle (see below). In the first method, the first member \({}^{*}R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}{}^{*}R\) in the scale invariant context is obtained from Eqs. (27) and (28). Its scale invariant properties are ensured by the cotensoral nature of these expressions.
The second member of the field equation must also be scale invariant, implying that the product of the gravitational constant \(G\) and the energy-momentum tensor \(G\,T_{\mu\nu}\) should have the same property. Unlike Dirac (1974) and Canuto et al. (1977), \(G\) is considered as a constant as in Maeder (2017a). Thus, one has,
\[T_{\mu\nu}=T^{\prime}_{\mu\nu}\,. \tag{29}\]
This expression has implications on the behavior of the relevant densities. The scale invariance of tensor \(T_{\mu\nu}\) implies,
\[(p+\rho)u_{\mu}u_{\nu}-g_{\mu\nu}p=(p^{\prime}+\rho^{\prime})u^{\prime}_{\mu}u^{\prime}_{\nu}-g^{\prime}_{\mu\nu}p^{\prime}. \tag{30}\]
There, velocities \(u^{\mu}\) and \(u^{\prime}_{\mu}\) transform as follows,
\[u^{\prime\,\mu} = \frac{dx^{\mu}}{ds^{\prime}}=\lambda^{-1}\frac{dx^{\mu}}{ds}= \lambda^{-1}u^{\mu}\,,\] \[\text{and}\ u^{\prime}_{\mu} = g^{\prime}_{\mu\nu}u^{\prime\nu}=\lambda^{2}g_{\mu\nu}\lambda^{-1 }u^{\nu}=\lambda\,u_{\mu}\,. \tag{31}\]
The contravariant and covariant components of a vector have different power, we have seen above that their covariant derivatives are different. Thus, the energy-momentum tensor is scaling like,
\[(p+\rho)u_{\mu}u_{\nu}-g_{\mu\nu}p=(p^{\prime}+\rho^{\prime})\lambda^{2}u_{\mu} u_{\nu}-\lambda^{2}g_{\mu\nu}p^{\prime}\,, \tag{32}\]
implying the following behaviour for \(p\) and \(\rho\)(Canuto et al., 1977),
\[p=p^{\prime}\,\lambda^{2}\quad\text{and}\quad\rho=\rho^{\prime}\,\lambda^{2}\,. \tag{33}\]
The consistency of the field equation means that pressure and density are not scale invariant, but coscalars of power \(\Pi(\rho)=-2\).
The term with the cosmological constant in Einstein's equation of GR is \(\Lambda_{\rm E}g^{\prime}_{\mu\nu}\). According to Eq. (5), we have
\[\Lambda_{\rm E}g^{\prime}_{\mu\nu}=\Lambda_{\rm E}\lambda^{2}g_{\mu\nu}\equiv \Lambda g_{\mu\nu}. \tag{34}\]
Notation \(\Lambda_{\rm E}\) is adopted to avoid any confusion with \(\Lambda\) in WIG (\(\Lambda_{\rm E}\) is not necessarily the value of Einstein static model). In the SIV context, \(\Lambda=\lambda^{2}\Lambda_{\rm E}\), it is a coscalar of power \(\Pi(\Lambda)=-2\), quite consistently with the previous results for pressure and density. Also, this correspondence will insure the scale invariance of the second member of the general field equation. Assembling the above expressions developed in the WIG cotensoral context, we may express the corresponding scale invariant field equation (Canuto et al., 1977),
\[R^{\prime}_{\mu\nu}-\frac{1}{2}\,g_{\mu\nu}R^{\prime}-\kappa_{\mu;\nu}-\kappa_{\nu;\mu}-2\kappa_{\mu}\kappa_{\nu}+2g_{\mu\nu}\kappa^{\alpha}_{\ ;\alpha}-g_{\mu\nu}\kappa^{\alpha}\kappa_{\alpha}=\] \[-8\pi G\,T_{\mu\nu}-\lambda^{2}\Lambda_{\rm E}\,g_{\mu\nu}, \tag{35}\]
where \(G\) is a true constant, as seen above. This is a generalization of Einstein equation in WIG. In addition to the general covariance of Einstein equation, it is also scale invariant, since each term in the equation satisfies this requirement. The terms with a prime are the same as in GR, the equation contains additional terms depending on \(\kappa_{\nu}\). This coefficient will be determined by the gauging condition we may adopt for the physical system considered, in the same way as the \(g_{\mu\nu}\) and their sequence of derivatives (up to the Ricci tensor) are determined by the metric, _i.e._ some sort of topography of the space-time system.
The above coordinate and scale invariant field equation (35) has also been derived from an action principle. Such a
so-called co-covariant equation has been developed by Dirac (1973) for the vacuum, and thus it concerns the first member of the general field equation. As this was originally performed in the classical Weyl geometry, the additional field Dirac introduced in the action is independent of the scale factor \(\lambda\); in this sense Dirac's results belong to the scalar-tensor theories, as he himself pointed out.
The study by Canuto et al. (1977) was made in the line of that by Dirac (1973), with the difference that it is formally related to the WIG framework. The action is a generalization of Einstein action dealing with both coordinate and scale invariant equations,
\[I=\int\lambda^{2}\left(*R\right)\sqrt{g}\,d^{4}x. \tag{36}\]
The scale factor \(\lambda\) is a coscalar of power \(\Pi(\lambda)=-1\), while \((*R)\) (so noted as a coscalar) has a power \(\Pi(*R)=-2\), since it is a contraction of the intensor \(R_{\mu\nu}\). We also have \(\Pi(g_{\mu\nu})=2\), and thus \(g=\det(g_{\mu\nu})\) is a co-scalar of power \(\Pi(g)=8\). The multiplication by \(\lambda^{2}\) generates the invariant property. Terms being functions of \(\lambda\) and of its cotensor derivatives are added to the above equation, no other new field is introduced. The action principle writes,
\[\delta I=\delta\int\left(-\lambda^{2}*R+c_{1}\lambda^{*\mu}\lambda_{*\mu}+c_{2 }\lambda^{4}\right)\sqrt{g}\,d^{4}x=0\,, \tag{37}\]
where \(c_{1}\) and \(c_{2}\) are constants. The quartic term then is related to the cosmological constant. A matter Lagrangian \(\mathscr{L}\) may be included, its relation with the energy momentum tensor is,
\[G\,T^{\mu\nu}=\lambda^{-2}\,\frac{2}{\sqrt{g}}\,\frac{\delta\sqrt{g}\, \mathscr{L}}{\delta g_{\mu\nu}}\,, \tag{38}\]
with the term \(\lambda^{-2}\) for power consistency, while the Lagrangian density \(\mathscr{L}\) must be an inscalar expression. As a result, the development of the above expressions confirms the above scale invariant field equation (35) and, consistently enough, produces no new field equations.
### Fixing the gauge
Since scale covariance is considered in addition to the coordinate covariance of GR, an additional condition is necessary to specify the gauge \(\lambda\). Dirac (1973) and Canuto et al. (1977) had chosen the so-called _"Large Number Hypothesis"_, see also Dirac (1974). The author's choice is to adopt the following statement (Maeder 2017a): _The macroscopic empty space is scale invariant, homogeneous and isotropic._ This assumption is consistent with the scale invariance of GR in empty space and with Maxwell's equations in absence of charges and currents. Moreover, the equation of state of the vacuum \(p_{\rm vac}=-\rho_{\rm vac}c^{2}\) is precisely the relationship permitting the vacuum density to remain constant for an adiabatic expansion or contraction (Carroll et al. 1992).
Under the above key hypothesis, one is left from the general field Eq.(35) with the following condition for the empty spacetime,
\[\kappa_{\mu;\nu}+\kappa_{\nu;\mu}+2\kappa_{\mu}\,\kappa_{\nu}-2g_{\mu\nu}\kappa^{\alpha}_{\ ;\alpha}+g_{\mu\nu}\kappa^{\alpha}\kappa_{\alpha}=\lambda^{2}\Lambda_{\rm E}\,g_{\mu\nu}. \tag{39}\]
The geometrical terms \(R^{\prime}_{\mu\nu}\) and \(R^{\prime}\) of the field equation have disappeared, since the de Sitter metric for an empty space endowed with a cosmological constant is conformal to the Minkowski metric where \(R^{\prime}_{\mu\nu}=0\) and \(R^{\prime}=0\). The conformal relation becomes an identity if \(3\,\lambda^{-2}/(\Lambda_{\rm E}\,c^{2}t^{2})=1\) (Maeder 2017a), a condition which is noticeably fully consistent with the solution of (39), as shown below.
We assume that the macroscopic empty space characterized by the above equation is homogeneous and isotropic. This is consistent with the hypothesis that the scale factor \(\lambda\) is a function of time only. Thus, only the zero component of \(\kappa_{\mu}\) is non-vanishing and the coefficient of metrical connection becomes,
\[\kappa_{\mu;\nu}\rightarrow\kappa_{0;0}=\partial_{0}\kappa_{0}=\frac{d\kappa_{0}}{dt}\equiv\dot{\kappa}_{0}\,,\qquad\text{with}\quad\kappa_{0}=-\frac{\dot{\lambda}}{\lambda}\,. \tag{40}\]
The \(0\) and the \(1\), \(2\), \(3\) components of what remains from Eq. (39) become respectively:
\[3\kappa_{0}^{2}=\lambda^{2}\,\Lambda_{\rm E}\,,\quad\mbox{and}\,\,2\dot{ \kappa}_{0}-\kappa_{0}^{2}=-\lambda^{2}\Lambda_{\rm E}\,. \tag{41}\]
and we get the two most important equations,
\[3\,\frac{\dot{\lambda}^{2}}{\lambda^{2}}=\lambda^{2}\,\Lambda_{\rm E}\,\quad \mbox{and}\quad 2\frac{\ddot{\lambda}}{\lambda}-\frac{\dot{\lambda}^{2}}{ \lambda^{2}}=\lambda^{2}\,\Lambda_{\rm E}\,, \tag{42}\]
or some combinations of them. These two expressions have important consequences:
- In GR, \(\Lambda_{\rm E}\) and the properties of the empty space are considered not to depend on the matter content of the Universe. The same applies to the above two equations and to the scale factor \(\lambda\). This is also consistent with the fact that matter density does not appear in these two equations.
- These differential equations establish a relation of the cosmological constant, or the energy density of the vacuum, with the scale factor \(\lambda\) and its variations.
- From the relation \(\Lambda=8\pi G\rho_{\rm vac}\) and the first of the Eqs. (42), the energy density of the vacuum can be expressed in term of a scalar field \(\psi\),
\[\rho=\frac{1}{2}\,C\,\dot{\psi}^{2}\quad\mbox{with}\,\,\dot{\psi}=\kappa_{0} =-\frac{\dot{\lambda}}{\lambda}\,. \tag{43}\]
with constant \(C=3/(4\pi G)\). The field \(\psi\) obeys a modified Klein-Gordon equation (Maeder and Gueorguiev 2021a).
- For a solution of Eqs. (42) of the form \(\lambda=a(t-b)^{n}+d\), we get \(d=0\), \(n=-1\) with \(a=\sqrt{\frac{3}{\Lambda_{\rm E}}}\). There is no condition on \(b\) from Eqs.(42), any value would fit. However, the solutions of the cosmological equations may put some conditions on the origin of time, depending on model parameters, see Sect. 3. Thus, with this remark the general solution is,
\[\lambda(t)=\sqrt{\frac{3}{\Lambda_{\rm E}}}\,\frac{1}{c\,t}\,. \tag{44}\]
Thus, if we adopt the scale factor \(\lambda_{0}=1\) at the present time \(t_{0}=1\), we just have \(\lambda(t)=(t_{0}/t)\) in a system of units where \(\sqrt{\frac{3}{c^{2}\Lambda_{\rm E}}}=1\).
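This solution can be checked symbolically; the following short sketch (not from the paper) assumes units with \(c=1\), so that \(\lambda(t)=\sqrt{3/\Lambda_{\rm E}}/t\), and verifies that both Eqs. (42) are satisfied.

```python
# Symbolic check (sympy, units with c = 1 assumed) that
# lambda(t) = sqrt(3/Lambda_E) / t solves both equations (42):
#   3 (lam'/lam)^2 = Lambda_E lam^2
#   2 lam''/lam - (lam'/lam)^2 = Lambda_E lam^2
import sympy as sp

t, Lam = sp.symbols('t Lambda_E', positive=True)
lam = sp.sqrt(3 / Lam) / t
eq1 = sp.simplify(3 * (sp.diff(lam, t) / lam) ** 2 - Lam * lam ** 2)
eq2 = sp.simplify(2 * sp.diff(lam, t, 2) / lam
                  - (sp.diff(lam, t) / lam) ** 2 - Lam * lam ** 2)
print(eq1, eq2)   # both print 0
```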
## 3 Cosmological solutions and their implications
The application of the FLRW metric to Eq. (35) leads to the cosmological equations (Canuto et al., 1977),
\[\frac{8\,\pi G\rho}{3}=\frac{k}{a^{2}}+\frac{\dot{a}^{2}}{a^{2}}+2\,\frac{\dot{\lambda}}{\lambda}\frac{\dot{a}}{a}+\frac{\dot{\lambda}^{2}}{\lambda^{2}}-\frac{\Lambda_{\rm E}\lambda^{2}}{3}\,, \tag{45}\] \[-8\,\pi Gp=\frac{k}{a^{2}}+2\frac{\ddot{a}}{a}+2\frac{\ddot{\lambda}}{\lambda}+\frac{\dot{a}^{2}}{a^{2}}+4\,\frac{\dot{a}}{a}\frac{\dot{\lambda}}{\lambda}-\frac{\dot{\lambda}^{2}}{\lambda^{2}}-\Lambda_{\rm E}\lambda^{2}\,,\] (46) \[\text{with}\ \ \rho\,a^{3(1+c_{s}^{2})}\,\lambda^{1+3c_{s}^{2}}=\text{const}. \tag{47}\]
The last expression is the conservation law, with a sound velocity \(c_{s}^{2}=0\) for a dust model and \(c_{s}^{2}=1/3\) for the radiative era. If \(\lambda\) is a constant, the derivatives of \(\lambda\) vanish and one is brought back to the equations of GR. Solutions of these equations have been searched by Canuto et al. (1977) for two variant cases of the Large Number Hypothesis. The major point is that the above equations are leading to solutions characterized by expansion factors \(a(t)\sim t\) at large cosmological times, a situation no longer supported nowadays.
Remarkably, the gauging condition, which implies Eqs. (42), leads to major simplifications of Eqs. (45) and (46), which become (Maeder, 2017),
\[\frac{8\,\pi G\rho}{3}=\frac{k}{a^{2}}+\frac{\dot{a}^{2}}{a^{2}}+2\,\frac{\dot{a}}{a}\frac{\dot{\lambda}}{\lambda}\,, \tag{48}\] \[-8\,\pi Gp=\frac{k}{a^{2}}+2\frac{\ddot{a}}{a}+\frac{\dot{a}^{2}}{a^{2}}+4\,\frac{\dot{a}}{a}\frac{\dot{\lambda}}{\lambda}\,. \tag{49}\]
For a constant \(\lambda\), Friedmann's equations are recovered. A third equation may be derived from the above two,
\[-\frac{4\pi G}{3}\left(3p+\rho\right)=\frac{\ddot{a}}{a}+\frac{\dot{a}}{a}\frac{\dot{\lambda}}{\lambda}\,. \tag{50}\]
Since \(\dot{\lambda}/\lambda\) is negative, the extra term represents an additional acceleration _in the direction of motion_. This effect of the scale invariance is fundamentally different from that of the cosmological constant. Now, for an expanding Universe, this extra force produces an accelerated expansion, without requiring dark energy particles. For a contraction, the additional term favours collapse, as shown in the study of the growth of density fluctuations (Maeder and Gueorguiev, 2019), where an early formation of galaxies is resulting without the need of dark matter.
The solutions of these equations have been discussed in detail in Maeder (2017), together with various cosmological properties concerning the Hubble-Lemaitre and deceleration parameters, the cosmological distances and different cosmological tests. The redshift drifts appear as one of the most promising cosmological tests (Maeder and Gueorguiev, 2020). Here, we limit the discussion to a few points pertinent to the subject of the paper. Analytical solutions for the flat SIV models with \(k=0\), considered here, have been found for the matter (Jesus, 2018) and radiation (Maeder, 2019) dominated models. In the former case, we have
\[a(t)=\left[\frac{t^{3}-\Omega_{\rm m}}{1-\Omega_{\rm m}}\right]^{2/3}. \tag{51}\]
It is expressed in the timescale \(t\) where at present \(t_{0}=1\) and \(a(t_{0})=1\). Such solutions are illustrated in Fig. 1, top. They are lying relatively close to the \(\Lambda\)CDM ones, the differences being larger for lower \(\Omega_{\rm m}\). This is a general property: _the effects of scale invariance are always larger for the lower matter densities, being the largest ones for the empty space_. There \(\Omega_{\rm m}=\rho/\rho_{\rm c}\) with \(\rho_{\rm c}=3H_{0}^{2}/(8\pi G)\). Remarkably, Eqs. (48) and (49) allow flatness for different values of \(\Omega_{\rm m}\), unlike the classical Friedmann models.
The initial time when \(a(t_{\rm in})=0\) is,
\[t_{\rm in}=\Omega_{\rm m}^{1/3}\,. \tag{52}\]
This dependence in \(1/3\) produces a rapid increase of \(t_{\rm in}\) for increasing \(\Omega_{\rm m}\). For \(\Omega_{\rm m}=0,0.01,0.1,0.3,0.5\), the values of \(t_{\rm in}\) are 0, 0.215, 0.464, 0.669, 0.794 respectively. The key point is that this leads to a strong reduction of the range of \(\lambda(t)\) for increasing \(\Omega_{\rm m}\) (Fig. 1, bottom): while the range of \(\lambda\) is infinite for an empty model, it is very limited for significant \(\Omega_{\rm m}\)-values. Thus, the presence of matter through \(\Omega_{\rm m}\) drastically reduces the range of variation of the universal scale factor \(\lambda\). For \(\Omega_{\rm m}>1\) scale invariance is killed, which makes sense in view of the remarks by Feynman (1963). This is a global effect associated to the range of \(\lambda\) in Universe models. This does
not prevent other effects due to local variations of density to also intervene, as shown in Sect. (4.3), see Eq. (74).
The Hubble parameter is, in the timescale \(t\) (which goes from \(t_{\rm in}\) at Big-Bang to \(t_{0}=1\) at present),
\[H(t)=\frac{2t^{2}}{t^{3}-\Omega_{\rm m}}\,. \tag{53}\]
From Eqs. (51) and (53), we see that there is no meaningful scale invariant solution for an expanding Universe model with \(\Omega_{\rm m}\) equal to or larger than \(1\). Thus, the model solutions are quite consistent with the causality relations discussed by Maeder and Gueorguiev (2021).
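The following short sketch (not from the paper; flat SIV dust models with \(t_{0}=1\) assumed) evaluates Eqs. (51)-(53) together with \(\lambda(t)=1/t\) and reproduces the \(t_{\rm in}\) values quoted above.

```python
# Sketch (flat SIV dust model, t0 = 1): evaluate t_in, a(t), H(t), lambda(t)
# from Eqs. (51)-(53) and reproduce the t_in values quoted in the text.
import numpy as np

def t_in(Om):            # Eq. (52)
    return Om ** (1.0 / 3.0)

def a(t, Om):            # Eq. (51)
    return ((t ** 3 - Om) / (1.0 - Om)) ** (2.0 / 3.0)

def H(t, Om):            # Eq. (53), in the t timescale
    return 2.0 * t ** 2 / (t ** 3 - Om)

def lam(t):              # scale factor with lambda(t0 = 1) = 1
    return 1.0 / t

for Om in (0.01, 0.10, 0.30, 0.50):
    print(Om, round(t_in(Om), 3))        # 0.215, 0.464, 0.669, 0.794

# consistency check at the present epoch: a(t0) = 1 and lambda(t0) = 1
print(np.isclose(a(1.0, 0.3), 1.0), np.isclose(lam(1.0), 1.0))
```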
One can also define
\[\Omega_{\rm k}=-\frac{k}{a^{2}H_{0}^{2}}\quad\mbox{and}\quad\Omega_{\lambda}=- \frac{2}{H_{0}}\left(\frac{\dot{\lambda}}{\lambda}\right)_{0}=\frac{2}{H_{0}t _{0}}\,. \tag{54}\]
These are the normalized contributions vs. \(\rho_{\rm c}\) respectively of the matter, space curvature, and scale factor \(\lambda\). With these definitions, the first cosmological equation (48) leads to,
\[\Omega_{\rm m}+\Omega_{\rm k}+\Omega_{\lambda}=1\,. \tag{55}\]
These quantities are usually considered at the present time.
## 4 The Newtonian and MOND approximations
### The weak field approximation
The scale invariant expression of the geodesic equation was derived by Dirac (1973) and also from an action principle by Bouvier and Maeder (1978), see Sect. 2.3. The weak field low velocity approximation has been obtained by Maeder and Bouvier (1979), see also Maeder (2017c),
\[\frac{d^{2}{\bf r}}{dt^{2}}=-\frac{G_{\rm r}\,M(t)}{r^{2}}\,\frac{{\bf r}}{r}+ \kappa(t)\,\frac{d{\bf r}}{dt}\,, \tag{56}\]
in spherical coordinates. It contains an additional acceleration term in the direction of motion, _the dynamical gravity_. This term, proportional to the velocity, favours collapse during a contraction and outward motion during an expansion.
The conservation law (Eq. 47) imposes for a dust Universe a relation \(\rho a^{3}\lambda=const.\), meaning that the inertial mass of a particle is not a constant and that it depends on the scale factor \(\lambda\). We note that the non-constancy of mass is also a common situation in Special Relativity. Here, masses vary like \(M(t)=M(t_{0})(t/t_{0})\). Interestingly enough, rather than the inertial and gravitational mass, the gravitational potential \(\Phi=GM/r\) of an object, thus the field, appears as a more fundamental quantity, being scale-invariant through the evolution of Universe. As an example, for \(\Omega_{\rm m}=0.3\), the mass at the Big-Bang was, \(M(t_{\rm in})=\Omega_{\rm m}^{1/3}M(t_{0})=0.6694\,M(t_{0})\), the variations are smaller than \(1\%\) over the last \(400\) million years (Sect.4.2).
In the above cosmological models, the age \(t\) is \(t_{0}=1\) at present and \(t_{\rm in}=\Omega_{\rm m}^{1/3}\) at the origin. The usual timescale \(\tau\) in years or seconds is \(\tau_{0}=13.8\) Gyr at present (Frieman et al., 2008) and \(\tau_{\rm in}=0\) at the Big-Bang. Thus, the relation between these ages is,
\[\frac{\tau-\tau_{\rm in}}{\tau_{0}-\tau_{\rm in}}=\frac{t-t_{\rm in}}{t_{0}-t _{\rm in}}\,, \tag{57}\]
expressing that the age fraction with respect to the present age is the same in both timescales. This gives
\[\tau=\tau_{0}\,\frac{t-\Omega_{\rm m}^{1/3}}{1-\Omega_{\rm m}^{1/3}}\quad \mbox{and}\,\,\,t=\Omega_{\rm m}^{1/3}+\frac{\tau}{\tau_{0}}(1-\Omega_{\rm m} ^{1/3})\,, \tag{58}\]
and for the derivatives,
\[\frac{d\tau}{dt}=\frac{\tau_{0}}{1-\Omega_{\rm m}^{1/3}}\,,\quad\mbox{and} \,\,\frac{dt}{d\tau}=\frac{1-\Omega_{\rm m}^{1/3}}{\tau_{0}}\,. \tag{59}\]
For larger \(\Omega_{\rm m}\), the timescale \(t\) is squeezed into a smaller fraction of the interval \(0\) to \(1.0\) (which reduces the range of \(\lambda\) over the ages).
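For concreteness, the conversions of Eqs. (58)–(59) can be written as a small helper (a sketch only; \(\tau_{0}=13.8\) Gyr is taken from the text):

```python
# Sketch of the t <-> tau conversions of Eqs. (58)-(59).
TAU_0_GYR = 13.8   # present age of the Universe, as adopted in the text

def t_from_tau(tau_gyr, omega_m):
    """Dimensionless SIV time t corresponding to a usual age tau (Gyr), Eq. (58)."""
    t_in = omega_m ** (1.0 / 3.0)
    return t_in + (tau_gyr / TAU_0_GYR) * (1.0 - t_in)

def tau_from_t(t, omega_m):
    """Usual age tau (Gyr) corresponding to a dimensionless SIV time t, Eq. (58)."""
    t_in = omega_m ** (1.0 / 3.0)
    return TAU_0_GYR * (t - t_in) / (1.0 - t_in)

# For Omega_m = 0.3: the present epoch is t = 1, half the present age is t ~ 0.83.
print(t_from_tau(13.8, 0.3), round(t_from_tau(6.9, 0.3), 3))
```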
We need to convert the equation of motion (56) expressed with variable \(t\) into the usual time \(\tau\). Equation (56) becomes,
\[\frac{d^{2}{\bf r}}{d\tau^{2}}\left(\frac{d\tau}{dt}\right)^{2}=-\frac{G_{\rm r }\,M(t)}{r^{2}}\,\frac{{\bf r}}{r}+\frac{1}{t_{\rm in}+\frac{\tau}{\tau_{0}}(t _{0}-t_{\rm in})}\,\frac{d\tau}{dt}\frac{d{\bf r}}{d\tau}\,. \tag{60}\]
Here \(G_{\rm r}\) is used to specify that the gravitational constant is expressed with time units \(t\). In the \(\tau\)-scale, the units of \(G\) are \([cm^{3}\cdot g^{-1}s^{-2}]\); thus, the correspondence is \(G=G_{\rm r}\left(\frac{dt}{d\tau}\right)^{2}\). At present, the masses \(M(t_{0})\) and \(M(\tau_{0})\) are evidently equal. At other epochs, the relation is,
\[M(t)=\frac{t}{t_{0}}M(t_{0}),\,\mbox{thus}\,M(\tau)=\left[\Omega_{\rm m}^{1/3} +\frac{\tau}{\tau_{0}}(1-\Omega_{\rm m}^{1/3})\right]M(\tau_{0}). \tag{61}\]
Now, multiplying both members of Eq. (60) by \(\left(\frac{dt}{d\tau}\right)^{2}\), we get at time \(\tau/\tau_{0}\),
\[\frac{d^{2}{\bf r}}{d\tau^{2}}=-\frac{GM(\tau)}{r^{2}}\,\frac{{\bf r}}{r}+ \frac{1}{t_{\rm in}+\frac{\tau}{\tau_{0}}(t_{0}-t_{\rm in})}\,\frac{t_{0}-t_{ \rm in}}{\tau_{0}}\,\frac{d{\bf r}}{d\tau}\,. \tag{62}\]
We define the numerical factor \(\psi\),
\[\psi(\tau)=\frac{t_{0}-t_{\rm in}}{t_{\rm in}+\frac{\tau}{\tau_{0}}(t_{0}-t_{ \rm in})}\,\mbox{; thus}\,\,\psi_{0}=\psi(\tau_{0})=1-\Omega_{\rm m}^{1/3}\,. \tag{63}\]
The modified Newton's equation at present time \(\tau_{0}\) is then,
\[\frac{d^{2}{\bf r}}{d\tau^{2}}=-\frac{GM(\tau_{0})}{r^{2}}\,\frac{{\bf r}}{r}+ \frac{\psi_{0}}{\tau_{0}}\frac{d{\bf r}}{d\tau}\,. \tag{64}\]
The additional term, _the dynamical gravity_, which is generally extremely small (cf. Eq. 74), also depends on \(\Omega_{\rm m}\): in an empty Universe, \(\psi_{0}=1\) and the effect is maximum, while for \(\Omega_{\rm m}=1\) one consistently has \(\psi_{0}=0\) and scale invariance has no effect. For \(\Omega_{\rm m}=0.30\), \(0.20\), \(0.10\) and \(0.05\) one has \(\psi_{0}=0.331,0.415,0.536\) and \(0.632\), showing how an increasing matter content reduces the dynamical gravity.
### The MOND approximation: first approach
Let us first examine how the scale factor \(\lambda(\tau)\), and consequently the masses \(M(\tau)\), vary over the past. Taking the case \(\Omega_{\rm m}=0.30\) as an example, over the last \(100\) Myr, \(200\) Myr and \(0.5\) Gyr the mass increase predicted by Eq. (61) amounts to a factor of \(1.0024\), \(1.0048\) and \(1.012\) respectively. For \(\Omega_{\rm m}=0.10\), these values would be \(1.0039\), \(1.0078\) and \(1.020\).
Thus, for galaxies where the rotation periods are a few hundred million years, we can consider that both \(\lambda\) and the masses are constant with an accuracy equal to or better than \(1\%\). Indeed, this is quite consistent with MOND, which is also known to be scale invariant with a constant scale factor \(\lambda\) (Milgrom, 2015). Such an approximation is much less satisfactory for clusters of galaxies where the time scales, _e.g._ the crossing times, are of the order of a few Gyr.
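The factors quoted above follow directly from Eq. (61); a minimal numerical check (sketch only, again with \(\tau_{0}=13.8\) Gyr) reads:

```python
# Check of the mass-increase factors M(tau_0) / M(tau_0 - lookback) from Eq. (61).
TAU_0_GYR = 13.8

def mass_increase_factor(lookback_gyr, omega_m):
    t_in = omega_m ** (1.0 / 3.0)
    t_past = t_in + ((TAU_0_GYR - lookback_gyr) / TAU_0_GYR) * (1.0 - t_in)
    return 1.0 / t_past        # M(t) is proportional to t, and t_0 = 1 at present

for omega_m in (0.30, 0.10):
    factors = [mass_increase_factor(dt, omega_m) for dt in (0.1, 0.2, 0.5)]
    print(omega_m, ["%.4f" % f for f in factors])
# -> roughly 1.0024, 1.0048, 1.0121 for Omega_m = 0.30
#    and     1.0039, 1.0078, 1.0198 for Omega_m = 0.10
```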
With a constant \(\lambda\), the coefficient \(\kappa=-\dot{\lambda}/\lambda\) is equal to zero and the dynamical gravity in Eq. (64) disappears. We are left with the transformations \(r=\lambda\,r^{\prime}\) and \(t=\lambda\,t^{\prime}\) with a constant \(\lambda\) (and thus \(M\)), applied to the Newton equation expressed in the primed coordinates,
\[\frac{d^{2}r^{\prime}}{dt^{\prime 2}}=-\frac{GM}{r^{\prime 2}}\equiv g^{\prime}_{N}. \tag{65}\]
The total acceleration \(g\) in system (\(r\),\(t\)) becomes,
\[g=\frac{d^{2}r}{dt^{2}}=\frac{1}{\lambda}\,\frac{d^{2}r^{\prime}}{dt^{\prime 2 }}\,. \tag{66}\]
The Newtonian gravitational accelerations \(g_{N}\) and \(g^{\prime}_{N}\) are related by,
\[g_{N}\equiv-\frac{GM}{r^{2}}=-\frac{1}{\lambda^{2}}\,\frac{GM}{r^{\prime 2}} \equiv\frac{1}{\lambda^{2}}\,g^{\prime}_{N}\,. \tag{67}\]
Eq. (66) can thus be developed as follows,
\[g=\frac{d^{2}r}{dt^{2}}=\frac{1}{\lambda}\,\frac{d^{2}r^{\prime}}{dt^{\prime 2}}=\frac{1}{\lambda}\,g^{\prime}_{N}=\lambda\,g_{N}, \tag{68}\]
according to (67). This last relation also implies,
\[g=\frac{d^{2}r}{dt^{2}}=\lambda\,g_{N}=\left(\frac{g^{\prime}_{N}}{g_{N}} \right)^{1/2}g_{N}=\left(g^{\prime}_{N}g_{N}\right)^{1/2}. \tag{69}\]
This is to be compared to the deep-MOND limit given by Eq. (1). We note a correspondence between the constant \(a_{0}\) and \(g^{\prime}_{N}\). At this stage, we have no information on what kind of value should be used for \(g^{\prime}_{N}\), nor for what range of gravities it may apply. In a second approach below, we will get more information on these points. For now, we note that the approximation of constant \(\lambda\) and masses over a few hundred million years in the scale invariant theory just leads to a form analogous to the deep-MOND limit.
The constant \(a_{0}\) is related to \(cH_{0}\) (Milgrom, 1983, 2015). We may wonder why a constant, considered as a universal constant and thus the same at any time, should just be related to the present value \(H_{0}\), see also Milgrom (2020). This strongly suggests that \(a_{0}\), like \(\lambda\), is in fact time dependent, contrary to the MOND assumption, but in agreement with the scale invariant theory.
### The MOND approximation: second approach
We may also derive the MOND behaviour at very low gravities from the equation of motion Eq. (64). The ratio \(x\) of the radial components of the dynamical gravity to the Newtonian one is given by,
\[x=\frac{\psi_{0}\,\upnu\,r^{2}}{\tau_{0}\,G\,M}\,. \tag{70}\]
where \(\upnu\) is the radial component of the velocity. The ratio \(x\) may become larger than 1 in two particular cases:
- 1. In very early stages of the Universe, \(\tau_{0}\) is very small and favours a large dynamical acceleration \((\psi\,\upnu)/\tau_{0}\) in the direction of motion. This effect is likely to have favoured early galaxy formation without the need for dark matter (Maeder and Gueorguiev, 2019).
- 2. At large distances from a central body, the Newtonian gravity \(g_{N}\) may become smaller than \((\psi\,\upnu)/\tau_{0}\). This may typically occur in very wide binaries and in the outer layers of galaxies and clusters. This situation is favoured by the fact that both the deep-MOND limit and the SIV theory predict that the orbital velocities in two-body systems are independent of the orbital radius (Milgrom, 2014; Maeder and Bouvier, 1979). Thus, in both cases larger orbital velocities may favour the dynamical acceleration \((\psi\,\upnu)/\tau_{0}\) in the outer layers of gravitational systems.
We use the fact that \(H_{0}=\xi/\tau_{0}\) and \(\rho_{\rm c}=\frac{3H_{0}^{2}}{8\pi G}\) to express \(\tau_{0}\) in terms of \(\rho_{\rm c}\) in Eq. (70). To obtain \(\xi\), we use the following expression of the Hubble–Lemaître parameter (see Appendix),
\[H(\tau_{0}) = \frac{2}{1-\Omega_{\rm m}}\,\frac{(1-\Omega_{\rm m}^{1/3})}{\tau_ {0}}\,, \tag{71}\] \[\mbox{thus }\xi=\frac{2\,(1-\Omega_{\rm m}^{1/3})}{(1-\Omega_{\rm m})}. \tag{72}\]
The ratio \(\xi\) is about unity in the SIV theory: for \(\Omega_{\rm m}=0.10,0.20\) or \(0.30\), one has \(\xi=1.191,1.038\) or \(0.945\) respectively. We consider a mass \(M\) spherically distributed within a radius \(r\) with a mean density \(\rho\) and get,
\[x=\frac{\sqrt{2}\,\psi_{0}}{\xi}\left(\frac{\upnu^{2}}{(GM/r)}\frac{\rho_{\rm c }}{\rho}\right)^{1/2}. \tag{73}\]
Let us consider a two-body system formed by a massive central mass \(M\) and a test object of negligible mass, where the instantaneous equilibrium of forces determined by Eq. (64) along the radial direction is just \(\frac{\upnu^{2}}{r}=\frac{GM}{r^{2}}\), since the additional dynamical acceleration is in the direction of the motion3. Thus, in the external regions of galaxies, where normally the motion should be mainly determined by the central mass concentration, the ratio \(x\) becomes,
Footnote 3: When one considers the average velocity dispersion in a long-time evolution, this type of relation does not necessarily hold any more (Maeder, 2017c).
\[x=\frac{\sqrt{2}\,\psi_{0}}{\xi}\left(\frac{\rho_{\rm c}}{\rho}\right)^{1/2}. \tag{74}\]
For high values of the density \(\rho\) with respect to the critical density, the \(x\)-parameter becomes negligible and the gravity is just determined by the usual Newton law. At the edge of the sphere of radius \(r\) and mean density \(\rho\), the Newtonian gravity is \(g_{N}=(4/3)\pi G\rho\,r\), so that we also have,
\[x=\frac{\sqrt{2}\,\psi_{0}}{\xi}\left(\frac{g_{\rm c}}{g_{\rm N}}\right)^{1/2}, \tag{75}\]
where \(g_{\rm c}\) is the mean gravity at the edge of a similar sphere having the critical density. Thus, the total gravity \(g\) given by the first member of Eq. (64) is
\[g=g_{\rm N}+xg_{\rm N}=g_{\rm N}\left[1+\frac{\sqrt{2}\,\psi_{0}}{\xi}\left( \frac{g_{\rm c}}{g_{\rm N}}\right)^{1/2}\right]. \tag{76}\]
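To illustrate the behaviour of Eqs. (74)–(76), a small numerical sketch (with, for example, \(\Omega_{\rm m}=0.3\) and the mean density expressed in units of \(\rho_{\rm c}\)) could read:

```python
import math

# Sketch of Eqs. (74)-(76): total gravity g = g_N (1 + x) as a function of the
# mean density rho expressed in units of the critical density rho_c.
def x_parameter(rho_over_rho_c, omega_m=0.3):
    psi_0 = 1.0 - omega_m ** (1.0 / 3.0)                          # Eq. (63)
    xi = 2.0 * (1.0 - omega_m ** (1.0 / 3.0)) / (1.0 - omega_m)   # Eq. (72)
    return math.sqrt(2.0) * psi_0 / xi / math.sqrt(rho_over_rho_c)

for rho in (1.0e6, 1.0e3, 1.0, 1.0e-2):
    x = x_parameter(rho)
    print(f"rho/rho_c = {rho:8.0e}:  x = {x:9.3g},  g/g_N = {1.0 + x:9.3g}")
# x is negligible at high densities (Newtonian regime) and exceeds unity only
# for mean densities comparable to or below the critical density.
```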
Let us consider regions at large distances from the galactic center, or in a binary system at large distances from the other body. There, the Newtonian gravity \(g_{\rm N}\), resulting from the attraction of the central or other body, can be counterbalanced by the gravity of external galaxies or other star systems, so that the resulting \(g_{\rm N}\) essentially vanishes and \(x\) becomes larger than 1 over large regions. When this happens, we get a total resulting gravity \(g\) behaving like,
\[g\rightarrow\frac{\sqrt{2}\,\psi_{0}}{\xi}\,(g_{\rm c}\,g_{\rm N})^{1/2}\,. \tag{77}\]
The numerical factor \(\frac{\sqrt{2}\,\psi_{0}}{\xi}\) becomes,
\[\frac{\sqrt{2}\,\psi_{0}}{\xi}=\frac{\sqrt{2}\,(1-\Omega_{\rm m}^{1/3})}{H(\tau_{0})\,\tau_{0}}=\frac{(1-\Omega_{\rm m})}{\sqrt{2}}\,, \tag{78}\]
where we have used Eq. (72) for \(\xi\). For \(\Omega_{\rm m}=0\), 0.1, 0.2, 0.3, 0.5, one has \(\frac{\sqrt{2}\,\psi_{0}}{\xi}=0.707\), 0.636, 0.566, 0.495, 0.354. Thus, the weak field equation (64) leads to relation (77), equivalent to the deep-MOND limit \(g=\sqrt{a_{0}\,g_{\rm N}}\). In this second approach, we may learn much more about the significance of \(a_{0}\) and its numerical value.
### The significance of the \(a_{0}\)-parameter
We have the correspondence
\[a_{0}\Longleftrightarrow\frac{(1-\Omega_{\rm m})^{2}}{2}g_{\rm c}\,. \tag{79}\]
We may express the limiting value \(g_{\rm c}\) in terms of the critical density within the radius \(r_{\rm H_{0}}\) of the Hubble sphere, defined by \(nc=r_{\rm H_{0}}H_{0}\). There, \(n\) depends on the cosmological model. As an example, for the EdS model, \(n=2\); for SIV or \(\Lambda\)CDM models with \(\Omega_{\rm m}=0.2-0.3\), the initial braking and the recent acceleration almost compensate each other, so that \(n\simeq 1\). We get,
\[a_{0}=\frac{(1-\Omega_{\rm m})^{2}}{2}\,\frac{4\pi}{3}G\rho_{\rm c}r_{\rm H_{0}}=\frac{(1-\Omega_{\rm m})^{2}}{4}\,n\,c\,H_{0}, \tag{80}\] \[\qquad{\rm or}\quad a_{0}=\frac{nc\,(1-\Omega_{\rm m})(1-\Omega_{\rm m}^{1/3})}{2\,\tau_{0}}\,. \tag{81}\]
The product \(cH_{0}\) is equal to \(6.80\times 10^{-8}\) cm s\({}^{-2}\). For \(\Omega_{\rm m}\)=0, 0.10, 0.20, 0.30 and 0.50, we get \(a_{0}\approx\) (1.70, 1.36, 1.09, 0.83, 0.43) \(\times 10^{-8}\) cm s\({}^{-2}\) respectively. These values obtained from the SIV theory are remarkably close to the value \(a_{0}\approx 1.2\times 10^{-8}\) cm s\({}^{-2}\) derived from observations by Milgrom (2015).
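These numbers can be reproduced in a few lines (a sketch taking \(n\simeq 1\) and \(cH_{0}=6.80\times 10^{-8}\) cm s\({}^{-2}\), as in the text):

```python
# Check of a_0 = (1 - Omega_m)^2 / 4 * n * c * H_0 (Eq. 80), with n ~ 1.
C_H0 = 6.80e-8    # c * H_0 in cm s^-2, as quoted in the text

for omega_m in (0.0, 0.10, 0.20, 0.30, 0.50):
    a0 = (1.0 - omega_m) ** 2 / 4.0 * C_H0
    print(f"Omega_m = {omega_m:4.2f}:  a_0 = {a0:.2e} cm s^-2")
# -> about 1.7e-08, 1.4e-08, 1.1e-08, 8.3e-09 and 4.3e-09 cm s^-2, close to
#    the observational value of ~1.2e-08 cm s^-2.
```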
Let us make several remarks on the \(a_{0}\)-parameter and its meaning:
- 1. The equation of the deep-MOND limit is reproduced by the SIV theory both analytically and numerically if \(\lambda\) and \(M\) can be considered as constant. This may apply to systems with a typical dynamical timescale up to a few hundred million years.
- 2. Parameter \(a_{0}\) is not a universal constant. It depends on the Hubble–Lemaître parameter \(H_{0}\) (or the age of the Universe) and on \(\Omega_{\rm m}\) in the model Universe, cf. Eq. (80). The value of \(a_{0}\) applies to the present epoch.
- 3. Parameter \(a_{0}\) is defined by the condition that \(x>1\), _i.e._ when the dynamical gravity \((\psi_{0}\,\upnu)/\tau_{0}\) in the equation of motion (Eq. 64) becomes larger than the Newtonian gravity. This situation may occur over large regions at the edge of gravitational systems.
## 5 Conclusions
The basic properties of SIV and its relations with the scalar-tensor theories of gravity have been reviewed. Similarities and differences have been highlighted. The deep-MOND limit is found to be an approximation of the SIV theory for low enough densities and for systems with timescales smaller than a few hundred Myr.
SIV theory preserves the physical properties of General Relativity and enlarges the group of symmetries by the inclusion of scale covariance (Dirac, 1973). In fact, this also avoids the call to dark matter and, at the same time, as shown by Fig. 1, the SIV theory predicts an accelerated expansion. Results of a number of cosmological and astrophysical applications have been quoted in the introduction.
Finally, while the present approach may for the moment look "non-standard" or out of the mainstream, we point out that the present work is not in contradiction with the following statement by Einstein (1949): _"...the existence of rigid standard rulers is an assumption suggested by our approximate experience, assumption which is arbitrary in its principle"_.
## Acknowledgements
I express my gratitude to Dr. Vesselin Gueorguiev for many years of support and excellent collaboration.
## Data availability
No new data were generated or analyzed in this research.
|
2308.13136 | Deciphering The Slow-rise Precursor of a Major Coronal Mass Ejection | Coronal mass ejections (CMEs) are explosive plasma phenomena prevalently
occurring on the Sun and probably on other magnetically active stars. However,
how their pre-eruptive configuration evolves toward the main explosion remains
elusive. Here, based on comprehensive observations of a long-duration precursor
in an event on 2012 March 13, we determine that the heating and slow rise of
the pre-eruptive hot magnetic flux rope (MFR) are achieved through a precursor
reconnection located above cusp-shaped high-temperature precursor loops. It is
observed that the hot MFR threads are built up continually with their middle
initially showing an "M" shape and then being separated from the cusp of
precursor loops, causing the slow rise of the entire MFR. The slow rise in
combination with thermal-dominated hard X-ray source concentrated at the top of
the precursor loops shows that the precursor reconnection is much weaker than
the flare reconnection of the main eruption. We also perform a
three-dimensional magnetohydrodynamics simulation that reproduces the early
evolution of the MFR transiting from the slow to fast rise. It is also
disclosed that it is the magnetic tension force pertinent to "M"-shaped threads
that drives the slow rise, which, however, evolves into a magnetic pressure
gradient dominated regime responsible for the rapid-acceleration eruption. | X. Cheng, C. Xing, G. Aulanier, S. K. Solanki, H. Peter, M. D. Ding | 2023-08-25T02:09:28Z | http://arxiv.org/abs/2308.13136v1 | # Deciphering The Slow-rise Precursor of a Major Coronal Mass Ejection
###### Abstract
Coronal mass ejections (CMEs) are explosive plasma phenomena prevalently occurring on the Sun and probably on other magnetically active stars. However, how their pre-eruptive configuration evolves toward the main explosion remains elusive. Here, based on comprehensive observations of a long-duration precursor in an event on 2012 March 13, we determine that the heating and slow rise of the pre-eruptive hot magnetic flux rope (MFR) are achieved through a precursor reconnection located above cusp-shaped high-temperature precursor loops. It is observed that the hot MFR threads are built up continually with their middle initially showing an "M" shape and then being separated from the cusp of precursor loops, causing the slow rise of the entire MFR. The slow rise in combination with thermal-dominated hard X-ray source concentrated at the top of the precursor loops shows that the precursor reconnection is much weaker than the flare reconnection of the main eruption. We also perform a three-dimensional magnetohydrodynamics simulation that reproduces the early evolution of the MFR transiting from the slow to fast rise. It is also disclosed that it is the magnetic tension force pertinent to "M"-shaped threads that drives the slow rise, which, however, evolves into a magnetic pressure gradient dominated regime responsible for the rapid-acceleration eruption.
Solar coronal mass ejections (310) -- Solar flares (1496) -- Magnetohydrodynamics (1964) -- Solar magnetic reconnection (1504)
Footnote †: journal: ApJ
X. Cheng, C. Xing, G. Aulanier, S. K. Solanki, H. Peter, and M. D. Ding
## 1 Introduction
Stellar mass ejections and accompanying flaring result in energetic events (Maehara et al., 2012; Argiroffi et al., 2019) that may prevent life from thriving on orbiting exo-planets (Dong et al., 2018). At present, these issues can best be studied in the solar system thanks to the direct visibility of coronal mass ejections (CMEs) and solar flares that release a large quantity of magnetized plasma (\(\sim\)10\({}^{11}\)-10\({}^{13}\) kg), strong electromagnetic radiation and sub-relativistic energetic particles into the heliosphere (Forbes et al., 2006; Chen, 2011; Schmieder et al., 2015). When directed toward the Earth, CMEs interact with the magnetosphere and ionosphere and can disrupt communications, overload power grids, and present a hazard to astronauts (Gosling, 1993; Webb et al., 2000; Solanki et al., 2004).
The energetic eruptions are essentially consequences of the destabilization and reconfiguration of the coronal magnetic field. Prior to such eruptions, in a long-lasting quasi-static phase, the magnetic field, in particular above the polarity inversion line (PIL) of active regions (ARs), is gradually stressed by various flows at the photosphere, resulting in accumulation of magnetic free energy (Cheung & Isobe, 2014). The stressed magnetic fields are organised in an orderly fashion as either sheared arcades or a magnetic flux rope (MFR, a coherent structure with all field lines wrapping around its central axis). Regardless of the difficulty in directly measuring the coronal magnetic field, some observables serve as proxies of pre-eruptive magnetic configurations including filaments/prominences (Kuperus & Raadu, 1974; Mackay et al., 2010; Schmieder et al., 2013), sigmoids (Hudson et al., 1998; Rust & Kumar, 1996; Green et al., 2007), cavities (Gibson et al., 2006; Wang & Stenborg, 2010) and hot channels (coherent plasma structure with a temperature
above 8 MK (Zhang et al., 2012; Cheng et al., 2013)). Among them, the hot channels and analogues seem to be a promising proxy of the MFR, which can even be used for prediction, as they usually appear prior to the eruption (Zhang et al., 2012), sometimes for hours (Patsourakos et al., 2013; Nindos et al., 2020), and then continuously evolve toward the eruptions (Cheng et al., 2013; Gou et al., 2019; Mitra & Joshi, 2019).
Nevertheless, how these pre-eruptive configurations evolve, in particular toward the very onset of the fast eruption, is yet to be ascertained (Aulanier, 2014, 2021). In order to initiate the eruption, many physical mechanisms have been proposed, including tether-cutting and breakout reconnection (Moore et al., 2001; Antiochos et al., 1999) and ideal MHD instabilities, etc. (Forbes & Isenberg, 1991; Torok et al., 2004; Kliem & Torok, 2006). Although evidence has been presented for the action of individual processes (Moore et al., 2001; Williams et al., 2005; Chen et al., 2014; Cheng et al., 2020), it could be extremely difficult for a sole mechanism to initiate a real eruption. Many comprehensive observational studies suggest that the initiation process of CMEs is much more complicated than expected: multiple physical processes are often coupled to each other, even though the dominant one may change from one phase to the other (Cheng et al., 2020). Once the eruption has been initiated, the dynamic energy release via runaway magnetic reconnection is switched on, during which the different structural components of CMEs are quickly formed and accelerated, giving rise to flare radiation at the same time (Priest & Forbes, 2002; Lin et al., 2015; Veronig et al., 2018).
The other important but still puzzling characteristic during the early rise phase is that the pre-eruptive MFR is found to be much hotter, by almost one order of magnitude, than the background quiet corona of 1-3 MK (Cheng et al., 2012), which is true for over half of major eruptions based on a statistical survey (Nindos et al., 2015). It is speculated that the heating is most likely due to magnetic reconnection (Dudik et al., 2014). One piece of evidence is that the pre-eruptive hot MFR shows an increase in toroidal flux and stays stable for hours before it erupts successfully (Patsourakos et al., 2013). On the other hand, a number of pre-flare activities are detected prior to the eruption, such as the slow rise of pre-eruptive configurations (e.g., Zhang et al., 2001; Sterling & Moore, 2005; Kliem et al., 2014; McCauley et al., 2015; Cheng et al., 2020), H\(\alpha\) line broadening of pre-eruptive filaments (e.g., Cho et al., 2016), enhancement of soft X-ray emission (e.g., Zhang & Dere, 2006; Priest, 2014), brightenings at the footpoints of chromospheric kernels (e.g., Wang et al., 2017; Chen et al., 2019), changes in magnetic topology (e.g., Chintzoglou et al., 2015; Liu et al., 2018) and even the appearance of non-thermal particles (e.g., Syntelis et al., 2016; Awasthi et al., 2018; Hernandez-Perez et al., 2019). All these pre-flare activities are also suggested to be more or less caused by magnetic reconnection. However, justifying a clear and integrated physical picture that links the formation, heating and early rise of the MFR, as well as various observed pre-flare characteristics, to magnetic reconnection remains rather difficult. The major difficulty is that these pre-eruptive signatures are often short-lived (\(\sim\)minutes), in particular for those from ARs (Cheng et al., 2020). Moreover, it is also limited by observational capacity, e.g., the field-of-view of instruments (such as the Goode Solar Telescope) is too small to observe the entire pre-eruptive structure that usually approximates the size of ARs, and/or only the signatures of pre-flare activities in the lower atmosphere were detected (Wang et al., 2017). Furthermore, it is hardly possible to disentangle critical characteristics in real observations, which are more complex than models. Such a problem becomes even worse due to inevitable projection effects (Zhang et al., 2017; Zhou et al., 2017; Gou et al., 2019; Awasthi et al., 2018; Hernandez-Perez et al., 2019).
Here, through comprehensive analyses of a long-duration precursor phase of a major CME/flare on 2012 March 13, which overcomes part of the aforementioned limitations, we disclose the relations intertwined among the heating and early rise of the pre-eruptive hot MFR, various pre-flare characteristics and the precursor reconnection. In combination with an observation-inspired three-dimensional magnetohydrodynamic simulation, it is suggested that the magnetic tension force within the MFR drives the slow rise, while the magnetic pressure gradient force is responsible for the following acceleration of the eruption.
## 2 Instruments and data
The Atmospheric Imaging Assembly (AIA; Lemen et al., 2012) on board Solar Dynamics Observatory (SDO; Pesnell et al., 2012) images the solar atmosphere almost simultaneously with ten passbands covering temperatures from 0.06 to 20 MK. The temporal cadence and spatial resolution of seven (two) EUV (UV) passbands are 12 (24) s and 1.2 arcseconds, respectively. Among the seven EUV passbands, the 131 A and 94 A, sensitive to the high-temperature plasma above 6 MK, are used for detecting the hot pre-eruptive MFR; the 171 A, 193 A, 211 A and 335 A for observing the large-scale background corona; the 304 A, 1600 A and 1700 A for searching for signals of the eruption in the lower atmosphere (O'Dwyer et al., 2010). The EUV imaging data from the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI; Howard et al., 2008) on board the Solar Terrestrial Relations Observatory (STEREO), which
was separated from SDO by an angle of \(\sim\)110\({}^{\circ}\) at the time the analysed observations were made, provide a second perspective on the eruption, albeit with a lower cadence (5 minutes) and resolution (3.2 arcseconds).
In order to locate the reconnection during the precursor phase and reveal its physical properties, we utilize the Ramaty High Energy Solar Spectroscopic Imager (RHESSI; Lin et al., 2002), which is capable of performing X-ray imaging and spectroscopic diagnostics for hot plasma and accelerated electrons. The hard X-ray images are reconstructed with the Clean algorithm based on the detectors 3, 5, 6, 7, 8 and 9. The X-ray spectra from the detector 3 are analysed in detail to derive the temperature of the hot plasma and the low-energy cut-off of the accelerated electrons. The spectra from the other detectors are also examined but not shown here because the results from all detectors were very similar.
In addition, to inspect the CME generated by the MFR eruption, we also make use of the data from the Large Angle and Spectrometric Coronagraph (LASCO; Brueckner et al., 1995) on board the Solar and Heliospheric Observatory (SOHO) and the SECCHI instrument suite on board STEREO-A. The CME velocity in the higher corona is from the CDAW Data Center1. The Geostationary Operational Environmental Satellite (GOES) provides the soft X-ray (SXR) fluxes of associated flares at two bands of 1-8 A and 0.5-4 A. The Helioseismic and Magnetic Imager (HMI; Schou et al., 2012) also on board SDO provides the photospheric vector magnetic field of the investigated AR with a temporal cadence of 12 minutes and spatial resolution of 1.2 arcsecond.
Footnote 1: [https://cdaw.gsfc.nasa.gov/](https://cdaw.gsfc.nasa.gov/)
## 3 Results
### Overview of major CME eruption
On 2012 March 13, an M7.9 class flare, taking place in the NOAA AR 11429 (Figure 1a), started at \(\sim\)17:12 UT, peaked at \(\sim\)17:41 UT and ended at \(\sim\)18:25 UT (Figure 1b). It also produced prominent 30 THz emissions (Kaufmann
Figure 1: **Pre-eruptive MFR and pre-flare emission.** a. A composite of the AIA 131 Å (red) and 171 Å (green) images showing the pre-eruptive hot MFR and induced precursor loops (left) for the eruption on 2012 March 13. The AIA 131 Å difference image (subtracting the image at 15:50 UT) displaying a zoom-in of the MFR (right). b. Temporal evolution of the GOES SXR 1–8 Å flux (red), temperature (blue), X-ray energy loss rate (yellow), integrated AIA 131 Å (cyan) and 304 Å (brown) intensity for NOAA AR 11429 with the FOV shown by the white box in panel a. The three dash-dotted lines indicate the onset, peak and end time of the main flare phase, respectively.
et al., 2013; Trottet et al., 2015) and was accompanied by a very energetic CME that included an erupting MFR as its main body. The projected average speed of the CME was over 1900 km s\({}^{-1}\), as measured in the field of view of LASCO. Figure 1a shows that a loop-like pre-eruptive structure had appeared prior to the main flare and was associated with a long-lasting cusp-shaped precursor structure. Using the visibility of the cusp-shaped structure, such a precursor is found to last for more than one hour, much longer than that usually observed (several minutes) in previous events (Zhang et al., 2012; Cheng et al., 2013; Hernandez-Perez et al., 2019), thus enabling us to decipher the heating and early rise of the pre-eruptive hot MFR.
### Formation and heating of pre-eruptive MFR
The long precursor of interest caused evident coronal emissions at different AIA bands as shown in Figure 1b. The GOES SXR 1-8 A flux started to increase at \(\sim\)16:00 UT and reached a plateau for 20 min. After that, it was further enhanced. At \(\sim\)16:50 UT, the SXR flux reached a peak and then decreased slightly followed by the onset of the main phase. This evolution at the X-ray band is very similar to that of the integrated intensity of the AIA high-temperature 131 A passband. In contrast, for the AIA low-temperature 304 A passband (peaking at roughly 80 kK), only a slow increase in the integrated intensity is observed except for some small fluctuations. The distinction indicates that the energy release process primarily occurred in the corona. Figure 1b also displays the evolution of the SXR radiation rate and temperature, which are estimated based on fluxes at the two X-ray bands of GOES. One can see that the temperature is mostly above 8 MK during the precursor phase, consistent with the similarity between the evolutions of the SXR 1-8 A flux and integrated 131 A intensity. The peak temperature of the precursor appeared at \(\sim\)16:40 UT, preceding that of the GOES SXR 1-8 A flux and radiation rate by about 5 min, implying that the plasma is first heated and then induces an enhanced radiation as what happens during the main flare phase (Sun et al., 2014).
Figure 2: **Build up of pre-eruptive MFR.** The AIA 131 Å difference image (subtracting the image taken at 15:50 UT) showing the build up of the pre-eruptive MFR and its relation to the cusp-shaped precursor loops. The two flux bundles of the pre-eruptive MFR are delineated by two curves in yellow and blue (d–f). The oblique arrow in panel b shows the orientation of the main PIL. The animation 1 that starts at 2012 March 13 16:00 UT and ends at 17:59 UT is available online to show the detailed evolution of the slowly rising MFR and precursor loops with the duration of 10 s.
Thanks to the capability of the AIA 131 A and 94 A passbands to image hot plasma, it is disclosed that the formation of the pre-eruptive MFR involved that of two sets of hot flux bundles. The cusp-shaped hot structure, hereafter also named the precursor loops, is found to be the main source of the precursor emission (Figure 2 and animations 1 and 2). Its activation could be related to a small flare that occurred at the neighboring AR 11430 (Figure 2a). At \(\sim\)16:00 UT, the first set of relatively diffuse hot threads gradually showed up with their middle being concave and connecting to the top of the cusp-shaped structure. They were almost aligned with the direction of the main PIL and much longer than the precursor loops, presenting an "M" shape in morphology at \(\sim\)16:30 UT (Figure 2c). During the formation, the middle of the hot threads also rose up slowly, being separated from the top of precursor loops, and then became flat (Figure 3a). At \(\sim\)16:30 UT, the second flux bundle started to appear with the left footpoints almost being mixed with those of the first one but with the right footpoints far away from the AR 11429. The visibility of the remote dimming at the AR 11430 indicates that the second flux bundle connects the ARs 11429 and 11430 (Figure 11c1-c2). The two sets of flux bundles constitute the pre-eruptive channel-like MFR with a bifurcated right leg (also see Zhong et al., 2019). Afterwards, the two sets of threads rose up as a whole continuously. This is different from a rise followed by a descent of the erupting flux detected in confined eruptions (Patsourakos et al., 2013; Liu et al., 2018). The continuous rise of the pre-eruptive MFR also caused a slight amplification of the reconnection as indicated by the brighter precursor loops (Figure 2) and the increases of the SXR flux, temperature and integrated 131 A intensity (Figure 1b). However, compared with the flare main phase, the SXR flux was still one order of magnitude smaller, indicating that the reconnection still proceeded in a gentle way. Nevertheless, although being moderate, it was critical for the pre-eruptive MFR to be formed and heated (Figure 3a).
The temperature map shows that the average temperatures of the pre-eruptive MFR and precursor loops are about 8 MK and 10 MK, respectively (Figure 3b). They are in agreement with the average temperature of the full flaring region estimated from the ratio of the two GOES SXR broadband (0.5-4 A and 1-8 A) fluxes (see Thomas et al., 1985). Moreover, we find an interesting X-shaped high-temperature plasma structure prior to the eruption, highly resembling the structure of magnetic reconnection during the eruption (Cheng et al., 2018; Chen et al., 2020). It consists of the upper part of the cusp-shaped loops and the middle of the M-shaped MFR, as shown by the contour of 7 MK (right panel of Figure 3b). Its projected height is \(\sim\)20 Mm. The two features suggest that magnetic reconnection takes place during the precursor phase and at a high altitude to build up and heat the pre-eruptive structures.
### Locating X-ray emissions
The RHESSI hard X-ray (HXR) data show that, during the time period of 16:00-16:20 UT, the HXR emissions only appeared in the energy bands below 12 keV and came from the source concentrated at the top of the precursor loops, as shown in Figure 4a. The left two panels of Figure 4b display that the corresponding X-ray spectra at \(\sim\)16:01 and 16:15 UT can be well fitted by a thermal model. It gives a thermal temperature of \(\sim\)10 MK, very similar to the DEM-average temperature of the region at the top of the precursor loops as derived from the DEM analysis, indicating that the pertinent reconnection process is thermal-dominated. As the pre-eruptive MFR formed, the HXR emissions at the top of the precursor loops were gradually enhanced. At \(\sim\)16:47 UT, the emissions in the energy range of 3-6 keV and 6-12 keV increased by almost one order of magnitude relative to the value observed half an hour earlier (Figure 4b). At the same location, emission in the higher energy range (e.g., 12-25 keV) also appeared (the right two panels of Figure 4a), suggestive of a non-thermal property. The combination of a thermal model and a thin-target model best fits the X-ray spectra at the two following times (middle and right panels of Figure 4b). The fitting gives a thermal temperature of 11-12 MK and a cut-off energy of 12.8 keV for the non-thermal electrons. This indicates that magnetic reconnection in the later stage of the precursor phase was also capable of accelerating electrons. However, the accelerated electrons still have a relatively low energy, mostly \(<\)20 keV, showing that the reconnection process was not energetic, probably similar to that during micro-flares as recently observed by the Spectrometer/Telescope for Imaging X-rays (STIX) onboard Solar Orbiter (Battaglia et al., 2021).
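The spectral fitting itself was performed with the standard RHESSI analysis software; purely as an illustration of the fitting strategy (a thermal component combined with a power law above a low-energy cutoff), a schematic and strongly simplified version with scipy might look as follows. The model forms, parameter values and synthetic data below are hypothetical and do not include the instrument response or the calibrated OSPEX thermal/thin-target models.

```python
import numpy as np
from scipy.optimize import curve_fit

E_CUT = 12.8  # keV, fixed low-energy cutoff (value quoted in the text)

def photon_model(energy_kev, norm_th, kT_kev, norm_nt, index):
    """Schematic thermal (exponential) plus power-law photon spectrum."""
    thermal = norm_th * np.exp(-energy_kev / kT_kev) / energy_kev
    nonthermal = np.where(energy_kev > E_CUT,
                          norm_nt * (energy_kev / E_CUT) ** (-index), 0.0)
    return thermal + nonthermal

# Synthetic stand-in for a background-subtracted count spectrum (3-25 keV).
energy = np.linspace(3.0, 25.0, 60)
rng = np.random.default_rng(0)
counts = photon_model(energy, 1.0e3, 1.0, 5.0, 4.0) * rng.normal(1.0, 0.05, energy.size)

popt, _ = curve_fit(photon_model, energy, counts, p0=[5.0e2, 0.8, 2.0, 3.0])
print("fitted kT ~ %.2f keV (~%.0f MK), power-law index ~ %.1f"
      % (popt[1], popt[1] * 11.6, popt[3]))
```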
### Initiation of MFR-induced CME
The temporal variations of the height and velocity of the pre-eruptive MFR provide valuable information to disentangle initiation models. Figure 5a shows the long-duration slow rise of the pre-eruptive MFR consisting of the two rising flux bundles. After \(\sim\)17:00 UT, the MFR gradually became indistinct because of its expansion. During the entire precursor phase, as a response to the rise of the hot MFR, the overlying field also gradually expanded, as indicated by the diamonds in Figure 5b, and then evolved toward the CME leading front during the main phase.
Figure 5c-d display that the MFR velocity increased very slowly in the precursor phase, varying from \(\sim\)5 km s\({}^{-1}\) at \(\sim\)16:15 UT to \(\sim\)25 km s\({}^{-1}\) at \(\sim\)16:40 UT. The average acceleration was only \(\sim\)13 m s\({}^{-2}\). Afterwards, the MFR even started to slow down: the velocity decreased from \(\sim\)25 km s\({}^{-1}\) at \(\sim\)16:40 UT to \(\sim\)20 km s\({}^{-1}\) at \(\sim\)16:55 UT, with a deceleration of about -6 m s\({}^{-2}\). At \(\sim\)17:00 UT, because of the second flux bundle, the MFR velocity increased again. The temporal evolution of the velocity of the pre-eruptive MFR during the whole precursor phase roughly kept in step with the variation of the GOES 1-8 A SXR flux. The velocity of the CME leading front remained small (\(\sim\)14 km s\({}^{-1}\)) before the eruption at \(\sim\)17:12 UT. However, as the main phase began, the velocity of the CME leading front increased to \(\sim\)110 km s\({}^{-1}\) in 6 minutes. The MFR was accelerated more impulsively: the velocity increased from \(\sim\)50 km s\({}^{-1}\) at \(\sim\)17:12 UT to \(\sim\)450 km s\({}^{-1}\) at \(\sim\)17:20 UT, with an acceleration of \(\sim\)830 m s\({}^{-2}\), almost two orders of magnitude larger than that during the precursor phase. Meanwhile, the SXR flux also increased impulsively (see its time derivative), in synchronisation with the variation of the MFR velocity.
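The velocities quoted here are obtained from first-order finite differences of the height-time measurements (see also Appendix C); schematically, with purely synthetic numbers rather than the actual measurements, this amounts to:

```python
import numpy as np

# Velocity from first-order differences of (synthetic) height-time data.
t_s  = np.array([0.0, 300.0, 600.0, 900.0, 1200.0])   # time in seconds
h_mm = np.array([40.0, 42.0, 46.0, 53.0, 65.0])       # projected height in Mm

v_kms = np.gradient(h_mm, t_s) * 1.0e3                # Mm s^-1 -> km s^-1
print(np.round(v_kms, 1))                             # [ 6.7 10.  18.3 31.7 40. ]
```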
The MFR fast acceleration may be triggered by the torus instability, which occurs when the decay of the background magnetic field with height exceeds a threshold (Kliem & Torok, 2006; Fan & Gibson, 2007; Aulanier et al., 2010). From the distribution of the decay index as shown in Figure 5f, one can see that the decay index first quickly and then gradually increases with height. At the onset time of the impulsive acceleration (17:13 UT), the upper and lower edges of the MFR reached heights of \(\sim\)110 and \(\sim\)90 Mm, respectively. The average (\(\sim\)100 Mm) of them is regarded
Figure 3: **Cusp-shaped precursor loops and its relation to pre-eruptive MFR.** a. The AIA 131 Å difference image (subtracting the image taken at 16:16 UT) showing the cusp-shaped precursor loops and pre-eruptive MFR (left). The rise and change in morphology of the MFR threads are indicated by the curves in zoom-in images (right). The oblique dashed line indicates the main eruption direction. The oblique arrow shows the orientation of the main PIL. b. DEM-weighted average temperature (left) and total EM (right) maps. The temperature contours (black) of 7 MK (corresponding to a logT of 6.85) with an “X” shape configuration are also overplotted in the EM map. The dashed boxes in black outline the field-of-view of the right of panel a. The animation 2 that starts at 2012 March 13 16:00 UT and ends at 17:00 UT is available online to show the evolution of the MFR threads as shown in the right portion of panel a with the duration of 12 s.
as the height of the MFR axis, where the decay index is found to be \(\sim\)2.0, obviously exceeding all critical values of torus instability derived theoretically (Kliem & Torok, 2006; Demoulin & Aulanier, 2010). This shows that the torus instability has occurred, probably earlier than 17:13 UT, because it takes time to accumulate speed in an exponentially-accelerating instability starting from a weak perturbation to overtake the slow-rise velocity caused by the independent precursor reconnection.
We also inspect the decay index during the slow rise phase. For the acceleration stage of the precursor phase (16:15-16:40 UT), the height of the MFR axis was below 60 Mm, the corresponding decay index was mostly smaller than \(\sim\)1.5, the critical value for a toroidal current ring (Kliem & Torok, 2006), and the statistical average of critical decay indices for AR eruptions (Cheng et al., 2020). Furthermore, the deceleration of the MFR during the following 15 minutes conflicts with an expected exponential acceleration during the early development stage of torus instability (Torok & Kliem, 2005; Schrijver et al., 2008). Thus, although the decay index keeps increasing as the MFR is elevated continuously, the torus instability seems to not take effect in the most of precursor phase. In contrast, once entering the main phase, the temporal variation of the MFR height is found to exactly follow an exponential form (Figure 11). These results support that the torus instability plays a critical role in initiating the MFR impulsive acceleration and
Figure 4: **X-ray emission and spectra.** a. RHESSI X-ray source in the energy ranges of 3–6 keV (blue), 6–12 keV (red) and 12–25 keV (yellow) overlaid on the AIA 131 Å difference images (subtracting the image taken at 15:50 UT) showing the energy release locations by precursor reconnection. The two contours for each energy band denote 50% and 80% of their maximum emissions. The image on the left and right is scaled linearly and logarithmically, respectively. b. Spectra of X-ray emission and their temporal evolution. The curves in blue display the best fitting to background-subtracted spectra (curves in grey) with the ones in red and yellow indicating the thermal and non-thermal thin-target model, respectively. The resulting fitting residuals are shown in the bottom panels.
fast flare energy release; in other words, it is the key to turning the moderate reconnection in the precursor phase into the runaway reconnection in the main phase.
### Driver of MFR Slow-rise
To understand the dominant driving mechanisms underlying the long-term slow rise of the MFR and the transition toward the eruption, as well as the relations of the various involved features to the inferred moderate precursor reconnection, we run a zero-\(\beta\) three-dimensional (3D) magnetohydrodynamic (MHD) simulation. To broadly compare with observations, in our numerical model, the initial magnetic configuration consists of an asymmetric bipolar field. It is driven by means of line-tied shearing motions at the bottom boundary, which are often observed near the PIL. The
Figure 5: **Early kinematics and initiation of pre-eruptive MFR.** a. Slice-time plot of the AIA 131 Å difference images showing the evolution of two flux bundles of the pre-eruptive MFR. Diamonds and filled circles indicate their height-time measurements. b. Same as panel a but for the AIA 171 Å passband representing the expanding overlying field. c. Temporal evolution of the heights of the MFR (red) and expanding overlying field (yellow). d. Temporal evolution of the velocities overplotted by the GOES SXR 1–8 Å flux (gray) and its time derivative (black). The uncertainty in velocity is mainly from that in height, which is estimated to be 2 Mm. The vertical slits in panels c and d indicate the onset of the CME impulsive acceleration and flare main phase with their width denoting the uncertainty (one minute). e. The radial component of HMI Cylindrical Equal-Area (CEA) vector magnetogram. The white (black) indicates the magnetic field upward (downward). f. The decay index distribution with height above the PIL, which is an average of all height-profiles of the decay index at the yellow dots as shown in panel e. The bars in grey denote its uncertainty as derived by the standard deviation of height-profiles. The dots in blue and yellow represent the initial and onset height (35 and 100 Mm) of the MFR axis, respectively. Their reference point is the midpoint of the line segment connecting the two footpoints of the cusp-shaped precursor loops. The dot in red points out the critical decay index of 1.5 for a ring current.
flux cancellation is also introduced by magnetic diffusion at the bottom boundary. The parameter setups are the same as in Aulanier et al. (2012), only with a higher spatial resolution (375\(\times\)375\(\times\)336).
The 3D MHD simulation shows that, as the bald patches (BPs) bifurcate into a quasi-separatrix layer (QSL) containing a coronal hyperbolic flux tube (HFT; Titov et al., 2002) lying below the MFR, the MFR evolves slowly before it reaches an eruptive stage. The HFT reconnection is believed to inject newly formed flux into the MFR, making it ascend further (Aulanier et al., 2010). Although the MFR in the simulation only includes one set of helical threads, many characteristics are still similar to observations, including (1) that the newly reconnected flux first presents an "M" shape and then becomes flat (orange field lines in Figure 6a-6d); (2) that the precursor reconnection forming the "M"-shaped MFR field lines and precursor loops is not energetic prior to the main eruption; (3) that the MFR eruption does not start until its axis reaches an altitude where the decay index of the background field is large enough to allow the occurrence of the ideal torus instability (Section F); and (4) that the height of the erupting MFR during the early acceleration phase increases exponentially (see Figure F1).
The MHD simulation enables disclosing the driving forces acting in the slow rise of the pre-eruptive MFR and in the following acceleration phase, respectively. The distribution of the vertical component of the Lorentz force in the plane perpendicular to the MFR axis and crossing the HFT shows that the Lorentz force near the HFT and in the outer
Figure 6: **3D magnetic field lines at five different times displaying the slow rise and early eruption of the MFR.** The blue and orange tubes show the MFR field lines, the red tube in panel b shows one precursor loop. The bottom images show the vertical magnetic field components at the bottom boundaries of the MHD simulation domain. The vertical planes (y=-0.06) perpendicular to the MFR axis show the distribution of current density \(j\) with the contours in green indicating \(\log Q=3\). The onset time of the MFR eruption is in period of 120-125\(t_{A}\). The animation 3 is available online to show the evolution of 3D M-shaped field lines during the early rise phase with the duration of 46 s.
part of the MFR is pointing upward but that in the central part of the MFR is mostly directed downward (Figure 7a). Nevertheless, the downward force within the MFR is negligible, so that the net force of the MFR is obviously dominated by the upward one which drives the MFR to rise up slowly with a small acceleration (Figure 7b). On the other hand, the Lorentz force below the HFT is mainly pointing downward, which causes shrinkage of precursor loops. The Lorentz force is further decomposed into two components (magnetic tension and pressure gradient). One can find that, during the slow rise phase of the MFR, the upward directed Lorentz force producing a positive acceleration is primarily contributed by the upward directed magnetic tension (Figure 7c), in agreement with the observation that the middle part of the "M"-shaped flux tends to rise up obviously and gradually becomes flat. As the fast eruption starts, the acceleration induced by the upward Lorentz force quickly increases, explaining the fast acceleration of the MFR eruption as observed. However, the driving force for the large acceleration is no longer the magnetic tension but the magnetic pressure gradient. The latter gradually changes from negative to positive and significantly counteracts the negative magnetic tension as the eruption enters into the main phase (Figure 7d).
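For readers wishing to reproduce this kind of diagnostic from gridded simulation output, a minimal sketch of the decomposition of the Lorentz force into the magnetic tension \((\mathbf{B}\cdot\nabla)\mathbf{B}/\mu_{0}\) and the magnetic pressure gradient \(-\nabla(B^{2}/2\mu_{0})\) is given below. The array names are hypothetical, a uniform Cartesian grid is assumed, and SI units are used rather than the dimensionless units of the zero-\(\beta\) simulation.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi

def vertical_force_decomposition(bx, by, bz, dx, dy, dz):
    """Vertical components of the magnetic tension and magnetic pressure-gradient
    force densities on a uniform (x, y, z) grid; a finite-difference sketch only."""
    dbz_dx, dbz_dy, dbz_dz = np.gradient(bz, dx, dy, dz)
    tension_z = (bx * dbz_dx + by * dbz_dy + bz * dbz_dz) / MU0
    p_mag = (bx**2 + by**2 + bz**2) / (2.0 * MU0)
    pressure_z = -np.gradient(p_mag, dx, dy, dz)[2]
    return tension_z, pressure_z

# Toy example on a small grid with a weakly twisted field:
x = y = z = np.linspace(-1.0, 1.0, 32)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
bx, by, bz = -1.0e-3 * Y, 1.0e-3 * X, np.full_like(X, 2.0e-3)
d = x[1] - x[0]
t_z, p_z = vertical_force_decomposition(bx, by, bz, d, d, d)
print(t_z.mean(), p_z.mean())   # their sum approximates the vertical Lorentz force
```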
## 4 Summary and Discussions
Figure 7: **Lorentz force during the slow rise and early eruption of the MFR.** a. Distributions of the vertical component of the Lorentz force density \(F_{z}\) at the plane y=-0.06, as shown in Figure 6, at four different times. The boundaries of the MFR are delineated by the dashed lines; the contours of \(\log Q=3\) are shown by the curves in green. b–d. Temporal evolution of the MFR accelerations (\(a_{F_{z}}\), \(a_{(F_{1})_{z}}\), \(a_{(F_{p})_{z}}\)) contributed by the vertical components of the Lorentz force \(F_{z}\), magnetic tension (\(F_{t}\))\({}_{z}\) and magnetic pressure gradient (\(F_{p}\))\({}_{z}\), respectively. Their errors indicated by vertical bars are mainly from the uncertainty in determining the MFR outer boundary, which is achieved through changing \(\log Q\) value from 2.4 to 3.6. The horizontal dotted lines indicate zero acceleration.
In this paper, we present comprehensive observations of a long-lasting precursor phase before a major solar eruption. With further combination of suitable viewing angle and multi-wavelength data, it is disclosed that the heating and slow rise of the pre-eruptive hot MFR are achieved through the precursor reconnection as often speculated previously (Wang et al., 2017; Zhou et al., 2017; Awasthi et al., 2018; Chen et al., 2019; Hernandez-Perez et al., 2019; Gou et al., 2019). The precursor reconnection is found to take place at the X-shaped high-temperature plasma sheet. The continual formation of the "M"-shaped hot threads via the precursor reconnection results in the heating and early rise of the entire MFR, as well as the formation of precursor loops. It is surprising that the main features generally observed in the impulsive phase have counterparts in the precursor phase that are physically linked to the precursor reconnection. However, both the slow rise of the pre-eruptive hot MFR (\(<\)30 km s\({}^{-1}\)) and the relatively low energy of accelerated electrons (\(<\)20 keV) suggest that the precursor reconnection is far less efficient than that in the main eruption phase (e.g., Cheng et al., 2018).
In spite of being moderate, the precursor reconnection is critical for lifting the MFR along an equilibrium sequence in a mutual feedback process. On the one hand, the precursor reconnection forms the M-shaped flux that accumulates and rises up as illustrated in Figure 6. On the other hand, the rising MFR slightly enhances the reconnection, which can further inject flux into the MFR. Such a feedback is strongly indicated by the simultaneity between the increases in the height of the pre-eruptive MFR and the enhancement of associated SXR emissions. The transition from moderate reconnection to fast reconnection is switched on by the fast acceleration of the MFR. This is most likely caused by the ideal torus instability, given that the transition from the slow rise to the fast acceleration of the MFR occurs at the height where the decay index of the background field exceeds the thresholds of the torus instability, and that the temporal evolution of the MFR acceleration highly resembles an exponential profile. Once entering the main eruption phase, the moderate reconnection immediately transitions to runaway reconnection responsible for the flare impulsive phase. The fast reconnection rapidly accelerates the CME eruption, which in turn drives the opposite-directed magnetic fields to continuously participate in the reconnection. With the feedback operating more efficiently in this period, the eruption is finally accelerated to a high speed in about one hour. Moreover, as shown in the MHD simulation, the precursor reconnection occurs between highly sheared arcades (also see Aulanier et al., 2012); the guide field of the reconnection is thus large, which could be the reason why the precursor reconnection is not that energetic (e.g., Leake et al., 2020; Dahlin et al., 2022).
The synergism of the moderate-reconnection-formed MFR and the ideal torus instability in causing the transition from the precursor phase to the main eruption phase, as revealed here and further testified by our 3D MHD simulation, can be used to clarify the applicability of the initiation models proposed in the past decades. The tether-cutting-like topology of the precursor reconnection seems to support the tether-cutting initiation model (Moore et al., 2001), which, however, is insufficient if working alone without the presence of the torus instability. The tether-cutting reconnection is found to be efficient only once initiated by the tearing mode instability, as proved by recent numerical simulations (Jiang et al., 2021). This is at variance with the results of observation-constrained simulation (Inoue et al., 2018) and observational characteristics for the 2012 March 13 event, where the reconnection is found to be moderate before the onset of the fast eruption. Moreover, in the event under study, no prominent signatures for the MFR writhing motion are observed during the slow rise phase. Thus, we tend to exclude the possibility of the kink instability (Torok and Kliem, 2003) causing both the precursor phase and the main eruption phase. It is worth noting that the first bundle of the pre-eruptive MFR seems to present an untwisting motion in the early phase of the eruption (Figure 2f). However, after a careful inspection, it is more likely to be the apparent manifestation of the MFR morphology varying from the "M" to semicircle shape as previously detected (Zhang et al., 2012; Cheng et al., 2013). In addition, it was suggested that the onset of the eruption may correspond to the transition of the magnetic field topology that embeds the pre-eruptive MFR, that is, from a bald-patch separatrix (BPS) to an HFT (Savcheva et al., 2012). However, in our observations, only an HFT-like configuration appears in the precursor phase. Therefore, as proved in our numerical model, such a topological transition may occur much earlier than the initiation of the eruption, if it exists at all.
The long-duration precursor is equivalent to a confined flare preceding the following eruptive one, during which both magnetic helicity and twist are thought to be quickly injected into the pre-eruptive MFR (Priest and Longcope, 2017). However, it presents two characteristics obviously different from those during confined eruptions. First of all, a pre-eruptive MFR rising continuously toward the eruption differs from the MFR during confined flares, which initially rises but finally stops in the high corona (Liu et al., 2018; Kliem et al., 2021). Secondly, the high temperature of the pre-eruptive MFR, in combination with its morphology evolution and the appearance of a thermal X-ray source above the precursor loops, provides solid evidence for slow reconnection heating during the slow rise prior to the main eruption;
while such a process may be unnecessary, even absent (Patsourakos et al., 2013; Chintzoglou et al., 2015), in the interval between the preceding confined flares and the following successful eruptions.
Lastly, as only one particular event is investigated here, more similar observations and in-depth MHD modelling are suggested in the future to justify the universality of the mechanisms we have determined for the heating and slow rise precursor of pre-eruptive configurations of solar eruptions.
We appreciate all referees who reviewed the manuscript and provided their comments and constructive suggestions. We also thank Chun Xia, Bernard Kliem, Jie Zhang, Jun Chen and Lakshmi Pradeep Chitta for their helpful discussions. AIA data are courtesy of NASA/SDO, a mission of NASA's Living With a Star Program. _STEREO_/SECCHI data are provided by a consortium of NRL (US), LMSAL (US), NASA/GSFC (US), RAL (UK), UBHAM (UK), MPS (Germany), CSL (Belgium), IOTA (France), and IAS (France). X.C., C.X. and M.D.D. are supported by National Key R&D Program of China under grants 2021YFA1600504 and by NSFC under grant 12127901. X.C. is also supported by Alexander von Humboldt foundation. G.A. and C.X. acknowledge financial support from the French national space agency (CNES), as well as from the Programme National Soleil Terre (PNST) of the CNRS/INSU also co-funded by CNES and CEA.
## Appendix A DEM inversion
The differential emission measure (DEM) is reconstructed by the "xrt_dem_iterative2.pro" routine in the Solar Software (SSW) using six co-aligned AIA EUV images. The observed intensity \(I_{i}\) for the passband \(i\) can be written as:
\[I_{i}=\int DEM(T)\times R_{i}(T)\mathrm{d}T+\delta I_{i},\] (A1)
where \(DEM(T)\) denotes the plasma DEM, \(R_{i}(T)\) is the temperature response function and \(\delta I_{i}\) is the uncertainty in intensity \(I_{i}\). The temperature range of inversion is set as 5.5\(\leq\) log\(T\)\(\leq\) 7.5.
We calculate the average temperature and total EM by means of the following two formulae:
\[\bar{T}=\frac{\int DEM(T)\times TdT}{\int DEM(T)dT}\] (A2)
\[EM=\int DEM(T)dT.\] (A3)
We also ran 100 Monte Carlo (MC) simulations by adding random noise, corresponding to the uncertainties of the observed intensities derived by "aia_bp_estimate_error.pro" in SSW, to the intensity \(I_{i}\) and then resolving the DEM. It is found that in the temperature range of 5.7\(\leq\) log\(T\)\(\leq\) 7.4, which is selected to integrate Equations (A2) and (A3), the 100 MC solutions are well constrained.
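Given a DEM solution sampled on a grid of \(\log T\), the quantities in Equations (A2)–(A3) can be evaluated as in the following short sketch (array names and the toy DEM are purely illustrative):

```python
import numpy as np

def dem_weighted_temperature_and_em(log_t, dem):
    """DEM-weighted mean temperature (Eq. A2) and total EM (Eq. A3).
    log_t : log10 of the temperature bin centres (e.g. 5.7 ... 7.4)
    dem   : DEM(T) values in cm^-5 K^-1, one per bin."""
    t = 10.0 ** np.asarray(log_t)
    dt = np.gradient(t)                       # bin widths in K
    em = np.sum(dem * dt)                     # total emission measure, cm^-5
    t_mean = np.sum(dem * t * dt) / em        # DEM-weighted temperature, K
    return t_mean, em

# Toy DEM peaking near log T = 6.9 (about 8 MK):
log_t = np.arange(5.7, 7.401, 0.05)
dem = 1.0e21 * np.exp(-0.5 * ((log_t - 6.9) / 0.15) ** 2)
t_mean, em = dem_weighted_temperature_and_em(log_t, dem)
print(f"T_mean = {t_mean / 1.0e6:.1f} MK, EM = {em:.2e} cm^-5")
```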
## Appendix B Dimmings
Figure 11 shows the evolution of the three dimming regions and flare ribbons from the two perspectives of SDO and STEREO-A before and during the eruption. It is found that the east dimming had appeared prior to the main eruption, as seen in the SDO-AIA difference images (Figure 11). Not surprisingly, the left footpoints of the pre-eruptive MFR were cospatial with the east dimming (ED in Figure 11). Considering that the pre-eruptive MFR consists of two sets of threads that have a different connectivity on the right, their footpoints are expected to correspond to two dimmings, which is confirmed by the EUVI-A 195 A running-difference images. It is revealed that one is to the right of the main flare loops and the other is near the AR 11430. As the MFR took off, the two dimming regions expanded outward and further darkened, implying a plasma rarefaction caused by the eruption (Figure 11-14). The erupted fluxes should be mostly from NOAA 11429, which is where the main dimmings were observed (ED and DR1 in Figure 11).
## Appendix C Kinematical Analyses
To measure the MFR height, we take a slice along the MFR eruption direction (Figure 3a) and make slice-time plots of the AIA 131 A and 171 A base-difference images (Figure 5a-5b). Based on the slice-time plots, we measure the projected heights of the MFR upper edge and of the CME leading front as shown in Figure 5a-5c. The projected heights are simply corrected assuming that the eruption proceeds along the radial direction during the early phase (as indicated by the oblique dashed line in Figure 3a). Using a first-order numerical derivative, we then calculate the velocities of the MFR and CME leading front.
By comparing multiple fit functions, it was found that, for the majority of events, the height-time profiles of solar eruptions in the lower corona can be best fitted by the function:
\[h(t)=a\exp(bt)+ct+d,\] (C1)
which is a superposition of a linear and an exponential component, mimicking the slow-rise phase and the early impulsive-acceleration phase, respectively (Cheng et al., 2020). Here, we take advantage of the superposed function to fit the height-time profile of the second MFR bundle that continuously evolved from the precursor to the main phase. Figure C1 shows that the measured height-time data are well fitted by Equation (C1). Furthermore, the velocities, and even the accelerations, directly calculated by numerical differentiation of the height-time data also follow the derivative curves of the fit function with very small discrepancies, except for the last point.
Based on the best-fit function, we further estimate the onset of the impulsive acceleration phase, i.e., the breakpoint time at which the velocity of the exponential term starts to exceed that of the linear term. This gives an onset time of \(\sim\)17:13 UT. Moreover, we also estimate the onset time directly from the acceleration-time profile, i.e., when the acceleration begins to increase obviously, at \(\sim\)17:12 UT. The onset times derived by the two methods are synchronized with that of the main flare phase (17:12 UT). For more details on the determination of the eruption onset and its uncertainty, the reader can refer to Cheng et al. (2020).
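As an illustration of this procedure, the following Python sketch (using scipy rather than the authors' own fitting code; the variable names and the initial guess are assumptions) fits Equation (C1) to measured time-height samples and returns the breakpoint time at which the exponential-term velocity first equals the linear-term velocity.

```python
import numpy as np
from scipy.optimize import curve_fit

def height_model(t, a, b, c, d):
    return a * np.exp(b * t) + c * t + d          # h(t) = a e^{bt} + c t + d, Eq. (C1)

def fit_height_time(t, h, p0=(1.0, 1e-3, 1.0, 0.0)):
    popt, _ = curve_fit(height_model, t, h, p0=p0, maxfev=20000)
    a, b, c, d = popt
    # Onset of the impulsive acceleration: velocity of the exponential term
    # (a b e^{bt}) first equals that of the linear term (c).
    t_onset = np.log(c / (a * b)) / b if a * b > 0 and c > 0 else np.nan
    velocity = lambda tt: a * b * np.exp(b * tt) + c
    return popt, t_onset, velocity
```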
## Appendix D 3D Coronal Magnetic Extrapolation
Based on a potential field model, we extrapolate the 3D coronal magnetic field with the Green function method, using the radial component of the HMI vector field shown in Figure 5e as the bottom boundary. The constrained background field over the erupting MFR is approximated by the horizontal component of the extrapolated potential field. The decay index of the background field is calculated as follows
\[n(h)=-\frac{d(\ln B_{\rm t})}{d(\ln h)},\] (D1)
where \(B_{\rm t}\) and \(h\) denote the horizontal component of the background field and the height above the photosphere, respectively.
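As a simple numerical illustration (not the actual extrapolation code), Equation (D1) can be evaluated from a sampled profile of the horizontal field as in the following sketch, where the height grid and field profile are assumed inputs.

```python
import numpy as np

def decay_index(h, b_t):
    """n(h) = -d ln B_t / d ln h, evaluated by centered finite differences."""
    return -np.gradient(np.log(b_t), np.log(h))

# Example: for a dipole-like falloff B_t ~ h^-3 the decay index is ~3 everywhere.
h = np.linspace(10.0, 200.0, 100)       # heights in Mm (assumed units)
print(decay_index(h, h**-3.0)[:3])      # approximately [3. 3. 3.]
```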
The accuracy of the extrapolated 3D coronal potential field largely depends on that of the bottom boundary, which is selected as the radial component of the HMI vector magnetogram at 15:00 UT on 2012 March 13. The 180\({}^{\circ}\) ambiguity in the horizontal component is removed using a minimum energy method. In addition, the data are reprojected from helio-projective Cartesian to heliographic Cylindrical Equal-Area (CEA) coordinates. Finally, it is worth mentioning that the measured vector field close to the solar limb is still less accurate than that near the disk center. This may influence the accuracy of the calculated background field and thus of the decay index. However, as discussed previously (Cheng et al., 2020), the decay property of the background field is primarily determined by the large-scale structure of ARs, which generally evolves slowly after their emergence. The influence is thus not expected to be significant.
## Appendix E The Mfr Geometry
The threshold for an MFR to undergo the torus instability is closely related to its geometry. For a straight and a toroidal thin current ring, two particular limiting cases, the critical decay index of the background field was deduced to be 1 and 1.5, respectively (Kliem and Torok, 2006; Demoulin and Aulanier, 2010). For the 2012 March 13 event studied here, the pre-eruptive MFR presents a curved loop-like structure, deviating from a full torus. Moreover, the pre-eruptive MFR is composed of two flux bundles whose right footpoints are located in different regions. This is essentially a result of the nonuniform distribution of the MFR current. These two factors may influence the critical value of the torus instability; however, based on Kliem and Torok (2006), the effect is not expected to be significant.
## Appendix F The MHD Model
We run a zero-\(\beta\) MHD simulation performed with the Observationally driven High-order Magnetohydrodynamics code (OHM; Aulanier et al., 2005, 2010). The simulation starts from an asymmetric bipolar potential field. To drive the potential field toward a highly sheared state, we impose a driving motion at the bottom boundary, which is mainly manifested as two shearing flows on the two sides of the main PIL; the flows reach their maximum close to the PIL and have little effect in the center of each polarity. In addition, since the driving motion follows the contours of \(B_{z}\), the vertical component of the magnetic field at the bottom boundary is hardly changed by this motion.
The simulation is composed of a shearing phase, with shearing motions imposed, and a relaxation phase, without driving motions at the bottom boundary. During the shearing phase, flux cancellation is achieved by adding a photospheric resistivity \(\eta^{phot}=1.44\times 10^{-3}\) at the bottom; a uniform coronal resistivity, \(\eta=4.8\times 10^{-4}\), is set in the whole domain except the bottom boundary. During the relaxation phase, the photospheric resistivity is set to zero, and the coronal resistivity is multiplied by 4 during the eruption for numerical stability.
We determine the onset time of the MFR eruption with a series of tests, in which we stop the driving motion at different times and then relax the system. We find that the MFR fails to erupt in a control simulation where the driving motion is switched off at \(t=120t_{A}\) (with an interval of \(2\Delta t=6t_{A}\)) but erupts successfully in the simulation where the driving motion is stopped at \(125t_{A}\) as analysed here. Thus, the onset time of the eruption should be in the time period of \(120-125t_{A}\). The decay index at the height of the MFR axis at \(t=125t_{A}\) is found to be close to the theoretical threshold of the torus instability (\(\sim\)1.5; Kliem & Torok, 2006). Afterwards, both the height and velocity of the MFR increase exponentially (Figure F1).
## Appendix G Determining the MFR Boundary
We investigate the mechanism that drives the slow rise of the MFR by analyzing the integrated z-components of the Lorentz force and of its two components (magnetic tension and magnetic pressure gradient) over the section of the MFR perpendicular to its axis. Note that the magnetic tension force (pressure gradient) analyzed here refers to the component of the magnetic tension force (pressure gradient) in the direction normal to the magnetic field, as the tangential component does not contribute to the acceleration.
To identify the MFR boundary, we calculate the squashing degree Q, which measures the mapping of the field lines. The squashing degree Q is defined by Titov et al. (2002) as:
\[Q=\frac{\left(\frac{\partial X}{\partial x}\right)^{2}+\left(\frac{\partial X}{\partial y}\right)^{2}+\left(\frac{\partial Y}{\partial x}\right)^{2}+\left(\frac{\partial Y}{\partial y}\right)^{2}}{\left|\frac{\partial X}{\partial x}\frac{\partial Y}{\partial y}-\frac{\partial X}{\partial y}\frac{\partial Y}{\partial x}\right|},\] (G1)
where \((x,y)\) and \((X,Y)\) are coordinates of two footpoints of a field line.
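As an illustration, the squashing degree of Equation (G1) can be evaluated numerically from the footpoint mapping on a uniform grid, e.g. with the following Python sketch (an assumption here is that the mapped coordinates X, Y are given on a grid whose axis 0 is \(y\) and axis 1 is \(x\)).

```python
import numpy as np

def squashing_degree(X, Y, dx=1.0, dy=1.0):
    """Q from Eq. (G1), with the Jacobian of (x, y) -> (X, Y) by finite differences."""
    dXdy, dXdx = np.gradient(X, dy, dx)      # gradients along axis 0 (y) and axis 1 (x)
    dYdy, dYdx = np.gradient(Y, dy, dx)
    norm = dXdx**2 + dXdy**2 + dYdx**2 + dYdy**2
    jac = np.abs(dXdx * dYdy - dXdy * dYdx)
    return norm / np.maximum(jac, 1e-12)     # guard against vanishing Jacobian

# log10(Q) = 3 contours would then outline the quasi-separatrix layers (QSLs).
```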
It is believed that the MFR boundary corresponds to the QSL, where the squashing degree \(Q\) is very large. In practice, we identify the top and side boundaries of the MFR mainly by following the outer contours of \(\log Q=3\). To enclose the bottom boundary of the MFR, the inner contours of \(\log Q=5\), which clearly present the HFT configuration (Figure 6), are used as a reference. The integrated vertical components of the Lorentz force, magnetic tension force, magnetic pressure gradient, and mass density are calculated by summing up the corresponding quantities within the boundary of the MFR at the plane. The accelerations caused by the Lorentz force and by its two components are obtained by dividing the integrated forces by the integrated mass density. In order to estimate the uncertainty in determining the top and side boundaries of the MFR, we also use the contours of \(\log Q=2.4\) and \(\log Q=3.6\) instead of \(\log Q=3.0\) and repeat the same procedure. The integrated quantities shown in Figure 6 and the accelerations in Figure 7 are the averages of the three measurements, and their errors are the corresponding standard deviations.
2301.07448 | A characterization of MG Dual frames using infimum cosine angle | This article discusses the construction of dual frames and their uniqueness
for the multiplication generated frames on $L^2(X; \mathcal H)$, where $X$ is a
$\sigma$-finite measure space. A necessary and sufficient condition for such duals
associated to infimum cosine angle is obtained. The result is illustrated for
the translation-generated systems on a locally compact group (not necessarily
abelian) by the action of its abelian subgroup. | Sudipta Sarkar, Niraj K. Shukla | 2023-01-18T11:44:25Z | http://arxiv.org/abs/2301.07448v1 | # A characterization of MG dual frames using infimum cosine angle
###### Abstract.
This article discusses the construction of dual frames and their uniqueness for the multiplication generated frames on \(L^{2}(X;\mathcal{H})\), where \(X\) is a \(\sigma\)-finite measure space. A necessary and sufficient condition for such duals associated with the infimum cosine angle is obtained. The result is illustrated for translation-generated systems on a locally compact group (not necessarily abelian) under the action of its abelian subgroup.
Key words and phrases:Multiplication invariant space, Angle-between subspaces, Oblique dual frames, Riesz basis, Translation invariant space 2000 Mathematics Subject Classification: 42C40,42C15 Research of S. Sarkar and N. K. Shukla was supported by research grant from CSIR, New Delhi [09/1022(0037)/2017-EMR-I] and NBHM-DAE [02011/19/2018-NBHM(R.P.)/R&D II/14723], respectively.
**Definition 1.2**.: Let \(V\) and \(W\) be closed subspaces of \(\mathcal{H}\). The _infimum cosine angle_ between \(V\) and \(W\) of \(\mathcal{H}\) is defined by
\[R(V,W)=\inf_{v\in V\backslash\{0\}}\frac{\|P_{W}v\|}{\|v\|},\]
where \(P_{W}\) is the orthogonal projection onto \(W\).
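In finite dimensions the infimum cosine angle is simply the smallest principal cosine between the two subspaces; the following numpy sketch (an illustration, not part of the paper) computes it from the singular values of the orthogonal projection applied to a basis of \(V\).

```python
import numpy as np

def infimum_cosine_angle(V, W):
    """R(V, W) for subspaces spanned by the columns of V and W."""
    QV, _ = np.linalg.qr(V)               # orthonormal basis of V
    QW, _ = np.linalg.qr(W)               # orthonormal basis of W
    PW_on_V = QW @ (QW.conj().T @ QV)     # P_W applied to each basis vector of V
    return np.linalg.svd(PW_on_V, compute_uv=False).min()

V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])                 # xy-plane in R^3
W = np.array([[1.0, 0.0], [0.0, np.cos(0.3)], [0.0, np.sin(0.3)]]) # tilted plane
print(infimum_cosine_angle(V, W))         # approximately cos(0.3); here R(V,W) = R(W,V)
```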
In general, \(R(V,W)\neq R(W,V)\). If \(R(V,W)>0\) and \(R(W,V)>0\), then \(R(V,W)=R(W,V)\), and hence we can decompose the Hilbert space as \(\mathcal{H}=V\oplus W^{\perp}\) (not necessarily an orthogonal direct sum), meaning \(\mathcal{H}=V+W^{\perp}\) and \(V\bigcap W^{\perp}=\{0\}\)[5].
\[f=\sum_{k\in I}\langle f,f_{k}\rangle g_{k},\ \forall f\in V,\]
where \(\{f_{k}\}_{k\in I}\) and \(\{g_{k}\}_{k\in I}\) are Bessel sequences in \(\mathcal{H}\) and \(W=\overline{\mathrm{span}}\{f_{k}\}\), then \(\{f_{k}\}_{k\in I}\) is an oblique dual frame of \(\{g_{k}\}_{k\in I}\) on \(W\), and \(\{g_{k}\}_{k\in I}\) is an oblique dual frame of \(\{f_{k}\}_{k\in I}\) on \(V\)[5, Lemma 3.1]. Furthermore, \(\{g_{k}\}_{k\in I}\) and \(\{P_{V}f_{k}\}_{k\in I}\) are dual frames for \(V\), and \(\{f_{k}\}_{k\in I}\) and \(\{P_{W}g_{k}\}_{k\in I}\) are dual frames for \(W\). This decomposition is important for recovering data from a given set of samples. Tang [12] studied infimum cosine angles in connection with oblique projections, which lead to oblique dual frames, followed by Kim et al. in different contexts [9, 10]. Further, Christensen and Eldar [5], and Kim et al. [11], developed a connection between the infimum cosine angle and oblique dual frames for shift-invariant (SI) spaces in \(L^{2}(\mathbb{R}^{n})\). The existence of a Riesz basis based on the infimum cosine angle for the theory of multiresolution analysis in \(L^{2}(\mathbb{R}^{n})\) was discussed by Bownik and Garrigos in [3]. We aim to continue this work in the context of a set-theoretic abstraction.
Now we provide our first main result, which is a measure-theoretic abstraction of [11, Theorem 4.10] using range functions. The novelty of considering the approach on \(L^{2}(X;\mathcal{H})\) is to develop the theory of duals for a continuous frame on a locally compact group (not necessarily abelian) translated by its abelian subgroup.
**Theorem 1.3**.: _Let \((X,\mu_{X})\) and \((\mathcal{M},\mu_{\mathcal{M}})\) be \(\sigma\)-finite measure spaces such that \(\mu(X)<\infty\), and let the set \(\mathcal{D}=\{\varphi_{s}\in L^{\infty}(X):s\in\mathcal{M}\}\) be a Parseval determining set for \(L^{1}(X)\). For finite collections of functions \(\mathscr{A}=\{f_{i}\}_{i=1}^{m}\) and \(\mathscr{B}=\{g_{i}\}_{i=1}^{n}\) in \(L^{2}(X;\mathcal{H})\), and for a.e. \(x\in X\), consider the range functions \(J_{\mathscr{A}}(x)=\mathrm{span}\{f_{i}(x):i=1,2,\ldots,m\}\) and \(J_{\mathscr{B}}(x)=\mathrm{span}\{g_{i}(x):i=1,2,\ldots,n\}\) associated with the MI spaces \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) and \(\mathcal{S}_{\mathcal{D}}(\mathscr{B})\), respectively. Then the following are equivalent:_
1. _There exist_ \(\mathscr{A}^{\prime}=\{f_{i}^{\prime}\}_{i=1}^{r}\) _and_ \(\mathscr{B}^{\prime}=\{g_{i}^{\prime}\}_{i=1}^{r}\) _in_ \(L^{2}(X;\mathcal{H})\) _such that_ \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) _and_ \(\mathcal{E}_{\mathcal{D}}(\mathscr{B}^{\prime})\) _are continuous frames for_ \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) _and_ \(\mathcal{S}_{\mathcal{D}}(\mathscr{B})\)_, respectively, satisfying the following reproducing formulas for_ \(g\in\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) _and_ \(h\in\mathcal{S}_{\mathcal{D}}(\mathscr{B})\)_:_ (1.2) \[g=\sum_{i=1}^{r}\int_{\mathcal{M}}\langle g,M_{\phi_{s}}g_{i}^{\prime}\rangle M_{\phi_{s}}f_{i}^{\prime}\ d_{\mu_{\mathcal{M}}}(s),\,\text{and}\ h=\sum_{i=1}^{r}\int_{\mathcal{M}}\langle h,M_{\phi_{s}}f_{i}^{\prime}\rangle M_{\phi_{s}}g_{i}^{\prime}\ d_{\mu_{\mathcal{M}}}(s).\]
2. _The infimum cosine angles of_ \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) _and_ \(\mathcal{S}_{\mathcal{D}}(\mathscr{B})\) _are greater than zero, i.e.,_ \[R(\mathcal{S}_{\mathcal{D}}(\mathscr{A}),\mathcal{S}_{\mathcal{D}}(\mathscr{B }))>0\text{ and }R(\mathcal{S}_{\mathcal{D}}(\mathscr{B}),\mathcal{S}_{\mathcal{D}}( \mathscr{A}))>0.\]
3. _There exist collections of functions_ \(\{f_{i}^{\prime}\}_{i=1}^{r}\) _and_ \(\{g_{i}^{\prime}\}_{i=1}^{r}\) _in_ \(L^{2}(X;\mathcal{H})\) _such that for a.e._ \(x\in X\)_, the systems_ \(\{f_{i}^{\prime}(x)\}_{i=1}^{r}\) _and_ \(\{g_{i}^{\prime}(x)\}_{i=1}^{r}\) _are finite frames for_ \(J_{\mathscr{A}}(x)\) _and_ \(J_{\mathscr{B}}(x)\)_, respectively, satisfying the following reproducing formulas for_ \(u\in J_{\mathscr{A}}(x)\) _and_ \(v\in J_{\mathscr{B}}(x)\)_:_ (1.3) \[u=\sum_{i=1}^{r}\langle u,g_{i}^{\prime}(x)\rangle f_{i}^{\prime}(x),\text{ and }v=\sum_{i=1}^{r}\langle v,f_{i}^{\prime}(x)\rangle g_{i}^{\prime}(x),\text{ a.e. }x\in X.\]
4. _For a.e._ \(x\in X\)_, the infimum cosine angles of_ \(J_{\mathscr{A}}(x)\) _and_ \(J_{\mathscr{B}}(x)\) _are greater than zero, i.e.,_ \[R(J_{\mathscr{A}}(x),J_{\mathscr{B}}(x))>0\text{ and }R(J_{\mathscr{B}}(x),J_{ \mathscr{A}}(x))>0.\]
The equations (1.2) and (1.3) explore the various possibilities of obtaining oblique dual frames in the global and local setups, respectively. These duals and the associated reproducing formulas are not necessarily unique. However, when \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) is a Riesz basis, the dual is always unique. The following main result discusses the uniqueness of the reproducing formula; it is a measure-theoretic abstraction of [12, Corollary 2.4] and [3, Proposition 2.13].
**Theorem 1.4**.: _Let \((X,\mu_{X})\) be a \(\sigma\)-finite measure space with \(\mu(X)<\infty\), and let \(\mathscr{V}\) and \(\mathscr{W}\) be multiplication invariant subspaces of \(L^{2}(X;\mathcal{H})\) corresponding to an orthonormal basis \(\mathscr{D}\) of \(L^{2}(X)\). For the finite collection of functions \(\mathscr{A}=\{f_{i}\}_{i=1}^{r}\), assume \(\mathcal{E}_{\mathscr{D}}(\mathscr{A})\) is a Riesz basis for \(\mathscr{V}\). Then the following holds:_
1. _Global setup:_ _If there exists_ \(\mathscr{A}^{\prime}=\{f_{i}^{\prime}\}_{i=1}^{r}\) _in_ \(L^{2}(X;\mathcal{H})\) _such that_ \(\mathcal{E}_{\mathscr{D}}(\mathscr{A}^{\prime})\) _is a Riesz basis for_ \(\mathscr{W}\) _satisfying the following biorthogonality condition_ (1.4) \[\langle M_{\phi}f_{i},M_{\phi^{\prime}}f_{i^{\prime}}^{\prime}\rangle=\delta_{i,i^{\prime}}\delta_{\phi,\phi^{\prime}},\quad i,i^{\prime}=1,2,\cdots,r;\ \phi,\phi^{\prime}\in\mathscr{D},\] (1.5) _then the infimum cosine angles of_ \(\mathscr{V}\) _and_ \(\mathscr{W}\) _are greater than zero, i.e.,_ (1.6) \[R(\mathscr{V},\mathscr{W})>0\ \text{and}\ R(\mathscr{W},\mathscr{V})>0.\] _Conversely if (_1.6_) holds true, then there exists_ \(\mathscr{A}^{\prime}=\{f_{i}^{\prime}\}_{i=1}^{r}\) _in_ \(L^{2}(X;\mathcal{H})\) _such that_ \(\mathcal{E}_{\mathscr{D}}(\mathscr{A}^{\prime})\) _is a Riesz basis for_ \(\mathscr{W}\) _satisfying the biorthogonality condition (_1.5_). Moreover, the following reproducing formulas hold:_ \[f=\sum_{\phi\in\mathscr{D}}\sum_{i=1}^{r}\langle f,M_{\phi}f_{i}^{\prime}\rangle M_{\phi}f_{i},\ \forall f\in\mathscr{V},\ \text{and}\ g=\sum_{\phi\in\mathscr{D}}\sum_{i=1}^{r}\langle g,M_{\phi}f_{i}\rangle M_{\phi}f_{i}^{\prime},\ \forall g\in\mathscr{W}.\]
2. _Local setup:_ _If there exists_ \(\mathscr{A}^{\prime}=\{f_{i}^{\prime}\}_{i=1}^{r}\) _in_ \(L^{2}(X;\mathcal{H})\) _such that for a.e._ \(x\in X\)_,_ \(\{f_{i}^{\prime}(x)\}_{i=1}^{r}\) _is a Riesz sequence in_ \(\mathcal{H}\) _satisfying the following biorthogonality condition_ (1.6) \[\langle f_{i}(x),f_{i^{\prime}}^{\prime}(x)\rangle=\delta_{i,i^{\prime}},\quad i,i^{\prime}=1,2,\cdots,r,\ a.e.\ x\in X,\] (1.7) _then the infimum cosine angles of_ \(J_{\mathscr{A}}(x)=\operatorname{span}\{f_{i}(x)\}_{i=1}^{r}\) _and_ \(J_{\mathscr{A}^{\prime}}(x)=\operatorname{span}\{f_{i}^{\prime}(x)\}_{i=1}^{r}\) _are greater than zero, i.e.,_ (1.8) \[R(J_{\mathscr{A}}(x),J_{\mathscr{A}^{\prime}}(x))>0\ \text{and}\ R(J_{\mathscr{A}^{\prime}}(x),J_{\mathscr{A}}(x))>0,\ a.e.\ x\in X.\] _Conversely if (_1.8_) holds, there exists_ \(\mathscr{A}^{\prime}=\{f_{i}^{\prime}\}_{i=1}^{r}\) _in_ \(L^{2}(X;\mathcal{H})\) _such that for a.e._ \(x\in X\)_,_ \(\{f_{i}^{\prime}(x)\}_{i=1}^{r}\) _is a Riesz sequence in_ \(\mathcal{H}\) _satisfying the biorthogonality condition (_1.6_). Moreover, the following reproducing formulas hold for_ \(u\in J_{\mathscr{A}}(x)\) _and_ \(v\in J_{\mathscr{A}^{\prime}}(x)\)_:_ \[u=\sum_{i=1}^{r}\langle u,f_{i}^{\prime}(x)\rangle f_{i}(x),\ \text{and}\ v=\sum_{i=1}^{r}\langle v,f_{i}(x)\rangle f_{i}^{\prime}(x),\ \text{for a.e.}\ x\in X.\]
The paper is organized as follows: in Section 2, we discuss multiplication-generated oblique dual frames and their characterizations in connection with the Gramian matrix. In Section 3, the proofs of Theorems 1.3 and 1.4 are provided. The paper ends with Section 4, which discusses applications to a locally compact group translated by a closed abelian subgroup.
## 2. Multiplication generated oblique dual frames
Given a Parseval determining set \(\mathcal{D}:=\{g_{s}\in L^{\infty}(X):s\in\mathcal{M}\}\) for \(L^{1}(X)\) (see, (1.1)), and a finite collection of functions \(\mathscr{A}=\{\varphi_{i}\}_{i\in\mathcal{I}_{r}}\) in \(L^{2}(X;\mathcal{H})\), we recall the MG system \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) and its associated MI space \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) given by
\[\mathcal{E}_{\mathcal{D}}(\mathscr{A}):=\{M_{g_{s}}\varphi_{i}(\cdot)=g_{s}( \cdot)\varphi_{i}(\cdot):s\in\mathcal{M},i\in\mathcal{I}_{r}\}\,,\quad \text{and}\quad\mathcal{S}_{\mathcal{D}}(\mathscr{A}):=\overline{\operatorname{ span}}\,\mathcal{E}_{\mathcal{D}}(\mathscr{A}), \tag{2.1}\]
respectively, where \((\mathcal{M},\mu_{\mathcal{M}})\) is a \(\sigma\)-finite measure space, and \(\mathcal{I}_{r}:=\{1,2,\cdots,r\}\), for \(r\in\mathbb{N}\). The MG system \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) is said to be a _continuous frame_ (simply, _frame_) for \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) if the map \((s,i)\mapsto\langle f,M_{g_{s}}\varphi_{i}\rangle\) from \((\mathcal{M}\times\mathcal{I}_{r})\) to \(\mathbb{C}\) is measurable, and there exist \(0<A\leqslant B<\infty\) such that
\[A\|f\|^{2}\leqslant\sum_{i\in\mathcal{I}_{r}}\int_{\mathcal{M}}|\langle f,M_{g_{s} }\varphi_{i}\rangle|^{2}d_{\mu_{\mathcal{M}}}(s)\leqslant B\|f\|^{2},\ \text{for all}\ f\in S_{\mathcal{D}}(\mathscr{A}). \tag{2.2}\]
If \(S_{\mathcal{D}}(\mathscr{A})=L^{2}(X;\mathcal{H})\), then \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) is a _frame_ for \(L^{2}(X;\mathcal{H})\); it is _Bessel_ in \(L^{2}(X;\mathcal{H})\) when only the upper bound in (2.2) holds.
For a Bessel family \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) in \(L^{2}(X;\mathcal{H})\), we define a bounded linear operator \(T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}:L^{2}(X;\mathcal{H})\to L^{2}( \mathcal{M}\times\mathcal{I}_{r})\), known as _analysis operator_, by
\[T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}(f)(s,i)=\langle f,M_{g_{s}}\varphi_{i} \rangle,\ \text{for all}\ (s,i)\in\mathcal{M}\times\mathcal{I}_{r},\,\text{and}\,f\in L^{2}(X; \mathcal{H}),\]
and its adjoint operator \(T^{*}_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}:L^{2}(\mathcal{M}\times\mathcal{I}_{ r})\to L^{2}(X;\mathcal{H})\), known as _synthesis operator_, by
\[T^{*}_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}\psi=\sum_{i\in\mathcal{I}_{r}} \int_{\mathcal{M}}\psi(s,i)M_{g_{s}}\varphi_{i}\;d_{\mu_{\mathcal{M}}}(s),\text{ for all }\psi\in L^{2}(\mathcal{M}\times\mathcal{I}_{r}),\]
in the weak sense. Then, the composition \(S_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}:=T^{*}_{\mathcal{E}_{\mathcal{D}}( \mathscr{A})}T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}:L^{2}(X;\mathcal{H}) \to L^{2}(X;\mathcal{H})\) is known as _frame operator_.
At this juncture it can be noted that the Bessel family \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) is a continuous frame for \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) with bounds \(0<A\leq B\) if and only if the frame operator \(S_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}\big{|}_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})}\) restricted to \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) is positive, bounded and invertible with \(AI_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})}\leq S_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}\big{|}_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})}\leq BI_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})}\), where \(I_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})}\) denotes the identity operator on \(L^{2}(X;\mathcal{H})\) restricted to \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\). The inverse of the frame operator satisfies \(\frac{1}{B}I_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})}\leq(S_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}\big{|}_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})})^{-1}\leq\frac{1}{A}I_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})}\), and the family \((S_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}\big{|}_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})})^{-1}\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) is also a continuous frame for \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\), known as the _canonical dual frame_ of \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\), which satisfies the following reproducing formula for all \(f\in\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) in the weak sense:
\[f=\sum_{i\in\mathcal{I}_{r}}\int_{\mathcal{M}}\langle f,(S_{\mathcal{E}_{ \mathcal{D}}(\mathscr{A})}\big{|}_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})})^{ -1}M_{g_{s}}\varphi_{i}\rangle M_{g_{s}}\varphi_{i}\,d_{\mu_{\mathcal{M}}}(s). \tag{2.3}\]
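For intuition, the finite-dimensional analogue of the frame operator and of the canonical dual reproducing formula (2.3) can be checked numerically; the following numpy sketch (an illustration with an assumed toy frame, not the continuous setting of the paper) does so for three vectors in \(\mathbb{R}^{2}\).

```python
import numpy as np

phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])                 # three frame vectors for R^2 (rows)

S = phi.T @ phi                              # frame operator: S f = sum_k <f, phi_k> phi_k
dual = phi @ np.linalg.inv(S)                # canonical dual frame vectors (rows)

f = np.array([2.0, -1.0])
reconstructed = phi.T @ (dual @ f)           # sum_k <f, dual_k> phi_k
print(np.allclose(reconstructed, f))         # True: the reproducing formula holds
```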
The reproducing formula (2.3) gives an idea to find a new Bessel family, say \(\{h_{i}\}_{i\in\mathcal{I}_{r}}=:\mathscr{A}^{\prime}\) in \(L^{2}(X;\mathcal{H})\), such that the following decomposition formula holds:
\[f=\sum_{i\in\mathcal{I}_{r}}\int_{\mathcal{M}}\langle f,M_{g_{s}}h_{i}\rangle M _{g_{s}}\varphi_{i}\,d_{\mu_{\mathcal{M}}}(s),\ f\in\mathcal{S}_{\mathcal{D}}( \mathscr{A}),\quad i.e.,\quad T^{*}_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}T _{\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})}\big{|}_{\mathcal{S}_{ \mathcal{D}}(\mathscr{A})}=I_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})},\]
where \(\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})\) need not be a subset of \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\). This motivates defining duals other than the canonical dual.
Next we define alternate and oblique duals for \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\). Askari and Gabardo in [7] introduced such duals for shift invariant subspaces of \(L^{2}(\mathbb{R}^{n})\) and Heil et al. in [6] defined them for a separable Hilbert space.
**Definition 2.1**.: Let \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) be a continuous frame for \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) and \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) be a Bessel family in \(L^{2}(X;\mathcal{H})\). Then
1. \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) is an _alternate MG-dual_ for \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) if \(T^{*}_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})}\big{|}_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})}=I_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})}\).
2. The alternate MG-dual \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) is called an _oblique MG-dual_ for \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) if \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) is a continuous frame for \(\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})\) and \(T^{*}_{\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})}T_{\mathcal{E}_{ \mathcal{D}}(\mathscr{A})}\big{|}_{\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime} )}=I_{\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})}\).
3. The oblique MG-dual \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) is an _MG-dual frame_ for \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) if \(\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})=\mathcal{S}_{\mathcal{D}}( \mathscr{A})\).
In this section, we aim to characterize alternate and oblique duals for \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\). For this, we need the concept of Fourier transform in the abstract setup. We now provide a notion of Fourier transform for \(L^{2}(X)\), introduced by Bownik and Iverson in [4].
**Definition 2.2**.: For \(f\in L^{1}(X)\bigcap L^{2}(X)\), the _Fourier transform_\(\mathcal{F}f\in L^{2}(\mathcal{M})\) corresponding to the Parseval determining set \(\mathcal{D}=\{\phi_{s}\in L^{\infty}(X):s\in\mathcal{M}\}\) is given by
\[(\mathcal{F}f)(s)=\int_{X}f(x)\overline{g_{s}(x)}d_{\mu_{X}}(x),\ a.e.\ s\in \mathcal{M}. \tag{2.4}\]
The Fourier transform \(\mathcal{F}\) extends uniquely to a linear isometry from \(L^{2}(X)\) to \(L^{2}(\mathcal{M})\).
The _Plancherel's relation_ and _Parseval's formula_ are given by
\[\|\mathcal{F}f\|_{L^{2}(\mathcal{M})}=\|f\|_{L^{2}(X)}\text{ and }\langle\mathcal{F}f,\mathcal{F}h \rangle_{L^{2}(\mathcal{M})}=\langle f,h\rangle_{L^{2}(X)},\text{ for all }f,h\in L^{2}(X), \tag{2.5}\]
respectively.
The following result gives a way to move from global setup to local setup. For \(\mathscr{B}\subset L^{2}(X;\mathcal{H})\) and \(x\in X\), the set \(\mathscr{B}(x)\) is given by \(\mathscr{B}(x):=\{f(x):f\in\mathscr{B}\}\), which will be frequently used in the sequel.
**Proposition 2.3**.: _Let \(\mathscr{A}\) and \(\mathscr{A}^{\prime}\) be finite collections of functions in \(L^{2}(X;\mathcal{H})\) having the same cardinality such that \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) and \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) are Bessel. Then the following holds for all \(f,g\in L^{2}(X;\mathcal{H})\):_
\[\Big{\langle}T^{*}_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})}f,g\Big{\rangle}=\int_{X}\langle T^{*}_{\mathscr{A}(x)}T_{\mathscr{A}^{\prime}(x)}f(x),g(x)\rangle d_{\mu_{X}}(x),\]
_where the operators \(T_{\mathscr{A}(x)}\) and \(T_{\mathscr{A}^{\prime}(x)}\) are analysis operators associated to \(\mathscr{A}(x)\) and \(\mathscr{A}^{\prime}(x)\), respectively, for a.e. \(x\in X\)._
Proof.: Let \(\mathscr{A}=\{\varphi_{i}\}_{i=1}^{r}\) and \(\mathscr{A}^{\prime}=\{\psi_{i}\}_{i=1}^{r}\) be two finite collections of functions in \(L^{2}(X;\mathcal{H})\). For \(f,g\in L^{2}(X;\mathcal{H})\), the analysis operators \(T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}\) and \(T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})}\) satisfy
\[(T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}g)(s,i)=\langle g,M_{\phi_{s}}\varphi_{i}\rangle\text{ and }(T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})}f)(s,i)=\langle f,M_{\phi_{s}}\psi_{i}\rangle,\text{ for all }(s,i)\in\mathcal{M}\times\{1,\ldots,r\},\]
and then we compute the following:
\[\big{\langle}T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})}f,T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}g\big{\rangle}=\int_{\mathcal{M}}\sum_{i=1}^{r}\langle f,M_{\phi_{s}}\psi_{i}\rangle\overline{\langle g,M_{\phi_{s}}\varphi_{i}\rangle}d_{\mu_{\mathcal{M}}}(s)=\int_{\mathcal{M}}\sum_{i=1}^{r}\bigg{(}\int_{X}\langle f(x),\psi_{i}(x)\rangle\overline{\phi_{s}(x)}d_{\mu_{X}}(x)\bigg{)}\times\overline{\bigg{(}\int_{X}\langle g(x),\varphi_{i}(x)\rangle\overline{\phi_{s}(x)}d_{\mu_{X}}(x)\bigg{)}}d_{\mu_{\mathcal{M}}}(s).\]
Choosing \(F_{\psi_{i}}(x)=\langle f(x),\psi_{i}(x)\rangle\) and \(G_{\varphi_{i}}(x)=\langle g(x),\varphi_{i}(x)\rangle\), for \(x\in X\) and \(i\in\{1,\ldots,r\}\), we have
\[\big{\langle}T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})}f,T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}g\big{\rangle}=\int_{\mathcal{M}}\sum_{i=1}^{r}\mathcal{F}F_{\psi_{i}}(s)\overline{\mathcal{F}G_{\varphi_{i}}(s)}d_{\mu_{\mathcal{M}}}(s)=\sum_{i=1}^{r}\int_{\mathcal{M}}\mathcal{F}F_{\psi_{i}}(s)\overline{\mathcal{F}G_{\varphi_{i}}(s)}d_{\mu_{\mathcal{M}}}(s)=\sum_{i=1}^{r}\langle\mathcal{F}F_{\psi_{i}},\mathcal{F}G_{\varphi_{i}}\rangle=\sum_{i=1}^{r}\langle F_{\psi_{i}},G_{\varphi_{i}}\rangle=\sum_{i=1}^{r}\int_{X}F_{\psi_{i}}(x)\overline{G_{\varphi_{i}}(x)}d_{\mu_{X}}(x)=\sum_{i=1}^{r}\int_{X}\langle f(x),\psi_{i}(x)\rangle\overline{\langle g(x),\varphi_{i}(x)\rangle}d_{\mu_{X}}(x),\]
using the Fourier transform (2.4), Parseval's formula (2.5) on \(L^{2}(X)\), and Fubini's theorem over \(\mathcal{M}\times\{1,2,\ldots,r\}\). Here \(F_{\psi_{i}},G_{\varphi_{i}}\in L^{2}(X)\), which holds by noting that \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) is a Bessel system with bound \(B\) if and only if \(\mathscr{A}(x)\) is a Bessel system with bound \(B\) for a.e. \(x\in X\), together with the estimate
\[\int_{X}\sum_{i=1}^{r}\Big{|}F_{\psi_{i}}(x)\overline{G_{\varphi_{i}}(x)}\Big{|}\,d_{\mu_{X}}(x)\leqslant\left(\int_{X}\sum_{i=1}^{r}\left|\langle f(x),\psi_{i}(x)\rangle\right|^{2}d_{\mu_{X}}(x)\right)^{\frac{1}{2}}\times\left(\int_{X}\sum_{i=1}^{r}\left|\langle g(x),\varphi_{i}(x)\rangle\right|^{2}d_{\mu_{X}}(x)\right)^{\frac{1}{2}}\leqslant\sqrt{BB^{\prime}}\|f\|\|g\|,\]
obtained from the Cauchy-Schwarz inequality, where \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) and \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) are assumed to be Bessel systems with bounds \(B\) and \(B^{\prime}\), respectively. Therefore, using Fubini's theorem over \(\{1,2,\ldots,r\}\times X\), we get
\[\big{\langle}T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})}f,T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}g\big{\rangle}=\sum_{i=1}^{r}\int_{X}\langle f(x),\psi_{i}(x)\rangle\overline{\langle g(x),\varphi_{i}(x)\rangle}d_{\mu_{X}}(x)=\int_{X}\sum_{i=1}^{r}T_{\mathscr{A}^{\prime}(x)}(f(x))(i)\overline{T_{\mathscr{A}(x)}(g(x))(i)}d_{\mu_{X}}(x)=\int_{X}\langle T_{\mathscr{A}^{\prime}(x)}f(x),T_{\mathscr{A}(x)}g(x)\rangle d_{\mu_{X}}(x),\]
where \(T_{\mathscr{A}^{\prime}(x)}(f(x))(i)=\langle f(x),\psi_{i}(x)\rangle\) and \(T_{\mathscr{A}(x)}(g(x))(i)=\langle g(x),\varphi_{i}(x)\rangle\), for \(i=1,2,\ldots,r\).
Next we provide a characterization of alternate duals in terms of the Gramian operators. The _Gramian_ and _dual Gramian operators_ are given by
\[G_{\mathscr{A}}(x)=T_{\mathscr{A}}(x)T_{\mathscr{A}}^{\ast}(x)\text{ and }\tilde{G}_{\mathscr{A}}(x)=T_{\mathscr{A}}^{\ast}(x)T_{\mathscr{A}}(x),\ a.e.\ x \in X,\]
where \(T_{\mathscr{A}}(x)\) and \(T_{\mathscr{A}}^{\ast}(x)\) denote the analysis and synthesis operators corresponding to \(\mathscr{A}(x)=\{\varphi_{i}(x)\}_{i\in\mathcal{I}_{r}}\). For \(\mathscr{A}=\{\varphi_{i}\}_{i\in\mathcal{I}_{r}}\) and \(\mathscr{A}^{\prime}=\{\psi_{i}\}_{i\in\mathcal{I}_{r}}\), the operator \(G_{\mathscr{A},\mathscr{A}^{\prime}}(x)=\left[\langle\varphi_{j}(x),\psi_{i}(x)\rangle\right]_{i,j\in\mathcal{I}_{r}}\) is known as the _mixed Gramian operator_, for a.e. \(x\in X\). The following result is a measure-theoretic abstraction of [11, Theorem 4.1] and [7, Theorem 5(a)].
**Proposition 2.4**.: _In addition to the assumptions of Proposition 2.3, let us assume that \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) is a frame for \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\). Then the system \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) is an alternate MG-dual for \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) if and only if for a.e. \(x\in X\) the system \(\mathscr{A}^{\prime}(x)=\{\psi(x):\psi\in\mathscr{A}^{\prime}\}\) is an alternate dual for \(\mathscr{A}(x)=\{\varphi(x):\varphi\in\mathscr{A}\}\); equivalently, the Gramian \(G_{\mathscr{A}}(x)\) and mixed Gramian \(G_{\mathscr{A},\mathscr{A}^{\prime}}(x)\) operators satisfy the following relation:_
\[G_{\mathscr{A}}(x)G_{\mathscr{A},\mathscr{A}^{\prime}}(x)=G_{\mathscr{A}}(x),\text{ for a.e. }x\in X.\]
Proof.: Suppose the system \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) is an alternate MG-dual for \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\); then we have \(T^{*}_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})}\big{|}_{S_{\mathcal{D}}(\mathscr{A})}=I_{S_{\mathcal{D}}(\mathscr{A})}\). By Proposition 2.3, we get
\[\int_{X}\langle T^{*}_{\mathscr{A}(x)}T_{\mathscr{A}^{\prime}(x)}f(x),g(x)\rangle d_{\mu_{X}}(x)=\int_{X}\langle f(x),g(x)\rangle d_{\mu_{X}}(x),\ \forall f,g\in S_{\mathcal{D}}(\mathscr{A}). \tag{2.6}\]
At first we will show, \(T^{*}_{\mathscr{A}(x)}T_{\mathscr{A}^{\prime}(x)}\big{|}_{J_{\mathscr{A}(x)}}= I_{J_{\mathscr{A}}(x)}\), for a.e. \(x\in X\). For this, let \(\{x_{n}\}_{n\in\mathbb{N}}\) be a countable dense subset of \(\mathcal{H}\) and let \(P_{J_{\mathscr{A}}}(x)\) be an orthogonal projection onto \(J_{\mathscr{A}}(x)\) for a.e. \(x\in X\). Clearly for a.e. \(x\in X\), \(\{P_{J_{\mathscr{A}}}(x)x_{n}\}_{n\in\mathbb{N}}\) is dense in \(J_{\mathscr{A}}(x)\). Next for each \(m,n\in\mathbb{N}\), we define a set \(S_{m,n}\) as follows:
\[S_{m,n}=\Big{\{}x\in X:\rho_{m,n}(x):=\langle T^{*}_{\mathscr{A}(x)}T_{\mathscr{A}^{\prime}(x)}P_{J_{\mathscr{A}}}(x)x_{m},P_{J_{\mathscr{A}}}(x)x_{n}\rangle-\langle P_{J_{\mathscr{A}}}(x)x_{m},P_{J_{\mathscr{A}}}(x)x_{n}\rangle\neq 0\Big{\}}.\]
Now we assume on the contrary that \(T^{*}_{\mathscr{A}(x)}T_{\mathscr{A}^{\prime}(x)}\big{|}_{J_{\mathscr{A}}(x)}\neq I_{J_{\mathscr{A}}(x)}\) on a Borel measurable subset \(Y\) of \(X\) having positive measure. Then there are \(m_{0},n_{0}\in\mathbb{N}\) such that \(S_{m_{0},n_{0}}\bigcap Y\) is a Borel measurable subset of \(X\) having positive measure, and hence either the real or the imaginary part of \(\rho_{m_{0},n_{0}}(x)\) is strictly positive or strictly negative for a.e. \(x\in S_{m_{0},n_{0}}\bigcap Y\). Firstly we assume that the real part of \(\rho_{m_{0},n_{0}}(x)\) is strictly positive on \(S_{m_{0},n_{0}}\bigcap Y\). Choosing a Borel measurable subset \(S\) of \(S_{m_{0},n_{0}}\bigcap Y\) having positive measure, we define the functions \(h_{1}:=\chi_{S}\,P_{J_{\mathscr{A}}}(\cdot)x_{m_{0}}\) and \(h_{2}:=\chi_{S}\,P_{J_{\mathscr{A}}}(\cdot)x_{n_{0}}\). Then we have \(h_{1}(x),h_{2}(x)\in J_{\mathscr{A}}(x)\) for a.e. \(x\in X\) since \(\{P_{J_{\mathscr{A}}}(x)x_{n}\}_{n\in\mathbb{N}}\) is dense in \(J_{\mathscr{A}}(x)\), and hence \(h_{1},h_{2}\in S_{\mathcal{D}}(\mathscr{A})\) in view of [2, Theorem 2.4]. Therefore, using \(f=h_{1}\) and \(g=h_{2}\) in (2.6), we obtain \(\int_{S}\rho_{m_{0},n_{0}}(x)d_{\mu_{X}}(x)=0\), which is a contradiction since the measure of \(S\) is positive and the real part of \(\rho_{m_{0},n_{0}}(x)\) is strictly positive on \(S\). The other cases follow in a similar way. Since \(J_{\mathscr{A}}(x)=\mathrm{span}\{\varphi_{i}(x)\}_{i=1}^{r}\), for each \(j=1,2,\ldots,r\) we have \(\varphi_{j}(x)=\sum_{i=1}^{r}\langle\varphi_{j}(x),\psi_{i}(x)\rangle\varphi_{i}(x)\), which is equivalent to
\[T^{*}_{\mathscr{A}(x)}T_{\mathscr{A}^{\prime}(x)}\big{|}_{J_{\mathscr{A}}(x)}=I _{J_{\mathscr{A}(x)}},\text{for a.e. }x\in X.\]
Equivalently, we have \(G_{\mathscr{A}}(x)G_{\mathscr{A},\mathscr{A}^{\prime}}(x)=G_{\mathscr{A}}(x), \text{ for a.e. }x\in X,\) by looking at the definition of Gramian and mixed Gramian operators. Therefore we get the result.
Conversely, assume \(G_{\mathscr{A}}(x)G_{\mathscr{A},\mathscr{A}^{\prime}}(x)=G_{\mathscr{A}}(x)\) for a.e. \(x\in X\); equivalently, \(T^{*}_{\mathscr{A}(x)}T_{\mathscr{A}^{\prime}(x)}\big{|}_{J_{\mathscr{A}}(x)}=I_{J_{\mathscr{A}}(x)}\) for a.e. \(x\in X\). Then we have \(T^{*}_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})}\big{|}_{S_{\mathcal{D}}(\mathscr{A})}=I_{S_{\mathcal{D}}(\mathscr{A})}\), which follows from the computation
\[\Big{\langle}T^{*}_{\mathcal{E}_{\mathcal{D}}(\mathscr{A})}T_{\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})}f,g\Big{\rangle}=\int_{X}\Big{\langle}T_{\mathscr{A}^{\prime}(x)}f(x),T_{\mathscr{A}(x)}g(x)\Big{\rangle}\,d_{\mu_{X}}(x)=\int_{X}\Big{\langle}T^{*}_{\mathscr{A}(x)}T_{\mathscr{A}^{\prime}(x)}f(x),g(x)\Big{\rangle}\,d_{\mu_{X}}(x)=\int_{X}\left\langle f(x),g(x)\right\rangle d_{\mu_{X}}(x),\ \forall f,g\in S_{\mathcal{D}}(\mathscr{A})\]
in view of Proposition 2.3.
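For readers who want to experiment with this pointwise criterion, the following numpy sketch (not part of the paper; a single fibre \(x\), real scalars, and an assumed toy family) checks the relation \(G_{\mathscr{A}}(x)G_{\mathscr{A},\mathscr{A}^{\prime}}(x)=G_{\mathscr{A}}(x)\) for an alternate dual built from the canonical dual plus components outside the span.

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])              # rows phi_i(x); they span the xy-plane

S = A.T @ A                                   # frame operator on R^3 (rank 2)
Ad = A @ np.linalg.pinv(S)                    # canonical dual inside the span
Ad = Ad + np.outer([0.3, -0.2, 0.1], [0.0, 0.0, 1.0])  # add components outside the span

G_A   = A @ A.T                               # Gramian [<phi_j, phi_i>]
G_mix = Ad @ A.T                              # mixed Gramian [<phi_j, psi_i>]
print(np.allclose(G_A @ G_mix, G_A))          # True: A'(x) is an alternate dual of A(x)
```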
The following result is a measure-theoretic abstraction of [11, Theorem 4.1] for oblique dual frames associated with the rank of mixed Gramian operator and the dimension of range functions.
**Proposition 2.5**.: _In addition to the assumptions of Proposition 2.3, let us assume \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) and \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) be frames for \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) and \(\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})\), respectively, such that \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) is an alternate MG-dual for \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) and_
\[\mathrm{rank}\ G_{\mathscr{A},\mathscr{A}^{\prime}}(x)=\dim J_{\mathscr{A}}(x)= \dim J_{\mathscr{A}^{\prime}}(x),\ a.e.\ x\in X, \tag{2.7}\]
_where \(J_{\mathscr{A}}(x)=\mathrm{span}\{f(x):f\in\mathscr{A}\}\) and \(J_{\mathscr{A}^{\prime}}(x)=\mathrm{span}\{g(x):g\in\mathscr{A}^{\prime}\}\). Then \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) is an oblique MG-dual for \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\)._
Proof.: Observing the proof of Proposition 2.4, we get \(\mathscr{A}^{\prime}(x)\) is an alternate dual to \(\mathscr{A}(x)\), for a.e. \(x\in X\) since \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) is an alternate MG-dual for \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\). Then for a.e. \(x\in X\), we can write \(\varphi(x)=\sum_{i=1}^{r}\langle\varphi(x),g_{i}(x)\rangle f_{i}(x)\), for each \(\varphi\in\mathcal{S}_{\mathcal{D}}(\mathscr{A})\). Further note that \(P(x):=P_{J_{\mathscr{A}^{\prime}(x)}}|_{J_{\mathscr{A}}(x)}:J_{\mathscr{A}(x) }\to J_{\mathscr{A}^{\prime}(x)}\) is invertible in view of [11, Lemma 3.1] and relation (2.7). Therefore for \(k=1,2,\ldots,r\) and a.e. \(x\in X\), we get
\[\langle P(x)\varphi(x),g_{k}(x)\rangle =\langle P_{J_{\mathscr{A}^{\prime}}(x)}\varphi(x),g_{k}(x)\rangle=\langle\varphi(x),P_{J_{\mathscr{A}^{\prime}}(x)}g_{k}(x)\rangle=\langle\varphi(x),g_{k}(x)\rangle\] \[=\left\langle\sum_{i=1}^{r}\langle\varphi(x),g_{i}(x)\rangle f_{i}(x),g_{k}(x)\right\rangle\] \[=\left\langle\varphi(x),\sum_{i=1}^{r}\langle g_{k}(x),f_{i}(x)\rangle g_{i}(x)\right\rangle\] \[=\left\langle P(x)\varphi(x),\sum_{i=1}^{r}\langle g_{k}(x),f_{i}(x)\rangle g_{i}(x)\right\rangle.\]
and hence \(g_{k}(x)=\sum_{i=1}^{r}\langle g_{k}(x),f_{i}(x)\rangle g_{i}(x)\) since \(P(x)\) is invertible. Hence the result holds by noting Proposition 2.4.
The next result shows that the space \(L^{2}(X;\mathcal{H})\) can be decomposed with the help of \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) and \(\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})\) using the rank condition (2.7). We use the angle between two MI subspaces and its pointwise characterization for the proof. From Definition 1.2, note that
\[(P_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})}|_{\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})}f)(x)=(P_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})}P_{\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})}f)(x)=P_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})}(x)P_{\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})}(x)f(x)=P_{\mathcal{S}_{\mathcal{D}}(\mathscr{A})(x)}|_{\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})(x)}f(x).\]
and by [4, Theorem 4.1 (iii)], we have
\[\inf\left\{\frac{\|P_{\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})}f\|}{ \|f\|}:f\in\mathcal{S}_{\mathcal{D}}(\mathscr{A})\backslash\{0\}\right\}= \operatorname*{ess-inf}_{x\in X}\left\{\frac{\|P_{\mathcal{S}_{\mathcal{D}}( \mathscr{A}^{\prime})}(x)w\|}{\|w\|}:w\in J_{\mathscr{A}}(x)\backslash\{0\} \right\}.\]
Thus if we define, \(\sigma(\mathcal{S}_{\mathcal{D}}(\mathscr{A})):=\{x\in X:J_{\mathscr{A}}(x)\neq 0\}\) then
\[R(\mathcal{S}_{\mathcal{D}}(\mathscr{A}),\mathcal{S}_{\mathcal{D}}(\mathscr{A }^{\prime}))=\begin{cases}\operatorname*{ess-inf}_{x\in\sigma(\mathcal{S}_{ \mathcal{D}}(\mathscr{A}))}R(J_{\mathscr{A}}(x),J_{\mathscr{A}^{\prime}}(x)) \text{ if }\mu_{X}(\sigma(\mathcal{S}_{\mathcal{D}}(\mathscr{A})))>0,\\ 1,\text{otherwise}.\end{cases}. \tag{2.8}\]
**Proposition 2.6**.: _In addition to the assumptions of Proposition 2.3, the following statements are equivalent:_
* _For a.e._ \(x\in X\)_, the relation (_2.7_) holds, i.e.,_ \(\operatorname{rank}\,G_{\mathscr{A},\mathscr{A}^{\prime}}(x)=\dim J_{ \mathscr{A}}(x)=\dim J_{\mathscr{A}^{\prime}}(x),\ a.e.\ x\in X,\) _and there exists a constant_ \(C>0\) _such that_ \[\|(G_{\mathscr{A}}(x))^{1/2}G_{\mathscr{A},\mathscr{A}^{\prime}}(x)^{\dagger}(G _{\mathscr{A}^{\prime}}(x))^{1/2}\|\leqslant C,\ a.e.\ x\in\{x\in X:J_{ \mathscr{A}}(x)\neq 0\}:=\sigma(\mathcal{S}_{\mathcal{D}}(\mathscr{A})),\] _where_ \(G_{\mathscr{A},\mathscr{A}^{\prime}}(x)^{\dagger}\) _denotes the pseudo inverse of_ \(G_{\mathscr{A},\mathscr{A}^{\prime}}(x)\)_._
* \(L^{2}(X;\mathcal{H})=\mathcal{S}_{\mathcal{D}}(\mathscr{A})\oplus\mathcal{S} _{\mathcal{D}}(\mathscr{A}^{\prime})^{\perp}\)_._
* \(L^{2}(X;\mathcal{H})=\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})\oplus \mathcal{S}_{\mathcal{D}}(\mathscr{A})^{\perp}\)_._
* \(R(\mathcal{S}_{\mathcal{D}}(\mathscr{A}),\mathcal{S}_{\mathcal{D}}(\mathscr{A }^{\prime}))>0\) _and_ \(R(\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime}),\mathcal{S}_{\mathcal{D}}( \mathscr{A}))>0\)__
Proof.: The result can be established easily by following the steps of [4, Theorem 4.18] and [11, Theorem 3.8].
At the end of this section we provide a method to construct alternate (oblique) duals, which is an abstract version of [11, Lemma 5.1].
**Proposition 2.7**.: _For a \(\sigma\)-finite measure space \((X,\mu_{X})\) with \(\mu(X)<\infty\), consider the assumptions of Proposition 2.3 and assume \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) to be a frame for \(\mathcal{S}_{\mathcal{D}}(\mathscr{A}).\) Define a class of functions \(\widetilde{\mathscr{A}^{\prime}}=\{h_{i}\}_{i=1}^{r}\) associated to \(\mathscr{A}^{\prime}=\{g_{i}\}_{i=1}^{r}\subset L^{2}(X;\mathcal{H})\) by_
\[h_{i}(x)=\begin{cases}\sum_{j=1}^{r}\overline{G_{\mathscr{A},\mathscr{A}^{\prime} }(x)^{\dagger}_{i,j}}\,g_{j}(x),\text{ if }x\in\sigma(\mathcal{S}_{\mathcal{D}}(\mathscr{A})),\\ 0,\quad otherwise.\end{cases} \tag{2.9}\]
_Then, \(\mathcal{E}_{\mathcal{D}}(\widetilde{\mathscr{A}^{\prime}})\) is an alternate (oblique) MG-dual for \(\mathcal{E}_{\mathcal{D}}(\mathscr{A})\) if the rank condition of Proposition 2.6 (i) holds and there exists a \(C>0\) such that \(\|G_{\mathscr{A},\mathscr{A}^{\prime}}(x)^{\dagger}\|\leqslant C\) for a.e. \(x\in\sigma(\mathcal{S}_{\mathcal{D}}(\mathscr{A}))\)._
Proof.: Note that \(G_{\widetilde{\mathscr{A}^{\prime}}}(x)=G_{\mathscr{A},\mathscr{A}^{\prime}}(x)^{\dagger}G_{\mathscr{A}^{\prime}}(x)(G_{\mathscr{A},\mathscr{A}^{\prime}}(x)^{\dagger})^{*}\) for a.e. \(x\in\sigma(\mathcal{S}_{\mathcal{D}}(\mathscr{A}))\) and \(G_{\widetilde{\mathscr{A}^{\prime}}}(x)=0\) otherwise; \(\|G_{\widetilde{\mathscr{A}^{\prime}}}(x)\|\) is bounded above due to the Bessel property of \(\mathscr{A}^{\prime}(x)\). By Proposition 2.4 we need to verify \(G_{\mathscr{A}}(x)G_{\mathscr{A},\widetilde{\mathscr{A}^{\prime}}}(x)=G_{\mathscr{A}}(x)\), which follows by the same technique as in the proof of [11, Lemma 5.3].
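The construction (2.9) is easy to test numerically in finite dimensions; the following numpy sketch (an assumed toy example on a single fibre \(x\), real scalars, not part of the paper) builds the family \(\{h_{i}(x)\}\) from the pseudo-inverse of the mixed Gramian and verifies the resulting reproducing formula.

```python
import numpy as np

F = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])               # rows f_i(x); span the xy-plane
G = np.array([[1.0, 0.2, 0.5],
              [0.1, 1.0, -0.3]])              # rows g_j(x); a tilted companion family

G_mix = G @ F.T                               # mixed Gramian [<f_j, g_i>]_{i,j}
H = np.linalg.pinv(G_mix).conj() @ G          # rows h_i(x), as in Eq. (2.9)

# Reproducing formula u = sum_i <u, h_i> f_i for u in span{f_i}:
u = np.array([0.7, -1.3, 0.0])
print(np.allclose(F.T @ (H @ u), u))          # True
```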
## 3. **Proof of Theorem 1.3 and Theorem 1.4**
Proof of Theorem 1.3.: (ii) \(\to\) (i): Assume that (ii) holds; then we have rank \(\,G_{\mathscr{A},\mathscr{B}}(x)=\dim J_{\mathscr{A}}(x)=\dim J_{\mathscr{B}}(x)\) a.e. \(x\in X\) by Proposition 2.6 (iv). Considering the projection \(P(x):=P_{J_{\mathscr{A}}(x)}|_{J_{\mathscr{B}}(x)}:J_{\mathscr{B}}(x)\to J_{\mathscr{A}}(x)\), we have \(G_{\mathscr{A},\mathscr{B}}(x)=T_{\mathscr{B}}(x)P(x)T_{\mathscr{A}}^{*}(x)\), and \(P(x)\) is invertible by [11, Lemma 3.1]. Then the length of \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) equals the length of \(\mathcal{S}_{\mathcal{D}}(\mathscr{B})\) since \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) and \(\mathcal{S}_{\mathcal{D}}(\mathscr{B})\) are finitely generated. Let \(r\) be the common length of \(\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) and \(\mathcal{S}_{\mathcal{D}}(\mathscr{B})\). Then using [2, Theorem 2.6] there exist \(\mathscr{A}^{\#}=\{f_{i}^{\#}\}_{i=1}^{r}\) and \(\mathscr{B}^{\#}=\{g_{i}^{\#}\}_{i=1}^{r}\) such that \(\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\#})=\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) and \(\mathcal{S}_{\mathcal{D}}(\mathscr{B}^{\#})=\mathcal{S}_{\mathcal{D}}(\mathscr{B})\). Hence \(R(S_{\mathcal{D}}(\mathscr{A}^{\#}),S_{\mathcal{D}}(\mathscr{B}^{\#}))>0\) and \(R(S_{\mathcal{D}}(\mathscr{B}^{\#}),S_{\mathcal{D}}(\mathscr{A}^{\#}))>0\). Further, applying Proposition 2.6 (iv), there exists a positive constant \(C\) such that \(\|G_{\mathscr{A}^{\#}}(x)^{1/2}G_{\mathscr{A}^{\#},\mathscr{B}^{\#}}(x)^{\dagger}G_{\mathscr{B}^{\#}}(x)^{1/2}\|\leqslant C\) a.e. \(x\in\sigma(\mathcal{S}_{\mathcal{D}}(\mathscr{A}))\).
For the class of functions \(\mathscr{A}^{\#}=\{f_{i}^{\#}\}_{i=1}^{r}\), define the new class of functions \(\mathscr{A}^{\prime}=\{f_{i}^{\prime}\}_{i=1}^{r}\) by,
\[f_{i}^{\prime}(x)=\sum_{j=1}^{r}\overline{((G_{\mathscr{A}^{\#}}(x)^{\dagger })^{1/2})}_{i,j}f_{j}^{\#}(x),\text{ for a.e. }x\in X,\quad\text{ for each }i\in\{1,2,\dots,r\}.\]
Applying the singular value decomposition of the positive semidefinite matrix, \(G_{\mathscr{A}^{\#}}(x)\) for a.e. \(x\in X\),
\[G_{\mathscr{A}^{\#}}(x)=Q(x)D(x)Q(x)^{*},\]
where the diagonal entries of \(D(x)\) are the non-zero eigenvalues of \(G_{\mathscr{A}^{\#}}(x)\), and \(Q(x)\) is unitary. Also, note that
\[\|f_{i}^{\prime}(x)\|^{2}=\left((G_{\mathscr{A}^{\#}}(x)^{\dagger})^{1/2}G_{\mathscr{A}^{\#}}(x)(G_{\mathscr{A}^{\#}}(x)^{\dagger})^{1/2}\right)_{ii}=0\text{ or }1.\]
For each \(i\in\{1,2,\dots,r\}\), \(\|f_{i}^{\prime}\|^{2}=\int_{X}\|f_{i}^{\prime}(x)\|^{2}d_{\mu_{X}}(x)\leq\mu(X)<\infty\); hence \(f_{i}^{\prime}\in L^{2}(X;\mathcal{H})\). Also
\[G_{\mathscr{A}^{\prime}}(x)=(G_{\mathscr{A}^{\#}}(x)^{\dagger})^{1/2}G_{ \mathscr{A}^{\#}}(x)(G_{\mathscr{A}^{\#}}(x)^{\dagger})^{1/2}=G_{\mathscr{A} ^{\#}}(x)^{\dagger}G_{\mathscr{A}^{\#}}(x),\ a.e.\ x\in X.\]
The eigenvalues of \(G_{\mathscr{A}^{\prime}}(x)\) are \(0\) or \(1\). Thus \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) is a frame for \(\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})\). Now we will show \(\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})=\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\#})\). It is clear that \(\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\prime})(x)\subset\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\#})(x)\) for a.e. \(x\in X\). Also,
\[\dim J_{\mathscr{A}^{\prime}}(x)=\text{rank }\,G_{\mathscr{A}^{\prime}}(x)=\text{rank }\,G_{\mathscr{A}^{\#}}(x)=\dim\,J_{\mathscr{A}^{\#}}(x).\]
Hence \(J_{\mathscr{A}^{\prime}}(x)=J_{\mathscr{A}^{\#}}(x)\) a.e. \(x\in X\), i.e., \(S_{\mathcal{D}}(\mathscr{A}^{\prime})=S_{\mathcal{D}}(\mathscr{A}^{\#})\)[8, Proposition 2.2 (iii)]. The class \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\) is a tight frame for \(\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\#})\). In a similar way we can show that there exists a collection \(\mathscr{B}^{\prime}=\{g_{i}^{\prime}\}_{i=1}^{r}\) such that \(\mathcal{E}_{\mathcal{D}}(\mathscr{B}^{\prime})\) is a tight frame for \(\mathcal{S}_{\mathcal{D}}(\mathscr{B}^{\#})\), and also we have
\[G_{\mathscr{A}^{\prime},\mathscr{B}^{\prime}}(x)=(G_{\mathscr{B}^{\#}}(x)^{ \dagger})^{1/2}G_{\mathscr{A}^{\#},\mathscr{B}^{\#}}(x)(G_{\mathscr{A}^{\#}}(x)^{ \dagger})^{1/2}.\]
Since \(P(x)\) is invertible a.e., \(G_{\mathscr{A}^{\prime},\mathscr{B}^{\prime}}(x)^{\dagger}=(G_{\mathscr{A}^{\#}}(x))^{1/2}G_{\mathscr{A}^{\#},\mathscr{B}^{\#}}(x)^{\dagger}(G_{\mathscr{B}^{\#}}(x))^{1/2}\). Now \(\|G_{\mathscr{A}^{\prime},\mathscr{B}^{\prime}}(x)^{\dagger}\|=\|(G_{\mathscr{A}^{\#}}(x))^{1/2}G_{\mathscr{A}^{\#},\mathscr{B}^{\#}}(x)^{\dagger}(G_{\mathscr{B}^{\#}}(x))^{1/2}\|\leqslant C\) a.e. \(x\in\sigma(\mathcal{S}_{\mathcal{D}}(\mathscr{A}^{\#}))\). The result follows by Proposition 2.7: \(\mathcal{E}_{\mathcal{D}}(\mathscr{B}^{\prime})\) is an oblique MG-dual for \(\mathcal{E}_{\mathcal{D}}(\mathscr{A}^{\prime})\).
(i)\(\to\)(ii): Define a map \(\Xi:L^{2}(X;\mathcal{H})\to\mathcal{S}_{\mathcal{D}}(\mathscr{A})\) by \(\Xi f=\sum_{i=1}^{r}\int_{\mathcal{M}}\langle f,M_{\phi_{s}}g_{i}^{\prime}\rangle M_{\phi_{s}}f_{i}^{\prime}\,d_{\mu_{\mathcal{M}}}(s)\). Then \(\Xi\) is a projection, though not necessarily an orthogonal one. Therefore, \(L^{2}(X;\mathcal{H})=\text{range }\Xi\,\oplus\text{Ker }\,\Xi=\mathcal{S}_{\mathcal{D}}(\mathscr{A})\oplus\text{Ker }\,\Xi\). We now show that \(\text{Ker }\Xi=\mathcal{S}_{\mathcal{D}}(\mathscr{B})^{\perp}\). Let \(\varphi\in\text{Ker }\Xi\) and \(h\in\mathcal{S}_{\mathcal{D}}(\mathscr{B})\); using the second formula in (1.2), \(\langle\varphi,h\rangle=\sum_{i=1}^{r}\int_{\mathcal{M}}\overline{\langle h,M_{\phi_{s}}f_{i}^{\prime}\rangle}\langle\varphi,M_{\phi_{s}}g_{i}^{\prime}\rangle\,d_{\mu_{\mathcal{M}}}(s)=\langle\Xi\varphi,h\rangle=0\), and hence \(\varphi\in\mathcal{S}_{\mathcal{D}}(\mathscr{B})^{\perp}\).
In a similar way, the converse inclusion follows. Hence \(L^{2}(X;\mathcal{H})=\mathcal{S}_{\mathcal{D}}(\mathscr{A})\oplus\mathcal{S}_{\mathcal{D}}(\mathscr{B})^{\perp}\), and (ii) follows from Proposition 2.6.
(iii)\(\rightarrow\)(iv): Assume (iii) holds, i.e., there exist frames \(\{f^{\prime}_{i}(x)\}_{i=1}^{r}\) and \(\{g^{\prime}_{i}(x)\}_{i=1}^{r}\) for \(J_{\mathscr{A}}(x)\) and \(J_{\mathscr{B}}(x)\), respectively, satisfying (1.3). We need to show that \(R(J_{\mathscr{A}}(x),J_{\mathscr{B}}(x))>0\) and \(R(J_{\mathscr{B}}(x),J_{\mathscr{A}}(x))>0\), which is equivalent to \(J_{\mathscr{A}}(x)\oplus J_{\mathscr{B}}(x)^{\perp}=\mathcal{H}\)[5, Lemma 2.1]. For this, define a map \(\Xi:\mathcal{H}\to J_{\mathscr{A}}(x)\) by \(\Xi(f)=\sum_{i=1}^{r}\langle f,g^{\prime}_{i}(x)\rangle f^{\prime}_{i}(x)\). Then \(\Xi\) need not be an orthogonal projection. Hence
\[\mathcal{H}=\operatorname{range}\Xi\ \oplus\ \operatorname{Ker}\Xi=J_{\mathscr{A}}(x )\oplus\operatorname{Ker}\Xi.\]
Our aim is to prove \(\operatorname{Ker}\Xi=J_{\mathscr{B}}(x)^{\perp}\). Let \(u\in\operatorname{Ker}\Xi\). Then \(u=f-\Xi f\) for some \(f\). Let \(h\in J_{\mathscr{B}}(x)\). Writing
\[\langle u,h\rangle=\langle f-\Xi f,h\rangle=\langle f,h\rangle- \left\langle\sum_{i=1}^{r}\langle f,g^{\prime}_{i}(x)\rangle f^{ \prime}_{i}(x),h\right\rangle=\langle f,h\rangle-\sum_{i=1}^{r}\langle f,g^{ \prime}_{i}(x)\rangle\langle f^{\prime}_{i}(x),h\rangle\] \[=\langle f,h\rangle-\left\langle f,\sum_{i=1}^{r}\langle h,f^{ \prime}_{i}(x)\rangle g^{\prime}_{i}(x)\right\rangle=\langle f,h\rangle- \langle f,h\rangle=0,\]
we have \(u\in J_{\mathscr{B}}(x)^{\perp}\), and if \(u\in J_{\mathscr{B}}(x)^{\perp}\), then \(u\in\operatorname{Ker}\Xi\).
(ii)\(\rightarrow\) (iv): If \(R(\mathcal{S}_{\mathcal{D}}(\mathscr{A}),\mathcal{S}_{\mathcal{D}}(\mathscr{B}))>0\), then, in view of (2.8), we have \(R(J_{\mathscr{A}}(x),J_{\mathscr{B}}(x))>0\) for a.e. \(x\in X\). The remaining part follows easily.
Before moving to the proof of Theorem 1.4 we need the concept of the supremum cosine angle. For two subspaces \(V\) and \(W\) of a Hilbert space \(\mathcal{H}\), the supremum cosine angle between them is \(S(V,W)=\sup_{v\in V\setminus\{0\}}\|P_{W}v\|/\|v\|\). The supremum and infimum cosine angles are related by \(R(V,W)=\sqrt{1-S(V,W^{\perp})^{2}}\). One of the main uses of the supremum cosine angle is to determine whether the sum of two closed subspaces is again closed. The sum of two closed subspaces \(V\) and \(W\) is closed and \(V\bigcap W=\{0\}\) if and only if \(S(V,W)<1\)[12, Theorem 2.1].
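The identity \(R(V,W)=\sqrt{1-S(V,W^{\perp})^{2}}\) can be sanity-checked numerically in finite dimensions; the following sketch (an illustration with random subspaces, not part of the paper) compares the smallest principal cosine between \(V\) and \(W\) with the largest one between \(V\) and \(W^{\perp}\).

```python
import numpy as np
from scipy.linalg import null_space

def principal_cosines(V, W):
    """Singular values of Q_W^T Q_V: the principal cosines between the column spans."""
    QV, _ = np.linalg.qr(V)
    QW, _ = np.linalg.qr(W)
    return np.linalg.svd(QW.T @ QV, compute_uv=False)

rng = np.random.default_rng(1)
V = rng.standard_normal((5, 2))
W = rng.standard_normal((5, 2))
W_perp = null_space(W.T)                          # orthonormal basis of W^perp

R_VW  = principal_cosines(V, W).min()             # infimum cosine angle R(V, W)
S_VWp = principal_cosines(V, W_perp).max()        # supremum cosine angle S(V, W^perp)
print(np.isclose(R_VW, np.sqrt(1.0 - S_VWp**2)))  # True
```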
Proof of Theorem 1.4.: (i) **Global Setup:** Suppose \(\mathcal{E}_{\mathscr{D}}(\mathscr{A})\) and \(\mathcal{E}_{\mathscr{D}}(\mathscr{A}^{\prime})\) are Riesz bases for \(\mathscr{V}\) and \(\mathscr{W}\), with constants \(A,B\) and \(A^{\prime},B^{\prime}\), respectively, and are biorthogonal. By [8, Theorem 2.3] we have that \(\mathscr{A}(x)=\{f_{i}(x):i=1,2,\ldots,r\}\) and \(\mathscr{A}^{\prime}(x)=\{f^{\prime}_{i}(x):i=1,2,\ldots,r\}\) are Riesz bases for \(J_{\mathscr{A}}(x)\) and \(J_{\mathscr{A}^{\prime}}(x)\) for a.e. \(x\in X\). It suffices to show \(R(\mathscr{V},\mathscr{W})=R(\mathscr{W},\mathscr{V})>0.\) The dual Riesz basis of \(\mathcal{E}_{\mathscr{D}}(\mathscr{A})\) in \(\mathscr{V}\) is of the form \(\mathcal{E}_{\mathscr{D}}(\mathscr{A}^{\#})\), where \(\mathscr{A}^{\#}=\{f^{\#}_{i}:i=1,2,\ldots,r\}\subseteq\mathscr{V}\). Therefore the orthogonal projection \(P_{\mathscr{V}}\) onto \(\mathscr{V}\) can be expressed as
\[P_{\mathscr{V}}f=\sum_{i=1}^{r}\sum_{\phi\in\mathscr{D}}\langle f,M_{\phi}f^{ \#}_{i}\rangle M_{\phi}f_{i}=\sum_{i=1}^{r}\sum_{\phi\in\mathscr{D}}\langle f, M_{\phi}f_{i}\rangle M_{\phi}f^{\#}_{i},\ \forall\ f\in L^{2}(X;\mathcal{H}).\]
Observe that \(P_{\mathscr{V}}M_{\phi}f^{\prime}_{i}=M_{\phi}f^{\#}_{i}\), for all \(\phi\in\mathscr{D}\) and \(i=1,2,\ldots,r\). For \(f\in\mathscr{W}\backslash\{0\}\), we have \(f=\sum_{i=1}^{r}\sum_{\phi\in\mathscr{D}}c^{\phi}_{i}M_{\phi}f^{\prime}_{i}\), where
\[A^{\prime}\sum_{i=1}^{r}\sum_{\phi\in\mathscr{D}}|c^{\phi}_{i}|^{2}\leq\|f\|^{2} \leq B^{\prime}\sum_{i=1}^{r}\sum_{\phi\in\mathscr{D}}|c^{\phi}_{i}|^{2}.\]
Then \(P_{\mathscr{V}}f=\sum_{i=1}^{r}\sum_{\phi\in\mathscr{D}}c^{\phi}_{i}M_{\phi}f^{\#}_{i}\) and \(\frac{\|P_{\mathscr{V}}f\|^{2}}{\|f\|^{2}}\geq\frac{B^{-1}\sum_{i,\phi}|c^{\phi}_{i}|^{2}}{B^{\prime}\sum_{i,\phi}|c^{\phi}_{i}|^{2}}=\frac{1}{BB^{\prime}}\), since \(\mathcal{E}_{\mathscr{D}}(\mathscr{A}^{\#})\) is a Riesz basis with constants \(B^{-1},A^{-1}\). Hence \(R(\mathscr{W},\mathscr{V})\geq(BB^{\prime})^{-1/2}\); interchanging the roles of \(\mathscr{V}\) and \(\mathscr{W}\) gives \(R(\mathscr{V},\mathscr{W})\geq(BB^{\prime})^{-1/2}\) as well.
Conversely, since \(R(\mathscr{W},\mathscr{V})>0\), we have \(R(\mathscr{W},\mathscr{V})\|f\|\leq\|P_{\mathscr{W}}f\|\leq\|f\|\) for all \(f\in\mathscr{V}\). Since \(\mathcal{E}_{\mathscr{D}}(\mathscr{A})\) is a Riesz basis for \(\mathscr{V}\), the corresponding projected system in \(\mathscr{W}\), that is, \(\{P_{\mathscr{W}}M_{\phi}f_{i}:\phi\in\mathscr{D},i=1,2,\ldots,r\}\), is a Riesz basis for \(\mathscr{W}\). Since \(R(\mathscr{V},\mathscr{W})>0\), we get \(\overline{\operatorname{span}}\{P_{\mathscr{W}}M_{\phi}f_{i}:\phi\in\mathscr{D},i=1,2,\ldots,r\}=\mathscr{W}\), and by [4, Corollary 5.14] there exists a dual Riesz basis for \(\overline{\operatorname{span}}\{P_{\mathscr{W}}M_{\phi}f_{i}:\phi\in\mathscr{D},i=1,2,\ldots,r\}\) of multiplication-generated form, i.e., \(\{M_{\phi}f^{\prime}_{i}:\phi\in\mathscr{D},i=1,2,\ldots,r\}\), in \(\mathscr{W}\). Thus we have
\[\langle M_{\phi}f_{j},M_{\phi^{\prime}}f^{\prime}_{i}\rangle=\langle M_{\phi}f_{j},P_{\mathscr{W}}M_{\phi^{\prime}}f^{\prime}_{i}\rangle=\langle P_{\mathscr{W}}M_{\phi}f_{j},M_{\phi^{\prime}}f^{\prime}_{i}\rangle=\delta_{\phi,\phi^{\prime}}\delta_{i,j},\quad\phi,\phi^{\prime}\in\mathscr{D},\ i,j\in\{1,2,\ldots,r\}.\]
Thus the result follows.
(ii) **Local Setup:** For a.e. \(x\in X\), let \(\{f_{i}(x)\}_{i=1}^{r}\) and \(\{f^{\prime}_{i}(x)\}_{i=1}^{r}\) be Riesz bases for \(J_{\mathscr{A}}(x)\) and \(J_{\mathscr{A}^{\prime}}(x)\), respectively, and assume that they are biorthogonal. We first show that this is equivalent to
\[J_{\mathscr{A}}(x)\oplus J_{\mathscr{A}^{\prime}}(x)^{\perp}=\mathcal{H}\ \text{and}\ J_{\mathscr{A}^{\prime}}(x)\oplus J_{\mathscr{A}}(x)^{\perp}=\mathcal{H},\ \text{for a.e.}\ x\in X. \tag{3.1}\]
Since \(\{f_{i}(x)\}_{i=1}^{r}\) and \(\{f^{\prime}_{i}(x)\}_{i=1}^{r}\) are Riesz bases, we have \(J_{\mathscr{A}}(x)=\{u\in\mathcal{H}:u=\sum_{i=1}^{r}c_{i}f_{i}(x)\}\) and \(J_{\mathscr{A}^{\prime}}(x)=\{v\in\mathcal{H}:v=\sum_{i=1}^{r}c_{i}f^{\prime}_{i}(x)\}\). Let \(h\in J_{\mathscr{A}^{\prime}}(x)\bigcap J_{\mathscr{A}}(x)^{\perp}\); then, by biorthogonality, \(h=\sum_{i=1}^{r}\langle h,f_{i}(x)\rangle f^{\prime}_{i}(x)=0\), hence \(J_{\mathscr{A}^{\prime}}(x)\bigcap J_{\mathscr{A}}(x)^{\perp}=\{0\}\). Let \(w\in\mathcal{H}\); then \(Pw:=\sum_{i=1}^{r}\langle w,f_{i}(x)\rangle f^{\prime}_{i}(x)\in J_{\mathscr{A}^{\prime}}(x)\). By the biorthogonality of \(\{f_{i}(x)\}_{i=1}^{r}\) and \(\{f^{\prime}_{i}(x)\}_{i=1}^{r}\), we have \(\langle w-Pw,f_{i}(x)\rangle=0\) for all \(i=1,\ldots,r\), i.e., \(w-Pw\in J_{\mathscr{A}}(x)^{\perp}\). So \(w=Pw+(w-Pw)\in J_{\mathscr{A}^{\prime}}(x)+J_{\mathscr{A}}(x)^{\perp}\), which implies \(J_{\mathscr{A}^{\prime}}(x)+J_{\mathscr{A}}(x)^{\perp}=\mathcal{H}\). Combining, we have \(J_{\mathscr{A}^{\prime}}(x)\oplus J_{\mathscr{A}}(x)^{\perp}=\mathcal{H}\). In a similar way, interchanging the roles of \(J_{\mathscr{A}}(x)\) and \(J_{\mathscr{A}^{\prime}}(x)\), the other part of (3.1), i.e., \(J_{\mathscr{A}}(x)\oplus J_{\mathscr{A}^{\prime}}(x)^{\perp}=\mathcal{H}\) for a.e. \(x\in X\), can be shown.
Since \(J_{\mathscr{A}^{\prime}}(x)+J_{\mathscr{A}}(x)^{\perp}\) is closed and \(J_{\mathscr{A}^{\prime}}(x)\bigcap J_{\mathscr{A}}(x)^{\perp}=\{0\}\), by [12, Theorem 2.1] the supremum cosine angle
\[S(J_{\mathscr{A}^{\prime}}(x),J_{\mathscr{A}}(x)^{\perp}):=\sup\{|\langle v,w\rangle|:v\in J_{\mathscr{A}^{\prime}}(x),w\in J_{\mathscr{A}}(x)^{\perp},\|v\|=\|w\|=1\}<1.\]
Hence \(R(J_{\mathscr{A}^{\prime}}(x),J_{\mathscr{A}}(x))=\sqrt{1-S(J_{\mathscr{A}^{\prime}}(x),J_{\mathscr{A}}(x)^{\perp})^{2}}>0\). Interchanging the roles of \(J_{\mathscr{A}}(x)\) and \(J_{\mathscr{A}^{\prime}}(x)\) in the above argument gives \(R(J_{\mathscr{A}}(x),J_{\mathscr{A}^{\prime}}(x))>0\).
For the converse part, let \(R(J_{\mathscr{A}}(x),J_{\mathscr{A}^{\prime}}(x))>0\). Then \(S(J_{\mathscr{A}}(x),J_{\mathscr{A}^{\prime}}(x)^{\perp})<1\). Using [12, Theorem 2.1], we have that \(J_{\mathscr{A}}(x)+J_{\mathscr{A}^{\prime}}(x)^{\perp}\) is closed and \(J_{\mathscr{A}}(x)\bigcap J_{\mathscr{A}^{\prime}}(x)^{\perp}=\{0\}\). In a similar way, when \(R(J_{\mathscr{A}^{\prime}}(x),J_{\mathscr{A}}(x))>0\), we can show that \(J_{\mathscr{A}^{\prime}}(x)+J_{\mathscr{A}}(x)^{\perp}\) is closed and \(J_{\mathscr{A}^{\prime}}(x)\bigcap J_{\mathscr{A}}(x)^{\perp}=\{0\}\). Hence
\[J_{\mathscr{A}}(x)+J_{\mathscr{A}^{\prime}}(x)^{\perp}=(J_{\mathscr{A}}(x)+J_ {\mathscr{A}^{\prime}}(x)^{\perp})^{\perp\perp}=\left(J_{\mathscr{A}}(x)^{ \perp}\bigcap J_{\mathscr{A}^{\prime}}(x)\right)^{\perp}=\mathcal{H}.\]
So \(\mathcal{H}=J_{\mathscr{A}}(x)\oplus J_{\mathscr{A}^{\prime}}(x)^{\perp}\). In a similar way, \(\mathcal{H}=J_{\mathscr{A}^{\prime}}(x)\oplus J_{\mathscr{A}}(x)^{\perp}\).
Assume now that (3.1) holds. Let, for a.e. \(x\in X\), \(\{f_{i}(x)\}_{i=1}^{r}\) and \(\{h_{i}(x)\}_{i=1}^{r}\) be dual Riesz bases for \(J_{\mathscr{A}}(x)\), i.e., \(\langle f_{i}(x),h_{j}(x)\rangle=\delta_{i,j}\). Let \(S:J_{\mathscr{A}}(x)\to J_{\mathscr{A}}(x)\) be the frame operator of \(\{f_{i}(x)\}_{i=1}^{r}\), and consider
\[g_{i}(x):=S^{-1}f_{i}(x),1\leqslant i\leqslant r.\]
Now the map \(P_{J_{\mathscr{A}}(x)}:\mathcal{H}\to\mathcal{H}\) defined by \(P_{J_{\mathscr{A}}(x)}f:=\sum_{i=1}^{r}\langle f,f_{i}(x)\rangle g_{i}(x)\) is the orthogonal projection of \(\mathcal{H}\) onto \(J_{\mathscr{A}}(x)\). Consider
\[\mathscr{P}:=P_{J_{\mathscr{A}}(x)}|_{J_{\mathscr{A}^{\prime}}(x)}.\]
If \(f\in J_{\mathscr{A}^{\prime}}(x)\) and \(\mathscr{P}(f)=0\), then \(f\in J_{\mathscr{A}^{\prime}}(x)\bigcap J_{\mathscr{A}}(x)^{\perp}=\{0\}\), so \(\mathscr{P}\) is injective, and \(\mathscr{P}(J_{\mathscr{A}^{\prime}}(x))=P_{J_{\mathscr{A}}(x)}(J_{\mathscr{A}^{\prime}}(x))=P_{J_{\mathscr{A}}(x)}(J_{\mathscr{A}^{\prime}}(x)+J_{\mathscr{A}}(x)^{\perp})=P_{J_{\mathscr{A}}(x)}(\mathcal{H})=J_{\mathscr{A}}(x)\). Hence \(\mathscr{P}\) is a bounded invertible operator. Define \(f^{\prime}_{i}(x):=\mathscr{P}^{-1}(g_{i}(x))\), \(1\leqslant i\leqslant r\). Then \(\{f^{\prime}_{i}(x)\}_{i=1}^{r}\) is the required Riesz basis for \(J_{\mathscr{A}^{\prime}}(x)\), satisfying the biorthogonality condition.
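To illustrate the above construction of the biorthogonal system via \(\mathscr{P}^{-1}\) in a form one can run, here is a small finite-dimensional sketch (our own, with hypothetical variable names): for two subspaces \(V,W\subset\mathbb{R}^{n}\) in generic position, it builds the dual basis \(g_{i}=S^{-1}f_{i}\) inside \(V\), pulls it back into \(W\) through the inverse of the restricted orthogonal projection, and verifies the biorthogonality \(\langle f_{i},f^{\prime}_{j}\rangle=\delta_{i,j}\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 5, 2
FV = rng.normal(size=(n, r))               # columns f_i: a Riesz basis of V = ran(FV)
FW = FV + 0.3 * rng.normal(size=(n, r))    # a slightly tilted copy spanning W

QV = np.linalg.qr(FV)[0]
QW = np.linalg.qr(FW)[0]

# dual basis g_i = S^{-1} f_i inside V (S = frame operator of {f_i} on V)
G = FV @ np.linalg.inv(FV.T @ FV)          # satisfies <f_i, g_j> = delta_ij

# invert the restricted projection P = P_V|_W: solve P_V (QW c_j) = g_j for c_j
coeff = np.linalg.solve(QV.T @ QW, QV.T @ G)
Fprime = QW @ coeff                        # columns f'_j in W, biorthogonal to the f_i

print(np.round(FV.T @ Fprime, 10))         # ~ identity matrix
```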
## 4. Application to locally compact group
Let \(\mathscr{G}\) be a second countable locally compact group which is not necessarily abelian, and \(\Gamma\) be a closed abelian subgroup of \(\mathscr{G}\). A closed subspace \(V\) in \(L^{2}(\mathscr{G})\) is said to be \(\Gamma\)_-translation invariant (\(\Gamma\)-TI)_ if \(L_{\xi}f\in V\) for all \(f\in V\) and \(\xi\in\Gamma\), where for \(\eta\in\mathscr{G}\) the _left translation_\(L_{\eta}\) on \(L^{2}(\mathscr{G})\) is defined by
\[(L_{\eta}f)(\gamma)=f(\eta^{-1}\gamma),\quad\gamma\in\mathscr{G}\text{ and }\ f\in L^{2}(\mathscr{G}).\]
Translation invariant spaces are widely used in various domains, significant among them are harmonic analysis, signal processing, and time-frequency analysis. Researchers are often interested in characterizing the class of generators of TI spaces that allows for reconstruction of any function/signal/image _via_ a reproducing formula. For a family of functions \(\mathcal{A}\subseteq L^{2}(\mathscr{G})\), let us consider a \(\Gamma\)_-translation generated_\((\Gamma\)_-TG)_ system \(\mathcal{E}^{\Gamma}(\mathcal{A})\) and its associated \(\Gamma\)-translation invariant (\(\Gamma\)-TI) space \(\mathcal{S}^{\Gamma}(\mathcal{A})\) generated by \(\mathcal{A}\), i.e.,
\[\mathcal{E}^{\Gamma}(\mathcal{A}):=\{L_{\xi}\varphi:\varphi\in\mathcal{A},\xi\in \Gamma\}\quad\text{and}\quad\mathcal{S}^{\Gamma}(\mathcal{A}):=\overline{ \operatorname{span}}\ \mathcal{E}^{\Gamma}(\mathcal{A}),\]
respectively.
For \(x\in\mathscr{G}\), a right coset of \(\Gamma\) in \(\mathscr{G}\) with respect to \(x\) is denoted by \(\Gamma x\), and the space of orbits is \(\Gamma\backslash\mathscr{G}=\{\Gamma x:x\in\mathscr{G}\}\). For a function \(f:\mathscr{G}\to\mathbb{C}\), we define a complex valued function \(f^{\Gamma x}\) on \(\Gamma\) by \(f^{\Gamma x}(\gamma)=f(\gamma\,\Xi(\Gamma x)),\quad\gamma\in\Gamma\), where \(\Xi:\Gamma\backslash\mathscr{G}\to\mathscr{G}\) is a Borel section, and its Fourier transform is \(\widehat{f^{\Gamma x}}(\alpha)=\int_{\Gamma}f^{\Gamma x}(\gamma)\alpha(\gamma^{-1})d\mu_{\Gamma}(\gamma)\), for \(\alpha\in\widehat{\Gamma}\), which can be extended to \(L^{2}(\Gamma)\).
\[(\mathcal{Z}f)(\alpha)(\Gamma x)=\widehat{f^{\Gamma x}}(\alpha),\quad a.e. \quad\alpha\in\widehat{\Gamma}\text{ and }\Gamma x\in\Gamma\backslash\mathscr{G}, \tag{4.1}\]
which is a unitary linear transformation from \(L^{2}(\mathscr{G})\) to \(L^{2}(\widehat{\Gamma};L^{2}(\Gamma\backslash\mathscr{G}))\)[8]. Note that the Zak transform \(\mathcal{Z}\) is closely associated with the fiberization map \(\mathscr{F}\) when \(\mathscr{G}\) becomes abelian. For a second countable LCA group \(\mathcal{G}\) and its closed subgroup \(\Lambda\), the _fiberization_\(\mathscr{F}\) is a unitary map from \(L^{2}(\mathcal{G})\) to \(L^{2}(\widehat{\mathcal{G}}/\Lambda^{\perp};L^{2}(\Lambda^{\perp}))\) given by \((\mathscr{F}f)(\beta\Lambda^{\perp})(x)=\widehat{f}(x\,\zeta(\beta\Lambda^{\perp}))\), \(x\in\Lambda^{\perp},\beta\in\widehat{\mathcal{G}}\), for \(f\in L^{2}(\mathcal{G})\), where \(\Lambda^{\perp}:=\{\beta\in\widehat{\mathcal{G}}:\beta(\lambda)=1,\ \forall\ \lambda\in\Lambda\}\), \(\Lambda^{\perp}\backslash\widehat{\mathcal{G}}=\widehat{\mathcal{G}}/\Lambda^{\perp}\) and \(\zeta:\widehat{\mathcal{G}}/\Lambda^{\perp}\to\widehat{\mathcal{G}}\) is a Borel section which maps compact sets to pre-compact sets. The Zak transform and fiberization map on the Euclidean space \(\mathbb{R}^{n}\) by the action of integers \(\mathbb{Z}^{n}\) are
\[(\mathcal{Z}f)(\xi,\eta)=\sum_{k\in\mathbb{Z}^{n}}f(\xi+k)e^{-2\pi ik\eta}, \text{ and }(\mathscr{F}f)(\xi)(k)=\widehat{f}(\xi+k),\]
for \(k\in\mathbb{Z}^{n}\), \(\xi,\eta\in\mathbb{T}^{n}\) and \(f\in L^{1}(\mathbb{R}^{n})\bigcap L^{2}(\mathbb{R}^{n})\).
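As a quick numerical illustration of the first formula (a sketch of our own, not part of the text), the truncated sum below approximates the one-dimensional Zak transform of a Gaussian and checks its periodicity in \(\eta\) and quasi-periodicity in \(\xi\); the truncation level and the test function are arbitrary choices.

```python
import numpy as np

def zak(f, xi, eta, K=50):
    """Truncated Zak transform (Zf)(xi, eta) = sum_k f(xi + k) exp(-2*pi*i*k*eta)."""
    k = np.arange(-K, K + 1)
    return np.sum(f(xi + k) * np.exp(-2j * np.pi * k * eta))

f = lambda t: np.exp(-np.pi * t**2)   # a Gaussian, so the truncated sum converges quickly
xi, eta = 0.3, 0.7
z = zak(f, xi, eta)

print(np.isclose(zak(f, xi, eta + 1.0), z))                          # periodic in eta
print(np.isclose(zak(f, xi + 1.0, eta), np.exp(2j*np.pi*eta) * z))   # quasi-periodic in xi
```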
Observe that the Zak transform \(\mathcal{Z}\) satisfies the intertwining property with the left translation and multiplication operators, i.e., for \(f\in L^{2}(\mathscr{G})\), \((\mathcal{Z}L_{\gamma}f)(\alpha)=(M_{\phi_{\gamma}}\mathcal{Z}f)(\alpha)\), for a.e. \(\alpha\in\widehat{\Gamma}\) and \(\gamma\in\Gamma\), where \(M_{\phi_{\gamma}}\) is the multiplication operator on \(L^{2}(\widehat{\Gamma};L^{2}(\Gamma\backslash\mathscr{G}))\), \(\phi_{\gamma}(\alpha)=\overline{\alpha(\gamma)}\) and \(\phi_{\gamma}\in L^{\infty}(\widehat{\Gamma})\) for each \(\gamma\in\Gamma\). Therefore, our goal can be established by converting the problem of \(\Gamma\)-TI space \(\mathcal{S}^{\Gamma}(\mathscr{A})\) into the MI spaces on \(L^{2}(X;\mathcal{H})\) with the help of Zak transform, where \(X=\widehat{\Gamma}\) and \(\mathcal{H}=L^{2}(\Gamma\backslash\mathscr{G})\).
The following result is a generalization of [11, Theorem ] to the locally compact group setting.
**Theorem 4.1**.: _Let \(\mathscr{G}\) be a locally compact group having a discrete abelian subgroup \(\Gamma\). For the finite collections of functions \(\mathcal{A}=\{f_{i}\}_{i=1}^{m}\) and \(\mathcal{B}=\{g_{i}\}_{i=1}^{n}\) in \(L^{2}(\mathscr{G})\) and for a.e. \(\alpha\in\widehat{\Gamma}\), consider the range functions \(J_{\mathcal{A}}(\alpha)=\operatorname{span}\{\mathcal{Z}f_{i}(\alpha):i=1,2,\ldots,m\}\) and \(J_{\mathcal{B}}(\alpha)=\operatorname{span}\{\mathcal{Z}g_{i}(\alpha):i=1,2,\ldots,n\}\) associated with the \(\Gamma\)-TI spaces \(\mathcal{S}^{\Gamma}(\mathcal{A})\) and \(\mathcal{S}^{\Gamma}(\mathcal{B})\), respectively. Then the following are equivalent:_
1. _There exists_ \(\mathcal{A}^{\prime}=\{f^{\prime}_{i}\}_{i=1}^{r}\) _and_ \(\mathcal{B}^{\prime}=\{g^{\prime}_{i}\}_{i=1}^{r}\) _in_ \(L^{2}(\mathscr{G})\) _such that_ \(\mathcal{E}^{\Gamma}(\mathcal{A}^{\prime})\) _and_ \(\mathcal{E}^{\Gamma}(\mathcal{B}^{\prime})\) _are continuous frames for_ \(\mathcal{S}^{\Gamma}(\mathcal{A})\) _and_ \(\mathcal{S}^{\Gamma}(\mathcal{B})\)_, respectively, satisfying the following reproducing formulas for_ \(g\in\mathcal{S}^{\Gamma}(\mathcal{A})\) _and_ \(h\in\mathcal{S}^{\Gamma}(\mathcal{B})\)_:_ \[g=\sum_{i=1}^{r}\int_{\Gamma}\langle g,L_{\gamma}g^{\prime}_{i}\rangle L_{\gamma} f^{\prime}_{i}d_{\mu_{\Gamma}}(\gamma)\,\text{ and }h=\sum_{i=1}^{r}\int_{\Gamma}\langle h,L_{\gamma}f^{\prime}_{i}\rangle L_{ \gamma}g^{\prime}_{i}\ d_{\mu_{\Gamma}}(\gamma).\]
2. _The infimum cosine angles of_ \(\mathcal{S}^{\Gamma}(\mathcal{A})\) _and_ \(\mathcal{S}^{\Gamma}(\mathcal{B})\) _are greater than zero, i.e.,_ \[R(\mathcal{S}^{\Gamma}(\mathcal{A}),\mathcal{S}^{\Gamma}(\mathcal{B}))>0\text{ and }R(\mathcal{S}^{\Gamma}(\mathcal{B}),\mathcal{S}^{\Gamma}(\mathcal{A}))>0.\]
3. _There exists a collection of functions_ \(\{f^{\prime}_{i}\}_{i=1}^{r}\) _and_ \(\{g^{\prime}_{i}\}_{i=1}^{r}\) _in_ \(L^{2}(\mathscr{G})\) _such that for a.e._ \(\alpha\in\widehat{\Gamma}\)_, the systems_ \(\{\mathcal{Z}f^{\prime}_{i}(\alpha)\}_{i=1}^{r}\) _and_ \(\{\mathcal{Z}g^{\prime}_{i}(\alpha)\}_{i=1}^{r}\) _are finite frames for_ \(J_{\mathcal{A}}(\alpha)\) _and_ \(J_{\mathcal{B}}(\alpha)\)_, respectively, satisfying the following reproducing formulas for_ \(u\in J_{\mathcal{A}}(\alpha)\) _and_ \(v\in J_{\mathcal{B}}(\alpha)\)_:_ \[u=\sum_{i=1}^{r}\langle u,\mathcal{Z}g^{\prime}_{i}(\alpha)\rangle\mathcal{Z}f^{\prime}_{i}(\alpha),\text{ and }v=\sum_{i=1}^{r}\langle v,\mathcal{Z}f^{\prime}_{i}(\alpha)\rangle\mathcal{Z}g^{\prime}_{i}(\alpha),\text{ a.e. }\alpha\in\widehat{\Gamma}.\]
4. _For a.e._ \(\alpha\in\widehat{\Gamma}\)_, the infimum cosine angles of_ \(J_{\mathcal{A}}(\alpha)\) _and_ \(J_{\mathcal{B}}(\alpha)\) _are greater than zero, i.e.,_ \[R(J_{\mathcal{A}}(\alpha),J_{\mathcal{B}}(\alpha))>0\text{ and }R(J_{\mathcal{B}}(\alpha),J_{\mathcal{A}}(\alpha))>0.\]
Proof.: Since \(\mathcal{Z}\) is a unitary operator, \(R(\mathcal{S}^{\Gamma}(\mathcal{A}),\mathcal{S}^{\Gamma}(\mathcal{B}))=R(\mathcal{Z}\mathcal{S}^{\Gamma}(\mathcal{A}),\mathcal{Z}\mathcal{S}^{\Gamma}(\mathcal{B}))\) using Definition 1.2, and similarly with the roles of \(\mathcal{A}\) and \(\mathcal{B}\) interchanged. Hence we have the desired result using Theorem 1.3.
Next we state the following result, which generalizes [3, Proposition 2.13] to locally compact groups in the case of Riesz bases.
**Theorem 4.2**.: _Let \(\mathscr{G}\) be a locally compact group having a discrete abelian subgroup \(\Gamma\) and \(\mathscr{V}\), \(\mathscr{W}\) be \(\Gamma\)-TI subspaces of \(L^{2}(\mathscr{G})\). For the finite collection of functions \(\mathscr{A}=\{f_{i}\}_{i=1}^{r}\), assume \(\mathcal{E}^{\Gamma}(\mathcal{A})\) is a Riesz basis for \(\mathscr{V}\). Then the following holds:_
* _Global setup:_ _If there exists_ \(\mathcal{A}^{\prime}=\{f^{\prime}_{i}\}_{i=1}^{r}\) _in_ \(L^{2}(\mathscr{G})\) _such that_ \(\mathcal{E}^{\Gamma}(\mathcal{A}^{\prime})\) _is a Riesz basis for_ \(\mathscr{W}\) _satisfying the biorthogonality condition_ \(\langle L_{\gamma}f_{i},L_{\gamma^{\prime}}f^{\prime}_{i^{\prime}}\rangle=\delta_{i,i^{\prime}}\delta_{\gamma,\gamma^{\prime}},\quad i,i^{\prime}=1,2,\cdots,r;\ \gamma,\gamma^{\prime}\in\Gamma,\) _then the infimum cosine angles of_ \(\mathscr{V}\) _and_ \(\mathscr{W}\) _are greater than zero, i.e.,_ (4.2) \[R(\mathscr{V},\mathscr{W})>0\text{ and }R(\mathscr{W},\mathscr{V})>0.\] _Conversely if (_4.2_) holds true, then there exists_ \(\mathcal{A}^{\prime}=\{f^{\prime}_{i}\}_{i=1}^{r}\) _in_ \(L^{2}(\mathscr{G})\) _such that_ \(\mathcal{E}^{\Gamma}(\mathcal{A}^{\prime})\) _is a Riesz basis for_ \(\mathscr{W}\) _satisfying the biorthogonality condition. Moreover, the following reproducing formulas hold:_ \[f=\sum_{\gamma\in\Gamma}\sum_{i=1}^{r}\langle f,L_{\gamma}f^{\prime}_{i}\rangle L_{\gamma}f_{i},\ \forall f\in\mathscr{V},\text{ and }g=\sum_{\gamma\in\Gamma}\sum_{i=1}^{r}\langle g,L_{\gamma}f_{i}\rangle L_{\gamma}f^{\prime}_{i},\ \forall g\in\mathscr{W}.\]
* _Local setup:_ _If there exists_ \(\mathcal{A}^{\prime}=\{f^{\prime}_{i}\}_{i=1}^{r}\) _in_ \(L^{2}(\mathscr{G})\) _such that for a.e._ \(\alpha\in\widehat{\Gamma}\)_,_ \(\{\mathcal{Z}f^{\prime}_{i}(\alpha)\}_{i=1}^{r}\) _is a Riesz sequence in_ \(L^{2}(\Gamma\backslash\mathscr{G})\) _satisfying the following biorthogonality condition_ (4.3) \[\langle\mathcal{Z}f_{i}(\alpha),\mathcal{Z}f^{\prime}_{i^{\prime}}(\alpha)\rangle=\delta_{i,i^{\prime}},\quad i,i^{\prime}=1,2,\cdots,r,\ a.e.\ \alpha\in\widehat{\Gamma},\] _then the infimum cosine angles of_ \(J_{\mathcal{A}}(\alpha)=\operatorname{span}\{\mathcal{Z}f_{i}(\alpha)\}_{i=1}^{r}\) _and_ \(J_{\mathcal{A}^{\prime}}(\alpha)=\operatorname{span}\{\mathcal{Z}f^{\prime}_{i}(\alpha)\}_{i=1}^{r}\) _are greater than zero, i.e.,_ (4.4) \[R(J_{\mathcal{A}}(\alpha),J_{\mathcal{A}^{\prime}}(\alpha))>0\text{ and }R(J_{\mathcal{A}^{\prime}}(\alpha),J_{\mathcal{A}}(\alpha))>0,\ a.e.\ \alpha\in\widehat{\Gamma}.\] _Conversely if (_4.4_) holds, there exists_ \(\mathcal{A}^{\prime}=\{f^{\prime}_{i}\}_{i=1}^{r}\) _in_ \(L^{2}(\mathscr{G})\) _such that for a.e._ \(\alpha\in\widehat{\Gamma}\)_,_ \(\{\mathcal{Z}f^{\prime}_{i}(\alpha)\}_{i=1}^{r}\) _is a Riesz sequence in_ \(L^{2}(\Gamma\backslash\mathscr{G})\) _satisfying the biorthogonality condition (_4.3_). Moreover, the following reproducing formulas hold for_ \(u\in J_{\mathcal{A}}(\alpha)\)_, and_ \(v\in J_{\mathcal{A}^{\prime}}(\alpha)\)_:_ \[u=\sum_{i=1}^{r}\langle u,\mathcal{Z}f^{\prime}_{i}(\alpha)\rangle\mathcal{Z}f_{i}(\alpha),\text{ and }v=\sum_{i=1}^{r}\langle v,\mathcal{Z}f_{i}(\alpha)\rangle\mathcal{Z}f^{\prime}_{i}(\alpha),\text{ for a.e. }\alpha\in\widehat{\Gamma}.\]
Similar results can be deduced for a locally compact abelian group \(\mathcal{G}\) using the fiberization map \(\mathscr{F}\).
**Declarations**
**Competing Interests** The authors have not disclosed any competing interests.
**Data Availability** No data sets were generated during the study.
|
2302.09135 | Equilibrium non-selfgravitating tori around black holes in parameterised
spherically symmetric spacetimes | Non-selfgravitating equilibrium tori orbiting around black holes have a long
history and have been employed in numerous simulations of accretion flows onto
black holes and other compact objects. We have revisited the problem of
constructing such equilibria starting from spherically symmetric black-hole
spacetimes expressed in terms of a fully generic and rapidly converging
parameterisation: the RZ metric. Within this framework, we have extended the
definitions of all of the quantities characterising these equilibria, starting
from the concept of the von Zeipel cylinders and up to the possible ranges of
the specific angular momenta that are employed to construct families of tori.
Within the allowed space of parameters we have then encountered both standard
``single-torus'' solutions, but also non-standard ``double-tori'' solutions.
While the properties of the first ones in terms of the presence of a single
cusp, of a local pressure maximum and of a varying outer radius, are very
similar to those encountered in general relativity, the properties of
double-tori solutions are far richer and naturally allow for configurations
having the same constant specific angular momentum and hence are potentially
easier to produce in nature. The existence of these objects is at present very
hypothetical, but these equilibrium tori were to be observed, they would
provide very valuable information on the properties of the spacetime and on its
deviation from general relativity. | Marie Cassing, Luciano Rezzolla | 2023-02-17T20:57:38Z | http://arxiv.org/abs/2302.09135v2 | Equilibrium non-selfgravitating tori around black holes in parameterised spherically symmetric spacetimes
###### Abstract
Non-selfgravitating equilibrium tori orbiting around black holes have a long history and have been employed in numerous simulations of accretion flows onto black holes and other compact objects. We have revisited the problem of constructing such equilibria starting from spherically symmetric black-hole spacetimes expressed in terms of a fully generic and rapidly converging parameterisation: the RZ metric. Within this framework, we have extended the definitions of all of the quantities characterising these equilibria, starting from the concept of the von Zeipel cylinders and up to the possible ranges of the specific angular momenta that are employed to construct families of tori. Within the allowed space of parameters we have then encountered both standard "single-torus" solutions, but also non-standard "double-tori" solutions. While the properties of the first ones in terms of the presence of a single cusp, of a local pressure maximum and of a varying outer radius, are very similar to those encountered in general relativity, the properties of double-tori solutions are far richer and naturally allow for configurations having the same constant specific angular momentum and hence are potentially easier to produce in nature. The existence of these objects is at present very hypothetical, but if these equilibrium tori were to be observed, they would provide very valuable information on the properties of the spacetime and on its deviation from general relativity.
## 1 Introduction
The theory of non-geodesic, perfect-fluid, non-selfgravitating, geometrically thick and stationary tori orbiting a black hole has a long history dating back to fundamental work in the 1970s (Fishbone & Moncrief, 1976; Abramowicz et al., 1978; Kozlowski et al., 1978). As for any stationary fluid with compact support in a gravitational field, its equilibrium is mainly determined by the balance of gravitational forces, pressure gradients and centrifugal forces. A spherical topology is natural in those configurations in which the contributions coming from the centrifugal force are much smaller than those due to pressure gradients and gravitational forces (e.g., in a star). On the other hand, a toroidal topology is inevitable when the contributions to the force balance coming from the pressure gradients are smaller than those due to the centrifugal and gravitational forces. These are indeed the conditions of the fluid flow which we will consider hereafter.
There are several reasons behind the important role that these configurations have played over the years. Firstly, these configurations are sufficiently simple that they can be constructed almost entirely analytically, thus expanding enormously our ability to investigate radically different configurations (see, e.g., Witzany & Jefremov, 2018). Second, being in equilibrium, these tori have been employed for decades as initial conditions in advanced numerical simulations of accretion flows onto black holes (see, e.g., Porth et al., 2019), or neutron stars (see, e.g., Cikintoglut et al., 2022), as they are subject to instabilities of various types when endowed with magnetic fields (Abramowicz & Fragile, 2013). Third, when taken away from their equilibrium conditions, these tori exhibit an interesting dynamics with quasi-periodic oscillations that can be associated with those that are observed, for instance, in high-mass X-ray binaries (see, e.g., Rezzolla et al., 2003a). Finally, by describing the motion of the fluid very close to the event horizon of the black hole, these tori have the potential of providing important observational information on the properties of the spacetime in regions of strong curvature. It is this last aspect, in particular how the spacetime properties can be imprinted on the characteristics of the equilibrium tori, that we will explore in detail in this paper.
Equilibrium tori around black holes have so far been studied in spacetimes that are either spherically symmetric or axisymmetric in general relativity. Under these conditions, they have been shown to appear under very generic conditions as isolated configurations or - after a suitable work of fine tuning - in complex nested multi-tori configurations (Pugliese & Stuchlik, 2017), where each torus has a _distinct_ angular momentum (Pugliese & Stuchlik, 2020). The investigations, however, have not been limited to Schwarzschild and Kerr black-hole spacetimes (see Abramowicz & Fragile, 2013, for an extensive review) and equilibrium tori have been studied also in other, more exotic spacetimes. For instance, the properties - either equilibrium or dynamical - of geometrically thick tori have been explored in Schwarzschild-de-Sitter spacetimes (Rezzolla et al., 2003b; Stuchlik et al., 2009), in (Newman-Unti-Tamburino) NUT spacetimes (Jefremov & Perlick, 2016), in spherically symmetric spacetimes in \(f(R)\)-gravity (Cruz-Osorio et al., 2021), around Kerr black holes with a scalar hair (Gimeno-Soler et al., 2019; Gimeno-Soler et al., 2021; Teodoro et al., 2021), or more recently in the so-called \(q\)-metric (Faraji & Trova, 2021; Memmen & Perlick, 2021)1. Finally, equilibrium tori also have been investigated around ultra-compact
objects different from black holes, such as boson stars (Meliani et al., 2015; Olivares et al., 2020; Teodoro et al., 2021), or naked singularities in Kerr-de Sitter spacetimes Stuchlik et al. (2015).
All of these works testify both the interest in studying these fluid configurations in general relativity and in other, alternative theories of gravity. We here follow the same interest but focus our attention not on a precise alternative theory of gravity or on an exotic spacetime. Rather, our goal here is to investigate the properties of equilibrium tori in generic and parameterised spherically symmetric spacetimes describing either a black hole or another compact object. While there are several options when considering parameterised spherically symmetric spacetimes, our choice here falls on the Rezzolla-Zhidenko (RZ) metric, which uses compactified (conformal) coordinates and a Pade-expansion in terms of continued fractions to achieve high accuracy already with a small number of parameters and a straightforward treatment of the expansion. In this way, we are able to offer a very general description of equilibrium tori around black holes and to highlight the existence of a much richer family of tori solutions in black-hole spacetimes that are not Schwarzschild. In particular, we discuss the natural occurrence - in some regions of the possible space of parameters - of double-tori solutions sharing the same specific angular momentum and thus not requiring any fine tuning. Should the existence of these tori become evident through astronomical observations, it would provide very precise information on the properties of the spacetime and on its deviation from general relativity.
Our work is organised as follows. In Sec. 2 we briefly recall the theoretical background and mathematical details necessary to describe equilibrium tori and the von Zeipel cylinders. We describe the RZ-metric and its parametrization in Sec. 3, leaving the exploration of single-torus or double-tori solutions in Secs. 4 and 5, respectively. Finally, we present our summary and the conclusions in Sec. 6.
## 2 The theory of equilibrium tori
In this Section we briefly recall the essential aspects of a non-geodesic, perfect-fluid, non-selfgravitating, geometrically thick and stationary torus orbiting a black hole. Since the gravitational mass of the torus is assumed to be very small when compared with that of the central black hole, we can exploit the test-fluid approximation and therefore ignore the solution of the Einstein equations, relying simply on the background metric of the given black hole, which we will employ in the calculation of the general-relativistic hydrodynamic equations (see also Font and Daigne, 2002; Abramowicz and Fragile, 2013; Rezzolla and Zanotti, 2013, for more detailed discussions).
### von Zeipel cylinders
A fundamental starting point to describe such equilibrium tori is the Newtonian theory of the von Zeipel cylinders, which will also be useful to introduce a number of quantities that will be employed in the remainder of this paper. Besides pressure gradients, the equilibrium in these tori is made possible by their rotation, which we express in terms of the angular velocity \(\Omega\)
\[\Omega:=\frac{u^{\phi}}{u^{t}}=\frac{d\phi}{dt}\,. \tag{1}\]
and the corresponding specific angular momentum
\[\ell:=-\frac{u_{\phi}}{u_{t}}\,. \tag{2}\]
Using now a convenient identity for the \(t\)-component of the four-velocity, \((u^{t})^{-2}=-(g_{tt}+2\Omega g_{t\phi}+\Omega^{2}g_{\phi\phi})\), it is straightforward to obtain that
\[u_{t}^{2}=\frac{g_{t\phi}^{2}-g_{tt}g_{\phi\phi}}{g_{\phi\phi}+2\ell g_{t \phi}+\ell^{2}g_{tt}}\,. \tag{3}\]
After using the symmetries of the problem (i.e., stationarity and axisymmetry), the law of momentum conservation (Euler equation) can be expressed in a very compact form as
\[\partial_{\mu}\ln|u_{t}|-\left(\frac{\Omega}{1-\Omega\ell}\right)\partial_{ \mu}\ell=-\frac{1}{\rho h}\partial_{\mu}p\,. \tag{4}\]
where \(p,\rho\), and \(h\) are the pressure, rest-mass density, and the specific enthalpy, respectively. For a barotropic fluid, i.e., a fluid for which \(p=p(\rho)\), the derivative of the enthalpy is proportional to the derivative of the pressure and, as a consequence, the partial derivatives of pressure and enthalpy commute and cancel each other (Rezzolla and Zanotti, 2013). Under these conditions, the following identities can be proven, which constitute the thesis of the relativistic von Zeipel theorem
\[\Omega=\Omega(\ell)\,, \tag{5}\]
and
\[\mathcal{R}^{2}:=\frac{\ell}{\Omega}=-\frac{\ell(g_{\phi\phi}+g_{t\phi}\ell)} {(g_{t\phi}+g_{tt}\ell)}=\frac{r^{3}\sin^{2}(\theta)}{r-2M}\,. \tag{6}\]
where \(\mathcal{R}\) is the so-called von Zeipel (cylindrical) radius and the last expression in Eq (6) refers to a Schwarzschild spacetime.
Stated differently, in a stationary and axisymmetric circular flow of a barotropic fluid around a compact object, the surfaces of constant angular velocity \(\Omega\) coincide with the surfaces of constant specific angular momentum \(\ell\). Such surfaces are also known as von Zeipel cylinders. This theorem was originally formulated in Newtonian gravity and stated that, within a rotating selfgravitating object, isodensity (or isopycnic, i.e., at constant rest-mass density) and isobaric (i.e., at constant pressure) surfaces coincide if and only if the angular velocity is a function of the distance from the rotation axis only (see, e.g., von Zeipel, 1924; Tassoul, 2007). As a result, the Newtonian von Zeipel cylinders are indeed _cylinders_. In a black-hole spacetime, however, the general-relativistic version of the theorem, which is due to Abramowicz (1971), reveals that this is no longer true and that the von Zeipel cylinders are cylindrical surfaces only asymptotically (Rezzolla and Zanotti, 2013). In Sec. 3.1 we will discuss how this theorem varies when considering a generic and parameterised black-hole spacetime.
Another important advantage of barotropic fluids is that the differential \(dp/\rho h\) is an exact differential and the partial derivatives commute, \(\partial_{r}\partial_{\theta}p=\partial_{\theta}\partial_{r}p\). As a result, the integration of the Euler equation (4) does not depend on the integration path and can be expressed as
\[W_{\rm eff}-W_{\rm in}=\ln|u_{t}|-\ln|(u_{t})_{\rm in}|-\int_{\ell_{\rm in}}^ {\ell}\left(\frac{\Omega}{1-\Omega\ell^{\prime}}\right)d\ell^{\prime}\,, \tag{7}\]
where the index in refers to the "inner-edge" of the disc and \(W_{\rm eff}:=\ln|u_{t}|\).
### General properties of geometrically thick tori
Already in Newtonian gravity, the equilibrium of a stationary rotating fluid with compact support is determined by the balance of three factors: gravitational forces, pressure gradients and centrifugal forces. The relative strength of these forces will determine the geometric properties of the fluid and, in particular, a torus topology
will appear if the contributions from pressure gradients are smaller than the contributions from centrifugal and gravitational forces. The calculation of equilibrium tori is particularly simple when the fluid has a constant specific angular momentum
\[\ell=\mathrm{const.}\,, \tag{8}\]
where a positive (negative) sign of the constant in (8) refers to a fluid that is corotating (counter-rotating) with respect to the compact object/black hole. The advantage of tori with a constant specific angular momentum is that, in this case, \(\Omega=\Omega(g_{\mu\nu})\) [see Eqs. (5)-(6)], that is, the fluid angular velocity becomes an expression of the metric functions of the spacetime only, and the equipotential surfaces can be computed from the metric coefficients and the constant specific angular momentum.
An equipotential surface that is closed at infinity, i.e., \(W_{\mathrm{eff}}=0\), contains local extrema in the radial coordinate: \(r_{\mathrm{cusp}}\) and \(r_{\mathrm{max}}\), which mark, respectively, the appearance of a cusp in the effective potential and the location of the pressure (rest-mass density) maximum. At both locations, \(\partial W_{\mathrm{eff}}/\partial r=0\), such that the corresponding specific angular momentum at these locations is that of a Keplerian geodesic orbit. If the torus matter fills the outermost closed equipotential surface, then \(r_{\mathrm{cusp}}=r_{\mathrm{in}}\) represents the location on the equatorial plane where such matter can accrete onto the black hole and where the maximum of the effective potential on the equatorial plane is reached. Note that this cusp is similar to the cusp appearing in a "Roche lobe", with the important difference that in the latter case it corresponds to a single point, while here to a whole circle in view of the axisymmetric nature of these tori. On the other hand, the minimum of the effective potential \(W_{\mathrm{eff}}\) is reached at \(r_{\mathrm{max}}\) and the fluid around this location is in a stable equilibrium, such that, if perturbed, it will oscillate around the corresponding radial or polar epicyclic frequency (Rezzolla et al., 2003).
In general, the choice of the (constant) specific angular momentum will then determine the location of the inner edge of the torus, \(r_{\mathrm{in}}\), which can be set to be between \(r_{\mathrm{cusp}}\) and \(r_{\mathrm{max}}\); clearly \(W_{\mathrm{eff,in}}\leq W_{\mathrm{eff,cusp}}\). The torus will also be limited in the radial direction by \(r_{\mathrm{out}}\), which is also the radial location where \(W_{\mathrm{eff,out}}=W_{\mathrm{eff,in}}\). Note that in the region \(r_{\mathrm{in}}<r<r_{\mathrm{max}}\), the specific angular momentum is larger than the local Keplerian specific angular momentum, \(\ell>\ell_{\rm Kep}(r)\), and the orbital motion of the fluid is therefore super-Keplerian in the inner part of the torus. On the other hand, the opposite is true in the region \(r_{\mathrm{max}}<r<r_{\mathrm{out}}\), i.e., \(\ell<\ell_{\rm Kep}(r)\), such that the orbital motion of the fluid is sub-Keplerian in the outer parts of the torus. Clearly, in these regions pressure gradients are needed to balance the mismatch between the centrifugal and gravitational accelerations.
Figure 1 collects in a single plot the whole set of possible tori solutions in a Schwarzschild spacetime having constant specific angular momentum. The latter is taken to range between the specific angular momenta of the marginally stable orbit \(\ell_{\mathrm{ms}}=3.6738\) and of the marginally bound orbit \(\ell_{\mathrm{mb}}=4\) (Font and Daigne, 2002; Rezzolla and Zanotti, 2013). More specifically, shown via a colourmap is the value of the effective potential as a function of the conformal coordinate \(x\) for different values of the specific angular momentum. Marked with different lines are the most important properties of the torus, namely: the radial position of the cusp \(r_{\mathrm{cusp}}\) (orange line), the radial position of the pressure maximum \(r_{\mathrm{max}}\) (purple line), and the radial position of the outer edge of the torus \(r_{\mathrm{out}}\) (blue line). Using Fig. 1, which to the best of our knowledge has not been presented before, it is straightforward to appreciate a number of salient aspects of tori in a Schwarzschild spacetime. First, the cusp moves towards the event horizon as the specific angular momentum is increased. Second, in contrast to the cusp, the position of the pressure maximum moves outwards with increasing \(\ell\). Finally, the outer radius of the torus also moves out to larger and larger radii, reaching spatial infinity (\(x=1\)) for the maximum value of the specific angular momentum. Two remarks are worth making. First, when the specific angular momentum is larger than the one at the marginally bound orbit, i.e., for \(\ell>\ell_{\mathrm{mb}}\), the tori are effectively accreting in the sense that the outermost equipotential surfaces will connect with the event horizon without crossing the cusp [see, e.g., Fig. 11.8 of Rezzolla and Zanotti (2013)]. Under these conditions, in fact, the maximum of the effective potential \(W_{\mathrm{eff}}\) is larger than the corresponding value at spatial infinity; because of this, we indicate this region with the label "no stable tori". Second, if the specific angular momentum is below the value at the marginally stable orbit, i.e., \(\ell<\ell_{\mathrm{ms}}\), no equilibrium tori can be constructed and indeed, when \(\ell\to\ell_{\mathrm{ms}}^{+}\), the three fundamental scales of the torus, namely, \(r_{\mathrm{cusp}}\), \(r_{\mathrm{max}}\) and \(r_{\mathrm{out}}\), coincide; we indicate this region with the label "no tori".
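The construction behind Fig. 1 can be reproduced with a few lines of code. The sketch below (our own, with hypothetical variable names) works on the equatorial plane of a Schwarzschild spacetime in units \(M=1\), using the standard Schwarzschild Keplerian specific angular momentum: for a given constant \(\ell\) between \(\ell_{\rm ms}\) and \(\ell_{\rm mb}\) it locates \(r_{\rm cusp}\) and \(r_{\rm max}\) as the two roots of \(\ell_{\rm Kep}(r)=\ell\), and then \(r_{\rm out}\) of a torus filling its Roche lobe from \(W_{\rm eff}(r_{\rm out})=W_{\rm eff}(r_{\rm cusp})\).

```python
import numpy as np
from scipy.optimize import brentq

M, ell = 1.0, 3.9    # constant specific angular momentum, ell_ms < ell < ell_mb

ell_Kep = lambda r: np.sqrt(M) * r**1.5 / (r - 2.0*M)             # Keplerian specific ang. momentum
W_eff   = lambda r: 0.5*np.log((1.0 - 2.0*M/r) * r**2 /
                               (r**2 - ell**2 * (1.0 - 2.0*M/r)))  # ln|u_t| on the equatorial plane

r_cusp = brentq(lambda r: ell_Kep(r) - ell, 4.001*M, 6.0*M)        # inner root: the cusp
r_max  = brentq(lambda r: ell_Kep(r) - ell, 6.0*M, 100.0*M)        # outer root: pressure maximum
r_out  = brentq(lambda r: W_eff(r) - W_eff(r_cusp), r_max, 1.0e4*M)  # Roche-lobe outer edge

print(r_cusp, r_max, r_out)   # the cusp sits between r_mb = 4M and r_ms = 6M
```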
## 3 Equilibrium tori in the RZ-metric
We recall that the Rezzolla-Zhidenko (RZ) metric (Rezzolla and Zhidenko, 2014) can describe accurately arbitrary spherically symmetric and asymptotically flat spacetimes representing either black holes - obtained in different theories of gravity - or other compact objects (e.g., naked singularities or boson stars) (Kocherlakota and Rezzolla, 2020). It does so by exploiting the rapidly converging properties of a Pade expansion expressed in terms of a continued fraction, so that a small (i.e., \(\lesssim 4-5\)) number of parameters is sufficient to reproduce most of the spacetime metrics with percent precision (Konoplya and Zhidenko, 2020; Kocherlakota and Rezzolla, 2020). Such parameters can be chosen such that they incorporate deviations from general relativity and can be constrained by experimental observations (Rezzolla and Zhidenko, 2014). The RZ parametrization solves a problem of other parametrizations, which normally have difficulties in isolating
Figure 1: Colourmap of the effective potential \(W_{\mathrm{eff}}\) shown as a function of the conformal radial coordinate \(x\) and of the specific angular momentum \(\ell\) for the Schwarzschild spacetime. The figure reports all of the possible tori solutions that can be built with a constant specific angular momentum. Shown with solid lines are, respectively: the location of the cusp of the torus \(r_{\mathrm{cusp}}\) (orange), of the maximum pressure \(r_{\mathrm{max}}\) (purple), and of the outer edge of the torus \(r_{\mathrm{out}}\) (blue). Transparent regions refer to situations in which the specific angular momentum is either larger than that of the marginally bound orbit, \(\ell>\ell_{\mathrm{mb}}\) (top part), or smaller than that of the marginally stable orbit, \(\ell<\ell_{\mathrm{ms}}\) (bottom part); no tori can be built in these regions.
the dominant terms within the corresponding parametrization, so that a very large number of parameters is required, all with similar strength (see, e.g., Cardoso et al., 2014). Since its development the RZ-metric has found a number of applications (see, e.g., Volkel et al., 2020; Suvorov and Volkel, 2021; Bauer et al., 2021; Kocherlakota and Rezzolla, 2022) and some extensions of the metric have been made (see, e.g., Kokkotas et al., 2017; Konoplya and Zhidenko, 2020; Konoplya et al., 2020; Bronnikov et al., 2021). The RZ-metric has also been used in studies of quasi-normal modes (Volkel and Kokkotas, 2019; Suvorov and Volkel, 2021; Konoplya and Zhidenko, 2022), and more recently, it has played an important role in the theoretical interpretation of the supermassive black-hole images of M87* (Kocherlakota et al., 2021) and of Sgr A* (Event Horizon Telescope Collaboration et al., 2022).
In 2016, the spherically symmetric framework was extended to axisymmetric black holes and generic compact-object stationary spacetimes (Konoplya et al., 2016). This was accomplished by developing two different expansions: _i)_ a continued-fraction expansion in terms of a compactified radial coordinate and _ii)_ a Taylor expansion in terms of the cosine of the polar angle. Both have a fast convergence and in the polar direction there is an exact limit on the equatorial plane. Calculations of black-hole shadow images in this axisymmetric KRZ-metric have been performed by Younsi et al. (2016) and the Blandford-Znajek mechanism (Blandford and Znajek, 1977) has been investigated by Konoplya et al. (2021). A test of the KRZ-metric has been done with iron K\(\alpha\)-lines by Ni et al. (2016), with X-ray reflection spectroscopy (Nampalliwar et al., 2020; Abdikamalov et al., 2021; Yu et al., 2021), or with gravitational-wave observations (Shashank and Bambi, 2022). Furthermore, Siqueira and Richartz (2022) have considered the KRZ parametrization to describe a Kerr-like black hole surrounded by a massive scalar field.
In what follows, we briefly review the most salient aspects of the RZ approach. Using spherical polar coordinates \((t,r,\theta,\phi)\) the RZ metric has the generic form (Rezzolla and Zhidenko, 2014)
\[ds^{2}=-N^{2}(r)dt^{2}+\frac{B^{2}(r)}{N^{2}(r)}dr^{2}+r^{2}d\Omega^{2}\,, \tag{9}\]
where \(d\Omega^{2}=d\theta^{2}+\sin^{2}(\theta)d\phi^{2}\) is the line element of a sphere. This metric is more conveniently expressed in terms of the coordinates \((t,x,\theta,\phi)\), where \(x\) is the compactified radial coordinate defined as
\[x:=1-\frac{r_{0}}{r}\,, \tag{10}\]
which maps the infinite interval \(r\in[r_{0},\infty)\) between the event horizon at \(r_{0}\) and spatial infinity to the finite interval \(x\in[0,1)\). At the horizon, the value of the metric function \(N\) must be zero \(N(r_{0})=0\). This boundary condition is implicit for the Ansatz in the compactified coordinate \(x\) as:
\[N^{2}=xA(x)\,, \tag{11}\]
where \(A(x)>0\) for \(0\leq x\leq 1\). After introducing the near horizon parameters \(\epsilon,a_{0},b_{0}\) the functions \(A(x)\) and \(B(x)\) can be expressed as
\[A(x) =1-\epsilon(1-x)+(a_{0}-\epsilon)(1-x)^{2}+\tilde{A}(x)(1-x)^{3}\,, \tag{12}\] \[B(x) =1+b_{0}(1-x)+\tilde{B}(x)(1-x)^{2}\,. \tag{13}\]
Here \(\tilde{A}(x)\) and \(\tilde{B}(x)\) characterize the near horizon properties of the metric, i.e., at \(x\simeq 0\) and the properties at spatial infinity, i.e., at \(x\simeq 1\). They can be parametrized in terms of a Pade-expansion as
\[\tilde{A}(x) =\frac{a_{1}}{1+\frac{a_{2}x}{1+\frac{a_{3}x}{1+\frac{a_{3}x}{1+....}}}} \tag{14}\] \[\tilde{B}(x) =\frac{b_{1}}{1+\frac{b_{2}x}{1+\frac{b_{3}x}{1+....}}}\,, \tag{15}\]
where, at the horizon, we simply have that
\[\tilde{A}(0)=a_{1}\,,\qquad\tilde{B}(0)=b_{1}\,. \tag{16}\]
An important and highly effective feature of the RZ expansion is that if any of the coefficients \(a_{i}\) and \(b_{i}\) with \(i\geq 1\) is zero, all the others are automatically zero, such that the truncation of the expansion is sharp from that order on.
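As a minimal sketch of this truncation (our own code, not from the paper), the snippet below implements the metric function \(A(x)\) of Eq. (12) with \(a_{2}=a_{3}=\ldots=0\), so that \(\tilde{A}(x)=a_{1}\), and checks that setting all coefficients to zero recovers the Schwarzschild behaviour \(N^{2}=x\).

```python
import numpy as np

def A(x, eps=0.0, a0=0.0, a1=0.0):
    """Truncated RZ metric function, Eq. (12) with A~(x) = a1 (i.e. a2 = a3 = ... = 0)."""
    return 1.0 - eps*(1.0 - x) + (a0 - eps)*(1.0 - x)**2 + a1*(1.0 - x)**3

def N2(x, **rz):
    """-g_tt = N^2 = x A(x), Eq. (11)."""
    return x * A(x, **rz)

x = np.linspace(0.01, 0.99, 7)
print(np.allclose(N2(x), x))        # Schwarzschild limit: N^2 = 1 - r0/r = x
print(N2(x, eps=0.25, a1=0.25))     # a deformed example, still positive on (0, 1)
```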
### Equilibrium tori in the RZ-metric
Recalling the theory of the von Zeipel cylinders reviewed in Sect. 2.1, we know that the relevant expressions to describe the tori do not depend on the \(g_{rr}\) metric function and therefore are independent of the function \(B(x)\) in the metric (9). As a result, the expression for the \(t\)-component of the 4-velocity is given by
\[u_{t}^{2}=\frac{xA(x)r_{0}^{2}\sin^{2}(\theta)}{-(1-x)^{2}xA(x)\ell^{2}+r_{0}^ {2}\sin^{2}(\theta)}\,, \tag{17}\]
while the acceleration has the generic covariant components
\[a_{\mu}=\frac{1}{2}\frac{\partial_{\mu}(-N^{2}(x))+\Omega^{2}\partial_{\mu} \left(r_{0}^{2}\sin^{2}(\theta)/(1-x)^{2}\right)}{-N^{2}(x)+\Omega^{2}\left( r_{0}^{2}\sin^{2}(\theta)/(1-x)^{2}\right)}\,. \tag{18}\]
From these quantities and recalling that the acceleration must vanish for a particle moving on a Keplerian circular orbit, it is possible to derive the expression of the Keplerian angular velocity as
\[\Omega_{\rm Kep}:=\left(\frac{u^{\phi}}{u^{t}}\right)_{\rm Kep}=\pm\sqrt{\frac{A(x)+xA^{\prime}(x)}{2r_{0}^{2}/(1-x)^{3}}}\,, \tag{19}\]
while the corresponding expression for the Keplerian specific angular momentum is
\[\ell_{\rm Kep}(x):=-\left(\frac{u_{\phi}}{u_{t}}\right)_{\rm Kep}=\pm\frac{r_{0}}{x}\sqrt{\frac{A(x)+xA^{\prime}(x)}{2(1-x)A^{2}(x)}}\,. \tag{20}\]
Under these conditions and a metric that is diagonal (as the RZ metric), the equation of the von Zeipel cylinders reduces to the known relation between \(\ell\) and \(\Omega\), i.e.,
\[g_{tt}\ell+g_{t\phi}(1+\Omega\ell)+\Omega g_{\phi\phi} = 0\,,\] \[g_{tt}\ell = -\Omega g_{\phi\phi}\,. \tag{21}\]
such that the von Zeipel radius (squared) in the RZ-metric is given by
\[\mathcal{R}_{\rm VZ}^{2}:=\frac{\ell}{\Omega}=-\frac{g_{\phi\phi}}{g_{tt}}= \frac{r_{0}^{2}\sin^{2}(\theta)}{(1-x)^{2}N^{2}(x)}\,. \tag{22}\]
Furthermore, the effective potential in the RZ-metric - expressed in
terms of the \(x\) and \(\theta\) coordinates - reads as follows, with the last expression referring to the potential evaluated at the Keplerian specific angular momentum
\[W_{\rm eff}(x) = \ln|u_{t}| = \frac{1}{2}\ln\left|\frac{xA(x)r_{0}^{2}\sin^{2}(\theta)}{-(1-x)^{2}xA(x)\ell^{2}+r_{0}^{2}\sin^{2}(\theta)}\right| = \frac{1}{2}\ln\left(\frac{xA}{1\mp\left((1-x)(A+xA^{\prime})\right)/\left(2xA\right)}\right)\,. \tag{23}\]
\[u_{t}^{2} = \frac{x\left(1-\epsilon(1-x)+(a_{0}-\epsilon)(1-x)^{2}+a_{1}(1-x)^{3}\right)r_{0}^{2}\sin^{2}(\theta)}{-x\left((1-x)^{2}-\epsilon(1-x)^{3}+(a_{0}-\epsilon)(1-x)^{4}+a_{1}(1-x)^{5}\right)\ell^{2}+r_{0}^{2}\sin^{2}(\theta)} \tag{24}\]
\[\Omega_{\rm Kep} = \pm\sqrt{\frac{(1-x)^{3}\left[1+\epsilon(-2+6x-3x^{2})+a_{0}(1-4x+3x^{2})+a_{1}(1-6x+9x^{2}-4x^{3})\right]}{2r_{0}^{2}}} \tag{25}\]
\[\ell_{\rm Kep} = \pm\frac{r_{0}}{x}\sqrt{\frac{1+\epsilon(-2+6x-3x^{2})+a_{0}(1-4x+3x^{2})+a_{1}(1-6x+9x^{2}-4x^{3})}{2(1-x)\left[1-\epsilon(1-x)+(a_{0}-\epsilon)(1-x)^{2}+a_{1}(1-x)^{3}\right]^{2}}}\,, \tag{26}\]
\[W_{\rm eff}(x) = \frac{1}{2}\ln\left(\frac{x\left(1-\epsilon(1-x)+(a_{0}-\epsilon)(1-x)^{2}+a_{1}(1-x)^{3}\right)}{1\mp(1-x)\left[1+\epsilon(-2+6x-3x^{2})+a_{0}(1-4x+3x^{2})+a_{1}(1-6x+9x^{2}-4x^{3})\right]\left[2x\left(1-\epsilon(1-x)+(a_{0}-\epsilon)(1-x)^{2}+a_{1}(1-x)^{3}\right)\right]^{-1}}\right) \tag{27}\]
\[\mathcal{R}_{\rm VZ}^{2} = \frac{r_{0}^{2}\sin^{2}(\theta)}{x\left[(1-x)^{2}-\epsilon(1-x)^{3}+(a_{0}-\epsilon)(1-x)^{4}+a_{1}(1-x)^{5}\right]}\,. \tag{28}\]
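As a cross-check of these expressions (a sketch of our own, not part of the paper), the code below evaluates \(\ell_{\rm Kep}(x)\) from Eq. (26) and verifies that, in the Schwarzschild limit \(\epsilon=a_{0}=a_{1}=0\) with \(r_{0}=2M\), its minimum falls at \(x=2/3\) (i.e., \(r=6M\)) with \(\ell_{\rm ms}\simeq 3.674\,M\).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ell_Kep(x, r0=2.0, eps=0.0, a0=0.0, a1=0.0):
    """Keplerian specific angular momentum of Eq. (26) for the truncated RZ metric."""
    A = 1.0 - eps*(1.0 - x) + (a0 - eps)*(1.0 - x)**2 + a1*(1.0 - x)**3
    F = (1.0 + eps*(-2.0 + 6.0*x - 3.0*x**2) + a0*(1.0 - 4.0*x + 3.0*x**2)
         + a1*(1.0 - 6.0*x + 9.0*x**2 - 4.0*x**3))          # F = A + x A'
    return (r0/x) * np.sqrt(F / (2.0*(1.0 - x)*A**2))

# Schwarzschild limit: the minimum of ell_Kep marks the marginally stable orbit
res = minimize_scalar(ell_Kep, bounds=(0.1, 0.95), method='bounded')
print(res.x, res.fun)   # ~0.6667 (r = 6M) and ~3.674 (in units of M, since r0 = 2M)
```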
### Constraints on the range of parameters of the RZ-metric
Although the expressions presented above are given in terms of the three RZ coefficients \(\epsilon\), \(a_{0}\) and \(a_{1}\), and although these coefficients are presently unknown, it is worth noting that post-Newtonian (PN) experiments in the solar system already provide some constraints on at least one of them. In particular, the experiments suggest that the parameterised post-Newtonian coefficients \(\beta\) and \(\gamma\) are constrained to be (Will, 2006)
\[|\beta-1|\lesssim 2.3\times 10^{-4},\qquad|\gamma-1|\lesssim 2.3\times 10^{-5}\,, \tag{29}\]
such that
\[a_{0}=(\beta-\gamma)\frac{2M^{2}}{r_{0}^{2}}\simeq 10^{-4}\,, \tag{30}\]
and can be reasonably assumed to be zero in a first approximation (see below).
For the RZ parameters considered here, analytical bounds can be derived on the possible range of values and these have been presented in a number of studies and recently been summarised by Kocherlakota & Rezzolla (2022). We next briefly review them also in connection to the existence of solutions of non-selfgravitating tori. We start by recalling that the condition for the presence of an event horizon in the RZ-metric is \(N(r_{0})=0\). Outside the outermost horizon the function \(N(r)\) has to be positive and therefore \(A(x)>0\). Furthermore, the square of the specific angular momentum has to be positive, such that using expression (20) we find that
\[A(x)+xA^{\prime}(x)>0\qquad\mbox{for}\quad x\in[0,1]\,. \tag{31}\]
#### Analytical expressions for \(\epsilon\), \(a_{0}\) and \(a_{1}\)
Expressions (19)-(23) are obtained in the RZ-metric but are otherwise generic, that is, not restricted to a finite set of coefficients \(\epsilon\) and \(a_{i}\). In practice, however, we need to truncate the infinite expansions (14)-(15) to a finite (and possibly small) set of coefficients. In this case, setting \(a_{2}=0\) and taking into account the coefficients \(\epsilon\), \(a_{0}\), \(a_{1}\), we obtain the expressions (24)-(28) above for the four-velocity, the Keplerian angular velocity, the Keplerian specific angular momentum, the effective potential and the von Zeipel (squared) radius. The positivity condition (31) must then hold for the truncated metric function \(A(x)\), i.e.,
\[A(x)+xA^{\prime}(x)>0\qquad\mbox{for}\quad x\in[0,1]\,. \tag{32}\]
Evaluating the condition (31) at \(x=0\) and \(x=1\) we obtain the following constraints on the parameters \(\epsilon,a_{0}\), and \(a_{1}\)
\[a_{1}\geq-1+2\epsilon-a_{0}\,, \tag{33}\] \[\epsilon>-1\,. \tag{34}\]
To find the range of allowed parameters \(\epsilon\), \(a_{0}\), \(a_{1}\) we look at the behaviour of the metric functions in a boundary case. For example, when \(\epsilon=0\), \(a_{0}=0\) and \(a_{1}\neq 0\), we obtain that the largest allowed value of \(a_{1}\) for the specific angular momentum to be positive is \(a_{1}=4\).
To find an analytic expression for the upper boundary of \(a_{1}\) we consider the condition that the function \(A(x)+xA^{\prime}(x)\), evaluated at its minimum \(x_{m}\), has to be non-negative, i.e.,
\[A(x_{m})+x_{m}A^{\prime}(x_{m})\geq 0\,. \tag{35}\]
The minimum is found analytically to be at
\[x_{m} = \left(\frac{9a_{1}+3a_{0}-3\epsilon}{12a_{1}}\right)\pm\sqrt{\left(\frac{9a_{1}+3a_{0}-3\epsilon}{12a_{1}}\right)^{2}+\left(\frac{3\epsilon-2a_{0}-3a_{1}}{6a_{1}}\right)}\,. \tag{36}\]
For the derivation of this condition the following expressions have been used:
\[F(x) = A(x)+xA^{\prime}(x) \tag{37}\] \[= 1+\epsilon(-2+6x-3x^{2})+a_{0}(1-4x+3x^{2})\] \[+a_{1}(1-6x+9x^{2}-4x^{3})\,,\]
and
\[F^{\prime}(x)=6\epsilon(1-x)+a_{0}(6x-4)+a_{1}(-6+18x-12x^{2})\,.\]
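The boundary value \(a_{1}=4\) quoted above can be verified directly from Eqs. (36)-(37). The short sketch below (our own code, with hypothetical helper names) evaluates the stationary point of \(F(x)=A(x)+xA^{\prime}(x)\) inside \((0,1)\) and shows that, for \(\epsilon=a_{0}=0\) and \(a_{1}=4\), the function vanishes exactly at its minimum \(x_{m}=1/2\).

```python
import numpy as np

def F(x, eps, a0, a1):
    """F(x) = A(x) + x A'(x), Eq. (37)."""
    return (1.0 + eps*(-2.0 + 6.0*x - 3.0*x**2) + a0*(1.0 - 4.0*x + 3.0*x**2)
            + a1*(1.0 - 6.0*x + 9.0*x**2 - 4.0*x**3))

def x_m(eps, a0, a1):
    """Stationary points of F from Eq. (36); only roots inside (0, 1) are returned."""
    p = (9.0*a1 + 3.0*a0 - 3.0*eps) / (12.0*a1)
    q = (3.0*eps - 2.0*a0 - 3.0*a1) / (6.0*a1)
    roots = np.array([p + np.sqrt(p**2 + q), p - np.sqrt(p**2 + q)])
    return roots[(roots > 0.0) & (roots < 1.0)]

# boundary case eps = a0 = 0: the largest allowed a1 is 4, for which F(x_m) = 0 at x_m = 1/2
xm = x_m(0.0, 0.0, 4.0)
print(xm, F(xm, 0.0, 0.0, 4.0))   # -> [0.5] [0.]
```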
## 4 Space of solutions: single tori
Figure 2 shows with different shadings the various regions where single-torus solutions can be found (green shading) or not (salmon shading). More specifically, the green-shaded area - where solutions are possible - is upper-limited by the condition (20) and lower-limited by the condition (31); in such a region, the specific angular momentum only has a single minimum and the effective potential shows a single cusp. The left panel in Fig. 2 refers to the case when \(a_{0}=0\), while the remaining two are computed when \(a_{1}=0\) and \(a_{1}=2\), respectively. In what follows (see Sec. 4.1), we will discuss four models in this green region: A, B, C, and D, which are marked by red dots and surround the Schwarzschild solution; the latter obviously corresponds to the case of \(\epsilon=0\), \(a_{1}=0\) and is marked with a black cross. Also shown in Fig. 2 (with a blue-shaded area) is the region in which double-tori solutions are possible. The specific angular momentum here has two minima and one maximum, which allows one to distinguish these solutions from the single-torus ones. The effective potential, on the other hand, has two cusps and two maxima, which can be filled with fluid to obtain two tori.
Further below (see Sec. 5), we will also discuss two models in the double-tori region: E and F. If the fluid in these models fills up the outermost equipotential surface, then the cusp of the outer torus is connected to the outer edge of the inner torus. Since the models in the middle and right panels of Fig. 2 do not offer specific qualitative differences from those presented in the left panel, they will not be discussed in detail here.
### Single-torus solutions (\(a_{0}=0\))
As anticipated above, fixing the value of the specific angular momentum allows us to investigate the changes in the potential with the parameters \(\epsilon\) and \(a_{1}\). Hence, by suitably choosing the parameters \(\epsilon\), and \(a_{1}\), we are able to construct four representative models,
\[\mathrm{Model\;A}: \epsilon=-0.25\,, a_{1}=\phantom{-}0.00\,,\] \[\mathrm{Model\;B}: \epsilon=\phantom{-}0.00\,, a_{1}=\phantom{-}0.25\,,\] \[\mathrm{Model\;C}: \epsilon=\phantom{-}0.25\,, a_{1}=\phantom{-}0.00\,,\] \[\mathrm{Model\;D}: \epsilon=\phantom{-}0.00\,, a_{1}=-0.25\,.\]
As can be seen from Fig. 3, the shape of the potential changes and, in particular, the left panel contrasts Models A and C (dark and light blue lines) assessing the impact of the parameter \(\epsilon\), while the right panel contrasts Models B and D (dark and light red lines) highlighting the impact of the parameter \(a_{1}\). In all cases, the Schwarzschild solution is shown as a reference by a black solid line.
From the study of Models A and C it is possible to deduce that for \(\epsilon<0\) the value of the potential at the cusp and at the maximum of the torus is smaller than the corresponding value in the Schwarzschild spacetime (see, respectively, filled and empty diamonds in the left panel of Fig. 3): a larger specific angular momentum, \(\ell=4.35\) for Model A, is needed to lift the potential to the Schwarzschild value, which is shown for \(\ell=3.9\). At the same time, for \(\epsilon>0\) the opposite is true and the value of the potential is larger than in the Schwarzschild spacetime, requiring a smaller specific angular momentum, \(\ell=3.82\), for Model C. In both cases, the position of the cusp, \(r_{\mathrm{cusp}}\), moves to larger radii when considering the same value of the angular momentum.
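A quick numerical check of Model C can be carried out with the sketch below (our own code; it assumes the standard RZ relation between the horizon radius and the ADM mass, \(r_{0}=2M/(1+\epsilon)\), which is not restated explicitly in this section). It locates the cusp and the pressure maximum as the two roots of \(\ell_{\rm Kep}(x)=\ell\) for \(\ell=3.82\,M\).

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def ell_Kep(x, eps, a1, M=1.0):
    """Eq. (26) with a0 = a2 = 0; r0 = 2M/(1+eps) is an assumption of this sketch."""
    r0 = 2.0*M / (1.0 + eps)
    A  = 1.0 - eps*(1.0 - x) - eps*(1.0 - x)**2 + a1*(1.0 - x)**3
    F  = 1.0 + eps*(-2.0 + 6.0*x - 3.0*x**2) + a1*(1.0 - 6.0*x + 9.0*x**2 - 4.0*x**3)
    return (r0/x) * np.sqrt(F / (2.0*(1.0 - x)*A**2))

eps, a1, ell = 0.25, 0.0, 3.82                      # Model C and the value of ell used in Fig. 3
x_ms   = minimize_scalar(lambda x: ell_Kep(x, eps, a1),
                         bounds=(0.3, 0.95), method='bounded').x   # minimum of ell_Kep
x_cusp = brentq(lambda x: ell_Kep(x, eps, a1) - ell, 0.3, x_ms)
x_max  = brentq(lambda x: ell_Kep(x, eps, a1) - ell, x_ms, 0.99)
print(x_cusp, x_max)                                # conformal positions of cusp and pressure maximum
```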
This can be readily appreciated by inspecting the left and right panels of Fig. 4, which are similar to Fig. 1, and report via a colour-code the effective potential as a function of the radial coordinate \(x\) and of the parameter \(\epsilon\) at a fixed specific angular momentum \(\ell=4.35\) (left) and \(\ell=3.82\) (right), and RZ-parameters \(a_{0}=0=a_{1}\). Clearly, the cusp is closest to the horizon for \(\epsilon=0\), which corresponds to the Schwarzschild spacetime, and then moves further away (i.e., to larger values of \(x\)) for increasing \(|\epsilon|\) (orange line). The left and right panels of Fig. 4 also show the position of the pressure maximum of the torus \(r_{\mathrm{max}}\) and highlight that it shifts to smaller radii for negative \(\epsilon\) and to larger radii for positive \(\epsilon\) as compared to the Schwarzschild solution (purple line). In essence, therefore, for increasing \(\epsilon\), and independently of its sign, the position \(r_{\mathrm{max}}\) moves to ever larger values of \(x\). Similarly, the position of the outer radius of the torus \(r_{\mathrm{out}}\) (blue line) increases for increasing values of \(\epsilon\) and in the left panel it reaches spatial infinity at \(\epsilon=-0.21\), for which the value of the potential at the cusp becomes zero. For larger values of \(\epsilon\) the tori would be overfilling their Roche limit and hence no tori in stable equilibrium can be built (see the bright-shaded region with no stable-tori solutions).
Rather similar considerations can be made for Models B and D (see Fig. 5), for which we find that for \(a_{1}<0\) the value of the potential at the cusp is smaller than the value of the potential in the Schwarzschild spacetime and larger for \(a_{1}>0\). At the same time, the cusp shifts closer to the horizon with increasing \(a_{1}\) (orange line), while the position and the value of the potential at the maximum of the torus change only slightly with \(a_{1}\), increasing with increasing \(a_{1}\) (purple line). Analogously, the position of \(r_{\mathrm{out}}\) moves closer to spatial infinity for increasing \(a_{1}\).
Figure 2: Multidimensional parameter space for tori solutions for the three RZ parameters considered here: \(\epsilon\), \(a_{0}\) and \(a_{1}\). Shown as green-shaded are the regions where tori can be constructed and these are limited by solid green lines, with the lower one enforcing Eq. (32) and the upper one Eq. (34); no tori solutions can be built in the salmon-shaded regions. From left to right the different panels refer to sections where \(a_{0}=0\), \(a_{1}=0\), and \(a_{1}=2\), respectively. The blue-shaded areas represent instead the regions where double-tori solutions are possible. Finally, red circles are used to mark the position of the representative models that are discussed in the text, while the black cross marks the position of a Schwarzschild spacetime.
Figures 4 and 5 also help to appreciate via the colour-coding that the potential increases for increasing values of the RZ parameters \(\epsilon\) and \(a_{1}\) and that a single cusp is present for all values of \(a_{1}\), but disappears for sufficiently small and negative values of \(\epsilon\), i.e., for \(\epsilon\leq-0.302\) for a specific angular momentum of \(\ell=4.35\) and \(\epsilon\leq-0.135\) for a specific angular momentum of \(\ell=3.82\) (where the orange and the purple lines meet). Below these values the effective potential shows no cusp and therefore no tori solutions are possible, which is shown by the brighter shaded region. While this behaviour is generic, the value of \(\epsilon\) at which this happens depends on the constant specific angular momentum considered. Finally, marked with red circles in Figs. 4, 5 are the four representative Models A-D and by simply considering the distance in the conformal coordinate \(x\) between these points it is simple to appreciate that the torus shape can vary considerably when changing the RZ parameters \(\epsilon\) and \(a_{1}\) and that, as \(a_{1}\) is increased, the torus cusp and center systematically move in and out, respectively.
The very different aspects of the tori in the various cases can also be seen from Fig. 6 where we present the equipotential surfaces and the von Zeipel cylinders for the case of the Schwarzschild spacetime (left panel) and Models C (middle panel) and D (right panel). The equipotential surfaces are constructed for the same value of the specific angular momentum of \(\ell=3.82\) and are displayed in terms of the Cartesian coordinates \(\tilde{x}\) (not to be confused with the conformal coordinate \(x\)) and \(\tilde{z}\). Compared to the Schwarzschild case we clearly observe that we obtain a much larger torus for Model C and a much smaller torus for Model D. This figure underlines the statement that the size of the torus increases with increasing parameters \(\epsilon\) and \(a_{1}\). Model C has the RZ-parameter \(\epsilon=0.25\) and the torus size increases (especially in the \(\tilde{z}\)-direction) when compared to the \(\epsilon=0\) Schwarzschild case. In contrast, Model D has a negative RZ-parameter \(a_{1}\), and its torus is smaller and also less extended vertically. Note that the von Zeipel cylinders have a cusp at different von Zeipel radii \(\mathcal{R}_{\rm VZ}\). In the Schwarzschild case, in fact, \(\mathcal{R}_{\rm VZ}^{2}=5.1975\), while it is \(\mathcal{R}_{\rm VZ}^{2}=4.8351\) for Model C and \(\mathcal{R}_{\rm VZ}^{2}=5.3879\) for Model D.
Figure 4: _Left panel_: the same as in Fig. 1 but for Model A in a RZ spacetime with parameters \(a_{0}=0=a_{1}\); in this case the coefficient \(\epsilon\) is varied and the specific angular momentum is kept fixed at the representative value \(\ell=4.35\). _Right panel_: the same as in the left panel but Model C and \(\ell=3.82\).
Figure 3: _Left panel_: the effective potential at a fixed constant specific angular momentum in units of the ADM-mass \(M\) for Model A at \(\ell=4.35\) (dark-blue solid line), Model case C (light-blue solid line) at \(\ell=3.82\); also reported is \(W_{\rm eff}\) in the case of a Schwarzschild spacetime with \(\ell=3.9\) (black solid line). _Right panel_: the same as in the left but for Model cases B (light-red solid line) and D (dark-red solid line) and the Schwarzschild solution at \(\ell=3.82\) (black solid line). In both cases we mark with filled (unfilled) diamonds the position of the maximum (minimum) of the effective potential. Finally, shown with insets in both cases is the spatial dependence of the specific angular momentum, which is terminated at the marginally bound orbit.
## 5 Space of solutions: double tori
We next discuss the region of the RZ space of parameters where the specific angular momentum has two minima and the effective potential therefore shows two cusps and two maxima for the _same_ constant value of the specific angular momentum. These properties of the effective potential, which _are not_ encountered in a Schwarzschild spacetime, lead to what we refer to as "double-tori" solutions.
Within this region, which is shown as blue-shaded in Fig. 2, we consider again two representative model cases obtained after setting \(a_{0}=0\) and suitably selecting the parameters \(\epsilon\) and \(a_{1}\), i.e.,
\[\mathrm{Model\;E} : \epsilon=1.00\,, a_{1}=2.00\,,\] \[\mathrm{Model\;F} : \epsilon=0.85\,, a_{1}=1.78\,. \tag{38}\]
We should remark that it is in principle possible to build double-tori solutions in a Schwarzschild spacetime and indeed, interesting works exist where these configurations are considered in great detail and where families of nested tori configurations are considered (see, e.g., Pugliese & Stuchlik, 2017; Pugliese & Stuchlik, 2020). However, in these cases one needs to suitably choose and fine-tune the specific angular momenta - which are _different_ for each torus - such that the configuration can actually be built. At the same time, double-tori solutions with the same constant specific angular momentum have been found also in another spherically symmetric black-hole spacetime that is different from the Schwarzschild spacetime, i.e., the \(q\)-metric (Memmen & Perlick, 2021).
### Double-tori solutions (\(a_{0}=0\))
When considering Model E, we note that with \(\epsilon=1.00\) the RZ parameter \(a_{1}\) needs to be in the range \(1.812\leq a_{1}\leq 2.28\) so that two well-defined cusps can appear in the effective potential, hence our choice of \(a_{1}=2\). The bottom-left part of Fig. 7 shows the spatial dependence of the specific angular momentum, which clearly exhibits two minima, while the top-left of the same figure shows the effective potential at a fixed angular momentum; the latter, \(\ell=3.3779\), has been chosen such that the value of the effective potential at the two cusps is the same. With increasing values of the specific angular momentum, the potential of the inner cusp increases and the potential of the outer cusp decreases; the opposite happens if the specific angular momentum is decreased. Similarly, when considering Model F, we note that for \(\epsilon=0.85\) the allowed range for the RZ parameter is \(1.74\leq a_{1}\leq 1.85\) so that two well-defined cusps appear in the effective potential, hence our choice of a representative case with \(a_{1}=1.78\). Also in this case we display, respectively, in the bottom-right and top-right part of Fig. 7 the spatial dependence of the specific angular momentum and of the effective potential for a fixed value \(\ell=3.2259\). Also in this case, and as we will discuss in more detail below, different choices of the constant angular momentum can lead to considerable changes of the effective potential and hence to the appearance (or disappearance) of the outermost cusp.
Figs. 4 and 5 have already illustrated how changes in the constant specific angular momentum impact the shape of the effective potential and, in turn, the properties of the tori. This behaviour becomes more complex when double-tori solutions are possible, as demonstrated in Fig. 8, where we concentrate on the representative Model F: the left panel again shows (with a colour-code) the effective potential for fixed values of \(\ell\) and \(\epsilon\) but varying values of \(a_{1}\), while the right panel shows the effective potential for fixed values of \(\ell\) and \(a_{1}\) but varying values of \(\epsilon\).
In analogy to Figs. 4 and 5, marked with a transparent shading in the left panel of Fig. 8 are those regions where tori solutions are not possible. More specifically, the transparent region at low values of \(a_{1}\) denotes the range where the specific angular momentum is too small for the effective potential to show a cusp and hence no tori can be constructed there. On the other hand, in the transparent region at high \(a_{1}\) the tori are unstable because the chosen specific angular momentum is larger than the value at the marginally bound orbit \(\ell>\ell_{\mathrm{mb}}\). Note that the effective potential has only one cusp for \(1.734<a_{1}<1.773\), such that only single-torus solutions are possible in this range of \(a_{1}\); we refer to this region as the "1-cusp region", where the positions of the cusps, maxima of the tori and outer radii are represented by solid lines (orange, purple, and blue, respectively). In the range \(1.773<a_{1}<a_{1,\mathrm{F}}\), where \(a_{1,\mathrm{F}}:=1.78\) is the value of \(a_{1}\) for Model F, a second inner cusp and a second maximum appear in the effective potential at smaller values of \(x\). In this region, which we refer to as the "2-cusps" region, we mark with solid lines the relevant positions of the innermost (first) torus and with dotted lines of the corresponding colour the various positions of the outermost (second) torus.
In this region, both tori fill their cusps; as the value of \(a_{1}\) is increased to reach that of Model F (red dots), the two cusps attain the same value of the potential for the selected value of \(\ell\). For \(a_{1}>a_{1,\mathrm{F}}\), the potential of the inner cusp continues to increase while that of the outer cusp decreases. This results in the inner cusp having a value of the effective potential that is larger than that of the outer cusp, and we can no longer obtain two tori filling their cusps. Hence, in the region \(a_{1,\mathrm{F}}<a_{1}<1.794\) there are two possibilities to construct tori. The first one is to fill the effective potential up to the inner cusp (solid orange, purple and blue lines), which yields a position of the outer radius larger than the position of the second (outer) cusp and a single extended torus. The second possibility is to fill the effective potential only up to the second (outer) cusp, where the small outer torus (transparent dotted lines) would fill its Roche lobe, but the inner one would not and therefore is not shown. On the other hand, for \(a_{1}>1.794\), the outer cusp disappears, while the inner (and now only) cusp continues to move to smaller radii, leading to single-torus solutions, which are very extended and again represented by solid lines. Finally, and as mentioned already, the specific angular momentum for parameters \(a_{1}>1.844\) (transparent region) is larger than that at the marginally bound orbit and the single tori in this region are not in a stable equilibrium and will be subject to accretion.
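The branching just described can be condensed into a small decision helper. The snippet below is only a schematic restatement of the classification in the text; in practice the potential values at the two cusps would come from the actual RZ effective potential, which is not reproduced here.

```python
def classify_tori(W_inner_cusp, W_outer_cusp=None, tol=1e-12):
    """Schematic classification of constant-l torus configurations from the
    effective potential at the inner and outer cusp (1-cusp / 2-cusps regions)."""
    if W_outer_cusp is None:                       # only one cusp exists
        return "single torus filling its only cusp"
    if abs(W_inner_cusp - W_outer_cusp) < tol:     # Model-F-like degenerate case
        return "two tori, both exactly filling their cusps"
    if W_inner_cusp < W_outer_cusp:
        return "two tori, each filling its own cusp"
    # inner cusp above the outer one: either one extended torus filled up to the
    # inner cusp (spilling past the outer cusp) or only a small outer torus
    return "one extended torus OR a small outer torus with an underfilled inner one"

print(classify_tori(-0.040, -0.035))   # inner cusp below the outer one
```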
Figure 5: The same as in Fig. 4 but for Models B and D in a RZ spacetime with parameters \(\epsilon=0=a_{0}\); in this case the coefficient \(a_{1}\) is varied and the specific angular momentum is kept fixed at the representative value \(\ell=3.82\).
The right panel of Fig. 8, which displays the effective potential at a fixed value of \(a_{1}=1.78\) but with varying \(\epsilon\), is very similar to the left panel and, indeed, the same considerations apply when going from high to low values of \(\epsilon\). More specifically, no cusp exists for \(\epsilon>0.865\) (transparent top region), while a single cusp and maximum (orange and purple solid lines) appear in the region \(0.855<\epsilon<0.865\), giving rise to single tori.
Two tori filling their cusp are obtained for parameters \(0.85<\epsilon<0.855\) with the inner torus represented by solid lines and the outer torus by dotted lines. Model F (marked by red circles) shows two cusps at the same value of the effective potential, while for \(0.845<\epsilon<\epsilon_{F}\) the effective potential of the inner cusp rises above the outer one and it is possible to either fill the inner cusp to obtain one large torus (solid lines) or the outer one giving a small outer torus (transparent dotted lines) and an inner one not filling its cusp (not shown). With decreasing value of \(\epsilon\) the outer cusp disappears at
Figure 6: Equipotential surfaces of the effective potential (top row) and the von Zeipel cylinders (bottom row) shown in Cartesian coordinates \(\tilde{x}\) and \(\tilde{z}\) and in units of the black-hole mass \(M\). From left to right the three columns refer to a Schwarzschild spacetime (left), Model C (middle), and Model D (right). Note that the torus in Model C has an outer radius \(r_{\rm out}=\tilde{x}=100\)\(M\) and hence cannot be shown in the figure; furthermore, illustrated in an inset in the top-right panel is a magnification of the torus of Model D.
Figure 7: Effective potentials at a fixed specific angular momentum (top row) and spatial dependence of the specific angular momenta (bottom row) for Models E (left) and F (right). The effective potentials refer to \(\ell=3.3779\) for Model E and to \(\ell=3.2259\) for Model F; these values are chosen so that the two cusps have the same value of \(W_{\rm eff}\).
the white solid line and we are left with a single large torus (solid orange, purple and blue lines) in the region \(0.8147<\epsilon<0.845\). Finally, for even smaller values \(\epsilon<0.8147\) (transparent shading) no tori in stable equilibrium can be obtained since the specific angular momentum is above that of the marginally bound orbit.
Fig. 9 provides a more direct representation of the properties of the tori in Models E (left column) and F (right column) by showing, in analogy to Fig. 6, the equipotential surfaces (top row) and the von Zeipel cylinders (bottom row). Since Models E and F differ essentially in the value of the parameter \(\epsilon\), Fig. 9 shows that the size of the tori increases for an increasing value of \(\epsilon\). Note also that the outer torus is systematically larger than the inner one and grows (shrinks) faster with increasing (decreasing) \(\epsilon\). Overall, the examples discussed in this section highlight that double-tori solutions can easily be obtained in a RZ spacetime with \(a_{0}=0\) and in a suitably chosen range of values for \(\epsilon\) and \(a_{1}\), which
Figure 8: _Left panel:_ the same as in Fig. 4 but for Model F in a RZ spacetime with parameters \(\epsilon=0.85\), \(a_{0}=0\), \(a_{1}=1.78\); in this case the coefficient \(a_{1}\) is varied and the specific angular momentum is kept fixed at the representative value \(\ell=3.2259\). The convention for the various lines is the same as in the previous similar representations of the effective potential with the addition that we show by dotted transparent lines the situations in which one or two tori can exist depending on how the equipotential surfaces are filled. Note that in this case double-tori solutions are possible. _Right panel:_ the same as in the left panel but where the RZ-parameter \(\epsilon\) is allowed to vary at fixed \(a_{1}=1.78\).
Figure 9: The same as in Fig. 6 but for Model E (left) and Model F (right). Note that in both cases two tori are possible and that in the case of Model E the outer torus is much larger than the inner one (the outer radius at \(r_{\rm out}=\tilde{x}=13.793\,M\)) and hence is shown only close to the second cusp.
determine their properties in terms of vertical thickness and size. More importantly, these tori can be filled with fluids having the same constant specific angular momentum and hence are potentially easier to produce in nature as one expects that the matter sourcing these tori near supermassive black holes has the same astrophysical origin and hence the same specific angular momentum. Double-tori solutions can also be constructed in general relativity but, in contrast, require _different_ and suitably tuned values of the specific angular momentum for each torus (Pugliese & Stuchlik, 2017).
As a final remark we note that while the effective potential in the case of double-tori solutions has two distinct cusps, the von Zeipel radius can only have one cusp and this necessarily appears close to the black hole and not at large distances (see Fig. 9). This is because \(W_{\rm eff}\) depends on both the \(g_{tt}\) metric function and on the specific angular momentum [see Eq. (23)]; while the former does not have extrema at large distances from the black hole, the latter does, as shown for instance in Fig. 7. By contrast, the von Zeipel radius depends only on the \(g_{tt}\) metric function and thus has only one local minimum at the photon ring, while approaching asymptotic flatness at large distances [see Eq. (22)].
## 6 Conclusions
The study of non-selfgravitating equilibrium tori orbiting around black holes has a long history and these models have found a number of applications in the simulation of accretion flows onto black holes and other compact objects. We have revisited the problem of constructing such equilibria starting from the simplest black-hole spacetimes, i.e., spherically symmetric, but expressed in terms of a fully generic and rapidly converging parameterisation: the RZ metric.
Already in general relativity, the construction of such equilibria does not depend on the \(g_{rr}\) metric function and thus our analysis has been restricted to the first three parameters of the \(g_{tt}\) metric function within the RZ metric, i.e., \(\epsilon\), \(a_{0}\) and \(a_{1}\). Within this framework we have extended the definitions of all of the quantities characterising these equilibria, starting from the concept of the von Zeipel cylinders and up to the possible ranges of the specific angular momenta that are employed to construct families of tori.
In this way, we were able to set precise constraints on the ranges of the RZ coefficients and thus define the space of parameters in which tori solutions are possible and those where no such solutions can be constructed. To make our analysis more tractable, we have further assumed that \(a_{0}=0\) (as in general relativity) since this is the coefficient that is best constrained by parameterised post-Newtonian measurements to be \(a_{0}\lesssim 10^{-4}\). Within the allowed space of parameters we have then encountered both standard "single-torus" solutions, but also non-standard "double-tori" solutions. While the properties of the first ones in terms of the presence of a single cusp, of a local pressure maximum and of a varying outer radius, are very similar to those encountered in general relativity, the properties of double-tori solutions are far richer. In particular, depending on the specific region of the space of parameters, it is possible to construct tori that have two cusps and fill the corresponding equipotential surfaces either up to the value of the effective potential at the first cusp or at that of the second cusp. In this way, transitions from single-torus to double-tori solutions (and vice-versa) are possible and the RZ parameterisation opens therefore the way to the exploration of a much richer class of equilibria than in general relativity. More importantly, these tori can be filled with fluids having the _same_ constant specific angular momentum and hence are potentially easier to produce in nature as one expects that the matter sourcing these tori close to supermassive black holes has the same astrophysical origin and hence the same specific angular momentum.
A concluding remark should be dedicated to the plausibility of these solutions. Of course, all present observations of black holes, either via gravitational waves (The LIGO Scientific Collaboration and the Virgo Collaboration, 2016) or via imaging (Event Horizon Telescope Collaboration et al., 2019, 2022), portray a picture that is perfectly compatible with general relativity, which still represents the best theory of gravity presently available. At the same time, observations are sufficiently uncertain to leave room for alternative theories and for deviations from general relativity. Cast in this context, and when supported by precise astronomical observations, the existence of these equilibrium tori would provide very valuable information on the properties of the spacetime and on its deviation from general relativity.
## Acknowledgments
It is a pleasure to thank A. Cruz-Osorio, C. Ecker, and P. Kocherlakota for useful discussions and comments. Support in funding comes from the State of Hesse within the Research Cluster ELEMENTS (Project ID 500/10.006), from the ERC Advanced Grant "JETSET: Launching, propagation and emission of relativistic jets from binary mergers and across mass scales" (Grant No. 884631).
|
2310.06017 | Riding the dark matter wave: Novel limits on general dark photons from
LISA Pathfinder | We note the possibility to perform a parametrically improved search for
gauged baryon ($B$) and baryon minus lepton ($B-L$) Dark Photon Dark Matter
(DPDM) using auxiliary channel data from LISA Pathfinder. In particular we use
the measurement of the differential movement between the test masses (TMs) and
the space craft (SC) which is nearly as sensitive as the tracking between the
two TMs. TMs and SC are made from different materials and therefore have
different charge-to-mass ratios for both $B-L$ and $B$. Thus, the surrounding
DPDM field induces a relative acceleration of nearly constant frequency. For
the case of $B-L$, we find that LISA Pathfinder can constrain previously
unexplored parameter space, providing the world leading limits in the mass
range $4\cdot 10^{-19}\,\text{eV}<m<3\cdot 10^{-17}\,\text{eV}$. This limit can
easily be recast also for dark photons that arise from gauging other global
symmetries of the SM. | Jonas Frerick, Joerg Jaeckel, Felix Kahlhoefer, Kai Schmidt-Hoberg | 2023-10-09T18:00:00Z | http://arxiv.org/abs/2310.06017v2 | # Riding the dark matter wave: Novel limits on general dark photons from LISA Pathfinder
###### Abstract
We note the possibility to perform a parametrically improved search for gauged baryon (\(B\)) and baryon minus lepton (\(B-L\)) Dark Photon Dark Matter (DPDM) using auxiliary channel data from LISA Pathfinder. In particular we use the measurement of the differential movement between the test masses (TMs) and the space craft (SC) which is nearly as sensitive as the tracking between the two TMs. TMs and SC are made from different materials and therefore have different charge-to-mass ratios for both \(B-L\) and \(B\). Thus, the surrounding DPDM field induces a relative acceleration of nearly constant frequency. For the case of \(B-L\), we find that LISA Pathfinder can constrain previously unexplored parameter space, providing the world leading limits in the mass range \(4\cdot 10^{-19}\,\mathrm{eV}<m<3\cdot 10^{-17}\,\mathrm{eV}\). This limit can easily be recast also for dark photons that arise from gauging other global symmetries of the SM.
keywords: dark photon dark matter, direct detection, gravitational wave interferometry +
Footnote †: journal: The Astrophysical Journal
## 1 Introduction
The existence of Dark Matter (DM) is a well-established observational fact [1]. For a long time the WIMP paradigm has dominated the quest for DM [2] but with null observations in the increasingly sensitive direct detection (DD) experiments [3; 4; 5] there is increased interest in alternative DM candidates. One particularly well-motivated class of such particles are ultra-light and weakly coupled bosons (see, e.g., [6; 7; 8] for reviews). These include, among others, the axion [9; 10; 11; 12] or general axion-like particles (ALPs) [13] as well as new vector bosons (cf., e.g., [14; 15; 16; 17; 18; 19]), often referred to as dark photons (DPs).1
Footnote 1: In this work we will refer to any light new vector boson as a dark photon, allowing for couplings that gauge global symmetry groups of the SM. Our method is unfortunately insensitive to the “canonical”, kinetically mixed DP.
In this work, we will focus on ultra-light DPs as a DM candidate. Small DP masses \(m\) can be generated either by the Stuckelberg [20; 21] or by the Higgs mechanism (where the latter often causes additional constraints). For \(m\lesssim 30\,\mathrm{eV}\) the local DM halo behaves like a classical wave as the spacing between particles becomes smaller than the de Broglie wave length [22]. We will stay agnostic about the details of the production mechanism. While heavier DPs can be produced from the thermal SM bath [23], very light DPs require a non-thermal mechanism to ensure that the DM is cold. To this end, there are many gravitational or extended dark sector solutions that provide the correct relic density [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37]. Furthermore, we will assume that the DP is gauged under a combination of baryon number (\(B\)) and lepton number (\(L\)), with a particular focus on the difference (\(B-L\)).
Astrophysical objects like the Sun can efficiently produce light DPs which, in turn, results in impressive limits on the existence of DPs without requiring them to be DM [38; 39; 40; 41; 42]. Additionally, lab experiments testing the equivalence principle are perfect candidates to look for this kind of new physics that induces long-range forces beyond electromagnetism and gravity [43; 44; 45; 46]. Planetary [47] and asteroidal [48; 49] orbits are sensitive to new long-range forces as well. Finally, the gauge anomaly associated with baryon number leads to strong constraints from meson decays [17] for this specific gauge group.
If the DPs are also DM, new tests become available, e.g. with accelerometers as proposed in Ref. [50] and realized in [51]. A particularly interesting possibility is to search for these DM candidates directly at gravitational wave observatories. This idea was first pointed out in Ref. [52] for ultra-light scalar DM and later briefly discussed for DPDM in Ref. [50]. Independently, Ref. [53] focused especially on GW interferometry and performed a more detailed analysis for several instruments based on the small inhomogeneity of the field. We will use this work as a guideline for our own analysis as it also investigated new vector bosons gauged under \(B\) and \(B-L\), and discussed both ground-based and space-based laser interferometers. However, it makes use of the differential acceleration between the two equal test masses and is therefore limited by the small ratio of arm length to the scale of inhomogeneity. Here, we point out that the setup of LISA Pathfinder (LPF) also offers the possibility to use the differential acceleration between a test mass and the satellite carrying the interferometer itself. As we will argue this type of search is not limited by the arm length, significantly increasing the sensitivity in the relevant mass range compared to a previous analysis [54]. That said, we want to point out that the use of auxiliary channels was already
proposed for KAGRA in Ref. [55].
We briefly discuss the signal prediction and analysis method in sec. 2 and point out the similarities and differences to previous DPDM interferometer limits. This is followed by an introduction to LPF including a discussion of the sensitivity in sec. 3. Finally, we estimate the sensitivity of this instrument to DPDM and discuss the necessary steps for a refined analysis in sec. 4 before we conclude in sec. 5. Throughout this letter we work in natural units \(\hbar=c=1\).
## 2 Calculation of the signal
Let us begin by introducing the DP Lagrangian [56; 57; 18]
\[\mathcal{L}\supset-\frac{1}{4}F^{\prime}_{\mu\nu}F^{\prime\mu\nu}-\frac{\epsilon_{\rm KM}}{2}F^{\prime}_{\mu\nu}F^{\mu\nu}+\frac{m^{2}}{2}A^{\prime}_{\mu}A^{\prime\mu}-\epsilon_{g}eA^{\prime}_{\mu}J^{\mu}_{g}\, \tag{1}\]
which contains the renormalisable interactions of the DP field \(A^{\prime}_{\mu}\) in full generality. It takes into account both the kinetic mixing \(\epsilon_{\rm KM}\) between the field strength tensors \(F^{(\prime)}_{\mu\nu}\) of the SM photon and the dark photon, and an explicit coupling \(\epsilon_{g}\) to a current \(J^{\mu}_{g}\) associated with a gauge group \(g\). Note that we have rescaled the gauge coupling \(g_{g}\) to the electromagnetic coupling \(e\), i.e. \(g_{g}=\epsilon_{g}e\). From now on, we will assume the kinetic mixing to be negligible and focus on the explicit couplings with \(g=B\) or \(g=B-L\) unless mentioned otherwise. In sec. 4 we will discuss how to generalize our analysis to arbitrary gauge groups.
Under the aforementioned assumptions, any piece of baryonic matter is directly charged under both gauge groups. Thus, in a background field of DPDM these charges behave in full analogy to electric charges in an electric field. Assuming the DM to be cold and have mass \(m\), the field will be nearly monochromatic with a linewidth suppressed by the non-relativistic velocity \(v\sim 10^{-3}\) of the halo [58]
\[\mathbf{A}(t,x)=\mathbf{A}_{\rm DM}e^{-i\omega t+\phi(x)}\, \tag{2}\]
where \(\omega=m+\mathcal{O}(v^{2})\) denotes the non-relativistic particle energy, \(\mathbf{A}_{\rm DM}\) is the 3-vector of the DPDM field, and \(\phi(x)=i\mathbf{k}\cdot\mathbf{x}+\phi_{0}\) is a weakly position dependent phase where \(\mathbf{k}\approx m\mathbf{v}\) denotes the momentum of the wave and \(\phi_{0}\) is a constant phase.2 We can obtain the temporal component of the 4-potential from the "Lorenz condition"
Footnote 2: For our purposes, weakly dependent means that \(|\mathbf{k}|L\ll 1\) where \(L\) denotes the size of the experiment.
\[\partial_{\mu}A^{\mu}=0\Rightarrow A^{0}(t)=-\frac{\mathbf{k}\cdot\mathbf{A}( t)}{\omega}\approx\mathbf{v}\cdot\mathbf{A}(t)\ll|\mathbf{A}(t)|\, \tag{3}\]
which has to be fulfilled for a massive vector boson as dictated by the equations of motion. Due to the weak spatial dependence of eq. (2) we have omitted the \(x\) in the argument of all vector components. We observe that the temporal component is generically velocity suppressed for non-relativistic DPDM.3
Footnote 3: Indeed, for transversely polarized DPs, i.e. \(\mathbf{k}\cdot\mathbf{A}_{\rm DM}=0\), the component vanishes exactly.
Within a coherence patch the signal can be treated as being monochromatic. The coherence length is given by the wavelength \(\lambda_{c}\simeq 2\pi/(mv)\) and the coherence time is \(t_{c}\simeq 2\pi/(mv^{2})\). It is important to realize that neither the amplitude nor the direction of the field changes within a coherence patch. LISA Pathfinder (LPF) covers a frequency range from a few Hz down to around \(10^{-5}\,\)Hz (see sec. 3). Especially for the lowest frequencies this implies an extremely long coherence time due to the non-relativistic velocities. In fact, even for the highest frequencies in LPF's sensitivity range the coherence time is more than a week so that even long-term searches for monochromatic signals suffer at most weakly from the decoherence of the signal, thus enhancing the limits significantly without employing new technology.
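As a quick numerical illustration of these scales, the short sketch below evaluates the signal frequency, the coherence length and the coherence time for a few representative masses in the LPF band; the masses are chosen purely for illustration and the conversion constants are the standard ones.

```python
import numpy as np

hbar = 6.582e-16   # eV s
c    = 2.998e8     # m / s
v    = 1e-3        # typical halo velocity in units of c

for m in (1e-18, 1e-17, 1e-16):                # illustrative DPDM masses in eV
    f     = m / (2 * np.pi * hbar)             # signal frequency f = m / (2 pi hbar)
    lam_c = 2 * np.pi * hbar * c / (m * v)     # coherence length ~ 2 pi / (m v)
    t_c   = 2 * np.pi * hbar / (m * v**2)      # coherence time   ~ 2 pi / (m v^2)
    print(f"m = {m:.0e} eV -> f = {f:.1e} Hz, lambda_c = {lam_c:.1e} m, "
          f"t_c = {t_c:.1e} s ({t_c / 3.15e7:.0f} yr)")
```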
For the sake of clarity, we will give a minimal description of LPF here and use it as an example for the main idea behind our analysis but the following arguments are equally valid for two generic objects made from different materials. In LPF, two (almost) identical test masses (TMs) are enclosed separately in a space craft (SC) and the relative motion between the TMs themselves and between TMs and the SC is tracked. A more detailed description will follow in sec. 3. The dominant effect of the gauged DPDM on a charged object is analogous to the electric component of the Lorentz force (see [50] for a similar calculation as we do in the following). Therefore, we need to determine the "electric field"
\[\mathbf{E}_{g}=-\partial_{t}\mathbf{A}(t)=i\omega\mathbf{A}_{\rm DM}e^{-i \omega t+\phi(x)}. \tag{4}\]
For our purposes, we can completely ignore the phase of the field \(\phi(x)\approx 0\) and consider it spatially constant over the size of the experiment. Therefore, the field is oscillating at a single frequency as long as we consider only coherent time scales.
This field then exerts a force on all objects charged under the given gauge group. Therefore, we find the following acceleration
\[\mathbf{a}(t)\simeq i\omega\epsilon_{g}e\frac{q}{M}\mathbf{A}_{\rm DM}e^{-i \omega t}=i\epsilon_{g}e\frac{q}{M}\sqrt{2\rho_{\rm DM}}\;\hat{\mathbf{e}}_{A }\;e^{-i\omega t}\, \tag{5}\]
for an object of mass \(M\) and charge \(q\) (under \(g\)). Furthermore, we used the well-known relation between the average energy density and amplitude of wave-like DM
\[\rho_{\rm DM}=\frac{1}{2}\omega^{2}|\mathbf{A}_{\rm DM}|^{2}. \tag{6}\]
Finally, \(\hat{\mathbf{e}}_{A}\) is the unit vector in direction of the DPDM field within a coherence patch.
To estimate the signal-to-noise ratio in LPF we need the amplitude of the relative acceleration between the SC center of mass and the TMs for the three SC axes \(i=x,y,z\). For a monochromatic signal, we obtain those values by taking the real part and dropping the harmonic behavior of eq. (5)
\[\Delta a_{i}=\epsilon_{g}e\Big{(}\Delta\frac{q}{M}\Big{)}\sqrt{2\rho_{\rm DM}} \cos\theta_{A,i}. \tag{7}\]
We note that a one-dimensional setup or even a typical planar
interferometer can in principle be totally insensitive to this effect if the polarization is orthogonal to the plane of the experiment. LPF offers the advantage that it features a full 3D sensitivity of the SC motion w.r.t. the TMs.
In general, there are two different polarization models in the literature. One of them assumes that the DPs have the same polarization everywhere in space while the other one assumes that each coherence patch has a different polarization which is distributed uniformly on the unit sphere. From our discussion of the coherence time we conclude that in the ultra-low mass regime we will not be able to tell the difference as all experiments with realistic lifetimes will only observe a single coherence patch.4 In contrast, for larger masses measuring at different times corresponds to measuring different polarizations of the DPs. Combining this with the fact that LPF has a non-trivial orbit and orientation means that a rather complex scheme is required to perform a rigorous analysis. Nevertheless, applying this information, which is in principle known, might provide additional constraining power as demonstrated in Ref. [59]. We will treat this issue in more detail in sec. 4. To conclude this discussion we emphasize that the position and orientation of the SC will not change significantly on the time scales of the observations used here [60].
Footnote 4: At the higher end of the frequency range the situation might be more promising if we allow for an observation time of several years. This can get even better if the sensitivity can be extended to higher frequencies.
Let us quickly compare our result, using the auxiliary channels between SC and TMs, to the case where \(\Delta\frac{q}{M}=0\) which corresponds to two bodies made from the same material. To the best of our knowledge, the TMs for the main interferometers in all GW searches including LPF fulfill this criterion. Additionally, any elemental impurities that could break this degeneracy are kept extremely small in order to improve the performance of the interferometer. Therefore, in this case, we have to look for subleading effects, e.g. from the phase in eq. (4) which introduces both an arm-length and a velocity suppression. This is exactly the approach followed in Ref. [53]. Only later was it re-discovered that the finite light-traveling time of the laser [61] leads to an improvement if the length scale associated with the DP mass \(1/m\) coincides with the arm length of the interferometer \(L\) as already pointed out in Refs. [52; 50].5 This "new" analysis method and the decoherence effect from the small inhomogeneity of the field will give an observable relative acceleration even for strictly equal charge-to-mass ratios. Nevertheless, this acceleration is suppressed by \(\max\left\{(\omega L)^{2},v\,\omega L\right\}\). For full scale interferometers where the arm length is on the scale of \(1/m\sim 1/\omega\) by construction this may not be a big problem. However, for LPF with its very limited arm length of \(\sim 40\,\)cm these effects will substantially suppress all limits derived following the standard methods in the literature as shown in Ref. [54]. Therefore, looking for auxiliary channels between TMs and the SC that feature \(\Delta\frac{q}{M}\neq 0\) is promising. Indeed, for a different gravitational wave interferometer, KAGRA, this observation was already utilized in Ref. [55], which enhanced the limits in the low frequency region significantly.
Footnote 5: We thank the anonymous referee for pointing out the historically correct version of how these limits were (re-)derived. At this point, it should be emphasized that this analysis method was already applied directly to LIGO/VIRGO data [62; 63].
With the prediction for the acceleration amplitude, it is straightforward to estimate the signal-to-noise ration (SNR) of an interferometer with a given relative acceleration amplitude spectral density (ASD) \(S_{a}^{1/2}(f)\) via
\[\text{SNR}=\frac{\Delta a_{i}}{S_{a}^{1/2}(f)}\sqrt{T_{\text{eff}}}\, \tag{8}\]
where \(T_{\text{eff}}\) depends on observation time \(T_{\text{obs}}\) and coherence time \(t_{c}\) via
\[T_{\text{eff}}=\begin{cases}T_{\text{obs}}\ \,&T_{\text{obs}}\leq t_{c}\\ \sqrt{T_{\text{obs}}t_{c}}\ \,&T_{\text{obs}}>t_{c}\end{cases}\, \tag{9}\]
as outlined e.g. in the appendix of Ref. [64].6
Footnote 6: Conventionally, results are quoted as ASDs for the relative displacement instead of acceleration as used by us. The translation from our ASD to this so-called strain sensitivity is fairly simple as it just requires a rescaling factor of \(\sim 1/(\omega^{2}L)\).
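To give a feeling for the numbers involved, the following sketch inverts eq. (8) for \(\mathrm{SNR}=1\). All inputs are rough, assumed values used purely for illustration: the acceleration ASD is a ballpark figure (not official LPF data), the charge-to-mass ratio difference anticipates the \(B-L\) value derived in sec. 4, and the polarization-geometry factor discussed later is ignored.

```python
import numpy as np

# Rough sensitivity estimate from eq. (8) with SNR = 1; all inputs are assumed,
# illustrative numbers (the ASD is a ballpark value, not official LPF data).
S_a   = 3e-14     # acceleration ASD                       [m s^-2 Hz^-1/2] (assumed)
dqm   = 0.018     # Delta(q/M) for B-L, SC vs TM (sec. 4)  [GeV^-1]
rho   = 0.4       # local DM density                       [GeV cm^-3]
T_obs = 5.6e5     # ~6.5 days of data                      [s]
t_c   = 4e8       # coherence time at the chosen mass      [s]

hbar_c_cm = 1.973e-14                      # GeV cm
hbar_c_m  = 1.973e-16                      # GeV m
c         = 2.998e8                        # m / s
e         = np.sqrt(4 * np.pi / 137.036)   # electromagnetic coupling

rho_nat  = rho * hbar_c_cm**3              # DM density converted to GeV^4
acc_conv = c**2 / hbar_c_m                 # 1 GeV of acceleration expressed in m s^-2

T_eff = T_obs if T_obs <= t_c else np.sqrt(T_obs * t_c)   # eq. (9)
# eqs. (7)-(8): Delta a = eps * e * Delta(q/M) * sqrt(2 rho); SNR = Delta a * sqrt(T_eff) / S_a
eps = S_a / (np.sqrt(T_eff) * e * dqm * np.sqrt(2 * rho_nat) * acc_conv)
print(f"epsilon_(B-L) reach ~ {eps:.1e}")  # a few times 1e-27 for these inputs
```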
## 3 LISA Pathfinder sensitivity
LPF [65] was a precursor mission to the planned space-borne gravitational wave interferometer LISA [66]. The mission's objective was to demonstrate that the technology developed for LISA will be able to perform as predicted under realistic space conditions. For this purpose, a SC containing two TMs was sent to the first Lagrange point of the Sun-Earth system. These TMs are \(2\,\)kg Gold-Platinum alloy cubes with a side length of \(\sim 5\,\)cm and they were placed in two separate electrode housings with an optical bench placed in between. The main aim was to keep the noise in the relative acceleration between the two free falling TMs at a level that would verify the applicability of this technology for LISA. Indeed, the test was successful and performed even better than expected [67; 68]. Such a high-precision instrument requires more scrutiny than just a single interferometer measuring the relative TM displacement. Therefore, LPF contained a radiation monitor [69], additional interferometers [70] and capacitive sensing [71]. Several of these auxiliary channels are used to avoid a collision between "TM1", the reference test mass, and the SC. The choice of a preferred TM is required as two TMs on their respective geodesics within a single SC cannot coexist without a collision. Therefore, there are measures in place to correct the trajectory of the second TM w.r.t. the reference TM.
The TMs are aligned on what the collaboration labeled the x-axis. On this axis, there is the so-called \(x_{12}\) interferometer which is the central instrument on-board as it is used to measure the relative acceleration between the TMs. For our purposes, we want to focus on another instrument, the \(x_{1}\) interferometer, which controls the SC position w.r.t. TM1. This auxiliary interferometer will be the best channel to search for DPDM over a large mass/frequency range.
As we have pointed out in sec. 2 we require knowledge about the charge-to-mass ratio and therefore the elemental composition of both the TMs and the SC. Unfortunately, the SC itself is made up of a collection of different materials and components but they are on average expected to be at much lower atomic number than Au or Pt and thus they will have different charge-to-mass ratios. This result is intuitive for \(B-L\) as the total charge of an atom is given by the neutron number and the neutron-to-proton ratio tends to increase with atomic number yielding different charge-to-mass ratios for light and heavy elements. This effect is more subtle for \(B\). The difference mainly comes from the variation in binding energies and the small mass difference between proton and neutron. This immediately provides us with an estimate for the suppression of the charge-to-mass ratio w.r.t. the \(B-L\) result: both the binding energies and the nucleon mass difference are of order MeV compared to the total nucleon masses which are at the GeV scale. Naively, this suggests a suppression factor \(\sim 10^{-3}\) which turns out to be quite accurate, as we will see in Sec. 4.
In the composition of the SC, the second most important contribution after the technology package enclosing the TMs arises from the structure of the SC which is made mostly from carbon and aluminium [72]. Indeed, the SC contains many different sub-components of similar mass and some of them will also contain elements with atomic number much larger than C or Al. A detailed analysis of the SC composition is beyond the scope of this letter and thus we will simply use a conservative lower bound on the SC-TM difference of charge-to-mass ratios. To arrive at this conservative estimate, we will assume that all components are made from the same material as the TMs except for the SC structure which we assume to be entirely made from carbon. Using this approximation and table 1 from Ref. [72], we conclude that the 450 kg SC has an 83 kg C component, while the remaining material is taken to have a charge-to-mass ratio equal to that of gold.
For a better understanding of the geometry we show an exploded view of LPF in fig. 1. An important factor in the sensitivity analysis of the SC motion against the TMs is that not just the x-axis but also the y- and z-axis are tracked where the z-axis points from the TMs to the solar array. These axes are measured via capacitive sensing which in general is less precise than the interferometers for most frequencies. Nevertheless, we have the advantage of being able to analyze the relative acceleration ASDs for all SC axes [73] and we show these results in fig. 2. They represent the simulated, data-backed sensitivities to the relative acceleration of the SC w.r.t. the TM(s) which is exactly what we are interested in for eq. (8).7 These results were obtained from a 6.5 day noise-only run in April 2016 [67]. The curves explicitly account for all known noise on SC and TMs and therefore they present the best estimate for the stability of the SC w.r.t. the TM(s). To derive limits, we will set cuts at 1 Hz and \(10^{-4}\) Hz as a careful evaluation of the highest and lowest frequencies is beyond the scope of this work. Nevertheless, a detailed analysis of the data will most likely lead to interesting constraints in these extremal regimes.
Footnote 7: In fact, the y- and z-direction is tracked w.r.t. the average of both TM coordinates while for the x-axis only the relative motion w.r.t. TM1 is measured via the additional interferometer.
Before we calculate the limits from these ASDs, let us briefly discuss their behavior. At higher frequencies, down to \(\sim 10^{-3}\) Hz, the sensitivity is limited mostly by the so-called out-of-loop noise which describes several external influences on the SC. The bump at the low frequency part of the spectrum is due to the star-tracker noise which comes from imperfections in the determination of the position of the SC. Ref. [73] argues that this low-frequency noise will most likely be mitigated in the LISA mission, pointing to an interesting avenue for future investigations of gauged DPDM. We will discuss this in more detail in sec. 4. In the extreme low frequency region we observe additional loss in sensitivity from the capacitive actuation noise experienced by the TMs. In this regime, the simulation also predicts a significantly better sensitivity than the data as shown in Ref. [73], further justifying the cuts introduced above. At peak sensitivity, the x-axis almost reaches the TM1-TM2 result, cf. the dashed black line in fig. 2 taken from Ref. [67] which is based on the same data sample.8 At this point of best sensitivity the other axes perform comparatively worse as the capacitive sensing cannot compete with the interferometer on the x-axis.
Figure 1: Exploded view of LPF showing the science module containing the test masses in its center as well as the propulsion module. Image by ESA/ATG medialab (with permission).
## 4 Results
Now let us piece together our detailed knowledge of the LPF sensitivity with our signal prediction. Table 1 shows the elemental charge-to-mass ratios for carbon and gold [74]. We ignore the Pt contribution to the TMs as its charge-to-mass ratio is close to the one of Au. We can derive the simple relation for the SC-TM difference of charge-to-mass ratio under the conservative assumptions about the SC composition of sec. 3:
\[\left(\frac{q}{M}\right)_{\rm TM}=\left(\frac{q}{M}\right)_{\rm Au}\,, \tag{10}\]
\[\left(\frac{q}{M}\right)_{\rm SC}\approx f_{\rm C}\left(\frac{q}{M}\right)_{\rm C}+(1-f_{\rm C})\left(\frac{q}{M}\right)_{\rm Au}\,, \tag{11}\]
\[\left|\Delta\left(\frac{q}{M}\right)\right|=\left|\left(\frac{q}{M}\right)_{\rm TM}-\left(\frac{q}{M}\right)_{\rm SC}\right|\approx f_{\rm C}\left|\left(\frac{q}{M}\right)_{\rm Au}-\left(\frac{q}{M}\right)_{\rm C}\right|\,, \tag{12}\]
with \(f_{\rm C}\approx 83\,{\rm kg}/450\,{\rm kg}\approx 0.18\). The last column of Table 1 shows the corresponding absolute value of this difference.9 Finally, we observe that our initial estimate of the suppression in the \(B\) charge-to-mass ratio is in good agreement with the actual calculation.
Footnote 8: Unfortunately, the full frequency range is not shown in that work.
Footnote 9: It is an unfortunate coincidence that the baryon charge-to-mass ratio is so similar for Au and C. Taking into account the true composition of the SC will alleviate this suppression.
Demanding that the SNR in eq. (8) is at most unity we get a good estimate for the LPF sensitivity on the coupling strength of the DP to the chosen gauge group. For the DM density, we assume \(\rho_{\rm DM}\simeq 0.4\,{\rm GeV}/{\rm cm}^{3}\)[75].
As noted earlier, there is a rigorous way to combine the different axes but it requires taking into account a proper convolution of SC position and orientation with all possible DP polarizations. While this procedure necessitates knowledge of the exact orbit of LPF, it will provide even stronger limits if one follows the detailed guide provided in Ref. [59]. LPF offers the advantageous feature that it is sensitive in all three spatial dimensions which means that our results cannot suffer from a "blindness" due to an unfortunate orientation of the polarization. We can always obtain a conservative estimate by taking the least sensitive axis for every frequency according to fig. 2.
In fig. 3, we show our main result for \(B-L\) as solid lines for the individual axes following the color-coding of fig. 2. Here we assume for each axis separately that the polarization is exactly aligned with the given axis, i.e. setting \(\cos\theta_{A,i}=1\) in eq. (7). We see that we get rather similar constraints from all axes except for the better peak sensitivity of the x-axis. Following the above argument and taking the upper envelope, i.e. just considering the weakest limit for every mass, we can obtain a conservative combination of the limits.
Keeping this in mind, we will nevertheless opt for a more optimistic way to simplify the visualization of additional forecasts and later the results for \(B\). Following Ref. [53] we perform an average over all possible velocities and polarizations. While this is technically not the most conservative assumption for these long coherence times, we adopt this approach to facilitate comparison with previous LPF limits [54] and LISA projections [53; 61] (see also appendix A of Ref. [76] or Ref. [77].). As our limits are independent of the velocity of the DPs, the resulting "geometry factor" is \(1/\sqrt{3}\) as compared to the usual result of \(1/3\). Then, instead of taking the upper envelope we will use the lower envelope, i.e. the strongest limit for every mass, multiplied by this suppression factor. We will refer to this as the envelope simplification.10
Footnote 10: Most of the limits we show with this method are more optimistic projections anyways. Only for the solid blue line of the \(B\) limits in fig. 4 and the blue region in fig. 5 we should keep the shape of all three axes in mind.
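A toy Monte Carlo makes the \(1/\sqrt{3}\) number quoted above plausible, under the assumption that it simply corresponds to the root-mean-square projection of an isotropically distributed polarization onto a single instrument axis.

```python
import numpy as np

# RMS projection of an isotropic polarization onto one axis: sqrt(<cos^2>) = 1/sqrt(3)
rng = np.random.default_rng(0)
cos_theta = rng.uniform(-1.0, 1.0, 1_000_000)   # isotropy <=> uniform cos(theta)
print(np.sqrt(np.mean(cos_theta**2)), 1.0 / np.sqrt(3.0))
```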
Using this approach we also include an estimate of the improved reach of LPF as a dashed blue line taking into account the whole data set and using the improved understanding of the detector noise and reduction of Brownian noise in the later stages of the mission [68].11 We demonstrate the impact of the observation time by also including the blue dotted line which assumes the same sensitivity as the solid lines but we set the observation time to the coherence time for each frequency. This explains why the high frequency sensitivity is similar to the "fixed observation time" scenario: for the highest frequencies available, i.e. around \(1\,{\rm Hz}\), the coherence time is about \(10^{6}\,{\rm s}\) which is roughly on the time scale of a week, coinciding with the real observation time used to model the sensitivity
\begin{table}
\begin{tabular}{l c c c} \hline \hline Material & Au & C & SC-TM \\ \hline \(\left(\frac{q}{M}\right)_{B-L}\) in \({\rm GeV}^{-1}\) & 0.64 & 0.54 & 0.018 \\ \(\left(\frac{q}{M}\right)_{B}\) in \({\rm GeV}^{-1}\) & 1.0736 & 1.0737 & \(1.8\cdot 10^{-5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Charge-to-mass ratios for Au, C, and the difference between these two elements rescaled to our estimate for the SC composition.
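The \(B-L\) row of Table 1 can be cross-checked from standard atomic data, as in the short snippet below, which uses the same composition assumption (\(f_{\rm C}\approx 0.18\)). The atomic masses are textbook values; the \(B\) row is not reproduced here since it depends on binding-energy and mass-convention details at the \(10^{-4}\) level.

```python
u_GeV = 0.931494                    # atomic mass unit in GeV

elements = {                        # element: (Z, A, atomic mass in u)
    "Au": (79, 197, 196.967),
    "C":  (6,  12,  12.000),
}

# (q/M)_{B-L} = (A - Z) / M_atom, since B - L of a neutral atom is its neutron number
q_over_m = {el: (A - Z) / (m * u_GeV) for el, (Z, A, m) in elements.items()}
f_C = 83.0 / 450.0                  # assumed carbon mass fraction of the SC

delta = f_C * abs(q_over_m["Au"] - q_over_m["C"])
print({el: round(v, 2) for el, v in q_over_m.items()})   # {'Au': 0.64, 'C': 0.54}
print(f"Delta(q/M)_B-L ~ {delta:.3f} GeV^-1")            # close to the 0.018 of Table 1
```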
Figure 2: Sensitivity of the LPF SC acceleration w.r.t. the TM(s). Red shows the interferometer sensitivity while green and orange are found from the capacitive sensors in the housing averaged over both TMs; data from [73]. The dashed black line shows the maximum sensitivity of the LPF TM-TM measurement based on the same data set; taken from [67]
curve. At the low-frequency end of the spectrum, the coherence times approach the scale of millennia, making the dashed line much stronger than the other limits.
To demonstrate the power of our approach we compare it to three major results from the literature. We see that our analysis is able to cover new parameter space beyond the otherwise dominant limits set by the fifth-force search interpretation of the MICROSCOPE experiment [46]. Furthermore, it is immediately clear that our analysis can easily outperform previous LPF limits [54] just because there is no need to rely on the decoherence of the field which is the dominant effect if one only considers the two test masses. In fact, the improvement of our limits over the naive results that can be obtained from the decoherence method evaluated around our peak sensitivity at \(\sim 5\cdot 10^{-18}\,\mathrm{eV}\) is given by
\[\frac{\epsilon_{B-L,\mathrm{sat}}}{\epsilon_{B-L,\mathrm{dec}}}\sim\frac{\Delta\frac{q}{M}}{\frac{q}{M}}\left(mvL\right)^{-1}\sim 3\cdot 10^{12}\, \tag{13}\]
ignoring the small difference in sensitivity between the \(x_{12}\) and the \(x_{1}\) channels at this mass. The first factor takes into account that our method suffers from a mild charge-to-mass ratio suppression w.r.t. the decoherence method whereas the second factor comes from the smallness of the decoherence on a length scale of \(40\,\mathrm{cm}\). We note that the limits found in Ref. [54] are better than naively expected from our analysis method, presumably because of their more sophisticated statistical analysis. This observation makes us confident in the potential reach of our approach for future analyses using all the available data. The third literature result is a LISA forecast using the conventional analysis method [61].
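For orientation, the size of this factor can be reproduced with the representative inputs quoted in the text (mass near peak sensitivity, \(v\sim 10^{-3}\), \(L\approx 40\,\)cm) and the \(B-L\) charge-to-mass values of Table 1.

```python
hbar_c_eV_m = 1.973e-7               # eV m
m, v, L     = 5e-18, 1e-3, 0.4       # DP mass [eV], halo velocity [c], arm length [m]

qm_suppression = 0.018 / 0.64        # Delta(q/M) / (q/M) for B-L (Table 1)
mvL = m * v * L / hbar_c_eV_m        # dimensionless decoherence factor
print(f"improvement ~ {qm_suppression / mvL:.0e}")   # ~3e12, as in eq. (13)
```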
Before turning to the LISA projections let us discuss the results for \(B\) shown in fig. 4. We lose around 3 orders of magnitude in sensitivity which can be explained by the stronger charge-to-mass ratio suppression in eq. (13) for \(B\). This becomes immediately clear in the comparison of our results to the decoherence limits from LPF which do not suffer from this issue as they scale with the total charge-to-mass ratio. Nevertheless, the decrease in sensitivity of the Equivalence Principle limits due to the same effect still allows us to probe a small region of new parameter space and makes an extended study of LPF (and LISA) auxiliary channels very attractive as it will cover a significant amount of new parameter space. Additionally, we added limits from the baryon number anomaly [17] which are non-existent for \(B-L\).
The limits shown as solid lines are rather robust and include conservative estimates on several different levels. Now we will take a more optimistic point of view and focus especially on the future LISA mission. As noted earlier in eqs. (8) & (9), longer observation times up to one coherence time are extremely efficient at enhancing the limits. With the peak sensitivity of LPF lying at around \(10^{-3}\,\mathrm{Hz}\), it would be ideal to have data for around 30 years, which of course is far beyond the actual lifetime of the mission. Nevertheless, the LISA mission may take data for up to 10 years [66], which means that it naively maximizes the efficiency for frequencies around \(\sim 3\cdot 10^{-3}\,\mathrm{Hz}\). Together with a general decrease in the noise this will allow for probing the ultra-low frequency parameter space complementary to the previous LISA forecasts.12
Footnote 12: Optimistically, we will assume a factor 10 improvement from the LPF sensitivity in 2016 and mitigation of the star tracker noise for our projections.
Previous projections using the planned arm length of around \(2.5\cdot 10^{6}\,\mathrm{km}\) significantly cut into unexplored parameter space as shown in figs. 3 & 4. These limits are based on looking for TM-TM displacements using the light-traveling time method and they are strongest around masses of \(10^{-16}\,\mathrm{eV}\). However, decreasing the mass by just one order of magnitude already introduces a decline of the limits by a factor of at least 100. In contrast to that, our method is well-suited for the lowest frequencies available because eq. (8) does not depend on the arm length at
Figure 4: Limits on the rescaled coupling to \(B\), \(\epsilon_{B}\), of DPDM. While most of the limits in fig. 3 are quite similar for baryon number, there are additional limits from the anomalous nature of this gauge group [17] shown as a solid black line.
Figure 3: Limits on the rescaled coupling to \(B-L\), \(\epsilon_{B-L}\), of DPDM. In grey we show the DM-independent limits from searches for violation of the Equivalence Principle [46] and the dark red filled region shows the LPF limits derived from decoherence in Ref. [54]. The dark red dotted line shows the forecast from LISA [61]. In red, green and orange we show the main result of this paper. Forecasts for similar analyses are shown in blue using the envelope simplification explained in the text.
all. Thus, there is no suppression of the constraints for low frequencies, i.e. large coherence lengths, except for the intrinsic sensitivity loss of the instrument. Indeed, the enhanced reach of our limits at small masses agrees very well with the findings of Ref. [55] using the KAGRA auxiliary channels. Even though these auxiliary channels are at best as sensitive as the main interferometer they clearly outperform the conventional limits in the low mass region. In conclusion, the LISA mission will provide us with a very powerful tool to constrain the interactions of DPDM when combining the main channel analysis with the auxiliary channel analysis. These limits, spanning several orders of magnitude in mass, will reach deeply into unprobed parameter space.
As mentioned in sec. 2 our approach is not limited to \(B-L\) and \(B\). In fact, there is a plethora of additional gauge groups \(g\) that will have very similar limits. These limits just require a proper rescaling procedure depending on the type of coupling. As noted earlier, there are essentially two types of couplings in our problem when it comes to analyzing observations involving different elements. The first one ("\(B-L\)-like") is essentially sensitive to different neutron-to-proton ratios of the different elements while the second one ("\(B\)-like") relies on the smaller differences in binding energies for different nuclei. Limits that instead depend on the total charge-to-mass ratio do not suffer from this "binding energy suppression" as can be seen from the small changes between the LISA projections and the previous LPF limits when going from fig. 3 to fig. 4. For arbitrary gauge groups with a given combination of baryon and lepton number \(\alpha B-\beta L\) we find that the charge-to-mass ratio can change from element to element.13 Therefore, ignoring different isotopes and changing to nuclear physics notation we find for an element with atomic number \(Z=L\) and mass number \(A=B\)
Footnote 13: For simplicity, we take \(L=L_{e}\) here as \(L_{\mu}\) and \(L_{\tau}\) will give no contribution.
\[\left(\frac{q}{M}\right)_{\alpha B-\beta L}\simeq\frac{\alpha A-\beta Z}{Am_{p }}=\alpha\frac{1}{m_{p}}-\beta\frac{Z/A}{m_{p}}\, \tag{14}\]
where \(m_{p}\) denotes the proton mass.
If instead we are interested in the difference between two elements we find
\[\Delta\left(\frac{q}{M}\right)_{\alpha B-\beta L}=\begin{cases}\alpha\Delta \left(\frac{q}{M}\right)_{B}\,&\beta=0\\ \beta\Delta\left(\frac{q}{M}\right)_{B-L}\,&\text{else}\end{cases}\, \tag{15}\]
using our results for the total charge-to-mass ratios from before. This makes the distinction between "\(B-L\)-like" and "\(B\)-like" immediately clear. Only a gauge group without coupling to electron number will suffer from the binding energy suppression. The interesting observation is that the calculation for \(\beta\neq 0\) is already enough to rescale our limits to all possible gauge groups fulfilling this criterion. We present a selection of groups in table 2. The second column shows the rescaling for the relative charge-to-mass ratio and the third column shows the rescaling for the total charge-to-mass ratio. We note that the decoherence/light-traveling limit rescalings are technically only valid for Au and the \(\beta=0\) ones are only valid for Au-C systems. Nevertheless, the rescalings for these cases will still give solid approximations for the true rescaling factor to arbitrary elements.
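A minimal sketch implementing the rescaling of eqs. (14) and (15) is given below; it is isotope-averaged and purely illustrative.

```python
m_p = 0.938   # proton mass in GeV, used as the common nucleon mass scale in eq. (14)

def q_over_m(alpha, beta, Z, A):
    """Approximate charge-to-mass ratio of eq. (14) in GeV^-1."""
    return (alpha * A - beta * Z) / (A * m_p)

def delta_q_over_m(alpha, beta, dqm_B, dqm_BL):
    """Element-to-element difference, rescaled as in eq. (15)."""
    return alpha * dqm_B if beta == 0 else beta * dqm_BL

# example: alpha = beta = 1 recovers the B-L numbers used before
print(q_over_m(1, 1, 79, 197))                 # ~0.64 GeV^-1 for gold
print(delta_q_over_m(1, 1, 1.8e-5, 0.018))     # 0.018 GeV^-1, the B-L value
```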
In sec. 2 we neglected any contribution from kinetic mixing. Let us briefly discuss the main reasons why this is well justified. First of all, in-medium effects lead to an effective suppression of the kinetic mixing if the plasma mass \(\omega_{p}=\sqrt{4\pi\alpha n/m_{e}}\) is larger than the DP mass [38], i.e. \(\epsilon_{\text{KM,eff}}\propto m^{2}/\omega_{p}^{2}\). \(m_{e}\) denotes the electron mass and \(n\sim 5e^{-}/\text{cm}^{3}\) denotes the electron density in the interplanetary medium close to Earth [78] implying a plasma mass of \(\sim 10^{-10}\,\text{eV}\) which is much larger than our mass range of interest. Secondly, both the TMs and the SC are essentially electrically neutral [79]. Finally, the SC acts like a Faraday cage for the TMs [80]. Of course, the plasma will also interact with gauged DPs [81] but if we consider the plasma mass of the DPs due to their direct coupling to SM particles \(\omega_{p,g}\sim\epsilon_{g}\omega_{p}\) we see that the effects are very small.
Finally, let us put the constraints derived in this letter into larger context for gauged \(B-L\) using the excellent collection of limits from Ref. [82] shown in fig. 5. Note that the y-axis shows the gauge coupling \(g_{B-L}=\epsilon_{B-L}e\). In addition to the limits shown above, one can also consider equivalence principle violation searches as direct detection experiments in a similar mass range [51]. These limits are quite similar to our work as they also search for a monochromatic DPDM signal on a "\(B-L\)-dipole" test mass. Several additional projections are shown in this plot coming from asteroids [76], atomic interferometry [83], space-based quantum sensors [84], and future torsion balance experiments [50]. We see that neither LPF nor LISA is expected to have the best sensitivity in the long run but as LPF already has available data, this makes it the leading limit over almost two orders of magnitude in mass and at peak sensitivity it outperforms the current limits by more than two orders of magnitude in the gauge coupling. Furthermore, we have outlined why and how a detailed analysis of the LPF data can push the sensitivity providing excellent motivation for further work.
## 5 Conclusion
In this work we demonstrated how to improve existing DP limits based on the LPF data. The novel idea is to use the auxiliary measurements of the acceleration between the SC and the TMs to constrain the coupling strength of gauged \(B-L\) and \(B\) DPDM, in analogy to the use of auxiliary channels in KAGRA [55]. The main advantage is the different atomic composition of the test masses and the spacecraft, which leads to a relative acceleration. Relying only on the measurement between the two TMs leads to extremely suppressed limits, as the TMs react identically to the DPDM field. The existing literature focused on decoherence and light-traveling time effects, which weakly break this degeneracy at the cost of a massive suppression at low frequencies, where the arm length is much smaller than the wavelength. While our new limits are also moderately suppressed by the similar charge-to-mass ratios
they are free from any arm length suppression and can therefore rely on the auxiliary channels working at almost full sensitivity. For LPF, the auxiliary channels at their peak frequency are not significantly noisier than the main (\(x_{12}\)) channel, which is an important advantage for our work. Furthermore, we can cover all three spatial dimensions with the auxiliary channels, which prevents a potential blindness towards specific DP polarizations.
We showed that even conservative estimates of the LPF results are already able to probe a sizeable region of new parameter space in the \(B-L\) case, and at least a small region for \(B\) for masses around \(5\cdot 10^{-18}\) eV. Our approach offers an enhancement of up to about 12 orders of magnitude over the most naive analysis of \(B-L\). It is therefore likely that a detailed analysis of the whole LPF data set will set even better, and thus world-leading, limits over a considerable mass range. Additionally, this work motivates a rigorous analysis of the reach of LISA using auxiliary channels, as our approach might be highly complementary to the previous forecasts.
## Acknowledgements
We would like to thank Andreas Ringwald for useful discussions. JF is grateful to Fermilab for its hospitality. This project has received funding from the European Union's Horizon Europe research and innovation programme under the Marie Sklodowska-Curie Staff Exchange grant agreement No 101086085 - ASYMMETRY and the ITN HIDDeN grant agreement No 860881 - HIDDeN, and from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306 and through the Emmy Noether Grant No. KA 4662/1-2.
|
2301.10134 | Bipartite Graph Diffusion Model for Human Interaction Generation | The generation of natural human motion interactions is a hot topic in
computer vision and computer animation. It is a challenging task due to the
diversity of possible human motion interactions. Diffusion models, which have
already shown remarkable generative capabilities in other domains, are a good
candidate for this task. In this paper, we introduce a novel bipartite graph
diffusion method (BiGraphDiff) to generate human motion interactions between
two persons. Specifically, bipartite node sets are constructed to model the
inherent geometric constraints between skeleton nodes during interactions. The
interaction graph diffusion model is transformer-based, combining some
state-of-the-art motion methods. We show that the proposed method achieves new
state-of-the-art results on leading benchmarks for the human interaction
generation task. | Baptiste Chopin, Hao Tang, Mohamed Daoudi | 2023-01-24T16:59:46Z | http://arxiv.org/abs/2301.10134v2 | # Bipartite Graph Diffusion Model for Human Interaction Generation
###### Abstract
The generation of natural human motion interactions is a hot topic in computer vision and computer animation. It is a challenging task due to the diversity of possible human motion interactions. Diffusion models, which have already shown remarkable generative capabilities in other domains, are a good candidate for this task. In this paper, we introduce a novel bipartite graph diffusion method (BiGraphDiff) to generate human motion interactions between two persons. Specifically, bipartite node sets are constructed to model the inherent geometric constraints between skeleton nodes during interactions. The interaction graph diffusion model is transformer-based, combining some state-of-the-art motion methods. We show that the proposed method achieves new state-of-the-art results on leading benchmarks for the human interaction generation task.
## 1 Introduction
Modeling dynamics of human motion interaction is at the core of many applications in computer vision and computer graphics. Most works on human motion generation ignore human interactions and focus instead on the generation of actions of a single person [14, 15]. In this paper, we explore the problem of generating 3D human motion interaction. What makes interaction generation challenging are the non-linearity of human motion interaction and the diversity of the interaction between humans. Several questions arise to tackle these challenges. How to represent the interaction between humans? How to model motion and generate diverse motion interaction? To solve the first question, we propose to represent the skeleton interaction by using a bipartite graph [15]. The main goal of the bipartite graph is to capture the relations between humans represented by skeletons. To solve the second question, the motion interaction generation is formulated as a reverse diffusion process. Overall, our contributions are summarized as follows:
* We propose the first Bipartite graph denoising diffusion model (BiGraphDiff) for human interaction generation. Our BiGraphDiff is able to generate motion interaction in a stochastic way, naturally leading to high diversity, and is able to generate very long motion sequences (\(>\)1000 frames).
* BiGraphDiff is a denoising diffusion process that learns not only to denoise the motion but also a bipartite graph, whose aim is to capture the relations between the two persons.
* BiGraphDiff achieves state-of-the-art results quantitatively and qualitatively on action interaction and dance tasks. A user study shows that the generated sequences are qualitatively better than those generated by state-of-the-art methods.
## 2 Related Work
We discuss the relevant literature from two perspectives, namely, previous methods of Human interaction motion synthesis and the literature on diffusion models.
**Human Interaction Motion Generation.** Recently there has been an increase in motion generation based on different modalities: [13] use control signals such as the global trajectory of the person to generate human motion over long-term horizons, while [1] and [1] generate motion based on speech audio. Meanwhile, others use only knowledge of the past motion, which allows them to work in real-time but only on shorter motions [12, 13, 14]. More recently, several works have been dedicated to human pose and motion generation from text or action labels, as well as the reciprocal task [11, 12]. These papers focus only on one person, while our approach is dedicated to the generation of two-person interactions. [1] propose a multimodal variational recurrent neural network to predict the future motion of both participants in an interaction based on past sequences of motion. In contrast, we propose to generate the human interaction between two persons.
**Generative Diffusion Models.** Diffusion models [10, 13] have shown great promise in terms of generative modeling by showing impressive results in synthesis applications ranging from image generation [1], audio-driven motion synthesis [1], molecule generation [Hoogeboom _et al._2022], to text-driven motion generation [Saharia _et al._2022]. More recently, some concurrent work in the field of text-to-motion introduces diffusion-based methods for generating text-conditioned motion. For example, [Zhang _et al._2022] propose MotionDiffuse, a diffusion model-based text-driven motion generation framework. [Tseng _et al._2022] propose EDGE, a method for generating editable dances that is able to create a realistic dance while remaining faithful to the original music. [Dabral _et al._2022] introduce MoFusion, a denoising-diffusion-based framework for high-quality conditional human motion synthesis that can generate long and temporally plausible motions conditioned on music or text. Despite achieving impressive performance, these methods use diffusion to generate the motion of only one person. In contrast, our proposed BiGraphDiff generates the interaction between two persons and learns a bipartite graph during the diffusion process. In addition, BiGraphDiff is applied to both text-to-motion and text-to-dance, and it is able to generate long sequences of dance motion.
## 3 Bipartite Graph Diffusion Model
### Framework Overview
Our goal is to generate a human motion interaction \(x^{1:N}\) given an arbitrary condition \(c\). Let us consider \(x^{1:N}\)=\(\{x^{1},\ldots,x^{N}\}\), an arbitrary sequence of joints that compose the two skeletons, with \(x^{i}\)\(\in\)\(\mathbb{R}^{k\times 3\times 2}\), where \(k\) is the number of joints. The motion generation is formulated as a reverse diffusion process that samples random noise \(x^{1:N}_{T}\) from a noise distribution and denoises it into a motion sequence, while the forward process successively corrupts the motion sequence \(x^{1:N}_{0}\) by adding noise for \(T\) timesteps in a Markov fashion. We propose a Transformer to learn the denoising function and a bipartite graph to represent the relationship between the joints of the two skeletons. The proposed Transformer learns not only the denoising function but also the bipartite graph. See Fig. 1 for an overview.
### Diffusion for Motion Generation
The diffusion model consists of two separate processes called forward diffusion and reverse diffusion. During the forward diffusion process, we repeatedly add a small amount of Gaussian noise to the real data until the data becomes Gaussian noise. Formally, the forward process on a real sample from a real data distribution \(x^{1:N}_{0}\)\(\sim\)\(q(x^{1:N})\) consists of a Markov chain that gradually adds noise following a variance schedule \(\beta_{t}\) to obtain the posterior \(q(x^{1:N}_{1:T}|x^{1:N}_{0})\), with \(x^{1:N}_{1}\) to \(x^{1:N}_{T}\) the latent data:
\[\begin{split} q(x^{1:N}_{1:T}|x^{1:N}_{0})&:=\prod _{t=1}^{T}q(x^{1:N}_{t}|x^{1:N}_{t-1}),\\ q(x^{1:N}_{t}|x^{1:N}_{t-1})&:=\mathcal{N}(x^{1:N }_{t};\sqrt{1-\beta_{t}}x^{1:N}_{t-1},\beta_{t}\mathbf{I}).\end{split} \tag{1}\]
Eventually with \(T\)\(\rightarrow\)\(+\)\(\infty\) the distribution will be close to \(\mathcal{N}(\mathbf{0},\mathbf{I})\). This formulation implies that the forward process is recursive but this can be avoided by using [Ho _et al._2020b] formulation:
\[q(x^{1:N}_{t}|x^{1:N}_{0})=\sqrt{\overline{\alpha}_{t}}x^{1:N}_{0}+\epsilon \sqrt{1-\overline{\alpha}_{t}},\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I}), \tag{2}\]
with \(\overline{\alpha_{t}}\)=\(\prod_{i=0}^{t}\alpha_{i}\) and \(\alpha_{t}\)=\(1-\beta_{t}\). With this formulation, we can sample a noise \(\epsilon\) and directly generate any \(x^{1:N}_{t}\). Forward diffusion does not require any training but only gradually adds noise to real data. To generate motions, we need to be able to obtain clean data from noisy data to reverse the forward process.
The reverse diffusion process, \(p_{\theta}(x^{1:N}_{0:T})\), is a Markov chain that eliminates the noise from \(x^{1:N}_{T}\) recursively until we obtain \(x^{1:N}_{0}\). With \(p(x^{1:N}_{T})=\mathcal{N}(x^{1:N}_{T};\mathbf{0},\mathbf{I})\):
\[\begin{split}& p(x^{1:N}_{0:T}):=p(x^{1:N}_{T})\prod_{t=1}^{T}p_{ \theta}(x^{1:N}_{t-1}|x^{1:N}_{t}),\\ & p_{\theta}(x^{1:N}_{t-1}|x^{1:N}_{t}):=\mathcal{N}(x^{1:N}_{t-1 };\mu_{\theta}(x^{1:N}_{t},t,c),\Sigma_{\theta}(x^{1:N}_{t},t,c)).\end{split} \tag{3}\]
During the denoising process the goal is to estimate \(\mu_{\theta}(x^{1:N}_{t},t,c)\) and \(\Sigma_{\theta}(x^{1:N}_{t},t,c)\). However, if we use Eq. (2) formulation and [Ho _et al._2020b] method then we can set \(\Sigma_{\theta}(x^{1:N}_{t},t,c)\)=\(\sigma^{2}_{t}\mathbf{I}\) with \(\sigma_{t}\) a constant and replace \(\mu_{\theta}(x^{1:N}_{t},t,c)\) as follow:
\[\mu_{\theta}(x^{1:N}_{t},t,c)=\frac{1}{\sqrt{\alpha_{t}}}(x^{1:N}_{t}-\frac{1 -\alpha_{t}}{\sqrt{1-\overline{\alpha_{t}}}}\epsilon_{\theta}(x^{1:N}_{t},t,c )), \tag{4}\]
this means that we only need to estimate \(\epsilon_{\theta}(x^{1:N}_{t},t,c)\) to be able to denoise the latent data since we can recover \(x^{1:N}_{t-1}\) using:
\[x^{1:N}_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}(x^{1:N}_{t}-\frac{1-\alpha_{t}}{ \sqrt{1-\overline{\alpha_{t}}}}\epsilon_{\theta}(x^{1:N}_{t},t,c))+\sigma_{t}\gamma, \tag{5}\]
with \(\gamma\)\(\sim\)\(\mathcal{N}(\mathbf{0},\mathbf{I})\). In our model we set \(\sigma_{t}^{2}\)=\(\beta_{t}\frac{1-\overline{\alpha}_{t-1}}{1-\overline{\alpha}_{t}}\), following the recommendation of [Ho _et al._2020b]. To estimate \(\epsilon_{\theta}(x^{1:N}_{t},t,c)\) we train a Bipartite Graph Interaction Transformer (defined in Sec. 3.3) to minimize the loss:
\[\begin{split} L:=& E_{t\in[1,T],x^{1:N}_{0}\sim q(x^{1:N }_{0}),\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})}[||\epsilon-\epsilon_{ \theta}(x^{1:N}_{t},t,c)||^{2}]\\ :=& E_{t\in[1,T],x^{1:N}_{0}\sim q(x^{1:N}_{0}), \epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})}[||\epsilon-\epsilon_{\theta}( \sqrt{\overline{\alpha_{t}}}x^{1:N}_{0}\\ &+\epsilon\sqrt{1-\overline{\alpha_{t}}},t,c)||^{2}].\end{split} \tag{6}\]
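The loss above and the reverse update of Eq. (5) can be sketched as follows, reusing the `q_sample` helper from the previous sketch. The `model` argument stands in for the Bipartite Graph Interaction Transformer \(\epsilon_{\theta}(x_{t},t,c)\), and the posterior variance follows the [Ho _et al._2020b] choice; this is an illustrative sketch rather than the exact training and sampling code.

```python
import torch

def diffusion_loss(model, x0, c, alphas_cumprod):
    """Noise-prediction objective of Eq. (6)."""
    t = torch.randint(0, len(alphas_cumprod), (x0.shape[0],), device=x0.device)
    x_t, eps = q_sample(x0, t, alphas_cumprod)          # forward noising, Eq. (2)
    return torch.mean((eps - model(x_t, t, c)) ** 2)

@torch.no_grad()
def p_sample_step(model, x_t, t, c, betas, alphas_cumprod):
    """One reverse step of Eq. (5): recover x_{t-1} from x_t."""
    alpha_t, a_bar_t = 1.0 - betas[t], alphas_cumprod[t]
    t_batch = torch.full((x_t.shape[0],), t, device=x_t.device)
    eps_hat = model(x_t, t_batch, c)
    mean = (x_t - (1.0 - alpha_t) / torch.sqrt(1.0 - a_bar_t) * eps_hat) / torch.sqrt(alpha_t)
    if t == 0:
        return mean
    sigma_t = torch.sqrt(betas[t] * (1.0 - alphas_cumprod[t - 1]) / (1.0 - a_bar_t))
    return mean + sigma_t * torch.randn_like(x_t)
```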
### Bipartite Graph Interaction Transformer
The Bipartite Graph Interaction Transformer used by BiGraphDiff is based on the original Transformer [Vaswani _et al._2017]. It is composed of a text encoder, embedding and positional encoding layers, self-attention modules, cross-attention modules, a Bipartite graph module, feed-forward modules, and a final linear layer. We input \(x^{1:N}_{t}\) and \(c\) to obtain \(\epsilon_{\theta}(x_{t},t,c)\).
**Text Encoder.** The text encoder is used to encode the class label \(c\). We use a simple four-layer Transformer encoder, as described in [Vaswani _et al._2017], that uses multi-head self-attention. To avoid training the encoder from scratch, we initialize the weights with those of CLIP [Radford _et al._2021].
**Motion Decoder.** The motion decoder uses \(x^{1:N}_{t}\) and the output of the text encoder to obtain \(\epsilon_{\theta}(x_{t},t,c)\). First we split \(x^{1:N}_{t}\) into \(x^{1:N}_{1,t}\) and \(x^{1:N}_{2,t}\) which represent the first and second skeleton, respectively. Each skeleton passes through an
embedding layer followed by a positional encoding layer introduced by [23] that encodes the temporal information from each frame of the sequence.
**Self-Attention and Cross-Attention.** The data then goes through self-attention and cross-attention layers. Attention is used to find correlations within the data and is defined as
\[\mathrm{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathrm{softmax}\left( \frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d}}\right)\mathbf{V}, \tag{7}\]
where \(\mathbf{Q}\), \(\mathbf{K}\), and \(\mathbf{V}\) are the query, key, and value matrices that have the same size as \(x_{1,t}^{1:N}\) and \(d{=}k*3\) the dimension of one frame from \(x_{1,t}^{1:N}\). \(\mathbf{Q}\), \(\mathbf{K}\), and \(\mathbf{V}\) are defined for self-attention for the first skeleton as
\[Q=x_{1,t}^{1:N}\mathbf{W}_{q},\quad K=x_{1,t}^{1:N}\mathbf{W}_{k},\quad V=x_{1,t}^{1:N}\mathbf{W}_{v}, \tag{8}\]
and for cross-attention
\[Q=x_{1,t}^{1:N}\mathbf{W}_{q},\quad K=c_{emb}\mathbf{W}_{k},\quad V=c_{emb} \mathbf{W}_{v}, \tag{9}\]
where \(\mathbf{W}_{q}\), \(\mathbf{W}_{k}\), and \(\mathbf{W}_{v}\) are the weight matrices for the projection and \(c_{emb}\) is the text embedding from the text encoder. This type of attention is also used in the text encoder. The issue with this attention is its complexity, which is \(\mathcal{O}(N^{2}d)\). This means that long sequences take a very long time to process, in addition to requiring a lot of memory. To solve this issue, and following a similar observation from [15], we use efficient attention instead. Efficient attention was introduced by [20] to achieve a linear complexity by calculating a global feature map instead:
\[\begin{split}\mathbf{F}&=\mathrm{softmax}(\mathbf{K} ^{T})\mathbf{V},\\ \mathrm{Attention}&=\mathrm{softmax}(\mathbf{Q}) \mathbf{F}.\end{split} \tag{10}\]
This simple modification gives a complexity of \(\mathcal{O}(d_{h}^{2}Nh)\) for the attention, with \(h\) the number of heads and \(d_{h}\) the dimension of each head. Since \(d_{h}\) is a fixed value much smaller than \(N\), the complexity is much lower than that of standard attention. These heads are part of multi-head attention, a concept introduced by [23], where the inputs are split into smaller parts of size \(d_{h}\). Each part is fed to a head that contains its own attention module, and the outputs of all the heads are then concatenated. We use 8 heads for both self-attention and cross-attention. Each attention layer (self and cross) is followed by a stylization block. This module, introduced by [15], allows the generative process to keep track of the current diffusion timestep \(t\), improving the generation. The output of this module is added to the input of the attention through a residual connection.
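The efficient attention of Eq. (10) can be sketched in a multi-head form as follows. The normalisation axes (keys over positions, queries over channels) follow the formulation of [20]; the module is an illustrative stand-in rather than the exact implementation.

```python
import torch
import torch.nn as nn

class EfficientAttention(nn.Module):
    """Linear-complexity attention of Eq. (10): a global feature map F = softmax(K)^T V
    is built first, then attended to by softmax(Q)."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads, self.dim_head = heads, dim // heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, context=None):
        context = x if context is None else context      # self- vs cross-attention
        B = x.shape[0]

        def split(t):                                    # (B, N, dim) -> (B, h, N, d_h)
            return t.view(B, -1, self.heads, self.dim_head).transpose(1, 2)

        q, k, v = split(self.to_q(x)), split(self.to_k(context)), split(self.to_v(context))
        k = k.softmax(dim=-2)                            # normalise keys over positions
        q = q.softmax(dim=-1)                            # normalise queries over channels
        ctx = torch.einsum('bhnd,bhne->bhde', k, v)      # global feature map, O(N * d_h^2)
        out = torch.einsum('bhmd,bhde->bhme', q, ctx)    # attend with the queries
        out = out.transpose(1, 2).reshape(B, -1, self.heads * self.dim_head)
        return self.proj(out)
```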
**Bipartite Graph.** Following the self-attention and cross-attention module, both skeletons, \(z_{1,t}^{1:N}\) and \(z_{2,t}^{1:N}\) go through the bipartite graph module. The proposed bipartite graph aims to capture the long-range cross relations between the two skeletons \(S_{a}{=}z_{1,t}^{1:N}\) and \(S_{b}{=}z_{2,t}^{1:N}\) in a bipartite graph via GCNs. Each node in \(S_{a}\) is connected to all the nodes in \(S_{b}\). Firstly, \(S_{a}\) and \(S_{b}\) are separately fed into two encoders to obtain the feature \(F_{a}\) and \(F_{b}\), respectively.
Figure 1: **BiGraphDiff overview.** Top: the forward diffusion process to add noise to the motion sequence. Middle: the proposed Bipartite Graph Interaction Transformer to learn the denoising function. Bottom reverse diffusion process to generate motion sequence from noise.
We then reduce the dimension of \(F_{a}\) with the function \(\varphi_{a}(F_{a}){\in}\mathbb{R}^{C\times D_{a}}\), where \(C\) is the number of feature map channels, and \(D_{a}\) is the number of nodes of \(F_{a}\). Meanwhile, we reduce the dimension of \(F_{b}\) with the function \(\theta_{b}(F_{b}){=}H_{b}^{\intercal}{\in}\mathbb{R}^{D_{b}\times C}\), where \(D_{b}\) is the number of nodes of \(F_{b}\). Next, we project \(F_{a}\) to a new feature \(V_{a}\) in a bipartite graph using the projection function \(H_{b}^{T}\). Thus we have:
\[V_{a}=H_{b}^{\intercal}\varphi_{a}(F_{a})=\theta_{b}(F_{b})\varphi_{a}(F_{a}), \tag{11}\]
where both functions \(\theta_{b}(\cdot)\) and \(\varphi_{a}(\cdot)\) are implemented using a \(1{\times}1\) convolutional layer. This results in a new feature \(V_{a}{\in}\mathbb{R}^{D_{b}\times D_{a}}\) in the bipartite graph, which represents the cross relations between the nodes of the skeleton \(F_{b}\) and the skeleton \(F_{a}\).
After projection, we employ a fully connected bipartite graph with adjacency matrix \(A_{a}{\in}\mathbb{R}^{D_{b}\times D_{b}}\). We then use a graph convolution to learn the long-range cross relations between the nodes from both skeletons, which can be represented as:
\[M_{a}=(\mathrm{I}-A_{a})V_{a}W_{a}, \tag{12}\]
where \(W_{a}{\in}\mathbb{R}^{D_{a}\times D_{a}}\) denotes the trainable edge weights. We use Laplacian smoothing [3, 10] to propagate the node features over the bipartite graph. The identity matrix \(\mathrm{I}\) can be viewed as a residual sum connection to alleviate optimization difficulties. We randomly initialize both the adjacency matrix \(A_{a}\) and the weights \(W_{a}\) and then train them by gradient descent.
After the cross-reasoning process, the new updated feature \(M_{a}\) is mapped back to the original coordinate space for further processing. Next, we add the result to the original feature \(F_{a}\) to form a residual connection, as follows:
\[\tilde{F}_{a}=\phi_{a}(H_{b}M_{a})+F_{a}, \tag{13}\]
where we reuse the projection matrix \(H_{b}\) and apply a linear projection \(\phi_{a}(\cdot)\) to project \(M_{a}\) back to the original coordinate space. Therefore, we obtain the feature \(\tilde{F}_{a}\), which has the same dimension as the original one \(F_{a}\).
Similarly, we can obtain the new feature \(\tilde{F}_{b}\). Overall, the proposed method reasons about the cross relations between the feature maps of the two skeletons using a bipartite graph.
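A minimal sketch of this cross-reasoning step (Eqs. (11)-(13)) for the direction from skeleton \(b\) to skeleton \(a\) is given below. The flattened feature layout \((B,C,D)\), the \(1\times 1\) convolutions used for \(\varphi_{a}\), \(\theta_{b}\) and \(\phi_{a}\), and the initialisation scales are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class BipartiteCrossReasoning(nn.Module):
    """One direction of the bipartite graph module (Eqs. (11)-(13)).
    Features are assumed flattened to (batch, channels C, nodes D)."""

    def __init__(self, channels, d_a, d_b):
        super().__init__()
        self.phi_a = nn.Conv1d(channels, channels, kernel_size=1)      # varphi_a(.)
        self.theta_b = nn.Conv1d(channels, channels, kernel_size=1)    # theta_b(.), gives H_b^T
        self.proj_back = nn.Conv1d(channels, channels, kernel_size=1)  # phi_a(.) in Eq. (13)
        self.adj = nn.Parameter(torch.randn(d_b, d_b) * 0.01)          # A_a, randomly initialised
        self.edge_w = nn.Parameter(torch.randn(d_a, d_a) * 0.01)       # W_a, trainable edge weights

    def forward(self, f_a, f_b):
        # Project F_a into the bipartite graph: V_a = H_b^T varphi_a(F_a), Eq. (11)
        h_b_t = self.theta_b(f_b).transpose(1, 2)                      # (B, D_b, C)
        v_a = torch.bmm(h_b_t, self.phi_a(f_a))                        # (B, D_b, D_a)
        # Graph convolution with Laplacian smoothing: M_a = (I - A_a) V_a W_a, Eq. (12)
        lap = torch.eye(self.adj.shape[0], device=f_a.device) - self.adj
        m_a = torch.matmul(torch.matmul(lap, v_a), self.edge_w)
        # Map back to the coordinate space and add the residual, Eq. (13)
        back = torch.bmm(h_b_t.transpose(1, 2), m_a)                   # reuse H_b, (B, C, D_a)
        return self.proj_back(back) + f_a                              # \tilde{F}_a
```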
**Feed-Forward Network.** After the bipartite graph module, the data of each skeleton goes through a feed-forward network. It is composed of linear projections, dropout, and GELU activation functions. It is followed by a stylization block to ensure that the information about the current timestep is not lost. The output is added to the input of the feed-forward network thanks to a residual connection.
**Linear Transformation.** The Motion decoder described above contains 8 identical layers and the input of layer \(m\) is the output of layer \(m-1\). Following those 8 layers the data of the two skeletons is concatenated and goes through a final linear projection to obtain \(\epsilon_{\theta}(x_{t},t,c)\) that we can use in our loss and to retrieve \(x_{t-1}^{1:N}\).
## 4 Experiments
### Datasets
There are few 3D two-person motion interaction datasets. We therefore focus on two complementary datasets. The NTU RGB+D 120 dataset [10], among its 120 classes, contains 26 classes labeled as "Mutual Actions / Two Person Interactions", which show two persons performing simple interaction motions. We take this 26-class subset, which we call NTU-26, and split each class randomly to obtain our training and testing sets. The testing set contains 2,600 samples (100 per class) and the training set contains 19,787 samples. The second dataset is DuetDance [13], which contains five classes of two-person dance motions for a total of 406 sequences. The motions are more complex than those from NTU-26 and harder to classify, even for a human observer. The original dataset contains motions with great variations in length, from 100 frames to more than 4,000. The average length is 483 frames, with a median of 360 frames. While our model can generate very long motions, this variability causes problems when obtaining quantitative results and lowers the quality of the generation, due to the limited number of sequences of certain lengths. We decided to split the sequences into subsequences of 300 frames or less. This increases the number of training samples, which also helps the diffusion model since it needs a lot of data. This leaves us with 698 training samples and 125 test samples (25 per class, randomly selected).
### Implementation Details
We train our model in PyTorch on an NVIDIA A100 80GB GPU, with a batch size of 128 for NTU and 64 for DuetDance. We train for 1,500 epochs on NTU and for 30,000 epochs on DuetDance.
### Baselines
We compare BiGraphDiff to two state-of-the-art methods, i.e., MotionDiffuse [22] and ACTOR [20]. MotionDiffuse, a recent diffusion- and Transformer-based architecture, generates single-person motion from text. For our experiments, the code provided by the authors and the recommended parameters are used; due to the similarity with our method, however, we take the same batch size and number of epochs as for our method. ACTOR, a Transformer VAE method, generates single-person motion. We use the code provided by the authors and retrain it on our datasets with the recommended parameters, but without the SMPL [11] loss function. The SMPL loss is deactivated because SMPL data is not available in the NTU RGB+D and DuetDance datasets.
### Quantitative Results
We perform the quantitative evaluation by using classification accuracy, Frechet Video Distance (FVD) score, and Multi-modality. The classification accuracy is obtained using a simple Transformer encoder followed by an MLP. The classifier is trained and tested on the same set as the generative methods.
**NTU-26.** Table 1 shows that our method outperforms the two state-of-the-art methods in terms of average accuracy. BiGraphDiff outperforms MotionDiffuse by 7.0% and ACTOR
by 46.3%. We are also very close to the accuracy of the classifier on the ground truth. This shows that the sequences generated by our method are realistic and correspond to the input class. In more detail, we outperform or match the other methods on 22 classes out of 26. MotionDiffuse and ACTOR are each better on 2 classes. However, whether ACTOR's results are actually better is debatable, as some of its classes have very low accuracy, down to 0%. We can also see that the classes in which we perform the worst (i.e., "Hit with object" 28%, "Wield knife" 41%, and "Shoot with gun" 44%) are the ones where the results are also low for the ground truth. Those are classes where the main difference is the object used, which is something we cannot see using 3D skeleton data. Table 2 shows the FVD and multimodality results. In terms of FVD and multimodality, our method also outperforms the two other methods, indicating that it produces sequences closer to the real data. One issue with the NTU dataset is that it is very noisy (see the ground truth in the qualitative results). This means that it is harder to generate noiseless sequences, but also that a method that generates samples without noise might be disadvantaged in the quantitative results, since the samples are compared with the ground truth and the classifier is trained on the noisy data.
**DuetDance.** Table 3 shows the classification results on the DuetDance dataset. We can see that, as on NTU-26, we have the best performance on average. The accuracy of our method is 9.6% higher than that of MotionDiffuse and only 6.4% lower than that of the ground truth. We note that the accuracy for the ground truth is much lower than for NTU-26. This is due to the nature of the motion in DuetDance: the dance motions are much harder to recognize, even for a human, and also longer, so it is not surprising that the results are worse. ACTOR, on the other hand, only achieves results slightly above chance (20%); as we will see in the qualitative results, ACTOR does not produce any motion on DuetDance, and we discuss this in detail there. In the "jive" class, all methods only achieve chance-level or lower accuracy, but the results are not so low for the ground truth, which means that all methods have trouble generating motions of the "jive" class. In Table 4, we show that we outperform the other methods on both metrics, meaning that the results of our method are more realistic.
### Qualitative Results
**NTU-26.** Figure 2 shows visuals of sequences generated for the "Cheers and drinks" class. This class of motion is more complex than others because it is composed of two separate motions "cheers" and "drink". All methods generate a proper
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Method & GT & ACTOR [Petrovich _et al._, 2021] & MotionDiffuse [Zhang _et al._, 2022] & BiGraphDiff \\ \hline \multicolumn{5}{c}{Classification Accuracy \(\uparrow\)} \\ Punching & 76.0\% & 1.0\% & 43.0\% & **49.0\%** \\ Kicking & 86.0\% & 14.0\% & 61.0\% & **86.0\%** \\ Pushing & 97.0\% & 77.0\% & **86.0\%** & 74.0\% \\ Pat on back & 88.0\% & 4.0\% & 72.0\% & **80.0\%** \\ Point Finger & 83.0\% & 0.0\% & 52.0\% & **76.0\%** \\ Hugging & 97.0\% & 59.0\% & 90.0\% & **97.0\%** \\ Giving object & 91.0\% & 34.0\% & 68.0\% & **86.0\%** \\ Touch pocket & 93.0\% & 35.0\% & 81.0\% & **84.0\%** \\ Shaking hands & 89.0\% & 16.0\% & 80.0\% & **90.0\%** \\ Walking toward & 93.0\% & 72.0\% & 98.0\% & **99.0\%** \\ Walking apart & 95.0\% & **90.0\%** & **90.0\%** & **90.0\%** \\ Hit with object & 44.0\% & 8.0\% & 23.0\% & **28.0\%** \\ Wield knife & 50.0\% & 7.0\% & 31.0\% & **41.0\%** \\ Knock over & 85.0\% & 4.0\% & **61.0\%** & **61.0\%** \\ Grab stuff & 74.0\% & 0.0\% & 57.0\% & **62.0\%** \\ Shoot with gun & 57.0\% & 1.0\% & **46.0\%** & 44.0\% \\ Step on foot & 89.0\% & 5.0\% & 85.0\% & **90.0\%** \\ High five & 90.0\% & 4.0\% & 75.0\% & **78.0\%** \\ Cheers and drink & 90.0\% & 16.0\% & 69.0\% & **92.0\%** \\ Carry object & 96.0\% & **98.0\%** & 92.0\% & 95.0\% \\ Take a photo & 87.0\% & 19.0\% & 63.0\% & **80.0\%** \\ Follow & 94.0\% & 68.0\% & **90.0\%** & 81.0\% \\ Whisper & 83.0\% & 0.0\% & 72.0\% & **79.0\%** \\ Exchange things & 88.0\% & 6.0\% & 65.0\% & **78.0\%** \\ Support somebody & 94.0\% & **100.0\%** & 94.0\% & 92.0\% \\ Rock paper scissor & 91.0\% & 6.0\% & 75.0\% & **91.0\%** \\ Average & 84.6\% & 30.7\% & 70.0\% & **77.0\%** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification score on NTU-26.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Method & FVD\(\downarrow\) & Multimodality\(\downarrow\) \\ \hline ACTOR [Petrovich _et al._, 2021] & 25298.73 & 34.91 \\ MotionDiffuse [Zhang _et al._, 2022] & 1292.32 & 14.94 \\ BiGraphDiff & **1048.13** & **11.28** \\ \hline \hline \end{tabular}
\end{table}
Table 2: FVD and Multimodality on NTU-26.
motion, but ACTOR shows a low intensity for "cheers" and does not really generate the "drink" motion. MotionDiffuse generates a good motion with both "cheers" and "drink", but there is some noise and the arm length grows over time. Our method generates the proper motion with the two steps and does not produce the noise that is present in the ground truth. In our case, one character drinks while grabbing the glass with one hand while the other uses both hands, showing the diversity in the generated motions. Overall, we see that our motion is more realistic, temporally and spatially coherent, and keeps the interaction coherent.
**DuetDance.** Figure 3 shows examples of motion generation for the "salsa" class. The dance motions are more complex and the sequences are longer than the NTU-26 sequences. ACTOR does not produce motion. We believe this to be due to the great variability of motions within the same class: ACTOR converges to a mean and finds that an unmoving pair of skeletons is the best generation for its losses. We see that MotionDiffuse produces a dance motion without noise. This is because there is less noise in DuetDance than in NTU-26. Our method also generates a dance motion but performs better than MotionDiffuse: we reproduce the motion of the characters changing sides that is present in the ground truth, and the interaction is better, as the arms of the two characters do not overlap.
### User Study
The user study compared BiGraphDiff with two leading methods (i.e., ACTOR [20], MotionDiffuse [11]) and the ground truth sequence. For both datasets, we randomly select 20 samples for each class from the test data. For each comparison, 30 participants are asked to answer two questions, i.e., 'Q1: Which skeleton sequence is more realistic?', and 'Q2: Which skeleton sequence matches the input text better?'. The numbers indicate the preference percentage of users who favor the results of the corresponding methods or the GT skeleton sequence. The results highlight the quality of the sequence generated by our method.
### Ablation Study
We report ablation results in Table 6 on the NTU-26 dataset. We compare a simple two-stream Transformer (S1), a two-stream Transformer in a diffusion process (S2), a two-stream Transformer in a diffusion process with a simple GCN (S3), and finally our method with bipartite graphs (S4).
The results of S1 are extremely bad. It is explained by
\begin{table}
\begin{tabular}{c|c c} \hline \hline Method & FVD\(\downarrow\) & Multimodality\(\downarrow\) \\ \hline ACTOR [20] & 2641.08 & 67.79 \\ MotionDiffuse [11] & 1133.51 & 12.24 \\ BiGraphDiff & **997.92** & **4.33** \\ \hline \hline \end{tabular}
\end{table}
Table 4: FVD and Multimodality on DuetDance.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Method & GT & ACTOR [20] & MotionDiffuse [11] & BiGraphDiff \\ \hline \multicolumn{4}{c}{Classification Accuracy \(\uparrow\)} \\ Cha-cha & 28.0\% & **36.0\%** & 32.0\% & 32.0\% \\ Jive & 52.0\% & 16.0\% & **20.0\%** & 16.0\% \\ Rumba & 56.0\% & 16.0\% & 48.0\% & **68.0\%** \\ Salsa & 88.0\% & 0.0\% & 64.0\% & **76.0\%** \\ Samba & 52.0\% & **80.0\%** & 32.0\% & 52.0\% \\ Average & 55.2\% & 29.6\% & 39.2\% & **48.8\%** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Classification score on DuetDance.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{NTU-26} & \multicolumn{2}{c}{DuetDance} \\ \cline{2-5} & Q1 & Q2 & Q1 & Q2 \\ \hline ACTOR [20] & 6.1 & 7.8 & 5.6 & 5.9 \\ MotionDiffuse [11] & 22.4 & 24.3 & 21.8 & 23.7 \\ BiGraphDiff & **31.6** & **32.7** & **28.5** & **30.1** \\ GT & 39.9 & 35.2 & 44.1 & 40.3 \\ \hline \hline \end{tabular}
\end{table}
Table 5: User study results (\(\%\)).
Figure 2: Examples of diverse motion generation for a given text prompt “Cheers and Drink” action from NTU.
the fact that the Transformer is a deterministic method and has a low generation diversity, which explains the very high FVD. Furthermore, the noisy data from the NTU dataset makes it even harder to provide well-generated sequences. S2 provides much better results in both classification accuracy and FVD; the results are similar to those obtained by MotionDiffuse. With S3, the simple GCN helps enhance the generation, leading to better accuracy and FVD. This highlights the ability of the GCN to model more accurately the spatio-temporal dependencies within each skeleton. Adding a bipartite graph network in S4 provides a stronger increase in performance. It shows that modeling the interactions between the two skeletons is more important than trying to refine the interactions inside each skeleton, as S3 does. This validates the use of the bipartite graph network in the BiGraphDiff architecture.
## 5 Very Long Generation
Long-term motion generation plays an important role in real-world applications. Our method is able to generate longer sequences, as shown in Figure 4. We train the network on the original DuetDance dataset with a maximum sequence length of 4050 frames. We use 376 samples for training and 40 (8 per class) for testing. Figure 4 shows an example of 1580 frames from the "rumba" class. We can see that we generate dance-like motion for the entire duration of the sequence. However, it is very noticeable that we generate better motion for the first few hundred frames: the motion quality around frame 300 is good, but around frame 600 a deterioration appears and gradually becomes worse. This is due to the length distribution of the sequences in the DuetDance dataset, which are usually not very long (average: 483 frames, median: 360 frames).
## 6 Conclusion, Limitations and Future Work
We introduce the first approach for 3D human motion interactions based on denoising diffusion models. Both quantitative and qualitative evaluations show that BiGraphDiff outperforms state-of-the-art methods. The proposed BiGraphDiff method generates coherent human motion sequences that are longer and more diverse than the results of previous approaches.
The proposed BiGraphDiff suffers, however, from the common limitations of diffusion models: the need for large datasets and the long training and testing durations. The method is also still slightly sensitive to noise in the training data and can sometimes generate deformed skeletons. This is due in part to the quality of the data used, but also to the fact that we do not set any constraint related to the input data, e.g., bone length or the relative position of joints for 3D skeletons. On the other hand, this lack of data-specific constraints means that BiGraphDiff can be used for tasks other than human interaction generation: as long as the input data can be split into two sets and has a temporal or positional component, BiGraphDiff can be used for generation.
## Acknowledgments
This project has received financial support from the CNRS through the 80--Prime program.
\begin{table}
\begin{tabular}{l|c c} \hline \hline Method & Classification\(\uparrow\) & FVD\(\downarrow\) \\ \hline S1: Two Stream Transformer & 3.9\% & 21215.21 \\ S2: Two Stream Transformer + Diffusion & 69.3\% & 1406.09 \\ S3: S2 + Simple GCN & 73.2\% & 1123.88 \\ S4: S2 + Bipartite Graph & **77.0\%** & **1048.16** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study on NTU.
Figure 4: Example of generation of very long sequences on the “rumba” class from DuetDance. Under each skeleton the frame number.
Figure 3: Examples of diverse motion generation for a given text prompt “Salsa” action from DuetDance.
## Appendix
### Evaluation Metrics Details
**Classification Accuracy.** To evaluate the generated sequence, we use a classifier made of a simple Transformer encoder followed by an MLP. The classifier is trained and tested on the same training and testing sets as the generative methods. We look at the percentage of correctly classified samples in each class and the average over the entire testing set.
**FVD.** Frechet Video Distance (FVD) adapts the Frechet Inception distance (FID) [17] for video sequences [10]. With FVD, we compute the distance between the generated data distribution and the ground truth using deep features.
\[\text{FVD}=|\mu_{gt}-\mu_{gen}|^{2}+\text{tr}\left[\mathbf{C}_{gt}+\mathbf{C} _{gen}-2\left(\mathbf{C}_{gt}*\mathbf{C}_{gen}\right)^{1/2}\right], \tag{14}\]
where \(\mu_{gt}\) and \(\mu_{gen}\), and \(\mathbf{C}_{gt}\) and \(\mathbf{C}_{gen}\), are the means and covariance matrices of the deep features from the ground truth and the generated samples, respectively, and \(\text{tr}(\cdot)\) is the trace. We obtain the deep features from one of the last MLP layers of the classifier used to compute the classification accuracy.
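This computation can be sketched as follows; the deep features are assumed to be stored as NumPy arrays with one row per sample.

```python
import numpy as np
from scipy.linalg import sqrtm

def fvd(feat_gt, feat_gen):
    """Frechet distance of Eq. (14) between two sets of deep features."""
    mu_gt, mu_gen = feat_gt.mean(axis=0), feat_gen.mean(axis=0)
    c_gt = np.cov(feat_gt, rowvar=False)
    c_gen = np.cov(feat_gen, rowvar=False)
    covmean = sqrtm(c_gt @ c_gen)
    if np.iscomplexobj(covmean):      # discard tiny imaginary parts from numerical noise
        covmean = covmean.real
    return float(np.sum((mu_gt - mu_gen) ** 2) + np.trace(c_gt + c_gen - 2.0 * covmean))
```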
**Multimodality.** Multimodality compares the average deep-feature distance of the samples generated by a method with the average deep-feature distance of the ground truth for a specific class. Multimodality allows us to see whether the generated samples are different from each other; it corresponds to intra-class diversity. To compute the average deep-feature distance, we split the set of features of each class into two equal subsets, compute the Euclidean norm between the pairs formed by one member of each subset, and average over the size of the subsets. The average deep-feature distance for multimodality is defined as follows:
\[dist=\frac{1}{cm}\sum_{j=1}^{c}\sum_{i=1}^{m}||F^{A}_{ji}-F^{B}_{ji}||_{2}, \tag{15}\]
where \(c\) is the number of classes in the dataset, \(m\) is the size of each subset, and \(F^{A}_{ji}\) and \(F^{B}_{ji}\) are the \(i^{th}\) features of subsets \(A\) and \(B\) of class \(j\). The multimodality is then calculated as follows:
\[score=100\times\frac{|dist_{gt}-dist_{gen}|}{dist_{gt}}, \tag{16}\]
with \(dist_{gt}\) and \(dist_{gen}\) the deep features distance of the ground truth and of the considered method, respectively. The lower the multimodality the better, as it means that we are close to the multimodality of real data.
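A sketch of Eqs. (15) and (16) is given below; the features are grouped per class, and equal subset sizes within each class are assumed, as in Eq. (15).

```python
import numpy as np

def avg_feature_distance(feats_per_class, seed=0):
    """Average deep-feature distance of Eq. (15) over classes and random half-splits."""
    rng = np.random.default_rng(seed)
    dists = []
    for feats in feats_per_class:                 # one (n_j, dim) array per class
        idx = rng.permutation(len(feats))
        m = len(feats) // 2
        a, b = feats[idx[:m]], feats[idx[m:2 * m]]
        dists.append(np.linalg.norm(a - b, axis=1).mean())
    return float(np.mean(dists))

def multimodality(feats_gt, feats_gen):
    """Relative score of Eq. (16); lower is better."""
    d_gt = avg_feature_distance(feats_gt)
    d_gen = avg_feature_distance(feats_gen)
    return 100.0 * abs(d_gt - d_gen) / d_gt
```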
### Additional Qualitative Results
**NTU-26.** Figures 5 and 6 show visuals of sequences generated for the "High-five" and "Kicking" classes, respectively. For "High-five", ACTOR also generates a low-intensity motion, and both characters raise their hand but do not perform a high-five. Both MotionDiffuse and our method generate a high-five but MotionDiffuse shows noise and the hands of both characters stay far from each other. The ground truth once again contains noise that is not present in our generation. For the "Kicking" class, ACTOR does not generate any motion for either character. MotionDiffuse generates the red character as being kicked but does not generate the blue person kicking. Our method, on the other hand, generates both the kicking motion and the other character being kicked like the ground truth. In the ground truth, we can see that the leg is never fully extended during the kick. This is common for this class. The NTU-RGB+D dataset is captured using a Kinect camera and has difficulties capturing the legs due to camera positioning and occlusion during interactions. This shows the kind of noise present in the original data again. Overall we see that our motions are more realistic, temporally, and spatially coherent, and manage well to keep the interaction coherent.
### Video Results
Videos of the qualitative results presented can be found at [https://drive.google.com/drive/folders/11zW5hmWGW0csKp8XX9GwTijVCRp4j2hQ?usp=sharing](https://drive.google.com/drive/folders/11zW5hmWGW0csKp8XX9GwTijVCRp4j2hQ?usp=sharing) in "Videos.zip". In "Cheer_and_drink.mp4", "High_five.mp4", "Kicking.mp4", and "salsa.mp4" we show a comparison of the GT, the two state-of-the-art methods and BiGraphDiff on the four classes presented in the main paper and the supplementary material. In file "very_long_rumba.mp4", we show the video for very long-term generation on the rumba class presented in the main paper.
Figure 5: Examples of diverse motion generation for a given text prompt “High-five” action from NTU.
Figure 6: Examples of diverse motion generation for a given text prompt “Kicking” action from NTU. |
2305.05970 | FusionBooster: A Unified Image Fusion Boosting Paradigm | In recent years, numerous ideas have emerged for designing a mutually
reinforcing mechanism or extra stages for the image fusion task, ignoring the
inevitable gaps between different vision tasks and the computational burden. We
argue that there is a scope to improve the fusion performance with the help of
the FusionBooster, a model specifically designed for the fusion task. In
particular, our booster is based on the divide-and-conquer strategy controlled
by an information probe. The booster is composed of three building blocks: the
probe units, the booster layer, and the assembling module. Given the result
produced by a backbone method, the probe units assess the fused image and
divide the results according to their information content. This is instrumental
in identifying missing information, as a step to its recovery. The recovery of
the degraded components along with the fusion guidance are the role of the
booster layer. Lastly, the assembling module is responsible for piecing these
advanced components together to deliver the output. We use concise
reconstruction loss functions in conjunction with lightweight autoencoder
models to formulate the learning task, with marginal computational complexity
increase. The experimental results obtained in various fusion tasks, as well as
downstream detection tasks, consistently demonstrate that the proposed
FusionBooster significantly improves the performance. Our code will be publicly
available at https://github.com/AWCXV/FusionBooster. | Chunyang Cheng, Tianyang Xu, Xiao-Jun Wu, Hui Li, Xi Li, Josef Kittler | 2023-05-10T08:28:51Z | http://arxiv.org/abs/2305.05970v3 | # FusionBooster: A Unified Image Fusion Boosting Paradigm
###### Abstract
Numerous ideas have emerged for designing fusion rules in the image fusion field. Essentially, all the existing formulations try to manage the diverse levels of information communicated by the source images to achieve the best fusion result. We argue that there is a scope for improving the performance of existing methods further with the help of FusionBooster, a fusion guidance method proposed in this paper. Our booster is based on the divide and conquer strategy controlled by an information probe. The booster is composed of three building blocks: the probe units, the booster layer, and the assembling module. Given the embedding produced by a backbone method, the probe units assess the source images and divide them according to their information content. This is instrumental in identifying missing information, as a step to its recovery. The recovery of the degraded components along with the fusion guidance are embedded in the booster layer. Lastly, the assembling module is responsible for piecing these advanced components together to deliver the output. We use concise reconstruction loss functions and lightweight models to formulate the network, with marginal computational increase. The experimental results obtained in various fusion tasks, as well as downstream detection tasks, consistently demonstrate that the proposed FusionBooster significantly improves the performance. Our codes will be publicly available on the project homepage.
## 1 Introduction
Image fusion is a technique to combine complementary information from diverse modalities, or images with different shooting settings, into a single image. The fused image, which becomes more informative, has enhanced visual quality, and boosts the performance of downstream vision tasks. This technique has been widely applied to different areas, including video surveillance, object tracking, remote sensing imaging, and medical diagnosis [25, 39, 40, 44, 50].
Broadly speaking, current image fusion tasks fall into one of two main categories, \(i.e.\), multi-modal image fusion and digital photography fusion. For instance, the infrared and visible image fusion (IVIF) task, which belongs to the former category, arises in many practical applications. It aims to draw the rich scene texture from the visible image, and tap the robust thermal and structural information from the infrared modality. Since the infrared modality is insensitive to variations in the environmental conditions, combining these complementary sources of information helps to enhance the visualization of challenging scenes, _e.g._, in foggy or low-light environments [34]. On the other hand, the multi-exposure image fusion (MEIF) and the multi-focus image fusion (MFIF) tasks belong to the latter category (digital photography).
Figure 1: Illustration of the proposed FusionBooster integrated with an infrared and visible image fusion method DDCGAN [27]. We realize the disentanglement of the image fusion task, \(i.e.\), the coarse-grained weight assignments for the source images in the first stage and fine-grained refinement in the second stage. With a slight increase in model size, the upgraded method generates visually attractive results. Its detection accuracy on the full dataset is significantly improved by 5.6%. As our FusionBooster is a unified paradigm, a consistent bonus can be observed for both traditional and learning-based approaches in different fusion tasks.
Specifically, the MEIF task is to combine the input overexposed and underexposed images in order to generate fusion results with an appropriate exposure setting [43]. The goal of the MFIF task is to produce a fully focused image by combining the near-focused and far-focused images at the input, to counteract the depth-of-field limitation in imaging [47].
In the earlier approaches, various signal processing techniques had been applied to accomplish the fusion process in the conventional paradigm exemplified by [2, 13, 16, 21, 24, 46]. However, the limitations of the classical feature extraction and fusion techniques motivated the emergence of deep learning-based fusion methods [4, 14, 37, 41, 48]. Currently, the trend has shifted towards the focus on the interplay between fusion and other vision tasks [10, 35, 36, 42]. A few recent studies also argue for the adoption of the transformer to capture global relationships in the fusion task [8, 26, 31].
However, all of the above-mentioned algorithms generate the fusion results in one shot. The issues of information loss, blurred details, and artifacts in the fusion results are not considered explicitly. Typically, elaborate fusion designs or network improvements are proposed to handle them in a manner lacking clarity [23, 52]. In this paper, in order to address these issues in a principled manner, we propose a module called FusionBooster, shown in Fig. 1, which is designed to fine-tune the initial result and mitigate its deficiencies. In this way, we shift the attention from the information aggregation accomplished in the first stage of the image fusion system and focus on the task of image quality improvement and source information enhancement instead.
As shown in Fig. 2(a), although some algorithms consider different computer vision tools in their image fusion approaches, they typically disregard the potential discrepancy between the nature of information processing in the low-level fusion task and in high-level vision problems (the semantic gap). Consequently, the feedback from certain vision tasks may be completely inappropriate for the task of refining the fusion model. Besides, a particular design can only be applied to a specific fusion task.
In this paper, we adopt a completely new paradigm, with a divide and conquer approach being proposed to boost the information fusion process, by considering the essence of the fusion task, _i.e._, the need for the preservation of information conveyed by the source images (Fig. 2(b)). Specifically, we use an information probe to gauge the quality of information conveyed by the source images in different components of the initial fusion result. Due to the inappropriate assessment of the quality of information conveyed by the source images, or noise introduced in the first stage of processing, a good quality reconstruction of the source input from the fused image is generally impossible and the constituent components tend to degrade. Interestingly, the degree of degradation is correlated with the quality of the initial fusion results. Motivated by this observation, we incorporate, into the fusion system, a novel mechanism, which guides the process of reassembling these components to produce the fused image. The mechanism enables delivering a better quality and more robust fusion result.
The contributions of this work can be summarized as follows:
* We devise an image fusion booster by analysing the quality of the initial fusion results by means of a dedicated Information Probe.
* In a novel two-stage image fusion paradigm, the results of the analysis performed by the Information Probe guide the refinement of the fusion result.
* The proposed FusionBooster is a general enhancer, which can be applied in association with various image fusion methods, \(e.g.\), traditional or learning-based algorithms, irrespective of the type of fusion task (IVIF, MFIF, and MEIF).
* The experimental results demonstrate that the proposed FusionBooster, in general, significantly enhances the performance of the state-of-the-art (SOTA) fusion methods and downstream detection tasks, with only a slight increase in the computational overhead.
## 2 Related work
### Learning based image fusion methods
In recent years, various learning-based image fusion methods have been proposed. These methods can be
Figure 2: Comparison of the advanced methods combined with proxy vision tasks and the proposed FusionBooster. In our booster, we first use the information probe to perceive the constituent components of the initial result. The deficiencies in the initial result will make the perceived components degraded. Thus, the quality of the fusion result is linked to that of these components. By incorporating the embedded fusion guidance to enhance these component images, the reassembled fusion result can be improved. Our booster avoids the semantic gap and coupled information enhancement issues in the single stage fusion paradigm.
roughly divided into three categories, \(i.e.\), algorithms based on generative adversarial networks (GAN), autoencoders (AE), and regular convolutional neural networks (CNN). Specifically, the GAN-based methods rely on the adversarial game established between the generator and the discriminator to produce the fusion results [7, 29]. A representative work is the DDcGAN proposed by Ma [27], which uses two discriminators to enable the fused images to preserve the useful information from the infrared and visible images. However, according to the investigations in [28, 31, 43], noise and artifacts are also incorporated into the fusion result, as part of the adversarial learning. As shown in Fig. 1, by virtue of the FusionBooster, any noise and artifacts contained in the fused images can be effectively eliminated. Note, the fusion style from the backbone method is retained in the enhanced result, _e.g._, the salient thermal information and the rich texture details.
In the MEIF field, taking into consideration their structural similarity, Prabhakar _et al._ use an autoencoder to integrate the information from underexposed and overexposed images [30]. Li and Wu extend this approach to the IVIF task [14], and a series of AE-based algorithms are proposed in [6, 15, 17]. Although the authors devise elaborate fusion rules and even utilize a trainable network to learn the optimal fusion strategy, bias issues still arise in this process, which leads to information loss. On the other hand, for the CNN-based methods [5, 23, 41, 48, 49, 51], the fusion results are highly dependent on the loss function used to optimize the fusion network. Although these end-to-end methods eliminate handcrafted feature aggregation processes, even well-designed loss functions do not avoid a bias, which results in suboptimal output.
More importantly, without an extra stage to fine-tune the fused images, existing approaches basically generate the output in one shot. Our FusionBooster is specifically designed to address this problem. It uses a second stage to refine the initial results by removing the artifacts, adjusting the exposure setting, or sharpening the edge information. Only minor computational costs will be introduced in the backbone models when using this booster.
### Combination of image fusion and other tasks
In the image fusion field, a popular research trend is the integration of image fusion tasks and vision problems [19, 34, 36]. These methods train the fusion model and the detection or segmentation model in a joint or mutually reinforcing manner (Fig. 2(a)). In this way, the performance of both the proxy vision task and the IVIF task can benefit. Although producing promising results, the proxy vision task incurs expensive annotation costs, which precludes its wide adoption in practical scenarios. Another weakness of these methods is that the feedback from other tasks is constrained by the semantic gap between the various problems. To address these issues, we innovatively exploit the inner properties of the fusion result by separating the contribution of each source image from it. We demonstrate that in this way we can simultaneously improve the performance of the fusion task and of a downstream detection application. Moreover, the above-mentioned algorithms are only available for the IVIF task, whereas our booster, as a general paradigm, is applicable to more fusion tasks.
## 3 The Approach
In this section, we introduce the proposed FusionBooster (FB) architecture in detail. We assume that the source images for an arbitrary fusion method at stage one are \(I_{\text{A}}\) and \(I_{\text{B}}\). For example, in the MEIF task, the \(I_{\text{A}}\) and \(I_{\text{B}}\) correspond to the underexposed and overexposed images, respectively. For the backbone method, its initial fusion result at stage one is denoted as \(F_{\text{init}}\).
### Problem formulation
In the image fusion field, different fusion tasks pursue the same objective, which is to preserve information from different modalities or images with different capture settings. In previous works, the formulation of the fusion problem and the core objective of fusion are not fully aligned.
Figure 3: Pipeline of the proposed FusionBooster in the case of the MEIF task (Backbone: U2Fusion). Our booster is composed of three parts, _i.e._, the information probe, the booster layer, and the assembling (ASE) module. The information probe first perceives the source components \(I_{\text{partA}}\) and \(I_{\text{partB}}\) of the initial result. The ASE module will piece these components together to rebuild the initial result. In the test phase, the degraded components are fine-tuned in the booster layer and the ASE module correspondingly yields the enhanced result.
To achieve alignment, in our approach we use the information probe to control the fusion process so as to enhance the relevant information from the source images and thus boost performance.
As shown in Fig. 3, our FB follows the divide and conquer strategy. Specifically, in the training phase, the information probe learns to gauge the information conveyed by the source images from the initial result outputted by the backbone, which is formulated as:
\[[I_{\text{partA}},I_{\text{partB}}]=PU(F_{\text{init}}), \tag{1}\]
where \(PU\) indicates the probe unit in the information probe, \(I_{\text{partA}}\) and \(I_{\text{partB}}\) represent the underexposed and overexposed components, respectively. After that, the ASE module will be optimized to assemble the perceived components to rebuild the initial result \(F_{\text{init}}\), _i.e._
\[\hat{F}_{\text{init}}=ASE(I_{\text{partA}},I_{\text{partB}}), \tag{2}\]
where \(\hat{F}_{\text{init}}\) denotes the assembly result.
Given an ideal fusion result, the detached parts in Eq. (1) are expected to obey the following constraints:
\[I_{\text{partA}}=I_{\text{A}},\;I_{\text{partB}}=I_{\text{B}}. \tag{3}\]
However, the information loss and artifacts contained in \(F_{\text{init}}\) will contaminate these parts and degrade them. Thus, in the test phase, we devise a booster layer to recover these two defective components and improve the assembly result. Since we expect \(\hat{F}_{\text{init}}\) to contain approximately all the information from the source images (\(i.e.\), to approach the ideal fused image), in the booster layer we achieve this objective by keeping the upgraded components as close as possible to the source images, _i.e._
\[\hat{I}_{\text{partA}}\approx I_{\text{A}},\;\hat{I}_{\text{partB}}\approx I_{ \text{B}}, \tag{4}\]
where \(\hat{I}_{\text{partA}}\) and \(\hat{I}_{\text{partB}}\) indicate the boosted components. In this way, the enhanced \(F_{\text{init}}\) will become more informative and have refined imaging quality.
Since we do not need to measure the relative weights of the source images, we can focus solely on strengthening the perceived parts of the initial result. Thus, compared with the conventional approach, in which a single stage has to handle multiple issues, our divide and conquer strategy has distinct advantages.
### Training of the FusionBooster
The trainable parameters of our FB are from the information probe and the ASE module. Essentially, our FB only involves reconstruction tasks in the training process. As shown in Fig. 4, the information probe contains two probe units and the ASE module is composed of a simple encoder and decoder architecture. In Fig. 5, we present the iterative training paradigm of our FB. Specifically, we use three loss functions to rebuild the source components and the initial fusion result at the pixel level, respectively.
In the information probe, since we have to handle the diversity of the source images among different fusion tasks, we assume the input images are of equal importance. Accordingly, the two probe units are designed to perform the information perception task. Two perception loss functions are used to optimize this probe and they are formulated as:
\[Loss_{\text{perA}}=\frac{1}{HW}\sum_{i}\sum_{j}|I_{\text{partA}}(i,j)-I_{A}(i, j)|, \tag{5}\]
\[Loss_{\text{perB}}=\frac{1}{HW}\sum_{i}\sum_{j}|I_{\text{partB}}(i,j)-I_{B}(i, j)|, \tag{6}\]
where \(H\) and \(W\) denote the height and width of the images.
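For concreteness, the two perception losses can be expressed as a short PyTorch sketch; the tensor shapes and the helper name below are illustrative assumptions rather than part of the released implementation.

```python
import torch.nn.functional as F

def perception_losses(i_part_a, i_part_b, i_a, i_b):
    """Pixel-wise L1 perception losses of Eqs. (5)-(6).

    All tensors are assumed to be image batches of shape (N, C, H, W);
    the mean over all pixels plays the role of the 1/(HW) normalization.
    """
    loss_per_a = F.l1_loss(i_part_a, i_a)  # Eq. (5)
    loss_per_b = F.l1_loss(i_part_b, i_b)  # Eq. (6)
    return loss_per_a, loss_per_b
```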
On the other hand, in the ASE module, we extract the features of the source components and integrate them in the encoder part, and the subsequent convolution layers in the decoder produce the updated fusion result. All the convolution layers use standard \(3\times 3\) kernels with a stride of 1. Besides, we use reflection padding to keep the resolution of the feature maps constant. In this way, our FB accepts fusion results of arbitrary image size.
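The exact depths and channel widths of the probe units and the ASE module are given in Fig. 4 and are not reproduced here; the sketch below only illustrates the stated design choices (\(3\times 3\) kernels, stride 1, reflection padding, and an encoder-decoder ASE), with hypothetical channel widths and activations.

```python
import torch
import torch.nn as nn

def conv3x3(in_ch, out_ch):
    # 3x3 convolution, stride 1; reflection padding keeps H and W unchanged,
    # so the modules accept inputs of arbitrary spatial size.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1,
                  padding=1, padding_mode="reflect"),
        nn.ReLU(inplace=True),
    )

class ProbeUnit(nn.Module):
    """One probe unit: maps the initial fusion result to one source component."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            conv3x3(1, ch), conv3x3(ch, ch),
            nn.Conv2d(ch, 1, 3, 1, 1, padding_mode="reflect"))
    def forward(self, f_init):
        return self.body(f_init)

class ASE(nn.Module):
    """Assembling module: encodes both components and decodes the fused image."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc_a, self.enc_b = conv3x3(1, ch), conv3x3(1, ch)
        self.dec = nn.Sequential(
            conv3x3(2 * ch, ch),
            nn.Conv2d(ch, 1, 3, 1, 1, padding_mode="reflect"))
    def forward(self, part_a, part_b):
        feats = torch.cat([self.enc_a(part_a), self.enc_b(part_b)], dim=1)
        return self.dec(feats)
```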
In the second step, we train the ASE module after freezing all the parameters of the information probe. The corresponding reconstruction loss function used to optimize this module is defined as:
\[Loss_{\text{rec}}=\frac{1}{HW}\sum_{i}\sum_{j}|\hat{F}_{\text{init}}(i,j)-F_{ \text{init}}(i,j)|. \tag{7}\]
Since we do not apply complicated transformations or constrain the detached components in the feature domain by
Figure 4: Network architecture of the lightweight probe units and the ASE module.
Figure 5: An illustration of the training process of the FusionBooster. The information probe learns to perceive the source images by utilizing the perception loss functions. The ASE module is optimized by the reconstruction loss to rebuild the initial result.
using the pre-trained model [23, 41], the ASE module can smoothly rebuild the initial result and extra computational burden can be avoided.
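Putting the two steps together, a minimal sketch of the iterative training procedure (reusing the ProbeUnit and ASE sketches above, with a toy data loader standing in for the real training data) could look as follows.

```python
import torch
import torch.nn.functional as F

# Toy stand-in for a data loader yielding (I_A, I_B, F_init) triplets; in
# practice F_init is produced offline by the backbone fusion method.
loader = [tuple(torch.rand(2, 1, 64, 64) for _ in range(3)) for _ in range(4)]

probe_a, probe_b, ase = ProbeUnit(), ProbeUnit(), ASE()
opt_probe = torch.optim.Adam(
    list(probe_a.parameters()) + list(probe_b.parameters()), lr=1e-4)
opt_ase = torch.optim.Adam(ase.parameters(), lr=1e-4)

# Step 1: optimize the probe units with the perception losses (Eqs. 5-6).
for i_a, i_b, f_init in loader:
    part_a, part_b = probe_a(f_init), probe_b(f_init)
    loss = F.l1_loss(part_a, i_a) + F.l1_loss(part_b, i_b)
    opt_probe.zero_grad(); loss.backward(); opt_probe.step()

# Step 2: freeze the probe, then optimize the ASE module with Eq. (7).
for p in list(probe_a.parameters()) + list(probe_b.parameters()):
    p.requires_grad_(False)
for i_a, i_b, f_init in loader:
    with torch.no_grad():
        part_a, part_b = probe_a(f_init), probe_b(f_init)
    loss_rec = F.l1_loss(ase(part_a, part_b), f_init)
    opt_ase.zero_grad(); loss_rec.backward(); opt_ase.step()
```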
### Booster layer
The booster layer is designed to improve the fused image quality, while simultaneously preserving the fusion style of the backbone method (hidden in the detached components). Since we need to cover multiple fusion tasks, flexibility would be sacrificed if extra measurements or parameters were introduced into the booster layer. Besides, as discussed in Section 3.1, the refined constituent components should approach the source images. Thus, as shown in Fig. 6, we only use the clean source images \(I_{\text{A}}\) and \(I_{\text{B}}\) of the different fusion tasks as the reference sources in this layer. Specifically, for a degraded image component, \(e.g.\), \(I_{\text{partA}}\), we apply average filtering to obtain the low-frequency component \(I_{\text{partA}}^{b}\) (base layer) as
\[I_{\text{partA}}^{b}=I_{\text{partA}}*D(k), \tag{8}\]
where \(D(k)\) denotes the average filter with the size of \((2k+1)\times(2k+1)\). Correspondingly, the high-frequency component (the details layer) can be represented as
\[I_{\text{partA}}^{d}=I_{\text{partA}}-I_{\text{partA}}^{b}. \tag{9}\]
The proposed booster layer is expected to take care of the degraded components. However, we also need to retain the fusion style cues in the output components for the reassembly in the ASE module. Thus, we follow the image sharpening operation and combine the clean source image with the detail layer of the degraded component, _i.e._
\[\hat{I}_{\text{partA}}=I_{\text{A}}+I_{\text{partA}}^{d}. \tag{10}\]
Here, the high-frequency information from the degraded component is expected to provide fusion clues and edge sharpening for the ASE module. Involving the source images in the enhanced component \(\hat{I}_{\text{partA}}\) helps to replace the degraded base layer with the clean one and forces the ASE module to deliver a more robust fusion result. The effectiveness of the booster layer design will be demonstrated in Section 4.6.
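A minimal sketch of this layer is given below; a reflection-padded box filter is assumed for the average filtering in Eq. (8), since the padding behaviour at the image border is not specified in the text.

```python
import torch.nn.functional as F

def booster_layer(i_part, i_src, k=3):
    """Booster layer of Eqs. (8)-(10) for one degraded component.

    i_part : perceived (possibly degraded) component from the probe, (N, C, H, W).
    i_src  : corresponding clean source image (I_A or I_B).
    k      : half-size of the (2k+1) x (2k+1) average filter; the paper uses k = 3.
    """
    size = 2 * k + 1
    padded = F.pad(i_part, (k, k, k, k), mode="reflect")
    base = F.avg_pool2d(padded, kernel_size=size, stride=1)  # Eq. (8): base layer
    detail = i_part - base                                   # Eq. (9): detail layer
    return i_src + detail                                    # Eq. (10): boosted component
```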
## 4 Experiment
### Experimental settings
We apply our FB to three widely investigated image fusion tasks, _i.e._, the IVIF task, the MFIF task, and the MEIF task. Three public benchmark datasets are used in our experiments, including the LLVIP dataset [11] for the IVIF task, MFI-WHU dataset [47] for the MFIF task, and SCIE dataset [1] for the MEIF task.
Specifically, the LLVIP dataset is a challenging dataset. It is composed of high-quality infrared and visible image pairs captured in a low-light environment. The MFI-WHU dataset contains 120 far-focused and near-focused image pairs of different scenes. The SCIE dataset consists of 590 high-resolution indoor and outdoor image sequences with different exposure settings. Considering the small scale of the last two datasets, we randomly crop \(128\times 128\) patches to augment the training data. The numbers of images or patches used for training are 12,025, 33,703, and 11,702, respectively. The numbers of randomly selected image pairs used for evaluation are 250, 20, and 51, respectively.
This algorithm is implemented in PyTorch and executed on an NVIDIA GeForce RTX 3090 GPU. The Adam optimizer [12] is used to update the parameters of the models with a learning rate of \(10^{-4}\). The number of epochs is set to 10 and the batch size is 2. The filter size \(k\) in Eq. (8) is empirically set to 3. All the competitors' implementations come from the code repositories mentioned in the original papers or from reproductions by other researchers.
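For reference, these settings can be collected in a small configuration sketch; the tiny placeholder network only makes the snippet self-contained and is not the actual probe or ASE architecture.

```python
import torch
import torch.nn as nn

# Hyper-parameters stated above: Adam with lr 1e-4, 10 epochs, batch size 2,
# random 128x128 crops for MFI-WHU / SCIE, and average-filter size k = 3.
settings = dict(lr=1e-4, epochs=10, batch_size=2, crop_size=128, filter_k=3)

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))  # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=settings["lr"])
```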
For the quantitative experiments, five widely used image fusion metrics, _i.e._, visual information fidelity (VIF) [9], mutual information (MI) [22], information entropy (EN) [33], edge intensity (EI) [45], and standard deviation (SD) [3] are adopted. VIF measures the distortion between the fusion result and the source images to indicate the information fidelity. MI is used to calculate the correlation between source images and the fusion result. EN and SD measure the information content and contrast of the image. The edge information and clarity of the fusion results are reflected by EI.
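The entropy, standard deviation and edge intensity metrics admit simple reference implementations; the formulations below are common ones and may differ in normalization details from the exact implementations cited in [33], [3] and [45].

```python
import numpy as np
from scipy import ndimage

def entropy_en(img):
    """Shannon entropy (EN) of an 8-bit grayscale image, in bits."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def standard_deviation_sd(img):
    """Standard deviation (SD), a simple proxy for global contrast."""
    return float(np.std(img.astype(np.float64)))

def edge_intensity_ei(img):
    """Edge intensity (EI): mean magnitude of the Sobel gradients."""
    g = img.astype(np.float64)
    gx, gy = ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0)
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))
```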
### Infrared and visible image fusion task
In this section, we present the fusion results obtained by advanced image fusion methods, with the output enhanced by our booster (see _Supplementary Material_ for more results). The tested algorithms include the traditional method GTF [24], the AE-based method DenseFuse [14], 3 CNN-based methods, namely U2Fusion [41], SDNet [48], and MUFusion [5], 2 approaches that combine the fusion
Figure 6: An illustration of the booster layer. As shown in the highlighted regions, the decomposed components are unable to recover the information from the source images perfectly. Based on the supplementary source images and the image sharpening technique, this layer is designed to enhance these degraded constituents.
with downstream tasks, \(i.e.\), SeAFusion [36] and TarDAL [19], the GAN-based method DDcGAN [27], and the transformer-based method YDTR [38].
**Qualitative experiments:** For the IVIF task, due to the limitations of handcrafted image features, the traditional methods cannot handle complex scenes effectively. As shown in Fig. 7, the traditional method GTF suffers from blurring issues in its fusion results. Our booster can effectively address this and produce visually pleasing images. Meanwhile, our paradigm also reduces the artifacts, which severely degrade the image quality of DDcGAN. Besides, compared with the SOTA methods SeAFusion and TarDAL, the enhanced DDcGAN inherits the merits of the original method and shows the ability to cope with the challenges of dark environments, preserving the details of the background (blue highlighted regions) and presenting more salient thermal information in the foreground.
**Quantitative experiments:** For the quantitative comparison, we select three different types of fusion methods, _i.e._, the traditional method GTF, the transformer-based method YDTR, and the GAN-based method DDcGAN, as the backbone methods of our booster. As indicated in Table 1, although some advanced algorithms, _e.g._, TarDAL, are trained jointly with proxy tasks, DDcGAN with our booster outperforms both the TarDAL (optimized for human vision) and TarDAL++ (optimized for both vision and detection) models on all five metrics. This demonstrates that current image fusion performance can still be improved without considering other vision tasks. The remarkable performance on these metrics indicates that the proposed FB is able to increase the information fidelity (VIF), maintain a strong correlation between the fusion results and the source images (MI), and produce robust fused images with sharp edge information (EN, SD, and EI). Finally, as a general enhancer, our FB consistently improves the performance of various types of fusion algorithms (GTF and YDTR) to a large extent.
**The pedestrian detection task:** In addition to the visual quality, an important application for the IVIF task is to improve the performance of downstream vision tasks by using the complementary information contained in the fusion results. In this experiment, we use the YOLOv5 detector [32]
| Method | Venue | VIF \(\uparrow\) | MI \(\uparrow\) | EN \(\uparrow\) | EI \(\uparrow\) | SD \(\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- |
| DenseFuse | TIP '18 | 0.797 | 14.836 | 7.418 | 47.067 | 50.844 |
| U2Fusion | TPAMI '20 | 0.492 | 13.414 | 6.707 | 46.899 | 37.428 |
| SDNet | IJCV '21 | 0.415 | 13.778 | 6.889 | 44.609 | 36.257 |
| SeAFusion | Inf. Fus. '22 | 0.839 | **14.902** | **7.451** | 55.935 | 51.810 |
| MUFusion | Inf. Fus. '23 | 0.791 | 14.364 | 7.182 | 58.687 | 43.976 |
| TarDAL++ | CVPR '22 | 0.676 | 13.207 | 6.604 | 70.005 | 41.059 |
| TarDAL | CVPR '22 | 0.809 | 14.707 | 7.353 | 46.126 | 52.106 |
| GTF | Inf. Fus. '16 | 0.576 | 14.703 | 7.351 | 44.129 | 50.164 |
| +FB (Ours) | - | **0.884** | 14.824 | 7.412 | **73.185** | **53.260** |
| YDTR | TMM '22 | 0.388 | 13.565 | 6.782 | 31.336 | 36.502 |
| +FB (Ours) | - | 0.532 | 13.976 | 6.988 | 48.590 | 41.159 |
| DDcGAN | TIP '20 | 0.764 | 14.862 | 7.431 | 49.127 | 51.495 |
| +FB (Ours) | - | **0.986** | **15.301** | **7.650** | **75.362** | **57.672** |
Table 1: The quantitative results obtained by different image fusion methods and the proposed FusionBooster on three kinds of fusion algorithms. (Red: best; Blue: second best)
| Methods | Venue | \(mAP_{50:95}\) (%) | \(mAP_{50}\) (%) |
| --- | --- | --- | --- |
| Visible | - | 54.2 | 94.4 |
| MUFusion | Inf. Fus. '23 | 66.4 | 96.2 |
| DenseFuse | TIP '18 | 66.5 | 96.4 |
| TarDAL | CVPR '22 | 66.6 | 96.9 |
| SeAFusion | Inf. Fus. '22 | 66.9 | 97.2 |
| U2Fusion | TPAMI '20 | 67.3 | 97.0 |
| Infrared | - | 67.9 | 97.3 |
| SDNet | IJCV '21 | 68.1 | 97.3 |
| TarDAL++ | CVPR '22 | 68.3 | 97.2 |
| GTF | Inf. Fus. '16 | 65.7 | 96.5 |
| +FB (Ours) | - | 67.8 (+2.1) | 97.0 (+0.5) |
| YDTR | TMM '22 | 64.9 | 97.4 |
| +FB (Ours) | - | 67.6 (+2.7) | **97.9 (+0.5)** |
| DDcGAN | TIP '20 | 63.7 | 94.4 |
| +FB (Ours) | - | **69.3 (+5.6)** | **97.4 (+3.0)** |
Table 2: The accuracy of pedestrian detection using different modalities on the LLVIP dataset.
Figure 8: Visualization of the results obtained by DDCGAN and DDCGAN with FusionBooster on the pedestrian detection task.
Figure 7: Illustration of the qualitative results of the infrared and visible image fusion task on three pairs of images from the LLVIP dataset.
to test the accuracy of different image fusion methods on the pedestrian detection task. We separately train the detector using the fusion results of the different algorithms on the training set of the LLVIP dataset. The trained models are then used to detect pedestrians in the corresponding modalities. As shown in Table 2, in the low-light environment, the accuracy of some SOTA methods cannot even match that of a single modality, _i.e._, the infrared modality. Once the FB is applied in conjunction with these methods, the average precision is significantly improved, _e.g._, by \(5.6\%\) for DDcGAN over the IoU thresholds from 0.5 to 0.95. It is worth noting that the performance of DDcGAN with our booster is better than that of SeAFusion and TarDAL++, which explicitly consider segmentation and detection tasks in their training process.
In Fig. 8, we present a visualization of two results obtained with our booster on the pedestrian detection task. The detector has higher confidence in the detected pedestrians, and the false detection issue is mitigated (the bike in the first example). This comparison also reveals that fusion results with sharpened edge information and higher contrast can benefit the detection task.
### Multi-focus image fusion task
**Qualitative experiments:** For the MFIF task, our FB is also able to improve existing methods. As shown in Fig. 9, when the proposed FB is applied to the traditional method CSR and the learning-based method U2Fusion, the details on the board of the bus become clearer. In contrast, the original CSR does not accurately infer the focused regions of the source images (second example). As shown in the magnified region, the enhanced result successfully addresses this issue. Besides, our FB also improves the learning-based method U2Fusion, producing sharper edge information. Compared with the other algorithms, the enhanced methods are superior in terms of preserving the local details in the highlighted area.
**Quantitative experiments:** For the quantitative results, we further select 3 methods, _i.e._, PMGI [49], DRPL [18] and UNIFusion [3], as competitors. As shown in Fig. 10, with our booster, U2Fusion has a clear advantage over the other advanced methods in terms of all the metrics. This demonstrates the superiority of the proposed FB. Moreover, when integrated with the FB, the traditional method CSR [21] also exhibits distinct strengths on multiple metrics.
### Multi-exposure image fusion task
**Qualitative experiments:** As a gradient-based method, U2Fusion delivers promising results on other fusion tasks. However, in the MEIF task, its gradient-based information measurement does not adapt to the exposure setting and tends to preserve information from the underexposed image (Fig. 11), which corresponds to the previously mentioned bias issue. In contrast, our booster mitigates this issue by brightening the dark areas of the original results. Meanwhile, as shown in the magnified regions, our booster also enhances the edge information and generates results of higher clarity.
**Quantitative experiments:** We also compare the performance of different image fusion methods in the image quality assessments. Three open source CNN-based
Figure 10: The quantitative results obtained by different fusion methods with and without the FB in the case of the MFIF task.
Figure 9: Illustration of the qualitative results achieved in the MFIF task on two pairs of images from the MFI-WHU dataset.
MEIF algorithms, _i.e_., DeepFuse [30], MEF-GAN [43], and AGAL [20], are used in this experiment. As shown in Table 4, the enhanced traditional (TLER [46]) and learning-based (U2Fusion) methods achieve consistent improvements on these metrics, as in the previous two fusion tasks. Achieving the best performance on all metrics, compared with other SOTA methods, across multiple fusion tasks demonstrates the powerful generalization capability of our concise booster design.
### Time consumption and model size comparison
In this section, we provide statistics on the additional time consumption and model size incurred by several image fusion methods when utilizing the FB module. As shown in Table 3, we collect the inference time of several approaches, as well as their model sizes, in the context of the IVIF task on the LLVIP dataset. While achieving much higher performance in various fusion tasks, our booster increases the time consumption of the baseline methods by only around 2 seconds on 250 infrared and visible image pairs, and increases the size of the model by less than 200 KB.
### Ablation Experiments
In this section, we conduct ablation experiments with the FB. We have the following experimental settings: (a) Feed the enhanced \(I_{\text{A}}\), \(I_{\text{B}}\) into the backbone. (b) Directly enhance the output of the backbone. (c) Without the probe, feed the \(I_{\text{A}}\), \(I_{\text{B}}\) into the ASE module. (d) Without the probe, feed the enhanced \(I_{\text{A}}\), \(I_{\text{B}}\) into the ASE module. Here, the enhancement denotes the sharpening operation in the booster layer (the detail layer is from the image itself). As shown in Fig. 12, compared with other settings, the backbone method with FB presents the clearest view of the pedestrians and has the best performance on all metrics, which validates the FB approach and demonstrates its utility.
## 5 Conclusion
In this paper, we propose an image fusion enhancer based on a divide and conquer strategy guided by an innovative information probe. Given a fused image from an arbitrary method, _e.g_., an IVIF algorithm, we first decompose the initial result into its constituent components. The information probe gauges the affinity of these components to the input images and filters them to yield an improved fused image. The difference signal iteratively drives the update of the FusionBooster parameters. In this way, we effectively mitigate the
| Method | Venue | VIF \(\uparrow\) | MI \(\uparrow\) | EN \(\uparrow\) | EI \(\uparrow\) | SD \(\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- |
| DeepFuse | ICCV '17 | 1.295 | 14.311 | 7.156 | 60.527 | 46.091 |
| MEF-GAN | TIP '20 | 1.592 | 14.383 | 7.192 | 80.214 | **52.363** |
| SDNet | IJCV '21 | 1.299 | 14.070 | 7.035 | 72.497 | 44.135 |
| AGAL | TCSVT '22 | 1.314 | 14.212 | 7.106 | 71.954 | 43.725 |
| MUFusion | Inf. Fus. '23 | 1.637 | 14.462 | 7.231 | 70.179 | 49.682 |
| TLER | SPL '18 | 1.713 | 14.206 | 7.103 | 75.570 | 41.443 |
| +FB | - | **1.934** | **14.498** | **7.249** | **110.503** | 50.187 |
| U2Fusion | TPAMI '20 | 1.695 | 14.425 | 7.213 | 83.001 | 49.013 |
| +FB | - | **2.507** | **15.012** | **7.506** | **134.524** | **58.573** |
Table 4: Illustration of the quantitative results obtained by different methods on the SCIE dataset in the multi-exposure image fusion task.
Figure 11: The qualitative results, obtained on two pairs of images from the SCIE dataset, when performing the MEIF task.
| Metric | U2Fusion | MUFusion | YDTR | SeAFusion | DenseFuse | GTF | GTF + FB | DDcGAN | DDcGAN + FB | SDNet | SDNet + FB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Time (s) | 66.90 | 52.98 | 28.01 | 7.51 | 1.82 | 128.67 | 130.66 (**+1.99**) | 123.12 | 125.04 (**+1.92**) | 2.14 | 4.13 (**+1.99**) |
| Model size (MB) | 2.51 | 2.12 | 0.85 | 0.65 | 0.29 | – | – | 21.18 | 21.35 (**+0.17**) | 0.26 | 0.43 (**+0.17**) |
Table 3: The inference time and model size comparison of different methods on 250 image pairs from the LLVIP dataset. (**Red**: extra cost)
Figure 12: Ablation study of the FusionBooster with different settings. The Metrics on the top are the results obtained on the entire test set.
information loss and image blurring issues in the backbone. The design of the network architecture and the loss function are the key ingredients of the improved performance, obtained at the expense of only a minor increase in computational cost. The proposed method can be applied to different fusion tasks. It significantly boosts various fusion approaches, including traditional and learning-based methods.
|
2301.06756 | Excitonic Insulator to Superconductor Phase Transition in
Ultra-Compressed Helium | Helium, the second most abundant element in the universe, exhibits an
extremely large electronic band gap of about $20$ eV at low pressures ($\le
0.1$ GPa). While the metallization pressure of hcp helium has been accurately
predicted, thus far little attention has been paid to the specific mechanisms
driving the band-gap closure and electronic properties of this quantum crystal
in the terapascal regime (1 TPa $= 1,000$ GPa). Here, we employ
state-of-the-art density functional theory and many-body perturbation theory
calculations to fill up this knowledge gap. It is found that prior to reaching
metallicity bulk solid helium becomes an excitonic insulator (EI), an exotic
state of matter typically observed in low-dimensional systems in which
electrostatically bound electron-hole pairs form spontaneously. Furthermore, it
is shown that electron-phonon coupling (EPC) is significantly enhanced across
the EI to metal phase transition as signaled by prominent phonon softening and
giant EPC strength values ($\lambda \sim 10-100$) estimated at specific
reciprocal space points. Accordingly, we predict metallic helium to be a
superconductor with a critical temperature of $\approx 30$ K at $20$ TPa and of
$\approx 100$ K at $100$ TPa. These unforeseen phenomena have important
consequences on the elastic, thermodynamic and transport properties of metallic
helium hence may be critical for improving our fundamental understanding and
modelling of celestial bodies. | Cong Liu, Ion Errea, Chris Pickard, Lewis J. Conway, Bartomeu Monserrat, Yue-Wen Fang, Chi Ding, Qing Lu, Jian Sun, Jordi Boronat, Claudio Cazorla | 2023-01-17T08:44:42Z | http://arxiv.org/abs/2301.06756v1 | # Excitonic Insulator to Superconductor Phase Transition in Ultra-Compressed Helium
###### Abstract
Helium, the second most abundant element in the universe, exhibits an extremely large electronic band gap of about 20 eV at low pressures (\(\leq 0.1\) GPa). While the metallization pressure of hcp helium has been accurately predicted, thus far little attention has been paid to the specific mechanisms driving the band-gap closure and electronic properties of this quantum crystal in the terapascal regime (1 TPa = \(1,000\) GPa). Here, we employ state-of-the-art density functional theory and many-body perturbation theory calculations to fill up this knowledge gap. It is found that prior to reaching metallicity bulk solid helium becomes an excitonic insulator (EI), an exotic state of matter typically observed in low-dimensional systems in which electrostatically bound electron-hole pairs form spontaneously. Furthermore, it is shown that electron-phonon coupling (EPC) is significantly enhanced across the EI to metal phase transition as signaled by prominent phonon softening and giant EPC strength values (\(\lambda\sim 10\)-\(100\)) estimated at specific reciprocal space points. Accordingly, we predict metallic helium to be a superconductor with a critical temperature of \(\approx 30\) K at 20 TPa and of \(\approx 100\) K at 100 TPa. These unforeseen phenomena have important consequences on the elastic, thermodynamic and transport properties of metallic helium hence may be critical for improving our fundamental understanding and modelling of celestial bodies.
+
Footnote †: Corresponding author: [email protected]
In their final evolution stage, most stars in the Universe become white dwarfs (WDs) consisting of a mixture of helium, carbon, and oxygen atoms immersed in a sea of electrons. In small-radius WDs, fusion reactions beyond helium hardly occur, hence their chemical composition is practically monoelemental. In the interior of WDs, pressure may reach values billions of times higher than that at the Earth's surface (\(\sim\) 10,000 GPa = 10 TPa), which currently are not accessible in experiments. Thus, theoretical modelling of light materials under extreme compression conditions, and in particular of helium, turns out to be critical for probing the interior of WDs and comprehending their physico-chemical evolution.
Highly accurate diffusion Monte Carlo (DMC) calculations predict solid helium to become a metal in the hexagonal close-packed (hcp) phase at a pressure of 25.7 TPa [1]. By considering zero-point energy and electron-phonon coupling effects estimated with density functional theory (DFT) methods, such a metallization pressure increases up to 32.9 TPa at \(T=0\) K [2]. Both experimental and theoretical studies have shown that the valence-band maximum (VBM) of \({}^{4}\)He appears on the line joining the reciprocal lattice points \(\Gamma\) (\(0,0,0\)) and \(M\) (\(q,0,0\)), while the conduction-band minimum (CBM) is located at the \(\Gamma\) point [2; 3]. Thus, the band gap of solid helium is indirect and, according to previous DFT calculations, the overlap between the conduction and valence bands when approaching metallization is characteristic of a semimetal (i.e., the density of electronic states at the Fermi level is negligible) [2]. Meanwhile, the lattice phonons involving atomic displacements perpendicular to the hcp basal plane strongly couple to the electronic bands, and at very high pressure these drive the widening of the band gap [2].
A detailed understanding of the electronic band structure properties of this archetypal quantum crystal [4], however, is still lacking. First, about half a century ago
the existence of an exotic insulating phase called "excitonic insulator" (EI) was predicted, in which electrons and holes spontaneously form bound pairs called excitons [5]. The EI phase could be stabilized at sufficiently low temperatures in semiconductors with tiny band gaps or semimetals with very small band overlaps. Recently, experimental EI fingerprints have been reported for low-dimensional transition metal dichalcogenide structures exhibiting small band gaps [6; 7]; however, stabilization of a bulk EI state remains elusive. Owing to its semiconductor nature, absence of structural transformations and marked quantum character, ultra-compressed \({}^{4}\)He appears to be an excellent candidate in which a bulk EI state could emerge and genuine quantum many-body phenomena like high-temperature excitonic superconductivity and BEC-BCS crossover might occur [8; 9]. Could solid helium possibly be a bulk EI in the TPa regime? And second, the substantial electron-phonon coupling and semimetal Fermi surface previously disclosed in solid helium suggest the possibility of superconductivity in this quantum crystal upon band-gap closure. Is metallic helium a superconductor? If so, what are the underlying physical mechanisms and corresponding critical temperature? Besides their fundamental interest, answers to these questions may have major consequences in the fields of planetary science and astrophysics, since this new knowledge could improve our understanding of the thermal and chemical evolution of small-radius WDs [10; 11].
In this Letter, we employ theoretical first-principles approaches based on DFT and many-body perturbation theory to advance knowledge of the electronic, elastic and superconducting properties of solid helium in the TPa regime. Our main finding is an unprecedented pressure-driven bulk excitonic insulator to superconductor phase transition, in which the superconducting state can reach a critical temperature of \(\approx 100\) K under a compression of 100 TPa. It is worth noting that an exhaustive random sampling of the structural space of solid helium was performed at \(P=100\) TPa (AIRSS [12; 13]), with the finding that the hcp phase invariably remained the ground state (Methods).
We started by benchmarking different families of DFT functionals (i.e., semi-local, van der Waals corrected and hybrid) [4] against the metallization pressure of solid helium
Figure 1: **DFT benchmarking for the metallization pressure of hcp \({}^{4}\)He.****a** Electronic band gap, \(E_{g}\), expressed as a function of pressure and calculated with different DFT functionals. Negative \(E_{g}\) values indicate overlapping between the VBM and CBM levels. The grey region indicates the stability range of metallic helium as calculated with QMC methods [1]. **b** Electronic localization function (ELF isosurface = 0.8, yellow) of solid helium at 12 and 36 TPa in a red-green-blue color scale with red denoting high electronic density and blue low electronic density. **c** Lowest conduction (red) and highest valence (blue) bands expressed as a function of reciprocal wave vector in the \(k_{z}\) = 0 plane; the grey and transparent plane represents the Fermi surface.
calculated with DMC methods, which amounts to 25.7 TPa (Fig.1a). In all the analyzed cases, the band gap decreases almost linearly under increasing pressure due to the steady enhancement of electronic delocalization among neighbouring atoms (Fig.1b). The semi-local PBE functional predicts a metallization pressure of 17 TPa, in consistent agreement with previous computational studies [1; 2]. Meanwhile, the hybrid functional B3LYP performs the best in comparison to the DMC benchmark by providing a metallization pressure of \(\approx\) 24 TPa (Fig.1a and Fig.S1). Van der Waals corrections turn out to be practically negligible in the TPa regime (e.g., the PBE and PBE-D3 curves practically overlap with each other) due to the dominant role of interatomic repulsive interactions at short distances [14]. Based on these results, we adopted the hybrid functional B3LYP for our subsequent analysis of the electronic band structure of solid helium.
Unlike atomic hydrogen, hcp \({}^{4}\)He presents an indirect band gap with the VBM located at the reciprocal point \(\Lambda\) and the CBM at the center of the Brillouin zone (\(\Gamma\) point, Fig.1c), in consistent agreement with previous DFT calculations and experiments [2; 3]. It is noted that the direct band gap at the \(\Lambda\) point actually increases under compression (Fig. S2). Interestingly, when the energy gap between the VBM and CBM levels disappears, a semimetal state characterized by an almost negligible density of states at the Fermi level emerges, due to the fact that no additional electronic bands cross the Fermi surface (Fig.2a-c). At 1 TPa, the VBM consists exclusively of \(s\)-like orbitals while the CBM exhibits full \(p\)-like character (Fig.2a). Upon further compression, the VBM presents increasingly larger hybridization between \(s\)- and \(p\)-like orbitals while the CBM conserves its pure \(p\)-like character (Fig.2b). At pressures higher than 24 TPa, electrons from the VBM at the \(\Lambda\) point are transferred to the CBM at \(\Gamma\) in order to lower their energy, thus rendering a \(p\)-type semimetal system (Fig.2c).
The continuous pressure-driven closure of the band gap and subsequent stabilization of a semimetal state in hcp \({}^{4}\)He suggest the possibility of spontaneous formation of excitons with finite momentum \(|\mathrm{q}|=\mathbf{\Lambda}\)-\(\Gamma\) at low temperatures. An exciton is a bound state formed by an excited electron (\(e^{-}\)) in the conduction band and a hole
Figure 2: **Band-gap closure and emergence of the excitonic insulator state in hcp \({}^{4}\)He.****a-c** Evolution of the electronic band structure under compression calculated with the hybrid B3LYP functional. A red-magenta-blue color scale is employed for representing the orbital character of the relevant electronic bands with blue for \(s\)- and red for \(p\)-like. The \(\Lambda\) and \(\Omega\) reciprocal space points indicate the location of the primary and secondary VBM. The inset highlights the migration of electrons (\(e^{-}\)) and formation of holes (\(h^{+}\)) along the reciprocal space line \(K\)-\(\Lambda\)-\(\Gamma\) near the Fermi surface. **d** Comparison of the exciton binding energy, \(E_{bind}\), and band gap calculated with B3LYP DFT and many-body perturbation theory within the GW approximation. The blue regions indicate the pressure range in which the sufficient condition for spontaneous formation of excitons is fulfilled.
(\(h^{+}\)) in the valence band that interact through attractive Coulomb forces. In narrow-gap semiconductors, a sufficient condition for the spontaneous formation of excitons is that the corresponding binding energy, \(E_{bind}\), is larger in absolute value than the band gap since then the total energy of the system can be lowered by promoting electrons to the conduction band in the absence of optical excitations [9; 15]. We computed the binding energy of an exciton in ultra-compressed hcp \({}^{4}\)He by relying on the Wannier-Mott model (Methods) since the dielectric constant of solid helium in the TPa regime is relatively high (\(\epsilon_{r}>5\), Fig.S3) and consequently electric field screening effects are large [16].
Our excitonic binding energy results, obtained with the hybrid B3LYP functional and expressed as a function of pressure, are shown in Fig.2d. It was found that the sufficient condition for spontaneous formation of excitons, namely, \(|E_{bind}|>E_{g}\), was fulfilled over a wide pressure interval of approximately 200 GPa prior to metallization. In view of this result, we performed many-body perturbation theory calculations within the GW approximation to explicitly and more accurately determine quasiparticle excitations in ultra-compressed \({}^{4}\)He (Methods) [17]. As shown in the inset of Fig.2d, GW calculations provided a much larger excitonic binding energy than calculated with the Wannier-Mott model and hybrid DFT functionals, namely, \(\approx 0.4\) eV. Moreover, the estimated pressure interval in which excitons can spontaneously form noticeably increased up to 600 GPa. Therefore, based on our hybrid DFT and many-body perturbation GW calculations, we may conclude that on the verge of metallization hcp \({}^{4}\)He is a bulk excitonic insulator (EI). The same conclusion was reached when considering alternative structural phases for ultra-compressed solid helium (Fig.S4).
The emergence of a bulk EI state is expected to be accompanied by strong lattice distortions and instabilities due to arising electron-phonon interactions [18; 19]. We computed the phonon spectrum of hcp \({}^{4}\)He at different pressures using the semi-local PBE and PBEsol functionals (phonon calculations at this level of theory are feasible), as shown in Fig.3a and Fig.S5. Reassuringly, a distinct phonon softening appears at the reciprocal lattice point \(\Lambda\) between 15 and 20 TPa, that is, when semi-local DFT functionals predict that solid helium becomes a metal (Fig.S6). Interestingly, above 30 TPa additional phonon softenings emerge along the \(K\)-\(\Gamma\) and \(M\)-\(K\) reciprocal space directions; we found that around this pressure the energy gap between the CBM (located at \(\Gamma\)) and secondary VBM (located at \(\Omega\), Fig.2c and Fig.S6) vanished. Thus, in addition to validating our prediction for the stabilization of a bulk EI state, these findings corroborate the strong coupling between electrons and lattice vibrations previously disclosed in ultra-compressed solid helium [2]. It is worth noting that quantum anharmonic effects in \({}^{4}\)He were assessed with the stochastic self-consistent harmonic approximation (SSCHA) method [20; 21; 22; 23], finding that these are of little relevance in the TPa regime (Methods and Fig.S7).
Besides lattice dynamics, the elastic, structural and thermodynamic properties of solid helium were also found to be influenced by the pressure-driven EI to metal phase transition (Fig.S8). In the insulating phase, the elastic constants of hcp \({}^{4}\)He display a practically linear dependence on pressure whereas in the metallic phase they depart from this behaviour and in some cases do not even display a monotonic increase under compression (e.g., \(C_{13}\)). Similar effects were also observed for the bulk and shear moduli, sound velocities, Debye temperature and heat capacity (Fig.S8). Regarding the structural features, it was found that the pressure evolution of the hcp lattice parameter ratio \(c/a\) drastically changes when the metallic phase is stabilized (Fig.S9). Thus, taking into account these unanticipated physical effects could have important consequences on current modelling of astrophysical bodies, in particular, of small-radius and helium-rich WDs [10; 11].
Motivated by the findings described above, we explored the superconducting properties of ultra-compressed hcp \({}^{4}\)He. Accurate electron-phonon coupling (EPC) calculations were carried out using the techniques outlined in the Methods section, which essentially involve the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity and the modified Allen-Dynes formula [24]. Figure 3b shows the Eliashberg spectral function, \(\alpha^{2}F\), estimated at different pressures, from which the corresponding average EPC strength, \(\lambda\), can be straightforwardly computed (Methods and Supplementary Information). At 20 TPa, this function exhibits two appreciable peaks: the most prominent, appearing at low frequencies, stems from the lowest-energy phonon band at the wave vector \(\mathbf{q}=(0.25,0.43,0)\), which is very close to the reciprocal space point \(\Lambda\) associated with phonon softening; the other peak, emerging at higher frequencies, stems from the second and third lowest-energy phonon bands at \(\Gamma\). The EPC strength of these phonon modes, especially of the one producing the \(\alpha^{2}F\) maximum, is extremely high (i.e., the corresponding EPC strengths, \(\lambda_{q\nu}\), are of the order of \(10^{1}\)-\(10^{2}\)) due to their huge phonon linewidths and the minute density of electronic states at the Fermi level (Methods, Supplementary Information and Fig.S6). However, since the number of phonon modes that appreciably contribute to \(\alpha^{2}F\) (or, equivalently, to \(\lambda\)) is quite small, the superconducting temperature estimated at \(P=20\) TPa is relatively low, namely, \(T_{c}=30\) K (Fig.3c and Supplementary Information).
Interestingly, upon further compression, when the energy overlap between the conduction and valence bands is enhanced (Fig.2c), additional peaks appear in the Eliashberg spectral function that noticeably contribute to the average EPC strength, thus raising the superconducting critical temperature. For instance, at a pressure of 50 TPa, when multiple phonon softenings and \(\alpha^{2}F\) local maxima are observed (Figs.3a,b), we estimated a substantial superconducting critical temperature of 71 K (Fig.3c and Supplementary Information). Under higher compression the superconducting critical temperature
steadily increases, reaching a peak value of \(\approx 100\) K at the maximum pressure of 100 TPa considered in our calculations (Fig.3c). It is worth noting that within the pressure interval \(20\leq P\leq 30\) TPa both \(T_{c}\) and \(\lambda\) noticeably decrease; this transient effect is due to a dominant \(P\)-induced surge in the Fermi density of electronic states that drastically reduces the \(\alpha^{2}F\) peaks (Fig.3b, Methods and Table S1).
An analogous EPC strength parameter and \(T_{c}\) analysis was carried out for hcp xenon (Fig.S10) since this material is isoelectronic to solid helium and becomes metallic at experimentally accessible pressures of the order of 100 GPa. An EI to metal phase transition similar to that disclosed in ultra-compressed \({}^{4}\)He was also found for hcp Xe at 140 GPa. A noticeable phonon softening appeared at a higher pressure of 190 GPa, coinciding with the closure of a secondary band gap involving a \(s\)-like dominant CBM and \(p\)-like dominant VBM (i.e., of the same character than the primary band gap in solid helium). The EPC strength and superconducting critical temperature estimated for hcp Xe at 140 GPa are 0.75 and \(\approx 10\) K, respectively. Thus, bulk Xe seems to be a good candidate material in which to experimentally search for analogues of some of the key theoretical findings revealed in this work for ultra-compressed hcp \({}^{4}\)He.
Figure 3d shows a sketch of the possible phase diagram of solid helium at pressures and temperatures that are relevant to astrophysical studies. At sufficiently high pressures and low temperatures, a bulk EI state is stabilized. Whether in such a state the spontaneously created electron-hole bound pairs form excitonic Bose-Einstein condensates or exhibit excitonic superconductivity close to zero temperature [8; 9], is a matter that we cannot resolve with the DFT-based methods employed in this
Figure 3: **Electron-phonon coupling and superconducting properties of ultra-compressed hcp \({}^{4}\)He.****a** Pressure-induced variation of the acoustic phonon branches (colored lines). The phonon calculations were performed with the semi-local PBEsol functional. The grey curves were calculated for the insulating phase at 15 TPa; the colored lines correspond to higher pressures but were re-scaled to facilitate the comparison. **b** Eliashberg spectral function, \(\alpha^{2}F\), of metallic hcp \({}^{4}\)He. **c** Superconducting properties estimated with the semi-local PBEsol functional and parameters \(\mu^{*}=0.10\) and \(\mu^{*}=0.13\) (Methods). The critical superconducting temperature values, \(T_{c}\) (colored bars), obtained with the modified Allen-Dynes formula [24] and the electron-phonon coupling strength parameter, \(\lambda\) (red stars), are represented in the left and right ordinate axis, respectively. **d** Qualitative sketch of the possible phase diagram of ultra-compressed solid helium based on work [9] and the key physical findings presented in this work.
work (hence the question marks in the figure). Upon further compression, hcp \({}^{4}\)He becomes a superconductor with a critical temperature that increases under pressure (with the exception of a small pressure interval following metallization, which has been neglected in the figure). At high enough temperatures, superconducting solid helium transforms into a \(p\)-type semimetal. These electronic phase transitions significantly impact the structural, elastic, thermodynamic and transport properties of hcp \({}^{4}\)He and hence should be taken into consideration in advanced evolutionary models of stellar bodies like white dwarfs (WDs).
In conclusion, we have presented a comprehensive first-principles computational study of the physical properties of solid helium in the TPa regime, putting special emphasis on its electronic band-structure features. It was found that over a broad pressure range preceding metallization hcp \({}^{4}\)He becomes a bulk excitonic insulator in which electrostatically bound electron-hole pairs can form spontaneously. This bulk excitonic insulator state could host genuine quantum many-body phenomena like high-temperature excitonic superconductivity and excitonic BEC-BCS crossover, although additional advanced studies are necessary to fully assess these hypotheses. Upon band-gap closure, solid helium transitions into a superconducting state that possesses a critical temperature of the order of \(10^{1}\)-\(10^{2}\) K, depending on compression. This pressure-induced EI to superconductor phase transition is accompanied by several elastic and structural anomalies. Thus, our theoretical findings, besides being of great fundamental interest, are also of great relevance to the physics of celestial bodies, in particular, of small-radius WDs mostly containing metallic helium. Furthermore, it is argued that some analogues of the key theoretical findings revealed here for ultra-compressed helium could be experimentally observed in solid xenon.
## Methods
**First-principles calculations outline.** Density functional theory (DFT) calculations were performed with the Vienna _ab initio_ simulation package (VASP) [25]. The projector augmented-wave (PAW) method [26] was employed and the \(1s^{2}\) electrons in the He atoms were treated as valence. Different families of DFT functionals were tested, among which we highlight the semi-local Perdew-Burke-Ernzerhof (PBE) [27] and revised PBE for solids (PBEsol) [28], the van der Waals corrected DFT-D3 [29], the non-local dispersion corrected vdW-optB88 [30], vdW-DF-cx [31] and rev-vdW-DF2 [32], and the hybrid HSE06 [33], B3LYP [34] and HSEsol [35]. A plane wave energy cutoff of 1500 eV was employed along with dense Monkhorst-Pack \(k\)-point sampling grids of resolution \(2\pi\times 0.025\) Å\({}^{-1}\) (Fig.S11). The energy and atomic forces in the structural relaxations were converged to within \(10^{-6}\) eV and \(0.002\) eV/Å, respectively. For validation purposes, we compared our band gap results obtained with the PBE functional as implemented in the VASP code with a full-potential (linearized) augmented plane-wave method as implemented in the WIEN2k code [36] (Fig.S12). Phonon calculations were performed with the small displacement method and the PHONOPY code [37] by employing large \(4\times 4\times 4\) supercells.
The binding energy of an exciton was estimated with the Wannier-Mott formula:
\[E_{bind}=-\frac{m_{u}}{m_{0}\,\epsilon_{r}^{2}}\,Ry\,, \tag{1}\]
where \(m_{u}=(m_{e}\cdot m_{h})/(m_{e}+m_{h})\). In the equation above, \(m_{e}\) and \(m_{h}\) are the effective masses of the electron at the bottom of the conduction band and of the hole at the top of the valence band, respectively, \(Ry\) represents the Rydberg constant (= 13.6 eV), \(m_{0}\) the rest mass of the electron, and \(\epsilon_{r}\) the dielectric constant of the system referred to vacuum. The electron and hole effective masses were computed as the inverse of the second derivative of the conduction and valence band energies with respect to the crystal momentum modulus, \(|k|\), along the reciprocal space path \(\Lambda\)-\(\Gamma\). The Wannier-Mott formula is a good approximation for the exciton binding energy of materials possessing high dielectric constants [16], which is the case of hcp \({}^{4}\)He in the TPa regime (Fig.S3).
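For illustration, Eq. (1) can be evaluated directly; the effective masses and dielectric constant used in the example call below are placeholder values rather than the computed ones.

```python
RYDBERG_EV = 13.6057  # Rydberg constant in eV

def wannier_mott_binding(m_e, m_h, eps_r):
    """Eq. (1): E_bind = -(m_u / m0) * Ry / eps_r^2, with masses in units of m0."""
    m_u = m_e * m_h / (m_e + m_h)            # reduced mass, in units of m0
    return -m_u * RYDBERG_EV / eps_r ** 2    # binding energy in eV

# Example with placeholder inputs: equal electron and hole masses, eps_r = 5.
print(wannier_mott_binding(m_e=1.0, m_h=1.0, eps_r=5.0))  # about -0.27 eV
```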
The elastic tensor was determined at zero temperature by performing six finite lattice distortions and four atomic displacements of 0.01 Å along each Cartesian direction. The adiabatic bulk modulus, \(K\), and shear modulus, \(G\), were obtained by computing the Voigt-Reuss-Hill averages from the elastic tensor. The longitudinal and transverse sound velocities were calculated with the formulas \(v_{p}=\left[\left(K+\frac{4}{3}G\right)/\rho\right]^{1/2}\) and \(v_{s}=\left[G/\rho\right]^{1/2}\), respectively, where \(\rho\) represents the mass density of the system.
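As a small numerical aid, the sound velocities follow directly from \(K\), \(G\) and \(\rho\); SI units are assumed in this sketch.

```python
import numpy as np

def sound_velocities(K, G, rho):
    """Longitudinal and transverse sound velocities (m/s) from the bulk modulus
    K and shear modulus G (both in Pa) and the mass density rho (kg/m^3)."""
    v_p = np.sqrt((K + 4.0 * G / 3.0) / rho)
    v_s = np.sqrt(G / rho)
    return v_p, v_s
```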
**Crystal structure prediction analysis.** The _ab initio_ random structure searching (AIRSS) package [12; 13] was used to perform crystal structure searches for solid \({}^{4}\)He. The first-principles DFT code CASTEP [38] was employed to perform the underlying electronic structure calculations based on the PBE functional [27]. The searches were performed at 100 TPa, producing approximately 1000 relaxed structures and considering a total of 12 atoms in the simulation cell. The energy cutoff was set to 1000 eV and a specially designed hard OTFG potential was employed for the calculations. Under these conditions, it was found that the hexagonal \(P6_{3}/mmc\) (hcp) phase remained the ground state followed by a rhombohedral \(R\overline{3}m\) phase with a higher relative energy of 0.131 eV/atom. Other energetically competitive structures were an hexagonal \(P\overline{6}m_{2}\) (0.140 eV/atom) and a cubic \(Im\overline{3}m\) (0.170 eV/atom) phase.
**SSCHA calculations.** Quantum anharmonic effects were assessed with the stochastic self-consistent harmonic approximation (SSCHA) method [20; 21; 22; 23]. All the SSCHA calculations were evaluated at the pressure
of 35 TPa and 0 K, conditions at which excellent convergence of the SSCHA minimization has been verified by an extra population including 800 supercell configurations. SSCHA calculations were performed with a 6\(\times\)6\(\times\)3 supercell including 216 atoms, which yields the dynamical matrices on a commensurate **q**-mesh of 6\(\times\)6\(\times\)3. The trial harmonic dynamical matrices used for initializing the free energy were obtained from the DFPT method as implemented in the Quantum Espresso (QE) code in the corresponding commensurate **q**-mesh. In the self-consistent calculations of the supercells, we used the same cutoff energy as the electron-phonon coupling calculations for the primitive cell, but the **k**-mesh was reduced accordingly and was tested for convergence. In the SSCHA iterations, except the first four populations in which only internal coordinates were optimized to speed up the minimization, the free energy in other populations was minimized with respect to all degrees of freedom of the crystal structure including the internal coordinates and the lattice cell parameters.
**Many-body perturbation theory calculations.** The excitonic binding energy was also estimated by means of highly accurate many-body perturbation theory calculations [39] performed with the Yambo code [40]. For this, we employed the generalized gradient approximation (GGA) as parameterized by PBE together with a plane-wave basis set and norm-conserving pseudopotential. The kinetic energy cutoff for the wave functions was set to 600 Ry. The Brillouin zone was sampled with a \(64\times 64\times 32\) k-mesh. Many-body quasiparticle GW corrections [41] were calculated within the single-shot G\({}_{0}\)W\({}_{0}\) approximation, and the dynamic dielectric function was obtained with the plasmon-pole approximation. The exciton energies were calculated by solving the Bethe-Salpeter equation [17] within the Tamm-Dancoff approximation [42]. The static screening in the direct term was calculated within the random-phase approximation with the inclusion of local field effects. We used 2 valence and 3 conduction bands to solve the Bethe-Salpeter equation matrix. For the GW band-structure calculations, we sampled the Brillouin zone with a \(16\times 16\times 8\)**k**-point grid. A kinetic energy cutoff of 90 Ry was used for the evaluation of the exchange part of the self-energy and of 150 Ry for the dielectric screening matrix size. About one hundred unoccupied bands were used to build the polarizability and integrate the self-energy. The exciton energies were mapped along high symmetry paths for different pressures, as it is shown in Fig.S13 and summarized in Table S2.
**Electron-phonon coupling parameters and critical superconducting temperature.** Electron phonon coupling (EPC) calculations were performed with the Quantum Espresso (QE) code [43; 44] by using ultrasoft pseudopotentials, an energy cutoff of 200 Ry for the kinetic energy and an energy cutoff of 2000 Ry for the charge density (convergence tests are shown in Table S3). The equation of state of hcp \({}^{4}\)He computed with the VASP and QE codes show very good agreement, as it is illustrated in Fig.S9. The electron-phonon matrix elements were calculated in a \(16\times 16\times 8\)**q**-point grid with density functional perturbation theory (DFPT) [45]. We adopted a dense and shifted \(k\)-point mesh of \(80\times 80\times 40\) to increase the convergence in the self-consistent calculations. For the EPC calculations, we further increased the \(k\)-point mesh up to \(192\times 192\times 96\) (convergence tests are shown in Table S4) and to ensure \(k\)-point sampling convergence we employed the Methfessel-Paxton scheme with a smearing width of 0.02 Ry. The Dirac deltas on the band energies were substituted by Gaussian functions with a broadening of 0.002 Ry, which are necessary for the calculation of the EPC strength parameter \(\lambda\) (convergence tests are shown in Fig.S14). Convergence tests on the **q**-point grid sampling are also presented in Table S5. For further validation purposes, we also performed calculations with a hardcore pseudopotential involving a real-space cutoff of \(r_{c}=0.37\)\(a_{0}\)[46] and compared the results with those obtained with ultrasoft pseudopotentials, as it is shown in Fig.S15.
The Eliashberg spectral function, \(\alpha^{2}F(\omega)\), accounts for the coupling between phonons and electrons on the Fermi surface as:
\[\alpha^{2}F(\omega)=\frac{1}{2\pi\hbar N(E_{F})N_{q\nu}}\sum_{q\nu}\frac{ \gamma_{q\nu}}{\omega_{q\nu}}\delta(\omega-\omega_{q\nu}), \tag{2}\]
where \(N(E_{F})\) is the density of states at the Fermi level (per unit cell), \(\gamma_{q\nu}\) the linewidth of the phonon mode \(\nu\) at the wave vector \(q\), and \(N_{q\nu}\) the total number of \(q\nu\) points in the sum.
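A minimal numerical sketch of Eq. (2) is given below, with the Dirac deltas replaced by normalized Gaussians (mirroring the 0.002 Ry broadening employed in the calculations); the phonon frequencies, linewidths and \(N(E_{F})\) are placeholder inputs expressed in consistent energy units, so that \(\hbar\) is absorbed into the choice of units.

```python
import numpy as np

def eliashberg_a2f(omega_grid, omega_qv, gamma_qv, n_ef, sigma=0.002):
    """Gaussian-broadened evaluation of the Eliashberg function of Eq. (2).

    omega_grid : frequencies at which alpha^2F is evaluated.
    omega_qv, gamma_qv : phonon frequencies and linewidths for all (q, nu) modes.
    n_ef  : density of states at the Fermi level per unit cell.
    sigma : width of the Gaussians replacing the Dirac deltas.
    """
    omega_grid = np.asarray(omega_grid, dtype=float)
    omega_qv = np.asarray(omega_qv, dtype=float)
    gamma_qv = np.asarray(gamma_qv, dtype=float)
    pref = 1.0 / (2.0 * np.pi * n_ef * omega_qv.size)
    deltas = np.exp(-(omega_grid[:, None] - omega_qv[None, :]) ** 2 / (2 * sigma ** 2))
    deltas /= np.sqrt(2 * np.pi) * sigma
    return pref * np.sum((gamma_qv / omega_qv)[None, :] * deltas, axis=1)
```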
The critical superconducting temperature, \(T_{c}\), was estimated with three different formulas: the McMillan formula [47], \(T_{c}^{McM}\), the Allen-Dynes formula [48], \(T_{c}^{AD}\), and the modified Allen-Dynes formula [24], \(T_{c}^{mAD}\):
\[T_{c}^{McM}=\frac{\omega_{log}}{1.20}\times\exp\left[-\frac{1.04\left(1+ \lambda\right)}{\lambda-\mu^{*}\left(1+0.62\lambda\right)}\right]\,, \tag{3}\]
\[T_{c}^{AD}=f_{1}f_{2}T_{c}^{McM}\, \tag{4}\]
\[T_{c}^{mAD}=(1.0061+0.0663\lambda)T_{c}^{AD}\, \tag{5}\]
where \(\mu^{*}\) is the Coulomb pseudopotential, for which we selected values within the widely accepted range of 0.10-0.13, and the parameters \(f_{1}\) and \(f_{2}\) are defined as:
\[f_{1}=\left[1+\left(\lambda/\Lambda_{1}\right)^{3/2}\right]^{1/3}\]
\[f_{2}=1+\frac{\left(\bar{\omega}_{2}/\omega_{log}-1\right)\lambda^{2}}{ \lambda^{2}+\Lambda_{2}^{2}}\, \tag{6}\]
with \(\bar{\omega}_{2}=\langle\omega^{2}\rangle^{1/2}\), \(\Lambda_{1}=2.46\left(1+3.8\mu^{*}\right)\) and \(\Lambda_{2}=1.82\left(1+6.3\mu^{*}\right)\left(\bar{\omega}_{2}/\omega_{log}\right)\).
The logarithmic average phonon frequency, \(\omega_{log}\), is defined as:
\[\omega_{log}=\exp\left[\frac{2}{\lambda}\int_{0}^{\infty}\frac{d\omega}{\omega} \alpha^{2}F(\omega)ln(\omega)\right]\,, \tag{7}\]
and the EPC strength, \(\lambda\), is given by twice the first inverse moment of the spectral function, namely:
\[\lambda=2\int_{0}^{\infty}\frac{d\omega}{\omega}\alpha^{2}F(\omega)=\frac{1}{N _{q\nu}}\sum_{q\nu}\lambda_{q\nu}\, \tag{8}\]
where
\[\lambda_{q\nu}=\frac{\gamma_{q\nu}}{\pi\hbar N(E_{F})\omega_{q\nu}^{2}}. \tag{9}\]
The \(\lambda_{q\nu}\) parameter in the equation above corresponds to the EPC strength of the phonon mode at wave vector \(q\) and phonon branch \(\nu\). All electron phonon coupling results are listed in Table S1.
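As a practical illustration of this post-processing step, the short Python sketch below (our own illustration, not part of the published workflow; the trapezoidal integration and the default \(\mu^{*}=0.10\) are assumptions) takes a tabulated Eliashberg function \(\alpha^{2}F(\omega)\) and evaluates \(\lambda\), \(\omega_{log}\) and the three critical-temperature estimates of Eqs. (3)-(9).

```python
import numpy as np

def tc_estimates(omega, a2f, mu_star=0.10):
    """Estimate Tc from a tabulated alpha^2F(omega).

    omega   : phonon frequencies (in K, so that Tc is returned in K)
    a2f     : Eliashberg spectral function on the same grid
    mu_star : Coulomb pseudopotential (0.10-0.13 in the text)
    """
    mask = omega > 0.0                       # avoid the 1/omega singularity
    w, f = omega[mask], a2f[mask]

    lam = 2.0 * np.trapz(f / w, w)                                 # Eq. (8)
    w_log = np.exp((2.0 / lam) * np.trapz(f * np.log(w) / w, w))   # Eq. (7)
    w2 = np.sqrt((2.0 / lam) * np.trapz(f * w, w))                 # <omega^2>^(1/2)

    # McMillan formula, Eq. (3)
    tc_mcm = (w_log / 1.20) * np.exp(-1.04 * (1.0 + lam)
                                     / (lam - mu_star * (1.0 + 0.62 * lam)))

    # Allen-Dynes correction factors, Eq. (6)
    lam1 = 2.46 * (1.0 + 3.8 * mu_star)
    lam2 = 1.82 * (1.0 + 6.3 * mu_star) * (w2 / w_log)
    f1 = (1.0 + (lam / lam1) ** 1.5) ** (1.0 / 3.0)
    f2 = 1.0 + ((w2 / w_log - 1.0) * lam**2) / (lam**2 + lam2**2)

    tc_ad = f1 * f2 * tc_mcm                        # Eq. (4)
    tc_mad = (1.0061 + 0.0663 * lam) * tc_ad        # Eq. (5)
    return {"lambda": lam, "omega_log": w_log,
            "Tc_McM": tc_mcm, "Tc_AD": tc_ad, "Tc_mAD": tc_mad}
```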
## Acknowledgements
C.C. acknowledges support from the Spanish Ministry of Science, Innovation and Universities under the fellowship RYC2018-024947-I. C.L and C.C. thankfully acknowledge the computer resources at MareNostrum and the technical support provided by Barcelona Supercomputing Center (RES-FI-1-0006 and RES-FI-2022-2-0003). J.S. gratefully acknowledges the financial support from the National Key R&D Program of China (grant nos. 2022YFA1403201), the National Natural Science Foundation of China (grant nos. 12125404, 11974162, and 11834006), and the Fundamental Research Funds for the Central Universities. Part of the calculations were carried out using supercomputers at the High Performance Computing Center of Collaborative Innovation Center of Advanced Microstructures, the high-performance supercomputing center of Nanjing University. L.J.L gratefully acknowledges the computational resources provided by the National Supercomputer Service through the United Kingdom Car-Parrinello Consortium (EP/P022561/1). I.E. and Y.-W.F. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 802533) and the Department of Education, Universities and Research of the Eusko Jaurlaritza and the University of the Basque Country UPV/EHU (Grant No. IT1527-22). C.L and C.C. acknowledge interesting discussions and kind assistance from Raymond C. Clay III on ultra-compressed helium pseudopotentials.
## Additional Information
Supplementary Information is available in the online version of the paper.
## Competing financial interests
The authors declare no competing financial interests.
|
2310.10675 | Creation Of A ChatBot Based On Natural Language Proccesing For Whatsapp | In the era of digital transformation, customer service is of paramount
importance to the success of organizations, and to meet the growing demand for
immediate responses and personalized assistance 24 hours a day, chatbots have
become a promising tool to solve these problems. Currently, there are many
companies that need to provide these solutions to their customers, which
motivates us to study this problem and offer a suitable solution. The objective
of this study is to develop a chatbot based on natural language processing to
improve customer satisfaction and improve the quality of service provided by
the company through WhatsApp. The solution focuses on creating a chatbot that
efficiently and effectively handles user queries. A literature review related
to existing chatbots has been conducted, analyzing methodological approaches,
artificial intelligence techniques and quality attributes used in the
implementation of chatbots. The results found highlight that chatbots based on
natural language processing enable fast and accurate responses, which improves
the efficiency of customer service, as chatbots contribute to customer
satisfaction by providing accurate answers and quick solutions to their queries
at any time. Some authors point out that artificial intelligence techniques,
such as machine learning, improve the learning and adaptability of chatbots as
user interactions occur, so a good choice of appropriate natural language
understanding technologies is essential for optimal chatbot performance. The
results of this study will provide a solid foundation for the design and
development of effective chatbots for customer service, ensuring a satisfactory
user experience and thus meeting the needs of the organization. | Valderrama Jonatan, Aguilar-Alonso Igor | 2023-10-10T18:54:15Z | http://arxiv.org/abs/2310.10675v1 | # Creation of a Chatbot Based on Natural Language Processing for Whatsapp
###### Abstract
In the era of digital transformation, customer service is of paramount importance to the success of organizations, and to meet the growing demand for immediate responses and personalized assistance 24 hours a day, chatbots have become a promising tool to solve these problems. Currently, there are many companies that need to provide these solutions to their customers, which motivates us to study this problem and offer a suitable solution. The objective of this study is to develop a chatbot based on natural language processing to improve customer satisfaction and improve the quality of service provided by the company through WhatsApp. The solution focuses on creating a chatbot that efficiently and effectively handles user queries. A literature review related to existing chatbots has been conducted, analyzing methodological approaches, artificial intelligence techniques and quality attributes used in the implementation of chatbots. The results found highlight that chatbots based on natural language processing enable fast and accurate responses, which improves the efficiency of customer service, as chatbots contribute to customer satisfaction by providing accurate answers and quick solutions to their queries at any time. Some authors point out that artificial intelligence techniques, such as machine learning, improve the learning and adaptability of chatbots as user interactions occur, so a good choice of appropriate natural language understanding technologies is essential for optimal chatbot performance. The results of this study will provide a solid foundation for the design and development of effective chatbots for customer service, ensuring a satisfactory user experience and thus meeting the needs of the organization.
_Keywords:_ Natural Language Processing, Chatbot for WhatsApp, Chatbot development, Chatbot for Customer Service. DOI: 10.14810/elelij.2023.12402
## 1 Introduction
The customer service area plays a critical role in the success of any organization. With the constant growth of e-commerce and the need for immediate feedback, it is important to provide users with a satisfying and efficient experience. In this context, chatbots seem to be a promising tool to provide automated and personalized support. This study focuses on developing a chatbot based on natural language processing for WhatsApp, with the purpose of improving customer satisfaction and service quality. The existing literature in the field of chatbots was reviewed in detail, analysing methodological approaches, artificial intelligence techniques and quality attributes used in the implementation of these systems. The literature has highlighted that chatbots based on natural language processing allow fast and accurate responses, which translates into a significant improvement in customer service efficiency [1]. Furthermore, chatbots have been observed to contribute to customer satisfaction by providing accurate responses and quick solutions to their queries [1]. Therefore, it is of vital importance to design a chatbot with a
friendly interaction and that mimics human interactions, to offer a satisfactory user experience [4].
Another consideration is choosing the right natural language understanding technology, which is essential for good chatbot performance [2]. Previous research has highlighted the importance of using artificial intelligence techniques such as machine learning to improve chatbot learning and its adaptability when interacting with users.
Developing an effective customer service chatbot requires not only advanced tools and technology implementation, but also a deep understanding of user needs and expectations.
Through a comprehensive review of the literature, methodological approaches, artificial intelligence techniques and quality attributes relevant to the successful implementation of the chatbot will be identified.
## 2 Theoretical Framework
### Chatbot
A chatbot is an application that simulates human conversation in a chat interface. It uses advanced artificial intelligence and natural language processing techniques to understand and provide automated responses to user queries. Chatbots find applications in a variety of scenarios, including customer service, help desk, sales, and marketing. They offer a convenient and efficient way to interact with users, simulating a human-like conversation while taking advantage of intelligent algorithms and language processing capabilities.
The historical evolution of chatbots is essential to understand their development and current applications. Several studies have investigated this evolution and offer an overview of significant milestones and advances in this field [1]. From early rule-based systems to sophisticated AI-based chatbots, there has been a remarkable growth in the power and versatility of chatbots [2].
### Types of Chatbots
Chatbots are classified into different types based on their features and functionality. Below are the main types of chatbots:
1. **Rule-based chatbots**. Work by applying predefined instructions and providing predetermined responses based on specific input patterns. These chatbots are efficient in situations where the queries are clear, and a limited set of responses are available [3]. However, its limitation lies in the difficulty of dealing with ambiguous or complex queries.
2. **Chatbots based on Artificial Intelligence**. Leveraging methods like natural language processing and machine learning, chatbots have the capacity to comprehend and generate responses in human language [4].These AI-driven chatbots possess the ability to learn and enhance their performance over time through interactions with users. This enables them to be versatile in various scenarios, delivering responses that are not only more precise but also contextually relevant.
3. **Voice chatbots**. Are designed to interact using voice commands. These chatbots use speech recognition and synthesis technologies to understand and generate spoken responses [6]. Voice chatbots are especially useful in situations where the use of hands or vision is limited, such as in automotive applications or smart home devices.
4. **Hybrid chatbots**. Combine features of rule-based and AI-based chatbots. These chatbots use predefined rules for common cases and resort to artificial intelligence techniques in more
complex situations [5]. This blend of capabilities enables a higher degree of adaptability and receptiveness in customer service.
Every variety of chatbot comes with its own set of advantages and constraints, and the selection hinges on the requirements and goals of the company.
### Natural Language Processing
Natural language processing (NLP) is a fundamental field in the development of intelligent chatbots. Various approaches and models related to NLP have been proposed, such as transformer-based models, which have shown excellent results in understanding and generating natural language [7].
These models use tokenization, attention, and decoding techniques to improve the chatbots' ability to understand queries and generate appropriate responses.
On the other hand, machine learning plays an important role in the development of intelligent chatbots for fluid and contextual conversations [8]. Machine learning techniques are used for consistent response generation, user intent detection, and dialog personalization. These approaches allow chatbots to adapt to the preferences and needs of users, thus improving the quality of interaction.
### WhatsApp
WhatsApp is an instant messaging application that enables users to send text messages, initiate voice and video calls, share files and multimedia content, and engage in group conversations. It was developed as a communication platform for smartphones and has become one of the most popular messaging apps in the world.
### Integration of Chatbots in Customer Service
The effective integration of chatbots in customer service is an important aspect to consider. Strategies and best practices are explored to implement chatbots in different customer service channels, such as online chat, social networks, and mobile applications [8]. In addition, the personalization of responses is key to providing a more satisfactory experience, adapting interactions to the individual preferences and needs of each customer.
## 3 Methodology
For the development of this research, we followed the guide of [26], which establishes three main phases.

**1.**: Planning. This phase defines the requirements for carrying out the literature review, including the information search sources, the research questions and the search criteria.

**2.**: Conducting the review. In this phase, the primary studies are selected according to the inclusion and exclusion criteria.

**3.**: Results of the review. In this phase, the statistical results of the studies selected from each information source are summarized. These results serve as input for our research proposal.
To develop this research, a literature review of scientific articles no older than 5 years was carried out, extracted from important scientific databases such as Science Direct, Springer Link, Emerald Insight, IOP science and Taylor & Francis Online.
To better understand the types of interaction with users, the AI techniques and algorithms used, the quality attributes, the technologies used in development and the mechanisms for training chatbot data, we reviewed articles from different authors on chatbot-related research, including work in the area of education. To guide this review, it was necessary to pose the research questions indicated below:
Q1 What are the types of user interaction with the chatbot?
Q2 What artificial intelligence techniques and algorithms will be employed in the chatbots development?
Q3 What will be the quality attributes of the chatbot?
Q4 Which technologies will be utilised for the chatbots development?
Q5 What mechanism will be adopted to train the chatbot data?
The articles identified in the literature review with the established search string were filtered according to the inclusion and exclusion criteria of Table 1; many of them were excluded for not meeting these criteria, and others because they did not contribute significantly to our research.
The search string was extensively designed using key terms related to chatbots, user interaction, artificial intelligence techniques, quality attributes and the technologies used in its development.
Searches were performed on different combinations of these keywords in each database, as listed in Table 2.
After applying the inclusion and exclusion criteria, a total of forty-eight potential studies were obtained that could provide relevant information to answer the research questions posed in the study.
| **Inclusion criteria** | **Exclusion criteria** |
| --- | --- |
| Articles published from 2018 to 2023 | Articles that are not in the 2018, 2023 range |
| Articles in English or Spanish | Articles written in languages other than English or Spanish |
| Articles related to chatbots customer service | Other issues unrelated to customer service |

Table 1: Inclusion and exclusion criteria
| **Databases** | **Search Strings** |
| --- | --- |
| Science Direct | natural language processing, chatbot implementation, intelligent conversational agents |
| SpringerLink | natural language processing, chatbot development, conversational AI |
| Emerald Insight | natural language processing, chatbot applications, conversational agents |
| IOPscience | natural language processing, chatbot algorithms, intelligent dialogue systems |
| Taylor & Francis Online | natural language processing, chatbot evaluation, conversational interfaces |

Table 2: Search string applied in the databases.
These studies were carefully examined, reviewing their titles, abstracts, and the full content of each article.
From this review process, thirty-five relevant studies were identified that directly addressed the research questions and provided valuable information on the types of user interaction with chatbots, AI techniques and algorithms used, chatbot quality attributes, the technologies used in its development and the data training mechanisms.
Finally, twenty-five selected studies were considered based on their relevance to the research topic, the solidity of their methodology and their valuable contributions to the field of study.
## 4 Description of the Results
### Types of user Interaction with Chatbot
To answer this question, we consulted several articles that examine the types of user interaction with chatbots. Among them are the following:
The user communicates with the chatbot by sending text messages and receiving responses in text form. This form of interaction is widely used in chatbots, as it is simple and accessible to most users, according to the study by [24].
The user communicates with the chatbot using voice commands and receives spoken responses. This form of interaction has become more popular with the advancement of voice recognition technology, according to the study by [16].
The chatbot allows the user to interact through text entry and voice commands, providing the flexibility to choose the user's preferred method, as indicated in the study by [20].
The chatbot presents predefined options in the form of buttons or drop-down menus, allowing the user to select an option and receive responses according to their choice, as mentioned in the study by [21].
The user asks the chatbot specific questions and receives direct answers related to the query. This type of interaction focuses on getting clear and direct answers to the user questions, as mentioned in the study by [2].
These reviewed articles provide a solid foundation for understanding the different forms of user interaction with chatbots, allowing us to recognise and understand the basic characteristics of the interaction, which can be very valuable when creating our chatbot.
| **Database** | **Potential studies** | **Relevant studies** | **Selected studies** | **%** |
| --- | --- | --- | --- | --- |
| Science Direct | 15 | 15 | 15 | 56% |
| SpringerLink | 12 | 10 | 5 | 19% |
| Emerald Insight | 6 | 3 | 2 | 7% |
| IOPscience | 6 | 3 | 2 | 7% |
| Taylor & Francis Online | 9 | 6 | 3 | 11% |
| TOTAL | 48 | 35 | 25 | 100% |

Table 3: Articles found and selected by source consulted.
### Artificial Intelligence Techniques and Algorithms Employed in the Chatbots Development
Techniques and algorithms are vital for creating an efficient chatbot. Several pertinent techniques and algorithms are identified from the analysis of related literature.
[1] suggest using techniques such as intent matching, machine learning, and natural language processing.
Additionally, the study conducted by [2] highlights the use of metamodels and natural language processing.
These results demonstrate the importance of using techniques and algorithms such as machine learning, intention classification, and sentiment analysis to enhance chatbot efficiency and precision.
### Chatbot Quality Attributes
To identify quality attributes of the chatbot, we analysed numerous studies on chatbot usability and user experience.
The use of chatbots to support educational systems was identified in the study by [6], which highlights the importance of usability, ease of use and user satisfaction. Furthermore, article [14] mentions the impact of "humanizing" chatbots to improve user satisfaction.
These articles offer a strong basis for examining quality attributes such as effectiveness, efficiency, usability, user satisfaction, and responsiveness when developing chatbots.
### Technologies for the Development of Chatbots
Selecting suitable technologies is a crucial factor in the successful development of a chatbot. By analysing the relevant articles, the key technologies used in chatbot development are identified.
Article [1] discusses the utilization of neural networks, natural language processing, machine learning, and chatbots. Moreover, the study by Abdellatif et al. [2] highlights the use of natural language understanding platforms for the development of chatbots.
These findings highlight the relevance of technologies like neural networks, natural language processing, and machine learning in the creation of efficient and effective chatbots.
### The Mechanism for Training the Chatbot Data
The process of training chatbot data is fundamental to achieving high accuracy and performance of a chatbot. By reviewing the relevant articles, different mechanisms used to train data in chatbot development are identified.
In particular, [24] provide a comparison of natural language understanding platforms for chatbots in software engineering. They analysed the performance of different platforms using supervised learning techniques and evaluated their ability to understand user queries in the context of software engineering. Additionally, [16] explored the effects of AI-based chatbots on user compliance in customer service. They employed a supervised learning approach to train the chatbots and evaluated their impact on user behaviour and satisfaction, particularly concerning adherence to the instructions provided by the chatbot.
[17] conducts a comparative analysis of the performance of a multimodal chatbot implementation that relies on news classification by category.
## 5 Analysis of the Results
### Types of User Interaction with the Chatbot
We can examine in greater detail the percentages associated with each type of interaction, as shown in Figure 1.
Interaction I1, which is based on the use of text, represents 37.5% of the total interactions studied. This remarkable figure underscores the prevalence of textual communication in the context of chatbots. The authors related to this interaction are: [24], [14] and [20].
Interaction I2, which involves the use of voice, constitutes 12.5% of the interactions. This finding highlights the growing adoption of speech recognition technology and its integration into chatbot systems. The author related to this interaction is [16].
Interaction I3, which combines the use of text and voice, also occupies 12.5% of the analysed interactions. This convergence of communication modalities demonstrates the importance of offering multiple and flexible options to users. The author related to this interaction is [20].
The I4 Interaction, based on the use of buttons, also represents 12.5% of the interactions. This result suggests the relevance of an intuitive and simplified user interface. The author related to this interaction is [13].
Interaction I5, which is based on a question-answer model, also shows a significant presence, representing 25% of the interactions studied. This emphasizes the importance of chatbots' ability to provide accurate and relevant responses to user queries. The authors related to this interaction are [22] and [13].
### Artificial Intelligence Techniques and Algorithms Used
Figure 2 shows the results found of the AI techniques and algorithms that are used to develop chatbots.
Figure 1: Types of interaction
The Decision Trees technique as an algorithm represents 25% of the techniques used in chatbots. This indicates that the authors [24] and [14] have recognized the importance of using decision trees in the development of their chatbots. This finding suggests that these algorithms are effective in making decisions and generating appropriate responses for users.
The natural language processing (NLP) technique also represents 25% of the techniques used. This indicates that the authors [24] and [16] recognize the importance of understanding and processing natural language to achieve effective communication with users. This technology is crucial to understanding queries and generating consistent and meaningful responses.
The Support Vector Machines technique as an algorithm represents 12.5% of the techniques used. This implies that the author [21] has explored the use of these algorithms in their chatbots. Support Vector Machines are renowned for their capability to classify and analyse intricate data, a feature that could prove advantageous in the realm of chatbots.
The Recurrent Neural Networks (RNN) technique also represents 25% of the techniques used. This indicates that the authors [24] and [16] have recognized the utility of RNNs in the development of chatbots. Recurrent Neural Networks are recognized for their capacity to handle data streams, a characteristic that holds relevance in user conversation contexts.
The Markov Chain technique and Long Short-Term Memory (LSTM) as algorithms represent 12.5% of the techniques used.
### Chatbot Quality Attributes
Several articles focused on the user experience and usability of chatbots have been analysed. Figure 3 provides a more detailed look at the specific attributes to consider when developing chatbots.
Naturalness: 42.86% of the authors have addressed naturalness in their research. This means that they have researched and considered the importance of chatbots being able to generate responses and conversations that are as natural and human as possible. This attribute seeks that users perceive the chatbot as an entity with which they can interact in a fluid and natural way.
Figure 2: Artificial intelligence techniques and algorithms

Speed: Regarding speed, it is observed that an author [16] has considered this attribute in his research. Speed, in the context of chatbots, pertains to their capability to deliver prompt and efficient responses to user inquiries. A fast chatbot can enhance the user experience by furnishing information in a timely fashion.
Availability: One author [14] has addressed availability in their research. Availability pertains to the chatbot's capacity to be accessible and ready for users at any given time. This means that the chatbot is available to answer questions and provide help at any time of the day.
Precision: Another author [13] has considered precision in their investigation. Accuracy refers to the chatbot's ability to provide correct and exact answers. An accurate chatbot is capable of correctly understanding user queries and delivering accurate and relevant responses.
Learning: Lastly, it is important to mention that one of the authors [1] has explored the aspect of learning in their research.
### Technologies for the Development of Chatbots
The following technologies are among the primary ones used in the development of chatbots, as shown in Figure 4.
Python: This technology, mentioned by the author [24] represents 10% of the total technologies used in chatbot development. Python is a widely used programming language in the realm of artificial intelligence and natural language processing, making it a popular choice for implementing chatbots.
Dialogflow: This technology, mentioned by the authors [24] and [20], represents 20% of the technologies used.
Dialogflow is a cloud-based chatbot development platform that provides advanced natural language processing and intent understanding capabilities.
Keras: This technology, mentioned by the author [24] represents another 20% of the technologies used. Keras is a high-level library for constructing and training neural networks, commonly employed in deep learning and natural language processing.
Figure 3: Quality attributes
IBM Watson: This technology, mentioned in articles [24] and [19], represents a percentage of 20%. IBM Watson is an artificial intelligence and machine learning platform that offers a wide range of services for the development of chatbots and other AI-based applications.
Twilio: This technology, mentioned in articles [24], [17] and [1], also represents the highest percentage with 30% of the technologies used.
### The Mechanism for Training the Chatbot Data
The results of the most appropriate mechanisms for data training in a chatbot can be seen in Figure 5.
Supervised Learning: 30% of the authors have addressed supervised learning in their research. [24] have used this approach in their work. Supervised learning entails training the chatbot using a labeled dataset, in which instances of anticipated input and output are provided. This allows the chatbot to learn to generate correct responses based on previous patterns and examples.
Reinforcement Learning: 10% of authors have explored reinforcement learning in their studies. [1] have investigated this approach in their work. Reinforcement learning involves the chatbot interacting with the environment and receiving feedback in the form of rewards or penalties. Through feedback, the chatbot learns to make decisions that maximize rewards over time.
Transfer of Learning: 10% of the authors have considered the transfer of learning in their research. [20] have mentioned this approach in their work. Transfer of learning involves drawing on a model trained on a task's prior knowledge and experience and applying it to a related but different task. This expedites the training process and enhances the chatbot's performance in the new task.
Generation of Synthetic Data: 10% of the authors have addressed the generation of synthetic data in their studies. In this case, a specific author has not been provided. Synthetic data generation involves creating artificially generated training data to increase the number and diversity of examples available to the chatbot. This can improve the chatbot's ability to generalize and handle a variety of situations.
Active Learning: 40% of the authors have investigated active learning in their work. [24] mention this approach in their article.
Figure 4: Development technologies
## 6 Proposed Architecture for Chatbot
The architecture of the system implemented in this project comprises three levels, as shown in Figure 6.
First level: the WhatsApp client application that the user employs to access the chatbot.

Second level: the AI engine, built with Python and the Flask framework, which applies natural language processing.

Third level: the Twilio-based REST API that handles communication between the front-end application and the AI engine.
The customer service module is responsible for interacting with users through the WhatsApp application. It uses an artificial intelligence engine that leverages natural language processing to successfully understand and respond to queries [1].
The backend module is divided into two submodules. The first submodule consists of a Twilio webhook application that enables communication between the front-end application and the AI engine, located in the second backend submodule [2]. This AI engine uses natural language processing algorithms and techniques implemented in a Python application. For the development of the system, backend frameworks such as Django or Flask are used, which facilitate the implementation of projects of this type [3].
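As an illustration of how this first submodule can be wired together, the minimal Flask webhook below is only a sketch (the route name, port and the `predict_reply` helper are our assumptions, not the authors' code): it receives an incoming WhatsApp message forwarded by Twilio and returns the chatbot's answer as a TwiML response.

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def predict_reply(text: str) -> str:
    # Placeholder for the AI engine: classify the intent of `text`
    # with the trained model and look up the matching response.
    return "Thank you for your message, we will assist you shortly."

@app.route("/whatsapp", methods=["POST"])
def whatsapp_webhook():
    incoming = request.values.get("Body", "").strip()  # text sent by the user
    reply = predict_reply(incoming)
    twiml = MessagingResponse()                        # TwiML answer returned to Twilio
    twiml.message(reply)
    return str(twiml)

if __name__ == "__main__":
    app.run(port=5000)
```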
The data set used contains information about the service company and is organized with common words, possible intentions, and the corresponding responses. Natural language processing techniques, such as a bag of words, are applied to count the frequency of words in the data set. In addition, pre-processing tasks such as tokenization and removal of symbols and special characters are performed. To build the natural language model, tools such as TensorFlow and Keras [4] are used.
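A minimal sketch of this pre-processing step is given below; the toy intent file and the way the vocabulary is built are illustrative assumptions, since the actual data set with the company's information is not published.

```python
import re
import numpy as np
from tensorflow.keras.preprocessing.text import text_to_word_sequence

# Toy intent file: each intent has example sentences (a real one would be loaded from disk).
intents = {
    "greeting": ["hola", "good morning", "hi there"],
    "hours":    ["what time do you open", "opening hours please"],
}

def clean(sentence):
    # Remove symbols and special characters, then tokenize.
    sentence = re.sub(r"[^\w\s]", " ", sentence.lower())
    return text_to_word_sequence(sentence)

# Vocabulary built from all training sentences.
vocab = sorted({w for examples in intents.values() for s in examples for w in clean(s)})

def bag_of_words(sentence):
    # Count how often each vocabulary word appears in the sentence.
    tokens = clean(sentence)
    return np.array([tokens.count(w) for w in vocab], dtype="float32")

# Training matrix and one-hot intent labels.
X = np.stack([bag_of_words(s) for examples in intents.values() for s in examples])
labels = [i for i, examples in enumerate(intents.values()) for _ in examples]
y = np.eye(len(intents), dtype="float32")[labels]
```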
The chatbot solution for handling customer inquiries requires an internet connection and an Android mobile device with the WhatsApp app installed. Users initiate the query flow by sending a WhatsApp message to the number provided by Twilio [5].
Figure 5: Data training mechanism

The AI engine employs a fully connected neural network architecture, known as dense layers, for intent classification. Dense layers are used with the ReLU (Rectified Linear Unit) activation function to learn patterns and nonlinear representations in the input data. The model is trained using the SoftMax activation function in the output layer to assign probabilities to each intention class [6]. The chatbot architecture consists of client, AI, and REST API layers. The interaction is done through the WhatsApp application and the AI engine processes the queries using natural language processing techniques. The natural language model is constructed using dense layers within a neural network, employing the SoftMax activation function for intent classification.
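The sketch below illustrates one way to implement this dense intent classifier with Keras; the layer sizes, optimizer and number of epochs are our assumptions rather than values reported by the authors, and it reuses the `X`, `y`, `intents` and `bag_of_words` objects from the pre-processing sketch above.

```python
import numpy as np
import tensorflow as tf

num_intents = y.shape[1]

# Fully connected network: ReLU hidden layers and a softmax output layer.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(num_intents, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=200, verbose=0)

def classify_intent(sentence):
    # Return the most probable intent and its softmax probability.
    probs = model.predict(bag_of_words(sentence)[np.newaxis, :], verbose=0)[0]
    idx = int(np.argmax(probs))
    return list(intents.keys())[idx], float(probs[idx])
```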
## 7 Conclusions
In our research, we reviewed several studies related to chatbots that provide customer service, focusing on their efficiency, customer satisfaction, and the quality of the service they provide.
The results revealed that chatbots based on natural language processing improve customer service efficiency by providing fast and accurate responses.
Chatbots were also found to contribute to customer satisfaction by providing quick and accurate solutions to their problems.
The studies found have provided information on the importance of investing in technologies and tools that support natural language processing and artificial intelligence, as well as user-centred design to improve user experience and satisfaction.
According to the literature review, a chatbot architecture has been proposed through WhatsApp that is friendly and personalized to achieve a positive interaction with the user.
|
2310.07309 | Fluorescence enhancement in topologically optimized gallium phosphide
all-dielectric nanoantennas | Nanoantennas capable of large fluorescence enhancement with minimal
absorption are crucial for future optical technologies from single-photon
sources to biosensing. Efficient dielectric nanoantennas have been designed,
however, evaluating their performance at the individual emitter level is
challenging due to the complexity of combining high-resolution nanofabrication,
spectroscopy and nanoscale positioning of the emitter. Here, we study the
fluorescence enhancement in infinity-shaped gallium phosphide (GaP)
nanoantennas based on a topologically optimized design. Using fluorescence
correlation spectroscopy (FCS), we probe the nanoantennas enhancement factor
and observed an average of 63-fold fluorescence brightness enhancement with a
maximum of 93-fold for dye molecules in nanogaps between 20 nm and 50 nm. The
experimentally determined fluorescence enhancement of the nanoantennas was
confirmed by numerical simulations of the local density of optical states
(LDOS). Furthermore, we show that beyond design optimisation of dielectric
nanoantennas, increased performances can be achieved via tailoring of
nanoantenna fabrication. | Cynthia Vidal, Benjamin Tilmann, Sunny Tiwari, T. V. Raziman, Stefan A. Maier, Jerome Wenger, Riccardo Sapienza | 2023-10-11T08:42:38Z | http://arxiv.org/abs/2310.07309v2 | # Fluorescence enhancement in topologically optimized gallium phosphide all-dielectric nanoantennas
###### Abstract
Nanoantennas capable of large fluorescence enhancement with minimal absorption are crucial for future optical technologies from single-photon sources to biosensing. Efficient dielectric nanoantennas have been designed, however, evaluating their performance at the individual emitter level is challenging due to the complexity of combining high-resolution nanofabrication, spectroscopy and nanoscale positioning of the emitter. Here, we study the fluorescence enhancement in infinity-shaped gallium phosphide (GaP) nanoantennas based on a topologically optimized design. Using fluorescence
correlation spectroscopy (FCS), we probe the nanoantennas enhancement factor and observed an average of 63-fold fluorescence brightness enhancement with a maximum of 93-fold for dye molecules in nanogaps between 20 nm and 50 nm. The experimentally determined fluorescence enhancement of the nanoantennas was confirmed by numerical simulations of the local density of optical states (LDOS). Furthermore, we show that beyond design optimisation of dielectric nanoantennas, increased performances can be achieved via tailoring of nanoantenna fabrication.
[MISSING_PAGE_POST]
By maximizing the in-phase backscattering into the source dipole, while concurrently mitigating the undermining impact of destructive interference, we have forged a rational architectural framework for all-dielectric antennas. Further improvement using an iterative approach has led to intense electromagnetic LDOS enhancement up to 3 orders of magnitude using a topologically optimized dielectric nanoantenna [29].
Despite the plethora of theoretical insights into the topological optimization of dielectric nanostructures, experimental demonstrations of these hybrid nanoantennas remain sparse and predominantly confined to the near-infrared domain [30, 31, 21]. This preference stems from the greater ease of fabrication due to the larger wavelength and antenna dimensions. However, in the visible spectrum, the intricacies of nanofabrication and the imperative for precise positioning of the dipole emitter have constrained experimental endeavors [32, 33, 34]. Notably, there has yet to emerge a dielectric nanoantenna that has been both designed and characterized at optical wavelengths utilizing a topologically optimized model. This unexplored area emphasizes the importance of taking significant steps forward to connect theory with real-world applications and further advance nanophotonics.
Here, we bridge this gap and experimentally showcase the performance of gallium phosphide (GaP) nanoantennas designed according to a topologically optimised approach. Our GaP nanoantenna is shaped like the infinity symbol, with a bowtie-shaped nanogap at its center (Figure 1a,b). This design, inspired by the general framework outlined in Mignuzzi's work [29], is strategically crafted to enhance local electromagnetic effects through the precise tuning of constructive interference. Fluorescence correlation spectroscopy (FCS) experiments thoroughly characterize the GaP nanoantennas and assess their optical performance in enhancing single Alexa Fluor 647 molecule fluorescence. Our all-dielectric nanoantennas achieve a remarkable enhancement of the fluorescence brightness up to 90-fold, together with optical confinement into 200 zeptoliter (\(10^{-21}\) L) detection volumes, 5000-fold below the confocal diffraction limit. These experimental values stand in excellent agreement with our numerical simulations. This successful experimental demonstration of all-dielectric
topologically optimized nanoantennas in the visible spectral range holds profound significance for the realms of future sensing and quantum technologies.
The rationale behind our design departs from the quasi-static approximation to fully consider the phase of the induced polarization currents scattered back into the source dipole [29]. By enhancing the constructive interference terms and removing the negative influence of destructive interference terms, the local electromagnetic enhancement can be strategically optimized. We fabricated arrays of GaP nanoantennas using standard electron beam lithography (EBL) followed by reactive-ion etching (RIE). Details and a sketch of the nanofabrication process are illustrated in Figure S1 in the Supporting Information. A scanning electron microscope (SEM) image of a typical GaP nanoantenna is shown in Figure 1b. Nanoantennas were designed with a range of gap sizes from 15 nm to 45 nm, with 660 nm x 510 nm dimensions. The deviations of the fabricated antenna dimensions from the design, determined by SEM, are listed in SI Table S1. The antenna height is 64 \(\pm\) 10 nm as measured by AFM (Figure 1c). The AFM line cut across the middle of the antenna (inset of Figure 1c) indicates a roughness below 8 nm. Due to proximity effects, the gap is often bridged below 15 \(\pm\) 2.5 nm and is difficult to consistently replicate with a standard EBL setup [35].

Figure 1: (a) All-dielectric topologically optimized nanoantenna to enhance the fluorescence from single diffusing molecules. (b) SEM image of a GaP nanoantenna. (c) AFM measurement of a nanoantenna. (d) Map of the LDOS enhancement of a 30 nm gap antenna with dipole emitter aligned along the gap. (e) Overlay of the LDOS spectral enhancement (left axis, 6.5 nm gap size) and the fluorescence excitation and emission spectra of Alexa Fluor 647 used in this study (right axis). (f) 2D map of the LDOS enhancement as a function of the gap size and the emission wavelength. The gray lines and the respective numbers indicate the contours of LDOS enhancement values.
Numerical simulations predict an intense LDOS enhancement for the topologically optimized GaP antenna, as shown in Figure 1d-f. At resonance, the field is confined in the nanoantenna due to constructive interference, leading to strong LDOS enhancement in the nanogap region between the two bowtie tips (Figure 1d). In addition, in the gap, a strong electrostatic enhancement, i.e. frequency-independent, can be achieved for emitters aligned along the gap direction [16], so the LDOS enhancement effectively covers a broad spectral range (Figure 1e,f). LDOS enhancement factors exceeding 40-fold are predicted for nanogap sizes below 10 nm (Figure 1f).
We use fluorescence correlation spectroscopy (FCS) to characterize the enhancement of the fluorescence brightness for a molecule placed in the gap of the GaP nanoantenna. While placing an individual static emitter in the center of the nanogap is highly challenging [36], FCS exploits the Brownian motion of the individual molecules diffusing in solution to probe the nanoantenna response [37, 38]. FCS consists of measuring the temporal auto-correlation function (ACF) of the fluorescence signal from single molecules in order to determine their brightness. It is an established method which allows the quantification of the radiative enhancement from nanostructures [39, 40, 14]. The fluorescence signal is collected via a confocal microscope and detected by a single-photon counting avalanche photodetector. When the LDOS enhancement from the nanogap occurs, the shape and amplitude of the ACF are modified which in turn allow to estimate the number of molecules and their brightness enhancement within the nanogap volume [40, 14].
Figure 2: Experimental characterization of topologically optimized GaP nanoantennas. (a) Fluorescence intensity time traces recorded on a 1.4 \(\upmu\)M solution of Alexa Fluor 647 with 200 mM methylviologen on a GaP nanoantenna with the excitation polarization parallel (orange) or perpendicular (blue) to the 30 nm bowtie nanogap. The binning time is 100 ms. (b) Measured and fitted ACF g(\(\tau\)) as a function of the correlation time \(\tau\) from a single GaP nanoantenna with a 30 nm gap imaged in the inset. The arrows indicate the polarization direction of the laser excitation. Scale bar: 200 nm. (c) Evolution of the number of molecules in the nanogap \(N^{*}\) as a function of the fluorescent dye concentration. (d) Scatter plot of the fluorescence brightness enhancement as a function of the nanogap size as determined by SEM. Through (d-f), each marker symbol corresponds to a specific GaP nanoantenna. The line is the prediction from numerical simulations, it is not a fit to the experimental data. (e) Nanogap detection volume determined by FCS as a function of the nanogap size. The line fit to the data is linear. (f) Fluorescence brightness enhancement as a function of the nanogap detection volume. The line is a fit with a fixed -1 exponent.
As a molecular probe, we use Alexa Fluor 647 dyes, with excitation wavelength at 635 nm and emission at 670 nm, where GaP is transparent (Figure 1e). 200 mM of methyl viologen is added to the buffer solution in order to quench the fluorescence quantum yield of Alexa Fluor 647 from 33 to 8% and increase the magnitude of the fluorescence enhancement factor [41, 42]. This experimental configuration also enables a straightforward comparison with our earlier works using different dielectric and plasmonic antenna designs [14, 41].
Figure 2a,b displays typical experimental results on a 30 nm gap antenna probed with two different excitation polarizations, parallel or perpendicular to the nanogap. We have checked that our microscope setup is polarization-insensitive, so that the difference seen on the fluorescence intensity time traces (Figure 2a) and ACFs (Figure 2b) can be directly related to the excitation of the nanogap mode which in turn leads to the fluorescence enhancement. We apply a similar analysis of the FCS fit as in our earlier studies [14, 41] to extract for each antenna the average number of molecules \(N^{*}\) in the effective volume defined by the nanogap together with the average fluorescence brightness per emitter \(Q^{*}\). From the knowledge of \(N^{*}\) and the fluorescent dye concentration, we can then compute the effective detection volume of the nanogap region. The fluorescence brightness enhancement is obtained by dividing the brightness per emitter in the nanogap \(Q^{*}\) by the reference brightness per molecule \(Q_{0}\) found with the diffraction-limited confocal configuration. All the fit results for the data in Figure 2a,b are summarized in the SI Table S2. We find a linear dependence between the number of molecules measured in the nanogap \(N^{*}\) and the Alexa 647 concentration used in the experiments (Figure 2c). This provides an important confirmation of the validity of our results and demonstrates a good reusability of our GaP nanoantennas.
Ideally one would like to directly record the fluorescence lifetime reduction and Purcell enhancement on each GaP nanoantenna. However, this is not currently possible in our setup for two main reasons. First, because we rely on diffusing molecules to probe the nanogap, we have to work at high micromolar concentrations and thus there is a significant number (about 500) of molecules diffusing away from the antenna hot spot but still present in the diffraction
limited confocal volume. As a result of these non-enhanced molecules contribution, there is a significant non-fluctuating fluorescence background overlaid on the antenna hot spot signal. The second reason is that to make the hot spot contribution more apparent in the FCS functions and maximize the brightness enhancement, we use low quantum yield emitters. These molecules have a short fluorescence lifetime around 380 ps [14] which is below the 600 ps resolution of our current instrument. Further accelerating the decay dynamics with the Purcell enhancement in the nanoantenna leads to a fluorescence lifetime totally beyond the capabilities of our system. This is why we rely on FCS to assess the antenna performance and cannot use TCSPC.
For each individual nanoantenna, we correlate the gap size determined by SEM with the measured brightness enhancement (Figure 2d). Our results show a clear increase in brightness enhancement for smaller gap sizes, consistent with the enhancement stemming from the hotspot in the nanogaps. Enhancement factors exceeding 60-fold are readily observed for nanogaps below 30 nm. To support our findings, we simulated the brightness enhancement using Lumerical for a dipole emitter with 8% quantum yield aligned in the centre of the nanogap and an excitation intensity well below saturation, as in the experiment (see Methods). The solid line in Figure 2d, deduced from the numerical simulations without any free parameter, shows a remarkable agreement with the experimental data.
Along with the brightness enhancement per molecule, our FCS measurements simultaneously monitor the evolution of the nanoantenna detection volume with the gap size (Figure 2e). Detection volumes below 300 zL are achieved with nanogaps below 30 nm. From the numerical simulations in Figure 1d, we can estimate a detection volume around 400 zL which comes close to the \(215\pm 50\) zL measured for 30 nm gap antennas with FCS. We observe a cubic law dependence of the detection volume with the gap size, which is expected as the nanogap size is the key factor determining the 3D confinement of light into the nanogap. Furthermore, the interdependence between the brightness enhancement and the detection volume provides a supplementary validation of our data. The nanogap vol
ume scales linearly with the gap size (Figure 2e), while the LDOS enhancement in the gap decays inversely with the gap size, as seen from simulations (Figure 2d). Since the brightness enhancement is proportional to the square of the LDOS enhancement for low-quantum-yield emitters, we expect the brightness enhancement to decay slightly sublinearly with the mode volume when accounting for the background enhancement contribution from the antenna. This observed correlation echoes findings from previous studies on gold dimer nanoantennas [40], reaffirming the credibility of our results.
The optical performance of these topologically optimized GaP nanoantennas significantly outperforms that achieved using silicon nanodisk dimers [14] or the gold antenna-in-box [41]. To ensure a fair comparison, we focus on nanoantennas with similar 30 nm gap sizes probed under similar experimental conditions with the same fluorescent dye. Figure 3a shows a two-dimensional map that allows the brightness enhancement and the detection volume achieved with different nanogap antennas to be compared at a glance. Importantly, our topologically optimized nanoantenna outperforms its competitors on both the brightness enhancement and the optical confinement, demonstrating the superiority of its rational phase optimization design [29].

Figure 3: Comparison and performance assessment of topologically optimized GaP nanoantennas. (a) 2D map of the fluorescence enhancement and detection volume comparing different optical nanoantennas: the topologically optimized GaP antenna (this work), the silicon disk dimer [14] and the gold plasmonic antenna-in-box [41]. Importantly for the comparison, the nanoantennas indicated here all share a similar 30 nm gap and were probed using the same fluorescent dye (Alexa Fluor 647 with 200 mM methyl viologen in the buffer). (b,c) Numerical predictions of the fluorescence enhancement as a function of the nanogap size and the reference quantum yield of the emitter in a homogeneous environment. In (b) the different numbers associated with each curve indicate the initial quantum yield of the emitter considered for the simulations, while in (c) the numbers denote the GaP antenna gap size.
The excellent agreement found between the experimental data and the numerical simulations in Figure 2d allows us to elaborate on the simulations to predict the conditions leading to maximum brightness enhancement. The results summarized in Figure 3b,c predict enhancement factors exceeding 1000-fold for all-dielectric GaP nanoantennas, albeit for emitters with quantum yields below 2% and gap sizes below 10 nm. This positive insight holds promising implications for the realm of all-dielectric nanophotonics, providing an added incentive to enhance nanofabrication technology and attain sub-10 nm gaps. A narrowing of the nanogap is clearly one of the best ways to improve a nanoantenna's performance, however it remains extremely challenging to consistently control on multiple structures [43, 44, 45].
In conclusion, we have successfully demonstrated the superior optical performance of all-dielectric GaP nanoantennas designed according to a topologically optimised approach. Thanks to a precise tuning of interferences occurring in the near field of the emitter [29], the LDOS enhancement is maximized, leading to intense brightness enhancement of single quantum emitters. Nanoantennas capable of large fluorescence enhancement with minimal absorption losses are key elements to advance optical technologies from single-photon sources to biosensing. Therefore, this experimental demonstration of all-dielectric topologically optimized nanoantennas in the visible spectral range holds profound significance for the realms of future sensing and quantum technologies. Beyond the design optimisation using a rational approach, increased antenna performances can be achieved via tailoring the nanofabrication to reach smaller gaps. Localising the emitter into the nanogap, controlling its orientation or using new high refractive index materials like transition metal dichalcogenides are other future optimization directions.
## 3 Experimental
### Nanofabrication
Here, we base the nanoantenna design on the one from Ref. [29] i.e. 50 nm thick, 550 nm long and 10 nm gap size. For this, a GaP film is deposited on a glass substrate using sputter deposition at 350 \({}^{\circ}\)C. Next, the nanofabrication is carried out using EBL and subsequent RIE, as sketched in Figure S1 in the Supporting Information. Poly(methyl methacrylate) (PMMA) is used as photoresist and the EBL process is carried out at an acceleration voltage of 30 kV and an aperture size of 30 \(\mu m\). Afterwards, the development is done by rinsing the sample in a mixture of methyl isobutyl ketone and isopropyl alcohol (ratio 1:3) for 45 seconds. The gold mask is then deposited using electron-beam evaporation at ultra-high vacuum, with a dedicated thickness of 40 nm. After the lift-off in an acetone bath, where the remaining PMMA is dissolved, the designed structures remain as gold mask on top of the GaP film. Finally, the structures are transferred into the GaP by performing inductively-coupled plasma RIE based on chlorine gases, after which the remaining gold is removed by respective wet chemistry.
### Optical microscopy experiments
The FCS measurements are performed using a custom-built confocal microscope (Nikon Ti-U Eclipse) equipped with a water immersion objective (Zeiss C-Apochromat 63x, 1.2 NA). A focused linearly-polarized pulsed laser at 635 nm, with 70 ps pulse duration and 40 MHz repetition rate (LDH series laser diode, PicoQuant) illuminates individual nanoantennas. The antenna sample is immersed in a buffer solution of Alexa Fluor 647 at micromolar concentration with 200 mM methyl viologen as a quencher and 10 mM glutathione as an antioxidant and photostabilizer. Methyl viologen is used to improve the antenna apparent brightness enhancement and make the FCS signature from the nanogap stand out more clearly. With this 200 mM methyl viologen concentration, the quantum yield of the Alexa Fluor 647 dyes
is quenched down to 8% [14, 41]. The fluorescence emission in the 650 to 690 nm range is collected by the same microscope objective in the epifluorescence mode. A multiband dichroic mirror (ZT 405/488/561/640rpc, Chroma) and emission filters (ZET405/488/565/640mv2 and ET655 from Chroma plus one FF01-676/37 from Semrock) reject the backscattered laser light. Detection is performed with a single-photon counting avalanche photodiode (Perkin Elmer SPCM AQR 13) whose output is connected to a time-correlated single photon counting module (HydraHarp 400, Picoquant). Throughout all our experiments, the laser power measured at the microscope back entrance is kept constant at 2 \(\mu\)W and the total integration time per FCS experiment is 240 s. To efficiently remove the afterpulsing artefacts in the ACFs, we implement the FLCS correction following the approach in Ref. [46] and the built-in function in Symphotime64 (Picoquant).
### Fluorescence Correlation Spectroscopy analysis
FCS computes the temporal correlation of the fluorescence signal \(\langle I(t)\cdot I(t+\tau)\rangle/\langle I(t)\rangle^{2}\), where \(\tau\) is the delay (lag) time, and \(\langle\,\rangle\) indicates time averaging. Our analysis approach builds on the methodology used for our earlier studies on plasmonic [39, 40, 41] and dielectric nanoantennas [14]. The total fluorescence signal is considered to be composed of two parts: the enhanced fluorescence from molecules within the nanogap and the fluorescence from the molecules away from the nanogap yet still present within the diffraction-limited confocal volume. An essential feature in FCS is that the molecules contribute to \(G\) in proportion to the square of their fluorescence brightness, so that the fluorescence from molecules in the nanogap region experiencing the maximum enhancement will have a major contribution to the FCS correlation. The temporal correlation of the fluorescence intensity can be written as:
\[G(\tau)=\frac{N^{*}Q^{*2}G_{d}^{*}(\tau)+N_{0}Q_{0}^{2}G_{d0}(\tau)}{(N^{*}Q^{*}+N_{0}Q_{0})^{2}}=\rho_{1}G_{d}^{*}(\tau)+\rho_{2}G_{d0}(\tau) \tag{1}\]
where \(N^{*}\) is the number of molecules within the gap region with brightness \(Q^{*}\), and \(N_{0}\) is the number of molecules with brightness \(Q_{0}\) diffusing away from the region of interest. \(\rho_{1}\) and \(\rho_{2}\) are the amplitude of a 2-species FCS fit while \(G_{d}^{*}(\tau)\) and \(G_{d0}(\tau)\) are the normalized correlation functions for each species taken individually based on a classical three dimensional model:
\[G_{di}(\tau)=\frac{1}{(1+\tau/\tau_{d,i})\sqrt{1+s_{i}^{2}\,\tau/\tau_{d,i}}} \tag{2}\]
\(\tau_{d,i}\) stands for the mean residence time (set by translational diffusion) and \(s_{i}\) is the ratio of transversal to axial dimensions of the analysis volume, whose value is set to \(s=0.2\) and has negligible influence on the estimates of the number of molecules and brightness within the gap (\(N^{*}\), \(Q^{*}\)).
The number of molecules within the gap \(N^{*}\) and their fluorescence brightness \(Q^{*}\) are extracted from the 2-species FCS fit amplitudes \(\rho_{1}\) and \(\rho_{2}\), with the additional knowledge of the total fluorescence intensity \(I=N_{0}Q_{0}+N^{*}Q^{*}\) directly measured by our instrument:
\[N^{*}=\frac{(I-N_{0}Q_{0})^{2}}{I^{2}(\rho_{1}+\rho_{2})-N_{0}Q_{0}^{2}} \tag{3}\]
\[Q^{*}=\frac{I^{2}(\rho_{1}+\rho_{2})-N_{0}Q_{0}^{2}}{(I-N_{0}Q_{0})} \tag{4}\]
The last step to compute \(N^{*}\) and \(Q^{*}\) is to estimate the number of molecules \(N_{0}\) and brightness \(Q_{0}\) for the molecules diffusing away from the nanogap hot spot. To this end, we use the FCS results recorded on the same nanoantenna when the excitation polarization is rotated by 90\({}^{\circ}\) to be perpendicular to the dimer axis. As an additional control, the fluorescence brightness \(Q_{0}\) found with perpendicular polarization is similar to the value found for the confocal reference, while the number of molecules \(N_{0}\) diffusing away from the hot spot is approximately half that seen in the diffraction-limited confocal volume. We relate this effect to the presence of the glass coverslip interface located at the laser focus, which cuts the confocal detection volume by a factor of 2.
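Once the two-species fit amplitudes and the background parameters are known, obtaining \(N^{*}\) and \(Q^{*}\) reduces to a direct evaluation of Eqs. (3) and (4). The sketch below illustrates this step; the function names and the numerical values in the example are ours and purely illustrative, not measured quantities from this work.

```python
import numpy as np

def fcs_two_species(tau, rho1, rho2, tau_d1, tau_d2, s=0.2):
    """Two-species correlation G(tau) built from Eqs. (1) and (2)."""
    g = lambda td: 1.0 / ((1.0 + tau / td) * np.sqrt(1.0 + s**2 * tau / td))
    return rho1 * g(tau_d1) + rho2 * g(tau_d2)

def gap_molecules_and_brightness(I_total, rho1, rho2, N0, Q0):
    """Number N* and brightness Q* of the nanogap molecules, Eqs. (3)-(4).

    I_total    : total detected fluorescence intensity
    rho1, rho2 : amplitudes of the 2-species FCS fit
    N0, Q0     : number and brightness of the molecules diffusing away from
                 the gap, taken from the perpendicular-polarization run
    """
    denom = I_total**2 * (rho1 + rho2) - N0 * Q0**2
    N_star = (I_total - N0 * Q0) ** 2 / denom      # Eq. (3)
    Q_star = denom / (I_total - N0 * Q0)           # Eq. (4)
    return N_star, Q_star

# Illustrative numbers only (counts/s and dimensionless fit amplitudes):
N_star, Q_star = gap_molecules_and_brightness(I_total=5e4, rho1=0.08,
                                              rho2=0.02, N0=5.0, Q0=2e3)
print(N_star, Q_star)   # ~7 molecules in the gap, ~2.9x gain over Q0
```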
### Numerical simulations
We performed electrodynamic simulations using Lumerical, a commercial finite difference time domain (FDTD) solver [47]. The geometric structure of the antenna used in the simulation was created by importing the two-dimensional cross-section from a scanning electron micrograph (SEM) and raising it to a height of 50 nm. Antennas with different gap sizes were created by distorting the central region of the SEM to increase the gap size without affecting the other dimensions. We have considered an overall average background refractive index \(n_{b}=~{}1.41\) to account for both the glass substrate and surrounding aqueous solution. The dielectric function of GaP was taken from Ref. [48].
As our measurements were undertaken at a laser fluence well below saturation to guarantee a linear dependence with the laser power, we used the simplification of the low-intensity regime
\[E\approx\frac{P~{}F}{1-\eta_{0}+\eta_{0}~{}P}\,. \tag{5}\]
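For reference, a minimal numerical sketch of Eq. (5) is given below, under the common interpretation (our assumption for this illustration) that \(F\) is the excitation intensity enhancement at the emitter position, \(P\) the Purcell (emission rate) enhancement with losses neglected, and \(\eta_{0}\) the intrinsic quantum yield (about 8% for the quenched Alexa Fluor 647 used here).

```python
def brightness_enhancement(F_exc, P, eta0=0.08):
    """Fluorescence brightness enhancement in the low-excitation regime,
    Eq. (5): excitation gain times quantum-yield gain.

    F_exc : excitation intensity enhancement at the emitter position
    P     : Purcell (emission rate) enhancement, losses neglected
    eta0  : intrinsic quantum yield of the emitter (~0.08 here)
    """
    return F_exc * P / (1.0 - eta0 + eta0 * P)

# Example with assumed values: a 10x excitation gain and P = 5
print(brightness_enhancement(F_exc=10.0, P=5.0))   # ~38x brightness gain
```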
## Acknowledgement
This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 882135-BrightNano-vdW and the European Research Council (ERC) grant agreement 723241. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy, EXC 2089/1-390776260, and the Bavarian programme Solar Energies Go Hybrid (SolTech). The authors also acknowledge the support of the Center of Nanoscience (CeNS) and EPSRC (EP/T027258/1 and EP/P033431/1). S.A.M. additionally acknowledges the EPSRC (EP/W017075/1), the LeeLucas Chair in Physics and the Centre of Excellence in Future Low-Energy Electronics Technologies, Australian Research Council (CE170100039).
## Supporting Information Available
Additional information on nanofabrication, FCS fit and numerical simulations can be found in the Supporting Information:
## References
* Novotny and Hecht 2006 Novotny, L.; Hecht, B. _Principles of Nano-Optics_; Cambridge University Press, 2006.
* Krasnok et al. 2012 Krasnok, A. E.; Miroshnichenko, A. E.; Belov, P. A.; Kivshar, Y. S. All-dielectric optical nanoantennas. _Opt. Express, OE_**2012**, _20_, 20599-20604.
* Kuznetsov et al. 2016 Kuznetsov, A. I.; Miroshnichenko, A. E.; Brongersma, M. L.; Kivshar, Y. S.; Luk'yanchuk, B. Optically resonant dielectric nanostructures. _Science_**2016**, _354_.
* Staude et al. 2013 Staude, I.; Miroshnichenko, A. E.; Decker, M.; Fofang, N. T.; Liu, S.; Gonzales, E.; Dominguez, J.; Luk, T. S.; Neshev, D. N.; Brener, I.; Kivshar, Y. Tailoring Directional Scattering through Magnetic and Electric Resonances in Subwavelength Silicon Nanodisks. _ACS Nano_**2013**, \(7\), 7824-7832.
* Yang et al. 2018 Yang, Y.; Zenin, V. A.; Bozhevolnyi, S. I. Anapole-Assisted Strong Field Enhancement in Individual All-Dielectric Nanostructures. _ACS Photonics_**2018**, \(5\), 1960-1966.
* Yan et al. 2020 Yan, J.; Liu, X.; Ma, C.; Huang, Y.; Yang, G. All-dielectric materials and related nanophotonic applications. _Materials Science and Engineering: R: Reports_**2020**, _141_, 100563.
* Koshelev et al. 2019 Koshelev, K.; Favraud, G.; Bogdanov, A.; Kivshar, Y.; Fratalocchi, A. Nonradiating photonics with resonant dielectric nanostructures. _Nanophotonics_**2019**, \(8\), 725-745.
* Koshelev and Kivshar 2021 Koshelev, K.; Kivshar, Y. Dielectric Resonant Metaphotonics. _ACS Photonics_**2021**, \(8\), 102-112.
* Diaz-Escobar et al. 2023 Diaz-Escobar, E.; Barreda, A. I.; Mercade, L.; Griol, A.; Pitanti, A.; Martinez, A. Light Guidance Aided by the Toroidal Dipole and the Magnetic Quadrupole in Silicon Slotted-Disk Chains. _ACS Photonics_**2023**, _10_, 707-714.
* Cambiasso et al. 2017 Cambiasso, J.; Grinblat, G.; Li, Y.; Rakovich, A.; Cortes, E.; Maier, S. A. Bridging the Gap between Dielectric Nanophotonics and the Visible Regime with Effectively Lossless Gallium Phosphide Antennas. _Nano Lett._**2017**, _17_, 1219-1225.
* Frizyuk et al. 2021 Frizyuk, K.; Melik-Gaykazyan, E.; Choi, J.-H.; Petrov, M. I.; Park, H.-G.; Kivshar, Y. Nonlinear Circular Dichroism in Mie-Resonant Nanoparticle Dimers. _Nano Lett._**2021**, _21_, 4381-4387.
* Ghenuche et al. 2015 Ghenuche, P.; Mivelle, M.; de Torres, J.; Moparthi, S. B.; Rigneault, H.; Van Hulst, N. F.; Garcia-Parajo, M. F.; Wenger, J. Matching Nanoantenna Field Confinement to FRET Distances Enhances Forster Energy Transfer Rates. _Nano Lett._**2015**, _15_, 6193-6201.
* Cambiasso et al. 2018 Cambiasso, J.; Konig, M.; Cortes, E.; Schlucker, S.; Maier, S. A. Surface-Enhanced Spectroscopies of a Molecular Monolayer in an All-Dielectric Nanoantenna. _ACS Photonics_**2018**, \(5\), 1546-1557.
* Regmi et al. 2018 Regmi, R.; Berthelot, J.; Winkler, P. M.; Mivelle, M.; Proust, J.; Bedu, F.; Ozerov, I.; Begou, T.; Lumeau, J.; Rigneault, H.; Garcia-Parajo, M. F.; Bidault, S.; Wenger, J.; Bonod, N. All-Dielectric Silicon Nanogap Antennas To Enhance the Fluorescence of Single Molecules. _Nano Lett._ _16_, 5143-5151.
* Sortino et al. 2021 Sortino, L.; Zotev, P. G.; Phillips, C. L.; Brash, A. J.; Cambiasso, J.; Marensi, E.; Fox, A. M.; Maier, S. A.; Sapienza, R.; Tartakovskii, A. I. Bright single photon emitters with enhanced quantum efficiency in a two-dimensional semiconductor coupled with dielectric nano-antennas. _Nat Commun_**2021**, _12_, 6063.
* Robinson et al. [1951] Robinson, J. T.; Manolatou, C.; Chen, L.; Lipson, M. Ultrasmall Mode Volumes in Dielectric Optical Microcavities. _Phys. Rev. Lett.__95_, 143901.
* Liang and Johnson [2013] Liang, X.; Johnson, S. G. Formulation for scalable optimization of microcavities via the frequency-averaged local density of states. _Opt. Express__21_, 30812-30841.
* Wang et al. [2011] Wang, F.; Christiansen, R. E.; Yu, Y.; Mork, J.; Sigmund, O. Maximizing the quality factor to mode volume ratio for ultra-small photonic crystal cavities. _Appl. Phys. Lett.__113_, 241101.
* Molesky et al. [2021] Molesky, S.; Lin, Z.; Piggott, A. Y.; Jin, W.; Vuckovic, J.; Rodriguez, A. W. Inverse design in nanophotonics. _Nature Photon__12_, 659-670.
* Wu et al. [2021] Wu, T.; Gurioli, M.; Lalanne, P. Nanoscale Light Confinement: the Q's and V's. _ACS Photonics_**2021**, \(8\), 1522-1538.
* Albrechtsen et al. [2013] Albrechtsen, M.; Vosoughi Lahijani, B.; Christiansen, R. E.; Nguyen, V. T. H.; Casses, L. N.; Hansen, S. E.; Stenger, N.; Sigmund, O.; Jansen, H.; Mork, J.; Stobbe, S. Nanometer-scale photon confinement in topology-optimized dielectric cavities. _Nat Commun__13_, 6281.
* Albrechtsen et al. [2015] Albrechtsen, M.; Lahijani, B. V.; Lahijani, B. V.; Stobbe, S.; Stobbe, S. Two regimes of confinement in photonic nanocavities: bulk confinement versus lightning rods. _Opt. Express__30_, 15458-15469.
* Yang et al. [2014] Yang, Y.; Wang, Y.; Yan, Y.; Cheng, W.; Zhao, Q.; Li, Y. On-Chip Single-Molecule Fluorescence Enhancement via Slotted Gallium Phosphide Nanodisks at Anapole States. _Advanced Optical Materials_ 2301444.
* Gondarenko et al. [1964] Gondarenko, A.; Preble, S.; Robinson, J.; Chen, L.; Lipson, H.; Lipson, M. Spontaneous Emergence of Periodic Patterns in a Biologically Inspired Simulation of Photonic Structures. _Phys. Rev. Lett.__96_, 143904.
* Gondarenko and Lipson 1694 Gondarenko, A.; Lipson, M. Low modal volume dipole-like dielectric slab resonator. _Opt. Express__16_, 17689-17694.
* Bonod et al. 2019 Bonod, N.; Bidault, S.; Burr, G. W.; Mivelle, M. Evolutionary Optimization of All-Dielectric Magnetic Nanoantennas. _Advanced Optical Materials_**2019**, \(7\), 1900121.
* Hu and Weiss 2019 Hu, S.; Weiss, S. M. Design of Photonic Crystal Cavities for Extreme Light Concentration. _ACS Photonics__3_, 1647-1653.
* Brale et al. 2030 Brale, Y.; Wiecha, P.; Cuche, A.; Paillard, V.; Colas des Francs, G. Magnetic and electric Purcell factor control through geometry optimization of high index dielectric nanostructures. _Opt. Express__30_, 20360.
* Mignuzzi et al. 1995 Mignuzzi, S.; Vezzoli, S.; Horsley, S. A. R.; Barnes, W. L.; Maier, S. A.; Sapienza, R. Nanoscale Design of the Local Density of Optical States. _Nano Lett.__19_, 1613-1617.
* Hu et al. 2015 Hu, S.; Khater, M.; Salas-Montiel, R.; Kratschmer, E.; Engelmann, S.; Green, W. M. J.; Weiss, S. M. Experimental realization of deep-subwavelength confinement in dielectric optical resonators. _Science Advances__4_, eaat2355.
* Hong et al. 2023 Hong, I.; Hong, C.; Tutanov, O. S.; Massick, C.; Castleberry, M.; Zhang, Q.; Jeppesen, D. K.; Higginbotham, J. N.; Franklin, J. L.; Vickers, K.; Coffey, R. J.; Ndukaife, J. C. Anapole-Assisted Low-Power Optical Trapping of Nanoscale Extracellular Vesicles and Particles. _Nano Lett._**2023**,
* Moller et al. 2021 Moller, F. M.; Holzmeister, P.; Sen, T.; Acuna, G. P.; Tinnefeld, P. Angular modulation of single-molecule fluorescence by gold nanoparticles on DNA origami templates. _Nanophotonics__2_, 167-172.
* Kuzyk et al. 5115 Kuzyk, A.; Jungmann, R.; Acuna, G. P.; Liu, N. DNA Origami Route for Nanophotonics. _ACS Photonics__5_, 1151-1163.
* Humbert et al. 2015 Humbert, M.; Hallez, Y.; Larrey, V.; Fournel, F.; Palleau, E.; Paillard, V.; Cuche, A.; Ressier, L. Versatile, rapid and robust nano-positioning of single-photon emitters by AFM-nanoxerography. _Nanotechnology__33_, 215301.
* Xiong et al. 2021 Xiong, M.; Sakanas, A.; Dimopoulos, E.; Christiansen, R. E.; Semenova, E.; Sigmund, O.; Yu, Y.; Yvind, K.; Mork, J. Experimental Realization of Topology-Optimized InP Photonic Cavities with Extreme Dielectric Confinement. OSA Advanced Photonics Congress 2021 (2021), paper IM2A.7. p IM2A.7.
* Glembockyte et al. 2021 Glembockyte, V.; Grabenhorst, L.; Trofymchuk, K.; Tinnefeld, P. DNA Origami Nanoantennas for Fluorescence Enhancement. _Acc Chem Res_**2021**, _54_, 3338-3348.
* Wenger and Rigneault 2010 Wenger, J.; Rigneault, H. Photonic Methods to Enhance Fluorescence Correlation Spectroscopy and Single Molecule Fluorescence Detection. _IJMS_**2010**, _11_, 206-221.
* Wohland et al. 2020 Wohland, T.; Maiti, S.; Machan, R. Theoretical FCS models. _An Introduction to Fluorescence Correlation Spectroscopy_**2020**,
* Regmi et al. 585 Regmi, R.; Al Balushi, A. A.; Rigneault, H.; Gordon, R.; Wenger, J. Nanoscale volume confinement and fluorescence enhancement with double nanohole aperture. _Sci Rep__5_, 15852.
* Flauraud et al. 1703 Flauraud, V.; Regmi, R.; Winkler, P. M.; Alexander, D. T. L.; Rigneault, H.; van Hulst, N. F.; Garcia-Parajo, M. F.; Wenger, J.; Brugger, J. In-Plane Plasmonic Antenna Arrays with Surface Nanogaps for Giant Fluorescence Enhancement. _Nano Lett.__17_, 1703-1710.
* Punj et al. 2013 Punj, D.; Mivelle, M.; Moparthi, S. B.; van Zanten, T. S.; Rigneault, H.; van Hulst, N. F.; Garcia-Parajo, M. F.; Wenger, J. A plasmonic 'antenna-in-box' platform for enhanced single-molecule analysis at micromolar concentrations. _Nature Nanotech_**2013**, \(8\), 512-516.
* Puchkova et al. 2020 Puchkova, A.; Vietz, C.; Pibiri, E.; Wunsch, B.; Sanz Paz, M.; Acuna, G. P.; Tinnefeld, P. DNA Origami Nanoantennas with over 5000-fold Fluorescence Enhancement and Single-Molecule Detection at 25 uM. _Nano Lett._ _15_, 8354-8359.
* Chengfeng et al. 2023 Chengfeng, P.; Shutao, Z.; Farsari, M.; Oh, S. H.; Yang, J. K. W. Nanofabrication: the unsung hero in enabling advances in nanophotonics. _Nanophotonics_**2023**, _12_, 1359-1361.
* Lyon and Hubler 2005 Lyon, D.; Hubler, A. Gap size dependence of the dielectric strength in nano vacuum gaps. _IEEE Transactions on Dielectrics and Electrical Insulation__20_, 1467-1471.
* Manfrinato et al. 2005 Manfrinato, V. R.; Zhang, L.; Su, D.; Duan, H.; Hobbs, R. G.; Stach, E. A.; Berggren, K. K. Resolution Limits of Electron-Beam Lithography toward the Atomic Scale. _Nano Lett._ _13_, 1555-1558.
* Enderlein and Gregor 2005 Enderlein, J.; Gregor, I. Using fluorescence lifetime for discriminating detector afterpulsing in fluorescence-correlation spectroscopy. _Review of Scientific Instruments_**2005**, _76_, 033102.
* Ansys Lumerical FDTD \(|\) 3D Electromagnetic Simulator, Release 2021 R1.3. [https://www.ansys.com/products/photonics/fdtd](https://www.ansys.com/products/photonics/fdtd).
* Aspnes and Studna 1983 Aspnes, D. E.; Studna, A. A. Dielectric functions and optical parameters of Si, Ge, GaP, GaAs, GaSb, InP, InAs, and InSb from 1.5 to 6.0 eV. _Phys. Rev. B_**1983**, _27_, 985-1009.
**Supporting Information: Fluorescence enhancement in topologically optimized gallium phosphide all-dielectric nanoantennas**
Cynthia Vidal,\({}^{*,\dagger}\) Benjamin Tilmann,\({}^{\ddagger}\) Sunny Tiwari,\({}^{\P}\) T. V. Raziman,\({}^{\S,\dagger}\) Stefan A. Maier,\({}^{\parallel,\ddagger,\dagger}\) Jerome Wenger,\({}^{\P}\) Riccardo Sapienza\({}^{*,\dagger}\)
\(\dagger\)The Blackett Laboratory, Department of Physics, Imperial College London, London SW7 2AZ, U.K.
\(\ddagger\)Nano-Institute Munich, Department of Physics, Ludwig-Maximilians-University Munich, 80539 Munich, Germany
\({}^{\P}\)Aix Marseille Univ, CNRS, Centrale Marseille, Institut Fresnel, 13013 Marseille, France
\(\S\)Department of Mathematics, Imperial College London, London SW7 2AZ, U.K.
\(\parallel\)School of Physics and Astronomy, Monash University, Clayton, Victoria 3800, Australia
E-mail: [email protected]; [email protected]
## Methods
## Fluorescence Correlation Spectroscopy: Table of fit values
## Numerical simulations
The enhancement factors of local density of optical states (LDOS), for different combinations of locations and orientations of the emitter, were evaluated by performing separate simulations with an electric dipole source for each combination, and obtaining the Purcell factor \(P\). Orientation dependence of LDOS was computed by simulating dipoles along six canonical directions [\((xyz)=(001),(010),(100),(011),(101),(110)\)] and expressing LDOS along an arbitrary orientation with polar angles \((\theta,\phi)\) in terms of these enhancements [1],
\[P(\theta,\phi)= P_{001}\sin(\theta)\cos(\phi)\left[\sin(\theta)\cos(\phi)-\sin( \theta)\sin(\phi)-\cos(\theta)\right]\] \[+P_{010}\sin(\theta)\sin(\phi)\left[\sin(\theta)\sin(\phi)-\sin( \theta)\cos(\phi)-\cos(\theta)\right]\] \[+P_{100}\cos(\theta)\left[\cos(\theta)-\sin(\theta)\cos(\phi)- \sin(\theta)\sin(\phi)\right]\] \[+P_{011}\sin(2\theta)\sin(\phi)\] \[+P_{101}\sin(2\theta)\cos(\phi)\] \[+P_{110}\sin^{2}(\theta)\sin(2\phi)\,. \tag{1}\]
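A direct transcription of Eq. (S1) is sketched below; the numerical enhancements in the example are placeholders rather than simulated values from this work, and the orientation average at the end only illustrates one possible use of the expression.

```python
import numpy as np

def purcell_orientation(theta, phi, P):
    """Purcell factor for a dipole with polar angles (theta, phi), Eq. (S1).

    P maps the six canonical dipole orientations '001', '010', '100',
    '011', '101', '110' to their simulated LDOS enhancements."""
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(phi), np.cos(phi)
    return (P['001'] * st * cp * (st * cp - st * sp - ct)
            + P['010'] * st * sp * (st * sp - st * cp - ct)
            + P['100'] * ct * (ct - st * cp - st * sp)
            + P['011'] * np.sin(2 * theta) * sp
            + P['101'] * np.sin(2 * theta) * cp
            + P['110'] * st**2 * np.sin(2 * phi))

# Placeholder enhancements and an isotropic orientation average:
P = {'001': 3.0, '010': 3.5, '100': 1.2, '011': 2.0, '101': 2.2, '110': 4.0}
theta = np.arccos(np.random.uniform(-1.0, 1.0, 100_000))  # uniform on sphere
phi = np.random.uniform(0.0, 2.0 * np.pi, 100_000)
print(purcell_orientation(theta, phi, P).mean())
```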
The brightness enhancement in the nanoantenna results from three processes [2]: (1) excitation enhancement due to the concentration of the electric field, (2) quantum yield enhancement due to the Purcell effect, and (3) increased collection efficiency due to antenna beaming effect. These three processes can be disentangled through numerical simulations and accounted for individually. Here, we neglected the enhancement due to collection efficiency considering the wide collection numerical aperture of our objective and the all-dielectric nature of our antenna. Further, as the wavelength of interest is sufficiently above the band gap, we neglected the ohmic losses in the antenna. |
2301.09465 | Assessment and Performance of Flexible Quench Antenna Array Diagnostics
for Superconducting Magnets | FNAL has been developing multiple versions of flexible quench antennas
(flex-QA), including some specially optimized for high sensitivity and/or high
resolution, to characterize quench events and transients during current ramping
in superconducting magnets. A fundamental feature in our use of these is the
creation of grid-like structures of sensitive elements to cover coil surfaces,
with the aim of getting precise localization of magnetic flux-change events.
The flex-QA are coupled with fast data-acquisition, allowing comprehensive
analysis of signals at the relevant fine time scales. In addition to arrays of
various flex-QA types being used during cryogenic testing of superconducting
magnets, we also are utilizing a newly developed room temperature test stand to
better understand QA response characteristics. The data from actual
superconducting magnet tests, warm test stand measurements, and simulation data
on the same QA designs allows us to draw conclusions on operational feasibility
and plan better for improvements of our sensors. In this paper we present data
from the multiple tests performed and analysis results. Flex-QA designs are
compared, and their features, options, and optimization discussed. | Stoyan Stoynev, Joe DiMarco | 2023-01-23T14:54:47Z | http://arxiv.org/abs/2301.09465v1 | # Assessment and Performance of Flexible Quench Antenna Array Diagnostics for Superconducting Magnets
###### Abstract
FNAL has been developing multiple versions of flexible quench antennas (flex-QA), including some specially optimized for high sensitivity and/or high resolution, to characterize quench events and transients during current ramping in superconducting magnets. A fundamental feature in our use of these is the creation of grid-like structures of sensitive elements to cover coil surfaces, with the aim of getting precise localization of magnetic flux-change events. The flex-QA are coupled with fast data-acquisition, allowing comprehensive analysis of signals at the relevant fine time scales. In addition to arrays of various flex-QA types being used during cryogenic testing of superconducting magnets, we are also utilizing a newly developed room temperature test stand to better understand QA response characteristics. The data from actual superconducting magnet tests and "warm" test stand measurements, together with simulation data on the same QA designs, allow us to draw conclusions on operational feasibility and plan better for improvements of our sensors. In this paper we present data from the multiple tests performed and analysis results. Flex-QA designs are compared, and their features, options, and optimization discussed.
Antenna arrays, electromagnetic measurements, electromagnetic transients, superconducting magnets.
## I Introduction
Diagnostic tools are crucial for understanding the performance of superconducting accelerator magnets, whose designs and technology continually develop. Instrumentation with both high spatial and temporal resolution, as well as a variety of sensor types, is needed to fully understand and describe the complex phenomena driving magnet performance, which is particularly affected by quenching and its dependencies. FNAL has developed a new type of array based on flexible printed-circuit-board technology and quench-antenna sensing, flex-QA [1], to help address these diagnostic needs. Earlier studies with individual small QA [1, 2] indicated significant potential of this approach, and mixed-type array configurations for long magnets [3] also proved the concept viable. Flex-QAs from one of the newly designed arrays were embedded in a mirror magnet [4], following the curved aperture and facing the coil at \(\sim\) 5 mm distance. The same design was also tested later at a specially developed room temperature test stand [1], where tests were subsequently also completed on several other flex-QA designs.
In this paper, we describe the work on QA data acquired from a superconducting magnet, along with the resulting analysis and insights. We explore bench-testing techniques relatable to the experiment and compare to simulations. We additionally examine features of the different flex-QA arrays, obtained on the warm test stand, to guide us to improved designs. Main points are discussed, and conclusions are drawn.
## II Superconducting Magnet Test with Flex-QA Arrays
### _Flex-QA positioning and DAQ_
The QA design with which we chose to instrument a "mirror" magnet is shown schematically in Fig. 1 and pictured in ([1], Fig. 5). It features 20 independent diagonal channels on a rectangular panel with sensitive area 80 mm x 460 mm. Two lengths of those cover the inner layer coil area, given the coil active area was \(\sim\) 80 cm long and had 60 mm aperture. Two panels were connected in series (20 channels total) and two others placed on top of them individually (40 channels together) to form a grid of 60 channels as presented in Fig. 1. Fig. 2 shows an actual picture of the flex-QAs with the 3D printed support seen under the panels. Features on the coil, QA arrangements and magnet parts (laminations) were all used together to relate geometrically the coil-to-QA coordinates with the goal of matching positions to better than 1 mm. The many twisted-pair wires coming out of the partial-bore of the magnet were carefully routed and labeled.
Fig. 1: Flex-QA channel arrangement. The top two shown have their QA sensors connected in series and the bottom ones (LE, RE) remain independent. Dotted lines show an example channel on each of the top and bottom QA.
The 60 QA signal channels along with the quench detection signal were read by a DAQ system based on NI-6143 modules (8 channels, 16-bit, 250 kHz). We took data at 100 kHz per channel rate. Data recording included the whole magnet current ramps, facilitated by LabView software.
### _Magnet quenching and flex-QA data analysis_
The magnet under consideration in this paper was part of other developments [4] and its test program started at 4.5 K. Most magnets quench predominantly in their coil inner layer(s) where the field is higher, and the first quench in our test was in the inner layer. This layer was instrumented with voltage taps and based on those we knew the likely quench location was somewhere around the second spacer/end-part, within \(\sim\)40 cm cable length. We also had acoustic sensors installed on both magnet ends and, given that the first quench yielded fairly large mechanical signals, we triangulated the likely source to within 10 cm range in longitudinal direction.
With a typical noise level below 1 mV, we identified many QA signals above the noise at a distinctive time which we identified as the quench time. The flex-QA quench time was typically a few ms before the visible voltage rise from the quenching voltage-tap segment, as observed in previous work with flex-QA as well [1]. While QA signals show complex behavior over time, the first reaction to a quench is a sharp peak (or peaks) developing within 200 micro-seconds. Within our time resolution of 10 micro-seconds, all channels featuring this peak reacted simultaneously. The magnitude of the peaks, however, differed, and we used it to associate the highest-amplitude channels with the geometrical origin of the current redistribution. The highest peaks in quench one, 16 mV and 14 mV, were seen in the channels shown in Figures 1 (dotted), 3 (highlighted) and 4, with 12 other channels reaching peak magnitudes of 2-8 mV. The precise quench location from the QA was consistent with the rough voltage-tap-based constraints around the pole turn and the axial constraints from the acoustic signals. Thus, multiple diagnostic tools can be utilized to build a coherent and richer picture, although VTs and acoustics were not quite sensitive enough to provide a good location in this case.
Most of the quenches in this magnet test, tens of them, occurred in the outer layer. The flex-QA array was sensitive enough to register all of them - inner-layer quenches reached an immediate signal-to-noise ratio of over 35, while outer-layer ones reached ratios of up to 10. Fig. 4 shows a zoomed-in view of typical quench patterns. We identified four distinctive patterns, including their time dependencies, and the one on the right was associated with almost all outer-layer quenches. The channels with the highest amplitudes at the time of quench (within 200 micro-seconds) point to a spot in the non-lead end of the outer layer, approximately consistent with the coil damage regions discussed in [4]. Fig. 5 is another geometrical representation of flex-QA array signals incorporating the need for correlated processing. It shows the sensitivity at a particular time slice, with the sensitivity for crossing channels (1,2) defined as \(s=\sqrt{M_{1}M_{2}}\), where \(M\) is the signal normalized to the standard deviation computed "pre-quench" ([-0.2, -0.1] s) and based on a running average over 1 ms. Noise filtering could improve the sensitivity further but is not applied here. These and other representations revealed a "weak" spot in the non-lead-end outer layer associated with quenches. This area consistently sees current redistribution in adjacent QA channels up to the quench detection time. The same area was active between quench and quench detection
Fig. 4: QA array activity around quench time in first (inner layer / IL/, left) and the sixth (outer layer /OL/, right) magnet quenches. Noise patterns at 60 Hz are clearly visible and phase-shifted in the boards covering the two halves of the coils and are largely missing in the QA array consisting of two connected boards over the whole coil (channels 41-60). Absolute and normalized signals are shown for quench one and six, respectively; no filtering is applied.
Fig. 5: Most outer layer quenches show this pattern at the first moments after quench start. The active area (reddish color) corresponds to locations between the end part and the midplane in the outer coil layer. Dimensions on the figure are approximate and used for visualization on the coil (which is in fact curved).
Fig. 3: Top: Superimposed images of QA arrays on top of each other with the two highest amplitude channels responding to the first quench highlighted. Bottom: Located position on the inner layer (ID) surface of the coil (note: the coil is seen from “below” and QA images are taken from “above” ).
Fig. 2: Flex-QA arrays are placed on top of 3D-printed support seen under the bundle of wires coming from the QA. The coil is placed on top of them; Kapton insulation sheet and protection heaters complete the view. All components are placed on “mirror” iron blocks which are part of the “mirror” magnet.
times, even in the first quench, which was in the inner layer. Thus, the evidence suggests that the main damage to the coil outer layer occurred at or before the first spontaneous quench.
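A minimal numpy sketch of the cross-channel sensitivity map used for Fig. 5 is given below. The 100 kHz sampling rate and the pre-quench window follow the text; the use of absolute values of the normalized signals and the specific pairing of the two grid layers are our assumptions for illustration.

```python
import numpy as np

FS = 100_000  # per-channel sampling rate (Hz)

def normalized_signal(v, t, win_ms=1.0, pre=(-0.2, -0.1)):
    """Smooth one channel with a ~1 ms running average and normalize it by
    the standard deviation of the pre-quench window (t = 0 is quench time)."""
    n = max(1, int(win_ms * 1e-3 * FS))
    smooth = np.convolve(v, np.ones(n) / n, mode="same")
    sigma = smooth[(t >= pre[0]) & (t < pre[1])].std()
    return smooth / sigma

def sensitivity_map(channels_a, channels_b, t, t_slice):
    """Cross-channel sensitivity s = sqrt(M1*M2) at one time slice, for two
    sets of channels whose crossings tile the coil surface."""
    i = np.argmin(np.abs(t - t_slice))
    Ma = np.array([abs(normalized_signal(v, t)[i]) for v in channels_a])
    Mb = np.array([abs(normalized_signal(v, t)[i]) for v in channels_b])
    return np.sqrt(np.outer(Ma, Mb))   # (len(a), len(b)) grid of s values
```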
## III Flex-QA arrays at the test-bench
### Warm test stand characteristics and test goals
The QA warm test stand (WTS), [1] and Fig. 6 (left), allows a current source (loop) to be positioned and controlled close to the QA board(s) under test. After experimenting, we decided to adopt a loop of 2 cm diameter oriented in the "Y-Z" plane (Y is along the sensor length), emulating the cable plane at the straight section of a magnet during current redistribution and quench propagation. The current loop is fixed with the help of a shaper attached to the positioning system. Its lowest point is maintained at \(\sim\) 1 mm from the QA surface during measurements, to allow for a proper signal-to-noise ratio, Fig. 6 (right).
Flex-QA arrays were placed horizontally, aligned and fixed in the "X-Y" plane. To minimize electrical noise, we had to cover the surface under the QA board with grounded aluminum foil, which eliminated any modulation in most cases. The signal noise level was as low as 0.2 mV, although a few individual channels reached \(\sim\)1 mV of modulated (i.e. still reducible) noise. The boards were placed parallel to the motion-system coordinate frame to a very good degree, given that the boards were \(\sim\) 0.5 m long. However, the flatness of the boards cannot be guaranteed to within fractions of a mm. As the current source was 1 mm away, some "waviness", largely due to QA manufacturing and material features, may be expected.
A generator-controlled power supply was used to provide current of \(\pm\)12 A in the form of square waves with 100 Hz frequency. Data was recorded with the same DAQ system used for the cold magnet test and same sampling rate of 100 kHz.
WTS measurements can serve multiple purposes. Responses of QA channels are recorded and effectively used for relative calibration. Crosstalk effects are checked and investigated. Abnormal behavior in particular channels or boards can also be pinpointed. Current redistribution models or orientation dependencies, in the form of current loop(s) characteristics, can be studied. Most importantly, the sensitivity within the QA arrays can be probed, which affects design choices and applicability to particular cases. Finally, the WTS can be utilized to emulate responses observed in real magnets, given that a proper current shape is provided, and, ultimately, to help explain or predict real responses. Elaborated QA sets could be assessed there before committing to cold-test application, while simulations could be benchmarked in advance of cold testing.
### Test bench data
The flex-QA arrays we developed had two distinct main designs - sensors inclined with respect to the border lines, a.k.a. the "diagonal" design, and a bucked "triangle-pair" design forming a rectangular sensor. We fabricated both, keeping the sensor width the same, 8 mm, and the overall length and width of the different boards also the same ([1], Fig. 5). The "diagonal" design, used in magnet testing, featured 20 sensors of variable length per board. The "triangle-pair" design consisted of 10 equivalent sensors, each having six identical sections of bucked "triangle-pairs", with all sections connected in series.
Fig. 6 (right) shows the QA response to a single current "jump"; the decay constant \(\tau\), primarily driven by the system and not the QA channels, is \(\sim\) 0.2 ms. Thus, the half-period of the signal modulation corresponds to 25 \(\tau\).
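A simple way to extract such a time constant from a recorded step response is a single-exponential fit; the sketch below (with hypothetical variable names, and assuming the response is well described by one exponential plus an offset) illustrates this.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_decay(t, v0, tau, offset):
    """Single-exponential relaxation after a current step."""
    return v0 * np.exp(-t / tau) + offset

def fit_time_constant(t, v):
    """Fit tau (s) to a window of one QA channel starting at the jump."""
    p0 = (v[0] - v[-1], 2e-4, v[-1])      # ~0.2 ms initial guess
    popt, _ = curve_fit(step_decay, t - t[0], v, p0=p0)
    return popt[1]
```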
The "diagonal" sensors have a clear symmetry defined by their mid-line and Fig. 7 confirms a dead-zone near the line of symmetry. By design this symmetry line is not along the current or quench propagation and is thus a limited "blind-spot" in practice. The response at channel edges is very strong which allows relatively strong readout of multiple channels at a given time/position, Fig 7 (bottom), when the source scans the board width, Fig. 7 (top). As long as noise or systematic effects are negligible, having multiple sensors providing scaled response is of utmost importance to pin-point current variation sources. Alternatively, these give a strong clue about acceptable sensor sizes, assuming emulation of current source relative location to QA sensor is satisfactory.
The "diagonal" QA array showed multiple occurrences of reproducible "cross-talk": the noise floor (the average read-out value without signal) is affected by the signal in neighboring channel(s). We do not yet know the source of this systematic problem, and it was not the case for all channels. We did not observe similar
Fig. 6: Left: WTS overview with a flex-QA array ready for testing. Right: A QA channel response to current “jump” (-12 A to +12 A) - time constant of 0.2 ms is measured. Noise level (RMS) is typical; voltage offset is channel dependent and signal strength is determined with respect to it.
Fig. 7: Top: A scan path (blue arrow) over a “diagonal” design board. Dark lines are the middles of the sensors and pale bright lines in between indicate separation of channels. Channels are symmetric along the dark lines. Bottom: Results from the scan at constant speed. Inflection points (no signal) indicate crossing the symmetry lines (middles of channels) where signal vanishes as magnetic flux through the sensor on the left of the current loop compensates the flux on the right. Distance between those inflection points is nominally 8 mm / cos (0.165 rad). Non-symmetries with respect to the offset voltages indicate positive cross-talk. Voltage offsets are not channel ordered.
behavior before, either in single short or long channels or in multiple adjacent short channels. Crosstalk is a serious issue if one wants to use a multi-channel approach to provide better spatial resolution. It is instructive to note that the "triangle-pair" design showed no such problem, although the manufacturers, along with other parameters, differed between the QA board designs.
Fig. 8 shows scan directions across channels (comprised of multiple "sections") of the "triangle-pair" design (top) and the resulting signal (bottom). The response to a source above the middle part of a section (see Fig. 9 (left)) is particularly weak close to the channel edges, where an "island" is present at each edge; the adjacent channels are insensitive for the same reason. By design, current and quench propagation are not parallel to the "island" lines, and thus the dead-zones have limited practical length. Although not shown, inflection points (low signal) are also present between sections of a single channel. Sectioning cannot be avoided for long and narrow channels without introducing overly long dead-zones in real applications. The advantages of the "triangle-pair" design relate to robustness against noise. A main disadvantage is that its lower sensitivity along channel edges makes it apparently less well suited for multi-channel use.
### _Simulations_
We explore a simple simulation [3] in which a current doublet with a length of 1/5 of a channel section of the "triangle-pair" design acts on the simulated version of the sensor. In Fig. 9 the pair is highlighted in a picture (left), and results from the simulations are shown along with the shapes from WTS data (right). In the simulation, the doublet, approximating a current-redistribution source in a magnet and resembling the WTS probe (i.e., 2 cm diameter over an 8-cm-long section), is moved along the width of the sensor at five adjacent positions along its length and at a fixed distance from the sensor surface. The simulation describes well how the signal changes from one side to the other over the sensor width/length and is consistent with the data (in the data we change the signal polarity while moving the source; in the analytic simulation we calculate the absolute flux/voltage). It is apparent that the behavior is reproducible and understandable to a good degree even though some parameters (e.g. a round probe vs. a straight doublet) deviate. It is a good start to tune simulations with different simple sources on various designs and then move to more elaborate source shapes, which may better describe the 3D shapes of real magnet current redistribution.
Simulations will also have to be employed to resolve convoluted signals in the data. The initial spike we associate with the quench start has the characteristic decay of our DAQ system (\(\sim\)0.2 ms decay constant), but it is followed by other spikes and more gradual signal changes, many of them higher than the initial spike amplitude. At a conceptual level this can be interpreted as consecutive fast current-redistribution steps (each within tens to hundreds of micro-seconds), possibly from individual strands being shut off and on during current sharing and quenching [5]. Refined simulation studies are needed to describe the actual process with a good level of confidence.
## Conclusion
We demonstrate how flex-QA arrays can be used to pinpoint quench locations with remarkable precision. They are very reliable, and their signals are well above noise. We identified all quench starts in a superconducting magnet test well before quench detection and before any visible voltage rise in the conductor - this includes tens of quenches in the outer layer, with the whole thickness of the inner coil layer between the current-redistribution location and the QA sensors. Our "warm" test stand studies indicated some weaknesses in the QA designs and possibly unavoidable imperfections, but also showed that multi-channel sensitivity to quenches can serve as an additional tool to improve precision. The time development is yet to be fully explored, along with more detailed studies of flex-QA designs at the warm test stand and in actual cryogenic magnet testing.
## Acknowledgments
We thank T. Cummings and N. Moibenko for QA design work, V. Nikolic for mechanical engineering leadership, O. Kiemschies and S. Krave for comprehensive DAQ support, Kelsey Scheidt for helping with WTS measurements.
Fig. 8: Top: Scan paths (blue arrows left, middle, right) across the board (crossing over several channels). “Islands” (along lines) are the equivalents of the symmetry lines in the “diagonal” design – they correspond to lowest signal within a channel. Bottom: Results from the scans. Signal depends on source positions along both channel length and width.
Fig. 9: Left: A triangle-pair section design portion of a sensor. Right: Simulated response of the circuit to current doublet at various positions (fixed axial, scan along the channel width). WTS data at relevant positions is compared to simulations. Only the main sensor geometry is embedded in simulations. |
2302.10464 | Creation of crystal structure reproducing X-ray diffraction pattern
without using database | When a sample's X-ray diffraction pattern (XRD) is measured, the
corresponding crystal structure is usually determined by searching for similar
XRD patterns in the database. However, if a similar XRD pattern is not found,
it is tremendously laborious to identify the crystal structure even for
experts. This case commonly happens when researchers develop novel and complex
materials. In this study, we propose a crystal structure creation scheme that
reproduces a given XRD pattern. We employed a combinatorial inverse design
method using an evolutionary algorithm and crystal morphing (Evolv&Morph)
supported by Bayesian optimization, which maximizes the similarity of the XRD
patterns between target one and those of the created crystal structures. For
sixteen different crystal structure systems with twelve simulated and four
powder target XRD patterns, Evolv&Morph successfully created crystal structures
with the same XRD pattern as the target (cosine similarity > 99% for the
simulated ones and > 96% the experimentally-measured ones). Furthermore, the
present method has merits in that it is an automated crystal structure creation
scheme, not dependent on a database. We believe that Evolv&Morph can be applied
not only to determine crystal structures but also to design materials for
specific properties. | Joohwi Lee, Junpei Oba, Nobuko Ohba, Seiji Kajita | 2023-02-21T06:11:08Z | http://arxiv.org/abs/2302.10464v3 | # Creation of crystal structure reproducing X-ray diffraction pattern without using database
###### Abstract
When a sample's X-ray diffraction pattern (XRD) is measured, the corresponding crystal structure is usually determined by searching for similar XRD patterns in the database. However, if a similar XRD pattern is not found, it is tremendously laborious to identify the crystal structure even for experts. This case commonly happens when researchers develop novel and complex materials. In this study, we propose a crystal structure creation scheme that reproduces a given XRD pattern. We employed a combinatorial inverse design method using an evolutionary algorithm and crystal morphing (Evolv&Morph) supported by Bayesian optimization, which maximizes the similarity of the XRD patterns between target one and those of the created crystal structures. For twelve different crystal structure systems, Evolv&Morph successfully created crystal structures with the same XRD pattern as the target (cosine similarity > 99%). Furthermore, the present method has merits in that it is an automated crystal structure creation scheme, not dependent on a database. We believe that Evolv&Morph can be applied not only to determine crystal structures but also to design materials for specific properties.
**Keywords:** crystal structure creation, evolutionary algorithm, crystal morphing, Evolv&Morph, XRD similarity, inverse design
## Introduction
When synthesizing a material with a particular composition, one wants to confirm whether the intended crystal structure has been successfully synthesized as a crystalline phase. X-ray diffraction (XRD) analysis is used to determine atomistic and molecular structures.[1] It has a wide range of applications, including determination of the crystalline phase, orientation, lattice parameters, and grain size. Furthermore, because XRD instruments are widely available and relatively easy to handle, the method is typically the first analysis used to investigate the crystal structure and phase.
For determining the crystal structure based on XRD analysis, material databases (DBs) are widely used as well. There are large material DBs containing hundreds of thousands of XRD patterns and corresponding crystal structures, such as the Powder Diffraction File (PDF(tm))[2] produced/managed by the International Centre for Diffraction Data (ICDD(tm)) and the Inorganic Crystal Structure Database (ICSD).[3] XRD patterns can be directly simulated from the crystal structures stored in the DB. Conventionally, the crystal structure of a measured sample is identified by matching its XRD pattern with those of candidate materials searched in the DB.
Recently, various methods have been suggested for more accurate and efficient identification of material systems or crystal structures based on XRD analysis and DBs. Machine learning models[4, 5, 6] have been proposed to predict crystal systems and space groups from input XRD patterns. Griesemer _et al._ suggested a prototype-search method exploiting DBs and first-principles calculations to identify the structures of approximately five hundred compounds from experimental XRD patterns missing from DBs.[7] Dong _et al._ constructed a deep learning model to predict an XRD pattern directly from the chemical composition alone.[8] However, when the measured XRD pattern indicates an unknown crystal structure, it is often impossible to find similar XRD patterns in a DB. Because such advanced methods strongly depend on the accumulated DB, the generated crystal structure is not guaranteed to match the measured XRD pattern successfully.
To reduce the gap between the measured and candidate XRD patterns, Rietveld refinement,[9] which directly tunes a candidate crystal structure to approach a similar XRD pattern, is usually employed. Because this method needs to optimize complex combinations of various parameters, high expertise is required to successfully reduce the difference between the XRD patterns. Recently, Ozaki _et al._ proposed the Black-Box Optimization Rietveld (BBO-Rietveld) method,[10] which automatically optimizes various combinations of parameters for Rietveld refinement. BBO-Rietveld is easier to handle, even for nonexperts, and provides a higher success probability for Rietveld refinement. However, Rietveld refinements strongly depend on the initial structure for the optimization. If the XRD pattern of the initial structure loaded from a DB differs significantly from the measured XRD pattern, the Rietveld refinement often does not succeed. Therefore, it is desirable to develop an inverse design[11, 12] method to directly create a crystal structure reproducing the target XRD pattern without relying on any DB.
In this study, we propose a scheme that consists of an evolutionary algorithm[13, 14] and crystal morphing[15] (Evolv&Morph) for the direct creation of crystal structures that reproduce a target XRD pattern. Evolv&Morph does not use prior knowledge of crystal structures such as a structural DB. We show that Evolv&Morph successfully created structures reproducing the target XRD patterns for twelve different materials.
## 2 Results
### Overview of the present scheme
Figure 1 shows an overview of Evolv&Morph. The goal is to create a crystal structure with the same XRD pattern as a given target. To achieve this goal, the main part of Evolv&Morph requires two capabilities. One is to create an enormous number of structures automatically, and the other is to select and modify these structures so as to maximize (optimize) the similarity score of their XRD patterns with respect to the target. To this end, the similarity score must be evaluated immediately. As structure creation methods, we employed an evolutionary algorithm[13, 14] and crystal morphing[15].
The evolutionary algorithm is a heuristic optimization method for creating various crystal structures. It has been widely used to suggest novel structures that optimize a target property (fitness), such as the thermodynamic energy[13, 14] or more practical properties such as the defect formation energy[16] and hardness[17]. Here, we chose the similarity score of XRD patterns as the optimized target property, in order to find a structure possessing the target XRD pattern. The algorithm creates structures through various genetic operators so as to maximize the similarity score.
Crystal morphing generates intermediate crystal structures between two given structures[15]. As an application example, several virtual geometric structures with four carbon atoms had previously been morphed in such a way as to reproduce target XRD patterns. For practical material systems, however, it had not yet been confirmed whether such an application of crystal morphing can successfully provide crystal structures reproducing XRD patterns. The optimization of the similarity score during morphing can be supported by external optimizers such as Bayesian optimization[18].
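As an illustration of this coupling, the sketch below maximizes the XRD similarity over a single morphing coordinate \(\alpha\in[0,1]\) with a Gaussian-process surrogate and an expected-improvement acquisition. The one-dimensional parameterization and the helper `score_at` (which would morph the two parent structures at fraction \(\alpha\), simulate the XRD pattern, and return \(S_{\mathrm{cos}}\) against the target) are our simplifying assumptions, not the exact implementation used in this work.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def bayes_opt_morph(score_at, n_init=4, n_iter=20, seed=0):
    """Maximize score_at(alpha), alpha in [0, 1], with GP + expected improvement."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, n_init).reshape(-1, 1)
    y = np.array([score_at(a) for a in X[:, 0]])
    grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        mu, sd = gp.predict(grid, return_std=True)
        z = (mu - y.max()) / np.maximum(sd, 1e-9)
        ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
        a_next = grid[np.argmax(ei)]
        X = np.vstack([X, a_next.reshape(1, 1)])
        y = np.append(y, score_at(a_next[0]))
    best = np.argmax(y)
    return X[best, 0], y[best]
```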
Among the various created structures, those with significantly high similarity scores can be refined by post-processing such as Rietveld refinement and symmetrization. Such refinement methods further tune the structures slightly to increase the similarity score. Finally, the structures with the highest similarity scores can be recommended as candidates that successfully reproduce the target XRD pattern. More details of each method are given in the METHOD section.
### XRD simulations and similarity metrics
When a crystal structure is given, its XRD pattern can be simulated. Accordingly, the XRD patterns of the created structures were simulated immediately. The target XRD patterns to be reproduced were also simulated in the same way from crystal structures loaded from the ICSD [3]. Herein, the structures underlying the target XRD patterns are referred to as target structures. Six binary and six ternary compound systems were selected, one binary and one ternary for each of the cubic, hexagonal, trigonal, tetragonal, orthorhombic, and monoclinic structures. They are listed in Table 1. Some typical structures were selected (for example, MgO in the rocksalt structure and Al\({}_{2}\)O\({}_{3}\) in the corundum structure), while the others were randomly chosen from the material list. In the ICSD, the lattice parameters and internal coordinates of the constituent atoms have primarily been obtained through synthesis and measurement.
As a similarity metric for two XRD patterns, a cosine similarity (\(S_{\mathrm{cos}}\)) was employed. A higher score indicates that two vectors \(\mathbf{x}\) and \(\mathbf{y}\) of XRD patterns have more overlapped peaks. They have the same number of bins (the same \(2\theta\) range). The maximum \(S_{\mathrm{cos}}\) is 100%. \(S_{\mathrm{cos}}\) can be obtained from the following
Figure 1: **Overview to create a crystal structure reproduces the target XRD pattern.** First, a target XRD pattern for reproducing is given. Evolv&Morph, consisting of the evolutionary algorithm and crystal morphing, tries to create various crystal structures with optimizing the target score: similarity of their XRD patterns with respect to the target. The open circles indicate some of the created structures, and the closed circles indicate the structure with the highest similarity score for each method. The optimization target score is evaluated immediately during the creation of the structures. For the crystal morphing procedure, the dashed curve and gray shaded area indicate the predictive mean and confidence interval of Bayesian optimization, respectively. Finally, after refinement, the structure with almost the same XRD pattern as the target is suggested.
equation:
\[S_{\mathrm{cos}}(\mathbf{x},\mathbf{y})=\frac{\mathbf{x}\cdot\mathbf{y}}{\|\mathbf{x}\|\|\mathbf{y}\|}= \frac{\sum_{i}x_{i}y_{i}}{\sqrt{\sum_{i}x_{i}^{2}}\sqrt{\sum_{i}y_{i}^{2}}}. \tag{1}\]
where \(\|\ \|\) is an \(L2\)-norm, \(i\) is the bin index (\(2\theta\)), and \(\cdot\) is a dot product. The two XRD patterns were smeared with a 0.5\({}^{\circ}\)-wide Gaussian before calculating \(S_{\mathrm{cos}}\).
Note that \(S_{\mathrm{cos}}\) was taken as the maximum score obtained after isotropic volume changes, to compensate for a critical weak point of this metric: it is too sensitive to peak shifts [19, 20], which correspond to changes of the lattice volume (or parameters) of the crystal structure. The exact method for solving this problem is discussed in the Supplementary Section "Cosine similarity with isotropic volume changes" and Figs. S1-S3.
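A minimal sketch of the smeared cosine similarity (Eq. 1) is given below; interpreting the 0.5\({}^{\circ}\) width as a FWHM, the fixed \(2\theta\) step, and the omission of the isotropic-volume-change maximization (described in the Supplementary Information) are our simplifications.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def xrd_cosine_similarity(x, y, two_theta_step=0.01, width_deg=0.5):
    """Cosine similarity (in %) between two XRD intensity vectors binned on
    the same 2-theta grid, after smearing both with a Gaussian (Eq. 1)."""
    sigma_bins = width_deg / (2.0 * np.sqrt(2.0 * np.log(2.0))) / two_theta_step
    xs = gaussian_filter1d(np.asarray(x, dtype=float), sigma_bins)
    ys = gaussian_filter1d(np.asarray(y, dtype=float), sigma_bins)
    return 100.0 * np.dot(xs, ys) / (np.linalg.norm(xs) * np.linalg.norm(ys))
```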
### Evolutionary algorithm for optimizing similarity of XRD pattern to target
First, the evolutionary algorithm was tested to determine whether it could create a crystal structure possessing the target XRD pattern. As the input, the number of atoms was set to be the same as that of the primitive or conventional cell of the target structure. For each system, the evolutionary algorithm was run five times independently. The crystal structure with the highest \(S_{\mathrm{cos}}\) is taken as the best structure for reproducing the target XRD pattern. Note that, to account for thermodynamic stability, structures with a formation energy more than 0.2 eV/atom above the most stable structure in each trial were excluded even if they exhibited the highest \(S_{\mathrm{cos}}\).
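This selection rule amounts to an energy filter followed by an arg-max over the similarity score; a sketch with hypothetical field names for the per-candidate records is shown below.

```python
def pick_best(candidates, e_window=0.2):
    """candidates: iterable of dicts with keys 's_cos' (%) and 'e_form'
    (formation energy, eV/atom). Discard candidates lying more than
    e_window above the lowest-energy one, then return the candidate with
    the highest S_cos among the rest."""
    e_min = min(c["e_form"] for c in candidates)
    stable = [c for c in candidates if c["e_form"] <= e_min + e_window]
    return max(stable, key=lambda c: c["s_cos"])
```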
Table 1 shows the \(S_{\rm cos}\) of the crystal structures created by the evolutionary algorithm and by the other post-processing steps (discussed later). In many cases, the evolutionary algorithm successfully produced structures with a significantly high \(S_{\rm cos}\). Among the twelve systems, ten exhibited a best structure with the highest \(S_{\rm cos}\geq 95\%\) over all five trials of the evolutionary algorithm. Six systems exhibited a best structure with mean \(S_{\rm cos}\geq 95\%\). Therefore, the evolutionary algorithm is a strong tool for directly creating a crystal structure reproducing the XRD pattern.
One feature of this algorithm is that it relies on luck in selecting evolutionary paths. For example, for the Zr\({}_{2}\)CuSb\({}_{3}\) and Zr\({}_{3}\)Cu\({}_{4}\)Si\({}_{2}\) systems, the standard deviation of \(S_{\mathrm{cos}}\) over the five different evolutionary-algorithm runs was \(\geq 6\%\). Figure 2 shows the different \(S_{\mathrm{cos}}\) distributions of all Zr\({}_{2}\)CuSb\({}_{3}\) crystal structures created during two different trials of the evolutionary algorithm. To quantify the structural difference between the created structures and the target structure, smooth overlap of atomic positions (SOAP) [21, 22] distances \(d\) (see the definition in the METHOD section and Oba _et al._[15]) are plotted as well. Herein, only a simple property of the SOAP distance is essential: a smaller distance indicates a smaller difference between two structures.
In the successful case, structures with increased \(S_{\rm cos}\) were proposed by meaningful genetic operators, even though the randomly created structures of the first generation had low \(S_{\rm cos}\). As a result, as shown in Fig. 2c, \(S_{\rm cos}\) gradually increased with the generation number. Finally, a structure with a significantly high \(S_{\rm cos}\) of 98% was created. In the unsuccessful case, however, a high \(S_{\rm cos}\) was not obtained because no meaningful structural evolution took place, and \(S_{\rm cos}\) remained at a level (\(\leq\) 69%) similar to that of the structures created in the early generations. Therefore, to increase the probability of successfully creating a structure that matches the desired goal, it is recommended to make multiple attempts with the evolutionary algorithm and thereby compensate for any unsuccessful trials.
| Target material | Space group | Evolutionary algorithm: mean \(S_{\rm cos}\) (%) | Evolutionary algorithm: max. \(S_{\rm cos}\) (%) | Crystal morphing \(S_{\rm cos}\) (%) | Refinement \(S_{\rm cos}\) (%) |
| --- | --- | --- | --- | --- | --- |
| MgO (4:4) | \(Fm\bar{3}m\) | 99.5 \(\pm\) 0.5 | 99.9 |  | 99.9 |
| NbCu\({}_{3}\)Se\({}_{4}\) (1:3:4) | \(P\bar{4}3m\) | 99.2 \(\pm\) 0.4 | 99.6 |  | 100 |
| GaN (2:2) | \(P6_{3}mc\) | 98.6 \(\pm\) 0.8 | 99.4 |  | 100 |
| Zr\({}_{3}\)Cu\({}_{4}\)Si\({}_{2}\) (3:4:2) | \(P\bar{6}2m\) | 90.4 \(\pm\) 6.1 | 96.4 | 98.7 | 99.6 [99.6] |
| Al\({}_{2}\)O\({}_{3}\) (4:6) | \(R\bar{3}c\) | 98.5 \(\pm\) 0.8 | 99.2 |  | 100 |
| AlAgS\({}_{2}\) (1:1:2) | \(P3m1\) | 86.2 \(\pm\) 2.0 | 88.9 | 92.4 | 99.7 [98.7] |
| TiAl\({}_{3}\) (2:6) | \(I4/mmm\) | 94.7 \(\pm\) 1.2 | 95.5 |  | 99.9 |
| Zr\({}_{2}\)CuSb\({}_{3}\) (2:1:3) | \(P\bar{4}m2\) | 84.4 \(\pm\) 12.5 | 97.9 | 99.3 | 100 [99.8] |
| Mo\({}_{2}\)C (8:4) | \(Pbcn\) | 96.0 \(\pm\) 1.6 | 98.6 |  | 99.9 |
| LaTaO\({}_{4}\) (2:2:8) | \(Cmc2_{1}\) | 90.0 \(\pm\) 4.7 | 96.6 | 96.7 | 99.9 [99.8] |
| ZrO\({}_{2}\) (4:8) | \(P2_{1}/c\) | 97.7 \(\pm\) 0.6 | 98.4 |  | 100 |
| Li\({}_{5}\)BiO\({}_{5}\) (5:1:5) | \(Cm\) | 73.6 \(\pm\) 4.5 | 79.1 | 92.0 | 99.4 [83.0] |
Table 1: Highest \(S_{\rm cos}\) of the XRD patterns of the created structures with respect to the target, obtained by the crystal structure creation methods used in this study for the twelve materials. The values after \(\pm\) indicate the standard deviation of \(S_{\rm cos}\) over the five independent trials of the evolutionary algorithm. The score in square brackets [ ] indicates the highest score obtained from the input structures produced by the evolutionary algorithm without performing crystal morphing beforehand.
### Crystal morphing with Bayesian optimization for optimizing similarity of XRD pattern to target
Despite multiple trials of the evolutionary algorithm, it can fail to create a crystal structure that reproduces the target XRD pattern for some complex systems. In particular, the mean and maximum \(S_{\mathrm{cos}}\) remained at only 74% and 79%, respectively, for the Li\({}_{5}\)BiO\({}_{5}\) system. Crystal morphing with Bayesian optimization was applied to the five systems whose mean \(S_{\mathrm{cos}}\) over the five evolutionary-algorithm trials was \(\leq\) 90% after rounding off the decimal point (see Table 1). For the other systems, which have a high mean \(S_{\mathrm{cos}}\), the created structures that would serve as input structures for crystal morphing are already close to the target structure and similar to each other. Therefore, interpolation between such similar structures is not considered meaningful.
For these five systems, crystal morphing with Bayesian optimization proposed crystal structures with higher \(S_{\mathrm{cos}}\) than the input structures (see Table 1). In particular, the Li\({}_{5}\)BiO\({}_{5}\) system achieved \(S_{\mathrm{cos}}\) of 89% and 92%, significantly increased from 79%, by greedy optimization and the subsequent all-pairs investigation, respectively. This procedure is shown in Fig. 3. The improvement of \(S_{\mathrm{cos}}\) by crystal morphing shows that it is a suitable complementary method for exploring space that the evolutionary algorithm could not reach. However, the search space of this method depends on the choice of input structures. Therefore, it is important to select multiple input
Figure 2: **Different creations of crystal structures for the same target system by an evolutionary algorithm.****a** Successful and **b** failed examples of crystal structure creations for achieving high \(S_{\mathrm{cos}}\) by the evolutionary algorithm for Zr\({}_{2}\)CuSb\({}_{3}\) system. For the champion structure with \(S_{\mathrm{cos}}\) of 98%, blue and orange indicate the trajectory of the structural evolution by mutation and crossover, respectively. **c** Increase in \(S_{\mathrm{cos}}\) with the increase in generation number for the case of **a** and **b**. The maximum \(S_{\mathrm{cos}}\) for each generation was connected by lines.
structures that have sufficiently high \(S_{\rm cos}\) but differ from each other, so that a vast space can be explored.
### Refinement
As the last step, refinement was performed to further reduce the gap between the XRD pattern of the created structure and that of the target. For a successful Rietveld refinement, it is essential to prepare an initial structure whose XRD pattern differs only slightly from the target, i.e., one with a high \(S_{\rm cos}\). For the eleven systems other than Li\({}_{5}\)BiO\({}_{5}\), whose maximum \(S_{\rm cos}\) from the evolutionary algorithm was \(\geq 88\%\), Rietveld refinement raised \(S_{\rm cos}\) to \(\geq 97\%\). For Li\({}_{5}\)BiO\({}_{5}\), when the structure with \(S_{\rm cos}\geq 92\%\) obtained by crystal morphing was used as the input for Rietveld refinement, \(S_{\rm cos}\) increased to \(99\%\). Notably, Rietveld refinement without crystal morphing did not increase, or even decreased, \(S_{\rm cos}\) when poor input structures with \(S_{\rm cos}<80\%\) were provided by the evolutionary algorithm, as shown in Fig. 3. Therefore, expanding the search space and increasing \(S_{\rm cos}\) by crystal morphing is helpful for a successful post-refinement
Figure 3: **Sequential procedure of increasing \(S_{\rm cos}\) for Li\({}_{5}\)BiO\({}_{5}\) system.** Red solid and blue dashed indicators show the procedures with and without the crystal morphing with Bayesian optimization, respectively. Refinement (including procedures both BBO-Rietveld and symmetrization) could not increase \(S_{\rm cos}\) for the case where \(S_{\rm cos}\) of input structures provided by evolutionary algorithm remained only \(\sim\)80%. The refinement succeeded by improving it to \(>90\%\) by crystal morphing with Bayesian optimization, and \(S_{\rm cos}\) reached \(>99\%\). Different marks for evolutionary algorithm indicate \(S_{\rm cos}\) obtained from different trials. For crystal morphing and post Refinement, structures with \(S_{\rm cos}\) higher than those of previous methods are shown among the obtained.
process when the input structure alone cannot achieve a sufficiently high \(S_{\mathrm{cos}}\).
Symmetrization was then performed after Rietveld refinement. It further increases the \(S_{\mathrm{cos}}\) of the refined structure by determining its space group and tuning the structure toward high symmetry. Finally, for all twelve systems, our scheme achieved significantly high \(S_{\mathrm{cos}}>99\%\). We therefore conclude that crystal structures reproducing the target XRD patterns were successfully created.
## Discussion
The target and created XRD patterns and their structures are summarized in Supplementary Fig. S5. Notably, for two systems the created crystal structures differed in part from the target ones despite significantly high \(S_{\mathrm{cos}}>99\%\): Zr\({}_{3}\)Cu\({}_{4}\)Si\({}_{2}\) had an exchange of Cu and Si sites, and Li\({}_{5}\)BiO\({}_{5}\) had shifted O layers. This reflects a limitation of conventional XRD analysis in distinguishing such structures; complementary analyses that probe the local structure and nearest neighbors, such as X-ray absorption fine structure [23, 24], may be useful to determine the structure more accurately.
The performance of Evolv&Morph was compared with that obtained by Rietveld refinement alone after a database search. To prepare this vanilla strategy for comparison, we first selected structures close to the targets from the Materials Project Database (MPD) [25]. Then, BBO-Rietveld and symmetrization were performed after the lattice volumes were adjusted to give the highest \(S_{\mathrm{cos}}\). Note that the lattice parameters and internal atomic coordinates of a structure loaded from MPD differ only slightly from those of the target, because MPD records structures obtained by first-principles calculations. In other words, the vanilla strategy had the advantage of starting from input structures already very close to the target. Nevertheless, Evolv&Morph achieved \(S_{\mathrm{cos}}\) similar to or higher than the vanilla strategy. This result also indicates that Evolv&Morph can produce structures well matched to the target XRD pattern even from scratch. More details are discussed in the Supplementary section Comparison of performance with Rietveld refinement after DB-search.
In this study, an automated crystal structure creation method consisting of an evolutionary algorithm and crystal morphing supported by Bayesian optimization (Evolv&Morph) was proposed for reproducing a target XRD pattern. The method optimizes the similarity score between the XRD patterns of the created structures and the target. The evolutionary algorithm automatically creates various structures via genetic operators without requiring input structures. Crystal morphing, using the input structures obtained by the evolutionary algorithm, expands the search space and further increases the similarity of the XRD patterns, providing a better input structure for the post-refinement. After the refinement, for twelve binary and ternary systems with different crystal structures, Evolv&Morph achieves a cosine similarity of \(\sim\)99%, indicating that the created
crystal structures successfully reproduce the target XRD patterns. Therefore, the present method can identify unknown crystal structures from an XRD measurement without relying on a database. Furthermore, Evolv&Morph can play the role of inverse design [11, 12], in which the desired property is defined first and materials with that property are searched for automatically. The optimization target score, which in this study is the cosine similarity of the XRD pattern (see Fig. 1), can be exchanged for a particular functional property according to the goal of the material design; therefore, the present method has strong potential to be applicable to material design as well as crystal structure determination.
## Method
### Evolutionary algorithm
An evolutionary algorithm is an optimization method inspired by biological evolution, i.e., reproduction, mutation, recombination and selection [13]. It automatically creates various crystal structures by genetic operators such as crossover (shown in Fig. 4a, also usually called the heredity or two-parent variation operator) and mutation (shown in Fig. 4b) over multiple generations. It also acts as an optimizer, because it repeats the following procedure for multiple generations: it retains survivors with high scores, discards losers with low scores in each generation, and creates additional structures in the next generation. As shown in Fig. 4c, by setting the target property (the fitness) to \(S_{\mathrm{cos}}\), the algorithm creates crystal structures whose XRD patterns become increasingly similar to the target one. Input structures are not essential for this method, and structures can be searched in a vast space. Moreover, thermodynamic calculations such as first-principles calculations can aid the evolutionary algorithm by relaxing each created crystal structure to a local minimum of the energy surface, thereby preventing the formation of unphysical structures.
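As a rough illustration of this selection-and-variation loop (not the actual USPEX implementation), the following Python sketch evolves a population of candidate structures using \(S_{\rm cos}\) as the fitness. The callables `simulate_xrd`, `crossover`, `mutate` and `random_structure` are placeholders standing in for the operations described in this section; the operator probabilities mirror the ratios quoted below.

```python
import numpy as np

def s_cos(pattern_a: np.ndarray, pattern_b: np.ndarray) -> float:
    """Cosine similarity between two XRD intensity profiles on a common 2-theta grid."""
    return float(np.dot(pattern_a, pattern_b) /
                 (np.linalg.norm(pattern_a) * np.linalg.norm(pattern_b)))

def evolve(target_pattern, population, simulate_xrd, crossover, mutate,
           random_structure, n_generations=20):
    """Toy evolutionary loop: keep the high-S_cos half of each generation and
    refill the population with crossover/mutation/random offspring."""
    for _ in range(n_generations):
        scored = sorted(population,
                        key=lambda s: s_cos(simulate_xrd(s), target_pattern),
                        reverse=True)
        survivors = scored[: len(scored) // 2]          # keep the best half
        offspring = []
        while len(survivors) + len(offspring) < len(population):
            op = np.random.choice(["crossover", "mutation", "random"],
                                  p=[0.5, 0.3, 0.2])    # operator ratios used in this work
            if op == "crossover":
                i, j = np.random.choice(len(survivors), size=2, replace=False)
                offspring.append(crossover(survivors[i], survivors[j]))
            elif op == "mutation":
                offspring.append(mutate(survivors[np.random.randint(len(survivors))]))
            else:
                offspring.append(random_structure())
        population = survivors + offspring
    return max(population, key=lambda s: s_cos(simulate_xrd(s), target_pattern))
```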
The USPEX (Universal Structure Predictor: Evolutionary Xtallography) code [26, 27, 28, 13] was used for the evolutionary algorithm. In the first generation, crystal structures were produced using randomly selected space groups. When structures were created randomly, minimum bond lengths of 1.95 Å and 1.5 Å were imposed between atoms of the same and of different elements, respectively. The space group numbers were limited to 3–230. From the second generation onwards, new crystal structures were produced by the genetic operators: crossover (50%), random symmetry creation (20%), and mutation (30%). Each generation consisted of fifty crystal structures. The evolutionary algorithm was terminated if the best-ranked crystal structure did not change over ten generations or when the 20th generation was reached.
### First-principles calculations
First-principles calculations were performed to optimize each crystal structure after the evolutionary algorithm created it, to avoid unphysical structures. All first-principles calculations were performed using the projector augmented wave (PAW) method [29, 30] implemented in the Vienna Ab initio Simulation Package (VASP) [31, 32]. We used the exchange-correlation functional of the generalized gradient approximation (GGA) in the Perdew-Burke-Ernzerhof form modified for solids (PBEsol) [33]. Prioritizing efficiency, a low cutoff energy of 300 eV and a coarse \(k\)-space resolution of 0.12 (in units of \(2\pi/\)Å) were used. The created unit cell was optimized until the interatomic force on each atom was reduced to within 0.03 eV/Å or the number of ionic iterations reached 30. The wall time for stopping unfinished calculations was also set tightly, to ten minutes on eight Central Processing Units (CPUs), to avoid the frequent time losses caused by convergence failures that often occur in calculations on unphysical structures.
Figure 4: **Evolutionary algorithm.** Example of crystal structure creations by genetic operator **a** crossover and **b** mutation in an evolutionary algorithm. This way, the structures in (_N_+1)-th generation can be derived from those in \(N\)-th generation. **c** Example of optimization of \(S_{\text{cos}}\) according to increase of generation number. Red lines connect each generation’s highest \(S_{\text{cos}}\).
### Crystal morphing with Bayesian optimization
Crystal morphing [15] is an interpolation method that creates intermediate crystal structures between two input structures, as shown in Fig. 5a. It can create intermediate crystal structures at a target morphing distance defined by crystal structure descriptors such as SOAP [21, 22]; a crystal structure is thus modified so that it lies at a prescribed distance from a reference structure. This metric, the SOAP distance, ensures that the morphing takes into account invariances with respect to translation, rotation, and unit-cell choice. Bayesian optimization is a sequential design strategy for finding the maximum or minimum of a target property based on a posterior distribution and an acquisition function [18]. Therefore, when \(S_{\mathrm{cos}}\) is employed as the optimization target, this method searches for the optimal \(S_{\mathrm{cos}}\) along the line drawn by the crystal morphing. Although crystal morphing is an interpolation method, the search space can be expanded by multiple trials with different input structures. We used two recipes to find a structure with high \(S_{\mathrm{cos}}\), as shown in Fig. 5b.
The first recipe is "greedy optimization". An input structure list is prepared, and the structures are sorted in descending order of their \(S_{\mathrm{cos}}\) scores. The two structures with the highest \(S_{\mathrm{cos}}\), indicated by "\(A\)" and "\(B\)" in Fig. 5b, are selected for the first search. If an intermediate structure with \(S_{\mathrm{cos}}\) higher than both input structures is found, it is added to the input structure list and the two input structures are removed. If no such intermediate structure is found, the input structure with the second-highest \(S_{\mathrm{cos}}\) is excluded from the list. Then, the two structures with the highest \(S_{\mathrm{cos}}\) are reselected for the next search. This procedure is repeated, and the structure with the highest \(S_{\mathrm{cos}}\) is updated sequentially. This method maximizes the \(S_{\mathrm{cos}}\) score successfully if newly found intermediate structures gradually raise the best \(S_{\mathrm{cos}}\). If, however, the intermediate structures have lower \(S_{\mathrm{cos}}\) than the input structures, the method ends up searching only a limited space and fails to optimize \(S_{\mathrm{cos}}\). To avoid such a limited search space, the second recipe, "all pairs investigation", was used. This method searches for intermediate structures among all pairs formed from the input structure list and can therefore expand the search space.
Greedy optimization was applied first, using as input the \(S_{\mathrm{cos}}\) champions from five different trials of the evolutionary algorithm, and was then followed by all pairs investigation. Intermediate structures newly created by greedy optimization with higher \(S_{\mathrm{cos}}\) than the input structures were also added to the input structure list for the all pairs investigation, together with the structures obtained by the evolutionary algorithm. One can in principle repeat these two methods with the enlarged structure list for further input selections and an expanded search; in this study, however, only one cycle was used.
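The two recipes can be summarised by the following simplified Python sketch; `morph_and_optimize(a, b)` is a hypothetical placeholder for the crystal-morphing plus Bayesian-optimization search along the path between two structures, returning the best intermediate structure found and its \(S_{\mathrm{cos}}\).

```python
from itertools import combinations

def greedy_optimization(structures, scores, morph_and_optimize):
    """Repeatedly search between the two highest-scoring structures in the pool."""
    pool = sorted(zip(scores, structures), key=lambda x: x[0], reverse=True)
    while len(pool) >= 2:
        (s1, a), (s2, b) = pool[0], pool[1]
        candidate, score = morph_and_optimize(a, b)
        if score > s1 and score > s2:
            pool = [(score, candidate)] + pool[2:]   # replace the pair by the better intermediate
        else:
            pool = [pool[0]] + pool[2:]              # drop the second-best input and retry
        pool.sort(key=lambda x: x[0], reverse=True)
    return pool[0]                                   # (best score, best structure)

def all_pairs_investigation(structures, scores, morph_and_optimize):
    """Search the morphing path between every pair of input structures."""
    found = []
    for (sa, a), (sb, b) in combinations(list(zip(scores, structures)), 2):
        candidate, score = morph_and_optimize(a, b)
        if score > sa and score > sb:
            found.append((score, candidate))
    return found
```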
The squared SOAP distance \(d\) between two structures \(A\) and \(B\) is defined by the following equation:
\[d^{2}(\chi_{A},\chi_{B})\equiv\frac{|\mathbf{P}(\chi_{A})-\mathbf{P}(\chi_{B}) |^{2}}{|\mathbf{P}(\chi_{A})||\mathbf{P}(\chi_{B})|}, \tag{2}\]
**Fig. 5: Crystal morphing.****a** Concept of crystal morphing. A distance can be determined based on structural descriptor between two input crystal structures. Then, crystal morphing can create intermediate structures possessing the structural descriptor on the target distances. **b** A search of crystal structures possessing higher \(S_{\cos}\) by crystal morphing and Bayesian optimization via (left panel) “greedy optimization” and (right panel) “all pairs investigation”. The detailed procedure is described in the main text. A red rectangular mark indicates the input structures for crystal morphing. A circle mark indicates an intermediate structure found by crystal morphing and Bayesian optimization. The color indicates the \(S_{\cos}\) of the structure. In this caption, an example case is explained. (left panel) greedy optimization: \(A\), \(B\), \(C\), and \(D\) are the input structures. Their \(S_{\cos}\) are assumed to be \(A>B>C>D\). The first search by crystal morphing and Bayesian optimization is performed between \(A\) and \(B\). Intermediate structure _AB_ is newly found with higher \(S_{\cos}\) than those of \(A\) and \(B\). Then, a second search is performed between _AB_ and \(C\). The intermediate structure with higher \(S_{\cos}\) than those of _AB_ and \(C\) is not found. Then, a third search is performed between _AB_ and \(D\). The intermediate structure _ABD_ with higher \(S_{\cos}\) than those of _AB_ and \(C\) is newly found. (right panel) All pairs investigation: \(A\), \(B\), \(C\), and \(D\) are the input structures. The search by crystal morphing and Bayesian optimization is performed among the combinations of each pair. The intermediate structures, _AB_, _AC_, _AD_, and _BD_ with higher \(S_{\cos}\) than those of each pair of two input structures are newly found.
where the vector \(\mathbf{P}\) is the SOAP power spectrum and \(\chi\) is the reciprocal-space representation of the structure \(A\) or \(B\). For crystal morphing to reach the intended SOAP distance \(d_{\mathrm{int}}\) calculated between two input structures \(A\) and \(B\), the optimization of lattices and internal coordinates of atoms was performed by the steepest descent method. The maximum iteration number was set to fifteen. The type of elements was distinguished by taking a different sign for the elements of the real-space density distribution for the two-element case. We decomposed the system into three pairwise systems for the three-element case and defined a distance as a sum of each pairwise system. Other default parameters relevant to crystal morphing were the same as those used in Oba _et al._[15]
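Given the power-spectrum vectors \(\mathbf{P}(\chi_{A})\) and \(\mathbf{P}(\chi_{B})\) (computed by a SOAP implementation, which is not reproduced here), Eq. (2) reduces to the following elementary NumPy computation, shown only to make the normalisation explicit.

```python
import numpy as np

def squared_soap_distance(p_a, p_b) -> float:
    """Squared SOAP distance of Eq. (2): |P_A - P_B|^2 normalised by |P_A| |P_B|."""
    p_a = np.asarray(p_a, dtype=float)
    p_b = np.asarray(p_b, dtype=float)
    return float(np.sum((p_a - p_b) ** 2) /
                 (np.linalg.norm(p_a) * np.linalg.norm(p_b)))
```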
The GPyOpt (Gaussian processes optimization framework in Python) code [34] was used for Bayesian optimization. The interpolated points at 0, 25, 50, 75 and 100% of the SOAP distance between the two input structures, plus two randomly selected points, were evaluated to obtain an initial posterior distribution. Then, four additional iterations were performed, each evaluating four points in parallel.
### Refinement
The created structure can be further tuned by refinement methods such as Rietveld refinement [18] and symmetrization. Rietveld refinement adjusts the structure to reduce the gap between the simulated and target XRD patterns and can therefore raise the \(S_{\mathrm{cos}}\) score. Symmetrization refines a crystal structure so that it conforms to the symmetry found when determining its space group. Ideally, a crystal structure has a unique space group; in practice, however, the determined space group can change slightly with the tolerance parameter used to satisfy the crystallographic constraints. Technically, therefore, multiple refined crystal structures can be obtained by varying the tolerance parameter.
The BBO-Rietveld code [10] was used for Rietveld refinement. Because Rietveld refinement involves many parameters, the result depends strongly on how they are set. BBO-Rietveld tests various combinations of parameters and suggests the best refined crystal structure, defined as the one with the lowest weighted profile residual factor. The number of optimization iterations was set to 200. All intermediate structures obtained by crystal morphing with \(S_{\mathrm{cos}}\) scores higher than those of the input structures were passed to BBO-Rietveld.
Symmetrization was performed using SPGLIB [35] within the PHONOPY code [36]. The tolerance factors considered are summarized in Supplementary Table S1. If multiple space groups were found for different tolerance factors, the space group giving the higher \(S_{\mathrm{cos}}\) score was adopted for the refined crystal structure; however, if the difference in \(S_{\mathrm{cos}}\) score was less than 1%, the space group with the higher symmetry was selected.
### XRD simulations
The XRD patterns scanned in \(\theta\)–2\(\theta\) mode were simulated using the GSAS-II code [37] with a Cu K-\(\alpha\) source (wavelength of 1.54 Å) over a 2\(\theta\) range of 0–180° with a step width of 0.01°.
J.L. would like to thank Enago ([https://www.enago.com](https://www.enago.com)) for editing and reviewing this manuscript for English language.
J. L. mainly performed simulations and prepared the manuscript. J. O. and S. K. developed crystal morphing code. N. O. and S. K. designed the project. All authors discussed the results and wrote the manuscript.
The authors declare no competing interests.
**Correspondence** and requests for materials should be addressed to J. L. |
2310.18229 | Revising with a Backward Glance: Regressions and Skips during Reading as
Cognitive Signals for Revision Policies in Incremental Processing | In NLP, incremental processors produce output in instalments, based on
incoming prefixes of the linguistic input. Some tokens trigger revisions,
causing edits to the output hypothesis, but little is known about why models
revise when they revise. A policy that detects the time steps where revisions
should happen can improve efficiency. Still, retrieving a suitable signal to
train a revision policy is an open problem, since it is not naturally available
in datasets. In this work, we investigate the appropriateness of regressions
and skips in human reading eye-tracking data as signals to inform revision
policies in incremental sequence labelling. Using generalised mixed-effects
models, we find that the probability of regressions and skips by humans can
potentially serve as useful predictors for revisions in BiLSTMs and Transformer
models, with consistent results for various languages. | Brielen Madureira, Pelin Çelikkol, David Schlangen | 2023-10-27T16:08:15Z | http://arxiv.org/abs/2310.18229v1 | # Revising with a Backward Glance: Regressions and Skips during Reading
###### Abstract
In NLP, incremental processors produce output in instalments, based on incoming prefixes of the linguistic input. Some tokens trigger revisions, causing edits to the output hypothesis, but little is known about why models revise when they revise. A policy that detects the time steps where revisions should happen can improve efficiency. Still, retrieving a suitable signal to train a revision policy is an open problem, since it is not naturally available in datasets. In this work, we investigate the appropriateness of regressions and skips in human reading eye-tracking data as signals to inform revision policies in incremental sequence labelling. Using generalised mixed-effects models, we find that the probability of regressions and skips by humans can potentially serve as useful predictors for revisions in BiLSTMs and Transformer models, with consistent results for various languages.
## 1 Introduction
"_Supreme court plans an attack on independent judiciary, says Labour._" This was the headline of a news article,1 which sounds incongruuous until one interprets it the way intended. That is a _crash blossom_,2 a sentence that becomes ambiguous _e.g._ due to brevity. The correspondent later _revised_ the headline to remove the ambiguity. You probably had to go back and read that sentence again. Such movement is called _regression_ in the eye-tracking literature, when the eye makes a regressive, as opposed to progressive, saccade while reading a text.
Footnote 1: Source: The Guardian, Nov 15, 2020. Retrieved from the Language Log blog.
Footnote 2: [https://en.wiktionary.org/wiki/crash_blossom](https://en.wiktionary.org/wiki/crash_blossom)
In incremental NLP models, partial output hypotheses are built at each time step, based on incoming input prefixes, which renders revisability a desirable property to correct mistakes (Schlangen and Skantze, 2011). This mode takes place in interactive settings that require real-time processing, for instance disfluency detection or reference resolution in dialogue (Hough and Schlangen, 2015; Kennington and Schlangen, 2017) and simultaneous translation (Cho and Esipova, 2016; Arivazhagan et al., 2020; Sen et al., 2023).
Figure 1 depicts a constructed example for sequence labelling. For each new token, the model either just extends the current output prefix with a new label, or also edits the output by changing previous labels (here at time steps 3 and 5). Modelling a policy that predicts when revisions should occur is an open research problem, because this signal is not naturally available in the training data (Kohn, 2018; Kahardipraja et al., 2023). Moreover, we currently lack evaluation methods to understand whether the revisions performed by a model are linguistically or cognitively motivated (_i.e._ being grounded in the linguistic input or resembling cognitive processes) or an idiosyncratic result of its internal processing patterns.
In eye-tracking experiments, many measures can be extracted per token while humans read texts (Rayner, 1998). Common data formats include variables representing whether each token, in first-pass reading, was skipped, fixated,
Figure 1: A constructed example of incremental sequence labelling where revisions occur at time steps 3 and 5. If tokens where humans initiate regressions in reading align with tokens that trigger revisions, it can be a cognitive signal to model a revision policy.
or triggered a regressive eye movement. In Figure 1, the constructed scanpath shows regressions at tokens _of_ and _by_ and skips at _one_ and _us_. Various theories exist to account for why humans regress (see §3), but the fact that underlying cognitive processes cause the eyes to move forward or backward at each word (or skip it) lends itself as a cognitively motivated token-level signal.
In this paper, we bridge the concepts of _revisions_ in incremental sequence labelling and _regressions_ in human eye-tracking reading data. We investigate whether regressions and skips can aid the prediction of revisions in incremental processors, and conclude that eye-tracking measures are a potential cognitively-motivated learning signal to model revision policies.
## 2 Motivation
Currently on-trend models like Bi-LSTMs (Schuster and Paliwal, 1997) and Transformers (Vaswani et al., 2017) operate in a non-incremental fashion, relying on the availability of complete input sentences or texts to deliver output. One workaround to employ non-incremental encoders in real time is applying a restart-incremental interface (Schlangen and Skantze, 2011), enabling outputs to be revised as a by-product of recomputations, as explored by Madureira and Schlangen (2020) and Kahardipraja et al. (2021). Although possible, it forces recomputation from scratch at every new piece of input, which increases the computational load and can become infeasible for long sequences (Kahardipraja et al., 2021). On the other hand, inherently incremental models like RNNs have the disadvantage of not being able to recover from mistakes via revisions (at least their prototypical versions).
The sweet spot would be a model that can detect the need to revise. Initiatives in this direction are Hear (Kaushal et al., 2023), which has a module that predicts the need to _restart_, and Tapir (Kahardipraja et al., 2023), which integrates an RNN with a Transformer-revisor, predicting whether to _recompute_ or to just extend the current output. A difficulty encountered in the latter is how to obtain a ground-truth signal for the revision policy. They derived silver labels from the outputs of another Transformer, which is possibly too model-specific and its linguistic motivation is not explored. Hear compares partial outputs to the non-incremental gold standard which, however, does not encode locally valid hypotheses (which only future input will rule out) and does not accommodate the fact that the gold standard may differ from its final output, thus penalising the incremental metrics with the model's non-incremental deficits (Baumann et al., 2011; Madureira et al., 2023).
We usually do not have corpora containing annotation for the incremental hypotheses for input prefixes by humans, only the annotated gold labels for the final output. But there is vast literature using human reading data as a supervision signal in NLP tasks (Barrett and Hollenstein, 2020; Mathias et al., 2021). Inspired by that, we ask ourselves whether a model's revisions coincide with human regressions in eye-tracking reading data. A positive answer would mean that human reading data could help modelling a dedicated policy for revisions (as opposed to naive recomputations or restarts), and would serve as a cognitively motivated yardstick to judge a models' revisions.
Among all revisions, some are _effective_, _i.e._ they edit the prefix into a better state, with respect to a gold standard or to the final output (Madureira et al., 2023). Identifying them can contribute to reducing undesired revisions, which cause instability without bringing the advantage of improvement in output quality. Therefore, if human reading behaviour can help perform only effective revisions, the signal is even more useful for incremental processing.
## 3 Related Literature
During reading, humans fixate the gaze on some words and make saccades that can be progressive or regressive with respect to the order of the words in the text, so that scanpaths and various measures regarding gaze position, direction and duration can be extracted with eye-tracking devices (Rayner et al., 2012), a technique that is becoming more accessible at scale (Ribeiro et al., 2023).
Research based on eye-tracking reading data often relies on the eye-mind hypothesis, which assumes that the eye remains fixated on a word as long as it is being processed (Just and Carpenter, 1980). Various research fields rely on the temporal and spatial dimensions of human reading data. We identify at least three (non-mutually exclusive) uses. A consolidated line of research involves studying human cognition and verifying linguistic theories of sentence processing (_e.g._ Demberg and Keller (2008) and Shain et al. (2016)). Another field is occupied with understanding to what extent
computational models like artificial neural networks resemble human cognition in how they process language, for example by estimating their psychometric predictive power (Wilcox et al., 2020; Hollenstein et al., 2021). A relationship commonly investigated is the surprisal of language models _versus_ human reading time (Fernandez Monsalve et al., 2012; Goodkind and Bicknell, 2018; Wilcox et al., 2020). NLP has been incorporating eye-tracking data in recent years (Iida et al., 2013; Tokunaga et al., 2017), with the emerging use of human reading data both as input to enhance NLP models (see Barrett and Hollenstein (2020) and Mathias et al. (2021) for recent surveys) and as a means for their interpretability (Ikhwantri et al., 2023).
In this work, the phenomenon of interest is _regressions_, _i.e._ eye movements that move backwards in the text and can be shorter or longer-range (Rayner et al., 2012). They are a common topic in psycholinguistics research (Paape et al., 2022, 2021) and various hypotheses account for their role, such as comprehension or word identification difficulties, low-level visuomotor processes, rereading, memory cues and tools for language processing (see Vitu (2005), Lopopolo et al. (2019) and Booth and Weger (2013) for comprehensive discussions and references). Relevant measures are at which word a regression initiates, at which word it lands, regression path duration (how long the reader remains in past text before progressing to unseen text), and how many regressions are initiated for each word. We can also differentiate between first-pass and subsequent regressions.
**Regressions in NLP** Reading data has been used as a source of psycholinguistic information for various NLP tasks. When it comes to regressions, Barrett and Søgaard (2015) used eye movements to predict syntactic categories, an idea further explored in Barrett et al. (2016), who augmented PoS taggers with various gaze features, among which was the number of regressions originating from a word. Barrett and Søgaard (2015) used the number of regressions from and to a word as features to predict grammatical functions. The total number of regressions per word was also used as a feature by Mishra et al. (2016) for sarcasm understandability prediction. Regression duration, _i.e._ the total time spent on a word after the first pass over it, was a useful feature for the sentence compression approach proposed by Klerke et al. (2016). Regressions during coreference resolution annotation were investigated by Cheri et al. (2016), who used them to propose a heuristic for pruning candidates in a coreference resolution model. In Hollenstein and Zhang (2019), the total duration of regressions from a word was used as a context feature in named-entity recognition.
We draw inspiration from the work by Lopopolo et al. (2019), who hypothesised that backward saccades are involved in online syntactic analysis, in which case regressions should coincide, at least partially, with the edges of the relations computed by a dependency parser. They found a significant effect of the number of left-hand side dependency relations on the number of backward saccades. While the authors were interested at predicting human regressions from a model instantiating a parsing theory, we are conversely interested in using human regressions as a signal to train an NLP model.3
Footnote 3: It is also worth investigating whether a model’s revisions can predict human regression behaviour, but it is beyond the scope of this work.
## 4 Method
To perform the analysis, we use binomial generalised linear mixed models (GLMM) with a logit link function to predict model revisions. Similar to the approach by Lopopolo et al. (2019), for each combination of dataset and NLP model/task, we fit two GLMMs: The baseline model (1) only includes the token position variable as a fixed effect and texts as random effects. Since a model's revisions may vary depending on the word's position in the text, we add token position as a baseline predictor and include texts to account for any variability due to different types of texts. We fit model (2) with the same structure, adding the predictors of regression probability and skipping probability as fixed effects. The binary dependent variable is a token's revise/not-revise label.
\[\begin{split} model\ revision&\sim token\ position\\ &+(1|text)\end{split} \tag{1}\]
\[\begin{split} model\ revision&\sim token\ position\\ &+p(regression)\\ &+p(skip)\\ &+(1|text)\end{split} \tag{2}\]
We use likelihood ratio tests (LRT) between the null and the full models to evaluate the goodness of fit. LRTs are used to compare a baseline model to
a more complex one with more predictors and decide if certain predictors should be included, consequently selecting the model that fits the data better. To infer statistical significance, we obtain \(p\)-values using the \(\chi^{2}\) distribution.
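For concreteness, the LRT statistic and its \(\chi^{2}\) p-value can be computed as below once the two nested models have been fitted; this is a generic sketch (mirroring what, e.g., an `anova()` comparison of two fitted models would report in R) rather than the exact code used in this work.

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_null: float, loglik_full: float, df_extra: int):
    """Compare a baseline (null) model against a nested full model.

    loglik_null / loglik_full: maximised log-likelihoods of the two models.
    df_extra: number of additional parameters in the full model
              (here 2: p(regression) and p(skip)).
    Returns the LRT statistic and its chi-squared p-value.
    """
    stat = 2.0 * (loglik_full - loglik_null)   # deviance difference
    p_value = chi2.sf(stat, df_extra)          # upper-tail chi-squared probability
    return stat, p_value
```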
We do not intend to make claims about _why_ regressions occur. For our purposes, we take at face value that they _did_ occur in the eye-tracking experiments (and when). We are interested in words at which regressions are initiated when they are first read, knowing that, for some reason, the reader went back to past input before continuing (as a consequence, we also analyse words that are not fixated in the first pass). Still, the hypothesis that regressions occur due to reanalysis, when humans encounter garden path sentences (Altmann et al., 1992), is in our favour, since revisions represent updates in the current model's interpretation caused by input seen for the first time.
## 5 Data
In this section, we explain the data structure constructed for the analysis. We then introduce the eye-tracking corpora and the models selected for this study, and discuss how we extract the incremental outputs from non-incremental, pre-trained sequence labelling models.4
Footnote 4: The pre-processing scripts and implementation code is available at [https://github.com/briemadu/revreg](https://github.com/briemadu/revreg).
**Procedure** Our method requires knowing, for each token \(w\) in a text, what was the behaviour of the model while performing sequence labelling and of the humans while reading the text. More specifically, we need to know whether the model revised its hypothesis upon processing \(w\) and whether humans skipped \(w\), fixated it but moved forward, or fixated it and regressed. We thus construct an annotation mapping tokens to human and model data as illustrated in Figure 2. The texts come from the eye-tracking corpora, from which we also extract the human skips or regressions. The revisions are retrieved by feeding the same texts to the NLP models, prefix by prefix in a restart-incremental fashion, and checking if labels change at each time step.
**Human Regressions** We analyse six eye-tracking human reading corpora: MECO-L1 Siegelman et al. (2022), MECO-L2 Kuperman et al. (2023), Nicenboim (no official name) Nicenboim et al. (2015), PoTeC Makowski et al. (2019); Jäger et al. (2020), Provo Luke and Christianson (2018) and RastrOS Vieira (2020); Leal et al. (2022). Table 1 presents their language and size. The distribution of regressions and skips (per token and per subject) is shown in Figure 3. Although many other corpora exist, we opted to use those that had first-pass regression and first-pass skip measures already available or easy to infer from other measures. For each interest area,5 we retrieve the label for each subject as follows: If the token was skipped in the first-pass reading, we label it as skipped. Otherwise, we retrieve a variable which is 1 if a first-pass
\begin{table}
\begin{tabular}{l l c c c} \hline \hline & language & tokens & texts & subjects \\ \hline MECO-L1 & Dutch & 2,231 & 12 & 45 \\ MECO-L2 & English (L2) & 1,658 & 12 & 538 \\ Nicenboim & Spanish & 791 & 48 & 71 \\ PoTeC & German & 1,895 & 12 & 62 \\ Provo & English & 2,743 & 55 & 84 \\ RastrOS & Br. Portuguese & 2,494 & 50 & 37 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Human reading eye-tracking corpora.
Figure 2: An example of our data structure for a portion of a text in the Provo corpus, processed by a restart-incremental Transformer predicting dependency relations. Each token is annotated with the reading variable for each subject (eyes: regressed, 0: not regressed, -: skipped) and the model’s decision (r: revised, /: not revised).
regression was initiated at that interest area, and 0 otherwise. Although regressions can occur later, we only consider what happens in the first-pass reading to approximate what the model does (revisions happen when a token is integrated for the first time in the sequence). The probabilities are estimated by computing the proportion of regressions and skips per token (excluding subjects with missing data), following existing literature in terms of using average human behaviour as a feature Barrett et al. (2016); Hollenstein and Zhang (2019). We checked that they are only moderately (negatively) correlated (\(-0.59<\rho<-0.44\)). See Appendix for details about the measures and pre-processing.
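A minimal sketch of this aggregation is given below, assuming a long-format table with one row per (text, token, subject) and binary first-pass columns. The column names are illustrative, not the corpora's actual field names, and skipped tokens are here counted as non-regressed, which is one possible convention.

```python
import pandas as pd

def token_level_probabilities(df: pd.DataFrame) -> pd.DataFrame:
    """df: one row per (text, token, subject) with binary columns 'skipped' and
    'regressed' (first-pass measures); subjects with missing data are assumed
    to have been dropped beforehand. Returns one row per token with the
    estimated skip and regression probabilities."""
    probs = df.groupby(["text_id", "token_id"])[["skipped", "regressed"]].mean()
    return probs.rename(columns={"skipped": "p_skip", "regressed": "p_regression"})
```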
**Models' Revisions** We opt to evaluate pre-trained sequence labelling models with a restart-incremental paradigm. Models were selected according to the availability of languages to match the eye-tracking corpora. We evaluate Stanza's BiLSTM models Qi et al. (2020)6 and Explosion's pre-trained multi-task Transformer architectures.7 These families of models were selected due to the availability of all languages and comparability in terms of similar training data, as both were trained on the Universal Dependencies corpora de Marneffe et al. (2021). The model checkpoints for each language are listed in Table 3. We extract the incremental outputs for dependency parsing (prediction of the head position and the relation) and POS-tagging. We also inspected NER, but revisions were extremely sparse in these datasets (possibly due to the genres of the texts), so we did not analyse it further. The same texts from the eye-tracking data are fed to each model, one prefix after another, as illustrated in Figure 1, following previous works Madureira and Schlangen (2020); Kahardipraja et al. (2021). At each time step, we extend the input with one interest area (_i.e._, sometimes it means more than one token). If the output prefix at time \(t\) (apart from the recently added label(s), which refer to the last interest area) differs from the output at time \(t-1\), a revision occurred. If more labels match the final output than in the previous prefix, the revision is effective. The percentage of (effective) revisions over tokens/timesteps is shown in Table 2.
Footnote 6: [https://github.com/stanfordnlp/stanza](https://github.com/stanfordnlp/stanza).
Footnote 7: Releases documented in [https://explosion.ai/blog/ud-benchmarks-v3-2](https://explosion.ai/blog/ud-benchmarks-v3-2) and available at their model hub on Hugging Face [https://huggingface.co/explosion](https://huggingface.co/explosion).
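The revision-detection logic described above can be sketched as follows; `label_prefix(model, tokens)` is a hypothetical wrapper that runs a tagger on a prefix and returns one label per token (it is not a Stanza or spaCy API call), and the sketch extends the input token by token rather than by interest area for simplicity.

```python
def detect_revisions(model, tokens, label_prefix):
    """Feed the text prefix by prefix; at step t, compare the labels assigned to
    the already-seen prefix with the previous step's labels."""
    final_labels = label_prefix(model, tokens)        # output for the full sequence
    prev = []                                         # labels from the previous step
    revised, effective = [], []
    for t in range(1, len(tokens) + 1):
        current = label_prefix(model, tokens[:t])
        changed = current[: t - 1] != prev            # any old label edited -> revision
        revised.append(changed)
        if changed:
            # Effective if more prefix labels now agree with the final output.
            agree_now = sum(a == b for a, b in zip(current[: t - 1], final_labels))
            agree_before = sum(a == b for a, b in zip(prev, final_labels))
            effective.append(agree_now > agree_before)
        else:
            effective.append(False)
        prev = current
    return revised, effective
```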
\begin{table}
\begin{tabular}{l l l l l l l l l l l l l l} \hline \hline & & & **MECO (du)** & **MECO (enl2)** & **Nicenboim (es)** & **PoTc (de)** & **Provo (en)** & **RastrOS (ptbr)** \\ \cline{3-13} & & all-r & eff-r & all-r & eff-r & all-r & eff-r & all-r & eff-r & all-r & eff-r & all-r & eff-r \\ \hline \multirow{2}{*}{**BiLSTM**} & deprel & 58.45 & 47.20 & 60.74 & 54.52 & 55.75 & 50.32 & 53.56 & 44.27 & 60.99 & 53.70 & 54.01 & 46.75 \\ & head & 65.76 & 38.32 & 66.95 & 38.60 & 61.31 & 43.36 & 67.28 & 40.37 & 67.92 & 39.30 & 60.34 & 43.70 \\ & pos & 12.95 & 11.52 & 11.70 & 10.68 & 6.32 & 5.44 & 17.89 & 15.51 & 12.65 & 11.27 & 29.19 & 27.11 \\ \hline \multirow{2}{*}{**Transformer**} & deprel & 63.92 & 52.44 & 67.97 & 57.66 & 48.93 & 44.37 & 73.67 & 56.36 & 66.68 & 58.77 & 52.81 & 44.23 \\ & head & 67.55 & 38.01 & 69.06 & 37.21 & 57.27 & 41.47 & 74.56 & 43.38 & 69.30 & 38.46 & 61.39 & 42.98 \\ \multirow{2}{*}{} & pos & 9.82 & 6.28 & 7.84 & 6.09 & 1.90 & 1.64 & 5.01 & 4.12 & 8.09 & 6.56 & 9.22 & 7.62 \\ \hline \hline \end{tabular}
\end{table}
Table 2: % of timesteps that trigger revisions (all-r) and effective revisions (eff-r) for each model and task.
Figure 3: Distributions of the probabilities of regression and skips, by token (left) and by subject (right) estimated from the human reading data for each dataset.
## 6 Results
We summarise the full GLMM results in Table 4 for Provo and MECO-L2 datasets. Due to a large number of experiments, we only present results for the English models in this table; the complete results are in the Appendix. In every (dataset, NLP model, task) combination, the likelihood ratio test between the baseline and full models revealed that the full model, including the two predictors of interest, is a better fit to the data than the baseline model with only token position and text.
The token position was a significant predictor of revisions in most models. For the few cases in which it did not significantly affect revisions (_i.e._, MECO-L2-Transformer-head and BiLSTM-head, MECO-L1-BiLSTM-head, Provo-BiLSTM-pos), we fitted models without this predictor instead.
We found that average human gaze patterns, namely the estimated word's regression and skip probability, were significant predictors of revisions. This was a consistent result across all eye-tracking corpora, for the BiLSTM and the Transformer, both for dependency parsing and POS-tagging. On the one hand, human regressions were often positively related to revisions, so that words with a higher regression probability were more likely to be revised by models (MECO-L2-Transformer-pos was the only exception where regression probability negatively affected revisions). Conversely, a word's skip probability decreased the probability of it triggering a revision in most cases (with the exceptions of Potec and Provo-Transformer-pos and Nicenboim-BiLSTM-pos). These relationships are illustrated in Figure 4. The magnitude of the regression coefficient did not follow a general pattern for the tasks, but the skip coefficient was more often larger for the task of predicting the head than for the dependency relation, which was usually larger than for POS-tagging (exceptions to this is RastrOS-Transformer and MECO-L1-BiLSTM).
In a further analysis, we repeated the same procedure to predict only the effective revisions and observed the same trend in regression and skip coefficients when predicting effective revisions, in terms of direction and significance, in all experiments. However, the magnitude of the coefficients differed, sometimes being larger in one or the other, which does not allow us to draw general conclusions at this point. The coefficient of token position
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline & & & \multicolumn{2}{c}{**estimate**} & \multicolumn{2}{c}{**SE**} & \multicolumn{2}{c}{**z**} & \multicolumn{2}{c}{**p**} \\ \cline{4-11} & & & MECO-L2 & Provo & MECO-L2 & Provo & MECO-L2 & Provo & MECO-L2 & Provo \\ \hline
**BiLSTM** & deprel & intercept & 1.29*** & 1.22*** & 0.05 & 0.05 & 24.18 & 24.29 & \(<\)0.001 & \(<\)0.001 \\ & & (p(reg)) & 3.41*** & 3.30*** & 0.05 & 0.09 & 73.39 & 38.56 & \(<\)0.001 & \(<\)0.001 \\ & & p(skip) & -2.80*** & -3.68*** & 0.02 & 0.03 & -178.47 & -133.52 & \(<\)0.001 & \(<\)0.001 \\ & & position & -0.03*** & 0.21*** & 0.00 & 0.01 & -8.94 & 38.87 & \(<\)0.001 & \(<\)0.001 \\ \cline{2-11} & head & intercept & 1.59*** & 1.76*** & 0.06 & 0.05 & 27.44 & 33.12 & \(<\)0.001 & \(<\)0.001 \\ & & p(reg) & 4.32*** & 2.18*** & 0.05 & 0.10 & 81.05 & 21.84 & \(<\)0.001 & \(<\)0.001 \\ & & p(skip) & -3.23*** & -4.92*** & 0.02 & 0.03 & -193.35 & -155.18 & \(<\)0.001 & \(<\)0.001 \\ & & position & - & 0.01 & - & 68.85 & - & \(<\)0.001 \\ \cline{2-11} & pos & intercept & -2.62*** & -1.92*** & 0.07 & 0.08 & -36.21 & -22.77 & \(<\)0.001 & \(<\)0.001 \\ & & p(reg) & 1.25*** & 1.42*** & 0.05 & 0.08 & 27.53 & 18.61 & \(<\)0.001 & \(<\)0.001 \\ & & p(skip) & -1.16*** & -0.66*** & 0.02 & 0.04 & -52.26 & -18.63 & \(<\)0.001 & \(<\)0.001 \\ & & position & 0.20*** & - & 0.00 & - & 42.18 & - & \(<\)0.001 & - \\ \hline
**Transformer** & deprel & intercept & 1.22*** & 1.28*** & 0.09 & 0.05 & 14.28 & 24.39 & \(<\)0.001 & \(<\)0.001 \\ & & p(reg) & 4.39*** & 3.26*** & 0.05 & 0.09 & 82.91 & 34.39 & \(<\)0.001 & \(<\)0.001 \\ & & p(skip) & -2.53*** & 3.75*** & 0.02 & 0.03 & -154.71 & -129.34 & \(<\)0.001 & \(<\)0.001 \\ & & position & 0.03*** & 0.30*** & 0.00 & 0.00 & 11.37 & 54.95 & \(<\)0.001 & \(<\)0.001 \\ \cline{2-11} & head & intercept & 1.45*** & 1.45*** & 0.08 & 0.05 & 18.13 & 29.17 & \(<\)0.001 & \(<\)0.001 \\ & & p(reg) & 4.40*** & 2.27*** & 0.05 & 0.10 & 82.24 & 23.76 & \(<\)0.001 & \(<\)0.001 \\ & & p(skip) & -2.64*** & -4.01*** & 0.02 & 0.03 & -160.14 & -133.24 & \(<\)0.001 & \(<\)0.001 \\ & & position & - & 0.37*** & - & 0.01 & - & 64.92 & - & - \\ \hline pos & intercept & -2.64*** & -2.69*** & 0.17 & 0.14 & -15.28 & -19.71 & \(<\)0.001 & \(<\)0.001 \\ & & p(reg) & -0.62*** & 3.00*** & 0.06 & 0.10 & -9.49 & 31.11 & \(<\)0.001 & \(<\)0.001 \\ & & p(skip) & -0.77*** & 0.80*** & 0.03 & 0.04 & -29.33 & 18.07 & \(<\)0.001 & \(<\)0.001 \\ & & position & 0.08*** & -0.25*** & 0.01 & 0.01 & 15.56 & -30.18 & \(<\)0.001 & \(<\)0.001 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Overview of the GLMM results, showing the estimated coefficients for each variable and their statistical significance, for the English corpora. See Appendix for the complete table.
was, in most cases, smaller in the model that predicts effective revisions. Similarly, in many models the magnitude of the coefficient of skips was larger for models predicting effective revisions.
To assess the fit of the models to the data in more detail, we evaluated their predictions by running permutation tests with the null hypothesis that the probabilities assigned to (effective) revisions and to non-revisions are randomly sampled from the same distribution. In addition, we computed the area under the ROC curve for each model. As we can see in Table 5, most of the differences were significant (except for many cases in POS-tagging), but their magnitude was relatively small. The AUC was around 0.7 for all datasets, and in some experiments the models of effective revisions had a higher AUC. Examples with considerable improvements are RastrOS-head and Nicenboim-head.
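The two evaluations reported in Table 5 correspond to standard procedures, sketched below under the assumption that the fitted probabilities and binary revision labels are available as arrays; the permutation test shown (shuffling the revision labels) is one common formulation and may differ in detail from the exact test used.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_predictions(y_true, y_prob, n_permutations=10_000, seed=0):
    """y_true: 1 if the time step triggered a (effective) revision, else 0.
    y_prob: revision probability predicted by the fitted model.
    Returns the AUC and a permutation p-value for the absolute difference in
    mean predicted probability between revision and non-revision steps."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    rng = np.random.default_rng(seed)
    auc = roc_auc_score(y_true, y_prob)
    observed = abs(y_prob[y_true == 1].mean() - y_prob[y_true == 0].mean())
    exceed = 0
    for _ in range(n_permutations):
        perm = rng.permutation(y_true)                # shuffle the labels
        exceed += abs(y_prob[perm == 1].mean() - y_prob[perm == 0].mean()) >= observed
    return auc, (exceed + 1) / (n_permutations + 1)
```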
## 7 Do models revise when humans regress?
We have gathered evidence that there is a relationship between NLP restart-incremental models' revisions and human gaze behaviour in reading, which manifests as the probability of revision at a given token being partially predictable from it being often skipped or triggering regressions, when token position and text are accounted for. Interestingly, the overall findings hold for BiLSTM and Transformers, even though their encoding mechanisms are different, and also for all five languages, despite the eye-tracking data having been collected from different text genres and the readers having performed different tasks (or no additional task beyond reading for comprehension, as in Provo).
For this conclusion, we did not rely on any assumptions for the connection between human regressions and incremental models' revisions beyond the analogy of what we factually know: When seeing text areas for the first time, humans made decisions to skip or fixate, and possibly to revisit past text, and at some words, models "decided" to revisit past decisions.
Some exceptions to the general trend in predicting model revisions occurred in POS-tagging, for which relatively fewer revisions occur (see Table 2). The sparsity of revisions may cause the signal to be harder to model well without more data. For dependency parsing, more revisions are expected, especially because in the beginning of the sentence
Figure 4: The full GLMM predictions of the revision probability are shown. Each plot presents the predictions for BiLSTM and Transformer models given regression and skip probability in the corresponding dataset. Error bars represent 95% confidence interval.
the model has to wait until the root is processed to make good predictions. There may also be a difference in processing, since the humans could regress to previous sentences in the text, whereas the NLP models depend on their internal tokenisation and sentence boundary detection.
This suggests that eye-tracking measures can be transformed into a useful signal to inform the decision of when to revise in mixed restart-incremental processors, especially when the model's task entails more syntactic tasks with frequent revisions to the input.
Still, preliminary investigation of the revision probabilities predicted by the model did not yield a straightforward threshold for binary classification, despite the difference in means being statistically significant. This invites a more detailed extrinsic evaluation, by incorporating the human predictors into a revision controller like TAPIR (Kahardipraja et al., 2023), and assessing the revisions with the evaluation methods discussed by Madureira et al. (2023). One approach is to train an incremental sequence labelling model whose revision policy relies on eye-tracking data as part of the input and comparing its performance against a model without it. Since skips had a negative effect, it may also be possible to use other variables that relate to the probability of a token being skipped, like POS-tags or word frequency and length, as additional input, which are cheaper to obtain. The analysis should also be done with larger datasets and other models and tasks.
The usefulness of our findings presupposes the availability of eye-tracking measures during inference on truly unseen data, which is an open problem because such signal is not always available in real time. One possibility is to use pretrained eye-tracking models to predict regressions and skips, as in approaches discussed in the literature (Engbert et al., 2005; Deng et al., 2023).
Down the road, a revision policy should not only detect times to revise, but times to revise _effectively_, since wrong revisions make the partial outputs less reliable for downstream processors. Our experiments showed that regressions and skips are also good predictors for effective revisions. Identifying ways to filter this more specific signal demands further investigation. An immediate next step is to evaluate the predictions of each model in unseen data for all revisions and for effective revisions.
## 8 Conclusion
Let us conclude with a _backward glance_ to our contribution. We have addressed the open question of whether pre-trained sequence labelling models, when employed incrementally, perform revisions in a similar fashion as humans skip words or make regressive eye movements while reading. We have
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & & & \multicolumn{2}{c}{**abs. mean diff**} & \multicolumn{2}{c}{**AUC**} \\ \cline{3-6} & & & all-r & eff-r & all-r & eff-r \\ \hline
**MECO-L1** & deprel & BiLSTM & 0.13* & 0.16* & 0.71 & 0.74 \\ & & Trfmer & 0.15* & 0.14* & 0.73 & 0.72 \\ \cline{2-6} & head & BiLSTM & 0.22* & 0.26* & 0.78 & 0.80 \\ & & Trfmer & 0.18* & 0.21* & 0.76 & 0.77 \\ \cline{2-6} & pos & BiLSTM & 0.05* & 0.05* & 0.69 & 0.71 \\ & & Trfmer & 0.03* & 0.02 & 0.68 & 0.66 \\ \hline
**MECO-L2** & deprel & BiLSTM & 0.12* & 0.12* & 0.70 & 0.69 \\ & & Trfmer & 0.14* & 0.10* & 0.72 & 0.68 \\ \cline{2-6} & head & BiLSTM & 0.15* & 0.20* & 0.73 & 0.76 \\ & & Trfmer & 0.12* & 0.22* & 0.70 & 0.77 \\ \cline{2-6} & pos & BiLSTM & 0.02* & 0.02* & 0.63 & 0.62 \\ & & Trfmer & 0.03* & 0.01* & 0.67 & 0.64 \\ \hline
**Nicenboim** & deprel & BiLSTM & 0.27* & 0.28* & 0.79 & 0.80 \\ & & Trfmer & 0.19* & 0.19* & 0.74 & 0.74 \\ \cline{2-6} & head & BiLSTM & 0.31* & 0.45* & 0.81 & 0.88 \\ & & Trfmer & 0.31* & 0.41* & 0.81 & 0.87 \\ \cline{2-6} & pos & BiLSTM & 0.03* & 0.04* & 0.69 & 0.73 \\ & & Trfmer & 0.06 & 0.06 & 0.89 & 0.89 \\ \hline
**PoTeC** & deprel & BiLSTM & 0.14* & 0.12* & 0.71 & 0.70 \\ & & Trfmer & 0.14* & 0.11* & 0.74 & 0.69 \\ \cline{2-6} & head & BiLSTM & 0.23* & 0.28* & 0.79 & 0.81 \\ & & Trfmer & 0.15* & 0.22* & 0.75 & 0.77 \\ \cline{2-6} & pos & BiLSTM & 0.08* & 0.08* & 0.70 & 0.71 \\ & & Trfmer & 0.01 & 0.00 & 0.62 & 0.61 \\ \hline
**Provo** & deprel & BiLSTM & 0.20* & 0.19* & 0.76 & 0.75 \\ & & Trfmer & 0.20* & 0.17* & 0.76 & 0.74 \\ \cline{2-6} & head & BiLSTM & 0.25* & 0.21* & 0.79 & 0.77 \\ & & Trfmer & 0.20* & 0.22* & 0.76 & 0.77 \\ \cline{2-6} & pos & BiLSTM & 0.02 & 0.01 & 0.64 & 0.64 \\ & & Trfmer & 0.04 & 0.02 & 0.72 & 0.70 \\ \hline
**RastrOS** & deprel & BiLSTM & 0.17* & 0.18* & 0.74 & 0.74 \\ & & Trfmer & 0.16* & 0.16* & 0.73 & 0.74 \\ \cline{2-6} & head & BiLSTM & 0.22* & 0.32* & 0.77 & 0.83 \\ & & Trfmer & 0.21* & 0.31* & 0.76 & 0.82 \\ \cline{2-6} & pos & BiLSTM & 0.16* & 0.17* & 0.76 & 0.76 \\ & & Trfmer & 0.05* & 0.02 & 0.71 & 0.68 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Left block: Absolute difference of sample means in the predictions of the models between time steps with and without revisions. * means \(p\)-value < 0.001. Right block: Area Under the ROC Curve when the fitted models’ predictions are used for binary classification of revision time steps in the data.
found a significant effect in all the experiments, supporting the use of human reading data as a cognitive signal to inform revision policies. This is a valuable finding: BiLSTMs and Transformers are bidirectional, trained on full sequences, but if we make them process linguistic input incrementally, their revisions can be partially predicted by human reading behaviour. This is also a step forward towards understanding why these models change hypotheses at some tokens, when only partial prefixes are available.
Besides advancing the research on eye-tracking-augmented NLP, this study also opens the door to exploring other cognitive perspectives with restart-incremental NLP models. We see a potential to go the other direction and investigate to what extent a "mixed incrementality" model (architectures relying on an incremental processor with occasional restarts) would capture the patterns of human gaze in reading, and hence function as a model of that. In this case, revisions would serve as predictors of human regressions, with control variables like word frequency, surprisal and word length. Other possibility for future work is to investigate whether other measures, like number of fixations or regressions _to_ a token, are related to the edits per label.
## Limitations
Here we summarise a few known limitations that we have mentioned throughout the text. We have analysed various datasets which differ both in the ways they were collected (the task humans were performing, _e.g._ only reading or also answering comprehension questions) as well as in the length and genre of the texts. The size of the eye-tracking datasets is, in general, small. Ideally, larger amounts of data than what we had available for the analysis are necessary to train a revision policy. Some preprocessing steps had to be made; in particular, some decisions were necessary on how to merge tokens and interpret documentation, so that a mapping could be created. This is documented in the Appendix, but alternative ways are also possible. We limited the study to families of pre-trained models and tasks for which all languages were available. There can be a mismatch between the humans having the full text available at any point and the models performing sentence segmentation internally in different ways. For models that are trained at the sequence level, it may be better if the human reading is also performed the same way. Further research expanding these aspects is desired. Other models beyond GLMMs, _e.g._ with non-linearity, may be examined, because the probability of regression is within a narrow range in most of the cases. Using models' revisions to predict human behaviour is also a possible research question which was not addressed in this work.
## Acknowledgements
We thank Nora Hollenstein for some helpful advice on using eye-tracking measures, as well as the authors of the eye-tracking datasets who replied to our clarification requests. Thanks to Patrick Kahardipraja for initial discussions on using surprisal as a signal for revisions policies. We are also thankful for the valuable feedback and suggestions provided by the anonymous reviewers.
|
2303.17022 | Active Brownian Particles in Random and Porous Environments | The transport of active particles may occur in complex environments, in which
it emerges from the interplay between the mobility of the active components and
the quenched disorder of the environment. Here we explore structural and
dynamical properties of Active Brownian Particles (ABPs) in random environments
composed of fixed obstacles in three dimensions. We consider different
arrangements of the obstacles. In particular, we consider two particular
situations corresponding to experimentally realizable settings. Firstly, we
model pinning particles in (non--overlapping) random positions and secondly in
a percolating gel structure, and provide an extensive characterization of the
structure and dynamics of ABPs in these complex environments. We find that the
confinement increases the heterogeneity of the dynamics, with new populations
of absorbed and localized particles appearing close to the obstacles. This
heterogeneity has a profound impact on the motility induced phase separation
(MIPS) exhibited by the particles at high activity, ranging from nucleation and
growth in random disorder to a complex phase separation in porous environments. | Fergus J. Moore, John Russo, Tanniemola B. Liverpool, C. Patrick Royall | 2023-03-29T21:06:59Z | http://arxiv.org/abs/2303.17022v1 | # Active Brownian Particles in Random and Porous Environments
###### Abstract
The transport of active particles may occur in complex environments, in which it emerges from the interplay between the mobility of the active components and the quenched disorder of the environment. Here we explore structural and dynamical properties of Active Brownian Particles (ABPs) in random environments composed of fixed obstacles in three dimensions. We consider different arrangements of the obstacles. In particular, we consider two particular situations corresponding to experimentally realizable settings. Firstly, we model pinning particles in (non-overlapping) random positions and secondly in a percolating gel structure, and provide an extensive characterization of the structure and dynamics of ABPs in these complex environments. We find that the confinement increases the heterogeneity of the dynamics, with new populations of absorbed and localized particles appearing close to the obstacles. This heterogeneity has a profound impact on the motility induced phase separation (MIPS) exhibited by the particles at high activity, ranging from nucleation and growth in random disorder to a complex phase separation in porous environments.
## I Introduction
Active matter concerns systems comprised of individual bodies undergoing motion via self-propulsion [1]. This description encompasses a plethora of systems over a wide range of lengthscales from bacteria [2; 3; 4], biological microswimmers [5], schools of fish [6], bird flocks [7], to human crowds [8]. While these systems can all be classified as active matter, accurate models must be tailored to the specifics of each system and its environment. One class of much simpler model systems which nevertheless captures key elements of the behavior of more complex systems is active colloids [9; 10; 11; 12].
Yet if we are to apply such model systems in a biological context, it is essential that we study the dynamics of active particles in environments that are relevant to their real-world counterparts. For biological active matter on mesoscopic lengthscales this means environments like porous soils [13] and organic tissues [14]. Environments such as these have several qualities in common, they are often crowded, random and irregular. This of course has an impact on the transport or displacement of the active bodies inside these spaces [9], for example in the (biological) case of bacteria, their run-and-tumble dynamics can be drastically altered [15; 16].
In equilibrium systems, the dynamics of fluids in dense and complex environments has long been an area of interest. For example, the inclusion of specific structures into dense fluids has proven significant for progress towards understanding the glass transition in liquids [17; 18]. Furthermore, the addition of randomly pinned particles within a dense ensemble is known to greatly slow the dynamics of such systems, providing access to rare states [19; 20; 21; 22]. A special kind of localisation has been explored in glass-forming systems [23; 24], which may be realized using colloidal systems [25; 26; 27]. Here _pinning_, i.e. immobilizing a fraction of the particles, provides access to the so-called ideal glass, a putative amorphous state of very low configurational entropy whose diverging timescales render it otherwise inaccessible to experiment or computer simulation [28].
For active systems, in the simplest case of a confining wall, self-propelling spheres will accumulate at the wall as a consequence of the timescale of their persistent motion, even in the absence of hydrodynamic interactions [29; 30]. For many-body systems in two dimensions, the influence of disordered landscapes on the dynamics of active systems has been shown to manifest in clogging and localisation transitions [31; 32; 33], subdiffusion over long timescales [34; 35; 36], destruction of flocking clusters [37], suppression of Motility-Induced Phase Separation (MIPS) and the prevention of uniform wetting at boundaries [38]. Furthermore, manipulation of complex environments has been shown to provide a degree of control over the transport of active matter in the form of sorting [39], and the intriguing phenomenon of topotaxis (control over net flow directions by controlling the topology of the environment [40; 41]). Active Brownian particles (ABPs) exhibit rich phase behavior such as MIPS [42; 43] and the formation of active crystals [44; 45], and fundamental properties such as pressure and the equation of state differ drastically from what might be expected in equilibrium systems [46; 47; 48].
However, the question of how active Brownian spheres couple to a complex surrounding environment remains unanswered. Recently, there has been interest in experiments with mesoscale active matter in 3d. In one study a random heterogeneous environment was found to impose strong inhibitions on active transport of bacteria [15], and in another study the impact of dimensionality was made clear, with the dominance of three-dimensional structure in the presence of an anisotropic potential [49]. This latter example is a well-controlled 3d colloidal model system which provides the inspiration for our work, because it is possible to confine such systems using pinning [50; 26] or by allowing a subset of particles to undergo gelation [51]. Insight into the transport of active matter in 3d complex environments could provide a major step towards control of such systems and aid progress towards applications such as drug delivery.
In the present article, we perform three-dimensional molecular dynamics simulations of active Brownian particles in complex heterogeneous environments. To model such environments, we choose two example structures which, as noted above, may be realized in experiment: a random homogeneous array of pinned particles, providing an extension of disordered random obstacle studies to 3D; and a continuous, percolating porous network (a gel), simulating the complex environments typical of active matter under confinement. Furthermore, we will investigate the structural and dynamical properties of ABPs within these structures, with a focus on phase separation and how this varies from the MIPS observed in bulk suspensions.
This article is organized as follows: in section II we outline the computational methods used to study these systems, in section III we discuss the results of the simulations, and finally in section IV we will summarize and conclude our findings and discuss future work in this area.
## II Model and Methods
### Active Particles
We model active colloids as active Brownian particles, which propel with a constant velocity \(V_{0}\) along their individual direction vectors \(\mathbf{e}\), which in turn are subject to rotational diffusion. We implement this model through molecular dynamics simulations using a customized version of the open source LAMMPS package [52; 45], which integrates the following equations of motion:
\[\dot{\mathbf{r}}=V_{0}\mathbf{e}+\beta D_{T}\mathbf{F}+\sqrt{2D_{T}}\eta \tag{1}\]
\[\dot{\mathbf{e}}=\sqrt{2D_{R}}\xi\times\mathbf{e} \tag{2}\]
Here \(\dot{\mathbf{r}}\) is the particle velocity, \(V_{0}\) is the magnitude of the constant active velocity, and \(\mathbf{F}\) is the inter-particle force. The thermal fluctuations promoting translational diffusion are included in the Gaussian white-noise term \(\eta\), where \(\langle\eta\rangle=0\), and \(D_{T}\) is the translational diffusion coefficient. Thermal noise driving rotational diffusion of the direction vector \(\mathbf{e}\) is represented by an independent Gaussian noise term \(\xi\), where \(\langle\xi\rangle=0\), and \(D_{R}\) is the rotational diffusion coefficient. The two diffusion coefficients are related via \(D_{T}=D_{R}\sigma^{2}/3\), where \(\sigma\) is the particle diameter. Time is scaled in units of the characteristic rotational diffusion time \(\tau_{R}=1/(2D_{R})\)[42]. The active particles are modelled as being similar to hard spheres, and to achieve this we include a Weeks-Chandler-Andersen (WCA) inter-particle potential, which takes the form:
\[\beta u(r_{ij})=\begin{cases}4\beta\varepsilon\left[\left(\frac{\sigma}{r_{ij }}\right)^{12}-\left(\frac{\sigma}{r_{ij}}\right)^{6}\right]+\beta\varepsilon &r_{ij}\leq 2^{\frac{1}{6}}\sigma\\ 0&r_{ij}>2^{\frac{1}{6}}\sigma\end{cases} \tag{3}\]
where \(\varepsilon=5\) is the interaction strength, \(r_{ij}\) is the inter-particle distance, and \(\beta=1/k_{B}T\) is the inverse thermal energy. Since we use the WCA interaction, it is hard to define a volume fraction. Methods that determine an effective diameter, such as Barker-Henderson [53], may not hold outside of equilibrium systems. Therefore, as in ref. [54], we use the total density \(\rho=N/V\), where \(N\) is the number of particles and \(V\) is the volume of the system.
We use the Peclet number to refer to the relative strength of the activity in the system, which we define as: \(\mathit{Pe}=V_{0}/\sigma D_{R}\). Throughout this work we keep \(D_{R}\) constant via \(D_{T}=1\), and vary \(\mathit{Pe}\) by changing the propulsion velocity \(V_{0}\).
Simulations are performed with periodic boundary conditions. The majority of the work is carried out in a cubic box of side length \(L=55\sigma\), with a total number of \(N=144000\) particles. In some cases, there was a need to sample from many state points, and for these a smaller system was used where \(L=27.5\sigma\) and \(N\) ranges from 18000 to 24000. Analysis at constant density is always conducted at \(\rho=0.87\). This state point is chosen such that it lies below the freezing line in the bulk.
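For illustration, a minimal sketch of one Euler-Maruyama update of Eqs. (1)-(3) for ABPs with a WCA repulsion in a periodic cubic box is given below. It is a simplified, naive \(O(N^{2})\) reimplementation for clarity, not the customized LAMMPS code actually used for the simulations, and the function and variable names are ours.

```python
import numpy as np

def wca_forces(pos, box, eps=5.0, sigma=1.0):
    """Pairwise WCA forces (Eq. 3) with the minimum-image convention; naive O(N^2)."""
    rcut2 = (2.0 ** (1.0 / 6.0) * sigma) ** 2
    forces = np.zeros_like(pos)
    for i in range(len(pos) - 1):
        rij = pos[i + 1:] - pos[i]                   # vectors from i to all j > i
        rij -= box * np.round(rij / box)             # minimum image
        r2 = np.sum(rij ** 2, axis=1)
        mask = r2 < rcut2
        if not np.any(mask):
            continue
        sr6 = (sigma ** 2 / r2[mask]) ** 3
        fmag = 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r2[mask]
        fij = fmag[:, None] * rij[mask]              # force on each neighbour j
        forces[i + 1:][mask] += fij
        forces[i] -= fij.sum(axis=0)                 # reaction on particle i
    return forces

def abp_step(pos, e, box, dt, pe, d_t=1.0, sigma=1.0, beta=1.0, rng=np.random):
    """One Euler-Maruyama step of Eqs. (1)-(2); Pe = V0/(sigma*D_R), D_T = D_R*sigma^2/3."""
    d_r = 3.0 * d_t / sigma ** 2
    v0 = pe * sigma * d_r
    f = wca_forces(pos, box, sigma=sigma)
    pos = (pos + dt * (v0 * e + beta * d_t * f)
           + np.sqrt(2.0 * d_t * dt) * rng.normal(size=pos.shape))
    pos %= box                                       # wrap back into the box
    # rotational diffusion of the orientation vectors (Eq. 2), then renormalize
    e = e + np.sqrt(2.0 * d_r * dt) * np.cross(rng.normal(size=e.shape), e)
    e /= np.linalg.norm(e, axis=1, keepdims=True)
    return pos, e
```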
### Complex Environments
The complex environments relevant to microscopic biological systems are often irregular and random in nature. To investigate the dynamical properties of ABPs under these conditions, one must prepare obstacle geometries that satisfy these requirements. Here we consider two primary structures; porous gel networks and randomly pinned particles. As noted above, these may be realized in experiments. In addition to these, we include simulations studying the bulk dynamics of ABPs as a reference, these bulk simulations use the approach outlined in the previous section.
In the following section, we describe how the two
complex environments are created and characterized. A schematic depicting these two environments is displayed in Fig. 1, featuring 3d renderings of each environment type along with a cross-sectional slice. The porous gel network (Fig. 1a-b) is a heterogeneous system comprised of two distinct meso-phases which percolate through the entire simulation box: a particle-rich phase and a particle-poor phase in which, for our parameters, no particles are found. The random environment (Fig. 1c-d) is comprised of randomly pinned particles that create a number of discrete obstacles dispersed throughout the system.
_Preparation of a porous network_-- In this work the porous network is modeled as a colloidal gel, specifically a colloid-polymer mixture [55; 56]. We do this because we seek to connect our work to experiments where such a network could be realized [49; 56]. Therefore, the preparation protocol in our simulations is as close as possible to one which might be realized experimentally. For suitable parameters, colloids with such an attractive interaction begin to phase separate via spinodal decomposition which is then arrested, leaving a bicontinuous network, i.e. a gel [56; 57]. The (polymer-induced) attraction between the colloidal particles that would be used in an experiment is here modeled with the Morse potential [58; 59],
\[\beta u\left(r_{ij}\right)=\beta\varepsilon\exp\left[a_{0}\left(\sigma-r_{ij} \right)\right]\left(\exp\left[a_{0}\left(\sigma-r_{ij}\right)\right]-2\right) \tag{4}\]
where \(a_{0}=33\) is a range parameter.
To create the gel structures in simulation, we begin with particles in a simple cubic crystal at the desired number density, and then evolve this system according to Brownian dynamics (Eq. (1), for \(V_{0}=0\)), with the particles interacting via the Morse potential Eq. (4). This system is then evolved for \(5\times 10^{7}\) integration steps, which is equivalent to \(1200\tau_{B}\), after which the system is frozen and no further movement is allowed. Here \(\tau_{B}\) is the Brownian time \(\tau_{B}=(\sigma/2)^{2}/6D_{T}\). An example of the resulting gel is shown in Fig. 1(a-b).
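To make the attraction used in this quench concrete, a minimal sketch of the Morse pair interaction of Eq. (4) and the corresponding pair force is given below; the range parameter follows the text (\(a_{0}=33\)), while the well depth \(\beta\varepsilon\) is left as an input rather than a value taken from this section.

```python
import numpy as np

def morse_potential(r, beta_eps, a0=33.0, sigma=1.0):
    """Morse pair potential of Eq. (4), expressed as beta*u(r)."""
    x = np.exp(a0 * (sigma - r))
    return beta_eps * x * (x - 2.0)

def morse_force(r, beta_eps, a0=33.0, sigma=1.0):
    """Magnitude of -d(beta*u)/dr: positive (repulsive) for r < sigma,
    negative (attractive) for r > sigma."""
    x = np.exp(a0 * (sigma - r))
    return 2.0 * a0 * beta_eps * x * (x - 1.0)
```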
With the gel in place, the positions of the free particles are initialized via the Lubachevsky-Stillinger algorithm [60]. This method comprises the following steps: First, initial particle positions are randomly assigned. Then, the particles are slowly grown and displaced from an initial diameter \(\sigma_{\text{in}}=0.1\) to the desired diameter \(\sigma=1\), such that these particles experience minimal overlaps with each other and with the frozen gel particles. Additionally, to guard against any small remaining particle overlaps, a pre-run simulation with a soft potential is performed:
Figure 1: Preparation of complex environments for ABPs. (a) Percolating gel network at number density \(\rho=0.38\). (b) Cross-section through the gel network of depth \(2\sigma\). (c) A collection of randomly pinned obstacles at \(\rho=0.31\). (d) Cross-section through the random obstacles of depth \(2\sigma\). (e-f) Gel network (grey) filled with active particles (blue) to a total density \(\rho=0.42\). (g-h) Random obstacles filled with active particles to a total number density \(\rho=0.42\).
\[u(r_{ij})=A\left[1+\cos\left(\frac{\pi r_{ij}}{r_{c}}\right)\right]\quad r_{ij}<r_{c} \tag{5}\]
where \(r_{c}=2^{\frac{1}{6}}\) is the potential cut-off, and the constant \(A\) is ramped from 0 to 100 over \(1.2\tau_{R}\) without activity. Following this, the system is equilibrated, again without activity, with particles following the equations of motion outlined in Eq. (1) and Eq. (2) and the WCA inter-particle potential Eq. (3). A low density example of this system is shown in Fig. 1(e-f).
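A minimal sketch of this overlap-removal pre-run is given below: the soft potential of Eq. (5) stays finite at contact, so residual overlaps are pushed apart gently while its amplitude \(A\) is ramped linearly to its final value. The time-stepping helper `brownian_step` is a placeholder for a passive (\(V_{0}=0\)) update such as the one sketched above, not a routine from our code.

```python
import numpy as np

RCUT = 2.0 ** (1.0 / 6.0)

def soft_potential(r, amplitude, rcut=RCUT):
    """Soft cosine potential of Eq. (5); finite at r = 0 and zero beyond rcut."""
    return np.where(r < rcut, amplitude * (1.0 + np.cos(np.pi * r / rcut)), 0.0)

def remove_overlaps(pos, box, brownian_step, n_steps, a_max=100.0):
    """Pre-run with the soft potential, ramping its amplitude A from 0 to a_max."""
    for step in range(n_steps):
        amplitude = a_max * step / (n_steps - 1)   # linear ramp of A
        pos = brownian_step(pos, box, amplitude)   # one passive (V0 = 0) step
    return pos
```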
_Random pinning --_ For the random pinning case, an arbitrary configuration of particles is generated in the simulation box at the desired density. These particles then follow Brownian dynamics with the soft potential Eq. (5) to eliminate any significant overlaps before the system is equilibrated with the WCA potential. A fraction of the particles are then chosen at random and frozen. This creates the random obstacles. This fraction is chosen such that the volume accessible to the free particles is the same in both the gel network and the random pinning systems (Fig. 1(c-d)).
The fixing of the accessible free volume of the mobile particles enables comparison of observables in both environments. Fixing this is a necessary step as the difference in structure between the gel network and the random pins could result in the free particles operating at two different effective densities in the case of the same number of frozen particles.
For this, the density of the gel network is held constant and the number of pinned particles is varied as a function of the total density \(\rho\). The total free volume available to the mobile particles is determined by taking the Voronoi tessellation of the instantaneous configuration and associating with each particle the volume of its Voronoi cell \(V_{\rm voro}^{i}\). The sum of these volumes provides the total volume accessible to the free particles \(\sum_{i}^{N}V_{\rm voro}^{i}\), which is then averaged over ten independent simulation runs. This information is used to determine the fraction of particles to be pinned, such that \(\sum_{i}^{N}V_{\rm voro}^{i}\) for the free particles in the pinned system matches that of the porous gel network at the same density.
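The matching procedure can be sketched as follows. The helpers `voronoi_volumes` (one periodic Voronoi-cell volume per particle) and `sample_free_volume` (building one pinned configuration at a given pinned fraction and returning its accessible free volume) are placeholders standing in for the actual analysis code.

```python
import numpy as np

def free_volume(free_pos, obstacle_pos, box, voronoi_volumes):
    """Total volume accessible to the mobile particles: the sum of their
    Voronoi-cell volumes, with the obstacles included in the tessellation."""
    all_pos = np.vstack([free_pos, obstacle_pos])
    volumes = voronoi_volumes(all_pos, box)          # one cell volume per particle
    return volumes[: len(free_pos)].sum()            # keep only the free particles

def match_pinned_fraction(target_volume, sample_free_volume, fractions, n_runs=10):
    """Pick the pinned fraction whose mean accessible free volume, averaged over
    independent configurations, is closest to that of the gel at the same density."""
    best, best_err = None, np.inf
    for frac in fractions:
        mean_vol = np.mean([sample_free_volume(frac) for _ in range(n_runs)])
        if abs(mean_vol - target_volume) < best_err:
            best, best_err = frac, abs(mean_vol - target_volume)
    return best
```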
_Lengthscales in the complex environments --_ The interplay between the lengthscale of the two environments and the persistent motion of the active particles will have a large impact on the dynamics. Therefore, to characterize the lengthscale the obstacles impose on the active particles we will use the pore _chord length_. The chord length measures the distance between two interfaces across a single homogeneous phase of a heterogeneous system: a chord is a segment connecting two interfaces that lies wholly within one phase. The chord length distribution \(p(\ell)\) defines the probability of finding a chord of length between \(\ell\) and \(\ell+d\ell\) within one phase. We characterize the environments in this work by the mean pore chord length \(\langle L_{c}\rangle\). In practice, this is determined by measuring chords of varying lengths along each axis of the three dimensional sample that lie wholly within the pore phase [61; 62]. The mean chord length was measured and averaged for six independent configurations for each environment species. The gel networks have a mean pore chord length \(\langle L_{c}\rangle=6.62\sigma\), whereas for the random pinning system \(\langle L_{c}\rangle=3.24\sigma\), almost half that of the gel system. The difference in this measurement derives from the arrangements of the particles comprising these structures: in the case of the gel, particles are arranged locally into dense branches, and therefore the branches provide the relevant lengthscale in this system. Conversely, in the random system the particles are arranged in a non-overlapping random configuration, and the surrounding free space is then dependent on the shorter lengthscale of the average particle separation.
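For illustration, a minimal sketch of this measurement is given below, assuming the pore space has first been voxelized into a boolean, periodic 3d array (`True` = pore); the voxelization step itself is not shown and the function name is ours.

```python
import numpy as np

def mean_pore_chord_length(pore, voxel_size):
    """Mean pore chord length <L_c> from a periodic boolean pore grid.

    Chords are runs of consecutive pore voxels measured along each of the three
    axes; their lengths (in units of voxel_size) are pooled and averaged."""
    chords = []
    for axis in range(3):
        grid = np.moveaxis(pore, axis, 0)              # scan along this axis
        n = grid.shape[0]
        for line in grid.reshape(n, -1).T:             # every 1d line of voxels
            if line.all():                             # fully open line
                chords.append(n)
                continue
            start = int(np.argmin(line))               # index of a solid voxel
            rolled = np.roll(line, -start)             # line now starts on solid
            run = 0
            for filled in rolled:
                if filled:
                    run += 1
                elif run:
                    chords.append(run)
                    run = 0
            if run:                                    # trailing chord ends at the
                chords.append(run)                     # periodic image of the start
    return voxel_size * np.mean(chords)
```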
### Dynamical analysis
The addition of obstacles into dense fluids greatly influences the dynamics, and in some cases a system may become arrested. The structural relaxation time \(\tau_{\alpha}\) provides a useful metric which one can use to understand the variation of timescales across different state points and environments.
The relaxation time \(\tau_{\alpha}\) is determined via the self part of intermediate scattering function:
\[F_{\rm s}(k,t)=\frac{1}{N}\left\langle\sum_{j=1}^{N}\exp\left[i\vec{k}\cdot( \vec{r}_{j}(t)-\vec{r}_{j}(0))\right]\right\rangle \tag{6}\]
where \(\vec{k}\) is the wavevector, whose magnitude \(k=|\vec{k}|\) is taken as \(2\pi\). We define \(\tau_{\alpha}\) via \(F_{s}(k=2\pi,\tau_{\alpha})=e^{-1}\). Here the index \(j\) runs over all the particles.
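A minimal sketch of this measurement is given below: \(F_{s}(k,t)\) is estimated by averaging the phase factor over particles and over a set of wavevector directions with \(k=2\pi\), and \(\tau_{\alpha}\) is read off as the first time the curve drops below \(e^{-1}\). Unwrapped particle coordinates are assumed, and the function names are ours.

```python
import numpy as np

def self_isf(traj, k_mag=2.0 * np.pi, n_kvec=32, seed=0):
    """Self intermediate scattering function F_s(k, t) of Eq. (6).

    traj : array of shape (n_frames, n_particles, 3) of unwrapped positions.
    Returns F_s(t) averaged over particles and random wavevector directions."""
    rng = np.random.default_rng(seed)
    kvecs = rng.normal(size=(n_kvec, 3))
    kvecs *= k_mag / np.linalg.norm(kvecs, axis=1, keepdims=True)
    disp = traj - traj[0]                              # displacements from t = 0
    phases = np.einsum('tpd,kd->tpk', disp, kvecs)     # k . dr per frame/particle/k
    return np.cos(phases).mean(axis=(1, 2))            # real part, averaged

def relaxation_time(fs, times):
    """First time at which F_s drops below 1/e (structural relaxation time)."""
    below = np.nonzero(fs < np.exp(-1.0))[0]
    return times[below[0]] if below.size else np.inf
```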
The persistent motion of active particles induces clustering and aggregation at boundaries. This will likely cause density fluctuations where some fraction of particles are in dense and crowded regions while others are in locally dilute regions. To quantify the degree to which these variations are taking place within different environments, we use the four-point dynamic susceptibility \(\chi_{4}\). To calculate \(\chi_{4}\), we follow the methodology of Lacevic _et al._[63]. For this, one must first define an overlap function \(w\left(|{\bf r}_{j}(0)-{\bf r}_{i}(t)|\right)\), where \(i\) and \(j\) are particle indices. This measures the degree of spatial similarity between configurations of a system as a function of time. The overlap is unity inside a region \(|{\bf r}_{j}(0)-{\bf r}_{i}(t)|\leq a\) and 0 otherwise, where \(a=0.3\sigma\). The fraction of overlapping regions in a system of particles at times 0 and \(t\) is given by:
\[Q(t)=\frac{1}{N}\sum_{j=1}^{N}\sum_{i=1}^{N}w\left(|{\bf r}_{j}(0)-{\bf r}_{i }(t)|\right). \tag{7}\]
The fluctuation of \(Q(t)\) then defines \(\chi_{4}\). This quantity is a susceptibility and measures dynamic heterogeneity [64].
\[\chi_{4}(t)=\frac{V}{N^{2}k_{B}T}\left[\left\langle Q^{2}(t)\right\rangle-\left\langle Q (t)\right\rangle^{2}\right]. \tag{8}\]
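A direct, if brute-force, sketch of Eqs. (7)-(8), with the normalization exactly as written there and \(k_{B}T=1\), is given below; it assumes a set of independent trajectory segments over which the ensemble average is taken, and uses the minimum-image convention for the distances.

```python
import numpy as np

def overlap_q(pos0, pos_t, box, a=0.3):
    """Overlap Q(t) of Eq. (7) between the configurations at times 0 and t."""
    d = pos0[:, None, :] - pos_t[None, :, :]         # r_j(0) - r_i(t) for all pairs
    d -= box * np.round(d / box)                     # minimum image
    return np.count_nonzero(np.sum(d ** 2, axis=-1) <= a * a) / len(pos0)

def chi4(segments, box, volume, a=0.3):
    """Four-point susceptibility chi_4(t) of Eq. (8), with k_B T = 1.

    segments : array (n_segments, n_frames, n_particles, 3); each segment is one
               independent realization used for the ensemble average."""
    n_seg, n_frames, n_part, _ = segments.shape
    q = np.array([[overlap_q(seg[0], seg[t], box, a) for t in range(n_frames)]
                  for seg in segments])              # shape (n_segments, n_frames)
    return volume / n_part ** 2 * (np.mean(q ** 2, axis=0) - np.mean(q, axis=0) ** 2)
```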
## III Results
Our results section is organized as follows. We first characterize the dynamical behavior at low activity, as quantified by the Peclet number. We then move on to moderate activity, where significant structural changes are observed, related to motility induced phase separation. These we characterize with a number of measures: one-body measures such as the Voronoi volume associated with each particle, and many-body properties accessed with the topological cluster classification [68]. Finally, we probe the case of very high activity, where we observe a re-entrant mixing.
### Low _Pe_: crystal suppression and heterogeneous dynamics
_Structural relaxation --_ As mentioned in the introduction, active systems can undergo a localization transition in the presence of quenched disorder. This phenomenon is often only present in the homogeneous phase, i.e. for activity below that associated with the boundary of motility induced phase separation. To assess the extent to which complex environments impose localization on active Brownian particles, we measure the structural relaxation time (\(\tau_{\alpha}\)). This enables us to determine the regimes in which these systems become arrested. Figure
Figure 3: The dynamic susceptibility \(\chi_{4}\) is measured in the three systems at \(\rho=0.87\) and at \(Pe=0,10,40\) in panels (a), (b) and (c) respectively. The time corresponding to the peak of \(\chi_{4}\) relates to the timescale of maximal dynamical correlation and the height corresponds to the number of particles involved in dynamic heterogeneities. Due to the large amount of sampling required, this data was measured using the smaller simulation box size.
Figure 2: Structural relaxation time (\(\tau_{\alpha}\)) as a function of number density \(\rho\) and Péclet number \(Pe\). Here we consider the bulk, gel network and random pinning systems from left to right. Colours indicate the common logarithm of \(\tau_{\alpha}\). Crystalline states are marked in white. The number density \(\rho=0.87\) is emphasized with a black dotted line. Due to the large amount of sampling required, this data was measured using the smaller simulation box size.
2 shows the variation of the structural-relaxation time in the bulk, gel network and random pinning systems as a function of \(\mathit{Pe}\) and total number density \(\rho\).
In the bulk system (Fig. 2-bulk), the particles will crystallize for state points that fall below the freezing line. This crystal regime was studied previously from which the data for this phase boundary originates [45]. Outside of the crystalline regime, the active particles exhibit a monotonic decrease in \(\tau_{\alpha}\) as \(\mathit{Pe}\) rises for a given density, which is qualitatively similar to a temperature increase in passive systems [65].
With the addition of a complex environment, there is an emergence of slow dynamics distinct from that of the crystal in the bulk system. Figure 2-gel and Fig. 2-random show the variation in \(\tau_{\alpha}\) for the gel network and the random pins respectively. Comparing the two panels, we can see that the two systems operate on significantly different timescales, with the system with random obstacles exhibiting slower dynamics than the gel network for the majority of state points considered. Both the gel network
Figure 4: (_Left_) 3d Snapshots of the three systems at \(\mathit{Pe}=100\), particles are coloured by their Voronoi volumes (\(V_{\mathrm{voro}}\)), obstacles are not rendered. (_Right_) A slice through each system (depth \(4\sigma\)).
and the random pinning systems exhibit structural relaxation times in excess of \(10^{5}\tau_{R}\) at high densities, plotted as the darkest blue contour. These are state points which will not relax within the maximum run time of the simulations, and thus, for the purposes of this work, we define them as arrested. In this arrested regime, active particles are localized due to the constraints imposed by the gel or the pins, even at Peclet numbers of the order of 10 for dense ensembles in the random pins.
_Dynamic susceptibility --_ The interplay of the complex environment and the self-propulsion will cause a degree of dynamical correlation in these systems, as clusters of active particles become aligned or absorbed at the boundaries. The dynamic susceptibility \(\chi_{4}\) provides a measure of this; it is plotted for the three systems in Fig. 3. The dynamic susceptibility manifests a peak at the timescale for which the particle dynamics are maximally correlated on the chosen lengthscale \(a\) (Eq. 8). This measure shares some similarities with the structural relaxation time. For example, in the passive systems (\(Pe=0\)), the positions of the peaks indicate the same relationship as measured in \(\tau_{\alpha}\) in Fig. 2, with the gel system being the fastest, followed by the bulk and then the random system coming significantly later. At low activity (\(Pe=10\)), these peaks become more clearly defined and move to shorter times. Furthermore, the relative positions of these peaks switch, with the dynamic correlations in the bulk system happening on shorter timescales than in the complex environment systems. Finally, at higher \(Pe\) (the regime of motility-induced phase separation), the dynamic correlations are appreciably more prominent in the bulk system. Conversely, it is clear that for the gel and random systems, \(\chi_{4}\) does not fully relax on the timescales we probe, as the environment enforces some degree of dynamical correlation over long timescales.
### Intermediate \(Pe\): MIPS in random environments
From the results presented in the previous section it is clear that the different confinement types cause significant and varying perturbations to the dynamics of active systems at low \(Pe\). What remains unclear is the mechanism by which active particles overcome localization due to the environment, and what behavior these particles exhibit when driven towards higher Peclet numbers, particularly in the regime of motility induced phase separation (MIPS). Compared to the lengthscale of MIPS, we know that the complex environment restricts the active particles to motion over shorter lengthscales; from the chord length measurements, the confining lengthscale is \(6.62\sigma\) in the gel and \(3.24\sigma\) in the case of random pinning.
We use the individual particle Voronoi volumes (\(V_{\mathrm{voro}}\)) as a measure of the local density to provide insight into the relationship between self-propulsion and complex environments. Figure 4 displays representative snapshots of the bulk, gel network and random pinning systems in the steady-state at an activity of (\(Pe=100\)), alongside a slice through each system. In the bulk system (top panel) the particles have undergone motility-induced phase separation. Within the large dense region are particles (colored dark blue, which have \(V_{\mathrm{voro}}\leq 0.8\) in Fig. 4), clustering due to their persistent motion, which are surrounded by a gas of active particles at a lower density.
Figure 4 (middle panel) displays the behavior of active particles in the gel at high \(Pe\). The structure of the gel is non-trivial featuring extreme variations in surface curvature and channel width. It is clear that this has a strong impact on the dynamics, with a distinct pattern emerging which is clearly dependent on the gel structure. In particular MIPS leads to a highly complex structure, with some pores being filled by the slow/dense phase and others by the fast/less dense phase. The location of the regions which acquire a low or high density under MIPS is fixed by the initial configuration, with independent runs always exhibiting the same demixing pattern.
The influence of the random pinning (bottom panel) on the active particle dynamics at high \(Pe\) is distinct from that of the bulk and gel systems. Here, the particles phase separate into a large cylindrical droplet somewhat akin to the bulk system. However, this droplet has random pins throughout its structure holding the dense phase in space. Furthermore, the presence of the random pins brings about an unexpected result: the arrangement
Figure 5: Heat map of Voronoi volumes and single particle displacements in the gel network (left), and random pinning (right) systems respectively. Note that displacements are measured over \(\Delta t=6\tau_{R}\). All systems are at \(\rho=0.87\), and \(Pe=100\).
of the random pins pre-defines the location in which the dense MIPS phase will form. For the same configuration of pinned particles, but different initial positions and velocities of the active particles, MIPS occurs in roughly the same region of the system, with only small fluctuations from run to run. Therefore, we believe that the location of the MIPS is somehow encoded in the positions of the randomly pinned particles.
The three-dimensional snapshots in Fig. 4 provide a good indication of the phase behavior of these systems. However, they do not contain any information on the stability of the dense phase. Therefore, in Fig. 5 we plot the average Voronoi volume \(\langle V_{\rm voro}\rangle\) as a function of space in both the gel network and random pinning systems. These plots give information regarding the way in which each system phase separates. For the gel, the inclusion of the network into the space does not allow for the formation of a single dense droplet as seen in bulk systems experiencing MIPS. Instead, the structure of the gel determines the locations in which the system will phase separate. Given the disordered nature of the gel, there are sections where the pores are more constricted, have tighter curvature or are less connected; these are the locations that will trap active particles. The spatial distribution of \(\langle V_{\rm voro}\rangle\) in the random pinning system tells an alternate story. Active particles in this system at sufficient \(\mathit{Pe}\) will form a large droplet around a subset of pinned particles. The random pins in this droplet keep it stable over long time periods in a steady state where particles are exchanged between the droplet and the surrounding active gas.
The Voronoi volumes give insight into the phase separation of these systems but not into the transport
Figure 8: The fraction of locally dense particles \(N_{D}/N\). Particles are considered locally dense if \(V_{\rm voro}<0.8\).
Figure 6: Time–evolution of the pinned system undergoing MIPS. The state point is that of Figs. 5 (right), \(\rho=0.87\), and \(\mathit{Pe}=100\). The color map represents the Voronoi volumes of the active particles and the pale yellow particles are the pinned particles. Each snapshot is taken at the time noted underneath from the start of active motion. Data are taken from a slice of depth \(\sigma\).
Figure 7: Probability density of the Voronoi volumes (\(V_{\rm voro}\)), for the bulk, gel, and random pinning systems respectively. All systems are at \(\rho=0.87\), and various \(\mathit{Pe}\) (see figure legend).
dynamics of the individual particles in this environment. Some insight into this aspect can be gained by looking at the average single particle displacements \(\langle\Delta r\rangle\) for the same system. The distributions of \(\langle\Delta r\rangle\) are plotted in Fig. 5 for displacements over the time period \(t=6\tau_{R}\), which is a time period of the order of \(\tau_{\alpha}\) in the passive bulk system at this density. For the gel system, these displacements are largely uniform in the wider channels. Close to the surfaces of the gel particles the displacements are significantly less, indicating shorter movements along the surface. Furthermore, there are several locations where particles are localized with displacements less than the particle diameter. These are all located in regions of high surface curvature.
For displacements in the random system, there are four identifiable populations of particles, each with its distinct environment. The first is the group of localized particles. These are particles that have become trapped between the random pins and other particles in the dense phase and are recognizable as the dark spots dispersed through the dense phase. These are surrounded by the second group of particles that are not localized but remain trapped within the dense phase and are moving very slowly (\(\Delta r<\sigma\)). Beyond this are particles in the interface, which undertake mid-range displacements. Finally, outside the dense phase is the active gas where particles are completing relatively large and uniform displacements.
_Time-evolution of MIPS in random pinning --_ Our protocol enables us to investigate the process by which the system undergoes MIPS. To this end, we show a time-sequence of snapshots of a pinned system undergoing MIPS. In Fig. 6 we show the formation of MIPS, starting from the passive WCA system as described in section II. Here we consider the same state point as in the previous figures (\(\rho=0.87,Pe=100\)) and plot the Voronoi volumes as in Fig. 5 (top row). The time-evolution of this data is also shown in the supplementary movie.
At time \(t=0\), we see that the system is largely uniform in density. Visual inspection indicates the following. Even at quite small times (\(t\lesssim\tau_{R}\)), there are fluctuations in density which are small, both in the change of local volume per particle and also in their spatial extent. Over time, the larger fluctuations (in terms of their spatial extent) appear to grow at the expense of smaller fluctuations and even at the quite short time of \(10\tau_{R}\), the pattern seems to be largely fixed. Longer times correspond to an increase in density difference between the particle-rich regions (blue) and the more dilute regions. During this time regime, the interface between these regions becomes better defined. While we have been able to observe the time-evolution of MIPS under random pinning, exactly what causes the spatial distribution of the MIPS phases and how this is encoded in the random pins seems to be a challenging problem for the future.
_Density fluctuations as a function of \(Pe\) --_ So far we have seen examples of how these systems phase separate at \(Pe\approx 100\); now we will look at how the local density fluctuates in these systems for different \(Pe\). The probability density of the Voronoi volumes for each system is shown in Fig. 7 for various \(Pe\). In the absence of activity, i.e. \(Pe=0\), all systems feature an approximately Gaussian distribution. Common to all environments, the addition of activity produces a shift in the peak towards smaller volumes and a broadening of the tail of the distribution towards larger volumes. In the bulk system (Fig. 7-bulk) the distributions feature a non-monotonic trend in the spread, first increasing as a function of \(Pe\) up to a maximum at \(Pe=60\) before decreasing. This is the first sign of re-entrant MIPS mixing, which will be the focus of Section III.4.
For the gel network (Fig. 7-gel) and the random pins (Fig. 7-random), this broadening is monotonic, with the gel network covering a wider range of volumes. However, for the random pins and the bulk systems we observe the emergence of twin-peaked distributions at higher activity as a result of phase separation. For both the gel and the random pins, the influence of the complex environment leads to a splitting of the active particles into more than one population, with a proportion of active particles aggregating or becoming localized because of interactions with the environment.
The persistent motion of active particles induces symmetry breaking that causes them to aggregate at surfaces and walls. In these systems the surfaces could be the edge of a MIPS dense phase, the surface of the gel network, or a dense cluster of pins. These surfaces collect active particles. To determine how the environment structure affects the collection of particles at surfaces, we count the number of particles located in locally dense regions \(N_{D}\), defined as particles with \(V_{\rm voro}<0.8\). In Fig. 8 we plot the fraction of locally dense particles \(N_{D}/N\) as a function of \(Pe\).
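Given per-particle Voronoi volumes, the quantities used in Fig. 8 and later in Fig. 11(c) reduce to a few lines; a sketch is given below, with the Voronoi-volume computation itself assumed to be provided elsewhere (e.g. by the same routine used for the free-volume matching).

```python
import numpy as np

def dense_fraction(voro_volumes, threshold=0.8):
    """Fraction N_D / N of locally dense particles (V_voro < threshold), as in Fig. 8."""
    v = np.asarray(voro_volumes)
    return np.count_nonzero(v < threshold) / v.size

def voro_variance(voro_volumes):
    """Variance of the Voronoi-volume distribution; its non-monotonic trend with Pe
    is used to detect re-entrant mixing, as in Fig. 11(c)."""
    return np.var(np.asarray(voro_volumes))
```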
_Active transport in complex environments --_ We have seen so far that with a progressive increase in \(Pe\), all three systems undergo dramatic changes in terms of local density. We have also seen that at high \(Pe\) the average single particle displacements reveal information concerning the interaction the active particles have with their environments. As with the Voronoi volumes, we plot the probability densities of the single particle displacements of active particles in the three systems for various \(Pe\) [Fig 9(a)]. At first glance, it is clear that for all systems the particles complete larger displacements as \(Pe\) is increased. Moreover, these distributions are highly featured and reveal a great deal of information about the behaviour and interactions of the active particles.
In the bulk system [Fig. 9(a-bulk)], the \(\Delta r\) distribution is single-peaked at \(Pe=0\). As \(Pe\) increases, this distribution shifts to higher displacements and we observe the growth of a second peak, indicating the presence of MIPS in the system. With a further increase of \(Pe\) there is a continuous transition between the relative heights of
the peaks as the fraction of particles in the dense phase grows. This is corroborated by the growth of regions of locally dense particles over this range of \(Pe\) observed previously in Fig. 8.
The displacements for the gel network and the random pinned system are plotted in Fig. 9(a-gel) and Fig. 9(a-random) respectively. The distributions of both of these systems show the splitting of a single population into two or more populations with the progressive increase of \(Pe\). The first of these is the emergent peak at very small displacements \(\Delta r/\sigma\sim 10^{-1}\). These particles move only a small fraction of their diameter and have become localized as a result of the interplay between their activity and the environment.
Interestingly, both systems in complex environments [Fig. 9(a-gel/random)] feature a strong peak at \(\Delta r/\sigma=1\), with some smaller features at subsequent integer displacements. Examination of the average displacements in Fig. 5 shows that they are located along the surfaces of the gel network and in small pockets of lower density in the dense phase of the pinning system. The location of these displacements makes it clear that they are a result of particle re-arrangements at the obstacle interface and in dense particle clusters. For the gel network [Fig. 9(a-gel)] at \(Pe>0\), the remaining particles are in a single large population, moving comparable distances to particles in the bulk system. However, in the case of the random pinned system [Fig. 9(a-random)], at \(Pe\geq 40\) there is a splitting of larger displacements across two length scales, one at the interface and the second in the active gas.
Thus far we have primarily considered two observables, \(V_{\text{voro}}\) and \(\Delta r\), both of which tell part of the story. To complete this picture, these two observables can be correlated; the result is plotted in Fig. 9(b) as a series of logarithmically stacked contours, where each layer denotes the level below which the indicated percentage of the data lies. These plots provide some insight into the population splitting phenomena we have seen so far. We see that the combination of confinement and activity creates a subpopulation of particles that are arrested and have very little free space, a feature not found in the bulk system. Interestingly, this arrested group is relatively larger in the random pinning system. For the systems at high
Figure 9: (a) Probability density of the single particle displacements (\(\Delta r\)), for the bulk, porous network, and random pinning systems respectively. All systems are at \(\rho=0.87\), and various \(Pe\) (see figure legend). (b) Correlation of the Voronoi volumes and the single particle displacements of motile particles in the gel network, random pins and the bulk system. All systems are at a density of \(\rho=0.87\), and plotted for \(Pe=0\), \(20\), and \(120\). Black lines contain \(90\%\) of data. Colours show contour levels below which the indicated percentage of the data lies.
\(Pe\) in Fig. 9(b), the particles that have \(V_{\rm voro}<0.8\), and thus in some form belong to dense clusters, still cover a wide range of displacements. This is likely a combination of two phenomena. The first is that of particles that have been localized over a longer time frame; these are particles that are not moving and have very little free space. The other case is that of particles which have been mobile but now have very little space; these are particles that have moved from a position at a previous time and are now located in a dense cluster or at the obstacle interface.
### Local structure
So far, we have focussed mainly on one-body structural properties via the Voronoi volumes. It is instructive when studying amorphous systems to consider higher-order structural correlations, as a precise means to probe smaller changes in the local structure. To this end, we use the topological cluster classification (TCC) [68]. The TCC identifies local environments whose bond topology is identical to that of small minimum energy clusters of a suitable reference system, here the Lennard-Jones model.
Figure 11: (a) Probability density of the Voronoi volumes in the bulk system at various \(Pe\). As \(Pe\) increases the distributions show first one, then two populations, and then a re-entrant single phase at high \(Pe\). (b) Correlation of \(V_{\rm voro}\) and \(\Delta r\) in the bulk system at \(Pe=160\) shows a single population. (c) Variance of the distribution of Voronoi volumes in the porous network (\(\sigma_{\rm voro}^{2}\)), random pinning and the bulk systems as a function of \(Pe\), all at density \(\rho=0.87\).
Figure 10: Higher–order structural analysis using the topological cluster classification. Here particles in specific local environments corresponding to clusters of 5-13 particles, and the hexagonal close-packed and face centered cubic crystals are considered. The number of particles in each environment \(N_{c}\) is then plotted as a function of \(Pe\) for the three geometries under consideration, bulk, gel and random systems at \(\rho=0.87\). Colors of the lines and data points correspond to the clusters depicted in the legend. The colors of the particles in the renderings in the key correspond to geometric properties of the clusters. In particular, the grey particles are in 3,4 or 5-membered rings and the yellow particles correspond to so-called “spindles” [68].
Some of us have shown that this is appropriate for passive WCA particles [66] and have also investigated the effect of activity on a similar (bulk) system [67; 45]. Here, we consider clusters of differing sizes (5-13 particles) as indicated in Fig. 10. The number of particles in each of these clusters \(N_{c}\), scaled by \(N\), is then plotted as a function of \(Pe\) for \(\rho=0.87\). We further consider the influence of the local environment, where \(N_{c}\) is the number of particles participating in a particular cluster. To identify the cluster population we need the bond network, which we identify with a modified Voronoi decomposition (specifically we set the Voronoi parameter of Ref. [68] to \(f_{c}=0.82\)). Now we are interested in the active particles, but in order to identify the clusters, we include the immobilized particles in the bond network. We then run the TCC, but when analyzing data, we only consider active particles.
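The restriction of the cluster counts to active particles can be sketched as follows; `run_tcc` is a stand-in for whichever TCC implementation is used, assumed here to return, for each cluster type, the indices of all particles participating in that motif, and is not an actual API of the TCC code.

```python
import numpy as np

def active_cluster_fractions(positions, n_active, box, run_tcc, fc=0.82):
    """Per-cluster populations N_c / N counted over active particles only.

    positions : all particles, with the first n_active rows the active ones and the
                remaining rows the immobilized (gel or pinned) particles, which are
                included so that the bond network is built correctly.
    run_tcc   : placeholder callable returning {cluster_name: particle indices}."""
    membership = run_tcc(positions, box, voronoi_parameter=fc)
    fractions = {}
    for name, indices in membership.items():
        indices = np.unique(np.asarray(indices))
        fractions[name] = np.count_nonzero(indices < n_active) / n_active
    return fractions
```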
Figure 10 shows a common trend between the bulk, gel and random environments: the equilibrium local structure population is disrupted by increasing \(Pe\). Both of the confined environments show cluster populations that are lower compared to the bulk case. The random environment, where the position of the obstacles is taken from equilibrium configurations, has at \(Pe=0\) the same population numbers as the bulk case, but as soon as activity is switched on all cluster populations fall more abruptly compared to the bulk case. The first rapid decrease in local structure at \(Pe\simeq 30\) coincides with the formation of MIPS. Unlike the gel and random pinning systems, the bulk case experiences another large drop in local structure population for \(Pe\simeq 140\). As we will see in the next section, this drop corresponds to a re-entrant mixing. Local structures are present in lower populations in active systems than in similar equilibrium systems, as has been seen previously [67]. However, it is possible that phase separation or the influence of the environment could influence this.
### Motility-Induced Mixing
In addition to the phase behavior discussed thus far, for our purposes, there is one more regime to be considered. At very high \(Pe\), ABPs will transition from a demixed state due to motility-induced phase separation to a mixed state, i.e. a homogeneous active fluid. A similar behavior has been observed previously in Refs. [69; 43] in bulk systems. We confirm these observations in the bulk case. The presence of the transition is apparent in the probability density of \(V_{\rm voro}\) in the bulk system [Fig. 11(a)]. These distributions show two populations at intermediate \(Pe\), but a single population at \(Pe=0\) and at \(Pe>120\). Looking at the correlation of \(V_{\rm voro}\) with \(\Delta r\) confirms the single re-entrant phase at high \(Pe\) [Fig. 11(b)].
To study the re-entrant MIPS behaviour in systems with quenched disorder, in Fig. 11(c) we plot the variance of the probability density of \(V_{\rm voro}\) as a function of \(Pe\): non-monotonic behaviour in this quantity signals a re-entrant phase. Looking at this data we can see that for the bulk and random systems, as \(Pe\) increases, the variance also increases up to a maximum, beyond which it decays to an intermediate value. The peak corresponds to the state where there is a roughly equal fraction of particles in the dilute and dense phases, \(N_{D}/N\approx 1/2\). We plot the same for the systems with confinement in Fig. 11(c). Notably, the random pinning system also undergoes a transition to a re-entrant fluid. However, this transition is delayed relative to the bulk, indicating that the presence of the pins works to stabilize MIPS at high \(Pe\). For the gel system we do not observe the re-entrant behavior within the considered range of \(Pe\).
## IV Conclusion
Suspensions of Active Brownian Particles show a rich range of dynamical behavior, and our goal was to explore the influence of complex confinement at high densities. To do this, we prepared confining geometries with different static properties: randomly pinned particles taken from an equilibrium bulk configuration, and a porous gel structure. Both confining geometries are constructed to have the same free volume available to the mobile particles, thus allowing a direct comparison of the effects of the static lengthscale of the confinement on the behavior of active particles.
We first explored the phase behavior at low \(Pe\), revealing how pinning suppresses the crystallization of the fluid at high densities. The relaxation of the particles is slowed down by the obstacles (more so for the random case than for the gel case), and the dynamics also become more heterogeneous, as confirmed by the study of the overlap function and four-point susceptibility.
At intermediate \(Pe\) the bulk system displays MIPS, where the system forms domains of dense/slow regions and low density/fast regions that nucleate and grow in a similar manner to equilibrium phase separation of two disordered phases. Surprisingly, we find that the obstacles not only do not suppress MIPS formation, but actually act to stabilize it. Random obstacles display a very similar phase-separation pattern to the bulk case, but the domains do not appear homogeneously throughout the system; instead they always form in the same regions of the sample. The mechanism of MIPS nucleation from random obstacles is still unknown, and should be addressed in future studies. In particular, the effect of system size and changing the state point would be most important to explore. The system size that we have studied here fully demixes into two distinct phases, but the timescale for this for different system sizes, and indeed whether larger systems fully demix, would benefit from a detailed finite size analysis. Furthermore, exactly how the pinned particle environment encodes the spatial distribution of the MIPS patterns remains an outstanding challenge. In the case of the gel, the MIPS domains change completely, and form a complex structure where the active and inactive regions
occupy different pores of the structure.
Finally, we considered how local structure is perturbed by the activity, and revealed that re-entrant MIPS behavior is suppressed (or moved to higher \(Pe\)) by the random environments. In particular, the gel environment seems to be the most effective in stabilizing the density and activity fluctuations. In this case, we attributed this behavior to the absorption at the rough walls, where the persistent motion of active particles creates localized and highly dense regions, which in turn frees space inside the pores and increases particle transport in the system.
We believe that these results could be important for better understanding transport in biological environments and for guiding future studies of active matter particles in random media. The confining environments that we consider are experimentally realizable [49], and indeed combining this with some means to break the symmetry might in the future enable investigations of topotaxis [41]. Other possibilities include exploring the interplay between complex environments such as those we have investigated and state functions such as pressure [46; 47; 48], which also might be measured experimentally in colloidal systems [70].
## Supplementary material
Supplementary movie 1: This movie corresponds to the stills in Fig. 6. The state point is \(\rho=0.87\) and \(Pe=100\) for the random pinning system. The movie shows the emergence of dense and dilute regions.
## Acknowledgments
FJM was supported by a studentship provided by the Bristol Centre for Functional Nanomaterials (EPSRC grant EP/L016648/1). JR acknowledges support from the European Research Council Grant DLV-759187. CPR acknowledges support from the European Research Council (ERC Consolidator Grant NANOPRS, project number 617266).
## Appendix
### Overlap
The overlap function \(Q(t)\) compares a particle configuration with itself at a later time. An important feature of this measure of similarity is that it is not particle specific, it matters not whether it is the same particle occupying that space, only that there is a particle there. With this in mind, it is clear why the bulk system [Fig. 12(a)] behaves as it does. In this system, there are no fixed obstacles, and thus there are no points at which a cluster of particles could be anchored. Therefore, even in a system with MIPS present the centre of mass of a dense cluster remains diffusive and thus \(Q(t)\) decays to a small value. \(Q(t)\) does not decay to \(0\) in this case in the bulk due to the relatively high density, as there will always be some degree of overlap.
For active particles in the other two systems, there is clear evidence of the interplay between the self-propulsion and the complex environment. Both systems converge to an overlap value higher than that of the bulk system. This is a product of the fixed geometry of the obstacles. Although the obstacle particles are not considered, there will be regions in the structure that are more likely to trap active particles, and even though particles are mobile, there is a high probability that there will be some particles in these regions. Interestingly, the gel and the random pins approach their convergent overlaps from different directions as \(Pe\) increases. For the gel system [Fig. 12(b)], the overlap value \(Q(t)\) increases with \(Pe\) at longer times, while decreasing with an increased rate at shorter times. The increased rate on short timescales is explained by the increased propulsion velocity promoting a quicker re-configuration of the active particle populations. However, this increase in \(Pe\) has the added effect of increasing the likelihood of a particle becoming trapped against the walls of the gel network; this is supported by the increase in the locally dense fraction \(N_{D}/N\) with \(Pe\) in Fig. 8.
Like in the bulk and the gel network, the overlap of the random system also decays at an increasing rate at short times as \(Pe\) increases (Fig. 12c). In the absence of activity, the random system particles have a very slow decay in \(Q(t)\). This is a result of the pinned particles dramatically slowing down the dynamics of passive particles; this is also seen in the alpha-relaxation time in Fig. 2, where \(\tau_{\alpha}\) was measured to be of the order of \(10^{4}\) times larger than that of the bulk or gel systems. Unlike in the gel system, \(Q(t)\) for the random pinning approaches its convergent value from above. This is likely due to the arrangement of the obstacles in the random pinning system: since the pinned particles are dispersed through the entire space, the active particles have a higher chance of becoming trapped. The random system converges to a higher overlap value than the gel; this is due to the larger fraction of particles becoming localized in the random system compared to the gel at the same activity and density.
|
2301.01039 | Brass-Stancu-Kantorovich Operators on a Hypercube | We deal with multivariate Brass-Stancu-Kantorovich operators depending on a
non-negative integer parameter and defined on the space of all Lebesgue
integrable functions on a unit hypercube. We prove $L^{p}$-approximation and
provide estimates for the $L^{p}$-norm of the error of approximation in terms
of a multivariate averaged modulus of continuity and of the corresponding
$L^{p}$-modulus. | Gülen Başcanbaz-Tunca, Heiner Gonska | 2023-01-03T10:59:12Z | http://arxiv.org/abs/2301.01039v1 | # Brass-Stancu-Kantorovich operators on a hypercube\({}^{*}\)
###### Abstract.
We deal with multivariate Brass-Stancu-Kantorovich operators depending on a non-negative integer parameter and defined on the space of all Lebesgue integrable functions on a unit hypercube. We prove \(L^{p}\)-approximation and provide estimates for the \(L^{p}\)-norm of the error of approximation in terms of a multivariate averaged modulus of continuity and of the corresponding \(L^{p}\)-modulus.
Key words and phrases: Multivariate Kantorovich operator; multivariate averaged modulus of smoothness; multivariate \(K\)-functional. 2010 MSC: 41A36, 41A25, 26A45. \({}^{*}\)This paper is an extension of a talk given at ICATA 2022.
## 1. Introduction
For \(f\in C\left[0,1\right]\), \(x\in\left[0,1\right]\), \(n\in\mathbb{N}\) and a non-negative integer parameter \(r\) with \(0\leq r\leq n\), the Brass-Stancu operators are given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.1}\]
where \(p_{n,k}\left(x\right)=\binom{n}{k}x^{k}\left(1-x\right)^{n-k}\), \(0\leq k\leq n\), denote the Bernstein basis polynomials.
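In the special case \(r=0\), the bracket collapses and the operators reduce to the classical Bernstein operators,
\[E_{n,0}\left(f;x\right)=\sum_{k=0}^{n}p_{n,k}\left(x\right)\left[\left(1-x\right)+x\right]f\left(\frac{k}{n}\right)=\sum_{k=0}^{n}p_{n,k}\left(x\right)f\left(\frac{k}{n}\right)=B_{n}\left(f;x\right),\]
which illustrates how the parameter \(r\) generalizes the Bernstein construction.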
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.3}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.4}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.5}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.6}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.7}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.8}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.9}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.1}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.1}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.2}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.3}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.4}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.5}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.6}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.7}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.8}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.9}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.10}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.11}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.12}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.13}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left[\left(1-x \right)f\left(\frac{k}{n}\right)+xf\left(\frac{k+r}{n}\right)\right], \tag{1.14}\]
where \(\left(x\right)\) is a solution of the system of equations (1.1) and (1.2) is given by
\[E_{n,r}\left(f;x\right)=\sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\
second order moduli of different types. See, e.g., the work of Berens and DeVore [5], [6], Swetits and Wood [25] and Gonska and Zhou [10]. It is beyond the scope of this note to further discuss this matter. As further work on the classical case here we only mention the 1976 work of Muller [16], Maier [15], and Altomare et al. [1], see also the references therein.
Similarly to Kantorovich operators Bodur et al. [7] constructed a Kantorovich type modification of BSB operators as
\[K_{n,r}\left(f;x\right):=\sum_{k=0}^{n}w_{n,k,r}(x)\left(\left(n+1\right) \int\limits_{\frac{k}{n+1}}^{\frac{k+1}{n+1}}f\left(t\right)dt\right),\quad x \in[0,1], \tag{1.6}\]
for \(f\in L^{1}\left[0,1\right]\), where \(r\) is a non-negative integer parameter, \(n\) is a natural number such that \(n>2r\), and \(w_{n,k,r}(x)\) are given by (1.2). It was shown that if \(f\in L^{p}[0,1],\ 1\leq p<\infty,\) then \(\lim\limits_{n\rightarrow\infty}\left\|K_{n,r}(f)-f\right\|_{p}=0\); in addition, each \(K_{n,r}\) is variation-detracting [7]. Throughout the paper, we shall call the operators \(K_{n,r}\) given by (1.6) "Brass-Stancu-Kantorovich" (BSK) operators.
Notice that from the definition of \(w_{n,k,r}\), \(K_{n,r}\left(f;x\right)\) can be expressed as
\[K_{n,r}\left(f;x\right)\] \[= \sum_{k=0}^{n-r}p_{n-r,k}\left(x\right)\left(n+1\right)\left[ \left(1-x\right)\int\limits_{\frac{k}{n+1}}^{\frac{k+1}{n+1}}f\left(t\right) dt+x\int\limits_{\frac{k+r+1}{n+1}}^{\frac{k+r+1}{n+1}}f\left(t\right)dt\right]\]
and in the cases \(r=0\) and \(r=1\) they reduce to the Kantorovich operators; \(K_{n,0}=K_{n,1}=K_{n}\) given by (1.5). Again they are defined for all \(n\geq r\).
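For concreteness, the following minimal numerical sketch (not part of the original paper; a direct transcription of (1.1), (1.2) and (1.6), using scipy for the cell integrals) illustrates the univariate BSK operators and the fact that the cases \(r=0\) and \(r=1\) coincide with the Kantorovich operators.

```python
# Illustrative sketch only: a direct transcription of (1.1), (1.2), (1.6).
from math import comb
from scipy.integrate import quad

def p(n, k, x):                       # Bernstein fundamental function p_{n,k}(x)
    return comb(n, k) * x**k * (1 - x)**(n - k) if 0 <= k <= n else 0.0

def w(n, k, r, x):                    # Stancu's fundamental function w_{n,k,r}(x)
    return (1 - x) * p(n - r, k, x) + x * p(n - r, k - r, x)

def K(f, n, r, x):                    # Brass-Stancu-Kantorovich operator (1.6)
    return sum(w(n, k, r, x) * (n + 1) * quad(f, k / (n + 1), (k + 1) / (n + 1))[0]
               for k in range(n + 1))

f = lambda t: t * t
for r in (0, 1, 2):                   # r = 0 and r = 1 give the same (Kantorovich) values
    print(r, K(f, 10, r, 0.3))
```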
### Multivariate situation
Some work has been done in the multivariate setting for BSB and BSK operators. For the standard simplex this was done, e.g., by Yang, Xiong and Cao [27] and Cao [9]. For example, Cao proved that multivariate Stancu operators preserve the properties of multivariate moduli of continuity and obtained the rate of convergence with the help of Ditzian-Totik's modulus of continuity.
In this work, motivated by the work of Altomare et al. [3], we deal with a multivariate extension of the BSK operators on a \(d\)-dimensional unit hypercube and we study \(L^{p}\)-approximation by these operators. For the rate of convergence we provide an estimate in terms of the so-called first order multivariate \(\tau\)-modulus, a quantity coming from the Bulgarian school of Approximation Theory. Also, inspired by Muller's approach in [17], we give estimates for differentiable functions, as well as estimates in terms of the \(L^{p}\)-modulus of smoothness, using properties of the \(\tau\)-modulus. Here the work of Quak [20], [21] was helpful.
## 2. Preliminaries
Consider the space \(\mathbb{R}^{d},\ d\in\mathbb{N}\). Let \(\left\|\mathbf{x}\right\|_{\infty}\) denote the max-norm of a point \(\mathbf{x}=\left(x_{1},\ldots,x_{d}\right)\in\mathbb{R}^{d}\);
\[\left\|\mathbf{x}\right\|_{\infty}:=\left\|\mathbf{x}\right\|_{\max}=\max_{i\in \left\{1,\ldots,d\right\}}\left|x_{i}\right|\]
and let \(\mathbf{1}\) denote the constant function \(\mathbf{1}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) such that \(\mathbf{1}\left(\mathbf{x}\right)=1\) for \(\mathbf{x}\in\mathbb{R}^{d}.\) And, for each \(j=1,\ldots,d\), let
\[pr_{j}:\mathbb{R}^{d}\rightarrow\mathbb{R}\]
stand for the \(j\)th coordinate function defined for \(\mathbf{x}\in\mathbb{R}^{d}\) by
\[pr_{j}\left(\mathbf{x}\right)=x_{j}.\]
**Definition 2.1**.: _A multi-index is a \(d\)-tuple \(\boldsymbol{\alpha}=\left(\alpha_{1},\ldots,\alpha_{d}\right)\) of non-negative integers. Its norm (length) is the quantity_
\[\left|\boldsymbol{\alpha}\right|=\sum_{i=1}^{d}\alpha_{i}.\]
_The differential operator \(D^{\boldsymbol{\alpha}}\) is defined by_
\[D^{\boldsymbol{\alpha}}f=D_{1}^{\alpha_{1}}\cdots D_{d}^{\alpha_{d}}f,\]
_where \(D_{i},\ i=1,\ldots,d\), is the corresponding partial derivative operator (see [4, p. 335])._
Throughout the paper \(Q_{d}:=\left[0,1\right]^{d},\ d\in\mathbb{N}\), will denote the \(d\)-dimensional unit hypercube and we consider the space
\[L^{p}\left(Q_{d}\right)=\left\{f\ :Q_{d}\rightarrow\mathbb{R}\ |\ f\text{ $p$-integrable on }Q_{d}\right\},\ 1\leq p<\infty,\]
with the standard norm \(\left\|.\right\|_{p}\). Recall the following definition of the usual \(L^{p}\)-modulus of smoothness of first order:
**Definition 2.2**.: _Let \(f\in L^{p}\left(Q_{d}\right),\ 1\leq p<\infty\), \(\mathbf{h}\in\mathbb{R}^{d}\) and \(\delta>0\). The modulus of smoothness of the first order for the function \(f\) and step \(\delta\) in \(L^{p}\)-norm is given by_
\[\omega_{1}\left(f;\delta\right)_{p}=\sup_{0<\left\|\mathbf{h}\right\|_{\infty }\leq\delta}\left(\int_{Q_{d}}\left|f\left(\mathbf{x}+\mathbf{h}\right)-f \left(\mathbf{x}\right)\right|^{p}d\mathbf{x}\right)^{1/p}\]
_if \(\mathbf{x},\mathbf{x}+\mathbf{h}\in Q_{d}\)[21]._
Let \(M\left(Q_{d}\right):=\left\{f\ |\ f\text{ bounded and measurable on }Q_{d}\right\}\). Below, we present the concept of the first order averaged modulus of smoothness.
**Definition 2.3**.: _Let \(f\in M\left(Q_{d}\right),\ \mathbf{h}\in\mathbb{R}^{d}\) and \(\delta>0\). The multivariate averaged modulus of smoothness, or \(\tau\)-modulus, of the first order for function \(f\) and step \(\delta\) in \(L^{p}\)-norm is given by_
\[\tau_{1}\left(f,\delta\right)_{p}:=\left\|\omega_{1}\left(f,\cdot\,;\delta\right)\right\|_{p},\ 1\leq p<\infty,\]
_where_
\[\begin{array}{l}\omega_{1}\left(f,\mathbf{x};\delta\right)=\\ \sup\left\{\left|f\left(\mathbf{t}+\mathbf{h}\right)-f\left(\mathbf{t}\right) \right|:\mathbf{t},\mathbf{t}+\mathbf{h}\in Q_{d},\ \left\|\mathbf{t}-\mathbf{x}\right\|_{\infty}\leq\frac{\delta}{2},\left\| \mathbf{t}+\mathbf{h}-\mathbf{x}\right\|_{\infty}\leq\frac{\delta}{2}\right\} \end{array}\]
_is the multivariate local modulus of smoothness of first order for the function \(f\) at the point \(\mathbf{x}\in Q_{d}\) and for step \(\delta\). [21]._
For our future purposes, we need the following properties of first order multivariate averaged modulus of smoothness:
For \(f\in M\left(Q_{d}\right),\;1\leq p<\infty\) and \(\delta,\lambda,\gamma\in\mathbb{R}^{+}\), there hold
* \(\tau 1)\) \(\tau_{1}\left(f,\delta\right)_{p}\leq\tau_{1}\left(f,\lambda\right)_{p}\) for \(0<\delta\leq\lambda\),
* \(\tau 2)\) \(\tau_{1}\left(f,\lambda\delta\right)_{p}\leq\left(2\left\lfloor\lambda\right\rfloor+2\right)^{d+1}\tau_{1}\left(f,\delta\right)_{p}\), where \(\left\lfloor\lambda\right\rfloor\) is the greatest integer that does not exceed \(\lambda\),
* \(\tau 3)\) \(\tau_{1}\left(f,\delta\right)_{p}\leq 2\sum\limits_{\left|\boldsymbol{\alpha}\right|\geq 1}\left(\frac{\delta}{2}\right)^{\left|\boldsymbol{\alpha}\right|}\left\|D^{\boldsymbol{\alpha}}f\right\|_{p}\), whenever \(D^{\boldsymbol{\alpha}}f\in L^{p}\left(Q_{d}\right)\) for all multi-indices \(\boldsymbol{\alpha}\) with \(\left|\boldsymbol{\alpha}\right|\geq 1\) and \(\alpha_{i}=0\) or \(1\) (see [19] or [21]).
For a detailed knowledge concerning averaged modulus of smoothness, we refer to the book of Sendov and Popov [22].
Now, consider the Sobolev space \(W_{1}^{p}\left(Q_{d}\right)\) of functions \(f\in L^{p}\left(Q_{d}\right),\;1\leq p<\infty\), with (distributional) derivatives \(D^{\boldsymbol{\alpha}}f\) belong to \(L^{p}\left(Q_{d}\right)\), where \(\left|\boldsymbol{\alpha}\right|\leq 1\), with the seminorm
\[\left|f\right|_{W_{1}^{p}}=\sum\limits_{\left|\boldsymbol{\alpha}\right|=1} \left\|D^{\boldsymbol{\alpha}}f\right\|_{p}\]
(see [4, p. 336]). Recall that for all \(f\in L^{p}\left(Q_{d}\right)\) the \(K\)-functional, in \(L^{p}\)-norm, is defined as
\[K_{1,p}\left(f;t\right):=\inf\left\{\left\|f-g\right\|_{p}+t\left|g\right|_{W _{1}^{p}}:g\in W_{1}^{p}\left(Q_{d}\right)\right\}\quad\left(t>0\right). \tag{2.1}\]
\(K_{1,p}\left(f;t\right)\) is equivalent with the usual first order modulus of smoothness of \(f\), \(\omega_{1}\left(f;t\right)_{p}\); namely, there are positive constants \(c_{1}\) and \(c_{2}\) such that
\[c_{1}K_{1,p}\left(f;t\right)\leq\omega_{1}\left(f;t\right)_{p}\leq c_{2}K_{1, p}\left(f;t\right)\quad\left(t>0\right) \tag{2.2}\]
holds for all \(f\in L^{p}\left(Q_{d}\right)\) (see [4, Formula 4.42 in p. 341]).
The following result due to Quak [21] is an upper estimate for the \(L^{p}\)-norm of the approximation error by the multivariate positive linear operators in terms of the first order averaged modulus of smoothness. Note that this idea was used first by Popov for the univariate case in [18].
**Theorem 2.1**.: _Let \(L:M\left(Q_{d}\right)\to M\left(Q_{d}\right)\) be a positive linear operator that preserves the constants. Then for every \(f\in M\left(Q_{d}\right)\) and \(1\leq p<\infty\), the following estimate holds:_
\[\left\|L(f)-f\right\|_{p}\leq C\tau_{1}\left(f,\sqrt[2d]{A}\right)_{p},\]
_where \(C\) is a positive constant and_
\[A:=\sup\left\{L\left(\left(pr_{i}\circ\psi_{\mathbf{x}}\right)^{2};\mathbf{x} \right):i=1,\ldots,d,\;\mathbf{x}\in Q_{d}\right\},\]
_in which \(\psi_{\mathbf{x}}\left(\mathbf{y}\right):=\mathbf{y}-\mathbf{x}\) for fixed \(\mathbf{x}\in Q_{d}\) and for every \(\mathbf{y}\in Q_{d}\), provided that \(A\leq 1\) [21]._
## 3. Multivariate BSK-Operators
In this section, motivated by the works of Altomare et al. [1] and Altomare et al. [3], we consider the multivariate extension of BSK-operators on \(L^{p}\left(Q_{d}\right)\) and study approximation properties of these operators in \(L^{p}\)-norm. We investigate the rate of the convergence in terms of the first order \(\tau\)-modulus and the usual \(L^{p}\)-modulus of smoothness of the first order.
Let \(r\) be a given non-negative integer. For any \(n\in\mathbb{N}\) such that \(n>2r,\ \mathbf{k}=\left(k_{1},\ldots,k_{d}\right)\in\left\{0,\ldots,n\right\}^{d}\) and \(\mathbf{x}=\left(x_{1},\ldots,x_{d}\right)\in Q_{d}\), we set
\[w_{n,\mathbf{k},r}(\mathbf{x}):=\prod_{i=1}^{d}w_{n,k_{i},r}(x_{i}), \tag{3.1}\]
where, \(w_{n,k_{i},r}(x_{i})\) is Stancu's fundamental function given by (1.2), written for each \(i=1,\ldots,d\), \(0\leq k_{i}\leq n\) and \(x_{i}\in\left[0,1\right]\). Thus, for \(\mathbf{x}\in Q_{d}\), we have
\[w_{n,\mathbf{k},r}(\mathbf{x})\geq 0\text{ and }\sum_{\mathbf{k}\in\left\{0, \ldots,n\right\}^{d}}w_{n,\mathbf{k},r}(\mathbf{x})=1. \tag{3.2}\]
For \(f\in L^{1}\left(Q_{d}\right)\) and \(\mathbf{x}=\left(x_{1},\ldots,x_{d}\right)\in Q_{d}\) we consider the following multivariate extension of the BSK-operators \(K_{n,r}\) given by (1.6):
\[K_{n,r}^{d}\left(f;\mathbf{x}\right)=\sum_{k_{1},\ldots,k_{d}=0}^{n}\prod_{i=1 }^{d}w_{n,k_{i},r}(x_{i})\int\limits_{Q_{d}}f\left(\frac{k_{1}+u_{1}}{n+1}, \ldots,\frac{k_{d}+u_{d}}{n+1}\right)du_{1}\cdots du_{d}.\]
Notice that from (3.1), and denoting, as usual, the value of any \(f\in L^{1}\left(Q_{d}\right)\) at \(\mathbf{x}=\left(x_{1},\ldots,x_{d}\right)\in Q_{d}\) by \(f\left(\mathbf{x}\right)=f\left(x_{1},\ldots,x_{d}\right)\), we can express these operators in compact form as
\[K_{n,r}^{d}\left(f;\mathbf{x}\right)=\sum_{\mathbf{k}\in\left\{0,\ldots,n \right\}^{d}}w_{n,\mathbf{k},r}(\mathbf{x})\int\limits_{Q_{d}}f\left(\frac{ \mathbf{k}+\mathbf{u}}{n+1}\right)d\mathbf{u}. \tag{3.3}\]
It is clear that multivariate BSK-operators are positive and linear and the cases \(r=0\) and \(1\) give the multivariate Kantorovich operators on the hypercube \(Q_{d}\), which can be captured from [1] as a special case.
**Lemma 3.1**.: _For \(\mathbf{x}\in Q_{d}\), we have_
\[K_{n,r}^{d}\left(\mathbf{1};\mathbf{x}\right) = 1,\] \[K_{n,r}^{d}\left(pr_{i};\mathbf{x}\right) = \frac{n}{n+1}x_{i}+\frac{1}{2\left(n+1\right)},\] \[K_{n,r}^{d}\left(pr_{i}^{2};\mathbf{x}\right) = \frac{n^{2}}{\left(n+1\right)^{2}}\left[x_{i}^{2}+\left(1+\frac{r \left(r-1\right)}{n}\right)\frac{x_{i}\left(1-x_{i}\right)}{n}\right]\] \[+\frac{3nx_{i}+1}{3\left(n+1\right)^{2}},\]
_for \(i=1,\ldots,d\)._
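Because the operators factorize over the coordinates, the moments in Lemma 3.1 reduce to univariate computations; the following small numerical check (an illustration, not part of the paper) compares the stated closed forms with a direct evaluation.

```python
# Numerical sanity check of Lemma 3.1 (illustrative only).
from math import comb

def p(n, k, x):
    return comb(n, k) * x**k * (1 - x)**(n - k) if 0 <= k <= n else 0.0

def w(n, k, r, x):
    return (1 - x) * p(n - r, k, x) + x * p(n - r, k - r, x)

def moment(n, r, x, j):
    # sum_k w_{n,k,r}(x) * (n+1) * integral of t^j over [k/(n+1), (k+1)/(n+1)], in closed form
    return sum(w(n, k, r, x) * ((k + 1)**(j + 1) - k**(j + 1)) / ((j + 1) * (n + 1)**j)
               for k in range(n + 1))

n, r, x = 12, 3, 0.37
print(moment(n, r, x, 0), 1.0)
print(moment(n, r, x, 1), n / (n + 1) * x + 1 / (2 * (n + 1)))
print(moment(n, r, x, 2),
      n**2 / (n + 1)**2 * (x**2 + (1 + r * (r - 1) / n) * x * (1 - x) / n)
      + (3 * n * x + 1) / (3 * (n + 1)**2))
```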
Taking this lemma into consideration, by the well-known theorem of Volkov [26], we immediately get that
**Theorem 3.1**.: _Let \(r\) be a non-negative fixed integer and \(f\in C\left(Q_{d}\right)\). Then \(\lim\limits_{n\rightarrow\infty}K_{n,r}^{d}\left(f\right)=f\) uniformly on \(Q_{d}\)._
Now, we need the following evaluations for the subsequent result: For \(0\leq x_{i}\leq 1,\ i=1,\ldots,d,\) we have
\[\int\limits_{0}^{1}\left(1-x_{i}\right)p_{n-r,k_{i}}\left(x_{i} \right)dx_{i} = \binom{n-r}{k_{i}}\int\limits_{0}^{1}x_{i}^{k_{i}}\left(1-x_{i} \right)^{n-r-k_{i}+1}dx_{i}\] \[= \frac{n-r-k_{i}+1}{\left(n-r+2\right)\left(n-r+1\right)}\]
when \(0\leq k_{i}<r\) and
\[\int\limits_{0}^{1}x_{i}p_{n-r,k_{i}-r}\left(x_{i}\right)dx_{i} = \binom{n-r}{k_{i}-r}\int\limits_{0}^{1}x_{i}^{k_{i}-r+1}\left(1-x _{i}\right)^{n-k_{i}}dx_{i}\] \[= \frac{k_{i}-r+1}{\left(n-r+2\right)\left(n-r+1\right)}\]
when \(n-r<k_{i}\leq n.\) Thus, from (1.1) and (1.2), it follows that
\[\int\limits_{0}^{1}w_{n,k_{i},r}(x_{i})dx_{i}=\left\{\begin{array}{ll}\frac{ n-r-k_{i}+1}{\left(n-r+2\right)\left(n-r+1\right)};&0\leq k_{i}<r\\ \frac{n-2r+2}{\left(n-r+2\right)\left(n-r+1\right)};&r\leq k_{i}\leq n-r\\ \frac{k_{i}-r+1}{\left(n-r+2\right)\left(n-r+1\right)};&n-r<k_{i}\leq n\end{array} \right.. \tag{3.4}\]
Note that we can write the following estimates
\[n-r-k_{i}+1 \leq n-r+1\ \text{when}\ 0\leq k_{i}<r,\] \[n-2r+2 \leq n-r+1\ \text{when}\ r\leq k_{i}\leq n-r,\] \[k_{i}-r+1 \leq n-r+1\ \text{when}\ n-r<k_{i}\leq n \tag{3.5}\]
for each \(i=1,\ldots,d,\) where in the middle term, we have used the hypothesis \(n>2r\). Making use of (3.5), (3.4) and (3.1), we obtain
\[\int\limits_{Q_{d}}w_{n,\mathbf{k},r}(\mathbf{x})d\mathbf{x}=\prod\limits_{i =1}^{d}\int\limits_{0}^{1}w_{n,k_{i},r}(x_{i})dx_{i}\leq\frac{1}{\left(n-r+2 \right)^{d}}. \tag{3.6}\]
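The case distinction in (3.4) can be spot-checked symbolically; the short script below (illustrative, not part of the paper) verifies it for one admissible pair \((n,r)\).

```python
# Symbolic spot-check of (3.4) for one instance (illustrative only).
import sympy as sp

x = sp.symbols('x')
n, r = 9, 3                                    # any n > 2r works

def p(m, k):                                   # Bernstein basis as a sympy expression
    return sp.binomial(m, k) * x**k * (1 - x)**(m - k) if 0 <= k <= m else sp.Integer(0)

for k in range(n + 1):
    w = (1 - x) * p(n - r, k) + x * p(n - r, k - r)
    lhs = sp.integrate(w, (x, 0, 1))
    if k < r:
        rhs = sp.Rational(n - r - k + 1, (n - r + 2) * (n - r + 1))
    elif k <= n - r:
        rhs = sp.Rational(n - 2 * r + 2, (n - r + 2) * (n - r + 1))
    else:
        rhs = sp.Rational(k - r + 1, (n - r + 2) * (n - r + 1))
    assert sp.simplify(lhs - rhs) == 0
print("(3.4) verified for n = 9, r = 3")
```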
\(L^{p}\)-approximation by the sequence of the multivariate Stancu-Kantorovich operators is presented in the following theorem.
**Theorem 3.2**.: _Let \(r\) be a non-negative fixed integer and \(f\in L^{p}\left(Q_{d}\right),\ 1\leq p<\infty\). Then \(\lim\limits_{n\rightarrow\infty}\left\|K_{n,r}^{d}(f)-f\right\|_{p}=0.\)_
Proof.: Since the cases \(r=0\) and \(1\) correspond to the multivariate Kantorovich operators (see [1] or [3]), we consider only the case \(r>1\), with \(r\) taken as fixed. From Theorem 3.1, we obtain that \(\lim\limits_{n\rightarrow\infty}\left\|K_{n,r}^{d}(f)-f\right\|_{p}=0\) for any \(f\in C\left(Q_{d}\right).\) Since \(C\left(Q_{d}\right)\) is dense in \(L^{p}\left(Q_{d}\right)\), denoting the norm of the operator \(K_{n,r}^{d}\), acting from \(L^{p}\left(Q_{d}\right)\) into itself, by \(\left\|K_{n,r}^{d}\right\|\), it remains to show that there exists a positive constant \(M_{r}\), which may depend on \(r\), such that \(\left\|K_{n,r}^{d}\right\|\leq M_{r}\) for all \(n>2r\). Now, as in [3, p.604], we adopt the notation
\[Q_{n,\mathbf{k}}:=\prod\limits_{i=1}^{d}\left[\frac{k_{i}}{n+1},\frac{k_{i}+1 }{n+1}\right]\subset Q_{d};\ \bigcup\limits_{\mathbf{k}\in\left\{0,\ldots,n\right\}^{d}}Q_{n,\mathbf{k}}=Q _{d}.\]
Making use of the convexity of the function \(\varphi\left(t\right):=\left|t\right|^{p},\ t\in\mathbb{R},\ 1\leq p<\infty\) (see, e.g., [2]), and (3.2), for every \(f\in L^{p}\left(Q_{d}\right),\ n>2r,\) and \(\mathbf{x}\in Q_{d},\) we obtain
\[\left|K_{n,r}^{d}\left(f;\mathbf{x}\right)\right|^{p} \leq \sum_{\mathbf{k}\in\left\{0,\ldots,n\right\}^{d}}w_{n,\mathbf{k}, r}(\mathbf{x})\int\limits_{Q_{d}}\left|f\left(\frac{\mathbf{k}+\mathbf{u}}{n+1} \right)\right|^{p}d\mathbf{u}\] \[= \sum_{\mathbf{k}\in\left\{0,\ldots,n\right\}^{d}}w_{n,\mathbf{k}, r}(\mathbf{x})\left(n+1\right)^{d}\int\limits_{Q_{n,\mathbf{k}}}\left|f\left( \mathbf{v}\right)\right|^{p}d\mathbf{v}.\]
Taking (3.6) into consideration, we arrive at
\[\int\limits_{Q_{d}}\left|K_{n,r}^{d}\left(f;\mathbf{x}\right)\right|^{p}d \mathbf{x}\leq\sum_{\mathbf{k}\in\left\{0,\ldots,n\right\}^{d}}\left(\frac{n+1 }{n-r+2}\right)^{d}\int\limits_{Q_{n,\mathbf{k}}}\left|f\left(\mathbf{v} \right)\right|^{p}d\mathbf{v}.\]
Since \(\sup\limits_{n>2r}\left(\frac{n+1}{n-r+2}\right)^{d}=\left(\frac{2r+2}{r+3} \right)^{d}:=M_{r}\) for \(r>1,\) where \(1<\frac{2r+2}{r+3}<2,\) we get
\[\int\limits_{Q_{d}}\left|K_{n,r}^{d}\left(f;\mathbf{x}\right)\right|^{p}d\mathbf{x}\leq M_{r}\int\limits_{Q_{d}}\left|f\left(\mathbf{v}\right)\right|^{p}d\mathbf{v},\]
which implies that \(\left\|K_{n,r}^{d}\left(f\right)\right\|_{p}\leq M_{r}^{1/p}\left\|f\right\|_{p}\). Note that for the cases \(r=0\) and \(1\); we have \(M_{r}=1\) (see [3]). Therefore, the proof is completed.
## 4. Estimates for the rate of convergence
In [17], Muller studied \(L^{p}\)-approximation by the sequence of the Cheney-Sharma-Kantorovich operators (CSK). The author gave an estimate for this approximation in terms of the univariate \(\tau\)-modulus and moreover, using some properties of the \(\tau\)-modulus, he also obtained upper estimates for the \(L^{p}\)-norm of the error of approximation for first order differentiable functions as well as for continuous ones. In this part, we show that similar estimates can also be obtained for \(\left\|K_{n,r}^{d}\left(f\right)-f\right\|_{p}\) in the multivariate setting. Our first result is an application of Quak's method in Theorem 2.1
**Theorem 4.1**.: _Let \(r\) be a non-negative fixed integer, \(f\in M\left(Q_{d}\right)\) and \(1\leq p<\infty\). Then_
\[\left\|K_{n,r}^{d}\left(f\right)-f\right\|_{p}\leq C\tau_{1}\left(f,\sqrt[2d] {\frac{3n+1+3r\left(r-1\right)}{12\left(n+1\right)^{2}}}\right)_{p} \tag{4.1}\]
_for all \(n\in\mathbb{N}\) such that \(n>2r\), where the positive constant \(C\) does not depend on \(f\)._
Proof.: According to Theorem 2.1; by taking \(\psi_{\mathbf{x}}\left(\mathbf{y}\right)=\mathbf{y}-\mathbf{x}\) for fixed \(\mathbf{x}\in Q_{d}\) and for every \(\mathbf{y}\in Q_{d}\), and defining
\[A_{n,r}:=\sup\left\{K_{n,r}^{d}\left(\left(pr_{i}\circ\psi_{\mathbf{x}}\right) ^{2};\mathbf{x}\right):i=1,\ldots,d,\ \mathbf{x}\in Q_{d}\right\},\]
where \(\left(pr_{i}\circ\psi_{\mathbf{x}}\right)^{2}=pr_{i}^{2}-2x_{i}pr_{i}+x_{i}^{ 2}\mathbf{1}\), \(i=1,\ldots,d\), we get the following estimate
\[\left\|K_{n,r}^{d}\left(f\right)-f\right\|_{p}\leq C\tau_{1}\left(f;\sqrt[2d] {A_{n,r}}\right)\]
for any \(f\in M\left(Q_{d}\right)\), under the condition that \(A_{n,r}\leq 1\). Now, applying the operators \(K_{n,r}^{d}\) and making use of Lemma 3.1, for every \(i=1,\ldots,d\) and \(\mathbf{x}\in Q_{d}\), we obtain
\[K_{n,r}^{d}\left(\left(pr_{i}\circ\psi_{\mathbf{x}}\right)^{2}; \mathbf{x}\right) = \frac{n-1+r\left(r-1\right)}{\left(n+1\right)^{2}}x_{i}\left(1-x _{i}\right)+\frac{1}{3\left(n+1\right)^{2}}\] \[\leq \frac{n-1+r\left(r-1\right)}{4\left(n+1\right)^{2}}+\frac{1}{3 \left(n+1\right)^{2}}\] \[= \frac{3n+1+3r\left(r-1\right)}{12\left(n+1\right)^{2}}\]
for all \(n\in\mathbb{N}\) such that \(n>2r\), where \(r\in\mathbb{N}\cup\left\{0\right\}\). Therefore, since we have \(n\geq 2r+1\), we take \(r\leq\frac{n-1}{2}\) and obtain that \(A_{n,r}\leq\frac{3n+1+3r\left(r-1\right)}{12\left(n+1\right)^{2}}\leq 1\) is satisfied, which completes the proof.
Now, making use of the properties \(\tau 1)\)-\(\tau 3)\) of the multivariate first order \(\tau\)-modulus, we obtain
**Theorem 4.2**.: _Let \(r\) be a non-negative fixed integer, \(f\in L^{p}\left(Q_{d}\right),\ 1\leq p<\infty\), and \(D^{\boldsymbol{\alpha}}f\in L^{p}\left(Q_{d}\right)\) for all multi-indices \(\boldsymbol{\alpha}\) with \(\left|\boldsymbol{\alpha}\right|\geq 1,\ \alpha_{i}=0\) or \(1\). Then_
\[\left\|K_{n,r}^{d}\left(f\right)-f\right\|_{p}\leq 2C_{r}\sum_{\left| \boldsymbol{\alpha}\right|\geq 1}\left(\frac{1}{\sqrt[2d]{n+1}}\right)^{ \left|\boldsymbol{\alpha}\right|}\left\|D^{\boldsymbol{\alpha}}f\right\|_{p},\]
_for all \(n\in\mathbb{N}\) such that \(n>2r\), where \(C_{r}\) is a positive constant depending on \(r\)._
Proof.: Since \(n>2r\), we immediately have \(n+1\geq 2\left(r+1\right)\). Thus, the term appearing inside the \(2d\)th root in the formula (4.1) can be estimated, respectively, for \(r>1\), and \(r=0,1\), as
\[\frac{3n+1+3r\left(r-1\right)}{12\left(n+1\right)^{2}} = \frac{3n+3+3r\left(r-1\right)-2}{12(n+1)^{2}}\] \[= \frac{1}{n+1}\left[\frac{1}{4}+\frac{3r\left(r-1\right)-2}{12(n+1 )}\right]\] \[\leq \frac{1}{n+1}\left[\frac{1}{4}+\frac{3r\left(r-1\right)-2}{24(r+1 )}\right]\] \[= \frac{1}{n+1}\left[\frac{3r^{2}+3r+4}{24(r+1)}\right]\]
and
\[\frac{3n+1}{12\left(n+1\right)^{2}}=\frac{1}{n+1}\frac{3n+1}{4\left(3n+3 \right)}<\frac{1}{4\left(n+1\right)}.\]
Now, defining
\[B_{r}:=\left\{\begin{array}{cc}\frac{3r^{2}+3r+4}{24(r+1)};&r>1,\\ \frac{1}{4};&r=0,1,\end{array}\right.\]
and making use of the properties \(\tau_{1})\)-\(\tau_{3})\) of \(\tau\)-modulus, from (4.1), we arrive at
\[\left\|K_{n,r}^{d}\left(f\right)-f\right\|_{p} \leq C\tau_{1}\left(f,\sqrt[2d]{\frac{3n+1+3r\left(r-1\right)}{12\left( n+1\right)^{2}}}\right)_{p}\] \[\leq C\tau_{1}\left(f,\sqrt[2d]{B_{r}}\frac{1}{\sqrt[2d]{n+1}}\right) _{p}\] \[\leq C\left(2\left[\sqrt[2d]{B_{r}}\right]+2\right)^{d+1}\tau_{1} \left(f,\frac{1}{\sqrt[2d]{n+1}}\right)_{p}\] \[\leq 2C_{r}\sum_{|\boldsymbol{\alpha}|\geq 1}\left(\frac{1}{2\sqrt[2d]{n +1}}\right)^{|\boldsymbol{\alpha}|}\left\|D^{\boldsymbol{\alpha}}f\right\|_{p},\]
where the positive constant \(C_{r}\) is defined as \(C_{r}:=C\left(2\left[\sqrt[2d]{B_{r}}\right]+2\right)^{d+1}.\)
For non-differentiable functions we have the following estimate in terms of the first order modulus of smoothness, in \(L^{p}\)-norm.
**Theorem 4.3**.: _Let \(r\) be a non-negative fixed integer and \(f\in L^{p}\left(Q_{d}\right),\ 1\leq p<\infty.\) Then_
\[\left\|K_{n,r}^{d}\left(f\right)-f\right\|_{p}\leq c_{2}C_{r,p}\omega_{1} \left(f;\frac{1}{\sqrt[2d]{n+1}}\right)_{p},\]
_where \(\omega_{1}\) is the first order multivariate modulus of smoothness of \(f\) and \(C_{r,p}\) is a constant depending on \(r\) and \(p\)._
Proof.: By Theorem 3.2, since \(K_{n,r}^{d}\) is bounded, with \(\left\|K_{n,r}^{d}\right\|_{p}\leq M_{r}^{1/p}\), for all \(n\in\mathbb{N}\) such that \(n>2r\), we have \(\left\|K_{n,r}^{d}\left(g\right)-g\right\|_{p}\leq\left(M_{r}^{1/p}+1\right) \left\|g\right\|_{p}\) for \(g\in L^{p}\left(Q_{d}\right)\). Moreover, from Theorem 4.2, we can write
\[\left\|K_{n,r}^{d}\left(g\right)-g\right\|_{p}\leq 2C_{r}\sum_{|\boldsymbol{ \alpha}|\geq 1}\left(\frac{1}{2\sqrt[2d]{n+1}}\right)^{|\boldsymbol{\alpha}|} \left\|D^{\boldsymbol{\alpha}}g\right\|_{p}\]
for those \(g\) such that \(D^{\boldsymbol{\alpha}}g\in L^{p}\left(Q_{d}\right)\), for all multi-indices \(\boldsymbol{\alpha}\) with \(|\boldsymbol{\alpha}|\geq 1\) and \(\alpha_{i}=0\) or \(1.\) Hence, for \(f\in L^{p}\left(Q_{d}\right)\), it readily follows that
\[\left\|K_{n,r}^{d}\left(f\right)-f\right\|_{p} \leq \left\|K_{n,r}^{d}\left(f-g\right)-\left(f-g\right)\right\|_{p}+ \left\|K_{n,r}^{d}\left(g\right)-g\right\|_{p}\] \[\leq \left(M_{r}^{1/p}+1\right)\left\{\left\|f-g\right\|_{p}+2C_{r} \sum_{|\boldsymbol{\alpha}|\geq 1}\left(\frac{1}{2\sqrt[2d]{n+1}}\right)^{| \boldsymbol{\alpha}|}\left\|D^{\boldsymbol{\alpha}}g\right\|_{p}\right\}.\]
Passing to the infimum over all \(g\in W_{1}^{p}\left(Q_{d}\right)\) in the last formula, since the infimum over a superset does not exceed that over a subset, we obtain
\[\left\|K_{n,r}^{d}\left(f\right)-f\right\|_{p} \tag{4.2}\] \[\leq \left(M_{r}^{1/p}+1\right)\inf\left\{\left\|f-g\right\|_{p}+\frac{ 2C_{r}}{\sqrt[2d]{n+1}}\sum_{|\boldsymbol{\alpha}|=1}\left\|D^{\boldsymbol{ \alpha}}g\right\|_{p}:g\in W_{1}^{p}\left(Q_{d}\right)\right\}\] \[= \left(M_{r}^{1/p}+1\right)\inf\left\{\left\|f-g\right\|_{p}+\frac {2C_{r}}{\sqrt[2d]{n+1}}\left|g\right|_{W_{1}^{p}}:g\in W_{1}^{p}\left(Q_{d} \right)\right\}\] \[= \left(M_{r}^{1/p}+1\right)K_{1,p}\left(f;\frac{2C_{r}}{\sqrt[2d] {n+1}}\right),\]
where \(K_{1,p}\) is the \(K\)-functional given by (2.1). The proof follows from the equivalence (2.2) of the \(K\)-functional and the first order modulus of smoothness in \(L^{p}\)-norm and the non-decreasingness property of the modulus. Indeed, we get
\[K_{1,p}\left(f;\frac{2C_{r}}{\sqrt[2d]{n+1}}\right) \leq c_{2}\omega_{1}\left(f;\frac{2C_{r}}{\sqrt[2d]{n+1}}\right)_{p} \tag{4.3}\] \[\leq c_{2}\left(2C_{r}+1\right)\omega_{1}\left(f;\frac{1}{\sqrt[2d]{ n+1}}\right)_{p}.\]
Combining (4.3) with (4.2) and defining \(C_{r,p}:=\left(M_{r}^{1/p}+1\right)\left(2C_{r}+1\right)\), where \(M_{r}^{1/p}\) and \(C_{r}\) are the same as in Theorems 3.2 and 4.2, respectively, we obtain the desired result.
|
2301.02530 | ARIADNE+: Large scale demonstration of fast optical readout for
dual-phase LArTPCs at the CERN Neutrino Platform | Optical readout of large scale dual-phase liquid Argon TPCs is an attractive
alternative to charge readout and has been successfully demonstrated on a 2x2m
active region within the CERN protoDUNE cold box. ARIADNE+ uses four Timepix3
cameras imaging the S2 light produced by 16 novel, patent pending, glass
THGEMs. ARIADNE+ takes advantage of the raw Timepix3 data coming natively 3D
and zero suppressed with a 1.6ns timing resolution. Three of the four THGEM
quadrants implement readout in the visible light range through wavelength
shifting, with the fourth featuring a VUV light intensifier, thus removing the
need for wavelength shifting altogether. Cosmic ray reconstruction and energy
calibration was performed. Presented is a summary of the detector setup and
experimental run, preliminary analysis of the run data and future outlook for
the ARIADNE program. | Adam Lowe, Pablo Amedo, Diego González-Díaz, Alexander Deisting, Krishanu Majumdar, Konstantinos Mavrokoridis, Marzio Nessi, Barney Philippou, Francesco Pietropaolo, Sudikshan Ravinthiran, Filippo Resnati, Adam Roberts, Angela Saá Hernández, Christos Touramanis, Jared Vann | 2023-01-06T14:42:10Z | http://arxiv.org/abs/2301.02530v3 | ARIADNE\({}^{+}\): Large scale demonstration of fast optical readout for dual-phase LArTPCs at the CERN Neutrino Platform
###### Abstract
Optical readout of large scale dual-phase liquid Argon TPCs is an attractive alternative to charge readout and has been successfully demonstrated on a 2x2m active region within the CERN protoDUNE cold box. ARIADNE\({}^{+}\) uses four Timepix3 cameras imaging the S2 light produced by 16 novel, patent pending, glass THGEMs. ARIADNE\({}^{+}\) takes advantage of the raw Timepix3 data coming natively 3D and zero suppressed with a 1.6ns timing resolution. Three of the four THGEM quadrants implement readout in the visible light range through wavelength shifting, with the fourth featuring a VUV light intensifier, thus removing the need for wavelength shifting altogether. Cosmic ray reconstruction and energy calibration was performed. Presented is a summary of the detector setup and experimental run, preliminary analysis of the run data and future outlook for the ARIADNE program.
Glass Thick Gaseous Electron Multipliers (G-THGEMs); Thick Gaseous Electron Multipliers (THGEM); Large Electron Multiplier (LEM); Micropattern gaseous detectors; Time Projection Chambers (TPC); Noble liquid detectors; Photon Detectors for UV, Visible and IR Photons (Solid-state)
## 1 Introduction
With the size of Liquid Argon Time Projection Chambers (LArTPCs) ever increasing, and the cost of constructing and operating such detectors following a similar trend, the importance of R&D into alternative readout methods has never been greater. The ARIADNE (**AR**gon **Im**A**ging **D**etectio**N** cham**E**r) Experiment, a 1-ton dual-phase LArTPC, has demonstrated the viability of optical readout as an alternative to charge readout [1] and ARIADNE\({}^{+}\) tests the feasibility of scaling up this technology further.
The ARIADNE program utilises the 1.6ns timing resolution and native 3D raw data of the Timepix3 camera to image the wavelength-shifted secondary scintillation light generated by a THGEM (THick
Gaseous Electron Multiplier) within the gas phase of the dual-phase LArTPC. The main detection principle sees incoming particles ionising LAr and creating prompt scintillation light (known as S1). The ionisation electrons then drift towards an extraction grid situated below the liquid level where they are transferred to the gas phase and subsequently amplified using a THGEM. The drift charge multiplication produces secondary scintillation light (S2) which is wavelength-shifted before imaging with a Timepix3 camera.
With no need for thousands of internal charge TPC readout channels, pre-amps etc., reduced construction cost is one of a number of advantages of ARIADNE technology. The Timepix3 camera, for example, with 256x256 pixels, can achieve a spatial resolution of approx. 1mm on a 32x32cm active region. One THGEM-accelerated electron can generate hundreds of scintillation photons, increasing sensitivity to low energies; Timepix3 is sensitive to single photons, providing a high signal-to-noise ratio. The cameras are mounted externally relative to the TPC, which means low noise (they are decoupled from the TPC electronics) and makes the technology easy to swap out and repair.
The move within ARIADNE from EMCCDs (Electron Multiplying CCDs) to Timepix3 brought with it natively 3D readout, where XY is given by the pixel position, the Time of Arrival (ToA) is equivalent to Z, and the Time over Threshold (ToT) is analogous to intensity. Given the pixel-driven readout, as compared to frame-based readout, events come natively zero-suppressed and therefore require only a few kilobytes of storage.
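As an illustration of how such pixel-driven data can be used, the sketch below converts raw Timepix3 hits into 3D points; the pixel pitch, ToA binning and drift velocity are placeholder assumptions, not the calibration constants actually used in ARIADNE or ARIADNE\({}^{+}\).

```python
# Illustrative sketch only: turning Timepix3 hits (pixel, ToA, ToT) into 3D points.
# All constants below are placeholder assumptions, not ARIADNE(+) calibration values.
from dataclasses import dataclass

PIXEL_TO_MM = 1000.0 / 256.0     # assumed: 256x256 pixels imaging a 1x1m quadrant (~4 mm/pixel)
TOA_BIN_NS = 1.5625              # Timepix3 fine ToA binning (~1.6 ns)
DRIFT_MM_PER_NS = 1.6e-3         # assumed electron drift velocity in LAr

@dataclass
class Hit:
    px: int    # pixel column
    py: int    # pixel row
    toa: int   # time of arrival, in ToA bins
    tot: int   # time over threshold (intensity proxy)

def hit_to_point(hit: Hit, t0_bins: int):
    """Map one hit to (x_mm, y_mm, z_mm, intensity); t0_bins marks the event start."""
    x_mm = hit.px * PIXEL_TO_MM
    y_mm = hit.py * PIXEL_TO_MM
    z_mm = (hit.toa - t0_bins) * TOA_BIN_NS * DRIFT_MM_PER_NS
    return x_mm, y_mm, z_mm, hit.tot

print(hit_to_point(Hit(px=120, py=40, toa=2048, tot=35), t0_bins=1024))
```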
With tests carried out at Liverpool showing promise on expanding the field of view for one camera from the 32cmx32cm on ARIADNE to 1mx1m, successfully imaging larger active regions is the main focus behind the ARIADNE\({}^{+}\) detector, alongside other developments in dual-phase readout. The protoDUNE cold box located at the CERN Neutrino Platform offers a 5x5m cryogenic vessel with a cathode as a test bed for scaling up ARIADNE technology with the support and expertise of the Neutrino Platform team. This is a key step in validating the feasibility of ARIADNE readout for kilo-tonne LAr detectors such as the DUNE far detector.
## 2 The ARIADNE\({}^{+}\) Detector
Testing optical readout on a scale relevant for DUNE, the ARIADNE\({}^{+}\) detector was built within the protoDUNE cold box located at the CERN Neutrino Platform and had an experimental run from February to April of 2022. The cold box itself is a 15-tonne cryogenic vessel, refurbished in 2021 to accommodate testing of vertical drift CRPs (Charge Readout Planes for DUNE). For ARIADNE\({}^{+}\), a LRP (Light Readout Plane) was mounted underneath the lid of the cold box, suspended 20cm above the cathode and imaged using four Timepix3 cameras.
### LRP (Light Readout Plane)
The LRP itself is comprised of a 2.3x2.3m Invar support frame, chosen for its uniquely low coefficient of thermal contraction, and an assembly of extraction grid, THGEM and WLS PEN glass. The extraction grid is made up of chemically-etched stainless steel pieces of a modular design, intended on reducing sag across the area of the LRP. The extraction grid is mounted 15mm from the 50x50cm glass THGEMs. Liverpool University patent approved glass THGEMs [3] have increased rigidity over FR4 THGEMs, vital for area sizes such as these within ARIADNE\({}^{+}\), and feature 'hour-glass' shaped holes which collect charge over time and increase light output at lower biases than FR4 THGEMs. For the three quadrants with Timepix3 cameras imaging visible light, situated above the THGEMs is a PEN film-coated glass.
One Timepix3 camera images direct VUV (Vacuum Ultraviolet) light via a VUV intensifier. A custom-made 5mm diameter Magnesium-Fluoride lens focuses light from a 1x1m active region onto the intensifier's photo-cathode for imaging with the Timepix3 camera.
For the duration of the three week ARIADNE\({}^{+}\) run, the purity was continuously monitored by the CERN Neutrino Platform team, and an electron lifetime of approximately 0.5msec was ensured throughout. S1 data was also collected using X-ARAPUCAS embedded within the cold box cathode for S1/S2 analysis which is ongoing.
Figure 1: The cold box at the Neutrino Platform in optical configuration.
Figure 2: Cross-section of the ARIADNE\({}^{+}\) LRP assembly (Left) and LRP integration
## 3 Results
### Gallery of Events
Three weeks of cosmic data was taken with the ARIADNE\({}^{+}\) detector, Figure 3 is a selection of through-going muons, imaged using visible light over 1x1m active area, with a spatial resolution of approximately 4mm. Figure 4 is a selection of, again, through-going muons, imaged this time using a VUV intensifier over 1x1m, with again approximately 4mm spatial resolution.
### 30 Second Exposure Cosmics
When the total ToT is summed for 30 seconds for both visible light and VUV, Figure 5 is produced. This is for one camera imaging a 1x1m active area, i.e. one quadrant of the LRP, comprising four THGEMs. The VUV light was focused using a custom MgF\({}_{2}\) lens which does have noticeable vignetting effects; this will be corrected in future analysis or in further development of the VUV optics.
Figure 4: VUV events with a spatial resolution of approx. 4mm.
Figure 3: Visible events with a spatial resolution of approx. 4mm.
### Energy Calibration and Resolution
By selecting only through-going muons after track fitting (i.e. muons that pass through the THGEM plane and extend more than 19cm in depth), it is possible to obtain an energy calibration and resolution, given the muons' well-known mean energy deposition rate in LAr of 2.12 MeV/cm [4]. The process for obtaining these values is given in greater detail in [2]. The energy calibration for imaging one of the large 1x1m quadrants, depicted in Figure 6, was **199.10 \(\pm\) 1.73** ADU per MeV. The energy resolution (preliminary) is approximately **16.73 \(\pm\) 0.16**% using tracks of an average length of approx. 22cm, as seen in the dX plot (Figure 6).
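A minimal sketch of this calibration logic is given below; it is illustrative only (toy numbers, not the ARIADNE\({}^{+}\) analysis code), but shows how the summed ToT per selected track and the 2.12 MeV/cm figure combine into an ADU-per-MeV constant.

```python
# Illustrative sketch of the calibration logic (toy numbers, not the real analysis).
import numpy as np

MEAN_DEDX_MEV_PER_CM = 2.12   # mean muon energy deposition rate in LAr

def adu_per_mev(tracks):
    """tracks: iterable of (summed_tot_adu, track_length_cm) for through-going muons."""
    ratios = [tot / (MEAN_DEDX_MEV_PER_CM * length) for tot, length in tracks]
    return float(np.mean(ratios)), float(np.std(ratios) / np.sqrt(len(ratios)))

toy_tracks = [(9300, 22.0), (8800, 21.1), (9900, 23.4)]   # made-up values
print(adu_per_mev(toy_tracks))   # -> (calibration constant, statistical uncertainty)
```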
## 4 Conclusions
The ARIADNE\({}^{+}\) collaboration has successfully demonstrated dual-phase optical readout with stable detector conditions at a 2x2m active area scale within the CERN Neutrino Platform cold box. This demonstration substantiates further the scalability of the TPX3 camera and THGEM technologies for their application to the kton-scale far LAr detector planned for the DUNE experiment.
Given the preliminary results presented at NuFACT2022, ARIADNE technology is a serious candidate for the DUNE LAr far detector. Further testing of the technology is required at larger scales; the envisioned next step is the instrumentation of the NP02 cryostat at the CERN Neutrino Platform with an optical TPC, including beam data.
|
2310.14413 | Data Augmentation: a Combined Inductive-Deductive Approach featuring
Answer Set Programming | Although the availability of a large amount of data is usually given for
granted, there are relevant scenarios where this is not the case; for instance,
in the biomedical/healthcare domain, some applications require to build huge
datasets of proper images, but the acquisition of such images is often hard for
different reasons (e.g., accessibility, costs, pathology-related variability),
thus causing limited and usually imbalanced datasets. Hence, the need for
synthesizing photo-realistic images via advanced Data Augmentation techniques
is crucial. In this paper we propose a hybrid inductive-deductive approach to
the problem; in particular, starting from a limited set of real labeled images,
the proposed framework makes use of logic programs for declaratively specifying
the structure of new images, that is guaranteed to comply with both a set of
constraints coming from the domain knowledge and some specific desiderata. The
resulting labeled images undergo a dedicated process based on Deep Learning in
charge of creating photo-realistic images that comply with the generated label. | Pierangela Bruno, Francesco Calimeri, Cinzia Marte, Simona Perri | 2023-10-22T21:02:26Z | http://arxiv.org/abs/2310.14413v1 | # Data Augmentation: a Combined Inductive-Deductive Approach featuring Answer Set Programming
###### Abstract
Although the availability of a large amount of data is usually given for granted, there are relevant scenarios where this is not the case; for instance, in the biomedical/healthcare domain, some applications require to build huge datasets of proper images, but the acquisition of such images is often hard for different reasons (e.g., accessibility, costs, pathology-related variability), thus causing limited and usually imbalanced datasets. Hence, the need for synthesizing photo-realistic images via advanced Data Augmentation techniques is crucial. In this paper we propose a hybrid inductive-deductive approach to the problem; in particular, starting from a limited set of real labeled images, the proposed framework makes use of logic programs for declaratively specifying the structure of new images, that is guaranteed to comply with both a set of constraints coming from the domain knowledge and some specific desiderata. The resulting labeled images undergo a dedicated process based on Deep Learning in charge of creating photo-realistic images that comply with the generated label.
## 1 Introduction
In recent years, applications of Deep Learning (DL) have gained a lot of popularity, due to the impressive results in many areas such as image processing, pattern, and object recognition [17]. However, such methodologies are based on models that have to be trained over some background knowledge represented in proper data; in order to obtain accurate training, large collections of data are typically needed. In some domains, obtaining a fair amount of training data is not easy; for instance, this is particularly challenging in the biomedical domain, due to acquisition accessibility, costs, manual annotation effort, data availability, and imbalance. To overcome this problem, data augmentation techniques have been widely studied in order to enrich (and improve in several respects) poor datasets. In this respect, generative models such as Generative Adversarial Networks (GAN) have been proposed to create synthetic but realistic images, showing a great deal of potential. Nevertheless, such approaches also present some drawbacks and limitations. For example, their training can be unstable and slow [7]; furthermore, in general, to guide feature extraction and image generation one has to rely on proper composition and adaptation of the dataset used for training. This makes it difficult to take advantage of available knowledge and to express desiderata on the way data will be generated. However, such knowledge can be of great help in avoiding the generation of wrong images, reducing generation times, and improving the overall quality of the results (e.g., by ensuring the generation of reliable images that respect some given directions/desiderata, preserving the nature of the data, protecting relevant features). The use of declarative approaches can help in expressing constraints emerging from the background knowledge along with specific desired features; for this reason, the design of hybrid solutions featuring inductive and deductive techniques should be explored.
In this work, we make a first step in this direction and propose the use of Answer Set Programming (ASP) for incorporating express knowledge that guides the automatic generation of realistic images. In particular, we consider image generation in the biomedical domain, with a special focus on a Laryngeal Endoscopic Dataset [10]. The idea is to start from a (even limited) set of available photo-realistic images. First, a number of labeled images are produced; then, relevant elements are identified that make images differ one from another, and ASP is used for producing brand-new labeled images by describing how such elements can appear according to a given background (medical) knowledge. Once a labeled image has been generated, specific methods that rely on Deep Learning are used in order to actually create photo-realistic images that comply with the ASP-generated label. It is worth noting that a careful implementation of proper ASP programs guarantees, by design, the generation of labels that comply with the domain knowledge at hand. Furthermore, the use of ASP in this approach has some further advantages; indeed, one can also declaratively express specific desiderata that lead to the generation of images that can significantly differ from the original available ones also at a semantic level (e.g., number and position of elements, spatial relationships, etc.).
It is worth noting that image specifications go down to the pixel level; hence, given the combinatorial nature of the problem and the amount of involved information, ASP modeling must be carefully designed, in order to both correctly express knowledge/desiderata and reduce the computational cost.
To the best of our knowledge, this is one of the first attempts at employing ASP in medical image generation and augmentation. We tested our approach on the cited Laryngeal Endoscopic Dataset; results prove the viability of the approach, which allows the generation of new images according to declaratively expressed directions.
In the following, we illustrate the proposed approach mainly focusing on the design and implementation of the ASP-based declarative generation of new labeled images. The remainder of the paper is structured as follows. We first briefly introduce data augmentation techniques and their related work in Section 2; in Section 3 we provide a detailed description of our approach, that has been tested in Section 4. We eventually draw our conclusions in Section 5.
## 2 Image Data Augmentation
Image data augmentation techniques have been widely studied in the literature and used in state-of-the-art solutions to reduce overfitting, increase generalizability and overcome the lack of data or other limitations that could affect algorithm performance. Indeed, data augmentation: \((i)\) is much less expensive than regular data collection with its label annotation, \((ii)\) can be extremely accurate (it is generated from ground-truth data), and \((iii)\) is controllable, to some extent, in generating balanced data [8].
Traditionally, image data augmentation is performed relying on "classical" strategies or deep learning-based methods. In the first case, geometric transformations (i.e., flipping, rotation, shearing, cropping, and translation) and photometric shifting (i.e., color space shifting, image filtering, addition of noise) are applied to existing available images in order to enrich the collection [8]. However, these techniques present some disadvantages, including memory consumption, transformation costs, and additional training time. Also, some photometric shifting strategies can eliminate important color information or specific features of the image, thus not always guaranteeing that the nature and meaning of the image labels are preserved [13]. On the other hand, DL methods, especially Generative Adversarial Networks (GAN)-based ones, represent a huge breakthrough in image generation, due to the ability to generate artificial images from the initial dataset and then make use of them to predict image features. GANs are composed of two networks: a _generator_ network that creates tentative fake images and a _discriminator_ network that identifies whether the generated images are indicative of real-world evidence or not [1]. Nevertheless, GANs are inherently unstable and suffer from both the lack of meaningful measures to evaluate the quality of their results and limited sample generation capabilities when only a small, hardly representative sample of the population is available.
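For reference, a minimal sketch of such classical transformations is given below (an illustration under generic assumptions, not code from any of the cited works).

```python
# Minimal sketch of the "classical" augmentations mentioned above (illustrative only).
import numpy as np

def augment(img, rng):
    """img: HxWx3 uint8 array; returns one randomly transformed copy."""
    out = img.copy()
    if rng.random() < 0.5:                       # horizontal flip
        out = out[:, ::-1]
    k = rng.integers(0, 4)                       # rotation by a multiple of 90 degrees
    out = np.rot90(out, k)
    shift = rng.integers(-20, 21, size=3)        # simple color-space shift
    out = np.clip(out.astype(np.int16) + shift, 0, 255).astype(np.uint8)
    noise = rng.normal(0, 5, out.shape)          # additive Gaussian noise
    return np.clip(out + noise, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
dummy = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)
print(augment(dummy, rng).shape)
```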
In the biomedical context, the availability of huge datasets is one of the major concerns: it is indeed a difficult task, as it requires continuous efforts in the long term. Image data augmentation techniques aim at tackling this issue, generating medical images for automated assessment of pathological conditions, and supporting healthcare providers in finding the most appropriate preventive interventions and therapeutic strategies without the need for the availability of large medical datasets [6].
Kossen et al. [9] used GANs to create synthetic brain data and corresponding labels, showing good performance in the arterial brain vessel segmentation task. Similarly, Toikkanen et al. [16] used a GAN to improve the quality of the predictive model in localizing hemorrhages from computerized tomography (CT) scans. Synthetic samples from generative models have been demonstrated to alleviate the imbalance and scarcity of labeled training data. In the same context, Zhai et al. [18] proposed a novel asymmetric semi-supervised GAN (ASSGAN) to generate reliable segmentation-predicted masks. The authors show that in the absence of labeled data, the network can make use of unlabeled data to improve segmentation performance.
To the best of our knowledge, there are no logic-based approaches for performing image data augmentation. Some attempts have been made at using ASP for improving the segmentation and the quality of medical images (e.g., [3, 2]); however, they do not focus on the generation of new images, but rather are concerned with quality improvements. Other logic-based contributions concern the somewhat related field of Content Generation, where Answer Set Programming has been used for the production of game contents with desirable properties [5, 12, 14]. Such approaches demonstrate that ASP can be used to declaratively express quantitative and qualitative desiderata as well as content generation strategies, which can be of use in the case of image generation, providing one with the possibility to easily increment, modify and update new knowledge at will.
## 3 An ASP-based Method for Image Generation
In this section, we present our proposed ASP-based method for custom image generation. We focus on the case of laryngeal endoscopic image generation, and design and develop ad-hoc ASP programs for generating new synthetic images starting from the dataset in [10].
Figure 1 shows the workflow of our approach: at first, a set of available photo-realistic images and their corresponding labels are used in order to identify (at a semantic level) what the relevant elements are that images can contain and that make them differ from one another, according to prior medical knowledge; then, a declarative module based on ASP is in charge of making use of such elements for generating new labeled images that comply with explicitly expressed desiderata. Eventually, once a labeled image is generated, a module based on DL methods is used to create photo-realistic images that match the ASP-generated directions.
In the following, we first recall the case study and then detail the framework.
### A Case Study: Laryngeal Endoscopic Images
The Laryngeal Endoscopic Images dataset [10] consists of \(536\) manually segmented in vivo color images (\(512\)x\(512\) pixels) of the larynx captured from videos recorded during two different resection surgeries. The images are composed of \(7\) classes: _void_ (in gray), _vocal folds_ (in light green), _other tissue_ (in green), _glottal space_ (in blue), _pathology_ (in purple), _surgical tool_ (in red), and _intubation_ (in yellow).
The dataset features \(8\) sequences from two patients; they have been categorized into \(5\) different groups, depending on what kind of features the images exhibit. This results in solid medical background knowledge, that is detailed, for each group, in the following (see [10]).
Figure 1: Workflow of the proposed approach
1. Sequence 1: these are images extracted from pre-operative videos; the tumor is always present: in particular, it is clearly visible on the vocal folds. The images feature also changes in scale, translation, rotation, and do not present intubation, nor visible instruments.
2. Sequence 2: as in Sequence 1, images are extracted from pre-operative videos: the tumor is present and clearly visible; changes in scale and translation are featured. In these sequences, however, there are visible instruments, with intubation.
3. Sequences 3-4: these images are extracted from post-operative videos; given that the tumor has been removed, there is no visible tumor. Images feature changes in scale and translation, no visible instruments, yet intubation is present, along with some damaged tissue.
4. Sequences 5-7: images taken from pre-operative videos, where instruments are visible while manipulating and grasping the vocal folds. Changes in scale and translation, are featured; intubation is present.
5. Sequence 8: images extracted from post-operative videos. Here, blood on vocal folds is visible, along with instruments, surgical dressing, and intubation.
A visual example of images in the dataset is represented in Fig. 2, where a raw image (top) is reported along with its corresponding labeled version (bottom) for each of the five groups. For instance, images (b),(g) come from Sequence 2, where the tumor is clearly visible on the vocal folds and also intubation is present.
### Framework Core
The proposed approach relies on an ASP-based model to generate new synthetic labeled images. Basically, logic programs are in charge of describing the main characteristics of a specific scenario and modeling the a-priori knowledge. It is worth noting that the declarative nature of ASP allows us to \((i)\) incorporate explicit medical knowledge in the image generation process, \((ii)\) generate semantically labeled new images that comply with a set of provided requirements, \((iii)\) easily modify specific aspects of images according to variations of the domain or to specific directions. More in detail, declarative specifications not only can force realistic compliance in image generation, but also allow one to heavily customize the way new data will look at a "semantic" level. For instance, one can explicitly ask for images containing tumors of a given average size, or to include an additional instrument, or to prefer images featuring specific relative positions of some elements, and so on.
Intuitively, for each domain, ad-hoc ASP programs need to be properly designed; nonetheless, when working at a pixel level, several pieces of knowledge can be reused across different tasks and domains. In the following, we focus on the specific design choices we made for tackling the problem in the already mentioned laryngeal endoscopic domain.
We started from a given number of labeled images taken from the dataset described in Section 3.1, and identified the image elements to be considered as relevant in the data augmentation process; such elements are removed from the original labeled images and will be actually generated from scratch, while the others are just kept. More specifically, we decided to keep the classes _vocal folds_, _glottal space_ and _other tissue_ as the background of the images; indeed, their shape and appearance do not significantly vary between different images. On the contrary, we focused on the classes _intubation_ and _surgical tool_ for their generation from scratch, as during the surgery they move faster than background tissues and, consequently, their shape and position can easily change according to the rotation and the angle of endoscopic video images. Furthermore, we also considered the _pathology_ class: its importance trivially derives from the domain, and its occurrence in the whole dataset is significantly lower [10]; the
Figure 2: Examples of Laryngeal Endoscopic images from the dataset [10]. Images come from Sequences 1,2,3,5, and 8 (from top to bottom). Raw images and the corresponding labeled ones are reported in the first and second columns, respectively.
generation of further images featuring the _pathology_ class helps to better balance the dataset.
For generating and placing objects as instances of the aforementioned classes, we rely on the following strategy: given a class, a point is properly selected that is intended to represent the position of the object to construct (as its "center"); starting from the chosen pixel we design and construct, step by step, the remaining part of the object under consideration.
As presented in Section 3.1, the images in the dataset are collected in 5 different groups: images in each group have specific characteristics that differ among groups, and are related to the classes of objects that can be featured in images and their relative positions. We want new images to satisfy such features, accordingly. To this aim, we designed ad-hoc ASP programs for the generation of different kinds of objects, namely in the _pathology_, _intubation_, and _surgical tool_ classes. Labeled images for a given group are obtained by properly combining the results from some of such programs (as an example, we can create images of group \(2\) by generating a _pathology_ object and the _intubation_). In the following, we describe the ASP programs along with input and output information. We assume that the reader is familiar with standard ASP syntax and semantics; for more details, we refer to the vast literature (e.g., [4]).
Input. The ASP programs take as input \(512\times 512\) bitmap images, in the form of matrices of the same size and properly represented as facts. Each matrix element is associated with a color determined by the class of the object present in that position. In particular, as said above, we consider images where _vocal folds_, _glottal space_ and _other tissue_ classes are fixed; thus, the input matrices contain elements (or cells) already colored in light green, blue and green. In order to properly address the resulting search space, we adopt an approach inspired by what in Content Generation contexts is called "space partitioning" [15, 5], that works by dividing large areas into smaller zones to be addressed separately; the final result is then obtained by combining the partial results. In particular, we iterate through the elements of the matrix by considering blocks of dimension \(64\times 64\); each block is, in turn, divided into sub-blocks of dimension \(8\times 8\). Such dimensions are empirically chosen.
Matrix elements are modeled by facts of the form cell(X,Y,Col,IDB,IDSB), where term variables X, Y, Col, IDB, and IDSB are mapped to, respectively, the rows and columns of the matrix, the color associated to that cell, the identifier of the block and the identifier of the sub-block containing the cell (X,Y).
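To make the encoding concrete, the following Python sketch shows one possible way to turn a \(512\times 512\) label matrix into cell(X,Y,Col,IDB,IDSB) facts under the block/sub-block partitioning described above. The row-major numbering of blocks and sub-blocks, the helper names, and the color names are our own illustrative assumptions; the paper does not fix these details.

```python
# Illustrative Python sketch (not part of the original framework): it maps every
# pixel of a 512x512 label matrix to a cell(X,Y,Col,IDB,IDSB) fact, assuming
# row-major numbering of the 64x64 blocks and of the 8x8 sub-blocks inside them.
IMG, BLK, SUB = 512, 64, 8   # image, block and sub-block side lengths

def block_ids(x, y):
    """Return (IDB, IDSB) for pixel (x, y) under row-major numbering."""
    idb = (x // BLK) * (IMG // BLK) + (y // BLK)                     # 0..63
    idsb = ((x % BLK) // SUB) * (BLK // SUB) + ((y % BLK) // SUB)    # 0..63
    return idb, idsb

def matrix_to_facts(label_matrix):
    """label_matrix[x][y] holds a class color, e.g. 'lightgreen', 'blue', 'green'."""
    facts = []
    for x, row in enumerate(label_matrix):
        for y, col in enumerate(row):
            idb, idsb = block_ids(x, y)
            facts.append(f"cell({x},{y},{col},{idb},{idsb}).")
    return facts

# Toy usage with an all-blue matrix; in practice the matrix comes from a labeled image.
toy = [["blue"] * IMG for _ in range(IMG)]
print(matrix_to_facts(toy)[0])   # cell(0,0,blue,0,0).
```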
Output. Given the class of the object to generate, the ASP program identifies a suitable area of the matrix where the object can be placed. This is done by generating predicates of the form subBlockIn(IDB,IDSB), which represent the sub-blocks IDSB within the block IDB, whose cells will be re-colored according to the color of the class of interest.
**ASP program for generating _pathology_ objects.** From the medical background, we know that in each image where the _pathology_ is present (images in groups \(1\) and \(2\)), it is clearly visible and positioned "on top of" the class _vocal folds_. Thus, the aim of the ASP program is to get as input a matrix where the _vocal folds_ are represented via cells colored in light green, and generate a new tumor by properly selecting a number of sub-blocks to be re-colored in purple. The first part of the ASP program consists of a series of guessing rules aiming at choosing: \((i)\) the block (and therefore the area) where the tumor is placed, \((ii)\) the position (i.e., the "center" point) in such block, and \((iii)\) some _contour "pivot" points_ that will be used to define the final shape and size of the tumor; we experimentally determined that 8 pivot points are suitable for our purposes. To encode \((i)\), we define the choice rule
{chosenBlock(ID):lightGreenBlock(ID)}=1.
where atom lightGreenBlock(ID) represents all the suitable blocks on which it is possible to generate the tumor. It is worth noting that this ensures that the tumor will only appear where it is supposed to be according to the background knowledge. Then, based on chosenBlock(ID), we encode \((ii)\) via the choice rule
{center(ID,X,Y):centralPoint(ID,X,Y)}=1:-chosenBlock(ID).
where centralPoint(ID,X,Y) collects all the possible starting points, i.e., points previously selected that are far enough from the areas of the matrix that do not comply with the presence of the tumor, and hence are suitable for its construction. Eventually, to encode \((iii)\), we draw the diagonals with respect to center(ID,X,Y) and chosenBlock(ID), and guess eight distinct points positioned as follows: two on the main diagonal (predicate sameMainDiag), two on the secondary diagonal (sameSecDiag), two on the same row with respect to the central point (sameRow), and two on the same column with respect to the central point (sameCol). This is encoded by the following choice rules:
{pointsMainDiag(ID,X,Y):sameMainDiag(ID,X,Y)}=2:-chosenBlock(ID).
{pointsSecDiag(ID,X,Y):sameSecDiag(ID,X,Y)}=2:-chosenBlock(ID).
{horizPoints(ID,X,Y):sameRow(ID,X,Y)}=2:-chosenBlock(ID).
{vertPoints(ID,X,Y):sameCol(ID,X,Y)}=2:-chosenBlock(ID).
Furthermore, we want to ensure that the guessed points satisfy some geometric properties, define an ordering among them depending on their position, and assign them with an identifier, accordingly. In such a way, we obtain the \(8\) contour pivot points that are represented via atoms contourPivot(IDFP,ID,X,Y), where IDFP is the identifier of the contour pivot and ID is the identifier of the block containing the cell (X,Y) of the guessed contour pivot. Moreover, we make use of some constraints to avoid situations in which the guessed contour pivot points do not comply with a realistic situation (for instance, a contour pivot point is placed too close to the central point). On the basis of the suitable contour pivot points, we define the outline of the tumor: for each consecutive pair of pivot points, we define a connection by guessing a path over the sub-blocks of the chosenBlock. In particular, given two contour pivot points contourPivot(IDFP1,ID,X1,Y1) and contourPivot(IDFP2,ID,X2,Y2), connecting paths are chosen within a restricted area of the matrix that corresponds to the rectangle having vertices \((X1,Y1),(X1,Y2),(X2,Y1)\) and \((X2,Y2)\). Relying on such sort of "bounding boxes" has two different purposes. First, we reduce the search space: indeed, guessing over the whole (bigger) area would be prohibitive, given the combinatorial nature of the problem; furthermore, we model what intuitively would be done by a human expert on the basis of the knowledge about the realistic shape of a tumor (for instance, one should avoid that borders feature significant protrusions or cavities). The size of the rectangular area can be adapted, and experimentally set to values representing
a good trade-off between the number of sub-blocks to be guessed and the number and shape of the paths to be explored.
Sub-blocks inside the defined rectangular area are considered as "guessable", a concept modeled by instances of the predicate guessableSubBlock(IDB,IDSB), where IDSB identifies a sub-block inside the block IDB that is suitable to be part of the path. The path between two given contour pivot points is defined via the following rules:
subBlockIn(ID,IDS) | subBlockOut(ID,IDS) :- guessableSubBlock(ID,IDS).
subBlockIn(ID,IDS) :- contourPivot(_,ID,X,Y), cell(X,Y,_,ID,IDS).
reachSubBlock(ID1,IDS1,ID2,IDS2) :- subBlockIn(ID1,IDS1), subBlockIn(ID2,IDS2), IDS1!=IDS2, adjSubBlock(ID1,IDS1,ID2,IDS2).
reachSubBlock(ID1,IDS1,ID3,IDS3) :- reachSubBlock(ID1,IDS1,ID2,IDS2), reachSubBlock(ID2,IDS2,ID3,IDS3), IDS1!=IDS3.
Briefly, the rules in the program snippet above guess which sub-blocks can be part of the path, enforce that the contour pivot points are part of the path, and check reachability via recursion.
We express our preference among possible paths via proper weak-constraints. For instance, the following one has been defined to state that we prefer paths where non-adjacent sub-blocks are not in-line (thus trying to avoid both _zigzags_ and straight lines).
:~ subBlockIn(ID,IDS), subBlockIn(ID1,IDS1), ID!=ID1, cell(X,Y,_,ID,IDS), cell(X,Y1,_,ID1,IDS1), not adjSubBlock(ID,IDS,ID1,IDS1). [1@1, IDS,IDS1]
**ASP program for generating _intubation_.** Images featuring _intubation_ are from groups 2-5. In these images, _intubation_ is present in the bottom part of the photo, starting from the border; furthermore, it is always positioned on top of the _glottal space_ class. Thus, the ASP program is designed in order to choose a proper set of sub-blocks belonging to the _glottal space_, hence colored in blue, where the _intubation_ has to be positioned; such sub-blocks are recolored as _intubation_, i.e. yellow.
Sub-blocks are selected by a program that behaves similarly to the one designed for the _pathology_ case; the main differences are in the way guessable sub-blocks and contour pivot points are identified. In particular, since the intubation has to be positioned on the glottal space, the guessable sub-blocks are selected in the blue area; moreover, the center point is chosen within the very lowest region of the matrix, and all contour pivot points are guessed above the center point, such that, by connecting them, a shape resembling a semi-oval is obtained (as this is the form an expert expects to see when finding the intubation in a picture). Cells within this shape are colored in yellow.
**ASP program for generating _surgical tool_.** The ASP program for generating _surgical tool_ objects is defined according to a strategy similar to the one described above. Differences are mainly related to the way the shapes of surgical tools are modeled (this clearly depends on the type of tool) and the way their positions are chosen (as tools can be present both on top of the _vocal folds_ and in the _glottal space_).
### From Labels to Raw Data
The labeled images generated via the ASP-based method described above represent semantic descriptions of images that comply with the background knowledge and the expressed desiderata. The next step is to generate a photo-realistic counterpart for each one: intuitively, new raw images are supposed to be such that, when semantically segmented, correspond to the related output of the ASP-based phase.
Among all different types of image synthesis tasks, _label-to-image_ is one of the most challenging ones, due to the complexity of the images that have to be synthesized [19]. Different works have been proposed in the literature, some for addressing paired-data training (i.e., the model is fed with label maps and corresponding images), and others for unpaired-data (i.e., unpaired label maps and images are used for training) [19]. In the scope of this work, we are experimenting with one of the most recent state-of-the-art proposals: Semantic Image Synthesis With Spatially-Adaptive Normalization (SPADE) [11], which is commonly used as a paired-data technique. SPADE processes the input semantic layout (i.e., an abstract representation of an image that defines the different parts or objects in the image and their spatial relationships) through several layers of convolution, normalization, and nonlinearity. Instead of traditional normalization layers, the authors used spatially-adaptive normalization layers. These layers modulate the activations using the input semantic layout through a learned transformation that adapts to the spatial layout. This helps the network to effectively propagate semantic information and produce better results than previous methods [11]. Specifically, in the SPADE approach, the mask is first projected onto an embedding space and then convolved to produce the modulation parameters. Unlike prior conditional normalization methods, these parameters are not vectors, but tensors with spatial dimensions that are multiplied and added to the normalized activation element-wise. In our experimental analysis, we used the same parameter configuration provided by the authors (i.e., learning rates of 0.0001 for the generator and 0.0004 for the discriminator, and ADAM as the optimizer) and we trained the network for 350 epochs.
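To illustrate the mechanism just described, the following PyTorch sketch implements a SPADE-style normalization block: the label map is embedded by a shared convolution and then produces spatially varying scale and shift tensors that modulate the normalized activations element-wise. The kernel sizes, the hidden width, and the parameter-free batch normalization are our own illustrative choices and are not taken from the authors' implementation.

```python
# Minimal PyTorch sketch of a SPADE-style normalization block, as described above.
# Kernel sizes, hidden width and the use of parameter-free batch norm are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADENorm(nn.Module):
    def __init__(self, feature_channels, label_channels, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(feature_channels, affine=False)  # normalize without learned affine
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Spatially varying modulation parameters (tensors, not vectors)
        self.gamma = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)

    def forward(self, features, segmap):
        # Resize the label map to the feature resolution, embed it, then modulate
        # the normalized activations element-wise.
        segmap = F.interpolate(segmap, size=features.shape[2:], mode="nearest")
        emb = self.shared(segmap)
        return self.norm(features) * (1 + self.gamma(emb)) + self.beta(emb)

# Toy usage: 6 semantic classes (one-hot), 64 feature channels, 32x32 activations.
x = torch.randn(1, 64, 32, 32)
seg = torch.randn(1, 6, 256, 256)
print(SPADENorm(64, 6)(x, seg).shape)  # torch.Size([1, 64, 32, 32])
```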
## 4 Tests and Results
In this section, we present the results obtained in the first tests of the proposed framework. In particular, we show how a new labeled image is produced by generating the tumor, the intubation, and, eventually, surgical tools, by providing visual examples of the results.
As discussed in Section 3.2, the ASP programs are fed with input images featuring only the classes _other tissue_, _vocal folds_, and _glottal space_, colored in green, light green, and blue, respectively; an example is reported in Figure 3(a).
For generating a sample labeled image of group \(1\), and in particular Sequence 1, we can start from this image and generate a tumor over it; such sequence images, indeed, feature neither intubation nor surgical tools.
By taking advantage of the ASP programs described in Section 3.2, a center point and then a series of contour pivot points are generated and properly connected, thus forming the shape of a tumor over the correct area (as reported in the dataset description, the tumor must be clearly visible on the vocal folds, i.e., the light green area): see Figure 3(b). Eventually, we assign the appropriate color (i.e., purple) to each pixel inside the obtained area; the final result is shown in Figure 3(c).
In order to generate images of Groups \(2\)-\(5\), we need to incorporate the intubation class (see Figures 3(d) and 3(e)) and the surgical tool (see Figures 3(f) and 3(g)). For instance, note that an object of the intubation class, according to the a-priori knowledge and in order to generate a realistic result, must be positioned, and its contour guessed, inside the _glottal space_ area (blue).
It is worth noting that different answer sets model different positions, sizes, and shapes; what is reported in Figure 3 shows the result of randomly chosen ones. We also point out that by slightly modifying the ASP programs one can obtain significantly different results; for instance, one might change the number of contour pivot points, the way they are placed and distanced, how they are connected in order to form the shape, and so on.
The preliminary tests we carried out show that the herein proposed approach is viable, even under some _caveats_. Indeed, for the whole process to be successful, significant efforts must be spent on outlining and formally representing the background knowledge, for properly designing the ASP programs and fine-tuning them in order to make the newly generated data match the original desiderata, and for defining the workflow for composing the result. Furthermore, the computational burden placed on ASP solvers is heavy, especially in the case of image generation carried out at the pixel level. However, the advantages of declarative specifications that actually guide the generation of new data clearly outweigh such considerations, especially if one considers that, in a typical scenario, data are generated once and used many times.
Starting from the labeled image generated via ASP, we make use of SPADE (see Section 3.3) to create photo-realistic data. As shown in Figure 4, our approach is able to successfully generate synthetic images from semantic labels.
Some additional considerations are reported in the next Section.
## 5 Conclusion and Perspectives
In this paper we presented a framework aiming at enabling the declarative specification of data augmentation processes; in particular, we proposed the use of Answer Set Programming for guiding the generation of realistic images in the biomedical domain. The presented approach relies on the collection of a (small) dataset of labeled images from the ground truth, and on the identification of relevant elements that make images differ from one another; then, specific ASP reasoning tasks are employed for generating brand new labeled images, obtained by describing how such elements can appear in the images and then properly composing the output. The new semantically labeled images are then used as input for specific methods relying on Deep Learning for producing photo-realistic images, which actually constitute the final output. We assessed the viability of the approach over images coming from laryngeal endoscopic surgery videos, and the results are promising, as they show that declarative specifications can be incorporated in the image data augmentation process. Such specifications, expressed via ASP, can encode both background knowledge and specific desiderata; in our opinion, this is one of the main strengths of the approach. Indeed, it allows one to significantly customize the generation of new raw data in a declarative fashion without the need for finding, collecting, and adapting data in the domain at hand (for instance, surgical images featuring a given number of instruments, a specific position of the tumor, etc.); and yet, it allows one to enjoy the typical resilience of ASP with respect to knowledge updates (i.e., changes in specifications). As an example, one can easily adapt the logic programs so as to change the number of elements of a given class, or the spatial relationships among elements (e.g., generating images featuring smaller/larger tumors or fewer/more instruments, etc.).
Figure 4: Example of synthetic image generation from ASP-based labels.
Figure 3: Generation from scratch of classes _pathology_, _intubation_, and _surgical tool_.
From a larger perspective, the use of ASP (as the declarative formalism of choice) in the loop of data augmentation to the extent herein described allows one to collect declarative specifications (i.e., logic programs) and "translate" them in such a way (i.e., properly generated labeled images) that they can be fed to DL methods. In particular, SPADE achieved satisfactory results, generating realistic raw images that comply with the corresponding labeled ones. As far as future work is concerned, next steps involve experimental campaigns designed for assessing the quality of images with respect to the desired task and the performance of our approach on additional biomedical datasets.
|
2303.03057 | Hamiltonian Dynamics and Structural States of Two-Dimensional
Microswimmers | We show that a two-dimensional system of flocking microswimmers interacting
hydrodynamically can be expressed using a Hamiltonian formalism. The
Hamiltonian depends strictly on the angles between the particles and their
swimming orientation, thereby restricting their available phase-space.
Simulations of co-oriented microswimmers evolve into "escalators" - sharp lines
at a particular tilt along which particles circulate. The conservation of the
Hamiltonian and its symmetry germinate the self-assembly of the observed
steady-state arrangements as confirmed by stability analysis. | Yuval Shoham, Naomi Oppenheimer | 2023-03-06T12:00:42Z | http://arxiv.org/abs/2303.03057v2 | # Hamiltonian Dynamics and Structural States of Two-Dimensional Microswimmers
###### Abstract
We show that a two-dimensional system of flocking microswimmers interacting hydrodynamically can be expressed using a Hamiltonian formalism. The Hamiltonian depends strictly on the angles between the particles and their swimming orientation, thereby restricting their available phase-space. Simulations of co-oriented microswimmers evolve into "escalators" -- sharp lines at a particular tilt along which particles circulate. The conservation of the Hamiltonian and its symmetry germinate the self-assembly of the observed steady-state arrangements as confirmed by stability analysis.
At equilibrium, material structural states can be predicted and designed using an energetic description. Since structure encodes function, the energetic framework is a powerful tool in natural sciences. Yet, structural states are not prerogative of equilibrium -- structure also emerges in many-body systems, far from equilibrium, where canonical conservation laws fail (1). For example, in Turing patterns (2), and in phase separation in biological and synthetic microswimmers (3-5). In such systems, prediction becomes impossible without monitoring the full dynamical evolution of the many degrees of freedom, e.g., by agent-based simulations (6-11), or a continuum description (12-14). At equilibrium, finding the energy of a given state amounts to formulating its Hamiltonian, which is readily derived when the microscopic interactions are known. By contrast, even when the microscopic hydrodynamic interactions between active particles are known at great precision, an equivalent, general, Hamiltonian framework remains elusive.
Here we show that for active particles in a 2D fluid, the equations of motion give rise to a geometric Hamiltonian description. We further show that when particles' orientations are aligned, such as in a flock, symmetries of the Hamiltonian limit the angular spread of the particles, resulting in an emergent structure of sharp lines at a given angle. This description applies to motile swimmers, such as bacteria (15, 16), and also to fixed active particles, such as proteins applying forces on the membrane (17). An analogous geometric Hamiltonian proved useful for vortices in an ideal fluid (18-20), and more recently also for active rotors in a viscous flow (21-26), for sedimenting disk arrays (27-29), and for a swimmer or two interacting with a flow or an external field (30-32).
We consider hydrodynamic interactions between microscopic organisms such that inertia is negligible, and the governing equations are the Stokes equations. An active particle is force-free. Thus, to a leading order, it will generate a flow of a force-dipole given by \(0=-\nabla p+\mu\nabla^{2}\mathbf{v}+\mathbf{D:}\nabla\delta(\mathbf{r})\), where \(\mu\) is the viscosity, \(\mathbf{D}\) is the magnitude of the force dipole, and \(\mathbf{:}\) is a double dot product. For an incompressible fluid (\(\mathbf{\nabla}\cdot\mathbf{v}=0\)) the resulting flow can be decomposed into an anti-symmetric part, a "rotlet", and a symmetric part, a "stresslet". They are compactly written in polar coordinates as
\[\mathbf{v}=\frac{1}{2\pi r}\left[T\hat{\theta}+S\cos{(2\theta-2\phi)}\hat{r} \right], \tag{1}\]
where \(T\) is the rotlet strength, \(S\) the stresslet strength, \(\phi\) the orientation of the force-dipole with respect to the \(\hat{x}\) axis, and \(\mathbf{r}=(x,y)=r(cos\theta,sin\theta)\) is the position of the particle. A general active particle will have a stresslet part, whether it is moving or statically applying active forces. Previous work focused on the rotational part (8, 21, 24-26). Here we focus on the stresslet part of the flow-field. The incompressibility equation implies the existence of a vector potential such that \(\mathbf{v}=\mathbf{\nabla}^{\perp}\psi\), where \(\psi\) is the streamfunction, and \(\mathbf{\nabla}^{\perp}\equiv(\partial_{y},-\partial_{x})\). These equations of motion are Hamilton equations, with \(x\) and \(y\) being the conjugate variables. The streamfunction of the symmetric part of Eq. 1 is \(\psi_{s}=S\sin(2\theta-2\phi)/\pi\) (see Fig. 1).
In a system of many similar active particles all swimming along the same direction with the same velocity, here chosen to be \(\hat{x}\) (i.e. \(\phi=0\)), where each particle is affected only by the other particle's flow-field, this streamfunction can be summed to become the Hamiltonian of the system. The flow-field of the \(i^{th}\) particle is given by,
\[\mathbf{v}_{i}=\sum_{j\neq i}\frac{S_{j}}{2\pi}\frac{\left(x_{i}-x_{j}\right) ^{2}-\left(y_{i}-y_{j}\right)^{2}}{r_{ij}^{4}}\mathbf{r}_{ij}, \tag{2}\]
where \(\mathbf{r}_{ij}\equiv\mathbf{r}_{i}-\mathbf{r}_{j}\) is the vector pointing from particle \(j\) to particle \(i\), and \(S_{i}\) is the strength of the stresslet of the \(i^{th}\) particle. These velocities can be derived from a Hamiltonian, \(S_{i}\mathbf{v}_{i}=\nabla_{i}H\), where \(H\) is
\[H=\sum_{i,j,i\neq j}\frac{S_{i}S_{j}}{2\pi}\sin{2\theta_{ij}}, \tag{3}\]
and \(\theta_{ij}\) is the relative angle between the vector connecting stresslets \(i\) and \(j\) and the \(x\)-axis (see Fig. 1B). We note four points about this Hamiltonian: (a) It does not depend on time, therefore, from Noether's theorem (33), it is conserved. (b) The Hamiltonian is scale-invariant, that is, it is the same whether the stresslets are very close or very far, as long as the relative angles between
them are the same (see Fig. 1B). (c) It is symmetric with respect to \(\pm\pi/4\). Therefore, we expect the solution to be symmetric around that angle. (d) It is symmetric to translations, therefore \(\mathbf{d}_{\mathrm{act}}\equiv\sum_{i}S_{i}\mathbf{r}_{i}/N=\mathrm{const}\), where \(\mathbf{d}_{\mathrm{act}}\) is the "center of activity" in analogy to the center of mass. In what follows, we consider only stresslets with the same activity strength \(S_{i}=S\) and find that a system of many oriented swimmers evolves into lines at angles \(\pm\pi/4\) (see Fig. 2). To understand why that is, we start by examining the dynamics of two stresslets.
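As a concrete illustration of Eqs. (2) and (3), the short Python sketch below evaluates the stresslet velocities and the geometric Hamiltonian for an arbitrary configuration of co-oriented swimmers; the variable names and the two-particle test case are our own, and no steric regularization is included here.

```python
# Numerical sketch of Eqs. (2) and (3): pairwise stresslet velocities and the
# geometric Hamiltonian for co-oriented swimmers (all oriented along x).
import numpy as np

def stresslet_velocities(pos, S):
    """pos: (N,2) positions, S: (N,) stresslet strengths. Returns (N,2) velocities, Eq. (2)."""
    N = len(pos)
    v = np.zeros_like(pos)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            v[i] += S[j] / (2 * np.pi) * (rij[0]**2 - rij[1]**2) / r2**2 * rij
    return v

def hamiltonian(pos, S):
    """Eq. (3): sum over ordered pairs of S_i S_j sin(2*theta_ij) / (2*pi)."""
    N = len(pos)
    H = 0.0
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            rij = pos[i] - pos[j]
            theta = np.arctan2(rij[1], rij[0])
            H += S[i] * S[j] / (2 * np.pi) * np.sin(2 * theta)
    return H

# Two stresslets at relative angle pi/4: velocities vanish (stagnation direction),
# and the ordered-pair sum of Eq. (3) takes its extremal value for this configuration.
pos = np.array([[0.0, 0.0], [np.cos(np.pi/4), np.sin(np.pi/4)]])
S = np.ones(2)
print(stresslet_velocities(pos, S))   # ~zero velocities
print(hamiltonian(pos, S))            # value of the ordered-pair sum of Eq. (3)
```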
A single stresslet does not move by the flow it creates. The simplest dynamic system is therefore composed of two particles, in which case, the Hamiltonian is reduced to \(H=S^{2}\sin 2\theta/(2\pi)\), and the relative angle itself is conserved. The dynamics of two stresslets are confined to the line connecting them. When placed at initial distance \(d_{0}\) and angle \(\varphi\) relative to the \(x\)-axis, the relative distance between the two particles is \(R=\sqrt{2S\cos{(2\varphi)}t/\pi+d_{0}^{2}}\). The angle \(\varphi\) determines if the two stresslets collide or disperse. For \(\varphi<\pi/4\), the stresslets repel each other. The repulsion scales with time as \(R^{2}\sim t\), similar to diffusion. On the other hand, for \(\varphi>\pi/4\), the stresslets collide after a finite time \(t=d_{0}^{2}\pi/(2S\cos{(2\varphi)})\). At exactly \(\varphi=\pi/4\), particles remain static, and the initial distance is fixed.
A system of many stresslets can no longer be solved analytically. However, the conservation laws still apply. We numerically integrated Eq. 2 using the python library scipy.integrate.DOP853, which is an \(8^{th}\) order Runge-Kutta method with an adaptive time stepper. The interaction between each two particles is radial and their interaction decays as \(\sim r^{-1}\) (Eq. 2). They repel or attract depending on their relative angle. Thus, when two swimmers attract, they accelerate toward each other and eventually collide, such that the velocity diverges. Actual active particles have a given size and cannot overlap. We, therefore, introduce soft steric repulsion of the form \(\Delta\mathbf{v}_{s}=\Delta t\,k_{s}\left(l_{s}-|\mathbf{r}|\right)\hat{r}\) if \(|\mathbf{r}|<l_{s}\) and zero otherwise. When two stresslets get closer than a certain steric length \(l_{s}\), a repelling force proportional to their relative distance \(|\mathbf{r}|\) by a very large spring constant \(k_{s}\) is applied and pushes them apart. In our simulations, \(l_{s}=0.001\) and \(k_{s}=1,000\). This interaction is added to the regular stresslet interaction. Due to the steric interactions, the Hamiltonian is no longer strictly conserved. To ensure that collisions do not dominate the behavior of the system, we work in the dilute limit where collisions are rare, with average distances between particles much larger than the steric length. We verified that the Hamiltonian is accurately conserved in between collisions (see Fig. 3).
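A minimal version of this integration can be sketched with SciPy's DOP853 integrator as below; here the steric repulsion of the paper is replaced by a small softening constant purely to keep the example short, and all parameter values are illustrative.

```python
# Sketch of the N-body integration described above, using SciPy's DOP853
# (8th-order adaptive Runge-Kutta). The softening constant eps replaces the
# paper's steric repulsion and is only meant to keep this demo finite.
import numpy as np
from scipy.integrate import solve_ivp

S, N, eps = 1.0, 50, 1e-3
rng = np.random.default_rng(0)
r0 = rng.uniform(0.0, 10.0, size=(N, 2))            # random square, as in the simulations

def rhs(t, y):
    r = y.reshape(N, 2)
    d = r[:, None, :] - r[None, :, :]                # d[i, j] = r_i - r_j
    r2 = (d ** 2).sum(-1) + np.eye(N) + eps          # soften and remove self-interaction
    coef = S / (2 * np.pi) * (d[..., 0] ** 2 - d[..., 1] ** 2) / r2 ** 2
    np.fill_diagonal(coef, 0.0)
    return (coef[..., None] * d).sum(axis=1).ravel() # Eq. (2) summed over peers

sol = solve_ivp(rhs, (0.0, 1.0), r0.ravel(), method="DOP853", rtol=1e-6, atol=1e-9)
print(sol.y[:, -1].reshape(N, 2)[:3])                # positions of the first 3 swimmers at t=1
```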
We initialize 300 oriented stresslets randomly positioned in a square and let them evolve, all swimming
Figure 1: (a) Streamlines of a single particle swimming along the \(x\) direction. Note the stagnation lines at \(\pm\pi/4\), the inward flow along the vertical and outward along the horizontal. (b) Schematics showing six oriented swimmers, swimming in the \(\hat{x}\) direction with the same Hamiltonian whether at close proximity or far apart as long as the angle between each two swimmers and their swimming direction, \(\theta_{ij}\) is fixed.
Figure 2: Snapshots from a molecular dynamics simulation of 300 swimmers, in a frame of reference moving with the particles. Particles are initiated randomly in a square of size \(10\times 10\). Swimming directions are fixed along the \(x\) axis. (a) Snapshots from the early times where the ensemble spreads and elongates. \(y\) positions are shifted down between times for clarity. (b) Snapshots from later times where an instability is formed and grows. The particles develop sharp lines at \(\pm\pi/4\), which we call escalators as particles circulate around them. Inset shows a zoom-in on one of the escalators with overlayed snapshots at different times. Particles going left (right) are marked in blue (green).
in the \(\hat{x}\) direction. At first, the system follows the general shape of a single stresslet shown in Fig. 1A-- the system compresses in the \(y\)-direction and expands in the \(x\)-direction, into a "street" of stresslets, see Fig. 2A. At intermediate times, this street shows instabilities, which grow over time. At long times, these instabilities create stable shapes where particles concentrate along inclined streets at angles \(\pm\varphi=\pi/4\) (see Fig. 2B), which we term "escalators". There is circulation around these escalators, with particles above and below going in opposite directions. See inset in Fig. 2B, for overlayed snapshots at preceding times where particles are color coded according to their direction of motion.
Why are there always escalators of \(\varphi=\pm\pi/4\) when the system is initialized in a random square? We explain this using symmetry and Hamiltonian conservation arguments. We begin by showing that the system can only evolve into several escalators and not a single one. The Hamiltonian of the initial state is, on average, zero since the initial angles are random and \(H\) can be written as \(H=N(N-1)S^{2}\left<\sin 2\theta_{ij}\right>/2\pi=0\), where \(\left<.\right>\) is the average over the ensemble, \(N\) being the number of stresslets. The Hamiltonian of an escalator with \(\varphi=\pm\pi/4\), on the other hand, is non-zero. In fact it has maximal magnitude. Given that the Hamiltonian is conserved, a system of stresslets scattered in a square cannot develop into a single escalator. Indeed, the system always decomposes into a few escalators, each with an inclination angle \(\varphi=\pm\pi/4\). A simple case for this decomposition is two escalators with opposing inclination angles \(\varphi=\pm\pi/4\), placed at opposing sides of the \(y\) axis in symmetric form, see Fig. 1B. Mirroring the system along the \(y\)-axis, the Hamiltonian is \(H^{\rm mirror_{y}}\propto\left<\sin\left(2\cdot(\pi-\theta_{ij})\right) \right>=-H\), which implies again that the Hamiltonian is zero. Thus, a symmetric combination of stresslets conserves the Hamiltonian. Moreover, there is no limitation to the number of escalators the system can develop into, and different runs resulted in different numbers. The Hamiltonian is symmetric around \(\pm\pi/4\), so the steady state configuration needs to exhibit this symmetry. We go on to test the stability of particles aligned at different angles.
_Stability Analysis For a Street of stresslets._ We use three different methods to test the stability of an escalator, inclined at an angle \(\varphi\). First, we use linear stability analysis and show that the stability of the inclined escalator is of non-linear nature, as the first order of perturbation gives a fixed point of type center. Next, we use numeric simulations of escalators with different inclination angles to show that escalators are unstable -- except when \(\varphi=\pi/4\). Lastly, by calculating analytically the velocity field created by a continuous distribution of particles in an escalator, we show that for \(\varphi<\pi/4\), there is a repelling force pushing particles away. In addition, escalators at \(\varphi>\pi/4\) have an attractive force that causes a collapse and are therefore, inherently unstable. We combine the results to conclude that the only stable escalator is \(\varphi=\pm\pi/4\).
We follow the method of linear stability analysis used in Ref. [34] SS8. We start with a continuous line of stresslets with strength density \(S\). In complex notation
Figure 3: The Hamiltonian as a function of time for 4 random particles. The force dipole of all particles is oriented along the \(\hat{x}\) axis. The Hamiltonian is conserved between collisions. A collision event is marked in red in the middle panel.
Figure 4: Stability of particles at different angles under perturbation. We initiate 200 particles randomly on a line tilted at three different angles: \(\pi/8\) in green, \(\pi/4\) in black and \(\pi/3\) in green. We add a small sine-wave perturbation to their positions and let the system evolve over time. (a) Initial positions. (b) Configuration after a short period (\(t=0.1\)) shows that all initial conditions resulted in escalators at an angle of about \(\pi/4\). Note how the \(\pi/8\) street breaks into smaller streets. (c) Result at long times, with all initial conditions resulting in lines at \(\sim 40^{\circ}\), slightly less than \(\pi/4\) and consistent with an approximately 15% decrease in the value of the Hamiltonian. At later times the system continues to spread but maintains these angles as there are hardly any collisions. (d) The relative error of the Hamiltonian as a function of time, where \(H_{0}\) is its initial value.
the velocity is
\[\dot{Z}=\frac{S}{2\pi}\frac{1}{2Z}\left(1+\left(\frac{Z}{\bar{Z}}\right)^{2} \right), \tag{4}\]
where \(Z=x+iy\), and \(\bar{Z}=x-iy\). Along the parametric line \(\left\{Z_{s}=Z\left(s\right)|s\in\left(-\infty,\infty\right)\right\}\), the velocity at each point \(\dot{Z}_{s}\) is
\[\dot{Z}_{s}=\int_{-\infty}^{\infty}\frac{S}{4\pi\left(Z_{s}-Z_{s^{\prime}} \right)}\left(1+\left(\frac{Z_{s}-Z_{s^{\prime}}}{\bar{Z}_{s}-\bar{Z}_{s^{ \prime}}}\right)^{2}\right)\mathrm{d}s^{\prime}. \tag{5}\]
For a stresslet "street" with an inclination \(\varphi\) compared to the \(x\)-axis, \(Z\left(s\right)=se^{i\varphi}\). Adding a small Fourier-decomposed perturbation \(\varepsilon_{s}=\sum_{q}a_{q}e^{iqs}\) to the street, keeping only its lowest non-zero order, the equations of motion for \(Z\) and \(\bar{Z}\) (Eq. 5) become
\[\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix}a_{q}\\ \bar{a}_{-q}\end{pmatrix}=-\frac{Sq}{2}\begin{pmatrix}i\sin 4\varphi&-e^{4i \varphi}\\ e^{-4i\varphi}&i\sin 4\varphi\end{pmatrix}\begin{pmatrix}a_{q}\\ \bar{a}_{-q}\end{pmatrix}, \tag{6}\]
with eigenvalues \(\lambda_{1,2}=-\frac{1}{2}iSq\left(\sin 4\varphi\pm 1\right)\). Note that the eigenvalues are imaginary and create a neutrally stable point of type center. Therefore, in the linear limit, small perturbations will not grow nor decay. A similar calculation in a 3D fluid showed that stresslets in that case are linearly unstable (11). We next show by two other means that streets at all angles except \(\pm\pi/4\) are, in fact, unstable due to higher-order perturbations. When a street of any inclination is infinite, and the stresslets are at equal distances apart, the system is completely stationary. On the other hand, when a street is finite, only an escalator with inclination \(\pm\pi/4\) will remain stationary, because of the stagnation lines at \(\pm\pi/4\). This already hints that an unperturbed escalator of inclination \(\pm\pi/4\) is more stable than any other unperturbed street.
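The eigenvalue statement can be checked numerically; the snippet below builds the matrix in Eq. (6) for a few inclinations and confirms that its eigenvalues are purely imaginary and equal to \(-\frac{1}{2}iSq(\sin 4\varphi\pm 1)\). The values of \(S\), \(q\), and \(\varphi\) used here are arbitrary.

```python
# Quick numerical check of the eigenvalues quoted after Eq. (6): for any inclination
# phi they are purely imaginary, -(S*q/2)*1j*(sin(4*phi) +/- 1), i.e. a center.
import numpy as np

S, q = 1.3, 2.0
for phi in (np.pi / 8, np.pi / 4, np.pi / 3):
    A = -(S * q / 2) * np.array([
        [1j * np.sin(4 * phi), -np.exp(4j * phi)],
        [np.exp(-4j * phi),     1j * np.sin(4 * phi)],
    ])
    ev = np.linalg.eigvals(A)
    expected = -(S * q / 2) * 1j * (np.sin(4 * phi) + np.array([1.0, -1.0]))
    print(phi, np.sort_complex(ev), np.sort_complex(expected), np.allclose(np.real(ev), 0))
```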
Next, we use numeric simulations of perturbed escalators. We initialize 200 stresslets in random locations along tilted lines at a fixed angle and perturb them by adding a sine wave to their locations in the perpendicular direction. We then let the system develop over time. After a short time (\(t=0.1\)), escalators with an inclination angle \(\varphi\neq\pi/4\) are broken and end up as a system of many escalators approaching \(\varphi=\pi/4\) (Fig. 4). The result at long times (\(t\sim 100\), Fig. 4C) is, though, not exactly \(\pi/4\), due to a decrease in the Hamiltonian (see Fig. 4D). Such a decrease in the Hamiltonian comes from the steric interactions introduced to the system (Fig. 1C), which are necessary to avoid the divergence of the velocity field at short distances since \(v\sim 1/r\). From its initial value, the Hamiltonian loses \(\sim 15\%\) at \(t=100\). This change seems to scale sub-linearly with time, i.e. \(\leq\sqrt{t}\). In a way, the system diffuses into another state due to collisions. Let us attempt to take into account the decrease in the Hamiltonian and estimate the angle of a "perfect" escalator with the same decreased Hamiltonian. The resulting angle is around \(40^{\circ}\) and agrees well with the simulation predictions, as shown by the red dashed line in Fig. 4.
To test analytically the stability, we use a continuous model of an infinite escalator where particles are equispaced and ask what flow field it creates -- if its flow field repels particles away from it, it is certainly not stable. We sum an infinite series of stresslets laid on a line of inclination \(\varphi\) distanced \(L\) apart. We calculate the velocity that the system of stresslets creates at the time of initiation. Instead of inclining the street, we look at stresslets on the \(x\)-axis whose direction is rotated by \(\varphi\). The velocity due to a single stresslet is given by the radial part of Eq. 1. The velocity at a distance \(h\) above the escalator is given by an infinite sum over all stresslets which gives \(\mathbf{v}=\left[\left(\coth\rho-\rho\,\mathrm{csch}^{2}\,\rho\right)\sin 2 \varphi,\rho\,\mathrm{csch}^{2}\,\rho\,\mathrm{cos}\,2\varphi\right]S/2L\) with \(\rho=\pi h/L\). The \(y\)-component represents the instability of the line. We see there will be a repelling force from the inclined street when \(\cos 2\varphi>0\). This means that for \(\cos 2\varphi>0\) (i.e. \(\varphi<\pi/4\)), the inclined street is unstable to perturbations. For \(\cos 2\varphi=0\), that is \(\varphi=\pi/4\), the velocity outside the escalator is zero, therefore a perturbation neither grows nor decays. What happens for angles larger than \(\pi/4\)? In that case, as we have seen for two stresslets, there is an attraction between the particles. Because there are no repelling forces other than collisions, a stresslet street at such an angle collapses onto itself and loses its stability. We finally conclude that indeed \(\varphi=\pi/4\) is the only stable angle.
_Discussion_. In this work, we introduced a new method to describe a 2D system of microswimmers robustly, using multipole expansion and a Hamiltonian formalism, which can be used to unveil the dynamics of colonies of microbiological swimmers and other synthetic active systems. The symmetries of the Hamiltonian limit the possible steady state configurations of the particles. Such a Hamiltonian description has been useful in the study of vortices in an ideal fluid, and here we find its application in a viscosity dominated, active, many-body system. Numeric simulations gave further insight into the dynamics. We found that a system of co-aligned, yet randomly positioned swimmers progressively flattened into a linear street, before the onset of an instability where the swimmers self-assembled into "escalators" in which particles circulate on canted conveyer belts. In future directions of this work, we plan to look into populations of active particles with different orientations, thus altering the Hamiltonian, and possibly changing the dynamics drastically.
_Acknowledgments_. We thank Haim Diamant for fruitful discussions. This research was supported by the ISRAEL SCIENCE FOUNDATION (grant No. 1752/20). |
2302.14602 | On the Estimation of Cross-Firm Productivity Spillovers with an
Application to FDI | We develop a novel methodology for the proxy variable identification of firm
productivity in the presence of productivity-modifying learning and spillovers
which facilitates a unified "internally consistent" analysis of the spillover
effects between firms. Contrary to the popular two-step empirical approach,
ours does not postulate contradictory assumptions about firm productivity
across the estimation steps. Instead, we explicitly accommodate cross-sectional
dependence in productivity induced by spillovers which facilitates
identification of both the productivity and spillover effects therein
simultaneously. We apply our model to study cross-firm spillovers in China's
electric machinery manufacturing, with a particular focus on productivity
effects of inbound FDI. | Emir Malikov, Shunan Zhao | 2023-02-26T22:52:40Z | http://arxiv.org/abs/2302.14602v1 | # On the Estimation of Cross-Firm Productivity Spillovers with an Application to FDI+
###### Abstract
We develop a novel methodology for the proxy variable identification of firm productivity in the presence of productivity-modifying learning and spillovers which facilitates a unified "internally consistent" analysis of the spillover effects between firms. Contrary to the popular two-step empirical approach, ours does not postulate contradictory assumptions about firm productivity across the estimation steps. Instead, we explicitly accommodate cross-sectional dependence in productivity induced by spillovers which facilitates identification of both the productivity and spillover effects therein simultaneously. We apply our model to study cross-firm spillovers in China's electric machinery manufacturing, with a particular focus on productivity effects of inbound FDI.
**Keywords**: productivity spillovers, production function, proxy variable, FDI spillovers
**JEL Classification**: C14, C23, D24, F21, L20, O30
Introduction
Since its popularization by Marshall (1890), the concept of cross-firm knowledge, or technology, spillovers has increasingly become a central fixture in many economic theories, including of long-run growth, spatial agglomeration, research and innovation, international trade and more. The idea is simple: firms improve their productivity by learning from one another, with the most commonly conjectured drivers of these knowledge exchanges (technology transfers) being human interaction along with spatial and industrial/technological proximity. These productivity spillovers can also propel significant positive externalities in many productivity-enhancing activities such as research and development (R&D), foreign direct investment (FDI) or exporting. In this paper, we develop a new methodology for the proxy variable structural identification of firm productivity in the presence of productivity-modifying learning and spillovers which facilitates a unified "internally consistent" analysis of the spillover effects between peer firms.
Although productivity is straightforward in concept, its measurement is not trivial for a multitude of reasons among which is the inherent latency of firm productivity/efficiency. Naturally, the identification of productivity spillovers across firms is even more challenging a task because, as Krugman (1991, p.53) points out, "knowledge flows... are invisible; they leave no paper trail by which they may be measured and tracked." On this account, most empirical work on cross-firm technology spillovers abstracts away from pinpointing specific mechanisms by which such spillovers occur1 and instead focuses on a simpler but more feasible objective of testing for the presence of cross-firm spillovers in general. The most common frameworks either (i) focus squarely on "productivity spillovers" by seeking to identify how a firm's productivity is affected by that of its peers--the "endogenous effect" in the Manski (1993) nomenclature--or (ii) take a more reduced-form approach centered only on measuring "contextual" spillover effects of
various productivity-modifying activities (FDI, R&D, exporting, etc.) facilitated by cross-firm spillovers in productivity. Recent examples of the first include Bazzi et al. (2017) and Serpa & Krishnan (2018) who study vertical productivity spillovers along supply chains and material-product connections. As it happens, the literature embracing the second framework is more predominant and has a longer history: e.g., see Alvarez & Lopez (2008) on spillovers from exporting; Javorcik (2004), Javorcik & Spatareanu (2008), Haskel et al. (2007), Blalock & Gertler (2008), Keller & Yeaple (2009), Barrios et al. (2011), Lu et al. (2017) on FDI spillovers; Branstetter (2001), Griffith et al. (2006), Bloom et al. (2013), Zacchia (2020) on technology spillovers from R&D; and Acharya & Keller (2008) on productivity spillover effects of imports.
Both empirical frameworks are usually operationalized in two steps, whereby one first recovers firm productivity from the production function estimates and then examines spillovers in the second step by (linearly) regressing these productivity estimates on various peer-group averages capturing firms' exposure to potential spillovers. Owing to its popularity and ease of implementation, most studies estimate firm productivity in the first step via the proxy variable approach a la Olley & Pakes (1996) and Levinsohn & Petrin (2003) that typically assumes that each firm's productivity process is an independent (over firms) exogenous Markov chain. However, if present, spillovers would generate cross-sectional dependence among firms, which is nonetheless being overlooked in the first-step estimation of productivity. Not only does this raise reservations about the identification of production function (and hence, productivity) econometrically, but more importantly, such a two-step procedure suffers from the conceptual "internal inconsistency" because the second-step regressions, in effect, contradictorily postulate the existence of spillover-induced cross-firm dependence in productivity which is at odds with the identifying assumptions used in the first step. As such, conclusions about spillovers based on a two-step procedure may be spurious.
With the above in mind, we provide a novel (semiparametric) methodology for the estimation of productivity spillovers. In line with the existence of cross-firm spillovers, in building our model, we explicitly accommodate cross-sectional peer dependence in firm-specific (latent)
productivity that such spillovers induce. This is fundamentally different from the aforementioned traditional two-step approach.2
Footnote 2: We should note that a two-step framework is not universal across empirical studies of productivity spillovers. The exceptions are predominantly from the literature on R&D-borne spillovers that favors the estimation of “augmented production functions.” We discuss the benefits of our methodology over the latter in Appendix A.
To keep our methodology amenable to a wide range of contexts, we conceptualize peer dependence in firm performance via spatiotemporal spillovers in latent productivity itself. We generalize the conventional setup of firm production assumed in the literature to introduce the dependence of each firm's productivity on its (geographically and industrially proximate) peer-group average productivity. To that end, we dispense with the standard assumption of independent (over _i_) exogenous Markov process for latent productivity (e.g., Olley and Pakes, 1996; Levinsohn and Petrin, 2003; Ackerberg et al., 2015; Gandhi et al., 2020, and others) in favor of a controlled productivity process with explicitly incorporated cross-sectional dependence. This permits the firm to improve its productivity by learning not only directly from its own productivity-modifying activities but also indirectly from the activities of its peers.
Explicit modeling of cross-sectional dependence in firm productivity directly affecting its evolution (along with a structural timing assumption about learning process) enables us to build upon the popular proxy variable technique to develop a unified identification scheme for both the latent productivity and spillover effects therein _simultaneously_ that is also robust to Ackerberg et al.'s (2015) and Gandhi et al.'s (2020) critiques. In fact, as we show in the paper, estimating the firm production function or productivity using traditional proxy methods while ignoring the spillover-induced cross-sectional dependence, as customarily done in the literature, likely leads to misspecification and omitted variable bias. This underscores the key practical advantage of our proposed methodology. Also, by virtue of a nonparametric formulation of the firm productivity process, we transcend restrictive additively linear specifications favored in the spillovers literature which lets us accommodate heterogeneous spillover effects.
Because our methodology can be easily adapted to admit various spillover origins, it is fit to investigate productivity spillovers in many contexts, including spatial agglomeration, R&D
externalities, learning from exporters, and others. In our paper, for example, we consider an application to the FDI inflows.
We also contribute to the literature on proxy-based identification of production functions more broadly, by providing a practical, easy-to-implement semiparametric adaptation of Gandhi et al.'s (2020) estimator. Our point of departure is a parametric assumption about the functional form of production function, which is the predominant modeling strategy in productivity literature with the Cobb-Douglas specification being the most popular among researchers. Along the lines of Doraszelski & Jaumandreu (2013), our modeling approach fully embraces the assumed parametric specification of the production function by explicitly utilizing a known functional form of the static first-order condition for materials and the inverse conditional input demand function that it implies. By doing so, we circumvent the need to integrate the estimated material elasticity function at _each_ observation in order to recover the unknown production function required by Gandhi et al.'s (2020) more computationally demanding, albeit admittedly less restrictive, nonparametric methodology. In contrast, our parametric inversion of the material demand yields a much simpler semiparametric estimator. We also show how to extend our methodology to more flexible specifications of the firm's production function such as translog.
Besides the empirical application of our methodology, we also demonstrate its ability to successfully identify firm productivity and cross-firm spillovers therein in a set of Monte Carlo experiments. The results are encouraging and show that our approach recovers the true parameters well, thereby lending strong support to the validity of our identification strategy. We also use the simulations to show how estimating spillovers via the popular but internally inconsistent two-step procedure can lead to spurious and misleading results.
The rest of the paper unfolds as follows. Section 2 provides context for our application centered on FDI spillovers. Section 3 describes a generic model of firm production with productivity-modifying learning and spillovers. We discuss identification and estimation in Sections 4 and 5, respectively. Section 6 reports simulation results. We present our empirical application in Section 7 and conclude in Section 8.
Application to FDI
Cross-firm productivity spillovers can propel significant positive externalities in many productivity-enhancing activities, which are especially important from a policy perspective. Take, for instance, inbound foreign direct investment.
Public policies aimed at attracting FDI are commonplace both in developing and developed economies. Besides immediate returns in the form of capital inflows and employment gains, the primary justification for the FDI-promoting government incentives is mostly centered around gaining access to intangible productive "knowledge" assets from abroad such as new technologies, proprietary know-hows, more efficient and innovative marketing and management practices, established relational networks, reputation, etc., which can boost productivity of domestic firms. More crucially, productivity-enhancing effects of inbound FDI are widely believed to realize broadly beyond immediate recipients, who benefit from direct learning of foreign knowledge, by also benefiting many other domestic firms via productivity spillovers. These spillovers may occur via informal contacts (e.g., attendance of trade shows, exposure to affiliate and/or competitor products and marketing, learning by imitation, customer-supplier discussions), more formal reverse engineering, or labor turnover, eventually yielding large within- and/or cross-industry productivity gains. Measuring the extent and significance of these "social returns" of FDI is therefore imperative for the design of effective industrial policy.
To empirically showcase our estimator, we apply it to study horizontal productivity spillovers in China's electric machinery manufacturing industry in 1998-2007, with a particular focus on the technology-transfer effects of inbound FDI on productivity via domestic firms' learning of more advanced/efficient foreign knowledge to which they may gain access directly through their _own_ foreign investors and indirectly through spillovers from their foreign-invested _peers_. Among the world's top destinations for foreign investment, China presents a natural environment for the analysis of broad productivity effects of FDI on domestic firms especially because of its "open door" policies aimed at promoting foreign investment (e.g., special economic zones
with regulatory environments favorable to foreign capital) and its fairly recent accession to the World Trade Organization in 2001. Focus on the electric machinery industry in particular is motivated by its being historically one of China's most fundamental manufacturing sectors and among the largest FDI recipients (see Appendix B for more on the choice of the industry).
The empirical literature on FDI spillovers has generally produced mixed findings, especially for the long-sought-after horizontal productivity spillovers (see Keller, 2008, 2010, for excellent surveys). Few earlier studies that have analyzed external productivity spillovers from inbound FDI in China (see Jiang et al., 2019, and the references therein) have done so using the two-step approach or by "augmenting" the firm's production function with the associated methodological issues as discussed earlier. The results have been mixed, further heightening the appeal of our study based on a new methodology. The reanalysis of FDI-borne technology spillovers in China is also timely and relevant in light of the ongoing trade disputes between the U.S. and China fostered, among other things, by grievances of the former against China's "unfair technology transfer regime" for foreign companies. Investigating the extent of external spillovers can therefore provide an informative context for a more holistic understanding of the FDI environment in the country and the implications of its technology-transfer rules and regulations.
To briefly preview our key results, we find that at least 87% of manufacturers of electric machinery in China enjoy significant productivity-boosting effects of inbound FDI, both directly and indirectly. At the median, an increase of the foreign share in all firms' equity by 10 percentage points, in the short run, improves each firm's productivity by 1.4% via direct learning and by 0.4% via external effects. The latter indirect effect of FDI is facilitated by substantial cross-firm productivity spillovers in the industry, with the median spillover elasticity estimated at 0.33. These productivity spillovers are significantly positive for about 84% of firms in the industry.
## 3 Production with Learning and Spillovers
We now describe a generic paradigm of production in the presence of productivity-modifying learning and spillovers. Consider the production process of a firm \(i=1,\ldots,n\) in time period
\(t=1,\ldots,T\) in which physical capital \(K_{it}\), labor \(L_{it}\) and an intermediate input such as materials \(M_{it}\) are transformed into the output \(Y_{it}\) via production function, given the log-additive Hicks-neutral firm productivity. Following the popular convention in the literature (e.g., Olley & Pakes, 1996; Levinsohn & Petrin, 2003; Topalova & Khandelwal, 2011; Doraszelski & Jaumandreu, 2013), we assume that the firm's stochastic production process is Cobb-Douglas:
\[Y_{it}=K_{it}^{\beta_{K}}L_{it}^{\beta_{L}}M_{it}^{\beta_{M}}\exp\left\{ \omega_{it}+\eta_{it}\right\}, \tag{3.1}\]
where the exponent \(\omega_{it}+\eta_{it}\) is the latent "composite" productivity residual consisting of (i) the firm \(i\)'s persistent productivity \(\omega_{it}\) and (ii) a random transitory productivity shock \(\eta_{it}\). Our methodology can also adopt more flexible specifications of the firm's production function such as the log-quadratic translog specification. See Appendix C for this extension of our model.
We assume that \(K_{it}\) and \(L_{it}\) are subject to adjustment frictions (e.g., time-to-install, hiring and training costs) and thus are quasi-fixed, whereas \(M_{it}\) is freely varying. That is, \(M_{it}\) is chosen in period \(t\), whereas \(K_{it}\) and \(L_{it}\) are determined in period \(t-1\). Both \(K_{it}\) and \(L_{it}\) are the state variables with dynamic implications and follow their respective deterministic laws of motion:
\[K_{it}=I_{it-1}+(1-\delta)K_{it-1}\quad\text{and}\quad L_{it}=H_{it-1}+L_{it-1}, \tag{3.2}\]
where \(I_{it}\), \(H_{it}\) and \(\delta\) are the gross investment, net hiring and the depreciation rate, respectively. The firm maximizes a discounted stream of expected life-time profits in perfectly competitive output and factor markets. Also, for convenience, let \(\mathcal{I}_{it}\) denote the information set available to the firm \(i\) for making period \(t\) production decisions.
Our main objective is to study the role of learning and spillovers in the evolution of firm productivity. To that end, we need to dispense with the standard assumption of _exogenous_ Markov process for \(\omega_{it}\) in favor of a _controlled_ productivity process and, more importantly, to explicitly recognize the potential for cross-sectional dependence therein. For the sake of generality, we denote the productivity-modifying "controls" via a (vector of) generic variable(s) \(G_{it}\). This variable may measure the firm's deliberate activities aimed at improving its productivity such as R&D expenditures (Doraszelski & Jaumandreu, 2013) or some other aspects of its behavior in the marketplace that have productivity implications such as exporting (De Loecker, 2013). Depending on the application of interest, \(G_{it}\) may also admit measures of the firm's exposure to technological innovations from investors or partners--the focus of our empirical illustration--or its access to public subsidies and other forms of favorable treatment from the government owing to political connections, etc. In the end, no matter the choice of \(G_{it}\), the rationale for its inclusion in the firm productivity evolution is to capture within-firm "learning" facilitated by the firm's own productivity-modifying activities or characteristics.
Next, we permit the firm \(i\) to improve its productivity by learning not only from its _own_ activities but also from its _peer_ firms. We do so by relaxing the usual assumption of firm productivity being an independent (over \(i\)) Markov chain to allow for cross-sectional dependence. More concretely, we assume that the \(i\)th firm's productivity \(\omega_{it}\) evolves according to the following controlled first-order process:
\[\omega_{it}=\mathbb{E}\bigg[\omega_{it}\,\bigg|\,\omega_{i,t-1},G_{i,t-1},\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\bigg]+\zeta_{it}, \tag{3.3}\]
where \(\{s_{ij,t-1};\ j(\neq i)=1,\ldots,n\}\) are the peer-identifying weights (from the perspective of firm \(i\)), and \(\zeta_{it}\) is a mean-independent unanticipated random innovation in persistent productivity normalized to have a zero mean: \(\mathbb{E}[\zeta_{it}|\mathcal{I}_{i,t-1}]=\mathbb{E}[\zeta_{it}|\omega_{i,t- 1},G_{i,t-1},\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}]=0\).
While the exact choice of how to construct peer weights \(\{s_{ijt}\}\) depends on the empirical context, for a general baseline case here, we let the peers be identified based on their spatial vicinity and industrial similarity to firm \(i\). Thus, letting \(\mathcal{L}(i,t)\) represent a set of spatially proximate "neighbors" of the firm \(i\) in time period \(t\) that also operate in the same industry, peer weights \(\{s_{ijt}\}\) are constructed for each \((i,t)\) as follows:
\[s_{ijt}=\frac{\mathbb{1}\big{\{}(j,t)\in\mathcal{L}(i,t)\big{\}}}{\sum_{k(\neq i )=1}^{n}\mathbb{1}\big{\{}(k,t)\in\mathcal{L}(i,t)\big{\}}}, \tag{3.4}\]
where the normalization in the denominator yields a convenient interpretation of \(\sum_{j(\neq i)}s_{ijt}\omega_{jt}\) as the average peer productivity. Focusing on geographically proximate peers within the industry fits a broader narrative in regional and urban economics about the scopes of agglomeration economies and the localized cross-firm productivity spillovers (due to technology and knowledge diffusion, labor market interactions, etc.) being one of the main sources of such externalities (e.g., see Duranton & Puga, 2004; Rosenthal & Strange, 2004). By restricting the scope to the same industry we effectively focus on intra-industry horizontal productivity spillovers.
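For concreteness, the construction in (3.4) is straightforward to operationalize from coarse group identifiers. The snippet below is a minimal illustrative sketch (in Python, with hypothetical inputs; it is not the code behind our empirical results) that builds the row-normalized weight matrix for a single cross-section from region and industry labels:

```python
import numpy as np

def peer_weights(region, industry):
    """Row-normalized peer weights s_ij as in (3.4): firm j is a peer of firm i
    if both operate in the same region and the same industry; weights are
    uniform within the peer group, zero otherwise, and the firm itself is
    excluded."""
    region = np.asarray(region)
    industry = np.asarray(industry)
    same_group = (region[:, None] == region[None, :]) & \
                 (industry[:, None] == industry[None, :])
    np.fill_diagonal(same_group, False)                 # exclude j = i
    counts = same_group.sum(axis=1, keepdims=True)      # peer-group sizes
    return np.where(counts > 0, same_group / np.maximum(counts, 1), 0.0)

# toy cross-section: 5 firms, two regions, one 2-digit industry (code 39)
S = peer_weights(region=[1, 1, 1, 2, 2], industry=[39] * 5)
print(S.sum(axis=1))  # each row sums to 1 whenever the firm has at least one peer
```

By construction, each row of the resulting matrix sums to one whenever the firm has at least one peer, so \(\sum_{j(\neq i)}s_{ijt}\omega_{jt}\) is the simple average of the peers' productivities.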
**Remark 1**.: The weighting scheme in (3.4) treats cross-firm spillovers symmetrically in that all members of a peer group affect each other's productivity. Not only is this a standard approach to measuring "peer effects," but in doing so we also wish to remain as agnostic about spillovers as possible and avoid imposing priors about the directionality of peer dependence. But should one choose to regulate the direction of spillovers by restricting them to occur, say, from more productive to less productive firms only, our framework can be modified to accommodate that too. For more discussion, see Appendix D.
**Remark 2**.: In (3.4), a uniform weighting is applied across all peers of the firm \(i\) that are located in its spatial proximity. This implicitly assumes that within boundaries of the firm's spatial "neighborhood" the distance gradient is of second-order importance for knowledge spillovers. The main benefit of postulating such a feature of peer networks is that it does not require granular geographic data about individual firms and can be operationalized using coarse location information such as ZIP code, census tract, city or region. The degree to which this is a reasonable weighting scheme obviously depends on the selected "level" of neighborhoods as well as the application-specific institutional context. If desired and feasible, peer weights \(\{s_{ijt}\}\) can be amended to incorporate a (decay) function of the distance between \(i\) and its peers \(\{j\}\).
The innovativeness of our model in the context of a broader literature on the structural proxy variable estimation of production functions is as follows. The "controlled" formulation in (3.3) is more general than the most commonly assumed exogenous Markov process a la Olley & Pakes (1996) whereby \(\omega_{it}=\mathbb{E}\left[\omega_{it}|\omega_{i,t-1}\right]+\zeta_{it}\) because it enables the firm to influence the evolution of its productivity via its own productivity-enhancing activities/characteristics as well as by interacting with other local firms in the industry as captured by \(G_{i,t-1}\) and \(\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\), respectively. While controlled Markov processes for \(\omega_{it}\) are not novel to the literature (e.g., Doraszelski & Jaumandreu, 2013; De Loecker, 2013), all such studies have focused exclusively on an _independently_ (over \(i\)) distributed \(\omega_{it}\), with the latter depending on the firm's own productivity and productivity-modifying variables. Our important generalization is that we permit cross-sectional dependence in firm productivity within peer networks due to agglomeration.
Consider the spatiotemporal autoregressive conditional mean of \(\omega_{it}\) in (3.3) that represents the \(i\)th firm's expected one-period-ahead productivity at time \(t-1\). First, by letting it depend on the firm's _own_ productivity modifier \(G_{i,t-1}\), we are able to account for (internal) _direct_ learning taking place within the firm, with the corresponding estimand of interest being
\[DL_{it}=\frac{\partial\mathbb{E}[\omega_{it}|\cdot]}{\partial G_{i,t-1}}. \tag{3.5}\]
Second, in including the spatial average of other firms' productivities \(\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\), not only can we accommodate potential agglomeration externalities facilitated by productivity _spillovers_ across firms, but we are also able to capture the (external) _indirect_ learning whereby the productivity-modifying activities may have secondary effects on firms beyond their immediate (and intended) beneficiary. Concretely, defining the cross-firm productivity spillovers as
\[SP_{it}=\frac{\partial\mathbb{E}[\omega_{it}|\cdot]}{\partial\sum_{j(\neq i) }s_{ij,t-1}\omega_{j,t-1}}, \tag{3.6}\]
the measure of firm \(i\)'s indirect learning from firm \(j\)'s productivity-modifying activities is
\[IL_{ijt}=\frac{\partial\mathbb{E}[\omega_{it}|\cdot]}{\partial G_{j,t-2}}= \frac{\partial\mathbb{E}[\omega_{it}|\cdot]}{\partial\sum_{j(\neq i)}s_{ij,t-1 }\omega_{j,t-1}}\left(s_{ij,t-1}\frac{\partial\omega_{j,t-1}}{\partial G_{j,t- 2}}\right)=SP_{it}\times s_{ij,t-1}\times DL_{j,t-1}. \tag{3.7}\]
The \(IL_{ijt}\) effect in (3.7) is defined for an \((i,j)\) pair of firms, and we can aggregate it to the total indirect learning of firm \(i\) from all of its peers as
\[TIL_{it}=\sum_{j(\neq i)}IL_{ijt}=SP_{it}\times\sum_{j(\neq i)}s_{ij,t-1}DL_{j,t-1}. \tag{3.8}\]
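To fix ideas with a purely hypothetical numerical illustration (the numbers below are not estimates), suppose firm \(i\) has four equally weighted peers, so \(s_{ij,t-1}=0.25\) for each, its spillover elasticity is \(SP_{it}=0.3\), and each peer's direct-learning effect is \(DL_{j,t-1}=0.12\). Then each pairwise effect in (3.7) is \(IL_{ijt}=0.3\times 0.25\times 0.12=0.009\), and aggregating over the four peers per (3.8) gives

\[TIL_{it}=SP_{it}\times\sum_{j(\neq i)}s_{ij,t-1}DL_{j,t-1}=0.3\times(4\times 0.25\times 0.12)=0.036.\]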
As defined in (3.5)-(3.8), the learning and spillover effects on firm productivity are "short-run," but they accumulate and diffuse over time owing to the persistent nature of the firm's productivity evolution. Indirectly, this dynamic feature permits two peer firms that are separated temporally to continue to affect one another with the effect size attenuating over time. Such time-separated interactions characterize the temporal scope of productivity spillovers which
helps propel _dynamic_ agglomeration economies. The underlying idea here is that the knowledge acquired either through internal learning or from peers takes time to accumulate. Together with the geographic and industrial scopes of spillovers embedded in the definition of peer weights \(\{s_{ijt}\}\), the autoregressiveness of productivity specification covers the three main dimensions of external economies (see Rosenthal & Strange, 2004).
**Remark 3**.: The total indirect learning effect in (3.8) is, effectively, a measure of spillovers _specifically_ in \(G\). In this, our conceptualization of external effects of \(G\) as operating through the firm's exposure to the aggregate of its peers' unobservable productivities--that is, via "productivity spillovers" more broadly--fundamentally differs from the conventional approach to measuring spillovers in productivity-modifying activities (think, FDI, R&D or export spillovers) that relies on observable industry aggregates of \(G\). That is, we measure the \(i\)th firm's exposure to the _external_ knowledge using an aggregate of \(\{\omega_{jt};\ j(\neq i)=1,\ldots,n\}\) as opposed to an aggregate of \(\{G_{jt};\ j(\neq i)=1,\ldots,n\}\). Our formulation is more flexible because it does not restrict the origins of cross-firm productivity spillovers to \(G\) alone. It is also more realistic and conceptually congruous because it incorporates secondary information about the peer firms' own direct/internal learning facilitated by the productivity-modifying activities they undertake: namely, to learn from one's peers' \(G\), peers themselves should learn from their own \(G\) first.
The productivity evolution process in (3.3) characterizes the peer interaction between firms through their productivity. Each firm \(i\) has a "reference group" of spatially proximate peers from the same industry \(\mathcal{L}(i,t)\) with which it interacts. The identification of such cross-peer relations in networks is a notoriously challenging problem (e.g., see Manski, 1993, 2000; Moffitt, 2001; Blume et al., 2011). The potential obstacles include (i) the perfect functional dependence between the average outcome of the group and its mean characteristics due to the so-called "reflection problem" which may leave no exogenous variation excluded to instrument the endogenous peer behavior when there is more than one channel for the peer effects, (ii) the confounding presence of unobserved "correlated" group effects, and (iii) the endogenous group membership (or network structure). In our case, the additional layer of complexity is the latency of firm productivity. This aspect is addressed in the proxy variable framework by making full use of the behavioral model of firm production, and we discuss this in detail in Section 4. We now consider the issues pertaining to the measurement of peer effects between firms.
The identification of learning and spillover effects on firm productivity in our model is based on several structural assumptions about the timing as well as the underlying form of peer interactions and network organization. To begin with, the productivity process in (3.3) is a dynamic analogue of a "pure endogenous-effects model" in Manski's nomenclature. It postulates that the cross-firm peer interactions occur only through the outcomes (i.e., \(\omega\)) whereby each firm's productivity is affected by the mean productivity of the peers in its reference group. As noted in Remark 3, we effectively assume away the "contextual effects" of the peers' productivity modifiers and, in doing so, address the first of Manski's (1993) two non-identification results about the indistinguishability of endogenous and exogenous peer effects.3 The latter issue becomes moot because in the absence of contextual effects of \(\{G_{j,t-1};\ j(\neq i)=1,\ldots,n\}\) on \(\omega_{it}\) our model postulates a single channel of cross-peer dependence. Appendix E discusses how our setup may be augmented to allow for such contextual effects.
Footnote 3: His second result is about the difficulty of distinguishing "real" peer interactions through observables from the unobservable "correlated effects"; more on this later.
The evolution process in (3.3) also implicitly assumes that learning occurs with a delay which is why the dependence of \(\omega_{it}\) on both its own productivity-modifying controls and peers' productivity is lagged, implying that the improvements in firm productivity take a period to materialize. Furthermore, in \(\mathbb{E}[\zeta_{it}|\mathcal{I}_{i,t-1}]=0\) we assume that firms do not experience changes in their location and/or productivity modifiers in light of expected _future_ innovations in productivity. This timing assumption about the arrival of \(\zeta_{it}\) renders both the lagged \(G_{i,t-1}\) and a set of spatially proximate peers \(\mathcal{L}(i,t-1)\) at time \(t-1\) that defines the peer weights \(\{s_{ij,t-1};\ j(\neq i)=1,\ldots,n\}\) predetermined (weakly exogenous) with respect to the firm \(i\)'s productivity innovation at time \(t\), which helps identify both the learning and spillover effects on firm productivity.
When it comes to internal learning effects (via own \(G_{i,t-1}\)), such a timing assumption is quite
common in the productivity literature (e.g., see Van Biesebroeck, 2005; De Loecker, 2013; Doraszelski & Jaumandreu, 2013; Malikov et al., 2020). More specifically, \(\mathbb{E}[\zeta_{it}|\mathcal{I}_{i,t-1}]=0\) rules out the firm's ability to systematically predict future productivity shocks. Instead, the Markovian process in (3.3) states that the firm anticipates the effect of its \(G\) productivity modifier on \(\omega_{it}\) in period \(t\) when adjusting the former in period \(t-1\), and the conditional mean \(\mathbb{E}\left[\omega_{it}|\omega_{i,t-1},G_{i,t-1},\right.\)\(\left.\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\right]\) is what captures that _expected_ productivity. But the _actual_ firm productivity at time \(t\) also includes a random innovation \(\zeta_{it}\). Essentially, the conditional-expectation-function error \(\zeta_{it}\) represents unpredictable uncertainty that is naturally associated with productivity-modifying activities (new R&D investments, entering export markets or attracting new foreign investors) such as chance in discovery, success in implementation, etc. This productivity innovation \(\zeta_{it}\) is realized after \(G_{i,t-1}\) is fully determined.4
Footnote 4: Depending on the source of learning, it may be possible to reasonably relax the assumption of a delayed learning effect of \(G_{it}\) on firm productivity. See discussion in Appendix E.
In our paper, we also extend this timing assumption to external cross-firm learning via spillovers, which yields mean-orthogonality of the spatiotemporal "lag" of peers' productivities \(\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\) and the innovation \(\zeta_{it}\). That is, what is assumed is the weak exogeneity of the location-dependent peer weights \(\{s_{ij,t-1}\}\), according to which firms do not relocate in anticipation of _future_ productivity shocks because such shocks are purely random. This rules out endogeneity of the firm's peer network in period \(t-1\) with respect to the productivity shock \(\zeta_{it}\) it experiences at time \(t\). The plausibility of this is further buttressed by the fact that firm relocation in most industries (e.g., agriculture, manufacturing, utilities) is highly, if not prohibitively, costly. In fact, our assumption about the weak exogeneity of group membership is not as strong as the standard assumption of fixed (non-random) networks commonly made in the (empirical) social-effects or spatial literature.
Note that our timing assumption about learning and spillover effects does _not_ rule out a _contemporaneous_ correlation between firm productivity and its productivity modifiers or even the location. That is, we do not assume that \(\mathbb{E}[\zeta_{it}|\mathcal{I}_{it}]=0\). Consequently, firms are permitted to endogenously update their \(G_{it}\) as well as to change their locations based on the (observable by
firms) period \(t\) level of their productivity \(\omega_{it}\). For instance, in the presence of inbound FDI opportunities that can help improve a domestic firm's productivity (when \(G_{it}\) measures the firm's exposure to investors from abroad), the more productive firms are more likely to be attractive for investors, and the corresponding non-zero \(\text{Cov}[G_{it},\omega_{it}]\) is well within our framework.
An important implication of our structural assumption about \(\mathbb{E}[\zeta_{it}|\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}]=0\) is that the innovation in the productivity evolution process (3.3) does _not_ contain any unobservable "correlated effects" at the reference group level--to borrow Manski's terminology--the presence of which can complicate, if not hinder, the identification of peer effects occurring through the group mean productivity \(\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\). Effectively, we attribute all cross-firm dependence in productivity to the within-group dependence of each firm's underlying productivity on that of its peers as opposed to the tendency of all group firm-members to see their productivities evolve in a similar fashion due to the influence of common group unobservables such as shared locational/institutional environments. This is an admittedly strong but fairly common working assumption in the literature, given the well-known challenges in tackling group-level unobservables in network models (for an excellent review, see Blume et al., 2011). Our no-group-effects assumption echoes the existing studies of R&D/FDI/export spillovers and the productivity literature more broadly, and we maintain it to maximize comparability with the commonly used methodologies. Having said that, this assumption can be relaxed--we do so in our robustness checks--if we restrict the group-level unobservables to be time-invariant a la Graham & Hahn (2005) and Bramoulle et al. (2009).
Fundamentally, the potential threats to identification of the spillover effects posed by the correlated group effects can otherwise be cast as a spatial selection/sorting problem, whereby more productive firms may be _ex ante_ sorting into what then become high-productivity locations. Under this scenario, when we compare the firm to its spatial peers, we may mistakenly attribute any future productivity improvements to spillovers from the peers (i.e., agglomeration), while in actuality it merely reflects the underlying propensity of _all_ firms in this location to be more productive and, consequently, more apt at improving their productivity.
While there has recently been notable progress in formalizing and understanding these coincident phenomena theoretically (e.g., Behrens et al., 2014; Gaubert, 2018), disentangling firm sorting and agglomeration remains a non-trivial task empirically.5 However, by including the firm's _own_ lagged productivity in the autoregressive \(\omega_{it}\) process in (3.3), we are able (at least to some extent) to account for this potential self-sorting because sorting into locations is heavily influenced by the firm's own productivity (oftentimes stylized as the "talent" or "efficiency" in theoretical models). That is, the spillover effect \(SP_{it}\) on future firm productivity in our model is measured after partialling out the contribution of its own productivity. Incidentally, De Loecker (2013) argues the same in the context of export-based learning and self-selection of exporters.
Footnote 5: Urban economics literature also distinguishes the third endogenous process usually referred to as the “selection” which differs from sorting in that it occurs _ex post_ after the firms had self-sorted into locations and which determines their continuing survival. We abstract away from this low-productivity-driven attrition issue in the light of the growing empirical evidence suggesting that it explains none of spatial productivity differences which, in contrast, are mainly driven by agglomeration economies (see Combes et al., 2012). Relatedly, the firm attrition out of the sample has also become commonly accepted as a _practical_ non-issue in the productivity literature so long as the data are kept unbalanced. For instance, Levinsohn & Petrin (2003, p.324) write: “The original work by Olley and Pakes devoted significant effort to highlighting the importance of not using an artificially balanced sample (and the selection issues that arise with the balanced sample). They also show once they move to the unbalanced panel, their selection correction does not change their results.”
We maintain the _i.i.d._ assumption about the random transitory shock \(\eta_{it}\), from which it follows that \(\mathbb{E}[\eta_{it}|\mathcal{I}_{it}]=\mathbb{E}[\eta_{it}]=0\) with the mean normalized to zero. The latter implies that the shock \(\eta_{it}\) is observable to firms in period \(t\) only _ex post_, after all production decisions have been made.
## 4 A System Approach to Identification via Proxy Variables
Logging the production function in (3.1) and making use of the Markovian nature of \(\omega_{it}\) from (3.3), we obtain
\[y_{it}=\beta_{K}k_{it}+\beta_{L}l_{it}+\beta_{M}m_{it}+h\left(\omega_{i,t-1},G _{i,t-1},\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\right)+\zeta_{it}+\eta_{it}, \tag{4.1}\]
where the lower-case variables denote the logs of the respective upper-case variables, and \(h[\cdot]\equiv\mathbb{E}[\omega_{it}|\cdot]\) is some unknown function. Under our structural assumptions about firm behavior, all right-hand-side covariates in (4.1) are predetermined and thus mean-independent of \(\zeta_{it}+\eta_{it}\), except for the freely varying input \(m_{it}\) that the firm chooses in time period \(t\) conditional on \(\omega_{it}\)
(among other state variables including quasi-fixed inputs) thereby making it a function of \(\zeta_{it}\). That is, the materials variable is endogenous.
To consistently estimate (4.1), we first need to address the latency of firm productivity \(\omega_{it}\). A widely popular solution to this problem in the literature is to adopt a proxy variable approach a la Levinsohn & Petrin (2003) whereby unobservable \(\omega_{it}\) is proxied by inverting the firm's conditional demand for an observable static input \(m_{it}\). However, Gandhi et al. (2020) show that identification generally fails under such a standard estimation procedure due to the lack of a valid instrument for the endogenous \(m_{it}\) despite the abundance of predetermined higher-order lags of inputs. Therefore, the production function remains unidentified in the flexible input. To solve this problem, they suggest exploiting a structural link between the production function and the firm's (static) optimality condition. In what follows, we build on this idea which we adapt in the spirit of Doraszelski & Jaumandreu (2013), whereby we explicitly make use of the assumed functional form of the production function to streamline identification of the material elasticity and to ease computational burden of estimation (also see Remark 4).
We first focus on the identification of the production function in its flexible input \(M_{it}\). Specifically, given the Cobb-Douglas form, we seek to identify the material elasticity parameter \(\beta_{M}\). To do so, we consider an equation for the firm's first-order condition with respect to \(M_{it}\). Since it is a static input, the firm's optimal choice of \(M_{it}\) can be modeled as the restricted expected profit-maximization problem6 subject to the (already) optimal allocation of quasi-fixed inputs:
Footnote 6: Under the risk neutrality of firms.
\[\max_{M_{it}}P_{t}^{Y}K_{it}^{\beta_{K}}L_{it}^{\beta_{L}}M_{it}^{\beta_{M}} \exp\{\omega_{it}\}\theta-P_{t}^{M}M_{it}, \tag{4.2}\]
where \(P_{t}^{Y}\) and \(P_{t}^{M}\) are respectively the output and material prices that, under the commonly invoked assumption of perfect competition, need not vary across firms; and \(\theta\equiv\mathbb{E}[\exp\{\eta_{it}\}|\mathcal{I}_{it}]\). The first-order condition is given by
\[\beta_{M}P_{t}^{Y}K_{it}^{\beta_{K}}L_{it}^{\beta_{L}}M_{it}^{\beta_{M}-1}\exp \{\omega_{it}\}\theta=P_{t}^{M}, \tag{4.3}\]
which can be transformed via dividing it by the production function in (3.1) to obtain the following stochastic material share equation (in logs):
\[\ln V_{it}=\ln(\beta_{M}\theta)-\eta_{it}, \tag{4.4}\]
where \(V_{it}\equiv P_{t}^{M}M_{it}/(P_{t}^{Y}Y_{it})\) is the nominal share of material costs in total revenue. This share is readily observable in the data, and the construction thereof does not require firm-level prices.
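Explicitly, dividing both sides of (4.3) by the corresponding sides of (3.1) cancels the inputs and the persistent productivity, leaving

\[\frac{\beta_{M}P_{t}^{Y}\theta}{M_{it}\exp\{\eta_{it}\}}=\frac{P_{t}^{M}}{Y_{it}}\quad\Longleftrightarrow\quad\frac{P_{t}^{M}M_{it}}{P_{t}^{Y}Y_{it}}=\beta_{M}\theta\exp\{-\eta_{it}\},\]

which, after taking logs, is exactly (4.4).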
Intuitively, equation (4.4) says that _unobservable_ material elasticity of the production function \(\beta_{M}\) can be identified from _observable_ material share \(V_{it}\) because the two must be equal on average (in logs) to maximize profits. Specifically, it identifies \(\beta_{M}\times\theta\) (and the random productivity residual \(\eta_{it}\)) based on \(\mathbb{E}[\eta_{it}]=0\):
\[\ln(\beta_{M}\theta)=\mathbb{E}[\ln V_{it}]. \tag{4.5}\]
To identify the material elasticity \(\beta_{M}\) net of constant \(\theta\), recognize that \(\theta\) can be identified via \(\theta=\mathbb{E}[\exp\{\eta_{it}\}]=\mathbb{E}\left[\exp\{\ln(\beta_{M} \theta)-\ln V_{it}\}\right]=\mathbb{E}\left[\exp\{\mathbb{E}[\ln V_{it}]-\ln V _{it}\}\right]\). Then, we have that
\[\beta_{M}=\exp\left\{\mathbb{E}[\ln V_{it}]\right\}/\mathbb{E}\left[\exp\{ \mathbb{E}[\ln V_{it}]-\ln V_{it}\}\right]. \tag{4.6}\]
With \(\beta_{M}\) identified from (4.6), we have thus identified the production function in the dimension of its endogenous freely varying input \(M_{it}\), thereby effectively circumventing Gandhi et al.'s (2020) critique. To see the latter, we rewrite (4.1) as follows:
\[y_{it}^{*}=\beta_{K}k_{it}+\beta_{L}l_{it}+h\left(\omega_{i,t-1},G_{i,t-1}, \sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\right)+\zeta_{it}+\eta_{it}, \tag{4.7}\]
where \(y_{it}^{*}\equiv y_{it}-\beta_{M}m_{it}\) is already identified and thus observable, and our model in (4.7) no longer contains endogenous variables needing instrumentation.
To identify the remaining parameters of the production function \((\beta_{K},\beta_{L})^{\prime}\) as well as latent firm productivity \(\omega_{it}\) in (4.7), we make use of the _known_ parametric form of the conditional material demand function \(M_{it}=\mathbb{M}(\omega_{it},K_{it},L_{it},P_{t}^{Y},P_{t}^{M})\) implied by the first-order condition in (4.3) which we invert for \(\omega_{it}\). Under our standard assumptions about firm behavior and regularity conditions on the production function, \(\mathbb{M}(\cdot)|M_{it}>0\) must be strictly monotonic in \(\omega_{it}\) for any given \((K_{it},L_{it},P_{t}^{Y},P_{t}^{M})\), and hence we can invert \(\mathbb{M}(\cdot)\) to control for unobserved persistent productivity via \(\omega_{it}=\mathbb{M}^{-1}(M_{it},K_{it},L_{it},P_{t}^{Y},P_{t}^{M})\). Specifically, substituting for \(\omega_{i,t-1}\) and \(\omega_{j,t-1}\)
using the inverted material function derived analytically from (4.3), from (4.7) we get
\[y_{it}^{*}=\beta_{K}k_{it}+\beta_{L}l_{it}+h\left(\omega_{i,t-1}^{*} \left(\beta_{K},\beta_{L}\right),G_{i,t-1},\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t- 1}^{*}\left(\beta_{K},\beta_{L}\right)\right)+\zeta_{it}+\eta_{it}, \tag{4.8}\]
where
\[\omega_{it}^{*}\left(\beta_{K},\beta_{L}\right)=\left[(1-\beta_{M})m_{it}-\ln( \beta_{M}\theta)-\ln(P_{t}^{Y}/P_{t}^{M})\right]-\beta_{K}k_{it}-\beta_{L}l_{ it}\quad\forall i,t \tag{4.9}\]
is the inverted material demand function in which the bracketed component is already observable and therefore the only remaining unknown parameters in it are \((\beta_{K},\beta_{L})^{\prime}\). All right-hand-side covariates in the semiparametric model (4.8) are weakly exogenous and thus self-instrument. The model is thus identified on the basis of
\[\mathbb{E}\left[\zeta_{it}+\eta_{it}\bigg{|}k_{it},l_{it},k_{i,t-1 },l_{i,t-1},m_{i,t-1},G_{i,t-1},\sum_{j\neq i}s_{ij,t-1}k_{j,t-1},\sum_{j\neq i }s_{ij,t-1}l_{j,t-1},\sum_{j\neq i}s_{ij,t-1}m_{j,t-1}\right]=0, \tag{4.10}\]
where we have made explicit use of the variables entering the proxy function \(\omega_{it}^{*}\left(\beta_{K},\beta_{L}\right)\).
The appearance of group averages of the peers' predetermined inputs in (4.10) is akin to the idea of instrumenting the endogenous group mean of an outcome with the exogenous group mean characteristics, which is a common identification strategy in both the social-effects and spatial econometrics literature (e.g., see LeSage & Pace, 2009; Bramoulle et al., 2009). The critical distinction here is that, in our case, the "group mean of an outcome" \(\sum_{j\neq i}s_{ij,t-1}\omega_{j,t-1}\) is _not_ endogenous with respect to \(\zeta_{it}+\eta_{it}\) and therefore needs no instrumentation. In contrast, our use of the "group mean characteristics" \(\left(\sum_{j\neq i}s_{ij,t-1}k_{j,t-1},\sum_{j\neq i}s_{ij,t-1}l_{j,t-1},\sum_ {j\neq i}s_{ij,t-1}m_{j,t-1}\right)^{\prime}\) is effectively in their proxy-variable capacity given latency of \(\omega_{j,t-1}\).
**Remark 4**.: Following the steps of Doraszelski & Jaumandreu (2013), our approach fully embraces the assumed parametric specification of the firm's production function by explicitly utilizing the known functional form of the first-order condition for materials and the inverse conditional input demand function that it implies. By doing so, we circumvent the need to integrate the estimated material elasticity function at _each_ observation in order to recover the unknown production function required by Gandhi et al.'s (2020) nonparametric methodology. Importantly, by relying on parameter restrictions between the production function and inverted material demand function in (4.8), we do not have to rely on nonparametric methods to estimate the unknown proxy function for \(\omega\) that appears inside the also unknown \(h(\cdot)\) function. Otherwise, identification of (4.8) would have been complicated by the presence of a nonparametric \(\mathbb{M}^{-1}(\cdot)\) function (evaluated at multiple data points7) inside another nonparametric function \(h(\cdot)\). Our parametric inversion of \(\mathbb{M}^{-1}(\cdot)\) yields a much simpler semiparametric estimator.
Footnote 7: That is, evaluated at \((m_{i,t-1},k_{i,t-1},l_{i,t-1})\) to proxy for \(\omega_{i,t-1}\) as well as at \((m_{j,t-1},k_{j,t-1},l_{j,t-1})\)\(\forall\)\(j\) to proxy for \(\omega_{j,t-1}\) entering the spillover-capturing peer group average.
**Remark 5**.: Our model is also robust to Ackerberg et al.'s (2015) critique that focuses on the potential inability of structural proxy variable estimators to separably identify the production function and productivity proxy. This issue arises in the wake of perfect functional dependence between variable inputs appearing both inside the unknown production function and productivity proxy function. Our second-stage equation (4.8) does not suffer from such a problem because it contains no (endogenous) variable input on the right-hand side, the corresponding parameter of which has already been identified from the share equation in the first stage.
Lastly, with all parameters of the production function \((\beta_{K},\beta_{L},\beta_{M})^{\prime}\) and the transitory productivity shock \(\eta_{it}\) successfully identified in the two stages, we readily identify latent firm productivity \(\omega_{it}\) from the production function in logs: \(\omega_{it}=y_{it}-\beta_{K}k_{it}-\beta_{L}l_{it}-\beta_{M}m_{it}-\eta_{it}\).
## 5 Estimation Procedure
We implement our identification strategy in two stages. In the first stage, we estimate the material elasticity of the production function. Based on (4.6), the consistent estimator of \(\beta_{M}\) is
\[\widehat{\beta}_{M}=\exp\left\{\frac{1}{N}\sum_{i}\sum_{t}\ln V_{it}\right\} \Big{/}\Big{[}\frac{1}{N}\sum_{i}\sum_{t}\exp\left\{\left(\frac{1}{N}\sum_{i} \sum_{t}\ln V_{it}\right)-\ln V_{it}\right\}\Big{]}, \tag{5.1}\]
where \(N\) is the total number of observations, which equals \(nT\) in the case of a balanced panel.
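For illustration only, the first-stage computation in (5.1) amounts to a few lines of code. The following is a minimal sketch (array names and the simulated check are hypothetical; this is not our estimation code):

```python
import numpy as np

def first_stage(log_share):
    """First-stage estimator in (5.1): recover beta_M, theta and eta_it from
    the observed log material-cost shares ln V_it."""
    log_share = np.asarray(log_share, dtype=float)
    log_bm_theta = log_share.mean()                      # estimate of ln(beta_M * theta)
    eta_hat = log_bm_theta - log_share                   # transitory shocks, cf. (4.4)
    theta_hat = np.exp(eta_hat).mean()                   # estimate of theta = E[exp(eta)]
    beta_M_hat = np.exp(log_bm_theta) / theta_hat        # estimate of beta_M, cf. (4.6)
    return beta_M_hat, theta_hat, eta_hat

# quick sanity check on simulated shares with beta_M = 0.65 and eta ~ N(0, 0.07)
rng = np.random.default_rng(0)
eta = rng.normal(0.0, np.sqrt(0.07), size=50_000)
lnV = np.log(0.65 * np.exp(eta).mean()) - eta
beta_M_hat, theta_hat, _ = first_stage(lnV)
print(round(float(beta_M_hat), 3), round(float(theta_hat), 3))  # roughly 0.65 and 1.04
```

The same quantities also deliver \(\widehat{\eta}_{it}=\ln(\widehat{\beta_{M}\theta})-\ln V_{it}\), which is reused below.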
We then estimate \(y_{it}^{*}\) via \(\widehat{y}_{it}^{*}=y_{it}-\widehat{\beta}_{M}m_{it}\) and also construct "partial" estimates of the productivity proxy function \(\omega_{it}^{*}\left(\beta_{K},\beta_{L}\right)\) in (4.9) as
\[\widehat{\omega}_{it}^{*}\left(\beta_{K},\beta_{L}\right)=\underbrace{\left[(1- \widehat{\beta}_{M})m_{it}-\ln(\widehat{\beta_{M}\theta})-\ln(P_{t}^{Y}/P_{t}^ {M})\right]}_{\widehat{\varkappa}_{it}}-\beta_{K}k_{it}-\beta_{L}l_{it},\]
where \(\ln(\widehat{\beta_{M}\theta})=\frac{1}{N}\sum_{i}\sum_{t}\ln V_{it}\) on the basis of (4.5). Note that \(\widehat{\omega}_{it}^{*}\left(\beta_{K},\beta_{L}\right)\) still contains two unknowns which enter the function linearly: \((\beta_{K},\beta_{L})^{\prime}\). For convenience, let the already identified portion of productivity be denoted by \(\widehat{\varkappa}_{it}=(1-\widehat{\beta}_{M})m_{it}-\ln(\widehat{\beta_{M} \theta})-\ln(P_{t}^{Y}/P_{t}^{M})\).
With \(\widehat{y}_{it}^{*}\) and \(\widehat{\omega}_{it}^{*}\left(\beta_{K},\beta_{L}\right)\) from the first stage in hand, we proceed to the second-stage estimation of (4.8), where we approximate the unknown function \(h(\cdot)\) using polynomial sieves. Specifically, recognize that \(\widehat{\omega}_{it}^{*}\left(\beta_{K},\beta_{L}\right)=\widehat{\varkappa}_{it}-\beta_{K}k_{it}-\beta_{L}l_{it}\) and let \(\widehat{\varkappa}_{i,t-1}(\boldsymbol{\beta})=([\widehat{\varkappa}_{i,t-1}-\beta_{K}k_{i,t-1}-\beta_{L}l_{i,t-1}],\ G_{i,t-1},\sum_{j(\neq i)}s_{ij,t-1}[\widehat{\varkappa}_{j,t-1}-\beta_{K}k_{j,t-1}-\beta_{L}l_{j,t-1}])^{\prime}\) be a \(3\times 1\) vector with \(\boldsymbol{\beta}=(\beta_{K},\beta_{L})^{\prime}\). Then, for each observation's \(\widehat{\varkappa}(\boldsymbol{\beta})\), we approximate the unknown function \(h\left(\widehat{\varkappa}(\boldsymbol{\beta})\right)\) in (4.8) as follows:
\[h\left(\widehat{\varkappa}(\boldsymbol{\beta})\right)\approx\mathcal{A}_{L_{n} }\left(\widehat{\varkappa}(\boldsymbol{\beta})\right)^{\prime}\boldsymbol{\gamma}, \tag{5.2}\]
where \(\mathcal{A}_{L_{n}}\left(\cdot\right)=\left(A_{1}\left(\cdot\right),\ldots,A_{ L_{n}}\left(\cdot\right)\right)^{\prime}\) is an \(L_{n}\times 1\) vector of known basis functions of \(\widehat{\varkappa}(\boldsymbol{\beta})\) including a vector of ones, \(\boldsymbol{\gamma}\) is a conformable vector of parameters, and \(L_{n}\to\infty\) slowly with \(n\).
Given the orthogonality conditions in (4.10), we estimate \(\boldsymbol{\beta}\) and \(\boldsymbol{\gamma}\) via nonparametric nonlinear least squares. Letting \(\mathbf{x}_{it}=(k_{it},l_{it})^{\prime}\), the parameter estimators are given by
\[\left(\widehat{\beta}_{K},\widehat{\beta}_{L},\widehat{\boldsymbol{\gamma}}\right)^{\prime}=\operatorname*{argmin}_{\boldsymbol{\beta},\boldsymbol{\gamma}}\,\sum_{i}\sum_{t}\left[\widehat{y}_{it}^{*}-\mathbf{x}_{it}^{\prime}\boldsymbol{\beta}-\mathcal{A}_{L_{n}}\left(\widehat{\varkappa}_{i,t-1}(\boldsymbol{\beta})\right)^{\prime}\boldsymbol{\gamma}\right]^{2}, \tag{5.3}\]
where the minimand is the sum of squared errors corresponding to our sieve estimator of (4.8).
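To make the second stage concrete, the sketch below implements (5.3) with a second-degree polynomial sieve, concentrating \(\boldsymbol{\gamma}\) out by least squares at each trial value of \((\beta_{K},\beta_{L})\), which is numerically equivalent to the joint minimization. All array names are placeholders, the alignment of lagged own and peer variables is assumed to have been done beforehand, and practical details (unbalanced panels, time effects) are omitted:

```python
import numpy as np
from scipy.optimize import minimize

def poly2_basis(Z):
    """Second-degree polynomial sieve A_{L_n}(.) in the three arguments of h(.):
    constant, linear, squared and interaction terms."""
    z1, z2, z3 = Z.T
    return np.column_stack([np.ones(len(Z)), z1, z2, z3,
                            z1**2, z2**2, z3**2, z1*z2, z1*z3, z2*z3])

def second_stage(y_star, k, l, G_lag, kappa_lag, k_lag, l_lag, S_lag):
    """Sketch of the sieve NLS problem in (5.3). Rows are firm-period
    observations with their own lagged values already aligned; S_lag is a
    row-aligned weight matrix such that S_lag @ x returns the lagged peer
    average of x. For each trial (beta_K, beta_L) the lagged productivity
    proxies are rebuilt, gamma is concentrated out by OLS, and the sum of
    squared residuals of (4.8) is returned."""
    def ssr(beta):
        bK, bL = beta
        omega_lag = kappa_lag - bK * k_lag - bL * l_lag      # own omega*_{i,t-1}
        peer_lag = S_lag @ omega_lag                         # sum_j s_ij omega*_{j,t-1}
        A = poly2_basis(np.column_stack([omega_lag, G_lag, peer_lag]))
        u = y_star - bK * k - bL * l                         # y* net of quasi-fixed inputs
        gamma, *_ = np.linalg.lstsq(A, u, rcond=None)        # profile out gamma
        return float(np.sum((u - A @ gamma) ** 2))
    res = minimize(ssr, x0=np.array([0.3, 0.3]), method="Nelder-Mead")
    return res.x  # (beta_K_hat, beta_L_hat)
```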
Using the already estimated \((\widehat{\beta}_{K},\widehat{\beta}_{L},\widehat{\beta}_{M})^{\prime}\) and \(\widehat{\boldsymbol{\gamma}}\), we then readily have the estimators for our primary estimands of interest: \(\widehat{DL}_{it}\equiv\partial\widehat{h}_{it}(\cdot)/\partial G_{i,t-1}\), \(\widehat{SP}_{it}\equiv\partial\widehat{h}_{it}(\cdot)/\partial\sum_{j(\neq i)}s_{ij,t-1}\widehat{\omega}_{j,t-1}\) and \(\widehat{TIL}_{it}=\widehat{SP}_{it}\times\sum_{j(\neq i)}s_{ij,t-1}\widehat{DL}_{j,t-1}\) respectively measuring the direct learning, cross-firm spillover and total indirect learning effects, where \(\widehat{h}_{it}(\cdot)=\mathcal{A}_{L_{n}}\big(\widehat{\varkappa}_{i,t-1}(\widehat{\boldsymbol{\beta}})\big)^{\prime}\widehat{\boldsymbol{\gamma}}\) and \(\widehat{\omega}_{j,t-1}=\widehat{\varkappa}_{j,t-1}-\widehat{\beta}_{K}k_{j,t-1}-\widehat{\beta}_{L}l_{j,t-1}\). Lastly, the estimator of latent firm productivity is \(\widehat{\omega}_{it}=y_{it}-\widehat{\beta}_{K}k_{it}-\widehat{\beta}_{L}l_{it}-\widehat{\beta}_{M}m_{it}-\widehat{\eta}_{it}\), where \(\widehat{\eta}_{it}=\ln(\widehat{\beta_{M}\theta})-\ln V_{it}\) from the first stage.
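Given a fitted \(\widehat{h}(\cdot)\), these observation-level effects can be read off by differentiating it numerically; a small sketch (where `h_fn` could be, e.g., `lambda Z: poly2_basis(Z) @ gamma_hat` built from the previous sketch) is:

```python
import numpy as np

def effect_estimates(h_fn, Z, eps=1e-5):
    """Observation-level effects from a fitted h(.): central finite differences
    with respect to its three arguments, evaluated at each row of
    Z = [own lagged omega proxy, own lagged G, peer average of lagged omega].
    Returns the (AR, DL, SP) gradients; for a quadratic sieve these central
    differences coincide with the analytical derivatives."""
    grads = []
    for col in range(Z.shape[1]):
        Zp, Zm = Z.copy(), Z.copy()
        Zp[:, col] += eps
        Zm[:, col] -= eps
        grads.append((h_fn(Zp) - h_fn(Zm)) / (2 * eps))
    return tuple(grads)
```

\(\widehat{TIL}_{it}\) then combines the \(SP\) gradient with the lagged peer average of the \(DL\) gradients exactly as in (3.8).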
For the limit results, our sequential estimation methodology can be recast as a moment-based semiparametric sieve M-estimation problem. Specifically, letting
\[\mathbf{z}_{i,t-1}\left(\beta_{M},\beta_{K},\beta_{L},\theta\right)=\left[\begin{array} []{c}(1-\beta_{M})m_{it-1}-\ln(\beta_{M}\theta)-\ln(P_{t-1}^{Y}/P_{t-1}^{M})- \beta_{K}k_{it-1}-\beta_{L}l_{it-1}\\ G_{it-1}\\ \sum_{j(\neq i)}s_{ijt-1}\left[(1-\beta_{M})m_{jt-1}-\ln(\beta_{M}\theta)-\ln(P _{t-1}^{Y}/P_{t-1}^{M})-\beta_{K}k_{jt-1}-\beta_{L}l_{jt-1}\right]\end{array}\right]\]
and \(r_{it}\left(\beta_{M},\beta_{K},\beta_{L},\theta\right)=y_{it}-\beta_{M}m_{it}- \beta_{K}k_{it}-\beta_{L}l_{it}-\mathcal{A}_{L_{n}}\big{(}\mathbf{z}_{i,t-1} \left(\beta_{M},\beta_{K},\beta_{L},\theta\right)\big{)}^{\prime}\boldsymbol{\gamma}\), we can rewrite our two estimation stages in the form of the following multiple-equation moment restrictions:
\[\mathbb{E}\left[\begin{array}{c}\ln V_{it}-\ln(\beta_{M}\theta)\\ \exp\left\{\ln(\beta_{M}\theta)-\ln V_{it}\right\}-\theta\\ r_{it}\left(\beta_{M},\beta_{K},\beta_{L},\theta\right)\begin{bmatrix}\partial r _{it}\left(\beta_{M},\beta_{K},\beta_{L},\theta\right)/\partial\left(\beta_{K },\beta_{L}\right)^{\prime}\\ -\mathcal{A}_{L_{n}}\big{(}\mathbf{z}_{i,t-1}\left(\beta_{M},\beta_{K},\beta_{ L},\theta\right)\big{)}\end{bmatrix}\right]=\mathbf{0}_{4+L_{n}}, \tag{5.4}\]
consisting of two blocks, where the first two moments correspond to the estimator of the material elasticity (first block) and the remaining orthogonality conditions correspond to the nonlinear sieve least-squares estimation of the proxied production function and productivity in (5.3).
In the above, \(\mathcal{A}_{L_{n}}(\cdot)\) is a sieve approximation of the unknown infinite-dimensional nonparametric function \(h(\cdot)\), and \(\left(\beta_{M},\beta_{K},\beta_{L},\theta\right)^{\prime}\) are the unknown parameters of fixed dimension. Thus, our estimator falls within Ai & Chen's (2003) general framework for the minimum distance estimation based on the conditional moment restrictions of a generic form \(\mathbb{E}[\rho(X,\delta_{0},g_{0}(\cdot))|Z]=0\), where \(X\) and \(Z\) are data and \(\rho(\cdot)\) is a vector of "residual functions" with finite-dimensional unknown parameters \(\delta\) and infinite-dimensional unknown functions \(g\). The large-sample limit results from Ai & Chen (2003) and Chen et al. (2003) therefore extend to our two-step estimator. Inference can be asymptotic or via bootstrap; we discuss both in detail in Appendix F.
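As one concrete possibility for resampling-based inference that respects the panel structure, a firm-level (cluster) bootstrap re-estimates the two stages on pseudo-samples of firms drawn with replacement. The sketch below is deliberately simplified: it returns plain percentile bounds, whereas the intervals we report use the accelerated bias-corrected adjustment described in Appendix F, and in the actual application the peer aggregates would need to be rebuilt within each pseudo-sample:

```python
import numpy as np

def firm_bootstrap(firm_ids, data, estimator, n_boot=200, seed=0):
    """Cluster bootstrap over firms: draw firms with replacement, keep every
    drawn firm's full time series, re-run the (two-stage) `estimator` on the
    pseudo-sample, and return the replications with plain 95% percentile
    bounds. `data` is any row-indexable array aligned with `firm_ids`."""
    rng = np.random.default_rng(seed)
    firm_ids = np.asarray(firm_ids)
    firms = np.unique(firm_ids)
    reps = []
    for _ in range(n_boot):
        drawn = rng.choice(firms, size=firms.size, replace=True)
        rows = np.concatenate([np.flatnonzero(firm_ids == f) for f in drawn])
        reps.append(estimator(data[rows]))
    reps = np.asarray(reps)
    lo, hi = np.percentile(reps, [2.5, 97.5], axis=0)
    return reps, lo, hi
```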
## 6 Simulations
We conduct a set of Monte Carlo experiments. Our data generating process draws from those used by Grieco et al. (2016) and Gandhi et al. (2020). Specifically, we consider a balanced panel of \(n=\{100,200,400\}\) firms operating during \(T=10\) periods.8 Each panel is simulated 1,000
times. To simplify matters, we dispense with labor and consider the production process only with the quasi-fixed dynamic \(K_{it}\) and freely-varying static \(M_{it}\). The production technology is
\[Y_{it}=K_{it}^{\beta_{K}}M_{it}^{\beta_{M}}\exp\{\omega_{it}+\eta_{it}\}, \tag{6.1}\]
where we set \(\beta_{K}=0.25\) and \(\beta_{M}=0.65\), and the noise \(\eta_{it}\sim\) i.i.d. \(\mathbb{N}(0,\sigma_{\eta}^{2})\) with \(\sigma_{\eta}=\sqrt{0.07}\).
The firm's capital is set to evolve according to \(K_{it}=I_{i,t-1}+(1-\delta_{i})K_{i,t-1}\), with the firm-specific depreciation rates \(\delta_{i}\in\{0.05,0.075,0.10,0.125,0.15\}\) distributed uniformly across \(i\). The initial level of capital \(K_{i0}\) is drawn from _i.i.d._ \(\mathbb{U}(10,200)\). The investment function takes the following form: \(I_{i,t-1}=K_{i,t-1}^{\alpha_{1}}\exp\{\alpha_{2}\omega_{i,t-1}\}\), where \(\alpha_{1}=0.8\) and \(\alpha_{2}=0.1\).
The materials \(M_{it}\) series is generated solving the firm's restricted expected profit maximization problem along the lines of (4.2). The conditional demand for \(M_{it}\) is given by
\[M_{it}=\underset{\mathcal{M}_{it}}{\operatorname{argmax}}\,\left\{P_{t}^{Y} K_{it}^{\beta_{K}}\mathcal{M}_{it}^{\beta_{M}}\exp\{\omega_{it}\}\theta-P_{t}^{M} \mathcal{M}_{it}\right\}=\left(\beta_{M}K_{it}^{\beta_{K}}\exp\{\omega_{it} \}\right)^{1/(1-\beta_{M})}, \tag{6.2}\]
where, in the second equality, we have normalized \(P_{t}^{M}=\theta\ \forall\ t\) and have assumed no temporal variation in output prices: \(P_{t}^{Y}=1\) for all \(t\).
We assume that the firm's productivity modifier \(G_{it}\) is autoregressively persistent. More specifically, we consider two laws of motion for \(G_{it}\): (a) an exogenous autoregressive process whereby \(G_{it}=\gamma_{0}+\gamma_{1}G_{i,t-1}+\epsilon_{it}\) and (b) a controlled autoregressive process, contemporaneously conditional on the firm's latent productivity: \(G_{it}=\gamma_{0}+\gamma_{1}G_{i,t-1}+\gamma_{2}\omega_{it}+\epsilon_{it}\), where \(\gamma_{0}=0.01\), \(\gamma_{1}=0.6\), \(\gamma_{2}=0.3\) and \(\epsilon_{it}\sim\) i.i.d. \(\mathbb{N}(0,\sigma_{\epsilon}^{2})\) with \(\sigma_{\epsilon}=0.1\). Of the two processes, the second one assumes that more productive firms engage in higher levels of the productivity-modifying activities. The process (b) permits firms to endogenously update their \(G_{it}\) based on the (observable by them) period \(t\) level of their productivity \(\omega_{it}\). For example, if \(G_{it}\) measures the firm's exposure to investors from abroad, this accommodates the scenario when foreign investors choose to invest in more productive domestic firms in the first place.
We let firm productivity \(\omega_{it}\) be a linear spatiotemporal first-order autoregressive process:
\[\omega_{it}=\rho_{0}+\rho_{1}\omega_{i,t-1}+\rho_{2}\sum_{j(\neq i)}s_{ij,t-1} \omega_{j,t-1}+\rho_{3}G_{i,t-1}+\zeta_{it}, \tag{6.3}\]
where, unless stated otherwise, we set \(\rho_{0}=0.2\), \(\rho_{1}=0.55\), \(\rho_{2}=0.4\) and \(\rho_{3}=0.5\). The innovation
is generated as \(\zeta_{it}\sim\) i.i.d. \(\mathbb{N}(0,\sigma_{\zeta}^{2})\) with \(\sigma_{\zeta}=0.2\). The initial level of productivity \(\omega_{i0}\sim\) i.i.d. \(\mathbb{U}(1,3)\) over \(i\). In Appendix G, we also present the results for a _nonlinear_ specification for \(\omega_{it}\).
To keep matters simple, we consider one common spatial region for all firms and assume that all firms belong to the same industry. Hence, the cardinality of the set \(\mathcal{L}(i,t)\) is the same across all \(i\) and equals \(n-1\). The peer weights \(\{s_{ijt};\,j(\neq i)\}\) are constructed according to (3.4) and, given the setup, are equal to \(1/(n-1)\) for all firms and time periods.
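For transparency, the simulation design described above can be summarized in the following sketch, which generates one panel under the exogenous \(G\) process (a); the controlled process (b) would simply add the \(\gamma_{2}\omega_{it}\) term. Setting \(G_{i0}=0\) is an illustrative assumption of the sketch only:

```python
import numpy as np

def simulate_panel(n=200, T=10, seed=0):
    """One simulated panel from the Monte Carlo DGP: Cobb-Douglas technology
    (6.1), capital accumulation with firm-specific depreciation, materials from
    the static FOC (6.2) with P^Y = 1 and P^M = theta, exogenous AR(1) process
    (a) for G, and the linear spatiotemporal productivity process (6.3) with
    uniform peer weights 1/(n-1)."""
    rng = np.random.default_rng(seed)
    bK, bM = 0.25, 0.65
    r0, r1, r2, r3 = 0.2, 0.55, 0.4, 0.5          # rho_0 ... rho_3
    g0, g1 = 0.01, 0.6                            # gamma_0, gamma_1
    delta = rng.choice([0.05, 0.075, 0.10, 0.125, 0.15], size=n)
    K = rng.uniform(10, 200, size=n)              # K_{i0}
    omega = rng.uniform(1, 3, size=n)             # omega_{i0}
    G = np.zeros(n)                               # G_{i0} = 0 (sketch assumption)
    rows = []
    for t in range(T):
        eta = rng.normal(0.0, np.sqrt(0.07), size=n)
        M = (bM * K**bK * np.exp(omega)) ** (1.0 / (1.0 - bM))   # FOC (6.2)
        Y = K**bK * M**bM * np.exp(omega + eta)                  # technology (6.1)
        rows.append(np.column_stack([np.arange(n), np.full(n, t), Y, K, M, G, omega]))
        # transition to period t+1
        peer_avg = (omega.sum() - omega) / (n - 1)               # uniform peer average
        I = K**0.8 * np.exp(0.1 * omega)                         # investment rule
        K = I + (1.0 - delta) * K
        G_next = g0 + g1 * G + rng.normal(0.0, 0.1, size=n)      # exogenous process (a)
        omega = r0 + r1 * omega + r2 * peer_avg + r3 * G + rng.normal(0.0, 0.2, size=n)
        G = G_next
    return np.vstack(rows)   # columns: firm, t, Y, K, M, G, omega

panel = simulate_panel()
print(panel.shape)  # (n*T, 7)
```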
Proposed Methodology.First, we evaluate the performance of our proposed estimator with the focus on its ability to successfully identify productivity spillovers across firms. For each combination of the \(G\) and \(\omega\) processes, we consider the following three DGP scenarios: (**i**) a general case scenario in which firm productivity is modified via both the direct learning (\(DL_{it}\neq 0\)) and cross-firm spillovers (\(SP_{it}\neq 0\)); (**ii**) a special case scenario in which we assume no direct learning (\(DL_{it}=0\) globally) in order to focus our attention exclusively on the agglomeration-driven learning via spillovers (\(SP_{it}\neq 0\)); (**iii**) an even more special case scenario in which firm productivity evolves exogenously (both \(DL_{it}=0\) and \(SP_{it}=0\) globally) as traditionally assumed in the proxy variable production function estimation literature. The special case scenarios are implemented by setting the appropriate coefficients in the productivity process (6.3) to zero.
We estimate the model via the two-stage estimation algorithm outlined in Section 5, where we approximate the unknown \(h(\cdot)\) using second-degree polynomial sieves. Table 1 reports the simulation results for our proposed estimator, when \(G_{it}\) evolves exogenously [top panel] and following an \(\omega_{it}\)-dependent controlled process [bottom panel]. Each of these two panels includes the results from the three different scenarios. Reported are the mean, root mean squared error (RMSE) and mean absolute error (MAE) of the fixed-parameter \(\beta_{K}\) estimates9 and the averages (across simulation iterations) of these metrics computed for each iteration using observation-specific nonparametric estimates of the autoregressive gradient \(AR_{it}=\partial h(\cdot)/\partial\omega_{i,t-1}\), \(DL_{it}=\partial h(\cdot)/\partial G_{i,t-1}\), \(SP_{it}=\partial h(\cdot)/\partial\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\) and \(TIL_{it}=SP_{it}\times\sum_{j(\neq i)}s_{ij,t-1}DL_{j,t-1}\).
The results in Table 1 are encouraging and show that our methodology recovers the true parameters remarkably well, thereby lending strong support to the validity of our identification strategy. As expected of a consistent estimator, the estimation becomes more stable as \(n\) grows. The same is the case when the productivity DGP is nonlinear (see Table G.1 in Appendix G).
Alternative Procedures.Next, to demonstrate the advantage of our internally consistent methodology, we inspect the performance of a widely used alternative procedure for estimating spillovers via a two-step approach. In this case, the unobserved firm productivity \(\omega_{it}\) is first estimated via the standard proxy variable estimator (which assumes that the productivity process is an exogenous Markov chain and thus ignores spillovers) and then linearly regressed on some measure of the firm's exposure to its peers in the second step. As already discussed at length, such second-step regressions are inconsistent with the assumptions made in the first step because they contradictorily postulate the existence of cross-peer dependence which was assumed away when recovering firm productivity in the first place. Consequently, the productivity estimates (by means of the production function) obtained via such an approach are prone to biases due to the endogeneity-inducing misspecification of the productivity proxy. The empirical evidence of spillovers can thus be spurious. This is unsurprising because the unaccounted cross-sectional dependence is a hindrance to identification in general (see Pesaran, 2006; Bai, 2009).
The second-step regressions used in spillovers literature have numerous variations but can be by and large categorized into two distinct types: those that measure the firm's exposure to spillovers from peers using the group means of characteristics which are said to facilitate such spillovers (FDI, R&D, exports, etc.), and those that measure the firm's exposure to spillovers using the peer group mean of an outcome (that is, firm productivity). Essentially, the first type of regressions focuses on the "contextual effects" while the second type models cross-peer dependence via "endogenous effects." Rarely do researchers allow for both effects at the same time. The first type is arguably the predominant choice in spillovers literature. Such studies overwhelmingly estimate linear specifications, and virtually all omit the temporal lag of the firm's
own productivity in the second-step analysis.
We consider alternative methodologies with the second-step regressions of both these types. To facilitate a level-playing-field comparison between these and our models in the ability to identify spillovers, we specify the second-step regressions in lags. This is to ensure the maximal compatibility of the second-step regressions with the fashion in which learning and spillovers occur in the DGP. For concreteness, we run the following two second-step regressions:
\[[\text{ALT1}]\qquad\widehat{\omega}_{it}=\alpha_{11}+\alpha_{12}\sum_{j(\neq i )}s_{ij,t-1}G_{j,t-2}+\alpha_{13}G_{i,t-1}+e_{1,it}, \tag{6.4}\]
\[[\text{ALT2}]\qquad\widehat{\omega}_{it}=\alpha_{21}+\alpha_{22}\sum_{j(\neq i )}s_{ij,t-1}\widehat{\omega}_{j,t-1}+\alpha_{23}G_{i,t-1}+e_{2,it}, \tag{6.5}\]
using \(\widehat{\omega}_{it}\) recovered in the first step via our semiparametric production function estimator but assuming an exogenous Markov process for productivity \(\omega_{it}=\mathbb{E}[\omega_{it}|\omega_{i,t-1}]+\zeta_{it}\).10 Here we also permit the \(DL\) effects as oftentimes done in this literature. Because regressors in both alternative procedures in (6.4)-(6.5) are all weakly exogenous per our DGP and the assumptions, these second-step regressions are estimated via least squares.
Footnote 10: Essentially, firm productivity here is estimated via the semiparametric adaptation of the original Gandhi et al. (2020) procedure modified to take advantage of the “known” parametric form of the production technology.
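The two alternative second-step regressions in (6.4)-(6.5) are then simple least-squares fits on the first-step productivity estimates; a bare-bones sketch with placeholder array names (not the code used for the reported results) is:

```python
import numpy as np

def ols(y, X):
    """OLS with an intercept; returns the coefficient vector, intercept first."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

# omega_hat    : first-step productivity estimates for period t
# G_lag        : own G_{i,t-1}
# peer_G_lag2  : sum_j s_{ij,t-1} G_{j,t-2}            (ALT1 regressor)
# peer_w_lag   : sum_j s_{ij,t-1} omega_hat_{j,t-1}    (ALT2 regressor)
def alt1(omega_hat, peer_G_lag2, G_lag):
    return ols(omega_hat, np.column_stack([peer_G_lag2, G_lag]))  # (a_11, a_12, a_13)

def alt2(omega_hat, peer_w_lag, G_lag):
    return ols(omega_hat, np.column_stack([peer_w_lag, G_lag]))   # (a_21, a_22, a_23)
```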
To be able to meaningfully analyze the performance of alternative models as well as to fairly compare them to our methodology (especially, in case of the popular ALT1 specification), we focus on the estimands that match in terms of their _qualitative_ interpretations. Instead of looking at specific parameters that may not always be directly comparable across the models and with the DGP, we consider the derived measures of \(DL\), \(SP\) and \(TIL\) as appropriate/available. For instance, of the two alternative methodologies, only the ALT2 specification postulates cross-firm spillovers via the mean peer productivity as in our proposed conceptualization in Section 3 and the DGP. Therefore, \(\alpha_{22}\) is essentially comparable to \(\rho_{2}\) in the DGP: both measure the \(SP\) effect. This is however not the case with the ALT1 specification which only models the contextual effect. Hence, we cannot contrast \(\alpha_{12}\) to the true \(\rho_{2}\) value in the DGP. Having said that, \(\alpha_{12}\) measuring the (twice lagged) total indirect effect of the peers' \(G\) can indeed be meaningfully compared to the similarly interpretable \(TIL=SP\times DL=\rho_{2}\times\rho_{3}\) effect derived from the
DGP that also occurs over two periods. When it comes to direct learning, both \(\alpha_{13}\) and \(\alpha_{23}\) are comparable to the true \(DL=\rho_{3}\) from the DGP in (6.3). Tables 2-3 summarize these results.
To examine the ability of alternative models to identify firm productivity, we first study if they can consistently estimate the production function coefficients (here \(\beta_{K}\)) because \(\widehat{\omega}_{it}\) is a direct construct of these parameters: \(\widehat{\omega}_{it}=y_{it}-\widehat{\beta}_{K}k_{it}-\widehat{\beta}_{M}m_{ it}-\widehat{\eta}_{it}\).11 The corresponding estimates of \(\beta_{K}\) are reported in Table G.2 in Appendix G. These first-step results apply to both the ALT1 and ALT2 models and are obtained assuming that \(\omega_{it}\) is an exogenous first-order Markov process. As expected, the estimation of production-function parameters (and hence, firm productivity) becomes biased with no tangible improvement following the growth in \(n\) as soon as we deviate from the exogenous productivity process [scenarios (i) and (ii)]. In the latter case, biases originate from misspecification of the productivity proxy function that is missing relevant controls pertaining to productivity-modifying learning and/or spillovers.
Footnote 11: The estimates of \(\widehat{\beta}_{M}\) and \(\widehat{\eta}_{it}\) are obtained from the material revenue share regression which does not depend on the Markovian assumption about \(\omega_{it}\). Hence, they are exactly the same as those in our methodology.
As seen in Tables 2-3, the misestimation of productivity feeds into the second-step regressions. Across all experiments, both the ALT1 and ALT2 models exhibit non-vanishing biases in the estimation of spillovers. The same is also generally the case for estimation of within-firm direct learning, with the exception of the ALT2 estimator in the least probable scenarios when \(G\) is an irrelevant uncorrelated covariate (i.e., when \(DL=0\)_and_\(G\) evolves exogenously in population). Notably and perhaps more importantly, the alternative estimators fail at identifying (zero) cross-firm spillovers even when exogeneity of the Markov productivity process assumed in the first step is true [scenario (iii)]. This is because the second-step regressions remain misspecified due to their omission of the lagged productivity as customarily done in spillovers studies. Thus, if the first-step assumption of exogenous productivity in such analyses is indeed correct, the "evidence" of cross-firm spillovers uncovered in the second step is likely spurious and effectively driven by the missing _auto_regressive dynamics in productivity _within_ the firm. This is not just a feature of specifications in (6.4)-(6.5). In Appendix G, we consider their multiple variants drawn from the literature. Those results provide further evidence of the potential for spurious
findings of spillovers using the popular two-step analysis procedure.
## 7 Empirical Application
We apply our methodology to study cross-firm spillovers with a particular focus on the productivity effects of inbound FDI via the domestic firms' learning of more advanced/efficient foreign knowledge. We proxy the firm's exposure to foreign knowledge using information on the share of foreign capital in its equity. This is a standard measure of foreign knowledge exposure in the literature. Thus, the foreign equity share \(G_{it}\in[0,1]\) is our productivity modifier of interest.
Our objective is to study two potential channels--direct and indirect--of the productivity-boosting effects of inbound FDI. First, domestic firms may boost their productivity levels via "importing" better/new technology and learning more efficient management and marketing practices from abroad that they gain _direct_ access to through foreign investors; these are direct technology transfers. A second mechanism by which domestic firms may _indirectly_ improve their productivity is by learning from other spatially proximate foreign-invested/owned firms in the industry and then adopting their superior practices already imported into the country. The latter channel is indirect and works through cross-firm peer effects. To model these indirect productivity effects of FDI, we need to explicitly recognize the potential for cross-sectional dependence in productivity which would permit FDI spillovers capable of influencing the domestic firms' productivity levels (and hence their output) beyond the immediate recipients. Our proposed model in Section 3 readily provides an empirical framework for this analysis. It allows identification of both the direct/internal (\(DL\)) and indirect/external (\(TIL\)) effects of inbound FDI in the presence of non-zero productivity spillovers (\(SP\)) among peer firms. In line with Remark 3, we model "FDI spillovers" as operating through the firm's exposure to the average peer productivity, i.e., via "productivity spillovers" due to agglomeration externalities more broadly.
**Data.** Our data come from the Chinese Industrial Enterprises Database survey conducted by China's National Bureau of Statistics. We focus on the electric machinery and equipment manufacturing industry, SIC 2-digit code 39. The rationale behind the choice of this industry is
discussed in Appendix B. Our sample period runs from 1998 to 2007, and the operational sample is an unbalanced panel of 23,720 firms with a total of 73,095 observations. In Appendix H, we provide the details of variable construction and describe the data.
We use postal codes to identify spatial neighbors included in each firm's peer group \(\mathcal{L}(i,t)\). Peers are defined at the city level and at the level of the upper administrative division (provinces, autonomous regions, municipalities under the direct rule of government and special administrative regions) to allow for a broader geographical extent of spillovers while also respecting regulatory, administrative and cultural heterogeneity across regions. For the baseline results, the industrial scope of peer effects is defined at the level of the whole 2-digit industry. We consider a more granular definition of industrial similarity at the 4-digit level in robustness checks.
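For concreteness, the sketch below (Python; not from the paper, with hypothetical column names `province`, `year` and `omega_hat`) illustrates one way to construct the equal-weight, leave-one-out peer average that enters the spillover term \(\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\) when peer groups are defined at the province level within the 2-digit industry:

```python
import pandas as pd

def peer_average(df: pd.DataFrame, group_cols=("province", "year"),
                 value_col: str = "omega_hat") -> pd.Series:
    """Leave-one-out, equal-weight peer average of `value_col` within each group.

    Implements sum_{j != i} s_{ijt} * omega_{jt} with s_{ijt} = 1/(n_group - 1)
    for all peers j of firm i in the same (province, year) cell.
    Groups with a single firm have no peers and return NaN.
    """
    g = df.groupby(list(group_cols))[value_col]
    group_sum = g.transform("sum")
    group_size = g.transform("count")
    return (group_sum - df[value_col]) / (group_size - 1)

# Hypothetical usage on a firm-year panel with estimated log-productivity:
# df["peer_omega"] = peer_average(df)
# The spillover regressor is then the one-year lag of df["peer_omega"] within the firm.
```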
### Results
Owing to the nonparametric specification of the firm productivity process, we obtain observation-specific heterogeneous estimates of \(SP_{it}\), \(DL_{it}\) and \(TIL_{it}\). We estimate the unknown \(h(\cdot)\) via sieve methods using the popular second-degree polynomial series.12 All estimations include time effects (a quadratic time trend yields qualitatively similar results). Also note that, because \(\omega_{it}\) is the log-productivity, \(SP_{it}\) is an elasticity measured in percent per one-percent change in the peers' average productivity, whereas both \(DL_{it}\) and \(TIL_{it}\) are semi-elasticities measured in percent per one-percentage-_point_ change in the firm's foreign equity share. The measured learning effects on productivity are short-run and partial (i.e., for one given firm only). They do not capture mutual peer effects of an FDI injection across the network of firms, and neither do they account for dynamic effects over time. Obviously, owing to the persistence and cross-peer dependence of productivity, the cumulative implications of FDI for domestic firms' productivity in the long-run equilibrium will be more sizable due to accumulation and diffusion over time and space.
Footnote 12: For instance, see De Loecker et al. (2016) or Gandhi et al. (2020). We have also experimented with higher-order polynomials, and the results are very similar except somewhat noisier, as expected.
Table 4 summarizes semiparametric point estimates of cross-firm productivity spillovers along with the direct and indirect effects of FDI on the productivity of domestic firms from our
baseline specification,13 in which each firm's peer group is restricted to the same province and the industrial scope of spillovers is defined at the level of the entire 2-digit industry. All reported estimates are accompanied by the two-tailed 95% bootstrap intervals. In addition, we formally test for significantly _positive_ productivity effects at each observation using the _one_-sided 95% bootstrap lower bounds. Throughout, we use accelerated bias-corrected bootstrap percentile confidence intervals (see Appendix F). The share of firms for which the estimates statistically exceed zero are reported in the last column of Table 4. In Appendix I, we also summarize these productivity effect estimates graphically via empirical distributions across firms.
Footnote 13: The associated production function parameter estimates are \(\widehat{\beta}_{M}=0.74\), \(\widehat{\beta}_{K}=0.05\) and \(\widehat{\beta}_{L}=0.12\) with the implied scale elasticity of 0.91. These are in line with the Cobb-Douglas estimates for Chinese manufacturing reported in the literature (e.g., see Brandt et al., 2017) and suggest that the industry exhibits decreasing returns to scale.
The estimated median \(DL\) effect of own FDI is 0.14, whereby an increase of the foreign share in the median firm's equity by 10 percentage points boosts its productivity next year by 1.4%. Expectedly, the \(TIL\) effect of peers' FDI is smaller in magnitude--0.04 at the median--so a 10 percentage point increase in the peer group average of the foreign equity share boosts the firm's productivity by only 0.4%. Overall, at least 87% of firms enjoy significant productivity-boosting effects of inbound FDI, both directly and indirectly.
The non-zero external/indirect learning effect of FDI is facilitated by the presence of substantial and positive cross-firm productivity spillovers in the industry, with the median spillover elasticity \(SP\) estimated at 0.33 along with the corresponding interquartile range of 0.18-0.45. Thus, a 10% improvement in the average productivity of the firm's peers is estimated to increase the median firm's own productivity by about 3.3%. These productivity spillovers are significantly positive for roughly 84% of firms in the industry. We examine their geographic distribution in Appendix I.
**Heterogeneity and Nonlinearity.** Even within a given industry, firms are highly heterogeneous across many dimensions including their productivity and the extent of their exposure to foreign investors, both direct and through their peers. These characteristics can influence the effect size of spillovers and learning. Conveniently, our model readily facilitates testing for such heterogeneity.
Recall that we estimate the productivity effects of interest via \(\widehat{SP}_{it}=\partial\widehat{h}_{it}(\cdot)/\partial\sum_{j(\neq i)}s_{ij,t- 1}\omega_{j,t-1}\) and \(\widehat{DL}_{it}=\partial\widehat{h}_{it}(\cdot)/\partial G_{i,t-1}\), where we estimate \(h(\cdot)\) using the second-order polynomial sieve approximation. Thus, by analytical derivation, the estimated \(\widehat{SP}_{it}\) and \(\widehat{DL}_{it}\) are the linear functions of the "determinants" of firm productivity \((\omega_{i,t-1},G_{i,t-1},\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1})^{\prime}\). Table 5 reports the estimates of sieve coefficients on these three variables in the \(SP\) and \(DL\) functions.
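For illustration, a minimal sketch of this differentiation step is given below (Python; the coefficient labels `g1`, `g2`, etc. and the basis ordering are assumptions for exposition, not the paper's notation):

```python
import numpy as np

def effects_from_sieve(omega_lag, G_lag, peer_omega_lag, g):
    """Observation-specific DL_it and SP_it implied by a 2nd-degree polynomial sieve.

    Assumed approximation:
      h(x1, x2, x3) = g0 + g1*x1 + g2*x2 + g3*x3 + g11*x1^2 + g22*x2^2 + g33*x3^2
                      + g12*x1*x2 + g13*x1*x3 + g23*x2*x3,
    with x1 = lagged own productivity, x2 = lagged foreign equity share,
    x3 = lagged peer-average productivity; `g` is a dict of estimated coefficients.
    """
    omega_lag, G_lag, peer_omega_lag = map(np.asarray, (omega_lag, G_lag, peer_omega_lag))
    # DL_it = dh/dx2: within-firm (direct) learning effect of own FDI exposure
    DL = g["g2"] + 2 * g["g22"] * G_lag + g["g12"] * omega_lag + g["g23"] * peer_omega_lag
    # SP_it = dh/dx3: spillover elasticity w.r.t. the peer-average productivity
    SP = g["g3"] + 2 * g["g33"] * peer_omega_lag + g["g13"] * omega_lag + g["g23"] * G_lag
    return DL, SP
```

Both derivatives are linear in \((\omega_{i,t-1},G_{i,t-1},\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1})^{\prime}\), which is why Table 5 can summarize the heterogeneity in \(SP_{it}\) and \(DL_{it}\) with three coefficients per effect.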
Consider the spillovers first. The coefficient on the firm's own productivity is negative, indicating that the spillover effects decline in magnitude as firms themselves become more productive. Thus, less productive manufacturers have a greater potential to benefit from positive peer effects. Also consistent with economic intuition, the effect size of spillovers increases with the average productivity of peers: there is more to learn from highly productive neighbors in the industry. We also find a negative relationship between the firm's foreign equity share and the effect size of spillovers. This suggests that the domestic firms experiencing larger productivity improvements via indirect learning from their foreign-invested peers--thanks to positive productivity spillovers--are those with limited direct access to foreign knowledge through their own investors (i.e., low-foreign-equity-share firms).
In the case of FDI effects on productivity, results in the far right column of Table 5 suggest that the direct learning effects diminish as the firm's productivity rises, implying that the more productive firms have less absorptive capacity to learn. The foreign share in a firm's equity negatively affects the learning effect size, which basically indicates the diminishing productivity returns to receiving FDI. Lastly, there is evidence that the higher the average of peer productivity, the lesser the productivity boosts from FDI. Thus, positive spillovers from highly productive neighbors essentially diminish the importance of direct FDI effects.
**Robustness Analysis.** We first assess robustness of our empirical findings of significantly positive productivity spillovers and learning effects of FDI to the following modeling choices: (i) the inclusion of peer group effects to control for unobservable "correlated effects;" (ii) the composition of a reference peer group \(\mathcal{L}(i,t)\); and (iii) the peer-weighting scheme \(\{s_{ijt}\}\).
As discussed in Section 3, to structurally identify productivity spillovers \(SP\), we rule out
unobservable "correlated effects" at the peer group level. However, we can replace this no-group-effects assumption with a much milder assumption allowing for network unobservables but having them be time-invariant. In this case, we can control for the potential network confounders using group-level fixed effects (see Graham & Hahn, 2005; Bramoulle et al., 2009). We consider group effects across both the spatial and industrial dimensions. Specifically, we re-estimate our baseline specification by adding fixed effects at the level of the entire peer group as well as more granular subgroups.14 The corresponding results are summarized in columns F1-F4 of Table 6 (see table notes for the details on group fixed effects). While predictably there is no dramatic change in the \(DL\) estimates of within-firm learning, the median effect size of cross-firm spillovers \(SP\) increases notably when we rely solely on the within-group variation over time to estimate the productivity peer effects.15 The latter is especially true when the correlated group effects are defined narrowly at the 4-digit sub-industry level. In this case, the median spillover effect is 0.61 (against the baseline estimate of 0.33) and the effect is statistically positive for almost all firms (99%). The increase in the effect size of spillovers is indicative of the substantial between-group heterogeneity in (peer) firm productivity, which is consistent with the well-documented differential in productivity levels across regions in China. Thus, when omitting group-specific effects, the measure of the strength of peer dependence across firms gets "diluted" in the baseline model due to the variation across groups.16
Footnote 14: These peer group effects are included in the second-stage estimation that models the productivity process.
Footnote 15: Larger magnitudes of \(TIL\) are a direct result of the increased \(SP\) estimates.
Footnote 16: This is the case even if the strength of within-group spillovers is the same for all groups.
In columns P1-P3 of Table 6, we estimate productivity spillovers under three alternative definitions of who the firm's relevant peers are. Each one presumes a much smaller reference group than the baseline. Namely, we consider narrowing the scope of local spillovers to the level of city and/or the 4-digit sub-industry. The direct effect of the firm's own FDI expectedly continues to stay largely unchanged, but the estimates of productivity spillovers diminish in size significantly. The latter indirectly corroborates the rationale of our baseline specification in that the agglomeration effects have broad geographical and industrial scopes. By restricting
the extent of spillovers to the local city and/or the firm's sub-industry only, we also restrict the reach of cross-firm externalities in productivity. Intuitively, when restricting the firm's learning opportunities to a narrower group of neighbors, a 10% improvement in the average peer productivity is estimated to help boost the firm's own productivity by only about 0.6-0.9% at the median. In contrast, if the relevant peer reference group is actually larger in scope, the same 10% improvement across all peers (as in the baseline specification) implies a bigger industry-wide aggregate effect and, consequently, a larger estimated spillover effect on the firm of 3.3%.
In Table 6, we also consider an alternative way of weighting peers, whereby bigger neighbors get assigned larger relative weights (column W1). The spillover effects only modestly decline in size. Overall, the cross-model variation in the spillover estimates we observe in Table 6 is unsurprising and, in fact, expected because each model treats peer interactions a bit differently and/or utilizes different variation in data to identify productivity spillovers. Having said that, the \(SP\) point estimates across all models are highly positively correlated, with the rank correlation coefficient being 0.81 on average. Consistently across all specifications, we continue to find that the overwhelming majority of the electric machinery manufacturers in China enjoy positive and significant productivity spillovers, in general, and FDI spillovers, specifically.
Appendix A contains additional robustness checks, including to the potential violation of the weak exogeneity of the _lagged_ foreign equity share. Controlling for potential endogeneity of the FDI exposure, we continue to find strong evidence in support of significantly positive productivity spillovers for 80% of firms or more, with our findings remaining qualitatively unchanged. In the same appendix, we also explore potential heterogeneity in the external productivity spillovers from the peers _conditional_ on their FDI status, which gives rise to _bi_dimensional spillovers. We find evidence of heterogeneity in the strength of spillovers from wholly-domestic versus foreign-invested peers but, overall, our main findings stay the same: productivity spillovers are positive and significant for most firms. For the details, see Appendix A.
## 8 Conclusion
This paper develops a novel methodology for the proxy variable structural identification of (latent) firm productivity in the presence of learning and cross-firm spillovers which allows a unified one-step analysis of the knowledge-transfer effects _between_ peer firms. Our framework is fundamentally different from the popular empirical approach traditionally implemented in two steps, whereby one first recovers firm productivity using the available standard proxy variable estimators and then tests for spillovers in the second step by regressing these productivity estimates on various peer-group averages capturing firms' exposure to potential spillovers. Contrary to such an approach, our methodology is "internally consistent" in that it does not postulate contradictory assumptions. In building our model, we explicitly accommodate cross-sectional dependence in firm productivity induced by spillovers. We also show that estimating the firm production function or productivity using traditional proxy methods while ignoring the spillover-induced cross-sectional dependence, as customarily done in the literature, likely leads to misspecification and endogeneity-generating omitted variable bias. Because our methodology can be easily adapted to admit various spillover origins such as spatial agglomeration, R&D, FDI, exporting, etc., it is fit to investigate cross-firm productivity spillovers in many contexts.
## References
* Acharya & Keller (2008) Acharya, R. C. & Keller, W. (2008). Estimating the productivity selection and technology spillover effects of imports. NBER Working Paper 14079.
* Ackerberg et al. (2015) Ackerberg, D. A., Caves, K., & Frazer, G. (2015). Identification properties of recent production function estimators. _Econometrica_, _83_, 2411--2451.
* Ai & Chen (2003) Ai, C. & Chen, X. (2003). Efficient estimation of models with conditional moment restrictions containing unknown functions. _Econometrica_, _71_, 1795-1843.
* Alvarez & Lopez (2008) Alvarez, R. & Lopez, R. A. (2008). Is exporting a source of productivity spillovers? _Review of World Economics_, _144_(4), 723-749.
* Bai (2009) Bai, J. (2009). Panel data models with interactive fixed effects. _Econometrica_, _77_, 1229-1279.
* Balsvik (2011) Balsvik, R. (2011). Is labor mobility a channel for spillovers from multinationals? Evidence from Norwegian manufacturing. _Review of Economics and Statistics_, _93_, 285-297.
* Barrios et al. (2011) Barrios, S., Gorg, H., & Strobl, E. (2011). Spillovers through backward linkages from multinationals: Measurement matters! _European Economic Review_, _55_, 862-875.
* Bazzi et al. (2017) Bazzi, S., Chari, A. V., Nataraj, S., & Rothenberg, A. D. (2017). Identifying productivity spillovers using the structure of production networks. _RAND Working Paper_ WR-1128.
* Behrens et al. (2014) Behrens, K., Duranton, G., & Robert-Nicoud, F. (2014). Productive cities: Sorting, selection, and agglomeration. _Journal of Political Economy_, _122_, 507-553.
* Blalock & Gertler (2008) Blalock, G. & Gertler, P. J. (2008). Welfare gains from foreign direct investment through technology transfer to local suppliers. _Journal of International Economics_, _74_, 402-421.
* Bloom et al. (2013) Bloom, N., Schankerman, M., & Van Reenen, J. (2013). Identifying technology spillovers and product market rivalry. _Econometrica_, _81_, 1347-1393.
* Blume et al. (2011) Blume, L. E., Brock, W. A., Durlauf, S. N., & Ioannides, Y. M. (2011). Identification of social interactions. In J. Benhabib, A. Bisin, & M. O. Jackson (Eds.), _Handbook of Social Economics_, volume 1B. North-Holland, Amsterdam.
* Bramoulle et al. (2009) Bramoulle, Y., Djebbari, H., & Fortin, B. (2009). Identification of peer effects through social networks. _Journal of Econometrics_, _150_, 41-55.
* Brandt et al. (2017) Brandt, L., Van Biesebroeck, J., Wang, L., & Zhang, Y. (2017). WTO accession and performance of Chinese manufacturing firms. _American Economic Review_, _107_, 2784-2820.
* Branstetter (2001) Branstetter, L. G. (2001). Are knowledge spillovers international or intranational in scope? Microeconomic evidence from the U.S. and Japan. _Journal of International Economics_, _53_, 53-79.
* Chen et al. (2003) Chen, X., Linton, O., & Van Keilegom, I. (2003). Estimation of semiparametric models when the criterion function is not smooth. _Econometrica_, _71_, 1591-1608.
* Combes et al. (2012) Combes, P.-P., Duranton, G., Gobillon, L., Puga, D., & Roux, S. (2012). The productivity advantages of large cities: Distinguishing agglomeration from firm selection. _Econometrica_, _80_, 2543-2594.
* De Loecker (2013) De Loecker, J. (2013). Detecting learning by exporting. _American Economic Journal: Microeconomics_, \(5\), 1-21.
* De Loecker et al. (2016) De Loecker, J., Goldberg, P. K., Khandelwal, A. K., & Pavcnik, N. (2016). Prices, markups, and trade reform. _Econometrica_, _84_, 445-510.
* De Loecker & Warzynski (2012) De Loecker, J. & Warzynski, F. (2012). Markups and firm-level export status. _American Economic Review_, _102_, 2437-2471.
* Deloitte China Manufacturing Industry Group (2013) Deloitte China Manufacturing Industry Group (2013). A new stage for overseas expansion for China's equipment manufacturing industry. Report by Deloitte China Research and Insight Centre.
* Doraszelski & Jaumandreu (2013) Doraszelski, U. & Jaumandreu, J. (2013). R&D and productivity: Estimating endogenous productivity. _Review of Economic Studies_, _80_, 1338-1383.
* Doraszelski & Jaumandreu (2018) Doraszelski, U. & Jaumandreu, J. (2018). Measuring the bias of technological change. _Journal of Political Economy_, _126_, 1027-1084.
* Duranton & Puga (2004) Duranton, G. & Puga, D. (2004). Micro-foundations of urban agglomeration economies. In J. V. Henderson & J. Thisse (Eds.), _Handbook of Regional and Urban Economics_, volume 4. Elsevier B.V.
* Efron (1987) Efron, B. (1987). Better bootstrap confidence intervals. _Journal of the American Statistical Association_, _82_, 171-200.
* Eichengreen & Tong (2007) Eichengreen, B. & Tong, H. (2007). Is China's FDI coming at the expense of other countries? _Journal of the Japanese and International Economies_, _21_(2), 153-172.
* Euro Exim Bank (2020) Euro Exim Bank (2020). Export of electrical machinery from China. Euro Exim Bank Global Finance Blog; October 30, 2020.
* Gandhi et al. (2020) Gandhi, A., Navarro, S., & Rivers, D. (2020). On the identification of gross output production functions. _Journal of Political Economy_, _128_, 2973-3016.
* Gaubert (2018) Gaubert, C. (2018). Firm sorting and agglomeration. _American Economic Review_, _108_, 3117-3153.
* Graham & Hahn (2005) Graham, B. S. & Hahn, J. (2005). Identification and estimation of the linear-in-means model of social interactions. _Economics Letters_, _88_, 1-6.
* Grieco et al. (2016) Grieco, P. L. E., Li, S., & Zhang, H. (2016). Production function estimation with unobserved input price dispersion. _International Economic Review_, _57_, 665-689.
* Griffith et al. (2006) Griffith, R., Harrison, R., & Van Reenen, J. (2006). How special is the special relationship? Using the impact of U.S. R&D spillovers on U.K. firms as a test of technology sourcing. _American Economic Review_, _96_, 1859-1875.
* Griliches (1979) Griliches, Z. (1979). Issues in assessing the contribution of research and development to productivity growth. _Bell Journal of Economics_, _10_, 92-116.
* Griliches & Mairesse (1998) Griliches, Z. & Mairesse, J. (1998). Production functions: The search for identification. In _Econometrics and Economic Theory in the Twentieth Century: The Ragnar Frisch Centennial Symposium_ (pp. 169-203). Cambridge University Press.
* Hahn et al. (2018) Hahn, J., Liao, Z., & Ridder, G. (2018). Nonparametric two-step sieve M estimation and inference. _Econometric Theory_. forthcoming.
* Haskel et al. (2007) Haskel, J. E., Pereira, S. C., & Slaughter, M. J. (2007). Does inward foreign direct investment boost the productivity of domestic firms? _Review of Economics and Statistics_, _89_, 482-496.
* Hernan & Robins (2020) Hernan, M. A. & Robins, J. M. (2020). _Causal Inference: What If_. Boca Raton: Chapman & Hall.
* Hirano & Imbens (2004) Hirano, K. & Imbens, G. W. (2004). The propensity score with continuous treatments. In W. A. Shewhart & S. S. Wilks (Eds.), _Applied Bayesian Modeling and Causal Inference from Incomplete-Data Perspectives: An Essential Journey with Donald Rubin's Statistical Family_. New York: Wiley & Sons, Ltd.
* Horowitz (2001) Horowitz, J. L. (2001). The bootstrap. In J. J. Heckman & E. Leamer (Eds.), _Handbook of Econometrics_ (5 ed.). chapter 52, (pp. 3159-3228). Elsevier Science B.V.
* Ihrcke & Becker (2006) Ihrcke, J. & Becker, K. (2006). Study on the future opportunities and challenges of EU-China trade and investment relations. Study 1 of 12: Machinery. Report commissioned and financed by the Commission of the European Communities.
* Imbens & Wooldridge (2009) Imbens, G. W. & Wooldridge, J. M. (2009). Recent developments in the econometrics of program evaluation. _Journal of Economic Literature_, _47_, 5-86.
* Javorcik (2004) Javorcik, B. S. (2004). Does foreign direct investment increase the productivity of domestic firms? In search of spillovers through backward linkages. _American Economic Review_, _94_, 605-627.
* Javorcik & Spatareanu (2008) Javorcik, B. S. & Spatareanu, M. (2008). To share or not to share: Does local participation matter for spillovers from foreign direct investment? _Journal of Development Economics_, _85_, 194-217.
* Jiang et al. (2019) Jiang, K., Keller, W., Qiu, L. D., & Ridley, W. (2019). International joint ventures and internal vs. external technology transfer: Evidence from China. NBER Working Paper 24455.
* Kelejian & Prucha (1999) Kelejian, H. H. & Prucha, I. R. (1999). A generalized moment estimator for the autoregressive parameter in a spatial model. _International Economic Review, 40_, 509-533.
* Keller (2008) Keller, W. (2008). Transfer of technology. In L. E. Blume & S. N. Durlauf (Eds.), _The New Palgrave Dictionary of Economics_ (2 ed.). New York: Macmillan.
* Keller (2010) Keller, W. (2010). International trade, foreign direct investment, and technology spillovers. In B. Hall & N. Rosenberg (Eds.), _Handbook of the Economics of Innovation_, volume 2 (pp. 793-829). Amsterdam: North Holland.
* Keller & Yeaple (2009) Keller, W. & Yeaple, S. R. (2009). Multinational enterprises, international trade, and productivity growth: Firm-level evidence from the United States. _Review of Economics and Statistics, 91_, 821-831.
* Krugman (1991) Krugman, P. R. (1991). _Geography and Trade_. Cambridge, MA: MIT Press.
* Kuersteiner & Prucha (2020) Kuersteiner, G. M. & Prucha, I. R. (2020). Dynamic spatial panel models: Networks, common shocks, and sequential exogeneity. _Econometrica, 88_, 2109-2146.
* Lee (2007) Lee, L.-f. (2007). GMM and 2SLS estimation of mixed regressive, spatial autoregressive models. _Journal of Econometrics, 137_, 489-514.
* LeSage & Pace (2009) LeSage, J. & Pace, R. K. (2009). _Introduction to Spatial Econometrics_. Boca Raton: Taylor & Francis CRC Press.
* Levinsohn & Petrin (2003) Levinsohn, J. & Petrin, A. (2003). Estimating production functions using inputs to control for unobservables. _Review of Economic Studies, 70_, 317-341.
* Lewbel (2012) Lewbel, A. (2012). Using heteroskedasticity to identify and estimate mismeasured and endogenous regressor models. _Journal of Business and Economic Statistics, 30_, 67-80.
* Lu et al. (2017) Lu, Y., Tao, Z., & Zhu, L. (2017). Identifying FDI spillovers. _Journal of International Economics, 107_, 75-90.
* Malikov et al. (2020) Malikov, E., Zhao, S., & Kumbhakar, S. C. (2020). Estimation of firm-level productivity in the presence of exports: Evidence from China's manufacturing. _Journal of Applied Econometrics, 81_, 457-480.
* Mammen (1993) Mammen, E. (1993). Bootstrap and wild bootstrap for high dimensional linear models. _Annals of Statistics, 21_, 255-285.
* Manski (1993) Manski, C. F. (1993). Identification of endogenous social effects: The reflection problem. _Review of Economic Studies, 60_, 531-542.
* Manski (2000) Manski, C. F. (2000). Economic analysis of social interactions. _Journal of Economic Perspectives, 14_, 115-136.
* Marshall (1890) Marshall, A. (1890). _Principles of Economics_ (1st ed.). London: Macmillan.
* Moffitt (2001) Moffitt, R. A. (2001). Policy interventions, low-level equilibria and social interactions. In S. N. Durlauf & H. P. Young (Eds.), _Social Dynamics_. MIT Press.
* Newey (1984) Newey, W. K. (1984). A method of moments interpretation of sequential estimators. _Economics Letters, 14_(2), 201-206.
* Newman et al. (2015) Newman, C., Rand, J., Talbot, T., & Tarp, F. (2015). Technology transfers, foreign investment and productivity spillovers. _European Economic Review, 76_, 168-187.
* Olley & Pakes (1996) Olley, G. S. & Pakes, A. (1996). The dynamics of productivity in the telecommunications equipment industry. _Econometrica_, _64_, 1263-1297.
* Pesaran (2006) Pesaran, M. H. (2006). Estimation and inference in large heterogeneous panels with a multifactor error structure. _Econometrica_, _74_, 967-1012.
* Poole (2013) Poole, J. P. (2013). Knowledge transfers from multinationals to domestic firms: Evidence from worker mobility. _Review of Economics and Statistics_, _95_, 393-406.
* Rosenthal & Strange (2004) Rosenthal, S. S. & Strange, W. C. (2004). Evidence on the nature and sources of agglomeration economies. In J. V. Henderson & J. Thisse (Eds.), _Handbook of Regional and Urban Economics_, volume 4. Elsevier B.V.
* Serpa & Krishnan (2018) Serpa, J. C. & Krishnan, H. (2018). The impact of supply chains on firm-level productivity. _Management Science_, _64_, 511-532.
* Shao & Tu (1995) Shao, J. & Tu, D. (1995). _The Jackknife and Bootstrap_. Springer-Verlag New York Inc.
* Stoyanov & Zubanov (2012) Stoyanov, A. & Zubanov, N. (2012). Productivity spillovers across firms through worker mobility. _American Economic Journal: Applied Economics_, \(4\), 168-198.
* Topalova & Khandelwal (2011) Topalova, P. & Khandelwal, A. (2011). Trade liberalization and firm productivity: The case of India. _Review of Economics and Statistics_, _93_, 995-1009.
* Van Biesebroeck (2005) Van Biesebroeck, J. (2005). Exporting raises productivity in sub-Saharan African manufacturing firms. _Journal of International Economics_, _67_, 373-391.
* Zacchia (2020) Zacchia, P. (2020). Knowledge spillovers through networks of scientists. _Review of Economic Studies_, _87_, 1989-2018.
\begin{table}
\begin{tabular}{l r|r r r|r r r|r r r} \hline \hline & \multicolumn{2}{c|}{True} & \multicolumn{2}{c|}{\(n=100\)} & \multicolumn{2}{c|}{\(n=200\)} & \multicolumn{2}{c}{\(n=400\)} \\ & Value & Mean & RMSE & MAE & Mean & RMSE & MAE & Mean & RMSE & MAE \\ \hline \multicolumn{8}{c}{DGP with \(G\) Evolving Exogenously} \\
**Scenario (i):**\(DL\neq 0\) **and**\(SP\neq 0\) & & & & & & & & \\ \(\beta_{K}\) & 0.25 & 0.249 & 0.055 & 0.043 & 0.250 & 0.038 & 0.030 & 0.252 & 0.027 & 0.021 \\ \(AR\) & 0.55 & 0.546 & 0.080 & 0.065 & 0.547 & 0.054 & 0.044 & 0.549 & 0.038 & 0.031 \\ \(DL\) & 0.50 & 0.500 & 0.164 & 0.134 & 0.502 & 0.113 & 0.091 & 0.501 & 0.079 & 0.064 \\ \(SP\) & 0.40 & 0.399 & 0.144 & 0.116 & 0.400 & 0.102 & 0.082 & 0.399 & 0.070 & 0.057 \\ \(TIL\) & 0.20 & 0.199 & 0.121 & 0.097 & 0.201 & 0.085 & 0.069 & 0.199 & 0.059 & 0.047 \\
**Scenario (ii):**\(DL=0\) **and**\(SP\neq 0\) & & & & & & & & \\ \(\beta_{K}\) & 0.25 & 0.249 & 0.055 & 0.042 & 0.250 & 0.037 & 0.029 & 0.252 & 0.027 & 0.021 \\ \(AR\) & 0.55 & 0.546 & 0.085 & 0.070 & 0.546 & 0.059 & 0.048 & 0.549 & 0.042 & 0.034 \\ \(DL\) & 0 & -0.002 & 0.154 & 0.124 & 0.001 & 0.106 & 0.087 & 0.000 & 0.075 & 0.060 \\ \(SP\) & 0.40 & 0.395 & 0.172 & 0.140 & 0.402 & 0.124 & 0.102 & 0.400 & 0.085 & 0.069 \\ \(TIL\) & 0 & 0.004 & 0.067 & 0.049 & 0.002 & 0.045 & 0.034 & 0.000 & 0.030 & 0.024 \\ \multicolumn{8}{c}{**Scenario (iii):**\(DL=0\) **and**\(SP=0\)} & & & & & & & \\ \(\beta_{K}\) & 0.25 & 0.248 & 0.051 & 0.040 & 0.250 & 0.035 & 0.028 & 0.251 & 0.025 & 0.020 \\ \(AR\) & 0.55 & 0.546 & 0.083 & 0.067 & 0.546 & 0.058 & 0.047 & 0.549 & 0.040 & 0.033 \\ \(DL\) & 0 & 0.003 & 0.155 & 0.125 & 0.003 & 0.106 & 0.086 & 0.001 & 0.075 & 0.061 \\ \(SP\) & 0 & -0.031 & 0.157 & 0.131 & -0.009 & 0.108 & 0.091 & -0.004 & 0.078 & 0.065 \\ \(TIL\) & 0 & -0.004 & 0.028 & 0.018 & -0.002 & 0.013 & 0.008 & -0.001 & 0.006 & 0.004 \\ \hline \multicolumn{8}{c}{DGP with \(G\) Following an \(\omega\)-Controlled Process} \\
**Scenario (i):**\(DL\neq 0\) **and**\(SP\neq 0\) & & & & & & \\ \(\beta_{K}\) & 0.25 & 0.249 & 0.060 & 0.047 & 0.251 & 0.040 & 0.032 & 0.252 & 0.029 & 0.023 \\ \(AR\) & 0.55 & 0.544 & 0.110 & 0.089 & 0.546 & 0.073 & 0.060 & 0.548 & 0.053 & 0.043 \\ \(DL\) & 0.50 & 0.503 & 0.148 & 0.120 & 0.502 & 0.099 & 0.081 & 0.502 & 0.070 & 0.057 \\ \(SP\) & 0.40 & 0.404 & 0.058 & 0.047 & 0.403 & 0.041 & 0.033 & 0.401 & 0.028 & 0.023 \\ \(TIL\) & 0.20 & 0.201 & 0.069 & 0.056 & 0.201 & 0.047 & 0.038 & 0.201 & 0.032 & 0.026 \\
**Scenario (ii):**\(DL=0\) **and**\(SP\neq 0\) & & & & & & & & \\ \(\beta_{K}\) & 0.25 & 0.251 & 0.058 & 0.044 & 0.250 & 0.037 & 0.029 & 0.252 & 0.027 & 0.021 \\ \(AR\) & 0.55 & 0.544 & 0.105 & 0.087 & 0.545 & 0.070 & 0.058 & 0.548 & 0.051 & 0.042 \\ \(DL\) & 0 & 0.004 & 0.134 & 0.108 & 0.003 & 0.092 & 0.076 & 0.002 & 0.067 & 0.055 \\ \(SP\) & 0.40 & 0.387 & 0.224 & 0.183 & 0.400 & 0.158 & 0.130 & 0.399 & 0.116 & 0.094 \\ \(TIL\) & 0 & -0.007 & 0.060 & 0.043 & -0.003 & 0.038 & 0.028 & -0.001 & 0.027 & 0.021 \\ \multicolumn{8}{c}{**Scenario (iii):**\(DL=0\) **and**\(SP=0\)} & & & & & & \\ \(\beta_{K}\) & 0.25 & 0.248 & 0.052 & 0.041 & 0.250 & 0.035 & 0.028 & 0.251 & 0.025 & 0.019 \\ \(AR\) & 0.55 & 0.546 & 0.101 & 0.083 & 0.546 & 0.069 & 0.056 & 0.548 & 0.050 & 0.040 \\ \(DL\) & 0 & -0.001 & 0.134 & 0.109 & 0.000 & 0.092 & 0.075 & 0.001 & 0.068 & 0.054 \\ \(SP\) & 0 & -0.031 & 0.155 & 0.128 & -0.008 & 0.105 & 0.088 & -0.004 & 0.075 & 0.063 \\ \(TIL\) & 0 & 0.000 & 0.022 & 0.015 & 0.000 & 0.011 & 0.007 & 0.000 & 0.005 & 0.003 \\ \hline \multicolumn{8}{c}{Notes: Owing to linearity of the productivity process in (6.3), the true values of \(AR\), \(DL\), \(SP\) and \(TIL\) are the fixed coefficients for all \(i\) and \(t\), and \(TIL=SP\times DL\) is derived indirectly. Throughout, \(T=10\).} \\ \hline \hline \end{tabular}
\end{table}
Table 1: Simulation Results for Our Estimation Methodology
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline \multicolumn{2}{c|}{True} & \multicolumn{2}{c}{\(n=100\)} & \multicolumn{2}{c}{\(n=200\)} & \multicolumn{2}{c}{\(n=400\)} \\ Value & Mean & RMSE & Mean & RMSE & Mean & RMSE \\ \hline \multicolumn{2}{c}{DGP with \(G\) Evolving Exogenously} \\
**Scenario (i):**\(DL\neq 0\) **and**\(SP\neq 0\) \\ \(DL\) & 0.50 & 0.167 & 0.355 & 0.171 & 0.340 & 0.173 & 0.333 \\ \(TIL\) & 0.20 & \(-\)0.577 & 0.793 & \(-\)0.578 & 0.787 & \(-\)0.577 & 0.781 \\
**Scenario (ii):**\(DL=0\) **and**\(SP\neq 0\) \\ \(DL\) & 0 & \(-\)0.202 & 0.225 & \(-\)0.200 & 0.211 & \(-\)0.200 & 0.206 \\ \(TIL\) & 0 & \(-\)0.348 & 0.372 & \(-\)0.345 & 0.361 & \(-\)0.352 & 0.357 \\
**Scenario (iii):**\(DL=0\) **and**\(SP=0\) \\ \(DL\) & 0 & 0.461 & 0.470 & 0.464 & 0.468 & 0.465 & 0.467 \\ \(TIL\) & 0 & 0.165 & 0.204 & 0.170 & 0.191 & 0.169 & 0.180 \\ \hline \multicolumn{2}{c}{DGP with \(G\) Following an \(\omega\)-Controlled Process} \\
**Scenario (i):**\(DL\neq 0\) **and**\(SP\neq 0\) \\ \(DL\) & 0.50 & 1.651 & 1.152 & 1.650 & 1.151 & 1.652 & 1.152 \\ \(TIL\) & 0.20 & 0.582 & 0.387 & 0.585 & 0.387 & 0.586 & 0.387 \\
**Scenario (ii):**\(DL=0\) **and**\(SP\neq 0\) \\ \(DL\) & 0 & 0.416 & 0.420 & 0.420 & 0.422 & 0.424 & 0.425 \\ \(TIL\) & 0 & \(-\)0.112 & 0.119 & \(-\)0.113 & 0.117 & \(-\)0.115 & 0.117 \\
**Scenario (iii):**\(DL=0\) **and**\(SP=0\) \\ \(DL\) & 0 & 0.616 & 0.619 & 0.617 & 0.619 & 0.621 & 0.621 \\ \(TIL\) & 0 & \(-\)0.362 & 0.365 & \(-\)0.361 & 0.362 & \(-\)0.362 & 0.363 \\ \hline \hline \end{tabular} Notes: Reported are the results from the second-step regression in (6.4) estimated with \(\widehat{\omega}_{it}\) obtained in the first step using the standard proxy estimator under the assumption of exogenous Markov productivity process. \(DL\) and \(TIL\) are respectively measured by \(\alpha_{13}\) and \(\alpha_{12}\), with the latter capturing spillovers. Throughout, \(T=10\).
\end{table}
Table 2: Simulation Results for the Two-Step Alternative Estimator of Spillovers ALT1
\begin{table}
\begin{tabular}{c c|c c|c c|c c} \hline \hline \multicolumn{2}{c|}{True} & \multicolumn{2}{c}{\(n=100\)} & \multicolumn{2}{c}{\(n=200\)} & \multicolumn{2}{c}{\(n=400\)} \\ Value & Mean & RMSE & Mean & RMSE & Mean & RMSE \\ \hline \multicolumn{8}{c}{DGP with \(G\) Evolving Exogenously} \\
**Scenario (i):**\(DL\neq 0\) **and**\(SP\neq 0\) \\ \(DL\) & 0.50 & 0.800 & 0.313 & 0.814 & 0.320 & 0.817 & 0.320 \\ \(SP\) & 0.40 & 1.074 & 0.676 & 1.083 & 0.684 & 1.090 & 0.690 \\
**Scenario (ii):**\(DL=0\) **and**\(SP\neq 0\) \\ \(DL\) & 0 & –0.036 & 0.093 & –0.015 & 0.063 & –0.008 & 0.044 \\ \(SP\) & 0.40 & 0.751 & 0.518 & 0.874 & 0.502 & 0.924 & 0.527 \\
**Scenario (iii):**\(DL=0\) **and**\(SP=0\) \\ \(DL\) & 0 & 0.005 & 0.084 & 0.004 & 0.059 & 0.003 & 0.042 \\ \(SP\) & 0 & 0.539 & 0.541 & 0.545 & 0.546 & 0.548 & 0.548 \\ \hline \multicolumn{8}{c}{DGP with \(G\) Following an \(\omega\)-Controlled Process} \\
**Scenario (i):**\(DL\neq 0\) **and**\(SP\neq 0\) \\ \(DL\) & 0.50 & 1.149 & 0.650 & 1.152 & 0.652 & 1.156 & 0.656 \\ \(SP\) & 0.40 & 0.580 & 0.182 & 0.579 & 0.180 & 0.577 & 0.178 \\
**Scenario (ii):**\(DL=0\) **and**\(SP\neq 0\) \\ \(DL\) & 0 & 0.369 & 0.376 & 0.376 & 0.379 & 0.375 & 0.377 \\ \(SP\) & 0.40 & –0.708 & 1.304 & –0.635 & 1.121 & –0.583 & 1.020 \\
**Scenario (iii):**\(DL=0\) **and**\(SP=0\) \\ \(DL\) & 0 & 0.406 & 0.406 & 0.405 & 0.406 & 0.407 & 0.407 \\ \(SP\) & 0 & 0.390 & 0.393 & 0.395 & 0.396 & 0.397 & 0.397 \\ \hline \hline \multicolumn{8}{c}{Notes: Reported are the results from the second-step regression in (6.5) estimated with \(\widehat{\omega}_{it}\) obtained in the first step using the standard proxy estimator under the assumption of exogenous Markov productivity process. \(DL\) and \(SP\) are respectively measured by \(\alpha_{23}\) and \(SP=\alpha_{22}\). Throughout, \(T=10\).} \\ \hline \hline \end{tabular}
\end{table}
Table 3: Simulation Results for the Two-Step Alternative Estimator of Spillovers ALT2
**Table 4: Estimates of the Productivity Effects**

| Estimand | 1st Qu. | Median | 3rd Qu. | Statistically \(>0\) (% Obs.) |
|---|---|---|---|---|
| \(SP\) | 0.182 (0.108, 0.240) | 0.327 (0.211, 0.399) | 0.452 (0.302, 0.543) | 83.84 |
| \(DL\) | 0.096 (0.076, 0.118) | 0.138 (0.118, 0.164) | 0.168 (0.146, 0.197) | 89.13 |
| \(TIL\) | 0.022 (0.011, 0.032) | 0.037 (0.022, 0.050) | 0.053 (0.034, 0.068) | 86.71 |

Notes: Reported are the results based on our baseline specification of the productivity process in (3.3). The first three columns summarize point estimates of \(SP_{it}\), \(DL_{it}\) and \(TIL_{it}\) with the corresponding two-sided 95% bootstrap percentile confidence intervals in parentheses. The last column reports the share of observations for which the point estimates are statistically positive at the 5% significance level using one-sided bootstrap confidence intervals.
**Table 5: Heterogeneity and Nonlinearity in the Productivity Effects**

| | \(SP\) | \(DL\) |
|---|---|---|
| \(\omega_{i,t-1}\) | –0.735 (–0.890, –0.550) | –0.164 (–0.203, –0.126) |
| \(G_{i,t-1}\) | –0.198 (–0.403, –0.103) | –0.155 (–0.184, –0.127) |
| \(\sum_{j}s_{ij,t-1}\omega_{j,t-1}\) | 1.249 (0.423, 1.527) | –0.198 (–0.403, –0.103) |

Notes: Reported are the parameter estimates for the \(SP\) and \(DL\) functions derived from the polynomial approximation of the conditional mean of \(\omega_{it}\) in the productivity process formulation in (3.3). Two-sided 95% bootstrap percentile confidence intervals in parentheses. These correspond to our baseline specification.
On the Estimation of Cross-Firm Productivity Spillovers with an Application to FDI
Emir Malikov\({}^{1}\)Shunan Zhao\({}^{2}\)
\({}^{1}\)University of Nevada, Las Vegas
\({}^{2}\)Oakland University
## Relation to the Augmented Production Function Approach
A two-step framework, which we seek to improve upon in this paper, is not universal across empirical studies of productivity spillovers. The exceptions are predominantly from the literature on R&D-borne productivity spillovers, where some studies instead adopt a single-step methodology centered on the estimation of the Griliches (1979)-style "augmented production function" which, besides the conventional inputs, also explicitly admits the firm's own and external knowledge capital stock. Seemingly, such a model readily provides estimates of the "contextual" spillover effects of R&D on firm production in one step. However, this framework is rather unique to the studies of spillovers in R&D, because this productivity-enhancing activity is the most input-accumulation-like in that it is an investment in knowledge capital. Augmenting the firm's production function to include the FDI/exports/imports variables and their respective spillover pool measures is however not as conceptually unambiguous. Specifically, this is problematic on at least two fronts. First, the spillover effects on firm productivity in such a setup are essentially assumed to be deterministic, whereby the impact on productivity is improbably the same for all firms without a possibility of the varying degree of success (say, due to random luck or misfortune). Second, including internal and external measures of productivity modifiers directly into the production function effectively implies substitutability of the firm's inputs with not only its own productivity-enhancing activities such as FDI or exporting but also--and perhaps more eyebrow-raising--with those of its peers. This remark equally applies to the case of R&D spillovers1 and is along the lines of De Loecker's (2013) critique in the context of estimating the learning-by-exporting effects on firm productivity. Not least importantly, identification of productivity spillovers in prior studies (including those via R&D) may also be seriously hindered by the well-known econometric problems with standard proxy-based or (dynamic panel) fixed-effects production function estimators.2 This further highlights the practical
usefulness of our proposed methodology.
## Appendix B China's Electric Machinery Manufacturing
Our empirical analysis focuses on China's electric machinery and equipment manufacturing industry which includes manufacturing of generators and motors, power transmission and distribution equipment, wires and cables, batteries, household electric and non-electric appliances, lighting appliances, etc. We select this industry because it is historically one of the country's most fundamental manufacturing sectors. The development of this industry has been closely related to the growth of GDP and the ever-expanding demand for electricity. By its very nature, the industry has thus been crucial for promoting the overall industrialization in China. Besides that, electric machinery and equipment is also China's most exported product (Euro Exim Bank, 2020), and the industry amounts to over a quarter of global sales (Deloitte China Manufacturing Industry Group, 2013). It is also one of the manufacturing industries that receive the most FDI. For instance, in 2005 alone (near the end of our sample period) the machinery and equipment industry in China attracted $4 billion in foreign investment, which was about 10% of FDI inflows to Chinese manufacturing that year (Ihrcke & Becker, 2006).
Foreign-invested firms are the dominant players in this industry. Arguably, this is mainly due to China's lack of domestic innovation capabilities, excessive failure rates of R&D and, consequently, high dependence on new technologies imported from abroad, particularly during the first decade following renewed privatization efforts in the late 1990s (the period of our analysis). For example, according to the Xiamen Bureau of Statistics, in Xiamen (a large and important port-city on the East coast) just 48 large foreign-invested firms owned 82% of fixed assets and produced 79% of the output value in the local electric machinery industry in 2005.
More generally, the Chinese electric machinery and equipment manufacturing industry is characterized by a high degree of spatial clustering and industrial agglomeration [mainly on the coast; see Figure H.2(a) in Appendix H] typical for such technology- and skill-intensive industries. Along with the government's emphasis on innovations and new technologies as a means
for sustainable development of this industry, this makes it an interesting application for studying productivity effects of inbound FDI and the associated spillovers across firms.
## Appendix C Translog Production Function
Our methodology can accommodate more flexible specifications of the firm's production function. The log-quadratic translog specification provides a natural extension of the log-linear Cobb-Douglas form that we have assumed in (3.1). The former is more flexible and implies input and scale elasticities that vary both over time and across firms, thereby being more robust to firm heterogeneity. For instance, see De Loecker & Warzynski (2012) and De Loecker et al. (2016) for recent applications of the translog production functions in the structural proxy estimation.
Let the firm's stochastic production function take the following form _in logs_:
\[y_{it} =\beta_{0}+\beta_{K}k_{it}+\tfrac{1}{2}\beta_{KK}k_{it}^{2}+\beta _{L}l_{it}+\tfrac{1}{2}\beta_{LL}l_{it}^{2}+\beta_{M}m_{it}+\tfrac{1}{2}\beta_{ MM}m_{it}^{2}+\] \[\quad\beta_{KL}k_{it}l_{it}+\beta_{KM}k_{it}m_{it}+\beta_{LM}l_{ it}m_{it}+\omega_{it}+\eta_{it}\] \[\equiv T(k_{it},l_{it},m_{it})+\omega_{it}+\eta_{it},\] (C.1)
where \(T(k_{it},l_{it},m_{it})\) is a shorthand for the translog expansion of inputs. All the remaining assumptions about the market environment, productivity processes, timing of production decisions and learning, etc. stay unchanged.
The firm's static optimization problem with respect to materials now is
\[\max_{M_{it}}\,P_{t}^{Y}\exp\{T(k_{it},l_{it},m_{it})\}\exp\{\omega_{it}\} \theta-P_{t}^{M}M_{it},\] (C.2)
with the corresponding first-order condition given by
\[P_{t}^{Y}\exp\{T(k_{it},l_{it},m_{it})\}\frac{\beta_{M}+\beta_{MM}m_{it}+\beta _{KM}k_{it}+\beta_{LM}l_{it}}{M_{it}}\exp\{\omega_{it}\}\theta=P_{t}^{M}.\] (C.3)
Dividing (C.3) by the translog production function expressed in levels and then taking logs of both sides, we obtain the following material share equation:
\[\ln V_{it}=\ln(\left[\beta_{M}+\beta_{MM}m_{it}+\beta_{KM}k_{it}+\beta_{LM}l_ {it}\right]\theta)-\eta_{it},\] (C.4)
where \(\beta_{M}+\beta_{MM}m_{it}+\beta_{KM}k_{it}+\beta_{LM}l_{it}\) is the material elasticity function. Analogous to the
discussion in Section 4, the above share equation identifies the material-related production-function parameters \((\beta_{M},\beta_{MM},\beta_{KM},\beta_{LM})^{\prime}\) as well as the mean of exponentiated shocks \(\theta=\mathbb{E}[\exp\{\eta_{it}\}]\) based on the mean-orthogonality condition \(\mathbb{E}[\eta_{it}\,|\,\mathcal{I}_{it}]=\mathbb{E}[\eta_{it}]=0\). These parameters are to be estimated in the first stage via nonlinear least squares on (C.4).
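As a rough illustration only, the following sketch shows one way such a first-stage nonlinear least-squares step could be coded (Python). It estimates the products \(\beta\cdot\theta\) first and then backs out \(\theta\) from the fitted residuals via \(\theta=\mathbb{E}[\exp\{\eta_{it}\}]\); the variable names and starting values are placeholders rather than the paper's actual implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def first_stage_share_nls(lnV, m, k, l):
    """Sketch of NLS on the translog material share equation (C.4):
       ln V = ln([b_M + b_MM*m + b_KM*k + b_LM*l] * theta) - eta,  E[eta] = 0.
    The NLS step pins down d = beta * theta; theta is then recovered as the
    sample mean of exp(eta_hat), and beta = d / theta.
    """
    lnV, m, k, l = map(np.asarray, (lnV, m, k, l))

    def eta(d):
        dM, dMM, dKM, dLM = d
        elast_times_theta = dM + dMM * m + dKM * k + dLM * l  # must stay positive
        return np.log(elast_times_theta) - lnV

    d0 = np.array([0.7, 0.0, 0.0, 0.0])          # illustrative starting values
    fit = least_squares(eta, d0)
    eta_hat = eta(fit.x)
    theta_hat = np.exp(eta_hat).mean()           # theta = E[exp(eta)]
    beta_hat = fit.x / theta_hat                 # (b_M, b_MM, b_KM, b_LM)
    return beta_hat, theta_hat, eta_hat
```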
Having identified the production function in the dimension of its endogenous static input \(m_{it}\), we focus on the remaining production-function parameters as well as the nonparametric evolution process for \(\omega_{it}\). With the already identified \(y_{it}^{*}\equiv y_{it}-\beta_{M}m_{it}-\frac{1}{2}\beta_{MM}m_{it}^{2}-\beta_{ KM}k_{it}m_{it}-\beta_{LM}l_{it}m_{it}\) and using the Markovian process for productivity, we now have the analogue of (4.7):
\[y_{it}^{*}=\beta_{K}k_{it}+\tfrac{1}{2}\beta_{KK}k_{it}^{2}+\beta_{L}l_{it}+ \tfrac{1}{2}\beta_{LL}l_{it}^{2}+\beta_{KL}k_{it}l_{it}+h\left(\omega_{i,t-1}, G_{i,t-1},\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\right)+\zeta_{it}+\eta_{it}\] (C.5)
that contains no endogenous variables on the right-hand side. Next, proxying for \(\omega_{i,t-1}\) and \(\omega_{j,t-1}\) via the inverted material function derived from (C.3), we obtain
\[y_{it}^{*}=\beta_{K}k_{it}+\tfrac{1}{2}\beta_{KK}k_{it}^{2}+\beta_{L}l_{it}+\tfrac{1}{2}\beta_{LL}l_{it}^{2}+\beta_{KL}k_{it}l_{it}+h\left(\omega_{i,t-1}^{*}\left(\beta_{K},\beta_{L},\beta_{KK},\beta_{LL},\beta_{KL}\right),G_{i,t-1},\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}^{*}\left(\beta_{K},\beta_{L},\beta_{KK},\beta_{LL},\beta_{KL}\right)\right)+\zeta_{it}+\eta_{it},\] (C.6)
where the productivity proxy function is given by
\[\omega_{it}^{*}\left(\beta_{K},\beta_{L},\beta_{KK},\beta_{LL},\beta_{KL} \right)=\varkappa_{it}^{*}-\beta_{K}k_{it}-\tfrac{1}{2}\beta_{KK}k_{it}^{2}- \beta_{L}l_{it}-\tfrac{1}{2}\beta_{LL}l_{it}^{2}-\beta_{KL}k_{it}l_{it}\quad \forall i,t,\] (C.7)
with
\[\varkappa_{it}^{*}=\ln(P_{t}^{M}/P_{t}^{Y})-\ln([\beta_{M}+\beta_{MM}m_{it}+\beta_{KM}k_{it}+\beta_{LM}l_{it}]\theta)+(1-\beta_{M})m_{it}-\tfrac{1}{2}\beta_{MM}m_{it}^{2}-\beta_{KM}k_{it}m_{it}-\beta_{LM}l_{it}m_{it}\]
being a function of the parameters that have already been identified in the first stage.
A semiparametric model in (C.6) is then identified based on the same moment restriction as in (4.10), with all right-hand-side covariates being weakly exogenous and thus self-instrumenting. Approximating the unknown \(h(\cdot)\) via linear sieves, (C.6) is to be estimated in the second stage via semiparametric nonlinear least-squares. The remaining aspects closely
follow the estimation procedure outlined in Section 5.
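The second stage can then be sketched as a profiled nonlinear least-squares problem: for every candidate vector of the remaining production-function coefficients, the lagged productivity proxy (C.7) and its peer average are rebuilt, the sieve coefficients of \(h(\cdot)\) are concentrated out by OLS, and the resulting sum of squared residuals is minimized. The code below is only a stylized sketch of that logic (Python); it presumes a precomputed row-stochastic peer-weight matrix `S_lag` built from the lagged peer groups, uses illustrative variable names, and abstracts from the moment-based formulation and inference discussed in Section 5 and Appendix F:

```python
import numpy as np
from scipy.optimize import minimize

def second_stage_profiled_nls(y_star, k, l, G_lag, kappa_star_lag, k_lag, l_lag, S_lag):
    """Stylized second-stage estimator for (C.6) via profiled NLS."""

    def sieve_basis(x1, x2, x3):
        # 2nd-degree polynomial sieve in the three productivity determinants
        return np.column_stack([np.ones_like(x1), x1, x2, x3,
                                x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

    def ssr(beta):
        bK, bKK, bL, bLL, bKL = beta
        # Lagged productivity proxy, eq. (C.7), at the candidate beta
        w_lag = (kappa_star_lag - bK*k_lag - 0.5*bKK*k_lag**2
                 - bL*l_lag - 0.5*bLL*l_lag**2 - bKL*k_lag*l_lag)
        peer_w_lag = S_lag @ w_lag                       # peers' lagged productivity
        X = sieve_basis(w_lag, G_lag, peer_w_lag)
        dep = y_star - (bK*k + 0.5*bKK*k**2 + bL*l + 0.5*bLL*l**2 + bKL*k*l)
        gamma, *_ = np.linalg.lstsq(X, dep, rcond=None)  # concentrate out h(.)
        u = dep - X @ gamma
        return u @ u

    beta0 = np.array([0.1, 0.0, 0.2, 0.0, 0.0])          # illustrative starting values
    return minimize(ssr, beta0, method="Nelder-Mead").x
```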
## Appendix D Asymmetric Productivity Spillovers
Our baseline peer-weighting scheme in (3.4) treats cross-firm spillovers symmetrically in that all members of a peer group affect each other's productivity. That is, each \(i\)th firm's productivity is influenced by the average productivity of all its peers: those that are more _and_ those that are less productive than the firm \(i\) itself. Given that we have no prior beliefs about the directionality of productivity spillovers in China's electric machinery manufacturing that we study in our empirical application, we opt for a symmetric specification. But should one choose to regulate the direction of productivity spillovers by restricting them to occur _from_ more productive _to_ less productive firms, our framework can be modified to accommodate that too.
The latter case however implies a somewhat different conceptualization of cross-firm dependence in which firms are said to learn exclusively from (relative) productivity "leaders." The identification of such _a_symmetric spillovers, which are conditional on the firm's own productivity relative to that of its peers, generally requires additional structural/timing assumptions.
To model productivity spillovers between firms asymmetrically, we can redefine peer weights \(\{s_{ijt}\}\) as follows:
\[s^{*}_{ijt}=\frac{\mathbb{1}\{(j,t)\in\mathcal{L}(i,t)\text{ and }\omega_{j,t-1}>\omega_{i,t-1}\}}{\sum_{k(\neq i)=1}^{n}\mathbb{1}\{(k,t)\in\mathcal{L}(i,t)\text{ and }\omega_{k,t-1}>\omega_{i,t-1}\}},\] (D.1)
so that only the neighbors who are more productive than the firm \(i\) are identified as its peers for external cross-firm learning. Note that, in the above, the relevant peers at time \(t\) are selected based on their relative productivity superiority in the _previous_ period \(t-1\). Without this, we would not be able to separate the cross-firm spillover effect from the firm's own autoregressive effect, conflating the two. The latter becomes obvious when we substitute (D.1) into the Markov
productivity process (3.3) that describes the evolution of firm \(i\)'s productivity over time:
\[\omega_{it}=\mathbb{E}\left[\omega_{it}\,\middle|\,\omega_{i,t-1},G_{i,t-1},\sum_{j(\neq i)}\underbrace{\frac{\mathbb{1}\{(j,t-1)\in\mathcal{L}(i,t-1)\text{ and }\omega_{j,t-2}>\omega_{i,t-2}\}}{\sum_{k(\neq i)}\mathbb{1}\{(k,t-1)\in\mathcal{L}(i,t-1)\text{ and }\omega_{k,t-2}>\omega_{i,t-2}\}}}_{s^{*}_{ij,t-1}}\omega_{j,t-1}\right]+\zeta_{it}.\] (D.2)
By making the asymmetry in external learning be a function of the _twice_-lagged pair-wise productivity differentials between the firm and its peers, we avoid the appearance of \(\omega_{i,t-1}\) in two places, thereby allowing us to partial out the cross-firm spillovers from the _auto_regressive persistence in productivity. Thus, to separably identify asymmetric spillovers in productivity, in addition to assuming that (both the internal and external) learning occurs with a delay, one also requires an assumption that the firm takes an additional period to identify more productive peers. However, we do not need this additional timing assumption in our baseline analysis (with symmetric interactions).
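A small sketch of how such asymmetric weights could be constructed is given below (Python; `in_group` is a hypothetical boolean peer-membership matrix for the reference groups \(\mathcal{L}(i,t)\), and `omega_prev` holds the productivities from the ranking period):

```python
import numpy as np

def asymmetric_peer_weights(omega_prev, in_group):
    """Asymmetric peer weights s*_{ijt} in the spirit of (D.1), sketched.

    omega_prev : array of productivities in the period used for ranking peers
                 (lagged one extra period relative to the spillover term, see D.2)
    in_group   : boolean matrix with in_group[i, j] = True if j belongs to firm i's
                 reference group (and in_group[i, i] = False)
    Returns a row-stochastic matrix putting equal weight on the group members
    that are strictly more productive than firm i; rows with no such peer are 0.
    """
    more_productive = omega_prev[None, :] > omega_prev[:, None]   # [i, j]: omega_j > omega_i
    indicator = (in_group & more_productive).astype(float)
    row_sums = indicator.sum(axis=1, keepdims=True)
    with np.errstate(invalid="ignore", divide="ignore"):
        S = np.where(row_sums > 0, indicator / row_sums, 0.0)
    return S
```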
## Appendix E Additional Modeling Considerations
**Contextual Effects.** Because we assume delayed cross-firm peer interactions, as noted by Manski (1993), the _dynamic_ nature of productivity model (3.3) can potentially provide an additional avenue to circumvent the unidentification problems and separate different types of peer effects, should one be interested in also modeling the "contextual effects" on productivity via \(\sum_{j(\neq i)}s_{ij,t-1}G_{j,t-1}\). In such a setup, per the results in Bramoulle et al. (2009), the separable identification of "endogenous" and "contextual" effects can also be achieved by relying on variation in the size of peer reference groups, so long as the firm \(i\) is excluded when computing group means, as is in our case. Alternatively, (co)variance-based quadratic moment conditions may be used to aid identification (Kelejian & Prucha, 1999; Lee, 2007; Kuersteiner & Prucha, 2020).
**Contemporaneous Effects.** Depending on the particular source of learning, it may sometimes be possible to reasonably relax the timing assumption that the learning effect of \(G_{it}\) on firm productivity operates with a delay. Take, for example, the firm's export status in the context of "learning by exporting." Consistent with much theoretical and empirical work in international trade, the
decision to start exporting is usually associated with large sunk entry costs, which would impede firms from adjusting their export status _immediately_ after experiencing an improvement in their productivity. Analogous arguments can be made about the costliness of swift geographic relocations. If so, it may be feasible to replace weak exogeneity of lagged \(G_{i,t-1}\) and \(\{s_{ij,t-1}\}\) with a stronger assumption of weak exogeneity of \(G_{it}\) and \(\{s_{ijt}\}\). The productivity process in (3.3) can then be modified as follows: \(\omega_{it}=\mathbb{E}\left[\omega_{it}\,|\,\omega_{i,t-1},G_{it},\sum_{j(\neq i)}s_{ijt}\omega_{j,t-1}\right]+\zeta_{it}\), where the implied mean-orthogonality of \(\zeta_{it}\) and \((G_{it},\sum_{j(\neq i)}s_{ijt}\omega_{j,t-1})^{\prime}\) is effectively tantamount to assuming that, due to adjustment costs, both the \(G_{it}\) and firm location in period \(t\) are determined in period \(t-1\) based on \(\omega_{i,t-1}\) just like the dynamic inputs are.
## Appendix F Inference
**Asymptotic Inference.** Let the moment vector in (5.4) be concisely written as \(\mathbb{E}[\mathbf{\rho}(\Theta)]=\mathbf{0}\), where \(\Theta\) is a collection of both the finite-dimensional coefficients \(\left(\beta_{M},\beta_{K},\beta_{L},\theta\right)^{\prime}\) and nonparametric sieve "parameters" \(\mathbf{\gamma}\). Given the just-identification of the model and so long as we use linear sieves for \(\mathcal{A}_{L_{n}}\left(\cdot\right)\) such as polynomial or B-spline series, we can make use of the numerical equivalence (see Hahn et al., 2018) between the consistent estimator of the asymptotic variance of _semiparametric_ "parameters" \(\widehat{\Theta}\) and a consistent estimator of the asymptotic variance derived for these parameter estimators as if the estimated model were of a _parametric_ form specified in (5.1) and (5.3). Thus, in practice, one can use the variance formula for a parametric two-step estimator to consistently estimate the variance of a semiparametric sieve estimator.3 The asymptotic variance for such a parametric two-step estimator can in turn be derived following Newey's (1984) suggestion by making use of the optimal GMM covariance formula: \(\mathbb{V}ar\left[\widehat{\Theta}\right]=\left[\mathbb{E}\frac{\partial\mathbf{\rho}(\Theta)}{\partial\Theta^{\prime}}\right]^{-1}\mathbb{E}[\mathbf{\rho}(\Theta)\mathbf{\rho}(\Theta)^{\prime}]\left[\mathbb{E}\frac{\partial\mathbf{\rho}(\Theta)}{\partial\Theta}\right]^{-1}\). This streamlines asymptotic inference.
Footnote 3: Note that this equivalence applies to finite samples only because, asymptotically, the number of sieve “parameters” will diverge to infinity with the sample size whereas the number of parameters in a parametric specification will stay a finite constant. Furthermore, the numerical equivalence holds more generally for fully _non_parametric two-step sieve estimators. In our case, the estimator is _semi_parametric, with the first step implemented using the known parametric form. Since ours is a special case of the nonparametric setup studied by Hahn et al. (2018), their results continue to apply.
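As a concrete illustration, the optimal GMM covariance formula above can be evaluated with a short plug-in routine once the stacked moment functions and their observation-level Jacobians are available; the sketch below (Python) uses illustrative array names and shapes:

```python
import numpy as np

def two_step_sandwich_variance(rho, jac):
    """Plug-in GMM 'sandwich' variance for the just-identified two-step estimator.

    rho : (n, q) stacked moment functions evaluated at the estimates, one row per obs.
    jac : (n, q, q) observation-level Jacobians d rho_i / d Theta'.
    Returns an estimate of Var(Theta_hat) = G^{-1} S (G^{-1})' / n,
    with G = E[d rho / d Theta'] and S = E[rho rho'].
    """
    n = rho.shape[0]
    G = jac.mean(axis=0)            # sample analogue of E[d rho / d Theta']
    S = rho.T @ rho / n             # sample analogue of E[rho rho']
    G_inv = np.linalg.inv(G)
    return G_inv @ S @ G_inv.T / n
```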
**Bias-Corrected Bootstrap Inference.** However, because asymptotic inference for semi- and nonparametric estimators is well known to perform unreliably due to finite-sample biases as well as the first-order asymptotic theory's poor ability to approximate the distribution of estimators in finite samples (Horowitz, 2001), for hypothesis testing we instead rely on Efron's (1987) accelerated bias-corrected bootstrap percentile confidence intervals, which are second-order accurate and provide means not only to correct for the estimator's finite-sample bias but also to account for higher-order moments (particularly, skewness) in the sampling distribution.
We approximate sampling distributions of the estimator via wild residual block bootstrap that takes into account a panel structure of the data, with both stages resampled jointly owing to a sequential nature of our estimation procedure. More specifically, when constructing wild bootstrap residuals, we work with the _joint_ distribution of firm-specific time series of \(\{\widehat{\eta}_{it}\}\) and \(\{\widehat{\zeta}_{it}\}\), with the auxiliary random variable drawn from the Mammen (1993) two-point distribution independently over \(i\). Note that this independence over \(i\) is consistent with our model's assumption about random productivity shocks. We set the number of bootstrap replications to \(B=400\). Having first obtained bootstrap parameter estimates \(\{(\widehat{\beta}_{K}^{b},\widehat{\beta}_{L}^{b},\widehat{\beta}_{M}^{b})^{ \prime};\ b=1,\ldots,B\}\) and \(\{\widehat{\gamma}^{b};\ b=1,\ldots,B\}\), we then obtain bootstrap values for our main estimands of interest: \(\widehat{DL}_{it}^{b}\), \(\widehat{SP}_{it}^{b}\) and \(\widehat{TIL}_{it}^{b}\) for \(b=1,\ldots,B\) (at each observation). Next, we use the accelerated bias-correction method to make inference about \(DL\), \(SP\) and \(TIL\).
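To illustrate the resampling step only (the rebuilding of the bootstrap sample and the re-estimation of both stages are omitted), a minimal sketch of the firm-level Mammen multipliers applied jointly to the two residual series might look as follows (Python):

```python
import numpy as np

def mammen_weights(n_firms, rng):
    """Draw one Mammen (1993) two-point multiplier per firm (constant over t)."""
    a, b = (1 - np.sqrt(5)) / 2, (1 + np.sqrt(5)) / 2      # approx -0.618 and 1.618
    p = (np.sqrt(5) + 1) / (2 * np.sqrt(5))                # P(v = a), approx 0.724
    return np.where(rng.random(n_firms) < p, a, b)

def wild_block_bootstrap_residuals(eta_hat, zeta_hat, firm_ids, rng):
    """Wild residual block bootstrap, sketched: the same firm-level multiplier
    scales the firm's entire time series of first- and second-stage residuals
    jointly, preserving the panel (block) structure and their co-movement."""
    firms, inverse = np.unique(firm_ids, return_inverse=True)
    v = mammen_weights(len(firms), rng)[inverse]           # one draw per firm
    return v * eta_hat, v * zeta_hat

# Hypothetical usage inside one bootstrap replication b = 1, ..., B:
# rng = np.random.default_rng(b)
# eta_b, zeta_b = wild_block_bootstrap_residuals(eta_hat, zeta_hat, firm_ids, rng)
# ... rebuild the bootstrap sample from the fitted model plus (eta_b, zeta_b)
#     and re-estimate both stages to obtain the b-th set of parameter estimates.
```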
To make matters concrete, let the (observation-specific) estimand of focus be denoted by \(\widehat{E}\). We use the empirical distribution of \(B\) bootstrap estimates \(\{\widehat{E}^{1},\ldots,\widehat{E}^{B}\}\) to estimate \((1-a)\times 100\%\) confidence bounds for \(\widehat{E}\) as intervals between the \([a_{1}\times 100]\)th and \([a_{2}\times 100]\)th percentiles of its bootstrap distribution with
\[a_{1}=\Phi\left(\widehat{\phi}_{0}+\frac{\widehat{\phi}_{0}+ \phi_{a/2}}{1-\widehat{c}\big{(}\widehat{\phi}_{0}+\phi_{a/2}\big{)}}\right) \quad\text{and}\quad a_{2}=\Phi\left(\widehat{\phi}_{0}+\frac{\widehat{\phi}_ {0}+\phi_{(1-a/2)}}{1-\widehat{c}\big{(}\widehat{\phi}_{0}+\phi_{(1-a/2)} \big{)}}\right), \tag{F.1}\]
where \(\Phi(\cdot)\) is the standard normal cdf, \(\phi_{\alpha}\) is the (\(\alpha\times 100\))th percentile of the standard normal distribution,
\[\widehat{\phi}_{0}=\Phi^{-1}\left(\#\{\widehat{E}^{b}<\widehat{E }\}/B\right) \tag{F.2}\]
is a bias-correction factor, and \(\widehat{c}\) is an acceleration parameter which, following the literature, is estimated via jackknife as follows (e.g., see Shao & Tu, 1995):
\[\widehat{c}=\frac{\sum_{j=1}^{J}\left(J^{-1}\sum_{s=1}^{J}\widehat{E}^{s}-\widehat{E} ^{j}\right)^{3}}{6\left[\sum_{j=1}^{J}\left(J^{-1}\sum_{s=1}^{J}\widehat{E}^{s}- \widehat{E}^{j}\right)^{2}\right]^{3/2}},\] (F.3)
where \(\widehat{E}^{j}\) is the \(j(=1,\ldots,J)\)th jackknife estimate of \(E\).4
Footnote 4: We have tried different versions of the jackknife with similar results. We settle on a delete-\(50T\) jackknife (i.e., leave-\(50\)-cross-sections-out), which respects the panel structure of our data while yielding a reasonable number of subsamples on which estimation is not computationally prohibitive.
Note that both the acceleration and bias-correction factors are different for each estimator, denoted here generically by \(\widehat{E}\). That is, the bias-correction procedure is not only estimand-specific but may also be observation-specific, as it is in our case. Also, the estimated confidence intervals may not contain the original point estimates if the finite-sample bias is large.
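For concreteness, the following sketch implements the interval in (F.1)-(F.3) for a single generic estimand; the toy sample, the simple i.i.d. resampling and the delete-one jackknife are illustrative simplifications of our actual block bootstrap and delete-\(50T\) jackknife.

```python
# Sketch of Efron's accelerated bias-corrected (BCa) percentile interval,
# implementing eqs. (F.1)-(F.3); the inputs stand in for the point estimate,
# its bootstrap draws and its jackknife draws.
import numpy as np
from scipy.stats import norm

def bca_interval(e_hat, e_boot, e_jack, a=0.05):
    # Bias-correction factor, eq. (F.2).
    phi0 = norm.ppf(np.mean(e_boot < e_hat))
    # Acceleration parameter via the jackknife, eq. (F.3).
    d = e_jack.mean() - e_jack
    c = (d ** 3).sum() / (6.0 * ((d ** 2).sum()) ** 1.5)
    # Adjusted percentile levels, eq. (F.1).
    def level(q):
        z = norm.ppf(q)
        return norm.cdf(phi0 + (phi0 + z) / (1.0 - c * (phi0 + z)))
    lo, hi = level(a / 2), level(1 - a / 2)
    return np.quantile(e_boot, lo), np.quantile(e_boot, hi)

rng = np.random.default_rng(2)
x = rng.lognormal(size=300)                      # skewed toy sample
e_hat = x.mean()
e_boot = np.array([rng.choice(x, x.size).mean() for _ in range(400)])
e_jack = np.array([np.delete(x, j).mean() for j in range(x.size)])
print(bca_interval(e_hat, e_boot, e_jack))
```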
## Appendix G Additional Simulation Results
Nonlinear Productivity Process.Table G.1 presents the results for our proposed estimator when the productivity DGP is nonlinear. Specifically, we consider the following nonlinear productivity process:
\[\begin{split}\omega_{it}=\;&\rho_{0}+\rho_{11}\omega_{i,t-1}+\rho_{12}\omega_{i,t-1}^{2}+\rho_{21}\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}+\rho_{22}\bigg{(}\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\bigg{)}^{2}+\\ &\rho_{31}G_{i,t-1}+\rho_{32}G_{i,t-1}^{2}+\lambda_{12}\omega_{i,t-1}\bigg{(}\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\bigg{)}+\lambda_{13}\omega_{i,t-1}G_{i,t-1}+\\ &\lambda_{23}G_{i,t-1}\bigg{(}\sum_{j(\neq i)}s_{ij,t-1}\omega_{j,t-1}\bigg{)}+\zeta_{it},\end{split}\] (G.1)
where \(\rho_{0}=0.2\), \(\rho_{11}=0.65\), \(\rho_{12}=-0.015\), \(\rho_{21}=0.18\), \(\rho_{22}=0.025\), \(\rho_{31}=0.37\), \(\rho_{32}=0.12\), \(\lambda_{12}=0.006\), \(\lambda_{13}=-0.06\) and \(\lambda_{23}=0.07\). The rest of the DGP is kept unchanged (see Section 6).
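A minimal sketch of simulating the recursion in (G.1) is given below; the peer-weight matrix, the process for \(G_{it}\), the initial condition and the innovation variance are simplified placeholders rather than the full DGP of Section 6.

```python
# Sketch of simulating the nonlinear productivity process (G.1); the peer
# weights, the G process and the initial draw of omega are simplified
# placeholders rather than the complete simulation design.
import numpy as np

rng = np.random.default_rng(3)
n, T = 100, 10
rho0, rho11, rho12 = 0.2, 0.65, -0.015
rho21, rho22, rho31, rho32 = 0.18, 0.025, 0.37, 0.12
lam12, lam13, lam23 = 0.006, -0.06, 0.07

G = rng.beta(0.5, 3.0, size=(n, T))          # placeholder FDI shares in [0, 1]
S = np.full((n, n), 1.0 / (n - 1))           # equal peer weights, zero diagonal
np.fill_diagonal(S, 0.0)

omega = np.zeros((n, T))
omega[:, 0] = rng.normal(1.0, 0.3, size=n)
for t in range(1, T):
    w_lag, g_lag = omega[:, t - 1], G[:, t - 1]
    peer = S @ w_lag                         # sum_j s_ij,t-1 * omega_j,t-1
    omega[:, t] = (rho0 + rho11 * w_lag + rho12 * w_lag ** 2
                   + rho21 * peer + rho22 * peer ** 2
                   + rho31 * g_lag + rho32 * g_lag ** 2
                   + lam12 * w_lag * peer + lam13 * w_lag * g_lag
                   + lam23 * g_lag * peer
                   + rng.normal(0.0, 0.2, size=n))   # zeta_it innovation
```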
Table G.1 essentially replicates Table 1 using this new productivity process. The simulation results remain encouraging and show that our estimation methodology is consistent and recovers the true parameters well.
\begin{table}
\begin{tabular}{r r|r r r|r r r|r r r} \hline \hline \multicolumn{1}{c|}{Mean True} & \multicolumn{3}{c}{\(n=100\)} & \multicolumn{3}{c}{\(n=200\)} & \multicolumn{3}{c}{\(n=400\)} \\ \multicolumn{1}{c|}{Value} & \multicolumn{1}{c}{Mean} & \multicolumn{1}{c}{RMSE} & \multicolumn{1}{c|}{MAE} & \multicolumn{1}{c}{Mean} & \multicolumn{1}{c}{RMSE} & \multicolumn{1}{c|}{MAE} & \multicolumn{1}{c}{Mean} & \multicolumn{1}{c}{RMSE} & \multicolumn{1}{c}{MAE} \\ \hline \multicolumn{10}{c}{DGP with \(G\) Evolving Exogenously} \\
**Scenario (i): \(DL\neq 0\) and \(SP\neq 0\)** \\ \(\beta_{K}\) & 0.250 & 0.237 & 0.077 & 0.059 & 0.247 & 0.046 & 0.035 & 0.251 & 0.031 & 0.024 \\ \(AR\) & 0.596 & 0.593 & 0.081 & 0.066 & 0.592 & 0.054 & 0.044 & 0.594 & 0.038 & 0.031 \\ \(DL\) & 0.412 & 0.416 & 0.157 & 0.128 & 0.418 & 0.106 & 0.087 & 0.414 & 0.075 & 0.061 \\ \(SP\) & 0.301 & 0.274 & 0.313 & 0.257 & 0.303 & 0.200 & 0.166 & 0.302 & 0.138 & 0.115 \\ \(TIL\) & 0.124 & 0.102 & 0.144 & 0.112 & 0.119 & 0.085 & 0.069 & 0.120 & 0.059 & 0.048 \\
**Scenario (ii): \(DL=0\) and \(SP\neq 0\)** \\ \(\beta_{K}\) & 0.250 & 0.233 & 0.082 & 0.063 & 0.247 & 0.049 & 0.037 & 0.251 & 0.033 & 0.025 \\ \(AR\) & 0.609 & 0.606 & 0.081 & 0.067 & 0.606 & 0.055 & 0.045 & 0.608 & 0.039 & 0.032 \\ \(DL\) & 0 & 0.003 & 0.158 & 0.127 & 0.003 & 0.107 & 0.087 & 0.001 & 0.076 & 0.061 \\ \(SP\) & 0.274 & 0.088 & 0.301 & 0.246 & 0.222 & 0.198 & 0.162 & 0.257 & 0.130 & 0.107 \\ \(TIL\) & 0 & \(-0.011\) & 0.075 & 0.049 & \(-0.005\) & 0.038 & 0.026 & \(-0.002\) & 0.023 & 0.017 \\
**Scenario (iii): \(DL=0\) and \(SP=0\)** \\ \(\beta_{K}\) & 0.250 & 0.246 & 0.064 & 0.051 & 0.249 & 0.044 & 0.035 & 0.251 & 0.032 & 0.025 \\ \(AR\) & 0.672 & 0.623 & 0.078 & 0.064 & 0.623 & 0.054 & 0.044 & 0.626 & 0.038 & 0.031 \\ \(DL\) & 0 & 0.003 & 0.155 & 0.126 & 0.003 & 0.106 & 0.087 & 0.001 & 0.075 & 0.061 \\ \(SP\) & 0 & \(-0.034\) & 0.152 & 0.126 & \(-0.010\) & 0.106 & 0.086 & \(-0.004\) & 0.073 & 0.060 \\ \(TIL\) & 0 & \(-0.004\) & 0.027 & 0.017 & \(-0.002\) & 0.012 & 0.008 & \(-0.001\) & 0.006 & 0.004 \\ \hline \multicolumn{10}{c}{DGP with \(G\) Following an \(\omega\)-Controlled Process} \\
**Scenario (i): \(DL\neq 0\) and \(SP\neq 0\)** \\ \(\beta_{K}\) & 0.250 & 0.249 & 0.028 & 0.022 & 0.250 & 0.020 & 0.015 & 0.251 & 0.014 & 0.011 \\ \(AR\) & 0.338 & 0.328 & 0.078 & 0.060 & 0.334 & 0.054 & 0.043 & 0.338 & 0.038 & 0.030 \\ \(DL\) & 1.148 & 1.157 & 0.119 & 0.093 & 0.152 & 0.083 & 0.064 & 0.148 & 0.058 & 0.046 \\ \(SP\) & 0.695 & 0.697 & 0.036 & 0.029 & 0.697 & 0.025 & 0.021 & 0.695 & 0.017 & 0.014 \\ \(TIL\) & 0.798 & 0.808 & 0.294 & 0.166 & 0.803 & 0.207 & 0.119 & 0.798 & 0.146 & 0.082 \\
**Scenario (ii): \(DL=0\) and \(SP\neq 0\)** \\ \(\beta_{K}\) & 0.250 & 0.233 & 0.086 & 0.065 & 0.245 & 0.053 & 0.039 & 0.251 & 0.033 & 0.025 \\ \(AR\) & 0.609 & 0.609 & 0.102 & 0.084 & 0.607 & 0.068 & 0.055 & 0.608 & 0.050 & 0.040 \\ \(DL\) & 0 & \(-0.006\) & 0.135 & 0.109 & \(-0.002\) & 0.092 & 0.075 & 0.000 & 0.066 & 0.054 \\ \(SP\) & 0.274 & 0.085 & 0.341 & 0.277 & 0.207 & 0.215 & 0.175 & 0.259 & 0.152 & 0.124 \\ \(TIL\) & 0 & 0.023 & 0.069 & 0.045 & 0.005 & 0.034 & 0.023 & 0.003 & 0.021 & 0.015 \\
**Scenario (iii): \(DL=0\) and \(SP=0\)** \\ \(\beta_{K}\) & 0.250 & 0.246 & 0.068 & 0.053 & 0.249 & 0.044 & 0.035 & 0.251 & 0.031 & 0.025 \\ \(AR\) & 0.627 & 0.624 & 0.100 & 0.082 & 0.623 & 0.067 & 0.054 & 0.626 & 0.049 & 0.040 \\ \(DL\) & 0 & \(-0.001\) & 0.134 & 0.108 & 0.001 & 0.092 & 0.075 & 0.001 & 0.066 & 0.054 \\ \(SP\) & 0 & \(-0.035\) & 0.147 & 0.122 & \(-0.009\) & 0.101 & 0.084 & \(-0.004\) & 0.071 & 0.060 \\ \(TIL\) & 0 & 0.002 & 0.021 & 0.014 & 0.000 & 0.010 & 0.007 & 0.000 & 0.005 & 0.003 \\ \hline \multicolumn{10}{c}{Notes: Owing to nonlinearity of the productivity process in (G.1), \(AR\), \(DL\), \(SP\) and \(TIL\) are all observation-specific. With the sole exception for the fixed parameter \(\beta_{K}=0.25\), the mean true values are the averages (across simulation repetitions) of the mean simulated values over \(i\) and \(t\). Throughout, \(T=10\).} \\ \hline \hline \end{tabular}
\end{table}
Table G.1: Simulation Results for Our Estimator under the Nonlinear Productivity Process
First-Step Estimates of \(\beta_{K}\) from the Two-Step Estimator.To examine the ability of alternative models to identify firm productivity, we first study whether these two-step estimators can consistently estimate the production function coefficients (here, \(\beta_{K}\)), because \(\widehat{\omega}_{it}\) is constructed directly from these parameter estimates. The corresponding estimates of \(\beta_{K}\) are reported in Table G.2. These first-step results apply to both the ALT1 and ALT2 models and are obtained assuming that \(\omega_{it}\) is an exogenous first-order Markov process.
Alternative Two-Step Estimators of Spillovers.Table G.3 reports the results for the "spillovers" estimator (defined as either \(SP\) or \(TIL\)) from different variants of the second-step regressions in (6.4)-(6.5) estimated with \(\widehat{\omega}_{it}\) obtained in the first step using the standard proxy estimator under the assumption of exogenous Markov productivity process. The data are simulated assuming a linear productivity process under scenario (iii) with the true \(DL=0\) and \(SP=0\). Thus, the first-step estimation of productivity is correctly specified and consistent. For the estimation of second-step regressions, the \(G\) series is generated as an \(\omega\)-controlled process (b). The specifications containing contemporaneously endogenous regressors are estimated using their respective first lags as predetermined instruments.
Examining the results in Table G.3, we find that, across all specifications, the second-step
\begin{table}
\begin{tabular}{l c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{True} & \multicolumn{2}{c}{\(n=100\)} & \multicolumn{2}{c}{\(n=200\)} & \multicolumn{2}{c}{\(n=400\)} \\ & Value & Mean & RMSE & Mean & RMSE & Mean & RMSE \\ \hline \multicolumn{8}{c}{DGP with \(G\) Evolving Exogenously} \\ Scenario (i): \(DL\neq 0\) and \(SP\neq 0\) & 0.25 & 0.376 & 0.136 & 0.373 & 0.128 & 0.374 & 0.126 \\ Scenario (ii): \(DL=0\) and \(SP\neq 0\) & 0.25 & 0.438 & 0.193 & 0.435 & 0.188 & 0.436 & 0.187 \\ Scenario (iii): \(DL=0\) and \(SP=0\) & 0.25 & 0.251 & 0.040 & 0.250 & 0.029 & 0.251 & 0.020 \\ \hline \multicolumn{8}{c}{DGP with \(G\) Following an \(\omega\)–Controlled Process} \\ Scenario (i): \(DL\neq 0\) and \(SP\neq 0\) & 0.25 & 0.118 & 0.152 & 0.113 & 0.148 & 0.109 & 0.146 \\ \hline \multicolumn{8}{l}{Notes: Reported are the first-step results for \(\widehat{\beta}_{K}\) from the alternative estimators which proxy for latent productivity under the assumption of exogenous Markov process for \(\omega_{it}\). The results corresponding to scenarios (ii) and (iii) of the second DGP [bottom panel] are omitted because they are identical to those for the first DGP [top panel]. This is because not only does \(G\) not enter the alternative estimator but it also does not affect the evolution of firm productivity by design (\(DL=0\)) in these two scenarios. Throughout, \(T=10\).} \\ \hline \hline \end{tabular}
\end{table}
Table G.2: Simulation Results for the Alternative Estimator of \(\beta_{K}\)
estimator exhibits non-vanishing biases in the estimation of spillovers. All models spuriously fail to identify the true _zero_ cross-firm spillovers. Here we also report the rejection frequencies (over simulation repetitions) for the asymptotic \(z\)-test of the null that the coefficient of a spillover variable in the model is zero at the 95% confidence level. Had the second step been correctly specified and consistent, we would expect these rejection frequencies to all be around 0.05. Consistent with our expectations, the results in the table indicate drastic size distortions due to misspecification of the two-step approach. The results are qualitatively the same when we also control for firm and/or time fixed effects.
\begin{table}
\begin{tabular}{l r r r r r r r|r r r r} \hline \hline & \multicolumn{6}{c}{Results for \(\alpha_{12}\) (\(TIL\)) in eq. (6.4)} & \multicolumn{6}{c}{Results for \(\alpha_{22}\) (\(SP\)) in eq. (6.5)} \\ & I & II & III & IV & V & VI & VII & IX & X & XI \\ \hline \multicolumn{12}{l}{**Mean Estimate**} \\ \(n=100\) & 1.154 & 0.558 & 1.057 & 0.620 & \(-\)0.359 & 0.378 & \(-\)0.362 & 0.990 & 0.591 & 0.540 & 0.390 \\ \(n=200\) & 1.157 & 0.556 & 1.064 & 0.624 & \(-\)0.355 & 0.386 & \(-\)0.361 & 0.995 & 0.597 & 0.546 & 0.395 \\ \(n=400\) & 1.167 & 0.560 & 1.073 & 0.628 & \(-\)0.356 & 0.387 & \(-\)0.362 & 0.998 & 0.599 & 0.548 & 0.397 \\ \multicolumn{12}{l}{**Root Mean Squared Error**} \\ \(n=100\) & 1.171 & 0.596 & 1.072 & 0.647 & 0.359 & 0.416 & 0.365 & 0.987 & 0.595 & 0.541 & 0.393 \\ \(n=200\) & 1.166 & 0.575 & 1.072 & 0.638 & 0.356 & 0.405 & 0.362 & 0.995 & 0.599 & 0.547 & 0.396 \\ \(n=400\) & 1.170 & 0.570 & 1.077 & 0.635 & 0.357 & 0.398 & 0.363 & 0.998 & 0.600 & 0.549 & 0.397 \\ \multicolumn{12}{l}{**Rejection Frequency for \(H_{0}:\)** Spillover Parameter = 0} \\ \(n=100\) & 1.00 & 0.97 & 1.00 & 0.99 & 1.00 & 0.90 & 1.00 & 1.00 & 0.99 & 1.00 & 1.00 \\ \(n=200\) & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 0.98 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ \(n=400\) & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ \hline \multicolumn{12}{l}{**Variables in the Second-Step Regression**} \\ Spillover Variable & \(\overline{G}_{it}\) & \(\overline{G}_{it}\) & \(\overline{G}_{it-1}\) & \(\overline{G}_{it-2}\) & \(\overline{G}_{it-2}\) & \(\overline{G}_{it-2}\) & \(\overline{G}_{it-2}\) & \(\overline{\omega}_{it}\) & \(\overline{\omega}_{it}\) & \(\overline{\omega}_{it-1}\) & \(\overline{\omega}_{it-1}\) \\ DL Variable & – & \(G_{it}\) & – & \(G_{it-1}\) & – & \(G_{it-2}\) & \(G_{it-1}\) & – & \(G_{it}\) & – & \(G_{it-1}\) \\ \hline \hline \end{tabular} Notes: Reported are the results for “spillovers” (defined as either \(SP\) or \(TIL\)) from different variants of the second-step regressions in (6.4)–(6.5) estimated with \(\widehat{\omega}_{it}\) obtained in the first step using the standard proxy estimator under the assumption of exogenous Markov productivity. The data are simulated assuming linear productivity process under scenario (iii) with true \(DL=0\) and \(SP=0\). Thus, the true spillovers are zero. For the estimation of second-step regressions, the \(G\) series is generated as an \(\omega\)-controlled process. The specifications containing contemporaneous endogenous regressors are estimated using first lags as instruments. \(\overline{G}_{it}=\sum_{j}s_{ijt}G_{jt}\) and \(\overline{\omega}_{it}=\sum_{j}s_{ijt}\omega_{jt}\). Throughout, \(T=10\).
\end{table}
Table G.3: Simulation Results for Different Variants of the Two-Step Alternative Estimator
## Appendix H Data
Our data are drawn from the Chinese Industrial Enterprises Database survey conducted by China's National Bureau of Statistics (NBS). This database covers all firms with sales above 5 million yuan (about 0.6 million U.S. dollars) and spans most industries, including mining, manufacturing and public utilities. We focus on the electric machinery and equipment manufacturing industry, SIC 2-digit code 39.
The production variables are as follows. The firm's capital stock (\(K_{it}\)) is the net fixed assets deflated by the price index of investment into fixed assets. Labor (\(L_{it}\)) is measured as the total wage bill plus benefits deflated by the GDP deflator. Materials (\(M_{it}\)) are the total intermediate inputs, including raw materials and other production-related inputs, deflated by the purchasing price index for industrial inputs. The output (\(Y_{it}\)) is defined as the gross industrial output value deflated by the producer price index. The price indices are obtained from NBS and the World Bank. The four variables are measured in thousands of real RMB. In addition, the foreign equity share (\(G_{it}\)) is a bounded proportion that lies between zero and one, by construction.
We exclude observations with missing values for these variables as well as a small number of likely erroneous observations with the foreign equity share values outside the unit interval. With the sample period running from 1998 to 2007, the operational sample is an unbalanced
Figure H.2: Geographic Distribution of Firms (a) and the Foreign Equity Share (b) Note: Shown are the number of firms in the sample from each province (a) and the sample mean at the province level (b), with the darker areas corresponding to higher values.
panel of 23,720 firms with a total of 73,095 observations. Table H.4 reports summary statistics for these data.
Figure H.1(a) plots a histogram of \(G_{it}\) across firms which, expectedly, has a zero mode because the manufacturing sector in China is dominated by wholly domestic firms. Figure H.1(b) on the right plots the distribution of \(G_{it}|G_{it}>0\), i.e., for foreign-invested firms only. Overall, 81% of the firms in our sample are wholly domestically owned and 1% are pure foreign multinationals, with the remaining 18% being (partially) foreign-invested domestic firms. The map in Figure H.2(b) shows the spatial distribution of the (average) foreign equity share across regions where, consistent with one's priors, we see a heightened concentration of FDI along the coast.
## Appendix I Additional Empirical Results
Baseline Results.Figure I.3 plots empirical histograms of the point estimates of productivity effects under the baseline specification. These estimates are the same as those summarized in Table 4. Subfigure I.3(a) shows the distribution of productivity spillover elasticities \(SP\); the direct/internal and indirect/external learning effects of FDI (\(DL\) and \(TIL\), respectively) are presented in subfigure I.3(b).
We also examine the geographic distribution of productivity spillovers in Figure I.4. The map shows by-province median estimates of productivity spillovers. We observe that productivity spillovers are stronger in the highly industrialized, fast-growing provinces in the Southeast and near the coast. Interestingly, comparing Figure I.4 with the spatial distribution of the industry in Figure H.2(a) in Appendix H, we find that productivity spillovers are comparable in strength (little, if any, shade gradient) across most Southeastern and coastal provinces and thus extend beyond the Shanghai and Guangzhou areas where the majority of the electric machinery manufacturing industry is concentrated. Therefore, the evidence of productivity spillovers that we find is not just a "mechanical" function of the spatial density of data.
Endogenous Exposure to FDI.The structural identification of our model only requires that the _lagged_ foreign equity share \(G_{i,t-1}\) be weakly exogenous with respect to the future
Figure I.4: Spatial Distribution of the Productivity Spillover Effects
Note: Plotted are the sample medians at the province level, with the darker areas corresponding to higher values.
Figure I.3: Distributions of the Productivity Effects: (a) \(SP\), (b) \(DL\) and \(TIL\)
Note: Plotted are the point estimates based on our baseline specification of the productivity process in (3.3).
productivity innovation \(\zeta_{it}\). Therefore, firms in our framework may experience endogenous updates to their exposure to foreign knowledge \(G_{it}\) and even relocate based on the contemporaneous productivity \(\omega_{it}\). Nonetheless, however mild an assumption, the predeterminedness of \(G_{i,t-1}\) may still be violated if firms--or their foreign investors--can forecast their future productivity (shocks). As a robustness check against this potential violation of the weak exogeneity of the lagged foreign equity share, we re-estimate our baseline model using the inverse probability weighting (IPW) procedure and instrumentation. In our case, we need to weight only the second-stage regression since the first-stage share equation contains neither the FDI variable nor a productivity innovation. Analogously, instrumenting the lagged foreign equity share affects only the second stage. The results are summarized in Table I.5.
By means of IPW, we seek to account for the potential selection (on observables) of firms by
\begin{table}
\begin{tabular}{l c|c c c} \hline \hline & Exogenous & IPW & IV-External & IV-Lewbel \\ \hline \multicolumn{5}{c}{**Median Estimates—**} \\ \(SP\) & 0.327 & 0.477 & 0.466 & 0.414 \\ & (0.211, 0.399) & (0.413, 0.512) & (0.411, 0.585) & (0.276, 0.515) \\ \(DL\) & 0.138 & 0.128 & 0.171 & 0.191 \\ & (0.118, 0.164) & (0.105, 0.158) & (0.138, 0.241) & (0.053, 0.341) \\ \(TIL\) & 0.037 & 0.055 & 0.067 & 0.066 \\ & (0.022, 0.050) & (0.044, 0.069) & (0.056, 0.100) & (0.019, 0.126) \\ \hline \multicolumn{5}{c}{**Statistically \(>0\) (\% Obs.)—**} \\ \(SP\) & 83.84 & 98.55 & 97.17 & 79.56 \\ \(DL\) & 89.13 & 91.40 & 78.48 & 82.30 \\ \(TIL\) & 86.71 & 98.99 & 97.64 & 83.48 \\ \hline \hline \end{tabular} Notes: Reported are the results for the productivity process in (3.3) under baseline specification. The two-sided 95% bootstrap percentile confidence intervals for the median point estimates are in parentheses. Statistical positiveness is at the 5% significance level, using one-sided bootstrap confidence intervals. The “Exogenous” column corresponds to our proposed estimation procedure under the structural assumption of weak exogeneity of \(G_{i,t-1}\), with the second-stage eq. (4.8) estimated via least squares. The three models on the right allow for the violation of \(\mathbb{E}[\zeta_{it}+\eta_{it}]G_{i,t-1}]=0\) and address the potential endogeneity of lagged FDI in (4.8) via (i) inverse probability weighting [IPW], (ii) instrumentation using external IVs including coastal province dummy, province-level openness measure and their interactions with predetermined \(L_{i,t-1}\) [IV-External], (iii) instrumentation with the heteroskedasticity-based internal IV a la Lewbel (2012) [IV-Lewbel].
\end{table}
Table I.5: Estimates of the Productivity Effects: Endogeneity of Lagged FDI
their foreign investors based on their _future_ productivity. We use the stabilized IPWs, which are typically more numerically stable and produce narrower confidence bounds (Hernan & Robins, 2020). Also, note that we deal with a continuous "treatment" \(G_{it}\), which is why our approach differs from the more conventional propensity score estimation suitable for binary treatments (e.g., Imbens & Wooldridge, 2009). The IPWs for a continuous treatment \(G_{it}\) are given by \(f_{G}(G_{it})/f_{G|\mathbf{d}}(G_{it}|\mathbf{d}_{i,t+1})\), where \(f(\cdot)\) is a _pdf_. [For more, also see Hirano & Imbens (2004) and Hernan & Robins (2020).] Note that the vector of observables \(\mathbf{d}_{i,t+1}\) includes firm characteristics reflective of its performance next period, because the concern is selection into treatment based on future productivity. We include the following correlates of firm productivity that may influence its foreign exposure: size proxied by logged labor, age, state equity share, government subsidy receipts, export intensity, normalized profits, the return on assets, leverage, logged total assets as well as the East coast dummy and the time trend.
To avoid the curse of dimensionality as well as the problem of near-zero extreme values associated with nonparametric estimation of densities, we employ a parametric maximum-likelihood approach to estimate \(f_{G}\) and \(f_{G|\mathbf{d}}\). Given the bounded nature of the fractional variable \(G_{it}\in[0,1]\), we assume it is Beta-distributed. Because the Beta distribution is not trivial to estimate, we impose a few data-motivated restrictions on it. More specifically, we let \(G_{it}\sim\text{Beta}(\alpha,\beta)\) and \(G_{it}|\mathbf{d}_{i,t+1}\sim\text{Beta}(\alpha,\beta(\mathbf{d}_{i,t+1}))\) where, to match the data, we restrict the first shape parameter to a unit value (\(\alpha=1\)) and the second parameter/function \(\beta(\cdot)\) to be greater than 1 so that the distribution of \(G_{it}\) is unimodal with a zero mode in both instances.5
Footnote 5: Recall that the mode of the \(\text{Beta}(\alpha,\beta)\) distribution is 0 for \(0<\alpha\leq 1\) and \(\beta>1\).
The densities are estimated via maximum likelihood (ML), although the method of moments provides an alternative route.
Since we seek to address the potential endogeneity of lagged \(G_{i,t-1}\) in the second-stage least squares regression, we also lag the estimated IPWs to have them match the time period of "treatment." In other words, we weight each observation in the second stage by \(\widehat{f}_{G}(G_{i,t-1})/\widehat{f}_{G|\mathbf{d}}(G_{i,t-1}|\mathbf{d}_{ it})\).6
The results are summarized in the IPW column of Table I.5.
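A stylized sketch of how such stabilized weights can be obtained is shown below; the covariates, the logistic-type parameterization of \(\beta(\cdot)\) that keeps it above one, and the toy data are all illustrative assumptions rather than our exact implementation.

```python
# Sketch of stabilized inverse probability weights for the continuous
# "treatment" G using Beta(1, beta) densities, with beta(d) = 1 + exp(d'gamma)
# so that beta > 1; the covariates d and this parameterization of beta(.)
# are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(4)
n = 2000
d = np.column_stack([np.ones(n), rng.normal(size=n)])   # toy covariates d_{i,t+1}
G = rng.beta(1.0, 1.0 + np.exp(0.5 + 0.3 * d[:, 1]))    # toy treatment in [0, 1)
G_clipped = np.clip(G, 1e-10, 1 - 1e-10)                 # keep densities finite

def neg_loglik(gamma):
    b = 1.0 + np.exp(d @ gamma)                          # beta(d) > 1
    return -beta_dist.logpdf(G_clipped, 1.0, b).sum()

gamma_hat = minimize(neg_loglik, x0=np.zeros(d.shape[1]), method="BFGS").x

# Marginal density f_G: a Beta(1, b0) fitted without covariates.
b0_hat = minimize(lambda b: -beta_dist.logpdf(G_clipped, 1.0,
                                              1.0 + np.exp(b)).sum(),
                  x0=[0.0]).x
f_marg = beta_dist.pdf(G_clipped, 1.0, 1.0 + np.exp(b0_hat[0]))
f_cond = beta_dist.pdf(G_clipped, 1.0, 1.0 + np.exp(d @ gamma_hat))
sipw = f_marg / f_cond                                   # stabilized weights
```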
Table I.5 also reports the estimates of productivity effects from the second stage estimated via generalized method of moments using external instruments (the IV-External column) and Lewbel's (2012) heteroskedasticity-based internal instruments (the IV-Lewbel column) for \(G_{i,t-1}\). The external instruments include the East coast dummy, a province-level measure of openness (defined as the ratio of the sum of imports and exports to the gross domestic product) and their interactions with the firm-level lagged labor input. The two external instruments are motivated by the previous studies, such as Eichengreen & Tong (2007) and Keller & Yeaple (2009), and are selected to proxy friendly regional policies towards foreign capital, shipping costs and the overall ease of engaging in international trade and finance. We interact these instruments with the predetermined labor at the firm level to gain variation. For identification of the firm's productivity process based on heteroskedasticity, adapting Lewbel (2012) we first estimate an auxiliary equation for the endogenous regressor by regressing \(G_{i,t-1}\) on all the other exogenous variables in the \(\omega_{it}\) process, namely, \(\omega_{i,t-1}\) and \(\sum_{j\neq i}s_{ij,t-1}\omega_{j,t-1}\). The residuals from this auxiliary regression are then interacted with the demeaned \(\omega_{i,t-1}\) and used to instrument for \(G_{i,t-1}\). Here, we use the firm's predetermined lagged productivity \(\omega_{i,t-1}\) as a "\(Z\)" variable that is uncorrelated with the _product_ of productivity innovation \(\zeta_{it}\) (with which \(G_{i,t-1}\) is suspected to be correlated) and the error in the auxiliary equation for \(G_{i,t-1}\). For more details on instrumentation via heteroskedasticity, see Lewbel (2012).
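The construction of the heteroskedasticity-based instrument can be sketched as follows; the simulated data and the linear stand-in for the second-stage regression are purely illustrative, and only the instrument-building steps mirror the description above.

```python
# Schematic construction of a Lewbel (2012)-type internal instrument for
# G_{i,t-1}; the simulated data and the linear second stage are illustrative
# stand-ins for our sieve specification.
import numpy as np

rng = np.random.default_rng(5)
n = 5000
omega_lag = rng.normal(size=n)                     # own lagged productivity
peer_lag = rng.normal(size=n)                      # spatiotemporal lag of peers
G_lag = 0.3 * omega_lag + (1 + 0.5 * np.abs(omega_lag)) * rng.normal(size=n)

# Auxiliary regression of the endogenous regressor on the exogenous regressors.
X_exog = np.column_stack([np.ones(n), omega_lag, peer_lag])
coef, *_ = np.linalg.lstsq(X_exog, G_lag, rcond=None)
resid = G_lag - X_exog @ coef

# Heteroskedasticity-based instrument: residual times demeaned omega_{i,t-1}.
iv = (omega_lag - omega_lag.mean()) * resid

# Just-identified IV (2SLS) for a linear stand-in of the second-stage regression.
y = 0.5 + 0.6 * omega_lag + 0.2 * peer_lag + 0.1 * G_lag + rng.normal(size=n)
X = np.column_stack([np.ones(n), omega_lag, peer_lag, G_lag])
Z = np.column_stack([np.ones(n), omega_lag, peer_lag, iv])
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)
```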
Controlling for the potential endogeneity of the FDI exposure, we continue to find strong empirical evidence in support of significantly positive productivity spillovers for 80% of firms or more. At least 78% of manufacturers benefit from significant productivity boosts associated with receiving FDI. The changes in the effect sizes are not out of the ordinary either, and the rank correlation coefficient of the point estimates of productivity effects across these estimators is at least as high as 0.71. All in all, our findings remain qualitatively unchanged.
Bidimensional Spillovers.In our main analysis, per the productivity process (3.3), spillovers from all spatially proximate peers in the industry have the potential to affect the recipient firm's productivity in the same manner, no matter the exposure of these peers to foreign knowledge. Given the documented impact of FDI on firm productivity, it may also be of interest to allow for heterogeneity in the external productivity spillovers from the peers _conditional_ on their FDI status. To this end, we adapt our methodology to allow a more general evolution process for productivity that permits bidimensional spillovers by means of two spatiotemporal lags. Thus, we now consider the following evolution process of firm productivity:
\[\omega_{it}=\mathbb{E}\left[\omega_{it}\left|\ \omega_{i,t-1},G_{i,t-1},\sum_{j( \neq i)}s^{0}_{ij,t-1}\omega_{j,t-1},\sum_{j(\neq i)}s^{1}_{ij,t-1}\omega_{j,t -1}\right]+\zeta_{it},\right.\] (I.1)
with the distinction between peer weights \(\{s^{0}_{ij,t}\}\) and \(\{s^{1}_{ij,t}\}\) based on the peers' FDI status:
\[s^{0}_{ij,t}=\frac{\mathbb{1}\left\{G_{jt}=0\text{ and }(j,t)\in \mathcal{L}(i,t)\right\}}{\sum_{k(\neq i)=1}^{n}\mathbb{1}\left\{G_{kt}=0 \text{ and }(k,t)\in\mathcal{L}(i,t)\right\}},\] (I.2) \[s^{1}_{ij,t}=\frac{\mathbb{1}\left\{G_{jt}>0\text{ and }(j,t)\in \mathcal{L}(i,t)\right\}}{\sum_{k(\neq i)=1}^{n}\mathbb{1}\left\{G_{kt}>0 \text{ and }(k,t)\in\mathcal{L}(i,t)\right\}}.\] (I.3)
The cross-firm productivity spillovers are now bidimensional. The null intersection of the two sets of peers ensures separable identifiability of heterogeneous spillovers from (i) the wholly domestically owned peers \(SP^{0}_{it}=\partial\mathbb{E}[\omega_{it}\,|\,\cdot\,]\big{/}\partial\sum_{j(\neq i)}s^{0}_{ij,t-1}\omega_{j,t-1}\) and (ii) the foreign-invested peers \(SP^{1}_{it}=\partial\mathbb{E}[\omega_{it}\,|\,\cdot\,]\big{/}\partial\sum_{j(\neq i)}s^{1}_{ij,t-1}\omega_{j,t-1}\).
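For one cross-section, the two weighting schemes in (I.2)-(I.3) can be built as in the following sketch; the peer-group identifier and the toy FDI shares are placeholders for the province-level definition of \(\mathcal{L}(i,t)\) used in our application.

```python
# Sketch of building the peer-weight matrices s^0 and s^1 from (I.2)-(I.3)
# for a single cross-section; `group` is a placeholder for the province-level
# peer-group identifier that defines L(i, t).
import numpy as np

rng = np.random.default_rng(6)
n = 8
group = rng.integers(0, 2, size=n)                       # toy peer groups
G = np.where(rng.random(n) < 0.7, 0.0, rng.random(n))    # toy FDI shares

same_group = (group[:, None] == group[None, :]) & ~np.eye(n, dtype=bool)
dom = same_group & (G[None, :] == 0.0)                   # wholly domestic peers
fdi = same_group & (G[None, :] > 0.0)                    # foreign-invested peers

def row_normalize(mask):
    counts = mask.sum(axis=1, keepdims=True)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(counts > 0, mask / counts, 0.0)

s0 = row_normalize(dom)   # weights over non-FDI peers, rows sum to 1 (or 0)
s1 = row_normalize(fdi)   # weights over FDI peers
```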
Table I.6 summarizes point estimates of bidimensional productivity spillovers under our baseline specification, whereby the firm's peer group \(\mathcal{L}(i,t)\) is defined at the level of the same province and the entire 2-digit industry. Just like in the case of our main model with the unidimensional cross-firm dependence, we continue to find substantial and positive productivity spillovers in the industry. However, by disentangling the peer effects of wholly domestically owned and foreign-invested neighbors, we find that the former group has a significantly larger effect on its neighbors. The median spillover elasticity from fully domestic firms \(SP^{0}\) is 0.30, whereas the counterpart estimate \(SP^{1}\) from the foreign-invested peers is 0.17 only. Regardless of the peer group, the productivity spillovers are statistically positive for the overwhelming ma
jority of firms in the industry (83% or more). This is on par with the extent of spillovers that we have found in our main model with unidimensional spillovers.
The documented heterogeneity in the magnitudes of spillovers across the two types of peers suggests that it is relatively "easier" for Chinese manufacturers to learn from other domestic firms that are _not_ recipients of FDI. This may be because foreign-invested firms are more protective of their newly adopted foreign technologies/knowledge, which makes it more difficult to learn from them. At the same time, it may also be that learning from these foreign-invested firms is simply more difficult because their practices are too advanced and biased towards more productive/efficient firms in the first place. If so, firms that are already foreign-invested--and, hence, are more productive due to the direct learning effects of FDI--stand to enjoy larger spillovers from their peers who have also received foreign investments. This is corroborated by the evidence in Table I.7: the coefficient on the firms' _own_ FDI exposure is negative for \(SP^{0}\) and positive for \(SP^{1}\).
The results in Table I.7 continue to indicate that more productive firms have less absorptive capacity to learn from their peers. For both the \(SP^{0}\) and \(SP^{1}\) spillovers, the effect size increases with the average productivity of the peers from whom the spillovers originate. On the other hand, the strength of spillovers from one peer group declines with the average productivity of the other group (note the negative coefficient on the cross-peer productivity).
\begin{table}
\begin{tabular}{l c c c|c} \hline \hline & \multicolumn{3}{c}{_Point Estimates_} & \multicolumn{1}{c}{_Statistically_ \textgreater{} 0} \\ Estimand & 1st Qu. & Median & 3rd Qu. & (\% Obs.) \\ \hline \(SP^{0}\) & 0.153 & 0.300 & 0.428 & 82.58 \\ & (0.090, 0.180) & (0.198, 0.349) & (0.302, 0.504) & \\ \(SP^{1}\) & 0.158 & 0.172 & 0.185 & 98.89 \\ & (0.119, 0.188) & (0.129, 0.204) & (0.137, 0.216) & \\ \hline \hline \end{tabular} Notes: Reported are the semiparametric estimates of bidimensional productivity spillovers from (I.1) under our baseline specification. The left panel summarizes point estimates of \(SP^{0}_{it}\) and \(SP^{1}_{it}\) with the corresponding two-sided 95% bootstrap percentile confidence intervals in parentheses. The last column reports the share of observations for which the point estimates are statistically positive at the 5% significance level using one-sided bootstrap percentile confidence intervals.
\end{table}
Table I.6: Estimates of Bidimensional Productivity Spillovers
Taken together, these two findings suggest some substitutability between learning from the two groups along with the recipient firm's finite capacity to absorb such spillovers from its peers in a given period. Thus, if the foreign-invested peers improve their productivity, the firm starts learning more from them and less from the non-foreign-invested peers, and vice versa.
To conclude, although we find evidence of heterogeneity in the strength of spillovers from wholly-domestic versus foreign-invested peers (with those from the former being relatively stronger), in the grand scheme of things, our main findings stay the same: productivity spillovers are positive and significant for most firms in the industry.
Asymmetric Spillovers.As we explain in Appendix D, the peer weighting scheme that we use in our analysis treats cross-firm spillovers symmetrically in that all members of a peer group affect each other's productivity. That is, each \(i\)th firm's productivity is influenced by the average productivity of all its peers: those that are more _and_ those that are less productive than the firm \(i\) itself. But should one choose to regulate the direction of productivity spillovers by restricting
\begin{table}
\begin{tabular}{l c c} \hline \hline & \(SP^{0}\) & \(SP^{1}\) \\ \hline \(\omega_{i,t-1}\) & \(-0.720\) & \(-0.070\) \\ & \((-0.882,-0.571)\) & \((-0.125,-0.019)\) \\ \(G_{i,t-1}\) & \(-0.185\) & \(0.036\) \\ & \((-0.355,-0.073)\) & \((-0.151,0.130)\) \\ \(\sum_{j}s^{0}_{i,j,t-1}\omega_{j,t-1}\) & \(1.206\) & \(-0.147\) \\ & \((0.591,1.459)\) & \((-0.276,-0.023)\) \\ \(\sum_{j}s^{1}_{i,j,t-1}\omega_{j,t-1}\) & \(-0.147\) & \(0.144\) \\ & \((-0.276,-0.023)\) & \((0.101,0.178)\) \\ \hline \hline \end{tabular} Notes: Reported are the parameter estimates for the \(SP^{0}\) and \(SP^{1}\) functions derived from the polynomial approximation of the conditional mean of \(\omega_{it}\) in the productivity process formulation with bidimensional spillovers in (I.1). Two-sided 95% bootstrap percentile confidence intervals in parentheses. These correspond to our baseline specification, with (i) each firm’s peers restricted to the firms located in the same province and the industrial scope of spillovers defined at the level of the entire 2-digit industry, (ii) the technical change flexibly controlled for using a series of year effects.
\end{table}
Table I.7: Heterogeneity in Bidimensional Productivity Spillovers
them to occur _from_ more productive _to_ less productive firms, our framework can be modified to accommodate that, albeit with additional timing assumptions.
We estimate such an asymmetric specification given in (D.2). Table I.8 summarizes the corresponding estimates of productivity effects. Comparing these results with our main estimates in Table 4, we see that the estimates of direct learning stay by and large unchanged, as expected. While comparable at the median, the asymmetric spillover effect estimates exhibit a much smaller variation in effect size (perhaps, because the peer pool is more homogeneous now) but are _as_ prevalent as they are when we model them symmetrically.
|
2308.01532 | MA-FSAR: Multimodal Adaptation of CLIP for Few-Shot Action Recognition | Applying large-scale vision-language pre-trained models like CLIP to few-shot
action recognition (FSAR) can significantly enhance both performance and
efficiency. While several studies have recognized this advantage, most of them
resort to full-parameter fine-tuning to make CLIP's visual encoder adapt to the
FSAR data, which not only costs high computations but also overlooks the
potential of the visual encoder to engage in temporal modeling and focus on
targeted semantics directly. To tackle these issues, we introduce MA-FSAR, a
framework that employs the Parameter-Efficient Fine-Tuning (PEFT) technique to
enhance the CLIP visual encoder in terms of action-related temporal and
semantic representations. Our solution involves a Fine-grained Multimodal
Adaptation, which is different from the previous attempts of PEFT in regular
action recognition. Specifically, we first insert a Global Temporal Adaptation
that only receives the class token to capture global motion cues efficiently.
Then these outputs integrate with visual tokens to enhance local temporal
dynamics by a Local Multimodal Adaptation, which incorporates text features
unique to the FSAR support set branch to highlight fine-grained semantics
related to actions. In addition to these token-level designs, we propose a
prototype-level text-guided construction module to further enrich the temporal
and semantic characteristics of video prototypes. Extensive experiments
demonstrate our superior performance in various tasks using minor trainable
parameters. | Jiazheng Xing, Chao Xu, Mengmeng Wang, Guang Dai, Baigui Sun, Yong Liu, Jingdong Wang, Jian Zhao | 2023-08-03T04:17:25Z | http://arxiv.org/abs/2308.01532v2 | # Multimodal Adaptation of CLIP for Few-Shot Action Recognition
###### Abstract
Applying large-scale pre-trained visual models like CLIP to few-shot action recognition tasks can benefit both performance and efficiency. Utilizing the "pre-training, fine-tuning" paradigm makes it possible to avoid training a network from scratch, which can be time-consuming and resource-intensive. However, this method has two drawbacks. First, the limited labeled samples in few-shot action recognition necessitate minimizing the number of tunable parameters to mitigate over-fitting, while full fine-tuning increases resource consumption and may disrupt the generalized representation of the model. Second, the video's extra temporal dimension challenges effective temporal modeling in few-shot recognition, while pre-trained visual models are usually image models. This paper proposes a novel method called Multimodal Adaptation of CLIP (MA-CLIP) to address these issues. It adapts CLIP for few-shot action recognition by adding lightweight adapters, which can minimize the number of learnable parameters and enable the model to transfer across different tasks quickly. The adapters we design can combine information from video-text multimodal sources for task-oriented spatiotemporal modeling, which is fast, efficient, and has low training costs. Additionally, based on the attention mechanism, we design a text-guided prototype construction module that can fully utilize video-text information to enhance the representation of video prototypes. Our MA-CLIP is plug-and-play and can be used with any few-shot action recognition temporal alignment metric.
## 1 Introduction
Few-shot action recognition aims to quickly learn new action categories using limited labeled samples. Compared with general action recognition, the main distinction of few-shot action recognition lies in the extremely small amount of labeled data in each task and the variety of task types. Therefore, few-shot action recognition requires models to possess the ability to quickly transfer between different tasks, making this work extremely difficult. Previous methods [3, 61, 56, 33, 38, 49, 46, 22, 14] mainly used metric-based framework and episode training to solve the transfer to new classes. However, despite the short training time for each task, it requires a large amount of training on similar tasks to enable the model to have strong generalization capabilities across various tasks. Therefore, relying solely on the above solutions still requires the model to spend much time training on different datasets, which somewhat hinders its application in industry.
With the development of computer vision, more and more large visual foundation models [34, 54, 43, 39, 15, 19] have emerged. The key to these models is that they provide an excellent pre-trained model, which can be fine-tuned for
Figure 1: Performance comparison of different few-shot action recognition methods under the 5-way 1-shot settings on the SSV2-Small dataset, including our **MA-CLIP**, OTAM [3], TRX [33], STRM [38], HyRSM [46], and CLIP-FSAR [44]. Bubble or star size indicates the recognition accuracy. Our **MA-CLIP** achieves the highest recognition accuracy with the least number of tunable parameters.
downstream tasks to provide strong transferability. By utilizing large foundation models such as CLIP for downstream tasks such as action recognition [42, 30], segmentation [35, 28, 50], object detection [10, 58], etc., the "pre-training, fine-tuning" paradigm leverages the power of robust pre-trained models, thus eliminating the need to train a network from scratch and obtaining impressive performance. Due to the powerful generalization ability of the CLIP pre-trained model, applying it to few-shot action recognition tasks can significantly reduce the number of similar training tasks during the fine-tuning stage and thus save training time. Furthermore, CLIP is a multimodal model, and for few-shot action recognition tasks with limited visual samples, introducing the additional textual features can serve as a powerful aid. CLIP-FSAR [44] follows this approach using CLIP [34] and has achieved good results. However, this method has at least two drawbacks. First, each task has limited trainable labeled samples for few-shot action recognition, so it is necessary to minimize the number of training parameters as much as possible to avoid the overfitting phenomenon. Meanwhile, complete fine-tuning increases the consumption of computational resources and time and may disrupt the well-generalized representation of the foundation model. Second, large foundation models are mostly image pre-trained models, while videos have an extra temporal dimension compared to images. One of the challenges in few-shot recognition is how to perform temporal modeling effectively. We applied CLIP to perform zero-shot action recognition and found that while it performed well on spatial datasets, its performance was not ideal on temporal datasets, highlighting the importance of temporal modeling. CLIP-FSAR only uses an additional temporal module to extend the image model, which cannot fully integrate temporal information in videos.
To overcome the above drawbacks, we followed a new approach called parameter-efficient fine-tuning, i.e., PEFT, that efficiently utilizes large foundation models. PEFT was initially applied in natural language processing (NLP) [12, 21, 55, 13] and has made remarkable progress in computer vision (CV) [1, 17, 16, 51, 47] in recent years. The core idea is to keep the large pre-trained foundation model frozen to achieve robust performance while only fine-tuning a small number of extra parameters. This idea is very well-suited for few-shot action recognition, which can minimize the number of learnable parameters and enable the model to possess the ability to transfer across different tasks quickly. In addition, our task involves video understanding, while most large foundation models are based on images lacking temporal understanding. To address this issue, adding a small number of trainable parameters for temporal modeling in the large foundation model proves effective, such as AIM [51] and Vita-CLIP [47].
Based on these findings, we propose a novel method for few-shot action recognition, dubbed **MA-CLIP**, short for **M**ultimodal **A**daptation of **CLIP**. Specifically, we adopt the idea of PEFT and choose CLIP [34] as our baseline due to its multimodal capability. We freeze CLIP's pre-trained image and text encoders during fine-tuning and add some lightweight adapters [12, 51] and tunable parameters. As CLIP is a foundation model for image-text pairs, the adapters we design can combine the bi-modal information of the videos (spatiotemporal information) and texts (semantic information) for task-oriented modeling. Meanwhile, we design a text-guided prototype construction module based on the attention mechanism to fully utilize the video-text multimodal information and enhance the representation of video class prototypes. Finally, our MA-CLIP is plug-and-play and can be used with any few-shot action recognition temporal alignment metric, i.e., video matcher. Extensive experiments unequivocally demonstrate that our method attains exceptional performance while employing the fewest tunable parameters, as depicted in Fig.1.
In summary, we make the following contributions:
* We propose a novel method to adapt CLIP for few-shot action recognition by adding lightweight adapters and relatively few tunable parameters. The adapters we designed can combine information from video-text multimodal sources for task-oriented modeling, which is fast, efficient, and has low training costs.
* Based on the attention mechanism, we design a text-guided prototype construction module that can fully utilize video-text information to further enhance the representation of video prototypes.
* Our plug-and-play method can be used with any few-shot action recognition temporal alignment metric. Experiments demonstrate that our method performs excellently with any such metric in various task settings.
* Extensive experiments on five widely used datasets have shown that our method can achieve outstanding performance with minor trainable parameters.
## 2 Related Works
### Few-shot Learning
Few-shot learning leverages the episodic training paradigm, wherein a large number of related tasks, each containing only a limited number of labeled training samples, are used in place of a single substantial volume of labeled training data. In recent years, research on few-shot learning can be mainly classified into adaptation-based and metric-based methods. The former [8, 31, 26] aims to find a network initialization that can be fine-tuned for unknown tasks using limited labeled data, called _gradient by gradient_. The
latter [36, 40, 53, 52, 6, 23] aims to acquire knowledge of feature space and compare task features using various matching strategies, referred to as _learning to compare_.
### Few-shot Action Recognition
The core concept of few-shot action recognition is akin to few-shot learning, but including the temporal dimension amplifies the problem's difficulty. Despite the potential benefits of adaptation-based methods (e.g., MetaU-VFS [32]), these approaches have received limited attention in few-shot action recognition due to their high computational requirements and extensive experimental time. Instead, existing research predominantly emphasizes metric-based learning approaches with varying focuses. On the one hand, some methods focus on class prototype matching strategies. TRX [33] matches each query sub-sequence with all sub-sequences in the support set, facilitating correspondences between different videos. OTAM [3] introduces a temporal alignment module to calculate the distance value between query and support set videos. On the other hand, some approaches aim to enhance feature representation. For instance, SloshNet [49] leverages a feature fusion architecture search module to exploit low-level spatial features, combining it with long-term and short-term temporal modeling modules to encode complementary global and local temporal representations. STRM [38] adopts local and global enrichment modules for spatiotemporal modeling, while HyRSM [46] utilizes hybrid relation modeling to learn task-specific embeddings. With the development of large foundation visual models, how to apply them in downstream tasks is receiving increasing attention. CLIP-FSAR [44] makes attempts using the CLIP pre-trained model and designs a video-text contrastive objective and a prototype modulation, achieving good results. However, completely fine-tuning the visual encoder would increase computational costs and risk catastrophic forgetting. Additionally, CLIP is an image pre-trained model that CLIP-FSAR does not extend the visual encoder for temporal modeling. Our approach will address the problems encountered in the CLIP-FSAR mentioned above.
### Parameter-efficient Fine-tuning (PEFT) for Vision Models
With the development of an increasing number of large-scale visual foundational models [34, 54, 43, 39, 15, 19], more and more attention is focused on parameter-efficient fine-tuning, i.e., PEFT. PEFT, initially employed in natural language processing (NLP) [12, 21, 55, 13], has exhibited impressive advancements in computer vision (CV) in recent times. The fundamental concept revolves around preserving the immutability of the extensive pre-trained models to ensure consistent and reliable performance, focusing solely on refining a limited set of additional parameters. The application of PEFT in computer vision can be broadly categorized into two main approaches: Adapter-based and Prompt-tuning-based. The design of the Adapter originated from [12]. It adds two adapters with residual structures in each transformer layer to fine-tune the model. During the fine-tuning process, the parameters of the original transformer are frozen, and only the parameters of the adapter layers are learned. Inspired by this, AIM [51] applied Adapter technology in action recognition. In each ViT [7] (Vision Transformer) block, AIM designed three adapters for spatial, temporal, and joint adaptation, achieving excellent results. Prompt-tuning refers to the flexible adjustment of prompts, which significantly impacts the final performance of the model. The pioneering use of prompt-tuning in the visual domain was by VPT [16]. It introduced learnable prompts within ViT while freezing the other training parameters in the network and achieved impressive results in downstream tasks related to image processing. Inspired by this, Vita-CLIP [47] designed prompt-tuning specifically for videos, which proposed the video summary tokens, frame-level prompts, and video-level prompts, achieving impressive results. Due to Adapter's simplicity and AIM's success in action recognition, we choose the Adapter-based method as our PEFT method.
## 3 Method
### Problem Formulation
In the case of few-shot action recognition, the goal is to categorize an unlabeled query video into one of the \(M\) action categories in the support set, with only \(K\) labeled samples available per action class. This can be considered an \(M\)-way \(K\)-shot task. Comparable to prior research, we follow the episodic training framework outlined by [3, 61, 56, 33, 38, 49, 46, 22, 14], where episodes are chosen randomly from a vast pool of collected data. In each episode, we assume that the set \(\mathcal{S}\) comprises \(M\times K\) samples originating from \(M\) different action classes. Additionally, \(S_{k}^{m}=\{s_{k1}^{m},s_{k2}^{m},\cdots,s_{kT}^{m}\}\) denotes the \(k\)-th video in class \(m\in\{1,\cdots,M\}\) randomly sampled with \(T\) frames. Finally, the query video is represented as \(Q=\{q_{1},q_{2},\cdots,q_{T}\}\), also sampled with \(T\) frames.
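A minimal sketch of sampling one such \(M\)-way \(K\)-shot episode is given below; the dataset structure, frame counts, and the TSN-style uniform frame sampling are illustrative assumptions rather than the exact data pipeline.

```python
# Minimal sketch of sampling one M-way K-shot episode; `videos_by_class` is a
# hypothetical mapping from class name to a list of video frame counts, and
# frame indices follow a TSN-style uniform segment sampling.
import random

def sample_episode(videos_by_class, M=5, K=1, Q=1, T=8):
    classes = random.sample(sorted(videos_by_class), M)
    support, query = [], []
    for label, cls in enumerate(classes):
        picks = random.sample(videos_by_class[cls], K + Q)
        for j, num_frames in enumerate(picks):
            seg = num_frames / T
            idx = [int(seg * t + random.random() * seg) for t in range(T)]
            item = {"class": cls, "label": label, "frame_idx": idx}
            (support if j < K else query).append(item)
    return support, query

# Toy usage: 10 classes, each with a few videos described by their frame counts.
toy = {f"class_{c}": [random.randint(40, 120) for _ in range(6)] for c in range(10)}
support_set, query_set = sample_episode(toy)
```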
### Architecture Overview
We choose CLIP [34] as the pre-trained foundation model, a dual-encoder structure composed of visual and text encoders. CLIP can simultaneously encode input images and texts and map them into the same vector space. It can perform cross-modal reasoning and achieve mutual conversion between images and texts. In few-shot action recognition, since each task contains only limited labeled video samples, the semanticity of the videos can be greatly enhanced by mining the semantic information of the label texts and associating them with the corresponding video
features. CLIP has been pre-trained on 400 million web-crawled image-text pairs, making the model highly generalizable. We choose the ViT [7] architecture in CLIP as our visual encoder. In addition, to align with textual descriptions during pre-training, input texts are usually utilized with prompt templates (the selection method of prompt templates is detailed in Sec.4.1.2). To minimize the number of trainable parameters in the model as much as possible, allowing the model to possess the ability to transfer across different tasks rapidly, we froze the pre-trained image and text encoders during fine-tuning and added some learnable lightweight adapters.
We present our overall architecture in Fig.2. For the frame-selecting strategy, we employ the approach previously used in TSN [41], which involves dividing the input video sequence into \(T\) segments and extracting snippets from each segment. We will focus on a specific scenario for simplicity and convenience: the 5-way 1-shot problem and the query set \(\mathcal{Q}\) with a single video. In this way, the query video \(Q=\{q_{1},q_{2},\cdots,q_{T}\}\) and the class support set videos \(S^{m}=\{s_{1}^{m},s_{2}^{m},\cdots,s_{T}^{m}\}\left(S^{m}\in\mathcal{S}=\left\{ S^{1},S^{2},\cdots,S^{5}\right\}\right)\) pass through the visual encoder (TMA) to obtain the query feature \(\mathbf{F}_{\mathcal{Q}}\) and the support features \(\mathbf{F}_{\mathcal{S}}^{m}(\mathbf{F}_{\mathcal{S}}^{m}\in\mathbf{F}_{ \mathcal{S}})\) in each episode. Similarly, the text descriptions \(C^{m}\)(\(C^{m}\in\mathcal{C}=\left\{C^{1},C^{2},\cdots,C^{5}\right\}\) ) pass through the text encoder to obtain text features \(\mathbf{F}_{\mathcal{T}}^{m}\)(\(\mathbf{F}_{\mathcal{T}}^{m}\in\mathbf{F}_{\mathcal{T}}\)). Then we apply global average pooling operation to the features \(\mathbf{F}_{\mathcal{S}}\) and \(\mathbf{F}_{\mathcal{Q}}\) to obtain features \(\mathbf{F}_{\mathcal{S}}^{avg}\) and \(\mathbf{F}_{\mathcal{Q}}^{avg}\). The Kullback-Leibler divergence losses \(\mathcal{L}_{\mathcal{S}2T}\) and \(\mathcal{L}_{\mathcal{Q}2T}\) are obtained by the cosine similarity metric between \(\mathbf{F}_{\mathcal{S}}^{avg}\), \(\mathbf{F}_{\mathcal{Q}}^{avg}\), and the text feature \(\mathbf{F}_{\mathcal{T}}\), which adapts CLIP to the few-shot action recognition task. Meanwhile, the probability distribution \(\mathbf{p}_{\mathcal{Q}2\mathcal{T}}\) is obtained using the cosine similarity metric. Then, features \(\mathbf{F}_{\mathcal{S}}\) and \(\mathbf{F}_{\mathcal{Q}}\) are passed through a text-guided prototype construction module (TPCM) with weight sharing to obtain the final features before the prototype matching process, denoted by \(\widetilde{\mathbf{F}_{\mathcal{S}}}\) and \(\widetilde{\mathbf{F}_{\mathcal{Q}}}\). Finally, the enhanced features are fed into the prototype matching metric to obtain the probability distribution \(\mathbf{p}_{\mathcal{Q}2\mathcal{S}}\) and loss \(\mathcal{L}_{\mathcal{Q}2\mathcal{S}}\).
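As a rough sketch of the video-text branch described above (a minimal PyTorch example; the feature shapes, the temperature, and the toy labels are assumptions, and the actual objectives also involve the support/query split and the prototype matching metric):

```python
# Hedged sketch of the cosine-similarity logits between pooled video features
# and the M class text features (e.g., p_{Q2T}); shapes, temperature and toy
# labels are illustrative assumptions.
import torch
import torch.nn.functional as F

M, T, D = 5, 8, 512
video_feats = torch.randn(M + 1, T, D)      # 5 support videos + 1 query, frame-level
text_feats = torch.randn(M, D)              # one text feature per support class
labels = torch.arange(M + 1) % M            # toy ground-truth class indices

v = F.normalize(video_feats.mean(dim=1), dim=-1)   # global average pooling + L2 norm
t = F.normalize(text_feats, dim=-1)
logits = v @ t.t() / 0.07                          # cosine similarity / temperature

p_video2text = logits.softmax(dim=-1)              # rows give p_{S2T} and p_{Q2T}
# With one-hot targets, the KL objective reduces to cross-entropy up to a constant.
loss_v2t = F.cross_entropy(logits, labels)
```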
### Task-oriented Multimodal Adaptation (TMA)
To minimize the number of tunable parameters as much as possible to avoid the overfitting phenomenon and fully leverage the spatiotemporal information in the videos and the semantic information in the texts, we propose a new method to adapt image pre-trained models for few-shot action recognition by adding lightweight adapters. The adapters we design can combine the bi-modal information of the videos and texts for task-oriented modeling.
We choose ViT [7] as our visual encoder. Specifically, consider a video clip \(V\in\mathbb{R}^{T\times H\times W\times 3}\), where \(H,W\) represent the spatial size and \(T\) represents the number of frames. Each frame \(t\in\{1\cdots T\}\) is divided into \(N\) non-overlapping square patches \(\left\{\textbf{x}_{t,i}\right\}_{i=1}^{N}\in\mathbb{R}^{P^{2}\times 3}\) of size \(P\times P\), with the total number of patches being \(N=HW/P^{2}\). The patches \(\left\{\textbf{x}_{t,i}\right\}_{i=1}^{N}\in\mathbb{R}^{P^{2}\times 3}\) are then projected into the patch embeddings \(\textbf{x}_{t,p}\in\mathbb{R}^{N\times D}\) through a linear projection \(\textbf{E}\in\mathbb{R}^{3P^{2}\times D}\). An additional learnable [class] token \(\textbf{x}_{cls}\in\mathbb{R}^{D}\) is prepended to the embedded patch sequence \(\textbf{x}_{t,p}\) for each frame as \(\textbf{x}_{t}^{(0)}=\left[\textbf{x}_{cls};\textbf{x}_{t,p}\right]\in\mathbb{ R}^{(N+1)\times D}\). The final per-frame token sequence fed into the ViT blocks is given by:
\[\textbf{z}_{t}^{(0)}=\textbf{x}_{t}^{(0)}+\textbf{e}_{pos} \tag{1}\]
where \(\textbf{e}_{pos}\in\mathbb{R}^{(N+1)\times D}\) represents the spatial position encoding. As shown in Fig.3(b), each ViT block consists of several components, including a multiheaded self-attention (MSA) mechanism, a multilayer perceptron (MLP) layer, the layer normalization (LN), and skip connections. Formally, the computation of a ViT block can be formulated as:
\[\textbf{z}_{t}^{\prime(l)}=\textbf{z}_{t}^{(l-1)}+\mathrm{MSA}\left(\mathrm{ LN}\left(\textbf{z}_{t}^{(l-1)}\right)\right) \tag{2}\]
\[\textbf{z}_{t}^{(l)}=\textbf{z}_{t}^{\prime(l)}+\mathrm{MLP}\left(\mathrm{ LN}\left(\textbf{z}_{t}^{\prime(l)}\right)\right) \tag{3}\]
where \(\textbf{z}_{t}^{(l-1)}\) and \(\textbf{z}_{t}^{(l)}\) represent the per-frame input and output of the \(l\)-th ViT block, respectively. The video-level representation at the \(l\)-th layer can then be written as \(\textbf{z}^{(l)}=\left[\textbf{z}_{1}^{(l)}\cdots\textbf{z}_{t}^{(l)}\cdots \textbf{z}_{T}^{(l)}\right]\).
Inspired by vision parameter-efficient fine-tuning techniques [1, 17, 16, 51, 47], we follow their idea of keeping the large pre-trained foundation model frozen to achieve robust performance while fine-tuning only a small number of extra parameters. Due to Adapter's [12] simplicity and AIM's [51] success in action recognition, we propose a task-oriented multimodal adaptation based on Adapter, which can be divided into three parts: temporal adaptation, multimodal adaptation, and joint adaptation. As shown in Fig.3(a), the Adapter has a straightforward structure that includes two fully connected layers (FC), an activation layer, and a residual connection. The first FC layer maps the input to a lower dimension, while the second FC layer maps it back to the original dimension. The support and query set branches' network structures are represented in Fig.3(c) and Fig.3(d), respectively. Since the label information of the support set data is known while that of the query set is unknown in each task, their network structures differ accordingly. Moreover, inspired by AIM [51], we reuse the pre-trained self-attention layer in the image model for temporal and multimodal adaptation to minimize the number of trainable parameters. By changing the dimensions of the input, the self-attention layer can be used in different ways. In what follows, we will introduce the three types of adaptation respectively.
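A minimal PyTorch sketch of such a bottleneck Adapter is given below; the bottleneck ratio and the GELU activation are assumptions for illustration rather than the exact configuration.

```python
# Hedged sketch of the bottleneck Adapter described above (two fully connected
# layers, an activation and a residual connection); ratio and activation are
# illustrative assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim: int, ratio: float = 0.25, skip_connect: bool = True):
        super().__init__()
        hidden = int(dim * ratio)
        self.skip_connect = skip_connect
        self.down = nn.Linear(dim, hidden)   # project to a lower dimension
        self.act = nn.GELU()
        self.up = nn.Linear(hidden, dim)     # project back to the original dimension

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.up(self.act(self.down(x)))
        return x + out if self.skip_connect else out

# Only adapter parameters would be trained; the frozen ViT weights stay fixed.
tokens = torch.randn(8, 197, 768)            # (frames, patches + [class], dim)
adapted = Adapter(768)(tokens)
```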
#### 3.3.1 Temporal Adaptation
Since videos have an additional temporal dimension compared to images, temporal modeling is crucial for video tasks. Based on this, we design temporal adaptation for temporal modeling. Compared to AIM, we only use the [class] token \(\mathbf{x}_{cls}\) as the input for temporal modeling, greatly reducing the computational costs. Specifically, for the \(l^{th}\) layer, given the input video [class] token embedding \(\mathbf{x}_{cls}^{(l-1)}\in\mathbb{R}^{T\times 1\times D}\), we reshape it into \(\mathbf{x}_{TA}^{(l-1)}\in\mathbb{R}^{1\times T\times D}\). Then we feed \(\mathbf{x}_{TA}^{(l-1)}\) into temporal adaptation to learn the temporal relationships between multiple frames, given by:
\[\mathbf{x}_{TA}^{(l)}=\mathbf{x}_{TA}^{(l-1)}+\mathrm{Adapter}\left(\text{T- MSA}\left(\mathrm{LN}\left(\mathbf{x}_{TA}^{(l-1)}\right)\right)\right) \tag{4}\]
where \(\mathbf{x}_{TA}^{(l-1)}\) and \(\mathbf{x}_{TA}^{(l)}\) denote the temporal adaptation input and output of the \(l^{th}\) transformer block. Self-attention operates on the temporal dimension \(T\) to explore the temporal relationships between multiple frames. Following AIM [51], the Adapter keeps the same configuration as illustrated in Fig.3(a), except that its skip connection is removed so that temporal adaptation has no influence during the initial training phase.
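As an illustration, Eq. (4) can be sketched as follows, assuming the frozen pre-trained self-attention layer behaves like an `nn.MultiheadAttention` module with `batch_first=True` and that `adapter` is an Adapter with its skip connection disabled; processing one video at a time keeps the shapes explicit.

```python
def temporal_adaptation(x_cls, frozen_msa, frozen_ln, adapter):
    """Temporal adaptation of Eq. (4) for a single video.

    x_cls:      (T, 1, D) tensor of per-frame [class] tokens.
    frozen_msa: the pre-trained self-attention layer, reused with frozen weights
                (assumed to be nn.MultiheadAttention with batch_first=True).
    frozen_ln:  the pre-trained LayerNorm of the same block.
    adapter:    a trainable Adapter whose skip connection is disabled (Sec. 3.3.1).
    """
    x = x_cls.permute(1, 0, 2)                 # (1, T, D): attention now runs over the T frames
    h = frozen_ln(x)
    attn, _ = frozen_msa(h, h, h)              # T-MSA over the temporal dimension
    x = x + adapter(attn)                      # Eq. (4): only the adapter output updates x
    return x.permute(1, 0, 2)                  # back to (T, 1, D)
```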
#### 3.3.2 Multimodal Adaptation
After the temporal adaptation, we aim to integrate spatiotemporal information with text semantic information, performing multimodal adaptation to achieve task-oriented feature enhancement. Specifically, we feed the text description corresponding to the video \(C^{m}\in\mathcal{C}\) into the text encoder to get text features \(\mathbf{F}_{\mathcal{T}}^{m}\left(\mathbf{F}_{\mathcal{T}}^{m}\in\mathbf{F}_{\mathcal{T}}\right)\), where the text encoder is kept frozen to avoid extra computation cost and the catastrophic forgetting phenomenon. To facilitate the fusion of multimodal data, we process the text features \(\mathbf{F}_{\mathcal{T}}^{m}\in\mathbb{R}^{1\times D^{\prime}}\) as follows:
\[\mathbf{F}_{\mathcal{T}}^{MA}=\mathrm{Repeat}\left(\mathrm{FC}_{text}\left( \mathbf{F}_{\mathcal{T}}^{m}\right)\right) \tag{5}\]
where \(\mathrm{FC}_{text}\in\mathbb{R}^{D^{\prime}\times D}\) aims to align text features with video features in the feature dimension, and the \(\mathrm{FC}_{text}\) weights are shared across all layers of the visual transformer. The \(\mathrm{Repeat}\) operation duplicates text features \(T\) times to obtain \(\mathbf{F}_{\mathcal{T}}^{MA}\in\mathbb{R}^{T\times 1\times D}\). For the support set branch, given the temporal adapted features \(\mathbf{x}_{TA}^{(l)}\in\mathbb{R}^{T\times 1\times D}\), the input video features \(\mathbf{z}^{(l-1)}\in\mathbb{R}^{T\times(N+1)\times D}\) and the text features \(\mathbf{F}_{\mathcal{T}}^{MA}\in\mathbb{R}^{T\times 1\times D}\), we concatenate these features together along the spatial dimension to obtain the feature \(\mathbf{z}_{MA\cdot S}^{(l-1)}=\left[\mathbf{z}^{(l-1)};\mathbf{x}_{TA}^{(l)};\mathbf{F}_{\mathcal{T}}^{MA}\right]\in\mathbb{R}^{T\times(N+3)\times D}\), where \(N\) denotes the total number of patches.
Figure 2: Overview of **MA-CLIP**. For simplicity and convenience, we focus on a specific scenario: the 5-way 1-shot problem with a query set \(\mathcal{Q}\) containing a single video. The support set video features \(\mathbf{F}_{\mathcal{S}}\) and the query video feature \(\mathbf{F}_{\mathcal{Q}}\) are obtained by the visual encoder (TMA). Similarly, text features \(\mathbf{F}_{\mathcal{T}}\) are obtained using a text encoder. The text-guided prototype construction module (TPCM) generates the final features before the prototype matching process, denoted by \(\widetilde{\mathbf{F}_{\mathcal{S}}}\) and \(\widetilde{\mathbf{F}_{\mathcal{Q}}}\). The probability distribution \(\mathbf{p}_{\mathcal{Q}2\mathcal{T}}\) is obtained using the cosine similarity metric, and \(\mathbf{p}_{\mathcal{Q}2\mathcal{S}}\) is calculated using the prototype matching metric. The loss \(\mathcal{L}_{\mathcal{Q}2\mathcal{S}}\) is the standard cross-entropy loss, and \(\mathcal{L}_{\mathcal{S}2\mathcal{T}}\), \(\mathcal{L}_{\mathcal{Q}2\mathcal{T}}\) are Kullback-Leibler (KL) divergence losses.
However, the corresponding text labels of the videos are unknown for the query set branch, so we can only concatenate the input video features \(\textbf{z}^{(l-1)}\) and the temporal adapted features \(\textbf{x}^{(l)}_{TA}\) to obtain \(\textbf{z}^{(l-1)}_{STA\cdot Q}=\left[\textbf{z}^{(l-1)};\textbf{x}^{(l)}_{TA}\right]\in\mathbb{R}^{T\times(N+2)\times D}\). For the support set branch, we feed \(\textbf{z}^{(l-1)}_{MA\cdot S}\) into multimodal adaptation to integrate spatiotemporal information with text semantic information, as shown in Fig.3(c), given by:
\[\textbf{z}^{(l)}_{MA\cdot S}=\textbf{z}^{(l-1)}_{MA\cdot S}+\mathrm{Adapter} \left(\mathrm{M\text{-}MSA}\left(\mathrm{LN}\left(\textbf{z}^{(l-1)}_{MA \cdot S}\right)\right)\right) \tag{6}\]
where \(\textbf{z}^{(l-1)}_{MA\cdot S}\) and \(\textbf{z}^{(l)}_{MA\cdot S}\) denote the multimodal adaptation input and output of the \(l^{th}\) transformer block. Similarly, for the query set branch, we feed \(\textbf{z}^{(l-1)}_{STA\cdot Q}\) into spatiotemporal adaptation to explore spatiotemporal relationships, as shown in Fig.3(d), given by:
\[\textbf{z}^{(l)}_{STA\cdot Q}=\textbf{z}^{(l-1)}_{STA\cdot Q}+\mathrm{Adapter }\left(\mathrm{ST\text{-}MSA}\left(\mathrm{LN}\left(\textbf{z}^{(l-1)}_{STA \cdot Q}\right)\right)\right) \tag{7}\]
where \(\textbf{z}^{(l-1)}_{STA\cdot Q}\) and \(\textbf{z}^{(l)}_{STA\cdot Q}\) denote the spatiotemporal adaptation input and output of the \(l^{th}\) transformer block. The Adapter structure is the same as that shown in Fig.3(a). The multimodal adaptation and spatiotemporal adaptation processes share weight parameters, which keeps the query and support samples in the same feature space. Because videos of the same category vary across different tasks, fusing the textual semantic information of that category achieves task-oriented feature enhancement.
#### 3.3.3 Joint Adaptation
Temporal adaptation and multimodal adaptation each play their own role, combining information from video-text multimodal sources for task-oriented modeling. Lastly, we introduce joint adaptation, in which an Adapter is placed in parallel with the MLP layer to jointly tune the final representations. Specifically, to keep each transformer block consistent in the spatial dimension, we perform the Select operation on \(\textbf{z}^{(l)}_{MA\cdot S}\) and \(\textbf{z}^{(l)}_{STA\cdot Q}\), taking their first \(N+1\) features in the spatial dimension. Joint adaptation can then be computed as follows:
\[\textbf{z}^{(l)}=\begin{cases}\textbf{z}^{(l)}_{MA\cdot S}+\mathrm{MLP} \left(\mathrm{LN}\left(\textbf{z}^{(l)}_{MA\cdot S}\right)\right)+\\ r\cdot\mathrm{Adapter}\left(\mathrm{LN}\left(\textbf{z}^{(l)}_{MA\cdot S} \right)\right)&if\ i=0\\ \textbf{z}^{(l)}_{STA\cdot Q}+\mathrm{MLP}\left(\mathrm{LN}\left(\textbf{z}^{( l)}_{STA\cdot Q}\right)\right)+\\ r\cdot\mathrm{Adapter}\left(\mathrm{LN}\left(\textbf{z}^{(l)}_{STA\cdot Q} \right)\right)&if\ i=1\end{cases} \tag{8}\]
where \(i=0\) refers to the support set branch and \(i=1\) refers to the query set branch. In this context, \(r\) is a scaling factor that regulates the influence of the Adapter's output weight.
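A matching sketch of Eq. (8) is given below; `r` and `n_keep` are illustrative arguments, and the same function serves both branches since only the adapted features passed in differ.

```python
def joint_adaptation(z, frozen_mlp, frozen_ln, adapter, r=0.5, n_keep=None):
    """Joint adaptation of Eq. (8) for either branch.

    z:          (B*T, N+k, D) output of multimodal (support) or spatiotemporal (query) adaptation.
    frozen_mlp, frozen_ln: the pre-trained MLP and LayerNorm of the ViT block (kept frozen).
    adapter:    trainable Adapter placed in parallel with the MLP.
    r:          scaling factor on the adapter output (the value here is an assumption).
    n_keep:     Select operation; keep only the first N+1 spatial tokens when given.
    """
    if n_keep is not None:
        z = z[:, :n_keep, :]                   # Select: restore the standard (N+1)-token layout
    h = frozen_ln(z)
    return z + frozen_mlp(h) + r * adapter(h)  # Eq. (8): frozen MLP plus scaled parallel adapter
```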
### Text-guided Prototype Construction Module (TPCM)
In few-shot action recognition, the quality of class prototype construction directly affects the results of class prototype matching: the better the video class prototypes are constructed, the higher the recognition accuracy, and vice versa. However, many existing methods [3, 61, 56, 33, 38, 49, 46, 22, 14] only use a limited number of videos from each category to construct class prototypes, making it difficult to distinguish similar classes in each task. Recently, the success of multimodal methods in action recognition [42, 30, 18, 27, 47] has demonstrated that jointly modeling the video data and relevant textual information makes it possible to understand and represent the semantic information contained in the video more accurately.
Figure 3: (a) shows the structure of the Adapter, and (b) shows the structure of a standard ViT block. (c) and (d) illustrate how we adapt the standard ViT block for the support and query set videos. Note that T-MSA, M-MSA, and ST-MSA share weights but are applied to different inputs.
Therefore, based on the attention mechanism, we design a text-guided prototype construction module that can fully utilize video-text information to enhance the representation of video prototypes and optimize the intra-class and inter-class correlations of videos. For the support set branch, given the support set adapted features \(\mathbf{F}_{\mathcal{S}}^{m}\in\mathbf{F}_{\mathcal{S}}\) and the corresponding text features \(\mathbf{F}_{\mathcal{T}}^{m}\in\mathbf{F}_{\mathcal{T}}\), we apply cross-attention to them to obtain the enhanced features \(\widetilde{\mathbf{F}_{\mathcal{S}}^{m}}\in\widetilde{\mathbf{F}_{\mathcal{S}}}\). Specifically, the query-key-value triplets \(\mathbf{q}_{\mathcal{S}}^{m}\), \(\mathbf{k}_{\mathcal{S}}^{m}\), \(\mathbf{v}_{\mathcal{S}}^{m}\) are obtained as follows:
\[\mathbf{q}_{\mathcal{S}}^{m}=\mathbf{F}_{\mathcal{S}}^{m}+\mathrm{Repeat}\left( \mathbf{F}_{\mathcal{T}}^{m}\right) \tag{9}\]
\[\mathbf{k}_{\mathcal{S}}^{m}=\mathbf{v}_{\mathcal{S}}^{m}=\mathrm{Concat}\left( \left[\mathbf{F}_{\mathcal{S}}^{m};\mathbf{F}_{\mathcal{T}}^{m}\right]\right) \tag{10}\]
where \(\mathbf{F}_{\mathcal{S}}^{m}\in\mathbb{R}^{T\times D^{\prime}}\), \(\mathbf{F}_{\mathcal{T}}^{m}\in\mathbb{R}^{1\times D^{\prime}}\), \(\mathbf{q}_{\mathcal{S}}^{m}\in\mathbb{R}^{T\times D^{\prime}}\), \(\mathbf{k}_{\mathcal{S}}^{m}=\mathbf{v}_{\mathcal{S}}^{m}\in\mathbb{R}^{(T+1)\times D^{\prime}}\), and \(\mathrm{Repeat}\) copies \(\mathbf{F}_{\mathcal{T}}^{m}\) \(T\) times. Then, we apply multi-head attention (MHA) and a feed-forward network to obtain the enhanced support video feature \(\widetilde{\mathbf{F}_{\mathcal{S}}^{m}}\in\mathbb{R}^{T\times D^{\prime}}\) as shown in Fig.4(a), given by:
\[\widetilde{\mathbf{F}_{\mathcal{S}}^{m}}=\mathbf{q}_{\mathcal{S}}^{m}+\mathrm{ MHA}\left(\mathbf{q}_{\mathcal{S}}^{m},\mathbf{k}_{\mathcal{S}}^{m},\mathbf{v}_{ \mathcal{S}}^{m}\right) \tag{11}\]
\[\widetilde{\mathbf{F}_{\mathcal{S}}^{m}}=\widetilde{\mathbf{F}_{\mathcal{S}}^{ m}}+\mathrm{FFN}\left(\widetilde{\mathbf{F}_{\mathcal{S}}^{m}}\right) \tag{12}\]
where \(\mathrm{MHA}\) consists of the layer normalization and a multi-head attention layer, and \(\mathrm{FFN}\) consists of the layer normalization and an MLP layer. Similarly, we perform the same operation on the query set videos to explore the temporal relationships, as shown in Fig.4(b). However, the difference is that \(\mathbf{q}_{\mathcal{Q}}^{m}=\mathbf{k}_{\mathcal{Q}}^{m}=\mathbf{v}_{ \mathcal{Q}}^{m}=\mathbf{F}_{\mathcal{Q}}^{m}\in\mathbb{R}^{T\times D^{\prime}}\) since it does not have corresponding textual features. Note that the support and query set branches share the parameter weights of all modules to reduce computation costs while ensuring that query and support samples are in the same feature space.
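A compact sketch of Eqs. (9)-(12) is given below, assuming `nn.MultiheadAttention` for the MHA block; the head count and FFN width are illustrative choices rather than the exact configuration, and the module is shared across branches as described above.

```python
import torch
import torch.nn as nn

class TPCM(nn.Module):
    """Text-guided prototype construction (Eqs. 9-12); the same module (shared weights)
    serves the support branch (with text) and the query branch (without text)."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.ln_attn = nn.LayerNorm(dim)
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln_ffn = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, f_video, f_text=None):
        # f_video: (B, T, D') frame features; f_text: (B, 1, D') class text feature or None.
        if f_text is not None:                                        # support branch
            q = f_video + f_text.expand(-1, f_video.size(1), -1)      # Eq. (9): Repeat and add
            kv = torch.cat([f_video, f_text], dim=1)                  # Eq. (10): concat video and text
        else:                                                         # query branch: self-attention only
            q, kv = f_video, f_video
        h, _ = self.mha(self.ln_attn(q), self.ln_attn(kv), self.ln_attn(kv))
        x = q + h                                                     # Eq. (11)
        return x + self.ffn(self.ln_ffn(x))                           # Eq. (12)
```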
### Metric Loss and Predictions
The existing few-shot action recognition works [3, 61, 56, 33, 38, 49, 46, 22, 14], typically based solely on visual information, classify a query video by comparing the temporal-aligned distances between the query video and the support set prototypes. With the advent of text information and visual-language pre-trained model CLIP, text features can now be utilized to classify query videos. This means that query videos can be classified by matching not only with the prototypes of the support set (visual branch) but also with the corresponding text features of the support set (text branch), as shown in Fig.2. For the visual branch, given the support prototype enhanced feature \(\widetilde{\mathbf{F}_{\mathcal{S}}^{m}}\in\widetilde{\mathbf{F}_{\mathcal{S}}}\) and the query enhanced feature \(\widetilde{\mathbf{F}_{q}}\in\widetilde{\mathbf{F}_{\mathcal{Q}}}\), the distance \(D_{q,\mathcal{S}^{m}}\) can be calculated as:
\[D_{q,\mathcal{S}^{m}}=\mathcal{M}\left(\widetilde{\mathbf{F}_{q}},\widetilde{ \mathbf{F}_{\mathcal{S}}^{m}}\right) \tag{13}\]
where \(\mathcal{M}\) denotes the temporal alignment metric, and \(D_{q,\mathcal{S}^{m}}\in D_{q,\mathcal{S}}\). Based on the distances \(D_{q,\mathcal{S}}\), we can obtain the probability distribution over support classes \(\mathbf{p}_{\mathcal{Q}2\mathcal{S}}\) and use a standard cross-entropy loss \(\mathcal{L}_{\mathcal{Q}2\mathcal{S}}\) to optimize the model parameters. For the text branch, given the adapted support set prototype feature \(\mathbf{F}_{\mathcal{S}}^{m}\in\mathbf{F}_{\mathcal{S}}\), adapted query feature \(\mathbf{F}_{q}\in\mathbf{F}_{\mathcal{Q}}\), and corresponding text feature \(\mathbf{F}_{\mathcal{T}}^{m}\in\mathbf{F}_{\mathcal{T}}\), we apply global average pooling along the temporal dimension to the features \(\mathbf{F}_{\mathcal{S}}^{m}\) and \(\mathbf{F}_{q}\) to obtain \(\mathbf{F}_{\mathcal{S}}^{m\text{-}avg}\) and \(\mathbf{F}_{q}^{avg}\). To bring the pairwise representations of videos and labels closer to each other, we define symmetric similarities between the two modalities using cosine distances in the similarity calculation module, given by:
\[s\left(\mathbf{F}_{\mathcal{S}}^{m\text{-}avg},\mathbf{F}_{\mathcal{T}}^{m} \right)=\frac{\left\langle\mathbf{F}_{\mathcal{S}}^{m\text{-}avg},\mathbf{F}_{ \mathcal{T}}^{m}\right\rangle}{\left\|\mathbf{F}_{\mathcal{S}}^{m\text{-}avg} \right\|\left\|\mathbf{F}_{\mathcal{T}}^{m}\right\|} \tag{14}\]
\[s\left(\mathbf{F}_{q}^{avg},\mathbf{F}_{\mathcal{T}}^{m}\right)=\frac{\left\langle \mathbf{F}_{q}^{avg},\mathbf{F}_{\mathcal{T}}^{m}\right\rangle}{\left\| \mathbf{F}_{q}^{avg}\right\|\left\|\mathbf{F}_{\mathcal{T}}^{m}\right\|} \tag{15}\]
where \(s\left(\mathbf{F}_{\mathcal{S}}^{m\text{-}avg},\mathbf{F}_{\mathcal{T}}^{m} \right)\in s\left(\mathbf{F}_{\mathcal{S}}^{avg},\mathbf{F}_{\mathcal{T}}\right)\) and \(s\left(\mathbf{F}_{q}^{avg},\mathbf{F}_{\mathcal{T}}^{m}\right)\in s\left( \mathbf{F}_{\mathcal{Q}}^{avg},\mathbf{F}_{\mathcal{T}}\right)\). Based on the cosine similarities \(s\left(\mathbf{F}_{\mathcal{S}}^{avg},\mathbf{F}_{\mathcal{T}}\right)\) and \(s\left(\mathbf{F}_{\mathcal{Q}}^{avg},\mathbf{F}_{\mathcal{T}}\right)\), we can obtain the softmax-normalized video-to-text similarity scores \(\mathbf{p}_{\mathcal{S}2\mathcal{T}}\) and \(\mathbf{p}_{\mathcal{Q}2\mathcal{T}}\). Inspired by ActionCLIP [42], we define the Kullback-Leibler (KL) divergence as the video-text contrastive loss \(\mathcal{L}_{\mathcal{S}2\mathcal{T}}\) and \(\mathcal{L}_{\mathcal{Q}2\mathcal{T}}\). By optimizing contrastive loss, the CLIP model can be adapted to our downstream task. Finally, we integrate the losses of both the visual and textual branches, given by:
\[\mathcal{L}=\alpha\cdot\frac{1}{2}\left(\mathcal{L}_{\mathcal{S}2\mathcal{T}}+ \mathcal{L}_{\mathcal{Q}2\mathcal{T}}\right)+\left(1-\alpha\right)\cdot\mathcal{L }_{\mathcal{Q}2\mathcal{S}} \tag{16}\]
Similarly, we also combine the query set video prediction distributions from both the visual and text branches, written as:
\[\mathbf{p}=\alpha\cdot\mathbf{p}_{\mathcal{Q}2\mathcal{T}}+\left(1-\alpha\right) \cdot\mathbf{p}_{\mathcal{Q}2\mathcal{S}} \tag{17}\]
where \(\alpha\in[0,1]\) is an adjustable hyperparameter.
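To make the branch combination concrete, the sketch below turns distances and cosine similarities into distributions and combines the losses and predictions as in Eqs. (16)-(17); the temperature, the one-hot KL targets, and the mapping from distances to probabilities are our assumptions, since the paper defers these details to the temporal alignment metric and to ActionCLIP [42].

```python
import torch
import torch.nn.functional as F

def losses_and_prediction(d_q2s, sim_s2t, sim_q2t, labels_s, labels_q, alpha=0.5, tau=0.07):
    """Combine the visual and text branches (Eqs. 14-17).

    d_q2s:    (Q, C) temporal-alignment distances from each query video to the C support prototypes.
    sim_s2t:  (C, C) cosine similarities between support prototypes and the C class texts (Eq. 14).
    sim_q2t:  (Q, C) cosine similarities between query videos and the C class texts (Eq. 15).
    labels_s, labels_q: ground-truth class indices; tau is a temperature (our assumption).
    """
    p_q2s = F.softmax(-d_q2s, dim=-1)                 # smaller distance -> higher class probability
    p_s2t = F.softmax(sim_s2t / tau, dim=-1)
    p_q2t = F.softmax(sim_q2t / tau, dim=-1)

    loss_q2s = F.nll_loss(torch.log(p_q2s + 1e-8), labels_q)              # CE on the visual branch
    one_hot_s = F.one_hot(labels_s, p_s2t.size(-1)).float()
    one_hot_q = F.one_hot(labels_q, p_q2t.size(-1)).float()
    loss_s2t = F.kl_div(torch.log(p_s2t + 1e-8), one_hot_s, reduction="batchmean")
    loss_q2t = F.kl_div(torch.log(p_q2t + 1e-8), one_hot_q, reduction="batchmean")

    loss = alpha * 0.5 * (loss_s2t + loss_q2t) + (1 - alpha) * loss_q2s   # Eq. (16)
    p = alpha * p_q2t + (1 - alpha) * p_q2s                               # Eq. (17)
    return loss, p
```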
Figure 4: (a) and (b) respectively show the structure of the TPCM module for the support set and query set branch. \(\oplus\) denotes element-wise summation.
## 4 Experiments
### Experimental Setup
#### 4.1.1 Datasets
Our method's performance is assessed on five datasets that can be classified into two categories: 1) spatial-related datasets, including Kinetics [4], HMDB51 [20], and UCF101 [37]. 2) temporal-related datasets, including SSv2-Full [9] and SSv2-Small [9]. For spatial-related datasets, the recognition of actions primarily relies on background information, with temporal information playing a minor role. On the other hand, the situation is precisely the opposite for temporal-related datasets, where the key to action recognition lies in temporal modeling. Referring to the previous setups [3, 62, 61] on Kinetics, SSv2-Full, SSv2-Small, we select 100 classes and divide them into 64/12/24 action classes as training/validation/testing classes. For UCF101 and HMDB51, we evaluate our method on the splits provided by [56].
#### 4.1.2 Network Architectures
We choose CLIP [34] as our pre-trained foundation model for efficient fine-tuning, where the visual encoder is ViT-B/32 [7] or ViT-B/16 [7] and the text encoder is a 12-layer, 512-wide transformer with eight attention heads. Since previous works [3, 61, 56, 33, 38, 49, 46, 22, 14] used ResNet-50 [11] pre-trained on ImageNet [5] as the backbone, we also provide a version that uses pre-trained CLIP ResNet-50 without the TMA module as the visual encoder. Meanwhile, we set the bottleneck ratio of the Adapters in the TMA module to 0.25, the same as AIM [51]. For the prompt templates of the text encoder, we follow the same approach as ActionCLIP [42]: during training, a prompt template is randomly selected from 18 candidate templates for each video, while during inference the text vector is obtained by feeding all 18 prompt templates and averaging their outputs. For the temporal alignment metric \(\mathcal{M}\), we choose OTAM [3] as our baseline metric.
#### 4.1.3 Training and Inference
Following TSN [41], we uniformly sample 8 frames (\(T\)=8) from each video as the input. The input is augmented with some fundamental techniques, such as random horizontal flipping, cropping, and color jitter during training, while only a center crop is used at inference. For training, SSv2-Full and SSv2-Small randomly sample 100,000 training episodes, and the other datasets randomly sample 10,000 training episodes. Meanwhile, if the visual encoder is ViT-B/32 or ViT-B/16, we freeze the pre-trained foundation model and only fine-tune the lightweight adapters during training; if the visual encoder is ResNet-50, we only freeze the text encoder and fully fine-tune the visual encoder. Moreover, our framework uses the Adam optimizer with a multi-step scheduler. For inference, we report the average results over 10,000 tasks randomly sampled from the test sets of all datasets.
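The TSN-style sparse sampling can be sketched as follows: the video is divided into T equal segments and one frame index is drawn per segment, randomly during training and at the segment centre during inference. This is an illustration of the cited protocol rather than the authors' code.

```python
import random

def tsn_sample_indices(num_frames, t=8, training=True):
    """Return t frame indices, one per equal-length segment (TSN-style sparse sampling)."""
    seg_len = num_frames / t
    indices = []
    for i in range(t):
        start = int(i * seg_len)
        end = max(int((i + 1) * seg_len), start + 1)
        if training:
            indices.append(random.randrange(start, min(end, num_frames)))  # random frame per segment
        else:
            indices.append(min((start + end - 1) // 2, num_frames - 1))    # segment centre at test time
    return indices

# Example: 8 indices spread over a 120-frame clip.
print(tsn_sample_indices(120, t=8, training=False))
```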
### Results
#### 4.2.1 Results on Spatial-related Datasets
For spatial-related datasets, the recognition of actions primarily relies on background information, with temporal modeling playing a minor role. CLIP is a large image pre-trained foundation model that mainly relies on background information to recognize images; therefore, fine-tuning CLIP on spatial-related datasets results in a significant improvement in few-shot action recognition. We report results using three different visual encoders. The CLIP-RN50 model has a fully fine-tuned visual encoder since it does not have an Adapter structure, while the two ViT-B models only fine-tune the lightweight adapter modules during training. As shown in Tab.1, even our CLIP-RN50 model significantly improves accuracy in every task setting compared to strong methods (e.g., TRX [33], STRM [38], HyRSM [46], SloshNet [49], and MoLo [45]) that use ImageNet pre-training. Compared to CLIP-FSAR [44], which uses the same CLIP pre-training and temporal alignment metric, our MA-CLIP achieves better results on multiple datasets and task settings. Specifically, compared to CLIP-FSAR with the same ViT-B/16 visual encoder, our method brings 6.3% and 0.9% performance improvements in the 1-shot task of HMDB51 and Kinetics, and 0.2% and 0.6% gains in the 5-shot task of HMDB51 and Kinetics, respectively.
#### 4.2.2 Results on Temporal-related Datasets
For temporal-related datasets, the key to action recognition is temporal information. The performance improvement from CLIP's pre-trained weights is less significant than those for spatial-related datasets. However, our model still shows excellent results due to its remarkable capacity in temporal modeling. We report three model results using different visual encoders as shown in Tab.2. Compared to the baseline OTAM [3], our MA-CLIP using CLIP-RN50 as the visual encoder can bring 16.1%, 16.0% performance improvements in the 1-shot task, and 9.8%, 11.3% accuracy gains in the 5-shot task of SSv2-Small and SSv2-Full, respectively. Meanwhile, our CLIP-RN50 model achieves the best performance in the 1-shot task compared to all the methods using ResNet-50 as the visual encoder in all temporal-related datasets. Compared to CLIP-FSAR [44], which uses the same CLIP pre-training and temporal alignment metric, our method has a significant performance improvement. Specifically, compared to CLIP-FSAR with the same highest configuration (ViT-B/16), our MA-CLIP brings 4.5%, 2.7% accuracy improvements in the 1-shot
task, and 1.2%, 0.2% accuracy gains in the 5-shot task of SSv2-Small and SSv2-Full, respectively. For the SSv2-Small datasets, even our ViT-B/32 model can perform better than CLIP-FSAR's ViT-B/16 model.
### Ablation Study
#### 4.3.1 Impact of The Proposed Components
To validate the contributions of each module (i.e. TMA, TPCM) in our method, we experiment under 5-way 1-shot settings on the SSv2-Small and SSv2-Full datasets. Our multimodal baseline method chooses CLIP-ViT-B/32 as our visual encoder and freezes all the learnable weights without extra modules. As shown in Tab.3, we observe each component is effective. Specifically, compared to the baseline, the TMA module can bring 13.5% and 16.3% accuracy improvements on SSv2-Small and SSv2-Full, and the TPCM module can bring 16.9% and 19.6% on two datasets. Combining all modules can get the best results, bringing 27.7% and 31.7% accuracy gains on SSv2-Small and SSv2-Full over the baseline.
#### 4.3.2 Effectiveness of The Adaptation Components
To demonstrate the effectiveness of our proposed adaptation in TMA, we compare our method to two baselines. We choose CLIP-ViT-B/32 as our visual encoder. The first baseline is a frozen space-only model without any adaptation, freezing all the trainable parameters of the visual and text encoders but not including the TPCM module. Compared to the first baseline, the second baseline fully fine-tunes the visual encoder without any adaptation. As shown in Tab.4, the fine-tuned visual-only model brings a 12.4% performance improvement over the first baseline, but the number of tunable parameters increases from 3.15M to 90.99M. Our method aims to add a few tunable parameters to a fully frozen visual model, without compromising the pre-trained weights, to achieve better performance than the fully fine-tuned model. In Tab.4, after multimodal adaptation, the frozen model achieves comparable performance with the fully fine-tuned visual-only model (54.0 vs. 53.9) with less than one-tenth of the parameter count of the latter (7.94M vs. 90.99M). After adding temporal and joint adaptation, they bring 1.9% and 0.6% further performance
\begin{table}
\begin{tabular}{c|c|c|c c|c c|c c} \multirow{2}{*}{Method} & \multirow{2}{*}{Reference} & \multirow{2}{*}{Pre-training} & \multirow{2}{*}{Fine-tuning} & \multicolumn{2}{c|}{HMDB51} & \multicolumn{2}{c|}{UCF101} & \multicolumn{2}{c}{Kinetics} \\ & & & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot \\ \hline MatchingNet [40] & NeurIPS(16) & INet-RN50 & Full & - & - & - & - & 53.3 & 74.6 \\ MAML [8] & ICML(17) & INet-RN50 & Full & - & - & - & - & 54.2 & 75.3 \\ ProtoNet [36] & NeurIPS(17) & - & Full & 54.2 & 68.4 & 74.0 & 89.6 & 64.5 & 77.9 \\ TRN++ [60] & ECCV(18) & INet-RN50 & Full & - & - & - & - & 68.4 & 82.0 \\ CMN++ [61] & ECCV(18) & INet-RN50 & Full & - & - & - & - & 57.3 & 76.0 \\ TARN [2] & BMVC(19) & - & Full & - & - & - & - & 64.8 & 78.5 \\ ARN [56] & ECCV(20) & - & Full & 45.5 & 60.6 & 66.3 & 83.1 & 63.7 & 82.4 \\ OTAM [3] & CVPR(20) & INet-RN50 & Full & 54.5 & 68.0 & 79.9 & 88.9 & 73.0 & 85.8 \\ TTAN [24] & ArXiv(21) & INet-RN50 & Full & 57.1 & 74.0 & 80.9 & 93.2 & - & - \\ ITANet [57] & IJCAI(21) & INet-RN50 & Full & - & - & - & - & 73.6 & 84.3 \\ TRX [33] & CVPR(21) & INet-RN50 & Full & 54.9* & 75.6 & 81.0* & 96.1 & 65.1* & 85.9 \\ TA2N [25] & AAAI(22) & INet-RN50 & Full & 59.7 & 73.9 & 81.9 & 95.1 & 72.8 & 85.8 \\ STRM [38] & CVPR(22) & INet-RN50 & Full & 57.6* & 77.3 & 82.7* & 96.9 & 65.1* & 86.7 \\ MTFAN [48] & CVPR(22) & INet-RN50 & Full & 59.0 & 74.6 & 84.8 & 95.1 & 74.6 & 87.4 \\ HyRSM [46] & CVPR(22) & INet-RN50 & Full & 60.3 & 76.0 & 83.9 & 94.7 & 73.7 & 86.1 \\ HCL [59] & ECCV(22) & INet-RN50 & Full & 59.1 & 76.3 & 82.5 & 93.9 & 73.7 & 85.8 \\ Huang \(etal.\)[14] & ECCV(22) & INet-RN50 & Full & 60.1 & 77.0 & 71.4 & 91.0 & 73.3 & 86.4 \\ Nguyen \(etal.\)[29] & ECCV(22) & INet-RN50 & Full & 59.6 & 76.9 & 84.9 & 95.9 & 74.3 & 87.4 \\ SloshNet [49] & AAAI(23) & INet-RN50 & Full & 59.4 & 77.5 & 86.0 & 97.1 & 70.4 & 87.0 \\ MoLo (OTAM) [45] & CVPR(23) & INet-RN50 & Full & 59.8 & 76.1 & 85.4 & 95.1 & 73.8 & 85.1 \\ \hline CLIP-FSAR [44] & ArXiv(23) & CLIP-RN50 & Full & 69.4 & 80.7 & 92.4 & 97.0 & 90.1 & 92.0 \\ CLIP-FSAR [44] & ArXiv(23) & CLIP-ViT-B/16 & Full & 77.1 & 87.7 & **97.0** & **99.1** & 94.8 & 95.4 \\ \hline
**MA-CLIP** & - & CLIP-RN50 & Full & 73.3 & 82.1 & 91.8 & 96.6 & 92.8 & 93.0 \\
**MA-CLIP** & - & CLIP-ViT-B/32 & PEFT & 77.3 & 83.9 & 92.7 & 97.2 & 93.5 & 94.3 \\
**MA-CLIP** & - & CLIP-ViT-B/16 & PEFT & **83.4** & **87.9** & 96.5 & 98.6 & **95.7** & **96.0** \\ \hline \end{tabular}
\end{table}
Table 1: State-of-the-art comparison on the 5-way k-shot benchmarks of the spatial-related datasets including HMDB51, UCF101, and Kinetics. The **boldface** and underline fonts indicate the highest and the second highest results. Note: * means our implementation. For Fine-tuning, “Full” indicates the full fine-tuning of the visual encoder, and “PEFT” indicates the parameter-efficient fine-tuning of the visual encoder.
improvements, respectively. Our final model brings a 2.6% accuracy improvement compared to the fine-tuned visual-only model, while its number of tunable parameters is only one-fifth.
#### 4.3.3 Comparison Between Multimodal Adaptation and Spatiotemporal Adaptation
To compare multimodal and spatiotemporal adaptation fairly, we conduct experiments on the 5-way 1-shot task of SSv2-Small and SSv2-Full. As shown in Sec.3.3.2, the difference between multimodal and spatiotemporal adaptation lies in whether or not to add text features to do self-attention with spatiotemporal features for support videos. As shown in Tab.5, using multimodal adaptation instead of spatiotemporal adaptation results in 0.6% and 0.7% performance improvements on the SSv2-Small and SSv2-Full datasets, respectively. The experimental results reveal that enhancing the semantic representation of visual features by introducing textual features is effective in Adapter.
#### 4.3.4 Comparison of Different Prototype Construction Methods
To demonstrate the effectiveness of our proposed module and compare the efficacy of various prototype construction methods, we conduct experiments on the 5-way 1-shot task of SSv2-Small. We choose CLIP-ViT-B/32 as our visual encoder, and the transformer consists of multi-head self-attention and a feed-forward network. In the first baseline, the unimodal transformer, the features \(\mathbf{F}_{\mathcal{S}}\) and \(\mathbf{F}_{\mathcal{Q}}\) perform self-attention on the temporal dimension. The second baseline (CLIP-FSAR [44]) differs from the first in that the text features \(\mathbf{F}_{\mathcal{T}}\) are stacked along the temporal dimension before performing self-attention on the support features \(\mathbf{F}_{\mathcal{S}}\). We set the number of transformer layers to one. As shown in Tab.6, our TPCM module brings 8.6% and 1.8% performance improvements on SSv2-Small compared to the unimodal transformer and the multimodal transformer, respectively. Based on these results, our TPCM module is more effective at leveraging textual information as guidance to integrate visual and textual features, and this integration leads to more robust class prototype representations.
\begin{table}
\begin{tabular}{c|c|c|c|c c|c c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Reference} & \multirow{2}{*}{Pre-training} & \multirow{2}{*}{Fine-tuning} & \multicolumn{2}{c}{SSv2-Small} & \multicolumn{2}{c}{SSv2-Full} \\ & & & & 1-shot & 5-shot & 1-shot & 5-shot \\ \hline MatchingNet [40] & NeurIPS(16) & INet-RN50 & Full & 31.3 & 45.5 & - & - \\ MAML [8] & ICML(17) & INet-RN50 & Full & 30.9 & 41.9 & - & - \\ TRN++ [60] & ECCV(18) & INet-RN50 & Full & - & - & 38.6 & 48.9 \\ CMN++ [61] & ECCV(18) & INet-RN50 & Full & 34.4 & 43.8 & 36.2 & 48.8 \\ OTAM [3] & CVPR(20) & INet-RN50 & Full & 36.4 & 48.0 & 42.8 & 52.3 \\ TTAN [24] & ArXiv(21) & INet-RN50 & Full & - & - & 46.3 & 60.4 \\ ITANet [57] & IJCAI(21) & INet-RN50 & Full & 39.8 & 53.7 & 49.2 & 62.3 \\ TRX [33] & CVPR(21) & INet-RN50 & Full & 36.0* & 56.7* & 42.0* & 64.6 \\ TA2N [25] & AAAI(22) & INet-RN50 & Full & - & - & 47.6 & 61.0 \\ STRM [38] & CVPR(22) & INet-RN50 & Full & 37.1* & 55.3* & 43.1* & 68.1 \\ MTFAN [48] & CVPR(22) & INet-RN50 & Full & - & - & 45.7 & 60.4 \\ HyRSM [46] & CVPR(22) & INet-RN50 & Full & 40.6 & 56.1 & 54.3 & 69.0 \\ HCL [59] & ECCV(22) & INet-RN50 & Full & 38.7 & 55.4 & 47.3 & 64.9 \\ Huang \(etal.\)[14] & ECCV(22) & INet-RN50 & Full & 38.9 & 61.6 & 49.3 & 66.7 \\ Nguyen \(etal.\)[29] & ECCV(22) & INet-RN50 & Full & - & - & 43.8 & 61.1 \\ SloshNet [49] & AAAI(23) & INet-RN50 & Full & - & - & 46.5 & 68.3 \\ MoLo (OTAM) [45] & CVPR(23) & INet-RN50 & Full & 41.9 & 56.2 & 55.0 & 69.6 \\ \hline CLIP-FSAR [44] & ArXiv(23) & CLIP-RN50 & Full & 52.1 & 55.8 & 58.7 & 62.8 \\ CLIP-FSAR [44] & ArXiv(23) & CLIP-ViT-B/16 & Full & 54.6 & 61.8 & 62.1 & 72.1 \\ \hline
**MA-CLIP** & - & CLIP-RN50 & Full & 52.5 & 57.8 & 58.8 & 63.6 \\
**MA-CLIP** & - & CLIP-ViT-B/32 & PEFT & 56.5 & 62.3 & 61.9 & 64.5 \\
**MA-CLIP** & - & CLIP-ViT-B/16 & PEFT & **59.1** & **64.5** & **63.3** & **72.3** \\ \hline \end{tabular}
\end{table}
Table 2: State-of-the-art comparison on the 5-way k-shot benchmarks of the temporal-related datasets including SSv2-Small and SSv2-Full. The **boldface** and underline fonts indicate the highest and the second highest results. Note: * means our implementation. For Fine-tuning, Full indicates the full fine-tuning of the visual encoder, and PEFT indicates the parameter-efficient fine-tuning of the visual encoder.
\begin{table}
\begin{tabular}{c|c|c|c} \hline TMA & TPCM & SSv2-Small & SSv2-Full \\ \hline ✗ & ✗ & 28.8 & 30.2 \\ \hline ✗ & ✓ & 42.3 & 46.5 \\ \hline ✓ & ✗ & 45.7 & 49.8 \\ \hline ✓ & ✓ & 56.5 & 61.9 \\ \hline \end{tabular}
\end{table}
Table 3: The impact of proposed modules on SSv2-Small and SSv2-Full in the 5-way 1-shot task. The visual encoder is ViT-B/32.
#### 4.3.5 Method Effectiveness on Different Temporal Alignment Metrics
We conduct experiments using different temporal alignment metrics on the 5-way 1-shot task of Kinetics and SSv2-Small to demonstrate that our model is plug-and-play. We choose CLIP-ViT-B/32 as our visual encoder and adopt three different temporal alignment metrics: OTAM [3], Bi-MHM [46], and TRX [33]. As displayed in Tab.7, our method can adapt to any temporal alignment metric, and the final accuracies are closely correlated with the metric's performance. Moreover, irrespective of the temporal alignment metric employed, our MA-CLIP consistently achieves the best performance compared to the baselines, which serves as compelling evidence for the superiority of our model.
#### 4.3.6 Unimodal Model vs. Multimodal Model
We also compare the performance of the unimodal and multimodal models, as well as the impact of different pre-training. We experiment with different pre-training and model modalities on the 5-way 1-shot task of Kinetics and SSv2-Small, using multiple temporal alignment metrics. For each metric, we provide two baselines: an ImageNet [5] pre-trained unimodal model and a CLIP pre-trained unimodal model. We choose ViT-B/32 as our visual encoder, and all baseline models' visual encoders are fully fine-tuned. As shown in Tab.7, using a CLIP pre-trained single-tower model leads to performance improvements over the ImageNet pre-trained model, but these improvements are still relatively limited. However, when using our proposed MA-CLIP multimodal model, there is a significant improvement in performance on both datasets. Specifically, our MA-CLIP consistently achieves a minimum accuracy improvement of 15% over the unimodal model using ImageNet pre-training and a minimum improvement of 10% over the unimodal model using CLIP pre-training on the two datasets. These results demonstrate the importance of text information for few-shot action recognition and prove the effectiveness of our approach.
#### 4.3.7 Full Fine-tuning vs. Adaptation
In Tab.8, we conduct experiments on the 5-way 1-shot tasks of SSv2-Small and SSv2-Full to make a fair comparison between full fine-tuning and adaptation, where adaptation refers to the TMA module proposed here. We choose ViT-B/32 as our visual encoder. As shown in Tab.8, our adaptation method brings 2.6% and 2.8% accuracy improvements over the full fine-tuning model on SSv2-Small and SSv2-Full, respectively. Our adaptation method implements multimodal fusion and temporal modeling, which the full fine-tuning method does not achieve. Moreover, our method has only one-fifth (18.54M vs. 90.99M) of the tunable parameters of the full fine-tuning method, requires 1.6G (11.9G vs. 13.5G) less memory, and takes 0.4 hours (3.0H vs. 3.4H) less time to train for 10,000 tasks on a single RTX3090. The experimental results demonstrate that our MA-CLIP is fast, efficient, and has low training costs.
#### 4.3.8 Comparison of Different Methods for the Number of Training Tasks
To demonstrate the significance of applying large-scale pre-trained foundation models to few-shot action recognition, which significantly reduces the number of training tasks while dramatically improving recognition accuracy, we conduct experiments on SSv2-Small and Kinetics in the 5-way 1-shot task to compare the number of training tasks and the accuracy of different methods. The visual encoder is ViT-B/32, and MA-CLIP's temporal alignment metric is OTAM.
\begin{table}
\begin{tabular}{c|c|c} \hline Method & Visual Encoder & Acc \\ \hline Unimodal Transformer & ViT-B/32 & 47.9 \\ \hline Multimodal Transformer & ViT-B/32 & 54.7 \\ \hline
**TPCM** & ViT-B/32 & **56.5** \\ \hline \end{tabular}
\end{table}
Table 6: Comparison of different prototype construction methods on SSv2-Small in the 5-way 1-shot task. The transformer includes the multi-head self-attention and a feed-forward network. The visual encoder is ViT-B/32.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Method & Param (M) & Tunable Param (M) & Acc \\ \hline Frozen & 154.43 & 3.15 & 42.3 \\ Fine-tuned visual-only & 154.43 & 90.99 & 53.9 \\ \hline Frozen + multimodal adaptation & 159.21 & 7.94 & 54.0 \\ + temporal adaptation & 166.31 & 15.04 & 55.9 \\ + joint adaptation & 169.81 & 18.54 & 56.5 \\ \hline \end{tabular}
\end{table}
Table 4: Effectiveness of the Adapter components on SSv2-Small in the 5-way 1-shot task. The visual encoder is ViT-B/32.
\begin{table}
\begin{tabular}{c||c||c} \hline Method & Dataset & Acc \\ \hline Spatiotemporal Adaptation & SSv2-Small & 55.9 \\
**Multimodal Adaptation** & SSv2-Small & **56.5** \\ \hline Spatiotemporal Adaptation & SSv2-Full & 61.2 \\
**Multimodal Adaptation** & SSv2-Full & **61.9** \\ \hline \end{tabular}
\end{table}
Table 5: Effectiveness comparison between multimodal adaptation and spatiotemporal adaptation on SSv2-Small and SSv2-Full in the 5-way 1-shot task. The visual encoder is ViT-B/32.
As shown in Tab.9, using the ViT-B/32 model, our MA-CLIP achieves at least a 15% improvement in accuracy compared to other methods that use ImageNet pre-training, while the number of training tasks is only one-fourth of theirs on SSv2-Small. Similarly, on Kinetics, our MA-CLIP achieves at least a 10% accuracy improvement while using only one-tenth as many training tasks as the other methods. Based on the above results, applying large-scale foundation models to few-shot action recognition is necessary.
#### 4.3.9 Attention Visualization of MA-CLIP
Fig.5 shows the attention visualization of our MA-CLIP on SSv2-Small in the 5-way 1-shot setting. Corresponding to the original RGB images (left), we compare the attention maps of the unimodal fully fine-tuned model using CLIP pre-trained weights (middle), mentioned in Sec.4.3.6, with the attention maps of our MA-CLIP (right). As shown in Fig.5, the attention maps generated by MA-CLIP focus more on action-related objects and reduce attention to the background and unrelated objects. These observations provide empirical evidence of the effectiveness of our MA-CLIP in enhancing semantic and spatiotemporal representation.
## 5 Conclusion
In this work, we propose MA-CLIP, a novel method that adapts CLIP for few-shot action recognition by adding lightweight adapters, which minimizes the number of learnable parameters and enables the model to transfer quickly across different tasks. The adapters we design combine information from video-text multimodal sources for task-oriented spatiotemporal modeling and are fast, efficient, and cheap to train. Additionally, based on the attention mechanism, we design a text-guided prototype construction module that can fully utilize video-text information to enhance the representation of video prototypes. Our MA-CLIP is plug-and-play and can be used with any few-shot action recognition temporal alignment metric, and experiments demonstrate that our method performs excellently with any metric in various task settings. Extensive experiments on five widely used datasets show that our MA-CLIP can achieve outstanding performance with a small number of trainable parameters.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Temporal Alignment Metric & Model Modality & Pre-training & Kinetics & SSv2-Small \\ \hline OTAM [3] & Unimodal & INet-ViT-B/32 & 75.8 & 38.2 \\ OTAM [3] & Unimodal & CLIP-ViT-B/32 & 83.7 & 44.8 \\
**MA-CLIP(OTAM)** & Multimodal & CLIP-ViT-B/32 & **93.5** & **56.5** \\ \hline Bi-MHM [46] & Unimodal & INet-ViT-B/32 & 75.2 & 39.5 \\ Bi-MHM [46] & Unimodal & CLIP-ViT-B/32 & 83.2 & 45.5 \\
**MA-CLIP(Bi-MHM)** & Multimodal & CLIP-ViT-B/32 & **93.2** & **56.9** \\ \hline TRX [33] & Unimodal & INet-ViT-B/32 & 67.2 & 37.3 \\ TRX [33] & Unimodal & CLIP-ViT-B/32 & 82.8 & 42.7 \\
**MA-CLIP(TRX)** & Multimodal & CLIP-ViT-B/32 & **92.8** & **52.4** \\ \hline \end{tabular}
\end{table}
Table 7: Method effectiveness with different temporal alignment metrics on SSv2-Small and Kinetics in the 5-way 1-shot task, and effectiveness comparison between the unimodal and multimodal models. The visual encoder is ViT-B/32.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Method & Dataset & Tunable Param (M) & Memory(G) & Time(H) & Acc \\ \hline \hline Full fine-tuning & SSv2-Small & 90.99 & 13.5 & 3.4 & 53.9 \\
**Adaptation** & SSv2-Small & **18.54** & **11.9** & **3.0** & **56.5** \\ \hline Full fine-tuning & SSv2-Full & 90.99 & 13.5 & 3.4 & 59.1 \\
**Adaptation** & SSv2-Full & **18.54** & **11.9** & **3.0** & **61.9** \\ \hline \end{tabular}
\end{table}
Table 8: Effectiveness comparison between full fine-tuning and adaptation on SSv2-Small and SSv2-Full in the 5-way 1-shot task. The visual encoder is ViT-B/32. “Memory(G)” refers to the amount of video memory usage, and “Time(H)” indicates the time required to train 10,000 tasks, measured in hours on a single RTX3090.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Method & Dataset & Pre-training & Num of training tasks & Acc \\ \hline OTAM [3] & SSv2-Small & INet-ViT-B/32 & 80000 & 38.2 \\ HYSRM [46] & SSv2-Small & INet-ViT-B/32 & 75000 & 40.4 \\ TRX [33] & SSv2-Small & INet-ViT-B/32 & 80000 & 37.3 \\
**MA-CLIP** & SSv2-Small & CLIP-ViT-B/32 & **20000** & **56.5** \\ \hline OTAM [3] & Kinetics & INet-ViT-B/32 & 10000 & 83.7 \\ HYSRM [46] & Kinetics & INet-ViT-B/32 & 10000 & 83.2 \\ TRX [33] & Kinetics & INet-ViT-B/32 & 10000 & 82.8 \\
**MA-CLIP** & Kinetics & CLIP-ViT-B/32 & **1000** & **93.5** \\ \hline \end{tabular}
\end{table}
Table 9: Comparison of different methods for the number of training tasks on SSv2-Small, Kinetics in the 5-way 1-shot task. The visual encoder is ViT-B/32. MA-CLIP’s temporal alignment metric is OTAM [3].
|
2303.04738 | Defectors: A Large, Diverse Python Dataset for Defect Prediction | Defect prediction has been a popular research topic where machine learning
(ML) and deep learning (DL) have found numerous applications. However, these
ML/DL-based defect prediction models are often limited by the quality and size
of their datasets. In this paper, we present Defectors, a large dataset for
just-in-time and line-level defect prediction. Defectors consists of $\approx$
213K source code files ($\approx$ 93K defective and $\approx$ 120K defect-free)
that span across 24 popular Python projects. These projects come from 18
different domains, including machine learning, automation, and
internet-of-things. Such a scale and diversity make Defectors a suitable
dataset for training ML/DL models, especially transformer models that require
large and diverse datasets. We also foresee several application areas of our
dataset including defect prediction and defect explanation.
Dataset link: https://doi.org/10.5281/zenodo.7708984 | Parvez Mahbub, Ohiduzzaman Shuvo, Mohammad Masudur Rahman | 2023-03-08T17:23:24Z | http://arxiv.org/abs/2303.04738v4 | # Defectors: A Large, Diverse Python Dataset for Defect Prediction
###### Abstract
Defect prediction has been a popular research topic where machine learning (ML) and deep learning (DL) have found numerous applications. However, these ML/DL-based defect prediction models are often limited by the quality and size of their datasets. In this paper, we present Defectors, a large dataset for just-in-time and line-level defect prediction. Defectors consists of \(\approx\) 213K source code files (\(\approx\) 93K defective and \(\approx\) 120K defect-free) that span across 24 popular Python projects. These projects come from 18 different domains, including machine learning, automation, and internet-of-things. Such a scale and diversity make Defectors a suitable dataset for training ML/DL models, especially transformer models that require large and diverse datasets. We also foresee several application areas of our dataset including defect prediction and defect explanation.
Defect Prediction, Just-in-Time, Dataset, Software Engineering
## I Introduction
A software defect is an incorrect step, process, or data definition in a computer program that prevents the program from working correctly [1]. Software defects are informally called software bugs. They cost the global economy billions of dollars every year [2, 3]. Despite the adoption of various software quality assurance (SQA) practices, defects may still sneak into official releases [4, 5]. Furthermore, a recent work [6] shows that only \(\approx\) 3% of the code lines of a whole release could lead to many of the bugs. Therefore, prioritizing SQA efforts for highly risky areas of source code is essential to ensure the high quality of a software release.
Defect prediction has been a popular research topic for the last few decades. It identifies the defects in software code before releasing the software to end users. It can also help prioritize the SQA efforts. Defects can be predicted at different abstraction levels such as module [7, 8], file [9, 10], method [11], and line [12, 13, 14, 6]. In recent years, just-in-time (JIT) defect prediction [14, 15, 16, 17, 18, 19, 20] also has gained significant attention, which predicts the defects just at the time of committing software changes. Thus, a combination of line-level defect prediction and JIT defect prediction can provide a fine-grained location of a software defect.
Over the past few years, deep-learning models have been used for both line-level defect prediction [6] and JIT defect prediction [18, 19]. Deep learning models provide state-of-the-art performance in various tasks of software engineering including bug localization [21, 22] and bug explanation [23]. However, their performance in JIT defect prediction is sub-optimal. Deep-learning-based tools such as DeepJIT [18] and CC2Vec [19] cannot outperform simpler models such as logistic regression. These models can be limited by the size and quality of their datasets. First, the performance of ML/DL models often scales with the size of their dataset [24, 25]. However, most of the existing datasets used in defect prediction might not be large enough [26]. Second, these datasets also suffer from the class imbalance problem containing only 5%-26% defective instances [26, 15, 20]. Such an imbalance could lead to sub-optimal performance with any deep-learning models. Third, these datasets were constructed either from a small number of projects [15] or the projects from a single organization [27, 26]. Such a choice limits the capability of these models to generalize their performances across different domains and organizations.
To mitigate the challenges with existing datasets, in this paper, we present _Defectors_ - a large-scale dataset, containing both source code and their changes from 24 popular Python projects across 18 domains and 24 organizations. We carefully identify defective source code files and their code changes, following five levels of noise filtration recommended in the literature. Our dataset contains \(\approx\) 213K source code files (\(\approx\) 93K defective and \(\approx\) 120K defect-free). It is suitable for training large models on the task of defect prediction that has the potential to provide high performance.
Defectors stands out from similar datasets in the following aspects.
1. **Size:** To the best of our knowledge, Defectors is the largest defect prediction dataset and is twice in size as the previous largest dataset (i.e., \(\approx\) 106K) [26].
2. **Class Balance:** It maintains a near 1:1 ratio between defective and defect-free instances in the training set, where existing datasets contain only 5%-26% defective instances [26, 15, 20].
3. **Diversity in Application:** Defectors uses 24 projects from 18 application domains and 24 organizations, where existing datasets either use a small number of projects (\(<10\)) [13, 15] or the projects from only one organization (e.g., Apache) [26, 27].
4. **Diversity in Platform:** Our dataset is based on Python projects, whereas nearly all existing datasets were constructed from Java-based projects. Thus, it diversifies the existing collection of defect prediction datasets.
The dataset is publicly available at the following link: [https://doi.org/10.5281/zenodo.7708984](https://doi.org/10.5281/zenodo.7708984)
## II Dataset Description And Usage
The Defectors dataset is one of the largest available datasets for defect prediction. Table I shows an overview of the columns, their data types, and their descriptions. Our dataset contains \(\approx\) 213K source code files and their changes (\(\approx\) 93K defective and \(\approx\) 120K defect-free). We store and distribute our dataset in Apache Parquet format, which is efficient, structured and compressed [28].
Our dataset is suitable for training large-scale models on software defect prediction. The training dataset maintains a near 1:1 ratio of defective and defect-free data, whereas _the test and validation datasets maintain the original distribution_. The Defectors dataset can be adapted for method-level defect prediction, as it contains the names of the methods touched by each commit. It can also be adapted for file-level defect prediction. Since our dataset contains references to thousands of bug-fix commits, it can be extended for bug explanation [23].
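For instance, a split stored in Parquet can be inspected directly with pandas, as sketched below; the file name and the `lines` column used here are placeholders standing in for the actual schema in Table I.

```python
import pandas as pd

# Hypothetical file name; use the actual split file shipped with the dataset.
df = pd.read_parquet("defectors_jit_train.parquet")

# In our formalization, a defective instance lists the added (defective) line numbers,
# while a defect-free instance has an empty list ("lines" is a placeholder column name).
defective = df[df["lines"].map(len) > 0]
print(f"{len(df)} instances, {len(defective)} defective")
```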
## III Dataset Construction
The Defectors dataset has been curated from 24 projects that span 18 domains and 24 organizations. We capture past bug-inducing and bug-fixing changes from these projects to construct our dataset.
First, we select the projects based on their popularity and other quality aspects (e.g., consistent issue labelling). Then, we identify the bug-fixing changes from these projects, and from them, we identify the bug-inducing changes using the SZZ algorithm [29]. Next, we apply five levels of noise filtration to minimize the number of false positives identified by the SZZ algorithm. After that, we sample defect-free code changes with a 95% confidence level and a 5% margin of error (see Section III-E). Finally, we store our dataset as train, validation, and test splits. The following sections discuss the major steps of our dataset construction process.
### _Project Selection_
Most of the existing datasets used in defect prediction are constructed using Java projects. To diversify the existing collection of datasets, we choose Python-based projects for our dataset. Similar to existing studies [26], we sort all the Python projects on GitHub in _descending order_ using their star counts. Then, we manually investigate this ordered list of repositories sequentially to find mature and high-quality projects. In each repository, we look for a consistent way of identifying the bug-fix pull requests (PR). The following steps summarize our repository selection process1.
Footnote 1: Accessed: December 2, 2022
1. If a repository has less than 2000 PRs, we discard the project considering that it is not mature enough.
2. For a mature repository, we identify all the bug-related labels (e.g., bug, bugfix) from the repository.
3. We find the PRs using these labels. If the number of such PRs is more than 100, then we accept the repository. We find 11 projects with consistently labelled PRs.
4. If the number of bug-fix PRs is not sufficient, we find the issues that are associated with one of these labels and have a linked pull request resolving the issue. If the number of such issues is more than 100, then we accept the repository. We find 12 projects with consistently labelled and linked issues.
5. If the number of neither PRs nor issues are sufficient, we read the titles of the first 200 PRs and look for any consistent patterns. We find that the titles of bug-fix PRs from one project start with a specific keyword (i.e., fix). The titles of bug-fix PRs from another project _consistently_ contain issue IDs from a different bug report management website. We accept these two projects for having such consistent patterns of labelling bug-fix PRs.
By following the steps above, we select a total of 25 projects from various organizations and domains. To find the first 25 projects matching these criteria, we had to investigate the first 100 repositories from the ordered list (based on descending star count). Both the first and the second author individually selected these projects to avoid any mistakes. Later, in our quality filtration stage, we discard a repository for not having enough bug-fix commits (see Section III-D). Table II enlists 24 projects along with their domain, bug report management system, and the ratio of defective and defect-free code changes in our dataset.
### _Bug Fixing Commit Collection_
During project selection, we check their bug report management system. For projects where bug-fix PRs are directly labelled with appropriate labels, we collect the labelled PRs. For projects where issue reports are labelled with appropriate labels, we first collect the issues and then collect their associated PRs. Using these two approaches, we collect \(\approx\) 39K bug-fix PRs from 23 projects. For the remaining two projects, we use slightly different approaches. getsentry/sentry follows a pattern where all bug-fix PRs start with the keyword _fix_. Therefore, we collect the PRs with such a pattern. Finally, django/django uses a separate website2 to manage their issues. In GitHub, its PRs contain corresponding issue IDs from that site. We first collect the closed bug reports from the issue site and then capture the corresponding PRs from
GitHub. Once we have the bug-fix PRs, we collect their merge commits as the bug-fix commits. This way, we collect \(\approx\) 51K bug-fix commits from 25 projects.
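As an illustration of the label-based collection above, merged PRs carrying a bug-related label can be fetched through the GitHub REST search API; the repository and label below are placeholders, and pagination and rate-limit handling are omitted.

```python
import requests

def labelled_bugfix_prs(repo, label="bug", token=None):
    """Return the numbers of merged PRs in `repo` that carry `label` (first result page only)."""
    headers = {"Authorization": f"token {token}"} if token else {}
    query = f"repo:{repo} is:pr is:merged label:{label}"
    resp = requests.get("https://api.github.com/search/issues",
                        params={"q": query, "per_page": 100}, headers=headers)
    resp.raise_for_status()
    return [item["number"] for item in resp.json()["items"]]

# Hypothetical usage: labelled_bugfix_prs("owner/project", label="bug", token="<GITHUB_TOKEN>")
```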
### _Bug Inducing Commit Collection_
In this step, we capture the bug-inducing commits from the bug-fix commits using the SZZ algorithm [29]. Most of the studies in defect prediction [13, 15, 26, 27, 30] use the SZZ algorithm to identify the bug-inducing changes. We use an implementation of the SZZ algorithm by the PyDriller tool [31]. This implementation takes a commit as the input and returns a list of commits that last changed the deleted lines in the input commit.
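The PyDriller-based SZZ step could look like the sketch below, assuming PyDriller 2.x, where the helper lives on the `Git` class; the repository path and commit hash are placeholders.

```python
from pydriller import Git

repo = Git("path/to/local/clone")                     # placeholder path to a cloned project
fix_commit = repo.get_commit("<bug-fix-commit-sha>")  # placeholder bug-fix commit hash

# For each file modified by the fix, PyDriller returns the commits that last changed
# the deleted lines, i.e., the SZZ candidates for bug-inducing commits.
candidates = repo.get_commits_last_modified_lines(fix_commit)
bug_inducing = set().union(*candidates.values()) if candidates else set()
print(sorted(bug_inducing))
```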
### _Bug Inducing Commits Filtration_
The SZZ algorithm yields a considerable number of false positives, i.e., it identifies defect-free commits as defective [27]. Thus, we apply a series of filtration steps inspired by the literature to minimize the number of false positives.
#### III-D1 Filtration Using The Number of Linked Bug Inducing Commits
SZZ often links several bug-inducing commits to a single bug-fix commit. This suggests that a bug could occur due to non-coherent changes in hundreds or even thousands of files, which is impractical. Therefore, existing studies discard the bug-fix commits that are linked to too many bug-inducing commits [15, 20, 26]. Let \(inducer\)-\(count\) be the number of bug-inducing commits linked to a single bug-fix commit. Keshavarz _et al._[26] suggest using Equation 1 as a threshold.
\[thresh(X)=mean(X)+std(X) \tag{1}\]
In this work, we use a threshold of 14, derived from the above equation to filter out the noisy bug-fix commits.
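As a small illustration, Equation 1 can be computed and applied as follows; the counts are toy values rather than numbers from our dataset, and the same computation is reused for the other count-based filters.

```python
import numpy as np

def noise_threshold(counts):
    """Equation 1: mean plus one standard deviation of the link counts."""
    counts = np.asarray(counts, dtype=float)
    return counts.mean() + counts.std()

# Toy example: inducer-count per bug-fix commit.
inducer_count = {"fix_a": 2, "fix_b": 5, "fix_c": 31}
threshold = noise_threshold(list(inducer_count.values()))
kept = {sha: count for sha, count in inducer_count.items() if count <= threshold}
```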
#### III-D2 Filtration Using The Number of Linked Bug-fix Commits
Let \(fixer\)-\(count\) be the number of bug-fix commits linked to a single bug-inducing commit. If a bug-inducing commit has a \(fixer\)-\(count\) greater than one, it suggests that the commit induced multiple bugs in the project, which are reported by multiple bug reports and are fixed by multiple bug-fix commits. Similar to \(inducer\)-\(count\), we apply Equation 1 to \(fixer\)-\(count\) as well. We thus discard the bug-inducing commits having a \(fixer\)-\(count\) higher than 7, which is derived from the above equation.
#### III-D3 Filtration Using The Size of Changed Code
If a commit changes a large number of lines or files, it indicates that the commit might contain tangled changes. During our manual analysis, we found some commits that modified even up to 1,000 files. Often these commits indicate some administrative tasks, for instance, merging several related projects into a single repository. Therefore, existing studies [15, 26] filter out the commits that are large. Similarly, we filter out the bug-inducing commits that have more than 1000 changed lines or have touched more than 100 files.
#### III-D4 Filtration Using File Type
In this paper, we focus specifically on Python source code. Such a language constraint
makes it easy to perform static analysis on source code. It also helps us capture the structural information from source code (e.g., changed methods). Even though Python is the main language of all of our projects, they contain a small fraction of non-Python files (e.g., configuration files). Thus, we filter out the commits that do not modify any Python file.
#### III-D5 Filtration Using The Nature of Change
All the changes in a source code file might not be bug-inducing. For instance, changes in the comments or code formatting generally do not introduce new bugs. As done by existing studies [26, 30, 32], we thus discard such trivial changes. Trivial changes do not modify the abstract syntax tree (AST) of the source code. Therefore, to identify trivial changes, we compare the AST of the source code before the commit to the AST after the commit. If both ASTs are the same, then the commit performs a trivial change, which is discarded. Our implementation of this filtration is tolerant of syntax errors. That is, if the source code is not syntactically correct, our implementation will still generate partial AST for comparison.
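A simplified version of this filter using Python's built-in `ast` module is shown below; unlike our actual implementation, which builds partial ASTs for syntactically invalid code, this sketch conservatively treats unparsable code as a non-trivial change.

```python
import ast

def is_trivial_change(src_before: str, src_after: str) -> bool:
    """True if the change leaves the AST untouched (e.g., comment or formatting edits only)."""
    try:
        return ast.dump(ast.parse(src_before)) == ast.dump(ast.parse(src_after))
    except SyntaxError:
        return False  # be conservative: keep the commit if either version does not parse

# Reformatting and comment edits do not alter the AST, so this change would be discarded.
assert is_trivial_change("x=1 # old comment", "x = 1  # new comment")
```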
After completing all these filtration steps, we find that one project, namely freqtrade/freqtrade, contains only one bug-fix commit. We thus discard the project and keep the remaining 24 repositories in our dataset.
### _Collecting and Sampling Defect-free Commits_
We collect all the commits within the date range of the defective (i.e., bug-inducing) commits from the same project. Then, we separate the defective commits from the defect-free commits using the commit hashes. In software projects, defect-free commits often outnumber defective commits by a large margin. Therefore, we down-sample the defect-free commits to ensure a near 1:1 class ratio. For each project, we sample defect-free commits with a 95% confidence level and a 5% margin of error. If this sample size is less than the number of defective commits, then we increase the sample size to achieve parity. Finally, we discard the defect-free commits that do not modify any Python file.
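The exact sample-size computation is not spelled out above; a sketch using the common formula for a 95% confidence level (z ≈ 1.96) and a 5% margin of error, with finite-population correction and parity enforcement, could look as follows (illustrative only):

```python
import math
import random

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Sample size for a finite population (worst-case proportion p = 0.5)."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)          # infinite-population size (~384)
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # finite-population correction

def sample_defect_free(defect_free_commits, defective_count, seed=0):
    """Down-sample defect-free commits, raising the sample size to the
    number of defective commits when needed to keep a near 1:1 ratio."""
    n = max(sample_size(len(defect_free_commits)), defective_count)
    n = min(n, len(defect_free_commits))
    return random.Random(seed).sample(defect_free_commits, n)
```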
### _Formalizing The Dataset_
We formalize our dataset by targeting two tasks - just-in-time (JIT) defect prediction (i.e., defect prediction on commit diff) and defect prediction from source files. For JIT defect prediction, the input is _git diff_, and for defect prediction from source files, the input is the content of the file after the commit. In both cases, the output is the list of added (i.e., defective) line numbers if the commit is defective. Otherwise, the output is an empty list. For both variants, we make train, validation, and test splits based on both random and time-wise splitting approaches. The training splits maintain a near 1:1 ratio of defective and defect-free instances, whereas _test and validation splits maintain the original distribution_. In particular, \(\approx\) 7% of the files in the codebase are defective and \(\approx\) 4% of the lines are defective in the defective files. These variants of our dataset could help to effectively train and comprehensively evaluate large-scale defect prediction models.
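To make the two variants concrete, a single (hypothetical) record of each kind could look like the following; the field names are illustrative and are not necessarily those used in the released dataset:

```python
# JIT variant: the input is the commit diff.
jit_record = {
    "repo": "owner/project",
    "commit": "abc123",                                  # commit hash (made up)
    "input": "diff --git a/foo.py b/foo.py\n+x = 1\n",   # git diff of the commit
    "defective_lines": [12, 13, 40],                     # [] for a defect-free commit
}

# File variant: the input is the file content after the commit.
file_record = {
    "repo": "owner/project",
    "commit": "abc123",
    "path": "foo.py",
    "input": "<content of foo.py after the commit>",
    "defective_lines": [12, 13, 40],
}
```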
## IV Limitations
Similar to former studies, we use the SZZ [29] algorithm to identify the bug-inducing commits. Despite its limitations [27, 30], SZZ is the most used algorithm to identify bug-inducing commits. To minimize the effect of mislabelling by SZZ, we follow several filtration steps from the literature [15, 26, 30] that have been shown to be effective. In this work, we limit our focus to a single language (i.e., Python). However, our dataset construction approach is language agnostic (see Section III) and can be adapted to any programming language.
## V Similar Datasets
Kamei _et al._[20] perform a large-scale study on change-level defect prediction using six open-source and five closed-source projects. They use the original SZZ algorithm [29] to annotate the bug-inducing changes. However, their dataset is not publicly available. McIntosh _et al._[15] conduct a time-series analysis on JIT defect prediction using two rapidly evolving projects. Jiang _et al._[13] attempt to personalize defect prediction for different developers. Their dataset consists of only six open-source projects and is not publicly available. Keshavarz _et al._[26] present ApacheJIT, a large dataset for JIT defect prediction. This dataset is derived from 14 open-source Java projects and contains \(\approx\) 106K software revisions. Despite its large size, this dataset contains projects from a single organization. Such a choice might limit the generalizability of a model trained on their dataset.
All of these datasets use heuristic-based approaches that look for certain keywords (e.g., fix) or issue numbers in commit messages to identify bug-fixing changes. However, Yatish _et al._[33] mention that such approaches produce a significant amount of mislabeled data. They suggest using bug-fix labels directly provided by the developers. In a case study of nine projects, they show that this approach leads to improvement in prediction accuracy. In our work, we use projects where bug-fix labels are directly provided by the developers. The distinction between our dataset and that of Yatish _et al._[33] is that our dataset is more diverse than theirs in terms of repositories (9 vs. 24 repositories) and organizations (1 vs. 24 organizations). Furthermore, all of the aforementioned datasets are constructed from Java projects whereas our dataset is constructed from Python projects.
## VI Conclusion
Among various defect prediction approaches, line-level defect prediction and just-in-time (JIT) defect prediction offer the most precise predictions. Deep-learning models have strong potential to support such predictions. However, the performance of these models could still be limited by the lack of a large, diverse, and balanced dataset. In this paper, we present Defectors, a large dataset suitable for both line-level defect prediction and JIT defect prediction. Our dataset can be used to train large deep-learning models that are precise in their defect predictions and could be generalizable.
## Acknowledgement
This work was supported by Dalhousie University and Mitacs Accelerate International Program. We would like to thank Avinash Gopal, Ben Reaves, and Massimiliano Genta from our industry partner - _Metabob Inc_.
|
2301.12723 | Measuring robustness of dynamical systems. Relating time and space to
length and precision | Verification of discrete time or continuous time dynamical systems over the
reals is known to be undecidable. It is however known that undecidability does
not hold for various classes of systems: if robustness is defined as the fact
that reachability relation is stable under infinitesimal perturbation, then
their reachability relation is decidable. In other words, undecidability
implies sensitivity under infinitesimal perturbation, a property usually not
expected in systems considered in practice, and hence can be seen (somehow
informally) as an artefact of the theory, that always assumes exactness. In a
similar vein, it is known that, while undecidability holds for logical formulas
over the reals, it does not hold when considering delta-undecidability: one
must determine whether a property is true, or $\delta$-far from being true. We
first extend the previous statements to a theory for general (discrete time,
continuous-time, and even hybrid) dynamical systems, and we relate the two
approaches. We also relate robustness to some geometric properties of
reachability relation. But mainly, when a system is robust, it then makes sense
to quantify at which level of perturbation. We prove that assuming robustness
to polynomial perturbations on precision leads to reachability verifiable in
complexity class PSPACE, and even to a characterization of this complexity
class. We prove that assuming robustness to polynomial perturbations on time or
length of trajectories leads to similar statements, but with PTIME. It has been
recently unexpectedly shown that the length of a solution of a polynomial
ordinary differential equation corresponds to a time of computation: PTIME
corresponds to solutions of polynomial differential equations of polynomial
length. Our results argue that the answer is given by precision: space
corresponds to the involved precision. | Manon Blanc, Olivier Bournez | 2023-01-30T08:44:43Z | http://arxiv.org/abs/2301.12723v2 | # Measuring robustness of dynamical systems.
###### Abstract
Verification of discrete time or continuous time dynamical systems over the reals is known to be undecidable. It is however known that undecidability does not hold for various classes of systems when considering _robust_ systems: if robustness is defined as the fact that reachability relation is stable under infinitesimal perturbation, then their reachability relation is decidable. In other words, undecidability implies sensitivity under infinitesimal perturbation, a property usually not expected in systems considered "in practice", and hence can be seen (somehow informally) as an artifact of the theory, that always assumes exactness. In a similar vein, it is known that, while undecidability holds for logical formulas over the reals, it does not hold when considering \(\delta\)-undecidability: one must determine whether a property is true, or \(\delta\)-far from being true.
We first extend the previous statements to a theory for general (discrete time, continuous-time, and even hybrid) dynamical systems, and we relate the two approaches. We also relate robustness to some geometric properties of reachability relation.
But mainly, when a system is robust, it then makes sense to quantify at which level of perturbation. We prove that assuming robustness to polynomial perturbations on precision leads to reachability verifiable in complexity class PSPACE, and even to a characterization of this complexity class. We prove that assuming robustness to polynomial perturbations on time or length of trajectories leads to similar statements, but with PTIME.
Actually, these results can also be read in another way, in terms of computational power of analog models. It has been recently unexpectedly shown that the length of a solution of a polynomial ordinary differential equation corresponds to a time of computation: PTIME corresponds to solutions of polynomial differential equations of polynomial length. Can we do something similar for PSPACE? How should we measure space robustly for dynamical systems? Our results argue that the answer is given by precision: space corresponds to the involved precision.
## I Introduction
Several research communities have studied the relations between dynamical systems and computation. This includes studies of the unpredictability and undecidability in dynamical systems [27, 26], questions related to the hardness of verification for discrete, continuous and the so-called hybrid systems [2], questions related to the computational power of models of recurrent neural networks [31], or control theory questions [3, 10].
_Motivation related to verification:_ while several undecidability results were stated for hybrid systems (such as Linear Hybrid Automata [22] or Piecewise Constant Derivative systems [2]), and while it was observed that some practical tools were "working well" (terminating) in practice, a folklore (sometimes informal) conjecture appeared in the literature of verification or in several talks, stating that this undecidability is due to non-stability, non-robustness, sensitivity to the initial values of the systems, and that it never occurs in "real" systems. There were several attempts to formalize and to prove (or to disprove) this conjecture, including [15, 23]. Refer to [1, 10] for some discussions.
In this article, we are inspired by the approach of [1], where several models for discrete, continuous time, and hybrid dynamical systems are considered: Turing machines, Piecewise affine maps, Linear hybrid automata and Piecewise Constant Derivative systems. To each of these models is associated a notion of perturbed dynamics by a small \(\epsilon\) (with respect to a suitable metrics), and a perturbed reachability relation is defined as the intersection of all reachability relations obtained by \(\epsilon\)-perturbations, for all possible values of \(\epsilon\). The authors show that for all these models, the perturbed reachability relation is co-computably enumerable (co-c.e., \(\Pi^{0}_{1}\)), and that any co-c.e. relation can be defined as the perturbed reachability relation of such models. A corollary is, that if we define robustness as stability of the reachability relation under infinitesimal perturbations, then robust systems have a decidable reachability relation, and hence a decidable verification.
In a similar vein, [16] observed that some logics such as real arithmetic are decidable, but even the set of \(\Sigma_{1}\) sentences in a language extending real arithmetic with the sine function is already undecidable, and so deciding satisfaction of such "simple" formulas is impossible in the general case. However, such a theoretical negative result refers only to the problem of deciding logical formulas symbolically and precisely. If a relaxed notion of correctness is considered, then verification becomes algorithmically solvable: one asks to answer true when a given formula \(\phi\) is true, and to return false when it is \(\delta\)-robustly wrong, namely \(\phi\) is wrong, but actually some \(\delta\)-strengthening of \(\phi\) is false. In other words, undecidability happens only for
exact questions, whose decision is not stable under infinitesimal perturbations, while robust satisfaction is decidable.
These approaches relate robustness to the decidability of verification, or robust satisfaction to the decidability of satisfaction, i.e. a _computability_ question. Our purpose in the current article is to extend all this, and in particular [1], to more general classes of dynamical systems, but also, and mainly, to discuss _complexity_ issues: when is the verification of a system in \(\mathrm{PTIME}\) or in \(\mathrm{PSPACE}\)? How does it relate to the robustness of the system with respect to perturbations?
Basically, when a system is robust, it makes sense to quantify at which level it is: which level of perturbation \(\epsilon\) is allowed? We basically prove that assuming this perturbation is polynomial in the data leads to a characterization of \(\mathrm{PSPACE}\). This provides ways to guarantee complexity of the verification of a system. Actually, we also show that this idea can be used to define various complexity classes, by playing with various concepts of robustness: we introduce the concept of time and length perturbation (basically trajectories cannot be too long), and we prove that this leads to \(\mathrm{PTIME}\), and hence leads to systems where verification becomes polynomial.
_Motivation related to models of computation:_ another field of research is related to understanding how analog (continuous-time) models of computation compare to more classical discrete models such as Turing machines. In particular, a famous and celebrated historical mathematical model is the General Purpose Analog Computer (GPAC) model, introduced by Shannon in 1941 in [30] as a model for the Differential Analyzer [13], a mechanical programmable machine on which he worked as an operator. It is known that functions computed by a GPAC correspond to a (component of the) solution of a system of Ordinary Differential Equations (ODE) with polynomial right-hand side (also called _pODE_) [30, 19].
Relating the computational power of this model to classical models such as Turing machines, at the complexity level, is not a trivial task. In short, contrary to discrete models of computation, continuous time models of computation (not only the GPAC, but many others) may exhibit the so-called "Zeno phenomena", where time can be arbitrarily contracted in a continuous system, thus allowing an arbitrary speed-up of the computation. This forbids taking the naive approach of using the time variable of the ODE as a well-defined measure of time complexity: see [4, 10] for discussions.
A celebrated recent breakthrough has been obtained in [28, 7], where it has been proved that a key idea to solve this issue is that, when using pODEs (in principle this idea can also be used for other continuous models of computation) to compute a given function \(f\), the cost should be measured as a function of the length of the solution curve of the polynomial initial value problem computing the function \(f\). As ODEs are a kind of universal language in experimental sciences, this breakthrough led to the solution of several open problems in various other contexts. This includes the proof of the existence of a universal (in the sense of Rubel) ODE [9]; the proof of the strong Turing-completeness of biochemical reactions [14]; and statements about the completeness of reachability problems (e.g. \(\mathrm{PTIME}\)-completeness of bounded reachability) for ODEs [7].
While it is now known that time should be measured by length, the question of how space should be measured remains unclear. How should we measure the "memory" used by some ordinary differential equation? Can we provide a characterization of \(\mathrm{PSPACE}\) using ODEs similar to the characterization of \(\mathrm{PTIME}\) obtained in [7]? If time corresponds to length, what does space correspond to? We give some arguments to state that basically, over a compact domain, space corresponds to precision, while it corresponds to the log of the size of some graph for systems over more general domains.
_Related work and main results:_ the current article lays its foundations on some extensions of [1]. We reformulate some of their statements, and extend some of their results. In particular, we allow more general discrete time and continuous time dynamical systems, while [1] was restricted to Piecewise Affine Maps (PAM) and hybrid systems, both of them assumed to work on a bounded domain. Our statements are established in a wide generality by considering only computability of the domain or the dynamics. We know about generalizations that have also been obtained in [8], but focusing on dynamical systems as language recognizers, and mainly focusing somehow on generalizations of Theorem 14 ([1, Theorem 4]), while we discuss here reachability in a more natural and general way.
We also connect our approach to \(\delta\)-decidability introduced in [16]. Observe that the logics considered in the latter are not sufficiently expressive to cover our results (or [1]).
Another clear difference is also that we talk about complexity issues (space, time), while all these references are only talking about computability issues. Furthermore, we consider not only space perturbations, but also time and length perturbations, and we prove that many of the results, even for computability, can also be extended to this framework.
In Section II, we recall some known facts about the hardness of reachability (accessibility) problems in graphs, discussing in particular their encoding.
Before going to general discrete time dynamical systems, and later continuous and hybrid time dynamical systems, we consider a specific case of discrete time dynamical systems, namely Turing machines (TMs). We think this helps to understand the coming statements. We basically introduce in Section III the concept of (space) perturbed Turing machine, following [1], and recall some of the results established there. The original contributions in this section are the extension of this framework to complexity. We prove that considering polynomial robustness corresponds precisely to polynomial space, i.e. \(\mathrm{PSPACE}\) (Theorem 17). We consider here space perturbations, but we prove later (in Section VIII) that a natural concept of time perturbation could also be considered in order to lead to a characterization of polynomial time, i.e. \(\mathrm{PTIME}\) (Theorem 58), using similar ideas.
Section IV recalls how several undecidability results for dynamical systems are established using the embedding of a class of dynamical systems (e.g. TMs) into another. We believe this helps to understand the related sources of non-robustness.
Section V is considering general discrete time dynamical systems (over \(\mathbb{R}^{d}\)). We use the concept of perturbation and robustness from [1]. But unlike the latter article, mainly restricting to Piecewise Affine Maps (PAMs), we consider general systems, discussing the importance of the domain, and of the preservation of rational numbers as in PAMs. A main technical result is then Theorem 24: this is an extension of [1, Theorem 5] to the general setting, established using similar ideas, but not exactly1. We also extend the framework to the case of systems that would not preserve rationals. This leads to an extension of the main statement of [1]: robustness implies decidability of reachability (Corollary 25 and Corollary 39). Then we prove that this theory can actually be related to the approach of \(\delta\)-decidability proposed in [16]: (\(\epsilon\)-)robust means true or (\(\epsilon\)-)far from being true (Theorem 27). This is also an important original contribution relating the two approaches, giving arguments in the spirit of [16]. Our key point is then to be able to extend all this to the complexity level. One key result is that polynomially robust implies reachability in \(\mathrm{PSPACE}\) (Theorem 35). This is even a characterization of \(\mathrm{PSPACE}\) (Theorem 36). Concretely, Section V is split into two subsections. In the first part, we only need to talk about rational numbers, and classical computability. In the second part, as functions may take non-rational values, we use the framework of Computable Analysis (CA). The first part helps for the intuition of the second. But actually, as a computable function in CA is continuous, the first is not a consequence of the second, even if several methods and reasoning are shared.
Footnote 1: As the proof there turns out to be ambiguous about whether “can make a transition” means in one step vs in one or many steps, and we are not sure to fully follow it in its exact form.
Section VI provides a nice and original view. Using various results established in the community of CA, we prove that robustness can also be seen as having a reachability relation that can be drawn. We use there the fact that the latter relates to computability for subsets of \(\mathbb{R}^{p}\).
In Section VII, we state that similar statements hold also for continuous time, and even hybrid dynamical systems.
Section VIII proves that other perturbations could be considered and would lead to the time complexity class \(\mathrm{PTIME}\) instead of \(\mathrm{PSPACE}\). Section IX discusses how the current work relates to the known characterizations of complexity classes with continuous time dynamical systems. Notice that some characterizations of \(\mathrm{PSPACE}\) have been obtained recently in [17, 5] (discussed in this section), but at the price of technical conditions that, we believe, do not provide a fully satisfying answer to the previous questions.
_Preliminaries:_ a * means that a (possibly more extended) proof can be found in the appendix. We write \(d(\cdot,\cdot)\) for the sup-norm distance. An open (respectively closed) _rational ball_ is a subset of the form \(B(\mathbf{x},\delta)=\{\mathbf{y}:d(\mathbf{x},\mathbf{y})<\delta\}\) (resp. \(\overline{B}(\mathbf{x},\delta)=\{\mathbf{y}:d(\mathbf{x},\mathbf{y})\leq\delta\}\)) for some rational \(\mathbf{x}\) and \(\delta\). We could in principle use the Euclidean distance, but this distance has the advantage that its balls correspond directly to rounding at some precision. A set of the form \(\prod_{i=1}^{d}[a_{i},b_{i}]\), for rational \((a_{i})\), \((b_{i})\), will be called a _rational closed box_. An open rational box is obtained by considering open intervals in the previous definition. The least closed set containing \(X\) is denoted by \(cls(X)\). We write \(\ell(\cdot)\) for the function that measures the binary size of its argument.
## II About reachability in graphs
Consider a directed graph \(G=(V,\rightarrow)\), with \(\rightarrow\subseteq V^{2}\). It is said _deterministic_ if any node has at most one outgoing edge.
**Lemma 1** (Reachability for graphs).: _Consider the following decision problem \(\mathrm{PATH}(G,u,v)\): given a directed graph \(G=(V,\rightarrow)\), and some vertices \(u,v\in V\), determine whether there is some path between \(u\) and \(v\) in \(G\), denoted by \(u\stackrel{{*}}{{\rightarrow}}v\)._
_Then \(\mathrm{PATH}(G,u,v)\in\mathrm{NLOGSPACE}\)._
Proof.: Basically, a non-deterministic TM can guess the intermediate nodes of the path: see e.g. [32, Example 8.19].
**Lemma 2** (Immerman-Szelepcsenyi's theorem [24, 33]).: \(\mathrm{NLOGSPACE}=\mathrm{coNLOGSPACE}\).
Proof.: See e.g. [32, Theorem 8.22].
Actually, as we will see, we mainly focus on its complement:
**Corollary 3**.: _Consider the following decision problem \(\mathrm{NOPATH}(G,u,v)\): given a directed graph \(G=(V,\rightarrow)\), and some vertices \(u,v\in V\), determine whether there is no path between \(u\) and \(v\) in \(G\)._
_Then \(\mathrm{NOPATH}(G,u,v)\in\mathrm{NLOGSPACE}\)._
**Theorem 4** (Savitch's theorem).: _For any function \(f:\mathbb{N}\rightarrow\mathbb{N}\) with \(f(n)\geq\log n\), we have \(\mathrm{NSPACE}(f(n))\subseteq\mathrm{SPACE}(f^{2}(n))\)._
Proof.: See e.g. [32, Theorem 8.5].
**Corollary 5**.: \(\mathrm{PATH}(G,u,v)\in\mathrm{SPACE}(log^{2}(n))\) _and_
\(\mathrm{NOPATH}(G,u,v)\in\mathrm{SPACE}(log^{2}(n))\)_._
**Remark 6**.: _Notice that detecting whether there is no path between \(u\) and \(v\) is basically equivalent to determining whether all paths starting from \(u\) "loop", i.e. remain disjoint from \(v\). The above statement is established using a more subtle method than a simple depth- or breadth-first search of the graph, using the trick of the proof of Savitch's theorem, i.e. a recursive procedure (expressing reachability in less than \(2^{t}\) steps, called \(\mathrm{CANYIELD}(\mathrm{C}_{1},\mathrm{C}_{2},\mathrm{t})\) in [32]) guaranteeing the above space complexity._
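For concreteness, a direct transcription of this recursive procedure on an explicit finite graph is sketched below (the name follows [32]; the code is only an illustration, with `edge(u, v)` standing for the one-step relation):

```python
import math

def can_yield(vertices, edge, u, v, t):
    """Decide whether there is a path from u to v of length at most 2**t.

    The recursion depth is t and each frame stores O(log |V|) bits,
    which is the space bound exploited in Savitch's theorem."""
    if t == 0:
        return u == v or edge(u, v)
    # A nondeterministic machine would guess the midpoint w; a deterministic
    # one enumerates all candidates and checks both halves recursively.
    return any(can_yield(vertices, edge, u, w, t - 1) and
               can_yield(vertices, edge, w, v, t - 1)
               for w in vertices)

def path_exists(vertices, edge, u, v):
    t = max(1, math.ceil(math.log2(len(vertices))))  # a path needs at most |V| - 1 steps
    return can_yield(vertices, edge, u, v, t)
```

The point is not time efficiency (the enumeration of midpoints is exponential in time) but the \(\mathcal{O}(\log^{2}n)\) bound on the space used by the recursion stack; \(\mathrm{NOPATH}\) is obtained by negating the answer.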
Our purpose is to talk about various discrete time dynamical systems (and later even continuous time dynamical systems).
**Definition 7** (Discrete Time Dynamical System).: _A (general) discrete-time dynamical system \(\mathcal{P}\) is given by a set \(X\), called domain, and some (possibly partial) function \(f\) from \(X\) to \(X\)._
A trajectory of \(\mathcal{P}\) is a sequence \((x_{t})\) evolving according to \(f\), i.e. such that \(x_{t+1}=f\left(x_{t}\right)\) for all \(t\). We say that \(x^{*}\) (or a set \(X^{*}\)) is reachable from \(x\) if there is a trajectory with \(x_{0}=x\) and \(x_{t}=x^{*}\) (respectively \(x_{t}\in X^{*}\)) for some \(t\).
In other words, any discrete time dynamical system \(\mathcal{P}\) can be seen as a particular (deterministic) directed graph but where
\(V\) is not necessarily finite. This graph corresponds to \(V=X\), and \(\rightarrow\) corresponds to the graph of function \(f\). If it remains finite, we can generalize some of the previous statements, but working over representations in order to make things feasible.
**Corollary 8** (Reachability for finite graphs).: _Let \(s(n)\geq\log(n)\). Assume the vertices of \(G\) can be encoded in binary using words of length \(s(n)\). Assume the relation \(\rightarrow\) is decidable using a space polynomial in \(s(n)\) with this encoding._
_Then, given the encoding of \(u\in V\) and of \(v\in V\), we can decide whether there is a (respectively: no) path from \(u\) to \(v\), in a space polynomial in \(s(n)\)._
Proof.: Use similar arguments and algorithms (in particular the trick of Savitch's theorem) as in previous corollary, but working over representations of vertices.
We will still write (abusively) \(\mathrm{PATH}\) and \(\mathrm{NOPATH}\) for these problems. Notice that assuming that the vertices of \(G\) can be encoded in binary using words of length \(s(n)\) requires the graph \(G\) to be finite, with less than \(2^{s(n)}\) vertices.
**Remark 9**.: _If we write \(\log\)-\(\mathrm{size}(G)\) for the log of the number of vertices of a finite graph, we see that this provides a relevant measure of the space complexity of these problems (with the above assumptions on \(G\))._
## III Turing machines
Let us recall the definition of a (bi-infinite tape) Turing machine (TM): let \(\Sigma\) be a finite alphabet, and let \(B\not\in\Sigma\) be the blank symbol. A TM over \(\Sigma\) is a tuple \((Q,q_{0},F,R,\Gamma)\) where \(Q\) is a finite set of control states, \(q_{0}\in Q\) is the initial control state, \(F\subseteq Q\) (respectively \(R\subseteq Q\)) is a set of accepting (respectively rejecting) states, with \(F\cap R=\emptyset\), and \(\Gamma\) is a set of transitions of the form \((q,a)\rightarrow(q^{\prime},b,\delta)\) where \(q,q^{\prime}\in Q\), \(a,b\in\Sigma\cup\{B\}\), and \(\delta\in\{-1,0,1\}\). When the machine has accepted or rejected, the decision is unchanged: when \(q\in F\), then \(q^{\prime}\in F\), and when \(q\in R\) then \(q^{\prime}\in R\).
A configuration \(C\) of the machine is given by the current control state \(q\), and the current content of the bi-infinite tape: \(\cdots a_{-2}a_{-1}a_{0}a_{1}a_{2}\cdots\), where the \(a_{i}\)'s are symbols in \(\Sigma\cup\{B\}\): this means that the head of the machine is in front of symbol \(a_{0}\). We write \(\mathcal{C}_{\mathcal{M}}\) for the set of the configurations of a TM, and write such a configuration as the triple \((q,\cdots a_{-2}a_{-1},a_{0}a_{1}a_{2}\cdots)\). Given a transition \((q,a)\rightarrow(q^{\prime},b,\delta)\) in \(\Gamma\), if the control state is \(q\) and the symbol pointed by the head of the machine is equal to \(a\), then the machine can change its configuration \(C\) to the configuration \(C^{\prime}\) in the following manner: the control state is now \(q^{\prime}\), the symbol pointed by the head is replaced by \(b\) and then the head is moved to the left or to the right, or it stays at the same position according to whether \(\delta\) is \(-1,1\), or 0, respectively. We write \(C\vdash C^{\prime}\) when this holds, i.e. \(C^{\prime}\) is the one-step next configuration of the configuration \(C\). Then \((\mathcal{C}_{\mathcal{M}},\vdash)\) corresponds to a particular dynamical system.
Word \(w=a_{1}\cdots a_{n}\in\Sigma^{*}\) is accepted by \(\mathcal{M}\) if, starting from the initial configuration \(C_{0}=C_{0}[w]=(q_{0},\cdots BBB,a_{1}a_{2}\cdots a_{n}BBB\cdots)\) the machine eventually stops in an accepting control state: that is, if we write \(\mathcal{F}\) for the configurations where \(q\in F\), iff \(C_{0}\stackrel{{*}}{{\vdash}}C^{*}\) for some \(C^{*}\in\mathcal{F}\). Let \(L(\mathcal{M})\) denote the set of such words, i.e., the computably enumerable (c.e) language semi-recognized by \(\mathcal{M}\). We say that \(w\) is rejected by \(\mathcal{M}\) if, starting from the configuration \(C_{0}\) the machine \(\mathcal{M}\) eventually stops in a rejecting state. \(\mathcal{M}\) is said to always halt if for all \(w\), either \(w\) is accepted or \(w\) is rejected.
The article [1] introduces the concept of space perturbed Turing machine: the idea is, given \(n>0\), that the \(n\)-perturbed version of the machine \(\mathcal{M}\) is unable to remain correct at distance more than \(n\) from the head of the machine. Namely, the \(n\)-perturbed version \(\mathcal{M}_{n}\) of the machine is defined exactly as \(\mathcal{M}\) except that before any transition all the symbols at distance more than \(n\) from the head of the machine can be altered: given a configuration \((q,\cdots a_{-n-1}a_{-n}a_{-n+1}\cdots a_{-1},a_{0}a_{1}\cdots a_{n-1}a_{n}a_{n+1}\cdots)\), \(\mathcal{M}_{n}\) may replace any symbol to the left of \(a_{-n}\) (starting from \(a_{-n-1}\)) and to the right of \(a_{n}\) (starting from \(a_{n+1}\)) by any other symbols in \(\Sigma\cup\{B\}\) before executing a transition of \(\mathcal{M}\) at head position \(a_{0}\). Hence \(\mathcal{M}_{n}\) is nondeterministic.
A word \(w\) is accepted by the \(n\)-perturbed version of \(\mathcal{M}\) iff there exists a run of this machine which stops in an accepting state. Let \(L_{n}(\mathcal{M})\) be the \(n\)-perturbed language of \(\mathcal{M}\), i.e., the set of words in \(\Sigma^{*}\) that are accepted by the \(n\)-perturbed version of \(\mathcal{M}\). From definitions, if a word is accepted by \(\mathcal{M}\), then it can also be recognized by all the \(n\)-perturbed versions of \(\mathcal{M}\), for every \(n>0\): perturbed machines have more behaviours. Moreover, if the \((n+1)\)-perturbed version accepts a word \(w\), the \(n\)-perturbed version will also accept it since \(n\)-perturbed machines have more behaviours than \((n+1)\)-perturbed machines.
Let \(L_{\omega}(\mathcal{M})=\bigcap_{n}L_{n}(\mathcal{M})\): this consists of all the words that can be accepted by \(\mathcal{M}\) when subject to arbitrarily "small" perturbations. From definitions:
**Lemma 10** ([1, Lemma 2]).: \[L(\mathcal{M})\subseteq L_{\omega}(\mathcal{M})\subseteq\cdots\subseteq L_{2} (\mathcal{M})\subseteq L_{1}(\mathcal{M}).\]
The \(\omega\)-perturbed language of a TM is the complement of a computably enumerable language:
**Theorem 11** (Perturbed reachability is co-c.e. [1, Theorem 3]).: \(L_{\omega}(\mathcal{M})\) _is in the class \(\Pi^{0}_{1}\)._
Proof.: Given a bi-infinite configuration \(C\) of \(M\) of the form \((q,\cdots a_{-n-1}a_{-n}\ldots a_{-1},a_{0}a_{1}\ldots a_{n}a_{n+1}\cdots)\), we define \(\varphi_{n}(C)=(q,a_{-n}\cdots a_{-1},a_{0}a_{1}\cdots a_{n})\) made of words of length \(n\) and \(n+1\).
For every \(n\in\mathbb{N}\), we associate to the \(n\)-perturbed version \(\mathcal{M}_{n}\) of TM \(\mathcal{M}\) some graph \(G_{n}=(V_{n},\rightarrow_{n})\): the vertices, denoted \((\mathcal{V}_{i})_{i}\), of this graph correspond to the \(|Q|\times(|\Sigma|+1)^{2n+1}\) possible values of \(\varphi_{n}(C)\) for a configuration \(C\) of \(\mathcal{M}\). There is an edge between \(\mathcal{V}_{i}\) and \(\mathcal{V}_{j}\) in \(G_{n}\) iff \(\mathcal{M}_{n}\) can go from configuration \(C\) to configuration \(C^{\prime}\) in one step, with \(\varphi_{n}(C)=\mathcal{V}_{i}\) and \(\varphi_{n}(C^{\prime})=\mathcal{V}_{j}\).
Determining whether \(\mathcal{V}_{i}\rightarrow\mathcal{V}_{j}\) holds is easy (and in particular polynomial space computable): when the head moves to the left (resp. to the right), a symbol in \(\Sigma\cup\{B\}\) is nondeterministically chosen and appended to the left (resp. right) of the truncated configuration, and the right-most (resp. left-most) symbol is lost (it now belongs to the perturbed area of the configuration and hence can be replaced by any other symbol).
Let \(\mathcal{F}_{n}=\varphi_{n}(\mathcal{F})\) correspond to the accepting control states. By construction, the \(n\)-perturbed version \(\mathcal{M}_{n}\) of \(\mathcal{M}\) has an accepting run starting from a configuration \(C\), iff \(\mathcal{F}_{n}\) is reachable from \(\varphi_{n}(C)\), that is to say \(\mathrm{PATH}(G_{n},\varphi_{n}(C),\mathcal{F}_{n})\). By Corollary 8, this is decidable in space polynomial in \(n\).
Let \(Basis_{n}\) be the finite set of sequences \(s_{n}\in\Sigma^{n+1}\), such that \(\mathcal{F}_{n}\) is reachable from \(C_{0}[s_{n}]\). Let \(Short_{n}\) be the finite set of sequences \(s_{k}\in\Sigma^{k}\) with \(k\leq n\), such that \(\mathcal{F}_{n}\) is reachable from \(C_{0}[s_{k}B^{n-k}]\). Then \(L_{n}(\mathcal{M})=Short_{n}\cup Basis_{n}\,\Sigma^{*}\). Consequently, \(L_{n}(\mathcal{M})\) is decidable in space polynomial in \(n\), and hence its complement also is. Thus, the complement of \(L_{\omega}(\mathcal{M})\) is c.e., as it is a (uniform in \(n\)) union of decidable sets, and hence \(L_{\omega}(\mathcal{M})\) is in \(\Pi^{0}_{1}\).
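For concreteness, here is a sketch of the one-step successor relation between truncated configurations \(\varphi_{n}(C)\) that defines the edges of \(G_{n}\); the representation of the transition table and the data layout are assumptions made for this example, not the encoding used in [1]:

```python
def perturbed_successors(window, transitions, alphabet):
    """One-step successors of a truncated configuration of the n-perturbed machine.

    `window` = (q, left, right) with len(left) == n >= 1 and len(right) == n + 1:
    left is a_{-n}..a_{-1}, right is a_0..a_n and the head reads right[0].
    `transitions[(q, a)]` is a set of triples (q2, b, move), move in {-1, 0, +1}.
    Symbols entering the window from the perturbed area are chosen
    nondeterministically in `alphabet` (which must contain the blank)."""
    q, left, right = window
    for (q2, b, move) in transitions.get((q, right[0]), ()):
        if move == 0:
            yield (q2, left, b + right[1:])
        elif move == +1:
            for c in alphabet:                       # symbol revealed on the right
                yield (q2, left[1:] + b, right[1:] + c)
        else:  # move == -1
            for c in alphabet:                       # symbol revealed on the left
                yield (q2, c + left[:-1], left[-1] + b + right[1:-1])
```

Enumerating the \(|Q|\times(|\Sigma|+1)^{2n+1}\) windows and these successors yields the graph \(G_{n}\) on which \(\mathrm{PATH}\) is then decided.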
Since a set that is c.e. and co-c.e. is decidable, robust languages (i.e. \(L_{\omega}(\mathcal{M})=L(\mathcal{M})\)) are necessarily decidable.
**Corollary 12** (Robust \(\Rightarrow\) decidable [1, Corollary 3]).: _If \(L_{\omega}(\mathcal{M})=L(\mathcal{M})\) then \(L(\mathcal{M})\) is decidable._
The converse holds if another requirement on \(\mathcal{M}\) is added. Indeed, since \(L(\mathcal{M})\subseteq L_{\omega}(\mathcal{M})\), \(L_{\omega}(\mathcal{M})\neq L(\mathcal{M})\) means that there exists some word \(w\), not accepted by \(\mathcal{M}\), but accepted by every \(n\)-perturbed version \(\mathcal{M}_{n}\). This \(w\) is not rejected by \(\mathcal{M}\) in finite time, otherwise it would use finitely many cells of the tape, and with \(n\) sufficiently big, \(\mathcal{M}_{n}\) would still reject it. In other words, this \(w\) must be neither accepted nor rejected by \(\mathcal{M}\).
**Proposition 13** (Decidable \(\Rightarrow\) robust [1, Proposition 1]).: _Assume \(\mathcal{M}\) always halts. Then \(L(\mathcal{M})\) is decidable and \(L_{\omega}(\mathcal{M})=L(\mathcal{M})\)._
In general, \(\omega\)-perturbed languages are not computably enumerable. Some of them are complete among co-c.e. languages: perturbed reachability is complete in \(\Pi^{0}_{1}\)[1, Theorem 4].
**Theorem 14** (Perturbed reachability is complete in \(\Pi^{0}_{1}\)[1, Theorem 4]).: _For every TM \(\mathcal{M}\), we can effectively construct another TM \(\mathcal{M}^{\prime}\) such that \(L_{\omega}\left(\mathcal{M}^{\prime}\right)=\overline{L(\mathcal{M})}\)._
Actually, it is possible to move on to complexity issues, and not restrict only to computability.
Indeed, when a language is robust, it makes sense to measure what level of perturbation \(f\) can be tolerated:
**Definition 15**.: _Given some function \(f:\mathbb{N}\to\mathbb{N}\), we write \(L_{\{f\}}(\mathcal{M})\) for the set of words accepted by \(\mathcal{M}\) with space perturbation \(f\): \(L_{\{f\}}(\mathcal{M})=\{w|\ w\in L_{f(\ell(w))}(\mathcal{M})\}\)._
The proof above of Theorem 11 establishes explicitly:
**Lemma 16**.: \(L_{n}(\mathcal{M})\in\mathrm{SPACE}(poly(n))\)_._
We get a characterization of \(\mathrm{PSPACE}\):
**Theorem 17** (Polynomial precision robust \(\Leftrightarrow\mathrm{PSPACE}\)).: \(L\in\mathrm{PSPACE}\) _iff for some \(\mathcal{M}\) and some polynomial \(p\), \(L=L(\mathcal{M})=L_{\{p\}}(\mathcal{M})\)._
Proof.: (\(\Rightarrow\)) If \(L\in\mathrm{PSPACE}\), consider a TM \(\mathcal{M}\) with \(L=L(\mathcal{M})\) that always halts and works in polynomial space: there exists a polynomial \(q(\cdot)\) that bounds the size of the used part of the tape of \(\mathcal{M}\). Considering a polynomial \(p\geq q+2\), the cells visited by \(\mathcal{M}\) on input \(w\) always remain at distance less than \(p(\ell(w))\) from the head, so they are never altered by the perturbation, and hence \(L_{\{p\}}(\mathcal{M})\subseteq L(\mathcal{M})\). We always have the other inclusion.
(\(\Leftarrow\)) We always have \(L_{\{p\}}(\mathcal{M})\in\mathrm{PSPACE}\) by the previous lemma, and since \(L_{\{p\}}(\mathcal{M})=L\), we get \(L\in\mathrm{PSPACE}\).
This considers space perturbations. Other types of perturbations are considered in Section VIII, leading to \(\mathrm{PTIME}\) instead of \(\mathrm{PSPACE}\). For now, we keep to space perturbations.
## IV Embedding dynamical systems
Discussing issues for TMs has the advantage that related computability and complexity issues are well-known or easier to discuss. Many authors have then embedded TMs in various classes of dynamical systems in order to get hardness results, i.e. to show that the reachability problem for the latter is at least as hard as for Turing machines.
Generally speaking, the trick is the following: if we fix the alphabet to \(\Sigma=\{0,1\}\), and \(Q=\{1,\ldots,q\}\) for some integer \(q\), and if we forget about blanks, we can always consider that \(\mathcal{C}_{\mathcal{M}}\subseteq\mathcal{C}=\mathbb{N}\times\Sigma^{*}\times \Sigma^{*}\), i.e. that a configuration is given by some control state, and two finite words.
Fix some encoding function of configurations into a vector of real (or integer) numbers: \(\Upsilon:\mathcal{C}\to\mathbb{R}^{d}\), with \(d\in\mathbb{N}\). For example, one can consider \(\Upsilon(q,w_{1},w_{2})=(q,\gamma(w_{1}),\gamma(w_{2}))\) with \(\gamma:\Sigma^{*}\to\mathbb{R}\) taken as one of the following (a small sketch of these encodings is given after the list):
* the encoding \(\gamma_{\mathbb{N}}\) that maps the word \(w=a_{1}\ldots a_{n}\) to the integer whose binary expression is \(w\),
* or the encoding \(\gamma_{[0,1]}\) that maps \(w\) to the real number of \([0,1]\) whose binary expansion is \(w\),
* or more generally, the encoding \(\gamma^{k}_{[0,1]}\) or \(\gamma^{k}_{\mathbb{N}}\), using base \(k\) instead of base \(2\) for some \(k\geq 2\),
* or the encoding that maps \(w\) to the pair \((\gamma^{k}_{[0,1]}(w),\ell(w))\).
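A small sketch of the first two encodings, for the binary alphabet (illustrative only; the real number \(\gamma^{k}_{[0,1]}(w)\) is merely approximated by a floating-point value here):

```python
def gamma_N(w: str) -> int:
    """Map a binary word w to the integer whose binary expansion is w."""
    return int(w, 2) if w else 0

def gamma_01(w: str, k: int = 2) -> float:
    """Map a word over {0, ..., k-1} to the real of [0, 1] whose base-k expansion is w."""
    return sum(int(a) * k ** -(i + 1) for i, a in enumerate(w))

# A configuration (q, w1, w2) is then encoded as the point (q, gamma(w1), gamma(w2)) of R^3.
print(gamma_N("101"), gamma_01("101"))   # 5 0.625
```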
Assume you have a function \(\mathbf{f}:\ X\subseteq\mathbb{R}^{d}\to X\) such that for any configuration \(C\), if we denote by \(C^{\prime}\) the one step next configuration, we have \(f(\Upsilon(C))=\Upsilon(C^{\prime})\): i.e. one step of the Turing machine corresponds to one step of the dynamical system \((X,f)\) with respect to the encoding \(\gamma\). That is, the following diagram commutes for one step:
Then it will commute for any number of steps:
And hence, questions related to the existence of trajectories in the (dynamical system associated to the) Turing machine will be mapped to corresponding questions about the existence of trajectories over the dynamical system \((X,f)\).
In particular, as reachability is undecidable (c.e. complete, and hence c.e. hard) for Turing machines, this provides undecidability (c.e. hardness) of reachability for various classes of dynamical systems. As reachability is c.e. in most of the natural classes of dynamical systems (just simulate the system to get a semi-algorithm), this leads to c.e. completeness.
Call such a situation a _step-by-step_ emulation.
Such embedding strategies do provide undecidability results. But encodings such as \(\gamma_{[0,1]}\) or \(\gamma_{[0,1]}^{k}\), whose image is compact, map some intrinsically different configurations to points arbitrarily close to each other (as a sequence over a compact set must have some accumulation point). An encoding like \(\gamma_{\mathbb{N}}\) does not have a compact image, but involves emulations with arbitrarily big integers, which is another issue.
## V Discrete Time Dynamical Systems
This leads us to now discuss robustness issues for general dynamical systems over \(\mathbb{R}^{d}\) for some \(d\in\mathbb{N}\).
**Definition 18** (Discrete Time Dynamical System).: _A discrete-time dynamical system \(\mathcal{P}\) is given by a set \(X\subseteq\mathbb{R}^{d}\), and some (possibly partial) function \(\mathbf{f}\) from \(X\) to \(X\)._
The dynamical system will be called _rational_ when \(\mathbf{f}\) preserves rational numbers, i.e. whenever \(\mathbf{f}(\mathbb{Q}^{d})\subseteq\mathbb{Q}^{d}\). We will say that a system is \(\mathbb{Q}\)-computable, if it is rational and \(\mathbf{f}:\mathbb{Q}^{d}\rightarrow\mathbb{Q}^{d}\) is computable. We say that the system is (respectively: locally) _Lipschitz_ when the function is.
To each rational discrete time dynamical system \(\mathcal{P}\) is associated its reachability relation \(R^{\mathcal{P}}(\cdot,\cdot)\) on \(\mathbb{Q}^{d}\times\mathbb{Q}^{d}\). Namely, for two rational points \(\mathbf{x}\) and \(\mathbf{y}\), the relation \(R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\) holds iff there exists a trajectory of \(\mathcal{P}\) from \(\mathbf{x}\) to \(\mathbf{y}\).
### _The case of rational systems_
We first focus on the case of rational systems. Clearly the reachability relation of a \(\mathbb{Q}\)-computable system is computably enumerable: just simulate the dynamics with a TM. Actually, [1] considers only the special case of Piecewise affine maps, as representative of discrete time systems, which are particular \(\mathbb{Q}\)-computable Lipschitz systems.
**Definition 19** (PAM System).: _A Piecewise affine map system (PAM) is a discrete-time dynamical system \(\mathcal{P}\) where \(\mathbf{f}\) is a (possibly partial) function from \(X\) to \(X\) represented by a formula: \(\mathbf{f}(\mathbf{x})=A_{i}\mathbf{x}+\mathbf{b}_{i}\) for \(\mathbf{x}\in P_{i},\quad i=1\ldots N\) where \(A_{i}\) are rational \(d\times d\)-matrices, \(\mathbf{b}_{i}\in\mathbb{Q}^{d}\) and \(P_{i}\) are convex rational polyhedral sets in \(X\)._
In other words, a PAM system consists of partitioning the space into convex polyhedral sets (called _regions_), and assigning an affine update rule \(\mathbf{x}:=A_{i}\mathbf{x}+\mathbf{b}_{i}\) to all the points sharing the same region.
**Remark 20**.: _All constants in the PAM definitions are assumed to be rational so that this remains a \(\mathbb{Q}\)-computable system. No form of continuity is assumed on function \(\mathbf{f}\)._
The following result on the computational power of PAMs is known, and has been established using the technique of step-by-step emulation described in previous section (using \(\gamma_{[0,1]}\) and taking \(\mathbf{f}\) as piecewise affine).
**Theorem 21** (Computational power of PAMs [27][26]).: _Any c.e. language is reducible to the reachability relation of a PAM._
**Remark 22**.: _PAMs are introduced in [1] only for the case where \(X\) is necessarily some bounded polyhedral sets. Actually, from considered \(\gamma\), the above result would also still hold when the regions \(P_{i}\) are assumed to be rational boxes._
This proves c.e.-completeness (\(\Sigma^{0}_{1}\)-completeness), and hence undecidability of reachability for \(\mathbb{Q}\)-computable systems. Let us discuss whether this still holds for "robust systems".
We can apply the paradigm of small perturbations: consider a discrete time dynamical system \(\mathcal{P}\) with function \(\mathbf{f}\). For any \(\varepsilon>0\) we consider the \(\varepsilon\)-perturbed system \(\mathcal{P}_{\varepsilon}\). Its trajectories are defined as sequences \(\mathbf{x}_{t}\) satisfying the inequality \(d(\mathbf{x}_{t+1},\mathbf{f}\left(\mathbf{x}_{t}\right))<\varepsilon\) for all \(t\). This non-deterministic system can be considered as \(\mathcal{P}\) submitted to a small noise with magnitude \(\varepsilon\). For convenience, we write \(\mathbf{y}\in\mathbf{f}_{\varepsilon}(\mathbf{x})\) as a synonym for \(d(\mathbf{f}(\mathbf{x}),\mathbf{y})<\varepsilon\). We denote reachability in the system \(\mathcal{P}_{\varepsilon}\) by \(R^{\mathcal{P}}_{\varepsilon}(\cdot,\cdot)\).
All trajectories of a non-perturbed system \(\mathcal{P}\) are also trajectories of the \(\varepsilon\)-perturbed system \(\mathcal{P}_{\varepsilon}\). If \(\varepsilon_{1}<\varepsilon_{2}\) then any trajectory of the \(\varepsilon_{1}\)-perturbed system is also a trajectory of the \(\varepsilon_{2}\)-perturbed system. Like for TMs, we can pass to a limit for \(\varepsilon\to 0\). Namely \(R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y})\) iff \(\forall\varepsilon>0\), \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\): this relation encodes reachability with arbitrarily small perturbing noise.
**Lemma 23** ([1, Lemma 3]).: _For any \(0<\varepsilon_{2}<\varepsilon_{1}\) and any \(\mathbf{x}\) and \(\mathbf{y}\) the following implications hold: \(R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\Rightarrow R^{\mathcal{P}}_{\omega}( \mathbf{x},\mathbf{y})\Rightarrow R^{\mathcal{P}}_{\varepsilon_{2}}(\mathbf{x},\mathbf{y})\Rightarrow R^{\mathcal{P}}_{\varepsilon_{1}}(\mathbf{x},\mathbf{y})\)._
We prove that the perturbed reachability relation of a locally Lipschitz \(\mathbb{Q}\)-computable system is co-c.e., extending [1, Theorem 5].
**Theorem 24** (Perturbed reachability is co-c.e.).: _Consider a locally Lipschitz \(\mathbb{Q}\)-computable system whose domain \(X\) is a closed rational box._
_Then the relation \(R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y})\subseteq\mathbb{Q}^{d} \times\mathbb{Q}^{d}\) is in the class \(\Pi^{0}_{1}\)._
Proof.: As \(\mathbf{f}\) is locally Lipschitz, and \(X\) is compact, we know that \(\mathbf{f}\) is Lipschitz: there exists some \(L>0\) so that \(d(\mathbf{f}(\mathbf{x}),\mathbf{f}(\mathbf{y}))\leq L\cdot d(\mathbf{x}, \mathbf{y})\).
For every \(\delta=2^{-m}\), \(m\in\mathbb{N}\), we associate some graph \(G_{m}=(V_{\delta},\rightarrow_{\delta})\): its vertices, denoted by \((\mathcal{V}_{i})_{i}\), correspond to some finite covering of compact \(X\) by rational open balls \(\mathcal{V}_{i}=B(\mathbf{x}_{i},\delta_{i})\) of radius \(\delta_{i}<\delta\). There is an edge from \(\mathcal{V}_{i}\) to \(\mathcal{V}_{j}\) in this graph, that is to say \(\mathcal{V}_{i}\rightarrow_{\delta}\mathcal{V}_{j}\), iff \(B(\mathbf{f}(\mathbf{x}_{i}),(L+1)\delta)\cap\mathcal{V}_{j}\neq\emptyset\). With our hypothesis on the domain, such a graph can be effectively obtained from \(m\), considering a suitable discretization of the rational box \(X\).
Claim 1: assume \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\) with \(\mathbf{x}\in\mathcal{V}_{i}\) for \(\epsilon=2^{-n}\). Then there is a path from \(\mathcal{V}_{i}\) to \(\mathcal{V}_{j}\) in \(G_{n}\) for all \(\mathcal{V}_{j}\) with \(\mathbf{y}\in\mathcal{V}_{j}\).
This basically holds as the graph for \(\delta=\epsilon\) is made to always have more trajectories/behaviours than \(R^{\mathcal{P}}_{\varepsilon}\).
Proof.: Consider first one perturbed step: if \(\mathbf{y}\in\mathbf{f}_{\epsilon}(\mathbf{x})\), then \(d(\mathbf{f}(\mathbf{x}_{i}),\mathbf{y})\leq d(\mathbf{f}(\mathbf{x}_{i}),\mathbf{f}(\mathbf{x}))+d(\mathbf{f}(\mathbf{x}),\mathbf{y})<Ld(\mathbf{x}_{i},\mathbf{x})+\epsilon\leq L\epsilon+\epsilon=(L+1)\epsilon\), and hence there is an edge \(\mathcal{V}_{i}{\rightarrow_{\epsilon}}\mathcal{V}_{j}\) for any \(\mathcal{V}_{j}\) containing \(\mathbf{y}\), by definition of the graph. The claim then follows by induction on the length of the \(\epsilon\)-perturbed trajectory from \(\mathbf{x}\) to \(\mathbf{y}\), choosing for each intermediate point a ball of the covering containing it.
Claim 2: for any \(\epsilon=2^{-n}\), there is some \(\delta=2^{-m}\) so that if there is a path from \(\mathcal{V}_{i}\) to \(\mathcal{V}_{j}\) in \(G_{m}\), then \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\) whenever \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathbf{y}\in\mathcal{V}_{j}\).
That is, Claim 2 says that \(\neg R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\) implies that there is no path from \(\mathcal{V}_{i}\) to \(\mathcal{V}_{j}\) in \(G_{m}\) whenever \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathbf{y}\in\mathcal{V}_{j}\), for the corresponding \(\delta\).
Proof.: Consider \(\delta=2^{-m}\) with \(\delta<\epsilon/(2L+2)\): assume \(\mathcal{V}_{i=i_{0}}{\rightarrow_{\delta}}\mathcal{V}_{i_{1}}\ldots{ \rightarrow_{\delta}}\mathcal{V}_{i_{t}=j}\) with \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathbf{y}\in\mathcal{V}_{j}\).
Assume by contradiction that \(\neg R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\), and let \(\ell\) be the least index such that \(\neg R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\overline{\mathbf{x}})\) for some \(\overline{\mathbf{x}}\in\mathcal{V}_{i_{\ell+1}}\).
As \(\mathcal{V}_{i_{\ell}}{\rightarrow_{\delta}}\mathcal{V}_{i_{\ell+1}}\), there is some \(\overline{\mathbf{y}}\in\mathcal{V}_{i_{\ell+1}}\) with \(d(\mathbf{f}(\mathbf{x}_{i_{\ell}}),\overline{\mathbf{y}})<(L+1)\delta\). Take \(\overline{\mathbf{z}}\in\mathcal{V}_{i_{\ell+1}}\).
If \(\ell=0\), then \(d(\mathbf{f}(\mathbf{x}),\overline{\mathbf{z}})\leq d(\mathbf{f}(\mathbf{x}), \mathbf{f}(\mathbf{x}_{i_{\ell}}))+d(\mathbf{f}(\mathbf{x}_{i_{\ell}}), \overline{\mathbf{y}})+d(\overline{\mathbf{y}},\overline{\mathbf{z}})<L \delta+(L+1)\delta+\delta=(2L+2)\delta<\epsilon\), and hence \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\overline{\mathbf{z}})\): contradiction.
If \(\ell>0\), as \(\ell\) is the least index with the above property, \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{x}_{i_{\ell}})\). But then \(d(\mathbf{f}(\mathbf{x}_{i_{\ell}}),\overline{\mathbf{z}})\leq d(\mathbf{f}( \mathbf{x}_{i_{\ell}}),\overline{\mathbf{y}})+d(\overline{\mathbf{y}}, \overline{\mathbf{z}})<(L+1)\delta+\delta<(2L+2)\delta<\epsilon\). And hence, \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x}_{i_{\ell}},\overline{\mathbf{z}})\), and since we have \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{x}_{i_{\ell}})\), we get \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\overline{\mathbf{z}})\) and a contradiction.
From the two above claims, \(\neg R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y})\) holds iff for some \(\delta=2^{-m}\), there is no path from \(\mathcal{V}_{i}\) to \(\mathcal{V}_{j}\) in \(G_{m}\) for some \(\mathcal{V}_{i}\), \(\mathcal{V}_{j}\) with \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathbf{y}\in\mathcal{V}_{j}\). This holds iff for some integer \(m\), \(\mathrm{NOPATH}(G_{m},\mathcal{V}_{i},\mathcal{V}_{j})\) for some \(\mathcal{V}_{i}\), \(\mathcal{V}_{j}\) with \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathbf{y}\in\mathcal{V}_{j}\).
The latter property is c.e., as it corresponds to a union of decidable sets (uniform in \(m\)), as \(\mathrm{NOPATH}(G_{m},\mathcal{V}_{i},\mathcal{V}_{j})\) is a decidable property over finite graph \(G_{m}\).
Notice this would work even when only assuming the domain to be a computable compact set: we will recall later what this is (using computable analysis), but let us say for now that the above proof only requires that, given \(\delta=2^{-m}\), there is an effective way to determine a cover of the domain by finitely many rational balls of radius less than \(\delta\).
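As an illustration of the construction used in this proof, here is a sketch for a one-dimensional rational box \(X=[0,1]\): the covering is realized by a uniform grid of rational balls of radius \(\delta=2^{-m}\), \(\mathbf{f}\) is assumed to be \(\mathbb{Q}\)-computable (exact rationals via `fractions`), and reachability in the finite graph is checked naively here (the complexity analysis below relies on Savitch's trick instead):

```python
from fractions import Fraction

def abstraction_graph(f, L, m, lo=Fraction(0), hi=Fraction(1)):
    """Graph G_m for the system f on the box [lo, hi] (one-dimensional sketch).

    Vertices are the balls B(x_i, delta) centred on a grid of step delta = 2**-m;
    there is an edge i -> j iff B(f(x_i), (L + 1) * delta) meets B(x_j, delta)."""
    delta = Fraction(1, 2 ** m)
    centers = [lo + i * delta for i in range(int((hi - lo) / delta) + 1)]
    edges = {i: [] for i in range(len(centers))}
    for i, x in enumerate(centers):
        fx = f(x)
        for j, y in enumerate(centers):
            if abs(fx - y) < (L + 2) * delta:    # the two open balls intersect
                edges[i].append(j)
    return centers, edges

def no_path(edges, src, dst):
    """Naive reachability check on the finite graph."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        if u == dst:
            return False
        for v in edges[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return True
```

For instance, with \(\mathbf{f}(x)=x/2+1/4\) (taking \(L=1\)), one expects `no_path` to hold between the ball containing \(0\) and the ball containing \(1\) once \(m\) is large enough, certifying \(\neg R^{\mathcal{P}}_{\omega}(0,1)\).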
**Corollary 25** (Robust \(\Rightarrow\) decidable [1, Corollary 5]).: _Assume the hypotheses of Theorem 24._
_If \(R^{\mathcal{P}}_{\omega}=R^{\mathcal{P}}\) then \(R^{\mathcal{P}}\) is decidable._
Proof.: \(R^{\mathcal{P}}\) is c.e. and we know from Theorem 24 that \(R^{\mathcal{P}}_{\omega}\) is co-c.e.. If they are equal, then they are decidable, as a c.e. and co-c.e. set is decidable.
**Remark 26**.: _Notice that a similar statement holds even if \(X\) is not compact: from the proof, it is sufficient that there exists some family of graphs \(\mathcal{G}=(G_{m})\) with \(G_{m}=(V_{m},\rightarrow_{m})\) to get a similar reasoning with the following properties:_
1. \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\) _with_ \(\mathbf{x}\in\mathcal{V}_{i}\)_,_ \(\epsilon=2^{-n}\)_, implies_ \(\mathcal{V}_{i}{\rightarrow_{n}}\mathcal{V}_{j}\) _for all_ \(\mathcal{V}_{j}\) _containing_ \(\mathbf{y}\)_._
2. _For any_ \(\epsilon=2^{-n}\)_, there is some_ \(m\) _such that if we have_ \(\mathcal{V}_{i}{\rightarrow_{m}}\mathcal{V}_{j}\) _then_ \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\) _whenever_ \(\mathbf{x}\in\mathcal{V}_{i}\)_,_ \(\mathbf{y}\in\mathcal{V}_{j}\)_._
3. _For all_ \(m\)_,_ \(G_{m}\) _is a finite computable graph: determining whether_ \(\mathcal{V}_{i}\rightarrow_{m}\mathcal{V}_{j}\) _in_ \(G_{m}\) _can be effectively determined given integers_ \(m\)_,_ \(i\)_, and_ \(j\)_._
_When these three properties hold, we say that \(\mathcal{G}\) is a computable abstraction of the discrete time dynamical system._
There is a kind of converse property if some condition is added. Before stating this as Corollary 31, we relate robustness to the concept of \(\delta\)-decidability of [16], and also to the existence of a witness of non-reachability.
Given \(\mathbf{x}\), we write \(R^{\mathcal{P}}(\mathbf{x})\) for the set of the points \(\mathbf{y}\) reachable from \(\mathbf{x}\): \(R^{\mathcal{P}}(\mathbf{x})=\{\mathbf{y}|R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\}\). This is easily seen to also correspond to the smallest set such that \(\mathbf{x}\in R^{\mathcal{P}}(\mathbf{x})\) and \(\mathbf{f}(R^{\mathcal{P}}(\mathbf{x}))\subseteq R^{\mathcal{P}}(\mathbf{x})\).
We say that \(R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\) is \(\epsilon\)-far from being true, if there is \(\mathcal{R}^{*}\subseteq X\) so that
1. \(\mathbf{x}\in\mathcal{R}^{*}\),
2. \(\mathbf{f}_{\epsilon}(\mathcal{R}^{*})\subseteq\mathcal{R}^{*}\),
3. \(\mathbf{y}\not\in\mathcal{R}^{*}\).
When this holds, necessarily \(\neg R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\): indeed, for all \(\epsilon>0\), the set \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x})=\{\mathbf{z}|R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{z})\}\) is the smallest set that satisfies \(\mathbf{x}\in R^{\mathcal{P}}_{\varepsilon}(\mathbf{x})\) and \(\mathbf{f}_{\epsilon}(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x}))\subseteq R^{\mathcal{P}}_{\varepsilon}(\mathbf{x})\); since \(\mathcal{R}^{*}\) also satisfies these two properties, \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x})\subseteq\mathcal{R}^{*}\), and as \(\mathbf{y}\not\in\mathcal{R}^{*}\), we get \(\neg R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\), and in particular \(\neg R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\).
We come back to the converse of Corollary 25: from Proposition 27, a robust dynamical system (i.e. \(R^{\mathcal{P}}_{\omega}=R^{\mathcal{P}}\)) is eventually decisional, by considering \(R^{*}=\mathcal{R}^{*}\) for the \(\mathcal{R}^{*}\) given by item 2) there. Conversely:
**Lemma 28**.: _Consider a rational system that is not robust, with \(\mathbf{f}\) continuous or Lipschitz; as \(R^{\mathcal{P}}\subseteq R^{\mathcal{P}}_{\omega}\), this means that there exist some \(\mathbf{x}\) and \(\mathbf{y}\) with \(R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y})\) but not \(R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\). The trajectory starting from \(\mathbf{x}\) cannot reach any \(\epsilon\)-rejecting subset, i.e. any set \(\mathcal{R}^{*}\) satisfying items 1)-3) above for some \(\epsilon>0\)._
Proof.: By contradiction, assume the trajectory starting from \(\mathbf{x}\) reaches some \(\epsilon\)-rejecting \(R^{*}\). Possibly by considering one more step, we can assume it reaches the interior of \(R^{*}\) for the first time at time \(t\): indeed, if it reaches the frontier at \(\mathbf{x}^{*}\), then we know that \(B(\mathbf{f}(\mathbf{x}^{*}),\epsilon)\subseteq R^{*}\), and \(\mathbf{f}(\mathbf{x}^{*})\) is in the interior of that ball. Now, from the initial \(\mathbf{x}\) until the position at time \(t\), the trajectory remains at some positive distance of \(\mathbf{y}\). As \(\mathbf{f}\) is continuous or Lipschitz, its \(t\)-th iteration also is. So there is some \(0<\epsilon^{\prime}<\epsilon\) sufficiently small so that every \(\epsilon^{\prime}\)-perturbed trajectory starting from \(\mathbf{x}\) enters \(R^{*}\) within the first \(t\) steps, while remaining at a positive distance of \(\mathbf{y}\) until it does. Once in \(R^{*}\), \(\epsilon^{\prime}\)-perturbed trajectories remain in it, since \(\epsilon^{\prime}<\epsilon\). We get \(\mathbf{y}\not\in R^{\mathcal{P}}_{\epsilon^{\prime}}(\mathbf{x})\), and consequently \(\neg R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y})\): contradiction.
**Corollary 29**.: _Consider a continuous or Lipschitz rational dynamical system. It is robust iff it is eventually decisional._
We can even compute the witnesses under the hypotheses of Theorem 24. We say that some dynamical system is _effectively eventually decisional_ when there is an algorithm such that, given \(\mathbf{x}\) and \(\mathbf{y}\), it (terminates and) outputs such an \(R^{*}\) in the form of a union of rational balls.
**Proposition 30** (Reinforcement of Corollary 25).: _Assume the hypotheses of Theorem 24. If \(R^{\mathcal{P}}_{\omega}=R^{\mathcal{P}}\) then \(R^{\mathcal{P}}\) is computable, and the system is effectively eventually decisional._
Proof.: The proof of Theorem 24 shows that when \(R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y})\) is false, then \(R^{\mathcal{P}}_{\epsilon}(\mathbf{x},\mathbf{y})\) is false for some \(\epsilon=2^{-n}\), and there is some \(\delta=2^{-m}\), with some graph \(G_{m}\) with vertices \(\mathcal{V}_{i}\) and \(\mathcal{V}_{j}\) such that \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathbf{y}\in\mathcal{V}_{j}\) and \(\neg(\mathcal{V}_{i}\stackrel{{*}}{{\rightarrow}}_{\delta}\mathcal{V}_{j})\). Denote by \(R^{G_{m}}\) the union of the vertices \(\mathcal{V}_{k}\) such that \(\mathcal{V}_{i}\stackrel{{*}}{{\rightarrow}}_{\delta}\mathcal{V}_{k}\), for \(\mathbf{x}\in\mathcal{V}_{i}\) in that graph. Consider \(\mathcal{R}^{*}=R^{G_{m}}\). This constitutes a witness at level \(\delta=2^{-m}\) from the properties of the construction in that proof.
Then \(m\) can be found by testing increasing \(m\) until a graph with the above properties is found. The corresponding \(\mathcal{R}^{*}=R^{G_{m}}\) for the first graph found will be a witness at level \(\delta=2^{-m}\) from above arguments.
An effectively eventually decisional system necessarily has a decidable reachability relation (given \(\mathbf{x}\) and \(\mathbf{y}\), compute the trajectory until it reaches \(\mathbf{y}\) (then accept) or \(R^{*}\) (then reject)):
**Corollary 31** (Decidable \(\Leftrightarrow\) robust, for eventually decisional systems).: _Under the hypotheses of Theorem 24, \(R^{\mathcal{P}}_{\omega}=R^{\mathcal{P}}\) iff \(R^{\mathcal{P}}\) is decidable and the system is effectively eventually decisional, iff the system is effectively eventually decisional._
We now go to complexity issues.
Assume the dynamical system is robust, i.e. \(R^{\mathcal{P}}=R^{\mathcal{P}}_{\omega}\). That means that for all rationals \(\mathbf{x},\mathbf{y}\), we have \(R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\Leftrightarrow R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y})\). Consequently, for all rationals \(\mathbf{x},\mathbf{y}\), there exists some \(\varepsilon\) (depending on \(\mathbf{x}\) and \(\mathbf{y}\)) such that \(R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\) and \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\) have the same truth value (which remains unchanged for smaller \(\epsilon\)).
It is then natural to quantify the level of robustness required for given \(\mathbf{x}\) and \(\mathbf{y}\), i.e. the value of \(\varepsilon\).
As we may always assume \(\varepsilon=2^{-n}\) for some integer \(n\), we write \(R^{\mathcal{P}}_{n}\) for \(R^{\mathcal{P}}_{2^{-n}}\), and we then introduce:
**Definition 32**.: _Given some function \(f:\mathbb{N}\rightarrow\mathbb{N}\), we write \(R^{\mathcal{P}}\{f\}\) for the relation defined as follows: for any rational points \(\mathbf{x}\) and \(\mathbf{y}\), the relation holds iff \(R^{\mathcal{P}}_{f(\ell(\mathbf{x})+\ell(\mathbf{y}))}(\mathbf{x},\mathbf{y})\)._
**Lemma 33**.: _Consider a locally Lipschitz \(\mathbb{Q}\)-computable system, with \(\mathbf{f}:\mathbb{Q}\rightarrow\mathbb{Q}\) computable in polynomial time, whose domain \(X\) is a closed rational box. For \(\delta=2^{-m}\), consider the associated graph \(G_{m}\) considered in the proof of Theorem 24. Then \(\mathrm{NOPATH}(G_{m},\mathcal{V}_{i},\mathcal{V}_{j})\) is decidable using a space polynomial in \(m\)._
Proof.: This graph has \(\mathcal{O}(2^{dm})\) vertices. The graph has a successor relation \(\rightarrow_{\delta}\) computable in space polynomial in \(m\). Hence, the analysis of Corollary 8 applies, and we can determine whether \(\mathrm{NOPATH}(G_{m},\mathcal{V}_{i},\mathcal{V}_{j})\) holds using space polynomial in \(m\).
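As an illustration (not the construction used above), the following Python sketch shows the Savitch-style midpoint recursion deciding reachability in a graph given only through an edge predicate; its recursion depth is about \(\log_{2}\) of the number of vertices, which is what keeps the space polynomial in \(m\) for \(G_{m}\).

```python
# Savitch-style reachability: path of length at most 2**k via a guessed midpoint.
import math

def reach(u, v, k, vertices, edge):
    if k == 0:
        return u == v or edge(u, v)
    return any(reach(u, w, k - 1, vertices, edge) and
               reach(w, v, k - 1, vertices, edge) for w in vertices)

def nopath(u, v, vertices, edge):
    k = math.ceil(math.log2(max(2, len(vertices))))
    return not reach(u, v, k, vertices, edge)

# Tiny usage example on an explicit four-vertex graph.  (Here the vertex list is
# explicit for simplicity; in the space-bounded setting the vertices of G_m are
# enumerated implicitly from their indices.)
V = [0, 1, 2, 3]
E = {(0, 1), (1, 2)}
print(nopath(0, 2, V, lambda a, b: (a, b) in E))  # False: the path 0 -> 1 -> 2 exists
print(nopath(0, 3, V, lambda a, b: (a, b) in E))  # True: 3 is unreachable from 0
```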
**Theorem 34**.: _Assume the hypotheses of Lemma 33. Assume \(p\) is some polynomial. Then \(R^{\mathcal{P}}\{p\}\in\mathrm{PSPACE}\)._
Proof.: From the proof of Theorem 24, we know that for all \(n\) there exists some \(m\) (depending on \(n\)), such that \(R^{\mathcal{P}}_{n}(\mathbf{x},\mathbf{y})\) and \(R^{G_{m}}(\mathbf{x},\mathbf{y})\) have the same truth value, where \(R^{G_{m}}\) denotes reachability in the graph \(G_{m}\).
With the hypotheses, given \(\mathbf{x}\) and \(\mathbf{y}\), we can determine whether \(R^{\mathcal{P}}_{\{p\}}(\mathbf{x},\mathbf{y})\), by determining the truth value of \(R^{\mathcal{P}}_{n}(\mathbf{x},\mathbf{y})\), taking \(n\) polynomial in \(\ell(\mathbf{x})+\ell(\mathbf{y})\). From the proof of Theorem 24, the corresponding \(m\) is polynomially related to \(n\) (it is even affine in \(n\)). Now the analysis of Lemma 33, shows that the truth value of \(R^{G_{m}}(\mathbf{x},\mathbf{y})\) can be determined in space polynomial in \(m\).
**Theorem 35** (Polynomially robust to precision \(\Rightarrow\mathrm{PSPACE}\)).: _With same hypotheses, if \(R^{\mathcal{P}}=R^{\mathcal{P}}\{p\}\) for some polynomial \(p\), then \(R^{\mathcal{P}}\in\mathrm{PSPACE}\)._
Proof.: We have \(R^{\mathcal{P}}\{p\}\in\mathrm{PSPACE}\) by Theorem 34, and since \(R^{\mathcal{P}}=R^{\mathcal{P}}\{p\}\), it follows that \(R^{\mathcal{P}}\in\mathrm{PSPACE}\).
Actually, this is even a characterization of \(\mathrm{PSPACE}\) (if one prefers: reachability is \(\mathrm{PSPACE}\)-complete for \(\mathbb{Q}\)-computable, poly-time computable systems that are polynomially robust to precision).
**Theorem 36** (Polynomially robust to precision \(\Leftrightarrow\mathrm{PSPACE}\)).: _Any \(\mathrm{PSPACE}\) language is reducible to the reachability relation of PAM with \(R^{\mathcal{P}}=R^{\mathcal{P}}\{p\}\) for some polynomial \(p\)._
Proof.: Let \(L\in\mathrm{PSPACE}\). There is a TM \(\mathcal{M}\) with \(L(\mathcal{M})=L\) that works in polynomial space \(q(\cdot)\). Its step-by-step emulation considered in Theorem 21, using \(\gamma_{[0,1]}\) is done using a precision \(\mathcal{O}(2^{-q(n)})\) on words of length \(n\). The obtained
system satisfies \(R^{\mathcal{P}}_{\{q+\mathcal{O}(1)\}}=R^{\mathcal{P}}\) from the properties of the emulation. In other words, this comes from the fact that the involved emulation preserves robustness of TMs.
Assuming the same hypotheses as in Theorem 35, when \(R^{\mathcal{P}}=R^{\mathcal{P}}_{\{p\}}\) for some polynomial \(p\), we also see that we can determine a witness of the fact that \(\neg R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\) in polynomial space (using a suitable representation of it).
### _The case of computable systems_
We now consider the case of general (possibly non-rational) discrete time dynamical systems. In that case, \(\mathbf{f}\) may take non-rational values, and we need to talk about computability for functions over the reals. A system is said to be computable if the function \(\mathbf{f}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is computable in the sense of computable analysis (CA).
A crash course on CA can be found in the appendix (see e.g. [34] or [11]), but in very short: a name for a point \(\mathbf{x}\in\mathbb{R}^{d}\) is a sequence \((I_{n})\) of nested open rational balls with \(I_{n+1}\subseteq I_{n}\) for all \(n\in\mathbb{N}\) and \(\{x\}=\bigcap_{n\in\mathbb{N}}I_{n}\). A name for a function \(\mathbf{f}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}\) is a list of all pairs of open rational balls \((I,J)\) such that \(\mathbf{f}(cls(I))\subseteq J\). A name for a closed set \(F\) is a sequence \((I_{n})\) of open rational balls such that \(cls(I_{n})\cap F=\emptyset\) and a sequence \((J_{n})\) of open rational balls such that \(J_{n}\cap F\neq\emptyset\). A name for a compact \(K\) is a name of \(F\) as a closed set, and an integer \(L\) such that \(K\subseteq B(0,L)\). All these names can be encoded as infinite sequences of symbols. The notion of computability involved is the one of Type 2 Turing machines, that is to say machines possibly working over infinite tapes, and outputting their results in possibly write-only output tapes. Then: a point \(\mathbf{x}\in\mathbb{R}^{d}\) is computable if it has a computable name. And similarly for defining the concept of computable function, computable closed set, or computable compact: we mean for example, that a closed set is computable if it has a computable name (and a compact is computable consequently also as a closed set). In particular this concept of computability for compacts implies the property discussed after Theorem 24 (see [34, Lemma 5.2.5] for a proof). If \(Y\) and \(Z\) are spaces with an associated naming system, then an operator \(f:Y\to Z\) is said computable if there is a computable function which associates each name of \(y\in Y\) to a name of \(f(y)\in Z\).
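To make the notion of a name concrete, here is a small Python illustration (not from the paper, and using closed intervals instead of open balls for simplicity): \(\sqrt{2}\) is named by a computable sequence of nested rational intervals, produced by bisection, whose widths shrink to \(0\).

```python
# A name for sqrt(2): nested rational intervals obtained by bisection.
from fractions import Fraction

def name_of_sqrt2():
    lo, hi = Fraction(1), Fraction(2)      # sqrt(2) lies in [1, 2]
    while True:
        yield (lo, hi)
        mid = (lo + hi) / 2
        if mid * mid <= 2:                 # keep the half containing sqrt(2)
            lo = mid
        else:
            hi = mid

gen = name_of_sqrt2()
for _ in range(4):
    print(next(gen))   # (1, 2), (1, 3/2), (5/4, 3/2), (11/8, 3/2)
```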
From the model of CA, given the name2 of \(\mathbf{f}\), and (even for) some rational \(\mathbf{x}\) and \(\mathbf{y}\), it is impossible to tell effectively whether \(\mathbf{f}(\mathbf{x})=\mathbf{y}\) in the general case. Consequently, given some rational ball \(B(\mathbf{y},\delta)\), we have to forbid "frontier reachability", that is to say the case where \(B(\mathbf{y},\delta)\) would not be reachable, but its frontier is, i.e. \(\overline{B}(\mathbf{y},\delta)-B(\mathbf{y},\delta)\) is reachable. A natural question then arises: given some ball such that either \(B(\mathbf{y},\delta)\) is reachable (that case implies that \(\overline{B}(\mathbf{y},\delta)\) is), or such that \(\overline{B}(\mathbf{y},\delta)\) is not, decide which possibility holds. We call this the _ball (decision) problem_. Of course, from the above definitions, when \(R^{\mathcal{P}}(\mathbf{x})\) is a closed set, \(R^{\mathcal{P}}(\mathbf{x})\) is a computable closed set iff the associated ball problem is algorithmically solvable.
Footnote 2: Even assuming it is computable.
For a computable system, the ball decision problem is c.e.: simulate the evolution of the system starting from \(\mathbf{x}\) until step \(T\), with increasing precision and \(T\), until one finds the guarantee that the position \(\mathbf{x}_{T}\) at time \(T\) remains in \(B(\mathbf{y},\delta^{\prime})\) for some \(\delta^{\prime}<\delta\). This works: if the ball is indeed reachable, this will terminate by eventually computing a sufficient approximation of the corresponding \(\mathbf{x}_{T}\), and conversely it can't terminate without guaranteeing reachability. The ball problem is of course not co-c.e. in general.
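A minimal Python sketch of this semi-decision procedure follows; it is only an illustration, and it assumes a hypothetical routine `f_enclosure(box, prec)` returning a rational box guaranteed to contain the image of the input box (this kind of routine is exactly what a name of \(\mathbf{f}\) provides).

```python
# Semi-deciding the ball problem: simulate with increasing horizon and
# precision, and stop once the enclosure of x_T certifiably lies in B(y, delta).
import math

def worst_dist(box, y):
    # Largest possible Euclidean distance from y to a point of the box.
    return math.sqrt(sum(max(abs(lo - yi), abs(hi - yi)) ** 2
                         for (lo, hi), yi in zip(box, y)))

def ball_reachable_semidecide(x, y, delta, f_enclosure, max_rounds=30):
    for prec in range(1, max_rounds):           # dovetail over precision ...
        box = [(xi, xi) for xi in x]            # degenerate box around x
        for _ in range(2 ** prec):              # ... and over the time horizon
            if worst_dist(box, y) < delta:      # certificate of reachability
                return True
            box = f_enclosure(box, prec)
    return None   # no certificate found so far; the true procedure keeps searching

# Example: f(x) = x/2, with an exact interval extension; is B(0, 1/10) reached from 1?
half = lambda box, prec: [(lo / 2, hi / 2) for lo, hi in box]
print(ball_reachable_semidecide([1.0], [0.0], 0.1, half))   # True
```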
To a discrete time system, we can also associate its reachability relation \(R^{\mathcal{P}}(\cdot,\cdot,\cdot)\) over \(\mathbb{Q}^{d}\times\mathbb{Q}^{d}\times\mathbb{N}\). Namely, for two rational points \(\mathbf{x}\) and \(\mathbf{y}\) and rational \(0<\eta=2^{-p}\), encoded by the integer \(p\), the relation \(R^{\mathcal{P}}(\mathbf{x},\mathbf{y},p)\) holds iff there exists a trajectory of \(\mathcal{P}\) from \(\mathbf{x}\) to \(\overline{B}(\mathbf{y},\eta)\). We define \(R^{\mathcal{P}}_{\varepsilon}\) similarly, and \(R^{\mathcal{P}}_{\omega}=\bigcap_{\varepsilon}R^{\mathcal{P}}_{\varepsilon}\). This relation encodes reachability with arbitrarily small perturbing noise to some closed ball.
**Lemma 37**.: _for any \(0<\varepsilon_{2}<\varepsilon_{1}\) and any \(\mathbf{x}\) and \(\mathbf{y}\), \(\eta\), the following implications hold: \(R^{\mathcal{P}}(\mathbf{x},\mathbf{y},p)\Rightarrow R^{\mathcal{P}}_{\omega}( \mathbf{x},\mathbf{y},p)\Rightarrow R^{\mathcal{P}}_{\varepsilon_{2}}(\mathbf{x },\mathbf{y},p)\Rightarrow R^{\mathcal{P}}_{\varepsilon_{1}}(\mathbf{x}, \mathbf{y},p)\)._
Given \(\mathbf{x}\), and \(0<\varepsilon_{2}<\varepsilon_{1}\), we have \(R^{\mathcal{P}}(\mathbf{x})\subseteq R^{\mathcal{P}}_{\varepsilon_{2}}(\mathbf{x})\subseteq cls(R^{\mathcal{P}}_{\varepsilon_{2}}(\mathbf{x}))\subseteq R^{\mathcal{P}}_{\varepsilon_{1}}(\mathbf{x})\subseteq cls(R^{\mathcal{P}}_{\varepsilon_{1}}(\mathbf{x}))\). Consequently, \(R^{\mathcal{P}}_{\omega}(\mathbf{x})=\bigcap_{\varepsilon>0}R^{\mathcal{P}}_{\varepsilon}(\mathbf{x})=\bigcap_{\varepsilon>0}cls(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x}))\) is a closed set.
**Theorem 38** (Perturbed reachability is co-r.e.).: _Consider a locally Lipschitz computable system whose domain \(X\) is a computable compact. \(R^{\mathcal{P}}_{\omega}(\cdot,\cdot,\cdot)\subseteq\mathbb{Q}^{d}\times\mathbb{Q}^{d}\times\mathbb{N}\) is in \(\Pi^{0}_{1}\)._
This statement has similarities with [8, Theorem 13]: the result established there is about the language accepted by a system, but with very strong hypotheses on termination compared to ours, which makes their analysis much simpler.
Proof.: As \(\mathbf{f}\) is locally Lipschitz, and \(X\) is compact, we know that \(\mathbf{f}\) is Lipschitz: there exists some \(L>0\) so that \(d(\mathbf{f}(\mathbf{x}),\mathbf{f}(\mathbf{y}))\leq L\cdot d(\mathbf{x}, \mathbf{y})\).
For every \(\delta=2^{-m}\), \(m\in\mathbb{N}\), we associate some graph \(G_{m}=(V_{\delta},\rightarrow_{\delta})\): the vertices, denoted \((\mathcal{V}_{i})_{i}\), of this graph correspond to some finite covering of compact \(X\) by rational open balls \(\mathcal{V}_{i}=B(\mathbf{x}_{i},\delta_{i})\) of radius \(\delta_{i}<\delta\).
There is an edge from \(\mathcal{V}_{i}\) to \(\mathcal{V}_{j}\) in this graph, that is to say \(\mathcal{V}_{i}\rightarrow_{\delta}\mathcal{V}_{j}\), iff \(B(\mathbf{f}_{i},(L+2)\delta)\cap\mathcal{V}_{j}\neq\emptyset\), where \(\mathbf{f}_{i}\) is some rational point given by a (computed) \(\delta\)-approximation of \(\mathbf{f}(\mathbf{x}_{i})\), i.e. \(\mathbf{f}_{i}\) is such that \(\mathbf{f}(\mathbf{x}_{i})\in B(\mathbf{f}_{i},\delta)\).
This is done to guarantee that \(B(\mathbf{f}(\mathbf{x}_{i}),(L+1)\delta)\) is covered.
As we assumed the compact \(X\) to be computable, such a graph can be effectively obtained from \(m\), by computing suitable approximations \(\mathbf{f}_{i}\) of the \(\mathbf{f}(\mathbf{x}_{i})\)'s at precision \(\delta\).
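The following Python sketch builds such a graph on \(X=[0,1]^{2}\); it is only an illustration, with the simplification that every ball has radius exactly \(\delta\) (the construction above uses radii \(\delta_{i}<\delta\)), and it assumes a hypothetical routine `f_approx(center, delta)` returning a rational \(\delta\)-approximation of \(\mathbf{f}\) at a ball center.

```python
# Building the abstraction graph G_m on X = [0,1]^2.
from fractions import Fraction
from itertools import product

def build_graph(f_approx, L, m, dim=2):
    delta = Fraction(1, 2 ** m)
    coords = [Fraction(k, 2 ** m) for k in range(2 ** m + 1)]
    centers = list(product(coords, repeat=dim))         # delta-grid covering X
    edges = set()
    for i, ci in enumerate(centers):
        fi = f_approx(ci, delta)                         # delta-close to f(ci)
        for j, cj in enumerate(centers):
            d2 = sum((a - b) ** 2 for a, b in zip(fi, cj))
            # Edge iff the open balls B(fi, (L+2)*delta) and B(cj, delta) meet.
            if d2 < ((L + 3) * delta) ** 2:
                edges.add((i, j))
    return centers, edges

# Example: the contraction f(x) = x/2 on [0,1]^2, Lipschitz with L = 1/2.
centers, edges = build_graph(lambda c, d: tuple(x / 2 for x in c), Fraction(1, 2), m=2)
print(len(centers), len(edges))
```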
We write, as expected, \(R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\) if there is a trajectory from \(\mathbf{x}\) to \(\mathbf{y}\), allowing \(\mathbf{y}\) to be some real point (and similarly for \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\)).
* Claim 1: assume \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\) with \(\mathbf{x}\in\mathcal{V}_{i}\) for \(\epsilon=2^{-n}\). Then \(\mathcal{V}_{i}\stackrel{{*}}{{\rightarrow}}_{\epsilon}\mathcal{V}_{j}\) for all \(\mathcal{V}_{j}\) with \(\mathbf{y}\in\mathcal{V}_{j}\). This basically holds as the graph for \(\delta=\epsilon\) is made to always have more trajectories/behaviours than \(R^{\mathcal{P}}_{\varepsilon}\).
Proof.: If \(\mathbf{y}\in\mathbf{f}_{\varepsilon}(\mathbf{x})\), then \(d(\mathbf{f}_{i},\mathbf{y})\leq d(\mathbf{f}_{i},\mathbf{f}(\mathbf{x}_{i}))+d(\mathbf{f}(\mathbf{x}_{i}),\mathbf{f}(\mathbf{x}))+d(\mathbf{f}(\mathbf{x}),\mathbf{y})<\epsilon+Ld(\mathbf{x}_{i},\mathbf{x})+\epsilon<(L+2)\epsilon\), and hence there is an edge from \(\mathcal{V}_{i}\) to any \(\mathcal{V}_{j}\) containing \(\mathbf{y}\). Iterating this along an \(\epsilon\)-perturbed trajectory from \(\mathbf{x}\) to \(\mathbf{y}\) gives \(\mathcal{V}_{i}\stackrel{{*}}{{\rightarrow}}_{\epsilon}\mathcal{V}_{j}\).
* Claim 2: for any \(\epsilon=2^{-n}\), there is some \(\delta=2^{-m}\) so that if we have \(\mathcal{V}_{i}\stackrel{{*}}{{\rightarrow}}_{\delta}\mathcal{V}_{j}\) then \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\) whenever \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathbf{y}\in\mathcal{V}_{j}\).
Proof.: Consider \(\delta=2^{-m}\) with \(\delta<\epsilon/(2L+4)\): assume \(\mathcal{V}_{i=i_{0}}\rightarrow_{\delta}\mathcal{V}_{i_{1}}\rightarrow_{\delta}\ldots\rightarrow_{\delta}\mathcal{V}_{i_{t}=j}\) with \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathbf{y}\in\mathcal{V}_{j}\).
Assume by contradiction that \(\neg R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\), and let \(\ell\) be the least index such that \(\neg R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\overline{\mathbf{z}})\) for some \(\overline{\mathbf{z}}\in\mathcal{V}_{i_{\ell+1}}\). As \(\mathcal{V}_{i_{\ell}}\rightarrow_{\delta}\mathcal{V}_{i_{\ell+1}}\), there is some \(\overline{\mathbf{y}}\in\mathcal{V}_{i_{\ell+1}}\) with \(d(\mathbf{f}_{i_{\ell}},\overline{\mathbf{y}})<(L+2)\delta\), where \(d(\mathbf{f}_{i_{\ell}},\mathbf{f}(\mathbf{x}_{i_{\ell}}))<\delta\). Take \(\overline{\mathbf{z}}\in\mathcal{V}_{i_{\ell+1}}\).
If \(\ell=0\), then \(d(\mathbf{f}(\mathbf{x}),\overline{\mathbf{z}})\leq d(\mathbf{f}(\mathbf{x}), \mathbf{f}(\mathbf{x}_{t}))+d(\mathbf{f}(\mathbf{x}_{i_{\ell}}),\mathbf{f}_{i _{\ell}})+d(\mathbf{f}_{i_{\ell}},\overline{\mathbf{y}})+d(\overline{ \mathbf{y}},\overline{\mathbf{z}})<L\delta+\delta+(L+2)\delta+\delta+\delta=(2L+ 4)\delta<\epsilon\), and hence \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\overline{\mathbf{z}})\): contradiction.
If \(\ell>0\), as \(\ell\) is the least index with the above property, \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{x}_{i_{\ell}})\). But then \(d(\mathbf{f}(\mathbf{x}_{i_{\ell}}),\overline{\mathbf{z}})\leq d(\mathbf{f}( \mathbf{x}_{i_{\ell}}),\mathbf{f}_{i_{\ell}})+d(\mathbf{f}_{i_{\ell}}, \overline{\mathbf{y}})+d(\overline{\mathbf{y}},\overline{\mathbf{z}})<\delta +(L+2)\delta+\delta<(2L+4)\delta<\epsilon\).
And hence, \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x}_{i_{\ell}},\overline{\mathbf{z}})\), and since we have \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{x}_{i_{\ell}})\), we get \(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\overline{\mathbf{z}})\) and a contradiction.
That is, Claim 2 says that \(\neg R^{\mathcal{P}}_{\varepsilon}(\mathbf{x},\mathbf{y})\) implies \(\neg(\mathcal{V}_{i}\stackrel{{*}}{{\rightarrow}}_{\delta} \mathcal{V}_{j})\) whenever \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathbf{y}\in\mathcal{V}_{j}\), for the corresponding \(\delta\).
From the two above items, \(R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y})\) holds iff for all \(\delta=2^{-m}\), we have \(\mathcal{V}_{i}\stackrel{{*}}{{\rightarrow}}_{\delta}\mathcal{V}_ {j}\), for all \(\mathcal{V}_{i}\), \(\mathcal{V}_{j}\) with \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathbf{y}\in\mathcal{V}_{j}\).
If one prefers, \(\neg R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y})\) holds iff for some \(\delta=2^{-m}\), \(\neg(\mathcal{V}_{i}\stackrel{{*}}{{\rightarrow}}_{\delta} \mathcal{V}_{j})\) for some \(\mathcal{V}_{i}\), \(\mathcal{V}_{j}\) with \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathbf{y}\in\mathcal{V}_{j}\).
Then:
* Claim* (compactness argument): Given a ball \(B(\mathbf{y},\eta)\), we have that \(\overline{B}(\mathbf{y},\eta)\cap R^{\mathcal{P}}_{\omega}(\mathbf{x})=\emptyset\) iff \(\overline{B}(\mathbf{y},\eta)\cap cls(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x}) )=\emptyset\) for some \(\epsilon>0\).
Proof.: \(\Leftarrow\) (easy direction): If \(\overline{B}(\mathbf{y},\eta)\cap R^{\mathcal{P}}_{\varepsilon}(\mathbf{x})=\emptyset\) for some \(\epsilon>0\), we cannot have \(\overline{B}(\mathbf{y},\eta)\cap R^{\mathcal{P}}_{\omega}(\mathbf{x})\neq\emptyset\), as it would contain a point that would necessarily be in \(\overline{B}(\mathbf{y},\eta)\cap R^{\mathcal{P}}_{\varepsilon}(\mathbf{x})\).
\(\Rightarrow\) (compactness argument): Assume \(\overline{B}(\mathbf{y},\eta)\cap R^{\mathcal{P}}_{\omega}(\mathbf{x})=\emptyset\). Since \(R^{\mathcal{P}}_{\omega}(\mathbf{x})=\bigcap_{\varepsilon>0}cls(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x}))\), this means that \(\bigcup_{n\in\mathbb{N}}(cls(R^{\mathcal{P}}_{2^{-n}}(\mathbf{x})))^{c}\) is some covering of \(\overline{B}(\mathbf{y},\eta)\). As \(\overline{B}(\mathbf{y},\eta)\) is closed and bounded, it is compact. Consequently, from the covering can be extracted some finite covering. Consequently, \(\bigcup_{n\leq n_{0}}(cls(R^{\mathcal{P}}_{2^{-n}}(\mathbf{x})))^{c}=(cls(R^{\mathcal{P}}_{2^{-n_{0}}}(\mathbf{x})))^{c}\) for some \(n_{0}\) is a covering of \(\overline{B}(\mathbf{y},\eta)\). In other words, \(\overline{B}(\mathbf{y},\eta)\cap cls(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x}))=\emptyset\) for \(\epsilon=2^{-n_{0}}\). This proves the direction from left to right.
Consequently, we can basically use arguments very similar to those of the proof of Theorem 24, where the role played by \(\mathbf{y}\) is now played by \(\overline{B}(\mathbf{y},\eta)\). In more detail:
\(R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y},\eta)\) holds iff for all \(\delta=2^{-m}\), we have \(\mathcal{V}_{i}\stackrel{{*}}{{\rightarrow}}_{\delta}\mathcal{V}_ {j}\), for all \(\mathcal{V}_{i}\), \(\mathcal{V}_{j}\) with \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathcal{V}_{j}\cap\overline{B}(\mathbf{y},2^{-p})\neq\emptyset\).
The direction from left to right is clear from Claim 1. Conversely, assume that for all \(\delta=2^{-m}\), we have \(\mathcal{V}_{i}\stackrel{{*}}{{\rightarrow}}_{\delta}\mathcal{V}_ {j}\), for all \(\mathcal{V}_{i}\), \(\mathcal{V}_{j}\) with \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathcal{V}_{j}\cap\overline{B}(\mathbf{y},2^{-p})\neq\emptyset\). Assume by contradiction that \(\neg R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y},\eta)\). From Claim*, we know that \(\overline{B}(\mathbf{y},\eta)\cap cls(R^{\mathcal{P}}_{\varepsilon}(\mathbf{x}) )=\emptyset\) for some \(\epsilon>0\). In particular \(\overline{B}(\mathbf{y},\eta)\cap R^{\mathcal{P}}_{\varepsilon}(\mathbf{x})=\emptyset\). Then for the corresponding \(\delta=2^{-m}\) from Claim 2, we cannot have \(\mathcal{V}_{i}\stackrel{{*}}{{\rightarrow}}_{\delta}\mathcal{V}_ {j}\), for any \(\mathcal{V}_{i}\), \(\mathcal{V}_{j}\) with \(\mathbf{x}\in\mathcal{V}_{i}\), \(\mathcal{V}_{j}\cap\overline{B}(\mathbf{y},2^{-p})\neq\emptyset\). This proves the direction from right to left.
If one prefers, \(\neg R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y},\eta)\) holds iff for some integer \(m\), the following property \(P_{m}\) holds: \(\mathrm{NOPATH}(G_{m},\mathcal{V}_{i},\mathcal{V}_{j})\) for any \(\mathcal{V}_{i}\) and \(\mathcal{V}_{j}\) with \(\mathbf{x}\in\mathcal{V}_{i}\) and \(\mathcal{V}_{j}\cap\overline{B}(\mathbf{y},2^{-p})\neq\emptyset\).
The latter property is computably enumerable, as it corresponds to a union of decidable sets (uniform in \(m\)). Consequently, the complement of \(R^{\mathcal{P}}_{\omega}\) is computably enumerable, i.e. \(R^{\mathcal{P}}_{\omega}\) is in \(\Pi^{0}_{1}\).
A closed set is called c.e. closed if one can effectively enumerate the rational open balls intersecting it. From the statements of [34], the following holds:
**Theorem 43**.: _Consider a computable discrete time system \(\mathcal{P}\) whose domain is a computable compact._
_For all \(\mathbf{x}\), \(cls(R^{\mathcal{P}}(x))\subseteq\mathbb{R}^{d}\) is a c.e. closed subset._
Proof.: We do the proof for the case of a discrete time dynamical system. Write \(R^{\mathcal{P},T}(\mathbf{x},\mathbf{y})\) iff there exists a trajectory of \(\mathcal{P}\) from \(\mathbf{x}\) to \(\mathbf{y}\) in less than \(T\) steps. We can write \(R^{\mathcal{P},0}(\mathbf{x},\mathbf{y})\) as \(\{(\mathbf{x},\mathbf{y})|\mathbf{x}=\mathbf{y}\}\), which is a computable closed subset ([34, Example 5.1.3]). We can then also write \(R^{\mathcal{P},T+1}(\mathbf{x},.)=\mathbf{F}(R^{\mathcal{P},T}(\mathbf{x},.))\) where \(\mathbf{F}(K):=K\cup\mathbf{f}(K)\). As \(\mathbf{f}\) is computable, it is continuous, and by induction on \(T\), \(R^{\mathcal{P},T}(\mathbf{x},.)\) is compact: indeed, \(K\) is a closed subset living in a compact by induction, hence compact, and the image of a compact by a continuous function is compact. As \(\mathbf{f}\) is computable and \(K\) is compact, we know that \(\mathbf{f}(K)\) is computable ([34, Theorem 6.2.4]), and hence also \(\mathbf{F}(K)\) ([34, Theorem 5.1.13]). By induction on \(T\), it follows that \(R^{\mathcal{P},T}(\mathbf{x},.)\) is a computable closed subset. A computable closed set is computably enumerable-closed: we can enumerate the rational balls intersecting it ([11, Proposition 5.16]).
Furthermore, as can be checked in all the references to [34] above (see also [34, Theorem 6.2.1] for the required iteration), the above reasoning is even effective: we can produce, effectively in \(T\), a name of it (even effectively from a name of \(\mathbf{f}\)). Consequently, by doing things in parallel (i.e. dovetailing), we can effectively enumerate the rational balls intersecting \(cls(\bigcup_{T}R^{\mathcal{P},T}(.,.))\), by considering increasing \(T\) and the balls in these enumerations.
A closed set is called co-c.e. closed if one can effectively enumerate the rational closed balls in its complement. Using arguments similar to the proof of Theorems 38 and 24:
**Theorem 44**.: _Consider a computable locally Lipschitz discrete time system whose domain \(X\) is a computable compact._
_For all \(\mathbf{x}\), \(cls(R^{\mathcal{P}}_{\omega}(\mathbf{x}))\ \subseteq\mathbb{R}^{d}\) is a co-c.e. closed subset._
Proof.: From the proof of Theorem 38, for all \(\mathbf{x}\), \(\mathbf{y}\) such that \(R^{\mathcal{P}}_{\epsilon}(\mathbf{x},\mathbf{y})\) is false, in an easily controllable computable neighborhood of \(\mathbf{x}\), with \(\epsilon=2^{-n}\), there exists some \(\delta=2^{-m}\) and some witness \(\mathcal{R}^{*}=\mathcal{R}^{*}_{\delta}\) at level \(\delta\) of that fact: this witness guarantees \(R^{\mathcal{P}}(\mathbf{x})\subseteq R^{\mathcal{P}}_{\epsilon}(\mathbf{x})\subseteq\mathcal{R}^{*}_{\delta}\), and when \(\overline{B}(\mathbf{y},\eta)\cap R^{\mathcal{P}}_{\epsilon}(\mathbf{x})=\emptyset\) the witness satisfies \(\overline{B}(\mathbf{y},\eta)\cap\mathcal{R}^{*}_{\delta}=\emptyset\).
Then, a strategy to produce all the rational balls whose closure does not intersect \(cls(R^{\mathcal{P}}_{\omega}(\mathbf{x}))\) is, for increasing \(n\), to generate in parallel all such balls in the corresponding witnesses. This will exhaust all such balls.
**Corollary 45** (Robust \(\Rightarrow\) computable).: _Assume the hypotheses of Theorem 44._
_If \(R^{\mathcal{P}}_{\omega}=R^{\mathcal{P}}\) then \(cls(R^{\mathcal{P}})\subseteq\mathbb{R}^{d}\times\mathbb{R}^{d}\) is computable._
Proof.: This follows from the fact that a closed set is computable iff it is c.e. closed and co-c.e. closed ([11, Proposition 5.16]), observing that above statements are effective given a name of \(\mathbf{x}\).
For closed sets, the notion of computability can be also interpreted as the possibility of being plotted with arbitrarily chosen precision: here the intuition is that \(\mathbf{z}/2^{n}\) corresponds to some pixel at precision \(2^{n}\), and that \(1\) is black (i.e. the pixel is plotted black), \(0\) is white (i.e. the pixel is plotted white).
**Theorem 46** ([11, Proposition 5.7],[34, pages 127-128]).: _For a closed set \(A\subseteq\mathbb{R}^{k}\), A is computable iff it can be plotted: there exists a computable function \(f:\mathbb{N}\times\mathbb{Z}^{k}\rightarrow\mathbb{N}\) with \(range(f)\subseteq\{0,1\}\) and such that for all \(n\in\mathbb{N}\) and \(\mathbf{z}\in\mathbb{Z}^{k}\)_
\[f(n,\mathbf{z})=\begin{cases}1&\text{if }B(\frac{\mathbf{z}}{2^{n}},2^{-n}) \cap A\neq\emptyset,\\ 0&\text{if }B(\frac{\mathbf{z}}{2^{n}},2.2^{-n})\cap A=\emptyset,\\ 0\text{ or }1&\text{otherwise.}\end{cases}\]
This is called _local computability_ in [12].
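As a concrete illustration (not from the paper), the following Python sketch plots the unit circle, whose distance function is explicitly computable, following the specification of \(f(n,\mathbf{z})\) above.

```python
# Plotting a computable closed set: pixel (n, z) is black when the ball
# B(z/2^n, 2^-n) may meet A, white when the doubled ball certainly misses it.
import math

def dist_to_circle(p):                       # d(., A) for A = unit circle
    return abs(math.hypot(*p) - 1.0)

def plot_pixel(n, z):
    d = dist_to_circle(tuple(zi / 2 ** n for zi in z))
    if d < 2 ** (-n):
        return 1                             # the small ball intersects A
    if d >= 2 * 2 ** (-n):
        return 0                             # the doubled ball misses A
    return 1                                 # borderline: either value is allowed

n = 3                                        # draw a coarse 2^-3 picture
for zy in range(2 ** n, -2 ** n - 1, -1):
    print("".join("#" if plot_pixel(n, (zx, zy)) else "."
                  for zx in range(-2 ** n, 2 ** n + 1)))
```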
**Corollary 47** (Robust \(\Rightarrow\) drawable).: _Assume the hypotheses of Theorem 44._
_If \(R^{\mathcal{P}}_{\omega}=R^{\mathcal{P}}\) then \(cls(R^{\mathcal{P}})\subseteq\mathbb{R}^{d}\times\mathbb{R}^{d}\) can be plotted._
Proof.: This follows from Corollary 45 and Theorem 46 (that is to say [11, Proposition 5.7],[34, page 127-128]).
This is even effective in a name of \(\mathbf{f}\). Actually, the converse is true, if some topological properties are assumed.
**Theorem 48**.: _Assume \(R^{\mathcal{P}}\) is closed, and can be plotted effectively in a name of \(\mathbf{f}\). Then the system is robust, i.e. \(R^{\mathcal{P}}_{\omega}=R^{\mathcal{P}}\)._
Actually, we prove the stronger statement that, if \(cls(R^{\mathcal{P}})\) can be plotted effectively in a name of \(\mathbf{f}\), then \(R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y})=R^{\mathcal{P}}(\mathbf{x}, \mathbf{y})\) except maybe for some \((\mathbf{x},\mathbf{y})\) in \(cls(R^{\mathcal{P}})-R^{\mathcal{P}}\).
Proof.: By Theorem 46, we know that \(cls(R^{\mathcal{P}})\) is computable, and this is known to be equivalent to the fact that the distance function \(d(\cdot,cls(R^{\mathcal{P}}))\) is computable ([34, Corollary 5.1.8]). That means that, given some rational ball, a name for \(\mathbf{x}\), and for \(\mathbf{y}\), with \(\neg R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\), the following procedure is guaranteed to terminate when \((\mathbf{x},\mathbf{y})\) is not in \(cls(R^{\mathcal{P}})-R^{\mathcal{P}}\): compute a name of \(d((\mathbf{x},\mathbf{y}),cls(R^{\mathcal{P}}))\) until a proof that it is strictly positive is found; \(d((\mathbf{x},\mathbf{y}),cls(R^{\mathcal{P}}))=0\) would mean that \((\mathbf{x},\mathbf{y})\in cls(R^{\mathcal{P}})\), but not in \(R^{\mathcal{P}}\).
It answers by reading a finite part, say \(m\) cells, of the names of \(\mathbf{x}\), \(\mathbf{y}\) and \(\mathbf{f}\), and hence gives the same answer if the names are altered after symbol number \(m\). That means there exists some precision \(\epsilon\) (related to \(m\), basically \(2^{-m}\) if we consider names converging exponentially fast) so that \(\neg R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\) remains true over some \(\epsilon\)-neighborhood of \(\mathbf{x}\) and \(\mathbf{y}\), and is unchanged by a small variation of \(\mathbf{f}\). In other words, for all \(\mathbf{x}\), \(\mathbf{y}\), when \(\neg R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\), there exists some \(\epsilon\) such that \(\neg R^{\mathcal{P}}_{\epsilon}(\mathbf{x},\mathbf{y})\), i.e. \(\neg R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y})\). When \(R^{\mathcal{P}}(\mathbf{x},\mathbf{y})\) holds, we always have that \(R^{\mathcal{P}}_{\omega}(\mathbf{x},\mathbf{y})\) holds.
It is also possible to adapt this at the complexity level, with hypotheses in the spirit of previous results. This would lead to a concept of _local poly-space computability_ in the
spirit of the local poly-time complexity introduced in [12]. The latter is devoted to discussing equivalence at the poly-time complexity of various representations of compact sets.
## VII Continuous time and Hybrid Systems
The previous approaches have a very high level of applicability, and are able to talk about systems that could be even continuous time, or hybrid.
**Definition 49**.: _A continuous-time dynamical system \(\mathcal{P}\) is given by a set \(X\subseteq\mathbb{R}^{d}\), and some Ordinary Differential Equation (ODE) of the form \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\) on \(X\)._
It is known that the maximal interval of existence of solutions can be non-computable, even for computable ODEs [20]. To simplify the discussion, we assume in this article that the ODEs have solutions3 defined over all \(\mathbb{R}\). A trajectory of \(\mathcal{P}\) starting at some \(\mathbf{x}_{0}\in X\) is a solution of the differential equation with initial condition \(\mathbf{x}(0)=\mathbf{x}_{0}\), defined as a continuous right derivable function \(\xi:\mathbb{R}^{+}\to X\) such that \(\xi(0)=\mathbf{x}_{0}\) and, for every \(t\), \(\mathbf{f}(\xi(t))\) is equal to the right derivative of \(\xi(t)\). To each continuous time dynamical system \(\mathcal{P}\) we associate its reachability relation \(R^{\mathcal{P}}\) as before.
Footnote 3: Notice that a non-total solution must necessarily leave any compact, see e.g. [21], so when \(X\) is compact this is not a restriction.
For any \(\varepsilon>0\) the \(\varepsilon\)-perturbed system \(\mathcal{P}_{\varepsilon}\) is described by the differential inclusion \(d(\dot{\mathbf{x}},\mathbf{f}(\mathbf{x}))<\varepsilon\). This non-deterministic system can be considered as \(\mathcal{P}\) submitted to a small noise of magnitude \(\varepsilon\). We denote reachability in the system \(\mathcal{P}_{\varepsilon}\) by \(R^{\mathcal{P}}_{\varepsilon}\). The limit reachability relation \(R^{\mathcal{P}}_{\omega}\) is introduced as before.
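As a small numerical illustration (not part of the paper), the following Python sketch applies an Euler discretization to the \(\varepsilon\)-perturbed inclusion for \(\mathbf{f}(\mathbf{x})=-\mathbf{x}\): without noise every trajectory converges to the origin, while a perturbation of magnitude less than \(\varepsilon\) can keep the state at distance roughly \(\varepsilon\) from it, so \(R^{\mathcal{P}}_{\varepsilon}\) may strictly contain \(R^{\mathcal{P}}\).

```python
# Euler discretization of the eps-perturbed inclusion d(x', f(x)) < eps, with an
# adversarial perturbation of norm 0.99*eps pushing the state away from 0.
def simulate(f, x0, eps, dt=0.01, steps=5000):
    x = list(x0)
    for _ in range(steps):
        v = f(x)
        norm = sum(xi * xi for xi in x) ** 0.5
        push = [0.99 * eps * xi / norm if norm > 1e-12 else 0.0 for xi in x]
        x = [xi + dt * (vi + pi) for xi, vi, pi in zip(x, v, push)]
    return x

for eps in (0.0, 0.1):
    x = simulate(lambda x: [-xi for xi in x], [1.0, 0.0], eps)
    print(eps, sum(xi * xi for xi in x) ** 0.5)
# eps = 0.0 : final norm ~ 0      (the unperturbed trajectory goes to the origin)
# eps = 0.1 : final norm ~ 0.099  (the perturbed system can stay away from it)
```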
**Theorem 50** (Perturbed reachability is co-r.e.).: _Consider a continuous time dynamical system, with \(\mathbf{f}\) locally Lipschitz and computable, and whose domain is a computable compact._
_Then, for all \(\mathbf{x}\), \(cls(R^{\mathcal{P}}_{\omega}(\mathbf{x}))\ \subseteq\mathbb{R}^{d}\) is a co-c.e. closed subset._
Its proof can be considered as the main technical result established in [29]. Independently:
Proof.: The proof is similar to the proof of Theorems 38 and 24: adapt the construction of the involved graph \(G_{m}\) to cover the flow of the trajectory. With our hypotheses the solutions are defined over all \(\mathbb{R}\). It is proved in [20, Theorem 3.1] that Lipschitz (and even effectively locally Lipschitz) homogeneous computable ODEs have computable solutions over their maximal domain, so this is feasible.
**Corollary 51** (Robust \(\Rightarrow\) computable).: _Assume the hypotheses of Theorem 50. If \(R^{\mathcal{P}}_{\omega}=R^{\mathcal{P}}\) then \(cls(R^{\mathcal{P}})\subseteq\mathbb{R}^{d}\times\mathbb{R}^{d}\) is computable._
We can also state the equivalent of previous statements at the complexity level, assuming basic hypotheses that make computability of the solutions to remain in polynomial space.
Actually, we can even deal with so-called hybrid systems. Various models have been considered in the literature, but one common point is that they all correspond to continuous time dynamical systems where the dynamics might be discontinuous (hence not computable). To be very general, a dynamical system can be described by its flow \(\phi(\mathbf{x},t)\) (the idea is that, given \(\mathbf{x}\) and a time \(t\), \(\phi\) maps \(\mathbf{x}\) to the position at time \(t\) of the trajectory starting from \(\mathbf{x}\)). By considering \(T\) to be discrete, this even covers discrete time dynamical systems, and with \(T=[0,+\infty)\), continuous time and hybrid systems.
**Definition 52**.: _A hybrid system \(\mathcal{P}\) is given by a set \(X\subseteq\mathbb{R}^{d}\), and a semi-group \(T\), and some flow function \(\phi:X\times T\to X\) satisfying \(\phi(\mathbf{x},0)=\mathbf{x}\) and \(\phi(\phi(\mathbf{x},t),t^{\prime})=\phi(\mathbf{x},t+t^{\prime})\)._
Previous proofs are basically using the fact that
1. reachability \(R^{\mathcal{P}}\) is c.e;
2. perturbed reachability is co-c.e.
The first point is usually very clear in any of the considered models, as it is, roughly speaking, always expected that one can at least simulate the model. The second point is, on a given class of models, usually less clear, but if we look at our proof methods, typically the proof of Theorems 38 and 24, we see that we only need to be able to construct some computable abstraction satisfying Claim 1 and Claim 2.
The key remark is that these properties are not talking about the function \(\mathbf{f}\) itself, but about its graph. The major point is that assuming that a function has the closure of its graph computable is a much more general concept than assuming computability. For example, the characteristic function \(\chi_{[0,\infty)}\) is not computable, as it is not continuous. But its graph, as well as its closure, is easy to draw (made of two segments): see the discussions e.g. in [12]. One usually expects to be able to draw the closure of the graph of the flow, and this is sufficient to get results similar to the previous ones, relating robustness to computability. In particular, this allowed us to talk about discontinuous functions, in particular functions not computable in CA, as we did.
## VIII Other perturbations
We can also consider time-perturbed TMs: the idea is that the \(n\)-perturbed version of the machine \(\mathcal{M}\) is unable to remain correct after a time \(n\). Given an integer \(n>0\), the \(n\)-perturbed version of \(\mathcal{M}\) is defined exactly as \(\mathcal{M}\), except that after a time greater than \(n\) its internal state \(q\) can change in a non-deterministic manner: given a configuration \((q,\cdots a_{-n-1}a_{-n}a_{-n+1}\cdots a_{-1},a_{0}a_{1}\cdots a_{n-1}a_{n}a_{n+1}\cdots)\) (with \(\neg(q\in F\cup R)\)), the \(n\)-perturbed version of \(\mathcal{M}\) may go to \((q^{\prime},\cdots a_{-n-1}a_{-n}a_{-n+1}\cdots a_{-1},a_{0}a_{1}\cdots a_{n-1}a_{n}a_{n+1}\cdots)\) for any 4\(q^{\prime}\in Q\).
Footnote 4: In particular, may accept. More subtle perturbations can be considered, keeping the results valid.
Let \(L^{n}(\mathcal{M})\) be the time \(n\)-perturbed language of \(\mathcal{M}\), i.e., the set of words in \(\Sigma^{*}\) that are accepted by the time \(n\)-perturbed version of \(\mathcal{M}\). From definitions, and using similar ideas:
**Lemma 53**.: \(L(\mathcal{M})\subseteq L^{\omega}(\mathcal{M})\subseteq\cdots\subseteq L^{2}( \mathcal{M})\subseteq L^{1}(\mathcal{M})\)_._
**Theorem 54**.: \(L^{\omega}(\mathcal{M})\) _is in the class \(\Pi^{0}_{1}\)._
Proof.: For a word \(w\), \(w\not\in L^{\omega}(\mathcal{M})\), iff there exists \(n\in\mathbb{N}\) such that \(w\not\in L^{n}(\mathcal{M})\). As \(L^{n}(\mathcal{M})\) is decidable uniformly in
\(n\), the complement of \(L^{\omega}(\mathcal{M})\) is computably enumerable, as it is the uniform in \(n\) union of decidable sets. We get that \(L^{\omega}(\mathcal{M})\in\Pi^{0}_{1}\) (co-computably enumerable).
**Corollary 55** (Length robust \(\Rightarrow\) decidable).: _If \(L^{\omega}(\mathcal{M})=L(\mathcal{M})\) then \(L(\mathcal{M})\) is decidable._
**Theorem 56**.: _When \(M\) always stops, \(L^{\omega}(\mathcal{M})=L(\mathcal{M})\)._
Proof.: We directly have \(L(\mathcal{M})\subseteq L^{\omega}(\mathcal{M})\).
Let \(w\in L^{\omega}(\mathcal{M})=\bigcap_{n\in\mathbb{N}}L^{n}(\mathcal{M})\), so \(\forall n\in\mathbb{N}\), \(w\in L^{n}(\mathcal{M})\). By contradiction, assume that \(L(\mathcal{M})\subsetneq L^{\omega}(\mathcal{M})\). Then there exists \(w\in L^{\omega}(\mathcal{M})\) such that \(w\not\in L(\mathcal{M})\). Since \(\mathcal{M}\) always terminates, it rejects \(w\) after some time \(t_{w}\). But then \(w\not\in L^{n}(\mathcal{M})\) for any \(n\geq t_{w}+2\), and hence \(w\not\in L^{\omega}(\mathcal{M})\), a contradiction.
**Definition 57**.: _Given some function \(f:\mathbb{N}\to\mathbb{N}\), we write \(L^{\{f\}}(\mathcal{M})\) for the set of words accepted by \(\mathcal{M}\) with time perturbation \(f\): \(L^{\{f\}}(\mathcal{M})=\{w|\ w\in L^{f(\ell(w))}(\mathcal{M})\}\)._
**Theorem 58** (Polynomially robust to time \(\Leftrightarrow\mathrm{PTIME}\)).: _A language \(L\) is in \(\mathrm{PTIME}\) iff for some \(\mathcal{M}\) and some polynomial \(p\), \(L=L(\mathcal{M})=L^{\{p\}}(\mathcal{M})\)._
Proof.: This can be established as for space perturbation. Viewed independently, the intuition of the proof is that the polynomial in \(n\) can be seen as a time-out: \(\mathcal{M}\) works in polynomial time \(p(n)\), hence in at most \(p(n)\) steps, so the machine deciding \(L^{n}(\mathcal{M})\) can reject if \(\mathcal{M}\) has not accepted or rejected within \(p(n)\) steps.
(\(\Rightarrow\)) If \(\mathcal{M}\) always terminates and works in polynomial time, then there exists a polynomial \(q\) that bounds the execution time of \(\mathcal{M}\), so we have a polynomial \(p\) (with \(p\geq q\)) such that \(L^{\{p\}}(\mathcal{M})\subseteq L(\mathcal{M})\). We have the other inclusion by definition.
(\(\Leftarrow\)) We always have \(L^{\{p\}}(\mathcal{M})\in\mathrm{PTIME}\), and since \(L^{\{p\}}(\mathcal{M})=L\), it follows that \(L\in\mathrm{PTIME}\).
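A minimal Python sketch of this time-out construction follows; `tm_step` is a hypothetical step-by-step simulator of \(\mathcal{M}\), with the strings "accept" and "reject" standing for halting configurations.

```python
# Deciding with a time-out: simulate M for p(n) steps and reject on time-out.
def decide_with_timeout(initial_config, tm_step, p, n):
    config = initial_config
    for _ in range(p(n)):
        if config == "accept":
            return True
        if config == "reject":
            return False
        config = tm_step(config)
    return False          # not accepted within p(n) steps: time-out, reject

# Toy usage: a "machine" whose configuration is a counter that decrements and
# accepts when it reaches 0 (purely illustrative).
step = lambda c: "accept" if c == 1 else c - 1
print(decide_with_timeout(5, step, lambda n: 10, n=3))    # True  (halts in time)
print(decide_with_timeout(50, step, lambda n: 10, n=3))   # False (time-out)
```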
**Theorem 59** (Polynomially robust to length \(\Leftrightarrow\mathrm{PTIME}\)).: _Any \(\mathrm{PTIME}\) language is reducible to the reachability relation of PAM with \(R^{\mathcal{P}}=L^{\{p\}}(\mathcal{P})\) for some polynomial \(p\)._
Fix some distance \(\delta(\cdot,\cdot)\) over the domain \(X\). A finite trajectory of discrete time dynamical system \(\mathcal{P}\) is a finite sequence \((x_{t})_{t\in 0\ldots T}\) such that \(x_{t+1}=f\left(x_{t}\right)\) for all \(0\leq t<T\). Its associated _length_ is defined as \(\mathcal{L}=\sum_{i=0}^{T-1}\delta(x_{i},x_{i+1})\).
We could also consider length-perturbed discrete time dynamical systems: the idea is that, given \(L>0\), the \(L\)-perturbed version of the system is unable to remain correct after a length \(L\). We then define \(R^{\mathcal{P},L}(\mathbf{x},\mathbf{y})\) to hold iff there exists a finite trajectory of \(\mathcal{P}\) from \(\mathbf{x}\) to \(\mathbf{y}\) of length \(\mathcal{L}\leq L\).
When considering TMs as such dynamical systems, \(\delta(\cdot,\cdot)\) is basically some distance over configurations of TMs. Word \(w\) is said to be accepted in length \(d\) if the trajectory starting from \(C_{0}[w]\) to the accepting configuration has length \(\leq d\).
**Definition 60**.: _Distance \(\delta(C,C^{\prime})\) is called time metric iff for \(C\vdash C^{\prime}\), we have \(\delta(C,C^{\prime})\leq p(\ell(C))\), and \(\delta(C,C^{\prime})\geq\frac{1}{p(\ell(C))}\) for some polynomial \(p\)._
Write \(\mathcal{L}(\mathcal{M},t)\) for the set of words accepted by \(\mathcal{M}\) in a length less than \(t\). Given some function \(f:\mathbb{N}\to\mathbb{N}\), we write \(L^{(f)}(\mathcal{M})\) for \(L^{(f)}(\mathcal{M})=\{w|\ w\in\mathcal{L}(\mathcal{M},f(\ell(w)))\}\).
**Theorem 61** (Length robust for some time-metric distance \(\Leftrightarrow\mathrm{PTIME}\)).: _Assume \(\delta(\cdot,\cdot)\) is time metric. Then, a language \(L\) is in \(\mathrm{PTIME}\) iff for some Turing machine \(\mathcal{M}\) and some polynomial \(p(n)\), \(L=L(\mathcal{M})=L^{(p)}(\mathcal{M})\)._
Proof.: Let \(w\) be the input of size \(n\). The execution of a Turing machine is a sequence \((C_{i})=(q_{i},l_{i},r_{i})\).
(\(\Rightarrow\)) If \(L\) is in \(\mathrm{PTIME}\), then there is a Turing machine \(\mathcal{M}\) that computes \(L\) in polynomial time \(p(n)\). Since the distance between two successive configurations is bounded by a polynomial \(q(n)\), the total length \(\mathcal{L}\) satisfies \(\mathcal{L}=\sum_{i=0}^{p(n)-1}d(C_{i},C_{i+1})\leq\sum_{i=0}^{p(n)-1}q(n)\), which is a polynomial in \(n\).
Thus \(L\) is computable in polynomial length.
(\(\Leftarrow\)) Assume \(L\) is computable in polynomial length \(p(n)\).
Let \(T\) be fixed. Then we have, for all \(i\in\{1\ldots T\}\): \(d(C_{i},C_{i+1})\geq\frac{1}{poly(\ell(C_{i}))}\), thus \(\sum_{i=0}^{T-1}d(C_{i},C_{i+1})\geq\frac{T}{poly(\ell(C_{min}))}\), where \(C_{min}\) is the configuration chosen to minimize the previous lower bound.
Taking \(T=p(n)\times poly(\ell(C_{min}))\), we get that a Turing machine simulating the trajectory will accept or reject after a polynomial number of steps, thus \(L\in\mathrm{PTIME}\).
One way to obtain such a distance \(\delta(C,C^{\prime})\) is to take the Euclidean distance between \(\Upsilon(C)\) and \(\Upsilon(C^{\prime})\) for \(\gamma=\gamma_{[0,1]}\).
**Proposition 62**.: _The obtained distance is time metric._
Proof.: We consider \(C_{i}=(q_{i},l_{i},r_{i})\) and \(C_{i+1}=(q_{i+1},l_{i+1},r_{i+1})\), with \(C_{i}\vdash C_{i+1}\). We write \(\overline{l_{i}}=\Upsilon(l_{i})\) and \(\overline{r_{i}}=\Upsilon(r_{i})\).
Then, from the definition of \(\Upsilon\) and \(\gamma_{[0,1]}\):
* \(\left|\overline{r_{i+1}}-\overline{r_{i}}\right|\leq 1\).
* \(\left|\overline{l_{i+1}}-\overline{l_{i}}\right|\leq 1\).
* And the gaps between \(\overline{r_{i+1}}\) and \(\overline{r_{i}}\), and \(\overline{l_{i+1}}\) and \(\overline{l_{i}}\) remain polynomial in the size of a configuration.
This provides property 1.
By the encoding of the real numbers over the tapes of the Turing machines, the gap between two consecutive configurations is at least \(\frac{1}{2}\) (we assume that the Turing machine is not allowed to do nothing: that would clearly correspond to a looping situation). This provides property 2.
Given some function \(f:\mathbb{N}\to\mathbb{N}\), we write \(R^{\mathcal{P},(f)}\) for the set of words accepted by \(\mathcal{M}\) with length perturbation \(f\):
\(R^{\mathcal{M},(f)}=\{w|\ w\in R^{\mathcal{M},f(\ell(w))}\}\).
**Theorem 63** (Polynomially length robust \(\Leftrightarrow\mathrm{PTIME}\)).: _Assume distance \(d\) is time metric. Assume \(R^{\mathcal{P}}=R^{\mathcal{P},(p)}\) for some polynomial \(p\). Then \(R^{\mathcal{P}}\) is in \(\mathrm{PTIME}\)._
Proof.: Since \(d\) is time metric, a polynomial time and polynomial length are essentially the same, so the proof is very analogous to the one of Theorem 61.
## IX Analog complexity under robustness Prism
The following was established in [7] (\(\operatorname{len}_{\mathbf{y}}(0,t)\) stands for the length of the curve between time \(0\) and \(t\)):
**Theorem 64** (Analog characterization of \(\operatorname{PTIME}\) [7, Theorem 2.2]).: _A decision problem (language) \(\mathcal{L}\) belongs to the class \(\operatorname{PTIME}\) if and only if it is poly-length-analog-recognizable. That is to say: there exist vectors \(\mathbf{p}\) and \(\mathbf{q}\) of polynomials with rational coefficients and a polynomial \(\Omega:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\), such that for all \(w\in\Sigma^{*}\), there is a (unique) \(\mathbf{y}:\mathbb{R}_{+}\rightarrow\mathbb{R}^{d}\) such that for all \(t\in\mathbb{R}_{+}\): 1) \(\mathbf{y}(0)=\mathbf{q}(\gamma_{[0,1]}^{k}(w))\) and \(\mathbf{y}^{\prime}(t)=\mathbf{p}(\mathbf{y}(t))\) 2) if \(|\mathbf{y}_{1}(t)|\geqslant 1\) then \(|\mathbf{y}_{1}(u)|\geqslant 1\) for all \(u\geqslant t\) 3) if \(w\in\mathcal{L}\) (resp. \(\notin\mathcal{L}\)) and \(\operatorname{len}_{\mathbf{y}}(0,t)\geqslant\Omega(|w|)\) then \(\mathbf{y}_{1}(t)\geqslant 1\) (resp. \(\leqslant-1\)) 4) \(\operatorname{len}_{\mathbf{y}}(0,t)\geqslant t\)_
Condition 2) basically states that this is a length-robust system. Condition 4) is the equivalent of the second condition of Definition 60, which guarantees that length and time are equivalent for Turing machines. The result is obtained using a step-by-step emulation with the encoding \(\gamma_{[0,1]}^{k}\).
Very recently, a characterization of \(\operatorname{PSPACE}\) has also been obtained: see [17] for full details. It is basically of the following form.
**Theorem 65** ([6, Theorem 3.11],[17]).: _A language \(L\subseteq\Gamma^{*}\) belongs to \(\operatorname{PSPACE}\) iff there are ODEs \(\mathbf{y}^{\prime}=\mathbf{p}(\mathbf{y},\mathbf{z})\) and \(\mathbf{z}^{\prime}=\mathbf{q}(\mathbf{y},\mathbf{z})\), where \(\mathbf{p},\mathbf{q}\) are vectors of polynomials, there is a polynomial \(u\), (vector-valued) functions \(\mathbf{r},\mathbf{s}\in\operatorname{GPVAL}\), a \(\gamma_{[0,\bar{\mathbf{z}}]}^{k}\)-bound \(\phi\), and \(\varepsilon>0\), \(\tau\geq\alpha>0\), \(\alpha,\tau\in\mathbb{Q}\), such that, for all \(w\in\Sigma^{*}\), one has that the solution \((\mathbf{y},\mathbf{z})\) of the ODEs with the initial condition \(\mathbf{y}(0)=\mathbf{r}(x)\) and \(\mathbf{z}(0)=\mathbf{s}(\mathbf{y}(0))=\mathbf{s}\circ\mathbf{r}(0)\), with \(d(\gamma_{[0,\bar{\mathbf{z}}]}^{k}(w),\mathbf{x})<1/4\), satisfies: 1. If \(\bar{t}_{1}>0\) is such that \(|\mathbf{y}_{1}(\bar{t}_{1})|\geq 1\), then \(|\mathbf{y}_{1}(t)|\geq 1\) for all \(t\geq\bar{t}_{1}\) and \(|\mathbf{y}_{1}(t)|\geq 3/2\) for all \(t\geq\bar{t}_{1}+1\); 2. If \(w\in L\) (respectively \(\notin L\)) then there is some \(\bar{t}_{1}>0\) such that \(\mathbf{y}_{1}(\bar{t}_{1})\geq 1\) (respectively \(\leq-1\)); 3. \(\|(\mathbf{y}(t),\mathbf{z}(t))\|\leqslant\phi\circ u(|w|)\), for all \(t\geq 0\); 4. Suppose that \((\tilde{\mathbf{y}},\tilde{\mathbf{z}})\) satisfies the ODE with \(\mathbf{y}(0)=\mathbf{r}(x)\), \(\mathbf{z}(0)=\mathbf{s}(\mathbf{y}(0))=\mathbf{s}\circ\mathbf{r}(\mathbf{x})\), except possibly at time instants \(t_{i}\), for \(i=1,2,\ldots\), satisfying: a) \(t_{i}-t_{i-1}\geq\tau\), where \(t_{0}=0\); b) \(\|\tilde{\mathbf{y}}(t_{i})-\lim_{t\rightarrow t_{i}^{-}}\tilde{\mathbf{y}}(t) \|\leq\varepsilon\) and \(\tilde{\mathbf{z}}(t_{i})=\mathbf{s}(\tilde{\mathbf{y}}(t_{i}))\); c) \(\lim_{t\rightarrow\bar{t}_{i}^{-}}\tilde{\mathbf{z}}_{1}(t)>1\). Then conditions 1,2,3,5 hold for \((\tilde{\mathbf{y}},\tilde{\mathbf{z}})\); 5) For any \(b>a\geq 0\) such that \(|b-a|\geq\tau\), there is an interval \(I=[c,d]\subseteq[a,b]\), with \(|d-c|\geq\alpha\), such that \(\mathbf{z}_{1}(t)\geq 3/2\) for all \(t\in I\)._
The condition \(d(\gamma_{[0,\bar{\mathbf{z}}]}^{k}(w),x)<1/4\) and conditions 4) and 5) basically impose that there exists some abstraction graph. Condition 3) makes it keep a polynomial \(\log\)-size, and conditions 1) and 2) impose that the system is eventually decisional. This guarantees \(\operatorname{PSPACE}\), and allows a robust emulation of a TM. However, the space used by the ODEs cannot be "read" easily (via a concept such as length in the previous theorem).
This uses a rather natural encoding, but the system does not live in a compact. If one wants to remain bounded, an alternative is to use the trick of [8], based on a change of variable, at the price of a rather ad hoc encoding. This is basically based on the following:
**Theorem 66** (Robust simulation of a TM over \(\mathbb{R}^{6}\)[18]).: _For any TM \(M\), there is an analytic and computable ODE \(y^{\prime}=g_{M}(y)\) defined over \(\mathbb{R}^{6}\) which simulates \(M\) using the encoding \(\gamma_{[0,\bar{\mathbf{z}}]}\) and remains valid for perturbations less than \(\varepsilon\leq 1/4\)._
The idea is that if \(\phi\) is a solution of \(y^{\prime}=\mathbf{g}_{M}(y)\) simulating \(M\) on \(\mathbb{R}^{6}\), we can consider \(\phi_{1}=\frac{2}{\pi}\arctan\phi\) as a corresponding emulation of \(M\) on \((-1,1)^{6}\). Then \(\phi_{1}\) will be solution of the ODE \(\phi_{1}^{\prime}=\mathbf{f}_{M}(\phi_{1})\) with \(\mathbf{f}_{M}(x)=\frac{2}{\pi}\frac{1}{1+\tan^{2}\left(\frac{\pi x}{2}\right)} \mathbf{g}_{M}\left(\tan\left(\frac{\pi x}{2}\right)\right)\). Consequently, the continuous time dynamical system given by \(y^{\prime}=\mathbf{f}_{M}(y)\) simulates TM \(M\) on \(X\), if the input word \(w=a_{1}\ldots a_{n}\) is encoded in \(X\) by \(\gamma_{arctan}(w)=\frac{2}{\pi}\arctan(\gamma_{[0,\bar{\mathbf{z}}]}(w))=\frac{2}{ \pi}\arctan(w_{0}+w_{1}2+\cdots+w_{n}2^{n})\). Then a computation will remain correct if the states are not perturbed more than \(\arctan(s(w)+\varepsilon)-\arctan(s(w))\), where \(s(w)\) is an upper bound of the size of tape on word \(w\).
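For completeness, the change of variables can be checked coordinate-wise by a one-line computation, using \(\phi^{\prime}=\mathbf{g}_{M}(\phi)\) and \(\phi=\tan\left(\frac{\pi\phi_{1}}{2}\right)\):

\[\phi_{1}^{\prime}=\frac{2}{\pi}\,\frac{\phi^{\prime}}{1+\phi^{2}}=\frac{2}{\pi}\,\frac{1}{1+\tan^{2}\left(\frac{\pi\phi_{1}}{2}\right)}\,\mathbf{g}_{M}\left(\tan\left(\frac{\pi\phi_{1}}{2}\right)\right)=\mathbf{f}_{M}(\phi_{1}).\]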
However, the used encoding \(\gamma_{arctan}\) is rather artificial, and mainly the system is defined over some open bounded domain, not compact. Furthermore, robustness holds for points close to the image by \(\Upsilon\) of configurations, but not for arbitrary points.
The question of a simpler characterization of \(\operatorname{PSPACE}\), over a compact, using a simple encoding remains open. However, we believe that our results help to understand that space-complexity is related to robustness to precision over a compact, or more generally to the \(\log\)-size of some abstraction graph over general domains.
Notice that we can prove that reachability is \(\operatorname{PSPACE}\)-complete for polynomially robust pODE systems. But the point of the previous results (and open question) is to get a _uniform_ embedding: given some machine \(M\), we would like the same ODE to work for any input size. |
2302.04411 | Geometry of Score Based Generative Models | In this work, we look at Score-based generative models (also called diffusion
generative models) from a geometric perspective. From a new view point, we
prove that both the forward and backward process of adding noise and generating
from noise are Wasserstein gradient flow in the space of probability measures.
We are the first to prove this connection. Our understanding of Score-based
(and Diffusion) generative models have matured and become more complete by
drawing ideas from different fields like Bayesian inference, control theory,
stochastic differential equation and Schrodinger bridge. However, many open
questions and challenges remain. One problem, for example, is how to decrease
the sampling time? We demonstrate that looking from geometric perspective
enables us to answer many of these questions and provide new interpretations to
some known results. Furthermore, geometric perspective enables us to devise an
intuitive geometric solution to the problem of faster sampling. By augmenting
traditional score-based generative models with a projection step, we show that
we can generate high quality images with significantly fewer sampling-steps. | Sandesh Ghimire, Jinyang Liu, Armand Comas, Davin Hill, Aria Masoomi, Octavia Camps, Jennifer Dy | 2023-02-09T02:39:11Z | http://arxiv.org/abs/2302.04411v1 | # Geometry of Score Based Generative Models
###### Abstract
In this work, we look at Score-based generative models (also called diffusion generative models) from a geometric perspective. From a new view point, we prove that both the forward and backward process of adding noise and generating from noise are Wasserstein gradient flow in the space of probability measures. We are the first to prove this connection. Our understanding of Score-based (and Diffusion) generative models have matured and become more complete by drawing ideas from different fields like Bayesian inference, control theory, stochastic differential equation and Schrodinger bridge. However, many open questions and challenges remain. One problem, for example, is how to decrease the sampling time? We demonstrate that looking from geometric perspective enables us to answer many of these questions and provide new interpretations to some known results. Furthermore, geometric perspective enables us to devise an intuitive geometric solution to the problem of faster sampling. By augmenting traditional score-based generative models with a projection step, we show that we can generate high quality images with significantly fewer sampling-steps.
Machine Learning, ICML
## 1 Introduction
Score-based (or diffusion) models are a new type of generative model in the field of computer vision and machine learning, achieving state-of-the-art results in image synthesis (Dhariwal and Nichol, 2021) and log likelihood (Kingma et al., 2021). They have recently gained popularity due to interesting applications such as text-to-image generation (DALL-E (Ramesh et al., 2022), (Rombach et al., 2022) and Imagen (Saharia et al., 2022)), image super-resolution, image editing (Meng et al., 2022), etc. Score-based generative models have enjoyed diverse perspectives from different fields. Originally, diffusion models (DDPM) were developed from evidence lower bound (ELBO) maximization on the data log likelihood (Ho et al., 2020). Song and Ermon (2019) showed that we can learn the gradient of the log likelihood (called the score function) and use it to generate images. Song et al. (2021) showed that the epsilon function in DDPM is in fact a scaled version of the score function. They further generalized these models to a continuous setting as stochastic differential equations (Song et al., 2021).
In this work, we present a completely different viewpoint on score-based generative models: a geometric perspective. To the best of our knowledge, we are the first to explore the geometric connection of these generative models. Applying the solid mathematical framework from the area of Wasserstein gradient flow (Jordan et al., 1998; Ambrosio et al., 2005; Wibisono, 2018; Salim et al., 2020; Korba et al., 2020), we show that the forward and backward processes of adding noise and generating images from the noise are in fact equivalent to moving on a gradient-flow-path in a metric space of probability distributions following the Wasserstein gradient flow equation.
While our understanding of score-based generative models has matured over time, a few important questions remain unanswered. For example, why is it a good idea to choose the forward and reverse variances to be the same? Can we choose the reverse variance differently? Are score-based generative models the same as energy-based models (Xie et al., 2016; Gao et al., 2020; Du et al., 2021)? Furthermore, new models have been proposed, like Wavefit (Koizumi et al., 2022), which generalizes diffusion sampling to a proximal-gradient type of update. How can we explain this type of algorithm? In this work, we demonstrate that the geometric connection investigated here helps to answer these questions from a geometric point of view.
Figure 1: Both the forward diffusion process and reverse generation process in diffusion generative models correspond to Wasserstein gradient flow along the same gradient-flow-path.
In addition to conceptual advantages and novel perspectives, the geometric framework enables us to design practical algorithms with faster sampling capability. Score-based generative models work remarkably well when the number of sampling steps is large (i.e. the step-size is small). However, the sampling time is also large for such fine schemes. As we decrease the number of sampling steps, the samples move away from the gradient-flow-path, incurring error in each step and resulting in a high overall error. To minimize such error and achieve high-quality samples even with a small number of sampling steps, we propose to project intermediate samples back onto the gradient-flow-path after every step. To achieve this, we propose an efficient estimation of the Wasserstein gradient to descend towards the flow-path. As demonstrated in the results section, our proposed method significantly reduces error for smaller numbers of sampling steps. Below we summarize our contributions. All complete proofs are included in the appendix.
1. To the best of our knowledge, this is the first work to theoretically prove the connection between score-based generative models and Wasserstein gradient flow. We establish this relationship through Theorems 1 and 2.
2. This connection sheds light on several interesting questions: 1) the reverse variance in score-based generative models, 2) the connection between score-based model and energy based model, and 3) the use of proximal gradient algorithms as proposed in recent works.
3. Based on these insights, we propose a new algorithm which generalizes the score-based model and allows for significantly faster sampling, which would otherwise be very difficult to achieve. To achieve this, we also propose an efficient Wasserstein gradient estimation algorithm.
## 2 Related Works
Early works on diffusion models were based on matching the forward and reverse joint distributions through bounds on the log likelihood (Ho et al., 2020; Sohl-Dickstein et al., 2015). Song and Ermon (2019) proposed a score-based generative model motivated by Langevin dynamics and estimation of the score function. Later, Song et al. (2021) showed that the two approaches are actually equivalent and can be generalized further to the continuous-time setting through stochastic differential equations. In a more theoretical direction, score-based optimization has been shown to be equivalent to likelihood maximization through the Feynman-Kac theorem (Chen et al., 2022; Huang et al., 2021). Other notable works interpret the forward diffusion and generation as solving the Schrodinger bridge problem (De Bortoli et al., 2021). Many approaches have been proposed to speed up the sampling process through clever ways of solving the differential equations (Lu et al., 2022).
In their seminal work, Jordan, Kinderlehrer, and Otto (JKO) proved the connection between Wasserstein gradient flow and the diffusion systems governed by Fokker-Planck equations (Jordan et al., 1998). This result has been vastly generalized and formalized by Villani (2003; 2009) and Ambrosio et al. (2005), giving birth to the theory of Wasserstein gradient flow and optimization on the space of probability measures. Several notable works have followed in machine learning (Wibisono, 2018; Korba et al., 2020; Salim et al., 2020), for example for sampling and generative modeling.
## 3 Preliminaries
### Notations
Let \(\mathcal{B}(\mathcal{X})\) denote the Borel \(\sigma\)-algebra over \(\mathcal{X}\), and let \(\mu\) denote a probability measure on \(\mathcal{X}\). \(\mathcal{P}_{2}(\mathcal{X})\) denotes the space of probability measures \(\mu\) on \(\mathcal{X}\) with finite second-order moment. For any \(\mu\in\mathcal{P}_{2}(\mathcal{X})\), \(L^{2}(\mu)\) is the space of functions \(f:\mathcal{X}\rightarrow\mathcal{X}\) such that \(\int||f||^{2}d\mu<\infty\) (Ambrosio et al., 2005; Korba et al., 2020). Let \(T:\mathcal{X}\rightarrow\mathcal{X}\); then \(T_{\#}\mu\) denotes the pushforward measure of \(\mu\) by \(T\), such that the transfer lemma \(\int\phi(T(x))d\mu(x)=\int\phi(y)dT_{\#}\mu(y)\) holds for any measurable bounded function \(\phi\). We use the Wasserstein-2 distance as a metric on the space of probability measures. The Wasserstein-2 distance is defined as \(W_{2}^{2}(\mu,\nu)=\inf_{s\in\mathcal{S}(\mu,\nu)}\int||x-y||^{2}ds(x,y)\), where \(\mu,\nu\in\mathcal{P}_{2}(\mathcal{X})\) and \(\mathcal{S}(\mu,\nu)\) is the set of couplings between \(\mu\) and \(\nu\), i.e. the set of nonnegative measures \(s\) over \(\mathcal{X}\times\mathcal{X}\) such that their projections on the first and second components are \(P_{\#}s=\mu\) and \(Q_{\#}s=\nu\), where \(P:(x,y)\mapsto x\) and \(Q:(x,y)\mapsto y\) (Villani, 2003).
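For intuition, in one dimension the Wasserstein-2 distance between two empirical measures with the same number of equally weighted atoms reduces to comparing sorted samples, since the monotone coupling is optimal. A minimal sketch (illustrative only; the example distributions are assumptions, not from the paper):

```python
import numpy as np

def w2_empirical_1d(x, y):
    """W2 distance between two 1-D empirical measures with equally many
    equally weighted atoms; the optimal coupling is the sorted matching."""
    x, y = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((x - y) ** 2))

# W2 between N(0, 1) and N(2, 1) is 2; the empirical estimate is close to it.
rng = np.random.default_rng(0)
print(w2_empirical_1d(rng.normal(0, 1, 10_000), rng.normal(2, 1, 10_000)))
```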
### Wasserstein Gradient Flow
Let \((\mu_{t})_{t\in(0,T)}\) denote a family of probability measures. This family satisfies a continuity equation if there exists a family of velocity fields, \((v_{t})_{t\in(0,T)}\) such that
\[\frac{\partial\mu_{t}}{\partial t}+div(\mu_{t}v_{t})=0 \tag{1}\]
in a distributional sense. It is also absolutely continuous if \(||v_{t}||_{L^{2}(\mu_{t})}\) is integrable over \((0,T)\). Among all possible \(v_{t}\), there is one with minimum \(L^{2}(\mu_{t})\) norm, and it lies on the
tangent space of \(\mathcal{P}_{2}(\mathcal{X})\) and is called tangent vector field (Ambrosio et al., 2005) Chapter 8.
We define a functional on the space of probability measures, \(\mathcal{F}(\mu):\mathcal{P}_{2}(\mathcal{X})\rightarrow(-\infty,\infty)\). The Wasserstein gradient of a functional on \(\mathcal{P}_{2}(\mathcal{X})\) is defined as the change in the value of the functional under a small perturbation of the probability measure. The Wasserstein gradient can be expressed in the following form (Ambrosio et al., 2005, Chapter 10):
\[\nabla_{W_{2}}\mathcal{F}=\nabla\mathcal{F}^{\prime}(\mu) \tag{2}\]
Consider the KL divergence, \(\text{KL}(\mu||\pi)\) between any measure \(\mu\) and a base measure \(\pi\). We can show that the Wasserstein gradient of the functional \(\text{KL}(.||\pi)\) at \(\mu\) is
\[\nabla_{W_{2}}\text{KL}(.||\pi)=\nabla\log(\frac{\mu}{\pi}) \tag{3}\]
In the family \((\mu_{t})_{t\in(0,T)}\), let the initial measure be \(\mu_{0}=\rho\) and the final measure be \(\mu_{T}=\pi\). Then there exists a geodesic between the two probability measures \(\rho\) and \(\pi\) with respect to the Wasserstein metric. If we choose the velocity field equal to the negative of the Wasserstein gradient (i.e. \(v_{t}=-\nabla_{W_{2}}\text{KL}(.||\pi)\)), then one can show that the path traced by the probability measures is the geodesic between \(\rho\) and \(\pi\) (Ambrosio et al., 2005, Chapter 7), and the flow is known as the Wasserstein gradient flow. Using the functional \(\text{KL}(.||\pi)\) and the continuity equation, we obtain the equation of the Wasserstein gradient flow as:
\[\frac{\partial\mu_{t}}{\partial t}=div\big{[}\mu_{t}\nabla\log(\frac{\mu_{t}} {\pi})\big{]} \tag{4}\]
The Wasserstein gradient flow is a differential equation on probability measures. Consider a Wasserstein gradient flow with initial measure \(\mu_{0}\) satisfying the continuity equation (1). Let \(x_{0}\sim\mu_{0}\) be a sample from the initial measure. The differential equation for the samples can be derived from the continuity equation as follows (Ambrosio et al., 2005):

\[\dot{x}_{t}=v_{t}(x_{t}) \tag{5}\]
### Score Based Generative Model
The score-based generative model (Song et al., 2021) extends diffusion models to the continuous-time setting using stochastic differential equations (SDEs). The forward and reverse processes of adding noise and generating images are interpreted as forward and reverse diffusion processes with the following differential equations:
\[\text{FOR}:dx =f(x,t)dt+g_{t}dw \tag{6}\] \[t:0\to T,x_{0}\sim\rho\] \[\text{REV}:dx =[f(x,t)-g_{t}^{2}\nabla_{x}\log\mu_{t}(x)]dt+g_{t}d\bar{w}\] (7) \[t:T\to 0,x_{T}\sim\pi\]
where \(f\) is the forward drift function and \(dw\) is the Brownian motion increment. Note that the flow of time in the two SDEs is different: time flows from \(0\) to \(T\) in the forward process, whose initial distribution is \(\rho\), while time flows from \(T\) to \(0\) in the reverse process. The time direction is crucial in a stochastic differential equation because, in the forward process, \(x_{t}\) is independent of the future \(t^{\prime}>t\), while in the reverse direction it is independent of the past (Anderson, 1982). To keep time flowing in the positive direction, we can equivalently use the positive time notation indexed by \(\tau\) (the following is equivalent to eq.(7)):
\[dx =[-f(x,\tau)+g_{\tau}^{2}\nabla_{x}\log\mu_{\tau}(x)]d\tau+g_{ \tau}d\bar{w} \tag{8}\] \[\tau:0\to T,x_{0}\sim\pi,\tau=T-t\]
Note that we use \(t\) for forward flow of time and \(\tau=T-t\) for the backward flow of time, so that \(\tau\) now flows from \(0\) to \(T\). With this notation, \(\mu_{t=0}=\mu_{\tau=T}=\rho\), \(\mu_{t=T}=\mu_{\tau=0}=\pi\), and \(\beta_{t}=\beta_{T-\tau}\). Euler-Maruyama discretization of the reverse SDE equation yields:
\[x_{\tau+\delta\tau}=x_{\tau}+(g_{\tau}^{2}\nabla_{x}\log\mu_{\tau}(x)-f(x,\tau ))\delta\tau+g_{\tau}z \tag{9}\]
where \(z\sim N(0,I)\) is a standard normal sample.
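For concreteness, a minimal sketch of the discretized reverse update in eq.(9); `score_fn`, `f`, and `g` are placeholders for the learned score \(\nabla\log\mu_{\tau}\), the forward drift, and the diffusion coefficient, and we use the standard Euler-Maruyama noise scaling:

```python
import numpy as np

def reverse_sde_sample(score_fn, f, g, x_T, n_steps, T=1.0, rng=None):
    """Euler-Maruyama discretization of the reverse SDE, eq. (9).
    Time tau runs from 0 to T, starting from samples x_T of the prior pi."""
    rng = rng or np.random.default_rng()
    dt = T / n_steps
    x = np.array(x_T, dtype=float)
    for i in range(n_steps):
        tau = i * dt
        drift = g(tau) ** 2 * score_fn(x, tau) - f(x, tau)
        x = x + drift * dt + g(tau) * np.sqrt(dt) * rng.normal(size=x.shape)
    return x

# Toy check: if the data distribution is N(0, I), the true score is -x for the
# VP drift f(x, t) = -beta * x, and the sampler returns (approximately) N(0, I).
beta = 1.0
out = reverse_sde_sample(lambda x, t: -x, lambda x, t: -beta * x,
                         lambda t: np.sqrt(2 * beta),
                         np.random.default_rng(0).normal(size=5000), 100)
print(out.mean(), out.var())
```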
## 4 Forward Diffusion as Gradient Flow
Instead of taking the velocity vector to be the negative of the Wasserstein gradient, we consider an accelerated flow where, at any time \(t\), the velocity is equal to the negative Wasserstein gradient scaled by a time-varying factor \(\beta_{t}\).
**Proposition 1** (Accelerated Wasserstein Gradient Flow).: _We define accelerated gradient flow with respect to the functional \(\mathcal{F}\) as the gradient flow where the velocity vector is defined as \(v_{t}=-\beta_{t}\nabla_{W_{2}}\mathcal{F}\). Consequently, the continuity equation is given by:_
\[\frac{\partial\mu_{t}}{\partial t}=div\big{[}\mu_{t}\beta_{t}\nabla_{W_{2}} \mathcal{F}\big{]} \tag{10}\]
Using this accelerated Wasserstein Gradient flow, we can establish a connection with the forward process in score-based generative model. We start from the Fokker-Planck equation corresponding to the stochastic differential equation of the forward diffusion process given by eq.(6):
\[\frac{\partial\mu_{t}}{\partial t}=-div(\mu_{t}f)+\frac{1}{2}div(\nabla(g_{t}^{ 2}\mu_{t})) \tag{11}\]
where the initial measure is \(\mu_{0}=\rho\). Following this SDE, the process ends up in the final measure \(\mu_{T}=\pi\). The next theorem shows that the forward Fokker-Planck equation and the accelerated Wasserstein gradient flow are equivalent.
**Theorem 1**.: _Consider an accelerated gradient flow in eq.(10) with initial measure \(\mu_{0}=\rho\) and the target measure \(\mu_{T}=\pi\) and the functional on the Wasserstein space
defined by \(\mathcal{F}(\mu)=\text{KL}(.||\pi)\). The family of measures corresponding to this gradient flow is equivalent to the family of measures corresponding to the forward Fokker-Planck equation in eq.(11), given that \(f\) and \(\beta_{t}\) take the following form: \(f=\beta_{t}\nabla\log\pi\), \(\beta_{t}=\frac{g_{t}^{2}}{2}\)._
**Remark 1.1**.: _Consider the special case with measure \(\mu_{T}=\pi=\mathcal{N}(0,I)=\exp(-\frac{||x||^{2}}{2})/Z\); we get \(f=-\beta_{t}x\) and the forward diffusion equation is given by the following SDE:_
\[dx=-\beta_{t}xdt+\sqrt{2\beta_{t}}dw \tag{12}\]
_which is exactly the forward flow of DDPM model (Ho et al., 2020; Song et al., 2021)._
This implies that the forward diffusion process considered in the diffusion generative model, DDPM (Ho et al., 2020; Song et al., 2021), can be equivalently thought of as an accelerated Wasserstein gradient flow starting from an initial measure \(\mu_{0}=\rho\) corresponding to the data distribution and following the negative gradient towards the target measure \(\mu_{T}=\mathcal{N}(0,I)\).
**Remark 1.2**.: _We can also think of accelerated Wasserstein gradient flow as regular Wasserstein gradient flow with non-uniform discretization, i.e., step at \(t\) is scaled by \(\beta_{t}\)._
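As an illustration of Theorem 1 and Remark 1.1, a minimal simulation of the forward SDE in eq.(12) in one dimension with a constant \(\beta\) (the bimodal "data" distribution is an assumption for the example): samples from the data measure are driven towards the target \(\mathcal{N}(0,I)\).

```python
import numpy as np

rng = np.random.default_rng(0)
beta, T, n_steps = 1.0, 5.0, 500
dt = T / n_steps

# Toy "data" distribution: a bimodal mixture, far from N(0, 1).
x = np.concatenate([rng.normal(-4, 0.3, 5000), rng.normal(4, 0.3, 5000)])

for _ in range(n_steps):
    # Euler-Maruyama step of dx = -beta * x dt + sqrt(2 * beta) dw, eq. (12).
    x = x - beta * x * dt + np.sqrt(2 * beta * dt) * rng.normal(size=x.shape)

print(x.mean(), x.var())  # approximately 0 and 1, i.e. the N(0, I) target
```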
Next we investigate the geometric interpretation of the generation process or the reverse SDE.
## 5 Generation as Reverse Gradient Flow
The next theorem establishes the equivalence between the reverse SDE and the Wasserstein gradient flow.
**Theorem 2**.: _The reverse SDE in eq.(8) is equivalent to the Wasserstein gradient flow in the space of probability measures with respect to the functional \(\mathcal{F}(\mu)=-\text{KL}(.||\pi)\) starting from the initial measure \(\mu_{\tau=0}=\pi\) towards the target measure \(\mu_{\tau=T}=\rho\)._
Proof.: \[\mathcal{F}(\mu) =-KL(\mu||\pi)\] (13) \[=\int-\log\mu d\mu+\int\log\pi d\mu\] (14) \[=\underbrace{\int(-2\log\mu+\log\pi)d\mu}_{\mathcal{G}}+ \underbrace{\int\log\mu d\mu}_{\mathcal{H}}\] (15)
Here, we apply the forward-backward splitting scheme due to Wibisono (2018) and Salim et al. (2020):
\[\nu_{\tau} =(I-\beta_{\tau}\nabla_{W_{2}}\mathcal{G}(\mu_{\tau}))_{\#}\mu_{\tau} \tag{16}\] \[\mu_{\tau+\delta\tau} =JKO_{\beta_{\tau}\mathcal{H}}(\nu_{\tau}) \tag{17}\]
where \(\nabla_{W_{2}}\mathcal{G}(\mu)\) is the Wasserstein gradient, the expression for which can be obtained as:
\[\nabla_{W_{2}}\mathcal{G}=\nabla\mathcal{G}^{\prime}(\mu)=\nabla(-2\log\mu+ \log\pi) \tag{18}\]
In eq.(16), we move in the direction of the Wasserstein gradient. Let \(x_{\tau}\sim\mu_{\tau}\) be a sample from the distribution \(\mu_{\tau}\). Transforming the differential equation in measure space to sample space, as in eq.(5), yields:

\[y_{\tau}=x_{\tau}-\beta_{\tau}\nabla(-2\log\mu_{\tau}(x_{\tau})+\log\pi(x_{\tau}))\delta\tau \tag{19}\]
In eq.(17), we use the JKO operator as the solution for the negative entropy functional \(\mathcal{H}\), where the JKO operator is defined as:
\[JKO_{\beta,\mathcal{H}}(\nu)=\underset{\zeta\in\mathcal{P}_{2}(\mathcal{X})}{ \text{argmin}}\mathcal{H}(\zeta)+\frac{1}{2\beta}W_{2}^{2}(\zeta,\nu)\]
For the negative entropy functional, the exact solution is given by Brownian motion (Jordan et al., 1998; Wibisono, 2018; Salim et al., 2020). Letting \(y_{\tau}\sim\nu_{\tau}\), we obtain

\[y_{\tau}=x_{\tau}-\beta_{\tau}\nabla(-2\log\mu_{\tau}(x_{\tau})+ \log\pi(x_{\tau}))\delta\tau \tag{20}\] \[x_{\tau+\delta\tau}=y_{\tau}+\sqrt{2\beta_{\tau}}z_{\tau} \tag{21}\]
Combining both, we obtain
\[x_{\tau+\delta\tau} =x_{\tau}+(2\beta_{\tau}\nabla\log\mu_{\tau}(x_{\tau})-\beta_{ \tau}\nabla\log\pi(x_{\tau}))\delta\tau\] \[+\sqrt{2\beta_{\tau}}z_{\tau} \tag{22}\]
In the limiting case as \(\delta\tau\to 0\), we obtain,
\[dx=(2\beta_{\tau}\nabla\log\mu_{\tau}(x_{\tau})-\beta_{\tau}\nabla\log\pi(x_{ \tau}))d\tau+\sqrt{2\beta_{\tau}}dw\]
which coincides exactly with the reverse SDE in eq.(8) for \(g_{\tau}^{2}=2\beta_{\tau}\) and \(f=\beta_{\tau}\nabla\log\pi\).
The reverse SDE, i.e. the score-based model, tries to reverse the forward process by tracing the path followed in the forward process in the opposite direction. One important implication of this theorem is that, since we move towards the target measure \(\mu_{T}=\mathcal{N}(0,I)\) in the forward process, the reverse is simply moving away from \(\mathcal{N}(0,I)\), which is realized as the accelerated Wasserstein gradient flow with the functional \(-\text{KL}(.||\pi)\). The gradient flow path with constant velocity is the geodesic. Since we consider a gradient flow path with acceleration, it is not exactly the geodesic, but a similar path traced by the gradient flow. We will call it the gradient-flow-path in the rest of the paper.
## 6 Insights, Connections, Discussion
We have shown that both the forward and reverse diffusion processes involved in score-based generative models are gradient flows on the space of probability measures. This geometric interpretation yields several insights, which we discuss below.
### Alternative Interpretation of Reverse SDE equation
Score-based generative models use the fact that for every forward SDE of the form in eq.(6) there exists a reverse SDE as in eq.(8), which is a remarkable result due to Anderson (1982). Theorem 2 provides an interesting interpretation of this result from a completely different perspective. In eq.(15), we added and subtracted the negative entropy term \(\int\log\mu d\mu\) in the diffusion and drift terms respectively. This allowed us to design a forward-backward algorithm instead of a forward algorithm for the Wasserstein gradient flow. The backward term essentially added the Brownian motion term, yielding a reverse stochastic differential equation. Note that if we had not added and subtracted the term \(\int\log\mu d\mu\), we would have obtained the following iterative scheme:
\[x_{\tau+\delta\tau}=x_{\tau}+(\beta_{\tau}\nabla\log\mu_{\tau}(x_{\tau})-\beta _{\tau}\nabla\log\pi(x_{\tau}))\delta\tau \tag{23}\]
Note that this is a discretized version of the following ODE.
\[\frac{dx}{d\tau}=-f(x,\tau)+\frac{1}{2}g_{\tau}^{2}\nabla\log\mu_{\tau}(x),\ \ \tau:[0\to T] \tag{24}\]
Comparing this equation with eq.(8), observe that eq.(8) has an additional \(\frac{1}{2}g_{\tau}^{2}\nabla\log\mu_{\tau}(x)\) in the drift part, which is compensated by the Brownian motion \(g_{\tau}d\bar{w}\). Equations (8) and (24) yield the same family of marginal distributions for \(\tau\in[0,T]\), even though the former is a stochastic differential equation and the latter is deterministic. Perhaps the advantage of score-based models is that stochasticity helps in generating diverse samples when only a small number of samples is drawn.
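A minimal sketch of one step of each update, with `score` and `grad_log_pi` as placeholders for \(\nabla\log\mu_{\tau}\) and \(\nabla\log\pi\):

```python
import numpy as np

def ode_step(x, tau, beta, score, grad_log_pi, dt):
    # Deterministic update of eq. (23): same marginals, no noise.
    return x + beta * (score(x, tau) - grad_log_pi(x)) * dt

def sde_step(x, tau, beta, score, grad_log_pi, dt, rng):
    # Stochastic update of eq. (22): extra score term balanced by noise.
    drift = 2 * beta * score(x, tau) - beta * grad_log_pi(x)
    return x + drift * dt + np.sqrt(2 * beta * dt) * rng.normal(size=x.shape)
```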
### Why is the Reverse Variance the Same as the Forward Variance?
In the DDPM model (Ho et al., 2020), it was not clear how to choose the variance of the reverse differential equation, or why choosing the reverse variance to be the same as the forward one is a good strategy. From the previous analysis, we see that the reverse variance must be the same as the forward variance because we added and subtracted the same negative entropy term in the drift and the diffusion. However, it is possible to choose the reverse-time variance differently. For example, we can add \(\alpha\int\log\mu d\mu\) to both the drift and diffusion terms in eq.(15). Then the reverse SDE variance becomes \(\sqrt{2\alpha\beta_{t}}\), but the drift term in eq.(8) is also modified to \(\frac{1+\alpha}{2}g_{t}^{2}\nabla\log\mu_{t}(x)\) instead of \(g_{t}^{2}\nabla\log\mu_{t}(x)\).
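Written out, this gives a one-parameter family of reverse dynamics indexed by \(\alpha\) (a sketch following the splitting argument above; \(\alpha=1\) recovers eq.(8)):

\[dx=\Big[-f(x,\tau)+\frac{1+\alpha}{2}g_{\tau}^{2}\nabla\log\mu_{\tau}(x)\Big]d\tau+\sqrt{\alpha}\,g_{\tau}\,d\bar{w},\qquad\tau:0\to T\]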
### Contrasting Score-based with Energy-based Model
Let us assume that the probability measure of the data can be written in the form \(\rho\propto\exp(-V)\). Consider Wasserstein gradient descent with the functional \(KL(.||\rho)\):
\[\mathcal{F}(\mu) =KL(\mu||\rho) \tag{25}\] \[=\int Vd\mu+\int\log\mu d\mu \tag{26}\]
We can use the same forward-backward splitting scheme as in the proof of Theorem 2, and with similar reasoning we recover the Langevin dynamics:
\[x_{\tau+\delta\tau}=x_{\tau}-\beta_{\tau}\nabla V(x_{\tau})\delta\tau+\sqrt{2 \beta_{\tau}}z \tag{27}\]
This demonstrates the critical difference between energy-based models and score-based models: while the energy-based model moves towards the data distribution \(\mu_{0}=\rho\) with the functional \(KL(.||\rho)\), the score-based model moves away from the isotropic Gaussian distribution (\(\pi\)) with the functional \(-KL(.||\pi)\). The score-based generative model traces the forward diffusion path in the reverse direction, thereby avoiding the need to work with the data distribution \(\rho\). In the energy-based model, however, we need to either estimate an energy function \(V\) (Gao et al., 2020; Du et al., 2021) or the KL divergence with the data distribution \(\rho\).
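For comparison, a minimal sketch of the Langevin iteration in eq.(27) for the toy energy \(V(x)=\|x\|^{2}/2\) (so that \(\rho=\mathcal{N}(0,I)\)); folding the product \(\beta_{\tau}\delta\tau\) into a single step-size constant is an assumption made for simplicity:

```python
import numpy as np

def langevin_sample(grad_V, x0, step, n_steps, rng=None):
    """Unadjusted Langevin dynamics targeting rho ~ exp(-V), as in eq. (27)."""
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_V(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
    return x

# Toy example: V(x) = x^2 / 2, so grad_V(x) = x and rho = N(0, 1).
samples = langevin_sample(lambda x: x, np.zeros(10_000), step=0.01, n_steps=2000)
print(samples.mean(), samples.var())  # approximately 0 and 1
```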
### Proximal Algorithms in Diffusion models
WaveFit (Koizumi et al., 2022) generalizes the iteration in diffusion models to a proximal algorithm. Motivated by a fixed-point iteration, they improve upon the DDPM model by drawing ideas from GANs and propose a proximal-algorithm type of approach which generates samples faster than DDPM without losing quality. Here, we show that, starting from the geometric perspective, we can arrive at the proximal algorithm as a way to perform Wasserstein gradient descent. Consider a functional, say \(\mathcal{F}(\mu)=KL(\mu||\rho)\), where \(\rho\) is the data distribution. Discretization of the Wasserstein gradient flow yields the iteration (Jordan et al., 1998; Salim et al., 2020)
\[\mu_{\tau^{\prime}}=\underset{\nu\in\mathcal{P}_{2}(\mathcal{X})}{\text{ argmin}}\mathcal{F}(\nu)+\frac{1}{2\gamma}W_{2}^{2}(\nu,\mu) \tag{28}\]
which is a proximal gradient algorithm in the space of probability measures. This justifies why proximal algorithms make sense in the context of diffusion or score-based generative models: we are trying to reach the data distribution by descending in the direction of the Wasserstein gradient. Jordan et al. (1998); Wibisono (2018); Salim et al. (2020) have shown that the proximal algorithm converges to the target distribution \(\rho\). As for the choice of functional \(\mathcal{F}\), it can be any convex functional that decreases as we descend towards the target measure \(\rho\). WaveFit (Koizumi et al., 2022) shows that using a much stronger GAN-type objective as the functional yields good results.
## 7 Challenges with Faster Sampling
Once the connection between gradient flow and score-based generative models is established, we can interpret generation as a process walking along the gradient-flow-path. If we follow the flow-path with small steps, we can reliably reach the initial data distribution, as demonstrated by the success of score-based generative models and diffusion models. However, this is not a good idea if we increase the step size: the score-based increment is a linear approximation and therefore accrues more error as the step size grows. It has been experimentally observed that samples get poorer as the step size is increased in score-based models (Dhariwal and Nichol, 2021; Ho et al., 2020; Song et al., 2021). From a geometric point of view, we are taking Wasserstein gradient steps using a forward-backward strategy. While this strategy works well when the step size is small, it converges to a biased measure for large step sizes. The bias associated with the forward-backward strategy for large step sizes has been studied in the context of Wasserstein gradient flow (Wibisono, 2018). In our case, this issue is further exacerbated by the fact that the functional \(-\text{KL}(\mu||\pi)\) we are trying to minimize is actually concave with respect to \(\mu\).
To mitigate this issue, we propose an intuitive, geometric idea: projection. As shown in Fig. 3, when we sample from score-based models with a large step size, the error grows and the trajectory deviates away from the gradient-flow-path. We propose to resolve this problem by projecting back onto the gradient-flow-path before taking another step.
## 8 Projection to Gradient-flow-path
The score-based generative model first trains a score model \(s_{\theta}\) such that \(s^{*}_{\theta}(x_{\tau},\tau)=\nabla\log\mu_{\tau}(x_{\tau})\) using a score matching strategy. Once the score model is trained, the discretized Euler-Maruyama step (eq.(9)) is used for the generation of samples, where the score function \(s^{*}_{\theta}\) replaces \(\nabla\log\mu_{\tau}(x_{\tau})\):
\[x_{\tau+\delta\tau} =x_{\tau}+(2\beta_{\tau}s^{*}_{\theta}(x_{\tau},\tau)-\beta_{\tau }\nabla\log\pi(x_{\tau}))\delta\tau\] \[+\sqrt{2\beta_{\tau}}z\] \[=x^{pred}_{\tau+\delta\tau}+\sqrt{2\beta_{\tau}}z_{\tau} \tag{29}\]
It can also be interpreted as a predict step followed by a diffuse step, where the predict step is \(x^{pred}_{\tau+\delta\tau}=x_{\tau}+(2\beta_{\tau}s^{*}_{\theta}(x_{\tau},\tau)-\beta_{\tau}\nabla\log\pi(x_{\tau}))\delta\tau\) and the diffuse step adds the noise \(\sqrt{2\beta_{\tau}}z_{\tau}\).
Figure 3: During generation, sample moves tangentially to the gradient-flow-path in the reverse direction. If the step-size is large, it incurs error, which we mitigate by projection to the gradient-flow-path.
Figure 2: Qualitative comparison of Celeb-A samples generated by Score-based model and our model at different number of sampling steps (N). Our model maintains reasonable quality even when N decreases down to 20.
Since generation is trying to trace the gradient-flow-path, after each of these predict-diffuse steps we should obtain samples from the measure \(\mu_{\tau+\delta\tau}\) on the gradient-flow-path. Because of discretization error and bias, these samples do not lie on the gradient-flow-path; in fact, they deviate away from it. To pull these samples towards the measure \(\mu_{\tau+\delta\tau}\) on the gradient-flow-path, we use the fact that we have a way to sample from the measures on the gradient-flow-path. Using the SDE, we can write the conditional distribution of \(\mu_{\tau+\delta\tau}\) in closed form as follows:
\[\mu_{\tau+\delta\tau|T}(x_{\tau+\delta\tau}|x_{T})=\mathcal{N}(x_{\tau+\delta \tau};x_{\tau+\delta\tau}^{mean}(x_{\tau=T}),2\beta_{\tau+\delta\tau}I)\]
Sampling from this conditional distribution is given by the following equation:
\[x_{\tau+\delta\tau}=x_{\tau+\delta\tau}^{mean}(x_{\tau=T})+\sqrt{2\beta_{\tau +\delta\tau}}z_{\tau} \tag{30}\]
Comparing eq.(30) with eq.(29), we note that pulling \(x_{\tau+\delta\tau}^{pred}\) close to \(x_{\tau+\delta\tau}^{mean}\) may be enough to pull the samples \(x_{\tau+\delta\tau}\) in eq.(29) towards the gradient-flow-path, assuming that \(\beta_{\tau+\delta\tau}\) is close to \(\beta_{\tau}\). In terms of measures, we consider the measure associated with the samples \(x_{\tau+\delta\tau}^{pred}\) and the target samples \(x_{\tau+\delta\tau}^{mean}\). Let us define the measure corresponding to the samples \(x_{\tau+\delta\tau}^{pred}\) as \(\mu_{\tau+\delta\tau}^{pred}\) and the measure corresponding to the means \(x_{\tau+\delta\tau}^{mean}\) as \(\mu_{\tau+\delta\tau}^{mean}\). We can sample from the measure \(\mu_{\tau+\delta\tau}^{mean}\) by first sampling \(x_{\tau=T}\sim\mu_{\tau=T}\) and then passing the samples through \(x_{\tau+\delta\tau}^{mean}\). Our strategy to project the samples in eq.(29) onto the gradient-flow-path is to project the pred measure \(\mu_{\tau+\delta\tau}^{pred}\) onto the mean measure \(\mu_{\tau+\delta\tau}^{mean}\). We achieve this through Wasserstein gradient descent in the space of probability measures. For that, we need an efficient way to estimate the Wasserstein gradient, which we describe in the next subsection.
### Efficient Estimation of Wasserstein Gradient
Suppose we want to estimate the Wasserstein gradient of a functional \(\mathcal{J}(\mu)\), _i.e._, \(\nabla_{W_{2}}\mathcal{J}(\mu)\). For that, we can use the following Taylor expansion:

\[\mathcal{J}((I+hT)_{\#}\mu)=\mathcal{J}(\mu)+h\langle\nabla_{W_{2}}\mathcal{J}(\mu),T\rangle_{\mu}+o(h) \tag{31}\]
where, \(\nabla_{W_{2}}\mathcal{J}(\mu)\in L^{2}(\mu)\) is the Wasserstein gradient of \(\mathcal{J}\) at \(\mu\). To estimate the Wasserstein gradient, consider the following optimization problem:
\[T^{*}=\underset{T}{argmin}\ \ \langle\nabla_{W_{2}}\mathcal{J}(\mu),T \rangle_{\mu}+\frac{1}{2h}||T||_{\mu}^{2} \tag{32}\]
It is easy to see that \(T^{*}=-h\nabla_{W_{2}}\mathcal{J}(\mu)\) is the solution of this problem. Plugging this into eq.(31) and parameterizing the gradient function \(T\) by neural network parameters \(\theta\), we solve the following optimization problem:

\[\underset{\theta}{min}\ \ \mathcal{J}((I+T_{\theta})_{\#}\mu)+\frac{1}{2h}||T_{\theta}||_{\mu}^{2} \tag{33}\]
This optimization is efficient and amenable to parallel processing because: 1) it only requires samples from the measure \(\mu\), and 2) we can use minibatches from \(\mu\) to update the neural network parameters. This removes the need to process all samples at once, leading to stochastic gradient descent optimization of \(\theta\).
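A minimal PyTorch sketch of the estimation problem in eq.(33) with an illustrative functional \(\mathcal{J}(\nu)=\mathbb{E}_{y\sim\nu}\|y-c\|^{2}\), whose Wasserstein gradient at \(\mu\) is \(2(x-c)\); the target point \(c\), the step size \(h\), the network size, and the sampling distribution are all assumptions made for the example, and for small \(h\) the learned map should satisfy \(T_{\theta}\approx-h\nabla_{W_{2}}\mathcal{J}=-2h(x-c)\):

```python
import torch

torch.manual_seed(0)
c, h = 3.0, 0.01                       # illustrative target point and step size

# T_theta: a small MLP representing the transport (gradient) direction.
T = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 1))
opt = torch.optim.Adam(T.parameters(), lr=1e-3)

def J(y):                              # illustrative functional: E ||y - c||^2
    return ((y - c) ** 2).mean()

for _ in range(2000):
    x = torch.randn(256, 1)            # minibatch of samples from mu
    Tx = T(x)
    loss = J(x + Tx) + (1.0 / (2 * h)) * (Tx ** 2).mean()   # eq. (33)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                  # compare with -h * grad_{W2} J = -2h(x - c)
    xs = torch.linspace(-2.0, 2.0, 5).unsqueeze(1)
    print(torch.cat([T(xs), -2 * h * (xs - c)], dim=1))
```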
### Predict-Project Algorithm
With the Wasserstein gradient estimation method in hand, we now move on to project \(\mu_{\tau+\delta\tau}^{pred}\) to \(\mu_{\tau+\delta\tau}^{mean}\). We define the functional \(\mathcal{J}\) in the following way:
\[\mathcal{J}_{\tau+\delta\tau}((I+T_{\theta,\tau+\delta\tau})_{ \#}\mu_{\tau+\delta\tau}^{pred})\] \[=\int||x_{\tau+\delta\tau}^{pred}+T_{\theta}(x_{\tau+\delta\tau}^ {pred})-x_{\tau+\delta\tau}^{mean}||^{2}d\mu_{\tau+\delta\tau}^{pred}(x_{ \tau+\delta\tau}^{pred}) \tag{34}\]
Note that \(T_{\theta,\tau+\delta\tau}\) is indexed by time. Instead of learning a different \(T_{\theta}\) for each time, we parameterize it by time as \(T(.,\tau)\), as is done for the score function (Song et al., 2021). Similarly, we choose \(h=1/\sqrt{\beta_{\tau}}\) to vary with \(\tau\) in eq.(33). To train the projection function \(T\), we sample \(\tau\) from the uniform distribution on the interval \((0,1]\), sample \(x_{\tau=T}\sim\mu_{\tau=T}\), and solve the following optimization problem:
\[\underset{\theta}{min}\ \ E_{\tau,x_{\tau=T}} \big{[}||x_{\tau+\delta\tau}^{pred}+T_{\theta}(x_{\tau+\delta\tau} ^{pred},\tau+\delta\tau)-x_{\tau+\delta\tau}^{mean}||^{2}\] \[+\frac{\sqrt{\beta_{\tau}}}{2}||T_{\theta}(x_{\tau+\delta\tau}^{ pred},\tau+\delta\tau)||^{2}\big{]}\]
After training, we have, \(\sqrt{\beta_{\tau}}T_{\theta}^{*}=-\nabla_{W_{2}}\mathcal{J}(\mu^{pred})\). Using this relation, we update the sample as
\[x^{proj} =x^{pred}-\nabla_{W_{2}}\mathcal{J}(\mu^{pred})(x^{pred}).\delta \mu.\delta\tau \tag{35}\] \[=x^{pred}+\sqrt{\beta_{\tau}}T_{\theta}^{*}(x^{pred},\tau).\delta \mu.\delta\tau \tag{36}\]
where \(\delta\mu\) is a small scalar determining how far to move in the direction of the Wasserstein gradient, and \(\delta\tau\) appears because a Wasserstein gradient flow with velocity field \(v_{\tau}\) corresponds to the dynamics \(\dot{x}_{\tau}=v_{\tau}(x_{\tau})\) (see eq.(5)). See Algorithm 1 for the full sampling algorithm.
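A sketch of the resulting sampler in the spirit of Algorithm 1; `score_fn` and `T_fn` are placeholders for the trained score and projection networks, \(\pi=\mathcal{N}(0,I)\) so that \(\nabla\log\pi(x)=-x\), and `delta_mu` and the Euler-Maruyama noise scaling are assumptions of this sketch:

```python
import numpy as np

def predict_project_sample(score_fn, T_fn, x_T, n_steps, beta_fn,
                           delta_mu=1.0, T_total=1.0, rng=None):
    """Predict (eq. 29), project (eq. 36), then diffuse, at each step."""
    rng = rng or np.random.default_rng()
    dt = T_total / n_steps
    x = np.array(x_T, dtype=float)
    for i in range(n_steps):
        tau = i * dt
        beta = beta_fn(tau)
        grad_log_pi = -x                      # pi = N(0, I)
        # Predict: deterministic part of the reverse step, eq. (29).
        x_pred = x + (2 * beta * score_fn(x, tau) - beta * grad_log_pi) * dt
        # Project: pull x_pred back towards the gradient-flow-path, eq. (36).
        x_proj = x_pred + np.sqrt(beta) * T_fn(x_pred, tau) * delta_mu * dt
        # Diffuse: add the reverse-SDE noise.
        x = x_proj + np.sqrt(2 * beta * dt) * rng.normal(size=x.shape)
    return x
```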
| Dataset | Method | N = 1000 | N = 100 | N = 40 | N = 20 |
| --- | --- | --- | --- | --- | --- |
| Celeb-A | Score Model | 6.331 | 35.14 | 149.42 | 222.71 |
| Celeb-A | Predict-Project | | **20.54** | **68.23** | **121.12** |
| LSUN | Score Model | 15.12 | 34.62 | 122.23 | 246.17 |
| LSUN | Predict-Project | | **25.35** | **66.61** | **164.32** |
| SVHN | Score Model | 18.95 | **146.63** | 183.30 | 285.61 |
| SVHN | Predict-Project | | 149.56 | **174.34** | **152.94** |

Table 1: Comparison of FID score (lower is better) as a measure of generated image quality between the score-based model and our Predict-Project model at different numbers of sampling steps N. The N = 1000 column reports a single value per dataset, shared across both rows; bold marks the better score.
## 9 Experimental Results
To demonstrate the efficacy of our algorithm, we train and generate samples on three datasets: 1) the Celeb-A dataset, 2) the LSUN-church dataset, and 3) the SVHN dataset, where all images are of size \(64\times 64\). Our neural network architecture for both the score model and the projection model uses the standard U-Net architecture with attention due to Dhariwal and Nichol (2021). We use the publicly available code from Song et al. (2021) as the score-based model. In these experiments, we demonstrate that as the number of sampling steps decreases, the quality of samples from the score-based generative model degrades, while our method maintains reasonable quality even when the number of sampling steps is reduced to as low as 20. We use the FID metric (Heusel et al., 2017) to measure sample quality.
In Table 1, we compare the FID scores of images generated by the score-based method and our Predict-Project method. We outperform the score-based method by a large margin in all cases except SVHN (N=100). This underperformance could be because our model is not trained well on SVHN (see appendix) due to limited training time. For qualitative comparisons, please see Fig. 4 and Fig. 2. These results support our claim that projecting onto the gradient-flow-path improves sample quality, especially when the number of sampling steps is low.
## 10 Conclusion
We presented a novel geometric perspective on score-based generative models (also called diffusion generative models) by showing that they are in fact gradient flows in a space of probability measures. The geometric insight gained from
Figure 4: Comparison of generated samples in three datasets when number of sampling steps is decreased to as low as N = 40.
this connection helped us answer and clarify some critical open questions. We also demonstrated that it can help design a faster sampling algorithm. We believe that this connection will help diffuse knowledge between the Wasserstein gradient flow and score-based generative modeling communities, inspiring interesting solutions to problems in both areas. Similarly, the connections with energy-based models, proximal algorithms, and the reverse SDE could help design better algorithms in general and better generative models in particular. Energy-based models, for example, could be combined with score-based models in light of this geometric understanding.
|
2308.11721 | When Are Two Lists Better than One?: Benefits and Harms in Joint
Decision-making | Historically, much of machine learning research has focused on the
performance of the algorithm alone, but recently more attention has been
focused on optimizing joint human-algorithm performance. Here, we analyze a
specific type of human-algorithm collaboration where the algorithm has access
to a set of $n$ items, and presents a subset of size $k$ to the human, who
selects a final item from among those $k$. This scenario could model content
recommendation, route planning, or any type of labeling task. Because both the
human and algorithm have imperfect, noisy information about the true ordering
of items, the key question is: which value of $k$ maximizes the probability
that the best item will be ultimately selected? For $k=1$, performance is
optimized by the algorithm acting alone, and for $k=n$ it is optimized by the
human acting alone. Surprisingly, we show that for multiple of noise models, it
is optimal to set $k \in [2, n-1]$ - that is, there are strict benefits to
collaborating, even when the human and algorithm have equal accuracy
separately. We demonstrate this theoretically for the Mallows model and
experimentally for the Random Utilities models of noisy permutations. However,
we show this pattern is reversed when the human is anchored on the algorithm's
presented ordering - the joint system always has strictly worse performance. We
extend these results to the case where the human and algorithm differ in their
accuracy levels, showing that there always exist regimes where a more accurate
agent would strictly benefit from collaborating with a less accurate one, but
these regimes are asymmetric between the human and the algorithm's accuracy. | Kate Donahue, Sreenivas Gollapudi, Kostas Kollias | 2023-08-22T18:16:40Z | http://arxiv.org/abs/2308.11721v3 | # When Are Two Lists Better than One?:
###### Abstract
Historically, much of machine learning research has focused on the performance of the algorithm alone, but recently more attention has been focused on optimizing joint human-algorithm performance. Here, we analyze a specific type of human-algorithm collaboration where the algorithm has access to a set of \(n\) items, and presents a subset of size \(k\) to the human, who selects a final item from among those \(k\). This scenario could model content recommendation, route planning, or any type of labeling task. Because both the human and algorithm have imperfect, noisy information about the true ordering of items, the key question is: which value of \(k\) maximizes the probability that the best item will be ultimately selected? For \(k=1\), performance is optimized by the algorithm acting alone, and for \(k=n\) it is optimized by the human acting alone. Surprisingly, we show that for multiple noise models, it is optimal to set \(k\in[2,n-1]\) - that is, there are strict benefits to collaborating, even when the human and algorithm have equal accuracy separately. We demonstrate this theoretically for the Mallows model and experimentally for the Random Utilities models of noisy permutations. However, we show this pattern is _reversed_ when the human is anchored on the algorithm's presented ordering - the joint system always has strictly worse performance. We extend these results to the case where the human and algorithm differ in their accuracy levels, showing that there always exist regimes where a more accurate agent would strictly benefit from collaborating with a less accurate one, but these regimes are asymmetric between the human and the algorithm's accuracy.
## 1 Introduction
Consider the following motivating example:
Alice is a doctor trying to classify a scan with one of \(n\) different labels. Based on her professional expertise and relevant medical information she has access to, she is able to make some ranking over which of these labels is most likely to be accurate. However, she is not perfect, and sometimes picks the wrong label. She decides to use a machine learning algorithm as a tool. The algorithm similarly has a goal of maximizing the probability of picking the correct label. However, the algorithm and Alice rely on somewhat different information sources in making their predictions: vast troves of data for the algorithm, and personal conversations with the patient for the human, for example. Because of this, their rankings over the true labels will often differ slightly. The algorithm communicates its knowledge by presenting its top \(k\) labels to Alice, who picks her top label among those that are presented. For what settings and what values of \(k\) will Alice and the algorithm working together have a higher chance of picking the right label?
If the algorithm were able to tell Alice exactly which label she should pick (\(k=1\)), then this problem would simply reduce to that of building a highly accurate machine learning system. However, in the medical prediction setting, it is unrealistic to assume that the algorithm can force Alice to pick a particular label. If the algorithm presented all of the items to Alice (\(k=n\)), then this would be equivalent to Alice solving the task herself. In the case where \(n\) is large, considering each possible label may be infeasible. However, even if Alice could consider all \(n\) items herself, we will show that there are often settings where allowing the algorithm to narrow the set of items to \(k\) strictly increases the probability of picking the correct item.
In human-algorithm collaboration more generally, often the algorithm can provide assistance, but the human makes the final decision. This is the case in other settings as well: a diner trying to find the best restaurant, a driver trying to find the best route, or a teacher trying to find the best pedagogical method. This framework requires a shift in thinking: rather than focus on optimizing the performance of the algorithm alone, the goal is to build an algorithm that maximizes the performance of the human-algorithm system.
In this paper, we will focus on the role of the noise distributions that govern the human and the algorithm. In particular, we are interested in how _independent_ these are: specifically, how strongly the human's permutation is affected by the algorithm's prediction, i.e. the strength of _anchoring_. We will explore different models of noisy predictions and give theoretical and empirical results describing when the joint human-algorithm system has a higher chance of picking the best item.
In Section 2, we describe the theoretical model that we will explore and in Section 3 we connect our model to related works. Section 4 considers the case where both the human and the algorithm have identical accuracy rates, and gives theoretical proofs for conditions where there are strict _benefits_ and strict _harms_ to using a joint human-algorithm system with the Mallows model, a model of noisy permutations over an ordering. This section also shows empirically that these results hold much more broadly, including for the Random Utilities Model. Next, Section 5 explores the case where the human and algorithm can differ in their accuracy rates, focusing on the case with exactly 3 items, of which the algorithm selects 2 to be presented. In this setting, we show that there is always a regime where a more accurate player can strictly improve their accuracy by joining with a less accurate partner. However, we show that this pattern is asymmetric between the human and the algorithm: the human has a much wider range of algorithmic accuracy rates that it is willing to partner with. Finally, Section 6 concludes and discusses avenues for future work.
## 2 Models and notation
### Human-algorithm collaboration model
We assume that there are \(n\) items \(\{x_{1},\ldots x_{n}\}\), and that the goal is to pick item \(x_{1}\). Each item could represent labels for categorical prediction, news articles of varying relevance, or driving directions with variable levels of traffic, for example. There are two actors: the first (\(A\)) narrows the items from \(n\) total items to a top \(k<n\) items which are presented to the second actor (\(H\)), which picks a single item among them. One consistent assumption we will make is that the second actor \(H\) is not able to directly access or choose from the full set of items: this could be, for example, because \(k<<n\) and \(H\) is bandwidth-limited in how many items it can consider. This model is quite broad: the two actors could be interacting recommendation algorithms, for example, or sequential levels of decision-making among human committees. However, the motivating example we will focus on in this paper is when the first actor \(A\) is an algorithm and the second actor \(H\) is a human. This setting naturally fits with the assumption that \(H\) is bandwidth-limited, and also motivates the assumption that \(A\) and \(H\) have differing orders for the items, drawn from potentially differing sources of knowledge, but are unable to directly communicate that knowledge to each other. This formulation also allows us to connect with the extensive literature on human-algorithm collaboration, which we discuss further in Section 3.
We will use \(\pi^{a},\pi^{h}\) to denote the orderings of the algorithm and human over the \(n\) items, with \(\pi^{a}_{i}=x_{j}\) meaning that the algorithm ranks item \(x_{j}\) in the \(i\)th place. We will use \(\pi^{a}_{[k]}\) to denote the \(k\) items that the algorithm ranks first (and thus presents to the human) and \(\pi^{a}_{-[n-k]}\) to denote the \(n-k\) items that the algorithm ranks last (and fails to present to the human). Both \(\pi^{a},\pi^{h}\) are random variables drawn from distributions \(\pi^{a}\sim\mathcal{D}^{a},\pi^{h}\sim\mathcal{D}^{h}\). We will often refer to the joint human-algorithm system as the _combined system_.
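To make the selection rule concrete, a small Monte Carlo sketch: the joint system picks the human's top-ranked item among the \(k\) items the algorithm presents. Here `draw_perm_a` and `draw_perm_h` are placeholders for samplers from \(\mathcal{D}^{a},\mathcal{D}^{h}\) returning permutations of item indices, with item 0 playing the role of the best item \(x_{1}\):

```python
def joint_pick(pi_a, pi_h, k):
    """Item chosen by the joint system: the human's highest-ranked item
    among the k items the algorithm presents."""
    presented = set(pi_a[:k])
    return next(item for item in pi_h if item in presented)

def success_rates(draw_perm_a, draw_perm_h, k, n_trials=10_000, best=0):
    """Monte Carlo estimates of P(best item chosen) for the algorithm alone,
    the human alone, and the joint system with a given k."""
    alg = hum = joint = 0
    for _ in range(n_trials):
        pi_a, pi_h = draw_perm_a(), draw_perm_h()
        alg += pi_a[0] == best
        hum += pi_h[0] == best
        joint += joint_pick(pi_a, pi_h, k) == best
    return alg / n_trials, hum / n_trials, joint / n_trials
```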
The distributions \(\mathcal{D}^{a},\mathcal{D}^{h}\) may be independent: this could reflect the case where both the human and algorithm come up with orderings separately, and then the algorithm presents a set of items for the human to pick between, where the human picks the best item according to their previously-determined ranking. We refer to the case of independent orderings as the _unanchored_ case. Alternatively, the distributions \(\mathcal{D}^{a},\mathcal{D}^{h}\) may be correlated. In particular, we will compare the _unanchored_ case with that of _anchored_ orderings. In this setting, the algorithm draws an ordering \(\pi^{a}\sim\mathcal{D}^{a}\), which then becomes the central ranking for the human - we will describe what this means technically for different noise models in the next section. This models settings where the algorithm presents an _ordering_ of items to the human, rather than a set, which strongly biases the human. The anchored setting implies a strong degree of correlation between the human's and the algorithm's orderings. We will relax this correlation with the _semi-anchored setting_, where the algorithm's ordering \(\pi^{a}\) influences the human's ordering \(\pi^{h}\), but less strongly than in the anchored setting. In Section 4 we present theoretical results for the anchored and unanchored cases, as well as experimental results for the semi-anchored case, which we formalize further below.
### Assumptions
One key assumption is the structure of the human-algorithm system: namely, the algorithm selects \(k\) items from which the human picks a final element. As mentioned previously, this could reflect settings where the algorithm _must_ narrow the set: where the total set of items \(n\) is too large for the human to fully explore (e.g. the set of news articles, or the set of possible routes between two destinations). It could also reflect cases where the algorithm _chooses_ to narrow the set in order to express its knowledge. This structure also allows the human a structured, but wide, range of flexibility.
However, there are other potential models that could also be feasible. For example, it could be the case that both the human and algorithm present their permutations \(\pi^{a},\pi^{h}\), and the combined ranking \(\pi^{c}\) is constructed by "voting" between each of the rankings, potentially with some uneven weighting between \(\pi^{a},\pi^{h}\) based on expertise. Another model could involve the human going down the algorithm's ranking \(\pi^{a}\), stopping whenever it reaches its best item \(\pi^{h}_{1}\) or with some probability \(p\) after inspecting each item. A third model could involve iterative processes where the human and algorithm can refine their rankings through shared information. Note that many of these models would require the human to consider more than \(k\) items, which contradicts this model's consideration of a bandwidth-limited agent \(H\).
While all of these models could be interesting extensions to explore, in general they are more complicated than ours. Despite the relatively simple and natural structure of our human-algorithm system, we will show that it admits a rich structure with relatively clean results.
### Noise models
In this section, we introduce the noise models we will use for \(\mathcal{D}^{a},\mathcal{D}^{h}\), which govern how the algorithm and human respectively arrive at noisy permutations over the \(n\) items. Both of these noise models are standard in the literature, which is what prompted us to consider them in our paper.
#### 2.3.1 Mallows model
The first is the Mallows model, which has been used extensively as a model of permutations Mallows [1957]. The model has two components: a central ordering \(\pi^{*}\) (here, assumed to be the "correct" ordering \(\{x_{1},x_{2},\ldots x_{n}\}\)) and an accuracy parameter \(\phi>0\), where higher \(\phi\) means that the distribution more frequently returns orderings that are close to the central ordering \(\pi^{*}\). The probability of any permutation \(\pi\) occurring is given by \(\frac{1}{Z}\cdot\exp\left(-\phi\cdot d(\pi^{*},\pi)\right)\) where \(Z\) is a normalizing constant \(\sum_{\pi^{\prime}\in P}\exp\left(-\phi\cdot d(\pi^{*},\pi^{\prime})\right)\) involving a sum over the set of all permutations \(P\), and \(d(\pi^{*},\pi)\) is a distance metric between permutations. In this work, we will use the Kendall-Tau distance, which is standard. In particular, the Kendall-Tau distance is equivalent to the number of _inversions_ in \(\pi\). An inversion occurs when element \(x_{i}\) is ranked above \(x_{j}\) in the true ordering \(\pi^{*}\), but is ranked below \(x_{j}\) in \(\pi\). This can be roughly thought of as the number of "pairwise errors" \(\pi\) makes in ordering each of the elements. In the Mallows model, we model anchoring through \(\mathcal{D}^{h}\) having the central ordering \(\mathcal{D}\left(\pi^{*}=\pi^{a}\right),\pi^{a}\sim\mathcal{D}^{a}\). In this way, the human takes the algorithm's presented ordering as the "true" ordering and draws permutations centered on it. In the _unanchored_ setting, the human draws their permutation from a Mallows distribution centered at the correct ordering \(\mathcal{D}\left(\pi^{*}=\{x_{1},x_{2},\ldots x_{n}\}\right)\).
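A minimal sketch of sampling from the Mallows model by exact enumeration, practical only for small \(n\) (the central ordering is the identity, matching the convention that \(x_{1}\) is the best item); this is an illustrative implementation, not the one used in the paper's experiments:

```python
import itertools
import math
import random

def kendall_tau(perm):
    """Number of inversions relative to the identity ordering (0, 1, ..., n-1)."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])

def mallows_sampler(n, phi):
    """Exact Mallows sampler for small n: P(pi) is proportional to
    exp(-phi * d(pi*, pi)), with pi* the identity and d the Kendall-Tau distance."""
    perms = list(itertools.permutations(range(n)))
    weights = [math.exp(-phi * kendall_tau(p)) for p in perms]
    return lambda: random.choices(perms, weights=weights, k=1)[0]

draw = mallows_sampler(n=5, phi=1.0)
print(draw())  # e.g. (0, 2, 1, 3, 4); larger phi concentrates near (0, 1, 2, 3, 4)
```

Combined with the `success_rates` sketch above, this makes it easy to check numerically for which \(k\) the unanchored joint system outperforms either agent alone.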
#### 2.3.2 Random Utility model
The Random Utility Model (RUM) has similarly been extensively used as a model of permutations Thurstone (1927). In this model, item \(i\) has some true value \(\mu_{i}\), where we assume \(\mu_{i}\) is descending in \(i\). The human and algorithm only have access to noisy estimates of these values, \(\hat{X}_{i}^{a}\sim\mathcal{D}(\mu_{i},\sigma^{2})\) for some distribution \(\mathcal{D}\) with variance \(\sigma^{2}\) (often assumed to be Gaussian, which we will use in this paper). These noisy estimates are then used to produce orderings \(\pi^{a},\pi^{h}\) in descending order of the values \(\{\hat{X}_{i}^{a}\},\{\hat{X}_{i}^{h}\}\). In the RUM, we model anchoring through \(\hat{X}_{i}^{h}\sim\mathcal{D}(\mu_{j},\sigma_{h}^{2})\), where \(j\) is the index of item \(i\) in the algorithm's permutation \(\pi^{a}\). We model the semi-anchored case by \(\hat{X}_{i}^{h}\sim\mathcal{D}(w_{a}\cdot\mu_{j}+(1-w_{a})\cdot\mu_{i},\sigma_{h}^{2})\), where \(w_{a}\) is a weight parameter indicating how strongly the algorithm's ordering anchors the human's permutation, and \(j\) is the index of item \(i\) in the algorithm's permutation \(\pi^{a}\).
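A sketch of permutation samplers under the Gaussian RUM, including the anchored and semi-anchored variants just described; `mu` is the vector of true item values (descending) and `w_a` the anchoring weight, with the particular numbers being assumptions for the example:

```python
import numpy as np

def rum_perm(mu, sigma, rng):
    """Unanchored RUM: rank items by noisy utilities X_i ~ N(mu_i, sigma^2)."""
    return np.argsort(-(mu + sigma * rng.normal(size=len(mu))))

def rum_perm_anchored(mu, sigma_h, pi_a, w_a, rng):
    """(Semi-)anchored RUM: the mean for item i mixes its own value mu_i with
    mu_j, where j is the rank the algorithm assigned to item i.
    w_a = 1 gives full anchoring, w_a = 0 recovers the unanchored case."""
    rank_of_item = np.empty(len(mu), dtype=int)
    rank_of_item[pi_a] = np.arange(len(mu))       # j = position of item i in pi_a
    means = w_a * mu[rank_of_item] + (1 - w_a) * mu
    return np.argsort(-(means + sigma_h * rng.normal(size=len(mu))))

rng = np.random.default_rng(0)
mu = np.array([3.0, 2.0, 1.0])                    # item 0 is the best item
pi_a = rum_perm(mu, sigma=1.0, rng=rng)
pi_h = rum_perm_anchored(mu, sigma_h=1.0, pi_a=pi_a, w_a=0.5, rng=rng)
print(pi_a, pi_h)
```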
## 3 Related work
Studying human-algorithm collaboration is a large, rapidly-growing, and highly interdisciplinary area of research. Some veins of research are more ethnographic, studying how people use algorithmic input in their decision-making Lebovitz et al. (2021, 2020); Beede et al. (2020); Yang et al. (2018); Okolo et al. (2021). Other avenues work on developing ML tools designed to work with humans, such as in medical settings Raghu et al. (2018) or child welfare phone screenings Chouldechova et al. (2018). Finally, and most closely related to this paper, some works develop theoretical models to analyze human-algorithm systems, such as Rastogi et al. (2022); Cowgill and Stevenson (2020); Bansal et al. (2021); Steyvers et al. (2022); Madras et al. (2018). Bansal et al. (2021) proposes the notion of _complementarity_, which is achieved when a human-algorithm system together has performance that is strictly better than either the human or the algorithm could achieve along. Steyvers et al. (2022) uses a Bayesian framework to model human-algorithmic complementarity, while Donahue et al. (2022) studies the interaction between complementarity and fairness in joint human-algorithm decision systems, and Rastogi et al. (2022) provides a taxonomy of how humans and algorithms might collaborate. Kleinberg and Raghavan (2021) is structurally similar to ours in that it uses the Mallows model and RUM model to give theoretical guarantees for performance related to rankings of items. However, its setting is human-algorithm _competition_ rather than _cooperation_, where the question is whether it is better to rely on an algorithmic tool or more noisy humans to rank job candidates.
One related area of research is "conformal prediction" where the goal is to optimize the subset that the algorithm presents to the human, such as in Straitouri et al. (2022); Wang et al. (2022); Angelopoulos et al. (2020); Vovk et al. (2005); Babbar et al. (2022). This formulation is structurally similar to ours, but often takes a different approach (e.g. optimizing the subset given some prediction of how the human will pick among them). Another related area is "learning to defer", where an algorithmic tool learns whether to allow a human (out of potentially multiple different humans) to make the final decision, or to make the prediction itself (e.g. Hemmer et al. (2022); Madras et al. (2018); Raghu et al. (2019)). Finally, a third related area is multi-stage screening or pipelines, where each stage narrows down the set of items further (e.g. Blum et al. (2022); Wang and Joachims (2023); Dwork et al. (2020); Bower et al. (2022)). Hron et al. (2021) specifically studies the case with multiple imperfect nominators who each suggest an action to a ranker, who picks among them (and explores how to optimize this setting).
Some papers study how humans rely on algorithmic predictions - for example, De-Arteaga et al. (2020) empirically studies a real-life setting where the algorithm occasionally provided incorrect predictions and explores how the human decision-maker is able to overrule its predictions, while Benz and Rodriguez (2023) studies under what circumstances providing confidence scores helps humans to more accurately decide when to rely on algorithmic predictions. Mclaughlin and Spiess (2023) studies a case where the human decision-maker views the algorithm's recommendation as the "default" - similar to our "anchoring" setting, while Vasconcelos et al. (2023) studies how explanations can reduce the impact of anchoring, and Fogliato et al. (2022) empirically studies the impact of anchoring in a medical setting. Rambachan et al. (2021) studies how to identify human errors in labels from observational data, while Alur et al. (2023) explores how an algorithmic system can detect when a human actor has access to different sources of information than the algorithm itself. Also in a medical setting, Cabitza et al. (2021) studies how "interaction protocols" with doctors and algorithmic tools can affect overall accuracy. Chen et al. (2023) empirically explores how human rely on their intuition along with algorithmic explanations in making decisions. Mozannar et al. (2023) explores a
setting where an LLM is making recommendations of code snippets to programmers, with the goal of making recommendations that are likely to be accepted. Related to complementarity, Guszcza et al. (2022) describes the principles of "hybrid intelligence" necessary for optimizing human-algorithm collaboration.
There has also been a series of work looking more specifically at human-algorithm collaboration in bandit settings. Gao et al. (2021) learns from batched historical human data to develop an algorithm that assigns each task at test time to either itself or a human. Chan et al. (2019) studies a setting where the human is simultaneously learning which option is best for them. However, their framework allows the algorithm to overrule the human, which makes sense in many settings, but is not reasonable in some settings like as our motivating medical example. Bordt and Von Luxburg (2022) formalizes the problem as a two-player setting where both the human and algorithm take actions that affect the reward both experience. Agarwal and Brown (2022) and Agarwal and Brown (2023) study the case where a "menu" of \(k\) arms out of \(n\) are presented to the human, who selects a final one based on a preference model. This setting differs from ours in the model of human preferences over items, as well as the goal of optimizing for the algorithm's overall regret. Yao et al. (2023) studies a related setting where multiple content creators each recommend a top \(k\) set of items to humans, who pick among those \(k\) according to a RUM - key differences are that content creators are competing with each other and also learning their own utility functions over time. Tian et al. (2023) considers the case where the human's mental model of the algorithm is changing over time, and models this as a dynamical system.
Additionally, some work has used the framework of the human as the final decision-maker and studied how to disclose information so as to incentivize them to take the "right" action. Immorlica et al. (2018) studies how to match the best regret in a setting where myopic humans pull the final arm. Hu et al. (2022) studies a related problem with combinatorial bandits, where the goal is to select a subset of the total arms to pull. Bastani et al. (2022) investigates a more applied setting where each human is a potential customer who will become disengaged and leave if they are suggested products (arms) that are a sufficiently poor fit. Kannan et al. (2017) looks at a similar model of sellers considering sequential clients, specifically investigating questions of fairness. In general, these works differ from ours in that they assume a new human arrives at each time step, and so the algorithm is able to selectively disclose information to them.
## 4 Impact of anchoring on joint performance
In this section, we explore the impact of anchoring on the performance of the joint system. Our goal is _complementarity_ as defined in Bansal et al. (2021): when the joint system has a higher chance of picking the best item than either the human or algorithm alone. In particular, we will show that complementarity is impossible for anchored orderings, no matter how many items \(k\) are presented or what the relative accuracy levels of the human and algorithm \(\phi^{h},\phi^{a}\) are. By contrast, we will show that complementarity is possible with unanchored orderings even when the human and algorithm have equal accuracy rates, so long as the number of presented items is \(k=2\). The first subsection describes theoretical tools that hold for all probability distributions, the next two subsections give theoretical results for the Mallows model distribution, while the last subsection extends these results experimentally to the RUM, including the semi-anchored setting.
### Preliminary definitions and tools
First, this subsection describes preliminary tools we will need in order to prove the anchoring results in later sections. Note that every result in this subsection holds for all distributions of human and algorithmic permutations \(\mathcal{D}^{h},\mathcal{D}^{a}\), and regardless of the level of anchoring. However, we will find these tools useful for analysis in later subsections with more specific assumptions on \(\mathcal{D}^{h},\mathcal{D}^{a}\).
First, Definition 1 defines "good events", where the joint human-algorithm system picks the best arm when the algorithm alone would not have, and Definition 2 defines "bad events", where the joint system fails to pick the best arm when the algorithm alone would have. Note that these could be identically defined with respect to when the human would have picked the best arm. However, defining events relative to the algorithm will make later proofs technically simpler.
**Definition 1**.: _A "good event" is a pair of permutations \(\rho^{a},\rho^{h}\) where the joint human-algorithm system selects the best arm \(x_{1}\) when the algorithm alone would not have picked it. The "good event" occurs when one of two cases holds:_
1. _The algorithm does not rank_ \(x_{1}\) _first but includes it in the_ \(k\) _items it presents, while the human ranks item_ \(x_{1}\) _first (_\(\rho_{1}^{a}\neq x_{1},x_{1}\in\rho_{[k]}^{a},\rho_{1}^{h}=x_{1}\)_)_
2. _Identical to case 1, but instead the human ranks_ \(x_{1}\) _in position_ \(m\geq 2\)_, and the algorithm removes all of the items the human had ranked before it (_\(\rho_{1}^{a}\neq x_{1},x_{1}\in\rho_{[k]}^{a},\rho_{m}^{h}=x_{1},\rho_{[m-1]}^{ h}\subseteq\rho_{-[n-k]}^{a}\)_)_
**Definition 2**.: _A "bad event" is a pair of permutations \(\pi^{a},\pi^{h}\) where the joint human-algorithm system fails to pick the best arm, where the algorithm alone would have picked it._
_A "bad event" occurs when the algorithm ranks \(x_{1}\) first, but the human does not (\(\pi_{1}^{a}=x_{1},\pi_{1}^{h}\neq x_{1}\)) and it is not the case that the human ranks \(x_{1}\) in position \(m\), and the algorithm removes all of the items the human had ranked before it (_not_ that \(\pi_{1}^{a}\in\pi_{k}^{a},\pi_{m}^{h}=x_{1},\pi_{[m-1]}^{h}\subseteq\pi_{-[n- k]}^{a}\)_)._
Complementarity occurs whenever the total probability of "good events" is greater than the total probability of "bad events".
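To make Definitions 1 and 2 concrete, the following sketch (our own illustration, not code from the paper) simulates the two-stage selection process directly and classifies a pair of rankings as a good event, a bad event, or neither. Permutations are represented as lists of item indices in ranked order, with item 0 standing in for the best item \(x_{1}\).

```python
def joint_choice(perm_a, perm_h, k):
    """Two-stage selection: the algorithm presents its top-k items,
    then the human picks the presented item they rank highest."""
    presented = set(perm_a[:k])
    for item in perm_h:            # scan the human's ranking from the top
        if item in presented:
            return item

def classify_event(perm_a, perm_h, k, best=0):
    """Classify a pair of rankings in the spirit of Definitions 1 and 2.
    'good': the joint system picks the best item but the algorithm alone would not;
    'bad' : the algorithm alone would pick it but the joint system does not."""
    joint = joint_choice(perm_a, perm_h, k)
    algo_alone = perm_a[0]
    if joint == best and algo_alone != best:
        return "good"
    if joint != best and algo_alone == best:
        return "bad"
    return "neither"

# Example with n = 4 items and k = 2 presented (item 0 plays the role of x_1).
print(classify_event([2, 0, 1, 3], [0, 1, 2, 3], k=2))  # 'good': the human rescues x_1
print(classify_event([0, 2, 1, 3], [2, 1, 0, 3], k=2))  # 'bad': the human overrides x_1
```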
Lemma 1 states that there exists a bijective mapping between "good events" and "bad events" - that is, for every "good event" there is a unique corresponding "bad event". As an immediate corollary, we see that there must be equal numbers of good and bad events. These results show the importance of the probability distributions \(\mathcal{D}^{a},\mathcal{D}^{h}\): given a uniform distribution over permutations, the good events and bad events are equally likely, so any complementarity must be driven by certain permutations being more likely than others.
**Lemma 1**.: _For any human algorithm system with \(k<n\), there is a bijective mapping between "good events" and "bad events"._
**Corollary**.: _There are equal numbers of "good events" and "bad events"._
While the full proof of Lemma 1 is deferred to Appendix A, the relevant bijective mapping will be useful for later analysis. We define it as "best-item mapping", a function mapping from "good events" to "bad events" by swapping the indices of the best item \(x_{1}\) and whichever item \(x_{j}\) the algorithm had ranked first instead of \(x_{1}\).
**Definition 3** (Best-item mapping).: _Take any pair of orderings \(\rho^{a},\rho^{h}\) such that_
\[\rho_{1}^{a}=x_{j}\quad\rho_{i}^{a}=x_{1}\quad\rho_{m}^{h}=x_{1}\quad\rho_{ \ell}^{h}=x_{j}\]
_for \(x_{j}\neq x_{1}\). Then, we construct the new orderings \(\pi^{a},\pi^{h}\) by flipping the location of items \(x_{1},x_{j}\), keeping all other items in the same location:_
\[\pi_{1}^{a}=x_{1}\quad\pi_{i}^{a}=x_{j}\quad\pi_{m}^{h}=x_{j}\quad\pi_{\ell}^ {h}=x_{1}\]
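The best-item mapping itself is straightforward to implement. The sketch below (our own, using the same list-of-indices convention as the previous snippet) simply swaps the positions of \(x_{1}\) and the item the algorithm ranked first, in both rankings.

```python
def best_item_mapping(perm_a, perm_h, best=0):
    """Swap the positions of the best item and the item the algorithm
    ranked first, in both the algorithm's and the human's rankings."""
    x_j = perm_a[0]                      # the item the algorithm ranked first
    def swap(perm):
        out = list(perm)
        i, j = out.index(best), out.index(x_j)
        out[i], out[j] = out[j], out[i]
        return out
    return swap(perm_a), swap(perm_h)

# Mapping the 'good event' from the earlier example to its corresponding 'bad event':
print(best_item_mapping([2, 0, 1, 3], [0, 1, 2, 3]))  # ([0, 2, 1, 3], [2, 1, 0, 3])
```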
### Anchoring always causes worse performance
The preliminary results for "good events" and "bad events" in the previous subsection hold for all probability distributions \(\mathcal{D}^{a},\mathcal{D}^{h}\) and all types of anchoring between these distributions. In this and the next subsection, we will focus on the Mallows model and give conditions such that the joint system will perform strictly worse or better than human or algorithm alone.
Theorem 1 below shows that when anchoring is present, the joint system always has strictly worse accuracy than the algorithm alone - no matter how many items \(k\) are presented or what the relative accuracy rates of the human and algorithm \(\phi^{a},\phi^{h}\) are. This is a quite general impossibility result, indicating that a wide range of conditions lead to undesirable performance.
**Theorem 1**.: _In the anchored setting with Mallows model distributions for permutations, the probability of picking the best arm strictly decreases in the joint human-algorithm system, as compared to the algorithm alone. This holds for any \(k<n\), no matter the accuracy rates for the algorithm and human \(\phi^{a},\phi^{h}\)._
While we defer a full proof of Theorem 1 to Appendix A, we give an informal proof sketch below:
Proof sketch.: This proof uses the best-item mapping from Definition 3. In particular, we take any "good event", apply the best-item mapping, and show that the corresponding "bad event" is strictly more likely than the "good event".
Given the Mallows model, a permutation \(\pi\) is more likely if it involves fewer _inversions_ (instances where \(i<j\) but \(\pi_{i}>\pi_{j}\): a lower-utility item is ranked above a higher-utility item). Best-item mapping works by flipping the ranks of the best item \(x_{1}\) and \(x_{j}\), defined as whichever item the algorithm ranked first in the "good event". This mapping changes the relative ranking of \(x_{1}\) and \(x_{j}\), but also the pairwise ranking of every item that lies between \(x_{j}\) and \(x_{1}\). The full proof shows that this process always strictly _decreases_ the total number of inversions in the algorithm's ranking, relative to the "good event".
Next, we consider the human's permutation. Best-item mapping also flips the indices of \(x_{1},x_{j}\) in the human's permutation. However, in the anchored setting the human's distribution \(\mathcal{D}^{h}\) is defined relative to the algorithm's presented permutation. Therefore, flipping the indices of \(x_{1},x_{j}\) for the algorithm is equivalent to relabeling the items, meaning that the human's "good event" permutation is exactly as likely as the human's "bad event" ordering, given the changed permutation. Because of this, our results hold no matter the accuracy rates of the human and algorithm \(\phi^{h},\phi^{a}\).
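The inversion count driving this argument is easy to compute. The sketch below counts inversions relative to the reference order \(x_{1},\dots,x_{n}\) and evaluates Mallows weights under an assumed exponential parametrization \(P(\pi)\propto\exp(-\phi\cdot\mathrm{inv}(\pi))\); the paper's exact normalization convention may differ, so the snippet is only meant to illustrate how fewer inversions translate into higher probability.

```python
import itertools, math

def inversions(perm):
    """Pairs ranked in the wrong relative order with respect to the
    reference order 0, 1, ..., n-1 (item 0 is the best item x_1)."""
    return sum(1 for p in range(len(perm)) for q in range(p + 1, len(perm))
               if perm[p] > perm[q])

def mallows_probs(n, phi):
    """Assumed form: P(pi) proportional to exp(-phi * inversions(pi))."""
    perms = list(itertools.permutations(range(n)))
    weights = [math.exp(-phi * inversions(p)) for p in perms]
    z = sum(weights)
    return {p: w / z for p, w in zip(perms, weights)}

probs = mallows_probs(n=3, phi=1.0)
print(probs[(0, 1, 2)])   # probability of recovering the correct order, roughly 0.49
```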
### Strictly better performance is always achievable without anchoring
In the previous section, we showed that complementarity is impossible in the anchored setting under a wide range of conditions. In this section, we will give specific conditions for when complementarity is achievable in the _unanchored_ setting: specifically, whenever the human and algorithm have equal accuracy rates \(\phi^{a}=\phi^{h}\) and the algorithm presents \(k=2\) items. We consider this setting particularly important because it is easy to achieve in practice: even if the human is very bandwidth limited, it is reasonable to assume that they are able to consider a finalist set of 2 items to pick between.
**Theorem 2**.: _In the unanchored setting with permutations governed by the Mallows model, the probability of picking the best arm strictly increases in the joint human-algorithm system when exactly 2 items are presented (\(k=2\)) and \(\phi^{a}=\phi^{h}\)._
While we will again defer a full proof to Appendix A, we will offer a proof sketch:
Proof sketch.: Similar to Theorem 1, we use the best-item mapping to map between good and bad events. However, we show that in the unanchored setting, this mapping always results in a "bad event" that is equally or less likely than the corresponding "good event".
First, we consider the algorithm's permutations. Here, we show that best-item mapping actually _decreases_ the total number of inversions by exactly one, making the "bad event" ordering for the algorithm strictly _more_ likely. Decreasing the number of inversions is the _opposite_ of the overall goal of this proof; the requirement that \(k=2\) is what caps this decrease at exactly one inversion.
However, we show that this effect is counteracted by the human's permutation. In the unanchored setting, the human's permutation is completely independent of the algorithm's permutation, so the analysis is much more involved than in Theorem 1. Specifically, we consider each "good event" case in Definition 1 and show that best-item mapping always _increases_ the total number of inversions by at least one.
Because the human and algorithm are assumed to have equal accuracy rates, the increase in inversions from the human's permutations cancels out the decrease in inversions from the algorithm's permutations, showing that the "bad event" is no more likely than the corresponding "good event".
The proof concludes by constructing an example where the "good event" is _strictly_ more likely than the "bad event", showing that the total probability of "good events" is strictly more likely than the total probability of "bad events".
Finally, we wish to comment briefly on the permutation distributions \(\mathcal{D}^{a},\mathcal{D}^{h}\). Both the statements of Theorem 1 and Theorem 2 are specific to the Mallows model. However, the proof technique relies very weakly on the Mallows assumption. Specifically, the only property that is necessary is that the best-item mapping in Definition 3 weakly decreases (for anchored) or increases (for unanchored) the probability of permutations occurring. For Mallows model, this is satisfied because the probability of a permutation occurring is governed by the number of inversions present. Other probability distributions satisfying this property would show identical properties to those proven in Theorems 1 and 2.
### Numerical extensions and partial anchoring
In this subsection, we extend the results of the previous subsections in two ways. First, we consider the Random Utility Model, another commonly used model of noisy permutations over items. Secondly, we model cases where the human is influenced by the algorithm's presented ranking of items, but not completely anchored on it - the semi-anchored case. Specifically, in the semi-anchored case the human draws the value of item \(i\) from a noise distribution with mean \(w_{a}\cdot\mu_{j}+(1-w_{a})\cdot\mu_{i}\), where \(w_{a}\) is a weight parameter indicating how much the algorithm's ordering anchors the human's permutation, and \(j\) is the index of item \(i\) in the algorithm's permutation \(\pi^{a}\). In this way, \(w_{a}=0\) reflects the unanchored case, while \(w_{a}=1\) reflects the anchored case.
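A minimal sketch of one semi-anchored draw under this model follows. The blending rule implements the \(w_{a}\cdot\mu_{j}+(1-w_{a})\cdot\mu_{i}\) formula above; the specific means and noise level are placeholders loosely based on the Figure 1 caption rather than the exact simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def rum_ranking(means, sigma):
    """Rank items by a noisy utility draw (Random Utility Model)."""
    values = rng.normal(means, sigma)
    return [int(i) for i in np.argsort(-values)]   # item indices, best-scoring first

def semi_anchored_human(means, sigma, perm_a, w_a):
    """Human ranking anchored with weight w_a on the algorithm's ordering:
    item i, shown at position j by the algorithm, gets mean
    w_a * means[j] + (1 - w_a) * means[i]."""
    pos = {item: j for j, item in enumerate(perm_a)}
    blended = np.array([w_a * means[pos[i]] + (1 - w_a) * means[i]
                        for i in range(len(means))])
    return rum_ranking(blended, sigma)

means = np.array([0.5, 0.1, 0.1, 0.1, 0.1])       # item 0 is the best item
perm_a = rum_ranking(means, sigma=0.05 ** 0.5)    # algorithm's noisy ranking
print(semi_anchored_human(means, sigma=0.05 ** 0.5, perm_a=perm_a, w_a=0.5))
```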
Figure 1 demonstrates numerical simulations for the RUM, given decreasing weight \(w_{a}\)1. Note that the x-axis gives \(k\) number of items presented: \(k=1\) is the accuracy of the algorithm alone, while \(k=5\) gives the accuracy of the human considering all items (but potentially anchored on the algorithm's ordering).
Footnote 1: Code is available to reproduce all simulations at [https://github.com/kpdonahue/benefits_harns_joint_decision_making](https://github.com/kpdonahue/benefits_harns_joint_decision_making)
The top figure has \(w_{a}=1\), reflecting complete anchoring. In this case, we see accuracy is maximized at \(k=1\), which is when the algorithm acts alone. This result matches Theorem 1's findings for the Mallows model: in a completely anchored setting, complementarity is impossible. Note that Figure 1 demonstrates an even stronger result: the accuracy of the joint system is _decreasing_ in the number of items presented \(k\).
The bottom figure has \(w_{a}=0\) (no anchoring). In this case, we note that accuracy is identical at \(k=1,k=n=5\): the human and algorithm have equal accuracy in these plots and are independent, so they each have equal accuracy when acting alone. Here, we see that the expected accuracy at \(k=2\) is greater than the accuracy at \(k=1,k=5\), again matching the results from Theorem 2 for the Mallows model. However, we again see stronger results in Figure 1, which shows that for the given parameters the joint system exhibits complementarity for all \(k\in[2,n-1]\).
Finally, the middle two figures describe cases where the human is partially anchored on the algorithm and exhibit results intermediate between the top and bottom figures. Specifically, complementarity appears to occur whenever \(k\) is "sufficiently small" that the benefits of incorporating the human's ranking outweigh the harms of anchoring.
## 5 Asymmetric complementarity zones without anchoring
In previous sections, Theorem 1 showed that complementarity is impossible for the Mallows model, regardless of the levels of accuracy for the human and algorithm, while Theorem 2 showed that complementarity is possible in the unanchored case with identical accuracy rates \(\phi_{h}=\phi_{a}\) with \(k=2\). In this section, we will further explore the unanchored setting, but allowing accuracy rates to differ. Specifically, we will show that there always exist regions of complementarity: cases where a more accurate agent would strictly increase its accuracy by collaborating with a less accurate partner. However, these regions are _asymmetric_: it is more likely that a more accurate human would gain from collaborating than a more accurate algorithm.
### Provable benefits from joining with a less accurate partner
Throughout this section, we will model the algorithm and human permutations as coming from a Mallows model. For analytical tractability, our theoretical results will focus on the case with \(n=3,k=2\).
First, Lemma 2 shows that, no matter how accurate the algorithm is, there always exists a (slightly) more accurate human such that the joint system is strictly more accurate than either (achieving complementarity).
**Lemma 2** (More accurate human).: _Consider \(n=3,k=2\) where the human and algorithm both have unanchored Mallows models with \(\phi_{a}\neq\phi_{h}\). Then, there exists a region of complementarity where a more accurate human obtains higher accuracy when collaborating with a less accurate algorithm. Specifically, for all \(\phi_{a}>0\), so long as \(\phi_{h}\in[\phi_{a},\min(1.3\cdot\phi_{a},\phi_{a}+0.3)]\) the joint system has better performance than either the human alone or algorithm alone._
For context, a Mallows model with \(n=3\) recovers the correct permutation \([x_{1},x_{2},x_{3}]\) with probability \(48\%\) of the time with \(\phi=1\) and \(57\%\) of the time with \(\phi=1.3\), so the regions in Lemma 2 represent moderate but meaningful differences in accuracy levels.
Figure 1: Figures showing probability of joint system picking the best item, given \(k\) items are presented out of \(n=5\) total. Values are drawn from a Random Utility Model using Normal distributions. Figures decrease in the level of anchoring the human exhibits on the algorithm, from \(w_{a}=1,w_{a}=0.5,w_{a}=0.25,w_{a}=0\) (complete anchoring, 0.5 weight on algorithm, 0.25 weight, no anchoring). Each value of \(k\) has 10 trials, each with \(5\cdot 10^{4}\) simulations. The algorithm has values drawn from \(\mathcal{D}_{1}^{a}=N(\mu=0.5,\sigma^{2}=0.05),\mathcal{D}_{i>1}^{a}=N(\mu=0.1,\sigma^{2}=0.05)\), the human has equal noise levels \(\sigma^{2}=0.05\).
Next, Lemma 3 gives a corresponding result for when the algorithm is more accurate than the human. However, this region differs substantially from that in Lemma 2: it is substantially narrower, indicating a much smaller range where complementarity is possible.
**Lemma 3** (More accurate algorithm).: _However, the roles of the human and algorithm are not symmetric: for the same setting as in Lemma 2, the zone of complementarity is much narrower. Specifically, complementarity is possible for \(\phi_{a}\in[\phi_{h},\phi_{h}\cdot(1+0.01)]\), for all \(\phi_{h}\leq 1\), but is never possible for any \(\phi_{a}\geq\phi_{h}+0.15\) for \(\phi_{a}\geq 1\)._
These results are illustrated in Figure 2. The contour plot gives the accuracy of the joint human-algorithm system, which is strictly increasing in \(\phi^{a},\phi^{h}\). Overlaid in blue is the analytically derived region of complementarity. The regions derived in Lemmas 2 and 3 are overlaid in red and white, respectively. Note that the red region encompasses almost all of the zone of complementarity, while the white region is comparatively miniscule.
Lemma 4 explains these results: for this setting, the performance of the joint system is always higher when the more accurate actor is the human, rather than the algorithm. For intuition for this asymmetry, consider the marginal impact of a more accurate algorithm - it will be slightly more likely to include the best item \(x_{1}\) among the \(k=2\) it presents. However, once the algorithm is sufficiently accurate, it will almost always present \(x_{1}\), so increasing accuracy will have diminishing returns. A more accurate human will be more likely to select the best item \(x_{1}\), given that it is presented - which will more directly make the joint human-algorithm system more accurate.
This explains why the region of complementarity is larger when the human is the more accurate one - the human's accuracy more directly increases the accuracy of the joint system, which outperforms the more accurate actor (here, the human) for a wider range of accuracy differentials.
**Lemma 4**.: _Given any two Mallows accuracies \(\phi_{1}>\phi_{2}\), for \(n=3,k=2\), the joint system always has strictly higher accuracy when \(\phi_{h}=\phi_{1}>\phi_{a}=\phi_{2}\) than when \(\phi_{a}=\phi_{1}>\phi_{h}=\phi_{2}\)._
Figure 2: A contour plot showing relative accuracy of the joint system for differing algorithm and human accuracy \(\phi_{a},\phi_{h}\), given a Mallows model distribution for each actor with \(n=3,k=2\). The value shown is the accuracy of the joint system, minus accuracy of max(algorithm accuracy, human accuracy), so positive regions indicate complementarity. Highlighted in blue is the region of complementarity. Additionally, Lemmas 2 and 3 derive regions where the human (respectively, the algorithm) has greater accuracy than its partner, and yet obtains strictly greater accuracy when collaborating (shown in red and white regions, respectively).
### Numerical extensions
Finally, Figure 3 extends these results numerically. This figure extends the theoretical results in multiple ways: first, it shows \(n=10\), so there are substantially more items in total than in Figure 2. Holding \(k=2\), this means the algorithm has a "harder" job of including the best arm among the \(k=2\) items it presents. Secondly, Figure 3 uses the Random Utility Model of permutations, where greater accuracy levels are reflected by smaller standard deviations of the noise. Similar to Section 4, we include this to study how our theoretical results for the Mallows model extend to the RUM.
Note that even in this setting, we see qualitatively similar results to Figure 2: there always exists a region of complementarity. Specifically, this region is largest where both the algorithm and the human have low accuracy (bottom left of the figure), and it roughly persists as the human and algorithm accuracy increase (diagonally to the upper right). However, we note that this region of complementarity is _asymmetric_: a more accurate human is more likely to benefit from partnering with a less accurate algorithm. This is visually apparent from how much further the zone of complementarity extends up the \(y\) axis (human noise variance). Again, this is because increases in the accuracy of the human more directly increase the accuracy of the joint human-algorithm system.
## 6 Discussion and future work
In this paper, we have proposed a model of human-algorithm collaboration where neither the human nor the algorithm has ultimate say, but where they successively filter the set of \(n\) items down to \(k\) and finally a single choice. We focus on how the noise distributions \(\mathcal{D}^{a},\mathcal{D}^{h}\) influence whether the combined system has a higher chance of picking the best (correct) item. Future work could extend our results to a wider range of noise models. Other interesting extensions could consider more complex models of human-algorithm collaboration - for example, cases where the human and algorithm can "vote" on the ordering of items, or other models of interaction. Additionally, they could explore cases where either the human or the algorithm is inherently biased - for example, when the algorithm has a central distribution that does not rank the best item first.
Figure 3: A version of Figure 2, but given a _RUM_ with _Normal_ distribution for each actor with \(n=10,k=2\). Similar to Figure 2, the \(x\) and \(y\) axis show increasing accuracy (here, decreasing variance). For clarity, we have flipped the axes to match Figure 2 so the lower left and upper right mean high noise and perfect accuracy, respectively.
Acknowledgments
We are extremely grateful to Kiran Tomlinson, Manish Raghavan, Manxi Wu, Aaron Tucker, Katherine Van Koevering, Kritkorn Kartikoon, Oliver Richardson, and Vasilis Syrgkanis for invaluable discussions. |
2303.13244 | Free-electron interactions with photonic GKP states: universal control
and quantum error correction | We show that the coherent interaction between free electrons and photons can
be used for universal control of continuous-variable photonic quantum states in
the form of Gottesman-Kitaev-Preskill (GKP) qubits. Specifically, we find that
electron energy combs enable non-destructive measurements of the photonic state
and can induce arbitrary gates. Moreover, a single electron interacting with
multiple photonic modes can create highly entangled states such as
Greenberger-Horne-Zeilinger states and cluster states of GKPs. | Gefen Baranes, Shiran Even-Haim, Ron Ruimy, Alexey Gorlach, Raphael Dahan, Asaf A. Diringer, Shay Hacohen-Gourgy, Ido Kaminer | 2023-03-23T13:21:04Z | http://arxiv.org/abs/2303.13244v2 | # Free-electron interactions with photonic GKP states: universal control and quantum error correction
###### Abstract
We show that the coherent interaction between free electrons and photons can be used for universal control of continuous-variable photonic quantum states in the form of Gottesman-Kitaev-Preskill (GKP) qubits. Specifically, we find that electron energy combs enable nondestructive measurements of the photonic state and can induce arbitrary gates. Moreover, a single electron interacting with multiple photonic modes can create highly entangled states such as Greenberger-Horne-Zeilinger states and cluster states of GKPs.
## 1 Introduction
Quantum error correction is essential for reaching large-scale quantum computation. One prominent approach toward this goal is to encode qubit information on continuous variables [1, 2] in quantum harmonic oscillators, known as bosonic codes. These codes, and most prominently the Gottesman-Kitaev-Preskill (GKP) code [1], facilitate quantum error correction for fault-tolerant quantum computation [3]. The generation and manipulation of GKP states is a formidable challenge, as it necessitates non-Gaussian operations that typically require strong nonlinearities.
Creating the required nonlinearity can rely on a wide range of physical mechanisms. The nonlinearity can arise from intrinsically non-quadratic Hamiltonians that can be realized in the optical regime using the Kerr effect [4, 5] or post-selection by number-resolving photonic measurements [6, 7, 8]. The GKP states can also be deterministically generated from cat states [9, 10], which, however, still require nonlinearity for their generation. Such nonlinearities are typically counterproductive to the stabilization of GKP states since they increase decoherence by coupling to external degreed-of-freedom, even more so given that such states rely on a large average photon number.
Leading approaches for generating and manipulating GKP states rely on the coupling to matter ancilla qubits, which provide the necessary strong nonlinearity. Such a scheme was demonstrated experimentally with the vibrational motion of trapped ions [11, 12], with cavity photons at microwave frequencies coupled to superconducting qubits in circuit QED [13]. A similar ancilla-based scheme was also recently suggested theoretically in optical frequencies using cavity QED [14].
Here we propose a different physical mechanism that provides the needed nonlinear interaction using free electrons that act as ancilla qubits. We show how the fundamental coherent interaction of free electrons and photons, perhaps the most basic interaction in QED, can provide the building blocks for universal quantum computing with GKP states. The interaction provides the strong nonlinearity needed for quantum error correction and universal control of GKP states. This interaction can be used in gate-based [15] and measurement-based [16] computational protocols.
The first step in this direction has recently shown the free-electron-based generation of GKP states [17]. We now unveil the complete picture, describing how the fundamental electron-photon interaction can provide universality and error correction. The underlying interaction can be
described as a conditional displacement operator (\(\mathcal{C}D\)) in the joint electron-photon Hilbert space. We follow ideas from circuit QED [13] and use this operator as a building block to create an arbitrary unitary operation in the combined Hilbert space [18].
The idea to use free electrons in the context of quantum optics is inspired by recent advances in ultrafast electron microscopy. Specifically, our work relies on the inelastic scattering of free electrons by electromagnetic fields, which was famously observed in photon-induced near-field electron microscopy (PINEM) [19, 20, 21, 22, 23, 24, 25, 26]. This nonlinear scattering provides the additional degrees of freedom required to encode quantum information on the individual electron by coherent modulation of its wavefunction [27, 28, 29]. The ability to control the modulated electrons has been studied extensively in theory (e.g., [30, 31]) and experiments (e.g., [32, 33, 34]). The interaction of such modulated electrons enables photon addition and subtraction [35], measurement of light statistics [36], coherent control of two-level systems [37, 38, 39, 40, 41, 42], and generation of entanglement [43]. The same underlying theory enables heralded generation of Fock states of one or more photons [44, 45, 46, 47]. Such ideas and experimental achievements support the feasibility of the scheme we propose here.
The use of free electrons as matter ancilla qubits is intriguing for a few practical reasons. Free electrons are versatile in their energy spectrum and can access a large range of frequencies, including the optical (and potentially higher) range. This versatility makes it possible to transfer to the optical regime concepts that have so far only been demonstrated in the microwave regime, such as nonlinear ancilla qubits - potentially bypassing inherent technical limitations of scalability and low-temperature operation.
Moreover, the free electrons are fundamentally different from previously proposed matter ancilla qubits because they are _flying qubits_, meaning that they couple only temporarily to the photonic mode before they continue propagating. The limited interaction time reduces the decoherence of the photonic mode caused by its coupling to the ancilla. This coupling decoherence can be characterized by multiple noisy channels, such as inverse-Purcell decay [48] and self-Kerr nonlinearities [49], which pose a stronger limitation for GKP states due to their larger photon number. These decoherence channels are suppressed by the short interaction time of the flying electron qubit.
Another advantage provided by the electrons being flying qubits is that they naturally facilitate coupling between spatially separated photonic modes, which enables the generation of multipartite highly entangled states such as Greenberger-Horne-Zeilinger (GHZ) states [50] and
cluster states [16], important resources for quantum computation and communication [51, 52, 53]. These possibilities are presented in our work below.
## 2 Free electrons as ancillas for conditional displacement on photonic states
We define the electron coherent energy comb as a superposition of electron energy states with a Gaussian envelope around a central energy \(E_{0}\),
\[\left|\mathrm{comb}_{\sigma,\phi}^{\omega}\right\rangle\propto\sum_{n}e^{- \frac{n^{2}}{2\sigma^{2}}}e^{i\phi n}\left|E_{0}+n\hbar\omega\right\rangle. \tag{1}\]
Here \(\left|E_{0}\right\rangle\) is the state of an electron with narrow (compared to \(\hbar\omega\)) energy distribution around the energy \(E_{0}\), \(\omega\) is the modulating laser frequency, \(\sigma\) is dimensionless and shows the effective number of energy states in the electron comb, and \(\phi\) is the electron phase controlled by the laser phase. In this paper, we consider the limit of \(\sigma\gg 1\), and omit the \(\sigma\) in the electron comb notation. In this case, the electron comb becomes an approximate eigenstate of the energy displacement operators \(b_{\omega},\ b_{\omega}^{\dagger}\) (satisfying \(b_{\omega}b_{\omega}^{\dagger}=b_{\omega}^{\dagger}b_{\omega}=1\)). These operators describe a translation of \(\hbar\omega\) in the electron's energy, which corresponds to the emission or absorption of a single photon [54], respectively.
The electron comb can be described as a qubit with the following basis:
\[\left|0\right\rangle_{\mathrm{e}}=\left|\mathrm{comb}_{\phi=0}^{2\omega} \right\rangle,\qquad\left|1\right\rangle_{\mathrm{e}}=b_{\omega}\left|\mathrm{ comb}_{\phi=0}^{2\omega}\right\rangle. \tag{2}\]
We denote \(\left|\psi\right\rangle_{\mathrm{e}}=\alpha\left|0\right\rangle_{\mathrm{e}}+ \beta\left|1\right\rangle_{\mathrm{e}}\) as a general free-electron qubit state. The \(\left|0\right\rangle_{\mathrm{e}}\) state can be generated via a typical electron comb generation scheme [30, 31] using a modulation laser with frequency \(2\omega\). Universal single-qubit gates [27] over such free-electron qubit states are achievable by multiple PINEM interactions separated by free-space propagation, i.e., drift. Free-space propagation over an appropriate distance corresponds to a rotation around the Z axis on the Bloch sphere, and PINEM interaction corresponds to a rotation around the X axis on the Bloch sphere [27]. See Fig. 1c. and Appendix A5. Coming back to the analogy of coherent light, if we consider the energy translation operator \(b_{\omega}\), then the electron qubit states are eigenstates of \(b_{\omega}^{2}\) and satisfy \(\left\langle i\right|_{\mathrm{e}}b_{\omega}\left|i\right\rangle_{\mathrm{e}}\approx 0\) with \(i=0\),\(\mathbf{1}\), similar to ladder operators acting on optical cat states. This observation creates an analogy between the creation of GKP states [17] and cat breeding protocols [9].
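These properties are easy to verify numerically. The sketch below (our own illustration, with an arbitrary truncation and envelope width rather than parameters from the paper) represents the comb on a finite ladder of energy sidebands, implements \(b_{\omega}\) as a one-site shift, and checks that \(\langle i|b_{\omega}|i\rangle\approx 0\) while \(\langle i|b_{\omega}^{2}|i\rangle\approx 1\) for a broad envelope \(\sigma\gg 1\).

```python
import numpy as np

def comb(n_max, sigma, spacing=2, offset=0):
    """Gaussian-envelope comb amplitudes on sidebands -n_max..n_max,
    with the given spacing (2 for the qubit states) and offset (0 or 1)."""
    n = np.arange(-n_max, n_max + 1)
    amp = np.where((n - offset) % spacing == 0, np.exp(-n**2 / (2 * sigma**2)), 0.0)
    return amp / np.linalg.norm(amp)

def shift(state, steps=1):
    """Energy translation b_omega^steps: shift the amplitudes by `steps` sidebands.
    (The envelope is negligible at the edges, so np.roll wrap-around is harmless.)"""
    return np.roll(state, steps)

ket0 = comb(n_max=200, sigma=40, offset=0)   # |0>_e : even sidebands
ket1 = shift(ket0, 1)                        # |1>_e = b |0>_e : odd sidebands
print(np.dot(ket0, shift(ket0, 1)))          # <0| b |0>   = 0 exactly, by parity
print(np.dot(ket0, shift(ket0, 2)))          # <0| b^2 |0> ~ 1 for sigma >> 1
```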
To describe the interaction of such modulated electrons with quantum photonic states, we quantize the electromagnetic field, as was presented theoretically in [54, 55] and was in part demonstrated experimentally in [36]. This interaction can be described using the following scattering matrix:
\[S\big{(}g_{\mathrm{Q}}\big{)}=D\big{(}g_{\mathrm{Q}}b_{\omega}\big{)}=e^{g_{ \mathrm{Q}}b_{\omega}a^{\dagger}-g_{\mathrm{Q}}^{\star}b_{\omega}^{\dagger}a}. \tag{3}\]
Here \(g_{\mathrm{Q}}\) is the coupling between the free electron and the photonic mode; its amplitude \(|g_{\mathrm{Q}}|\) is controlled by the distance between the free electron and the mode [47] and its phase \(\angle g_{\mathrm{Q}}\) by the modulating laser phase. \(a,a^{\dagger}\) are the annihilation and creation operators for the photonic mode. \(b_{\omega}\), \(b_{\omega}^{\dagger}\), unlike the photonic operators, commute \(\big{[}b_{\omega},b_{\omega}^{\dagger}\big{]}=0\). \(D(\alpha)=\exp(\alpha a^{\dagger}-\alpha^{\ast}a)\) is a coherent displacement operator [56].
For the free-electron qubit, \(b_{\omega}=b_{\omega}^{\dagger}=\sigma_{x}\) (see Fig. 1b). The scattering matrix in Eq. (3) is then reduced to a conditional displacement (\(CD\)) operator, controlled in the X basis:
\[S\big{(}g_{\mathrm{Q}}\big{)}=D\big{(}g_{\mathrm{Q}}\sigma_{ \mathrm{x}}\big{)}=|+\rangle_{\mathrm{e}}\langle+|_{\mathrm{e}}\bigotimes D \big{(}g_{\mathrm{Q}}\big{)}+|-\rangle_{\mathrm{e}}\langle-|_{\mathrm{e}} \bigotimes D\big{(}-g_{\mathrm{Q}}\big{)}=\] \[=\frac{1}{2}\Big{(}\Big{(}D\big{(}g_{\mathrm{Q}}\big{)}+D\big{(} -g_{\mathrm{Q}}\big{)}\Big{)}\,I+\Big{(}D\big{(}g_{\mathrm{Q}}\big{)}-D\big{(} -g_{\mathrm{Q}}\big{)}\Big{)}\,\sigma_{\mathrm{x}}\Big{)}=CD\big{(}g_{\mathrm{ Q}}\big{)}. \tag{4}\]
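Equation (4) can be checked numerically in a truncated Fock space. The following sketch (plain NumPy/SciPy, our own and not code from the paper) builds \(S(g_{\mathrm{Q}})=\exp(g_{\mathrm{Q}}\sigma_{x}a^{\dagger}-g_{\mathrm{Q}}^{*}\sigma_{x}a)\) and compares it with the conditional-displacement form; the two coincide up to floating-point error.

```python
import numpy as np
from scipy.linalg import expm

N = 40                                    # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def displace(alpha):
    """Displacement operator D(alpha) in the truncated Fock space."""
    return expm(alpha * a.conj().T - np.conj(alpha) * a)

g = 0.3 + 0.2j
S = expm(np.kron(sx, g * a.conj().T - np.conj(g) * a))   # Eq. (3) with b -> sigma_x

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
CD = (np.kron(np.outer(plus, plus.conj()), displace(g))
      + np.kron(np.outer(minus, minus.conj()), displace(-g)))  # Eq. (4)

print(np.max(np.abs(S - CD)))   # ~0: the two forms coincide up to floating-point error
```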
The following chapters show how the free-electron qubit can be used as an ancilla qubit in manipulating GKP states in a wide range of frequencies, including the optical range.
**Figure 1. The free-electron-photon interaction as a fundamental building block for quantum information processing.****(a)** The free electron is pre-shaped into a free-electron qubit state (e.g., using laser interactions [30, 31]), which interacts with the photonic mode through a near-field coupling. The photonic mode contains a GKP state. The interaction entangles the electron with the GKP state. **(b)** The free-electron qubit states are shown by their energy spectra, as the even (blue, qubit \(|0\rangle_{\rm e}\)) and odd (red, qubit \(|1\rangle_{\rm e}\)) comb electrons with \(2\hbar\omega\) energy spacing. **(c)** Building blocks for universal quantum computation on the free-electron qubit [27] and the GKP state (left) and their corresponding circuits (right). The first is the interaction scattering matrix, the second is the free space propagation (FSP) operation on the electron, and the last is the PINEM operation on the electron describing interaction with classical coherent light.
## 3 Universal single-qubit gates and quantum error correction with free-electron ancillas
We focus on the case where the photonic mode is an ideal GKP state [1]. GKP states form a lattice in their Wigner representation [57] and can be defined by the lattice constants \(a_{x,y,z}\) (see Appendix B). Stabilization of the GKP code can be achieved with the \(CD\) operator from Eq. (4). To create the stabilizers, \(g_{Q}\) should be chosen as a lattice constant \(g_{Q}=\pm a_{i}\). Pauli gates on the GKP qubit can be achieved in two ways. The first is using the same interaction with \(g_{Q}=\pm\frac{a_{i}}{2}\), for \(i=x,y,z\). The second is by deterministically displacing the photonic state by inserting a laser interaction with the GKP qubit by a beam-splitter. The Hadamard (\(H\)) gate on the GKP state can be achieved by a \(\pi/2\) phase shift of all \(g_{Q}\)'s of all the following computation steps [58]. Such an operation can be performed by digitally delaying the electrons. These choices are analogous to the case of regular \(CD\) operations based on qubit ancillas [13, 58].
When \(g_{Q}=a_{i}/4\), Eq. (4) gives a controlled Pauli gate \(\sigma_{i}\) on the GKP state, controlled by the electron's state in the X basis. For a non-ideal GKP state, the added displacement \(D(-a_{i}/4)\) needs to be corrected (in post-processing). As an example, the CNOT gate between the free electron and the GKP state is given by \(g_{Q}=a_{x}/4\):
\[{\rm CNOT}_{\rm e\to ph} = \big{(}H_{\rm e}\bigotimes D(-a_{x}/4)\big{)}S(a_{x}/4)(H_{\rm e }\bigotimes I). \tag{5}\]
Controlled Pauli gates give the ability to create maximum entanglement between the electron qubit and the GKP state. Moreover, controlled Pauli gates can be used to read out the GKP state by measuring the electron's energy as an ancilla [15] (Fig. 2a).
The \(CD\) operator and rotation gates on the ancilla qubit can be used to implement a universal set of gates on the GKP state with an additional feedforward mechanism. In the feedforward mechanism, the next operation is done according to the electron's measurement result.
Rotation gates around \(i=x,y,z\) axis with angle \(\phi\), \(R_{i}(\phi)\), are achieved with teleported gates by an ancilla qubit [58, 59], as shown in Fig. 2c. The initial state of the electron is \(|0\rangle_{\rm e}\). The electron interacts with the GKP state with \(g_{\rm Q}=\frac{a_{i}}{4}\), \(i=x,y,z\) according to the rotation axis and is then measured in the \(\left|\phi_{\pm}\right\rangle_{\rm e}=1/\sqrt{2}\bigl{(}e^{i\phi/2}|0\rangle_{ \rm e}\pm e^{-i\phi/2}|1\rangle_{\rm e}\bigr{)}\) basis. The ability to coherently control the electron's qubit state [27] allows measuring it in any desired basis, with additional drift and PINEM interactions for the post-interaction electron. If the measurement result is \(|\phi_{-}\rangle_{\rm e}\), the Pauli gate \(\sigma_{i}\) is applied to the GKP state, and if the measurement result is \(|\phi_{+}\rangle_{\rm e}\) there is no need to apply any gate. See Appendix C5 for details on measuring in the \(\left|\phi_{\pm}\right\rangle_{\rm e}\) basis. The \(S\) and \(T\) gates can be achieved by rotations around the Z axis, with the angles \(\frac{\pi}{2}\) and \(\frac{\pi}{4}\), respectively.
Figure 2: **Single-qubit gates induced by the free-electron ancilla.****(a)** Readout operation: using electron ancilla qubit with interaction \(g_{\rm Q}=\frac{a_{Z}}{4}\), followed by a measurement of the electron to extract the GKP state. Using different axes can be used for readout with any Pauli operator. **(b)** Rotation gate \(R_{i}(\phi)\) in the \(i=x,y,z\) direction: using free-electron ancilla with \(g_{\rm Q}=\frac{a_{i}}{2}\) performs the gate controlled-Pauli (\(C\sigma_{i}\)) on the GKP state, with the electron being the control qubit. Then the electron is measured in the basis \(\left|\phi_{\pm}\right\rangle=\frac{1}{\sqrt{2}}\biggl{(}e^{\frac{i\phi}{2}}|0 \rangle_{\rm e}\pm e^{-\frac{i\phi}{2}}|1\rangle_{\rm e}\biggr{)}\), using the unitary \(U(\phi)\). For feedforward, if the measurement result is \(|\phi_{-}\rangle\), the Pauli \(\sigma_{i}\) gate is applied. **(c)**\(T\) gate: example of rotation gate with \(\phi=\frac{\pi}{4}\) and \(i=z\).
**Altogether, the free-electron qubit ancilla enables the operation of unitary gates, stabilizers, and readouts on the GKP states. These building blocks enable universal quantum computation and error correction [13] using the fundamental free-electron-photon interaction, which can be implemented and controlled in ultrafast electron microscopes [19, 22, 31, 33, 36, 60].**
## 4 The free electron as a flying qubit: creation of GHZ and cluster states
The unique property of a free-electron ancilla as a flying qubit is that a single electron can be used for entangling multiple GKP states. The protocol for a \(\text{CNOT}_{\text{ph1}\rightarrow\text{ph2}}\) gate between two GKP states in two separated photonic modes is described in Fig. 3a, where one electron qubit interacts with two GKP states. The electron starts in the state \(\ket{0}_{\text{e}}\) and interacts with the first GKP state with \(g_{\text{Q}}=\frac{a_{x}}{4}\), then changes the basis using a Hadamard gate (\(H_{\text{e}}\)) on the electron (see Appendix A5), and then interacts with the second GKP state with \(g_{\text{Q}}=\frac{a_{x}}{4}\). The last step of the protocol for \(\text{CNOT}_{\text{ph1}\rightarrow\text{ph2}}\) uses feedforward: the electron is measured, and if the measurement result is \(\ket{0}_{\text{e}}\), then nothing is applied; but if the measurement result is \(\ket{1}_{\text{e}}\), then a Pauli \(\sigma_{x}\) gate is applied to one of the GKP states. The \(\text{CNOT}_{\text{ph1}\rightarrow\text{ph2}}\) and the universal set of one qubit gates shown in the previous chapter are sufficient for universal quantum computing [15].**
The maximally entangled GHZ state can be produced using one electron qubit interacting with multiple photonic GKP states. Each interaction is a \(\text{CNOT}_{\text{e}\rightarrow\text{ph}}\), which can be implemented with \(g_{\text{Q}}=a_{x}/4\), as presented in Eq. (5). In the final step of creating the GHZ state, the Hadamard
| Operation | Coupling \(g_{\text{Q}}\) | Electron initial state | Feedforward |
| --- | --- | --- | --- |
| **Pauli gates** \(\sigma_{i}\) | \(a_{i}/2\) | \(\ket{+}_{\text{e}}\) | no |
| **Readout in \(i\) basis** | \(a_{i}/4\) | \(\ket{0}_{\text{e}}\) | no |
| **Rotation** \(R_{i}(\phi)\) | \(a_{i}/4\) | \(\ket{0}_{\text{e}}\) | If \(\ket{\phi_{+}}_{\text{e}}\) is measured - none; if \(\ket{\phi_{-}}_{\text{e}}\) is measured - \(\sigma_{i}\) gate |
| **\(\text{CNOT}_{\text{ph1}\rightarrow\text{ph2}}\)** | \(g_{\text{Q},1}=\frac{a_{x}}{4},\ g_{\text{Q},2}=\frac{a_{x}}{4}\) | \(\ket{0}_{\text{e}}\) | If \(\ket{0}_{\text{e}}\) is measured - none; if \(\ket{1}_{\text{e}}\) is measured - \(\sigma_{x}\) gate |
Table 1: **Operations on the photonic state created by a free-electron ancilla for universal quantum computation. Row 1 describes the coupling constant and electron state needed for creating Pauli gates \(\sigma_{i}\) on the GKP state. Row 2 describes how to use the electron qubit for the readout of the GKP state. Row 3 is the rotation gate \(R_{i}\) by angle \(\phi\), created using a teleported gate with feedforward. Row 4 shows how to use one electron qubit to create a \(CNOT\) gate between two GKP states in different photonic modes.**
gate is applied to the electron. The electron is then measured to disentangle it from the GKP states. Ultimately, a \(D(-a_{x}/4)\) correction should be applied by using one \(|+\rangle_{\text{e}}\) electron with \(g_{\text{Q}}=-\frac{a_{x}}{4}\) interacting with all the GKP states to displace it back to the center of the phase space. The procedure is shown in Fig. 3b. This scheme of GHZ states creation can be realized using photonic cavities or a waveguide, as shown in Fig. 3c. For the cavities approach, the distance between the cavities should be designed such that the electron will be phase matched with all modes. For the waveguide approach, the distance between the interaction points must match the electron's path such that the electron will interact with the GKP states. Also, the GKP states must be all phase-matched to the electron [62].
The prospects of free-electron flying qubits include the potential to create the cluster states needed for measurement-based quantum computation schemes. In recent years, much effort has been invested in measurement-based photonic quantum computation, specifically in the optical range. Such schemes require the efficient generation of photonic states and their entanglement into cluster states [16]. Clusters of GKP states [61] are especially desirable because GKP states are robust against photon loss errors [1], and can be easily measured in a different basis with the same operation, as shown in Fig. 2.
Following [63], we can add appropriate propagation distances between the subsequent interactions in the GHZ-creation scheme (shown in the previous section) to add single qubit rotation on the
Figure 3: **Creation of multiqubit entanglement using free electrons.****(a)**\(\text{CNOT}_{\text{ph1}\to\text{ph2}}\) gate between two GKP states. **(b)** The scheme for generating a GHZ state of three GKP states. **(c)** Two approaches for implementing the GHZ state: stationary GKP states in cavities (left) and propagating GKP states in a waveguide (right).
electron and create a 1D cluster of GKP states using a single electron (Fig. 4a). Additionally, combining multiple electron channels can create 2D and potentially higher dimensional cluster states, as shown in Fig. 4b. These higher dimensional schemes are based on the protocol presented in [64] (further discussed in Appendix D2.2). Consequently, free-electron interactions can be used as a building block in measurement-based photonic quantum computation schemes.
## 5 Discussion and outlook
In summary, this paper demonstrates how the coherent interaction between free electrons and GKP states enables projective measurements and universal control over the GKP states. This paper also demonstrates how the interaction of multiple GKP states with the same electron enables the creation of highly entangled states such as GHZ and cluster states. The key to these possibilities is the creation of \(CD\) based on the electron interaction. The electron-photon interaction thus reproduces other protocols for GKP state generation in superconducting qubits [13] and ion traps [12]. Going beyond these demonstrations, the free-electron implementation provides additional degrees of freedom to the interaction due to the intrinsic nature of the free electron as a flying qubit.
Although the free electron ancilla qubit provides similar abilities to the circuit QED, trapped ions, and cavity QED counterparts [11-14], the most significant difference between them is that the free electron ancilla is a flying qubit. This allows for high connectivity between the electron and multiple _spatially separated_ photonic modes. This fact opens options unavailable in
Figure 4: **Using flying qubits for the generation of cluster states.****(a)** A scheme for generating a 1D cluster state of GKP states in a photonic waveguide. Quantum circuit description of the proposed scheme (left) and a possible physical scheme using propagating GKP states in a waveguide (right). **(b)** Generation of 2D cluster states of GKP states. A possible implementation using a waveguide, two free-electron sources, and a delay (left). Visualization of the resulting 2D cluster state (right).
other systems, such as the generation of highly entangled GHZ and cluster states with only one ancilla electron (rather than multiple ancilla qubits [65, 66], or multimode coupling [67], which further limits the coherence times and exponentially complicates the physical realization). The flying qubit nature of the electron also implies that it interacts with the GKP state only for a short time (typically ps time scales [33, 36]). Therefore, the coherence time of the photonic qubit is not significantly reduced by the free-electron interaction (unlike the case of interaction with ancilla qubits in circuit QED [68]).
It is also interesting to compare the interaction of free-electron qubits with GKP states to other schemes that can be realized in the optical range, such as the beam-splitter interaction of optical cat-states [10, 17]. A significant difference is that the creation of free-electron qubits bypasses the need for nonlinear components, as opposed to the creation of cat states. Another advantage of free-electron-based schemes compared to optical ones arise from developments in fast electron counting detectors (direct detection schemes) [69]. Since free electrons are energetic particles, it is easier to achieve number-resolved electron detection than a similar detection with photons.
Looking forward, the free-electron qubit schemes we presented can be generalized to multi-level qudits by changing the electron comb energy gap from \(2\hbar\omega\) to \(N\cdot\hbar\omega\), where \(N\) is an integer corresponding to the number of desired levels [17, 29]. Such electron states will be analogous to \(N\)-legged cat states, which are extremely difficult to generate in optics and can provide additional degrees of freedom that can be exploited for generating and controlling GKP states. This research direction facilitates the tunability of free electrons to provide degrees of freedom that fundamentally differ from their circuit QED or trapped-ions counterparts. |
2308.07143 | Long-range Ising spins models emerging from frustrated Josephson
junctions arrays with topological constraints | Geometrical frustration in correlated systems can give rise to a plethora of
novel ordered states and intriguing phases. Here, we analyze theoretically
vertex-sharing frustrated Kagome lattice of Josephson junctions and identify
various classical and quantum phases. The frustration is provided by
periodically arranged $0$- and $\pi$- Josephson junctions. In the frustrated
regime the macroscopic phases are composed of different patterns of
vortex/antivortex penetrating each basic element of the Kagome lattice, i.e., a
superconducting triangle interrupted by three Josephson junctions. We obtain
that numerous topological constraints, related to the flux quantization in any
hexagon loop, lead to highly anisotropic and long-range interaction between
well separated vortices (antivortices). Taking into account this interaction
and a possibility of macroscopic "tunneling" between vortex and antivortex in
single superconducting triangles we derive an effective Ising-type spin
Hamiltonian with strongly anisotropic long-range interaction. In the
classically frustrated regime we calculate numerically the
temperature-dependent spatially averaged spins polarization, $\overline{m}(T)$,
characterizing the crossover between the ordered and disordered
vortex/antivortex states. In the coherent quantum regime we analyze the lifting
of the degeneracy of the ground state and the appearance of the highly
entangled states. | Oliver Neyenhuys, Mikhail V. Fistul, Ilya M. Eremin | 2023-08-14T13:49:16Z | http://arxiv.org/abs/2308.07143v2 | Long-range Ising spins models emerging from frustrated Josephson junctions arrays with topological constraints
###### Abstract
Geometrical frustration in correlated systems can give rise to a plethora of novel ordered states and intriguing phases. Here, we analyze theoretically vertex-sharing frustrated Kagome lattice of Josephson junctions and identify various classical and quantum phases. The frustration is provided by periodically arranged 0- and \(\pi\)- Josephson junctions. In the frustrated regime the macroscopic phases are composed of different patterns of vortex/antivortex penetrating each basic element of the Kagome lattice, i.e., a superconducting triangle interrupted by three Josephson junctions. We obtain that numerous topological constraints, related to the flux quantization in any hexagon loop, lead to highly anisotropic and long-range interaction between well separated vortices (antivortices). Taking into account this interaction and a possibility of macroscopic "tunneling" between vortex and antivortex in single superconducting triangles we derive an effective Ising-type spin Hamiltonian with strongly anisotropic long-range interaction. In the classically frustrated regime we calculate numerically the temperature-dependent spatially averaged spins polarization, \(\overline{m}(T)\), characterizing the crossover between the ordered and disordered vortex/antivortex states. In the coherent quantum regime we analyze the lifting of the degeneracy of the ground state and the appearance of the highly entangled states.
## I Introduction
The collective behavior of the low-energy magnetic excitations crucially depends on the geometry of the lattice they inhabit. For example, antiferromagnetically interacting spins on a square lattice form a Neel order with antialigned neighbours. At the same time, their mutual antiparallel alignment cannot be satisfied on triangular or kagome lattices, which are the most typical models featuring geometric frustration and yielding non-trivial spin order [1; 2; 3; 4; 5; 6; 7]. The frustration can also be provided by the competition between interactions of alternating signs [1; 8], e.g., ferromagnetic and antiferromagnetic ones, in addition to a special geometry of the lattice. Typical consequences of frustration are a highly degenerate ground state, a large number of low-lying metastable states, and long relaxation times at low temperatures [2; 4; 9].
Apart from natural solid-state systems that display a rich plethora of interesting physics due to underlying frustration, such as iron-based superconductors [10; 11], frustrated ferromagnetic chains [12], Kagome magnets [6; 13; 14; 15; 16; 17] and superconductors [18; 19; 20; 21], special attention has been attracted by artificially prepared systems such as trapped-ion simulators [22], photonic crystals [23; 24], two-dimensional arrays of Rydberg atoms [25; 26; 27], anisotropic optical lattices [28] and Josephson junction networks [29; 30; 31; 32; 33; 34], owing to the more efficient way to tune the frustration parameter.
The latter system, coined frustrated Josephson junction arrays (_f-JJA_s), is of special interest since current technology allows one to form _f-JJA_s of various geometries and sizes as well as to tune the frustration by an externally applied magnetic field [35; 36; 30]. Furthermore, the physics of _f-JJA_s can be mapped onto different non-integrable quasi-\(1D/2D\) Ising or \(X\)-\(Y\) spin models, and therefore, such arrays can provide a feasible experimental platform to establish _analog quantum simulations_ in the fields of quantum chemistry, quantum biology and low-dimensional material science [37; 38].
It is known that _f-JJA_s display the non-frustrated and the frustrated regimes, characterized by a unique and a highly degenerate ground state, respectively. In the frustrated regime a variety of complex ground states, such as checkerboard and ribbon distributions of vortices [29; 39], stripe phases [40] and so on, as well as sharp transitions between these magnetic patterns as the external magnetic field varies, were observed in _f-JJA_s on square and triangular lattices.
A special type of _f-JJA_s is _vertex-sharing_ lattices, in which each site is shared between two neighboring triangles, e.g., quasi-\(1D\) sawtooth and diamond chains [35; 30; 41; 42], and the two-dimensional Kagome lattice [7; 43]. In the frustrated regime of such _f-JJA_s a vortex/antivortex penetrates each single superconducting triangle, and various distributions of vortices/antivortices can be realized. The vortex (antivortex) states correspond to anticlockwise (clockwise) persistent currents flowing in a single triangle.
The classical frustrated regime of sawtooth and diamond chains of Josephson junctions has been previously studied theoretically in Ref. [42], where a _disordered_ state of vortices/antivortices was obtained. The lack of ordering in the distribution of vortices/antivortices in such quasi-\(1D\)_f-JJA_s was due to the absence of interaction between vortices/antivortices of different cells. At the same time, for _f-JJA_s based on the Kagome lattice, highly anisotropic distributions of vortices/antivortices forming the ground state have also been predicted [43]. What, however, remains unclear is what type of interaction can lead to the formation of such ordered anisotropic patterns and how the
2308.08270 | Towards Benchmarking Power-Performance Characteristics of Federated
Learning Clients | Federated Learning (FL) is a decentralized machine learning approach where
local models are trained on distributed clients, allowing privacy-preserving
collaboration by sharing model updates instead of raw data. However, the added
communication overhead and increased training time caused by heterogenous data
distributions results in higher energy consumption and carbon emissions for
achieving similar model performance than traditional machine learning. At the
same time, efficient usage of available energy is an important requirement for
battery constrained devices. Because of this, many different approaches on
energy-efficient and carbon-efficient FL scheduling and client selection have
been published in recent years. However, most of this research oversimplifies
power performance characteristics of clients by assuming that they always
require the same amount of energy per processed sample throughout training.
This overlooks real-world effects arising from operating devices under
different power modes or the side effects of running other workloads in
parallel. In this work, we take a first look on the impact of such factors and
discuss how better power-performance estimates can improve energy-efficient and
carbon-efficient FL scheduling. | Pratik Agrawal, Philipp Wiesner, Odej Kao | 2023-08-16T10:17:42Z | http://arxiv.org/abs/2308.08270v1 | # Towards Benchmarking Power-Performance Characteristics of Federated Learning Clients
###### Abstract
Federated Learning (FL) is a decentralized machine learning approach where local models are trained on distributed clients, allowing privacy-preserving collaboration by sharing model updates instead of raw data. However, the added communication overhead and increased training time caused by heterogenous data distributions results in higher energy consumption and carbon emissions for achieving similar model performance than traditional machine learning. At the same time, efficient usage of available energy is an important requirement for battery-constrained devices. Because of this, many different approaches on energy-efficient and carbon-efficient FL scheduling and client selection have been published in recent years.
However, most of this research oversimplifies power-performance characteristics of clients by assuming that they always require the same amount of energy per processed sample throughout training. This overlooks real-world effects arising from operating devices under different power modes or the side effects of running other workloads in parallel. In this work, we take a first look on the impact of such factors and discuss how better power-performance estimates can improve energy-efficient and carbon-efficient FL scheduling.
Federated Learning, Energy Efficiency, Carbon Awareness, Battery-Powered Devices, Edge AI, IoT
## I Introduction
While FL [1] mitigates the privacy and data transfer overhead issues associated with centralized ML training, it comes with a different set of challenges. Studies [2] have shown that FL consumes more energy and emits significantly more carbon to reach the same model performance as centrally trained ML models.
The regular usage of Internet of Things (IoT) and edge devices to train machine learning models in distributed and federated learning settings has further worsened the energy and environmental implications of AI/ML training. Moreover, regulators responsible for AI policy also explicitly put emphasis on the sustainability and environmental aspects of AI1. Additionally, edge devices are battery-powered and operate under strict energy budgets. To manage efficient usage of this limited battery power for both non-FL baseloads and FL training, it is necessary to profile the power-performance characteristics of FL training under realistic scenarios.
Footnote 1: [https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai](https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai)
To improve efficiency, researchers have proposed energy-efficient [3, 4, 5] and carbon-efficient [6, 7] approaches for FL scheduling and client selection. FL has also seen a vast amount of benchmarking work [8] in recent years. However, these benchmarks often miss the critical energy consumption metrics of FL and are predominantly explored in server-based simulated environments. Moreover, the benchmarks that do include energy consumption metrics lack granular energy consumption data under real-world impact factors such as non-FL baseloads and power modes of the devices. These approaches [6] rely on analytical energy models that are based on metrics such as CPU compute cycles or Floating Point Operations Per Second (FLOPS) to calculate the energy required per batch. These models do not accurately take into account factors such as power modes and baseloads of the underlying FL device.
## II Problem Statement
Current FL studies oversimplify the power consumption characteristics of FL clients. In comparison to the simulation settings employed in FL studies, realistic on-device FL training exhibits complicated operational behavior patterns when it comes to energy consumption. For example, edge devices with and without hardware accelerators (GPUs or TPUs) have different per-sample energy requirements throughout training. Furthermore, edge devices are usually executing workloads (i.e., baseloads) such as inference or time-series data streaming, which affect the energy-per-sample performance. These complex operational behavior patterns warrant a need for better energy management and energy-efficient FL. Client selection and scheduling are complex problems in FL, given the heterogeneous nature (hardware resources such as battery, compute, memory) of FL clients. Therefore, to enable better energy-efficient and carbon-efficient FL scheduling, metrics such as energy per sample and samples per second are necessary. Current FL studies assume energy availability budgets and do not consider energy-per-sample performance under the influence of power modes or baseloads of the FL clients [3, 4, 5, 6, 7].
## III Preliminary Insights
To gain some preliminary insights into the variation of power-performance of FL clients, we evaluate an FL training under different baseloads and power modes of a Raspberry Pi.
For the Raspberry Pi, we evaluated three power modes: Performance, Powersave, and Ondemand. To simulate a baseload on an FL device, we utilized the Unix command _sysbench_2. It provides the flexibility to simulate a baseload in terms of the number of threads/CPU cores. For the FL training we utilized the Flower framework3. We simulate a computer vision IoT training task using the CIFAR-10 dataset4 and the computer vision model SqueezeNet, which is lightweight and deemed suitable for edge computer vision applications. We assign a higher kernel priority to the baseload process to ensure the FL training does not affect the CPU time of the baseload in the co-running scenario. For the energy consumption measurement, we utilized a WLAN power socket switch5. We report mean energy per sample and samples per second values for different power modes and CPU core baseloads.
Footnote 3: [https://flower.dev](https://flower.dev)
Footnote 4: [https://www.cs.toronto.edu/](https://www.cs.toronto.edu/) kriz/cifar.html
Footnote 5: [https://www.delock.com/produkt/11826/merkmale.html](https://www.delock.com/produkt/11826/merkmale.html)
We use the following notation: \(EPS\): Energy per Sample; \(P_{\text{total}}\): Total power consumption (FL and Baseload); \(P_{\text{BL}}\): Power consumption due to Baseload; \(N\): Number of Samples.
Energy per sample values were calculated using Eq. 1.
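The following minimal Python sketch illustrates one plausible way to obtain these metrics from such measurements; the baseload-corrected formula, the variable names, and the example numbers are our assumptions for illustration, not necessarily the exact form of Eq. 1.

```python
# Hedged sketch (our assumption of how Eq. 1 is applied): energy per sample is
# the FL-attributable energy, i.e., total energy minus the baseload share,
# divided by the number of samples processed in the measurement window.

def energy_per_sample(p_total_watts: float, p_baseload_watts: float,
                      duration_s: float, num_samples: int) -> float:
    """Joules per sample attributable to FL training.

    p_total_watts    -- mean power while FL and baseload co-run (P_total)
    p_baseload_watts -- mean power of the baseload alone (P_BL)
    duration_s       -- wall-clock duration of the measurement window
    num_samples      -- number of samples processed in the window (N)
    """
    fl_energy_joules = (p_total_watts - p_baseload_watts) * duration_s
    return fl_energy_joules / num_samples


def samples_per_second(num_samples: int, duration_s: float) -> float:
    return num_samples / duration_s


# Example: 6.2 W total, 3.1 W baseload, a 300 s window, 5000 samples.
print(energy_per_sample(6.2, 3.1, 300.0, 5000))  # ~0.186 J per sample
print(samples_per_second(5000, 300.0))           # ~16.7 samples per second
```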
Figure 1(a) illustrates the mean energy per sample and 95% confidence intervals for each power mode, based on 10 repeated measurements. We observe a significant difference in energy per sample and samples per second values when there is no non-FL baseload (0 baseload cores) compared to a scenario in which a non-FL baseload is executing and utilizing all CPU cores. We also observe that, while samples per second (Figure 1(b)) does not vary significantly when a non-FL baseload is co-running with FL, energy per sample values fluctuate for baseloads of 3 and 4 cores. For our experiments, the Ondemand mode with 3 baseload cores has the optimum energy usage for processing the same number of samples compared to the other baseload and samples per second combinations.
## IV Conclusion and Future Work
Recent research studies have focused on energy-efficient and carbon-efficient FL scheduling and client selection. However, most of this research assumes simplistic energy consumption models for the underlying FL clients. In this work, we showed how energy-per-sample values can vary under real-world scenarios, such as different power modes and non-FL baseloads on CPU cores, and exhibit complex operational behavior patterns.
For future work, the following open research questions and possibilities could be explored further:
* How do current FL systems communicate FL clients' energy-related information? How can energy per sample, throughput, and uncertainty-related information be collected at runtime?
* How can we predict the power-performance characteristics, what are the relevant metrics? With more data about real-world impact factors affecting energy footprint of edge devices, can we build predictive models for forecasting?
* How often do we need to measure before we can be certain? Can we report the uncertainty to be used in scheduling? FL trainings are usually executed multiple times due to data distribution drifts and hyperparameter search. This repetitive FL training execution could be leveraged to collect more data about power-performance behavior patterns of FL clients.
* What is the impact of hardware-accelerated edge devices such as the Jetson Nano on energy-related metrics? What are the energy efficiency opportunities in FL and non-FL co-running scenarios?
|
2310.12062 | On the use of Vision-Language models for Visual Sentiment Analysis: a
study on CLIP | This work presents a study on how to exploit the CLIP embedding space to
perform Visual Sentiment Analysis. We experiment with two architectures built
on top of the CLIP embedding space, which we denote by CLIP-E. We train the
CLIP-E models with WEBEmo, the largest publicly available and manually labeled
benchmark for Visual Sentiment Analysis, and perform two sets of experiments.
First, we test on WEBEmo and compare the CLIP-E architectures with
state-of-the-art (SOTA) models and with CLIP Zero-Shot. Second, we perform
cross dataset evaluation, and test the CLIP-E architectures trained with WEBEmo
on other Visual Sentiment Analysis benchmarks. Our results show that the CLIP-E
approaches outperform SOTA models in WEBEmo fine grained categorization, and
they also generalize better when tested on datasets that have not been seen
during training. Interestingly, we observed that for the FI dataset, CLIP
Zero-Shot produces better accuracies than SOTA models and CLIP-E trained on
WEBEmo. These results motivate several questions that we discuss in this paper,
such as how we should design new benchmarks and evaluate Visual Sentiment
Analysis, and whether we should keep designing tailored Deep Learning models
for Visual Sentiment Analysis or focus our efforts on better using the
knowledge encoded in large vision-language models such as CLIP for this task. | Cristina Bustos, Carles Civit, Brian Du, Albert Sole-Ribalta, Agata Lapedriza | 2023-10-18T15:50:48Z | http://arxiv.org/abs/2310.12062v1 | # On the use of Vision-Language models for Visual Sentiment Analysis: a study on CLIP
###### Abstract
This work presents a study on how to exploit the CLIP embedding space to perform Visual Sentiment Analysis. We experiment with two architectures built on top of the CLIP embedding space, which we denote by CLIP-E. We train the CLIP-E models with WEBEmo, the largest publicly available and manually labeled benchmark for Visual Sentiment Analysis, and perform two sets of experiments. First, we test on WEBEmo and compare the CLIP-E architectures with state-of-the-art (SOTA) models and with CLIP Zero-Shot. Second, we perform cross-dataset evaluation, and test the CLIP-E architectures trained with WEBEmo on other Visual Sentiment Analysis benchmarks. Our results show that the CLIP-E approaches outperform SOTA models in WEBEmo fine-grained categorization, and they also generalize better when tested on datasets that have not been seen during training. Interestingly, we observed that for the FI dataset, CLIP Zero-Shot produces better accuracies than SOTA models and CLIP-E trained on WEBEmo. These results motivate several questions that we discuss in this paper, such as how we should design new benchmarks and evaluate Visual Sentiment Analysis, and whether we should keep designing tailored Deep Learning models for Visual Sentiment Analysis or focus our efforts on better using the knowledge encoded in large vision-language models such as CLIP for this task. Our code is available at [https://github.com/cristinabustos16/CLIP-E](https://github.com/cristinabustos16/CLIP-E).
visual sentiment analysis, vision-language models, zero-shot classification
## I Introduction
Visual Sentiment Analysis studies how to automatically recognize affect in visual content. The interest for recognizing affect in images has significantly increased in the past few years, due to the popularity of social media and the multiple applications of this topic in areas like opinion mining [1], affective image retrieval [2], education [3], mental health [4], or hate speech analysis [5]. Additionally, feature representations of image sentiment have been shown to be meaningful for emotion recognition in scene context [6].
In this work we study how to use the CLIP embedding space for Visual Sentiment Analysis. We experiment with two simple Deep Learning (DL) architectures built on top of the CLIP embedding space with two different loss functions: Cross-Entropy loss and Contrastive loss. We compare the performance of these CLIP-based approaches (denoted by CLIP-E, from CLIP Emotions) with state-of-the-art (SOTA) models and Zero-Shot CLIP. For our experiments we use the WEBEmo dataset [7], which is the largest public and manually annotated visual sentiment analysis benchmark. Interestingly, we observe that the CLIP-E approaches obtain competitive results when compared with SOTA models on the binary valence problem (positive vs. negative emotion), while they significantly outperform SOTA models in the finest-grained emotion classification. Additionally, CLIP-E with Contrastive loss provides a flexible model for Visual Sentiment Analysis, able to deal with different emotion taxonomies. The CLIP-E architectures are illustrated in Fig. 1 and explained in Sect. III.
Our motivation to build on top of the CLIP embedding space for Visual Sentiment Analysis is supported by several works showing that CLIP encodes affect-related information. In particular, some works have reported high accuracy results for CLIP Zero-Shot classification on Facial Emotion Recognition [8] or Visual Sentiment Analysis [9], while others show that emotional concepts, such as happiness or surprise, are encoded in the CLIP abstract visual features [10]. Our results support the idea that the rich visual-semantic representation learned by the CLIP embedding space already contains meaningful general knowledge for effective Visual Sentiment Analysis.
In this work we also study the trade-off between specialization and generalization through cross-dataset evaluation. We train the CLIP-E approaches with WEBEmo and then we test them on other popular visual sentiment benchmarks. We observe that models obtaining the highest accuracies on
Fig. 1: CLIP-E architectures. (A) CLIP-E with Cross-Entropy loss; (B) CLIP-E with Contrastive loss. In these architectures the CLIP Image Encoder and the CLIP Text Encoder are frozen.
the WEBEmo test set, perform more poorly when tested on other datasets (for example, the CLIP-E approaches and CLIP Zero-Shot outperform the curriculum learning approach from Panda et al. [7] when it is learned on WEBEmo and tested on the Emotion-6 binary benchmark). Furthermore, the CLIP-E approaches show an interesting balance between specialization and generalization (for instance, CLIP-E Cross-Entropy achieves SOTA results on the WEBEmo 6-category and 25-category benchmarks, while it also achieves the best or second-best cross-dataset results in 4 out of 5 benchmarks). Surprisingly, CLIP Zero-Shot outperforms all the models trained on WEBEmo when they are tested on the FI [11] dataset (the second largest manually annotated public benchmark for image sentiment analysis).
Our results motivate interesting observations and questions that we discuss in Sect. V. For example, we observe that often the results provided by different methods correspond to different but reasonable interpretations of the same input. This motivates questioning the common multi-class approach for Visual Sentiment Analysis, which assumes that there is just one acceptable ground truth category for each input. We suggest that future efforts for collecting data should consider collecting multiple labels per sample, and approach the Visual Sentiment Analysis problem from a multi-label perspective. Also, we observe that customized Deep Learning architectures specifically designed and trained for Visual Sentiment Analysis might have less generalization capacity, particularly when compared to models that build on top of the knowledge encoded in large vision-language models. This motivates further explorations on how to leverage knowledge from vision-language models for Visual Sentiment Analysis and, more generally, for other visual affect recognition tasks.
In summary, we empirically show that it is possible to leverage the CLIP embedding space for visual sentiment analysis. In particular, the simple CLIP-E architectures, trained on top of the CLIP embedding space, are able to outperform models that are explicitly designed for the task of Visual Sentiment Analysis in fine-grained classification benchmarks. Additionally, the CLIP-E architecture with Contrastive loss results in a flexible model for visual sentiment analysis that can deal with the different emotion taxonomies used in different datasets. Finally, the affect knowledge embedded in the CLIP space allows the CLIP-E architectures to generalize better, as shown in our cross-dataset evaluations.
## II Related work
In general, Visual Sentiment Analysis is approached with supervised learning. The first studies were conducted on small datasets, and the proposed methods were based on extracting hand-crafted features from the images [12, 13, 14]. Later, some works focused on automatically detecting a large set of mid-level concepts or attributes in the images, and then training a machine learning model on top of this mid-level representation [15]. After that, the availability of larger-scale affective datasets [16, 17, 7, 11] allowed the exploration of end-to-end DL methods, and multiple works showed the superior performance of DL methods with respect to the use of hand-crafted features [18, 19, 17]. Researchers have been exploring different approaches to improve these end-to-end DL models. Some works study the use of attention mechanisms to enforce the model to focus on specific regions of the image [20]. Based on this idea, Zhang and Xu [21] proposed a model that integrates a prediction of emotion intensity maps during the learning process. Concretely, the network has a first classification stream, followed by an emotion intensity prediction stream that uses Class Activation Map [22] to guide the emotion intensity learning. Then, the predicted intensity map is integrated into the final classification stream. At the same time, Panda et al. [7] explored the advantages of curriculum learning, empirically showed the effectiveness of a guided training strategy that exploits the hierarchical structure of emotions by learning the easiest task first (binary classification, positive vs. negative emotion), and then gradually learning to recognize finer grained emotion categories. Also, other interesting recent works explore the use of image and emoji pairs obtained from social networks as weak labels to train the Visual Sentiment Analysis models [23, 24]. While the accuracies are quite remarkable, the results are not as good as the ones obtained with manually labelled images.
Some works have explored the use of external knowledge and semantic representations to bridge the gap between the pixel level representations and the affective content. For example, Zhan et al. [25] created an affective embedding space using Sentibank's ANPs and visual features in parallel, which also allowed them to perform zero-shot sentiment analysis. Later, Wei et al. [26] introduced a set of 690 emotion words and searched the web for 1M related images to build the StockEmotion dataset. With it, the authors built a multi-modal pipeline with 3 inputs: emotion words associated with the image searched, the image, and other non-emotion text adjacent to the image when it was found. They trained the model and segregated the image feature extraction portion of the network. When testing against other datasets, they integrated the image feature extractor into a classifier and fine tuned it to the task. Later, Ortis et al. [27] used a two branch network for the semantic and visual streams. They extracted text detections from the image, through the use of four image descriptor models and post-process the resulting texts into a BoW weighted with the predominant score. In parallel, they use a pretrained CNN trained on Sentibank to extract the feature layer. With this, they apply an SVM classifier to both outputs to obtain the resulting emotion.
More recently, Xu et al. presented MDAN [28], a multi-level attention network that exploits a similar concept by incorporating the insight of the hierarchical relationship among emotions at different affective levels into the model. More concretely, the model has two branches: a bottom-up branch that focuses on avoiding hierarchy violation (an image cannot be _happy_ (fine-grained emotion) and _negative_ (binary)); and a top-down branch that focuses on learning emotions at a high level, thus benefiting the learning of finer-grained emotions (it is easier to infer that an image is _happy_, if it is first known that
it is _positive_). Currently, MDAN is the method that obtains the state-of-the-art results on WEBEmo.
### _CLIP for Affect Recognition Tasks_
CLIP [8] is a popular and widely used large Vision-Language model, trained on millions of \(image-text\) pairs (where \(text\) is an image caption) using a contrastive learning approach. In short, CLIP has an image encoder and a text encoder, and the model is trained to embed \(image\) and \(text\) that belong together into similar points (according to cosine similarity), while pushing away the embeddings of \(image\) and \(text\) pairs that do not belong together. Then, CLIP can be used to perform zero-shot image classification by projecting images and text prompts into the embedding space, and classifying each image with the prompt whose embedding has the highest similarity with respect to the embedded image.
Zero-Shot classification with CLIP performs remarkably well in a large variety of tasks [8]. In particular, a few works explicitly study the capacity of CLIP Zero-Shot for affect-related tasks. Actually, the authors of CLIP already showed very interesting results on Facial Emotion Recognition. Also, Bodinelly et al. [9] reported remarkable results with zero-shot CLIP and fine-tuned CLIP on Image-Emotion [11], a collection of 3+ million images, weakly labeled on 8 emotion categories. More recently, Deng et al. [29] presented SimEmotion, a model that combines the CLIP vision and language features for Visual Sentiment Analysis. The architecture combines cosine similarity (to supervise \(image\) and \(text\) pairs) and cross-entropy loss (to supervise the image emotion categorization). In their case the whole image embedding is fine-tuned. Their approach also leverages the CLIP embedding space for Visual Sentiment Analysis, but it requires much more training than the CLIP-E approaches. Furthermore, SimEmotion relies on Cross-Entropy to classify new unseen images, meaning that the model cannot be tested on taxonomies that differ from the taxonomy used during training (for example, if the model is trained on a taxonomy with \(6\) emotion categories, the model cannot be directly used for testing on a finer-grained categorization with more than \(6\) emotion categories). Despite this, SimEmotion is a very interesting architecture. Unfortunately, the authors did not release their code, their models, or the data partitions used in their experiments, which makes it impossible to compare our computationally less expensive CLIP-E approaches with SimEmotion.
## III Visual sentiment analysis with CLIP-E
Our goal is to leverage the knowledge that CLIP has learned to perform Visual Sentiment Analysis with minimal training effort. For that, we train two simple DL architectures, denoted by CLIP-E, on top of the CLIP embedding space. One of the CLIP-E architectures is trained with Cross-Entropy loss (CLIP-E Cross-Entropy) and the other is trained with Contrastive loss (CLIP-E Contrastive). The architectures are illustrated in Fig. 1.
**CLIP-E Cross-Entropy**. For the CLIP-E Cross-Entropy (Fig.1.A), we just use the CLIP image encoder with two additional fully connected layers, and then we use cross-entropy loss on emotion categories to train the two additional fully connected layers (i.e. the CLIP image encoder is frozen).
**CLIP-E Contrastive**. For the CLIP-E Contrastive (Fig. 1.B), we add two fully connected layers on top of the image encoder to allow deformations of the embedding space, and then we use a Contrastive loss with cosine similarity over a collection of text prompts. During training, both the CLIP image and text encoders are frozen. For the CLIP-E Contrastive approach, we generate different image caption types for an image \(I\): \(SC\), which refers to the sentiment caption; \(SSC\), which refers to the sentiment synonym captions; and \(IC\), which refers to the image caption. Concretely, given an image \(I\) with its respective sentiment class \(C\), we use for \(SC\) the same sentiment prompting pattern as in [29]. Thus, \(SC\) is the prompt "a photo that seems to express \(C\)" (e.g., "a photo that seems to express contentment"). For \(SSC\) we used the language generation model ChatGPT [30] to obtain the 5 most significant synonyms of class \(C\) (\(Cysn_{1},...,Cysn_{5}\))1. Then, \(SSC\) is the collection of prompts "a photo that seems to express \(Cysn_{i}\)", for i = 1,...,5. Finally, for \(IC\) we use the OFA image captioning model [31] to obtain a prompt that is a short description of the image. Fig. 1.B shows the \(SC\), \(IC\), and \(SSC_{m}\) prompts for the displayed input image.
Footnote 1: The complete list of synonyms is provided in the supplementary material
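To make the three caption types concrete, the sketch below assembles the SC, SSC, and IC prompts for one labeled image. The synonym list and the captioning helper are illustrative placeholders, since the actual synonyms come from ChatGPT and the captions from OFA.

```python
# Sketch of the prompt construction for CLIP-E Contrastive. The SYNONYMS
# dictionary and the image_caption helper are placeholders, not the actual
# outputs used in the paper.

SYNONYMS = {
    "contentment": ["satisfaction", "fulfillment", "serenity", "gratification", "peace"],
}

def sentiment_caption(label: str) -> str:
    # SC: the prompting pattern described in the text.
    return f"a photo that seems to express {label}"

def sentiment_synonym_captions(label: str) -> list:
    # SSC: one prompt per synonym of the ground-truth class.
    return [sentiment_caption(syn) for syn in SYNONYMS.get(label, [])]

def image_caption(image_path: str) -> str:
    # IC: stand-in for an image-captioning model such as OFA.
    return "a photo that contains a kid sledding in snow at the mountain"

def build_prompts(image_path: str, label: str) -> list:
    return [sentiment_caption(label), image_caption(image_path)] + \
        sentiment_synonym_captions(label)

print(build_prompts("example.jpg", "contentment"))
```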
### _Training and inference details for CLIP-E_
**CLIP-E Cross-Entropy**. For the CLIP-E Cross-Entropy architecture we add, on top of the CLIP image embedding, one fully connected layer of 512 units followed by the prediction layer. These two layers are trained with the well-known Cross-Entropy loss (the CLIP image embedding is frozen). In this case the inference for an unseen image \(I\) is straightforward: the image is classified as the class with the highest probability. Notice that inference with CLIP-E Cross-Entropy can only be made with the same taxonomy used for training.
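For reference, a minimal PyTorch sketch of this head is given below; the layer sizes follow the text, while the intermediate activation, the CLIP model variant, and the preprocessing are assumptions.

```python
import torch
import torch.nn as nn

class ClipECrossEntropy(nn.Module):
    """Frozen CLIP image encoder plus two trainable fully connected layers."""

    def __init__(self, clip_image_encoder: nn.Module, embed_dim: int, num_classes: int):
        super().__init__()
        self.encoder = clip_image_encoder
        for p in self.encoder.parameters():   # the CLIP encoder stays frozen
            p.requires_grad = False
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 512),
            nn.ReLU(),                        # activation between layers is assumed
            nn.Linear(512, num_classes),      # prediction layer
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            features = self.encoder(images)
        return self.head(features)            # optimized with nn.CrossEntropyLoss
```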
**CLIP-E Contrastive**. For the CLIP-E Contrastive architecture we added a fully connected layer with 512 units and ReLU activation on top of the CLIP image encoder, followed by a fully connected layer with 512 units and a linear activation function before the loss function. The linear activation function was chosen in order not to lose the negative activations, because the original trained CLIP embedding space has both negative and positive values. To train the model we use the same contrastive loss function used in the original CLIP approach [8]. Concretely, given a batch of N \(image\)-\(text\) pairs ((\(I_{i},T_{i}\)), \(i=1,...,N\)), we compute \(L_{img}\) and \(L_{text}\) as follows
\[L_{img}=\frac{1}{N}\sum_{i=1}^{N}\Big{[}-\log\frac{\exp(\langle I_{ei},T_{ei} \rangle)}{\sum_{j=1}^{N}\exp(\langle I_{ei},T_{ej}\rangle)}\Big{]} \tag{1}\]
\[L_{text}=\frac{1}{N}\sum_{i=1}^{N}\Big{[}-\log\frac{\exp(\langle T_{ei},I_{ei} \rangle)}{\sum_{j=1}^{N}\exp(\langle T_{ei},I_{ej}\rangle)}\Big{]} \tag{2}\]
where \(\langle I_{ei},T_{ej}\rangle\) is the cosine similarity between the embedding vectors of the \(i\)-th image sample (\(I_{ei}\)) and the \(j\)-th text
sample (\(T_{ej}\)), and \(\langle I_{ei},T_{ei}\rangle\) is the cosine similarity between the \(i\)-th image sample and its corresponding text caption. Then, the Contrastive Loss (\(L_{CO}\)) is computed by
\[L_{CO}=\frac{L_{Img}+L_{text}}{2} \tag{3}\]
This loss encourages the model to maximize the similarity between the embedding vector of each sample and its text description, and minimize the similarity between the embedding vectors of each sample and the other samples in the batch.
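Written in code, the symmetric loss of Eqs. (1)-(3) reduces to two cross-entropy terms over a cosine-similarity matrix. The sketch below is a minimal version, assuming `img_emb` and `txt_emb` are the projected embeddings of a batch of paired samples; like the equations above, it uses no learnable temperature.

```python
import torch
import torch.nn.functional as F

def clip_e_contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
    """Symmetric contrastive loss of Eqs. (1)-(3).

    img_emb, txt_emb: (N, d) embeddings of N paired image/text samples.
    """
    img_emb = F.normalize(img_emb, dim=-1)           # cosine similarity via
    txt_emb = F.normalize(txt_emb, dim=-1)           # normalized dot products
    logits = img_emb @ txt_emb.t()                   # (N, N) similarity matrix
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    loss_img = F.cross_entropy(logits, targets)      # Eq. (1)
    loss_txt = F.cross_entropy(logits.t(), targets)  # Eq. (2)
    return (loss_img + loss_txt) / 2                 # Eq. (3)
```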
Once CLIP-E Contrastive is trained, we proceed as follows for inference on an unseen image \(I\). We create a list of image sentiment captions \(SC=SC_{1},SC_{2},...,SC_{M}\), where \(M\) is the number of sentiment classes in the supervised vision approach (including synonyms). Then, we compute the cosine similarity between the embedded image \(I_{e}\) and each of the embedded sentiment captions \(T_{SC_{m}}\),
\[\langle I_{e},T_{SC_{m}}\rangle=\frac{I_{e}\cdot T_{SC_{m}}}{\|I_{e}\|\|T_{SC_{ m}}\|} \tag{4}\]
and then we compute \(\hat{m}\) by
\[\hat{m}=\arg\max_{m=1,\ldots,M}\,\langle I_{e},T_{SC_{m}}\rangle \tag{5}\]
Finally, image \(I\) is classified as the class of the taxonomy that corresponds to the prompt \(SC_{\hat{m}}\). For example, let us suppose we are computing inference for an image using the 25-category taxonomy of WEBEmo. Then, imagine that the most similar prompt is 'a photo that seems to express positivity'. In this case the corresponding class for this prompt would be _positivity_, which is one of the 5 synonyms for the category _optimism_. Therefore, the image would be classified as _optimism_. Also, if we want to classify the image in binary valence, we could use the WEBEmo taxonomy and classify the image as _positive_, since _optimism_ belongs to the _positive_ valence cluster. Notice that inference with CLIP-E Contrastive can be made with any taxonomy, since we can use an arbitrary set of \(SC\) prompts during inference.
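The inference rule of Eqs. (4)-(5) amounts to a cosine-similarity argmax over an arbitrary prompt list, followed by mapping the winning prompt back to its class. The sketch below assumes the prompt embeddings and the prompt-to-class mapping (synonyms mapped back to their parent category) are provided.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def classify(image_embedding: torch.Tensor,
             prompt_embeddings: torch.Tensor,
             prompt_to_class: list) -> str:
    """Eqs. (4)-(5): pick the class whose prompt is most similar to the image.

    image_embedding   -- (d,) projected CLIP-E image embedding
    prompt_embeddings -- (M, d) CLIP text embeddings of the SC prompts
    prompt_to_class   -- class name for each of the M prompts, e.g.
                         "positivity" mapping back to "optimism"
    """
    sims = F.cosine_similarity(image_embedding.unsqueeze(0), prompt_embeddings, dim=-1)
    m_hat = int(torch.argmax(sims))
    return prompt_to_class[m_hat]
```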
Both CLIP-E models were trained for up to 15 epochs. The learning rate was initialized at 1e-3 and scheduled to decrease by a factor of 0.1 when the validation loss reached a plateau. We used the Adam optimizer, and training is stopped once the validation loss stops improving. The batch size for CLIP-E Cross-Entropy was 32 and for CLIP-E Contrastive it was 8.
## IV Experiments
We perform two sets of experiments. First, we train and test the CLIP-E approaches on WEBEmo [7], the largest publicly available dataset with manual annotations on visual sentiment. We compare the obtained results with SOTA models and with CLIP Zero-Shot. Second, we perform cross-dataset evaluation. Concretely, we train different models on WEBEmo and test on IAPS [32], Emotion-6 [7] (2 and 6 category benchmarks), EmotionROI [33] (2 and 6 category benchmarks), and FI [11] (2 and 8 category benchmarks). The next subsection provides a short description of the datasets used in our experiments.
### _Datasets_
In this work, we use WEBEmo [7] to train the models. WEBEmo is the largest manually annotated dataset for visual sentiment analysis and a popular benchmark. The dataset contains 268,000 images retrieved from the Internet. The images are labeled with Parrott's hierarchical model of primary, secondary and tertiary emotions (2 emotion polarity categories -positive/negative- branch out into 6 emotion categories, which, in turn, expand into 25 fine-grained emotion categories). The dataset was collected by crawling the internet using the 25 emotions as keywords, while duplicates and non-English-tagged images were removed.
In our cross-dataset experiments we use 4 different smaller scale datasets for evaluating the models, all of them manually labeled. The FI [11] is a collection of 23,308 images from Flickr and Instagram manually labeled on 8 emotion categories by Amazon Mechanical Turk workers using the sentiment taxonomy of Mikel's emotions [34]. These emotions are also grouped in two valence categories (positive vs. negative). The IAPS dataset [32] is composed of 1,282 images, labeled with binary valence. The Emotion-6 dataset [7] contains 8,350 images, and is labelled with 6 emotion categories. The categories are also grouped in two valence categories. Finally, the EmotionROI [33] is composed of 1,980 images, and has binary valence labels as well as 6 category labels.
### _Results on WEBEmo_
We trained the CLIP-E models on WEBEmo using the training and validation data partitions published by the authors of WEBEmo. The models are tested on the test partition, composed of approximately 53K images. Table I summarizes the results on the WEBEmo test set. The table includes \(random\) and \(most\)\(represented\)\(class\) as reference results. It then includes three SOTA models (Panda et al. [7], Zhang et al. [21] and MDAN [28]), CLIP Zero-Shot, and the CLIP-E approaches. We ran our models using 5 different seeds and the average standard deviation is 0.24 on a 0-100 scale.
We observe that both CLIP-E models achieve state-of-the-art accuracy on the fine-grained categories (6-category and 25-category benchmarks). Also, in binary classification the accuracy obtained by the CLIP-E models is equivalent and competitive with the SOTA counterparts MDAN [28] and Zhang et al. [21]. The advantage of our models is that we require minimal training effort, using simple deep learning architectures built on top of the frozen CLIP embeddings.
The confusion matrices obtained with each of the CLIP-E models are depicted in Fig. 2. We notice that CLIP-E Cross-Entropy achieves higher accuracies, but its confusion matrix shows more errors on underrepresented classes, such as _sympathy, exasperation_, or _disappointment_. We also observe confusions between classes that are semantically close, such as _contentment_ and _cheerfulness_. In contrast, we see fewer of these errors in the CLIP-E Contrastive confusion matrix.
In fine-grained classification, both CLIP-E models show higher performance in comparison with Zhang et al. [21]. Particularly, in categories like 'zest', 'shame', and 'relief', CLIP-E Contrastive obtains accuracies of 45%, 44% and 42%, respectively, while Zhang et al. [21] obtained 10%, 0% and 0% in the same classes.
**Ablation results for CLIP-E Contrastive**. Table I also shows ablation results for CLIP-E Contrastive (last 4 rows). Concretely, we tested the model using the different types of prompts separately. We trained the model only using SC (sentiment caption), only using IC (image caption), using IC and SC, and using SC with SSC (the synonym sentiment captions). We observe that just using the sentiment caption with synonyms, CLIP-E Contrastive achieves competitive accuracies when compared with the full model and with SOTA. However, training only with IC does not produce good results and performs worse than CLIP Zero-Shot. This happens because the model is not trained with the goal of learning about sentiment; it is trained to correctly pair images with their respective caption descriptions. However, the IC information is complementary when it is combined with SC, achieving higher accuracy than training solely with SC. When SC and SSC are combined, the performance on fine-grained classification drops a bit, while the performance on binary classification improves. Overall, the best accuracy is achieved by the full CLIP-E Contrastive model, when the information from SC, IC and SSC is combined all together.
### _Cross-dataset evaluation_
We test both CLIP-E approaches and CLIP Zero-Shot on 4 different datasets (7 benchmarks). The results are shown in Table II. We use 'X' to denote when an experiment cannot be computed. To the best of our knowledge, the only work that includes cross-dataset evaluation is Panda et al. [7], in the binary classification setup.
For binary classification on small datasets like IAPS, Emotion-6 and EmotionROI, CLIP-E Cross-Entropy produces better accuracies than CLIP-E Contrastive. However, CLIP-E Contrastive obtains accuracies that are slightly higher than CLIP Zero-Shot. Regarding the Panda et al. [7] results, the authors trained a customized ResNet with curriculum learning for the WEBEmo dataset. However, when tested on other datasets, they achieve accuracies that are equivalent to CLIP Zero-Shot for Emotion-6, and CLIP Zero-Shot is better than Panda et al. [7] for the FI dataset. That experimentally illustrates the potential of large Vision-Language models in generalizing.
For the fine-grained classification of Emotion-6, we could test CLIP-E Cross-Entropy with 6 classes because the categories are the same as WEBEmo's. However, notice that the emotion taxonomies of each dataset could have been different. So, although CLIP-E Cross-Entropy reports a remarkable accuracy (58.39), if the Emotion-6 taxonomy were different from the WEBEmo taxonomy, it would not have been possible to directly test CLIP-E Cross-Entropy (trained on WEBEmo) on Emotion-6.
For the case of the large-scale dataset FI, the best performance is achieved by CLIP Zero-Shot, surpassing the ResNet of Panda et al. [7] and the CLIP-E approaches. The reason could be that FI contains different kinds of images than WEBEmo. FI is composed of images that illustrate natural scenes and situations that evoke sentiments (which are probably similar to the images CLIP has been trained on), while WEBEmo contains images of people and symbols, where the sentiments are represented by facial and body gestures, or by conceptual meanings behind symbols or objects. However, it is important to notice that the CLIP-E accuracies are higher than those of Panda et al. [7].
Notice that CLIP-E Contrastive has more testing versatility than cross-entropy approaches. In the CLIP-E Cross-Entropy
Fig. 2: Confusion matrices for CLIP-E Cross-Entropy (A) and CLIP-E Contrastive (B) when trained and tested on WEBEmo.
and other classic classification models, the only way to leverage and test the learned knowledge is when the other dataset has the same taxonomy, or by fine-tuning the model using the taxonomy of the particular dataset (which requires re-training). Thus, CLIP-E Cross-Entropy can only be tested on other datasets in the binary category setup, and with Emotion-6 in the 6-category setup because WEBEmo has the same taxonomy. In contrast, CLIP-E Contrastive offers the flexibility to be tested on any taxonomy, due to its inference formulation based on an arbitrary collection of prompts and the cosine similarity. Thus, while CLIP-E Contrastive does not obtain the highest accuracies, it has the versatility to be tested on any dataset. Furthermore, although CLIP-E Contrastive does not systematically obtain the highest accuracies, it obtains accuracies comparable with other approaches, and obtains the second-best results in 3 out of the 7 tested benchmarks.
## V Discussion
### _Multiple affective interpretations of an input_
Fig. 3 shows qualitative results for CLIP-E Cross-Entropy (for each image we show the softmax distribution, sorting the emotion categories by probability in decreasing order) and CLIP-E Contrastive (we show the normalized similarity distribution, sorting the categories by similarity in decreasing order), both trained and tested on WEBEmo.
Usually, Visual Sentiment Analysis is approached as a multi-class classification problem, which assumes there is a unique ground truth category per image. However, while observing the qualitative results, it is reasonable to question this approach. For example, in Fig. 3.B, CLIP-E Contrastive predicts the ground truth label _Cheerfulness_. One possible explanation for this output is the semantic content of the image, which is more related to a positive and fun situation (we noted that the IC of this image is 'a photo that contains a kid sledding in snow at the mountain', which is a description of a positive experience). In contrast, CLIP-E Cross-Entropy classifies the image as _sadness_, and this also seems reasonable, since the image can also evoke loneliness in a desolate and grayish environment. Another example is Fig. 3.E, where the semantic information is not lost and concepts like 'recycling' are linked to 'garbage' and indirectly closer to _disgust_, while CLIP-E Cross-Entropy classifies this picture of a standing man as _optimism_ (by manually exploring the WEBEmo dataset we found several images of confident standing men labelled with positive emotion categories, which suggests that CLIP-E trained with Cross-Entropy loss was able to learn this visual pattern). Another interesting observation can be made around the idea of multiple possible interpretations of the same image. For example, Fig. 3.D is labeled as _relief_, while both CLIP-E Cross-Entropy and Contrastive produce the output _suffering_ (which is an emotional state that occurs before relief). Since humans can make different interpretations of the same image, one might think that automatic Visual Sentiment Analysis systems should be able to do the same, and for that we need new benchmarks that include multiple interpretations of the same sample, as well as evaluation metrics to quantify how models perform on the multiple-interpretation task.
These observations suggest questions that we should consider in future work on Visual Sentiment Analysis, including data collection efforts, where we could consider labelling images with multiple categories (multi-label) instead of labelling them with a single category (multi-class). While multi-class classification is also the most common approach in apparent emotion recognition, there are recent works that are starting to approach the task from a multi-label perspective (e.g. [6]).
### _Dataset bias and the need for cross-dataset evaluation_
Fig. 3 also illustrates the diversity of the WEBEmo images: some of them are more object-centric (e.g. Fig. 3.A or Fig. 3.D), while others are more scene-centric (Fig. 3.C or Fig. 3.G). Some of them seem to be natural scenes (e.g. Fig. 3.G), others are posed pictures (e.g. Fig. 3.B), and others have an associated conceptual message (e.g. Fig. 3.A, which symbolizes winning/losing). While WEBEmo is a large and diverse dataset, we also observed that each of the datasets considered in this work has its own particularities and biases. This is the reason why we face the specialization vs. generalization challenge: the better a model performs on WEBEmo, at some point it starts performing poorly on other datasets. Concretely, after training CLIP-E with WEBEmo, we tested the models obtained after each training epoch on the other benchmark datasets. Overall, CLIP-E Contrastive with WEBEmo needed 6 epochs to
converge and CLIP-E Cross-Entropy needed 5. However, the best performance on most of the other datasets was obtained by the first- or second-epoch model for both CLIP-E Contrastive and CLIP-E Cross-Entropy. These results show that as the model specializes on WEBEmo, it loses the ability to generalize to other datasets. These observations motivate the importance of cross-dataset evaluation when testing Visual Sentiment Analysis models, since it is a mechanism to evaluate generalization capacity. Furthermore, the results of our cross-dataset evaluation clearly show the capacity of CLIP Zero-Shot and the CLIP-E approaches to generalize across different datasets, when compared to models specifically designed and trained just for the Visual Sentiment Analysis task. Under the current Computer Vision shift to large Vision-Language foundation models, we might want to put more effort into understanding these models and their use for Visual Sentiment Analysis and other affect recognition tasks.
## VI Conclusions
We present a study on leveraging the knowledge encoded in the CLIP embedding space for the task of Visual Sentiment Analysis. We experiment with two simple architectures built on top of the CLIP embedding space with Cross-Entropy loss (CLIP-E Cross-Entropy) and Contrastive loss (CLIP-E Contrastive), respectively. The CLIP-E architectures are trained on the WEBEmo dataset, on top of frozen CLIP embeddings. Our results show that these architectures are able to obtain accuracies that are competitive with or even superior to SOTA models in fine-grained categorization on the WEBEmo benchmark. Additionally, our cross-dataset evaluations show that the CLIP-E architectures, which strongly rely on the knowledge of CLIP, are able to generalize better across different datasets. Our results motivate interesting discussions, such as the need for new Visual Sentiment Analysis benchmarks containing multiple interpretations of the same image, the interest of cross-dataset evaluation for testing the generalization capacity of the models, and the motivation for further exploring the use of large vision-language models for visual affect recognition tasks more generally.
## Ethical Impact Statement
Visual Sentiment Analysis is a challenging problem. While CLIP-E obtains state-of-the-art accuracies on fine-grained categories in WEBEmo, we cannot claim that CLIP-E (or any existing method) can accurately recognize affect in images. There is still a lot of research to be done to have automatic systems able to richly and diversely recognize affective content in images as humans would do. Models like CLIP-E are trained on biased datasets, and the bias is present both in the image instances (most of the images were acquired in western countries, and there are images that show negative biases towards certain groups -e.g. we observed that images containing women are more frequently labeled with negative emotions like _envy_ than images containing men) and in the annotations (most of the images are labeled by a limited number of annotators and the labels do not represent the diversity of opinions in terms of affect perception). The WEBEmo dataset is composed of images crawled from the internet. The original CLIP approach
Fig. 3: Qualitative results of CLIP-E Cross-Entropy (Softmax Distribution provided) and CLIP-E Contrastive (Similarity Distribution provided) on WEBEmo test images. The distribution graphs show the categories in decreasing order according to probability (in the case of the Softmax Distribution) and similarity (in the case of the Similarity Distribution).
has been trained with images and text captions crawled from the internet. It is known that these types of data collections and models contain biases, some of which reflect the biases embedded in human society. While the purpose of this work is to advance research on Visual Sentiment Analysis, the outcome of this work could potentially be used in practical real cases, for example for estimating the emotional impact of images in advertisement, or the emotional impact of movie clips, or for retrieving images with emotional content. Any practical use case of Visual Sentiment Analysis models should be properly regulated. Furthermore, if the use might directly impact a person, the person should give their consent for the use of such models. More generally, any use of Visual Sentiment Analysis models in real scenarios has to be validated by the corresponding ethics committees. Additionally, more effort needs to be made to provide explainability capacity to the models, to be able to understand their behavior. Finally, regarding environmental matters, even though our model leverages CLIP knowledge and requires minimal effort to be trained, the original large-scale vision-language models are computationally expensive and have a negative carbon footprint impact.
## Acknowledgments
This work is partially supported by the Spanish Ministry of Science, Innovation and Universities RTI2018-095232-B-C22 to A.L.; a Google unrestricted gift to A.L; and UOC PhD Grant to C.B. We also thank NVIDIA for their hardware donations.
|
2306.10193 | Conformal Language Modeling | We propose a novel approach to conformal prediction for generative language
models (LMs). Standard conformal prediction produces prediction sets -- in
place of single predictions -- that have rigorous, statistical performance
guarantees. LM responses are typically sampled from the model's predicted
distribution over the large, combinatorial output space of natural language.
Translating this process to conformal prediction, we calibrate a stopping rule
for sampling different outputs from the LM that get added to a growing set of
candidates until we are confident that the output set is sufficient. Since some
samples may be low-quality, we also simultaneously calibrate and apply a
rejection rule for removing candidates from the output set to reduce noise.
Similar to conformal prediction, we prove that the sampled set returned by our
procedure contains at least one acceptable answer with high probability, while
still being empirically precise (i.e., small) on average. Furthermore, within
this set of candidate responses, we show that we can also accurately identify
subsets of individual components -- such as phrases or sentences -- that are
each independently correct (e.g., that are not "hallucinations"), again with
statistical guarantees. We demonstrate the promise of our approach on multiple
tasks in open-domain question answering, text summarization, and radiology
report generation using different LM variants. | Victor Quach, Adam Fisch, Tal Schuster, Adam Yala, Jae Ho Sohn, Tommi S. Jaakkola, Regina Barzilay | 2023-06-16T21:55:08Z | http://arxiv.org/abs/2306.10193v2 | # Conformal Language Modeling
###### Abstract
In this paper, we propose a novel approach to conformal prediction for generative language models (LMs). Standard conformal prediction produces prediction sets--in place of single predictions--that have rigorous, statistical performance guarantees. LM responses are typically sampled from the model's predicted distribution over the large, combinatorial output space of natural language. Translating this process to conformal prediction, we calibrate a _stopping rule_ for sampling different outputs from the LM that get added to a growing set of candidates until we are confident that the output set is sufficient. Since some samples may be low-quality, we also simultaneously calibrate and apply a _rejection rule_ for removing candidates from the output set to reduce noise. Similar to conformal prediction, we prove that the sampled set returned by our procedure contains at least one acceptable answer with high probability, while still being empirically precise (i.e., small) on average. Furthermore, within this set of candidate responses, we show that we can also accurately identify subsets of individual components--such as phrases or sentences--that are each independently correct (e.g., that are not "hallucinations"), again with statistical guarantees. We demonstrate the promise of our approach on multiple tasks in open-domain question answering, text summarization, and radiology report generation using different LM variants.
## 1 Introduction
Language models (LMs) have emerged as powerful tools for solving natural language processing (NLP) tasks. Given an input prompt, LMs generate a response from some predicted distribution over output text sequences. For modern models, these generations are often coherent and contextually relevant. At the same time, these generations can still contain mistakes, and lack certain aspects of robustness and reliability in terms of providing accurate, trustworthy predictions [27; 31; 38; 42; 63; 72]. Unfortunately, quantifying the uncertainty in LM outputs has remained a major challenge.
Conformal prediction is a popular model-agnostic and distribution-free method for creating prediction sets that contain the correct answers with high probability [2; 3; 4; 6; 36; 55; 69]. Applying conformal prediction to generative models such as LMs, however, is challenging due to (a) the unbounded nature of their output space (i.e., all possible text sequences), and (b) the limited available (tractable) mechanisms for exploring all possible predictions. In particular, LMs can typically only approximately search or sample candidate responses. Furthermore, while several possible responses might be acceptable (e.g., correct or factual), small differences can result in abrupt changes in coherence or meaning.
In this paper, we propose an extension of conformal prediction that is tailored specifically to generative LMs. We only assume that the (potentially black-box) LM that is given to us can be used to sample diverse output sequences, together with their evaluated model likelihoods (i.e., the output token
sequence logits). Like conformal prediction, our method offers a rigorous coverage guarantee by constructing prediction sets that, in our case, provably contain at least one acceptable response with high probability. Unlike conformal prediction, however, we do not enumerate the entire output space (which is impossible). Instead, we derive a calibrated _stopping rule_ for sampling different outputs from the LM that get added to a growing output set of candidates, until we are confident that the output set is sufficient. Since not all samples from the LM may be high quality (e.g., some may be redundant, incoherent, or have lower confidence), we also simultaneously calibrate a _rejection rule_ for removing candidates from the output set--while still ensuring that our coverage bound is not violated. This gives the benefit of making our output sets not only accurate, but also precise (i.e., small).
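As a schematic illustration of this procedure, the sampling loop can be sketched as follows. The scoring functions and thresholds below are placeholders for the calibrated components described later; this is a reading of the description above, not the exact algorithm.

```python
# Schematic sketch of the calibrated sampling procedure described above.
# quality(), redundant(), and set_confidence() stand in for the calibrated
# scoring functions; the thresholds are chosen on calibration data.

def conformal_sample_set(lm_sample, quality, redundant, set_confidence,
                         quality_threshold, stop_threshold, max_samples=20):
    """Grow an output set of LM samples until the stopping rule fires.

    lm_sample()        -- draws one candidate response from the LM
    quality(y)         -- confidence score of a single candidate
    redundant(y, C)    -- True if y duplicates something already in C
    set_confidence(C)  -- calibrated score that C already contains an
                          acceptable response (drives the stopping rule)
    """
    output_set = []
    for _ in range(max_samples):
        y = lm_sample()
        # Rejection rule: keep only confident, non-redundant candidates.
        if quality(y) >= quality_threshold and not redundant(y, output_set):
            output_set.append(y)
        # Stopping rule: stop once the growing set is deemed sufficient.
        if output_set and set_confidence(output_set) >= stop_threshold:
            break
    return output_set
```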
To more concretely describe the exact type of guarantee that we provide, suppose we have been given a calibration set \(\mathcal{D}_{\mathrm{cal}}=(X_{i},A_{i})\in\mathcal{X}\times\mathcal{A}\), \(i=1,\ldots,n\) of independent and identically distributed (i.i.d.) prompts and "admission" functions (see also [14]). Here, \(A_{i}\) is a binary random function that measures whether or not a generation \(y\) for prompt \(X_{i}\) is good enough (i.e., \(A_{i}(y)=1\)). Note that randomness in \(A_{i}\) can come from implicit random covariates--such as relying on a random annotated reference, \(Y_{i}^{\mathrm{ref}}\), to compare the candidate \(y\) to. Figure 1 illustrates a setting where \(X_{i}\) is an X-ray to automatically analyze and produce a report for, while \(A_{i}\) extracts individual findings from each generated report and checks if they correspond to those given by an expert radiologist. Let \(X_{\mathrm{test}}\) be a new i.i.d. test prompt. Using \(\mathcal{D}_{\mathrm{cal}}\) to guide our choice of hyper-parameters \(\lambda\in\Lambda\), for any \(\epsilon,\delta\in(0,1)\), our goal is to generate a set of samples \(\mathcal{C}_{\lambda}(X_{\mathrm{test}})\subseteq 2^{\mathcal{Y}}\) that satisfies
\[\mathbb{P}\Big{(}\mathbb{P}\Big{(}\exists y\in\mathcal{C}_{\lambda}(X_{ \mathrm{test}})\colon A_{\mathrm{test}}(y)=1\mid\mathcal{D}_{\mathrm{cal}} \Big{)}\geq 1-\epsilon\Big{)}\geq 1-\delta. \tag{1}\]
The outer and inner probabilities are over the draws of \(\mathcal{D}_{\mathrm{cal}}\) and \((X_{\mathrm{test}},A_{\mathrm{test}})\), respectively. \(\epsilon\) is our error tolerance, while \(\delta\) controls for the sensitivity of our algorithm with respect to calibration data.
While Eq. (1) stipulates the existence of at least one "acceptable" generation in \(\mathcal{C}_{\lambda}(X_{\mathrm{test}})\), it does not tell us much about individual responses, \(y\in\mathcal{C}_{\lambda}(X_{\mathrm{test}})\). Additionally, longer generations are often composed of multiple statements. In our radiology setting, a report may contain multiple findings, such as _"Cardiomegaly is moderate. There is mild pulmonary interstitial edema."_ We further identify a subset of confident components that would independently be categorized as being correct (given another admission function \(A_{\mathrm{test}}^{c}\), this time operating over generation fragments). For example, we might predict that _"Cardiomegaly is moderate."_ is correct, but perhaps not _"There is mild pulmonary interstitial edema."_ This can not only be useful in catching incorrect statements, but can also help identify independently correct parts of a larger generation, even when the overall quality of the full generation is poor. Like Eq. (1), we calibrate this process such that it gives accurate results with high probability.
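To picture the component-level step, one can split each retained generation into units (e.g., sentences), score each unit, and keep only those whose score clears a calibrated threshold. The splitting and scoring choices below are illustrative stand-ins rather than the exact procedure of Section 4.4.

```python
# Illustrative sketch of component selection; the naive sentence splitter and
# the component_score function (e.g., agreement across sampled generations or
# an auxiliary classifier) are assumptions, and tau is a calibrated threshold.

def confident_components(output_set, component_score, tau):
    """Return the generation fragments predicted to be independently correct."""
    kept = []
    for y in output_set:
        for fragment in y.split(". "):        # naive sentence-level split
            fragment = fragment.strip()
            if fragment and component_score(fragment) >= tau:
                kept.append(fragment)
    return kept
```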
**Contributions.** In summary, our main results are as follows:
* We bridge the gap between conformal prediction and LMs by calibrating the _sampling_ of output sets, rather than enumerating and selecting candidate responses directly from the output space;
Figure 1: Our procedure samples candidate reports from a trained language model until a _stopping rule_ is reached. Each sample is added to the output conformal set if it meets both a minimum estimated quality and a diversity criterion. The procedure is calibrated such that at least one candidate \(y\) from the conformal set (green frame) is admissible \((A(y)=1)\). In this example, samples \(y_{1}\) and \(y_{2}\) are inadmissible because they hallucinate the presence of "edema" (in orange) and "hilar congestion" (in magenta), respectively. Sample \(y_{3}\), however, is admissible, and included in the output set by our method.
* We extend multi-label conformal prediction to identify confident components of long generations;
* Though limitations apply, we demonstrate valid risk control on multiple diverse tasks with different LMs, while still retaining meaningful output sets that are precise on average compared to baselines.
## 2 Related work
**Conformal prediction and risk control.** Our work adds to the rich collection of tools for uncertainty estimation and risk control for machine learning algorithms [2; 3; 5; 6; 16; 18; 35; 36; 68; 70; 71, _inter alia_]. These techniques were previously extended and applied in the language domain to classification with finitely-many classes [14; 15; 27], to token-level predictions [11; 54], and to reliably accelerate LMs [34; 57; 59]. Here, we address the emerging challenge of providing reliable prediction sets for unbounded, free-text generation--which previous methods are unequipped for. The distribution-free, finite-sample performance guarantees that we derive are similar to those given by prediction sets or regression intervals in standard conformal prediction [2; 50; 69], but with slightly relaxed "correctness" criteria [9; 14]. In particular, we build on the groundwork set by Angelopoulos et al. [3], which provides a general methodology for calibrating any risk function that is controllable via some low-dimensional hyper-parameter configuration. We extend their framework to handle sampling-based algorithms that can effectively be used for LMs, and that, critically, do not require enumerating the full output space (which is intractable in our case). Most relevant to our work in LMs, other recent approaches have built on conformal principles to construct confidence intervals for generative diffusion models over images [23; 64]. These methods do not directly translate to LMs, however, as they only provide non-combinatorial confidence intervals at the pixel level.
**Uncertainty estimation in LMs.** As the use of LMs in-the-wild quickly grows, there is increasing interest in obtaining and expressing meaningful confidence estimates for each output. Recent studies show that the logits of out-of-the-box LMs tend to exhibit overconfidence, even when wrong [10; 29; 43; 67]. Recent alignment techniques degrade this even further [29; 48]. Most current mitigation approaches focus on introducing linguistic cues [39; 80] or empirical post-hoc logit calibration [25; 29; 44; 78]. Such heuristics, however, don't provide any concrete guarantees. In this work, we develop similar techniques to improve the output of the underlying LM. Our methods are model agnostic and provide rigorous guarantees. Our conformal component selection (SS4.4) also relates to recent self-consistency work that builds on the empirical observation that repeated similar samples are more likely to be correct [45; 72], and cross-sample entailment can approximate uncertainty [32]. Unlike previous work that uses a fixed number of re-samples and compares full outputs, we (1) introduce a dynamic stopping rule to reduce the number of samples, (2) extend this concept to semantically compare sub-components of long text outputs, and (3) formalize the process to provide proper guarantees.
**Reliable generation.** It is common practice to post-hoc apply classifiers and filters on top of LM generations for various quality goals such as preventing toxicity [17; 53; 73], verifying grounding against sources [7; 41; 77], or re-ranking the set of decoded outputs [24]. Our work provides a systematic and reliable approach for filtering or flagging poor-quality outputs--both at a full generation and component level--and can also readily incorporate additional signal from auxiliary classifiers. For example, we demonstrate in our experiments using off-the-shelf natural language inference (NLI) models [8; 30; 56; 65; 74; 79] to help guide the selection of individual, confident components in text summarization (i.e., sentences that are fully entailed by the larger text [13; 22; 33; 58]).
## 3 Background
We begin with a brief review of conformal prediction and general risk control (see also [1]). Here, and in the rest of the paper, upper-case letters (\(X\)) denote random variables; lower-case letters (\(x\)) denote constants, and script letters (\(\mathcal{X}\)) denote sets, unless specified. All proofs are in Appendix B.
Given a new example \(x\), for every candidate label \(y\in\mathcal{Y}\) standard conformal prediction either accepts or rejects the null hypothesis that the pairing \((x,y)\) is correct. The test statistic for this test is a _nonconformity measure_, \(\mathcal{M}((x,y),\mathcal{D})\), where \(\mathcal{D}\) is a dataset of labeled examples. Informally, a lower value of \(\mathcal{M}\) reflects that point \((x,y)\) "conforms" to \(\mathcal{D}\), whereas a higher value of \(\mathcal{M}\) reflects that \((x,y)\) does not. For example, a practical choice for \(\mathcal{M}\) could be the model-based negative log likelihood, \(-\log p_{\theta}(y|x)\), where \(\theta\) are parameters fit to \(\mathcal{D}\). Split conformal prediction [49] uses a separate training set \(\mathcal{D}_{\mathrm{train}}\) to learn a fixed \(\mathcal{M}\) that is not modified during calibration or prediction. To construct a prediction set for the new test point \(x\), the conformal classifier outputs all \(y\) for which
the null hypothesis (that pairing \((x,y)\) is correct) is not rejected. This is achieved by comparing the scores of the test candidate pairs to the scores computed over \(n\) calibration examples.
**Theorem 3.1** (Split conformal prediction [49; 69]).: _Let \((X_{i},Y_{i})\), \(i=1,\ldots,n+1\) be exchangeable random variables. Let random variable \(V_{i}=\mathcal{M}(X_{i},Y_{i})\) be the nonconformity score of \((X_{i},Y_{i})\), where \(\mathcal{M}\) is fixed. For \(\epsilon\in(0,1)\), define the prediction (based on the first \(n\) examples) at \(x\in\mathcal{X}\) as_
\[\mathcal{C}_{\epsilon}(x):=\big{\{}y\in\mathcal{Y}\colon\mathcal{M}(x,y)\leq \mathrm{Quantile}(1-\epsilon;\,V_{1:n}\cup\{\infty\})\big{\}} \tag{2}\]
_Then \(\mathbb{P}(Y_{n+1}\in\mathcal{C}_{\epsilon}(X_{n+1}))\geq 1-\epsilon\)._
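For intuition, the following minimal sketch (not from the paper; all names and data are illustrative) shows how the prediction set in Eq. (2) is formed once nonconformity scores are available: compute the finite-sample-corrected \((1-\epsilon)\) quantile of the calibration scores and keep every candidate whose score falls below it.

```python
import numpy as np

def conformal_quantile(cal_scores, epsilon):
    """Quantile(1 - epsilon; V_{1:n} U {infinity}) from Theorem 3.1."""
    n = len(cal_scores)
    # This equals the ceil((n + 1)(1 - epsilon))-th smallest calibration
    # score, or +infinity when that rank exceeds n.
    rank = int(np.ceil((n + 1) * (1 - epsilon)))
    return np.inf if rank > n else np.sort(cal_scores)[rank - 1]

def prediction_set(candidate_scores, threshold):
    """All labels y whose nonconformity M(x, y) is at most the threshold."""
    return {y for y, score in candidate_scores.items() if score <= threshold}

# Toy usage with made-up negative log-likelihood scores.
cal_scores = np.array([0.3, 1.2, 0.7, 2.1, 0.9, 1.5, 0.4, 1.1])
tau = conformal_quantile(cal_scores, epsilon=0.2)
print(prediction_set({"label_a": 0.5, "label_b": 1.8, "label_c": 2.6}, tau))
```

Note that this recipe requires scoring every candidate \(y\in\mathcal{Y}\), which is exactly what becomes intractable for free-text generation, as discussed below.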
Note that the coverage property expressed in Theorem 3.1 is _marginal_ over the draw of calibration and test data. The recent Learn Then Test (LTT) framework of Angelopoulos et al. [3] extends conformal prediction to control the expectation of any loss function (conditional on the draw of calibration data) by reframing hyper-parameter selection as a statistical multiple hypothesis testing problem.
Specifically, let \(L:\Lambda\to\mathbb{R}\) be any random function using a hyper-parameter configuration \(\lambda\) in some space \(\Lambda\). For example, we might have \(L(\lambda):=\ell(X,Y;\lambda)\) for some fixed loss function \(\ell\) with random inputs \((X,Y)\). Unlike conformal prediction, however, \(\lambda\) can be multi-dimensional (e.g., consist of multiple thresholds). Let \(L_{i}\), \(i=1,\ldots,n\) be an i.i.d. calibration set \(\mathcal{D}_{\mathrm{cal}}\) of random functions, and \(\epsilon\in\mathbb{R}\) be a tolerance for the test risk, \(\mathbb{E}[L_{\mathrm{test}}(\lambda)]\leq\epsilon\). LTT then identifies a random (depending on \(\mathcal{D}_{\mathrm{cal}}\)) subset of parameters, \(\Lambda_{\mathrm{valid}}\subseteq\Lambda\), with the goal of guaranteeing
\[\mathbb{P}\bigg{(}\sup_{\lambda\in\Lambda_{\mathrm{valid}}}\mathbb{E}[L_{ \mathrm{test}}(\lambda)\mid\mathcal{D}_{\mathrm{cal}}]\leq\epsilon\bigg{)} \geq 1-\delta, \tag{3}\]
where the outer probability is over the draw of \(\mathcal{D}_{\mathrm{cal}}\), and the inner expectation is over draws of \(L_{\mathrm{test}}\). This then implies that any \(\lambda\in\Lambda_{\mathrm{valid}}\) can be selected to control the risk of \(L_{\mathrm{test}}\). In short, this is achieved by associating the null hypothesis \(\mathcal{H}_{\lambda}\colon\mathbb{E}[L_{\mathrm{test}}(\lambda)]>\epsilon\) to each \(\lambda\in\Lambda\). For each null hypothesis, we then use the calibration set to compute a super-uniform p-value \(p_{\lambda}\) using concentration inequalities. Any multiple testing algorithm \(\mathcal{T}(p_{\lambda}\colon\lambda\in\Lambda)\) that controls the family-wise error rate (FWER) can then be used to identify the subset of non-rejected \(\lambda\), i.e., \(\Lambda_{\mathrm{valid}}\).1 Note that it is possible for \(\Lambda_{\mathrm{valid}}=\varnothing\), in the case that we fail to identify any statistically valid solutions (and the desired risk may not even be achievable with any \(\lambda\)). In this situation, we set \(\lambda=\texttt{null}\), and either reject the task, or provide a trivial solution (e.g., a classifier that provides all possible labels \(\mathcal{Y}\)).
Footnote 1: A FWER-controlling algorithm at level \(\delta\) is any procedure that accepts or rejects null hypotheses \(\mathcal{H}_{\lambda}\), while ensuring that the probability of falsely rejecting any \(\mathcal{H}_{\lambda}\), \(\forall\lambda\in\Lambda\), is less than \(\delta\).
**Theorem 3.2** (Learn Then Test [3]).: _Suppose \(p\)-value \(p_{\lambda}\), derived from \(\mathcal{D}_{\mathrm{cal}}\), is super-uniform under \(\mathcal{H}_{\lambda}\) for all \(\lambda\). Let \(\mathcal{T}\) be any FWER-controlling algorithm at level \(\delta\). Then \(\Lambda_{\mathrm{valid}}\) satisfies Eq. (3)._
Defining \(\mathcal{C}_{\lambda}(x):=\{y\in\mathcal{Y}\colon\mathcal{M}(x,y)\leq\lambda\}\), \(\Lambda\subset\mathbb{R}\), and \(L(\lambda):=\mathbf{1}\{Y\not\in\mathcal{C}_{\lambda}(X)\}\) recovers a criterion similar to that of conformal prediction (though not marginal over \(\mathcal{D}_{\mathrm{cal}}\)). Unfortunately, in either instantiation (LTT vs. conformal prediction) iterating over \(y\in\mathcal{Y}\) is intractable for LMs, regardless of whatever calibration technique is ultimately used. Instead, in SS4, we introduce our method for generating uncertainty sets by casting \(\lambda\) as a configuration of a _sampling_ algorithm, rather than a filter on the output space \(\mathcal{Y}\). We then show that this randomized algorithm can still be calibrated with LTT.
## 4 Conformal language modeling
We now introduce our method for generating uncertainty sets for LMs. At a high level, our procedure consists of three main steps to sample and return a collection of plausible output predictions:
1. **Sample.** A new candidate response \(y\) is sampled from our language model.
2. **Accept or reject.** The sample \(y\) is added to the growing output set, as long as it is diverse (e.g., maximum overlap with any other element is \(\leq\lambda_{1}\)) and confident (e.g., the LM likelihood is \(\geq\lambda_{2}\)).
3. **Stop or repeat.** Using a set-based scoring function, we check if the confidence in the current set is \(\geq\lambda_{3}\). If it is, then we stop and return the current set. Otherwise we return to Step 1.
\(\lambda=(\lambda_{1},\lambda_{2},\lambda_{3})\) is a configuration that we calibrate to find a valid setting, \(\hat{\lambda}=(\hat{\lambda}_{1},\hat{\lambda}_{2},\hat{\lambda}_{3})\), that controls the risk of our output sets. In the following, we more carefully define our setting and notation
(SS4.1), and then describe our sampling (SS4.2) and calibration algorithms (SS4.3). Then, in SS4.4, we provide an additional extension for highlighting confident generation components--i.e., subsections of our full generations that are independently likely to be correct, even if the full generation is not.
### Formal setting and notation
Let \(\mathcal{V}\) be an alphabet (a non-empty, finite set of tokens such as {"a", "b", "c",...}) from which all possible output strings, \(y\), are composed, i.e. \(\mathcal{Y}:=\mathcal{V}^{*}\).2 We assume that we are given a generative model \(p_{\theta}(y\mid x)\) that defines a conditional probability distribution given some input prompt \(x\in\mathcal{X}\) (where \(x\) may be text, or another modality such as an image), which we can sample from to obtain candidate output strings, \(y\sim p_{\theta}(y\mid x)\). Following Fisch et al. [14], for every input prompt \(x\), we assume access to some "admission" function \(A\colon\mathcal{V}^{*}\to\{0,1\}\) that is used to measure the acceptability of a given sample \(y\). Intuitively, \(A\) tells us if an output is "good enough".
Footnote 2: We write \(\mathcal{V}^{*}\) to denote the Kleene closure of a set \(\mathcal{V}\), i.e., \(\mathcal{V}^{*}:=\bigcup_{n=0}^{\infty}\mathcal{V}^{n}\).
Continuing our radiology report example from SS1, \(x\) is the input X-ray, \(y\) is the report, and \(p_{\theta}(y\mid x)\) is our image-to-text LM (in English). Given \(y\) and some "ground truth" report \(y^{*}\) (e.g., written by a radiologist), \(A(y)\) might measure if \(y\) and \(y^{*}\) agree on all findings. Note that, in practice, it may be hard to exactly define such an \(A\), or at least an \(A\) that is automatically computable without extensive manual annotation. In Appendix E we show that it is also sufficient to only require access to a _conservative_ admission function, \(\bar{A}\colon\mathcal{V}^{*}\to\{0,1\}\), where \(\forall y\in\mathcal{V}^{*}\) we have \(\bar{A}(y)\leq A(y)\). For instance, \(\bar{A}\) might measure exact match on a word-for-word basis between \(y\) and \(y^{*}\), instead of accounting for differences in dictation. We explore different tasks and admission functions in our experiments in SS5.
Given a calibration set \(\mathcal{D}_{\text{cal}}\), our goal is to derive a configurable algorithm with input parameters \(\lambda\in\Lambda\) for constructing a prediction \(\mathcal{C}_{\lambda}\) that we can calibrate to satisfy Eq. (1). In the framework of LTT (refer to SS3), this is equivalent to defining \(L_{i}(\lambda)=\mathbf{1}\{\nexists y\in\mathcal{C}_{\lambda}(X_{i})\colon A _{i}(y)=1\}\), and using the calibration set \(\mathcal{D}_{\text{cal}}\) to find a value \(\hat{\lambda}\) such that \(\mathbb{E}[L_{\text{test}}(\hat{\lambda})]\leq\epsilon\) with probability at least \(1-\delta\).
### Conformal sampling with rejection
Let \(\mathcal{F}\colon 2^{\mathcal{V}^{*}}\to\mathbb{R}\) be a set-based function that, for any set \(\mathcal{C}\in 2^{\mathcal{V}^{*}}\), gives a confidence score for the event \(\mathbf{1}\{\exists y\in\mathcal{C}\colon A(y)=1\}\). Practically, we expect that \(\mathcal{F}\) should be non-decreasing, i.e., \(\mathcal{C}\subseteq\mathcal{C}^{\prime}\implies\mathcal{F}(\mathcal{C}) \leq\mathcal{F}(\mathcal{C}^{\prime})\), though this is not strictly enforced. Furthermore, let \(\mathcal{S}\colon\mathcal{V}^{*}\times\mathcal{V}^{*}\to\mathbb{R}\) be a text-based similarity function (e.g., BLEU or ROUGE) that we use to detect duplicates in \(\mathcal{C}\), and \(\mathcal{Q}\colon\mathcal{X}\times\mathcal{V}^{*}\to\mathbb{R}\) an input-conditional text-based measure of individual prediction quality--such as the LM's likelihood function, \(p_{\theta}(y\mid x)\). We then adopt a sampling-based procedure that _grows_ an output set, \(\mathcal{C}_{1}\subseteq\mathcal{C}_{2}\subseteq\ldots\subseteq\mathcal{C}_{k-1}\), by repeatedly taking samples \(y_{k}\sim p_{\theta}(y\mid x)\), and updating
\[\mathcal{C}_{k}:=\begin{cases}\mathcal{C}_{k-1}\cup\{y_{k}\}&\text{if }\max\{ \mathcal{S}(y_{k},y_{j})\colon y_{j}\in\mathcal{C}_{k-1}\}\leq\lambda_{1}\\ &\text{and }\mathcal{Q}(x,y_{k})\geq\lambda_{2},\\ \mathcal{C}_{k-1}&\text{otherwise}.\end{cases} \tag{4}\]
until the confidence after \(k\) samples, \(\mathcal{F}(\mathcal{C}_{k})\), is \(\geq\lambda_{3}\) (or some sampling budget \(k_{\max}\) is reached).
As an intuitive, but toy, example, suppose we modeled \(y_{k}\sim p_{\theta}(y\mid x),k=1,2,\ldots\) as a Bernoulli process, where each \(y_{k}\) has the same probability of success \(p\) that we assume (albeit unrealistically) that we know. For \(X_{\text{test}}\), "success" is determined by the admission function, \(A_{\text{test}}\). The confidence that our current set \(\mathcal{C}_{k}\) contains at least one admissible answer (without rejection) then follows a geometric distribution, \(\operatorname{Geo}(p)\): all that remains is to compute the minimum number of samples to take such that Eq. (1) is satisfied. This is achieved by taking \(\mathcal{F}(\mathcal{C}_{k})=k\) and \(\lambda_{3}=\lceil\log(\epsilon)/\log(1-p)\rceil\).
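For instance, with \(p=0.5\) and \(\epsilon=0.05\), this gives \(\lambda_{3}=\lceil\log(0.05)/\log(0.5)\rceil=5\): after five samples the probability that none is admissible is \(0.5^{5}\approx 0.03\leq\epsilon\), whereas four samples would not suffice.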
Of course, in reality we do not know the probability of success \(p\) for test examples. Furthermore, the samples \(y_{k}\) are not independent, and since we are also able to observe their values, better strategies may exist to conditionally estimate \(A(y_{k})=1\). Therefore, we allow \(\mathcal{F}\) to be _any_ set-based function--that we also pair with similarity function \(\mathcal{S}\), and sample quality function \(\mathcal{Q}\), for handling rejections. Pseudocode is given in Algorithm 1. We derive, calibrate, and test different variations of \(\mathcal{F}\), \(\mathcal{S}\), and \(\mathcal{Q}\) in SS5 and SS6, respectively. Using \(\lambda=(\lambda_{1},\lambda_{2},\lambda_{3})\), we write \(\mathcal{C}_{\lambda}(X_{\text{test}})\) to denote the final output set.
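For concreteness, a minimal sketch of this sampling loop (our Algorithm 1, in simplified form) is given below; `sample_fn`, `quality_fn`, `similarity_fn`, and `set_score_fn` stand in for \(p_{\theta}\), \(\mathcal{Q}\), \(\mathcal{S}\), and \(\mathcal{F}\), and the names are illustrative rather than the exact implementation.

```python
def conformal_sample_with_rejection(x, sample_fn, quality_fn, similarity_fn,
                                    set_score_fn, lambdas, k_max=20):
    """Grow an output set by repeated sampling with rejection (cf. Eq. (4))."""
    lam1, lam2, lam3 = lambdas
    C = []
    for _ in range(k_max):
        y = sample_fn(x)  # y ~ p_theta(y | x)
        # Rejection rule: keep y only if it is not a near-duplicate of anything
        # already kept (similarity <= lam1) and its quality is at least lam2.
        diverse = all(similarity_fn(y, y_j) <= lam1 for y_j in C)
        if diverse and quality_fn(x, y) >= lam2:
            C.append(y)
        # Stopping rule: stop once the set-level confidence reaches lam3.
        if C and set_score_fn(C) >= lam3:
            break
    return C
```

In the toy example above, \(\mathcal{F}\) is simply the number of samples taken (equivalently \(|\mathcal{C}_{k}|\) when nothing is rejected) and \(\lambda_{3}=\lceil\log(\epsilon)/\log(1-p)\rceil\); in general, \(\mathcal{F}\), \(\mathcal{S}\), and \(\mathcal{Q}\) can be any of the choices evaluated in Section 5.2.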
### Calibration with Learn Then Test
Let \(\Lambda\) be a finite set of configurations. For example, if searching for a value of \(\lambda=(\lambda_{1},\lambda_{2},\lambda_{3})\in[0,1]^{3}\), we might consider the evenly-spaced set \(\Lambda=\{\frac{i}{\kappa}\colon i=1,\ldots,\kappa\}^{3}\) for some finite \(\kappa\in\mathbb{N}\). For each \(\lambda\in\Lambda\), LTT then requires computing a valid p-value \(p_{\lambda}\), where \(p_{\lambda}\) is a super-uniform random variable under \(\mathcal{H}_{\lambda}\). Here, we can obtain valid p-values from the empirical risk on \(\mathcal{D}_{\text{cal}}\),
\[\widehat{R}_{n}(\lambda):=\frac{1}{n}\sum_{i=1}^{n}L_{i}(\lambda),\quad\text{ where }\ L_{i}(\lambda)=\mathbf{1}\big{\{}\nexists y\in\mathcal{C}_{\lambda}(X_{i}) \colon A_{i}(y)=1\big{\}}, \tag{5}\]
**Lemma 4.1** (Binomial tail bound p-values).: _Let \(\widehat{R}_{n}(\lambda)\) be the empirical risk in Eq. (5), and let \(\operatorname{Binom}(n,\epsilon)\) denote a binomial random variable with sample size \(n\) and success probability \(\epsilon\). Then_
\[p_{\lambda}^{\mathrm{BT}}:=\mathbb{P}(\operatorname{Binom}(n,\epsilon)\leq n \widehat{R}_{n}(\lambda)) \tag{6}\]
_is a valid p-value for \(\mathcal{H}_{\lambda}\colon\mathbb{E}[L_{\text{test}}(\lambda)]>\epsilon\)._
When paired with any FWER-controlling algorithm \(\mathcal{T}\) at level \(\delta\), we obtain the set \(\Lambda_{\text{valid}}\subseteq\Lambda\) by selecting all configurations for hypotheses \(\mathcal{H}_{\lambda}\) that are rejected by \(\mathcal{T}(p_{\lambda}^{\mathrm{BT}}\colon\lambda\in\Lambda)\). If \(\Lambda_{\text{valid}}\) is empty, then we abstain (i.e., return null). Otherwise, we use the configuration that empirically minimizes a weighted combination of the average final set size (after rejection) as well as the relative number of "excess" samples taken from our model (i.e., how many extra samples our algorithm takes _after_ the first admissible answer has already been surfaced, proportional to the total number of samples). Specifically, let \(S_{\lambda}(x)\) be the total number of samples taken, \(S^{*}(x)\) be the oracle sample index \(j\) of the first admissible generation (where \(A(y_{j})=1\)), and \(\mathcal{C}_{\lambda}(x)\) be the final prediction set. Then, reusing \(\mathcal{D}_{\text{cal}}\), we take
\[\hat{\lambda}=\operatorname*{argmin}_{\lambda\in\Lambda_{\mathrm{valid}}}\ \frac{1}{n}\sum_{i=1}^{n}\bigg{(}\rho_{1}|\mathcal{C}_{\lambda}(X_{i})|+\rho_{2}\frac{[S_{\lambda}(X_{i})-S^{*}(X_{i})]^{+}}{S_{\lambda}(X_{i})}\bigg{)} \tag{7}\]
where \(\rho_{1},\rho_{2}\in\mathbb{R}_{\geq 0}\) are hyper-parameters and \([\cdot]^{+}\triangleq\max(\cdot,0)\). We choose \(\rho_{1}=\rho_{2}=0.5\). As a consequence of LTT, the chosen \(\hat{\lambda}\) (which is a random variable that depends on \(\mathcal{D}_{\text{cal}}\)) is risk-controlling.
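To illustrate the calibration step end-to-end, the sketch below computes the binomial-tail p-values of Lemma 4.1 on a grid of configurations, keeps the configurations rejected under a plain Bonferroni correction (a simple FWER-controlling choice, used here in place of the Pareto Testing procedure described at the end of this section), and selects \(\hat{\lambda}\) by the empirical objective of Eq. (7). All names are illustrative, and the loss and objective callables are assumed to be supplied by the user.

```python
import numpy as np
from itertools import product
from scipy.stats import binom

def binomial_tail_p_value(losses, epsilon):
    """p^BT = P(Binom(n, epsilon) <= n * empirical risk), as in Lemma 4.1."""
    n = len(losses)
    return binom.cdf(np.sum(losses), n, epsilon)

def calibrate(loss_fn, objective_fn, grid, epsilon, delta):
    """loss_fn(lam): per-example 0/1 losses L_i(lam) on D_cal.
    objective_fn(lam): empirical value of the Eq. (7) objective."""
    p_values = {lam: binomial_tail_p_value(loss_fn(lam), epsilon) for lam in grid}
    # Bonferroni correction: rejecting H_lam whenever p_lam <= delta / |grid|
    # controls the family-wise error rate at level delta.
    valid = [lam for lam, p in p_values.items() if p <= delta / len(grid)]
    if not valid:
        return None  # abstain: lambda = null
    return min(valid, key=objective_fn)

# Example grid over (lambda1, lambda2, lambda3) in [0, 1]^3.
grid = list(product(np.linspace(0.0, 1.0, 5), repeat=3))
```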
**Theorem 4.2** (Sampling-based LTT).: _Let \(\hat{\lambda}\) be defined according to Eq. (7). Then the prediction \(\mathcal{C}_{\hat{\lambda}}(X_{\text{test}})\) computed by Algorithm 1 satisfies Eq. (1)._
**Remark 4.3**.: Given a finite \(k_{\max}\), Algorithm 1 is guaranteed to terminate. Smaller \(k_{\max}\) will, however, shrink the range of achievable \(\epsilon\) (i.e., with \(\hat{\lambda}\neq\texttt{null}\)). See Appendix C for additional discussion.
Finally, to efficiently search and test the higher dimensional \(\lambda=(\lambda_{1},\lambda_{2},\lambda_{3})\) that we consider here, we use the recently proposed (FWER-controlling) Pareto Testing procedure from Laufer-Goldshtein et al. [34]. Pareto Testing can be used to exploit structure in \(\Lambda\) by first using a proportion of \(\mathcal{D}_{\text{cal}}\) to find \(\Lambda\)'s Pareto-optimal frontier, and then iteratively validate the most empirically promising configurations using Fixed Sequence Testing [20] on the remaining calibration data. See Appendix D for details.
### Conformal selection of individual components
A caveat of language generation tasks is that LM responses are often verbose, and composed of multiple components. We consider a component to be some logically defined subpart of a larger response, such as a series of phrases, sentences, or propositions. For example, in our radiology setting, a report such as _"The heart is mildly enlarged. The lungs are clear."_ can be broken down into two findings, namely, _"The heart is mildly enlarged."_ and _"The lungs are clear."_ While our past guarantees may tell us that admissible generations exist within our sampled prediction sets, they still do not give us any insight into how confident our model is about individual statements within each option. To address this, let \(\mathcal{E}\colon\mathcal{V}^{*}\to 2^{\mathcal{V}^{*}}\) be a deterministic function that takes a text string as input, and breaks it down into components. Here, we implement \(\mathcal{E}\) to be a simple sentence splitter. Similar to before, for every input \(x\), we assume access to some _component-based_ admission function \(A^{\mathrm{c}}\colon\mathcal{V}^{*}\to\{0,1\}\) that is used to judge response fragments for correctness. For example, \(A^{\mathrm{c}}\) can check if \(e\) is entailed by (or exactly matches) another component \(e^{\prime}\in\mathcal{E}(y^{\mathrm{ref}})\), where \(y^{\mathrm{ref}}\) is a human reference. Let \(\mathcal{F}^{\mathrm{c}}\colon\mathcal{V}^{*}\to\mathbb{R}\) be a function that, for any component \(e\in\mathcal{V}^{*}\), gives a confidence score for the event \(A^{\mathrm{c}}(e)=1\). We then define the subset of components \(\mathcal{C}_{\gamma}^{\mathrm{inner}}(x)\subseteq\mathcal{V}^{*}\) as
\[\mathcal{C}_{\gamma}^{\mathrm{inner}}(x):=\Big{\{}e\in\bigcup_{y\in\mathcal{ C}_{\lambda}(x)}\mathcal{E}(y)\colon\mathcal{F}^{\mathrm{c}}(e)\geq\gamma \Big{\}}. \tag{8}\]
Again using \(\mathcal{D}_{\mathrm{cal}}\), we seek to calibrate \(\gamma\in\Gamma\), such that for test pair \((X_{\mathrm{test}},A_{\mathrm{test}})\) and \(\alpha,\delta\in(0,1)\),
\[\mathbb{P}\Big{(}\mathbb{P}\Big{(}A^{\mathrm{c}}_{\mathrm{test}}(e)=1,\forall e \in\mathcal{C}_{\gamma}^{\mathrm{inner}}(X_{\mathrm{test}})\mid\mathcal{D}_{ \mathrm{cal}}\Big{)}\geq 1-\alpha\Big{)}\geq 1-\delta. \tag{9}\]
The outer and inner probabilities are over the draws of \(\mathcal{D}_{\mathrm{cal}}\) and \((X_{\mathrm{test}},A_{\mathrm{test}})\), respectively. The new parameter \(\alpha\) can be interpreted as the maximum rate of making _any_ false positive predictions in which we select a component that is not in fact acceptable. Like \(\mathcal{C}_{\lambda}\), we calibrate \(\mathcal{C}_{\gamma}^{\mathrm{inner}}\) using LTT. In contrast to \(\mathcal{C}_{\lambda}\), however, we seek to make \(\mathcal{C}_{\gamma}^{\mathrm{inner}}\) as _large_ as possible. Concretely, let \(L^{\mathrm{c}}_{i}(\gamma)=\mathbf{1}\{\exists e\in\mathcal{C}_{\gamma}^{ \mathrm{inner}}\colon A^{\mathrm{c}}_{i}(e)=0\}\) and let \(\Gamma_{\mathrm{valid}}\) be the set of non-rejected configurations found by LTT (when using binomial tail p-values, \(p_{\gamma}^{\mathrm{BT}}\)). During calibration we define \(\mathcal{C}_{\gamma}^{\mathrm{inner}}(X_{i})\) using an upper bound to \(\mathcal{C}_{\lambda}(X_{i})\), by simply taking the first \(k_{\mathrm{max}}\) samples, \(\{y_{1},\ldots,y_{k_{\mathrm{max}}}\}\). We then use the configuration that empirically maximizes the average number of confident components (again, while reusing \(\mathcal{D}_{\mathrm{cal}}\)):
\[\hat{\gamma}=\operatorname*{argmax}_{\gamma\in\Gamma_{\mathrm{valid}}}\ \frac{1}{n}\sum_{i=1}^{n}|\mathcal{C}_{\gamma}(X_{i})|. \tag{10}\]
**Proposition 4.4** (Component-based LTT).: _Let \(\hat{\gamma}\) be defined according to Eq. (10), where \(\mathcal{C}_{\gamma}(X_{i})\) uses \(\mathcal{C}_{\lambda}(X_{i})\equiv\{y_{1},\ldots,y_{k_{\mathrm{max}}}\}\) during calibration. Then the prediction set of components, \(\mathcal{C}_{\gamma}(X_{\mathrm{test}})\) computed by Algorithm 2 paired with any \(\mathcal{C}_{\lambda}\) at test time with \(\sup_{x}|\mathcal{C}_{\lambda}(x)|\leq k_{\mathrm{max}}\) satisfies Eq. (9)._
Furthermore, by the union bound, Eq. (1) and Eq. (9) hold simultaneously with probability \(1-2\delta\).
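A minimal sketch of the component-level filter in Eq. (8) follows; the naive sentence splitter stands in for \(\mathcal{E}\) and `component_score_fn` for \(\mathcal{F}^{\mathrm{c}}\), both of which are purely illustrative.

```python
def split_components(text):
    """A simple stand-in for E: break a generation into sentence components."""
    return [s.strip() + "." for s in text.split(".") if s.strip()]

def confident_components(prediction_set, component_score_fn, gamma):
    """C^inner_gamma: components of any y in C_lambda(x) scoring at least gamma."""
    components = set()
    for y in prediction_set:
        components.update(split_components(y))
    return {e for e in components if component_score_fn(e) >= gamma}
```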
## 5 Experimental setup
In this section, we briefly describe our experimental setup. Appendix F contains additional details.
### Tasks
**Radiology report generation.** As motivated in SS1, we apply our method to chest X-ray radiology report generation using the **MIMIC-CXR**[26] dataset. For our LM, we fine-tune an encoder-decoder architecture based on a pretrained ViT [12] image encoder and a GPT2-small [51] text decoder. To judge admission, we use the popular Clinical Efficacy metric [40; 47] to check if the 14 labels predicted by an auxiliary CheXbert [62] model on the generated report exactly match the labels predicted by the same CheXbert model for a reference report from a radiologist. Similarly, a component (here a sentence including a finding) is defined to be admissible if it has a ROUGE-L [37] score \(\geq 0.4\) (picked through empirical validation), when compared to any component directly extracted from the reference.
**News summarization.** We also apply our method to news article text summarization using the **CNN/DM**[19] dataset. For our LM, we fine-tune a T5-XL [52] model. We define a candidate generation to be admissible if it has a ROUGE-L score higher than \(0.35\), when compared to all available reference summaries from human annotators. Like MIMIC-CXR, we define a component to be admissible if it has a ROUGE-L score \(\geq 0.4\) when compared to components extracted from human summaries.
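For reference, the ROUGE-L-thresholded admission rules above can be sketched as follows; this assumes the interface of the `rouge-score` package and interprets "compared to all available reference summaries" as requiring the threshold against each reference, so it is an illustration rather than the exact evaluation code.

```python
from rouge_score import rouge_scorer

_scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def admissible_summary(candidate, references, threshold=0.35):
    """A(y) = 1 iff the ROUGE-L F-measure against every reference exceeds the threshold."""
    return all(_scorer.score(ref, candidate)["rougeL"].fmeasure > threshold
               for ref in references)

def admissible_component(component, reference_components, threshold=0.4):
    """A^c(e) = 1 iff e matches some reference component with ROUGE-L >= threshold."""
    return any(_scorer.score(ref, component)["rougeL"].fmeasure >= threshold
               for ref in reference_components)
```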
**Open-domain question answering.** Finally, we apply our method to open-domain question answering using the **TriviaQA**[28] dataset. For this task, we sample answers from the LLaMA-13B [66] LM in the few-shot setting \((k=32)\), without any additional fine-tuning. Since answers are limited to one or few tokens, a candidate output generation is acceptable only if it exactly matches an annotated reference answer (after minor normalization for removing articles, casing, and punctuation). Furthermore, since the expected answers are short and fairly atomic, we do not evaluate component-level confidence.
### Scoring functions
As discussed in SS4.2, our method is implementation-agnostic and can support different choices of quality function \(\mathcal{Q}\), similarity function \(\mathcal{S}\), and set scoring function \(\mathcal{F}\). For our purposes, we show that a straightforward approach is to simply use transformations on the model likelihoods (from token logits). Specifically, we define \(\mathcal{Q}(x,y)=p_{\theta}(y\mid x)\) using the likelihood function of the base LM, with length-normalization [76]. We use ROUGE-L for \(\mathcal{S}\). For \(\mathcal{F}\), we experiment with the following variants:
* First-K. As a baseline, we score a set by its size, \(\mathcal{F}_{\textsc{First-K}}(\mathcal{C})=|\mathcal{C}|\), and do not use rejection. This corresponds to the number of samples taken, and follows the intuition from our toy example in SS4.2.
* Max. The \(\mathcal{F}_{\textsc{Max}}\) scoring function stems from the intuition that a set is only as good as its best element, and defines \(\mathcal{F}_{\textsc{Max}}(\mathcal{C})=\max\{\mathcal{Q}(y)\colon y\in \mathcal{C}\}\).
* Sum. Alternatively, we also use the sum of item-level scores: \(\mathcal{F}_{\textsc{Sum}}(\mathcal{C})=\sum_{y\in C}\mathcal{Q}(y)\).
### Metrics
Our main motivation is to produce valid confidence sets that are also precise. To reflect this, we measure the **loss** of our sets (which is guaranteed to satisfy our prescribed limits), as well as (a) the **relative number of "excess" samples** taken from our model (including rejected samples, see also Eq. (7)), and (b) the ultimate **output size** of the prediction set (after rejection). Both metrics are important, as over-sampling wastes computational budget (or expensive API calls), while large output sets can be unwieldy to use and overall less helpful as an uncertainty quantification tool. We measure results and compute the AUC over the range of achievable \(\epsilon\) or \(\alpha\) (using a **fixed** \(\delta=\mathbf{0.05}\)), excluding trivial values (e.g., values that a policy of always returning the first generation would already satisfy).
## 6 Experimental results
We now present our main results. In all plots, solid lines give the mean over 100 trials and shaded regions show \(+/-\) the standard deviation. Additional experimental results are reported in Appendix G.
**Validity of conformal sampling with rejection.** As per Theorem 4.2, we observe in Figure 2 that our conformal sampling approach is valid, as the average set loss often matches but never exceeds the
target risk level. Methods that have access to the model logits (namely Max and Sum) are close to the diagonal line, indicating that they are not overly conservative.
**Prediction efficiency.** The likelihood-based approaches outperform the uniform First-K baseline across all three tasks. For example, as Figure 2(c) shows, the AUCs of the expected set sizes of Max and Sum are both less than half that of First-K on the QA task. In tasks with longer output texts, First-K produces competitive set sizes across all achievable \(\epsilon\). However, it is overly conservative on easy examples at the expense of hard ones. This is revealed when plotting the relative number of excess samples, where the Max scoring function largely outperforms Sum and First-K.
**Individual components.** We evaluate two scoring functions \(\mathcal{F}^{c}\) to apply our conformal component selection method. A first method Span-logits extracts the likelihood of a candidate component, as produced by the language model. Since those likelihoods are conditioned on previous context, this scoring function may underestimate the score of a correct component if it follows an incorrect component. Instead, we use an application-specific Classifier to assign a conformity score to each component. We compare these methods to a Random baseline which assigns a random score to any \((x,e)\) pair. Figure 3 shows that by modeling components independently, we produce more effective (larger) sets. We include qualitative results in Appendix H.

Figure 3: Conformal component selection results for \(\mathcal{C}^{\mathrm{inner}}_{\gamma}\) as a function of \(\alpha\). We report the number of components identified in \(\mathcal{C}^{\mathrm{inner}}_{\gamma}\), which we want to maximize. We also report the AUC over \(\alpha\).

Figure 2: Conformal sampling results for \(\mathcal{C}_{\lambda}\) as a function of \(\epsilon\). We report the loss, relative excess samples, and overall size (normalized by \(k_{\mathrm{max}}\)). We also report the AUC over achieved/non-trivial \(\epsilon\).
## 7 Conclusion
Reliably using language models (LMs) in real-world tasks inevitably requires meaningful uncertainty quantification. In this paper, we introduced a novel approach to conformal prediction that allows a user to sample prediction sets from generative LMs with infinite, combinatorial output spaces, while retaining desirable statistical guarantees. Our method bridges the gap between standard conformal prediction and LM inference techniques by calibrating a stopping rule for an algorithm that iteratively grows an output prediction set by sampling new generations (with rejection). Moreover, we provide a method for separately identifying answer components that we are more confident are accurate. This can help users better understand the quality of LM answers, including which parts may be incorrect (and vice versa) within a larger, verbose output. Finally, we demonstrate our method on three popular LM applications. Compared to common practices (e.g., first \(k\)), we obtain more efficient prediction sets, both in terms of size and samples required, leading to more effective outputs that are also less costly to obtain.
|
2303.15635 | Spectral Turán problems for intersecting even cycles | Let $C_{2k_1, 2k_2, \ldots, 2k_t}$ denote the graph obtained by intersecting
$t$ distinct even cycles $C_{2k_1}, C_{2k_2}, \ldots, C_{2k_t}$ at a unique
vertex. In this paper, we determine the unique graphs with maximum adjacency
spectral radius among all graphs on $n$ vertices that do not contain any
$C_{2k_1, 2k_2, \ldots, 2k_t}$ as a subgraph, for $n$ sufficiently large. When
one of the constituent even cycles is a $C_4$, our results improve upper bounds
on the Tur\'an numbers for intersecting even cycles that follow from more
general results of F\"{u}redi [20] and Alon, Krivelevich and Sudakov [1]. Our
results may be seen as extensions of previous results for spectral Tur\'an
problems on forbidden even cycles $C_{2k}, k\ge 2$ (see [8, 34, 44, 45]). | Dheer Noal Desai | 2023-03-27T23:30:31Z | http://arxiv.org/abs/2303.15635v2 | # Spectral Turan problems for intersecting even cycles
###### Abstract
Let \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\) denote the graph obtained by intersecting \(t\) distinct even cycles \(C_{2k_{1}},C_{2k_{2}},\ldots,C_{2k_{t}}\) at a unique vertex. In this paper, we determine the unique graphs with maximum adjacency spectral radius among all graphs on \(n\) vertices that do not contain any \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\) as a subgraph, for \(n\) sufficiently large. When one of the constituent even cycles is a \(C_{4}\), our results improve upper bounds on the Turan numbers for intersecting even cycles that follow from more general results of Furedi [20] and Alon, Krivelevich and Sudakov [1]. Our results may be seen as extensions of previous results for spectral Turan problems on forbidden even cycles \(C_{2k},k\geq 2\) (see [8, 34, 44, 45]).
**Key words:** Spectral radius, Intersecting even cycles, Intersecting even cycles and paths, Extremal graph.
**Mathematics Subject Classification:** 05C35.
## 1 Introduction
For any graph \(F\), the Turan number \(\mathrm{ex}(n,F)\) is the maximum number of edges in any graph on \(n\) vertices that avoids any isomorphic copies of \(F\) as subgraphs. The set of \(n\)-vertex \(F\)-free graphs with \(\mathrm{ex}(n,F)\) many edges is denoted by \(\mathrm{EX}(n,F)\) and called the set of _extremal graphs_. For the complete graph, \(K_{r+1}\), Turan [40] proved that the Turan graphs \(T_{r}(n)\) are the only extremal graphs, where the Turan graph is the unique complete \(r\)-partite graph on \(n\) vertices with each part having either \(\left\lfloor\frac{n}{r}\right\rfloor\) or \(\left\lceil\frac{n}{r}\right\rceil\) vertices.
The celebrated Erdos-Stone-Simonovits theorem [17, 18] extends Turan's theorem to other graphs with chromatic number \(r+1\) and gives the exact asymptotics for the Turan numbers of graphs with chromatic number at least three. It states that if the chromatic number of \(F\) is \(r+1\), then
\[\mathrm{ex}(n,F)=\left(1-\frac{1}{r}+o(1)\right)\frac{n^{2}}{2}.\]
Consequently, when \(F\) is a bipartite graph and \(r=1\), we only get \(\mathrm{ex}(n,F)=o(n^{2})\), and it remains to determine the exact asymptotics for several basic bipartite graphs. For example, for bipartite graphs \(K_{s,t}\) with \(s\leq t\), the Kovari-Sos-Turan theorem [26] establishes the upper bound \(\mathrm{ex}(n,K_{s,t})=O(n^{2-1/s})\); however, matching lower bounds have only been confirmed for complete bipartite graphs \(K_{s,t}\) with \(t\geq(s-1)!+1\)[2, 21]. Another avenue for difficult Turan problems comes from even cycles. The order of magnitude for \(\mathrm{ex}(n,C_{2k})\) is determined only for \(k=2,3\) and \(5\)[23].
Let \(C_{k_{1},k_{2},\ldots,k_{t}}\) be the graph obtained by intersecting \(t\) cycles of lengths \(k_{i}\geq 3\), for \(1\leq i\leq t\), at a unique vertex. If all the the \(k_{i}\) are odd, we call such a graph an intersecting odd cycle. In [15] Erdos, Furedi, Gould and Gunderson determined the exact number \(\mathrm{ex}(n,F)\) and set of extremal graphs \(\mathrm{EX}(n,F)\) for \(F=C_{3,3,\ldots,3}\) the Friendship graph and \(n\) sufficiently large. More recently Hou, Qiu, and Liu [24, 25] and Yuan [42] solved the Turan problem for all other intersecting odd cycles \(C_{2k_{1}+1,2k_{2}+1,\ldots,2k_{t}+1}\).
The Turan problem remains wide open for intersecting even cycles \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\). For simplicity, assume that \(k_{1}\leq k_{2}\leq\ldots\leq k_{t-1}\leq k_{t}\).
Furedi [20] and Alon, Krivelevich and Sudakov [1] generalized the Kovari-Sos-Turan theorem to bipartite graphs \(H\) where one of the parts has maximum degree at most \(r\). They determined upper bounds \(\mathrm{ex}(n,H)=O(n^{2-1/r})\). This upper bound follows from Theorem 2.2 of [1] and we record it below for reference.
**Theorem 1.1**.: _[_1_]_ _Let \(H=(A\cup B,F)\) be a bipartite graph with sides \(A\) and \(B\) of sizes \(|A|=a\) and \(|B|=b\), respectively. Suppose that the degrees of all vertices \(b\in B\) in \(H\) do not exceed \(r\). Let \(G=(V,E)\) be a graph on \(n\) vertices with average degree \(d=2|E(G)|/n\). If_
\[\frac{d^{r}}{n^{r-1}}-{n\choose r}\left(\frac{a+b-1}{n}\right)^{r}>a-1,\]
_then \(G\) contains a copy of \(H\)._
When \(r=2\), this directly gives that \(\operatorname{ex}(n,H)\leq\frac{1}{2}\left(a-1+\frac{(a+b-1)^{2}}{2}-O(1/n) \right)^{1/2}n^{3/2}\).
It follows from Theorem 1.1 that
\[\operatorname{ex}(n,C_{2k_{1},2k_{2},\ldots,2k_{t}})\leq\frac{1}{2}\left( \kappa+\frac{(2\kappa+t)^{2}}{2}-O(1/n)\right)^{1/2}n^{3/2}, \tag{1}\]
where \(\kappa=\sum_{i=1}^{t}(k_{i}-1)\).
When \(r=2\), Conlon and Lee [11] further showed that \(\operatorname{ex}(n,H)=cn^{3/2}\) only if \(H\) contains a \(C_{4}\). Thereafter, Conlon, Janzer and Lee [12] further strengthened Theorem 1.1 as follows.
**Theorem 1.2**.: _[_12_]_ _Let \(H\) be a bipartite graph such that in one of the parts all the degrees are at most \(r\) and \(H\) does not contain \(C_{4}\) as a subgraph. Then \(\operatorname{ex}(n,H)=o(n^{2-1/r})\)._
More recently, Sudakov and Tomon [39] strengthened the result as follows.
**Theorem 1.3**.: _[_39_]_ _Let \(t\geq 2\) be an integer. Let \(H\) be a \(K_{t,t}\)-free bipartite graph such that every vertex in one of the parts of \(H\) has degree at most \(t\). Then \(\operatorname{ex}(n,H)=o(n^{2-1/t})\)._
In this paper, we study a spectral version of the Turan problem for intersecting even cycles. Analogous to the Turan problem, let \(\operatorname{spex}(n,F)\) denote the maximum spectral radius of the adjacency matrix of any \(F\)-free graph on \(n\) vertices, and \(\operatorname{SPEX}(n,F)\) denote the set of _spectral extremal graphs_ on \(n\) vertices with adjacency spectral radius equal to \(\operatorname{spex}(n,F)\) and having no isomorphic copies of \(F\) as subgraphs. Nikiforov [34] pioneered the systematic study of spectral Turan problems, although several sporadic results appeared earlier. In [30] Nikiforov proved that the only spectral extremal graphs for \(K_{r+1}\) are the Turan graphs \(T_{r}(n)\). When combined with the observation that the average degree of a graph lower bounds the spectral radius of its adjacency matrix, Nikiforov's result implies and strengthens Turan's theorem for complete graphs. Additionally, Nikiforov [33], Babai and Guiduli [3] proved spectral versions of the Kovari-Sos-Turan theorem for complete bipartite graphs \(K_{s,t}\). Moreover, using the average degree bound these match the best known upper bounds for \(\operatorname{ex}(n,K_{s,t})\), obtained by Furedi [22]. Recently, several papers have been published determining \(\operatorname{spex}(n,F)\) for various families of graphs (see [7, 9, 29, 31, 36, 41, 43, 45, 46]). In fact, our proof further develops techniques that appear in [7, 8]. Spectral Turan problems fit into the broader framework of _Brualdi-Solheid problems_[6] that investigate the maximum spectral radius among all graphs belonging to a specified family of graphs. Several results are known in this area (see [4, 5, 14, 19, 35, 37, 38]).
As mentioned earlier, solving Turan problems for even cycles has proven to be a difficult task. Nikiforov conjectured the spectral extremal graphs for even cycles in [34]. Recently, the conjectures were proved and \(\operatorname{SPEX}(n,C_{2k})\) were determined for all values of \(k\) when \(n\) is sufficiently large. The cases \(k=2\) was covered in [34, 45], the case \(k=3\) was covered in [44], and all other values of \(k\) were covered in [8].
In [10] Cioaba, Feng, Tait and Zhang solved the spectral Turan problem for friendship graphs \(C_{3,3,\ldots,3}\). Further, Li and Peng [27] solved the spectral extremal graphs for all other intersecting odd cycles \(C_{2k_{1}+1,2k_{2}+1,\ldots,2k_{t}+1}\). In [13], the authors generalized the results of [10] to intersecting cliques.
Let \(M_{k}:=\left\lfloor\frac{k}{2}\right\rfloor K_{2}\sqcup\gamma K_{1}\), where \(\gamma=k\mod 2\), denote a maximal matching on \(k\) vertices. Let \(S_{n,k}:=K_{k}\vee(n-k)K_{1}\) denote the join of a clique on \(k\) vertices and an independent set on \(n-k\) vertices, and \(S_{n,k}^{+}\) be the graph obtained by adding an edge to the independent set of \(S_{n,k}\). Finally, let \(F_{n,k}:=K_{k}\lor M_{n-k}=K_{k}\vee\left(\left\lfloor\frac{n-k}{2}\right\rfloor K_{2}\sqcup\gamma K_{1}\right)\) with \(\gamma=n-k\mod 2\), be the graph obtained from \(S_{n,k}\) by adding a maximal matching in the independent set of \(S_{n,k}\). For an integer \(k\), let \(k^{\prime}:=k-1\). We prove the following theorems for intersecting even cycles.
**Theorem 1.4**.: _Let \(\max\{k_{1},k_{2},\ldots,k_{t}\}\geq 3\) and \(\kappa:=\sum_{i=1}^{t}k^{\prime}_{i}\). If \(G\) is a graph of sufficiently large order \(n\), with \(\lambda(G)\geq\lambda(S_{n,\kappa}^{+})\) then \(G\) contains \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\) unless \(G=S_{n,\kappa}^{+}\)._
In Theorem 1.4, we are assuming that there is at least one even cycle with more than \(6\) vertices. In case all the even cycles have four vertices, then we have the following theorem.
**Theorem 1.5**.: _If \(G\) is a graph of sufficiently large order \(n\), with \(\lambda(G)\geq\lambda(F_{n,t})\), then \(G\) contains \(C_{4,4,\ldots,4}\) unless \(G=F_{n,t}\), where \(C_{4,4,\ldots,4}\) denotes the intersecting even cycle consisting of \(t\) cycles of length four that intersect at a unique vertex._
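The extremal constructions in Theorems 1.4 and 1.5 are easy to instantiate; the short sketch below (illustrative, and not part of the paper) builds \(S_{n,k}\), \(S_{n,k}^{+}\) and \(F_{n,k}\) with networkx and computes their adjacency spectral radii numerically.

```python
import networkx as nx
import numpy as np

def S(n, k):
    """S_{n,k}: a clique on {0,...,k-1} joined to an independent set on n - k vertices."""
    G = nx.complete_graph(k)
    G.add_nodes_from(range(k, n))
    G.add_edges_from((i, j) for i in range(k) for j in range(k, n))
    return G

def S_plus(n, k):
    """S^+_{n,k}: S_{n,k} plus one edge inside the independent set."""
    G = S(n, k)
    G.add_edge(k, k + 1)
    return G

def F(n, k):
    """F_{n,k}: S_{n,k} plus a maximal matching inside the independent set."""
    G = S(n, k)
    G.add_edges_from((i, i + 1) for i in range(k, n - 1, 2))
    return G

def spectral_radius(G):
    return float(np.max(np.linalg.eigvalsh(nx.to_numpy_array(G))))

# Example: kappa = 3 (for instance C_{4,6}, with k_1' = 1 and k_2' = 2) on n = 200 vertices.
print(spectral_radius(S(200, 3)), spectral_radius(S_plus(200, 3)), spectral_radius(F(200, 3)))
```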
A consequence of Theorems 1.4 and 1.5 is that \(\mathrm{ex}(n,C_{2k_{1},2k_{2},\ldots,2k_{t}})\leq\frac{1}{2}(\kappa+o(1))n^{ 3/2}\). This improves the bounds in (1) and is not covered by Theorem 1.2 if \(k_{i}=2\) for some \(i\), since one of the even cycles is a \(4\)-cycle.
Interestingly, very recently, Fang, Zhai and Lin [28] found similar spectral extremal graphs for disjoint even cycles and showed that \(\mathrm{SPEX}(n,tC_{2k})=S_{n,t-1}^{+}\) for \(k\geq 3\), and \(\mathrm{SPEX}(n,tC_{4})=F_{n,2t-1}\). We note that Theorems 1.4 and 1.5 along with the results from [28] may be viewed as extensions of the spectral Turan theorem on even cycles proved in [8, 34, 44, 45].
We end this section with some results that either follow from Theorems 1.4 and 1.5, or their proofs are almost identical to those given for Theorems 1.4 and 1.5.
_Remark 1_.: Observe that the intersecting even cycle \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\) also contains other bipartite graphs \(H\) that are not contained in \(S_{n,\kappa}^{+}\) or \(F_{n,t}\) (depending on whether or not \(H\) is a subgraph of an intersecting even cycle where at least one of the even cycles has more than four vertices). This implies that for \(n\) sufficiently large, \(\mathrm{SPEX}(n,C_{2k_{1},2k_{2},\ldots,2k_{t}})=\mathrm{SPEX}(n,H)\in\{S_{n,\kappa}^{+},F_{n,t}\}\). Note that the size of the smallest color class of \(H\) must also be \(\kappa+1\) (or \(t+1\)), for it to not be a subgraph of \(S_{n,\kappa}^{+}\) (or \(F_{n,t}\)).
As an application of our observations in Remark 1, we determine the spectral extremal graphs for the following families of graphs. Let \(\mathcal{CP}_{2k_{1},\ldots,2k_{t_{1}};2p_{1},\ldots,2p_{t_{2}}}\) be the graph obtained by intersecting \(t_{1}\) even cycles of lengths \(2k_{i}\geq 4\), for \(i\leq t_{1}\), and \(t_{2}\) even paths on \(2p_{i}\) vertices where \(2p_{i}\geq 4\), for \(i\leq t_{2}\) at a unique vertex. Equivalently, we can obtain \(\mathcal{CP}_{2k_{1},\ldots,2k_{t_{1}};2p_{1},\ldots,2p_{t_{2}}}\) from \(C_{2k_{1},\ldots,2k_{t_{1}},2p_{1},\ldots,2p_{t_{2}}}\) by deleting one edge from each of the constituent \(C_{2p_{i}}\), where the deleted edge was adjacent to the central vertex, for all \(1\leq i\leq t_{2}\). Clearly, \(\mathcal{CP}_{2k_{1},\ldots,2k_{t_{1}};2p_{1},\ldots,2p_{t_{2}}}\subset C_{2k _{1},\ldots,2k_{t_{1}},2p_{1},\ldots,2p_{t_{2}}}\). Then the following two results for intersecting even cycles and paths directly follow from Theorems 1.4 and 1.5.
**Theorem 1.6**.: _Let \(\max\{k_{1},k_{2},\ldots,k_{t_{1}},p_{1},p_{2},\ldots,p_{t_{2}}\}\geq 3\) where \(t_{1}+t_{2}\geq 1\), and \(\kappa:=\left(\sum_{i=1}^{t_{1}}k_{i}\right)+\left(\sum_{i=1}^{t_{2}}p_{i} \right)-(t_{1}+t_{2})\). If \(G\) is a graph of sufficiently large order \(n\) and \(\lambda(G)\geq\lambda(S_{n,\kappa}^{+})\) then \(G\) contains \(\mathcal{CP}_{2k_{1},\ldots,2k_{t_{1}};2p_{1},\ldots,2p_{t_{2}}}\) unless \(G=S_{n,\kappa}^{+}\)._
**Theorem 1.7**.: _Let \(k_{i}=p_{j}=2\) for \(0\leq i\leq t_{1},0\leq j\leq t_{2}\) where \(t_{1}+t_{2}\geq 1\), and \(t:=t_{1}+t_{2}\). If \(G\) is a graph of sufficiently large order \(n\) and \(\lambda(G)\geq\lambda(F_{n,t})\) then \(G\) contains \(\mathcal{CP}_{2k_{1},\ldots,2k_{t_{1}};2p_{1},\ldots,2p_{t_{2}}}\) unless \(G=F_{n,t}\)._
_Remark 2_.: The intersecting even cycles \(C_{2k_{1},\ldots,2k_{t}}\) also contain several other connected bipartite subgraphs with a smallest color class consisting of exactly \(\kappa+1\) vertices, apart from those mentioned above. Some of these, however, are subgraphs of \(S_{n,\kappa}^{+}\) (or \(F_{n,\kappa=t}\)). For example, consider the graphs obtained by deleting at most one edge from every constituent cycle of \(C_{2k_{1},\ldots,2k_{t}}\), where none of the deleted edges are adjacent to the central vertex. If at least one edge is deleted, such graphs are not contained in \(S_{n,\kappa}\) but are contained in \(S_{n,\kappa}^{+}\) (or \(F_{n,\kappa=t}\)).
_Remark 3_.: Let \(H\) be a connected bipartite subgraph of \(C_{4,4,\ldots,4}\), the intersecting even cycle consisting of \(t\) cycles of length four that intersect at a unique vertex. If
\(H\subset F_{n,t}\), then \(H\subset S_{n,t}^{+}\). To see this, first note that if \(H\subset F_{n,t}\), it follows from Theorem 1.7 that \(H\) must be obtained from \(C_{4,4,\ldots,4}\) by deleting at least one edge not adjacent to its center. Next, observe that any graph obtained from \(C_{4,4,\ldots,4}\) by deleting an edge not adjacent to its center, must be contained in \(S_{n,t}^{+}\) as well.
It follows from Remarks 2 and 3 that if \(H\subset C_{2k_{1},2k_{2},\ldots,2k_{t}}\), \(H\subset S_{n,\kappa}^{+}\) (or \(F_{n,\kappa=t}\)) and the size of the smallest color class of \(H\) is \(\kappa+1\), then, in fact, \(H\) is not contained in \(S_{n,\kappa}\). We are now ready to state our final theorem, which has an almost identical proof to that for Theorems 1.4 and 1.5. We therefore do not include an independent proof for it, to avoid redundancy, but include additional remarks to describe the slight modifications required for the same lemmas to hold.
**Theorem 1.8**.: _Let \(k_{1},k_{2},\ldots,k_{t}\geq 2\) and \(\kappa:=\left(\sum_{i=1}^{t}k_{i}\right)-t\). Let \(H\) be a connected subgraph of \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\) with a smallest color class consisting of \(\kappa+1\) vertices. If either \(\max\{k_{1},k_{2},\ldots,k_{t}\}\geq 3\) and \(H\subset S_{n,\kappa}^{+}\) or \(k_{1}=k_{2}=\ldots=k_{t}\) and \(H\subset F_{n,\kappa}\), then for sufficiently large order \(n\), \(\mathrm{SPEX}(n,H)=S_{n,\kappa}\)._
## 2 Organization and Notation
For an integer \(k\), let \(k^{\prime}:=k-1\). In what follows, whenever we use \(t\) positive integers \(k_{1},k_{2},\ldots,k_{t}\geq 2\) to define an intersecting even cycle \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\), we will assume without loss of generality that \(k_{1}\leq k_{2}\leq\ldots\leq k_{t}\) and \(k_{1}\geq 2\). Also, let \(\kappa:=\sum_{i=1}^{t}(k_{i}-1)=\sum_{i=1}^{t}k_{i}^{\prime}\). We wish to show that \(\operatorname{\mathrm{SPEX}}(n,C_{2k_{1},2k_{2},\ldots,2k_{t}})=S_{n,\kappa}^{+}\) whenever \(k_{t}\geq 3\), and that \(\operatorname{\mathrm{SPEX}}(n,C_{2k_{1},2k_{2},\ldots,2k_{t}})=F_{n,t}=K_{t}\vee\left(\left\lfloor\frac{n-t}{2}\right\rfloor K_{2}\cup\gamma K_{1}\right)\), where \(\gamma=n-t\mod 2\), whenever \(k_{1}=k_{2}=\ldots=k_{t}=2\).
For \(n\) sufficiently large, let \(G\) denote some graph in \(\operatorname{\mathrm{SPEX}}(n,C_{2k_{1},2k_{2},\ldots,2k_{t}})\).
Let x be the scaled Perron vector whose maximum entry is equal to \(1\) and let \(z\) be a vertex having Perron entry \(\mathrm{x}_{z}=1\). We define positive constants \(\eta,\epsilon\) and \(\alpha\) satisfying
\[\eta <\min\left\{\frac{1}{10\kappa},\frac{1}{(\kappa+t)}\cdot\left[ \left(1-\frac{4}{5\kappa}\right)\left(\kappa-\frac{1}{16\kappa^{2}}\right)-( \kappa-1)\right]\right\}\] \[\epsilon <\min\left\{\eta,\frac{\eta}{2},\frac{1}{16\kappa^{3}},\frac{ \eta}{32\kappa^{3}+2}\right\} \tag{2}\] \[\alpha <\min\left\{\eta,\frac{\epsilon^{2}}{20\kappa}\right\}.\]
We leave the redundant conditions in these inequalities so that when we reference this choice of constants it is clear what we are using.
Let \(L\) be the set of vertices in \(V(G)\) having "large" weights and \(S=V\setminus L\) be the vertices of "small" weights,
\[L:=\{v\in V(G)|\mathrm{x}_{v}\geq\alpha\},S:=\{v\in V(G)|\mathrm{x}_{v}< \alpha\}.\]
Also, let \(M\supset L\) be the set of vertices of "not too small" a weight:
\[M:=\{v\in V(G)|\mathrm{x}_{v}\geq\alpha/4\}.\]
Denote by \(L^{\prime}\subset L\) the following subset of vertices of "very large" weights:
\[L^{\prime}:=\{v\in L|\mathrm{x}_{v}\geq\eta\}.\]
For any set of vertices \(A\subset V(G)\) and constant \(\gamma\), let \(A^{\gamma}=\{v\in A|\mathrm{x}_{v}\geq\gamma\}\).
For any graph \(H=(V,E)\), let \(N_{i}(u)\) be the vertices at distance \(i\) from some vertex \(u\in V\). For any set of vertices \(A\subset V\), let \(A_{i}=A_{i}(u)=A\cap N_{i}(u)\). Thus, for the graph \(G\), \(L_{i}(u),S_{i}(u)\) and \(M_{i}(u)\) are the sets of vertices in \(L,S\) and \(M\) respectively, at distance \(i\) from \(u\). If the vertex is unambiguous from context, we will use \(L_{i}\), \(S_{i}\), and \(M_{i}\) instead. Let \(d(u):=|N_{1}(u)|\) and \(d_{A}(u):=|A_{1}(u)|\) denote the _degree_ of \(u\) in \(H\) and the number of neighbors of \(u\) lying in \(A\), respectively. In case the graph \(H\) is not clear from context, we will use \(d_{H}(u)\) instead of \(d(u)\) to denote the degree of \(u\) in \(H\). Also, let \(H[A]\) denote the induced subgraph of \(H\) by the set of vertices \(A\), and for two disjoint subsets \(A,B\subset V\), let \(H[A,B]\) denote the bipartite subgraph of \(H\) induced by the sets \(A\) and \(B\) that has vertex set \(A\cup B\) and edge set consisting of all edges with one vertex in \(A\) and the other in \(B\). Finally, let \(E(A)\), \(E(A,B)\) and \(E(H)\) denote the set of edges of \(H[A]\), \(H[A,B]\) and \(H\), respectively, and \(e(A),e(A,B)\), and \(e(H)\) denote the cardinalities \(|E(A)|\), \(|E(A,B)|\) and \(|E(H)|\), respectively.
In Section 3 we give some background graph theory lemmas and prove preliminary lemmas on the spectral radius and Perron vector of spectral extremal graphs. We reiterate that every lemma applies to the spectral extremal graphs both when the forbidden graph is an intersecting even cycle and when it is some \(H\subset C_{2k_{1},\ldots,2k_{t}}\) as in the statement of Theorem 1.8. Observations specific to Theorem 1.8 appear only in Remarks 4 and 5. In Section 4 we progressively refine the structure of our spectral extremal graphs. Using these results, we prove Theorems 1.4 and 1.5 in Section 5.
## 3 Lemmas from spectral and extremal graph theory
In this section we record several lemmas that will be used subsequently. Some lemmas involve calculations that require \(n\) to be sufficiently large, without stating this explicitly. We begin this section by recording two known results.
**Lemma 3.1**.: _[_8_]_ _For a non-negative symmetric matrix \(B\), a non-negative non-zero vector \(y\) and a positive constant \(c\), if \(By\geq cy\) entrywise, then \(\lambda(B)\geq c\)._
**Lemma 3.2** (Erdos-Gallai [16]).: _Any graph on \(n\) vertices with no subgraph isomorphic to a path on \(\ell\) vertices has at most \(\frac{(\ell-2)n}{2}\) edges._
It is easy to see that any bipartite graph is contained in the complete bipartite graph whose color classes have the same sizes as its own. The following lemma is the version of this known fact for intersecting even cycles, and we record it here for later reference.
**Lemma 3.3**.: _For \(1\leq i\leq t\), \(k_{i}\geq 3\), and \(\sum_{i=1}^{t}k_{i}^{\prime}\leq\kappa\), the bipartite graph \(K_{\kappa+1,\kappa+t}\) contains all intersecting cycles \(C_{2k_{1},\ldots,2k_{t}}\)._
Proof.: The intersecting cycle \(C_{2k_{1},\ldots,2k_{t}}\) is a bipartite graph, whose smaller part (which contains the center of the intersecting cycle) has \(\left(\sum_{i=1}^{t}k_{i}^{\prime}\right)+1\leq\kappa+1\) vertices and larger part has \(\kappa+t\) vertices. It is clear from this that \(C_{2k_{1},\ldots,2k_{t}}\subset K_{\kappa+1,\kappa+t}\).
_Remark 4_.: Note that, if \(H\) is a subgraph of \(C_{2k_{1},\ldots,2k_{t}}\), with \(\kappa\) as defined above, then \(H\) is also contained in \(K_{\kappa+1,\kappa+t}\).
The following lemma will be used to show that any \(C_{2k_{1},\ldots,2k_{t}}\)-free graph \(G\) cannot contain a large path in \(E(G[N_{1}(v)]\cup G[N_{1}(v),N_{2}(v)])\), for any vertex \(v\) of \(G\).
**Lemma 3.4**.: _Let \(1\leq i\leq t\), \(k_{i}\geq 2\), with \(\sum_{i=1}^{t}k_{i}^{\prime}\leq\kappa\) and \(H=(V,E)\) be a graph whose vertices are partitioned into two sets \(U\) and \(W\). If \(E=E(H[U])\cup E(H[U,W])\), and \(H\) has a path on \(4\kappa+t\) vertices, then \(H\) has \(t\) disjoint paths on \(2k_{i}^{\prime}+1\) vertices, \(1\leq i\leq t\), and all the endpoints of these paths are in \(U\)._
Proof.: We will prove the lemma by first proving the following claim.
_Claim 3.4.1_.: If \(H\) contains a path on \(4k_{i}^{\prime}+1\) vertices, then \(H\) must contain a path on \(2k_{i}^{\prime}+1\) vertices with both endpoints in \(U\).

Proof.: First note that, since every edge of \(H\) has at least one endpoint in \(U\), no two consecutive vertices of a path can both lie in \(W\). Hence, if \(H\) has a path on \(4k_{i}^{\prime}+1\) vertices, at least one of its first two vertices lies in \(U\), and there must be a subpath \(v_{1}v_{2}\ldots v_{4k_{i}^{\prime}}\) on \(4k_{i}^{\prime}\) vertices where we may assume without loss of generality that \(v_{1}\in U\). We proceed to show, via contradiction, that \(v_{1}v_{2}\ldots v_{4k_{i}^{\prime}}\) contains a path \(v_{i_{0}}v_{i_{0}+1}\ldots v_{i_{0}+2k_{i}^{\prime}}\) on \(2k_{i}^{\prime}+1\) vertices with both endpoints in \(U\). To do this, assume to the contrary that there is no such subpath on \(2k_{i}^{\prime}+1\) vertices with both endpoints in \(U\).

Since \(v_{1}\in U\), we have that \(v_{2k_{i}^{\prime}+1}\in W\), for otherwise \(v_{1}\ldots v_{2k_{i}^{\prime}+1}\) would be such a subpath. Then both \(v_{2k_{i}^{\prime}},v_{2k_{i}^{\prime}+2}\in U\), since no two consecutive vertices lie in \(W\). Now, if \(v_{2}\in U\) then we have a contradiction, as \(v_{2}\ldots v_{2k_{i}^{\prime}+2}\) is a path on \(2k_{i}^{\prime}+1\) vertices with both endpoints in \(U\). So assume \(v_{2}\in W\) and therefore \(v_{3}\in U\). Thus \(v_{2k_{i}^{\prime}+3}\in W\) and \(v_{2k_{i}^{\prime}+4}\in U\). Proceeding similarly we get
\[v_{4}\in W,v_{5}\in U,v_{2k_{i}^{\prime}+5}\in W,\ldots,v_{2k_{i}^{\prime}-2} \in W,v_{2k_{i}^{\prime}-1}\in U,v_{4k_{i}^{\prime}-1}\in W,v_{4k_{i}^{ \prime}}\in U,\text{ and finally }v_{2k_{i}^{\prime}}\in W,\]
a contradiction. Thus there must exist a path \(v_{i_{0}}v_{i_{0}+1}\ldots v_{i_{0}+2k_{i}^{\prime}}\) in \(v_{1}v_{2}\ldots v_{4k_{i}^{\prime}}\) with both endpoints in \(U\).
Next assume that \(H\) contains a path \(v_{1}v_{2}\ldots v_{4\kappa+t}\) on \(4\kappa+t\) vertices. Then using our previous claim we have that the path \(v_{1}v_{2}\ldots v_{4k_{1}^{\prime}+1}\) must have a subpath with \(2k_{1}^{\prime}+1\) vertices, both of whose endpoints are in \(U\). Similarly, the path \(v_{4k_{1}^{\prime}+2}v_{4k_{1}^{\prime}+3}\ldots v_{4k_{1}^{\prime}+4k_{2}^{\prime}+2}\) must contain a subpath with \(2k_{2}^{\prime}+1\) vertices, both of whose endpoints are in \(U\). In general, we have that the path \(v_{4(\sum_{j=1}^{i-1}k_{j}^{\prime})+i}v_{4(\sum_{j=1}^{i-1}k_{j}^{\prime})+i+1}\ldots v_{4(\sum_{j=1}^{i}k_{j}^{\prime})+i}\) must contain a subpath with \(2k_{i}^{\prime}+1\) vertices, both of whose endpoints are in \(U\), for all \(1\leq i\leq t\); these segments are pairwise disjoint.
Thus, the statement of the lemma follows.
We can combine Lemmas 3.2 and 3.4 to get the following Lemma for disjoint paths in a subgraph.
**Lemma 3.5**.: _Suppose that \(t\geq 1\), \(k_{i}^{\prime}\geq 1\) for all \(1\leq i\leq t\), and \(\kappa=\sum_{i=1}^{t}k_{i}^{\prime}\). For a graph \(\hat{H}\) on \(n\) vertices, let \(U\sqcup W\) be a partition of its vertices. If_
\[e(U)+e(U,W)>\frac{(4\kappa+t-2)}{2}n \tag{3}\]
_then there exist \(t\) disjoint paths of orders \(2k_{i}^{\prime}+1\), for \(1\leq i\leq t\), with both ends in \(U\)._
_Further, if_
\[2e(U)+e(U,W)>\frac{(4\kappa+t-2)}{2}(|U|+n) \tag{4}\]
_then there exist \(t\) disjoint paths of orders \(2k_{i}^{\prime}+1\), for \(1\leq i\leq t\), with both ends in \(U\)._
Proof.: Let \(H\) be the subgraph of \(\hat{H}\) on the same vertex set \(U\sqcup W\) with edge set \(E(U)\sqcup E(U,W)\). Then by applying Lemma 3.2 on \(H\) and the assumption \(e(U)+e(U,W)>\frac{(4\kappa+t-2)}{2}n\), we have that \(H\) must contain a path of order \(4\kappa+t\). Applying Lemma 3.4 on the subgraph \(H\) of \(\hat{H}\) which is partitioned into two sets \(U\) and \(W\), we have that \(H\) and therefore \(\hat{H}\) contains \(t\) disjoint paths of lengths \(2k_{i}^{\prime}+1\) for all \(1\leq i\leq t\).
Similarly, if \(e(U)>\frac{(4\kappa+t-2)}{2}|U|\), we have that \(\hat{H}[U]\) and therefore \(\hat{H}\) contains \(t\) disjoint paths of orders \(2k_{i}^{\prime}+1\), \(1\leq i\leq t\), with both ends in \(U\). Thus, if \(\hat{H}\) does not contain \(t\) disjoint paths of orders \(2k_{i}^{\prime}+1\), \(1\leq i\leq t\), with both ends in \(U\), we must have that \(2e(U)+e(U,W)=e(U)+e(U)+e(U,W)\leq\frac{(4\kappa+t-2)}{2}(|U|+n)\).
The following result on the sum of degree squares of a graph generalizes Theorem 2 of [32] to the case where the forbidden graphs are intersecting even cycles.
**Theorem 3.6**.: _Let \(G\) be a graph with \(n\) vertices and \(m\) edges. If \(G\) does not contain a \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\) where \(k_{1}\geq 2\), then_
\[\sum_{u\in V(G)}d_{G}^{2}(u)<(4\kappa+t)(n-1)n. \tag{5}\]
Proof.: Let \(u\) be any vertex of \(G\). Now
\[\sum_{v\in N_{1}(u)}(d(v)-1)=2e(G[N_{1}(u)])+e(G[N_{1}(u),N_{2}(u )]) \leq\frac{(4\kappa+t-2)}{2}\left(|N_{1}(u)|+(|N_{1}(u)|+|N_{2}(u)|)\right) \tag{6}\] \[\leq(2\kappa+t/2-1)(d(u)+n-1).\]
Thus,
\[\sum_{u\in V(G)}d_{G}^{2}(u)=\sum_{u\in V(G)}\sum_{v\in N_{1}(u)} d(v) \leq\sum_{u\in V(G)}\big{(}(2\kappa+t/2-1)(d(u)+n-1)+d(u)\big{)} \tag{7}\] \[\leq(4\kappa+t)e(G)+(2\kappa+t/2-1)(n-1)n\] \[<(4\kappa+t)(n-1)n,\]
where the last inequality uses \(e(G)\leq\binom{n}{2}\).
The following result follows from the works of Furedi [20], and Alon, Krivelevich and Sudakov [1]. The latter is recorded as Theorem 1.1 above.
**Lemma 3.7**.: _[_1_, 20]_ _Let \(H\) be a bipartite graph with maximum degree at most two on one side, then there exists a positive constant \(C\) such that_
\[\mathrm{ex}(n,H)\leq Cn^{3/2}.\]
The following is a result of Conlon and Lee which implies that equality occurs in Lemma 3.7 if and only if \(H\) contains a \(C_{4}\). We record this as it appears in Theorem 1.3 of [11].
**Lemma 3.8**.: _[_11_]_ _For any bipartite graph \(H\) with maximum degree two on one side containing no \(C_{4}\), there exist positive constants \(C\) and \(\delta\) such that_
\[\mathrm{ex}(n,H)\leq Cn^{3/2-\delta}.\]
In particular, if we apply Lemma 3.8 to intersecting even cycles without any \(C_{4}\), we get the following lemma.
**Lemma 3.9**.: _For any intersecting cycle \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\) containing no \(C_{4}\), there exist positive constants \(C\) and \(\delta\) such that_
\[\mathrm{ex}(n,C_{2k_{1},2k_{2},\ldots,2k_{t}})\leq Cn^{3/2-\delta}. \tag{8}\]
Next we determine bounds for the spectral radius of a spectral extremal graph \(G\).
**Lemma 3.10**.: \(\sqrt{\kappa n}\leq\frac{\kappa-1+\sqrt{(\kappa-1)^{2}+4\kappa(n-\kappa)}}{2} \leq\lambda(G)\leq\sqrt{(4\kappa+t)(n-1)}<\sqrt{5\kappa n}\)
Proof.: The lower bound is obtained from the spectral radius of \(S_{n,\kappa}\). The upper bound is obtained by using the second-degree eigenvalue-eigenvector equation along with Lemma 3.5:
Let \(u\in V(G)\). We use Lemma 3.5 over the graph \(G[N_{1}(u)\cup N_{2}(u)]\) with \(U=N_{1}(u)\) and \(W=N_{2}(u)\). We know that \(|N_{1}(u)|=d(u)\). Also, there cannot be \(t\) disjoint paths on \(2k_{i}^{\prime}+1\) vertices, for \(1\leq i\leq t\) in \(G[N_{1}(u)\cup N_{2}(u)]\) with both end points in \(N_{1}(u)\), else \(G[\{u\}\cup N_{1}(u)\cup N_{2}(u)]\) contains a \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\) with center \(u\). So (4) implies that
\[\begin{split} 2e(N_{1}(u))+e(N_{1}(u),N_{2}(u))&\leq \frac{(4\kappa+t-2)}{2}(2|N_{1}(u)|+|N_{2}(u)|)\\ &\leq(2\kappa+t/2-1)(d(u)+n-1).\end{split} \tag{9}\]
The spectral radius of a non-negative matrix is at most the maximum of the row-sums of the matrix. Applying this result for \(A^{2}(G)\) and its spectral radius \(\lambda^{2}\) and using (9), we obtain that
\[\begin{split}\lambda^{2}&\leq\max_{u\in V(G)}\big{\{} \sum_{w\in V(G)}A_{u,w}^{2}\big{\}}=\max_{u\in V(G)}\big{\{}\sum_{v\in N(u)}d( v)\big{\}}\\ &=\max_{u\in V(G)}\big{\{}d(u)+2e(N_{1}(u))+e(N_{1}(u),N_{2}(u)) \big{\}}\\ &\leq(2\kappa+t/2)d(u)+(2\kappa+t/2-1)(n-1)\\ &<(4\kappa+t)(n-1).\end{split} \tag{10}\]
Thus, \(\lambda<\sqrt{(4\kappa+t)(n-1)}<\sqrt{5\kappa n}\), since \(t\leq\sum_{i=1}^{t}k_{i}^{\prime}=\kappa\).
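For orientation, the lower bound can be made explicit under the standard definition \(S_{n,\kappa}=K_{\kappa}\vee(n-\kappa)K_{1}\) (a clique on \(\kappa\) vertices joined completely to an independent set on \(n-\kappa\) vertices, consistent with the structure of the extremal graphs obtained in Section 5; this definition is an assumption here, since it is stated earlier in the paper). The partition of \(V(S_{n,\kappa})\) into these two parts is equitable, with quotient matrix

\[B=\begin{pmatrix}\kappa-1&n-\kappa\\ \kappa&0\end{pmatrix},\]

and since the Perron vector of \(S_{n,\kappa}\) is constant on the two parts, \(\lambda(S_{n,\kappa})\) is the largest root of \(x^{2}-(\kappa-1)x-\kappa(n-\kappa)=0\), that is, \(\lambda(S_{n,\kappa})=\frac{\kappa-1+\sqrt{(\kappa-1)^{2}+4\kappa(n-\kappa)}}{2}\), which is exactly the middle quantity in the lemma. It is at least \(\sqrt{\kappa n}\) for \(n\) sufficiently large, because this root \(x\) satisfies \(x^{2}=(\kappa-1)x+\kappa(n-\kappa)\), so \(x^{2}\geq\kappa n\) as soon as \((\kappa-1)x\geq\kappa^{2}\).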
We will now determine upper bounds for the number of vertices in the sets \(L\) and \(M\) that consist of vertices with relatively large weights.
**Lemma 3.11**.: _There exists a positive constant \(\delta\) such that \(|L|\leq\frac{n^{1-\delta}}{\alpha}\) and \(|M|\leq\frac{4n^{1-\delta}}{\alpha}\)._
Proof.: We will break the proof into two cases depending on whether or not there exists a positive constant \(\delta\) such that \(\mathrm{ex}(n,C_{2k_{1},\ldots 2k_{t}})\leq\frac{\sqrt{\kappa}}{2}n^{3/2-\delta}\) (that is, depending on whether or not \(C_{2k_{1},\ldots,2k_{t}}\) contains a \(C_{4}\)). The proof for the second case also proves the first case without any \(C_{4}\). However we include both proofs to exhibit the need for more delicate techniques when \(\mathrm{ex}(n,F)=\Theta(n^{3/2})\).
_Case 1_.: \(\mathrm{ex}(n,C_{2k_{1},\ldots,2k_{t}})\leq\frac{\sqrt{\kappa}}{2}n^{3/2-\delta}\)
For any \(v\in V\), we have that
\[\lambda\mathrm{x}_{v}=\sum_{u\sim v}\mathrm{x}_{u}\leq d(v). \tag{11}\]
Combining this with the lower bound in Lemma 3.10 gives \(\sqrt{\kappa n}\mathrm{x}_{v}\leq\lambda\mathrm{x}_{v}\leq d(v)\). Summing over all vertices \(v\in L\) gives
\[|L|\sqrt{\kappa n}\alpha\leq\sum_{v\in L}d(v)\leq\sum_{v\in V}d(v)\leq 2e(G) \leq 2\mathrm{ex}(n,C_{2k_{1},2k_{2},\ldots,2k_{t}})\leq\sqrt{\kappa}n^{3/2 -\delta}. \tag{12}\]
Thus, \(|L|\leq\frac{n^{1-\delta}}{\alpha}\). A similar argument shows that \(|M|\leq\frac{4n^{1-\delta}}{\alpha}\).
_Case 2_.: \(\mathrm{ex}(n,C_{2k_{1},\ldots,2k_{t}})=\Theta(n^{3/2})\)
Let \(G\in\mathrm{SPEX}(n,C_{2k_{1},2k_{2},\ldots,2k_{t}})\) where \(2=k_{1}\leq k_{2}\leq\ldots\leq k_{t}\). We will show that \(|L|<n^{3/4}<\frac{n^{1-\delta}}{\alpha}\). Assume to the contrary that \(|L|\geq n^{3/4}\). Every vertex in \(L\) has degree at least \(\sqrt{\kappa n}\alpha\). We know by our observations in (1) and Lemma 3.7 that there is some positive constant \(C\) such that \(e(G)\leq\mathrm{ex}(n,C_{2k_{1},2k_{2},\ldots,2k_{t}})\leq Cn^{3/2}\). Thus, at most \(n^{3/4}\) vertices in \(L\) have degree at least \(2Cn^{3/4}\) and consequently, at least \(|L|-n^{3/4}\) vertices of \(L\) have degree less than \(2Cn^{3/4}\). Let \(\mathbb{L}\) denote this subset of vertices of \(L\) with degree less than \(2Cn^{3/4}\). If \(|L|>n^{3/4}\), we will show a
contradiction by proving that \(\mathbb{L}=\emptyset\). To this end, let us assume there is a vertex \(v\in\mathbb{L}\). Then \(d(v)<2Cn^{3/4}\). Now let \(H\) denote the graph with vertex set \(N_{1}(v)\cup N_{2}(v)\) and \(E(H)=E(N_{1}(v),N_{2}(v))\). Then
\[\kappa n\alpha\leq\kappa n\mathbf{x}_{v}\leq\lambda^{2}\mathbf{x}_{v}=\sum_{u \sim v}\sum_{w\sim u}\mathbf{x}_{w}\leq C_{1}n^{3/4}+\sum_{u\sim v}\sum_{ \begin{subarray}{c}w\sim u\\ w\in N_{2}(v)\end{subarray}}\mathbf{x}_{w}.\]
Thus, \(e(N_{1}(v),N_{2}(v))\geq\sum_{u\sim v}\sum_{\begin{subarray}{c}w\sim u\\ w\in N_{2}(v)\end{subarray}}\mathbf{x}_{w}\geq\kappa n\mathbf{x}_{v}-C_{1}n^{3 /4}\geq(\kappa-\epsilon)n\mathbf{x}_{v}\), where we may take \(\epsilon>0\) as small as required by taking \(n\) sufficiently large. Note that we used \(e(N_{1})\leq\frac{4\kappa+t-2}{2}|N_{1}|\leq(4\kappa+t-2)Cn^{3/4}\).
For any constant \(\gamma\), let \(\mathbb{L}^{\gamma}\) denote the set of vertices \(\{v\in\mathbb{L}\) such that \(\mathbf{x}_{v}\geq\gamma\}\). Let \(\sigma:=\frac{\alpha}{4\kappa}\).
In the following, we will show that \(|\mathbb{L}^{\gamma}|=0\) for all \(\gamma\geq\alpha\). First we will show that if \(|\mathbb{L}^{\gamma}|=0\) for some \(\gamma\), then \(|\mathbb{L}^{\gamma-\sigma}|=0\). Clearly, since \(|\mathbb{L}^{\gamma}|=0\) for all \(\gamma>1\), this will inductively give us that \(\mathbb{L}^{\alpha}=\mathbb{L}=\emptyset\). To do this we will show the existence of disjoint paths \(P_{2k^{\prime}_{i}+1}\) for \(1\leq i\leq t\) in \(H\), with both end points in \(N_{1}\).
Given \(\mathbb{L}^{\gamma}=\emptyset\), recall that for \(v\in\mathbb{L}^{\gamma-\sigma}\),
\[\begin{split}\kappa n(\gamma-\sigma)&\leq\kappa n \mathbf{x}_{v}\leq\lambda^{2}\mathbf{x}_{v}=\sum_{u\sim v}\sum_{w\sim v} \mathbf{x}_{w}\leq C_{1}n^{3/4}+\sum_{u\sim v}\sum_{\begin{subarray}{c}w\sim v \\ w\in\mathbb{L}^{\gamma}_{2}(v)\end{subarray}}\mathbf{x}_{w}+\sum_{u\sim v} \sum_{\begin{subarray}{c}w\sim v\\ w\in N_{2}(v)\setminus\mathbb{L}^{\gamma}_{2}(v)\end{subarray}}\mathbf{x}_{w} \\ &\leq C_{1}n^{3/4}+\sum_{u\sim v}\sum_{\begin{subarray}{c}u\sim v\\ w\in N_{2}(v)\setminus\mathbb{L}^{\gamma}_{2}(v)\end{subarray}}\gamma=C_{1}n^{3 /4}+e(N_{1}(v),N_{2}(v)\setminus\mathbb{L}^{\gamma}_{2}(v))\gamma.\end{split} \tag{13}\]
Since \(\sigma:=\frac{\alpha}{4\kappa}\),
\[\begin{split}&\left(\frac{2\kappa-0.5}{2}\right)n=\kappa n \left(1-\frac{1}{4\kappa}\right)\leq\kappa n\left(1-\frac{\sigma}{\gamma} \right)=\kappa n\left(\frac{\gamma-\sigma}{\gamma}\right)\leq C_{2}n^{3/4}+e(N _{1}(v),N_{2}(v)\setminus\mathbb{L}^{\gamma}_{2}(v))\end{split} \tag{14}\]
for some positive constant \(C_{2}=C_{1}/\gamma\). Thus,
\[\frac{(2\kappa-0.6)}{2}n\leq e(N_{1}(v),N_{2}(v)\setminus\mathbb{L}^{\gamma} _{2}(v)), \tag{15}\]
Thus, there is \(P_{2\kappa+1}\) in \(H\) and a \(P_{2\kappa-1}\) in \(H\) with both end points in \(N_{1}\).
Since \(\kappa=\sum_{i=1}^{t}k^{\prime}_{i}\), \(k^{\prime}_{1}=1\), and \(t\geq 2\), it implies that \(k^{\prime}_{t}+1\leq\kappa\). Thus, \(2k^{\prime}_{t}+1\leq 2\kappa-1\), and there must exist \(P_{2k^{\prime}_{i}+1}\) in \(H\) with both end points in \(N_{1}\) for all \(1\leq i\leq t\). It remains to show that we can find disjoint paths \(P_{2k^{\prime}_{i}+1}\) in \(H\) with both end points in \(N_{1}\) for all \(1\leq i\leq t\).
Say we have shown the existence of \(j\) disjoint paths \(P_{2k^{\prime}_{i}+1}\), \(1\leq i\leq j\), for some \(j<t\), in \(H\) with both end points in \(N_{1}\). We will next show by induction that there is a disjoint \(P_{2k^{\prime}_{j+1}+1}\) as well, oriented such that both its end points lie in \(N_{1}\). Let \(H_{j+1}\) be the induced subgraph of \(H\) with vertex set \(V(H)\setminus(\sqcup_{i=1}^{j}V(P_{2k^{\prime}_{i}+1}))\). Now, if \(W\) is any finite set of vertices in \(V(H)\), recall that
\[\begin{split}\kappa n(\gamma-\sigma)&\leq\lambda^{2}\mathbf{x}_{v}=\sum_{u\sim v}\sum_{w\sim u}\mathbf{x}_{w}\leq C_{1}n^{3/4}+\sum_{u\sim v}\sum_{\begin{subarray}{c}w\sim u\\ w\in N_{2}(v)\setminus\mathbb{L}^{\gamma}_{2}(v)\end{subarray}}\mathbf{x}_{w}\\ &\leq C_{1}n^{3/4}+\sum_{u\sim v}\sum_{\begin{subarray}{c}w\sim u\\ w\in N_{2}(v)\setminus\mathbb{L}^{\gamma}_{2}(v)\\ w\in W\end{subarray}}\mathbf{x}_{w}+\sum_{u\sim v}\sum_{\begin{subarray}{c}w\sim u\\ w\in N_{2}(v)\setminus\mathbb{L}^{\gamma}_{2}(v)\\ w\notin W\end{subarray}}\mathbf{x}_{w}\\ &\leq C_{2}n^{3/4}+\sum_{\begin{subarray}{c}u\sim v\\ u\notin W\end{subarray}}\sum_{\begin{subarray}{c}w\sim u\\ w\in N_{2}(v)\setminus\mathbb{L}^{\gamma}_{2}(v)\\ w\notin W\end{subarray}}\gamma,\end{split} \tag{16}\]
for some positive constant \(C_{2}\), provided \(n\) is taken to be sufficiently large. In particular, if \(W\) is the set of vertices of the \(j\) paths already found, \(W=\sqcup_{i=1}^{j}V(P_{2k^{\prime}_{i}+1})\), we have
\[\kappa n(\gamma-\sigma)\leq C_{2}n^{3/4}+\sum_{\begin{subarray}{c}u\sim v\\ u\notin W\end{subarray}}\sum_{\begin{subarray}{c}w\sim u\\ w\in N_{2}(v)\setminus\mathbb{L}^{\gamma}_{2}(v)\\ w\notin W\end{subarray}}\gamma\leq C_{2}n^{3/4}+e(H_{j+1})\gamma.\]
Thus, \(e(H_{j+1})>\frac{2\kappa-1}{2}n\) and there must exist a path \(P_{2k^{\prime}_{j+1}+1}\) in \(H_{j+1}\) with both end points in \(N_{1}\). By induction, there exist \(t\) disjoint paths \(P_{2k^{\prime}_{i}+1}\) in \(H\) with both end points in \(N_{1}\), for all \(1\leq i\leq t\). These disjoint paths create the intersecting even cycle \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\), along with \(v\), the central vertex. This is a contradiction, hence \(\mathbb{L}^{\gamma-\sigma}=\emptyset\) whenever \(\mathbb{L}^{\gamma}=\emptyset\). Proceeding similarly, we can show that \(\mathbb{L}^{\gamma-\sigma}=\mathbb{L}^{\gamma-2\sigma}=\ldots=\mathbb{L}^{ \alpha}(=\mathbb{L})=\emptyset\), thus \(|\mathbb{L}|=0\), a contradiction. Therefore, we must have that \(|L|<n^{3/4}<\frac{n^{1-\delta}}{\alpha}\). Similarly, we can prove that \(|M|\leq\frac{4n^{1-\delta}}{\alpha}\).
In our final lemma for this section, we determine lower bounds for the Perron entries in terms of the spectral radius of \(G\).
**Lemma 3.12**.: _The graph \(G\) is connected and for any vertex \(v\in V(G)\), we have that_
\[\mathrm{x}_{v}\geq\frac{1}{\lambda(G)}>\frac{1}{\sqrt{5\kappa n}}.\]
Proof.: Say \(z\) is a vertex with maximum eigenweight \(\mathrm{x}_{z}=1\). Then \(\sqrt{\kappa n}\leq\lambda\leq d(z)\). Now suppose to the contrary that \(G\) is not connected. Say \(V_{1}\ni z\) and \(V_{2}\) are two connected components of \(G\). Then \(\lambda(G[V_{1}])=\lambda\). Next, for any \(u\in V_{2}\), let \(\hat{G}\) be the graph obtained from \(G\), having an identical vertex set and \(E(\hat{G})=E(G[V_{1}])\cup\{uz\}\). Then, \(G[V_{1}]\) is a proper subgraph of \(\hat{G}\). Additionally, \(\hat{G}[V_{1}\cup\{u\}]\) is connected and contains \(G[V_{1}]\) as a proper subgraph. Therefore \(\lambda<\lambda(\hat{G}[V_{1}\cup\{u\}])=\lambda(\hat{G})\). Since \(G\in\mathrm{SPEX}(n,C_{2k_{1},\ldots,2k_{t}})\), we must have that \(C_{2k_{1},\ldots,2k_{t}}\subset\hat{G}\).
*Now, \(u\) must be a part of the \(C_{2k_{1},\ldots,2k_{t}}\) created in \(\hat{G}\). However, \(d_{\hat{G}}(u)=1\), so \(u\) cannot be in any \(C_{2k_{1},\ldots,2k_{t}}\), and \(\hat{G}\) is \(C_{2k_{1},\ldots,2k_{t}}\)-free. But this contradicts the fact that \(G\in\mathrm{SPEX}(n,C_{2k_{1},\ldots,2k_{t}})\). Thus, \(G\) must be a connected graph. [Note the remark given after the end of the proof, to see a slight modification to the arguments in this paragraph* for purposes of Theorem 1.8].
Now let \(v\) be an arbitrary vertex of \(G\). If \(v\) is adjacent to any vertex with eigenweight \(1\), then clearly \(\lambda\mathrm{x}_{v}\geq 1\) and therefore \(\mathrm{x}_{v}\geq\frac{1}{\lambda}\) and we are done. However, if \(v\) is not adjacent to \(z\), assume to the contrary that \(\mathrm{x}_{v}<\frac{1}{\lambda(G)}\); then we modify \(G\) to obtain the graph \(G^{\prime}\) as follows. Let \(V(G^{\prime})=V(G)\), and \(E(G^{\prime})=(E(G)\setminus\{uv:uv\in E(G)\})\cup\{zv\}\). Then by the same arguments as above for \(\hat{G}\) we can show that \(G^{\prime}\) is \(C_{2k_{1},\ldots,2k_{t}}\)-free. Moreover, \(\mathrm{x}^{T}A(G^{\prime})\mathrm{x}-\mathrm{x}^{T}A(G)\mathrm{x}=2\mathrm{x}_{v}(1-\lambda\mathrm{x}_{v})>0\), so \(\lambda(G^{\prime})>\lambda(G)\), a contradiction. Hence, \(\mathrm{x}_{v}\geq\frac{1}{\lambda}\).
_Remark 5_.: Note the following version of the arguments in the proof of Lemma 3.12 that is relevant for the purposes of Theorem 1.8:
*Now if \(\hat{G}\) contains a copy of \(H\subset C_{2k_{1},\ldots,2k_{t}}\), then \(u\) must be a part of the \(H\) created in \(\hat{G}\). However, \(d_{\hat{G}}(u)=1\) and \(d_{G}(z)\geq\lambda>2\kappa+t+1=|V(C_{2k_{1},\ldots,2k_{t}})|\) for \(n\) large enough. So, if \(\hat{G}\) contains a copy of \(H\), then \(G\) must also have contained a copy of \(H\), a contradiction. Thus, \(G\) must be a connected graph.
## 4 Structural results for extremal graphs
In this section, we continue to revise our estimations for the structure of \(G\). We continue our use of auxiliary constants \(\alpha,\epsilon\), and \(\eta\). We assume \(n\) to be large enough for all lemmas in this section.
We begin by showing that the degree of any vertex in \(L\) is linearly growing with respect to \(n\) and therefore using Theorem 3.6 it follows that \(L\) has constant size.
**Lemma 4.1**.: _Any vertex \(v\in L\) has degree \(d(v)\geq\frac{\alpha}{10(4\kappa+t-1)}n\). Moreover, \(|L|<\frac{100(4\kappa+t)^{3}}{\alpha^{2}}<\frac{12500\kappa^{3}}{\alpha^{2}}\)._
Proof.: Assume to the contrary that there exists a vertex \(v\in L\) of degree \(d(v)<\frac{\alpha}{10(4\kappa+t-1)}n\). Let \(\mathrm{x}_{v}=c\geq\alpha\). The second degree eigenvector-eigenvalue equation with respect to the vertex \(v\) gives that
\[\begin{split}\kappa nc\leq\lambda^{2}c=\lambda^{2}\mathrm{x}_{v} =\sum_{\begin{subarray}{c}u\sim v\\ w\sim u\end{subarray}}\mathrm{x}_{w}&\leq d(v)c+2e(N_{1}(v))+ \sum_{u\sim v}\sum_{\begin{subarray}{c}w\sim u,\\ w\in N_{2}(v)\end{subarray}}\mathrm{x}_{w}\\ &\leq(4\kappa+t-1)d(v)+\sum_{u\sim v}\sum_{\begin{subarray}{c}w \sim u,\\ w\in M_{2}(v)\end{subarray}}\mathrm{x}_{w}+\sum_{u\sim v}\sum_{ \begin{subarray}{c}w\sim u,\\ w\in N_{2}(v)\setminus M_{2}(v)\end{subarray}}\mathrm{x}_{w},\end{split} \tag{17}\]
where the last inequality follows from \(e(N_{1}(v))\leq(2\kappa+t/2-1)d(v)\). Since \(d(v)<\frac{\alpha}{10(4\kappa+t-1)}n\) by our assumption and \(c\geq\alpha\), we have
\[(\kappa-0.1)nc<\sum_{u\sim v}\sum_{\begin{subarray}{c}w\sim u,\\ w\in M_{2}(v)\end{subarray}}\mathrm{x}_{w}+\sum_{u\sim v}\sum_{\begin{subarray} {c}w\sim u,\\ w\in N_{2}\setminus M_{2}(v)\end{subarray}}\mathrm{x}_{w}. \tag{18}\]
From \(d(v)<\frac{\alpha}{10(4\kappa+t-1)}n\) and Lemma 3.11, we get that
\[\sum_{u\sim v}\sum_{\begin{subarray}{c}w\sim u,\\ w\in M_{2}(v)\end{subarray}}\mathrm{x}_{w}\leq e(N_{1}(v),M_{2}(v))\leq(2 \kappa+t/2-1)(d(v)+|M|)<(2\kappa+t/2-1)\left(\frac{\alpha}{10(4\kappa+t-1)}n+ \frac{4n^{1-\delta}}{\alpha}\right).\]
For \(n\) sufficiently large, we have that
\[(2\kappa+t/2-1)\left(\frac{\alpha}{10(4\kappa+t-1)}n+\frac{4n^{1-\delta}}{ \alpha}\right)\leq.9n\alpha\leq.9nc,\]
and consequently using Lemma 3.8,
\[(\kappa-1)nc<\sum_{u\sim v}\sum_{\begin{subarray}{c}w\sim u,\\ w\in N_{2}\setminus M_{2}(v)\end{subarray}}\mathrm{x}_{w}\leq e(N_{1}(v),N_{2 }\setminus M_{2}(v))\frac{\alpha}{4}\leq(2\kappa+t/2-1)n\frac{\alpha}{4}.\]
This is a contradiction because \(\kappa\geq t\geq 2\) and \(c\geq\alpha\). Hence, \(d(v)\geq\frac{\alpha}{10(4\kappa+t-1)}n\) for all \(v\in L\). Thus, for \(n\) sufficiently large we have that \(d^{2}(v)\geq\left(\frac{\alpha}{10(4\kappa+t-1)}n\right)^{2}\) for all \(v\in L\).
Combined with Theorem 3.6 this gives us that
\[|L|\left(\frac{\alpha}{10(4\kappa+t-1)}n\right)^{2}\leq\sum_{v\in L}d^{2}(v) \leq\sum_{v\in V(G)}d^{2}(v)<(4\kappa+t)(n-1)n.\]
Therefore,
\[|L|\leq\frac{(4\kappa+t)(n-1)n(10(4\kappa+t-1))^{2}}{\alpha^{2}n^{2}}<\frac{ 100(4\kappa+t)^{3}}{\alpha^{2}}\leq\frac{12500\kappa^{3}}{\alpha^{2}},\]
where we have used \(t\leq\kappa\) in the last inequality.
We will now refine our arguments in the proof above to improve degree estimates for the vertices in \(L^{\prime}\).
**Lemma 4.2**.: _If \(v\) is a vertex in \(L^{\prime}\) with \(\mathrm{x}_{v}=c\), then \(d(v)\geq cn-\epsilon n\)._
Proof.: Our proof is again by contradiction. The second degree eigenvalue-eigenvector equation with respect to the vertex \(v\) gives
\[\begin{split}\kappa nc&\leq\lambda^{2}c=\sum_{u\sim v} \sum_{w\sim u}\mathrm{x}_{w}=d(v)c+\sum_{u\sim v}\sum_{\begin{subarray}{c}w \sim u,\\ w\neq v\end{subarray}}\mathrm{x}_{w}\\ &\leq d(v)c+\sum_{u\in S_{1}}\sum_{\begin{subarray}{c}w\sim u\\ w\in L_{1}\cup L_{2}\end{subarray}}\mathrm{x}_{w}+2e(S_{1})\alpha+2e(L)+e(L_{1 },S_{1})\alpha+e(N_{1},S_{2})\alpha.\end{split} \tag{19}\]
The observation that the subgraph of \(G\) with edge set \(E[N_{1}]\cup E[N_{1},N_{2}]\) has at most \((2\kappa+t/2-1)n\) edges, which follows from Lemma 3.5, along with \(e(N_{1})\leq(2\kappa+t/2-1)d(v)\) implies that
\[2e(S_{1}) \leq(4\kappa+t-2)n,\] \[e(L_{1},S_{1}) \leq(2\kappa+t/2-1)n,\] \[e(N_{1},S_{2}) \leq(2\kappa+t/2-1)n.\]
Combining these inequalities with Lemma 4.1, we deduce that
\[2e(S_{1})\alpha+2e(L)+e(L_{1},S_{1})\alpha+e(N_{1},S_{2})\alpha\] \[\leq (4\kappa+t-2)n\alpha+2\binom{|L|}{2}+(2\kappa+t/2-1)n\alpha+(2 \kappa+t/2-1)n\alpha \tag{20}\] \[\leq 10\kappa n\alpha,\]
for \(n\) sufficiently large. Hence,
\[\kappa nc\leq d(v)c+\sum_{u\in S_{1}}\sum_{\begin{subarray}{c}w\sim u\\ w\in L_{1}\cup L_{2}\end{subarray}}\mathrm{x}_{w}+10\kappa n\alpha\leq d(v)c+ e(S_{1},L_{1}\cup L_{2})+10\kappa n\alpha\leq d(v)c+e(S_{1},L_{1}\cup L_{2})+ \frac{\epsilon^{2}n}{2}, \tag{21}\]
where the last inequality is by (2). Thus, using \(d(v)<cn-\epsilon n\), we get that
\[(\kappa-c+\epsilon)\,nc<(\kappa n-d(v))\,c\leq e(S_{1},L_{1}\cup L_{2})+\frac{ \epsilon^{2}n}{2}. \tag{22}\]
Since \(v\in L^{\prime}\), by (2) we have \(c\geq\eta>\epsilon\). Using \(\epsilon<c\leq 1\), we obtain that
\[e(S_{1},L_{1}\cup L_{2})>(\kappa-c)nc+\epsilon nc-\frac{\epsilon^{2}n}{2}>( \kappa-1)nc+\frac{\epsilon^{2}n}{2}. \tag{23}\]
We show that \(G\) contains a \(K_{\kappa+1,\kappa+t}\) which gives a contradiction using Lemma 3.3. We first prove the following claim.
_Claim 4.2.1_.: If \(\delta:=\frac{\epsilon\alpha^{2}}{12500\kappa^{3}}\), then there are at least \(\delta n\) vertices inside \(S_{1}\) with degree at least \(\kappa\) in the bipartite subgraph \(G[S_{1},L_{1}\cup L_{2}]\).
Proof.: Assume to the contrary that less than \(\delta n\) vertices in \(S_{1}\) have degree at least \(\kappa\) in \(G[S_{1},L_{1}\cup L_{2}]\). Then \(e(S_{1},L_{1}\cup L_{2})<(\kappa-1)|S_{1}|+|L|\delta n<(\kappa-1)(c-\epsilon) n+\epsilon n\), because \(|S_{1}|\leq d(v)\) and by Lemma 4.1. Combining this with (23) gives \((\kappa-1)nc-(\kappa-2)n\epsilon>e(S_{1},L_{1}\cup L_{2})\geq(\kappa-1)nc+ \frac{\epsilon^{2}n}{2}\), a contradiction.
Let \(D\) be the set of vertices of \(S_{1}\) that have degree at least \(\kappa\) in \(G[S_{1},L_{1}\cup L_{2}]\). Thus \(|D|\geq\delta n\). Since there are only \(\binom{|L|}{\kappa}\leq\binom{12500\kappa^{3}/\alpha^{2}}{\kappa}\) options for any vertex in \(D\) to choose a set of \(\kappa\) neighbors from, it implies that there exists some set of \(\kappa\) vertices in \(L_{1}\cup L_{2}\) with at least \(\delta n/\binom{|L|}{\kappa}\geq\frac{\epsilon\alpha^{2}n}{12500\kappa^{3}}/ \binom{12500\kappa^{3}/\alpha^{2}}{\kappa}\) common neighbors in \(D\). For \(n\) large enough, this quantity is at least \(\kappa+t\), thus \(K_{\kappa+1,\kappa+t}\subset G[S_{1},L_{1}\cup L_{2}\cup\{v\}]\), and we get our desired contradiction.
An important takeaway of Lemma 4.2 is that \(d(z)\geq(1-\epsilon)n\) and for any \(v\in L^{\prime}\) we have \(d(v)\geq(\eta-\epsilon)n\). By (2), it follows that the neighborhoods of \(z\) and \(v\) intersect. So \(L^{\prime}\subset\{z\}\cup N_{1}(z)\cup N_{2}(z)\).
Next we bound the number of edges in a bipartite graph contained in the closed 2-ball around \(z:N_{2}[z]\).
**Lemma 4.3**.: _For \(z\) the vertex with \(\mathrm{x}_{z}=1\), we have \((1-\epsilon)\kappa n\leq e(S_{1},\{z\}\cup L_{1}\cup L_{2})\leq(\kappa+ \epsilon)n\)._
Proof.: To obtain the lower bound, we use (21). Given \(\mathrm{x}_{z}=1\), we have
\[\kappa n(1)\leq d(z)+e(S_{1},L_{1}\cup L_{2})+\frac{\epsilon^{2}n}{2}.\]
Since \(e(S_{1},\{z\}\cup L_{1}\cup L_{2})=e(S_{1},L_{1}\cup L_{2})+d(z)\), the lower bound follows as \(\frac{\epsilon^{2}n}{2}<\kappa n\).
To obtain the upper bound, we assume toward a contradiction that for the vertex \(z\), we have \(e(S_{1},\{z\}\cup L_{1}\cup L_{2})>(\kappa+\epsilon)n\). We will obtain a contradiction via Lemma 3.3 by showing that \(K_{\kappa+1,\kappa+t}\subset G\). For this we prove the following claim.
_Claim 4.3.1_.: Let \(\delta:=\frac{\epsilon\alpha^{2}}{12500\kappa^{3}}\). With respect to the vertex \(z\) there are at least \(\delta n\) vertices inside \(S_{1}\) with degree \(\kappa\) or more in \(G[S_{1},L_{1}\cup L_{2}]\).
Proof.: Assume to the contrary that less than \(\delta n\) vertices in \(S_{1}\) have degree at least \(\kappa\) in \(G[S_{1},L_{1}\cup L_{2}]\). Then \(e(S_{1},L_{1}\cup L_{2})<(\kappa-1)|S_{1}|+|L|\delta n<(\kappa-1)n+\epsilon n\), because \(|S_{1}|\leq n\) and by Lemma 4.1. This contradicts our assumption that \(e(S_{1},\{z\}\cup L_{1}\cup L_{2})>(\kappa+\epsilon)n\).
Hence, there is a subset \(D\subset S_{1}\) with at least \(\delta n\) vertices such that every vertex in \(D\) has degree at least \(\kappa\) in \(G[S_{1},L_{1}\cup L_{2}]\). Since there are only at most \(\binom{|L|}{\kappa}\leq\binom{12500\kappa^{3}/\alpha^{2}}{\kappa}\) options for every vertex in \(D\) to choose a set of \(\kappa\) neighbors from, we have that there exists some set of \(\kappa\) vertices in \(L_{1}\cup L_{2}\setminus\{z\}\) having a common neighborhood with at least \(\delta n/\binom{|L|}{\kappa}\geq\delta n/\binom{12500\kappa^{3}/\alpha^{2}}{ \kappa}=\frac{\epsilon\alpha^{2}}{12500\kappa^{3}}n/\binom{12500\kappa^{3}/ \alpha^{2}}{\kappa}\geq\kappa+t\) vertices. Thus, \(K_{\kappa,\kappa+t}\subset G[S_{1},L_{1}\cup L_{2}]\) and \(K_{\kappa+1,\kappa+t}\subset G[S_{1},L_{1}\cup L_{2}\cup\{z\}]\), a contradiction by Lemma 3.3. Hence \(e(S_{1},\{z\}\cup L_{1}\cup L_{2})\leq(\kappa+\epsilon)n\).
Next we show that all the vertices in \(L^{\prime}\) in fact have Perron weight close to the maximum.
**Lemma 4.4**.: _For all vertices \(v\in L^{\prime}\), we have \(d(v)\geq\left(1-\frac{1}{8\kappa^{3}}\right)n\) and \(\mathrm{x}_{v}\geq 1-\frac{1}{16\kappa^{3}}\). Moreover, \(|L^{\prime}|=\kappa\)._
Proof.: Suppose we are able to show that \(\mathrm{x}_{v}\geq 1-\frac{1}{16\kappa^{3}}\), then using Lemma 4.2 and by (2) we have that \(d(v)\geq\left(1-\frac{1}{8\kappa^{3}}\right)n\). Then because every vertex \(v\in L^{\prime}\) has \(d(v)\geq\left(1-\frac{1}{8\kappa^{3}}\right)n\), we must have that \(|L^{\prime}|\leq\kappa\) else \(G[S_{1},L^{\prime}]\) contains a \(K_{\kappa+1,\kappa+t}\), a contradiction by Lemma 3.3.
Next, if \(|L^{\prime}|\leq\kappa-1\) then using Lemma 4.3 and (21) applied to the vertex \(z\) we have
\[\kappa n\leq\lambda^{2}\leq e(S_{1},L^{\prime})+e(S_{1},L_{1}\cup L _{2})\eta+\frac{\epsilon^{2}n}{2}\leq(\kappa-1)n+(\kappa+\epsilon)n\eta+ \frac{\epsilon^{2}n}{2}<\kappa n\]
where the last inequality holds by (2) and gives the contradiction. Thus \(|L^{\prime}|=\kappa\). Thus, all we need to show is that \(\mathrm{x}_{v}\geq 1-\frac{1}{16\kappa^{3}}\) for any \(v\in L^{\prime}\).
Now towards a contradiction assume that there is some vertex in \(v\in L^{\prime}\) such that \(\mathrm{x}_{v}<1-\frac{1}{16\kappa^{3}}\). Then refining (21) with respect to the vertex \(z\) we have that
\[\begin{split}\kappa n&\leq\lambda^{2}<e(S_{1}(z), \{z\}\cup L_{1}(z)\cup L_{2}(z)\setminus\{v\})+|N_{1}(z)\cap N_{1}(v)|\mathrm{ x}_{v}+\frac{\epsilon^{2}n}{2}\\ &<(\kappa+\epsilon)n-|S_{1}(z)\cap N_{1}(v)|+|N_{1}(z)\cap N_{1}( v)|\left(1-\frac{1}{16\kappa^{3}}\right)+\frac{\epsilon^{2}n}{2}\\ &=\kappa n+\epsilon n+|L_{1}(z)\cap N_{1}(v)|-|N_{1}(z)\cap N_{1} (v)|\frac{1}{16\kappa^{3}}+\frac{\epsilon^{2}n}{2}.\end{split} \tag{24}\]
Thus, using Lemma 4.1 we have \(\frac{|N_{1}(z)\cap N_{1}(v)|}{16\kappa^{3}}<\epsilon n+\frac{\epsilon^{2}n}{2 }+|L|\leq 2\epsilon n\).
But, \(v\in L^{\prime}\), thus \(\mathrm{x}_{v}\geq\eta\) and \(d(v)\geq\left(\eta-\epsilon\right)n\), and so \(|N_{1}(z)\cap N_{1}(v)|\geq\left(\eta-2\epsilon\right)n>32\kappa^{3}\epsilon n\) by (2), a contradiction.
Now we have \(|L^{\prime}|=\kappa\) and every vertex in \(L^{\prime}\) has degree at least \(\left(1-\frac{1}{8\kappa^{3}}\right)n\). Thus, the common neighborhood of vertices in \(L^{\prime}\) has at least \(\left(1-\frac{1}{8\kappa^{2}}\right)n\) vertices. Let \(R\) denote the set of vertices in this common neighborhood. Let \(\mathcal{E}\) be the set of remaining "exceptional" vertices not in \(L^{\prime}\) or \(R\). Thus \(|\mathcal{E}|\leq\frac{n}{8\kappa^{2}}\). We will now show that \(\mathcal{E}=\emptyset\) and thus \(G\) contains a large complete bipartite subgraph \(K_{\kappa,n-\kappa}\). For this, we will first prove a bound on the sum of Perron weights in the neighborhood of any vertex.
**Lemma 4.5**.: _For any vertex \(v\in V(G)\), the Perron weight in the neighborhood of \(v\) satisfies \(\sum_{w\sim v}\mathrm{x}_{w}\geq\kappa-\frac{1}{16\kappa^{2}}\)._
Proof.: Clearly if \(v\in L^{\prime}\), we have
\[\sum_{w\sim v}\mathrm{x}_{w}=\lambda\mathrm{x}_{v}\geq\lambda\left(1-\epsilon \right)\geq\kappa-\frac{1}{16\kappa^{2}}.\]
If \(v\in R\), then
\[\sum_{w\sim v}\mathrm{x}_{w}\geq\sum_{\begin{subarray}{c}w\sim v\\ w\in L^{\prime}\end{subarray}}\mathrm{x}_{w}\geq\kappa\left(1-\frac{1}{16 \kappa^{3}}\right)=\kappa-\frac{1}{16\kappa^{2}}.\]
Finally, let \(v\in\mathcal{E}\). If \(\sum_{w\sim v}{\rm x}_{w}<\kappa-\frac{1}{16\kappa^{2}}\), consider the graph \(H\) obtained from \(G\) by deleting all edges adjacent to \(v\) and adding the edges \(uv\) for all \(u\in L^{\prime}\). Now since \(\sum_{w\sim v}{\rm x}_{w}<\kappa-\frac{1}{16\kappa^{2}}\) we have that \({\rm x}^{T}A(H){\rm x}>{\rm x}^{T}A(G){\rm x}\), and so by the Rayleigh principle \(\lambda(H)>\lambda(G)\). However, there are no new intersecting even cycles \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\) that have isomorphic copies in \(H\) but no isomorphic copies in \(G\). To see this, assume to the contrary that there is an isomorphic copy of \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\) in \(H\) but not in \(G\). Then the \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\) has \(2\kappa+t+1\) vertices \(v_{1}(=v),v_{2},\ldots,v_{2\kappa+t+1}\) and \(v\) has at most \(\kappa\) neighbors in \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\) all of which lie in \(L^{\prime}\). However, the common neighborhood of \(L^{\prime}\), in \(G\), has at least \(\left(1-\frac{1}{8\kappa^{2}}\right)n>2\kappa+t+1\) vertices. Therefore, \(G\) must already contain an isomorphic copy of \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\), a contradiction.
Let \(K_{a,b}^{p}\), and \(K_{a,b}^{m}\) denote the graphs obtained from the complete bipartite graph \(K_{a,b}\) by adding into the part of size \(b\) a path \(P_{3}\) on \(3\) vertices, and a matching with two edges \(K_{2}\cup K_{2}\), respectively. So, \(K_{a,b}^{p}=K_{a}\vee((b-3)K_{1}\cup P_{3})\) and \(K_{a,b}^{m}=K_{a}\vee((b-4)K_{1}\cup 2K_{2})\). Then we can make the following observation.
**Lemma 4.6**.: _Let \(2\leq k_{1}\leq k_{2}\leq\ldots\leq k_{t}\) and \(\kappa:=\sum_{i=1}^{t}k_{i}^{\prime}\). If \(k_{t}=2\), then \(K_{t,2t+1}^{p}\) contains the intersecting even cycle \(C_{4,4,\ldots,4}\) consisting of \(t\) intersecting \(4\)-cycles. If \(k_{t}\geq 3\), then the graphs \(K_{\kappa,\kappa+t+1}^{p}\) and \(K_{\kappa,\kappa+t+1}^{m}\) both contain the intersecting even cycles \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\)._
It follows from Lemma 4.6 that if \(k_{t}=2\) then every vertex in \(G[R]\) has degree less than 2. Further, if \(k_{t}\geq 3\) then \(e(R)\leq 1\). Moreover, any vertex \(v\in\mathcal{E}\) is adjacent to at most \(\kappa+t+1\) vertices in \(R\), else \(K_{\kappa+1,\kappa+t}\subset G[L^{\prime}\cup\{v\},R]\), a contradiction by Lemma 3.3. Finally, any vertex in \(\mathcal{E}\) is adjacent to at most \(\kappa-1\) vertices in \(L^{\prime}\) by the definition of \(\mathcal{E}\). We are now ready to show that \(\mathcal{E}\) is empty and therefore \(S=R\), so \(G\) must contain the complete bipartite graph \(K_{\kappa,n-\kappa}\).
**Lemma 4.7**.: _The set \(\mathcal{E}\) is empty and \(G\) contains the complete bipartite graph \(K_{\kappa,n-\kappa}\)._
Proof.: Assume to the contrary that \(\mathcal{E}\neq\emptyset\). Recall that any vertex \(r\in R\) satisfies \({\rm x}_{r}<\eta\). Therefore, any vertex \(v\in\mathcal{E}\) must satisfy
\[\sum_{u\sim v}{\rm x}_{u}=\lambda{\rm x}_{v}=\sum_{\begin{subarray}{c}u\sim v \\ u\in L^{\prime}\cup R\end{subarray}}{\rm x}_{u}+\sum_{\begin{subarray}{c}u\sim v \\ u\in\mathcal{E}\end{subarray}}{\rm x}_{u}\leq\kappa-1+(\kappa+t)\,\eta+\sum_{ \begin{subarray}{c}u\sim v\\ u\in\mathcal{E}\end{subarray}}{\rm x}_{u}.\]
Combining this with Lemma 4.5 gives
\[\frac{\sum_{\begin{subarray}{c}u\sim v\\ u\in\mathcal{E}\end{subarray}}{\rm x}_{u}}{\lambda{\rm x}_{v}}\geq\frac{ \lambda{\rm x}_{v}-(\kappa-1)-(\kappa+t)\eta}{\lambda{\rm x}_{v}}\geq 1-\frac{( \kappa-1)+(\kappa+t)\eta}{\kappa-\frac{1}{16\kappa^{2}}}\geq\frac{4}{5\kappa}, \tag{25}\]
where the last inequality follows from (2). Now consider the matrix \(B=A(G[\mathcal{E}])\) and vector \(y:={\rm x}_{|\mathcal{E}}\) (the restriction of the vector \({\rm x}\) to the set \(\mathcal{E}\)). We see that for any vertex \(v\in\mathcal{E}\)
\[(By)_{v}=\sum_{\begin{subarray}{c}u\sim v\\ u\in\mathcal{E}\end{subarray}}{\rm x}_{u}\geq\frac{4}{5\kappa}\lambda{\rm x}_ {v}=\frac{4}{5\kappa}\lambda{\rm y}_{v}.\]
Hence, by Lemma 3.1, we have that \(\lambda(B)\geq\frac{4}{5\kappa}\lambda\geq\frac{4}{5}\sqrt{\frac{n}{\kappa}}\). Moreover, \(\frac{n}{8\kappa^{2}}\geq|\mathcal{E}|>\lambda(B)\geq\frac{4}{5}\sqrt{\frac{n }{\kappa}}\), so \(\mathcal{E}\) must have sufficiently many vertices to apply Lemma 3.10 if \(n\) is sufficiently large and \(|\mathcal{E}|\neq 0\). This is a contradiction to Lemma 3.10 which gives \(\lambda(B)\leq\sqrt{5\kappa|\mathcal{E}|}\leq\sqrt{5\kappa\frac{n}{8\kappa^{2} }}=\sqrt{\frac{5n}{8\kappa}}\), else \(\mathcal{E}\) contains \(C_{2k_{1},2k_{2},\ldots,2k_{t}}\). Therefore, \(\mathcal{E}\) must be empty.
## 5 Proof of Theorems 1.4 and 1.5
It follows from Lemma 4.7 that \(G\) contains the complete bipartite graph \(K_{\kappa,n-\kappa}\), where the part on \(\kappa\) vertices is the set \(L^{\prime}\) and the part on \(n-\kappa\) vertices is the set \(R\). By Lemma 4.6, if \(k_{t}=2\), then \(G[R]\subset M_{n-\kappa}=M_{n-t}\), so \(G\subseteq F_{n,t}\), and if \(k_{t}\geq 3\), then \(e(R)\leq 1\) in \(G\). Hence \(G\subset S_{n,\kappa}^{+}\). Since adding more edges will only increase the spectral radius, we see that the spectral radius is maximized if the vertices of \(L^{\prime}\) induce a clique \(K_{\kappa}\) and the number of edges in \(R\) is as large as possible. Thus \(G\cong F_{n,t}\) (and \(G\cong S_{n,\kappa}^{+}\)), for \(k_{t}=2\) (and \(k_{t}\geq 3\), respectively).
**Acknowledgements:** The author is grateful to Prof. Sebastian Cioaba and Prof. Michael Tait for their useful advice during the preparation of this paper. |
2305.00808 | Recent Results from BABAR: Dark Matter, Axion-like Particles and Heavy
Neutral Leptons, a contribution to the 2023 Electroweak session of the 57th
Rencontres de Moriond | Three independent searches for new physics using data collected at BABAR are
presented. Firstly, two searches for dark matter and baryogenesis:
$B^{0}\rightarrow \Lambda + \psi_{D}$ and $B^{+} \rightarrow p + \psi_{D}$ are
detailed, where $\psi_{D}$ is a new dark fermion. Neither signal is observed
and new upper limits on the branching fractions, at the 90 $\%$ confidence
level (C.L), are placed at $\mathcal{O}(10^{-5} - 10^{-6})$ across the mass
range $1.0< m_{\psi_{D}}<4.3$ GeV/c$^{2}$. Secondly, new limits on the
coupling, $g_{aW}$, of an axion-like particle ($a$) to the $W$ boson, at the 90
$\%$ C.L, are presented at $\mathcal{O}(10^{-5})$ GeV$^{-1}$ for $a$ masses in
the mass range 0.175 $<m_{a}<$ 4.78 GeV/c$^{2}$. Thirdly, a model-independent
search for heavy neutral leptons (HNL) found new upper limits at the 95 $\%$
C.L on the extended Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix element,
$|U_{\tau 4}|^{2}$, which depend on the HNL mass hypothesis and vary from $2.31
\times 10^{-2}$ to $5.04 \times 10^{-6}$, across the mass range $100 < m_{4} <
1300$ MeV/c$^{2}$, with more stringent limits on higher HNL masses. | Sophie Middleton | 2023-05-01T13:15:45Z | http://arxiv.org/abs/2305.00808v1 | # Recent Results from BABAR: Dark Matter, Axion-like Particles and Heavy Neutral Leptons
###### Abstract
Three independent searches for new physics using data collected at BABAR are presented. Firstly, two searches for dark matter and baryogenesis: \(B^{0}\to\Lambda+\psi_{D}\) and \(B^{+}\to p+\psi_{D}\) are detailed, where \(\psi_{D}\) is a new dark fermion. Neither signal is observed and new upper limits on the branching fractions, at the 90 % confidence level (C.L), are placed at \({\cal O}(10^{-5}-10^{-6})\) across the mass range \(1.0<m_{\psi_{D}}<4.3\) GeV/c\({}^{2}\). Secondly, new limits on the coupling, \(g_{aW}\), of an axion-like particle (\(a\)) to the \(W\) boson, at the 90 % C.L, are presented at \({\cal O}(10^{-5})\) GeV\({}^{-1}\) for \(a\) masses in the mass range \(0.175<m_{a}<4.78\) GeV/c\({}^{2}\). Thirdly, a model-independent search for heavy neutral leptons (HNL) found new upper limits at the 95 % C.L on the extended Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix element, \(|U_{\tau 4}|^{2}\), which depend on the HNL mass hypothesis and vary from \(2.31\times 10^{-2}\) to \(5.04\times 10^{-6}\), across the mass range \(100<m_{4}<1300\) MeV/\(c^{2}\), with more stringent limits on higher HNL masses.
## 1 Motivations: the need for new physics
Since its formulation, the Standard Model (SM) has successfully explained many experimental results. The particles and the 18 free parameters have been tested with extraordinary precision, in some cases, up to 1 part in a trillion1. However, there remains plenty of data that the SM cannot fully explain. These include: the origins of the neutrino masses and neutrino mixing; the existence of large amounts of non-baryonic cold dark matter in the Universe; and, the observed dominance of matter over anti-matter.
Footnote 1: on behalf of the BABAR Collaboration
There are also theoretical motivations to search for physics beyond the SM (BSM), including: to understand the strong CP problem; to try to unify gravity with the other fundamental forces; to account for Higgs mass hierarchy problem; and, to understand the structure of fermion masses and mixings and the number of fermion families.
In this article, three independent searches for new physics which can explain several of the above-described phenomena are detailed. All three analyses utilize data collected at BABAR.
## 2 The BaBar Detector
The data sample used in these analyses corresponds to a total integrated luminosity of 431 fb\({}^{-1}\), collected with the BABAR detector at the PEP-II \(e^{+}e^{-}\) storage ring at the SLAC National Accelerator Laboratory. At PEP-II, 9.0 GeV electrons collide with 3.1 GeV positrons at center-of-mass (CM) energies near 10.58 GeV, on the \(\Upsilon(4S)\) resonance.
In the BABAR detector[2] (see Fig. 1), a silicon vertex tracker (SVT) and a 40 layer Drift Chamber (DCH), placed inside a 1.5-T solenoid magnet, are utilized to reconstruct charged-particle tracks. The transverse momentum resolution is 0.47% at 1 GeV/\(c\), where the transverse momentum, \(p_{T}\), is defined as the total momentum of all four tracks orthogonal to the beam axis. An Electro-Magnetic Calorimeter (EMC) measures the energy of electrons |
2303.02349 | New Upper Bounds on the Size of Permutation Codes under Kendall
$τ$-Metric | We first give two methods based on the representation theory of symmetric
groups to study the largest size $P(n,d)$ of permutation codes of length $n$
i.e. subsets of the set $S_n$ all permutations on $\{1,\dots,n\}$ with the
minimum distance (at least) $d$ under the Kendall $\tau$-metric. The first
method is an integer programming problem obtained from the transitive actions
of $S_n$. The second method can be applied to refute the existence of perfect
codes in $S_n$.\\ Here we reduce the known upper bound $(n-1)!-1$ for $P(n,3)$
to $(n-1)!-\lceil\frac{n}{3}\rceil+2\leq (n-1)!-2$, whenever $n\geq 11$ is any
prime number. If $n=6$, $7$, $11$, $13$, $14$, $15$, $17$, the known upper
bound for $P(n,3)$ is decreased by $3,3,9,11,1,1,4$, respectively. | Alireza Abdollahi, Javad Bagherian, Fatemeh Jafari, Maryam Khatami, Farzad Parvaresh, Reza Sobhani | 2023-03-04T07:31:00Z | http://arxiv.org/abs/2303.02349v1 | # New upper bounds on the size of permutation codes under Kendall \(\tau\)-metric
###### Abstract.
We first give two methods based on the representation theory of symmetric groups to study the largest size \(P(n,d)\) of permutation codes of length \(n\) i.e. subsets of the set \(S_{n}\) all permutations on \(\{1,\ldots,n\}\) with the minimum distance (at least) \(d\) under the Kendall \(\tau\)-metric. The first method is an integer programming problem obtained from the transitive actions of \(S_{n}\). The second method can be applied to refute the existence of perfect codes in \(S_{n}\).
Here we reduce the known upper bound \((n-1)!-1\) for \(P(n,3)\) to \((n-1)!-\lceil\frac{n}{3}\rceil+2\leq(n-1)!-2\), whenever \(n\geq 11\) is any prime number. If \(n=6\), \(7\), \(11\), \(13\), \(14\), \(15\), \(17\), the known upper bound for \(P(n,3)\) is decreased by \(3,3,9,11,1,1,4\), respectively.
Key words and phrases: Rank modulation, Kendall \(\tau\)-metric, permutation codes

2020 Mathematics Subject Classification: 94B25; 94B65; 68P30

Corresponding Author: A. Abdollahi ([email protected])

F. Parvaresh is supported by IPM in part by grant No. 1401680050
Introduction
Let \(\Gamma\) be a graph with vertex set \(V(\Gamma)\) and edge set \(E(\Gamma)\), and let \(\sigma_{1},\sigma_{2}\) be two vertices of \(\Gamma\). We say that \(\sigma_{1}\) and \(\sigma_{2}\) are adjacent, denoted by \(\sigma_{1}-\sigma_{2}\), if \(\{\sigma_{1},\sigma_{2}\}\in E(\Gamma)\).
A _subgraph_ of \(\Gamma\) is a simple graph whose vertex set and edge set are subsets of those of \(\Gamma\). A _path_ is a simple graph with the vertex set \(\{\sigma_{0},\sigma_{1},\ldots,\sigma_{n}\}\) such that \(\sigma_{j}-\sigma_{j+1}\) for \(j=0,\ldots,n-1\). The length of a path is the number of its edges.
By a _graphical code_ of minimum distance at least \(d\) we mean a subset of vertices of a simple graph such that any two distinct vertices have distance at least \(d\), where the distance of two vertices is defined to be the shortest length of a path between the vertices. Examples of such codes are permutation codes under the Kendall \(\tau\)-metric or the Ulam metric. In fact the set of all permutations with the Kendall \(\tau\) or Ulam metrics can be represented as a Cayley graph (see Definition 2.4, below) and PCs are then subgraphs of the Cayley graph. The methods used in this paper rely on the fact that the permutation set with Kendall \(\tau\)-metric is a Cayley graph.
Here we observe that if \(d\) is such that graphical codes of minimum distance at least \(d\) exist, then the ones with the minimum distance exactly \(d\) exist.
\begin{table}
\begin{tabular}{|c|c c c|c|} \hline \(n\) & \(6\) & \(7\) & \(11\) & \(13\) \\ \hline Old UB & \(5!-1^{a}\) & \(6!-1^{a}\) & \(10!-1^{a}\) & \(12!-1^{a}\) \\ \hline UB & \(5!-4\) & \(6!-4\) & \(10!-10\) & \(12!-12\) \\ \(n\) & \(14\) & \(15\) & \(17\) & prime \(n\geq 19\) \\ \hline Old UB & \(13!\)[9] & \(14!\)[9] & \(16!-1^{a}\) & \((n-1)!-1^{a}\) \\ \hline UB & \(13!-1\) & \(14!-1\) & \(16!-5\) & \((n-1)!-\lceil\frac{n}{3}\rceil+2\) \\ \hline \end{tabular}
\end{table}
Table 1. Some results on the upper bounds of \(P(n,3)\). The superscripts show the references from which the upper bound is taken, where “a” is [2, 5], and gray color shows our main results.
**Proposition 2.1**.: _Let \(\Gamma\) be any simple graph and \(d\geq 1\) an integer. Then_
\[\{|C|\mid C\subseteq V(\Gamma)\ \mathrm{and}\ d_{\Gamma}(C)=d\}=\{|C|\mid C \subseteq V(\Gamma)\ \mathrm{and}\ d_{\Gamma}(C)\geq d\},\]
_where \(d_{\Gamma}(C)=\min\{d_{\Gamma}(x,y)\mid x,y\in C\ \mathrm{and}\ x\neq y\}\)._
Proof.: Let \(C\) be a graphical code with the minimum distance at least \(d\). Suppose that \(\sigma,\tau\in C\) such that \(d_{\Gamma}(C)=d_{\Gamma}(\sigma,\tau)=d+\ell\) for some non-negative integer \(\ell\). If \(\ell=0\), we are done; so from now on assume that \(\ell>0\). Let \(\sigma-\sigma_{1}-\cdots-\sigma_{\ell}-\cdots-\sigma_{d+\ell-1}-\tau\) be a shortest path in the graph \(\Gamma\) between \(\sigma\) and \(\tau\). Consider \(\hat{C}=(C\setminus\{\sigma\})\cup\{\sigma_{\ell}\}\). We claim that \(|C|=|\hat{C}|\) and \(d_{\Gamma}(\hat{C})=d\); this will complete the proof. If \(\sigma_{\ell}\in C\), then \(d_{\Gamma}(\sigma_{\ell},\tau)\leq d\), which implies \(\ell\leq 0\), a contradiction. It follows that \(|C|=|\hat{C}|\). To prove that \(d_{\Gamma}(\hat{C})=d\), it is enough to show that \(d_{\Gamma}(\delta,\sigma_{\ell})\geq d\) for all \(\delta\in C\setminus\{\sigma\}\). Since \(d_{\Gamma}(C)=d+\ell\) and by the triangle inequality we have
\[d+\ell\leq d_{\Gamma}(\delta,\sigma)\leq d_{\Gamma}(\delta,\sigma_{\ell})+d_{ \Gamma}(\sigma_{\ell},\sigma)=d_{\Gamma}(\delta,\sigma_{\ell})+\ell.\]
So \(d_{\Gamma}(\delta,\sigma_{\ell})\geq d\), as required.
A PC with Hamming metric is not a graphical code as the Hamming distance between two permutations is never equal to \(1\) and so we cannot apply Proposition 2.1 for the latter case. We do not know if the conclusion of Proposition of 2.1 is valid for PCs with Hamming metric. We propose the following question.
**Question 2.2**.: _Let \(d_{H}\) be the Hamming metric on \(S_{n}\) and \(d\geq 2\) be an arbitrary integer. Is it true that_
\[\{|C|\mid C\subseteq S_{n}\ \mathrm{and}\ d_{H}(C)=d\}=\{|C|\mid C\subseteq S_{n} \ \mathrm{and}\ d_{H}(C)\geq d\}?\]
_where \(d_{H}(C)=\min\{d_{H}(x,y)\mid x,y\in C\ \mathrm{and}\ x\neq y\}\)._
**Definition 2.3**.: _Let \(G\) be a finite group and \(B,C\) be two non-empty subsets of \(G\). As usual we denote by \(BC\) the set \(\{bc\mid b\in B,c\in C\}\), where by \(g=bc\) we refer to the group operation, also for each \(g\in G\) we denote by \(Bg\) the set \(B\{g\}\). The set \(B\) is called inverse closed if \(B=B^{-1}:=\{b^{-1}\,|\,b\in B\}\). Also, we use the notation \(\xi\) to denote the identity element of \(G\)._
Let \(G\) be a finite group and denote by \(\mathbb{C}[G]\) the "complex group algebra" of \(G\). The elements of \(\mathbb{C}[G]\) are formal sums of the form
\[\sum_{g\in G}a_{g}g, \tag{2.1}\]
where \(a_{g}\in\mathbb{C}\). The complex group algebra is a \(\mathbb{C}\)-algebra with the following addition, multiplication and scaler product:
\[\sum_{g\in G}a_{g}g+\sum_{g\in G}b_{g}g=\sum_{g\in G}(a_{g}+b_{g })g,\] \[\big{(}\sum_{g\in G}a_{g}g\big{)}\big{(}\sum_{g\in G}b_{g}g\big{)} =\sum_{g\in G}\big{(}\sum_{g=g_{1}g_{2}}a_{g_{1}}b_{g_{2}}\big{)}g,\] \[\lambda\sum_{g\in G}a_{g}g=\sum_{g\in G}(\lambda a_{g})g,\]
where \(\lambda,a_{g},b_{g}\in\mathbb{C}\). If \(a_{g}=0\) for some \(g\), the term \(a_{g}g\) will be neglected in 2.1 and \(\sum_{g\in G}a_{g}g\) is written as \(a_{1}g_{1}+\cdots+a_{k}g_{k}\), where \(\{g\mid a_{g}\neq 0\}=\{g_{1},\ldots,g_{k}\}\) is non-empty and otherwise \(\sum_{g\in G}a_{g}g\) is denoted by \(0\). For a non-empty finite subset \(\Theta\) of \(G\), we denote by \(\widehat{\Theta}\) the element \(\sum_{\theta\in\Theta}\theta\) of \(\mathbb{C}[G]\).
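For instance, in \(\mathbb{C}[S_{3}]\) one has \(\big{(}(1,2)+(2,3)\big{)}^{2}=2\xi+(1,2)(2,3)+(2,3)(1,2)=2\xi+(1,2,3)+(1,3,2)\); products of exactly this form, with elements such as \(\widehat{S}\) or \(\widehat{T}\widehat{C}\) below, are the basic computations used in the rest of the paper.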
**Definition 2.4**.: _Let \(G\) be a finite group and \(S\) be a non-empty inverse closed subset of \(G\) not-containing the identity element \(\xi\) of \(G\). Then the Cayley graph \(\Gamma:=\operatorname{Cay}(G,S)\) is a simple graph with \(V(\Gamma)=\{g\,|\,g\in G\}\) and \(E(\Gamma)=\big{\{}\{g,h\}\,\big{|}\,g,h\in G,gh^{-1}\in S\big{\}}\)._
Let \(G\) be a finite group and \(S\) be a non-empty inverse closed subset of \(G\) not-containing the identity element \(\xi\) of \(G\). Now we have a metric \(d_{\Gamma}\) on \(G\) defined by \(\Gamma\) which is the shortest length of a path between two vertices in \(\operatorname{Cay}(G,S)\). For example if \(G=S_{n}\) and \(S=\{(1,2),(2,3),\ldots,(n-1,n)\}\), the metric \(d_{\Gamma}\) is the Kendall \(\tau\)-metric on \(S_{n}\). Also if \(G=S_{n}\) and \(S=T\cup T^{-1}\), where \(T:=\{(a,a+1,\ldots,b)\mid a<b,a,b\in[n]\}\), the metric \(d_{\Gamma}\) is the Ulam metric on \(S_{n}\).
**Definition 2.5**.: _For a positive integer \(r\) and an element \(g\in G\), the ball of radius \(r\) in \(G\) under the metric \(d_{\Gamma}\) is denoted by \(B_{r}^{\Gamma}(g)\) defined by \(B_{r}^{\Gamma}(g)=\{h\in G\mid d_{\Gamma}(g,h)\leq r\}\)._
**Remark 2.6**.: _Note that \(B_{r}^{\Gamma}(g)=(S^{r}\cup\{\xi\})g\), where \(S^{r}:=\{s_{1}\cdots s_{t}\mid s_{1},\ldots,s_{t}\in S,\,1\leq t\leq r\}\). Also note that since \(S\) is inverse closed, \(B_{r}^{\Gamma}(g)=S^{r}g\) for all \(r\geq 2\). It follows that \(|B_{r}^{\Gamma}(g)|=|B_{r}^{\Gamma}(1)|=|S^{r}\cup\{\xi\}|\) for all \(g\in G\)._
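As a small illustration of Definition 2.5 and Remark 2.6 (the snippet below is an added illustration, not part of the original text, and assumes plain Python), the ball sizes \(|B_{r}^{\Gamma}(\xi)|\) under the Kendall \(\tau\)-metric can be computed for small \(n\) by breadth-first search on \(\operatorname{Cay}(S_{n},S)\) with \(S=\{(i,i+1)\,|\,1\leq i\leq n-1\}\):

```python
# A sketch (plain Python): sizes of the balls B_r(xi) in Cay(S_n, S) for S the
# adjacent transpositions, i.e. under the Kendall tau-metric.  By Remark 2.6
# the size of a ball does not depend on its center.
from collections import deque

def kendall_ball_sizes(n, r_max):
    identity = tuple(range(n))

    def neighbors(p):
        # multiply by an adjacent transposition: swap two neighboring entries
        for i in range(n - 1):
            q = list(p)
            q[i], q[i + 1] = q[i + 1], q[i]
            yield tuple(q)

    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        p = queue.popleft()
        if dist[p] == r_max:
            continue                      # do not expand past radius r_max
        for q in neighbors(p):
            if q not in dist:
                dist[q] = dist[p] + 1
                queue.append(q)
    return [sum(1 for d in dist.values() if d <= r) for r in range(r_max + 1)]

if __name__ == "__main__":
    # n = 4 gives [1, 4, 9, 15]: the number of permutations of [4] with at
    # most 0, 1, 2, 3 inversions, respectively
    print(kendall_ball_sizes(4, 3))
```

In particular \(|B_{1}^{\Gamma}(\xi)|=|S\cup\{\xi\}|=n\), the quantity that appears below in the discussion of perfect codes.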
**Proposition 2.7**.: _Let \(G\) be a finite group and \(d_{\Gamma}\) be the metric induced by the graph \(\operatorname{Cay}(G,S)\). Then a subset \(C\) of \(G\) is a code with \(min\{d_{\Gamma}(x,y)\,|\,x,y\in C\}\geq d\) if and only if there exists \(Y\subset G\) such that_
\[(S^{\lfloor\frac{d-1}{2}\rfloor}\cup\{\xi\})\widehat{C}=\widehat{G}-\widehat{ Y}, \tag{2.2}\]
Proof.: Let \(r:=\lfloor\frac{d-1}{2}\rfloor\), \(Y=G\setminus\cup_{c\in C}B_{r}^{\Gamma}(c)\) and \(T:=S^{r}\cup\{\xi\}\). So \(G=\cup_{c\in C}B_{r}^{\Gamma}(c)\cup Y\). It follows from Remark 2.6 that for each \(c\in C\), \(B_{r}^{\Gamma}(c)=Tc\) and so \(\cup_{c\in C}B_{r}^{\Gamma}(c)=TC\). Therefore, \(\widehat{G}=\widehat{TC}+\widehat{Y}\). On the other hand, for any two distinct elements \(c,c^{\prime}\) in \(C\), \(Tc\cap Tc^{\prime}=\varnothing\) since otherwise \(d_{\Gamma}(c,c^{\prime})\leq d-1\) that is a contradiction. Hence, \(\widehat{TC}=\widehat{T}\widehat{C}\) and this completes the proof.
**Definition 2.8**.: _Let \(G\) be a finite group and \(d_{\Gamma}\) be the metric induced by \(\operatorname{Cay}(G,S)\). For a positive integer \(r\), an \(r\)-perfect code or a perfect code of radius \(r\) of \(G\) under the metric \(d_{\Gamma}\) is a subset \(C\) of \(G\) such that \(G=\cup_{c\in C}B_{r}^{\Gamma}(c)\) and \(B_{r}^{\Gamma}(c)\cap B_{r}^{\Gamma}(c^{\prime})=\varnothing\) for any two distinct \(c,c^{\prime}\in C\)._
**Remark 2.9**.: _By a similar argument as the proof of Proposition 2.7, it can be seen that if \(C\) is an \(r\)-perfect code, then \((\widehat{S^{r}\cup\{\xi\}})\widehat{C}=\widehat{G}\). We note that according to Remark 2.6\(C\) is an \(r\)-perfect code if and only if \(|C||S^{r}\cup\{\xi\}|=|G|\)._
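For example, for the Kendall \(\tau\)-metric on \(S_{n}\) we have \(|S\cup\{\xi\}|=n\), so a \(1\)-perfect code would have to contain exactly \(n!/n=(n-1)!\) codewords; since its balls of radius \(1\) are pairwise disjoint, such a code has minimum distance at least \(3\), and the upper bounds of the form \(P(n,3)\leq(n-1)!-1\) recorded in Table 1 therefore already rule out \(1\)-perfect codes for the corresponding values of \(n\).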
Let \(\rho\) be any (complex) _representation_ of a finite group \(G\) of dimension \(k\) for some positive integer \(k\), i.e., any group homomorphism from \(G\) to the general linear group \(\operatorname{GL}_{k}(\mathbb{C})\) of \(k\times k\) invertible matrices over \(\mathbb{C}\). Then by the universal property of \(\mathbb{C}[G]\), \(\rho\) can be extended to an algebra homomorphism \(\hat{\rho}\) from \(\mathbb{C}[G]\) to the algebra \(\operatorname{Mat}_{k}(\mathbb{C})\) of \(k\times k\) matrices over \(\mathbb{C}\) such that \(g^{\hat{\rho}}=g^{\rho}\) for all \(g\in G\). Thus the image of \(\widehat{\Theta}\) for any non-empty subset \(\Theta\) of \(G\) under \(\hat{\rho}\) is the element \(\sum_{\theta\in\Theta}\theta^{\rho}\) of \(\operatorname{Mat}_{k}(\mathbb{C})\). In particular by applying \(\hat{\rho}\) on the equality 2.2, we obtain
\[\big{(}\sum_{s\in S\cup\{\xi\}}s^{\rho}\big{)}\big{(}\sum_{c\in C}c^{\rho} \big{)}=\sum_{g\in G}g^{\rho}-\sum_{y\in Y}y^{\rho}, \tag{2.3}\]
where the latter equality is between elements of \(\operatorname{Mat}_{k}(\mathbb{C})\).
In the following, we state an important definition that it will play a central role in the proof of the main results of this paper.
**Definition 2.10**.: _Given a group \(G\) and a non-empty set \(\Theta\), recall that we say \(G\) acts on \(\Theta\) (from the right) if there exists a function \(\Theta\times G\to\Theta\) denoted by \((\theta,g)\mapsto\theta^{g}\) for all \((\theta,g)\in\Theta\times G\) if \((\theta^{g})^{h}=\theta^{gh}\) and \(\theta^{\xi}=\theta\) for all \(\theta\in\Theta\) and all \(g,h\in G\). For any \(\theta\in\Theta\) the set \(\operatorname{Stab}_{G}(\theta):=\{g\in G\mid\theta^{g}=\theta\}\) is called the stabilizer of \(\theta\) in \(G\) which is a subgroup of \(G\). If the action is transitive (i.e., for any two elements \(\theta_{1},\theta_{2}\in\Theta\), there exists \(g\in G\) such that \(\theta_{1}^{g}=\theta_{2}\)), all stabilizers are conjugate under the elements of \(G\), more precisely \(\operatorname{Stab}_{G}(\theta_{1})^{g}=\operatorname{Stab}_{G}(\theta_{2})\) whenever \(\theta_{1}^{g}=\theta_{2}\), where \(\operatorname{Stab}_{G}(\theta_{1})^{g}=g^{-1}\operatorname{Stab}_{G}(\theta _{1})g\)._
_Now suppose that_ \(G\) _acts on_ \(\Theta\) _and_ \(|\Theta|=k\) _is finite. Fix an arbitrary ordering on the elements of_ \(\Theta\) _so that_ \(\theta_{i}<\theta_{j}\) _whenever_ \(i<j\) _for distinct elements_ \(\theta_{i},\theta_{j}\in\Theta\)_. Denote by_ \(\rho_{\Theta}^{G}\) _the map from_ \(G\) _to_ \(\operatorname{GL}_{k}(\mathbb{Z})\) _defined by_ \(g\mapsto P_{g}\)_, where_ \(P_{g}\) _is the_ \(|\Theta|\times|\Theta|\) _matrix whose_ \((i,j)\) _entry is_ \(1\) _if_ \(\theta_{i}^{g}=\theta_{j}\) _and_ \(0\) _otherwise._
**Remark 2.11**.: _Note that the definitions of \(\rho_{\Theta}^{G}\) depends on the choice of the ordering on \(\Theta\), however any two such representations of \(G\) are conjugate by a permutation matrix._
**Remark 2.12**.: _Let \(H\) be a subgroup of a finite group \(G\) and \(X\) be the set of right cosets of \(H\) in \(G\), i.e., \(X:=\{Hg\,|\,g\in G\}\). Then \(G\) acts transitively on \(X\) via \((Hg,g_{0})\longrightarrow Hgg_{0}\). It is known that \(X\) partitions \(G\), i.e., \(G=\cup_{x\in X}x\) and \(x\cap x^{\prime}=\varnothing\) for all distinct elements \(x\) and \(x^{\prime}\) of \(X\), and \(|X|=|G|/|H|\)._
**Lemma 2.13**.: _Let \(H\) be a subgroup of a finite group \(G\) and \(X=\{Ha_{1},\dots,Ha_{m}\}\) be the set of right cosets of \(H\) in \(G\). If \(\mathcal{Y}\subset G\), then by fixing the ordering \(Ha_{i}<Ha_{j}\) whenever \(i<j\), the \((i,j)\) entry of \(\sum_{y\in\mathcal{Y}}y^{\rho_{X}^{G}}\) is \(|\mathcal{Y}\cap{a_{i}}^{-1}Ha_{j}|\)._
Proof.: Clearly, for any \(y\in\mathcal{Y}\), the \((i,j)\) entry of \(y^{\rho_{X}^{G}}\) is \(1\) if \(Ha_{i}y=Ha_{j}\) and is \(0\) otherwise. So the \((i,j)\) entry of \(y^{\rho_{X}^{G}}\) is \(1\) if \(a_{i}y{a_{j}}^{-1}\in H\), that is, if \(y\in{a_{i}}^{-1}Ha_{j}\). Hence, the \((i,j)\) entry of \(\sum_{y\in\mathcal{Y}}y^{\rho_{X}^{G}}\) is equal to \(|\{y\in\mathcal{Y}\,|\,y\in{a_{i}}^{-1}Ha_{j}\}|\). This completes the proof.
The following result summarizes the main method used in this paper.
**Theorem 2.14**.: _Let \(G\) be a finite group and \(d_{\Gamma}\) be the metric induced by the graph \(\operatorname{Cay}(G,S)\). Also, let \(C\) be a code in \(G\) with \(min\{d_{\Gamma}(c,c^{\prime})\,|\,c,c^{\prime}\in C\}\geq d\). If \(H\) is a subgroup of \(G\) and \(X\) is the set of right cosets of \(H\) in \(G\), then the optimal value of the objective function of the following integer programming problem gives an upper bound on \(|C|\)._
\[\text{Maximize} \sum_{i=1}^{|X|}x_{i},\] \[\text{subject to} \widehat{T^{\rho_{X}^{G}}}(x_{1},\dots,x_{|X|})^{t}\leq|H| \textbf{1},\] \[x_{i}\in\mathbb{Z},\ x_{i}\geq 0,\ i\in\{1,\dots,|X|\},\]
_where \(T:=S^{\lfloor\frac{d-1}{2}\rfloor}\cup\{\xi\}\), **1** is the column vector of order \(|X|\times 1\) whose entries are equal to \(1\)._
Proof.: Let \(r:=\lfloor\frac{d-1}{2}\rfloor\). By Equation 2.3, there exists \(Y\subset G\) such that
\[\big{(}\sum_{s\in T}s^{\rho_{X}^{G}}\big{)}\big{(}\sum_{c\in C}c^{\rho_{X}^{G}} \big{)}=\sum_{g\in G}g^{\rho_{X}^{G}}-\sum_{y\in Y}y^{\rho_{X}^{G}}, \tag{2.4}\]
Suppose that \(X=\{Ha_{1},\ldots,Ha_{m}\}\). Without loss of generality, we may assume that \(a_{1}=1\). We fix the ordering \(Ha_{i}<Ha_{j}\) whenever \(i<j\). By Lemma 2.13, the \((i,j)\) entry of \(\sum_{g\in G}g^{\rho_{X}^{G}}\) is equal to \(|G\cap{a_{i}}^{-1}Ha_{j}|\) and since \({a_{i}}^{-1}Ha_{j}\subseteq G\), the \((i,j)\) entry of \(\sum_{g\in G}g^{\rho_{X}^{G}}\) is equal to \(|{a_{i}}^{-1}Ha_{j}|=|H|\). So if \(B\) is a column of \(\sum_{g\in G}g^{\rho_{X}^{G}}\), then \(B=|H|\mathbf{1}\). Let \(\mathcal{C}\) be the first column of \(\sum_{c\in C}c^{\rho_{X}^{G}}\). Then Lemma 2.13 implies that for all \(1\leq i\leq|X|\), \(i\)-th row of \(\mathcal{C}\), denoted by \(c_{i}\), is equal to \(|C\cap Ha_{i}|\). Since \(C=C\cap G=\cup_{i=1}^{|X|}(C\cap Ha_{i})\) and \((C\cap Ha_{i})\cap(C\cap Ha_{j})=\varnothing\) for all \(i\neq j\), \(\sum_{i=1}^{|X|}c_{i}=|C|\). We note that by Lemma 2.13, all entries of matrix \(\widehat{F^{\rho_{X}^{G}}}\), \(F\in\{C,G,Y,T\}\), are integer and non-negative. Therefore \(\mathcal{C}\) is an integer solution for the following system of inequalities
\[\widehat{T^{\rho_{X}^{G}}}(x_{1},\ldots,x_{|X|})^{t}\leq|H|\mathbf{1}\]
such that \(\sum_{i=1}^{|X|}c_{i}=|C|\) and this completes the proof.
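To make the integer program concrete, the following sketch (an added illustration, not taken from the paper; it assumes NumPy and SciPy are available and only solves the linear-programming relaxation, which still upper-bounds \(|C|\) but is generally weaker than the integer program itself) instantiates Theorem 2.14 for the Kendall \(\tau\)-metric with \(d=3\) and \(H\) the Young subgroup of the partition \((n-1,1)\); by Lemma 3.3 below, \(\widehat{T^{\rho_{X}^{S_{n}}}}\) is then conjugate to the tridiagonal matrix (3.1), and conjugating by a permutation matrix only permutes the variables of the program.

```python
# A sketch (assumes numpy and scipy): the LP relaxation of the integer program
# of Theorem 2.14 for d = 3 and H the Young subgroup of the partition (n-1,1).
# The constraint matrix is the tridiagonal matrix of Lemma 3.3 and |H| = (n-1)!.
import numpy as np
from math import factorial
from scipy.optimize import linprog

def lp_upper_bound(n):
    diag = np.array([n - 1] + [n - 2] * (n - 2) + [n - 1], dtype=float)
    M = np.diag(diag) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    # maximize sum(x) subject to M x <= |H| * 1 and x >= 0
    res = linprog(c=-np.ones(n), A_ub=M, b_ub=factorial(n - 1) * np.ones(n),
                  bounds=[(0, None)] * n, method="highs")
    return -res.fun

if __name__ == "__main__":
    for n in (6, 7, 11):
        print(n, lp_upper_bound(n))
```

For this choice of \(H\) the relaxation evaluates to the ball-packing value \((n-1)!\); replacing the LP by the integer program itself (for instance with a mixed-integer solver) can only return a smaller or equal value, and hence a bound that is at least as strong.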
## 3. Results
Let \(G=S_{n}\) and \(S=\{(i,i+1)\,|\,1\leq i\leq n-1\}\), then the metric induced by \(\mathrm{Cay}(G,S)\) on \(S_{n}\) is the Kendall \(\tau\)-metric. In this section, by using the result in Section 2, we improve the upper bound of \(P(n,3)\) when \(n\in\{6,14,15\}\) or \(n\geq 7\) is a prime number. We note that for two permutations \(\sigma\) and \(\lambda\) of \(S_{n}\), their multiplication \(\lambda\cdot\sigma\) is defined as the composition of \(\sigma\) on \(\lambda\), namely \(\lambda\cdot\sigma(i)=\sigma(\lambda(i))\) for all \(i\in[n]\).
In order to apply Theorem 2.14, we need to fix the subgroup \(H\), and different choices of \(H\) will lead to different results. A traditional and well-developed candidate for \(H\) is a Young subgroup, which we now recall [8].
**Definition 3.1**.: _By a number partition \(\lambda\) of \(n\)\((\)with length \(m)\) we mean an \(m\)-tuple \((\lambda_{1},\ldots,\lambda_{m})\) of positive integers such that \(\lambda_{1}\geq\cdots\geq\lambda_{m}\) and \(n=\sum_{i=1}^{m}\lambda_{i}\). If \(\lambda\) and \(\mu\) are two partitions of \(n\), we say that \(\lambda\) dominates \(\mu\), and write \(\lambda\triangleleft\mu\), provided that \(\sum_{i=1}^{j}\lambda_{i}\geq\sum_{i=1}^{j}\mu_{i}\) for all \(j\). Let \(\lambda\) be a partition of \(n\) and \(\Delta:=(\Delta_{1},\ldots,\Delta_{m})\) be an \(m\)-tuple of non-empty subsets of \([n]\) forming a set partition of \([n]\) with \(|\Delta_{i}|=\lambda_{i}\) for all \(i=1,\ldots,m\). We associate a Young subgroup \(S_{\Delta}\) of \(S_{n}\) by taking \(S_{\Delta}=S_{\Delta_{1}}\times\cdots\times S_{\Delta_{m}}\), where \(S_{\Delta_{i}}\) is the symmetric group on the set \(\Delta_{i}\) for all \(i=1,\ldots,m\)._
**Remark 3.2**.: _Let \(\lambda\) be a partition of \(n\) and \(\Delta\), \(\Delta^{\prime}\) be two \(m\)-tuples of non-empty subsets of \([n]\), each forming a set partition of \([n]\) with \(|\Delta_{i}|=|\Delta_{i}^{\prime}|=\lambda_{i}\) for all \(i=1,\ldots,m\). It is known that the representations \(\rho_{X}^{S_{n}}\) and \(\rho_{X^{\prime}}^{S_{n}}\), where \(X\) and \(X^{\prime}\) are the sets of right cosets of the Young subgroups \(S_{\Delta}\) and \(S_{\Delta^{\prime}}\) in \(S_{n}\), respectively, are equivalent \((\)i.e., a matrix \(U\) exists such that \(U^{-1}\rho_{X}^{S_{n}}(\sigma)U=\rho_{X^{\prime}}^{S_{n}}(\sigma)\) for all \(\sigma\in S_{n})\). Hence, we use the \(m\)-tuple of non-empty subsets of \([n]\), \([\{1,\ldots,\lambda_{1}\},\{\lambda_{1}+1,\ldots,\lambda_{1}+\lambda_{2}\}, \ldots,\{n-\lambda_{m}+1,\ldots,n\}]\), for the Young subgroup corresponding to the partition \(\lambda=(\lambda_{1},\ldots,\lambda_{m})\), as we are studying these representations up to equivalence._
For example, if \(n=7\) and \(\lambda=(3,2,2)\), then the Young subgroup corresponding to the partition \(\lambda\) is the subgroup \(H=\{\sigma_{1}\cdot\sigma_{2}\cdot\sigma_{3}\,|\,\sigma_{1}\in S_{3},\sigma_{2} \in S_{\{4,5\}},\sigma_{3}\in S_{\{6,7\}}\}\).
**Lemma 3.3**.: _Let \(H\) be a Young subgroup of \(S_{n}\) corresponding to the partition \(\lambda:=(n-1,1)\) and \(X\) be the set of right cosets of \(H\) in \(S_{n}\). If \(S=\{(i,i+1)\,|\,1\leq i\leq n-1\}\) and \(T:=S\cup\{\xi\}\), then \(\widehat{T^{\rho_{X}^{S_{n}}}}\) is a conjugate by a permutation matrix of the following matrix_
\[\begin{pmatrix}n-1&1&0&0&\dots&0\\ 1&n-2&1&0&\dots&0\\ 0&1&n-2&1&0&0\\ \vdots&\dots&\ddots&\ddots&\ddots&\vdots\\ 0&0&\dots&1&n-2&1\\ 0&0&\dots&0&1&n-1\end{pmatrix}. \tag{3.1}\]
Proof.: Without loss of generality we may assume that the set partition of \([n]\) corresponding to \(\lambda\) is \(\{\{1\},\{2,\dots,n\}\}\) and therefore \(H=\operatorname{Stab}_{S_{n}}(1)\). Clearly, for each \(i\in[n]\), if \(\sigma\in H(1,i)\), then \(\sigma(1)=i\) and so \(H(1,i)\cap H(1,j)=\varnothing\) for all \(i\neq j\). So we can let \(X=\{H(1,i)\,|\,1\leq i\leq n\}\), where we are using the convention \(H(1,1):=H\). Fix the ordering of \(X\) such that \(H(1,i)<H(1,j)\) if \(i<j\). By Lemma 2.13, the \((i,j)\) entry of \(\widehat{T^{\rho_{X}^{S_{n}}}}\) is equal to \(|T\cap(1,i)H(1,j)|\). If \(i=j\), then Remark 2.10 implies \((1,i)H(1,i)=\operatorname{Stab}_{S_{n}}(i)\) and hence \(T\cap(1,i)H(1,i)=T\setminus\{(i-1,i),(i,i+1)\}\) if \(2\leq i\leq n-1\), \(T\cap(1,n)H(1,n)=T\setminus\{(n-1,n)\}\) and \(T\cap H=T\setminus\{(1,2)\}\). Now suppose that \(i\neq j\). Clearly \((1,i)\cdot(i,j)\cdot(1,j)=(i,j)\). Let \(h\in H\). Then \(\sigma:=(1,i)\cdot h\cdot(1,j)=\pi(1,j,i)\), where \(\pi=(1,i)\cdot h\cdot(1,i)\in\operatorname{Stab}_{S_{n}}(i)\). Since \(\pi(i)=i\), \(\sigma(j)=i\) and therefore \(\sigma\) is a transposition if and only if \(h=(i,j)\). Hence, if \(j=i+1\) or \(j=i-1\), then \(T\cap(1,i)H(1,j)\) is equal to \(\{(i,i+1)\}\) or \(\{(i-1,i)\}\), respectively, and otherwise \(T\cap(1,i)H(1,j)=\varnothing\). This completes the proof.
**Theorem 3.4**.: _Let \(p\geq 7\) be a prime number and consider the \(p\times p\) matrix_
\[M=\begin{pmatrix}p-1&1&0&0&\dots&0\\ 1&p-2&1&0&\dots&0\\ 0&1&p-2&1&0&0\\ \vdots&\dots&\ddots&\ddots&\vdots\\ 0&0&\dots&1&p-2&1\\ 0&0&\dots&0&1&p-1\end{pmatrix}.\]
_Consider the system of inequalities \(M(x_{1},\dots,x_{p})^{t}\leq(p-1)!\mathbf{1}\) with \((x_{1},\dots,x_{p})^{t}\geq\mathbf{0}\) and \(x_{i}\) are integers. Let \(x_{\max}:=\max\{x_{i}\mid i=1,\dots,p\}\). Then_
1. \(|\{i\in[p]\mid x_{i}\leq\frac{(p-1)!}{p}\}|\geq\lceil\frac{p}{3}\rceil\)_._
2. _If_ \(\sum_{i=1}^{p}x_{i}=(p-1)!-k\)_, then_ \(|\{i\mid x_{i}=x_{\max}\}|\geq p-k-2\)_._
3. \(\sum_{i=1}^{p}x_{i}\leq(p-1)!-\lceil\frac{p}{3}\rceil+2\)_._
Proof.: Let \(\mathcal{A}:=\{i\in[p]\mid x_{i}\leq\frac{(p-1)!}{p}\}\) and \(\mathcal{B}:=\{i\mid x_{i}=x_{\max}\}\). Consider the partition \(\{\{1,2\},\{3,4,5\},\{6,7,8\},\dots,\{p-2,p-1,p\}\}\) of \([p]\) if \(p\equiv 2\mod 3\) and the partition \(\{\{1,2\},\{3,4,5\},\{6,7,8\},\dots,\{p-4,p-3,p-2\},\{p-1,p\}\}\) if \(p\equiv 1\mod 3\). Each member of these partitions corresponds to an obvious inequality; e.g. \(\{1,2\}\) and \(\{p-2,p-1,p\}\) correspond, respectively, to \((p-1)x_{1}+x_{2}\leq(p-1)!\) and
\(x_{p-2}+(p-2)x_{p-1}+x_{p}\leq(p-1)!\). The inequality corresponding to a member \(P\) of the partition forces \(x_{i}\leq(p-1)!/p\) for some \(i\in P\), namely for \(x_{i}=\min\{x_{j}\mid j\in P\}\). Since both partitions have \(\lceil\frac{p}{3}\rceil\) members, we have that \(|\mathcal{A}|\geq\lceil\frac{p}{3}\rceil\) and so the first part is proved.
It follows from \(M(x_{1},\ldots,x_{p})^{t}\leq(p-1)!\mathbf{1}\) and \((x_{1},\ldots,x_{p})^{t}\geq\mathbf{0}\) that \(0\leq\sum_{i=1}^{p}M_{i}\mathbf{x}=p(\sum_{i=1}^{p}x_{i})\leq p!\), where \(M_{i}\) is \(i\)-th row of \(M\) and so \(0\leq\sum_{i=1}^{p}x_{i}\leq(p-1)!\). Let \(\ell\in[p]\) be such that \(x_{\ell}=x_{\max}\). Thus \(\sum_{i=1,i\neq\ell-1,\ell+1}^{p}(x_{\ell}-x_{i})=x_{\ell-1}+(p-2)x_{\ell}+x_{ \ell+1}-\sum_{i=1}^{p}x_{i}\leq(p-1)!-((p-1)!-k)\). Thus \(\sum_{i=1,i\neq\ell-1,\ell+1}^{p}(x_{\ell}-x_{i})\in\{0,1,\ldots,k\}\). It follows that \(|\{i\mid x_{i}<x_{\max}\}|\leq k+2\) and so \(|\mathcal{B}|\geq p-k-2\) and the second part is proved.
Let \(\sum_{i=1}^{p}x_{i}=(p-1)!-k\) and suppose, for a contradiction, that \(k<\lceil\frac{p}{3}\rceil-2\). So \(|\mathcal{B}|\geq p-\lceil\frac{p}{3}\rceil+1\) and therefore
\[|\mathcal{A}\cap\mathcal{B}|\geq|\mathcal{A}|+|\mathcal{B}|-p\geq\lceil \frac{p}{3}\rceil+p-\lceil\frac{p}{3}\rceil+1-p\geq 1.\]
Hence \(\mathcal{A}\cap\mathcal{B}\neq\varnothing\) and \(x_{\max}\leq(p-1)!/p\). Since \(p\) is prime, by Wilson's theorem [4, p. 27], \((p-1)!\equiv-1\pmod p\). Since \(x_{\max}\) is an integer, we have that \(x_{i}\leq\frac{(p-1)!+1}{p}-1\) for all \(i\in[p]\). Therefore
\[\sum_{i=1}^{p}x_{i}=(p-1)!-k\leq p(\frac{(p-1)!+1}{p}-1)=(p-1)!+1-p\]
and so
\[p\leq k+1<\lceil\frac{p}{3}\rceil-1,\]
which is a contradiction. So we must have \(k\geq\lceil\frac{p}{3}\rceil-2\). This completes the proof.
In the following we will prove Theorem 1.1.
**Theorem:** For all primes \(p\geq 11\), \(P(p,3)\leq(p-1)!-\lceil\frac{p}{3}\rceil+2\leq(p-1)!-2\).
Proof.: Let \(C\) be a code in \(S_{p}\) with minimum Kendall \(\tau\)-distance \(3\). Let \(H\) be the Young subgroup of \(S_{p}\) corresponding to the partition \(\lambda:=(p-1,1)\) and \(X\) be the set of right cosets of \(H\) in \(S_{p}\). If \(S=\{(i,i+1)\,|\,1\leq i\leq p-1\}\) and \(T:=S\cup\{\xi\}\), then by Lemma 3.3, \(\widehat{T^{\rho_{X}^{S_{p}}}}\) is a conjugate by a permutation matrix of the matrix \(M\) in Theorem 3.4. Now Theorem 2.14 implies that the optimal value of the objective function of the following integer programming problem gives an upper bound on \(|C|\)
\[\text{Maximize} \sum_{i=1}^{p}x_{i},\] subject to \[M(x_{1},\ldots,x_{p})^{t}\leq|H|\mathbf{1}=(p-1)!\mathbf{1},\] \[x_{i}\in\mathbb{Z},\ x_{i}\geq 0,\ i\in\{1,\ldots,p\},\]
where \(\mathbf{1}\) is a column vector of order \(p\times 1\) whose entries are equal to \(1\). Therefore, the result follows from Theorem 3.4. This completes the proof.
**Theorem 3.5**.: _If \(n\) is equal to \(6\), \(7\), \(11\), \(13\) and \(17\), then \(P(n,3)\) is less than or equal to \(116\), \(716\), \(10!-10\), \(12!-12\) and \(16!-5\), respectively._
Proof.: Let \(S:=\{(i,i+1)\,|\,1\leq i\leq n-1\}\). In view of Theorem 2.14, we have used the CPLEX software [3] and the GAP software [6] to determine the upper bound on \(P(n,3)\) obtained from solving the integer programming problem corresponding to a subgroup \(H\) of \(S_{n}\), where \(H\) is the Young subgroup corresponding to the partition \((2,2,2)\) when \(n=6\), \((5,1,1)\) when \(n=7\), \((9,2)\) when \(n=11\), \((11,2)\) when \(n=13\), and \((16,1)\) when \(n=17\). For each of the above subgroups, we first used GAP [6] to determine the matrix \(\widehat{(T)^{\rho_{X}^{S_{n}}}}\), where \(X\) is the set of right cosets of \(H\) in \(S_{n}\) and \(T:=S\cup\{\xi\}\), and then used CPLEX [3] to solve the integer programming problem corresponding to the subgroup \(H\).
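For readers who wish to experiment with small instances without CPLEX or GAP, the following minimal Python sketch (our illustration, not the computation used above) solves the integer program of Theorem 2.14 for the Young subgroup corresponding to the partition \((n-1,1)\), whose matrix is the tridiagonal matrix of Lemma 3.3. It assumes the PuLP package with its bundled CBC solver is available; the returned value is an upper bound on \(P(n,3)\).

```python
import math
import pulp  # assumed available; any ILP solver could be substituted

def kendall_upper_bound(n: int) -> int:
    """Upper bound on P(n, 3) from Theorem 2.14 with H the Young subgroup of type (n-1, 1)."""
    H_size = math.factorial(n - 1)             # |H| = (n-1)!
    # Tridiagonal matrix of Lemma 3.3 (the number of right cosets is n).
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = n - 1 if i in (0, n - 1) else n - 2
        if i > 0:
            M[i][i - 1] = 1
        if i < n - 1:
            M[i][i + 1] = 1
    prob = pulp.LpProblem("upper_bound_on_P_n_3", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", lowBound=0, cat="Integer") for i in range(n)]
    prob += pulp.lpSum(x)                                      # maximize sum_i x_i
    for row in M:
        prob += pulp.lpSum(c * xi for c, xi in zip(row, x)) <= H_size
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return int(round(pulp.value(prob.objective)))

# For a prime n, Theorem 3.4 guarantees the printed value is at most (n-1)! - ceil(n/3) + 2.
print(kendall_upper_bound(7))
```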
To prove the non-existence of \(1\)-perfect codes in \(S_{14}\) and \(S_{15}\), we use a technique from [5], which is stated in the following proposition.
**Proposition 3.6**.: _[_5_, Theorem 2.2]_ _Let \(S=\{(i,i+1)\,|\,1\leq i\leq n-1\}\) and \(T:=S\cup\{\xi\}\). If \(S_{n}\) contains a subgroup \(H\) such that \(n\nmid|H|\) and \(\widehat{(T)^{\rho_{X}^{S_{n}}}}\) is invertible, where \(X\) is the set of right cosets of \(H\) in \(S_{n}\), then \(S_{n}\) contains no \(1\)-perfect codes._
**Theorem 3.7**.: _There are no \(1\)-perfect codes under the Kendall \(\tau\)-metric in \(S_{n}\) when \(n\in\{14,15\}\)._
Proof.: Let \(S=\{(i,i+1)\,|\,1\leq i\leq n-1\}\) and \(T:=S\cup\{\xi\}\). By Proposition 3.6, to prove the non-existence of \(1\)-perfect codes under the Kendall \(\tau\)-metric in \(S_{n}\), we need to consider Young subgroups \(H\) of \(S_{n}\), \(n\in\{14,15\}\), with two properties: (1) \(n\nmid|H|\); (2) the matrix \(\widehat{(T)^{\rho_{X}^{S_{n}}}}\) is invertible. Since \(\widehat{(T)^{\rho_{X}^{S_{n}}}}\) is a matrix of dimension \(n!/|H|\), choosing \(H\) with a larger size decreases the dimension of the matrix \(\widehat{(T)^{\rho_{X}^{S_{n}}}}\). In the case \(n=14\), we consider the Young subgroup \(H\) corresponding to the partition \((6,6,2)\). It is clear that \(14\nmid|H|=6!6!2!\). Also, by a software check, the matrix \(\widehat{(T)^{\rho_{X}^{S_{14}}}}\), which has dimension \(84084\), is invertible, and so there are no \(1\)-perfect codes under the Kendall \(\tau\)-metric in \(S_{14}\). In the case \(n=15\), the largest Young subgroup \(H\) of \(S_{15}\) which satisfies condition (1) is the Young subgroup corresponding to the partition \(\lambda:=(4,4,4,3)\). In this case the matrix \(\widehat{(T)^{\rho_{X}^{S_{15}}}}\) has dimension \(1051050\), and the software was unable to check its invertibility directly, so we used [8, Corollary 2.2.22] instead. By [8, Corollary 2.2.22], if for all partitions \(\mu\) of \(15\) with \(\mu\unlhd\lambda\) the matrices \(\widehat{T^{\rho_{\mu}}}\) are invertible, where \(\rho_{\mu}\) is the irreducible representation of \(S_{15}\) corresponding to \(\mu\), then \(\widehat{T^{\rho_{X}^{S_{15}}}}\) is invertible. There exist \(54\) partitions of \(15\) which dominate the partition \(\lambda\). By a software check, for each of these \(54\) partitions \(\mu\) the matrix \(\widehat{T^{\rho_{\mu}}}\) is invertible (Table 2 shows the dimension and the eigenvalue with smallest absolute value of these matrices), and so \(\widehat{(T)^{\rho_{X}^{S_{15}}}}\) is invertible. This completes the proof.
**Conjecture 3.8**.: _If \(H\) is the Young subgroup corresponding to the partition \((p-1,p-1,2)\) of \(S_{2p}\), where \(p\geq 3\) is a prime number, and \(X\) is the set of right cosets of \(H\) in \(S_{2p}\), then \(\widehat{(S\cup\{\xi\})^{\rho_{X}^{S_{2p}}}}\) is invertible. In particular, there is no \(1\)-perfect permutation code of length \(2p\) with respect to the Kendall \(\tau\)-metric._
We note that, by a software check, Conjecture 3.8 holds for \(p\in\{3,5,7\}\).
\begin{table}
\begin{tabular}{c c c c} item & Partition & Dimension & Eigenvalue \\ \hline
44 & (7, 6, 1, 1) & 27027 & \(3.757\times 10^{-4}\) \\
45 & (8, 5, 1, 1) & 35100 & \(-7.440\times 10^{-5}\) \\
46 & (9, 3, 2, 1) & 42042 & \(6.633\times 10^{-4}\) \\
47 & (6, 6, 2, 1) & 50050 & \(-1.680\times 10^{-5}\) \\
48 & (5, 5, 4, 1) & 54054 & \(1.934\times 10^{-4}\) \\
49 & (8, 3, 3, 1) & 57330 & \(9.513\times 10^{-5}\) \\
50 & (6, 4, 4, 1) & 80080 & \(-2.972\times 10^{-5}\) \\
51 & (8, 4, 2, 1) & 91000 & \(-2.590\times 10^{-5}\) \\
52 & (7, 5, 2, 1) & 108108 & \(-3.672\times 10^{-5}\) \\
53 & (6, 5, 3, 1) & 128700 & \(-1.920\times 10^{-5}\) \\
54 & (7, 4, 3, 1) & 135135 & \(-2.627\times 10^{-6}\) \\ \end{tabular}
\end{table}
Table 2. The dimension and the eigenvalue with smallest absolute value of the matrix \(\widehat{T^{\rho_{\mu}}}\) for all partitions \(\mu\) of 15 which dominate the partition (4,4,4,3).
## 4. Conclusion
Due to the applications of PCs under the Kendall \(\tau\)-metric in flash memories, they have attracted the attention of many researchers. In this paper, we consider the upper bound on the size of the largest PC with minimum Kendall \(\tau\)-distance 3. Using group theory, we formulate an integer programming problem that depends on a chosen non-trivial subgroup of \(S_{n}\), where the optimal value of the objective function gives an upper bound on \(P(n,3)\). After that, by solving the integer programming problems corresponding to some subgroups of \(S_{n}\), when \(n\geq 7\) is a prime number or \(n\in\{6,14,15\}\), we improve the upper bound on \(P(n,3)\).
## 5. Declarations
### Ethical Approval and Consent to participate
Not applicable.
The current manuscript does not report on or involve the use of any animal or human data or tissue.
### Consent for publication
Not applicable.
The current manuscript does not contain data from any individual person.
### Availability of supporting data
All data generated or analysed during this study are included in this published article.
### Competing interests
The authors declare that they have no competing interests.
### Funding
This research is supported by the Deputy of Research and Technology of University of Isfahan under a grant given to the research group CSG (Code-Scheme-Group). F. Parvaresh is also supported by IPM in part by grant No. 1401680050.
These funding sources had no role in the design of this study and will not have any role during its execution, analyses, interpretation of the data, or decision to submit results.
### Authors' contributions
All authors read and approved the final manuscript. |
2309.01828 | Secure and Efficient Federated Learning in LEO Constellations using
Decentralized Key Generation and On-Orbit Model Aggregation | Satellite technologies have advanced drastically in recent years, leading to
a heated interest in launching small satellites into low Earth orbit (LEOs) to
collect massive data such as satellite imagery. Downloading these data to a
ground station (GS) to perform centralized learning to build an AI model is not
practical due to the limited and expensive bandwidth. Federated learning (FL)
offers a potential solution but will incur a very large convergence delay due
to the highly sporadic and irregular connectivity between LEO satellites and
GS. In addition, there are significant security and privacy risks where
eavesdroppers or curious servers/satellites may infer raw data from satellites'
model parameters transmitted over insecure communication channels. To address
these issues, this paper proposes FedSecure, a secure FL approach designed for
LEO constellations, which consists of two novel components: (1) decentralized
key generation that protects satellite data privacy using a functional
encryption scheme, and (2) on-orbit model forwarding and aggregation that
generates a partial global model per orbit to minimize the idle waiting time
for invisible satellites to enter the visible zone of the GS. Our analysis and
results show that FedSecure preserves the privacy of each satellite's data
against eavesdroppers, a curious server, or curious satellites. It is
lightweight with significantly lower communication and computation overheads
than other privacy-preserving FL aggregation approaches. It also reduces
convergence delay drastically from days to only a few hours, yet achieving high
accuracy of up to 85.35% using realistic satellite images. | Mohamed Elmahallawy, Tie Luo, Mohamed I. Ibrahem | 2023-09-04T21:36:46Z | http://arxiv.org/abs/2309.01828v1 | Secure and Efficient Federated Learning in LEO Constellations using Decentralized Key Generation and On-Orbit Model Aggregation
###### Abstract
Satellite technologies have advanced drastically in recent years, leading to a heated interest in launching small satellites into low Earth orbit (LEOs) to collect massive data such as satellite imagery. Downloading these data to a ground station (GS) to perform centralized learning to build an AI model is not practical due to the limited and expensive bandwidth. Federated learning (FL) offers a potential solution but will incur a very large convergence delay due to the highly sporadic and irregular connectivity between LEO satellites and GS. In addition, there are significant security and privacy risks where eavesdroppers or curious servers/satellites may infer raw data from satellites' model parameters transmitted over insecure communication channels. To address these issues, this paper proposes _FedSecure_, a secure FL approach designed for LEO constellations, which consists of two novel components: (1) _decentralized key generation_ that protects satellite data privacy using a functional encryption scheme, and (2) _on-orbit model forwarding and aggregation_ that generates a partial global model per orbit to minimize the idle waiting time for invisible satellites to enter the visible zone of the GS. Our analysis and results show that FedSecure preserves the privacy of each satellite's data against eavesdroppers, a curious server, or curious satellites. It is lightweight with significantly lower communication and computation overheads than other privacy-preserving FL aggregation approaches. It also reduces convergence delay drastically from days to only a few hours, yet achieving high accuracy of up to 85.35% using realistic satellite images.
Low Earth orbit (LEO), satellite communication (SatCom), federated learning (FL), privacy preservation.
Footnote †: Corresponding author. This work is supported by the National Science Foundation (NSF) under Grant No. 2008878.
## I Introduction
**Background.** The advancement of satellite technology has enabled the launching of many small satellites into low Earth orbits (LEOs). Equipped with multiple sensors and cameras, these satellites gather extensive data about the Earth and space, allowing large AI models to be trained to support various applications such as monitoring remote areas like deserts, forests, and maritime regions, as well as homeland security like border surveillance and military reconnaissance. However, the traditional approach of centralized learning, which requires downloading the satellite data (e.g., imagery) to a ground station (GS), is impractical due to the limited bandwidth, highly intermittent connectivity between satellites and GS, and data privacy.
Federated learning (FL) [1] offers a promising solution as a distributed learning paradigm, which both saves bandwidth and preserves privacy. In FL, each client (satellite in our context) trains a local machine learning (ML) model onboard and sends only the model parameters (instead of raw data) to an aggregation server \(\mathcal{AS}\) (GS in our context). The \(\mathcal{AS}\) then aggregates all the received local models into a global model and sends it back to all the satellites for re-training. This procedure repeats until the global model eventually converges.
**FL-LEO Challenges.** However, applying FL to satellite communications (SatCom), or more specifically LEO constellations, faces significant challenges. First, there is a large delay in every communication round between satellites and \(\mathcal{AS}\) and it leads to a very slow FL convergence process which often takes several days [2]. This delay is caused by the highly irregular and sporadic connectivity between satellites and the \(\mathcal{AS}\), which is attributed to the distinct Earth rotation and satellite orbiting trajectories. Second, transmitting model parameters in FL may appear "safe" but is in fact still vulnerable under certain attacks such as model inversion and membership inference [3].
**Contributions.** To address the above challenges, we propose FedSecure, a secure FL-LEO framework to ensure both fast convergence and protection of privacy leakage, while maintaining high accuracy of the global model. Specifically,
* FedSecure consists of a functional encryption-based aggregation scheme that prevents the private satellite data from being inferred and satellite models from being deciphered, _without requiring any trusted key distribution center (KDC) to generate public/private keys or any secure channel to distribute the keys among nodes_.
* We also propose an on-orbit model forwarding and aggregation scheme that generates a _partial global model_ per orbit via intra-orbit collaboration, which significantly reduces the waiting time for invisible satellites to enter the visible zone of \(\mathcal{AS}\) for model transmissions.
* Our extensive experiments on a real satellite imagery dataset for semantic segmentation tasks demonstrate that FedSecure achieves convergence in only 3 hours while maintaining competitive accuracy across various performance metrics including IoU and the Dice Coefficient. It is also lightweight with low computation overhead (\(<9\) ms) and communication overhead (497 MB).
## II Related Work
Despite the relative youth of the FL-LEO research field, notable studies have made initial strides in this area [4, 5, 6, 7, 8, 9, 10, 11]. In the synchronous FL category [4, 5, 6, 7], the \(\mathcal{AS}\) waits to receive _all_ the satellites' models in each training round. FedISL [4] uses inter-satellite-link (ISL) to reduce the waiting time, but it achieves fast convergence only when the \(\mathcal{AS}\) is a GS located at the north pole (NP) or a satellite in a medium Earth orbit (MEO) above the Equator, otherwise it needs several days for convergence. The work [5] removes these restrictions and, in order to reduce delay, designates a sink satellite per orbit to collect models from satellites in the same orbit. However, it requires each satellite to run a distributed scheduler which incurs extra delay. In [6], the authors proposed a method for dynamically aggregating satellite models based on connection density, involving collaboration among multiple GSs. However, ensuring model consistency across GSs is challenging and adds overhead. Lastly, the authors of
[7] proposed FedHAP which uses multiple airships or balloons to act as \(\mathcal{AS}\)s to collect models from satellites, but it requires extra hardware (HAPs) to be deployed.
In the asynchronous FL category [8, 9, 10], the \(\mathcal{AS}\) only collects models from a subset of satellites in each training round. One such approach is AsyncFLEO [8] which groups satellites according to model staleness and selects only fresh models from each group while down-weighting outdated models. So et al. proposed FedSpace [9], which aims to balance the idle waiting in synchronous FL and the model staleness in asynchronous FL by scheduling the aggregation process based on satellite connectivity. However, it requires satellites to upload a portion of raw data to the GS, which contradicts the FL principles on communication efficiency and data privacy. In [10], a graph-based routing and resource reservation algorithm is introduced to optimize the delay in FL model parameter transfer. The algorithm improves a storage time-aggregated graph, providing a comprehensive representation of the satellite networks' transmission, storage, and computing resources.
To the best of our knowledge, our work is the first that addresses security threats in FL-LEO against both internal and external adversaries (cf. Section III-B). Although some existing privacy-preserving and cryptographic techniques such as homomorphic encryption [12] could be applied, these approaches have limitations such as large ciphertexts and high communication overhead; also importantly, they require a trusted KDC to generate and distribute public and private keys which further requires a secure communication channel as well between all clients and the \(\mathcal{AS}\). Other classical cryptographic protocols such as differential privacy (DP) and secure multi-party computation (SMC) can also be applied, but they suffer from high encryption and communication overheads and can degrade the accuracy of the global model (e.g., DP adds noise to local models).
## III System and Threat Model
Our objective is to develop a decentralized FL-LEO approach that ensures both data and model privacy of LEO satellites, without requiring a KDC (and the associated secure channel).
### _System Model_
Our system model (Fig. 1) comprises:
1. LEO satellites: An LEO constellation \(\mathcal{K}\) consists of multiple satellites, indexed by \(i\), orbiting the Earth in \(L\) orbits. Each orbit \(l\) has a set of equally-spaced satellites. While orbiting, each satellite \(i\) captures high-resolution images for training an ML model for various classification tasks (e.g., detecting forest fires or hurricanes, monitoring country borders, etc.). During each round of FL training, all LEO satellites receive an (initial or updated) global model from the \(\mathcal{AS}\) during their respective visible windows, and then (re)train the model using their own local data. After training, they encrypt the model parameters and send them back to the \(\mathcal{AS}\), which aggregates them into a global model and then sends it back to all satellites again.
2. Aggregation Server \(\mathcal{AS}\): The \(\mathcal{AS}\) initiates the FL process by sending an initial (typically randomly initialized) global model to all LEO satellites successively during their respective visible windows. Then, after receiving the retrained local models from the visible satellites successively, the \(\mathcal{AS}\) aggregates them into an updated global model, and broadcasts it back to visible satellites again for another round of training. This process continues until a termination criterion is met, such as reaching a target accuracy or loss, maximum number of communication rounds, or negligible change of model parameters.
### _Threat Model_
Our threat model encompasses both external and internal adversaries, as outlined below:
1. **External adversaries.** An adversary may eavesdrop on the communication links between LEO satellites and the \(\mathcal{AS}\) to steal or monitor the model parameters, thereby inferring sensitive information about satellite data from their parameters.
2. **Internal adversaries.** The \(\mathcal{AS}\) and LEO satellites are _honest-but-curious_ participants in FL training, meaning that they follow the FL protocol honestly but may be curious to learn/infer sensitive information about the raw data of some or all the satellites. In addition, we also consider possible collusion between the \(\mathcal{AS}\) and some satellites (e.g., launched by an operator, to steal information from satellites owned by another operator).
In general, the transmission of local model parameters may leave LEO satellites vulnerable to attacks such as membership inference, model inversion, etc., which are all subsumed by the above threat model.
### _Design Goals_
1. **Privacy preservation:** The solution should ensure the confidentiality of the ML model parameters and prevent any information leakage to attackers/eavesdroppers, the \(\mathcal{AS}\), or other LEO satellites.
2. **Efficiency:** The available communication bandwidth should be used efficiently. This may involve minimizing the model size and exchange frequency between the \(\mathcal{AS}\) and satellites, as well as reducing communication overhead.
3. **Accuracy:** The final global model resulting from FL should still achieve competitive performance.
## IV Federated Learning in LEO constellations
### _FL-LEO's Computation Model_
Fig. 1: System model: an LEO constellation with multiple (4) orbits.

The overarching objective of the \(\mathcal{AS}\) and all LEO satellites \(\mathcal{K}\) is to collaboratively train a global ML model, which involves the following steps: (i) the \(\mathcal{AS}\) initializes an ML model and sends it to each individual satellite when that satellite enters the \(\mathcal{AS}\)' visible zone; (ii) each satellite then trains the model using a local optimization method, typically mini-batch gradient descent, \(\mathbf{w}_{i}^{\beta,j+1}=\mathbf{w}_{i}^{\beta,j}-\zeta\nabla F_{i}(\mathbf{w}_{i}^{\beta, j};X_{i}^{j})\), where \(\mathbf{w}_{i}^{\beta,j}\) is the local model of satellite \(i\) at the \(j\)-th local iteration in a global communication round \(\beta\), \(\zeta\) is the learning rate, \(X_{i}^{j}\subset D_{i}\) is the \(j\)-th mini-batch and \(D_{i}\) is satellite \(i\)'s dataset; after training, it sends the updated model \(\mathbf{w}_{i}^{\beta,J}\) back to the \(\mathcal{AS}\); (iii) once the \(\mathcal{AS}\) receives the trained models from all satellites, it aggregates them into an updated global model \(\mathbf{w}^{\beta+1}=\sum_{i\in\mathcal{K}}\mathbf{w}_{i}^{\beta,J}\) and sends it back to all satellites during their respective visible windows as in (i). The above process continues until the global model converges.
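As a concrete illustration of this loop, the following minimal sketch (our own, with NumPy assumed and a linear least-squares model standing in for the DeepLabV3+ model used later) runs local mini-batch gradient descent on each satellite and plain averaging at the \(\mathcal{AS}\):

```python
import numpy as np

rng = np.random.default_rng(0)
num_sats, dim, zeta, local_iters = 5, 10, 0.05, 20
data = [(rng.normal(size=(100, dim)), rng.normal(size=100)) for _ in range(num_sats)]

def local_train(w, X, y):
    for _ in range(local_iters):
        idx = rng.choice(len(X), size=16, replace=False)      # mini-batch X_i^j
        Xb, yb = X[idx], y[idx]
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(Xb)             # least-squares gradient
        w = w - zeta * grad                                    # w^{b,j+1} = w^{b,j} - zeta * grad
    return w

w_global = np.zeros(dim)
for rnd in range(10):                                          # global communication rounds
    local_models = [local_train(w_global.copy(), X, y) for X, y in data]
    w_global = np.mean(local_models, axis=0)                   # aggregation at the AS
print(w_global[:3])
```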
### _FL-LEO's Communication Model_
Assuming line-of-sight (LoS) communication, a satellite \(i\) and an \(\mathcal{AS}\)\(s\) can only communicate with each other if \(\angle(r_{s}(t),(r_{i}(t)-r_{s}(t)))\leq\frac{\pi}{2}-\alpha_{min}\), where \(r_{i}(t)\) and \(r_{s}(t)\) are their respective trajectories and \(\alpha_{min}\) is the minimum elevation angle. Additionally, assuming the channel is affected by additive white Gaussian noise (AWGN), the signal-to-noise ratio (SNR) between them is \(\frac{PG_{i}G_{s}}{K_{B}TB\mathcal{L}_{i,s}}\) where \(P\) is the transmitter power, \(G_{i},G_{s}\) are the antenna gains of \(i\) and \(s\), respectively, \(K_{B}\) is the Boltzmann constant, \(T\) is the noise temperature, \(B\) is the channel bandwidth, and \(\mathcal{L}_{i,s}\) is the free-space path loss which can be calculated as \(\mathcal{L}_{i,s}=\big{(}\frac{4\pi\|i,s\|_{2}f}{c}\big{)}^{2}\). Here, \(\|i,s\|_{2}\) is the Euclidean distance between \(i\) and \(s\) that satisfies \(\|i,s\|_{2}\leq\ell_{i,s}\), where \(\ell_{i,s}\) is the maximum distance between \(i\) and \(s\) that enables them to communicate with each other, and \(f\) is the carrier frequency.
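The following minimal Python sketch (our illustration; the parameter values are the ones listed later in the experiment setup of Section VI-A) evaluates this link budget for a satellite assumed to be directly overhead at 1200 km:

```python
import math

def snr_db(distance_m, f_hz=2.5e9, p_dbm=40.0, g_tx_dbi=6.98, g_rx_dbi=6.98,
           noise_temp_k=354.81, bandwidth_hz=50e6):
    c = 3e8                      # speed of light (m/s)
    k_b = 1.380649e-23           # Boltzmann constant (J/K)
    path_loss = (4 * math.pi * distance_m * f_hz / c) ** 2    # free-space loss (linear)
    p_w = 10 ** ((p_dbm - 30) / 10)                           # dBm -> W
    g_tx = 10 ** (g_tx_dbi / 10)
    g_rx = 10 ** (g_rx_dbi / 10)
    snr = p_w * g_tx * g_rx / (k_b * noise_temp_k * bandwidth_hz * path_loss)
    return 10 * math.log10(snr)

print(snr_db(1200e3), "dB")      # slant range equal to the orbital altitude
```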
### _Inner-Product Functional Encryption_
Unlike conventional encryption methods, functional encryption (FE) is a cryptosystem that allows the holder of a decryption key to decrypt encrypted data but only obtain a _function_ of the data without revealing the input data itself [13]. Our work focuses on _inner product__FE_ (IPFE) which is a specific type of FE that performs a function of inner product operations over encrypted data [13, 14]. With IPFE, given two ciphertext vectors \(\mathbf{x}^{\prime}\) and \(\mathbf{y}^{\prime}\) which are encrypted from plaintext vectors \(\mathbf{x}\) and \(\mathbf{y}\), one can obtain the inner product \(\langle\mathbf{x},\mathbf{y}\rangle\) without decrypting \(\mathbf{x}^{\prime}\) or \(\mathbf{y}^{\prime}\) or knowing any elements of \(\mathbf{x}\) and \(\mathbf{y}\). Compared to homomorphic encryption (HE) which requires decrypting the ciphertext to obtain the plaintext result, IPFE can directly obtain the final plaintext result.
## V FedSecure Design
### _Overview_
FedSecure is a synchronous FL approach designed for LEO satellites to protect their data privacy while achieving high accuracy and speeding up convergence. It works as follows: (i) Each participating satellite \(i\in\mathcal{K}\) generates a private key and a public key using the anonymous veto network (AV-net) protocol [15]; (ii) The \(\mathcal{AS}\) broadcasts the initial global model to all visible LEO satellites; (iii) Each visible satellite forwards the received global model to its neighbors on the same orbit so that invisible satellites can have the global model without waiting for their respective visible windows (see Fig. 2a); (iv) Once receiving the model, each satellite retrains it using its local data, and after training, encrypts the trained model's parameters using the AV-net protocol (Section V-C); (v) All the satellites on the same orbit collaborate to generate a partial global model by averaging their models and then encrypt and send this partial model to a currently visible satellite on the same orbit (Section V-B); (vi) This visible satellite per each orbit sends the ciphertext of the partial model to the \(\mathcal{AS}\); (vii) The \(\mathcal{AS}\) averages the encrypted parameters to obtain an updated global model _without being able to read any satellite's local model parameters or infer any satellite's training data_ (Section V-C). A visualization of our on-orbit model forwarding and aggregation is shown in Fig. 2.
### _On-Orbit Model Forwarding and Aggregation_
We propose an on-orbit model forwarding and aggregation scheme to merge all satellites' local models on the same orbit into a _partial global model_. The purpose is to overcome the long idle waiting time for invisible satellites to (successively) enter the \(\mathcal{AS}\)'s visible zone for model exchange, as in traditional synchronous FL-LEO approaches [2]. Our proposed scheme works as follows: (1) Each satellite \(i\) on the same orbit trains the global model \(\mathbf{w}^{\beta}\) received as per steps (ii) and (iii) of Section V-A, and obtains \(\mathbf{w}_{i}^{\beta}\) after training. (2) Then, the first visible satellite, say #1, forwards its \(\mathbf{w}_{1}^{\beta}\) to its next neighbor (#2, which may be invisible, see Fig. 2b), who will perform partial aggregation of its own \(\mathbf{w}_{2}^{\beta}\) and the received \(\mathbf{w}_{1}^{\beta}\) into a new \(\mathbf{w}_{2}^{\beta}\), and passes it onto its next neighbor #3; this process continues until the final partial model \(\mathbf{w}_{8}^{\beta}\) reaches the originating satellite #1. (3) Finally, satellite #1 will forward this partial model back to all satellites on the same orbit (but this round without aggregation) so that any satellite who becomes visible the first will send that model to the \(\mathcal{AS}\), for later global aggregation among all orbits. See Fig. 2; more details are provided below.
Our on-orbit model forwarding and aggregation scheme is inspired by the method proposed in [5] but differs from it as follows. In [5], each orbit needs to _schedule a sink satellite_ to collect all local models and then perform partial aggregation, which involves both scheduling overhead and the waiting time for the sink satellite to become visible. But in our scheme here, every satellite participates in partial model aggregation without relying on a specific satellite, and hence anyone can send the final partial model to \(\mathcal{AS}\). This makes our aggregation scheme more time-efficient (the additional round
Fig. 2: On-orbit model forwarding and aggregation. (a) The visible satellite (#1 in the example) broadcasts the received \(\mathbf{w}^{\beta}\) to all other satellites (most are invisible) on the same orbit bi-directionally; (b) it initiates model forwarding and aggregation by sending its trained local model \(\mathbf{w}_{1}^{\beta}\) to its neighbor satellite uni-directionally (either clockwise or counterclockwise, predetermined); (c) after receiving the final updated partial model from #8, it forwards the final partial model to all satellites on the same orbit bi-directionally, until reaching a visible satellite (\(\#7\) in the example).
of bi-directional forwarding as in Fig. 1(c) is very fast, taking a few seconds only).
Upon receiving the global model \(\mathbf{w}^{\beta}\) generated by the \(\mathcal{AS}\) for a global round \(\beta\), a visible satellite \(i\) in an orbit \(X\) initiates a retraining process using its collected data (i.e., Earth imagery) to update its local model \(\mathbf{w}^{\beta}_{i}\). After updating its local model, it forwards \(\mathbf{w}^{\beta}\) to all satellites in its orbit, including invisible satellites. Subsequently, it transmits its updated local model \(\mathbf{w}^{\beta}_{i}\) to its next-hop satellite \(i^{\prime}\) via Intra-plane ISL, with the propagation direction pre-designated either clockwise or counterclockwise. Then, the next-hop satellite \(i^{\prime}\) performs partial model aggregation by combining its updated local model \(\mathbf{w}^{\beta}_{i^{\prime}}\) with the received \(\mathbf{w}^{\beta}_{i}\) to generate a new local model \(\mathbf{w}^{\beta}_{i^{\prime}}\) before transmitting it to \(i^{\prime\prime}\) (the next-hop satellite in the designated propagation direction) as follows:
\[\mathbf{w}^{\beta}_{i^{\prime}}=(1-\alpha_{i^{\prime}})\mathbf{w}^{\beta}_{i}+\alpha_{ i^{\prime}}\mathbf{w}^{\beta}_{i^{\prime}} \tag{1}\]
where \(\alpha_{i^{\prime}}\) is a scaling factor, defined as the ratio between the data size of satellite \(i^{\prime}\) and the sum of all previous satellites (up to \(i\))'s data size (each satellite will send its data size as metadata to its neighbor together with its model). Consequently, the visible satellite, say #1, who initiated the above model forwarding process, will receive an updated on-orbit aggregated partial model \(\mathbf{w}^{\beta}_{k}\) that has aggregated all the local models on the same orbit, where \(k\) is the total number of satellites on that orbit (see Fig. 1(b)). If the satellite #1 is still visible to the \(\mathcal{AS}\), it will send \(\mathbf{w}^{\beta}_{k}\) to the \(\mathcal{AS}\). Otherwise, it initiates another round of forwarding (but without aggregation) by sending \(\mathbf{w}^{\beta}_{k}\) to its next-hop neighbors which will pass it onward further until either (i) \(\mathbf{w}^{\beta}_{k}\) reaches a visible satellite, who will then send the model to the \(\mathcal{AS}\) immediately, or (ii) every satellite on the same orbit has a copy of \(\mathbf{w}^{\beta}_{k}\) and the first satellite who becomes visible will send the model to the \(\mathcal{AS}\).
To summarize, our on-orbit model forwarding and aggregation scheme eliminates the requirement of each satellite individually communicating with the \(\mathcal{AS}\) as in conventional FL-LEO approaches, and thus cuts down the substantial idle waiting time for visible windows. Since this is entailed in _every_ communication round of FL training, our scheme will lead to a significant acceleration of FL convergence.
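A minimal sketch of this ring pass (our illustration, NumPy assumed) is given below; here \(\alpha_{i^{\prime}}\) is interpreted as the ratio of satellite \(i^{\prime}\)'s data size to the running total including \(i^{\prime}\) itself, so that the final partial model equals the data-size-weighted average of all local models on the orbit:

```python
import numpy as np

rng = np.random.default_rng(0)
num_sats, dim = 8, 5
local_models = [rng.normal(size=dim) for _ in range(num_sats)]   # w_i^beta after local training
data_sizes = rng.integers(50, 200, size=num_sats)                # metadata passed with each model

partial = local_models[0].copy()            # visible satellite #1 starts the pass
seen = float(data_sizes[0])
for i in range(1, num_sats):                # uni-directional ring pass (Fig. 2b)
    alpha = data_sizes[i] / (seen + data_sizes[i])
    partial = (1 - alpha) * partial + alpha * local_models[i]    # Eq. (1)
    seen += data_sizes[i]

expected = np.average(local_models, axis=0, weights=data_sizes)
assert np.allclose(partial, expected)       # the pass yields the weighted average of the orbit
print(partial)
```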
### _Decentralized Secure Aggregation Scheme_
Our novel secure model aggregation scheme allows building a global model without the need for a KDC while protecting the privacy of satellites. It encompasses three stages: system setup, training and reporting partial model parameters, and global parameter averaging.
#### V-C1 System setup
Pertaining to our discussion in Section V-B, one visible satellite in each of the \(L\) orbits (e.g., the first visible one) generates its public and secret keys using the AV-net protocol [15]. Given the system parameters in Table I, the following steps are carried out:
1. The visible satellite \(i_{l}\) chooses a private number \(\mathrm{x}_{i_{l}}\in\mathbb{Z}_{q}\) randomly, where \(i_{l}\) is the visible satellite \(i\) on orbit \(l\).
2. Then, satellite \(i_{l}\) broadcasts \(\mathrm{g}^{\mathrm{x}_{i_{l}}}\) to all other visible satellites on different orbits (e.g., through the \(\mathcal{AS}\) or the Internet), where \(\mathrm{g}^{\mathrm{x}_{i_{l}}}\) is a public parameter. Note that this is only needed once in the initialization phase.
3. Each satellite \(i_{l}\) computes \(\mathrm{g}^{\mathrm{y}_{i_{l}}}\) as follows: \[\mathrm{g}^{\mathrm{y}_{i_{l}}}=\prod_{u=1}^{l-1}\ \mathrm{g}^{\mathrm{x}_{i_{u}}}/\prod_{u=l+1}^{L} \mathrm{g}^{\mathrm{x}_{i_{u}}},\] (2)
4. A secret key \(s_{i_{l}}\in\mathbb{Z}_{q}\) is chosen by each satellite \(i_{l}\) to be used in the computation of the subset aggregation key \(S_{i_{l}}\) that will be sent to the \(\mathcal{AS}\). The \(S_{i_{l}}\) is computed as follows: \[S_{i_{l}}=\mathrm{g}^{s_{i_{l}}}\times(\mathrm{g}^{\mathrm{y}_{i_{l}}})^{ \mathrm{x}_{i_{l}}}=\mathrm{g}^{s_{i_{l}}+\mathrm{x}_{i_{l}}y_{i_{l}}}\] (3)
5. After receiving \(S_{i_{l}}\) from all satellites, the \(\mathcal{AS}\) computes an aggregation key \(AK_{\mathcal{AS}}\), which will be used in the aggregation process as \[AK_{\mathcal{AS}} =\prod_{i_{l=1}}^{i_{L}}S_{i_{l}}=\prod_{i_{l=1}}^{i_{L}}\mathrm{g }^{s_{i_{l}}+\mathrm{x}_{i_{l}}y_{i_{l}}}\] \[=\mathrm{g}^{s_{i_{1}}+\mathrm{x}_{i_{1}}y_{i_{1}}}\times\mathrm{ g}^{s_{i_{2}}+\mathrm{x}_{i_{2}}y_{i_{2}}}\times...\times\mathrm{g}^{s_{i_{L}}+ \mathrm{x}_{i_{L}}y_{i_{L}}}\] \[=\mathrm{g}^{\sum_{i_{l=1}}^{i_{L}}s_{i_{l}}+\sum_{i_{l=1}}^{i_{L }}x_{i_{l}}y_{i_{l}}}\] (4)
Since \(\sum_{i_{l=1}}^{i_{L}}x_{i_{l}}y_{i_{l}}\)= 0 as verified in [15], \(AK_{\mathcal{AS}}\) = \(\mathrm{g}^{\sum_{i_{l=1}}^{i_{L}}s_{i_{l}}}\).
#### V-C2 Secure reporting of partially trained models
After a visible satellite obtains the partial global model \(\mathbf{w}^{\beta}_{i_{l}}[\mu]\) in the \(\beta\)-th FL round as in Fig. 1(c), this satellite uses \(s_{i_{l}}\) to encrypt this partial model's parameters before sending to the \(\mathcal{AS}\):
\[C^{\beta}_{i_{l}}[\mu]=\mathrm{g}^{s_{i_{l}}u_{\ell_{\beta}}+\mathbf{w}^{\beta}_{i_{l}}[\mu]}\in\mathbb{G},\quad\mu=0,...,e-1 \tag{5}\]
where \(C^{\beta}_{i_{l}}\) is the ciphertext of model \(\mathbf{w}^{\beta}_{i_{l}}[\mu]\) which contains \(e\) parameters represented by the vector \((\mathbf{w}^{\beta}_{i_{l}}[0],\ldots,\mathbf{w}^{\beta}_{i_{l}}[e-1])\), \(\ell_{\beta}\) is a round identifier, and \(u_{\ell_{\beta}}=\mathcal{H}(\ell_{\beta})\in\mathbb{Z}_{q}\).
#### V-C3 Global parameter-averaging
This step is performed by the \(\mathcal{AS}\). In each FL round \(\beta\), the \(\mathcal{AS}\) collects all the encrypted partial models from visible satellites on all the orbits, and then performs the following computation to calculate an aggregated model (that is still encrypted):
\[\frac{\prod_{i_{l=1}}^{i_{L}}C^{\beta}_{i_{l}}[\mu]}{(AK_{\mathcal{AS}})^{u_{\ell_{\beta}}}}=\frac{\prod_{i_{l=1}}^{i_{L}}\mathrm{g}^{s_{i_{l}}u_{\ell_{\beta}}+\mathbf{w}^{\beta}_{i_{l}}[\mu]}}{\big{(}\mathrm{g}^{\sum_{i_{l=1}}^{i_{L}}s_{i_{l}}}\big{)}^{u_{\ell_{\beta}}}}=\mathrm{g}^{\sum_{i_{l=1}}^{i_{L}}\mathbf{w}^{\beta}_{i_{l}}[\mu]} \tag{6}\]
The \(\mathcal{AS}\) then utilizes a discrete logarithm method, such as _Pollard's rho algorithm_, to compute the aggregated model parameter \(\sum_{i_{l=1}}^{i_{L}}\mathbf{w}^{\beta}_{i_{l}}[\mu]\). The average value is then calculated by the \(\mathcal{AS}\), and the global model parameter \(\mathbf{w}^{\beta}[\mu]\) is updated accordingly. Finally, the \(\mathcal{AS}\) sends the updated global model back, in plaintext, to the satellites for subsequent training iterations (cf. Section V-A); these steps continue until convergence.
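To see how the masks cancel, the following toy Python sketch (our illustration only; the group size, generator, and hash value are placeholders far too small to be secure, and brute force replaces Pollard's rho; Python 3.8+ is assumed for modular inverses) walks through Eqs. (2)-(6) for four orbits, each contributing a single integer-encoded parameter:

```python
import random

q = 2**13 - 1                  # toy prime modulus; arithmetic in the group generated by g mod q
g = 17
random.seed(1)

n = 4                                                # one visible satellite per orbit
x = [random.randrange(1, q - 1) for _ in range(n)]   # AV-net private numbers x_{i_l}
gx = [pow(g, xi, q) for xi in x]                     # broadcast values g^{x_{i_l}}

def g_y(i):                    # Eq. (2): prod_{u<i} g^{x_u} / prod_{u>i} g^{x_u}
    num = 1
    for j in range(i):
        num = num * gx[j] % q
    den = 1
    for j in range(i + 1, n):
        den = den * gx[j] % q
    return num * pow(den, -1, q) % q

s = [random.randrange(1, q - 1) for _ in range(n)]              # secret keys s_{i_l}
S = [pow(g, s[i], q) * pow(g_y(i), x[i], q) % q for i in range(n)]   # Eq. (3)
AK = 1
for Si in S:                                                    # Eq. (4): AK = g^{sum s_{i_l}}
    AK = AK * Si % q

u = 97                         # stands in for u = H(round identifier)
w = [3, 5, 7, 11]              # one (integer-encoded) model parameter per orbit
C = [pow(g, s[i] * u + w[i], q) for i in range(n)]              # Eq. (5)

agg = 1
for Ci in C:
    agg = agg * Ci % q
agg = agg * pow(pow(AK, u, q), -1, q) % q                       # Eq. (6): g^{sum w_i}

total = next(t for t in range(200) if pow(g, t, q) == agg)      # brute-force discrete log
assert total == sum(w)
print("aggregated sum recovered by the AS:", total)
```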
## VI Performance Evaluation
### _Experiment setup_
**LEO Constellation & Communication Links.** We examine an LEO constellation that comprises 20 satellites divided into 4 orbits, all orbiting at an altitude of 1200 km with an inclination angle of 70\({}^{\circ}\). We consider a GS located in Rolla, Missouri,
\begin{table}
\begin{tabular}{c l} \hline \hline Parameter & Description \\ \hline \(\mathbb{G}\) & Multiplicative cyclic group with prime order \(q\) \\ \(\mathrm{g}\in\mathbb{G}\) & Randomly chosen generator \\ \(\mathbb{Z}_{q}\) & Finite field of order \(q\) \\ \(\mathcal{H}:\{0,1\}^{*}\rightarrow\mathbb{Z}_{q}\) & Full-domain hash function onto \(\mathbb{Z}_{q}\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: System parameters.
USA as the \(\mathcal{AS}\) (it can be anywhere on Earth) with a minimum elevation angle of 10\({}^{\circ}\). We employ a Systems Tool Kit simulator developed by AGI company to determine satellite-GS connectivity. The communication parameters introduced in Section IV-B are configured as: \(P\)=40 dBm, \(G_{i}\)=\(G_{s}\)=6.98 dBi, \(T\)=354.81K, \(B\)=50 MHz, and \(f\)=2.5 GHz. Satellites' connectivity with the \(\mathcal{AS}\) (GS) was established over a two-day period to obtain the convergence results.
**Satellites Training.** We train our local models on LEO satellites using the DeepGlobe dataset [16], which includes 1146 high-resolution colored satellite images (2448x2448 pixels), divided into training/validation/test sets (803/171/172 images). The dataset covers a total area of 1716.9 km\({}^{2}\) and has a pixel resolution of 50 cm. Each image has a corresponding mask image (which is the "label") with annotations for land cover, including 7 classes represented in different colors: Urban, Agriculture, Rangeland, Forest, Water, Barren, and Unknown. We augment and split the dataset evenly among 20 satellites. Each satellite trains a DeepLabV3+ model using its assigned data with a mini-batch size of 4 and a learning rate of \(\simeq 0.00008\).
**Baselines.** We compare FedSecure with most recent FL-LEO approaches including FedISL [4], FedHAP [7], and FedSpace [9] in terms of convergence speed and accuracy. Note that FedSecure is the only approach that incorporates encryption.
### _Security and Privacy Analysis_
FedSecure is a secure FL aggregation approach that eliminates the need for a trusted KDC. Thus, it is much more resilient than FE-based methods [13] (including IPFE) which require a KDC and a secure channel for generating and distributing secret keys to other nodes. Despite that, FedSecure achieves the same level of security as FE/IPFE-based methods. In the following, we explain how FedSecure protects privacy against the threat model discussed in Section III-B:
* Although eavesdroppers may be able to intercept the exchanged ciphertexts, they will not be able to gain access to any information relevant to either the local model parameters of LEO satellites or the training data. This is because the model parameters are encrypted by the satellites using their own secret keys, making it impossible for anyone (including \(\mathcal{AS}\)) to decipher the parameters.
* Using FedSecure, the \(\mathcal{AS}\) can only obtain the updated global model parameters using the aggregation key \(AK_{\mathcal{AS}}\) given the encrypted model parameters received from the LEO satellites, but is not able to compute the individual model parameters. Moreover, despite possessing the subset aggregation key \(S_{i_{l}}\) of each satellite, it is unable to obtain their secret keys \(s_{i_{l}}\) due to being masked by g\({}^{x_{i_{l}}y_{i_{l}}}\).
* FedSecure is resilient against collusion attacks that may be launched among satellites that are owned by different operators since they do not have access to the secret keys of non-colluding satellites, thus preventing them from decrypting their ciphertexts.
Thus, FedSecure ensures that sensitive satellite models remain secure even in the face of potential internal and external attacks.
### _Computation & Communication Overhead Analysis_
#### Iv-C1 Computation overhead
We evaluate the computation overhead by measuring the time it takes for the \(\mathcal{AS}\) and satellites to compute weights. We use the Python Charm library to implement our IPFE-based encryption scheme and run it on a 64-bit Ubuntu-based standard desktop computer with 4GB RAM and an Intel Core I3 CPU operating at 1.8GHz. According to Eq. 5, each satellite in each FL round encrypts \(e\) elements including weights and biases. Our measurement shows that this computation takes less than 9 ms only, which is dominantly driven by a single exponentiation operation.
#### Iv-C2 Communication overhead
The communication overhead of FedSecure is evaluated by examining the size and number of messages transmitted between the \(\mathcal{AS}\) and the satellites. We employ a 160-bit security-level elliptic curve for the cryptographic operations in our scheme. Each satellite transmits encrypted matrices that represent its updated local model parameters (weights and biases) with a total number of elements \(e\). In accordance with Eq. 5 and given the DeepLabV3+ model structure, each satellite sends a total of \(992\) MB of data in each FL round. However, it has been demonstrated by [17] that elliptic curve points can be compressed, hence resulting in a smaller number of bits. This consequently leads to a communication overhead of \(497\) MB using our scheme.
### _Efficiency and Convergence Analysis_
We use the following performance metrics:
* **Intersection over Union:** IoU provides a pixel-wise score for each class of objects in an image, ranging from 0 to 1. A score of 1 denotes a perfect match between the predicted and ground truth masks, while 0 indicates no overlap. The IoU score can be calculated as \[IoU_{b}=\frac{\sum_{a=1}^{r}TP_{ab}}{\sum_{a=1}^{r}TP_{ab}+\sum_{a=1}^{r}FP_{ ab}+\sum_{a=1}^{r}FN_{ab}}\] (7) where \(TP_{ab}\) and \(FP_{ab}\) are the number of truly predicted and falsely predicted pixels as class \(b\) in image \(a\), respectively, \(FN_{ab}\) is the number of falsely predicted pixels as other classes in image \(a\) except for class \(b\), and \(r\) is the total number of images. Assuming \(g\) land cover classes, the final score is the average of IoU across all classes, which can be expressed as \[mIoU=\frac{1}{g}\sum_{b=1}^{g}IoU_{b}\] (8)
* **Dice Coefficient:** Similar to IoU, this also measures the match between the predicted and the ground truth segmentation for each class, ranging from 0 (no overlap) to 1 (perfect overlap). But the Dice score places more emphasis on true positive predictions compared to false positives and false negatives, which can be calculated as \[Dice_{b}=\frac{2IoU_{b}}{1+IoU_{b}}\] (9) \(Dice\) can be also averaged among all classes to obtain \(mDice\), just like \(mIoU\) in the above.
Note that in all cases, the classification output is a semantic segmentation mask encoded in RGB format, where each pixel's color represents its class.
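The following minimal sketch (ours, NumPy assumed) computes mIoU and mDice exactly as in Eqs. (7)-(9), using integer-encoded class masks rather than the RGB-encoded ones produced by the model:

```python
import numpy as np

def mean_iou_dice(preds, gts, num_classes):
    """preds, gts: lists of HxW integer class masks."""
    tp = np.zeros(num_classes); fp = np.zeros(num_classes); fn = np.zeros(num_classes)
    for pred, gt in zip(preds, gts):
        for b in range(num_classes):
            p, g = (pred == b), (gt == b)
            tp[b] += np.sum(p & g)
            fp[b] += np.sum(p & ~g)
            fn[b] += np.sum(~p & g)
    iou = tp / np.maximum(tp + fp + fn, 1)          # Eq. (7), guarded against empty classes
    dice = 2 * iou / (1 + iou)                      # Eq. (9)
    return iou.mean(), dice.mean()                  # Eq. (8) and its Dice analogue

rng = np.random.default_rng(0)
gt = [rng.integers(0, 7, size=(64, 64)) for _ in range(4)]
pr = [np.where(rng.random((64, 64)) < 0.8, m, rng.integers(0, 7, size=(64, 64))) for m in gt]
print(mean_iou_dice(pr, gt, num_classes=7))
```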
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Evaluation & \multicolumn{4}{c|}{Communication round \& accumulative time (h)} \\ \cline{2-5} Metric & 1 (1.51 h) & 2 (1.73 h) & 3 (2.18 h) & 4 (2.76h) & 5 (2.97 h) \\ \hline mIoU & 0.77896 & 0.78016 & 0.78075 & 0.78148 & 0.78248 \\ \hline mDice & 0.85141 & 0.85192 & 0.85259 & 0.85308 & 0.85350 \\ \hline \end{tabular}
\end{table} TABLE II: Performance of FedSecure.
After the first round, FedSecure achieved a high mIoU of 77.896% and mDice of 85.141%, indicating that it can accurately classify various target classes even after only one round (\(\approx\)1.5 hours). This faster convergence speed can be attributed to our partial aggregation scheme which enables all satellites' models on the same orbit to be aggregated before being encrypted and sent to the \(\mathcal{AS}\). Subsequently, the precision of the global model increases gradually during the following communication rounds until it converges.
We also evaluated our global model's effectiveness in classifying various land covers by testing it on _unseen satellite images_ from the DeepGlobe dataset. As shown in Fig. 3, the results indicate that after just three hours of satellites-\(\mathcal{AS}\) communication, FedSecure successfully predicted masks with a high rate of overlapping/matching with the ground truth masks. This result further demonstrates the effectiveness of FedSecure in achieving fast and high performance within a few hours. Moreover, we also note that the predicted masks can be improved further by allowing additional communication rounds.
**Comparison with baselines.** We compare FedSecure's convergence speed with the baselines using the MNIST dataset. Our results indicate that FedSecure converges within 3 hours with an accuracy of 88.76% whereas FedISL [4] achieves 82.76% accuracy after 4 hours of training. Notably, we achieve this level of accuracy when the \(\mathcal{AS}\) (GS) is located in Rolla as a more practical configuration, not at the NP like FedISL. Moreover, when compared with FedHAP [7] and FedSpace [9] which require 15 and 96 hours to converge, respectively, our forwarding and aggregation scheme enables FedSecure to converge \(\times\)5 and \(\times\)32 faster, respectively. Most importantly, FedSecure ensures the security and privacy of satellite models, which is not addressed in those baselines.
## VII Conclusion
This study proposes a novel FL-LEO framework, FedSecure, that offers secure and efficient model aggregation for distributed ML with satellites in the presence of attackers, eavesdroppers, and collusion. Unlike prior work, FedSecure eliminates the need for a key distribution center to generate private/public keys and the need for a secure channel to distribute the keys. Moreover, our on-orbit model forwarding and aggregation enables invisible satellites to participate in the FL process without waiting to become visible to the \(\mathcal{AS}\), and does not require a sink satellite to collect models. Our simulation results demonstrate that FedSecure achieves the same level of security as existing methods while having low communication overhead (497 MB) and computation overhead (\(<\) 9 ms), and achieving fast convergence in only 3 hours on a real satellite imagery dataset (DeepGlobe) with a classification accuracy of 85.35%.
|
2302.12298 | Sharpness of some Hardy-type inequalities | The current status concerning Hardy-type inequalities with sharp constants is
presented and described in a unified convexity way. In particular, it is then
natural to replace the Lebesgue measure $dx$ with the Haar measure $dx/x.$
There are also derived some new two-sided Hardy-type inequalities for monotone
functions, where not only the two constants are sharp but also where the
involved function spaces are (more) optimal. As applications, a number of both
well-known and new Hardy-type inequalities are pointed out. And, in turn, these
results are used to derive some new sharp information concerning sharpness in
the relation between different quasi-norms in Lorentz spaces. | Lars-Erik Persson, Natasha Samko, George Tephnadze | 2023-02-18T13:20:28Z | http://arxiv.org/abs/2302.12298v1 | ###### Abstract
Abstract: The current status concerning Hardy-type inequalities with sharp constants is presented and described in a unified convexity way. In particular, it is then natural to replace the Lebesgue measure \(dx\) with the Haar measure \(dx/x\). There are also derived some new two-sided Hardy-type inequalities for monotone functions, where not only the two constants are sharp but also where the involved function spaces are (more) optimal. As applications, a number of both well-known and new Hardy-type inequalities are pointed out. And, in turn, these results are used to derive some new sharp information concerning sharpness in the relation between different quasi-norms in Lorentz spaces.
**Sharpness of some Hardy-type inequalities**
**Lars-Erik Persson\({}^{1,2}\), Natasha Samko\({}^{1}\) and George Tephnadze\({}^{3}\)**
\({}^{1}\)_UiT The Arctic University of Norway, P.O. Box 385, N-8505, Narvik, Norway,_
\({}^{2}\)_Karlstad University, 65188 Karlstad, Sweden,_
\({}^{3}\)_The University of Georgia, 77a Merab Kostava St, Tbilisi, 0128, Georgia._
**2020 Mathematics Subject Classification:** 26D10, 46E30.
**Key words and phrases:** Inequalities, Hardy-type inequalities, sharp constants, optimal target function, Lorentz spaces.
## 1 Introduction
The continuous Hardy inequality from 1925 (see [5]) informs us that if \(f\) is a non-negative \(p\)-integrable function on \((0,\infty),\) then \(f\) is integrable over the interval \((0,x)\) for each positive \(x\) and
\[\int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x}f(y)dy\right)^{p}dx\leq\left( \frac{p}{p-1}\right)^{p}\int_{0}^{\infty}f^{p}(x)dx,\ \ p>1. \tag{1.1}\]
The development of the famous Hardy inequality in both discrete and continuous forms during the period 1906 to 1928 has its own history or, as we call it, prehistory. Contributions of mathematicians other than G.H. Hardy, such as E. Landau, G. Polya, E. Schur and M. Riesz, are important here. This prehistory was described in [9].
The first weighted version of (1.1) was proved by Hardy himself in 1928 (see [6]):
\[\int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x}f(y)dy\right)^{p}x^{a}dx\leq \left(\frac{p}{p-1-a}\right)^{p}\int_{0}^{\infty}f^{p}(x)x^{a}dx \tag{1.2}\]
where \(f\) is a measurable and non-negative function on \((0,\infty)\) whenever \(a<p-1,p>1.\)
In the remarkable further development of what is today called Hardy-type inequalities, mostly the Lebesgue measure \(dx\) is used in the case of weighted Lebesgue spaces (see the books [7], [8], [10] and [11] and the references therein). One basic idea in this paper is to use convexity, and then it is more natural to instead use the measure \(dx/x\) (= Haar measure when the underlying group is \(\mathbb{R}_{+}\)). Moreover, this way to consider the situation helps us to more easily investigate and describe the
sharpness in Hardy-type inequalities. In this theory of Hardy-type inequalities (between weighted Lebesgue spaces) we usually have good estimates of the sharp constant (= the operator norm or quasi-norm). However, the sharp constant is known in only very few cases.
In this paper we describe and/or derive most of these Hardy-type inequalities in the convexity \((dx/x)\) frame described above. Moreover, we also concentrate on the problem of deriving the corresponding reversed inequalities in cones of monotone functions, and still with sharp constants. It turns out that our approach also implies that the sharpness can be further improved in special situations, e.g. by not only having sharp constant(s) but also by involving more optimal function spaces, sometimes even with optimal so-called target functions involved. In order to illustrate this idea we present the following introductory example:
**Example 1.1**.: The inequality (1.2) holds also if the interval \((0,\infty)\) is replaced by \((0,\ell),\,0<\ell\leq\infty,\) and still the constant
\[C=\left(\frac{p}{p-1-\alpha}\right)^{p}\]
is sharp. However also the following "sharper" inequality is known (see [13] and c.f. Theorem 2.3 a) in the book [11]):
\[\int_{0}^{\ell}\left(\frac{1}{x}\int_{0}^{x}f(y)dy\right)^{p}x^{\alpha}dx\leq \left(\frac{p}{p-1-a}\right)^{p}\int_{0}^{\ell}f^{p}(x)x^{\alpha}\left[1-\left( \frac{x}{\ell}\right)^{\frac{p-1-a}{p}}\right]dx, \tag{1.3}\]
where \(\alpha<p-1,p>1,\) and still the constant \(C=\left(\frac{p}{p-1-a}\right)^{p}\) is sharp. Moreover, we note that in the cone of non-increasing functions (1.2) holds in the reversed direction with the constant \(C=1.\) But indeed (1.3) holds also in the reversed direction with the sharp constant \(C=p/(p-1-a)>1\) whenever \(\alpha>-1\). See our Theorem 3.2 a). In such a situation when both constants are sharp we say that the involved weight function
\[g(x):=1-\left(\frac{x}{\ell}\right)^{\frac{p-1-a}{p}}<1,\,\,x<\ell,\]
is the "optimal target function".
The paper is organized as follows: In Section 2 we present the mentioned convexity approach to derive power weighted Hardy-type inequalities and some of its consequences. Here, and in the sequel, it turns out that this convexity approach makes it natural to present such inequalities by using the Haar measure \(dx/x\) instead of the Lebesgue measure \(dx\). In Section 3 we derive some new sharp reversed Hardy-type inequalities on cones of monotone functions. Section 4 is used to present and discuss some new applications e.g. concerning two-sided Hardy-type inequalities where both constants are sharp and, moreover, the actual inequalities are further sharpened by pointing out (more) optimal involved function spaces. These results, in their turn, make it possible to derive some new results concerning comparisons of different norms in Lorentz spaces. Finally, Section 5 is reserved for some concluding remarks and for presenting and/or deriving some further sharp Hardy-type inequalities.
## 2 A convexity approach to derive sharp power weighted Hardy-type inequalities
The fact that the concept of convexity can be used to prove several inequalities, both classical and new ones, was of course known by Hardy himself. For example, in the famous book [7] this concept and the more or less equivalent Jensen inequality were frequently used. Hence, it may be surprising that Hardy himself never discovered that his famous inequality, in both its original (see (1.1)) and power weighted (see e.g. (1.2)) forms, follows more or less directly as described below. Concerning convexity and its applications e.g. to prove inequalities we refer to the recent book [12], the papers [13], [14], and the references therein.
### A new look on the inequalities (1.1) and (1.2)
**Observation 2.1**.: _We note that for \(p>1\)_
\[\int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x}f(y)dy\right)^{p}dx \leq\left(\frac{p}{p-1}\right)^{p}\int_{0}^{\infty}f^{p}(x)dx,\] \[\Leftrightarrow\] \[\int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x}g(y)dy\right)^{p} \frac{dx}{x}\leq 1\cdot\int_{0}^{\infty}g^{p}(x)\frac{dx}{x}, \tag{2.1}\]
_where \(f(x)=g(x^{1-1/p})x^{-1/p}.\)_
This means that Hardy's inequality (1.1) is equivalent to (2.1) for \(p>1\) and, thus, that Hardy's inequality can be proved in the following simple way (see form (2.1)): By Jensen's inequality and Fubini's theorem we have that
\[\int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x}g(y)dy\right)^{p} \frac{dx}{x} \leq \int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x}g^{p}(y)dy\right) \frac{dx}{x}\] \[= \int_{0}^{\infty}g^{p}(y)\int_{y}^{\infty}\frac{dx}{x^{2}}dy= \int_{0}^{\infty}g^{p}(y)\frac{dy}{y}.\]
By instead making the substitution
\[f(t)=g(t^{\frac{p-1-a}{p}})t^{-\frac{1+a}{p}}\]
in (1.2) we see that also this inequality is equivalent to (2.1). These facts imply especially the following:
(a) Hardy's inequalities (1.1) and (1.2) hold also for \(p<0\) (because the function \(\varphi(u)=u^{p}\) is convex also for \(p<0\)) and hold in the reverse direction for \(0<p<1\) (with sharp constants \(\left(\frac{p}{1-p}\right)^{p}\) and \(\left(\frac{p}{a+1-p}\right)^{p},a>p-1,\) respectively).
(b) The inequalities (1.1) and (1.2) are equivalent, since both are equivalent to (2.1)
(c) The inequality (2.1) holds also with equality for \(p=1,\) which gives us a possibility to interpolate and get more information about the mapping properties of the Hardy operator. In particular, we can use interpolation theory to see that in fact the Hardy operator \(H\) maps each interpolation space \(B\) between \(L_{1}\left((0,\infty),\frac{dx}{x}\right)\) and \(L_{\infty}\left((0,\infty),\frac{dx}{x}\right)\) into \(B,\) i.e. that the following more general Hardy type inequality holds:
\[\|Hf\|_{B}\leq C\|f\|_{B}.\]
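To make the convexity form (2.1) concrete, the following small numerical sanity check (not part of the original argument) evaluates both sides of (2.1) for one admissible test function; the choice \(g(x)=xe^{-x}\), the exponent \(p=2\) and the truncated grid are our own illustrative assumptions.

```python
import numpy as np

# Numerical sanity check of the convexity form (2.1) of Hardy's inequality:
#   int_0^inf ( (1/x) int_0^x g(y) dy )^p dx/x  <=  int_0^inf g^p(x) dx/x .
# Test function (illustrative choice): g(x) = x * exp(-x), with p = 2.
# For this g the inner integral has the closed form int_0^x y e^{-y} dy = 1 - (1 + x) e^{-x}.

p = 2.0
x = np.linspace(1e-4, 60.0, 1_000_000)        # truncation of (0, inf) to a fine grid
avg_g = (1.0 - (1.0 + x) * np.exp(-x)) / x    # (1/x) int_0^x g(y) dy
lhs = np.trapz(avg_g**p / x, x)               # left-hand side of (2.1)
rhs = np.trapz((x * np.exp(-x))**p / x, x)    # right-hand side of (2.1), equals 1/4

print(f"LHS = {lhs:.6f}, RHS = {rhs:.6f}, LHS <= RHS: {lhs <= rhs}")
```

For this choice the right-hand side equals \(1/4\), and the computed left-hand side stays strictly below it, as (2.1) predicts.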
### Further consequences of the new look presented in Section 2.1
For the finite interval case we need the following extension of our basic (convexity) form of Hardy's inequality presented in Section 2.1.
**Theorem 2.2.**_Let \(g\) be a non-negative and measurable function on \((0,\ell),0<\ell\leq\infty.\) a) If \(p<0\) or \(p\geq 1,\) then_
\[\int_{0}^{\ell}\left(\frac{1}{x}\int_{0}^{x}g(y)dy\right)^{p}\frac{dx}{x}\leq 1 \cdot\int_{0}^{\ell}g^{p}(x)\left(1-\frac{x}{\ell}\right)\frac{dx}{x}. \tag{2.3}\]
_(In the case \(p<0\) we assume that \(g(x)>0,0<x\leq\ell\))._
_b) If \(\ 0<p\leq 1,\) then (2.3) holds in the reversed direction._
_c) The constant \(C=1\) is sharp in both a) and b)._
Proof. a) The proof only consists of obvious modifications of (2.2).
b) Since Jensen's inequality holds in the reversed direction for the concave function
\[\phi(u)=u^{p},\ \ 0<p\leq 1,\]
the proof follows in the same way.
c) Assume that (2.3) holds with the constant \(1\) replaced by some constant \(C,\ 0<C<1.\) By applying (2.3) to the test functions \(g(x)=x^{\alpha},\ 0\leq x\leq\ell,\ \alpha>0,\) a simple calculation shows that
\[\frac{\alpha p+1}{\left(\alpha+1\right)^{p}}\leq C<1\]
so by choosing \(\alpha\) sufficiently small we get a contradiction and the proof is complete concerning a). The proof of the sharpness of b) is obtained by making an obvious modification of this argument so the proof is complete. \(\Box\)
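The following few lines of Python illustrate this sharpness argument numerically (only an illustration): for the test functions \(g(x)=x^{\alpha}\) the ratio of the two sides of (2.3) equals \((\alpha p+1)/(\alpha+1)^{p}\), which approaches \(1\) as \(\alpha\to 0^{+}\); the value \(p=2\) is an arbitrary illustrative choice.

```python
# Sharpness argument of Theorem 2.2 c): for g(x) = x^alpha the ratio of the two
# sides of (2.3) equals (alpha*p + 1) / (alpha + 1)^p, which tends to 1 as
# alpha -> 0+, so no constant C < 1 can replace 1 in (2.3).  (p = 2 here.)
p = 2.0
for alpha in [1.0, 0.5, 0.1, 0.01, 0.001]:
    ratio = (alpha * p + 1.0) / (alpha + 1.0) ** p
    print(f"alpha = {alpha:<6} ratio = {ratio:.6f}")
```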
By doing similar calculations as in the proof of Theorem 2.4 in [13] (see also Theorem 7.10 in the book [11]), or just doing appropriate transformations, we obtain the following symmetric version of this equivalence Theorem:
**Theorem 2.3.**_Let \(0<\ell\leq\infty,\) let \(p\in\mathbb{R}\setminus\{0\}\) and let \(f\) be a non-negative function. Then a) the inequality_
\[\int_{0}^{\ell}\left(\int_{0}^{x}f(y)dy\right)^{p}x^{-\alpha}\frac{dx}{x}\leq \left(\frac{p}{\alpha}\right)^{p}\int_{0}^{\ell}\left(xf(x)\right)^{p}x^{- \alpha}\left[1-\left(\frac{x}{\ell}\right)^{\frac{\alpha}{p}}\right]\frac{dx} {x} \tag{2.4}\]
_holds for all measurable functions \(f,\) each \(\ell,\ 0<\ell\leq\infty\) and all \(\alpha\) in the following cases:_
\[(a_{1})\qquad p\geq 1,\alpha>0,\]
\[(a_{2})\qquad p<0,\alpha<0.\]
_b) For the case \(0<p<1,\ \alpha>0,\) inequality (2.4) holds in the reversed direction for all \(\ell,\ 0<\ell\leq\infty\)._
_c) The inequality_
\[\int_{\ell}^{\infty}\left(\int_{x}^{\infty}f(y)dy\right)^{p}x^{\alpha}\frac{dx} {x}\leq\left(\frac{p}{\alpha}\right)^{p}\int_{\ell}^{\infty}f^{p}(x)x^{\alpha} \left[1-\left(\frac{\ell}{x}\right)^{\frac{\alpha}{p}}\right]\frac{dx}{x} \tag{2.5}\]
_holds for all measurable functions \(f,\) each \(\ell,0\leq\ell<\infty\) and all \(\alpha\) in the following cases:_
\[\begin{array}{ll}(c_{1})&\quad p\geq 1,\alpha>0,\\ (c_{2})&\quad p<0,a<0.\end{array}\]
_d) For the case \(0<p\leq 1,\ \alpha>0\) inequality (2.5) holds in the reversed direction for all \(\ell,\ 0\leq\ell<\infty\)._
_All inequalities above are sharp._
_e) Let \(p\geq 1\) or \(p<0.\) Then, the statements in a) and c) are equivalent for all permitted \(\alpha\)._
_Let \(0<p<1.\) Then, the statements in b) and d) are equivalent for all permitted \(\alpha.\)_
**Remark 2.4.** For the case \(\ell=\infty\) the inequalities (2.4) and (2.5) were formulated, proved and applied in this convexity form in the new book [15]. This fact has further inspired us to reformulate our results in this convexity \((dx/x)\) way, which not only contributes to a better understanding but is also more suitable for such applications in modern harmonic analysis.
## 3 Reversed sharp Hardy inequalities for monotone functions
For the proof of our main results in this Section we need the following Lemma:
**Lemma 3.1.**_Let \(p>0,\)\(\frac{1}{p}+\frac{1}{q}=1\) and let \(f\) be a non-negative and measurable function on \((a,b),\)\(-\infty\leq a<b\leq\infty.\)_
\(a)\) _Let \(f\) be non-increasing on \((a,b),\)\(-\infty<a<b\leq\infty.\) If \(p\geq 1,\) then_
\[\left(\int_{a}^{b}f\left(y\right)dy\right)^{p}\geq p\int_{a}^{b}\left(y-a \right)^{p-1}(f\left(y\right))^{p}dy. \tag{3.1}\]
_If \(0<p\leq 1,\) then (3.1) holds in the reversed direction._
\(b)\) _Let \(f\) be non-decreasing on \((a,b),\)\(-\infty\leq a<b<\infty.\) If \(p\geq 1,\) then_
\[\left(\int_{a}^{b}f\left(y\right)dy\right)^{p}\geq p\int_{a}^{b}\left(b-y \right)^{p-1}(f\left(y\right))^{p}dy. \tag{3.2}\]
_If \(0<p\leq 1,\) then (3.2) holds in the reversed direction._
\(c)\) _The constant \(p\) is sharp in all these four inequalities. In fact, we have even equality in (3.1) for the function \(f\left(y\right)=A\chi_{\left(a,c\right)}\left(y\right)\) for some \(c\in\left(a,b\right)\) and \(A>0.\) Moreover, equality in (3.2) holds if \(f\left(y\right)=A\chi_{\left(c,b\right)}\left(y\right)\) for some \(c\in\left(a,b\right)\) and \(A>0.\)_
Proofs of various variants of this Lemma can be found in many places (see e.g. [3]) but for the reader's convenience, we include a simple proof of just this variant.
Proof. First assume that \(-\infty<a<b<\infty.\) Next we observe that the proof of \(b)\) can be reduced to that of \(a)\) by putting \(g(y)=f(a+b-y).\) Hence, it is sufficient to prove \(a).\) Moreover, by making suitable coordinate transformations we conclude that it is sufficient to consider the case \(\left(a,b\right)=\left(0,1\right).\) Therefore, we consider a non-negative, measurable and non-increasing function \(f\) on \(\left(0,1\right)\).
Let
\[F\left(x\right)=\int\limits_{0}^{x}f\left(y\right)dy.\]
Then \(F(0)=0,\) and, for almost all \(x\in\left(0,1\right),\) if \(p\geq 1\) then
\[\frac{d}{dx}(F\left(x\right))^{p}=pf\left(x\right)\left(F\left(x\right) \right)^{p-1}\geq px^{p-1}(f\left(x\right))^{p}\]
By integrating from \(0\) to \(1\) we find that
\[\left(\int\limits_{0}^{1}f\left(y\right)dy\right)^{p}=\left(F\left(1\right)\right)^{p}\geq p\int\limits_{0}^{1}y^{p-1}(f\left(y\right))^{p}dy.\]
The same argument shows that this inequality holds in the reversed direction if \(0<p\leq 1.\) We conclude that \(a)\) and \(b)\) are proved. It is obvious that we have equality in the inequalities (3.1) and (3.2) and their reversed versions for \(0<p\leq 1\) for the claimed test functions
\[f\left(y\right)=A\chi_{\left(a,c\right)}\left(y\right)\text{\ \ \ and\ \ \ }f\left(y\right)=A\chi_{\left(c,b\right)}\left(y\right),\]
respectively.
The proof of the cases \(a=-\infty\) or \(b=\infty\) follows by just doing a limit procedure so the proof is complete. \(\Box\)
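As a quick numerical illustration of Lemma 3.1 a) (again only a sanity check, with our own choice of test function), take \((a,b)=(0,1)\), \(p=2\) and the non-increasing function \(f(y)=1-y\), for which the left-hand side of (3.1) equals \(1/4\) and the right-hand side equals \(1/6\):

```python
import numpy as np

# Numerical check of Lemma 3.1 a) on (a, b) = (0, 1) with p = 2 and f(y) = 1 - y:
#   (int_0^1 f)^p = 1/4  >=  p * int_0^1 y^{p-1} f(y)^p dy = 1/6.
p = 2.0
y = np.linspace(0.0, 1.0, 200_001)
f = 1.0 - y
lhs = np.trapz(f, y) ** p
rhs = p * np.trapz(y ** (p - 1.0) * f ** p, y)
print(f"LHS = {lhs:.6f} >= RHS = {rhs:.6f}: {lhs >= rhs}")
```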
First we consider the case when \(f\) is non-increasing and note that then such a reversed inequality has meaning only if \(0<\alpha<p\) (since otherwise the involved integrals diverge for all non-trivial functions \(f\)).
Our first main result reads:
**Theorem 3.2.**_Let \(p>0,\)\(0<\alpha<p\) and let \(f\) be a measurable, non-negative and non-increasing function on \(\left(0,\ell\right),\)\(0<l\leq\infty.\)_
\(a)\) _If \(p\geq 1,\) then_
\[\int\limits_{0}^{\ell}\left(\int\limits_{0}^{x}f\left(y\right)dy\right)^{p}x^{-\alpha}\frac{dx}{x}\geq\frac{p}{\alpha}\int\limits_{0}^{\ell}(xf\left(x\right))^{p}x^{-\alpha}\left(1-\left(\frac{x}{\ell}\right)^{\alpha}\right)\frac{dx}{x}. \tag{3.3}\]
\(b)\) _If \(0<p\leq 1,\) then (3.3) holds in the reversed direction._
\(c)\) _The constant \(C=p/\alpha\) is sharp in both \(a)\) and \(b)\) and equality appears for each function \(f\left(x\right)=A\chi_{\left(0,c\right)}\left(x\right)\) for some \(c\in\left(0,l\right)\) and \(A>0.\)_
Proof.\(a)\) By using Lemma 3.1 and Fubini's theorem we find that
\[\int\limits_{0}^{\ell}\left(\int\limits_{0}^{x}f\left(y\right)dy \right)^{p}x^{-\alpha}\frac{dx}{x}\] \[\geq p\int\limits_{0}^{\ell}\left(\int\limits_{0}^{x}y^{p-1}(f\left( y\right))^{p}dy\right)x^{-\alpha}\frac{dx}{x}\] \[= p\int\limits_{0}^{\ell}\left(yf\left(y\right)\right)^{p}\left( \int\limits_{y}^{\ell}x^{-\alpha-1}dx\right)\frac{dy}{y}\] \[= \frac{p}{\alpha}\int\limits_{0}^{\ell}\left(yf\left(y\right) \right)^{p}\left(y^{-\alpha}-\ell^{-\alpha}\right)\frac{dy}{y}\] \[= \frac{p}{\alpha}\int\limits_{0}^{\ell}\left(yf\left(y\right) \right)^{p}y^{-\alpha}\left(1-\left(\frac{y}{\ell}\right)^{\alpha}\right) \frac{dy}{y}.\]
\(b)\) Only one inequality is used in the proof of \(a)\) and, according to Lemma 3.1, this inequality holds in the reversed direction in this case, so also \(b)\) is proved.
\(c)\) In view of the proofs above this sharpness statement follows by using Lemma 3.1 but we also verify this directly: Let \(f\left(x\right)=A\chi_{\left(0,c\right)}\left(x\right),\)\(c\in\left(0,l\right).\) Then
\[\frac{p}{\alpha}\int\limits_{0}^{\ell}\left(xf\left(x\right) \right)^{p}x^{-\alpha}\left(1-\left(\frac{x}{\ell}\right)^{\alpha}\right) \frac{dx}{x} = \frac{p}{\alpha}A^{p}\int\limits_{0}^{c}x^{p-\alpha-1}\left(1- \left(\frac{x}{\ell}\right)^{\alpha}\right)dx\] \[= A^{p}\frac{p}{\alpha}\left(\frac{c^{p-\alpha}}{p-\alpha}-\frac{ 1}{\ell^{\alpha}}\frac{c^{p}}{p}\right):=I\]
Moreover,
\[\int\limits_{0}^{\ell}\left(\int\limits_{0}^{x}f\left(y\right)dy\right)^{p}x^{-\alpha}\frac{dx}{x} = A^{p}\int\limits_{0}^{c}x^{p-\alpha-1}dx+A^{p}c^{p}\int\limits_{c}^{\ell}x^{-\alpha-1}dx\] \[= A^{p}\frac{c^{p-\alpha}}{p-\alpha}+\frac{A^{p}c^{p}}{\alpha}\left(c^{-\alpha}-\ell^{-\alpha}\right)\] \[= \frac{p}{\alpha}A^{p}\frac{c^{p-\alpha}}{p-\alpha}-\frac{A^{p}c^{p}}{\alpha\ell^{\alpha}}=I\]
We conclude that the constant \(p/\alpha\) is sharp in both \(a)\) and \(b)\) with equality for
\[f\left(x\right)=A\chi_{\left(0,c\right)}\left(x\right),\text{ \ }c\in\left(0,l \right),\]
so also \(c)\) is proved.
\(\Box\)
As already mentioned, the inequality (3.3) has no meaning in the cone of non-increasing functions if \(\alpha\geq p.\) This is not so if we instead restrict to the cone of non-decreasing functions, but in this case the "target function" is different from
\[g\left(x\right)=1-\left(\frac{x}{\ell}\right)^{\alpha}\]
and is connected to the truncated \(\beta_{\alpha}\) function defined as follows:
\[\beta_{\alpha}=\beta_{\alpha}\left(u,v\right)=\int\limits_{\alpha}^{1}t^{u-1}\left(1-t\right)^{v-1}dt,\ \ 0\leq\alpha<1.\]
In particular, \(\beta_{0}\) coincides with the usual \(\beta\) function \(\beta(u,v).\)
Our next main result reads:
**Theorem 3.3.**_Let \(\alpha\geq p>0\) and let \(f\) be a measurable, non-negative and non-decreasing function on \(\left(0,\ell\right),\)\(0<\ell\leq\infty.\)_
\(a)\) _If \(p\geq 1,\) then_
\[\int\limits_{0}^{\ell}\left(\int\limits_{0}^{x}f\left(y\right)dy\right)^{p}x^{-\alpha}\frac{dx}{x}\geq\frac{p}{\alpha}\int\limits_{0}^{\ell}\left(xf\left(x\right)\right)^{p}x^{-\alpha}T\left(x\right)\frac{dx}{x}, \tag{3.4}\]
_where_
\[T\left(x\right):=\alpha\beta_{\frac{x}{\ell}}\left(p,\alpha-p+1\right),\ \ x\leq l.\]
\(b)\) _If \(0<p\leq 1,\) then (3.4) holds in the reversed direction._
\(c)\) _The constant \(\frac{p}{\alpha}\) is sharp in both \(a)\) and \(b)\) and equality appears if_
\[f\left(x\right)=A\chi_{\left(c,l\right)}\left(x\right)\ \ \ \mbox{for some}\ \ \ c\in\left(0,l\right)\ \ \ \mbox{and}\ \ \ A>0.\]
**Proof**. \(a)\) By using again Lemma 3.1 and Fubini's theorem we obtain that
\[I := \int\limits_{0}^{\ell}\left(\int\limits_{0}^{x}f\left(y\right)dy \right)^{p}x^{-\alpha}\frac{dx}{x}\geq p\int\limits_{0}^{\ell}\int\limits_{0} ^{x}\left(x-y\right)^{p-1}\left(f\left(y\right)\right)^{p}dyx^{-\alpha}\frac{ dx}{x}\] \[= p\int\limits_{0}^{\ell}\left(f\left(y\right)\right)^{p}\int \limits_{y}^{\ell}\left(x-y\right)^{p-1}x^{-\alpha}\frac{dx}{x}dy.\]
We make the transformation \(t=\frac{y}{x}\) in the inner integral and get that
\[I \geq p\int\limits_{0}^{\ell}\left(f\left(y\right)\right)^{p}\int\limits_{ y/l}^{1}\left(1-t\right)^{p-1}\Bigl{(}\frac{y}{t}\Bigr{)}^{p-\alpha-1}\frac{dt}{t}dy\] \[= p\int\limits_{0}^{\ell}\left(yf\left(y\right)\right)^{p}y^{- \alpha}\int\limits_{y/l}^{1}\left(1-t\right)^{p-1}t^{\alpha-p}dt\frac{dy}{y}\] \[= \frac{p}{\alpha}\int\limits_{0}^{\ell}\left(yf\left(y\right) \right)^{p}y^{-\alpha}T\left(y\right)\frac{dy}{y}.\]
\(b)\) Since the only inequality used above holds in the reversed direction in this case (see Lemma 3.1) the proof of \(b)\) follows in the same way.
\(c)\) Choose the test function
\[f\left(x\right)=A\chi_{\left(c,l\right)}\left(x\right),\;c\in\left(0,l\right).\]
Then, in view of the proofs of \(a)\) and \(b),\) for any \(p>0\) the right hand side of (3.4) is equal to
\[I := p\int\limits_{0}^{\ell}\int\limits_{0}^{x}\left(x-y\right)^{p-1}A^{p}\left(\chi_{\left(c,l\right)}(y)\right)^{p}dy\,x^{-\alpha}\frac{dx}{x}\] \[= pA^{p}\int\limits_{c}^{\ell}\int\limits_{c}^{x}\left(x-y\right)^{p-1}dy\,x^{-\alpha}\frac{dx}{x}\] \[= A^{p}\int\limits_{c}^{\ell}\left(x-c\right)^{p}x^{-\alpha}\frac{dx}{x}\]
Moreover, the left hand side of (3.4) is equal to
\[A^{p}\int\limits_{0}^{\ell}\left(\int\limits_{0}^{x}\chi_{\left(c,l\right)}\left(y\right)dy\right)^{p}x^{-\alpha}\frac{dx}{x}=A^{p}\int\limits_{c}^{\ell}\left(x-c\right)^{p}x^{-\alpha}\frac{dx}{x}=I\]
so we have equality in (3.4) for \(p\geq 1\) and in the reversed inequality for \(0<p\leq 1,\) i.e. for all \(p>0\).
The proof is complete. \(\Box\)
**Example 3.4.** For the case \(l=\infty\) we obtain the sharp inequality
\[\int\limits_{0}^{\infty}\left(\int\limits_{0}^{x}f\left(y\right)dy\right)^{p} x^{-\alpha}\frac{dx}{x}\geq pB(p,\alpha-p+1)\int\limits_{0}^{\infty}\left(xf(x) \right)^{p}x^{-\alpha}\frac{dx}{x}\]
for all non-decreasing functions \(f\). This inequality holds in the reversed direction when \(0<p\leq 1\) and the constant is sharp also then. Hence, by just changing notations we see that our result generalizes also a result in [3].
Hence, we have investigated all cases concerning the usual (arithmetic mean) Hardy operator so we turn to the dual situation (c.f. Theorem 2.3\(c\))) and here the only non-trivial situation is to study the non-increasing case.
Our main result for this case reads:
**Theorem 3.5.**_Let \(p>0,\ \alpha>0\) and \(f\) be a measurable, non-negative and non-increasing function on \((\ell,\infty),\)\(0\leq\ell<\infty.\)_
\(a)\) _If \(p\geq 1,\) then_
\[\int\limits_{\ell}^{\infty}\left(\int\limits_{x}^{\infty}f\left(y\right)dy \right)^{p}x^{\alpha}\frac{dx}{x}\geq\frac{p}{\alpha}\int\limits_{\ell}^{ \infty}\left(xf\left(x\right)\right)^{p}x^{\alpha}T_{0}\left(x\right)\frac{dx} {x}, \tag{3.5}\]
_where_
\[T_{0}\left(x\right):=\alpha\beta_{\frac{\ell}{x}}\left(p,\alpha\right),\ \ x\geq l.\]
\(b)\) _If \(0<p\leq 1,\) then (3.5) holds in the reversed direction._
\(c)\) _The constant \(p/\alpha\) is sharp in both \(a)\) and \(b)\) and equality appears in both \(a)\) and \(b)\) if_
\[f\left(x\right)=A\chi_{(\ell,c)}\left(x\right)\ \ \ \mbox{for some}\ \ \ c\in(\ell,\infty)\ \ \ \mbox{and}\ \ \ A>0.\]
Proof.\(a)\) By again applying Lemma 3.1 and Fubini's theorem we get that
\[I := \int\limits_{\ell}^{\infty}\left(\int\limits_{x}^{\infty}f\left(y\right)dy\right)^{p}x^{\alpha}\frac{dx}{x}\geq p\int\limits_{\ell}^{\infty}\int\limits_{x}^{\infty}\left(y-x\right)^{p-1}(f\left(y\right))^{p}dy\,x^{\alpha}\frac{dx}{x}\] \[= p\int\limits_{\ell}^{\infty}\left(f\left(y\right)\right)^{p}\int\limits_{\ell}^{y}\left(y-x\right)^{p-1}x^{\alpha-1}dxdy\] \[= p\int\limits_{\ell}^{\infty}\left(f\left(y\right)\right)^{p}y^{p-1}\int\limits_{\ell}^{y}\left(1-\frac{x}{y}\right)^{p-1}x^{\alpha-1}dxdy\]
Thus, by making the transformation \(t=x/y\) in the inner integral we can conclude that
\[I \geq p\int\limits_{\ell}^{\infty}\left(f\left(y\right)y\right)^{p}y^{\alpha}\int\limits_{\ell/y}^{1}\left(1-t\right)^{p-1}t^{\alpha-1}dt\,\frac{dy}{y}\] \[= \frac{p}{\alpha}\int\limits_{\ell}^{\infty}\left(f\left(y\right)y\right)^{p}y^{\alpha}T_{0}\left(y\right)\frac{dy}{y}.\]
\(b)\) The proof follows in the same way since the only inequality used in \(a)\) now holds in the reversed direction.
\(c)\) Similarly as in the proof of Theorem 3.3 \(c)\) we can easily verify that we indeed have equality in the inequality (3.5) (and the reversed inequality when \(0<p\leq 1\)) for every function
\[f\left(x\right)=A\chi_{(\ell,c)}\left(x\right),\ c\in(\ell,\infty)\ \mbox{and}\ A>0.\]
Hence, also the sharpness is proved. \(\Box\)
**Example 3.6.** Let \(f,\)\(p\) and \(\alpha\) be defined as in Theorem 3.5. If \(p\geq 1,\) then
\[\int\limits_{0}^{\infty}\left(\int\limits_{x}^{\infty}f\left(y\right)dy\right)^ {p}x^{\alpha}\frac{dx}{x}\geq p\beta\left(p,\alpha\right)\int\limits_{0}^{ \infty}\left(xf\left(x\right)\right)^{p}x^{\alpha}\frac{dx}{x},\]
where \(f(x)\) is a non-negative and non-increasing function. The inequality holds in the reversed direction when \(0<p\leq 1\) and the constant \(p\beta\left(p,\alpha\right)\) is sharp in both cases. Hence, Theorem 3.5 may be regarded also as generalization of another result in [3].
## 4 Applications
By combining Theorem 2.3\(a),\)\(b)\) and \(c)\) with Theorem 3.2 we obtain the following sharp two sided estimates:
**Theorem 4.1.**_Let \(p>0,\)\(0<\alpha<p,\)\(0<\ell\leq\infty\) and let \(f\) be a measurable, non-negative and non-increasing function on \((0,\ell).\)_
_If \(p>1,\) then_
\[\left(\frac{p}{\alpha}\right)^{1/p}I_{1}\leq I_{2}\leq\frac{p}{\alpha}I_{1}, \tag{4.1}\]
_where_
\[I_{1}=\left(\int\limits_{0}^{\ell}\left(xf\left(x\right)\right)^{p}x^{-\alpha }\left(1-\left(\frac{x}{\ell}\right)^{\alpha}\right)\frac{dx}{x}\right)^{1/p}\]
_and_
\[I_{2}=\left(\int\limits_{0}^{\ell}\left(\int\limits_{0}^{x}f\left(y\right)dy\right)^{p}x^{-\alpha}\frac{dx}{x}\right)^{1/p}.\]
_If \(0<p\leq 1,\) then (4.1) holds in the reversed direction. Moreover, both constants \(\left(p/\alpha\right)^{1/p}\) and \(p/\alpha\) are sharp for all \(p>0.\)_
**Remark 4.2.**\(a)\) This means that the equivalence \(I_{2}\approx I_{1}\) holds and the corresponding "optimal target function" is
\[g\left(x\right)=1-\left(\frac{x}{\ell}\right)^{\alpha}.\]
\(b)\) In the lower inequality we can even have equality while in the above inequality the sharpness follows by choosing a sequence of non-increasing functions (a well-known fact from the theory of Hardy-type inequalities).
**Remark 4.3.** Many crucial objects in different mathematical areas are non-increasing (e.g. in Lorentz spaces, interpolation theory, approximation theory and harmonic analysis). Hence, in
particular, Theorem 4.1 can be useful to obtain some more precise versions of known results in each of these areas. We illustrate this fact only in the theory of Lorentz spaces but aim to later also use our result to improve some results in the modern harmonic analysis as presented in the new book [15].
Let \(f^{*}\) denote the non-increasing rearrangement of a function \(f\) on a measure space \(\left(\Omega,\mu\right).\) The Lorentz spaces \(L^{p,q},\)\(0<p,q<\infty,\) are defined by using the quasi-norm (norm when \(p>1,\)\(q\geq 1\))
\[\left\|f\right\|_{p,q}^{*}:=\left(\int\limits_{0}^{\infty}\left(f^{*}\left(t \right)t^{1/p}\right)^{q}\frac{dt}{t}\right)^{1/q}. \tag{4.2}\]
It is well-known that for the case \(p>1\) this quasi-norm is equivalent to the following one equipped with the usual Hardy operator:
\[\left\|f\right\|_{p,q}^{**}:=\left(\int\limits_{0}^{\infty}\left(\int\limits_{ 0}^{t}f^{*}\left(u\right)du\right)^{q}t^{-q/p^{\prime}}\frac{dt}{t}\right)^{1/ q}.\]
Moreover, we have the following more precise estimates:
\[\left(p^{\prime}\right)^{1/q}\left\|f\right\|_{p,q}^{*}\leq\left\|f\right\|_{ p,q}^{**}\leq p^{\prime}\left\|f\right\|_{p,q}^{*} \tag{4.3}\]
if \(q>1\) and the reversed inequalities hold if \(0<q\leq 1.\) However, by using Theorem 4.1 we not only get the sharp estimates in (4.3) but also the following more precise statement:
**Corollary 4.4.**_With the notations and assumptions above, \(p>1\) and \(0<\ell\leq\infty,\) we have that_
\[\left(p^{\prime}\right)^{1/q}I_{\ell}^{*}\leq I_{\ell}^{**}\leq p^{\prime}I_{ \ell}^{*}, \tag{4.4}\]
_where \(q>1,\)_
\[I_{\ell}^{*}:=\left(\int\limits_{0}^{\ell}\left(f^{*}\left(t\right)t^{1/p} \right)^{q}\left(1-\left(\frac{t}{\ell}\right)^{q/p^{\prime}}\right)\frac{dt}{ t}\right)^{1/q}\]
_and_
\[I_{\ell}^{**}:=\left(\int\limits_{0}^{\ell}\left(\int\limits_{0}^{t}f^{*} \left(u\right)du\right)^{q}t^{-q/p^{\prime}}\frac{dt}{t}\right)^{1/q}.\]
_If \(0<q\leq 1,\) then the inequalities in (4.4) hold in the reversed directions. Both constants \(\left(p^{\prime}\right)^{1/q}\) and \(p^{\prime}\) are sharp for all \(q>0.\)_
Proof. Just apply Theorem 4.1 with \(p\) replaced by \(q\) and \(\alpha\) replaced by \(q/p^{\prime}.\)\(\Box\)
**Remark 4.5.** Note that (4.3) is obtained by just using (4.4) with \(\ell=\infty,\) so in particular, both constants in (4.3) (and in the reversed inequalities for \(0<q\leq 1\)) are sharp.
**Remark 4.6.** For the case \(0<p\leq 1\) it is known that the quasi-norm \(\left\|f\right\|_{p,q}^{\ast}\) is equivalent to the following quasi-norm \(\left\|f\right\|_{p,q}^{\ast\ast}\) equipped with the dual Hardy operator:
\[\left\|f\right\|_{p,q}^{\ast\ast}:=\left(\int\limits_{0}^{\infty}\left(\int \limits_{t}^{\infty}f^{\ast}\left(u\right)du\right)^{q}t^{-q/p^{\prime}}\frac{ dt}{t}\right)^{1/q}\]
By instead using Theorem 2.3\(c),\)\(d)\) and \(e)\) with \(l=0\) combined with Example 3.6 we obtain that if \(0<p\leq 1,\) then
\[\left(q\beta\left(q,-q/p^{\prime}\right)\right)^{1/q}\left\|f\right\|_{p,q}^{ \ast}\leq\left\|f\right\|_{p,q}^{\ast\ast}\leq-p^{\prime}\left\|f\right\|_{p,q} ^{\ast} \tag{4.5}\]
if \(q\geq 1\) and the reversed inequalities hold if \(0<q\leq 1.\) Both constants \(\left(q\beta\left(q,-q/p^{\prime}\right)\right)^{1/q}\) and \(-p^{\prime}\) are sharp for all \(q>0.\)
**Remark 4.7.** A more general statement like that in Corollary 4.4 involving sharp constants in both inequalities can be formulated, where the integrals \(\int\limits_{0}^{\infty}\) are replaced by the integrals \(\int\limits_{\ell}^{\infty},\)\(0\leq\ell<\infty.\) In particular, this gives a similar generalization of (4.5). However, in this case the result looks less nice since the two target functions \(1-\left(\frac{x}{\ell}\right)^{\alpha}\) and \(\alpha\beta_{\frac{\ell}{x}}\left(p,\alpha\right)\) do not coincide.
We only give the following final example related to Remark 4.7 and the well-known inequality: If \(0<p<1,\) then
\[\int\limits_{0}^{\infty}\left(\frac{1}{x}\int\limits_{x}^{\infty}f\left(y\right)dy\right)^{p}dx\leq\frac{\pi p}{\sin\pi p}\int\limits_{0}^{\infty}f^{p}\left(x\right)dx \tag{4.6}\]
for all functions as defined in Theorem 3.5:
**Example 4.8.** Let \(0<p<1\) and let \(f\) be a measurable, non-negative and non-increasing function on \(\left(\ell,\infty\right),\)\(0\leq l<\infty.\) If \(0<p\leq 1,\) then
\[\int\limits_{\ell}^{\infty}\left(\frac{1}{x}\int\limits_{x}^{\infty}f\left(y\right)dy\right)^{p}dx\leq p\int\limits_{\ell}^{\infty}f^{p}\left(x\right)\beta_{\frac{\ell}{x}}\left(p,1-p\right)dx,\]
and the constant \(p\) is sharp. This is just Theorem 3.5 b) with \(\alpha=1-p.\)
In particular, for \(\ell=0\) this inequality coincides with (4.6) since
\[\beta\left(p,1-p\right)=\pi/\sin\pi p,\]
so the constant \(\frac{\pi p}{\sin\pi p}\) in (4.6) is sharp.
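The identity \(\beta(p,1-p)=\pi/\sin\pi p\) used here follows from \(\beta(u,v)=\Gamma(u)\Gamma(v)/\Gamma(u+v)\) and the reflection formula; the short check below (an illustration only) confirms it numerically for a few values of \(p\).

```python
import math

# Numerical check of beta(p, 1-p) = pi / sin(pi p), using
# beta(u, v) = Gamma(u) Gamma(v) / Gamma(u + v) with u + v = 1.
for p in [0.25, 1.0 / 3.0, 0.5, 0.9]:
    beta_val = math.gamma(p) * math.gamma(1.0 - p)
    ref = math.pi / math.sin(math.pi * p)
    print(f"p = {p:.4f}: beta(p,1-p) = {beta_val:.6f}, pi/sin(pi p) = {ref:.6f}")
```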
## 5 Some further results and final remarks
First we remark that e.g. the Hardy inequality (2.4) has no meaning in the limit case \(\alpha=0.\) However, by restricting to the interval \((0,1)\) and involving some suitable logarithms, C. Bennett in 1973 succeeded in proving such an inequality when he developed his well-known theory for real interpolation between the (fairly close) spaces \(L\) and \(LLogL\) on \((0,1),\) see [2] and c.f. also [3]. This result has been generalized by other authors but the so far most precise results were derived in [1]. Here we state a little more general form of this result in our \(dx/x\) terminology and with the interval \((0,1)\) replaced by \((0,\ell),\ 0<\ell<\infty.\)
**Theorem 5.1.**_Let \(\alpha,p>0\) and \(f\) be a non-negative and measurable function on \((0,\ell),\ 0<\ell<\infty.\)_
\((a)\) _If \(p>1,\) then_
\[\alpha^{p-1}\left(\int_{0}^{\ell}f(x)dx\right)^{p}+\alpha^{p}\int_{0}^{\ell}[\log(\ell/x)]^{\alpha p-1}\left(\int_{0}^{x}f(y)dy\right)^{p}\frac{dx}{x}\leq\int_{0}^{\ell}x^{p}[\log(\ell/x)]^{(1+\alpha)p-1}f^{p}(x)\frac{dx}{x}. \tag{5.1}\]
_Both constants \(\alpha^{p-1}\) and \(\alpha^{p}\) in (5.1) are sharp._
\((b)\) _If \(0<p<1,\) then (5.1) holds in the reverse direction and both constants \(\alpha^{p-1}\) and \(\alpha^{p}\) are sharp._
\((c)\) _If \(p=1,\) then we have equality in (5.1)._
Proof. The proof can be done by just modifying step by step the arguments in the proof for the case \(l=1.\) Alternatively the result can be derived by using the original result in [1] and making suitable variable substitutions. Hence, we omit the details. \(\Box\)
**Remark 5.2.** (5.1) is one of the few inequalities we know containing two constants, both of which are sharp. In the original paper [2] only the case (a) was considered and with one constant involved (the first term in (5.1) was missing) and the sharpness was not discussed at all.
**Remark 5.3.** By using Theorem 5.1 with \(f(x)=g(1/x)x^{-2}\) and making obvious variable transformations and changes in notations we also get the following "dual" version:
**Theorem 5.4.**_Let \(\alpha,p>0\) and \(f\) be a non-negative and measurable function on \((\ell,\infty),\ 0<\ell<\infty.\)_
\((a)\) _If \(p>1,\) then_
\[\alpha^{p-1}\left(\int_{\ell}^{\infty}f(x)dx\right)^{p}+\alpha^{p}\int_{\ell}^{\infty}[\log(xe/\ell)]^{\alpha p-1}\left(\int_{x}^{\infty}f(y)dy\right)^{p}\frac{dx}{x}\leq\int_{\ell}^{\infty}x^{p}[\log(xe/\ell)]^{(1+\alpha)p-1}f^{p}(x)\frac{dx}{x}. \tag{5.2}\]
_Both constants \(\alpha^{p-1}\) and \(\alpha^{p}\) in (5.2) are sharp._
\((b)\) _If \(0<p\leq 1,\) then (5.2) holds in the reverse direction and also here both constants \(\alpha^{p-1}\) and \(\alpha^{p}\) are sharp._
Next we pronounce that all sharp inequalities we presented so far are for the case \(q=p.\) Very little concerning sharp constants is known for other cases. Let us illustrate this problem by mentioning the fact that by applying the general theory in Hardy-type inequalities (see e.g. the book [11]) in a power weighted case we get in our \(dx/x\) frame the following:
**Example 5.5.** The inequality
\[\left(\int\limits_{0}^{\infty}\left(\int\limits_{0}^{x}f(t)dt\right)^{q}x^{- \alpha}\frac{dx}{x}\right)^{\frac{1}{q}}\leq C\left(\int\limits_{0}^{\infty} (xf(x))^{p}x^{-\beta}\frac{dx}{x}\right)^{\frac{1}{p}} \tag{5.3}\]
holds for some finite constant \(C>0\) for \(1<p\leq q<\infty\), if and only if
\[\beta>0\ \ \ {\rm and}\ \ \frac{\alpha}{q}=\frac{\beta}{p}. \tag{5.4}\]
**Remark 5.6.** For the case \(p=q\) we have already pointed out the sharp constant, but for the case \(1<p<q<\infty\) this has been a fairly long-standing open question since G.A. Bliss in 1930 solved it for \(\beta=p-1\) (see [4]). It was finally solved in 2015 in the paper [14] and in our \(dx/x\) frame their result reads:
**Theorem 5.7.**_Let \(1<p<q<\infty\) and the parameters \(\alpha\) and \(\beta\) satisfying (5.4). Then the sharp constant in (5.3) is \(C=C_{pq}^{*},\) where_
\[C_{pq}^{*}=\left(\frac{p-1}{\beta}\right)^{\frac{1}{p^{\prime}}+\frac{1}{q}} \left(\frac{p^{\prime}}{q}\right)^{\frac{1}{p}}\left(\frac{\frac{q-p}{p} \Gamma\left(\frac{pq}{q-p}\right)}{\Gamma\left(\frac{p}{q-p}\right)\Gamma \left(\frac{p(q-1)}{q-p}\right)}\right)^{\frac{1}{p}-\frac{1}{q}}. \tag{5.5}\]
**Remark 5.8.** Some straightforward calculations show that
\[C_{pq}^{*}\rightarrow\frac{p}{\beta}\ {\rm as}\ q\to p,\]
so indeed we have the expected continuity in the sharp constants as \(\ q\to p.\)
In the dual situation we have the following:
**Example 5.9.** The inequality
\[\left(\int\limits_{0}^{\infty}\left(\int\limits_{x}^{\infty}f(t)dt\right)^{q} x^{\alpha}\frac{dx}{x}\right)^{\frac{1}{q}}\leq C\left(\int\limits_{0}^{\infty} (xf(x))^{p}x^{\beta}\frac{dx}{x}\right)^{\frac{1}{p}} \tag{5.6}\]
holds for \(1<p\leq q<\infty\) and some finite constant \(C>0,\) if and only if
\[\beta>0\ \ \ {\rm and}\ \ \frac{\alpha}{q}=\frac{\beta}{p}\]
and the sharp constant is known also in this case (see[14]).
**Remark 5.10.** All cases when we have equality in (5.3) with \(C=C_{pq}^{*}\) defined by (5.5), and when we have equality in (5.6), are also known (see again [14]). Hence, it seems to be an interesting open question to derive the corresponding sharp results when the integrals \(\int_{0}^{\infty}\) are replaced by \(\int_{0}^{\ell},\ 0<\ell\leq\infty,\) or \(\int_{\ell}^{\infty},\ 0\leq\ell<\infty,\) respectively. We aim to investigate this in a forthcoming paper. We use this opportunity to note a misprint in [14]: the condition \(\frac{n+\alpha}{p}=\frac{n+\beta}{q}\) in Theorems 4.1 and 4.2 in [14] should be replaced by \(\frac{n+\alpha}{q}=\frac{n+\beta}{p}.\)
**Remark 5.11.** By using the same transformations as those pointed out in Remark 5.3 we can transform inequalities involving integrals \(\int_{0}^{\ell}\) to inequalities involving the integrals \(\int_{\ell}^{\infty}.\) Let us just as one example of this fact restate Theorem 3.2 in this way:
**Theorem 5.12.**_Let \(p>0,0<\alpha<p\) and let \(f(x)x^{2}\) be a measurable, non-negative and non-decreasing function on \((\ell,\infty),\ 0\leq\ell<\infty.\)_
_a) If \(p\geq 1,\) then_
\[\int\limits_{\ell}^{\infty}\left(\int\limits_{x}^{\infty}f(y)dy\right)^{p}x^{\alpha}\frac{dx}{x}\geq\frac{p}{\alpha}\int_{\ell}^{\infty}(xf(x))^{p}x^{\alpha}\left(1-\left(\frac{\ell}{x}\right)^{\alpha}\right)\frac{dx}{x} \tag{5.7}\]
_b) If \(0<p\leq 1,\) then (5.7) holds in the reversed direction._
_c) The constant \(p/\alpha\) is sharp in both a) and b) and equality appears for any \(f(x)=Ax^{-2}\chi_{(c,\infty)}(x)\) for some \(c\in(\ell,\infty),\ A>0.\)_
**Remark 5.13.** The function f(x) in Theorem 5.12 is an example of a so called quasi-monotone function, which means that \(f(x)x^{\alpha}\) is non-increasing or non-decreasing for some \(\alpha\in R.\) It is another interesting open question to investigate all our results concerning monotone functions for such more general quasi-monotone functions. Even in the case with infinite intervals some interesting phenomena appear. See [3] and the references therein for a special case.
|
2305.09179 | Ortho-ODE: Enhancing Robustness and of Neural ODEs against Adversarial
Attacks | Neural Ordinary Differential Equations (NODEs) probed the usage of numerical
solvers to solve the differential equation characterized by a Neural Network
(NN), therefore initiating a new paradigm of deep learning models with infinite
depth. NODEs were designed to tackle the irregular time series problem.
However, NODEs have demonstrated robustness against various noises and
adversarial attacks. This paper is about the natural robustness of NODEs and
examines the cause behind such surprising behaviour. We show that by
controlling the Lipschitz constant of the ODE dynamics the robustness can be
significantly improved. We derive our approach from Grownwall's inequality.
Further, we draw parallels between contractivity theory and Grownwall's
inequality. Experimentally we corroborate the enhanced robustness on numerous
datasets - MNIST, CIFAR-10, and CIFAR 100. We also present the impact of
adaptive and non-adaptive solvers on the robustness of NODEs. | Vishal Purohit | 2023-05-16T05:37:06Z | http://arxiv.org/abs/2305.09179v1 | # Ortho-ODE: Enhancing Robustness and of Neural ODEs against Adversarial Attacks
###### Abstract
Neural Ordinary Differential Equations (NODEs), proposed in the influential work [5], probed the usage of numerical solvers to solve the differential equation characterized by a Neural Network (NN), therefore initiating a new paradigm of deep learning models with infinite depth. NODEs were designed to tackle the irregular time series problem. However, NODEs have demonstrated robustness against various noises and adversarial attacks. This paper is about the natural robustness of NODEs and examines the cause behind such surprising behavior. We show that by controlling the Lipschitz constant of the ODE dynamics the robustness can be
significantly improved. We derive our approach from Gronwall's inequality. Further, we draw parallels between contractivity theory and Gronwall's inequality. Experimentally we corroborate the enhanced robustness on numerous datasets - MNIST, CIFAR-10, and CIFAR-100. We also present the impact of adaptive and non-adaptive solvers on the robustness of NODEs.
## 1 Introduction
Deep Learning (DL) has revolutionized and impacted diverse fields of science. It has found successful applications in high-level vision tasks like image classification, object detection, and image segmentation, and low-level tasks like image super-resolution, de-blurring, etc. Despite a plethora of applications, deep learning algorithms suffer from fundamental problems that limit their application to critical fields like medical imaging, security, and surveillance. In particular, [33] found that most of the existing state-of-the-art neural networks are easily fooled by adversarial examples that are generated using tiny perturbations to the input images. Inputs corrupted with imperceptible perturbations can easily fool many vanilla deep neural networks (DNNs) into misclassifying them and degrading their performance. A new subfield of deep learning, adversarial attacks [35], is dedicated to designing such imperceptible perturbations to data and defenses for such attacks. Recently, [36], [21] have applied Neural Ordinary Differential Equations (NODEs) [5] to defend against adversarial attacks. Some works [36] explored the natural robustness of NODEs against adversarial attacks, both empirically and theoretically. The work [36] made some interesting observations and provided the reasoning behind such surprising properties of NODEs. NODEs were introduced to tackle the irregular time series problem but their surprising robustness against attacks has piqued the interest of many researchers. Though NODEs are robust against adversarial attacks, they still suffer from poor performance on specific attacks, in particular gradient-free attacks. Practically, it is impossible to defend against every adversarial attack out in the wild. Meanwhile, a more important question to ask is: why are NODEs robust against some adversarial attacks, and how can their robustness be improved?
So far many techniques have been introduced to tackle adversarial attacks. Probably one of the most successful techniques is _adversarial training_ introduced in works [23][38]. In adversarial training, the adversarial examples are simulated in each iteration of the model and used as a training set in the next iteration of training. Using adversarial training is computationally expensive since in every iteration we need to generate adversarial examples and retrain the model on newly generated samples. In contrast, NODEs offer robustness naturally without the need for adversarial training which makes them attractive to computation-limited applications.
In this paper, our objective is to assess the effect of the Lipschitz constant of the NODE dynamics on the robustness of the model against adversarial attacks. To this end, we first propose to use orthogonal convolutional layers [34] based on the Cayley transform to design the NN that represents the dynamics of the non-linear dynamical system. Encouraging
orthogonality in neural networks has proven to yield several compelling benefits. Our work specifically uses two such benefits - preserving gradient norms and enforcing low Lipschitz constants. Controlling Lipschitz constants is nontrivial and has shown several benefits [29], [13], [28] against perturbations in Convolutional Neural Networks (CNNs). Different from CNNs, NODEs are infinite depth resnets [19]. Because of their infinite depth nature, we need to ensure that NODEs do not suffer from degraded activations due to exploding and vanishing gradients [27]. Orthogonal convolutional layers using the Cayley transform ensure stable activations, preserving gradient norms and enforcing low Lipschitz constants. We call our method _Ortho-ODE_, as in ODE with orthogonal convolutional layers. Our method is backed theoretically by Gronwall's inequality. Our contributions are:
* Our method proposes to use orthogonal convolutional layers to characterize the NN representing the dynamics of the ODE, thus enabling us to upper bound the Lipschitz constant of the dynamics and making our model robust.
* We theoretically justify that imposing orthogonality constraints on the dynamics ensures the representations of adversarial and pure samples remain close, thereby increasing the classification accuracy of our method.
* We test our method on multiple datasets and against many state-of-the-art robust NODE methods. We draw parallels between our method and contractivity theory to demonstrate that various theoretical pieces of evidence support our method.
## 2 Related Work
### Neural-ODEs
NODEs were first introduced in the work by Chen et al. [5] as a continuous depth formulation of the ResNet architecture. Many notable architectures can be interpreted as different discretizations of differential equations [1]. Many of the subsequent works have followed up with an exploration of optimization issues and the expressivity of NODEs. For example, in the work [8] it was shown that the expressivity of NODEs is limited due to their topology-preserving nature. To overcome such issues, [8] presented augmented NODEs to learn more complex functions. The work [36] was the first to evaluate the adversarial robustness of NODEs theoretically as well as empirically.
Additionally, in [17] it was shown that injecting noise could be beneficial to the stability of NODEs. Despite some appealing properties of NODEs, they are computationally expensive. Hence a recent study [25] explored the efficient implementation of the adjoint training method. The renewed interest in the marriage of dynamic systems and deep learning has given rise to a plethora of works combining the theory of dynamical systems with NODEs. In the original formulation of NODE, there was no depth or input-dependent modification of the dynamics. However, [24] suggested using neural ODE whose dynamics would depend on the input.
### Adversarial Attacks
Adversarial examples are seen as threats to neural networks, especially in critical applications. Adversarial examples are fed to the neural network to modify its predictions to the desired prediction. One of the initial applications of adversarial attacks dates back to the works [6], [22], which modified the spellings of words to fool spam filters. The first adversarial attacks on computer vision models were introduced in the works [33] and [17]. These two works established a new field of deep learning focusing on the design of, and defense against, adversarial examples. It is puzzling to many researchers as to why neural networks are sensitive to imperceptible perturbations (adversarial attacks) in the image. The work [16] proposed to localize the attack region to a small patch instead of adding noise to the whole image. Many such attacks have been formulated for speech processing [3], [15]. Multiple works [10], [11] have proposed attacks designed specifically for adaptive models similar to NODEs.
Since the introduction of adversarial examples, many works have been proposed to refine the attacks and target various properties of the neural network. Broadly, adversarial attacks can be classified as black-box and white-box attacks [2]. Black-box attacks are those where the attacker does not have access to any knowledge about the model or its output. However, in white-box attacks, the attacker has access to the gradient information, outputs, or model architecture. White-box attacks are generally more effective because targeted attacks can be designed for a specific model using the available information. Two of the most famous attacks are FGSM [12] and PGD [12], which are white-box attacks with the goal of misclassification. PGD is a gradient-based attack where the attacker has access to gradients of the model during training. Since then many sophisticated attacks have been proposed. Autoattack [12] is a suite of attacks carefully designed to do large-scale evaluations of the robustness of NNs.
### Adversarial Defense
The adversarial defense literature is equally rich, with the most famous being adversarial training [17], Bayesian adversarial training [1], and other regularization-based methods [28]. Many variations of adversarial training approaches have also been proposed [14]. Among various defense mechanisms, [30] was the first work to improve the robustness using control theory and dynamical systems. Further, the use of Lyapunov-stable equilibrium for NODEs is investigated in Stable Neural ODE [20].
## 3 Methodology
In this section, we first go over some of the preliminaries and problem formulation for image classification using NODEs. We follow up with a detailed description of methods and theorems supporting our hypothesis.
### Preliminaries on Neural ODE
Under the neural ODE framework, we model the input and output as two states of a continuous dynamical system by approximating the system's dynamics with trainable layers. Neural ODEs are endowed with intrinsic invertibility, yielding a family of invertible models for solving inverse problems [9]. The following equations characterize the relation between input and output:
\[\frac{dz(t)}{dt}=f_{\mathcal{W}}(z(t),t),z(0)=z_{in} \tag{1}\]
where \(z_{out}=z(T)\) and \(f_{\mathcal{W}}:\mathbb{R}^{n}\times[0,\infty)\rightarrow\mathbb{R}^{n}\) denotes the trainable layers that are parameterized by weights \(\mathcal{W}\). We assume that \(f_{\mathcal{W}}\) is continuous in \(t\) and globally Lipschitz continuous in \(z\). The input of the neural ODE corresponds to the state at \(t=0\) and the output \(z_{out}\) is associated with the state at some \(T\in(0,\infty)\). Given the input \(z_{in}\), the output \(z_{out}\) at time \(T\) can be calculated by solving the ODE in Eq. 1. Therefore, the solution of the neural ODE can be represented as a \(d\)-dimensional function \(\phi_{T}(.,.),\) i.e.
\[z_{out}=z(T)=z(0)+\int_{0}^{T}f_{\mathcal{W}}(z(t),t)dt=\phi_{T}(z_{in},f_{ \mathcal{W}}) \tag{2}\]
It is quite easy to see that NODEs are the continuous analog of residual networks, where the hidden layers of the resnet can be regarded as the discrete-time difference equation \(z(t+1)=z(t)+f_{\mathcal{W}}(z(t),t)\).
For the classification task, NODEs cannot be applied directly to images. We need CNN layers before the NODE block to extract the representation before passing it to the NODE. Additionally, we need a set of fully connected layers after the NODE to classify the representation into the various classes. The pre- and post-processing layers are represented by \(f_{pre}(.)\) and \(f_{post}(.)\). NODEs have a property that is formalized using the following proposition
**Proposition 1**.: _(ODE integral curves do not intersect [18], [37]) Let \(f\) be a function whose derivative exists at every point; then \(f\) is a continuous function. Let \(z_{1}(t)\) and \(z_{2}(t)\) be two solutions of the ODE with different initial conditions, i.e. \(z_{1}(0)\neq z_{2}(0)\), and let \(f_{\mathcal{W}}\) be continuous in \(t\) and globally Lipschitz continuous in \(z\)._
_Then, it holds that \(z_{1}(t)\neq z_{2}(t)\) for all \(t\in[0,\infty).\)_
### Problem Formulation
Let \(\mathcal{P}_{data}\) be the probability distribution of the data over \(\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\) represent a set of input data points and \(\mathcal{Y}\) represents the corresponding labels. Let \(n^{th}\) pair of data be represented by pair \((x_{n},y_{n})\), where \(x_{n}\in\mathbb{R}^{m\times n}\) represents an image of \(m\times n\), width and height respectively and \(y_{n}\in\mathbb{R}^{C}\). Here \(C\) is the total number of classes in the training
dataset. Further, we assume that both training and test data come from \(\mathcal{P}_{data}\). We extract the features from input \(x_{n}\) using CNN layers and process the output of NODE using an MLP layer. The whole processing pipeline is represented by the following functions -
\[z_{n}(0)=f_{pre}(x_{n}) \rightarrow\text{Feature Extraction}\] \[\frac{dz(t)}{dt}=f_{\mathcal{W}}(z(t),t,z(0)) \rightarrow\text{NODE dynamics}\] \[z_{out}=z(T)=z(0)+\int_{0}^{T}f_{\mathcal{W}}(z(t),t)dt=\phi_{T} (z_{in},f_{\mathcal{W}}) \rightarrow\text{Solution to NODEs}\] \[\text{Output}=f_{post}(z_{out}) \rightarrow\text{Final classified Output}\]
During test time, the adversarial samples generated using PGD or FGSM are represented by \(x_{adv}\). Our goal is to correctly classify the adversarial samples without adversarial training of the NODE.
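The pipeline above can be summarized by the following minimal PyTorch-style sketch; the layer sizes, the fixed-step Euler solver and the class names are illustrative assumptions and not the exact implementation used in this paper (in particular, the dynamics below uses plain convolutions, whereas Ortho-ODE replaces them with Cayley-orthogonalized convolutions, as described in the Ortho-ODE subsection below).

```python
import torch
import torch.nn as nn

class ODEDynamics(nn.Module):
    """f_W(z, t): trainable dynamics of the NODE block (plain convolutions here;
    Ortho-ODE would swap them for Cayley-orthogonalized convolutions)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, z: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(z)

class NODEClassifier(nn.Module):
    """Illustrative f_pre -> NODE -> f_post pipeline with a fixed-step Euler solver."""
    def __init__(self, in_channels: int = 3, channels: int = 64,
                 num_classes: int = 10, steps: int = 10, T: float = 1.0):
        super().__init__()
        self.f_pre = nn.Sequential(nn.Conv2d(in_channels, channels, 3, padding=1), nn.ReLU())
        self.dynamics = ODEDynamics(channels)
        self.f_post = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                    nn.Linear(channels, num_classes))
        self.steps, self.T = steps, T

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.f_pre(x)                      # z(0) = f_pre(x)
        h = self.T / self.steps
        t = torch.zeros(1)
        for _ in range(self.steps):            # Euler step: z <- z + h * f_W(z, t)
            z = z + h * self.dynamics(z, t)
            t = t + h
        return self.f_post(z)                  # class logits

if __name__ == "__main__":
    model = NODEClassifier()
    logits = model(torch.randn(2, 3, 32, 32))
    print(logits.shape)  # torch.Size([2, 10])
```

Replacing the fixed-step Euler loop with an adaptive solver would recover the usual NODE setting; the sketch only makes the f_pre/dynamics/f_post split explicit.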
### The Gronwall-Bellman inequality
A matrix \(A\in\mathbb{R}^{m\times n}\) is said to be an orthogonal matrix if \(A^{T}A=I_{n}\), where \(I_{n}\) is the identity matrix of \(n\times n\) dimension. Matrix \(A\) is said to be _semi-orthogonal_ if \(A^{T}A=I_{n}\) or \(AA^{T}=I_{n}\). If \(m\geq n\), then \(A\) is norm preserving, and if \(m\leq n\) then the mapping is contractive. Alternatively, a matrix with all singular values equal to 1 is orthogonal too.
**Definition 1**.: _(Lipschitzness) A function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) is L-Lipschitz w.r.t \(l_{2}\) norm if and only if \(\frac{||f(x)-f(x^{\prime})||_{2}}{||x-x^{\prime}||_{2}}\leq L\ \forall x,x^{\prime}\in\mathbb{R}^{n}\). \(L\) is called lipschitz constant of \(f\)_
There are multiple works showcasing that bounding the Lipschitz constant of a neural network guarantees robustness against small perturbations in the input. Though the benefit of bounding the Lipschitz constant has been explored for neural networks, the impact of bounding the Lipschitz constant of the dynamics of a NODE is rarely investigated. According to Gronwall's inequality [26], stated informally, the difference between two terminal states of a NODE is bounded by the difference between the initial states times the exponential of the Lipschitz constant of the dynamics. In a neural ODE, the dynamics are represented by a neural network. Hence, by controlling the Lipschitz constant, one can control how close two outputs are even though the initial conditions differ. Formally, the theorem is as below:
**Theorem 1**.: _(Gronwall-Bellman inequality) Let \(U\subset\mathbb{R}^{d}\) be an open set. Let \(f:U\times[0,T]\rightarrow\mathbb{R}^{d}\) be a continuous function and let \(z_{1}\) and \(z_{2}\) satisfy the Initial Value Problems (IVPs)_

\[\frac{dz_{1}(t)}{dt}=f(z_{1}(t),t),\quad z_{1}(t_{0})=x_{1} \tag{3}\]

\[\frac{dz_{2}(t)}{dt}=f(z_{2}(t),t),\quad z_{2}(t_{0})=x_{2} \tag{4}\]
_Let \(C\) be a constant such that \(C\geq 0\) such that \(\forall t\in[0,T]\),_
\[||f(z_{2}(t),t)-f(z_{1}(t),t)||\leq C||z_{2}(t)-z_{1}(t)|| \tag{5}\]
_Consequently,_
\[||z_{2}(t)-z_{1}(t)||\leq||x_{2}-x_{1}||.e^{Ct} \tag{6}\]
_Where \(C\) is the Lipschitz constant of the dynamics._
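As a small numerical illustration of Theorem 1 (not from the original statement), consider linear dynamics \(f(z,t)=Wz\), whose global Lipschitz constant \(C\) is the spectral norm of \(W\); two nearby trajectories integrated with small Euler steps then stay within the \(e^{Ct}\) envelope of their initial gap. The matrix size, seed and step count below are arbitrary illustrative choices.

```python
import numpy as np

# Illustration of the Gronwall-Bellman bound for linear dynamics f(z, t) = W z,
# whose (global) Lipschitz constant C is the spectral norm of W.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.3
C = np.linalg.norm(W, 2)                       # Lipschitz constant of z -> W z

def integrate(z0, T=1.0, steps=10_000):
    """Euler integration of dz/dt = W z on [0, T]."""
    z, h = z0.copy(), T / steps
    for _ in range(steps):
        z = z + h * (W @ z)
    return z

x1 = rng.standard_normal(8)
x2 = x1 + 1e-3 * rng.standard_normal(8)        # perturbed initial state
T = 1.0
gap_T = np.linalg.norm(integrate(x2, T) - integrate(x1, T))
bound = np.linalg.norm(x2 - x1) * np.exp(C * T)
print(f"||z2(T) - z1(T)|| = {gap_T:.3e} <= ||x2 - x1|| * e^(C T) = {bound:.3e}")
```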
The key to improving the robustness of the NODEs is to control the difference between the neighboring curves. From Theorem 1 we can bring two neighboring curves close together by bounding the Lipschitz constant of the dynamics of ODE. Directly bounding the Lipschitz constant is hard. Hence we resort to Cayley transform to impose orthogonality constraints on the dynamics of ODE. For our method to work, we make the following assumption -
**Assumption 1**.: _(Representation closeness) Let \(z_{pure}\) represent the input representation of pure sample and \(z_{adv}\) be the input representation of an adversarial sample. Then we assume that \(z_{adv}\) is in the \(\epsilon\) neighborhood of \(z_{pure}\). i.e,_
\[||z_{pure}-z_{adv}||\leq\epsilon \tag{7}\]
Figure 1: Architecture difference between Resnet, Vanilla ODE and Ortho-ODE
### Connection between Gronwall's Inequality and Contraction theory
Recently, contraction theory has been employed in neural networks for multiple purposes. For example, [14] explored the use of contractivity to improve the well-posedness and robustness of implicit neural networks, and [4] used it for the analysis of Hopfield NNs. In [7] the authors proposed a Hamiltonian NODE which is contractive by design to improve the robustness. Singh et al. [32] and Revay et al. [31] utilize contraction theory to learn stabilizable nonlinear NN models from available data. Contractivity is a property of a dynamical system which ensures that trajectories of the system converge to each other asymptotically. Formally, contractivity of a dynamical system is defined below:
**Definition 2**.: _(Contractivity) The dynamics of an ODE is contractive with contraction rate \(\zeta>0\) if_
\[||\hat{z}_{t}-z_{t}||\leq e^{-\zeta t}||\hat{z}_{0}-z_{0}||,\ \ \forall t\in[0,T] \tag{8}\]
_where \(z_{0},\hat{z}_{0}\in\mathbb{R}^{n}\) are the initial conditions of IVP and \(\hat{z}_{t}\) and \(z_{t}\) are its solutions._
Therefore we can say that if a NODE is contractive, the Lipschitz constant between the input and the output is smaller than 1. Making NODEs globally contractive can significantly hamper their expressive power. Our goal is to apply local contractivity efficiently to harness the natural robustness provided by locally contractive NODEs.
### Ortho-ODE
A recent work by Trockman et al. [34] parameterized orthogonal convolutions using the Cayley transform in a scalable and efficient way. The key idea behind the method in [34] is that multi-channel convolution in the Fourier domain reduces to a batch of matrix-vector products, and making each of those matrices orthogonal makes the convolution orthogonal. Since orthogonalization directly controls the Lipschitz constant, we propose to model the layers of the neural network describing the dynamics using orthogonal convolutional layers instead of normal convolutional layers. A comparison between ResNets, Vanilla ODE, and Ortho-ODE is shown in Figure 1. We briefly go over the Cayley transform and how it can be used to impose orthogonality constraints on the convolution operation.
Consider a convolutional layer with stride 1, with \(c_{in}\) input channels and \(c_{out}\) output channels. Let the weight tensor of the layer be of shape \(c_{out}\times c_{in}\times n\times n\), where \(n\times n\) is the size of the convolutional kernel. Convolutions are easier to analyze when they are circular: a convolution is circular if, whenever the kernel goes out of the bounds of the input, it wraps around to the other side of the input. We define \(Conv(X)\) as the circular convolutional layer with weight tensor \(W\in\mathbb{R}^{c_{out}\times c_{in}\times n\times n}\) applied to an input tensor \(X\in\mathbb{R}^{b\times c_{in}\times h\times w}\); the resulting output tensor is \(Y=Conv(X)\in\mathbb{R}^{b\times c_{out}\times h\times w}\). One can view the convolution operation as a doubly block-circulant matrix \(C\in\mathbb{R}^{c_{out}n^{2}\times c_{in}n^{2}}\). Similarly, we denote by \(Conv^{T}(X)\) the transpose of the same convolution.
The Cayley transform is a bijection between skew-symmetric matrices \(A\) and orthogonal matrices \(Q\) without \(-1\) eigenvalues:
\[Q=(I-A)(I+A)^{-1} \tag{9}\]
A matrix is said to be skew-symmetric if \(A=-A^{T}\). The Cayley transform of such a skew-symmetric matrix is always orthogonal. Since convolutions are linear transformations, we can apply the Cayley transform to convolutions. As described in [34], while it is possible to construct the matrix \(C\) corresponding to a convolution \(Conv\) and apply the Cayley transform to it, this is highly inefficient in practice: convolutions can be easily skew-symmetrized by computing \(Conv(X)-Conv^{T}(X)\), but inverting them is challenging. Instead, we manipulate convolutions in the Fourier domain, taking advantage of the convolution theorem and the efficiency of the fast Fourier transform.
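To make this concrete, here is a minimal NumPy sketch (our own illustration on a small dense matrix, not the FFT-based convolutional implementation of [34]) of the Cayley transform applied to the skew-symmetrization of a weight matrix, together with a numerical orthogonality check:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weight matrix standing in for a (flattened) convolution kernel.
W = rng.standard_normal((8, 8))

# Skew-symmetrize: A = W - W^T satisfies A = -A^T.
A = W - W.T

# Cayley transform Q = (I - A)(I + A)^{-1}; I + A is invertible because
# a real skew-symmetric matrix has purely imaginary eigenvalues.
I = np.eye(8)
Q = (I - A) @ np.linalg.inv(I + A)

print(np.allclose(Q.T @ Q, I))   # True: Q is orthogonal
x = rng.standard_normal(8)
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # True: an isometry, hence 1-Lipschitz
```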
We can thus construct multiple CNN layers, each of which is orthogonal, and we use such layers to build the neural network that represents the dynamics of the NODE. As verified in [34], the Cayley transform makes it possible to maintain the orthogonality constraint consistently.
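The following toy sketch (our simplification, with a Cayley-orthogonalized dense layer in place of orthogonal convolutions and a fixed-step Euler solver in place of an adaptive ODE solver; the names `cayley`, `dynamics`, and `integrate` are ours) illustrates how a 1-Lipschitz dynamics keeps the trajectory of an \(\epsilon\)-perturbed input close to that of the clean input, in line with the bound of Theorem 1:

```python
import numpy as np

def cayley(W):
    """Cayley transform of the skew-symmetrization W - W^T; the result is orthogonal."""
    A = W - W.T
    I = np.eye(W.shape[0])
    return (I - A) @ np.linalg.inv(I + A)

def dynamics(z, Q):
    """Toy NODE dynamics f(z) = tanh(Q z); tanh and the orthogonal Q are both 1-Lipschitz."""
    return np.tanh(Q @ z)

def integrate(z0, Q, T=1.0, steps=100):
    """Fixed-step Euler integration of dz/dt = f(z) from t = 0 to t = T."""
    z, dt = z0.copy(), T / steps
    for _ in range(steps):
        z = z + dt * dynamics(z, Q)
    return z

rng = np.random.default_rng(0)
Q = cayley(rng.standard_normal((16, 16)))
z_pure = rng.standard_normal(16)
z_adv = z_pure + 1e-3 * rng.standard_normal(16)        # epsilon-close input (Assumption 1)

out_gap = np.linalg.norm(integrate(z_adv, Q) - integrate(z_pure, Q))
bound = np.linalg.norm(z_adv - z_pure) * np.exp(1.0)   # ||x2 - x1|| e^{Ct} with C <= 1, t = 1
print(out_gap <= bound)                                 # True
```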
## 4 Experiments
In this section, we first describe the datasets used to benchmark our algorithm, followed by the existing ODE-based and non-ODE-based baselines used for comparison. We then describe the training settings, including the model description, metrics, and numerical results.
**Datasets** General datasets used for evaluating the robustness of NODEs include MNIST, CIFAR-10, and CIFAR-100. MNIST is the easiest of the three, while CIFAR-10 and CIFAR-100 are more difficult classification datasets. MNIST has 60,000 training and 10,000 test samples, while CIFAR-10 and CIFAR-100 each have 50,000 training and 10,000 test samples. MNIST and CIFAR-10 have 10 classes, whereas CIFAR-100 has 100 classes.
**Benchmarks** Our method with orthogonal layers is dubbed _Ortho-ODE_. We compare Ortho-ODE with a parameter-wise equivalent ResNet-10, with Vanilla ODE, which uses normal convolutional layers, and with TisODE [36].
**Training Details** All the methods are trained for 100 epochs with a learning rate of 0.01 and no weight decay. We ensure that the number of parameters is roughly the same across all architectures. We augment each sample with crop and rotation augmentations. We evaluate all the models in the absence and presence of adversarial samples, using PGD and FGSM attacks to assess robustness. Additionally, we use Gaussian noise with different standard deviations to assess robustness against non-targeted perturbations.
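For reference, a minimal PyTorch-style FGSM step of the kind used in our evaluation might look as follows (a generic sketch rather than our exact attack configuration; the clamp to \([0,1]\) assumes inputs normalized to that range, and `fgsm_attack` is our illustrative name). PGD iterates a smaller version of this step several times, projecting back onto the \(\epsilon\)-ball after each update.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """One-step FGSM: move x by eps in the sign direction of the input gradient of the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```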
**Metrics** We report classification accuracy for each method on each dataset.
### Numerical Results
We evaluate our method against several benchmarks, which we briefly describe. All methods are evaluated in two configurations, one with adversarial attacks and one without; additionally, we evaluate performance in the presence of Gaussian noise. ResNet10 is a standard CNN with residual connections and a total of 10 layers, making its number of parameters almost equivalent to that of the other methods. As expected, ResNet10 does not perform well under adversarial attacks: on both FGSM- and PGD-based attacks it struggles to reach any good accuracy. All the results are tabulated in Table 1. We find that ResNet10 outperforms all the other methods in the no-attack configuration.
We also compare against the Neural ODE (Vanilla ODE), in which the convolutional layers of the network parameterizing the dynamics are normal convolutional layers. We ensure that the channel widths and the total number of parameters remain almost the same as in our method. The performance of Vanilla ODE is far better than ResNet10 under all attacks; however, in the no-attack configuration ResNet10 dominates.
Further, TisODE uses the time-invariance property to ensure that for two slightly different initial conditions the final outputs of the NODE remain almost the same. In our evaluation, TisODE performs best in most cases under the different adversarial attacks. Compared to our method, it underperforms in some cases, but there is no clear evidence suggesting that our method completely outperforms TisODE.
\begin{table}
\begin{tabular}{l l c c|c c|c c} \hline \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Methods} & \multirow{2}{*}{No Attacks} & Gaussian & \multicolumn{2}{c|}{FGSM} & \multicolumn{2}{c}{PGD} \\ \cline{4-8} & & & \(\sigma=100\) & FGSM - 5/255 & FGSM - 8/255 & PGD - 0.2 & PGD - 0.3 \\ \hline MNIST & ResNet10 & 99.15 & 98.75 & 28.20 & 16.07 & 32.67 & 0.0 \\ & Vanilla ODE & 97.84 & 97.63 & 49.10 & 34.78 & 64.89 & 13.02 \\ & TisODE & 99.13 & 98.9 & 50.23 & 36.98 & 67.47 & 13.7 \\ & Ortho-ODE (Our Method) & 99.14 & **99.10** & 49.32 & **37.52** & **67.86** & 11.56 \\ \hline CIFAR-10 & ResNet10 & 91.12 & 90.56 & 38.10 & 17.05 & 30.45 & 1.2 \\ & Vanilla ODE & 82.7 & 81.30 & 42.89 & 38.12 & 49.18 & 12.56 \\ & TisODE & 85.30 & 84.12 & 44.23 & 37.46 & 50.34 & 13.1 \\ & Ortho-ODE (Our Method) & **85.69** & 85.10 & 43.12 & 36.89 & **50.78** & 11.4 \\ \hline CIFAR-100 & ResNet10 & 68.57 & 66.89 & 18.57 & 14.54 & 17.13 & 0.0 \\ & Vanilla ODE & 52.91 & 56.21 & 47.67 & 37.67 & 21.89 & 11.12 \\ & TisODE & 53.62 & 55.71 & 48.12 & 36.41 & 23.72 & 12.34 \\ & Ortho-ODE (Our Method) & **53.64** & 52.45 & **49.45** & 35.32 & 21.81 & **12.56** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Robustness Results of our method compared against existing approaches on MNIST, CIFAR-10, and CIFAR-100
## 5 Limitations & Conclusion
Apart from its still-limited performance under adversarial attacks, our method inherits the disadvantages of the Cayley transform. Among all the methods shown in the results table, ours is the slowest, because the Cayley transform converts the signal to the Fourier domain using the fast Fourier transform, a step that normal convolutions do not require.
Apart from slow training and inference, the accuracy of our method needs to improve to be competitive with TisODE. Due to the orthogonality constraints on the convolutional layers, we sacrifice the expressivity of the model to some extent. This trade-off between expressivity and robustness is a common theme shared across multiple algorithms.
In this work, we evaluated the use of Gronwall's inequality to improve the robustness of NODEs. We constrained the Lipschitz constant of the neural network representing the dynamics using orthogonality constraints, imposed via the Cayley transform. We evaluated our method on multiple datasets and compared it against various benchmarks. Though it does not outperform the benchmarks in all cases, our method remains competitive and sometimes outperforms TisODE.
## 6 Future Work
In the future, we would like to further explore the connection between bounding the Lipschitz constant of a NODE and adversarial robustness. It is also important to find a faster way to impose the orthogonality requirement in order to improve the training and inference speed of our method.
|
2304.06163 | Cosymplectic groupoids | A cosymplectic groupoid is a Lie groupoid with a multiplicative cosymplectic
structure. We provide several structural results for cosymplectic groupoids and
we discuss the relationship between cosymplectic groupoids, Poisson groupoids
of corank 1, and oversymplectic groupoids of corank 1. | Rui Loja Fernandes, David Iglesias Ponte | 2023-04-12T21:14:44Z | http://arxiv.org/abs/2304.06163v2 | # Cosymplectic groupoids
###### Abstract.
A cosymplectic groupoid is a Lie groupoid with a multiplicative cosymplectic structure. We provide several structural results for cosymplectic groupoids and we discuss the relationship between cosymplectic groupoids, Poisson groupoids of corank 1, and oversymplectic groupoids of corank 1.
RLF was partially supported by NSF grant DMS-2003223
## 1. Introduction
A _cosymplectic groupoid_ is a Lie groupoid \(\mathcal{G}\rightrightarrows M\) equipped with a multiplicative cosymplectic structure \((\omega,\alpha)\). This means that \(\omega\in\Omega^{2}(\mathcal{G})\), \(\alpha\in\Omega^{1}(\mathcal{G})\) are closed multiplicative forms and \(\alpha\wedge\omega^{m}\) is nowhere vanishing, so it defines a volume form in \(\mathcal{G}\). As we will see later, one must have \(\dim\mathcal{G}=2m+1\) where \(m=\dim M\) (we assume \(M\) connected). The notion of cosymplectic groupoid was first studied in [5].
Cosymplectic groupoids lie at the intersection of two well-known, interesting, classes of Lie groupoids:
* _Poisson groupoids_, i.e., Lie groupoids with a multiplicative Poisson structure (see, e.g., [14, 17]). For a cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) the associated multiplicative Poisson structure \(\pi_{\mathcal{G}}\in\mathfrak{X}^{2}(\mathcal{G})\) has symplectic foliation \(\ker\alpha\) and leafwise symplectic form the restriction of \(\omega\).
* _Oversymplectic groupoids_, i.e., Lie groupoids with a closed multiplicative 2-form \(\omega\) satisfying \(\operatorname{rank}\omega_{1_{x}}=2\dim M\), for all \(x\in M\) (see, e.g., [2]). We will see that the 2-form of a cosymplectic groupoid satisfies this condition.
For both of these classes of Lie groupoids the base \(M\) inherits a Poisson structure. For a cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) the Poisson structures obtained from \(\pi_{\mathcal{G}}\) and from \(\omega\) coincide, and will be denoted by \(\pi_{M}\in\mathfrak{X}^{2}(M)\). Our aim in this paper is to give structural results for cosymplectic groupoids and to establish precise relationships with these two classes of Lie groupoids.
To describe our main results we observe that, as a consequence of the multiplicativity condition, the standard data associated with a cosymplectic structure satisfies:
1. \(\ker\alpha\subset T\mathcal{G}\) defines an integrable _multiplicative_ distribution in \(\mathcal{G}\);
2. \(\omega\) restricts to a symplectic form on the leaves of \(\ker\alpha\), yielding a _multiplicative_ Poisson structure \(\pi_{\mathcal{G}}\in\mathfrak{X}^{2}(\mathcal{G})\);
3. The Reeb vector field \(E\in\mathfrak{X}(\mathcal{G})\), characterized by \(i_{E}\omega=0\) and \(i_{E}\alpha=1\), is _bi-invariant_ (i.e., it is both left and right invariant). In particular, it is a complete vector field.
These basic facts, to be proved below, give a rich structure to a cosymplectic groupoid. For example, the collection of all orbits of the Reeb vector field intersecting the identity section is a bundle of Lie groups \(\mathcal{K}\subset\mathcal{G}\) and one has:
**Theorem 1.1**.: _For any cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) there is a short exact sequence of topological groupoids over the same base:_
(1)
_where \(\Sigma\) is the orbit space of the Reeb vector field. When this space is smooth, this is a short exact sequence of Lie groupoids and \((\Sigma,\Omega)\) is a symplectic groupoid for a unique symplectic structure making the projection \(\mathcal{G}\to\Sigma\) a Poisson map._
For an arbitrary Poisson groupoid or oversymplectic groupoid, the base Poisson structure \((M,\pi_{M})\) may fail to be integrable. However, for a cosymplectic groupoid it is always integrable. Indeed, the identity section of a cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) is contained in a single symplectic leaf of the Poisson structure \(\pi_{\mathcal{G}}\) and we have:
**Proposition 1.2**.: _Let \((\mathcal{G},\omega,\alpha)\) be a cosymplectic groupoid. The symplectic leaf of \(\pi_{\mathcal{G}}\) containing \(M\) is a Lie subgroupoid \(\Sigma^{0}\subset\mathcal{G}\), and it yields a symplectic groupoid \((\Sigma^{0},\omega|_{\Sigma^{0}})\rightrightarrows M\) integrating \((M,\pi_{M})\)._
For a proper cosymplectic groupoid where \(\Sigma^{0}\) is an embedded submanifold, one obtains a picture somewhat dual to Theorem 1.1. Namely, the flow of the Reeb vector field for some fixed time \(t_{0}\) gives a symplectomorphism of \(\Sigma^{0}\) and one finds that the cosymplectic groupoid is a symplectic mapping torus.
**Theorem 1.3**.: _Let \((\mathcal{G},\omega,\alpha)\) be a proper, source connected, cosymplectic groupoid and assume that the symplectic leaf \(\Sigma^{0}\subset\mathcal{G}\) containing the identity is an embedded submanifold. Then there is a symplectomorphism \(\varphi:\Sigma^{0}\to\Sigma^{0}\) such that \(\mathcal{G}\) is isomorphic to the symplectic mapping torus \(\Sigma^{0}\times_{\varphi}\mathbb{S}^{1}\). Moreover, the resulting submersion_
\[q:\mathcal{G}\to\mathbb{S}^{1},\]
_is a fibration of Lie groupoids._
The short exact sequence (1) may fail to be smooth and, if smooth, it may fail to split. However, at the infinitesimal level it always splits. In fact, the Lie algebroid \(A\to M\) of a cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) carries a closed IM 2-form \(\mu:A\to T^{*}M\) and a closed IM 1-form \(\nu:A\to M\times\mathbb{R}\), corresponding to \(\omega\) and \(\alpha\), respectively. We then obtain the following:
**Theorem 1.4**.: _If \((\mathcal{G},\omega,\alpha)\) is a cosymplectic groupoid, its Lie algebroid \((A,\mu,\nu)\) is canonically isomorphic to the trivial central extension of the cotangent algebroid associated with the base Poisson manifold \((M,\pi_{M})\):_
\[(A,\mu,\nu)\simeq(T^{*}M\oplus\mathbb{R},\mathrm{pr}_{T^{*}M},\mathrm{pr}_{M \times\mathbb{R}}).\]
Notice that given a source connected cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\), its source 1-connected cover is a cosymplectic groupoid. Applying the previous result, the latter are easy to describe:
**Corollary 1.5**.: _If \((\mathcal{G},\omega,\alpha)\) is a source 1-connected cosymplectic groupoid then there is a canonical isomorphism_
\[(\mathcal{G},\omega,\alpha)\cong(\Sigma(M)\times\mathbb{R},\operatorname{pr }_{\Sigma}^{*}\Omega,\operatorname{pr}_{\mathbb{R}}^{*}\operatorname{d}t)\]
_where \((\Sigma(M),\Omega)\) is the source 1-connected symplectic integration of \((M,\pi_{M})\)._
In the last part of this note we discuss how far Poisson groupoids and oversymplectic groupoids are from being cosymplectic groupoids.
We say that a Poisson groupoid \((\mathcal{G},\pi_{\mathcal{G}})\) is of corank 1 if its Poisson structure has constant rank equal to \(\dim\mathcal{G}-1\), i.e., if its symplectic foliation is regular of codimension 1. We have the following simple criteria:
**Proposition 1.6**.: _Let \((\mathcal{G},\pi_{\mathcal{G}})\) be a Poisson groupoid of corank 1. Then \(\pi_{\mathcal{G}}\) is associated with a cosymplectic structure if and only if there exists a Poisson vector field \(E\in\mathfrak{X}(\mathcal{G})\) transverse to the symplectic foliation of \(\pi_{\mathcal{G}}\) which is bi-invariant._
On the other hand, when \((\mathcal{G},\pi_{\mathcal{G}})\) is a _proper_ Poisson groupoid we will see that if its leafwise symplectic form admits a multiplicative extension (closed or not) then, up to a cover, it is homotopic to a cosymplectic groupoid through a homotopy that does not change the Poisson structure on the base:
**Theorem 1.7**.: _Let \((\mathcal{G},\pi_{\mathcal{G}})\rightrightarrows(M,\pi_{M})\) be an orientable proper Poisson groupoid of corank 1 and assume that there exists a multiplicative 2-form extending its leafwise symplectic form. If \((\widetilde{\mathcal{G}},\widetilde{\pi_{\mathcal{G}}})\) is its universal covering groupoid, then there is a path of Poisson structures \(\widetilde{\pi}_{\mathcal{G}}^{t}\in\mathfrak{X}^{2}(\widetilde{\mathcal{G}})\), starting at \(\widetilde{\pi}_{\mathcal{G}}^{0}=\widetilde{\pi_{\mathcal{G}}}\), with the following properties:_
1. _each_ \(\widetilde{\pi}_{\mathcal{G}}^{t}\) _is multiplicative of corank 1;_
2. _the Poisson structure on_ \(M\) _induced by_ \(\widetilde{\pi}_{\mathcal{G}}^{t}\) _is_ \(\pi_{M}\)_;_
3. \(\widetilde{\pi}_{\mathcal{G}}^{1}\) _is associated with a multiplicative cosymplectic structure._
Let us turn now to oversymplectic groupoids. Given such a groupoid \((\mathcal{G},\omega)\), if the foliation given by \(\ker\omega\) is simple then the leaf space
\[\Sigma:=\mathcal{G}/\ker\omega\]
is automatically a symplectic groupoid (this is the origin of the term "oversymplectic"; see [2]). If \((\mathcal{G},\omega)\) is proper and \(\ker\omega\) is an orientable line bundle, the quotient map \(\Phi:\mathcal{G}\to\Sigma\) yields a short exact sequence of Lie groupoids
It follows from recent results in [6] that associated to such a sequence there is a well-defined _multiplicative Chern class_, living in the multiplicative de Rham cohomology of \(\Sigma\rightrightarrows M\),
\[c(\mathcal{G})\in H^{2}_{M}(\Sigma).\]
We will show that:
**Theorem 1.8**.: _Let \((\mathcal{G},\omega)\) be a corank 1, orientable, proper oversymplectic groupoid. If \(\ker\omega\) is a simple foliation, then there exists \(\alpha\in\Omega^{1}(\mathcal{G})\) such that \((\mathcal{G},\omega,\alpha)\) is a cosymplectic groupoid if and only if the multiplicative Chern class vanishes._
This paper is organized as follows. In Section 2 we recall some basics about cosymplectic structures, introduce cosymplectic groupoids, establish their basic properties, and prove Theorems 1.1 and 1.3. In Section 3 we construct the infinitesimal data associated with a cosymplectic groupoid and we show that its Lie algebroid fits into a split short exact sequence, proving Theorem 1.4 and Corollary 1.5. In Section 4 we discuss relationships between Poisson groupoids, oversymplectic groupoids and cosymplectic groupoids, deducing in particular Proposition 1.6 and Theorems 1.7 and 1.8. We mostly follow the conventions and notation of the monograph [3], to which we refer for background on Poisson structures and symplectic groupoids.
## 2. Cosymplectic groupoids
### Background on cosymplectic structures
A _cosymplectic structure_ on a manifold \(Q\) of dimension \(2m+1\) is a pair \((\omega,\alpha)\), where \(\omega\) is a closed 2-form, \(\alpha\) is a closed 1-form and \(\omega^{m}\wedge\alpha\) is a volume form. These structures were first introduced by Liberman [11]. We collect here some basic facts about cosymplectic structures.
Associated with a cosymplectic \((\omega,\alpha)\) on \(Q\) there is a non-vanishing vector field \(E\in\mathfrak{X}(Q)\), called the _Reeb vector field_, characterized by
\[i_{E}\omega=0,\quad\alpha(E)=1. \tag{2}\]
On the other hand, \(\ker\alpha\subset TQ\) is an integrable distribution and the restriction of \(\omega\) to its leaves is symplectic. The resulting symplectic foliation determines a regular _Poisson structure_\(\pi_{Q}\in\mathfrak{X}^{2}(Q)\) of corank 1. Notice that, by construction, the closed 2-form \(\omega\) extends the leafwise symplectic form of \(\pi_{Q}\)
\[\omega(\pi_{Q}^{\sharp}(\beta_{1}),\pi_{Q}^{\sharp}(\beta_{2}))=\langle\beta _{1},\pi_{Q}^{\sharp}(\beta_{2})\rangle,\quad(\beta_{1},\beta_{2}\in T^{*}Q).\]
Moreover, the Reeb vector field is a Poisson vector field everywhere transverse to the symplectic foliation.
Conversely, assume that \((Q,\pi_{Q})\) is a regular Poisson structure of corank 1. If \(E\) is a vector field transverse to the symplectic foliation, then one obtains
1. a 2-form \(\omega\) extending the leafwise symplectic form such that \(\ker\omega=\langle E\rangle\) and
2. a 1-form \(\alpha\) such that \(\alpha(E)=1\) and \(\ker\alpha=\operatorname{Im}\pi_{Q}^{\sharp}\).
It is not hard to check that \(\omega\) and \(\alpha\) are closed iff \(E\) is a Poisson vector field, so we have ([7, Proposition 18]):
**Proposition 2.1**.: _A regular Poisson structure \(\pi_{Q}\in\mathfrak{X}^{2}(Q)\) of corank 1 is defined by a cosymplectic structure if and only if there is a Poisson vector field transverse to the symplectic foliation._
Two cosymplectic structures \((\omega,\alpha)\) and \((\widetilde{\omega},\widetilde{\alpha})\) define the same Poisson structure \(\pi_{Q}\) if and only if \(\widetilde{\alpha}=f\alpha\) for a nowhere vanishing Casimir function \(f\in C^{\infty}(Q)\) and \(\widetilde{\omega}-\omega\) is a closed 2-form vanishing on \(\ker\alpha=\ker\widetilde{\alpha}\). In this case, the corresponding Reeb vector fields are related by \(\widetilde{E}=\frac{1}{f}E\).
The following examples give some basic constructions of cosymplectic manifolds related with symplectic manifolds:
**Example 2.2**.: If \((S,\omega_{S})\) is a symplectic manifold then \(Q=S\times\mathbb{R}\) admits the cosymplectic structure \(\left(\mathrm{pr}_{S}^{*}\omega_{S},\mathrm{pr}_{\mathbb{R}}^{*}\,\mathrm{d}t\right)\), where \(t\) denotes the coordinate on the second factor. In this case, the vector field \(E\) is just \(\frac{\partial}{\partial t}\), while the Poisson structure is \(\pi_{Q}=\omega^{-1}\oplus 0\).
Obviously, one can replace \(\mathbb{R}\) by \(\mathbb{S}^{1}\) and \(\mathrm{d}t\) by \(\mathrm{d}\theta\), obtaining a cosymplectic structure on \(S\times\mathbb{S}^{1}\). More generally, one can consider a principal \(\mathbb{S}^{1}\)-bundle \(p:Q\to S\) over a symplectic manifold \((S,\omega_{S})\) that admits a flat connection \(1\)-form \(\alpha\in\Omega^{1}(Q)\). Then \((p^{*}\omega_{S},\alpha)\) is a cosymplectic structure whose underlying Poisson structure has symplectic foliation the horizontal foliation of \(\alpha\). The Reeb vector field is the infinitesimal generator of the \(\mathbb{S}^{1}\)-action.
**Example 2.3**.: Let \((S,\omega_{S})\) be a symplectic manifold and \(\varphi\colon S\to S\) a symplectomorphism. Recall that the corresponding _symplectic mapping torus_ is the fiber bundle
\[q\colon S_{\varphi}\to\mathbb{S}^{1},\]
where \(S_{\varphi}=(S\times\mathbb{R})/\mathbb{Z}\) is the orbit space of the free and proper action
\[\mathbb{Z}\curvearrowright S\times\mathbb{R},\quad n\cdot(x,t)=(\varphi^{n}( x),t+n).\]
The manifold \(S_{\varphi}\) is equipped with the cosymplectic structure \((\omega,\alpha)\), where \(\omega\) is the \(2\)-form obtained from the basic form \(\mathrm{pr}_{S}^{*}\omega_{S}\in\Omega^{2}(S\times\mathbb{R})\) and \(\alpha=q^{*}(\mathrm{d}\theta)\), with \(\theta\) the angle coordinate on \(S^{1}\). The Reeb vector field \(E\in\mathfrak{X}(S_{\varphi})\) is obtained by projecting the vector field \(\frac{\partial}{\partial t}\).
_Remark 2.4_.: Tischler's theorem [16] shows that given a nowhere vanishing closed \(1\)-form \(\alpha\) on a compact manifold \(Q\), there exists a fibration \(q:Q\to\mathbb{S}^{1}\) with the property that one can choose \(c>0\) such that \(c\alpha\) and \(\widetilde{\alpha}:=q^{*}\mathrm{d}\theta\) are \(C^{\infty}\)-close. Therefore, if \((Q,\omega,\alpha)\) is a compact cosymplectic manifold, one finds:
1. The Poisson structure defined by \((\omega,\alpha)\) is homotopic to the Poisson structure defined by \((\omega,q^{*}\mathrm{d}\theta)\);
2. \(q:Q\to\mathbb{S}^{1}\) can be realized as a symplectic mapping torus with associated cosymplectic structure \((\widetilde{\omega},q^{*}\mathrm{d}\theta)\) for a modified closed \(2\)-form (see, e.g., [10]).
In this sense, a compact cosymplectic structure \((Q,\omega,\alpha)\) is, essentially, a symplectic mapping torus.
**Example 2.5**.: Let \((S,\Omega)\) be a symplectic manifold and \(\iota:Q\hookrightarrow S\) a submanifold. If \(X\) is a symplectic vector field everywhere transverse to \(Q\), then \((\iota^{*}\Omega,\iota^{*}(i_{X}\Omega))\) defines a cosymplectic structure on \(Q\). If \(f\) is a function locally defining \(Q\) and such that \(X(f)=1\), then the Reeb vector field is given by \(E=X_{f}|_{Q}\), where \(X_{f}\) is the
hamiltonian vector field of \(f\). The associated Poisson structure \(\pi_{Q}\in\mathfrak{X}^{2}(Q)\) is
\[\pi_{Q}(\beta_{1},\beta_{2}):=\omega^{-1}(\widetilde{\beta}_{1},\widetilde{ \beta}_{2}),\quad(\beta_{1},\beta_{2}\in T^{*}Q),\]
where \(\widetilde{\beta}\in T^{*}S\) denotes the unique extension of \(\beta\in T^{*}Q\) satisfying \(\beta(X)=1\).
Every cosymplectic manifold \((Q,\omega,\alpha)\) can be realized as a submanifold of a symplectic manifold \((S,\Omega)\): one lets \(S=Q\times\mathbb{R}\) with symplectic form \(\Omega=\omega+\alpha\wedge\mathrm{d}t\).
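Indeed, \(\Omega\) is closed because \(\omega\) and \(\alpha\) are closed, and \(\omega^{m+1}\) vanishes identically, being a form of degree \(2m+2\) on the \((2m+1)\)-dimensional manifold \(Q\); since \((\alpha\wedge\mathrm{d}t)\wedge(\alpha\wedge\mathrm{d}t)=0\), one finds
\[\Omega^{m+1}=(\omega+\alpha\wedge\mathrm{d}t)^{m+1}=(m+1)\,\omega^{m}\wedge\alpha\wedge\mathrm{d}t,\]
which is nowhere vanishing because \(\omega^{m}\wedge\alpha\) is a volume form on \(Q\). Hence \(\Omega\) is indeed a symplectic form on \(S=Q\times\mathbb{R}\).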
### Cosymplectic groupoids
Let \(\mathcal{G}\) be a Lie groupoid over \(M\). We denote by \(\mathbf{s}\) and \(\mathbf{t}\) the source and target maps, by \(\mathbf{m}:\mathcal{G}^{(2)}\to\mathcal{G}\) the multiplication (defined on the space \(\mathcal{G}^{(2)}\) of pairs of composable arrows), by \(\mathbf{i}:\mathcal{G}\to\mathcal{G}\) the inverse map, and by \(\varepsilon:M\to\mathcal{G}\) the identity section. Our convention for the groupoid multiplication is such that, given two arrows \(x,y\in\mathcal{G}\), the product \(x\cdot y:=\mathbf{m}(x,y)\) is defined provided \(\mathbf{s}(x)=\mathbf{t}(y)\). Also, if \(m\in M\) we write \(1_{m}:=\varepsilon(m)\) for the unit arrow over \(m\), and if \(x\in\mathcal{G}\) we write \(x^{-1}:=\mathbf{i}(x)\) for the inverse arrow. We denote the groupoid by \(\mathcal{G}\rightrightarrows M\).
Recall that a form \(\omega\in\Omega^{k}(\mathcal{G})\) is said to be _multiplicative_ if
\[\mathbf{m}^{*}\omega=\pi_{1}^{*}\omega+\pi_{2}^{*}\omega, \tag{3}\]
where \(\pi_{i}:\mathcal{G}^{(2)}\to\mathcal{G}\) are the projections on each factor.
**Definition 2.6**.: A _cosymplectic groupoid_ is a triple \((\mathcal{G},\omega,\alpha)\) where \(\mathcal{G}\) is a Lie groupoid and \((\omega,\alpha)\) is a cosymplectic structure on \(\mathcal{G}\) with \(\omega\) and \(\alpha\) multiplicative forms.
The following proposition gives some basic properties of a cosymplectic groupoid. It may be useful to recall that a multiplicative distribution in a groupoid \(\mathcal{G}\rightrightarrows M\) is a distribution \(D\subset T\mathcal{G}\) with \(\mathrm{d}\mathbf{s}(D)=\mathrm{d}\mathbf{t}(D)=D_{0}\) and such that \(D\rightrightarrows D_{0}\) is a subgroupoid of the tangent groupoid \(T\mathcal{G}\rightrightarrows TM\). We refer to [8] for basic facts about multiplicative distributions.
**Proposition 2.7**.: _Let \((\mathcal{G},\omega,\alpha)\) be a cosymplectic groupoid over \(M\). Then \(\dim\mathcal{G}=2\dim M+1\) and one has:_
1. \(\ker\alpha\subset T\mathcal{G}\) _is a multiplicative distribution;_
2. _the induced Poisson structure_ \(\pi_{\mathcal{G}}\in\mathfrak{X}^{2}(\mathcal{G})\) _is multiplicative;_
3. _the Reeb vector field_ \(E\in\mathfrak{X}(\mathcal{G})\) _is bi-invariant;_
Proof.: It follows easily from the multiplicativity condition that for a cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) one has
\[\mathbf{i}^{*}\omega=-\omega,\quad\mathbf{i}^{*}\alpha=-\alpha,\quad\mathbf{i }_{*}E=-E,\]
where \(E\) denotes the Reeb vector field. In particular, the pull backs of \(\omega\) and \(\alpha\) along the identity section vanish and the Reeb vector field is transverse to it. It follows that the subspace generated by the tangent space to the identity section and the Reeb vector field is maximally isotropic for \(\omega\). Since \(\omega\) has corank \(1\), we conclude that for a cosymplectic groupoid \(\mathcal{G}\rightrightarrows M\) one must have \(\dim\mathcal{G}=2\dim M+1\).
The kernel of a multiplicative form is a multiplicative distribution (see [4]). Hence (i) follows. Since \(\mathrm{Im}(\pi_{\mathcal{G}}^{\sharp})=\ker\alpha\) and the symplectic form on the leaves of \(\pi_{\mathcal{G}}\) is the restriction of the multiplicative form \(\omega\), we must have \(\pi_{\mathcal{G}}\) multiplicative, and (ii) also follows.
From the multiplicativity of \(\omega\) we also have that
\[D:=\langle E\rangle=\ker\omega,\]
is a multiplicative distribution, i.e., \(D\) is a subgroupoid of \(T\mathcal{G}\). Since \(E\) is transverse to the identity section, we find that \(D_{0}=D\cap TM=0_{M}\), i.e., we have \(D\subset\ker\mathrm{ds}\cap\ker\mathrm{dt}\). This means that \(D\) is a distribution which is both left and right invariant. In order to conclude that \(E\) itself is both left and right invariant, notice that the right invariant vector field
\[\overrightarrow{e}:\mathcal{G}\to T\mathcal{G},\quad g\mapsto d_{1_{\mathbf{t}(g)}}R_{g}(E_{1_{\mathbf{t}(g)}}),\]
takes values in \(D\), so \(i_{\overrightarrow{e}}\omega=0\). On the other hand, using the multiplicativity of \(\alpha\), we obtain
\[\alpha_{g}(\overrightarrow{e})=\alpha_{g}(\mathrm{d}\mathbf{m}(E_{1_{\mathbf{t}(g)}},0_{g}))=\alpha_{1_{\mathbf{t}(g)}}(E_{1_{\mathbf{t}(g)}})+\alpha_{g}(0_{g})=1.\]
Hence, by uniqueness, we must have \(E=\overrightarrow{e}\). Similarly, we find that \(E=\overleftarrow{e}\), so \(E\) is both left and right invariant and (iii) follows.
Let \((\mathcal{G},\omega,\alpha)\) be a cosymplectic groupoid with underlying multiplicative foliation \(\mathcal{F}_{\mathcal{G}}=\ker\alpha\). The proof of Proposition 2.7 shows that the identity section is tangent to this foliation, i.e.,
\[TM\subset\ker\alpha.\]
Since we assume that \(M\) is connected, it follows that it is contained in a single symplectic leaf \(\Sigma^{0}\) of \(\mathcal{F}_{\mathcal{G}}\). In general, this leaf is only an immersed submanifold. Still, denoting its symplectic form by \(\omega_{\Sigma^{0}}=\omega|_{\Sigma^{0}}\), we have:
**Proposition 2.8**.: _Let \((\mathcal{G},\omega,\alpha)\) be a cosymplectic groupoid. Then \(\Sigma^{0}\) is a Lie subgroupoid of \(\mathcal{G}\rightrightarrows M\) and \((\Sigma^{0},\omega_{\Sigma^{0}})\rightrightarrows M\) is a symplectic groupoid integrating \((M,\pi_{M})\). In particular, \((M,\pi_{M})\) is an integrable Poisson manifold._
Proof.: The condition that \(\mathcal{F}_{\mathcal{G}}\) is multiplicative amounts to the identities
\[\mathbf{i}_{*}\mathcal{F}_{\mathcal{G}}=\mathcal{F}_{\mathcal{G}},\quad \mathbf{m}_{*}(\mathcal{F}_{\mathcal{G}}\,\mathbf{s}\times_{\mathbf{t}} \mathcal{F}_{\mathcal{G}})=\mathcal{F}_{\mathcal{G}}, \tag{4}\]
where \(\mathcal{F}_{\mathcal{G}}\,\mathbf{s}\times_{\mathbf{t}}\mathcal{F}_{\mathcal{G}}=T\mathcal{G}^{(2)}\cap(\mathcal{F}_{\mathcal{G}}\times\mathcal{F}_{\mathcal{G}})\) (note that since \(\mathcal{F}_{\mathcal{G}}+\ker\mathrm{d}\mathbf{s}=\mathcal{F}_{\mathcal{G}}+\ker\mathrm{d}\mathbf{t}\) this intersection is transverse). Since \(M\) is contained in the leaf \(\Sigma^{0}\), it follows that the restriction of \(\mathbf{s}\) and \(\mathbf{t}\) to \(\Sigma^{0}\) are surjective submersions.
Now observe that the first condition in (4) implies that inversion maps leaves of \(\mathcal{F}_{\mathcal{G}}\) to leaves of \(\mathcal{F}_{\mathcal{G}}\). Since inversion fixes the identity section, it follows that it maps \(\Sigma^{0}\) into itself.
Similarly, the second condition in (4) implies that multiplication maps leaves of \(\mathcal{F}_{\mathcal{G}}^{(2)}:=\mathcal{F}_{\mathcal{G}}\,\mathbf{s}\times_{ \mathbf{t}}\mathcal{F}_{\mathcal{G}}\) to leaves of \(\mathcal{F}_{\mathcal{G}}\). Since \(\Sigma^{0}\,\mathbf{s}\times_{\mathbf{t}}\Sigma^{0}\) is a leaf of \(\mathcal{F}_{\mathcal{G}}^{(2)}\) containing all the pairs \((1_{x},1_{x})\), it follows that \(\Sigma^{0}\) is closed under multiplication.
Therefore we have smooth maps \(\mathbf{i}:\Sigma^{0}\to\mathcal{G}\) and \(\mathbf{m}:\Sigma^{0}\,_{\mathbf{s}}\times_{\mathbf{t}}\Sigma^{0}\to\mathcal{G}\) with image lying in \(\Sigma^{0}\). The fact that these are also smooth as maps into \(\Sigma^{0}\) follows from the general fact that leaves of foliations are regularly immersed submanifolds.
The inclusion \(\Sigma^{0}\hookrightarrow\mathcal{G}\) is a groupoid morphism, so \(\omega_{\Sigma^{0}}:=\omega|_{\Sigma^{0}}\) is a multiplicative symplectic form and \((\Sigma^{0},\omega_{\Sigma^{0}})\rightrightarrows M\) is a symplectic groupoid. The composition of this inclusion with the target of \(\mathcal{G}\) gives a Poisson map
\[\mathbf{t}:(\Sigma^{0},\omega_{\Sigma^{0}})\to(M,\pi_{M}),\]
so this symplectic groupoid integrates \((M,\pi_{M})\).
### Examples of cosymplectic groupoids
The basic examples of cosymplectic structures mentioned in Section 2.1 all have multiplicative versions, yielding examples of cosymplectic groupoids.
**Example 2.9**.: Let \((\Sigma,\Omega_{\Sigma})\) be a symplectic groupoid over \(M\). Then we can form the trivial abelian extension
and equip \(\mathcal{G}=\Sigma\times\mathbb{R}\) with the cosymplectic structure \((\operatorname{pr}_{\Sigma}^{*}\Omega_{\Sigma},\operatorname{pr}_{\mathbb{R}}^{*}\operatorname{d}t)\), where \(t\) denotes the coordinate in the second factor. This gives a cosymplectic groupoid, called the _trivial central extension_ of the symplectic groupoid \((\Sigma,\Omega_{\Sigma})\) by \(\mathbb{R}\).
A similar construction holds with \(\mathbb{R}\) replaced by \(\mathbb{S}^{1}\) and \(\operatorname{d}\theta\) instead of \(\operatorname{d}t\). More generally, one can consider a central extension of Lie groupoids with trivial kernel
where \((\Sigma,\Omega_{\Sigma})\) is a symplectic groupoid. A multiplicative Ehresmann connection for this extension is specified by a multiplicative \(1\)-form \(\alpha\in\Omega^{1}(\mathcal{G},\mathbb{R})\) (see [6, 9]). This connection is flat if and only if the form is closed, and we will see in Section 4.4 that in this case we obtain a multiplicative cosymplectic structure \((\operatorname{pr}_{\Sigma}^{*}\Omega_{\Sigma},\alpha)\) in \(\mathcal{G}\).
**Example 2.10**.: Let \((\Sigma\rightrightarrows M,\omega)\) be a symplectic groupoid and \(\varphi:\Sigma\to\Sigma\) a symplectomorphism satisfying
\[\mathbf{s}\circ\varphi =\mathbf{s},\qquad\mathbf{t}\circ\varphi=\mathbf{t},\] \[\varphi(gh) =g\varphi(h)=\varphi(g)h,\qquad(g,h)\in\Sigma^{(2)}.\]
Notice that these properties are satisfied by the time-one map of any bi-invariant vector field on a Lie groupoid (e.g., the Reeb vector field of a cosymplectic groupoid). These properties ensure that the map
\[M\times\mathbb{Z}\hookrightarrow\Sigma\times\mathbb{R},\quad(x,n)\mapsto( \varphi^{n}(1_{x}),n),\]
makes the trivial bundle of groups \(M\times\mathbb{Z}\rightrightarrows M\) a closed, normal subgroupoid inside the isotropy of the direct product groupoid \(\Sigma\times\mathbb{R}\rightrightarrows M\). It follows that the mapping torus
\[\Sigma\times_{\varphi}\mathbb{S}^{1}:=(\Sigma\times\mathbb{R})/\mathbb{Z},\]
has a unique Lie groupoid structure making the following sequence of Lie groupoids exact
Since \(\Sigma\times_{\varphi}\mathbb{S}^{1}\) is a symplectic mapping torus, it has a cosymplectic structure which one checks is multiplicative. Hence, it is a cosymplectic groupoid. Notice that the map
\[\Sigma\times_{\varphi}\mathbb{S}^{1}\to\mathbb{S}^{1},\quad[g,t]\mapsto e^{2 \pi it},\]
is a fibration of Lie groupoids.
**Example 2.11**.: Let \((\Sigma,\Omega)\) be a symplectic groupoid over \(M\) and let \(\iota:\mathcal{G}\hookrightarrow\Sigma\) be a Lie subgroupoid. Assume that there exists a multiplicative symplectic vector field \(X\in\mathfrak{X}(\Sigma)\) transverse to \(\mathcal{G}\). Then \(\omega:=\iota^{*}\Omega\) and \(\alpha:=\iota^{*}(i_{X}\Omega)\) define a multiplicative cosymplectic structure on the groupoid \(\mathcal{G}\).
Conversely, given a cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) we can form a symplectic groupoid \((\Sigma,\Omega)\) so that \((\mathcal{G},\omega,\alpha)\) is obtained from \((\Sigma,\Omega)\). We let \(\Sigma\) be the product of the groupoid \(\mathcal{G}\rightrightarrows M\) with the identity groupoid \(\mathbb{R}\rightrightarrows\mathbb{R}\), so \(\Sigma\) is a groupoid with space of arrows \(\mathcal{G}\times\mathbb{R}\) and space of objects \(M\times\mathbb{R}\). The symplectic form on \(\Sigma\) is given by \(\Omega=\omega+\alpha\wedge\mathrm{d}t\). One checks easily that \(\Omega\) is multiplicative and that \(\frac{\partial}{\partial t}\) is a multiplicative symplectic vector field transverse to \(\mathcal{G}\times\{0\}\cong\mathcal{G}\).
**Example 2.12**.: For a concrete example of a cosymplectic groupoid which is not a central extension, let \(\Sigma=\mathbb{T}^{n}\times\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) be the trivial bundle of Lie groups with fiber the torus \(\mathbb{T}^{n}\). Denoting by \((\theta^{1},\dots,\theta^{n})\) angle coordinates on the torus and \((x^{1},\dots,x^{n})\) linear coordinates on \(\mathbb{R}^{n}\), we let \(\Omega:=\sum_{i=1}^{n}\mathrm{d}\theta^{i}\wedge\mathrm{d}x^{i}\). This is a multiplicative form so \((\Sigma,\Omega)\) is a symplectic groupoid. Now fix some \(a=(a_{1},\dots,a_{n})\in\mathbb{R}^{n}\) with \(||a||=1\). The vector field
\[X:=\sum_{i=1}^{n}a_{i}\frac{\partial}{\partial x^{i}},\]
is a symplectic vector field transverse to the subgroupoid \(\mathcal{G}=\mathbb{T}^{n}\times M\rightrightarrows M\) where \(M\) is the hyperplane
\[M=\{(x^{1},\dots,x^{n})\in\mathbb{R}^{n}:\sum_{i=1}^{n}a_{i}x^{i}=0\}.\]
Moreover, \(X\) is multiplicative since its flow is a \(1\)-parameter group of automorphisms of \(\Sigma\). Hence, we are in the situation of the previous example, so we obtain a cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\). In particular, we find that \(\alpha=-\sum_{i=1}^{n}a_{i}\mathrm{d}\theta^{i}\). The hamiltonian vector field \(X_{f}\) associated with the function \(f(x,\theta)=\sum_{i=1}^{n}a_{i}x^{i}\) satisfies \(i_{X_{f}}\omega=\mathrm{d}f|_{T\mathcal{G}}=0\) and \(\alpha(X_{f})=-\sum_{i=1}^{n}a_{i}^{2}=-1\). Hence, we conclude that the Reeb vector field is
\[E=-X_{f}|_{\mathcal{G}}=-\sum_{i=1}^{n}a_{i}\frac{\partial}{\partial\theta^{ i}}.\]
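One can also verify directly that this vector field satisfies the defining conditions (2): since \(\sum_{i}a_{i}\,\mathrm{d}x^{i}\) vanishes on \(TM\), and hence on \(T\mathcal{G}=T\mathbb{T}^{n}\oplus TM\),
\[i_{E}\omega=-\sum_{i=1}^{n}a_{i}\,i_{\partial/\partial\theta^{i}}\Big(\sum_{j=1}^{n}\mathrm{d}\theta^{j}\wedge\mathrm{d}x^{j}\Big)\Big|_{T\mathcal{G}}=-\sum_{i=1}^{n}a_{i}\,\mathrm{d}x^{i}\Big|_{T\mathcal{G}}=0,\qquad\alpha(E)=\sum_{i=1}^{n}a_{i}^{2}=1.\]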
It follows that the orbit space of \(E\) is smooth if and only if \(\mathbb{Z}a\) defines a discrete subgroup of \(\mathbb{T}^{n}\), i.e., if and only if the ratios \(a_{i}:a_{j}\) are all rational. Choosing \(a\) with some irrational ratio therefore yields examples of cosymplectic groupoids which are not central extensions.
### Central extensions and cosymplectic groupoids
Let \((\mathcal{G},\omega,\alpha)\) be a cosymplectic groupoid over \(M\). Let us denote by \(\mathcal{K}\subset\mathcal{G}\) the collection of all orbits of the Reeb vector field \(E\) which intersect the identity section of \(\mathcal{G}\). We call \(\mathcal{K}\) the _kernel of the cosymplectic groupoid_\((\mathcal{G},\omega,\alpha)\).
**Theorem 2.13**.: _The kernel \(\mathcal{K}\) of a cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) is a bundle of abelian groups which fits into a short exact sequence of topological groupoids over the same base_
(5)
_where \(\Sigma\) is the orbit space of the Reeb vector field. When this space is smooth, this is a short exact sequence of Lie groupoids and \((\Sigma,\Omega)\) is a symplectic groupoid for a unique symplectic structure making the projection \(\mathcal{G}\to\Sigma\) a Poisson map._
Proof.: Since \(E\) is non-vanishing and transverse to the identity section, it follows that \(\mathcal{K}\) is a submanifold of \(\mathcal{G}\). By Proposition 2.7, it follows that \(\mathcal{K}\) is a Lie subgroupoid of \(\mathcal{G}\), which is actually a bundle of Lie groups contained in the isotropy of \(\mathcal{G}\). One can form the quotient groupoid \(\Sigma=\mathcal{G}/\mathcal{K}\), which is a topological groupoid, giving the short exact sequence (5).
Notice that the quotient groupoid \(\Sigma=\mathcal{G}/\mathcal{K}\) can be identified with the space of orbits of the \(\mathbb{R}\)-action defined by the Reeb vector field. When this orbit space is a smooth manifold, the form \(\omega\) is basic for the \(\mathbb{R}\)-action so there is a unique symplectic 2-form \(\Omega\) in \(\Sigma\) such \(p^{*}\Omega=\omega\), where \(p:\mathcal{G}\to\Sigma\) is the projection. Since \(\omega\) is multiplicative, it follows that \(\Omega\) is also multiplicative, so \((\Sigma,\Omega)\) is a symplectic groupoid. One checks easily that \(p:(\mathcal{G},\pi_{\mathcal{G}})\to(\Sigma,\Omega)\) is a Poisson map. Since \(p\) is a submersion, it follows that \(\Omega\) is the unique symplectic form with this property.
We will call a cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) a _central extension Lie groupoid_ whenever the orbit space of the Reeb vector field is smooth, so \(\mathcal{G}\) fits into a short exact sequence of Lie groupoids with abelian one-dimensional kernel.
### Proper cosymplectic groupoids
Recall that a Lie groupoid \(\mathcal{G}\rightrightarrows M\) is called _proper_ if its space of arrows is Hausdorff and the map \((\mathbf{s},\mathbf{t}):\mathcal{G}\to M\times M\) is proper. In this section we restrict our attention to proper cosymplectic groupoids.
**Example 2.14**.: The cosymplectic groupoids arising as symplectic mapping tori, as in Example 2.10, are proper whenever one starts with a proper symplectic groupoid \((\Sigma\rightrightarrows M,\omega)\). In this case, the symplectic leaves of the resulting cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) are the fibers of \(q:\mathcal{G}\to\mathbb{S}^{1}\), hence are embedded submanifolds.
The cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) constructed in Example 2.12 is also proper, being a bundle of compact Lie groups. However, in this example the symplectic leaf through the identity section \(\Sigma^{0}\) is not embedded.
It turns out that a proper cosymplectic groupoid is a symplectic mapping torus, as in Example 2.10, if and only if the symplectic leaf \(\Sigma^{0}\) containing the identity section is embedded.
**Theorem 2.15**.: _Let \((\mathcal{G},\omega,\alpha)\) be a proper, source connected, cosymplectic groupoid and assume that the symplectic leaf \(\Sigma^{0}\subset\mathcal{G}\) is embedded. Then:_
1. _there is a time_ \(t_{0}\) _such that the flow of the Reeb vector field at time_ \(t_{0}\) _maps_ \(\Sigma^{0}\) _to itself yielding a symplectomorphism_ \[\varphi:=\varphi_{E}^{t_{0}}:\Sigma^{0}\to\Sigma^{0}.\]
2. \(\mathcal{G}\) _is isomorphic to the symplectic mapping torus_ \(\Sigma^{0}\times_{\varphi}\mathbb{S}^{1}\) _and the resulting submersion_ \[q:\mathcal{G}\to\mathbb{S}^{1},\] _is a fibration of Lie groupoids._
Proof.: The Reeb vector field is a complete Poisson vector field transverse to the symplectic leaves of \((\mathcal{G},\pi_{\mathcal{G}})\). Hence, for each fixed \(t\), its flow \(\varphi_{E}^{t}:\mathcal{G}\to\mathcal{G}\) maps leaves to leaves. We claim that there exists some smallest \(t_{0}>0\) such that \(\varphi_{E}^{t_{0}}(\Sigma^{0})=\Sigma^{0}\).
Since the Reeb vector field \(E\) is both left and right invariant, it satisfies
\[\mathbf{s}\circ\varphi_{E}^{t} =\mathbf{s},\qquad\mathbf{t}\circ\varphi_{E}^{t}=\mathbf{t},\] \[\varphi_{E}^{t}(gh) =g\varphi_{E}^{t}(h)=\varphi_{E}^{t}(g)h,\qquad((g,h)\in\mathcal{G}^{(2)}).\]
It follows that the map
\[\Phi:\Sigma^{0}\times\mathbb{R}\to\mathcal{G},\quad(g,t)\mapsto\varphi_{E}^{t }(g). \tag{6}\]
is a Lie groupoid morphism. Since \(\Sigma^{0}\) is embedded, this map is also a local diffeomorphism and the image of \(\Phi\) is open and closed in \(\mathcal{G}\). Since \(M\) is connected and \(\mathcal{G}\) is source connected, we have that \(\mathcal{G}\) is connected, so \(\Phi\) is surjective.
Now observe that, for each \(x\in M\), the map \(\Phi\) restricts to a Lie group map
\[\Phi_{x}:\mathbb{R}\to\mathcal{G}_{x},\quad t\mapsto\Phi(1_{x},t),\]
whose image is closed. Since \(\mathcal{G}\) is proper, isotropy groups are compact, so the image of this map is compact. Hence, there exists a first time \(t_{0}>0\) such that \(\Phi(1_{x},t_{0})\in\Sigma^{0}\cap\mathcal{G}_{x}\). Since, for each \(t\), \(\varphi_{E}^{t}:\mathcal{G}\to\mathcal{G}\) maps leaves to leaves we conclude that
\[\varphi_{E}^{t_{0}}(\Sigma^{0})=\Sigma^{0},\]
and \(t_{0}\) is the smallest positive real satisfying this property, proving our claim.
**Lemma 2.16**.: _The morphism (6) yields a short exact sequence of Lie groupoids:_
_where the first map is \((x,n)\mapsto(\varphi_{E}^{nt_{0}}(1_{x}),-nt_{0})\). In particular, the groupoid \(\mathcal{G}\) is isomorphic to a mapping torus:_
\[\mathcal{G}\simeq(\Sigma^{0}\times\mathbb{R})/\mathbb{Z},\]
_where the \(\mathbb{Z}\)-action is generated by \((g,t)\mapsto(\varphi_{E}^{t_{0}}(g),t-t_{0})\)._
Assuming this lemma, it remains to prove that \(\Phi\) pulls back the cosymplectic structure \((\omega,\alpha)\) to the cosymplectic structure \((\mathrm{pr}_{\Sigma^{0}}^{*}\,\omega_{\Sigma^{0}},\mathrm{pr}_{\mathbb{R}}^{*}\,\mathrm{d}t)\). This follows because:
1. \(\Phi\) is a map of the underlying foliations;
2. The Reeb vector fields \(\partial_{t}\) and \(E\) are \(\Phi\)-related: \[\mathrm{d}_{(g,t)}\Phi(\partial_{t})=\left.\frac{\mathrm{d}}{\mathrm{d}s}\right|_{s=t}\,\varphi_{E}^{s}(g)=E|_{\Phi(g,t)}.\]
In fact, from (b), we find that
\[i_{\partial_{t}}\Phi^{*}\omega=\Phi^{*}i_{E}\omega=0.\]
Since \(\omega\) is closed, it follows that \(\Phi^{*}\omega\) is basic relative to \(\mathrm{pr}_{\Sigma^{0}}:\Sigma^{0}\times\mathbb{R}\to\Sigma^{0}\). On the other hand, for the section \(s:\Sigma^{0}\to\Sigma^{0}\times\mathbb{R}\), \(g\mapsto(g,0)\), we have
\[s^{*}\Phi^{*}\omega=(\Phi\circ s)^{*}\omega=\omega_{\Sigma^{0}},\]
so we conclude that
\[\Phi^{*}\omega=\mathrm{pr}_{\Sigma^{0}}^{*}\,\omega_{\Sigma^{0}}.\]
Similarly, from (a), we find that for any tangent vector \((v,0)\in T(\Sigma^{0}\times\mathbb{R})\)
\[i_{(v,0)}\Phi^{*}\alpha=\Phi^{*}(i_{\mathrm{d}\Phi(v,0)}\alpha)=0.\]
Since \(\alpha\) is closed, it follows that \(\Phi^{*}\alpha\) is basic relative to \(\mathrm{pr}_{\mathbb{R}}:\Sigma^{0}\times\mathbb{R}\to\mathbb{R}\). But using (b) again we find
\[i_{\partial_{t}}\Phi^{*}\alpha=\Phi^{*}i_{E}\alpha=1,\]
so we conclude that
\[\Phi^{*}\alpha=\mathrm{pr}_{\mathbb{R}}^{*}\,\mathrm{d}t.\]
Proof of Lemma 2.16.: Observe that for each \(x\) there is a smallest positive integer \(n_{0}\) such that
\[\varphi_{E}^{n_{0}t_{0}}(1_{x})=1_{x}.\]
Note that \(n_{0}\) is independent of \(x\). This follows, e.g., because \(n_{0}\) is the order of the group
\[\Sigma^{0}\cap\Phi_{x}(\mathbb{R})=\{1_{x},\varphi_{E}^{t_{0}}(1_{x}),\dots, \varphi_{E}^{(n_{0}-1)t_{0}}(1_{x})\},\]
and these groups form a Lie group bundle as \(x\) varies in \(M\). From this it follows also that
\[g\in\Sigma^{0},\ \varphi_{E}^{t}(g)=1_{x}\quad\Leftrightarrow\quad\left\{\begin{array}{l}g=\varphi_{E}^{nt_{0}}(1_{x})\\ t=-nt_{0}+k(n_{0}t_{0})\end{array}\right.\quad\text{for some }n,k\in\mathbb{Z}.\]
Since \(\varphi_{E}^{nt_{0}}(1_{x})=\varphi_{E}^{(n-kn_{0})t_{0}}(1_{x})\) we conclude that the kernel of the groupoid morphism (6) is
\[\mathrm{Ker}\,\Phi=\{(\varphi_{E}^{mt_{0}}(1_{x}),-mt_{0}):m\in\mathbb{Z}\}.\]
This proves the lemma and completes also the proof of the theorem.
## 3. The Infinitesimal picture
### Infinitesimal data of a cosymplectic groupoid
Let \((\mathcal{G},\omega,\alpha)\) be a cosymplectic groupoid, with Reeb vector field \(E\) and associated Poisson structure \(\pi_{\mathcal{G}}\). All these geometric structures have infinitesimal versions, as we now explain.
In general, we will denote by \(A\) a Lie algebroid with bundle projection \(p:A\to M\), anchor \(\rho_{A}:A\to TM\), and Lie bracket \([\,\ ]_{A}\) on its space of sections \(\Gamma(A)\). Our conventions are such that if \(\mathcal{G}\rightrightarrows M\) is a Lie groupoid, then its Lie algebroid \(A=A(\mathcal{G})\) has fiber \(A_{x}:=\operatorname{Ker}\operatorname{d}_{1_{x}}\!\mathbf{s}\) and anchor \(\rho_{A}|_{x}:=\operatorname{d}_{1_{x}}\!\mathbf{t}\). Moreover, its space of sections \(\Gamma(A)\) is identified with the space \(\mathfrak{X}_{r}(\mathcal{G})\) of right invariant vector fields on \(\mathcal{G}\) and we will denote by \(\overrightarrow{X}\in\mathfrak{X}_{r}(\mathcal{G})\) the right invariant vector field corresponding to \(X\in\Gamma(A)\). A multiplicative form \(\omega\in\Omega^{k}(\mathcal{G})\) induces a pair of bundle maps \(\mu:A\to\wedge^{k-1}T^{*}M\), \(\bar{\mu}:A\to\wedge^{k}T^{*}M\), defined by
\[\mu(a)(v_{1},\dots,v_{k-1}) =\omega(a,\mathrm{d}\varepsilon(v_{1}),\dots,\mathrm{d} \varepsilon(v_{k-1})),\] \[\bar{\mu}(a)(v_{1},\dots,v_{k}) =\mathrm{d}\omega(a,\mathrm{d}\varepsilon(v_{1}),\dots,\mathrm{ d}\varepsilon(v_{k})).\]
These maps satisfy the following conditions that characterize infinitesimal multiplicative (IM) forms (see, e.g., [1, 2]):
\[\begin{split} i_{\rho_{A}(b)}\mu(a)&=-i_{\rho_{A}(a) }\mu(b),\\ \mu([a,b]_{A})&=\mathcal{L}_{\rho_{A}(a)}\mu(b)-i_{ \rho_{A}(b)}(\mathrm{d}\mu(a)+\bar{\mu}(a)),\\ \bar{\mu}([a,b]_{A})&=\mathcal{L}_{\rho_{A}(a)}\bar {\mu}(b)-i_{\rho_{A}(b)}\mathrm{d}\bar{\mu}(a),\end{split} \tag{7}\]
for all sections \(a,b\in\Gamma(A)\). For a closed IM form the component \(\bar{\mu}\) vanishes.
After these preliminaries we can now list the infinitesimal data corresponding to a cosymplectic groupoid. Let \((\mathcal{G},\omega,\alpha)\) be a cosymplectic groupoid and denote by \(A\to M\) its Lie algebroid. Then:
1. The multiplicative closed 1-form \(\alpha\) induces a closed IM 1-form \(\nu:A\to\mathbb{R}\);
2. The multiplicative closed 2-form \(\omega\) induces a closed IM 2-form \(\mu:A\to T^{*}M\);
3. The Reeb vector field \(E\) induces a central section \(e\in\Gamma(A)\), i.e., \(E=\overrightarrow{e}=\overleftarrow{e}\) where \(\rho(e)=0\) and \([e,a]=0\), for all \(a\in\Gamma(A)\);
4. The multiplicative Poisson structure \(\pi_{\mathcal{G}}\) induces a unique Poisson structure \(\pi_{M}\in\mathfrak{X}^{2}(M)\), for which the target is a Poisson map, and the source is an anti-Poisson map.
Only the last item needs some justification. One can show directly from the condition that \(\pi_{\mathcal{G}}\) is multiplicative that the Poisson bracket of functions locally constant on the \(\mathbf{t}\)-fibers is a function locally constant on the \(\mathbf{t}\)-fibers (see, e.g., [17]), so that there is a unique Poisson structure on \(M\) for which the submersion \(\mathbf{t}:\mathcal{G}\to M\) is Poisson.
The infinitesimal data above has various relationships between themselves, which can be stated in a concise form as follows:
**Proposition 3.1**.: _The Lie algebroid \(A\to M\) of a cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) is a central extension_
(8)
_where \(T^{*}M\) is equipped with the cotangent Lie algebroid structure associated with the Poisson manifold \((M,\pi_{M})\) and \(\mathfrak{k}\) is the trivial line bundle generated by the central section \(e\in\Gamma(A)\). This extension has a natural splitting given by:_
\[\underline{\nu}:A\to\mathfrak{k},\quad a\mapsto\nu(a)e.\]
Proof.: (i) \(\mu:A\to T^{*}M\) is a surjective Lie algebroid map: The definition of \(\mu\) shows that we have a commutative diagram:
where \(\mu^{*}\) is the transpose of \(\mu\). By Proposition 2.7, we know that \(\langle E\rangle=\ker\omega^{\flat}\) is transverse to the identity section, so we conclude that \(\mu^{*}\) is injective.
Now observe that since \(\mathbf{t}:(\mathcal{G},\pi_{\mathcal{G}})\to(M,\pi_{M})\) is a Poisson map, using the definition of \(\pi_{\mathcal{G}}\) and \(\mu\), we obtain
\[\pi_{M}^{\sharp}(\mu(a))=\mathrm{d}_{1_{x}}\mathbf{t}\cdot\pi_{\mathcal{G}}^{\sharp}\cdot(\mathrm{d}_{1_{x}}\mathbf{t})^{*}(\mu(a))=\rho_{A}(a)\]
where \(a\in A_{x}\). On the other hand, since \(\mu\) is a closed IM form, relations (7) give
\[\mu([a,b]_{A}) =\mathcal{L}_{\rho_{A}(a)}\mu(b)-i_{\rho_{A}(b)}\mathrm{d}\mu(a)\] \[=\mathcal{L}_{\pi_{M}^{\sharp}(\mu(a))}\mu(b)-i_{\pi_{M}^{\sharp} (\mu(b))}\mathrm{d}\mu(a)=[\mu(a),\mu(b)]_{\pi_{M}},\]
so \(\mu:A\to T^{*}M\) is a Lie algebroid morphism.
(ii) \(\ker(\mu)=\langle e\rangle\): Since \(\mu\) is surjective, its kernel is a rank 1 vector sub-bundle. Since \(e\in\Gamma(A)\) is a non-vanishing section, all we have to check is that \(\mu(e)=0\). This is clear from the definition of \(\mu\) since \(\mu(e)=(i_{E}\omega)|_{TM}=0\).
(iii) \(\underline{\nu}:A\to\langle e\rangle\), \(a\mapsto\nu(a)e\), splits the short exact sequence (8): Notice that we have \(\nu(e)=1\), since
\[\nu(e)(x)=\alpha_{1_{x}}(E_{1_{x}})=1.\]
This shows that \(\underline{\nu}\) is a splitting as a short exact sequence of vector bundles. Associated with this splitting there is a \(T^{*}M\)-connection on the bundle \(\mathfrak{k}\). Because \(e\) commutes with any section of \(A\), we have that \(e\) is a flat section
\[\nabla_{\beta}e=0.\]
On the other hand, the curvature 2-form of this splitting is given by
\[c(\beta_{1},\beta_{2})=\nu([a_{1},a_{2}]_{A}),\]
where \(a_{i}\in\operatorname{Ker}\nu\) is the unique element such that \(\mu(a_{i})=\beta_{i}\). But this curvature \(2\)-form vanishes since \(\nu\) is a closed IM form and from (7) we find
\[\nu([a_{1},a_{2}]_{A})=\mathscr{L}_{\rho_{A}(a_{1})}\nu(a_{2})-i_{\rho_{A}(a_{2} )}\mathrm{d}\nu(a_{1})=0,\]
whenever \(a_{1},a_{2}\in\Gamma(\operatorname{Ker}\nu)\).
The previous proposition shows that the space of objects of a cosymplectic groupoid is a Poisson manifold, and that we have a canonical isomorphism:
\[A\cong T^{*}M\oplus\mathbb{R},\quad a\mapsto(\mu(a),\nu(a)). \tag{9}\]
Under this isomorphism, the anchor becomes
\[\rho_{A}:T^{*}M\oplus\mathbb{R}\to TM,\quad(\beta,\lambda)\mapsto\pi_{M}^{ \sharp}(\beta),\]
while the bracket on sections \((\beta_{i},f_{i})\in\Omega^{1}(M)\times C^{\infty}(M)\) can be written as:
\[[(\beta_{1},f_{1}),(\beta_{2},f_{2})]_{A}=([\beta_{1},\beta_{2}]_{\pi_{M}}, \pi_{M}^{\sharp}(\beta_{1})(f_{2})-\pi_{M}^{\sharp}(\beta_{2})(f_{1})).\]
In other words, we have:
**Corollary 3.2**.: _If \((\mathcal{G},\omega,\alpha)\) is a cosymplectic groupoid, its Lie algebroid \((A,\mu,\nu)\) is canonically isomorphic via (9) to the trivial central extension of the cotangent algebroid associated with the base Poisson manifold \((M,\pi_{M})\):_
\[(A,\mu,\nu)\simeq(T^{*}M\oplus\mathbb{R},\operatorname{pr}_{T^{*}M}, \operatorname{pr}_{M\times\mathbb{R}}).\]
The only missing piece on the infinitesimal side is what corresponds to the multiplicative Poisson structure \(\pi_{\mathcal{G}}\). At the infinitesimal level such a structure corresponds to a _Lie bialgebroid_ ([14]). The Lie bialgebroid of a cosymplectic groupoid is again rather special.
**Proposition 3.3**.: _If \((\mathcal{G},\omega,\alpha)\) is a cosymplectic groupoid, then (9) gives an isomorphism of Lie bialgebroids_
\[(A,A^{*})\simeq(T^{*}M\oplus\mathbb{R},TM\oplus\mathbb{R}),\]
_where \(T^{*}M\) denotes the cotangent Lie algebroid of the base Poisson manifold \((M,\pi_{M})\)._
Proof.: By Corollary 3.2, we already know that the IM forms corresponding to \((\omega,\alpha)\) give a Lie algebroid isomorphism:
\[A\cong T^{*}M\oplus\mathbb{R},\quad a\mapsto(\mu(a),\nu(a)).\]
On the other hand, the central section \(e\in\Gamma(A)\) satisfies:
\[\mathrm{d}_{A^{*}}e=0,\]
since the Reeb vector field \(E=\overrightarrow{e}\) is a Poisson vector field on \(\mathcal{G}\) (see, [13, Thm 11.4.7]). This implies that the transpose of the map \((\mu,\nu)\) is also a Lie algebroid isomorphism:
\[TM\oplus\mathbb{R}\cong A^{*},\quad(u,\lambda)\mapsto(\mu^{*}(u),\nu^{*}( \lambda)).\]
### Source 1-connected cosymplectic groupoids
By the results in the previous section, source 1-connected cosymplectic groupoids are very easy to describe:
**Theorem 3.4**.: _The base of any cosymplectic groupoid \((\mathcal{G},\omega,\alpha)\) is an integrable Poisson manifold \((M,\pi_{M})\). If \(\mathcal{G}\) is source 1-connected then there is a canonical isomorphism_
\[(\mathcal{G},\omega,\alpha)\cong(\Sigma(M)\times\mathbb{R},\operatorname{pr} _{\Sigma}^{*}\Omega,\operatorname{pr}_{\mathbb{R}}^{*}\operatorname{d}t)\]
_where \((\Sigma(M),\Omega)\) is the source 1-connected symplectic integration of \((M,\pi_{M})\)._
Proof.: Since \(A\cong T^{*}M\oplus\mathbb{R}\), \(a\mapsto(\mu(a),\nu(a))\), is a Lie algebroid isomorphism, it follows that \(A\) is integrable if and only if \(T^{*}M\) is integrable. When \(\mathcal{G}\) is source 1-connected, the integration of this isomorphism gives the desired groupoid isomorphism. This groupoid isomorphism maps \(\operatorname{pr}_{\Sigma}^{*}\Omega\) to \(\omega\) and \(\operatorname{pr}_{\mathbb{R}}^{*}\operatorname{d}t\) to \(\alpha\).
In general, if \((\mathcal{G},\omega,\alpha)\) is only source connected, we have
\[\mathcal{G}\cong(\Sigma(M)\times\mathbb{R})/\Lambda,\]
where \(i:\Lambda\hookrightarrow\Sigma(M)\times\mathbb{R}\) is an embedded discrete bundle of Lie groups such that \(i^{*}\Omega=0\) and \(i^{*}\mathrm{d}t=0\). If we assume that the orbit space of the Reeb vector field \(E\in\mathfrak{X}(\mathcal{G})\) is smooth, we obtain that \(\mathcal{G}\) is a central extension of some symplectic integration \(\Sigma\) of \((M,\pi_{M})\):
In general, this sequence fails to split. Moreover, even when it admits a groupoid splitting, so that \(\mathcal{G}\cong\Sigma\times\mathcal{K}\), the cosymplectic structure may not be the trivial one (as in Example 2.9). This is illustrated in the next example.
**Example 3.5**.: Let \(\mathcal{G}=\mathbb{R}^{2}\times\mathbb{S}^{1}\rightrightarrows\mathbb{R}\) be the trivial bundle of Lie groups with projection \((x,y,\theta)\mapsto x\) and fiber \(\mathbb{R}\times\mathbb{S}^{1}\). The forms
\[\omega:=\mathrm{d}x\wedge\mathrm{d}y+\mathrm{d}x\wedge\mathrm{d}\theta,\quad \alpha:=\frac{1}{2}\left(\mathrm{d}y-\mathrm{d}\theta\right),\]
define a multiplicative cosymplectic structure on \(\mathcal{G}\). The corresponding Reeb vector field is
\[E=\partial_{y}-\partial_{\theta}.\]
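Indeed, one checks directly that \(E\) satisfies the two defining relations of the Reeb vector field of \((\omega,\alpha)\):
\[i_{E}\omega=i_{\partial_{y}}(\mathrm{d}x\wedge\mathrm{d}y)-i_{\partial_{\theta}}(\mathrm{d}x\wedge\mathrm{d}\theta)=-\mathrm{d}x+\mathrm{d}x=0,\qquad i_{E}\alpha=\frac{1}{2}\left(1-(-1)\right)=1.\]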
It follows that the kernel of this cosymplectic groupoid is a trivial bundle \(\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) and the leaf space of this vector field can be identified with \(\mathbb{R}\times\mathbb{S}^{1}\), giving rise to the central extension:
Here the first map is given by \((x,y)\mapsto(x,y,-y)\) while the second map is given by \((x,y,\theta)\mapsto(x,y+\theta)\). This sequence has the splitting \((x,\theta)\mapsto(x,0,\theta)\).
We claim that although \(\mathcal{G}\cong\Sigma\times\mathcal{K}\) as a groupoid, the cosymplectic structure is not isomorphic to \((p_{\Sigma}^{*}\Omega,p_{\mathcal{K}}^{*}\mathrm{d}\theta)\). In fact, the symplectic leaves of \(\mathcal{G}\) are the leaves of the distribution \(\mathrm{d}y-\mathrm{d}\theta=0\), so they admit the parametrization \((x,y)\mapsto(x,y,y+c)\), with \(c\in\mathbb{S}^{1}\). Hence, the symplectic leaves are diffeomorphic to \(\mathbb{R}^{2}\), while a trivial extension \(\Sigma\times\mathcal{K}\) has symplectic leaves diffeomorphic to \(\mathbb{R}\times\mathbb{S}^{1}\).
## 4. Poisson groupoids
### Poisson groupoids of corank 1
Let \(\mathcal{G}\rightrightarrows M\) be a Lie groupoid equipped with a regular multiplicative Poisson structure \(\pi_{\mathcal{G}}\) of corank 1. First we show that the dimension of \(M\) and \(\mathcal{G}\) are related as follows:
**Proposition 4.1**.: _If \((\mathcal{G}\rightrightarrows M,\pi_{\mathcal{G}})\) is a Poisson groupoid of corank 1 then_
\[\dim\mathcal{G}=2\dim M+1.\]
_In particular, its Lie bialgebroid \((A,A^{*})\) has a surjective anchor \(\rho_{A^{*}}:A^{*}\to TM\)._
Proof.: Let \(\omega\in\Omega^{2}(\mathcal{G})\) be an extension of the leafwise symplectic form, i.e.,
\[\omega(\pi_{\mathcal{G}}^{\sharp}(\alpha),\pi_{\mathcal{G}}^{\sharp}(\beta))= \langle\alpha,\pi_{\mathcal{G}}^{\sharp}(\beta)\rangle.\]
We claim that \(-\mathbf{i}^{*}\omega\) is also an extension of the leafwise symplectic form. Indeed, using that \(\mathbf{i}\) is an anti-Poisson map, we find
\[-\mathbf{i}^{*}\omega(\pi_{\mathcal{G}}^{\sharp}(\alpha),\pi_{ \mathcal{G}}^{\sharp}(\beta)) =-\omega(\operatorname{\mathrm{d}i}\circ\pi_{\mathcal{G}}^{\sharp }(\alpha),\operatorname{\mathrm{d}i}\circ\pi_{\mathcal{G}}^{\sharp}(\beta))\] \[=-\omega(\pi_{\mathcal{G}}^{\sharp}(\mathbf{i}^{*}(\alpha)),\pi_{ \mathcal{G}}^{\sharp}(\mathbf{i}^{*}(\beta)))\] \[=-\langle\mathbf{i}^{*}(\alpha),\pi_{\mathcal{G}}^{\sharp}( \mathbf{i}^{*}(\beta))\rangle=\langle\alpha,\pi_{\mathcal{G}}^{\sharp}(\beta)\rangle.\]
It follows that \(\tilde{\omega}=\frac{1}{2}\left(\omega-\mathbf{i}^{*}\omega\right)\) is also an extension satisfying additionally:
\[\mathbf{i}^{*}\tilde{\omega}=-\tilde{\omega}.\]
Since \(\mathbf{i}\circ\varepsilon=\varepsilon\), we deduce that \(\varepsilon^{*}\tilde{\omega}=0\). In addition, \(\operatorname{rank}\tilde{\omega}=\dim\mathcal{G}-1\) implies that \(2\dim M\leq\dim\mathcal{G}-1\).
Now observe that since \(\pi_{\mathcal{G}}\) is multiplicative, the identity section is coisotropic, i.e., we have \(\pi_{\mathcal{G}}^{\sharp}((TM)^{0})\subset TM\). Since \(\pi_{\mathcal{G}}^{\sharp}\) has a 1-dimensional kernel and \((TM)^{0}\) has rank \(\dim\mathcal{G}-\dim M\), the image \(\pi_{\mathcal{G}}^{\sharp}((TM)^{0})\) has dimension at least \(\dim\mathcal{G}-\dim M-1\), so \(\dim\mathcal{G}-\dim M-1\leq\dim M\). So we also have \(2\dim M\geq\dim\mathcal{G}-1\) and we conclude that \(\dim\mathcal{G}=2\dim M+1\).
Given a Poisson groupoid \((\mathcal{G},\pi_{\mathcal{G}})\) we have a morphism of Lie groupoids:
Proposition 4.1 shows that if \(\pi_{\mathcal{G}}\) is of corank 1 the base map is surjective. Then the transpose \(\rho_{A^{*}}^{*}:T^{*}M\to A\) is an injective Lie algebroid morphism from the cotangent bundle algebroid \(T^{*}M\) associated with the base Poisson manifold \((M,\pi_{M})\) (see [13]). In particular, \(\rho_{A^{*}}^{*}\) is a Poisson map for the associated fiberwise linear Poisson structures and we find (alternatively, one can also apply the argument in the proof of Proposition 2.8):
**Corollary 4.2**.: _Let \((\mathcal{G},\pi_{\mathcal{G}})\) be a Poisson groupoid of corank 1. Then the symplectic leaf \(\Sigma^{0}\) of \(\pi_{\mathcal{G}}\) containing the identity section is a (symplectic) subgroupoid of \((\mathcal{G},\pi_{\mathcal{G}})\) integrating \((M,\pi_{M})\). In particular, \((M,\pi)\) is an integrable Poisson manifold._
### Poisson groupoids vs cosymplectic groupoids
When is a Poisson groupoid of corank 1 a cosymplectic groupoid? By Proposition 3.3, its Lie bialgebroid must be a central extension. A necessary and sufficient condition is the following multiplicative version of the criteria for a Poisson manifold of corank 1 to be cosymplectic:
**Proposition 4.3**.: _A Poisson groupoid \((\mathcal{G},\pi_{\mathcal{G}})\) is cosymplectic if and only if \(\pi_{\mathcal{G}}\) is regular of corank 1 and there exists a non-vanishing, bi-invariant, Poisson vector field \(E\in\mathfrak{X}(\mathcal{G})\) transverse to the symplectic foliation._
Proof.: In one direction, we already know that the Reeb vector field of a cosymplectic groupoid is a Poisson vector field transverse to the symplectic foliation, which is both left and right invariant.
To prove the reverse direction, assume that \((\mathcal{G},\pi_{\mathcal{G}})\) is a Poisson groupoid of corank 1 that admits a non-vanishing Poisson vector field \(E\in\mathfrak{X}(\mathcal{G})\) transverse to the symplectic foliation, which is both left and right invariant. We extend the symplectic forms on the leaves to a 2-form \(\omega\) by requiring \(i_{E}\omega=0\). Also, we define a 1-form \(\alpha\) by requiring \(\alpha(E)=1\) and \(\ker\alpha=\operatorname{Im}\pi_{\mathcal{G}}^{\sharp}\). Since \(E\) is a Poisson vector field, one checks easily that \(\omega\) and \(\alpha\) are closed. All we need to show is that \(\omega\) and \(\alpha\) are multiplicative.
Take \((X_{1},X_{2})\in T\mathcal{G}^{(2)}\). Since \(E\) is transverse to the symplectic foliation, we can write \(X_{i}=\lambda_{i}E+\pi_{\mathcal{G}}^{\sharp}(\gamma_{i})\), \(i=1,2\). Moreover, the fact that \(E\) is left and right invariant implies that \(\operatorname{ds}(\pi_{\mathcal{G}}^{\sharp}(\gamma_{1}))=\operatorname{dt}(\pi_{\mathcal{G}}^{\sharp}(\gamma_{2}))\). Since \(\pi_{\mathcal{G}}\) is multiplicative, \(\operatorname{Im}\pi_{\mathcal{G}}^{\sharp}\subseteq T\mathcal{G}\) is a multiplicative distribution, so we find
\[\operatorname{dm}(X_{1},X_{2})=(\lambda_{1}+\lambda_{2})E+\pi_{\mathcal{G}}^{ \sharp}(\gamma_{1}\cdot\gamma_{2}).\]
Using that \(\alpha(E)=1\), \(\operatorname{Ker}\alpha=\operatorname{Im}\pi_{\mathcal{G}}^{\sharp}\), and that \(\omega\) is an extension of \(\pi_{\mathcal{G}}\) with \(i_{E}\omega=0\), we can conclude that \(\omega\) and \(\alpha\) are multiplicative. For instance, for \(\alpha\),
\[\mathbf{m}^{*}\alpha(X_{1},X_{2})=\alpha(\operatorname{dm}(X_{1},X_{2}))= \alpha((\lambda_{1}+\lambda_{2})E+\pi_{\mathcal{G}}^{\sharp}(\gamma_{1}\cdot \gamma_{2}))=\lambda_{1}+\lambda_{2}.\]
On the other hand,
\[(\pi_{1}^{*}\alpha+\pi_{2}^{*}\alpha)(X_{1},X_{2})=\alpha(\lambda_{1}E+\pi_{ \mathcal{G}}^{\sharp}(\gamma^{1}))+\alpha(\lambda_{2}E+\pi_{\mathcal{G}}^{ \sharp}(\gamma^{2}))=\lambda_{1}+\lambda_{2}.\]
Thus, \(\alpha\) is multiplicative.
### Proper Poisson groupoids of corank 1
Proposition 4.3 shows that given a Poisson groupoid \((\mathcal{G},\pi_{\mathcal{G}})\) of corank 1, in order to have a compatible multiplicative cosymplectic structure one needs a vector field \(E\in\mathfrak{X}(\mathcal{G})\) transverse to the symplectic foliation satisfying two properties:
1. \(E\) is bi-invariant;
2. \(E\) is a Poisson vector field.
We now analyze these properties for the important special case of _proper_ Poisson groupoids.
First, as a general remark, note that the existence of a vector field transverse to the symplectic foliation means that this foliation is co-orientable. Since the leaves are
oriented (being symplectic), this is equivalent to \(\mathcal{G}\) being orientable. Hence, we will assume this condition throughout this discussion.
We start by looking into condition (a).
**Proposition 4.4**.: _Let \((\mathcal{G},\pi_{\mathcal{G}})\) be an orientable Poisson groupoid of corank 1 with symplectic foliation \(\mathcal{F}_{\pi_{\mathcal{G}}}\). The following conditions are equivalent:_
1. _There exists a bi-invariant vector field transverse to_ \(\mathcal{F}_{\pi_{\mathcal{G}}}\)_;_
2. _There exists a multiplicative 1-form whose kernel is_ \(\mathcal{F}_{\pi_{\mathcal{G}}}\)_;_
_and they imply that_
1. _There exists a multiplicative 2-form extending the leafwise symplectic form._
_If \(\mathcal{G}\) is proper then the 3 conditions are equivalent._
Proof.: (i) \(\Leftrightarrow\) (ii) A vector field \(E\in\mathfrak{X}(\mathcal{G})\) transverse to \(\mathcal{F}_{\pi_{\mathcal{G}}}\) determines a unique 1-form \(\alpha\in\Omega^{1}(\mathcal{G})\) by
\[i_{E}\alpha=1,\quad\ker\alpha=T\mathcal{F}_{\pi_{\mathcal{G}}}.\]
Conversely, given \(\alpha\) these conditions determine \(E\). By an argument entirely similar to the last part of the proof of Proposition 4.3, one checks that \(\alpha\) is multiplicative if and only if \(E\) is both left and right invariant.
(i) \(\Rightarrow\) (iii) Given a vector field \(E\in\mathfrak{X}(\mathcal{G})\) transverse to \(\mathcal{F}_{\pi_{\mathcal{G}}}\) which is both right and left invariant, we define an extension \(\omega\in\Omega^{2}(\mathcal{G})\) of the leafwise symplectic form \(\omega_{\mathcal{F}_{\pi_{\mathcal{G}}}}\) by requiring
\[i_{E}\omega=0.\]
Again, one checks easily that \(\omega\) is multiplicative.
(iii) \(\Rightarrow\) (i) We assume now that \(\mathcal{G}\) is proper and we let \(\omega\in\Omega^{2}(\mathcal{G})\) be a multiplicative 2-form extending the leafwise symplectic form. Since \(\ker\omega\) is multiplicative, it is a distribution which is both right and left invariant and, as a consequence,
\[\mathfrak{k}:=\ker\omega|_{M}\subset\ker\rho_{A}\]
is invariant under the adjoint action of \(\mathcal{G}\). In addition, since \(\mathcal{F}_{\pi_{\mathcal{G}}}\) is co-orientable, there exists a non-vanishing section \(\tilde{e}\in\Gamma(\mathfrak{k})\). The corresponding right-invariant vector field \(\tilde{E}=\overrightarrow{\tilde{e}}\) may fail to be left-invariant. To correct this we use properness of \(\mathcal{G}\). Since \(\mathfrak{k}\) is Ad-invariant, the function \(c\colon\mathcal{G}\to\mathbb{R}^{+}\) defined by
\[\mathrm{d}L_{g}(\tilde{E}_{\mathbf{s}(g)})=c(g)\tilde{E}_{g},\]
is multiplicative:
\[c(gh)=c(g)c(h),\quad((g,h)\in\mathcal{G}^{(2)}).\]
Since \(\mathcal{G}\) is proper, there is a function \(f\colon M\to\mathbb{R}^{+}\) such that
\[c(g)=f(\mathbf{t}(g))/f(\mathbf{s}(g)).\]
The section \(e:=f\tilde{e}\in\Gamma(\mathfrak{k})\) gives the desired vector field \(E:=\overrightarrow{e}=\overleftarrow{e}\).
We now turn to condition (b), assuming that condition (a) holds. We show that, up to a cover, a proper Poisson groupoid satisfying (a) is homotopic to one also satisfying (b), through a homotopy that does not change the Poisson structure on the base:
**Theorem 4.5**.: _Let \((\mathcal{G},\pi_{\mathcal{G}})\rightrightarrows(M,\pi_{M})\) be an orientable proper Poisson groupoid of corank 1 and assume that there exists a multiplicative 2-form extending its leafwise symplectic form. If \((\widetilde{\mathcal{G}},\widetilde{\pi_{\mathcal{G}}})\) is its universal covering groupoid, then there is a path \(\widetilde{\pi}^{t}_{\mathcal{G}}\in\mathfrak{X}^{2}(\widetilde{\mathcal{G}})\), \(t\in[0,1]\), of Poisson structures starting at \(\widetilde{\pi}^{0}_{\mathcal{G}}=\widetilde{\pi_{\mathcal{G}}}\) with the following properties:_
1. _each_ \(\widetilde{\pi}^{t}_{\mathcal{G}}\) _is multiplicative of corank 1;_
2. _the Poisson structure on_ \(M\) _induced by_ \(\widetilde{\pi}^{t}_{\mathcal{G}}\) _is_ \(\pi_{M}\)_;_
3. \(\widetilde{\pi}^{1}_{\mathcal{G}}\) _is associated with a multiplicative cosymplectic structure._
The rest of this section will be dedicated to the proof of this theorem.
Since \(\pi_{\mathcal{G}}\) is a Poisson groupoid of corank 1, \(\rho_{A^{*}}\) is surjective and there is a short exact sequence
(10)
and a dual sequence
(11)
Let \(\omega\in\Omega^{2}(\mathcal{G})\) be the multiplicative 2-form extending the leafwise symplectic form, and \(\alpha\in\Omega^{1}(\mathcal{G})\) and \(E\in\mathfrak{X}(\mathcal{G})\) be the corresponding multiplicative 1-form and bi-invariant vector field, given by Proposition 4.4. At the infinitesimal level these give:
1. an IM 2-form \((\mu,\tilde{\mu}):A\to T^{*}M\oplus\wedge^{2}T^{*}M\);
2. an IM 1-form \((\nu,\tilde{\nu}):A\to\mathbb{R}\oplus T^{*}M\);
3. a section \(e\in\Gamma(\ker\rho_{A})\) which is central: \([e,X]_{A}=0\), for any \(X\in\Gamma(A)\).
This data makes the second short exact sequence (11) a split exact sequence of Lie algebroids. Indeed, one has \(\mathfrak{k}=\mathbb{R}e\), \(\mu(e)=0\) and (11) becomes
where \(i(x,\lambda)=\lambda e\). Because \(e\) is central and \(\rho_{A^{*}}^{*}:T^{*}M\to A\) is a Lie algebroid map, we conclude that we have a Lie algebroid isomorphism
\[(\mu,\nu):A\xrightarrow{\ \sim\ }T^{*}M\oplus\mathbb{R},\]
where the right-hand side has anchor and bracket
\[\rho_{A}(\alpha,f)=\pi^{\sharp}_{M}(\alpha), \tag{12}\] \[[(\alpha,f),(\beta,g)]_{A}=([\alpha,\beta]_{\pi_{M}},\pi^{\sharp}_{M}(\alpha)(g)-\pi^{\sharp}_{M}(\beta)(f)). \tag{13}\]
Now let us look at the exact sequence (10). If \(e^{*}\) is the section of \(\mathfrak{k}^{*}\) defined by \(\langle e^{*},e\rangle=1\), then \(\mathfrak{k}^{*}=\mathbb{R}e^{*}\) and we have a vector bundle splitting
The Lie algebroid structure of \(A^{*}\) can be described in terms of the central section \(e\).
**Lemma 4.6**.: _There is a Lie algebroid isomorphism_
\[(\rho_{A^{*}},i^{*}):A^{*}\xrightarrow{\ \sim\ }TM\oplus\mathbb{R},\]
_where the right-hand side has anchor and Lie bracket given by_
\[\rho_{A^{*}}(X,a)=X, \tag{14}\] \[[(X,a),(Y,b)]_{A^{*}}=([X,Y],\nabla_{X}b-\nabla_{Y}a+\Omega(X,Y)), \tag{15}\]
_with \(\nabla\) the flat connection on the trivial line bundle \(M\times\mathbb{R}\to M\) given by_
\[\nabla_{X}a=X(a)+a\gamma(X),\qquad\gamma(X):=\langle d_{*}e,X\wedge e^{*}\rangle,\]
_and \(\Omega\in\Omega^{2}(M)\) given by_
\[\Omega(X,Y):=\langle d_{*}e,X\wedge Y\rangle.\]
_Moreover, for all \(\alpha\in\Omega^{1}(M)\) one has_
\[i_{\pi_{M}^{\sharp}(\alpha)}\gamma=0,\quad i_{\pi_{M}^{\sharp}(\alpha)}\Omega=0.\]
_Remark 4.7_.: Note that the Jacobi identity for a Lie bracket of the form (15) is equivalent to the connection \(\nabla\) being flat, i.e., to \(\gamma\) being a closed \(1\)-form, and the \(2\)-form \(\Omega\) being \(\mathrm{d}^{\nabla}\)-closed
\[\mathrm{d}\gamma=0,\qquad\mathrm{d}^{\nabla}\Omega=0.\]
One can also express \(\nabla\) and \(\Omega\) in terms of the second components of the IM forms associated with \((\omega,\alpha)\) as follows
\[\gamma=\tilde{\nu}(e),\quad\Omega=\tilde{\mu}(e).\]
Hence, \(A^{*}\) becomes the trivial extension of \(TM\) precisely when \((\omega,\alpha)\) is cosymplectic, in agreement with Proposition 3.3.
Proof of Lemma 4.6.: Under the identification \((\rho_{A^{*}},i^{*}):A^{*}\xrightarrow{\ \sim\ }TM\oplus\mathbb{R}\), one has \(\nabla_{X}e^{*}=[X,e^{*}]_{A^{*}}\) and \(\Omega(X,Y)e^{*}=[X,Y]_{A^{*}}\). Hence, using the definition of \(\mathrm{d}_{*}\), we find
\[\gamma(X) =\langle e,[X,e^{*}]_{A^{*}}\rangle\] \[=\langle\mathrm{d}_{*}e,X\wedge e^{*}\rangle+X(\langle e,e^{*} \rangle)-\rho_{A^{*}}(e^{*})(\langle e,X\rangle)=\langle\mathrm{d}_{*}e,X\wedge e ^{*}\rangle.\] \[\Omega(X,Y) =\langle e^{*},[X,Y]_{A^{*}}\rangle\] \[=\langle\mathrm{d}_{*}e,X\wedge Y\rangle+X(\langle e,Y\rangle)-Y (\langle e,X\rangle)=\langle\mathrm{d}_{*}e,X\wedge Y\rangle.\]
On the other hand, by [14, Cor. 3.9], we have
\[i_{\rho^{*}_{A}(\alpha)}\mathrm{d}_{*}e=[e,\rho^{*}_{A^{*}}(\alpha)]_{A}- \mathrm{d}_{*}(\langle\alpha,\rho_{A}(e)\rangle)-\rho^{*}_{A^{*}}(i_{\rho_{A}( e)}\mathrm{d}\alpha).\]
Observing that \(\rho^{*}_{A}(\alpha)=-\pi^{\sharp}_{M}(\alpha)\) and \(\rho_{A}(e)=0\), the result follows.
Next, we will see that \((A^{*},A)\) is a triangular Lie bialgebroid in the sense of Mackenzie and Xu [14], i.e., there exists an element \(\Lambda\in\Gamma(\wedge^{2}A^{*})\) satisfying
\[[\Lambda,\Lambda]_{A^{*}}=0,\]
such that the anchor and Lie bracket on \(A=(A^{*})^{*}\) are given by
\[\rho_{A}(\xi) =\Lambda^{\sharp}(\xi), \tag{17}\] \[[\xi_{1},\xi_{2}]_{A} =[\xi_{1},\xi_{2}]_{\Lambda}:=\mathscr{L}_{\Lambda^{\sharp}(\xi_ {1})}\xi_{2}-\mathscr{L}_{\Lambda^{\sharp}(\xi_{2})}\xi_{1}-\mathrm{d}_{A}( \Lambda(\xi_{1},\xi_{2})). \tag{16}\]
In fact, we have the following general result which is an analogue for central extensions of the fact that for any Poisson structure \((M,\pi_{M})\) the pair \((TM,T^{*}M)\) is a triangular Lie bialgebroid.
**Proposition 4.8**.: _Let \(\gamma\in\Omega^{1}(M)\), \(\Omega\in\Omega^{2}(M)\) and \(\pi_{M}\in\mathfrak{X}^{2}(M)\), and denote by \(\nabla\) the connection on the trivial line bundle given by \(\gamma\). Assume that:_
1. \(\gamma\) _is closed:_ \(\mathrm{d}\gamma=0\)_;_
2. \(\Omega\) _is_ \(\mathrm{d}^{\nabla}\)_-closed:_ \(\mathrm{d}^{\nabla}\Omega=0\)_;_
3. \(\pi_{M}\) _is Poisson:_ \([\pi_{M},\pi_{M}]=0\)_._
_Then \(A^{*}=TM\oplus\mathbb{R}\), with anchor (14) and Lie bracket (15), and \(A=T^{*}M\oplus\mathbb{R}\) with anchor (12) and Lie bracket (13) are both Lie algebroids. If, additionally, one has_
\[i_{\pi_{M}^{\sharp}(\alpha)}\gamma=0,\quad i_{\pi_{M}^{\sharp}(\alpha)}\Omega =0,\quad(\alpha\in\Omega^{1}(M)),\]
_then \((A^{*},A)\) is a triangular Lie bialgebroid for the section \(\Lambda\in\Gamma(\wedge^{2}A^{*})\) given by_
\[\Lambda((\alpha,f),(\beta,g)):=\pi_{M}(\alpha,\beta).\]
_In particular, in this case one has_
\[[\Lambda,\Lambda]_{A^{*}}=0.\]
Proof of Proposition 4.8.: The fact that both \(A\) and \(A^{*}\) are Lie algebroids is standard. To check that under the additional assumptions on \(\gamma\) and \(\Omega\) the pair \((A^{*},A)\) is a triangular bialgebroid, notice that
\[\rho_{A}(\alpha,f)=\pi_{M}^{\sharp}(\alpha)=\Lambda^{\sharp}(\alpha,f),\qquad ((\alpha,f)\in\Gamma(A)),\]
so (16) holds. On the other hand, we find
\[\langle\mathscr{L}_{\Lambda^{\sharp}(\alpha,f)}(\beta,g),(Y,b)\rangle =\langle\mathscr{L}_{(\pi_{M}^{\sharp}(\alpha),0)}(\beta,g),(Y,b)\rangle\] \[=\pi_{M}^{\sharp}(\alpha)(\langle(\beta,g),(Y,b)\rangle)-\langle(\beta,g),[(\pi_{M}^{\sharp}(\alpha),0),(Y,b)]_{A^{*}}\rangle\] \[=\pi_{M}^{\sharp}(\alpha)(\langle\beta,Y\rangle)+\pi_{M}^{\sharp}(\alpha)(gb)-\langle\beta,[\pi_{M}^{\sharp}(\alpha),Y]\rangle-g\nabla_{\pi_{M}^{\sharp}(\alpha)}b-g\Omega(\pi_{M}^{\sharp}(\alpha),Y)\] \[=\langle\mathscr{L}_{\pi_{M}^{\sharp}(\alpha)}\beta-g\,i_{\pi_{M}^{\sharp}(\alpha)}\Omega,Y\rangle+b\pi_{M}^{\sharp}(\alpha)(g)-gb(i_{\pi_{M}^{\sharp}(\alpha)}\gamma)\] \[=\langle\mathscr{L}_{\pi_{M}^{\sharp}(\alpha)}\beta,Y\rangle+b\pi_{M}^{\sharp}(\alpha)(g),\]
where in the last line we have used the extra assumptions on \(\gamma\) and \(\Omega\). Using this we find that the Lie bracket on \(A\) is indeed given by (17), namely
\[[(\alpha,f),(\beta,g)]_{\Lambda} =\mathcal{L}_{\Lambda^{\sharp}(\alpha,f)}(\beta,g)-\mathcal{L}_{ \Lambda^{\sharp}(\beta,g)}(\alpha,f)-\mathrm{d}_{A}(\Lambda((\alpha,f)(\beta, g)))\] \[=(\mathcal{L}_{\pi^{\sharp}_{M}(\alpha)}\beta,\pi^{\sharp}_{M}( \alpha)(g))-(\mathcal{L}_{\pi^{\sharp}_{M}(\beta)}\alpha,\pi^{\sharp}_{M}( \beta)(f))-(\mathrm{d}(\pi_{M}(\alpha,\beta)),0)\] \[=([\alpha,\beta]_{\pi_{M}},\pi^{\sharp}_{M}(\alpha)(g)-\pi^{ \sharp}_{M}(\beta)(f))\] \[=[(\alpha,f),(\beta,g)]_{A}.\]
To complete the proof we show that \([\Lambda,\Lambda]_{A^{*}}=0\). For this we observe that by the computation above we have
\[[\Lambda,\Lambda]^{\sharp}_{A^{*}}((\alpha,f),(\beta,g)) =\Lambda^{\sharp}\big{(}[(\alpha,f),(\beta,g)]_{\Lambda}\big{)}- [\Lambda^{\sharp}(\alpha,f),\Lambda^{\sharp}(\beta,g)]\] \[=\Lambda^{\sharp}\big{(}[(\alpha,f),(\beta,g)]_{A}\big{)}-[ \Lambda^{\sharp}(\alpha,f),\Lambda^{\sharp}(\beta,g)]\] \[=\pi^{\sharp}_{M}([\alpha,\beta]_{\pi_{M}})-[\pi^{\sharp}_{M}( \alpha),\pi^{\sharp}_{M}(\beta)]\] \[=[\pi_{M},\pi_{M}]^{\sharp}(\alpha,\beta)=0,\]
where the first identity is Lemma 2.2 in [12].
We can now complete the proof of Theorem 4.5. We perform two consecutive homotopies of Lie bialgebroids as follows:
1. Starting with the original Poisson groupoid, its Lie bialgebroid is a triangular Lie bialgebroid \((A^{*},A)\) as in Proposition 4.8 with associated data \((\gamma,\Omega,\pi_{M})\). We can rescale the 2-form \(\Omega\), obtaining a family of triangular Lie bialgebroids \((A^{*}_{t},A)\) with data \((\gamma,(1-t)\Omega,\pi_{M})\), \(t\in[0,1]\) (note that this triple still satisfies for each \(t\) all the conditions in the proposition).
2. The previous homotopy gives at \(t=1\) a Lie bialgebroid with corresponding triple \((\gamma,\Omega=0,\pi_{M})\). Now we can rescale the connection 1-form \(\gamma\), obtaining a family of triangular Lie bialgebroids \((A^{*}_{t},A)\) with data \(((1-t)\gamma,0,\pi_{M})\), \(t\in[0,1]\) (notice again that this triple still satisfies for each \(t\) all the conditions in the proposition).
The result of these two consecutive deformations is a Lie bialgebroid \((A^{*},A)\) whose associated triple has both \(\gamma\) and \(\Omega\) equal to zero, i.e., it is of cosymplectic type (cf. Proposition 3.3).
Finally, we observe that in these deformations \((A^{*}_{t},A)\) the Lie algebroid \(A\) and the anchors are both fixed, and so is the underlying Poisson structure. Using the Mackenzie-Xu correspondence between Lie bialgebroids and source 1-connected Lie groupoids [15], we conclude that at the level of the Lie groupoid \(\widetilde{\mathcal{G}}\) we have a path of multiplicative Poisson structures \(\widetilde{\pi}^{t}_{\mathcal{G}}\in\mathfrak{X}^{2}(\widetilde{\mathcal{G}})\) as in the statement of Theorem 4.5.
_Remark 4.9_.: A geometric way of thinking about the two deformations in the proof is as follows. We start with a Poisson groupoid \((\mathcal{G},\pi_{\mathcal{G}})\) which can be described by a pair \((\omega,\alpha)\) consisting of a multiplicative 2-form and a multiplicative 1-form, which fail to be closed but, nonetheless, \(\ker\alpha\) is an integrable distribution and the
restriction of \(\omega\) to the leaves of \(\ker\alpha\) is symplectic. After replacing \(\mathcal{G}\) by \(\widetilde{\mathcal{G}}\), we are able to construct homotopies as follows:
1. the first homotopy consists of a deformation \((\omega_{t},\alpha)\) where the 1-form \(\alpha\) is fixed, the 2-form \(\omega_{t}\) is multiplicative, at \(t=0\) equals \(\omega\) and at \(t=1\) is closed;
2. the second homotopy consists of a deformation \((\omega_{1},\alpha_{t})\) where the 2-form \(\omega_{1}\) is fixed, the 1-form \(\alpha_{t}\) is multiplicative, at \(t=0\) equals \(\alpha\) and at \(t=1\) is closed;
Moreover, throughout these deformations the 1-form always defines an integrable distribution and the restriction of the 2-form to its leaves is symplectic, so they define a multiplicative Poisson structure \(\pi_{t}\) on \(\mathcal{G}\).
### Proper over-symplectic groupoids of corank 1
Consider an oversymplectic groupoid \((\mathcal{G},\omega)\) of corank 1 for which \(\ker\omega\) is a simple foliation. Then we obtain an extension
where \((\Sigma,\Omega_{\Sigma})\) is a symplectic groupoid and \(\omega=q^{*}\Omega_{\Sigma}\). If \(\mathcal{G}\) is proper and orientable, then \(\mathcal{K}\) is the trivial \(\mathbb{S}^{1}\)-bundle of groups and we have a \(\mathbb{S}^{1}\)-central extension
Notice that this also makes \(\mathcal{G}\) into a \(\mathbb{S}^{1}\)-principal bundle and we denote the generator of the \(\mathbb{S}^{1}\)-action by \(\partial_{\theta}\in\mathfrak{X}(\mathcal{G})\). A _multiplicative Ehresmann connection_ for such an extension is given by a multiplicative 1-form \(\alpha\in\Omega^{1}(\mathcal{G})\) with the property that:
\[i_{\partial_{\theta}}\alpha=1. \tag{18}\]
We refer the reader to [6, 9] for the theory of such connections and its relation to ordinary principal bundle connections.
It is proved in [6] that a \(\mathbb{S}^{1}\)-central extension of a proper groupoid always admits a multiplicative Ehresmann connection \(\alpha\). Its curvature 2-form is the multiplicative 2-form
\[\Omega:=\mathrm{d}\alpha\in\Omega^{2}(\mathcal{G}).\]
This form is closed and so by (18) it is basic. Hence, we have a multiplicative, closed, 2-form \(\underline{\Omega}\in\Omega^{2}(\Sigma)\) such that:
\[\Omega=q^{*}\underline{\Omega}.\]
Denoting by \(H^{\bullet}_{M}(\Sigma)\) the _multiplicative de Rham cohomology_ of \(\Sigma\rightrightarrows M\), we have:
**Proposition 4.10**.: _Given a \(\mathbb{S}^{1}\)-central extension of a proper Lie groupoid \(\Sigma\)_
_the class of the basic curvature of a multiplicative Ehresmann connection_
\[[\underline{\Omega}]\in H^{2}_{M}(\Sigma)\]
_is independent of the choice of connection._
Proof.: If \(\alpha_{1}\) and \(\alpha_{2}\) are two multiplicative Ehresmann connections then their difference \(\alpha_{1}-\alpha_{2}\) is a basic multiplicative 1-form, i.e., we have
\[\alpha_{1}-\alpha_{2}=q^{*}\beta,\quad\text{with $\beta\in\Omega^{1}(\Sigma)$ multiplicative}.\]
It follows that their basic curvature 2-forms differ by an exact multiplicative form:
\[\underline{\Omega}_{1}-\underline{\Omega}_{2}=q^{*}\mathrm{d}\beta.\]
We call the class \([\underline{\Omega}]\in H^{2}_{M}(\Sigma)\) the _multiplicative Chern class_ of the extension. This class vanishes if and only if the extension admits a flat multiplicative Ehresmann connection.
**Theorem 4.11**.: _Let \((\mathcal{G},\omega)\) be a corank 1, orientable, proper oversymplectic groupoid. If \(\ker\omega\) is a simple foliation, then there exists \(\alpha\in\Omega^{1}(\mathcal{G})\) such that \((\mathcal{G},\omega,\alpha)\) is a cosymplectic groupoid if and only if the corresponding \(\mathbb{S}^{1}\)-central extension has vanishing multiplicative Chern class._
Proof.: If we can complete \(\omega\) to a multiplicative cosymplectic structure \((\omega,\alpha)\) then obviously \(\alpha\) is a flat multiplicative Ehresmann connection.
For the reverse direction, assume that the multiplicative Chern class vanishes so the extension admits a multiplicative Ehresmann connection \(\alpha\). Let \(\dim\mathcal{G}=2n+1\), where \(2n=\dim\Sigma\). From (18) and the fact that \(\omega=q^{*}\Omega_{\Sigma}\), with \(\Omega_{\Sigma}\) non-degenerate, it follows that \(\alpha\wedge\omega^{n}\) is nowhere vanishing. Hence, \((\omega,\alpha)\) is a multiplicative cosymplectic structure.
|
2310.04833 | A Stochastic Analysis of Particle Systems with Pairing | Motivated by a general principle governing regulation mechanisms in
biological cells, we investigate a general interaction scheme between different
populations of particles and specific particles, referred to as agents.
Assuming that each particle follows a random path in the medium, when a
particle and an agent meet, they may bind and form a pair which has some
specific functional properties. Such a pair is also subject to random events
and it splits after some random amount of time. In a stochastic context, using
a Markovian model for the vector of the number of paired particles, and by
taking the total number of particles as a scaling parameter, we study the
asymptotic behavior of the time evolution of the number of paired particles.
Two scenarios are investigated: one with a large but fixed number of agents,
and the other one, the dynamic case, when agents are created at a bounded rate
and may die after some time when they are not paired. A first order limit
theorem is established for the time evolution of the system in both cases. The
proof of an averaging principle of the dynamic case is one of the main
contributions of the paper. Limit theorems for fluctuations are obtained in the
case of a fixed number agents. The impact of dynamical arrivals of agents on
the level of pairing of the system is discussed. | Vincent Fromion, Philippe Robert, Jana Zaherddine | 2023-10-07T14:45:18Z | http://arxiv.org/abs/2310.04833v1 | # A stochastic analysis of particle systems with pairing
###### Abstract.
Motivated by a general principle governing numerous regulation mechanisms in biological cells, we investigate a general interaction scheme between different populations of particles and specific particles, referred to as agents. Assuming that each particle follows a random path in the medium, when a particle and an agent meet, they may bind and form a pair which has some specific functional properties. Such a pair is also subject to random events and it splits after some random amount of time. In a stochastic context, using a Markovian model for the vector of the number of paired particles, and by taking the total number of particles as a scaling parameter, we study the asymptotic behavior of the time evolution of the number of paired particles. Two scenarios are investigated: one with a large but fixed number of agents, and the other one, the dynamic case, when agents are created at a bounded rate and may die after some time when they are not paired. A first order limit theorem is established for the time evolution of the system in both cases. The proof of an averaging principle of the dynamic case is one of the main contributions of the paper. Limit theorems for fluctuations are obtained in the case of a fixed number of agents. The impact of dynamical arrivals of agents on the level of pairing of the system is discussed.
###### Contents
* 1 Introduction
* 2 Stochastic Model
* 3 Fixed Number of Agents
* 4 Dynamical Arrivals
* 5 Biological Background
## 1. Introduction
In this paper we investigate a general mechanism of interaction between different populations of particles and specific particles, referred to as agents, in some environment. Assuming that each of the particles follows a random path in the medium, when a particle and an agent meet, they may form a pair which has a specific functional property in the medium. Such a pair is also subject to random events: it splits after some random amount of time. The efficiency of the pairing mechanism is analyzed through the time evolution of the number of paired particles of each type.
### Motivation
The initial motivation comes from molecular biology, where this is an almost ubiquitous phenomenon occurring in biological cells. It can be (roughly) described as follows: different types of macro-molecules (ribosomes, or polymerases for example), referred to as _particles_, are in charge of producing some of the functional components necessary to the development of the cell (mRNAs, proteins). Specific macro-molecules, referred to as _agents_ in the paper, like small RNAs, have a regulation role in the cell. Agents can pair/bind with particles to block, or to speed-up, their activity. Due to thermal noise, an agent-particle pair splits after some time. The dynamic behavior of the systems investigated is described in terms of binding/unbinding operations of agents and particles. See Section A of the Appendix for a more detailed presentation of these aspects.
### Literature
A typical representation of pairing mechanisms in the literature, written as a chemical reaction, is of the type,
\[\mathcal{Z}+\mathcal{F}_{j}\rightleftharpoons\mathcal{Z}F_{j}\rightharpoonup \mathcal{G}_{j}+\mathcal{Z} \tag{1}\]
where the chemical species are as follows: \(\mathcal{Z}\) is associated to what we call agents (enzymes, small RNAs,...), \(\mathcal{F}_{j}\) is for particles of type \(j\)\(\in\)\(\{1,\ldots,J\}\) (RNAs, polymerases,...). The species \(\mathcal{Z}F_{j}\) is for pairs of \(\mathcal{Z}\) and \(\mathcal{F}_{j}\), and \(\mathcal{G}_{j}\) is for a "product" of type \(j\), which can be \(\mathcal{F}_{j}\). In a deterministic setting this leads to a set of ODEs for a dynamical system \((X_{A}(t))\), \(A\in\{\mathcal{Z},\mathcal{F}_{j},\mathcal{Z}F_{j},\mathcal{G}_{j}\}\); for example, for \((X_{\mathcal{Z}F_{j}}(t))\) it gives
\[\frac{\mathrm{d}}{\mathrm{d}t}X_{\mathcal{Z}F_{j}}(t)=\kappa_{j}^{+}X_{\mathcal{Z}}(t)X_{\mathcal{F}_{j}}(t)-\kappa_{j}^{-}X_{\mathcal{Z}F_{j}}(t) \tag{2}\]
for some constants \(\kappa_{j}^{\pm}\)\(\geq\)0. Note the quadratic term on the right-hand side. Investigations generally focus on the stability of these dynamical systems. See Petrides and Vinnicombe [24], Del Giudice et al. [7] and Jayaprakash and Das [18]. See Section 2 for a brief presentation of this formalism.
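To illustrate this deterministic description, the following minimal sketch integrates, for a single particle type, the closed mass-action system associated with the reversible binding step of Relation (1), whose pair component is Relation (2); the rate constants and the initial concentrations below are arbitrary placeholders, not values taken from the paper.

```python
# Forward-Euler integration of the closed mass-action system for
# Z + F <-> ZF (a single particle type); illustrative values only.
kp, km = 1.0, 0.5          # kappa^+ (binding) and kappa^- (unbinding)
z, f, zf = 1.0, 2.0, 0.0   # X_Z, X_F, X_{ZF} at time 0

dt, T = 1e-3, 20.0
for _ in range(int(T / dt)):
    flux = kp * z * f - km * zf   # right-hand side of Relation (2)
    zf += dt * flux
    z -= dt * flux
    f -= dt * flux

# At equilibrium the quadratic binding term balances the unbinding term:
print("kp*z*f =", kp * z * f, " km*zf =", km * zf)
```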
In a stochastic context, this is represented as a Markov process whose state descriptor is the vector of the number of copies of the different chemical species. Simulations and numerical analysis of the associated Fokker-Planck equations have been used to study these phenomena, see Petrides and Vinnicombe [24].
The technical context is related to the celebrated Michaelis-Menten kinetics. These chemical reactions involve enzyme, substrate and product macro-molecules, whose associated chemical species are denoted respectively as \(\mathcal{E}\), \(\mathcal{S}\) and \(\mathcal{P}\). The chemical reaction
\[\mathcal{E}+\mathcal{S}\rightleftharpoons\mathcal{E}S\rightharpoonup \mathcal{P}+\mathcal{E},\]
has been investigated for some time now. The basic assumption for these models is that there are few copies of chemical species \(\mathcal{E}\) but a large number of copies of substrate, so that the reaction rate is large (for the chemical reaction on the left). In a deterministic setting it leads to a system of non-polynomial ODEs. In a stochastic context, these ODEs can be obtained via the proof of an averaging principle. See Michaelis and Menten [21] and for a general overview Sanft et al. [28] and Cornish-Bowden [5].
Averaging principles also play an important role in our paper. From the mathematical point of view, agents may be seen as playing the role of enzymes in our model; nevertheless, our framework is not really that of Michaelis-Menten, and their number is not fixed in our main model of Section 4. As we will see, the production of agents has a strong impact on the qualitative behavior of the system.
As it can be expected, the quadratic expressions due to pairing mechanisms, like in Relation (2), are at the origin of some technical difficulties in the proof of limit theorems.
### Stochastic Model
There are \(J\) types of particles. For 1\(\leq\)\(j\)\(\leq\)\(J\), \(N_{j}\) is the total number of particles of type \(j\); this quantity is assumed to be fixed. The total number of particles is \(N\)=\(N_{1}\)+\(\cdots\)+\(N_{J}\), which is our scaling parameter. A Markovian stochastic model is considered: each event occurs after an amount of time with an exponential distribution, and the corresponding random variables are assumed to be independent.
There is only one type of agent. An agent and a particle of type \(j\)\(\in\)\(\{1,\ldots,J\}\) bind/pair at rate \(\lambda_{j}\), and, in a reverse operation, such a pair splits into an agent and a particle of type \(j\) at rate \(\eta_{j}\). An agent or a particle which is not paired is said to be _free_.
The variables of interest are
\[(F_{N}(t),Z_{N}(t))\stackrel{{\text{def.}}}{{=}}((F_{N,j}(t),j \text{=}1,\ldots,J)\,,Z_{N}(t))\]
where, for 1\(\leq\)\(j\)\(\leq\)\(J\) and \(t\)\(\geq\)0, \(F_{N,j}(t)\) is the number of free particles of type \(j\), i.e. not paired with an agent, and \((Z_{N}(t))\) is the process for the number of free agents. When the goal of the pairing mechanism is to reduce the activity of the particles, this will be referred to as _sequestration_ of particles; the objective is then to minimize
\[\left(\sum_{j=1}^{J}\frac{F_{N,j}(t)}{N}\right)\]
the process of the fraction of the number of free particles. We analyze the asymptotic behavior, when \(N\) goes to infinity, of the time evolution of the \(J\)-dimensional process \((F_{j,N}(t)/N)\) associated to the free particles. An appropriate timescale for a non-trivial asymptotic evolution when \(N\) goes to infinity has to be determined.
In Fromion et al. [14], a related model of sequestration has been analyzed, to study the regulation of transcription. It also includes additional variables which are not considered in this paper. A component of the stochastic model is a related Markov process but in dimension 1, i.e. for \(J\)=1. As it will be seen, compared to the case \(J\)=1, the multi-dimensional aspect of our model has a significant impact on the scaling properties of the associated stochastic processes.
Two types of models for agents are analyzed.
1. Agents are neither created nor removed: the number of agents is fixed, of the order of \(N\).
2. An agent is created at rate \(\beta\) and, only when it is not paired with a particle, it dies at rate \(\delta\).
Case a) is used to investigate the situation when the environment does not change significantly and when there is already a large number of agents to regulate the system. Two cases are considered. In Section 3.1 the total number of agents is of the order of \(rN\) with \(r\)\(\in\)\((0,1)\): there are many more particles than agents. It is shown that the process \((F_{j,N}(t)/N)\) is converging in distribution to the solution of an ODE. The equilibrium point of this ODE is unique and its coordinates are positive. For this system the number of free particles of type \(j\)\(\in\)\(\{1,\ldots,J\}\) is, of course, of the order of \(N\).
In Section 3.2 the total number of agents is \(N\), the same as the total number of particles. It is shown that, with appropriate initial conditions, the process \((F_{j,N}(t/\sqrt{N})/\sqrt{N})\) is converging in distribution to the solution of an ODE, and a central limit theorem is proved, showing that the fluctuations are of the order of \(\sqrt[4]{N}\). In this case the impact of stochasticity on the pairing mechanism is minimal since there is a fraction of the order of \(1/\sqrt{N}\) of free particles.
Case b) is investigated in Section 4 in the situation when, initially, there are few agents (free or paired) in the system; the goal is to investigate the growth of the number of paired particles. The proof of an averaging principle in this context is challenging for several reasons.
Since a paired agent does not die (it is not degraded), one can expect an asymptotic situation as in Section 3.2 with a negligible fraction of free particles. We show that this is not the case, in fact, formally, the behavior is similar to that of Section 3.1, but on a faster time scale and with important qualitative and technical differences.
If the system starts with few agents, most of the \(N\) particles are initially "free"; all agents created will pair with a free particle right away and will keep doing that, via the successive steps of pairing/splitting, as long as the number of free particles is "large", so that, with high probability, pairing occurs before degradation for agents. Given the rate of creation of agents, the natural timescale to study this problem is \((Nt)\).
It can be expected that the multi-dimensional process
\[\left(\frac{F_{N}(Nt)}{N}\right)=\left(\frac{F_{j,N}(Nt)}{N}\right)\]
converges in distribution to a continuous process reflecting the asymptotic degree of pairing of the system. Due to their large transition rates, the integer-valued processes \((Z_{N}(Nt))\) and \((F_{N}(Nt))\) are "fast" processes. Because of the space scaling, \((F_{N}(Nt)/N)\) is an a priori "slow" process. Following the classical approach in this domain, see Papanicolaou et al. [23] in a stochastic calculus context and Kurtz [20] for its formulation for jump processes, for \(T{>}0\) one has to consider the occupation measure associated to \((Z_{N}(Nt))\), i.e. the functional on non-negative Borelian functions on \([0,T]{\times}\mathbb{N}\),
\[g\longrightarrow\int_{0}^{T}g(s,Z_{N}(Ns))\,\mathrm{d}s.\]
While this approach allows us to derive the results of Section 3.1 for case a), where an averaging principle is proved, it does not work for case b). The sequence of processes \((F_{j,N}(Nt)/N,j{=}1,\ldots,J)\) does in fact _not_ converge in distribution: it is not tight for the topology associated to uniform convergence if the initial state does not converge to some one-dimensional curve of \([0,1]^{J}\). The main convergence result of this case is Theorem 13 of Section 4. It shows that the process associated to the total number of free particles,
\[(\|F_{N}(Nt)\|)\stackrel{{\mathrm{def.}}}{{=}}\left(\sum_{j=1}^ {J}\frac{F_{N,j}(Nt)}{N}\right),\]
converges in distribution to a continuous process. The sequence of \([0,1]^{J}\)-valued processes \((F_{j,N}(Nt)/N,1{\leq}j{\leq}J)\) converges in distribution, but in a weak form, via
its associated occupation measure. It turns out that the process \((\|F_{N}(Nt)\|)\) determines, in some way, the behavior of the coordinates of \((F_{N}(Nt))\). For \(N\) large \((F_{N,j}(Nt))\) can in fact be represented as a curve of \([0,1]^{J}\) determined by \(\|F_{N}(Nt)\|\). In Fromion et al. [14], no such difficulty shows up since \(J\)=1.
Intuitively, it is shown that, in the limit, the number of free particles is of the order of \(N\), as in case a) but for some specific \(r\)\(<\)1. These results stress the impact of dynamical arrivals and departures of agents. In particular the fraction of paired particles is asymptotically strictly less than 1.
Technical difficulties are related to the lack of tightness properties of the process \((F_{j,N}(Nt)/N)\). For this reason the definition of the occupation measure is extended to include also "slow" processes and not only the fast processes as it is classical in the context of averaging principles. As a functional on Borelian functions on \([0,T]\times[0,1]^{J}\times\mathbb{N}\), the occupation measure is expressed as
\[g\longrightarrow\int_{0}^{T}g\left(s,\frac{F_{N}(Ns)}{N},Z_{N}(Ns)\right) \mathrm{d}s.\]
The investigation of the limiting behavior of this sequence of occupation measures is the main topic of Section 4, including the identification of possible limiting points.
The reason for this behavior is essentially the interaction of several fast time scales. At the normal time scale \((t)\), if the components of the vector \(F_{N}(t)\) are already of the order of \(N\), the pairing/splitting events occur at a rate proportional to \(N\). Since the natural time scaling for case b) is sped-up as \((Nt)\), roughly speaking, the pairing/splitting events will be instantaneously at equilibrium, at the first order, at any "time" \(t\) for the current "mass" \(\|F_{N}(Nt)\|\). In particular, if the initial point \(\overline{F}_{N}(0)\) does not converge to the equilibrium associated to the mass \(\|F_{N}(0)\|\), there cannot be convergence in a neighborhood of \(t\)=0; this is the one-dimensional curve mentioned above.
### Outline of the Paper
Section 2 introduces notations and the Markovian process used to investigate pairing mechanisms. Section 3 analyzes the static case when the number of agents is fixed and in Section 4 a stochastic averaging principle is proved when agents are created and degraded. To motivate the design of such stochastic models, Section A of the appendix presents several examples of regulation mechanisms in biological cells. Section B of the appendix is a quick reminder of classical limit results for \(M/M/1\) and \(M/M/\infty\) queues. These queues play an important role in the design of couplings used in the proofs of our limit theorems.
## 2. Stochastic Model
### Definitions and Notations
If \(H\) is a locally compact metric space, \(\mathcal{C}_{c}(H)\) is the space of continuous functions with compact support endowed with the topology of uniform convergence. We denote by \(\mathcal{M}^{+}(H)\) the set of non-negative Radon measures on \(H\) and \(\mathcal{M}_{1}(H)\), the set of probability distributions on \(H\), both spaces are endowed with the weak topology. See Rudin [27]. Throughout the paper convergence in distribution of a sequence of jump processes \((U_{N}(t))\) to a process \((U(t))\) is understood with respect to the topology of uniform convergence on compact sets for cadlag functions. See Chapter 2 of Billingsley [4] for example.
For \(J\)\(\in\)\(\mathbb{N}\), if \(x\)=\((x_{i})\), \(y\)=\((y_{i})\)\(\in\)\(\mathbb{R}^{J}\), define
\[\|x\|\stackrel{{\text{def.}}}{{=}}|x_{1}|+\cdots+|x_{J}|,\quad \langle x,y\rangle=x_{1}y_{1}+\cdots+x_{J}y_{J}, \tag{3}\]
and,
\[\overline{x}{=}\max(x_{j},1{\leq}j{\leq}J)\text{ and }\underline{x}{=}\inf(x_{j},1{ \leq}j{\leq}J). \tag{4}\]
We now introduce the main definitions for our stochastic model. There are \(J\) different types of particles. The total number of particles of type \(j{\in}\{1,\ldots,J\}\) is \(C_{j,N}\), and \(N{=}C_{1,N}{+}\cdots{+}C_{J,N}\), the total number of particles, is a fixed number; it is also our scaling parameter. It is assumed that,
\[\lim_{N{\rightarrow}+\infty}\left(\frac{C_{j,N}}{N}\right)=c{=}(c_{j}) \tag{5}\]
holds, for some \(c{\in}(0,1)^{J}\) such that \(c_{1}{+}c_{2}{+}\cdots{+}c_{J}{=}1\).
### State Space
The state space is
\[\mathcal{S}_{N}{=}\left\{x=(f,z)=((f_{j}),z){\in}\mathbb{N}^{J+1}:f_{j}{\leq} C_{j,N},\forall 1{\leq}j{\leq}J\right\},\]
and, if \(t{\geq}0\),
* for \(1{\leq}j{\leq}J\), \(F_{j,N}(t)\) denotes the number of free particles of type \(j\) at time \(t\) and \(F_{N}(t){=}(F_{j,N}(t),1{\leq}j{\leq}J)\);
* The number of free agents at time \(t\) is \(Z_{N}(t)\);
* The state of the process at time \(t\) is \(X_{N}(t){=}(F_{N}(t),Z_{N}(t)){\in}\mathcal{S}_{N}\).
The number of agents paired with a particle of type \(j\) at time \(t\) is therefore \(S_{j,N}(t){=}C_{j,N}{-}F_{j,N}(t)\). In state \((F_{N}(t),Z_{N}(t))\), the total number of free particles is \(\|F_{N}(t)\|\).
### Transitions
The dynamical behavior of \((X_{N}(t))\) is driven by several types of transitions.
1. A given particle of type \(j\) and a given agent are paired at rate \(\lambda_{j}\);
2. A pair (particle of type \(j\), agent) is split at rate \(\eta_{j}{>}0\) to give a particle of type \(j\) and a free agent;
3. Agents are created at rate \(\beta{\geq}0\) and a _free_ agent dies, is degraded, at rate \(\delta{>}0\). An agent paired to a particle cannot die.
The state process \((X_{N}(t)){=}(F_{N}(t),Z_{N}(t))\) is almost surely a _cadlag function_, i.e. a right-continuous function with left limits at any point of \((0,+\infty)\). It is described as an irreducible Markov process on \(\mathcal{S}_{N}\) whose \(Q\)-matrix \(Q_{F}\) is given by
\[(f,z){=}((f_{j}),z)\longrightarrow(f,z){+}\begin{cases}(-e_{j},-1)&\lambda_{j }f_{j}z,\\ (+e_{j},+1)&\eta_{j}(C_{j,N}{-}f_{j}),\\ (0,+1)&\beta,\\ (0,-1)&\delta z,\end{cases}\]
where \(e_{j}\) is the \(j\)th unit vector of \(\mathbb{N}^{J}\).
Note that the pairing mechanism induces quadratic transition rates in the \(Q\)-matrix.
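To make these dynamics concrete, the following sketch simulates the Markov process \((F_{N}(t),Z_{N}(t))\) with a standard Gillespie scheme, directly from the transition rates of the \(Q\)-matrix above; the parameter values are illustrative placeholders and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder parameters (not values from the paper).
J = 2
C = np.array([60, 40])        # C_{j,N}: total number of particles of each type
lam = np.array([1.0, 0.5])    # lambda_j: pairing rates
eta = np.array([0.8, 0.3])    # eta_j: splitting rates
beta, delta = 2.0, 1.0        # creation / death rates of free agents

def simulate(T):
    """Gillespie simulation of (F_N(t), Z_N(t)) up to time T."""
    t, F, Z = 0.0, C.copy(), 0          # all particles free, no agent at t=0
    path = [(t, F.copy(), Z)]
    while t < T:
        pair = lam * F * Z              # pairing rate, per particle type
        split = eta * (C - F)           # splitting rate, per particle type
        rates = np.concatenate([pair, split, [beta, delta * Z]])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        k = rng.choice(rates.size, p=rates / total)
        if k < J:                       # a particle of type k binds with an agent
            F[k] -= 1; Z -= 1
        elif k < 2 * J:                 # a pair of type k-J splits
            F[k - J] += 1; Z += 1
        elif k == 2 * J:                # a free agent is created
            Z += 1
        else:                           # a free agent dies
            Z -= 1
        path.append((t, F.copy(), Z))
    return path

path = simulate(50.0)
print("final state (F_N, Z_N):", path[-1][1], path[-1][2])
```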
**Definition 1**.: \[c{=}(c_{j}),\,\eta{=}(\eta_{j}),\,\lambda{=}(\lambda_{j}),\quad\rho_{0}{=} \frac{\beta}{\delta}\text{ and }\rho_{j}{=}\frac{\eta_{j}}{\lambda_{j}},j{=}1, \ldots,J.\]
_For \(y{\in}(0,1)\), \(\phi(y)\) is defined as the unique solution of the equation_
\[\sum_{j=1}^{J}\frac{\rho_{j}}{\rho_{j}{+}\phi(y)}c_{j}=y. \tag{6}\]
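Since the left-hand side of (6) decreases from 1 (at \(\phi(y)=0\)) to 0, \(\phi(y)\) can be computed numerically by a simple bisection; a minimal sketch, with placeholder values for \(c\) and \(\rho\), is the following.

```python
def phi(y, c, rho, tol=1e-12):
    """Solve sum_j rho_j*c_j/(rho_j + x) = y for the unique root x > 0."""
    lhs = lambda x: sum(r * cj / (r + x) for r, cj in zip(rho, c))
    lo, hi = 0.0, 1.0
    while lhs(hi) > y:              # enlarge the bracket until lhs(hi) <= y
        hi *= 2.0
    while hi - lo > tol:            # bisection: lhs is decreasing in x
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs(mid) > y else (lo, mid)
    return 0.5 * (lo + hi)

# Placeholder values (not from the paper): c sums to 1, y in (0, 1).
print(phi(0.3, c=[0.6, 0.4], rho=[0.8, 0.6]))
```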
### Stochastic Differential Equations
The process \((X_{N}(t)){=}(F_{N}(t),Z_{N}(t))\) is represented as the solution of a system of SDEs (Stochastic Differential Equations). On the probability space there are 2(\(J{+}1\)) independent Poisson processes on \(\mathbb{R}^{2}_{+}\) with intensity measure \(\mathrm{d}x{\otimes}\,\mathrm{d}t\), \(\mathcal{P}^{+}_{z}\), \(\mathcal{P}^{-}_{z}\), \(\mathcal{P}^{+}_{j}\), \(\mathcal{P}^{-}_{j}\), \(j{=}1,\ldots,J\)). See Rogers and Williams [26] for example. The underlying filtration \((\mathcal{F}_{t})\) is defined by, for \(t{>}0\),
\[\mathcal{F}_{t}=\sigma\left\langle\mathcal{P}^{\pm}_{j/z}([a,b]{\times}[0,s]),\,j{=}1,\ldots,J,\,a\leq b,\,s{\leq}t\right\rangle.\]
In the following, measurability properties are assumed to be with respect to this filtration.
Let \((F_{N}(t),Z_{N}(t))\) be the solution of the SDE, for \(j{=}1,\ldots,J\),
\[\mathrm{d}F_{j,N}(t) =\mathcal{P}^{+}_{j}((0,\eta_{j}(C_{j,N}{-}F_{j,N}(t{-}))),\mathrm{d}t){-}\mathcal{P}^{-}_{j}((0,\lambda_{j}F_{j,N}(t{-})Z_{N}(t{-})),\mathrm{d}t), \tag{7}\] \[\mathrm{d}Z_{N}(t) =\mathcal{P}^{+}_{z}((0,\beta),\mathrm{d}t){-}\mathcal{P}^{-}_{z}((0,\delta Z_{N}(t{-})),\mathrm{d}t){+}\sum_{j=1}^{J}\mathrm{d}F_{j,N}(t), \tag{8}\]
where \(U(t{-})\) denotes the left-limit of the cadlag process \((U(s))\) at \(t{>}0\) and with the usual notation, if \(A{\geq}0\) and \(\mathcal{P}\) is a Poisson point process on \(\mathbb{R}^{2}_{+}\),
\[\mathcal{P}((0,A),\mathrm{d}t)=\int\mathbbm{1}_{\{x\leq A\}}\mathcal{P}( \mathrm{d}x,\mathrm{d}t). \tag{9}\]
By integrating these relations, we obtain, for \(j{=}1,\ldots,J\),
\[F_{j,N}(t)=F_{j,N}(0){+}M_{j,N}(t)\\ +\eta_{j}\int_{0}^{t}\left(C_{j,N}{-}F_{j,N}(s)\right)\mathrm{d}s{ -}\lambda_{j}\int_{0}^{t}F_{j,N}(s)Z_{N}(s)\,\mathrm{d}s, \tag{10}\]
and
\[Z_{N}(t)=Z_{N}(0){+}M_{z,N}(t){+}\beta t{-}\delta\int_{0}^{t}Z_{N}(s)\,\mathrm{d}s\\ +\sum_{j=1}^{J}\left(\eta_{j}\int_{0}^{t}\left(C_{j,N}{-}F_{j,N}(s)\right)\mathrm{d}s{-}\lambda_{j}\int_{0}^{t}F_{j,N}(s)Z_{N}(s)\,\mathrm{d}s\right), \tag{11}\]
where \(\left(M_{j,N}(t)\right)\) and \(\left(M_{z,N}(t)\right)\) are square integrable martingales whose previsible increasing processes are given by
\[\left(\left\langle M_{j,N}\right\rangle(t)\right) =\left(\eta_{j}\int_{0}^{t}\left(C_{j,N}\!-\!F_{j,N}(s)\right)\mathrm{d}s\!+\!\lambda_{j}\int_{0}^{t}F_{j,N}(s)Z_{N}(s)\,\mathrm{d}s\right), \tag{12}\] \[\left(\left\langle M_{z,N}\right\rangle(t)\right) =\left(\beta t\!+\!\delta\int_{0}^{t}Z_{N}(s)\,\mathrm{d}s\!+\!\sum_{j=1}^{J}\left\langle M_{j,N}\right\rangle(t)\right). \tag{13}\]
**Invariant Distribution.** As explained in the introduction, our model can be expressed in the framework of chemical reaction networks (CRN). Since we are mainly interested in the transient behavior of our system, we just give a quick sketch. It is only mentioned as an interesting aspect of our system. See Feinberg [9] for a general introduction on CRNs.
The corresponding chemical reactions are represented as
\[\emptyset\xrightleftharpoons[\delta]{\beta}\mathcal{Z},\qquad\mathcal{Z}+ \mathcal{F}_{j}\xrightleftharpoons[\eta_{j}]{\lambda_{j}}\mathcal{F}Z_{j}, \quad 1\!\leq\!j\!\leq\!J. \tag{14}\]
The associated dynamical system \(\left(\left(f_{j}(t),g_{j}(t)\right),z(t)\right)\) is defined by the ODEs
\[\begin{cases}\dot{f}_{j}(t)=\lambda_{j}f_{j}(t)z(t)\!-\!\eta_{j}g_{j}(t),\quad 1 \!\leq\!j\!\leq\!J,\\ \dot{f}_{j}(t)\!+\!\dot{g}_{j}(t)=0,\quad 1\!\leq\!j\!\leq\!J,\\ \dot{z}(t)=\beta\!-\!\delta z(t).\end{cases}\]
Its fixed point is given by \(\left((u_{j},v_{j}),w\right)\) with
\[w=\rho_{0},\quad u_{j}=\frac{C_{j}^{N}}{1\!+\!\rho_{0}\rho_{j}}=C_{j}^{N}\!- \!v_{j},\quad 1\!\leq\!j\!\leq\!J, \tag{15}\]
with \(\rho_{0}\)=\(\beta/\delta\) and \(\rho_{j}\)=\(\lambda_{j}/\eta_{j}\), for \(1\!\leq\!j\!\leq\!J\).
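Indeed, at the fixed point the pairing and splitting rates balance for each type \(j\): with \(w{=}\rho_{0}\),
\[\lambda_{j}u_{j}\rho_{0}=\eta_{j}v_{j}=\eta_{j}\left(C_{j}^{N}{-}u_{j}\right)\qquad\Longrightarrow\qquad u_{j}=\frac{\eta_{j}C_{j}^{N}}{\eta_{j}{+}\lambda_{j}\rho_{0}}=\frac{C_{j}^{N}}{1{+}\rho_{0}\rho_{j}}.\]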
The characteristics of this CRN are:
* \(m\)=\(2J\)+\(2\) chemical species: \(\mathcal{Z}\), \(\mathcal{F}_{j}\), \(\mathcal{F}Z_{j}\), \(j\)=\(1,\ldots,J\);
* \(\ell\)=\(J\)+\(1\) cycles, these are the _single linkage classes_ of the CRN;
* The range, the dimension of the stoichiometric space, is \(s\)=\(J\)+\(1\).
This is a CRN with deficiency \(\delta\)=\(m-\ell-s\)=\(0\). A standard result, see Anderson et al. [1], gives an explicit expression of the invariant distribution \(\pi_{N}\) of \(\left(X_{N}(t)\right)\) on \(\mathcal{S}_{N}\).
**Proposition 2**.: _The invariant distribution of \(\left(X_{N}(t)\right)\) on \(\mathcal{S}_{N}\) is given by,_
\[\pi(f,s)=\frac{1}{Z_{N}}\frac{w^{z}}{z!}\prod_{j=1}^{J}\frac{u_{j}^{f_{j}}}{f_ {j}!}\frac{v_{j}^{C_{j}^{N}-f_{j}}}{(C_{j}^{N}\!-\!f_{j})!},\quad(f,z)\!=\!((f_ {j}),z)\!\in\!\mathcal{S}_{N},\]
_where \(Z_{N}\) is the normalization constant and \((u_{j})\) and \((v_{j})\) are defined by Relation (15)._
## 3. Fixed Number of Agents
Throughout this section the number of agents \(C_{Z}^{N}\) is fixed, of the order of \(rN\), with \(r{<}1\) in Section 3.1, and is exactly \(N\), the total number of particles, in Section 3.2. There is no creation or degradation of agents; only pairing and splitting mechanisms operate in these cases. As explained in the introduction, the purpose is to understand the behavior of the system when the total number of agents does not change. Section 4 investigates a much more dynamic version of the system.
The state space of the system is \(\mathcal{S}_{N}\overset{\text{def.}}{=}\prod_{j=1}^{J}\{0,\ldots,C_{j}^{N}\}\). For a state \(x{=}(x_{j}){\in}\mathcal{S}_{N}\), the total number of particles paired with an agent is \(N{-}x_{1}{-}\cdots{-}x_{J}\). The associated process in \(\mathcal{S}_{N}\), denoted by \((F_{N}^{r}(t))\), has the Markov property; its \(Q\)-matrix is given by, for \(x{\in}\mathcal{S}_{N}\),
\[x\rightarrow\begin{cases}x{+}e_{j}&\eta_{j}\left(C_{j}^{N}{-}x_{j}\right),\\ x{-}e_{j}&\lambda_{j}x_{j}\left(C_{Z}^{N}{-}(N{-}x_{1}{-}\cdots{-}x_{J}) \right),\end{cases}\]
where \(e_{j}\) is the \(j\)th unit vector of \(\mathbb{N}^{J}\).
### Overloaded Case
In this section the total number of agents is of the order of \(rN\), with \(r{<}1\),
\[\lim_{N\rightarrow+\infty}\frac{C_{Z}^{N}}{N}=r. \tag{16}\]
Since there are not enough agents to handle all particles, it is clear that the number of free particles of type \(j\), \(1{\leq}j{\leq}J\), should be of the order of \(N\). The SDE (10) for \((F_{N}^{r}(t))\) becomes, for \(1{\leq}j{\leq}J\),
\[\text{d}F_{N,j}^{r}(t)=\mathcal{P}_{S_{j}}((0,\eta_{j}(C_{j}^{N}{-}F_{N,j}^{r}(t{-}))),\text{d}t)\\ -\mathcal{P}_{F_{j}}((0,\lambda_{j}F_{N,j}^{r}(t{-})Z_{N}^{r}(t{-})),\text{d}t), \tag{17}\]
where \((Z_{N}^{r}(t))\) is the process of free agents,
\[(Z_{N}^{r}(t))=\left(C_{Z}^{N}{-}\sum_{j=1}^{J}\left(C_{j}^{N}{-}F_{N,j}^{r}( t)\right)\right)=\left(C_{Z}^{N}{+}\|F_{N}^{r}(t)\|{-}N\right). \tag{18}\]
We assume that the initial conditions are such that \(Z_{N}(0){=}z_{0}\), for some fixed \(z_{0}{\in}\mathbb{N}\), and
\[\lim_{N\rightarrow+\infty}\frac{F_{N}^{r}(0)}{N}=\overline{f}_{0}=(\overline{ f}_{0,j}){\in}[0,1]^{J}, \tag{19}\]
such that
\[\sum_{j=1}^{J}\overline{f}_{0,j}=1{-}r. \tag{20}\]
The last condition expresses simply that most of the agents are initially paired with particles. We will see that the number of free agents remains a finite random variable.
**Definition 3**.: _For \(N{>}0\), the scaled process is defined as_
\[\left(\overline{F}_{N}^{r}(t)\right)\overset{\text{def.}}{=}\left(\frac{F_{N, j}^{r}(t)}{N},j{=}1,\ldots,J\right). \tag{21}\]
_If \(g\) is non-negative Borelian function on \(\mathbb{R}_{+}\times\mathbb{N}\), we define the occupation measure_
\[\langle\Lambda_{N}^{r},g\rangle\overset{\text{def.}}{=}\int_{\mathbb{R}_{+}}g \left(s,Z_{N}^{r}(s)\right)\text{d}s. \tag{22}\]
Since \(F_{N,j}^{r}(t){\leq}C_{j}^{N}{\leq}N\), \(1{\leq}j{\leq}J\), for \(t{\geq}0\), the state space of the process \((\overline{F}_{N}^{r}(t))\) is included in \([0,1]^{J}\).
**Lemma 4**.: _If \(N\) is sufficiently large, there exists a coupling of the process \((Z_{N}^{r}(t))\) with \((L(Nt))\), where \((L(t))\) is an \(M/M/\infty\) queue with input rate \(\overline{\eta}\) and service rate \(\underline{\lambda}r/2\), with_
\[\overline{\eta}=\max_{j}\eta_{j},\text{ and }\underline{\lambda}=\min_{j} \lambda_{j},\]
_such that \(L(0)\)=\(z_{0}\) and \(Z_{N}^{r}(t)\)\(\leq\)\(L(Nt)\) holds for all \(t\)\(\geq\)\(0\)._
See Section B.2 of the appendix on the \(M/M/\infty\) queue.
Proof.: This is a simple consequence of the fact that if \(Z_{N}^{r}(t)\)=\(z\)\(\in\)\(\mathbb{N}\), the rate at which there is a jump of size \(+1\), resp. \(-1\), is
\[\sum_{j=1}^{J}\eta_{j}\left(C_{j}^{N}\!-\!F_{N,j}^{r}(t)\right)\leq\overline{\eta}\sum_{j=1}^{J}C_{j}^{N}=\overline{\eta}N,\]
resp.
\[\sum_{j=1}^{J}\lambda_{j}F_{N,j}^{r}(t)\geq\underline{\lambda}(N\!-\!C_{Z}^{N})\geq\underline{\lambda}\left(1\!-\!\varepsilon\right)N,\]
for some \(\varepsilon\)\(\in\)\((0,1)\) if \(N\) is large enough. It is then straightforward to construct the desired coupling.
The integration of the SDE (17) gives the relation
\[\overline{F}_{j,N}^{r}(t)=\overline{F}_{j,N}^{r}(0)+ M_{j,N}^{r}(t)\] \[-\lambda_{j}\int_{0}^{t}\overline{F}_{N,j}^{r}(s)Z_{N}^{r}(s)\, \mathrm{d}s+\eta_{j}\int_{0}^{t}\left(\frac{C_{j}^{N}}{N}\!-\!\overline{F}_{N,j}^{r}(s)\right)\mathrm{d}s, \tag{23}\]
The process \((M_{j,N}^{r}(t))\) is a martingale whose previsible increasing process is
\[\left(\left\langle M_{j,N}^{r}\right\rangle(t)\right)=\left(\frac{\lambda_{j} }{N}\int_{0}^{t}\overline{F}_{j,N}^{r}(s)Z_{N}^{r}(s)\,\mathrm{d}s+\frac{\eta _{j}}{N}\int_{0}^{t}\left(\frac{C_{j}^{N}}{N}\!-\!\overline{F}_{N,j}^{r}(s) \right)\mathrm{d}s\right). \tag{24}\]
**Proposition 5**.: _Under the assumptions (5), (19), and (20) for the initial state, the convergence in distribution_
\[\lim_{N\to+\infty}\left(\frac{F_{N}^{r}(t)}{N}\right)=(f^{r}(t))=(f_{j}^{r}(t )),\]
_holds, where \((f^{r}(t))\) is the solution of the ODEs, for \(1{\leq}j{\leq}J\) and \(t{>}0\),_
\[\frac{\mathrm{d}}{\mathrm{d}t}f_{j}^{r}(t)=\eta_{j}\left(c_{j}{-}f_{j}^{r}(t)\right)-\lambda_{j}f_{j}^{r}(t)\frac{\left\langle\eta,c{-}f^{r}(t)\right\rangle}{\left\langle\lambda,f^{r}(t)\right\rangle}, \tag{25}\]
_with the notations of Definition 1._
Proof.: By using the notations of Lemma 4 and the ergodic theorem for positive recurrent Markov processes, it is not difficult to prove that the sequence of processes
\[\left(\int_{0}^{t}L(Ns)\,\mathrm{d}s\right)\]
is tight with the criterion of the modulus of continuity, see Theorem 7.3 of Billingsley [4], and that its limiting point is necessarily \((\overline{\eta}/\underline{\lambda}\cdot t)\).
Since \(F_{N,j}^{r}(t)\)\(\leq\)\(C_{j}^{N}\)\(\leq\)\(N\), for \(1\)\(\leq\)\(j\)\(\leq\)\(J\) and \(t\)\(\geq\)\(0\), with Relation (24), we obtain therefore that the process \((\left\langle M_{j,N}^{r}\right\rangle(t))\) is converging in distribution to \(0\), Doobs' Inequality gives that the same result holds for the martingale \((M_{j,N}^{r}(t))\).
For \(T{>}0\), with Relation (23), the modulus of continuity of \((\overline{F}^{r}_{j,N}(t))\) on the time interval \([0,T]\) is
\[\omega_{F_{N},T}(\delta)\stackrel{{\text{def.}}}{{= }}\sup_{\begin{subarray}{c}s,t\leq T\\ |s-\bar{t}|\leq\delta\end{subarray}}\left|\overline{F}^{r}_{j,N}(t)){-} \overline{F}^{r}_{j,N}(s)\right|\\ \leq\sup_{s\leq T}|M^{r}_{j,N}(s)|+\lambda_{j}\sup_{\begin{subarray} {c}s\leq t\leq T\\ |s-t|\leq\delta\end{subarray}}\int_{s}^{t}L(Nu)\,\mathrm{d}u+\delta\eta_{j} \frac{C_{j}^{N}}{N}.\]
Again with Theorem 7.3 of Billingsley [4], we deduce that the sequence of processes \((\overline{F}^{r}_{N,j}(t))\) is tight.
For \(K{>}0\),
\[\mathbb{E}(\Lambda^{r}_{N}([0,T]{\times}[K,+\infty)))\leq\int_{0}^{T} \mathbb{P}\left(L(Ns){\geq}K\right)\mathrm{d}s=\frac{1}{N}\int_{0}^{NT} \mathbb{P}(L(s){\geq}K)\,\mathrm{d}s,\]
since \((L(s))\) converges in distribution to a Poisson distribution, see Section B.2 of the appendix, the last term can be made arbitrarily small for \(K\) sufficiently large. Lemma 1.3 of Kurtz [20] gives the tightness of the sequence of random measures \((\Lambda^{r}_{N})\) and any of its limiting points \(\Lambda^{r}_{\infty}\) can be represented as,
\[\langle\Lambda^{r}_{\infty},f\rangle=\int_{\mathbb{R}_{+}\times[0,1]^{J}\times \mathbb{N}}f\left(s,x\right)\mu_{s}(\mathrm{d}x)\,\mathrm{d}s,\]
if \(f\) is a non-negative Borelian function on \([0,T]{\times}\mathbb{N}\), where \((\mu_{s})\) is an optional process on \(\mathcal{M}_{1}(\mathbb{N})\).
Hence the sequence of random variables \(((\overline{F}^{r}_{N}(t)),\Lambda^{r}_{N})\) is tight; we denote by \(((f^{r}(t)),\Lambda^{r}_{\infty})\) one of its limiting points. Proposition 19 establishes a similar result in a more difficult technical framework, where the analogue of the sequence of processes \((\overline{F}^{r}_{N}(t))\) is _not_ tight. For this reason, we skip the proof of the fact that, in the above representation of \(\Lambda^{r}_{\infty}\), \(\mu_{s}\) can be expressed as a Poisson distribution with parameter
\[\frac{\langle\eta,c{-}f^{r}(s)\rangle}{\langle\lambda,f^{r}(s)\rangle},\]
and that \((f^{r}(t))\) satisfies the ODE (25). The proposition is proved.
**Corollary 6**.: _With the notations of Proposition 5, the equilibrium point \(f^{r}_{\infty}\) of \((f^{r}(t))\) is given by_
\[f^{r}_{\infty}=\left(\frac{\eta_{j}c_{j}}{\eta_{j}{+}\lambda_{j}h_{\infty}} \right),\]
_where \(h_{\infty}{=}\phi(1{-}r)\) and \(\phi\) is defined by Relation (6)._
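As a numerical illustration of Corollary 6 (an addition to the text, with arbitrary parameter values), the fixed point \(h_{\infty}{=}\phi(1{-}r)\) can be computed by bisection — using the characterization of \(\phi\) recalled in Proposition 23 below, since Relation (6) is not reproduced in this excerpt — and an Euler scheme for the ODE of Proposition 5 can be used to check the convergence to \(f^{r}_{\infty}\).

```python
import numpy as np

# Arbitrary toy parameters: c_j = C_j^N / N, rho_j = eta_j / lambda_j.
eta, lam = np.array([1.0, 2.0]), np.array([1.0, 0.5])
c, r = np.array([0.6, 0.4]), 0.4             # sum(c) = 1, fraction of agents r < 1
rho = eta / lam

def phi(y):
    """Unique root h of sum_j rho_j c_j / (rho_j + h) = y, by bisection."""
    lo, hi = 1e-12, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.sum(rho * c / (rho + mid)) > y else (lo, mid)
    return 0.5 * (lo + hi)

h_inf = phi(1.0 - r)
f_inf = eta * c / (eta + lam * h_inf)        # equilibrium point of Corollary 6
print("h_inf =", h_inf, " f_inf =", f_inf, " sum(f_inf) =", f_inf.sum())  # sum = 1 - r

# Euler scheme for the fluid ODE of Proposition 5, started from a compatible point.
f, dt = np.array([0.5, 0.1]), 1e-3           # sum(f) = 1 - r
for _ in range(200_000):
    z = np.dot(eta, c - f) / np.dot(lam, f)  # quasi-stationary mean number of free agents
    f = f + dt * (eta * (c - f) - lam * f * z)
print("f(T) =", f)                           # should be close to f_inf
```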
### Critical Case
The number of agents is exactly \(N\), the total number of particles, and there is still no creation or degradation of agents. We prove that the process of the number of free particles of type \(j{\in}\{1,\ldots,J\}\), \((F^{1}_{N,j}(t))\), is of the order of \(\sqrt{N}\), with fluctuations of the order of \(\sqrt[4]{N}\). See Theorems 9 and 10.
Note that, for \(t{\geq}0\), the total number of free agents at time \(t\) is
\[N-\sum_{j=1}^{J}\left(C^{N}_{j}{-}F^{1}_{N,j}(t)\right)=\sum_{j=1}^{J}F^{1}_{N,j}(t)=\|F^{1}_{N}(t)\|.\]
The \(Q\)-matrix \(Q_{f}\) of \((F^{1}_{N,j}(t))\) is thus given by, for \(x{\in}{\mathcal{S}}_{N}\),
\[x\to\begin{cases}x{+}e_{j}&\eta_{j}\left(C^{N}_{j}{-}x_{j}\right),\\ x{-}e_{j}&\lambda_{j}x_{j}\|x\|.\end{cases}\]
**Lemma 7**.: _If, for \(\eta\), \(\lambda{>}0\) and \(N{\geq}1\), \((X_{N}(t))\) is the solution of the SDE,_
\[\mathrm{d}X_{N}(t)={\mathcal{P}}_{S_{1}}((0,\eta N),\mathrm{d}t){-}{\mathcal{ P}}_{F_{1}}\left(\left(0,\lambda X_{N}(t{-})^{2}\right),\mathrm{d}t\right),\]
_with \(X_{N}(0){=}0\), then for any \(T{\geq}0\), there exists \(K_{1}{>}0\) such that,_
\[\lim_{N\to+\infty}{\mathbb{P}}\left(\sup_{t\leq NT}\frac{X_{N}(t)}{\sqrt{N}} \geq K_{1}\right)=0,\]
_and_
\[\sup_{t\geq 0}{\mathbb{E}}\left(X_{N}(t)^{2}\right)<+\infty.\]
Proof.: We fix \(K_{0}\) such that \(\lambda K_{0}^{2}{>}\eta\). We define the process \((Y(t))\) by the SDE
\[\mathrm{d}Y(t)={\mathcal{P}}_{S_{1}}((0,\eta),\mathrm{d}t){-}\mathbb{1}_{\{Y( t){-}>0\}}{\mathcal{P}}_{F_{1}}((0,\lambda K_{0}^{2}),\mathrm{d}t),\]
with \(Y(0){=}0\). As in the proof of Proposition 8, by induction on the successive jumps of \((X_{N}(t))\), it is easy to show that the relation
\[X_{N}(t){\leq}K_{0}\sqrt{N}{+}Y(Nt)\]
holds almost surely for all \(t{>}0\). The process \((Y(t))\) is a reflected random walk on \({\mathbb{N}}\); it is usually associated with the \(M/M/1\) queue. See Chapter 5 of Robert [25]. Proposition 5.11 of this reference gives that if \(T_{A}\) is the hitting time of \(A\) by \((Y(t))\) then the random variable
\[\left(\frac{\eta}{\lambda K_{0}^{2}}\right)^{A}T_{A}\]
is converging in distribution to an exponential random variable when \(A\) goes to infinity. This shows in particular that \((T_{A}/A^{2})\) is converging in distribution to infinity, hence, for any \(T{>}0\),
\[\lim_{N\to+\infty}{\mathbb{P}}\left(\sup_{t\leq T}\frac{Y(tN)}{\sqrt{N}}\geq 1 \right)=\lim_{N\to+\infty}{\mathbb{P}}\left(T_{\lceil\sqrt{N}\rceil}{\leq} TN\right)=0.\]
This gives the first part of the lemma. We conclude the proof by setting \(K_{1}{=}K_{0}{+}1\) and remarking that the invariant distribution of \((Y(t))\) is a geometric distribution with parameter \(\eta/(\lambda K_{0}^{2})\) and that, with a simple coupling argument, the mapping \(t{\to}{\mathbb{E}}(Y(t)^{2})\) is non-decreasing.
The next result shows that all coordinates of \((F^{1}_{N}(t))\) are at most of the order of \(\sqrt{N}\) very quickly independently of the initial point. Theorem 9 completes this result by showing that the order of magnitude of its coordinates is exactly \(\sqrt{N}\).
**Proposition 8** (Coupling).: _For all \(j{\in}\{1,\ldots,J\}\),_
\[F^{1}_{N,j}(t)\leq F^{1}_{N,j}(0){+}X_{N,j}(t),\quad t{\geq}0, \tag{26}\]
_where the \((X_{N,j}(t))\) are the solutions of the SDEs_
\[\mathrm{d}X_{N,j}(t)={\mathcal{P}}_{S_{j}}((0,\overline{\eta}N),\mathrm{d}t){ -}{\mathcal{P}}_{F_{j}}\left(\left(0,\underline{\lambda}X_{N,j}(t{-})^{2} \right),\mathrm{d}t\right),\quad 1{\leq}j{\leq}J, \tag{27}\]
_with \((X_{N,j}(0)){=}(0)\), where \(\overline{\eta}\) and \(\underline{\lambda}\) are defined by Relation (4)._
_There exists \(\ell_{0}{>}0\) such that if_
\[\tau_{N}\stackrel{{\text{\rm def.}}}{{=}}\inf\left\{t{>}0:F^{1}_{N,j} (t){\leq}\left[\ell_{0}\sqrt{N}\right],\forall j{\in}\{1,\ldots,J\}\right\},\]
_then_
\[\sup_{N\geq 1}\sup_{x\in\mathcal{S}_{N}}\mathbb{E}_{x}(\tau_{N})<+\infty.\]
Proof.: The \(Q\)-matrix \(Q_{X}\) of the Markov process \((X_{N,j}(t))\) defined by the SDEs (27) is
\[x\to\begin{cases}x{+}e_{j}&\overline{\eta}N,\\ x{-}e_{j}&\underline{\lambda}x_{j}^{2},\end{cases}\]
clearly \(q_{f}(x,x{+}e_{j}){\leq}q_{X}(x,x{+}e_{j})\) and \(q_{f}(x,x{-}e_{j}){\geq}q_{X}(x,x{-}e_{j})\).
A simple coupling, by induction on the successive jumps of \((F^{1}_{N,j}(t))\), gives that the relation
\[F^{1}_{N,j}(t)\leq F^{1}_{N,j}(0){+}X_{N,j}(t)\]
holds for all \(t{\geq}0\).
To prove the last assertion, in view of Relation (26), it is enough to prove it for the "maximal" initial state, i.e. \((F^{1}_{N,j}(0)){=}(C^{N}_{j})\). If, for \(A{>}0\), \(\|x\|{>}JA\sqrt{N}\), then, if \(g(x){=}\|x\|\), for \(x{\in}{\mathbb{N}}^{J}\),
\[Q_{f}(g)(x)\leq J\overline{\eta}N{-}\underline{\lambda}J^{2}A^{2}N.\]
If we choose \(\ell{=}JA\) such that \(\gamma{=}\underline{\lambda}\ell^{2}{-}J\overline{\eta}{>}0\), by using Proposition 8.14 and Theorem 8.13 of Robert [25], we obtain
\[\mathbb{E}(\tau_{N})\leq\frac{g(F^{1}_{N}(0))}{N\gamma}{=}\frac{1}{\gamma}.\]
The proposition is proved.
**Theorem 9** (Law of Large Numbers).: _If_
\[\lim_{N\to+\infty}\frac{F^{1}_{N}(0)}{\sqrt{N}}=\overline{f}^{1}_{0}{=}\left( \overline{f}^{1}_{0,j}\right),\text{ and }\left(\overline{F}_{N}(t)\right)\stackrel{{\text{\rm def.}}}{{= }}\left(\frac{F^{1}_{N,j}\left(t/\sqrt{N}\right)}{\sqrt{N}}\right),\]
_then the sequence of processes \(\left(\overline{F}_{N}(t)\right)\) is converging in distribution to the solution \((\overline{f}^{1}(t)){=}(\overline{f}^{1}_{j}(t))\) of the ODE,_
\[\left(\overline{f}^{1}_{j}\right)^{\prime}(t)=c_{j}\eta_{j}{-}\lambda_{j} \overline{f}^{1}_{j}(t)\left\|\overline{f}^{1}(t)\right\|, \tag{28}\]
_with \(\overline{f}^{1}(0){=}\overline{f}^{1}_{0}\)._
_The equilibrium point of the ODE (28) is given by_
\[\overline{f}^{1}_{j,\infty}=\left(c_{j}\rho_{j}\left/\sqrt{c_{1}\rho_{1}{+} \cdots{+}c_{J}\rho_{J}}\right.\right), \tag{29}\]
_where \((\rho_{j})\) is given by Definition 1._
Proof.: By integration of the SDE satisfied by \((F^{1}_{N}(t))\), we obtain, for \(t{\geq}0\),
\[\overline{F}_{N,j}(t)=\overline{F}_{N,j}(0){+}M^{0}_{N,j}(t)\\ +\eta_{j}\int_{0}^{t}\left(\frac{C^{N}_{j}}{N}{-}\frac{\overline{F }_{N,j}(s)}{\sqrt{N}}\right)\mathrm{d}s{-}\lambda_{j}\int_{0}^{t}\overline{F} _{N,j}(s)\sum_{k=1}^{J}\overline{F}_{N,k}(s)\,\mathrm{d}s, \tag{30}\]
where \((M_{N}^{0}(t))\)=\((M_{N,j}^{0}(t),1{\leq}j{\leq}J)\) is the martingale defined by, for \(1{\leq}j{\leq}J\),
\[M_{N,j}^{0}(t)\stackrel{{\text{\rm def.}}}{{=}}\] \[\frac{1}{\sqrt{N}}\int_{0}^{t/\sqrt{N}}\left[\mathcal{P}_{S_{j}} \left((0,\eta_{j}(C_{j}^{N}{-}F_{N,j}^{1}(s{-}))),\mathrm{d}s\right){-}\eta_{j }(C_{j}^{N}{-}F_{N,j}^{1}(s))\,\mathrm{d}s\right]\] \[-\frac{1}{\sqrt{N}}\int_{0}^{t/\sqrt{N}}\!\!\left[\mathcal{P}_{F_ {j}}\left(\left(0,\lambda_{j}F_{N,j}^{1}(s{-})\sum_{k=1}^{J}\overline{F}_{N,k} (s{-})\right),\mathrm{d}s\right)\right.\] \[\left.-\lambda_{j}F_{N,j}^{1}(s)\sum_{k=1}^{J}\overline{F}_{N,k} (s)\,\mathrm{d}s\right]. \tag{31}\]
Its previsible increasing process is given by
\[\left(\left\langle M_{N,j}^{0}\right\rangle(t)\right)=\left( \frac{\eta_{j}}{\sqrt{N}}\int_{0}^{t}\left(\frac{C_{j}^{N}}{N}{-}\frac{ \overline{F}_{N,j}(s)}{\sqrt{N}}\right)\mathrm{d}s\right.\\ \left.+\frac{\lambda_{j}}{\sqrt{N}}\int_{0}^{t}\overline{F}_{N,j} (s)\sum_{k=1}^{J}\overline{F}_{N,k}(s)\,\mathrm{d}s\right), \tag{32}\]
and \(\left\langle M_{N,j}^{0},M_{N,k}^{0}\right\rangle(t)\)=0, for \(1{\leq}j{\neq}k{\leq}J\). Lemma 7 shows the convergence
\[\lim_{N\rightarrow+\infty}\left(\mathbb{E}\left(\left\langle M_{N,j}^{0},M_{N,k}^{0}\right\rangle(t)\right),1{\leq}j,k{\leq}J\right)=0,\]
and, with Doob's Inequality, that the martingale \((M_{N}^{0}(t))\) converges to \(0\); it also shows that, for the convergence in distribution,
\[\lim_{N\rightarrow+\infty}\left(\frac{\overline{F}_{N,j}(t)}{\sqrt{N}}\right) =0.\]
Standard arguments, using the criterion of the modulus of continuity, see Theorem 7.3 of Billingsley [4] for example, give that the sequence of processes \((\overline{F}_{N}(t))\) is tight and that any limiting point \((\overline{f}^{1}(t))\)=\((\overline{f}_{j}^{1}(t))\) satisfies the identity
\[\overline{f}_{j}^{1}(t)=\overline{f}_{j}^{1}(0){+}\eta_{j}c_{j}t{-}\lambda_{j }\int_{0}^{t}\overline{f}_{j}^{1}(s)\sum_{k=1}^{J}\overline{f}_{k}^{1}(s)\, \mathrm{d}s. \tag{33}\]
The theorem is proved.
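A quick numerical sanity check of Theorem 9 (an added illustration, with arbitrary toy rates): integrating the ODE (28) with an Euler scheme and comparing with the explicit equilibrium (29).

```python
import numpy as np

eta, lam = np.array([1.0, 2.0]), np.array([1.0, 0.5])
c = np.array([0.6, 0.4])
rho = eta / lam

# Explicit equilibrium (29).
f_eq = c * rho / np.sqrt(np.sum(c * rho))

# Euler scheme for the ODE (28): f_j'(t) = c_j eta_j - lam_j f_j(t) ||f(t)||.
f, dt = np.array([2.0, 0.1]), 1e-3
for _ in range(100_000):
    f = f + dt * (c * eta - lam * f * f.sum())

print("equilibrium (29):", f_eq)
print("Euler at T = 100:", f)     # the two vectors should agree
```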
The fluctuations of \((F_{N}^{1}(t))\) on the timescale \((t/\sqrt{N})\) are now developed in the following theorem.
**Theorem 10** (Central Limit Theorem).: _Under the assumption on the initial state of Theorem 9, if_
\[\left(\widehat{F}_{N}^{1}(t)\right)=\left(\widehat{F}_{N,j}^{1}(t)\right) \stackrel{{\text{\rm def.}}}{{=}}\left(\frac{F_{N,j}^{1}\left(t/ \sqrt{N}\right){-}\sqrt{N}\,\overline{f}_{j}^{1}(t)}{\sqrt[4]{N}}\right),\]
_where \(\left(\overline{f}^{1}(t)\right)\) is defined by Relation (28) and_
\[\lim_{N\rightarrow+\infty}\widehat{F}_{N}^{1}(0)=\widehat{f}_{0}^{1}\in \mathbb{R}^{J},\]
_then the sequence of processes \((\widehat{F}^{1}_{N}(t))\) is converging in distribution to \((\widehat{F}^{1}(t))\), the solution of the SDE_
\[\mathrm{d}\widehat{F}^{1}_{j}(t)=\sqrt{-\left(\overline{f}^{1}_{j} \right)^{\prime}(t){+}2\eta_{j}c_{j}}\ \mathrm{d}B_{j}(t)\\ -\lambda_{j}\left(\widehat{F}^{1}_{j}(t)\left\|\overline{f}^{1}( t)\right\|{+}\overline{f}^{1}_{j}(t)\left\|\widehat{F}^{1}(t)\right\|\right) \mathrm{d}t, \tag{34}\]
_with \(\widehat{F}^{1}(0){=}\widehat{f}^{1}_{0}\), where \((B_{j}(t))\) is the standard Brownian motion in \(\mathbb{R}^{J}\)._
Proof.: Relations (30) and (33) give the identity,
\[\widehat{F}^{1}_{N,j}(t)=\sqrt[4]{N}\left(\overline{F}^{1}_{N,j}( t){-}\overline{f}^{1}_{j}(t)\right)=\widehat{F}^{1}_{N,j}(0){+}\sqrt[4]{N}M^{0}_{N,j}(t)\\ +\eta_{j}\frac{C^{N}_{j}{-}c_{j}N}{N^{3/4}}{t-}\eta_{j}\int_{0}^ {t}\frac{\overline{F}^{1}_{N,j}(s)}{\sqrt[4]{N}}\,\mathrm{d}s\\ -\lambda_{j}\int_{0}^{t}\widehat{F}^{1}_{N,j}(s)\sum_{k=1}^{J} \overline{F}^{1}_{N,k}(s)\,\mathrm{d}s{-}\lambda_{j}\int_{0}^{t}\overline{F}^{ 1}_{N,j}(s)\sum_{k=1}^{J}\widehat{F}^{1}_{N,k}(s)\,\mathrm{d}s, \tag{35}\]
and, with Relation (32), for \(1{\leq}j{\leq}J\),
\[\left(\left\langle\sqrt[4]{N}M^{0}_{N,j}\right\rangle(t)\right)= \left(\eta_{j}\int_{0}^{t}\left(\frac{C^{N}_{j}}{N}{-}\frac{\overline{F}^{1}_{ N,j}(s)}{\sqrt{N}}\right)\mathrm{d}s\\ +\lambda_{j}\int_{0}^{t}\overline{F}^{1}_{N,j}(s)\sum_{k=1}^{J} \overline{F}^{1}_{N,k}(s)\,\mathrm{d}s\right).\]
From Lemma 7 and Theorem 9, we obtain that, for the convergence in distribution, the relation
\[\lim_{N\to+\infty}\left(\left\langle\sqrt[4]{N}M^{0}_{N,j}\right\rangle(t) \right)=\left(\eta_{j}c_{j}t{+}\lambda_{j}\int_{0}^{t}\overline{f}^{1}_{j}(s) \sum_{k=1}^{J}\overline{f}^{1}_{k}(s)\,\mathrm{d}s\right)\\ =\left(\overline{f}^{1}_{j}(0){-}\overline{f}^{1}_{j}(t){+}2\eta _{j}c_{j}t\right),\]
holds, by Relation (33). Recall that
\[\left(\left\langle\sqrt[4]{N}M^{0}_{N,j},\sqrt[4]{N}M^{0}_{N,k}\right\rangle(t )\right){=}0\]
holds for \(1{\leq}j{\neq}k{\leq}J\). Theorem 1.4 page 339 of Ethier and Kurtz [8] shows that the sequence of martingales \((\sqrt[4]{N}M^{0}_{N}(t))\) converges in distribution to the process
\[\left(\int_{0}^{t}\sqrt{-(\overline{f}^{1}_{j})^{\prime}(s){+}2\eta_{j}c_{j}} B_{j}(\mathrm{d}s)\right),\]
where \((B_{j}(t))\) is a standard Brownian motion on \(\mathbb{R}^{J}\). Using again Lemma 7, we have
\[\lim_{N\to+\infty}\left(\int_{0}^{t}\frac{\overline{F}^{1}_{N,j}(s)}{\sqrt[4] {N}}\,\mathrm{d}s\right)=(0).\]
The rest of the proof is standard, first by showing the tightness of \((\widehat{F}^{1}_{N}(t))\) and then by identifying any limiting point as the solution of the SDE (34). See the proof of Theorem 6.14 of [25] for example. The theorem is proved.
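As an illustration of Theorem 10 (an addition to the text, with arbitrary toy parameters), one sample path of the limiting fluctuation process can be generated with an Euler–Maruyama scheme; note that, by the ODE (28), the diffusion coefficient \(\sqrt{-(\overline{f}^{1}_{j})^{\prime}(t)+2\eta_{j}c_{j}}\) equals \(\sqrt{\eta_{j}c_{j}+\lambda_{j}\overline{f}^{1}_{j}(t)\|\overline{f}^{1}(t)\|}\).

```python
import numpy as np

rng = np.random.default_rng(0)
eta, lam = np.array([1.0, 2.0]), np.array([1.0, 0.5])
c = np.array([0.6, 0.4])

# Joint Euler / Euler-Maruyama scheme for the fluid limit (28) and the SDE (34).
# Here ||x|| stands for the sum of the coordinates of x.
f, f_hat = np.array([0.5, 0.5]), np.array([0.0, 0.0])
dt = 1e-3
for _ in range(20_000):
    sigma = np.sqrt(eta * c + lam * f * f.sum())      # diffusion coefficient of (34)
    dB = rng.normal(0.0, np.sqrt(dt), size=f.shape)
    drift_hat = -lam * (f_hat * f.sum() + f * f_hat.sum())
    f_hat = f_hat + dt * drift_hat + sigma * dB
    f = f + dt * (c * eta - lam * f * f.sum())
print("fluid limit at T       :", f)
print("fluctuation sample at T:", f_hat)
```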
The following proposition shows that the invariant distribution of the Markov process \((F^{1}_{N}(t))\) has in fact a simple expression. This is a consequence of Proposition 2; the reversibility property is in fact the additional (simple) result.
**Proposition 11** (Invariant Distribution).: _The Markov process \((F^{1}_{N}(t))\) is reversible, and its invariant distribution \(\pi_{N}\) is given by_
\[\pi_{N}(x)=\frac{1}{Z_{N}}\frac{1}{\|x\|!}\prod_{j=1}^{J}\rho_{j}^{x_{j}}\frac{ C_{j}^{N}!}{(C_{j}^{N}-x_{j})!x_{j}!},\qquad x{\in}{\mathcal{S}}_{N}, \tag{36}\]
_where \(Z_{N}\) is a normalizing constant._
A version of Theorems 9 and 10 could probably be obtained via a saddle-point analysis of the normalizing constant \(Z_{N}\). This is not done in this paper.
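The product form (36) can be checked directly on a very small instance (an added illustration, not part of the original text): the sketch below enumerates the state space for a toy choice of \(C^{N}\), normalizes the weights of Relation (36), and verifies stationarity with respect to the \(Q\)-matrix of \((F^{1}_{N}(t))\).

```python
import itertools
import math
import numpy as np

# Tiny instance of the critical case: J = 2, C^N = (2, 3), hence N = 5 agents.
C = np.array([2, 3])
eta, lam = np.array([1.0, 2.0]), np.array([1.0, 0.5])
rho = eta / lam

states = list(itertools.product(*[range(int(Cj) + 1) for Cj in C]))
index = {x: i for i, x in enumerate(states)}

def weight(x):
    """Unnormalized weight of Relation (36)."""
    w = 1.0 / math.factorial(sum(x))
    for j, xj in enumerate(x):
        w *= rho[j] ** xj * math.comb(int(C[j]), xj)
    return w

pi = np.array([weight(x) for x in states])
pi /= pi.sum()                                 # normalisation recovers 1 / Z_N

# Generator of (F^1_N(t)): x -> x + e_j at rate eta_j (C_j - x_j),
#                          x -> x - e_j at rate lam_j x_j ||x||.
Q = np.zeros((len(states), len(states)))
for x in states:
    i = index[x]
    for j in range(len(C)):
        if x[j] < C[j]:
            y = list(x); y[j] += 1
            Q[i, index[tuple(y)]] += eta[j] * (C[j] - x[j])
        if x[j] > 0:
            y = list(x); y[j] -= 1
            Q[i, index[tuple(y)]] += lam[j] * x[j] * sum(x)
    Q[i, i] = -Q[i].sum()

print("max |pi Q| =", np.abs(pi @ Q).max())    # should be 0 up to rounding errors
```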
## 4. Dynamical Arrivals
If the system starts with few agents, so that most of the \(N\) particles are "free", then when an agent is created it is paired with a free particle right away, at a rate proportional to \(N\). This will happen repeatedly, via the successive steps of sequestration/de-sequestration, as long as the number of free particles is sufficiently "large", so that sequestration always occurs before the degradation/death of an agent. The precise result is in fact a little more subtle than that. We show that, in the limit, on the timescale \(t{\mapsto}Nt\), there remains a positive fraction of free particles of the order of \(N\).
The state descriptor of the pairing process is in this case
\[(X_{N}(t)){=}(F_{N}(t),Z_{N}(t))=((F_{j,N}(t),j{=}1,\ldots,J),Z_{N}(t)).\]
It can be expressed as the solution of the SDEs (7) and (8); the initial conditions are assumed to satisfy the following scaling relations
\[\lim_{N\to+\infty}\frac{F_{N}(0)}{N}=\overline{f}_{0}\neq 0,\quad\overline{f}_{0}{=}(\overline{f}_{0,j}){\in}\prod_{j=1}^{J}[0,c_{j}],\text{ and }Z_{N}(0){=}z_{0}{\in}{\mathbb{N}}, \tag{37}\]
where \(c{=}(c_{j})\) is defined by Relation (5).
Initially, a fraction \(\overline{f}_{0,j}\) of the particles of type \(j{\in}\{1,\ldots,J\}\) are free and there are \(z_{0}\) free agents. Since the external input rate of agents is constant and equal to \(\beta\), in order to have a positive fraction of the \(N\) particles paired with an agent, the natural time scale to consider is, at least, \(t{\mapsto}Nt\).
The setting of the analysis will be that of averaging principles, as presented in Kurtz [20]. As will be seen, there are specific technical difficulties related to the scaling framework, which we introduce now.
**Definition 12** (Scaled Processes).: _For \(N{>}0\), \((\overline{X}_{N}(t))\mathop{=}\limits^{\text{\rm def.}}(\overline{F}_{N}(t),Z _{N}(Nt))\), with_
\[\left(\overline{F}_{N}(t)\right)\mathop{=}\limits^{\text{\rm def.}}\left( \frac{F_{N}(Nt)}{N}\right)=\left(\frac{F_{j,N}(Nt)}{N},j{=}1,\ldots,J\right){ \in}\prod_{j=1}^{J}\left[0,\frac{C_{j,N}}{N}\right]. \tag{38}\]
_For \(t{\geq}0\), we have \(\|\overline{F}_{N}(t)\|{\leq}1\) since \(F_{j,N}(t){\leq}C_{j}^{N}\), for all \(1{\leq}j{\leq}J\)._
_The occupation measure is the random measure on \(H{\stackrel{{\rm def.}}{{=}}}{\mathbb{R}}_{+}{\times}[0,1]^{J}{ \times}{\mathbb{N}}\) defined by_
\[\langle\Lambda_{N},g\rangle\stackrel{{\rm def.}}{{=}}\int_{0}^{+ \infty}g\left(s,\left(\frac{F_{j,N}(Ns)}{N}\right),Z_{N}(Ns)\right){\rm d}s, \tag{39}\]
_for a continuous function \(g\) with compact support on \(H\)._
Note that the "slow" process \((\overline{F}_{N}(t))\) is included in the definition of the occupation measure \(\Lambda_{N}\). The reason is that the timescale is too fast, of the order of \(N^{2}\) in fact, to get directly convenient tightness properties for the sequence of processes \((\overline{F}_{j,N}(t))\).
We can have a glimpse of this problem as follows. If \((\overline{M}_{j,N}(t))\) is the martingale of Relation (10), it does not clearly converge in distribution to \(0\) as \(N\) gets large, as would be expected if a "standard" averaging principle were true. Indeed, Relation (12) gives, for \(1{\leq}j{\leq}J\),
\[\left(\langle\overline{M}_{j,N}\rangle\,(t)\right)=\left(\eta_{j}\int_{0}^{t} \left(\frac{C_{j,N}}{N}{-}\overline{F}_{j,N}(s)\right){\rm d}s{+}\lambda_{j} \int_{0}^{t}\overline{F}_{j,N}(s)Z_{N}(Ns)\,{\rm d}s\right),\]
which does not seem to vanish.
We state the main result of this paper.
**Theorem 13** (Averaging Principle).: _Under the scaling assumption (5) and if \((F_{N}(0)/N)\) converges to \(\overline{f}_{0}{\neq}0\), then the sequence of processes \((\|F_{N}(Nt)\|/N)\) converges in distribution to \((H(t))\), defined by, for \(t{\geq}0\), \(H(t){\in}(0,1)\) is the unique solution of the relation_
\[\int_{\|\overline{f}_{0}\|}^{H(t)}\frac{1}{\delta\phi(u){-}\beta}\,{\rm d}u=t, \tag{40}\]
_where \(\phi\) is defined by Relation (6)._
_Furthermore the sequence \((\Lambda_{N})\) is converging in distribution to the measure \(\Lambda_{\infty}\) on \(H{=}{\mathbb{R}}_{+}{\times}[0,1]^{J}{\times}{\mathbb{N}}\), such that_
\[\langle\Lambda_{\infty},g\rangle\stackrel{{\rm def.}}{{=}}\int_ {{\mathbb{R}}_{+}{\times}{\mathbb{N}}}g\left(s,\left(f_{j}(s)\right),x\right)P _{\phi(H(s))}({\rm d}x)\,{\rm d}s, \tag{41}\]
_for a non-negative Borelian function \(g\) on \(H\), where, for \(1{\leq}j{\leq}J\),_
\[f_{j}(t)=\frac{\rho_{j}}{\rho_{j}{+}\phi(H(t))}c_{j}, \tag{42}\]
_and, for \(y{>}0\), \(P_{y}\) is a Poisson distribution with parameter \(y\)._
Note that we have a convergence in distribution of \((\|F_{N}(Nt)\|/N)\), but not of the processes \((F_{j,N}(Nt)/N)\), \(j{=}1,\ldots,J\). The convergence in distribution for this \(J\)-dimensional process is weaker; it is expressed through the sequence of occupation measures \((\Lambda_{N})\). See Dawson [6] for general definitions and results for the convergence in distribution of random measures.
It is not difficult to see that, under Condition (37) for the initial conditions, one cannot have a convergence in distribution of \((F_{j,N}(Nt)/N)\). Otherwise, its limit would be \((f_{j}(t))\), but this would imply that the asymptotic initial point \((\overline{f}_{0,j})\) would satisfy the relation
\[\overline{f}_{0,j}=\frac{\rho_{j}}{\rho_{j}{+}\phi(\overline{f}_{0})}c_{j},\]
which is not the case a priori. Asymptotically the vector \((F_{j,N}(Nt)/N)\) lives on a one-dimensional curve of the state space; this is due to the fast processes, which lead to a kind of state space collapse. See Propositions 22 and 23.
**Corollary 14** (Equilibrium).: _Under the assumptions, and with the notations, of Theorem 13,_
\[\lim_{t\to+\infty}H(t)=H_{\infty}=\sum_{j=1}^{J}\frac{\rho_{j}}{\rho_{j}{+}\beta/\delta}c_{j}.\]
The quantity \(H_{\infty}\) is the asymptotic fraction of free particles. The proof of this theorem is achieved in several steps. The general picture is that a kind of averaging principle nevertheless holds, the slow process being \((\overline{F}_{N}(t))\) and the "fast" process being \((Z_{N}(Nt))\). The general method in this domain is described in Kurtz [20]; see also Papanicolaou et al. [23] and Freidlin and Wentzell [13]. It turns out that, due to the very fast timescale mentioned above, the slow process has to be included in the definition of the occupation measure, see Definition 12. This situation leads to technical difficulties, in particular to identify the invariant measures of the fast processes.
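As a numerical illustration of Theorem 13 and Corollary 14 (an addition to the text, with arbitrary parameter values), Relation (40) can be integrated in its differential form \(H^{\prime}(t){=}\delta\phi(H(t)){-}\beta\), where \(\phi\) is computed by bisection from its defining equation.

```python
import numpy as np

eta, lam = np.array([1.0, 2.0]), np.array([1.0, 0.5])
c = np.array([0.6, 0.4])
rho = eta / lam
beta, delta = 0.3, 1.0                      # creation / degradation rates of agents

def phi(y):
    """Unique root h of sum_j rho_j c_j / (rho_j + h) = y (the function of Relation (6))."""
    lo, hi = 1e-12, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.sum(rho * c / (rho + mid)) > y else (lo, mid)
    return 0.5 * (lo + hi)

# Relation (40) in differential form: H'(t) = delta * phi(H(t)) - beta.
H, dt = 0.9, 1e-3                           # H(0) = ||f_0||, arbitrary in (0, 1)
for _ in range(50_000):
    H += dt * (delta * phi(H) - beta)

H_inf = np.sum(rho * c / (rho + beta / delta))   # limit of Corollary 14
print("H(T)  =", H)
print("H_inf =", H_inf)                     # the two values should be close
```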
**Definition 15**.: _For \(a{>}0\), the stopping time \(\tau_{N}(a)\) is defined by_
\[\tau_{N}(a)\stackrel{{\text{\rm def.}}}{{=}}\inf\left\{t{>}0: \|F_{N}(Nt)\|{=}\sum_{j=1}^{J}F_{j,N}(Nt)\leq aN\right\}. \tag{43}\]
To prove convenient tightness properties of a scaled version of \((X_{N}(t))\), we first derive some technical results. In a first step, we fix some \(a_{0}{\in}(0,\|\overline{f}_{0}\|)\) and we work with a "stopped" _occupation measure_, the random measure on \(H\) defined by
\[\left\langle\Lambda_{N}^{0},g\right\rangle\stackrel{{\text{\rm def.}}}{{=}}\int_{0}^{\tau_{N}(a_{0})}g\left(s,\left(\frac{F_{j,N}(Ns)}{N}\right), Z_{N}(Ns)\right)\mathrm{d}s, \tag{44}\]
for a continuous function \(g\) with compact support on \(H\). The motivation for the stopped occupation measure comes from a technical argument used for the identification of the invariant distributions of the fast processes. See Proposition 23.
**Lemma 16**.: _If \(\|\overline{f}_{0}\|{>}0\), for \(a{\in}(0,\|\overline{f}_{0}\|)\), we have_
\[\lim_{N\to+\infty}\mathbb{P}\left(\tau_{N}(a)<\ell(a)\right)=0,\]
_with \(\ell(a)\stackrel{{\text{\rm def.}}}{{=}}(\|\overline{f}_{0}\|{-}a)/(2\beta)\), and the relation_
\[\lim_{N\to+\infty}\left(\frac{Z_{N}(Nt)}{\sqrt{N}}\right)=(0)\]
_holds for the convergence in distribution._
Proof.: For \(t{>}0\) and \(N\) sufficiently large, on the event \(\{\|F_{N}(Nt)\|{<}aN\}\) there are at least \((\|F_{N}(0)\|{-}\lceil aN\rceil{-}z_{0})\) new agents created up to time \(Nt\). Consequently for \(y{>}0\),
\[\{\tau_{N}(a)\leq y\}\subset\left\{\mathcal{P}_{z}^{+}((0,\beta){\times}(0,yN) ){\geq}\|F_{N}(0)\|{-}\lceil aN\rceil{-}z_{0}\right\},\]
by Relation (8). The first assertion follows from the law of large numbers for Poisson processes.
We now show that there exists a coupling of \((Z_{N}(t))\) with \((L_{0}(t))\), the state process of an \(M/M/\infty\) queue, such that the relation \(Z_{N}(Nt){\leq}L_{0}(N^{2}t)\) holds for all \(t{<}\tau_{N}(a)\). See Section B.2 of the appendix on the \(M/M/\infty\) queue.
The jump rates of the process \((Z_{N}(t))\) in state \(((f_{j}),z)\) at time \(t\) are given by
\[\begin{cases}+1,&\beta{+}\sum_{j=1}^{J}\eta_{j}\left(C_{j}^{N}{-}f_{j}\right), \\ -1,&\delta z{+}\sum_{j=1}^{J}\lambda_{j}f_{j}z.\end{cases}\]
Let \((L_{0}(t))\) be the process of the \(M/M/\infty\) queue with input rate \(2\overline{\eta}\) and service rate \(\delta{+}a\underline{\lambda}\), where \(\overline{\eta}\) and \(\underline{\lambda}\) are defined by Relation (4), and with \(L_{0}(0){=}Z_{N}(0){=}z_{0}\). We take \(N\) sufficiently large so that \(\beta{\leq}\overline{\eta}N\). By comparing the jump rates, one can construct a version of \((Z_{N}(t),L_{0}(t))\) such that the relation
\[Z_{N}(Nt)\leq L_{0}(N^{2}t) \tag{45}\]
holds for all \(t{<}\tau_{N}(a)\). For \(\varepsilon{>}0\), let \(T_{N}(\varepsilon){=}\inf\left\{t:L_{0}(t){\geq}\varepsilon\sqrt{N}\right\}\). Proposition 25 of the appendix shows that the sequence of random variables
\[\left(\left(\frac{2\overline{\eta}}{a\underline{\lambda}}\right)^{\left\lceil \varepsilon\sqrt{N}\right\rceil}\frac{T_{N}(\varepsilon)}{(\left\lceil \varepsilon\sqrt{N}\right\rceil{-}1)!}\right)\]
converges in distribution to an exponential random variable. In particular, for any \(t{>}0\),
\[\lim_{N\to+\infty}\mathbb{P}(T_{N}(\varepsilon){\leq}N^{2}t)=0.\]
The proof of the lemma follows from the relation
\[\mathbb{P}\left(\sup_{s\leq t}\frac{Z_{N}(N(s{\wedge}\tau_{N}(a_{0})))}{\sqrt {N}}>\varepsilon\right)\leq\mathbb{P}\left(\sup_{s\leq t}\frac{L_{0}(N^{2}s)} {\sqrt{N}}>\varepsilon\right){=}\mathbb{P}\left(T_{N}(\varepsilon){\leq}N^{2}t\right).\]
**Proposition 17**.: _The sequence of random measures \((\Lambda^{0}_{N})\) defined by Relation (44) is tight and any limiting point \(\Lambda^{0}_{\infty}\) can be expressed as_
\[\left\langle\Lambda^{0}_{\infty},f\right\rangle=\int_{\mathbb{R}_{+}\times[0,1 ]^{J}\times\mathbb{N}}f\left(s,x,p\right)\pi_{s}(\mathrm{d}x,\mathrm{d}p)\, \mathrm{d}s, \tag{46}\]
_for any function \(f{\in}\mathcal{C}_{c}(\mathbb{R}_{+}\times[0,1]^{J}{\times}\mathbb{N})\), where \((\pi_{s})\) is an optional process with values in the space of probability distributions on \([0,1]^{J}{\times}\mathbb{N}\)._
_If \((\Lambda^{0}_{N_{k}})\) is a sub-sequence of \((\Lambda^{0}_{N})\) converging to \(\Lambda^{0}_{\infty}\), then with the convention of Relation (9), for the convergence in distribution of processes,_
\[\lim_{k\to+\infty}\left(\int g(x,z)z\Lambda^{0}_{N_{k}}([0,t],\mathrm{d}x, \mathrm{d}z)\right)=\left(\int g(x,z)z\Lambda^{0}_{\infty}([0,t],\mathrm{d}x,\mathrm{d}z)\right), \tag{47}\]
_for any bounded continuous function \(g\) on \([0,1]^{J}{\times}\mathbb{N}\), and the limit is integrable for all \(t{\geq}0\)._
Proof.: For \(K{>}0\), with Relation (45) in the proof of Lemma 16, and with the same notations, the relation
\[\int_{0}^{t}\mathbb{1}\,_{\{Z_{N}(Ns)\geq K\}}\,\mathrm{d}s\leq\int_{0}^{t} \mathbb{P}(L_{0}(N^{2}s)\geq K)\,\mathrm{d}s\]
holds on the event \(\{\tau_{N}(a_{0}){\geq}t\}\). Consequently, for \(t{<}\ell(a_{0})\),
\[\mathbb{E}\left(\Lambda_{N}^{0}([0,t]{\times}[0,1]^{J}{\times}[K, +\infty])\right)=\mathbb{E}\left(\int_{0}^{t\wedge\tau_{N}(a_{0})}\mathbb{1}_{ \{Z_{N}(Ns)\geq K\}}\,\mathrm{d}s\right)\\ \leq\mathbb{E}\left(\mathbb{1}_{\{\tau_{N}(a_{0})>t\}}\int_{0}^{t }\mathbb{1}_{\{L_{0}(N^{2}s)\geq K\}}\,\mathrm{d}s\right)\leq\frac{1}{N^{2}} \int_{0}^{N^{2}t}\mathbb{P}(L_{0}(s)\geq K)\,\mathrm{d}s.\]
Since the Markov process \((L_{0}(t))\) converges in distribution to a Poisson distribution with parameter \(2\overline{\eta}/(\underline{\lambda}a_{0})\) (see Section B.2 of the appendix), with Lemma 16 we obtain the relation
\[\limsup_{N\to+\infty}\mathbb{E}\left(\Lambda_{N}^{0}([0,t]{\times}[0,1]^{J}{ \times}[K,+\infty])\right)\leq\mathbb{P}(\mathcal{N}_{1}(0,2\overline{\eta}/ (\underline{\lambda}a_{0})){\geq}K)\,t,\]
where \(\mathcal{N}_{1}\) is a Poisson process on \(\mathbb{R}_{+}\) with rate \(1\). In particular, one can choose \(K\) sufficiently large such that
\[\sup_{N}\mathbb{E}\left(\Lambda_{N}^{0}([0,t]{\times}[0,1]^{J}{\times}[K,+ \infty])\right)\]
is arbitrarily small. Lemma 1.3 of Kurtz [20] shows that the sequence \((\Lambda_{N}^{0})\) is tight, and Lemma 1.4 of the same reference gives the representation (46).
For the second part of the proposition, Relation (45) in the proof of Lemma 16 and the Cauchy-Schwarz Inequality give, for \(s{\leq}t\),
\[\mathbb{E}\left(\left(\int g(x,z)z\Lambda_{N}^{0}([s,t],\mathrm{ d}x,\mathrm{d}z)\right)^{2}\right)=\mathbb{E}\left(\left(\int_{s\wedge\tau_{N}(a_{0} )}^{t\wedge\tau_{N}(a_{0})}\!\!\!g\!\left(\overline{X}_{N}(s)\right)Z_{N}(Ns) \,\mathrm{d}s\right)^{2}\right)\\ \leq\|g\|_{\infty}\mathbb{E}\left(\left(\int_{s\wedge\tau_{N}(a_ {0})}^{t\wedge\tau_{N}(a_{0})}L_{0}(N^{2}s)\,\mathrm{d}s\right)^{2}\right)\leq \|g\|_{\infty}\mathbb{E}\left(\left(\int_{s}^{t}L_{0}(N^{2}s)\,\mathrm{d}s \right)^{2}\right)\\ \leq(t{-}s)\|g\|_{\infty}\int_{s}^{t}\mathbb{E}\left(L_{0}(N^{2} s)^{2}\right)\mathrm{d}s\leq(t{-}s)^{2}\|g\|_{\infty}\sup_{u\geq 0}\mathbb{E} \left(L_{0}(u)^{2}\right).\]
Kolmogorov-Centsov's criterion implies that the sequence of processes
\[\left(\int g(x,z)z\,\Lambda_{N}^{0}([0,t],\mathrm{d}x,\mathrm{d}z)\right)\]
is tight for the convergence in distribution and any of its limiting points is a continuous process. See Theorem 2.8 and Problem 4.11, page 64 of Karatzas and Shreve [19] for example.
For \(t{>}0\) and \(C{>}0\), for the convergence in distribution we have
\[\lim_{k\to+\infty}\int g(x,z)\,(z{\wedge}C)\ \Lambda_{N_{k}}^{0}([0,t], \mathrm{d}x,\mathrm{d}z)=\int g(x,z)\,(z{\wedge}C)\ \Lambda_{\infty}^{0}([0,t],\mathrm{d}x,\mathrm{d}z).\]
Using again Relation (45), with the same argument as before,
\[\mathbb{E}\left(\int g(x,z)\,(z\wedge C)\ \Lambda^{0}_{N_{k}}([0,t], \mathrm{d}x,\mathrm{d}z)\right)\\ \leq\|g\|_{\infty}\int_{0}^{t}\mathbb{E}\left(L_{0}(N^{2}u)\wedge C \right)\mathrm{d}u\leq t\|g\|_{\infty}\sup_{u\geq 0}\mathbb{E}\left(L_{0}(u)^{2} \right)<+\infty,\]
by letting first \(k\) and then \(C\) go to infinity, we obtain the relation
\[\mathbb{E}\left(\int g(x,z)z\,\Lambda^{0}_{\infty}([0,t],\mathrm{d}x,\mathrm{ d}z)\right)<+\infty,\]
for all \(t{\geq}0\). Similarly, we have
\[\mathbb{E}\left(\int g(x,z)z\mathbb{1}_{\{z\geq C\}}\,\Lambda^{0} _{N_{k}}([0,t],\mathrm{d}x,\mathrm{d}z)\right)\\ \leq\|g\|_{\infty}\int_{0}^{t}\mathbb{E}\left(L_{0}(N_{k}^{2}u) \mathbb{1}_{\left\{L_{0}(N_{k}^{2}u)\geq C\right\}}\right)\mathrm{d}u\leq \frac{t}{C}\|g\|_{\infty}\sup_{u\geq 0}\mathbb{E}\left(L_{0}(u)^{2}\right),\]
and therefore the convergence in distribution
\[\lim_{k\to+\infty}\int g(x,z)z\,\Lambda^{0}_{N_{k}}([0,t],\mathrm{d}x,\mathrm{ d}z)=\int g(x,z)z\,\Lambda^{0}_{\infty}([0,t],\mathrm{d}x,\mathrm{d}z),\]
for \(t{>}0\). For \(p{\geq}1\) and \(0{\leq}t_{1}{\leq}\cdots{\leq}t_{p}\), this convergence also clearly holds for finite marginals at \((t_{i})\). The proposition is proved.
If \(f\) is a non-negative Borelian function on \(\mathbb{R}^{J}_{+}{\times}\mathbb{N}\), the SDEs (7) and (8) give directly the relations
\[f\left(\overline{F}_{N}(t),Z_{N}(Nt)\right)=f\left(\overline{F}_ {N}(0),Z_{N}(0)\right)+M_{f,N}(t)\\ +\sum_{j=1}^{J}\lambda_{j}\int_{0}^{t}N\Delta_{-e_{j}/N,-1}(f) \left(\overline{F}_{N}(s),Z_{N}(Ns)\right)F_{j,N}(Ns)Z_{N}(Ns)\,\mathrm{d}s\\ +\sum_{j=1}^{J}\eta_{j}\int_{0}^{t}N\Delta_{e_{j}/N,1}(f)\left( \overline{F}_{N}(s),Z_{N}(Ns)\right)(C^{N}_{j}-F_{j,N}(Ns))\,\mathrm{d}s\\ +\beta N\int_{0}^{t}\!\Delta_{0,1}(f)\left(\overline{F}_{N}(s),Z_ {N}(Ns)\right)\mathrm{d}s\\ +\delta N\int_{0}^{t}\!\Delta_{0,-1}(f)\left(\overline{F}_{N}(s), Z_{N}(Ns)\right)Z_{N}(Ns)\,\mathrm{d}s, \tag{48}\]
with the notation, for \(x\), \(u{\in}\mathbb{R}^{J}_{+}\), \(y{\in}\mathbb{N}\) and \(v{\in}\mathbb{Z}\),
\[\Delta_{u,v}(f)(x,y)\stackrel{{\text{def.}}}{{=}}f(x+u,y+v)-f(x,y),\]
and, for \(1{\leq}j{\leq}J\), \(e_{j}\) is the \(j\)th unit vector of \(\mathbb{R}^{J}\).
The process \(\left(M_{f,N}(t)\right)\) is a square integrable martingale whose previsible increasing process is given by
\[\left\langle M_{f,N}\right\rangle(t)=\beta N\int_{0}^{t}\Delta_{0,1 }(f)\left(\overline{F}_{N}(s),Z_{N}(Ns)\right)^{2}\mathrm{d}s\] \[\qquad+\delta N\int_{0}^{t}\Delta_{0,-1}(f)\left(\overline{F}_{N }(s),Z_{N}(Ns)\right)^{2}Z_{N}(Ns)\,\mathrm{d}s\] \[\qquad+\sum_{j=1}^{J}\lambda_{j}\int_{0}^{t}N\Delta_{-e_{j}/N,-1}( f)\left(\overline{F}_{N}(s),Z_{N}(Ns)\right)^{2}F_{j,N}(Ns)Z_{N}(Ns)\,\mathrm{d}s\] \[\qquad+\sum_{j=1}^{J}\eta_{j}\int_{0}^{t}N\Delta_{e_{j}/N,1}(f) \left(\overline{F}_{N}(s),Z_{N}(Ns)\right)^{2}\left(C_{j}^{N}-F_{j,N}(Ns) \right)\mathrm{d}s. \tag{49}\]
Dividing Relation (48), taken at time \(Nt\), by \(N\), we get
\[\frac{1}{N}\left(f\left(\overline{F}_{N}(t),Z_{N}(Nt)\right)-f \left(\overline{F}_{N}(0),Z_{N}(0)\right)\right)= \frac{1}{N}M_{f,N}(t)\] \[\qquad+\sum_{j=1}^{J}\lambda_{j}\int_{0}^{t}N\Delta_{-e_{j}/N,-1} (f)\left(\overline{F}_{N}(s),Z_{N}(Ns)\right)\overline{F}_{j,N}(s)Z_{N}(Ns) \,\mathrm{d}s\] \[\qquad+\sum_{j=1}^{J}\eta_{j}\int_{0}^{t}N\Delta_{e_{j}/N,1}(f) \left(\overline{F}_{N}(s),Z_{N}(Ns)\right)\left(\frac{C_{j}^{N}}{N}-\overline {F}_{j,N}(s)\right)\mathrm{d}s\] \[\qquad+\beta\int_{0}^{t}\Delta_{0,1}(f)\left(\overline{F}_{N}(s), Z_{N}(Ns)\right)\mathrm{d}s\] \[\qquad+\delta\int_{0}^{t}\Delta_{0,-1}(f)\left(\overline{F}_{N}( s),Z_{N}(s)\right)Z_{N}(Ns)\,\mathrm{d}s. \tag{50}\]
**Lemma 18**.: _If \(f\) is a bounded \(\mathcal{C}_{1}\) function with compact support on \(\mathbb{R}_{+}^{J}\times\mathbb{N}\), then the sequence of martingales \(\left(M_{f,N}(t)/N,t{<}\ell(a_{0})\right)\) of Relation (50) converges in distribution to \(0\)._
Proof.: We take care of the third term \((A_{N}(t))\) of \(\left(\left\langle M_{f,N}/N\right\rangle(Nt)\right)\) in Relation (49); the arguments are similar for the others, even easier. For \(1{\leq}j{\leq}J\), denote by \((A_{j,N}(t))\) the \(j\)th term of \((A_{N}(t))\), then
\[A_{j,N}(t)\stackrel{{\mathrm{def.}}}{{=}}\frac{\lambda_{j}}{N} \int_{0}^{t}N\Delta_{-e_{j}/N,-1}(f)\left(\overline{F}_{N}(s),Z_{N}(Ns)\right)^{ 2}\overline{F}_{j,N}(s)Z_{N}(Ns)\,\mathrm{d}s,\]
then, again with Relation (45) of the proof of Lemma 16, since \(\left(\overline{F}_{j,N}(t)\right)\) is bounded by \(1\),
\[\mathbb{E}(A_{j,N}(t))\leq\frac{\lambda_{j}}{N}\left\|\frac{\partial f}{ \partial x_{j}}\right\|_{\infty}^{2}\left(\mathbb{E}\left(\int_{0}^{t}L_{0}(N ^{2}s)\,\mathrm{d}s\right)+t\mathbb{P}(\tau_{N}(a_{0})\leq\ell(a_{0}))\right),\]
since \(\mathbb{E}(L_{0}(t))\) is converging as \(t\) goes to infinity, we have
\[\lim_{N\to+\infty}\mathbb{E}(A_{j,N}(t))=0.\]
Doob's Inequality shows the convergence of \((M_{f,N}(t)/N,t{<}\ell(a_{0}))\) to \(0\).
**Proposition 19**.: _If \(\Lambda^{0}_{\infty}\) is a limiting point of \((\Lambda^{0}_{N})\) with the representation (46) of Proposition 17 then, for any continuous function \(g\) on \(\mathbb{R}^{J}_{+}{\times}\mathbb{N}\), the relation_
\[\left(\int_{0}^{t}\int_{[0,1]^{J}{\times}\mathbb{N}}g(x,p)\pi_{s}( \mathrm{d}x,\mathrm{d}p)\,\mathrm{d}s,t{<}\ell(a_{0})\right)\\ =\left(\int_{0}^{t}\int_{[0,1]^{J}}\mathbb{E}\left(g\left(x, \mathcal{N}_{1}\left(\left[0,\frac{\left\langle\eta,c{-}x\right\rangle}{ \left\langle\lambda,x\right\rangle}\right]\right)\right)\right)\pi_{s}^{1}( \mathrm{d}x)\,\mathrm{d}s,t{<}\ell(a_{0})\right), \tag{51}\]
_holds almost surely, where \(\pi_{t}^{1}\) is the marginal of \(\pi_{t}\) on \(\mathbb{R}^{J}_{+}\), \(\lambda\), \(\eta\) and \(c{\in}\mathbb{R}^{J}_{+}\) are given by Definition 1 and \(\ell(a_{0})\) in Lemma 16, and \(\mathcal{N}_{1}\) is a Poisson process on \(\mathbb{R}_{+}\) with rate \(1\)._
_Furthermore, almost surely,_
\[\int_{0}^{\ell(a_{0})}\pi_{s}^{1}\left(x{\in}[0,1]^{J}:\sum_{j=1}^{J}x_{j}<a_{ 0}\right)\mathrm{d}s=0. \tag{52}\]
Relation (51) states that, for almost all \(t{<}\ell(a_{0})\), \(\pi_{t}\) conditioned on the first coordinate \(x\) is a Poisson distribution with parameter \(\left\langle\eta,c{-}x\right\rangle/\left\langle\lambda,x\right\rangle\). Note that Relation (52) shows that the denominator \(\left\langle\lambda,x\right\rangle\) is lower bounded by \(\underline{\lambda}a_{0}\) almost surely for \(\pi_{t}\), for \(t{<}\ell(a_{0})\).
Proof.: Let \((\Lambda^{0}_{N_{k}})\) be a subsequence of \((\Lambda^{0}_{N})\) converging to some \(\Lambda^{0}_{\infty}\) of the form (46). By letting \(k\) go to infinity in Relation (50), with Lemma 16, Relation (47) of Proposition 17, and Lemma 18, for any continuous function \(g\) with compact support on \([0,1]^{J}{\times}\mathbb{N}\),
\[\int_{0}^{t}\int_{[0,1]^{J}{\times}\mathbb{N}}(g(x,p{-}1){-}g(x,p)) \left(\sum_{j=1}^{J}\lambda_{j}x_{j}\right)p\pi_{s}(\mathrm{d}x,\mathrm{d}p) \,\mathrm{d}s\\ +\int_{0}^{t}\int_{[0,1]^{J}{\times}\mathbb{N}}(g(x,p{+}1){-}g(x,p))\left(\sum_{j=1}^{J}\eta_{j}(c_{j}{-}x_{j})\right)\pi_{s}(\mathrm{d}x, \mathrm{d}p)\,\mathrm{d}s=0,\]
holds almost surely for all \(t{<}\ell(a_{0})\). Hence we have
\[\int_{[0,1]^{J}{\times}\mathbb{N}}(g(x,p{-}1){-}g(x,p)) \left\langle\lambda,x\right\rangle p\pi_{t}(\mathrm{d}x,\mathrm{d}p)\\ +\int_{[0,1]^{J}{\times}\mathbb{N}}(g(x,p{+}1){-}g(x,p)) \left\langle\eta,c{-}x\right\rangle\pi_{t}(\mathrm{d}x,\mathrm{d}p)=0,\]
almost everywhere on \(t{\in}\mathbb{R}_{+}\), or if \(g(x,p){=}g_{1}(x)g_{2}(p)\),
\[\int_{\mathbb{R}_{+}{\times}\mathbb{N}}g_{1}(x)\Omega[x](g_{2})(p)\pi_{t}( \mathrm{d}x,\mathrm{d}p)=0,\]
where, for \(h:\mathbb{N}{\to}\mathbb{R}_{+}\),
\[\Omega[x](h)(p)\stackrel{{\mathrm{def.}}}{{=}}\left\langle\eta, c{-}x\right\rangle(h(p{+}1){-}h(p)){+}\left\langle\lambda,x\right\rangle p(h(p{-}1){-}h(p)).\]
Let \(\pi_{t}(\cdot\mid x)\) be the conditional probability measure \(\pi_{t}(\mathrm{d}x,\mathrm{d}p)\) given \(x{\in}\mathbb{R}_{+}\), then we have, \(\pi_{t}(\mathrm{d}x,\mathbb{N})\) almost everywhere
\[\int\Omega[x](g_{2})(p)\pi_{t}(\mathrm{d}p\mid x)=0,\]
for all functions \(g_{2}\) with finite support. Since, for \(x{>}0\), \(\Omega[x]\) is the \(Q\)-matrix of an \(M/M/\infty\) queue, the last relation gives that \(\pi_{t}(\mathrm{d}p|x)\) is its invariant distribution, i.e. it is a Poisson distribution with parameter \(\left\langle\eta,c{-}x\right\rangle/\left\langle\lambda,x\right\rangle\).
Relation (52) is a simple consequence of Lemma 16.
The proposition is proved.
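The key identification step above — for a frozen value \(x\) of the slow variable, the conditional law \(\pi_{t}(\mathrm{d}p\mid x)\) is Poisson with parameter \(\langle\eta,c{-}x\rangle/\langle\lambda,x\rangle\) — can also be checked numerically. The short sketch below (an added illustration, with an arbitrary value of \(x\)) verifies that this Poisson distribution cancels the generator \(\Omega[x]\).

```python
import math
import numpy as np

eta, lam = np.array([1.0, 2.0]), np.array([1.0, 0.5])
c, x = np.array([0.6, 0.4]), np.array([0.3, 0.2])    # frozen slow variable, <lam, x> > 0
a, b = np.dot(eta, c - x), np.dot(lam, x)            # birth rate a, death rate b*p in state p

K = 40                                               # truncation of the state space
pi = np.array([math.exp(-a / b) * (a / b) ** p / math.factorial(p) for p in range(K)])

def omega(h):
    """(Omega[x] h)(p) = a (h(p+1) - h(p)) + b p (h(p-1) - h(p))."""
    out = np.zeros(K)
    for p in range(K - 1):
        out[p] = a * (h[p + 1] - h[p]) + (b * p * (h[p - 1] - h[p]) if p > 0 else 0.0)
    return out

h = np.zeros(K); h[5] = 1.0                          # test function: indicator of {p = 5}
print("sum_p pi(p) (Omega[x] h)(p) =", float(np.dot(pi, omega(h))))   # ~ 0
```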
From now on, we fix an increasing sequence \((N_{k})\) such that the sequence \((\Lambda^{0}_{N_{k}})\) is converging to \(\Lambda^{0}_{\infty}\) with a representation given by Relations (46) and (51). The following corollary is a direct consequence of Propositions 17 and 19.
**Corollary 20**.: _If \(g\) is a continuous bounded function on \([0,1]^{J}\), then, for the convergence in distribution,_
\[\lim_{k{\to}{+}{\infty}}\left(\int_{0}^{t}g\left(\overline{F}_{N_ {k}}(s)\right)Z_{N_{k}}(N_{k}s)\,\mathrm{d}s,t{<}\ell(a_{0})\right)\\ =\left(\int_{0}^{t}\int_{[0,1]^{J}}g(x)\frac{\left\langle\eta,c{-} x\right\rangle}{\left\langle\lambda,x\right\rangle}\pi_{s}^{1}(\mathrm{d}x)\, \mathrm{d}s,t{<}\ell(a_{0})\right). \tag{53}\]
The following proposition is a convergence result for the scaled process \((\overline{F}_{N}(t))\). It is clearly weaker than the convergence stated in Theorem 13, but it is in fact a crucial ingredient to establish the theorem.
**Proposition 21**.: _The sequence of processes \((\|\overline{F}_{N_{k}}(t)\|,t{<}\ell(a_{0}))\) is converging in distribution to_
\[(H(t),t{<}\ell(a_{0}))\stackrel{{\mathrm{def.}}}{{=}}\left(\| \overline{f}_{0}\|+\int_{0}^{t}\int_{[0,1]^{J}}\left(\delta\frac{\left\langle \eta,c{-}x\right\rangle}{\left\langle\lambda,x\right\rangle}{-}\beta\right)\pi _{s}^{1}(\mathrm{d}x)\,\mathrm{d}s,t{<}\ell(a_{0})\right).\]
Proof.: We define, for \(t{\geq}0\),
\[\widetilde{Z}_{N}(t)=Z_{N}(t){+}\sum_{j=1}^{J}(C^{N}_{j}{-}F_{j,N}(t))=N{-}\|F_{N} (t)\|{+}Z_{N}(t), \tag{54}\]
\(\widetilde{Z}_{N}(t)\) is the total number of agents (free or paired) of the system at time \(t\). Using the SDEs (10) and (11), we have
\[\frac{\widetilde{Z}_{N}(Nt)}{N}=M_{Z,N}(t){+}\frac{\widetilde{Z}_{N}(0)}{N}{+ }\beta t{-}\delta\int_{0}^{t}Z_{N}(Ns)\,\mathrm{d}s, \tag{55}\]
where \((M_{Z,N}(t))\) is a local martingale whose previsible increasing process is given by
\[\left(\left\langle M_{Z,N}\right\rangle(t)\right)=\left(\frac{1}{N}\left( \beta t{+}\delta\int_{0}^{t}Z_{N}(Ns)\,\mathrm{d}s\right)\right). \tag{56}\]
Using again Doob's Inequality, Lemma 16 and Relation (54), the proposition is proved.
The next proposition gives a characterization of the process \((\pi^{1}_{s})\) which will be elucidated in Proposition 23.
**Proposition 22**.: _If \(g\) is a \(\mathcal{C}_{1}\)-function on \([0,1]^{J}\), then, almost surely,_
\[\left(\int_{0}^{t}\int_{[0,1]^{J}}\!\!\sum_{j=1}^{J}\!\!\frac{\partial g}{ \partial x_{j}}(x)\!\left(\lambda_{j}x_{j}\frac{\langle\eta,c\!-\!x\rangle}{ \langle\lambda,x\rangle}\!-\!\eta_{j}(c_{j}\!-\!x_{j})\right)\pi^{1}_{s}( \mathrm{d}x)\,\mathrm{d}s,t\!\!<\!\ell(a_{0})\right)=\!\!(0). \tag{57}\]
Proof.: For \(t{>}0\), let \(g\) be a \(\mathcal{C}_{2}\)-function on \([0,1]^{J}\). Relation (50) gives,
\[\frac{1}{N}g\left(\overline{F}_{N}(t)\right)=\frac{1}{N}g\left( \overline{F}_{N}(0)\right)+M_{g,N}(t)\\ +\sum_{j=1}^{J}\lambda_{j}\int_{0}^{t}N\nabla_{-e_{j}/N}(g) \left(\overline{F}_{N}(s)\right)\overline{F}_{j,N}(s)Z_{N}(Ns)\,\mathrm{d}s\\ +\sum_{j=1}^{J}\eta_{j}\int_{0}^{t}N\nabla_{e_{j}/N}(g)\left( \overline{F}_{N}(s)\right)\left(\frac{C^{N}_{j}}{N}-\overline{F}_{j,N}(s) \right)\mathrm{d}s, \tag{58}\]
and, since
\[\left(\left\langle M_{g,N}\right\rangle(t)\right)=\left(\frac{1}{ N^{2}}\sum_{j=1}^{J}\lambda_{j}\int_{0}^{t}\left(N\nabla_{-e_{j}/N}(g)\left( \overline{F}_{N}(s)\right)\right)^{2}\overline{F}_{j,N}(s)Z_{N}(Ns)\,\mathrm{d}s \right.\\ \left.+\frac{1}{N^{2}}\sum_{j=1}^{J}\eta_{j}\int_{0}^{t}\left(N \nabla_{e_{j}/N}(g)\left(\overline{F}_{N}(s)\right)\right)^{2}\left(\frac{C^{ N}_{j}}{N}-\overline{F}_{j,N}(s)\right)\mathrm{d}s\right),\]
Relation (47) shows that, for \(t{<}\ell(a_{0})\), \(\mathbb{E}(\left\langle M_{g,N_{k}}\right\rangle(t))\) is converging to \(0\); the martingale \((M_{g,N_{k}}(t))\) therefore converges in distribution to \(0\) as \(k\) gets large. By using Relation (47) again and the differentiability properties of \(g\), it is easy to conclude the proof of the proposition.
For \(1{\leq}j{\leq}J\), by taking a function \(g(x){=}h(x_{j})\), \(x{=}(x_{i}){\in}[0,1]^{J}\), where \(h\) is \(\mathcal{C}_{2}\), we obtain that the relation
\[\left(\int_{0}^{t}\int_{[0,1]^{J}}h^{\prime}(x_{j})\left(\lambda_{j}x_{j}\frac {\langle\eta,c\!-\!x\rangle}{\langle\lambda,x\rangle}\!-\!\eta_{j}(c_{j}\!-\!x _{j})\right)\pi^{1}_{s}(\mathrm{d}x)\,\mathrm{d}s,t\!\!<\!\ell(a_{0})\right)= (0)\]
holds almost surely. Hence, almost surely, for any \(f\) in a dense subset of Borelian functions on \([0,1]\), the identity
\[\int_{[0,1]^{J}}f(x_{j})\left(\lambda_{j}x_{j}\frac{\langle\eta,c{-}x\rangle}{\langle\lambda,x\rangle}{-}\eta_{j}(c_{j}{-}x_{j})\right)\pi^{1}_{t}(\mathrm{d}x)=0\]
holds almost everywhere for \(t\!\!\in\!\!\mathbb{R}_{+}\), with respect to Lebesgue's measure.
If \(U(t){=}(U_{j}(t))\) is a random variable with distribution \(\pi^{1}_{t}\), the last relation can be translated in terms of conditional expectations: almost surely,
\[\lambda_{j}U_{j}(t)\mathbb{E}\,\left(\frac{\langle\eta,c\!-\!U(t)\rangle}{ \langle\lambda,U(t)\rangle}\bigg{|}\,U_{j}(t)\right)=\eta_{j}(c_{j}\!-\!U_{j}(t )),\]
almost everywhere for \(t\!\!\geq\!\!0\). The following proposition is the key step to identify completely the limit points of \(((\|\overline{F}_{N}(t)\|),\Lambda^{0}_{N})\).
**Proposition 23**.: _Let \(U{=}(U_{j})\) be a random variable on \(\prod_{1}^{J}[0,c_{j}]\), such that, almost surely, \(\|U\|{\geq}\varepsilon\), for some \(\varepsilon{>}0\), and, for \(1{\leq}j{\leq}J\),_
\[\lambda_{j}U_{j}\mathbb{E}\,\left(\frac{\langle\eta,c{-}U\rangle}{\langle \lambda,U\rangle}\bigg{|}\,U_{j}\right)=\eta_{j}(c_{j}{-}U_{j}), \tag{59}\]
_then, almost surely,_
\[U_{j}=\frac{\rho_{j}c_{j}}{\phi(\|U\|){+}\rho_{j}},\]
_where for \(y{\in}(0,1)\), \(\phi(y)\) is the unique solution of the equation_
\[\sum_{j=1}^{J}\frac{\rho_{j}}{\rho_{j}{+}\phi(y)}c_{j}=y,\]
_and \((\rho_{j})\) is given by Definition 1 and \(\phi\) in Theorem 13._
Proof.: Define, for \(1{\leq}j{\leq}J\),
\[A_{j}\stackrel{{\text{def.}}}{{=}}\frac{\eta_{j}(c_{j}{-}U_{j} )}{\lambda_{j}U_{j}},\quad B_{j}\stackrel{{\text{def.}}}{{=}} \lambda_{j}U_{j},\]
and \(\mathcal{F}_{j}\) is the \(\sigma\)-field associated to \(U_{j}\). Note that, because of the assumptions, the variables \(A_{j}\) and \(B_{j}\) are bounded; Relation (59) can then be re-written as
\[\mathbb{E}\,\left(\frac{\sum_{k}A_{k}B_{k}}{\sum_{k}B_{k}}\bigg{|}\,\mathcal{ F}_{j}\right)=A_{j},\]
or, since \(A_{j}\) and \(B_{j}\) are \(\mathcal{F}_{j}\)-measurable
\[\mathbb{E}\,\left(\frac{\sum_{k}(A_{k}{-}A_{j})A_{j}B_{j}B_{k}}{\sum_{k}B_{k}} \bigg{|}\,\mathcal{F}_{j}\right)=0,\]
this identity gives therefore the relation
\[\mathbb{E}\left(\frac{\sum_{j,k}(A_{j}{-}A_{k})^{2}B_{j}B_{k}}{\sum_{k}B_{k}} \right)=2\sum_{j=1}^{J}\mathbb{E}\left(\frac{\sum_{k}(A_{j}{-}A_{k})A_{j}B_{j} B_{k}}{\sum_{k}B_{k}}\right)=0,\]
since the variables \(B_{j}\), \(j{=}1\),\(\ldots\),\(J\) are almost surely positive, this implies that, almost surely, \(A_{j}{=}A_{1}\) for \(j{=}1\),\(\ldots\),\(J\), and therefore
\[U_{j}=\frac{\rho_{j}c_{j}}{A_{1}{+}\rho_{j}},\]
the proposition is proved.
Proof of Theorem 13.: We only have to gather all the results obtained up to now. Proposition 21 shows that, on the time interval \([0,\ell(a_{0}))\), the sequence of processes \((\|\overline{F}_{N_{k}}(t)\|)\) converges in distribution to \((H(t))\) given by
\[(H(t))=\left(\|\overline{f}_{0}\|{+}\int_{0}^{t}\int_{[0,1]^{J}} \left(\delta\frac{\langle\eta,c{-}x\rangle}{\langle\lambda,x\rangle}{-}\beta \right)\pi_{s}^{1}(\mathrm{d}x)\,\mathrm{d}s\right)\\ =\left(\|\overline{f}_{0}\|{+}\int_{0}^{t}(\delta\phi(H(s)){-} \beta)\,\mathrm{d}s\right), \tag{60}\]
by Proposition 23. This determines \((H(t))\) completely as the solution of Relation (40), and therefore the convergence of the sequence \((\|F_{N}(Nt)\|/N)\). Relation (57) of Proposition 22 and Proposition 23 give the desired expression, Relation (41), for the limit of the sequence \((\Lambda_{N}^{0})\) of occupation measures.
It is easily seen that the solution of Relation (40) satisfies Corollary 14; in particular, if \(H(0){>}0\), there exists \(w{>}0\) such that \(H(t){\geq}w\) for all \(t{\geq}0\). The convergence in distribution obtained can therefore be extended to any finite time interval of \(\mathbb{R}_{+}\). The theorem is proved.
|
2305.07154 | Foundations of Spatial Perception for Robotics: Hierarchical
Representations and Real-time Systems | 3D spatial perception is the problem of building and maintaining an
actionable and persistent representation of the environment in real-time using
sensor data and prior knowledge. Despite the fast-paced progress in robot
perception, most existing methods either build purely geometric maps (as in
traditional SLAM) or flat metric-semantic maps that do not scale to large
environments or large dictionaries of semantic labels. The first part of this
paper is concerned with representations: we show that scalable representations
for spatial perception need to be hierarchical in nature. Hierarchical
representations are efficient to store, and lead to layered graphs with small
treewidth, which enable provably efficient inference. We then introduce an
example of hierarchical representation for indoor environments, namely a 3D
scene graph, and discuss its structure and properties. The second part of the
paper focuses on algorithms to incrementally construct a 3D scene graph as the
robot explores the environment. Our algorithms combine 3D geometry, topology
(to cluster the places into rooms), and geometric deep learning (e.g., to
classify the type of rooms the robot is moving across). The third part of the
paper focuses on algorithms to maintain and correct 3D scene graphs during
long-term operation. We propose hierarchical descriptors for loop closure
detection and describe how to correct a scene graph in response to loop
closures, by solving a 3D scene graph optimization problem. We conclude the
paper by combining the proposed perception algorithms into Hydra, a real-time
spatial perception system that builds a 3D scene graph from visual-inertial
data in real-time. We showcase Hydra's performance in photo-realistic
simulations and real data collected by a Clearpath Jackal robots and a Unitree
A1 robot. We release an open-source implementation of Hydra at
https://github.com/MIT-SPARK/Hydra. | Nathan Hughes, Yun Chang, Siyi Hu, Rajat Talak, Rumaisa Abdulhai, Jared Strader, Luca Carlone | 2023-05-11T21:54:33Z | http://arxiv.org/abs/2305.07154v1 | # Foundations of Spatial Perception for Robotics: Hierarchical Representations and Real-time Systems
###### Abstract
3D spatial perception is the problem of building and maintaining an actionable and persistent representation of the environment in real-time using sensor data and prior knowledge. Despite the fast-paced progress in robot perception, most existing methods either build purely geometric maps (as in traditional SLAM) or "flat" metric-semantic maps that do not scale to large environments or large dictionaries of semantic labels. The first part of this paper is concerned with representations: we show that scalable representations for spatial perception need to be _hierarchical_ in nature. Hierarchical representations are efficient to store, and lead to layered graphs with small _treewidth_, which enable provably efficient inference. We then introduce an example of hierarchical representation for indoor environments, namely a _3D scene graph_, and discuss its structure and properties. The second part of the paper focuses on algorithms to incrementally construct a 3D scene graph as the robot explores the environment. Our algorithms combine 3D geometry (_e.g._, to cluster the free space into a graph of places), topology (to cluster the places into rooms), and geometric deep learning (_e.g._, to classify the type of rooms the robot is moving across). The third part of the paper focuses on algorithms to maintain and correct 3D scene graphs during long-term operation. We propose hierarchical descriptors for loop closure detection and describe how to correct a scene graph in response to loop closures, by solving a _3D scene graph optimization problem_. We conclude the paper by combining the proposed perception algorithms into _Hydra_, a real-time spatial perception system that builds a 3D scene graph from visual-inertial data in real-time. We showcase Hydra's performance in photo-realistic simulations and real data collected by a Clearpath Jackal robots and a Unitree AI robot. We release an open-source implementation of Hydra at [https://github.com/MIT-SPARK/Hydra](https://github.com/MIT-SPARK/Hydra).
## I Introduction
The next generation of robots and autonomous systems will need to build actionable, metric-semantic, multi-resolution, persistent representations of large-scale unknown environments in real-time. _Actionable_ representations are required for a robot to understand and execute complex human instructions (_e.g._, "bring me the cup of tea I left on the dining room table"). These representations include both _geometric and semantic_ aspects of the environment (_e.g._, to plan a path to the dining room, and understand where the table is); moreover, they should allow reasoning over relations between objects (_e.g._, to understand what it means for the cup of tea to be _on_ the table). These representations need to be _multi-resolution_, in that they might need to capture information at multiple levels of abstraction (_e.g._, from objects to rooms, buildings, and cities) to interpret human commands and enable fast planning (_e.g.,_ by allowing planning over compact abstractions rather than dense low-level geometry). Such representations must be built in _real-time_ to support just-in-time decision-making. Finally, these representations must be _persistent_ to support long-term autonomy: (i) they need to scale to large environments, (ii) they should allow fast inference and corrections as new evidence is collected by the robot, and (iii) their size should only grow with the size of the environment they model.
**3D spatial perception** (or Spatial AI [1]) is the problem of building actionable and persistent representations from sensor data and prior knowledge in real-time. This problem is the natural evolution of Simultaneous Localization and Mapping (SLAM), which also focuses on building persistent map representations in real-time, but is typically limited to geometric understanding. In other words, if the task assigned to the robot
Fig. 1: We introduce _Hydra_, a highly parallelized system that builds 3D scene graphs from sensor data in real-time, by combining geometric reasoning (e.g., to build a 3D mesh and cluster the free space into a graph of places), with topology (to cluster the places into rooms), and geometric deep learning (e.g., to classify the type of rooms the robot is moving across). The figure shows sample input data and the 3D scene graph created by Hydra in a large-scale real environment.
is purely geometric (_e.g.,_ "go to position [X, Y, Z]"), then spatial perception reduces to SLAM and 3D reconstruction, but as the task specifications become more advanced (_e.g.,_ including semantics, relations, and affordances), we obtain a much richer problem space and SLAM becomes only a component of a more elaborate spatial perception system.
The pioneering work [1] introduced the notion of spatial AI. Indeed, the requirement that spatial AI build actionable and persistent representations can already be found in [1]. In this paper we take a step further by arguing that such representations must be _hierarchical_ in nature, since a suitable hierarchical organization reduces storage during long-term operation and leads to provably efficient inference. Moreover, going beyond the vision in [1], we discuss how to combine different tools (metric-semantic SLAM, 3D geometry, topology, and geometric deep learning) to implement a real-time spatial perception system for indoor environments.
**Hierarchical Representations.** 3D Scene Graphs [2, 3, 4, 5, 6, 7] have recently emerged as expressive hierarchical representations of 3D environments. A 3D scene graph (_e.g.,_ Fig. 1) is a layered graph where nodes represent spatial concepts at multiple levels of abstraction (from low-level geometry to objects, places, rooms, buildings, etc.) and edges represent relations between concepts. Armeni et al. [3] pioneered the use of 3D scene graphs in computer vision and proposed the first algorithms to parse a metric-semantic 3D mesh into a 3D scene graph. Kim et al. [5] reconstruct a 3D scene graph of objects and their relations. Rosinol et al. [2, 4] propose a novel 3D scene graph model that (i) is built directly from sensor data, (ii) includes a sub-graph of places (useful for robot navigation), (iii) models objects, rooms, and buildings, and (iv) captures moving entities in the environment. Recent work [6, 7, 8, 9] infers objects and relations from point clouds, RGB-D sequences, or object detections. In this paper, we formalize the intuition behind these works that suitable representations for robot perception have to be hierarchical in nature, and discuss guiding principles behind the choice of "symbols" we have to include in these representations.
**Real-time Systems.** While 3D scene graphs can serve as an advanced "mental model" for robots, methods to build such a rich representation in real-time remain largely unexplored. The works [5, 6, 7] allow real-time operation but are restricted to "flat" 3D scene graphs that only include objects and their relations while disregarding the top layers in Fig. 1. The works [2, 3, 4], which focus on building truly hierarchical representations, run offline and require several minutes to build a 3D scene graph ([3] even assumes the availability of a complete metric-semantic mesh of the environment built beforehand). Extending our prior works [2, 4] to operate in real-time is non-trivial. These works utilize a Euclidean Signed Distance Function (ESDF) of the entire environment to build the scene graph. Unfortunately, ESDF memory requirements scale poorly with the size of the environment; see [10] and Section II. Moreover, the extraction of places and rooms in [2, 4] involves batch algorithms that process the entire ESDF, whose computational cost grows over time and is incompatible with real-time operation. Finally, the ESDF is reconstructed from the robot trajectory estimate which keeps changing in response to loop closures. The approaches [2, 4] would therefore need to rebuild the scene graph from scratch after every loop closure, clashing with real-time operation.
The present paper extends our prior work [11] and proposes the first real-time system to build hierarchical 3D scene graphs of large-scale environments. Following [11], recent works have explored constructing _situational graphs_[12, 13], a hierarchical representation for scene geometry with layers describing free-space traversability, walls, rooms, and floors. While related to this research line, the works [12, 13] focus on LIDAR-based systems, which mostly reason over geometric features (_e.g.,_ walls, rooms, floors), but lack the rich semantics we consider in this paper (_e.g.,_ objects and room labels). We postpone a more extensive literature review to Section VII.
**Contribution 1: Foundations of Hierarchical Representations (Section II).** We start by observing that flat metric-semantic representations scale poorly with the size of the environment and the size of the vocabulary of semantic labels the robot has to incorporate in the map. For instance, a voxel-based metric-semantic map picturing the floor of an office building with \(40\) semantic labels per voxel (as the one underlying the approaches of [2] and [14]) already requires roughly \(450\,\mathrm{MiB}\) to be stored. Envisioning future robots to operate on much larger scales (_e.g.,_ an entire city) and using a much larger vocabulary (_e.g.,_ the English dictionary includes roughly 500,000 words), we argue that research should move beyond flat representations. We show that hierarchical representations allow us to largely reduce the memory requirements, by enabling lossless compression of semantic information into a layered graph, as well as lossy compression of the geometric information into meshes and graph-structured representations of the free space. Additionally, we show that hierarchical representations are amenable to efficient inference. In particular, we prove that the layered graphs appearing in hierarchical map representations have small _treewidth_, a property that enables efficient inference; for instance, we conclude that the treewidth of the scene graph modeling an indoor environment does not scale with the number of nodes in the graph (_i.e.,_ roughly speaking, the number of nodes is related to the size of the environment), but rather with the maximum number of objects in each room. While most of the results above are general and apply to a broad class of hierarchical representations, we conclude this part by introducing a specific hierarchical representation for indoor environments, namely _3D scene graphs_.
**Contribution 2: Real-time Incremental 3D Scene Graph Construction (Section III).** After establishing the importance of hierarchical representations, we move to developing a suite of algorithms to estimate 3D scene graphs from sensor data. In particular, we develop real-time algorithms that can incrementally estimate a 3D scene graph of an unknown building from visual-inertial sensor data. We use existing methods for geometric processing to incrementally build a metric-semantic mesh of the environment and reconstruct a sparse graph of "places"; intuitively, the mesh describes the occupied space (including objects in the environment), while the graph of places provides a succinct description of the free space. Then we propose novel algorithms to efficiently cluster the places into rooms; here we use tools from topology, and in
particular the notion of _persistent homology_[15]. Finally, we use novel architectures for geometric deep learning, namely _neural trees_[16], to infer the semantic labels of each room (_e.g.,_ bedroom, kitchen, etc.) from the labels of the objects within. Towards this goal, we show that our 3D scene graph representation allows us to quickly and incrementally compute a tree-decomposition of the 3D scene graph, which --together with our bound on the scene graph treewidth-- ensures that the neural tree runs in real-time on an embedded GPU.
**Contribution 3: Persistent Representations via Hierarchical Loop Closure Detection and 3D Scene Graph Optimization (Section IV).** Building a persistent map representation requires the robot to recognize it is revisiting a location it has seen before, and correcting the map accordingly. We propose a novel hierarchical approach for loop closure detection: the approach involves (i) a _top-down loop closure detection_ that uses hierarchical descriptors --capturing statistics across layers in the scene graph-- to find putative loop closures, and (ii) a _bottom-up geometric verification_ that attempts estimating the loop closure pose by registering putative matches. Then, we propose the first algorithm to optimize a 3D scene graph in response to loop closures; our approach relies on _embedded deformation graphs_[17] to simultaneously and consistently correct all the layers of the scene graph, including the 3D mesh, objects, places, and the robot trajectory.
**Contribution 4: Hydra, a Real-Time Spatial Perception System (Section V).** We conclude the paper by integrating the proposed algorithms into a highly parallelized perception system, named _Hydra_, that combines fast early and mid-level perception processes (_e.g.,_ local mapping) with slower high-level perception (_e.g.,_ global optimization of the scene graph). We demonstrate Hydra in challenging simulated and real datasets, across a variety of environments, including an apartment complex, an office building, and two student residences. Our experiments (Section VI) show that (i) we can reconstruct 3D scene graphs of large, real environments in real-time, (ii) our online algorithms achieve an accuracy comparable to batch offline methods and build a richer representation compared to competing approaches [7], and (iii) our loop closure detection approach outperforms standard approaches based on bag-of-words and visual-feature matching in terms of quality and quantity of detected loop closures. The source code of Hydra is publicly available at [https://github.com/MIT-SPARK/Hydra](https://github.com/MIT-SPARK/Hydra).
**Novelty with respect to [16, 11].** This paper builds on our previous conference papers [16, 11] but includes several novel findings. First, rather than postulating a 3D scene graph structure as done in [11], we formally show that hierarchical representations are crucial to achieve scalable scene understanding and fast inference. Second, we propose a novel room segmentation method based on the notion of persistent homology, as a replacement for the heuristic method in [11]. Third, we develop novel learning-based hierarchical descriptors for place recognition, that further improve performance with respect to the handcrafted hierarchical descriptors in [11]. Fourth, the real-time system described in this paper is able to also assign room labels, leveraging the neural tree architecture from [16]; while [16] uses the neural tree over small graphs corresponding to a single room, in this paper we provide an efficient way to obtain a tree-decomposition of the top layers of the scene graph, hence extending [16] to simultaneously operate over _all_ rooms and account for their relations. Finally, this paper includes further experiments and evaluations on real robots (Clearpath Jackal robots and Unitree A1 robots) and comparisons with recent scene graph construction baselines [7].
## II Symbol Grounding and the Need for Hierarchical Representations
The goal of this section is twofold. First, we remark that in order to support the execution of complex instructions, map representations must be metric-semantic, and hence ground symbolic knowledge (_i.e.,_ semantic aspects in the scene) into the map geometry (Section II-A). Second, we observe that "flat" representations, which naively store semantic attributes for each geometric entity in the map (_e.g.,_ attach semantic labels to each voxel in an ESDF) scale poorly in the size of the environment; on the other hand, hierarchical representations scale better to large environments (Section II-B) and enable efficient inference (Section II-C). We conclude the section by introducing the notion of _3D scene graph_, an example of hierarchical representation for indoor environments, and discussing its structure and properties (Section II-D).
### _Symbols and Symbol Grounding_
High-level instructions issued by humans (_e.g.,_ "go and pick up the chair") involve symbols. A _symbol_ is the representation of a concept that has a specific meaning for a human (_e.g.,_ the word "chair" is a symbol in the sense that, as humans, we understand the nature of a chair). For a symbol to be useful to guide robot action, it has to be _grounded_ into the physical world. For instance, for the robot to correctly execute the instruction "go and pick up the chair", the robot needs to ground the symbol "chair" to the physical location the chair is situated in.1 In principle, we could design the perception system of our robot to ground symbols directly in the sensor data. For instance, a robot with a camera could map image pixels to appropriate symbols (_e.g.,_ "chair", "furniture", "kitchen", "apartment"), as commonly done in 2D semantic segmentation [19]. However, grounding symbols directly into sensor data is not scalable: sensor data (_e.g.,_ images) is collected at high rates and is relatively expensive to store. This is neither convenient nor efficient when grounding symbols for long-term operation. Instead, we need intermediate (or _sub-symbolic_) representations that compress the raw sensor data into a more compact format, and that can be used to ground symbols. Traditional geometric map models used in robotics (_e.g.,_ 2D occupancy grids or 3D voxel-based maps) can be understood as sub-symbolic representations: each cell in a voxel grid does not represent a semantic concept, and instead is used to ground two symbols: "obstacle" and "free-space". Therefore, this paper is concerned with building _metric-semantic_ representations, which ground symbolic knowledge into geometric (_i.e.,_ sub-symbolic) representations.
**Which Symbols Should a Map Contain?** In this paper, we directly specify the set of symbols the robot is interested in (_i.e.,_ certain classes of objects and rooms, to support navigation in indoor environments). However, it is worth discussing how to choose the symbols more generally. The choice of symbols clearly depends on the task the robot has to execute. A mobile robot may implement a path planning query by just using the "free-space" symbol and the corresponding grounding, while a more complex domestic robot may instead need additional symbols (_e.g.,_ "cup of coffee", "table", and "kitchen") and their groundings to execute instructions such as "bring me the cup of coffee on the table in the kitchen". A potential way to choose the symbols to include in the map therefore consists of inspecting the set of instructions the robot has to execute, and then extracting the list of symbols (_e.g.,_ objects and relations of interest) these instructions involve. For instance, in this paper, we mostly focus on indoor environments and assume the robot is tasked with navigating to certain objects and rooms in a building. Therefore, the symbols we include in our maps include free-space, objects, rooms, and buildings. After establishing the list of relevant symbols, the goal is for the robot to build a compact sub-symbolic representation (_i.e.,_ a geometric map), and then ground these symbols into the sub-symbolic representation. A pictorial representation of this idea is given in Fig. 2(a): the robot builds an occupancy map (sub-symbolic representation) and attaches a number of semantic labels (symbols) to each cell. We refer to this representation as "flat" since each cell is assigned a list of symbols of interest. As we will see shortly, there are more clever ways to store the same information.
### _Hierarchical Representations Enable Large-Scale Mapping_
While the flat representation of Fig. 2(a) may contain all the information the robot needs to complete its task, it is unnecessarily expensive to store. Here we show that the symbolic knowledge underlying such a representation naturally lends itself to being efficiently stored using a hierarchical representation.
Consider the case where we use a flat representation (as shown in Fig. 2(a)) to represent a 3D scene and assume our dictionary contains \(L\) symbols of interest. Let \(\delta\) be the cell (or voxel) size and call \(V\) the volume of the scene the representation describes. Then, the corresponding metric-semantic representation would require a memory of:
\[m=\mathcal{O}\left(L\cdot V/\delta^{3}\right) \tag{1}\]
to store \(L\) labels for each voxel. Note that \(m\) grows with the number of symbols \(L\), multiplied by the size of the sub-symbolic representation \(V/\delta^{3}\) (_i.e.,_ the number of voxels in the volume). When mapping large environments the term \(V/\delta^{3}\) quickly becomes unsustainably large; for instance, covering a \(10\)km \(\times\)\(10\)km area with a \(10\)cm grid resolution, even disregarding the vertical dimension, would require \(10^{10}\) voxels. In addition, it would be desirable for the next generation of robots to execute a large set of tasks, hence requiring them to ground a large dictionary of symbols \(L\).2 The size of the sub-symbolic representation, the number of symbols, and their multiplicative relation in (1) make a flat metric-semantic representation unsuitable for large-scale spatial perception.
Footnote 2: For reference, the English dictionary includes around \(500,000\) words.
A key observation here is that multiple voxels encode the same grounded symbols (_e.g.,_ a chair). In addition, many symbols naturally admit a hierarchical organization where higher-level concepts (_i.e.,_ buildings or rooms for indoor environments) contain lower-level symbols (_e.g.,_ objects). This suggests that we can more efficiently store information _hierarchically_, where all voxels associated with a certain object (_e.g.,_ all voxels belonging to a chair) are mapped to the same symbolic node (_e.g.,_ an object node with the semantic label "chair"), object nodes are associated with the room they belong to, room nodes are associated to the apartment unit they belong to, and so on. This transforms the flat representation of
Fig. 2: (a) Example of flat metric-semantic representation. We show labels only for a subset of the cells in the grid map for the sake of readability. (b) Example of hierarchical metric-semantic representation with a grid-map as sub-symbolic layer. (c) A hierarchical metric-semantic representation with compressed sub-symbolic layer. The cells representing each object are compressed into bounding polygons, and the free-space cells are compressed into a sparse graph where each node and edge are also assigned a radius that defines a circle of free-space around it.
Fig. 2(a) into the hierarchical model of Fig. 2(b), where only the lowest level symbols are directly grounded into voxels, while the top layers are arranged hierarchically. This hierarchical representation is more parsimonious and reduces the required memory to
\[m=\mathcal{O}\left(V/\delta^{3}+N_{\text{objects}}+N_{\text{rooms}}+\cdots+N_{ \text{buildings}}\right), \tag{2}\]
where \(N_{\text{layer}}\) (with layer \(\in\{\text{objects},\text{rooms},\ldots\}\)) is the number of symbols at the specific layer of the hierarchy. This representation is more memory efficient than (1) because it decouples the number of symbols from the size \(V/\delta^{3}\) of the sub-symbolic representation. For instance, the scene in Fig. 2 has 336 voxels, and we assume \(L=5\).3 Then, the flat representation of Fig. 2(a) involves storing \(1680\) symbols, while Fig. 2(b) only requires storing \(355\) symbols and \(354\) edges describing their hierarchical relations. Crucially, the compression we performed when moving from Fig. 2(a) to Fig. 2(b) is _lossless_: the two representations contain exactly the same amount of information.
Footnote 3: The symbols are stored as building type, apartment type, room type, object type, and free-space/obstacle.
While we "compressed" the symbolic representation using a hierarchical data structure, the term \(V/\delta^{3}\) in (2) that corresponds to the sub-symbolic layer is still impractically large for many applications. Therefore, our robots will also typically need some compression mechanism for the sub-symbolic layer that provides a more succinct description of the occupied and free space as compared to voxel-based maps. Fortunately, the mapping literature already offers alternatives to standard 3D voxel-based maps such as OctTree [20] or neural implicit representations [21]. In general, this compression reduces the memory requirement to
\[m=\mathcal{O}\left(N_{\text{sub-sym}}+N_{\text{objects}}+N_{\text{rooms}}+ \cdots+N_{\text{buildings}}\right), \tag{3}\]
where \(N_{\text{sub-sym}}\) is the size of the compressed sub-symbolic representation. The behavior of \(N_{\text{sub-sym}}\) is driven by the compression approach used, but in general \(N_{\text{sub-sym}}\) ends up being much smaller than \(V/\delta^{3}\); in the ideal case \(N_{\text{sub-sym}}\) would grow gracefully with respect to the complexity of the scene and the resolution required by the task. We show a notional example of a compressed sub-symbolic layer in Fig. 2(c). This can represent the 336 original cells of the sub-symbolic layer of Fig. 2(b) with roughly 23 nodes, 22 edges, and a 2D polygonal mesh with roughly 132 vertices. Note that moving from Fig. 2(b) to Fig. 2(c) may entail a _lossy_ compression of the sub-symbolic layer, _i.e.,_ the compressed representation may only be an approximate representation of the geometry of the environment. We consider this to be a feature rather than a bug: the general idea is that we can always compress the sub-symbolic representation to fit into the available memory of our robot, although such a compression might induce some performance degradation in the execution of the task (_e.g.,_ coarser maps might lead to longer paths in motion planning).
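To make the comparison between (1) and (2) concrete, the following Python sketch computes both footprints; the floor dimensions, bytes-per-label and bytes-per-symbol constants, and symbol counts are illustrative assumptions of ours, chosen so that the flat case roughly reproduces the \(450\,\mathrm{MiB}\) figure quoted above.

```python
def flat_map_bytes(volume_m3, voxel_m, num_labels, bytes_per_label=4):
    """Eq. (1): a flat metric-semantic map stores L label entries per voxel."""
    num_voxels = volume_m3 / voxel_m ** 3
    return num_voxels * num_labels * bytes_per_label


def hierarchical_map_bytes(volume_m3, voxel_m, symbols_per_layer,
                           bytes_per_voxel=4, bytes_per_symbol=64):
    """Eq. (2): one occupancy entry per voxel plus one node per grounded symbol."""
    num_voxels = volume_m3 / voxel_m ** 3
    return num_voxels * bytes_per_voxel + sum(symbols_per_layer) * bytes_per_symbol


# Illustrative office floor: 30 m x 40 m x 2.5 m, 10 cm voxels, 40 labels.
flat = flat_map_bytes(30 * 40 * 2.5, 0.10, 40)                    # ~458 MiB
hier = hierarchical_map_bytes(30 * 40 * 2.5, 0.10, [500, 20, 1])  # objects, rooms, building
print(f"flat: {flat / 2**20:.0f} MiB, hierarchical: {hier / 2**20:.0f} MiB")
```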
### _Hierarchical Representations Enable Efficient Inference_
While above we showed that hierarchical representations are more scalable in terms of memory requirements, this section shows that the graphs underlying hierarchical representations also enable fast inference. Specifically, we show that these hierarchical graphs have small _treewidth_: their treewidth does not grow with the size of the graph (_i.e.,_ proportionally to the size of the explored scene), but rather with the treewidth of each layer in the hierarchy. The treewidth is a well-known measure of complexity for many problems on graphs [22, 23, 24, 25]. Chandrasekaran et al. [26] show that the graph treewidth is the only structural parameter that influences tractability of probabilistic inference on graphical models: while inference on graphical models is NP-hard in general [27], proving that a graph has small treewidth opens the door to efficient, polynomial-time inference algorithms. Additionally, in our previous work we have shown that the treewidth is also the main factor impacting the expressive power and tractability of novel graph neural network architectures, namely _neural trees_[16]. The results in this section therefore open the door to the efficient use of powerful tools for learning over graphs; see Section III-D2.
In the rest of this section, we formalize the notion of hierarchical graph and show that the treewidth of a hierarchical graph is always bounded by the maximum treewidth of each of its layers. We do this by proving that the tree decomposition of the hierarchical graph can be obtained by a suitable concatenation of the tree decompositions of its layers. The results in this section are general and apply to arbitrary hierarchical representations (as defined below) beyond the representations of indoor environments we consider later in the paper.
**Definition 1** (Hierarchical Graph).: _A graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is said to be a hierarchical graph if the set of nodes \(\mathcal{V}\) can be partitioned into \(\ell\) layers, i.e., \(\mathcal{V}=\cup_{i=1}^{\ell}\mathcal{V}_{i}\), such that_
1. _single parent:_ _each node_ \(v\in\mathcal{V}_{i}\) _at layer_ \(i\) _shares an edge with at most one node in the layer_ \(\mathcal{V}_{i+1}\) _above,_
2. _each_ \(v\in\mathcal{V}_{i}\) _can only share edges with nodes in the same or adjacent layers,_ i.e.,_ \(\mathcal{V}_{i-1}\)_,_ \(\mathcal{V}_{i}\)_, or_ \(\mathcal{V}_{i+1}\)_,_
3. _disjoint children: for any_ \(u,v\in\mathcal{V}_{i}\) _the children of_ \(u\) _and_ \(v\)_, namely_ \(C(v)\) _and_ \(C(u)\) _are disjoint_ (i.e., _they share no nodes or edges), where_ \[C(u)\triangleq\{w\in\mathcal{V}_{i-1}\ |\ (w,u)\in\mathcal{E}\}\] (4) _denotes the children of_ \(u\)_,_ i.e., _the set of nodes in layer_ \(\mathcal{V}_{i-1}\) _sharing an edge with_ \(u\)_._
_We refer to \(\mathcal{G}\) as an \(\ell\)-layered hierarchical graph and denote with \(\mathcal{V}_{i}\) the set of nodes in layer \(i\). Moreover, we conventionally order the layers from the bottom to the top, i.e., the lowest layer is \(\mathcal{V}_{1}\), while the top layer is \(\mathcal{V}_{\ell}\)._
To gain some insight into Definition 1, consider a hierarchical graph where the bottom layer \(\mathcal{V}_{1}\) describes the objects in the environment, while the higher layers \(\mathcal{V}_{2}\) and \(\mathcal{V}_{3}\) describe room and buildings, respectively. Then, the first condition in Definition 1 requires that each object belongs to a single room, and each room belongs to a single building. The second condition restricts the inter-layer edges to connect objects to rooms, and rooms to buildings. Finally, the third condition says that objects in a room are not connected to objects in another room, and that rooms in a building do not share edges
with rooms in other buildings. These conditions are relatively mild; we note that they are easily met when the edges in the graph represent inclusion or adjacency (_i.e.,_ for the graphs considered in the rest of this paper).
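As an illustration only, the following Python sketch checks the three conditions of Definition 1 on a layered graph; the function names and the `layer_of` dictionary are our own conventions, and the graph is assumed to be an undirected networkx graph.

```python
import networkx as nx


def parent(G, layer_of, v):
    """Return the (unique) node one layer above v that shares an edge with v, if any."""
    parents = [u for u in G[v] if layer_of[u] == layer_of[v] + 1]
    return parents[0] if parents else None


def is_hierarchical(G: nx.Graph, layer_of: dict) -> bool:
    """Check the conditions of Definition 1; layer_of[v] gives the layer index of node v."""
    for u, v in G.edges:                      # condition 2: same or adjacent layers only
        if abs(layer_of[u] - layer_of[v]) > 1:
            return False
    for v in G.nodes:                         # condition 1: at most one parent
        if len([u for u in G[v] if layer_of[u] == layer_of[v] + 1]) > 1:
            return False
    for u, v in G.edges:                      # condition 3: no edges across sibling groups
        if layer_of[u] == layer_of[v]:
            pu, pv = parent(G, layer_of, u), parent(G, layer_of, v)
            if pu is not None and pv is not None and pu != pv:
                return False
    return True
```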
We now show that a tree decomposition of a hierarchical graph can be constructed by concatenating the tree decomposition of each of its layers. This result will allow us to obtain the treewidth bound in Proposition 3. The resulting tree decomposition algorithm (Algorithm 1) will also be used in the neural tree approach used for room classification in Section III-D2. The interested reader can find a refresher about tree decomposition and treewidth in Appendix A4 and an example of execution of Algorithm 1 in Fig. 3.
Footnote 4: We leave the proof of Theorem 2 in the main text since it contains a description of Algorithm 1, while we postpone other proofs to the appendix.
**Theorem 2** (Tree Decomposition of Hierarchical Graph).: _Let \(\mathcal{G}=(\mathcal{V}=\cup_{i=1}^{\ell}\mathcal{V}_{i},\mathcal{E})\) be an \(\ell\)-layered hierarchical graph. Let \(T\) be the tree decomposition of the sub-graph spanned by \(\mathcal{V}_{\ell}\) (top layer) and \(T_{v}\) be the tree decomposition of the sub-graph spanned by the children \(C_{i}(v)\) of \(v\), for \(v\in\mathcal{V}_{i}\) and \(i=\ell,\ldots,2\). Then a tree decomposition for graph \(\mathcal{G}\) can be constructed from \(\{T_{v}\mid v\in\mathcal{V}_{i}\text{ and }i=\ell,\ldots,2\}\) and \(T\), by the simple concatenation procedure described in Algorithm 1._
Proof:: _(i) Notation_: We denote the tree decomposition of a graph \(\mathcal{G}\) by \(T=(B,E)=\text{TD}\left[\mathcal{G}\right]\), where \(B\) denotes the collection of bags and \(E\) denotes the set of edges in the tree decomposition graph \(T\). For a tree decomposition graph \(T=(B,E)\) and any node \(w\) (not already included in any of the bags \(b\in B\)), we denote \(T+\{w\}\) to be the tree decomposition \(T\), after adding the element \(w\) to every bag in \(T\). Finally, given two disjoint trees \(T,T^{\prime}\) and an edge \((b,b^{\prime})\) with \(b\in T\) and \(b^{\prime}\in T^{\prime}\), we use \(T\gets T\oplus T^{\prime}\oplus(b,b^{\prime})\) to indicate updating graph \(T\) by adding graph \(T^{\prime}\), along with the edge \((b,b^{\prime})\) to it.
_(ii) Intuitive Explanation of Algorithm 1_: We form the tree decomposition \(T\) of the hierarchical graph sequentially. We initialize \(T\) with the tree decomposition of the top layer graph \(\mathcal{G}_{\ell}=\mathcal{G}[\mathcal{V}_{\ell}]\). We then augment \(T\) with the tree decomposition graphs \(T_{v}\) of \(C(v)\), for each \(v\in\mathcal{V}_{\ell}\). We augment bags of \(T_{v}\) with element \(\{v\}\) to mark the fact that \(v\) connects each node in \(C(v)\) (in graph \(\mathcal{G}\)). This procedure is carried out for the remaining layers \(i=\ell-1,\ldots,2\) and all nodes \(v\in\mathcal{V}_{i}\).
Figure 3 shows an example of execution of Algorithm 1 for the graph in Fig. 3(a). Figure 3(b) shows the (disconnected) tree decompositions produced by line 2 (which produces the single green bag B1) and the first execution of line 5 (_i.e.,_ for \(i=\ell\), which produces the two red bags). Figure 3(c) shows the result produced by the first execution of line 6 and line 9, which adds B1 to all the red bags, and then connects the two tree decompositions with an edge, respectively. Figures 3(d) and 3(e) show the result produced by the next iteration of the algorithm.
_(iii) Proof_: Recall that a tree decomposition \(T=(B,E)\) of a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) must satisfy the following conditions:
**C1**: \(T\) must be a tree;
**C2**: Every bag \(b\in B\) must be a subset of nodes of the given graph \(\mathcal{G}\), _i.e.,_\(b\subseteq\mathcal{V}\);
**C3**: For all edges \((u,v)\in\mathcal{E}\) in the given graph \(\mathcal{G}\), there must exist a bag \(b\in B\) such that \(u,v\in b\);
**C4**: For every node \(v\in\mathcal{V}\) in the given graph \(\mathcal{G}\), the set of bags \(\{b\in B\mid v\in b\}\) must form a connected component in \(T\);
**C5**: For every node \(v\in\mathcal{V}\) in the given graph \(\mathcal{G}\), there must exist a bag \(b\in B\) that contains it, _i.e.,_\(v\in b\). To state this in another way: \(\cup_{b\in B}\{v\in b\}=\mathcal{V}\).
Algorithm 1 constructs a tree decomposition \(T\) of \(\mathcal{G}\) sequentially by exploring the graph, by layers (going from layer-\(\ell\) to layer 1) and by nodes \(v\) in each layer. Let \(\mathcal{G}^{e}_{t}\) denote the sub-graph of \(\mathcal{G}\) that is explored by Algorithm 1, until iteration \(t\). Here, we count an iteration to be the execution of Line 9, and count the initialization at Line 2 as the first iteration (_i.e.,_ the first iteration initializes the tree decomposition, while the subsequent iterations update the tree decomposition).
We prove that, at any iteration \(t\), the tree decomposition graph \(T\) constructed in Algorithm 1 is a valid tree decomposition of the graph \(\mathcal{G}^{e}_{t}\). We prove this by induction. At \(t=1\), the explored graph \(\mathcal{G}^{e}_{1}\) is nothing but \(\mathcal{G}_{\ell}\). Note that \(T\), after iteration 1 (_i.e.,_ after the initialization at Line 2), by definition equals the tree decomposition of \(\mathcal{G}^{e}_{1}\). Now we only need to prove that subsequent iterations produce a tree decomposition of \(\mathcal{G}^{e}_{t}\). Each new iteration (_i.e.,_ execution of Line 9) updates \(T\) to include \(T^{\prime}_{v}\) and the edge \((b,b^{\prime})\). The new explored graph is \(\mathcal{G}^{e}_{t+1}=\mathcal{G}^{e}_{t}\oplus\mathcal{G}[C(v)+\{v\}]\). It suffices to argue that \(T\oplus T^{\prime}_{v}\oplus(b,b^{\prime})\) (Lines 5 to 9) is a valid tree decomposition of \(\mathcal{G}^{e}_{t+1}=\mathcal{G}^{e}_{t}\oplus\mathcal{G}[C(v)+\{v\}]\). We argue this by proving that the requirements C1-C5 are satisfied.
For C1, we note that \(T\oplus T^{\prime}_{v}\oplus(b,b^{\prime})\) is a tree as \(T\) and \(T^{\prime}_{v}\) are disjoint trees and \((b,b^{\prime})\) is an edge connecting the two trees, \(T\) and \(T^{\prime}_{v}\). C2 is satisfied because \(T\) and \(T^{\prime}_{v}\) are tree decompositions of sub-graphs of \(\mathcal{G}\). C3 is satisfied for all edges in \(\mathcal{G}^{e}_{t}\) and in \(\mathcal{G}[C(v)]\). The only edges that remain are those that connect node \(v\) to nodes in \(C(v)\), namely \(\{(v,u)\mid u\in C(v)\}\). We note that by adding \(\{v\}\) to every bag of \(T_{v}\) to obtain \(T^{\prime}_{v}\) we get that all such edges are also included in the bags of \(T^{\prime}_{v}\). C4 is satisfied again because \(T\) and \(T_{v}\) are valid tree decompositions and \(T^{\prime}_{v}\) is built from \(T_{v}\) by adding \(v\) to all bags of \(T_{v}\). As \(T_{v}\) is a tree graph disjoint from \(T\), we retain property C4. Finally, C5 is satisfied because all nodes in \(\mathcal{G}_{t+1}^{e}=\mathcal{G}_{t}^{e}\oplus\mathcal{G}[C(v)+\{v\}]\) (_i.e.,_ nodes in \(\mathcal{G}_{t}^{e}\) and \(C(v)\)) have a bag in \(T\oplus T_{v}^{\prime}\oplus(b,b^{\prime})\) containing them.
Theorem 2 above showed that we can easily build a tree decomposition of a hierarchical graph by cleverly "gluing together" tree decompositions of sub-graphs in the hierarchy. Now we want to use that result to bound the treewidth of a hierarchical graph \(\mathcal{G}\). Each node in the tree decomposition is typically referred to as a _bag_, and intuitively represents a subset of nodes in the original graph. The treewidth of a tree decomposition \(T\) is defined as the size of the largest bag minus one (see Appendix A). The treewidth of a graph \(\mathcal{G}\) is then defined as the minimum treewidth that can be achieved among all tree decompositions of \(\mathcal{G}\). Since Algorithm 1 describes one such tree decomposition, Theorem 2 implies the following upper bound on the treewidth of a hierarchical graph.
**Proposition 3** (Treewidth of Hierarchical Graph).: _The treewidth of an \(\ell\)-layered hierarchical graph \(\mathcal{G}\) is bounded by the treewidths of the sub-graphs \(\mathcal{G}[C(v)]\) spanned by the children \(C(v)\) of every node \(v\) in \(\mathcal{G}\) and the treewidth of the graph \(\mathcal{G}[\mathcal{V}_{\ell}]\) spanned by all layer-\(\ell\) nodes:_
\[\text{tw}\left[\mathcal{G}\right]\leq\max\left\{\max_{v\in\mathcal{V}}\{\text{tw }\left[\mathcal{G}[C(v)]\right]+1\},\text{tw}\left[\mathcal{G}[\mathcal{V}_{ \ell}]\right]\right\}. \tag{5}\]
Intuitively, the proposition states that the treewidth of a hierarchical graph does not grow with the number of nodes in the graph, but rather with the treewidth (roughly speaking, the complexity) of each layer in the graph.
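For concreteness, the sketch below mirrors the concatenation procedure of Algorithm 1 in Python, using the min-degree heuristic from networkx to decompose the top layer and each group of children; the helper names are ours, and we assume non-empty layers in which every node below the top layer has a parent (as guaranteed, e.g., when inter-layer edges encode inclusion).

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree


def tree_decomposition(G: nx.Graph) -> nx.Graph:
    """Tree decomposition (bags as frozensets) of a possibly disconnected graph.
    Decompositions of separate components are joined by an arbitrary edge,
    which preserves validity since the components share no nodes."""
    parts = []
    for comp in nx.connected_components(G):
        _, Tc = treewidth_min_degree(G.subgraph(comp).copy())
        parts.append(Tc)
    T = parts[0]
    for Tc in parts[1:]:
        b, bc = next(iter(T.nodes)), next(iter(Tc.nodes))
        T = nx.compose(T, Tc)
        T.add_edge(b, bc)
    return T


def concatenated_tree_decomposition(G: nx.Graph, layers: list) -> nx.Graph:
    """Algorithm 1: tree decomposition of an l-layered hierarchical graph built by
    concatenating the decomposition of the top layer (layers[-1]) with the
    decompositions of each group of children C(v), processed top-down."""
    T = tree_decomposition(G.subgraph(layers[-1]))               # line 2: top layer
    for i in range(len(layers) - 1, 0, -1):                      # layers l, ..., 2
        for v in layers[i]:
            children = [u for u in G[v] if u in layers[i - 1]]   # C(v)
            if not children:
                continue
            Tv = tree_decomposition(G.subgraph(children))        # line 5
            # line 6: add v to every bag of T_v; bags of T'_v only contain
            # children of v (plus v), so they do not collide with bags of T.
            Tv = nx.relabel_nodes(Tv, {bag: frozenset(bag | {v}) for bag in Tv})
            b = next(bag for bag in T if v in bag)               # bag of T containing v
            b_prime = next(iter(Tv.nodes))
            T = nx.compose(T, Tv)                                # line 9: T (+) T'_v (+) (b, b')
            T.add_edge(b, b_prime)
    return T
```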
### _3D Scene Graphs and Indoor Environments_
While the considerations in the previous sections apply to generic hierarchical representations, in this section we tailor the discussion to a specific hierarchical representation, namely a _3D scene graph_, and discuss its structure and properties when such representation is used to model indoor environments.
**Choice of Symbols for Indoor Navigation.** We focus on indoor environments and assume the robot is tasked with navigating to certain objects and rooms in a building. Therefore, we include the following groups of symbols that form the _layers_ in our hierarchical representation: free-space, obstacles, objects and agents, rooms, and building. The details of the label spaces that we use for the object and room symbols can be found in Appendix D. The only "agent" in our setup is the robot mapping the environment. In terms of relations between symbols, we mostly consider two types of relations: inclusion (_i.e.,_ the chair _is in_ the kitchen) and adjacency or traversability (_i.e.,_ the chair _is near_ the table, the kitchen is adjacent to/reachable from the dining room).
As we mentioned in Section II-B, this choice of symbols and relations is dictated by the tasks we envision the robot to perform and therefore it is not universal. For instance, other tasks might require breaking down the free-space symbol into multiple symbols (_e.g.,_ to distinguish the front from the back of a room), or might require considering other agents (_e.g.,_ humans moving in the environment as in [2, 4]). This dependence on the task also justifies the different definitions of 3D scene graph found in the recent literature: for instance, the original proposal in [3] focuses on visualization and human-machine interaction tasks, rather than robot navigation, hence the corresponding scene graphs do not include the free-space as a symbol; the proposals [7, 5] consider smaller-scale tasks and disregard the room and building symbols. Similarly, the choice of relations is task dependent and can include a much broader set of relations beyond inclusion and adjacency. For instance, relations can describe attributes of an object (_e.g.,_ "has color"), material properties (_e.g.,_ "is made of"), can be used to compare entities (_e.g.,_ "is taller than"), or may encode actions (_e.g.,_ a person "carries" an object, a car "drives on" the road). While these other relations are beyond the scope of our paper, we refer the reader to [28] for further discussion.
**Choice of Sub-symbolic Representations.** We ground each symbol in compact sub-symbolic representations; intuitively, this reduces to attaching geometric attributes to each symbol observed by the robot. We ground the "obstacle" symbol
Fig. 3: (a) Hierarchical object-room-building graph with objects O1, O2,... O8, rooms R1, R2, R3, R4, and a single building B1. (b) Tree decomposition \(T\) of the building B1 and associated \(T_{v}\) of the rooms (_i.e.,_ children of B1) as computed in line 2 and 5 of Algorithm 1. (c) \(T\) after adding B1 to every bag of the tree decomposition of the rooms and joining \(T_{v}\) to \(T\) (line 6 and line 9 in Algorithm 1). (d) \(T\) and associated \(T_{v}\) for the object nodes. (e) Final tree-decomposition of the object-room-building graph formed by the concatenation procedure described in Algorithm 1.
into a 3D mesh describing the observed surfaces in the environment. We ground the "free-space" symbol using a places sub-graph, which can be understood as a topological map of the environment. Specifically, each place node in the graph of places is associated with an obstacle-free 3D location (more precisely, a sphere of free space described by a centroid and radius), while edges represent traversability between places.5 We ground the "agent" symbol using the pose-graph describing the robot trajectory [30]. We ground each "object", "room", and "building" symbol using a centroid and a bounding box. Somewhat redundantly, we also store the mesh vertices corresponding to each object, and the set of places included in each room, which can be also understood as additional groundings for these symbols. As discussed in the next section, this is mostly a byproduct of our algorithms, rather than a deliberate choice of sub-symbolic representation.
Footnote 5: While we postpone the technical details to Section III-C, the graph of places can be understood as a sparse approximation of a Generalized Voronoi Diagram (GVD) of the environment, a data structure commonly used in computer graphics and computational geometry to compress shape information [29].
**3D Scene Graphs for Indoor Environments** The choices of symbolic and sub-symbolic representations outlined above lead to the 3D scene graph structure visualized in Fig. 1. In particular, _Layer 1_ is a metric-semantic 3D mesh. _Layer 2_ is a sub-graph of objects and agents; each object has a semantic label (which identifies the symbol being grounded), a centroid, and a bounding box (providing the grounding); each agent is modeled by a pose graph describing its trajectory (in our case the robot itself is the only agent). _Layer 3_ is a sub-graph of _places_ (_i.e.,_ a topological map) where each place is an obstacle-free location and an edge between places denotes straight-line traversability. _Layer 4_ is a sub-graph of rooms where each room has a centroid, and edges connect adjacent rooms. _Layer 5_ is a building node connected to all rooms (we assume the robot maps a single building). Edges connect nodes within each layer (_e.g.,_ to model traversability between places or rooms, or connecting nearby objects) or across layers (_e.g.,_ to model that mesh vertices belong to an object, that an object is in a certain room, or that a room belongs to a building). Note that the 3D scene graph structure forms a hierarchical graph meeting the requirements of Definition 1.
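Purely as an illustration of the resulting data structure (Hydra itself is implemented in C++, and these field names are hypothetical), the layers and groundings described above could be summarized as follows.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class ObjectNode:                 # Layer 2: grounded by centroid, bounding box, mesh vertices
    semantic_label: str
    centroid: Vec3
    bbox_min: Vec3
    bbox_max: Vec3
    mesh_vertices: List[int] = field(default_factory=list)   # indices into Layer 1


@dataclass
class PlaceNode:                  # Layer 3: sphere of free space
    position: Vec3
    distance_to_obstacle: float


@dataclass
class RoomNode:                   # Layer 4
    semantic_label: str
    centroid: Vec3
    places: List[int] = field(default_factory=list)          # place nodes inside the room


@dataclass
class SceneGraph:
    mesh_vertices: List[Vec3]                 # Layer 1: metric-semantic 3D mesh
    objects: Dict[int, ObjectNode]            # Layer 2 (agents modeled separately as a pose graph)
    places: Dict[int, PlaceNode]              # Layer 3
    rooms: Dict[int, RoomNode]                # Layer 4
    building: int = 0                         # Layer 5: single building node
    edges: List[Tuple[int, int]] = field(default_factory=list)  # intra- and inter-layer edges
```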
**Treewidth of Indoor 3D Scene Graphs.** In Section II-C we concluded that the treewidth of a hierarchical graph is bounded by the treewidth of its layers. In this section, we particularize the treewidth bound to 3D scene graphs modeling indoor environments. We first prove bounds on the treewidth of key layers in the 3D scene graph (Lemmas 4 and 5). Then we translate these bounds into a bound on the treewidth of the object-room-building sub-graph of a 3D scene graph (Proposition 6); the bound is important in practice since we will need to perform inference over such a sub-graph to infer the room (and potentially building) labels, see Section III-D2.
We start by bounding the treewidth of the room layer.
**Lemma 4** (Treewidth of Room Layer).: _Consider the sub-graph of rooms, including the nodes in the room layer of a 3D scene graph and the corresponding edges. If each room connects to at most two other rooms that have doors (or more generally passage ways) leading to other rooms, then the room graph has treewidth bounded by two._
The insight behind Lemma 4 is that the room sub-graph is not very far from a tree (_i.e.,_ it has low treewidth) as long as there are not many rooms with multiple doors (or passage ways). In particular, the room sub-graph has treewidth equal to 2 if each room has at most two passage ways to other rooms with multiple entries. Note that the theorem statement allows a room to be connected to an arbitrary number of rooms with a single entry (_i.e.,_ a corridor leading to multiple rooms that are only accessible via that corridor). Figure 4(a) reports empirical upper-bounds of the treewidth of the room sub-graphs in the 90 scenes of the Matterport3D dataset [31] (see Section VI for more details about the experimental setup). The observed treewidth is at most 2, confirming that the conclusions of Lemma 4 hold in several apartment-like environments.
**Lemma 5** (Treewidth of Object Layer).: _Consider the sub-graph of objects, which includes the nodes in the object layer of a 3D scene graph and the corresponding edges. The treewidth of the object sub-graph is bounded by the maximum number of objects in a room._
The result is a simple consequence of the fact that in our 3D scene graph there is no edge connecting objects in different rooms; therefore, the graph of objects includes disconnected components corresponding to each room, whose treewidth is bounded by the size of that connected component, _i.e.,_ the number of objects in that room. Figure 4(b) reports the treewidth upper-bounds for the object sub-graphs in the Matterport3D dataset. We observe that the treewidth of the object sub-graphs tends to be larger compared to the room sub-graphs, but still remains relatively small (below 20) in all tests.
We can now conclude with a bound on the treewidth of the object-room-building graph in a 3D scene graph.
**Proposition 6** (Treewidth of the Object-Room-Building Graph).: _Consider the object-room-building graph of a building, including the nodes in the object, room, and building layers of a 3D scene graph and the corresponding edges. Assume the treewidth of the room graph is less than the treewidth of the object graph. Then, the treewidth \(\text{tw}[\mathcal{G}]\) of
Fig. 4: Histogram of the treewidth upper-bounds for (a) the room sub-graph and (b) the object sub-graph of the 3D scene graphs obtained using the 90 scenes of the Matterport3D dataset [31]. We use the minimum-degree and the minimum-fill-in heuristic to obtain treewidth upper-bounds [32]. We then compute an upper-bound to be the lowest of the two.
the object-room-building graph \(\mathcal{G}\) is bounded by_
\[\text{tw}[\mathcal{G}]\leq 1+N_{o}, \tag{6}\]
_where \(N_{o}\) denotes the largest number of objects in a room._
Proposition 6 indicates that the treewidth of the object-room-building graph does not grow with the size of the scene graph, but rather depends on how cluttered each room is. This is in stark contrast with the treewidth of social network graphs [33, 16], and further motivates the proposed hierarchical organization. The treewidth bounds in this section open the door to tractable inference techniques; in particular, in Section III-D2, we show they allow applying novel graph-learning techniques, namely, the neural tree [16].
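The empirical bounds in Fig. 4 can be reproduced with standard heuristics; a minimal sketch, assuming the sub-graphs are available as networkx graphs, is given below.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree, treewidth_min_fill_in


def treewidth_upper_bound(G: nx.Graph) -> int:
    """Upper-bound the treewidth of a sub-graph with the two heuristics used
    for Fig. 4, keeping the lower (tighter) of the two bounds."""
    width_degree, _ = treewidth_min_degree(G)
    width_fill_in, _ = treewidth_min_fill_in(G)
    return min(width_degree, width_fill_in)


# e.g., bound the treewidth of the object layer of one scene:
# tw_objects = treewidth_upper_bound(object_subgraph)
```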
## III Real-time Incremental 3D Scene Graph Layers Construction
This section describes how to estimate an _odometric_ 3D scene graph directly from visual-inertial data as the robot explores an unknown environment. Section IV then shows how to correct the scene graph in response to loop closures.
We start by reconstructing a metric-semantic 3D mesh to populate _Layer 1_ of the 3D scene graph (Section III-A), and use the mesh to extract the objects in _Layer 2_ (Section III-B). We extract the places in _Layer 3_ as a byproduct of the 3D mesh computation by first computing a Generalized Voronoi Diagram (GVD) of the environment, and then approximating the GVD as a sparse graph of places (Section III-C). We then populate the rooms in _Layer 4_ by segmenting their geometry using persistent homology (Section III-D1) and assigning them a semantic label using the neural tree (Section III-D2). In this paper, we assume that the scene we reconstruct consists of a single building, and for _Layer 5_ we instantiate a single building node that we connect to all estimated room nodes.
### _Layer 1: Mesh_
We build a metric-semantic 3D mesh (_Layer 1_ of the 3D scene graph) using Kimera [34] and Voxblox [10]. In particular, we maintain a metric-semantic voxel-based map within an active window around the robot. Each voxel of the map contains free-space information and a distribution over possible semantic labels. We gradually convert this voxel-based map into a 3D mesh using marching cubes, attaching a semantic label to each mesh vertex. This is now standard practice (_e.g._, the same idea was used in [35] without inferring the semantic labels) and the expert reader can safely skip the rest of this subsection.
In more detail, for each keyframe6 we use a 2D semantic segmentation network to obtain a pixel-wise semantic segmentation of the RGB image, and reconstruct a depth-map using stereo matching (when using a stereo camera) or from the depth channel of the sensor (when using an RGB-D camera). We then convert the semantic segmentation and depth into a semantically-labeled 3D point cloud and transform it according to the odometric estimate of the robot pose (Section III-B). We use Voxblox [10] to integrate the semantically-labeled point cloud into a Truncated Signed Distance Field (TSDF) using ray-casting, and Kimera [2] to perform Bayesian updates over the semantic label of each voxel during ray-casting. Both Voxblox [10] and Kimera [2] operate over an _active window_, _i.e._, only reconstruct a voxel-based map within a user-specified radius \(r_{a}\) around the robot (\(r_{a}=8\,\mathrm{m}\) in our tests)7 to bound the memory requirements. Within the active window, we extract the 3D metric-semantic mesh using Voxblox' marching cubes implementation, where each mesh vertex is assigned the most likely semantic label from the corresponding voxel. We then use spatial hashing [36] to integrate \(\mathcal{M}_{a}\), the mesh inside the active window, into the full mesh \(\mathcal{M}_{f}\), optionally also compressing the mesh to a given resolution. The use of the active window circumvents the need to maintain a monolithic and memory-hungry voxel-based representation of the entire environment (as done in our original proposal in [34]), and instead allows us to gradually transform the voxel-based map in the active window (moving with the robot) into a lighter-weight 3D mesh. The full mesh, \(\mathcal{M}_{f}\), is uncorrected for odometric drift, and we address loop closure detection and optimization in Section IV.
Footnote 7: The radius of the active window has to be larger than the maximum ray-casting distance to construct the TSDF (\(\approx 4\,\mathrm{m}\)) and the block resolution of the spatial hashing (\(\approx 1.5\,\mathrm{m}\)).
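As a minimal sketch of the per-voxel Bayesian label update mentioned above (the measurement model here is a simple symmetric likelihood and is not necessarily the one used in Kimera), each ray carrying a semantic label updates the voxel's categorical distribution as follows.

```python
import numpy as np


def update_voxel_label_probs(prior: np.ndarray, observed_label: int,
                             hit_prob: float = 0.9) -> np.ndarray:
    """One Bayesian update of a voxel's categorical label distribution when a
    ray carrying `observed_label` is integrated into the voxel."""
    num_labels = prior.shape[0]
    # Symmetric measurement model: correct label with probability hit_prob,
    # remaining mass spread uniformly over the other labels.
    likelihood = np.full(num_labels, (1.0 - hit_prob) / (num_labels - 1))
    likelihood[observed_label] = hit_prob
    posterior = likelihood * prior
    return posterior / posterior.sum()


# usage: start from a uniform prior and fuse successive observations
# p = np.full(40, 1.0 / 40)
# p = update_voxel_label_probs(p, observed_label=7)
```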
### _Layer 2: Objects and Agents_
**Agents.** The agent layer consists of the pose graph describing the robot trajectory (we refer the reader to [2] for extensions to multiple agents, including humans). During exploration, the odometric pose graph is obtained using stereo or RGB-D visual-inertial odometry, which is also available in Kimera [34, 2]. The poses in the pose graph correspond to the keyframes selected by Kimera, and for each pose we also store visual features and descriptors, which are used for loop closure detection in Section IV. As usual, edges in the pose graph correspond to odometry measurements between consecutive poses, while we will also add loop closures as described in Section IV.
**Objects.** The object layer consists of a graph where each node corresponds to an object (with a semantic label, a centroid, and a bounding box) and edges connect nearby objects. After extracting the 3D mesh within the active window, we segment objects by performing Euclidean clustering [37] on \(\mathcal{V}_{a}\), the vertices of \(\mathcal{M}_{a}\). In particular, we independently cluster the sets of vertices \(\mathcal{V}_{a}^{i}\) with the same semantic label \(i\) (where \(\mathcal{V}_{a}^{i}=\{v\in\mathcal{V}_{a}:\mathrm{label}(v)=i\}\)) for all semantic labels of interest. As in Kimera [2], each resulting cluster is then used to estimate a centroid and bounding box for each putative object. After each update, if a newly-detected object overlaps with an existing object node of the same semantic class in the scene graph, we merge them together by adding new mesh vertices to the previous object node; otherwise we add the new object as a new node. In practice, we consider two objects overlapping if the centroid of one object is contained in the other object's
bounding box, a proxy for spatial overlap measures such as Intersection over Union (IoU).8
Footnote 8: In previous work [2], we also detected objects with known shape using 3D registration [38]; in practice, we see that such approach only works well for large objects since small objects are not well-described by the 3D mesh, whose resolution is implicitly limited by the resolution of the voxel-based map in the active window (\(10\)cm in our tests). A more promising approach is to directly ground small 3D objects (_e.g.,_ a pen) from sensor data as in [39].
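The following sketch illustrates the two ingredients of the object layer update described above, per-class Euclidean clustering of mesh vertices and the centroid-in-bounding-box overlap test; it is a simplified stand-in for the cited implementations, and the clustering radius and names are our assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree


def euclidean_clusters(points: np.ndarray, radius: float = 0.25):
    """Group mesh vertices of one semantic class into clusters: two vertices
    belong to the same cluster if they are connected by hops of length <= radius
    (a simple stand-in for the Euclidean clustering of [37])."""
    tree = cKDTree(points)
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i, j in tree.query_pairs(radius):   # union all pairs within `radius`
        parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())


def overlaps(centroid: np.ndarray, bbox_min: np.ndarray, bbox_max: np.ndarray) -> bool:
    """Centroid-in-bounding-box test used as a cheap proxy for IoU when deciding
    whether a new detection should be merged into an existing object node."""
    return bool(np.all(centroid >= bbox_min) and np.all(centroid <= bbox_max))
```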
### _Layer 3: Places_
We build the places layer by computing a Generalized Voronoi Diagram (GVD) of the environment, and then sparsifying it into a graph of places. While the idea of sparsifying the GVD into a graph has appeared in related work [2, 40], these works extract such representations from a monolithic ESDF of the environment, a process that is computationally expensive and confined to off-line use ([2] reports computation times in the order of tens of minutes). Instead, we combine and adapt the approaches of Voxblox [10], which presents an incremental version of the brushfire algorithm that converts a TSDF to an ESDF, and of Lau et al. [41], who present an incremental version of the brushfire algorithm that is capable of constructing a GVD during the construction of the ESDF, but takes as input a 2D occupancy map. As a result, we show how to obtain a local GVD and a graph of places as a byproduct of the 3D mesh construction.
**From Voxels to the GVD.** The GVD is a data structure commonly used in computer graphics and computational geometry to compress shape information [29]. The GVD is the set of voxels that are equidistant to at least two closest obstacles ("basis points" or "parents"), and intuitively forms a skeleton of the environment [29] (see Fig. 5a). Following the approach in [41], we use the fact that voxels belonging to the GVD can be easily detected from the wavefronts of the brushfire algorithm used to create and update the ESDF in the active window. Intuitively, GVD voxels have the property that multiple wavefronts "meet" at a given voxel (_i.e.,_ fail to update the signed distance of the voxel), as these voxels are equidistant from multiple obstacles. The brushfire algorithm (and its incremental variant) are traditionally seeded at voxels containing obstacles (_e.g.,_[41]), but as mentioned previously, we follow the approach of [10] and use the TSDF to seed the brushfire wavefronts instead. In particular, we track all TSDF voxels that correspond to zero-crossings (_i.e.,_ voxels containing a surface) when creating \(\mathcal{M}_{a}\) using marching cubes and then use these voxels as a surrogate occupancy grid (_i.e.,_ wavefronts of the brushfire algorithm are treated as if they started from one of these TSDF voxels). Like [10], we also use the TSDF voxels within the truncation distance to seed the wavefronts of the brushfire algorithm, _i.e.,_ instead of starting the wavefronts directly from occupied voxels, we start them from the outermost voxels still within the truncation distance. In our implementation, we compute a variant of the GVD called the _\(\theta\)-simplified medial axis_ (\(\theta\)-SMA) from [42] that filters out less stable and noisier portions of the GVD using a threshold on the minimum angle between basis points. Each voxel in the GVD is assigned a distance to its basis points, defining a sphere of free-space surrounding the voxel; collectively the GVD provides a compact description of the free-space and its connectivity (Fig. 5b). Indeed, up to discretization, the GVD provides an _exact_ description of the free-space [29], meaning that one can reconstruct the full shape of the free-space from its GVD.
Fig. 5: (a) GVD (wireframe) of an environment with GVD voxels having three or more basis points highlighted. (b) Sparse places graph (in black) and corresponding spheres of free-space shown in red. Note that the union of the spheres roughly approximates the geometry of the free-space.
**From the GVD to the Places Graph.** While the GVD already provides a compressed representation of the free-space, it still typically contains a large number of voxels (_e.g.,_ more than \(10,000\) in the Office dataset considered in Section VI). We could instantiate a graph of places with one node for each GVD voxel and edges between nearby voxels, but such a representation would be too large to manipulate efficiently (_e.g.,_ our room segmentation approach in Section III-D1 would not scale to operating on the entire GVD). Previous attempts at sparsifying the GVD (notably [40] and our earlier paper [11]) used a subset of topological features (edges and vertices in the GVD, which correspond to GVD voxels having more than three and more than four basis points respectively) to form a sparse graph of places. However, the resulting graph does not capture the same connectivity of free-space as the full GVD, and may lead to a graph of places with multiple connected components. It would also be desirable for the user to balance the level of compression with the quality of the free-space approximation instead of always picking certain GVD voxels.
To resolve these issues, we first form a graph \(\mathcal{G}_{a}\) of the GVD where each GVD voxel corresponds to a node in \(\mathcal{G}_{a}\) and edges connect any two neighboring GVD voxels. We then use voxel-hashing [36] to spatially cluster --at a user-specified resolution-- all updated GVD voxels with at least \(n_{b}\) basis points (\(n_{b}=2\) in our tests) inside the active window, ensuring that each cluster forms a connected component in \(\mathcal{G}_{a}\). This clustering is shown in Algorithm 2. This algorithm takes as input the graph of GVD voxels, \(\mathcal{G}_{a}\), and for every voxel that was updated, computes a spatial hash value [36], denoted \(\mathrm{hash}(v,\delta_{p})\) in the algorithm for a given voxel \(v\) and spatial resolution \(\delta_{p}\). This hash value is the same for voxels within a cube of 3D space with side-lengths of \(\delta_{p}\) and naturally clusters the voxels of the GVD into clusters of the provided spatial resolution. Lines 5 to 13 incrementally grow clusters of voxels with the same hash value. However, this results in multiple clusters on the same connected component of \(\mathcal{G}_{a}\). Lines 15 to 22 then search over the neighbors of a given voxel \(v\) in \(\mathcal{G}_{a}\) and either combines clusters that share an edge in \(\mathcal{G}_{a}\) and the same spatial hash value, or propose an edge between the clusters if the hash value differs. Once Algorithm 2 terminates, we obtain a set of proposed nodes (corresponding to unique clusters of voxels in \(\mathcal{G}_{a}\) that form connected components) and proposed edges based on the connectivity of \(\mathcal{G}_{a}\).
These proposed nodes and edges are assigned information from the voxels that generated them: each new or updated node is assigned a position and distance to the nearest obstacle from the GVD voxel with the most basis points in the cluster; each new or updated edge is assigned a distance that is the minimum distance to the nearest obstacle of all the GVD voxels that the edge passes through. We then associate place nodes with the corresponding mesh-vertices of each basis point using the zero-crossing identified in the TSDF by marching cubes. We run Algorithm 2 after every update to the GVD and merge the proposed nodes \(\mathcal{N}\) and edges \(\mathcal{E}\) into the sparse graph of places.9
Footnote 9: Separate book-keeping is done to remove nodes and edges that no longer correspond to any GVD voxels during the brushfire update of the ESDF and GVD, as GVD voxels are removed during this update when a wavefront is able to modify the signed distance of the voxel
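A minimal sketch of the spatial-hash clustering at the core of Algorithm 2 is given below; it only shows how updated GVD voxels are grouped into place-node candidates at a user-specified resolution, and omits the connected-component bookkeeping and the edge proposals of the full algorithm. The dictionary keys and default values are our assumptions.

```python
import numpy as np


def spatial_hash(position, resolution: float) -> tuple:
    """Spatial hash: all voxels falling in the same cube of side `resolution`
    get the same key (cf. hash(v, delta_p) in Algorithm 2)."""
    return tuple(np.floor(np.asarray(position) / resolution).astype(int))


def cluster_gvd_voxels(voxels, resolution=1.0, min_basis_points=2):
    """Group updated GVD voxels into place-node candidates by spatial hash.
    `voxels` is a list of dicts with 'position', 'distance', 'num_basis_points'."""
    clusters = {}
    for v in voxels:
        if v["num_basis_points"] < min_basis_points:
            continue
        clusters.setdefault(spatial_hash(v["position"], resolution), []).append(v)
    nodes = []
    for members in clusters.values():
        # Each cluster becomes a place-node proposal, taking position and
        # obstacle distance from the member with the most basis points.
        best = max(members, key=lambda v: v["num_basis_points"])
        nodes.append({"position": best["position"], "distance": best["distance"]})
    return nodes
```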
**Inter-layer Edges.** After building the places graph, we add inter-layer edges from each object or agent node to the nearest place node in the active window using nanoflann [43].
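A minimal sketch of this inter-layer edge proposal, using scipy's KD-tree as a stand-in for nanoflann, is shown below.

```python
import numpy as np
from scipy.spatial import cKDTree


def nearest_place_edges(place_positions: np.ndarray, node_centroids: np.ndarray):
    """Propose one inter-layer edge per object/agent node, connecting it to the
    closest place node. Inputs are (N, 3) and (M, 3) arrays of 3D positions."""
    tree = cKDTree(place_positions)
    _, nearest = tree.query(node_centroids)   # index of the closest place per node
    return [(node_idx, int(place_idx)) for node_idx, place_idx in enumerate(nearest)]
```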
### _Layer 4: Rooms_
We segment the rooms by first geometrically clustering the graph of places into separate rooms (using persistent homology, Section III-D1), and then assigning a semantic label to each room (using the neural tree, Section III-D2). We remark that the construction of the room layer is fundamentally different from the construction of the object layer. While we can directly observe the objects (_e.g.,_ via a network trained to detect certain objects), we do not have a network trained to detect certain rooms. Instead, we have to rely on prior knowledge to infer both their geometry and semantics. For instance, we exploit the fact that rooms are typically separated by small passageways (_e.g.,_ doors), and that their semantics can be inferred from the objects they contain.
#### III-D1 Layer 4: Room Clustering via Persistent Homology
This section discusses how to geometrically cluster the environment into different rooms. Many approaches for room segmentation or detection (including [2]) require volumetric or voxel-based representations of the full environments and make assumptions on the environment geometry (_i.e.,_ a single floor or ceiling height) that do not easily extend to arbitrary buildings. To resolve these issues, we present a novel approach for constructing _Layer 4_ of the 3D scene graph by clustering the nodes in the graph of places \(\mathcal{G}_{p}\) (_Layer 3_); this approach does not assume planarity of the environment and circumvents the need to process large voxel-based maps. Our room segmentation approach consists of two stages: the first stage identifies the number of rooms in the environment (as well as a subset of places associated with each room), and the second stage assigns the remaining places in \(\mathcal{G}_{p}\) to rooms via flood-fill.
**Identifying Rooms via Persistent Homology.** We use the key insight that dilating the voxel-based map helps expose rooms in the environment: if we inflate obstacles, apertures in the environment (_e.g.,_ doors) will gradually close, naturally partitioning the voxel-based map into disconnected components (_i.e.,_ rooms); however, we apply the same insight to the graph of places \(\mathcal{G}_{p}\) (rather than the voxel-based map) to enable faster computation and better scalability. As each node and edge in our place graph \(\mathcal{G}_{p}\) stores a distance to its closest obstacle \(d^{p}\) (Section III-C), dilation operations in the voxel-based map can be directly mapped into topological changes in \(\mathcal{G}_{p}\). More precisely, if we dilate the map by a distance \(\delta\), every place node or edge with \(d^{p}\) smaller than \(\delta\) will disappear from the graph since it will no longer be in the free space. We call the dilated graph \(\mathcal{G}_{p}^{\delta}\); see Fig. 6 for an illustration.

Fig. 6: Connected components (in orange and red) in the subgraph of places induced by a dilation distance \(\delta\) (walls are in gray, dilated walls are dashed, places that disappear after dilation are in black).
Directly specifying a single best dilation distance \(\delta^{*}\) to extract putative rooms is unwieldy: door and other aperture widths vary from scene to scene (think about the small doors in an apartment versus large passageways in a mansion). In order to avoid hard-coding a single dilation distance, we use a tool from topology, known as _persistent homology_[44, 15], to automatically compute the best dilation distance for a given graph.10 The basic insight is that the most natural choice of rooms is one that is more stable (or _persistent_) across a wide range of dilation distances. To formalize this intuition, the persistent homology literature relies on the notion of _filtration_. A _filtration_ of a graph \(\mathcal{G}_{p}\) is a set of graphs \(\mathcal{G}_{p}^{\delta_{i}}\) ordered by real-valued parameters \(\delta_{i}\) such that
Footnote 10: Persistent homology is the study of the birth and death of homologies across a _filtration_ of a simplicial complex; see [44, 15] for an introduction on the topic. In this work, we restrict ourselves to graphs (a subset of simplicial complexes) and focus only on 0-homologies (connected components).
\[\emptyset\subseteq\ldots\subseteq\mathcal{G}_{p}^{\delta_{i+1}}\subseteq \mathcal{G}_{p}^{\delta_{i}}\subseteq\ldots\subseteq\mathcal{G}_{p}\,. \tag{7}\]
In our case, each graph \(\mathcal{G}_{p}^{\delta_{i}}\) is the sub-graph of \(\mathcal{G}_{p}\) where nodes and edges with radius smaller than \(\delta_{i}\) are removed;11 hence the filtration is a set of graphs corresponding to increasing distances \(\delta_{i}\); see Figs. 6(b) and 6(d) for examples of graphs in the filtration. For each graph in the filtration, we compute the 0-homology (the number of connected components), which in our setup corresponds to the number of rooms the graph gets partitioned into; computationally, the 0-homology can be computed in one shot for all graphs in the filtration by iterating across the edges in the original graph \(\mathcal{G}_{p}\) using a union-find data structure [45], leading to an extremely efficient algorithm.
Footnote 11: Such a filtration is known as the Vietoris-Rips filtration [45].
The resulting mapping between the dilation distance and the number of connected components is an example of _Betti curve_, or \(\beta_{0}(\delta)\); see Fig. 6(a) for an example. Intuitively, for increasing distances \(\delta\), the graph first splits into more and more connected components (this is the initial increasing portion of Fig. 6(a)), and then for large \(\delta\) entire components tend to disappear (leading to a decreasing trend in the second part of Fig. 6(a)), eventually leading to all the nodes disappearing (_i.e.,_ zero connected components as shown in the right-most part of Fig. 6(a)). In practice, we restrict the set of distances to a range \([d^{-},d^{+}]\), containing reasonable sizes for openings between rooms ([0.5, 1.2]\(\mathrm{m}\) in our tests). We also do not count connected components containing less than a minimum number of vertices (\(15\) in our tests), as we are not interested in segmenting overly small regions as rooms.
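The sketch below illustrates this one-shot computation of the Betti curve in Python, assuming the places graph is given as per-node and per-edge obstacle distances; Hydra's implementation is in C++ and differs in the details, but the single union-find sweep over the filtration is the same idea.

```python
def betti0_curve(node_dist, edges, deltas, min_component_size=15):
    """Number of connected components of the dilated places graph at each delta.

    node_dist: dict node id -> distance d^p to the nearest obstacle
    edges:     list of (u, v, d^p_edge) tuples
    deltas:    dilation distances at which to evaluate beta_0
    Components smaller than `min_component_size` are not counted. The thresholds
    are swept once from large to small, adding nodes/edges as they (re)appear in
    the filtration, so the whole curve costs a single union-find pass.
    """
    parent, size = {}, {}
    state = {"large": 0}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def add_node(v):
        parent[v], size[v] = v, 1
        if min_component_size <= 1:
            state["large"] += 1

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru == rv:
            return
        su, sv = size[ru], size[rv]
        state["large"] -= (su >= min_component_size) + (sv >= min_component_size)
        state["large"] += (su + sv) >= min_component_size
        parent[ru], size[rv] = rv, su + sv

    # Activation events, processed from the largest threshold down (nodes first).
    events = [(d, 0, ("node", v)) for v, d in node_dist.items()]
    events += [(min(d, node_dist[u], node_dist[v]), 1, ("edge", (u, v)))
               for u, v, d in edges]
    events.sort(key=lambda e: (-e[0], e[1]))

    curve, i = [], 0
    for delta in sorted(deltas, reverse=True):
        while i < len(events) and events[i][0] >= delta:
            kind, payload = events[i][2]
            add_node(payload) if kind == "node" else union(*payload)
            i += 1
        curve.append((delta, state["large"]))
    return dict(curve)  # delta -> beta_0(delta)
```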
Choices of rooms that are more persistent correspond to large "flat" horizontal regions of the Betti curve, where the number of connected components stays the same across a large sub-set of \(\delta\) values. More formally, let us denote with \(H\) the set of unique values assumed by the Betti curve, _i.e.,_:
\[H=\{i\in\mathbb{N}:i=\beta_{0}(\delta)\ \text{for some}\ \delta\in\left[d^{-},d^{+}\right]\} \tag{8}\]
and --for some small positive constant \(\epsilon\)-- denote with \(I\) the set of intervals \(I_{j}\) (for each \(j\in H\)) defined as:
\[I_{j}=\{(d_{\min},d_{\max}):\forall\delta\in\left[d_{\min},d_{\max}\right],\ \beta_{0}(\delta)=j,\ \text{and}\ \beta_{0}(d_{\min}-\epsilon)\neq j,\ \beta_{0}(d_{\max}+\epsilon)\neq j\} \tag{9}\]
where, for each number of connected components \(j\in H\), the set \(I\) contains the extremes \((d_{\min},d_{\max})\) of the corresponding flat interval in the Betti curve. Finally, we denote with \(L\) the lengths of the intervals in \(I\). Then, the most persistent choice for the number of rooms is the one corresponding to the longest interval in \(I\). In our approach, we choose the number of rooms (and associate an initial set of places to each room) by looking at "sufficiently persistent" intervals (_i.e.,_ flat regions with length larger than a threshold). In more detail, we select a subset of \(L\), denoted as \(\bar{L}\), that only contains the intervals of size greater than \(\alpha\cdot\max_{l\in L}l\), where \(\alpha\) is a user-specified parameter in \([0,1]\) that balances between over-segmentation (\(\alpha\) closer to 0) and under-segmentation (\(\alpha\) closer to 1). From these candidates, we assign \(\delta^{*}\) to be \(\delta^{*}=d_{\min}^{*}\), where \((d_{\min}^{*},d_{\max}^{*})\) are the extremes of the interval in \(\bar{L}\) attaining the largest number of connected components. In other words, we choose the number of rooms to be the largest among all flat intervals that are sufficiently long (_i.e.,_ longer than \(\alpha\cdot\max_{l\in L}l\)). Finally, for this choice of \(\delta^{*}\), we use the connected components corresponding to that dilation distance as an initial guess for the room clustering.12
Footnote 12: Note that for \(\alpha=1\), the proposed approach simply picks the longest (most persistent) interval. While intuitive, that choice typically ends up picking the very first (left-most interval in Fig. 7, with a single connected component) that persists for small \(\delta^{*}\), which is undesirable in practice.
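Given a sampled Betti curve, the selection of \(\delta^{*}\) can be sketched as follows; extracting flat intervals from discrete samples is an approximation of the continuous definition in Eqs. (8)-(9), and the default \(\alpha\) is illustrative.

```python
def select_dilation_distance(betti, alpha=0.8):
    """Pick delta* from a sampled Betti curve.

    betti: dict mapping sampled dilation distances to beta_0 values
    alpha: trade-off between over- (alpha -> 0) and under-segmentation (alpha -> 1)
    Returns (delta*, number of rooms).
    """
    samples = sorted(betti.items())  # [(delta, beta_0), ...], increasing delta
    # Flat intervals: maximal runs of consecutive samples with the same beta_0.
    intervals = []
    start, value = samples[0][0], samples[0][1]
    for (d_prev, _), (d, v) in zip(samples, samples[1:]):
        if v != value:
            intervals.append((start, d_prev, value))
            start, value = d, v
    intervals.append((start, samples[-1][0], value))

    lengths = [d_max - d_min for d_min, d_max, _ in intervals]
    threshold = alpha * max(lengths)
    candidates = [iv for iv, l in zip(intervals, lengths) if l >= threshold]
    # Among sufficiently persistent intervals, keep the one with the most components.
    d_min_star, _, num_rooms = max(candidates, key=lambda iv: iv[2])
    return d_min_star, num_rooms
```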
**Assigning Remaining Nodes via Flood-Fill.** The connected components produced by the persistent homology do not account for all places in \(\mathcal{G}_{p}\), as nodes and edges in the graph disappear depending on the dilation distance. We assign the remaining place nodes to the putative room via the flood-fill algorithm, where the expansion queue is ordered by the distance of each edge to the nearest obstacle, resulting in node connections with the greatest distance to an obstacle being expanded first. This ensures that every node in the places graph \(\mathcal{G}_{p}\) is assigned to a room, provided that every original connected component of \(\mathcal{G}_{p}\) contains at least one room.
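A minimal sketch of this distance-ordered flood fill is shown below; the graph containers are illustrative placeholders for Hydra's internal data structures.

```python
import heapq


def assign_places_to_rooms(seed_rooms, neighbors, edge_dist):
    """Assign every place node to a room via a distance-ordered flood fill.

    seed_rooms: dict place id -> room id for places already labeled by the
                persistent-homology clustering
    neighbors:  dict place id -> neighboring place ids in G_p
    edge_dist:  dict (u, v) -> distance of the edge to the nearest obstacle
    Edges with the largest obstacle clearance are expanded first.
    """
    room_of = dict(seed_rooms)
    heap = []
    for u, room in seed_rooms.items():
        for v in neighbors.get(u, []):
            d = edge_dist.get((u, v), edge_dist.get((v, u), 0.0))
            heapq.heappush(heap, (-d, v, room))  # max-heap on clearance
    while heap:
        _, v, room = heapq.heappop(heap)
        if v in room_of:
            continue
        room_of[v] = room
        for w in neighbors.get(v, []):
            if w not in room_of:
                d = edge_dist.get((v, w), edge_dist.get((w, v), 0.0))
                heapq.heappush(heap, (-d, w, room))
    return room_of
```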
**Remark 7** (Novelty, Advantages, and Limitations).: _Many classical 2D room segmentation techniques, such as watershed or morphology approaches, are also connected to persistent homology [46]. Our room clustering method provides four key advantages. First, our approach is able to reason over arbitrary 3D environments, instead of a 2D occupancy grid (such as [47]). Second, the approach reasons over a sparse set of places and is extremely computationally and memory efficient in practice; this is in stark contrast with approaches processing monolithic voxel-based maps (e.g., [2] reported runtimes of tens of minutes). Third, the approach automatically computes the dilation distance used to cluster the graph; this allows it to work across a variety of environments (in Section VI, we report experiments in both small apartments and student residences). Fourth, it provides a more solid theoretical framework, compared to the heuristics proposed in related work [11, 13]. As a downside, our room clustering approach, similarly to related work relying on geometric reasoning for room segmentation, may fail to segment rooms without a clear geometric footprint, e.g., open floor-plans._
**Intra-Layer and Inter-Layer Edges.** We connect pairs of rooms, say \((i,j)\), whenever a place in room \(i\) shares an edge with a place in room \(j\). For each room, we add edges to all places in the room. We connect all rooms to a single building node, whose position is the mean of all room centroids.
#### III-D2 Layer 4: Room Classification via Neural Tree
While in the previous section we (geometrically) clustered place nodes into different rooms, the approach described in this section assigns a semantic label (_e.g.,_ kitchen, bedroom) to each room. The key insight that we are going to use is that objects in a room are typically correlated with the room type (_e.g.,_ a room that contains a refrigerator, an oven, and a toaster is likely to be a kitchen, while a room with a bathtub and a toilet is likely to be a bathroom). Edges connecting rooms may also carry information about the room types (_e.g.,_ a master bedroom is more likely to be connected to a bathroom). We construct a sub-graph of the 3D scene graph that includes the object and the room layers and the corresponding intra- and inter-layer edges. Given this graph, and the object semantic labels, centroids, and bounding boxes (see Section III-B), we infer the semantic labels of each room.
While room inference could be attacked via standard techniques for inference in probabilistic graphical models, those techniques typically require handcrafting or estimating expressions for the factors connecting the nodes, inducing a probability distribution over the labels in the graph. We instead rely on more modern techniques that use _graph neural networks_ (GNNs) to learn a suitable neural message passing function between the nodes to infer the missing labels. In particular, room classification can be thought of as a semi-supervised node classification problem [48, 49], which has been extensively studied in machine learning. We also observe that our problem has two key features that make it unique. First, the object-room graph is a _heterogeneous_ graph and contains two kinds of nodes, namely objects and rooms, as opposed to large, homogeneous social network graphs (one of the key benchmarks applications in the semi-supervised node classification literature). Second, the object-room graph is a hierarchical graph (Definition 1), which gives more structure to the problem (_e.g.,_ Proposition 6).13 We review a recently proposed GNN architecture, the _neural tree_[16], that takes advantage of the hierarchical structure of the graph and leads to (provably and practically) efficient and accurate inference.
Footnote 13: Note that the result in Proposition 6 is general enough to also include the building node and perform building classification (_i.e.,_ classify an indoor environment into an office building, hospital, apartment, etc.). Here we tailor the discussion to the object-room graph for a practical reason: we lack a large enough dataset for training and testing a building classification network. The dataset in our experiments includes 90 buildings, which are mostly residential.
**Neural Tree Overview.** While traditional GNNs perform neural message passing on the edges of the given graph \(\mathcal{G}\) (the object-room graph in our case), the key idea behind the neural trees architecture is to construct a tree-structured graph from the input graph and perform message passing on the resulting tree instead of the input graph [16]. This tree-structured graph, the _H-tree_, is similar to a tree decomposition, and is such that every node in it represents either a node or a subset of nodes in the input graph. Trees are known to be more amenable for message passing (_e.g.,_ the junction tree algorithm enables exact inference for graphs with small treewidth) [50, 51]. Analogously, the neural tree has been shown to enable strong approximation results [16] and lead to better classification accuracy in practice (see [16] and Section VI). We briefly review the construction of the H-tree, the choice of message passing, and the resulting performance guarantees, and we refer the reader to [16] for an in-depth discussion.
**Constructing the H-Tree.** The neural tree performs message passing on the H-tree, a tree-structured graph constructed from the input graph. Each node in the H-tree corresponds to a sub-graph of the input graph. These sub-graphs are arranged hierarchically in the H-tree such that the parent of a node in the H-tree always corresponds to a larger sub-graph in the input graph. The leaf nodes in the H-tree correspond to singleton subsets (_i.e.,_ individual nodes) of the input graph.
The first step to construct an H-tree is to compute a tree decomposition \(T\) of the object-room graph. Since the object-room graph is a hierarchical graph, we use Algorithm 1 to efficiently compute a tree decomposition. The bags in such a tree decomposition contain either (C1) only room nodes, (C2) only object nodes, or (C3) object nodes with one room node. To form the H-tree, we need to further decompose the leaves of the tree decomposition into singleton nodes. For bags falling in the cases (C1)-(C2), we further decompose the bags using a tree decomposition of the sub-graphs formed by nodes in the bag, as described in [16]. For case (C3), we note that the sub-graph is again a hierarchical graph with one room node, hence we again use Algorithm 1 to compute a tree decomposition. We form the H-tree by concatenating these tree-decompositions hierarchically as described in [16].

Fig. 7: (a) Example of Betti curve; we only consider components with at least 15 nodes. (b-d) Example of filtration of the graph for various thresholds \(\delta\). Nodes with distances \(d^{p}\) smaller than the threshold \(\delta\) are shown in gray, while nodes with distances above \(\delta\) are colored by component membership.
**Message Passing and Node Classification.** Message passing on the H-tree generates embeddings for all the nodes and important sub-graphs of the input graph. Any of the existing message passing protocols (_e.g.,_ the ones used in Graph Convolutional Networks (GCN) [48, 52, 53, 54], GraphSAGE [49], or Graph Attention Networks (GAT) [55, 56, 57]) can be re-purposed to operate on the neural tree. We provide an ablation of different choices of message passing protocols and node features in Section VI. After message passing is complete, the final label for each node is extracted by pooling embeddings from all leaf nodes in the H-tree corresponding to the same node in the input graph, as in [16].
One important difference between the H-tree in [16], and the H-tree constructed for the object-room graph is the heterogeneity of the latter. The H-tree of a heterogeneous graph will also be heterogeneous, _i.e.,_ the H-tree will now contain nodes that correspond to various kinds of sub-graphs in the input object-room graph. Specifically, the H-tree has nodes that correspond to sub-graphs: (i) containing only room nodes, (ii) containing one room node and multiple object nodes, (iii) containing only object nodes, and (iv) leaf nodes which correspond to either an object or a room node. Accordingly, we treat the neural tree as a heterogeneous graph when performing message passing. Message passing over heterogeneous graphs can be implemented using off-the-shelf functionalities in the PyTorch geometric library [58].
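As an illustration, the snippet below sketches heterogeneous message passing over an object-room graph with PyTorch Geometric's `HeteroConv` and `GATConv`; the edge types, feature dimensions, and toy graph are illustrative, and in the neural tree the same machinery is applied to the H-tree node types rather than to the object-room graph directly.

```python
import torch
from torch_geometric.data import HeteroData
from torch_geometric.nn import HeteroConv, GATConv


class ObjectRoomGNN(torch.nn.Module):
    """Two rounds of heterogeneous message passing over an object-room graph."""
    def __init__(self, hidden=32, num_room_labels=25):
        super().__init__()
        def layer():
            return HeteroConv({
                ('object', 'near', 'object'):   GATConv((-1, -1), hidden, add_self_loops=False),
                ('object', 'in', 'room'):       GATConv((-1, -1), hidden, add_self_loops=False),
                ('room', 'contains', 'object'): GATConv((-1, -1), hidden, add_self_loops=False),
                ('room', 'adjacent', 'room'):   GATConv((-1, -1), hidden, add_self_loops=False),
            }, aggr='sum')
        self.conv1, self.conv2 = layer(), layer()
        self.room_head = torch.nn.Linear(hidden, num_room_labels)

    def forward(self, x_dict, edge_index_dict):
        x_dict = {k: v.relu() for k, v in self.conv1(x_dict, edge_index_dict).items()}
        x_dict = self.conv2(x_dict, edge_index_dict)
        return self.room_head(x_dict['room'])  # one logit vector per room node


# Toy graph: 3 objects (bounding-box-size features), 2 rooms.
data = HeteroData()
data['object'].x = torch.rand(3, 3)
data['room'].x = torch.zeros(2, 1)
data['object', 'near', 'object'].edge_index = torch.tensor([[0, 1], [1, 2]])
data['object', 'in', 'room'].edge_index = torch.tensor([[0, 1, 2], [0, 0, 1]])
data['room', 'contains', 'object'].edge_index = torch.tensor([[0, 0, 1], [0, 1, 2]])
data['room', 'adjacent', 'room'].edge_index = torch.tensor([[0, 1], [1, 0]])
logits = ObjectRoomGNN()(data.x_dict, data.edge_index_dict)
```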
**Expressiveness of the Neural Tree and Graph Treewidth.** The following result, borrowed from our previous work [16], establishes a connection between the expressive power of the neural tree and the treewidth of the corresponding graph.
**Theorem 8** (Neural Tree Expressiveness, Theorem 7 and Corollary 8 in [16]).: _Call \(\mathcal{F}(\mathcal{G},N)\) the space of functions that can be produced by applying the neural tree architecture with \(N\) parameters to the graph \(\mathcal{G}\). Let \(f:[0,1]^{n}\rightarrow[0,1]\) be a function compatible with a graph \(\mathcal{G}\) with \(n\) nodes, i.e., a function that can be written as \(f(\mathcal{X})=\sum_{C\in\mathcal{C}(\mathcal{G})}\vartheta_{C}(\mathbf{x}_{C})\), where \(\mathcal{C}\left(\mathcal{G}\right)\) denotes the collection of all maximal cliques in \(\mathcal{G}\) and \(\vartheta_{C}\) is some function that maps features associated to nodes in a clique \(C\) to a real number. Let each clique function \(\vartheta_{C}\) in \(f\) be \(1\)-Lipschitz and bounded in \([0,1]\). Then, for any \(\epsilon>0\), there exists a function \(g\in\mathcal{F}(\mathcal{G},N)\) such that \(||f-g||_{\infty}<\epsilon\), while the number of parameters \(N\) is bounded by_
\[N=\mathcal{O}\left(n\times(\mathrm{tw}\left[\mathcal{T}_{\mathcal{G}}\right]+1)^{2\,\mathrm{tw}\left[\mathcal{T}_{\mathcal{G}}\right]+3}\times\epsilon^{-(\mathrm{tw}\left[\mathcal{T}_{\mathcal{G}}\right]+1)}\right), \tag{10}\]
_where \(\mathrm{tw}\left[\mathcal{T}_{\mathcal{G}}\right]\) denotes the treewidth of the tree-decomposition \(\mathcal{T}_{\mathcal{G}}\) of \(\mathcal{G}\), computed according to Algorithm 1._
While we refer the reader to [16] for a more extensive discussion, the intuition is that graph-compatible functions can model (the logarithm of) any probability distribution over the given graph. Hence, Theorem 8 essentially states that the neural tree can learn any (sufficiently well-behaved) graphical model over \(\mathcal{G}\), with a number of parameters that scales exponentially in the graph treewidth, and only linearly in the number of nodes in the graph. Therefore, for graphs with small treewidth (as the ones of Proposition 6), we can approximate arbitrary relations between the nodes without requiring too many parameters. Furthermore, Proposition 6 and Proposition 3 ensure that we can compute the tree decomposition (and hence the H-tree) efficiently in practice. Beyond these theoretical results, in Section VI we show that the use of the neural tree leads to improved accuracy in practice.
## IV Persistent Representations: Detecting and Enforcing Loop Closures in 3D Scene Graphs
The previous section discussed how to estimate the layers of an "odometric" 3D scene graph as the robot explores an unknown environment. In this section, we discuss how to use the 3D scene graph to _detect_ loop closures (Section IV-A), and how to _correct_ the entire 3D scene graph in response to putative loop closures (Section IV-B).
### _Loop Closure Detection and Geometric Verification_
We augment visual loop closure detection and geometric verification by using information across multiple layers in the 3D scene graph. Standard approaches for visual place recognition rely on visual features (_e.g.,_ SIFT, SURF, ORB) and fast retrieval methods (_e.g.,_ bag of words [59]) to detect loop closures. Advantageously, the 3D scene graph not only contains visual features (included in each node of the agent layer), but also additional information about the semantics of the environment (described by the object layer) and the geometry and topology of the environment (described by the places layer). In the following we discuss how to use this additional 3D information to develop better descriptors for loop closure detection and geometric verification.
#### IV-A1 Top-Down Loop Closure Detection
As mentioned in Section III-B, the agent layer stores visual features for each keyframe pose along the robot trajectory. We refer to each such pose as an _agent node_. Loop closure detection then aims at finding a past agent node that matches (_i.e.,_ observes the same portion of the scene seen by) the latest agent node, which corresponds to the current robot pose.

Fig. 8: Loop closure detection (left) and geometric verification (right). To find a match, we “descend” the 3D scene graph layers, comparing descriptors. We then “ascend” the 3D scene graph layers, attempting registration.
**Top-Down Loop Closure Detection Overview.** For each agent node, we construct a hierarchy of descriptors describing statistics of the node's surroundings, from low-level appearance to semantics and geometry. At the lowest level, our hierarchical descriptors include standard DBoW2 appearance descriptors [59]. We augment the appearance descriptor with an object-based descriptor and a place-based descriptor computed from the objects and places in a sub-graph surrounding the agent node. We provide details about two choices of descriptors (hand-crafted and learning-based) below. To detect loop closures, we compare the hierarchical descriptor of the current (query) node with the hierarchical descriptors of all past agent nodes, searching for a match. When comparing descriptors, we walk down the hierarchy of descriptors (from places, to objects, to appearance descriptors). In particular, we first compare the places descriptor and --if the descriptor distance is below a threshold-- we move on to comparing object descriptors and then appearance descriptors. If any of the appearance or object descriptor comparisons returns a strong enough match (_i.e.,_ if the distance between the two descriptors is below a threshold), we perform geometric verification; see Fig. 8 for a summary.
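A sketch of this top-down matching cascade is shown below; the Euclidean distance metric and the threshold values are illustrative and do not correspond to Hydra's tuned parameters (for instance, DBoW2 uses its own similarity score at the appearance level).

```python
import numpy as np


def find_loop_closure_candidates(query, past, place_thr=0.3, obj_thr=0.3, app_thr=0.25):
    """Walk down the descriptor hierarchy (places -> objects -> appearance).

    query: dict with 'places', 'objects', 'appearance' descriptor vectors
    past:  list of (agent node id, descriptor dict) for previous keyframes
    Returns agent node ids to pass on to geometric verification.
    """
    def dist(a, b):
        return np.linalg.norm(np.asarray(a) - np.asarray(b))

    candidates = []
    for node_id, desc in past:
        if dist(query['places'], desc['places']) > place_thr:
            continue                                    # prune early at the places level
        strong_object = dist(query['objects'], desc['objects']) <= obj_thr
        strong_appearance = dist(query['appearance'], desc['appearance']) <= app_thr
        if strong_object or strong_appearance:
            candidates.append(node_id)                  # geometric verification comes next
    return candidates
```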
**Hand-crafted Scene Graph Descriptors.** Our top-down loop closure detection relies on having descriptors (_i.e.,_ vector embeddings) of the sub-graphs of objects and places around each agent node. In the conference version [11] of this paper, we proposed hand-crafted descriptors. In particular, for the objects, we use the histogram of the semantic labels of the object nodes in the sub-graph as an object-level descriptor. For the places, we use the histogram of the distances associated to each place node in the sub-graph as a place-level descriptor. As shown in [11] and confirmed in Section VI, the resulting hierarchical descriptors already lead to improved loop closure detection performance over traditional appearance-based loop closures. However, these descriptors fail to capture relevant information about objects and places, _e.g.,_ their spatial layout and connectivity. In the following, we describe learning-based descriptors that use graph neural networks to automatically find a suitable embedding for the object and place sub-graphs; these are observed to further improve loop closure detection performance in some cases; see Section VI.
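A minimal sketch of these hand-crafted descriptors follows; the number of semantic classes, the distance bin edges, and the normalization are illustrative choices rather than Hydra's exact settings.

```python
import numpy as np


def object_descriptor(object_labels, num_classes):
    """Histogram of semantic labels of the objects around an agent node."""
    hist = np.bincount(object_labels, minlength=num_classes).astype(float)
    return hist / max(hist.sum(), 1.0)


def place_descriptor(place_distances, bins=np.linspace(0.0, 5.0, 11)):
    """Histogram of obstacle distances of the places around an agent node."""
    hist, _ = np.histogram(place_distances, bins=bins)
    return hist / max(hist.sum(), 1.0)


# Usage sketch with made-up sub-graph contents:
d_obj = object_descriptor(np.array([3, 3, 7, 12]), num_classes=20)
d_plc = place_descriptor(np.array([0.4, 1.1, 2.3, 0.9, 3.0]))
```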
**Learning-based Scene Graph Descriptors.** Given a sub-graph of objects and places around the agent node, we learn fixed-size embeddings using a Graph Neural Network (GNN). At a high level, we learn such embeddings from scene graph datasets, such that the Euclidean distance between descriptors is smaller if the corresponding agent nodes are spatially close.
In more detail, we learn separate embeddings for the sub-graph of objects and the sub-graph of places. For every object layer sub-graph, we encode the bounding-box size and semantic label of each object as node features in a GNN. For every places layer sub-graph, we encode the distance of the place node to the nearest obstacle and the number of basis points of the node as node features. Rather than including absolute node positions in the respective node features, we assign a weight to each edge \((i,j)\) between nodes \(i\) and \(j\) as \(w_{ij}=e^{-\|x_{i}-x_{j}\|}\), where \(x_{i}\) and \(x_{j}\) are the positions of nodes \(i\) and \(j\). This results in a weight in the range \([0,1]\), where the closer two nodes are, the higher the edge weight. Associating intra-node distances to edges (rather than using absolute positions as node features) makes the resulting embedding pose-invariant; this is due to the fact that the node positions only enter the network in terms of their distance, which is invariant to rigid transformations. Our GNN model architecture follows the graph embedding architecture presented in [60], which consists of multi-layer perceptrons as encoders for node and edge features, message passing layers, and a graph-level multi-layer perceptron to aggregate node embeddings into the final graph embedding. We use triplet loss [60] to train the models and defer the details of constructing triplets and other model and training parameters to the experiments in Section VI.
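The construction of the pose-invariant sub-graph input can be sketched as follows; the feature dimensions and the toy sub-graph are illustrative, and only the edge-weight computation \(w_{ij}=e^{-\|x_{i}-x_{j}\|}\) is taken from the text above.

```python
import numpy as np
import torch
from torch_geometric.data import Data


def build_descriptor_input(node_feats, positions, edge_index):
    """Build a pose-invariant sub-graph input for the GNN descriptor.

    node_feats: (N, F) per-node features (e.g., bounding-box size + label encoding)
    positions:  (N, 3) node positions, used only through pairwise distances
    edge_index: (2, E) graph connectivity
    """
    src, dst = edge_index
    dists = np.linalg.norm(positions[src] - positions[dst], axis=1)
    edge_weight = np.exp(-dists)  # w_ij = exp(-||x_i - x_j||), in (0, 1]
    return Data(x=torch.as_tensor(node_feats, dtype=torch.float),
                edge_index=torch.as_tensor(edge_index, dtype=torch.long),
                edge_weight=torch.as_tensor(edge_weight, dtype=torch.float))


graph = build_descriptor_input(
    node_feats=np.random.rand(4, 5),
    positions=np.random.rand(4, 3),
    edge_index=np.array([[0, 1, 2], [1, 2, 3]]))
```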
#### IV-A2 Bottom-up Geometric Verification
After we have a putative loop closure between our query and match agent nodes (say \(i\) and \(j\)), we attempt to compute a relative pose between the two by performing bottom-up geometric verification. Whenever we have a match at a given layer (_e.g.,_ between appearance descriptors at the agent layer, or between object descriptors at the object layer), we attempt to register frames \(i\) and \(j\). For registering visual features we use standard RANSAC-based geometric verification as in [34]. If that fails, we attempt registering objects using TEASER++ [38], discarding loop closures that also fail object registration. This bottom-up approach has the advantage that putative matches that fail appearance-based geometric verification (_e.g.,_ due to viewpoint or illumination changes) can successfully lead to valid loop closures during the object-based geometric verification. Section VI shows that the proposed hierarchical descriptors improve the quality and quantity of detected loop closures.
### _3D Scene Graph Optimization_
This section describes a framework to correct the entire 3D scene graph in response to putative loop closures. Assume we use the algorithms in Section III to build an "odometric" 3D scene graph, which drifts over time as it is built from the odometric trajectory of the robot -- we refer to this as the _frontend (or odometric) 3D scene graph_. Then, our goals here are (i) to optimize all layers in the 3D scene graph in a consistent manner while enforcing the detected loop closures (Section IV-A), and (ii) to post-process the results to remove redundant sub-graphs corresponding to the robot visiting the same location multiple times. The resulting 3D scene graph is what we call a _backend (or optimized) 3D scene graph_, and we refer to the module producing such a graph as the _scene graph backend_. Below, we describe the two main processes implemented by the scene graph backend: a 3D scene graph optimization (which simultaneously corrects all layers of the scene graph by optimizing a sparse subset of variables), and an interpolation and reconciliation step (which recovers the dense geometry and removes redundant variables); see Fig. 9.
**3D Scene Graph Optimization.** We propose an approach to simultaneously deform the 3D scene graph layers using an _embedded deformation graph_[17]. This approach generalizes
the _pose graph and mesh optimization approach_ in [2] as it also includes the graph of places in the optimization. At a high-level, the backend optimizes a sparse graph (the embedded deformation graph) built by downsampling the nodes in the 3D scene graph, and then reconstructs the other nodes in the scene graph via interpolation as in [17].
Specifically, we form the deformation graph as the sub-graph of the 3D scene graph that includes (i) the agent layer, consisting of a pose graph that includes both odometry and loop closures edges, (ii) the _3D mesh control points_, _i.e.,_ uniformly subsampled vertices of the 3D mesh (obtained using the same spatial hashing process described in Section III-A), with edges connecting control points closer together than a distance (\(2.5\,\mathrm{m}\) in our implementation); (iii) a minimum spanning tree of the places layer.14 By construction, these three layers form a connected sub-graph (recall the presence of the inter-layer edges discussed in Section III).
Footnote 14: The choice of using the minimum spanning tree of places is motivated by computational reasons: the use of the spanning tree increases sparsity of the resulting deformation graph, enabling faster optimization.
The embedded deformation graph approach associates a local frame (_i.e.,_ a pose) to each node in the deformation graph and then solves an optimization problem to adjust the local frames in a way that minimizes deformations associated to each edge (including loop closures). Let us call \(\mathcal{T}_{a}\), \(\mathcal{T}_{m}\), \(\mathcal{T}_{p}\) the set of poses associated with the agent layer, the mesh control points, and the places. The poses in \(\mathcal{T}_{a}\) are initially set to be the odometric poses of the robot; the poses in \(\mathcal{T}_{m}\) are initially set to have identity rotation and translation equal to the mesh control points' positions; similarly, the poses in \(\mathcal{T}_{p}\) are initially set to have identity rotation, and translation equal to the position of the corresponding places. Each edge represents a relative pose or a relative position measurement between a pair of poses. In particular, the set of edges is \(\mathcal{E}=\mathcal{E}_{aa}\cup\mathcal{E}_{mm}\cup\mathcal{E}_{pp}\cup \mathcal{E}_{am}\cup\mathcal{E}_{ap}\cup\mathcal{E}_{mp}\), where each subset denotes intra-layer edges (_e.g.,_\(\mathcal{E}_{aa}\) contains the edges within the agent layer), or inter-layer edges (_e.g.,_\(\mathcal{E}_{am}\) contains the edges between a robot pose and a mesh control point).15 Intuitively, the proposed 3D scene graph optimization finds the set of poses that minimizes the mismatch with respect to these relative measurements: a small mismatch corresponds to small deformations of the local geometry of the scene graph and encourages small errors for the loop closure edges.
Footnote 15: The set \(\mathcal{E}_{aa}\) contains all the odometric and loop closure measurements relating poses in the robot trajectory. \(\mathcal{E}_{mm}\) contains all the relative positions between pairs of nearby mesh control points (expressed in the local frame attached to one of the control points). Similarly, \(\mathcal{E}_{pp}\), \(\mathcal{E}_{am}\), \(\mathcal{E}_{ap}\) and \(\mathcal{E}_{mp}\) contain the relative positions between pairs of places, between robot poses and mesh control points visible from that pose, between robot poses and places near that pose, and between a place and the corresponding basis points in the mesh, respectively.
We can find an optimal configuration for the poses \(\mathcal{T}=\mathcal{T}_{a}\cup\mathcal{T}_{m}\cup\mathcal{T}_{p}\) in the deformation graph by solving the following optimization problem:
\[\mathcal{T}^{*}=\operatorname*{arg\,min}_{\mathbf{T}_{1},\mathbf{T}_{2},\ldots\in \mathcal{T}}\sum_{(i,j)\in\mathcal{E}}\left\|\mathbf{T}_{i}^{-1}\mathbf{T}_{j}-\mathbf{E} _{ij}\right\|_{\mathbf{\Omega}_{ij}}^{2} \tag{11}\]
where \(\mathbf{T}_{i}\in\mathrm{SE}(3)\) and \(\mathbf{T}_{j}\in\mathrm{SE}(3)\) are pairs of 3D poses in \(\mathcal{T}\), \(\mathbf{E}_{ij}\) is the relative measurement associated to each edge \((i,j)\in\mathcal{E}\) (written as a 3D pose), and for a matrix \(\mathbf{M}\) we use the notation \(\|\mathbf{M}\|_{\mathbf{\Omega}}^{2}\triangleq\mathrm{tr}\left(\mathbf{M}\Omega\mathbf{M}^{ \mathsf{T}}\right)\).16 The \(4\times 4\) positive semidefinite matrix \(\mathbf{\Omega}_{ij}\) is chosen as the inverse of the odometry (or loop closure) covariances for the edges in \(\mathcal{E}_{aa}\), while it is set to \(\mathrm{diag}\left(\left[0\;0\;0\;\omega_{t}\right]\right)\) for the other relative position measurements, where the zeros cancel out the rotation component of the residual error \(\mathbf{T}_{i}^{-1}\mathbf{T}_{j}-\mathbf{E}_{ij}\), and \(\omega_{t}\) is a user-specified parameter that controls how much deformation we want to allow for each edge during the optimization.17
Footnote 16: When the relative measurement involves only a translation, the rotation component of \(\mathbf{E}_{ij}\) is conventionally set to the identity — a suitable choice of the information matrix \(\mathbf{\Omega}_{ij}\) for those measurements ensures that such rotation component is disregarded by the optimization.
Footnote 17: The interested reader might find a step-by-step derivation of (11) (but restricted to the agent layer and the mesh control points) in [2].
In hindsight, 3D scene graph optimization transforms a subset of the 3D scene graph into a _factor graph_[30], where edge potentials need to be minimized. The expert reader might also realize that (11) is mathematically equivalent to standard pose graph optimization in SLAM [30], which enables the use of established off-the-shelf solvers. In particular, we solve (11) using the Graduated Non-Convexity (GNC) solver in GTSAM [61], which is also able to reject incorrect loop closures as outliers.
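A minimal sketch of the deformation-graph optimization as a GTSAM factor graph in Python is shown below. For simplicity it uses plain Levenberg-Marquardt instead of the GNC wrapper used by Hydra, and it approximates the rotation-free relative-position terms of Eq. (11) with Between factors whose noise models make the rotation component nearly uninformative; the keys and measurements are made up.

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Two agent poses and one place node (all expressed as Pose3 local frames).
A0, A1, P0 = 0, 1, 100
initial.insert(A0, gtsam.Pose3())
initial.insert(A1, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0)))
initial.insert(P0, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.5, 1.0, 0.0)))

# Anchor the first agent pose.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
graph.add(gtsam.PriorFactorPose3(A0, gtsam.Pose3(), prior_noise))

# Odometry edge in E_aa (sigmas: rotation first, then translation, in rad/m).
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.05, 0.1, 0.1, 0.1]))
graph.add(gtsam.BetweenFactorPose3(
    A0, A1, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0)), odom_noise))

# Agent-place edge in E_ap: only the relative position should be constrained, so we
# use very loose rotation sigmas as a stand-in for the zeroed rotation block of
# Omega_ij in Eq. (11).
pos_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e3, 1e3, 1e3, 0.2, 0.2, 0.2]))
graph.add(gtsam.BetweenFactorPose3(
    A0, P0, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.5, 1.0, 0.0)), pos_noise))

# Optimize (Hydra instead uses the GNC solver on this problem to reject bad loop closures).
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(P0).translation())
```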
**Interpolation and Reconciliation.** Once the optimization terminates, the agent and place nodes are updated with their new (optimized) positions and the full mesh is interpolated back from its control points according to the deformation graph approach in [17, 2]. After the 3D scene graph optimization and the interpolation step, certain portions of the scene graph --corresponding to areas revisited multiple times by the robot-- contain redundant information. To avoid this redundancy, we merge overlapping nodes. For place nodes, we merge nodes within a distance threshold (\(0.4\,\mathrm{m}\) in our implementation). For object nodes, we merge nodes if the corresponding objects have the same semantic label and if one of the nodes is contained inside the bounding box of the other node. After this process is complete, we recompute the object centroids and bounding boxes from the positions of the corresponding vertices in the optimized mesh. Finally, we recompute the rooms from the graph of places using the approach in Section III-D.

Fig. 9: Loop closure detection and optimization: (a) after a loop closure is detected, (b) we extract and optimize a sub-graph of the 3D scene graph --the _deformation graph_-- that includes the agent poses, the places, and a subset of the mesh vertices. (c) We then reconstruct the rest of the graph via interpolation as in [17], and (d) reconcile overlapping nodes.
## V Thinking Fast and Slow: the Hydra Architecture
We integrate the algorithms described in this paper into a highly parallelized _spatial perception system_, named _Hydra_. Hydra involves a combination of processes that run at sensor rate (_e.g.,_ feature tracking for visual-inertial odometry), at sub-second rate (_e.g.,_ mesh and place reconstruction, object bounding box computation), and at slower rates (_e.g.,_ the scene graph optimization, whose complexity depends on the map size). Therefore these processes have to be organized such that slow-but-infrequent computation (_e.g.,_ scene graph optimization) does not get in the way of faster processes.
We visualize Hydra in Fig. 10. Each block in the figure denotes an algorithmic module, matching the discussion in the previous sections. Hydra starts with fast _early_ perception processes (Fig. 10, left), which perform low-level perception tasks such as feature detection and tracking (required for visual-inertial odometry, and executed at frame-rate), 2D semantic segmentation, and stereo-depth reconstruction (at keyframe rate). The result of early perception processes are passed to mid-level perception processes (Fig. 10, center). These include algorithms that incrementally construct (an odometric version of) the agent layer (_e.g.,_ the visual-inertial odometry backend), the mesh and places layers, and the object layer. Mid-level perception also includes the _scene graph frontend_, which is a module that collects the result of the other modules into an "unoptimized" scene graph. Finally, the high-level perception processes perform loop closure detection, execute scene graph backend optimization, and extract rooms (including both room clustering and classification).18 This results in a globally consistent, persistent 3D scene graph.
Footnote 18: While room detection is fast enough to be executed at keyframe rate, it still operates on the entire graph, hence it is more suitable as a slow high-level perception process.
Hydra runs in real-time on a multi-core CPU. The only module that relies on GPU computing is the 2D semantic segmentation, which uses a standard off-the-shelf deep network. The neural tree (Section III-D2) and the GNN-based loop closure detection (Section IV-A) can be optionally executed on a GPU, but the forward pass is relatively fast even on a CPU (see Section VI-C4). The fact that most modules run on CPU has the advantage of (i) leaving the GPU to learning-oriented components, and (ii) being compatible with the power limitations imposed by current mobile robots. In the next section, we will report real-time results with Hydra running on a mobile robot, a Unitree A1 quadruped, with onboard sensing and computation (an NVIDIA Xavier embedded computer).
## VI Experiments
The experiments in this section (i) qualitatively and quantitatively compare the 3D scene graph produced by Hydra to another state-of-the-art 3D scene graph construction method, SceneGraphFusion [7], (ii) examine the performance of Hydra in comparison to batch offline methods, _i.e.,_ Kimera [2], (iii) validate design choices for learned components in our method via ablation studies, and (iv) present a runtime analysis of Hydra. We also document our experimental setup, including training details for both the GNN-based loop closure descriptors and the neural-tree room classification, and datasets used. Our implementation of Hydra is available at [https://github.com/MIT-SPARK/Hydra](https://github.com/MIT-SPARK/Hydra).
### _Datasets_
We use four primary datasets for training and evaluation: two simulated datasets (Matterport3d [31] and uHumans2 [2]) and two real-world datasets (SidPac and Simmons). In addition, we use the Stanford3D dataset [3] to motivate neural tree design choices with respect to our initial proposal in [16].
**Matterport3D.** We utilize the Matterport3D (MP3D) dataset [31], an RGB-D dataset consisting of 90 reconstructions of indoor building-scale scenes. We use the Habitat Simulator [62] to traverse the scenes from the MP3D dataset and render color imagery, depth, and ground-truth 2D semantic segmentation. We generate two different 3D scene graph datasets for the 90 MP3D scenes by running Hydra on pre-generated trajectories: one for training descriptors and one for room classification. For training the GNN-based descriptors for loop closure detection (GNN-LCD), we generate a single trajectory for each scene through the navigable scene area such that we get coverage of the entire scene, resulting in 90 scene graphs. For training the room classification approaches, we generate 5 trajectories for each scene by randomly sampling navigable positions until a total path length of at least \(100\,\mathrm{m}\) is reached, resulting in 450 trajectories. When running Hydra on these 450 trajectories, we save intermediate scene graphs every 100 timesteps (resulting in roughly 15 scene graphs per trajectory), giving us 6810 total scene graphs.

Fig. 10: Hydra’s functional blocks. We conceptualize three different functional block groupings: low-level perception, mid-level perception, and high-level perception in order of increasing latency. Each functional block is labeled with a number that identifies the “logical” thread that the module belongs to.
**uHumans2.** The uH2 dataset is a Unity-based simulated dataset [2] that includes four scenes: a small apartment, an office, a subway station, and an outdoor neighborhood. For the purposes of this paper, we only use the apartment and office scenes. The dataset provides visual-inertial data, ground-truth depth, and 2D semantic segmentation. The dataset also provides ground truth trajectories that we use for benchmarking.
**SidPac.** The SidPac dataset is a real dataset collected in a graduate student residence using a visual-inertial hand-held device. We used a Kinect Azure camera as the primary collection device, providing color and depth imagery, with an Intel RealSense T265 rigidly attached to the Kinect to provide an external odometry source. The dataset consists of two separate recordings, both of which are used in our previous paper [11]. We only use the first recording for the purposes of this paper. This first recording covers two floors of the building (Floors 1 & 3), where we walked through a common room, a music room, and a recreation room on the first floor of the graduate residence, went up a stairwell, through a long corridor as well as a student apartment on the third floor, then finally down another stairwell to revisit the music room and the common room, ending where we started. These scenes are particularly challenging given the scale of the scenes (average traversal of around \(400\,\mathrm{m}\)), the prevalence of glass and strong sunlight in regions of the scenes (causing partial depth estimates from the Kinect), and feature-poor regions in hallways. We obtain a proxy for the ground-truth trajectory for the Floor 1 & 3 scene via a hand-tuned pose graph optimization with additional height priors, to reduce drift and qualitatively match the building floor plans.
**Simmons.** The Simmons dataset is a real dataset collected on a single floor of an undergraduate student residence with a Clearpath Jackal rover and a Unitree A1 quadruped robot (Fig. 11). The Clearpath Jackal rover uses the RealSense D455 camera as the primary collection device to provide color and depth imagery, but the rover is also equipped with a Velodyne to perform LIDAR-based odometry [63] as an external odometry source. The A1 also uses the RealSense D455 camera as the primary collection device, but is not equipped with a Velodyne; instead, it is equipped with an industrial grade Microstrain IMU to improve the performance of the visual-inertial odometry [34]. The dataset consists of two recordings, one recorded on the Jackal, and one recorded using the A1. The Jackal recording covers the rooms scattered throughout half of a single floor of the building, where the Jackal traverses a distance of around \(500\,\mathrm{m}\) through mostly student bedrooms, but also a lounge area, a kitchen, and a laundry room; the rooms are all joined by a long hallway that spans the full floor. The A1 sequence takes place on one end of the floor and maps 4 bedrooms, a small lounge, and a section of the hallway that connects all the rooms. The dataset is challenging due to the visual and structural similarity across student rooms that have similar layouts and furnishing. We obtain a proxy for the ground-truth trajectory for the Jackal using LIDAR-based SLAM, by running LOCUS [63] with flat ground assumption and LAMP [64] for loop closure corrections. The ground-truth trajectory of the A1 is obtained by registering individual visual keyframes with the visual keyframes in the Jackal sequence, and then corrected using the proxy ground-truth of the Jackal sequence.
**Stanford3D.** We use the 35 human-verified scene graphs from the Stanford 3D Scene Graph (Stanford3d) dataset [3] to compare the neural tree against standard graph neural networks for node classification and to assess new design choices against our initial proposal in [16]. These scene graphs represent individual residential units, and each consists of building, room, and object nodes with inter-layer connectivity. We use the same pre-processed graphs as in [16] where the single-type building nodes (residential) are removed and additional 4920 intra-layer object edges are added to connect nearby objects in the same room. This results in 482 room-object graphs, each containing one room and at least one object per room. The full dataset has 482 room nodes with 15 semantic labels, and 2338 objects with 35 labels.
### _Experimental Setup_
In this section, we first discuss the implementation of Hydra, including our choice of networks for the 2D semantic segmentation. Then we provide details regarding the training of our learning-based approaches: the GNN-LCD descriptors and the neural tree for room classification.
**Hydra.** To provide 2D semantic segmentation for the real-world datasets, we compare three different models, all of which use the ADE20k [68] label space. We use HRNet [66] as a "nominal" semantic segmentation source and MobileNetV2 [67] as a light-weight source of semantics for use on a robot. For both HRNet and MobileNetV2, we use the pre-trained model from the MIT Scene Parsing challenge [68] to export a deployable model for our inference toolchain (ONNX and TensorRT). Additionally, we use a state-of-the-art model, OneFormer [65], to provide more-accurate-but-slower 2D semantic segmentation. A comparison of the accuracy and frame-rate of these models is shown in Fig. 12.

Fig. 11: (a) The Clearpath Jackal platform used to record one of the Simmons sequences. The Jackal is equipped with both a RealSense D455 camera and a Velodyne LIDAR. (b) The Unitree A1 platform used to record the second Simmons sequence. The A1 is equipped with a RealSense D455 camera and a Microstrain IMU. Both platforms have an Nvidia Xavier NX embedded computer onboard.
For both simulated and real datasets we use Kimera-VIO [34] for visual-inertial odometry. For SidPac, we fuse the Kimera-VIO estimates with the output of the RealSense T265 to improve the quality of the odometric trajectory. For Simmons, we fuse the Kimera-VIO estimates with the odometry output of [63] to improve the quality of the odometric trajectory for the sequence recorded by the Jackal platform.
All the remaining blocks in Fig. 10 are implemented in C++, following the approach described in this paper. In the experiments we primarily use a workstation with an AMD Ryzen9 3960X with 24 cores and two Nvidia GTX3080s. We also report timing results on an embedded computer (Nvidia Xavier NX) at the end of this section, and demonstrate the full Hydra system running on the same embedded computer onboard the A1 robot; video of the experiment is available.19
Footnote 19: [https://youtu.be/AEABaQ-FeY0](https://youtu.be/AEABaQ-FeY0)
**GNN-LCD Training.** We use the MP3D scene graphs to train loop closure descriptors for the object and place sub-graphs. We generate a sub-graph around every agent pose which consists of all nodes and edges within a provided radius of the parent place node that the agent pose is nearest to. We adaptively determine the radius for each sub-graph; each sub-graph has a minimum radius of \(3\,\mathrm{m}\) and is grown until either a minimum number of nodes (10) or a maximum radius of \(5\,\mathrm{m}\) is reached. Each object and place sub-graph contains the node and edges features as described in Section IV-A. The semantic label of each object node is encoded either using word2vec [69] or a one-hot encoding. We explore the impact of this label encoding in Section VI-C.
We use triplet loss to train our model. To construct the triplets, we have an anchor (a candidate sub-graph), and need to find positive and negative examples to compute the loss. A good proxy for how similar two sub-graphs are is looking at the spatial overlap of the two sets of nodes of the sub-graphs. We compute this overlap between the sub-graphs by using IoU over the bounding boxes that encompass the positions of the nodes of each sub-graph. If the overlap between two sub-graphs is at least 40 percent, we consider them to be similar (and candidates for positive examples), otherwise they are considered negative examples for that particular anchor. Sub-graphs from different scenes are always considered negative examples. To train our GNN-LCD descriptors, we use online triplet mining with batch-all triplet loss [70], where we construct valid triplets for a batch of sub-graph input embeddings and average loss on the triplets that produce a positive loss.
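A sketch of the overlap test used to label triplets follows; the axis-aligned boxes and the 40% threshold mirror the description above, while the handling of degenerate boxes is an illustrative choice.

```python
import numpy as np


def subgraph_iou(positions_a, positions_b):
    """3D IoU of the axis-aligned boxes enclosing two sub-graphs' node positions."""
    lo_a, hi_a = positions_a.min(axis=0), positions_a.max(axis=0)
    lo_b, hi_b = positions_b.min(axis=0), positions_b.max(axis=0)
    inter = np.prod(np.clip(np.minimum(hi_a, hi_b) - np.maximum(lo_a, lo_b), 0.0, None))
    union = np.prod(hi_a - lo_a) + np.prod(hi_b - lo_b) - inter
    return inter / union if union > 0 else 0.0


def triplet_label(anchor_pos, other_pos, same_scene, iou_thr=0.4):
    """Positive if the sub-graphs overlap enough and come from the same scene."""
    if not same_scene:
        return 'negative'
    return 'positive' if subgraph_iou(anchor_pos, other_pos) >= iou_thr else 'negative'
```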
The message-passing architecture that we selected is GCNConv [48]. While more expressive or performant architectures exist, few are compatible with the computational framework used for inference in Hydra (_i.e._, ONNX). We split the dataset by scene; we use 70% of the original 90 scene graphs to train on, 20% of the scene graphs for validation, and the last 10% for testing. Our learning rate is \(5\times 10^{-4}\), and we train the object models for 50 epochs and place models for 80 epochs, saving the model when the average validation error is at a minimum. Each model produces a descriptor of dimension 64. Other key model parameters are reported in Appendix G.
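A sketch of a GCNConv-based sub-graph embedder with graph-level pooling is shown below; the layer count, hidden sizes, and output head are illustrative (the actual hyper-parameters are reported in Appendix G), and the triplet-loss training loop is omitted.

```python
import torch
from torch_geometric.nn import GCNConv, global_mean_pool


class SubgraphEmbedder(torch.nn.Module):
    """GCNConv message passing + graph-level pooling into a 64-d descriptor."""
    def __init__(self, in_dim, hidden=64, out_dim=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Sequential(
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, out_dim))

    def forward(self, x, edge_index, edge_weight, batch):
        h = self.conv1(x, edge_index, edge_weight).relu()
        h = self.conv2(h, edge_index, edge_weight).relu()
        return self.head(global_mean_pool(h, batch))  # one descriptor per sub-graph


# Toy forward pass on a 4-node sub-graph with pose-invariant edge weights.
model = SubgraphEmbedder(in_dim=5)
x = torch.rand(4, 5)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
edge_weight = torch.rand(3)
descriptor = model(x, edge_index, edge_weight, batch=torch.zeros(4, dtype=torch.long))
```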
**Neural Tree Training.** We train the neural tree and related baselines on two datasets: Stanford3d and MP3D.
For the 482 object-room scene graphs in the Stanford3D dataset, we train the neural tree and GNN baselines for the same semi-supervised node classification task examined in [16], where the architecture has to label a subset of room and object nodes. The goal of this comparison is to understand the impact of some design choices (_e.g.,_ heterogeneous graphs, edge features) with respect to our original proposal in [16] and related work. For this comparison, we implement the neural tree and baseline approaches with four different message passing functions: GCN [48], GraphSAGE [49], GAT [55], and GIN [71]. We consider both homogeneous and heterogeneous graphs; for all nodes, we use their centroid and bounding box size as node features. In some of our comparisons, we also examine the use of relative node positions as edge features, and discard the centroid from the node features. For the GNN baselines, we construct heterogeneous graphs consisting of two node types: rooms and objects. For the neural tree, we obtain graphs with four node types: room cliques, room-object cliques, room leaves, and object leaves; see Section III-D2. There are few message passing functions that are compatible with both heterogeneous graphs and edge features; therefore, we compare all heterogeneous approaches using only GAT [55]. For all tests on Stanford3d, we report average test accuracy over 100 runs. For each run, we randomly generate a 70%, 10%, 20% split across all nodes (_i.e.,_ objects and rooms) for training, validation, and testing, respectively.

Fig. 12: (a) A sample RGB image from SidPac Floor 1-3. (b-d) 2D semantic segmentation from OneFormer [65], HRNet [66], and MobileNetV2 [67]. The framerate of each approach is overlaid on the images for both the GPU in the workstation used for evaluation (an Nvidia GTX 3080), as well as for a less powerful embedded computer (the Nvidia Xavier NX). OneFormer was not tested on the Xavier NX.
For the 6180 scene graphs in the MP3D dataset, we test the neural tree and baselines for room classification on object-room graphs. The goal of this experiment is to understand the impact of the node features and the connectivity between rooms on the accuracy of room classification. We only consider heterogeneous graphs for this dataset, and as such only use GAT [55] for message passing. The heterogeneous node types are the same as for Stanford3D. We use the bounding box size as the base feature for each node, and use relative positions between centroids as edge features. We also evaluate the impact of using semantic labels as additional features for the object nodes using word2vec [69]. The MP3D dataset contains scene graphs with partially explored rooms. We discard any room nodes where the IoU between the 2D footprint of the places within the room and the 2D footprint of the ground truth room is less than a specified ratio (60% in our tests). Further details on the construction and pre-processing of the dataset are provided in Appendix E. We predict a set of 25 room labels which are provided in Appendix D. For training, we use the scene graphs from the official 61 training scenes of the MP3D dataset. For the remaining 29 scenes, we use graphs from two trajectories of the five total trajectories for validation and the other three trajectories for testing. For use with Hydra, we select the best-performing heterogeneous neural tree architecture; this architecture uses bounding box size and word2vec embeddings of the semantic labels of the object nodes as node features, as well as relative positions between nodes as edge features.
All training and testing is done using a single Nvidia A10G Tensor Core GPU. For both datasets, we use cross entropy loss between predicted and ground-truth labels during training, and save the models with the highest validation accuracy for testing. All models are implemented using PyTorch 1.12.1 and PyTorch Geometric 2.2.0 [58]. We base our implementation on our previous open-source version of the neural tree [16]. We provide additional training details, including model implementation, timing, and hyper-parameter tuning in Appendix E.
### _Results and Ablation Study_
We begin this section with a comparison between Hydra and SceneGraphFusion [7] (Section VI-C1). We then analyze the accuracy and provide an ablation of the modules in Hydra (Section VI-C2), and show an example of the quality of scene graph that Hydra is able to produce while running onboard the Unitree A1 (Section VI-C3). Finally, we report a breakdown of the runtime of our system (Section VI-C4).
#### VI-C1 Comparison against SceneGraphFusion
This section shows that Hydra produces better object maps compared to SceneGraphFusion [7], while also creating additional layers in the scene graph (SceneGraphFusion does not estimate rooms or places). We compare our system with SceneGraphFusion [7] for both the uHumans2 apartment and office scenes. For this comparison, the only learned component used by our system is the off-the-shelf 2D semantic segmentation network, hence for a fair comparison, we do not retrain the object label prediction GNN used by SceneGraphFusion [7]. Examples of the objects produced by Hydra and SceneGraphFusion are shown in Fig. 13. Note that SceneGraphFusion tends to over-segment larger objects (_e.g.,_ the sofa in the lower right corner of Fig. 13 or the bed in the top right corner of the same figure), while Hydra tends to under-segment nearby objects (such as the dining table and chairs in the bottom left of Fig. 13). For the purposes of a fair comparison, we use OneFormer [65] to provide semantics for Hydra, and use ground-truth robot poses as input to both systems. A quantitative comparison is reported in Table I, which also includes results for Hydra using ground-truth semantics as an upper bound on performance. We report two metrics. The first, _Percent Correct_, is the percent of estimated objects that are within some distance threshold (\(0.5\,\mathrm{m}\)) of a ground-truth object (as provided by the simulator for uHumans2). The second, _Percent Found_, is the percent of ground-truth objects that are within the same distance threshold (also \(0.5\,\mathrm{m}\)) of an estimated object. As the label space of SceneGraphFusion for objects does not line up well with the ground-truth object label space, and as SceneGraphFusion does a poor job predicting object labels for the two scenes used for comparison (see Fig. 13), we do not take the semantic labels into account when computing the metrics (as opposed to the stricter metrics used in Section VI-C2 for examining object accuracy, which do take the semantic labels of the object into account).
Table I shows that Hydra largely outperforms SceneGraphFusion in terms of both _Percent Correct_ and _Percent Found_, even after disregarding the incorrect semantic labels for the objects produced by SceneGraphFusion. In hindsight, Hydra can benefit from more powerful 2D segmentation networks and estimate better objects, while SceneGraphFusion directly attempts to extract semantics from the 3D reconstruction, which is arguably a harder task and does not benefit from the large datasets available for 2D image segmentation. The last row in the table --Hydra (GT)-- shows that Hydra's accuracy can further improve with a better 2D semantic segmentation. While SceneGraphFusion is less competitive in terms of object understanding, we remark that it predicts a richer set of object relationships for each edge between objects [7] (_e.g., standing on, attached to_) compared to Hydra, which might be also useful for certain applications.
#### VI-C2 Accuracy Evaluation and Ablation
Here, we examine the accuracy of Hydra, running in real-time, as benchmarked against the accuracy attained by constructing a 3D scene graph in an offline manner (_i.e.,_ by running Kimera [2]). We also present a detailed quantitative analysis to justify key design choices in Hydra. To do this, we break down this quantitative analysis across the objects, places, and rooms layers. We examine the impact of (i) the choice of loop closure detection and 2D semantic segmentation on the accuracy of each layer of the 3D scene graph, and (ii) the impact of model architectures and training parameters on the accuracy of the room classification network. As such, we first present the accuracy of the object and place layers predicted by Hydra. We then move to looking at room classification accuracy, including two ablation studies on the neural tree and an evaluation of our room clustering approach based on persistent homology. Finally, we examine the quality of predicted loop closures when using the proposed hierarchical descriptors.

TABLE I: Object accuracy for Hydra and SceneGraphFusion. Best results in **bold**.

| | uH2 Apartment % Correct | uH2 Apartment % Found | uH2 Office % Correct | uH2 Office % Found |
|---|---|---|---|---|
| SceneGraphFusion | 25.0 | 17.9 | 35.4 | 36.0 |
| Hydra (OneFormer) | 68.4 | 53.6 | 71.6 | 60.4 |
| Hydra (GT) | **95.5** | **82.1** | **86.5** | **83.3** |
When benchmarking the accuracy of the layers of Hydra, we consider five different configurations for odometry estimation and loop closure detection. The first configuration ("_GT-Trajectory_") uses ground-truth poses to incrementally construct the scene graph and disregards loop closures. The second, third, and fourth configurations ("_VIO+V-LC_", "_VIO+SG-LC_", and "_VIO+GNN-LC_", respectively) use visual-inertial odometry (VIO) for odometry estimation and use vision-based loop closures (_VIO+V-LC_), the proposed hierarchical handcrafted descriptors for scene graph loop closures (_VIO+SG-LC_), or the proposed hierarchical learned descriptors for scene graph loop closures (_VIO+GNN-LC_). The last configuration, _VIO_, uses visual-inertial odometry without loop closures.
**Accuracy Evaluation: Objects.** Figure 14 evaluates the object layer of Hydra across the five configurations of Hydra, as well as across various 2D semantic segmentation sources: ground-truth semantics (when available), OneFormer, and HRNet. For this evaluation, we benchmark against the ground-truth objects from the underlying Unity simulation for uHumans2. As the recording for the uHumans2 scenes only explores a portion of the scene, we only compare against ground-truth objects that have a high enough number of mesh vertices inside their bounding boxes (40 vertices). For the real datasets, where we do not have ground-truth objects, we consider the object layer of the batch scene graph constructed from ground-truth poses and OneFormer-based 2D semantic segmentation using Kimera [2] as the ground-truth objects for the purposes of evaluation. For the object layer, we report two metrics: the percentage of objects in the ground-truth scene graph that have an estimated object with the correct semantic label within a specified radius ("% _Found_") and the percentage of objects in the estimated scene graph that have a ground-truth object with the correct semantic label within a specified radius ("% _Correct_").20
Footnote 20: As distance thresholds, we use \(0.5\,\mathrm{m}\) for the uHumans2 Apartment and Office scene, and \(1\,\mathrm{m}\) for SidPac, the Simmons Jackal scene, and the Simmons A1 scene. These thresholds were chosen to roughly correspond to the mean Absolute Trajectory Error (ATE) for each scene, in order to normalize the metrics between simulated and real scenes.
We note some important trends in Fig. 14. First, when using _GT-Trajectory_, Hydra produces objects that are reasonably close to the ground-truth (80-100% found and correct objects for the Office scene), or close to the objects produced by offline approaches (70-80% found and correct objects for SidPac and Simmons). This demonstrates that --given the trajectory-- the real-time scene graph from Hydra is comparable to the batch and offline approaches at the state of the art. Second, _VIO+V-LC_, _VIO+SG-LC_, and _VIO+GNN-LC_ maintain reasonable levels of accuracy for the objects and attain comparable performance in small to medium-sized scenes (_i.e.,_ the uHumans2 Apartment, Office, and Simmons A1) for the same choice of semantic segmentation method. In these scenes, the drift is small and the loop closure strategy does not radically impact performance (differences are within standard deviation). However, in larger scenes (_e.g.,_ SidPac) scene graph loop closures are more important and both _VIO+GNN-LC_ and _VIO+SG-LC_ largely outperform _VIO+V-LC_ and _VIO_ in terms of object accuracy.
Finally, as expected, the quality of the semantic segmentation impacts the accuracy; this is much more pronounced for "% _Found_" than "% _Correct_". Note that the two metrics are somewhat coupled; a lower "% _Found_" can lead to a higher "% _Correct_"; this is one reason for the inversion in performance between OneFormer and HRNet for the uHumans2 scenes. Additionally, note that OneFormer identifies objects that exist in the ground-truth scene that are not exported by the simulator
Fig. 13: (a) Objects produced by Hydra for the uHumans2 apartment scene. (b) 3D scene graph produced by SceneGraphFusion for the uHumans2 apartment scene. Note that SceneGraphFusion directly infers geometric object relationships, and edges are drawn between objects for each detected relationship.
(this is another factor responsible for the lower "% _Correct_" of OneFormer). We also note a slight correlation between reduced semantic segmentation performance and reduced accuracy for _VIO+GNN-LC_ and _VIO+SG-LC_, such as for the Simmons Jackal scene. The low number of objects identified by HRNet in this scene contributes to fewer scene-graph based loop closures, and as such, worse performance.
**Accuracy Evaluation: Places.** Figure 14f evaluates the place layer by comparing the five configurations for Hydra. We use either the ground-truth semantics (when available) or OneFormer (when ground-truth semantics is not available), as the places are not influenced by the semantic segmentation quality. For the places evaluation, we construct a "ground-truth" GVD from a 3D reconstruction of the entire scene using the ground-truth trajectory of the scene. Using this, we measure the mean distance of an estimated place node to the nearest voxel in the ground-truth GVD, which we call "_Position Error_". Figure 14f reports this metric, along with standard deviation across 5 trials shown as black error bars.
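As a point of reference, the _Position Error_ metric reduces to a nearest-neighbour query against the ground-truth GVD; a minimal sketch follows, with the array-based interface and function name assumed purely for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def place_position_error(place_positions, gvd_voxel_centers):
    """Mean distance (meters) from each estimated place node to the nearest
    voxel center of the "ground-truth" GVD; both inputs are (N, 3) arrays."""
    dists, _ = cKDTree(gvd_voxel_centers).query(place_positions)
    return float(np.mean(dists))
```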
We note some important trends in Fig. 14f. Similar to the case of objects, the place position errors are almost identical for all configurations of Hydra for the uHumans2 scenes. These scenes have very low drift, and are relatively small, so it is expected that a higher quality trajectory makes less of an impact. For the larger scenes, we see a range of results. For SidPac, _VIO+GNN-LC_ performs worse than _VIO+SG-LC_, though both outperform _VIO+V-LC_ and _VIO_. Interestingly, we note that _VIO+SG-LC_ performs slightly worse than _VIO+V-LC_ for the Simmons Jackal scene, while _VIO+GNN-LC_ performs on par with _VIO+V-LC_ (but with lower standard deviation, as indicated by the black confidence bars in the plot). Note that the Simmons Jackal scene has multiple uniform rooms with very similar objects and layouts. In general, _VIO_ is outperformed by methods that incorporate loop closures, and the ability to correct for loop closures is important to maintaining an accurate 3D scene graph.
**Neural Tree Ablation 1: Node Classification.** We first replicate the semi-supervised node classification experiment described in [16] and compare different message passing architectures on the original graphs (_i.e.,_ standard GNNs) and
Fig. 14: (a-e) Object accuracy metrics for all scenes across different 2D semantic segmentation sources and different LCD and pose source configurations. The metrics are averaged over 5 trials for each pair of configurations (_i.e.,_ there were 5 trials for every combination of 2D semantics and a pose source). For semantic segmentation sources, “OF” is Oneformer, “HR” is HRNet and “GT” is the ground truth semantic segmentation provided by the simulator. Each cell reports the mean and standard deviation. Values are colored from highest (dark purple, best performance) to lowest (light orange, worst performance) by mean. (f) Place layer accuracy for all scenes across different configurations for Hydra. Each bar shows the average distance error for a given scene and configuration over 5 trials; standard deviation across the 5 trials is shown as a black error bar.
on the H-tree (_i.e.,_ the proposed neural tree). To construct the H-tree graphs, we apply the proposed hierarchical tree decomposition algorithm (Algorithm 1), which concatenates the tree decomposition of each layer. The results are shown in Table II. With the proposed tree decomposition approach, the neural tree achieves an advantage of between 1.63% and 11.72% over standard GNN models, depending on the type of message passing architecture. In comparison to the results in [16], the results in Table II are generated using a later version of the PyTorch Geometric library, which supports heterogeneous GNN operators. Also note that the tree decomposition used in Table II differs slightly from the tree decomposition algorithm used in [16]. We provide additional results in Appendix F that examine the impact of these changes.
We note that the absolute position of a node centroid is not invariant to translation, and that invariance is important to generalize to new scenes regardless of the choice of coordinate frames. We therefore examine using the relative positions of node centroids as edge features, instead of including them directly as node features. For the H-tree, which contains clique nodes that are comprised of multiple objects or rooms, we use the mean room centroid as the centroid of the clique when computing relative positions. In addition, we also investigate the impact of using heterogeneous graphs (which can accommodate different node and edge types, as the ones arising in the 3D scene graph) against standard homogeneous graphs. For this comparison, we only use the GAT message passing function since it is the only one in Table II that can both handle heterogeneous message passing and incorporate edge features.
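The sketch below illustrates how such translation-invariant edge features could be assembled, using the mean of the constituent node centroids for H-tree clique nodes; the data layout and function name are our own assumptions made for illustration.

```python
import numpy as np

def relative_position_edge_features(node_centroids, clique_members, edges):
    """Relative-position edge features (translation invariant).

    node_centroids : dict node_id -> (3,) centroid of a leaf node
    clique_members : dict node_id -> list of member node_ids (H-tree cliques)
    edges          : list of (src, dst) node_id pairs
    Returns an (E, 3) array with one relative position per edge.
    """
    def centroid(n):
        if n in clique_members:  # clique node: average its members' centroids
            return np.mean([node_centroids[m] for m in clique_members[n]], axis=0)
        return np.asarray(node_centroids[n], dtype=float)

    return np.stack([centroid(dst) - centroid(src) for src, dst in edges])
```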
We present the results of this ablation study in Table III. Comparing the two position encodings, the neural tree models achieve significantly higher accuracy when using relative positions as edge features: 7.45% on homogeneous graphs and 2.86% on heterogeneous graphs. Choosing features that are translation invariant (_i.e.,_ relative positions) has a clear advantage when using neural tree models. The standard GNNs show near-identical performance between the two position encodings. The heterogeneous graph structure has no significant impact on standard GNNs, but it degrades performance of the neural trees. The exact mechanism behind this decrease in performance is unclear; we posit that the heterogeneous neural tree variants have more parameters than the other architectures, and that the Stanford3D dataset may not have enough training data for these architectures. We do not observe a significant performance decrease between the homogeneous and heterogeneous neural tree variants for the MP3D dataset.
**Neural Tree Ablation 2: Room Classification.** We compare the neural tree against standard GNNs on a room classification task using the MP3D dataset, predicting room semantic labels for the object-room graphs extracted from Hydra. Compared to Stanford3d, this dataset contains both semantic labels of the object nodes and room layer connectivity from Hydra, in addition to the geometric features (_i.e.,_ centroid position and bounding box size) of each node. Therefore, we study the effect of these two additional pieces of information. We use pre-trained word2vec vectors to represent object semantic labels and concatenate them with the geometric feature vectors. As described previously, we filter out partially explored rooms, using a threshold of 60% IoU. As before, we use GAT as the message passing function when training, as all graphs are heterogeneous and have edge features. The results are shown in Table IV. Both approaches show a significant performance improvement using the semantic labels of the objects, ranging from 14% to 17%. The neural tree also shows substantial improvement when incorporating room layer edges, while the standard GNNs do not. The best-performing neural tree model is obtained when using semantic labels for the objects and accounting for room connectivity; this is the model we use in the rest of this paper.
**Accuracy Evaluation: Rooms.** We evaluate the accuracy of the room segmentation in Hydra, by first evaluating the quality of the geometric room clustering described in Section III-D1 and then testing the room classification from Section III-D2 for different choices of 2D segmentation network.
Figure 15 evaluates the room clustering performance, using the precision and recall metrics defined in [46] (here we compute precision and recall over 3D voxels instead of 2D pixels). More formally, these metrics are:
\[\begin{split}\text{{Precision}}&=\frac{1}{|R_{e}|} \sum_{r_{e}\in R_{e}}\max_{r_{g}\in R_{g}}\frac{|r_{g}\cap r_{e}|}{|r_{e}|} \\ \text{{Recall}}&=\frac{1}{|R_{g}|}\sum_{r_{g}\in R _{g}}\max_{r_{e}\in R_{e}}\frac{|r_{e}\cap r_{g}|}{|r_{g}|}\end{split} \tag{12}\]
where \(R_{e}\) is the set of estimated rooms, \(R_{g}\) is the set of ground-truth rooms, and \(|\cdot|\) returns the cardinality of a set; here, each room \(r_{e}\) (or \(r_{g}\)) is defined as a set of free-space voxels. We hand-label the ground-truth rooms \(R_{g}\) from the ground-truth reconstruction of the environment. In particular,
\begin{table}
\begin{tabular}{l c c} \hline \hline Message Passing & Original & H-tree \\ \hline GCN & \(42.91\pm 2.01\%\) & \(\mathbf{54.63\pm 2.19\%}\) \\ GraphSAGE & \(56.97\pm 2.02\%\) & \(\mathbf{58.60\pm 2.13\%}\) \\ GAT & \(45.06\pm 2.32\%\) & \(\mathbf{53.71\pm 2.10\%}\) \\ GIN & \(48.03\pm 2.21\%\) & \(\mathbf{55.00\pm 2.68\%}\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Stanford3d: Node classification accuracy for different message passing architectures. Best results in **bold**.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multicolumn{2}{c}{Graph Types} & Original & H-Tree \\ \hline \multirow{2}{*}{w/o word2vec} & w/o room edges & \(\mathbf{41.28\pm 0.48\%}\) & \(38.63\pm 0.25\%\) \\ & with room edges & \(41.20\pm 1.08\%\) & \(\mathbf{43.27\pm 0.59\%}\) \\ \hline \multirow{2}{*}{with word2vec} & w/o room edges & \(\mathbf{56.74\pm 0.85\%}\) & \(55.84\pm 0.37\%\) \\ & with room edges & \(56.02\pm 0.84\%\) & \(\mathbf{57.67\pm 0.57\%}\) \\ \hline \hline \end{tabular}
\end{table} TABLE IV: MP3D: Room classification accuracy for different graph types and node features. Best results in **bold**.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multicolumn{2}{c}{Graph Types} & Original & H-tree \\ \hline \multirow{2}{*}{Homogeneous} & absolute pos. & \(45.06\pm 2.32\%\) & \(\mathbf{53.71\pm 2.10\%}\) \\ & relative pos. & \(46.05\pm 1.98\%\) & \(\mathbf{61.16\pm 2.03\%}\) \\ \hline \multirow{2}{*}{Heterogeneous} & absolute pos. & \(\mathbf{46.56\pm 2.42\%}\) & \(45.30\pm 2.57\%\) \\ & relative pos. & \(45.79\pm 2.04\%\) & \(\mathbf{48.16\pm 2.21\%}\) \\ \hline \hline \end{tabular}
\end{table} TABLE III: Stanford3d: Node classification accuracy for different graph types and position encodings. Best results in **bold**.
we manually define sets of bounding boxes for each room and then identify the set of free-space voxels for \(r_{g}\) as all free-space voxels that fall within any of the defined bounding boxes for \(r_{g}\). For the estimated rooms \(R_{e}\), we derive the set of free-space voxels \(r_{e}\) from the places comprising each estimated room. In eq. (12), _Precision_ then measures the maximum overlap in voxels with a ground-truth room for every estimated room, and _Recall_ measures the maximum overlap in voxels with an estimated room for every ground-truth room. Intuitively, low precision corresponds to under-segmentation, _i.e.,_ fewer and larger room estimates, and low recall corresponds to over-segmentation, _i.e.,_ more and smaller room estimates. For benchmarking purposes, we also include the approach in [2] (_Kimera_) as a baseline for evaluation.
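A direct transcription of eq. (12) is sketched below, where each room is represented as a set of free-space voxel indices (e.g., (i, j, k) tuples); this set-based representation is an assumption made only for illustration.

```python
def room_precision_recall(estimated_rooms, gt_rooms):
    """Precision and recall over free-space voxels, as in eq. (12).

    estimated_rooms, gt_rooms : lists of sets of voxel indices, one set per room.
    """
    def best_overlap(room, others):
        # largest voxel overlap of `room` with any room in `others`, normalized
        return max(len(room & other) for other in others) / len(room)

    precision = sum(best_overlap(r_e, gt_rooms) for r_e in estimated_rooms) / len(estimated_rooms)
    recall = sum(best_overlap(r_g, estimated_rooms) for r_g in gt_rooms) / len(gt_rooms)
    return precision, recall
```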
Figure 15 shows that Hydra generally outperforms Kimera [2] when given the ground-truth trajectory, and that Hydra is slightly more robust to multi-floor environments. This is expected, as Kimera performs room segmentation by taking a 2D slice of the voxel-based map of the environment, which does not generalize to multi-floor scenes. For both the split-level Apartment scene and the multi-floor SidPac scene, we achieve higher precision as compared to Kimera. These differences stem from the difficulty of choosing an appropriate height at which Kimera should attempt to segment rooms (this is the height at which Kimera takes a 2D slice of the ESDF). Finally, it is worth noting that our room segmentation approach is able to mostly maintain the same levels of precision and recall for _VIO+GNN-LC_, _VIO+SG-LC_ and _VIO+V-LC_. In some cases, our approach outperforms Kimera, despite the drift inherent in _VIO+GNN-LC_, _VIO+SG-LC_ and _VIO+V-LC_ (Kimera uses ground-truth poses).
Finally, we report room classification accuracy for uHumans2, SidPac, and Simmons across semantic segmentation sources in Table V using the proposed neural tree. We manually assign a ground-truth room category to each hand labeled ground-truth room, and compute the percentage of estimated room labels that match their corresponding ground-truth category. Label correspondences are computed by picking the ground-truth room that contains the most place nodes that comprise each estimated room; an estimated room is assigned a correspondence to _unknown_ if too few place nodes (less than 10) fall inside the best ground-truth room.
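This correspondence and accuracy computation can be sketched as follows; the data structures (a list of estimated rooms with their predicted categories and place nodes, plus dictionaries mapping places and rooms to ground truth) are illustrative assumptions rather than Hydra's internal representation.

```python
from collections import Counter

def room_label_accuracy(est_rooms, gt_room_of_place, gt_room_category, min_places=10):
    """Percentage of estimated room labels matching their ground-truth category.

    est_rooms        : list of (predicted_category, list_of_place_node_ids)
    gt_room_of_place : dict place_node_id -> ground-truth room id (None if outside)
    gt_room_category : dict ground-truth room id -> category label
    """
    correct = 0
    for predicted, places in est_rooms:
        counts = Counter(gt_room_of_place.get(p) for p in places)
        counts.pop(None, None)  # ignore places outside every ground-truth room
        best = counts.most_common(1)
        n_inside = best[0][1] if best else 0
        # assign "unknown" when too few places fall inside the best GT room
        gt_label = gt_room_category[best[0][0]] if n_inside >= min_places else "unknown"
        correct += int(predicted == gt_label)
    return 100.0 * correct / max(len(est_rooms), 1)
```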
We note two interesting trends. First, the richness of the object label space impacts the predicted room label accuracy, as shown by the relatively poor performance of the GT column as compared to either HRNet or OneFormer for the uHumans2 scenes (27% versus 40% or 45%): the GT semantics available in the simulator has a smaller number of semantic classes for the objects, hindering performance. This is consistent with the observations in [72]. Second, scenes such as the uHumans2 office or Simmons that are out of distribution --compared to the MP3D dataset we use for training-- perform poorly. An example of a resulting 3D scene graph for SidPac with room category labels is shown in Fig. 1.
**Loop Closure Ablation.** Finally, we take a closer look at the quality of the loop closure candidates proposed by our hierarchical loop closure detection approach, and compare it against traditional vision-based approaches on the Office scene. In particular, we compare our approach against a vision-based loop closure detection that uses DBoW2 for place recognition and ORB feature matching, as described in [2].
Figure 16 shows the number of detected loop closures against the error of the registered solution (_i.e.,_ the relative pose between query and match computed by the geometric verification) for four different loop closure configurations: (i) "SG-LC": the proposed handcrafted scene graph loop closure detection, (ii) "SG-GNN": the proposed learned scene graph loop closure detection, (iii) "V-LC (Nominal)": a traditional vision-based loop closure detection with nominal parameters, and (iv) "V-LC (Permissive)": a vision-based loop closure detection with more permissive parameters (_i.e.,_ a decreased score threshold and less restrictive geometric verification settings). We report key parameters used in this evaluation in Appendix G. As expected, making the vision-based detection parameters more permissive leads to more but lower-quality loop closures. On the other hand, the scene graph loop closure approach produces approximately twice as many loop closures within \(10\,\mathrm{cm}\) of translation error and \(1^{\circ}\) of rotation error as the permissive vision-based approach. The proposed approach produces qualitatively and quantitatively better loop
\begin{table}
\begin{tabular}{l c c c} \hline \hline & GT & HRNet & OneFormer \\ \hline uHumans2 Apartment & \(26.7\pm 3.7\) & \(38.0\pm 21.7\) & \(45.0\pm 11.2\)\% \\ uHumans2 Office & \(27.6\pm 7.5\) & \(28.4\pm 6.9\) & \(27.0\pm 10.1\)\% \\ SidPac Floor 1–3 & N/A & \(46.2\pm 11.9\) & \(47.7\pm 12.5\)\% \\ Simmons Jackal & N/A & \(32.3\pm 16.6\) & \(15.3\pm 6.5\)\% \\ Simmons A1 & N/A & \(29.0\pm 24.4\) & \(38.0\pm 28.0\)\% \\ \hline \hline \end{tabular}
\end{table} TABLE V: Room classification accuracy for Hydra.
Fig. 15: Room metrics for all scenes across different LCD pose source configurations. Each bar shows the average precision or recall over 5 trials; standard deviation across the 5 trials is shown as a black error bar
closures compared to both baselines. Notably, SG-GNN also outperforms both vision baselines and the SG-LC. We present further discussion and a breakdown of additional statistics of the loop closures proposed by each method in Appendix H.
To further examine the relative performance of SG-LC and SG-GNN, we evaluate the top-k precision of both approaches on both the test split of the MP3D dataset used to train the descriptors, and the uHumans2 office scene. For this metric, we compute the percent of the \(k\)-highest-scored21 descriptors for each query descriptor that are valid matches; two descriptors are determined to match if the bounding boxes of their corresponding sub-graphs have an IoU above a specified threshold. We include two configurations of SG-GNN for object sub-graphs in this analysis; one that uses a one-hot encoding and one that uses word2vec embeddings to represent the semantic labels of the object nodes. We report this metric for both SG-LC (handcrafted) and SG-GNN (learned) in Table VI for two IoU thresholds: 0.4 and 0.6.
Footnote 21: We map distances between descriptors to a \([0,1]\) range, where \(1\) corresponds to a match, and \(0\) corresponds to a non-match
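For reference, the per-query top-k precision used here can be sketched as below; the scores are assumed to already be mapped to \([0,1]\) as described in the footnote, and the match flags are assumed to be precomputed from the bounding-box IoU test.

```python
import numpy as np

def precision_at_k(scores, is_match, k=10):
    """Fraction of the k highest-scored candidate descriptors that are valid
    matches for one query (p@k); averaging over queries gives the table values.

    scores   : similarity scores in [0, 1] against all candidate sub-graphs
    is_match : boolean flags, True if the candidate's bounding box overlaps the
               query's with IoU above the chosen threshold
    """
    order = np.argsort(-np.asarray(scores))[:k]  # indices of the k best scores
    return float(np.mean(np.asarray(is_match, dtype=bool)[order]))
```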
We note some interesting trends in Table VI. First, the one-hot encoding outperforms the word2vec encoding for both datasets, and appears to transfer better between datasets (_i.e.,_ showing 5% better performance for the MP3D dataset, but 10% better performance for uHumans2). Additionally, the original handcrafted descriptors maintain good performance compared to the learned descriptors, and only the learned object descriptors appear competitive to the handcrafted descriptors in terms of precision. We believe that the high performance of the handcrafted descriptors is due to the semantic and geometric diversity among the scenes of the MP3D dataset. Previous experiments (using the _VIO+GNN-LC_ configuration of Hydra) imply that the learned descriptors offer improved performance in environments with a more uniform object distribution (_i.e.,_ the Simmons Jackal scene).
#### Vi-C3 Onboard Operation on a Robot
This section shows that Hydra is capable of running in real-time and is deployable to a robot. We show this by performing a qualitative experiment, running Hydra online on the Unitree A1. We run Hydra and the chosen 2D semantic segmentation network (MobilenetV2 [67]) on the Nvidia Xavier NX mounted on the back of the A1. Additionally, we run our room classification approach in the loop with Hydra on the CPU of the same Nvidia Xavier. In this test, to circumvent the computational cost of running Kimera-VIO, we use an Intel RealSense T265 to provide odometry estimates to Hydra. As a result, we use our proposed scene-graph loop closure detection method without the appearance-based descriptors (which would rely on Kimera-VIO for computation of the visual descriptors and vision-based geometric verification). To maintain real-time performance, we configure the frontend of Hydra to run every fifth keyframe (instead of every keyframe), and limit the reconstruction range to \(3.5\,\mathrm{m}\) (instead of the nominal \(4.5\,\mathrm{m}\)). Note that this results in a nominal update rate of \(1\,\mathrm{Hz}\) for Hydra, though we still perform dense 3D metric-semantic reconstruction at keyframe rate (\(5\,\mathrm{Hz}\)). A breakdown of the runtime of Hydra's modules during this experiment is available in Section VI-C4.
For this experiment, we partially explore a floor of building 31 on the MIT campus consisting of a group of cubicles, a conference room, and two impromptu lounge areas, all of which are connected by a hallway. An intermediate scene graph produced by Hydra while running the experiment is shown in Fig. 17, and a video of the experiment is available.22 Hydra estimates four rooms for the scene graph; of these four rooms, room 3 is centered over one of the two lounges, and room 0 covers both the conference room (located in the lower right corner of Fig. 17) and a portion of the hallway. Qualitatively, Hydra over-segments the scene, but the produced rooms and labels are still somewhat consistent with the underlying room
Fig. 16: Number of detected loop closures versus error of the estimated loop closure pose for four different loop closure detection configurations. Five individual trials and a trend-line are shown for each configuration.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & & & \multicolumn{2}{c}{p@10} \\ \cline{3-5} & & & IoU = \(0.4\) & IoU = \(0.6\) \\ \hline \multirow{4}{*}{Objects} & \multirow{2}{*}{MP3D} & Handcrafted & \(\mathbf{70.9}\) & \(\mathbf{58.6}\) \\ & & Learned+OneHot & \(65.5\) & \(56.0\) \\ & & Learned+Word2Vec & \(60.3\) & \(50.9\) \\ \cline{2-5} & & Handcrafted & \(46.5\) & \(\mathbf{31.5}\) \\ & & Learned+OneHot & \(\mathbf{47.4}\) & \(31.4\) \\ \hline \multirow{4}{*}{Places} & \multirow{2}{*}{MP3D} & Handcrafted & \(\mathbf{76.4}\) & \(\mathbf{58.0}\) \\ & & Learned & \(59.5\) & \(47.4\) \\ \cline{1-1} \cline{2-5} & & Handcrafted & \(\mathbf{68.6}\) & \(\mathbf{41.4}\) \\ \cline{1-1} \cline{2-5} & & Learned & \(54.1\) & \(34.9\) \\ \hline \hline \end{tabular}
\end{table} TABLE VI: p@k results for loop closure detection. Best results in **bold**.
Fig. 17: Intermediate 3D scene graph created by Hydra on-board the A1 quadruped while exploring building 31 on the MIT campus. Estimated room labels are shown.
structure and labels. In this instance, only room 0 has an incorrectly estimated label; however the room categories that Hydra estimates over the course of the experiment are not as consistent. This is likely due to the poor quality of the 2D semantics from MobilenetV2, and general lack of useful object categories for inferring room labels.
#### Vi-C4 Runtime Evaluation
Figure 18 reports the runtime of Hydra versus the batch approach in [2]. This plot shows that the runtime of the batch approach increases over time and takes more than five minutes to generate the entire scene graph for moderate scene sizes; as we mentioned, most processes in the batch approach [2] entail processing the entire ESDF (_e.g.,_ place extraction and room detection), inducing a linear increase in the runtime as the ESDF grows. On the other hand, our scene graph frontend (_Hydra Mid-Level_ in Fig. 18) has a fixed computation cost. In Fig. 18, a slight upward trend is observable for _Hydra High-Level_, driven by room detection and scene graph optimization computation costs, though remaining much lower than batch processing. Noticeable spikes in the runtime for _Hydra High-Level_ (_e.g.,_ at 1400 seconds) correspond to the execution of the 3D scene graph optimization when new loop closures are added.
When reconstructing the scene shown in Fig. 18 (SidPac Floor1-3), Hydra uses a maximum of \(7.2\,\mathrm{M}\mathrm{i}\mathrm{B}\) for the TSDF, \(19.1\,\mathrm{M}\mathrm{i}\mathrm{B}\) for the storage of the semantic labels, and \(47.8\,\mathrm{M}\mathrm{i}\mathrm{B}\) for the storage of the dense GVD inside the active window, for a total of \(74.1\,\mathrm{M}\mathrm{i}\mathrm{B}\). Kimera instead uses \(79.2\,\mathrm{M}\mathrm{i}\mathrm{B}\) for the TSDF, \(211\,\mathrm{M}\mathrm{i}\mathrm{B}\) for the storage of semantic labels, and \(132\,\mathrm{M}\mathrm{i}\mathrm{B}\) for the storage of the ESDF when reconstructing the entire environment, for a total of \(422\,\mathrm{M}\mathrm{i}\mathrm{B}\) of memory. Note that the memory storage requirement of Hydra for the active window is a fifth of the memory required for Kimera, and that the memory usage of Kimera grows with the size of the scene.
Table VII reports timing breakdown for the incremental creation of each layer across scenes for a single trial. The object layer runtime is determined by the number of mesh vertices in the active window and the number of possible object semantic classes. The room layer runtime is determined by the number of places (a combination of how complicated and large the scene is); this is why the Office has the largest computation cost for the rooms despite being smaller than the SidPac scenes. Note that our target keyframe rate implies a limit of \(200\,\mathrm{m}\mathrm{s}\) for any of these processes. While the mean and standard deviation of all layers are well below this rate, we do see that the real-time rate is exceeded in some cases for extracting the objects. This specifically occurs for larger scenes (_i.e.,_ SidPac and Simmons Jackal), due to the presence of many objects. At the same time, we remark that Hydra is architected in such a way that temporarily exceeding the real-time threshold does not preclude online performance and real-time operation is restored shortly after these delays occur.
While the timing results in Table VII are obtained with a relatively powerful workstation, here we restate that Hydra can run in real-time on embedded computers commonly used in robotics applications. Towards this goal, we report the timing statistics from the online experiment running Hydra onboard the Unitree A1 as shown in Fig. 17. Hydra processes the objects in \(83.9\pm 65\,\mathrm{m}\mathrm{s}\), places in \(114.8\pm 103\,\mathrm{m}\mathrm{s}\), and rooms in \(34.7\pm 37.6\,\mathrm{m}\mathrm{s}\). While these numbers imply that Hydra can run faster than the \(1\,\mathrm{H}\mathrm{z}\) target rate on the Xavier, note that the \(1\,\mathrm{H}\mathrm{z}\) limit is chosen to not fully max out the computational resources of the Xavier. Additionally, there are other modules of Hydra and external processes that limit (sometimes significantly) the computational resources of the Xavier (_e.g.,_ the reconstruction of the TSDF and GVD). While there is still margin to optimize computation (see conclusions), these initial results stress the practicality and real-time capability of Hydra in building 3D scene graphs.
## VII Related Work
We provide a broad literature review touching on abstractions and symbolic representations (Section VII-A), metric-semantic and hierarchical map representations and algorithms to build them from sensor data (Section VII-B), and loop closure detection and optimization (Section VII-C).
### _Need for Abstractions and Symbolic Representations_
**State and Action Abstractions.** The need to abstract sensor data into higher-level representations has been studied in the context of planning and decision-making. Konidaris [73] points out the necessity of state and action abstraction for efficient task and motion planning problems. Konidaris et al. [74] extract task-specific state abstractions. James et al. [75] show how task-independent state abstractions can also be learned. James et al. [76] show how to autonomously learn object-centric representations of a continuous and high-dimensional environment, and argue that such a representation enables efficient planning for long-horizon tasks. Berg et al. [77] propose a hierarchical representation for planning in large outdoor environments. The hierarchy contains three levels:
Fig. 18: Runtime required for scene graph construction vs. timestamp for the SidPac Floor 1–3 dataset for a batch approach (Kimera) and for the proposed incremental approach (Hydra). For timing of the low-level processes in Hydra, we refer the reader to the analysis in [2], as we also rely on Kimera-VIO.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Objects [ms] & Places [ms] & Rooms [ms] \\ \hline uH2 Apartment & \(52.8\pm 14.9\) & \(11.3\pm 6.1\) & \(1.8\pm 0.9\) \\ uH2 Office & \(34.0\pm 26.1\) & \(12.5\pm 6.9\) & \(14.6\pm 11.9\) \\ SidPac Floor 1–3 & \(57.4\pm 55.7\) & \(15.7\pm 9.2\) & \(5.9\pm 9.0\) \\ Simmons Jackal & \(63.9\pm 45.8\) & \(19.6\pm 13.1\) & \(6.6\pm 9.0\) \\ Simmons A1 & \(71.1\pm 50.5\) & \(18.6\pm 11.7\) & \(1.4\pm 1.0\) \\ \hline \hline \end{tabular}
\end{table} TABLE VII: Hydra: timing breakdown.
landmarks (_e.g.,_ forests, buildings, streets), neighborhoods, and cities. Several related works also discuss action abstractions, _e.g.,_ how to group a sequence of actions into macro-actions (usually referred to as _options_). Jinnai et al. [78], for instance, provide a polynomial-time approximation algorithm for learning options for a Markov decision process.
**Ontologies and Knowledge Graphs.** A fundamental requirement for an embodied agent to build and communicate a meaningful representation of an environment is to use a common vocabulary describing concepts of interest23 and their relations. Such a vocabulary may be represented as an _ontology_ or _knowledge graph_. The precise definition of an ontology varies between communities [80, 81]. A well accepted definition was proposed by Gruber [82] as _an explicit specification of conceptualization_ and later extended to require that the conceptualization be a shared view [83] and be written in a formal language [84]. In practice, the definition used is often not as strict. For example, the definition provided by W3C as part of the Web Ontology Language (OWL) [85] describes an ontology as _a set of terms in a vocabulary and their inter-relationships_. Generally, an embodied agent is committed to a conceptualization regardless of whether the commitment is explicit (e.g., represented as a knowledge graph) or implicit (e.g., represented as a set of labels for a classifier) [86].
Footnote 23: The term “concept” may be ambiguous without context as concepts may include a description of a task, a thought process, or simply the labels for a classifier. This ambiguity is discussed in more detail in [79].
The need to create a standard ontology was identified by [87], which resulted in the emergence of several knowledge processing frameworks focused on robotics applications [88, 89, 90, 91]. A significant effort has been made in the creation of common-sense ontologies and knowledge graphs [92, 93, 94, 95, 96, 97, 98, 99, 100]. In recent years, there has been a surge in applying these ontologies and knowledge graphs to problems such as 2D scene graph generation [101, 102, 103, 104], image classification [105, 106], visual question answering [107, 108, 109, 110, 111], task planning [112, 113, 114], and representation learning [115, 116], to name a few.
**Scene Grammars and Compositional Models.** Compositionality, which is the ability of humans to perceive reality as a hierarchy of parts, has been considered fundamental to human cognition [117, 118]. Geman et al. [117] propose a mathematical formulation of _compositional models_, by recursively grouping or composing constituents and assigning probability factors to composition. Zhu et al. [119] represent and infer visual patterns as a recursive compositional model, which is nothing but a tree-structured graphical model where the leaf nodes represent the visual patterns, and the higher layers represent complex compositions. Zhu and Mumford [120], Zhu and Huang [121] discuss how to model objects, images, and scenes using such a hierarchical tree structure, called _stochastic grammar_, and advocate it to be a general framework for visual representation. Inspired by these insights, recent works such as [122, 123, 124] propose probabilistic generative models to capture hierarchical relationships between entities in a scene.
Recent works have tended to use such a hierarchical structure along with deep neural networks to provide better learning models. Wang et al. [125] show that using a hierarchical, compositional model of the human body results in better human pose estimation. The model consists of body parts (_e.g.,_ shoulder, elbow, arm) organized as a hierarchy (_e.g.,_ shoulder, elbow are children of arm). A bottom-up/top-down inference strategy is proposed that is able to correct ambiguities in perceiving the lowest-level parts. Niemeyer and Geiger [126] show that modeling a 3D scene as one composed of objects and background leads to more controllable and accurate image synthesis. Mo et al. [127] model object shape as a hierarchical assembly of individual parts, and the object shape is then generated by transforming the shape parameter of each object part. Ichien et al. [128] show that compositional models significantly outperform deep learning models on the task of analogical reasoning. Yuan et al. [129] survey work on compositional scene representation. While early work by Fodor and Pylyshyn [130] argued that neural-network-based models are not compositional in nature, recent works have suggested otherwise. The works [131, 132, 133] show that deep convolutional networks avoid the curse of dimensionality in approximating a class of compositional functions. Webb et al. [134] show that large language models such as GPT-3 show compositional generalizability as an emergent property. Such compositionality, however, is not yet evident in generative models trained on multi-object scenes [135].
### _Metric-semantic and Hierarchical Representations for Scene Understanding and Mapping_
**2D Scene Graphs in Computer Vision.** 2D scene graphs are a popular model for image understanding that describes the content of an image in terms of objects (typically grounded by bounding boxes in the image) and their attributes and relations [136, 137]. The estimation of 2D scene graphs (either from a single image or a sequence of images) has been well studied in the computer vision community and is surveyed in [28]. The seminal works [136, 137] advocated the use of 2D scene graphs to perform cognitive tasks such as image search, image captioning, and answering questions. These tasks, unlike object detection, require the model to reason about object attributes and object-to-object relations, and therefore, enable better image understanding. 2D scene graphs have been successfully used in image retrieval [137], caption generation [138, 139], visual question answering [140, 136], and relationship detection [141]. GNNs are a popular tool for joint object labels and/or relationship inference on scene graphs [142, 143, 144, 145]. Chang et al. [146] provide a comprehensive survey on the various methods that have been proposed to infer a 2D scene graph from an image.
Despite their popularity, 2D scene graphs have several limitations. First of all, they are designed to ground concepts in the image space, hence they are not suitable to model large-scale scenes (see discussion in Section II). Second, many of the annotated object-to-object relationships in [136] like "behind", "next to", "near", "above", "under" are harder to assess from a 2D image due to the lack of depth information, and are
more easily inferred in 3D. Finally, 2D representations such as 2D scene graphs are not invariant to viewpoint changes (_i.e.,_ viewing the same 3D scene from a different viewing angle may result in a different 2D scene graph), as observed in [3].
**Flat Metric-Semantic Representations.** The last few years have seen a surge of interest towards _metric-semantic mapping_, simultaneously triggered by the maturity of traditional 3D reconstruction and SLAM techniques, and by the novel opportunities for semantic understanding afforded by deep learning. The literature has focused on both object-based maps [147, 148, 149, 150, 151, 152] and dense maps, including volumetric models [153, 14, 154], point clouds [155, 156, 157], and 3D meshes [34, 158]. Some approaches combine objects and dense map models [162, 160, 159, 160]. These approaches are not concerned with estimating higher-level semantics (_e.g.,_ rooms) and typically return dense models that might not be directly amenable for navigation [40].
**Building Parsing.** A somewhat parallel research line investigates how to _parse the layout of a building_ from 2D or 3D data. A large body of work focuses on parsing 2D maps [46], including rule-based [47] and learning-based methods [163]. Friedman et al. [164] compute a Voronoi graph from a 2D occupancy grid, which is then labeled using a conditional random field. Recent work focuses on 3D data. Liu et al. [163] and Stekovic et al. [165] project 3D point clouds to 2D maps, which however is not directly applicable to multi-story buildings. Furukawa et al. [166] reconstruct floor plans from images using multi-view stereo combined with a Manhattan World assumption. Lukierski et al. [167] use dense stereo from an omni-directional camera to fit cuboids to objects and rooms. Zheng et al. [168] detect rooms by performing region growing on a 3D metric-semantic model.
**Hierarchical Representations and 3D Scene Graphs.** Hierarchical maps have been pervasive in robotics since its inception [170, 171, 169, 172]. Early work focuses on 2D maps and investigates the use of hierarchical maps to resolve the divide between metric and topological representations [173, 174, 175, 176, 177]. These works preceded the "deep learning revolution" and could not leverage the rich semantics currently accessible via deep neural networks.
More recently, _3D scene graphs_ have been proposed as expressive hierarchical models for 3D environments. Armeni et al. [3] model the environment as a graph including low-level geometry (_i.e.,_ a metric-semantic mesh), objects, rooms, and camera locations. Rosinol et al. [2, 4] augment the model with a topological map of places (modeling traversability), as well as a layer describing dynamic entities in the environment. The approaches in [3, 2, 4] are designed for offline use. Other papers focus on reconstructing a graph of objects and their relations [5, 6, 7]. Wu et al. [7] predict objects and relations in real-time using a graph neural network. Izatt and Tedrake [8] parse objects and relations into a scene grammar model using mixed-integer programming. Gothoskar et al. [9] use an MCMC approach. Gay et al. [178] use a quadric representation to estimate 3D ellipsoids for objects in the scene given input 2D bounding boxes, and use an RNN to infer relationships between detected objects.
### _Maintaining Persistent Representations_
**Loop Closures Detection.** Established approaches for visual loop closure detection in robotics trace back to place recognition and image retrieval techniques in computer vision; these approaches are broadly adopted in SLAM pipelines but are known to suffer from appearance and viewpoint changes [179]. Alternative approaches investigate place recognition using image sequences [180, 181, 182] or deep learning [183]. More related to our proposal is the set of papers leveraging semantic information for loop closure detection. Gawel et al. [184] perform object-graph-based loop closure detection using random-walk descriptors built from 2D images. Liu et al. [185] use similar object-based descriptors but built from a 3D reconstruction. Lin et al. [186] adopt random-walk object-based descriptors and then compute loop closure poses via object registration. Qin et al. [187] propose an object-based approach based on sub-graph similarity matching. Zheng et al. [168] propose a room-level loop closure detector. None of these approaches are hierarchical in nature.
**Loop Closures Correction.** After a loop closure is detected, the map needs to be corrected accordingly. While this process is easy in sparse (_e.g.,_ landmark-based) representations [30], it is non-trivial to perform in real-time when using dense representations. Stuckler and Behnke [188] and Whelan et al. [189] optimize a map of _surfels_, to circumvent the need to correct structured representations (_e.g.,_ meshes or voxels). Dai et al. [190] propose reintegrating a volumetric map after each loop closure. Reijgwart et al. [191] correct drift in volumetric representations by breaking the map into submaps that can be rigidly re-aligned after loop closures. Whelan et al. [192] propose a 2-step optimization that first corrects the robot trajectory and then deforms the map (represented as a point cloud or a mesh) using a deformation graph approach [17]. Rosinol et al. [2] unify the two steps into a single pose graph and mesh optimization. None of these works are concerned with simultaneously correcting multiple layers in a hierarchical representation.
## VIII Conclusions
This paper argues that large-scale spatial perception for robotics requires hierarchical representations. In particular, we show that hierarchical representations scale better in terms of memory and are more suitable for efficient inference. Our second contribution is to introduce algorithms to build a hierarchical representation of an indoor environment, namely a _3D scene graph_, in real-time as the robot explores the environment. Our algorithms combine 3D geometry (_e.g.,_ to cluster the free space into a graph of places), topology (to cluster the places into rooms), and geometric deep learning (_e.g.,_ to classify the type of rooms the robot is moving across). Our third contribution is to discuss loop closure detection and correction in 3D scene graphs. We introduce (handcrafted and learning-based) hierarchical descriptors for loop closure detection, and develop a unified optimization framework to correct drift in the 3D scene graph in response to loop closures. We integrate our algorithmic contributions into a heavily parallelized system, named _Hydra_, and show it can
build accurate 3D scene graphs in real-time across a variety of photo-realistic simulations and real datasets.
**Limitations.** While we believe the proposed contributions constitute a substantial step towards high-level scene understanding and spatial perception for robotics, our current proposal has several limitations. First, the sub-graph of places captures the free-space in 3D, which is directly usable for a drone to navigate. However, traversability for ground robots is also influenced by other aspects (_e.g.,_ terrain type, steepness). These aspects are currently disregarded in the construction of the GVD and the places sub-graph, but we believe they are particularly important for outdoor extensions of 3D scene graphs. Second, our approach for room segmentation, which first clusters the rooms geometrically, and then labels each room, mainly applies to rooms with clear geometric boundaries (_e.g.,_ it would not work in an open floor-plan). We believe this limitation is surmountable (at the expense of using extra training data) by replacing the 2-stage approach with a single learning-based method (_e.g.,_ a neural tree or a standard graph neural network) that can directly classify places into rooms. Third, for computational reasons, we restricted the inference in the neural tree to operate at the level of objects, rooms, and buildings. However, it would be desirable for high-level semantic concepts (_e.g.,_ room and object labels) to propagate information downward towards the mesh geometry. While a more compact representation of the low-level geometry [193, 194] might facilitate this process, it remains unclear how to fully integrate top-down reasoning in the construction of the 3D scene graph, which is currently a bottom-up process.
**Future Work.** This work opens many avenues for current and future investigation. First of all, while this paper mostly focused on inclusion and adjacency relations, it would be interesting to label nodes and edges of the 3D scene graph with a richer set of relations and affordances, for instance building on [7]. Second, the connections between our scene graph optimization approach and pose graph optimization offer opportunities to improve the efficiency of the optimization by leveraging recent advances in pose graph sparsification as well as novel solvers. Third, it would be interesting to replace the sub-symbolic representation (currently, a 3D mesh and a graph of places) with neural models, including neural implicit representations [21] or neural radiance fields [195], which can more easily incorporate priors and better support shape completion [196]. Fourth, the current set of detectable objects is fairly limited and restricted by the use of pre-trained 2D segmentation networks. However, we have noticed in Section VI and in [72] that a larger object vocabulary leads to better room classification; therefore, it would be interesting to investigate novel techniques that leverage language models for open-set segmentation, _e.g.,_[197, 198]. Fifth, it would be desirable to extend the framework in this paper to arbitrary (_i.e.,_ mixed indoor-outdoor) environments. Finally, the implications of using 3D scene graphs for prediction, planning, and decision-making are still mostly unexplored (see [199, 200, 201] for early examples).
## Disclaimer
Research was sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
|
2305.10428 | Field-level Lyman-alpha forest modelling in redshift space via augmented
non-local Fluctuating Gunn-Peterson Approximation | We present an improved analytical model to predict the Lyman-alpha forest at
the field level in redshift space from the dark matter field, expanding upon
the widely-used Fluctuating Gunn-Peterson approximation (FGPA). In particular,
we introduce the dependence on the cosmic web environment (knots, filaments,
sheets, voids) in the model, thereby effectively accounting for non-local bias.
Furthermore, we include a detailed treatment of velocity bias in the redshift
space distortions modelling, allowing the velocity bias to be cosmic-web
dependent. We find evidence for a significant difference of the same model
parameters in different environments, suggesting that for the investigated
setup the simple standard FGPA is not able to adequately predict the
Lyman-alpha forest in the different cosmic web regimes. We reproduce the
summary statistics of the reference cosmological hydrodynamic simulation we use
for comparison, yielding accurate mean transmitted flux, probability
distribution function, 3D power spectrum, and bispectrum. In particular, we
achieve maximum deviation and average deviations accuracy in the Lyman-alpha
forest 3D power spectrum of $\sim 3\%$ and $\sim 0.1\%$ up to $k\sim 0.4 \, h
\, {\rm Mpc}^{-1}$, $\sim 5\%$ and $\sim 1.8\%$ up to $k \sim 1.4 \, h \, {\rm
Mpc}^{-1}$. Our new model outperforms previous analytical efforts to predict
the Lyman-alpha forest at the field level in all the probed summary statistics,
and has the potential to become instrumental in the generation of fast accurate
mocks for covariance matrices estimation in the context of current and
forthcoming Lyman-alpha forest surveys. | Francesco Sinigaglia, Francisco-Shu Kitaura, Kentaro Nagamine, Yuri Oku, Andrés Balaguera-Antolínez | 2023-05-17T17:58:04Z | http://arxiv.org/abs/2305.10428v3 | Field-level Lyman-alpha forest modelling in redshift space via augmented non-local Fluctuating Gunn-Peterson Approximation
###### Abstract
Context: Devising fast and accurate methods to predict the Lyman-alpha forest at the field level avoiding the computational burden of running large-volume cosmological hydrodynamic simulations is of fundamental importance to quickly generate the massive set of simulations needed by the state-of-the-art galaxy and Ly\(\alpha\) forest spectroscopic surveys.
Aims: We present an improved analytical model to predict the Ly\(\alpha\) forest at the field level in redshift space from the dark matter field, expanding upon the widely-used Fluctuating Gunn-Peterson approximation. Instead of assuming a unique universal relation over the whole considered cosmic volume, we introduce the dependence on the cosmic web environment (knots, filaments, sheets, voids) in the model, thereby effectively accounting for non-local bias. Furthermore, we include a detailed treatment of velocity bias in the redshift space distortions modelling, allowing the velocity bias to be cosmic-web dependent.
Methods: We first map the dark matter field from real to redshift space through a particle-based relation including velocity bias, depending on the cosmic web classification of the dark matter field in real space. We then formalize an appropriate functional form for our model, building upon the traditional Fluctuating Gunn-Peterson approximation (FGPA) and including a cut-off and a boosting factor mimicking a threshold and inverse-threshold bias effect, respectively, with model parameters depending on the cosmic web classification in redshift space. Eventually, we fit the coefficients of the model via an efficient Markov Chain Monte Carlo scheme.
Results: We find evidence for a significant difference of the same model parameters in different environments, suggesting that for the investigated setup the simple standard FGPA is not able to adequately predict the Ly\(\alpha\) forest in the different cosmic web regimes. We reproduce the summary statistics of the reference cosmological hydrodynamic simulation we use for comparison, yielding accurate mean transmitted flux, probability distribution function, 3D power spectrum, and bispectrum. In particular, we achieve maximum deviation and average deviations accuracy in the Ly\(\alpha\) forest 3D power spectrum of \(\sim 3\%\) and \(\sim 0.1\%\) up to \(k\sim 0.4\,h\,{\rm Mpc}^{-1}\), \(\sim 5\%\) and \(\sim 1.8\%\) up to \(k\sim 1.4\,h\,{\rm Mpc}^{-1}\).
Conclusions: Our new model outperforms previous analytical efforts to predict the Ly\(\alpha\) forest at the field level in all the probed summary statistics, and has the potential to become instrumental in the generation of fast accurate models for covariance matrices estimation in the context of current and forthcoming Ly\(\alpha\) forest surveys.
## 1 Introduction
The neutral hydrogen (H I) absorption features imprinted in quasar spectra, also known as the Lyman-\(\alpha\) forest (Ly\(\alpha\) forest, hereafter), represent one of the most promising cosmological probes over the current and next decades. The Ly\(\alpha\) forest traces the density field continuously along the line-of-sight, and the high number density of quasar sightlines per square degree delivered by state-of-the-art cosmological redshift surveys will allow building accurate three-dimensional maps of the Universe at \(z\gtrsim 1.8\), i.e., at distances which are currently beyond the reach of wide-field galaxy redshift surveys. While awaiting next-generation astronomical facilities such as the Square Kilometre Array Observatory1 and the Extremely Large Telescope2, the Ly\(\alpha\) forest is anticipated to play a pivotal role in refining the constraints on cosmological models and advancing our comprehension of the high-redshift Universe.
Footnote 1: SKAO: [https://www.skao.int](https://www.skao.int)
Footnote 2: ELT: [https://elt.eso.org](https://elt.eso.org)
To maximize the scientific return and optimize the extraction of cosmological information from the incoming unprecedented Ly\(\alpha\) forest data sets provided by surveys such as DESI (Levi et al., 2013), Euclid (Amendola et al., 2018), WEAVE-QSO (Pieri et al., 2016), and Subaru-PFS (Takada et al., 2014), it is extremely important to develop accurate analytical models and numerical tools to efficiently interpret and analyze Ly\(\alpha\) forest observables.
In particular, a primary goal is to formalize an effective model of the bias relation linking the Ly\(\alpha\) forest to the dark matter field. In fact, an effective bias mapping allows the prediction
of the Ly\(\alpha\) forest from the dark matter field in a fast and accurate way, enabling the generation of massive sets of Ly\(\alpha\) forest mock lightcones. Mock catalogs have become the standard tool used in large-scale cosmological surveys to robustly address uncertainties on cosmological parameters, to test pipelines and commissioning tools, and to assess the feasibility of studies targeting new observables, among others. Furthermore, knowing an effective bias description makes it possible to forward-model the Ly\(\alpha\) forest at the field level and iteratively reconstruct the Ly\(\alpha\) forest (see e.g. Horowitz et al., 2019; Porqueres et al., 2020) and its Baryon Acoustic Oscillations (BAOs), improving on methods de-evolving cosmic structures back in time with some approximation for the displacement field (e.g. Eisenstein et al., 2007, and later works implementing higher-order Lagrangian Perturbation Theory refinements).
Pioneering studies (Hui and Gnedin, 1997; Rauch, 1998; Croft et al., 1998; Weinberg et al., 1999) proposed to model the Ly\(\alpha\) forest using a deterministic scaling relation - known as _Fluctuating Gunn-Peterson approximation_ (FGPA, hereafter) - between the dark matter density and the H I optical depth. While useful, fairly accurate on large linear scales, and fast to compute, the FGPA has been shown to lose accuracy and not adequately model the summary statistics of the Ly\(\alpha\) forest in the weakly non-linear regime (\(k>0.1\,h\,\)Mpc\({}^{-1}\)) (e.g. Sinigaglia et al., 2022). A plethora of works employed the FGPA to forward-model the Ly\(\alpha\) forest from dark matter density fields obtained through N-body simulations (Meiksin and White, 2001; Viel et al., 2002; Slosar et al., 2009; White et al., 2010; Rorai et al., 2013; Lee et al., 2014), approximated gravity solvers (Horowitz et al., 2019) and Gaussian or lognormal random fields (Le Goff et al., 2011; Font-Ribera et al., 2012; Farr et al., 2020). In particular, the LyaCoLoRe method (Farr et al., 2020), applying the FGPA on lognormal fields, with free parameters constrained so as to match the position and width of the acoustic peak in the Ly\(\alpha\) forest two-point correlation function and the amplitude of the 1D Ly\(\alpha\) forest power spectrum, was adopted to produce the mock lightcones used for the BAO measurements of the Ly\(\alpha\) forest from the final eBOSS data release (SDSS DR16) (du Mas des Bourboux et al., 2020).
On the other hand, more sophisticated iterative methods to model the Ly\(\alpha\) forest and employing different strategies have been proposed, such as e.g. LyMAS (Peirani et al., 2014, 2022) and the Iteratively-Matched Statistics (IMS, Sorini et al., 2016). These techniques have been shown to yield promising results, although they still feature deviations of order \(5-20\%\) in the 3D power spectrum. Moreover, those techniques were not applied to approximated gravity solvers, but rely on full N-body simulations. Therefore, they are not able to overcome the computational burden of running massive sets of simulations.
With the flourishing of machine learning methods, and the extraordinary attention that this field is receiving in astrophysics, the generation of AI3-accelerated simulated cosmological volumes is witnessing a rapid expansion.
Footnote 3: Artificial Intelligence.
The machine learning method bam (Balaguera-Antolinez et al., 2018, 2019; Pellejero-Ibanez et al., 2020; Kitaura et al., 2022) and its extension to hydrodynamics hydro-bam (Sinigaglia et al., 2021, 2022), which combines the latest version of a bias mapping method with a physically-motivated strongly-supervised learning strategy and the exploitation of the hierarchy of baryon quantities, has been shown to model two- and three-point statistics to \(1\%\) and \(\sim 10\%\) level of accuracy, respectively. Therefore, this technique represents a promising way forward to produce Ly\(\alpha\) forest mock catalogs for the next-generation surveys (see e.g., Balaguera-Antolinez et al., 2022, for a concrete application to halo mock catalogs generation for the DESI survey).
Deep learning has also achieved competitive results. Harrington et al. (2021) employed deep convolutional generative adversarial networks to learn how to correct the FGPA feeding the density and velocity DM fields as inputs. In a companion paper, Horowitz et al. (2021) exploited the idea of image generation behind deep generative methods and built a conditional convolutional auto-encoder, able to sample representations in latent space of the hydrodynamic fields of a simulation and synthesize the Ly\(\alpha\) forest with a deep posterior distribution mapping.
Nonetheless, machine learning techniques to date still require an adequately large training set in order to appropriately address the issue of overfitting, making it still expensive in the case of predicting the Ly\(\alpha\) forest.
In this work we propose to revisit the FGPA, augmenting it in light of the recent findings on the importance of accounting for long-range and short-range non-local terms in the mapping of dark matter tracers (see e.g. Balaguera-Antolinez et al., 2019; Kitaura et al., 2022; Sinigaglia et al., 2021). In particular, we explicitly introduce the dependence on the cosmic web environments through the so-called cosmic web classification (Hahn et al., 2007) in the FGPA modelling. We showcase the application of our algorithm to map the Ly\(\alpha\) forest on a mesh with cells of a few Mpc on a side in redshift space, and assess its accuracy by evaluating the deviation of relevant summary statistics (mean transmitted flux, PDF, power spectrum, and bispectrum) of the predictions from the ones of a reference cosmological hydrodynamic simulation.
The paper is organized as follows. SS2 introduces the cosmological hydrodynamic simulation we use to validate our method. SS3 summarizes the cosmic web classification and its connection to non-local bias. In SS4 we review the FGPA and describe our improvements in modelling redshift-space distortions (RSD hereafter) and non-local bias terms. SS5 presents the analysis, the results of our predictions, and a discussion of them. SS6 discusses potential future improvements, and we conclude in SS7.
## 2 Reference cosmological hydrodynamic simulation
In this section, we present our reference cosmological hydrodynamic simulation.
The reference simulation has been run with the cosmological smoothed-particle hydrodynamics (SPH) code GADGET3-OSAKA(Aoyama et al., 2018; Shimizu et al., 2019), a modified version of GADGET-3 and a descendant of the popular \(N\)-body/SPH code GADGET-2(Springel, 2005). It spans a comoving volume \(V=(500\,h^{-1}\,{\rm Mpc})^{3}\) and contains \(N=2\times 1024^{3}\) particles of mass \(m_{\rm DM}=8.43\times 10^{9}h^{-1}\)M\({}_{\odot}\) for DM particles and \(m_{\rm gas}=1.57\times 10^{9}h^{-1}\)M\({}_{\odot}\) for gas particles. The gravitational softening length is set to \(\epsilon_{g}=16h^{-1}\) kpc (comoving), but we allow the baryonic smoothing length to become as small as \(0.1\epsilon_{g}\). This means that the minimum baryonic smoothing at \(z=2\) is about physical \(533\,h^{-1}\) pc. The star formation and supernova feedback are treated as described in Shimizu et al. (2019). The code contains also important refinements, such as the density-independent formulation of SPH and the time-step limiter (Saitoh and Makino, 2009; Saitoh and Makino, 2013; Hopkins et al., 2013).
The main baryonic processes which shape the evolution of the gas are photo-heating, photo-ionization under the UV background radiation (Haardt and Madau, 2012), and radiative cooling. All these processes are accounted for and solved by the Grackle
library4 (Smith et al., 2017), which determines the chemistry for atomic (H, D and He) and molecular (H\({}_{2}\) and HD) species. The chemical enrichment from supernovae is also treated with the CELib chemical evolution library by Saitoh (2017). The initial conditions are generated at redshift \(z=99\) using MUSIC2 (Hahn et al., 2021) with cosmological parameters taken from Planck Collaboration et al. (2020).
Footnote 4: [https://grackle.readthedocs.io/](https://grackle.readthedocs.io/)
In this work we use the output at \(z=2\) (for which the computation times amount to \(\sim 1.44\times 10^{5}\) CPU hours), reading both gas and dark matter properties.
To compute the Ly\(\alpha\) forest flux field \(F=\exp(-\tau)\)5, we first obtain the H i optical depth \(\tau\) by means of a line-of-sight integration as follows (Nagamine et al., 2021):
Footnote 5: The notation \(F=\exp(-\tau)\) for the Ly\(\alpha\) forest flux field actually refers to the transmitted flux \(F/F_{e}\), i.e. to the quasar spectrum normalized to its continuum \(F_{e}\). We will omit hereafter the normalization and denote the transmitted flux just as \(F\) unless specified otherwise.
\[\tau=\frac{\pi e^{2}}{m_{e}c}\sum_{j}f\,\phi(x-x_{j})\,n_{\rm HI}(x_{j})\, \Delta l, \tag{1}\]
where \(e\), \(m_{e}\), \(c\), \(n_{\rm HI}\), \(f\), \(x_{j}\), \(\Delta l\) denote respectively the electron charge, electron mass, speed of light in vacuum, H i number density, the absorption oscillator strength, the line-of-sight coordinate of the \(j\)-th cell and the physical cell size. The Voigt-line profile \(\phi(x)\) in Eq. (1) is provided by the fitting formula of Tasitsiomi (2006). Where necessary, relevant quantities (e.g. H i number density) are previously interpolated on the mesh according to the SPH kernel of the simulation. Coordinates \(x_{j}\) of cells along the line-of-sight refer to the outcome of interpolation of particles based either on their positions in real space \(r_{j}\) or redshift-space \(s_{j}=r_{j}+v_{j}^{\rm los}/aH\), where \(v_{j}^{\rm los}\), \(a\) and \(H\) are the \(j\)-th particle velocity component along the line-of-sight, the scale factor and the Hubble parameter at \(z=2\), respectively.
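For concreteness, the following minimal Python sketch illustrates the line-of-sight sum of Eq. (1) for a single sightline. It is not the exact procedure adopted here: the Voigt-line profile of Tasitsiomi (2006) is replaced by a generic Gaussian profile of width `sigma_x`, and the physical constants \(\pi e^{2}f/(m_{e}c)\) are collected into a symbolic prefactor `prefac`; both are assumptions of the sketch.

```python
import numpy as np

def tau_sightline(n_HI, dl, sigma_x, prefac=1.0):
    """Schematic transcription of Eq. (1) for one sightline.

    n_HI    : HI number density in each cell along the sightline
    dl      : physical cell size (same length units as sigma_x)
    sigma_x : width of the Gaussian line profile standing in for the
              Voigt fitting formula of Tasitsiomi (2006)
    prefac  : collects the constants pi e^2 f / (m_e c)
    """
    n = len(n_HI)
    x = (np.arange(n) + 0.5) * dl                  # cell-centre LOS coordinates
    dx = x[:, None] - x[None, :]                   # pairwise separations x - x_j
    phi = np.exp(-0.5 * (dx / sigma_x) ** 2)
    phi /= phi.sum(axis=1, keepdims=True) * dl     # normalise: sum_j phi * dl = 1
    # tau_i = prefac * sum_j phi(x_i - x_j) n_HI(x_j) dl
    return prefac * (phi * n_HI[None, :] * dl).sum(axis=1)

# flux = np.exp(-tau_sightline(n_HI, dl, sigma_x, prefac))
```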
We interpolate the dark matter and the Ly\(\alpha\) forest fields in real and in redshift space onto a \(256^{3}\) cells cubic mesh using a CIC mass assignment scheme (Hockney and Eastwood, 1981). Such resolution corresponds to a physical cell volume of \(\sim(1.95\,h^{-1}\,{\rm Mpc})^{3}\) and a Nyquist frequency of \(k_{\rm nyq}\sim 1.6\,h\,{\rm Mpc}^{-1}\).
It is worth noting that the simulations utilized for the analyses of the Ly\(\alpha\) forest in Momose et al. (2021); Nagamine et al. (2021), the exploration of the cosmological H i distribution in Ando et al. (2019), and the examination of bias in cosmological gas tracers in Sinigaglia et al. (2021, 2022) were all executed using the same code.
## 3 Cosmic web classification and non-local bias
The cosmic web (see e.g., Bond et al., 1996) arises as a result of gravitational instability and the formation and growth of cosmic structures from tiny perturbations of the primordial matter density field in the Early Universe. While several different methods have been proposed in the literature to mathematically define the cosmic web (see Cautun et al., 2014; Libeskind et al., 2017, for a summary), we focus here on the following procedure.
To quantitatively describe the large-scale matter distribution and split it into different cosmic web environments, Hahn et al. (2007) proposed a classification scheme based on the signature of the eigenvalues of the gravitational tidal field tensor
\[\mathcal{T}_{ij}(\mathbf{r})=\partial_{i}\partial_{j}\phi(\mathbf{r})\,, \tag{2}\]
where \(\phi\) denotes the gravitational potential and \(\mathbf{r}\) stands for Eulerian coordinates. Considering the equations of motion in comoving coordinates
\[\mathbf{\ddot{r}}=-\nabla\phi(\mathbf{r}) \tag{3}\]
for a test particle subject to the gravitational potential \(\phi\), and assuming \(\nabla\phi(\mathbf{r})=0\) at the centre of mass of dark matter haloes (i.e. there is a local minimum), one can linearize the equations of motion and realize that the dynamics close to local extrema of the gravitational potential in the linear regime is ruled by the three eigenvalues \(\lambda_{i}\) (\(i=1,2,3\)) of \(\mathcal{T}_{ij}\) (we refer the reader to Hahn et al., 2007, for more details on the calculations). In close analogy to the Zel'dovich approximation (Zel'dovich, 1970), and sorting the \(\lambda_{i}\) in decreasing order such that \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\), Hahn et al. (2007) defined a region at coordinates \(\mathbf{r}\) to belong to (a minimal numerical sketch of this classification is given after the list below):
* a knot, if \(\lambda_{1},\lambda_{2},\lambda_{3}\geq 0\);
* a filament, if \(\lambda_{1},\lambda_{2}\geq 0,\lambda_{3}<0\);
* a sheet, if \(\lambda_{1}\geq 0,\lambda_{2},\lambda_{3}<0\);
* a void, if \(\lambda_{1},\lambda_{2},\lambda_{3}<0\).
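The sketch below illustrates this classification for a periodic density-contrast field. It assumes the normalized Poisson equation \(\nabla^{2}\phi=\delta\), so that in Fourier space \(\mathcal{T}_{ij}(\mathbf{k})=k_{i}k_{j}\,\delta(\mathbf{k})/k^{2}\); the returned labels 3, 2, 1, 0 correspond to knots, filaments, sheets, and voids, respectively.

```python
import numpy as np

def tweb_classify(delta, box_size, lambda_th=0.0):
    """T-web classification (Hahn et al. 2007) of a periodic field `delta`.

    Returns, per cell, the number of tidal-tensor eigenvalues >= lambda_th:
    3 = knot, 2 = filament, 1 = sheet, 0 = void.
    """
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                 # avoid 0/0; the mean mode is zeroed below
    delta_k = np.fft.fftn(delta)
    kvec = (kx, ky, kz)
    # tidal tensor T_ij = d_i d_j phi, with phi_k = -delta_k / k^2
    T = np.empty(delta.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            Tij_k = kvec[i] * kvec[j] / k2 * delta_k
            Tij_k[0, 0, 0] = 0.0
            T[..., i, j] = np.fft.ifftn(Tij_k).real
    lam = np.linalg.eigvalsh(T)                       # eigenvalues per cell (ascending)
    return (lam >= lambda_th).sum(axis=-1)
```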
At this point, one can drop the assumption of local extrema at the centre of mass of the dark matter halo and generalize the cosmic web classification to any point of the density field. This implies the emergence of a constant additive acceleration term to the linearized equations of motion, which however does not affect the role played by the tidal tensor in linear order.
Elaborating on this work, Forero-Romero et al. (2009) proposed to relax the threshold used to classify the cosmic web, based on simple dynamical arguments and realizing that raising the threshold from \(\lambda_{\rm th}=0\) to \(\lambda_{\rm th}=0.1\) provides a classification that better matches the visual appearance of the cosmic web. Alternatively to the cosmic web classification based on the gravitational tidal field tensor (hereafter T-web), other authors (Bond et al., 1996; Hoffman et al., 2012) suggested applying the same classification scheme to the shear of the velocity field
\[\Sigma_{ij}=\frac{1}{2}(\partial_{i}v_{j}+\partial_{j}v_{i}) \tag{4}\]
instead (dubbed V-web), which to linear order (\(\theta=\delta\)) coincides with the T-web, but departs from it at non-linear order and has been shown to trace the cosmic web more effectively towards smaller scales than the T-web.
The T-web classification has been widely adopted to investigate the properties of dark matter haloes, galaxies, and intergalactic gas in the cosmic web (Hahn et al., 2007; Yang et al., 2017; Martizzi et al., 2019; Sinigaglia et al., 2021), to develop fast accurate bias models to generate haloes and galaxy mock catalogues (e.g. Zhao et al., 2015; Balaguera-Antolinez et al., 2018, 2019; Pellejero-Ibanez et al., 2020; Kitaura et al., 2022; Balaguera-Antolinez et al., 2022), and to perform observational cosmological analysis (e.g. Lee and White, 2016; Krolewski et al., 2017; Horowitz et al., 2019), among others.
The T-web classification has also been shown to provide a direct connection between the phenomenology of the cosmic web and the mathematical description of halo bias relying on Eulerian perturbation theory (see e.g. McDonald and Roy, 2009). In fact, discriminating between different T-web environments corresponds to considering a perturbative bias expansion up to third order including both local and non-local long-range bias terms. In this sense, following Kitaura et al. (2022), let us denote the three invariants of the tidal field tensor as:
* \(I_{1}=\lambda_{1}+\lambda_{2}+\lambda_{3}\);
* \(I_{2}=\lambda_{1}\lambda_{2}+\lambda_{1}\lambda_{3}+\lambda_{2}\lambda_{3}\);
* \(I_{3}=\lambda_{1}\lambda_{2}\lambda_{3}\).
These three invariants represent an alternative formulation of the perturbative bias expansion up to third order (Kitaura et al., 2022), but they can also be straightforwardly linked to the phenomenological T-web classification as follows:
* knots: \(I_{3}>0\) & \(I_{2}>0\) & \(I_{1}>\lambda_{1}\);
* filaments: \(I_{3}<0\) & \(I_{2}<0\parallel I_{3}<0\) & \(I_{2}>0\) & \(\lambda_{3}<I_{1}<\lambda_{3}-\lambda_{2}\lambda_{3}/\lambda_{1}\);
* sheets: \(I_{3}>0\) & \(I_{2}<0\parallel I_{3}<0\) & \(I_{2}>0\) & \(\lambda_{1}-\lambda_{2}\lambda_{3}/\lambda_{1}<I_{1}<\lambda_{1}\);
* voids: \(I_{3}<0\) & \(I_{2}>0\) & \(I_{1}<\lambda_{1}\).
Eventually, one can generalize this description by relaxing the value of the eigenvalues threshold and replacing zero with any other arbitrary value.
From an intuitive point of view, this means that the T-web models the full perturbative expansion up to third order adopting a low-resolution binning, i.e. just four categories identified as cosmic web types (see Kitaura et al., 2022, and references therein). While using the invariants \(I_{i}\) of \(\mathcal{T}_{ij}\) has been shown to provide a more accurate description of non-local bias and of anisotropic clustering than the T-web, the T-web classification has the advantage of being much less prone to overfitting.
In this work, we apply the T-web classification to the modelling of the Ly\(\alpha\) forest, in order to introduce non-local contributions within the FGPA prescription. In particular, as will be described in more detail in SS4, we make both the model for RSDs and for the FGPA dependent on the T-web by fitting the parameters of such models separately for each cosmic web type. Also, we extract the cosmic web classification both in real and in redshift space. Table 1 reports the volume filling factor of each cosmic web type for the real space dark matter field, and the redshift space dark matter field obtained by applying the RSD description including velocity bias dependent on the real-space T-web with parameters described in the upper part of Table 2 (see SS4.2 for the details about the procedure), and used to compute our final non-local FGPA model. One realizes that there is only a tiny sub-percent difference in volume filling factors between real and redshift space T-web classification, with sheets being the most frequent cosmic web type (\(\sim 50\%\) of the volume), knots being the rarest (\(\sim 2-3\%\) of the volume), and filaments and voids representing intermediate cases.
## 4 FGPA modelling
In this section, we first introduce the FGPA in its standard form, and then present our improvements to the modelling within this framework.
### Standard FGPA
A popular way to compute a fast proxy for the Ly\(\alpha\) forest consists in the FGPA, which assumes equilibrium between optically-thin photoionization and collisional recombination of the intergalactic H i. In particular, because the majority (\(>90\%\) in volume and \(>50\%\) in mass, Lukic et al., 2015) of the gas probed by the Ly\(\alpha\) forest is found in regions with mildly non-linear density contrasts, is diffuse, and is not shock-heated, the gas density \(\rho_{\rm gas}\) and temperature \(T_{\rm gas}\) can be linked through the power-law relation
\[T_{\rm gas}=T_{0}(\rho_{\rm gas}/\bar{\rho}_{\rm gas})^{\gamma} \tag{5}\]
where \(\bar{\rho}_{\rm gas}\) is the mean gas density and \(T_{0}\) and \(\gamma\) depend on the reionization history and on the spectral slope of the UV background model. These commonly vary within the ranges \(4000\,{\rm K}\lesssim{\rm T}_{0}\lesssim 10^{4}\,{\rm K}\) and \(0.3\lesssim\gamma\lesssim 0.6\) (see e.g., Hui & Gnedin, 1997). Assuming photo-ionization equilibrium, one can express the Ly\(\alpha\) forest optical depth \(\tau\) as a function of the gas density as
\[\tau=A(1+\delta_{\rm gas})^{\alpha}. \tag{6}\]
This expression represents the FGPA, as \(\tau\) is described as a field that fluctuates with the underlying gas distribution. While \(\alpha=2-0.7\,\gamma\sim 1.6\) (see e.g., Weinberg et al., 1999; Viel et al., 2002; Seljak, 2012; Rorai et al., 2013; Lukic et al., 2015; Cieplak & Slosar, 2016; Horowitz et al., 2019), \(A\) is a normalization constant which depends on redshift and on the details of the hydrodynamics (e.g. Weinberg et al., 1999). Given that in the cool low-density regions dark matter and gas densities display a very high cross-correlation (see e.g. Sinigaglia et al., 2021), and that the FGPA provides a useful tool to be applied to the dark matter field to obtain Ly\(\alpha\) forest predictions without solving the equations of hydrodynamics, \(\delta_{\rm gas}\) can be replaced with \(\delta_{\rm dm}\) in Eq. (6), so that the FGPA is applied directly to the dark matter field:
\[\tau=A(1+\delta_{\rm dm})^{\alpha}. \tag{7}\]
The FGPA has been shown to represent a good approximation for very high-resolution full N-body simulations (e.g. Sorini et al., 2016; Kooistra et al., 2022), i.e. when representing density fields on regular grids with cell size \(l\sim 150-200\,{\rm kpc}\,h^{-1}\), where particle positions and velocities are known with high accuracy thanks to the exact solution of collisionless and fluid dynamics for dark matter and gas particles/cells, respectively. However, this is not the case if either a coarser grid resolution (\(\gtrsim 1-2\,{\rm Mpc}\,{\rm h}^{-1}\)) and/or approximated gravity solvers are to be adopted, as in the case of the massive generation of large-volume Ly\(\alpha\) forest boxes encompassing all the universe up to \(z\sim 4\) (e.g. Farr et al., 2020). In particular, the summary statistics predicted by the FGPA have been shown to depart from the corresponding hydrodynamic flux field statistics obtained with the full hydro computation already at cell resolutions of \(l\sim 0.8\,{\rm Mpc}\,{\rm h}^{-1}\), both in real and in redshift space (Sinigaglia et al., 2022).
In this work, we compute the FGPA based on the cubic mesh used to CIC-interpolate the simulation particles, i.e. with physical cell size \(l\sim 1.95\,{\rm Mpc}\,h^{-1}\). To motivate the choice of a coarse grid, we notice that such a resolution already requires a grid with \(N=5120^{3}\) cells to represent a simulation box of volume \(V=(10\,{\rm Gpc}\,{\rm h}^{-1})^{3}\), i.e. a full-sky cosmic realization at \(0\leq z\lesssim 3.8\) covering the volume relevant for Ly\(\alpha\) forest studies. This already implies a quite heavy computational burden, especially when hundreds of mock lightcones are to be generated, and resorting to higher resolution meshes would make the process practically unfeasible.
\begin{table}
\begin{tabular}{l c c} \hline \hline & Real space (\%) & Redshift space (\%) \\ \hline Knots & 2.1 & 2.7 \\ Filaments & 21.4 & 21.9 \\ Sheets & 49.2 & 48.8 \\ Voids & 27.3 & 26.6 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Volume filling factors of each cosmic web type for the real space dark matter field, and the redshift space dark matter field obtained by applying the RSD description including velocity bias dependent on the real-space T-web with parameters described in the upper part of Table 2 (see §4.2 for the details about the procedure), and used to compute our final non-local FGPA model.
To put ourselves in realistic observational conditions, we compute the FGPA in redshift space. To this end, we displace particles from real to redshift space via the mapping (Kaiser, 1987; Hamilton, 1998)
\[\mathbf{s}_{i}=\mathbf{r}_{i}+\frac{(\mathbf{v}_{i}\cdot\hat{\mathbf{r}}_{i})\,\hat{\mathbf{r}}_{i}}{aH} \tag{8}\]
where \(\mathbf{s}_{i}\) and \(\mathbf{r}_{i}\) are the Eulerian comoving coordinates of the \(i\)th particle in redshift space and real space, \(\hat{\mathbf{r}}_{i}=\mathbf{r}_{i}/|\mathbf{r}_{i}|\), \(\mathbf{v}_{i}\) is its velocity, and \(a\) and \(H\) are the scale factor and the Hubble parameter at the redshift of interest, respectively. Under the plane-parallel approximation that we adopt, and therefore neglecting the lightcone geometry, Eq. (8) simplifies to only one scalar component along the line of sight.
In this setup, we compute the _standard6_ FGPA by setting \(\alpha=1.6\) and determining the parameter \(A\) as the value which matches the normalization of the Ly\(\alpha\) forest 3D power spectrum on large scales. In this case, we find \(A=0.27\).
Footnote 6: We adopt the nomenclature _standard_ in contrast to the non-local FGPA version presented in the following sections.
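As a reference for the modifications introduced below, the standard FGPA of Eq. (7) amounts to a single line of array arithmetic on the redshift-space dark matter mesh; a minimal sketch (with the values of \(A\) and \(\alpha\) quoted above) reads:

```python
import numpy as np

def standard_fgpa_flux(delta_dm, A=0.27, alpha=1.6):
    """Standard FGPA, Eq. (7): tau = A (1 + delta)^alpha, F = exp(-tau).
    `delta_dm` is the dark matter density contrast on the (redshift-space) mesh."""
    tau = A * np.clip(1.0 + delta_dm, 0.0, None) ** alpha
    return np.exp(-tau)
```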
### RSD modelling
While the mapping from real to redshift space presented in Eq. (8) holds exactly for dark matter particles, it does not account for any velocity bias contribution in the Ly\(\alpha\) forest. Therefore, following Sinigaglia et al. (2022), we generalize such a model as follows.
We start by considering the dark matter field in real space. For each cell \(i\), we assign a set of \(N\) fictitious particles, with Eulerian position \(\mathbf{r}_{i}\) coinciding with the centre of the cell. Applying an inverse nearest grid point (NGP) scheme, each particle \(j\) inside cell \(i\) is assigned a mass \(M_{i}=\rho_{i}V_{i}/N\), where \(\rho_{i}\) and \(V_{i}\) are the dark matter density and volume of the cell, respectively. The \(j\)th particle is then displaced from real to redshift space using a modified version of Eq. (8) accounting for velocity bias:
\[\mathbf{s}_{j}=\mathbf{r}_{j}+b_{v}\,\frac{(\mathbf{v}_{\text{dm},j}\cdot\hat{\mathbf{r}}_{j} )\,\hat{\mathbf{r}}_{j}}{aH}\,, \tag{9}\]
where \(\mathbf{s}_{j}\) and \(\mathbf{r}_{j}\) are redshift space and real space Eulerian comoving coordinates of the \(j\)th particle, \(\mathbf{v}_{\text{dm},j}\) is the modelled velocity of the particle, and \(b_{v}\) is a velocity bias factor. The particle velocity \(\mathbf{v}_{\text{dm},j}\) is modelled as the sum of two components \(\mathbf{v}_{\text{dm},j}=\mathbf{v}^{\text{coh}}_{\text{dm},j}+\mathbf{v}^{\text{disp}}_{\text{dm},j}\). Here, \(\mathbf{v}^{\text{coh}}_{\text{dm},j}=\mathbf{v}^{\text{lin}}_{\text{dm},j}\) corresponds to the velocity field at cell \(i\), interpolated on the mesh with the same mass assignment scheme as the density field, and models the large-scale coherent flows. We draw the attention of the reader to the fact that the velocity bias \(b_{v}\) directly multiplies the coherent flows component of the velocity field. On the other hand, \(\mathbf{v}^{\text{disp}}_{\text{dm},j}\) is built as a velocity dispersion term, randomly sampled from a Gaussian with zero mean and dispersion related to the value of the density field inside cell \(i\) in real space through a power law: \(\mathbf{v}^{\text{disp}}_{\text{dm},j}\sim\mathcal{N}\left(\mu=0,\sigma=B\left(1+\delta\right)^{\beta}\right)\), with \(B\) and \(\beta\) free parameters (Hess et al., 2013; Kitaura et al., 2014). This latter component models the quasi-virialized velocity dispersion motions, which are smoothed out by the interpolation of the velocity field on the mesh.
To determine the optimal values for the RSDs and FGPA normalization parameters, we jointly maximize the cross-correlation coefficient between the FGPA prediction and the Ly\(\alpha\) forest flux field from the reference simulation and match the normalization of the Ly\(\alpha\) forest 3D power spectrum at large scales. After fixing \(\alpha=1.6\) as fiducial value, we find the choice \(A=0.7\), \(b_{v}=-0.8\), \(B=1.5\), \(\beta=2.0\) to fulfill the aforementioned constraints.
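A minimal sketch of this cell-to-particle displacement is given below. It takes the line of sight along the \(z\) axis (plane-parallel limit), applies the velocity bias to the coherent (mesh) velocity component as described above, and leaves the re-interpolation of the displaced particles onto the redshift-space mesh to a CIC-like deposit routine. The per-environment parameter arrays `b_v`, `B`, `beta`, indexed by the T-web class label of the sketch in SS3 (0 = void ... 3 = knot), are an assumed convention; the fitted values are reported later in Table 2 (with \(\beta\) set to 0.5 also where \(B=0\), where it has no effect).

```python
import numpy as np

def cells_to_redshift_space(delta, vel_los, tweb_real, box_size,
                            a, H, b_v, B, beta, n_part=8, rng=None):
    """Spawn n_part fictitious particles per cell and displace them along the
    LOS (z axis) following the cosmic-web dependent version of Eq. (9).

    delta, vel_los, tweb_real : (N,N,N) density contrast, LOS velocity and
                                real-space T-web class (0..3) per cell
    b_v, B, beta              : length-4 arrays indexed by T-web class
    Returns (positions, masses) ready for CIC interpolation onto the mesh.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = delta.shape[0]
    cell = box_size / n
    ax = (np.arange(n) + 0.5) * cell
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    cw = tweb_real
    sigma = B[cw] * np.clip(1.0 + delta, 0.0, None) ** beta[cw]  # quasi-virialized dispersion
    mass = ((1.0 + delta) / n_part).ravel()                      # equal split of the cell mass
    pos, m = [], []
    for _ in range(n_part):
        v_disp = rng.normal(size=delta.shape) * sigma
        z_s = (Z + (b_v[cw] * vel_los + v_disp) / (a * H)) % box_size
        pos.append(np.column_stack([X.ravel(), Y.ravel(), z_s.ravel()]))
        m.append(mass)
    return np.concatenate(pos), np.concatenate(m)
```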
### Non-local bias contribution
Looking at Eq. (7), one easily realizes that the standard FGPA prescription provides a way to compute the Ly\(\alpha\) forest optical depth relying exclusively on the local dark matter density, and neglecting any non-local contribution. While the simplicity of the FGPA model is appealing as it allows to compute the Ly\(\alpha\) forest with just a few parameters, it attempts to model distinct cosmic web environments with the same physical model, therefore failing to capture the intrinsic diversity of physical conditions. As a major contribution of this paper, we propose to make the FGPA cosmic web dependent, to relax the exponent \(\alpha\) and let it vary, and to determine the sets of coefficients that best describe each cosmic web type. We point out that we are considering at this stage the dark matter field in redshift space as modelled in Eq. (9), and we therefore extract the T-web classification directly in redshift space.
Analogously, we also make the RSD model in Eq. (9) dependent on the cosmic web environment, thereby passing from the 3 free parameters \(\{b_{v},B,\beta\}\), to 12 parameters (3 parameters \(\times\) 4 cosmic web environments). However, in order to keep the number of free parameters as small as possible, we notice that the virial theorem allows us to write \(v\propto\sqrt{\rho}\), therefore we can fix \(\beta=0.5\). Moreover, peculiar velocities are closer to the linear or quasi-linear regime in sheets and voids, hence one can expect the velocity dispersion component to be negligible there, allowing us to set \(B_{\text{sh}}=B_{\text{vd}}=0\)7. These two simplifications allow us to reduce the number of free parameters from 12 to 6. We notice that, in contrast to the cosmic web classification discussed above, here the T-web is extracted in real space, as the T-web itself is needed to perform the mapping from real to redshift space.
Footnote 7: We hereafter adopt the following subscript notation: kn=knots, fl=filaments, sh=sheets, vd=voids.
In summary, to introduce the non-local dependence on the cosmic web in the FGPA we proceed as follows (a schematic sketch of the pipeline is given after this list):
1. extract the T-web from the dark matter in real space;
2. determine the distinct parameters \(\{b_{v},B,\beta\}\) depending on the real-space T-web;
3. map particles from real to redshift space adopting the cosmic web dependent version of Eq. (9);
4. extract the T-web from the dark matter in redshift space;
5. use the redshift-space T-web to model non-localities in the FGPA by making the free parameters \(\{A,\alpha\}\) dependent on the cosmic web category each cell belongs to.
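The following schematic outline strings the five steps together. `tweb_classify` and `cells_to_redshift_space` are sketched in SS3 and SS4.2, while `deposit_to_mesh` (a CIC-like mass assignment returning the redshift-space density contrast) and `nonlocal_fgpa` (sketched at the end of SS4.4) are assumed helper routines; names and call signatures are illustrative only.

```python
def predict_lya_flux(delta_real, vel_los, box_size, a, H, rsd_pars, fgpa_pars):
    """Schematic end-to-end pipeline for the non-local FGPA (steps 1-5 above)."""
    cw_real = tweb_classify(delta_real, box_size)                    # step 1
    pos, mass = cells_to_redshift_space(delta_real, vel_los, cw_real,
                                        box_size, a, H, **rsd_pars)  # steps 2-3
    delta_z = deposit_to_mesh(pos, mass, box_size, delta_real.shape[0])
    cw_z = tweb_classify(delta_z, box_size)                          # step 4
    return nonlocal_fgpa(delta_z, cw_z, **fgpa_pars)                 # step 5
```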
### Threshold bias
It is well known from cosmological hydrodynamic simulations that the Ly\(\alpha\) forest transmitted flux one-point PDF is starkly bimodal, featuring two sharp peaks around \(F=0\) and \(F=1\), and displaying a much lower occurrence of cells with intermediate values \(0.1\lesssim F\lesssim 0.9\) (e.g., Nagamine et al., 2021, and references therein). These characteristics can be mainly attributed to the exponential mapping \(F=\exp(-\tau)\), which regularizes the domain of \(\tau\), mapping it into the interval \([0,1]\), truncating low values of \(\tau\) to \(F\sim 1\), as well as rapidly saturating high values of \(\tau\) to \(F\sim 0\). The bimodal nature of the Ly\(\alpha\) forest PDF, together with the high non-linearity induced by the exponential mapping, makes it particularly hard to capture such behavior and accurately predict
fluxes, which can suffer rapid variations within the domain, even in neighbouring cells.
To improve on the FGPA model regarding this aspect, we adopt the perspective of introducing a threshold bias (Kaiser, 1984; Bardeen et al., 1986; Sheth et al., 2001; Mo & White, 2002; Kitaura et al., 2014; Neyrinck et al., 2014; Vakili et al., 2017), which in state-of-the-art models of halo bias is used to suppress halo formation in underdense regions.
In analogy with the formalism of the patchy method (Kitaura et al., 2014, 2015, 2016a, 2016b; Vakili et al., 2017), and expanding upon it, we update Eq. (7) by introducing two multiplicative exponential terms
\[\tau=A(1+\delta)^{\alpha}\exp\left(-\frac{\delta}{\delta_{1}^{*}}\right)\exp \left(\frac{\delta}{\delta_{2}^{*}}\right)\,, \tag{10}\]
where the term \(\exp\left(-\delta/\delta_{1}^{*}\right)\) represents a threshold bias term acting as a cut-off, while \(\exp\left(\delta/\delta_{2}^{*}\right)\) is an inverse threshold bias acting as a boost. Both terms have a negligible effect when \(\delta\ll\delta^{*}\), whereas they become important when \(\delta\gtrsim\delta^{*}\). Here we do not use the step function used in Vakili et al. (2017), and \(\delta_{1}^{*}\) and \(\delta_{2}^{*}\) are left free.
On top of this modification, following SS4.3, we make the 4 free parameters of the model depend on the T-web classification, i.e. \(\{A,\alpha,\delta_{1}^{*},\delta_{2}^{*}\}_{\rm i}\), with \(i=\{\)knots, filaments, sheets, voids\(\}\). In this way, we model the FGPA differently depending on distinct cosmic web environments, thereby effectively modelling non-local bias.
So far, the model has \(22=6+16\) free parameters (6 for RSD as described in SS4.2, \(16=4\times 4\) for the augmented non-local FGPA prescription). We describe the procedure we adopt to determine such parameters and our findings in SS5 and Table 2, respectively.
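A minimal sketch of the resulting cell-wise prediction is given below. The per-environment parameters are stored as length-4 arrays indexed by the T-web class label used in the sketch of SS3 (0 = void, 1 = sheet, 2 = filament, 3 = knot); this ordering, like the helper itself, is an illustrative assumption, and the fitted values are those reported in Table 2.

```python
import numpy as np

def nonlocal_fgpa(delta_z, tweb_z, A, alpha, delta1, delta2):
    """Non-local FGPA with threshold and inverse-threshold bias, Eq. (10),
    with parameters selected cell by cell from the redshift-space T-web class.

    delta_z, tweb_z          : (N,N,N) density contrast and T-web class (0..3)
    A, alpha, delta1, delta2 : length-4 arrays, one entry per cosmic web type
    """
    A_c, alpha_c = A[tweb_z], alpha[tweb_z]
    d1, d2 = delta1[tweb_z], delta2[tweb_z]
    tau = (A_c * np.clip(1.0 + delta_z, 0.0, None) ** alpha_c
           * np.exp(-delta_z / d1) * np.exp(delta_z / d2))
    return np.exp(-tau)
```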
### Stochasticity
As final ingredient, we add a stochastic component \(\epsilon\) to the model expressed in Eq. (10) and the subsequent non-local modification:
\[\tau=A_{\rm{i}}(1+\delta)^{\alpha_{i}}\exp\left(-\frac{\delta}{\delta_{1,i}^{ *}}\right)\exp\left(\frac{\delta}{\delta_{2,i}^{*}}\right)+\epsilon, \tag{11}\]
where \(\epsilon_{i}^{j}=f_{e,i}\,N^{j}\) is the noise term in the \(j\)th cell for \(i=\{\)knots, filaments, sheets, voids\(\}\), and \(f_{e,i}\) is a normalization constant. While \(f_{e}\) is a parameter controlling the amplitude of the stochastic term, \(N\) is sampled from a negative binomial distribution (see Kitaura et al., 2014; Vakili et al., 2017), parametrized as8
Footnote 8: While other parametrizations are possible, we stick to the one used in this work and reported in the numpy (Harris et al., 2020) documentation, for the sake of the reproducibility of results.
\[P(N\,|\,n,p)=\frac{\Gamma(N+n)}{N!\,\Gamma(n)}\,p^{n}(1-p)^{N}, \tag{12}\]
where \(N\) is the sampled term and corresponds to the number of failures in the standard negative binomial distribution definition, \(n\) stands for the number of successes9, \(N+n\) is the total number of trials, and \(p\) is the success probability. The negative binomial distribution can be interpreted as an overdispersed alternative to the Poisson distribution, where one can allow the mean and variance to be different. This aspect can turn out to be useful in cases when two events have a positively correlated occurrence (i.e., the two events have a positive covariance term and are not independent), and this causes a larger variance than in the case in which the events are independent. In the presented parametrization, \(n\) controls the deviation from Poissonity, making the distribution converge to the Poisson distribution for large \(n\) and causing an overdispersion for small \(n\). While a thorough investigation of the deviation from Poissonity for the Ly\(\alpha\) forest goes beyond the scope of this paper, it is sensible and convenient to allow the distribution from which we sample the stochastic term to feature a non-Poissonian variance including a correlation term (see e.g. Peebles, 1980, for a detailed treatment applied to galaxies).
Footnote 9: \(n\) is in principle an integer number, but its definition can be generalized to reals, as done in this work.
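Since the fitted values of \(n\) are non-integer (see Table 2), a convenient way to draw from Eq. (12) is its Gamma-Poisson mixture representation; the following minimal sketch (the function name is an assumption, and the restriction to void cells follows the choice described at the end of this section) illustrates this.

```python
import numpy as np

def sample_stochastic_term(shape, f_e, n, p, rng=None):
    """Additive noise term of Eq. (11), drawn from the negative binomial
    distribution of Eq. (12) via its Gamma-Poisson mixture, which handles
    non-integer n exactly:
        lambda ~ Gamma(shape=n, scale=(1-p)/p),   N | lambda ~ Poisson(lambda).
    The returned array is meant to be added to tau in void cells only."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.gamma(shape=n, scale=(1.0 - p) / p, size=shape)
    return f_e * rng.poisson(lam)
```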
In particular, as will be commented more in detail in SS5, the latest non-local modification of Eq. (10) without stochasticity yields predictions that lack power towards small scales, and
\begin{table}
\begin{tabular}{l c l} \hline
**Parameter** & **Value** & **Description** \\ \hline
\multicolumn{3}{c}{RSD} \\ \hline
\(b_{\rm v,kn}\) & \(-0.9\) & Coh. flows velocity bias in knots \\
\(b_{\rm v,fl}\) & \(-0.8\) & Coh. flows velocity bias in filaments \\
\(b_{\rm v,sh}\) & \(-1.0\) & Coh. flows velocity bias in sheets \\
\(b_{\rm v,vd}\) & \(-0.5\) & Coh. flows velocity bias in voids \\
\(B_{\rm kn}\) & \(3.0\) & Q.v.m. velocity bias in knots \\
\(B_{\rm fl}\) & \(2.0\) & Q.v.m. velocity bias in filaments \\
\(B_{\rm sh}\) & \(0.0\) & Q.v.m. velocity bias in sheets \\
\(B_{\rm vd}\) & \(0.0\) & Q.v.m. velocity bias in voids \\
\(\beta_{\rm kn}\) & \(0.5\) & Q.v.m. exponent in knots \\
\(\beta_{\rm fl}\) & \(0.5\) & Q.v.m. exponent in filaments \\
\(\beta_{\rm sh}\) & — & Q.v.m. exponent in sheets \\
\(\beta_{\rm vd}\) & — & Q.v.m. exponent in voids \\ \hline
\multicolumn{3}{c}{Mean bias} \\ \hline
\(A_{\rm kn}\) & \(0.23^{+0.34}_{-0.19}\) & Normalization in knots \\
\(A_{\rm fl}\) & \(0.11^{+0.57}_{-0.14}\) & Normalization in filaments \\
\(A_{\rm sh}\) & \(0.20^{+0.19}_{-0.16}\) & Normalization in sheets \\
\(A_{\rm vd}\) & \(0.15^{+0.36}_{-0.12}\) & Normalization in voids \\
\(\alpha_{\rm kn}\) & \(4.03^{+0.66}_{-0.60}\) & Power-law exponent in knots \\
\(\alpha_{\rm fl}\) & \(3.97^{+0.60}_{-0.60}\) & Power-law exponent in filaments \\
\(\alpha_{\rm sh}\) & \(2.21^{+0.62}_{-0.71}\) & Power-law exponent in sheets \\
\(\alpha_{\rm vd}\) & \(2.03^{+0.72}_{-0.72}\) & Power-law exponent in voids \\
\(\delta^{*}_{1,\rm kn}\) & \(1.36^{+0.53}_{-0.48}\) & Exponential cut-off scale in knots \\
\(\delta^{*}_{1,\rm fl}\) & \(0.93^{+0.51}_{-0.50}\) & Exponential cut-off scale in filaments \\
\(\delta^{*}_{1,\rm sh}\) & \(0.85^{+0.42}_{-0.45}\) & Exponential cut-off scale in sheets \\
\(\delta^{*}_{1,\rm vd}\) & \(1.01^{+0.39}_{-0.68}\) & Exponential cut-off scale in voids \\
\(\delta^{*}_{2,\rm kn}\) & \(0.41^{+0.34}_{-0.37}\) & Exponential boost scale in knots \\
\(\delta^{*}_{2,\rm fl}\) & \(0.17^{+0.51}_{-0.11}\) & Exponential boost scale in filaments \\
\(\delta^{*}_{2,\rm sh}\) & \(0.60^{+0.42}_{-0.39}\) & Exponential boost scale in sheets \\
\(\delta^{*}_{2,\rm vd}\) & \(0.25^{+0.47}_{-0.18}\) & Exponential boost scale in voids \\ \hline
\multicolumn{3}{c}{Stochasticity} \\ \hline
\(f_{e}\) & \(1.38^{+0.38}_{-0.38}\) & Normalization of stochastic term \\
\(n\) & \(0.11^{+0.58}_{-0.38}\) & Number of successes \\
\(p\) & \(0.58^{+0.34}_{-0.28}\) & Success probability (\(0<p<1\)) \\ \hline
\end{tabular}
\end{table}
Table 2: Best-fit model parameters for our preferred non-local FGPA prescription with velocity bias and stochasticity. The left column reports the symbol used throughout the paper for each parameter, the central column reports the value and the associated uncertainties, and the right column provides a brief description of what the parameter stands for.
display suppression of small-scale structures, especially in the underdense environments. To remedy this, we decide to model the stochastic term only in voids, leaving the improved non-local FGPA model deterministic in knots, filaments, and sheets, to limit the loss of cross-correlation. This implies \(f_{e}=0\) in knots, filaments, and sheets. As will be shown later in the paper, we find this choice to be sufficient to accomplish the desired goal.
## 5 Analysis, results and discussion
In this section, we describe the procedure we adopt to determine the optimal parameters for our model, and present our findings.
### Fit of model parameters
Given the model developed in SS4.2 and SS4.3, we fit the coefficients of our model as follows.
We first determine the RSD model parameters by finding the values which maximize the cross-correlation between the predicted and reference Ly\(\alpha\) forest field (see the upper part of Table 2). In particular, we start with an initial guess based on the prior knowledge we have, i.e. we initialize the parameters \(b_{v,i}=-0.8\), and \(B_{i}=1\ \forall i\), \(i=\{\text{knots, filaments, sheets, voids}\}\). As anticipated in SS4.2, we fix \(\beta_{i}=0.5\) for all the cosmic web types, and set \(B_{\text{sh}}=B_{\text{vd}}=0\). We then vary the parameters around the initial guess, one cosmic web at a time, apply the standard FGPA prediction, and monitor the variation of the cross correlation between our prediction and the reference Ly\(\alpha\) forest field from the simulation. We notice that at this point we are still applying the standard FGPA, with the same parameters for all cells, because we have not yet fit the parameters for the non-local FGPA model. In order to be able to predict flux with the non-local FGPA at this point, one should perform an end-to-end automatic extraction of the parameters, going from the real-space dark matter field to the redshift-space non-local FGPA Ly\(\alpha\) forest flux. However, this would imply a computationally expensive optimization procedure, involving interpolation of particles to the grid at each step. Therefore, we limit ourselves to a simpler manual determination which, while in principle suboptimal, turned out to be very instructive and allowed us to develop a good understanding of how different choices of the model parameters impact the cross correlation at different scales.
Afterward, we determine the parameters for the modified non-local FGPA model. Here, we resort to automatic parameter estimation. In particular, we determine the optimal values for our parameters by sampling their posterior distribution through the affine-invariant **emcee** Markov Chain Monte Carlo (MCMC) sampler (Goodman & Weare 2010; Foreman-Mackey et al. 2013).
We proceed as follows. We separately fit the Ly\(\alpha\) forest flux PDFs \(\rho(F)\) of knots, filaments, sheets, and voids, i.e. performing a separate fit for each cosmic web type. In this way, each Markov Chain samples from the posterior distribution \(p(\theta_{i}\,|\,\text{ref})\propto p(\text{ref}\,|\,\theta_{i})\,p(\theta_{i})\) of the parameters \(\theta_{i}=\{A_{i},\alpha_{i},\delta_{1,i}^{*},\delta_{2,i}^{*}\}\), for \(i=\{\)knots, filaments, sheets, voids\(\}\), where "ref" denotes the reference simulation Ly\(\alpha\) forest PDF \(\rho_{\text{ref}}(F)\).
We assume a Gaussian likelihood for \(\rho_{\text{ref}}(F)\):
\[P(\rho(F)\,|\,\theta)=\prod_{F}\frac{1}{\sqrt{2\pi}\,\sigma_{F}}\,\exp\left[-\frac{(\rho_{\text{ref}}(F)-\rho_{\text{mock}}(F))^{2}}{2\sigma_{F}^{2}}\right]\,, \tag{13}\]
where \(\rho_{\text{ref}}(F)\) and \(\rho_{\text{mock}}(F)\) are the flux PDFs of the reference and predicted Ly\(\alpha\) forest fields, and \(\sigma_{F}\) corresponds to the number of cells containing a flux value inside the same bin of the PDF as \(F\). Furthermore, we choose the following flat priors for the model parameters: \(0<A_{i}<2,\,1<\alpha_{i}<5,\,0<\delta_{1,i}^{*}<1.5,\,0<\delta_{2,i}^{*}<1.5,\,\forall i\). After running the chain with 32 walkers for 2000 iterations, we compute the autocorrelation length \(\tau_{\rm cl}\) using the in-built emcee function, and find that in all cases \(\tau_{\rm cl}\sim 100\) iterations. Therefore, we conservatively discard the first 500 iterations, i.e. \(\sim 5\) times the chain autocorrelation length.
Eventually, we compute the median of the resulting posterior distribution, and take the \(16^{\text{th}}\) and \(84^{\text{th}}\) percentiles as the associated uncertainties.
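For illustration, a minimal emcee sketch of this per-environment fit is given below; the initial guess, the binning of the flux PDF, and the helper name are illustrative assumptions, while the priors, the number of walkers and iterations, and the burn-in follow the values quoted above.

```python
import numpy as np
import emcee

def fit_fgpa_parameters(delta_cw, flux_ref_cw, pdf_bins, nwalkers=32, nsteps=2000):
    """Fit {A, alpha, delta1*, delta2*} for the cells of one cosmic web type
    by matching the flux PDF with the Gaussian likelihood of Eq. (13)."""
    rho_ref, _ = np.histogram(flux_ref_cw, bins=pdf_bins)
    sigma = np.maximum(rho_ref, 1.0)          # per-bin cell counts, as in the text

    def log_prob(theta):
        A, alpha, d1, d2 = theta
        if not (0 < A < 2 and 1 < alpha < 5 and 0 < d1 < 1.5 and 0 < d2 < 1.5):
            return -np.inf                    # flat priors
        tau = (A * np.clip(1 + delta_cw, 0, None) ** alpha
               * np.exp(-delta_cw / d1 + delta_cw / d2))
        rho_mock, _ = np.histogram(np.exp(-tau), bins=pdf_bins)
        return -0.5 * np.sum((rho_ref - rho_mock) ** 2 / sigma ** 2)

    p0 = np.array([0.2, 2.0, 0.8, 0.4]) + 1e-3 * np.random.randn(nwalkers, 4)
    sampler = emcee.EnsembleSampler(nwalkers, 4, log_prob)
    sampler.run_mcmc(p0, nsteps)
    chain = sampler.get_chain(discard=500, flat=True)   # ~5 autocorrelation lengths
    return np.percentile(chain, [16, 50, 84], axis=0)   # median and 16th/84th percentiles
```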
Up to this point, the model does not include the stochastic noise term, and the predicted \(P(k)\) suffers from a lack of power towards small scales with respect to the reference \(P(k)\) (see SS5.4 for details). Therefore, the model parameters determined thus far correspond to the non-local FGPA without stochasticity. As described in SS4.5, we include a further additive noise term, exclusively in voids, randomly sampled from a negative binomial distribution. To determine the best parameters for the negative binomial distribution \(\theta^{\prime}=\{f_{e},n,p\}\), we first manually vary the parameters until we reach a combination of \(\theta^{\prime}\) which sufficiently enhances the small-scale power and provides a good fit to the power spectrum (see SS5.4 for details). Eventually, we refit the parameters for voids including the random component, using as initial guess the optimal parameter values previously found. Furthermore, we adopt the following upper and lower bounds as priors on \(\theta^{\prime}\): \(1<f_{e}<2,\,0<n<1,\,0<p<1\).
We report the final estimated parameter values and their uncertainty in Table 2.
As a final note, we stress that only the parameters of the negative binomial distribution governing the random sampling of the stochastic term were determined by considering deviations between the reference and predicted power spectra. In fact, in the fit of the parameters of the non-local FGPA prescription, only the PDFs were taken into account. In this sense, the fact that this bias model reproduces the power spectrum and the bispectrum with such good accuracy constitutes a reassuring sanity check.
### Visual inspection of slices through the simulation box
Figure 1 shows a comparison of slices through the simulation box, obtained by averaging two contiguous slices \(1.95\,h^{-1}\,\text{Mpc}\) thick parallel to the \((x,z)\) plane, displaying redshift space distortions along the \(z\) axis. A visual comparison of the prediction from the standard FGPA (bottom right) with the reference cosmological simulation (top left) immediately reveals that the former fails to adequately render the redshift-space cosmic structures, displaying less pronounced redshift space distortions. When introducing a velocity bias correction to the standard FGPA (bottom left), redshift space distortions appear to model more appropriately the observed elongation of structures in the cosmic web; however, it turns out not to capture the saturation at \(F\sim 1\) in the underdense regions, also reflected in the probability distribution function (see also Figure 2). We again stress that the free parameters for the FGPA model have been chosen in this case in order to match the large-scale amplitude of the power spectrum. It would also be possible to find the parameters which best fit the PDF, but this yields a severe bias (\(\gtrsim 30\%\)) in the large-scale predicted power spectrum amplitude. Eventually, our non-local FGPA model (top right) visually resembles the reference simulation to a good degree of approximation. In particular, it succeeds both in reproducing the redshift-space structure of the cosmic web, and in properly predicting the right values of flux in all the different density regimes. Without the stochasticity, underdense
regions in our predicted flux would appear emptier than they actually are. This aspect is reflected in a lack of power towards small scales, as will be commented more in detail in SS5.4. The addition of the noise term \(\epsilon\), only in voids, is meant to compensate for the lack of substructures. Therefore, we do not expect those small-scale structures to visually match the reference simulation in position. Rather, in a global sense, with the addition of the noise term voids appear to be more filled with substructures than in the case where we do not model stochasticity.
### Probability distribution function
Figure 2 shows a comparison between the Ly\(\alpha\) forest PDF from the reference simulation (red solid), and the prediction from standard FGPA (dashed yellow), from FGPA with velocity bias (purple dotted), and from our non-local FGPA with velocity bias and without and with the addition of stochastic fluctuations in voids (green dashed and blue dashed-dotted, respectively).
The sharp bimodality of the Ly\(\alpha\) forest PDF makes it particularly non-trivial to correctly predict both flux regimes, yielding
Figure 1: Slices through the simulation box, obtained by averaging two contiguous slices \(1.95\,h^{-1}\,\mathrm{Mpc}\) thick parallel to the \((x,z)\) plane, displaying redshift space distortions along the \(z\) axis. The plot displays a Ly\(\alpha\) forest transmitted flux \(F/F_{c}\) slice through the reference cosmological hydrodynamic simulation (top left), and through the predicted Ly\(\alpha\) forest boxes obtained through standard FGPA (bottom right), FGPA with cosmic-web dependent velocity bias (bottom left), and our final non-local cosmic-web dependent FGPA with stochasticity (top right). The maps are color-coded from red to blue for values ranging in the interval \([0,1]\), where red indicates underdense and blue overdense regions. All the slices share the same color scale and the same extrema.
the correct shape and average flux. While the standard FGPA model fails to reproduce the height of both peaks, the FGPA with velocity bias prescription achieves a better prediction towards \(F\sim 0\), but a worse result at \(F\sim 1\), as already commented in SS5.2. The non-local FGPA, however, succeeds in reproducing both peaks, and qualitatively achieves the best result among the studied cases also in the intermediate flux regime \(0.1\lesssim F\lesssim 0.9\).
We stress that our final model, including stochasticity, with model parameters fitted to reproduce the PDFs of knots, filaments, sheets, and voids separately, yields a predicted average flux \(\bar{F}_{\rm pred}\sim 0.75\), in very good agreement with the value from the simulation \(\bar{F}_{\rm ref}\sim 0.76\), even though the model has not been explicitly calibrated to reproduce such a quantity.
### Power spectrum and cross-correlation
In Figure 3 we present a comparison between Ly\(\alpha\) forest 3D power spectra \(P(k)\) in the top panel, power spectrum ratios \(P_{\rm mock}(k)/P_{\rm ref}(k)\) in the mid panel, and cross-correlation coefficients \(C(k)\) in the bottom panel. We display the reference simulation summary statistics as a red solid line, while we plot the predicted Ly\(\alpha\) forest flux summary statistics as a yellow dashed line in the case of standard FGPA, as a purple dotted line in the case of FGPA with velocity bias, as a green dashed line in the case of non-local FGPA with velocity bias, and as a blue dashed-dotted line the non-local FGPA with velocity bias and stochasticity. The top and bottom panels clearly highlight that the standard FGPA (yellow dashed) and its augmented version including only velocity bias (purple dotted) rapidly lose power towards small scales, reaching ratios \(P_{\rm mock}(k)/P_{\rm ref}(k)<20\%\) and \(P_{\rm mock}(k)/P_{\rm ref}(k)<50\%\) at the Nyquist frequency \(k_{\rm nyq}\sim 1.6\,h\,{\rm Mpc}^{-1}\). As already anticipated, the \(P(k)\) predicted by the non-local FGPA with velocity bias (green dashed) suffers from a \(\sim 20\%\) lack of power towards small scales with respect to the reference \(P(k)\). Eventually, the \(P(k)\) yielded by the non-local FGPA model with velocity bias and stochasticity ensures a good fit of the target power spectrum, featuring maximum deviation \(\sim 3\%\) and average deviations \(\sim 0.1\%\) up to \(k\sim 0.4\,h\,{\rm Mpc}^{-1}\), maximum deviation \(\sim 5\%\)
Figure 3: Top: comparison of 3D Ly\(\alpha\) forest power spectrum \(P(k)\). Mid: comparison of 3D Ly\(\alpha\) forest power spectrum ratios \(P_{\rm mock}(k)/P_{\rm ref}(k)\) between the predicted and reference power spectra. Bottom: comparison of cross-correlation coefficients \(C(k)\) between the reference Ly\(\alpha\) forest and the predictions from the different tested models. In each panel, the investigated reference simulation summary statistics are shown as a red solid line, while the prediction by the standard FGPA is displayed as a yellow dashed line, the FGPA with velocity bias as a purple dotted line, the non-local FGPA with velocity bias as a green dashed line, and the non-local FGPA with velocity bias and stochasticity as a blue dashed-dotted line. In the mid panel, the gray shaded areas stand for \(1\%,2\%,5\%,\) and \(10\%\) deviations, from darker to lighter. In the bottom panel, the gray shaded areas stand for \(5\%\) and \(10\%\) deviations, from darker to lighter.
Figure 2: Comparison between probability distribution functions of transmitted Ly\(\alpha\) forest flux \(F/F_{c}\) as extracted from the reference simulation (red solid), and as predicted by the standard FGPA (yellow dashed), by the FGPA with velocity bias (purple dotted), by the non-local FGPA with velocity bias (green dashed), and by the non-local FGPA with velocity bias and stochasticity (blue dashed-dotted).
and average deviations \(\sim 1.8\%\) up to \(k\sim 1.4\,h\,\mathrm{Mpc}^{-1}\), and maximum deviation \(\sim 8\%\) and average deviations \(\sim 0.8\%\) up to the Nyquist frequency \(k\sim 1.6\,h\,\mathrm{Mpc}^{-1}\).
An inspection of the bottom panel of Figure 3, instead, reveals that the standard FGPA (yellow dashed) delivers a lower \(C(k)\) with respect to all the other cases which incorporate the modelling of velocity bias. In fact, looking at Figure 1, it can be appreciated by eye how the standard FGPA (bottom right) does not model properly the enhancement of cosmic structure elongation along the line of sight clearly seen in the reference simulation (top left). In this sense, an adequate treatment of velocity bias ensures a better cross-correlation at all scales. The non-local FGPA with velocity bias (green dashed) displays a larger \(C(k)\) than the local FGPA with velocity bias (purple dotted) at all scales, from a \(1\%\) gain in \(C(k)\) on large scales, up to a \(\sim 3\%\) gain in \(C(k)\) at \(k\sim 1.0\,h\,\mathrm{Mpc}^{-1}\). Eventually, the non-local FGPA with velocity bias and stochasticity (blue dashed-dotted) features a slightly lower \(C(k)\) than the non-local FGPA with velocity bias only (green dashed) up to \(k\sim 0.5\,h\,\mathrm{Mpc}^{-1}\), while it starts to depart from the latter at larger \(k\), losing \(\sim 6\%\) in \(C(k)\) at \(k\sim 1.0\,h\,\mathrm{Mpc}^{-1}\). This is expected by construction, due to the addition of a random component, which has the effect of lowering the cross correlation. However, the loss of \(C(k)\) is not dramatic, and the non-local FGPA model with velocity bias and stochasticity (blue dashed-dotted) still improves on the \(C(k)\) of the local FGPA with velocity bias (purple dotted) up to \(k\sim 0.8\,h\,\mathrm{Mpc}^{-1}\).
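For reference, the \(P(k)\) ratios and the cross-correlation coefficient \(C(k)=P_{\rm mock\times ref}(k)/\sqrt{P_{\rm mock}(k)\,P_{\rm ref}(k)}\) discussed here can be obtained with a minimal FFT-based estimator such as the sketch below; it uses a simple spherical binning and is only a schematic stand-in for the estimator actually employed in the analysis.

```python
import numpy as np

def power_and_cross(field_a, field_b, box_size, n_bins=30):
    """Spherically averaged auto power spectra and cross-correlation
    coefficient C(k) of two periodic fields defined on the same mesh."""
    n = field_a.shape[0]
    fa = np.fft.fftn(field_a - field_a.mean())
    fb = np.fft.fftn(field_b - field_b.mean())
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kk = np.sqrt(kx**2 + ky**2 + kz**2)
    norm = box_size**3 / n**6                       # FFT convention -> P(k) units
    p_aa, p_bb = norm * np.abs(fa)**2, norm * np.abs(fb)**2
    p_ab = norm * (fa * np.conj(fb)).real
    bins = np.linspace(0.0, kk.max(), n_bins + 1)
    which = np.clip(np.digitize(kk.ravel(), bins) - 1, 0, n_bins - 1)
    counts = np.maximum(np.bincount(which, minlength=n_bins), 1)

    def binned(p):
        return np.bincount(which, weights=p.ravel(), minlength=n_bins) / counts

    Pa, Pb, Pab = binned(p_aa), binned(p_bb), binned(p_ab)
    k_cen = 0.5 * (bins[1:] + bins[:-1])
    return k_cen, Pa, Pb, Pab / np.sqrt(np.maximum(Pa * Pb, 1e-30))
```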
### Bispectrum
To address the capability of the model to reproduce higher-order statistics, we also assess the accuracy of the predicted bispectrum. We report in Figure 4 the reduced bispectrum \(Q(\theta_{12})\) as a function of the subtended angle \(\theta_{12}\), for four different triangular configurations: (i) \(k_{1}=0.1\), \(k_{2}=0.2\,h\,\mathrm{Mpc}^{-1}\) (top left), (ii) \(k_{1}=0.2\), \(k_{2}=0.2\,h\,\mathrm{Mpc}^{-1}\) (top right), (iii) \(k_{1}=0.3\), \(k_{2}=0.5\,h\,\mathrm{Mpc}^{-1}\) (bottom left), (iv) \(k_{1}=0.5\), \(k_{2}=0.5\,h\,\mathrm{Mpc}^{-1}\) (bottom right). The plot displays the bispectrum of the reference simulation (red solid), compared to the predictions from the standard FGPA (yellow dashed), the FGPA with velocity bias (purple dotted), and the non-local FGPA with velocity bias and stochasticity (blue dashed-dotted). The latter (i.e. our preferred model) reproduces the target bispectrum with reasonable accuracy for all the probed triangular configurations, while the others start to feature significant deviations at \(k\gtrsim 0.2\,h\,\mathrm{Mpc}^{-1}\). In particular, the non-local FGPA model with velocity bias and stochasticity reproduces not only the overall shape of the reference bispectrum, but also some peculiar features, such as the position and amplitude of the peak around \(\theta\sim 2.3\) in the \(k_{1}=k_{2}=0.5\,h\,\mathrm{Mpc}^{-1}\) configuration, as well as the position (although not very much the amplitude) of the peak around \(\theta\sim 2.1\) in the \(k_{1}=0.3\), \(k_{2}=0.5\,h\,\mathrm{Mpc}^{-1}\) configuration.
## 6 Potential future improvements
While the presented model, as anticipated, turns out to be sufficient to fulfill the purpose it was designed for, there are further potential improvements which we leave to be explored in future works.
In this work, we have included non-local dependencies in the bias formulation only through the T-web. Such dependencies are known to have a long-range effect (e.g. McDonald & Roy, 2009), and we are hence neglecting short-range terms, whose effect kicks in when it comes to modelling the non-linear clustering towards small scales. Short-range non-local terms can be constructed as scalars from the curvature tensor \(\delta_{ij}=\partial_{i}\partial_{j}\delta\), such as its Laplacian \(\nabla^{2}\delta\) in Eulerian (e.g. Peacock & Heavens, 1985; McDonald & Roy, 2009; Werner & Porciani, 2020; Kitaura et al., 2022) or in Lagrangian space (e.g. Zennaro et al., 2022), which characterizes the shape of the local maxima of the density field (Peacock & Heavens, 1985). More generally, one can build short-range non-local terms as scalars computed from arbitrarily higher-order derivatives of the density field, such as \(\partial_{i}^{\prime}\partial_{j}^{\prime}\delta\) (e.g. Kitaura et al., 2022). Alternatively, in analogy with the I-web description based on the invariants of the tidal field tensor (see also SS3), one can work in a framework relying on the invariants of the curvature tensor. This latter description has been proven to encode crucial information to model the bias of baryon density fields (e.g. Sinigaglia et al., 2021, 2022). Since we do not model such a family of dependencies here, we stress that we may be missing some relevant piece of physical information. However, we also stress that computing such terms requires an accurate enough modelling of small scales, which is not guaranteed here. Therefore, by introducing short-range non-local terms we may face the risk of introducing a noisy component, to the detriment of the final accuracy. Moreover, in the computationally cheapest way of modelling such dependencies, we would be extracting the analogue of the T-web based on \(\delta_{ij}\), which would imply a non-negligible number of additional free parameters in our model.
In connection to the previous point, we notice that Kitaura et al. (2022) showed that the I-web model encodes a larger amount of information on the clustering of biased tracers with respect to the T-web, at the expense of a larger number of parameters (the number of bins used to describe the invariants) and therefore at a larger risk of overfitting. Moreover, while describing the non-local quantities by explicitly using the invariants of the tidal tensor is possible (e.g. Pellejero-Ibanez et al., 2020), it implies devising a suitable functional form for each additional term used in the bias, which is in principle non-trivial.
Another point which can be improved concerns the way we model the low-density regions, identified as cosmic voids according to the T-web classification. In fact, in order to compensate for the lack of substructures, we randomly sample a noise component from a negative binomial distribution. At this point we notice that this could be avoided, or its impact alleviated, by improving the modelling of the deterministic component. One easy modification would consist of binning the density distribution in voids into an arbitrary number of intervals, and identifying distinct parameters for such distinct density bins. However, this would again introduce a larger number of free parameters into our model.
Finally, if a noise term is to be included as done in this work, we notice that we have randomly sampled the stochastic component from a negative binomial distribution for the reasons previously stated, but this is not the only possible choice. In fact, one may find that using a different distribution is more convenient.
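As an illustration of this last point, the following minimal numpy sketch adds a negative-binomial noise component only to void cells. The mesh, the void mask and the distribution parameters `r` and `p` are invented here purely for illustration; in the model above they would follow from the T-web classification and from the calibration against the reference simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy density-like field on a small mesh, plus a boolean mask flagging void cells
# (both drawn at random here; in practice the mask comes from the T-web classification).
n_cells = 64**3
field = rng.lognormal(mean=0.0, sigma=1.0, size=n_cells)
void_mask = rng.random(n_cells) < 0.3

# Hypothetical negative-binomial parameters (r successes, success probability p);
# calibrated values would be obtained from the fit to the reference simulation.
r, p = 2.0, 0.6
noise = rng.negative_binomial(r, p, size=int(void_mask.sum()))

# Add the stochastic component only in voids, boosting the small-scale power there.
field[void_mask] = field[void_mask] + noise
```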
We leave all these potential improvements to be investigated in future publications, or in applications adopting this model.
## 7 Conclusions
This work presents a significantly augmented version of the widely-used FGPA to predict the Ly\(\alpha\) forest in redshift space, as observed from current and forthcoming spectroscopic galaxy surveys. This new model relies on explicit modelling of velocity bias and redshift space distortions and on a modification of the FGPA introducing a cut-off and a boosting scale, with free parameters (determined based on both heuristic and physical arguments, as well as on an efficient MCMC scheme) made dependent on the cosmic web environment as described by the T-web classification (knots, filaments, sheets, and voids, Hahn et al., 2007). In this sense, we label our model _non-local_ FGPA. In fact, the Ly\(\alpha\) forest flux in each cell is here made dependent on the content of the surrounding cells and the geometry of the gravitational field, and the model effectively incorporates non-local bias information up to third order in the Eulerian perturbative bias expansion (see Kitaura et al., 2022). In addition, since we find that such a model fails to reproduce the small-scale clustering in the underdense regions, we add a further stochastic term only in voids, randomly sampled from a negative binomial distribution. This random component has the effect of enhancing the small-scale power, and hence, of improving the fit of the Ly\(\alpha\) forest 3D power spectrum towards small scales.
We predict Ly\(\alpha\) forest fluxes with standard FGPA, our preferred non-local FGPA model with velocity bias and stochasticity, and two other intermediate models in between, on a mesh with \(V=(500\,h^{-1}\,\mathrm{Mpc})^{3}\) volume and \(N_{\mathrm{c}}=256^{3}\) cells, with physical cell resolution \(l\sim 1.95\,h^{-1}\,\mathrm{Mpc}\), trying to reproduce a full cosmological hydrodynamic N-body/SPH simulation (spanning the same volume) and run with \(N=1024^{3}\) particles. We assess the accuracy of the prediction of such models by comparing the results regarding the mean transmitted flux \(F\), the PDF, the power spectrum, and the bispectrum, with the analogous summary statistics computed from the reference simulation.
The augmented non-local FGPA with velocity bias and stochasticity improves upon the standard FGPA model in all the investigated summary statistics. In fact, the non-local FGPA accurately reproduces the mean transmitted flux \(\bar{F}\), the flux PDF, the power spectrum, and the bispectrum. In particular, our model yields a mean transmitted flux \(\bar{F}_{\mathrm{pred}}\sim 0.75\), in excellent agreement with the value \(\bar{F}_{\mathrm{ref}}\sim 0.76\), and a power spectrum featuring a maximum deviation of \(\sim 3\%\) and average deviations of \(\sim 0.1\%\) up to \(k\sim 0.4\,h\,\mathrm{Mpc}^{-1}\), a maximum deviation of \(\sim 5\%\) and average deviations of \(\sim 1.8\%\) up to \(k\sim 1.4\,h\,\mathrm{Mpc}^{-1}\), and a maximum deviation of \(\sim 8\%\) and average deviations of \(\sim 0.8\%\) up to the Nyquist frequency \(k\sim 1.6\,h\,\mathrm{Mpc}^{-1}\). The predicted PDF and bispectrum clearly outperform the results from any other model tested in this paper as well.
Compared to other schemes based on machine learning (e.g., Harrington et al., 2021; Horowitz et al., 2021; Sinigaglia et al., 2022), and other methods based on iterative calibrations (e.g., Sorini et al., 2016; Peirani et al., 2022), our model offers the appeal of being a purely analytical method, and therefore it is fast to compute, less prone to the overfitting problem, and it can be straightforwardly generalized to larger/smaller volumes, different mesh resolutions, and different redshift snapshots. This method can even ensure a continuous smooth treatment of redshift evolution via interpolation of the fitting coefficients in between the redshift snapshots used for calibration, avoiding discontinuity problems at the edge of distinct contiguous redshift shells.

Figure 4: Reduced bispectrum \(Q(\theta_{12})\) as a function of the subtended angle \(\theta_{12}\), for four different triangular configurations: (i) \(k_{1}=0.1\), \(k_{2}=0.2\,h\,\mathrm{Mpc}^{-1}\) (top left), (ii) \(k_{1}=0.2\), \(k_{2}=0.2\,h\,\mathrm{Mpc}^{-1}\) (top right), (iii) \(k_{1}=0.3\), \(k_{2}=0.5\,h\,\mathrm{Mpc}^{-1}\) (bottom left), (iv) \(k_{1}=0.5\), \(k_{2}=0.5\,h\,\mathrm{Mpc}^{-1}\) (bottom right). The plot displays the bispectrum of the reference simulation (red solid), compared to the predictions from the standard FGPA (yellow dashed), the FGPA with velocity bias (purple dotted), the non-local FGPA with velocity bias (green dashed), and the non-local FGPA with velocity bias and stochasticity (blue dashed-dotted). The latter (i.e. our preferred model) reproduces the target bispectrum with reasonable accuracy for all the probed triangular configurations, while the others start to feature significant deviations at \(k\gtrsim 0.2\,h\,\mathrm{Mpc}^{-1}\).
We point out that the resolution (\(1.95\,h^{-1}\,\)Mpc physical cell size) considered in this work is not sufficient to provide realistic Ly\(\alpha\) forest high-resolution spectra with correct modelling of the 1D power spectrum, needed to perform studies of small-scale (\(k\sim 5-10\,h\,\)Mpc\({}^{-1}\)) physics, such as bounds on the mass of warm dark matter particles (Viel et al., 2005, 2013) and of neutrinos (Palanque-Delabrouille et al., 2015) and constraints on the thermal state of the intergalactic medium (Garzilli et al., 2017, and references therein), among others. While the application of the non-local FGPA to high-resolution density fields is possible, one should bear in mind that resolving sub-Mpc scales on a 3D mesh makes it computationally very expensive to realize large-volume cosmological boxes. Therefore, in future works, we will explore novel techniques based on Bayesian inference aimed at upsampling the resolution of 1D Ly\(\alpha\) forest skewers extracted from low-resolution Ly\(\alpha\) forest flux boxes generated following the procedure presented in this work.
We plan to adopt this novel scheme to generate Ly\(\alpha\) forest mock lightcones for the DESI survey, in combination with accurate approximated gravity solvers correcting for shell-crossing (e.g. Kitaura & Hess, 2013; Tosone et al., 2021; Kitaura et al., 2023). In particular, we aim at using a recent fast structure formation model based on iterative Eulerian Lagrangian Perturbation Theory (eALPT, Kitaura et al., 2023), which has been shown to reproduce the clustering of N-body simulations with percent accuracy deep into the non-linear regime. Furthermore, the application of the non-local FGPA model to approximated dark matter fields generated through the ALPT or eALPT approximations can potentially represent a new avenue in Bayesian density field reconstruction studies using the Ly\(\alpha\) forest. We will investigate this point in future publications.
In conclusion, we argue that this novel non-local FGPA scheme represents a significant step forward with respect to previous analytical efforts, and it can become instrumental in the generation of fast, accurate mocks for covariance matrix estimation in the context of current and forthcoming Ly\(\alpha\) forest surveys.
###### Acknowledgements.
We are grateful to Cheng Zhao for making his bispectrum computation code available. F.S. acknowledges the support of the doctoral grant funded by the University of Padova and by the Italian Ministry of Education, University and Research (MIUR) and the financial support of the _Fondazione Ing. Aldo Gini_ fellowship. F.S. is indebted to the _Instituto de Astrofisica de Canarias (IAC)_ for hospitality and the availability of computing resources during the realization of this project. F.S.K. and A.B.A. acknowledge the IAC facilities and the Spanish Ministry of Economy and Competitiveness (MINECO) under the Severo Ochoa program SEV-2015-0548, PID2020-120612GB-100 and CEA2019-00029-0023, A20217-889819-P. F.S.K. thanks the RYC2015-18693 grant. K.N. is grateful to Volker Springel for providing the original version of GADGET-3, on which the GADGET3-Osaka code is based. Our numerical simulations and analyses were carried out on the XC50 systems at the Center for Computational Astrophysics (CfCA) of the National Astronomical Observatory of Japan (NAOJ), the Yukawa-21 at YITP in Kyoto University, and SQUID at the Cybermedia Center, Osaka University, as part of the HPCI System Research Project (hpl20090). This work is supported in part by JSPS KAKENHI Grant Numbers JP17H0111, 19H05810, 20H00180 (K.N.), 21Z1090, 22K2072 (Y.O.). K.N. acknowledges travel support from the Kavli IPMU, World Premier International Research Center Initiative (WPI), where part of this work was conducted.
|
2305.08267 | Generation of Kochen-Specker contextual sets in higher dimensions by
dimensional upscaling whose complexity does not scale with dimension and
their applications | Recently, handling of contextual sets, in particular Kochen-Specker (KS)
sets, in higher dimensions has been given an increasing attention, both
theoretically and experimentally. However, methods of their generation are
diverse, not generally applicable in every dimension, and of exponential
complexity. Therefore, we design a dimensional upscaling method, whose
complexity does not scale with dimension. As a proof of principle we generate
manageable-sized KS master sets in up to 27 dimensional spaces and show that
well over 32 dimensions can be reached. From these master sets we obtain an
ample number of smaller KS sets. We discuss three kinds of applications that
work with KS sets in higher dimensions. We anticipate other applications of KS
sets for quantum information processing that make use of large families of
nonisomorphic KS sets. | Mladen Pavicic, Mordecai Waegell | 2023-05-14T22:20:19Z | http://arxiv.org/abs/2305.08267v3 | # Kochen-Specker Contextuality
###### Abstract
A recently developed method of generating quantum contextual sets from small vector components is universally and theoretically applicable to any dimension. However, tasks of obtaining such arbitrarily exhaustive sets in dimensions higher than eight face a computational barrier even on supercomputers. Therefore, for this paper, we employed a dimensional upscaling method that exploits the fact that the minimal complexity of KS sets does not scale with dimension to construct relatively small KS sets in higher dimensions from known sets in lower dimensions. This enabled us to generate numerous sets from simple vector components in up to 16-dimensional spaces using presently available computational resources.
## I Introduction
It has been proven that applications in quantum computation [1; 2], quantum steering [3], and quantum communication [4] rely on quantum contextuality and contextual sets. Small contextual sets have been implemented in a series of experiments using photons [5; 6; 7; 8; 9; 10], neutrons [11; 12], trapped ions [13], and solid state molecular nuclear spins [14].
Massive computational generation of contextual sets in odd and even dimensional Hilbert spaces has also been carried out [15; 16], [17, Supplemental Material], [18; 19; 20; 21; 22] by means of various methods among which the method of arbitrarily exhaustive automated generation of contextual sets from simple vector components in odd [21] and even [18] dimensional spaces stands out.
Although the algorithm of the latter method is valid for any dimension, in practice it is limited by the computational complexity that present-day supercomputers can handle. Thus, generation of sets in dimensions higher than eight is not viable, although the algorithm itself turns out to be of _statistical polynomial complexity_, meaning that only negligibly few sets take an exponentially growing time to calculate.
Therefore, in this paper, we offer a complementary method which can generate numerous comparatively small sets, also from simple vector components, in higher dimensional spaces. It is a dimensional upscaling method [17; 23; 24; 25] which we extend here and provide examples in up to 16 dimensions although, by our estimation, up to 32 dimensions are within reach. The method relies on a remarkable feature of contextual sets that their "minimal complexity does not scale with dimension" as proved in [17].
## II Dimensional upscaling method
To describe and handle contextual and other sets we use the McKay-Megill-Pavicic hypergraph (MMPH) language [22]. An MMPH is a connected hypergraph \(k\)-\(l\) with \(k\) vertices and \(l\) hyperedges in which (i) every vertex belongs to at least one hyperedge; (ii) every hyperedge contains at least 2 and at most \(n\) vertices; (iii) no hyperedge shares only one vertex with another hyperedge; (iv) hyperedges may intersect each other in at most \(n-2\) vertices; (v) graphically, vertices are represented as dots and hyperedges as (curved) lines passing through them.
We encode MMPHs by means of printable ASCII characters, with the exception of 'space', '0', '+', ',' and '.'. When all 90 characters are exhausted, we reuse them prefixed by '+', when those are exhausted by '++', and so on. Hyperedges are separated by commas (','), and each MMPH is terminated by a period ('.'). There is no limit on the size of an MMPH.
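To make the notation concrete, a small Python helper that parses such a string into a list of hyperedges might look as follows; the function and the example string are our own illustrations and not part of any published toolchain.

```python
def parse_mmph(s):
    """Parse an MMPH string into a list of hyperedges (lists of vertex tokens).

    Hyperedges are separated by commas, the hypergraph ends with a period, and a
    vertex is a single ASCII character optionally preceded by one or more '+' signs.
    """
    s = s.strip().rstrip(".")
    hyperedges = []
    for edge_str in s.split(","):
        edge, prefix = [], ""
        for ch in edge_str.strip():
            if ch == "+":
                prefix += ch          # accumulate '+' prefixes for reused characters
            else:
                edge.append(prefix + ch)
                prefix = ""
        hyperedges.append(edge)
    return hyperedges

# Invented toy example: three triples of vertices (a 6-3 hypergraph).
print(parse_mmph("123,345,561."))  # [['1', '2', '3'], ['3', '4', '5'], ['5', '6', '1']]
```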
For an MMPH with a coordinatization, an \(n\)-dim MMPH space becomes an \(n\)-dim Hilbert space spanned by \(n\)-tuples of mutually orthogonal vectors, where the \(n\)-tuples correspond to hyperedges and vectors to vertices of the MMPH.
A Kochen-Specker (KS) MMPH is an \(n\)-dim (\(n\geq 3\)) \(k\)-\(l\) MMPH to which it is impossible to assign 1s and 0s in such a way that the following rules hold: (I) no two vertices in any hyperedge are both assigned the value 1 and (II) in any hyperedge, not all of the vertices are assigned the value 0 [23]. KS MMPHs are contextual and nonclassical since they do not allow assignments of predefined 0s and 1s to their vertices, in contrast to classical systems and, in particular, classical computers.
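For very small hypergraphs, conditions (I) and (II) can be checked by brute force, as in the sketch below; the function name and the toy example are ours, and this exhaustive search is obviously not how the large sets in this paper are handled.

```python
from itertools import product

def admits_01_assignment(vertices, hyperedges):
    """Return True if some {0,1} assignment of the vertices satisfies
    (I) no hyperedge has two vertices assigned 1, and
    (II) no hyperedge has all vertices assigned 0;
    together these require exactly one vertex valued 1 per hyperedge.
    A hypergraph is a KS set exactly when this returns False."""
    for bits in product((0, 1), repeat=len(vertices)):
        value = dict(zip(vertices, bits))
        if all(sum(value[v] for v in edge) == 1 for edge in hyperedges):
            return True
    return False

# Toy (non-KS) example: two triples sharing one vertex.
print(admits_01_assignment(list("12345"), [["1", "2", "3"], ["3", "4", "5"]]))  # True
```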
In [18; 22] we generated billions of KS MMPHs in dimensions up to eight directly from simple vector components. Such a generation in 9+ dimensional spaces takes too much CPU time even on supercomputers, though.
To generate comparatively small KS MMPHs but in much higher dimensions than those obtained directly from vector components in [18; 22], we make use of a com |
2306.16509 | Hydrodynamics of thermally-driven chiral propulsion and separation | Considerable effort has been directed towards the characterization of chiral
mesoscale structures, as shown in chiral protein assemblies and carbon
nanotubes. Here, we establish a thermally-driven hydrodynamic description for
the actuation and separation of mesoscale chiral structures in a fluid medium.
Cross flow of a Newtonian liquid with a thermal gradient gives rise to chiral
structure propulsion and separation according to their handedness. In turn, the
chiral suspension alters the liquid flow which thus acquires a transverse
(chiral) velocity component. Since observation of the predicted effects
requires a low degree of sophistication, our work provides an efficient and
inexpensive approach to test and calibrate chiral particle propulsion and
separation strategies. | E. Kirkinis, A. V. Andreev, M. Olvera de la Cruz | 2023-06-28T19:07:51Z | http://arxiv.org/abs/2306.16509v1 | # Hydrodynamics of thermally-driven chiral propulsion and separation
###### Abstract
Considerable effort has been directed towards the characterization of chiral mesoscale structures, as shown in chiral protein assemblies and carbon nanotubes. Here, we establish a thermally-driven hydrodynamic description for the actuation and separation of mesoscale chiral structures in a fluid medium. Cross flow of a Newtonian liquid with a thermal gradient gives rise to chiral structure propulsion and separation according to their handedness. In turn, the chiral suspension alters the liquid flow which thus acquires a transverse (chiral) velocity component. Since observation of the predicted effects requires a low degree of sophistication, our work provides an efficient and inexpensive approach to test and calibrate chiral particle propulsion and separation strategies.
Chirality, the inability of a structure to be superposed on its mirror image, is a characteristic of various assemblies including carbon nanotubes, viruses and actin filaments, and is essential for their function. Since left- and right-handed amino acids lead to different protein structures, their homochirality is required for biological function such as gene encoding [1]. Chiral proteins can sometimes lead to chiral mesoscale structures; some organisms with chiral body structures have chiral cells [2]. Therefore, the chirality of proteins may be responsible for the chiral mesoscale structures found in cell media. Chiral mesoscale assemblies are formed in peptide amphiphiles with chiral amino acids [3] and in many carbon-based systems, which have required great efforts to understand and characterize [4]. However, the mechanism by which chirality manifests at the mesoscale is not well understood. Here, we propose ways of actuating and separating mesoscale chiral structures such as helices [5], helicoidal scrolls [6] and twisted ribbons [7].
In chemical and biological systems various mesoscale structures move and function in an aqueous environment in the presence of thermal gradients induced by chemical reactions [8]. Temperature gradients may alter the liquid material parameters and in particular, viscosity, as this was demonstrated in laser-induced thermophoresis experiments [9] and the associated theory of a single hot particle in a viscous liquid [10]. This motivates us to study the _hydrodynamic_ motion of chiral suspensions in Stokes flow in the presence of temperature gradients. The corresponding chiral current \(\mathbf{j}^{\mathrm{ch}}\) is perpendicular to the plane formed by the base flow direction and the temperature gradient (cf. Fig. 1). The motion of the chiral suspension also perturbs the base flow and endows it with a transverse (chiral) velocity component. It is noteworthy that the chiral suspension also exerts a screw torque on the confining walls, in the direction of the base flow. The hydrodynamic description developed in this article implies averaging over the tumbling motion of the chiral particles and applies at time scales longer than the tumbling time [11]. It can be understood as a "continuum" formulation for the motion of a chiral suspension and thus _differs_ from the majority of propulsion descriptions which are based on a resistance matrix at the level of a single suspended particle [12]. The equivalence of the two approaches was discussed in the recent review article by Witten and Diamant [13].
Motion of chiral particles suspended in a classical liquid is associated with a chiral current and in particular \(\mathbf{j}^{\pm}=n^{\pm}\mathbf{v}_{p}^{\pm}\) where \(n^{\pm}\) is the number density of right and left-handed particles, respectively and \(\mathbf{v}_{p}^{\pm}\) is their respective velocity. For simplicity we consider an incompressible liquid where the right and left-handed particles are mirror-images of each other. We can thus define a chiral current \(\mathbf{j}^{\mathrm{ch}}\) (cf. Fig. 1) of the form
\[\mathbf{j}^{\mathrm{ch}}=\mathbf{j}^{+}-\mathbf{j}^{-}. \tag{1}\]
In the presence of temperature gradients a phenomenological expression for the chiral current \(\mathbf{j}^{\mathrm{ch}}\) based on symmetry considerations and which has a low power of derivatives of vorticity is
\[\mathbf{j}^{\mathrm{ch}}=\frac{n}{T}\left[\beta_{1}(\nabla T\cdot\nabla) \mathrm{curl}\mathbf{v}+\beta_{2}\nabla T\times\nabla^{2}\mathbf{v}\right], \tag{2}\]
where \(n=n^{+}+n^{-}\) and \(T,\mathbf{v}\) are the liquid's, undisturbed by chirality, temperature and velocity respectively. \(\beta_{1,2}\) are described below. As mentioned above, the magnitude of the chiral current is determined by the complicated tumbling motion of the particles caused by thermal fluctuations and the inhomogeneous flow. The phenomenological expression (2) is written to lowest order in the driving flow. At strong drives thermal fluctuations are subdominant, and the magnitude of the chiral current is determined by averaging over corresponding Jeffery
orbits in the nonuniform flow. This problem, but in a different context, was discussed in [14].
In Ref. [11] the chiral current in an isothermal system was shown to be described by the expression
\[{\bf j}^{\rm ch}=n\beta\nabla^{2}{\rm curl}{\bf v}. \tag{3}\]
Insight into the physical origin of Eq.(2) may be obtained by applying Eq.(3) to a shear flow in the presence of temperature gradients. In particular, allowing temperature dependence of the liquid viscosity \(\eta\) and implementing the resulting vorticity equation, Eq.(3) leads to
\[{\bf j}^{\rm ch}\sim n\beta\frac{\eta^{\prime}}{\eta}\left[\nabla T\times \nabla^{2}{\bf v}+(\nabla T\cdot\nabla){\rm curl}{\bf v}\right], \tag{4}\]
where a prime denotes differentiation with respect to temperature \(T\) and we retained only leading order terms in temperature gradients. Thus, the coefficients \(\beta_{1}\) and \(\beta_{2}\) in (2) can be expressed in terms of the logarithmic derivative of viscosity with respect to temperature. We note that in the absence of temperature gradients chiral separation is possible only in nonstationary or nonlinear flows [11] as can be seen by the vorticity equation and (3). In the presence of temperature gradients chiral separation is possible even in the creeping flow regime. This is important for biological systems, which operate at low Reynolds numbers.
Recent literature examines the way microorganisms and active particles move in liquids with spatially-varying viscosity [9; 10; 15]. In what follows we are primarily concerned with suspensions where the viscosity \(\eta\) of the base liquid varies with temperature. A temperature gradient will then give rise to a chiral current of the forms (3) and (4).
The chiral density \(n^{\rm ch}=n^{+}-n^{-}\) satisfies the conservation law [11]
\[\partial_{t}n^{\rm ch}+{\rm div}({\bf v}n^{\rm ch})+{\rm div}\left[{\bf j}(n^{ \rm ch})+{\bf j}^{\rm ch}(n)\right]=0, \tag{5}\]
where \({\bf j}(n^{\rm ch})=-D\nabla n^{\rm ch}-n^{\rm ch}\lambda_{T}\nabla T-n^{\rm ch }\lambda_{p}\nabla p\), is the diffusive current relative to the liquid [11; 16] and \({\bf j}^{\rm ch}(n)\) is Eq. (3). The density \(n\) of chiral particles satisfies a similar conservation law, which is affected by chirality in a non-racemic mixture
\[\partial_{t}n+{\rm div}({\bf v}n)+{\rm div}\left[{\bf j}(n)+{\bf j}^{\rm ch}( n^{\rm ch})\right]=0, \tag{6}\]
so that Eq. (5) and (6) satisfy the Onsager principle of the symmetry of the kinetic coefficients [16].
A chiral suspension imparts stresses on the suspending liquid. To leading order in velocity gradients these stresses, allowed by symmetry, read
\[\sigma_{ij}^{\rm ch} = \eta(T)n^{\rm ch}\left\{\alpha\left[\partial_{i}({\rm curl}{\bf v })_{j}+\partial_{j}({\rm curl}{\bf v})_{i}\right]\right. \tag{7}\] \[\left.+\frac{\alpha_{1}}{T}\left.\left[\epsilon_{kli}V_{kj}+ \epsilon_{klj}V_{ki}\right]\partial_{l}T\right\},\right.\]
where \(V_{ij}\) is the rate-of-strain tensor. The first term of Eq. (7) introduced in [11] was discussed in the recent review [13]. The second term exists only when temperature gradients are present in the liquid.
The coefficients \(\beta\), \(\alpha\) and \(\alpha_{1}\) in (3) and (7) are determined in the low Reynolds number regime by studying the particle motion in the surrounding liquid [12]. They may be estimated as
\[\alpha\sim\alpha_{1}\sim\chi R^{4}\quad{\rm and}\quad\beta\sim\chi R^{3}, \tag{8}\]
where \(R\) is the chiral particle radius and \(\chi\) is the degree of chirality in the shape of the particles. Eq. (8) provides the order of magnitude estimates of these coefficients. Their precise determination for a specific particle shape however requires solving hydrodynamic equations for a tumbling particle in the presence of temperature and velocity gradients, and is beyond the scope of our work.
In the _absence_ of chirality the liquid satisfies the Navier-Stokes equations and is considered incompressible
\[\rho Du_{i}/Dt=\partial_{k}\sigma_{ik}\quad{\rm and}\quad\partial_{i}u_{i}=0, \tag{9}\]
where the Cauchy stress tensor \(\sigma_{ik}\) is given by \(\sigma_{ik}=-p\delta_{ik}+\eta\left(\frac{\partial u_{i}}{\partial x_{k}}+\frac{\partial u_{k}}{\partial x_{i}}\right)\), \(i,k=1,2,3\), \(\rho\) is the density of the liquid and \(p\) is the pressure. Conservation of energy in an incompressible liquid is expressed in the form [16]
\[\rho c_{p}(\partial_{t}T+{\bf v}\cdot{\rm grad}T)=k_{th}\nabla^{2}T \tag{10}\]
where \(c_{p}\) is the specific heat at constant pressure and \(k_{th}\) the thermal conductivity of the liquid.
Consider pressure-driven flow, in the absence of chiral particles, in a channel with unevenly heated walls (cf. Fig. 1). With \({\bf v}=u(y)\hat{\bf x}\), \(\nabla T=\partial_{y}T\hat{\bf y}\), the Navier-Stokes
equations (9) and energy balance (10) in the creeping flow approximation reduce to
\[\frac{d}{dy}\left[\eta(T)\frac{du}{dy}\right]=\frac{dp}{dx},\qquad\frac{d^{2}T}{ dy^{2}}=0, \tag{11}\]
respectively, with boundary conditions
\[u(0)=u(d)=0,\quad T(0)=T_{0},\quad T(d)=T_{0}+\Delta T. \tag{12}\]
The temperature profile thus obtained is \(T(y)=T_{0}+\frac{y}{d}\Delta T\). The solution of the first of Eq. (11) with boundary conditions (12) becomes
\[u(y)=\frac{T_{e}^{2}d^{2}\partial_{x}p}{2\eta(T)(\Delta T)^{2}}\frac{\sum \limits_{\{i,j,k\}}e^{X_{i}}\left\{\text{Ei}_{1}(X_{i})\left[\frac{e^{X_{j}}}{ X_{k}^{2}}-\frac{e^{X_{k}}}{X_{j}^{2}}\right]+\frac{1}{X_{j}X_{k}}\left(\frac{1}{ X_{k}}-\frac{1}{X_{j}}\right)\right\}}{\left[\text{Ei}_{1}(X_{0})-\text{Ei}_{1}(X_{1}) \right]e^{X_{0}+X_{1}}+\frac{e^{X_{0}}}{X_{1}}-\frac{e^{X_{1}}}{X_{0}}}, \tag{13}\]
where the symbol \(\{i,j,k\}\) denotes cyclic permutation of \(i,j\) and \(k\), and \(\text{Ei}_{1}(X)=\int_{1}^{\infty}\frac{e^{-kX}}{k}dk\) is the exponential integral. Here we employed the well-documented Arrhenius-type law [17]
\[\eta(T)=\eta_{0}e^{\frac{E}{R_{g}(T+T_{A})}}, \tag{14}\]
valid in the 243 to 373 K temperature range, since it encompasses linear and other exponential laws [18], as special cases. \(E\) is the activation energy, \(R_{g}\) is the gas constant and \(T_{A}\) is a temperature correction, unique to each viscous liquid, cf. [17] and Table I. In Eq. (13) \(X_{i}=\frac{Te}{T_{A}+T_{i}},\quad i=0,1,2,\quad T_{e}=\frac{E}{R_{g}}\), where \(T_{0}\) and \(T_{1}=T_{0}+\Delta T\) are the lower and upper channel wall fixed temperatures, respectively (cf. Fig. 1) and \(T_{2}\equiv T=T_{0}+\frac{y}{d}\Delta T\).
Now consider the presence of chiral particles and define the chiral separation velocity
\[\mathbf{v}^{\text{ch}}\equiv\mathbf{j}^{\text{ch}}/n \tag{15}\]
relative to the liquid by employing (3). With respect to the geometry displayed in Fig. 1 it has the form \(\mathbf{v}^{\text{ch}}=v^{\text{ch}}(y)\hat{\mathbf{z}}\) and its magnitude is
\[v^{\text{ch}}(y)=\chi\frac{R^{3}}{d}\frac{X_{2}^{4}\Delta T\partial_{x}p}{2 \eta(T)T_{e}}\frac{\left[\text{Ei}_{1}(X_{1})-\text{Ei}_{1}(X_{0})\right]e^{X _{0}+X_{1}}+\sum\limits_{i\neq j=0,1}\frac{(-1)^{i}e^{X_{j}}}{X_{i}^{2}}\left[ \frac{1}{2}+\frac{1}{X_{i}}\left(\frac{1}{X_{2}}-\frac{1}{2}\right)\right]}{ \left[\text{Ei}_{1}(X_{0})-\text{Ei}_{1}(X_{1})\right]e^{X_{0}+X_{1}}+\frac{e ^{X_{0}}}{X_{1}}-\frac{e^{X_{1}}}{X_{0}}}. \tag{16}\]
In Fig. 2 we plot the closed form expression (16) for the chiral separation velocity \(v^{\text{ch}}\) in cm/sec vs. channel elevation \(y\) in cm for two temperature variations \(\Delta T\) between the lower (at \(y=0\)) and upper (at \(y=0.1\) cm) channel walls. The chiral separation velocity is non-zero close to the solid walls located at \(y=0,d\), even though no-slip boundary conditions are satisfied by the base liquid. This is the case because, according to Eq. (3), chiral particle velocities become prominent in the vicinity of large vorticity gradients, and these are present close to the walls.
To obtain a better understanding of the effect, we average (16) over the channel width \(d\) and expand with respect to \(\Delta T\) to obtain \(\langle v^{\text{ch}}\rangle\sim 2\chi R^{3}\gamma\frac{\Delta T}{d}\frac{ \partial_{x}p}{\eta}\), to leading order in \(\Delta T\). It is more illuminating however to replace the pressure gradient with a characteristic velocity \(U_{0}\) of Poiseuille flow by averaging the (undisturbed by chirality) base Poiseuille profile \(u\sim\frac{\partial_{x}p}{2\eta(T_{0})}(y^{2}-yd)\) over the channel width \(d\). This gives \(U_{0}=-\frac{\partial_{x}p}{2\eta(T_{0})}d^{2}\), and substituting into the expression for \(\langle v^{\text{ch}}\rangle\) we obtain the chiral separation velocity
\[\langle v^{\text{ch}}\rangle=24\chi\left(\frac{R}{d}\right)^{3}U_{0}\gamma \Delta T, \tag{17}\]
where we defined the average of a function \(f(y)\) with respect to the channel width to be \(\langle f\rangle=\frac{1}{d}\int_{0}^{d}f(y)dy\), \(R\) is chiral particle size and \(d\) channel width. \(\gamma=E/\left[R_{g}(T_{0}+T_{A})^{2}\right]\) is the logarithmic derivative of the viscosity which arises from the linearization of (14) \(\eta(T)\sim\eta(T_{0})\left[1-\gamma(T-T_{0})\right].\)
Considering BM-4 oil [17], \(\Delta T=10\,\text{K}\) and the values displayed in Table I, Eq. (17) leads to the estimate
\[v^{\text{ch}}\sim 2\chi\ \mu\text{m/sec}. \tag{18}\]
The Reynolds number is \(\text{Re}\sim 3.4\times 10^{-3}\). Analogous results can be derived for silicone oils employed in the experiments of Ehrhard [19] and other liquids reported in the literature [17]. Water can also be used, although
it leads to Reynolds numbers higher than those reported here.
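The estimate of Eq. (18) follows directly from inserting the Table 1 values into Eq. (17); the short check below (with \(\chi=1\), as in the figures) reproduces the quoted order of magnitude.

```python
# Order-of-magnitude check of Eq. (17) with the BM-4 oil parameters of Table 1.
chi = 1.0      # degree of chirality (set to 1, as in the figures)
R = 5e-3       # chiral particle radius [cm]
d = 0.1        # channel width [cm]
U0 = 0.1       # characteristic Poiseuille velocity [cm/s]
gamma = 0.07   # logarithmic derivative of viscosity [1/K]
dT = 10.0      # wall temperature difference [K]

v_ch = 24.0 * chi * (R / d) ** 3 * U0 * gamma * dT   # Eq. (17), in cm/s
print(f"<v_ch> ~ {v_ch:.2e} cm/s = {v_ch * 1e4:.1f} um/s")  # ~2 um/s, cf. Eq. (18)
```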
_Perturbation of liquid velocity by the chiral suspension.-_ A chiral suspension imparts stresses on the suspending liquid. To leading order in gradients of vorticity these stresses, allowed by symmetry, are given by (7). The liquid velocity \(\mathbf{v}\) acquires a chirality-induced component \(\delta v\) perpendicular to the plane of the base flow
\[\mathbf{v}=u(y)\hat{\mathbf{x}}+\delta v(y)\hat{\mathbf{z}}. \tag{19}\]
With the correction (7) the Cauchy stress tensor reads
\[\sigma_{ij}=-p\delta_{ij}+\eta(\partial_{i}u_{j}+\partial_{j}u_{i})+\sigma_{ ij}^{\mathrm{ch}}. \tag{20}\]
Conservation of linear momentum \(\partial_{j}\sigma_{ij}=0\) along the flow direction \(\hat{\mathbf{x}}:-\partial_{x}p+\partial_{y}(\eta\partial_{y}u)=0\), is now accompanied by its chirality-induced counterpart that is perpendicular to the base flow direction
\[\hat{\mathbf{z}}:\partial_{y}(\eta\partial_{y}\delta v)-n^{\mathrm{ch}}\chi R ^{4}\partial_{y}(\eta\partial_{y}^{2}u)=0, \tag{21}\]
and satisfies no-slip boundary conditions \(\delta v(0)=\delta v(d)=0.\) The solution \(\delta v\) of Eq. (21) is displayed in Fig. 3 in cm/sec vs. channel elevation \(y\) in cm for two temperature variations \(\Delta T\) between the lower (at \(y=0\)) and upper (at \(y=0.1\) cm) channel walls, employing the Arrhenius-type temperature dependent viscosity law (14). Its profile is skewed due to the reduction of viscosity close to the upper heated channel wall which is also the location of high chiral separation velocity \(v^{\mathrm{ch}}\). To leading order in \(\Delta T\), and averaging over the channel width \(d\), we obtain \(\langle\delta v\rangle\sim\chi R\frac{d\partial_{x}p}{12\eta}\gamma\Delta T\). Replacing the pressure gradient with its Poiseuille flow counterpart, leads to
\[\langle\delta v\rangle\sim\chi\frac{R}{d}U_{0}\gamma\Delta T. \tag{22}\]
Employing the material parameters for BM-4 oil displayed in Table 1 and setting \(\Delta T=10\,\mathrm{K}\) we obtain \(\delta v\sim 35\chi\)\(\mu\)m/sec, which agrees well, in order of magnitude, with the exact solution displayed in Fig. 3. The momentum equation displayed in (21) was formulated by considering only the first term of the constitutive law (7) since the second term, and for the material parameters employed in this article, gives velocities that are one order of magnitude smaller than the ones derived here.
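As with Eq. (18), the quoted value follows from inserting the Table 1 parameters into Eq. (22), as the minimal check below shows.

```python
# Order-of-magnitude check of Eq. (22) with the same Table 1 parameters (chi = 1).
chi, R, d, U0, gamma, dT = 1.0, 5e-3, 0.1, 0.1, 0.07, 10.0

delta_v = chi * (R / d) * U0 * gamma * dT   # Eq. (22), in cm/s
print(f"<delta_v> ~ {delta_v:.1e} cm/s = {delta_v * 1e4:.0f} um/s")  # ~35 um/s
```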
_Screw torque in a non-racemic suspension.-_ A non-racemic mixture will apply shear stresses on the channel walls that are perpendicular to the plane of the paper. These forces arise from the chiral momentum flux density (7), cf. [11]. Employing the geometry of the channel Poiseuille flow displayed in Fig. 1 this stress is
\[\hat{\mathbf{z}}:\quad\sigma_{zy}^{\mathrm{ch}}=\chi R\eta\partial_{y}^{2}u. \tag{23}\]
In Fig. 4 we display the chiral stress \(\sigma_{zy}^{\mathrm{ch}}\) as a function of channel width employing the exact form for the liquid velocity profile (13). Since the normal vectors to the two channel walls have opposite sign, the chiral suspension exerts on the walls two forces of opposite sign directed into and out of the page. Hence, there is a screw torque exerted by the chiral flow on the confining walls and is directed along the \(\hat{\mathbf{x}}\) direction of the flow. The average of the chiral stress \(\langle\sigma_{zy}^{\mathrm{ch}}\rangle\) over the channel width to leading order in \(\Delta T\) becomes
\[\langle\sigma_{zy}^{\mathrm{ch}}\rangle=\chi R\partial_{x}p(1+\frac{1}{2} \gamma\Delta T+O((\Delta T)^{2}). \tag{24}\]
Expression (24) implies that a chiral stress exists even in the absence of temperature gradients. This was also noted in [11]. Replacing the pressure gradient by the base Poiseuille profile, as carried out in the foregoing sections and employing the material values appearing in Table 1
Figure 2: Chiral separation velocity \(v^{\mathrm{ch}}\) in cm/sec from Eq. (16), perpendicular to the \(x\)-\(y\) plane formed by a base Poiseuille flow and the vertical temperature gradient directions (cf. Fig.1). \(y\) is vertical channel coordinate in cm and we employed the Arrhenius-type temperature dependent viscosity law (14) for a BM-4 oil [17]. \(\chi\) has been set equal to 1.
\begin{table}
\begin{tabular}{l c l} \hline \hline Quantity & Value & Definition \\ \hline \(\eta\) (g cm\({}^{-1}\)sec\({}^{-1}\)) & 2.03 & viscosity of BM-4 oil at 25 \({}^{\circ}\)C [17] \\ \(R\) (cm) & \(5\times 10^{-3}\) & chiral particle radius \\ \(d\) (cm) & 0.1 & channel width \\ \(U_{0}\) (cm/sec) & 0.1 & Poiseuille velocity \\ \(T_{0}\) (K) & 298.15 & lower channel wall temperature \\ \(\gamma\) (K\({}^{-1}\)) & 0.07 & BM-4 oil [17] \\ \(E\) (kJ/mol) & 7.5 & activation energy of BM-4 oil [17] \\ \(R_{g}\) (J/(mol K)) & 8.31441 & gas constant \\ \(T_{A}\) (K) & \(-186\) & BM-4 oil temp. correction [17] \\ \(n^{\mathrm{ch}}\) (cm\({}^{-3}\)) & \(R^{-3}\) & chiral density \(n^{+}-n^{-}\) \\ \(n\) (cm\({}^{-3}\)) & \(R^{-3}\) & particle number density \(n^{+}+n^{-}\) \\ \(u\) (cm/sec) & & basic shear flow velocity \\ \(v^{\mathrm{ch}}\) (cm/sec) & & chiral separation velocity \\ \(\delta v\) (cm/sec) & & chiral correction to flow velocity \\ \hline \hline \end{tabular}
\end{table}
Table 1: Definitions and material parameters [17]
for \(\Delta T=10K\), Eq.(24) gives \(\langle\sigma_{zy}^{\rm ch}\rangle=1.73\chi\ {\rm dynes/cm}^{-2}\). This estimate agrees, in order of magnitude, with those of same-size systems known in the literature [20].
We note the existence of a related thermal effect for the propulsion of chiral particles when the viscosity is temperature-dependent and thermal gradients are generated at the interior by viscous heating [21]. The energy equation is now replaced by \(\kappa\partial_{y}^{2}T+\eta(T)\left(\partial_{y}u\right)^{2}=0\), where \(\kappa\) is the thermal conductivity of the liquid. The external stimuli may be supplied, for instance, by sliding one channel wall at constant speed \(V\). In this case the vorticity equation and the chiral current depend on the Brinkman number \(Br\), that is, the ratio of viscous heating to conduction \(Br=\frac{\gamma\eta V^{2}}{\kappa}\). The chiral separation velocity becomes \(v^{\rm ch}\sim\frac{\kappa}{8}\left(\frac{R}{d}\right)^{3}Br\).
Another related thermally-induced chiral particle propulsion effect can take place in a Rayleigh-Benard cell [22], driven by variations of the liquid density with temperature. Here, the coefficient of thermal expansion \(\alpha_{T}\) is the logarithmic derivative of density with respect to temperature, in the same sense that \(\gamma\) in Eq. (17) is the logarithmic derivative of viscosity. Diffusion of vorticity, perpendicular to the plane of the cell, is now induced by \(\rho\alpha_{T}\nabla T\times{\bf g}\), where \(\rho\) is the unperturbed mass density of the liquid and both the temperature gradient and gravitational acceleration \({\bf g}\) lie on the plane of the cell. The chiral separation velocity is \(v^{\rm ch}\sim\chi\frac{R^{3}\rho g\alpha_{T}\Delta T}{\eta d}\). This effect is present when the Rayleigh number exceeds its critical value.
We thank the Department of Energy, Office of Basic Energy Sciences for support under contract DE-FG02-08ER46539 (M.O.C and E.K.). The work of A.V.A. was supported, in part, by the US National Science Foundation through the MRSEC Grant No. DMR-1719797.
|
2304.00512 | An Astronomers Guide to Machine Learning | With the volume and availability of astronomical data growing rapidly,
astronomers will soon rely on the use of machine learning algorithms in their
daily work. This proceeding aims to give an overview of what machine learning
is and delve into the many different types of learning algorithms and examine
two astronomical use cases. Machine learning has opened a world of
possibilities for us astronomers working with large amounts of data, however if
not careful, users can trip into common pitfalls. Here we'll focus on solving
problems related to time-series light curve data and optical imaging data
mainly from the Deeper, Wider, Faster Program (DWF). Alongside the written
examples, online notebooks will be provided to demonstrate these different
techniques. This guide aims to help you build a small toolkit of knowledge and
tools to take back with you for use on your own future machine learning
projects. | Sara A. Webb, Simon R. Goode | 2023-04-02T11:03:48Z | http://arxiv.org/abs/2304.00512v1 | [
###### Abstract
With the volume and availability of astronomical data growing rapidly, astronomers will soon rely on the use of machine learning algorithms in their daily work. This proceeding aims to give an overview of what machine learning is and delve into the many different types of learning algorithms and examine two astronomical use cases. Machine learning has opened a world of possibilities for us astronomers working with large amounts of data, however if not careful, users can trip into common pitfalls. Here we'll focus on solving problems related to time-series light curve data and optical imaging data mainly from the Deeper, Wider, Faster Program (DWF). Alongside the written examples, online notebooks will be provided to demonstrate these different techniques. This guide aims to help you build a small toolkit of knowledge and tools to take back with you for use on your own future machine learning projects.
Machine Learning, Observational, lightcurves An Astronomers Guide to Machine Learning] An Astronomers Guide to Machine Learning S. A. Webb et al.] Sara A. Webb\({}^{1}\), Simon R. Goode\({}^{12}\) 2015 Astronomy in Focus, Volume 1
## 1 Introduction
In the field of artificial intelligence, machine learning focuses on using data and algorithms to mimic the way humans would typically learn, improving accuracy over time. Via machine learning, we can automate analytical models, taking advantage of algorithmic ability to learn from data and identify patterns with minimal human input.
Machine learning has already begun to be adopted by a wide range of sub-disciplines in astronomy, and is well established in some areas, including its use in transient astronomy as outlined in the advanced review by Fluke C and Jacobs C. (2020). Note this work is not a comprehensive review of all techniques, but rather a compilation of specific examples used within established transient astronomy programs.
In the era of large current and upcoming time-domain surveys, the classification and discovery of transient sources will rely on machine classification to handle large amounts of collected data. Current ground-based surveys such as the ZTF, DES and ASAS-SN scan thousands of square degrees per night, amounting to petabytes of data annually. Recently the Panoramic Survey Telescope and Rapid Response System Survey (Pan-STARRS) delivered the first-petabyte scale optical data release (Bellm _et al._ (2019), Amari _et al._ (2016), Shappee _et al._ (2014), Stubbs C. W., _et al._ (2010), Chambers K. C. _et al._ (2016)).
Space-based time-domain missions have provided unprecedented volumes of photometry, light curves, and proper motions for Galactic sources, with _Kepler_ and K2 targeting \(\sim\)400,000+ individual stars, Transiting Exoplanet Survey Satellite (TESS, Ricker G. R. _et al._ (2014)) is expected to target at least 200,000 of the \(\sim\)9.5 million catalogue sources. The space-based mission, _Gaia_, is already releasing almost 2 billion sources Borucki W.
J., _et al._ (2010), Howell S. B., _et al._ (2014), Stassun K. G., _et al._ (2010).
Overcoming the mining challenges of these increasing amounts of data to not only identify and catalogue the multitude of known transient types but to discover additional new or anomalous sources is paramount to the success of future large transient surveys and time-domain science. This will become especially important with the upcoming Vera Rubin Observatory Legacy Survey of Space and Time (LSST, Abell P. A., _et al._ (2009)). LSST is a planned 10-year survey, imaging the entire Southern sky every three nights. LSST will generate millions of alerts each night, with billions of light curves continually updated or created.
Machine learning can be broken down broadly into either supervised or unsupervised algorithms, each with subcategories beneath them. See Figure 1 for the specific types of categories, which are explored in more detail below in relation to the two real-data examples later in the paper.
## 2 Overview
### Supervised Machine Learning
One heavily used subcategory of machine learning in astronomy is supervised machine learning. This uses labelled data sets to train algorithms to classify data or predict outcomes. Traditionally a supervised algorithm would tune itself around the labelled data sets, generating a function that maps new inputs to likely outputs.

Figure 1: Representation of supervised versus unsupervised learning and the specific examples of each.
Supervised machine learning can be separated into three main branches: 1) Regression algorithms, 2) classification algorithms and 3) Neural networks. It is important to note that neural networks can be used to solved both regression and classification problems.
Regression algorithms focus on establishing the relationship between a single dependent variable and several independent ones. Both linear and non-linear regression can be used for supervised learning. An example of a commonly used non-linear regression algorithm is the Decision Tree (Quinlan K. G. (1986)). Decision Trees break the data down into decision nodes and leaf nodes: decision nodes are where the data is split into further sub-nodes, whereas leaf (terminal) nodes represent a final prediction. Once a decision tree has been built on features from labelled data, the algorithm is able to assign a predicted value or outcome to subsequent data (Quinlan K. G. (1986)).
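A minimal example of decision-tree regression, here with scikit-learn's `DecisionTreeRegressor` on an invented one-dimensional data set, illustrates the splitting behaviour described above; the toy data and the `max_depth` choice are arbitrary.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy regression problem: recover a noisy periodic signal from a single feature.
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Decision nodes split on feature thresholds; each leaf node returns the mean
# target value of the training samples that end up in it.
tree = DecisionTreeRegressor(max_depth=4).fit(X, y)
print(tree.predict([[2.5], [7.0]]))
```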
Classification algorithms learn from a given labelled dataset to sort, assign and classify new data into a specific given number of classes or groups. Unlike regression algorithms, which output continuous values, classification algorithms only return an assigned class or group. Classifications can be either binary or multi-class, depending on the problem being approached. A common classifier used in astrophysical contexts, including transient astronomy, is again the Decision Tree. For classification, the tree splits the data at internal nodes according to conditions on its features (e.g. certain features present or not), and the leaf nodes represent the class decision made once those conditions are met. These algorithms are useful for constructing easy-to-interpret, tree-like models from known data, which can then be used to classify new data.
As data is often plentiful in astronomy, supervised algorithms are ideal for doing the 'heavy lifting' in classifying and sorting astronomical data sets.
One area which heavily utilizes supervised algorithms is the identification of variable sources. Variable stars and quasi-stellar objects have been identified from light curves via multivariate Gaussian mixture models, random forest classifiers, support vector machines, or Bayesian neural networks (Debosscher K. G. (2007), Richards J. W. _et al._(2011), Kim. D. _et al._,(2011), Pichara K. _et al._,(2012), Bloom J. S. _et al._,(2012), Pichara K. _et al._,(2013), Kim D. _et al._,(2016), Mackenzie C. _et al._,(2016), Muthukrishna, D. _et al._,(2022)).
All of the aforementioned works successfully classify objects via supervised algorithms trained on features extracted from the light curves. Features represent a set of measurable properties/characteristics of the light curves being studied. The most common features used in earlier works are available within the python package '_FATS_' by Nun I. _et al._,(2015).
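To give a flavour of what such features look like, the sketch below computes a handful of generic summary statistics from a toy light curve with numpy/scipy. These are stand-ins for, not the exact definitions of, the features provided by packages such as FATS.

```python
import numpy as np
from scipy import stats

def lightcurve_features(time, mag, mag_err):
    """A few simple summary-statistic features of a light curve (illustrative only)."""
    mean, std = np.mean(mag), np.std(mag)
    return {
        "duration": time[-1] - time[0],
        "amplitude": 0.5 * (np.max(mag) - np.min(mag)),
        "std": std,
        "skew": stats.skew(mag),
        "kurtosis": stats.kurtosis(mag),
        "beyond1std": np.mean(np.abs(mag - mean) > std),   # fraction of points beyond 1 sigma
        "median_abs_dev": stats.median_abs_deviation(mag),
        "mean_err": np.mean(mag_err),
    }

# Toy light curve sampled every ~70 s, mimicking the DWF cadence.
t = np.arange(0.0, 3600.0, 70.0)
m = 18.0 + 0.05 * np.sin(2 * np.pi * t / 900.0) + np.random.normal(0.0, 0.02, t.size)
print(lightcurve_features(t, m, np.full(t.size, 0.02)))
```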
Classification of non-folded light curves of extragalactic transient sources has also been explored, moving away from selecting the class of the object by fitting analytical templates built from a set of known sources Richards J. W. _et al._(2011), Karpenka, N. V., _et al._,(2012), Lochner, M., _et al._,(2016), Narayan, G., _et al._,(2018), Moller, A., _et al._,(2016). While these techniques work well for catalogues of light curves, they cannot
easily be applied to real-time data. Real-time classification of supernovae by Muthukrishna, D., _et al._,(2019) and Moller, A., _et al._,(2019) has shown the effectiveness of deep recurrent neural networks, without the need to rely on extracting computationally expensive features of the input data.
Another heavily explored use of supervised learning is the determination of 'real' or 'bogus' sources in transient astronomy. These algorithms work by inspecting an image, or features of an image, and deciding whether or not the image is of a genuine astrophysical source.
### Unsupervised Machine Learning
With unsupervised machine learning, algorithms learn patterns from unlabeled data. These unsupervised algorithms self organise and create groupings or classes based on patterns exhibited as neuronal predilection or probability densities. These techniques are instrumental in finding like among like within a large data set.
Unsupervised machine learning can be separated into two main branches: 1) Clustering algorithms and 2) Dimensionality reduction algorithms.
Clustering algorithms work by grouping data into like clusters using features extracted from the data, with the ultimate goal of grouping similar data together and isolating the different clusters present within the data. There are four main types of methods for clustering unlabelled data. The first is density-based clustering, which works by identifying areas of feature space with high concentrations of data points. Distribution-based clustering assumes that all data points belong to one of an expected number of clusters, and works by calculating the probability that each point belongs to any given cluster. Centroid-based clustering works by isolating the likely centroids within the data in feature space and determining the relation of each point to each cluster via distance metrics. Finally, hierarchical-based clustering organises data and groupings with a top-down approach, to ensure groupings of varying densities are still identified as separate clusters.
In astronomy, clustering techniques have been used within recent large data sets to isolate known transient and variable sources using light curves (Valenzuela L. _et al._,(2018), Giles D. _et al._,(2019), Galarza M. _et al._,(2020)). This ability to cluster similar data will help identify previously unknown variables and transient events in the era of large astronomical surveys. As such, it will be invaluable to meaningfully and quickly quantify the expected large volume of short timescale events to help assist in follow-up priority assignment (Abell P. A., _et al._ (2009)).
Both clustering and dimensionality reduction have been explored within astronomical data and have proved successful in providing meaningful insight into large data sets.
## 3 Examples of Applications in Astronomy
Here we outline and present resources to familiarize yourself with applying supervised and unsupervised machine learning to an observational astronomy data set. For both examples we'll be using different aspects of data from the Deeper Wider Faster (DWF) program.
The DWF program was developed to explore the fast dynamic universe, through multi-wavelength, multi-facility, real-time observations. The program is designed to run over \(\sim\)1 week observation blocks, at least once a year. The program is optimised to detect fast transients in real-time, and provide rapid follow-up with additional facilities. This first application brings us to the use of supervised machine learning in near real-time.
### Supervised Learning: Removal Of BOgus (ROBOT pipeline) for the Deeper Wider Faster program
In this section we outline one example of supervised learning from Goode S. _et al._,(2022), which can be further explored using the interactive notebooks on GitHub1.
Footnote 1: [https://github.com/simongoode/ROBOT-pipeline](https://github.com/simongoode/ROBOT-pipeline)
During a typical DWF run, raw optical data (primarily from the CTIO Dark Energy Camera) is transferred in near real-time to Swinburne's OzSTAR supercomputer (Vohl, D., _et al._,(2017)). Each DECam image is composed of 60 individual CCDs, each with 4K \(\times\) 2K pixel resolution. The footprint of the imaging is large, covering \(\sim\) 3 square degrees.
Once the data has arrived on OzSTAR, it is processed for calibrations and made 'science ready' before being ingested through the _Mary_ pipeline (Andreoni, I., _et al._,(2017)). _Mary_ performs alignment and difference imaging between template and science images, and rapidly identifies transient candidates from positive residuals in the subtractions. During a single DWF observation run, hundreds of thousands of transient candidates are flagged through difference imaging, and it is crucial that promising candidates are inspected manually before triggering space and ground-based telescopes for follow-up observations. A large problem encountered during early DWF runs was the immense data volume, which often exceeded what could be inspected manually. With hundreds of thousands of candidates found in the processing, human inspectors were faced with more data than physically possible to evaluate without assistance. Astronomers are given several key pieces of information in a DWF run, one of which is the 'postage stamp' images of the candidate sky location. Figure 2 shows an example of a transient candidate as processed in real time during a DWF run. Each of the three 'cutouts' is important for determining the realness of the source, and what type of source it likely is.
The Removal Of BOgus Transients (ROBOT) pipeline was developed to significantly reduce the number of candidates needing human inspection, and rapidly improve the efficiency of candidate inspection during DWF observational runs (Goode S. _et al._,(2022)). The ROBOT pipeline is designed to work as an intermediary step between the processing/candidate identification and the image inspection performed by astronomers. Due to the nature of the work, the uncertainties in sky conditions, and tendencies towards compression artifacts, a large majority of the data is often deemed 'bogus', i.e. false-positive alerts. ROBOT works to identify the astrophysical realness of objects, and to filter only those of the highest likelihood through to the human inspector.
A deep Convolutional Neural Network (CNN) was chosen to tackle this task, as historically CNNs have proven to be excellent for 2-dimensional data structures, like image data, and have a proven track record of reliable use in image classification. The first step in building the ROBOT framework was compiling and labelling large amounts of past DWF data into either real or bogus categories, and even specific source types.
Labelling of data occurred over several sessions by multiple expert astronomers. Using their past knowledge and experience, specifically around the DWF program, ensured contextually informed decisions. A total of 2952 candidates, each comprising template, science and subtraction images, were used. Out of the initial samples it was found that 2250 were labelled unanimously by experts as bogus, and only 702 candidates were labelled as real. To limit the bias in the eventual network, we chose to balance the labelled data by using data augmentation. To do this, both the real and bogus images were included multiple times in the data set, but with different random augmentations including rotation, mirroring and translation. Each labelled set contained 5000 samples.
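A minimal sketch of this kind of augmentation-based class balancing is given below; the specific rotations, mirrors and shifts, and the placeholder arrays, are illustrative assumptions rather than the exact transformations used for ROBOT.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_augment(triplet):
    """Randomly rotate, mirror and slightly translate a (H, W, 3) image triplet."""
    out = np.rot90(triplet, k=int(rng.integers(0, 4)), axes=(0, 1))
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                       # horizontal mirror
    dy, dx = rng.integers(-2, 3, size=2)
    return np.roll(out, shift=(int(dy), int(dx)), axis=(0, 1))

def oversample_with_augmentation(images, n_target):
    """Draw augmented copies (with replacement) until the class has n_target samples."""
    idx = rng.integers(0, len(images), size=n_target)
    return np.stack([random_augment(images[i]) for i in idx])

real = rng.normal(size=(702, 31, 31, 3))            # placeholder for the labelled 'real' triplets
balanced_real = oversample_with_augmentation(real, 5000)
print(balanced_real.shape)                          # (5000, 31, 31, 3)
```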
Using the labelled data, Goode S. _et al._,(2022) trained an initial 60 different CNN model architectures, each with slightly different combinations of layers, convolutions and hyperparameters. Each of the architectures was evaluated on its initial performance using the Matthews Correlation Coefficient, which takes into account false positives, false negatives, true positives and true negatives, to find the architecture which performed best. It was found the best model was a '1c_2d' architecture, which consisted of 1 convolution and 2 fully connected dense layers. The final architecture can be seen in Figure 3. The final algorithm performs regression, returning an overall score between 0 and 1 for each of the candidates passed through. The scores can then be used to determine the likelihood of a candidate being either real or bogus, and can be used as a classifier by setting the limits at which a final label of real or bogus is assigned.
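For orientation, a Keras sketch of a network in the spirit of this '1c_2d' model (a single 3 \(\times\) 3 convolution with 64 filters, 4 \(\times\) 4 max-pooling, global average pooling, two dense layers and a single sigmoid output, following the description in Figure 3) is given below; the widths of the dense layers and the training configuration are assumptions, not the published values.

```python
from tensorflow.keras import layers, models

# Sketch of a real/bogus network in the spirit of the '1c_2d' ROBOT architecture.
model = models.Sequential([
    layers.Input(shape=(31, 31, 3)),               # template/science/subtraction triplet
    layers.Conv2D(64, (3, 3), activation="relu"),  # single convolutional layer, 64 filters
    layers.MaxPooling2D(pool_size=(4, 4)),         # 29x29x64 -> 7x7x64
    layers.GlobalAveragePooling2D(),               # pools each filter map to a single value
    layers.Dense(64, activation="relu"),           # fully connected layer (width assumed)
    layers.Dense(32, activation="relu"),           # fully connected layer (width assumed)
    layers.Dense(1, activation="sigmoid"),         # realness score between 0 and 1
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```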
By implementing ROBOT into DWF operations, the total time needed to inspect candidates was dramatically reduced, speeding up the human hours needed to make meaningful discoveries in real-time. Astronomy is uniquely positioned with the vast amounts of archival data able to be used in creating methods for automated discovery.
Figure 2: Example of real time processed candidate images, taken from a past operational run of DWF. Each panel is a small, 121 \(\times\) 121 pixel image that corresponds to \(\sim\) 30 \(\times\) 30 arcsec on the sky centred on the candidate. The left panel is the (deeper) template image taken at a time previous to the DWF observations, the central panel is the current science image of the sky taken minutes earlier, and the subtraction image is the digital subtraction of the two images. All constant flux is subtracted, and any flux difference, e.g. from a transient source, will remain in the subtracted image.
### Unsupervised Learning: Anomaly detection in lightcurves for the Deeper Wider Faster program
In this section we outline one example of unsupervised learning from Webb S. _et al._,(2020), which can be further explored using the interactive notebooks on GitHub1.
Footnote 1: [https://github.com/sarawebb/ML_lightcurve_clustering](https://github.com/sarawebb/ML_lightcurve_clustering)
Although DWF is focused on chasing the fastest transients in near real-time during the observational runs, the data is still fully processed and explored systematically for other science objectives. One part of the post-run processing is the production of light curves from the optical imaging for every source detected, not just new transient sources. For a standard field, upwards of 100,000 sources are present. To meaningfully evaluate this volume of sources, analytic and automated algorithms are needed to identify sources of interest.
One exciting aspect of the DWF optical data is the cadence at which it is collected. Using continuous 20 second exposures, the lightcurves generated from this data have an average time of \(\sim\) 60-75 s between data points. This cadence allows high time resolution of transient and variable events. One area of great interest is identifying new or under-explored transients and variable sources in the unique DWF data. To explore possibly unknown events we needed to design a flexible algorithm which could identify lightcurves
Figure 3: Figure from Goode S. _et al._,(2022). Architecture diagram of the highest performing model found during testing. The model takes in cropped triplet images (31 \(\times\) 31 \(\times\) 3) as input into a single 2D convolution layer, with 64 filters (3 \(\times\) 3). A 4 \(\times\) 4 maxpool layer passes 7 \(\times\) 7 \(\times\) 64 information to a global average pooling layer, which functions as both a dropout and flatten layer. This pooling layer passes the information to two fully connected dense layers, before providing a single output; the probability that the input triplet images belong to a real object.
which were anomalous to the majority.
We chose to use Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN, McInnes L. _et al._,(2017)). The theoretical method behind this algorithm was first proposed by Campello R. J. G. B., _et al._,(2013). HDBSCAN takes the approach of Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and converts it into a hierarchical clustering algorithm by varying the value of epsilon (\(\epsilon\)) to identify clusters of varying densities.
The power of HDBSCAN lies within its ability to identify clusters of varying densities within a dataset. This is a valuable tool when working with diverse data such as lightcurves. Although we expect the majority of the sources to be unchanging over the short observational time periods, those which do change will do so in a variety of different ways, and will not occupy consistent fractions of the data. Another advantage of HDBSCAN is its ability to identify data which is highly anomalous and does not belong to any of the identified clusters. Before we could cluster the DWF lightcurves, we first needed to identify what features we wanted to use to describe each.
Features represent a set of measurable properties/characteristics of the light curves being studied. In this work we extracted a uniform set of features across all light curves for two purposes: 1) to reduce the dimensionality of the light curves and 2) to allow for direct comparison between light curves that may be on different time scales with different sampling properties. We chose to use a mixture of normalised features developed and used primarily for the identification of variable stars and quasi-stellar objects, aiming to cluster variable and periodic sources, and to have highly anomalous sources encapsulated in the unclusterable 'noise' unidentified by HDBSCAN. We began the feature selection by working on a sub-sample of the data, calculating multiple different previously used, and data-specific, features. Using principal component analysis, we selected the top 25 with the largest eigenvalues. The chosen features are shown in Table 5 and explained in detail in Webb S. _et al._,(2020). We extracted these 25 unique features from each light curve, mostly using the \(\mathit{FATS}/\mathit{FEETS}\) packages and some in-house routines (Nun I. _et al._,(2015)).
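As a rough illustration of the selection step, the sketch below ranks candidate features by their loadings on the leading principal components using scikit-learn. The exact criterion used in Webb S. _et al._ (2020) may differ; the helper name and the eigenvalue-weighted scoring are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def rank_features_by_pca(feature_matrix, feature_names, n_keep=25):
    """Rank candidate light-curve features by their loadings on the leading principal components."""
    X = StandardScaler().fit_transform(feature_matrix)   # features must be on a common scale
    pca = PCA().fit(X)
    # Score each feature by its |loading|, weighted by the variance (eigenvalue) of each component.
    score = np.abs(pca.components_).T @ pca.explained_variance_
    keep = np.argsort(score)[::-1][:n_keep]
    return [feature_names[i] for i in keep]
```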
Using the lightcurve features we tested multiple configurations of HDBSCAN, changing the minimum cluster size and distance metric type. After the preliminary tests we decided on a minimum cluster size of 5 and the Euclidean distance metric, for its intrinsic ability to calculate the shortest distance between points. These were chosen in an effort to create as many distinct clusters in our feature space as the algorithm will allow, limiting the outliers to very low density regions.
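With the `hdbscan` package, this configuration can be expressed in a few lines; the helper below is a minimal sketch in which the feature matrix layout is assumed and sources labelled `-1` (unclusterable noise) are returned as the anomaly candidates, following the description above.

```python
import numpy as np
import hdbscan  # https://hdbscan.readthedocs.io

def cluster_lightcurve_features(feature_matrix):
    """Cluster per-lightcurve feature vectors; label -1 marks unclustered (candidate anomalous) sources."""
    clusterer = hdbscan.HDBSCAN(min_cluster_size=5, metric="euclidean")
    labels = clusterer.fit_predict(np.asarray(feature_matrix))
    anomalous = np.flatnonzero(labels == -1)
    return labels, anomalous
```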
In Webb S. _et al._,(2020) two separate fields/run types were explored: 1) the DWF 'J04-55 field', which was data collected using a staring method on the telescope, and 2) the 'Antlia field', collected using interleaved dithering. Both observational methods have their merits and uses, and we wanted to confirm that the clustering methods would work to identify astrophysical variability as well as variability caused by observational effects such as dithering.
The clustering methods via HDBSCAN proved extremely successful in identifying not only distinct groupings of astrophysical sources, but also clustering lightcurves which were affected by observational effects such as dithering, blended sources or cosmic rays.
Table 1 breaks down the cluster types identified using the DWF 'J04-55 field' of 23,199 lightcurves, with the distinct clusters containing sources of unchanging magnitudes, or those at detection thresholds or CCD edges. Interestingly, in this field the true astrophysical variable and transient sources could not be clustered and were identified as noise. Figure 4 shows 7 such sources extracted from the grouping of noise.
For the full analysis and the results from the 'Antlia field', including the use of _Astronomaly_ and t-SNE, see Webb S. _et al._,(2020).
## 4 Conclusions
Machine learning has already proven to be extremely powerful in its ability to assist astronomers in discovery, and will only continue its growth into more astronomical use cases. It is always important to note that machine learning isn't a one-size-fits-all solution. It should be considered and applied with a great deal of care, to ensure the problem tackled is solved in an efficient and unbiased manner. For those just beginning to explore the use of artificial intelligence in astronomical work, we highly recommend the use of existing frameworks to evaluate the effectiveness of different methods. For anomaly detection, the _Astronomaly_ package is a flexible framework for use on both imaging and light curve data (Lochner M. and Bassett. B. A.,(2021)). It is undeniable that machine learning will shape the future of astronomy, with several large surveys already relying on intelligent algorithms.
|
2310.13397 | Equivariant Deep Weight Space Alignment | Permutation symmetries of deep networks make basic operations like model
merging and similarity estimation challenging. In many cases, aligning the
weights of the networks, i.e., finding optimal permutations between their
weights, is necessary. Unfortunately, weight alignment is an NP-hard problem.
Prior research has mainly focused on solving relaxed versions of the alignment
problem, leading to either time-consuming methods or sub-optimal solutions. To
accelerate the alignment process and improve its quality, we propose a novel
framework aimed at learning to solve the weight alignment problem, which we
name Deep-Align. To that end, we first prove that weight alignment adheres to
two fundamental symmetries and then, propose a deep architecture that respects
these symmetries. Notably, our framework does not require any labeled data. We
provide a theoretical analysis of our approach and evaluate Deep-Align on
several types of network architectures and learning setups. Our experimental
results indicate that a feed-forward pass with Deep-Align produces better or
equivalent alignments compared to those produced by current optimization
algorithms. Additionally, our alignments can be used as an effective
initialization for other methods, leading to improved solutions with a
significant speedup in convergence. | Aviv Navon, Aviv Shamsian, Ethan Fetaya, Gal Chechik, Nadav Dym, Haggai Maron | 2023-10-20T10:12:06Z | http://arxiv.org/abs/2310.13397v3 | # Equivariant Deep Weight Space Alignment
###### Abstract
Permutation symmetries of deep networks make simple operations like model averaging and similarity estimation challenging. In many cases, aligning the weights of the networks, i.e., finding optimal permutations between their weights, is necessary. More generally, weight alignment is essential for a wide range of applications, from model merging, through exploring the optimization landscape of deep neural networks, to defining meaningful distance functions between neural networks. Unfortunately, weight alignment is an NP-hard problem. Prior research has mainly focused on solving relaxed versions of the alignment problem, leading to either time-consuming methods or sub-optimal solutions. To accelerate the alignment process and improve its quality, we propose a novel framework aimed at learning to solve the weight alignment problem, which we name Deep-Align. To that end, we first demonstrate that weight alignment adheres to two fundamental symmetries and then, propose a deep architecture that respects these symmetries. Notably, our framework does not require any labeled data. We provide a theoretical analysis of our approach and evaluate Deep-Align on several types of network architectures and learning setups. Our experimental results indicate that a feed-forward pass with Deep-Align produces better or equivalent alignments compared to those produced by current optimization algorithms. Additionally, our alignments can be used as an initialization for other methods to gain even better solutions with a significant speedup in convergence.
## 1 Introduction
The space of deep network weights has a complex structure since networks maintain their function under certain permutations of their weights. This fact makes it hard to perform simple operations over deep networks, such as averaging their weights or estimating similarity. It is therefore highly desirable to "align" networks - find optimal permutations between the weight matrices of two networks. Weight Alignment is critical to many tasks that involve weight spaces. One key application is model merging and editing (Ainsworth et al., 2022; Wortsman et al., 2022; Stoica et al., 2023; Ilharco et al., 2022), in which the weights of two or more models are (linearly) combined into a single model to improve their performance or enhance their capabilities. Weight alignment algorithms are also vital to the study of the loss landscape of deep networks (Entezari et al., 2022), a recent research direction that has gained increasing attention. Moreover, weight alignment induces an invariant distance function on the weight space that can be used for clustering and visualization.
Since weight alignment is NP-hard (Ainsworth et al., 2022), current approaches rely primarily on local optimization of the alignment objective which is time-consuming and may lead to suboptimal solutions. Therefore, identifying methods with faster run time and improved alignment quality is an important research objective. A successful implementation of such methods would allow practitioners to perform weight alignment in real-time, for example, when merging models in federated
or continual learning setups, or to perform operations that require computing many alignments in a reasonable time, such as weight space clustering.
Following a large body of works that suggested _learning_ to solve combinatorial optimization problems using deep learning architectures (Khalil et al., 2017; Bengio et al., 2021; Cappart et al., 2021), we propose the first learning-based approach to weight alignment, called Deep-Align. Deep-Align is a neural network with a specialized architecture to predict high-quality weight alignments for a given distribution of data. A major benefit of our approach is that after a model has been trained, predicting the alignment between two networks amounts to a simple feed-forward pass through the network followed by an efficient projection step, as opposed to solving an optimization problem in other methods.
This paper presents a principled approach to designing a deep architecture for the weight alignment problem. We first formulate the weight-alignment problem and prove it adheres to a specific equivariance structure. We then propose a neural architecture that respects this structure, based on newly suggested equivariant architectures for deep-weight spaces (Navon et al., 2023) called Deep Weight Space Networks (DWSNets). The architecture is based on a Siamese application of DWSNets to a pair of input networks, mapping the outputs to a lower dimensional space we call _activation space_, and then using a generalized outer product layer to generate candidates for optimal permutations.
Theoretically, we prove that our architecture can approximate the Activation Matching algorithm Tatro et al. (2020); Ainsworth et al. (2022), which computes the activations of the two networks on some pre-defined input data and aligns their weights by solving a sequence of linear assignment problems. This theoretical analysis suggests that Deep-Align can be seen as a learnable generalization of this algorithm. Furthermore, we show that Deep-Align has a valuable theoretical property called _Exactness_, which guarantees that it always outputs the correct alignment when there is a solution with zero objective.
Obtaining labeled training data is one of the greatest challenges when learning to solve combinatorial optimization problems. To address this challenge, we generate labeled examples on the fly by applying random permutations and noise to our unlabeled data. We then train our network using a combination of supervised and unsupervised loss functions _without_ relying on any labeled examples.
Our experimental results indicate that Deep-Align produces better or comparable alignments relative to those produced by slower optimization-based algorithms, when applied to both MLPs and CNNs. Furthermore, we show that our alignments can be used as an initialization for other methods that result in even better alignments, as well as significant speedups in their convergence. Lastly, we show that our trained networks produce meaningful alignments even when applied to out-of-distribution weight space data.
**Previous work.** Several algorithms have been proposed for weight-alignment (Tatro et al., 2020; Ainsworth et al., 2022; Pena et al., 2023; Akash et al., 2022). Ainsworth et al. (2022) presented three algorithms: Activation Matching, Weight Matching, and straight-through estimation. Pena et al. (2023) improved upon these algorithms by incorporating a Sinkhorn-based projection method. In part, these works were motivated by studying the loss landscapes of deep neural networks. It was conjectured that deep networks exhibit a property called _linear mode connectivity_: for any two trained weight vectors (i.e., a concatenation of all the parameters of neural architecture), a linear interpolation between the first vector and the optimal alignment of the second, yields very small increases in the loss (Entezari et al., 2022; Garipov et al., 2018; Draxler et al., 2018; Freeman and Bruna, 2016; Tatro et al., 2020). Another relevant research direction is the growing area of research that focuses on applying neural networks to neural network weights. Early methods proposed using simple architectures (Unterthiner et al., 2020; Andreis et al., 2023; Eilertsen et al., 2020). Several recent papers exploit the symmetry structure of the weight space in their architectures (Navon et al., 2023; Zhou et al., 2023; Ba et al., 2023; Zhang et al., 2023). A comprehensive survey of relevant previous work can be found in Appendix A.
## 2 Preliminaries
**Equivariance** Let \(G\) be a group acting on \(\mathcal{V}\) and \(\mathcal{W}\). We say that a function \(L:\mathcal{V}\rightarrow\mathcal{W}\) is _equivariant_ if \(L(\rho_{1}(g)v)=\rho_{2}(g)L(v)\) for all \(v\in\mathcal{V},g\in G\).
**MultiLayer Perceptrons and weight spaces.** The following definitions follow the notation in Navon et al. (2023). An \(M\)-layer MultiLayer Perceptron (MLP) \(f_{v}\) is a parametric function of the following form:
\[f(x)=x_{M},\quad x_{m+1}=\sigma(W_{m+1}x_{m}+b_{m+1}),\quad x_{0}=x \tag{1}\]
Here, \(x_{m}\in\mathbb{R}^{d_{m}}\), \(W_{m}\in\mathbb{R}^{d_{m}\times d_{m-1}}\), \(b_{m}\in\mathbb{R}^{d_{m}}\), and \(\sigma\) is a pointwise activation function. Denote by \(v=[W_{m},b_{m}]_{m\in[M]}\) the concatenation of all (vectorized) weight matrices and bias vectors. We define the _weight-space_ of an \(M\)-layer MLP as: \(\mathcal{V}=\bigoplus_{m=1}^{M}(\mathcal{W}_{m}\oplus\mathcal{B}_{m})\), where \(\mathcal{W}_{m}:=\mathbb{R}^{d_{m}\times d_{m-1}}\), \(\mathcal{B}_{m}:=\mathbb{R}^{d_{m}}\), and \(\bigoplus\) denotes the direct sum of vector spaces. A vector in this space represents all the learnable parameters of an MLP. We define the _activation space_ of an MLP as \(\mathcal{A}=\bigoplus_{m=1}^{M}\mathbb{R}^{d_{m}}:=\bigoplus_{m=1}^{M} \mathcal{A}_{m}\). The activation space, as its name implies, represents the concatenation of network activations at all layers. In other words, \(\mathcal{A}_{m}\) is the space in which \(x_{m}\) resides.
**Symmetries of weight spaces.** The permutation symmetries of the weight space are a result of the equivariance of pointwise activations: for every permutation matrix \(P\) we have that \(P\sigma(x)=\sigma(Px)\). Thus for example, a shallow network defined by weight matrices \(W_{1},W_{2}\) will represent the same function as the network defined by \(PW_{1},W_{2}P^{T}\), since the permutations cancel each other. The same idea can be used to identify permutation symmetries of general MLPs of depth \(M\). In this case, the weight space's symmetry group is the direct product of symmetric groups for each intermediate dimension \(m\in[1,M-1]\) namely, \(S_{d_{1}}\times\cdots\times S_{d_{M-1}}\). For clarity, we formally define the symmetry group as a product of matrix groups: \(G=\Pi_{d_{1}}\times\cdots\times\Pi_{d_{M-1}}\), where \(\Pi_{d}\) is the group of \(d\times d\) permutation matrices (which is isomorphic to \(S_{d}\)). For \(v\in\mathcal{V},v=[W_{m},b_{m}]_{m\in[M]}\), a group element \(g=(P_{1},\ldots,P_{M-1})\) acts on \(v\) via a group action \(v^{\prime}=g_{\#}(v)\), where \(v^{\prime}=[W_{m}^{\prime},b_{m}^{\prime}]_{m\in[M]}\) is defined by:
\[W_{1}^{\prime} =P_{1}W_{1},\ W_{M}^{\prime}=W_{M}P_{M-1}^{T},\text{ and }W_{m}^{\prime}=P_{m}W_{m}P_{m-1}^{T},\ \forall m\in[2,M-1]\] \[b_{1}^{\prime} =P_{1}b_{1},\ b_{M}^{\prime}=b_{M},\text{ and }b_{m}^{\prime}=P_{m}b_{m},\ \forall m\in[2,M-1].\]
By construction, \(v\) and \(v^{\prime}=g_{\#}(v)\) define the same function \(f_{v}=f_{v^{\prime}}\). The group product \(g\cdot g^{\prime}\) and group inverse \(g^{-1}=g^{T}\) are naturally defined as the elementwise matrix product and transpose operations \(g\cdot g^{\prime}=(P_{1}P_{1}^{\prime},\ldots,P_{M}P_{M}^{\prime}),\quad g^{T} =(P_{1}^{T},\ldots,P_{m}^{T})\). Note that the elementwise product and transpose operations are well defined even if the \(P_{m}\) and \(P_{m}^{\prime}\) matrices are not permutations.
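The following NumPy sketch verifies this symmetry numerically for a small ReLU MLP: it samples a group element \(g=(P_{1},P_{2})\), applies the action \(g_{\#}\) defined above, and checks that the network output is unchanged. The layer sizes and the ReLU choice are arbitrary assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(Ws, bs, x):
    """MLP of equation 1 with a pointwise ReLU activation."""
    for W, b in zip(Ws, bs):
        x = np.maximum(W @ x + b, 0.0)
    return x

dims = [4, 5, 6, 3]                                    # d0 -> d1 -> d2 -> d3 (M = 3)
Ws = [rng.normal(size=(dims[m + 1], dims[m])) for m in range(3)]
bs = [rng.normal(size=dims[m + 1]) for m in range(3)]

# Sample g = (P1, P2) acting on the intermediate dimensions d1, d2.
P1, P2 = (np.eye(d)[rng.permutation(d)] for d in dims[1:-1])
Ws_g = [P1 @ Ws[0], P2 @ Ws[1] @ P1.T, Ws[2] @ P2.T]
bs_g = [P1 @ bs[0], P2 @ bs[1], bs[2]]

x = rng.normal(size=dims[0])
assert np.allclose(forward(Ws, bs, x), forward(Ws_g, bs_g, x))    # f_v == f_{g#(v)}
```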
## 3 The weight alignment problem and its symmetries
**The weight alignment problem.** Given an MLP architecture as in equation 1 and two weight-space vectors \(v,v^{\prime}\in\mathcal{V}\), where \(v=[W_{m},b_{m}]_{m\in[M]},v^{\prime}=[W_{m}^{\prime},b_{m}^{\prime}]_{m\in[M]}\), the weight alignment problem is defined as the following optimization problem:
\[\mathcal{G}(v,v^{\prime})=\text{argmin}_{k\in G}\|v-k_{\#}v^{\prime}\|_{2}^{2} \tag{2}\]
In other words, the problem seeks a sequence of permutations \(k=(P_{1},\ldots,P_{M-1})\) that will make \(v^{\prime}\) as close as possible to \(v\). The optimization problem in equation 2 always admits a minimizer since \(G\) is finite. For some \((v,v^{\prime})\) it may have several minimizers, in which case \(\mathcal{G}(v,v^{\prime})\) is a set of elements. To simplify our discussion we will sometimes consider the domain of \(\mathcal{G}\) to be only the set \(\mathcal{V}_{\text{unique}}^{2}\) of pairs \((v,v^{\prime})\) for which a unique minimizer exists. On this domain we can consider \(\mathcal{G}\) as a function to the unique minimizer in \(G\), that is \(\mathcal{G}:\mathcal{V}_{\text{unique}}^{2}\to G\).
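As a concrete (and intentionally naive) baseline, the sketch below solves equation 2 exactly for an MLP with a single hidden layer by enumerating all \(d_{1}!\) permutations; this is only feasible for tiny widths, which is precisely why relaxed or learned approaches are needed. The tuple layout of the weights is an assumption made for illustration.

```python
import itertools
import numpy as np

def brute_force_alignment(v, v_prime):
    """Solve equation 2 exactly for a 2-layer MLP, i.e. a single hidden dimension to permute."""
    (W1, b1, W2, b2), (W1p, b1p, W2p, b2p) = v, v_prime
    d1 = W1.shape[0]
    best_P, best_cost = None, np.inf
    for perm in itertools.permutations(range(d1)):     # d1! candidates -- only viable for tiny d1
        P = np.eye(d1)[list(perm)]
        cost = (np.sum((W1 - P @ W1p) ** 2) + np.sum((b1 - P @ b1p) ** 2)
                + np.sum((W2 - W2p @ P.T) ** 2) + np.sum((b2 - b2p) ** 2))
        if cost < best_cost:
            best_P, best_cost = P, cost
    return best_P, best_cost
```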
Our goal in this paper is to devise an architecture that can learn the function \(\mathcal{G}\). As a guiding principle for devising this architecture, we would like this function to be equivariant to the symmetries of \(\mathcal{G}\). We describe these symmetries in the following subsection.
**The symmetries of \(\mathcal{G}\).** One important property of the function \(\mathcal{G}\) is that it is equivariant to the action of the group \(H=G\times G\) which consists of two independent copies of the permutation symmetry group for the MLP architecture we consider. Here, the action of \(h=(g,g^{\prime})\in H\) on the input space \(\mathcal{V}\times\mathcal{V}\) is simply \((v,v^{\prime})\mapsto(g_{\#}v,g^{\prime}_{\#}v^{\prime})\), and the action of \(h=(g,g^{\prime})\in H\) on an element \(k\in G\) in the output space is given by \(g\cdot k\cdot g^{\prime T}\). This equivariance property is summarized and proved in the proposition below and visualized using the commutative diagram in Figure 1: applying \(\mathcal{G}\) and then \((g,g^{\prime})\) results in exactly the same output as applying \((g,g^{\prime})\) and then \(\mathcal{G}\).
**proposition 1**.: _The map \(\mathcal{G}\) is \(H\)-equivariant, namely, for all \((v,v^{\prime})\in\mathcal{V}^{2}_{\text{unique}}\) and \((g,g^{\prime})\in H\),_
\[\mathcal{G}(g_{\#}v,g^{\prime}_{\#}v^{\prime})=g\cdot\mathcal{G}(v,v^{\prime}) \cdot g^{\prime T}\]
The function \(\mathcal{G}\) exhibits another interesting property: swapping the order of the inputs \(v,v^{\prime}\) corresponds to inverting the optimal alignment \(\mathcal{G}(v,v^{\prime})\) :
**proposition 2**.: _Let \((v,v^{\prime})\in\mathcal{V}^{2}_{\text{unique}}\) then \(\mathcal{G}(v^{\prime},v)=\mathcal{G}(v,v^{\prime})^{T}\)._
We conclude by noting that, although for simplicity we focused on the case where \((v,v^{\prime})\in\mathcal{V}^{2}_{\text{unique}}\), we can also state analogous claims for the general case where multiple minimizers are possible. In this case we will have that the equalities \(g\cdot\mathcal{G}(v,v^{\prime})\cdot g^{\prime T}=\mathcal{G}(gv,g^{\prime}v^{\prime})\) and \(\mathcal{G}(v,v^{\prime})^{T}=\mathcal{G}(v^{\prime},v)\) still hold as equalities between subsets of \(G\).
**Extension to other optimization objectives.** In Appendix B we show that the equivariant structure of the function \(\mathcal{G}\) occurs not only for the objective in equation 2, but also when the objective \(\|v-k_{\#}v^{\prime}\|_{2}^{2}\) is replaced with any scalar function \(E(v,k_{\#}v^{\prime})\) that satisfies the following properties: (1) \(E\) is invariant to the action of \(G\) on both inputs; and (2) \(E\) is invariant to swapping its arguments.
## 4 Deep-Align
### Architecture
Here, we define a neural network architecture \(F=F(v,v^{\prime};\theta)\) for learning the weight-alignment problem. The output of \(F\) will be a sequence of square matrices \((P_{1},\dots,P_{M-1})\) that represents a (sometimes approximate) group element in \(G\). In order to provide an effective inductive bias, we will ensure that our architecture satisfies both Propositions 1 and 2, namely \(F(g_{\#}v,g^{\prime}_{\#}v^{\prime})=g\cdot F(v,v^{\prime})\cdot g^{\prime T}\) and \(F(v,v^{\prime})=F(v^{\prime},v)^{T}\). The architecture we propose is composed of four functions:
\[F=F_{proj}\circ F_{prod}\circ F_{\mathcal{V}\rightarrow\mathcal{A}}\circ F_{ DWS}:\mathcal{V}\times\mathcal{V}^{\prime}\rightarrow\bigoplus_{m=1}^{M-1} \mathbb{R}^{d_{m}\times d_{m}},\]
where the equivariance properties we require are guaranteed by constructing each of the four functions composing \(F\) to be equivariant with respect to an appropriate action of \(H=G\times G\) and the
Figure 1: The equivariance structure of the alignment problem. The function \(\mathcal{G}\) takes as input two weight space vectors \(v,v^{\prime}\) and outputs a sequence of permutation matrices that aligns them, denoted \(\mathcal{G}(v,v^{\prime})\). In case we reorder the input using \((g,g^{\prime})\) where \(g=(P_{1},P_{2}),g^{\prime}=(P_{1}^{\prime},P_{2}^{\prime})\), the optimal alignment undergoes a transformation, namely \(\mathcal{G}(g_{\#}v,g^{\prime}_{\#}v^{\prime})=g\cdot\mathcal{G}(v,v^{\prime})\cdot g^{\prime T}\).
transposition action \((v,v^{\prime})\mapsto(v^{\prime},v)\). In general terms, we choose \(F_{DWS}\) to be a siamese weight space encoder, \(F_{\mathcal{V}\rightarrow\mathcal{A}}\) is a siamese function that maps the weight space to the activation space, \(F_{prod}\) is a function that performs (generalized) outer products between corresponding activation spaces in both networks and \(F_{proj}\) performs a projection of the resulting square matrices on the set of doubly stochastic matrices (the convex hull of permutation matrices). The architecture is illustrated in Figure 2. We now describe our architecture in more detail:
**Weight space encoder.**\(F_{DWS}:\mathcal{V}\times\mathcal{V}^{\prime}\rightarrow\mathcal{V}^{d}\times \mathcal{V}^{\prime d}\), where \(d\) represents the number of feature channels, is implemented as a Siamese DWSNet (Navon et al., 2023). This function outputs two weight-space embeddings in \(\mathcal{V}^{d}\), namely, \(F_{DWS}(v,v^{\prime})=(\mathcal{E}(v),\mathcal{E}(v^{\prime}))\), for a DWS network \(\mathcal{E}\). The Siamese structure of the network guarantees equivariance to transposition, while the \(G\)-equivariance of DWSNet implies equivariance with respect to the action of \(G\times G\), that is \((\mathcal{E}(g_{\#}v),\mathcal{E}(g^{\prime}_{\#}v^{\prime}))=(g_{\#} \mathcal{E}(v),g^{\prime}_{\#}\mathcal{E}(v^{\prime}))\).
**Mapping the weight space to the activation space.** The function \(F_{\mathcal{V}\rightarrow\mathcal{A}}:\mathcal{V}^{d}\times\mathcal{V}^{ \prime d}\rightarrow\mathcal{A}^{d}\times\mathcal{A}^{\prime d}\) maps the weight spaces \(\mathcal{V}^{d},\mathcal{V}^{\prime d}\) to the corresponding Activation Spaces (see preliminaries section). There are several ways to implement \(F_{\mathcal{V}\rightarrow\mathcal{A}}\). As the bias space, \(\mathcal{B}=\bigoplus_{m=1}^{M}\mathcal{B}_{m}\), and the activation space have a natural correspondence between them, perhaps the simplest way, which we use in this paper, is to map a weight space vector \(v=(w,b)\in\mathcal{V}^{d}\) to its bias component \(b\in\mathcal{B}^{d}\). This operation is again equivariant to transposition and the action of \(G\times G\), where the action of \(G\times G\) on the input space is the more complicated action (by \((g_{\#},g^{\prime}_{\#})\)) on \(\mathcal{V}\times\mathcal{V}\) and the action on the output space is the simpler action of \(G\times G\) on the activation spaces.
**Generalized outer product.**\(F_{prod}:\mathcal{A}^{d}\times\mathcal{A}^{\prime d}\rightarrow\bigoplus_{m=1 }^{M}\mathbb{R}^{d_{m}\times d_{m}}\) is a function that takes the activation space features and performs a _generalized outer product_ operation as defined below:
\[F_{prod}(a,a^{\prime})_{m,i,j}=\phi([a_{m,i},a^{\prime}_{m,j}])\]
where the subscripts \(m,i,j\) represent the \((i,j)\)-th entry of the \(m\)-th matrix, and \(a_{m,i},a^{\prime}_{m,j}\in\mathbb{R}^{d}\) are the rows of \(a,a^{\prime}\). Here, the function \(\phi\) is a general (parametric or nonparametric) symmetric function in the sense that \(\phi(a,b)=\phi(b,a)\). In this paper, we use \(\phi(a,b)=s^{2}(a/\|a\|_{2},b/\|b\|_{2})\) where \(s\) is a trainable scalar scaling factor. The equivariance with respect to the action of \(G\times G\) and transposition is guaranteed by the fact that \(\phi\) is applied elementwise, and is symmetric, respectively.
**Projection layer.** The output of \(F_{prod}\) is a sequence of matrices \(Q_{1},\ldots,Q_{M-1}\) which in general will not be permutation matrices. To bring the outputs closer to permutation matrices, \(F_{proj}\) implements an approximate projection onto the convex hull of the permutation matrices, i.e., the space of doubly stochastic matrices. In this paper, we use two different projection operations, depending
Figure 2: Our architecture is a composition of four blocks: The first block, \(F_{DWS}\) generates weight space embedding for both inputs. The second block \(F_{\mathcal{V}\rightarrow\mathcal{A}}\) maps these to the activation spaces. The third block, \(F_{prod}\), generates square matrices by applying an outer product between the activation vector of one network to the activation vectors of the other network. Lastly, the fourth block, \(F_{Proj}\) projects these square matrices on the (convex hull of) permutation matrices.
on whether the network is in training or inference mode. At training time, to ensure differentiability, we implement \(F_{proj}\) as an approximation of a matrix-wise projection of \(Q_{m}\) to the space of doubly stochastic matrices using several iterations of the well-known Sinkhorn projection (Mena et al., 2018; Sinkhorn, 1967). Since the set of doubly stochastic matrices is closed under the action of \(G\times G\) on the output space, and under matrix transposition, and since the Sinkhorn iterations are composed of elementwise, row-wise, or column-wise operations, we see that this operation is equivariant as well. At inference time, we obtain permutation matrices from \(Q_{i}\) by finding the permutation matrix \(P_{i}\) which has the highest correlation with \(Q_{i}\), that is \(P_{i}=\arg\max_{P\in\Pi_{d_{i}}}\langle Q_{i},P\rangle,\) where the inner product is the standard Frobenius inner product. This optimization problem, known as the _linear assignment problem_, can be solved using the Hungarian algorithm.
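A minimal sketch of the two projection modes is given below: a Sinkhorn-style soft projection for training and a Hungarian-based hard rounding for inference (via `scipy.optimize.linear_sum_assignment`). The number of iterations and the temperature are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sinkhorn_project(Q, n_iters=20, tau=1.0):
    """Soft, differentiable projection of a square score matrix towards a doubly stochastic matrix."""
    M = np.exp(Q / tau)
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)   # row normalization
        M /= M.sum(axis=0, keepdims=True)   # column normalization
    return M

def hungarian_round(Q):
    """Hard projection used at inference: argmax_P <Q, P> over permutation matrices."""
    rows, cols = linear_sum_assignment(-Q)  # negate to maximize the Frobenius inner product
    P = np.zeros_like(Q)
    P[rows, cols] = 1.0
    return P
```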
As we carefully designed the components of \(F\) so that they are all equivariant to transposition and the action of \(G\times G\), we obtain the following proposition:
**proposition 3**.: _The architecture \(F\) satisfies the conditions specified in 1,2, namely for all \((v,v^{\prime})\in\mathcal{V}\times\mathcal{V}\) and \((g,g^{\prime})\in H\) we have: \(F(g_{\#}v,g_{\#}^{\prime}v^{\prime})=g\cdot F(v,v^{\prime})\cdot g^{\prime T}\) and \(F(v,v^{\prime})=F(v^{\prime},v)^{T}\)._
### Data generation and Loss functions
Generating labeled data for the weight-alignment problem is hard due to the intractability of the problem. Therefore, we propose a combination of both unsupervised and supervised loss functions where we generate labeled examples synthetically from unlabeled examples, as specified below.
**Data generation.** Our initial training data consists of a finite set of weight space vectors \(D\subset\mathcal{V}\). From that set, we generate two datasets consisting of pairs of weights for the alignment problem. First, we generate a labeled training set, \(D_{\text{labeled}}=\{(v^{j},v^{\prime j},t^{j})\}_{j=1}^{N_{\text{labeled}}}\) for \(t^{j}=(T_{1}^{j},\dots,T_{M-1}^{j})\in G\). This is done by sampling \(v^{j}\in D\) and defining \(v^{\prime j}\) as a permuted and noisy version of \(v^{j}\). More formally, we sample a sequence of permutations \(t\in G\) and define \(v^{\prime j}=t_{\#}f_{\text{aug}}(v^{j})\), where \(f_{\text{aug}}\) applies several weight-space augmentations, like adding binary and Gaussian noise, scaling augmentations for ReLU networks, etc. We then set the label of this pair to be \(t\). In addition, we define an unlabeled dataset \(D_{\text{unlabeled}}=\{(v^{j},v^{\prime j})\}_{j=1}^{N_{\text{unlabeled}}}\) where \(v^{j},v^{\prime j}\in\mathcal{V}\).
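The sketch below illustrates how one labeled pair could be generated from a single weight vector: apply a simple augmentation (only Gaussian noise is shown), sample a random permutation sequence \(t\), and apply the group action layer by layer. The data layout and noise level are assumptions; the full augmentation set described above is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_labeled_pair(v, noise_std=0.01):
    """Return (v, v', t) with v' = t_#(f_aug(v)); v is a list of (W, b) per layer.

    Only Gaussian-noise augmentation is sketched; the permutations t act on hidden layers only.
    """
    aug = [(W + noise_std * rng.normal(size=W.shape),
            b + noise_std * rng.normal(size=b.shape)) for W, b in v]
    t = [np.eye(W.shape[0])[rng.permutation(W.shape[0])] for W, _ in aug[:-1]]
    v_prime, P_prev = [], None
    for m, (W, b) in enumerate(aug):
        P = t[m] if m < len(t) else None          # no permutation of the output layer
        Wp = W if P is None else P @ W
        Wp = Wp if P_prev is None else Wp @ P_prev.T
        bp = b if P is None else P @ b
        v_prime.append((Wp, bp))
        P_prev = P
    return v, v_prime, t
```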
**Loss functions.** The datasets above are used for training our architecture using the following loss functions. The labeled training examples in \(D_{\text{labeled}}\) are used by applying a cross-entropy loss for each row \(i=1,\dots,d_{m}\) in each output matrix \(m=1,\dots,M-1\). This loss is denoted as \(\ell_{\text{supervised}}(F(v,v^{\prime};\theta),t)\). The unlabeled training examples are used in combination with two unsupervised loss functions. The first loss function aims to minimize the alignment loss in equation 2 directly by using the network output \(F(v,v^{\prime};\theta)\) as the permutation sequence. This loss is denoted as \(\ell_{\text{alignment}}(v,v^{\prime},\theta)=\|v-F(v,v^{\prime};\theta)_{\#}v ^{\prime}\|_{2}^{2}\). The second unsupervised loss function aims to minimize the original loss function used to train the input networks on a line segment connecting the weights \(v\) and the transformed version of \(v^{\prime}\) using the network output \(F(v,v^{\prime};\theta)\) as the permutation sequence. Concretely, let \(\mathcal{L}\) denote the original loss function for the weight vectors \(v,v^{\prime}\), the loss is defined as \(\ell_{\text{LMC}}(v,v^{\prime},\theta)=\mathcal{L}(\lambda v+(1-\lambda)F(v,v^ {\prime};\theta)_{\#}v^{\prime})\) for \(\lambda\) sampled uniformly \(\lambda\sim U(0,1)\)1. This loss is similar to the STE method in Ainsworth et al. (2022) and the differentiable version in Pena et al. (2023). Our final goal is to minimize the parameters of \(F\) with respect to a linear (positive) combination of \(\ell_{\text{alignment}},\ell_{\text{LMC}}\) and \(\ell_{\text{supervised}}\) applied to the appropriate datasets described above.
Footnote 1: This loss function satisfies the properties as described in Section 3 when taking expectation over \(\lambda\).
## 5 Theoretical analysis
**Relation to the activation matching algorithm.** In this subsection, we prove that our proposed architecture can simulate the activation matching algorithm, a heuristic for solving the weight alignment problem suggested in Ainsworth et al. (2022). In a nutshell, this algorithm works by evaluating two neural networks on a set of inputs and finding permutations that align their activations by solving a linear assignment problem using the outer product matrix of the activations as a cost matrix for every layer \(m=1,\dots,M-1\).
**proposition 4**.: (Deep-Align _can simulate activation matching) For any compact set \(K\subset\mathcal{V}\) and \(x_{1},\dots,x_{N}\in\mathbb{R}^{d_{0}}\), there exists an instance of our architecture \(F\) and weights \(\theta\) such that for any
\(v,v^{\prime}\in K\) for which the activation matching algorithm has a single optimal solution \(g\in G\) and another minor assumption specified in the appendix, \(F(v,v^{\prime};\theta)\) returns \(g\)._
This result offers an interesting interpretation of our architecture: the architecture can simulate activation matching while optimizing the input vectors \(x_{1},\ldots,x_{N}\) as a part of their weights \(\theta\).
**Exactness.** We now discuss the _exactness_ of our algorithms. An alignment algorithm is said to be exact on some input \((v,v^{\prime})\) if it can be proven to successfully return the correct minimizer \(\mathcal{G}(v,v^{\prime})\). For NP-hard alignment problems such as weight alignment, exactness can typically be obtained when restricting it to 'tame' inputs \((v,v^{\prime})\). Examples of exactness results in the alignment literature can be found in Aflalo et al. (2015); Dym and Lipman (2017); Dym (2018). The following proposition shows that (up to probability zero events) when \(v,v^{\prime}\) are exactly related by some \(g\in G\), our algorithm will retrieve \(g\)_exactly_:
**proposition 5** (Deep-Align is exact for perfect alignments).: _Let \(F\) denote the Deep-Align architecture with non-constant analytic activations and \(d\geq 2\) channels. Then, for Lebesgue almost every \(v\in\mathcal{V}\) and parameter vector \(\theta\), and for every \(g\in G\), we have that \(F(v,g_{\#}v,\theta)=g\)._
## 6 Experiments
In this section, we evaluate Deep-Align on the task of aligning and merging neural networks. To support future research and the reproducibility of the results, we will make our source code and datasets publicly available upon publication.
**Evaluation metrics.** We use the standard evaluation metrics for measuring model merging (Ainsworth et al., 2022; Pena et al., 2023): Barrier and Area Under the Curve (AUC). For two inputs \(v,v^{\prime}\) the Barrier is defined by \(\max_{\lambda\in[0,1]}\psi(\lambda)\equiv\mathcal{L}(\lambda v+(1-\lambda)v^ {\prime})-(\lambda\mathcal{L}(v)+(1-\lambda)\mathcal{L}(v^{\prime}))\) where \(\mathcal{L}\) denote the loss function on the original task. Similarly, the AUC is defined as the integral of \(\psi\) over \([0,1]\). Lower is better for both metrics. Following previous works (Ainsworth et al., 2022; Pena et al., 2023), we bound both metrics by taking the maximum between their value and zero.
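For reference, the following sketch computes both metrics for flattened weight vectors given a task-loss callable; the number of interpolation points and the use of trapezoidal integration for the AUC are assumptions made for illustration.

```python
import numpy as np

def barrier_and_auc(loss_fn, v, v_prime, n_points=25):
    """Barrier and AUC of psi(lambda) for flattened weight vectors v, v' and a task-loss callable."""
    lam = np.linspace(0.0, 1.0, n_points)
    base = lam * loss_fn(v) + (1.0 - lam) * loss_fn(v_prime)
    interp = np.array([loss_fn(l * v + (1.0 - l) * v_prime) for l in lam])
    psi = interp - base
    return max(psi.max(), 0.0), max(np.trapz(psi, lam), 0.0)
```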
**Compared methods.** We compare the following approaches: (1) _Naive_: where two models are merged by averaging the models' weights without alignment. The (2) _Weight matching_ and (3) _Activation matching_ approaches proposed in Ainsworth et al. (2022). (4) _Sinkhorn_(Pena et al., 2023): This approach directly optimizes the permutation matrices using the task loss on the line segment between the aligned models (denoted \(\mathcal{C}_{Rnd}\) in Pena et al. (2023)). (5) Deep-Align: Our
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{MNIST (MLP)} & \multicolumn{2}{c}{CIFAR10 (MLP)} \\ \cline{2-5} & Barrier \(\downarrow\) & AUC \(\downarrow\) & Barrier \(\downarrow\) & AUC \(\downarrow\) \\ \hline Naive & \(2.007\pm 0.00\) & \(0.835\pm 0.00\) & \(0.927\pm 0.00\) & \(0.493\pm 0.00\) \\ \hline Weight Matching & \(0.047\pm 0.00\) & \(0.011\pm 0.00\) & \(0.156\pm 0.00\) & \(0.068\pm 0.00\) \\ Activation Matching & \(0.024\pm 0.00\) & \(0.007\pm 0.00\) & \(0.066\pm 0.00\) & \(0.024\pm 0.00\) \\ Sinkhorn & \(0.027\pm 0.00\) & \(0.002\pm 0.00\) & \(0.183\pm 0.00\) & \(0.072\pm 0.00\) \\ \hline Deep-Align & \(0.005\pm 0.00\) & \(\mathbf{0.000\pm 0.00}\) & \(0.078\pm 0.01\) & \(0.029\pm 0.00\) \\ Deep-Align + Sinkhorn & \(\mathbf{0.000\pm 0.00}\) & \(\mathbf{0.000\pm 0.00}\) & \(0.037\pm 0.00\) & \(\mathbf{0.004\pm 0.00}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: _MLP image classifiers_: Results on aligning MNIST and CIFAR10 MLP image classifiers.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{CIFAR10 (CNN)} & \multicolumn{2}{c}{STL10 (CNN)} & \multicolumn{2}{c}{Runtime (Sec \(\downarrow\))} \\ \cline{2-5} & Barrier \(\downarrow\) & AUC \(\downarrow\) & Barrier \(\downarrow\) & AUC \(\downarrow\) & \\ \hline Naive & \(1.124\pm 0.01\) & \(0.524\pm 0.00\) & \(1.006\pm 0.00\) & \(0.650\pm 0.00\) & — \\ \hline Weight Matching & \(0.661\pm 0.02\) & \(0.178\pm 0.01\) & \(0.859\pm 0.00\) & \(0.453\pm 0.00\) & \(0.21\) \\ Activation Matching & \(0.238\pm 0.01\) & \(\mathbf{0.000\pm 0.00}\) & \(0.479\pm 0.00\) & \(0.250\pm 0.00\) & \(7.52\) \\ Sinkhorn & \(0.313\pm 0.01\) & \(\mathbf{0.000\pm 0.00}\) & \(0.366\pm 0.00\) & \(0.163\pm 0.00\) & \(79.81\) \\ \hline Deep-Align & \(0.237\pm 0.01\) & \(\mathbf{0.000\pm 0.00}\) & \(0.382\pm 0.01\) & \(0.182\pm 0.00\) & \(0.44\) \\ Deep-Align + Sinkhorn & \(\mathbf{0.081\pm 0.00}\) & \(\mathbf{0.000\pm 0.00}\) & \(\mathbf{0.232\pm 0.00}\) & \(\mathbf{0.097\pm 0.00}\) & \(80.25\pm 0.44+79.81\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: _CNN image classifiers_: Results on aligning CIFAR10 and STL10 CNN image classifiers.
proposed method described in Section 4. (6) Deep-Align + _Sinkhorn_: Here, the output of Deep-Align is used as an initialization for the _Sinkhorn_ method.
**Experimental details.** Our method is first trained on a dataset of weight vectors and then applied to unseen weight vectors at test time, as is standard in learning setups. In contrast, baseline methods are directly optimized using the test networks. For the _Sinkhorn_ and Deep-Align + _Sinkhorn_ methods, we optimize the permutations for 1000 iterations. For the _Activation Matching_ method, we calculate the activations using the entire train dataset. We repeat all experiments using 3 random seeds and report each metric's mean and standard deviation. For full experimental details see Appendix E.
### Results
**Aligning classifiers.** Here, we evaluate our method on the task of aligning image classifiers. We use four network datasets. Two datasets consist of MLP classifiers for MNIST and CIFAR10, and two datasets consist of CNN classifiers trained using CIFAR10 and STL10. This collection forms a diverse benchmark for aligning NN classifiers. The results are presented in Figure 3, Table 1 and Table 2. The alignment produced through a feed-forward pass with Deep-Align performs on par with or outperforms all baseline methods. Initializing the Sinkhorn algorithm with our alignment (Deep-Align + Sinkhorn) further improves the results, and significantly outperforms all other methods.
**Aligning INRs.** We use two datasets consisting of implicit neural representations (INRs). The first consists of sine-wave INRs of the form \(f(x)=\sin(ax)\) on \([-\pi,\pi]\), where \(a\sim U(0.5,10)\), similarly to the data used in Navon et al. (2023). We fit two views (independently trained weight vectors) for each value of \(a\), starting from different random initializations, and the task is to align and merge the two INRs. We train our network to align pairs of corresponding views. The second dataset consists of INRs fitted to CIFAR10 images. We fit five views per image. The results are presented in Figure 4. Deep-Align performs on par with or outperforms all baseline methods. Moreover, using
Figure 3: _Merging image classifiers_: the plots illustrate the values of the loss function used for training the input networks when evaluated on a line segment connecting \(v\) and \(g_{\#}v^{\prime}\), where \(g\) is the output of each method. Values are averaged over all test images and networks and 3 random seeds.
the output from the Deep-Align to initialize the Sinkhorn algorithm further improves this result, with a large improvement over the Sinkhorn baseline with random initialization.
**Generalization to out-of-distribution data (OOD).** Here, we evaluate the generalization capabilities of Deep-Align under distribution shift. We use the Deep-Align model trained on CIFAR10 CNN image classifiers and evaluate the generalization on two datasets. The first dataset consists of CNN classifiers trained on a version of CIFAR10 in which each image is rotated by a rotation degree sampled uniformly from \(U(-45,45)\). The second dataset consists of CNN image classifiers trained on the STL10 dataset. Importantly, we note that Deep-Align is evaluated on a distribution of models that is different from the one observed during training. In contrast, the baselines directly solve an optimization problem for each model pair within the test datasets. While Deep-Align significantly outperforms the Naive and WM baselines, it falls short in comparison to the Sinkhorn and AM methods, both of which are directly optimized using data from the new domain (NNs and images). Employing Deep-Align as an initialization for the Sinkhorn method consistently proves beneficial, with the Deep-Align + Sinkhorn approach yielding the most favorable results in terms of the barrier metric.
**Time comparison.** Prior methods for weight matching, which rely on optimization, often suffer from excessive runtime, which may be impractical for real-time applications. In contrast, once trained, Deep-Align is able to produce high-quality weight alignments through a single forward pass and an efficient projection step. Here, we compare Deep-Align to the baselines by measuring the time required to align a pair of models in the CIFAR10 CNN classifiers dataset, and report the averaged alignment time using 1000 random pairs on a single RTX 2080-Ti Nvidia GPU. The results are presented in Table 2. Deep-Align is significantly faster than Sinkhorn and Activation Matching while achieving comparable results. Furthermore, Deep-Align is on par with Weight Matching w.r.t. runtime, yet it consistently generates better weight alignment solutions.
Figure 4: _Aligning INRs_: The test barrier vs. the number of Sinkhorn iterations (relevant only for _Sinkhorn_ or Deep-Align + _Sinkhorn_), using (a) sine wave and (b) CIFAR10 INRs. Deep-Align outperforms baseline methods or achieves on-par results. Our Deep-Align + Sinkhorn further improves this result and significantly improves over Sinkhorn with random initialization, both in terms of test barrier and convergence speed.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Rotated CIFAR10} & \multicolumn{2}{c}{STL10} \\ \cline{2-5} & Barrier \(\downarrow\) & AUC \(\downarrow\) & Barrier \(\downarrow\) & AUC \(\downarrow\) \\ \hline Naive & \(1.077\pm 0.01\) & \(0.714\pm 0.00\) & \(1.006\pm 0.00\) & \(0.650\pm 0.00\) \\ \hline Weight Matching & \(0.945\pm 0.02\) & \(0.550\pm 0.01\) & \(0.859\pm 0.00\) & \(0.453\pm 0.00\) \\ Activation Matching & \(0.586\pm 0.00\) & \(0.336\pm 0.00\) & \(0.479\pm 0.00\) & \(0.250\pm 0.00\) \\ Sinkhorn & \(0.596\pm 0.01\) & \(0.321\pm 0.00\) & \(0.366\pm 0.00\) & \(\mathbf{0.163\pm 0.00}\) \\ \hline Deep-Align & \(0.769\pm 0.01\) & \(0.453\pm 0.00\) & \(0.686\pm 0.01\) & \(0.373\pm 0.01\) \\ Deep-Align + Sinkhorn & \(\mathbf{0.430\pm 0.01}\) & \(\mathbf{0.245\pm 0.00}\) & \(\mathbf{0.357\pm 0.00}\) & \(0.165\pm 0.00\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Aligning OOD image classifiers, using a Deep-Align network trained on CIFAR10.
**Aligning networks from disjoint datasets.** Following Ainsworth et al. (2022), we experiment with aligning networks trained on disjoint datasets. One major motivation for such a setup is federated learning (McMahan et al., 2017). In federated learning, the goal is to construct a unified model from multiple networks trained on separate and distinct datasets.
To that end, we split the CIFAR10 dataset into two splits. The first consists of \(95\%\) images from classes \(0\)-\(4\) and \(5\%\) from classes \(5\)-\(9\), and the second split is constructed accordingly with \(95\%\) from classes \(5\)-\(9\). We train the Deep-Align model to align CNN networks trained using the different datasets. For Sinkhorn and Activation Matching, we assume full access to the training data in the optimization stage. For Deep-Align, we assume this data is accessible in the training phase. The results are presented in Figure 5. Our approach, Deep-Align, along with the Sinkhorn and Activation Matching approaches, is able to align and merge the networks to obtain a network with lower loss compared to the original models. However, our approach is significantly more efficient at inference.
## 7 Conclusion
We investigate the challenging problem of weight alignment in deep neural networks. The key to our approach, Deep-Align, is an equivariant architecture that respects the natural symmetries of the problem. At inference time, Deep-Align can align unseen network pairs without the need for performing expensive optimization. Deep-Align performs on par with or outperforms optimization-based approaches while significantly reducing the runtime or improving the quality of the alignments. Furthermore, we demonstrate that the alignments of our method can be used to initialize optimization-based approaches. One limitation of our approach is the need for training a network. Although this can be a relatively time-consuming process, we only have to perform it once for each weight distribution. Furthermore, this procedure does not require labeled examples. To summarize, Deep-Align is the first architecture designed for weight alignment. It demonstrates superior performance over existing methods. The generalization capabilities of Deep-Align make it a promising and practical solution for applications that require weight alignment.
## 8 Acknowledgements
This study was funded by a grant to GC from the Israel Science Foundation (ISF 737/2018), and by an equipment grant to GC and Bar-Ilan University from the Israel Science Foundation (ISF 2332/18).
|
2302.13186 | Construction numbers: How to build a graph? | Counting the number of linear extensions of a partial order was considered by
Stanley about 50 years ago. For the partial order on the vertices and edges of
a graph given by inclusion, we call a linear extension a {\it construction
sequence} for the graph as each edge follows the vertices where it is attached.
The number of these c-sequences is counted for various graph families. We also
consider the set of all length-$n$ c-sequences produced by the graphs with $n$
elements, simplified to their structural skeleton: vertex or edge, and further
allow the generating graph to be structurally constrained. Efficiency is
analyzed. | Paul C. Kainen | 2023-02-25T22:54:27Z | http://arxiv.org/abs/2302.13186v4 | # Construction numbers: How to build a graph?
###### Abstract
We count the number of ways to build paths, stars, cycles, and complete graphs as a sequence of vertices and edges, where each edge follows both of its endpoints. The problem was considered 50 years ago by Stanley but the explicit sequences corresponding to graph families seem to have been little studied. A cost-based variant is introduced and applications are considered.
_Key Words:_ Roller-coaster problem, up-down permutations, linearization of posets, construction number, Hasse diagrams, minimizing edge delay, ontogenies.
## 1 Introduction
The **elements** of a graph \(G=(V,E)\) are the set \(V\cup E\) of vertices and edges. A linear order on \(V\cup E\) is a **construction sequence** (or **c-sequence**) for \(G\) if each edge appears after both of its endpoints. For instance, for the path \(P_{3}\) with vertices \(1,2,3\) and edges \(12,23\), one has construction sequences \((1,2,3,23,13)\) and \((1,2,12,3,23)\), while \((1,3,12,2,23)\) is not a c-sequence. There are a total of 16 c-sequences for \(P_{3}\).
Let \(\mathcal{C}(G)\) be the set of all c-sequences for \(G=(V,E)\). The **construction number** of \(G\) is \(c(G):=\#\mathcal{C}(G)\), the number of distinct construction sequences. So \(c(P_{3})=16\).
The problem of finding the construction numbers of graphs occurred to the author in November 2022, and he proposed, as a problem for the _American Math Monthly_, to determine \(c(P_{n})\). However, after some discussion, the problem to determine \(c(K_{n})\) for the complete graph \(K_{n}\) was substituted. This was much harder to enumerate, and its solution involved the collaboration of Richard Stong, Jim Tilley, and Stan Wagon. The revised problem is to appear [7] with co-proposers PCK, RS, and JT; SW is the Problems Editor: _For \(K_{n}=(V,E)\), how many ways are there to linearly order \(V\cup E\) such that each edge appears after both vertices comprising the edge_?
After seeing how to extend construction sequences to more abstract objects, we found Stanley's work (e.g., [11]), which includes two of our results. Stan Wagon pointed out a different approach to graph construction [16], **assembly numbers** due to Vince and Bona [16], motivated by the goal of describing the self-assembly of macromolecules performed by virus capsids in the host cell. (See SS4 below.)
In this paper, we further refine the notion of construction number to account for the **cost** of having two adjacent vertices with no edge that joins them. A construction sequence for a graph \(G\) is **economical** if it has minimum total cost for its edges. This sharply reduces the set of feasible sequences and allows a greedy heuristic.
Section 2 has additional definitions and some lemmas. In Section 3, we find \(c(G)\) when \(G\) is \(K_{1,n}\) (the star with \(n\) peripheral points), path \(P_{n}\), cycle \(C_{n}\), and complete graph \(K_{n}\). Section 4 describes earlier appearances of construction numbers and considers extensions of c-sequences to hypergraphs, simplicial complexes, CW-complexes, posets, and categories. Section 5 defines cost-functions, some types of c-sequences, and relative constructability of graphs, while the last section has open problems and applications.
## 2 Basic definitions and lemmas
Let \(G=(V,E)\) be a labeled graph, where \(V=[p]:=\{1,\ldots,p\}\). Let \(q:=\#E:=|E|\), and let \(S:=V\cup E\) be the set of **elements** of \(G\). Suppose \(\ell:=p+q=\#S\). By a **permutation** on \(S\), we mean a sequence \(x\) of length \(\ell\) taken from \(S\) where each element appears exactly once. If \(s\in S\), write \(x(s)\) for the unique \(j\in[\ell]\) s.t. \(x_{j}=s\).
Let \(P_{n}\) be the path with \(n\) vertices, \(K_{1,n}\) the star with a single degree-\(n\) hub having \(n\) neighbors, each an endpoint; \(C_{n}\) and \(K_{n}\) are the cycle graph and complete graph with \(n\) vertices. See, e.g., Harary [6] for any undefined graph terminology.
A permutation \(x\) on \(S\) is a **construction sequence** (c-sequence) for \(G\) if each edge follows both of its endpoints; i.e., for \(e=uw\), \(x(u)<x(e)\) and \(x(w)<x(e)\). Let \(\mathcal{C}(G)\) be the set of all construction sequences and let \(c(G):=\#\mathcal{C}(G)\) be the **construction number** of \(G\). Clearly, \(p!q!\leq c(G)\leq(p+q)!\) for each graph \(G\).
The **graph sequence** associated with a c-sequence \(x\) is the sequence \(G_{i}\) of graphs, \(1\leq i\leq\ell\), where \(G_{i}\) is the subgraph of \(G\) determined by the set \(\{x_{1},\ldots,x_{i}\}\) of elements, which is indeed a graph. Let \(b_{i}\) be the number of connected components of \(G_{i}\). Let \(\beta(x):=\max_{i\in[\ell]}b_{i}\) and let \(b(x):=(b_{1},\ldots,b_{\ell})\).
**Lemma 1** (S. Wagon).: _If \(G\) is connected and has minimum degree \(k\), then the last \(k\) entries in any \(x\in\mathcal{C}(G)\) are edges. Moreover, \(x(v)\leq\ell-r\), where \(r=\deg(v,G)\)._
Given two element-wise disjoint finite sequences \(s_{1}\) and \(s_{2}\) of lengths \(n\) and \(m\), we define a **shuffle** of the two sequences to be a sequence of length \(n+m\) which contains both \(s_{1}\) and \(s_{2}\) as subsequences. The number of shuffle sequences of \(s_{1}\) and \(s_{2}\) is \({n+m\choose n}\), giving the construction number of a disjoint union in terms of its parts.
**Lemma 2**.: _If \(x_{1}\) and \(x_{2}\) are c-sequences for disjoint graphs \(G_{1}\) and \(G_{2}\), resp., then each shuffle of \(x_{1}\) and \(x_{2}\) is a c-sequence for \(G_{1}\cup G_{2}\), and we have_
\[c(G_{1}\cup G_{2})=c(G_{1})c(G_{2}){\ell_{1}+\ell_{2}\choose\ell_{1}}, \tag{1}\]
_where \(\ell_{1}\) and \(\ell_{2}\) are the lengths of the sequences \(x_{1}\) and \(x_{2}\), resp._
The number of ways to extend a c-sequence for \(G{-}v\) to a sequence for \(G\) can depend on which c-sequence is chosen. For example, take \(P_{2}=(\{1,2\},\{a\})\), where \(a=12\). Then \(\mathcal{C}(P_{2})=\{x^{\prime},y^{\prime}\}\), where \(x^{\prime}=(1,2,a)\equiv 12a\) and \(y^{\prime}=21a\). Consider \(P_{3}=(\{1,2,3\},\{a,b\})\), where \(b=23\). As \(P_{2}\subset P_{3}\), each c-sequence for \(P_{3}\) extends a c-sequence of \(P_{2}\). One finds that \(x^{\prime}\) has exactly 7 extensions (in short form)
\[312ab,312ba,132ab,132ba,123ab,123ba,12a3b\]
to \(x\in\mathcal{C}(P_{3})\), while \(y^{\prime}\) has exactly 9 extensions (in short form)
\[321ab,321ba,32b1a,231ab,231ba,23b1a,213ab,213ba,21a3b.\]
This gives \(c(P_{3})=16\) as it should; see below.
The previous lemma extends to any finite disjoint union. If \(G_{1},\ldots,G_{n}\) have \(\ell_{1},\ldots,\ell_{n}\) elements, then for \(\ell:=\ell_{1}+\cdots+\ell_{n}\),
\[c(G_{1}\cup\cdots\cup G_{n})=\prod_{i=1}^{n}c(G_{i}){\ell\choose\ell_{1}\cdots \ell_{n}}. \tag{2}\]
Let \(G=([p],E)\) and let \(v\in[p]\). The **based construction number** of \((G,v)\) is the number of c-sequences for \(G\) which start with the **base point**\(v\). We write \(\mathcal{C}(G,v)\) for the set of suitably restricted c-sequences and \(c(G,v)\) for its cardinality. Every c-sequence starts with a vertex, so \(c(G)=\sum_{v\in VG}c(G,v)\). Further, we have
**Lemma 3**.: _If \(v,w\in VG\) and if \(\exists\phi:G\to G\) an isomorphism such that \(\phi(v)=w\), then \(c(G,v)=c(G,w)\)._
Proof.: Let \(x\in\mathcal{C}(G)\). Then \(\tilde{x}:=(\phi(x_{1}),\ldots,\phi(x_{\ell}))\) is also a c-sequence for \(G\). Moreover, \(\tilde{x}\) begins with \(w\) if and only if \(x\) begins with \(v\), so \(x\mapsto\tilde{x}\) restricts to a bijection from \(\mathcal{C}(G,v)\) onto \(\mathcal{C}(G,w)\) (with inverse induced by \(\phi^{-1}\)); hence \(c(G,v)=c(G,w)\).
For \(i=1,\ldots,n\), let \(G_{i}\) be pairwise-disjoint graphs with \(\ell_{i}\) elements, \(v_{i}\in VG_{i}\), and suppose the vertices \(v_{i}\) are identified to a single vertex \(v\). Let \(G:=\bigvee_{i=1}^{n}G_{i}\) be the resulting **wedge-product** graph with **base point**\(v\). Then as in (2), we have
\[c(G,v)=\prod_{i=1}^{n}c(G_{i},v_{i})\binom{\ell-1}{\ell_{1}-1\cdots\ell_{n}-1}. \tag{3}\]
where \(\ell:=\ell_{1}+\cdots+\ell_{n}-(n-1)\) is the number of elements of \(G\) (the \(n\) base points are identified to a single vertex).
## 3 Construction numbers for some graph families
In this section, we find \(c(G)\) when \(G\) is a star, path, cycle, or complete graph. The first result is also due to Stan Wagon.
**Theorem 1**.: _For \(n\geq 0\), \(c(K_{1,n})=2^{n}(n!)^{2}\)._
Proof.: For \(n=0,1\), the result holds. Suppose \(n\geq 2\) and let \(x=(x_{1},\ldots,x_{2n+1})\) be a construction sequence for \(K_{1,n}\). There are \(n\) edges \(e_{i}=v_{0}v_{i}\), where \(v_{0}\) is the central node, and one of the edges, say, \(e_{i}\), must be the last term in \(x\). This leaves \(2n\) coordinates in \(x^{\prime}:=(x_{1},\ldots,x_{2n})\) and one of them is \(v_{i}\). The remaining \((2n-1)\) coordinates are a construction sequence for the \((n-1)\)-star \(K_{1,n-1}\). Hence, \(c(K_{1,n})=n(2n)c(K_{1,n}-v_{i})=2n^{2}2^{n-1}(n-1)!^{2}=2^{n}(n!)^{2}\) by induction.
The numbers 2, 16, 288, 9216, 460800 generated by the above formula count the number of c-sequences for \(K_{1,n}\) for \(n\in\{1,2,3,4,5\}\). These numbers are the absolute value of the sequence A055546 in the OEIS (reference [10]) and describe the number of ways to seat \(n\) men and \(n\) women in a roller coaster with \(n\) rows, where each row has two seats which must be occupied by a man and a woman.
Note that the star \(K_{1,n}\) is the wedge-product of \(n\) copies of \(K_{2}\). There is a unique based construction sequence for \(K_{2}\). Using (3),
\[c(K_{1,n},\star)=\binom{2n}{2\cdots 2},\text{ where the base-point $\star$ is the hub of the star.} \tag{4}\]
Counting c-sequences for cycles and paths is essentially the same problem.
**Lemma 4**.: _If \(n\geq 3\), then \(c(C_{n})=n\cdot c(P_{n})\)._
Proof.: If \(x\in\mathcal{C}(C_{n})\), then \(x_{\ell}\) is an edge; the remainder is a c-sequence for \(P_{n}\).
Before determining \(c(P_{n})\), we give a Catalan-like recursion for these numbers.
**Lemma 5**.: \(c(P_{n})=\sum_{k=1}^{n-1}c(P_{k})\,c(P_{n-k})\,{2n-2\choose 2k-1}\)_._
Proof.: Any construction sequence \(x\) for \(P_{n}\) has last entry an edge \(e\), whose removal creates subpaths with \(k\) and \(n-k\) vertices, resp., for some \(k\), \(1\leq k\leq n-1\). Now \(x\) contains construction sequences for both subpaths which suffices by Lemma 2.
Trivially, \(c(P_{1})=1\) and recursion gives the sequence \(1,2,16,272,7936,353792\) for \(n=1,\ldots,6\), in the OEIS as A000182 [10], the sequence of **tangent numbers**, \(T_{n}\), which has a long and interesting history. For instance, its exponential generating function is \(\tan(x)\), and it corresponds to the odd terms in the sequence of Euler numbers [10, A000111]; see, e.g., Kobayashi [9]. Here are two proofs that \(c(P_{n})=T_{n}\).
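Before the proofs, a quick numerical sanity check of Lemma 5 (our sketch, not from the paper) reproduces these values:

```python
# c(P_n) via the Catalan-like recursion of Lemma 5.
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def c_path(n):
    if n == 1:
        return 1
    return sum(c_path(k) * c_path(n - k) * comb(2 * n - 2, 2 * k - 1) for k in range(1, n))

assert [c_path(n) for n in range(1, 7)] == [1, 2, 16, 272, 7936, 353792]
```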
**Proof 1**.: Let \(U(n)\subset S(n)\) be the **up-down** permutations, where consecutive differences switch sign, and the first is positive. It is well-known [18] that \(\#U(2n-1)=T_{n}\).
**Proposition 1** (D. Ullman).: _There is a bijection from \(\mathcal{C}(P_{n})\) to \(U(2n-1)\)._
Proof.: A permutation \(\pi\) of the consecutively labeled elements of a path is a construction sequence if and only if \(\pi^{-1}\) is an up-down sequence. Indeed, \(\pi^{-1}(2j)\) is the position in \(\pi\) occupied by the \(j\)-th edge, while \(\pi^{-1}(2j-1),\pi^{-1}(2j+1)\) correspond to the positions of the two vertices flanking that edge and so are smaller iff \(\pi\) is a construction sequence. Hence, \(\pi^{-1}\) is an up-down sequence, and conversely.
For instance, \(P_{5}\) gives the sequence \((1,2,3,4,5,6,7,8,9)\), where odd-numbers correspond to vertices and even numbers to edges. An example c-sequence for \(P_{5}\) is \(\pi=(5,9,7,6,3,8,4,1,2)\); each even number (e.g., 4) is preceded by its odd neighbors (3 and 5) - i.e., each edge by its two endpoints. The inverse sequence \(\pi^{-1}=(8,9,5,7,1,4,3,6,2)\) is up-down, as required.
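The bijection can be verified exhaustively for a small path; the sketch below is ours and uses the same labeling as the proof above (elements \(1,\ldots,2n-1\), with even labels for edges):

```python
# Check Proposition 1 for P_4: inverses of construction sequences are up-down permutations.
from itertools import permutations

def is_c_sequence(pi):                      # edge 2j joins vertices 2j-1 and 2j+1
    pos = {v: i for i, v in enumerate(pi)}
    return all(pos[2 * j] > pos[2 * j - 1] and pos[2 * j] > pos[2 * j + 1]
               for j in range(1, len(pi) // 2 + 1))

def is_up_down(sigma):                      # sigma_1 < sigma_2 > sigma_3 < ...
    return all((sigma[i] < sigma[i + 1]) == (i % 2 == 0) for i in range(len(sigma) - 1))

def inverse(pi):                            # 1-indexed inverse permutation
    return tuple(pi.index(k) + 1 for k in range(1, len(pi) + 1))

n = 4
cseqs = [pi for pi in permutations(range(1, 2 * n)) if is_c_sequence(pi)]
assert len(cseqs) == 272                    # c(P_4) = T_4
assert all(is_up_down(inverse(pi)) for pi in cseqs)
```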
**Proof 2**.: By [10, A000182], \(T_{n}=J_{2n-1}\), where for \(r\geq 1\), \(J_{r}\) denotes the number of permutations of \(\{0,1,\ldots,r+1\}\) which begin with '1', end with '0', and have consecutive differences which alternate in sign. Then \(J_{2k}=0\) for \(k\geq 1\) as the sequences counted by \(J\) must begin with an _up_ and end with a _down_ and hence have an odd number of terms. These **tremolo** sequences are in one-to-one correspondence with "Joyce trees" and were introduced by Street [14] who showed they satisfy the following recursion.
**Proposition 2** (R. Street).: _For \(r\geq 3\), \(J_{r}=\sum_{m=0}^{r-1}{r-1\choose m}J_{m}J_{r-1-m}\)._
Now we show \(c(P_{n})=J_{2n-1}\). Indeed, \(J_{1}=c(P_{1})\) and \(J_{3}=c(P_{2})\). Replace \(J_{2r-1}\) by \(c(P_{r})\) and \(J_{2r}\) by zero and re-index; Street's recursion becomes Lemma 5, so \(c(P_{n})\) and \(J_{2n-1}\) both satisfy the same recursion and initial conditions. But \(J_{2n-1}=T_{n}\).
By [17] and [4, 24.15.4], we have for \(n\geq 1\) with \(B_{2n}\) the \(2n\)-th Bernoulli number,
\[c(P_{n})=T_{n}=(1/n){2^{2n}\choose 2}|B_{2n}|. \tag{5}\]
An asymptotic analysis [8] shows \(c(P_{n})\) is exponentially small compared to \(c(K_{1,n-1})\).
Using Lemma 4 and equation (5), we have for \(n\geq 1\),
\[c(C_{n})={2^{2n}\choose 2}|B_{2n}| \tag{6}\]
The first two cycles are CW-complexes: \(C_{1}\) has one vertex and a loop, and \(C_{2}\) has two vertices and two parallel edges; the formula is correct for both (and for \(n\geq 3\)).
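Formulas (5) and (6) can also be checked symbolically; the sketch below is ours and uses SymPy's exact rational arithmetic to recover the tangent-number values above:

```python
# Evaluate the closed forms (5) and (6) with exact rational arithmetic.
from sympy import Rational, bernoulli, binomial

def c_path_closed(n):
    return Rational(1, n) * binomial(2**(2 * n), 2) * abs(bernoulli(2 * n))

def c_cycle_closed(n):
    return binomial(2**(2 * n), 2) * abs(bernoulli(2 * n))

assert [c_path_closed(n) for n in range(1, 6)] == [1, 2, 16, 272, 7936]
assert [c_cycle_closed(n) for n in range(1, 6)] == [1, 4, 48, 1088, 39680]   # n * c(P_n)
```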
If \(\star\) is one of the endpoints of \(P_{n}\), we can calculate \(c(P_{n},\star)\) for the first few values, getting (with some care for the last term) \(1,1,5,61\) for \(n=1,2,3,4\). In fact,
\[c(P_{n},\star)=S_{n}, \tag{7}\]
where \(S_{n}\) is the \(n\)-th secant number ([10, A000364]), counting the "zig" permutations.
### Complete graphs
This section is omitted until the Monthly problem [7] has appeared and its solutions are collected. The solution is due primarily to S. Wagon, R. Stong, and J. Tilley.
## 4 Earlier appearances and extensions
The concept of construction sequence for graphs was already known in a more general context [11, p 10] but integer sequences were only briefly considered. Stanley studied the number of linear extensions of a partial order [11, p 8], using them to define a polynomial [12, p 130]. In [13, p 7] he showed the number of linear extensions of a partial order determined by a path is an Euler number, implying (5) and (7).
Now take the Hasse diagram of any partial order; define a construction sequence for the diagram to be a total order on the elements such that each element is preceded
in the linear order by all elements which precede it in the partial order. Simplicial and CW-complexes can be partially ordered by "is a face of", and the linearization of posets includes graphs and hypergraphs (e.g., the \(r\)-regular case) as 2-layer Hasse diagrams, where elements in the top layer have degree \(r\) for \(r\)-regular hypergraphs.
A notion which sounds similar to construction numbers is due to Vince and Bona: the number of _assembly trees_ of a graph [16]. Assembly numbers count ways to build up a graph from subgraphs induced by various subsets of the vertices.
For \(n\)-stars, [16] gives \(n!\), while for paths and cycles, a Catalan-type value is found. Thus, assembly-tree numbers and construction numbers can be quite different.
Construction sequences make sense for hypergraphs, multigraphs, and indeed for any CW-complex. In the latter case, one counts sequences of cells, where each cell must follow the cells to which it is attached. For simplicial complexes, one might start with simplexes, cubes, and hyperoctahedra (the standard Platonic polytopes), and the sporadic instances in 3 and 4 dimensions.
One could also consider construction sequences for topoi and for (co)limits of diagrams in categories, even beyond the finite realm [11, p 77]. Philosophically, emergent concepts follow the substrate from which they arise.
## 5 Types of construction sequences
In this section, we describe _economical, easy,_ and _greedy_ construction sequences.
Let \(G=(V,E)\) be a graph, let \(x\in\mathcal{C}(G)\), and let \(e\in E\). We define the **cost** of edge \(e=uw\) with respect to a construction sequence \(x\) to be
\[\nu(e,x):=2x(e)-x(u)-x(w)\]
where for all \(s\in V\cup E\), we have \(x(s)=j\) if and only if \(x_{j}=s\). Let
\[\nu(x):=\sum_{e\in E}\nu(e,x)\]
be the cost of \(x\), and let \(\nu(G)\) be the least cost of any of its c-sequences. Thus, edge-cost is the delay between placement of its endpoints and placement of the edge.
The **greedy algorithm**\(\mathbb{G}\) takes input a graph \(G=(V,E)\) and a linear order \(\lambda\) on \(V\), \(\lambda:=(v_{1},\ldots,v_{p})\), and outputs a c-sequence \(x:=x(\lambda):=\mathbb{G}(G,\lambda)\in\mathcal{C}(G)\). As soon as an edge is **available** (meaning that both its endpoints have appeared), the greedy algorithm selects it. If several edges are available, some method of breaking ties is employed - e.g., using lexicographic order or avoiding cycles as long as possible. When
no edges are available, the next vertex on the input list is selected, thereby increasing the number of connected components. Put \(\mathbb{G}(G):=\{\mathbb{G}(G,\lambda):\lambda\text{ linear order on }V\}\).
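For concreteness, here is a small sketch (our code; the names `cost` and `greedy` are ours, and ties are broken simply in input order, one of the options mentioned above) of the cost \(\nu\) and of the greedy algorithm, reproducing the path cost \(4n-5\) of Lemma 6 below:

```python
# Edge cost nu and the greedy construction G(G, lambda), with input-order tie-breaking.
def cost(edges, seq):
    pos = {s: i + 1 for i, s in enumerate(seq)}            # positions start at 1
    return sum(2 * pos[e] - pos[e[0]] - pos[e[1]] for e in edges)

def greedy(vertex_order, edges):
    placed, seq, remaining = set(), [], list(edges)
    for v in vertex_order:
        seq.append(v)
        placed.add(v)
        while True:                                         # add edges as soon as available
            avail = [e for e in remaining if e[0] in placed and e[1] in placed]
            if not avail:
                break
            seq.append(avail[0])
            remaining.remove(avail[0])
    return seq

edges = [(1, 2), (2, 3), (3, 4)]                            # P_4 with the natural vertex order
x = greedy([1, 2, 3, 4], edges)
assert cost(edges, x) == 4 * 4 - 5                          # nu(P_4) = 11
```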
We call a c-sequence for \(G\) with minimum cost **economical** (an **ec**-sequence). Let \(\mathcal{C}^{\prime}(G)\) be the set of ec-sequences for \(G\) and let \(c^{\prime}(G)\) be its cardinality.
**Conjecture 1**.: _If \(G\) is any connected graph, then \(\mathbb{G}(G)\) contains \(\mathcal{C}^{\prime}(G)\)._
A good question is, for a specific graph, how to choose the vertex ordering \(\lambda\) on which to run the greedy algorithm. For the path, the obvious two choices work.
**Lemma 6**.: _Let \(n\geq 2\). Then \(\nu(P_{n})=4n-5\) and \(c^{\prime}(P_{n})=2\)._
Proof.: If \(P_{n}\) is the \(n\)-path with \(V=[n]\) and natural linear order, the greedy algorithm gives the c-sequence \(121^{\prime}32^{\prime}43^{\prime}54^{\prime}\cdots n(n-1)^{\prime}\), where we write \(k^{\prime}\) for the edge \([k,k+1]\). The first edge costs \(3\), while each subsequent edge costs \(4\). The unique nontrivial isomorphism of \(P_{n}\) produces the other member of \(\mathcal{C}^{\prime}(P_{n})\) by Lemma 3.
For \(K_{1,n}\), starting from the hub \(0\), all vertex orderings cause the greedy algorithm to give an \(x\) with cost \((n+1)^{2}-1\) but, in fact, it is better to take \(\lfloor n/2\rfloor\) of the peripheral vertices (in any order) followed by the hub vertex, now fill in the available edges (all orders produce the same cost), and then continue in random order for each remaining peripheral vertex followed by the unique newly available edge.
For \(K_{1,5}\), one gets \(x(012345)=(011^{\prime}22^{\prime}33^{\prime}44^{\prime}55^{\prime})\) (dropping commas and letting \(n^{\prime}\) denote the edge \([0,n]\)) and \(\nu(x(012345))=35=(5+1)^{2}-1\). But \(x(120345)=(1201^{\prime}2^{\prime}33^{\prime}44^{\prime}55^{\prime})\) has cost \(4+5+5+7+9=30\). Postponing \(0\) reduces later delay.
We note that for the path, if \(x\) is an economical c-sequence, then \(\beta(x)\leq 2\); but for the star, this does not hold.
The **easy** sequences are the c-sequences obtained by listing all the vertices first and then all the edges. Once the vertices are listed, _cost is independent of the ordering of the edges_. It suffices to check this for the interchange of two adjacent edges. One of the edges moves left and so has its cost reduced by \(2\) while the edge that moves to the right has its cost increased by \(2\); hence, the sum remains constant.
**Lemma 7**.: _For \(n\geq 2\), let \(x\in\mathcal{C}(P_{n})\) be any easy sequence which begins with \(v_{1},\ldots,v_{n}\) in the order they appear along the path. Then \(\nu(x)=\binom{2n-1}{2}\)._
Proof.: Let \(n\geq 2\) and put \(x_{0}:=1\,2\,3\cdots n\,[n-1,n]\,[n-2,n-1]\cdots[2,3]\,[1,2]\). Then \(x_{0}\) has cost \(\nu(x_{0})=3+7+\cdots+(4n-5)=\binom{2n-1}{2}\), so \(\nu(x)=\binom{2n-1}{2}\).
We have examples where the cost of an easy sequence for \(P_{n}\), which begins with some random order of vertices, can be slightly higher or lower than \(\binom{2n-1}{2}\).
**Lemma 8**.: _Let \(n\geq 2\). Then \(\nu(C_{n})=6n-4\) and \(c^{\prime}(C_{n})=2n\)._
Proof.: The elements \(x\in{\cal C}^{\prime}(C_{n})\) for \(C_{n}=(v_{1},e_{12},v_{2},\ldots,e_{n-1,n},v_{n},e_{n,1})\) begin as in \(P_{n}\) but the last edge costs \(2+(2n-1)=2n+1\), so \(\nu(x)=4n-5+2n+1=6n-4\). Any of the \(n\) edges of the cycle could be last, and either clockwise or counterclockwise orientation could occur, so \(c^{\prime}(C_{n})=2n\).
A different cost model could be formed by attributing cost to the _vertices_, rather than the edges. For \(v\in V(G)\), let \(E_{v}\) be the set of edges incident with \(v\) and put
\[\kappa(v,x):=\Big{(}\sum_{e\in E_{v}}x(e)-x(v)\Big{)}\Big{/}\deg(v,G)\ \ \mbox{for}\ \ x\in{\cal C}(G).\]
Then \(\kappa(x):=\sum_{v\in V}\kappa(v,x)\) and \(\kappa(G):=\min_{x\in{\cal C}(G)}\kappa(x)\) are an alternative measure.
It would also be possible to explore using _maximum_, rather than summation, to get the cost of a graph from that of its edges (or vertices), as with \(L_{1}\) vs \(L_{\infty}\) norms.
Rather than follow a discrete model, it is natural to introduce time as a continuous variable. Suppose \(G=(V,E)\) is a graph and let \(h:V\cup E\to[0,1]\), where for all \(e=uw\in E\), \(h(e)>\max(h(u),h(w))\). One could then define a new edge cost \(\tilde{\nu}(e,h)\)
\[\tilde{\nu}(e,h):=2h(e)-h(u)-h(w). \tag{8}\]
One might allow an edge to exist just prior to the existence of one or both of its endpoints, if the process of implementing the edge has measurable temporal extent.
Choice of cost function may be influenced by application as we shall discuss. However, merely having a sharply curtailed repertoire of construction sequences may make it easier to find nice c-sequences. Perhaps \(c^{\prime}(G)<c^{\prime}(H)\) implies \(c(G)<c(H)\).
Currently, we don't know how much variation can occur in \(c(G)\) among families of graphs with a fixed number of vertices and edges. Let \({\cal G}(p,q)\) be the set of all graphs \(G=(V,E)\) with \(V=[p]\) and \(q=\#E\) and suppose \(G\in{\cal F}\subseteq{\cal G}(p,q)\). Define the **constructability** of \(G\) in \({\cal F}\) to be \(c(G)\) divided by the average over \({\cal F}\),
\[\xi(G,{\cal F}):=\frac{c(G)}{\alpha({\cal F})},\ \ \mbox{where}\ \alpha({\cal F}):=(\#{ \cal F})^{-1}\sum_{H\in{\cal F}}c(H). \tag{9}\]
The \(n\)-vertex trees with greatest and lowest constructability are stars and paths [8]. But the value of \(\alpha({\cal F})\) for \({\cal F}\) the family of \(n\)-vertex trees is unknown. We think that diameter and maximum degree are inversely proportional for maximal planar or maximal outerplanar graphs of fixed order \(|V|\). Do these invariants affect \(c(G)\) analogously with trees? Are paths and stars available as spanning trees for the extremal examples of constructability?
## 6 Discussion
Aside from their combinatorial relevance for the structure of incidence algebras [11] or for enumeration and integer sequences [7, 8], construction numbers of graphs might have a deeper theoretical aspect. A natural idea is to think of construction sequences as the outcome of a constrained stochastic process, where a graph _evolves_ through the addition of new vertices and edges subject to the condition that an edge cannot appear before its endpoints. Any given graph thus could be "enriched" by knowledge of its history, either the linear order on its elements or their actual time of appearance. The existence of such histories might enable some new methods of proof - e.g., for the graph reconstruction problem of Ulam and Harary.
Practical applications would include operations research, where directed hyper-edges describe complex tasks such as "build an airbase" which depend on various supporting infrastructures. If a link should occur at some moment, one would like the necessary endpoints to happen just-in-time.
Graphs occur in many scientific contexts. It would be interesting to study the actual construction sequences for the complex biochemical networks found in the organic kingdom. How close are they to being economical?
Brightwell & Winkler [2] showed that counting the linear extensions of a poset is \(\#P\)-complete and contrast this with randomized polynomial-time algorithms which estimate this number. Their conjecture that \(\#P\)-completeness holds even for height-2 posets was proved by Dittmer & Pak [5], who further included incidence posets of graphs. (Note that their order is the reverse of ours.)
Applications of linear extensions of posets to equidistributed classes of permutations were given by Bjorner & Wachs [2], and Burrow [3] has studied using traversals of posets representing taxonomies and concept lattices to construct algorithms for information databases.
Computer calculation of construction numbers to get sample values can aid in finding correct formulas (via the OEIS [10]) for inductive proofs, but such computation is difficult due to the large number of permutations. This might be flipped into an asset by utilizing the theoretical calculations here as a "teacher" for neural network or machine learning methods (cf. Talvitie et al. [15]). More ambitiously, a mathematically oriented artificial intelligence could be challenged to discover the formulas above, along with some of the others we would like to have.
There are other ways to build graphs - one could attach stars by adding a vertex and all of its edges to any existing vertex or, for 2-connected graphs, one could attach _ears_ (i.e., paths attached only by their endpoints). For random graphs, probabilities might depend on history. How are these various approaches related? |
2305.03332 | Assessing New Hires' Programming Productivity Through UMETRIX -- An
Industry Case Study | New hires (novice or experienced) usually undergo an onboarding program for a
specific period to get acquainted with the processes of the hiring organization
to reach expected programming productivity levels. This paper presents a
programming productivity framework developed as an outcome of a three-year-long
industry study with small to medium-scale organizations using a usability
evaluation and code recommendation tool, UMETRIX, to manage new hire
programming productivity. We developed a programming productivity framework
around this tool called "Utpada" Participating organizations expressed strong
interest in relying on this programming productivity framework to assess the
skill gap among new hires. It helped identify under-performers early and
strategize their upskill plan per their business needs. The participating
organizations have seen an 89% rise in quality code contributions by new hires
during their probation period compared to traditional new hires'. This
framework is reproducible for any new-hire team size and can be easily
integrated into existing programming productivity improvement programs. | Sai Anirudh Karre, Neeraj Mathur, Y. Raghu Reddy | 2023-05-05T07:23:14Z | http://arxiv.org/abs/2305.03332v1 | # Assessing New Hires' Programming Productivity Through UMETRIX - An Industry Case Study
###### Abstract
New hires (novice or experienced) usually undergo an onboarding program for a specific period to get acquainted with the processes of the hiring organization to reach expected programming productivity levels. This paper presents a programming productivity framework developed as an outcome of a three-year-long industry study with small to medium-scale organizations using a usability evaluation and code recommendation tool, UMETRIX, to manage new hire programming productivity. We developed a programming productivity framework around this tool called "Utpada". Participating organizations expressed strong interest in relying on this programming productivity framework to assess the skill gap among new hires. It helped identify under-performers early and strategize their upskill plan per their business needs. The participating organizations have seen an 89% rise in quality code contributions by new hires during their probation period compared to traditional new hires. This framework is reproducible for any new-hire team size and can be easily integrated into existing programming productivity improvement programs.
Software Productivity; New Hires; Industrial Practices; Software Developers
## I Background
In 2018, we developed and patented an automated usability evaluation framework, **UMETRIX**, to detect code-level implementation of usability guidelines for mobile-based applications [5]. This approach uses source code analysis for automated usability assessment of mobile applications [5]. Fig. 1 illustrates the control flow of the UMETRIX [1]. UMETRIX accepts a mobile APK (android package kit) file and one or more validation case file(s) as input. The validation case file contains the code snippet linked with a usability guideline for detection. As a first step, the framework decompiles the apk file and prepares it for source code analysis. _'Validation Test Case Generator'_ loads all the validation case files and links them with the _'Validation Execution Engine'_ for source code analysis. Post execution, the framework provides a validation report on the count of correct implementation of usability guidelines via code analysis. In case of incorrect or no implementation of usability guidelines, the framework recommends code-snippets through _'Recommendation Engine'_ to avoid usability issues. The _'Validation Case DB'_ contains bundled set of validation cases written using an authoring tool and can be used for multiple usability code evaluations. The _'Metric DB'_ stores the results of each validation execution, and the _'Code Snippet Bank'_ stores the code snippets linked with respective usability guidelines for the recommendation. We evaluated the UMETRIX in 20 mobile app development companies stationed in India. We gathered improvement data on their usability indices [5] and assessed the impact of UMETRIX on mobile usability.
Following were the insights that led us to investigate this adaptation further.
* Organizations that employed UMETRIX saw a significant rise in the size of the Code Snippet Bank due to large set of code-snippet submissions over a short time.
* The code snippets were focused on the company-specific code-base. Thus these code snippets were unique for respective projects in a given organization.
* Development Teams started actively using the recommendation engine backed with Code Snippet Bank to train their new hires.
The above observations were pervasive across a few participating organizations, even though the product owners/managers addressed the issues differently. We ascertained this as an opportunity for the UMETRIX tool [1] to be enhanced into a programming productivity framework to drive and track new-hire programming productivity. In this paper, we discuss the programming productivity framework called
Figure 1: UMETRIX – Usability Evaluation Framework
**"Utpada"** backed by UMETRIX and its implementation across all the participating organizations following section.
## II Utpada - Programming Productivity Framework
**Programming Productivity** _is defined as the degree of the ability of individual programmers or development teams to build and evolve software systems_[3]. Over time, software organizations endeavor to improve their productivity factors by aligning them with overall product deliverables [9]. Product owners can anticipate tangible results in a given release cycle with enough FTE (full-time-equivalent) human resources by adopting different practices. Mentorship or Buddy programs are informal practices designed to train and nurture new hire programmers on coding standards, delivery policy, and SLAs defined in the given organization. The success of such programs relies on mentor-mentee communication and interest in a common goal. Since these factors depend on individual personalities, product owners can hardly guarantee the success of such programs. Considering these prevalent challenges, we discuss a case study of the proposed _"Utpada"_ programming productivity framework backed by UMETRIX to train, upskill, and assess the productivity of new hire programmers. Fig. 2 illustrates the _"Utpada"_ programming productivity framework, which utilizes the code snippet search and code snippet bank database that were part of the UMETRIX tool. Following are details about the necessary and sufficient conditions to execute this programming productivity framework.
* **Pre-Condition:** Current employees, including developers, UX practitioners, and QA practitioners, should be populating the Code-Snippet bank database of the UMETRIX tool with the code-snippets widely used as part of their existing product code base.
* **New-Hires Usage:** When New Hires are onboarded, they are assigned new development tasks, including new feature stories and open defects. The new hires use Code-Snippet Search to find the best-fit code pattern for their deliverable. They re-use or re-code using the best-fit code-snippet examples and submit the beta code for code review.
* **Review-Satisfaction-Index (RSI) Scores:** The mentors oversee the code review process and approve the beta code for check-in to production code if all deliverable requirements are met. If the beta code doesn't meet the requirements, they push it back to the new hire for re-work. The mentors submit a Review-Satisfaction-Index (RSI) [2], a code-review scorecard that captures the code snippet bank usage and other organization code standards ratings during code-review.
* **Post-Conditions:** The mentor, who acts as a code reviewer, submits the unique code snippets that new hires write for future curation into the code snippet bank database.
We developed an RSI score template to ease the code-review process in the context of this framework. The approved code check-in rate, RSI scores, and other programming productivity metrics are captured during the code review. These metrics help the respective Program Managers to track new-hire productivity. We approached all the organizations that had previously participated in our UMETRIX study to use the programming productivity framework. Some of these organizations participated in our study to review the new hire programming productivity of both fresh graduates and experienced new hires. The details of the empirical study are provided in the following sub-sections.
### _Study Design_
This section presents details about the participating organizations, demography of the new hires, study duration, programming productivity metrics, data tracking, and probation evaluation criteria, along with feedback from the participating organizations.
**Methodology:** A new hire had to undergo three steps to complete the onboarding and be deemed a "productive resource." _Step 1:_ a new hire is mapped with a mentor, an existing employee, for training and onboarding purposes. _Step 2:_ they are tasked with programming deliverables, for which the mentors conduct code reviews and capture RSI scores. _Step 3:_ the mentor and the corresponding technical lead review the RSI scores for about six months (on a sprint-by-sprint basis; 1 sprint = 6 working days) to decide on moving the new hire onto production.
**Data Collection:** We received programming productivity data every quarter. However, there was a delay in obtaining consent from the organizations for the publication of programming productivity data, as it comes under the employee monitoring laws of the United States, India, and Australia. As per US Federal Law, employers have the right to monitor their employees as they perform their duties. However, the _US Electronic Communications Privacy Act of 1986 (ECPA), the US Common Law Protections Against the Invasion of Privacy,
Figure 2: **“Utpada”** - New-Hire Programming Productivity Framework
the Indian IT Act 2000, and Section 43A of the IT Amendment Act, 2008_ restrict practitioners from publishing information about an organization's trademark unless a No-Objection Certificate is obtained from the employer and the employees who participated in the study. As the programming productivity data contains PII (Personally Identifiable Information), all the organizations prohibited us from disclosing the participants' demographics. In view of this, all references to trademark names and employee information in this case study are masked. The mentors and code-reviewers (sometimes both) capture the productivity metrics during the code-review session. Different in-house/open-source tools were used to capture code quality metrics and other productivity metrics. Reviews were graded against the code-review checklist, and RSI scores were provided for the participant. The participants usually spend 4 to 6 months in the probation period before moving onto production. This transition is decided by their performance during their tenure in the productivity framework, as reflected in these RSI scores. If the RSI scores are high, the mentors recommend that the Group Managers on-board the participants onto production teams early. Participants with exceptional performance usually spent 3-4 months in the probation period. Participants with average performance spend around six months in probation. The under-performing participants were either terminated or moved to roles with non-development activities like QA, DevOps, etc.
**Participating Organizations:** We reached out to all the 22 Industry partners who successfully used the UMETRIX tool in practice to use the new-hire programming productivity framework. Only 6 of the organizations agreed to participate in the programming productivity study. Table 1 provides organization details and the participating team size. A multi-year-controlled case study was initiated within the participating organizations to understand the usage and impact of the Code-Snippet Bank database to enhance new hires' programming productivity. We started the study in March 2018 and ended our data collection by August 2021. We received programming productivity information from these six organizations on a Quarter-to-Quarter basis. The respective Group Managers (Head of Engineering) agreed to take up the responsibility to collate and share the data in a specified format [2]. We met with the respective owners every six months to introspect and review the strategy of skilling new hires.
**Participating Teams and New-Hires:** New hires are new employees who joined a particular organization after a thorough hiring process. In our context, new hires are both fresh graduates and experienced programmers with prior employment at other organizations. These new hires are usually organized into teams mentored by a senior employee with expertise in the existing work, usually called a mentor. The participating teams were medium-sized, with 20 developers per team. Technical Leads act as mentors/code reviewers. They report to Group Managers within Engineering Teams. **80%** of our study participants were fresh university graduates with no prior industry experience, and the rest were experienced programmers. Overall, **21%** of them were female, **69%** were male, and the rest did not identify themselves by gender. They were hired during quarterly hire cycles between February 2018 and January 2021. The new hires participated in their respective organization's onboarding process. They were later provided access to alpha/beta code branches, required developer tools, knowledge base articles, relevant product code-base training, and instructor-led training on overall deliverables. These new hires were expected to work with programming languages and frameworks like Objective C, C#, Swift, ReactJS, Rust, Lua, Kotlin, and other JS frameworks. Although the new hires are hired across different hire cycles irrespective of their experience levels, they are still exposed to similar onboarding processes; the productivity data is captured per the prescribed programming productivity indicators for review.
**Programming Productivity Indicators:** Large teams among the participating organizations followed unconventional customized programming productivity indicators in line with their departmental goals. However, all six organizations agreed to measure a few common programming productivity indicators to track the new hire's performance during this study. These indicators include Deliverable Throughput (DT), Lines Changed (LC), the Weighted Average Class Complexity metric, code quality metrics (Nested Block Depth, Leaky Encapsulation, Weighted Methods per Class, Type Checking, Feature Envy, Dead Code, etc.), and some Agile metrics such as lead time, cycle time, and velocity. They are well-established programming productivity metrics traditionally captured using different code-editor tools. Apart from the programming productivity metrics, a Review Satisfaction Index (RSI) was designed to aid the participants in tracking the health of their programming productivity based on feedback from their peer code reviewers. The raw data of the evaluated metrics is made available as supplement data [2]. The RSI scores play a crucial role in monitoring the new hire's
| Detail (Company) | C1 | C2 | C3 | C4 | C5 | C6 |
| --- | --- | --- | --- | --- | --- | --- |
| Company Size | 1000-2000+ (Medium) | 5000-10000+ (Large) | 5000-10000+ (Large) | 100-500+ (Small) | 500-1000+ (Small) | 10-100+ (Small) |
| Office/Team Type | Distributed, open office | Multi-national | Multi-national | Distributed, closed office | Multi-national office | Closed office |
| Developer Tools | Fixed tools | Uniform tools | Uniform tools | Uniform tools | Fixed tools | Similar tools |
| Development Paradigm | Scrum, Kanban | Scaled Agile, Kanban | Agile, Kanban, Lean | Pair, Kanban | Pair, Agile | Pair |
| Products | Mostly Mobile, Web | Mostly Web, Mobile | Mobile, Web, Embedded | Embedded, Mobile | Mobile, Web | Mobile, VR, AR |
| Code Storage | Single source, monolithic | Multi-source, separate | Multi-source, separate | Multi-source, separate | Single source | Monolithic |
| HQ Country | USA | USA | Australia | India | India | India |
| Participating Teams | 3 Teams | 6 Teams | 4 Teams | 1 Team | 1 Team | 2 Teams |
| Industry Type | Services | Accounting, Finance, Defence | Engineering | Services | Healthcare | Services |

Table 1. Details of Participating Organizations
programming productivity. It is a score-based weighted average designated by the code-reviewer based on a code-review checklist [2]. **6.5** out of **10** was benchmarked as the minimum score for RSI. Anything below this benchmark was considered as not being productive. The mentors and designated code reviewers share the RSI scores weekly with the participants and discuss the areas for improvement. Under-performers were coached by the mentors so that they could improve over time.
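The exact weighting scheme behind the RSI is internal to the participating organizations. Purely as an illustration, the sketch below shows how such a score might be combined from indicator ratings and checked against the benchmark; the 6.5 threshold and 0-10 scale come from the text, and the 50/50 split is described in the Results section, but the indicator names and rating values are invented for this example.

```python
# Hypothetical RSI computation: productivity metrics and review feedback each worth 50 points,
# then normalized to a 0-10 scale; names, weights, and values are illustrative only.
RSI_BENCHMARK = 6.5

def rsi(productivity_scores, review_scores):
    prod = 50 * sum(productivity_scores.values()) / len(productivity_scores)      # 0-50
    review = 50 * sum(review_scores.values()) / len(review_scores)                # 0-50
    return round((prod + review) / 10, 1)                                         # 0-10

score = rsi(
    {"deliverable_throughput": 0.8, "code_quality": 0.7, "agile_velocity": 0.9},  # 0-1 ratings
    {"implementation": 0.9, "error_handling": 0.6, "readability": 0.8},
)
print(score, "on track" if score >= RSI_BENCHMARK else "needs coaching")          # 7.8 on track
```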
Here is an example to illustrate how the new-hire participants work on a code assignment and use the code bank to address such requirements. Requirement **REQ-21890** requires that content is not clipped or obscured when text is resized on the mobile screen. The new hire searched the code bank for an initial code snippet using _extActAttributes_ as the keyword. The relevant code changes are then made per the requirement in change _C-REQ-21890_, raising the width of the _extActAttributes_ class to **100%** so that overflowing text is not clipped.
```
<input type="text" class="form-control extActAttributes" id="aligncompetency"/>

.extActAttributes {
    display: inline-block;
    width: 70%;
}
```
**REQ-21890** - Certain users - in particular, users with low vision, cognitive disabilities, and motor impairments - often need to increase the size of content in order to more comfortably read or operate a web page. All modern desktop browsers offer users "full-page zoom", which increases the size of all page content, including text, graphics, and overall layout. CONTENT BECOMES OBSCURED OR DIFFICULT TO UNDERSTAND WHEN RESIZED. When layouts don't appropriately respond to changes in zoom, content can be clipped and unreadable.
In sample 04 - User Profile Page - Personal View - Create Task, the values of read-only fields in the "Attributes" section can be clipped if the text exceeds the width of the <input> elements due to layout not appropriately adjusting to allow for 100% width.
**C-REQ-21890** Ensure that pages can be resized/zoomed to at least 200% without any loss of content or functionality. In general, avoid the use of CSS absolute or sticky positioning, container dimensions set using viewport units like vh or vw, and containers which truncate content using overflow: hidden.
In sample 04 - User Profile Page - Personal View - Create Task, set the width of the extActAttributes class to "100%" for the fields in the "Attributes" section.
**Code Review Proforma:** Michaela et al. extensively proposed an exhaustive code-review template for building proficient programmers [4]. This proforma covers aspects of code development like _Implementation_ - with a focus on relevance and scope of the feature to be made, _Dependencies_ - with a focus on compatibility and integrity of the feature, _Security_ - with a focus on vulnerabilities, input sanitation, and validation, _Logic Errors_ - events of code break and intention behind a code logic, _Error Handling_ - logging and error management, _Usability/Accessibility_ - with a focus on regulatory guidelines, _Performance_ - with a focus on code run time and impact on overall system performance, _Readability_ - with a focus on coding standards, code ethics, and code restructuring. Code reviewers also captured the programming productivity metrics using their internal tools like Slack, GitHub, GanttPro, bitbucket, etc., from new hires of participating organizations. Below is a simple example of how code reviewers expect error handling from new hires while building mobile features. The error handling refers to the exception to requirement **REQ-1289**. This helps the new hire programmer and the reviewer revisit the requirement and address the actual requirement.
### _Results_
This section illustrates the results of the data points captured as part of this case study. Two types of data points are captured as part of this study. (1) RSI Scores through Code-review and (2) Code Bank Usage during code contribution.
* The Code-reviews generate RSI scores for the study participants on individual code contributions. The RSI combines a weighted score of programming productivity metric and a weighted score of code-review feedback, each of which is given _50_ points. Overall the RSI is scored against _100_ points and is normalized to a scale of 10 to ease the review.
* Code bank usage details are captured to understand the significance of code bank recommended code snippets while submitting code contributions.
* Code check-in details are captured by respective team leads using tools like _git, gitclear, and bitbucket_. Code commit hits (counts), Lines-of-Code (LOC) added/updated/deleted, and the Line Impact metric are captured daily as part of this study. In our study, no significant inferences could be drawn from this data. Hence it has not been included in the final review analysis.
### _Discussion_
We present overall programming productivity observations as part of this section. The detailed programming productivity metrics and the code bank usage of all organizations are included as part of supplement data [2].
**Programming Productivity Conclusions**: Out of _101_ new hire participants across all participating organizations, five new hires are terminated from the program due to their poor performance or lack of commitment to the training program. These programmers are employed with small participatory organizations. Thus all the code contributions of only _96_ study participants are considered for this study. Among these participants, _77_ exhibited exceptional performance in implementing the assigned deliverables. These study participants followed all the prescribed guidelines and excelled in their code contributions. Almost half received a perfect _10/10_ RSI score for specific crucial code contributions during the code reviews. These exceptional new hires are transitioned to production-level code contributions based on recommendations from respective group managers. The average time these exceptional code contributors spend is around _14_ weeks. Among the rest of the _19_ study participants, _13_ are moderate performers due to their focused knowledge of specific technologies. These participants displayed good code contributions in only distinct features and could not finish a few features they were not skilled enough. The average time spent by these medium code contributors is around _18-24_ weeks. The rest of the \(6\) study participants are under-performers who could not deliver feature stories or defects during this case study on time. These participants are transitioned to non-functional teams like Datacenter, QA, and DevOps as they are technically equipped to manage non-functional aspects of the product. All of these under-performers are employed with large participatory organizations.
**UMETRIX CodeBank Usage Rate:** Overall, _2002_ code contributions were recorded by the _96_ study participants during the study period. Out of these code check-ins, _1921_ code contributions relied on the code bank of UMETRIX to implement an assigned feature story or defect. This data is captured as part of code review. While submitting the code contribution for code review, the new-hire participant must provide the CodeSnippetID of the code snippet utilized from the code bank. This code snippet is verified and used for reviewing the submitted code. The mentor or code reviewer discusses the practical relevance, implementation, and performance of the code snippet. This enables micro-learning for new-hire participants during the code-review session. Code-reviewers observed that _89%_ of the time, new-hire programmers relied on the right code snippet for their code contributions, i.e., _1710_ of the _1921_ code-bank-based contributions across all the participating organizations directly relied on CodeBank recommendations. The detailed metrics are illustrated as part of the supplement material [2].
**Impact on Software Quality:** The medium and small participating organizations _C1, C4, C5_ and _C6_ let the participants contribute to the product code-base after their first two weeks. The new hire developers took the mentor's feedback constructively and improved over time. The product owners observed a _61%_ rise in quality code contributions, with only _4%_ fatal errors in new hire code contributions. These organizations traced the overall quality metrics and evaluated them based on their internal departmental key performance indicators (KPIs). The smaller organizations _C5_ and _C6_, with smaller teams, correlated their current FY2019-20 quality metrics with FY2017-18 and observed a _33%_ decrease in fatal errors and a _55%_ decrease in trivial mistakes in production code. The large organizations _C2_ and _C3_ were reluctant to share data on the improvement of overall product quality based on new hire code contributions. The detailed metrics are illustrated as part of the supplement material [2].
**Replicating our Study** - Fig 2 illustrates the overall process of our study. The process is flexible enough to adapt and implement. The fundamental requirement of our study is to deploy the UMETRIX tool [1] and populate the Code Snippet bank database with relevant code snippets. The rest of the steps can be customizable. Tracking RSI scores and code-bank usage through new-hire code reviews will help engineering managers understand the programming productivity of new-hires.
### _Perception Study_
We conducted an Onboarding Perception survey as part of a feedback session in October 2020. Twelve team leads and six group managers from the participatory organizations participated in the study. Fifty-two new hire programmers are also part of it. We conducted formal focused interviews with these voluntary participants to understand the impact and the
consequence of our case study on these new hires and management during their tenure at these organizations. Detailed questionnaire can be found as part of our supplementary material [2]. Following are brief observations from the team leads and managers who played a managerial role in this study.
* With the RSI scoring method, code reviews helped new hires with frequent feedback. Code Snippet Banks are beneficial for persuasive knowledge transfer that includes in-house coding standards, design patterns, in-house metric benchmarks, and regulatory aspects of code during the early stage of product development.
* Few mentors used high RSI score-based code deliverables as a brown-bag training session to up-skill underperformers.
* Mentor-new hires pair programming sessions have improved work culture and created a beneficial synergy among the teams. Filtration of underperformers has also become easy and helped development managers identify submission risks early in code deliverables.
Following are the brief observations from participant new-hire programmers (both novice and experienced).
* The Code Snippet bank was found to be useful for new-hire developers to get accustomed to the prevailing code base faster than ever. The programmers also found that the recommender engine of UMETRIX had an extraordinary impact on their RSI scores.
* All the new-hire programmers found UMETRIX easy to use. Experienced new-hire programmers found the process of code reviews to be elaborate and cumbersome to qualify.
* For most of them, this productivity framework provided clarity of thought on focusing on targeted deliverables. It helped address interruptions and distractions while working towards a deliverable.
## III Related Work
Developer programming productivity was one of the central topics of the first NATO Software Engineering Conference (1968) [6]. Stefan et al. have illustrated the shift in proposals over the decades in their comprehensive review of prevailing developer productivity factors in software development [9]. They clearly define the distinction between the product, process, and development environment to understand programming productivity consequences. Melanie et al. developed the ProFLOW approach to capture automated programming productivity data from contributors of R&D organizations at Siemens AG [8]. Chris et al. have also proposed unique metrics to track programmer productivity [7] and its implications on software quality. These two studies are distinctive as they illustrate different tool-based means to record and assess developer programming productivity. Despite vibrant studies on developer programming productivity, very few studies discuss tool-based approaches to upskill, improve, and track the programming productivity of new-hire programmers. Thus, we intend to present one such programming productivity framework, **"Utpada"**, using our existing UMETRIX tool to fill such a gap. We do not claim this is the only solution but suggest it as an alternate view towards assessing programmer productivity in software industries.
## IV Threats to Validity
This paper only illustrates a case study on how a usability evaluation tool (UMETRIX) was leveraged into a programming productivity tool (UTPADA) over time. Our study only presents the experiences of software practitioners who adopted and practiced a different approach to address the challenges involved with their new hire programming productivity. We have equally disclosed the scope of the UMETRIX tool to all the new-hire participants from the participatory organization. The Code snippet bank of the UMETRIX tool is exhaustive enough for participants to undergo the case study. The technical leads and group managers of the participatory organizations have taken enough care to update the code bank promptly to pick up the pace of code snippet usage. The mentor of the study participants stayed constant. Still, the role of the code reviewer rotates every week, avoiding bias in code-review data collection. Our study results are strictly captured under the supervision of technical leads at respective organizations. All our conclusions are based on the code review data captured by the mentors from the participatory organization.
## V Conclusion and Future Work
This paper discusses a new-hire programming productivity framework, **"Utpada"**, using the UMETRIX tool. The new hires use this framework as a self-help code snippet search to work on an enterprise source code base. **96** novice and experienced new-hire programmers participated in this multi-year case study. Code reviewers followed our programming productivity framework and captured RSI scores to assess the productivity rates of new hires. Our study shows that **89%** of new-hire code contributions relied on the code snippet bank for quality code submissions to improve developer programming productivity. Events after the COVID-19 outbreak created some delays in data sharing. Thus we concluded the study by August 2021. As part of our future work, we plan to develop a code-visualization connector for UMETRIX for a better understanding of the control flow and data flow of code snippets to ease Code Bank usage.
## Acknowledgments
We thank all the new hires and participating organizations, including their HR and Legal teams, for executing and sharing programming productivity data as part of this study.
|
2301.08948 | A Perron-Frobenius analysis of wall-bounded turbulence | The Perron-Frobenius operator (PFO) is adapted from dynamical-system theory
to the study of turbulent channel flow. It is shown that, as long as the
analysis is restricted to the system attractor, the PFO can be used to
differentiate causality and coherence from simple correlation without
performing interventional experiments, and that the key difficulty remains
collecting enough data to populate the operator matrix. This is alleviated by
limiting the analysis to two-dimensional projections of the phase space, and
developing a series of indicators to choose the best parameter pairs from a
large number of possibilities. The techniques thus developed are applied to the
study of bursting in the inertial layer of the channel, with emphasis on the
process by which bursts are reinitiated after they have decayed. Conditional
averaging over phase-space trajectories suggested by the PFO shows, somewhat
counter-intuitively, that a key ingredient for the burst recovery is the
development of a low-shear region near the wall, overlaid by a lifted shear
layer. This is confirmed by a computational experiment in which the control of
the mean velocity profile by the turbulence fluctuations is artificially
relaxed. The behaviour of the mean velocity profile is thus modified, but the
association of low wall shear with the initiation of the bursts is maintained. | Javier Jimenez | 2023-01-21T13:11:39Z | http://arxiv.org/abs/2301.08948v1 | # A Perron-Frobenius analysis of wall-bounded turbulence
###### Abstract
The Perron-Frobenius operator (PFO) is adapted from dynamical-system theory to the study of turbulent channel flow. It is shown that, as long as the analysis is restricted to the system attractor, the PFO can be used to differentiate causality and coherence from simple correlation without performing interventional experiments, and that the key difficulty remains collecting enough data to populate the operator matrix. This is alleviated by limiting the analysis to two-dimensional projections of the phase space, and developing a series of indicators to choose the best parameter pairs from a large number of possibilities. The techniques thus developed are applied to the study of bursting in the inertial layer of the channel, with emphasis on the process by which bursts are reinitiated after they have decayed. Conditional averaging over phase-space trajectories suggested by the PFO shows, somewhat counter-intuitively, that a key ingredient for the burst recovery is the development of a low-shear region near the wall, overlaid by a lifted shear layer. This is confirmed by a computational experiment in which the control of the mean velocity profile by the turbulence fluctuations is artificially relaxed. The behaviour of the mean velocity profile is thus modified, but the association of low wall shear with the initiation of the bursts is maintained.
## 1 Introduction
There is widespread agreement that physical phenomena have causes, but less consensus on what this may mean. Several questions come to mind. The first is whether the concept of cause has any meaning when the equations of motion are known, and whether, even if a definition could be agreed upon, it would be of any practical value. For example, Russell (1912) argued that, if the temporal evolution of a dynamical system is described by a set of deterministic differential equations, causality is equivalent to knowledge of the initial conditions. This point of view can be traced to Newton and even to the classical world, and implies that the only causes of the state of the system at time \(t\) are the state of the system at any previous time. This is sketched in figure 1(a). Disregarding isolated singularities, any point \(\mathbf{v}(t_{\rm e})\) in phase space is the 'effect' of all the points \(\mathbf{v}(t<t_{\rm e})\) in a unique incoming trajectory. Conversely, \(\mathbf{v}(t_{e})\) is the 'cause' of all the points in that trajectory for which \(t>t_{e}\).
However, Russell (1912) was probably thinking about reversible Hamiltonian mechanics and, although true in theory, his conclusions are not necessarily useful in more general cases. Many mechanical systems are dissipative, and identifying the Russell (1912) cause of a particular state implies integrating
ill-posed equations backwards in time. This certainly applies to Navier-Stokes turbulence, which is the system that mostly interests us here. Similarly, Russell (1912) knew little about deterministic chaos, but we now understand that most dynamical systems with many degrees of freedom are chaotic, and cannot in practice be uniquely integrated forward. The evolution of turbulence is closer to figure 1(b), in which \(\boldsymbol{v}(t_{e})\) has been substituted by a small neighbourhood, and the forward and backward trajectories become irregular or fractal cones formed by bundles of trajectories that contain the causes and effects of the points in the neighbourhood of \(\boldsymbol{v}(t_{e})\). Russell's question can be recast as whether, in such situations, something is retained of the deterministic picture in figure 1(a).
A related problem is whether something can be said about causality without performing interventional experiments. The usual answer is that it can not, because the correlations that result from observations do not imply causation (Granger, 1969; Pearl, 2009). But the discussion in the previous paragraph suggests that this may not be the whole story, and that a sufficiently careful observation of the temporal evolution of a system may lead to the identification of the 'causal' trajectories that cross a neighbourhood of interest (Angrist _et al._, 1996).
The coarse-graining inherent in figure 1(b) suggests that the dynamical system can be simplified by partitioning the phase space into disjoint neighbourhoods of finite size, at least for a fixed temporal horizon. This is common practice in chaotic dynamical systems (Beck & Schlogl, 1993), and, although the reasons given are often that it avoids singular measures in the statistics, and that numerical experiments are anyway discrete, figure 1(b) suggests that there is a more fundamental justification. If chaos prevents us from predicting the behaviour of infinitesimally close neighbouring points, it makes little sense to insist on treating them as if they were different, and we may as well consider finite-size neighbourhoods as our fundamental dynamical units. This has important consequences for the definition of the system. The main one is that the system propagator is substituted by the 'symbolic' dynamics of how often and in which order the system visits the different cells, and that the deterministic equations are substituted by the transition probabilities incorporated in the Perron-Frobenius (transfer) operator introduced in §2.
Turbulence is well suited for these techniques, because it is a chaotic deterministic system with many degrees of freedom for which the (Navier-Stokes) equations are known. The difficulty is not how to
Figure 1: (a) The deterministic view of a dynamical system. The shaded plane represents the full phase space at one instant in time, and each trajectory is one possible evolution of the system. (b) The effect of dissipation and chaos. Phase points have to be substituted by neighbourhoods, and the flow becomes ill-conditioned both forward and backwards in time.
integrate the equations, which are in principle within reach of a sufficiently large computer, but how to explain and predict turbulent flows in terms of simpler rules. Direct simulations are exact but expensive, and we would like to have reduced models that reproduce the flow, if not in full detail, at least well enough to provide general rules about its future behaviour and, ideally, about how that behaviour could be influenced. Shear-driven turbulence is particularly appropriate because it can be made statistically steady, as in pipes or channels, but also because it is believed to be partially controlled by linear processes (del Alamo & Jimenez, 2006; McKeon & Sharma, 2010; Jimenez, 2013), and at least in part describable in terms of coherent structures that play the role of objects in a dynamical system (Adrian, 2007; Jimenez, 2018).
Especially interesting is the regeneration cycle of wall-bounded turbulence, whose persistence has been explained by the interaction between the perturbations of the streamwise and cross-flow velocities (Jimenez, 1994; Hamilton _et al._, 1995; Waleffe, 1997). There is fairly general consensus that the wall-normal velocity generates fluctuations of the streamwise velocity by deforming the mean shear, and that the shear interacts with the cross-flow fluctuations to amplify them (Orr, 1907; Jimenez, 2013, 2015). But this amplification is transient in most models (Butler & Farrell, 1993; Farrell & Ioannou, 1996; Schoppa & Hussain, 2002; Jimenez, 2013), and the details of how the cycle closes after the burst decays are unclear. The elucidation of this regeneration process is the underlying 'application' of our investigation, but much of the paper is dedicated to the development of the analysis procedure itself.
Note that the results of our analysis will not be causal in the sense of Granger (1969) or Pearl (2009), since they involve no intervention from the observer. But we are more interested in predictability and perhaps in coherence, and in the search for states of the system that best allow us to draw conclusions from partial flow information. The fundamental question of causality will be outsourced here to the equations of motion, and its direction to the direction of time. The main purpose of our analysis is to identify flow configurations in which the equations of motion give us the best possible information about the future of the system without necessarily solving them in detail, and which could perhaps lead to effective control strategies.
The organisation of the paper is as follows. Section 2 introduces the Perron-Frobenius operator, which is particularised to a small-box turbulent channel in §3. Techniques for its use are developed in §3.2 and §3.3, leading in §4 to the study of conditional trajectories in phase space. Finally, a simple interventional experiment is described in §5 to help in the analysis of the wall regeneration cycle, and conclusions are offered in §6.
## 2 The Perron-Frobenius operator
Assume a statistically stationary ergodic system,
\[\mathbf{v}(t+T)=\mathbf{\mathsf{S}}(T;t)\,\mathbf{v}(t), \tag{1}\]
for which temporal and ensemble averages can be interchanged. The probability density of the state variable, \(v\), over the cells of a partition \(\{C_{j}|\,j=1\ldots N\}\) of the phase space, can be approximated by the fractional distribution, \(\mathbf{q}=\{q_{j}\}\), of the time spent by the system within each cell. After a sufficiently long time, or for a sufficiently large ensemble of experiments, these probabilities tend to an equilibrium distribution that we denote by \(\mathbf{q}_{\infty}\). More locally, if we consider the probability distributions at two different times, \(\mathbf{q}(t)\) and \(\mathbf{q}(t+T)\), the two-dimensional Perron-Frobenius operator (PFO), \(\widehat{\mathbf{\mathsf{P}}}^{e}\), relates the past to the future (Beck & Schlogl, 1993),
\[\mathbf{q}(t+T)=\widehat{\mathbf{\mathsf{P}}}^{e}(T;t)\, \mathbf{q}(t). \tag{2}\]
Because probabilities represent the results of mutually independent tests, \(\widehat{\mathbf{\mathsf{P}}}^{e}\) is linear and, for a finite partition, reduces to an \(N\times N\) matrix, where \(N\) is the number of cells in the partition, which is potentially much larger than the number of degrees of freedom of the original dynamical system. We will assume \(\widehat{\mathbf{\mathsf{P}}}^{e}\) to be independent of \(t\).
When applied to a perfectly concentrated initial distribution, \(\mathbf{q}^{(a)}(t)=\{\delta_{aj}\}\), where \(\delta_{aj}\) is Kronecker's delta, the \(a\)-th column of \(\widehat{\mathbf{\mathsf{P}}}^{e}\) represents the probability that a system initially within the \(a\)-th cell evolves into the different cells of the partition after the time interval \(T\). Note that these concentrated initial probability distributions can be interpreted as non-interventional experiments, in which a statistical knowledge of the causal structure of the coarse-grained system can be gained by observing the system over a sufficiently long time (Angrist _et al._, 1996).
The PFO is equivalent to the Bayesian conditional probability matrix (Feller, 1971) and, after a sufficiently long observation, can be estimated from the joint probability distribution (Ulam, 1964)
\[\mathsf{Q}_{ij}(t,t+T)\equiv\mathsf{Q}_{ij}(T)=\mathrm{prob}_{t}(\mathbf{v}(t+T)\in C_{i},\mathbf{v}(t)\in C_{j}), \tag{3}\]
as
\[\mathsf{P}_{ij}^{e}=\mathsf{Q}_{ij}/\sum_{s}\mathsf{Q}_{sj}. \tag{4}\]
This operator is normalised to unit column sums, and an input probability \(\mathbf{q}(t)\), normalised to \(\sum q_{j}=1\), results in a similarly normalised output probability \(\mathbf{q}(t+T)\). The matrix \(\mathbf{\mathsf{P}}^{e}\) is generally not symmetric, and there is a dual matrix,
\[\mathsf{P}_{ij}^{c}=\mathsf{Q}_{ji}/\sum_{s}\mathsf{Q}_{is}, \tag{5}\]
which generates \(\mathbf{q}(t-T)\) given \(\mathbf{q}(t)\), allowing us to estimate the statistical distribution of the causes of a given effect. Note that, even if
\[\mathbf{q}(t-T)=\widehat{\mathbf{\mathsf{P}}}^{c}(T)\, \mathbf{q}(t) \tag{6}\]
looks like the inverse of (2), \(\widehat{\mathbf{\mathsf{P}}}^{c}\) is not the inverse of \(\widehat{\mathbf{\mathsf{P}}}^{e}\), because the marginal probabilities \(\mathbf{q}(t)\) and \(\mathbf{q}(t+T)\) have different meanings in (2) and in (6). In the former, \(\mathbf{q}(t)\) is observed, and \(\mathbf{q}(t+T)\) is the conditional probability distribution at \(t+T\) given that observation, while their meaning in (6) is reversed. One of the effects of the coarse-grained partition is to destroy any reversibility that might have been present in the original dynamical system.
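For concreteness, the following sketch shows one way of estimating (3)-(5) from a long observation of the discretised system. It is a minimal illustration rather than the code used for the results below; the function name `transfer_operators`, the uniform weighting of snapshots, and the toy usage example are our own choices, and it assumes the snapshots are equispaced in time.

```python
import numpy as np

def transfer_operators(cells, n_cells, lag):
    """Estimate the forward (P^e) and backward (P^c) transfer operators from a
    sequence of visited cell indices.

    cells   : 1-D integer array, cell index visited at each snapshot.
    n_cells : number of cells N in the partition.
    lag     : time offset T expressed in snapshots.
    Returns (Pe, Pc, Q), with Q[i, j] ~ prob(v(t+T) in C_i, v(t) in C_j).
    """
    Q = np.zeros((n_cells, n_cells))
    for j, i in zip(cells[:-lag], cells[lag:]):        # (cause j, effect i) pairs
        Q[i, j] += 1.0
    Q /= Q.sum()                                       # joint probability, (3)
    col = Q.sum(axis=0)                                # marginals of the causes
    row = Q.sum(axis=1)                                # marginals of the effects
    Pe = np.divide(Q, col, out=np.zeros_like(Q), where=col > 0)                        # (4)
    Pc = np.divide(Q.T, row[:, None], out=np.zeros_like(Q), where=row[:, None] > 0)    # (5)
    return Pe, Pc, Q

# Toy usage on a random three-cell sequence (purely illustrative):
rng = np.random.default_rng(0)
seq = rng.integers(0, 3, size=10_000)
Pe, Pc, Q = transfer_operators(seq, n_cells=3, lag=1)
print(Pe.sum(axis=0))   # every visited column sums to one
```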
Another consequence of discrete partitions is to suppress the semigroup character of the dynamical system, by which \(\mathbf{\mathsf{S}}(T_{1}+T_{2})=\mathbf{\mathsf{S}}(T_{1})\circ\mathbf{\mathsf{S}}(T_{2})\). Indeed, even if the original dynamical system is Markovian in the sense that its future depends only on its state at the present (i.e., on its 'initial conditions'), the discretised system is generally not Markovian. The cells of almost any partition of a high-dimensional phase space are projections of infinite cylinders that extend along some neglected system dimensions. Two trajectories that intersect a cell at a given time may actually intersect its cylinder at very different places, and the only way to distinguish different trajectories is often to consider the sequence of cells visited over their entire past. Even this may not be enough, and very little is known about partitions that preserve Markovianity in high-dimensional systems (Beck & Schlogl, 1993, §3.6). The transfer operator bypasses this limitation by acting on the transition probabilities, and is again Markovian in the sense that \(\mathbf{q}(t+T)\) formally only depends on \(\mathbf{q}(t)\) (Feller, 1971, §X), but we regain Markovianity at the expense of losing determinacy, and we will see in §3.1 that the semigroup property, \(\mathbf{\mathsf{P}}^{e}(nT)=\mathbf{\mathsf{P}}^{e}(T)^{n}\), is very quickly lost for the approximate transfer operator of turbulent channels.
There are several reasons why \(\mathbf{\mathsf{P}}^{e}\) is not a perfect estimator of the true operator \(\widehat{\mathbf{\mathsf{P}}}^{e}\), but the most important one has to do with the existence of an attractor. Dissipative systems, such as turbulence, typically evolve towards a lower-dimensional attracting subset of the full phase space, and the observations used in (3) and (4) only reflect the statistics of this subset. As such, \(\mathbf{\mathsf{P}}^{e}\) is a restriction of \(\widehat{\mathbf{\mathsf{P}}}^{e}\) to the system attractor, and contains little or no information about how the system reacts outside it. It is thus useful in modelling the physics, where the interest is in how the system evolves in time, but it may need additional information in control applications, where we may wish to act in ways outside the attractor.
There are two ways in which the PFO can be used to analyse a complex dynamical system. The first one is to treat it as a matrix whose properties reflect the behaviour of the attractor as a whole. 'Stochastic' matrices like \(\mathbf{\mathsf{P}}^{c}\) or \(\mathbf{\mathsf{P}}^{e}\), with non-negative elements and unit column sums, have useful properties that have been extensively studied, especially if care is exercised in dealing with the zero entries that represent cells that are never visited by the system (Lancaster, 1969). Their best known property is that they possess a unit leading eigenvalue with a real eigenvector with non-negative entries, which can be interpreted as a probability distribution over the partition. For \(\mathbf{\mathsf{P}}^{e}\), this eigenvector satisfies \(\mathbf{q}_{1}(t+T)=\mathbf{\mathsf{P}}^{e}\mathbf{q}_{1}(t)=\mathbf{q}_{1}(t)\), and defines a probability density that remains invariant as the system evolves, and which is identical to the natural invariant density, \(\mathbf{q}_{\infty}\), mentioned earlier in this section. The subdominant eigenvalues control the approach to \(\mathbf{q}_{\infty}\) of initial distributions different from the natural one, as well as whether the attractor can be partitioned into approximately disjoint subsets (Froyland, 2005).
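In the same spirit, the invariant density can be recovered numerically as the eigenvector of the unit eigenvalue of the estimated operator. The sketch below assumes the `transfer_operators` helper above and is only meant to illustrate the check that this eigenvector coincides with the empirical occupancy \(\mathbf{q}_{\infty}\).

```python
import numpy as np

def invariant_density(Pe):
    """Leading (unit-eigenvalue) eigenvector of P^e, normalised to a probability."""
    w, V = np.linalg.eig(Pe)
    q = np.abs(V[:, np.argmin(np.abs(w - 1.0))].real)
    return q / q.sum()

# q_inf = invariant_density(Pe)                              # should match the occupancy
# q_emp = np.bincount(seq, minlength=Pe.shape[0]) / len(seq)
```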
As already mentioned, these are examples of global properties that apply to the full attractor. The same is true of other approximation strategies, such as proper orthogonal decomposition (POD, Berkooz _et al._, 1993) or dynamic-mode decomposition (DMD, Schmid, 2010), which use ergodicity to minimise global errors of reduced models. We are more interested here in local analyses that use the PFO as a tool for computing state-dependent conditional averages, and which give information about the expected short-term behaviour of the system in the neighbourhood of a particular cell. Our goal is to find whether some cells are more likely than others to form the basis for better predictions, and are thus'more causal'.
## 3 Application to minimal channels
Most of our analysis centres on a data set already used in Jimenez (2015). A pressure-driven spatially periodic turbulent channel flow is simulated between parallel plates separated by \(2h\). The wall-parallel periods of the computational box are \(L_{x}=\pi h/2\) and \(L_{z}=\pi h/4\), and the nominal friction Reynolds number is \(h^{+}=hu_{\tau}/\nu=950\), where \(x,y\) and \(z\) are the streamwise, wall-normal, and spanwise coordinates, respectively, and the corresponding velocity components are \(u,v\) and \(w\). Capital letters denote \(y\)-dependent ensemble averages, \(\langle\,\rangle\), as in \(U(y)\), and lower-case ones are perturbations with respect to this average. Primes are reserved for root-mean-squared intensities, and the kinetic energy of the fluctuations is defined as \(E=u^{2}+v^{2}+w^{2}\). The '\(+\)' superscript denotes 'wall' normalisation with the kinematic viscosity \(\nu\), and with the friction velocity \(u_{\tau}=\sqrt{\nu\partial_{y}U}\), evaluated at the wall. The code is standard fully dealiased Fourier-Chebyshev spectral, as in Kim _et al._ (1987), and the mass flux is kept constant. Time is usually normalised with the eddy turnover time \(h/u_{\tau}\), and is then denoted by an asterisk, \(t^{*}=u_{\tau}t/h\). More details can be found in Jimenez (2013).
To improve statistics, the simulation was extended in time to \(t^{*}\approx 650\), and sampled at a time interval between frames, \(\Delta t^{*}\approx 0.025\). Such simulations are minimal within a band of wall distances
\(y/h\approx 0.2-0.6\)(Flores & Jimenez, 2010), in the sense that a non-negligible fraction of the kinetic energy is contained in the first few largest wall-parallel Fourier modes. Closer to the wall, the flow contains a wider range of energy-containing scales, and cannot be considered minimal. Farther from it, the simulations cannot be directly compared to canonical turbulence, because some of the largest scales are missing. The range of wall distances mentioned above approximately includes a single largest structure that bursts irregularly. Since it was shown by Flores & Jimenez (2010) that the typical interval between bursts is \(t^{*}\approx 2\)-\(3\), the simulation analysed here contains several hundreds of bursts per wall, and about \(100\) samples per burst. Moreover, since the box is too small to allow healthy large scales in the central part of the channel, the two walls are treated as independent realisations (the cross-correlation coefficient is less than \(0.05\) for the variables discussed below). The total number of data snapshots is thus approximately \(5\times 10^{4}\).
If we define Fourier expansions of the three velocity components along \(x\) and \(z\) as
\[a(x,y,z)=\sum_{m,n}\widetilde{a}_{mn}(y)\exp[\mathrm{i}(k_{x}x+k_{z}z)], \tag{7}\]
where \(a\) is the variable to be expanded, \(k_{x}=2\pi m/L_{x}\), and \(k_{z}=2\pi n/L_{z}\), the Fourier coefficients are designated as \([mn]\). As mentioned above, only the largest structures at a given distance from the wall can be expected to be describable by relatively few degrees of freedom whose dynamics can be easily studied, and our analysis only retains the first few modes, \(m=0,1,2\) and \(n=-1,0,1\). Appendix A explains how modes with \(n\neq 0\) are used as combinations of the \(\pm n\) pair, resulting in two equivalent modes displaced spanwise by a quarter of a wavelength. Although spanwise homogeneity ensures that the interactions of these combinations with the \(n=0\) modes are statistically equivalent, they interact non-trivially among themselves, and both combinations are retained. They are designated, for example, as [21] and [21*]. Profiles of the cumulative variance of all the modes used in the paper are given in figure 2. They show that their overall energy is a comparatively small but non-trivial fraction of the total, although we will see later that they follow fairly independent dynamics. In addition, the retained modes account for approximately \(65\%\) of the tangential Reynolds stress, \(-\langle uv\rangle\) (not shown). Note that, because of the small computational box, there is substantial energy in the \([00]\) modes of \(u\) and \(w\), whose only fluctuations are temporal. They can approximately be considered as modelling the spatial variation of the mean velocity profile over wall patches of the order of the
Figure 2: The coloured lines are profiles of the cumulative variance of the harmonics retained in this paper. From bottom to top: \([00]\), \([01]\)+\([01^{\ast}]\), \([10]\), \([20]\), \([11]\)+\([11^{\ast}]\), \([21]\)+\([21^{\ast}]\). The solid black line is the total variance of the velocity component. (a) Streamwise velocity. (b) Wall-normal. (c) Spanwise.
size of the computational box.
This limited subset of data still contains a large number of degrees of freedom, because each Fourier component is a function of \(y\) with \(O(100)\) grid points. Even though we will see later that the wall-normal resolution can be reduced to \(O(10)\) points through judicious filtering, the raw degrees of freedom for each velocity component are \(O(100)\) complex numbers, and we mostly restrict ourselves to analysing the behaviour of a few integrated 'summary variables' that represent global properties of the velocity within the chosen band of wall distances. In particular, if we are interested in the band \(y\in(y_{0},y_{1})\), we follow Jimenez (2013, 2015) in using an integrated intensity,
\[I_{a,mn}^{2}=\frac{1}{y_{1}-y_{0}}\int_{y_{0}}^{y_{1}}|\widetilde{a}_{mn}^{+}|^ {2}\,\mathrm{d}y, \tag{8}\]
which stands for the velocity magnitude, and, when \(k_{x}\neq 0\), an average tilting angle
\[\psi_{a,mn}=-\arctan\left(\mathrm{Im}\,\frac{\int_{y_{0}}^{y_{1}}\widetilde{a}_{mn}^{\dagger}\partial_{y}\widetilde{a}_{mn}\,\mathrm{d}y}{k_{x}\int_{y_{0}}^{y_{1}}|\widetilde{a}_{mn}|^{2}\,\mathrm{d}y}\right), \tag{9}\]
where '\(\mathrm{Im}\)' is the imaginary part, and the dagger stands for complex conjugation. This angle varies from \(-\pi/2\) to \(\pi/2\), and describes the wall-normal structure of the phase of the Fourier mode.
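As an illustration of (8) and (9), a possible computation of the two summary variables for a single Fourier mode is sketched below. It is not the authors' implementation; the trapezoidal quadrature, the function name `summary_variables`, and the argument conventions are assumptions made here for clarity.

```python
import numpy as np

def _trapz(f, y):
    """Composite trapezoidal rule on a possibly non-uniform grid."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))

def summary_variables(a_mn, y, kx, y0, y1):
    """Integrated intensity (8) and tilting angle (9) of one Fourier mode.

    a_mn : complex wall-normal profile of the mode, already in wall units.
    y    : wall-normal grid; kx: streamwise wavenumber; (y0, y1): band.
    """
    band = (y >= y0) & (y <= y1)
    yb, ab = y[band], a_mn[band]
    dady = np.gradient(ab, yb)                       # wall-normal derivative
    I2 = _trapz(np.abs(ab)**2, yb) / (y1 - y0)       # eq. (8)
    num = np.imag(_trapz(np.conj(ab) * dady, yb))
    den = kx * _trapz(np.abs(ab)**2, yb)
    return np.sqrt(I2), -np.arctan(num / den)        # eq. (9), kx != 0
```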
Several other summary variables were considered, either based on physical arguments or in standard statistical methods (e.g. individual POD modes), but they did not add appreciably to the argument or to the conclusions. They are not discussed in the rest of the paper, except for the use of PODs as a filtering device to balance the wall-parallel and wall-normal resolution of the retained flow fields, as explained in appendix B.
Because the retained harmonics exclude the smallest scales, they can be trusted closer to the wall than the full flow, and all our results use an integration band \(y^{+}>40\) and \(y/h\leq 0.6\). Somewhat narrower or wider ranges were tested with little effect on the results.
Not all the summary variables defined in this way are mutually fully independent. Figure 3 presents their correlation coefficient,
\[C_{ab}=\frac{\left\langle(a-\langle a\rangle)(b-\langle b\rangle)\right\rangle }{\left\langle(a-\langle a\rangle)^{2}\right\rangle^{1/2}\left\langle(b- \langle b\rangle)^{2}\right\rangle^{1/2}}. \tag{10}\]
Several things stand out. The \(u\) and \(v\) components form reasonably well-correlated pairs, particularly among similar summary variables and Fourier modes, but most quantities involving \(w\) are not well correlated with \(u\) and \(v\), or among themselves. The correlation between the intensities of \(u\) and \(v\) is significant because, even if they involve integrated quantities rather than the variables themselves, they reflect the generation of the tangential Reynolds stress, \(-uv\). The higher modes, [11] and [21], tend to be better correlated among different variables than the lower ones, [10] and [20]. In particular, three of the highest correlations in figure 3 are \(C(I_{v01},I_{w01*})\approx 0.70\), \(C(I_{u11},I_{w11*})\approx 0.74\), and \(C(I_{v11},I_{w11*})\approx 0.59\), notwithstanding the generally poor correlation between the summaries of \(w\) and those of other velocity components. Interestingly, these correlations come in [\(m1\), \(m1\)*] pairs, representing flow structures offset from each other by a quarter of a spanwise wavelength. They correspond to the inclined 'rollers' that have often been described in wall-bounded flows.
Somewhat surprisingly, angles and intensities are generally uncorrelated, including the \((I_{v10},\psi_{v10})\) pair that was shown by Jimenez (2013, 2015) and Encinar & Jimenez (2020) to be particularly useful, because its joint probability distribution is traversed by the flow in a physically interpretable way.
This shows that correlations and coherence are different concepts. As a simple example, the temporal evolutions of \(\sin(t)\) and \(\cos(t)\) are orthogonal and uncorrelated, but they form a coherent pair in the sense that they traverse a one-dimensional circular sub-manifold of their phase space.
It could be tempting to use as summary variables the eigenvectors of the dominant eigenvalues of the matrix in figure 3. These combinations of variables optimally explain the variance of the data (Berkooz _et al._, 1993), but they turn out to be especially bad at describing the dynamics. This can best be understood by looking at the joint probability density of \((I_{v10},\psi_{v10})\) in figure 4(a). It is clear that \(I_{v10}\) does not explain \(\psi_{v10}\), nor vice versa, which is precisely why the pair can be used to define two-dimensional causal combinations.
We explore in the rest of the paper whether other interesting pairs of variables can be found.
### The transfer operator of the minimal channel
It should be clear from the previous discussion that the main problem in constructing the PFO for a given system is the choice of the underlying partition, and of the variables in which it is expressed. Figure 4 presents results for the minimal channel just described, using as variables the inclination angle of the wall-normal velocity, \(\psi_{v10}\), and its root-mean-squared amplitude, \(I_{v10}\), whose joint probability distribution is shown in figure 4(a). Most of the distribution is contained within the inner probability contour, but the outer fringe is interesting because Jimenez (2015) showed that its upper edge can be
Figure 3: Correlation coefficient among the different summary quantities. Large squares outlined in blue correspond to the three velocity components. Smaller squares outlined in grey are summary variables, and the smallest cells within each grey square are Fourier modes, in the order [01], [01*], [10], [20], [11], [11*], [21], [21*], from bottom to top and from left to right. The main diagonal has been blocked for clarity, as well as the inclinations for modes with \(k_{x}=0\), which are undefined.
modelled as a linearised burst in which the mean shear amplifies the velocity perturbations by tilting them forward (Orr, 1907; Jimenez, 2013), although the process by which the cycle is closed is less well understood.
The construction of the PFO starts by organising the \(15\times 13\) partition of the parameter space of figure 4(a) into a single vector of length 195, and constructing the two-time joint distribution, \(\textbf{Q}(t,\,t+T)\), from all the snapshots in the data sequence. The interval used in figure 4, \(T^{*}=0.078\), is chosen from the experience in Jimenez (2015) and Encinar & Jimenez (2020), and is the time taken by the system to traverse an increment \(\Delta\psi\approx 0.3\) along the upper edge of figure 4(a). It is also the time over which Jimenez (2015) shows that the flow can be linearly predicted in that region of the phase plane.
The columns with non-zero elements in **Q** are normalised using (4), and those with only zeros are discarded. The resulting estimated \(\textbf{P}^{e}\) is shown in figure 4(b), where each column is normalised to unity, and sorted in order of decreasing row sum for graphical purposes. The restriction of the full \(\widehat{\textbf{P}}^{e}\) to the on-attractor estimate \(\textbf{P}^{e}\) is done when discarding zeros at this step.
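A possible implementation of this collation step is sketched below; the equal-width binning, the function name `partition_2d`, and the commented usage with hypothetical `psi_v10` and `I_v10` arrays are assumptions made for illustration, and the helper `transfer_operators` is the one sketched in §2.

```python
import numpy as np

def partition_2d(psi, I, n_psi=15, n_I=13):
    """Map the (psi_v10, I_v10) time series onto a single flat cell index,
    mimicking the 15 x 13 = 195-cell partition of figure 4(a)."""
    e_psi = np.linspace(psi.min(), psi.max(), n_psi + 1)    # equal-width bins
    e_I = np.linspace(I.min(), I.max(), n_I + 1)
    i_psi = np.clip(np.digitize(psi, e_psi) - 1, 0, n_psi - 1)
    i_I = np.clip(np.digitize(I, e_I) - 1, 0, n_I - 1)
    return i_psi * n_I + i_I

# cells = partition_2d(psi_v10, I_v10)                      # hypothetical inputs
# Pe, Pc, Q = transfer_operators(cells, n_cells=15 * 13,
#                                lag=3)                     # ~ T* = 0.078 at dt* = 0.025
```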
Note that, although the discretised variables are treated as parts of a single long vector, the probabilities in **Q** correspond to the simultaneous occurrence of two independent variables, allowing us to study effects depending on two 'causes' (Pearl, 2009). The procedure could be extended to three- or higher-dimensional projections of phase space, and to the effects of the increasingly unlikely coincidence of three or more independent causes, but it quickly runs into the limitation of the number of available data. The \(5\times 10^{4}\) snapshots used here have to populate the \(138^{2}\) matrix \(\textbf{P}^{e}\) in figure 4(b), giving an average of 2.5 phase points per matrix element. In practice, they range from \(O(100)\) data for the better populated matrix elements to zero for elements outside the attractor. The limitation is not as strict for the smoother distribution in figure 4(a), which is essentially an eigenvector of the PFO (see SS2). Its dimension is just 138 cells, and each cell represents \(O(300\text{--}1000)\) data points. Tests with partitions of the order of \(10\times 10\) cells did not qualitatively change the results described below,
Figure 4: (a) Two-dimensional joint probability distribution, \(\boldsymbol{q}_{\infty}\), for the integrated inclination and amplitude of the [10] mode of the wall-normal velocity component (i.e., \(k_{x}=2\pi/L_{x},k_{z}=0\)), averaged over \(y^{+}>40\) and \(y/h<0.6\). The inclination is partitioned in 15 equal bins, and the amplitude in 13 bins. The red contours contain 30% and 95% of the probability mass, and only cells within the outer contour are plotted. (b) PF operator, \(\textbf{P}^{e}\), obtained by collating the variables in (a) into a single vector, offset by \(T^{*}=0.078\), and rearranged in order of decreasing column sum of the joint probability **Q** in (3). Rows and columns with zero sum have been eliminated. (c) \(L_{1}\)-norm Markov test for the PF matrices \(\textbf{P}^{e}\) and \(\textbf{P}^{c}\).
but attempts to use much finer partitions ran into problems at the interesting edge of the distribution.
As a consequence, we restrict ourselves to two-dimensional projections of the phase space, and explore in §3.2 which pairs of summary variables give more interesting results. This is supplemented in §3.3 by conditionally averaging other variables over these projection planes, recovering part of the three-dimensional dynamics.
In addition, to ensure that the noise in our results is a consequence of the discrete partition rather than of an insufficient number of data, the distribution in figure 4(a), and similar later ones, are only drawn within the probability isocontour containing \(95\%\) of the total probability mass. Each cell along this contour contains \(O(100)\) data snapshots. The global averages in §3.2 are also computed within this high-probability region, and the analysis was repeated with half the number of data, with similar conclusions.
Figure 4(c) tests the non-Markovian behaviour of \(\boldsymbol{\mathsf{P}}^{c}\) and \(\boldsymbol{\mathsf{P}}^{e}\) discussed in §2. It shows the relative Frobenius norm of the difference between \(\boldsymbol{\mathsf{P}}(nT)\) and \(\boldsymbol{\mathsf{P}}(T)^{n}\) for \(T^{*}=0.025\). The relative difference between uncorrelated stochastic matrices depends on the ratio between the standard deviation and the mean of individual matrix columns, but is approximately unity for cases such as those in figure 4. It is clear that the two matrices being tested become essentially uncorrelated after a few time steps, and from now on we use \(\boldsymbol{\mathsf{P}}(nT)\) as our basic operator.
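A sketch of this Markov test is given below. It simply compares the operator estimated at delay \(nT\) with the \(n\)-th power of the operator at delay \(T\), using the Frobenius norm; it assumes the `transfer_operators` helper of §2 and is not the code behind figure 4(c).

```python
import numpy as np

def markov_test(cells, n_cells, n_max, base_lag=1):
    """Relative norm of P(nT) - P(T)^n for n = 1 .. n_max, as in figure 4(c)."""
    P1, _, _ = transfer_operators(cells, n_cells, base_lag)
    err = []
    for n in range(1, n_max + 1):
        Pn, _, _ = transfer_operators(cells, n_cells, n * base_lag)
        err.append(np.linalg.norm(Pn - np.linalg.matrix_power(P1, n))
                   / np.linalg.norm(Pn))
    return np.array(err)
```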
Figure 5 shows how the PFO can be used to extract the probability distributions of the causes and effects of a given observation. Figure 5(a) assumes that we know that the system is within the cell marked with a solid circle at \(t=0\). The conditional probability distribution at \(t=T\) is given by the corresponding column of the transfer operator, \(\boldsymbol{\mathsf{P}}^{e}\), and is displayed in the figure in dashed blue contours. Conversely, the conditional probability distribution of causes at \(t=-T\) is the corresponding column of the backwards operator \(\boldsymbol{\mathsf{P}}^{c}\). It is displayed in solid black lines, and the difference among the two distributions illustrates the temporal evolution of the system in the clockwise direction of the figure, as in Jimenez (2015).
The segregation into forward and backward distributions does not hold for all cells. Figure 5(b) applies the procedure to a cell in the high-probability core of the invariant density distribution. Its forward and backward distributions are marked as in figure 5(a), but they overlap each other and are difficult to tell apart.
Figure 5(c) is a representation of this mean displacement for all the cells in the distribution. The arrows join the centre of each reference cell to the mean position of its effects after a given time interval. Figure 5(d) does the same for the causes, and both figures show a mean clockwise displacement of the system along the upper edge of the distribution (see figure 3b in Jimenez, 2015, for comparison). In addition to this circular displacement, the arrows spiral towards the centre of the distribution in figure 5(c), and outwards in figure 5(d). This tendency increases for longer time intervals, and is due to the non-Markovian component of the probability evolution.
Any random displacement from the periphery tends to move towards the most probable locations in the central part of the distribution, and random displacements into the periphery are most likely to come from the core. This is best seen in figures 5(e,f), which are computed in the same way as figures 5(c,d) after randomising the time stamps of the flow snapshots. In fact, since the effects and causes are in this case randomly chosen states of the system, their expected average coincides with the overall mean of the invariant distribution. These randomised figures are independent of the time interval.
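The conditional distributions and mean displacements of figure 5 reduce to simple operations on the columns of the estimated operators. The sketch below is an illustration under the assumption that `centres` holds the coordinates of the cell centres, with each variable normalised by its standard deviation; the function names are ours, and cells that are never visited should be masked in practice.

```python
import numpy as np

def conditional_distributions(Pe, Pc, cell):
    """Forward (effects) and backward (causes) conditional distributions of a
    single observation cell, as in figure 5(a): columns of P^e and P^c."""
    return Pe[:, cell], Pc[:, cell]

def mean_drift(Pe, centres):
    """Mean displacement of the effects of every cell (arrows of figure 5c).
    centres: (N, 2) array of cell-centre coordinates."""
    return Pe.T @ centres - centres   # row j: sum_i Pe[i, j] centres[i] - centres[j]
```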
Figure 5: (a) For the variables in figure 4(a), and an interval \(T^{*}=0.078\), the solid black contours are the probability distribution of possible \(T\)-precursors to an observation of the cell marked with a solid circle, and the dashed blue contours are the distribution of possible effects after \(T\). Contours contain 30% and 95% of the probability mass. (b) As in (a), for an observation in the core of the invariant density distribution. (c) Mean system displacements in the parameter plane. The coloured background is the invariant density, and the arrows join the cell taken as cause with the mean system location after time \(T\). (d) As in (c), but the arrows join the mean location of the systems that will pass through the cell taken as reference after time \(T\). The red contours contain 30% and 95% of the invariant density. (e,f) As in (c,d), but using randomised time stamps for the data.
### Quality indicators
We have seen in the previous section that the main problem in constructing the PFO is collecting enough data to populate the two-dimensional histogram, **Q**, making it unpractical to consider distributions over more than two independent variables. We have also mentioned that our strategy is to test all possible variable pairs in the hope of identifying couples whose statistical behaviour is optimal, but the 36 variables used in figure 3 can be paired into 630 possible combinations, and automating the search requires indicators that are simpler to implement than the visual inspection of the two-dimensional plots in figure 5. Four such indicators are discussed in this section.
The statistical uncertainty of the displacement vectors is addressed in figure 6(a), which displays the ratio between the standard deviation of the conditionally averaged displacement of the system over a
Figure 6: As in figure 4. (a) Ratio between the averaged displacement of the effects and their standard deviation. Each variable is normalised with its overall standard deviation to compensate for the different units. (b) Determinacy index (11) between the average displacement of causes and effects. Drawn for \(T^{*}=0.025\). (c) Hellinger segregation index (12) between the forward and backwards conditional distributions, as a function of the observation cell. (d) Kullback–Leibler information gain (14) from the distributions of the effects and the causes, measured in bits. Warm colours represent creation of information, and cooler ones represent information loss. All panels refer to the [10] mode of the wall-normal velocity, and use only cells within the 95% probability contour of \(\boldsymbol{q}_{\infty}\). In all of them, except (b), \(T^{*}=0.078\).
given time and its mean. To compensate for the different magnitudes of the two variables in the figure, which generally have incompatible units, each of them is normalised with its global standard deviation before computing the conditional statistics. The result is a measure of the error bars associated with each of the arrows in figure 5(c).
Having a small relative standard deviation does not guarantee that a quantity is physically relevant. Inspection of figure 5(c-f) reveals that deterministic and random evolutions behave differently with respect to the asymmetry between causes and effects. The displacement vectors of the causes and effects rotate in the same direction in figure 5(c,d), because both represent the deterministic evolution of the system. But the randomised vectors in figures 5(e,f) point in opposite directions, because they move from the conditioning cell towards the densest part of the distribution, independently of the direction of time. As a consequence, we can define a 'determinacy' index for an observation cell \(\mathbf{v}_{0}\) as the normalised inner product
\[C_{ce}(\mathbf{v}_{0},T)=\frac{(\mathbf{v}^{e}-\mathbf{v}_{0})\cdot(\mathbf{v}_{0}-\mathbf{v}^{c})} {\|\mathbf{v}^{e}-\mathbf{v}_{0}\|\|\mathbf{v}_{0}-\mathbf{v}^{c}\|}, \tag{11}\]
where \(\mathbf{v}^{e}-\mathbf{v}_{0}\) and \(\mathbf{v}_{0}-\mathbf{v}^{c}\) are, respectively, the conditionally averaged displacement of effects and causes over the time interval \(\pm T\). As in figure 6(a), variables are normalised with their standard deviation before computing (11). This index is an indication of how deterministic the evolution of the system is in the neighbourhood of \(\mathbf{v}_{0}\), and of how much information is gained by the observation of the variable pair. When the system is completely deterministic in the subspace being considered, \(C_{ce}\approx 1\), and when it is essentially random, \(C_{ce}\approx-1\). Figure 6(b) displays \(C_{ce}\) for the data in figure 5, and shows that the evolution of this particular Fourier mode in this parameter plane is deterministic almost everywhere.
Along the upper edge of the distribution, this agrees with the physically based conclusions of Jimenez (2015), but not along its lower edge, where both Jimenez (2015) and Encinar & Jimenez (2020) conclude that the average displacement is opposite to the predictions of the model that explains the upper edge, and that the uncertainty of the displacements is too large to trust their mean. The high uncertainty in this region is clear in figure 6(a), but figure 6(b) suggests that this part of the distribution is also deterministic. Part of the reason is the longer time interval used in figure 6(a) compared to 6(b). The apparent randomness of the evolution increases for longer intervals, as the non-Markovian behaviour takes over. The determinacy index is almost unity in figure 6(b), where the displacements are of the order of one distribution cell, but decreases to \(C_{ce}\approx 0.8\) when the figure is drawn for the more physically relevant time interval used in figure 6(a), and to \(C_{ce}\approx 0.5\) for the even longer interval used in Jimenez (2015). Encinar & Jimenez (2020), who use a different method from the one above, and a different set of data, compute a figure of merit equivalent to the relative dispersion in figure 6(a). Normalising their time offset with the average distance, \(\overline{y}\), from the wall of their filtered fields (Flores & Jimenez, 2010; Jimenez, 2015), it varies between \(u_{\tau}T/\overline{y}=0.048\) and \(0.19\). The resulting standard deviations are negligible for the shortest of those intervals, but large enough to reverse some of the displacements for the largest one. When these values are applied to the present case, assuming \(\overline{y}\approx 0.3\) for our integration band, the time interval in figure 6(b) is \(u_{\tau}T/\overline{y}=0.087\), and that in figure 6(a) is \(u_{\tau}T/\overline{y}=0.26\), explaining the apparent discrepancy between figures 6(a) and 6(b).
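A direct transcription of (11) in terms of the estimated operators is sketched below, again with hypothetical names and the normalised `centres` array used above.

```python
import numpy as np

def determinacy_index(Pe, Pc, centres):
    """Determinacy index (11) for every conditioning cell."""
    d_eff = Pe.T @ centres - centres      # conditionally averaged effect drift
    d_cau = centres - Pc.T @ centres      # conditionally averaged cause drift
    num = np.sum(d_eff * d_cau, axis=1)
    den = np.linalg.norm(d_eff, axis=1) * np.linalg.norm(d_cau, axis=1)
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```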
Figure 6(c) quantifies the temporal segregation between the conditional probability distributions of causes and effects in figure 5(a,b). The distance between two normalised probability distributions \(\mathbf{q}^{(1)}\) and \(\mathbf{q}^{(2)}\) can be characterised by the Hellinger norm (Nikulin, 2001), defined as
\[H^{2}(\mathbf{q}^{(1)},\mathbf{q}^{(2)})=\tfrac{1}{2}\sum_{j}\left(\sqrt{q_{j}^{(1)}} -\sqrt{q_{j}^{(2)}}\right)^{2}, \tag{12}\]
which vanishes for \(\boldsymbol{q}^{(1)}=\boldsymbol{q}^{(2)}\), and reaches its maximum, \(H=1\), for disjoint distributions. In the case of figure 5(a,b) and 6(c), the distance between the conditional distributions of causes and effects varies from \(H\approx 0.9\) at the edge of the density distribution, where they are clearly different, to \(H\approx 0.1\) at the centre, where past and future are almost indistinguishable.
The information provided by the indices (11) and (12) is related but not identical. While a high value of (12) implies that causes and effects are different, a high value of (11) also shows that the directions of the mean drift associated to each of them are similar, and that the flow of probability can be described as a smooth vector field.
When figures 6(a-c) are considered together, they suggest that the top-right and top-left edges of the probability distribution are populated by systems which evolve in a fairly deterministic manner, while the lower edge of the distribution, and especially its central core, are more random.
Finally, figure 6(d) addresses the question of whether this evolution has any effect on the probability distribution of the variables used in this section. In essence, whether the effects conditioned to a given cell are more or less organised than its causes. The Kullback-Leibler (KL) information of a distribution \(\boldsymbol{q}^{(1)}\), relative to a reference distribution \(\boldsymbol{q}^{(2)}\), is defined as
\[K(\boldsymbol{q}^{(1)},\boldsymbol{q}^{(2)})=\sum_{j}q_{j}^{(1)}\log_{2}(q_{j} ^{(1)}/q_{j}^{(2)}), \tag{13}\]
which is measured in bits, is always non-negative, and only vanishes when \(\boldsymbol{q}^{(1)}=\boldsymbol{q}^{(2)}\). Intuitively, it describes how much more organised is \(\boldsymbol{q}^{(1)}\) compared to \(\boldsymbol{q}^{(2)}\). Note that (13) is only finite if the support of \(\boldsymbol{q}^{(1)}\) is contained within the support of \(\boldsymbol{q}^{(2)}\), so that \(K\) can also be understood as a measure of how much information is gained by restricting \(\boldsymbol{q}^{(2)}\) to one of its subsets. Here, we will always use as reference the invariant distribution \(\boldsymbol{q}_{\infty}\), so that \(K\) is guaranteed to exist both for the distribution \(\boldsymbol{q}^{c}\) of the conditional causes and for the distribution \(\boldsymbol{q}^{e}\) of the effects. This choice also implies that a distribution with \(K=0\) is statistically indistinguishable from \(\boldsymbol{q}_{\infty}\), and represents an unconstrained set of phase points. The assumption that the system is restricted to a single cell at \(t=0\) almost guarantees that information is lost when this concentrated distribution is allowed to spread in the past or in the future, but the information contained in the two distributions cannot be compared directly, because they do not generically share a common support. Figure 6(d) displays the difference,
\[K^{ce}=K(\boldsymbol{q}^{e},\boldsymbol{q}_{\infty})-K(\boldsymbol{q}^{c}, \boldsymbol{q}_{\infty}), \tag{14}\]
between the information of conditional effects and of conditional causes with respect to the reference. It is positive along the left (growth) edge of the distribution, and negative along the right (decay) edge, suggesting that coherence is first created and later destroyed as the system drifts clockwise. Because the system is stationary, the two effects cancel, and the mean generation of information vanishes.
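For reference, (12) and (14) admit equally short transcriptions; the sketch below assumes normalised distributions whose support is contained in that of \(\boldsymbol{q}_{\infty}\), as discussed above, and the function names are ours.

```python
import numpy as np

def hellinger(q1, q2):
    """Hellinger distance (12) between two normalised distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(q1) - np.sqrt(q2))**2))

def kl_gain(q_eff, q_cau, q_inf):
    """Information gain (14): K(q^e, q_inf) - K(q^c, q_inf), in bits."""
    def K(q, ref):
        m = q > 0                    # support of q, assumed inside that of ref
        return np.sum(q[m] * np.log2(q[m] / ref[m]))
    return K(q_eff, q_inf) - K(q_cau, q_inf)
```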
However, the real power of the indicators is in the large-scale screening of less obvious variable pairs. Figure 7 displays the value of the determinacy (11) and segregation (12) indices for all the combinations of modal inclination and intensity of the three velocity components, averaged over the corresponding distributions. Not surprisingly, the most deterministic combinations are those involving the inclination and intensity of the same Fourier mode, but it is interesting that \(v_{10}\), which was the mode selected on physical grounds in Jimenez (2015), is also the most deterministic by a substantial margin. Slightly less deterministic is \(v_{20}\), which is just a harmonic of \(v_{10}\) for which the same theory applies. The meandering modes, \(v_{11}\), \(v_{21}\), \(u_{11}\), \(u_{21}\), are less organised but not fully incoherent. The spanwise velocity is not well described by an inclination and an intensity, but an interesting pair is \((I_{v11},I_{w11*})\), or vice versa, which was already identified as an inclined roller from the correlations in figure 3. Its appearance in the two indicators in figure 7 shows that this roller has its own causal
dynamics, and that the same is true for the second harmonic, \((I_{v21},I_{w21*})\). However, the streamwise-uniform roller \((I_{v01},I_{w01*})\) does not appear in figure 7, even if it is one of the strongest couplings in the correlations in figure 3, and one of the largest contributors to the fluctuation energies in figure 2. Such two-dimensional structures do not interact with the shear and, even if they may grow to be strong, have little dynamics of their own.
Figure 8 displays the drift diagram and two quality indicators for the \((I_{v11},I_{w11*})\) roller. The two variables are relatively well correlated, probably by continuity, and the distribution rotates counter-clockwise. This implies that the roller is first created as an ejection (or sweep) of the wall-normal velocity, and later spreads spanwise. Figure 8(c) shows that, as in figure 6(d), coherence is created
Figure 8: As in figures 5 and 6, for the roller variables, \((I_{v11},I_{w11*})\). (a) Quiver plot of the effects, as in figure 5(c). (b) Determinacy index (11), as in figure 6(b). (c) Kullback–Leibler information gain (14), as in figure 6(d).
Figure 7: As in figure 3. (a) Determinacy index (11), averaged over the invariant distribution for different combinations of modal inclination and intensity. (b) Segregation index (12). The main diagonal and the inclinations of the \(k_{x}=0\) modes are blocked to magenta in both cases.
when the structure strengthens, and destroyed when it weakens.
### Conditional averages
The analysis in the previous sections reveals which variable pairs evolve in a deterministic way, but says little about the associated flow fields. A first step in that direction is figure 9, which displays the averages of other flow variables conditioned to the basic \((\psi_{v10},I_{v10})\) pair. Most conditional averages provide relatively little information. They are either distributed almost uniformly over the invariant probability distribution, or track the evolution of the base variables, saying little more than that strong structures are strong in most variables. A few are more interesting.
Figure 9(a) shows that the streamwise velocity becomes more non-uniform in the streamwise direction (\(u_{10}\)) as the wall-normal velocity bursts. It follows from the choice of conditioning variables that this also holds for the wall-normal velocity, but is not true of the spanwise component (not shown). The coupling of \(u_{10}\) and \(v_{10}\) is required by continuity, and the two variables are part of the two-dimensional burst described in Orr (1907) and Jimenez (2013). However, self-sustaining turbulence requires three-dimensionality, and this is provided by the \(k_{z}\neq 0\) modes in figure 9(b). This figure displays the modes of \(u\) that form a possibly non-uniform streak. They grow along the decay leg of the burst, and are therefore probably consequences, rather than precursors of the burst. It can be shown that most of the growth in figure 9(b) is due to the \([01]\) mode, which measures the intensity of a streamwise-uniform streak, and is also responsible for most of the asymmetry of the figure along the \(\psi\) axis. The higher modes, \([11]\) and \([21]\), only grow weakly, and do so along the ascending leg of the burst. On the other hand, the cross-flow \((v,w)\) roller grows along the descending branch, as shown in figure 9(c) for the \([11]\) harmonic. This is also true for the \([01]\) streamwise roller, which has similar intensity, but much less for the \([21]\) mode, which is weaker and less coherent. In all these cases, the two signs of \(k_{z}\) are grouped in the figure.
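The conditional averages of figure 9 are plain bin averages over the same partition; a minimal sketch, assuming the flat cell index of §3.1 and a hypothetical array `values` holding the quantity to be conditioned (e.g. \(I_{u10}\)), is given below.

```python
import numpy as np

def conditional_average(cells, values, n_cells):
    """Average of a flow quantity conditioned on the cell of the partition."""
    sums = np.bincount(cells, weights=values, minlength=n_cells)
    counts = np.bincount(cells, minlength=n_cells)
    return np.divide(sums, counts,
                     out=np.full(n_cells, np.nan), where=counts > 0)
```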
When the intensity of the \(x\)-dependent modes is substituted by a non-uniformity index,
\[\Omega_{a,mn}=I_{a,mn}/I_{a,0n}, \tag{15}\]
figure 9 changes somewhat. The growth of the roller in figure 9(c) is transferred to the bottom of the distribution. Its non-uniformity begins with \(w_{11}\) in the descending leg of the burst, and moves to \(u_{11}\) and \(v_{11}\) along its left-going bottom edge. The wavy quasi-streamwise roller is very weak in that region, but also very non-uniform, and it is tempting to conclude that this waviness is part of what eventually triggers a subsequent burst.
Figure 9(d-f) shows conditional averages in the plane of the \((I_{v11},I_{w11*})\) meandering roller. Figure 9(d) is equivalent to figure 9(a), and confirms that the roller is a three-dimensional structure in which the streamwise velocity increases as \(v_{11}\) and \(w_{11*}\) do. The effect is much stronger for \(u_{11}\) than for \(u_{11*}\), showing that the active streamwise velocity is collocated with the wall-normal component, \(v_{11}\), rather than with \(w_{11*}\). The three-dimensionality extends to other harmonics, and figure 9(e) shows that one effect is to enhance the streamwise non-uniformity of the spanwise-uniform component \(u_{10}\). This figure is dual to figure 9(a), and suggests that the intensification of the roller takes place along the upper edge of the \((\psi_{v10},I_{v10})\) burst, although where exactly cannot be decided from this representation. For example, the conditional \(I_{v10}\), which would be a direct indicator of the position along the \((\psi_{v10},I_{v10})\) burst, does not produce a clear signal in the \((I_{v11},I_{w11*})\) plane. Figure 9(f) shows the conditional \(k_{z}\neq 0\) component of \(u\), which is the same quantity in figure 9(b), and gives more information. Figure 9(b) shows that the streak of \(u\) grows along the descending leg of the clockwise evolution of \((\psi_{v10},I_{v10})\), while figure 9(f) shows that it grows along the ascending leg of the counter-clockwise
evolution of \((I_{v11},I_{w11*})\). Both legs therefore presumably correspond to the same stage of the flow, in agreement with the location in figure 9(c) of the high roller intensity.
Also interesting are figures 9(g-i), which show the conditional tangential Reynolds stress, \(\theta_{mn}=-\text{Re}(\widetilde{u}_{mn}\widetilde{v}_{mn}^{\dagger})\), integrated as in (8). Although the quantity in the equations of motion is \(\partial_{y}\theta\), rather than \(\theta\) itself, a positive stress tends to make the mean velocity profile more turbulent, steeper near the wall, and negative ones tend to lower the wall shear. Figure 9(g) displays the conditional Reynolds stress due to all the retained harmonics, which we saw in §3 to account for approximately two thirds of the total flow stress. It is positive everywhere, and stronger in the upper edge of the
Figure 9: (a-c) Average intensity of different velocity modes, conditioned to the variable pair \((\psi_{v10},I_{v10})\). (a) \(u_{10}\). (b) All modes of \(u\) with \(k_{z}\neq 0\). (c) \(\sqrt{v_{11}^{2}+w_{11}^{2}}\). (d-f) Conditioned to \((I_{v11},I_{w11*})\). (d) \(u_{11}\). (e) \(u_{10}\). (f) All modes of \(u\) with \(k_{z}\neq 0\). (g-i) Conditional tangential Reynolds stress, \(\theta\). (g) All the retained harmonics. (h) \(\theta_{10}\). (i) All harmonics with \(k_{z}\neq 0\).
distribution, where other flow features are also strong. It is also asymmetric in the \((\psi_{v10},I_{v10})\) plane, stronger during the growth of the burst than along its decay. The reason is shown in figure 9(h), which displays the stress due to the Orr (1907) harmonic, \(\theta_{10}\). It is almost antisymmetric in \(\psi_{v10}\), positive during the growth of the burst, and negative during its decay. This is what makes the burst transient, since its decay undoes the effect of the growth. The [10] mode is the only one that generates counter-gradient stresses. Figure 9(i) displays the tangential stress from all the other modes. They are positive everywhere, and the net effect of the burst, although transient in itself, is to generate a three-dimensional structure along its decay leg.
Although not shown in the figure, it is interesting that, when \(\theta_{10}\) is conditioned to the \((I_{v11},I_{w11*})\) plane, it is also negative in the uppermost tip of the roller distribution, confirming our previous conclusion that strong rollers correspond to the burst decay.
In a loose sense, both the down-going right-hand edge of the probability distribution in figure 9(a-c) and the up-going right-hand edge of figure 9(d-f), portray the evolution of a non-uniform \(u\)-streak into a \((v,w)\) roller. In the top row of figure 9 we see the decay of the streak, and in the middle one we see the growth of the roller. The comparison of the conditioned variables in both sets of figures suggests that the coherent right-hand edge of the evolution of the roller corresponds to the decay of the burst, so that the approach to the lower-right vertex of the triangle in figure 9(a-c) corresponds to the top of the distribution in figure 9(d-f). In this interpretation, the low-intensity evolution of the burst along the bottom edge of the distribution in figures 9(a-c) should correspond in part to the decay of the roller along the left edge of the distribution in figures 9(d-f). This phase of the burst will be examined in more detail in the next section.
## 4 Conditional trajectories
It may be useful at this point to recall that the maps in figures such as 9 are not the PFO, but its leading eigenvector rearranged as a two-dimensional matrix for human convenience. The operator itself is the larger matrix in figure 4(b), or a stack of such matrices at different time intervals. The maps in figure 9 are probability distributions of the state of the flow in phase space, and the delta-function distribution used to generate figure 5(a,b) represents a measurement that collapses that probability to a single combination of variables that describes a flow configuration. For the rest of the paper we will study these collapsed probabilities, and the statistical properties of the trajectories connecting two or more such states. The PFO then becomes mostly a guide to which cells can be expected to provide more interesting information, and to how the trajectories connecting them should be chosen.
For example, consider the statistical characterisation of trajectories that satisfy conditions at two different times, such as in the classical recurrence test in which approximate periodic behaviour is identified by monitoring when a trajectory approaches itself after a given delay (Kawahara _et al._, 2012). The statistical equivalent is whether the conditional probability distribution of the effects of a given cell includes the conditioning cell after some delay. In practice, this reduces to identifying among the diagonal of the matrix \(\textsf{P}^{c}(T)\) those cells that result in maximum probability of recurrence for each time delay. An example is figure 10(a), which plots the maximum of the diagonal of the PFO for the \((\psi_{v10},I_{v10})\) plane. It is unity at \(T=0\), when trajectories are still at their initial position, and quickly decays to about \(0.05\), which is the probability that a random trajectory intersects some cell in the core of the invariant distribution \(\boldsymbol{q}_{\infty}\). However, the curve peaks again at \(T^{*}=0.66\), and, interestingly, at twice that delay, \(T^{*}=1.35\). Moreover, since the PFO contains information about all the cells in the distribution, it allows us to recover which conditioning cell is responsible for the probability maximum,
and therefore which cell has the highest probability of recurring. This is marked as 'B' (for burst) in figure 10(b), and turns out to be an extreme high-amplitude event beyond the 95% threshold used up to now as the practical edge of our distribution. In the trajectories discussed in this section, closed symbols mark the position at \(t=0\), and the open triangles along some trajectories are equispaced by \(t^{*}=0.025\).
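The recurrence test of figure 10(a) amounts to scanning the diagonal of the forward operator as a function of the delay; the following sketch, which assumes the `transfer_operators` helper of §2, returns both the recurrence curve and the most recurrent cell at each delay.

```python
import numpy as np

def recurrence_curve(cells, n_cells, lags):
    """Maximum diagonal entry of P^e(T) for each delay, as in figure 10(a)."""
    peaks, best_cell = [], []
    for lag in lags:
        Pe, _, _ = transfer_operators(cells, n_cells, lag)
        d = np.diag(Pe)
        peaks.append(d.max())
        best_cell.append(int(d.argmax()))
    return np.array(peaks), np.array(best_cell)
```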
Two other cells are labelled in figure 10(b), marking the right- ('R') and left-hand ('L') corners of the triangular distribution. We will maintain this nomenclature for the rest of the section, with some adjustments in the location of the cells. Trajectories spanning the up-going (L \(\rightarrow\) B), down-going (B
Figure 10: (a) Recurrence test for trajectories in the \((\psi_{v10},I_{v10})\) plane. See text for explanation. (b) Black symbols are the mean trajectory passing through the most probable recurrent cell, for the period marked by the dark circle in (a). The red symbols are the only recurrent orbit crossing that cell, which is thus responsible for the local maximum in (a). (c) As in (b), in the \((I_{v11},I_{w11*})\) plane of the meandering roller. (d) As in (b) but all the non-recurrent trajectories are plotted as simple lines, each one starting with a solid circle.
\(\rightarrow\) R), and bottom (R \(\rightarrow\) L) legs of the periphery of the triangle will be denoted as growth, decay and recovery trajectories, respectively. The growth and decay legs form the burst (Jimenez, 2013). The upper half of these two legs is deterministic, and can be predicted linearly (Jimenez, 2015), but linearised bursts do not recur. There is no obvious theory for the bottom recovery leg, which is required if bursting is to explain self-sustaining turbulence. Most of this section is dedicated to analysing the recovery process.
Out of our \(5\times 10^{4}\) snapshots, only eight trajectories cross the extreme B in figure 10. Most of them do not recur, and the line of open black triangles in figure 10(b) traces the average conditional trajectory during the recurrence period. As is true for most trajectories, it approaches the high-probability core of the distribution. However, individual trajectories can be tested, and the line of red triangles in figure 10(b) shows the trajectory responsible for the peak in figure 10(a). Centring ourselves on this orbit, its growth, decay and recovery legs last approximately \(T^{*}=0.25,\,0.25\) and \(0.16\), respectively, for a total recurrence time \(T^{*}\approx 0.66\), as in figure 10(a). The total length of its two bursting legs, \(T^{*}\approx 0.5\), also agrees with the width of the bursting correlations for this flow in Jimenez (2015).
Although the recurrent trajectory very approximately closes on itself in the plane of figure 10(b), its recurrence is weaker when more variables are included. For example, figure 10(c) plots the same trajectories in the plane of the \((I_{v11},I_{w11*})\) roller. As before, the recurring trajectory is displayed as red triangles. It loiters for a while near the high-probability core of the \((I_{v11},I_{w11*})\) distribution, and joins the counter-clockwise coherent circulation during the growth period of the \((\psi_{v10},I_{v10})\) burst. The black triangles of the mean trajectory are again very different from the recurrent one, and never join the coherent circulation.
However, the divergence between the recurrent trajectory and the other trajectories that cross B is mostly a long-time phenomenon. This can be seen in figure 10(d), which is equivalent to figure 10(b) but plots all the individual trajectories going through B. The recurrent orbit is still plotted with open red triangles, while other trajectories are plotted as lines of different colours without symbols, in no particular order. All the trajectories in the figure start from the same cell and behave similarly for a while. It is only after they have decayed to approximately half their initial \(I_{v10}\) that they deviate towards the high-probability core. A similar plot in the \((I_{v11},I_{w11*})\) plane, not shown, shows the same trends, although they are slightly more complicated because trajectories do not start in the same cell any more. The recurrent orbit is special in that it starts with a relatively weak and decaying roller, which only strengthens towards the end of the orbit. This is probably the reason why its \((\psi_{v10},I_{v10})\) projection is able to proceed undisturbed for a relatively long time.
That the approximately recurrent orbit includes an infrequent extreme event suggests that it may not be very relevant to the flow statistics, or even to its evolution. Its interest resides in that it includes a recovery leg that traverses the lower edge of the parameter space in the 'wrong' direction, showing that coherent recovery connections exist, and suggesting how to study them. The question is whether other trajectories exist that cross the lower edge of the distribution in the same direction, even if they do not follow the recurrent orbit around the full \((\psi_{v10},I_{v10})\) plane.
Figures 11 and 12 address this question. Figures 11(a,b) are equivalent to the cause and effect distributions in figure 5(a), but applied to the R and L corner cells for several values of the delay interval. Even if we saw in figure 6(a) that the lower edge of the distribution is a region of large scatter for the system drift, figure 11(a) shows that the flow propagates steadily from R towards L as the delay increases, and that it only spills slowly into the high probability core of the distribution. The same is true for the causes of cell L in figure 11(b). When similar plots are drawn for a cell in the central part of the lower edge of the distribution, the drift is slower and the scatter higher (not shown), suggesting that the propagation in figure 11(a,b) can be interpreted as a direct connection
between R and L that bypasses some of the intermediate states of the probability distribution.
This is further analysed in figures 12(a-c). Each of these panels represents a leg of the burst that traverses the periphery of the distribution in the \((\psi_{v10},I_{v10})\) plane. For example, figure 12(a) represents the growing leg, from the cell marked as L to the one marked as B (note that the latter has been chosen substantially lower than in figure 10, to bring it within the 95% probability contour). It takes the system \(T^{*}\approx 0.25\) to move from one to the other, and the grey lines are the \(O(500)\) trajectories that pass through L at \(t=0\). Of these, only the six trajectories drawn in blue also pass through B in the interval \(T^{*}=0.2\)-\(0.3\). Figures 12(b,c) are similarly drawn for the decay and recovery legs of the burst, respectively. The trajectories in these three legs are not continuations of each other. For example, there is a single trajectory linking cells L, B and R, in that order, three trajectories linking R, L and B, and no trajectory linking B, R and L. The only trajectory approximating a full cycle in this plane is the one in figure 10.
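The selection of the blue trajectories in figure 12 is a simple filtering operation on the discretised histories. The sketch below assumes that each trajectory has already been reduced to the sequence of partition cells it visits; the variable names are placeholders, not those of the actual post-processing code.

```python
import numpy as np

# cells: array of shape (n_traj, n_steps) with the partition cell visited by
# each trajectory at each snapshot; dt is the snapshot spacing.
def select_leg(cells, dt, cause_cell, effect_cell, t_window):
    """Trajectories that start in cause_cell at t = 0 and visit effect_cell
    at some time inside the interval t_window = (t0, t1)."""
    t = np.arange(cells.shape[1]) * dt
    in_window = (t >= t_window[0]) & (t <= t_window[1])
    through_cause = cells[:, 0] == cause_cell
    through_effect = (cells[:, in_window] == effect_cell).any(axis=1)
    return np.flatnonzero(through_cause & through_effect)

# Tiny synthetic example: 3 trajectories over 5 snapshots.
cells = np.array([[2, 2, 5, 7, 7],
                  [2, 3, 3, 4, 4],
                  [1, 2, 7, 7, 5]])
print(select_leg(cells, dt=0.1, cause_cell=2, effect_cell=7, t_window=(0.2, 0.4)))  # -> [0]
```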
Figures 12(d-f) display the mean evolution of the flow field, reconstructed from the retained harmonics along the blue trajectories in figures 12(a-c), respectively, with time increasing from top to bottom. Because the problem is homogeneous in \(x\) and \(z\), the flow in the different trajectories has to be aligned to a common origin before averaging. This is done at \(t=0\) by translating each case so that the deepest low-speed perturbation of the streamwise velocity is located at the centre of the display box. To facilitate visual tracking, the frame of reference is then advected with a velocity \(U_{ad}^{+}=8\), which keeps the structures approximately stationary near the wall.
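The alignment preceding the averaging can be summarised as follows. The sketch assumes a snapshot stored on a periodic (x, z, y) grid (an assumption about the data layout) and centres it on the deepest near-wall low-speed perturbation; the subsequent advection of the frame at \(U_{ad}^{+}=8\) is a separate bookkeeping step not shown here.

```python
import numpy as np

# u: streamwise-velocity perturbation on a periodic (x, z, y) grid (assumed
# layout); y_index selects the near-wall plane used for centring.
def align_snapshot(u, y_index):
    plane = u[:, :, y_index]
    ix, iz = np.unravel_index(np.argmin(plane), plane.shape)
    shift_x = plane.shape[0] // 2 - ix
    shift_z = plane.shape[1] // 2 - iz
    # Periodic shifts that move the deepest low-speed point to the box centre.
    return np.roll(np.roll(u, shift_x, axis=0), shift_z, axis=1)

# Synthetic stand-in for a snapshot, just to show the call pattern.
u = np.random.default_rng(2).standard_normal((32, 16, 8))
u_aligned = align_snapshot(u, y_index=1)
```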
The forward tilting of the \(v\)-structures by the shear is clear in the growth and decay legs of the burst (figures 12d,e). The recovery leg in figure 12(f) is harder to interpret, in part because everything is much weaker than in the other two cases, but the clearest difference is that, while the growth and decay are dominated by high-streamwise-velocity structures near the wall, the high speed regions of the recovery leg are predominantly farther into the flow. The near-wall layer only contains a weak discontinuous low-speed streak. Since the structures in figure 12(d-f) are defined with respect to a long-term averaged velocity profile, this difference in organisation implies that the box-averaged
Figure 11: Probability distributions of the causes and effects for several delay intervals, as in figure 5(a,b). (a) Effects conditioned to the cell marked as R in the lower right-hand corner of the invariant density distribution. From blue to yellow, \(T^{*}=0.025(0.025)0.15\). (b) As in (a), for the causes leading to the cell marked as L in the lower left-hand corner.
Figure 12: (a-c) Grey lines are the phase trajectories that cross a causal cell at \(t=0\). Blue ones are those that also cross the effect cell within a given range of time intervals. (a) Growth leg of the burst, from L \(\rightarrow\) B in \(T^{*}\) = 0.2 - 0.3. (b) Decay from B \(\rightarrow\) R in \(T^{*}\) = 0.2 - 0.3. (c) Recovery leg from R \(\rightarrow\) L in \(T^{*}\) = 0.15 – 0.175. (d-f ) Averaged evolution of the flow along the blue trajectories in (a-c), respectively, reconstructed from the retained harmonics. Flow is from left to right, and time from top to bottom. Translucent orange isosurface, \(u^{+}=0.7\); translucent grey, \(u^{+}=-0.7\); cyan, \(v^{+}=0.4\); purple, \(v^{+}=-0.4\). (d-e) \(t^{*}=0.075(0.05)0.225\). (f)\(t^{*}=0(0.05)0.15\), and \(v^{+}=\pm 0.3\).
instantaneous velocity profile differs among legs.
This is confirmed in figures 13(a,b), which show that the burst is characterised by high velocities and steep profiles near the wall. Figure 13(c) shows that the recovery takes place in regions where the velocity is lower near the wall, and the shear is displaced away from it. The flow enters the recovery leg with a steep velocity gradient at \(y/h\approx 0.2\), which decays as the recovery proceeds (red to blue). Figures 13(d-f) show the evolution of the fluctuation energy at the same times. During the burst, in figures 13(d,e), the fluctuations stay relatively close to the wall (\(y/h\approx 0.12\)), and the energy peak extends all the way to it. During the recovery leg, in figure 13(f), a new peak grows at the location of the detached shear layer (\(y/h\approx 0.22\)), and forms a new secondary peak at the wall.
Note that the streamwise velocity perturbations in most frames of figure 12(d-f) are short and discontinuous, in agreement with the recent evidence in Jimenez (2022) that long streaks are not directly linked to the wall-turbulence regeneration cycle.
The conclusion that the recovery of the burst depends on a low-shear region near the wall is inconsistent with the intuitive notion that, since the shear is the ultimate source of turbulent energy, a higher shear should be a prerequisite for higher turbulent activity. Indeed, it has been known for some time that turbulent intensity and shear are correlated (Marusic _et al._, 2010; Jimenez, 2012; Mathis _et al._, 2013), but the implied model is usually that the turbulence intensity evolves to be in equilibrium with the shear. Our discussion suggests that the causality is the other way around (figure 9), and that the shear is created by the Reynolds stresses of the fluctuations. In fact, since the mass flux in our channels is constant, a low shear near the wall implies a higher one further up. What our previous discussion suggests is that the decay of a burst induces a mild shear at the wall, which in turn steepens the velocity profile away from it. The steep off-wall profile is what triggers the new burst, which is the
Figure 13: Box-averaged velocity profiles, for the trajectories marked in blue in figure 12. (a-c) Mean velocity, \(\widetilde{u}_{00}\). (d-f) Kinetic energy of the retained harmonics. (a,d) Growth leg. (b,e) Decay. (c,f) Recovery. Time increases from red to blue, separated by \(t^{*}=0.025\) among curves.
blue/magenta structure growing away from the wall in the downstream part of figure 12(f). Flores & Jimenez (2010) showed that stress waves travel to and from the wall in small-box simulations such as the present one, with a wall-normal velocity of the order of \(u_{\tau}\). It is difficult to decide from such kinematic observations which of the two directions is the primary causal one, but the discussion above suggests that at least the descending wave is causal, in agreement with previous reports that the structure of the logarithmic layer in wall-bounded flows is relatively independent from the details of the wall, which is therefore not the primary, or at least not the only, seat of causality (Townsend, 1976; Mizuno & Jimenez, 2013; Kwon & Jimenez, 2021).
## 5 An interventional experiment
The analysis in the previous sections gives strong hints about which variables evolve coherently in wall-bounded flows, and about how this evolution can be interpreted in terms of causality within the attractor in phase space. However, the second question posed in the introduction, whether this information has any practical value, generally takes us outside the attractor, and can only be answered by more classical interventional experiments or by theoretical models. Strictly speaking, this is beyond the scope of the present paper, whose goal is to develop a methodology and to give examples of how it can be used to motivate more classical work, but we present in this section an example of how that work might proceed, building on our discussion of figure 12.
The main result of that discussion is that a particular, weakly sheared, configuration of the mean velocity profile is a requirement for burst recovery. This is not a new idea: what is probably the oldest theory of how wall turbulence is controlled is based on the two-way interaction between shear and turbulence intensity (Malkus, 1956). It is also known that forcing a locally steeper or shallower velocity profile leads to equilibrium intensities that correlate with the local shear (Tuerke & Jimenez, 2013), and many low-order models of the turbulence cycle include a variable that stands for the mean velocity (Waleffe, 1997).
However, the original linear version of the Malkus (1956) model was disproved by Reynolds & Tiederman (1967), who found no trace of the marginal instability of the velocity profile which that model proposes for turbulent channels, and the analysis in Waleffe (1997), although highly suggestive because the effect of a strong wall shear is to inhibit the instability of the streaks, only applies to permanent travelling waves in marginally turbulent low-Reynolds number flows. Similarly, Tuerke & Jimenez (2013) refer to long-term flow averages, and it is unclear whether these are relevant to the short-term behaviour of an intermittent bursting cycle. The hypothesis to be tested is whether a feed-back cycle involving the modification of the mean profile by the turbulent fluctuations, and the control of the latter by the former, can explain at least part of the mechanism that sets the frequency and amplitude of the bursts.
We do this by smoothing the evolution equation that links the fluctuations to the mean profile, defined as the deviation from the long-term mean velocity profile of the box-averaged streamwise velocity fluctuation, \(\widetilde{u}_{00}(y)\). In a regular channel, it satisfies,
\[\partial_{t}\widetilde{u}_{00}=R-P+\partial_{yy}\widetilde{u}_{00}, \tag{16}\]
where \(R=-\partial_{y}(\widetilde{uv})_{00}\) is the instantaneous mean Reynolds-stress gradient, and the pressure gradient \(P\) is determined by the ancillary flux-conservation condition,
\[\int_{0}^{2h}\widetilde{u}_{00}\,\mathrm{d}y=0 \tag{17}\]
In our experiment, we substitute (16) by
\[\partial_{t}\widetilde{u}_{00}=Q-P+\partial_{yy}\widetilde{u}_{00}, \tag{18}\]
where
\[\partial_{t}Q=(R-Q)/\tau, \tag{19}\]
which, after a transient in which the effect of the initial conditions decays exponentially, is solved by
\[Q(y,t)=\tau^{-1}\int_{0}^{t}\exp[(\xi-t)/\tau]\,R(y,\xi)\,\mathrm{d}\xi. \tag{20}\]
Figure 14: (a) Instantaneous friction velocity of the temporally smoothed and natural experiments; see (16)–(19). (b) As in (a), for the box-averaged fluctuating kinetic energy, \(E=u^{\prime 2}+v^{\prime 2}+w^{\prime 2}\), measured with respect to its long-time average. (c) Mean velocity profiles. (d) Mean kinetic energy. In all cases: blue, natural channel; red, \(\tau^{*}=2.27\). (e-h) Temporal evolution of the mean profiles as functions of time. (e,f) Mean streamwise velocity fluctuation, with respect to its long-time average. (g,h) Kinetic energy. (e,g) Natural channel. (f,h) Temporally smoothed one.
The modified right-hand side, \(Q\), is therefore a smoothed version of \(R\), with a smoothing time \(\tau\). The integral of \(R\) or \(Q\) across the channel can be considered as a body force that must be compensated by the pressure gradient, but it is easy to see from its expression that \(\int R\,\mathrm{d}y=0\) for impermeable walls, and that the same holds for \(Q\) after the initial transient.
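For reference, a minimal sketch of how (19) can be advanced in time is given below; in the actual experiment \(Q\) is integrated together with the flow equations, and the array names here are placeholders rather than those of the channel code.

```python
import numpy as np

# R_hist: hypothetical array of shape (n_steps, n_y) with the instantaneous
# Reynolds-stress gradient R(y, t_n); dt is the time step and tau the
# smoothing time of (19).
def smooth_forcing(R_hist, dt, tau, Q0=None):
    Q = np.zeros_like(R_hist[0]) if Q0 is None else Q0.copy()
    out = np.empty_like(R_hist)
    for n, R in enumerate(R_hist):
        Q += dt * (R - Q) / tau   # forward-Euler step of dQ/dt = (R - Q)/tau
        out[n] = Q                # exponentially weighted history of R, cf. (20)
    return out

# Illustrative call with synthetic data.
R_hist = np.random.default_rng(3).standard_normal((100, 64))
Q_hist = smooth_forcing(R_hist, dt=0.01, tau=0.5)
```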
Figure 14 shows some results from the experiment. Figure 14(a) shows the history of the friction Reynolds number for the natural and modified channels. It only increases slightly for the smoothed case, from \(\langle h\rangle^{+}=950\) to \(975\), although its temporal oscillations become much slower. More interesting is figure 14(b), which shows the oscillations of the box-averaged kinetic energy. They are somewhat deeper for the smoothed case, and substantially less frequent and more regular. An approximate count, using a method explained below, gives \(\Delta t^{*}\approx 3.9\) for the mean distance between bursts in the natural case, and \(\Delta t^{*}\approx 5.2\) in the smoothed one. Both are longer than in Flores & Jimenez (2010), who find \(\Delta t^{*}\approx 2\) from the temporal spectrum of the integrated Reynolds stress, probably because their method is sensitive to weaker oscillations than the present one.
The effect on the integrated velocity profiles is slight. Figure 14(c) shows the mean velocity, and reveals that the main effect is to decrease the wake component above \(y/h\approx 0.3\), but this is also the height at which this channel begins to be constrained by the numerical box, and where the profile in any case deviates from the natural one. Figure 14(d) shows that the fluctuations of the kinetic energy are also slightly higher, as expected from figure 14(b).
Figures 14(e-h) show the temporal evolution of the profiles, and give more information. Figures 14(e,f) show the deviation, \(\widetilde{u}_{00}\), of the mean velocity profile with respect to its long-time average. Each vertical section of these images is an instantaneous box-averaged profile. Blue regions are slower than usual, and yellow ones are faster. They should be compared to the instantaneous profiles in figure 13(a-c), although the flows in the present figure are not filtered or conditioned in any way. Figures 14(e,g) are the natural flow, and figures 14(f,h) are temporally smoothed. The white dashed vertical lines in figure 14(e-h) mark the times of the bursts of the kinetic energy, whose evolution is represented in figures 14(g,h). To detect them, the kinetic energy is integrated in \(y/h\in(0,0.4)\), and bursts are defined as intervals where the integrated energy rises above the level isolating the top \(15\%\) of the time.
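The burst count used above reduces to thresholding the integrated energy. A sketch, with a hypothetical signal name, is:

```python
import numpy as np

# E_wall: kinetic energy integrated over y/h in (0, 0.4) as a function of time,
# sampled with spacing dt.
def burst_start_times(E_wall, dt, top_fraction=0.15):
    level = np.quantile(E_wall, 1.0 - top_fraction)  # exceeded 15% of the time
    above = E_wall > level
    starts = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
    if above[0]:
        starts = np.insert(starts, 0, 0)
    return starts * dt, level

# Illustrative call with a synthetic signal; the mean of np.diff(start_times)
# then estimates the mean distance between bursts.
t = np.arange(0, 200, 0.1)
E_wall = 1.0 + 0.5 * np.sin(0.2 * t) + 0.05 * np.random.default_rng(4).standard_normal(t.size)
start_times, level = burst_start_times(E_wall, dt=0.1)
```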
The most interesting differences are those between the mean velocities in figures 14(e, f). The evolution in the natural case in figure 14(e) is clearly more complex than the temporally smoothed case in figure 14(f), and there is no clear correlation between the mean velocity and the position of the bursts marked by the dashed white lines. The opposite is true for the smoothed case in figure 14(f), in which the mean profile rises and falls in a series of diagonal waves, and the lines marking the bursts correspond, even to the naked eye, to low-velocity intervals marked by bluish areas near the wall. The inclined red lines in the four panels of the figure mark the friction velocity, \(\mathrm{d}y/\mathrm{d}t=u_{\tau}\), which is known to be the vertical advection velocity of strong Reynolds-stress structures (Flores & Jimenez, 2010; Lozano-Duran & Jimenez, 2014). This agrees with the inclination of the rising and falling patterns in the energy evolution maps in figures 14(g,h). It also approximately describes the vertical advection velocity of the fine structure of the velocity profiles in figure 14(e), most probably because the mean profile is controlled by the Reynolds stress through (16). On the other hand, this influence is broken in figure 14(f), where the Reynolds stresses only act indirectly on the mean profile because of the smoothing effect of (19), and the vertical advection is much slower. This strongly suggests that any correlation between figures 14(f) and 14(h) reflects a causal effect of the mean velocity on the burst, rather than the other way around.
This is directly tested in figure 15, which shows the conditionally averaged temporal evolution of different quantities obtained by centring them on the time of the detected energy bursts. Figure 15(a) shows the conditional evolution of the mean profile, and clearly shows the low-velocity period
preceding the burst, which is later substituted by a steeper wall profile created by the Reynolds stress of the burst, shown in figure 15(b). The conditional Reynolds stress is displayed in figure 15(c), which shows that it is a local effect due to the burst itself. An attempt to repeat this process for the natural flow in figures 15(e,g) fails, no doubt in part because the more complex structure of the flow field makes the identification of the bursts harder. In fact, even the conditioning of the burst on the burst position, as in figure 15(b), fails in that case.
## 6 Conclusions
This paper can be divided into two parts. In the first one, up to §3, we adapt the Perron-Frobenius operator (PFO) of dynamical-system theory to the probabilistic description of the evolution over a phase space partition of a turbulent channel flow. We show that the main difficulty for doing so is collecting enough data to populate the operator matrix, and we bypass it by restricting ourselves to two-dimensional projections of the phase space. This leads to the question of how to choose the best pair of variables, and forces us to develop simple indicators of the quality of a particular representation. Several such indicators are developed, and shown to be interpretable in terms of causality and coherence within the attractor. It is argued that this last restriction allows us to draw conclusions about causality and information flux from flow histories, without the need for interventional experiments.
In particular, we show that we can use these indicators to distinguish between correlation and coherence, and to separate, for example, relatively weak structures that have their own dynamics, such as
Figure 15: Time evolution of the box-averaged profiles conditioned to the bursts of the kinetic energy in the smoothed experiment. (a) Mean streamwise velocity fluctuation, as in figure 14(f). (b) Kinetic energy. This is the conditioning field. (c) Tangential Reynolds stress, \(-(\widetilde{uv})_{00}^{+}\).
wavy rollers and streaks, from stronger and more correlated ones that have no dynamics of their own, such as streamwise-uniform streaks and rollers.
We show in §3.2 how the indicators allow us to differentiate less promising variable pairs from those more likely to be useful in developing coherent physical models. Out of 630 possibilities, two promising pairs are found for the case analysed here. The first one is the intensity and inclination of the wall-normal velocity, which was already used by Jimenez (2015) to represent an approximately linear Orr (1907) burst, and the second is a more novel inclined wavy vortex.
The rest of the paper applies the techniques derived in the first part to analyse the Orr (1907) burst, with emphasis on the poorly understood recovery process by which bursts are re-initiated after they decay. As was the case with previous attempts to use massive searches to choose among different analysis possibilities (e.g., Jimenez, 2018, 2020), the present one mostly suggests mechanisms that have to be confirmed by more classical means, mainly because of the original limitation to on-attractor dynamics. In this case, the PFO guides us in the choice of phase-space trajectories that connect interesting flow configurations within a known range of time intervals, including trajectories describing the recovery process. At least in our relatively small computational box, conditional averaging over these connections shows, somewhat counter-intuitively, that the key ingredient for regeneration is the development of a low-shear region near the wall. New bursts are seeded from a detached shear layer overlying it. Their Reynolds stress returns the shear to the wall, and no new burst is possible until the decay of the old one again detaches the shear. It is not known whether this process generalises to larger boxes containing more than one burst.
To extend our conclusions outside the attractor, we finally perform a simple computational experiment in which the control of the mean shear by the burst is relaxed. The behaviour of the mean profile is thus modified, but the association of low wall shear with the initiation of the bursts is shown to be maintained.
This work was supported by the European Research Council under the Caust grant ERC-AdG-101018287.
|
2304.13208 | Exploring the origins of perpendicular magnetic anisotropy in amorphous
Tb-Co via changes in medium-range ordering | Amorphous thin films of Tb$_{17}$Co$_{83}$ (a-Tb-Co) grown by magnetron
co-sputtering exhibit changes in magnetic anisotropy with varying growth and
annealing temperatures. The magnetic anisotropy constant increases with
increasing growth temperature, which is reduced or vanishes upon annealing at
temperatures above the growth temperature. The proposed explanation for this
growth-induced anisotropy in high orbital moment Tb-based transition metal
alloys such as a-Tb-Co is an amorphous phase texturing with preferential
in-plane and out-of-plane local bonding configurations for the rare-earth and
transition metal atoms. Scanning nanodiffraction performed in a transmission
electron microscope (TEM) is applied to a-Tb$_{17}$Co$_{83}$ films deposited
over a range of temperatures to measure relative changes in medium-range
ordering (MRO). These measurements reveal an increase in MRO with higher growth
temperatures and a decrease in MRO with higher annealing temperatures. The
trend in MRO indicates a relationship between the magnetic anisotropy and local
atomic ordering. Tilting select films between 0$^{\circ}$ and 40$^{\circ}$ in
the TEM measures variations in the local atomic structure as a function of
orientation within the films. The findings support claims that preferential
ordering along the growth direction results from temperature-mediated adatom
configurations during deposition, and that oriented MRO correlates with the
larger anisotropy constants. | Ellis Kennedy, Emily Hollingworth, Alejandro Ceballos, Daisy O'Mahoney, Colin Ophus, Frances Hellman, M. C. Scott | 2023-04-26T00:22:32Z | http://arxiv.org/abs/2304.13208v3 | Exploring the origins of perpendicular magnetic anisotropy in amorphous Tb-Co via changes in medium-range ordering
###### Abstract
Amorphous thin films of Tb\({}_{17}\)Co\({}_{83}\) (\(a\)-Tb-Co) grown by magnetron co-sputtering exhibit changes in magnetic anisotropy with varying growth and annealing temperatures. The magnetic anisotropy constant increases with increasing growth temperature, and is reduced or vanishes upon annealing at temperatures above the growth temperature. The proposed explanation for this anisotropy in high orbital moment Tb-based transition metal alloys is an amorphous phase texturing with preferential in-plane and out-of-plane local bonding configurations for the rare-earth and transition metal atoms. Scanning nanodiffraction performed in a transmission electron microscope (TEM) is applied to \(a\)-Tb\({}_{17}\)Co\({}_{83}\) films deposited over a range of temperatures to measure relative changes in medium-range ordering (MRO). These measurements reveal an increase in MRO with higher growth temperatures and a decrease in MRO with higher annealing temperatures. The trend in MRO indicates a relationship between the magnetic anisotropy and local atomic ordering. Tilting select films in the TEM measures variations in the local atomic structure as a function of orientation within the films. The findings support claims that preferential ordering along the growth direction results from temperature-mediated adatom configurations during deposition, and that oriented MRO correlates with the larger anisotropy constants.
+
Footnote †: preprint: AIP/123-QED
## I Introduction
Amorphous rare earth - transition metal (RE-TM) alloys have tunable magnetic properties, such as compensation temperature and perpendicular magnetic anisotropy (PMA) [1; 2; 3; 4; 5; 6; 7]. These properties make them desirable as novel spintronic materials and for ultrafast magneto-optical recording devices. For REs with non-zero total orbital angular momentum, such as Tb, the magnetic anisotropy is believed to originate from the single ion anisotropy of the RE atoms as well as complex pair-bonding and preferential atomic configurations induced during film growth and subsequent thermal treatments [4]. Out-of-plane (OOP) RE-TM bonding and in-plane (IP) TM-TM bonding have been correlated to bulk PMA in \(a\)-RE-TM systems [4; 6; 7]. The amorphous nature of these systems complicates their analysis, as variations in magnetic anisotropy must instead be attributed to subtle variations in local atomic ordering [8]. Previous work shows that PMA is independent of film thickness, surface layer magnetic interactions, and macroscopic growth-induced strain [4].
The succinct description of crystalline structures with defined unit cells and permitted symmetry operations does not extend to amorphous materials. Their lack of translational and rotational symmetry requires a statistical approach for analysis [9; 10]. Two-body and multi-body distribution functions are applied to the study of amorphous structures to determine the probability that two or more atoms will be separated by a specific distance. Short-range ordering (SRO) is probed through two-body statistical analysis, such as radial distribution functions. However, SRO is limited to the first coordination shell of the constituent atom. Looking out a little further, medium-range ordering (MRO) on the 1 - 5 nm length scale, can be probed with transmission electron microscopy (TEM) [9; 11]. Statistical analysis of MRO can determine the degree and type of ordering that exists between the extremes of long-range and short-range order [10; 12].
In this work, \(a\)-Tb\({}_{17}\)Co\({}_{83}\) is used as a representative \(a\)-RE-TM system. Tb (RE) atoms have more than half-filled \(4f\)-electron orbitals and magnetic moments that align antiferromagnetically to the moments of the Co (TM) atoms [13]. The Curie temperature (T\({}_{C}\)), saturation moment, and compensation temperature (T\({}_{comp}\)) are solely dependent on composition, while PMA and coercivity depend as well on growth temperature, and other deposition parameters [3; 4]. A series of \(a\)-Tb\({}_{17}\)Co\({}_{83}\) films were deposited at 20\({}^{\circ}\)C, 200\({}^{\circ}\)C, and 300\({}^{\circ}\)C. The films then either received no further heat treatment or were annealed at 200\({}^{\circ}\)C or 300\({}^{\circ}\)C. IP and OOP magnetization vs magnetic field measurements were collected from the films. Congruent with previous studies, the PMA was found to increase with growth temperature and decrease with annealing temperature [5; 4; 14; 15]. The TEM method of scanning nanodiffraction was used to probe the underlying structural mechanisms responsible for variations in the magnetic properties of \(a\)-Tb\({}_{17}\)Co\({}_{83}\) as a function of film deposition and annealing temperature. The relative MRO across the series of samples is measured with fluctuation electron microscopy (FEM),
a specialized application of scanning nanodiffraction that is sensitive to changes in diffracted intensity related to variations in atomic configurations within an amorphous system [9]. We determine that MRO in the films increases with higher deposition temperatures and decreases with higher annealing temperatures. The results support the model of magnetism in _a_-RE-TM systems in which PMA is related to atom-specific preferential local atomic ordering.
## II Experimental details
_a_-Tb\({}_{17}\)Co\({}_{83}\) films were produced using magnetron co-sputtering at 1.8 mTorr Ar pressure from separate Tb and Co targets. The base pressure of the chamber was 7 x 10\({}^{-8}\) Torr. The _a_-Tb\({}_{17}\)Co\({}_{83}\) films were sputtered at room temperature (20\({}^{\circ}\)C), 200\({}^{\circ}\)C, and 300\({}^{\circ}\)C with a capping layer of _a_-SiN\({}_{x}\) sputtered at 3.0 mTorr of Ar pressure. The films were deposited on Norcada grids with 10 nm thick _a_-SiN\({}_{x}\) membranes. Samples consisted of 30 nm _a_-Tb\({}_{17}\)Co\({}_{83}\) films capped with 10 nm of _a_-SiN\({}_{x}\) to prevent oxidation. The 30 nm thickness was selected for the _a_-Tb\({}_{17}\)Co\({}_{83}\), as it was experimentally determined to produce a strong speckle intensity in scanning nanodiffraction. A control sample of 10 nm of _a_-SiN\({}_{x}\) was sputtered at 3.0 mTorr of Ar pressure to determine the influence of the capping layer on the diffraction data. After deposition at 20\({}^{\circ}\)C, some films were annealed at 200\({}^{\circ}\)C or 300\({}^{\circ}\)C for an hour under high vacuum.
The magnetization as a function of applied magnetic field IP and OOP was measured in a Quantum Design Magnetic Properties Measurement System (MPMS) at 20\({}^{\circ}\)C for a series of samples grown and annealed under identical conditions to those detailed above (Supplemental Materials). The films grown at 20\({}^{\circ}\)C were annealed for an hour at 150\({}^{\circ}\)C, 200\({}^{\circ}\)C, 275\({}^{\circ}\)C, and 350\({}^{\circ}\)C to establish how magnetic anisotropy depends on annealing temperature. The applied field was varied between -1 and 1 T, a range exceeding the magnetic saturation values for all samples. The intrinsic uniaxial anisotropy constant, K\({}_{ui}\), is calculated using the relation below, where M\({}_{s}\) is the saturation magnetization and H\({}_{k}\) is the anisotropy field required to fully magnetize the films IP [15; 16].

\[K_{ui}=\frac{H_{k}M_{s}}{2}+2\pi M_{s}^{2} \tag{1}\]
For all samples, T\({}_{C}\) = 190\({}^{\circ}\)C, T\({}_{comp}\) = -73\({}^{\circ}\)C, and M\({}_{s}\) is approximately 150 emu/cc. All samples exhibited uniaxial PMA.
A subset of the films measured with magnetometry were analyzed using FEM. To measure changes in the MRO of the ferrimagnetic films, FEM experiments were carried out using scanning transmission electron microscopy (STEM) in an FEI TitanX operated at an accelerating voltage of 200 kV and a 2.2 nm diameter probe. Scanning nanodiffraction acquisition parameters are detailed in the Supplemental Materials. Additionally, the films deposited at 20\({}^{\circ}\)C and 300\({}^{\circ}\)C were tilted at an angle varied between 0\({}^{\circ}\) and 40\({}^{\circ}\) in 5\({}^{\circ}\) increments to determine whether MRO varies with orientation through the film. These two films were selected for tilting because they had the greatest difference in MRO as determined from FEM variance measurements.
FEM analysis was performed following the method outlined by Kennedy _et al._[17] Custom scripts were used to fit the radial profiles of the scanning nanodiffraction data using least squares fitting of the first diffracted ring as described by Gammer _et al._[18] The fitting parameters are used to correct for any ellipticity in the patterns before calculating the variance in intensity as a function of scattering vector. For patterns collected from tilted films, the ellipticity is not corrected. Instead, changes in the major elliptical axes are used as a measure of relative changes in mean bond lengths. The procedure for collecting scanning nanodiffraction data and FEM analysis is outlined in the Supplemental Materials. For FEM, the variance is calculated with respect to scattering angle **k**, position on the sample \(r\), and resolution (which is a function of _r_); however, in STEM FEM only the scattering vector varies and thus the equation for variance _V\({}_{\sigma}\)_ in diffracted intensity \(I\) becomes [10]:
\[V_{\sigma}(\textbf{k},r)=\frac{\langle I^{2}(\textbf{k},r)\rangle-\langle I(\textbf{k},r)\rangle^{2}}{\langle I(\textbf{k},r)\rangle^{2}}. \tag{2}\]
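Equation (2) is straightforward to evaluate once the diffraction patterns have been reduced to radial profiles. The sketch below shows only this variance step; the ellipticity correction and profile extraction follow the scripts referenced in the Data Availability Statement, and the array name here is a placeholder.

```python
import numpy as np

# I_k: hypothetical array of shape (n_probe_positions, n_k) holding the
# azimuthally averaged diffracted intensity I(k, r) at each probe position r.
def fem_variance(I_k):
    mean_I = I_k.mean(axis=0)            # <I(k, r)> over probe positions
    mean_I2 = (I_k ** 2).mean(axis=0)    # <I^2(k, r)>
    return mean_I2 / mean_I ** 2 - 1.0   # V(k) = (<I^2> - <I>^2) / <I>^2

# Illustrative call with random data; peaks of V(k) for real data correspond
# to the MRO features discussed in Fig. 2.
I_k = np.random.default_rng(5).random((1000, 256))
V_k = fem_variance(I_k)
```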
Additionally, Lorentz TEM images were collected from the as-deposited film grown at 20\({}^{\circ}\)C and the film grown at 20\({}^{\circ}\)C and annealed at 200\({}^{\circ}\)C and 300\({}^{\circ}\)C to probe the effect of annealing on domain structure. Lorentz TEM was carried out in an FEI Themis operated at 300 kV in Lorentz mode. All films had been previously subjected to a magnetic field of approximately 2 T during FEM image acquisition, but the magnetic field of the objective lens was minimized to less than 0.1 T for Lorentz imaging. The films were tilted 15\({}^{\circ}\) and the beam was defocused to produce visible magnetic domains.
## III Results
The magnetic anisotropy constant \(K_{ui}\) is 8.33 x 10\({}^{5}\) J/m\({}^{3}\) for films grown at 20\({}^{\circ}\)C, and increases to 9.26 x 10\({}^{5}\) J/m\({}^{3}\)
Figure 1: Magnetic anisotropy constant as a function of deposition and annealing temperatures derived from IP and OOP magnetization measurements. M\({}_{s}\) is approximately the same for all films.
for films grown at 200\({}^{\circ}\)C and to 10.30 x 10\({}^{5}\) J/m\({}^{3}\) for films grown at 300\({}^{\circ}\)C. For films grown at 20\({}^{\circ}\)C and subsequently annealed at temperatures between 150\({}^{\circ}\)C and 350\({}^{\circ}\)C, the magnetic anisotropy decreases with temperature. The relationships between deposition temperature, annealing temperature, and magnetic anisotropy are shown in Fig. 1. Similar trends have been shown in other amorphous RE-TM thin film systems [6]. The OOP magnetic saturation \(M_{s}\) remains constant (150 emu/cc) for all samples in the series.
High resolution TEM imaging and the mean converged beam electron diffraction (CBED) patterns confirm that the samples are amorphous (shown in Supplemental Materials). The uniformity of the diffracted speckle in the scanning nanodiffraction patterns and lack of strong Bragg scattering indicates that the films lack long range order and have no nanocrystals.
In Fig. 2, the normalized variance in intensity _V(k)_ for films in their as-deposited state (a) and after annealing (b) is plotted as a function of scattering vector using mean statistics. The MRO increases with increasing growth temperature, as indicated by an increase in _V(k)_. The variance for the 300\({}^{\circ}\)C growth is significantly sharper than the 20\({}^{\circ}\)C and 200\({}^{\circ}\)C variances. There is an increase in the _V(k)_ and a decrease in the full width at half maximum (FWHM) across the three films as a function of deposition temperature.
The peak heights correspond to relative MRO and the peak positions correspond to the mean interatomic distance of the MRO. The peaks in _V(k)_ are typically attributable to oriented clusters of atoms (MRO) and not to individual bond types (SRO) [9; 19]. Two main peak features emerge in the variance plots in Fig. 2. The first peak is centered around 2.4 nm\({}^{-1}\) and is attributable to _a_-SiN\({}_{x}\), as confirmed through FEM analysis of 10 nm of _a_-SiN\({}_{x}\) deposited onto _a_-SiN\({}_{x}\) membranes. The second peak is centered between 4.5 and 4.8 nm\({}^{-1}\) and originates from the _a_-Tb\({}_{17}\)Co\({}_{83}\). The peak positions of the _a_-Tb\({}_{17}\)Co\({}_{83}\) growth series shift to larger \(k\) values with increasing deposition temperature. The change in the magnitude of the _a_-Tb\({}_{17}\)Co\({}_{83}\) peak variance relates to the degree of MRO in the films. Peak positions and relative heights are provided in Table 1.
The FEM diffracted signal varies slightly as a function of orientation through the individual films, as shown in Fig. 3, suggesting directional dependency in bonding structure. The most significant difference in MRO as a function of tilt angle was observed between the films grown at 20\({}^{\circ}\)C and 300\({}^{\circ}\)C. These films were tilted in the TEM up to 40\({}^{\circ}\) to check for directional dependency of the MRO. Changes in the ellipticity of the FEM patterns as a function of tilt angle reveal that the films deposited at 300\({}^{\circ}\)C exhibit a greater change in the length of their major axes, compared to the major axes of the films grown at 20\({}^{\circ}\)C. The mean bond length of both films decreases as a function of orientation away from the film normals. The decrease is more pronounced for the film deposited at 300\({}^{\circ}\)C, indicating greater orientation-dependency of the MRO bond lengths. By assigning the changes in the FEM diffraction rings to changes in bond length in the IP and OOP directions, we can analyze the tilted data using strain relationships (Supplemental Materials). From this, we determine that the maximum change in mean bond length is 1.43% for the film deposited at 20\({}^{\circ}\)C and 3.14% for the film deposited at 300\({}^{\circ}\)C. These values assume the trend continues past the 40\({}^{\circ}\) tilt angle, to the OOP orientation. This implies that the films grown at 300\({}^{\circ}\)C have greater directional dependence of interatomic ordering.
## Discussion
Many models have been proposed for the origin of PMA in amorphous RE-TM thin films. Postulations include the formation of columnar microstructures, anelastic strains, surface layer anisotropy, growth-induced anisotropy resulting from magnetic interactions, microlite formation, and temperature-mediated subtle alignments of atoms around the RE atoms [4; 20; 21; 22; 23; 24]. The mechanism described in the latter of these models is commonly referred to as amorphous phase texturing [4]. Many of these models have been disproven by varying growth parameters and measuring the resulting variations in PMA. This work supports the model of texturing in which local adatom configurations arrange themselves to minimize surface energy during the deposition process.
In _a_-Tb\({}_{17}\)Co\({}_{83}\), the itinerant _3d_-electrons of Co influence the magnetic ordering of the localized _4f_-electrons of Tb (RE) through an antiferromagnetic exchange interaction [25]. The magnetic moments of the Tb _4f_-electron orbitals, which are more than half-filled, align antiferromagnetically to the moments of the Co (TM) atoms. The combined effect of the resulting negative exchange together with local uniaxial anisotropy, which acts primarily on the Tb moments, is to produce non-collinear ferrimagnetism that is Co-dominant above T\({}_{comp}\) and Tb-dominant below T\({}_{comp}\) [26; 27]. _a_-Tb\({}_{17}\)Co\({}_{83}\) has a T\({}_{comp}\) of approximately -73\({}^{\circ}\)C, making it Co-dominant at 20\({}^{\circ}\)C. The Curie temperature (T\({}_{C}\)) of _a_-Tb\({}_{17}\)Co\({}_{83}\) is approximately 190\({}^{\circ}\)C. The PMA increases with increasing growth temperature in the _a_-Tb\({}_{17}\)Co\({}_{83}\) system, including for films grown above T\({}_{C}\) [28]. The same trend has been independently observed in other amorphous RE-TM systems [2; 29; 2; 30; 31].
There is an inverse trend between PMA and annealing temperature. Similarly, as the annealing temperature increases, the MRO decreases, as shown in Fig. 2 and summarized in Table 1. Thus, increased deposition temperature increases MRO and PMA, while increased annealing temperature reduces MRO and PMA in _a_-Tb\({}_{17}\)Co\({}_{83}\). The _V(k)_ curves for the 20\({}^{\circ}\)C growth and the film annealed at 200\({}^{\circ}\)C are similar with nearly identical FWHM, position, and relative height.
\begin{table}
\begin{tabular}{l c c} Temperature parameters & Relative Height & Position (nm\({}^{-1}\)) \\ \hline
20\({}^{\circ}\)C deposition & 1 & 4.5 \\
200\({}^{\circ}\)C deposition & 1.10 & 4.5 to 4.7 \\
300\({}^{\circ}\)C deposition & 1.23 & 4.8 \\
20\({}^{\circ}\)C deposition, 200\({}^{\circ}\)C anneal & 1.01 & 4.5 \\
20\({}^{\circ}\)C deposition, 300\({}^{\circ}\)C anneal & 0.81 & 4.6 \\ \end{tabular}
\end{table}
Table 1: Summarized peak heights and positions along the scattering vector \(k\) axis from FEM curves in Fig. 2
The similarity suggests that there is a minimum temperature required for MRO atomic rearrangement during annealing. Overcoming the energy barrier for atomic re-orientation detectable with FEM requires a long enough anneal time and high enough temperature. At 300\({}^{\circ}\)C for one hour, the annealing parameters are sufficient to allow energy-minimizing atomic rearrangement in the bulk of the film.
Above T\({}_{C}\), the system is paramagnetic and atomic interactions are not governed by magnetic interactions. Thus, the trend in MRO for the _a_-Tb\({}_{17}\)Co\({}_{83}\) growth series cannot be ascribed to magnetic interactions. Instead, the relationship between MRO and growth temperature-mediated PMA supports models in which local adatom configurations arrange such that they minimize the surface energy during growth [32]. As layers build, the preferential configuration is preserved, leading to oriented local magnetic anisotropy that produces an overall macroscopic PMA.
The films grown at 20\({}^{\circ}\)C and 300\({}^{\circ}\)C were selected for tilted FEM analysis to understand the relationship between temperature-mediated atomic texturing during deposition and orientationally anisotropic MRO [17]. In this measurement, FEM data is acquired at varying stage tilt angles, which enables us to deduce the relative change in bond lengths associated with the MRO in the IP and OOP directions. Because the diffraction patterns contain information about d-spacings perpendicular to the direction of the beam, as the effective angle between the beam and the sample is varied, any changes in bond length between the IP and OOP directions in the film will cause an ellipticity in the diffraction pattern (Supplemental Materials). This effect is similar to ellipticity in diffraction data caused by anisotropic strain, although in our case, decoupling the influences of strain and changes in preferential atomic orientations is not possible. For this reason, we consider changes in the major axis of the fitted ellipse to correspond broadly to changes in the average bond length, \(l\), as a function of orientation through the film. \(l_{\text{xx}}\) is along the x-axis, \(l_{\text{zz}}\) lies in the y-z plane along the z'-axis, and \(l_{\text{xz}}\) bisects the x' and z' directions.
The stage tilt is limited to 40\({}^{\circ}\) in the TitanX, but the OOP bond length can be extrapolated from measurements at lower angles by considering the geometry of the experiment and the influence of infinitesimal shifts in bond length on the diffraction data, which gives a sin\({}^{2}\) dependence to the projected OOP bond length. As shown in Fig. 3, \(l_{\text{xx}}\) is modeled with a linear fit and \(l_{\text{zz}}\) is modeled with a sin\({}^{2}/\sqrt{2}\) dependence. The geometric origins of these terms are illustrated in the Supplemental Materials.
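A sketch of the extrapolation might be organised as follows, with illustrative synthetic values standing in for the measured relative changes; the geometric constants of the \(\sin^{2}/\sqrt{2}\) model are absorbed into the fitted amplitude.

```python
import numpy as np
from scipy.optimize import curve_fit

def oop_model(theta_deg, a):
    # Projected out-of-plane change grows as sin^2 of the tilt angle.
    return a * np.sin(np.deg2rad(theta_deg)) ** 2

tilt_deg = np.arange(0, 45, 5)  # accessible stage tilts, 0 - 40 degrees
# Synthetic stand-in for the measured relative % change of the major axis.
rng = np.random.default_rng(1)
dl_zz = oop_model(tilt_deg, 3.0) + 0.05 * rng.standard_normal(tilt_deg.size)

(a_fit,), _ = curve_fit(oop_model, tilt_deg, dl_zz)
print("extrapolated relative change at 90 degrees: %.2f %%" % oop_model(90.0, a_fit))
```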
Fig. 3 shows the trend in average relative % change in mean bond length as a function of tilt angle for the films grown at 20\({}^{\circ}\)C and 300\({}^{\circ}\)C. Relative % change below the \(\pm\)0.2% margin of error indicates that the films are isotropic within the limit of the technique. Above this margin of error, the films exhibit changes in relative mean bond distances dependent on orientation through the films.
From the \(l_{\text{zz}}\) fits, the extrapolated maximum bond lengths are calculated. For the film deposited at 20\({}^{\circ}\)C, the extrapolated maximum relative % change in mean bond length is 1.4%. For the film deposited at 300\({}^{\circ}\)C, the extrapolated maximum relative % change in mean bond length is 4.5%. These values are determined by extending the fitting curves to a theoretical 90\({}^{\circ}\) tilt. Both films exhibit shorter mean bond lengths in the IP direction compared to the OOP direction. The OOP bond lengths in the film grown at 300\({}^{\circ}\)C are larger, on average, than the bonds in the IP orientations. Assuming the texturing model for PMA, this corresponds to a greater proportion of Co-Tb bonds in the OOP orientation relative to Co-Co bonds in the IP orientations. The greater relative % change in mean bond length for 300\({}^{\circ}\)C indicates that at higher deposition temperatures adatoms with higher energies at impingement orient such that Co-Tb bonds form preferentially in the OOP direction, consistent with EXAFS data on _a_-Tb-Fe films [7; 33].
Figure 2: Variance as a function of scattering vector \(\mathbf{k}\) for _a_-Tb\({}_{17}\)Co\({}_{83}\) films (a) deposited at 20\({}^{\circ}\)C, 200\({}^{\circ}\)C, and 300\({}^{\circ}\)C and (b) subsequently annealed at 200\({}^{\circ}\)C and 300\({}^{\circ}\)C following deposition at 20\({}^{\circ}\)C. The height of the peak centered between 4.5 nm\({}^{-1}\) and 4.8 nm\({}^{-1}\) corresponds to the degree of relative MRO in the films. The magnitude of the peak increases with growth temperature, indicating that MRO is greater for films deposited at higher temperatures. The position of the peaks along the scattering vector axis indicates the average size of the bonds resulting in detectable MRO. The dashed gray lines are the variance in intensity of the SiN\({}_{x}\) control film.
The dramatic decrease in PMA observed between the film grown at 20\({}^{\circ}\)C and the films annealed at higher temperatures and the corresponding change in MRO prompted the use of Lorentz TEM. The Lorentz images, shown in Supplemental Materials, show the typical high-contrast domain patterns seen in films with strong PMA and corroborate the trends observed in variance and magnetization for _a_-Tb\({}_{17}\)Co\({}_{83}\) annealed at increasing temperatures [16]. Higher PMA causes a high energy cost of forming domain walls and narrower domain walls, with well defined domains, while lower PMA causes wider domain walls and less defined domains. The result is sharper domain walls and better defined domains in films deposited at higher temperatures, and less defined domains in films grown at lower temperatures after annealing.
The changes in mean bond length with varying orientation through the films, the increase in MRO as the deposition temperature increases, and the subsequent decrease in MRO after annealing, collectively indicate the presence of MRO bond length anisotropy in the _a_-Tb\({}_{17}\)Co\({}_{83}\) films. During deposition, structural variations are introduced as a function of growth temperature. A higher deposition temperature favors a higher proportion of Tb-Co bonds in the OOP direction, which results in greater MRO and increased PMA. Annealing, on the other hand, allows for structural relaxation in the films, and atom rearrangement. Taken together, the observed trends in MRO with respect to growth conditions suggest that the evolution of PMA in the films is due to structural changes.
## Conclusions
Numerous studies on amorphous RE-TM films have noted the relationships between growth temperature parameters and PMA, but the structural origin remains under debate. This work established a relationship between thermal growth parameters, PMA, and local atomic ordering using the TEM technique of scanning nanodiffraction. MRO increases with increasing film deposition temperature and decreases with increasing anneal temperature. Thus, there is a correlation between the degree of MRO and PMA in _a_-Tb\({}_{17}\)Co\({}_{83}\) films grown via magnetron co-sputtering. These results support an amorphous phase texturing model in which adatom configuration varies as a function of deposition temperature and annealing allows for subsequent relaxations of preferential configurations within the films. Tb-Co (RE-TM) pairs prefer to form vertically during deposition, but annealing allows for rearrangement of atomic pairs such that there is greater uniformity across orientations. Analysis of tilted FEM data shows a greater local ordering anisotropy in the film exhibiting the highest degree of MRO (300\({}^{\circ}\)C growth) compared to the film grown at 20\({}^{\circ}\)C with less MRO.
Figure 3: Relative % change in mean bond distances as a function of tilt angle for the films deposited at 20\({}^{\circ}\)C and 300\({}^{\circ}\)C. The 20\({}^{\circ}\)C data are shown in the bottom row with blue data markers and the 300\({}^{\circ}\)C data are shown in the top row with magenta markers. The 0\({}^{\circ}\) mean bond length is set to zero for all orientations. The z\({}^{\ast}\)-oriented strain changes with tilt angle as the orientation changes from IP (0\({}^{\circ}\) tilt) to OOP (extrapolated to 90\({}^{\circ}\) tilt). The film deposited at 300\({}^{\circ}\)C exhibits a greater degree of change in relative bond lengths compared to the film deposited at 20\({}^{\circ}\)C. The measured % changes in mean bond length are modelled by the infinitesimal strain theory applied to a rotated system, as shown by the orange curves. The green arrows in the insets show the orientation of each \(l\) component at 90\({}^{\circ}\) (experimentally inaccessible).
###### Acknowledgements.
This work was supported by National Science Foundation STROBE grant DMR-1548924. Work at the Molecular Foundry was supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Growth and non-electron microscopy characterization of experimental _a_-Tb\({}_{17}\)Co\({}_{83}\) and _a_-SiN\({}_{x}\) were performed by D.O. and E.H. and supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05CH11231, under the Nonequilibrium Magnetic Materials Program (KC2204). C.O. acknowledges support from the US Department of Energy Early Career Research Program.
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author, M.C.S., upon reasonable request. The scripts used to process the FEM data are available at [https://github.com/ScottLabUCB/FEM](https://github.com/ScottLabUCB/FEM).
|
2307.04224 | Reach of Segre-Veronese Manifolds | We compute the reach, extremal curvature and volume of a tubular neighborhood
for the Segre-Veronese variety intersected with the unit sphere. | Paul Breiding, Sarah Eggleston | 2023-07-09T16:35:18Z | http://arxiv.org/abs/2307.04224v3 | # Reach of Segre-Veronese Manifolds
###### Abstract.
We compute the reach, extremal curvature and volume of a tubular neighborhood for the Segre-Veronese variety intersected with the unit sphere.
**Keywords.** Tensors, Rank-One Tensors, Reach, Curvature, Tubes.
The first author is supported by Deutsche Forschungsgemeinschaft (DFG) - Projektnr. 445466444.
the tensors in \(\mathcal{H}_{\mathbf{n},\mathbf{d}}\). For a tensor \(\mathbf{F}=\sum_{\alpha_{1},\ldots,\alpha_{r}}F_{\alpha_{1},\ldots,\alpha_{r}}\)\(\mathbf{m}_{\alpha_{1}}\otimes\cdots\otimes\mathbf{m}_{\alpha_{r}}\in\mathcal{H}_{ \mathbf{n},\mathbf{d}}\), we use the short form \(\mathbf{F}=(F_{\alpha_{1},\ldots,\alpha_{r}})\). The following defines an inner product on \(\mathcal{H}_{\mathbf{n},\mathbf{d}}\):
\[\langle\mathbf{F},\mathbf{G}\rangle:=\sum_{\alpha_{1},\ldots,\alpha_{r}}F_{ \alpha_{1},\ldots,\alpha_{r}}\cdot G_{\alpha_{1},\ldots,\alpha_{r}},\quad\text {where }\mathbf{F}=(F_{\alpha_{1},\ldots,\alpha_{r}}),\mathbf{G}=(G_{\alpha_{1}, \ldots,\alpha_{r}})\in\mathcal{H}_{\mathbf{n},\mathbf{d}}.\]
With this, \(\mathcal{H}_{\mathbf{n},\mathbf{d}}\) becomes a Euclidean space, and we can measure volumes and distances in \(\mathcal{H}_{\mathbf{n},\mathbf{d}}\). The norm of a tensor \(\mathbf{F}\in\mathcal{H}_{\mathbf{n},\mathbf{d}}\) is \(\|\mathbf{F}\|:=\sqrt{\langle\mathbf{F},\mathbf{F}\rangle}\), and the angular distance is \(d_{\mathbb{S}}(\mathbf{F},\mathbf{G}):=\arccos\langle\mathbf{F},\ \mathbf{G}\rangle\) for \(\mathbf{F},\mathbf{G}\) in the unit sphere \(\mathbb{S}(\mathcal{H}_{\mathbf{n},\mathbf{d}})\subset\mathcal{H}_{\mathbf{n},\mathbf{d}}\).
The _(spherical) Segre-Veronese variety_ in \(\mathcal{H}_{\mathbf{n},\mathbf{d}}\) is
\[\mathbb{X}_{\mathbf{n},\mathbf{d}}:=\left\{\mathbf{f}_{1}\otimes\cdots \otimes\mathbf{f}_{r}\ |\ \mathbf{f}_{i}\in\widehat{\mathbb{V}}_{n_{i},d_{i}}\right\}\cap\mathbb{S}( \mathcal{H}_{\mathbf{n},\mathbf{d}}). \tag{1.1}\]
This is the variety of products of powers of linear forms in \(\mathcal{H}_{\mathbf{n},\mathbf{d}}\) that have unit norm. Tensors in \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) are also called _decomposable_, _simple_ or _rank-one_ tensors. We prove in Proposition 2.1 that \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) is an embedded smooth submanifold of \(\mathbb{S}(\mathcal{H}_{\mathbf{n},\mathbf{d}})\) of dimension \(\dim\mathbb{X}_{\mathbf{n},\mathbf{d}}=n_{1}+\cdots+n_{r}\); hence, we call \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) a (spherical) Segre-Veronese _manifold_.
The main focus of this paper is the reach and the volume of a tubular neighborhood of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\). We briefly recall what these are: the _medial axis_\(\operatorname{Med}(S)\) of a subset \(S\subset\mathbb{S}(\mathcal{H}_{\mathbf{n},\mathbf{d}})\) is the set of all points \(\mathbf{F}\in\mathbb{S}(\mathcal{H}_{\mathbf{n},\mathbf{d}})\) such that there exist at least two points \(\mathbf{G}_{1},\mathbf{G}_{2}\in S\) with \(d_{\mathbb{S}}(\mathbf{F},S)=d_{\mathbb{S}}(\mathbf{F},\mathbf{G}_{i})\), \(i=1,2\). The _reach_ of a subset \(S\) is its minimum distance to the medial axis:
\[\tau(S):=\inf_{\mathbf{F}\in S}d_{\mathbb{S}}(\mathbf{F},\operatorname{Med}(S )).\]
This notion was first introduced by Federer [10].
In our first main theorem we calculate the reach of the (spherical) Segre-Veronese manifold.
**Theorem 1.1** (The reach of (spherical) Segre-Veronese manifolds).: _Let \(\mathbf{d}=(d_{1},\ldots,d_{r})\) and \(\mathbf{n}=(n_{1},\ldots,n_{r})\) be \(r\)-tuples of positive integers, and let \(d:=d_{1}+\cdots+d_{r}\geq 2\) be the total degree. The reach of the (spherical) Segre-Veronese manifold is_
\[\tau(\mathbb{X}_{\mathbf{n},\mathbf{d}})=\begin{cases}\frac{\pi}{4},&d\leq 5 \\ \sqrt{\frac{d}{2(d-1)}},&d>5.\end{cases}\]
_In particular, the reach only depends on the total degree \(d\) and not on the dimensions of the Veronese varieties \(\mathbb{V}_{n_{i},d_{i}}\)._
This extends a theorem by Cazzaniga, Lerario and Rosana [14], who proved this formula for the Veronese variety, which is the special case \(r=1\). Another special case worth mentioning is \(d_{1}=\cdots=d_{r}=1\), which corresponds to the _Segre manifold_.
Since \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) is a smooth submanifold of the sphere, its reach is the minimum of the inverse of its maximal curvature and the width of its smallest bottleneck. We also compute these. The next theorem explains which curves in \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) have maximal and minimal curvature; this is proved in Section 4.1.
**Theorem 1.2** (Extremal curvature of curves in Segre-Veronese manifolds).: _Let the total degree of the (spherical) Segre-Veronese manifold \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) be \(d=d_{1}+\cdots+d_{r}\geq 2\). Consider a geodesic_
\[\gamma(t)=\gamma_{1}(t)\otimes\cdots\otimes\gamma_{r}(t)\in\mathbb{X}_{\mathbf{ n},\mathbf{d}}.\]
1. _The maximum curvature of_ \(\gamma(t)\) _is_ \(\sqrt{\frac{2(d-1)}{d}}.\) _It is attained by geodesics where the_ \(\gamma_{i}(t)\) _are geodesics in_ \(\mathbb{V}_{n_{i},d_{i}}\) _of constant speed_ \(\|\gamma_{i}^{\prime}(t)\|=\sqrt{\frac{d_{i}}{d}}\)_._
2. _The minimal curvature is_ \(\sqrt{\frac{2(d_{\ell}-1)}{d_{\ell}}}\)_, where_ \(d_{\ell}=\min\{d_{1},\ldots,d_{r}\}\)_. It is attained by geodesics where_ \(\gamma_{\ell}(t)\) _is a geodesic parametrized by arc length in_ \(\mathbb{V}_{n_{\ell},d_{\ell}}\) _and the other_ \(\gamma_{i}(t)\) _are constant._
Our third main result concerns the volume of the tubular neighborhood
\[U(\varepsilon):=\{\mathbf{F}\in\mathbb{S}(\mathcal{H}_{\mathbf{n},\mathbf{d}} )\ \left|\ \ d_{\mathbb{S}}(\mathbf{F},\mathbb{X}_{\mathbf{n},\mathbf{d}})<\varepsilon\right\}.\]
In Section 5 we compute this volume in terms of perfect matchings in a weighted graph. For a tuple \((v_{1},\ldots,v_{r})\) of nonnegative integers let \(G=(V,E)\) be the complete graph on \(v:=v_{1}+\cdots+v_{r}\) vertices. Recall that the tuple of degrees is \(\mathbf{d}=(d_{1},\ldots,d_{r})\). We define weights on \(E\) as follows: the vertices of \(G\) are partitioned into \(r\) groups \(V=\mathcal{I}_{1}\sqcup\cdots\sqcup\mathcal{I}_{r}\) of cardinalities \(|\mathcal{I}_{k}|=v_{k}\). The weight \(w(e)\) of an edge \(e\) between vertices in group \(\mathcal{I}_{k}\) is \(w(e)=d_{k}(d_{k}-1)\); the weight of an edge between vertices in different groups is \(1\). Given a perfect matching \(C\subset E\) we define its weight to be \(w(C):=\prod_{e\in C}w(e).\) This defines the function
\[D_{\mathbf{d}}(v_{1},\ldots,v_{r}):=(-1)^{\frac{v}{2}}\sum_{C\subset E\text{ perfect matching}}w(C). \tag{1.2}\]
We now have the following result.
**Theorem 1.3** (Volume of a tubular neighborhood).: _Let \(n=\dim\mathbb{X}_{\mathbf{n},\mathbf{d}}\), \(N=\dim(\mathbb{S}(\mathcal{H}_{\mathbf{n},\mathbf{d}}))\) and \(c=N-n\). Define \(J_{i}(\varepsilon)=\int_{0}^{\varepsilon}(\sin\phi)^{N-n+2i-1}\cdot(\cos\phi)^ {n-2i}\ \mathrm{d}\phi\) and_
\[\theta_{i}=\frac{\Gamma(\frac{c}{2})}{2^{i}\,\Gamma(i+\frac{c}{2})}\ \sum_{ \begin{subarray}{c}v_{1},\ldots,v_{r}\in\mathbb{N}:\ v_{i}\leq n_{i}\\ v_{1}+\cdots+v_{r}=2i\end{subarray}}D_{\mathbf{d}}(v_{1},\ldots,v_{r}).\]
_Then for \(\varepsilon<\tau(\mathbb{X}_{\mathbf{n},\mathbf{d}})\), we have_
\[\mathrm{vol}(U(\varepsilon))=\frac{\sqrt{d_{1}^{n_{1}}\cdots d_{r}^{n_{r}}}}{2^{r-1}}\cdot\mathrm{vol}(\mathbb{S}^{n_{1}})\cdots\mathrm{vol}(\mathbb{S}^{n_{r}})\cdot\mathrm{vol}(\mathbb{S}^{c-1})\cdot\sum_{0\leq 2i\leq n}\theta_{i}\cdot J_{i}(\varepsilon).\]
The proof of this theorem is based on computing the Weingarten map of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\), which we do in Theorem 3.8. We show that the Weingarten map of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) admits a block structure where the diagonal blocks are the Weingarten maps of the Veronese factors.
At the end of Section 5.1 we compute the coefficients \(\theta_{i}\) from Theorem 1.3 for the Segre manifold \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) where \(\mathbf{n}=\mathbf{1}_{r}:=(1,1,\ldots,1)\) and \(\mathbf{d}=\mathbf{1}_{r}\). This Segre manifold is the image of \(\mathbb{S}^{1}\times\cdots\times\mathbb{S}^{1}\) under the Segre embedding.
### Acknowledgements
We thank Antonio Lerario and Andrea Rosana for helping us find several references related to differential geometry and for carefully explaining their paper [10] to us. We also thank Jan Draisma for a discussion that led to Theorem 1.3.
### Organization of the paper
In Section 2 we discuss the differential geometry and curvature of manifolds defined by tensor products of vectors. We then apply the results from Section 2 in Section 3 to study the curvature of the (spherical) Segre-Veronese manifold \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\). In particular, we work out the Weingarten map of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\). In Section 4 we compute the reach and prove Theorems 1.1 and 1.2. Finally, in Section 5, we compute the volume of the tubular neighborhood and prove Theorem 1.3.
## 2. Tensor products of Riemannian manifolds
The tensor space \(\mathbb{R}^{m_{1}+1}\otimes\cdots\otimes\mathbb{R}^{m_{r}+1}\) is a Euclidean space for the inner product defined by \(\langle\mathbf{x}_{1}\otimes\cdots\otimes\mathbf{x}_{r}\), \(\mathbf{y}_{1}\otimes\cdots\otimes\mathbf{y}_{r}\rangle=\langle\mathbf{x}_{1},\mathbf{y}_{1}\rangle\cdots\langle\mathbf{x}_{r},\mathbf{y}_{r}\rangle\), where \(\langle\mathbf{x},\mathbf{y}\rangle=\mathbf{x}^{T}\mathbf{y}\). Write \(N:=(m_{1}+1)\cdots(m_{r}+1)-1\); then \(\mathbb{S}^{N}\) is the sphere in \(\mathbb{R}^{m_{1}+1}\otimes\cdots\otimes\mathbb{R}^{m_{r}+1}\). We consider for \(1\leq i\leq r\) a smooth embedded submanifold \(\mathbb{M}_{i}\) of the sphere \(\mathbb{S}^{m_{i}}\subset\mathbb{R}^{m_{i}+1}\). We define the _tensor product_ of these manifolds to be
\[\mathbb{M}_{1}\otimes\cdots\otimes\mathbb{M}_{r}:=\left\{\mathbf{x}_{1} \otimes\cdots\otimes\mathbf{x}_{r}\mid\mathbf{x}_{1}\in\mathbb{M}_{1},\ldots, \mathbf{x}_{r}\in\mathbb{M}_{r}\right\}.\]
**Proposition 2.1**.: _For \(1\leq i\leq r\) let \(\mathbb{M}_{i}\) be a smooth Riemannian submanifold of \(\mathbb{S}^{m_{i}}\) of dimension \(n_{i}\), and denote \(\mathbb{M}:=\mathbb{M}_{1}\otimes\cdots\otimes\mathbb{M}_{r}.\) Furthermore, denote the tensor product map by \(\psi:\mathbb{M}_{1}\times\cdots\times\mathbb{M}_{r}\to\mathbb{M},(\mathbf{x}_ {1},\ldots,\mathbf{x}_{r})\mapsto\mathbf{x}_{1}\otimes\cdots\otimes\mathbf{x} _{r}\). Then:_
1. \(\mathbb{M}\) _is a Riemannian submanifold of_ \(\mathbb{S}^{N}\) _of dimension_ \(n_{1}+\cdots+n_{r}\)_._
2. _The tangent space of_ \(\mathbb{M}\) _at_ \(\mathbf{x}=\mathbf{x}_{1}\otimes\cdots\otimes\mathbf{x}_{r}\) _is_ \[T_{\mathbf{x}}\mathbb{M}=T_{\mathbf{x}_{1}}\mathbb{M}_{1}\otimes\mathbf{x}_{2} \otimes\cdots\otimes\mathbf{x}_{r}+\cdots+\mathbf{x}_{1}\otimes\mathbf{x}_{2} \otimes\cdots\otimes T_{\mathbf{x}_{r}}\mathbb{M}_{r}.\]
3. \(\psi\) _is a local isometry._
Proof.: For \(1\leq i\leq r\) let \(\mathcal{A}_{i}=(\mathcal{U}_{i_{j}},\varphi_{i_{j}})_{j}\) be an atlas for \(\mathbb{M}_{i}\) such that \(\mathbf{u}\in\mathcal{U}_{i_{j}}\) implies that the antipodal point \(-\mathbf{u}\not\in\mathcal{U}_{i_{j}}\). Such an atlas exists since \(\mathbf{0}\not\in\mathbb{M}_{i}\). Define the open sets \(\mathcal{U}_{i_{1},\ldots,i_{r}}:=\psi(U_{i_{1}}\times\cdots\times U_{i_{r}})\); then \(\psi_{|U_{i_{1}}\times\cdots\times U_{i_{r}}}\) is an isomorphism, so we have an atlas for \(\mathbb{M}\) with charts \(\mathcal{U}_{i_{1},\ldots,i_{r}}\) and maps \((\varphi_{i_{1}}\times\ldots\times\varphi_{i_{r}})\circ(\psi|_{U_{i_{1}}\times\cdots\times U_{i_{r}}})^{-1}\). This also shows that we have \(\dim\mathbb{M}=\dim(\mathbb{M}_{1}\times\cdots\times\mathbb{M}_{r})=n_{1}+\cdots+n_{r}\). The Riemannian structure on the ambient space \(\mathbb{S}^{N}\) induces a Riemannian structure on \(\mathbb{M}\).
For the second statement, we use that \(T_{(\mathbf{x}_{1},\ldots,\mathbf{x}_{r})}(\mathbb{M}_{1}\times\cdots\times\mathbb{M}_{r})=T_{\mathbf{x}_{1}}\mathbb{M}_{1}\times\cdots\times T_{\mathbf{x}_{r}}\mathbb{M}_{r}\). For \(1\leq i\leq r\) let \(\mathbf{v}\in T_{\mathbf{x}_{i}}\mathbb{M}_{i}\). By multilinearity, the derivative of \(\psi\) at \((\mathbf{x}_{1},\ldots,\mathbf{x}_{r})\) maps
\[\mathrm{D}_{(\mathbf{x}_{1},\ldots,\mathbf{x}_{r})}\psi(0,\ldots,0,\mathbf{v}, 0,\ldots,0)=\mathbf{x}_{1}\otimes\cdots\otimes\mathbf{x}_{i-1}\otimes\mathbf{ v}\otimes\mathbf{x}_{i+1}\otimes\cdots\otimes\mathbf{x}_{r}.\]
This proves the second statement, since \(T_{\mathbf{x}}\mathbb{M}\) is the image of \(\mathrm{D}_{(\mathbf{x}_{1},\ldots,\mathbf{x}_{r})}\psi\).
Finally, for \(\mathbf{v}\in T_{\mathbf{x}_{i}}\mathbb{M}_{i}\) and \(\mathbf{w}\in T_{\mathbf{x}_{j}}\mathbb{M}_{j}\) we have
\[\langle\mathbf{x}_{1}\otimes\cdots\otimes\mathbf{v}\otimes\cdots\otimes \mathbf{x}_{r},\ \mathbf{x}_{1}\otimes\cdots\otimes\mathbf{w}\otimes\cdots\otimes\mathbf{x}_{r} \rangle=\begin{cases}\langle\mathbf{v},\mathbf{w}\rangle,&i=j\\ \langle\mathbf{v},\mathbf{x}_{i}\rangle\,\langle\mathbf{w},\mathbf{x}_{j} \rangle,&i\neq j\end{cases}.\]
Since \(\langle\mathbf{v},\mathbf{x}_{i}\rangle=\langle\mathbf{w},\mathbf{x}_{j}\rangle=0\), this shows that the inner product between the images of \((0,\ldots,0,\mathbf{v},0,\ldots,0)\) and \((0,\ldots,0,\mathbf{w},0,\ldots,0)\) under \(\mathrm{D}_{(\mathbf{x}_{1},\ldots,\mathbf{x}_{r})}\psi\) is \(\langle\mathbf{v},\mathbf{w}\rangle\), if \(i=j\), and \(0\) otherwise. This shows that \(\mathrm{D}_{(\mathbf{x}_{1},\ldots,\mathbf{x}_{r})}\psi\) preserves inner products on a basis of \(T_{(\mathbf{x}_{1},\ldots,\mathbf{x}_{r})}(\mathbb{M}_{1}\times\cdots\times\mathbb{M}_{r})\) and hence is an orthogonal map. This proves the third statement.
Using the notation of Proposition 2.1 we can now write
\[\mathbb{X}_{\mathbf{n},\mathbf{d}}=\begin{cases}\mathbb{V}_{n_{1},d_{1}}\otimes\cdots\otimes\mathbb{V}_{n_{r},d_{r}},&\text{if at least one $d_i$ is odd;}\\ (\mathbb{V}_{n_{1},d_{1}}\otimes\cdots\otimes\mathbb{V}_{n_{r},d_{r}})\ \cup\ -(\mathbb{V}_{n_{1},d_{1}}\otimes\cdots\otimes\mathbb{V}_{n_{r},d_{r}}),&\text{if all $d_i$ are even.}\end{cases} \tag{2.1}\]
Furthermore, Proposition 2.1 implies that \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) is a smooth submanifold of the sphere of dimension \(\dim\mathbb{X}_{\mathbf{n},\mathbf{d}}=n_{1}+\cdots+n_{r}\). Therefore, we will henceforth call it the _(spherical) Segre-Veronese manifold_.
### The second fundamental form of a tensor product manifold
Recall that the second fundamental form \(\mathrm{II}_{\mathbf{x}}\) of a Riemannian submanifold \(\mathbb{M}\subset\mathbb{S}^{N}\) at a point \(\mathbf{x}\in\mathbb{M}\) is the trilinear form
\[\mathrm{II}_{\mathbf{x}}:T_{\mathbf{x}}\mathbb{M}\times T_{\mathbf{x}}\mathbb{ M}\times N_{\mathbf{x}}\mathbb{M}\to\mathbb{R},\quad(\mathbf{v},\mathbf{w}, \mathbf{a})\mapsto\left\langle\frac{\partial\mathbf{v}(\mathbf{u})}{\partial \mathbf{w}}\Bigm{|}_{\mathbf{u}=\mathbf{x}},\ \mathbf{a}\right\rangle,\]
where \(\mathbf{v}(\mathbf{u})\) is a (local) smooth tangent field of \(\mathbb{M}\) with \(\mathbf{v}(\mathbf{x})=\mathbf{v}\). For a fixed \(\mathbf{a}\in N_{\mathbf{x}}\mathbb{M}\) the _Weingarten map_ is the linear map
\[L_{\mathbf{a}}:T_{\mathbf{x}}\mathbb{M}\to T_{\mathbf{x}}\mathbb{M},\]
such that \(\mathrm{II}_{\mathbf{x}}(\mathbf{v},\mathbf{w},\mathbf{a})=\langle\mathbf{v}, L_{\mathbf{a}}(\mathbf{w})\rangle\).
The next proposition provides the Weingarten map for a tensor product of manifolds.
**Proposition 2.2**.: _Let \(\mathbb{M}_{1},\ldots,\mathbb{M}_{r}\) be as in Proposition 2.1 and \(\mathbb{M}=\mathbb{M}_{1}\otimes\cdots\otimes\mathbb{M}_{r}\). Consider a point \(\mathbf{x}=\mathbf{x}_{1}\otimes\cdots\otimes\mathbf{x}_{r}\in\mathbb{M}\) and a normal vector \(\mathbf{a}\in N_{\mathbf{x}}\mathbb{M}\). A matrix representation of the Weingarten map of \(\mathbb{M}\) at \(\mathbf{x}\) in direction \(\mathbf{a}\) relative to orthonormal coordinates is_
\[L_{\mathbf{a}}=\begin{bmatrix}L_{1}&L_{1,2}&\cdots&L_{1,r}\\ (L_{1,2})^{T}&L_{2}&\cdots&L_{2,r}\\ &&\ddots&\\ (L_{1,r})^{T}&(L_{2,r})^{T}&\cdots&L_{r}\end{bmatrix},\]
_where the matrices \(L_{i,j}\) and \(L_{i}\) are defined as follows: let \(\mathbf{v}_{1}^{(i)},\ldots,\mathbf{v}_{n_{i}}^{(i)}\) be an orthonormal basis for the tangent space \(T_{\mathbf{x}_{i}}\mathbb{M}_{i}\)._
1. _The off-diagonal blocks are_ \[L_{i,j}:=\left[\langle\mathbf{x}_{1}\otimes\cdots\otimes\mathbf{v}_{k}^{(i)} \otimes\cdots\otimes\mathbf{v}_{\ell}^{(j)}\otimes\cdots\otimes\mathbf{x}_{r },\mathbf{a}\rangle\right]_{1\leq k\leq n_{i},1\leq\ell\leq n_{j}}\in\mathbb{R} ^{n_{i}\times n_{j}}.\]
2. _Write_ \(R_{i}:=\mathbf{x}_{1}\otimes\cdots\otimes\mathbf{x}_{i-1}\otimes N_{\mathbf{x }_{i}}\mathbb{M}_{i}\otimes\mathbf{x}_{i+1}\otimes\cdots\otimes\mathbf{x}_{r}\), _and let the orthogonal projection of_ \(\mathbf{a}\) _onto_ \(R_{i}\) _be_ \(\mathbf{x}_{1}\otimes\cdots\otimes\mathbf{x}_{i-1}\otimes\mathbf{a}_{i}\otimes \mathbf{x}_{i+1}\otimes\cdots\otimes\mathbf{x}_{r}\)_. Then_ \(\mathbf{a}_{i}\in N_{\mathbf{x}_{i}}\mathbb{M}_{i}\)_, and_ \(L_{i}\in\mathbb{R}^{n_{i}\times n_{i}}\) _is a matrix representation of the Weingarten map_ \(L_{\mathbf{a}_{i}}\) _of_ \(\mathbb{M}_{i}\) _at_ \(\mathbf{x}_{i}\) _in direction_ \(\mathbf{a}_{i}\) _with respect to the orthonormal basis_ \(\mathbf{v}_{1}^{(i)},\ldots,\mathbf{v}_{n_{i}}^{(i)}\)
Proof.: By Proposition 2.1, \(\mathbf{x}_{1}\otimes\cdots\otimes\mathbf{v}_{k}^{(i)}\otimes\cdots\otimes \mathbf{x}_{r}\) for \(1\leq i\leq r\) and \(1\leq k\leq n_{i}\) is an orthonormal basis of \(T_{\mathbf{x}}\mathbb{M}\). Fix tangent vectors \(\mathbf{v}=\mathbf{x}_{1}\otimes\cdots\otimes\mathbf{v}_{k}^{(i)}\otimes\cdots \otimes\mathbf{x}_{r}\) and \(\mathbf{w}:=\mathbf{x}_{1}\otimes\cdots\otimes\mathbf{v}_{\ell}^{(j)}\otimes \cdots\otimes\mathbf{x}_{r}\). Furthermore, let \(\mathbf{v}_{k}^{(i)}(\mathbf{u}_{\mathbf{i}})\) be a local smooth tangent field of \(\mathbb{M}_{i}\) with \(\mathbf{v}_{k}^{(i)}(\mathbf{x}_{i})=\mathbf{v}_{k}^{(i)}\). Then we obtain a local smooth tangent field of \(\mathbb{M}\) with \(\mathbf{v}(\mathbf{x})=\mathbf{v}\) by setting
\[\mathbf{v}(\mathbf{u}_{1}\otimes\cdots\otimes\mathbf{u}_{r}):=\mathbf{u}_{1} \otimes\cdots\otimes\mathbf{v}_{k}^{(i)}(\mathbf{u}_{i})\otimes\cdots\otimes \mathbf{u}_{r}.\]
By multilinearity,
\[\frac{\partial\mathbf{v}(\mathbf{u})}{\partial\mathbf{w}}=\begin{cases} \mathbf{x}_{1}\otimes\cdots\otimes\frac{\partial\mathbf{v}_{k}^{(i)}( \mathbf{u}_{i})}{\partial\mathbf{v}_{\ell}^{(i)}}\otimes\cdots\otimes \mathbf{x}_{r},&\text{if $i=j$};\\ \mathbf{x}_{1}\otimes\cdots\otimes\mathbf{v}_{k}^{(i)}\otimes\cdots\otimes \mathbf{v}_{\ell}^{(j)}\otimes\cdots\otimes\mathbf{x}_{r},&\text{if $i\neq j$}.\end{cases}\]
This shows that the off-diagonal blocks of \(L_{\mathbf{a}}\) are the matrices \(L_{i,j}\).
For the diagonal blocks \((i=j)\) we observe that \(\mathbf{x}_{1}\otimes\cdots\otimes\frac{\partial\mathbf{v}_{k}^{(i)}(\mathbf{ u}_{i})}{\partial\mathbf{v}_{\ell}^{(i)}}\otimes\cdots\otimes\mathbf{x}_{r}\in R_{i}\), so
\[\langle\mathbf{v},L_{\mathbf{a}}(\mathbf{w})\rangle=\mathrm{II}_{\mathbf{x}}(\mathbf{v},\mathbf{w},\mathbf{a})=\left\langle\mathbf{x}_{1}\otimes\cdots\otimes\frac{\partial\mathbf{v}_{k}^{(i)}(\mathbf{u}_{i})}{\partial\mathbf{v}_{\ell}^{(i)}}\otimes\cdots\otimes\mathbf{x}_{r},\mathbf{a}\right\rangle=\left\langle\frac{\partial\mathbf{v}_{k}^{(i)}(\mathbf{u}_{i})}{\partial\mathbf{v}_{\ell}^{(i)}},\mathbf{a}_{i}\right\rangle=\langle\mathbf{v}_{\ell}^{(i)},L_{\mathbf{a}_{i}}(\mathbf{v}_{k}^{(i)})\rangle.\]
This settles the case \(i=j\).
## 3. Geodesics and the second fundamental form of Segre-Veronese manifolds
We now use the results from the previous section to compute the second fundamental form and the Weingarten map for a (spherical) Segre-Veronese manifold \(\mathbb{X}_{\mathbf{n},\mathbf{d}}.\) The first step towards this goal is considering the Veronese manifold \((r=1)\).
### Veronese manifolds
The Bombieri-Weyl inner product on the space of homogeneous polynomials \(H_{n,d}\) has the property that
\[\langle\mathbf{f},\ \boldsymbol{\ell}^{d}\rangle=\mathbf{f}(\ell_{0},\ldots,\ell_{n}),\quad\text{where }\boldsymbol{\ell}(\mathbf{x})=\ell_{0}x_{0}+\ell_{1}x_{1}+\cdots+\ell_{n}x_{n}; \tag{3.1}\]
that is, taking the inner product of \(\mathbf{f}\in H_{n,d}\) with \(\boldsymbol{\ell}^{d}\in\mathbb{V}_{n,d}\) evaluates \(\mathbf{f}\) at the coefficient vector of \(\boldsymbol{\ell}\). One calls \((\mathbf{x},\mathbf{y})\mapsto\langle\mathbf{x},\mathbf{y}\rangle^{d}\) a _reproducing kernel_ for \(H_{n,d}\).
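A quick numerical sanity check of this property (an illustrative sketch, not from the paper): expanding two powers of linear forms in the orthonormal basis \(\mathbf{m}_{\alpha}=\sqrt{\binom{d}{\alpha}}\,\mathbf{x}^{\alpha}\), the coordinate dot product of \(\boldsymbol{\ell}_{1}^{d}\) and \(\boldsymbol{\ell}_{2}^{d}\) equals \(\langle\boldsymbol{\ell}_{1},\boldsymbol{\ell}_{2}\rangle^{d}\), which is the reproducing-kernel identity behind (3.1).

```python
import numpy as np
from itertools import product as iproduct
from math import factorial, prod

def multi_indices(nvars, d):
    """All exponent tuples alpha of nvars nonnegative integers with |alpha| = d."""
    return [a for a in iproduct(range(d + 1), repeat=nvars) if sum(a) == d]

def multinom(d, alpha):
    return factorial(d) // prod(factorial(k) for k in alpha)

def power_in_bw_basis(ell, d, alphas):
    """Coefficients of ell^d in the orthonormal basis m_alpha = sqrt(binom(d, alpha)) x^alpha."""
    return np.array([np.sqrt(multinom(d, a)) * prod(c**k for c, k in zip(ell, a)) for a in alphas])

rng = np.random.default_rng(1)
n, d = 2, 4
alphas = multi_indices(n + 1, d)
ell1, ell2 = rng.standard_normal(n + 1), rng.standard_normal(n + 1)
lhs = power_in_bw_basis(ell1, d, alphas) @ power_in_bw_basis(ell2, d, alphas)
print(np.isclose(lhs, (ell1 @ ell2) ** d))   # True: <ell1^d, ell2^d> = <ell1, ell2>^d
```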
Recall that the scaled monomials \(\mathbf{m}_{\alpha}=\sqrt{\binom{d}{\alpha}}\,\mathbf{x}^{\alpha}\) form an orthonormal basis for the space of polynomials \(H_{n,d}\). We first prove a lemma on the structure of the tangent space of the Veronese manifold.
**Lemma 3.1**.: _Consider \(\mathbf{m}_{(d,0,\ldots,0)}=x_{0}^{d}\in\mathbb{V}_{n,d}\). Then an orthonormal basis for the tangent space \(T_{x_{0}^{d}}\mathbb{V}_{n,d}\) is \(\{\mathbf{m}_{(d-1,1,0,\ldots,0)},\ldots,\mathbf{m}_{(d-1,0,\ldots,0,1)}\}=\{ \sqrt{d}\,x_{0}^{d-1}x_{k}\}_{k=1}^{n}\)._
Proof.: It follows from [13, Theorem 18] that \(T_{x_{0}^{d}}\mathbb{V}_{n,d}\) is spanned by \(\sqrt{d}\,x_{0}^{d-1}x_{k}\), \(1\leq k\leq n\). The fact that these monomials are orthonormal follows directly from the definition of the Bombieri-Weyl inner product.
We recall the two linear spaces from [13, Equation (30)]:
\[P:=\operatorname{span}\{\mathbf{m}_{\alpha}\mid\alpha_{0}<d-2\}\quad\text{and} \quad W:=\operatorname{span}\{\mathbf{m}_{\alpha}\mid\alpha_{0}=d-2\}. \tag{3.2}\]
The spaces \(P\) and \(W\) are orthogonal to each other. Lemma 3.1 implies the following.
**Lemma 3.2**.: \(N_{x_{0}^{d}}\mathbb{V}_{n,d}=P\oplus W\)_._
The next theorem follows from Equations (28) and (29) in [13].
**Theorem 3.3**.: _Let \(\mathbf{f}\in N_{x_{0}^{d}}\mathbb{V}_{n,d}=P\oplus W\) and \(L_{\mathbf{f}}\) be the Weingarten map of \(\mathbb{V}_{n,d}\) at \(x_{0}^{d}\) and \(\mathbf{f}\)._
1. _If_ \(\mathbf{f}\in P\)_, then_ \(L_{\mathbf{f}}=0\)_._
2. _If_ \(\mathbf{f}\in W\)_, then_ \(L_{\mathbf{f}}\) _can be represented in orthonormal coordinates by the matrix_ \[L_{\mathbf{f}}=\sqrt{\frac{d-1}{d}}\,\begin{bmatrix}\sqrt{2}\cdot f_{1,1}&f_{ 2,1}&\cdots&f_{n,1}\\ f_{2,1}&\sqrt{2}\cdot f_{2,2}&\cdots&f_{n,2}\\ \vdots&\vdots&\ddots&\vdots\\ f_{n,1}&f_{n,2}&\cdots&\sqrt{2}\cdot f_{n,n}\end{bmatrix},\] _where_ \[\mathbf{f}=\sum_{1\leq i<j\leq n}f_{i,j}\sqrt{d(d-1)}\,x_{0}^{d-2}x_{i}x_{j}+ \sum_{1\leq i\leq n}f_{i,i}\sqrt{\frac{d(d-1)}{2}}\,x_{0}^{d-2}x_{i}^{2}.\]
Recall that a random symmetric \(n\times n\) matrix \(L=(\ell_{i,j})\) is \(L\sim\operatorname{GOE}(n)\) if \(\ell_{i,j}\sim N(0,\frac{1}{2})\) for \(i\neq j\) and \(\ell_{i,i}\sim N(0,1)\) and all entries are independent (except for the symmetry condition). The probability density of \(L\) is proportional to \(\exp(-\frac{1}{2}\mathrm{Trace}(L^{T}L))\).
**Corollary 3.4**.: _Let \(\mathbf{f}\in N_{x_{0}^{d}}\mathbb{V}_{n,d}\) and \(L_{\mathbf{f}}\) be the Weingarten map of \(\mathbb{V}_{n,d}\) at \(x_{0}^{d}\) and \(\mathbf{f}\). If \(\mathbf{f}\) is Gaussian with respect to the Bombieri-Weyl metric then_
\[L_{\mathbf{f}}\sim\sqrt{\frac{2(d-1)}{d}}\operatorname{GOE}(n).\]
### Segre-Veronese manifold
We now turn to the Segre-Veronese manifold \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\). We first show that \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) is a homogeneous space. This allows us to compute geodesics and the second fundamental form at the distinguished point
\[\mathbf{E}:=x_{0}^{d_{1}}\otimes\cdots\otimes x_{0}^{d_{r}}\in\mathbb{X}_{ \mathbf{n},\mathbf{d}}.\]
The Bombieri-Weyl inner product on \(\mathcal{H}_{\mathbf{n},\mathbf{d}}\) has the property
\[\langle\mathbf{f}_{1}\otimes\cdots\otimes\mathbf{f}_{r},\,\mathbf{g}_{1} \otimes\cdots\otimes\mathbf{g}_{r}\rangle:=\langle\mathbf{f}_{1},\mathbf{g}_{ 1}\rangle\cdots\langle\mathbf{f}_{r},\mathbf{g}_{r}\rangle. \tag{3.3}\]
Moreover, it is invariant under an orthogonal change of variables; i.e., for orthogonal matrices \(Q_{1}\in O(n_{1}+1),\ldots,Q_{r}\in O(n_{r}+1)\) we have
\[\langle(\mathbf{f}_{1}\circ Q_{1})\otimes\cdots\otimes(\mathbf{f}_{r}\circ Q_{r }),\,(\mathbf{g}_{1}\circ Q_{1})\otimes\cdots\otimes(\mathbf{g}_{r}\circ Q_{r })\rangle=\langle\mathbf{f}_{1}\otimes\cdots\otimes\mathbf{f}_{r},\,\mathbf{g} _{1}\otimes\cdots\otimes\mathbf{g}_{r}\rangle. \tag{3.4}\]
This invariance was studied by Kostlan in [13, 14]. We define the action of the group
\[G:=\mathbb{Z}/2\mathbb{Z}\times O(n_{1}+1)\times\cdots\times O(n_{r}+1)\]
on \(\mathcal{H}_{\mathbf{n},\mathbf{d}}\) as the linear extension of the action
\[(\sigma,Q_{1},\ldots,Q_{r}).(\mathbf{f}_{1}\otimes\cdots\otimes\mathbf{f}_{r} ):=(-1)^{\sigma}\,(\mathbf{f}_{1}\circ Q_{1})\otimes\cdots\otimes(\mathbf{f}_ {r}\circ Q_{r}).\]
By (3.4), this action is isometric. Furthermore, the additional \(\mathbb{Z}/2\mathbb{Z}\) factor makes it act transitively on \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\). We summarize this in the following lemma.
**Lemma 3.5**.: _The group \(G\) acts isometrically and transitively on \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\)._
For our formulation of the Weingarten map of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) we first need to obtain a certain decomposition of the normal space. We define the spaces
\[\mathcal{W}_{i} :=\operatorname{span}\{\mathbf{m}_{\alpha_{1}}\otimes\cdots \otimes\mathbf{m}_{\alpha_{r}}\mid(\alpha_{i})_{0}=d_{i}-2,\;(\alpha_{k})_{0} =d_{k}\text{ for }k\neq i\},\] \[\mathcal{G}_{i,j} :=\operatorname{span}\{\mathbf{m}_{\alpha_{1}}\otimes\cdots \otimes\mathbf{m}_{\alpha_{r}}\mid(\alpha_{i})_{0}=d_{i}-1,\;(\alpha_{j})_{0} =d_{j}-1,\;(\alpha_{k})_{0}=d_{k},\text{ for }k\neq i,j\}\]
and set
\[\mathcal{W} :=\bigoplus_{1\leq i\leq r}\mathcal{W}_{i}, \tag{3.5}\] \[\mathcal{G} :=\bigoplus_{1\leq i<j\leq r}\mathcal{G}_{i,j},\] \[\mathcal{P} :=(\mathcal{W}\oplus\mathcal{G})^{\perp}\cap N_{\mathbf{E}} \mathbb{X}_{\mathbf{n},\mathbf{d}}.\]
The next result extends Lemma 3.2 to the case \(r\geq 2\).
**Lemma 3.6**.: _If \(r\geq 2\), the normal space has the orthogonal decomposition_
\[N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}=\mathcal{P}\oplus\mathcal{W} \oplus\mathcal{G}.\]
Proof.: By the inner product rule for simple tensors (3.3), and since the monomials \(\mathbf{m}_{\alpha}\) are orthogonal, the decomposition \(\mathcal{P}\oplus\mathcal{W}\oplus\mathcal{G}\) is an orthogonal decomposition. Therefore, we only have to show that \(\mathcal{W},\mathcal{G}\subset N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\).
Recall from (2.1) that \(\mathbb{X}_{\mathbf{n},\mathbf{d}}=\mathbb{V}_{n_{1},d_{1}}\otimes\cdots \otimes\mathbb{V}_{n_{r},d_{r}}\), if there is at least one odd \(d_{i}\), and that, if all \(d_{i}\) are even, we have \(\mathbb{X}_{\mathbf{n},\mathbf{d}}=(\mathbb{V}_{n_{1},d_{1}}\otimes\cdots \otimes\mathbb{V}_{n_{r},d_{r}})\;\cup\;-(\mathbb{V}_{n_{1},d_{1}}\otimes \cdots\otimes\mathbb{V}_{n_{r},d_{r}})\). It follows from Proposition 2.1 (2) that
\[T_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}=T_{x_{0}^{d_{1}}}\mathbb{V}_{ n_{1},d_{1}}\otimes x_{0}^{d_{2}}\otimes\cdots\otimes x_{0}^{d_{r}}+\cdots+x_{0}^{d_{ 1}}\otimes x_{0}^{d_{2}}\otimes\cdots\otimes T_{x_{0}^{d_{r}}}\mathbb{V}_{n_{r },d_{r}}.\]
Lemma 3.1 implies that the \(\ell\)-th summand of this decomposition is
\[x_{0}^{d_{1}}\otimes\cdots\otimes T_{x_{0}^{d_{\ell}}}\mathbb{V}_{n_{\ell},d_{\ell}}\otimes\cdots\otimes x_{0}^{d_{r}}=\operatorname{span}\{\mathbf{m}_{\alpha_{1}}\otimes\cdots\otimes\mathbf{m}_{\alpha_{r}}\mid(\alpha_{\ell})_{0}=d_{\ell}-1,\;(\alpha_{k})_{0}=d_{k},\text{ for }k\neq\ell\}.\]
The space \(\mathcal{W}_{i}\) is spanned by simple tensors \(\mathbf{m}_{\alpha_{1}}\otimes\cdots\otimes\mathbf{m}_{\alpha_{r}}\) such that the \(i\)-th factor \(\mathbf{m}_{\alpha_{i}}\) is orthogonal to both \(T_{x_{0}^{d_{i}}}\mathbb{V}_{n_{i},d_{i}}\) and \(x_{0}^{d_{i}}\). Using (3.3), this already shows that \(\mathcal{W}_{i}\perp T_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\). Consequently, \(\mathcal{W}\subset N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\).
The space \(\mathcal{G}_{i,j}\) is spanned by simple tensors \(\mathbf{m}_{\alpha_{1}}\otimes\cdots\otimes\mathbf{m}_{\alpha_{r}}\) such that the \(i\)-th factor \(\mathbf{m}_{\alpha_{i}}\) is orthogonal to \(x_{0}^{d_{i}}\) and the \(j\)-th factor \(\mathbf{m}_{\alpha_{j}}\) is orthogonal to \(x_{0}^{d_{j}}\). Since \(T_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\) is spanned by simple tensors that have at most one factor different from \(x_{0}^{d_{k}}\), the inner product rule (3.3) implies that \(\mathcal{G}_{i,j}\perp T_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\) for all \(i,j\); hence, \(\mathcal{G}\subset N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\).
**Example 3.7**.: Let us work out the decomposition from Lemma 3.6 in the case of the Segre manifold. This is the case \(\mathbf{d}=\mathbf{1}=(1,\ldots,1)\); i.e., \(\mathbb{X}_{\mathbf{n},\mathbf{d}}=\mathbb{S}^{n_{1}}\otimes\cdots\otimes\mathbb{S}^{n_{r}}\). Since \(H_{n,1}\cong\mathbb{R}^{n+1}\), we can view elements in \(\mathcal{H}_{\mathbf{n},\mathbf{d}}=H_{n_{1},1}\otimes\cdots\otimes H_{n_{r},1}\) as order-\(r\) tensors \(\mathbf{F}=(F_{i_{1},\ldots,i_{r}})\), where \(0\leq i_{j}\leq n_{j}\) for \(1\leq j\leq r\). Figure 1 shows an order-three tensor illustrating the case \(r=3\). We have for the Segre manifold
\[T_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{1}}=\{(F_{i_{1},\ldots,i_{r}})\mid \text{ there is exactly one $i_{j}$ greater than zero}\}.\]
Moreover, \(\mathcal{W}=\{0\}\) and
\[\mathcal{G} =\{(F_{i_{1},\ldots,i_{r}})\mid\text{ there are exactly two $i_{j}$s greater than zero}\},\] \[\mathcal{P} =\{(F_{i_{1},\ldots,i_{r}})\mid\text{ there are at least three $i_{j}$s greater than zero}\}.\]
Figure 1 shows the case for \(r=3\); here, the tangent space of \(\mathbb{X}_{\mathbf{n},\mathbf{1}}\) is shown in red, \(\mathcal{G}\) in green and \(\mathcal{P}\) in blue.
We can now prove a theorem on the structure of the Weingarten map of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\).
Figure 1. Illustration of the tangent and normal spaces of the (spherical) Segre manifold \(\mathbb{S}^{2}\otimes\mathbb{S}^{2}\otimes\mathbb{S}^{2}\) at \(\mathbf{E}=x_{0}\otimes x_{0}\otimes x_{0}\). We see a \(3\times 3\times 3\) tensor \(\mathbf{F}\) with \(3^{3}=27\) entries indexed by \((i,j,k)\) for \(0\leq i,j,k\leq 2\). The white entry has index \((0,0,0)\) and corresponds to the tensor \(\mathbf{E}\). The tangent and normal spaces of the (spherical) Segre manifold are orthogonal to \(\mathbf{E}\) and hence spanned by the entries in red, green and blue. The tangent space \(T_{\mathbf{E}}\mathbb{X}_{(2,2,2),(1,1,1)}\) corresponds to the \(2+2+2=6\) red entries, where exactly one index of \((i,j,k)\) is greater than \(0\). The green entries give the component \(\mathcal{G}\) from Example 3.7, where exactly two indices are greater than \(0\). The blue entries correspond to \(\mathcal{P}\), where all indices are greater than \(0\).
**Theorem 3.8** (Weingarten map of Segre-Veronese manifolds).: _Consider a normal vector \(\mathbf{F}\in N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}=\mathcal{P}\oplus \mathcal{W}\oplus\mathcal{G}\) and let \(L_{\mathbf{F}}\) be the Weingarten map of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) at \(\mathbf{E}\) and \(\mathbf{F}\). Then \(L_{\mathbf{F}}\) is represented in orthonormal coordinates by the matrix_
\[L_{\mathbf{F}}=\begin{bmatrix}L_{1}&L_{1,2}&\cdots&L_{1,r}\\ (L_{1,2})^{T}&L_{2}&\cdots&L_{2,r}\\ &&\ddots&\\ (L_{1,r})^{T}&(L_{2,r})^{T}&\cdots&L_{r}\end{bmatrix}\in\mathbb{R}^{n\times n},\quad n=\dim\mathbb{X}_{\mathbf{n},\mathbf{d}},\]
_defined as follows: let us write \(\mathbf{F}=\mathbf{P}+\mathbf{W}+\mathbf{G}\), where \(\mathbf{P}\in\mathcal{P}\), \(\mathbf{W}\in\mathcal{W}\) and \(\mathbf{G}\in\mathcal{G}\). Decompose further:_
\[\mathbf{W}=\sum_{1\leq i\leq r}\mathbf{W}_{i},\quad\mathbf{G}=\sum_{1\leq i< j\leq r}\mathbf{G}_{i,j},\]
_where \(\mathbf{W}_{i}=\mathbf{m}_{(d_{1},0,\ldots,0)}\otimes\cdots\otimes\mathbf{m}_{(d_{i-1},0,\ldots,0)}\otimes\mathbf{f}_{i}\otimes\mathbf{m}_{(d_{i+1},0,\ldots,0)}\otimes\cdots\otimes\mathbf{m}_{(d_{r},0,\ldots,0)}\in\mathcal{W}_{i}\) with_
\[\mathbf{f}_{i}=\sum_{1\leq k<\ell\leq n_{i}}f_{i,(k,\ell)}\,\mathbf{m}_{(d_{i}-2,0,\cdots,\underset{k-th}{1},\cdots,\underset{\ell-th}{1},\cdots,0)}+\sum_{1\leq k\leq n_{i}}f_{i,(k,k)}\,\mathbf{m}_{(d_{i}-2,0,\cdots,\underset{k-th}{2},\cdots,0)},\]
_and_
\[\mathbf{G}_{i,j}=\sum_{k=1}^{n_{i}}\sum_{\ell=1}^{n_{j}}g_{(i,j),(k,\ell)}\,\mathbf{m}_{(d_{1},0,\ldots,0)}\otimes\cdots\otimes\mathbf{m}_{(d_{i}-1,0,\ldots,\underset{k-th}{1},\ldots,0)}\otimes\cdots\otimes\mathbf{m}_{(d_{j}-1,0,\ldots,\underset{\ell-th}{1},\ldots,0)}\otimes\cdots\otimes\mathbf{m}_{(d_{r},0,\ldots,0)}.\]
_Then_
\[L_{i}=\sqrt{\frac{d_{i}-1}{d_{i}}}\begin{bmatrix}\sqrt{2}\cdot f_{i,(1,1)}&f_ {i,(1,2)}&\cdots&f_{i,(1,n_{i})}\\ f_{i,(1,2)}&\sqrt{2}\cdot f_{i,(2,2)}&\cdots&f_{i,(2,n_{i})}\\ \vdots&\vdots&\ddots&\vdots\\ f_{i,(1,n_{i})}&f_{i,(2,n_{i})}&\cdots&\sqrt{2}\cdot f_{i,(n_{i},n_{i})}\end{bmatrix} \in\mathbb{R}^{n_{i}\times n_{i}}\]
_and_
\[L_{i,j}=\begin{bmatrix}g_{(i,j),(1,1)}&g_{(i,j),(1,2)}&\cdots&g_{(i,j),(1,n_{ j})}\\ g_{(i,j),(2,1)}&g_{(i,j),(2,2)}&\cdots&g_{(i,j),(2,n_{j})}\\ \vdots&\vdots&\ddots&\vdots\\ g_{(i,j),(n_{i},1)}&g_{(i,j),(n_{i},2)}&\cdots&g_{(i,j),(n_{i},n_{j})}\end{bmatrix} \in\mathbb{R}^{n_{i}\times n_{j}}.\]
Proof.: Proposition 2.2 implies the block structure of \(L_{\mathbf{F}}\). The structure of the diagonal blocks is given by Theorem 3.3. The structure of the off-diagonal blocks comes from the fact that \(\mathbf{m}_{(d_{i}-1,0,\cdots,\underset{k-th}{1},\cdots,0)}\) for \(1\leq k\leq n_{i}\) are an orthonormal basis of the tangent space \(T_{x_{0}^{d_{i}}}\mathbb{V}_{n_{i},d_{i}}\) by Lemma 3.1.
An immediate corollary of Theorem 3.8 comes next.
**Corollary 3.9**.: _Let \(\mathbf{F}=(F_{\alpha_{1},\ldots,\alpha_{r}})\in N_{\mathbf{E}}\mathbb{X}_{\mathbf{ n,d}}\) and \(L_{\mathbf{F}}\) be the Weingarten map of \(\mathbb{X}_{\mathbf{n,d}}\) at \(\mathbf{E}\) in the normal direction \(\mathbf{F}\). If \(\mathbf{F}\) is Gaussian with respect to the Bombieri-Weyl norm, then_
\[L_{\mathbf{F}}\sim\begin{bmatrix}L_{1}&L_{1,2}&\cdots&L_{1,r}\\ (L_{1,2})^{T}&L_{2}&\cdots&L_{2,r}\\ &&\ddots&\\ (L_{1,r})^{T}&(L_{2,r})^{T}&\cdots&L_{r}\end{bmatrix},\]
_where_
\[L_{k}\sim\sqrt{\frac{2(d_{k}-1)}{d_{k}}}\,\mathrm{GOE}(n_{k})\quad\text{and} \quad L_{i,j}\sim\,N(0,I_{n_{i}}\otimes I_{n_{j}}),\]
_and all blocks \(L_{k},L_{i,j},1\leq k\leq r,1\leq i<j\leq r,\) are independent._
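A small simulation sketch of this block structure (illustrative code, not from the paper), using the GOE normalization fixed earlier: off-diagonal entries \(N(0,\tfrac{1}{2})\), diagonal entries \(N(0,1)\). For degree-one factors the scaling \(\sqrt{2(d_{k}-1)/d_{k}}\) vanishes, which is the situation of Example 3.10 below.

```python
import numpy as np

def sample_goe(n, rng):
    """GOE(n) in the convention used here: off-diagonal N(0, 1/2), diagonal N(0, 1)."""
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2.0   # diagonal keeps variance 1, off-diagonal gets variance 1/2

def sample_weingarten(ns, ds, rng):
    """Sample the block matrix of Corollary 3.9 for block sizes ns = (n_1,...,n_r) and degrees ds."""
    r, n = len(ns), sum(ns)
    offsets = np.cumsum([0] + list(ns))
    L = np.zeros((n, n))
    for k in range(r):
        sk = slice(offsets[k], offsets[k + 1])
        L[sk, sk] = np.sqrt(2 * (ds[k] - 1) / ds[k]) * sample_goe(ns[k], rng)
        for j in range(k + 1, r):
            sj = slice(offsets[j], offsets[j + 1])
            B = rng.standard_normal((ns[k], ns[j]))   # off-diagonal block: i.i.d. N(0, 1) entries
            L[sk, sj], L[sj, sk] = B, B.T
    return L

rng = np.random.default_rng(2)
L = sample_weingarten((2, 3), (2, 4), rng)
print(L.shape, bool(np.allclose(L, L.T)))
```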
**Example 3.10** (Weingarten map of the Segre manifold).: We consider again the case of the Segre manifold \(\mathbb{X}_{\mathbf{n,d}}=\mathbb{S}^{n_{1}}\otimes\cdots\otimes\mathbb{S}^{n _{r}}\). In this case, the diagonal blocks of \(L_{\mathbf{F}}\) are all zero and the off-diagonal blocks are independent standard normal matrices. For instance, when \(r=4\), \(n_{1}=n_{2}=2\) and \(n_{3}=n_{4}=1\) we have two \(2\times 2\) diagonal zero blocks and two \(1\times 1\) diagonal zero blocks:
\[L_{\mathbf{F}}=\begin{bmatrix}0&0&\mathbf{F}_{1100}&\mathbf{F}_{1200}&\mathbf{F}_{1010}&\mathbf{F}_{1001}\\ 0&0&\mathbf{F}_{2100}&\mathbf{F}_{2200}&\mathbf{F}_{2010}&\mathbf{F}_{2001}\\ \hline\mathbf{F}_{1100}&\mathbf{F}_{2100}&0&0&\mathbf{F}_{0110}&\mathbf{F}_{0101}\\ \mathbf{F}_{1200}&\mathbf{F}_{2200}&0&0&\mathbf{F}_{0210}&\mathbf{F}_{0201}\\ \hline\mathbf{F}_{1010}&\mathbf{F}_{2010}&\mathbf{F}_{0110}&\mathbf{F}_{0210}&0&\mathbf{F}_{0011}\\ \hline\mathbf{F}_{1001}&\mathbf{F}_{2001}&\mathbf{F}_{0101}&\mathbf{F}_{0201}&\mathbf{F}_{0011}&0\end{bmatrix},\]
where \(\mathbf{F}=\sum_{i=0}^{2}\sum_{j=0}^{2}\sum_{k=0}^{1}\sum_{\ell=0}^{1} \mathbf{F}_{ijk\ell}\,\,x_{i}\otimes x_{j}\otimes x_{k}\otimes x_{\ell}\). We will revisit this in Example 5.5 below.
Finally, we provide an explicit description of geodesics in \(\mathbb{X}_{\mathbf{n,d}}\).
**Lemma 3.11**.: _Let \(\gamma:(-\varepsilon,\varepsilon)\to\mathbb{X}_{\mathbf{n},\mathbf{d}}\) be a (locally defined) geodesic parametrized by arc length and passing through \(\mathbf{E}\). Up to the action of the group \(G\), the geodesic \(\gamma(t)\) has the form_
\[\gamma(t)=\gamma_{1}(t)\otimes\cdots\otimes\gamma_{r}(t),\]
_where_
\[\gamma_{i}(t)=\big{(}\cos(d_{i}^{-1/2}\,a_{i}\,t)\,x_{0}+\sin(d_{i}^{-1/2}\,a _{i}\,t)\,x_{1}\big{)}^{d_{i}}\]
_and \(a_{i},1\leq i\leq r\), are real numbers with \(a_{1}^{2}+\cdots+a_{r}^{2}=1\)._
Proof.: We first show that \(\gamma(t)\) as in the statement of the lemma is a geodesic. The first derivative of \(\gamma_{i}(t)\) at \(t=0\) is
\[\gamma_{i}^{\prime}(0)=a_{i}\,\sqrt{d_{i}}\,x_{0}^{d_{i}-1}x_{1}=a_{i}\, \mathbf{m}_{(d_{i}-1,1,0,\ldots,0)}. \tag{3.6}\]
The second derivative is
\[\begin{split}\gamma_{i}^{\prime\prime}(0)&=-a_{i}^{2}\,x_ {0}^{d_{i}}+a_{i}^{2}\,(d_{i}-1)\,x_{0}^{d_{i}-2}x_{1}^{2}\\ &=-a_{i}^{2}\,\mathbf{m}_{(d_{i},0,\ldots,0)}+a_{i}^{2}\,\sqrt{ \frac{2(d_{i}-1)}{d_{i}}}\,\mathbf{m}_{(d_{i}-2,2,0,\ldots,0)}.\end{split} \tag{3.7}\]
These formulas give the first and second derivatives of the factors of \(\gamma(t)\).
Next, we compute the derivative of \(\gamma(t)\). The first derivative is
\[\gamma^{\prime}(t)=\sum_{i=1}^{r}\gamma_{1}(t)\otimes\cdots\otimes\gamma_{i-1}(t) \otimes\gamma^{\prime}_{i}(t)\otimes\gamma_{i+1}(t)\otimes\cdots\otimes\gamma_{ r}(t).\]
The second derivative is
\[\gamma^{\prime\prime}(t)= \sum_{i=1}^{r}\gamma_{1}(t)\otimes\cdots\otimes\gamma_{i-1}(t) \otimes\gamma^{\prime\prime}_{i}(t)\otimes\gamma_{i+1}(t)\otimes\cdots\otimes \gamma_{r}(t)\] \[+2\sum_{1\leq i<j\leq r}\gamma_{1}(t)\otimes\cdots\otimes\gamma^{ \prime}_{i}(t)\otimes\cdots\otimes\gamma^{\prime}_{j}(t)\otimes\cdots\otimes \gamma_{r}(t).\]
Recall from (3.5) the definition of the spaces \(\mathcal{W}\) and \(\mathcal{G}\) and from Lemma 3.6 that both spaces are contained in the normal space of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) at \(\mathbf{E}\). If we plug in the formulas (3.6) for the first derivative of \(\gamma_{i}\) and (3.7) for the second derivative, then we obtain the following expression:
\[\gamma^{\prime\prime}(0)=-(a_{1}^{2}+\cdots+a_{r}^{2})\cdot\mathbf{E}+ \mathbf{W}+\mathbf{G}, \tag{3.8}\]
where \(\mathbf{W}\in\mathcal{W}\) and \(\mathbf{G}\in\mathcal{G}\) are given by
\[\mathbf{W} =\sum_{i=1}^{r}a_{i}^{2}\sqrt{\tfrac{2(d_{i}-1)}{d_{i}}}\, \mathbf{m}_{(d_{1},0,\ldots,0)}\otimes\cdots\otimes\mathbf{m}_{(d_{i}-2,2,0, \ldots,0)}\otimes\cdots\otimes\mathbf{m}_{(d_{r},0,\ldots,0)},\] \[\mathbf{G} =2\sum_{1\leq i<j\leq r}a_{i}a_{j}\,\mathbf{m}_{(d_{1},0,\ldots,0 )}\otimes\cdots\otimes\mathbf{m}_{(d_{i}-1,1,0,\ldots,0)}\otimes\] \[\cdots\otimes\mathbf{m}_{(d_{j}-1,1,0,\ldots,0)}\otimes\cdots \otimes\mathbf{m}_{(d_{r},0,\cdots,0)}. \tag{3.9}\]
For every \(t_{0}\in(-\varepsilon,\varepsilon)\) there exists \(U_{0}\in G\) such that \(U_{0}\mathbf{E}=U_{0}\gamma(0)=\gamma(t_{0})\) and such that \(U_{0}\) maps the curve \(t\mapsto\gamma(t)\) to the shifted curve \(t\mapsto\gamma(t+t_{0})\) (rotate by the angle \(d_{i}^{-1/2}a_{i}t_{0}\) in the \((x_{0},x_{1})\)-plane of the \(i\)-th factor); in particular, \(N_{\gamma(t_{0})}\mathbb{X}_{\mathbf{n},\mathbf{d}}=U_{0}N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\). Therefore, the second derivative of \(\gamma(t)\) at \(t=t_{0}\) satisfies
\[\gamma^{\prime\prime}(t_{0})\in U_{0}\left(\mathbb{R}\cdot\mathbf{E}\oplus N_ {\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\right)=\mathbb{R}\cdot\gamma(t_ {0})\oplus N_{\gamma(t_{0})}\mathbb{X}_{\mathbf{n},\mathbf{d}};\]
i.e., \(\gamma^{\prime\prime}(t_{0})\) has no tangential component; hence \(\gamma(t)\) is a geodesic.
From Proposition 2.1 (3) it follows that \(\|\gamma^{\prime}(t)\|^{2}=\|\gamma^{\prime}_{1}(t)\|^{2}+\cdots+\|\gamma^{ \prime}_{r}(t)\|^{2}\). Since \(\|\gamma^{\prime}_{i}(0)\|^{2}=a_{i}^{2}\) by (3.6), the geodesic \(\gamma(t)\) is parametrized by arc length if and only if
\[a_{1}^{2}+\cdots+a_{r}^{2}=1.\]
The statement of the lemma then follows from the local uniqueness of geodesics (see, e.g., [18, Theorem 4.27]) and the fact that we can find a suitable rotation in \(G\) such that the tangent vector \(\gamma_{i}^{\prime}(0)\) points in the direction of \(\sqrt{d_{i}}x_{0}^{d_{i}-1}x_{1}\).
## 4. Reach of the Segre-Veronese manifold
We compute the reach \(\tau(\mathbb{X}_{\mathbf{n},\mathbf{d}})\) of the Segre-Veronese manifold. We adapt the strategy from [13] and calculate the reach as the minimum of two quantities:
\[\tau(\mathbb{X}_{\mathbf{n},\mathbf{d}})=\min\{\rho_{1},\rho_{2}\},\]
where \(\rho_{1}\) is the inverse of the maximal curvature of a curve in \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) that is parametrized by arc length:
\[\frac{1}{\rho_{1}}=\sup\left\{\|P_{\mathbf{E}}(\gamma^{\prime\prime}(0))\|\mid \gamma\text{ is a geodesic in }\mathbb{X}_{\mathbf{n},\mathbf{d}}\text{ parametrized by arc length}\right\};\]
here \(P_{\mathbf{E}}\) denotes the orthogonal projection onto \(T_{\mathbf{E}}\mathbb{S}(\mathcal{H}_{\mathbf{n},\mathbf{d}})\). The second quantity, \(\rho_{2}\), is the width of the smallest bottleneck:
\[\rho_{2}=\min\left\{\tfrac{1}{2}\,d_{\mathbb{S}}(\mathbf{F},\mathbf{E})\,\left| \,\,\mathbf{F}\in\mathbb{X}_{\mathbf{n},\mathbf{d}},\,\,\mathbf{F}\neq\mathbf{ E}\,\,\text{ and }\,\,\mathbf{F}-\mathbf{E}\in\mathbb{R}\cdot\mathbf{E}\oplus N_{\mathbf{E}} \mathbb{X}_{\mathbf{n},\mathbf{d}}\right.\right\}.\]
The goal of this section is to prove the following proposition, giving formulas for both \(\rho_{1}\) and \(\rho_{2}\).
**Proposition 4.1**.: _Let \(\mathbf{d}=(d_{1},\ldots,d_{r})\) and \(\mathbf{n}=(n_{1},\ldots,n_{r})\) be \(r\)-tuples of positive integers, and let \(d:=d_{1}+\cdots+d_{r}\geq 2\). For the (spherical) Segre-Veronese manifold \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) of total degree \(d\), we have_
1. \(\rho_{1}=\sqrt{\frac{d}{2(d-1)}}\)_._
2. \(\rho_{2}=\frac{\pi}{4}\)_._
We prove Proposition 4.1 (1) in Section 4.1 and (2) in Section 4.2. Because the reach is the minimum of \(\rho_{1}\) and \(\rho_{2}\), this proves Theorem 1.1.
### Extremal curvature of the Segre-Veronese manifold
Let \(\gamma(t)\) be a geodesic in \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) parametrized by arc length. By orthogonal invariance (Lemma 3.5) we can assume that \(\gamma(0)=\mathbf{E}\). As shown in Lemma 3.11, geodesics in \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) parametrized by arc length that pass through \(\mathbf{E}\) can, without loss of generality, be written as
\[\gamma(t)=\gamma_{1}(t)\otimes\cdots\otimes\gamma_{r}(t)\]
where \(\gamma_{i}(t)=(\cos(d_{i}^{-1/2}\,a_{i}\,t)\,x_{0}+\sin(d_{i}^{-1/2}\,a_{i}\,t )\,x_{1})^{d_{i}}\) and \(a_{i},1\leq i\leq r\), are real numbers with \(a_{1}^{2}+\cdots+a_{r}^{2}=1\). By (3.8), we have \(P_{\mathbf{E}}(\gamma^{\prime\prime}(0))=\mathbf{W}+\mathbf{G}\in\mathcal{W} \oplus\mathcal{G}\), where the latter are defined in (3.9). As \(\mathcal{W}\perp\mathcal{G}\) and the \(\mathbf{m}_{\alpha}\) form orthonormal bases, the magnitude of \(P_{\mathbf{E}}(\gamma^{\prime\prime}(0))\) is
\[\|P_{\mathbf{E}}(\gamma^{\prime\prime}(0))\|^{2} =\|\mathbf{W}\|^{2}+\|\mathbf{G}\|^{2}\] \[=\sum_{i=1}^{r}a_{i}^{4}\cdot\frac{2(d_{i}-1)}{d_{i}}+4\sum_{1\leq i <j\leq r}a_{i}^{2}a_{j}^{2}\] \[=\sum_{i=1}^{r}a_{i}^{4}\cdot\frac{2(d_{i}-1)}{d_{i}}+2\sum_{i=1} ^{r}a_{i}^{2}\sum_{j\neq i}a_{j}^{2}\] \[=\sum_{i=1}^{r}\left(a_{i}^{4}\cdot\frac{2(d_{i}-1)}{d_{i}}+2a_{i }^{2}(1-a_{i}^{2})\right),\qquad\text{(because }\sum_{i=1}^{r}a_{i}^{2}=1)\] \[=2\sum_{i=1}^{r}\left(a_{i}^{2}-\frac{a_{i}^{4}}{d_{i}}\right).\]
To maximize this expression under the constraint \(\sum_{i=1}^{r}a_{i}^{2}=1\) we consider the Lagrange function
\[\mathcal{L}(a_{1},\ldots,a_{r},\lambda):=\sum_{i=1}^{r}\left(a_{i}^{2}-\frac{a_{ i}^{4}}{d_{i}}\right)-\lambda\left(1-\sum_{i=1}^{r}a_{i}^{2}\right).\]
Setting the derivatives of \(\mathcal{L}\) to zero, we have
\[0=\frac{\partial\mathcal{L}}{\partial a_{i}}=2a_{i}-\frac{4}{d_{i}}a_{i}^{3}+2 \lambda a_{i}\quad\Longrightarrow\quad a_{i}=\sqrt{\frac{d_{i}(1+\lambda)}{2} }\quad\text{ or }\quad a_{i}=0.\]
Let us first consider the case when the \(a_{i}\) are not equal to zero. In this case, the equation \(\sum_{i=1}^{r}a_{i}^{2}=1\) implies
\[1=\sum_{i=1}^{r}a_{i}^{2}=\sum_{i=1}^{r}\frac{d_{i}(1+\lambda)}{2}=\frac{d(1+ \lambda)}{2},\]
where \(d=d_{1}+\cdots+d_{r}\) is the total degree. This shows \(\lambda=\frac{2}{d}-1\), so that
\[a_{i}=\sqrt{\frac{d_{i}}{d}}.\]
Thus, in this case
\[\|P_{\mathbf{E}}(\gamma^{\prime\prime}(0))\|=\sqrt{2\sum_{i=1}^{r}\left(\frac{d_{i}}{d}-\frac{d_{i}}{d^{2}}\right)}=\sqrt{\frac{2(d-1)}{d}}.\]
For the other critical values of \((a_{1},\ldots,a_{r})\) we get \(\sqrt{\frac{2(d^{\prime}-1)}{d^{\prime}}}\), where \(d^{\prime}=\sum_{i\in I}d_{i}\) is the total degree of a subset \(I\subset\{1,\ldots,r\}\) of factors. Since \(x\mapsto\sqrt{\frac{2(x-1)}{x}}\) is an increasing function for \(x\geq 1\), this shows that \(\sqrt{\frac{2(d-1)}{d}}\) is indeed the maximal curvature. It also shows that \(\sqrt{\frac{2(d_{\ell}-1)}{d_{\ell}}}\) is the minimal curvature, where \(d_{\ell}=\min\{d_{1},\ldots,d_{r}\}\).
Proof of Theorem 1.2.: We have shown above that \(\sqrt{\frac{2(d-1)}{d}}\) is the maximal curvature, and that \(\sqrt{\frac{2(d_{\ell}-1)}{d_{\ell}}}\), where \(d_{\ell}=\min\{d_{1},\ldots,d_{r}\}\), is the minimal curvature. The geodesics that attain these curvatures are given by the critical values \(a_{i}\), and, as shown by [10], \(\gamma_{i}(t)=\big{(}\cos(d_{i}^{-1/2}\,a_{i}\,t)\,x_{0}+\sin(d_{i}^{-1/2}\,a _{i}\,t)\,x_{1}\big{)}^{d_{i}}\) is a geodesic in \(\mathbb{V}_{n_{i},d_{i}}\).
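The computation above can be checked numerically. The sketch below (illustrative, not from the paper) restricts to factors with \(n_{i}=1\), builds the curve of Lemma 3.11 with the critical speeds \(a_{i}=\sqrt{d_{i}/d}\), approximates \(\gamma''(0)\) by finite differences, and compares \(\|P_{\mathbf{E}}(\gamma''(0))\|\) with \(\sqrt{2(d-1)/d}\).

```python
import numpy as np
from math import comb

def veronese_factor(d_i, theta):
    """Coefficient vector of (cos(theta) x0 + sin(theta) x1)^d_i in the basis m_{(d_i - k, k)}."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([np.sqrt(comb(d_i, k)) * c**(d_i - k) * s**k for k in range(d_i + 1)])

def gamma(t, ds, a):
    """The curve of Lemma 3.11 (all n_i = 1), as a vector in the tensor product of the factors."""
    v = np.array([1.0])
    for d_i, a_i in zip(ds, a):
        v = np.kron(v, veronese_factor(d_i, a_i * t / np.sqrt(d_i)))
    return v

ds = (2, 3, 1)                           # degrees d_1, d_2, d_3; total degree d = 6
d = sum(ds)
a = np.sqrt(np.array(ds) / d)            # critical speeds a_i = sqrt(d_i / d) from Theorem 1.2
h = 1e-4
E = gamma(0.0, ds, a)
second = (gamma(h, ds, a) - 2 * E + gamma(-h, ds, a)) / h**2   # finite-difference gamma''(0)
proj = second - (second @ E) * E                               # remove the radial component
print(np.linalg.norm(proj), np.sqrt(2 * (d - 1) / d))          # both approximately 1.2910
```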
### Bottlenecks of the Segre-Veronese manifold
We compute \(\rho_{2}\), the width of the smallest bottleneck of the Segre-Veronese manifold.
Recall that \(\rho_{2}\) is the minimum over the distances \(\frac{1}{2}\,d_{\mathbb{S}}(\mathbf{F},\mathbf{E})\) where \(\mathbf{F}\in\mathbb{X}_{\mathbf{n},\mathbf{d}}\) with \(\mathbf{F}\neq\mathbf{E}\) and \(\mathbf{F}-\mathbf{E}\in\mathbb{R}\cdot\mathbf{E}\oplus N_{\mathbf{E}}\mathbb{ X}_{\mathbf{n},\mathbf{d}}\). The latter is equivalent to
\[\langle\mathbf{F}-\mathbf{E},\mathbf{G}\rangle=0\quad\text{for all}\quad \mathbf{G}\in T_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}.\]
We have \(T_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}=T_{x_{0}^{d_{1}}}\mathbb{V}_{n_{1},d_{1}}\otimes x_{0}^{d_{2}}\otimes\cdots\otimes x_{0}^{d_{r}}+\cdots+x_{0}^{d_{1}}\otimes x_{0}^{d_{2}}\otimes\cdots\otimes T_{x_{0}^{d_{r}}}\mathbb{V}_{n_{r},d_{r}}\) by Proposition 2.1 (2). We determine when \(\mathbf{F}-\mathbf{E}\) is orthogonal to each summand in this decomposition: let us write
\[\mathbf{F}=\ell_{1}^{d_{1}}\otimes\cdots\otimes\ell_{r}^{d_{r}}\]
where the \(\ell_{i}\) are linear forms and consider the inner product of \(\mathbf{F}-\mathbf{E}\) with elements from the first summand in the decomposition of \(T_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\) above. By Lemma 3.1 the monomials \(x_{0}^{d_{1}-1}x_{k}\), for \(1\leq k\leq n_{1}\), span the tangent space \(T_{x_{0}^{d_{1}}}\mathbb{V}_{n_{1},d_{1}}\). For \(\mathbf{G}=(x_{0}^{d_{1}-1}x_{k})\otimes x_{0}^{d_{2}}\otimes\cdots\otimes x_{0}^{d_{r}}\in T_{x_{0}^{d_{1}}}\mathbb{V}_{n_{1},d_{1}}\otimes x_{0}^{d_{2}}\otimes\cdots\otimes x_{0}^{d_{r}}\) we have that
\[\left\langle\mathbf{F}-\mathbf{E},\ \mathbf{G}\right\rangle=\left\langle\mathbf{F},\ (x_{0}^{d_{1}-1}x_{k})\otimes x_{0}^{d_{2}}\otimes\cdots\otimes x_{0}^{d_{r}}\right\rangle\overset{\text{by (3.3)}}{=}\left\langle\ell_{1}^{d_{1}},x_{0}^{d_{1}-1}x_{k}\right\rangle\,\prod_{i=2}^{r}\langle\ell_{i}^{d_{i}},x_{0}^{d_{i}}\rangle\overset{\text{by (3.1)}}{=}\left\langle\ell_{1},x_{0}\right\rangle^{d_{1}-1}\left\langle\ell_{1},x_{k}\right\rangle\,\prod_{i=2}^{r}\langle\ell_{i},x_{0}\rangle^{d_{i}}.\]
This inner product is zero for every \(1\leq k\leq n_{1}\) if either \(\ell_{1}=x_{0}\) or \(\langle\ell_{1},x_{0}\rangle=0\). We proceed similarly for the other summands in the decomposition of \(T_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\).
Ultimately, we find that \(\left\langle\mathbf{F}-\mathbf{E},\mathbf{G}\right\rangle=0\) for all \(\mathbf{G}\in T_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\) if and only if either \(\ell_{1}=\cdots=\ell_{r}=x_{0}\) or there is at least one \(\ell_{i}\) with \(\langle\ell_{i},x_{0}\rangle=0\), in which case \(\left\langle\mathbf{F},\mathbf{E}\right\rangle=0\) by (3.3). Since \(\mathbf{F}\neq\mathbf{E}\), it must be that the latter holds.
Therefore, the bottlenecks of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) all have width \(\arccos 0=\frac{\pi}{2}\), so \(\rho_{2}=\frac{1}{2}\cdot\frac{\pi}{2}=\frac{\pi}{4}\).
## 5. Volume of the tubular neighborhood
Recall from Theorem 1.1 that the reach of the (spherical) Segre-Veronese manifold is \(\tau(\mathbb{X}_{\mathbf{n},\mathbf{d}})=\frac{\pi}{4}\), if \(d\leq 5\), and \(\tau(\mathbb{X}_{\mathbf{n},\mathbf{d}})=\sqrt{\frac{d}{2(d-1)}}\), if \(d>5\). In this section we prove Theorem 1.3 by computing the volume of the tubular neighborhood for \(\varepsilon<\tau(\mathbb{X}_{\mathbf{n},\mathbf{d}})\)
\[U(\varepsilon):=\left\{\mathbf{F}\in\mathbb{S}(\mathcal{H}_{\mathbf{n},\mathbf{ d}})\ \left|\ \ d_{\mathbb{S}}(\mathbf{F},\mathbb{X}_{\mathbf{n},\mathbf{d}})<\varepsilon \right.\right\}.\]
The proof will be completed in Section 5.1 below.
For the computation we use _Weyl's tube formula_[20]. We denote
\[n=\dim\mathbb{X}_{\mathbf{n},\mathbf{d}}=n_{1}+\cdots+n_{r},\quad N=\dim( \mathbb{S}(\mathcal{H}_{\mathbf{n},\mathbf{d}})),\]
and
\[J_{i}(\varepsilon)=\int_{0}^{\varepsilon}(\sin\phi)^{N-n+2i-1}\cdot(\cos\phi)^ {n-2i}\;\mathrm{d}\phi=\int_{t=0}^{\tan\varepsilon}\frac{t^{N-n+2i-1}}{(1+t^{2 })^{\frac{N+1}{2}}}\;\mathrm{d}t.\]
Then Weyl's tube formula implies that the volume of \(U(\varepsilon)\) is given as the following linear combination of the functions \(J_{i}\):
\[\operatorname{vol}(U(\varepsilon))=\sum_{0\leq 2i\leq n}\kappa_{i}\cdot J_{i}( \varepsilon), \tag{5.1}\]
with coefficients
\[\kappa_{i}=\int_{\mathbf{G}\in\mathbb{X}_{\mathbf{n},\mathbf{d}}}\left(\int_{ \mathbf{F}\in N_{\mathbf{G}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\ :\ \|\mathbf{F}\|=1}m_{2i}(L_{\mathbf{F}})\;\mathrm{d}\mathbf{F}\right)\; \mathrm{d}\mathbf{G},\]
where \(m_{2i}(L_{\mathbf{F}})\) denotes the sum of the principal \(2i\times 2i\) minors of the Weingarten map \(L_{\mathbf{F}}\) in the normal direction \(\mathbf{F}\). The coefficients \(\kappa_{i}\) are called _curvature coefficients_; they are isometric invariants of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\).
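The integrals \(J_{i}\) are easy to evaluate numerically; the following sketch (illustrative only) checks that the trigonometric form and the rational form obtained from the substitution \(t=\tan\phi\) in the definition of \(J_{i}\) agree.

```python
import numpy as np

def integrate(f, a, b, m=200_001):
    """Composite trapezoidal rule on [a, b]."""
    x = np.linspace(a, b, m)
    y = f(x)
    return float((x[1] - x[0]) * (np.sum(y) - 0.5 * (y[0] + y[-1])))

def J_trig(i, eps, N, n):
    return integrate(lambda p: np.sin(p)**(N - n + 2*i - 1) * np.cos(p)**(n - 2*i), 0.0, eps)

def J_rational(i, eps, N, n):
    return integrate(lambda t: t**(N - n + 2*i - 1) / (1 + t**2)**((N + 1) / 2), 0.0, np.tan(eps))

N, n, eps = 11, 4, 0.5
for i in range(n // 2 + 1):
    print(i, J_trig(i, eps, N, n), J_rational(i, eps, N, n))   # the two columns agree
```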
It follows from Lemma 3.5 that the integral in the formula for \(\kappa_{i}\) is independent of \(\mathbf{G}\), so that
\[\kappa_{i}=\operatorname{vol}(\mathbb{X}_{\mathbf{n},\mathbf{d}})\;\int_{ \mathbf{F}\in N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\ :\ \|\mathbf{F}\|=1}m_{2i}(L_{\mathbf{F}})\;\mathrm{d}\mathbf{F}, \tag{5.2}\]
where now the inner integral is over the sphere in the normal space of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) at the point \(\mathbf{E}=x_{0}^{d_{1}}\otimes\cdots\otimes x_{0}^{d_{r}}\). The volume of the (spherical) Segre-Veronese manifold is computed next.
**Lemma 5.1**.: \(\operatorname{vol}(\mathbb{X}_{\mathbf{n},\mathbf{d}})=\frac{\sqrt{d_{1}^{n_{1}}\cdots d_{r}^{n_{r}}}}{2^{r-1}}\cdot\operatorname{vol}(\mathbb{S}^{n_{1}})\cdots\operatorname{vol}(\mathbb{S}^{n_{r}})\)_._
Proof.: We consider the map \(\psi\) from Proposition 2.1 in the case of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\). If there is at least one odd \(d_{i}\), the map \(\psi\) is \(2^{r-1}:1\). Proposition 2.1 (3) therefore implies
\[\operatorname{vol}(\mathbb{X}_{\mathbf{n},\mathbf{d}})\overset{\text{by (2.1)}}{=}\operatorname{vol}(\mathbb{V}_{n_{1},d_{1}}\otimes\cdots\otimes\mathbb{V}_{n_{r},d_{r}})=\frac{1}{2^{r-1}}\operatorname{vol}(\mathbb{V}_{n_{1},d_{1}})\cdots\operatorname{vol}(\mathbb{V}_{n_{r},d_{r}}).\]
On the other hand, if all \(d_{i}\) are even, \(\psi\) is \(2^{r}:1\) and we have
\[\operatorname{vol}(\mathbb{X}_{\mathbf{n},\mathbf{d}})\overset{\text{by (2.1)}}{=}2\cdot\operatorname{vol}(\mathbb{V}_{n_{1},d_{1}}\otimes\cdots\otimes\mathbb{V}_{n_{r},d_{r}})=\frac{2}{2^{r}}\operatorname{vol}(\mathbb{V}_{n_{1},d_{1}})\cdots\operatorname{vol}(\mathbb{V}_{n_{r},d_{r}})=\frac{1}{2^{r-1}}\operatorname{vol}(\mathbb{V}_{n_{1},d_{1}})\cdots\operatorname{vol}(\mathbb{V}_{n_{r},d_{r}}).\]
Finally, \(\operatorname{vol}(\mathbb{V}_{n,d})=\sqrt{d^{n}}\cdot\operatorname{vol}( \mathbb{S}^{n})\) (see, e.g., [1]).
**Remark 5.2**.: The volume of the \(k\)-sphere is
\[\operatorname{vol}(\mathbb{S}^{k})=\frac{2\pi^{\frac{k+1}{2}}}{\Gamma(\frac{k+ 1}{2})}.\]
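For later use, here is a small helper (an illustration, not from the paper) that evaluates \(\operatorname{vol}(\mathbb{S}^{k})\) and the volume formula of Lemma 5.1; note that the scaling factor \(\sqrt{d_{1}^{n_{1}}\cdots d_{r}^{n_{r}}}\) is taken as in the proof of the lemma.

```python
import numpy as np
from math import gamma, prod

def vol_sphere(k):
    """Volume (surface measure) of the unit k-sphere."""
    return 2 * np.pi**((k + 1) / 2) / gamma((k + 1) / 2)

def vol_segre_veronese(ns, ds):
    """vol(X_{n,d}) as in Lemma 5.1 (scaling factor sqrt(d_1^{n_1} ... d_r^{n_r}) assumed)."""
    r = len(ns)
    scale = np.sqrt(prod(d**n for d, n in zip(ds, ns)))
    return scale / 2**(r - 1) * prod(vol_sphere(n) for n in ns)

print(vol_sphere(1), 2 * np.pi)                                   # vol(S^1) = 2*pi
print(vol_segre_veronese((1, 1, 1), (1, 1, 1)), 2 * np.pi**3)     # Segre S^1 x S^1 x S^1, cf. Example 5.6
```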
The main task in computing the volume of \(U(\varepsilon)\) therefore is integrating the principal minors of the Weingarten map \(L_{\mathbf{F}}\) over the normal space. For this we pass from the uniform distribution on the sphere to the Gaussian distribution. Since \(L_{\lambda\cdot\mathbf{F}}=\lambda\cdot L_{\mathbf{F}}\) for \(\mathbf{F}\in N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\) and \(\lambda\in\mathbb{R}\), we have
\[m_{2i}(L_{\lambda\cdot\mathbf{F}})=\lambda^{2i}\cdot m_{2i}(L_{\mathbf{F}}). \tag{5.3}\]
Suppose that \(\mathbf{F}\) is a Gaussian vector in the normal space, that is, a random tensor in \(N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\) with probability density \((2\pi)^{-\frac{c}{2}}\exp(-\frac{1}{2}\|\mathbf{F}\|^{2})\), where \(c=\dim N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\). Then the two random variables \(\|\mathbf{F}\|\) and \(\mathbf{F}/\|\mathbf{F}\|\) are independent. We define the scalars
\[\lambda_{i}:=\underset{\mathbf{F}\in N_{\mathbf{E}}\mathbb{X}_{\mathbf{n}, \mathbf{d}}\ \text{Gaussian}}{\mathbb{E}}\ \|\mathbf{F}\|^{2i}.\]
Using (5.3) we can then pass between the uniform distribution on the sphere and the Gaussian distribution as follows:
\[\underset{\mathbf{F}\in N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\text{ Gaussian}}{\mathbb{E}}\;m_{2i}(L_{\mathbf{F}})=\lambda_{i}\cdot\underset{\mathbf{F}\in N_{ \mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\text{ uniform in the sphere}}{\mathbb{E}}\;m_{2i}(L_{ \mathbf{F}}).\]
Since \(\|\mathbf{F}\|^{2}\) has a \(\chi_{c}^{2}\)-distribution with \(c=\dim N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\) degrees of freedom, \(\lambda_{i}\) is the \(i\)-th moment of \(\chi_{c}^{2}\); i.e.,
\[\lambda_{i}=2^{i}\,\frac{\Gamma(i+\frac{c}{2})}{\Gamma(\frac{c}{2})}.\]
We have thus proved the following reformulation of (5.2).
**Lemma 5.3**.: _Let \(c=\dim N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\). Then \(\kappa_{i}=\operatorname{vol}(\mathbb{X}_{\mathbf{n},\mathbf{d}})\cdot \operatorname{vol}(\mathbb{S}^{c-1})\cdot\,\theta_{i},\) where_
\[\theta_{i}:=\frac{\Gamma(\frac{c}{2})}{2^{i}\,\Gamma(i+\frac{c}{2})}\cdot \underset{\mathbf{F}\in N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\text{ Gaussian}}{\mathbb{E}}\;m_{2i}(L_{\mathbf{F}}).\]
For computing the expectation of \(m_{2i}(L_{\mathbf{F}})\) we can rely on Corollary 3.9. Recall that this corollary implies that if \(\mathbf{F}\) is Gaussian, \(L_{\mathbf{F}}\) is a random symmetric matrix with independent blocks
\[L_{\mathbf{F}}\sim\begin{bmatrix}L_{1}&\cdots&L_{1,r}\\ &\ddots&\\ (L_{1,r})^{T}&\cdots&L_{r}\end{bmatrix},\qquad L_{k}\sim\sqrt{\tfrac{d_{k}(d_{k}-1)}{2}}\,\text{GOE}(n_{k}),\quad L_{i,j}\sim N(0,I_{n_{i}}\otimes I_{n_{j}}). \tag{5.4}\]
In general it is difficult to evaluate the expected value of the minors of this random matrix. We make an attempt using graph theory in the next subsection.
### Perfect matchings in graphs and random determinants
In this section we give a formula for \(\mathbb{E}\,m_{2i}(L_{\mathbf{F}})\) when \(\mathbf{F}\) is Gaussian using concepts from graph theory. In the following, the degrees \(\mathbf{d}=(d_{1},\ldots,d_{r})\) are fixed. Define the following random symmetric matrix with independent blocks:
\[L_{\mathbf{d}}(v_{1},\ldots,v_{r}):=\begin{bmatrix}L_{1}&\cdots&L_{1,r}\\ &\ddots&\\ (L_{1,r})^{T}&\cdots&L_{r}\end{bmatrix},\qquad L_{k}\sim\sqrt{\tfrac{d_{k}(d_{k}-1)}{2}}\,\text{GOE}(v_{k}),\quad L_{i,j}\sim N(0,I_{v_{i}}\otimes I_{v_{j}}).\]
This differs from (5.4) in that we allow the sizes of the blocks to be arbitrary, not necessarily given by the dimension \(n_{i}=\dim\mathbb{V}_{n_{i},d_{i}}\). We can write the expected principal minors of \(L_{\mathbf{F}}\) as
\[\underset{\mathbf{F}\in N_{\mathbf{E}}\mathbb{X}_{\mathbf{n},\mathbf{d}}\text{ Gaussian}}{\mathbb{E}}\;m_{2i}(L_{\mathbf{F}})=\underset{\begin{subarray}{c}v_{1}, \ldots,v_{r}\in\mathbb{N}:\text{ }v_{i}\leq n_{i}\\ v_{1}+\cdots+v_{r}=2i\end{subarray}}{\sum}\mathbb{E}\det L_{\mathbf{d}}(v_{1},\ldots,v_{r}).\]
Recall the definition of \(D_{\mathbf{d}}(v_{1},\ldots,v_{r})\) from (1.2): for a tuple \((v_{1},\ldots,v_{r})\) of nonnegative integers let \(G=(V,E)\) be the complete graph on \(v:=v_{1}+\cdots+v_{r}\) vertices where the vertices are partitioned into \(r\) groups \(V=\mathcal{I}_{1}\sqcup\cdots\sqcup\mathcal{I}_{r}\) of cardinalities \(|\mathcal{I}_{k}|=v_{k}\). The weight \(w(e)\) of an edge between vertices in group \(\mathcal{I}_{k}\) is \(w(e)=d_{k}(d_{k}-1)\). The
weight of an edge across groups is \(1\). Given a perfect matching \(C\subset E\) its weight is \(w(C):=\prod_{e\in C}w(e).\) Then,
\[D_{\mathbf{d}}(v_{1},\ldots,v_{r})=(-1)^{\frac{v}{2}}\sum_{C\subset E\text{ perfect matching}}w(C).\]
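The function \(D_{\mathbf{d}}\) can be evaluated by brute-force enumeration of perfect matchings. The sketch below (illustrative only) uses the weights exactly as defined above, namely \(d_{k}(d_{k}-1)\) within group \(\mathcal{I}_{k}\) and \(1\) across groups; the value printed at the end is the Segre example worked out in Example 5.5 below.

```python
def perfect_matchings(vertices):
    """Enumerate all perfect matchings of a list of vertices (even length)."""
    if not vertices:
        yield []
        return
    first, rest = vertices[0], vertices[1:]
    for j, other in enumerate(rest):
        for matching in perfect_matchings(rest[:j] + rest[j + 1:]):
            yield [(first, other)] + matching

def D(ds, vs):
    """D_d(v_1, ..., v_r): signed sum of weights of perfect matchings of the complete graph."""
    groups = [k for k, v_k in enumerate(vs) for _ in range(v_k)]   # group label of each vertex
    v = len(groups)
    if v % 2 == 1:
        return 0
    total = 0
    for matching in perfect_matchings(list(range(v))):
        w = 1
        for (a, b) in matching:
            w *= ds[groups[a]] * (ds[groups[a]] - 1) if groups[a] == groups[b] else 1
        total += w
    return (-1) ** (v // 2) * total

print(D((1, 1, 1, 1), (2, 2, 1, 1)))   # -10, the Segre example of Example 5.5
```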
The main goal of this section is to prove the following characterization of the function \(D_{\mathbf{d}}\). In combination with (5.1), Lemma 5.1 and Lemma 5.3 the next proposition completes the proof of Theorem 1.3.
**Proposition 5.4**.: _Let \((v_{1},\ldots,v_{r})\) be nonnegative integers. Then_
\[D_{\mathbf{d}}(v_{1},\ldots,v_{r})=\mathbb{E}\det(L_{\mathbf{d}}(v_{1},\ldots, v_{r})).\]
**Example 5.5**.: Recall from Example 3.10 that the random matrix for the Segre manifold \(\mathbb{X}_{\mathbf{n},\mathbf{d}}=\mathbb{S}^{2}\otimes\mathbb{S}^{2}\otimes \mathbb{S}^{1}\otimes\mathbb{S}^{1}\) (i.e., \(n_{1}=n_{2}=2\) and \(n_{3}=n_{4}=1\)) is
\[L_{\mathbf{1}}(2,2,1,1)=\left[\begin{array}{cc|cc|c|c}0&0&\mathbf{F}_{1100}&\mathbf{F}_{1200}&\mathbf{F}_{1010}&\mathbf{F}_{1001}\\ 0&0&\mathbf{F}_{2100}&\mathbf{F}_{2200}&\mathbf{F}_{2010}&\mathbf{F}_{2001}\\ \hline\mathbf{F}_{1100}&\mathbf{F}_{2100}&0&0&\mathbf{F}_{0110}&\mathbf{F}_{0101}\\ \mathbf{F}_{1200}&\mathbf{F}_{2200}&0&0&\mathbf{F}_{0210}&\mathbf{F}_{0201}\\ \hline\mathbf{F}_{1010}&\mathbf{F}_{2010}&\mathbf{F}_{0110}&\mathbf{F}_{0210}&0&\mathbf{F}_{0011}\\ \hline\mathbf{F}_{1001}&\mathbf{F}_{2001}&\mathbf{F}_{0101}&\mathbf{F}_{0201}&\mathbf{F}_{0011}&0\\ \end{array}\right],\]
where the entries are all i.i.d. standard Gaussian. We compute the expected determinant of this matrix using Proposition 5.4. The corresponding graph has \(n_{1}+n_{2}+n_{3}+n_{4}=6\) vertices in four groups \(\mathcal{I}_{1}=\{1,2\},\mathcal{I}_{2}=\{3,4\},\mathcal{I}_{3}=\{5\},\mathcal{I}_{4}=\{6\}\).
The edges within groups all have weight zero (they can be deleted). All other edges have weight one, so \(D_{\mathbf{1}}(2,2,1,1)=\mathbb{E}\) det \(L_{\mathbf{1}}(2,2,1,1)\) is given by the negative of the number of perfect matchings in this graph. We can match \(\{1,2\}\) with \(\{3,4\}\). There are two possible such matches. Or we can match \(1\) with either \(5\) or \(6\), in which case we have to match \(2\) with either \(3\) or \(4\). There are \(4\) such matches. Finally, we can also match \(2\) with either \(5\) or \(6\), and by symmetry there are again \(4\) such matches. In total these are \(10\) matches, which shows that \(D_{\mathbf{1}}(2,2,1,1)=-10\).
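The value \(-10\) can also be estimated by simulation; the snippet below (an independent illustration, not the authors' code for Figure 2) draws the matrix \(L_{\mathbf{1}}(2,2,1,1)\) repeatedly and averages its determinant.

```python
import numpy as np

def sample_L_2211(rng):
    """One sample of the 6x6 symmetric matrix L_1(2,2,1,1): zero diagonal blocks of sizes 2,2,1,1
    and i.i.d. standard normal entries everywhere else (up to symmetry)."""
    sizes = [2, 2, 1, 1]
    offs = np.cumsum([0] + sizes)
    L = np.triu(rng.standard_normal((6, 6)), k=1)
    L = L + L.T
    for k in range(4):                      # zero out the diagonal blocks
        L[offs[k]:offs[k + 1], offs[k]:offs[k + 1]] = 0.0
    return L

rng = np.random.default_rng(3)
dets = [np.linalg.det(sample_L_2211(rng)) for _ in range(100_000)]
print(np.mean(dets))   # fluctuates around -10, cf. Figure 2
```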
Proof of Proposition 5.4.: Let \(v:=v_{1}+\cdots+v_{r}\) and write
\[L_{\mathbf{d}}(v_{1},\ldots,v_{r})=(\ell_{i,j})_{1\leq i,j\leq v}.\]
Since the expectation is linear, the Leibniz formula for the determinant yields
\[\mathbb{E}\,\,\det L_{\mathbf{d}}(v_{1},\ldots,v_{r})=\sum_{\pi\in\mathfrak{S}_{v }}\operatorname{sgn}(\pi)\,\,\mathbb{E}\prod_{i=1}^{v}\ell_{i,\pi(i)},\]
where \(\mathfrak{S}_{v}\) is the symmetric group on \(v\) elements. The \(\ell_{i,\pi(i)}\) are all Gaussian with mean \(\mathbb{E}\,\ell_{i,\pi(i)}=0\) and independent. This implies that the only terms whose expectation is not zero are those where all \(\ell_{i,\pi(i)}\) appear as a square. In other words, only those expectations are not zero where \(\pi\in\mathfrak{S}_{v}\) has the property that \(\pi(i)\neq i\) for all \(i\) and \(\pi(i)=j\) implies \(\pi(j)=i\). Such \(\pi\in\mathfrak{S}_{v}\) only exist when \(v\) is even.
If \(v\) is odd, we therefore have \(\mathbb{E}\,\,\det L_{\mathbf{d}}(v_{1},\ldots,v_{r})=0\). Since for \(v\) odd, no perfect matchings can exist, we also have \(D_{\mathbf{d}}(v_{1},\ldots,v_{r})=0\).
If \(v\) is even, on the other hand, the \(\pi\in\mathfrak{S}_{v}\) with the above property are precisely products of \(\frac{v}{2}\) transpositions, so that
\[\mathbb{E}\,\,\det L_{\mathbf{d}}(v_{1},\ldots,v_{r})=(-1)^{\frac{v}{2}}\sum_ {\begin{subarray}{c}\pi\in\mathfrak{S}_{v}:\\ \pi\text{ is product of }\frac{v}{2}\text{ transpositions}\end{subarray}} \mathbb{E}\prod_{i=1}^{v}\ell_{i,\pi(i)}.\]
There is a \(1:1\) correspondence between products of \(\frac{v}{2}\) transpositions and perfect matchings \(C\subset E\), where \(E\) is the set of edges in the complete graph \(G=(V,E)\) on \(v\) vertices. Let \(C=\{(i_{1},i_{2}),(i_{3},i_{4}),\ldots,(i_{v-1},i_{v})\}\) be the matching corresponding to \(\pi\); i.e., for \(j\) odd, \(\pi(i_{j})=i_{j+1}\). Then, using independence, we obtain
\[\mathbb{E}\prod_{i=1}^{v}\ell_{i,\pi(i)}=\mathbb{E}(\ell_{i_{1},i_{2}}^{2}\cdots\ell_{i_{v-1},i_{v}}^{2})=\mathbb{E}\,\ell_{i_{1},i_{2}}^{2}\cdots\mathbb{E}\,\ell_{i_{v-1},i_{v}}^{2}=\sigma_{i_{1},i_{2}}^{2}\cdots\sigma_{i_{v-1},i_{v}}^{2},\]

where \(\sigma^{2}_{i_{j},i_{j+1}}\) is the variance of \(\ell_{i_{j},i_{j+1}}\). By the definition of \(L_{\mathbf{d}}(v_{1},\dots,v_{r})\), the variance of the off-diagonal entries in the diagonal blocks is \(d_{k}(d_{k}-1)\), while the variance of the entries in the off-diagonal blocks of \(L_{\mathbf{d}}(v_{1},\dots,v_{r})\) is \(1\). That is:

\[\sigma^{2}_{i_{j},i_{j+1}}=\begin{cases}d_{k}(d_{k}-1),&i_{j},i_{j+1}\in\mathcal{I}_{k}\\ 1,&i_{j},i_{j+1}\text{ are in different groups of vertices},\end{cases}\]

which shows that \(\mathbb{E}\prod_{i=1}^{v}\ell_{i,\pi(i)}=w(C)\), so \(D_{\mathbf{d}}(v_{1},\dots,v_{r})=\mathbb{E}\,\det L_{\mathbf{d}}(v_{1},\dots,v_{r})\).

Figure 2. The empirical distribution of \(\det L_{\mathbf{1}}(2,2,1,1)\) for \(10^{5}\) sample points. The empirical mean of this sample is \(-9.9995\). We show in Example 5.5 that the actual mean value is \(-10\).
The last example of our paper is the computation of the curvature coefficients \(\kappa_{i}\) in Weyl's tube formula (5.1) for the Segre manifold \(\mathbb{X}_{\mathbf{n},\mathbf{d}}=\mathbb{S}^{1}\otimes\dots\otimes\mathbb{ S}^{1}\subset\mathbb{S}^{2^{r}-1}\).
**Example 5.6**.: For the special case of the Segre manifold \(\mathbb{X}_{\mathbf{n},\mathbf{d}}=\mathbb{S}^{1}\otimes\dots\otimes\mathbb{ S}^{1}\) we have \(\mathbf{d}=\mathbf{1}_{r}=(1,\dots,1)\) and \(\mathbf{n}=\mathbf{1}_{r}\). In this case, Lemma 5.1 yields
\[\operatorname{vol}(\mathbb{X}_{\mathbf{n},\mathbf{d}})=\frac{1}{2^{r-1}} \cdot\operatorname{vol}(\mathbb{S}^{1})\dots\operatorname{vol}(\mathbb{S}^{1} )=2\pi^{r}.\]
Furthermore, the codimension of \(\mathbb{X}_{\mathbf{n},\mathbf{d}}\) is
\[c=2^{r}-1-r.\]
This implies that \(\kappa_{i}=2\pi^{r}\cdot\operatorname{vol}(\mathbb{S}^{c-1})\cdot\theta_{i}\), where \(\theta_{i}\) is defined as in Theorem 1.3 and Lemma 5.3. We compute the \(\theta_{i}\). By Theorem 1.3, we have \(\theta_{i}=\frac{\Gamma(\frac{c}{2})}{2^{i}\,\Gamma(i+\frac{c}{2})}\ \sum D_{\mathbf{d}}(v_{1},\dots,v_{r})\), where the sum is over all tuples \((v_{1},\dots,v_{r})\in\{0,1\}^{r}\) with \(v_{1}+\dots+v_{r}=2i\). There are \(\binom{r}{2i}\) such tuples. Fix a tuple \((v_{1},\dots,v_{r})\); this corresponds to the complete graph with \(2i\) vertices where all edges have weight \(1\), and there are \((2i-1)!!\) perfect matchings on this graph. Therefore, by (1.2) we have \(D_{\mathbf{1}_{2i}}(\mathbf{1}_{2i})=(-1)^{i}(2i-1)!!\). It follows that
\[\theta_{i}=(-1)^{i}\cdot\frac{\Gamma(\frac{c}{2})}{2^{i}\,\Gamma(i+\frac{c}{2 })}\cdot\binom{r}{2i}\cdot(2i-1)!!;\]
hence,
\[\kappa_{i}=(-1)^{i}\cdot 2\pi^{r}\cdot\operatorname{vol}(\mathbb{S}^{c-1}) \cdot\frac{\Gamma(\frac{c}{2})}{2^{i}\,\Gamma(i+\frac{c}{2})}\cdot\binom{r}{2 i}\cdot(2i-1)!!.\]
This completes the computation of the curvature coefficients of this Segre manifold.
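The closed form above is straightforward to evaluate numerically. The sketch below, a small Python check using only the standard library, computes \(\kappa_{i}\) for a chosen number of factors \(r\); it uses the identity \(\operatorname{vol}(\mathbb{S}^{c-1})=2\pi^{c/2}/\Gamma(c/2)\), and the choice \(r=3\) is only an example.

```python
import math

def kappa(r, i):
    """Curvature coefficient kappa_i of the Segre manifold S^1 x ... x S^1
    (r factors), evaluated from the closed form above."""
    c = 2 ** r - 1 - r                                        # codimension
    vol_sphere = 2 * math.pi ** (c / 2) / math.gamma(c / 2)   # vol(S^{c-1})
    double_factorial = math.prod(range(2 * i - 1, 0, -2)) if i > 0 else 1   # (2i-1)!!
    return ((-1) ** i * 2 * math.pi ** r * vol_sphere
            * math.gamma(c / 2) / (2 ** i * math.gamma(i + c / 2))
            * math.comb(r, 2 * i) * double_factorial)

# r = 3 gives codimension c = 4 and nonzero coefficients for i = 0, 1
print([kappa(3, i) for i in range(2)])
```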
|
2302.01582 | Controlling for Stereotypes in Multimodal Language Model Evaluation | We propose a methodology and design two benchmark sets for measuring to what
extent language-and-vision language models use the visual signal in the
presence or absence of stereotypes. The first benchmark is designed to test for
stereotypical colors of common objects, while the second benchmark considers
gender stereotypes. The key idea is to compare predictions when the image
conforms to the stereotype to predictions when it does not.
Our results show that there is significant variation among multimodal models:
the recent Transformer-based FLAVA seems to be more sensitive to the choice of
image and less affected by stereotypes than older CNN-based models such as
VisualBERT and LXMERT. This effect is more discernible in this type of
controlled setting than in traditional evaluations where we do not know whether
the model relied on the stereotype or the visual signal. | Manuj Malik, Richard Johansson | 2023-02-03T07:27:50Z | http://arxiv.org/abs/2302.01582v1 | # Controlling for Stereotypes in Multimodal Language Model Evaluation
###### Abstract
We propose a methodology and design two benchmark sets for measuring to what extent language-and-vision language models use the visual signal in the presence or absence of stereotypes. The first benchmark is designed to test for stereotypical colors of common objects, while the second benchmark considers gender stereotypes. The key idea is to compare predictions when the image conforms to the stereotype to predictions when it does not.
Our results show that there is significant variation among multimodal models: the recent Transformer-based FLAVA seems to be more sensitive to the choice of image and less affected by stereotypes than older CNN-based models such as VisualBERT and LXMERT. This effect is more discernible in this type of controlled setting than in traditional evaluations where we do not know whether the model relied on the stereotype or the visual signal.
## 1 Introduction
The center of gravity of NLP research has shifted to the development of language models (LMs) for representation and generation of text, and most recent high-impact research contributions describe new LMs. For some tasks, a model needs to take into account not only a text but also some non-textual information, and a wide range of multimodal LMs have been developed that allow the representation of a text jointly with some external modality. Most of this work focuses on _visual_ tasks where NLP models need to be integrated with computer vision models; examples of tasks in this area include visual question answering and caption generation. A range of combined language-and-vision LMs have been developed using different approaches for integrating representations of text and of images or videos.
But can we be sure that a multimodal model actually uses the provided visual information instead of just relying on statistical tendencies in the text corpus? With the development of multimodal LMs, some recent work has investigated what information is stored in the representations of the multiple modalities and how the multiple representations interact. For instance, Frank et al. (2021) carried out a set of controlled tests to tease apart the effects of the textual and visual modalities.
It has been widely noted that representations of language are affected by several kinds of _stereotypes_, which we loosely define as any type of phenomenon that has a highly skewed prior probability distribution. In these cases, the skewed distribution may cause a model to simply go with the default choice and ignore contextual information that would suggest an unusual analysis. Most of the discussion in the field has been about stereotypes relating to various demographic attributes (Bolukbasi et al., 2016), but in this work, we use the term "stereotype" in the more general sense mentioned above. This issue is likely to affect multimodal LMs as well, although we are aware of no previous work that investigates this phenomenon systematically; for instance, if some object is often associated with some visual property (e.g. a color or shape), this property may be predicted by the model even in cases where it is not present. This effect may also have methodological implications in benchmarks for the evaluation of LMs: if a model predicted the correct answer, did it do so because of the stereotype or because it actually used the available visual information?

Figure 1: An example of a controlled test of a masked language model for a color stereotype. We compute the output from the MLM head when providing an image of an object with a stereotypical color (a yellow banana) and compare it to the output when the object has an unusual color (green). If the MLM is strongly affected by a stereotype bias, the predictions change little.
In this work, we propose a methodology and develop two benchmark sets for stress-testing multimodal LMs to determine to what extent they are affected by problems related to stereotypes. The key idea is to look at predictions of a language/vision LM with different visual inputs and compare the behavior of the LM in the presence or absence of stereotypes. For cases when a stereotype is present, we compare model outputs when the image _does_ correspond to the stereotype to when it _does not_.
The rest of the paper is organized as follows. Section 2 discusses the design of the benchmark sets and how we use them to investigate multimodal LMs for stereotypes. Details about the multimodal LMs we have used are covered in Section 3, and Section 4 describes how they are applied for the benchmarks, while Section 5 presents the figures achieved on the benchmarks and discusses their implications. In Section 6, we discuss related research. Finally, Section 7 summarizes the main points and discusses limitations and possible extensions.
## 2 Design of Benchmark Datasets
We have collected two datasets consisting of textual templates and corresponding images. These datasets were selected because in these cases it was relatively easy to collect images exemplifying some visual property, and where on the one hand we could find images corresponding to a stereotype, but on the other hand also control images _not_ corresponding to the stereotype.
These datasets also contain subsets we call "neutral" where stereotypes are not present. The purpose of these images is to investigate whether LMs are more sensitive to the choice of images in the cases when they cannot rely on stereotypes.
### The Memory Colors Dataset
The first dataset is an extension of the _Memory Colors_ dataset (Norlund et al., 2021), originally developed for the purpose of measuring the transfer of information between visual and textual representations. The original dataset lists a set of 109 common physical objects, where each object is listed with a "memory color": a stereotypical color we typically associate with the object. For instance, the dataset lists tomatoes as stereotypically red although tomatoes frequently have other colors. The set of objects was annotated by multiple annotators, and only the objects where there was a perfect or almost perfect consensus among annotators were included.
The dataset comes with a set of textual templates that can be used to generate prompts for LMs. Since the dataset was originally intended for use in LMs where no image was available, these text templates were intentionally formulated to elicit stereotypical responses, e.g. _"The typical color of a tomato is..."_. In our case, we changed the templates to encourage the model to focus on the image, e.g. _"The color of this tomato is..."_.
The Memory Colors dataset also includes a set of prototypical images exemplifying the stereotypical color. For each of the object types, we collected an additional image where the color was not the stereotypical one, e.g. a green tomato. All images were collected by carrying out a Google image search and picking the first result. The majority of the objects with unusual colors are shown in natural images (e.g. unripe tomatoes, orange sky); in a few cases, the color had been artificially modified.
We also extended the Memory Colors dataset with 19 neutral object types selected so that they were not expected to have a stereotypical color. This set includes common objects such as cars, houses, etc. We refer to the combined set, including the images with non-stereotypical colors and the neutral instances, as the _Extended Memory Colors_ dataset.
### Gender Stereotypes Dataset
The effect of gender in neural language representation models has been widely investigated and it is relevant to consider this in multimodal representations as well. We compiled a second dataset we term the _Gender Stereotypes_ dataset. The aim is to identify how well a multimodal model performs in the prediction of a person's gender when it is fed two different images, which act as visual signals, one corresponding to a man and another one corresponding to a woman. For each pair, there is a sentence that describes the activity.
As in the color dataset, we include stereotypical cases (male-coded and female-coded, respectively) as well as cases where no stereotype is present.
The dataset contains 50 different text sentences and 100 images, where half of the images show male individuals and half show female individuals. Internally in the dataset, 19 and 21 text templates were created for the male and female stereotypical activities, respectively.1 Further, we defined a list of 10 different neutral tasks: _eating, walking, reading, writing, meditating, talking, studying, listening to music, clapping, crying_. For these cases, we assumed that there is no stereotypical gender associated with the activities.
Footnote 1: Stereotypical activities were selected from this website.
As we will discuss in more detail in Section 4, the property to be predicted will be represented in the sentence as a [MASK] token to be substituted by a masked LM. As an example from the gender stereotype dataset, one sentence reads _'My therapist is very good, [MASK] helped me get myself together'_; according to the source where we selected the stereotypical occupations, therapy professionals are more frequently female.
For each of the 50 text templates, we selected two images, one for each of the genders. As for the colors dataset, we used the first result in an image search judged by an annotator to correspond to the gender in question. We did not take the self-identified gender into account.
## 3 Multimodal Language Models
The Transformer Vaswani et al. (2017) is a sequence-based model that is now the standard architecture in NLP for devising representation and generation components in neural models. Pretrained language models based on the Transformer architecture, such as BERT Devlin et al. (2019), have proven capable of learning powerful representations applicable to a wide range of tasks and have yielded state-of-the-art performance in many downstream tasks.
Multimodal models fusing the textual and visual modalities have been devised by researchers after looking at the huge success of pre-trained language models. In such models, multiple modalities are considered, and data for the training of the models is in multiple modalities. As our research problem revolves around the aspect of multimodality, we will focus on two modalities: a textual and a visual signal. The visual signal is in the form of images, and the natural language is the written text accompanying the images, such as captions or descriptions of the images. Examples of such visual/textual Transformers include VilBERT Lu et al. (2019), LXMERT Tan and Bansal (2019), VisualBERT Li et al. (2020), OSCAR Li et al. (2020), ImageBERT Qi et al. (2020), FLAVA Singh et al. (2022), and others. Most of the earlier models use features extracted from a Faster-RCNN pipeline Ren et al. (2015), while later models use visual Transformer architectures Dosovitskiy et al. (2021). These types of models are then trained on datasets that contain text/image pairs such as SBU Captions Ordonez et al. (2011), MS COCO Lin et al. (2014), Conceptual Captions Sharma et al. (2018), and Visual Genome QA Krishna et al. (2017), using various pre-training tasks. They are sometimes trained from scratch on the combined language/vision data and sometimes warm-started from a unimodal model such as BERT.
For this study, we selected three different multimodal models to run our experiments on. These image-augmented Transformer models are VisualBERT, LXMERT, and FLAVA. These three are specifically chosen to give a certain diversity in the selection of model architecture: one single-stream CNN-based model, one dual-stream CNN-based model, and one visual Transformer-based model.
All the models we selected are BERT-like variations that use the technique of Masked Language Modelling (MLM) during pre-training. This idea was presented in the original BERT paper Devlin et al. (2019). In the task of Masked Language Modelling, we predict a token which has been masked in the sentence, given the set of unmasked tokens. In our case, the unmasked tokens are supplemented by the visual signals. The random masking ratio during pre-training is around 15%, while in our experiments a single special [MASK] token is used. As we will discuss in Section 4, we rely on the ability of the MLM to predict missing tokens in our experiments.
**VisualBERT.** This is a single stream multimodal model, i.e., the language and vision embeddings are processed via a single Transformer. It extends BERT by redefining how the input is processed. The language embeddings are extracted from BERT's tokenizer, which acts as the text encoder. For the embeddings of the visual signals, Faster-RCNN is used. It extracts image
features in the form of 36 RoI (region of interest) boxes for each image, and these RoI boxes are used as features. Each of these 36 RoI boxes is represented by a vector of size 2048, and the boxes with the highest probability/confidence are chosen. The visual representations are appended at the end of the sequence of word embeddings.
**LXMERT.** This model is a dual stream multimodal model, where the inputs are processed through two Transformers, for natural language and vision signals respectively. Text is processed in the same manner as for VisualBERT, based on BERT's tokenizer. The image features for LXMERT are extracted by Faster-RCNN, in the same way as for VisualBERT, but we also feed the normalized boxes (the locations of these bounding boxes) alongside the features. Finally, the two Transformers are fused.
**FLAVA.** FLAVA has a text encoder, an image encoder, and a multimodal encoder. It is a dual stream multimodal model. The text encoder has a ViT (visual Transformer) architecture and extracts single-modal text representations. For the images, an image encoder based on the ViT architecture extracts single-modal image representations. A separate Transformer, the multimodal encoder, is then applied: the unimodal representations are passed through this fusion encoder, which fuses the two modalities and thus obtains cross-modal representations.
### Model Details
There is a slight difference in how the two CNN-based models, VisualBERT and LXMERT, are applied: in the case of LXMERT, we also input the locations of the bounding boxes. For the experiments concerning VisualBERT, we have used the pre-trained BERT tokenizer,2 and VisualBERT with the COCO pretraining checkpoint3 for the model. In the case of LXMERT, the LXMERT base tokenizer and model4 were used. For FLAVA, we used the pretrained processor and model.5
Footnote 2: bert-base-uncased from the HuggingFace library.
Footnote 3: uclanlp/visualbert-vqa-coco-pre from HuggingFace.
Footnote 4: unc-nlp/lxmert-base-uncased from HuggingFace library.
Footnote 5: facebook/flava-full from HuggingFace library.
## 4 Methodology of Analysis
Our benchmarking method uses a cloze-style fill-in-the-blank approach (Petroni et al., 2019; Jiang et al., 2020), which has previously been applied in experiments investigating the interaction between visual and linguistic representation (Norlund et al., 2021; Hagstrom and Johansson, 2022, 2022). This approach is easy to apply to BERT-style models that include a masked language model (MLM) as part of their pre-training pipeline. When applying the MLM in our experiment, the model is provided with an image and a text prompt, where the visual property to be predicted by the model has been replaced by the mask dummy token. We then investigate how well the missing token is predicted under different circumstances.
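As a concrete illustration of the cloze-style probe, the text-only sketch below queries a BERT masked-LM head at the [MASK] position. It is a stand-in for the multimodal case, where the same masked prompt is paired with image inputs (region features for VisualBERT and LXMERT, patch embeddings for FLAVA) whose exact calls are model-specific and omitted here; the prompt and the top-5 read-out are illustrative choices.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

prompt = "The color of this tomato is [MASK]."
inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]      # vocabulary logits at the [MASK] position

top_ids = logits.softmax(-1).topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))       # the prediction is counted correct if the top-1 token matches the label
```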
Since the nature of the two benchmarks is different, we had to apply different methodologies to get the results. We discuss these details below.
### The Memory Colors Dataset
For the Memory Colors dataset, we compare the image having a stereotypical color to an image with an unusual color for the particular object, and to a dummy image containing no meaningful information. Following previous work that applied image-augmented LMs to text-only inputs, we have considered different types of dummy images. We have used two types of dummy images: the first one being a completely black image following Iki and Aizawa (2021), and the second consisting of white noise. However, in experiments we did generally not see major differences between the behavior of the models when using the black dummy images and when using the noise images, so we limit the discussion to black dummy images in the rest of this paper.
For a given text prompt and image, we mark the output as correctly or incorrectly predicted depending on whether the token predicted at the [MASK] position matches the color of the label we have provided in the dataset or not.
In these experiments, we did not restrict the output vocabulary to color terms. In general, after going through the results, it seems that all three models tend to output a color term at the position of the [MASK] token.
### Gender Stereotypes Dataset
For the Gender Stereotypes dataset, we also consider the output of the MLM head at the masked position, but in this case we also need to take into account that several words may be applicable in the given context. For this reason, we create two buckets of male and female words: _he, male, man,
men, boy, his_ and _she, female, woman, women, girl, her_, respectively. We choose the predicted gender based on the highest probability that the elements in the buckets receive for the masked token. If the element with the highest probability falls in the bucket containing male words, we count this instance as predicted male by the model, and vice versa for the female bucket.
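The bucket-based decision can be written in a few lines; the sketch below assumes the MLM probabilities at the [MASK] position are already available as a word-to-probability mapping, which is an implementation choice rather than part of the method description.

```python
MALE_WORDS = ["he", "male", "man", "men", "boy", "his"]
FEMALE_WORDS = ["she", "female", "woman", "women", "girl", "her"]

def predicted_gender(token_probs):
    """token_probs: dict mapping vocabulary words to their MLM probability
    at the [MASK] position.  The instance is counted as the gender of the
    bucket that contains the single highest-probability word."""
    best_male = max(token_probs.get(w, 0.0) for w in MALE_WORDS)
    best_female = max(token_probs.get(w, 0.0) for w in FEMALE_WORDS)
    return "male" if best_male >= best_female else "female"
```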
## 5 Results
We evaluated the three selected models on the two benchmarks. In both cases, we compare the predictions when a stereotype is present and the image corresponds to the stereotype to the case where the image _does not_ correspond to the stereotype. We also evaluate cases where there is no stereotype and we carry out similar comparisons in this case. Additionally, we look at the model's predictions when provided with a black dummy image.
### The Extended Memory Colors Dataset
Table 1 shows the results on the extended Memory Colors stereotypes dataset. When using real images, the figures outside the brackets should be interpreted as predictive accuracies; for the black dummy images, the figures show the proportions of cases predicted as the stereotypical color. The figures in brackets show the proportion of predictions that are identical to the original prediction.
We note that VisualBERT performs poorly on this dataset, confirming previously published results that this model is underfitted on visual data and mostly sticks to the prediction by an equivalent BERT model. The effect of the image seems minimal and its performance is close to the majority-class baseline accuracy of 0.25.
The LXMERT and FLAVA models achieve better scores on the original Memory Colors dataset: both models have accuracies in the 0.70-0.75 range. However, we see clearly that this similarity of performance is superficial and that the LXMERT model mostly relies on stereotypes: when we consider the control images with unexpected colors, the performance of LXMERT is very poor and it mostly keeps predicting the stereotypical color. Its performance is somewhat better for the non-stereotypical cases, but far from perfect. FLAVA on the other hand predicts fairly well on the control set, although somewhat worse than for the images with stereotypical colors; it also predicts with a good accuracy for the non-stereotypical cases. It is clear that FLAVA is much more sensitive to the choice of images in this task.
For the dummy images that are completely black, the LXMERT model's predictions are again to a large extent identical to the original predictions. Again, the FLAVA model is more receptive to the choice of images: it predicts the color _black_ in 92% of the cases and there is no discernible effect of stereotypes; it can be discussed whether this is a desired behavior in this case, since the image does not include an object of the kind mentioned in the prompt.
Finally, we note that for the non-stereotypical instances, LXMERT's predictions seem to shift more between the original images and the black dummy images. This suggests that in cases where the model cannot rely on a stereotype, the model is more sensitive to the visual input.

| Model | Stereotypes, original image | Stereotypes, control image | Stereotypes, black image | No stereotypes, original image | No stereotypes, black image |
| --- | --- | --- | --- | --- | --- |
| VisualBERT | 0.23 | 0.08 (0.50) | 0.28 (0.41) | 0.0 | 0.0 (0.84) |
| LXMERT | 0.72 | 0.11 (0.76) | 0.69 (0.87) | 0.47 | 0.05 (0.47) |
| FLAVA | 0.74 | 0.69 (0.06) | 0.08 (0.08) | 0.89 | 0.11 (0.11) |

Table 1: Accuracies on the extended Memory Colors datasets. For control images with unexpected colors, the accuracies are computed with respect to the _new_ color, while for the black images the accuracies are with respect to the _original_ color. Figures in brackets show the proportion of predictions that are equal to the original prediction.
### Gender Stereotypes Dataset
Table 2 shows the results on the gender stereotypes dataset. Note that for consistency, the figures show the proportion of instances predicted as _male_, so they should not be interpreted as accuracies when predicting with an image of a female.
Generally speaking, all models tend to predict the _male_ class when provided with an image showing male individuals. When the input shows a female individual, the picture is more varied. As in the previous experiment, FLAVA reacts much more strongly to the choice of images than VisualBERT and LXMERT, and tends to predict the _male_ class for images with males and vice versa.
Unexpectedly, VisualBERT as well as LXMERT both seem to generally assign higher probabilities to male-coded words, even when the prompt is stereotypically female; this is surprising since we had expected these models to predict the stereotypical classes in these cases. It seems that FLAVA is the only model that shows signs of _contextual_ gender stereotypes in this experiment: when provided with a black dummy image, this model predicts according to what would have been expected stereotypically, and at 50% for the non-stereotypical cases. As we saw in the color experiment, for the non-stereotypical cases LXMERT seems at least somewhat affected by the choice of images, although less so than FLAVA.

| Model | Male stereotypes, male image | Male stereotypes, female image | Male stereotypes, black image | Female stereotypes, male image | Female stereotypes, female image | Female stereotypes, black image | No stereotypes, male image | No stereotypes, female image | No stereotypes, black image |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VisualBERT | 0.89 | 0.89 | 0.89 | 0.71 | 0.81 | 0.86 | 0.60 | 0.70 | 0.60 |
| LXMERT | 0.84 | 0.68 | 0.73 | 0.95 | 0.76 | 0.90 | 0.90 | 0.40 | 0.80 |
| FLAVA | 0.84 | 0.32 | 0.84 | 0.81 | 0.19 | 0.33 | 0.90 | 0.10 | 0.50 |

Table 2: Results on the gender stereotypes datasets. The figures show the proportion predicted as _male_.
## 6 Related Work
This work falls in the broad category of model analysis (Belinkov and Glass, 2019) of Transformer models (Rogers et al., 2020). Belinkov and Glass (2019) divide previous approaches to model analysis into several methodological categories; in the current work, we use an approach based on behavioral testing of a specific model behavior. Specifically, our analysis is based on the outputs of the masked language model head of BERT-like models, similarly to how Petroni et al. (2019) and Jiang et al. (2020) tested BERT models for basic encyclopedic and commonsense knowledge.
The methodology based on targeted behavioral testing has also been used to investigate a number of research questions in the analysis of language-and-vision Transformer models. In particular, a number of investigations look at what type of generalizations happen between the visual and textual modalities. Cao et al. (2020) claimed that when considering attention scores, the effect of the visual modality is limited and that the textual modality dominates. Norlund et al. (2021) investigated the effect of multimodal training on textual representations, and concluded that the degree of transfer between the representations of the respective modalities is limited, at least for CNN-based models; Hagstrom and Johansson (2022, 2022) drew similar conclusions based on more extensive experiments that also include the FLAVA model. Parcalabescu et al. (2021) considered the task of predicting numbers and arrived at a conclusion similar to ours: frequently occurring numbers are predicted more often by the model.
The previous work that is most closely related to ours in terms of research questions and methodology is that by Frank et al. (2021). They designed ablation tests where parts of the image or the text are hidden; as we have discussed, this setup is comparable to our experiments where black and white-noise images are used. Parcalabescu et al. (2022) introduced the idea of "foils": texts that differ minimally from the one describing the image. Our use of adversarially selected images can be seen as similar to the idea of foils, but focused on the visual modality.
## 7 Conclusions
In this work, we have proposed a methodological framework based on controlled tests designed to tease out the influence of stereotypes on the predictions of visually augmented language models. The key idea is that we expect common evaluation benchmarks to include many stereotypical cases that can easily be predicted simply by relying on language statistics. In order to disentangle the effect of the stereotype and the contribution of the visual representations we compare the model's output in cases where the provided image adheres to the stereotype to cases where it does not. We also consider the model's behavior in cases where there are no stereotypes, that is when the prior distribution of outputs is more evenly distributed.
As an application of this framework, we created two datasets to facilitate the investigation of stereotypes for two properties: the color of objects and the gender of people. Each dataset contains a set of text prompts and corresponding image pairs, where one image in the pair corresponds to the stereotype and the other is a control where the stereotypical property is not present. This allows comparisons to be carried out in a controlled fashion.
Using the two benchmark sets, we evaluated three MLM-based visually augmented Transformer models: VisualBERT, LXMERT, and FLAVA. There are clear differences between the models, and in particular some of these differences emerge much more clearly in the controlled setting. For instance, the CNN-based LXMERT and Transformer-based FLAVA achieve similar scores in terms of raw accuracy scores for predicting the color of objects in images. However, if we consider the control images where the objects do not have the stereotypical color, the FLAVA outperforms LXMERT by a wide margin, since LXMERT keeps predicting the stereotypical color. This means that we can see clear differences among the models with respect to how sensitive they are to the choice of images.
For the gender stereotypes experiments, the results were somewhat unexpected since it turned out that the older CNN-based models almost consistently assigned higher probabilities to male-related words, where we had expected at least the LXMERT model to be somewhat affected by stereotypes suggested by the textual prompt. The newer FLAVA model on the other hand again predicts more consistently with the input image in this experiment, and only falls back on stereotypes when the input images are uninformative.
### Limitations and Possible Extensions
As discussed in Section 2.2, we have intentionally used a simplistic operationalization of the notion of gender in this work and selected images returned by the image search engine when queried for 'male' or 'female' respectively, and that the annotator then decided were prototypical representatives of the male or the female genders. The self-identified gender of the people in the images was not taken into account in this experiment, and since our goal was to investigate the sensitivity of visually augmented LMs to the choice of images, it was a priority to carry out such an evaluation using clear-cut cases. In a more thorough investigation, it could potentially be useful to also consider how e.g. the FLAVA model, which seems to be more affected by the visual input, reacts when presented with images that do not fall into such clear-cut categories.
The most obvious way that this work could be improved would be to improve the robustness of the conclusions by scaling up the investigations along all dimensions: instead of considering just the two properties of color and gender, we would like to investigate a wider selection of properties that would be meaningful to test in language and vision models. Shape, size, and orientation are a few possible examples. For each scenario, it would also be useful to collect more examples than what we have included here, in order to improve the statistical robustness. Furthermore, since LMs are sensitive to the choice of a prompt (Jiang et al., 2020), our conclusions would be on firmer ground if we would evaluate on several text prompts for each image. Naturally, it would be interesting to consider a more extensive selection of models as well.
In this work, we treated the property of being stereotypical as binary and divided the test cases into groups based on this property. However, as discussed in the introduction, in reality the notion of stereotypicality is related to prior probability distributions. For this reason, a natural generalization of the experiments we have carried out here would be to consider stereotypicality on a continuous scale, e.g. by computing the entropy of the prior distribution and then to see how this correlates with the probability of incorrect predictions when encountering an unusual case.
The experiments in this work have been limited to evaluations of the model's behavior for selected visual-linguistic properties. It remains to see whether the same idea can be extended beyond evaluation to devise new _training_ methods as well, in order to inject a bias into the training process aimed at reducing the effects of stereotypes and encouraging the model to rely on the visual information. This type of training would typically involve more work in data collection, unless methods can be devised to adversarially generate images with unusual properties.
We finally note that the proposed methodology is not limited to the evaluation of visually augmented LMs, but could be relevant when considering any
extra-linguistic extension of LMs. For instance, similar pitfalls may occur in the evaluation of LMs augmented with structural knowledge representations. If a knowledge-augmented LM correctly predicted some encyclopedic fact Petroni et al. (2019); Jiang et al. (2020), was this because of what the knowledge resource contained or because of text statistics?
## Acknowledgements
Richard Johansson was supported by the projects _Interpreting and Grounding Pre-trained Representations for NLP_ and _Representation Learning for Conversational AI_, both funded by Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
|
2308.05967 | YOLOrtho -- A Unified Framework for Teeth Enumeration and Dental Disease
Detection | Detecting dental diseases through panoramic X-rays images is a standard
procedure for dentists. Normally, a dentist need to identify diseases and find
the infected teeth. While numerous machine learning models adopting this
two-step procedure have been developed, there has not been an end-to-end model
that can identify teeth and their associated diseases at the same time. To fill
the gap, we develop YOLOrtho, a unified framework for teeth enumeration and
dental disease detection. We develop our model on Dentex Challenge 2023 data,
which consists of three distinct types of annotated data. The first part is
labeled with quadrant, and the second part is labeled with quadrant and
enumeration and the third part is labeled with quadrant, enumeration and
disease. To further improve detection, we make use of Tufts Dental public
dataset. To fully utilize the data and learn both teeth detection and disease
identification simultaneously, we formulate diseases as attributes attached to
their corresponding teeth. Due to the nature of position relation in teeth
enumeration, We replace convolution layer with CoordConv in our model to
provide more position information for the model. We also adjust the model
architecture and insert one more upsampling layer in FPN in favor of large
object detection. Finally, we propose a post-process strategy for teeth layout
that corrects teeth enumeration based on linear sum assignment. Results from
experiments show that our model exceeds large Diffusion-based model. | Shenxiao Mei, Chenglong Ma, Feihong Shen, Huikai Wu | 2023-08-11T06:54:55Z | http://arxiv.org/abs/2308.05967v2 | # YOLOrtho: A Unified Framework for Teeth Enumeration and Dental Disease Detection
###### Abstract
Detecting dental diseases through panoramic X-rays images is a standard procedure for dentists. Normally, a dentist need to identify diseases and find the infected teeth. While numerous machine learning models adopting this two-step procedure have been developed, there has not been an end-to-end model that can identify teeth and their associated diseases at the same time. To fill the gap, we develop YOLOrtho, a unified framework for teeth enumeration and dental disease detection. We develop our model on Dentex Challenge 2023 data, which consists of three distinct types of annotated data. The first part is labeled with quadrant, and the second part is labeled with quadrant and enumeration and the third part is labeled with quadrant, enumeration and disease. To further improve detection, we make use of Tufts Dental public dataset. To fully utilize the data and learn both teeth detection and disease identification simultaneously, we formulate diseases as attributes attached to their corresponding teeth. Due to the nature of position relation in teeth enumeration, We replace convolution layer with CoordConv in our model to provide more position information for the model. We also adjust the model architecture and insert one more upsampling layer in FPN in favor of large object detection. Finally, we propose a post-process strategy for teeth layout that corrects teeth enumeration based on linear sum assignment. Results from experiments show that our model exceeds large Diffusion-based model. All data using in this research paper is publicly open.
Keywords:YOLO Multi-Label Object Detection Panoramic Dental X-ray
## 1 Introduction
The development of deep learning techniques has been significant over the past few years, and machine learning models have been deployed in various aspects of medical image analysis like [7]. Meanwhile, dentists usually spend a significant amount of time and effort on panoramic dental image analysis to establish a good understanding of their patients' dental health conditions. The diagnosis procedure can be facilitated by the usage of deep learning models. Instead of starting from scratch, dentists can build their analysis upon the initial analysis generated automatically by the deep learning model. While many attempts in this area have been made [4], almost all of them have done only the individual
analysis work such as quadrant detection [8], teeth instance segmentation [1] or dental disease analysis [9]. We plan to construct a unified framework that generates output including both teeth detection and their associated dental diseases. However, the cost of collecting data that has both annotation of teeth and diseases is expensive, as labeling every tooth and the associated disease requires expertise and several rounds of revision. Collecting data that contains either teeth enumeration or dental disease analysis, on the other hand, requires less time and effort. Also, there are a few existing public datasets that are annotated with such information [5, 3]. In this paper, we use the dataset provided in Dentex Challenge 2023 [3] and the Tufts Dental public dataset [5]. The Dentex dataset is constructed in three parts. The first part is labeled with quadrant, and the second part is labeled with quadrant and enumeration and the third part is labeled with quadrant, enumeration and disease. To further improve tooth detection, we make use of the Tufts Dental public dataset, which is labeled the same way as the second part of the Dentex dataset. It is worthwhile to note that, since the data we are using do not follow the same labeling protocol, conventional single-class object detection networks generally do not work well. To cope with multi-class detection, one needs to train two models designed to perform the individual tasks and map the results together in post-processing. To improve this two-step procedure, we construct a new architecture that allows us to train tooth detection and disease diagnosis simultaneously. We build our model upon the YOLO framework [6] and add additional heads to predict the attributes of an object. Another way to look at this design is that the class of an object can be considered as an attribute, and we simply add more attributes of this object such as whether this object is impacted or whether this object has caries.
To train a model from partial annotations, we follow a hierarchical fashion like [2]. We compute the loss based on the types of input images. If the input image has enumeration of teeth, we omit the loss of disease attributes. Since the third part of the Dentex dataset only labels the teeth with diseases, we first train a regular teeth object detection model to pseudo label the rest of the healthy teeth and set their attributes (i.e. is_impacted) to be false.
We have submitted our proposed method to Dental Enumeration and Diagnosis on Panoramic X-rays Challenge that takes place at MICCAI 2023.
## 2 Methods
### Models Overall
YOLOv8 is a well-established framework for detecting objects, and we build our model upon it. The vanilla model only supports single-class prediction for one detection. We modify the prediction heads so that multiple attributes of the detection can be predicted. In our experiment, we construct four binary classification heads: is_impacted, has_caries, has_deepcaries and has_lesion. It is worthwhile to point out that we also implement heads that provide regression and pose estimation so that this framework can be extended to a larger range of tasks. More experiments can be conducted in the future to cope with different diseases.
For example, the difference between caries and deep caries is the level of structural damage, and both of these diseases share some visual features. Therefore, it is reasonable to apply a regression loss to these diseases. Instead of using the default structure, we add one more upsampling layer in the feature pyramid network. The feature map outputs of a vanilla YOLOv8 have strides of 8, 16, 32. The close layout and uniformly large size of teeth also make them harder to predict on feature maps with larger strides. With the modified structure, the feature map outputs have strides of 4, 8, 16.
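A minimal PyTorch sketch of the extra attribute heads is given below; the pooled per-detection feature dimension, the single-layer linear heads, and the way detection features are obtained from the YOLOv8 neck are assumptions made for illustration, while the four binary attributes come from the text.

```python
import torch
import torch.nn as nn

class AttributeHeads(nn.Module):
    """Independent binary heads attached to each detection, one per dental
    attribute, in addition to the usual box and class heads."""
    def __init__(self, feat_dim=256,
                 attributes=("is_impacted", "has_caries", "has_deepcaries", "has_lesion")):
        super().__init__()
        self.heads = nn.ModuleDict({name: nn.Linear(feat_dim, 1) for name in attributes})

    def forward(self, det_feats):                 # det_feats: (num_detections, feat_dim)
        return {name: head(det_feats).squeeze(-1) for name, head in self.heads.items()}

heads = AttributeHeads()
logits = heads(torch.randn(5, 256))               # five hypothetical detections
probs = {name: torch.sigmoid(v) for name, v in logits.items()}
```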
#### 2.1.2 Data Preprocess
We train our model to detect teeth and their associated diseases, so the training data should at least contain all teeth information. The third part of the Dentex dataset only labels the teeth with diseases, so we first train a regular teeth object detection model to pseudo label the rest of the healthy teeth and set their attributes (i.e. is_impacted) to be false. Also, in the original annotation, if one tooth has multiple diseases, multiple bounding boxes at the same location with different disease classes are given. We combine these boxes into one box containing all information about the tooth.
#### 2.1.3 Data Augmentation
Other than the regular augmentation techniques such as scaling, random blur, rotation and translation, we implement flip mapping. If a panoramic image is flipped horizontally, the quadrant information associated with the bounding boxes also changes.
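A sketch of this flip mapping is shown below; the box tuple layout is an assumption for illustration, while the quadrant remapping follows the standard FDI convention (quadrants 1/2 are upper right/left and 4/3 are lower right/left, so a horizontal mirror swaps 1 with 2 and 3 with 4).

```python
QUADRANT_FLIP = {1: 2, 2: 1, 3: 4, 4: 3}   # left/right quadrants swap under a horizontal flip

def flip_horizontally(boxes, image_width):
    """boxes: list of (x_min, y_min, x_max, y_max, quadrant, tooth_number, attributes).
    Mirrors the box coordinates and remaps the FDI quadrant accordingly."""
    flipped = []
    for x0, y0, x1, y1, quadrant, number, attrs in boxes:
        flipped.append((image_width - x1, y0, image_width - x0, y1,
                        QUADRANT_FLIP[quadrant], number, attrs))
    return flipped
```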
### Proposed Framework
A conventional convolution layer works well for feature extraction and learns features that are translation invariant. While this behaviour is preferred in most computer vision tasks such as detection and classification, it can be detrimental to teeth enumeration. Position plays a huge role in the teeth enumeration system: given only the relative positions of a set of teeth in an image, regardless of their visual cues, one can already make a near-perfect enumeration prediction. To preserve positional information, we replace all convolution layers with coordinate convolution (CoordConv) layers in the backbone network.

Figure 1: The architecture of our proposed framework. We apply a custom feature pyramid structure to extract the image features and design four heads to predict the desired results.
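A minimal CoordConv layer is sketched below: two normalised coordinate channels are concatenated to the input before an ordinary convolution, so the layer can learn position-dependent features. The normalisation range and the drop-in nn.Conv2d wrapper are illustrative choices, not the exact layer used inside the YOLOv8 backbone.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Convolution preceded by concatenation of x/y coordinate channels."""
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))
```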
#### 2.2.3 Loss
We add attribute heads to the framework, and these attribute prediction heads are independent of the regular bounding box and classification heads. In our experiment, we have four attribute heads. For example, the prediction head for checking whether a tooth is impacted is trained with \(Loss_{Attribute_{is\_impacted}}=BCE(pred,GT)\), and the overall loss becomes:

\[Loss=w_{b}\cdot Loss_{bbox}+w_{c}\cdot Loss_{Class}+w_{d}\cdot Loss_{DFL}+w_{1}\cdot Loss_{Attribute_{1}}+\dots+w_{n}\cdot Loss_{Attribute_{n}}\]

In the formula, each \(w\) is a hyperparameter that weights the corresponding loss term. In the experiments we use 7.5 for the bounding box loss, 0.5 for classification, 1.5 for the DFL loss, and 8 for each disease attribute loss.
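The weighted combination can be written as a small helper; the sketch below assumes the individual loss terms have already been computed as scalars and that every attribute loss shares the single weight of 8 reported above.

```python
def total_loss(losses, w_box=7.5, w_cls=0.5, w_dfl=1.5, w_attr=8.0):
    """losses: dict with scalar entries 'bbox', 'cls', 'dfl' and one
    'attr_<name>' entry per attribute head (BCE losses)."""
    attr_sum = sum(v for k, v in losses.items() if k.startswith("attr_"))
    return (w_box * losses["bbox"] + w_cls * losses["cls"]
            + w_dfl * losses["dfl"] + w_attr * attr_sum)
```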
### Post-Process Strategy
To further improve the accuracy of enumeration prediction, we utilize one prior on human teeth: each FDI label is associated with one tooth only. Deep learning models, on the other hand, do not have this prior knowledge. We notice that deep learning models sometimes produce the same tooth enumeration for two teeth that are close to each other. To solve this problem, we formulate the enumeration post-process as a linear-sum-assignment problem: each FDI label is matched with one prediction only, and the cost matrix is constructed from each prediction's class probabilities.
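This post-processing step can be implemented directly with SciPy's linear sum assignment; in the sketch below, the negative log probability cost and the 32-class FDI layout are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_fdi_labels(class_probs):
    """class_probs: (num_detections, num_fdi_classes) array of softmax scores.
    Enforces the prior that each FDI label is used at most once by solving a
    linear sum assignment on cost = -log(probability)."""
    cost = -np.log(np.clip(class_probs, 1e-9, 1.0))
    det_idx, fdi_idx = linear_sum_assignment(cost)
    return dict(zip(det_idx.tolist(), fdi_idx.tolist()))   # detection index -> assigned FDI class index
```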
## 3 Experiments and Results
We conduct experiments on the validation data of the Dentex Challenge. Experiments show that YOLOrtho achieves better results than the baseline model on all of the quadrant, diagnosis, and enumeration metrics. At the time this challenge paper was written, our method was the leading algorithm in the final test phase. Also, an ablation study of YOLOrtho shows that both the extra upsampling and the application of CoordConv significantly improve the AP metrics. The post-process strategy serves as a small trick that boosts the enumeration and quadrant metrics.
| Model | AP-Quadrant | AP-Diagnosis | AP-Enumeration |
| --- | --- | --- | --- |
| Vanilla YOLO | 0.395 | 0.330 | 0.286 |
| YOLOrtho | 0.414 | 0.357 | 0.337 |
| HierarchicalDet | 0.365 | 0.341 | 0.221 |

Table 1: Model Metrics in DENTEX Challenge |
2305.05743 | Surrogate-based optimisation of process systems to recover resources
from wastewater | Wastewater systems are transitioning towards integrative process systems to
recover multiple resources whilst simultaneously satisfying regulations on
final effluent quality. This work contributes to the literature by bringing a
systems-thinking approach to resource recovery from wastewater, harnessing
surrogate modelling and mathematical optimisation techniques to highlight
holistic process systems. A surrogate-based process synthesis methodology was
presented to harness high-fidelity data from black box process simulations,
embedding first principles models, within a superstructure optimisation
framework. Modelling tools were developed to facilitate tailored
derivative-free optimisation solutions widely applicable to black box
optimisation problems. The optimisation of a process system to recover energy
and nutrients from a brewery wastewater reveals significant scope to reduce the
environmental impacts of food and beverage production systems. Additionally,
the application demonstrates the capabilities of the modelling methodology to
highlight optimal processes to recover carbon, nitrogen, and phosphorous
resources whilst also accounting for uncertainties inherent to wastewater
systems. | Alex Durkin, Miao Guo | 2023-05-09T19:48:11Z | http://arxiv.org/abs/2305.05743v1 | # Surrogate-based optimisation of process systems to recover resources from wastewater
###### Abstract
Wastewater systems are transitioning towards integrative process systems to recover multiple resources whilst simultaneously satisfying regulations on final effluent quality. This work contributes to the literature by bringing a systems-thinking approach to resource recovery from wastewater, harnessing surrogate modelling and mathematical optimisation techniques to highlight holistic process systems. A surrogate-based process synthesis methodology was presented to harness high-fidelity data from black box process simulations, embedding first principles models, within a superstructure optimisation framework. Modelling tools were developed to facilitate tailored derivative-free optimisation solutions widely applicable to black box optimisation problems. The optimisation of a process system to recover energy and nutrients from a brewery wastewater reveals significant scope to reduce the environmental impacts of food and beverage production systems. Additionally, the application demonstrates the capabilities of the modelling methodology to highlight optimal processes to recover carbon, nitrogen, and phosphorous resources whilst also accounting for uncertainties inherent to wastewater systems.
Surrogate modelling, derivative-free optimisation, resource recovery from wastewater
## Acknowledgement
This work was financially supported by the UK Engineering and Physical Sciences Research Council (EPSRC) under the DTP CASE-conversion programme "Systems modelling design for waste resource recovery" [2194316].
## Conflict of interest
The authors declare no conflicts of interest.
###### Contents
* 1 Introduction
* 1.1 Process systems engineering for resource recovery
* 1.2 Derivative-free optimisation
* 1.3 Computer experiments
* 1.3.1 Variable selection and bounding
* 1.3.2 Sampling
* 1.3.3 Adaptive sampling
* 1.4 Surrogate models
Computer-aided process synthesis methods can be used to screen alternative process networks without the time and capital expenditures required for pilot studies [16, 17]. These methods have been widely applied to the design of wastewater treatment plants [18, 19, 20, 21, 22, 23], whilst applications to integrative processes for resource recovery from wastewater remain relatively unexplored [14, 24, 25]. This research gap is constricted by the complexity of modelling resource recovery processes in the context of wastewater systems [14].
Mathematical modelling is an indispensable tool for designing the resource recovery processes within a circular economy and sustainable future [4]. Specifically, wastewater treatment plant design has undergone a paradigm shift in the last decades from a dependence on expert knowledge and well-established guidelines, towards sophisticated modelling and simulation technologies [23]. This transition was made possible by the development of rigorous mathematical models of wastewater systems, based on first-principles physics, chemistry, and biology, including models for anaerobic digestion [26] and activated sludge processes [27]. However, these models tend to be high-dimensional and complex, owing to the biological nature of the underlying processes. These models can become particularly cumbersome during equation-orientated integrated process design, wherein the mathematical formulations are coded directly into specialist optimisation software, due to the combinatorial nature of modelling superstructures comprising many possible reaction pathways [15]. However, the continued development of rigorous models for emerging resource recovery technologies is important as these models form the basis for other modelling approaches (by providing a source of high-fidelity data from process simulations) as well as providing validation for more computationally tractable approximations [28].
There exist three approaches to address the computational intractability of incorporating rigorous models directly within equation-orientated decision-making frameworks. First, lower-fidelity short-cut models can be formulated as more computationally tractable approximations to rigorous models whilst maintaining their foundations in first-principles [29]. [19, 20, 21] utilise lower-fidelity general models to optimise municipal wastewater treatment plants, including validation against rigorous simulation-based models. Similarly, there exist significant applications of optimisation methodologies to water networks wherein short-cut models are utilised to represent water and contaminant flows with simple conversion factors [15, 18].
Another option is to utilise design of experiments [30] and process simulation software, embedding rigorous models within sequential modular black boxes along with algorithms to enable solution convergence for complex process networks. Simulation software is widely used to evaluate process design performance within a broad range of applications including prediction, design, operation, sensitivity analysis, and optimisation [29]. However, the requirement for more accurate process models necessitates a large number of simulation evaluations resulting in excessively high computational costs. Additionally, the underlying code within process simulation software is often embedded as black box models where functional and derivative information is unavailable to the user [31].
Thirdly, increased data availability and computational efficiency has driven recent advances in data-driven AI and ML models applied to wastewater systems for sustainable decision-making [32], process analysis, operation, and control [33, 34], prediction, classification, water quality evaluation [35], uncertainty analysis, and optimisation [4]. These data-driven models are effective at modelling nonlinear wastewater systems [35] but depend highly on the quantity and the quality of data used to train them [4, 36]. Additionally, process simulation software can be used to generate large, high-fidelity data from first-principles models with which to train representative ML models as surrogates [37]. However, a dissociation between traditional engineering specialists and computer science knowledge limits the development and deployment of such data-driven decision-support tools to complex industrial problems [32, 33, 34, 36, 38].
### Derivative-free optimisation
Two branches of AI that are of particular interest in this work are the fields of ML and derivative-free optimisation (DFO). ML models provide a data-driven approach to represent resource recovery processes within larger decision-making frameworks [39]. DFO refers to algorithms used to solve black box optimisation (BBO) problems in which the mathematical formulations and derivative information of optimisation objective functions, \(f\) (Equation 1a), and constraints, \(\mathbf{g}\) (Equation 1b), are unknown or not readily available [40, 41]. (This is in contrast to equation-orientated optimisation wherein the mathematical formulations of \(f\) and \(\mathbf{g}\) are explicitly known.) BBO problems arise frequently in simulation-based optimisation applications
wherein underlying and unavailable first-principles models are harnessed within decision-making frameworks [31]. This can be achieved by interrogating the black box simulations for high-fidelity data from complex, rigorous underlying models and proceeding to use model-based (or surrogate-based) DFO or direct-search (or sampling-based) DFO approaches [42]. The former approximates underlying functions (e.g., \(f\) replaced by \(\hat{f}\)) and guides the optimisation search using surrogate (also known as meta- or reduced-order) models whilst the latter sequentially examines data samples for improvements in optimally [43, 44]. Additionally, Bayesian DFO methods exist at the interface of these two approaches: following a direct-search approach of directly analysing data for optimality whilst simultaneously harnessing features of surrogate modelling components within optimised acquisition functions (such an approach might also be referred to as active ML) [45].
\[\underset{\mathbf{x},\,\mathbf{y}}{\text{minimize}}\quad f\left(\mathbf{x},\mathbf{y}\right) \tag{1a}\]
\[\text{subject to}\quad\mathbf{g}(\mathbf{x},\mathbf{y})\leq\mathbf{0} \tag{1b}\]
DFO algorithms can be further categorised as local or global DFO methods, where the former excel at refining current best solutions to obtain local optima using bound tightening algorithms, whilst the latter facilitate exploration for global solutions within the whole search space. Finally, stochastic (or evolutionary) DFO methods differ from deterministic algorithms due to the incorporation of random search steps and heuristics to update characteristics of the entire sample population towards optimality [46]. Rios and Sahinidis [42] provide an extensive review of DFO algorithms and a comparison of available software implementations.
Surrogate model-based DFO has gained popularity due to the increase in data availability with which to fit surrogate models, improvements in global optimisation, and catalysed by recent interests and developments in ML [44]. In addition to providing mathematical formulations of the underlying models, surrogate models also address the computational intractability of incorporating rigorous models directly within equation-orientated decision-making frameworks. Surrogate models reduce the computational cost of evaluating expensive model relationships via a reduction in model complexity and/or dimensionality. As such, surrogate modelling is widely used for predictive modelling and feasibility analysis as well as mathematical optimisation [43].
By formulating surrogate models within optimisation problems, their predictive power can be harnessed in solving black box decision-making problems [47, 48]. However, a primary challenge lies in writing surrogate model formulations that are tractable within an optimisation problem. That is, translating a predictive formulation of an ML model into a formulation that is compatible with optimisation solver software. Recent literature has also addressed this challenge [49, 50, 51], harnessing recent advances and the popularity of the optimisation modelling language Pyomo [52] and Python-based ML packages, e.g., Scikit-learn [53] and PyTorch [54]. Specifically, these works utilise the object-orientated paradigm to enable the abstraction of surrogate model formulations into independently contained objects which can then be plugged into larger optimisation problems where required. For example, optimisation formulations for neural networks (NNs) exist widely in the literature, owing both to their popularity as ML models and to their highly customisable yet manageable structure [49, 55, 56]. The weights matrices and bias vectors are optimised during model training and then fixed as optimisation parameters. Since the input to the activation function is a linear weighted sum, it is possible to formulate MILP problems depending on the activation functions used [55, 56].
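To make this concrete, the sketch below gives a minimal, generic big-M encoding in Pyomo of a single trained ReLU node \(h=\max(0,\mathbf{w}^{\intercal}\mathbf{x}+b)\); it is an illustrative example rather than the formulation of any particular package, and the weights, bias, and bound \(M\) are placeholder assumptions.

```python
import pyomo.environ as pyo

# Illustrative big-M encoding of one trained ReLU node h = max(0, w.x + b).
# w and b are fixed after training; M is an assumed valid bound on |w.x + b|.
w, b, M = [2.0, -1.5], 0.5, 100.0

m = pyo.ConcreteModel()
m.x = pyo.Var(range(2), bounds=(-10, 10))   # network inputs (decision variables)
m.z = pyo.Var()                             # pre-activation value
m.h = pyo.Var(within=pyo.NonNegativeReals)  # ReLU output
m.s = pyo.Var(within=pyo.Binary)            # 1 if the node is active (z >= 0)

m.pre = pyo.Constraint(expr=m.z == sum(w[i] * m.x[i] for i in range(2)) + b)
m.act_lb = pyo.Constraint(expr=m.h >= m.z)               # h >= z (and h >= 0 by domain)
m.act_ub1 = pyo.Constraint(expr=m.h <= m.z + M * (1 - m.s))
m.act_ub2 = pyo.Constraint(expr=m.h <= M * m.s)          # forces h = 0 when inactive

m.obj = pyo.Objective(expr=m.h, sense=pyo.minimize)
# pyo.SolverFactory('glpk').solve(m)  # any MILP-capable solver
```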
Caballero and Grossmann [57] present a model-based DFO algorithm using GPs as surrogate models. The authors develop a bounds refinement algorithm to enable convergence upon surrogate-based optima with high confidence of accurate representation of the underlying model. Henao and Maravelias [58] use NNs to represent multi-variable mappings alongside explicit constraints within a grey box optimisation framework. In both cases, data for training surrogate models were obtained by using static sampling strategies to generate input samples and evaluating the corresponding outputs by interfacing Matlab with process simulation software. The former formulated NLP problems within the TOMLAB optimisation environment in Matlab [59] and solved these using SNOPT [60]. The latter introduced discrete variables to represent a superstructure for optimising process synthesis, and solved the resulting MINLP problem with GAMS [61] and the DICOPT solver [62].
Boukouvala and Floudas [63] introduced a DFO framework for constrained grey box problems embedding sampling, bounds refinement, variable selection, surrogate modelling, and global optimisation. A key feature
of their work was the incorporation of multiple surrogate functions (linear, quadratic, signomial, radial basis functions (RBFs), and kriging), from which the choice of model to be used was optimised. Beykal et al. [64] incorporated support vector machines (SVMs) within the DFO framework as supervised classification surrogate models to map the region of numerical infeasibility arising from noisy simulations. Both applications formulated NLP problems which were solved by tuning the solver parameters of ANTIGONE [65] to enable local/global solutions.
Despite the rise in popularity of surrogate-based DFO, most of these applications implement NLP wherein optimisation is performed on a continuous search space. However, simulation-based DFO approaches to process synthesis typically result in black box mixed integer nonlinear programming (bb-MINLP) problems [46] or constrained DFO (CDFO) problems [47] due to the inclusion of integer programming variables to represent discrete process configurations, for which there has been relatively little research. The challenges facing bb-MINLP and CDFO frameworks are numerous: first, obtaining a representative yet tractable sample set that exists within a discontinuous search space; second, fitting a surrogate model to response surfaces involving continuous and integer decisions; third, formulating multiple continuous surrogates then patching them together at discontinuities can result in complex and less tractable optimisation formulations, particularly as the number of discrete decisions increases [46].
The following research gaps were highlighted from a review of the BBO and DFO literature which hinder the deployment of such decision-making frameworks to the process synthesis of resource recovery processes. A key challenge results from fragmented research approaches developing boutique DFO methods for specific applications, with no general one-size-fits-all solution and difficulty comparing methods to determine the best method for a given application. This challenge is compounded by the breadth of DFO applications and variations, including accounting for noisy outputs from stochastic simulations, optimising constrained BBO problems, and optimising both continuous and discrete variables. A second challenge derived from the fragmented research approach is the discontinuation of development after successful application. Recent research advances should be incorporated within existing DFO approaches as well as within commercial simulation software to enable simulation optimisation without necessitating programming experience and interfacing to a programming language. Finally, guaranteeing global solutions within reasonable computational time, particularly for highly dimensional and complex BBO problems, is an ongoing challenge in the optimisation community.
The problems associated with fragmented DFO development can be addressed by harnessing object-orientated programming (OOP) to provide general modelling toolboxes. Such approaches provide flexible foundation models which can be configured and adapted to specific applications. The flexibility of developed DFO methods is thereby increased by addressing the various modelling challenges at the object-level as opposed to the algorithm-level. OOP further lends itself to open-source development and the incorporation of research advances within new or updated modelling objects which can improve the performance of existing DFO configurations. Modelling objects can also be incorporated within tailored DFO implementations in a broad range of commercial simulation software. Finally, OOP for DFO also enables the dissociation between mathematical programming formulations and optimisation solvers, enabling local or global rigorous gradient-based solvers or stochastic metaheuristic solvers to be adopted as required.
A number of object-orientated surrogate modelling and DFO toolboxes have been developed to date, including the SUMO toolbox available in Matlab [66] and the surrogate modelling toolbox (SMT) available in Python [67]. The former enables numerous surrogate models to be trained and updated using adaptive sampling methods, with a primary focus on enabling more computationally tractable models for predictive purposes, whilst the latter enables static sampling strategy implementations and different surrogate model formulations with a focus on derivatives for use in gradient-based optimisation. Cozad et al. [68] developed machine learning software able to interface with many black box simulation codes and construct surrogate models from a selection of available basis functions, as well as adaptive sampling implementations, within a no-code interface. The optimisation and machine learning toolkit (OMLT) is a recently developed open-source Python package for formulating neural network (NN) and gradient-boosted tree surrogate models within larger optimisation problems [51]. Audet et al. [69] recently updated their popular implementation of the mesh adaptive direct-search (MADS) algorithm for DFO to a new object-orientated architecture to facilitate greater flexibility.
This work contributes an object-orientated DFO modelling suite to address the specific challenges of simulation-based BBO. For the first time, to the best of the authors' knowledge, abstract mathematical
formulations of NN and Gaussian process (GP) surrogate models for classification are developed to address uncertainties pertaining to simulation convergence failures. Additional contributions include mathematical programming formulations for GP regression models and their uncertainty predictions, enabling the integration of modelling uncertainties into decision-making processes for greater solution interpretability. Further contributions include mathematical programming formulations for adaptive sampling strategies based on GP models and a novel heuristic Delaunay triangulation-based approach. The developed modelling suite packages together the object-orientated surrogate modelling and mathematical programming formulations along with data processing tools to streamline simulation-based DFO. This article proceeds with a review of computer experiments (including adaptive sampling methods) and surrogate modelling before introducing the developed object-orientated DFO methodology. An application is presented to optimise a process system to recover resources from brewery wastewater.
### Computer experiments
Sampling from black box process simulations can be regarded as a set of computer experiments in which the experimental parameters (input data) can be designed in such a way as to maximise the information gained about the underlying system (input-output relationships). Specifically, a computer experiment evaluates mapping of input variables \(\mathbf{x}\) onto output variables \(\mathbf{y}\) through some underlying function \(f\) as shown in Equation 2. Additionally, black box simulations often return information pertaining to the convergence status of the underlying models. Such information can be processed as binary classification targets, \(\mathbf{t}\), where a value of 1 represents successful convergence, and a value of 0 means the simulation has failed to converge. Failed convergences can occur due to an infeasible combination of inputs or stochastic/numerical issues within simulator solution algorithms. It is important that the binary convergence targets are used to model feasible operating regions and to discard output data for failed convergences so as not to skew the data.
\[\mathbf{y},\mathbf{t}=f(\mathbf{x}) \tag{2}\]
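As a minimal illustration of Equation 2, the sketch below wraps a purely hypothetical toy simulator that returns outputs together with a binary convergence target; outputs from failed runs are discarded as described above. The functional forms and the feasibility limit are illustrative assumptions only.

```python
import numpy as np

def black_box(x):
    """Toy stand-in for a process simulation: returns outputs y and a
    binary convergence target t (1 = converged, 0 = failed)."""
    converged = x[0] + x[1] < 1.5                 # hypothetical feasibility limit
    if not converged:
        return np.full(2, np.nan), 0              # outputs discarded for failed runs
    y = np.array([x[0] ** 2 + x[1], np.exp(-x[0]) * x[1]])
    return y, 1

X = np.random.default_rng(0).random((20, 2))      # sampled input designs
results = [black_box(x) for x in X]
t = np.array([t_i for _, t_i in results])         # classification targets
Y = np.array([y_i for y_i, t_i in results if t_i == 1])   # keep converged outputs only
```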
Popularised by the Design and Analysis of Computer Experiments (DACE) framework in the late 1980s [30], computer experiments involve evaluating a computer simulation for different input designs. Computer experiments enable many input-output relationships to be enumerated without the requirement for expensive pilot studies. However, applications to optimisation problems are often hindered by computationally expensive simulations and a lack of derivative and functional information about the underlying complex black box models. Other challenges include handling noisy simulations which are non-deterministic, such that a specific input design can converge to different outputs and therefore presents a one-to-many input-output mapping.
Sampling strategies have been designed to provide maximum design space coverage with the minimum number of samples. Such sampling strategies are designed to address the trade-off between homogeneous sample coverage (maximising information about underlying input-output relationships) and uniformity of samples (too uniform and the subsequent surrogate modelling can run into issues with correlated data). Sampling strategies are typically adopted in an offline approach where a specific number of samples are evaluated from the black box at once followed by subsequent modelling using all sampled data without any further sampling. This offline approach enables sampling to be removed from the modelling and optimisation workflow, enabling quicker iterations through these stages. Disadvantages include the reliance on the initially sampled data to provide good overall sample space coverage as well as good coverage around globally optimum solutions (exploration and exploitation).
#### 1.3.1 Variable selection and bounding
The design of computer experiments begins with the selection of a subset of variables to be sampled from the black box. This thereby defines the selection of variables within the subsequent ML/surrogate modelling/optimisation formulations and can reduce overfitting by avoiding the modelling of unnecessary correlations [43]. Generally, output variables are determined as those which appear in performance, feasibility, or costing functions within the larger decision-making problem being examined. The subset selection of input variables can include decision variables (design or operating variables) to be determined within a larger decision-making problem or uncertain parameters dependent on practical operating
environments. Subset selection methods aim to determine a minimal number of these variables without a significant loss in accurate modelling capabilities, with the added benefit of reducing the surrogate model dimensionality [70]. Popular systematic subset selection methods include principal component analysis [71], screening techniques [72], variance-based sensitivity analysis [73], and MIP formulations [58]. More heuristic methods include performing preliminary experiments to observe variable sensitivities or using expert knowledge to choose relevant variables [74].
Candidate design variables to be selected for resource recovery flowsheet optimisation include volumes of reactors, nominal flowrates, integer variables defining the configuration of the flowsheet, as well as reactor specific design variables such as operating temperatures and recycle rates. Operating variables are variables that can be varied temporally, to alter the operating conditions of the flowsheet in response to externalities deviating from nominal design conditions. During the design stage of flowsheets, lower and upper bounds can be imposed on operating variables to enable the modelling of flowsheet operation and control. It is also possible to treat operating variables as pseudo design variables wherein nominal values are determined.
The selection of uncertain parameters in practical operating environments considers input flows and compositions which depend on some independent upstream process. Other uncertain parameters that could be selected for modelling include environment temperatures and rainfall, which are particularly important for vessels open to the environment. Selected uncertain parameters can also pertain to the mathematical models, such as model parameters which have been determined via tuning methods; the structure imposed by mathematical models approximating complex systems also introduces uncertainties. Additionally, for simulation-based methods, there is often a degree of uncertainty in the simulation model as well, creating multiple layers of modelling uncertainty. Simulations may also be non-deterministic because their numerical solvers introduce noise based on different initialisation and convergence criteria.
At the variable selection stage, it is also important to consider in advance the surrogate models that will be used and which input-output relationships are to be modelled by each. For example, NNs can be used to correlate many input-output relationships; however, simpler NN structures can be used to represent many-input to one-output relationships. In this case, it is up to the user to decide whether to have a single complex NN representing many relationships, or many simpler networks each representing just one output variable response. Furthermore, other surrogate models such as Gaussian processes (GPs) are not as flexible as NNs in that they can only effectively model up to 20 input variables per model, so for highly complex, high-dimensional systems, principal component analysis can help to reduce the number of selected input variables.
Data signifying the convergence status of process simulations can also be sampled to determine feasible regions for subsequent classification surrogate modelling and optimisation. A converged sample represents a feasible process design whereas failure of the simulator to converge signifies an infeasible process design. Converged and non-converged samples from black box simulations are typically assigned labels of 1 and 0, respectively. Another approach is to assign non-converged samples an output value of \(+\infty\) such that, in the context of minimisation, the optimisation objective function is guaranteed to be sub-optimal whilst feasibility constraints of the form \(g(x)\leq 0\) are guaranteed to be violated [44].
Once input-output variables have been selected, appropriate bounds can be imposed to define the sample space. Typically, lower and upper bounds are imposed on each input dimension to form \(m\)-vectors of lower and upper bounds \(\mathbf{x}^{L}\), \(\mathbf{x}^{U}\), which define an \(m\)-dimensional hypercube [42]. The \(m\)-dimensional input vector \(\mathbf{x}=(x_{1},x_{2},...,x_{m})\) is then bounded such that \(\mathbf{x}^{L}\leq\mathbf{x}\leq\mathbf{x}^{U}\). These bounds can be determined based on physical and thermodynamic principles, cost constraints, purchasable equipment, preliminary experiments, or expert knowledge. If the bounds are too tight, the resulting surrogate model will only be valid over a small input space, reducing its predictive capabilities and, in the case of optimisation, failing to capture globally optimal solutions. Conversely, if the bounds are too relaxed, sparse coverage of the large sample space results in regions of high uncertainty in surrogate model predictions. Variable bounds (and even variable selection) can also be adjusted and tightened within optimisation algorithms to iteratively refine the search space towards optimum solutions [57, 63].
#### 1.3.2 Sampling
The goal of sampling strategies is to produce \(n\) input-output samples from an \(m\)-dimensional input space, as the \(n\times m\) matrix \(X\), such that good sample space coverage is achieved with minimal correlation between samples. Initial sampling in this way is also referred to as static or stationary sampling, as the samples
are generated according to some strategy and then evaluated without any new information being used to inform the remaining sample locations as in adaptive sampling. Static sampling strategies include random sampling (also known as Monte Carlo sampling) [75], which can result in non-homogeneous coverage of the sample space leading to large variances in interpolated points from the surrogate model [57]. On the other extreme, grid sampling enables a uniform projection of samples onto variable axes, but results in subsets of highly correlated samples which share the same value for one or more variables [74]. To address this trade-off, quasi-random sampling techniques have been developed, such as Latin hypercube sampling (LHS) [75], Hammersley [76], Halton, and Sobol sampling [77]. A comprehensive review of static and adaptive sampling strategies in the context of design of experiments is provided in [78].
Figure 1 compares the sample space coverage of 64 samples using random sampling, LHS, Sobol sampling, and grid sampling. Random sampling exhibits the least homogeneous space coverage. LHS separates the range of each input variable into \(n\) strata, where \(n\) is the number of samples to be generated. \(n\) samples are then positioned such that only one sample is placed in each stratum in each dimension, thereby ensuring more homogeneous space coverage compared to random sampling. Sobol samples are generated based on underlying quasi-random Sobol sequences resulting in a homogeneity of space coverage between that of LHS and grid sampling. Finally, grid sampling provides the most homogeneous space coverage possible (when \(n\) has an integer root for the number of dimensions).
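For illustration, the sketch below generates 64 two-dimensional samples with each of the four strategies compared in Figure 1, using SciPy's quasi-Monte Carlo module together with a simple meshgrid; this is an independent sketch and not the implementation used in this work.

```python
import numpy as np
from scipy.stats import qmc

n, m = 64, 2
lb, ub = [0.0, 0.0], [1.0, 1.0]
rng = np.random.default_rng(0)

random_samples = qmc.scale(rng.random((n, m)), lb, ub)                    # Monte Carlo
lhs_samples = qmc.scale(qmc.LatinHypercube(d=m, seed=0).random(n), lb, ub)
sobol_samples = qmc.scale(qmc.Sobol(d=m, scramble=True, seed=0).random_base2(6), lb, ub)  # 2**6 = 64

levels = np.linspace(lb[0], ub[0], int(round(n ** (1 / m))))              # 8 values per axis
grid_samples = np.array(np.meshgrid(levels, levels)).T.reshape(-1, m)     # full 8 x 8 grid
```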
In addition to quasi-random sampling strategies, sampling heuristics have been developed to further optimise the distribution of samples within the search space. For example, LHS samples can be generated such that the correlation between samples is minimised, such that the minimum pairwise distance is maximised, or such that the ratio between the maximum and minimum pairwise distances is minimised [79]. Geometry-based sampling strategies constrain the number of samples that can be generated whilst maintaining their balanced space-filling properties. For example, grid sampling necessitates that the \(m\)-th root of \(n\) is an integer, where \(n\) is the total number of grid samples to be generated and \(m\) is the number of dimensions of the sample space, such that \(\sqrt[m]{n}\) is the number of unique sample values in each dimension. The methods section introduces a heuristic-based method to generate any number of grid samples whilst attempting to maintain homogeneous space coverage. Similarly, Sobol sampling generates an optimal set of balanced samples for \(n\) equal to a power of 2.
#### 1.3.3 Adaptive sampling
Whilst an increased number of training samples improves the resulting surrogate model accuracy, high sampling requirements contribute significant computational cost for simulation evaluations and subsequent model fitting [43]. Adaptive sampling techniques aim to minimise expensive sampling requirements by choosing promising locations for sequential samples via acquisition functions which address the trade-off between sample space exploration and exploitation. This online sampling approach builds a surrogate model on sparsely sampled data, which is then updated iteratively by choosing subsequent computer experiments to either explore more of the sample space or exploit potential local optima based on an acquisition function. This approach enables better design space exploration and exploitation but at the cost of incorporating expensive black box sampling into the online workflow. In the context of global optimisation, exploration is required to escape local optima, whilst exploitation improves accuracy at available optima. Example acquisition functions include the expected improvement (EI) [80] and modified EI [81] for GP models, postulation as
Figure 1: Static sampling strategies. (A) Random sampling, (B) Latin hypercube sampling, (C) Sobol sampling, (D) Grid sampling.
DFO problems [68], and weighted functions with departure functions [82]. More recently, emerging adaptive sampling frameworks include the use of Delaunay triangulation [83] to partition the sample space [84, 85].
Bayesian optimisation exists at the boundary of surrogate model-based DFO and data-based direct-search DFO. Specifically, Bayesian optimisation determines successive sampling points that are expected to improve the current best solution, thereby following a direct-search approach of directly analysing the samples for optimality. However, Bayesian optimisation also includes a modelling component wherein features of a probabilistic surrogate model (e.g., GP) are used to formulate acquisition functions to inform successive samples, thereby necessitating the iterative generation of surrogate models on new data. In Bayesian optimisation, maximisation of information gained from adaptive samples is achieved using acquisition functions such as the EI function, written below for an optimisation problem to maximise the function represented by the surrogate (Equation 3) [80].
\[\text{EI}(\mathbf{x})=(\hat{y}-y^{(\text{max})}-\xi)\Phi\left(\frac{\hat{y}-y^ {(\text{max})}-\xi}{s}\right)+s\phi\left(\frac{\hat{y}-y^{(\text{max})}-\xi}{s }\right) \tag{3}\]
where \(\text{EI}(\mathbf{x})\) is the EI at sample location \(\mathbf{x}\), \(\Phi(\cdot)\) represents the cumulative distribution function, \(\phi(\cdot)\) represents the probability distribution function, \(\hat{y}\) is the GP predictive mean, \(y^{(\text{max})}\) is the current maximum observation, \(s\) is the standard deviation, and \(\xi\) is a parameter controlling the trade-off between exploration relative to exploitation. The function increases with increasing \(\hat{y}\) which corresponds to exploitation around the current maximum. Simultaneously, the function increases with increasing \(s\) which corresponds to high uncertainty at sparsely sampled locations thereby enabling exploration. Since both exploration and exploitation are positively correlated with the EI function, both can be addressed simultaneously via maximisation of EI. However, the EI function exhibits multiple local optima that can cause numerical problems [43].
It is also possible to use a modified EI function (Equation 4) maximising only the second term of the EI function [81]. This favours samples with high uncertainty, corresponding to increased \(s\), or with a predicted value close to the current best solution, as represented by the inverse dependence on \((\hat{y}-y^{(\text{max})}-\xi)^{2}\).
\[\text{EI}_{\text{mod}}(\mathbf{x})=s\phi\left(\frac{\hat{y}-y^{(\text{max})}- \xi}{s}\right)=\frac{s}{\sqrt{2\pi}}\exp\left(-\frac{(\hat{y}-y^{(\text{max} )}-\xi)^{2}}{2s^{2}}\right) \tag{4}\]
Another acquisition function is the probability of improvement (PI) shown in Equation 5. The PI enables exploration and exploitation to be controlled with the \(\xi\) parameter wherein greater values of \(\xi\) dampen the influence of samples near to the current best solution. In Equation 5, if \(\hat{y}-y^{(\text{max})}-\xi\leq 0\) then there is no improvement in the current best solution. However, if \(\hat{y}-y^{(\text{max})}-\xi>0\) then this is the amount by which the function value would improve at that point in the input space. Since GPs are functions sampled from a normal distribution with mean and variance functions, the probability of improvement can therefore be evaluated by sampling from the cumulative distribution function.
\[\text{PI}(\mathbf{x})=\Phi\left(\frac{\hat{y}-y^{(\text{max})}-\xi}{s}\right) \tag{5}\]
Another acquisition function is the upper confidence bound (UCB) shown in Equation 6, where the trade-off between exploitation of function performance \(\hat{y}\) and exploration in regions of high uncertainty \(s\) is clear. Again, the tuneable parameter \(\xi\) controls how much weight is assigned to exploration compared to exploitation.
\[\text{UCB}(\mathbf{x})=\hat{y}+\xi s \tag{6}\]
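As a minimal sketch of Equations 3, 5, and 6, the function below evaluates EI, PI, and UCB from GP predictive means and standard deviations, which are assumed to have been obtained from a fitted model elsewhere.

```python
import numpy as np
from scipy.stats import norm

def acquisitions(y_hat, s, y_max, xi=0.01):
    """EI (Eq. 3), PI (Eq. 5), and UCB (Eq. 6) for a maximisation problem."""
    s = np.maximum(s, 1e-12)                  # guard against zero predictive deviation
    z = (y_hat - y_max - xi) / s
    ei = (y_hat - y_max - xi) * norm.cdf(z) + s * norm.pdf(z)
    pi = norm.cdf(z)
    ucb = y_hat + xi * s
    return ei, pi, ucb

# Candidate points with predicted mean and uncertainty from a GP surrogate
y_hat = np.array([1.2, 0.8, 1.5])
s = np.array([0.10, 0.40, 0.05])
ei, pi, ucb = acquisitions(y_hat, s, y_max=1.4)
```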
Another option to address the trade-off between exploration and exploitation in adaptive sampling is the bumpiness function used in conjunction with RBFs. Alternatively, the adaptive sampling problem can be posed as a DFO problem to maximise the relative squared error, as shown in Equation 7 [68].
\[\text{max}\left(\frac{y-\hat{y}}{y}\right)^{2} \tag{7}\]
where \(\hat{y}\) is the surrogate prediction, \(y\) is the true function value, and \(x\) are the input variables, typically constrained between lower and upper bounds. Such a function works to boost exploration of the search
space, by favouring samples with high uncertainty resulting from lack of prior information. On the other hand, such an approach does not enable exploitation of potential solutions with moderate accuracy until the relative error becomes significant.
Other approaches assign weightings to exploration and exploitation terms within a function representing the trade-off between the two. Exploration is commonly represented by an error term, since sparsely sampled regions will have high uncertainties due to a lack of prior information. For exploitation, a departure function quantifies the impact of a new sample added near an already sampled location (Equation 8) [82].
\[\Delta_{j}(x)=\hat{y}-\hat{y}_{j} \tag{8}\]
where \(\Delta_{j}(x)\) is the impact of sample \(j\) on the surrogate function value, \(\hat{y}\) is the prediction from the surrogate built using the entire sample set, and \(\hat{y}_{j}\) is the prediction from the surrogate built using all samples except sample \(j\).
Delaunay triangulation [83] is a widely employed approach which can be harnessed to partition the input domain to facilitate global exploration and exploitation around local optima. Examples of Delaunay triangulation applications within the optimisation literature include: explorative adaptive sampling at centroids of Delaunay triangulated regions by quantifying the trade-off between sampling within the largest region and sampling within the region with largest estimated modelling error [84]; Delaunay triangulated region selection balancing exploration and exploitation followed by optimisation, as opposed to heuristic sampling at centroids, of interior sample location [85]; objective function uncertainty quantification by interpolating triangulated regions using quadratic functions [86]; sequential sampling at the centroid of each region generated by Delaunay triangulation to improve global surrogate model accuracy [87]; a direct-search approach adopting a heuristic entropy criterion on Delaunay triangulated vertices to determine promising subsequent sample locations as well as heuristics for sample elimination and thereby triangulation mutations to facilitate global search [88].
Delaunay triangulation provides a method to partition the interior of a convex hull on a point set \(X\) into simplex regions \(R\). Specifically, a Delaunay triangulation ensures that no point in \(X\) exists inside the circumcircle (the unique circle that passes through three vertices of a triangle) of any simplex \(R\). Additionally, a Delaunay triangulation is calculated so as to maximise the minimum resulting angle, thereby avoiding sliver triangles. Furthermore, by extending the concepts to circumscribed spheres, it is possible to perform Delaunay tessellation in higher dimensional spaces. Figure 2 (left) shows the interior of a convex hull of a point set partitioned by Delaunay triangulation whilst Figure 2 (right) shows the same point set, in addition to the vertices of the search space, partitioned by Delaunay triangulation. To enable global exploration which extrapolates beyond the convex hull of the previous samples, the vertices of the search space can be included in the Delaunay triangulation. In this way, Delaunay triangulation produces a convex hull as a hypercube over the entire input space. The algorithm can then explore the entire input space, including extrapolation between the convex hull of previous samples and the bounds of the input space. Readers are referred to [89] for further details on Delaunay triangulation.
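A minimal SciPy sketch of this idea is given below: the existing samples are triangulated together with the search-space vertices, and simplex centroids are returned as candidate sample locations. This is one simple heuristic among those reviewed above, not the method of any specific cited work.

```python
import numpy as np
from itertools import product
from scipy.spatial import Delaunay

def centroid_candidates(X, lb, ub):
    """Triangulate the samples plus the hypercube vertices and return the
    centroid of each simplex as a candidate sample location."""
    vertices = np.array(list(product(*zip(lb, ub))))   # corners of the search space
    points = np.vstack([X, vertices])
    tri = Delaunay(points)
    return points[tri.simplices].mean(axis=1)          # one centroid per simplex

X = np.random.default_rng(1).random((10, 2))           # existing samples in [0, 1]^2
candidates = centroid_candidates(X, lb=[0.0, 0.0], ub=[1.0, 1.0])
```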
### Surrogate models
A popular field of PSE research concerns the use of surrogate modelling to represent complex systems within larger decision-making problems. The primary application for surrogate modelling is for predictive purposes wherein some underlying complex model \(f\) is substituted by making predictions with a surrogate model \(\hat{f}\). Surrogate modelling enables significant savings in computational time and resources and has therefore been utilised for a diverse range of applications [57, 58, 70, 68, 90, 91]. To further lower computational cost, reduced-order models can be formulated which also serve to increase the interpretability of the model [68]. In addition to generating predictions, the functional representations of surrogate models, \(\hat{f}\), can be embedded within larger decision-making problems enabling optimisation and feasibility analysis [43].
A surrogate (also known as reduced-order or meta-) model \(\hat{f}\) provides a mathematical formulation that approximates some underlying black box function \(f\) which often exhibits one of the following properties: highly nonlinear, highly dimensional, unavailable functional form, unavailable derivative information, or no analytical solution. The underlying model is also often computationally expensive to evaluate and packaged with numerical solvers within black box simulation software. Surrogate models enable users to
harness these rigorous, state-of-the-art black box models with greater computational efficiency. Surrogate models provide readily available functional formulations that are more computationally tractable due to their reduced complexity and/or dimensionality, thereby warranting their widespread use for predictive applications, feasibility analysis, and mathematical optimisation.
In the context of model-based DFO, surrogate models provide tractable mathematical formulations for black box cost functions and constraints. Traditionally, surrogate modelling is used for regression between continuous function outputs and continuous model predictions. For example, Caballero and Grossmann [57] use GPs whilst Henao and Maravelias [58] use NNs to represent flows within process flowsheets. Surrogate modelling can also be used to address classification problems between discrete target data. For example, SVMs have been used to model the boundary between converged and non-converged simulations [92, 64].
A (global) surrogate model approximates some complex underlying model (over the entire design space). Simple regression techniques such as linear or polynomial regression are the most computationally tractable surrogate models but can fail in accurately modelling nonlinearities in the underlying model. Conversely, artificial NNs or interpolation techniques such as GPs offer better accuracy albeit with less tractable formulations.
#### 1.4.1 Neural networks
NNs are popular surrogate and ML models widely used in many different fields, from image recognition and speech processing to novel drug discovery [93]. With the rise of ML and surrogate modelling, NNs have also found applications in modelling chemical processes [94] due to their high accuracy and ability to represent multiple input-output relationships [38]. An NN consists of layers of nodes which map input variables \(\mathbf{x}\) onto predictive output variables \(\mathbf{\hat{y}}\). A node receives inputs from nodes in the previous layer, then transmits the output of a function evaluation to nodes in the subsequent layer. Each NN contains an input layer and an output layer, which transmit input variables \(\mathbf{x}\) and predictive output variables \(\mathbf{\hat{y}}\), respectively. The input layer, where the input data is passed into the NN, has a number of nodes equal to the number of input variables. Similarly, the output layer, from which output variable predictions are made, has a number of nodes equal to the number of output variables that the NN is designed to predict. In this way, only the input layer and output layer are evaluated by the user, whilst the function evaluations enabling nonlinear modelling are carried out in hidden layers - so called because the user does not directly observe them as they exist in between the input and output layer.
Hidden layers behave similarly to the output layer but make predictions of intermediate features instead of output variables. These intermediate features do not have to be specified by the user or even evaluated at any time; during training, the NN automatically determines how the intermediate features map input variables onto the output variables. Figure 3 shows the calculation of the intermediate output value \(a_{j}^{(\lambda)}\) for
Figure 2: Delaunay triangulation within the interior of a convex hull on a point set (left), including vertices of the search space (right).
the \(j^{\text{th}}\) node within an intermediate NN layer \(\lambda\). Within a general NN structure, layer \(\lambda\) consists of \(N_{\lambda}\) nodes thereby defining a vector of intermediate layer outputs \(\mathbf{a}^{(\lambda)}\). The outputs from the previous layer, \(a_{i}^{(\lambda-1)}\), are first multiplied by the relevant elements from the matrix of weights \(w_{j,i}^{(\lambda-1)}\) and summed along with a bias parameter \(b_{j}^{(\lambda-1)}\) to yield the subsequent node inputs, \(z_{j}^{(\lambda)}\). The result is a linear function with weights and bias parameters which can be tuned during training (Equation 9a). Node inputs are then mapped onto node outputs via an activation function \(\xi^{(\lambda)}\) (Equation 9b). Equations 9c and 9d enable inputs to be passed to the network, and outputs to be interpreted from the network, respectively.
\[\mathbf{z}^{(\lambda)}=W^{(\lambda-1)}\mathbf{a}^{(\lambda-1)}+\mathbf{b}^{(\lambda-1)}\qquad\text{for }\lambda=2,...,\Lambda \tag{9a}\]
\[\mathbf{a}^{(\lambda)}=\xi^{(\lambda)}\left(\mathbf{z}^{(\lambda)}\right)\qquad\text{for }\lambda=1,...,\Lambda \tag{9b}\]
\[\mathbf{z}^{(1)}=W^{(0)}\mathbf{x}+\mathbf{b}^{(0)} \tag{9c}\]
\[\hat{\mathbf{y}}=W^{(\Lambda)}\mathbf{a}^{(\Lambda)}+\mathbf{b}^{(\Lambda)} \tag{9d}\]
The structure of nodes and layers within NNs defines the type of network being used. One type of NN structure is the feedforward NN (or multilayer perceptron), in which layers are structured sequentially and data flow through the network from the input layer, through the hidden layers, to the output layer. A special type of feedforward NN is the fully dense feedforward network, in which each node is connected to every node in the adjacent layers. This generality of fully dense NNs makes them widely applicable to a broad range of problems independent of the input data [38]. Another type of structure is the recurrent NN, which includes feedback connections where layer outputs are fed back into the network.
Figure 4 shows the mapping of inputs \(\mathbf{x}\) onto predictions \(\hat{\mathbf{y}}\) for a general fully-dense feedforward NN with \(\Lambda+1\) layers. The convention for naming NNs is to count the number of hidden layers plus the output layer whilst not counting the input layer. Here, the input layer is thereby assigned \(\lambda=0\) whilst the final hidden layer is denoted by \(\Lambda\) such that the output layer is layer \(\Lambda+1\). Predictions from the output layer are interpretable by the user and so the notation for the outputs from the output layer is refined to \(\hat{\mathbf{y}}\). Since the outputs from the input layer are the input variables themselves, Equation 9c is used to determine the
outputs from the first hidden layer. Intermediate outputs from the hidden layers are evaluated as shown in Figure 3 and Equations 9a and 9b. Finally, predictive outputs from the output layer are evaluated using Equation 9d.
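For concreteness, a minimal NumPy sketch of the forward pass defined by Equations 9a-9d is given below; the layer sizes, weights, and biases are random placeholders standing in for trained parameters.

```python
import numpy as np

def forward(x, weights, biases, activation=np.tanh):
    """Forward pass of Eqs. 9a-9d: linear hidden layers with an activation,
    followed by a final linear output layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):   # hidden layers (Eqs. 9a-9c)
        a = activation(W @ a + b)
    W, b = weights[-1], biases[-1]                # output layer (Eq. 9d)
    return W @ a + b

rng = np.random.default_rng(0)
sizes = [3, 8, 8, 2]                              # N_0 inputs, two hidden layers, 2 outputs
weights = [rng.standard_normal((n_out, n_in)) for n_in, n_out in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(n_out) for n_out in sizes[1:]]
y_hat = forward(rng.standard_normal(3), weights, biases)
```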
Activation functions enable users to introduce nonlinear modelling capabilities to the NN as well as incorporate expert knowledge about the system characteristics. Activation functions include the rectified linear unit (ReLU), a piecewise linear function and the default choice for feedforward NN activation functions, wherein negative inputs return 0 (enabling deactivation of node connections) whilst positive inputs are directly returned as outputs. Smooth activation functions include the Sigmoid function and the Tanh function. The former squashes outputs between 0 and 1 and has the property that very large magnitude inputs have very small gradients, which ensures that small deviations at high values have less importance than deviations in inputs around 0. The latter squashes outputs between -1 and 1, thereby exhibiting the property that inputs of 0 return outputs of 0 as well as have the largest gradient. Other activation functions include the Softplus function, which is a smooth equivalent to the ReLU function and has the property that its derivative is equivalent to the Sigmoid function. The Hardsigmoid and Hardtanh functions are piecewise linear approximations of the Sigmoid and Tanh functions, respectively. There also exist variations such as the Leaky ReLU and ReLU6 models, which implement the ReLU model but with a very shallow linear function for negative inputs or with positive outputs capped at 6, respectively. Finally, a linear (or identity) activation function can be used if the relationship being modelled is suspected to be linear. Some of these activation functions are detailed in Table 1 [95].
NNs for classification can be implemented by interpreting predictions from the output layer through a logistic function to map the NN outputs (logits) onto interpretable probabilities, \(p(t=1)\). In this way, NNs for classification still make predictions in \(\mathbb{R}\) and are trained using specialised loss functions for discrete target data such as binary cross-entropy (BCE) loss on the probability predictions. It is also possible to use BCE combined with a sigmoid function on the logits. Training classification NNs on the continuous logit predictions and then squashing predictions through a logistic function enables increased numerical stability compared to training on probabilities or class predictions directly [54].
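A minimal PyTorch sketch of this approach is given below: a small placeholder network is trained on logits with `BCEWithLogitsLoss` (the numerically stable sigmoid-plus-BCE combination mentioned above), and interpretable probabilities are recovered with a sigmoid only at prediction time. The data and network structure are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Placeholder data: 2 input features and binary convergence targets in {0, 1}
X = torch.rand(100, 2)
t = (X.sum(dim=1) > 1.0).float().unsqueeze(1)

net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))  # outputs logits
loss_fn = nn.BCEWithLogitsLoss()          # sigmoid + BCE applied to logits
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(200):                      # full-batch training loop
    opt.zero_grad()
    loss = loss_fn(net(X), t)
    loss.backward()
    opt.step()

p = torch.sigmoid(net(X))                 # interpretable probabilities p(t = 1)
```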
#### 1.4.2 Gaussian processes
Traditionally developed in the field of geostatistics in the 1950s [96], GPs gained popularity after their use in design of experiments [30]. Today, GPs have gained much traction as surrogate models due to the relatively simple computations required for supervised ML and Bayesian inference [97]. GPs are a statistical modelling method wherein an underlying function is approximated by a probability distribution over functions enabling interpolating predictions and an estimate of uncertainty in these predictions to be evaluated simultaneously [98]. The GP predictive mean function and covariance function can be written as shown by Equation 10 and Equation 11, respectively.
Figure 4: Feedforward neural network structure to map inputs \(\mathbf{x}\) onto predictions \(\hat{\mathbf{y}}\). The network consists of an input layer with \(N_{0}\) nodes, an output layer with \(N_{\Lambda+1}\) nodes, and \(\Lambda\) hidden layers with \(N_{\lambda}\) nodes for \(\lambda=1,...,\Lambda\). Each hidden layer has an activation function \(\xi^{(\lambda)}\) for \(\lambda=1,...,\Lambda\). Layer inputs \(\mathbf{z}^{(\lambda)}\) are calculated by multiplying the previous layer outputs \(\mathbf{a}^{(\lambda-1)}\) by a weights matrix \(W^{(\lambda-1)}\) and adding a bias vector \(\mathbf{b}^{(\lambda-1)}\). Finally, layer outputs are determined by passing the inputs through the activation function \(\xi^{(\lambda)}\).
\[\hat{f}=\mathbf{\mu}+\mathbf{k}^{\intercal}K^{-1}\left(\mathbf{y}-\mathbf{\mu}\right)=\mathbf{k}^{\intercal}K^{-1}\mathbf{y}=\sum_{i=1}^{n}\alpha_{i}k(X_{i},\mathbf{x})\qquad\text{for }\mathbf{\mu}=\mathbf{0}\text{ and }\mathbf{\alpha}=K^{-1}\mathbf{y} \tag{10}\]
\[\mathbb{V}\left[\hat{f}\right]=\sigma_{f}^{2}-\mathbf{k}^{T}K^{-1}\mathbf{k} \tag{11}\]
The mean function (Equation 10) can be used for predictive purposes where \(n\) is the number of training samples, \(\mathbf{\alpha}\) is an \(n\)-vector of fitted parameters dependent only on the training data, and \(k(X_{i},\mathbf{x})\) is the GP kernel function evaluated between training sample \(X_{i}\) and new input vector \(\mathbf{x}\). An attractive property of GPs is that their foundation in probabilistic distributions enables the covariance in predictions to be evaluated using Equation 11 where \(\sigma_{f}^{2}\) is a kernel parameter representing the function variance, \(\mathbf{k}\) is equivalent to \(k(X,\mathbf{x})\), and \(K\) is the GP kernel function evaluated between the training data equivalent to \(k(X,X)\).
The GP kernel parameters can be optimised to maximise some measure of the function fit. Specifically, maximum likelihood estimation (MLE) can be used to maximise the log marginal likelihood function (or minimise the negative log marginal likelihood) shown in Equation 12, where the first term is the data-fit based on observed data, the middle term is a complexity penalty, and the third term is a normalisation constant.
\[\min\frac{1}{2}\mathbf{y}^{T}K^{-1}\mathbf{y}+\frac{1}{2}\log|K|+\frac{n}{2} \log 2\pi \tag{12}\]
\begin{table}
\begin{tabular}{l l l l}
\hline
Name & Function, \(\xi(x)\) & Derivative, \(\xi^{\prime}(x)\) & Range \\
\hline
Linear & \(x\) & \(1\) & \([-\infty,\infty]\) \\
Tanh & \(\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}\) & \(1-\xi(x)^{2}\) & \([-1,1]\) \\
Sigmoid & \(\frac{1}{1+e^{-x}}\) & \(\xi(x)\left(1-\xi(x)\right)\) & \([0,1]\) \\
Softplus & \(\frac{1}{\beta}\log\left(1+\exp\left(\beta x\right)\right)\) & \(\frac{1}{1+e^{-\beta x}}\) & \([0,\infty]\) \\
ReLU & \(\max(0,x)\) & \(\begin{cases}0&\text{if }x\leq 0\\ 1&\text{otherwise}\end{cases}\) & \([0,\infty]\) \\
Hardsigmoid & \(\begin{cases}0&\text{if }x\leq-3\\ 1&\text{if }x\geq+3\\ \frac{x}{6}+\frac{1}{2}&\text{otherwise}\end{cases}\) & \(\begin{cases}0&\text{if }|x|\geq 3\\ \frac{1}{6}&\text{otherwise}\end{cases}\) & \([0,1]\) \\
\hline
\end{tabular}
\end{table}
Table 1: Neural network activation functions.
The GP kernel function enables users to incorporate expert knowledge into the model, for example favouring smooth, periodic (used to model function shapes which repeat themselves over some periodic dimension, such as energy demand over time), or noisy functions. A primary challenge in modelling with GPs is the selection of the kernel function, akin to the arbitrary specification of NN structure and activation function. This work includes linear and polynomial kernels as shown by Equation 13, where \(\sigma_{0}^{2}\) is a kernel parameter optimised during fitting. The polynomial order, \(\omega\), is an integer that is specified and fixed prior to model fitting, where the linear kernel is specified with \(\omega=1\). GP regression with linear and polynomial kernels is equivalent to Bayesian linear and polynomial regression, respectively. Although Bayesian linear and polynomial regression can be implemented more efficiently using specialised software packages (e.g., Scikit-learn [53]), linear and polynomial GP kernels are included here for demonstrative purposes. The linear kernel is unique among GP kernels in that it is non-stationary.
\[k(\mathbf{x},\mathbf{x}^{\prime})=\sigma_{f}^{2}\left(\sigma_{0}^{2}+\mathbf{ x}\cdot\mathbf{x}^{\prime}\right)^{\omega} \tag{13}\]
Practically, the most common covariance function is the squared exponential covariance function (Equation 14) since it is universal, exhibiting a flexible approximation to many underlying functions and partially resolving the challenge of kernel selection. It is also possible to integrate the squared exponential kernel function against most functions of interest, and it is infinitely differentiable, making it effective within optimisation frameworks [97].
\[k(\mathbf{x},\mathbf{x}^{\prime})=\sigma_{f}^{2}\exp\left(-\frac{1}{2l^{2}} \sum_{j=1}^{m}\left|x_{j}-x_{j}^{\prime}\right|^{2}\right) \tag{14}\]
In Equation 14, \(\sigma_{f}^{2}\) and \(l\) are parameters representing the positive function variance and characteristic length scale, respectively. Greater values of \(\sigma_{f}^{2}\) result in greater variances over the input domain. The length scale \(l\) represents the sensitivity of the function, where higher values ensure that even two points far away are correlated resulting in smoother, lower frequency mean predictions, whilst lower values result in higher frequency function changes. Note that the length scale is raised to the exponent 2 in Equation 14 as it has been taken outside of the squared distance term, although it could equally be written as \(\left|\frac{x_{j}-x_{j}^{\prime}}{l}\right|^{2}\). The covariance function can be evaluated between two points \(\mathbf{x},\mathbf{x}^{\prime}\) in \(m\)-dimensional space, such that \(x_{j},x_{j}^{\prime}\) represent coordinates for \(j=1,...,m\).
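As a minimal sketch of Equations 10, 11, and 14, the NumPy code below evaluates the GP predictive mean and variance with a squared exponential kernel; the kernel parameters are fixed placeholder values rather than MLE estimates (Equation 12), and a small jitter term is added to the kernel matrix for numerical stability.

```python
import numpy as np

def sq_exp_kernel(A, B, sigma_f=1.0, l=1.0):
    """Squared exponential kernel (Eq. 14) between two sample sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return sigma_f ** 2 * np.exp(-0.5 * d2 / l ** 2)

def gp_predict(X, y, x_new, sigma_f=1.0, l=1.0, jitter=1e-8):
    """Predictive mean (Eq. 10) and variance (Eq. 11) with a zero-mean prior."""
    K = sq_exp_kernel(X, X, sigma_f, l) + jitter * np.eye(len(X))
    k = sq_exp_kernel(X, x_new.reshape(1, -1), sigma_f, l).ravel()
    alpha = np.linalg.solve(K, y)                     # alpha = K^{-1} y
    mean = k @ alpha
    var = sigma_f ** 2 - k @ np.linalg.solve(K, k)
    return mean, var

X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)           # training inputs
y = np.sin(2 * np.pi * X).ravel()                     # training outputs
mean, var = gp_predict(X, y, np.array([0.35]))
```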
GPs can also be utilised to address (binary) classification problems, based on the fundamental idea: place a GP prior over a latent function \(u(\mathbf{x})\) and then squash this through the logistic sigmoid function to obtain a prior with outputs that are constrained between 0 and 1 and therefore interpretable as probabilities. These predictive probabilities can be evaluated using Equation 15,
\[p(t=1)=\sigma\left(\mathbf{k}^{T}(\mathbf{t}-\sigma(u))\left(1+\frac{\pi}{8} \left(\sigma_{f}^{2}-\mathbf{k}^{T}\left(W^{-1}+K\right)^{-1}\mathbf{k} \right)\right)^{-1/2}\right) \tag{15}\]
where \(\sigma(\cdot)\) is the sigmoid function, \(\mathbf{t}\) are the training targets, and \(W\) is a matrix parameter dependent on the latent function values. The derivation of this formulation requires the Laplace approximation to approximate non-Gaussian distributions as Gaussian [97] as well as the inverse probit approximation to map the outputs onto interpretable probabilities [99]. To fit the GP classification (GPC) models, the negative log marginal likelihood is minimised. Evaluating this function once again requires the Laplace approximation, such that the parameter optimisation objective function can be written as Equation 16, where \(\hat{u}\) is the latent posterior mode.
\[\min-\mathbf{t}^{T}\hat{u}+\frac{1}{2}\hat{u}^{T}K^{-1}\hat{u}+\frac{1}{2}\log \left|K\right|+\frac{1}{2}\log\left|W+K^{-1}\right|+\sum_{i=1}^{n}\log\left(1+ \exp(\hat{u}_{i})\right) \tag{16}\]
## 2 Object-orientated derivative-free optimisation development
This section presents a methodology for object-orientated derivative-free optimisation (OODX). Specifically, the presented method is available as an open-source Python package [100], harnessing the capabilities of
Pyomo and Python's rich set of machine learning, data analysis, visualisation, and optimisation libraries. Figure 5 depicts the modelling capabilities of OODX wherein the modelling objects are used to construct a DFO algorithm harnessing data from black box simulations and/or data lakes (built with data from online sensors, computer simulations/code evaluations, or even physical experiments).
The OODX method currently contains six modelling objects which enable entire DFO workflows to be developed and customised to specific applications. The DataHandler object enables data generation with sampling strategies, data processing, and data storage during DFO algorithm iterations. Three surrogate modelling objects enable (1) NN - NNs for regression and classification problems, (2) GPR - GP regression models, and (3) GPC - GP classification models. The OODXBlock object enables abstracted Pyomo formulations to be instantiated to represent trained NN/GPR/GPC objects within larger mathematical programming formulations. Finally, the AdaptiveSampler object enables adaptive sampling Pyomo formulations to maximise exploration of the sample space and exploitation of incumbent optimal samples.
In addition to using the OODX modelling objects within tailored DFO solutions to BBO problems, the objects can also be utilised individually or in any combination required. For example, the machine learning applications for predictive purposes might only require the DataHandler and NN objects. As another example, a classification model might be trained on existing data using the GPC object before plugging a representative feasibility constraint into an existing Pyomo formulation using the OODXBlock object.
The OODX modelling objects can be used to construct both surrogate model-based DFO algorithms and data-driven direct-search DFO methods. The former might be constructed as shown in Figure 5 wherein design of experiments is used to generate data from black box simulations for training surrogate models (supervised machine learning), before a mathematical programming formulation of the surrogate model is plugged into a larger decision-making problem and solved iteratively with adaptive sampling (active machine learning). On the other hand, the latter might only utilise data-driven explorative and exploitative adaptive sampling formulations to guide the search towards a global optimum.
This section proceeds by introducing the foundational concepts for each of the six objects within OODX. Specifically, the static sampling strategies and data processing methods implemented in the DataHandler object are presented. Similarly, the methods underpinning the NN, GPR, and GPC surrogate modelling objects
Figure 5: Object-orientated derivative-free optimisation (OODX) modelling objects: DataHandler, NN, GPR, GPC, OODXBlock, AdaptiveSampler. The modelling objects are used to construct a DFO algorithm harnessing data from black box simulations and/or data lakes.
are highlighted. Likewise, the abstracted surrogate model mathematical programming formulations used in the OODXBlock object are detailed. To conclude this section, the adaptive sampling formulations available within the AdaptiveSampler object are presented.
### Data sampling, processing, and storage
The DataHandler object enables users to perform key data processing methods for machine learning frameworks such as: static sampling strategies for input sample generation; splitting data into training and testing sets; data scaling using standardisation; replicate scaling methods to standardise new data into the same modelling space; and inverse scaling methods to return outputs from the modelling space back to the original space. The DataHandler object also enables data storage as a collection of attributes within a single object instance. Storing data in this way ensures that the data are not accidentally manipulated or overwritten whilst also enabling access for later analysis such as iterative model validation on testing data.
Static sampling strategies enable a specified number of input samples to be generated within a search space specified with lower and upper bounds on each input dimension. Sampling strategies available in the DataHandler object are random sampling, Latin hypercube sampling (LHS), Sobol sampling, and grid sampling. Random sampling was implemented by scaling random numbers generated between 0 and 1 into the specified search space. LHS and Sobol sampling were implemented using the implementations available within Scikit-optimize version 0.9.0, where the LHS method utilises the _maximin_ criterion to maximise the minimum Euclidean distance between the generated samples. The grid sampling method implemented in this work begins by calculating the value of \(\sqrt[m]{n}\), where \(n\) is the number of samples to be generated in \(m\)-dimensional space. The result of this evaluation provides the number of discrete values to be sampled at in each dimension, and is rounded up to an integer so that the resulting full grid contains \(n^{\text{grid}}\) points with \(n^{\text{grid}}\geq n\). Evenly spaced values, between the lower and upper bound of each dimension inclusive, are then generated and combined to produce the \(n^{\text{grid}}\) grid samples. The resulting \(n^{\text{grid}}\) samples are then shuffled and the first \(n\) samples are used as the grid samples. This method thereby enables \(n\) grid samples to be generated even where the \(m\)-th root of \(n\) is not an integer, whilst the characteristics of grid sampling are maintained. Additionally, this method ensures that the \(\left(n^{\text{grid}}-n\right)\) surplus samples are removed randomly so that the homogeneous space coverage is maintained as much as possible.
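A minimal sketch of this grid sampling heuristic is given below; it follows the description above (round up the \(m\)-th root of \(n\), build the full grid, shuffle, keep the first \(n\) points) but is not necessarily identical to the package implementation.

```python
import numpy as np
from itertools import product
from math import ceil

def grid_sample(n, lb, ub, seed=0):
    """Heuristic grid sampling: ceil(n**(1/m)) levels per dimension,
    shuffle the full grid, and keep the first n points."""
    m = len(lb)
    levels = ceil(n ** (1.0 / m))                              # values per dimension
    axes = [np.linspace(lb[j], ub[j], levels) for j in range(m)]
    grid = np.array(list(product(*axes)))                      # levels**m >= n points
    np.random.default_rng(seed).shuffle(grid)                  # randomise dropped points
    return grid[:n]

samples = grid_sample(n=50, lb=[0.0, 0.0], ub=[1.0, 10.0])
```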
The generated input samples were evaluated by black box models to produce output data for subsequent surrogate model fitting. Since the evaluation of output samples is highly dependent on different black box software, necessitating customised programming scripts, the DataHandler object enables users to directly store output data offline from their black box sampling scripts. Similarly, the DataHandler is flexible to enable storage of existing input-output data, from computer experiments or physical experiments. Here, input-output data (including any binary classification target data), existing in the original space, are stored along with the lower and upper bounds on the search space.
Following the generation of input-output data, the DataHandler object enables the data to be randomly split into training and testing sets for fitting and validating surrogate models, respectively. This implementation uses Scikit-learn version 1.0.2 and enables the specification of the fraction of the data to be reserved in the testing set (with a default value of 0.3). Training and testing inputs, outputs, and binary classification targets are stored as attributes along with the raw data.
The DataHandler object enables input-output data to be scaled via standardisation as shown by Equation 17, where \(\mathbf{x}\) is the data to be scaled, \(\bar{x}\) is the mean, \(\sigma_{x}\) is the standard deviation, and \(\tilde{\mathbf{x}}\) is the standardised data. Standardisation scales the data so that each dimension has a mean of 0 and a standard deviation of 1.
\[\tilde{\mathbf{x}}=\frac{\mathbf{x}-\bar{x}}{\sigma_{x}} \tag{17}\]
The standardisation method automatically operates on the data stored within the OODX DataHandler object, providing standardised sets of raw inputs/outputs, training and testing inputs/outputs, as well as standardised bounds on the search space. The standardised data are stored as attributes of the DataHandler object instance along with the mean and standard deviation values used for scaling. Storing the mean and standard deviation enables the replication of the standardisation methods on new input-output data as well as inverse scaling methods for mapping data back to the original space (Figure 6). Figure 6 shows an overview of the data processing tools for the surrogate-based optimisation methodology.
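A minimal sketch of this scaling workflow (illustrative helper functions, not the OODX API itself): standardise the raw data, reuse the stored moments on new data, and invert the scaling to return to the original space (Equation 17):

```python
import numpy as np

def standardise(x):
    """Scale each column to zero mean and unit standard deviation."""
    mu, sigma = x.mean(axis=0), x.std(axis=0)
    return (x - mu) / sigma, mu, sigma

def apply_scaling(x_new, mu, sigma):
    """Replicate the stored scaling on new data."""
    return (x_new - mu) / sigma

def inverse_scaling(x_scaled, mu, sigma):
    """Map scaled data back to the original space."""
    return x_scaled * sigma + mu

X = np.array([[50.0, 2000.0], [275.0, 4000.0], [500.0, 6000.0]])
X_std, mu, sigma = standardise(X)
assert np.allclose(inverse_scaling(X_std, mu, sigma), X)
```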
For example, in Figure 6 a typical workflow begins with input data generated from a specified search space using static sampling strategies. The corresponding outputs are then obtained by evaluating these inputs using the black box. Prior to training a surrogate model, scaling methods were used to map the original input-output data into the modelling space. Standardisation into the modelling space ensures that the input-output data are scaled over the samples, for each dimension, to avoid overfitting to dimensions with larger magnitudes. Without this scaling, the larger magnitude dimensions would have greater influence during model fitting. Scaled input-output data can then be used to train a surrogate model to enable the mapping from scaled inputs to predictions in the modelling space. New inputs in the original space are then scaled into the modelling space, mapped onto surrogate model predictions (bottom right), and finally mapped back to the original space (dashed dataflow). In this way, the black box input-output mapping can be achieved with the appropriate scaling, surrogate model predictions, and inverse scaling.

Figure 6: Data processing overview for the surrogate-based optimisation methodology. A search space is first posed (from within the original space) and static sampling methods are used to generate inputs to black box evaluations to determine corresponding outputs. Inputs and outputs are scaled to the modelling space for surrogate model training. The search space is also scaled using the statistical moments of the inputs (\(\mu_{x}\), \(\sigma_{x}\)) to provide a scaled search space for use as optimisation decision variable bounds. Trained surrogate models are embedded in optimisation and/or adaptive sampling formulations which provide solutions in the scaled modelling space. The outputs are therefore scaled back to the original space (using \(\mu_{y}\), \(\sigma_{y}\)) for interpretation or adaptive sampling from the black box (dotted line). Dashed lines show surrogate modelling predictions beginning from new inputs which are scaled prior to evaluation by a trained surrogate model to provide outputs which are then scaled back to the original space for interpretation.

The standardisation methods are based on a set of assumptions which systematise the methods and ensure that they are flexible to different configurations of stored data. For example, the methods automatically adapt based on whether training and testing sets have been generated, or whether binary classification targets representing sample feasibility have been stored. The assumptions used to systematise the standardisation, along with further details on all the DataHandler instance attributes and methods, are shown in Table 14.

### Neural networks

Section 1.4.1 introduced some background on NNs. The NN object within OODX enables building, training, and evaluation of NN models using PyTorch version 2.0 [54]. Specifically, the customisation capabilities enable the number of layers (\(\Lambda\)), the number of nodes in each layer (\(N_{\lambda}\)), and the activation functions (\(\xi^{(\lambda)}\)) to be defined for fully-dense feedforward NNs (Figure 4). By default, the NN object is trained using mini-batch gradient descent (Table 13) with a mean squared error loss function for regression NNs. The optimised weights and biases are saved within trained NN objects to enable explicit mathematical formulations of NNs to be realised from generalised formulations. The general formulations, owing to the generality of fully-dense feedforward NN structures, were formulated for linear layers (Equation 9a) passed through activation functions (Equation 9b) and constructed layer by layer from inputs (Equation 9c) to outputs (Equation 9d). Predictions from explicit mathematical formulations were validated against predictions made directly from the PyTorch-based NN object. Further details on all the NN instance attributes and methods are shown in Table 15.
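A minimal PyTorch sketch of the kind of network the NN object wraps (a fully dense feedforward regression NN trained by mini-batch gradient descent on an MSE loss); the architecture, data, and hyperparameters below are illustrative assumptions rather than the OODX defaults:

```python
import torch
from torch import nn

torch.manual_seed(0)
X = torch.rand(64, 2)                      # 64 samples, 2 inputs (placeholder data)
y = (X ** 2).sum(dim=1, keepdim=True)      # toy regression target

model = nn.Sequential(
    nn.Linear(2, 10), nn.Tanh(),
    nn.Linear(10, 10), nn.Tanh(),
    nn.Linear(10, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for epoch in range(200):
    for i in range(0, 64, 16):             # mini-batches of 16
        xb, yb = X[i:i + 16], y[i:i + 16]
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()

# the optimised weights and biases are the quantities needed for the explicit
# mathematical programming formulation (Equations 9a-9d)
weights_and_biases = [p.detach().numpy() for p in model.parameters()]
```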
### Gaussian processes
Section 1.4.2 introduced some background on GPs for both regression and classification. GP regression (GPR) in OODX uses the GPR object which was based on the GPR implementation in Scikit-learn version 1.0.2 [53] with additional functionality for deriving explicit mathematical formulations for use in DFO. The GPR object implements linear/polynomial and squared exponential kernel functions which are optimised via maximum likelihood estimation (MLE). Predictions and standard deviations in these predictions can be evaluated directly from the underlying Scikit-learn implementation or via an explicit mathematical formulation constructed using the saved optimised parameters. Predictions from the mathematical formulation were validated against predictions from the Scikit-learn object. Further details on all the GPR instance attributes and methods are shown in Table 16.
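For illustration, the Scikit-learn workflow underlying such a GPR object might look as follows (an assumed sketch, not the OODX API); the fitted kernel parameters and dual weights are the quantities needed later for the explicit optimisation formulations:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(24, 2))          # placeholder training inputs
y = np.sin(X[:, 0]) + X[:, 1] ** 2            # placeholder training outputs

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)   # MLE kernel fit

sigma_f2 = gpr.kernel_.k1.constant_value      # process variance
length_scale = gpr.kernel_.k2.length_scale    # squared exponential length scale
alpha = gpr.alpha_                            # linear predictor weights
mean, std = gpr.predict(np.array([[0.1, 0.2]]), return_std=True)
```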
### Classification models
The OODX NN object can also be adapted for classification surrogate modelling by utilising specialised training functions such as the binary cross entropy loss. Further details regarding NNs for classification were presented at the end of Section 1.4.1 whilst further details on using the OODX NN object for classification can be found in Table 15.
The OODX GPC object was coded entirely in NumPy version 1.21.2 so that the GPC model could be constructed in a way that allows an explicit mathematical formulation of the predictive function, compatible with Pyomo, to be derived. Despite the existence of a GPC implementation in Scikit-learn, that implementation uses a statistical approximation which complicates formulating an equivalent analytical expression. Accordingly, the GPC object used the squared exponential kernel function (Equation 14) and the probit approximation [99] to enable the explicit mathematical formulation shown in Equation 15. The MLE formulation to fit the GPC model was also implemented in NumPy using Equation 16. For machine learning applications, it was sufficient to make predictions using the linear algebraic notation that the underlying model formulation lends itself to; however, for mathematical optimisation applications, it was necessary to write an equivalent formulation incorporating summations over parameter/variable indices. Predictions from the explicit Pyomo-compatible formulation were validated against the linear algebra predictive function. Further details on GPC were presented at the end of Section 1.4.2 whilst all of the GPC instance attributes and methods are shown in Table 17.
### Mathematical programming formulations
Abstracted Pyomo formulations enable hierarchical optimisation modelling by adding modelling components as attributes to object instances [52]. Such object instances can then be added to larger decision-making problems to enforce the surrogate model formulations in a plug-and-play framework. The OODXBlock object harnesses the object-orientated capabilities of Pyomo to contain the relevant sets, parameters, variables, and constraints for general surrogate model formulations which can be used to build surrogate-based DFO algorithms for solving BBO problems. The general OODXBlock formulations can be instantiated as formulations representative of specific OODX NN, GPR and GPC models. Equality constraints then connect the block inputs and outputs to corresponding variables in the larger optimisation formulations. The hierarchical optimisation modelling structure means that the larger optimisation formulations can access the surrogate modelling parameters as well as enforcing the constraints.
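The hierarchical pattern can be illustrated with plain Pyomo components (a minimal sketch; the stand-in linear "prediction" constraint and names are assumptions rather than OODXBlock itself): a surrogate lives on its own Block and equality constraints link its inputs and outputs to the parent problem's variables.

```python
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(-1, 1))
m.y = pyo.Var()

# surrogate formulation held on its own block
m.surrogate = pyo.Block()
m.surrogate.inp = pyo.Var()
m.surrogate.out = pyo.Var()
# stand-in for a trained surrogate's prediction constraint
m.surrogate.pred = pyo.Constraint(expr=m.surrogate.out == 2 * m.surrogate.inp + 1)

# equality constraints connect the block inputs/outputs to the parent problem
m.link_in = pyo.Constraint(expr=m.surrogate.inp == m.x)
m.link_out = pyo.Constraint(expr=m.y == m.surrogate.out)
m.obj = pyo.Objective(expr=m.y, sense=pyo.minimize)
```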
#### 2.5.1 Neural network formulations
Fully dense feedforward NNs with \(\lambda=0,...,\Lambda+1\) layers (where \(\lambda=0\) is the input layer, \(\lambda=1,...,\Lambda\) are the hidden layers, and \(\lambda=\Lambda+1\) is the output layer) and \(N_{\lambda}\) nodes in each layer were formulated using Equations 9a, 9c, and 9d. These equations define the NN structure and mapping of inputs to predictive
outputs. The equivalent optimisation formulation for these equations (Table 2) enforces the NN structure and the optimised weights and biases, thereby providing an exact formulation of the linear mappings within the NN. This means that a prediction from the NN object at a given input returns the exact same value as a prediction enforced within optimisation problems using this OODXBlock formulation. At this stage, prior to the definition of the activation functions and the required mapping from node inputs \(\mathbf{z}\) onto activated node outputs \(\mathbf{a}\), the mathematical program in Table 2 is a linear programming (LP) formulation.
In the LP formulation in Table 2, \(a_{i}^{(\lambda)}\) represents the activated outputs from layer \(\lambda\) for nodes \(i=1,...,N_{\lambda}\), defined over the hidden layers \(\lambda=1,...,\Lambda\). To complete the NN-based formulations, activation function constraints were defined to map node inputs \(z_{i}^{(\lambda)}\) onto the activated output variables. A linear activation function multiplies the layer outputs by 1, retaining the LP formulation for the NN model. Whilst the hyperbolic tangent function is commonly used as an activation function, the trigonometric expression, \(\tanh\), is typically unsupported by optimisation solvers. The four explicit algebraic formulations of \(\tanh\) containing exponential terms (Table 1) have been implemented and their performance analysed in the literature [49]. Here, the \(\tanh\) formulation in Table 3 was implemented due to the single exponential term dependent on variable inputs. Similar formulations containing exponential terms were formulated for sigmoid and softplus activation functions (Table 3).
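As a quick numerical check (illustrative only), the solver-friendly algebraic form of \(\tanh\) used here agrees with the trigonometric definition:

```python
import numpy as np

# tanh(z) = (exp(2z) - 1) / (exp(2z) + 1) = 1 - 2 / (exp(2z) + 1)
z = np.linspace(-3, 3, 13)
assert np.allclose(1 - 2 / (np.exp(2 * z) + 1), np.tanh(z))
```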
The activation function formulations in Table 3 can be combined with the LP in Table 2, depending on the activation function used within a trained NN, and embedded within larger optimisation problems. Formulations for \(\tanh\), sigmoid, and softplus functions thereby provide nonlinear programming (NLP) formulations due to the nonlinearities introduced by these activation function constraints. These 4 activation functions (linear, \(\tanh\), sigmoid, and softplus) were formulated in Pyomo using the LP/NLP formulations and then used to form complete Pyomo-based LP/NLP formulations by combining with the general NN structural formulation in Table 2. In practice, the appropriate constraints are automatically activated depending on the NN object passed as an argument to the OODXBlock object.
\begin{table}
\begin{tabular}{l l} \hline \hline Sets & \\ \hline \(\lambda\in\{0,...,\Lambda+1\}\) & set of layers \\ \(N_{\lambda}\) & set of nodes in each layer \\ \hline Parameters & \\ \hline \(w_{i,i^{\prime}}^{(\lambda)}\) & weights indexed by \(\lambda=0,...,\Lambda\), \(i\in N_{\lambda+1}\), \(i^{\prime}\in N_{\lambda}\) \\ \(b_{i}^{(\lambda)}\) & biases indexed by \(\lambda=0,...,\Lambda\), \(i\in N_{\lambda+1}\) \\ \hline Variables & \\ \hline \(x_{i}\) & network inputs indexed by \(i\in N_{0}\) \\ \(\hat{y}_{i}\) & network outputs indexed by \(i\in N_{\lambda+1}\) \\ \(z_{i}^{(\lambda)}\) & node inputs indexed by \(\lambda=1,...,\Lambda\), \(i\in N_{\lambda}\) \\ \(a_{i}^{(\lambda)}\) & activated outputs indexed by \(\lambda=1,...,\Lambda\), \(i\in N_{\lambda}\) \\ \hline Equations & \\ \hline
**for \(i\) in \(N_{1}\) do** & \\ \(z_{i}^{(1)}=\sum\limits_{i^{\prime}\in N_{0}}w_{i,i^{\prime}}^{(0)}x_{i^{\prime }}+b_{i}^{(0)}\) & input layer \\
**for \(\lambda=2\) to \(\Lambda\) do** & \\
**for \(i\) in \(N_{\lambda}\) do** & \\ \(z_{i}^{(\lambda)}=\sum\limits_{i^{\prime}\in N_{\lambda-1}}w_{i,i^{\prime}}^{( \lambda-1)}a_{i^{\prime}}^{(\lambda-1)}+b_{i}^{(\lambda-1)}\) & hidden layer inputs \\
**for \(i\) in \(N_{\Lambda+1}\) do** & \\ \(\hat{y}_{i}=\sum\limits_{i^{\prime}\in N_{\Lambda}}w_{i,i^{\prime}}^{(\Lambda)} a_{i^{\prime}}^{(\Lambda)}+b_{i}^{(\Lambda)}\) & output layer \\ \hline \hline \end{tabular}
\end{table}
Table 2: Optimisation formulation for general fully dense feedforward neural network structure as linear program. Note that this formulation cannot be solved for any meaningful solution without defining the activation function constraints.
Mixed-integer linear programs (MILPs) can also be formulated for NNs utilising piecewise linear activation functions such as ReLU and hardsigmoid. Equations 18a - 18d enforce the MILP formulation for ReLU activation functions on layer \(\lambda\) for nodes \(i=1,...,N_{\lambda}\). This formulation utilises a binary activation variable \(p_{i}^{(\lambda)}\) that determines the location of \(z_{i}^{(\lambda)}\) in the ReLU function input domain and whether it is positive or
\begin{table}
\begin{tabular}{l l} \hline \multicolumn{1}{l}{Sets} \\ \hline \(\lambda\in\{1,...,\Lambda\}\) & set of hidden layers \\ \(N_{\lambda}\) & set of nodes in each layer \\ \hline Parameters & \\ \hline \(M\) & arbitrary large positive scalar for big-M formulations \\ \hline Variables & \\ \hline \(z_{i}^{(\lambda)}\) & node inputs indexed by \(\lambda=1,...,\Lambda\), \(i\in N_{\lambda}\) \\ \(a_{i}^{(\lambda)}\) & activated outputs indexed by \(\lambda=1,...,\Lambda\), \(i\in N_{\lambda}\) \\ \(p_{i}^{(\lambda)}\) & binary variable indexed by \(\lambda=1,...,\Lambda\), \(i\in N_{\lambda}\) \\ \(q_{i}^{(\lambda)}\) & binary variable indexed by \(\lambda=1,...,\Lambda\), \(i\in N_{\lambda}\) \\ \hline Equations & \\ \hline
**for**\(\lambda=1\)**to**\(\Lambda\)**do** & \\
**for**\(i\)**in**\(N_{\lambda}\)**do** & \\ \(a_{i}^{(\lambda)}=z_{i}^{(\lambda)}\) & linear LP \\ \hline
**for**\(\lambda=1\)**to**\(\Lambda\)**do** & \\
**for**\(i\)**in**\(N_{\lambda}\)**do** & \\ \(a_{i}^{(\lambda)}=1-\frac{2}{\exp\left(2z_{i}^{(\lambda)}\right)+1}\) & tanh NLP \\ \hline
**for**\(\lambda=1\)**to**\(\Lambda\)**do** & \\
**for**\(i\)**in**\(N_{\lambda}\)**do** & \\ \(a_{i}^{(\lambda)}=\frac{1}{1+\exp\left(-z_{i}^{(\lambda)}\right)}\) & sigmoid NLP \\ \hline
**for**\(\lambda=1\)**to**\(\Lambda\)**do** & \\
**for**\(i\)**in**\(N_{\lambda}\)**do** & \\ \(a_{i}^{(\lambda)}=\ln\left(1+\exp\left(z_{i}^{(\lambda)}\right)\right)\) & softplus NLP \\ \hline
**for**\(\lambda=1\)**to**\(\Lambda\)**do** & \\
**for**\(i\)**in**\(N_{\lambda}\)**do** & \\ \(a_{i}^{(\lambda)}\geq 0\) & \\ \(a_{i}^{(\lambda)}\geq z_{i}^{(\lambda)}\) & \\ \(a_{i}^{(\lambda)}\leq Mp_{i}^{(\lambda)}\) & \\ \(a_{i}^{(\lambda)}\leq z_{i}^{(\lambda)}+M\left(1-p_{i}^{(\lambda)}\right)\) & ReLU MILP \\ \hline
**for**\(\lambda=1\)**to**\(\Lambda\)**do** & \\
**for**\(i\)**in**\(N_{\lambda}\)**do** & \\ \(a_{i}^{(\lambda)}\geq\frac{1}{6}z_{i}^{(\lambda)}+\frac{1}{2}-M\left(1-p_{i}^{( \lambda)}+q_{i}^{(\lambda)}\right)\) & \\ \(a_{i}^{(\lambda)}\leq\frac{1}{6}z_{i}^{(\lambda)}+\frac{1}{2}+M\left(1-p_{i}^{( \lambda)}+q_{i}^{(\lambda)}\right)\) & \\ \(a_{i}^{(\lambda)}\geq q_{i}^{(\lambda)}\) & \\ \(z_{i}^{(\lambda)}\leq Mp_{i}^{(\lambda)}-3\) & \\ \(z_{i}^{(\lambda)}\geq M\left(p_{i}^{(\lambda)}-1\right)-3\) & \\ \(z_{i}^{(\lambda)}\leq Mq_{i}^{(\lambda)}+3\) & \\ \(z_{i}^{(\lambda)}\geq M\left(q_{i}^{(\lambda)}-1\right)+3\) & hardsigmoid MILP \\ \hline \end{tabular}
\end{table}
Table 3: Optimisation formulations for neural network activation functions: linear (LP), tanh (NLP), sigmoid (NLP), softplus (NLP), ReLU (MILP), and hardsigmoid (MILP).
negative. Constraints are enforced and relaxed when required using the big-M formulation to enforce the following properties:
* \(a_{i}^{(\lambda)}\) is always greater than or equal to 0 (Equation 18a),
* if \(z_{i}^{(\lambda)}\) is negative, then \(p_{i}^{(\lambda)}\) is constrained to 0 by Equation 18d, and \(a_{i}^{(\lambda)}\) is constrained equal to 0 by Equation 18c,
* if \(z_{i}^{(\lambda)}\) is positive, then \(p_{i}^{(\lambda)}\) is constrained to 1 in order to satisfy Equations 18b and 18c, and \(a_{i}^{(\lambda)}\) is constrained equal to \(z_{i}^{(\lambda)}\) by Equations 18b and 18d.
\[a_{i}^{(\lambda)} \geq 0 \text{for }i=1,...,N_{\lambda}\quad\text{for }\lambda=1,...,\Lambda \tag{18a}\] \[a_{i}^{(\lambda)} \geq z_{i}^{(\lambda)} \text{for }i=1,...,N_{\lambda}\quad\text{for }\lambda=1,...,\Lambda\] (18b) \[a_{i}^{(\lambda)} \leq Mp_{i}^{(\lambda)} \text{for }i=1,...,N_{\lambda}\quad\text{for }\lambda=1,...,\Lambda\] (18c) \[a_{i}^{(\lambda)} \leq z_{i}^{(\lambda)}+M\left(1-p_{i}^{(\lambda)}\right) \text{for }i=1,...,N_{\lambda}\quad\text{for }\lambda=1,...,\Lambda \tag{18d}\]
Equations 19a - 19h enforce the MILP formulation for hardsigmoid activation functions on layer \(\lambda\) for nodes \(i=1,...,N_{\lambda}\). This formulation utilises 2 binary activation variables \(p_{i}^{(\lambda)}\) and \(q_{i}^{(\lambda)}\) to determine the location of \(z_{i}^{(\lambda)}\) in the function input domain and whether it is less than -3, greater than +3, or between -3 and +3. The big-M formulation is used again here to enforce the following properties:
* if \(z_{i}^{(\lambda)}\leq-3\), \(p_{i}^{(\lambda)}\) is constrained to 0 by Equation 19b, \(q_{i}^{(\lambda)}\) is constrained to 0 by Equation 19d, and \(a_{i}^{(\lambda)}\) is constrained equal to 0 by Equations 19e and 19h,
* if \(z_{i}^{(\lambda)}\geq+3\), \(p_{i}^{(\lambda)}\) is constrained to 1 by Equation 19a, \(q_{i}^{(\lambda)}\) is constrained to 1 by Equation 19c, and \(a_{i}^{(\lambda)}\) is constrained equal to 1 by Equations 19e and 19h,
* if \(-3\leq z_{i}^{(\lambda)}\leq+3\), \(p_{i}^{(\lambda)}\) is constrained to 1 by Equation 19a, \(q_{i}^{(\lambda)}\) is constrained to 0 by Equation 19d, and \(a_{i}^{(\lambda)}\) is constrained equal to \(\frac{1}{6}z_{i}^{(\lambda)}+\frac{1}{2}\) by Equations 19f and 19g.
\[z_{i}^{(\lambda)} \leq Mp_{i}^{(\lambda)}-3 \text{for }i=1,...,N_{\lambda}\quad\text{for }\lambda=1,...,\Lambda \tag{19a}\] \[z_{i}^{(\lambda)} \geq M\left(p_{i}^{(\lambda)}-1\right)-3 \text{for }i=1,...,N_{\lambda}\quad\text{for }\lambda=1,...,\Lambda\] (19b) \[z_{i}^{(\lambda)} \leq Mq_{i}^{(\lambda)}+3 \text{for }i=1,...,N_{\lambda}\quad\text{for }\lambda=1,...,\Lambda\] (19c) \[z_{i}^{(\lambda)} \geq M\left(q_{i}^{(\lambda)}-1\right)+3 \text{for }i=1,...,N_{\lambda}\quad\text{for }\lambda=1,...,\Lambda\] (19d) \[a_{i}^{(\lambda)} \leq p_{i}^{(\lambda)} \text{for }i=1,...,N_{\lambda}\quad\text{for }\lambda=1,...,\Lambda\] (19e) \[a_{i}^{(\lambda)} \geq\frac{1}{6}z_{i}^{(\lambda)}+\frac{1}{2}-M\left(1-p_{i}^{(\lambda)}+q_{i}^{(\lambda)}\right) \text{for }i=1,...,N_{\lambda}\quad\text{for }\lambda=1,...,\Lambda\] (19f) \[a_{i}^{(\lambda)} \leq\frac{1}{6}z_{i}^{(\lambda)}+\frac{1}{2}+M\left(1-p_{i}^{(\lambda)}+q_{i}^{(\lambda)}\right) \text{for }i=1,...,N_{\lambda}\quad\text{for }\lambda=1,...,\Lambda\] (19g) \[a_{i}^{(\lambda)} \geq q_{i}^{(\lambda)} \text{for }i=1,...,N_{\lambda}\quad\text{for }\lambda=1,...,\Lambda \tag{19h}\]
The Pyomo formulations for the 2 MILP NN activation functions (ReLU and hardsigmoid) are available within the OODXBlock object as shown in Table 3. The ReLU and hardsigmoid formulations in Table 3 are used to form complete Pyomo-based MILP formulations by combining with the general NN structural formulation in Table 2.
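A minimal Pyomo sketch of the big-M ReLU constraints (Equations 18a-18d) for a single hidden layer; the variable names, layer size, and value of \(M\) are illustrative, and the node inputs \(z\) would in practice be defined by the layer constraints of Table 2:

```python
import pyomo.environ as pyo

N, M = 10, 1e3                              # nodes in the layer, big-M constant
m = pyo.ConcreteModel()
m.nodes = pyo.RangeSet(1, N)
m.z = pyo.Var(m.nodes)                      # node inputs z_i
m.a = pyo.Var(m.nodes)                      # activated outputs a_i
m.p = pyo.Var(m.nodes, domain=pyo.Binary)   # binary activation indicator p_i

m.relu_nonneg = pyo.Constraint(m.nodes, rule=lambda m, i: m.a[i] >= 0)          # (18a)
m.relu_lower = pyo.Constraint(m.nodes, rule=lambda m, i: m.a[i] >= m.z[i])      # (18b)
m.relu_bigm_a = pyo.Constraint(m.nodes, rule=lambda m, i: m.a[i] <= M * m.p[i]) # (18c)
m.relu_bigm_b = pyo.Constraint(
    m.nodes, rule=lambda m, i: m.a[i] <= m.z[i] + M * (1 - m.p[i]))             # (18d)
```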
#### 2.5.2 Gaussian process formulations
GPR surrogate models were formulated exactly using the mathematical programming formulations shown in Table 4. Specifically, the GPR predictive mean function (Equation 10) was combined with explicit formulations for the squared exponential (Equation 14) and polynomial (Equation 13) kernel functions to yield Pyomo compatible NLP formulations (translating matrix and vector notation into formulations readable by optimisation solvers). NLP formulations for trained GPR models with optimised parameters are automatically generated from the general formulations using the OODXBlock object. These NLP formulations thereby provide exact representations of the GPR predictive mean function which was validated by comparing predictions from the explicit formulations and GPR object instances.
The uncertainty in GPR models can also be formulated such that optimisation solvers can implement them in larger decision making problems. Equation 11 shows the variance in GPR predictions at a new input vector \(\mathbf{x}\). Since \(\sigma_{f}^{2}\) and \(K^{-1}\) are parameters dependent only on training data and fixed prior to optimisation, the only dependence on new inputs is via \(\mathbf{k}\). Equation 20 therefore shows the optimisation formulation for the varying \(\mathbf{k}^{T}K^{-1}\mathbf{k}\) part of the variance formulation, where care must be taken to reformulate optimisation problems accordingly (i.e. to maximise the uncertainty, Equation 20 must be minimised). Note that, where the uncertainty requires formulating, Equation 20 must be subtracted from \(\sigma_{f}^{2}\) to provide the variance, square-rooted to provide the standard deviation, then multiplied by the correct constant to obtain the necessary confidence interval (e.g., \(1.96\sigma\) approximates the \(95\,\%\) confidence interval).
\[v^{(proxy)}=\sum_{i=1}^{n}\left(k_{i}\sum_{i^{\prime}=1}^{n}\left(K_{i,i^{ \prime}}^{-1}k_{i^{\prime}}\right)\right) \tag{20}\]
The Pyomo-based formulation of the GPR uncertainty proxy (Equation 20) is available in the OODXBlock object for squared exponential, linear, and polynomial kernel function as shown in Table 5.
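An illustrative NumPy check of the uncertainty proxy (Equation 20), using a synthetic training set: the predictive variance at a new input is \(\sigma_{f}^{2}-\mathbf{k}^{T}K^{-1}\mathbf{k}\), so minimising the proxy maximises the uncertainty and \(1.96\sigma\) gives the approximate 95 % interval.

```python
import numpy as np

def sq_exp(A, B, sigma_f2, l):
    """Squared exponential kernel matrix between row-wise inputs A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return sigma_f2 * np.exp(-d2 / (2 * l ** 2))

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(24, 2))      # placeholder training inputs
sigma_f2, l = 1.5, 0.4                         # placeholder fitted parameters
K_inv = np.linalg.inv(sq_exp(X_train, X_train, sigma_f2, l) + 1e-8 * np.eye(24))

x_new = np.array([[0.3, 0.7]])
k = sq_exp(X_train, x_new, sigma_f2, l).ravel()
v_proxy = k @ K_inv @ k                        # Equation 20
variance = sigma_f2 - v_proxy
ci_95 = 1.96 * np.sqrt(variance)               # approximate 95 % confidence interval
```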
\begin{table}
\begin{tabular}{l l} \hline \multicolumn{2}{l}{Sets} \\ \hline \(n\) & set over the number of training samples \\ \(m\) & set over the number of input dimensions \\ \hline \multicolumn{2}{l}{Parameters} \\ \hline \(x_{i,j}^{(\text{train})}\) & training inputs with dimensions \(i\in n\), \(j\in m\) \\ \(l\) & scalar length scale \\ \(\sigma_{f}^{2}\) & scalar process variance \\ \(\sigma_{0}^{2}\) & scalar inhomogenity variance \\ \(\omega\) & scalar integer polynomial order \\ \(\alpha_{i}\) & linear predictor weights with dimensions \(i\in n\) \\ \hline \multicolumn{2}{l}{Variables} \\ \hline \(x_{j}\) & model inputs with dimension \(j\in m\) \\ \(\hat{y}\) & model predictive output \\ \hline \multicolumn{2}{l}{Equations} \\ \hline \(\hat{y}=\sum\limits_{i\in n}\alpha_{i}\sigma_{f}^{2}\exp\left(-\sum\limits_{j \in m}\frac{1}{2l^{2}}\left(x_{j}-x_{i,j}^{(\text{train})}\right)^{2}\right)\) squared exponential \\ \(\hat{y}=\sum\limits_{i\in n}\alpha_{i}\sigma_{f}^{2}\left(\sigma_{0}^{2}+\sum \limits_{j\in m}x_{i,j}^{(\text{train})}x_{j}\right)^{\omega}\) polynomial \\ \hline \end{tabular}
\end{table}
Table 4: Optimisation formulation for Gaussian process regression (GPR) predictions with squared exponential and polynomial (including linear with \(\omega=1\)) kernel functions.
In addition to GPR-based optimisation, a GPC model with given training inputs and saved optimal model parameters can be enforced with the NLP constraint shown in Equation 21 (where \(k_{i}\) has been defined to clean up the notation). Once again, this constraint is equivalent to Equation 15, where the kernel function has been formulated explicitly and matrix/vector operations have been translated into summations over indices to enable interpretation by optimisation solvers.
\[\begin{split} p(t=1)=\left(1+\exp\left[-\frac{\sum_{i=1}^{n}\left(\delta_{i}k_{i}\right)}{\left(1+\frac{\pi}{8}\left[\sigma_{f}^{2}-\sum_{i=1}^{n}\left(k_{i}\sum_{i^{\prime}=1}^{n}\left[k_{i^{\prime}}P_{i,i^{\prime}}^{-1}\right]\right)\right]\right)^{1/2}}\right]\right)^{-1}\\ \text{where }k_{i}=\sigma_{f}^{2}\exp\left(-\sum_{j=1}^{m}\frac{1}{2l^{2}}\left(x_{j}-x_{i,j}^{\text{(train)}}\right)^{2}\right)\end{split} \tag{21}\]
The Pyomo formulation of Equation 21 exists in the OODXBlock object with the corresponding NLP formulation shown in Table 6.
\begin{table}
\begin{tabular}{l l} \hline \hline Sets & \\ \hline \(n\) & set over the number of training samples \\ \(m\) & set over the number of input dimensions \\ \hline \multicolumn{2}{l}{Parameters} \\ \hline \(x_{i,j}^{\text{(train)}}\) & training inputs with dimensions \(i\in n\), \(j\in m\) \\ \(l\) & scalar length scale \\ \(\sigma_{f}^{2}\) & scalar process variance \\ \(\sigma_{0}^{2}\) & scalar inhomogenity variance \\ \(\omega\) & scalar integer polynomial order \\ \(K_{i,j}^{-1}\) & inverse covariance matrix with dimensions \(i,j\in n\) \\ \hline \multicolumn{2}{l}{Variables} \\ \hline \(x_{j}\) & model inputs with dimension \(j\in m\) \\ \(v^{\text{(proxy)}}\) & model uncertainty proxy output \\ \hline \multicolumn{2}{l}{Equations} \\ \hline \(v^{\text{(proxy)}}=\) & \(\sum_{i\in n}\left(\sigma_{f}^{2}\exp\left(-\sum_{j\in m}\frac{1}{2l^{2}} \left(x_{j}-x_{i,j}^{\text{(train)}}\right)^{2}\right)\right.\) \\ & \(\times\left.\sum_{k\in n}K_{i,k}^{-1}\sigma_{f}^{2}\exp\left(-\sum_{j\in m} \frac{1}{2l^{2}}\left(x_{j}-x_{i,j}^{\text{(train)}}\right)^{2}\right)\right)\) \\ \multicolumn{2}{l}{squared exponential} \\ \end{tabular}
\end{table}
Table 5: Optimisation formulations for Gaussian process regression (GPR) uncertainty proxy with squared exponential and polynomial (including linear with \(\omega=1\)) kernel functions.
### Adaptive sampling
The OODX AdaptiveSampler object provides GP-based and heuristic-based adaptive sampling Pyomo formulation objects. Specifically, exploration and exploitation are enabled using GP-based adaptive sampling formulations to maximise GPR uncertainty and the modified expected improvement (EI) function, respectively. Adaptive sampling is also enabled by using Delaunay triangulation to partition the search space into regions, then using heuristics to choose which region to place samples so as to achieve exploration and exploitation equivalent to the GP-based methods. The resulting GP-based formulations were NLPs whilst the heuristic-based formulations were implemented as MILP formulations.
The 4 adaptive sampling formulations were implemented in Pyomo version 6.2, enabling the surrogate model block formulations from OODXBlock to be harnessed. For the GP-based adaptive sampling formulations, GPR predictive and uncertainty proxy formulations were utilised within the adaptive sampling formulations. Additionally, the adaptive sampling formulations are flexible, unconstrained, foundational models into which classification surrogate formulations can be plugged to enable adaptive sampling with online feasibility constraints. Upon addition of such classification-based constraints, the foundational adaptive sampling NLP and MILP formulations can become mixed integer nonlinear programming (MINLP) formulations depending on the classification model formulations used. Finally, by implementing the adaptive sampling formulations using Pyomo, it is possible to use any appropriate compatible solver to find local or global solutions as required.
#### 2.6.1 Gaussian process-based adaptive sampling
The OODX AdaptiveSampler provides a Pyomo formulation for adaptive sampling to maximise the uncertainty of a GPR surrogate model. Specifically, the uncertainty proxy, \(\upsilon^{\text{(proxy)}}\), of a trained instance of a GPR model is minimised (since the predictive variance equals \(\sigma_{f}^{2}-\upsilon^{\text{(proxy)}}\), minimising the proxy maximises the uncertainty) by using the OODXBlock formulation from Table 5 in combination with the objective function in Equation 22 to provide a complete NLP problem which can be solved for adaptive samples.
\begin{table}
\begin{tabular}{l l} \hline \hline Sets & \\ \hline \(n\) & set over the number of training samples \\ \(m\) & set over the number of input dimensions \\ \hline \multicolumn{3}{l}{Parameters} \\ \hline \(x_{i,j}^{\text{(train)}}\) & training inputs with dimensions \(i\in n\), \(j\in m\) \\ \(l\) & scalar length scale \\ \(\sigma_{f}^{2}\) & scalar process variance \\ \(\delta_{i}\) & aggregate parameter with dimension \(i\in n\) \\ \(P_{i,j}^{-1}\) & aggregate parameter with dimensions \(i,j\in n\) \\ \hline \multicolumn{3}{l}{Variables} \\ \hline \(x_{j}\) & model inputs with dimension \(j\in m\) \\ \(p(t=1)\) & model probability prediction \\ \hline \multicolumn{3}{l}{Equations} \\ \hline \(p(t=1)=\) & \(\left(1+\exp\left[-\sum_{i=1}^{n}\left(\delta_{i}\sigma_{f}^{2}\exp\left(- \sum_{j=1}^{m}\frac{1}{2l^{2}}\left(x_{j}-x_{i,j}^{\text{(train)}}\right)^{2} \right)\right)\right.\right.\) \\ \(\times\left(1+\frac{\pi}{8}\left[\sigma_{f}^{2}-\sum_{i=1}^{n}\left(\sigma_{f}^ {2}\exp\left(-\sum_{j=1}^{m}\frac{1}{2l^{2}}\left(x_{j}-x_{i,j}^{\text{(train)} }\right)^{2}\right)\right.\right.\right.\) \\ \(\times\) & \(\left.\left.\left.\sum_{k=1}^{n}\left[\sigma_{f}^{2}\exp\left(- \sum_{j=1}^{m}\frac{1}{2l^{2}}\left(x_{j}-x_{i,j}^{\text{(train)}}\right)^{2} \right)P_{i,k}^{-1}\right]\right)\right]\right)^{-1/2}\right]\right)^{-1}\) \\ \multicolumn{3}{l}{GPC probability predictions} \\ \hline \hline \end{tabular}
\end{table}
Table 6: Optimisation formulation for Gaussian process classification (GPC) with squared exponential kernel function probability predictions.
\[\min_{\mathbf{x}} \upsilon^{\text{(proxy)}}(\mathbf{x}) \tag{22}\]
The OODX AdaptiveSampler provides a Pyomo formulation to maximise the modified EI for explorative and exploitative adaptive sampling of GPR models. Specifically, the objective function in Equation 23 was used in combination with OODXBlock formulations for GPR predictions \(\hat{y}(\mathbf{x})\) (Table 4) and uncertainty proxy \(\upsilon^{\text{(proxy)}}(\mathbf{x})\) (Table 5) to complete an NLP problem which can be solved for adaptive samples.
\[\max_{\mathbf{x}} \left(\frac{\sigma_{f}^{2}-\upsilon^{\text{(proxy)}}(\mathbf{x})}{2 \pi}\right)^{1/2}\exp\left(-\frac{\left(\hat{y}(\mathbf{x})-y^{\text{(max)}}- \boldsymbol{\xi}\right)^{2}}{2\left(\sigma_{f}^{2}-\upsilon^{\text{(proxy)}}( \mathbf{x})\right)}\right) \tag{23}\]
#### 2.6.2 Heuristic-based adaptive sampling
The OODX AdaptiveSampler object contains exploration and exploitation Pyomo-based MILP formulations incorporating heuristic-based adaptive sampling using the Delaunay triangulation implementation within SciPy version 1.8.0 [101]. The MILP formulation for the unconstrained heuristic-based adaptive sampling is presented in Table 7. These adaptive sampling formulations are referred to as heuristic-based due to the rule-based decision-making for subsequent samples, as opposed to the statistics-informed adaptive sampling enabled by GP surrogate models, i.e. rigorous maximisation of GPR uncertainty (Equation 22) or rigorous maximisation of the GP-based modified EI function (Equation 23). Specifically, the heuristic-based adaptive sampling MILP chooses the largest region within the input space partitioned by Delaunay triangulation. This is implemented by performing Delaunay triangulation between samples in the input space, calculating the size of each region, assigning a binary variable to each distinct region, and selecting the binary variable corresponding to the region with the largest size. An adaptive sample is subsequently placed at the centroid of the largest simplex, thereby exploring the most sparsely sampled region. The benefit of such heuristic-based adaptive sampling is quicker iterations involving solving MILP formulations as opposed to complex surrogate-based NLP problems. The formulation in Table 7 provides a heuristic-based MILP formulation for efficient _exploration_ of the search space using the method described.
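The exploration step that the MILP in Table 7 encodes can be illustrated directly with SciPy (a sketch that picks the largest simplex by enumeration rather than by solving the MILP; the sample data are placeholders):

```python
import math
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(12, 2))          # existing samples in a 2-D space
tri = Delaunay(X)
m = X.shape[1]

sizes, centroids = [], []
for simplex in tri.simplices:
    verts = X[simplex]                       # (m + 1, m) simplex vertex coordinates
    centroids.append(verts.mean(axis=0))     # candidate adaptive sample
    # simplex volume = |det(v_1 - v_0, ..., v_m - v_0)| / m!
    sizes.append(abs(np.linalg.det(verts[1:] - verts[0])) / math.factorial(m))

next_sample = centroids[int(np.argmax(sizes))]   # centroid of the largest region
```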
The formulation in Table 7 can be adapted by embedding OODXBlock formulations for classification models and plugging in feasibility constraints into the adaptive sampling MILP. In doing so, depending on
\begin{table}
\begin{tabular}{l l} \hline \hline Sets & \\ \hline \(R\) & set of regions \\ \(m\) & set over the number of input dimensions \\ \hline \multicolumn{2}{l}{Parameters} \\ \hline \(S_{i}\) & sizes of regions with dimensions \(i\in R\) \\ \(C_{i,j}\) & centroid coordinates with dimensions \(i\in R\), \(j\in m\) \\ \hline \multicolumn{2}{l}{Variables} \\ \hline \(x_{j}\) & decision variable with dimensions \(j\in m\) \\ \(z_{i}\) & binary variable with dimensions \(i\in R\) \\ \hline \multicolumn{2}{l}{Equations} \\ \hline \multicolumn{2}{l}{**for \(j\) in \(m\) do**} \\ \multicolumn{2}{l}{\(x_{j}=\sum\limits_{i\in R}C_{i,j}z_{i}\)} & possible \(\mathbf{x}\) values bound to centroids \\ \multicolumn{2}{l}{\(\sum\limits_{i\in R}z_{i}=1\)} & exactly one region selected \\ \hline \multicolumn{2}{l}{Objective} \\ \hline \(\max\ \ \sum\limits_{i\in R}S_{i}z_{i}\) & maximise size \\ \hline \hline \end{tabular}
\end{table}
Table 7: Heuristic-based adaptive sampling MILP formulation using regions from Delaunay triangulation of the search space.
whether the classifier formulation is MILP or NLP, the resulting adaptive sampling formulation will be MILP or MINLP, respectively. It is also possible to adjust the formulation in Table 7 to enable heuristic-based _exploitation_ of the current most promising sample with regards to surrogate-based optimisation. This Delaunay triangulation-based exploitation adaptive sampling MILP was inspired by the GP-based EI function. The exploitation adaptive sampling with Delaunay triangulation thereby includes the influence of the current best sample. Accordingly, the formulation in Table 7 can be updated to consider only the regions for which the current best sample is a vertex. In this way, the set \(R\) as well as the \(S\) and \(C\) parameters are configured differently along with the binary selector variable vector, \(\mathbf{z}\). Otherwise, the MILP formulation remains the same, although now the optimisation will determine the largest region about the current best sample.
#### 2.6.3 Bounds adjustments
The OODX AdaptiveSampler object can be used to adjust the bounds of the adaptive sampling search space to enable refinement around potential optima and/or feasible regions. Practically, the bounds on the input variables to the GP-based NLP formulations are enforced such that adaptive sampling solutions are constrained within the specified bounds. With regards to the Delaunay triangulation-based MILP formulations, the set of centroids which constrains the input variables to a discrete set of locations, is updated to only include centroids within the specified bounds. Additionally, the vertices of the search space hypercube can be included within the Delaunay triangulation. The AdaptiveSampler object therefore enables box bounds to be adjusted within bounds tightening algorithmic implementations whilst enabling the bounds tightening rules to be tailored to specific applications. In this way, it is possible to enable bounds tightening, relaxation, and/or translation in any direction and dimension as is necessitated for the desired exploitation/exploration of the search space.
## 3 Application: resource recovery from brewery wastewater
### Case study introduction
This section presents an application of the developed methodology to the optimisation of a process system to recover resources from a brewery wastewater. Brewery wastewater was chosen due to the high concentrations of chemical oxygen demand (COD), total nitrogen (TN), and total phosphorous (TP), typical among food and beverage processing industries which have been highlighted as a focus for sustainable resource recovery [102, 103]. Typical brewery wastewater characteristics are shown in Table 8 [104].
A simulation-based superstructure optimisation methodology was thereby developed to optimise process systems to recover carbon, nitrogen (N), and phosphorous (P) resources from a representative brewery wastewater. The resource recovery superstructure was modelled in state-of-the-art wastewater treatment process simulation software. This enabled computer experiments to provide high-fidelity training data for
\begin{table}
\begin{tabular}{l l} \hline Parameter & Value \\ \hline pH & 3–12 \\ Temperature (\({}^{\circ}\)C) & 14–18 \\ COD (g m\({}^{-3}\)) & 2,000–6,000 \\ BOD (g m\({}^{-3}\)) & 1,200–3,600 \\ COD:BOD & 1.667 \\ VFA (g m\({}^{-3}\)) & 1,000–2,500 \\ Phosphates as PO\({}_{4}\) (g m\({}^{-3}\)) & 10–50 \\ TN (g m\({}^{-3}\)) & 25–80 \\ TS (g m\({}^{-3}\)) & 5,100–8,750 \\ TSS (g m\({}^{-3}\)) & 2,900–3,000 \\ TDS (g m\({}^{-3}\)) & 2,020–5,940 \\ \hline \end{tabular}
\end{table}
Table 8: Typical brewery wastewater characteristics [104].
surrogate models [37]. The superstructure itself embedded high-rate anaerobic digestion for the recovery for biogas as well as biological nutrient (N and P) recovery (BNR) processes.
BNR has gained popularity over physico-chemical processes due to the realised performance at reduced costs arising from the non-requirement for additional dosing chemicals and subsequent separation of precipitates [105]. However, challenges associated with BNR include modelling and controlling biological systems particularly in uncertain practical environments and colder climates [105]. BNR enables the concentration of N and P nutrients into a solid digestate by-product which can be utilised as a fertiliser [106]. Additionally, the recovery of nutrient-rich fertiliser from wastewater enables the substitution of traditional N- and P-fixating production processes thereby alleviating the related environmental issues associated with human interference in the natural biogeochemical cycles. Furthermore, the removal of nutrients from wastewater effluents reduces the emissions to receiving water bodies thereby alleviating the associated impacts of water toxicity and eutrophication [14].
BNR process configurations include the anaerobic-anoxic-oxic (A\({}^{2}\)O) process, the Bardenpho process, the Johannesburg process, and the University of Cape Town (UCT) process. The A\({}^{2}\)O process (Figure 7A) facilitates both biological N and P recovery by releasing ammonia within an anaerobic zone whilst simultaneously facilitating polyhydroxyalkanoate (PHA) storage by phosphorous accumulating organisms (PAOs). Following the anaerobic zone, an anoxic zone reduces nitrates, recycled from a final aerobic zone, to nitrogen. The aerobic zone simultaneously oxidises nitrogenous compounds to nitrates and facilitates the uptake of phosphates by PAOs. The Bardenpho process (Figure 7B) operates similarly to the A\({}^{2}\)O process with an additional anoxic-aerobic step between the original aerobic stage and the final clarifier. The Bardenpho process thereby provides superior nutrient recovery compared to the A\({}^{2}\)O process at the expense of additional capital costs for additional reactors and increased operating costs for the second aeration tank. The Johannesburg process (Figure 7C) is equivalent to the A\({}^{2}\)O process with an additional fermenter on the return activated sludge (RAS) from the clarifier to the anaerobic stage. This facilitates strict anaerobic conditions within the first stage by enabling denitrification of any nitrates in the RAS. Additionally, increased volatile fatty acid (VFA) production in the side stream enables enhanced PHA storage by PAOs in the anaerobic stage. The UCT process (Figure 7D) adopts a unique recycle configuration with the RAS recycled to the anoxic stage and with an additional recycle from the anoxic stage to the anaerobic stage. There also exists a modified UCT version which utilises a multistage anoxic reactor configuration to improve the denitrification capacity by recycling nitrates to latter stages of the anoxic zones whilst the mixed liquor recycle is withdrawn from earlier stages of the anoxic zone [107]. Finally, the literature contains some works focusing on integrative BNR configurations, for example: integrating high-rate anaerobic digestion within the Bardenpho process [108]; combining the A\({}^{2}\)O process with chemical P precipitation [109]; and the optimisation of the Bardenpho process and moving bed biofilm reactor using response surface methodology [110].
Surrogate models were formulated to represent different BNR process configurations within the simulated superstructure. Specifically, GPR models and classification NNs were modelled and formulated as mathematical programming formulations using the OODX methodology presented [100]. However, applying the OODX methodology to the superstructure optimisation of resource recovery processes required several challenges to be addressed. Firstly, this chapter builds on the previous one by formulating the surrogate models within a superstructure optimisation problem including binary variables for the selection of discrete resource recovery pathways within the superstructure. As such, this work develops a bb-MINLP framework [46]. In addition to the challenges associated with formulating and solving bb-MINLP problems, this chapter also considers the competing decision criteria pertaining to the recovery of different resources and economic cost. This challenge was addressed by multi-objective optimisation to highlight different non-dominated solutions on the Pareto frontier.
Finally, optimisation under uncertainty methods were used to optimise resource recovery process design whilst accounting for uncertainties in the brewery wastewater composition. Specifically, the flow and composition of wastewaters are inherently stochastic, with loading shocks potentially leading to reduced process performance in the best case, and process infeasibility in the worst case. As for the objective function, economic and environmental performance criteria are often the focus when designing systems to recover resources from wastewater, but process operability to ensure that the designed performance is realisable within practical operating environments is of equal importance.
To address the challenges in this application, a superstructure optimisation methodology embedding surrogate process unit models trained and formulated using the OODX package was developed and implemented to optimise the recovery of carbon and nutrient resources from a brewery wastewater. The well-established commercial wastewater treatment process simulation software GPS-X version 8.0 was utilised as a source of high-fidelity data from first-principles black box models using design of experiments [37]. These computer experiments provided high-fidelity data of resource recovery pathways from a superstructure embedding high-rate anaerobic digestion (AD) via an upflow anaerobic sludge blanket (UASB) reactor, 4 different BNR pathways, and side-stream AD including return flows. Binary selection variables enabled the optimisation of the resource recovery pathway whilst the volume of the UASB was optimised as a continuous process design variable. Additionally, the uncertainty in the brewery wastewater COD was accounted for by varying this parameter during the computer experiments between the lower and upper brewery wastewater COD compositions cited in the literature (2,000-6,000 g m\({}^{-3}\)).
### Methods
#### 3.2.1 Wastewater characterisation
The characterisation of the brewery effluent was in accordance with the typical brewery wastewater characteristics from Table 8 and is shown in Table 9. A wastewater production of 1,000 m\({}^{3}\) d\({}^{-1}\) was considered for demonstrative purposes, although it should be noted that typical brewery effluent loads vary greatly based on the size of the brewery and its production capacity. As such, 1,000 m\({}^{3}\) d\({}^{-1}\) is an approximate overestimate of typical brewery wastewater production used to demonstrate the modelling capabilities, and the methodology could also be applied to other wastewater compositions and different production capacities.

Figure 7: Biological nutrient recovery processes. (A) A\({}^{2}\)O process, (B) Bardenpho process, (C) Johannesburg process, (D) UCT process. R: recycle from aerobic to anoxic zone, RAS: return activated sludge, WAS: waste activated sludge.
The wastewater characterisation also accounted for uncertainties in the COD composition. Specifically, for the optimisation under uncertainty case study, the COD was considered to vary between the lower and upper bounds cited in literature (2,000-6,000 g m\({}^{-3}\)) [102]. The influent COD was thereby modelled as following a normal distribution with a mean of 4,000 g m\({}^{-3}\) and a standard deviation of 1,000 g m\({}^{-3}\). This probability distribution ensures that approximately 95 % of the influent COD realisations fall within 2 standard deviations of the mean (\(4,000\pm 2,000\) g m\({}^{-3}\)), i.e. within the range cited in the literature.
#### 3.2.2 Superstructure postulation
A superstructure comprising biological resource recovery pathways was postulated (Figure 8) and modelled within the GPS-X wastewater treatment process simulation software. Specifically, the UASB serves the double purpose as an anaerobic zone within BNR process configurations whilst also facilitating high-rate anaerobic digestion to recover biogas. The superstructure then embeds 4 different BNR configurations including A\({}^{2}\)O, Bardenpho, Johannesburg, and UCT processes. The waste activated sludge from a secondary clarification unit is then sent to a conventional anaerobic digestion process, including prior thickening and post dewatering, from which the digestate is recovered as a rich source of N and P nutrients.
The parameterisation to model each BNR pathway within the superstructure is given in Table 10. Specifically, the A\({}^{2}\)O process is modelled by constraining the volumes of the second anoxic and aerobic reactors (Anoxic 2 and Oxic 2, respectively) to zero. Additionally, the third anoxic reactor (Anoxic 3) is constrained to have zero volume and the return activated sludge is recycled to the UASB. Finally, the recycle from the first anoxic reactor (Anoxic 1) to the UASB is not utilised. The Bardenpho process is the same as the A\({}^{2}\)O process configuration except that the Bardenpho process utilises Anoxic 2 and Oxic 2. The difference between the A\({}^{2}\)O configuration and the Johannesburg process is the activation of Anoxic 3 within the RAS recycle. The UCT process is the same as the A\({}^{2}\)O process except that the RAS is recycled to Anoxic 1 instead of directly to the UASB, and there is a recycle from Anoxic 1 to the UASB.
#### 3.2.3 Computer experiments
Computer experiments enabled high-fidelity data from GPS-X to be harnessed for surrogate model training and to represent underlying black box models within the superstructure optimisation framework. GPS-X version 8.0 was interfaced using Python scripts to iterate over a set of input samples, generated using static sampling strategies, and to evaluate and save the corresponding output data. The computer experiments were designed by specifying the input-output variables to be sampled and the lower and upper bounds on the input space, and by generating a set of well-spaced static input samples.
The volume of the UASB was selected as a continuous input variable due to its influence on COD conversion to recovered biogas as well as the dependence of downstream BNR processes on the anaerobic
\begin{table}
\begin{tabular}{l l} \hline Parameter & Value \\ \hline Flow (m\({}^{3}\) d\({}^{-1}\)) & 1,000 \\ COD (g m\({}^{-3}\)) & 4,000 \\ BOD (g m\({}^{-3}\)) & 2,400 \\ COD:BOD & 1.66 \\ VFA (g m\({}^{-3}\)) & 2,133 \\ TN (g m\({}^{-3}\)) & 80 \\ Ammonia nitrogen (g m\({}^{-3}\)) & 50 \\ TP (g m\({}^{-3}\)) & 30 \\ Ortho-phosphate (g m\({}^{-3}\)) & 20 \\ \hline \end{tabular}
\end{table}
Table 9: Brewery wastewater characterisation implemented in this study.
zone. This critical design parameter therefore has an influence on COD, N, and P recovery, providing a good candidate for multi-objective optimisation [108]. Lower and upper bounds of 50 m\({}^{3}\) and 500 m\({}^{3}\), respectively, were imposed on the UASB volume based on preliminary experiments and to provide a reasonable range for the optimisation search.
The other input variable to the computer experiments was the concentration of COD in the brewery wastewater, which together with the UASB volume defines a 2-dimensional search space. The influent COD was varied in the computer experiments so as to gain an understanding of the process system performance in response to uncertain variations in wastewater quality within practical operating environments. The lower and upper bounds of 2,000 g m\({}^{-3}\) and 6,000 g m\({}^{-3}\) were imposed on the influent COD in accordance with the typical range cited for brewery wastewaters [102].
The output variables sampled from computer experiments included the final effluent COD, TN, and TP concentrations which were used to ensure effluent quality constraints were met. Biogas production, specifically the volumetric flow of methane produced, from both the UASB and the digester were sampled to assess the total biogas recovery performance of the flowsheet. The mass of the recovered digestate was sampled along with TN and TP composition data for the digestate so as to determine the N and P quality in kg t\({}^{-1}\). The aeration requirements of the 2 aerobic zones were evaluated to determine the total aeration requirement of the process as a proxy for operational cost of which the aeration constitutes a primary factor. Finally, binary data signifying convergence of the simulator (failed convergences were assigned a label of
\begin{table}
\begin{tabular}{l c c c c} \hline Parameter & A\({}^{2}\)O & Bardenpho & Johannesburg & UCT \\ \hline Anoxic 1 volume (m\({}^{3}\)) & 400 & 400 & 400 & 400 \\ Oxic 1 volume (m\({}^{3}\)) & 1,000 & 1,000 & 1,000 & 1,000 \\ Anoxic 2 volume (m\({}^{3}\)) & 0 & 400 & 0 & 0 \\ Oxic 2 volume (m\({}^{3}\)) & 0 & 200 & 0 & 0 \\ Clarifier volume (m\({}^{3}\)) & 600 & 600 & 600 & 600 \\ Anoxic 3 volume (m\({}^{3}\)) & 0 & 0 & 200 & 0 \\ Splitter fraction to UASB & 1 & 1 & 1 & 0 \\ Anoxic 1 to UASB recycle (m\({}^{3}\) d\({}^{-1}\)) & 0 & 0 & 0 & 4,000 \\ Thickener underflow (m\({}^{3}\) d\({}^{-1}\)) & 6 & 6 & 6 & 6 \\ Digester volume (m\({}^{3}\)) & 300 & 300 & 300 & 300 \\ Dewatered digestate flow (m\({}^{3}\) d\({}^{-1}\)) & 1.5 & 1.5 & 1.5 & 1.5 \\ \hline \end{tabular}
\end{table}
Table 10: BNR configuration parameterisation within the superstructure (Figure 8).
Figure 8: Carbon, nitrogen, phosphorous recovery superstructure embedding A\({}^{2}\)O, Bardenpho, Johannesburg, and UCT biological nutrient recovery (BNR) processes. Solid lines depict reactors and streams present in all BNR configurations whilst dashed lines show reactors and steams which are only activate for Bardenpho, Johannesburg, or UCT process dependent on the configurations detailed in Table 10. UASB: upflow anaerobic sludge blanket reactor.
0 whilst successful convergences were labelled 1) was sampled for modelling feasibility constraints using classification surrogate models.
The input and output variables are summarised in Table 11 along with their corresponding indexes used in the mathematical notation for surrogate model and mathematical programming formulations. Specifically, the variables were assigned incremental integer indexes with indexes \(u=\{1,2\}\) assigned to the input variables and indexes \(v=\{3,4,...,9\}\) assigned to the output variables.
Sobol sampling was adopted to provide good initial search space coverage in a small number of samples [73]. Sampling for black box superstructure optimisation necessitates a strategy to sample uniformly over the discrete variables as well as the continuous search space [46]. This was achieved by generating 32 samples over the 2-dimensional input space and then evaluating these 32 samples for each of the 4 discrete BNR configurations within the superstructure resulting in 128 total simulation interrogations.
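A sketch of this sampling plan using SciPy's quasi-Monte Carlo module (the seed and the scaling to the Table 11 bounds are illustrative):

```python
from scipy.stats import qmc

# 32 Sobol samples over the 2-D (UASB volume, influent COD) space, reused for
# each of the 4 BNR configurations: 4 x 32 = 128 simulator evaluations.
sampler = qmc.Sobol(d=2, scramble=True, seed=0)
unit_samples = sampler.random(n=32)                  # points in [0, 1)^2
X = qmc.scale(unit_samples, l_bounds=[50, 2000], u_bounds=[500, 6000])

configurations = ["A2O", "Bardenpho", "Johannesburg", "UCT"]
plan = [(k, x) for k in configurations for x in X]   # 128 (configuration, input) runs
```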
The Sobol sampling thereby obtains the mappings of input data onto output data as shown by Figure 9 where the following notation was used:
* \(k\) indexes the set of BNR configurations {A\({}^{2}\)O, Bardenpho, Johannesburg, UCT}
* \(n\) parameter denoting the number of samples (equal to 32 here)
* \(u\) denotes the input variable index {1, 2}
* \(v\) denotes the output variable index {3, 4,..., 9}
* \(i\) indexes the set of samples {1, 2,..., n}
* \(X_{i,u}\) matrix of input samples
* \(Y_{k,i,v}\) matrix of output samples for each BNR configuration
#### 3.2.4 Surrogate modelling
Prior to training the surrogate models, the input-output data were standardised and split into training and testing data. Standardisation of the data ensured that the surrogate models were not overfit to variables with greater magnitude. The training data was subsequently used to train the surrogate models whilst the testing data was reserved for validating the model performance. The mean absolute error was used as a validation metric. In this work, 25 % of the data was reserved for testing (equating to 8 samples per BNR configuration) and the remaining 75 % (24 samples) were used for model training. The same train-test split was used for each discrete BNR configuration so as to facilitate an equal comparison, since the same split-induced uncertainties are then present for every configuration. Regression models were fitted to converged training data only so as not to skew the predictions over the converged, feasible region.
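A sketch of this split with Scikit-learn (synthetic placeholder arrays standing in for the sampled data; the fixed random_state makes every configuration share the same split):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(32, 2))        # 32 sampled inputs per BNR configuration
Y = rng.uniform(size=(32, 6))        # 6 continuous outputs (Table 11, v = 3..8)

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.25, random_state=42)
print(X_train.shape, X_test.shape)   # (24, 2) (8, 2)
```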
\begin{table}
\begin{tabular}{c l l} \hline \multicolumn{3}{c}{Input variables} \\ \hline \(u\) & Variable & Bounds \\ \hline
1 & UASB volume & \([50,500]\) \\
2 & Influent COD & \([2000,6000]\) \\ \hline \multicolumn{3}{c}{Output variables} \\ \hline \(v\) & Variable \\ \hline
3 & Effluent COD \\
4 & Effluent TN \\
5 & Effluent TP \\
6 & Biogas recovered \\
7 & Digestate nutrient quality \\
8 & Aeration cost \\
9 & Convergence \\ \hline \end{tabular}
\end{table}
Table 11: Input and output variable selection and input space bounds.
GPR surrogate models with squared-exponential kernel functions were formulated to map the 2 continuous input variables onto each continuous output variable for each discrete BNR configuration. In total, 6 continuous output variables (including effluent COD, TN, and TP concentrations, total biogas recovery, nutrient quality of the digestate, and total aeration requirement) and 4 BNR configurations thereby necessitated 24 GPR models. Predictions from GPR models can thereby be written as \(\hat{y}_{k,v}=\text{GPR}_{k,v}\left(\mathbf{x}\right)\) where \(k\) is the BNR configuration and \(v=3,...,8\) corresponding to the 6 continuous output variables indices shown in Table 11 and \(\mathbf{x}\) is a 2-dimensional input vector. GPR models were validated based on the mean absolute error (MAE) and mean absolute percentage error (MAPE).
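For illustration (with synthetic placeholder data in place of the GPS-X samples), the 24 regression models and their validation metrics could be assembled as follows:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import mean_absolute_error, mean_absolute_percentage_error

rng = np.random.default_rng(0)
configs = ["A2O", "Bardenpho", "Johannesburg", "UCT"]
X_train, X_test = rng.uniform(size=(24, 2)), rng.uniform(size=(8, 2))
Y_train = {k: rng.uniform(size=(24, 6)) for k in configs}
Y_test = {k: rng.uniform(size=(8, 6)) for k in configs}

models, scores = {}, {}
for k in configs:
    for v in range(6):               # the 6 continuous outputs (indices 3-8 in Table 11)
        gpr = GaussianProcessRegressor(kernel=RBF(1.0)).fit(X_train, Y_train[k][:, v])
        pred = gpr.predict(X_test)
        models[(k, v)] = gpr
        scores[(k, v)] = (mean_absolute_error(Y_test[k][:, v], pred),
                          mean_absolute_percentage_error(Y_test[k][:, v], pred))
```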
Classification NNs with 2 hidden layers, each with 10 nodes and sigmoid activation functions (Figure 10), were trained on the relationships between the 2 continuous input variables and binary convergence targets (output variable assigned index 9 in Table 11). With one NN trained for each discrete BNR pathway, there were a total of 4 classification NNs within this decision-making methodology. To maintain the notation, the predictions from the classification models can be written as \(\hat{y}_{k,9}=\text{NN}_{k}\left(\mathbf{x}\right)\), where \(k\) is the BNR configuration, the index 9 corresponds to the convergence output in Table 11, and \(\mathbf{x}\) is a 2-dimensional input vector. The classification NNs were validated based on the precision and recall scores, which represent the ability of the classifier to avoid false positives and to correctly label all the positives, respectively.
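A PyTorch sketch of this classifier architecture (2 hidden layers of 10 nodes with sigmoid activations and a logit output), trained with a binary cross entropy loss on placeholder convergence labels:

```python
import torch
from torch import nn

torch.manual_seed(0)
X = torch.rand(24, 2)                                 # scaled (UASB volume, COD) inputs
t = (X.sum(dim=1, keepdim=True) > 1.0).float()        # placeholder 0/1 convergence labels

clf = nn.Sequential(
    nn.Linear(2, 10), nn.Sigmoid(),
    nn.Linear(10, 10), nn.Sigmoid(),
    nn.Linear(10, 1),                                 # logit output
)
opt = torch.optim.Adam(clf.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(500):
    opt.zero_grad()
    loss_fn(clf(X), t).backward()
    opt.step()

# a logit >= 0 corresponds to a predicted probability of feasibility >= 0.5
prob_feasible = torch.sigmoid(clf(torch.tensor([[0.4, 0.7]])))
```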
#### 3.2.5 Optimisation formulation
The trained GPR models were formulated within the mathematical optimisation problem using the NLP constraint shown in Equation 24. In total there were 24 GPR constraints \(\hat{\bar{y}}_{k,v}\), one for each BNR configuration \(k\) (A\({}^{2}\)O, Bardenpho, Johannesburg, UCT) and each continuous output variable \(v\). The predictions of these GPR constraints exist in the standardised modelling space, thereby necessitating inverse scaling back to the original space. The GPR parameters which were optimised in the surrogate modelling stage (\(\alpha_{i}\), \(\sigma_{f}\), \(l\)) as well as the standardised training data, \(\bar{X}_{i,u}\) (where \(i\) indexes the number of training samples \(n\) and \(u\) indexes the dimensionality of the input vector \(m=2\)), were implemented as optimisation parameters for each BNR configuration \(k\) and continuous output variable \(v\). Inputs to the GPR NLP constraints were the optimisation decision variables, \(\bar{x}_{1}\) and \(\bar{x}_{2}\), which were bounded by the standardised lower and upper bounds of the UASB volume and influent COD, respectively.
\[\hat{\bar{y}}_{k,v}=\left.\sum_{i\in n}\alpha_{i}\sigma_{f}^{2}\exp\left(-\sum _{u\in m}\frac{1}{2l^{2}}\left(\bar{x}_{u}-\bar{X}_{i,u}\right)^{2}\right) \right|_{k,v} \tag{24}\]
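Written out directly, the Equation 24 prediction is a weighted sum of squared-exponential kernel evaluations against the standardised training inputs; the following sketch evaluates it with NumPy, with all numerical values being placeholders.

```python
# Sketch of evaluating the Equation 24 constraint from the trained GPR parameters
# (alpha_i, sigma_f, l) and the standardised training inputs X_bar (placeholders).
import numpy as np

def gpr_prediction(x_bar, X_bar, alpha, sigma_f, length):
    """y_bar_hat = sum_i alpha_i * sigma_f**2 * exp(-sum_u (x_u - X_iu)**2 / (2 l**2))."""
    sq_dist = ((x_bar[None, :] - X_bar) ** 2).sum(axis=1)
    return float(np.sum(alpha * sigma_f**2 * np.exp(-sq_dist / (2.0 * length**2))))

X_bar = np.array([[0.0, 0.0], [0.5, -0.2], [-0.3, 0.4]])   # standardised training inputs
alpha = np.array([0.2, -0.1, 0.05])
print(gpr_prediction(np.array([0.1, 0.1]), X_bar, alpha, sigma_f=1.0, length=0.8))
```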
Figure 9: Sampling input-output mappings for discrete BNR configurations.
The modelling outputs from each GPR constraint were scaled back to original space by multiplying by the relevant standard deviation, \(\sigma_{v}\), and adding the relevant mean, \(\mu_{v}\), saved during the data standardisation which was facilitated by using the OODX data handling object. The inverse scaled surrogate modelling outputs were then multiplied by a binary variable, \(\gamma_{k}\), used to activate/deactivate each BNR pathway within the superstructure optimisation model. Coupled with a "choose exactly one" constraint which constrains the sum of these binary variables to equal 1 (Equation 25), the summation over the product between these binary selection variables and the inverse scaled modelling outputs enabled MINLP formulations representing the superstructure. Equations 26, 27, and 28 show the resulting constraints to enforce effluent quality constraints on the COD, TN, and TP below limits of \(50\,\mathrm{g}\,\mathrm{m}^{-3}\), \(10\,\mathrm{g}\,\mathrm{m}^{-3}\), and \(5\,\mathrm{g}\,\mathrm{m}^{-3}\), respectively.
\[\sum_{k}\gamma_{k}=1 \tag{25}\]
\[\sum_{k}\gamma_{k}\left(\hat{\bar{y}}_{k,3}\sigma_{3}+\mu_{3}\right)\leq 50 \tag{26}\]
\[\sum_{k}\gamma_{k}\left(\hat{\bar{y}}_{k,4}\sigma_{4}+\mu_{4}\right)\leq 10 \tag{27}\]
\[\sum_{k}\gamma_{k}\left(\hat{\bar{y}}_{k,5}\sigma_{5}+\mu_{5}\right)\leq 5 \tag{28}\]
Constraints representing the classification NNs mapped the scaled optimisation input variables \(\bar{x}_{u}\) onto predicted probabilities of feasibility for each BNR configuration. The NNs with 2 hidden layers, each with 10 nodes and sigmoid activation functions, were formulated as the following set of NLP constraints, defined for each BNR configuration \(k\), where \(\hat{y}_{k,9}\) is a scalar logit prediction for each BNR configuration (Equations 29-33).
\[z_{j}^{(1)}=\sum_{u\in N_{0}}w_{j,u}^{(0)}\bar{x}_{u}+b_{j}^{(0)} \tag{29}\]
\[a_{j}^{(1)}=\frac{1}{1+\exp\left(-z_{j}^{(1)}\right)} \tag{30}\]
\[z_{j}^{(2)}=\sum_{j^{\prime}\in N_{1}}w_{j,j^{\prime}}^{(1)}a_{j^{\prime}}^{ (1)}+b_{j}^{(1)} \tag{31}\]
\[a_{j}^{(2)}=\frac{1}{1+\exp\left(-z_{j}^{(2)}\right)} \tag{32}\]
Figure 10: Feedforward neural network structure to map inputs \(\mathbf{x}\) onto predictions \(\hat{\mathbf{y}}\). The network consists of an input layer with \(N_{0}\) nodes, an output layer with \(N_{\Lambda+1}\) nodes, and \(\Lambda\) hidden layers with \(N_{\lambda}\) nodes for \(\lambda=1,...,\Lambda\). Each hidden layer has an activation function \(\xi^{(\lambda)}\) for \(\lambda=1,...,\Lambda\). Layer inputs \(\mathbf{z}^{(\lambda)}\) are calculated by multiplying the previous layer outputs \(\mathbf{a}^{(\lambda-1)}\) by a weights matrix \(W^{(\lambda-1)}\) and adding a bias vector \(\mathbf{b}^{(\lambda-1)}\). Finally, layer outputs are determined by passing the inputs through an activation function \(\xi^{(\lambda)}\).
\[\hat{y}_{k,9}=\sum_{j^{\prime}\in N_{2}}w_{j,j^{\prime}}^{(2)}a_{j^{\prime}}^{(2) }+b_{j}^{(2)} \tag{33}\]
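For reference, Equations 29-33 amount to the following forward pass through the 2-10-10-1 network, sketched here with explicit weight matrices; the weights themselves are placeholders rather than trained values.

```python
# Sketch of the logit prediction of Equations 29-33 (placeholder weights).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_logit(x_bar, W0, b0, W1, b1, W2, b2):
    a1 = sigmoid(W0 @ x_bar + b0)    # Equations 29-30: first hidden layer
    a2 = sigmoid(W1 @ a1 + b1)       # Equations 31-32: second hidden layer
    return float(W2 @ a2 + b2)       # Equation 33: linear scalar logit

rng = np.random.default_rng(0)
W0, b0 = rng.normal(size=(10, 2)), rng.normal(size=10)
W1, b1 = rng.normal(size=(10, 10)), rng.normal(size=10)
W2, b2 = rng.normal(size=10), 0.1
print(nn_logit(np.array([0.2, -0.4]), W0, b0, W1, b1, W2, b2))
```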
The classification NNs were similarly embedded within an MINLP constraint including binary selection variables to ensure the feasibility of the active BNR pathway. Specifically, a probability of feasibility was constrained to be greater than 0.5 by enforcing a logit prediction greater than or equal to 0.
\[\sum_{k}\gamma_{k}\hat{y}_{k,9}\geq 0 \tag{34}\]
Objective functions to maximise the recovered biogas, to maximise the recovered N and P nutrients within digestate, and to minimise the aeration requirement as a proxy to total operation cost were optimised individually to observe the trade-offs. These objective functions were of the form shown in Equation 35 for \(v=6,7,8\). The objective function to maximise nutrient recovery modelled the quality of the recovered digestate in terms of the total nutrient (N and P) content in \(\mathrm{kg\,t^{-1}}\). The biogas recovery objective function modelled the total methane recovered, in \(\mathrm{m^{3}\,d^{-1}}\), from both the UASB and the sludge digester. Finally, the aeration requirement was modelled as the total oxygen flow, in \(\mathrm{m^{3}\,d^{-1}}\), required to reach the desired dissolved oxygen set point in the aerobic zones.
\[\min\sum_{k}\gamma_{k}\left(\hat{\bar{y}}_{k,v}\sigma_{v}+\mu_{v}\right) \tag{35}\]
The \(\epsilon\)-constraint method was also adopted to obtain additional non-dominated Pareto optimal solutions (Equation 36) where \(\epsilon_{v}\) constrains output variable \(v\) to be at least as good as a given upper bound.
\[\sum_{k}\gamma_{k}\left(\hat{\bar{y}}_{k,v}\sigma_{v}+\mu_{v}\right)\leq \epsilon_{v} \tag{36}\]
Solutions to the MINLP problem thereby highlight optimal process configurations (optimised binary variables) as well as optimal process designs (optimised continuous variables). Initially, the influent COD was treated as an optimisation decision variable so as to demonstrate the multiple decision criteria trade-offs. Subsequently, the influent COD was treated as an uncertain parameter within a stochastic programming framework. All of the MINLP problems were solved using BARON version 22.9.30 [111] with a maximum time limit of 500 s using a CPU with 2.8 GHz Quad-Core Intel Core i7 processor and 16 GB of RAM.
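A hedged sketch of how such a superstructure MINLP could be assembled in Pyomo and handed to BARON is given below; the `fake_gpr` and `fake_logit` functions and the scaling values are placeholders standing in for the embedded surrogate expressions of Equations 24 and 29-33, and the BARON time-limit option name is assumed.

```python
# Hedged Pyomo-style skeleton of the superstructure MINLP (Equations 24-35).
from pyomo.environ import (Binary, ConcreteModel, Constraint, Objective,
                           SolverFactory, Var, minimize)

K = ["A2O", "Bardenpho", "Johannesburg", "UCT"]
sigma = {3: 80.0, 8: 10.0}        # illustrative inverse-scaling parameters
mu = {3: 70.0, 8: 38.0}

m = ConcreteModel()
m.x1 = Var(bounds=(-1.7, 1.7))    # standardised UASB volume
m.x2 = Var(bounds=(-1.7, 1.7))    # standardised influent COD
m.gamma = Var(K, domain=Binary)

def fake_gpr(mod, k, v):          # stand-in for the Equation 24 expression
    return 0.1 * (mod.x1 - 0.2) ** 2 - 0.3 * mod.x2

def fake_logit(mod, k):           # stand-in for the Equations 29-33 expression
    return 1.0 + 0.5 * mod.x1

m.choose_one = Constraint(expr=sum(m.gamma[k] for k in K) == 1)                      # Eq. 25
m.cod_limit = Constraint(expr=sum(m.gamma[k] * (fake_gpr(m, k, 3) * sigma[3] + mu[3])
                                  for k in K) <= 50)                                 # Eq. 26
m.feasibility = Constraint(expr=sum(m.gamma[k] * fake_logit(m, k) for k in K) >= 0)  # Eq. 34
m.obj = Objective(expr=sum(m.gamma[k] * (fake_gpr(m, k, 8) * sigma[8] + mu[8])
                           for k in K), sense=minimize)                              # Eq. 35

solver = SolverFactory("baron")
solver.options["MaxTime"] = 500   # 500 s limit as in the text; option name assumed
# solver.solve(m, tee=True)       # requires a BARON installation/licence
```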
#### 3.2.6 Stochastic programming formulation
To optimise the superstructure configuration under the uncertainty in the influent COD concentration, the constraints and objective functions were reformulated to express the expected values of the relevant surrogate model outputs. For example the effluent quality constraints were reformulated to constrain the expected COD, TN, and TP concentrations below the specified quality limits whilst the expected biogas recovery, expected nutrient quality of the digestate, and expected aeration requirement were specified as the stochastic programming objective functions.
To this end, the influent COD was first redefined as an indexed parameter as opposed to an optimisation decision variable, where the probability of each influent COD composition, \(R_{s}\), being realised, \(p_{s}\), was evaluated. Practically, the realised influent COD values were taken as a subset from the standardised sampled data \(\widetilde{X}_{2}\). This simultaneously enables good sample coverage of the probability distribution and realised parameter values. The relevant constraints and objective functions were then expanded as a weighted sum-product between the probability of each realisation, \(p_{s}\), and the realised surrogate model prediction of dependent variables, \(\hat{\bar{y}}_{k,v,s}\). Equations 37, 38, and 39 show the stochastic programming reformulations of the effluent quality constraints for the expected COD, TN, and TP concentrations, respectively.
\[\frac{1}{\sum p_{s}}\sum_{k}\gamma_{k}\left(\sum_{s}p_{s}\left( \hat{\bar{y}}_{k,3,s}\sigma_{3}+\mu_{3}\right)\right) \leq 50 \tag{37}\] \[\frac{1}{\sum p_{s}}\sum_{k}\gamma_{k}\left(\sum_{s}p_{s}\left( \hat{\bar{y}}_{k,4,s}\sigma_{4}+\mu_{4}\right)\right) \leq 10 \tag{38}\]
\[\frac{1}{\sum p_{s}}\sum_{k}\gamma_{k}\left(\sum_{s}p_{s}\left(\hat{\bar{y}}_{k,5,s}\sigma_{5}+\mu_{5}\right)\right)\leq 5 \tag{39}\]
The surrogate models are thereby evaluated at \(x_{2}=R_{s}\) where \(x_{2}\) is the input variable representing the influent COD (Table 11) and \(R_{s}\) is the influent COD value realised for scenario \(s\), equivalent to the standardised sampled values \(\bar{X}_{s,2}\). The GPR constraint reformulations are shown by Equation 40.
\[\hat{\bar{y}}_{k,v,s}=\left.\sum_{i\in n}\alpha_{i}\sigma_{f}^{2}\exp\left(-\frac{1}{2l^{2}}\left(\left(\bar{x}_{1}-\bar{X}_{i,1}\right)^{2}+\left(\bar{X}_{s,2}-\bar{X}_{i,2}\right)^{2}\right)\right)\right|_{k,v} \tag{40}\]
Similarly, the classification NNs were reformulated such that the second input dimension was parameterised using the sampled data as discrete scenario realisations to yield predictions denoted as \(\hat{y}_{k,9,s}\). Whilst the "choose exactly one" constraint for the discrete BNR pathways remained unchanged (Equation 25), the classification feasibility constraint was updated to ensure that solutions were robust to the uncertainties by ensuring the process was feasible for all influent COD scenarios (Equation 41).
\[\sum_{k}\gamma_{k}\hat{y}_{k,9,s}\geq 0 \tag{41}\]
Finally, the objective functions were updated to minimise the expected values over the possible uncertainties as shown by Equation 42 for \(v=6,7,8\).
\[\min\frac{1}{\sum p_{s}}\sum_{k}\gamma_{k}\left(\sum_{s}p_{s}\left(\hat{\bar{y}}_{k,v,s}\sigma_{v}+\mu_{v}\right)\right) \tag{42}\]
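The expectations in Equations 37-42 reduce to probability-weighted sums of the surrogate outputs evaluated at each influent COD realisation, as in the following sketch; the scenario values, probabilities, and placeholder surrogate are illustrative.

```python
# Sketch of the expectation used in Equations 37-42: evaluate the surrogate at each
# influent COD realisation R_s, inverse-scale, and weight by the scenario probability.
import numpy as np

def expected_output(x1_bar, R_bar, p, predict, sigma_v, mu_v):
    """predict(x1_bar, x2_bar) -> standardised surrogate output for one (k, v)."""
    vals = np.array([predict(x1_bar, r) for r in R_bar])   # one value per scenario s
    return float(np.sum(p * (vals * sigma_v + mu_v)) / np.sum(p))

R_bar = np.linspace(-1.5, 1.5, 8)             # standardised COD realisations (placeholder)
p = np.full_like(R_bar, 1.0 / len(R_bar))     # scenario probabilities (placeholder)
print(expected_output(0.3, R_bar, p, lambda x1, x2: x1 - 0.5 * x2, sigma_v=10.0, mu_v=40.0))
```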
## 4 Results and discussion
### Computer experiment results
Figure 11 shows the samples, which were generated from the Python-GPS-X interface in 374 s, distributed over the input space and the convergence results. It can be seen that 7 out of the total 128 samples failed to converge (about 5 %), primarily located at low UASB volumes at high COD loading for the A\({}^{2}\)O, Bardenpho, and Johannesburg BNR pathways. The UCT process simulations converged for all 32 input samples. Overall, these results demonstrate the good sample space coverage exhibited by the static Sobol sampling over the 2 input dimensions. The total sampling time of 374 s equates to just less than 3 s per sample. Samples that failed to converge took roughly 3 times as long at just under 9 s per sample which corresponds to the 3 allowed convergence attempts per sample. Although the total sampling time here is acceptable, for higher input dimensions necessitating greater numbers of input samples, and for more complex simulations with greater convergence failure rates, the total sampling time can contribute significant time costs to the surrogate modelling framework. The efficient sample generation exhibited here coupled with the high convergence rate indicates a well posed design space imposed by the input variable selection and their corresponding bounds.
Figure 11 also shows the random split between training and testing data where 25% of samples were reserved for testing, resulting in 24 training samples and 8 testing samples. The same train-test split was applied to each discrete BNR pathway so that the uncertainties introduced by this split were equivalent for each discrete scenario. Specifically, Figure 11 shows how the random train-test split interferes with the spacing of the 32 samples resulting in regions of sparsely sampled training data within the 2-dimensional continuous search spaces. Additionally, the resulting testing set is not well spaced which can result in unrepresentative validation metrics for surrogate models trained to represent the entire search space. The implications of the random train-test split are highlighted in the subsequent sections.
### Surrogate modelling results
#### 4.2.1 Validation results
24 GPR models, each trained on 24 training samples in under 0.03 s, were validated on the reserved testing data. The average MAEs across the 4 BNR pathways as well as the mean output values are shown in Table
12. Whilst the MAEs require knowledge of the magnitude of the underlying data, the MAPEs are normalised such that a value of 1 represents an error of the order of magnitude equivalent to the underlying data. In this way MAPEs can be directly compared across variables of different magnitude as shown in Figure 12. It can be interpreted that the GPR errors for effluent COD and TP concentrations are high and that all other GPR errors are acceptable. It should be noted that high errors are expected due to the random train-test split negatively impacting the well-spaced properties of the training samples, and the resulting placement of the testing samples in the most sparsely populated regions.
The calculation of the MAPE also results in large values when the underlying data values are small. For example, effluent concentrations of COD, TN, and TP in the treated wastewater are very small for a large part of the search space (depicted later in this chapter in Figure 13). Additionally, the mean values are often skewed by high effluent concentrations at poor-performing process design samples. The GPR validation results should therefore be used primarily for greater interpretation of the optimisation results in conjunction with the GPR predictive uncertainties.
#### 4.2.2 Response surface results
The response surfaces generated by the GPR models predicting effluent concentrations of COD, TN, and TP, in addition to the feasible region predicted by the classification NN trained on convergence labels, are shown in Figure 13. The effluent quality response surfaces also depict the feasible region separated by the contour placed at the maximum effluent concentration limits. Together, the agglomeration of these 4 distinct feasible regions defined the feasible search space for the subsequent superstructure optimisation.
Figure 13 shows that the effluent COD quality limit results in infeasible process designs at low UASB volumes for high influent COD concentrations. Additionally, it can be observed that about 75% of the search space predicts effluent COD concentrations below the imposed limit of 50 g m\({}^{-3}\) whilst the greatest predictions of effluent COD concentration were as high as 900 g m\({}^{-3}\) and were concentrated within 25% of the search space. This may explain the high GPR errors for these variables, with these high magnitude outliers resulting in high MAPE errors relative to most of the training data.
\begin{table}
\begin{tabular}{l l c c}
\hline
Variable & Unit & MAE & \(\mu\) \\
\hline
effluent COD & g m\({}^{-3}\) & 29.8 & 72.4 \\
effluent TN & g m\({}^{-3}\) & 2.7 & 9.6 \\
effluent TP & g m\({}^{-3}\) & 2.4 & 6.7 \\
nutrient quality & kg t\({}^{-1}\) & 2.5 & 36.3 \\
biogas recovery & m\({}^{3}\) d\({}^{-1}\) & 67.8 & 707 \\
aeration requirement & 1,000 m\({}^{3}\) d\({}^{-1}\) & 7.5 & 38.5 \\
\hline
\end{tabular}
\end{table}
Table 12: Average mean absolute errors and mean values for output variable predictions across the 4 different BNR pathways.
Figure 11: Computer experiments results showing well-spaced Sobol samples within the input space. Converged samples are shown as black whilst non-converged samples are shown as red. Additionally, training samples are shown as filled whilst testing samples are shown as unfilled.
The maximum limit for the effluent TN concentration (\(10\,\mathrm{g}\,\mathrm{m}^{-3}\)) results in infeasible process designs at low UASB volumes for high influent COD concentrations as well as at high UASB volumes for lower influent COD concentrations. The greatest effluent TN concentrations were observed for the UCT process with low UASB volumes and high influent COD concentrations.
The TP effluent quality limit of \(5\,\mathrm{g}\,\mathrm{m}^{-3}\) again enforced infeasible regions at low UASB volumes and high influent COD concentrations, as well as at high UASB volumes (particularly for low COD concentrations) across all 4 BNR pathways; it also impacted feasible designs at high UASB volumes and high COD concentrations for the A\({}^{2}\)O, Bardenpho, and Johannesburg configurations. The greatest violations of the TP effluent quality limit were observable at high UASB volumes and low influent COD concentrations for the 3 aforementioned BNR process configurations.
The classification NN results are shown for the 4 BNR pathways in Figure 13. The NN demonstrates a good separation between feasible and infeasible designs at logit predictions of 0 as shown by the separating hyperplane. Since all of the samples converged for the UCT pathway, the logit predictions are greater than 0 across the entire input space. The classification NNs were validated based on precision and recall scores which were both 1 across Bardenpho, Johannesburg, and UCT pathways, whilst A\({}^{2}\)O exhibited a precision of 0.88 (due to the incorrectly classified non-converged sample within the testing set visible in Figure 13) and a recall of 1.
Figure 14 shows the GPR response surfaces for the objective functions enumerating the nutrient quality of the recovered digestate, the biogas recovery, and the aeration requirement of different process designs within the superstructure. Unconstrained by the feasibility constraints, the maximum nutrient quality was achieved at low UASB volumes and low influent COD concentrations for the Bardenpho process. Maximum biogas recovery was realised at high UASB volumes and high influent COD concentrations whilst the minimum aeration flow was required for the upper half of UASB volumes over a range of influent COD concentrations except the greatest values.
Figure 15 shows the standard deviations in the GPR predictions from Figure 14. The interpolation property of GPR models is observable by the negligible standard deviations close to the training samples and the increasing uncertainties with increasing distance from the samples. Note that the non-converged samples were not used to train the GPR models such that the uncertainty in the GPR predictions was unaffected by proximity to these samples. Because the same training data was used for each GPR model, the shapes of the standard deviation response surfaces are all similar, only varying in the amplitude of uncertainties based on the magnitude of the corresponding variable predictions.
Figure 12: GPR mean absolute percentage errors.
### Multi-objective optimisation results
Figure 16 shows the results of the superstructure optimisation to maximise the combined N and P nutrient quality of the recovered digestate at 61.2 \(\pm\) 2.5 kg t\({}^{-1}\) (where the uncertainty was obtained from the standard deviation in the GPR predictions based on the 95% confidence interval). The optimal flowsheet for nutrient recovery implemented the Bardenpho BNR process with a 143 m\({}^{3}\) UASB and was realised at an influent COD of 2,000 g m\({}^{-3}\) as shown by the red star (Figure 16). The amount of recovered biogas with this flowsheet is 165 m\({}^{3}\)d\({}^{-1}\) whilst the aeration requirement is 41,340 m\({}^{3}\)d\({}^{-1}\). It can be seen from Figure 16 that none of the constraints are active since the flowsheet produces an effluent with concentrations of COD, TN, and TP of 20.6 g m\({}^{-3}\), 3.4 g m\({}^{-3}\), and 1.6 g m\({}^{-3}\), respectively.
Figure 16 also shows the results of the superstructure optimisation to maximise the total biogas recovery as 1,500 \(\pm\) 103 m\({}^{3}\) d\({}^{-1}\). The maximal biogas recovery was obtained using the Johannesburg BNR process with a 388 m\({}^{3}\) UASB and was realised at an influent COD of 5,616 g m\({}^{-3}\) as shown by the red star (Figure 16). The nutrient quality of the recovered digestate using this flowsheet is 26.5 kg t\({}^{-1}\) whilst the aeration requirement is 18,920m\({}^{3}\)d\({}^{-1}\). None of the effluent constraints were active for this solution since the flowsheet produced an effluent with concentrations of TN and TP of 4.9 g m\({}^{-3}\), and 0.2 g m\({}^{-3}\), respectively, whilst all of the COD was predicted to have been recovered.
Figure 16 also shows the results of the superstructure optimisation to minimise the total aeration requirement as 15,790 \(\pm\) 8,080 m\({}^{3}\) d\({}^{-1}\). This solution was obtained using the UCT BNR process with a 255 m\({}^{3}\) UASB and was realised at an influent COD of 2,598 g m\({}^{-3}\) as shown by the red star (Figure 16). The nutrient quality of the recovered digestate and the amount of recovered biogas using this flowsheet were 39.7 kg t\({}^{-1}\) and 584 m\({}^{3}\)d\({}^{-1}\), respectively. The effluent for this solution had COD, TN, and TP concentrations of 12.8 g m\({}^{-3}\), 9.8 g m\({}^{-3}\), and 5.0 g m\({}^{-3}\), respectively. The TP effluent constraint was therefore active at this solution, which can be seen in Figure 16.
A Pareto optimal flowsheet was determined to recover a digestate with an N and P nutrient quality of 35.9 \(\pm\) 6.5 kg t\({}^{-1}\) alongside 1,070 m\({}^{3}\)d\({}^{-1}\) of biogas, with an aeration requirement of 31,570 m\({}^{3}\)d\({}^{-1}\). This Pareto optimal flowsheet implemented the UCT process with a UASB volume of 360 m\({}^{3}\), and operated optimally for an influent COD concentration of 4,871 g m\({}^{-3}\) (Figure 16). The effluent from the Pareto optimal flowsheet had COD, TN, and TP concentrations of 0 g m\({}^{-3}\), 2.4 g m\({}^{-3}\), and 0 g m\({}^{-3}\), respectively.
It is also worth noting the different influent COD concentrations realised for the optimal solutions in Figure 16. Specifically, the optimal nutrient recovery was realised at the lower bound of 2,000 g m\({}^{-3}\) COD whilst the maximum biogas recovery was obtained for a COD concentration nearer the upper bound at 5,616 g m\({}^{-3}\). Therefore, nutrient recovery could be the focus of the process system at lower influent COD concentrations, whilst biogas recovery is favoured at higher influent COD concentrations. To this end, a flexible resource recovery process, in which Bardenpho and Johannesburg BNR configurations were configured, could enable the operation of the resource recovery process to vary temporally based on the influent COD. However, this would require optimisation of the process system operations in response to real-time sensor data and future projections, for example harnessing model predictive control.
Figure 17 shows the trade-offs between the competing objective criteria where each single objective solution is visualised as well as the Pareto optimal solution. The maximised nutrient quality of the recovered digestate was 61.2 kg t\({}^{-1}\) whilst the most sub-optimal nutrient quality existed for the solution to maximise biogas recovery at 26.5 kg t\({}^{-1}\) (a reduction of over 55 %). Similarly, the maximum biogas recovery was 1,500 m\({}^{3}\) d\({}^{-1}\) whilst the lowest biogas recovery existed for the flowsheet to maximise the nutrient recovery at 164.5 m\({}^{3}\)d\({}^{-1}\) or just 11 % of the maximum biogas recovery potential. Finally, the minimised aeration requirement was 15,790 m\({}^{3}\) d\({}^{-1}\) whilst the greatest aeration requirement was almost 3 times as high and existed for the maximal nutrient recovery solution at 41,340 m\({}^{3}\)d\({}^{-1}\). The Pareto optimal solution achieved a trade-off between the 3 objectives, attaining 59 % of the maximised nutrient recovery and 71 % of the maximum biogas recovery whilst requiring only 76 % of the aeration needed by the maximum nutrient quality solution.
### Optimisation under uncertainty results
Solutions to the optimisation under influent COD uncertainties determined the optimum BNR configuration and UASB volumes. The solution to the 3 objectives to maximise nutrient quality, biogas recovery, and minimise aeration cost converged on the same solution implementing a 274 m\({}^{3}\) UASB followed by the UCT BNR process. Figure 18 thereby shows the GPR predictions for the optimal process which recovers a digestate with an expected nutrient quality of 38.6 kg t\({}^{-1}\) (63 % of the maximised nutrient quality without any uncertainty considerations), 580 m\({}^{3}\) d\({}^{-1}\) expected biogas recovery (39 % of the maximum value found previously), and necessitating an expected aeration flow of 40,360 m\({}^{3}\) d\({}^{-1}\) which was over 150 % greater than the minimised value without any uncertainty considerations. Specifically, Figure 18 shows the responses of the optimal process design to the different realisations of influent COD between 2,000 g m\({}^{-3}\) and 6,000 g m\({}^{-3}\).
The expected effluent concentrations were constrained to be less than the effluent limits (50 g m\({}^{-3}\), 10 g m\({}^{-3}\), and 5 g m\({}^{-3}\) for COD, TN, and TP, respectively). Figure 18 shows that the expected effluent COD concentration was below its specified limit with an expected value of 48.1 g m\({}^{-3}\). Additionally, it can be seen that the effluent COD is expected to transgress the limits at influent COD concentrations greater than 5,000 g m\({}^{-3}\).
Similarly, despite the expected effluent TN concentration of 7.6 g m\({}^{-3}\) being below the effluent limit of 10 g m\({}^{-3}\), the GPR predictive mean violates this limit at influent COD concentrations above 5,000 g m\({}^{-3}\). The expected effluent TP concentration was 2.4 g m\({}^{-3}\) and, although the predictive mean does not violate the effluent limit of 5 g m\({}^{-3}\) over any realisation of the influent COD, the 95 % upper confidence bound violates this limit for extreme influent COD concentrations. In fact, infringement of the effluent limits by the 95 % upper confidence bound is common across COD, TN, and TP concentrations.
Figure 18 also shows the predictive GPR model for the recovered digestate nutrient quality and the uncertainty in these predictions dependent on realisations of the influent COD concentration. Also shown is the expected value across all realisations of the influent COD, accounting for the probability of these concentrations being realised. Generally, the nutrient quality appears to be greatest for lower influent COD concentrations, whilst the nutrient quality at greater influent COD concentrations is reduced. In particular, the uncertainty in the nutrient quality peaks at extreme influent COD concentrations of 2,000 g m\({}^{-3}\) and 6,000 g m\({}^{-3}\). In fact, this characteristic of high uncertainty at the bounds of the influent COD realisations is common across all of the GPR models due to the dependence on the distribution of training data sampled from the process simulation software. The GPR modelling uncertainties could thereby be reduced by increasing the number of static samples generated in initial sampling strategies or by adopting an adaptive sampling approach wherein additional samples are iteratively evaluated from the process simulator so as to increase the modelling fidelity at sparsely sampled locations.
Figure 13: Surrogate modelling of the feasible region (shown by the dashed lines) constrained by an effluent COD limit of \(50\,\mathrm{g}\,\mathrm{m}^{-3}\), an effluent TN limit of \(10\,\mathrm{g}\,\mathrm{m}^{-3}\), an effluent TP limit of \(5\,\mathrm{g}\,\mathrm{m}^{-3}\), and a probability of simulation convergence greater than 0.5 (corresponding to a logit prediction of 0). Effluent COD, TN, and TP were modelled using GPR whilst the logit predictions were from classification NNs. Note that the UCT simulations converged for all samples, therefore there is no region of infeasibility due to simulation convergence failures. Converged samples are shown as black whilst non-converged samples are shown as red. Additionally, training samples are shown as filled whilst testing samples are shown as unfilled.
Figure 14: Objective function GPR response surfaces.
Figure 15: GPR standard deviation response surfaces. The training data are superimposed where non-converged training samples (used for classification model training but not used to train the regression models) are shown in red.
Figure 16: Optimisation solutions for 4 decision criteria: maximised nutrient quality (upper left); maximised biogas recovery (upper right); minimised aeration cost (lower left); and a Pareto optimal solution to maximise nutrient quality whilst ensuring a biogas recovery of at least \(50\,\%\) of the optimal (lower right). The optimised UASB volumes and corresponding influent COD concentrations are shown by the red stars whilst the optimal BNR pathway appears in the title above each response surface. The agglomeration of the effluent quality feasible regions and the simulation convergence-based feasible region is constrained within the black shaded regions.
Figure 17: Multi-objective optimisation trade-offs. Values are normalised to the radial axes where solutions furthest from the origin are optimal and solutions nearest to the origin are the most sub-optimal trade-offs.
Figure 18: Operational profiles of the solution to the optimisation problem accounting for uncertainty in the influent chemical oxygen demand (COD). Also shown are the predictive modelling uncertainties (as the 95 % confidence interval) from Gaussian process regression models. TN: total nitrogen, TP: total phosphorus.
## 5 Conclusions
An object-orientated methodology was developed for surrogate modelling and DFO. GP and NN surrogate models were developed for both regression and classification applications. Explicit mathematical formulations of these surrogate models were validated against existing machine learning implementations and embedded within generalised optimisation formulations. Specifically, GP models were formulated as NLPs whilst NN models were formulated as MILPs or NLPs depending on whether the activation functions were piecewise linear or nonlinear, respectively. These abstracted optimisation formulations enabled plug-and-play surrogate-based optimisation modelling. A primary contribution of this work was the development of classification surrogate models to enable feasibility constraint formulations addressing uncertainties pertaining to simulation convergence failures.
Adaptive sampling algorithms based on GPs and Delaunay triangulation were developed to improve the exploration and exploitation of the search space. GP-based adaptive sampling methods enabled exploration via maximisation of the GP uncertainty or exploitation via the modified EI, both formulated as NLPs. Delaunay triangulation methods were formulated as MILPs to enable heuristic-based exploration and exploitation of the search space. The plug-and-play feature of the developed optimisation formulations enabled online feasibility constraints to be plugged-in to adaptive sampling formulations and increase the efficiency of sampling in the feasible region. Finally, the heuristic-based adaptive sampling MILP was adapted within a data-based direct-search optimisation algorithm to optimise black box problems. The adoption of the object-orientated paradigm means that these methods can be harnessed in a plug-and-play approach to increase the global optimisation and sampling efficiencies of a wide range of simulation-based optimisation applications.
This work demonstrated the applicability of the methodology developed to the optimisation of process systems to recover resources from food and beverage processing wastewaters (FPWW). Specifically, a superstructure optimisation methodology was developed to optimise process systems to recover carbon, nitrogen, and phosphorus resources from a typical brewery wastewater. A superstructure embedding a high-rate anaerobic digestion reactor upstream of 4 different BNR configurations (A\({}^{2}\)O, Bardenpho, Johannesburg, and UCT) was implemented in GPS-X process simulation software so as to harness high-fidelity data within the decision making framework. This work therefore contributed a practical simulation-based process synthesis application of a black box mixed-integer nonlinear programming framework embedding both GP and NN surrogate model formulations. To achieve this, GPR surrogate models were harnessed to represent resource recovery pathways with interpretable uncertainty quantification. Additionally, classification NN surrogate models were used to model the feasible region whilst addressing uncertainties from non-converged black box simulation data.
Multi-objective optimisation addressed the trade-offs between 3 competing decision criteria: one to maximise the total recovered biogas; one to maximise the nutrient quality of a recoverable digestate; and one to minimise the aeration requirement as a proxy to the total operational cost. As such, this work addressed the challenge of holistic process synthesis which currently constrains the deployment of resource recovery processes to wastewater systems. Optimal nutrient recovery was realised at low COD compositions, whilst maximum biogas recovery was achieved at COD compositions near to the upper bound cited for typical brewery effluents.
Uncertainties in the brewery wastewater COD composition were accounted for by implementing a stochastic programming reformulation of the optimisation problem, thereby enabling the process synthesis to account for the inherent uncertainties in wastewater characterisation. This work therefore demonstrates the applicability of optimisation under uncertainty methods to resource recovery from FPWW thereby contributing to the literature which focuses primarily on applications to organic solid wastes. The expected effluent compositions were enforced to be below maximum concentration limits accordingly. The optimisation also enforced robustness to process feasibility constraints derived from convergence of the process simulator. The solutions to the stochastic programming formulations maximised the expected nutrient recovery or biogas recovery, or minimised the expected aeration cost. The same solution, a 274 m\({}^{3}\) UASB upstream of a UCT BNR configuration, was determined for all 3 objectives. This was suspected due to the difficulty of satisfying the expected effluent constraints over realisations of the influent COD uncertainties. The modelling framework enables machine learning models of the optimal process design to be harnessed for further operations optimisation. Specifically, GPR models provided predictions of process performance in
response to variations in the influent COD composition in addition to the uncertainties in these predictions.
### Recommendations for future work
Flexible process optimisation was recommended to determine flexible resource recovery processes in which the BNR pathway can be adjusted in response to temporal variations in the wastewater composition. Another recommendation was to integrate the uncertainties in the GPR surrogate models within the optimisation-based decision-making process. One strategy to achieve this is via minimising the upper confidence bound which can be calculated by adding factors of the GPR standard deviations to the GPR predictive mean. For example, the 95 % upper confidence bound can be obtained by adding 1.96 standard deviations to the mean predictions provided by the GPR modelling objects. This would ensure that obtained solutions simultaneously minimised the objective function and the uncertainty in solutions. The upper confidence bound could also be used within the constraint formulations to ensure that the optimisation solutions were robust to the GPR modelling uncertainties that arise due to interpolation between the training data.
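As a sketch of this recommendation, the 95 % upper confidence bound can be formed directly from a GPR predictive mean and standard deviation; scikit-learn and the placeholder data are used here purely for illustration.

```python
# Sketch of the recommended 95 % upper confidence bound: GPR mean + 1.96 * std.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.random((24, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1]                     # placeholder training data
gpr = GaussianProcessRegressor(kernel=RBF()).fit(X, y)

x_new = np.array([[0.4, 0.6]])
mean, std = gpr.predict(x_new, return_std=True)
ucb = mean + 1.96 * std                               # minimise this for risk-aware solutions
print(float(mean[0]), float(std[0]), float(ucb[0]))
```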
There exist multiple research directions to improve the capabilities of the developed methodology. These include future work to increase the number of surrogate models available within the object-orientated framework. One example to achieve this is to consider enabling the selection of more kernel functions within GPC models, particularly a linear kernel which would reduce the complexity of GPC-based NLP formulations. Another example would be to enable recurrent NN structures to improve nonlinear deep learning capabilities. A third example is to incorporate other surrogate models into the framework for regression and classification: support vector machines, random forests, and decision trees have been highlighted as additional alternatives.
Future work could also focus on improving the implementations of the GP and NN surrogate models. This could be achieved by enabling optimisation of the order of the GPR polynomial kernel. Another option is to enable the optimisation of NN parameters such as number of layers, nodes in each layer, and activation functions [112]. This could further be adapted to implement algorithms to optimise the selection of different surrogate models and their respective model structure as in [63]. The surrogate modelling capabilities could be further improved by implementing further regularisation methods such as L1 and dropout regularisation for NNs as well as early termination prior to overfitting. Cross-validation methods are another method to validate and quantify overfitting of surrogate models. Additionally, analysis of the effect of multiple NN outputs compared to multiple single-output NNs on the resulting NN-based optimisation formulations should be explored. Other adaptive sampling formulations and acquisition functions could also improve the existing codebase's capabilities. [43] provide a good discussion on the variants of GP models using different correlation and mean functions during specification of the GP prior. Additionally, [43] provide a good discussion on the computational aspects on GP modelling pertaining to handling higher dimensional problems and the non-convexity of the maximum likelihood estimator.
Finally, future work should implement the open-source methodology on more applications. These could be programmatic testing applications to inform further programmable improvements of the codebase and its wider usability. In this regard, increased documentation and demonstrative applications would be beneficial. Application of the methodology to real world engineering problems is also particularly important to realise the impact of the developed methodology in practical operating environments. To this end, it would be beneficial to combine these data-driven surrogate models with well-established mechanistic models and expert knowledge. Such hybrid models would improve transparency, interpretability and extrapolation capabilities thereby gaining the trust of practical operational engineers with a lack of computer science skills. Finally, such hybrid models should leverage the increasing volumes of available sensor data. This would remove a layer of abstraction incurred from the reliance on simulation models and thereby further increase the transparency of black box modelling frameworks.
|
2310.10846 | Clebsch-Gordan coefficients for Macdonald polynomials | In this paper we use the double affine Hecke algebra to compute the Macdonald
polynomial products $E_\ell P_m$ and $P_\ell P_m$ for type $SL_2$ and type
$GL_2$ Macdonald polynomials. Our method follows the ideas of Martha Yip but
executes a compression to reduce the sum from $2\cdot 3^{\ell-1}$ signed terms
to $2\ell$ positive terms. We show that our rule for $P_\ell P_m$ is equivalent
to a special case of the Pieri rule of Macdonald. Our method shows that
computing $E_\ell\mathbf{1}_0$ and $\mathbf{1}_0 E_\ell \mathbf{1}_0$ in terms
of a special basis of the double affine Hecke algebra provides universal
compressed formulas for multiplication by $E_\ell$ and $P_\ell$. The formulas
for a specific products $E_\ell P_m$ and $P_\ell P_m$ are obtained by
evaluating the universal formulas at $t^{-\frac12}q^{-\frac{m}{2}}$. | Aritra Bhattacharya, Arun Ram | 2023-10-16T21:39:38Z | http://arxiv.org/abs/2310.10846v1 | # Clebsch-Gordan coefficients for Macdonald polynomials
###### Abstract
In this paper we use the double affine Hecke algebra to compute the Macdonald polynomial products \(E_{\ell}P_{m}\) and \(P_{\ell}P_{m}\) for type \(SL_{2}\) and type \(GL_{2}\) Macdonald polynomials. Our method follows the ideas of Martha Yip but executes a compression to reduce the sum from \(2\cdot 3^{\ell-1}\) signed terms to \(2\ell\) positive terms. We show that our rule for \(P_{\ell}P_{m}\) is equivalent to a special case of the Pieri rule of Macdonald. Our method shows that computing \(E_{\ell}{\bf 1}_{0}\) and \({\bf 1}_{0}E_{\ell}{\bf 1}_{0}\) in terms of a special basis of the double affine Hecke algebra provides universal compressed formulas for multiplication by \(E_{\ell}\) and \(P_{\ell}\). The formulas for specific products \(E_{\ell}P_{m}\) and \(P_{\ell}P_{m}\) are obtained by evaluating the universal formulas at \(t^{-\frac{1}{2}}q^{-\frac{m}{2}}\).
+
Footnote †: AMS Subject Classifications: Primary 05E05; Secondary 33D52.
_Key words--_ Macdonald polynomials, symmetric functions, Hecke algebras
## 1 Introduction
The type \(SL_{2}\) Macdonald polynomials \(P_{\ell}(x)\) are special cases of the Askey-Wilson polynomials, sometimes called the \(q\)-ultraspherical polynomials (see [10, p.156-7]). The \(P_{\ell}(x)\) are two-parameter \(q\)-\(t\)-generalizations of the characters of finite dimensional representations of \(SU_{2}\); i.e. the characters of \(SU_{2}\) which play a pivotal role in the "standard model" in particle physics and in the analysis of Heisenberg spin chains in mathematical physics. The polynomial representation of the double affine Hecke algebra, which is the source of the type \(SL_{2}\) Macdonald polynomials, is a generalization of the "Dirac sea", a representation of the Heisenberg algebra which controls the mathematics behind the quantum harmonic oscillator (see (3.20) and Proposition 3.3 and compare to the discussion, for example, in the neighborhood of Figure 10.2 in [11]).
As in [10, §6.1 and 6.3], we denote the electronic (nonsymmetric) Macdonald polynomials for type \(SL_{2}\) by \(E_{m}\), for \(m\in\mathbb{Z}\), and the bosonic (symmetric) Macdonald polynomials for type \(SL_{2}\) are denoted \(P_{m}\), for \(m\in\mathbb{Z}_{\geq 0}\) (see [12, §1] for the motivation for the terminology 'electronic' and 'bosonic'). The type \(SL_{2}\) Macdonald polynomials are, by a coordinate transformation, "equivalent" to the type \(GL_{2}\) Macdonald polynomials. We review this coordinate transformation in Section 2 and explain how a product rule for type \(SL_{2}\) Macdonald polynomials translates to a product rule for type \(GL_{2}\) Macdonald polynomials.
Following the development of [12, §3.4], we use the calculus of the bosonic symmetrizer \({\bf 1}_{0}\) and the normalized intertwining operators \(\eta_{s_{1}}\), \(\eta_{\pi}\) to compute the elements \(E_{\ell}(X){\bf 1}_{0}\) and \({\bf 1}_{0}E_{\ell}(X){\bf 1}_{0}\) in terms of a special basis of the (localized) double affine Hecke algebra. Continuing the main conceptual idea of [13], the expansions of the elements \(E_{\ell}(X){\bf 1}_{0}\) and \({\bf 1}_{0}E_{\ell}(X){\bf 1}_{0}\) function as universal formulas for multiplying Macdonald polynomials, since they contain enough information to compute arbitrary
products \(E_{\ell}P_{m}\) and \(P_{\ell}P_{m}\). We review this calculus, in our \(SL_{2}\) setting, in section 3. The use of this calculus enables us to cast the product framework from [10] in a form which is tractable for executing the compression from \(2\cdot 3^{\ell-1}\) terms to \(2\ell\) terms.
The key computation for the proof of the product rules is done in sections 4 and 5. In section 4 we use the basic structural calculus reviewed in section 3 to compute recursions satisfied by the coefficients of the operators \(E_{\ell}(X)\mathbf{1}_{0}\) and \(\mathbf{1}_{0}E_{\ell}(X)\mathbf{1}_{0}\) when expanded in terms of the \(\{\eta^{\ell}\mathbf{1}_{0}\ |\ \ell\in\mathbb{Z}\}\) basis of the completed \(\mathbf{1}_{0}\)-projected double affine Hecke algebra. In section 5 we solve these recursions to provide product expressions for the coefficients similar to product expressions for binomial coefficients.
Yip [10, Theorem 4.2 and Theorem 4.4] gives alcove walk expansions of the products \(E_{\ell}P_{m}\) and \(P_{\ell}P_{m}\). The illustrative [10, Example 5.1] computes the alcove walk expansion of the product \(E_{3}P_{m}\) for the \(SL_{2}\) case. In this example there are 18 alcove walks which, after simplification, produce 6 terms. In general, for the product \(E_{\ell}P_{m}\) for type \(SL_{2}\), the alcove walk expansion of Yip will be a sum over \(2\cdot 3^{\ell-1}\) alcove walks which simplifies to \(2\ell\) terms.
The result of Theorem 6.2 of this paper provides an explicit closed formula for each of the \(2\ell\) terms which appear in the expansion of \(E_{\ell}P_{m}\). To our knowledge, this formula for \(E_{\ell}P_{m}\) is new, particularly the execution of the desired compression after executing the general double affine Hecke algebra method of deriving product rules given by Yip [10]. The \(q\)-\(t\)-binomial coefficients are given by
\[\begin{bmatrix}\ell\\ j\end{bmatrix}_{q,t}=\frac{\frac{(q;q)_{\ell}}{(t;q)_{\ell}}}{\frac{(q;q)_{j}} {(t;q)_{j}}\frac{(q;q)_{\ell-j}}{(t;q)_{\ell-j}}},\qquad\text{where}\qquad(a;q )_{j}=(1-a)(1-qa)(1-q^{2}a)\cdots(1-q^{j-1}a). \tag{1.1}\]
Then Theorem 6.2 proves that, for \(\ell,m\in\mathbb{Z}_{>0}\),
\[P_{\ell}P_{m} =\sum_{j=0}^{\ell}c_{j}^{(\ell)}(q^{m})\,P_{m+\ell-2j},\] \[E_{\ell}P_{m} =\sum_{j=0}^{\ell-1}a_{j}^{(\ell)}(q^{m})E_{m+\ell-2j}+b_{j}^{( \ell)}(q^{m})E_{-m+\ell-2j},\qquad\text{and}\] \[E_{-\ell}P_{m} =\sum_{j=0}^{\ell}t\cdot b_{j}^{(\ell+1)}(q^{m})E_{m-(\ell-2j)}+ a_{j}^{(\ell+1)}(q^{m})E_{-m-(\ell-2j)},\]
where
\[c_{j}^{(\ell)}(q^{m}) =\begin{bmatrix}\ell\\ j\end{bmatrix}_{q,t}\frac{(q^{m}q^{-(j-1)};q)_{j}}{(tq^{m}q^{-j};q)_{j}}\frac{( t^{2}q^{m}q^{\ell-2j};q)_{j}}{(tq^{m}q^{\ell-2j+1};q)_{j}}, \tag{1.2}\] \[a_{j}^{(\ell)}(q^{m}) =c_{j}^{(\ell)}(q^{m})\cdot\frac{(1-q^{\ell-j})}{(1-q^{\ell})} \cdot\frac{(1-tq^{m}q^{\ell-j})}{(1-tq^{m}q^{\ell-2j})}\qquad\text{and}\] \[b_{j}^{(\ell)}(q^{m}) =c_{\ell-j}^{(\ell)}(q^{m})\cdot q^{j}\cdot\frac{(1-q^{\ell-j})}{ (1-q^{\ell})}\cdot\frac{(1-tq^{m}q^{-(\ell-j)})}{(1-t^{2}q^{m}q^{-(\ell-2j)})}\]
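As a quick numerical companion to (1.1) and (1.2), the coefficients can be evaluated for generic numerical \(q\) and \(t\); the following sketch is a convenience for experimentation and is not part of the proofs.

```python
# Numerical evaluation of the q,t-binomial of (1.1) and the coefficients of (1.2).
def qpoch(a, q, j):
    """(a; q)_j = (1 - a)(1 - qa)...(1 - q^(j-1) a)."""
    out = 1.0
    for i in range(j):
        out *= 1.0 - a * q**i
    return out

def qt_binom(l, j, q, t):
    f = lambda n: qpoch(q, q, n) / qpoch(t, q, n)
    return f(l) / (f(j) * f(l - j))

def c_coeff(l, j, q, t, qm):
    return (qt_binom(l, j, q, t)
            * qpoch(qm * q**(1 - j), q, j) / qpoch(t * qm * q**(-j), q, j)
            * qpoch(t**2 * qm * q**(l - 2*j), q, j) / qpoch(t * qm * q**(l - 2*j + 1), q, j))

def a_coeff(l, j, q, t, qm):
    return (c_coeff(l, j, q, t, qm) * (1 - q**(l - j)) / (1 - q**l)
            * (1 - t * qm * q**(l - j)) / (1 - t * qm * q**(l - 2*j)))

def b_coeff(l, j, q, t, qm):
    return (c_coeff(l, l - j, q, t, qm) * q**j * (1 - q**(l - j)) / (1 - q**l)
            * (1 - t * qm * q**(-(l - j))) / (1 - t**2 * qm * q**(-(l - 2*j))))
```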
The rule for the product \(P_{\ell}P_{m}\) is equivalent to a special case of the Pieri formula given in [11, Ch. VI (6.24)]. The precise connection is derived in Proposition 2.1, reproducing work of Soojin Cho [14]. Indeed, the rule for the product \(P_{\ell}P_{m}\) is the "linearization formula" for \(q\)-ultraspherical polynomials and appears in [15, Theorem 13.3.2], where it is stated that it is an identity of Rogers from 1894.
We have taken some care to try to make our exposition so that it contains all necessary definitions and complete and thorough proofs of the results. Our goal, in hope that the powerful tools provided
by the double affine Hecke algebra will become broadly accessible and utilized, has been to make this paper so that it can be read from scratch with no previous knowledge of Macdonald polynomials or the double affine Hecke algebra. Section 7 provides explicit examples.
**Acknowledgements.** We thank Beau Anasson, Yifan Guo and Haris Rao for conversations and Mathematica computations which made it possible to establish the product formula for \(D_{j}^{(\ell)}(Y)\) which appears in (5.6). Thank you to Soojin Cho, Charles Dunkl, Dennis Stanton, and Ole Warnaar for helpful references on the orthogonal polynomial literature and to Jean-Emile Bourgine, Sasha Garbali and Jasper Stokman for very helpful comments to improve the exposition. We thank Institute of Mathematical Sciences Chennai for support which enabled A. Bhattacharya to visit University of Melbourne in February-March 2023.
## 2 Macdonald polynomials for \(SL_{2}\) and for \(GL_{2}\)
In this section we introduce the electronic and bosonic Macdonald polynomials for types \(SL_{2}\) and \(GL_{2}\) and explain the relation between them. We show how product rules for type \(SL_{2}\) Macdonald polynomials convert to product rules for type \(GL_{2}\) Macdonald polynomials. In Section 2.4 we check that the Pieri rule for multiplying bosonic polynomials given in [Mac, Ch. VI (6.24)] matches with the rule for the product \(P_{\ell}P_{m}\) stated in the introduction (and proved in Theorem 6.2).
For working with Macdonald polynomials, fix \(q,t\in\mathbb{C}^{\times}\) such that the only pair of integers \((a,b)\) for which \(q^{a}t^{b}=1\) is the pair \((a,b)=(0,0)\). Alternatively, one may think of \(q\) and \(t\) as parameters and to work with polynomials over the coefficient ring \(\mathbb{C}(q,t)\) instead of over the coefficient ring \(\mathbb{C}\).
### Macdonald polynomials for type \(SL_{2}\)
The electronic Macdonald polynomials for type \(SL_{2}\),
\[E_{\ell}(x)\in\mathbb{C}[x,x^{-1}],\quad\text{are indexed by}\quad\ell\in \mathbb{Z},\]
and the bosonic Macdonald polynomials for type \(SL_{2}\),
\[P_{\ell}(x)\in\mathbb{C}[x,x^{-1}],\quad\text{are indexed by}\quad\ell\in \mathbb{Z}_{\geq 0}.\]
Let \(\ell\in\mathbb{Z}_{\geq 0}\). Using the notation of (1.1), the electronic Macdonald polynomials are given by
\[E_{-\ell}(x)=\sum_{j=0}^{\ell}\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\frac{(1 -tq^{j})}{(1-tq^{\ell})}x^{\ell-2j}\qquad\text{and}\qquad E_{\ell}(x)=\sum_{j =0}^{\ell-1}\genfrac{[}{]}{0.0pt}{}{\ell-1}{j}_{q,t}\frac{q^{\ell-1-j}(1-tq^{ j})}{(1-tq^{\ell-1})}x^{-\ell+2j+2},\]
and the bosonic Macdonald polynomials are given by
\[P_{\ell}(x)=\sum_{j=0}^{\ell}\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}x^{\ell-2j}.\]
See §7.12 for the connection between these formulas and the formulas in [Mac03, (6.2.7), (6.2.8), (6.3.7)].
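The explicit expansions above can be checked numerically: the sketch below builds \(P_{\ell}\) from its monomial expansion and verifies the product rule \(P_{\ell}P_{m}=\sum_{j}c_{j}^{(\ell)}(q^{m})P_{m+\ell-2j}\) of Theorem 6.2 at generic numerical values of \(q\), \(t\) and \(x\) (chosen arbitrarily).

```python
# Numerical check of the P_l * P_m product rule for small l, m.
import numpy as np

def qpoch(a, q, j):
    out = 1.0
    for i in range(j):
        out *= 1.0 - a * q**i
    return out

def qt_binom(l, j, q, t):
    f = lambda n: qpoch(q, q, n) / qpoch(t, q, n)
    return f(l) / (f(j) * f(l - j))

def P(l, x, q, t):
    return sum(qt_binom(l, j, q, t) * x**(l - 2*j) for j in range(l + 1))

def c(l, j, q, t, qm):
    return (qt_binom(l, j, q, t)
            * qpoch(qm * q**(1 - j), q, j) / qpoch(t * qm * q**(-j), q, j)
            * qpoch(t**2 * qm * q**(l - 2*j), q, j) / qpoch(t * qm * q**(l - 2*j + 1), q, j))

q, t, x = 0.31, 0.67, 1.37          # generic numerical values, chosen arbitrarily
for l, m in [(1, 2), (2, 3), (3, 4)]:
    lhs = P(l, x, q, t) * P(m, x, q, t)
    rhs = sum(c(l, j, q, t, q**m) * P(m + l - 2*j, x, q, t) for j in range(l + 1))
    print(l, m, np.isclose(lhs, rhs))
```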
### Macdonald polynomials for type \(GL_{2}\)
The electronic Macdonald polynomials for type \(GL_{2}\),
\[E_{(\mu_{1},\mu_{2})}(x_{1},x_{2})\in\mathbb{C}[x_{1},x_{1}^{-1},x_{2},x_{2}^{-1 }],\quad\text{are indexed by }(\mu_{1},\mu_{2})\in\mathbb{Z}^{2},\]
and the bosonic Macdonald polynomials for type \(GL_{2}\),
\[P_{(\lambda_{1},\lambda_{2})}(x_{1},x_{2})\in\mathbb{C}[x_{1},x_{1}^{-1},x_{2},x_{2}^{-1}],\quad\text{are indexed by }\lambda=(\lambda_{1},\lambda_{2})\text{ with }\lambda_{1},\lambda_{2}\in \mathbb{Z}\text{ and }\lambda_{1}\geq\lambda_{2}.\]
The \(E_{(\mu_{1},\mu_{2})}(x_{1},x_{2})\) and \(P_{(\lambda_{1},\lambda_{2})}(x_{1},x_{2})\) are given, in terms of the Macdonald polynomials for type \(SL_{2}\), by
\[E_{(\mu_{1},\mu_{2})}(x_{1},x_{2}) =(x_{1}^{\frac{1}{2}}x_{2}^{\frac{1}{2}})^{\mu_{1}+\mu_{2}}E_{\mu _{1}-\mu_{2}}(x_{1}^{\frac{1}{2}}x_{2}^{-\frac{1}{2}})\qquad\text{and}\] \[P_{(\lambda_{1},\lambda_{2})}(x_{1},x_{2}) =(x_{1}^{\frac{1}{2}}x_{2}^{\frac{1}{2}})^{\lambda_{1}+\lambda_{2} }P_{\lambda_{1}-\lambda_{2}}(x_{1}^{\frac{1}{2}}x_{2}^{-\frac{1}{2}}). \tag{2.1}\]
Equivalently, if \(m_{1},m_{2}\in\frac{1}{2}\mathbb{Z}\) then \((x_{1}^{\frac{1}{2}}x_{2}^{\frac{1}{2}})^{2m_{2}}E_{2m_{1}}(x_{1}^{\frac{1}{2} }x_{2}^{-\frac{1}{2}})=E_{(m_{1}+m_{2},-m_{1}+m_{2})}(x_{1},x_{2}).\) Another way to express this conversion is to let
\[y=x_{1}^{\frac{1}{2}}x_{2}^{\frac{1}{2}}\text{ and }x=x_{1}^{\frac{1}{2}}x_{2}^{ -\frac{1}{2}}\qquad\text{so that}\qquad x_{1}=yx\text{ and }x_{2}=yx^{-1}. \tag{2.2}\]
Then the Macdonald polynomials for type \(SL_{2}\) are given in terms of the Macdonald polynomials for type \(GL_{2}\) by
\[E_{2m_{1}}(x)=y^{-2m_{2}}E_{(m_{1}+m_{2},-m_{1}+m_{2})}(yx,yx^{-1})=E_{(m_{1}+ m_{2},-m_{1}+m_{2})}(x,x^{-1}), \tag{2.3}\]
for \(m_{1},m_{2}\in\frac{1}{2}\mathbb{Z}\). The following picture illustrates the conversion between \(E_{m}\) and \(E_{(\mu_{1},\mu_{2})}\) given by the formulas (2.3) and (2.1).
### Converting product rules for type \(SL_{2}\) to product formulas for type \(GL_{2}\)
Assume that multiplication rules for multiplying type \(SL_{2}\) Macdonald polynomials are given by
\[E_{\ell}(x)P_{m}(x)=\sum_{j=0}^{\ell-1}a_{j}^{(\ell)}(q^{m})E_{m+\ell-2j}(x)+ \sum_{j=0}^{\ell-1}b_{j}^{(\ell)}(q^{m})E_{-m+\ell-2j}(x),\]
and
\[P_{\ell}(x)P_{m}(x)=\sum_{j=0}^{\ell}c_{j}^{(\ell)}(q^{m})P_{m+\ell-2j}(x).\]
Assume \((\nu_{1},\nu_{2})\in\mathbb{Z}^{2}\) and \((\mu_{1},\mu_{2})\in\mathbb{Z}^{2}\) with \(\mu_{1}\geq\mu_{2}\). Let
\[\ell=\nu_{1}-\nu_{2}\qquad\text{and}\qquad m=\mu_{1}-\mu_{2}\qquad\text{and} \qquad d=\mu_{1}+\mu_{2}+\nu_{1}+\nu_{2}.\]
Then
\[\begin{array}{l}m+\ell-2j+d=2(\mu_{1}+\nu_{1}-j),\\ -(m+\ell-2j)+d=2(\mu_{2}+\nu_{2}+j),\end{array}\qquad\text{and}\qquad\begin{array} []{l}-m+\ell-2j+d=2(\mu_{2}+\nu_{1}-j),\\ -(-m+\ell-2j)+d=2(\mu_{1}+\nu_{2}+j).\end{array}\]
Thus, with \(y=x_{1}^{\frac{1}{2}}x_{2}^{\frac{1}{2}}\) and \(x=x_{1}^{\frac{1}{2}}x_{2}^{-\frac{1}{2}}\) as in (2.2), the conversions in (2.1) and (2.3) give
\[P_{(\nu_{1},\nu_{2})}(x_{1},x_{2})P_{(\mu_{1},\mu_{2})}(x_{1},x_{2})=y^{\nu_{1 }+\nu_{2}}P_{\ell}(x)y^{\mu_{1}+\mu_{2}}P_{m}(x)=\sum_{j=0}^{\nu_{1}-\nu_{2}}c _{j}^{(\ell)}(q^{m})y^{d}P_{m+\ell-2j}(x)\]
\[=\sum_{j=0}^{\nu_{1}-\nu_{2}}c_{j}^{(\nu_{1}-\nu_{2})}(q^{\mu_{1}-\mu_{2}})P_{( \mu_{1}+\nu_{1}-j,\mu_{2}+\nu_{2}+j)}(x_{1},x_{2}),\]
and
\[E_{(\nu_{1},\nu_{2})}(x_{1},x_{2})P_{(\mu_{1},\mu_{2})}(x_{1},x_{ 2})=y^{\nu_{1}+\nu_{2}}E_{\nu_{1}-\nu_{2}}(x)y^{\mu_{1}+\mu_{2}}P_{\mu_{1}-\mu_ {2}}(x)=y^{d}E_{\ell}(x)P_{m}(x)\] \[=\sum_{j=0}^{\nu_{1}-\nu_{2}-1}a_{j}^{(\ell)}(q^{m})y^{d}E_{m+ \ell-2j}(x)+\sum_{j=0}^{\nu_{1}-\nu_{2}-1}b_{j}^{(\ell)}(q^{m})y^{d}E_{-m+\ell -2j}(x)\] \[=\sum_{j=0}^{\nu_{1}-\nu_{2}-1}a_{j}^{(\nu_{1}-\nu_{2})}(q^{\mu_{1 }-\mu_{2}})\,E_{(\mu_{1}+\nu_{1}-j,\mu_{2}+\nu_{2}+j)}(x_{1},x_{2})\] \[\qquad+\sum_{j=0}^{\nu_{1}-\nu_{2}-1}b_{j}^{(\nu_{1}-\nu_{2})}(q^{ \mu_{1}-\mu_{2}})\,E_{(\mu_{2}+\nu_{1}-j,\mu_{1}+\nu_{2}+j)}(x_{1},x_{2}),\]
and these are the multiplication rules for type \(GL_{2}\) Macdonald polynomials.
### Comparison of the \(GL_{2}\) case to Macdonald
A horizontal strip \(\lambda/\mu\) of length \(\ell\) is a pair \(\lambda=(\lambda_{1},\lambda_{2})\) and \(\mu=(\mu_{1},\mu_{2})\) of partitions such that
\[\mu_{2}\leq\lambda_{2}\leq\mu_{1}\leq\lambda_{1}\quad\text{and}\quad\lambda_{1}-\mu_{1}+\lambda_{2}-\mu_{2}=\ell.\]
Following [Mac, VI §6 Ex. 2a], define
\[\varphi_{\lambda/\mu}=\prod_{1\leq i\leq j\leq\ell(\lambda)}\frac{f(q^{\lambda_{i} -\lambda_{j}}t^{j-i})}{f(q^{\lambda_{i}-\mu_{j}}t^{j-i})}\frac{f(q^{\mu_{i}-\mu _{j+1}}t^{j-i})}{f(q^{\mu_{i}-\lambda_{j+1}}t^{j-i})},\qquad\text{where}\qquad f(u )=\frac{(tu;q)_{\infty}}{(qu;q)_{\infty}}\]
and \((z;q)_{\infty}=(1-z)(1-zq)(1-zq^{2})\cdots\). Then [Mac, VI §6 Ex. 2a] gives that
\[\text{if}\qquad g_{\ell}=\frac{(t;q)_{\ell}}{(q;q)_{\ell}}P_{(\ell,0)}(x_{1},x_{2})\qquad\text{then}\qquad g_{\ell}P_{(\mu_{1},\mu_{2})}=\sum_{\lambda}\varphi_{\lambda/\mu}P_{(\lambda_{1},\lambda_{2})},\]
where the sum is over \(\lambda=(\lambda_{1},\lambda_{2})\) such that \(\lambda/\mu\) is a horizontal strip of length \(\ell\). Indeed this matches our results, in view of the following Proposition.
**Proposition 2.1**.: _Let \(c_{j}^{(\ell)}(q^{m})\) be as defined in (1.2). Let \(\lambda=(\lambda_{1},\lambda_{2})\) and \(\mu=(\mu_{1},\mu_{2})\) be such that \(\lambda/\mu\) is a horizontal strip of length \(\ell\) and let \(m=\mu_{1}-\mu_{2}\) and \(j=\lambda_{2}-\mu_{2}\). Then_
\[\frac{(q;q)_{\ell}}{(t;q)_{\ell}}\varphi_{\lambda/\mu}=c_{j}^{(\ell)}(q^{m}).\]
Proof.: Letting \(\lambda_{i}=\mu_{i}+a_{i}\) gives
\[\varphi_{\lambda/\mu} =\prod_{1\leq i\leq j\leq\ell(\lambda)}\frac{f(q^{a_{i}-a_{j}}q^{ \mu_{i}-\mu_{j}}t^{j-i})}{f(q^{a_{i}}q^{\mu_{i}-\mu_{j}}t^{j-i})}\frac{f(t^{-1} q^{\mu_{i}-\mu_{j+1}}t^{j+1-i})}{f(t^{-1}q^{-a_{j+1}}q^{\mu_{i}-\mu_{j+1}}t^{j+1-i})}\] \[=\prod_{1\leq i\leq j\leq\ell(\lambda)}\frac{(tq^{a_{i}-a_{j}}q^{ \mu_{i}-\mu_{j}}t^{j-i};q)_{\infty}}{(qq^{a_{i}-a_{j}}q^{\mu_{i}-\mu_{j}}t^{j-i };q)_{\infty}}\frac{(qq^{a_{i}}q^{\mu_{i}-\mu_{j}}t^{j-i};q)_{\infty}}{(tq^{a_{ i}}q^{\mu_{i}-\mu_{j}}t^{j-i};q)_{\infty}}\] \[\qquad\cdot\frac{(q^{\mu_{i}-\mu_{j+1}}t^{j+1-i};q)_{\infty}}{(qt ^{-1}q^{a_{i}-\mu_{j+1}}t^{j+1-i};q)_{\infty}}\frac{(qt^{-1}q^{-a_{j+1}}q^{\mu_ {i}-\mu_{j+1}}t^{j+1-i};q)_{\infty}}{(q^{-a_{j+1}}q^{\mu_{i}-\mu_{j+1}}t^{j+1-i };q)_{\infty}}\] \[=\prod_{1\leq i\leq j\leq\ell(\lambda)}\frac{(tq^{a_{i}-a_{j}}q^{ \mu_{i}-\mu_{j}}t^{j-i};q)_{a_{j}}}{(qq^{a_{i}-a_{j}}q^{\mu_{i}-\mu_{j}}t^{j-i} ;q)_{a_{j}}}\frac{(qt^{-1}q^{a_{j+1}}q^{\mu_{i}-\mu_{j+1}}t^{j+1-i};q)_{a_{j+1 }}}{(q^{-a_{j+1}}q^{\mu_{i}-\mu_{j+1}}t^{j+1-i};q)_{a_{j+1}}}\]
When \(i=j\) the first factor is
\[\frac{(tq^{a_{i}-a_{j}}q^{\mu_{i}-\mu_{j}}t^{j-i};q)_{a_{j}}}{(qq^{a_{i}-a_{j }}q^{\mu_{i}-\mu_{j}}t^{j-i};q)_{a_{j}}}=\frac{(t;q)_{a_{j}}}{(q;q)_{a_{j}}},\]
and when \(j+1=n\) so that \(a_{j+1}=0\) and \(\mu_{j+1}=0\).
\[\frac{(qt^{-1}q^{-a_{j+1}}q^{\mu_{i}-\mu_{j+1}}t^{j+1-i};q)_{a_{j+1}}}{(q^{-a_ {j+1}}q^{\mu_{i}-\mu_{j+1}}t^{j+1-i};q)_{a_{j+1}}}=\frac{(qt^{-1}q^{\mu_{i}}t^ {j+1-i};q)_{0}}{(q^{\mu_{i}}t^{j+1-i};q)_{0}}=1.\]
Thus, when \(\ell(\lambda)=2\),
\[\varphi_{\lambda/\mu}=\frac{(t;q)_{a_{1}}}{(q;q)_{a_{1}}}\frac{(t;q)_{a_{2}}} {(q;q)_{a_{2}}}\cdot\frac{(tq^{a_{1}-a_{2}}q^{\mu_{1}-\mu_{2}}t^{2-1};q)_{a_{2 }}}{(q^{a_{1}-a_{2}+1}q^{\mu_{1}-\mu_{2}}t^{2-1};q)_{a_{2}}}\cdot\frac{(t^{-1} q^{-a_{2}+1}q^{\mu_{1}-\mu_{2}}t^{2-1};q)_{a_{2}}}{(q^{a_{1}-a_{2}}q^{\mu_{1}- \mu_{2}}t^{2-1};q)_{a_{2}}}.\]
Since \(m=\mu_{1}-\mu_{2}\) and \(j=\lambda_{2}-\mu_{2}=a_{2}\) then \(a_{1}-a_{2}=(a_{1}+a_{2})-2a_{2}=\ell-2j\) and
\[\frac{(q;q)_{\ell}}{(t;q)_{\ell}}\varphi_{\lambda/\mu}=\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\cdot\frac{(t^{2}q^{m}q^{\ell-2j};q)_{j}}{(tq^{m}q^{\ell-2j+1};q)_{j}}\frac{(q^{m}q^{-(j-1)};q)_{j}}{(tq^{m}q^{-j};q)_{j}}=c_{j}^{(\ell)}(q^{m}).\]
## 3 DAHA for \(SL_{2}\) and the polynomial representation
In this section we introduce the type \(SL_{2}\) double affine Hecke algebra and its polynomial representation. The double affine Hecke algebra is a source for a myriad of operators acting on polynomials. In this section we carefully establish the identities between operators that will enable us to compute products of Macdonald polynomials.
### The double affine Hecke algebra (DAHA) for type \(SL_{2}\)
Fix \(q^{\frac{1}{2}},t^{\frac{1}{2}}\in\mathbb{C}^{\times}\). Following [10, (6.1.2), (6.1.3)], the double affine Hecke algebra for \(SL_{2}\) is the \(\mathbb{C}\)-algebra \(\tilde{H}_{\rm int}\) generated by \(T_{1}^{\pm 1},X^{\pm 1},Y^{\pm 1},T_{\pi}^{\pm 1}\) with relations \(T_{1}T_{1}^{-1}=T_{1}^{-1}T_{1}=1\), \(XX^{-1}=X^{-1}X=1\), \(YY^{-1}=Y^{-1}Y=1\), \(T_{\pi}T_{\pi}^{-1}=T_{\pi}^{-1}T_{\pi}=1\) and
\[T_{\pi}=YT_{1}^{-1}=T_{1}Y^{-1},\qquad T_{\pi}XT_{\pi}^{-1}=q^{\frac{1}{2}}X^ {-1},\]
\[T_{1}XT_{1}=X^{-1},\qquad T_{1}Y^{-1}T_{1}=Y,\qquad(T_{1}-t^{\frac{1}{2}})(T_ {1}+t^{-\frac{1}{2}})=0. \tag{3.1}\]
It follows from the relations \(T_{1}XT_{1}=X^{-1}\) and \(T_{1}-T_{1}^{-1}=t^{\frac{1}{2}}-t^{-\frac{1}{2}}\) that
\[T_{1}X^{r}=X^{-r}T_{1}+(t^{\frac{1}{2}}-t^{-\frac{1}{2}})\frac{X^{r}-X^{-r}}{ 1-X^{2}},\qquad\text{for $r\in\mathbb{Z}$}. \tag{3.2}\]
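For example, taking \(r=1\) in (3.2) and using \(\frac{X-X^{-1}}{1-X^{2}}=-X^{-1}\) gives \(T_{1}X=X^{-1}T_{1}-(t^{\frac{1}{2}}-t^{-\frac{1}{2}})X^{-1}\).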
As a left module for the Laurent polynomial ring \(\mathbb{C}[Y,Y^{-1}]\), the double affine Hecke algebra \(\tilde{H}_{\rm int}\) has basis \(\{X^{k}\mid k\in\mathbb{Z}\}\sqcup\{X^{k}T_{1}\mid k\in\mathbb{Z}\}\). Letting \(\mathbb{C}(Y)\) denote the field of fractions of \(\mathbb{C}[Y,Y^{-1}]\), the _localized double affine Hecke algebra_,
\[\tilde{H}=\mathbb{C}(Y)\otimes_{\mathbb{C}[Y,Y^{-1}]}\tilde{H}_{\rm int},\]
is the algebra with \(\mathbb{C}(Y)\)-basis \(\{X^{k}\ |\ k\in\mathbb{Z}\}\sqcup\{X^{k}T_{1}\ |\ k\in\mathbb{Z}\}\) (as a left \(\mathbb{C}(Y)\)-module) and the relations in (3.1). Although the polynomial representation of \(\tilde{H}_{\rm int}\) (which is where the Macdonald polynomials live, see §3.2) is not a \(\tilde{H}\)-module, there are operators on the polynomial representation which we can source from the larger algebra \(\tilde{H}\). The operators which we wish to access are the intertwiners \(\tau_{\pi}^{\vee}\) and \(\tau_{1}^{\vee}\) and the normalized intertwiners \(\eta_{s_{1}}\), \(\eta_{\pi}\), \(\eta\) and \(\eta^{-1}\), which are defined below in (3.3), (3.9), (3.10) and (3.11).
#### 3.1.1 Intertwiners and the bosonic symmetrizer
The _intertwiners_\(\tau_{1}^{\vee}\) and \(\tau_{\pi}^{\vee}\) and the _bosonic symmetrizer_\(\mathbf{1}_{0}\) are defined by
\[\tau_{1}^{\vee}=T_{1}+t^{-\frac{1}{2}}\frac{(1-t)}{(1-Y^{-2})},\qquad\tau_{ \pi}^{\vee}=XT_{1},\qquad\text{and}\qquad\mathbf{1}_{0}=T_{1}+t^{-\frac{1}{2 }}. \tag{3.3}\]
In [10, (6.1.6) and (6.18)], \(\tau_{1}^{\vee}\) is denoted \(\alpha\) and \(\tau_{\pi}^{\vee}\) is denoted \(\beta\).
**Proposition 3.1**.: _The following relations hold in \(\tilde{H}\):_
\[\tau_{1}^{\vee}Y=Y^{-1}\tau_{1}^{\vee}, \tau_{\pi}^{\vee}Y=Y^{-1}q^{-\frac{1}{2}}\tau_{\pi}^{\vee}, \tag{3.4}\] \[(\tau_{1}^{\vee})^{2}=t^{-1}\frac{(1-tY^{2})(1-tY^{-2})}{(1-Y^{ 2})(1-Y^{-2})}, (\tau_{\pi}^{\vee})^{2}=1, \tag{3.5}\]
\[T_{1}\mathbf{1}_{0}=\mathbf{1}_{0}T_{1}=t^{\frac{1}{2}}\mathbf{1}_{0},\qquad \mathbf{1}_{0}\tau_{1}^{\vee}=\mathbf{1}_{0}t^{-\frac{1}{2}}\frac{(1-tY^{-2})}{ (1-Y^{-2})},\qquad\tau_{1}^{\vee}\mathbf{1}_{0}=t^{-\frac{1}{2}}\frac{(1-tY^{ -2})}{(1-Y^{-2})}\mathbf{1}_{0}, \tag{3.6}\]
\[\mathbf{1}_{0}^{2}=\mathbf{1}_{0}(t^{\frac{1}{2}}+t^{-\frac{1}{2}})\qquad \text{and}\qquad\mathbf{1}_{0}=\tau_{1}^{\vee}+t^{-\frac{1}{2}}\frac{(1-tY^{ 2})}{(1-Y^{2})}=\tau_{1}^{\vee}+t^{\frac{1}{2}}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{- 2})}. \tag{3.7}\]
Proof.: Using the relations in (3.1), \((\tau_{\pi}^{\vee})^{2}=XT_{1}XT_{1}=XX^{-1}=1\) and
\[\tau_{\pi}^{\vee}Y =XT_{1}Y=T_{1}^{-1}X^{-1}Y=T_{1}^{-1}q^{-\frac{1}{2}}T_{\pi}XT_{ \pi}^{-1}Y\] \[=T_{1}^{-1}q^{-\frac{1}{2}}T_{\pi}XT_{\pi}^{-1}T_{\pi}T_{1}=q^{- \frac{1}{2}}T_{1}^{-1}T_{\pi}XT_{1}=q^{-\frac{1}{2}}T_{1}^{-1}T_{\pi}\tau_{\pi }^{\vee}=q^{-\frac{1}{2}}Y^{-1}\tau_{\pi}^{\vee}.\]
Using
\[\tau_{1}^{\vee} =T_{1}+t^{-\frac{1}{2}}\frac{(1-t)}{(1-Y^{-2})}=T_{1}^{-1}+(t^{ \frac{1}{2}}-t^{-\frac{1}{2}})+t^{-\frac{1}{2}}\frac{(1-t)}{(1-Y^{-2})}\] \[=T_{1}^{-1}+(1-t)\frac{-t^{-\frac{1}{2}}(1-Y^{-2})+t^{-\frac{1}{ 2}}}{(1-Y^{-2})}=T_{1}^{-1}+t^{-\frac{1}{2}}\frac{(1-t)Y^{-2}}{(1-Y^{-2})}, \tag{3.8}\]
then
\[\tau_{1}^{\vee}Y =\Big{(}T_{1}^{-1}+t^{-\frac{1}{2}}\frac{(1-t)Y^{-2}}{(1-Y^{-2}) }\Big{)}Y=T_{1}^{-1}Y+t^{-\frac{1}{2}}\frac{(1-t)Y^{-1}}{(1-Y^{-2})}\] \[=Y^{-1}T_{1}+t^{-\frac{1}{2}}\frac{(1-t)Y^{-1}}{(1-Y^{-2})}=Y^{-1 }\Big{(}T_{1}+t^{-\frac{1}{2}}\frac{(1-t)}{(1-Y^{-2})}\Big{)}=Y^{-1}\tau_{1}^ {\vee}\]
and
\[(\tau_{1}^{\vee})^{2} =\Big{(}T_{1}+t^{-\frac{1}{2}}\frac{(1-t)}{(1-Y^{-2})}\Big{)} \tau_{1}^{\vee}=T_{1}\tau_{1}^{\vee}+\tau_{1}^{\vee}t^{-\frac{1}{2}}\frac{(1- t)}{(1-Y^{2})}\] \[=T_{1}\Big{(}T_{1}^{-1}+t^{-\frac{1}{2}}\frac{(1-t)Y^{-2}}{(1-Y^{ -2})}\Big{)}+\Big{(}T_{1}+t^{-\frac{1}{2}}\frac{(1-t)}{(1-Y^{-2})}\Big{)}t^{- \frac{1}{2}}\frac{(1-t)}{(1-Y^{2})}\] \[=1+t^{-1}\frac{(1-t)}{(1-Y^{-2})}\frac{(1-t)}{(1-Y^{2})}=\frac{(1 -Y^{-2}-Y^{2}+1+t^{-1}-2+t)}{(1-Y^{2})(1-Y^{-2})}\] \[=t^{-1}\frac{(1-tY^{2})(1-tY^{-2})}{(1-Y^{2})(1-Y^{-2})}.\]
Since \(\mathbf{1}_{0}=T_{1}+t^{-\frac{1}{2}}=T_{1}-T_{1}^{-1}+T_{1}^{-1}+t^{-\frac{1 }{2}}=(t^{\frac{1}{2}}-t^{-\frac{1}{2}})+T_{1}^{-1}+t^{-\frac{1}{2}}=T_{1}^{-1 }+t^{\frac{1}{2}}\) then
\[\mathbf{1}_{0}T_{1}=(T_{1}^{-1}+t^{\frac{1}{2}})T_{1}=1+t^{\frac{1}{2}}T_{1}=t ^{\frac{1}{2}}(T_{1}+t^{-\frac{1}{2}})=t^{\frac{1}{2}}\mathbf{1}_{0}\quad\text {and}\quad\mathbf{1}_{0}^{2}=\mathbf{1}_{0}(T_{1}+t^{-\frac{1}{2}})=\mathbf{1}_ {0}(t^{\frac{1}{2}}+t^{-\frac{1}{2}}).\]
Similarly for the product \(T_{1}\mathbf{1}_{0}\). Then
\[\mathbf{1}_{0}\tau_{1}^{\vee}=\mathbf{1}_{0}\Big{(}T_{1}+t^{-\frac{1}{2}}\frac {(1-t)}{(1-Y^{-2})}\Big{)}=\mathbf{1}_{0}\Big{(}t^{\frac{1}{2}}+t^{-\frac{1}{2}} \frac{(1-t)}{(1-Y^{-2})}\Big{)}=\mathbf{1}_{0}t^{-\frac{1}{2}}\frac{(1-tY^{-2} )}{(1-Y^{-2})}\]
and similarly for the product \(\tau_{1}^{\vee}\mathbf{1}_{0}\). Finally
\[\mathbf{1}_{0}=\tau_{1}^{\vee}-t^{-\frac{1}{2}}\frac{(1-t)}{(1-Y^{-2})}+t^{- \frac{1}{2}}=\tau_{1}^{\vee}+t^{-\frac{1}{2}}\frac{(t-Y^{-2})}{(1-Y^{-2})}= \tau_{1}^{\vee}+t^{-\frac{1}{2}}\frac{(1-tY^{2})}{(1-Y^{2})}.\]
#### 3.1.2 Normalized intertwiners
Define normalized intertwiners
\[\eta_{\pi}=\tau_{\pi}^{\vee}\qquad\text{and}\qquad\eta_{s_{1}}=t^{\frac{1}{2} }\frac{(1-Y^{-2})}{(1-tY^{-2})}\tau_{1}^{\vee}. \tag{3.9}\]
Then define
\[\eta =\eta_{\pi}\eta_{s_{1}}=\tau_{\pi}^{\vee}t^{\frac{1}{2}}\frac{(1-Y^{- 2})}{(1-tY^{-2})}\tau_{1}^{\vee}=t^{\frac{1}{2}}\frac{(1-Y^{2}q)}{(1-tY^{2}q)} \tau_{\pi}^{\vee}\tau_{1}^{\vee} \tag{3.10}\] \[\text{and}\qquad\qquad\eta^{-1} =\eta_{s_{1}}\eta_{\pi}=t^{\frac{1}{2}}\frac{(1-Y^{-2})}{(1-tY^{- 2})}\tau_{1}^{\vee}\tau_{\pi}^{\vee}. \tag{3.11}\]
**Warning:** Although \(\eta\) and \(\eta^{-1}\) are inverses of each other as elements of \(\tilde{H}\), and both are well defined operators on the polynomial representation (see Proposition 3.3), as operators on the polynomial representation they are not invertible.
**Proposition 3.2**.: _The following relations hold in \(\tilde{H}\):_
\[\eta_{\pi}^{2}=1,\qquad\eta_{s_{1}}^{2}=1,\qquad\eta\eta_{s_{1}}=\eta_{s_{1}} \eta^{-1}. \tag{3.12}\]
\[\eta_{\pi}Y=Y^{-1}q^{-\frac{1}{2}}\eta_{\pi},\qquad\eta_{s_{1}}Y=Y^{-1}\eta_{ s_{1}},\qquad\eta Y=Yq^{\frac{1}{2}}\eta, \tag{3.13}\]
\[\mathbf{1}_{0}=(1+\eta_{s_{1}})t^{-\frac{1}{2}}\frac{(1-tY^{2})}{(1-Y^{2})}, \qquad\eta_{s_{1}}\mathbf{1}_{0}=\mathbf{1}_{0},\qquad\mathbf{1}_{0}\eta_{s_{ 1}}=\mathbf{1}_{0}t^{-1}\frac{(1-tY^{-2})}{(1-t^{-1}Y^{-2})}. \tag{3.14}\]
Proof.: From (3.5), \(\eta_{\pi}^{2}=(\tau_{\pi}^{\vee})^{2}=1\). Using \(\tau_{1}^{\vee}Y=Y^{-1}\tau_{1}^{\vee}\) and the formula for \((\tau_{1}^{\vee})^{2}\) in (3.5) gives
\[\eta_{s_{1}}^{2} =t^{\frac{1}{2}}\frac{(1-Y^{-2})}{(1-tY^{-2})}\tau_{1}^{\vee}t^{ \frac{1}{2}}\frac{(1-Y^{-2})}{(1-tY^{-2})}\tau_{1}^{\vee}=t\frac{(1-Y^{-2})}{ (1-tY^{-2})}\frac{(1-Y^{2})}{(1-tY^{2})}\tau_{1}^{\vee}\tau_{1}^{\vee}\] \[=t\frac{(1-Y^{-2})}{(1-tY^{-2})}\frac{(1-Y^{2})}{(1-tY^{2})}\cdot t ^{-1}\frac{(1-tY^{2})(1-tY^{-2})}{(1-Y^{2})(1-Y^{-2})}=1.\]
Then
\[\eta\eta_{s_{1}}=\eta_{\pi}\eta_{s_{1}}\eta_{s_{1}}=\eta_{\pi}=\eta_{s_{1}} \eta_{s_{1}}\eta_{\pi}=\eta_{s_{1}}\eta^{-1}.\]
The relations \(\eta_{\pi}Y=Y^{-1}q^{-\frac{1}{2}}\eta_{\pi}\) and \(\eta_{s_{1}}Y=Y^{-1}\eta_{s_{1}}\) follow from (3.4) and
\[\eta Y=\eta_{\pi}\eta_{s_{1}}Y=\eta_{\pi}Y^{-1}\eta_{s_{1}}=Yq^{\frac{1}{2}} \eta_{\pi}\eta_{s_{1}}=Yq^{\frac{1}{2}}\eta.\]
Using (3.7),
\[\mathbf{1}_{0}=\tau_{1}^{\vee}+t^{-\frac{1}{2}}\frac{(1-tY^{2})}{(1-Y^{2})}= \left(t^{\frac{1}{2}}\tau_{1}^{\vee}\frac{(1-Y^{2})}{(1-tY^{2})}+1\right)\cdot t ^{-\frac{1}{2}}\frac{(1-tY^{2})}{(1-Y^{2})}=(\eta_{s_{1}}+1)t^{-\frac{1}{2}} \frac{(1-tY^{2})}{(1-Y^{2})}.\]
By the last identity in (3.6),
\[\eta_{s_{1}}\mathbf{1}_{0}=t^{\frac{1}{2}}\frac{(1-Y^{-2})}{(1-tY^{-2})}\tau_ {1}^{\vee}\mathbf{1}_{0}=t^{\frac{1}{2}}\frac{(1-Y^{-2})}{(1-tY^{-2})}\cdot t ^{-\frac{1}{2}}\frac{(1-tY^{-2})}{(1-Y^{-2})}\mathbf{1}_{0}=\mathbf{1}_{0}\]
and, by the second identity in (3.6),
\[\mathbf{1}_{0}\eta_{s_{1}}=\mathbf{1}_{0}t^{\frac{1}{2}}\frac{(1-Y^{-2})}{(1- tY^{-2})}\tau_{1}^{\vee}=\mathbf{1}_{0}\tau_{1}^{\vee}t^{\frac{1}{2}}\frac{(1-Y^{2})} {(1-tY^{2})}=\mathbf{1}_{0}t^{-\frac{1}{2}}\frac{(1-tY^{-2})}{(1-Y^{-2})}t^{ \frac{1}{2}}\frac{(1-Y^{2})}{(1-tY^{2})}=\mathbf{1}_{0}t^{-1}\frac{(1-tY^{-2})} {(1-t^{-1}Y^{-2})}.\]
### The polynomial representation \(\tilde{H}_{\rm int}{\bf 1}_{Y}\)
Let \(\tilde{H}_{\rm int}{\bf 1}_{Y}\) be the \(\tilde{H}_{\rm int}\) module generated by a single generator \({\bf 1}_{Y}\) with relations
\[T_{1}{\bf 1}_{Y}=t^{\frac{1}{2}}{\bf 1}_{Y}\qquad\mbox{and}\qquad T_{\pi}{\bf 1}_{Y} ={\bf 1}_{Y}.\]
Then \(Y{\bf 1}_{Y}=T_{\pi}T_{1}{\bf 1}_{Y}=t^{\frac{1}{2}}{\bf 1}_{Y}\) and
\[\tilde{H}_{\rm int}{\bf 1}_{Y}\qquad\mbox{has $\mathbb{C}$-basis}\qquad\{X^{k}{ \bf 1}_{Y}\ |\ k\in\mathbb{Z}\}.\]
Using the second relation in (3.1) and (3.2), the action of \(\tilde{H}_{\rm int}\) in the basis \(\{X^{k}{\bf 1}_{Y}\ |\ k\in\mathbb{Z}\}\) is given explicitly by
\[XX^{r}{\bf 1}_{Y}=X^{r+1}{\bf 1}_{Y},\qquad\qquad T_{\pi}X^{r}{\bf 1}_{Y}=q^{ \frac{r}{2}}X^{-r}{\bf 1}_{Y}\qquad\mbox{and}\]
\[T_{1}X^{r}{\bf 1}_{Y}=t^{\frac{1}{2}}X^{-r}{\bf 1}_{Y}+(t^{\frac{1}{2}}-t^{- \frac{1}{2}})\frac{X^{r}-X^{-r}}{1-X^{2}}{\bf 1}_{Y},\qquad\mbox{for $r\in \mathbb{Z}$.}\]
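As a check of these formulas, the relation \(Y=T_{\pi}T_{1}\) gives
\[YX{\bf 1}_{Y}=T_{\pi}T_{1}X{\bf 1}_{Y}=T_{\pi}t^{-\frac{1}{2}}X^{-1}{\bf 1}_{Y}=t^{-\frac{1}{2}}q^{-\frac{1}{2}}X{\bf 1}_{Y},\]
which is the eigenvalue for \(E_{1}(X)=X\) in (3.15) below.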
The electronic Macdonald polynomials are the elements \(E_{m}(X)\in\mathbb{C}[X,X^{-1}]\), \(m\in\mathbb{Z}\), determined by
\[YE_{m}(X){\bf 1}_{Y}=t^{-\frac{1}{2}}q^{\frac{-m}{2}}E_{m}(X){ \bf 1}_{Y},\quad\mbox{if $m\in\mathbb{Z}_{>0}$, and}\] \[YE_{-m}(X){\bf 1}_{Y}=t^{\frac{1}{2}}q^{\frac{m}{2}}E_{-m}(X){ \bf 1}_{Y},\quad\mbox{if $m\in\mathbb{Z}_{\geq 0}$,} \tag{3.15}\]
with normalization such that the coefficient of \(X^{m}\) in \(E_{m}(X)\) is 1. The electronic Macdonald polynomials are given recursively (see [10, (6.2.3)]) by \(E_{0}(X)=1\) and \(E_{1}(X)=X\) and
\[\tau_{1}^{\vee}E_{r}(X){\bf 1}_{Y}=t^{-\frac{1}{2}}E_{-r}(X){\bf 1 }_{Y},\qquad\qquad\tau_{1}^{\vee}E_{-r}(X){\bf 1}_{Y}=t^{-\frac{1}{2}}\frac{(1-tY^{2})(1 -tY^{-2})}{(1-Y^{2})(1-Y^{-2})}E_{r}(X){\bf 1}_{Y},\] \[\tau_{\pi}^{\vee}E_{r}(X){\bf 1}_{Y}=t^{-\frac{1}{2}}E_{-(r-1)}(X){ \bf 1}_{Y}.\qquad\tau_{\pi}^{\vee}E_{-r}(X){\bf 1}_{Y}=t^{\frac{1}{2}}E_{r+1}(X){\bf 1 }_{Y}, \tag{3.16}\]
for \(r\in\mathbb{Z}_{>0}\). Note that \(\tau_{\pi}^{\vee}E_{0}(X){\bf 1}_{Y}=XT_{1}{\bf 1}_{Y}=t^{\frac{1}{2}}X{\bf 1}_{Y}=t^{ \frac{1}{2}}E_{1}(X){\bf 1}_{Y}\) and
\[\tau_{\pi}^{\vee}E_{1}(X){\bf 1}_{Y}=XT_{1}X{\bf 1}_{Y}=t^{-\frac{1}{2}}XT_{1}XT _{1}{\bf 1}_{Y}=t^{-\frac{1}{2}}XX^{-1}{\bf 1}_{Y}=t^{-\frac{1}{2}}E_{0}(X){\bf 1 }_{Y}\]
and
\[\tau_{1}^{\vee}E_{0}(X){\bf 1}_{Y}=\Big{(}T_{1}+t^{-\frac{1}{2}}\frac{(1-t)}{(1- Y^{-2})}\Big{)}{\bf 1}_{Y}=\Big{(}t^{\frac{1}{2}}+t^{-\frac{1}{2}}\frac{(1-t)}{(1-t^{-1})} \Big{)}{\bf 1}_{Y}=(t^{\frac{1}{2}}-t^{\frac{1}{2}}){\bf 1}_{Y}=0. \tag{3.17}\]
As pictured below, the elements \(\tau_{1}^{\vee}\) and \(\tau_{\pi}^{\vee}\) can be used to recursively construct the electronic Macdonald polynomials.
The bosonic Macdonald polynomials \(P_{m}(X)\in\mathbb{C}[X,X^{-1}]\), for \(m\in\mathbb{Z}_{\geq 0}\), can be given [10, (6.3.10)] by
\[P_{m}(X){\bf 1}_{Y}=E_{-m}(X){\bf 1}_{Y}+\frac{t(1-q^{m})}{(1-tq^{m})}E_{m}(X){ \bf 1}_{Y}. \tag{3.18}\]
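For example, using the first relation in (3.16) and \(Y^{-2}X{\bf 1}_{Y}=tqX{\bf 1}_{Y}\),
\[E_{-1}(X){\bf 1}_{Y}=t^{\frac{1}{2}}\tau_{1}^{\vee}X{\bf 1}_{Y}=t^{\frac{1}{2}}\Big{(}t^{-\frac{1}{2}}X^{-1}+t^{-\frac{1}{2}}\frac{(1-t)}{(1-tq)}X\Big{)}{\bf 1}_{Y},\qquad\text{so that}\qquad E_{-1}(X)=X^{-1}+\frac{(1-t)}{(1-tq)}X,\]
and then (3.18) gives \(P_{1}(X)=X^{-1}+\Big{(}\frac{1-t}{1-tq}+\frac{t(1-q)}{1-tq}\Big{)}X=X+X^{-1}\).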
Applying (3.15) to (3.18) and using \(\tau_{1}^{\vee}E_{r}(X)\mathbf{1}_{Y}=t^{-\frac{1}{2}}E_{-r}(X)\mathbf{1}_{Y}\) gives, for \(m\in\mathbb{Z}_{>0}\),
\[P_{m}(X)\mathbf{1}_{Y}=t^{\frac{1}{2}}\tau_{1}^{\vee}E_{m}(X)\mathbf{1}_{Y}+t \frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}E_{m}(X)\mathbf{1}_{Y}=t^{\frac{1}{2}} \mathbf{1}_{0}E_{m}(X)\mathbf{1}_{Y}, \tag{3.19}\]
where the last equality follows from (3.7).
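As a check, for \(m=1\),
\[t^{\frac{1}{2}}\mathbf{1}_{0}E_{1}(X)\mathbf{1}_{Y}=t^{\frac{1}{2}}(T_{1}+t^{-\frac{1}{2}})X\mathbf{1}_{Y}=t^{\frac{1}{2}}(t^{-\frac{1}{2}}X^{-1}+t^{-\frac{1}{2}}X)\mathbf{1}_{Y}=(X+X^{-1})\mathbf{1}_{Y}=P_{1}(X)\mathbf{1}_{Y}.\]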
The following Proposition analyzes the action of \(\eta\) and \(\eta^{-1}\) as operators on the polynomial representation. It shows that \(\eta\) acts as a raising operator with \(\eta E_{0}(X)\mathbf{1}_{Y}=0\) and that \(\eta^{-1}\) acts as a lowering operator with \(\eta^{-1}E_{1}(X)\mathbf{1}_{Y}=0\). The operator \(\eta\) is pictured in blue and the operator \(\eta^{-1}\) is pictured in red. The coefficients below and above the arrows provide the constants which appear in the formulas for \(\eta E_{m}\), \(\eta E_{-m}\), \(\eta^{-1}E_{m}\) and \(\eta^{-1}E_{-m}\) which are derived in (3.25), (3.26), (3.27) and (3.28).
(3.20)
It is important to note that \(\eta\) and \(\eta^{-1}\) are not invertible as operators on the polynomial representation (even though they are inverses of each other as elements of \(\tilde{H}\)). This phenomenon is of the same nature as the fact that \((1-t^{-1}Y)\) is a well defined element of \(\tilde{H}\) with inverse \(\frac{1}{(1-t^{-1}Y)}\) in \(\tilde{H}\), and \((1-t^{-1}Y)\) is a well defined operator on \(\mathbb{C}[X,X^{-1}]\) that is not invertible as an operator on the polynomial representation \(\mathbb{C}[X,X^{-1}]\).
The identities
\[\eta^{-(\ell-j)}\eta^{j}E_{m}(X)\mathbf{1}_{Y}=t^{\frac{1}{2}(\ell-2j)}\cdot \mathrm{ev}_{m}\Big{(}\frac{(t^{-1}Y^{-2}q^{-(\ell-2j)};q)_{\ell-j}}{(Y^{-2}q^ {-(\ell-2j)};q)_{\ell-j}}\cdot\frac{(Y^{-2};q)_{j}}{(t^{-1}Y^{-2};q)_{j}} \Big{)}E_{m-(\ell-2j)}(X)\mathbf{1}_{Y}, \tag{3.21}\]
\[\eta^{j}\eta^{-(\ell-j)}E_{-m}(X)\mathbf{1}_{Y}=t^{-\frac{1}{2}(\ell-2j)}\cdot \mathrm{ev}_{m}\Big{(}\frac{(t^{-1}Y^{-2}q^{\ell-2j+1};q)_{j}}{(Y^{-2}q^{\ell- 2j+1};q)_{j}}\frac{(Y^{-2}q;q)_{\ell-j}}{(t^{-1}Y^{-2}q;q)_{\ell-j}}\Big{)}E_{- m-(\ell-2j)}(X)\mathbf{1}_{Y}. \tag{3.22}\]
follow from (3.23) and (3.24) of the following Proposition by replacing \(j\) with \(\ell-j\) (we keep the same conditions on \(j\) and \(\ell\) as in Proposition 3.3). They will be used in the proof of Theorem 6.2.
**Proposition 3.3**.: _As in (3.10) and (3.11), let_
\[\eta=t^{\frac{1}{2}}\frac{(1-Y^{2}q)}{(1-tY^{2}q)}\tau_{\pi}^{\vee}\tau_{1}^{ \vee}\qquad\text{and}\qquad\eta^{-1}=t^{\frac{1}{2}}\frac{(1-Y^{-2})}{(1-tY^{- 2})}\tau_{1}^{\vee}\tau_{\pi}^{\vee}.\]
_Let \(\mathrm{ev}_{m}\colon\mathbb{C}[Y,Y^{-1}]\to\mathbb{C}\) be the homomorphism given by \(\mathrm{ev}_{m}(Y)=t^{-\frac{1}{2}}q^{-\frac{1}{2}m}\) and extend \(\mathrm{ev}_{m}\) to elements of \(\mathbb{C}(Y)\) such that the denominator does not evaluate to \(0\). If \(\ell\in\mathbb{Z}_{\geq 0}\) and \(m\in\mathbb{Z}_{>0}\) and \(j\in\{0,\ldots,\ell\}\) then_
\[\eta^{-j}\eta^{\ell-j}E_{m}(X)\mathbf{1}_{Y}=t^{-\frac{1}{2}(\ell-2j)}\cdot \mathrm{ev}_{m}\Big{(}\frac{(t^{-1}Y^{-2}q^{\ell-2j};q)_{j}}{(Y^{-2}q^{\ell-2j };q)_{j}}\cdot\frac{(Y^{-2};q)_{\ell-j}}{(t^{-1}Y^{-2};q)_{\ell-j}}\Big{)}E_{m+ \ell-2j}(X)\mathbf{1}_{Y}, \tag{3.23}\]
\[\eta^{\ell-j}\eta^{-j}E_{-m}(X)\mathbf{1}_{Y}=t^{\frac{1}{2}(\ell-2j)}\cdot \mathrm{ev}_{m}\Big{(}\frac{(t^{-1}Y^{-2}q^{-(\ell-2j)+1};q)_{\ell-j}}{(Y^{-2}q ^{-(\ell-2j)+1};q)_{\ell-j}}\frac{(Y^{-2}q;q)_{j}}{(t^{-1}Y^{-2}q;q)_{j}}\Big{)} E_{-m+\ell-2j}(X)\mathbf{1}_{Y}. \tag{3.24}\]
Proof.: Assume \(m\in\mathbb{Z}_{>0}\). By (3.16) and (3.15),
\[\eta E_{m}(X)\mathbf{1}_{Y} =t^{\frac{1}{2}}\frac{(1-Y^{2}q)}{(1-tY^{2}q)}\tau_{\pi}^{\vee} \tau_{1}^{\vee}E_{m}(X)\mathbf{1}_{Y}=t^{\frac{1}{2}}\frac{(1-Y^{2}q)}{(1-tY^{2 }q)}\tau_{\pi}^{\vee}t^{-\frac{1}{2}}E_{-m}(X)\mathbf{1}_{Y}\] \[=t^{\frac{1}{2}}t^{-\frac{1}{2}}\frac{(1-Y^{2}q)}{(1-tY^{2}q)}t^{ \frac{1}{2}}E_{m+1}(X)\mathbf{1}_{Y}=t^{\frac{1}{2}}\frac{(1-t^{-1}q^{-(m+1)}q )}{(1-tt^{-1}q^{-(m+1)}q)}E_{m+1}(X)\mathbf{1}_{Y}\] \[=t^{\frac{1}{2}}\frac{(1-t^{-1}q^{-m})}{(1-q^{-m})}E_{m+1}(X) \mathbf{1}_{Y}=t^{-\frac{1}{2}}\frac{(1-tq^{m})}{(1-q^{m})}E_{m+1}(X)\mathbf{1 }_{Y}. \tag{3.25}\]
Thus, for \(\ell\in\mathbb{Z}_{\geq 0}\) and \(m\in\mathbb{Z}_{>0}\),
\[\eta^{\ell}E_{m}(X)\mathbf{1}_{Y} =t^{-\frac{1}{2}\ell}\frac{(1-tq^{m})(1-tq^{m+1})\cdots(1-tq^{m+ \ell-1})}{(1-q^{m})(1-q^{m+1})\cdots(1-q^{m+\ell-1})}E_{m+\ell}(X)\mathbf{1}_{Y}\] \[=t^{-\frac{1}{2}\ell}\mathrm{ev}_{m}\Big{(}\frac{(Y^{-2};q)_{\ell }}{(t^{-1}Y^{-2};q)_{\ell}}\Big{)}E_{m+\ell}(X)\mathbf{1}_{Y}.\]
Assume \(m\in\mathbb{Z}_{>0}\). Using (3.16), (3.4) and (3.15),
\[\eta E_{-m}(X)\mathbf{1}_{Y} =t^{\frac{1}{2}}\frac{(1-Y^{2}q)}{(1-tY^{2}q)}\tau_{\pi}^{\vee} \tau_{1}^{\vee}E_{-m}(X)\mathbf{1}_{Y}=t^{\frac{1}{2}}\frac{(1-Y^{2}q)}{(1-tY^ {2}q)}\tau_{\pi}^{\vee}t^{-\frac{1}{2}}\frac{(1-tY^{2})(1-tY^{-2})}{(1-Y^{2})( 1-Y^{-2})}E_{m}(X)\mathbf{1}_{Y}\] \[=\frac{(1-q^{-m})(1-t^{2}q^{m})}{(1-t^{-1}q^{-m})(1-tq^{m})}\frac {(1-Y^{2}q)}{(1-tY^{2}q)}\tau_{\pi}^{\vee}E_{m}(X)\mathbf{1}_{Y}\] \[=\frac{(1-q^{-m})(1-t^{2}q^{m})}{(1-t^{-1}q^{-m})(1-tq^{m})}\frac {(1-Y^{2}q)}{(1-tY^{2}q)}t^{-\frac{1}{2}}E_{-(m-1)}\mathbf{1}_{Y}=t^{\frac{1}{ 2}}\frac{(1-q^{m})}{(1-tq^{m})}E_{-m+1}(X)\mathbf{1}_{Y}. \tag{3.26}\]
By (3.17),
\[\eta E_{0}(X)\mathbf{1}_{Y}=t^{\frac{1}{2}}\frac{(1-Y^{2}q)}{(1-tY^{2}q)}\tau _{\pi}^{\vee}\tau_{1}^{\vee}E_{0}(X)\mathbf{1}_{Y}=0=t^{\frac{1}{2}}\frac{(1- q^{0})}{(1-tq^{0})}E_{-0+1}(X)\mathbf{1}_{Y}.\]
Thus, for \(\ell\in\mathbb{Z}_{\geq 0}\) and \(m\in\mathbb{Z}_{\geq 0}\),
\[\eta^{\ell}E_{-m}(X)\mathbf{1}_{Y} =t^{\frac{1}{2}\ell}\frac{(1-q^{m-\ell+1..m})}{(1-tq^{m-\ell+1..m })}E_{-m+\ell}(X)\mathbf{1}_{Y}\] \[=t^{\frac{1}{2}\ell}\mathrm{ev}_{m}\Big{(}\frac{(1-q^{m-\ell+1}) \cdots(1-q^{m-1})(1-q^{m})}{(1-tq^{m-\ell+1})\cdots(1-tq^{m-1})(1-tq^{m})} \Big{)}E_{-m+\ell}(X)\mathbf{1}_{Y}\] \[=t^{\frac{1}{2}\ell}\mathrm{ev}_{m}\Big{(}\frac{(t^{-1}Y^{-2}q^{-( \ell-1)};q)_{\ell}}{(Y^{-2}q^{-(\ell-1)};q)_{\ell}}\Big{)}E_{-m+\ell}(X)\mathbf{ 1}_{Y},\]
where the right hand side evaluates to \(0\) if \(\ell>m\) (because the denominator factors are all nonzero and the numerator contains a factor of \((1-q^{0})=1-1=0\)).
Assume \(m\in\mathbb{Z}_{>0}\). By (3.16) and (3.15),
\[\eta^{-1}E_{m}(X)\mathbf{1}_{Y} =t^{\frac{1}{2}}\frac{(1-Y^{-2})}{(1-tY^{-2})}\tau_{1}^{\vee}\tau_{ \pi}^{\vee}E_{m}(X)\mathbf{1}_{Y}=t^{\frac{1}{2}}\frac{(1-Y^{-2})}{(1-tY^{-2})} \tau_{1}^{\vee}t^{-\frac{1}{2}}E_{-(m-1)}(X)\mathbf{1}_{Y}\] \[=\frac{(1-Y^{-2})}{(1-tY^{-2})}t^{-\frac{1}{2}}\frac{(1-tY^{2})(1- tY^{-2})}{(1-Y^{2})(1-Y^{-2})}E_{m-1}(X)\mathbf{1}_{Y}\] \[=\frac{(1-tq^{m-1})}{(1-t^{2}q^{m-1})}t^{-\frac{1}{2}}\frac{(1-q^ {-(m-1)})(1-t^{2}q^{m-1})}{(1-t^{-1}q^{-(m-1)})(1-tq^{m-1})}E_{m-1}(X)\mathbf{1 }_{Y}\] \[=t^{\frac{1}{2}}\frac{(1-q^{m-1})}{(1-tq^{m-1})}E_{m-1}(X) \mathbf{1}_{Y}. \tag{3.27}\]
In particular, \(\eta^{-1}E_{1}(X)\mathbf{1}_{Y}=0\). Thus, for \(\ell\in\mathbb{Z}_{\geq 0}\) and \(m\in\mathbb{Z}_{>0}\),
\[\eta^{-\ell}E_{m}(X)\mathbf{1}_{Y} =t^{\frac{1}{2}\ell}\frac{(1-q^{m-\ell})\cdots(1-q^{m-2})(1-q^{m-1})}{(1-tq^{m-\ell})\cdots(1-tq^{m-2})(1-tq^{m-1})}E_{m-\ell}(X)\mathbf{1}_{Y}\] \[=t^{\frac{1}{2}\ell}\mathrm{ev}_{m}\Big{(}\frac{(t^{-1}Y^{-2}q^{-\ell};q)_{\ell}}{(Y^{-2}q^{-\ell};q)_{\ell}}\Big{)}E_{m-\ell}(X)\mathbf{1}_{Y},\]
where the right hand side evaluates to \(0\) if \(\ell\geq m\).
Assume \(m\in\mathbb{Z}_{\geq 0}\). By (3.16) and (3.15),
\[\eta^{-1}E_{-m}(X)\mathbf{1}_{Y} =t^{\frac{1}{2}}\frac{(1-Y^{-2})}{(1-tY^{-2})}\tau_{1}^{\vee}\tau _{\pi}^{\vee}E_{-m}(X)\mathbf{1}_{Y}=t^{\frac{1}{2}}\frac{(1-Y^{-2})}{(1-tY^{ -2})}\tau_{1}^{\vee}t^{\frac{1}{2}}E_{m+1}(X)\mathbf{1}_{Y}\] \[=t\frac{(1-Y^{-2})}{(1-tY^{-2})}t^{-\frac{1}{2}}E_{-(m+1)}(X) \mathbf{1}_{Y}=t^{\frac{1}{2}}\frac{(1-t^{-1}q^{-(m+1)})}{(1-tt^{-1}q^{-(m+1) })}E_{-(m+1)}(X)\mathbf{1}_{Y}\] \[=t^{\frac{1}{2}}\frac{(1-t^{-1}q^{-m-1})}{(1-q^{-m-1})}E_{-m-1}(X )\mathbf{1}_{Y}=t^{-\frac{1}{2}}\frac{(1-tq^{m+1})}{(1-q^{m+1})}E_{-m-1}(X) \mathbf{1}_{Y}. \tag{3.28}\]
Thus, for \(\ell\in\mathbb{Z}_{\geq 0}\) and \(m\in\mathbb{Z}_{\geq 0}\),
\[\eta^{-\ell}E_{-m}(X)\mathbf{1}_{Y} =t^{-\frac{1}{2}\ell}\cdot\frac{(1-tq^{m+1})(1-q^{m+2})\cdots(1-q^ {m+\ell})}{(1-q^{m+1})(1-q^{m+2})\cdots(1-q^{m+\ell})}E_{-m-\ell}(X)\mathbf{1} _{Y}\] \[=t^{-\frac{1}{2}\ell}\mathrm{ev}_{m}\Big{(}\frac{(Y^{-2}q;q)_{ \ell}}{(t^{-1}Y^{-2}q;q)_{\ell}}\Big{)}E_{-m-\ell}(X)\mathbf{1}_{Y}.\]
### Some identities in \(\tilde{H}\)
**Proposition 3.4**.: _Let \(\ell\in\mathbb{Z}_{>0}\). As elements of \(\tilde{H}_{\mathrm{int}}\),_
\[E_{-\ell}(X)\mathbf{1}_{0}=t^{\frac{1}{2}}\Big{(}T_{1}^{-1}+t^{-\frac{1}{2}} \frac{(1-t)q^{\ell}t}{(1-q^{\ell}t)}\Big{)}E_{\ell}(X)\mathbf{1}_{0},\qquad \quad E_{\ell+1}(X)\mathbf{1}_{0}=t^{-\frac{1}{2}}\tau_{\pi}^{\vee}E_{-\ell}(X )\mathbf{1}_{0}, \tag{3.29}\]
_and_
\[P_{\ell}(X)\mathbf{1}_{0}=t^{\frac{1}{2}}\mathbf{1}_{0}E_{\ell}(X)\mathbf{1}_{0}. \tag{3.30}\]
_Additionally, \(E_{1}(X)\mathbf{1}_{0}=t^{-\frac{1}{2}}\tau_{\pi}^{\vee}E_{0}(X)\mathbf{1}_{0}\)._
Proof.: (a) Let \(Q_{1}(X),Q_{2}(X)\in\mathbb{C}[X,X^{-1}]\) be such that
\[t^{\frac{1}{2}}\Big{(}T_{1}^{-1}+t^{-\frac{1}{2}}\frac{(1-t)q^{\ell}t}{(1-q^{ \ell}t)}\Big{)}E_{\ell}(X)=Q_{1}(X)T_{1}+Q_{2}(X).\]
Then, by (3.16),
\[E_{-\ell}(X)\mathbf{1}_{Y} =t^{\frac{1}{2}}\tau_{1}^{\vee}E_{\ell}(X)\mathbf{1}_{Y}=t^{ \frac{1}{2}}\Big{(}T_{1}^{-1}+t^{-\frac{1}{2}}\frac{(1-t)Y^{-2}}{(1-Y^{-2})} \Big{)}E_{\ell}(X)\mathbf{1}_{Y}\] \[=t^{\frac{1}{2}}\Big{(}T_{1}^{-1}+t^{-\frac{1}{2}}\frac{(1-t)q^{ \ell}t}{(1-q^{\ell}t)}\Big{)}E_{\ell}(X)\mathbf{1}_{Y}=(Q_{1}(X)t^{\frac{1}{2} }+Q_{2}(X))\mathbf{1}_{Y}.\]
Since \(\{X^{k}\mathbf{1}_{Y}\ |\ k\in\mathbb{Z}\}\) is a basis of \(\tilde{H}\mathbf{1}_{Y}\) then \(E_{-\ell}(X)=Q_{1}(X)t^{\frac{1}{2}}+Q_{2}(X).\) So
\[E_{-\ell}(X)\mathbf{1}_{0} =(Q_{1}(X)t^{\frac{1}{2}}+Q_{2}(X))\mathbf{1}_{0}\] \[=(Q_{1}(X)T_{1}+Q_{2}(X))\mathbf{1}_{0}=t^{\frac{1}{2}}\Big{(}T_{ 1}^{-1}+t^{-\frac{1}{2}}\frac{(1-t)q^{\ell}t}{(1-q^{\ell}t)}\Big{)}E_{\ell}(X )\mathbf{1}_{0}.\]
(b) Let \(Q_{1}(X),Q_{2}(X)\in\mathbb{C}[X,X^{-1}]\) such that \(t^{-\frac{1}{2}}\tau_{\pi}^{\vee}E_{-\ell}(X)=XT_{1}E_{-\ell}(X)=Q_{1}(X)T_{1} +Q_{2}(X).\) Then, by (3.16),
\[E_{\ell+1}(X)\mathbf{1}_{Y}=t^{-\frac{1}{2}}\tau_{\pi}^{\vee}E_{-\ell}(X) \mathbf{1}_{Y}=(Q_{1}(X)T_{1}+Q_{2}(X))\mathbf{1}_{Y}=(Q_{1}(X)t^{\frac{1}{2} }+Q_{2}(X))\mathbf{1}_{Y}.\]
Since \(\{X^{k}\mathbf{1}_{Y}\ |\ k\in\mathbb{Z}\}\) is a basis of \(\tilde{H}\mathbf{1}_{Y}\) then \(E_{\ell+1}(X)=(Q_{1}(X)t^{\frac{1}{2}}+Q_{2}(X))\). So
\[E_{\ell+1}(X)\mathbf{1}_{0}=(Q_{1}(X)t^{\frac{1}{2}}+Q_{2}(X))\mathbf{1}_{0}=( Q_{1}(X)T_{1}+Q_{2}(X))\mathbf{1}_{0}=t^{-\frac{1}{2}}\tau_{\pi}^{\vee}E_{-\ell}(X )\mathbf{1}_{0}.\]
(c) Let \(Q_{1}(X),Q_{2}(X)\) be such that \(t^{\frac{1}{2}}\mathbf{1}_{0}E_{\ell}(X)=Q_{1}(X)T_{1}+Q_{2}(X)\). Then
\[P_{\ell}(X)\mathbf{1}_{Y}=t^{\frac{1}{2}}\mathbf{1}_{0}E_{\ell}(X)\mathbf{1}_ {Y}=(Q_{1}(X)T_{1}+Q_{2}(X))\mathbf{1}_{Y}=(Q_{1}(X)t^{\frac{1}{2}}+Q_{2}(X)) \mathbf{1}_{Y}.\]
So \(P_{\ell}(X)=(Q_{1}(X)t^{\frac{1}{2}}+Q_{2}(X))\) and
\[P_{\ell}(X)\mathbf{1}_{0}=(Q_{1}(X)t^{\frac{1}{2}}+Q_{2}(X))\mathbf{1}_{0}=(Q_ {1}(X)T_{1}+Q_{2}(X))\mathbf{1}_{0}=t^{\frac{1}{2}}\mathbf{1}_{0}E_{\ell}(X) \mathbf{1}_{0}.\]
**Proposition 3.5**.: _Let_
\[c(Y)=\frac{(1-tY^{2})}{(1-Y^{2})}\qquad\text{and}\qquad F_{\ell}(Y)=\frac{(1-t )}{(1-tq^{\ell})}\frac{(1-tY^{2}q^{\ell})}{(1-Y^{2})},\ \ \text{for}\ \ell\in\mathbb{Z}_{>0}.\]
_Then, as elements of \(\tilde{H}\),_
\[E_{-\ell}(X)\mathbf{1}_{0}=(\eta_{s_{1}}c(Y)+F_{\ell}(Y))E_{\ell}(X)\mathbf{1} _{0}\quad\text{and}\quad E_{\ell+1}(X)\mathbf{1}_{0}=t^{-\frac{1}{2}}\eta_{\pi }E_{-\ell}(X)\mathbf{1}_{0}. \tag{3.31}\]
Proof.: Using the first identity in (3.29), (3.8) and (3.9),
\[E_{-\ell}(X)\mathbf{1}_{0} =t^{\frac{1}{2}}\Big{(}T_{1}^{-1}+t^{-\frac{1}{2}}\frac{(1-t)q^{ \ell}t}{(1-q^{\ell}t)}\Big{)}E_{\ell}(X)\mathbf{1}_{0}\] \[=t^{\frac{1}{2}}\Big{(}\tau_{1}^{\vee}-t^{-\frac{1}{2}}\frac{(1-t )Y^{-2}}{(1-Y^{-2})}+t^{-\frac{1}{2}}\frac{(1-t)q^{\ell}t}{(1-q^{\ell}t)}\Big{)} E_{\ell}(X)\mathbf{1}_{0}\] \[=t^{\frac{1}{2}}\Big{(}t^{-\frac{1}{2}}\frac{(1-tY^{-2})}{(1-Y^{- 2})}\eta_{s_{1}}+t^{-\frac{1}{2}}\frac{(1-t)(-Y^{-2}+q^{\ell}tY^{-2}+q^{\ell}t- q^{\ell}tY^{-2})}{(1-Y^{-2})(1-q^{\ell}t)}\Big{)}E_{\ell}(X)\mathbf{1}_{0}\] \[=t^{\frac{1}{2}}\Big{(}t^{-\frac{1}{2}}\eta_{s_{1}}\frac{(1-tY^{ 2})}{(1-Y^{2})}+t^{-\frac{1}{2}}\frac{(1-t)(-1+q^{\ell}tY^{2})}{(Y^{2}-1)(1-q ^{\ell}t)}\Big{)}E_{\ell}(X)\mathbf{1}_{0}\] \[=\Big{(}\eta_{s_{1}}\frac{(1-tY^{2})}{(1-Y^{2})}+\frac{(1-t)(1-tY ^{2}q^{\ell})}{(1-tq^{\ell})(1-Y^{2})}\Big{)}E_{\ell}(X)\mathbf{1}_{0},\]
where the next to last equality follows from the second identity in (3.13). Then, by (3.9), the second identity in (3.29) and (3.10),
\[E_{\ell+1}(X)\mathbf{1}_{0}=t^{-\frac{1}{2}}\eta_{\pi}E_{-\ell}(X)\mathbf{1}_ {0}=t^{-\frac{1}{2}}\Big{(}\eta\frac{(1-tY^{2})}{(1-Y^{2})}+\eta_{\pi}\frac{(1 -t)(1-tY^{2}q^{\ell})}{(1-tq^{\ell})(1-Y^{2})}\Big{)}E_{\ell}(X)\mathbf{1}_{0}.\]
## 4 Operator expansions
Let \(\mathbb{C}(Y)\) be the field of fractions of \(\mathbb{C}[Y,Y^{-1}]\). As indicated in Section 3.1, as a left \(\mathbb{C}(Y)\)-module, the localised double affine Hecke algebra
\[\tilde{H}\quad\text{has $\mathbb{C}(Y)$-basis}\qquad\{X^{k}\ |\ k\in\mathbb{Z}\}\cup\{X^{k}T_{1}\ |\ k\in\mathbb{Z}\}.\]
Then \(\tilde{H}\mathbf{1}_{0}\) is a \(\mathbb{C}(Y)\)-subspace of \(\tilde{H}\) and
\[\tilde{H}\mathbf{1}_{0}\quad\text{has $\mathbb{C}(Y)$-basis}\qquad\{X^{k} \mathbf{1}_{0}\ |\ k\in\mathbb{Z}\},\]
since, by the first relation in (3.6), \(T_{1}\mathbf{1}_{0}=t^{\frac{1}{2}}\mathbf{1}_{0}\). The sets
\[\{E_{k}(X)\mathbf{1}_{0}\ |\ k\in\mathbb{Z}\}\qquad\text{and}\qquad\{\eta^{k}\mathbf{1}_{0}\ |\ k\in\mathbb{Z}\}\qquad\text{are also $\mathbb{C}(Y)$-bases of $\tilde{H}\mathbf{1}_{0}$},\]
and the results of this section and the next provide explicit product formulas for the transition coefficients between these bases.
### Definition of \(D_{j}^{(\ell)}(Y)\) and \(K_{j}^{(\ell)}(Y)\)
Define functions \(D_{j}^{(\ell-1)}(Y)\) for \(\ell\in\mathbb{Z}_{>0}\) and \(D_{j}^{(-\ell)}(Y)\) for \(\ell\in\mathbb{Z}_{\geq 0}\) by the expansions
\[E_{\ell}(X)\mathbf{1}_{0}=\sum_{j=0}^{\ell-1}\eta^{\ell-2j}D_{j}^{(\ell-1)}(Y) \mathbf{1}_{0}\qquad\text{and}\qquad E_{-\ell}(X)\mathbf{1}_{0}=\sum_{j=0}^{ \ell}\eta^{-\ell+2j}D_{j}^{(-\ell)}(Y)\mathbf{1}_{0}. \tag{4.1}\]
Define \(K_{j}^{(\ell)}(Y)\) for \(\ell\in\mathbb{Z}_{\geq 0}\) and \(j\in\{0,1,\dots,\ell\}\) by
\[\mathbf{1}_{0}E_{\ell}(X)\mathbf{1}_{0}=\sum_{j=0}^{\ell}\mathbf{1}_{0}\eta^{ \ell-2j}K_{j}^{(\ell)}(Y). \tag{4.2}\]
For example,
\[E_{3}(X)\mathbf{1}_{0}= \eta^{3}D_{0}^{(2)}(Y)\mathbf{1}_{0}+\eta D_{1}^{(2)}(Y)\mathbf{1}_{ 0}+\eta^{-1}D_{2}^{(2)}(Y)\mathbf{1}_{0},\] \[E_{-3}(X)\mathbf{1}_{0}= \eta^{3}D_{3}^{(-3)}(Y)\mathbf{1}_{0}+\eta D_{2}^{(-3)}(Y)\mathbf{1 }_{0}+\eta^{-1}D_{1}^{(-3)}(Y)\mathbf{1}_{0}+\eta^{-3}D_{0}^{(-3)}(Y)\mathbf{1 }_{0},\]
and
\[\mathbf{1}_{0}E_{3}(X)\mathbf{1}_{0}=\mathbf{1}_{0}\eta^{3}K_{0}^{(3)}(Y)+ \mathbf{1}_{0}\eta K_{1}^{(3)}(Y)+\mathbf{1}_{0}\eta^{-1}K_{2}^{(3)}(Y)+ \mathbf{1}_{0}\eta^{-3}K_{3}^{(3)}(Y).\]
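For \(\ell=1\) the first expansion in (4.1) can be checked directly: since \(\eta_{s_{1}}\mathbf{1}_{0}=\mathbf{1}_{0}\) by (3.14),
\[\eta\mathbf{1}_{0}=\eta_{\pi}\eta_{s_{1}}\mathbf{1}_{0}=\tau_{\pi}^{\vee}\mathbf{1}_{0}=XT_{1}\mathbf{1}_{0}=t^{\frac{1}{2}}X\mathbf{1}_{0},\qquad\text{so that}\qquad E_{1}(X)\mathbf{1}_{0}=X\mathbf{1}_{0}=\eta\,t^{-\frac{1}{2}}\mathbf{1}_{0},\]
giving \(D_{0}^{(0)}(Y)=t^{-\frac{1}{2}}\), as in Proposition 4.1 below.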
See Section 7.4 for examples of the first few of the functions \(D_{j}^{(\ell)}(Y)\) and \(K_{j}^{(\ell)}(Y)\). Proposition 4.2 below provides a formula for the \(K_{j}^{(\ell)}(Y)\) in terms of the \(D_{j}^{(\ell)}(Y)\).
### A recursion for the \(D_{j}^{(\ell)}(Y)\)
The following Proposition provides recursions determining the \(D_{j}^{(\ell)}(Y)\), showing that the \(D_{j}^{(\ell)}\) for \(\ell\in\mathbb{Z}_{\geq 0}\) and \(j\in\{0,\ldots,\ell\}\) form something like a Pascal triangle,
\[\begin{array}{ccccccc} & & & D_{0}^{(0)} & & & \\ & & D_{0}^{(1)} & & D_{1}^{(1)} & & \\ & D_{0}^{(2)} & & D_{1}^{(2)} & & D_{2}^{(2)} & \\ D_{0}^{(3)} & & D_{1}^{(3)} & & D_{2}^{(3)} & & D_{3}^{(3)}\end{array}\]
**Proposition 4.1**.: _Let \(D_{j}^{(\ell)}(Y)\) and \(D_{j}^{(-\ell)}(Y)\) be as defined in (4.1)._
1. _If_ \(\ell\in\mathbb{Z}_{>0}\) _and_ \(j\in\{0,\ldots,\ell\}\) _then_ \(D_{j}^{(-\ell)}(Y)=t^{\frac{1}{2}}D_{j}^{(\ell)}(Y^{-1})\)_._
2. _The_ \(D_{j}^{(\ell)}(Y)\) _satisfy, and are determined by,_ \(D_{0}^{(0)}(Y)=t^{-\frac{1}{2}}\) _and the recursion_ \[D_{0}^{(\ell)}=t^{\frac{1}{2}}\frac{(1-t^{-1}Y^{-2}q^{\ell})}{(1-Y^{-2}q^{\ell })}D_{0}^{(\ell-1)}(Y),\qquad D_{\ell}^{(\ell)}=t^{-\frac{1}{2}}\frac{(1-t)(1- tY^{-2})}{(1-tq^{\ell})(1-Y^{-2}q^{-\ell})}D_{0}^{(\ell-1)}(Y^{-1}),\] \[D_{j}^{(\ell)}(Y)=t^{\frac{1}{2}}\frac{(1-t^{-1}Y^{-2}q^{\ell-2j})}{(1-Y^{-2 }q^{\ell-2j})}D_{j}^{(\ell-1)}(Y)+t^{-\frac{1}{2}}\frac{(1-t)(1-q^{\ell}tY^{-2 }q^{\ell-2j})}{(1-tq^{\ell})(1-Y^{-2}q^{\ell-2j})}D_{\ell-j}^{(\ell-1)}(Y^{-1}).\] (4.3)
Proof.: (a) Using the first relation in (4.1), the second relation in (3.29), the second relation in (4.1),
\[\sum_{j=0}^{\ell}\eta^{\ell+1-2j}D_{j}^{(\ell)}(Y)\mathbf{1}_{0}=E_{\ell+1}(X) \mathbf{1}_{0}=t^{-\frac{1}{2}}\tau_{\pi}^{\vee}E_{-\ell}(X)\mathbf{1}_{0}=t^{ -\frac{1}{2}}\eta_{\pi}\sum_{j=0}^{\ell}\eta^{-\ell+2j}D_{j}^{(-\ell)}(Y) \mathbf{1}_{0}.\]
By (3.12), (3.13), and the second relation in (3.14),
\[\eta_{\pi}\eta^{-\ell+2j}D_{j}^{(-\ell)}(Y)\mathbf{1}_{0} =\eta_{\pi}\eta_{s_{1}}\eta_{s_{1}}\eta^{-\ell+2j}D_{j}^{(-\ell)}( Y)\mathbf{1}_{0}=\eta\eta_{s_{1}}\eta^{-\ell+2j}D_{j}^{(-\ell)}(Y)\mathbf{1}_{0}\] \[=\eta\eta^{\ell-2j}D_{j}^{(-\ell)}(Y^{-1})\eta_{s_{1}}\mathbf{1}_ {0}=\eta^{\ell-2j+1}D_{j}^{(-\ell)}(Y^{-1})\mathbf{1}_{0}.\]
So \(\sum_{j=0}^{\ell}\eta^{\ell+1-2j}D_{j}^{(\ell)}(Y)\mathbf{1}_{0}=t^{-\frac{1}{2 }}\sum_{j=0}^{\ell}\eta^{\ell-2j+1}D_{j}^{(-\ell)}(Y^{-1})\mathbf{1}_{0}\), giving \(D_{j}^{(\ell)}(Y)=t^{-\frac{1}{2}}D_{j}^{(-\ell)}(Y^{-1})\).
(b) Let
\[c(Y)=\frac{(1-tY^{2})}{(1-Y^{2})}\qquad\text{and}\qquad F_{\ell}(Y)=\frac{(1-t )}{(1-tq^{\ell})}\frac{(1-q^{\ell}tY^{2})}{(1-Y^{2})},\quad\text{for $\ell\in\mathbb{Z}_{>0}$}.\]
By (4.1) and the first relation in (3.31),
\[\sum_{j=0}^{\ell}\eta^{-\ell+2j}D_{j}^{(-\ell)}(Y)\mathbf{1}_{0} =E_{-\ell}(X)\mathbf{1}_{0}=(\eta_{s_{1}}c(Y)+F_{\ell}(Y))E_{\ell}( X)\mathbf{1}_{0}\] \[=(\eta_{s_{1}}c(Y)+F_{\ell}(Y))\sum_{j=0}^{\ell-1}\eta^{\ell-2j}D_ {j}^{(\ell-1)}(Y)\mathbf{1}_{0}\]
By (3.13) and the last relation in (3.12),
\[\eta_{s_{1}}c(Y)\eta^{\ell-2j}D_{j}^{(\ell-1)}(Y) =\eta_{s_{1}}\eta^{\ell-2j}c(q^{-\frac{1}{2}(\ell-2j)}Y)D_{j}^{( \ell-1)}(Y)=\eta^{-(\ell-2j)}\eta_{s_{1}}c(q^{-\frac{1}{2}(\ell-2j)}Y)D_{j}^{( \ell-1)}(Y)\] \[=\eta^{-(\ell-2j)}c(q^{-\frac{1}{2}(\ell-2j)}Y^{-1})D_{j}^{(\ell- 1)}(Y^{-1})\eta_{s_{1}}\]
and \(F_{\ell}(Y)\eta^{\ell-2j}D_{j}^{(\ell-1)}(Y)=\eta^{\ell-2j}F_{\ell}(q^{-\frac {1}{2}(\ell-2j)}Y)D_{j}^{(\ell-1)}(Y).\) So
\[\sum_{j=0}^{\ell} \eta^{-(\ell-2j)}D_{j}^{(-\ell)}(Y)\mathbf{1}_{0}=(\eta_{s_{1}}c (Y)+F_{\ell}(Y))\sum_{j=0}^{\ell-1}\eta^{\ell-2j}D_{j}^{(\ell-1)}(Y)\mathbf{1} _{0}\] \[=\sum_{j=0}^{\ell-1}\eta^{-(\ell-2j)}c(q^{-\frac{1}{2}(\ell-2j)}Y ^{-1})D_{j}^{(\ell-1)}(Y^{-1})\mathbf{1}_{0}+\sum_{j=0}^{\ell-1}\eta^{\ell-2j} F_{\ell}(q^{-\frac{1}{2}(\ell-2j)}Y)D_{j}^{(\ell-1)}(Y)\mathbf{1}_{0}\]
where the last equality uses the relation \(\eta_{s_{1}}\mathbf{1}_{0}=\mathbf{1}_{0}\) from (3.14). Putting \(k=\ell-j\) in the second sum makes \(j=\ell-k\) and
\[\sum_{j=0}^{\ell-1}\eta^{\ell-2(\ell-k)}F_{\ell}(q^{-\frac{1}{2}(\ell-2(\ell-k ))}Y)D_{j}^{(\ell-1)}(Y)\mathbf{1}_{0}=\sum_{k=1}^{\ell}\eta^{-(\ell-2k)}F_{ \ell}(q^{\frac{1}{2}(\ell-2k)}Y)D_{\ell-k}^{(\ell-1)}(Y)\mathbf{1}_{0}\]
Thus \(\sum_{j=0}^{\ell}\eta^{-\ell+2j}D_{j}^{(-\ell)}(Y)\mathbf{1}_{0}\) is equal to
\[\eta^{-\ell}c(q^{-\frac{1}{2}\ell}Y^{-1})D_{0}^{(\ell-1)}(Y^{-1})\mathbf{1}_{0}+\sum_{j=1}^{\ell-1}\eta^{-(\ell-2j)}\Big{(}c(q^{-\frac{1}{2}(\ell-2j)}Y^{-1})D_{j}^{(\ell-1)}(Y^{-1})+F_{\ell}(q^{\frac{1}{2}(\ell-2j)}Y)D_{\ell-j}^{(\ell-1)}(Y)\Big{)}\mathbf{1}_{0}+\eta^{\ell}F_{\ell}(q^{-\frac{1}{2}\ell}Y)D_{0}^{(\ell-1)}(Y)\mathbf{1}_{0}.\]
Comparing the coefficients of \(\eta^{-(\ell-2j)}\) on the two sides gives
\[D_{j}^{(-\ell)}(Y)=c(q^{-\frac{1}{2}(\ell-2j)}Y^{-1})D_{j}^{(\ell-1)}(Y^{-1})+F_{\ell}(q^{\frac{1}{2}(\ell-2j)}Y)D_{\ell-j}^{(\ell-1)}(Y),\]
where the first term is absent when \(j=\ell\) and the second term is absent when \(j=0\). Since \(D_{j}^{(\ell)}(Y)=t^{-\frac{1}{2}}D_{j}^{(-\ell)}(Y^{-1})\) by part (a), these identities give the recursions in (4.3).
### A formula for \(K_{j}^{(\ell)}(Y)\) in terms of the \(D_{j}^{(\ell)}(Y)\)
Since \(E_{0}(X)=1\), equation (4.2) and the first relation in (3.7) give \(\mathbf{1}_{0}E_{0}(X)\mathbf{1}_{0}=\mathbf{1}_{0}^{2}=\mathbf{1}_{0}(t^{ \frac{1}{2}}+t^{-\frac{1}{2}})\) so that
\[K_{0}^{(0)}=t^{\frac{1}{2}}+t^{-\frac{1}{2}}.\]
**Proposition 4.2**.: _Let \(\ell\in\mathbb{Z}_{>0}\) and let \(K_{j}^{(\ell)}(Y)\) and \(D_{j}^{(\ell-1)}(Y)\) be as defined in (4.2) and (4.1). Then_
\[K_{0}^{(\ell)}(Y)=t^{-\frac{1}{2}}D_{0}^{(\ell-1)}(Y)\frac{(1-tY^{2})}{(1-Y^{2 })},\qquad K_{\ell}^{(\ell)}(Y)=t^{\frac{1}{2}}D_{0}^{(\ell-1)}(Y^{-1})\frac{( 1-t^{-1}Y^{2}q^{\ell})(1-tY^{2})}{(1-tY^{2}q^{\ell})(1-Y^{2})},\]
_and, for \(j\in\{1,\ldots,\ell-1\}\),_
\[K_{j}^{(\ell)}(Y)=t^{\frac{1}{2}}D_{j}^{(\ell-1)}(Y)\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}+t^{-\frac{1}{2}}D_{\ell-j}^{(\ell-1)}(Y^{-1})\frac{(1-tY^{-2}q^{\ell-2j})}{(1-t^{-1}Y^{-2}q^{\ell-2j})}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}. \tag{4.4}\]
Proof.: Let
\[c(Y)=\frac{(1-tY^{2})}{(1-Y^{2})}=t\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}.\]
Then, by (4.1), the first relation in (3.14), the second relation in (3.13) and the relation \(\eta\eta_{s_{1}}=\eta_{s_{1}}\eta^{-1}\) from (3.12),
\[\mathbf{1}_{0}E_{\ell}(X)\mathbf{1}_{0} =\mathbf{1}_{0}\sum_{j=0}^{\ell-1}\eta^{\ell-2j}D_{j}^{(\ell-1)}( Y)\mathbf{1}_{0}=\mathbf{1}_{0}\sum_{j=0}^{\ell-1}\eta^{\ell-2j}D_{j}^{(\ell-1 )}(Y)(1+\eta_{s_{1}})t^{-\frac{1}{2}}c(Y)\] \[=\sum_{j=0}^{\ell-1}\mathbf{1}_{0}\eta^{\ell-2j}D_{j}^{(\ell-1)}( Y)t^{-\frac{1}{2}}c(Y)+\sum_{j=0}^{\ell-1}\mathbf{1}_{0}\eta_{s_{1}}\eta^{-( \ell-2j)}D_{j}^{(\ell-1)}(Y^{-1})t^{-\frac{1}{2}}c(Y).\]
By the last relation in (3.14) and the last relation in (3.13),
\[\mathbf{1}_{0}\eta_{s_{1}}\eta^{-(\ell-2j)}=\mathbf{1}_{0}t^{-1}\frac{(1-tY^{ -2})}{(1-t^{-1}Y^{-2})}\eta^{-(\ell-2j)}=\mathbf{1}_{0}\eta^{-(\ell-2j)}t^{-1} \frac{(1-tY^{-2}q^{-(\ell-2j)})}{(1-t^{-1}Y^{-2}q^{-(\ell-2j)})}\]
and thus, by reindexing with \(k=\ell-j\) and using \(-(\ell-2j)=\ell-2k\), the second sum is
\[\sum_{j=0}^{\ell-1}\mathbf{1}_{0}\eta_{s_{1}}\eta^{-(\ell-2j)}D_{j}^{(\ell-1)} (Y^{-1})t^{-\frac{1}{2}}c(Y)=\sum_{k=1}^{\ell}\mathbf{1}_{0}\eta^{\ell-2k}t^{- 1}\frac{(1-tY^{-2}q^{\ell-2k})}{(1-t^{-1}Y^{-2}q^{\ell-2k})}D_{\ell-k}^{(\ell-1 )}(Y^{-1})t^{-\frac{1}{2}}c(Y).\]
Hence, by (4.2),
\[\sum_{j=0}^{\ell} \mathbf{1}_{0}\eta^{\ell-2j}K_{j}^{(\ell)}(Y)=\mathbf{1}_{0}E_{ \ell}(X)\mathbf{1}_{0}\] \[=\sum_{j=0}^{\ell-1}\mathbf{1}_{0}t^{-\frac{1}{2}}\eta^{\ell-2j}D_ {j}^{(\ell-1)}(Y)c(Y)+\sum_{k=1}^{\ell}\mathbf{1}_{0}t^{-\frac{3}{2}}\eta^{ \ell-2k}\frac{(1-tY^{-2}q^{\ell-2k})}{(1-t^{-1}Y^{-2}q^{\ell-2k})}D_{\ell-k}^{( \ell-1)}(Y^{-1})c(Y).\]
## 5 Product expressions for \(D_{j}^{(\ell)}(Y)\) and \(K_{j}^{(\ell)}(Y)\)
In this section we establish product formulas for the coefficients in the operators
\[E_{\ell}(X)\mathbf{1}_{0}=\sum_{j=0}^{\ell-1}\eta^{\ell-2j}D_{j}^{( \ell-1)}(Y)\mathbf{1}_{0},\qquad E_{-\ell}(X)\mathbf{1}_{0}=\sum_{j=0}^{\ell} \eta^{-\ell+2j}D_{j}^{(-\ell)}(Y)\mathbf{1}_{0}\qquad\text{and}\]
\[\mathbf{1}_{0}E_{\ell}(X)\mathbf{1}_{0}=\sum_{j=0}^{\ell}\mathbf{1}_{0}\eta^{ \ell-2j}K_{j}^{(\ell)}(Y).\]
These coefficients turn out to be something like generalized binomial coefficients, determined by the recursions that were established in Propositions 4.1 and 4.2. These product formulas provide a kind of binomial theorem for the operators \(E_{\ell}(X)\mathbf{1}_{0}\) and \(\mathbf{1}_{0}E_{\ell}(X)\mathbf{1}_{0}\), as elements of the double affine Hecke algebra \(\tilde{H}\). The final result is Theorem 5.4.
### \(q\)-\(t\)-binomial coefficients
For \(\ell\in\mathbb{Z}_{\geq 0}\) and \(j\in\{0,\ldots,\ell\}\) define
\[(z;q)_{j}=(1-z)(1-zq)(1-zq^{2})\cdots(1-zq^{j-1})\qquad\text{and}\qquad\begin{bmatrix}\ell\\ j\end{bmatrix}_{q,t}=\frac{\dfrac{(q;q)_{\ell}}{(t;q)_{\ell}}}{\dfrac{(q;q)_{j}}{(t;q)_{j}}\,\dfrac{(q;q)_{\ell-j}}{(t;q)_{\ell-j}}}. \tag{5.1}\]
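For example,
\[\begin{bmatrix}2\\ 1\end{bmatrix}_{q,t}=\frac{\dfrac{(1-q)(1-q^{2})}{(1-t)(1-tq)}}{\dfrac{(1-q)}{(1-t)}\cdot\dfrac{(1-q)}{(1-t)}}=\frac{(1+q)(1-t)}{(1-tq)},\qquad\text{and}\qquad\begin{bmatrix}\ell\\ 0\end{bmatrix}_{q,t}=\begin{bmatrix}\ell\\ \ell\end{bmatrix}_{q,t}=1\quad\text{for all }\ell\in\mathbb{Z}_{\geq 0}.\]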
For \(a,b\in\mathbb{Z}\) with \(a\leq b\) let
\[(1-zq^{a.b})=(1-zq^{a})(1-zq^{a+1})\cdots(1-zq^{b-1})(1-zq^{b}).\]
With this notation,
\[\begin{bmatrix}\ell\\ j\end{bmatrix}_{q,t}=\frac{(1-q^{1..\ell})}{(1-q^{1..j})(1-q^{1..\ell-j})}\frac {(1-tq^{0..j-1})(1-tq^{0..\ell-j-1})}{(1-tq^{0..\ell-1})}=\frac{(1-q^{j+1..\ell} )}{(1-q^{1..\ell-j})}\frac{(1-tq^{0..\ell-j-1})}{(1-tq^{j..\ell-1})}.\]
Then
\[\begin{bmatrix}\ell\\ \ell-j\end{bmatrix}_{q,t}=\begin{bmatrix}\ell\\ j\end{bmatrix}_{q,t}\qquad\text{and}\qquad\begin{bmatrix}\ell+1\\ j\end{bmatrix}_{q,t}=\begin{bmatrix}\ell\\ j\end{bmatrix}_{q,t}\cdot\frac{(1-q^{\ell+1})(1-tq^{\ell-j})}{(1-q^{\ell+1-j})(1-tq^{\ell})}. \tag{5.2}\]
### \(Y\)-binomial coefficients
For \(\ell\in\mathbb{Z}_{\geq 0}\) and \(j\in\{0,1,\ldots,\ell\}\) define a rational function in \(Y\) by
\[\begin{pmatrix}\ell\\ j\end{pmatrix}_{Y}=\frac{(t^{-1}Y^{-2}q^{-(j-1)};q)_{\ell-j}(tY^{-2}q^{\ell-2j}; q)_{j}}{(Y^{-2}q;q)_{\ell-j}(Y^{-2}q^{-j};q)_{j}}. \tag{5.3}\]
An alternative expression is
\[\begin{pmatrix}\ell\\ j\end{pmatrix}_{Y}=\frac{(1-t^{-1}Y^{-2}q^{-(j-1)..\ell-2j})(1-tY^{-2}q^{\ell-2 j..\ell-j-1})}{(1-Y^{-2}q^{1..\ell-j})(1-Y^{-2}q^{-j..-1})}.\]
which, in the alcove walk point of view of [10], might be thought of as a weighted alcove walk of total length \(\ell\) with \(j\) left moving crossings and \(\ell-j\) right moving crossings.
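For example, for \(\ell=1\),
\[\binom{1}{0}_{Y}=\frac{(1-t^{-1}Y^{-2}q)}{(1-Y^{-2}q)}\qquad\text{and}\qquad\binom{1}{1}_{Y}=\frac{(1-tY^{-2}q^{-1})}{(1-Y^{-2}q^{-1})},\]
so that \(\binom{1}{1}_{Y^{-1}}=\frac{(1-tY^{2}q^{-1})}{(1-Y^{2}q^{-1})}=t\,\frac{(1-t^{-1}Y^{-2}q)}{(1-Y^{-2}q)}=t\binom{1}{0}_{Y}\), in agreement with (5.4) below.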
Then (see the examples in Section 7.3)
\[\binom{\ell}{\ell-j}_{Y^{-1}} =\frac{(1-t^{-1}Y^{2}q^{-(\ell-j-1)..\ell-2(\ell-j)})(1-tY^{2}q^{\ell-2(\ell-j)..\ell-(\ell-j)-1})}{(1-Y^{2}q^{1..\ell-(\ell-j)})(1-Y^{2}q^{-(\ell-j)..-1})}\] \[=\frac{(1-t^{-1}Y^{2}q^{-(\ell-j-1)..-(\ell-2j)})(1-tY^{2}q^{-(\ell-2j)..j-1})}{(1-Y^{2}q^{1..j})(1-Y^{2}q^{-(\ell-j)..-1})}\] \[=\frac{(1-tY^{-2}q^{\ell-2j..\ell-j-1})(1-t^{-1}Y^{-2}q^{-(j-1)..\ell-2j})}{(1-Y^{-2}q^{-j..-1})(1-Y^{-2}q^{1..\ell-j})}\] \[\qquad\cdot\frac{(t^{-1}Y^{2})^{j}(q^{-(\ell-j)})^{j}q^{\frac{1}{2}j(j-1)}(tY^{2})^{\ell-j}(q^{-(\ell-2j)-1})^{\ell-j}q^{\frac{1}{2}(\ell-j)(\ell-j-1)}}{Y^{2j}q^{\frac{1}{2}j(j-1)}Y^{2(\ell-j)}(q^{-(\ell-j+1)})^{\ell-j}q^{\frac{1}{2}(\ell-j)(\ell-j-1)}}\] \[=t^{\ell-2j}q^{0}\cdot\binom{\ell}{j}_{Y}. \tag{5.4}\]
Also
\[\binom{\ell+1}{j}_{Y} =\frac{(1-t^{-1}Y^{-2}q^{-(j-1).\ell+1-2j})(1-tY^{-2}q^{\ell+1-2j. \ell+1-j-1})}{(1-Y^{-2}q^{1.\ell+1-j})(1-Y^{-2}q^{-j.,-1})}\] \[=\binom{\ell}{j}_{Y}\cdot\frac{(1-t^{-1}Y^{-2}q^{\ell+1-2j})(1-tY^ {-2}q^{\ell-j})}{(1-tY^{-2}q^{\ell-2j})(1-Y^{-2}q^{\ell+1-j})} \tag{5.5}\]
### Definition of the products \(\tilde{K}_{j}^{(\ell)}(Y)\) and \(\tilde{D}_{j}^{(\ell-1)}(Y)\)
For \(\ell\in\mathbb{Z}_{>0}\) and \(j\in\{0,\ldots,\ell\}\) define
\[\tilde{D}_{j}^{(\ell)}(Y)=t^{-\frac{1}{2}(\ell+1)}\cdot t^{\ell-j}\cdot\genfrac {[}{]}{0.0pt}{}{\ell}{j}_{q,t}\cdot\binom{\ell}{j}_{Y}\cdot\frac{(1-tq^{\ell -j})}{(1-tq^{\ell})}\cdot\frac{(1-tY^{-2}q^{\ell-j})}{(1-tY^{-2}q^{\ell-2j})} \tag{5.6}\]
\[\tilde{D}_{j}^{(-\ell)}(Y)=t^{-\frac{1}{2}\ell}\cdot(qt)^{j}\cdot\genfrac{[} {]}{0.0pt}{}{\ell}{j}_{q,t}\cdot\binom{\ell}{\ell-j}_{Y}\cdot\frac{(1-tq^{ \ell-j})}{(1-tq^{\ell})}\cdot\frac{(1-t^{-1}Y^{-2}q^{-(\ell-j)})}{(1-t^{-1}Y^{ -2}q^{-(\ell-2j)})} \tag{5.7}\]
\[\tilde{K}_{j}^{(\ell)}(Y)=t^{-\frac{1}{2}(\ell-1)}\cdot t^{\ell-1-j}\cdot \genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\cdot\binom{\ell}{j}_{Y}\cdot\frac{(1-Y^ {-2}q^{\ell-2j})}{(1-t^{-1}Y^{-2}q^{\ell-2j})}\cdot\frac{(1-t^{-1}Y^{-2})}{(1- Y^{-2})} \tag{5.8}\]
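For example, for \(\ell=1\) these definitions give
\[\tilde{D}_{0}^{(1)}(Y)=\frac{(1-t^{-1}Y^{-2}q)}{(1-Y^{-2}q)}\qquad\text{and}\qquad\tilde{D}_{1}^{(1)}(Y)=t^{-1}\frac{(1-t)(1-tY^{-2})}{(1-tq)(1-Y^{-2}q^{-1})},\]
which agree with the values of \(D_{0}^{(1)}(Y)\) and \(D_{1}^{(1)}(Y)\) obtained from the recursion (4.3) with \(D_{0}^{(0)}(Y)=t^{-\frac{1}{2}}\).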
The following Proposition provides useful relationships between these expressions which follow from (5.2), (5.3) and (5.4).
**Proposition 5.1**.: \[\frac{\tilde{D}_{j}^{(\ell-1)}(Y)}{\tilde{K}_{j}^{(\ell)}(Y)}=t^{-\frac{1}{2}}\cdot \frac{(1-q^{\ell-j})}{(1-q^{\ell})}\cdot\frac{(1-Y^{-2}q^{\ell-j})}{(1-Y^{-2}q^ {\ell-2j})}\cdot\frac{(1-Y^{-2})}{(1-t^{-1}Y^{-2})}\] (5.9)
\[\frac{\tilde{D}_{\ell-j}^{(\ell-1)}(Y^{-1})}{\tilde{K}_{j}^{(\ell)}(Y)}=t^{ \frac{1}{2}}q^{\ell-j}\cdot\frac{(1-q^{j})}{(1-q^{\ell})}\cdot\frac{(1-Y^{-2}) }{(1-t^{-1}Y^{-2})}\cdot\frac{(1-t^{-1}Y^{-2}q^{\ell-2j})}{(1-tY^{-2}q^{\ell-2 j})}\cdot\frac{(1-Y^{-2}q^{-j})}{(1-Y^{-2}q^{\ell-2j})}. \tag{5.10}\]
\[\frac{\tilde{D}_{j}^{(\ell)}(Y)}{\tilde{D}_{j}^{(\ell+1)}(Y)}=t^{-\frac{1}{2} }\cdot\frac{(1-tq^{\ell+1})}{(1-tq^{\ell+1-j})}\cdot\frac{(1-q^{\ell+1-j})}{(1 -q^{\ell+1})}\cdot\frac{(1-tY^{-2}q^{\ell+1-2j})}{(1-tY^{-2}q^{\ell+1-j})} \cdot\frac{(1-Y^{-2}q^{\ell+1-j})}{(1-t^{-1}Y^{-2}q^{\ell+1-2j})}. \tag{5.11}\]
\[\frac{\tilde{D}_{\ell+1-j}^{(\ell)}(Y^{-1})}{\tilde{D}_{j}^{(\ell+1)}(Y)}=t^{ \frac{1}{2}}q^{\ell+1-j}\cdot\frac{(1-tq^{\ell+1})}{(1-tq^{\ell+1-j})}\cdot \frac{(1-q^{j})}{(1-q^{\ell+1})}\cdot\frac{(1-Y^{-2}q^{-j})}{(1-tY^{-2}q^{\ell+ 1-j})} \tag{5.12}\]
\[\frac{\tilde{D}_{\ell+1-j}^{(\ell)}(Y^{-1})}{\tilde{D}_{j}^{(\ell)}(Y)}=tq^{ \ell+1-j}\cdot\frac{(1-q^{j})}{(1-q^{\ell+1-j})}\cdot\frac{(1-Y^{-2}q^{-j})(1- t^{-1}Y^{-2}q^{\ell+1-2j})}{(1-tY^{-2}q^{\ell+1-2j})(1-Y^{-2}q^{\ell+1-j})}, \tag{5.13}\]
Proof.: All of these are proved by using the relations (5.2), (5.3) and (5.4) and cancelling all common factors from the numerator and denominators. The proof of (5.9) is similar to the proofs of (5.11), (5.12) and (5.13) and (5.10), which are as follows.
Using (5.6), (5.5) and the second relation in (5.2) gives (5.11):
\[\tilde{D}_{j}^{(\ell)}(Y)=t^{-\frac{1}{2}(\ell+1)}\cdot t^{\ell- j}\cdot\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\cdot\frac{(1-tq^{\ell-j})}{(1-tq^{ \ell})}\cdot\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{Y}\cdot\frac{(1-tY^{-2}q^{\ell- j})}{(1-tY^{-2}q^{\ell-2j})}\] \[=t^{\frac{1}{2}}t^{-\frac{1}{2}(\ell+2)}\cdot t^{-1}t^{\ell+1-j} \cdot\genfrac{[}{]}{0.0pt}{}{\ell+1}{j}_{q,t}\cdot\frac{(1-q^{\ell+1-j})(1-tq^ {\ell})}{(1-q^{\ell+1})(1-tq^{\ell})}\cdot\frac{(1-tq^{\ell-j})}{(1-tq^{\ell})}\cdot\] \[\qquad\cdot\genfrac{[}{]}{0.0pt}{}{\ell+1}{j}_{Y}\cdot\frac{(1-tY ^{-2}q^{\ell-2j})(1-Y^{-2}q^{\ell+1-j})}{(1-t^{-1}Y^{-2}q^{\ell+1-j})(1-tY^{-2 }q^{\ell-j})}\cdot\frac{(1-tY^{-2}q^{\ell-j})}{(1-tY^{-2}q^{\ell+2j})}\] \[=t^{-\frac{1}{2}}\tilde{D}_{j}^{(\ell+1)}(Y)\cdot\frac{(1-tq^{\ell +1})}{(1-tq^{\ell+1-j})}\cdot\frac{(1-q^{\ell+1-j})}{(1-q^{\ell+1})}\cdot\frac{( 1-tY^{-2}q^{\ell+1-2j})}{(1-tY^{-2}q^{\ell+1-j})}\cdot\frac{(1-Y^{-2}q^{\ell+1- j})}{(1-t^{-1}Y^{-2}q^{\ell+1-2j})}.\]
Using (5.6), (5.4) and the first relation in (5.2) gives (5.12):
\[t^{\frac{1}{2}(\ell+1)}\tilde{D}_{\ell+1-j}^{(\ell)}(Y^{-1})=t^{\ell -(\ell+1-j)}\cdot\genfrac{[}{]}{0.0pt}{}{\ell}{\ell+1-j}_{q,t}\cdot\frac{(1- tq^{\ell-(\ell+1-j)})}{(1-tq^{\ell})}\cdot\genfrac{[}{]}{0.0pt}{}{\ell}{\ell+1-j}_{Y^{-1}} \cdot\frac{(1-tY^{2}q^{\ell-(\ell+1-j)})}{(1-tY^{2}q^{\ell-2(\ell+1-j)})}\] \[=t^{j-1}\cdot\genfrac{[}{]}{0.0pt}{}{\ell+1}{\ell+1-j}_{q,t}\cdot \frac{(1-q^{\ell+1-(\ell+1-j)})(1-tq^{\ell})}{(1-q^{\ell+1})(1-tq^{\ell-(\ell+ 1-j)})}\cdot\frac{(1-tq^{j-\ell})}{(1-tq^{\ell})}\] \[\qquad\cdot\genfrac{[}{]}{0.0pt}{}{\ell+1}{\ell+1-j}_{Y^{-1}} \cdot\frac{(1-tY^{2}q^{\ell-2(\ell+1-j)})(1-Y^{2}q^{\ell+1-(\ell+1-j)})}{(1-t ^{1}Y^{2}q^{\ell+1-2(\ell+1-j)})(1-tY^{2}q^{\ell-(\ell+1-j)})}\cdot\frac{(1-tY^ {2}q^{\ell-2(\ell+1-j)})}{(1-tY^{2}q^{\ell-2(\ell+1-j)})}\] \[=t^{j-1}\cdot\genfrac{[}{]}{0.0pt}{}{\ell+1}{j}_{q,t}\cdot\frac{ (1-q^{j})}{(1-q^{\ell+1})}\cdot t^{\ell+1-2j}\cdot\genfrac{[}{]}{0.0pt}{}{\ell+ 1}{j}_{Y}\cdot\frac{(1-Y^{2}q^{j})}{(1-t^{-1}Y^{2}q^{-(\ell+1-2j)})}\] \[=t^{\ell-j}\cdot\genfrac{[}{]}{0.0pt}{}{\ell+1}{j}_{q,t}\cdot \frac{(1-q^{j})}{(1-q^{\ell+1})}\cdot\binom{\ell+1}{j}_{Y}\cdot\frac{(1-Y^{-2} q^{-j})}{(1-tY^{-2}q^{\ell+1-2j})}\cdot tq^{j+\ell+1-2j}\] \[=q^{\ell+1-j}\cdot t^{\frac{1}{2}(\ell+2)}\tilde{D}_{j}^{(\ell+1 )}(Y)\cdot\frac{(1-tq^{\ell+1-j})}{(1-tq^{\ell+1-j})}\cdot\frac{(1-q^{j})}{(1- q^{\ell+1})}\cdot\frac{(1-tY^{-2}q^{\ell+1-2j})}{(1-tY^{-2}q^{\ell+1-j})}\cdot \frac{(1-Y^{-2}q^{-j})}{(1-tY^{-2}q^{\ell+1-2j})}\] \[=q^{\ell+1-j}\cdot t^{\frac{1}{2}(\ell+2)}\tilde{D}_{j}^{(\ell+1 )}(Y)\cdot\frac{(1-tq^{\ell+1})}{(1-tq^{\ell+1-j})}\cdot\frac{(1-q^{j})}{(1-q ^{\ell+1})}\cdot\frac{(1-Y^{-2}q^{-j})}{(1-tY^{-2}q^{\ell+1-j})}.\]
Equation (5.13) follows from (5.12) and (5.11). Using (5.13) and (5.9)
\[\tilde{D}_{\ell-j}^{(\ell-1)}(Y^{-1}) =tq^{\ell-j}\tilde{D}_{j}^{(\ell-1)}(Y)\cdot\frac{(1-q^{j})}{(1-q^{\ell-j})}\frac{(1-t^{-1}Y^{-2}q^{\ell-2j})}{(1-tY^{-2}q^{\ell-2j})}\cdot\frac{(1-Y^{-2}q^{-j})}{(1-Y^{-2}q^{\ell-j})}\] \[=t^{\frac{1}{2}}q^{\ell-j}\cdot\tilde{K}_{j}^{(\ell)}(Y)\cdot\frac{(1-q^{\ell-j})}{(1-q^{\ell})}\cdot\frac{(1-Y^{-2}q^{\ell-j})}{(1-Y^{-2}q^{\ell-2j})}\cdot\frac{(1-Y^{-2})}{(1-t^{-1}Y^{-2})}\] \[\qquad\cdot\frac{(1-q^{j})}{(1-q^{\ell-j})}\frac{(1-t^{-1}Y^{-2}q^{\ell-2j})}{(1-tY^{-2}q^{\ell-2j})}\cdot\frac{(1-Y^{-2}q^{-j})}{(1-Y^{-2}q^{\ell-j})}\]
which gives equation (5.10).
### A recursion for the \(\tilde{D}_{j}^{(\ell)}(Y)\)
**Proposition 5.2**.: _Let \(\ell\in\mathbb{Z}_{\geq 0}\) and let \(\tilde{D}_{j}^{(\ell)}(Y)\) and \(\tilde{D}_{j}^{(-\ell)}(Y)\) be as defined in (5.6) and (5.7)._
1. _If_ \(\ell\in\mathbb{Z}_{>0}\) _and_ \(j\in\{0,\dots,\ell\}\) _then_ \(\tilde{D}_{j}^{(-\ell)}(Y)=t^{\frac{1}{2}}\tilde{D}_{j}^{(\ell)}(Y^{-1})\)_._
2. _The_ \(\tilde{D}_{j}^{(\ell)}(Y)\) _satisfy, and are determined by,_ \(\tilde{D}_{0}^{(0)}(Y)=t^{-\frac{1}{2}}\) _and the recursions_ \[\tilde{D}_{0}^{(\ell)}(Y)=t^{\frac{1}{2}}\frac{(1-t^{-1}Y^{-2}q^{\ell})}{(1-Y^{ -2}q^{\ell})}\tilde{D}_{0}^{(\ell-1)}(Y),\qquad\tilde{D}_{\ell}^{(\ell)}(Y)=t^{- \frac{1}{2}}\frac{(1-t)(1-tY^{-2})}{(1-tq^{\ell})(1-Y^{-2}q^{-\ell})}\tilde{D}_ {0}^{(\ell-1)}(Y^{-1}),\] _and_ \[\tilde{D}_{j}^{(\ell)}(Y)=t^{\frac{1}{2}}\frac{(1-t^{-1}Y^{-2}q^{\ell-2j})}{(1 -Y^{-2}q^{\ell-2j})}\tilde{D}_{j}^{(\ell-1)}(Y)+t^{-\frac{1}{2}}\frac{(1-t)(1-q^ {\ell}tY^{-2}q^{\ell-2j})}{(1-tq^{\ell})(1-Y^{-2}q^{\ell-2j})}\tilde{D}_{\ell- j}^{(\ell-1)}(Y^{-1}).\] (5.14)
_for_ \(j\in\{1,\dots,\ell-1\}\)_._
Proof.: (a) Using (5.4),
\[t^{\ell-j}.\binom{\ell}{j}_{Y^{-1}}\cdot\frac{(1-tY^{2}q^{\ell-j})}{( 1-tY^{2}q^{\ell-2j})}=t^{\ell-j}\cdot t^{-(\ell-2j)}\binom{\ell}{\ell-j}_{Y} \cdot\frac{(1-t^{-1}Y^{-2}q^{-(\ell-j)})}{(1-t^{-1}Y^{-2}q^{-(\ell-2j)})}\cdot q ^{j}\] \[\qquad=(qt)^{j}\binom{\ell}{\ell-j}_{Y}\cdot\frac{(1-t^{-1}Y^{-2}q ^{-(\ell-j)})}{(1-t^{-1}Y^{-2}q^{-(\ell-2j)})}.\]
Thus, by (5.6),
\[t^{\frac{1}{2}}\tilde{D}_{j}^{(\ell)}(Y^{-1})=t^{-\frac{1}{2}\ell}\cdot(qt)^{j}\cdot\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\cdot\frac{(1-tq^{\ell-j})}{(1-tq^{\ell})}\cdot\binom{\ell}{\ell-j}_{Y}\cdot\frac{(1-t^{-1}Y^{-2}q^{-(\ell-j)})}{(1-t^{-1}Y^{-2}q^{-(\ell-2j)})}, \tag{5.15}\]
which is equal to \(\tilde{D}_{j}^{(-\ell)}(Y)\) as defined in (5.7).
(b) The first two identities are special cases of (5.11) and (5.12).
Assume \(j\in\{1,\ldots,\ell-1\}\). From (5.6), (5.11) and (5.12),
\[t^{\frac{1}{2}}\frac{\tilde{D}_{j}^{(\ell)}(Y)}{\tilde{D}_{j}^{( \ell+1)}(Y)}= \frac{(1-tq^{\ell+1})}{(1-tq^{\ell+1-j})}\cdot\frac{(1-q^{\ell+1-j })}{(1-q^{\ell+1})}\cdot\frac{(1-tY^{-2}q^{\ell+1-2j})}{(1-tY^{-2}q^{\ell+1-j} )}\cdot\frac{(1-Y^{-2}q^{\ell+1-j})}{(1-t^{-1}Y^{-2}q^{\ell+1-2j})}\] \[\text{and}\qquad t^{-\frac{1}{2}}\frac{\tilde{D}_{\ell+1-j}^{( \ell)}(Y^{-1})}{\tilde{D}_{j}^{(\ell+1)}(Y)}=q^{\ell+1-j}\cdot\frac{(1-tq^{ \ell+1})}{(1-tq^{\ell+1-j})}\cdot\frac{(1-q^{j})}{(1-q^{\ell+1})}\cdot\frac{(1- Y^{-2}q^{-j})}{(1-tY^{-2}q^{\ell+1-j})}.\]
Thus
\[t^{\frac{1}{2}}\frac{(1-t^{-1}Y^{-2}q^{\ell+1-2j})}{(1-Y^{-2}q^{\ell+1-2j})}\frac{\tilde{D}_{j}^{(\ell)}(Y)}{\tilde{D}_{j}^{(\ell+1)}(Y)}+t^{-\frac{1}{2}}\frac{(1-t)(1-q^{\ell+1}tY^{-2}q^{\ell+1-2j})}{(1-tq^{\ell+1})(1-Y^{-2}q^{\ell+1-2j})}\frac{\tilde{D}_{\ell+1-j}^{(\ell)}(Y^{-1})}{\tilde{D}_{j}^{(\ell+1)}(Y)}\] \[=\frac{(1-tq^{\ell+1})(1-q^{\ell+1-j})(1-tY^{-2}q^{\ell+1-2j})(1-Y^{-2}q^{\ell+1-j})+q^{\ell+1-j}(1-t)(1-q^{j})(1-tY^{-2}q^{2(\ell+1-j)})(1-Y^{-2}q^{-j})}{(1-q^{\ell+1})(1-tq^{\ell+1-j})(1-Y^{-2}q^{\ell+1-2j})(1-tY^{-2}q^{\ell+1-j})}=1,\]
so that the \(\tilde{D}_{j}^{(\ell)}(Y)\) satisfy the recursion (5.14).

**Proposition 5.3**.: _Let \(\ell\in\mathbb{Z}_{>0}\) and let \(\tilde{K}_{j}^{(\ell)}(Y)\) and \(\tilde{D}_{j}^{(\ell-1)}(Y)\) be as defined in (5.8) and (5.6). Then_

\[\tilde{K}_{0}^{(\ell)}(Y)=t^{-\frac{1}{2}}\tilde{D}_{0}^{(\ell-1)}(Y)\frac{(1-tY^{2})}{(1-Y^{2})},\qquad\tilde{K}_{\ell}^{(\ell)}(Y)=t^{\frac{1}{2}}\tilde{D}_{0}^{(\ell-1)}(Y^{-1})\frac{(1-t^{-1}Y^{2}q^{\ell})(1-tY^{2})}{(1-tY^{2}q^{\ell})(1-Y^{2})},\]

_and, for \(j\in\{1,\ldots,\ell-1\}\),_

\[\tilde{K}_{j}^{(\ell)}(Y)=t^{\frac{1}{2}}\tilde{D}_{j}^{(\ell-1)}(Y)\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}+t^{-\frac{1}{2}}\tilde{D}_{\ell-j}^{(\ell-1)}(Y^{-1})\frac{(1-tY^{-2}q^{\ell-2j})}{(1-t^{-1}Y^{-2}q^{\ell-2j})}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}.\]
Proof.: The first two identities are special cases of (5.9) and (5.10).
Let \(j\in\{1,\ldots,\ell-1\}\). Using (5.13) gives
\[1+t^{-1}\frac{\tilde{D}_{\ell+1-j}^{(\ell)}(Y^{-1})}{\tilde{D}_{j}^{(\ell)}(Y)}\cdot\frac{(1-tY^{-2}q^{\ell+1-2j})}{(1-t^{-1}Y^{-2}q^{\ell+1-2j})}=1+q^{\ell+1-j}\frac{(1-q^{j})}{(1-q^{\ell+1-j})}\frac{(1-Y^{-2}q^{-j})}{(1-Y^{-2}q^{\ell+1-j})}\] \[=\frac{(1-Y^{-2}q^{\ell+1-j}-q^{\ell+1-j}+Y^{-2}q^{2(\ell+1-j)}+q^{\ell+1-j}-q^{\ell+1}-Y^{-2}q^{\ell+1-2j}+Y^{-2}q^{\ell+1-j})}{(1-q^{\ell+1-j})(1-Y^{-2}q^{\ell+1-j})}\] \[=\frac{(1-q^{\ell+1})(1-Y^{-2}q^{\ell+1-2j})}{(1-q^{\ell+1-j})(1-Y^{-2}q^{\ell+1-j})}.\]
Multiplying both sides by \(t^{\frac{1}{2}}\tilde{D}_{j}^{(\ell)}(Y)\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\) gives
\[t^{\frac{1}{2}}\tilde{D}_{j}^{(\ell)}(Y)\frac{(1-t^{-1}Y^{-2})}{ (1-Y^{-2})}+t^{-\frac{1}{2}}\tilde{D}_{\ell+1-j}^{(\ell)}(Y^{-1})\frac{(1-tY^{- 2}q^{\ell+1-2j})}{(1-t^{-1}Y^{-2}q^{\ell+1-2j})}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{- 2})}\] \[=t^{\frac{1}{2}}\tilde{D}_{j}^{(\ell)}(Y)\cdot\frac{(1-q^{\ell+1} )(1-Y^{-2}q^{\ell+1-2j})}{(1-q^{\ell+1-j})(1-Y^{-2}q^{\ell+1-j})}\cdot\frac{(1- t^{-1}Y^{-2})}{(1-Y^{-2})}=\tilde{K}_{j}^{(\ell+1)},\]
where the last equality is (5.9).
**Theorem 5.4**.: _As in (4.1) and (4.2), let \(D_{j}^{(\ell)}(Y)\) and \(K_{j}^{(\ell)}(Y)\) be defined by the expansions_
\[E_{\ell}(X)\mathbf{1}_{0}=\sum_{j=0}^{\ell-1}\eta^{\ell-2j}D_{j}^{(\ell-1)}(Y) \mathbf{1}_{0}\qquad\text{and}\qquad E_{-\ell}(X)\mathbf{1}_{0}=\sum_{j=0}^{ \ell}\eta^{-\ell+2j}D_{j}^{(-\ell)}(Y)\mathbf{1}_{0}\]
\[\text{and}\qquad\mathbf{1}_{0}E_{\ell}(X)\mathbf{1}_{0}=\sum_{j=0}^{\ell} \mathbf{1}_{0}\eta^{\ell-2j}K_{j}^{(\ell)}(Y)\]
_in the localized double affine Hecke algebra \(\tilde{H}\). Then_
\[D_{j}^{(\ell)}(Y) =t^{-\frac{1}{2}(\ell+1)}\cdot t^{\ell-j}\cdot\begin{bmatrix} \ell\\ j\end{bmatrix}_{q,t}\cdot\begin{pmatrix}\ell\\ j\end{pmatrix}_{Y}\cdot\frac{(1-tq^{\ell-j})}{(1-tq^{\ell})}\cdot\frac{(1-tY^{- 2}q^{\ell-j})}{(1-tY^{-2}q^{\ell-2j})},\] \[D_{j}^{(-\ell)}(Y) =t^{-\frac{1}{2}\ell}\cdot(qt)^{j}\cdot\begin{bmatrix}\ell\\ j\end{bmatrix}_{q,t}\cdot\begin{pmatrix}\ell\\ \ell-j\end{pmatrix}_{Y}\cdot\frac{(1-tq^{\ell-j})}{(1-tq^{\ell})}\cdot\frac{(1- t^{-1}Y^{-2}q^{-(\ell-j)})}{(1-t^{-1}Y^{-2}q^{-(\ell-2j)})}\quad\text{and}\] \[K_{j}^{(\ell)}(Y) =t^{-\frac{1}{2}(\ell-1)}\cdot t^{\ell-1-j}\cdot\begin{bmatrix} \ell\\ j\end{bmatrix}_{q,t}\cdot\begin{pmatrix}\ell\\ j\end{pmatrix}_{Y}\cdot\frac{(1-Y^{-2}q^{\ell-2j})}{(1-t^{-1}Y^{-2}q^{\ell-2 j})}\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}.\]
## 6 Products for type \(SL_{2}\) Macdonald polynomials
The formulas for \(E_{\ell}(X)\mathbf{1}_{0}\) and \(E_{-\ell}(X)\mathbf{1}_{0}\) in Theorem 5.4 serve as universal formulas for products, containing, for all \(m\) at once, the information of the products \(E_{\ell}P_{m}\) and \(E_{-\ell}P_{m}\) expanded in terms of electronic Macdonald polynomials. In the same way the expansion of \(\mathbf{1}_{0}E_{\ell}(X)\mathbf{1}_{0}\) in \(\tilde{H}\) which is given in Theorem 5.4 is a universal formula for the products \(P_{\ell}P_{m}\) expanded in terms of the bosonic Macdonald polynomials \(P_{r}(x)\). In this section we use the results of Theorem 5.4 to derive these products, thus accomplishing our goal, in the \(SL_{2}\) case, of using double affine Hecke algebra tools to compute compact formulas for products of Macdonald polynomials.
### The universal coefficients \(A_{j}^{(\ell)}(Y)\), \(B_{j}^{(\ell)}(Y)\) and \(C_{j}^{(\ell)}(Y)\)
Define
\[C_{j}^{(\ell)}(Y) =\begin{bmatrix}\ell\\ j\end{bmatrix}_{q,t}\cdot\frac{(t^{-1}Y^{-2}q^{-(j-1)};q)_{j}(tY^{-2}q^{\ell-2j };q)_{j}}{(Y^{-2}q^{\ell-2j+1};q)_{j}(Y^{-2}q^{-j};q)_{j}}\] \[A_{j}^{(\ell)}(Y) =\begin{bmatrix}\ell\\ j\end{bmatrix}_{q,t}\cdot\frac{(1-q^{\ell-j})}{(1-q^{\ell})}\cdot\frac{(t^{-1} Y^{-2}q^{-(j-1)};q)_{j}(tY^{-2}q^{\ell-2j};q)_{j}}{(Y^{-2}q^{-j};q)_{j}(Y^{-2}q^{ \ell-2j};q)_{j}}\] \[B_{j}^{(\ell)}(Y) =q^{j}\cdot\begin{bmatrix}\ell\\ j\end{bmatrix}_{q,t}\cdot\frac{(1-q^{\ell-j})}{(1-q^{\ell})}\cdot\frac{(t^{-1} Y^{-2}q^{-(\ell-j-1)};q)_{\ell-j}(tY^{-2}q^{-(\ell-2j-1)};q)_{\ell-j-1}}{(Y^{-2}q^{ -(\ell-2j-1)};q)_{\ell-j}(Y^{-2}q^{-(\ell-j-1)};q)_{\ell-j-1}}\]
Then
\[A_{j}^{(\ell)}(Y) =C_{j}^{(\ell)}(Y)\cdot\frac{(1-q^{\ell-j})}{(1-q^{\ell})}\cdot \frac{(1-Y^{-2}q^{\ell-j})}{(1-Y^{-2}q^{\ell-2j})}\qquad\text{and} \tag{6.1}\] \[B_{\ell-j}^{(\ell)}(Y) =q^{\ell-j}\cdot\begin{bmatrix}\ell\\ j\end{bmatrix}_{q,t}\cdot\frac{(1-q^{j})}{(1-q^{\ell})}\cdot\frac{(t^{-1}Y^{-2} q^{-(j-1)};q)_{j}(tY^{-2}q^{\ell-2j+1};q)_{j-1}}{(Y^{-2}q^{\ell-2j+1};q)_{j}(Y^{-2}q^{-(j -1)};q)_{j-1}}\] \[=C_{j}^{(\ell)}(Y)\cdot q^{\ell-j}\cdot\frac{(1-q^{j})}{(1-q^{ \ell})}\cdot\frac{(1-Y^{-2}q^{-j})}{(1-tY^{-2}q^{\ell-2j})} \tag{6.2}\]
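For example, for \(\ell=1\), \(C_{0}^{(1)}(Y)=1\) and
\[C_{1}^{(1)}(Y)=\frac{(1-t^{-1}Y^{-2})(1-tY^{-2}q^{-1})}{(1-Y^{-2})(1-Y^{-2}q^{-1})},\qquad\text{so that}\qquad\operatorname{ev}_{m}(C_{1}^{(1)}(Y))=\frac{(1-q^{m})(1-t^{2}q^{m-1})}{(1-tq^{m})(1-tq^{m-1})},\]
which, combined with Theorem 6.2 below, gives the three term recurrence \(P_{1}(x)P_{m}(x)=P_{m+1}(x)+\frac{(1-q^{m})(1-t^{2}q^{m-1})}{(1-tq^{m})(1-tq^{m-1})}P_{m-1}(x)\).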
The following proposition gives formulas for \(A_{j}^{(\ell)}\), \(B_{j}^{(\ell)}\) and \(C_{j}^{(\ell)}\) in terms of \(D_{j}^{(\ell)}\) and \(K_{j}^{(\ell)}\).
**Proposition 6.1**.: \[t^{-\frac{1}{2}}C_{j}^{(\ell)}(Y)=K_{j}^{(\ell)}(Y)\cdot t^{-\frac{1}{2}(\ell -2j)}\cdot\frac{(t^{-1}Y^{-2}q^{\ell-2j};q)_{j}}{(Y^{-2}q^{\ell-2j};q)_{j}} \cdot\frac{(Y^{-2};q)_{\ell-j}}{(t^{-1}Y^{-2};q)_{\ell-j}}.\] (6.3)
\[A_{j}^{(\ell)}(Y)=D_{j}^{(\ell-1)}(Y)\cdot t^{-\frac{1}{2}(\ell-2j)}\cdot \frac{(t^{-1}Y^{-2}q^{\ell-2j};q)_{j}}{(Y^{-2}q^{\ell-2j};q)_{j}}\cdot\frac{(Y^ {-2};q)_{\ell-j}}{(t^{-1}Y^{-2};q)_{\ell-j}}\cdot t\frac{(1-t^{-1}Y^{-2})}{(1- Y^{-2})} \tag{6.4}\]
\[B_{j}^{(\ell)}(Y)=D_{j}^{(\ell-1)}(Y^{-1})t^{\frac{1}{2}(\ell-2j)}\cdot\frac{(t ^{-1}Y^{-2}q^{-(\ell-2j)+1};q)_{\ell-j}}{(Y^{-2}q^{-(\ell-2j)+1};q)_{\ell-j}} \cdot\frac{(Y^{-2}q;q)_{j}}{(t^{-1}Y^{-2}q;q)_{j}} \tag{6.5}\]
Proof.: (a) Using (5.8) gives
\[K_{j}^{(\ell)}(Y)\cdot t^{-\frac{1}{2}(\ell-2j)}\cdot\frac{(t^{-1}Y ^{-2}q^{\ell-2j};q)_{j}}{(Y^{-2}q^{\ell-2j};q)_{j}}\cdot\frac{(Y^{-2};q)_{\ell-j }}{(t^{-1}Y^{-2};q)_{\ell-j}}\] \[=t^{-\frac{1}{2}(\ell-1)}\cdot t^{\ell-1-j}\cdot\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\cdot\genfrac{[}{]}{0.0pt}{}{\ell}{\ell}{j}_{Y}\cdot \frac{(1-Y^{-2}q^{\ell-2j})}{(1-t^{-1}Y^{-2}q^{\ell-2j})}\cdot\frac{(1-t^{-1}Y ^{-2})}{(1-Y^{-2})}\] \[\qquad\cdot t^{-\frac{1}{2}(\ell-2j)}\cdot\frac{(t^{-1}Y^{-2}q^{ \ell-2j};q)_{j}}{(Y^{-2}q^{\ell-2j};q)_{j}}\cdot\frac{(Y^{-2};q)_{\ell-j}}{(t^ {-1}Y^{-2};q)_{\ell-j}}\] \[=t^{-\frac{1}{2}}\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\cdot \frac{(t^{-1}Y^{-2}q^{-(j-1)};q)_{\ell-j}(tY^{-2}q^{\ell-2j};q)_{j}}{(Y^{-2}q;q )_{\ell-j}(Y^{-2}q^{-j};q)_{j}}\cdot\frac{(t^{-1}Y^{-2}q^{\ell-2j};q)_{j}}{(Y^{ -2}q^{\ell-2j};q)_{j}}\cdot\frac{(Y^{-2};q)_{\ell-j}}{(t^{-1}Y^{-2};q)_{\ell-j}}\] \[\qquad\cdot\frac{(1-Y^{-2}q^{\ell-2j})}{(1-t^{-1}Y^{-2}q^{\ell-2 j})}\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\] \[=t^{-\frac{1}{2}}\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\cdot \frac{(t^{-1}Y^{-2}q^{-(j-1)};q)_{\ell-j}(1-t^{-1}Y^{-2})(t^{-1}Y^{-2}q^{\ell-2 j};q)_{j}}{(t^{-1}Y^{-2};q)_{\ell-j}(1-t^{-1}Y^{-2}q^{\ell-2j})}\] \[\qquad\qquad\cdot\frac{(Y^{-2};q)_{\ell-j}}{(1-Y^{-2})(Y^{-2}q;q) _{\ell-j}}\cdot\frac{(1-Y^{-2}q^{\ell-2j})}{(Y^{-2}q^{\ell-2j};q)_{j}}\cdot \frac{1}{(Y^{-2}q^{-j};q)_{j}}\cdot(tY^{-2}q^{\ell-2j};q)_{j}\] \[=t^{-\frac{1}{2}}\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\cdot \frac{(t^{-1}Y^{-2}q^{-(j-1)};q)_{\ell-j-1}(t^{-1}Y^{-2}q^{\ell-2j};q)_{j}}{(t^ {-1}Y^{-2}q;q)_{\ell-j-1}}\] \[\qquad\qquad\cdot\frac{1}{(1-Y^{-2}q^{\ell-j})}\cdot\frac{1}{(Y^{ -2}q^{\ell-2j+1};q)_{j-1}}\cdot\frac{1}{(Y^{-2}q^{-j};q)_{j}}\cdot(tY^{-2}q^{ \ell-2j};q)_{j}\] \[=t^{-\frac{1}{2}}\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\cdot \frac{(t^{-1}Y^{-2}q^{-(j-1)};q)_{j}(tY^{-2}q^{\ell-2j};q)_{j}}{(Y^{-2}q^{\ell- 2j+1};q)_{j}(Y^{-2}q^{-j};q)_{j}}=t^{-\frac{1}{2}}C_{j}^{(\ell)}.\]
(b) Using (5.9) and (6.3) and (6.1), gives
\[D_{j}^{(\ell-1)}(Y)\cdot t^{-\frac{1}{2}(\ell-2j)}\cdot\frac{(t^{-1}Y^{-2}q^{ \ell-2j};q)_{j}}{(Y^{-2}q^{\ell-2j};q)_{j}}\cdot\frac{(Y^{-2};q)_{\ell-j}}{(t^ {-1}Y^{-2};q)_{\ell-j}}\cdot t\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\] \[=t^{-\frac{1}{2}}K_{j}^{(\ell)}(Y)\cdot t^{-\frac{1}{2}(\ell-2j)} \cdot\frac{(1-q^{\ell-j})}{(1-q^{\ell})}\cdot\frac{(1-Y^{-2}q^{\ell-j})}{(1-Y^ {-2}q^{\ell-2j})}\cdot\frac{(1-Y^{-2})}{(1-t-1)^{-2}}\] \[\qquad\cdot\frac{(t^{-1}Y^{-2}q^{\ell-2j};q)_{j}}{(Y^{-2}q^{\ell- 2j};q)_{j}}\cdot\frac{(Y^{-2};q)_{\ell-j}}{(t^{-1}Y^{-2};q)_{\ell-j}}\cdot t \cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\] \[=t^{\frac{1}{2}}\cdot t^{-\frac{1}{2}}C_{j}^{(\ell)}(Y)\cdot\frac{ (1-q^{\ell-j})}{(1-q^{\ell})}\cdot\frac{(1-Y^{-2}q^{\ell-j})}{(1-Y^{-2}q^{\ell- 2j})}=A_{j}^{(\ell)}(Y).\]
(c) Using (5.10) and (6.3) and (6.2) gives
\[D^{(\ell-1)}_{\ell-j}(Y^{-1})\cdot t^{-\frac{1}{2}(\ell-2j)}\cdot \frac{(t^{-1}Y^{-2}q^{(\ell-2j)+1};q)_{j}}{(Y^{-2}q^{(\ell-2j)+1};q)_{j}}\cdot \frac{(Y^{-2}q;q)_{\ell-j}}{(t^{-1}Y^{-2}q;q)_{\ell-j}}\] \[=K^{(\ell)}_{j}(Y)\cdot t^{-\frac{1}{2}(\ell-2j)}\cdot\frac{(t^{-1 }Y^{-2}q^{(\ell-2j)+1};q)_{j}}{(Y^{-2}q^{(\ell-2j)+1};q)_{j}}\cdot\frac{(Y^{-2} q;q)_{\ell-j}}{(t^{-1}Y^{-2}q;q)_{\ell-j}}\cdot\frac{(1-t^{-1}Y^{-2}q^{\ell-2j})}{(1-t ^{-1}Y^{-2})}\cdot\frac{(1-Y^{-2})}{(1-Y^{-2}q^{\ell-2j})}\] \[\qquad\cdot t^{\frac{1}{2}}q^{\ell-j}\cdot\frac{(1-q^{j})}{(1-q^ {\ell})}\cdot\frac{(1-Y^{-2}q^{-j})}{(1-tY^{-2}q^{\ell-2j})}\] \[=K^{(\ell)}_{j}(Y)\cdot t^{-\frac{1}{2}(\ell-2j)}\cdot\frac{(t^{- 1}Y^{-2}q^{(\ell-2j)};q)_{j}}{(Y^{-2}q^{(\ell-2j)};q)_{j}}\cdot\frac{(Y^{-2};q)_ {\ell-j}}{(t^{-1}Y^{-2};q)_{\ell-j}}\cdot\frac{(1-t^{-1}Y^{-2}q^{\ell-j})}{(1- t^{-1}Y^{-2}q^{\ell-j})}\cdot\frac{(1-Y^{-2}q^{\ell-j})}{(1-Y^{-2}q^{\ell-j})}\] \[\qquad\cdot t^{\frac{1}{2}}q^{\ell-j}\cdot\frac{(1-q^{j})}{(1-q^ {\ell})}\cdot\frac{(1-Y^{-2}q^{-j})}{(1-tY^{-2}q^{\ell-2j})}\] \[=t^{-\frac{1}{2}}C^{(\ell)}_{j}(Y)\cdot t^{\frac{1}{2}}q^{\ell-j }\cdot\frac{(1-q^{j})}{(1-q^{\ell})}\cdot\frac{(1-Y^{-2}q^{-j})}{(1-tY^{-2}q^ {\ell-2j})}=B^{(\ell)}_{\ell-j}(Y).\]
Then, replacing \(j\) with \(\ell-j\) gives
\[B^{(\ell)}_{j}(Y)=D^{(\ell-1)}_{j}(Y^{-1})\cdot t^{\frac{1}{2}(\ell-2j)}\cdot \frac{(t^{-1}Y^{-2}q^{-(\ell-2j)+1};q)_{\ell-j}}{(Y^{-2}q^{-(\ell-2j)+1};q)_{ \ell-j}}\cdot\frac{(Y^{-2}q;q)_{j}}{(t^{-1}Y^{-2}q;q)_{j}}.\]
### Product formulas for type \(SL_{2}\) Macdonald polynomials
The following theorem provides formulas for the products \(E_{\ell}(x)E_{m}(x)\), \(E_{-\ell}(x)P_{m}(x)\) and \(P_{\ell}(x)P_{m}(x)\) expanded in terms of Macdonald polynomials. It is useful to note that, in Theorem 6.2,
\[\operatorname{ev}_{m}(A^{(\ell)}_{j}(Y)) =0\ \ \text{if}\ m+\ell-2j<0, \operatorname{ev}_{m}(tB^{(\ell+1)}_{j}(Y)) =0\ \ \text{if}\ m-(\ell-2j)<0,\] \[\operatorname{ev}_{m}(B^{(\ell)}_{j}(Y)) =0\ \ \text{if}\ -m+\ell-2j>0, \operatorname{ev}_{m}(A^{(\ell+1)}_{j}(Y)) =0\ \ \text{if}\ -m-(\ell-2j)>0,\]
and
\[\operatorname{ev}_{m}(C^{(\ell)}_{j}(Y))=0\ \ \text{if}\ m+\ell-2j<0,\]
since a factor in the numerator of each of these expressions evaluates to \((1-1)=0\).
**Theorem 6.2**.: _Let \(\ell,m\in\mathbb{Z}_{>0}\). Let \(\operatorname{ev}_{m}\colon\mathbb{C}[Y,Y^{-1}]\to\mathbb{C}\) be the homomorphism given by \(\operatorname{ev}_{m}(Y)=t^{-\frac{1}{2}}q^{-\frac{1}{2}m}\) and extend \(\operatorname{ev}_{m}\) to elements of \(\mathbb{C}(Y)\) such that the denominator does not evaluate to \(0\). Then_
\[P_{\ell}(x)P_{m}(x)=\sum_{j=0}^{\ell}\operatorname{ev}_{m}(C^{(\ell)}_{j}(Y))P_{ m+\ell-2j}(x),\]
\[E_{\ell}(x)P_{m}(x)=\sum_{j=0}^{\ell-1}\operatorname{ev}_{m}(A^{(\ell)}_{j}(Y))E_{ m+\ell-2j}(x)+\sum_{j=0}^{\ell-1}\operatorname{ev}_{m}(B^{(\ell)}_{j}(Y))E_{-m+ \ell-2j}(x)\qquad\text{and}\]
\[E_{-\ell}(x)P_{m}(x)=\sum_{j=0}^{\ell}\operatorname{ev}_{m}(tB^{(\ell+1)}_{j}(Y)) E_{m-(\ell-2j)}(x)+\sum_{j=0}^{\ell}\operatorname{ev}_{m}(A^{(\ell+1)}_{j}(Y))E_{-m-( \ell-2j)}(x).\]
Proof.: By (3.15), if \(f\in\mathbb{C}(Y)\) is such that \(\mathrm{ev}_{m}(f(Y))\) is defined, then \(f(Y)E_{m}(X)\mathbf{1}_{Y}=\mathrm{ev}_{m}(f(Y))E_{m}(X)\mathbf{1}_{Y}\).
(a) By (3.19), (3.30), (4.2) and (3.15),
\[P_{\ell}(X)P_{m}(X)\mathbf{1}_{Y} =P_{\ell}(X)t^{\frac{1}{2}}\mathbf{1}_{0}E_{m}(X)\mathbf{1}_{Y}=t \mathbf{1}_{0}E_{\ell}(X)\mathbf{1}_{0}E_{m}(X)\mathbf{1}_{Y}\] \[=t\Big{(}\sum_{j=0}^{\ell}\mathbf{1}_{0}\eta^{\ell-2j}K_{j}^{( \ell)}(Y)\Big{)}E_{m}(X)\mathbf{1}_{Y}\]
and, using (3.23),
\[\mathbf{1}_{0} \eta^{\ell-2j}tK_{j}^{(\ell)}(Y)E_{m}(X)\mathbf{1}_{Y}=t\cdot \mathrm{ev}_{m}(K_{j}^{(\ell)}(Y))\mathbf{1}_{0}\eta^{-j}\eta^{\ell-j}E_{m}(X )\mathbf{1}_{Y}\] \[=t\cdot\mathrm{ev}_{m}\Big{(}K_{j}^{(\ell)}(Y)t^{-\frac{1}{2}( \ell-2j)}\frac{(t^{-1}Y^{-2}q^{\ell-2j};q)_{j}}{(Y^{-2}q^{\ell-2j};q)_{j}}\cdot \frac{(Y^{-2};q)_{\ell-j}}{(t^{-1}Y^{-2};q)_{\ell-j}}\Big{)}\mathbf{1}_{0}E_{m +\ell-2j}(X)\mathbf{1}_{Y}\] \[=t\cdot\mathrm{ev}_{m}\Big{(}K_{j}^{(\ell)}(Y)t^{-\frac{1}{2}( \ell-2j)}\frac{(t^{-1}Y^{-2}q^{\ell-2j};q)_{j}}{(Y^{-2}q^{\ell-2j};q)_{j}}\cdot \frac{(Y^{-2};q)_{\ell-j}}{(t^{-1}Y^{-2};q)_{\ell-j}}\Big{)}t^{-\frac{1}{2}}P_{ m+\ell-2j}(X)\mathbf{1}_{Y}\] \[=t^{\frac{1}{2}}\mathrm{ev}_{m}(t^{-\frac{1}{2}}C_{j}^{(\ell)}(Y ))P_{m+\ell-2j}(X)\mathbf{1}_{Y},\]
where the last equality is (6.3).
(b) By (3.19), (4.1) and (3.18),
\[E_{\ell}(X)P_{m}(X)\mathbf{1}_{Y} =E_{\ell}(X)t^{\frac{1}{2}}\mathbf{1}_{0}E_{m}(X)\mathbf{1}_{Y}=t ^{\frac{1}{2}}\Big{(}\sum_{j=0}^{\ell-1}\eta^{\ell-2j}D_{j}^{(\ell-1)}(Y) \Big{)}\mathbf{1}_{0}E_{m}(X)\mathbf{1}_{Y}\] \[=t^{\frac{1}{2}}\Big{(}\sum_{j=0}^{\ell-1}\eta^{\ell-2j}D_{j}^{( \ell-1)}(Y)\Big{)}t^{-\frac{1}{2}}P_{m}(X)\mathbf{1}_{Y}\] \[=\Big{(}\sum_{j=0}^{\ell-1}\eta^{\ell-2j}D_{j}^{(\ell-1)}(Y)\Big{)} \Big{(}\frac{t(1-q^{m})}{(1-tq^{m})}E_{m}(X)+E_{-m}(X)\Big{)}\mathbf{1}_{Y}.\]
Using (3.23),
\[\eta^{\ell-2j}D_{j}^{(\ell-1)}(Y)t\frac{(1-q^{m})}{(1-tq^{m})}E_{m }(X)\mathbf{1}_{Y}=\mathrm{ev}_{m}\big{(}D_{j}^{(\ell-1)}(Y)\big{)}t\frac{(1- q^{m})}{(1-tq^{m})}\eta^{-j}\eta^{\ell-j}E_{m}(X)\mathbf{1}_{Y}\] \[\quad=\mathrm{ev}_{m}\Big{(}D_{j}^{(\ell-1)}(Y)t\frac{(1-t^{-1}Y^{ -2})}{(1-Y^{-2})}t^{-\frac{1}{2}(\ell-2j)}\frac{(t^{-1}Y^{-2}q^{\ell-2j};q)_{j}} {(Y^{-2}q^{\ell-2j};q)_{j}}\cdot\frac{(Y^{-2};q)_{\ell-j}}{(t^{-1}Y^{-2};q)_{ \ell-j}}\Big{)}E_{m+\ell-2j}(X)\mathbf{1}_{Y}\] \[\quad=\mathrm{ev}_{m}(A_{j}^{(\ell)}(Y))E_{m+\ell-2j}(X)\mathbf{1}_ {Y},\]
where the last equality follows from (6.4). Using (3.15) and (3.24),
\[\eta^{\ell-2j}D_{j}^{(\ell-1)}(Y)E_{-m}(X)\mathbf{1}_{Y}=\mathrm{ ev}_{m}(D_{j}^{(\ell-1)}(Y^{-1}))\eta^{\ell-j}\eta^{-j}E_{-m}(X)\mathbf{1}_{Y}\] \[\quad=\mathrm{ev}_{m}\Big{(}D_{j}^{(\ell-1)}(Y^{-1})t^{\frac{1}{2}( \ell-2j)}\frac{(t^{-1}Y^{-2}q^{-(\ell-2j)+1};q)_{\ell-j}}{(Y^{-2}q^{-(\ell-2j)+1} ;q)_{\ell-j}}\frac{(Y^{-2}q;q)_{j}}{(t^{-1}Y^{-2}q;q)_{j}}\Big{)}E_{-m+\ell-2j} (X)\mathbf{1}_{Y}\] \[\quad=\mathrm{ev}_{m}(B_{j}^{(\ell)}(Y))E_{-m+\ell-2j}(X)\mathbf{1 }_{Y},\]
where the last equality is (6.5).
(c) By (3.19), (4.1), (3.18) and Proposition 4.1(a),
\[E_{-\ell}(X)P_{m}(X)\mathbf{1}_{Y} =E_{-\ell}(X)t^{\frac{1}{2}}\mathbf{1}_{0}E_{m}(X)\mathbf{1}_{Y}=t ^{\frac{1}{2}}\Big{(}\sum_{j=0}^{\ell}\eta^{-(\ell-2j)}D_{j}^{(-\ell)}(Y) \Big{)}\mathbf{1}_{0}E_{m}(X)\mathbf{1}_{Y}\] \[=t^{\frac{1}{2}}\Big{(}\sum_{j=0}^{\ell}\eta^{-(\ell-2j)}D_{j}^{( -\ell)}(Y)\Big{)}t^{-\frac{1}{2}}P_{m}(X)\mathbf{1}_{Y}\] \[=\Big{(}\sum_{j=0}^{\ell}\eta^{-(\ell-2j)}D_{j}^{(-\ell)}(Y) \Big{)}\Big{(}\frac{t(1-q^{m})}{(1-tq^{m})}E_{m}(X)+E_{-m}(X)\Big{)}\mathbf{1}_ {Y}.\]
Using
\[\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})} \frac{(t^{-1}Y^{-2}q^{-(\ell-2j)};q)_{\ell-j}}{(Y^{-2}q^{-(\ell-2j )};q)_{\ell-j}}\cdot\frac{(Y^{-2};q)_{j}}{(t^{-1}Y^{-2};q)_{j}}\] \[=\frac{(t^{-1}Y^{-2}q^{-(\ell-2j)};q)_{\ell+1-j}}{(Y^{-2}q^{-( \ell-2j)};q)_{\ell+1-j}}\cdot\frac{(1-Y^{-2}q^{j})}{(1-t^{-1}Y^{-2}q^{j})}\cdot \frac{(Y^{-2}q;q)_{j-1}}{(t^{-1}Y^{-2}q;q)_{j-1}}\] \[=\frac{(t^{-1}Y^{-2}q^{-(\ell-2j)};q)_{\ell+1-j}}{(Y^{-2}q^{-( \ell-2j)};q)_{\ell+1-j}}\cdot\frac{(Y^{-2}q;q)_{j}}{(t^{-1}Y^{-2}q;q)_{j}}\]
and (3.21)gives
\[\eta^{-(\ell-2j)}t^{\frac{1}{2}}D_{j}^{(\ell)}(Y^{-1})t\frac{(1-q ^{m})}{(1-tq^{m})}E_{m}(X)\mathbf{1}_{Y}=\mathrm{ev}_{m}\Big{(}t^{\frac{1}{2}} D_{j}^{(\ell)}(Y^{-1})t\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\Big{)}\eta^{-(\ell-j)} \eta^{j}E_{m}(X)\mathbf{1}_{Y}\] \[=\mathrm{ev}_{m}\Big{(}t^{\frac{1}{2}}D_{j}^{(\ell)}(Y^{-1})t\frac {(1-t^{-1}Y^{-2})}{(1-Y^{-2})}t^{\frac{1}{2}(\ell-2j)}\frac{(t^{-1}Y^{-2}q^{- (\ell-2j)};q)_{\ell-j}}{(Y^{-2}q^{-(\ell-2j)};q)_{\ell-j}}\cdot\frac{(Y^{-2};q) _{j}}{(t^{-1}Y^{-2};q)_{j}}\Big{)}E_{m-(\ell-2j)}(X)\mathbf{1}_{Y}\] \[=\mathrm{ev}_{m}\Big{(}tD_{j}^{(\ell)}(Y^{-1})\cdot t^{\frac{1}{2 }(\ell+1-2j)}\cdot\frac{(t^{-1}Y^{-2}q^{-(\ell-2j)};q)_{\ell+1-j}}{(Y^{-2}q^{- (\ell-2j)};q)_{\ell+1-j}}\cdot\frac{(Y^{-2}q;q)_{j}}{(t^{-1}Y^{-2}q;q)_{j}} \Big{)}E_{m-(\ell-2j)}(X)\mathbf{1}_{Y}\] \[=\mathrm{ev}_{m}(tB_{j}^{(\ell+1)}(Y))E_{m-(\ell-2j)}(X)\mathbf{1} _{Y},\]
where the last equality is (6.5). Using (3.15) and (3.22) gives
\[\eta^{-(\ell-2j)}t^{\frac{1}{2}}D_{j}^{(\ell)}(Y^{-1})E_{-m}(X) \mathbf{1}_{Y}=\mathrm{ev}_{m}(t^{\frac{1}{2}}D_{j}^{(\ell)}(Y))\eta^{j}\eta^{ -(\ell-j)}E_{-m}(X)\mathbf{1}_{Y}\] \[=\mathrm{ev}_{m}\Big{(}t^{\frac{1}{2}}D_{j}^{(\ell)}(Y)t^{-\frac{1} {2}(\ell-2j)}\frac{(t^{-1}Y^{-2}q^{\ell-2j+1};q)_{j}}{(Y^{-2}q^{\ell-2j+1};q)_{ j}}\frac{(Y^{-2}q;q)_{\ell-j}}{(t^{-1}Y^{-2}q;q)_{\ell-j}}\Big{)}E_{m-(\ell-2j)}(X) \mathbf{1}_{Y}\] \[=\mathrm{ev}_{m}\Big{(}D_{j}^{(\ell)}(Y)t^{-\frac{1}{2}(\ell+1-2j )}\frac{(t^{-1}Y^{-2}q^{\ell-2j+1};q)_{j}}{(Y^{-2}q^{\ell-2j+1};q)_{j}}\frac{(Y ^{-2};q)_{\ell+1-j}}{(t^{-1}Y^{-2};q)_{\ell+1-j}}\cdot t\frac{(1-t^{-1}Y^{-2})} {(1-Y^{-2})}\Big{)}E_{-m-(\ell-2j)}(X)\mathbf{1}_{Y}\] \[=\mathrm{ev}(A_{j}^{(\ell+1)}(Y))E_{-m-(\ell-2j)}(X)\mathbf{1}_{Y},\]
where the last equality is (6.4).
**Remark 6.3**.: After replacing \(Y^{-2}\) by \(X_{1}X_{2}^{-1}\) the expression for \(C_{j}^{(\ell)}(Y)\) coincides with the expression for the Macdonald Littlewood-Richardson coefficient given in [14, Theorem 1.4].
## 7 Examples
For \(j\in\mathbb{Z}_{>0}\) and \(a,b\in\mathbb{Z}\) with \(a\leq b\) define
\[(z;q)_{j} =(1-z)(1-qz)(1-q^{2}z)\cdots(1-q^{j-1}z)\qquad\text{and}\] \[(1-zq^{a..b}) =(1-zq^{a})(1-zq^{a+1})\cdots(1-zq^{b-1})(1-zq^{b})\quad\text{so that}\quad(1-zq^{a..b})=(zq^{a};q)_{b-a+1}.\]
### Examples of the \(q\)-\(t\)-binomial coefficients
Let
\[\genfrac{[}{]}{0.0pt}{}{k}{j}_{q,t}=\frac{\frac{(q;q)_{k}}{(t;q)_{k}}}{\frac{(q;q)_{j}}{(t;q)_{j}}\frac{(q;q)_{k-j}}{(t;q)_{k-j}}}=\frac{(1-q^{j+1..k})}{(1-q^{1..k-j})}\frac{(1-tq^{0..k-j-1})}{(1-tq^{j..k-1})}.\]
Then
\[\genfrac{[}{]}{0.0pt}{}{0}{0}_{q,t}=1,\]
\[\genfrac{[}{]}{0.0pt}{}{1}{0}_{q,t}=1,\qquad\genfrac{[}{]}{0.0pt}{}{1}{1}_{q, t}=1,\]
\[\genfrac{[}{]}{0.0pt}{}{2}{0}_{q,t}=1\qquad\genfrac{[}{]}{0.0pt}{}{2}{1}_{q, t}=\frac{(1-q^{2})(1-t)}{(1-q)(1-tq)},\qquad\genfrac{[}{]}{0.0pt}{}{2}{2}_{q, t}=1,\]
\[\genfrac{[}{]}{0.0pt}{}{3}{0}_{q,t}=1,\qquad\genfrac{[}{]}{0.0pt}{}{3}{1}_{q, t}=\frac{(1-t)(1-q^{3})}{(1-q)(1-tq^{2})},\qquad\genfrac{[}{]}{0.0pt}{}{3}{2}_{q, t}=\frac{(1-t)(1-q^{3})}{(1-q)(1-tq^{2})},\qquad\genfrac{[}{]}{0.0pt}{}{3}{3}_{q,t}=1\]
### Examples of the shifted \(q\)-\(t\)-binomial coefficients
Let
\[\genfrac{\{}{\}}{0.0pt}{}{k}{j}_{q,t}=\genfrac{[}{]}{0.0pt}{}{k}{j}_{q,t}\,\frac{(1-tq^{k-j})}{(1-tq^{k})}=\frac{(1-q^{j+1..k})}{(1-q^{1..k-j})}\frac{(1-tq^{0..k-j})}{(1-tq^{j..k})}.\]
Then
\[\genfrac{\{}{\}}{0.0pt}{}{0}{0}_{q,t}=1,\]
\[\genfrac{\{}{\}}{0.0pt}{}{1}{0}_{q,t}=1,\qquad\genfrac{\{}{\}}{0.0pt}{}{1}{1}_{q,t}=\frac{(1-t)}{(1-tq)},\]
\[\genfrac{\{}{\}}{0.0pt}{}{2}{0}_{q,t}=1,\qquad\genfrac{\{}{\}}{0.0pt}{}{2}{1}_{q,t}=\frac{(1-q^{2})(1-t)}{(1-q)(1-tq^{2})},\qquad\genfrac{\{}{\}}{0.0pt}{}{2}{2}_{q,t}=\frac{(1-t)}{(1-tq^{2})},\]
\[\genfrac{\{}{\}}{0.0pt}{}{3}{0}_{q,t}=1,\qquad\genfrac{\{}{\}}{0.0pt}{}{3}{1}_{q,t}=\frac{(1-t)(1-q^{3})}{(1-q)(1-tq^{3})},\qquad\genfrac{\{}{\}}{0.0pt}{}{3}{2}_{q,t}=\frac{(1-t)(1-tq)(1-q^{3})}{(1-q)(1-tq^{2})(1-tq^{3})},\qquad\genfrac{\{}{\}}{0.0pt}{}{3}{3}_{q,t}=\frac{(1-t)}{(1-tq^{3})}.\]
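As a quick computational sanity check (not part of the source text, and assuming the Python computer algebra library sympy), the entries in the two tables above can be reproduced directly from the defining quotients of \(q\)-Pochhammer symbols:

```python
# Minimal sympy sketch: reproduce sample entries of the q-t-binomial and
# shifted q-t-binomial tables from their Pochhammer-quotient definitions.
import sympy as sp

q, t = sp.symbols('q t')

def poch(a, n):
    """Finite q-Pochhammer symbol (a;q)_n = (1-a)(1-aq)...(1-aq^(n-1))."""
    return sp.prod([1 - a*q**i for i in range(n)])

def qt_binom(k, j):
    """[k, j]_{q,t} = ((q;q)_k/(t;q)_k) / ( ((q;q)_j/(t;q)_j)((q;q)_{k-j}/(t;q)_{k-j}) )."""
    return (poch(q, k)/poch(t, k)) / ((poch(q, j)/poch(t, j))*(poch(q, k - j)/poch(t, k - j)))

def qt_binom_shifted(k, j):
    """{k, j}_{q,t} = [k, j]_{q,t} (1-tq^(k-j))/(1-tq^k)."""
    return qt_binom(k, j)*(1 - t*q**(k - j))/(1 - t*q**k)

assert sp.simplify(qt_binom(2, 1) - (1 - q**2)*(1 - t)/((1 - q)*(1 - t*q))) == 0
assert sp.simplify(qt_binom(3, 1) - (1 - t)*(1 - q**3)/((1 - q)*(1 - t*q**2))) == 0
assert sp.simplify(qt_binom_shifted(2, 1) - (1 - q**2)*(1 - t)/((1 - q)*(1 - t*q**2))) == 0
assert sp.simplify(qt_binom_shifted(3, 1) - (1 - t)*(1 - q**3)/((1 - q)*(1 - t*q**3))) == 0
```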
### Examples of the \(Y\)-binomial coefficients
\[\binom{\ell}{j}_{Y}=\frac{(1-t^{-1}Y^{-2}q^{-(j-1)..\ell-2j})(1-tY^{-2}q^{\ell-2j \cdot\ell-j-1})}{(1-Y^{-2}q^{1..\ell-j})(1-Y^{-2}q^{-j..-1})}=\frac{(t^{-1}Y^{-2 }q^{-(j-1)};q)_{\ell-j}(tY^{-2}q^{\ell-2j};q)_{j}}{(Y^{-2}q;q)_{\ell-j}(Y^{-2}q ^{-j};q)_{j}}.\]
Then
\[\binom{0}{0}_{Y} =1,\] \[\binom{1}{0}_{Y} =\frac{(1-t^{-1}Y^{-2}q)}{(1-Y^{-2}q)}=t^{-1}\cdot\frac{(1-tY^{2}q ^{-1})}{(1-Y^{2}q^{-1})}\] \[\binom{1}{1}_{Y} =\frac{(1-tY^{-2}q^{-1})}{(1-Y^{-2}q^{-1})}=t\cdot\frac{(1-t^{-1} Y^{2}q)}{(1-Y^{2}q)}\] \[\binom{2}{0}_{Y} =\frac{(1-t^{-1}Y^{-2}q)(1-t^{-1}Y^{-2}q^{2})}{(1-Y^{-2}q)(1-Y^{ -2}q^{2})}=t^{-2}\cdot\frac{(1-tY^{2}q^{-1})(1-tY^{2}q^{-2})}{(1-Y^{2}q^{-1})( 1-Y^{2}q^{-2})}\] \[\binom{2}{1}_{Y} =\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2}q)}\cdot\frac{(1-tY^{-2})}{(1-Y ^{-2}q^{-1})}=\frac{(1-tY^{2})}{(1-Y^{2}q^{-1})}\cdot\frac{(1-t^{-1}Y^{2})}{(1 -Y^{2}q)}\] \[\binom{2}{2}_{Y} =\frac{(1-tY^{-2}q^{-2})(1-tY^{-2}q^{-1})}{(1-Y^{-2}q^{-2})(1-Y^{ -2}q^{-1})}=t^{2}\cdot\frac{(1-t^{-1}Y^{2}q^{2})(1-t^{-1}Y^{2}q)}{(1-Y^{2}q^{2} )(1-Y^{2}q)}\]
\[\binom{3}{0}_{Y} =\frac{(1-t^{-1}Y^{-2}q)(1-t^{-1}Y^{-2}q^{2})(1-t^{-1}Y^{-2}q^{3} )}{(1-Y^{-2}q)(1-Y^{-2}q^{2})(1-Y^{-2}q^{3})}\] \[\binom{3}{1}_{Y} =\frac{(1-t^{-1}Y^{-2})(1-t^{-1}Y^{-2}q)}{(1-Y^{-2}q)(1-Y^{-2}q^ {2})}\cdot\frac{(1-tY^{-2}q)}{(1-Y^{-2}q^{-1})}\] \[\binom{3}{2}_{Y} =\frac{(1-t^{-1}Y^{-2}q^{-1})}{(1-Y^{-2}q)}\cdot\frac{(1-tY^{-2}q ^{-1})(1-tY^{-2})}{(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1})}\] \[\binom{3}{3}_{Y} =\frac{(1-tY^{-2}q^{-3})(1-tY^{-2}q^{-2})(1-tY^{-2}q^{-1})}{(1-Y^ {-2}q^{-3})(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1})}\]
### Examples of the \(D_{j}^{(\ell-1)}(Y)\)
The general product formula for the \(D_{j}^{(\ell)}(Y)\) is
\[D_{j}^{(\ell)}(Y)=t^{-\frac{1}{2}(\ell+1)}\cdot t^{\ell-j}\cdot\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\cdot\frac{(1-tq^{\ell-j})}{(1-tq^{\ell})}\cdot\genfrac{ [}{]}{0.0pt}{}{\ell}{j}_{Y}\cdot\frac{(1-tY^{-2}q^{\ell-j})}{(1-tY^{-2}q^{\ell -2j})}\]
The first few of the \(D_{j}^{(\ell-1)}(Y)\) are
\[D_{0}^{(0)}(Y) =t^{-\frac{1}{2}}\cdot 1,\] \[D_{0}^{(1)}(Y) =t^{-\frac{2}{2}}\cdot\frac{(1-tY^{2}q^{-1})}{(1-Y^{2}q^{-1})}=t ^{-\frac{2}{2}}\cdot t\cdot\frac{(1-t^{-1}Y^{-2}q)}{(1-Y^{-2}q)}\] \[D_{1}^{(1)}(Y) =t^{-\frac{2}{2}}\cdot qt\cdot\frac{(1-t)}{(1-tq)}\cdot\frac{(1 -t^{-1}Y^{2})}{(1-Y^{2}q)}=t^{-\frac{2}{2}}\cdot\frac{(1-t)}{(1-tq)}\cdot\frac {(1-tY^{-2})}{(1-Y^{-2}q^{-1})},\]
\[D_{0}^{(2)}(Y) =t^{-\frac{3}{2}}\cdot\frac{(1-tY^{2}q^{-2})(1-tY^{2}q^{-1})}{(1- Y^{2}q^{-2})(1-Y^{2}q^{-1})}=t^{-\frac{3}{2}}\cdot t^{2}\cdot\frac{(1-t^{-1}Y^{-2}q) (1-t^{-1}Y^{-2}q^{2})}{(1-Y^{-2}q)(1-Y^{-2}q^{2})}\] \[D_{1}^{(2)}(Y) =t^{-\frac{3}{2}}\cdot qt\cdot\frac{(1-q^{2})(1-t)}{(1-q)(1-tq^{2 })}\cdot\frac{(1-tY^{2})}{(1-Y^{2}q^{-1})}\cdot\frac{(1-t^{-1}Y^{2}q^{-1})}{(1 -Y^{2}q)},\] \[=t^{-\frac{3}{2}}\cdot t\cdot\frac{(1-q^{2})(1-t)}{(1-q)(1-tq^{2 })}\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2}q)}\cdot\frac{(1-tY^{-2}q)}{(1-Y^{-2} q^{-1})}\] \[D_{2}^{(2)}(Y) =t^{-\frac{3}{2}}\cdot q^{2}t^{2}\cdot\frac{(1-t)}{(1-tq^{2})} \cdot\frac{(1-t^{-1}Y^{2})(1-t^{-1}Y^{2}q)}{(1-Y^{2}q)(1-Y^{2}q^{2})}\] \[=t^{-\frac{3}{2}}\cdot\frac{(1-t)}{(1-tq^{2})}\cdot\frac{(1-tY^{- 2}q^{-1})(1-tY^{-2})}{(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1})}.\]
### Examples of the \(K_{j}^{(\ell)}(Y)\)
The general product formula for the \(K_{j}^{(\ell)}(Y)\) for \(\ell\in\mathbb{Z}_{>0}\) is
\[K_{j}^{(\ell)}(Y)=t^{-\frac{1}{2}(\ell-1)}\cdot t^{\ell-1-j}\cdot\begin{bmatrix} \ell\\ j\end{bmatrix}_{q,t}\cdot\begin{pmatrix}\ell\\ j\end{pmatrix}_{Y}\cdot\frac{(1-Y^{-2}q^{\ell-2j})}{(1-t^{-1}Y^{-2}q^{\ell-2 j})}\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\]
The first few of the \(K_{j}^{(\ell)}(Y)\) are
\[K_{0}^{(0)}(Y) =(t^{\frac{1}{2}}+t^{-\frac{1}{2}})\qquad(\text{since }\mathbf{1}_{0}E_{0}(X)\mathbf{1}_{0}=\mathbf{1}_{0}^{2}=\mathbf{1}_{0}(t^{\frac{1}{2}}+t^{-\frac{1}{2}})),\]
\[K_{0}^{(1)}(Y) =1\cdot\frac{(1-t^{-1}Y^{-2}q)}{(1-Y^{-2}q)}\cdot\frac{(1-Y^{-2}q)}{(1-t^{-1}Y^{-2}q)}\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})},\]
\[K_{1}^{(1)}(Y) =1\cdot t^{-1}\cdot\frac{(1-tY^{-2}q^{-1})}{(1-Y^{-2}q^{-1})}\cdot\frac{(1-Y^{-2}q^{-1})}{(1-t^{-1}Y^{-2}q^{-1})}\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})},\]
\[K_{0}^{(2)}(Y) =t^{-\frac{1}{2}}\cdot t\cdot\frac{(1-t^{-1}Y^{-2}q)(1-t^{-1}Y^{-2}q^{2})}{(1-Y^{-2}q)(1-Y^{-2}q^{2})}\cdot\frac{(1-Y^{-2}q^{2})}{(1-t^{-1}Y^{-2}q^{2})}\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})},\]
\[K_{1}^{(2)}(Y) =t^{-\frac{1}{2}}\cdot 1\cdot\frac{(1-q^{2})(1-t)}{(1-q)(1-tq)}\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2}q)}\cdot\frac{(1-tY^{-2})}{(1-Y^{-2}q^{-1})}\cdot\frac{(1-Y^{-2})}{(1-t^{-1}Y^{-2})}\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})},\]
\[K_{2}^{(2)}(Y) =t^{-\frac{1}{2}}\cdot t^{-1}\cdot\frac{(1-tY^{-2}q^{-2})(1-tY^{-2}q^{-1})}{(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1})}\cdot\frac{(1-Y^{-2}q^{-2})}{(1-t^{-1}Y^{-2}q^{-2})}\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}.\]
\[K_{0}^{(3)}(Y) =t^{-1}\cdot t^{2}\cdot\frac{(1-t^{-1}Y^{-2}q)(1-t^{-1}Y^{-2}q^{ 2})(1-t^{-1}Y^{-2}q^{3})}{(1-Y^{-2}q)(1-Y^{-2}q^{2})(1-Y^{-2}q^{3})}\cdot\frac {(1-Y^{-2}q^{3})}{(1-t^{-1}Y^{-2}q^{3})}\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2} )}\] \[K_{1}^{(3)}(Y) =t^{-1}\cdot t\cdot\frac{(1-t)(1-q^{3})}{(1-q)(1-tq^{2})}\cdot \frac{(1-t^{-1}Y^{-2})(1-t^{-1}Y^{-2}q)}{(1-Y^{-2}q)(1-Y^{-2}q^{2})}\cdot\frac{ (1-tY^{-2}q)}{(1-Y^{-2}q^{-1})}\] \[\qquad\cdot\frac{(1-Y^{-2}q)}{(1-t^{-1}Y^{-2}q)}\cdot\frac{(1-t^ {-1}Y^{-2})}{(1-Y^{-2})}\] \[K_{2}^{(3)}(Y) =t^{-1}\cdot 1\cdot\frac{(1-t)(1-q^{3})}{(1-q)(1-tq^{2})}\cdot \frac{(1-t^{-1}Y^{-2}q^{-1})}{(1-Y^{-2}q)}\cdot\frac{(1-tY^{-2}q^{-1})(1-tY^{- 2})}{(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1})}\] \[\qquad\cdot\frac{(1-Y^{-2}q^{-1})}{(1-t^{-1}Y^{-2}q^{-1})}\cdot \frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\] \[K_{3}^{(3)}(Y) =t^{-1}\cdot t^{-1}\cdot\frac{(1-tY^{-2}q^{-3})(1-tY^{-2}q^{-2})(1 -tY^{-2}q^{-1})}{(1-Y^{-2}q^{-3})(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1})}\cdot\frac{(1 -Y^{-2}q^{-3})}{(1-t^{-1}Y^{-2}q^{-3})}\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\]
### Examples of \(K_{j}^{(\ell)}(Y)\) to \(C_{j}^{(\ell)}(Y)\)
The following are examples of the identity (6.3) from Proposition 6.1 which says
\[K_{j}^{(\ell)}(Y)\cdot t^{-\frac{1}{2}(\ell-2j)}\cdot\frac{(t^{-1}Y^{-2}q^{\ell -2j};q)_{j}}{(Y^{-2}q^{\ell-2j};q)_{j}}\cdot\frac{(Y^{-2};q)_{\ell-j}}{(t^{-1}Y^ {-2};q)_{\ell-j}}=C_{j}^{(\ell)}(Y),\]
where
\[C_{j}^{(\ell)}(Y)=\begin{bmatrix}\ell\\ j\end{bmatrix}_{q,t}\cdot\frac{(t^{-1}Y^{-2}q^{-(j-1)};q)_{j}(tY^{-2}q^{\ell- 2j};q)_{j}}{(Y^{-2}q^{\ell-2j+1};q)_{j}(Y^{-2}q^{-j};q)_{j}}.\]
The case \(\ell=0\).
\[K_{0}^{(0)}(Y)\cdot t^{-\frac{1}{2}(0-2\cdot 0)}\cdot\frac{(t^{-1}Y^{-2}q^{\ell-2j};q)_{0}}{(Y^{-2}q^{\ell-2j};q)_{0}}\cdot\frac{(Y^{-2};q)_{0}}{(t^{-1}Y^{-2};q)_{0}}\] \[\qquad=(t^{\frac{1}{2}}+t^{-\frac{1}{2}})\cdot 1\cdot\frac{1}{1}\cdot\frac{1}{1}=t^{\frac{1}{2}}+t^{-\frac{1}{2}}.\]
The case \(\ell=1\).
\[K_{0}^{(1)}(Y)\cdot t^{-\frac{1}{2}(1-2\cdot 0)}\cdot\frac{(t^{-1}Y^{-2}q^{\ell-2j};q)_{0}}{(Y^{-2}q^{\ell-2j};q)_{0}}\cdot\frac{(Y^{-2};q)_{1}}{(t^{-1}Y^{-2};q)_{1}}\] \[\qquad=\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\cdot t^{-\frac{1}{2}}\frac{1}{1}\cdot\frac{(1-Y^{-2})}{(1-t^{-1}Y^{-2})}=t^{-\frac{1}{2}}.\]
\[K_{1}^{(1)}(Y)\cdot t^{-\frac{1}{2}(1-2\cdot 1)}\cdot\frac{(t^{-1}Y^{-2}q^{1-2\cdot 1};q)_{1}}{(Y^{-2}q^{1-2\cdot 1};q)_{1}}\cdot\frac{(Y^{-2};q)_{0}}{(t^{-1}Y^{-2};q)_{0}}\] \[\qquad=t^{-1}\frac{(1-tY^{-2}q^{-1})}{(1-t^{-1}Y^{-2}q^{-1})}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\cdot t^{\frac{1}{2}}\frac{(1-t^{-1}Y^{-2}q^{-1})}{(1-Y^{-2}q^{-1})}\cdot\frac{1}{1}\] \[\qquad=t^{-\frac{1}{2}}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\frac{(1-tY^{-2}q^{-1})}{(1-Y^{-2}q^{-1})}.\]
The case \(\ell=2\).
\[K_{0}^{(2)}(Y)\cdot t^{-\frac{1}{2}(2-2\cdot 0)}\cdot\frac{(t^{-1}Y^{-2}q^{\ell-2j};q)_{0}}{(Y^{-2}q^{\ell-2j};q)_{0}}\cdot\frac{(Y^{-2};q)_{2}}{(t^{-1}Y^{-2};q)_{2}}\] \[\qquad=t^{\frac{1}{2}}\frac{(1-t^{-1}Y^{-2}q)(1-t^{-1}Y^{-2})}{(1-Y^{-2}q)(1-Y^{-2})}\cdot t^{-1}\cdot\frac{1}{1}\cdot\frac{(1-Y^{-2})(1-Y^{-2}q)}{(1-t^{-1}Y^{-2})(1-t^{-1}Y^{-2}q)}=t^{-\frac{1}{2}}.\]
\[K_{1}^{(2)}(Y)\cdot t^{-\frac{1}{2}(2-2\cdot 1)}\cdot\frac{(t^{-1}Y^{-2}q^{\ell-2j};q)_{1}}{(Y^{-2}q^{\ell-2j};q)_{1}}\cdot\frac{(Y^{-2};q)_{1}}{(t^{-1}Y^{-2};q)_{1}}\] \[\qquad=t^{-\frac{1}{2}}\frac{(1-q^{2})(1-t)}{(1-q)(1-tq)}\frac{(1-tY^{-2})(1-t^{-1}Y^{-2})}{(1-Y^{-2}q)(1-Y^{-2}q^{-1})}\cdot 1\cdot\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\cdot\frac{(1-Y^{-2})}{(1-t^{-1}Y^{-2})}\] \[\qquad=t^{-\frac{1}{2}}\frac{(1-q^{2})(1-t)}{(1-q)(1-tq)}\frac{(1-tY^{-2})(1-t^{-1}Y^{-2})}{(1-Y^{-2}q)(1-Y^{-2}q^{-1})}.\]
\[K_{2}^{(2)}(Y)\cdot t^{-\frac{1}{2}(2-2\cdot 2)}\cdot\frac{(t^{-1}Y^{-2}q^{2-2\cdot 2};q)_{2}}{(Y^{-2}q^{2-2\cdot 2};q)_{2}}\cdot\frac{(Y^{-2};q)_{0}}{(t^{-1}Y^{-2};q)_{0}}\] \[\qquad=t^{-\frac{3}{2}}\frac{(1-tY^{-2}q^{-2})(1-tY^{-2}q^{-1})(1-t^{-1}Y^{-2})}{(1-Y^{-2}q^{-1})(1-t^{-1}Y^{-2}q^{-2})(1-Y^{-2})}\cdot t\cdot\frac{(1-t^{-1}Y^{-2}q^{-2})(1-t^{-1}Y^{-2}q^{-1})}{(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1})}\cdot\frac{1}{1}\] \[\qquad=t^{-\frac{1}{2}}\frac{(1-tY^{-2}q^{-2})(1-tY^{-2}q^{-1})}{(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1})}\cdot\frac{(1-t^{-1}Y^{-2}q^{-1})(1-t^{-1}Y^{-2})}{(1-Y^{-2}q^{-1})(1-Y^{-2})}\]
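The identity (6.3) behind these computations can also be verified by machine for small \(\ell\). The sketch below is not part of the source; it assumes the Python library sympy and simply transcribes the quoted product formulas for \(K_{j}^{(\ell)}(Y)\), \(C_{j}^{(\ell)}(Y)\) and the two binomial coefficients, then checks (6.3) symbolically for \(\ell\leq 3\):

```python
# Minimal sympy sketch: check identity (6.3) for l = 1, 2, 3 from the product
# formulas for K_j^(l), C_j^(l), the q-t-binomial and the Y-binomial quoted above.
import sympy as sp

q, t, Y = sp.symbols('q t Y', positive=True)
Yi2 = Y**(-2)

def poch(a, n):
    return sp.prod([1 - a*q**i for i in range(n)])

def qt_binom(l, j):
    return (poch(q, l)/poch(t, l)) / ((poch(q, j)/poch(t, j))*(poch(q, l - j)/poch(t, l - j)))

def Y_binom(l, j):
    return (poch(Yi2*q**(-(j - 1))/t, l - j)*poch(t*Yi2*q**(l - 2*j), j)) \
           / (poch(Yi2*q, l - j)*poch(Yi2*q**(-j), j))

def K(l, j):
    return (t**sp.Rational(-(l - 1), 2)*t**(l - 1 - j)*qt_binom(l, j)*Y_binom(l, j)
            *(1 - Yi2*q**(l - 2*j))/(1 - Yi2*q**(l - 2*j)/t)
            *(1 - Yi2/t)/(1 - Yi2))

def C(l, j):
    return (qt_binom(l, j)*poch(Yi2*q**(-(j - 1))/t, j)*poch(t*Yi2*q**(l - 2*j), j)
            / (poch(Yi2*q**(l - 2*j + 1), j)*poch(Yi2*q**(-j), j)))

for l in range(1, 4):
    for j in range(l + 1):
        lhs = (K(l, j)*t**sp.Rational(-(l - 2*j), 2)
               *poch(Yi2*q**(l - 2*j)/t, j)/poch(Yi2*q**(l - 2*j), j)
               *poch(Yi2, l - j)/poch(Yi2/t, l - j))
        assert sp.simplify(sp.sqrt(t)*lhs - C(l, j)) == 0   # i.e. lhs = t^(-1/2) C_j^(l)
```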
### Examples of \(E_{\ell}(x)\) and \(P_{\ell}(x)\)
\[\vdots\] \[E_{-2}(x) =x^{-2}+\frac{(1-t)(1-q^{2})}{(1-q)(1-q^{2}t)}+\frac{(1-t)}{(1-q^{2} t)}x^{2}\] \[E_{-1}(x) =x^{-1}+\frac{1-t}{1-qt}x,\] \[E_{0}(x) =1,\] \[E_{1}(x) =x,\] \[E_{2}(x) =x^{2}+q\frac{(1-t)}{(1-qt)},\] \[E_{3}(x) =x^{3}+\Big{(}\frac{(1-t)q}{(1-tq)}+\frac{(1-t)q^{2}}{(1-tq^{2})} \frac{(1-t)}{(1-tq)}\Big{)}x+\frac{(1-t)q^{2}}{(1-tq^{2})}x^{-1},\] \[\vdots\]
and
\[P_{0}(x) =1,\] \[P_{1}(x) =x+x^{-1},\] \[P_{2}(x) =(x^{2}+x^{-2})+\frac{(1-q^{2})(1-t)}{(1-q)(1-qt)},\] \[P_{3}(x) =(x^{3}+x^{-3})+\frac{(1-q^{3})(1-t)}{(1-q^{2}t)(1-q)}(x+x^{-1}),\] \[P_{4}(x) =(x^{4}+x^{-4})+\frac{(1-q^{4})(1-t)}{(1-q^{3}t)(1-q)}(x^{2}+x^{- 2})+\frac{(1-q^{4})(1-q^{3})(1-qt)(1-t)}{(1-q^{3}t)(1-q^{2}t)(1-q^{2})(1-q)},\] \[\vdots\]
### Examples of products \(E_{\ell}P_{m}\)
\[E_{1}P_{m}=E_{m+1}+\frac{(1-q^{m})}{(1-tq^{m})}E_{-m+1}=E_{m+1}+ \operatorname{ev}_{m}\Bigl{(}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\Bigr{)}E_{-m+1},\]
\[E_{2}P_{m} =E_{m+2}+\frac{(1-t)}{(1-tq)}\cdot\frac{(1-q^{m})}{(1-tq^{m-1})} \cdot\frac{(1-t^{2}q^{m})}{(1-tq^{m})}E_{m}\] \[\qquad+\frac{(1-q^{m-1})}{(1-tq^{m-1})}\frac{(1-q^{m})}{(1-tq^{m} )}\cdot\frac{(1-t^{2}q^{m-1})}{(1-tq^{m-1})}E_{-m+2}+q\frac{(1-t)}{(1-tq)}\cdot \frac{(1-q^{m})}{(1-tq^{m+1})}E_{-m}\]
\[=E_{m+2}+\genfrac{\{}{\}}{0.0pt}{}{1}{1}_{q,t}\operatorname{ev}_{m}\Bigl{(} \frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2}q^{-1})}\cdot\frac{(1-tY^{-2})}{(1-Y^{-2})} \Bigr{)}E_{m}\] \[\qquad+\operatorname{ev}_{m}\Bigl{(}\frac{(1-t^{-1}Y^{-2}q^{-1} )}{(1-Y^{-2}q^{-1})}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\cdot\frac{(1-tY^{-2}q^ {-1})}{(1-Y^{-2}q^{-1})}\Bigr{)}E_{-m+2}\] \[\qquad+q\genfrac{\{}{\}}{0.0pt}{}{1}{1}_{q,t}\operatorname{ev}_{ m}\Bigl{(}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2}q)}\Bigr{)}E_{-m},\]
\[E_{3}P_{m} =E_{m+3}+\frac{(1-t)(1-q^{2})}{(1-q)(1-tq^{2})}\cdot\frac{(1-q^{m} )}{(1-tq^{m-1})}\cdot\frac{(1-t^{2}q^{m+1})}{(1-tq^{m+1})}\ E_{m+1}\] \[\qquad+\frac{(1-t)}{(1-tq^{2})}\cdot\frac{(1-q^{m-1})(1-q^{m})}{ (1-tq^{m-2})(1-tq^{m-1})}\cdot\frac{(1-t^{2}q^{m-1})(1-t^{2}q^{m})}{(1-tq^{m-1 })(1-tq^{m})}\ E_{m-1}\] \[\qquad+\frac{(1-q^{m-2})(1-q^{m-1})(1-q^{m})}{(1-tq^{m-2})(1-tq^{ m-1})(1-tq^{m})}\cdot\frac{(1-t^{2}q^{m-2})(1-t^{2}q^{m-1})}{(1-tq^{m-2})(1-tq^{m-1 })}\ E_{-m+3}\] \[\qquad+q\frac{(1-t)(1-q^{2})}{(1-q)(1-tq^{2})}\cdot\frac{(1-q^{m -1})(1-q^{m})}{(1-tq^{m})(1-tq^{m+1})}\cdot\frac{(1-t^{2}q^{m})}{(1-tq^{m-1}) }\ E_{-m+1}\] \[\qquad+q^{2}\frac{(1-t)}{(1-tq^{2})}\cdot\frac{(1-q^{m})}{(1-tq^{ m+2})}\ E_{-m-1}\]
\[=E_{m+3}+\genfrac{\{}{\}}{0.0pt}{}{2}{1}_{q,t}\operatorname{ev}_{m}\Bigl{(} \frac{(1-t^{-1}Y^{-2}q)}{(1-Y^{-2}q^{-1})}\cdot\frac{(1-tY^{-2}q)}{(1-Y^{-2}q )}\Bigr{)}E_{m+1}\] \[\qquad+\genfrac{\{}{\}}{0.0pt}{}{2}{2}_{q,t}\operatorname{ev}_{m} \Bigl{(}\frac{(1-t^{-1}Y^{-2}q^{-1})(1-t^{-1}Y^{-2})}{(1-Y^{-2}q^{-2})(1-Y^{-2 }q^{-1})}\cdot\frac{(1-tY^{-2}q^{-1})(1-tY^{-2})}{(1-Y^{-2}q^{-1})(1-Y^{-2})} \Bigr{)}E_{m-1}\] \[\qquad+\operatorname{ev}_{m}\Bigl{(}\frac{(1-t^{-1}Y^{-2}q^{-2})(1 -t^{-1}Y^{-2}q^{-1})(1-t^{-1}Y^{-2})}{(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1})(1-Y^{- 2})}\cdot\frac{(1-tY^{-2}q^{-2})(1-tY^{-2}q^{-1})}{(1-Y^{-2}q^{-2})(1-Y^{-2}q^ {-1})}\Bigr{)}\ E_{-m+3}\] \[\qquad+q\genfrac{\{}{\}}{0.0pt}{}{2}{1}_{q,t}\operatorname{ev}_{ m}\Bigl{(}\frac{(1-t^{-1}Y^{-2}q^{-1})(1-t^{-1}Y^{-2})}{(1-Y^{-2})(1-Y^{-2}q)} \cdot\frac{(1-tY^{-2})}{(1-Y^{-2}q^{-1})}\Bigr{)}\ E_{-m+1}\] \[\qquad+q^{2}\genfrac{\{}{\}}{0.0pt}{}{2}{2}_{q,t}\operatorname{ev}_{ m}\Bigl{(}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2}q^{2})}\Bigr{)}E_{-m-1}.\]
### General formulas for \(E_{\ell}P_{m}\) and \(E_{-\ell}P_{m}\)
The general formula for \(E_{\ell}P_{m}\) with \(\ell\in\mathbb{Z}_{>0}\) is
\[E_{\ell}(x)P_{m}(x)\] \[=\sum_{j=0}^{\ell-1}\genfrac{\{}{\}}{0.0pt}{}{\ell-1}{j}_{q,t} \mathrm{ev}_{m}\Big{(}\frac{(1-t^{-1}Y^{-2}q^{-(j-1)..0})(1-tY^{-2}q^{\ell-2j..\ell-1-j})}{(1-Y^{-2}q^{-j..-1})(1-Y^{-2}q^{\ell-2j..\ell-1-j})}\Big{)}E_{m+ \ell-2j}(x)\] \[\qquad+\sum_{j=0}^{\ell-1}q^{j}\genfrac{\{}{\}}{0.0pt}{}{\ell-1}{ j}_{q,t}\mathrm{ev}_{m}\Big{(}\frac{(t^{-1}Y^{-2}q^{-(\ell-j-1)..0})(1-tY^{-2}q^{-( \ell-2j)..j-1})}{(1-Y^{-2}q^{j-(\ell-j-1)..j})(1-Y^{-2}q^{-(\ell-j-1)..-1})} \Big{)}E_{-m+\ell-2j}(x)\] \[=\sum_{j=0}^{\ell-1}\genfrac{\{}{\}}{0.0pt}{}{\ell-1}{j}_{q,t} \mathrm{ev}_{m}\Big{(}\frac{(t^{-1}Y^{-2}q^{-(j-1)};q)_{j}(tY^{-2}q^{\ell-2j} ;q)_{j}}{(Y^{-2}q^{-j};q)_{j}(Y^{-2}q^{\ell-2j})_{j}}\Big{)}E_{m+\ell-2j}(x)\] \[\qquad+\sum_{j=0}^{\ell-1}q^{j}\genfrac{\{}{\}}{0.0pt}{}{\ell-1}{ j}_{q,t}\mathrm{ev}_{m}\Big{(}\frac{(t^{-1}Y^{-2}q^{-(\ell-j-1)};q)_{\ell-j}(tY^{-2}q^{-( \ell-2j-1)};q)_{\ell-j-1}}{(Y^{-2}q^{-(\ell-2j-1)};q)_{\ell-j}(Y^{-2}q^{-(\ell- j-1)};q)_{\ell-j-1}}\Big{)}E_{-m+\ell-2j}(x)\] \[=\sum_{j=0}^{\ell-1}\mathrm{ev}_{m}(A_{m}^{(\ell)}(Y))E_{m+\ell-2 j}(x)+\sum_{j=0}^{\ell-1}\mathrm{ev}_{m}(B_{j}^{(\ell)}(Y))E_{-m+\ell-2j}(x),\]
where
\[A_{j}^{(\ell)}(Y) =\genfrac{\{}{\}}{0.0pt}{}{\ell-1}{j}_{q,t}\frac{(t^{-1}Y^{-2}q^ {-(j-1)};q)_{j}(tY^{-2}q^{\ell-2j};q)_{j}}{(Y^{-2}q^{-j};q)_{j}(Y^{-2}q^{\ell- 2j})_{j}}\qquad\text{and}\] \[B_{j}^{(\ell)}(Y) =q^{j}\genfrac{\{}{\}}{0.0pt}{}{\ell-1}{j}_{q,t}\frac{(t^{-1}Y^{-2 }q^{-(\ell-j-1)};q)_{\ell-j}(tY^{-2}q^{-(\ell-2j-1)};q)_{\ell-j-1}}{(Y^{-2}q^{- (\ell-2j-1)};q)_{\ell-j}(Y^{-2}q^{-(\ell-j-1)};q)_{\ell-j-1}}.\]
The general formula for \(E_{-\ell}P_{m}\) with \(\ell\in\mathbb{Z}_{\geq 0}\) is
\[E_{-\ell}(x)P_{m}(x)\] \[=\sum_{j=0}^{\ell}\genfrac{\{}{\}}{0.0pt}{}{\ell}{j}_{q,t} \mathrm{ev}_{m}\Big{(}\frac{(t^{-1}Y^{-2}q^{-(j-1)};q)_{j}(tY^{-2}q^{\ell+1-2j} ;q)_{j}}{(Y^{-2}q^{-j};q)_{j}(Y^{-2}q^{\ell+1-2j})_{j}}\Big{)}E_{-m-\ell+2j}(x)\] \[\qquad+\sum_{j=0}^{\ell}t\cdot q^{j}\genfrac{\{}{\}}{0.0pt}{}{ \ell}{j}_{q,t}\mathrm{ev}_{m}\Big{(}\frac{(t^{-1}Y^{-2}q^{-(\ell-j)};q)_{\ell+1 -j}(tY^{-2}q^{-(\ell-2j)};q)_{\ell-j}}{(Y^{-2}q^{-(\ell-2j)};q)_{\ell-j}(Y^{-2} q^{-(\ell-j)};q)_{\ell-j}}\Big{)}E_{m-\ell+2j}(x)\] \[=\sum_{j=0}^{\ell}\mathrm{ev}_{m}(A_{j}^{(\ell+1)}(Y))E_{-m-\ell+2 j}(x)+\sum_{j=0}^{\ell}t\cdot\mathrm{ev}_{m}(B_{j}^{(\ell+1)}(Y))E_{m-\ell+2j}(x).\]
### Examples of products \(E_{-\ell+1}P_{m}\)
\[E_{0}P_{m} =E_{-m}+t\frac{(1-q^{m})}{(1-tq^{m})}E_{m}=E_{-m}+t\cdot\mathrm{ev}_ {m}\Big{(}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\Big{)}E_{m},\]
\[E_{-1}P_{m} =E_{-m-1}+\frac{(1-t)}{(1-tq)}\cdot\frac{(1-q^{m})}{(1-tq^{m-1})} \cdot\frac{(1-t^{2}q^{m})}{(1-tq^{m})}E_{-m+1}\] \[\qquad+t\frac{(1-q^{m-1})}{(1-tq^{m-1})}\frac{(1-q^{m})}{(1-tq^{m })}\cdot\frac{(1-t^{2}q^{m-1})}{(1-tq^{m-1})}E_{m-1}+tq\frac{(1-t)}{(1-tq)} \cdot\frac{(1-q^{m})}{(1-tq^{m+1})}E_{m+1}\]
\[=E_{-m-1}+\genfrac{\{}{\}}{0.0pt}{}{1}{1}_{q,t}\mathrm{ev}_{m} \Big{(}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2}q^{-1})}\cdot\frac{(1-tY^{-2})}{(1-Y^ {-2})}\Big{)}E_{-m+1}\] \[\qquad+t\cdot\mathrm{ev}_{m}\Big{(}\frac{(1-t^{-1}Y^{-2}q^{-1})} {(1-Y^{-2}q^{-1})}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})}\cdot\frac{(1-tY^{-2}q^{ -1})}{(1-Y^{-2}q^{-1})}\Big{)}E_{m-1}\] \[\qquad+t\cdot q\genfrac{\{}{\}}{0.0pt}{}{1}{1}_{q,t}\mathrm{ev} _{m}\Big{(}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2}q)}\Big{)}E_{m+1},\]
\[E_{-2}P_{m} =E_{-m-2}+\frac{(1-t)(1-q^{2})}{(1-q)(1-tq^{2})}\cdot\frac{(1-q^{ m})}{(1-tq^{m-1})}\cdot\frac{(1-t^{2}q^{m+1})}{(1-tq^{m+1})}\ E_{-m}\] \[\qquad+\frac{(1-t)}{(1-tq^{2})}\cdot\frac{(1-q^{m-1})(1-q^{m})}{( 1-tq^{m-2})(1-tq^{m-1})}\cdot\frac{(1-t^{2}q^{m-1})(1-t^{2}q^{m})}{(1-tq^{m-1} )(1-tq^{m})}\ E_{-m+2}\] \[\qquad+t\cdot\frac{(1-q^{m-2})(1-q^{m-1})(1-q^{m})}{(1-tq^{m-2})( 1-tq^{m-1})(1-tq^{m})}\cdot\frac{(1-t^{2}q^{m-2})(1-t^{2}q^{m-1})}{(1-tq^{m-2} )(1-tq^{m-1})}\ E_{m-2}\] \[\qquad+t\cdot q\frac{(1-t)(1-q^{2})}{(1-q)(1-tq^{2})}\cdot\frac{( 1-q^{m-1})(1-q^{m})}{(1-tq^{m-1})(1-tq^{m})}\cdot\frac{(1-t^{2}q^{m})}{(1-tq^ {m+1})}\ E_{m}\] \[\qquad+t\cdot q^{2}\frac{(1-t)}{(1-tq^{2})}\cdot\frac{(1-q^{m})} {(1-tq^{m+2})}\ E_{m+2}\]
\[=E_{-m-2}+\genfrac{\{}{\}}{0.0pt}{}{2}{1}_{q,t}\mathrm{ev}_{m} \Big{(}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2}q^{-1})}\cdot\frac{(1-tY^{-2}q)}{(1-Y^ {-2}q)}\Big{)}E_{-m}\] \[\qquad+\genfrac{\{}{\}}{0.0pt}{}{2}{2}_{q,t}\mathrm{ev}_{m}\Big{(} \frac{(1-t^{-1}Y^{-2}q^{-1})(1-t^{-1}Y^{-2})}{(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1}) }\cdot\frac{(1-tY^{-2}q^{-1})(1-tY^{-2})}{(1-Y^{-2}q^{-1})(1-Y^{-2})}\Big{)}E_ {-m+2}\] \[\qquad+t\cdot\mathrm{ev}_{m}\Big{(}\frac{(1-t^{-1}Y^{-2}q^{-2})( 1-t^{-1}Y^{-2}q^{-1})(1-t^{-1}Y^{-2})}{(1-Y^{-2}tq^{-2})(1-Y^{-2}tq^{-1})(1-Y^ {-2})}\cdot\frac{(1-tY^{-2}q^{-2})(1-tY^{-2}q^{-1})}{(1-Y^{-2}q^{-2})(1-Y^{-2} q^{-1})}\Big{)}\ E_{m-2}\] \[\qquad+t\cdot q\genfrac{\{}{\}}{0.0pt}{}{2}{1}_{q,t}\mathrm{ev}_ {m}\Big{(}\frac{(1-t^{-1}Y^{-2}q^{-1})(1-t^{-1}Y^{-2})}{(1-Y^{-2}q^{-1})(1-Y^{-2 })}\cdot\frac{(1-tY^{-2})}{(1-Y^{-2}q)}\Big{)}\ E_{m}\] \[\qquad+t\cdot q^{2}\genfrac{\{}{\}}{0.0pt}{}{2}{2}_{q,t}\mathrm{ev} _{m}\Big{(}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2}q^{2})}\Big{)}E_{m+2}.\]
### Examples of products \(P_{\ell}P_{m}\)
The general formula is
\[P_{\ell}(x)P_{m}(x)=\sum_{j=0}^{\ell}\mathrm{ev}_{m}\big{(}C_{j}^{(\ell)}(Y) \big{)}P_{m+\ell-2j}(x),\]
where
\[C_{j}^{(\ell)}(Y)=\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\cdot\frac{(1-t^{-1}Y^ {-2}q^{-(j-1)\ldots 0})(1-tY^{-2}q^{\ell-2j\ldots\ell-j-1})}{(1-Y^{-2}q^{\ell-2j+1 \ldots\ell-j})(1-Y^{-2}q^{-j\ldots-1})}.\]
The first few cases are
\[P_{1}P_{m} =P_{m+1}+\frac{(1-q^{m})}{(1-tq^{m})}\frac{(1-t^{2}q^{m-1})}{(1- tq^{m-1})}P_{m-1}\] \[=P_{m+1}+\mathrm{ev}\Big{(}\frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2})} \frac{(1-tY^{-2}q^{-1})}{(1-Y^{-2}q^{-1})}\Big{)}P_{m-1},\]
\[P_{2}P_{m} =P_{m+2}+\frac{(1-q^{2})(1-t)}{(1-tq)(1-q)}\cdot\frac{(1-q^{m})}{ (1-tq^{m+1})}\cdot\frac{(1-t^{2}q^{m})}{(1-tq^{m-1})}P_{m}\] \[\qquad+\frac{(1-q^{m-1})(1-q^{m})}{(1-tq^{m-1})(1-tq^{m})}\cdot \frac{(1-t^{2}q^{m-2})(1-t^{2}q^{m-1})}{(1-tq^{m-2})(1-tq^{m-1})}P_{m-2}\]
\[=P_{m+2}+\genfrac{[}{]}{0.0pt}{}{2}{1}_{q,t}\mathrm{ev}\Big{(} \frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2}q)}\cdot\frac{(1-tY^{-2})}{(1-Y^{-2}q^{-1})} \Big{)}P_{m}\] \[\qquad+\mathrm{ev}\Big{(}\frac{(1-t^{-1}Y^{-2}q^{-1})(1-t^{-1}Y^ {-2})}{(1-Y^{-2}q^{-1})(1-Y^{-2})}\cdot\frac{(1-tY^{-2}q^{-2})(1-tY^{-2}q^{-1} )}{(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1})}\Big{)}P_{m-2},\]
\[P_{3}P_{m} =P_{m+3}+\frac{(1-t)(1-q^{3})}{(1-q)(1-tq^{2})}\cdot\frac{(1-q^{m })}{(1-tq^{m+2})}\cdot\frac{(1-t^{2}q^{m+1})}{(1-tq^{m-1})}\ P_{m+1}\] \[\qquad+\frac{(1-t)(1-q^{3})}{(1-q)(1-tq^{2})}\cdot\frac{(1-q^{m-1 })(1-q^{m})}{(1-tq^{m})(1-tq^{m+1})}\cdot\frac{(1-t^{2}q^{m-1})(1-t^{2}q^{m})}{ (1-tq^{m-2})(1-tq^{m-1})}\ P_{m-1}\] \[\qquad+\frac{(1-q^{m-2})(1-q^{m-1})(1-q^{m})}{(1-tq^{m-2})(1-tq^{ m-1})(1-tq^{m})}\cdot\frac{(1-t^{2}q^{m-3})(1-t^{2}q^{m-2})(1-t^{2}q^{m-1})}{(1-tq^{ m-3})(1-tq^{m-2})(1-tq^{m-1})}P_{m-3}\]
\[=P_{m+3}+\genfrac{[}{]}{0.0pt}{}{3}{1}_{q,t}\mathrm{ev}\Big{(} \frac{(1-t^{-1}Y^{-2})}{(1-Y^{-2}q^{2})}\cdot\frac{(1-tY^{-2}q)}{(1-Y^{-2}q^{- 1})}\Big{)}P_{m+1}\] \[\qquad+\genfrac{[}{]}{0.0pt}{}{3}{2}_{q,t}\mathrm{ev}\Big{(} \frac{(1-t^{-1}Y^{-2}q^{-1})(1-t^{-1}Y^{-2})}{(1-Y^{-2})(1-Y^{-2}q)}\cdot\frac{ (1-tY^{-2}q^{-1})(1-tY^{-2})}{(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1})}\Big{)}P_{m-1}\] \[\qquad+\mathrm{ev}\Big{(}\frac{(1-t^{-1}Y^{-2}q^{-2\ldots 0})}{(1-Y^{-2}q^{- 2\ldots 0})}\cdot\frac{(1-tY^{-2}q^{-3\ldots-1})}{(1-Y^{-2}q^{-3\ldots-1})} \Big{)}P_{m-3}\]
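The first of these product expansions is easy to verify by machine. The following sketch is not part of the source; it assumes the Python library sympy and checks the \(m=2\) instance of the \(P_{1}P_{m}\) rule against the explicit \(P_{1}\), \(P_{2}\), \(P_{3}\) listed earlier in this section:

```python
# Minimal sympy sketch: check P_1(x) P_2(x) = P_3(x) + ev_2(C_1^(1)(Y)) P_1(x),
# with ev_m(C_1^(1)) = (1-q^m)(1-t^2 q^(m-1)) / ((1-t q^m)(1-t q^(m-1))), at m = 2.
import sympy as sp

q, t, x = sp.symbols('q t x')

P1 = x + 1/x
P2 = x**2 + x**(-2) + (1 - q**2)*(1 - t)/((1 - q)*(1 - q*t))
P3 = x**3 + x**(-3) + (1 - q**3)*(1 - t)/((1 - q**2*t)*(1 - q))*(x + 1/x)

m = 2
coeff = (1 - q**m)*(1 - t**2*q**(m - 1))/((1 - t*q**m)*(1 - t*q**(m - 1)))

assert sp.simplify(P1*P2 - P3 - coeff*P1) == 0
```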
### Proof of the \(q\)-\(t\)-binomial formulas for \(E_{\ell}\) and \(P_{\ell}\)
**Proposition 7.1**.: _The electronic Macdonald polynomials are given by_
\[E_{-\ell}(x)=\sum_{j=0}^{\ell}\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}\frac{(1- tq^{j})}{(1-tq^{\ell})}x^{\ell-2j}\qquad\text{and}\qquad E_{\ell}(x)=\sum_{j=0}^{ \ell-1}\genfrac{[}{]}{0.0pt}{}{\ell-1}{j}_{q,t}\frac{q^{\ell-1-j}(1-tq^{j})}{( 1-tq^{\ell-1})}x^{-\ell+2j+2},\]
_and the bosonic Macdonald polynomials are given by_
\[P_{\ell}(x)=\sum_{j=0}^{\ell}\genfrac{[}{]}{0.0pt}{}{\ell}{j}_{q,t}x^{\ell-2j}.\]
Proof.: Following [20, SS6.2], where \(q^{k}=t\),
\[\genfrac{[}{]}{0.0pt}{}{k+a}{b} =\frac{(q;q)_{k+a}}{(q;q)_{b}(q;q)_{k+a-b}}=\frac{(1-q^{k+a-b+1}) \cdots(1-q^{k+a})}{(q;q)_{b}}\] \[=\frac{(1-tq^{a-b+1})\cdots(1-tq^{a})}{(q;q)_{b}}=\frac{(tq^{a-(b -1)};q)_{b}}{(q;q)_{b}}.\]
Then, from [20, (6.2.7)],
\[E_{-m} =\genfrac{[}{]}{0.0pt}{}{k+m}{m}^{-1}\sum_{i+j=m}\genfrac{[}{]}{0.0pt}{}{k+i-1}{i}\genfrac{[}{]}{0.0pt}{}{k+j}{j}x^{i-j}\] \[=\frac{(q;q)_{m}}{(tq^{m-(m-1)};q)_{m}}\sum_{i+j=m}\frac{(tq^{i-1 -(i-1)};q)_{i}}{(q;q)_{i}}\frac{(tq^{j-(j-1)};q)_{j}}{(q;q)_{j}}x^{i-j}\] \[=\frac{(q;q)_{m}}{(tq;q)_{m}}\sum_{j=0}^{m}\frac{(t;q)_{m-j}}{(q; q)_{m-j}}\frac{(tq;q)_{j}}{(q;q)_{j}}x^{m-j-j}\] \[=\frac{(1-t)}{(1-tq^{m})}\frac{(q;q)_{m}}{(t;q)_{m}}\sum_{j=0}^{m }\frac{(t;q)_{m-j}}{(q;q)_{m-j}}\frac{(1-tq^{j})}{(1-t)}\frac{(t;q)_{j}}{(q;q) _{j}}x^{m-2j}\] \[=\sum_{j=0}^{m}\genfrac{[}{]}{0.0pt}{}{m}{j}_{q,t}\frac{(1-tq^{j })}{(1-tq^{m})}x^{m-2j}\]
and, from [20, (6.2.8)],
\[E_{m+1}=\genfrac{[}{]}{0.0pt}{}{k+m}{m}^{-1}\sum_{i+j=m}\genfrac{[}{]}{0.0pt }{}{k+i-1}{i}\genfrac{[}{]}{0.0pt}{}{k+j}{j}q^{i}x^{-i+j+1}=\sum_{j=0}^{m} \genfrac{[}{]}{0.0pt}{}{m}{j}_{q,t}\frac{(1-tq^{j})}{(1-tq^{m})}q^{m-j}x^{-m+2 j+1}\]
so that
\[E_{m}=\sum_{j=0}^{m-1}\genfrac{[}{]}{0.0pt}{}{m-1}{j}_{q,t}\frac{(1-tq^{j})}{ (1-tq^{m-1})}q^{m-1-j}x^{-m+2j+2}.\]
From [11, (6.3.7)],
\[P_{m} =\genfrac{[}{]}{0.0pt}{}{k+m-1}{m}^{-1}\sum_{i+j=m}\genfrac{[}{]}{0.0 pt}{}{k+i-1}{i}\genfrac{[}{]}{0.0pt}{}{k+j-1}{j}x^{i-j}\] \[=\frac{(q;q)_{m}}{(tq^{m-1-(m-1)};q)_{m}}\sum_{i+j=m}\frac{(tq^{i- 1-(i-1)};q)_{i}}{(q;q)_{i}}\frac{(tq^{j-1-(j-1)};q)_{j}}{(q;q)_{j}}x^{i-j}\] \[=\frac{(q;q)_{m}}{(t;q)_{m}}\sum_{j=0}^{m}\frac{(t;q)_{m-j}}{(q;q) _{m-j}}\frac{(t;q)_{j}}{(q;q)_{j}}x^{m-j-j}=\sum_{j=0}^{m}\genfrac{[}{]}{0.0pt}{ }{m}_{q,t}x^{m-2j}.\]
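As a final check of Proposition 7.1 (again not part of the source, and assuming sympy), the \(q\)-\(t\)-binomial sum for \(P_{\ell}\) reproduces the explicit \(P_{2}\) and \(P_{3}\) listed in the examples above:

```python
# Minimal sympy sketch: P_l(x) = sum_j [l, j]_{q,t} x^(l-2j) reproduces the
# explicit P_2 and P_3 given in the examples section.
import sympy as sp

q, t, x = sp.symbols('q t x')

def poch(a, n):
    return sp.prod([1 - a*q**i for i in range(n)])

def qt_binom(l, j):
    return (poch(q, l)/poch(t, l)) / ((poch(q, j)/poch(t, j))*(poch(q, l - j)/poch(t, l - j)))

def P(l):
    return sum(qt_binom(l, j)*x**(l - 2*j) for j in range(l + 1))

P2 = x**2 + x**(-2) + (1 - q**2)*(1 - t)/((1 - q)*(1 - q*t))
P3 = x**3 + x**(-3) + (1 - q**3)*(1 - t)/((1 - q**2*t)*(1 - q))*(x + 1/x)

assert sp.simplify(P(2) - P2) == 0
assert sp.simplify(P(3) - P3) == 0
```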
### Examples of \(E_{\ell}(X)\mathbf{1}_{0}\)
This page provides examples of the identities in (3.29). Since \(E_{1}(X)=X\) and \(E_{-1}(X)=X^{-1}+\frac{(1-t)}{(1-qt)}X\) then
\[E_{1}(X)\mathbf{1}_{0} =X\mathbf{1}_{0}=\tau_{\pi}^{\vee}T_{1}^{-1}\mathbf{1}_{0}=t^{- \frac{1}{2}}\tau_{\pi}^{\vee}\mathbf{1}_{0}=t^{-\frac{1}{2}}\tau_{\pi}^{\vee}E _{0}(X)\mathbf{1}_{0}=t^{-\frac{1}{2}}\eta_{\pi}\mathbf{1}_{0},\] \[E_{-1}(X)\mathbf{1}_{0} =\Big{(}X^{-1}+\frac{(1-t)}{(1-qt)}X\Big{)}\mathbf{1}_{0}=\Big{(} T_{1}\tau_{\pi}^{\vee}+\frac{(1-t)}{(1-tq)}\tau_{\pi}^{\vee}T_{1}^{-1}\Big{)} \mathbf{1}_{0}\] \[=\Big{(}T_{1}+t^{-\frac{1}{2}}\frac{(1-t)}{(1-tq)}\Big{)}\tau_{ \pi}^{\vee}\mathbf{1}_{0}=\Big{(}T_{1}+t^{-\frac{1}{2}}\frac{(1-t)}{(1-tq)} \Big{)}t^{\frac{1}{2}}E_{1}(X)\mathbf{1}_{0}\]
Since \(E_{3}(X)=X^{3}+\Big{(}\frac{(1-t)q}{(1-tq)}+\frac{(1-t)q^{2}}{(1-tq^{2})}\frac{(1-t)}{(1-tq)}\Big{)}X+\frac{(1-t)q^{2}}{(1-tq^{2})}X^{-1}\) then
\[E_{3}(X)\mathbf{1}_{0} =\Big{(}X^{3}+\Big{(}\frac{(1-t)q}{(1-tq)}+\frac{(1-t)q^{2}}{(1-tq ^{2})}\frac{(1-t)}{(1-tq)}\Big{)}X+\frac{(1-t)q^{2}}{(1-tq^{2})}X^{-1}\Big{)} \mathbf{1}_{0}\] \[=\Big{(}\tau_{\pi}^{\vee}T_{1}^{-1}\tau_{\pi}^{\vee}T_{1}^{-1} \tau_{\pi}^{\vee}T_{1}^{-1}+t^{-\frac{1}{2}}\frac{t^{-\frac{1}{2}}(1-t)tq}{(1- tq)}\tau_{\pi}^{\vee}T_{1}^{-1}\] \[\qquad+\frac{t^{-\frac{1}{2}}(1-t)tq^{2}}{(1-tq^{2})}\frac{t^{- \frac{1}{2}}(1-t)}{(1-tq)}\tau_{\pi}^{\vee}T_{1}^{-1}+t^{-\frac{1}{2}}\frac{t^ {-\frac{1}{2}}(1-t)tq^{2}}{(1-tq^{2})}T_{1}\tau_{\pi}^{\vee}\Big{)}\mathbf{1}_{0}\] \[=\Big{(}\tau_{\pi}^{\vee}T_{1}^{-1}\tau_{\pi}^{\vee}T_{1}^{-1} \tau_{\pi}^{\vee}T_{1}^{-1}+t^{-\frac{1}{2}}\frac{t^{-\frac{1}{2}}(1-t)tq}{(1- tq)}\tau_{\pi}^{\vee}T_{1}^{-1}\] \[\qquad+\frac{t^{-\frac{1}{2}}(1-t)tq^{2}}{(1-tq^{2})}\frac{t^{- \frac{1}{2}}(1-t)}{(1-tq)}\tau_{\pi}^{\vee}T_{1}^{-1}+t^{-\frac{1}{2}}\frac{t^ {-\frac{1}{2}}(1-t)tq^{2}}{(1-tq^{2})}(T_{1}^{-1}+(t^{\frac{1}{2}}-t^{-\frac{1 }{2}}))\tau_{\pi}^{\vee}\Big{)}\mathbf{1}_{0}\] \[=\Big{(}\tau_{\pi}^{\vee}T_{1}^{-1}\tau_{\pi}^{\vee}T_{1}^{-1} \tau_{\pi}^{\vee}t^{-\frac{1}{2}}+t^{-\frac{1}{2}}\frac{t^{-\frac{1}{2}}(1-t)tq }{(1-tq)}\tau_{\pi}^{\vee}T_{1}^{-1}\tau_{\pi}^{\vee}\tau_{\pi}^{\vee}\] \[\qquad+t^{-\frac{1}{2}}\frac{t^{-\frac{1}{2}}(1-t)tq^{2}}{(1-tq^{2 })}\frac{t^{-\frac{1}{2}}(1-t)}{(1-tq)}\tau_{\pi}^{\vee}+t^{-\frac{1}{2}} \frac{t^{-\frac{1}{2}}(1-t)tq^{2}}{(1-tq^{2})}T_{1}^{-1}\tau_{\pi}^{\vee}\] \[\qquad-t^{-\frac{1}{2}}\frac{t^{-\frac{1}{2}}(1-t)tq^{2}}{(1-tq^{2 })}\frac{t^{-\frac{1}{2}}(1-t)}{(1-tq)}(1-tq)\tau_{\pi}^{\vee}\Big{)}\mathbf{1}_ {0}\] \[=\Big{(}\tau_{\pi}^{\vee}T_{1}^{-1}\tau_{\pi}^{\vee}T_{1}^{-1} \tau_{\pi}^{\vee}t^{-\frac{1}{2}}+t^{-\frac{1}{2}}\frac{t^{-\frac{1}{2}}(1-t) tq}{(1-tq)}\tau_{\pi}^{\vee}T_{1}^{-1}\tau_{\pi}^{\vee}\tau_{\pi}^{\vee}\] \[\qquad+t^{-\frac{1}{2}}\frac{t^{-\frac{1}{2}}(1-t)tq^{2}}{(1-tq^{2 })}T_{1}^{-1}\tau_{\pi}^{\vee}\] \[\qquad+t^{-\frac{1}{2}}\frac{t^{-\frac{1}{2}}(1-t)tq^{2}}{(1-tq^{2 })}\frac{t^{-\frac{1}{2}}(1-t)}{(1-tq)}tq\tau_{\pi}^{\vee}\Big{)}\mathbf{1}_{0}\] \[=\Big{(}t^{-\frac{1}{2}}\tau_{\pi}^{\vee}T_{1}^{-1}\tau_{\pi}^{ \vee}T_{1}^{-1}\tau_{\pi}^{\vee}+t^{-\frac{1}{2}}\frac{t^{-\frac{1}{2}}(1-t) tq}{(1-tq)}\tau_{\pi}^{\vee}T_{1}^{-1}\tau_{\pi}^{\vee}\tau_{\pi}^{\vee}\] \[\qquad+t^{-\frac{1}{2}}\frac{t^{-\frac{1}{2}}(1-t)tq^{2}}{(1-tq^{2 })}\tau_{\pi}^{\vee}\tau_{\pi}^{\vee}T_{1}^{-1}\tau_{\pi}^{\vee}+t^{-\frac{1}{2 }}\frac{t^{-\frac{1}{2}}(1-t)tq^{2}}{(1-tq^{2})}\frac{t^{-\frac{1}{2}}(1-t) tq}{(1-tq)}\tau_{\pi}^{\vee}\tau_{\pi}^{\vee}\tau_{\pi}^{\vee}\Big{)} \mathbf{1}_{0}\] \[=t^{-\frac{1}{2}}\tau_{\pi}^{\vee}\Big{(}T_{1}^{-1}+\frac{t^{- \frac{1}{2}}(1-t)tq^{2}}{(1-tq^{2})}\Big{)}\tau_{\pi}^{\vee}\Big{(}T_{1}^{-1}+ \frac{t^{-\frac{1}{2}}(1-t)tq}{(1-tq)}\tau_{\pi}^{\vee}\mathbf{1}_{0}.\]
### Examples of the \(E_{\ell}(X)\mathbf{1}_{0}\) expansion
Let
\[c(Y)=\frac{(1-tY^{2})}{(1-Y^{2})}\qquad\text{and}\qquad F_{\ell}(Y)=\frac{(1-t)(1 -tY^{2}q^{\ell})}{(1-tq^{\ell})(1-Y^{2})}.\]
Then
\[E_{1}(X)\mathbf{1}_{0}=t^{-\frac{1}{2}}\eta_{\pi}\mathbf{1}_{0}=t^{-\frac{1}{2 }}\eta_{\pi}\eta_{\text{s}_{1}}\mathbf{1}_{0}=\eta D_{0}^{(0)}(Y)\mathbf{1}_{0},\]
\[E_{2}(X)\mathbf{1}_{0} =t^{-\frac{2}{2}}(\eta c(Y)+\eta_{\pi}F_{1}(Y))\eta\mathbf{1}_{0}\] \[=t^{-\frac{2}{2}}\eta^{2}c(q^{-\frac{1}{2}}Y)\mathbf{1}_{0}+t^{- \frac{2}{2}}F_{1}(q^{-\frac{1}{2}}Y^{-1})\mathbf{1}_{0}\] \[=\eta^{2}t^{-\frac{2}{2}}\frac{(1-tY^{2}q^{-1})}{(1-Y^{2}q^{-1}) }\mathbf{1}_{0}+t^{-\frac{2}{2}}\frac{(1-t)(1-tq^{-1}Y^{-2}q)}{(1-tq)(1-q^{-1 }Y^{-2})}\mathbf{1}_{0}\] \[=\eta^{2}t^{-\frac{2}{2}}\cdot\frac{(1-tY^{2}q^{-1})}{(1-Y^{2}q^ {-1})}\mathbf{1}_{0}+t^{-\frac{2}{2}}\cdot qt\frac{(1-t)(1-t^{-1}Y^{2})}{(1-tq )(1-Y^{2}q)}\mathbf{1}_{0}\] \[=\eta^{2}D_{0}^{(1)}(Y)\mathbf{1}_{0}+D_{1}^{(1)}(Y)\mathbf{1}_{0}\]
and
\[E_{3}(X)\mathbf{1}_{0} =t^{-\frac{1}{2}}(\eta c(Y)+\eta_{\pi}F_{2}(Y))(\eta^{2}D_{0}^{(1 )}(Y)\mathbf{1}_{0}+D_{1}^{(1)}\mathbf{1}_{0})\] \[=\eta^{3}t^{-\frac{1}{2}}c(q^{-2}Y)D_{0}^{(1)}(Y)\mathbf{1}_{0}+ \eta t^{-\frac{1}{2}}c(Y)D_{1}^{(1)}(Y)\mathbf{1}_{0}\] \[\qquad+\eta^{-1}t^{-\frac{1}{2}}F_{2}(q^{-2}Y^{-1})D_{0}^{(1)}(Y^ {-1})\mathbf{1}_{0}+\eta t^{-\frac{1}{2}}F_{2}(Y^{-1})D_{1}^{(1)}(Y^{-1}) \mathbf{1}_{0}\] \[=\eta^{3}t^{-\frac{1}{2}}c(q^{-2}Y)D_{0}^{(1)}(Y)\mathbf{1}_{0}\] \[\qquad+\eta t^{-\frac{1}{2}}(c(Y)D_{1}^{(1)}(Y)+F_{2}(Y^{-1})D_{1 }^{(1)}(Y^{-1}))\mathbf{1}_{0}\] \[\qquad+\eta^{-1}t^{-\frac{1}{2}}F_{2}(q^{-2}Y^{-1})D_{0}^{(1)}(Y^ {-1})\mathbf{1}_{0}\] \[=\eta^{3}t^{-\frac{3}{2}}\frac{(1-tY^{2}q^{-2})(1-tY^{2}q^{-1})}{( 1-Y^{2}q^{-2})(1-Y^{2}q^{-1})}\mathbf{1}_{0}\] \[\qquad+\eta t^{-\frac{1}{2}}\frac{(1-t)(1-q^{2})}{(1-tq^{2})(1-q) }\cdot q\frac{(1-tY^{2})(1-tqY^{-2})}{(1-Y^{2}q)(1-Y^{-2}q)}\mathbf{1}_{0}\] \[\qquad+\eta^{-1}t^{-\frac{3}{2}}\frac{(1-t)}{(1-tq^{2})}\cdot \frac{(1-tY^{-2})(1-tY^{-2}q^{-1})}{(1-Y^{-2}q^{-2})(1-Y^{-2}q^{-1})}\mathbf{1 }_{0}\] \[=\eta^{3}D_{0}^{(2)}(Y)\mathbf{1}_{0}+\eta D_{1}^{(2)}(Y)\mathbf{1 }_{0}+\eta^{-1}D_{2}^{(2)}(Y)\mathbf{1}_{0}.\] |
2301.08790 | On the optimal measurement of conversion gain in the presence of dark
noise | Working from a model of Gaussian pixel noise, we present and unify over
twenty-five years of developments in the statistical analysis of the photon
transfer conversion gain measurement. We then study a two-sample estimator of
the conversion gain that accounts for the general case of non-negligible dark
noise. The moments of this estimator are ill-defined (their integral
representations diverge) and so we propose a method for assigning
pseudomoments, which are shown to agree with actual sample moments under mild
conditions. A definition of optimal sample size pairs for this two-sample
estimator is proposed and used to find approximate optimal sample size pairs
that allow experimenters to achieve a predetermined measurement uncertainty
with as little data as possible. The conditions under which these
approximations hold are also discussed. Design and control of experiment
procedures are developed and used to optimally estimate a per-pixel conversion
gain map of a real image sensor. Experimental results show excellent agreement
with theoretical predictions and are backed up with Monte Carlo simulation. The
per-pixel conversion gain estimates are then applied in a demonstration of
per-pixel read noise estimation of the same image sensor. The results of this
work open the door to a comprehensive pixel-level adaptation of the photon
transfer method. | Aaron Hendrickson, David P. Haefner, Bradley L. Preece | 2023-01-20T20:28:01Z | http://arxiv.org/abs/2301.08790v1 | # On the optimal measurement of conversion gain in the presence of dark noise
###### Abstract
Working from a model of Gaussian pixel noise, we present and unify over twenty-five years of developments in the statistical analysis of the photon transfer conversion gain measurement. We then study a two-sample estimator of the conversion gain that accounts for the general case of non-negligible dark noise. The moments of this estimator are ill-defined (their integral representations diverge) and so we propose a method for assigning pseudomoments, which are shown to agree with actual sample moments under mild conditions. A definition of optimal sample size pairs for this two-sample estimator is proposed and used to find approximate optimal sample size pairs that allow experimenters to achieve a pre-determined measurement uncertainty with as little data as possible. The conditions under which these approximations hold are also discussed. Design and control of experiment procedures are developed and used to optimally estimate a per-pixel conversion gain map of a real image sensor. Experimental results show excellent agreement with theoretical predictions and are backed up with Monte Carlo simulation. The per-pixel conversion gain estimates are then applied in a demonstration of per-pixel read noise estimation of the same image sensor. The results of this work open the door to a comprehensive pixel-level adaptation of the photon transfer method.
ol
## 1 Introduction
Photon Transfer (pt) is a methodology developed in the 1970s to aid in the design, characterization, and optimization of solid state image sensors [1]. Since its inception, pt has evolved to become the standard approach to image sensor characterization for manufacturers and consumers alike, culminating in its use as the basis for the European Machine Vision Association (EMVA) 1288 standard in 2005 [2, 3]. To fully characterize the performance of an image sensor with pt, many performance parameters are measured including, but not limited to, conversion gain, read noise, dynamic range, and quantum efficiency.
Of all performance parameters prescribed by the pt method, the conversion gain, \(g\left(e\text{-}/\text{DN}\right)\), is the most critical as it is the unit conversion constant needed to convert sensor measurements from device specific units of Digital Numbers (dn) into physical units of electrons (\(e\)-). Since units of DN are device specific, it is only after multiplying by \(g\) that DN measurements represent a physical quantity that can be compared between different devices. For this reason, many of the performance parameters measurable by the pt method, e.g. read noise, dynamic range, and quantum efficiency, at some point require multiplying quantities in units of DN by \(g\).
Naturally, \(g\) must be estimated through measurement; thus, the precision and accuracy of its measurement fundamentally limits the precision and accuracy of all measured pt parameters converted to units of electrons by \(g\). To see why this is, suppose \(G\) is an estimator of \(g\) and \(T\) is an estimator of some parameter \(\tau\) with units of DN. Then \(\mathcal{T}=T\times G\) is an estimator of \(\tau\) in units of electrons and we have for the absolute coefficient of variation \(\mathsf{ACV}\mathcal{T}=\sqrt{\mathsf{Var}\mathcal{T}}/|\mathcal{E}\mathcal{T}|\):
\[\mathsf{ACV}^{2}\mathcal{T}=\mathsf{ACV}^{2}T+(\mathsf{ACV}^{2}T)(\mathsf{ ACV}^{2}G)+\mathsf{ACV}^{2}G. \tag{1}\]
It follows that \(\mathsf{ACV}\mathcal{T}\geq\mathsf{ACV}G\) showing that the relative uncertainty in the estimator \(G\) represents a lower bound for the relative uncertainty of all pt measurements converted to units of electrons.
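Equation (1) is easy to illustrate numerically. The sketch below is not from the paper; the distributions and parameter values are arbitrary choices used only to demonstrate the identity for independent \(T\) and \(G\):

```python
# Minimal Monte Carlo sketch of Eq. (1) for independent estimators T and G.
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
T = rng.normal(50.0, 4.0, N)   # hypothetical estimator of tau, in DN
G = rng.normal(2.0, 0.1, N)    # hypothetical estimator of g, in e-/DN
tau_hat = T*G                  # corresponding estimator of tau in electrons

def acv2(x):
    """Squared absolute coefficient of variation, Var(x)/E(x)^2."""
    return x.var()/x.mean()**2

lhs = acv2(tau_hat)
rhs = acv2(T) + acv2(T)*acv2(G) + acv2(G)
print(lhs, rhs)  # the two sides agree up to Monte Carlo error
```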
Given the central role conversion gain plays in the pt method, much work has investigated various estimators for \(g\), their statistical properties, and procedures for performing \(g\)-estimation [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. From this body of research, only a small subset has studied how sample size(s) relate to estimator uncertainty for the purpose of properly designing and
controlling conversion gain measurement experiments [1, 4, 10, 12]. Perhaps of most influence to this correspondence is the work of Beecken & Fossum (1996), who studied a shot-noise-limited estimator under a model of Gaussian noise [4]. When the shot-noise-limited assumption is made under the Gaussian model, statistical analysis of the resulting estimator becomes much more tractable and parameters such as the optimal (minimal) sample size needed to control the estimator's uncertainty can be derived. Of course, results derived from this assumption only hold in the limiting case of shot-noise-limited estimation and are generally not applicable for measuring \(g\) in the presence of dark noise, which is the more general situation. Such a situation can occur when dealing with lower-quality sensors or when attempting to measure conversion gain under low-illumination conditions.
An important consideration in how conversion gain is measured depends on if a particular sensor under test exhibits uniform characteristics, whereby, each pixel in the sensor array is assumed to exhibit identical values of conversion gain, read noise, etc. When a sensor conforms to the uniform assumption, only a single, global, value of each pt parameter needs to be reported to describe the performance of the device. Since most sensors are comprised of many pixels, only a few frames of data are needed to obtain large sample estimates of these parameters. For example, Janesick showed that when working in the shot-noise-limited regime, one can measure the conversion gain to a relative uncertainty of 1% by sampling \(\mathcal{O}(10^{4})\) (approximately 20 000) pixels from just a few frames of image data [1]. When the assumption of sensor uniformity is not held, it is necessary to measure each pt performance parameter on a per-pixel basis, requiring many frames of data. Returning back to the example given by Janesick, the \(\mathcal{O}(10^{4})\) pixels needed to obtain a global estimate of \(g\) within 1% uncertainty now turns into \(\mathcal{O}(10^{4})\) full-resolution image frames needed to measure the conversion gain of each pixel to the same uncertainty.
The requirement of large datasets for per-pixel conversion gain estimation is further compounded in the case of Complementary Metal-Oxide Semiconductor (cmos) Active Pixel Sensors (aps), which not only exhibit per-pixel nonuniformities, but also are generally nonlinear devices. To circumvent the problem of nonlinearity, Janesick _et al._ proposed an extension of pt which requires measuring \(g\) at the low-illumination end of its dynamic range, where the sensor exhibits linear characteristics (see Section 7.3 of [7] and [8]). As sensor dark noise dominates signal noise at low illumination, the number of samples needed to measure \(g\) again dramatically increases. For example, in this correspondence we show that if photon induced signal noise is approximately equal to sensor dark noise, then to measure \(g\), per-pixel, to 1% uncertainty one needs no less than \(\mathcal{O}(10^{5})\) (approximately 180 000) total image frames of data. The dramatic increase in the amount of data needed for per-pixel estimation, especially when measuring \(g\) outside the shot-noise-limited regime, drives the need to find a means for estimating \(g\) with as little data as possible; this is the main goal of this correspondence.
From an experimental perspective, capturing large datasets is not just time consuming but also introduces sensitivity to drift in the sensor and/or light source, which unchecked, will corrupt per-pixel estimates of \(g\). Through utilizing optimal sampling, one is able to measure \(g\) with as few frames as possible, reducing the time to capture data and mitigating drift. As such, our goal here is to develop a general method for optimally estimating \(g\) that holds in the shot-noise-limit as well as the more general case where dark noise is non-negligible.
We will organize this paper by first introducing a Gaussian model of sensor noise along with the basic assumptions under which the model holds (Section 2). We follow with a condensed review of theory and statistical analysis for estimators of the conversion gain in the shot-noise-limited case (Section 3) and general case which accounts for the presence of dark noise (Section 4). Section 5 then studies optimal sample size pairs for the general conversion gain estimator, which is an estimator based on two independent samples. Because analytical expressions for the exact optimal sample size pairs cannot be derived, this section derives approximations and determines the conditions under which these approximations are useful. From these analytical expressions for the approximate optimal sample size pairs, Section 6 describes a method for design and control of experiment for per-pixel conversion gain estimation on a real image sensor. Lastly, Section 7 presents an application to per-pixel conversion gain estimation by performing per-pixel estimation of read noise on the same sensor.
## 2 Sensor noise model
Consider a sensor observing a constant irradiance, monochromatic light source. The expected number of interacting photons, \(\mu_{\gamma}\), received by each pixel per fixed integration time is given by
\[\mu_{\gamma}=\frac{AEt_{\text{exp}}}{h\nu}QE_{\text{int}}, \tag{2}\]
where \(A\left(\text{m}^{2}\right)\) is the pixel area, \(E\left(\text{W}/\text{m}^{2}\right)\) is the irradiance, \(t_{\text{exp}}\left(\text{s}\right)\) is the integration time, \(h\nu\left(\text{J}\right)\) is the quantization constant for photons of frequency \(\nu\left(\text{Hz}\right)\), and \(QE_{\text{int}}\left(-\right)\) is the interacting quantum efficiency representing the probability an incident photon is detected [2].
The actual number of observed interacting photons for any given integration time will vary randomly and is accurately modeled by the Poisson distribution [1]. Assuming the transfer of interacting photons to photoelectrons is one-to-one, variations in the number of photoelectrons generated in each pixel per integration time can therefore be modeled as a Poisson random variable \(\mathscr{P}\sim\mathcal{P}(\mu_{e})\). As the irradiance, which we shall refer to as the illumination level, increases, \(\mu_{e}\rightarrow\infty\) leading to the asymptotic result
\[\frac{\mathscr{P}-\mu_{e}}{\sqrt{\mu_{e}}}\overset{d}{\rightarrow}\mathcal{N}(0,1); \tag{3}\]
however, for even relatively small values of \(\mu_{e}\), say \(\mu_{e}>30\,e\text{-}\), the error in this approximation is generally acceptable for applied purposes and so we can assume \(\mathscr{P}\sim\mathcal{N}(\mu_{e},\mu_{e})\).
Additionally, dark noise present in the sensor is comprised of dark current shot noise and read noise, with the read noise further comprised of several other noise sources, e.g. source follower noise, reset noise, etc., so by the central limit theorem we have the reasonable model \(\mathscr{D}\sim\mathcal{N}(\mu_{\mathscr{D}},\sigma_{\mathscr{D}}^{2})\), where \(\mu_{\mathscr{D}}\) and \(\sigma_{\mathscr{D}}\) represent the sensor bias and dark noise in units of electrons, respectively. We will further assume \(\mathscr{P}\) and \(\mathscr{D}\) are independent. In considering the combined photon induced and dark signal \(\mathscr{P}+\mathscr{D}\), one might be tempted to assume that, so long as \(\mu_{e}\) is large, the approximate normality of \(\mathscr{P}\) implies \(\mathscr{P}+\mathscr{D}\) must also be approximately normal; however, this is not always the case. Once again using \(\mathscr{P}\sim\mathcal{P}(\mu_{e})\), the density of \(\mathscr{P}+\mathscr{D}\) can be formally expressed by the convolution integral \(f_{\mathscr{P}+\mathscr{D}}(x)=\int_{\mathbb{N}_{0}}f_{\mathscr{D}}(x-n)\,\mathrm{d}F_{\mathscr{P}}(n)\) leading to the explicit form (see [9] and
Section 7.2 of [1]):
\[f_{\mathscr{P}+\mathscr{D}}(x)=\sum_{n=0}^{\infty}\phi(x-n;\mu_{\mathscr{D}}, \sigma_{\mathscr{D}})\frac{e^{-\mu_{e}}\mu_{e}^{n}}{n!}, \tag{4}\]
where \(\phi(\cdot;\mu,\sigma)\) is the normal probability density with mean \(\mu\) and standard deviation \(\sigma\). Due to the emerging importance of (4) in the literature [9, 14] we also note that an alternative expression can be obtained by writing \(f_{\mathscr{P}+\mathscr{D}}\) in terms of the exponential square series function [15]
\[f_{\mathscr{P}+\mathscr{D}}(x)=\phi(x;\mu_{\mathscr{D}},\sigma_{\mathscr{D}})e^{-\mu_{e}}E_{sq}(e^{-1/2\sigma_{\mathscr{D}}^{2}},\mu_{e}e^{(x-\mu_{\mathscr{D}})/\sigma_{\mathscr{D}}^{2}},1), \tag{5}\]
where \(E_{sq}(q,r,z)\coloneqq\sum_{n=0}^{\infty}q^{n^{2}}r^{n}z^{n}/n!\). Proposition 5.2 in Schmidt (2017) along with the identities \(e^{iz}=\cos z+i\sin z\) and \(2\cos z=e^{iz}+e^{-iz}\) give
\[E_{sq}(e^{-1/2\sigma_{\mathscr{D}}^{2}},\Omega,1)=\\ 2\int_{0}^{\infty}\phi(t;0,1)\exp(\Omega\cos(t/\sigma_{\mathscr{ D}}))\cos(\Omega\sin(t/\sigma_{\mathscr{D}}))\,\mathrm{d}t, \tag{6}\]
which subsequently provides a novel integral representation for (4).
Figure 1 plots \(f_{\mathscr{P}+\mathscr{D}}\) for \(\mu_{e}=30\,e\)-, \(\mu_{\mathscr{D}}=0\,e\)- and \(\sigma_{\mathscr{D}}=0.3,1.0\,e\)-. Upon inspection, we see that despite \(\mu_{e}\) being large, so that \(\mathscr{P}\) is approximately normal, only the density for \(\mathscr{P}+\mathscr{D}\) corresponding to \(\sigma_{\mathscr{D}}=1.0\,e\)- can be accurately modeled as normal. As such, on top of the restriction \(\mu_{e}>30\,e\)- we will also assume \(\sigma_{\mathscr{D}}>1.0\,e\)- so that we may use the model \(\mathscr{P}+\mathscr{D}\sim\mathcal{N}(\mu_{e}+\mu_{\mathscr{D}},\mu_{e}+\sigma_{\mathscr{D}}^{2})\). This additional assumption excludes photon counting devices such as Deep-Sub-Electron-Read Noise (dsern) image sensors. We do note that in the case where a large number of samples are collected these assumptions can be loosened as the distributions of the sample statistics, e.g. mean and variance, will agree with our model even if \(\mathscr{D}\) and \(\mathscr{P}+\mathscr{D}\) deviate from normality.
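The behaviour shown in Figure 1 is simple to reproduce numerically. The sketch below is not from the paper; the truncation limit and evaluation grid are arbitrary choices. It evaluates the sum in (4) and compares it with the matched normal density \(\mathcal{N}(\mu_{e}+\mu_{\mathscr{D}},\mu_{e}+\sigma_{\mathscr{D}}^{2})\) for the two dark noise levels discussed above:

```python
# Minimal numerical sketch of Eq. (4) versus its normal approximation.
import numpy as np

def phi(x, mu, sigma):
    return np.exp(-0.5*((x - mu)/sigma)**2)/(sigma*np.sqrt(2.0*np.pi))

def f_PD(x, mu_e, mu_D, sigma_D, n_max=200):
    """Truncated sum in Eq. (4): sum_n phi(x-n; mu_D, sigma_D) * Poisson(n; mu_e)."""
    n = np.arange(n_max + 1)
    log_fact = np.array([np.sum(np.log(np.arange(1, k + 1))) for k in n])
    pmf = np.exp(n*np.log(mu_e) - mu_e - log_fact)
    return sum(p*phi(x - k, mu_D, sigma_D) for k, p in zip(n, pmf))

x = np.linspace(0.0, 60.0, 1201)
for sigma_D in (0.3, 1.0):
    exact = f_PD(x, mu_e=30.0, mu_D=0.0, sigma_D=sigma_D)
    normal = phi(x, 30.0, np.sqrt(30.0 + sigma_D**2))
    # the gap is large for sigma_D = 0.3 e- and small for sigma_D = 1.0 e-
    print(sigma_D, np.max(np.abs(exact - normal)))
```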
In addition to being accurate for data from real image sensors, the normal model
\[\mathscr{D}\sim\mathcal{N}(\mu_{\mathscr{D}},\sigma_{\mathscr{D}}^ {2}) \tag{7}\] \[\mathscr{P}+\mathscr{D}\sim\mathcal{N}(\mu_{e}+\mu_{\mathscr{D}}, \mu_{e}+\sigma_{\mathscr{D}}^{2}) \tag{8}\]
is mathematically convenient due to the tractability of normal moments. However, the continuous signals \(\mathscr{D}\) and \(\mathscr{P}+\mathscr{D}\) are never directly observed because they are quantized via an analog-to-digital converter. This quantization step transforms the corresponding continuous density functions into discrete probability mass functions, which distorts the shape of the distributions and thus alters the moments. By further imposing \(g\leq\sigma_{\mathscr{D}}\), the effects of quantization are negligible so that we can reasonably assume a normal model for the quantized signal as well [1, 5].
## 3 Review of gain estimation theory: shot-noise-limited case
Our first goal is to present the major statistical results of conversion gain estimation for the special case of a shot-noise-limited response. The results in this section will serve as a starting point for the general case of gain estimation in Section 4. While we focus on \(g\)-estimation for a single pixel by repeated sampling of the pixel in time, the analysis is equally valid in the case of spatially sampling an array of identical pixels exposed to a uniform light source. To help keep things organized, Table 1 lists key symbols pertaining to shot-noise-limited estimation and their associated formulae. Note that all these symbols are built up from only three fundamental quantities: \(\mu_{e}\)-, \(g\), and \(n\).
### Estimator derivation
A pixel can be modeled as a transfer function \(\mathcal{T}:e\rightarrow\mathrm{DN}\) mapping photoelectrons to a digital number output. In general, each pixel comprising the active sensor array is assumed to have its own unique transfer function. Suppose \(\mathcal{T}(e\text{-})=e\text{-}/g\) and our pixel has zero bias and dark noise so that the dark signal can be formally represented by the degenerate variable \(\mathscr{D}\sim\delta(0)\). Because the only noise in the pixel output will come from photon shot noise we say the pixel exhibits a shot-noise-limited response. If \(P=\mathcal{T}(\mathscr{P})\) is the random variable representing the photon induced output signal in DN, then it is easy to see
\[\mu_{P}:=\mathsf{E}P=\mathsf{E}(\mathscr{P}/g)=\mu_{e}\text{-}/g \tag{9}\]
and
\[\sigma_{P}^{2}\coloneqq\mathsf{Var}P=\mathsf{Var}(\mathscr{P}/g)=\mu_{e} \text{-}/g^{2}. \tag{10}\]
Combining these two results gives us the fundamental photon transfer relation
\[g=\mu_{P}/\sigma_{P}^{2}. \tag{11}\]
This fundamental relation implies a natural estimator for \(g\). Let \(\{P_{1},\ldots,P_{n}\}\) be a sample of \(n\) i.i.d. observations of our pixel exposed to some constant level of incident illumination for a fixed, nonzero integration time. Then we can estimate \(g\) with
\[G=\bar{P}/\hat{P}, \tag{12}\]
where \(\bar{P}=\frac{1}{n}\sum_{k=1}^{n}P_{k}\) and \(\hat{P}=\frac{1}{n-1}\sum_{k=1}^{n}(P_{k}-\bar{P})^{2}\) are the sample mean and sample variance, respectively. Under the normal
| symbol | formula | symbol | formula |
| --- | --- | --- | --- |
| \(\mu_{P}\) | \(\mu_{e}\text{-}/g\) | \(\alpha\) | \((n-1)/2\) |
| \(\sigma_{P}^{2}\) | \(\mu_{e}\text{-}/g^{2}\) | \(\beta\) | \(\alpha/\sigma_{P}^{2}\) |
| \(\mu_{\bar{P}}\) | \(\mu_{P}\) | \(g\) | \(\mu_{P}/\sigma_{P}^{2}\) |
| \(\sigma_{\bar{P}}^{2}\) | \(\sigma_{P}^{2}/n\) | – | – |

Table 1: List of symbols and corresponding formulae associated with shot-noise-limited estimation.
model \(P_{k}\sim\mathcal{N}(\mu_{P},\sigma_{P}^{2})\), \((\bar{P},\hat{P})\) is a complete sufficient statistic of the unknown parameter \((\mu_{P},\sigma_{P}^{2})\) so that \(G\) also happens to be the Uniformly Minimum-Variance Unbiased Estimator (UMVUE) of its expected value [16].
### Historical developments
Statistical analysis of the estimator (12) has been previously conducted by Beecken & Fossum (1996) as well as Janesick (2001) [4, 5]. In both works the moments of \(G\) were approximated with the moments of its first-order Taylor polynomial about \((\mathsf{E}\bar{P},\mathsf{E}\hat{P})=(\mu_{P},\sigma_{P}^{2})\)
\[G\approx g+\frac{g}{\mu_{P}}(\bar{P}-\mu_{P})-\frac{g}{\sigma_{P}^{2}}(\hat{P}-\sigma_{P}^{2}). \tag{13}\]
Using these approximate moments Beecken & Fossum were able to show under the normal model of sensor noise (c.f. Eq. 20 in [4] using \(S_{g}/g\mapsto\mathsf{ACV}G\), \(g\mapsto 1/g\), \(\bar{x}\mapsto\mu_{P}\), \(N\mapsto n\), and \(\sigma/S\mapsto 1\))
\[\mathsf{ACV}^{2}G\approx\frac{2}{n-1}+\frac{1}{n}\frac{1}{\mu_{e}}. \tag{14}\]
For clarity we note that the paper by Beecken & Fossum actually studied the estimator \(G^{-1}=\hat{P}/\bar{P}\), which is the conversion gain in units of \(\mathrm{DN}/e\)-; however, applying their noise model and statistical analysis to \(G\) as given by (12) gives the result in (14). Furthermore, as \(n\) becomes large we may replace \(2/(n-1)\) with \(2/n\), which is the same estimate given by Janesick (c.f. Eq. 2.18 in [5] using \(\sigma_{K}^{2}\mapsto\mathsf{Var}G\), \(N_{pix}\mapsto n\), \(S(\mathrm{DN})\mapsto\mu_{P}\), and \(K\mapsto g\)).
In both works it was noted that at typical illumination levels where \(g\) is measured, (14) is very well approximated by its first term, which happens to be the first-order Taylor approximation of \(\mathsf{ACV}^{2}\hat{P}^{-1}\). In other words, for sufficiently large illumination, we have the approximate relation \(\mathsf{ACV}G\approx\mathsf{ACV}\hat{P}^{-1}\). Such an approximation is useful because it tells us that the number of samples needed to measure \(g\) to a given uncertainty can be approximated by the number of samples needed to measure \(1/\sigma_{P}^{2}\) to the same uncertainty. This is a key insight that will appear several more times throughout this correspondence. Using the high illumination approximation \(\mathsf{ACV}G\approx\sqrt{2/n}\) [c.f. Eq. 6.12, 1] we subsequently obtain Janesick's approximation for the optimal (minimal) number of samples needed to estimate \(g\) to a desired relative uncertainty \(\mathsf{acv}_{0}\):
\[n^{\mathrm{opt}}\approx\frac{2}{\mathsf{acv}_{0}^{2}}. \tag{15}\]
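As a quick numerical check of these approximations, the sketch below (illustrative and not part of Codes 1-2; the parameter values are arbitrary assumptions) simulates a shot-noise-limited pixel under the normal model, forms the estimator (12) many times, and compares the observed relative uncertainty against (14):

```python
import numpy as np

rng = np.random.default_rng(1)

mu_e, g, n = 1000.0, 2.0, 200          # assumed mean signal (e-), gain (e-/DN), sample size
mu_P, var_P = mu_e / g, mu_e / g**2    # photon-induced mean and variance in DN, Eqs. (9)-(10)

trials = 20_000
samples = rng.normal(mu_P, np.sqrt(var_P), size=(trials, n))
G = samples.mean(axis=1) / samples.var(axis=1, ddof=1)    # estimator (12) for each trial

acv_mc = G.std() / abs(G.mean())
acv_approx = np.sqrt(2.0 / (n - 1) + 1.0 / (n * mu_e))    # first-order result (14)
print(f"Monte Carlo ACV of G:         {acv_mc:.4f}")
print(f"Approximation from Eq. (14):  {acv_approx:.4f}")
```

The two values agree to within a few percent, with the small residual gap accounted for by the exact expression (21) derived in the next subsection.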
### Further developments
Statistical analysis of the shot-noise-limited estimator in (12) yields tractable results for the density function, moments, and optimal sample size without the need to invoke approximate methods. For a normal model, we have for the distributions of the photon induced signal and its sample statistics: \(P_{k}\sim\mathcal{N}(\mu_{P},\sigma_{P}^{2})\), \(\bar{P}\sim\mathcal{N}(\mu_{\bar{P}},\sigma_{\bar{P}}^{2})\), and \(\hat{P}\sim\mathcal{G}(\alpha,\beta)\), with the latter being a gamma variable parameterized in terms of shape \(\alpha\) and rate \(\beta\) (see Table 1 for parameter formulae). Since the \(P_{k}\) are normal, \(\bar{P}\) and \(\hat{P}\) are also independent. For the density function we use change of variables to write \(f_{G}(g)=\int_{0}^{\infty}t\phi(gt;\mu_{\bar{P}},\sigma_{\bar{P}})f_{\hat{P}}(t)\,\mathrm{d}t\), which after substituting \(u=|g|t/\sigma_{\bar{P}}\) gives
\[f_{G}(g)=\frac{\alpha}{|g|}\left(\frac{\beta\sigma_{\bar{P}}}{|g|}\right)^{\alpha}\frac{e^{z^{2}(g)/4-\mu_{\bar{P}}^{2}/(2\sigma_{\bar{P}}^{2})}}{\sqrt{2\pi}}D_{-\alpha-1}(z(g)). \tag{16}\]
Here, \(z(g)=\beta\sigma_{\bar{P}}/|g|-\mu_{\bar{P}}\operatorname{sign}(g)/\sigma_{\bar{P}}\) and \(D_{\nu}(z)\coloneqq\frac{e^{-z^{2}/4}}{\Gamma(-\nu)}\int_{0}^{\infty}t^{-\nu-1}e^{-t^{2}/2-zt}\,\mathrm{d}t\), which is the parabolic cylinder function.
As for the moments of \(G\) we have by the independence of \(\bar{P}\) and \(\hat{P}\)
\[\mathsf{E}G^{k}=(\mathsf{E}\bar{P}^{k})\mathsf{E}\hat{P}^{-k}. \tag{17}\]
The moments of \(\bar{P}\) are easily found by comparing the generating function \(e^{2zt-t^{2}}=\sum_{n=0}^{\infty}H_{n}(z)t^{n}/n!\) to the moment generating function of \(\bar{P}\) yielding
\[\mathsf{E}\bar{P}^{k}=(i\sigma_{\bar{P}}/\sqrt{2})^{k}H_{k}\left(-i\frac{\mu_{\bar{P}}}{\sqrt{2}\sigma_{\bar{P}}}\right), \tag{18}\]
where \(H_{k}(z)\coloneqq(2z-\partial_{z})^{k}\cdot 1\) denotes the \(k\)th degree Hermite polynomial and \(i\) the imaginary unit. Likewise, we have for the moments of \(\hat{P}^{-1}\)
\[\mathsf{E}\hat{P}^{-k}=\beta^{k}(\alpha)_{-k} \tag{19}\]
with \((s)_{n}\coloneqq\Gamma(s+n)/\Gamma(s)\) denoting the Pochhammer symbol.
From here, the approximate results of Beecken, Fossum, and Janesick can be derived rigorously from an exact expression for \(\mathsf{ACV}G\). The following lemma will aid us in this goal and also be used extensively throughout the rest of this work. All proofs can be found in Section 9.
**Lemma 1**.: _Let \(T=XY\). If \(X\) and \(Y\) are independent then_
\[\mathsf{ACV}^{2}T=\mathsf{ACV}^{2}X+(\mathsf{ACV}^{2}X)(\mathsf{ACV}^{2}Y)+ \mathsf{ACV}^{2}Y, \tag{20}\]
_with \(\mathsf{ACV}T\coloneqq\sqrt{\mathsf{Var}T}/|\mathsf{ET}|\)._
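Lemma 1 is purely a statement about products of independent random variables and is easy to sanity-check numerically. The following short sketch (an illustration with arbitrarily chosen distributions, not tied to any sensor model) compares both sides of (20) on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)

def acv(s):
    """Absolute coefficient of variation of a sample."""
    return s.std() / abs(s.mean())

# Independent factors with positive means (arbitrary illustrative choices)
x = rng.normal(10.0, 1.5, size=1_000_000)
y = rng.gamma(shape=50.0, scale=0.2, size=1_000_000)
t = x * y

lhs = acv(t) ** 2
rhs = acv(x) ** 2 + acv(x) ** 2 * acv(y) ** 2 + acv(y) ** 2
print(f"ACV^2(XY) from the sample:    {lhs:.6f}")
print(f"Right-hand side of Lemma 1:   {rhs:.6f}")
```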
With the help of Lemma 1 and the moment expressions given above we deduce the exact expression (c.f. (14))
\[\mathsf{ACV}^{2}G=\frac{2}{n-5}+\frac{2}{n(n-5)}\frac{1}{\mu_{e}}+\frac{1}{n} \frac{1}{\mu_{e}}. \tag{21}\]
Setting \(\mathsf{ACV}^{2}G=\mathsf{acv}_{0}^{2}\) yields a quadratic equation in \(n\), which upon solving gives the optimal sample size needed to measure \(g\) to a desired relative uncertainty \(\mathsf{acv}_{0}\):
\[n^{\mathrm{opt}}=\frac{2+5\mathsf{acv}_{0}^{2}+\frac{1}{\mu_{e}}+\left((2+5 \mathsf{acv}_{0}^{2}+\frac{1}{\mu_{e}})^{2}-12\frac{\mathsf{acv}_{0}^{2}}{\mu_{ e}}\right)^{1/2}}{2\mathsf{acv}_{0}^{2}}. \tag{22}\]
Unfortunately, this exact expression is not of great use in practice because it depends on the unknown quantity \(\mu_{e}\)-, which also cannot be directly measured without _a priori_ knowledge of \(g\). To obtain an approximation that is independent of \(\mu_{e}\), we first consider the following result showing that \(\mathsf{ACV}G\) is dominated by \(\mathsf{ACV}\hat{P}^{-1}\) at high-illumination.
**Theorem 1**.: _Let \(G\) be as given in (12). As illumination increases, \(\mu_{e}\to\infty\) and_
\[\mathsf{ACV}G=\mathsf{ACV}\hat{P}^{-1}\left(1+\frac{n-3}{4n}\frac{1}{\mu_{e}} +\mathcal{O}(\mu_{e}^{-2})\right), \tag{23}\]
_with \(\mathsf{ACV}\hat{P}^{-1}=\sqrt{2/(n-5)}\)._
Theorem 1 confirms the observations of [4, 5] in that
\[\mathsf{ACV}G\sim\mathsf{ACV}\hat{P}^{-1} \tag{24}\]
at high-illumination. Because of this finding, the optimal sample size for \(G\) can be approximated by the optimal sample size for \(\hat{P}^{-1}\) when restricted to high-illumination conditions. Setting
\(\mathsf{ACV}^{2}\hat{P}^{-1}=\mathsf{acv}_{0}^{2}\) yields a linear equation in \(n\), which upon solving for \(n\) subsequently gives us the high-illumination, asymptotic approximation for the optimal sample size of \(G\)
\[n^{\text{opt}}\sim\frac{2}{\mathsf{acv}_{0}^{2}}+5,\quad\mu_{e}\to\infty. \tag{25}\]
For example, choosing a desired relative uncertainty of \(1\%\) we have \(\mathsf{acv}_{0}=0.01\) and \(n^{\text{opt}}\approx 20\,005\), which agrees with Janesick's approximation of \(n^{\text{opt}}\approx 20\,000\) as given by (15). Unfortunately, all of the results in this section break down when sensor dark noise is non-negligible and thus have limited applications. We are now ready to move onto the more general case.
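For readers who wish to reproduce these numbers, the following sketch (illustrative; the \(\mu_{e}\) values are assumptions) evaluates the exact optimal sample size (22) against the high-illumination approximation (25):

```python
import numpy as np

def n_opt_exact(acv0, mu_e):
    """Optimal sample size from the exact expression (22)."""
    a = 2 + 5 * acv0**2 + 1 / mu_e
    return (a + np.sqrt(a**2 - 12 * acv0**2 / mu_e)) / (2 * acv0**2)

def n_opt_asymptotic(acv0):
    """High-illumination approximation (25)."""
    return 2 / acv0**2 + 5

acv0 = 0.01
for mu_e in (100.0, 1_000.0, 10_000.0):
    print(f"mu_e = {mu_e:8.0f} e-:  exact n_opt = {n_opt_exact(acv0, mu_e):9.1f},"
          f"  asymptotic n_opt = {n_opt_asymptotic(acv0):9.1f}")
```

Even at a modest \(\mu_{e}=100\,e\)- the two values differ by less than one percent, which is why the simpler expression (25) suffices in practice.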
## 4 Review of gain estimation theory: general case
As was the case in the previous section, general gain estimation requires the use of many symbols that can be combined and manipulated. To stay organized, Table 2 lists many of the key symbols used along with their corresponding formulae. Note that all of these symbols are constructed from six fundamental quantities: \(\mu_{e}\), \(\mu_{\mathscr{D}}\), \(\sigma_{\mathscr{D}}^{2}\), \(g\), \(n_{1}\), and \(n_{2}\).
### Estimator derivation
We now consider the more general case where the pixel exhibits both a bias and non-negligible dark noise: \(\mathscr{D}\sim\mathcal{N}(\mu_{\mathscr{D}},\sigma_{\mathscr{D}}^{2})\) with \(\mu_{\mathscr{D}},\sigma_{\mathscr{D}}\neq 0\). We again note that in general, each pixel in the active sensor array will exhibit unique values of \(\mu_{\mathscr{D}}\) and \(\sigma_{\mathscr{D}}\). By linearity of the transfer function, the digital output of a pixel in the absence of illumination for some fixed, nonzero integration time is \(D=\mathcal{T}(\mathscr{D})\) with
\[\mu_{D}\coloneqq\mathsf{E}D=\mu_{\mathscr{D}}/g \tag{26}\]
and
\[\sigma_{D}^{2}\coloneqq\mathsf{Var}D=\sigma_{\mathscr{D}}^{2}/g^{2}. \tag{27}\]
Likewise, the digital output of the same pixel exposed to incident illumination for the same fixed, nonzero integration time is \(X=\mathcal{T}(\mathscr{P}+\mathscr{D})\) with mean \(\mu_{P+D}=\mu_{P}+\mu_{D}\) and variance \(\sigma_{P+D}^{2}=\sigma_{P}^{2}+\sigma_{D}^{2}\). Eliminating the dark signal contributions yields the modified photon transfer relation
\[g=\frac{\mu_{P+D}-\mu_{D}}{\sigma_{P+D}^{2}-\sigma_{D}^{2}}, \tag{30}\]
which suggests the general (full) estimator
\[G=\frac{\bar{X}-\bar{Y}}{\hat{X}-\hat{Y}}=\frac{\bar{P}}{\hat{P}}, \tag{31}\]
where \(\{X_{1},\ldots,X_{n_{1}}\}\) and \(\{Y_{1},\ldots,Y_{n_{2}}\}\) are i.i.d. samples of the illuminated and dark pixel output, respectively, and \(\bar{P}=\bar{X}-\bar{Y}\) and \(\hat{P}=\hat{X}-\hat{Y}\) denote the differences of the corresponding sample means and sample variances. Historically, the sample sizes used with this estimator were chosen with the aid of the shot-noise-limited approximation \(\mathsf{ACV}G\approx\sqrt{2/n}\) discussed in Section 3.
Following Janesick's work, Hendrickson (2017) made the first attempt to draw exact statistical conclusions about the full estimator (31) by applying the normal model \(X_{k}\sim\mathcal{N}(\mu_{P+D},\sigma_{P+D}^{2})\) and \(Y_{k}\sim\mathcal{N}(\mu_{D},\sigma_{D}^{2})\) to derive the density of \(G\) in the form of the Centralized Inverse-Fano (CIF) distribution
\[f_{G}(g)=\int_{-\infty}^{\infty}|t|\phi(gt;\mu_{\bar{P}},\sigma_{\bar{P}})f_{\hat{P}}(t)\,\mathrm{d}t, \tag{35}\]
where
\[f_{\hat{P}}(t)=C\times\begin{cases}\frac{e^{\beta_{2}t}}{\Gamma(\alpha_{2})}U\left(1-\alpha_{2},2-\alpha_{1}-\alpha_{2},-(\beta_{1}+\beta_{2})t\right)&t<0\\ \frac{e^{-\beta_{1}t}}{\Gamma(\alpha_{1})}U\left(1-\alpha_{1},2-\alpha_{1}-\alpha_{2},(\beta_{1}+\beta_{2})t\right)&t\geq 0,\end{cases} \tag{36}\]
| symbol | formula | symbol | formula |
| --- | --- | --- | --- |
| \(\mu_{D}\) | \(\mu_{\mathscr{D}}/g\) | \(\mu_{P}\) | \(\mu_{e}\text{-}/g\) |
| \(\sigma_{D}^{2}\) | \(\sigma_{\mathscr{D}}^{2}/g^{2}\) | \(\sigma_{P}^{2}\) | \(\mu_{e}\text{-}/g^{2}\) |
| \(\mu_{P+D}\) | \(\mu_{P}+\mu_{D}\) | \(\alpha_{1}\) | \((n_{1}-1)/2\) |
| \(\sigma_{P+D}^{2}\) | \(\sigma_{P}^{2}+\sigma_{D}^{2}\) | \(\alpha_{2}\) | \((n_{2}-1)/2\) |
| \(\mu_{\bar{P}}\) | \(\mu_{P}\) | \(\beta_{1}\) | \(\alpha_{1}/\sigma_{P+D}^{2}\) |
| \(\sigma_{\bar{P}}^{2}\) | \(\sigma_{P+D}^{2}/n_{1}+\sigma_{D}^{2}/n_{2}\) | \(\beta_{2}\) | \(\alpha_{2}/\sigma_{D}^{2}\) |
| \(\mu_{\hat{P}}\) | \(\sigma_{P}^{2}\) | \(g\) | \(\frac{\mu_{P+D}-\mu_{D}}{\sigma_{P+D}^{2}-\sigma_{D}^{2}}\) |
| \(\sigma_{\hat{P}}^{2}\) | \(\alpha_{1}/\beta_{1}^{2}+\alpha_{2}/\beta_{2}^{2}\) | – | – |

Table 2: **List of symbols and corresponding formulae associated with general estimation.**
with \(C=\beta_{1}^{\alpha_{1}}\beta_{2}^{\alpha_{2}}(\beta_{1}+\beta_{2})^{1-\alpha_{1}-\alpha_{2}}\), is the density of the gamma-difference distribution [17, 18, 19, 20], and \(U(a,b,z)\) is Kummer's confluent hypergeometric function of the 2nd-kind [10]. Under this model it was shown that \(G\) has ill-defined moments, which is due to the fact that the tails of the probability density \(f_{G}\) decay like those of the Cauchy density [21]. For this reason, Hendrickson (2019) [11] extended the notion of statistical moments in the same manner as Peng [22, 23] by deriving the first moment of \(G\) in the sense of the Cauchy principal value \(\mathsf{E}_{P}G=\lim_{R\to\infty}\int_{-R}^{R}tf_{G}(t)\,\mathrm{d}t\)
\[\mathsf{E}_{P}G=g\frac{(\frac{\alpha_{1}}{\beta_{1}}-\frac{\alpha_{2}}{\beta_{2}})\beta_{1}^{\alpha_{1}}\beta_{2}^{\alpha_{2}}(\beta_{1}+\beta_{2})^{1-\alpha_{1}-\alpha_{2}}}{(\alpha_{1}+\alpha_{2}-1)\,\mathsf{B}(\alpha_{1},\alpha_{2})}\bigg{(}\\ \psi(\alpha_{1})-\log\beta_{1}+\frac{(\alpha_{1}-1)\beta_{2}}{\alpha_{2}\beta_{1}}{}_{3}F_{2}\left(\begin{array}{c}2-\alpha_{1},1,1\\ 1+\alpha_{2},2\end{array};-\frac{\beta_{2}}{\beta_{1}}\right)\\ -\psi(\alpha_{2})+\log\beta_{2}-\frac{(\alpha_{2}-1)\beta_{1}}{\alpha_{1}\beta_{2}}{}_{3}F_{2}\left(\begin{array}{c}2-\alpha_{2},1,1\\ 1+\alpha_{1},2\end{array};-\frac{\beta_{1}}{\beta_{2}}\right)\bigg{)}. \tag{37}\]
Here, \(\log z\) is the natural logarithm, \(\psi(z)\) is the digamma function, \(\mathsf{B}(\alpha,\beta)\) is the beta function, and \({}_{p}F_{q}(\mathbf{a};\mathbf{b};z)\) is the generalized hypergeometric function. It was shown that \(\mathsf{E}_{P}G\) agrees with actual sample means of conversion gain data when \(\mathsf{P}(\hat{P}\leq 0)\approx 0\). Additionally, Hendrickson (2021) showed that no unbiased, finite variance estimator of the modified gain relation (30) exists for all possible parameters under the normal model of noise despite \((\bar{Y},\hat{Y},\bar{X},\hat{X})\) constituting a complete sufficient statistic for the parameter \((\mu_{D},\sigma_{D}^{2},\mu_{P+D},\sigma_{P+D}^{2})\) [Theorem 14, 12].
### Further developments
The nonexistence of \(G\)'s moments lies in the fact that \(\hat{P}\) has positive and continuous probability density at zero, which manifests as non-integrable singularities in the integral representations of these moments. To assign some notion of higher-order moments to \(G\) we may take advantage of the independence of \(\bar{P}\) and \(\hat{P}\) and the concept of regularization to define pseudomoments as
\[\mathsf{E}_{P}G^{k}\coloneqq(\mathsf{E}\hat{P}^{k})\mathsf{E}_{P}\hat{P}^{-k}, \quad k\in\mathbb{N}, \tag{38}\]
where \(\mathcal{P}\) denotes the principal-value regularization
\[\mathsf{E}_{P}\hat{P}^{-k}\coloneqq\lim_{\epsilon\to 0^{+}}\int_{\mathbb{R}\setminus(-\epsilon,\epsilon)}\frac{f_{\hat{P}}(t)}{t^{k}}\,\mathrm{d}t-h_{k}(\epsilon), \tag{39}\]
with \(h_{1}(\epsilon)=0\) and
\[h_{k}(\epsilon)=\sum_{\ell=0}^{k-2}\frac{f_{\hat{P}}^{(\ell)}(0)}{\ell!}\left(\frac{1-(-1)^{k-\ell-1}}{(k-\ell-1)\epsilon^{k-\ell-1}}\right) \tag{40}\]
for \(k\geq 2\) [24, 25]. Alternative definitions and methods also exist for evaluating the pseudomoments. For example, we may express them via a moment generating function as
\[\mathsf{E}_{P}\hat{P}^{-k}=\frac{1}{(k-1)!}\partial_{\omega}^{k-1}\mathcal{H}[f_{\hat{P}}](\omega)\Big{|}_{\omega=0} \tag{41}\]
with
\[\mathcal{H}[f_{\hat{P}}](\omega)=\lim_{\epsilon\to 0^{+}}\int_{\mathbb{R}\setminus(\omega-\epsilon,\omega+\epsilon)}\frac{f_{\hat{P}}(t)}{t-\omega}\,\mathrm{d}t, \tag{42}\]
denoting the Hilbert transform of the density \(f_{\hat{P}}\) [26]. Regardless of the method used to evaluate them, the subscript \(\mathcal{P}\) is there to remind the reader that these moments are regularized and do not exist in the traditional sense because the integrals representing them diverge.
Evaluating the pseudomoments proves to be quite challenging. As pointed out earlier, the work in [11] was able to obtain an expression for the special case \(k=1\), as given in (37), through the use of complex methods involving contour integration. For \(k\geq 2\), one could implement (39) numerically; however, when \(n_{1}\) and \(n_{2}\) become even moderately large the evaluation of \(f_{\hat{P}}\) becomes unstable, causing numerical computation to fail. In such a case we may approximate the distribution of \(\hat{P}\) by a normal one with mean \(\mu_{\hat{P}}\) and variance \(\sigma_{\hat{P}}^{2}\) so that we may use the normal approximation given by Quenouille [27]
\[\mathcal{H}[f_{\hat{P}}](\omega)\sim\frac{\sqrt{2}}{\sigma_{\hat{P}}}\mathcal{D}\left(\frac{\mu_{\hat{P}}-\omega}{\sqrt{2}\sigma_{\hat{P}}}\right). \tag{43}\]
Here, \(\mathcal{D}(z)\coloneqq e^{-z^{2}}\int_{0}^{z}e^{t^{2}}\,\mathrm{d}t\) denotes the Dawson integral. Higher-order derivatives of the Dawson integral are given by Barakat (1971), which allow us to deduce a closed-form, asymptotic approximation for the pseudomoments of \(\hat{P}^{-1}\) given large \(n_{1}\) and \(n_{2}\) [28]:
\[\mathsf{E}_{P}\hat{P}^{-k}\sim\frac{2}{(k-1)!}\frac{1}{(\sqrt{2}\sigma_{\hat{P }})^{k}}(H_{k-1}(z_{\hat{P}})\mathcal{D}(z_{\hat{P}})-P_{k-2}(z_{\hat{P}})), \tag{44}\]
where \(z_{\hat{P}}=\mu_{\hat{P}}/(\sqrt{2}\sigma_{\hat{P}})\), and \(P_{n}\) is a polynomial satisfying \(P_{n}(t)=2tP_{n-1}(t)-2nP_{n-2}(t)\) with \(P_{-1}(t)=0\) and \(P_{0}(t)=1\). Combining this result with the moments of \(\bar{P}\) subsequently gives us the large sample size asymptotic approximation of the pseudomoments for \(G\)
\[\mathsf{E}_{P}G^{k}\sim\frac{2}{(k-1)!}\left(\frac{i\sigma_{\bar{P}}}{2\sigma_{\hat{P}}}\right)^{k}H_{k}(-iz_{\bar{P}})\\ \times(H_{k-1}(z_{\hat{P}})\mathcal{D}(z_{\hat{P}})-P_{k-2}(z_{\hat{P}})), \tag{45}\]
where \(z_{\bar{P}}=\mu_{\bar{P}}/(\sqrt{2}\sigma_{\bar{P}})\). For example, in the case \(k=1\) we obtain the large sample size (large \(\alpha\)) asymptotic approximation of (37)
\[\mathsf{E}_{P}G\sim g\frac{\sqrt{2}\left(\frac{\alpha_{1}}{\beta_{1}}-\frac{\alpha_{2}}{\beta_{2}}\right)}{\sqrt{\frac{\alpha_{1}}{\beta_{1}^{2}}+\frac{\alpha_{2}}{\beta_{2}^{2}}}}\mathcal{D}\left(\frac{\frac{\alpha_{1}}{\beta_{1}}-\frac{\alpha_{2}}{\beta_{2}}}{\sqrt{2}\sqrt{\frac{\alpha_{1}}{\beta_{1}^{2}}+\frac{\alpha_{2}}{\beta_{2}^{2}}}}\right). \tag{46}\]
As a verification of the accuracy of these approximations, using the parameters \(\mu_{P}=9\,\mathrm{DN}\), \(\sigma_{P+D}^{2}=10\,\mathrm{DN}^{2}\), \(\sigma_{D}^{2}=1\,\mathrm{DN}^{2}\) (so that \(g=1\)), \(n_{1}=101\), and \(n_{2}=51\) we calculated \(\mathsf{E}_{P}G\) using the exact expression (37) as well as the normal approximation (46), yielding \(\mathsf{E}_{P}G=1.02604\) and \(\mathsf{E}_{P}G\approx 1.02738\), respectively. This resulted in only a \(0.13\%\) approximation error, showing that the normal approximation in (45) will approximate \(\mathsf{E}_{P}G\), as well as the higher-order pseudomoments, well for the chosen sample sizes. We typically deal with much larger sample sizes in photon transfer conversion gain estimation and therefore expect (46) to be a good approximation to the exact pseudomoments of \(G\) in most scenarios.
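The Dawson-integral approximation (46) is straightforward to evaluate with standard numerical libraries. The sketch below (illustrative; it simply re-evaluates (46) for the parameters quoted above using SciPy's Dawson function) reproduces the approximate value reported in the text:

```python
import numpy as np
from scipy.special import dawsn

# Parameters quoted in the text: mu_P = 9 DN, sigma^2_{P+D} = 10 DN^2,
# sigma^2_D = 1 DN^2 (so g = 1), n1 = 101, n2 = 51
mu_P, var_PD, var_D, n1, n2 = 9.0, 10.0, 1.0, 101, 51

a1, a2 = (n1 - 1) / 2, (n2 - 1) / 2        # shape parameters (Table 2)
b1, b2 = a1 / var_PD, a2 / var_D           # rate parameters (Table 2)

mu_hat = a1 / b1 - a2 / b2                 # E P-hat = sigma_P^2
sd_hat = np.sqrt(a1 / b1**2 + a2 / b2**2)  # standard deviation of P-hat

g = mu_P / mu_hat
z = mu_hat / (np.sqrt(2.0) * sd_hat)
EG_approx = g * np.sqrt(2.0) * (mu_hat / sd_hat) * dawsn(z)   # Eq. (46)
print(f"Dawson approximation of E_P G: {EG_approx:.5f}")      # ~1.027
```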
Through the use of pseudomomments we can subsequently derive other quantities of interest for \(G\) like the (pseudo) absolute coefficient of variation
\[\mathsf{ACV}_{P}^{2}G=\mathsf{ACV}_{P}^{2}\hat{P}^{-1}+(\mathsf{ACV}_{P}^{2}\hat{P}^{-1})(\mathsf{ACV}^{2}\bar{P})+\mathsf{ACV}^{2}\bar{P} \tag{47}\]
where
\[\mathsf{ACV}_{P}^{2}\hat{P}^{-1}=\frac{\mathsf{E}_{P}\hat{P}^{-2}-(\mathsf{E}_{P}\hat{P}^{-1})^{2}}{(\mathsf{E}_{P}\hat{P}^{-1})^{2}} \tag{48}\]
and \(\mathsf{ACV}^{2}\bar{P}=\mathsf{Var}\bar{P}/(\mathsf{E}\bar{P})^{2}\) is defined in the traditional sense. Likewise, we have the (pseudo) absolute relative bias
\[\mathsf{ARB}_{P}G=\mathsf{ARB}_{P}\hat{P}^{-1} \tag{49}\]
with
\[\mathsf{ARB}_{P}\hat{P}^{-1}=\left|\frac{\mathsf{E}_{P}\hat{P}^{-1}-(\mathsf{E }\hat{P})^{-1}}{(\mathsf{E}\hat{P})^{-1}}\right|. \tag{50}\]
What remains is to address why these pseudomoments are useful for describing moments of actual data. The motivation for introducing pseudomoments was to assign analytical expressions to the moments \(\mathsf{E}\hat{P}^{-k}\), which diverge due to the behavior of \(\hat{P}\) on sets of the form \(|\hat{P}|<\epsilon\). This is clear from writing
\[\mathsf{E}\hat{P}^{-k}=\int_{\mathbb{R}\setminus(-\epsilon,\epsilon)}\frac{f_{\hat{P}}(t)}{t^{k}}\,\mathrm{d}t+\int_{-\epsilon}^{\epsilon}\frac{f_{\hat{P}}(t)}{t^{k}}\,\mathrm{d}t, \tag{51}\]
where the first integral in this decomposition always converges, while the second integral always diverges for any choice of \(k\in\mathbb{N}\) and \(\epsilon>0\). As such, the principal value regularization provides a means of discarding the divergent terms arising out of the second integral to provide a finite expression for \(\mathsf{E}\hat{P}^{-k}\). In practice we don't ever observe \(\hat{P}\) out in the extreme tails of its assumed distribution due to the small probabilities of such events and, even more so, inaccuracies in our assumed noise model. In particular, recall that we assumed a normal distribution for the digital signals \(X_{k}\) and \(Y_{k}\), but in practice these quantities can only take on values between \(0\) and \(2^{N_{\mathrm{bits}}}-1\), with \(N_{\mathrm{bits}}\) denoting the bit-depth of the analog-to-digital converter. Because the normal model assigns positive density outside this interval, we see the model inherently overestimates the tails of the actual data. Therefore, if \(\mathsf{ACV}\hat{P}\) is small we won't observe \(|\hat{P}|<\epsilon\) and the sample moments will agree with the pseudomoments. Again using the parameters following (46), \(10^{9}\) pseudorandom observations of \(\bar{P}\) and \(\hat{P}\) were generated, which were then used to compute a sample of \(10^{9}\) observations of \(G\). Due to a sufficiently small value of \(\mathsf{ACV}\hat{P}\), all observed values of \(\hat{P}\) were strictly positive, so we should expect the sample statistics to agree with the theoretical pseudomoments. Computing the sample mean yielded \(\bar{G}=1.02604\dots\), which agreed with the exact value of \(\mathsf{E}_{P}G\) to six significant digits. Likewise, we would expect the higher-order sample moments to agree with their corresponding higher-order pseudomoments for these parameters.
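A smaller-scale version of this Monte Carlo experiment can be run in a few lines. The sketch below (illustrative; it uses far fewer draws than the \(10^{9}\) reported above, so the last digits will fluctuate) samples \(\bar{P}\) from its normal distribution and \(\hat{P}\) as a difference of gamma variables and then averages the resulting observations of \(G\):

```python
import numpy as np

rng = np.random.default_rng(2)

# Same parameters as in the text: mu_P = 9 DN, sigma^2_{P+D} = 10, sigma^2_D = 1, n1 = 101, n2 = 51
mu_P, var_PD, var_D, n1, n2 = 9.0, 10.0, 1.0, 101, 51
a1, a2 = (n1 - 1) / 2, (n2 - 1) / 2
b1, b2 = a1 / var_PD, a2 / var_D

trials = 2_000_000
var_Pbar = var_PD / n1 + var_D / n2                      # variance of the sample-mean difference
P_bar = rng.normal(mu_P, np.sqrt(var_Pbar), size=trials)
P_hat = rng.gamma(a1, 1 / b1, size=trials) - rng.gamma(a2, 1 / b2, size=trials)

print("smallest observed P_hat:", P_hat.min())           # stays well above zero for these parameters
G = P_bar / P_hat
print(f"sample mean of G: {G.mean():.5f}")                # ~1.026, in line with E_P G
```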
## 5 Optimal Sample Size Pairs for Conversion Gain Estimation
Now that we have a thorough understanding of the statistical characteristics for the general conversion gain estimator, we may begin to tackle the problem of optimal measurement. Recall from Section 3 that the optimal sample size for the one-sample, shot-noise-limited estimator \(G\) satisfied \(\mathsf{ACV}G(n^{\mathrm{opt}})=\mathsf{acv}_{0}\) for any choice of \(\mathsf{acv}_{0}>0\). In the present problem, we are now working with an estimator of two samples and must first define what is meant by optimal sample sizes in this two-sample case.
**Definition 1** (Optimal sample size pairs).: _Let \(T=f(\mathbf{X},\mathbf{Y})\) be a statistic of a sample \(\mathbf{X}\) of size \(n_{1}\) and another sample \(\mathbf{Y}\) of size \(n_{2}\). Furthermore, let \(\mathsf{ACV}T(n_{1},n_{2})\) denote the absolute coefficient of variation for \(T\) as a function of the sample sizes. Then the optimal sample size pairs for \(T\) shall be defined as the ordered pair \((n_{1}^{\mathrm{opt}},n_{2}^{\mathrm{opt}})\), which satisfies the system of equations_
\[\inf_{n_{2}}\left.\mathsf{ACV}T(N-n_{2},n_{2})\right|_{N=n_{1}^{ \mathrm{opt}}+n_{2}^{\mathrm{opt}},n_{2}=n_{2}^{\mathrm{opt}}} \tag{52}\] \[\mathsf{ACV}T(n_{1}^{\mathrm{opt}},n_{2}^{\mathrm{opt}})=\mathsf{ ac}\nu_{0}. \tag{53}\]
In this system of equations we see (52) fixes the total number of samples to \(N=n_{1}+n_{2}\) and solves for the \(n_{2}\) that minimizes \(\mathsf{ACV}T\). In the special case where \(\mathsf{ACV}T(N-n_{2},n_{2})\) is strictly convex in \(n_{2}\), this minimization can be solved via equating the derivative with zero: \(\partial_{n_{2}}\mathsf{ACV}T(N-n_{2},n_{2})=0\). Solving (52) and then substituting \(N\mapsto n_{1}^{\mathrm{opt}}+n_{2}^{\mathrm{opt}}\) and \(n_{2}\mapsto n_{2}^{\mathrm{opt}}\) implicitly defines the optimal sample sizes as a function of each other, which we will call the _optimality relation_. For example, in the proof of Lemma 3 we derive the optimality relation for the estimator \(\bar{P}\) in the form \(n_{2}^{\mathrm{opt}}=\sqrt{\zeta}\,n_{1}^{\mathrm{opt}}\), which expresses the relationship between the optimal sample sizes for \(\bar{P}\). Substituting the optimality relation into (53) then scales the optimal sample sizes so that they not only satisfy the optimality relation but also achieve a prescribed final absolute coefficient of variation equal to \(\mathsf{acv}_{0}\). In this way, \(n_{1}^{\mathrm{opt}}\) and \(n_{2}^{\mathrm{opt}}\) represent the sample sizes whose sum is the minimal possible number of total samples needed to force \(\mathsf{ACV}T\) equal to \(\mathsf{acv}_{0}\), and therefore serve as a good generalization of optimal sample size to the two-sample case.
In Section 4 we were unable to derive an exact expression for \(\mathsf{ACV}_{P}G\) and thus are forced to make some form of approximation to get a handle on this problem. Even if we did have an exact expression, substituting it in Definition 1 would almost certainly yield an intractable system of equations. To overcome these barriers, we will take the approach of linearizing \(\hat{P}^{-1}\) by replacing it with its first-order Taylor polynomial about \(\mathsf{E}\hat{P}=\sigma_{P}^{2}\) and instead focus on
\[G_{\delta}=\bar{P}\times\hat{P}_{\delta}^{-1}, \tag{54}\]
with
\[\hat{P}_{\delta}^{-1}=\frac{1}{\sigma_{P}^{2}}-\frac{\hat{P}-\sigma_{P}^{2}}{ \sigma_{P}^{4}}. \tag{55}\]
The advantage we gain from this linearization is that \(\hat{P}_{\delta}^{-1}\) and \(G_{\delta}\) have simple and well-defined moments that can be used to make concrete conclusions about their statistical properties including their optimal sample sizes. Furthermore, so long as \(\mathsf{ACV}\hat{P}\) is small we have \(\hat{P}_{\delta}^{-1}\stackrel{d}{\approx}\hat{P}^{-1}\), which implies \(G_{\delta}\stackrel{d}{\approx}G\) so that any conclusions we make about the random variables \(\hat{P}_{\delta}^{-1}\) and \(G_{\delta}\) apply to \(\hat{P}^{-1}\) and \(G\) when \(\mathsf{ACV}\hat{P}\) is small. In particular, note that \(\mathsf{ACV}\hat{P}=\mathsf{ACV}\hat{P}_{\delta}^{-1}\) so we can conclude that the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) should be good approximations for those of \(G\) when \(\mathsf{acv}_{0}\) is chosen to be small and the optimal sample sizes of \(\hat{P}_{\delta}^{-1}\) are approximately equal to those of \(G_{\delta}\).
Our angle of attack from here will be as follows. We will first use Definition 1 to derive properties pertaining to the optimal sample sizes of \(\hat{P}\) (Subsection 0.A), which will be used later on to derive similar properties of the optimal sample sizes for \(G_{\delta}\). Following analysis of \(\hat{P}\) will be analogous study of \(\hat{P}_{\delta}^{-1}\) (Subsection 0.B), where we derive the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) and their properties. Subsection 0.C then studies the estimator \(G_{\delta}\) with the goal of understanding when the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) are a good approximation for those of \(G_{\delta}\). We first show that \(\mathsf{ACV}G_{\delta}\) is dominated by \(\mathsf{ACV}\hat{P}_{\delta}^{-1}\) at high-illumination (Theorem 3) so that the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) are asymptotically equal to those of \(G_{\delta}\) at high-illumination. At the other end of the illumination range, Theorem 4 shows that the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) are asymptotically proportional to those of \(G_{\delta}\) with the constant of proportionality approaching one with
increasing dark noise \(\sigma_{\mathcal{D}}\). What this demonstrates is that the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) should be excellent approximations to those of \(G_{\delta}\) at any illumination level given sufficiently large dark noise. This observation leads us to construct a metric, \(\mathcal{E}_{\text{opt}}\), which indicates how good the optimal sample sizes of \(\hat{P}_{\delta}^{-1}\) are as approximations for those of \(G_{\delta}\) as a function of illumination level. By studying \(\mathcal{E}_{\text{opt}}\) at low-illumination we are able to put a rule-of-thumb on how large the dark noise must be to obtain a good approximation. In particular, results will show that for sensors with moderate dark noise of \(\sigma_{\mathcal{D}}\geq 5\,e\)-, the optimal sample sizes of \(\hat{P}_{\delta}^{-1}\) are excellent approximations for those of \(G_{\delta}\) at any illumination level. Theorem 5 then confirms these findings by showing for small \(\text{acv}_{0}\)
\[\text{ACV}_{p}G(n_{1}^{\text{opt}},n_{2}^{\text{opt}})=c\cdot\text{acv}_{0}+ \mathcal{O}(\text{acv}_{0}^{3}), \tag{56}\]
with \(n_{i}^{\text{opt}}\) again representing the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) and \(c\to 1\) with increasing illumination and/or increasing dark noise. Consequently, we show that the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) serve as excellent approximations for those of \(G\) at any illumination level when \(\text{acv}_{0}\) is small and \(\sigma_{\mathcal{D}}\geq 5\,e\)-.
Before proceeding, it will also be convenient to introduce the dimensionless quantity
\[\zeta=\sigma_{D}^{2}/\sigma_{P+D}^{2}. \tag{57}\]
We will reparameterize all subsequent analysis in terms of this quantity via the following lemma.
**Lemma 2**.: _Under the assumed model we have \(\mu_{P}=\sigma_{D}^{2}\zeta^{-1}(1-\zeta)g\) and \(\sigma_{P}^{2}=\sigma_{D}^{2}\zeta^{-1}(1-\zeta)\)._
Because \(0<\sigma_{D}<\sigma_{P+D}\), it follows that \(\zeta\in(0,1)\). As illumination decreases to zero, we find \(\zeta\to 1^{-}\). Likewise, as illumination increases without bound \(\zeta\to 0^{+}\), with \(\zeta=0\) denoting a mathematical definition of the shot-noise-limit. Real sensors always contain some dark noise and are limited in well capacity; therefore, one can only achieve the shot-noise-limit in theory and never in real experiments. Lastly, the reader should take note that \(\zeta\) is increasing as illumination decreases.
### Statistical analysis of \(\bar{P}\)
We begin our analysis by studying the random variable \(\bar{P}\) with the primary purpose of understanding its optimal sample sizes for later use in studying \(G_{\delta}\). Using Lemma 2, the dark noise relation \(\sigma_{\mathcal{D}}=\sigma_{D}\times g\), and the distributional result \(\bar{P}\sim\mathcal{N}(\mu_{P},\sigma_{P+D}^{2}/n_{1}+\sigma_{D}^{2}/n_{2})\) we are able to write the squared absolute coefficient of variation as
\[\text{ACV}^{2}\bar{P}=\frac{1}{\sigma_{\mathcal{D}}^{2}}\frac{\zeta}{(1-\zeta)^{2}}\left(\frac{1}{n_{1}}+\frac{\zeta}{n_{2}}\right). \tag{58}\]
With this expression in hand, we present some key properties of the optimal sample sizes for \(\bar{P}\).
**Lemma 3**.: _Let \(n_{1}^{\text{opt}}\) and \(n_{2}^{\text{opt}}\) denote the optimal sample sizes for \(\bar{P}\). As the illumination decreases, \(\zeta\to 1^{-}\), \(n_{2}^{\text{opt}}/n_{1}^{\text{opt}}\to 1^{-}\), and \(n_{i}^{\text{opt}}\sim C_{\bar{P}}(1-\zeta)^{-2}\) for \(i=1,2\) with \(C_{\bar{P}}=2/(\sigma_{\mathcal{D}}^{2}\text{acv}_{0}^{2})\)._
At low-illumination, from Lemma 3, the optimal sample sizes for \(\bar{P}\) are asymptotically proportional to \((1-\zeta)^{-2}\), demonstrating that as expected, infinite sample sizes are needed in the zero illumination limit.
### Statistical analysis of \(\hat{P}_{\delta}^{-1}\)
We now carry out a similar but more detailed analysis for \(\hat{P}_{\delta}^{-1}\). Under the normal model of Section 2 we have the distributional results \(\hat{X}\sim\mathcal{G}(\alpha_{1},\beta_{1})\) and \(\hat{Y}\sim\mathcal{G}(\alpha_{2},\beta_{2})\) (see Table 2). As such, the difference \(\hat{P}=\hat{X}-\hat{Y}\) is distributed as \(\hat{P}\sim\mathcal{GD}(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})\), which is a gamma-difference variable. Working with the properties of the gamma-difference distribution as well as Lemma 2 we can write
\[\text{ACV}^{2}\hat{P}_{\delta}^{-1}=\frac{2}{(1-\zeta)^{2}}\left(\frac{1}{n_{1} -1}+\frac{\zeta^{2}}{n_{2}-1}\right), \tag{59}\]
which leads to the following expressions for optimal sample sizes of \(\hat{P}_{\delta}^{-1}\).
**Lemma 4**.: _The optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) are given by,_
\[(n_{1}^{\text{opt}},n_{2}^{\text{opt}})=\left(\frac{2(1+\zeta)}{\text{acv}_{0}^ {2}(1-\zeta)^{2}}+1,\frac{2\zeta(1+\zeta)}{\text{acv}_{0}^{2}(1-\zeta)^{2}}+1 \right). \tag{60}\]
**Remark 1**.: _Our goal for deriving the optimal sample sizes of \(\hat{P}_{\delta}^{-1}\) was for the purpose of optimally estimating \(\hat{P}^{-1}\). It turns out that we can alter the sample sizes of Lemma 4 to make them exact in the shot-noise-limit. See Remark 2 in the appendix for more information._
**Proposition 1**.: _Let \(n_{1}^{\text{opt}}\) and \(n_{2}^{\text{opt}}\) denote the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\), then:_
1. \(n_{1}^{\text{opt}}\) _and_ \(n_{2}^{\text{opt}}\) _are strictly increasing with decreasing illumination (_\(\zeta\) _increasing),_
2. \(n_{2}^{\text{opt}}<n_{1}^{\text{opt}}\) _for all_ \(\zeta\in[0,1)\)_,_
3. _As illumination decreases,_ \(\zeta\to 1^{-}\)_,_ \(n_{2}^{\text{opt}}/n_{1}^{\text{opt}}\to 1^{-}\)_, and_ \(n_{i}^{\text{opt}}\sim\mathcal{C}_{\hat{P}_{\delta}^{-1}}(1-\zeta)^{-2}\) _for_ \(i=1,2\) _where_ \(C_{\hat{P}_{\delta}^{-1}}=4/\text{acv}_{0}^{2}\)_._
**Corollary 1**.: _Let \(n_{1}^{\text{opt}}(\zeta)\) and \(n_{2}^{\text{opt}}(\zeta)\) denote the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) as a function of \(\zeta\). To estimate \(\hat{P}_{\delta}^{-1}\) to a relative uncertainty of \(\text{acv}_{0}\), one needs at minimum a total of \(N^{\text{opt}}(0)=n_{1}^{\text{opt}}(0)+n_{2}^{\text{opt}}(0)=2/\text{acv}_{0}^{2}+2\) observations._
Proposition 1 demonstrates that the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) have similar characteristics for those of \(\bar{P}\). In particular, we see at low-illumination that the optimal sample sizes are asymptotically equal and asymptotically proportional to \((1-\zeta)^{-2}\) showing that infinite sample sizes are again needed in the zero illumination limit. Furthermore, Corollary 1 takes advantage of the monotonicity of the optimal sample sizes in \(\zeta\) in order to derive a lower bound to the minimum total number of samples needed to achieve an absolute coefficient of variation for \(\hat{P}_{\delta}^{-1}\) equal to \(\text{acv}_{0}\). Note that this lower bound closely matches the asymptotic result for shot-noise-limited estimator of \(g\) given in (25). Choosing \(\text{acv}_{0}=0.01\) gives the lower bound \(N^{\text{opt}}(0)=20\,002\), which serves to show the large sample sizes needed to obtain typically desired measurement uncertainties.
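Lemma 4 and Definition 1 are easy to check against one another numerically. The following sketch (illustrative; \(\zeta\) and \(\mathsf{acv}_{0}\) are arbitrary assumptions) computes the closed-form sample sizes and then confirms by a brute-force grid search that, for the implied total \(N\), no other split of the samples gives a smaller \(\mathsf{ACV}\hat{P}_{\delta}^{-1}\), and that the minimum equals \(\mathsf{acv}_{0}\):

```python
import numpy as np

def acv2_Pinv(n1, n2, zeta):
    """Squared ACV of the linearized estimator, Eq. (59)."""
    return 2.0 / (1 - zeta) ** 2 * (1.0 / (n1 - 1) + zeta**2 / (n2 - 1))

def lemma4(zeta, acv0):
    """Closed-form optimal sample sizes from Lemma 4."""
    base = 2 * (1 + zeta) / (acv0**2 * (1 - zeta) ** 2)
    return base + 1, zeta * base + 1

zeta, acv0 = 0.5, 0.05
n1_opt, n2_opt = lemma4(zeta, acv0)
N = n1_opt + n2_opt
print(f"Lemma 4: n1_opt = {n1_opt:.1f}, n2_opt = {n2_opt:.1f}, N = {N:.1f}")

# Brute-force check of Definition 1: hold N fixed and search over the split.
n2_grid = np.linspace(2.0, N - 2.0, 200_001)
acv_grid = np.sqrt(acv2_Pinv(N - n2_grid, n2_grid, zeta))
best_n2 = n2_grid[np.argmin(acv_grid)]
print(f"grid search: minimizing n2 = {best_n2:.1f}, minimum ACV = {acv_grid.min():.5f} (target {acv0})")
```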
Now that we have explicit expressions for the optimal sample sizes of \(\hat{P}_{\delta}^{-1}\), and know their properties, we can assess the accuracy of the approximation obtained by using the optimal sample sizes of \(\hat{P}_{\delta}^{-1}\) in place of those for \(\hat{P}^{-1}\).
**Theorem 2**.: _Let \(\hat{P}_{\text{opt}}^{-1}\) denote the estimator \(\hat{P}^{-1}\) as a function of the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\). Then as \(\text{acv}_{0}\to 0^{+}\)_
\[\begin{split}\text{ACV}_{\mathcal{P}}\hat{P}_{\text{opt}}^{-1}&=\text{acv}_{0}+3\text{acv}_{0}^{3}+\mathcal{O}(\text{acv}_{0}^{5}),\\ \text{ARB}_{\mathcal{P}}\hat{P}_{\text{opt}}^{-1}&=\text{acv}_{0}^{2}+3\text{acv}_{0}^{4}+\mathcal{O}(\text{acv}_{0}^{6}).\end{split} \tag{61}\]
As Theorem 2 shows, when \(\mathsf{acv}_{0}\) is small, the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) force \(\mathsf{ACV}_{P}\hat{P}^{-1}\) to be nearly \(\mathsf{acv}_{0}\), which indicates a good approximation. Since the secondary term in the expansion for \(\mathsf{ACV}_{P}\hat{P}_{\mathsf{opt}}^{-1}\) is cubic, we expect the approximation to be quite good for any small choice of \(\mathsf{acv}_{0}\), say \(\mathsf{acv}_{0}<0.1\). This restriction is not problematic as one generally does not seek to measure \(1/\sigma_{P}^{2}\) with greater than \(10\%\) uncertainty. Furthermore, Theorem 2 also quantifies the bias of \(\hat{P}_{\mathsf{opt}}^{-1}\), showing that the main term is quadratic in \(\mathsf{acv}_{0}\). As such, we expect the estimator \(\hat{P}^{-1}\) to be nearly unbiased for \(1/\sigma_{P}^{2}\) (in the sense that \(\mathsf{E}_{P}\hat{P}^{-1}\approx 1/\sigma_{P}^{2}\)) when subject to the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) for a small choice of \(\mathsf{acv}_{0}\).
### Statistical analysis of \(G_{\delta}\)
Equipped with the results of Subsections A-B, we are now ready to perform a statistical analysis on \(G_{\delta}\) with the goal of determining when the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) serve as good approximations for those of \(G_{\delta}\). Using the formula in Lemma 1 we first write the squared absolute coefficient of variation for \(G_{\delta}\) as
\[\mathsf{ACV}^{2}G_{\delta}=\mathsf{ACV}^{2}\bar{P}+(\mathsf{ACV}^{2}\bar{P})(\mathsf{ACV}^{2}\hat{P}_{\delta}^{-1})+\mathsf{ACV}^{2}\hat{P}_{\delta}^{-1}, \tag{62}\]
with \(\mathsf{ACV}^{2}\bar{P}\) and \(\mathsf{ACV}^{2}\hat{P}_{\delta}^{-1}\) given in (58) and (59), respectively. Our first goal here is to show the estimator \(G_{\delta}\) behaves like previous estimators for \(g\) in the sense that its uncertainty is dominated by that of \(\hat{P}_{\delta}^{-1}\) at high-illumination (c.f. Theorem 1).
**Theorem 3**.: _Let \(G_{\delta}\) and \(\hat{P}_{\delta}^{-1}\) be as given in (54) and (55), respectively. As illumination increases, \(\zeta\to 0^{+}\) and_
\[\mathsf{ACV}G_{\delta}=\mathsf{ACV}\hat{P}_{\delta}^{-1}\left(1+\frac{1}{ \sigma_{\mathcal{D}}^{2}}\frac{n_{1}+1}{4n_{1}}\zeta+\mathcal{O}(\zeta^{2}) \right). \tag{63}\]
Since \(\mathsf{ACV}G_{\delta}\sim\mathsf{ACV}\hat{P}_{\delta}^{-1}\) we have by the same reasoning in Section 3 that the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) are asymptotically equal to those of \(G_{\delta}\) at high-illumination with equality in the shot-noise-limit (\(\zeta=0\)). However, if our goal is optimal sampling for the general case, this approximation must also hold in the low-illumination regime (\(\zeta\to 1^{-}\)). To determine how good this approximation is at low-illumination, we may consider the following theorem.
**Theorem 4**.: _Let \(n_{1}^{\mathrm{opt}}\) and \(n_{2}^{\mathrm{opt}}\) denote the optimal sample sizes for \(G_{\delta}\). Then, as illumination decreases, \(\zeta\to 1^{-}\), \(n_{2}^{\mathrm{opt}}/n_{1}^{\mathrm{opt}}\to 1^{-}\), and \(n_{i}^{\mathrm{opt}}\sim C_{G_{\delta}}(1-\zeta)^{-2}\) for \(i=1,2\) with_
\[C_{G_{\delta}}=\frac{2}{\mathsf{acv}_{0}^{2}}\left(1+\frac{1}{2\sigma_{ \mathcal{D}}^{2}}+\left(\left(1+\frac{1}{2\sigma_{\mathcal{D}}^{2}}\right)^{ 2}+\frac{2}{\sigma_{\mathcal{D}}^{2}}\mathsf{acv}_{0}^{2}\right)^{1/2}\right). \tag{64}\]
Comparing Theorem 4 with Proposition 1 (\(iii\)) shows that at low-illumination the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) are asymptotically proportional to the optimal sample sizes for \(G_{\delta}\). The constant of proportionality, \(C_{G_{\delta}}/C_{\hat{P}_{\delta}^{-1}}\), depends on the dark noise \(\sigma_{\mathcal{D}}\) and as the dark noise increases
\[\frac{C_{G_{\delta}}}{C_{\hat{P}_{\delta}^{-1}}}\sim 1+\frac{1+\mathsf{acv}_{0}^{2} }{2\sigma_{\mathcal{D}}^{2}}+\mathcal{O}(\sigma_{\mathcal{D}}^{-4}), \tag{65}\]
which further shows that this constant of proportionality approaches one with increasing dark noise. Bringing the observations following Theorems 3-4 together, we expect the optimal sample sizes of \(\hat{P}_{\delta}^{-1}\) to serve as good approximations to those of \(G_{\delta}\) at any illumination level given enough dark noise.
To get a better grasp on these observations we will define the metric
\[\mathcal{E}=\frac{\mathsf{ACV}\hat{P}_{\delta}^{-1}}{\mathsf{ACV}G_{\delta}}. \tag{66}\]
and consider this metric as a function of the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\), that is,
\[\mathcal{E}_{\mathrm{opt}}=\frac{\mathsf{acv}_{0}}{\mathsf{ACV}G_{\delta, \mathrm{opt}}}=\left(1+(\mathsf{ACV}^{2}\bar{P}_{\mathrm{opt}})(1+\mathsf{acv}_{ 0}^{-2})\right)^{-1/2} \tag{67}\]
with
\[\mathsf{ACV}^{2}\bar{P}_{\mathrm{opt}}=\frac{1}{\sigma_{\mathcal{D}}^{2}}\frac {\zeta}{(1-\zeta)^{2}}\left(\frac{1}{n_{1}^{\mathrm{opt}}}+\frac{\zeta}{n_{2}^{ \mathrm{opt}}}\right). \tag{68}\]
Both \(\mathcal{E}\) and its counterpart \(\mathcal{E}_{\mathrm{opt}}\) are normalized in the sense that \(0\leq\mathcal{E}_{\mathrm{opt}}\leq 1\). In particular, whenever \(\mathcal{E}_{\mathrm{opt}}\approx 1\) we can conclude that the optimal sample sizes of \(\hat{P}_{\delta}^{-1}\) are good approximations for those of \(G_{\delta}\) with equality achieved when \(\mathcal{E}_{\mathrm{opt}}=1\).
It is straightforward to show \(\lim_{\zeta\to 0^{+}}\mathcal{E}_{\mathrm{opt}}=1\), which again indicates that the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) are exactly equal to those for \(G_{\delta}\) in the shot-noise-limit. Taking the limit now in the opposite direction, we see \(\mathsf{ACV}^{2}\bar{P}_{\mathrm{opt}}\to\mathsf{acv}_{0}^{2}/(2\sigma_{ \mathcal{D}}^{2})\) as \(\zeta\to 1^{-}\) and so
\[\overline{\mathcal{E}_{\mathrm{opt}}}\coloneqq\lim_{\zeta\to 1^{-}}\mathcal{E}_{ \mathrm{opt}}=\left(1+\frac{1+\mathsf{acv}_{0}^{2}}{2\sigma_{\mathcal{D}}^{2} }\right)^{-1/2}, \tag{69}\]
which is nonzero. To get a sense of how close \(\overline{\mathcal{E}_{\mathrm{opt}}}\) is to one recall from the discussion following Theorem 2, that we generally want to impose the restriction \(\mathsf{acv}_{0}\in(0,0.1)\). Since \(\overline{\mathcal{E}_{\mathrm{opt}}}\) is decreasing in \(\mathsf{acv}_{0}\) we set \(\mathsf{acv}_{0}=0.1\) giving the lower bound
\[\overline{\mathcal{E}_{\mathrm{opt}}}>\left(1+\frac{101}{200}\frac{1}{\sigma_{ \mathcal{D}}^{2}}\right)^{-1/2}. \tag{70}\]
Plotting this lower bound as a function of \(\sigma_{\mathcal{D}}\) we find \(\overline{\mathcal{E}_{\mathrm{opt}}}>0.99\) for \(\sigma_{\mathcal{D}}\geq 5\)_e_- and so for sensors with dark noise greater than \(5\)_e_-, we can expect the optimal sample sizes of \(\hat{P}_{\delta}^{-1}\) to give excellent approximations to those of \(G_{\delta}\) at any level of illumination. In conclusion, if \(\mathsf{acv}_{0}\) is small (\(\mathsf{acv}_{0}<0.1\)) and the dark noise is sufficiently large (\(\sigma_{\mathcal{D}}\geq 5\)_e_-), the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) serve as excellent approximations for those of the general conversion gain estimator \(G\) at any illumination level.
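The \(5\,e\)- rule-of-thumb follows directly from evaluating the bound (70) over a range of dark noise values, as the short sketch below illustrates (the grid of dark noise values is an arbitrary assumption):

```python
acv0 = 0.1                                    # the worst case considered in the text
for sigma_dark in (2.0, 3.0, 4.0, 5.0, 6.0):  # dark noise in e-
    bound = (1 + (1 + acv0**2) / (2 * sigma_dark**2)) ** -0.5   # limit (69) evaluated at acv0 = 0.1
    print(f"sigma_D = {sigma_dark:.0f} e-:  lower bound on E_opt-bar = {bound:.4f}")
```

The bound first exceeds \(0.99\) at \(\sigma_{\mathcal{D}}=5\,e\)-, in agreement with the threshold quoted above.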
We close this section with an analogous result to that of Theorem 2, which confirms our findings.
**Theorem 5**.: _Let \(G_{\mathrm{opt}}\) denote the estimator \(G\) as a function of the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\). Then as \(\mathsf{acv}_{0}\to 0^{+}\)_
\[\mathsf{ACV}_{p}G_{\mathrm{opt}}=\sqrt{1+a}\,\mathsf{acv}_{0}+\frac{6+a+b}{2 \sqrt{1+a}}\,\mathsf{acv}_{0}^{3}+\mathcal{O}(\mathsf{acv}_{0}^{5}), \tag{71}\]
\[\mathsf{ARB}_{p}G_{\mathrm{opt}}=\mathsf{acv}_{0}^{2}+3\mathsf{acv}_{0}^{4}+ \mathcal{O}(\mathsf{acv}_{0}^{6}),\]
_where \(a=\frac{1}{\sigma_{\mathcal{D}}^{2}}\frac{\zeta}{1+\zeta}\) and \(b\) is a second bounded coefficient, proportional to \(1/\sigma_{\mathcal{D}}^{2}\), depending only on \(\zeta\) and \(\sigma_{\mathcal{D}}\)._ Consistent with (56), the leading coefficient \(c=\sqrt{1+a}\) tends to one with increasing illumination (\(\zeta\to 0^{+}\)) and/or increasing dark noise, so the prescribed uncertainty \(\mathsf{acv}_{0}\) is recovered in these regimes.
## 6 Design and control of experiment for per-pixel conversion gain estimation
In this section, we demonstrate how the derived expressions can be used in the Design & Control of Experiment (DOE & COE) for per-pixel conversion gain estimation. For this example we will use the ON Semiconductor KAI-04070 monochrome interline transfer CCD sensor [29]. As we will demonstrate, the entire experimental process for per-pixel conversion gain estimation centers around the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) in Lemma 4 and the metric \(\mathcal{E}_{\text{opt}}\) given in (67).
A full Monte Carlo implementation of the DOE and COE algorithms presented in this section can be found in Code 1 [30] and Code 2 [31]. Code 1 (DOE_and_COE.m) is the primary script and requires MATLAB's _Statistics and Machine Learning_ toolbox to run. Since Code 1 is meant to guide the reader in understanding the steps involved in DOE and COE, it is advised to run it one section at a time and observe the results of each section.
### Experimental Setup
Figure 2 shows a schematic diagram of the experimental setup used for performing per-pixel conversion gain estimation. The experimental setup consisted of a \(650\,\mathrm{nm}\) Superluminescent Light Emitting Diode (SLED) passed into an integrating sphere with the Sensor Under Test (SUT) placed in the plane of the sphere's output port (\(f/0\) geometry) where uniformity is highest [32]. To facilitate control of the illumination level, a Variable Optical Attenuator (VOA) was introduced between the SLED and integrating sphere. The sensor was configured at its full bit-depth of 14-bits to minimize quantization error and image data was read off the sensor at a \(512\times 512\,\mathrm{px}\) resolution using a single readout register operating at \(40\,\mathrm{MHz}\). By operating the sensor in a single-tap mode like this, the gain of each pixel will be the same and so any excursions in the measured gain of each pixel will be, in theory, completely due to sampling error. This high level of uniformity will allow us to see if the proposed algorithms are able to successfully measure the gain to the desired relative uncertainty, verify our noise model, and compare experimental results to theoretical predictions.
In order to capture imagery under both dark and illuminated conditions, a Motorized Mirror (MM) was placed next to the path of the SLED beam. Moving the mirror into the beam path redirected the beam away from the integrating sphere and into a Beam Dump (BD), thus providing a dark environment for the sensor.
### Design of experiment
DOE for per-pixel conversion gain estimation begins with choosing a suitable value for the desired relative uncertainty \(\mathsf{acv}_{0}\). As a rough rule of thumb, if we are operating under the condition \(\mathcal{E}_{\text{opt}}\approx 1\), then for small \(\mathsf{acv}_{0}\) we may approximate \(G_{\text{opt}}\sim\mathcal{N}(g,(g\times\mathsf{acv}_{0})^{2})\) so that \(\mathsf{P}(G_{\text{opt}}\in g(1\pm\mathsf{acv}_{0}))\approx 0.683\). This amounts to the optimal estimate of \(g\) being within \(\mathsf{acv}_{0}\times 100\%\) of its exact value \(68.3\%\) of the time. For this particular experiment it was decided to use \(\mathsf{acv}_{0}=0.05\) as this represents a typical value one might choose.
Next, we want to determine an appropriate illumination level for measuring the conversion gain by selecting a value of \(\zeta\) where: (1) the total number of required observations is attainable and (2) our approximate optimal sample sizes are valid. To determine when both conditions are satisfied, we will create an \(\mathcal{E}\)-\(N\) plot, which consists of plotting the functions \(\mathcal{E}_{\text{opt}}\) and \(N^{\text{opt}}=n_{1}^{\text{opt}}+n_{2}^{\text{opt}}\) as a function of \(\zeta\). To determine an illumination level, we select a value of \(\zeta\) where \(N^{\text{opt}}\) is small enough and \(\mathcal{E}_{\text{opt}}\approx 1\). However, because \(\mathcal{E}_{\text{opt}}\) contains the unknown value of the dark noise, we will need to provide an estimate of \(\sigma_{\mathcal{D}}\) for this plot, which can be accomplished through a preliminary measurement, a vendor specification sheet, or an educated guess/lower bound.
In this example we will take the route of estimating a global lower bound for the dark noise. For this particular sensor we expect \(g\geq 1\) and since \(\sigma_{\mathcal{D}}=\sigma_{D}\times g\) it follows that \(\sigma_{D}\) provides a lower bound on \(\sigma_{\mathcal{D}}\). Since \(\mathcal{E}_{\text{opt}}\) is an increasing function of \(\sigma_{\mathcal{D}}\), it follows that evaluating it at \(\sigma_{D}\) provides a lower bound for its exact value. To obtain a global estimate of \(\sigma_{D}^{2}\) we capture two dark frames \(\mathbf{Y}_{1}\) and \(\mathbf{Y}_{2}\) and then compute half the sample variance of the difference-frame \(\Delta\mathbf{Y}=\mathbf{Y}_{1}-\mathbf{Y}_{2}\). Note the use of bold symbols to denote \(\mathbf{Y}_{k}\) as a two-dimensional array so that \(\Delta\mathbf{Y}\) is the pixel-wise difference of the two dark frames \(\mathbf{Y}_{1}\) and \(\mathbf{Y}_{2}\). For the sensor under test we found
\[\hat{\sigma}_{D}^{2}=\frac{1}{2}\mathrm{var}(\Delta\mathbf{Y})=39.94\,\mathrm{ DN}^{2}, \tag{72}\]
where
\[\mathrm{var}(\Delta\mathbf{Y})=\frac{1}{512^{2}-1}\sum_{i=1}^{512}\sum_{j=1}^{5 12}(\Delta\mathbf{Y}_{ij}-\overline{\Delta\mathbf{Y}})^{2}. \tag{73}\]
Figure 3 plots
\[\hat{\mathcal{E}}_{\text{opt}}(\zeta)=\\ \left(1+\frac{1+\mathsf{acv}_{0}^{-2}}{\hat{\sigma}_{D}^{2}} \frac{\zeta}{(1-\zeta)^{2}}\left(\frac{1}{n_{1}^{\text{opt}}(\zeta)}+\frac{ \zeta}{n_{2}^{\text{opt}}(\zeta)}\right)\right)^{-1} \tag{74}\]
along with \(N^{\text{opt}}(\zeta)\) for our choice of \(\mathsf{acv}_{0}=0.05\) and estimate \(\hat{\sigma}_{D}\). Due to the sufficiently large value of \(\hat{\sigma}_{D}\) we see that \(\hat{\mathcal{E}}_{\text{opt}}\) is near unity for virtually any illumination level so that the requirement \(\mathcal{E}_{\text{opt}}\approx 1\) will not restrict what illumination levels we can choose for the experiment. To select an appropriate illumination level we first note that this sensor can record images at \(\approx 5\,\mathrm{fps}\) for the chosen readout rate of \(40\,\mathrm{MHz}\). Looking back at Figure 3 we observe that the illumination level corresponding to \(\zeta=0.4\) is paired with an optimal total sample size of \(N^{\text{opt}}\approx 4300\) and approximation quality metric \(\mathcal{E}_{\text{opt}}\approx 0.993\). At a recording rate of \(5\,\mathrm{fps}\) this number of images will take \(\approx 15\,\mathrm{min.}\) to capture, which is short enough to avoid any significant drift in the sensor or source.
Figure 2: Schematic diagram of experimental setup for per-pixel conversion gain estimation.
Now equipped with a desired value for \(\zeta\), we guessed the required illumination level by adjusting the variable optical attenuator until the illumination level at the sensor plane resulted in a mean pixel output of about \(1\%\) of the sensor's dynamic range. Using the same process used to estimate the dark noise, two illuminated frames \(\mathbf{X}_{1}\) and \(\mathbf{X}_{2}\) were captured and their difference \(\Delta\mathbf{X}=\mathbf{X}_{1}-\mathbf{X}_{2}\) was used to obtain the estimate
\[\hat{\sigma}_{P+D}^{2}=\frac{1}{2}\text{var}(\Delta\mathbf{X})=112.89\,\text{DN}^{2}, \tag{75}\]
which led to a global estimate of \(\zeta\) equal to
\[\zeta=\frac{\hat{\sigma}_{D}^{2}}{\hat{\sigma}_{P+D}^{2}}=0.354. \tag{76}\]
For our purposes, this illumination level was sufficiently close to the target \(\zeta=0.4\) and corresponded to \(\hat{\mathcal{E}}_{\text{opt}}=0.996\) and \(N^{\text{opt}}=3521\) images, which needs only \(\approx 12\,\text{min.}\) to capture. If this \(\zeta\)-value was not appropriate we could simply re-adjust the illumination level, capture another two illuminated frames, and re-estimate \(\zeta\). We can repeat this procedure until we have found an illumination level corresponding to a suitable \(\zeta\)-value.
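The arithmetic behind these numbers can be reproduced in a few lines. The sketch below (illustrative; it simply plugs the measured difference-frame variances into (76) and Lemma 4) computes the global \(\zeta\) estimate and the corresponding optimal sample sizes:

```python
import numpy as np

acv0 = 0.05
var_D_hat, var_PD_hat = 39.94, 112.89        # measured half-variances of the difference frames (DN^2)

zeta_hat = var_D_hat / var_PD_hat            # global estimate of zeta, Eq. (76)

base = 2 * (1 + zeta_hat) / (acv0**2 * (1 - zeta_hat) ** 2)   # Lemma 4 at the estimated zeta
n1_opt = int(np.ceil(base + 1))
n2_opt = int(np.ceil(zeta_hat * base + 1))

print(f"zeta estimate: {zeta_hat:.3f}")
print(f"n1_opt ~ {n1_opt}, n2_opt ~ {n2_opt}, total ~ {n1_opt + n2_opt}")
# The totals land close to (though not exactly at) the n1 = 2602 and n2 = 921
# frames ultimately captured by the control-of-experiment loop.
```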
Before moving on to data capture, it's important to understand what (76) is an estimate of. Recall that \(\mathbf{Y}_{ijk}\sim\mathcal{N}(\mu_{D},\sigma_{D}^{2})\) so that we have for the difference frame \(\mathbf{Y}_{ij1}-\mathbf{Y}_{ij2}=\Delta\mathbf{Y}_{ij}\sim\mathcal{N}(0,2\sigma_{D}^{2})\). In general \(\sigma_{D}^{2}\) will vary from pixel-to-pixel (the \(ij\)-dimension) so that we may treat it as a random variable and model it according to some probability distribution \(\sigma_{D}^{2}\sim F_{\sigma_{D}^{2}}\). By the law of total variance one finds
\[\text{Var}(\Delta\mathbf{Y}_{ij})=\text{Var}(\mathsf{E}(\Delta\mathbf{Y}_{ij} |\sigma_{D}^{2}))+\mathsf{E}(\text{Var}(\Delta\mathbf{Y}_{ij}|\sigma_{D}^{2} ))=2\,\mathsf{E}(\sigma_{D}^{2}), \tag{77}\]
which shows that the estimator (72) gives an unbiased estimate of the average value of \(\sigma_{D}^{2}\) across the sensor array, and likewise for the estimator of \(\sigma_{P+D}^{2}\) in (75). Thus, (76) is a ratio of unbiased estimates of the average noise values across the sensor array, providing a useful global estimate of \(\zeta\).
### Control of experiment
With the illumination level set, the experiment was ready to commence. The algorithm for data capture is presented in Algorithm 1. This algorithm utilizes Welford's online algorithm in UpdateStats() to iteratively update the master frames \(\bar{\mathbf{X}}\), \(\bar{\mathbf{Y}}\), \(\hat{\mathbf{X}}\), and \(\hat{\mathbf{Y}}\), which contain sample means and variances for each pixel in both dark and illuminated conditions. In each iteration of the algorithm a batch of illuminated frames and another batch of dark frames are captured and used to update the master frames before recalculating a global estimate of \(\zeta\), which is then in turn used to update the next batch sizes. The initial batch sizes are \(\texttt{batch1}=\lceil n_{1}^{\text{opt}}(0,\text{acv}_{0})\rceil\) and \(\texttt{batch2}=2\), which are the minimal possible number of each frame type needed. The batch sizes are updated to capture \(m\times 100\%\) (\(m\in(0,1]\)) of the remaining difference between the current sample sizes and their estimates. The parameter \(m\) controls how aggressive the algorithm is and ultimately how many times the light source needs to be turned on and off. Large values of \(m\) mean the algorithm will iterate fewer times (putting more confidence in the estimated sample sizes) while smaller values of \(m\) result in more iterations (less confidence in the estimates). In the limit \(m\to 0\), the algorithm iterates every time a dark and an illuminated frame are captured. The algorithm terminates when both batch sizes are nonpositive, indicating that the current sample sizes meet or exceed their respective estimates. This is then followed by the per-pixel calculation of \(g\), denoted \(\mathbf{G}\), which we shall call the \(g\)-map. A simplified sketch of this control loop is given below.
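Algorithm 1 itself is provided in Code 1; the following simplified Python sketch (with a simulated sensor standing in for real frame grabs, and with all function names and parameter values being illustrative assumptions) shows the essential structure of the control loop: per-pixel Welford updates, a running global estimate of \(\zeta\), and batch sizes that close a fraction \(m\) of the remaining gap to the estimated optimal sample sizes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated sensor (stand-in for real frame grabs; parameters are illustrative)
I = J = 64
g_true, mu_e, mu_D_dn, sigma_D_dn = 1.0, 60.0, 100.0, 6.3

def grab_frame(illuminated):
    frame = rng.normal(mu_D_dn, sigma_D_dn, size=(I, J))
    if illuminated:
        frame = frame + rng.poisson(mu_e, size=(I, J)) / g_true
    return frame

def update_stats(stats, frame):
    """One Welford update of per-pixel running count, mean, and sum of squared deviations."""
    n, mean, M2 = stats
    n += 1
    delta = frame - mean
    mean = mean + delta / n
    M2 = M2 + delta * (frame - mean)
    return n, mean, M2

def finalize(stats):
    n, mean, M2 = stats
    return mean, M2 / (n - 1)            # per-pixel sample mean and variance

acv0, m = 0.05, 0.8
lit = (0, np.zeros((I, J)), np.zeros((I, J)))
drk = (0, np.zeros((I, J)), np.zeros((I, J)))
batch1, batch2 = int(np.ceil(2 / acv0**2 + 1)), 2   # minimal starting batches

while batch1 > 0 or batch2 > 0:
    for _ in range(max(batch1, 0)):
        lit = update_stats(lit, grab_frame(True))
    for _ in range(max(batch2, 0)):
        drk = update_stats(drk, grab_frame(False))
    X_mean, X_var = finalize(lit)
    Y_mean, Y_var = finalize(drk)
    zeta = Y_var.sum() / X_var.sum()                        # global zeta estimate
    base = 2 * (1 + zeta) / (acv0**2 * (1 - zeta) ** 2)     # Lemma 4 targets
    batch1 = int(np.ceil(m * (base + 1 - lit[0])))
    batch2 = int(np.ceil(m * (zeta * base + 1 - drk[0])))

G_map = (X_mean - Y_mean) / (X_var - Y_var)                 # per-pixel g-map
print(f"frames captured: n1 = {lit[0]}, n2 = {drk[0]}")
print(f"g-map mean = {G_map.mean():.3f}, acv = {G_map.std() / abs(G_map.mean()):.3f}")
```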
While the estimate of \(\zeta\) used in Algorithm 1 takes a much different form than that of (76), we can show it is still estimating the same quantity. To see why, note that \(\hat{\mathbf{Y}}_{ij}\sim\mathcal{G}(\alpha_{2},\beta_{2})\) with \(\sigma_{D}^{2}\sim F_{\sigma_{D}^{2}}\). By the law of total expectation
\[\mathsf{E}(\hat{\mathbf{Y}}_{ij})=\mathsf{E}(\mathsf{E}(\hat{\mathbf{Y}}_{ij} |\sigma_{D}^{2}))=\mathsf{E}(\sigma_{D}^{2}), \tag{78}\]
which shows \(\sum\hat{\mathbf{Y}}_{ij}\) is an unbiased estimator for \(I\cdot J\cdot\mathsf{E}(\sigma_{D}^{2})\) with \(I\) and \(J\) being the vertical and horizontal resolution of the sensor in units of pixels, respectively. Likewise, \(\sum\hat{\mathbf{X}}_{ij}\) is an unbiased estimator for \(I\cdot J\cdot\mathsf{E}(\sigma_{P+D}^{2})\) so that \(\zeta=\sum\hat{\mathbf{Y}}_{ij}/\sum\hat{\mathbf{X}}_{ij}\) is equivalent to a ratio of unbiased estimates for \(\mathsf{E}(\sigma_{D}^{2})\) and \(\mathsf{E}(\sigma_{P+D}^{2})\) just as (76) is.
### Experimental Data
Algorithm 1 was executed on the chosen CCD for the specified uncertainty \(\text{acv}_{0}=0.05\) and multiplier \(m=0.8\), which halted after capturing \(n_{1}=2602\) illuminated frames and \(n_{2}=921\) dark frames. Due to the large value of \(m\), the mirror only needed to be moved eleven times throughout the entire coe procedure. Figure 4 presents the final \(g\)-map created from Algorithm 1 along with a histogram of the \(g\)-map values. Upon inspection of the \(g\)-map, there is no evidence of structure or nonuniformities, as expected. Because the excursions in the \(g\)-map are almost entirely due to random sampling error, as opposed to nonuniformity in the sensor or source, and the sensor noise obeys the assumed noise model, the estimated value of \(g\) for each pixel represents an i.i.d. observation from the cif distribution [10]. Given the large values of \(n_{1}\) and \(n_{2}\), the fitted cif distribution for this example was approximated by Hinkley's normal ratio
Figure 3: \(\mathcal{E}\)-\(N\) plot (top) with optimal sample sizes (bottom) for \(\text{acv}_{0}=0.05\) versus \(\zeta\).
distribution [33]
\[f_{G}(g)\sim\frac{e^{-c/2}}{\sigma_{\bar{P}}\sigma_{\hat{P}}a^{2}(g)}\left(\frac{b(g)e^{b^{2}(g)/2}}{\sqrt{2\pi}}(2\Phi(b(g))-1)+\frac{1}{\pi}\right) \tag{79}\]
where \(a(g)=(g^{2}/\sigma_{\bar{P}}^{2}+1/\sigma_{\hat{P}}^{2})^{1/2}\), \(b(g)=(\mu_{\bar{P}}g/\sigma_{\bar{P}}^{2}+\mu_{\hat{P}}/\sigma_{\hat{P}}^{2})/a(g)\), and \(c=\mu_{\bar{P}}^{2}/\sigma_{\bar{P}}^{2}+\mu_{\hat{P}}^{2}/\sigma_{\hat{P}}^{2}\). The parameters \(\mu_{\bar{P}}\), \(\sigma_{\bar{P}}\), \(\mu_{\hat{P}}\), and \(\sigma_{\hat{P}}\) are easily estimated from sample means and sample standard deviations of the \(\bar{\mathbf{P}}=\bar{\mathbf{X}}-\bar{\mathbf{Y}}\) and \(\hat{\mathbf{P}}=\hat{\mathbf{X}}-\hat{\mathbf{Y}}\) master frames.
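As an illustration, (79) can be evaluated numerically as in the sketch below. The argument names stand for \(\mu_{\bar{P}}\), \(\sigma_{\bar{P}}\), \(\mu_{\hat{P}}\), \(\sigma_{\hat{P}}\) and are ours; the two exponentials are combined as \(e^{(b^{2}(g)-c)/2}\) purely for numerical stability, which leaves the formula unchanged.

```python
import numpy as np
from scipy.stats import norm

def hinkley_density(g, mu_num, sig_num, mu_den, sig_den):
    """Density (79) of the ratio of two independent normals, with b(g) already
    normalized by a(g) as in the text (illustrative sketch)."""
    g = np.asarray(g, dtype=float)
    a = np.sqrt(g ** 2 / sig_num ** 2 + 1.0 / sig_den ** 2)
    b = (mu_num * g / sig_num ** 2 + mu_den / sig_den ** 2) / a
    c = mu_num ** 2 / sig_num ** 2 + mu_den ** 2 / sig_den ** 2
    ratio_term = b * np.exp((b ** 2 - c) / 2.0) / np.sqrt(2.0 * np.pi) * (2.0 * norm.cdf(b) - 1.0)
    return (ratio_term + np.exp(-c / 2.0) / np.pi) / (sig_num * sig_den * a ** 2)
```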
Given that the fitted distribution is that of a normal ratio, the pseudomoments of the fitted distribution are exactly described by (45). Table 3 presents the exact values of the pseudomoments for the fitted distribution and compares them to sample moments of the \(g\)-map. Upon inspection, we see the theoretical pseudomoments show a high level of agreement with the sample moments and thus demonstrate that the pseudomoments serve as a useful way to characterize moments of actual sensor data.
Recall that the coe algorithm was run with a design parameter of \(\text{acv}_{0}=0.05\); comparing this to the measurement, we find the sample absolute coefficient of variation of the \(g\)-map to be
\[\text{acv}(\mathbf{G})=0.05059. \tag{80}\]
Here, \(\text{acv}(\mathbf{G})=\sqrt{\text{var}(\mathbf{G})}/|\text{mean}(\mathbf{G})|\) with \(\text{mean}(\mathbf{G})\) and \(\text{var}(\mathbf{G})\) denoting the sample mean and variance of the two-dimensional \(g\)-map, respectively. We note that \(\text{acv}(\mathbf{G})>\text{acv}_{0}\). The discrepancy between \(\text{acv}(\mathbf{G})\) and \(\text{acv}_{0}\) is due in part to variance in the coe procedure and other factors like having to estimate the optimal sample sizes. But even if the algorithm was deterministic and we knew the optimal sample sizes exactly, Theorem 5 tells us that the absolute coefficient of variation for the \(g\)-map should have a positive bias away from the desired value of \(\text{acv}_{0}\).
To study this effect more, a Monte Carlo experiment was set up to replicate the sensor data presented above. The only parameters we need to estimate for the experiment are \(g\), \(\mu_{\mathcal{D}}\), \(\sigma_{\mathcal{D}}\), and \(\mu_{e}\). To obtain a good estimate for \(g\) we use the definition of relative bias \(\text{RB}_{\mathcal{P}}G=(\mathsf{E}_{\mathcal{P}}G-g)/g\) to write the first moment of \(G\) as
\[\mathsf{E}_{\mathcal{P}}G=g(1+\text{RB}_{\mathcal{P}}G). \tag{81}\]
In the context of conversion gain estimation we expect the relative bias to be positive, i.e. \(\text{RB}_{\mathcal{P}}G=\text{ARB}_{\mathcal{P}}G\) and since the \(g\)-map, \(\mathbf{G}\), was computed using the optimal sample sizes we have according to Theorem 5: \(\text{ARB}_{\mathcal{P}}G_{\text{opt}}\sim\text{acv}_{0}^{2}+3\text{acv}_{0}^{4}\). This leads to the bias corrected estimate
\[\hat{g}=\text{mean}(\mathbf{G})/(1+\text{acv}_{0}^{2}+3\text{acv}_{0}^{4}) \tag{82}\]
Figure 4: Histogram of \(g\)-map values fit with the cif distribution (top) and \(g\)-map (bottom).
| | fit | sample | error (%) |
| --- | --- | --- | --- |
| \(\mathsf{E}_{\mathcal{P}}G\) | 2.1971 | 2.1972 | \(4.3508\times 10^{-3}\) |
| \(\sqrt{\text{var}_{\mathcal{P}}G}\) | 0.11069 | 0.11122 | 0.47739 |

Table 3: Comparison of \(g\)-map sample moments to fitted distribution pseudomoments.
with \(\text{acv}_{0}=0.05\). Table 4 presents the estimate for \(g\) along with the other estimates needed for the experiment. We note that these parameter estimates are used as the simulation parameters in Code 1 [30].
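A short sketch of how these estimates could be computed from the master frames and the \(g\)-map, following the estimator column of Table 4 and the bias correction (82) (array names are ours):

```python
import numpy as np

def mc_parameters(x_mean, y_mean, y_var, g_map, acv0=0.05):
    """Monte Carlo simulation parameters of Table 4 (illustrative sketch)."""
    g_hat = np.mean(g_map) / (1.0 + acv0 ** 2 + 3.0 * acv0 ** 4)  # bias-corrected (82)
    mu_dark = np.mean(y_mean) * g_hat           # mean dark level, e-
    sig_dark = np.sqrt(np.mean(y_var)) * g_hat  # dark noise, e-
    mu_e = np.mean(x_mean - y_mean) * g_hat     # mean photoelectron number
    return g_hat, mu_dark, sig_dark, mu_e
```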
With the parameter estimates in Table 4, Algorithm 1 was run a total of ten times using the design parameters \(\text{acv}_{0}=0.05\) and \(m=0.8\). To simulate the sensor data, \(512\times 512\text{px}\) dark images, \(\mathbf{Y}\), and illuminated images, \(\mathbf{X}\), were generated according to the model in Section 2 so that they contained elements of the form
\[\mathbf{Y}_{ij}=\left\lceil\mathcal{D}/\hat{g}\right\rfloor \tag{83}\] \[\mathbf{X}_{ij}=\left\lceil(\mathcal{P}+\mathcal{D})/\hat{g}\right\rfloor, \tag{84}\]
with \(\mathcal{P}\sim\mathcal{P}(\hat{\mu}_{e})\) and \(\mathcal{D}\sim\mathcal{N}(\hat{\mu}_{\mathcal{D}},\hat{\sigma}_{\mathcal{D}}^{2})\). Table 5 presents the results of the Monte Carlo experiment. We see that the sample sizes resulting from the simulated coe algorithm were very close to those from our actual experiment and have very little variance. This indicates that: (1) our assumed noise model is effective and (2) the difference between \(\text{acv}(\mathbf{G})\) and \(\text{acv}_{0}\) in our original experiment is unlikely to be due to uncertainty in the optimal sample size estimates. Furthermore, we see that the absolute coefficient of variation for the \(g\)-map created by the algorithm shows a distinct positive bias away from the desired value \(\text{acv}_{0}\), also with small variance. These results indicate that the difference between \(\text{acv}(\mathbf{G})\) and \(\text{acv}_{0}\) in our original experiment is a result of approximating optimal sample sizes for \(G\) with those of \(\hat{P}_{\delta}^{-1}\).
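For reference, synthetic frames of the form (83)-(84) can be generated as in the following sketch, where the quantization \(\left\lceil\cdot\right\rfloor\) is taken to be rounding to the nearest integer and the function name and random-generator handling are ours:

```python
import numpy as np

def simulate_frames(shape, g, mu_e, mu_dark, sig_dark, rng=None):
    """One synthetic illuminated frame X and dark frame Y per (83)-(84)."""
    rng = np.random.default_rng() if rng is None else rng
    dark_e = rng.normal(mu_dark, sig_dark, size=shape)   # D ~ N(mu_D, sigma_D^2), in e-
    signal_e = rng.poisson(mu_e, size=shape)             # P ~ Poisson(mu_e), in e-
    y = np.rint(dark_e / g)                              # dark frame, DN
    x = np.rint((signal_e + dark_e) / g)                 # illuminated frame, DN
    return x, y

# e.g. with the Table 4 estimates:
# x, y = simulate_frames((512, 512), g=2.1917, mu_e=350.03, mu_dark=92.858, sig_dark=13.853)
```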
## 7 Application: Per-pixel Read Noise Estimation
With a method for estimating per-pixel conversion gain, we can estimate downstream parameters such as the read noise \(\sigma_{\mathcal{R}}\) on a per-pixel basis. Recall that the random variable \(\mathcal{D}\) represents the dark noise of the sensor for some nonzero integration time. As such, the noise represented by \(\mathcal{D}\) contains the combined noise from read noise and dark current shot noise. To isolate the sensor read noise component of \(\mathcal{D}\), given by \(\mathcal{R}\sim\mathcal{N}(\mu_{\mathcal{R}},\sigma_{\mathcal{R}}^{2})\), we simply set the sensor integration time to \(t_{\text{exp}}=0\) sec (or the shortest allowable integration time), which eliminates dark current. Again, by linearity of the transfer function, the sensor output at zero illumination and zero integration time is \(R=\mathcal{T}(\mathcal{R})\) with
\[\mu_{R}\coloneqq\mathsf{E}R=\mu_{\mathcal{R}}/g \tag{85}\]
and
\[\sigma_{R}^{2}\coloneqq\text{Var}R=\sigma_{\mathcal{R}}^{2}/g^{2}. \tag{86}\]
Upon inspection of the expression for \(\sigma_{R}^{2}\) we find an equation for the read noise as
\[\sigma_{\mathcal{R}}=\sigma_{R}\times g. \tag{87}\]
We already have an estimator for \(g\) so all that is needed to estimate \(\sigma_{\mathcal{R}}\) is an estimator for \(\sigma_{R}\).
To estimate \(\sigma_{R}\) we let \(\{Z_{1},\ldots,Z_{n_{3}}\}\) denote a sequence of \(n_{3}\) i.i.d. observations of a pixel captured in the dark over a zero second integration time. Under the assumed normal model, \(Z_{k}\sim\mathcal{N}(\mu_{R},\sigma_{R}^{2})\); thus we can obtain an unbiased estimate of \(\sigma_{R}^{2}\) via the sample variance
\[\hat{Z}=\frac{1}{n_{3}-1}\sum_{k=1}^{n_{3}}(Z_{k}-\bar{Z})^{2}. \tag{88}\]
It follows that we can estimate \(\sigma_{R}\) via the sample standard deviation \(\tilde{Z}=\sqrt{\hat{Z}}\), although this estimate will no longer be unbiased. It is easy to show that \(\hat{Z}\sim\mathcal{G}(\alpha_{3},\beta_{3})\) with \(\alpha_{3}=(n_{3}-1)/2\) and \(\beta_{3}=\alpha_{3}/\sigma_{R}^{2}\), so that using the transformation \(\tilde{Z}=\sqrt{\hat{Z}}\) we find for the density of \(\tilde{Z}\)
\[f_{\tilde{Z}}(z)=\frac{2\beta_{3}^{\alpha_{3}}}{\Gamma(\alpha_{3})}z^{2\alpha_{3}-1}e^{-\beta_{3}z^{2}}, \tag{89}\]
which is the Nakagami distribution \(\tilde{Z}\sim\mathcal{N}a(\alpha_{3},\beta_{3})\). One finds for the moments of \(\tilde{Z}\)
\[\mathsf{E}\tilde{Z}^{n}=\frac{(\alpha_{3})_{n/2}}{\alpha_{3}^{n/2}}\sigma_{R}^{n}, \tag{90}\]
with \((s)_{n}\coloneqq\Gamma(s+n)/\Gamma(s)\) again denoting the Pochhammer symbol. Thus, an unbiased estimator for \(\sigma_{R}\) can be given by
\[\hat{Z}^{*}=\frac{\sqrt{\alpha_{3}}}{(\alpha_{3})_{1/2}}\sqrt{\hat{Z}}. \tag{91}\]
The read noise \(\sigma_{\mathcal{R}}\) of our pixel can therefore be estimated via
\[\hat{\mathcal{R}}=\hat{Z}^{*}\times G, \tag{92}\]
with the estimator \(G\) given by (31). Since \(\hat{Z}^{*}\) is computed from the sample \(\mathbf{Z}\), which is independent of the samples \(\mathbf{X}\) and \(\mathbf{Y}\) used to calculate \(G\), we further find for the density of \(\hat{\mathcal{R}}\)
\[f_{\hat{\mathcal{R}}}(r)=\int_{0}^{\infty}f_{\hat{Z}^{*}}(t)f_{G}(r/t)\frac{\text{d}t}{t}, \tag{93}\]
with \(f_{G}\) again being the cif distribution. We shall simply refer to \(f_{\hat{\mathcal{R}}}\) as the Read Noise (rn) distribution. Likewise, we have for the pseudomoments of \(\hat{\mathcal{R}}\)
\[\mathsf{E}_{\mathcal{P}}\hat{\mathcal{R}}^{n}=\frac{(\alpha_{3})_{n/2}}{( \alpha_{3})_{1/2}^{n}}\sigma_{R}^{n}\,\mathsf{E}_{\mathcal{P}}G^{n}. \tag{94}\]
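A per-pixel implementation of (88), (91), and (92) might look like the following sketch; the Pochhammer symbol \((\alpha_{3})_{1/2}\) is evaluated through log-gamma functions for numerical stability, and the array layout is an assumption of ours.

```python
import math
import numpy as np

def read_noise_map(z_frames, g_map):
    """Bias-corrected per-pixel read noise estimate (92) in electrons."""
    z = np.asarray(z_frames, dtype=float)        # shape (n3, rows, cols)
    n3 = z.shape[0]
    alpha3 = (n3 - 1) / 2.0
    s = np.sqrt(np.var(z, axis=0, ddof=1))       # sqrt of the sample variance (88)
    # (alpha3)_{1/2} = Gamma(alpha3 + 1/2) / Gamma(alpha3)
    poch_half = math.exp(math.lgamma(alpha3 + 0.5) - math.lgamma(alpha3))
    z_star = math.sqrt(alpha3) / poch_half * s   # unbiased estimator (91) of sigma_R
    return z_star * g_map                        # sigma_R-map, per (92)
```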
To experimentally measure the per-pixel read noise of our sensor, we again employed an iterative algorithm. In each iteration of this algorithm a master \(\hat{Z}^{*}\)-frame was updated with a new \(\mathbf{Z}\)-frame using Welford's algorithm, which was then multiplied, per-pixel, by the \(g\)-map estimated in the previous section to produce the \(\sigma_{\mathcal{R}}\)-map. The algorithm was stopped when
| parameter | estimate | value |
| --- | --- | --- |
| \(\hat{g}\) | \(\text{mean}(\mathbf{G})/(1+\text{acv}_{0}^{2}+3\text{acv}_{0}^{4})\) | 2.1917 |
| \(\hat{\mu}_{\mathcal{D}}\) | \(\text{mean}(\bar{\mathbf{Y}})\times\hat{g}\) | 92.858 |
| \(\hat{\sigma}_{\mathcal{D}}\) | \(\sqrt{\text{mean}(\hat{\mathbf{Y}})}\times\hat{g}\) | 13.853 |
| \(\hat{\mu}_{e}\) | \(\text{mean}(\bar{\mathbf{X}}-\bar{\mathbf{Y}})\times\hat{g}\) | 350.03 |

Table 4: Parameter estimates for Monte Carlo experiment.
| | \(\text{mean}(\cdot)\) | \(\text{var}(\cdot)\) | target |
| --- | --- | --- | --- |
| \(n_{1}\) | 2606.9 | 0.1 | 2602 |
| \(n_{2}\) | 923.9 | 0.1 | 921 |
| \(\text{acv}(\mathbf{G})\) | 0.050279 | \(5.0328\times 10^{-9}\) | 0.050409 (Thm. 5) |

Table 5: Results of Monte Carlo experiment for ten runs.
the sample absolute coefficient of variation for the \(\sigma_{\mathcal{R}}\)-map satisfied \(\operatorname{acv}(\hat{\mathcal{R}}_{\text{map}})\leq 1.05\operatorname{acv}( \mathbf{G})\). This algorithm halted after \(n_{3}=2291\) Z-frames were captured.
Figure 5 presents the \(\sigma_{\mathcal{R}}\)-map generated from this procedure along with its histogram fit with the rn distribution of (93). Unlike the \(g\)-map, the \(\sigma_{\mathcal{R}}\)-map does show some column-wise nonuniformities, which are linked to the interline transfer CCD architecture of the sensor. However, these nonuniformities are not severe, as indicated by how well the rn distribution fits the data. What this demonstrates is that per-pixel maps allow further insight into how sensor architecture affects the uniformity of key performance parameters across the sensor array.
## 8 Conclusions
In this work we have presented a general method for sample size determination of the photon transfer conversion gain measurement given a desired uncertainty requirement. So long as this uncertainty requirement is small and sensor dark noise is greater than \(5\,e\)-, this method of determining optimal sample sizes works across the full dynamic range of a sensor. Additionally, we have developed analytical expressions for the moments of the conversion gain sampling distribution (the cif distribution) through the use of pseudomoments, showing that these pseudomoments accurately describe the sampling moments of conversion gain data under the proposed sensor noise model. With these theoretical results, we were able to construct simple design and control of experiment procedures that guide the number of samples required for both dark and illuminated conditions based on iterative statistics and predicted convergence. These experimental procedures were executed on a real image sensor, the results of which agreed with our theoretical predictions and were further confirmed through Monte Carlo simulation.
The ability to optimally measure per-pixel conversion gain is a key development in a more comprehensive approach to per-pixel photon transfer characterization. We have already shown how per-pixel conversion gain maps enable per-pixel read noise estimation and we plan to extend this approach to measure other important PT parameters such as well-capacity and dynamic range on a per-pixel basis. The additional information that comes from these per-pixel maps provides a richer characterization of the sensor and opens up the idea of assigning quality metrics to a sensor based on the uniformity of its per-pixel maps.
## 9 Proofs
Proof of Lemma 1.: By the law of total variance
\[\operatorname{Var}T=(\mathsf{E}X^{2})\operatorname{Var}Y+(\mathsf{E}Y)^{2} \operatorname{Var}X. \tag{95}\]
Substituting \(\mathsf{E}X^{2}=\operatorname{Var}X+(\mathsf{E}X)^{2}\), expanding, and dividing both sides by \((\mathsf{E}T)^{2}=(\mathsf{E}X)^{2}(\mathsf{E}Y)^{2}\) gives the desired result.
Proof of Theorem 1.: Letting \(T=G\), \(X=\bar{P}\), and \(Y=\hat{P}^{-1}\) we have after dividing both sides of the relation in Lemma 1 by \(\operatorname{ACV}^{2}\hat{P}^{-1}\) and combining with
\[\operatorname{ACV}^{2}\bar{P}=\frac{1}{n}\frac{\sigma_{P}^{2}}{\mu_{P}^{2}}=\frac{1}{n}\frac{1}{\mu_{P}g}=\frac{1}{n}\frac{1}{\mu_{e}}. \tag{96}\]
and \(\operatorname{ACV}^{2}\hat{P}^{-1}=2/(n-5)\):
\[\frac{\operatorname{ACV}G}{\operatorname{ACV}\hat{P}^{-1}}=\left(1+\frac{n-3}{2n}\frac{1}{\mu_{e}}\right)^{1/2}. \tag{97}\]
The result then follows from noting that \(\sqrt{1+a/x}=1+a/(2x)+\mathcal{O}(x^{-2})\) as \(x\to\infty\).
Proof of Lemma 2.: The relation for \(\sigma_{P}^{2}\) is derived by combining \(\sigma_{P}^{2}=\sigma_{P+D}^{2}-\sigma_{D}^{2}\) and \(\sigma_{P+D}^{2}=\sigma_{D}^{2}/\zeta\). To derive the relationship for \(\mu_{P}\) we use the fundamental gain relation in (30) to write \(\mu_{P}=\sigma_{P}^{2}g\).
Proof of Lemma 3.: From (58) we observe \(\operatorname{ACV}^{2}\bar{P}(N-n_{2},n_{2})\) is strictly convex on \(n_{2}\in(0,N)\) for all \(\zeta\in(0,1)\); thus, the optimal sample sizes for \(\bar{P}\) satisfy
\[\begin{split}\partial_{n_{2}}\operatorname{ACV}^{2}\bar{P}(N-n_ {2},n_{2})\Big{|}_{N=n_{1}^{\text{opt}}+n_{2}^{\text{opt}},n_{2}=n_{2}^{\text{ opt}}}=0,\\ \operatorname{ACV}^{2}\bar{P}(n_{1}^{\text{opt}},n_{2}^{\text{ opt}})=\operatorname{acv}_{0}^{2}.\end{split} \tag{98}\]
The first equation gives the optimality relation \(n_{2}^{\text{opt}}/n_{1}^{\text{opt}}=\sqrt{\zeta}\), which proves \(n_{2}^{\text{opt}}/n_{1}^{\text{opt}}\to 1\) as \(\zeta\to 1^{-}\). Solving the system of
Figure 5: Histogram of \(\sigma_{\mathcal{R}}\)-map values fit with the rn distribution (top) and the \(\sigma_{\mathcal{R}}\)-map (bottom).
equations then gives
\[(n_{1}^{\text{opt}},n_{2}^{\text{opt}})=\left(\frac{1}{\sigma_{\mathcal{D}}^{2} \operatorname{acv}_{0}^{2}}\frac{\zeta(1+\sqrt{\zeta})}{(1-\zeta)^{2}},\frac{1 }{\sigma_{\mathcal{D}}^{2}\operatorname{acv}_{0}^{2}}\frac{\zeta(\sqrt{\zeta}+ \zeta)}{(1-\zeta)^{2}}\right). \tag{99}\]
We now see that \(n_{2}^{\text{opt}}<n_{1}^{\text{opt}}\), which allows us to deduce the stronger result \(n_{2}^{\text{opt}}/n_{1}^{\text{opt}}\to 1^{-}\) as \(\zeta\to 1^{-}\). Then, working directly with the expressions for the optimal sample sizes, it is straightforward to show \(n_{i}^{\text{opt}}\sim 2/(\sigma_{\mathcal{D}}^{2}\operatorname{acv}_{0}^{2})(1-\zeta)^{-2}\), which completes the proof.
Proof of Lemma 4.: Upon inspection of (59) we see \(\operatorname{ACV}^{2}\hat{P}_{\delta}^{-1}(N-n_{2},n_{2})\) is strictly convex on \(n_{2}\in(1,N-1)\) for all \(\zeta\in(0,1)\); thus, its optimal sample sizes satisfy
\[\begin{split}\partial_{n_{2}}\operatorname{ACV}^{2}\hat{P}_{ \delta}^{-1}(N-n_{2},n_{2})\Big{|}_{N=n_{1}^{\text{opt}}+n_{2}^{\text{opt}},n_ {2}=n_{2}^{\text{opt}}}&=0,\\ \operatorname{ACV}^{2}\hat{P}_{\delta}^{-1}(n_{1}^{\text{opt}},n_ {2}^{\text{opt}})&=\operatorname{acv}_{0}^{2}.\end{split} \tag{100}\]
Equating the derivative with zero we find for optimality relation \(1/(n_{1}^{\text{opt}}-1)=\zeta/(n_{2}^{\text{opt}}-1)\), which upon substituting into the second equation gives us \(n_{1}^{\text{opt}}\). Substituting the solution for \(n_{1}^{\text{opt}}\) back into the optimality relation then gives \(n_{2}^{\text{opt}}\). The proof is now complete.
**Remark 2**.: _Suppose we define \(\hat{P}^{-1}\) in terms of the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) and consider what happens as we pass to the shot-noise-limit. Denoting_
\[\hat{P}_{\text{opt}}^{-1}\coloneqq\frac{1}{\hat{X}(n_{1}^{\text{opt}})-\hat{Y }(n_{2}^{\text{opt}})}, \tag{101}\]
_we see as \(\zeta\to 0^{+}\) two simultaneous events occurring: (1) \(n_{2}^{\text{opt}}\to 1\), which implies \(\hat{Y}(n_{2}^{\text{opt}})\xrightarrow{d}\delta(0)\) and (2) \(\sigma_{p+D}\to\sigma_{p}\) so that \(\hat{X}\xrightarrow{d}\mathcal{G}(\alpha_{1},\alpha_{1}/\sigma_{p}^{2})\) with \(\alpha_{1}=(n_{1}^{\text{opt}}(0)-1)/2\). In conclusion,_
\[\lim_{\zeta\to 0^{+}}\hat{P}_{\text{opt}}^{-1}\triangleq\frac{1}{\hat{X}(n_{1}^{ \text{opt}}(0))} \tag{102}\]
_with the r.h.s. being the shot-noise-limited estimator of \(1/\sigma_{p}^{2}\) in Section 3 computed from \(n_{1}^{\text{opt}}(0)\) observations. Comparing the shot-noise-limited optimal sample sizes for this estimator in (25) with \(n_{1}^{\text{opt}}(0)=2/\operatorname{acv}_{0}^{2}+1\) we see that we can force our optimal sample sizes to be exact in the shot-noise-limit by instead using_
\[(n_{1}^{\text{opt}},n_{2}^{\text{opt}})=\left(\frac{2(1+\zeta)}{\operatorname{ acv}_{0}^{2}(1-\zeta)^{2}}+5,\frac{2\zeta(1+\zeta)}{\operatorname{acv}_{0}^{2}(1- \zeta)^{2}}+1\right). \tag{103}\]
**Lemma 5**.: _As \(|z|\to\infty\)_
\[\sqrt{2}\,\mathcal{D}(z/\sqrt{2})\sim\frac{1}{z}\sum_{k=0}^{\infty}(2k-1)!! \frac{1}{z^{2k}}. \tag{104}\]
Proof of Lemma 5.: The proof follows from combining the relation
\[\mathcal{D}(z)=\frac{\sqrt{\pi}}{2}e^{-z^{2}}\operatorname{erfi}(z) \tag{105}\]
with the asymptotic expansion for \(|z|\to\infty\)
\[\operatorname{erfi}(z)\sim\operatorname{sgn}(\Im z)\,i+\frac{1}{\sqrt{\pi}z}e^{z^{2}}\sum_{k=0}^{\infty}\frac{(1/2)_{k}}{z^{2k}}, \tag{106}\]
and the relation between the Pochhammer symbol and double factorial \((2k-1)!!=2^{k}(1/2)_{k}\).
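As a quick numerical sanity check of (104) (not part of the proof), the truncated series can be compared with \(\sqrt{2}\,\mathcal{D}(z/\sqrt{2})\) evaluated via SciPy's Dawson function; for moderately large \(z\) the two agree to many significant figures.

```python
import numpy as np
from scipy.special import dawsn

def dawson_series(z, terms=5):
    """Truncated right-hand side of (104); (2k-1)!! is built up iteratively."""
    total, dfact = 0.0, 1.0            # (2*0 - 1)!! = 1
    for k in range(terms):
        total += dfact / z ** (2 * k)
        dfact *= 2 * k + 1             # (2(k+1) - 1)!! = (2k+1) * (2k-1)!!
    return total / z

z = 20.0
print(np.sqrt(2.0) * dawsn(z / np.sqrt(2.0)), dawson_series(z))
```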
Proof of Theorem 2.: As \(\operatorname{acv}_{0}\to 0^{+}\), \(n_{1}^{\text{opt}}\to\infty\) and \(n_{2}^{\text{opt}}\to\infty\) so that we have by the central limit theorem \(\hat{P}_{\text{opt}}\xrightarrow{d}\mathcal{N}(\mu_{\hat{P}_{\text{opt}}}, \sigma_{\hat{P}_{\text{opt}}}^{2})\). Noting that \(\operatorname{ACV}\hat{P}_{\text{opt}}=\operatorname{ACV}\hat{P}_{\delta,\text {opt}}^{-1}=\operatorname{acv}_{0}\) we have after combining (50) with (43)
\[\operatorname{ARB}_{\mathcal{P}}\hat{P}_{\text{opt}}^{-1}\to\left|\frac{\sqrt{2 }}{\operatorname{acv}_{0}}\mathcal{D}\left(\frac{1}{\sqrt{2}\operatorname{ acv}_{0}}\right)-1\right|. \tag{107}\]
Since \(\operatorname{acv}_{0}\) is small, we subsequently have according to Lemma 5
\[\operatorname{ARB}_{\mathcal{P}}\hat{P}_{\text{opt}}^{-1}\sim\operatorname{acv}_ {0}^{2}\sum_{k=0}^{\infty}(2k+1)!!\operatorname{acv}_{0}^{2k}, \tag{108}\]
which leads to the desired result for \(\operatorname{ARB}_{\mathcal{P}}\hat{P}_{\text{opt}}^{-1}\).
By the same line of reasoning we combine (48) and (43)
\[\operatorname{ACV}_{\mathcal{P}}^{2}\hat{P}_{\text{opt}}^{-1}\to\frac{\operatorname {ARB}_{\mathcal{P}}\hat{P}_{\text{opt}}^{-1}}{(\sqrt{2}\,\mathcal{D}( \operatorname{acv}_{0}^{-1}/\sqrt{2}))^{2}}-1. \tag{109}\]
Using standard results for the products and multiplicative inverses of power series we write with the help of Lemma 5
\[(\sqrt{2}\,\mathcal{D}(\operatorname{acv}_{0}^{-1}/\sqrt{2}))^{-2}\sim\frac{1}{\operatorname{acv}_{0}^{2}}\sum_{k=0}^{\infty}b_{k}\operatorname{acv}_{0}^{2k}, \tag{110}\]
with \(b_{0}=1\), \(b_{k}=-\sum_{i=1}^{k}a_{i}b_{k-i}\) and
\[a_{i}=\sum_{j=0}^{i}(2(i-j)-1)!!\,(2j-1)!!. \tag{111}\]
It follows
\[\operatorname{ACV}_{\mathcal{P}}^{2}\hat{P}_{\text{opt}}^{-1}\sim\sum_{k=1}^{ \infty}\left(\sum_{\ell=0}^{k}(2(k-\ell)+1)!!\,b_{\ell}\right)\operatorname{acv}_{0 }^{2k}, \tag{112}\]
which upon combining with \(\sqrt{1+ax^{2}+\mathcal{O}(x^{4})}=1+ax^{2}/2+\mathcal{O}(x^{4})\) as \(x\to 0\) yields the desired asymptotic result for \(\operatorname{ACV}_{\mathcal{P}}\hat{P}_{\text{opt}}^{-1}\).
Proof of Theorem 3.: The proof follows much in the same way as that for Theorem 1. We write
\[\frac{\operatorname{ACV}G_{\delta}}{\operatorname{ACV}\hat{P}_{\delta}^{-1}}=\left(1+\operatorname{ACV}^{2}\bar{P}+(\operatorname{ACV}^{2}\bar{P})(\operatorname{ACV}^{-2}\hat{P}_{\delta}^{-1})\right)^{1/2}. \tag{113}\]
Combining this with the asymptotic approximations
\[\operatorname{ACV}^{2}\bar{P}=\frac{1}{n_{1}\sigma_{\mathcal{D}}^{2}}\zeta+\mathcal{O}(\zeta^{2}), \tag{114}\]
\[\operatorname{ACV}^{-2}\hat{P}_{\delta}^{-1}=\frac{n_{1}-1}{2}+(n_{1}-1)\zeta+ \mathcal{O}(\zeta^{2}), \tag{115}\]
and \(\sqrt{1+ax+\mathcal{O}(x^{2})}=1+ax/2+\mathcal{O}(x^{2})\) as \(x\to 0\) gives the desired result.
Proof of Theorem 4.: The results of Lemma 3 and Proposition 1 show that as \(\zeta\to 1^{-}\), the optimal sample sizes for \(\hat{P}_{\delta}^{-1}\) and \(\bar{P}\) are asymptotically equal and of the form \(n_{i}^{\text{opt}}\sim C(1-\zeta)^{-2}\). As such, the optimal sample sizes for \(G_{\delta}\) must also be asymptotically equal and of the form \(n_{i}^{\text{opt}}\sim C_{G_{\delta}}(1-\zeta)^{-2}\) for some constant \(C_{G_{\delta}}\).
To determine \(C_{G_{\delta}}\) we substitute \(n_{1}=n_{2}=C_{G_{\delta}}(1-\zeta)^{-2}\) into the expression for \(\text{ACV}^{2}G_{\delta}\), pass to the limit, and equate with \(\text{acv}_{0}^{2}\), yielding
\[\frac{2}{\sigma_{\mathcal{D}}^{2}}\frac{1}{C_{G_{\delta}}}+\frac{8}{\sigma_{\mathcal{D}}^{2}}\frac{1}{C_{G_{\delta}}^{2}}+4\frac{1}{C_{G_{\delta}}}=\text{acv}_{0}^{2}. \tag{117}\]
The resulting expression is a quadratic equation in \(C_{G_{\delta}}\), the roots of which are real and differ in sign. Taking the positive root then yields the expression for \(C_{G_{\delta}}\).
Proof of Theorem 5.: Since \(\text{ARB}_{\mathcal{P}}G=\text{ARB}_{\mathcal{P}}\hat{P}^{-1}\), the result for \(\text{ARB}_{\mathcal{P}}G_{\text{opt}}\) immediately follows from Theorem 2. To obtain the result for \(\text{ACV}_{\mathcal{P}}G_{\text{opt}}\), combine
\[\text{ACV}_{\mathcal{P}}^{2}G=\text{ACV}_{\mathcal{P}}^{2}\hat{P}^{-1}+(\text{ACV}_{\mathcal{P}}^{2}\hat{P}^{-1})(\text{ACV}^{2}\bar{P})+\text{ACV}^{2}\bar{P} \tag{118}\]
with
\[\text{ACV}_{\mathcal{P}}^{2}\hat{P}_{\text{opt}}^{-1} =\text{acv}_{0}^{2}+6\,\text{acv}_{0}^{4}+\mathcal{O}(\text{acv}_{0}^{6}) \tag{119}\] \[\text{ACV}^{2}\bar{P}_{\text{opt}} =a\,\text{acv}_{0}^{2}+b\,\text{acv}_{0}^{4}+\mathcal{O}(\text{acv}_{0}^{6})\]
and \(\sqrt{1+ax^{2}+\mathcal{O}(x^{4})}=1+ax^{2}/2+\mathcal{O}(x^{4})\) as \(x\to 0\).
## 10 Backmatter
Funding. The authors report no funding for this work.
Acknowledgments. The authors would like to thank Nico Schlömer for his matlab2tikz function used to create the figures throughout this work [34]. The authors would also like to thank Paul Enta for pointing us to reference [15] on square series generating functions.
Disclosures. The authors declare no conflicts of interest.
Data availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
Supplemental document. Code for the DOE and COE procedures is available on figshare (Ref. [30, 31]).
|
2307.14037 | On the Bloch eigenvalues and spectrum of the differential operators of
odd order | In this paper we consider the Bloch eigenvalues and spectrum of the
non-self-adjoint differential operator L generated by the differential
expression of odd order n with the periodic PT-symmetric coefficients, where
n>1. We study the localizations of the Bloch eigenvalues and the structure of
the spectrum. Moreover, we find conditions on the norm of the coefficients
under which the spectrum of L coincides with the real line | O. A. Veliev | 2023-07-26T08:40:49Z | http://arxiv.org/abs/2307.14037v1 | # On the Bloch eigenvalues and spectrum of the differential operators of odd order
###### Abstract
In this paper we consider the Bloch eigenvalues and spectrum of the non-self-adjoint differential operator \(L\) generated by the differential expression of odd order \(n\) with the periodic PT-symmetric coefficients, where \(n>1.\) We study the localizations of the Bloch eigenvalues and the structure of the spectrum. Moreover, we find conditions on the norm of the coefficients under which the spectrum of \(L\) coincides with the real line.
Key Words: PT-symmetric coefficients, Bloch eigenvalues, Spectrum.
AMS Mathematics Subject Classification: 34L05, 34L20.
## 1 Introduction
Let \(L\) be the differential operator generated in the space \(L_{2}(-\infty,\infty)\) by the differential expression
\[l(y)=(-i)^{n}y^{(n)}(x)+\underset{v=2}{\overset{n}{\sum}}(-i)^{n-v}p_{v}(x)y^ {(n-v)}(x), \tag{1}\]
where \(n\) is an odd integer greater than \(1\) and \(p_{v}\) for \(v=2,3,\ldots,n\) are \(1\)-periodic PT-symmetric functions satisfying \((p_{v})^{(n-v)}\in L_{2}\left[0,1\right]\). It is well-known that (see [4, 5]) the spectrum \(\sigma(L)\) of the operator \(L\) is the union of the spectra of the operators \(L_{t}\) for \(t\in(-1,1]\) generated in \(L_{2}\left[0,1\right]\) by (1) and the boundary conditions
\[y^{(\nu)}\left(1\right)=e^{i\pi t}y^{(\nu)}\left(0\right) \tag{2}\]
for \(\nu=0,1,...,(n-1).\) The spectra \(\sigma(L_{t})\) of the operators \(L_{t}\) consist of the eigenvalues called the Bloch eigenvalues of \(L.\)
The operators \(L\) and \(L_{t}\) are denoted by \(L(0)\) and \(L_{t}(0)\) if \(p_{2},p_{3},\ldots,p_{n}\) are the zero functions. It is clear that \(\left(2\pi k+\pi t\right)^{n}\) and \(e^{i\pi(2k+t)x}\) for \(k\in\mathbb{Z}\) are respectively the eigenvalues and eigenfunctions of \(L_{t}(0).\) The numbers \(\left(2\pi k+\pi t\right)^{n}\) for \(k\in\mathbb{Z}\) are simple eigenvalues of \(L_{t}(0)\), and the set of all Bloch eigenvalues of \(L(0)\) covers the real axis exactly once.
In [10] we proved that if the coefficients of (1) are the \(m\times m\) matrices with the PT-symmetric entries and \(m\) is an odd number, then \(\mathbb{R}\subset\)\(\sigma(L).\) In this paper
we consider the case \(m=1\) in detail and prove that the nonreal part \(\sigma(L)\backslash\mathbb{R}\) of \(\sigma(L)\) is contained in the rectangle
\[\left\{\lambda\in\mathbb{C}:\left|\operatorname{Re}\lambda\right|\leq(2\pi N)^ {n},\ \left|\operatorname{Im}\lambda\right|<\frac{\sqrt{10}}{3}\left(2N+1\right)^{n-3 /2}\pi^{n-2}C\right\}, \tag{3}\]
where \(N\) is the smallest integer satisfying \(N\geq\pi^{-2}C+1\) and
\[C=\underset{v=2}{\overset{n}{\sum}}\underset{s=0}{\overset{n-v}{\sum}} \frac{(n-v)!\left\|\left(p_{v}\right)^{(s)}\right\|}{s!(n-v-s)!\pi^{v+s-2}}.\]
Moreover, we prove that if \(C\leq\ \pi^{2}2^{-n+1/2}\), then \(\sigma(L)=\mathbb{R}\).
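For a concrete set of coefficients these quantities are easy to evaluate; the following Python sketch computes \(C\), the integer \(N\), and checks the sufficient condition \(C\leq\pi^{2}2^{-n+1/2}\). The dictionary layout for the norms \(\|(p_{v})^{(s)}\|\) is an assumption of ours, made only for illustration.

```python
import math

def constant_C(n, norms):
    """C = sum_{v=2}^n sum_{s=0}^{n-v} (n-v)! ||p_v^(s)|| / (s!(n-v-s)! pi^{v+s-2}),
    with norms[(v, s)] holding the L2[0,1] norm of the s-th derivative of p_v."""
    C = 0.0
    for v in range(2, n + 1):
        for s in range(0, n - v + 1):
            C += (math.factorial(n - v) * norms[(v, s)]
                  / (math.factorial(s) * math.factorial(n - v - s) * math.pi ** (v + s - 2)))
    return C

def spectrum_check(n, norms):
    """Return C, the smallest integer N >= C/pi^2 + 1, and whether C <= pi^2 * 2^{-n+1/2}."""
    C = constant_C(n, norms)
    N = math.ceil(C / math.pi ** 2 + 1.0)
    return C, N, C <= math.pi ** 2 * 2.0 ** (-n + 0.5)
```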
Note that the obtained results and the methods of the investigations for the odd case (\(n=2v+1\)) essentially differ from the results and the methods of the investigations for the even case (\(n=2v\)). The general even case is similar to the case \(n=2\) (Schrödinger operator). There are a large number of papers for the Schrödinger operator (see the monographs [1, Chapters 4 and 6] and [8, Chapters 3 and 5] and the papers they refer to). The results and the method used in this paper are completely different from the results and methods of those papers. That is why, and in order not to deviate from the purpose of this paper, we do not discuss them here in detail. We only note that, in the first paper [2] about the PT-symmetric periodic potential, the disappearance of real energy bands for some complex-valued PT-symmetric periodic potentials was reported. In [6] it was shown that the disappearance of such real energy bands implies the existence of nonreal band spectra. In [7], I proved that the main part of the spectrum of \(L\) is real and contains a large part of \([0,\infty)\). However, in general, the spectrum also contains infinitely many nonreal arcs.
## 2 Main Results
First we investigate the localizations of the eigenvalues of \(L_{t}\). This investigation is similar to Section 2 of [9], where the self-adjoint case was investigated. We prove that if \(\left|k\right|\geq\pi^{-2}C+1\), then the \(\delta_{k}(t):=\frac{3}{2}\pi^{n-2}C\left|(2k+t)\right|^{n-2}\) neighborhood
\[U(k,t)=\{\lambda\in\mathbb{C}:\left|\lambda-(2\pi k+\pi t)^{n}\right|<\delta_ {k}(t)\} \tag{4}\]
of the eigenvalue \((2\pi k+\pi t)^{n}\) of \(L_{t}(0)\) contains only one eigenvalue of \(L_{t}\), where \(C\) is defined in (3). To prove this statement we use the following formulas. Let \(\lambda(k,t,\varepsilon)\) be the eigenvalue of the operator \(L_{t,\varepsilon}:=L_{t}(0)+\varepsilon(L_{t}-L_{t}(0))\) satisfying \(\operatorname{Re}\left(\lambda(k,t,\varepsilon)\right)\in I(k,t)\) and \(\Psi_{\lambda(k,t,\varepsilon)}\) be a normalized eigenfunction of \(L_{t}\), corresponding to the eigenvalue \(\lambda(k,t,\varepsilon)\), where \(\varepsilon\in[0,1]\) and
\[I(k,t)=[(2\pi k+\pi t-\pi)^{n},(2\pi k+\pi t+\pi)^{n}). \tag{5}\]
Sometimes, for brevity, instead of \(\Psi_{\lambda(k,t,\varepsilon)}\) and \(\lambda(k,t,\varepsilon)\) we write \(\Psi_{\lambda}\) and \(\lambda\).
Multiplying the equation \(L_{t,\varepsilon}\Psi_{\lambda}=\lambda\Psi_{\lambda}\) by \(e^{i(2\pi k+\pi t)x}\) and using the equality
\[L_{t}(0)e^{i(2\pi k+\pi t)x}=\left(2\pi k+\pi t\right)^{n}e^{i(2\pi k+\pi t)x}\]
we get
\[\left(\lambda-(2\pi k+\pi t)^{n}\right)\left(\Psi_{\lambda},e^{i(2\pi k+\pi t )x}\right)=\varepsilon\sum\limits_{\nu=2}^{n}(p_{v}\Psi_{\lambda}^{(n-v)},e^ {i(2\pi k+\pi t)x}), \tag{6}\]
where \((\cdot,\cdot)\) is the inner product in \(L_{2}\left[0,1\right].\) To prove that \(\lambda(k,t,\varepsilon)\in U(k,t)\) we estimate the right side of (6) and \(\left(\Psi_{\lambda},e^{i(2\pi k+\pi t)x}\right)\). First let us estimate the right hand side of (6). For this we use the integration by parts formula and obtain
\[(p_{v}\Psi_{\lambda(k,t,\varepsilon)}^{(n-v)},e^{i(2\pi k+\pi t)x})=(\Psi_{ \lambda(k,t,\varepsilon)},\left(\overline{p_{v}}e^{i(2\pi k+\pi t)x}\right)^ {(n-v)}). \tag{7}\]
If \(k\in\mathbb{Z}\backslash\left\{0\right\},\) then by direct calculations one can easily verify that
\[\left\|\left(\overline{p_{v}}e^{i(2\pi k+\pi t)x}\right)^{(n-v)}\right\|\leq \sum\limits_{s=0}^{n-v}\frac{(n-v)!\left|2\pi k+\pi t\right|^{n-v-s}\left\|p_{ v}^{(s)}\right\|}{s!(n-v-s)!}. \tag{8}\]
Using (7), (8), Schwarz's inequality and equality \(\left\|\Psi_{\lambda(k,t,\varepsilon)}\right\|=1\) we get
\[\left|(p_{v}\Psi_{\lambda(k,t,\varepsilon)}^{(n-v)},e^{i(2\pi k+\pi t)x}) \right|\leq\sum\limits_{s=0}^{n-v}\frac{(n-v)!\left|2\pi k+\pi t\right|^{n-v-s }\left\|p_{v}^{(s)}\right\|}{s!(n-v-s)!}\leq\]
\[\left|2\pi k+\pi t\right|^{n-2}\sum\limits_{s=0}^{n-v}\frac{(n-v)!\left\|(p_{ v})^{(s)}\right\|}{s!(n-v-s)!\left|2\pi k+\pi t\right|^{v+s-2}}.\]
Therefore, if \(k\neq 0,\) then
\[\left|\varepsilon\sum\limits_{\nu=2}^{n}(p_{v}\Psi_{\lambda(k,t,\varepsilon) }^{(n-v)},e^{i(2\pi k+\pi t)x})\right|\leq\left|2\pi k+\pi t\right|^{n-2}C \tag{9}\]
for all \(\varepsilon\in[0,1]\) and \(t\in(-1,1]\), where \(C\) is defined in (3). In case \(k=0\) we have
\[\left|\varepsilon\sum\limits_{\nu=2}^{n}(p_{v}\Psi_{\lambda(k,t,\varepsilon) }^{(n-v)},e^{i(\pi t)x})\right|\leq\pi^{n-2}C \tag{10}\]
for all \(\varepsilon\in[0,1]\) and \(t\in(-1,1].\)
Let us now show that the estimate of \(\left(\Psi_{\lambda},e^{i(2\pi k+\pi t)x}\right)\) can be obtained by repeating the calculations performed in [9] for the estimate (11) of \(\left(\Psi_{\lambda},\varphi_{k,t}\right).\) We consider the case \(k>0.\) The case \(k<0\) can be considered in the same way. It follows from the definition of \(\lambda(k,t,\varepsilon)\) and (5) that
\[\left|\lambda(k,t,\varepsilon)-(2\pi p+\pi t)^{n}\right|>\left|(2\pi k+\pi t+ \pi)^{n}-(2\pi p+\pi t)^{n}\right|\]
for \(p>k\) and
\[|\lambda(k,t,\varepsilon)-(2\pi p+\pi t)^{n}|\geq|(2\pi k+\pi t-\pi)^{n}-(2\pi p+ \pi t)^{n}|\]
for \(p<k\). Using these inequalities and the relations obtained from (6) and (9) by replacing \(k\) with \(p\neq 0\) we get
\[\left|\left(\Psi_{\lambda(k,t,\varepsilon)},e^{i(2\pi p+\pi t)x}\right)\right| ^{2}<\frac{\pi^{-4}C^{2}\left(|2p+t|^{n-2}\right)^{2}}{\left((2k+t+1)^{n}-(2p +t)^{n}\right)^{2}} \tag{11}\]
for \(p>k\) and
\[\left|\left(\Psi_{\lambda(k,t,\varepsilon)},e^{i(2\pi p+\pi t)x}\right)\right| ^{2}\leq\frac{\pi^{-4}C^{2}\left(|2p+t|^{n-2}\right)^{2}}{\left((2k+t-1)^{n}-( 2p+t)^{n}\right)^{2}} \tag{12}\]
for \(p<k.\) In case \(p=0,\) instead of (9) using (10) we get the formulas
\[\left|\left(\Psi_{\lambda(k,t,\varepsilon)},e^{i\pi tx}\right)\right|^{2}< \frac{\pi^{-4}C^{2}}{\left((2k+t+1)^{n}-t^{n}\right)^{2}} \tag{13}\]
for \(k<0\) and
\[\left|\left(\Psi_{\lambda(k,t,\varepsilon)},e^{i\pi tx}\right)\right|^{2}\leq \frac{\pi^{-4}C^{2}}{\left((2k+t-1)^{n}-t^{n}\right)^{2}} \tag{14}\]
for \(k>0\) instead of (11) and (12). Note that formulas (11)-(14) coincides with the formulas (18)-(21) of [9] if \(C\) and \(e^{i(2\pi p+\pi t)x}\) are replaced by \(M\) and \(\varphi_{p,t}.\) Therefore, instead of (18)-(21) of [9] using (11)-(14) and repeating the proof of (11) of [9] we get
\[\left|\left(\Psi_{\lambda(k,t,\varepsilon)},e^{i(2\pi k+\pi t)x}\right)\right| >\frac{2}{3}. \tag{15}\]
Thus, instead of (11), (8) and (9), (10) of [9] using respectively (15), (6) and (9), (10) and repeating the proofs of Theorem \(1(a)\) and \((b)\) of [9] we obtain.
**Theorem 1**: _Let \(N\) be the smallest integer satisfying \(N\geq\pi^{-2}C+1\) and_
\[S(N,t)=\{\lambda\in\mathbb{C}:\operatorname{Re}\lambda\in[(-2\pi N+\pi+\pi t )^{n},(2\pi N-\pi+\pi t)^{n})\}\]
\((a)\) _If \(|k|\geq N,\) then the eigenvalues of \(L_{t,\varepsilon}\) for \(\varepsilon\in[0,1]\) lying in the strip_
\[P(k,t)=\{\lambda\in\mathbb{C}:\operatorname{Re}\lambda\in I(k,t)\}\]
_is contained in \(U(k,t),\) where \(U(k,t)\) and \(I(k,t)\) are defined in (4) and (5)._
\((b)\) _The closures of \(S(N,t)\) and \(U(k,t)\) for \(|k|\geq N\) are pairwise disjoint closed sets._
Similarly, repeating the proof of Theorem 2 of [9] we get
**Theorem 2**: _If \(C\leq\ \pi^{2}2^{-n+1/2}\), then the eigenvalues of \(L_{t,\varepsilon}\) for \(\varepsilon\in[0,1]\) are contained in the disks_
\[U(0,t)=\left\{\lambda\in\mathbb{C}:|\lambda-(\pi t)^{n}|<\frac{1}{5}\pi^{n} \right\},\]
\[U(1,t)=\left\{\lambda\in\mathbb{C}:|\lambda-(2\pi+\pi t)^{n}|<\frac{3}{10}\,|2 +t|^{n-2}\,\pi^{n}\right\},\]
\[U(-1,t)=\left\{\lambda\in\mathbb{C}:|\lambda-(\pi t-2\pi)^{n}|<\frac{3}{10}\,| t-2|^{n-2}\,\pi^{n}\right\},\]
_and \(U(k,t)\) for \(|k|>1\) which are defined by (4). The closures of these disks are pairwise disjoint closed sets._
Now we consider the eigenvalues of \(L_{t,\varepsilon}\) lying in the strip \(S(N,t).\)
**Theorem 3**: _The eigenvalue \(\lambda\) of \(L_{t,\varepsilon}\) lying in the strip \(S(N,t)\) is contained in the rectangle_
\[R(N,t)=\left\{\operatorname{Re}\lambda\in A(N,t),|\operatorname{Im}\lambda|< \frac{\sqrt{10}}{3}\,(2N+1)^{n-3/2}\,\pi^{n-2}C\right\},\]
_where \(A(N,t)=[(-2\pi N+\pi+\pi t)^{n},(2\pi N-\pi+\pi t)^{n}),\) and \(\varepsilon\in[0,1].\)_
**Proof.** Let \(\lambda\) be the eigenvalue of \(L_{t,\varepsilon}\) lying in \(S(N,t)\) and \(\Psi_{\lambda}\) be a normalized eigenfunction corresponding to \(\lambda.\) There exists \(k\in[-N,N]\) such that
\[\left|\left(\Psi_{\lambda},e^{i(2\pi k+\pi t)x}\right)\right|=\max_{p\in[-N,N] }\left|\left(\Psi_{\lambda},e^{i(2\pi p+\pi t)x}\right)\right|. \tag{16}\]
First, we prove that
\[\sum_{p:|p|>N}\left|\left(\Psi_{\lambda},e^{i(2\pi p+\pi t)x}\right)\right|^{2 }<\frac{1}{10}. \tag{17}\]
If \(\lambda\in S(N,t)\), then
\[|\lambda-(2\pi p+\pi t)^{n}|>|(2\pi N+\pi t-\pi)^{n}-(2\pi p+\pi t)^{n}|\]
for \(p>N\) and
\[|\lambda-(2\pi p+\pi t)^{n}|\geq|(-2\pi N+\pi t+\pi)^{n}-(2\pi p+\pi t)^{n}|\]
for \(p<-N\). Using these inequalities and repeating the proof of (13) and (14) of [9] (use the last two inequalities instead of first two inequalities in the proof of Lemma 1 of [9] and repeat the proof of the lemma) we obtain
\[\sum_{p:p>N}\left|\left(\Psi_{\lambda},e^{i(2\pi p+\pi t)x}\right)\right|^{2} \leq\frac{5}{64}\]
\[\sum\limits_{p:p<-N}\left|\left(\Psi_{\lambda},e^{i(2\pi p+\pi t)x}\right)\right|^ {2}<\frac{1}{48}.\]
The last two inequalities give (17). Now, using Parseval's equality, (17) and (16) we obtain
\[\left|\left(\Psi_{\lambda},e^{i(2\pi k+\pi t)x}\right)\right|>\frac{3}{\sqrt{10 \left(2N+1\right)}}.\]
On the other hand, if \(k\in[-N,N],\) then by (9) and (10) the right side of (6) is not greater than \(\left(\left(2N+1\right)\pi\right)^{n-2}C.\) Therefore from (6) we obtain
\[\left|\lambda-(2\pi k+\pi t)^{n}\right|<\frac{\sqrt{10}}{3}\left(2N+1\right)^{ n-3/2}\pi^{n-2}C\]
that gives the proof of the theorem.
Now using Theorem 1-3 we consider the spectrum of \(L.\) Besides, we use the following results from [10] formulated here as summaries.
**Summary 1**: \((a)\) _The real line \(\mathbb{R}\) is a subset of \(\sigma(L)\) ( Theorem 2\((b)\) of [10])_
\((b)\) _If \(\lambda\) is an eigenvalue of \(L_{t},\) then \(\overline{\lambda}\) is also an eigenvalue of \(L_{t}\) (This result follows from Theorem 1\((a)\) of [10])._
Note that Summary 1\((b)\) is a characteristic property of the differential operators with PT-symmetric coefficients. For the operator \(L_{t}\) this result immediately follows from Theorem 1\((a)\) of [10] due to the following. Theorem 1\((a)\) of [10] states that if \(\Psi\) is a solution of \(l(y)=\lambda y,\) then the function \(\Phi\) defined by \(\Phi(x,\lambda)=\overline{\Psi(-x,\lambda)}\) is a solution of \(l(y)=\overline{\lambda}y,\) where \(l(y)\) is defined in (1). On the other hand, \(\Psi\) satisfies boundary conditions (2) if and only if \(\Psi(x)\) has the form \(\Psi(x)=e^{i\pi tx}p(x),\) where \(p(x)\) is a periodic function. Then one can easily verify that the function \(\Phi\) has the same form. Therefore, if \(\lambda\) is an eigenvalue of \(L_{t},\) then \(\overline{\lambda}\) is also an eigenvalue of \(L_{t}.\)
Now we are ready to prove the following results of this paper.
**Theorem 4**: \((a)\) _Each of the disks \(U(k,t)\) for \(|k|\geq N\) contains only one eigenvalues of \(L_{t}\), where \(N\) is defined in Theorem 1. This eigenvalue is a real number._
\((b)\) _The real part \(\sigma(L)\cap\mathbb{R}\) of the spectrum \(\sigma(L)\) of \(L\) is \(\mathbb{R}\) and the nonreal part \(\sigma(L)\backslash\mathbb{R}\) of \(\sigma(L)\) consists of the curves lying in the rectangle (3)._
\((c)\) _If \(C\leq\ \pi^{2}2^{-n+1/2}\), then \((a)\) is valid for all \(k\in\mathbb{Z}\) and \(\sigma(L)=\mathbb{R}.\)_
**Proof.**\((a)\) Since the strips \(S(N,t)\) and \(P(k,t)\) for \(|k|\geq N,\) defined in Theorem 1, form a cover of \(\mathbb{C},\) it follows from Theorems 3 and 1\((a)\) that all eigenvalues of \(L_{t,\varepsilon}\) for all \(\varepsilon\in[0,1]\) are contained in the union of the sets \(R(N,t)\) and \(U(k,t)\) for \(|k|\geq N.\) Moreover, by Theorem 1\((b)\) the closures of the disks \(U(k,t)\) for \(|k|\geq N\) and the rectangle \(R(N,t)\) are pairwise disjoint closed sets. Therefore there exists a closed curve \(\Gamma(k,t)\) which encloses only the disk \(U(k,t)\) and lies in the resolvent set of the operators \(L_{t,\varepsilon}\) for all \(\varepsilon\in[0,1].\) Since \(L_{t,\varepsilon}\) is a holomorphic family with respect to \(\varepsilon,\) we conclude that the number of eigenvalues
(counting the multiplicity) of the operators \(L_{t,0}=L_{t}(0)\) and \(L_{t,1}=L_{t}\) lying inside \(U(k,t)\) is the same (see [3, Chap. 7]). Since the operator \(L_{t,0}\) has only one eigenvalue lying inside \(U(k,t),\) the operator \(L_{t,1}\) also has only one eigenvalue lying inside \(U(k,t).\) Thus \(U(k,t)\) for \(k\geq N\) contains only one eigenvalue of \(L_{t}.\) In the same way we prove this statement for \(k\leq-N.\) If the eigenvalue \(\lambda\) of \(L_{t}\) lying in \(U(k,t)\) is a nonreal number, then by Summary 1\((b)\), \(\overline{\lambda}\) is also an eigenvalue of \(L_{t}\) lying in \(U(k,t).\) This contradicts the first sentence of \((a).\)
\((b)\) It is well known that (see [4, 5]) the spectrum of \(L\) consist of the curves and the points of these curves are the Bloch eigenvalues. Therefore the proof of \((b)\) follows from \((a),\) Summary \(1(a)\) and Theorem 3.
\((c)\) Using Theorem 2 and repeating the proof of \((a),\) we obtain that if \(C\leq\pi^{2}2^{-n+1/2},\) then \((a)\) is valid for all \(k\in\mathbb{Z},\) all Bloch eigenvalues of \(L\) are real numbers and \(\sigma(L)\subset\mathbb{R}\). Thus, the proof of \((c)\) follows from Summary \(1(a).\)
|
2304.02305 | The role of convection in the existence of wavefronts for biased
movements | We investigate a model, inspired by (Johnston et al., Sci. Rep., 7:42134,
2017), to describe the movement of a biological population which consists of
isolated and grouped organisms. We introduce biases in the movements and then
obtain a scalar reaction-diffusion equation which includes a convective term as
a consequence of the biases. We focus on the case the diffusivity makes the
parabolic equation of forward-backward-forward type and the reaction term
models a strong Allee effect, with the Allee parameter lying between the two
internal zeros of the diffusion. In such a case, the unbiased equation (i.e.,
without convection) possesses no smooth traveling-wave solutions; on the
contrary, in the presence of convection, we show that traveling-wave solutions
do exist for some significant choices of the parameters. We also study the sign
of their speeds, which provides information on the long term behavior of the
population, namely, its survival or extinction. | Diego Berti, Andrea Corli, Luisa Malaguti | 2023-04-05T08:54:13Z | http://arxiv.org/abs/2304.02305v1 | # The role of convection in the existence of wavefronts for biased movements
###### Abstract
We investigate a model, inspired by Johnston et al. (2017), see [8], to describe the movement of a biological population which consists of isolated and grouped organisms. We introduce biases in the movements and then obtain a scalar reaction-diffusion equation which includes a convective term as a consequence of the biases. We focus on the case the diffusivity makes the parabolic equation of forward-backward-forward type and the reaction term models a strong Allee effect, with the Allee parameter lying between the two internal zeros of the diffusion. In such a case, the unbiased equation (i.e., without convection) possesses no smooth traveling-wave solutions; on the contrary, in the presence of convection, we show that traveling-wave solutions do exist for some significant choices of the parameters. We also study the sign of their speeds, which provides information on the long term behavior of the population, namely, its survival or extinction.
**AMS Subject Classification:** 35K65; 35C07, 35K57, 92D25
**Keywords:** Population dynamics, biased movement, sign-changing diffusivity, traveling-wave solutions, diffusion-convection reaction equations.
## 1 Introduction
In this paper we investigate a model to describe the movement of biological organisms. Its detailed presentation appears in Section 2. Inspired by the recent paper [8], we assume that the population is constituted of isolated and grouped organisms; our discussion is presented in the case of a single spatial dimension but could be extended to the whole space. The first rigorous mathematical deduction of movement for organisms appeared in [17]; since then, several models have been proposed, see for instance [7, 8, 13, 14, 15] and references there. In this context, a common procedure is to start from a discrete framework where the transition probabilities per unit time \(\tau\) and for a one-step jump-width \(l\) are assigned, and then pass to the limit for \(\tau,l\to 0\). In the aforementioned papers the limiting assumptions
make the diffusivity totally responsible for the movement, and no convection term appears; see however [14, SS5.3] and [16], for instance, for the deduction of a model which also include a convective effect. Here, we generalize the model in [8] by introducing a possibly _biased_ movement, which leads, in general, to a convective term. As a consequence, we show the appearance of a greater variety of dynamics which allow to better investigate the long term behavior of the population; in particular, to predict its survival or extinction.
Our model is described by a reaction-diffusion-convection equation
\[u_{t}+f(u)_{x}=\big{(}D(u)u_{x}\big{)}_{x}+g(u),\qquad t\geq 0,\,x\in\mathbb{R}, \tag{1.1}\]
where the functions \(f,D\) and \(g\) satisfy (2.10), (2.11) and (2.12), respectively. The unknown function \(u\) denotes the density (or concentration) of the population and then it has bounded range; for simplicity we assume \(u\in[0,1]\). An interesting feature of equation (1.1) in this context is that _negative diffusivities_ arise for several natural choices of the parameters. As in [8], here we consider a diffusion term which makes equation (1.1) of forward-backward-forward type. This occurrence was already noticed in other papers, see for instance [18, 20] in the case of a homogeneous population under different assumptions. Notice however that the deduction of the model both in [8] and in the present paper also involves the reaction term, while in [18, 20] it is limited to diffusion. As opposite to _positive_ diffusivities, which model the spatial spreading, _negative_ diffusivities are usually interpreted to model the "chaotic" movement which follows from aggregation [18, 20]. In turn, the latter is "a macroscopic effect of the isolated and the grouped motility of the agents, together with competition for space"[11]. At last, we assume that the reaction term \(g\) shows the strong Allee effect, i.e., it is of the so called _bistable_ type (see assumption (g) below).
We focus on the existence of traveling-wave solutions \(u(x,t)=\varphi(x-ct)\) to equation (1.1), for some profiles \(\varphi=\varphi(\xi)\) and wave speeds \(c\), see [6] for general information. If the profile is defined in \(\mathbb{R}\), it is monotone, nonconstant, and reaches asymptotically the equilibria of (1.1), then the corresponding traveling-wave solution is called a _wavefront_. We consider precisely decreasing profiles which connect the outer equilibria of \(g\), i.e.,
\[\varphi(-\infty)=1\quad\text{ and }\quad\varphi(\infty)=0. \tag{1.2}\]
The case when profiles are increasing, and then satisfy \(\varphi(-\infty)=0\), \(\varphi(\infty)=1\), is dealt analogously and leads to a similar discussion. These solutions, even if of a special kind, have several advantages: they are global, they are often in good agreement with experimental data [13], and can be attractors for more general solutions [5]. Moreover, when \(u\) represents the density of a biological species, as in this case, then condition (1.2) means that, for times \(t\to\infty\), the species either successfully persists if \(c>0\), or it becomes extinct if \(c<0\). The wavefront profile \(\varphi\) must satisfy the ordinary differential equation
\[\big{(}D(\varphi)\varphi^{\prime}\big{)}^{\prime}+\Big{(}c-\dot{f}(\varphi) \Big{)}\,\varphi^{\prime}+g(\varphi)=0. \tag{1.3}\]
We used the notation \(\dot{\,}:=d/du\) and \({}^{\prime}:=d/d\xi\). Although one can consider the case of discontinuous profiles, see [10, 11] and references there, in this paper we focus on regular monotone profiles of equation (1.3). This means that they are continuous, and of class \(C^{2}\) except possibly at points where \(D\) vanishes; then solutions to equation (1.3) are intended in the distribution sense.
The existence of wavefronts is treated here in a quite general framework, which includes, in particular, our biological model. More precisely, we fix three real numbers \(\alpha,\beta,\gamma\) satisfying
\[0<\alpha<\gamma<\beta<1, \tag{1.4}\]
and assume, see Figure 1,
* \(f\in C^{1}[0,1]\);
* \(D\in C^{1}[0,1]\), \(D>0\) in \([0,\alpha)\cup(\beta,1]\), and \(D<0\) in \((\alpha,\beta)\);
* \(g\in C^{1}[0,1]\), \(g<0\) in \((0,\gamma)\), \(g>0\) in \((\gamma,1)\), and \(g(0)=g(\gamma)=g(1)=0\).
Since \(f\) in (1.1) is defined up to an additive constant, we can take \(f(0)=0\). The term \(\dot{f}(u)\) represents the drift of the total concentration \(u\) and prescribes in particular if a concentration wave is moving toward the right (\(\dot{f}(u)>0\)) or toward the left (\(\dot{f}(u)<0\)). The parabolic equation (1.1) is of _backward_ type in the interval \((\alpha,\beta)\) and of _forward_ type elsewhere; moreover, it degenerates at \(\alpha\) and \(\beta\).
The presence of wavefronts to (1.1) satisfying (D) and (g) and with \(f=0\) was first discussed in [9], where it is shown in particular that, if a wavefront exists, then \(\gamma\notin[\alpha,\beta]\). Such a situation and many others, again with \(f=0\), was also considered in [8, cases 6.3, 8.3], in the framework of the particular model deduced in that paper. The case with convection is not yet completely understood. Then our issue here is
_whether and when the presence of the convective flow allows the existence of wavefronts_.
An intuitive argument, see Remark 3.3, shows that the answer is in the affirmative at least for suitable concave \(f\). We now briefly report on the content of this paper.
In Section 2 we introduce the biological model and state our main results about it, for more immediacy, in a somewhat simplified way; proofs, which require the analysis of the general case dealt in Section 3, and more details, are deferred to Section 4. In particular, we provide positive or negative results for each of the possible behaviors of \(f\), namely, when \(f\) is concave, convex, or it changes concavity once.
In Section 3 we investigate the _fine properties_ (uniqueness, strict monotonicity, estimates of speed thresholds) of such wavefronts for equation (1.1) satisfying (f)-(D) and (g). A similar discussion for a _monostable_ reaction term \(g\) appeared in [1] and [3], respectively in
Figure 1: Typical plots of the functions \(D\) (dashed line) and \(g\) (dashdotted line).
the general framework and for the population model with biased movements. We recall that \(g\) is called _monostable_ if \(g>0\) in \((0,1)\) and \(g(0)=g(1)=0\).
As in our aforementioned papers, we exploit here an order-reduction technique. Since we focus on profiles \(\varphi=\varphi(\xi)\) that are strictly monotone when \(\varphi\in(0,1)\), we can consider the inverse function \(\varphi^{-1}(\varphi)\) of \(\varphi\) and, by denoting \(z(\varphi):=D(\varphi)\varphi^{\prime}\left(\varphi^{-1}(\varphi)\right)\), we reduce the problem (1.3) to a first-order singular boundary-value problem for \(z\) in \([0,1]\). This problem is tackled by the classical techniques of upper- and lower-solutions. This technique requires lighter assumptions than the phase-plane analysis in [8] and is simpler than the geometric singular perturbation theory exploited in [10]. Then wavefronts satisfying (1.2) are obtained by suitably pasting traveling waves. The results appear in Section 3, they are given for an arbitrary equation (1.1) satisfying conditions (f), (D), (g), and they are original. About (g), the mere requirement that \(g\) is continuous and the product \(Dg\) differentiable at \(0\) would be sufficient for us. Both for (D) and (g), we made slightly stricter assumptions than necessary both for simplicity and because they are satisfied by our biological model with biased movements. The cases when the internal zero of \(g\) is before \(\alpha\), i.e. \(\gamma\in(0,\alpha)\), or after \(\beta\), that is \(\gamma\in(\beta,1)\), are not treated here. Equation (1.1) with \(f=0\) admits wavefronts in these cases and we expect that they persist also in the presence of the convective effect \(f\).
The issue of the linear stability of the wavefronts is certainly interesting; we claim that it could be developed as in [10, 19], with a similar discussion.
## 2 A biological model with biased movements
In this section we first summarize a model for the movement of organisms recently presented in [8] for populations constituted by two groups of individuals. Then we show how a convective term can appear in the equation because of a biased movement. At last, we provide our results about wavefronts for such a model; proofs are deferred to Section 4.
### The model
The population is divided into _isolated_ and _grouped_ organisms. Both groups can move, reproduce and die, with possibly different rates. The organisms occupy the sites \(jl\), for \(j=0,\pm 1,\pm 2,\dots\) and \(l>0\); we denote by \(c_{j}\) the probability of occupancy of the \(j\)-th site. Let \(P_{m}^{i}\) and \(P_{m}^{g}\) be the movement transitional probabilities for isolated and grouped individuals, respectively; we use the notation \(P_{m}^{i,g}\) to indicate the two sets of parameters together. Analogously, the corresponding probabilities for birth and death are \(P_{b}^{i,g}\) and \(P_{d}^{i,g}\).
Differently from [8], we also introduce the parameters \(a^{i},b^{i}\geq 0\) and \(a^{g},b^{g}\geq 0\), which characterize a (linearly) biased movement for the isolated and grouped individuals. For the isolated individuals the bias is towards the left if \(a^{i}-b^{i}>0\) and towards the right if \(a^{i}-b^{i}<0\); for the grouped individuals the same occurs when either \(a^{g}-b^{g}>0\) or \(a^{g}-b^{g}<0\), respectively. In the case of [8] one has \(a^{i}=b^{i}=a^{g}=b^{g}=1\) and then \(a^{i,g}-b^{i,g}=0\).
Then, the variation \(\delta c_{j}\) of \(c_{j}\) during a time-step \(\tau>0\) is given by
\[\delta c_{j}=\] \[= \frac{P_{m}^{i}}{2}\left[a^{i}c_{j-1}(1-c_{j})(1-c_{j-2})+b^{i}c_{j +1}(1-c_{j})(1-c_{j+2})\ -(a^{i}+b^{i})c_{j}(1-c_{j-1})(1-c_{j+1})\right]\] \[+\frac{P_{m}^{g}}{2}\left[a^{g}c_{j-1}(1-c_{j})+b^{g}c_{j+1}(1-c_ {j})-a^{g}c_{j}(1-c_{j+1})-b^{g}c_{j}(1-c_{j-1})\right]\] \[-\frac{P_{m}^{g}}{2}\left[a^{g}c_{j-1}(1-c_{j})(1-c_{j-2})+b^{g}c_ {j+1}(1-c_{j})(1-c_{j+2})\right.\] \[\left.\quad-(a^{g}+b^{g})c_{j}(1-c_{j-1})(1-c_{j+1})\right]+\ \text{reaction terms}, \tag{2.5}\]
where
\[\text{reaction terms}\ =\] \[= \frac{P_{b}^{i}}{2}\left[c_{j-1}(1-c_{j})(1-c_{j-2})+c_{j+1}(1-c_{j})(1-c_{j+2})\right]+\frac{P_{b}^{g}}{2}\left[c_{j-1}(1-c_{j})+c_{j+1}(1-c_{j})\right]\] \[-\frac{P_{b}^{g}}{2}\left[c_{j-1}(1-c_{j})(1-c_{j-2})+c_{j+1}(1-c_{j})(1-c_{j+2})\right]-\frac{P_{d}^{i}}{2}\left[c_{j}(1-c_{j-1})(1-c_{j+1})\right]\] \[-\frac{P_{d}^{g}}{2}\left[c_{j}\right]+\frac{P_{d}^{g}}{2}\left[c_{j}(1-c_{j-1})(1-c_{j+1})\right].\]
By noticing that every bracket is divided by \(2\), we deduce
\[a^{i}+b^{i}=a^{g}+b^{g}=2, \tag{2.6}\]
since in the deduction of (2.5) a bias \(a^{i,g}\) implies a converse bias \(b^{i,g}=2-a^{i,g}\).
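For concreteness, the following short Python sketch evaluates the movement part of the balance (2.5) on a periodic lattice; the transition probabilities and bias coefficients below are illustrative placeholders (they are not taken from the paper) and only need to respect (2.6). The reaction terms can be added in exactly the same way.

```python
import numpy as np

def movement_increment(c, Pm_i, Pm_g, a_i, b_i, a_g, b_g):
    """Movement contribution to delta c_j in (2.5), on a periodic lattice.

    The bias coefficients are assumed to satisfy a + b = 2, as in (2.6)."""
    cm1, cp1 = np.roll(c, 1), np.roll(c, -1)   # c_{j-1}, c_{j+1}
    cm2, cp2 = np.roll(c, 2), np.roll(c, -2)   # c_{j-2}, c_{j+2}
    isolated = 0.5 * Pm_i * (a_i * cm1 * (1 - c) * (1 - cm2)
                             + b_i * cp1 * (1 - c) * (1 - cp2)
                             - (a_i + b_i) * c * (1 - cm1) * (1 - cp1))
    grouped = 0.5 * Pm_g * (a_g * cm1 * (1 - c) + b_g * cp1 * (1 - c)
                            - a_g * c * (1 - cp1) - b_g * c * (1 - cm1))
    grouped_corr = 0.5 * Pm_g * (a_g * cm1 * (1 - c) * (1 - cm2)
                                 + b_g * cp1 * (1 - c) * (1 - cp2)
                                 - (a_g + b_g) * c * (1 - cm1) * (1 - cp1))
    return isolated + grouped - grouped_corr

# illustrative values only; a^i + b^i = a^g + b^g = 2
c0 = np.linspace(1.0, 0.0, 50)                 # a decreasing initial occupancy profile
dc = movement_increment(c0, Pm_i=0.4, Pm_g=0.2, a_i=1.1, b_i=0.9, a_g=0.8, b_g=1.2)
print(dc[:5])
```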
The continuum model is obtained by replacing \(c_{j}\) with a smooth function \(c=c(x,t)\) and expanding \(c\) around \(x=jl\) at second order. Then, we divide (2.5) by \(\tau\) and pass to the limit for \(l,\tau\to 0\) while keeping \(l^{2}/\tau\) constant; for simplicity we assume \(l^{2}/\tau=1\). To perform this step one makes the following assumptions on the reactive-diffusive terms [8]:
\[\frac{P_{m}^{i,g}l^{2}}{2\tau}\sim D_{i,g},\quad\frac{P_{b}^{i,g}}{2\tau}\sim \lambda_{i,g},\quad\frac{P_{d}^{i,g}}{2\tau}\sim k_{i,g},\quad P_{b}^{i,g},P_{ d}^{i,g}=O(\tau),\quad\text{ for }l\to 0,\tau\to 0. \tag{2.7}\]
The above limits define the diffusivity parameters \(D_{i,g}\), the birth rates \(\lambda_{i,g}\), and the death rates \(k_{i,g}\); all these parameters are non-negative. About the convection terms, we require
\[a^{i,g}(\tau)\sim 1,\ b^{i,g}(\tau)\sim 1\quad\text{ and }\quad a^{i,g}(\tau)-b^{i,g}(\tau)\sim C_{i,g}\sqrt{\tau}\quad \text{ for }\tau\to 0, \tag{2.8}\]
for some \(C_{i,g}\in\mathbb{R}\). We stress that the parameters \(C_{i,g}\) can be either positive or negative according to the values of the bias coefficients \(a^{i,g}\) and \(b^{i,g}\); in particular, if \(C_{i}>0\) then we have a bias toward the left of the isolated individuals, and toward the right if \(C_{i}<0\); the analogous bias for the grouped individuals corresponds to either \(C_{g}>0\) (left) or \(C_{g}<0\) (right). If \(C_{i,g}=0\), then the corresponding bias is too weak to pass to equation (2.9); with a slight abuse of terminology we say that the corresponding group has no convective movement. Finally, the first assumption in (2.8) is compatible with (2.6), while the second one is analogous to the last requirement in (2.7).

Figure 2: Sketch of the meaning of the parameters \(a^{i,g}\) and \(b^{i,g}\).
In conclusion, we obtain the equation
\[u_{t}+f(u)_{x}=\left(D(u)u_{x}\right)_{x}+g(u), \tag{2.9}\]
with
\[f(u) =-\left(C_{i}D_{i}+C_{g}D_{g}\right)u(1-u)^{2}-C_{g}D_{g}u(1-u), \tag{2.10}\] \[D(u) =D_{i}\left(1-4u+3u^{2}\right)+D_{g}\left(4u-3u^{2}\right),\] (2.11) \[g(u) =\lambda_{g}u(1-u)+\left[\lambda_{i}-\lambda_{g}-\left(k_{i}-k_{ g}\right)\right]u(1-u)^{2}-k_{g}u. \tag{2.12}\]
The model (2.9) depends on the eight parameters \(C_{i,g}\), \(D_{i,g}\), \(k_{i,g}\) and \(\lambda_{i,g}\). Equation (2.9) coincides with (1.1) but we agree that when we refer to (2.9) we understand \(f\), \(D\) and \(g\) as in (2.10)-(2.12). We point out that \(f(0)=f(1)=0\), i.e., the convective flow vanishes when the density is either zero or maximum, as physically it should be. When \(C_{i}=0\) the isolated individuals have no convective movement and the function \(f\) is convex in \([0,1]\) if \(C_{g}>0\) and concave otherwise. Instead, when \(C_{g}=0\) the grouped individuals have no convective movements and \(f\) changes its concavity for \(u=2/3\). The diffusion and reaction terms (2.11)-(2.12) coincide with those in [8, (2)], while \(f\) is missing there.
### Main results on the model
About the model introduced in the previous subsection, the case we are interested in is when conditions (D) and (g) are satisfied; the corresponding assumptions on the parameters have already been given in [8, 11].
**Lemma 2.1**.: _The diffusivity \(D\) in (2.11) satisfies (D) if and only if \(D_{i}>4D_{g}>0\). In this case we have_
\[\alpha=\frac{2}{3}-\frac{\omega}{3}\quad\text{ and }\quad\beta=\frac{2}{3}+ \frac{\omega}{3},\qquad\text{ for }\omega:=\sqrt{\frac{D_{i}-4D_{g}}{D_{i}-D_{g}}}. \tag{2.13}\]
_The reaction term \(g\) in (2.12) satisfies (g) if and only if \(k_{g}=0\), \(\lambda_{g}>0\) and \(r_{i}:=k_{i}-\lambda_{i}>0\). In this case_
\[\gamma=\frac{r_{i}}{r_{i}+\lambda_{g}}, \tag{2.14}\]
_and \(\gamma\in(\alpha,\beta)\) if and only if_
\[\frac{1-\omega}{2+\omega}<\frac{\lambda_{g}}{r_{i}}<\frac{1+\omega}{2-\omega}.\]
Notice that \(\omega\in(0,1)\), \(\beta-\alpha=2\omega/3\) and \(\alpha+\beta=4/3\). The condition \(k_{g}=0\) clearly has no biological meaning, and is to be interpreted as saying that the life expectancy of grouped individuals is much larger than that of isolated individuals; the condition \(r_{i}>0\) further penalizes the latter. Here, \(\gamma\) is the Allee parameter [8, 11].
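The following minimal Python sketch (with placeholder parameter values chosen only so that (2.15)-(2.17) hold; they are not taken from the paper) evaluates \(f\), \(D\), \(g\) of (2.10)-(2.12) and verifies numerically that \(\alpha\), \(\beta\) in (2.13) are the zeros of \(D\), that \(\gamma\) in (2.14) is the internal zero of \(g\), and that \(\gamma\in(\alpha,\beta)\).

```python
import math

# placeholder parameters (illustrative only), chosen so that (2.15)-(2.17) hold
C_i, C_g = 0.5, -30.0
D_i, D_g = 8.0, 1.0              # D_i > 4 D_g
k_i, lam_i = 1.5, 0.5            # r_i = k_i - lam_i > 0
k_g, lam_g = 0.0, 1.0            # k_g = 0, lam_g > 0

def f(u): return -(C_i*D_i + C_g*D_g)*u*(1-u)**2 - C_g*D_g*u*(1-u)          # (2.10)
def D(u): return D_i*(1 - 4*u + 3*u**2) + D_g*(4*u - 3*u**2)                # (2.11)
def g(u): return lam_g*u*(1-u) + (lam_i - lam_g - (k_i - k_g))*u*(1-u)**2 - k_g*u  # (2.12)

omega = math.sqrt((D_i - 4*D_g)/(D_i - D_g))         # (2.13)
alpha, beta = 2/3 - omega/3, 2/3 + omega/3
r_i = k_i - lam_i
gamma = r_i/(r_i + lam_g)                            # (2.14)

assert abs(f(0.0)) < 1e-12 and abs(f(1.0)) < 1e-12   # the flux vanishes at u = 0, 1
assert abs(D(alpha)) < 1e-12 and abs(D(beta)) < 1e-12
assert abs(g(gamma)) < 1e-12
assert alpha < gamma < beta                          # condition (2.17)
print(f"alpha={alpha:.4f}, gamma={gamma:.4f}, beta={beta:.4f}, omega={omega:.4f}")
```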
Here follows a simple necessary condition for the existence of wavefronts.
**Proposition 2.1**.: _If (2.9) admits wavefronts satisfying condition (1.2), then \(C_{g}<0\)._
It is easy to see that a necessary condition to have wavefronts satisfying conditions \(\varphi(-\infty)=0\) and \(\varphi(\infty)=1\), instead of (1.2), is \(C_{g}>0\).
We now summarize the restrictions required on the parameters:
\[C_{g}<0,\qquad D_{i}>4D_{g}>0, \tag{2.15}\] \[r_{i}:=k_{i}-\lambda_{i}>0,\qquad k_{g}=0,\qquad\lambda_{g}>0,\] (2.16) \[\frac{1-\omega}{2+\omega}<\frac{\lambda_{g}}{r_{i}}<\frac{1+ \omega}{2-\omega}, \tag{2.17}\]
with \(\omega\) defined in (2.13). We _always_ assume conditions (2.15)-(2.17) in the following, without any further mention. The results below are preferably stated by referring to the following dimensionless quotients and by lumping the parameters referring to the grouped population into a single dimensionless parameter as follows:
\[s:=\frac{C_{i}}{|C_{g}|},\qquad d:=\frac{D_{i}}{D_{g}}>4,\qquad\mu:=\frac{r_{ i}}{\lambda_{g}}=\frac{k_{i}-\lambda_{i}}{\lambda_{g}},\qquad E_{g}:=|C_{g}| \sqrt{\frac{D_{g}}{\lambda_{g}}}. \tag{2.18}\]
Under this notation we have
\[\omega=\sqrt{\frac{d-4}{d-1}}. \tag{2.19}\]
Notice that \(E_{g}\) gathers the parameters concerning convection, diffusion and reaction of the grouped individuals; the parameter \(\mu\) is the ratio between the net increasing rate of the isolated and grouped individuals. Notice that condition (2.17) is equivalent to
\[\frac{2-\omega}{1+\omega}<\mu<\frac{2+\omega}{1-\omega}. \tag{2.20}\]
The convective term \(f\) can change convexity at most once; hence, it can be either concave or convex, or else convex-concave or concave-convex. We now examine each of these cases; in all of them, we emphasize that \(s\) is always multiplied by \(d\). Since the parameter \(s\) does not depend on \(d\), we can understand \(sd\) as a variable independent of \(d\), which lumps together the ratios of the coefficients related to the movement. In this way we shall often deal with the couple \((\omega,sd)\) of parameters, where \(\omega\) depends on \(d\).
The concave case. The convective term \(f\) is strictly concave if and only if
\[0\leq sd\leq\frac{3}{2}, \tag{2.21}\]
(see Lemma 4.2). In this case, model (2.9) admits wavefronts satisfying (1.2) and we can also discuss the sign of their speeds \(c\); we now provide some prototype results. The key condition is
\[r_{i}(2+\omega)+\lambda_{g}(1+\omega)<\frac{8C_{g}^{2}D_{g}}{81}\frac{d-4}{(d -1)^{2}}. \tag{2.22}\]
Condition (2.22) contains several parameters; therefore, there are many ways of discussing the results, depending on which parameters are varied and which are held constant; we focus on two different choices.
First, for fixed \(C_{g},D_{g}\) we consider the triangle, see Figure 3 on the left,
\[\mathcal{T}_{g}(d):=\left\{(r_{i},\lambda_{g})\in\mathbb{R}^{+}\times\mathbb{R}^{+}\colon(2.17)\text{ and }(2.22)\text{ hold }\right\}. \tag{2.23}\]
Under (2.21), if \((r_{i},\lambda_{g})\in\mathcal{T}_{g}(d)\) then equation (2.9) admits wavefronts satisfying (1.2) (see Remark 4.4).
Moreover, for such \((r_{i},\lambda_{g})\in\mathcal{T}_{g}(d)\) we have, see Figure 4:
* if \(d>4+2\sqrt{3}\) and also \[\frac{18\sqrt{\mu(d-1)}}{\tau(\omega,\gamma,sd)}<E_{g},\] (2.24) where \(\tau\) is defined in (4.15), then (2.9) admits wavefronts satisfying (1.2) with _positive_ speeds (see Remark 4.6);
* if \(d\in(4,(5+2\sqrt{3})/2)\), then _every_ pair \((r_{i},\lambda_{g})\in\mathcal{T}_{g}(d)\) provides profiles and all of them have _negative_ speeds (see Remark 4.7).
Figure 4: Thresholds leading to wavefronts with positive or negative speeds when \(f\) is strictly concave. The figure is to be interpreted as follows: if \(d>4+2\sqrt{3}\sim 7.46\), then there exist wavefronts with positive speed, while, if \(4<d<(5+2\sqrt{3})/2\sim 4.23\), then every wavefront has negative speed under the assumptions of Theorem 4.3.
Figure 3: On the left: the triangle \(\mathcal{T}_{g}(d)\) is the intersection of three half-planes. Here \(|C_{g}|\sqrt{D_{g}}=6\); dashed lines refer to \(d=5\), solid lines to \(d=8\). On the right: The conditions in (2.20) prescribe that \(\mu\) must lie between the red and the blue line. Condition (2.22) further prescribes that \(\mu>0\) must belong to the region below the black line: the dashed curve refers to \(E_{g}=40\), the dash-dotted curve to \(E_{g}=60\).
Second, notice that, assuming again (2.21), we can interpret (2.17) and (2.22) as relationships between \(d\) and \(\mu\) for fixed \(E_{g}\), see Figure 3 on the right and Corollary 4.1. In this framework, profiles exist for every couple \((d,\mu)\) lying in the region between the red and blue lines, and below the black line.
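As a concrete illustration of this reading of the parameters, the sketch below (reusing the illustrative placeholder values of the previous sketch, which are not taken from the paper) checks the concavity condition (2.21), the ordering condition (2.20) and the key condition (2.22); when all of them hold, \((r_{i},\lambda_{g})\) belongs to \(\mathcal{T}_{g}(d)\) and wavefronts satisfying (1.2) exist.

```python
import math

# illustrative parameters only (same placeholder values as in the previous sketch)
C_i, C_g, D_i, D_g, r_i, lam_g = 0.5, -30.0, 8.0, 1.0, 1.0, 1.0

d = D_i/D_g
omega = math.sqrt((d - 4)/(d - 1))                                  # (2.19)
s, mu = C_i/abs(C_g), r_i/lam_g                                     # (2.18)
E_g = abs(C_g)*math.sqrt(D_g/lam_g)

concave  = 0 <= s*d <= 1.5                                          # (2.21)
ordering = (2-omega)/(1+omega) < mu < (2+omega)/(1-omega)           # (2.20)
key      = r_i*(2+omega) + lam_g*(1+omega) < (8*C_g**2*D_g/81)*(d-4)/(d-1)**2   # (2.22)

print(f"s*d={s*d:.3f}, omega={omega:.3f}, mu={mu:.3f}, E_g={E_g:.1f}")
print("concave (2.21):", concave, " ordering (2.20):", ordering, " key (2.22):", key)
if concave and ordering and key:
    print("-> (r_i, lam_g) lies in T_g(d): wavefronts satisfying (1.2) exist")
```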
The convex case. If \(f\) is convex, then equation (2.9) admits no such wavefronts. Indeed, we show a stronger result: wavefronts satisfying (1.2) never exist if \(f\) is convex just in the interval \([\alpha,\beta]\) (see Remark 4.3).
The case when \(f\) changes convexity. We now consider the two cases when \(f\) changes convexity; in order to simplify the analysis, we focus on the particular case when the inflection point of \(f\) coincides with \(\gamma\). Recall that \(\gamma\) represents the Allee parameter [8], which describes the threshold separating a decrease of concentration (if \(u<\gamma\)) from an increase of concentration (if \(u>\gamma\)). The assumption that \(f\) has an inflection point at \(\gamma\) means that the maximum drift \(\dot{f}\) (if \(f\) is convex-concave) or the minimum drift (if \(f\) is concave-convex) is precisely reached at \(\gamma\). We refer to Figure 5 for an illustration of both cases.
* Assume \(f\) is convex-concave and define the set \[\mathcal{S}:=\left\{(\omega,sd)\colon 1+\frac{1}{\omega}<sd<\frac{12(2+3 \omega)}{(4+\omega)^{2}}\right\}.\] (2.25) If \((\omega,sd)\in\mathcal{S}\) and estimate (4.27) holds, then equation (2.9) has wavefronts satisfying condition (1.2).
* Assume \(f\) is concave-convex and define the set \[\tilde{\mathcal{S}}:=\left\{(\omega,sd)\colon\,-\frac{16\omega}{(\omega+2)^{2 }}<sd<1-\frac{1}{\omega}\right\}.\] (2.26) If \((\omega,sd)\in\tilde{\mathcal{S}}\) and estimate (4.33) holds, then equation (2.9) has wavefronts satisfying condition (1.2).
We refer to Figure 6 for an illustration of the sets \(\mathcal{S}\) and \(\tilde{\mathcal{S}}\).
Figure 5: Plots of the functions \(D\) (dashed line), \(g\) (dashdotted line) and \(f\) (solid line) in the case \(f\) is convex-concave (on the left) and concave-convex (on the right), with \(\gamma\) as inflection point of \(f\).
## 3 Theoretical results
In this section we provide the theoretical results that are needed for the investigation of model (2.9). In the following we consider equation (1.1) and we always assume (1.4) and (f), (D), (g), without any further mention. The existence of a wavefront solution to (1.1), whose profile satisfies (1.2), is obtained by pasting profiles connecting \(0\) with \(\alpha\), \(\alpha\) with \(\gamma\), \(\gamma\) with \(\beta\), and \(\beta\) with \(1\). Each subprofile exists for \(c\) larger or smaller than a certain threshold, which varies according to the subinterval: we denote them by
\[c_{0,\alpha}^{*},\quad c_{\alpha,\gamma}^{*},\quad c_{\gamma,\beta}^{*},\quad c _{\beta,1}^{*},\]
respectively. The expressions of these thresholds are not explicit, but we provide below rather precise estimates for them. We denote
\[c_{0}:=\min\{c_{0,\alpha}^{*},\,c_{\alpha,\gamma}^{*}\}\quad\text{ and }\quad c_{1}:=\max\{c_{\gamma,\beta}^{*},\,c_{\beta,1}^{*}\}. \tag{3.1}\]
In the above pasting framework, \(c_{0}\) involves the speeds of profiles connecting \(0\) with \(\alpha\) and then \(\alpha\) to \(\gamma\), while \(c_{1}\) refers to the connections \(\gamma\) to \(\beta\) and then \(\beta\) to \(1\). We denote with \(\mathcal{J}\) the set of _admissible speeds_, i.e., the speeds \(c\) such that there is a profile with that speed satisfying (1.2).
The following main result concerns general necessary and sufficient conditions for the existence of wavefronts.
**Theorem 3.1**.: _If_
\[c_{1}<c_{0}, \tag{3.2}\]
_then for every \(c\in(c_{1},c_{0})\) there are wavefronts to equation (1.1) satisfying (1.2)._
_Conversely, if \(c_{1}>c_{0}\), there exist no wavefronts to equation (1.1) satisfying (1.2)._
We point out that, in the case \(c_{1}<c_{0}\), there are _infinitely many_ profiles with the _same_ speed \(c\in(c_{1},c_{0})\). More precisely, see Figure 7, for every such \(c\) there exists \(\lambda_{c}<0\) and a family of profiles \(\varphi_{\lambda}\), for \(\lambda\in[\lambda_{c},0)\), which are characterized by
\[\varphi(\xi_{\gamma})=\gamma\quad\text{ and }\quad\varphi_{\lambda}^{\prime}( \xi_{\gamma})=\lambda,\]
for some \(\xi_{\gamma}\in\mathbb{R}\). The first condition simply says that all profiles have been shifted so that they reach the value \(\gamma\) at the same \(\xi=\xi_{\gamma}\) (in order to make a comparison possible); the
Figure 6: The sets \(\mathcal{S}\), on the left, and \(\tilde{\mathcal{S}}\), on the right. The sets \(\mathcal{S}\) and \(\tilde{\mathcal{S}}\) are the plane regions bounded from above by the blue curve and from below by the red curve.
second one states that their slopes at \(\xi_{\gamma}\) cover the cone centered at \((\xi_{\gamma},\gamma)\) and opening \([\lambda_{c},0)\). We refer to [2] for more information.
We denote the _difference quotient_ of a scalar function of a real variable \(F=F(\varphi)\) with respect to a point \(\varphi_{0}\) as
\[\delta(F,\varphi_{0})(\varphi):=\frac{F(\varphi)-F(\varphi_{0})}{\varphi- \varphi_{0}}. \tag{3.3}\]
We also introduce the integral mean of the difference quotient and denote it as
\[\Delta(F,\varphi_{0})(\varphi):=\frac{1}{\varphi-\varphi_{0}}\int_{\varphi_{0} }^{\varphi}\delta(F,\varphi_{0})(\psi)\,d\psi=\frac{1}{\varphi-\varphi_{0}} \int_{\varphi_{0}}^{\varphi}\frac{F(\psi)-F(\varphi_{0})}{\psi-\varphi_{0}}\,d\psi.\]
Notice that for every \(\varphi\in(0,\gamma)\) there is \(\varphi_{1}\in(0,\gamma)\) such that \(\Delta(Dg,\alpha)(\varphi)=\delta(Dg,\alpha)(\varphi_{1})\). Then we have
\[\sup_{[0,\gamma]}\Delta(Dg,\alpha)\leq\sup_{[0,\gamma]}\delta(Dg,\alpha), \tag{3.4}\]
and the same estimate holds true in the interval \([\gamma,1]\) by replacing \(\alpha\) with \(\beta\).
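The difference quotient \(\delta\) and its integral mean \(\Delta\) are straightforward to approximate numerically; the short sketch below does this for the product \(Dg\) of the biological model (with the same placeholder parameters used earlier, not taken from the paper) and checks the inequality (3.4) on the sampled grid.

```python
import numpy as np

# placeholder model data, as in the earlier sketches (illustrative values only)
D_i, D_g, k_i, lam_i, lam_g = 8.0, 1.0, 1.5, 0.5, 1.0
omega = np.sqrt((D_i - 4*D_g)/(D_i - D_g))
alpha, gamma = 2/3 - omega/3, (k_i - lam_i)/((k_i - lam_i) + lam_g)

def D(u): return D_i*(1 - 4*u + 3*u**2) + D_g*(4*u - 3*u**2)         # (2.11)
def g(u): return lam_g*u*(1-u) + (lam_i - lam_g - k_i)*u*(1-u)**2    # (2.12) with k_g = 0
def F(u): return D(u)*g(u)

def delta(F, phi0, phi):                    # difference quotient (3.3)
    return (F(phi) - F(phi0))/(phi - phi0)

def Delta(F, phi0, phi, n=2000):            # integral mean of the difference quotient
    psi = np.linspace(phi0, phi, n)[1:]     # uniform samples; skip the endpoint psi = phi0
    return float(np.mean(delta(F, phi0, psi)))

phis = np.linspace(1e-3, gamma, 200)
phis = phis[np.abs(phis - alpha) > 1e-3]    # stay away from the singular point phi = alpha
sup_delta = max(delta(F, alpha, p) for p in phis)
sup_Delta = max(Delta(F, alpha, p) for p in phis)
print(f"sup delta = {sup_delta:.4f}, sup Delta = {sup_Delta:.4f}")
assert sup_Delta <= sup_delta + 1e-8        # inequality (3.4) on [0, gamma]
```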
The following results provide necessary and sufficient conditions for the existence of decreasing profiles, in order to make condition (3.2) more explicit in terms of the functions \(f\), \(D\) and \(g\). The proofs are deferred to the end of this section.
**Corollary 3.1** (Necessary condition).: _If there are wavefronts to equation (1.1) whose profiles satisfy (1.2), then_
\[\min\left\{\inf_{[0,\gamma]}\delta(f,\alpha),\dot{f}(\alpha)-2\sqrt{\dot{D}( \alpha)g(\alpha)}\right\}\geq\max\left\{\sup_{[\gamma,1]}\delta(f,\beta),\dot{ f}(\beta)+2\sqrt{\dot{D}(\beta)g(\beta)}\right\}. \tag{3.5}\]
In particular, wavefronts exist only if both conditions
\[\dot{f}(\alpha)-\dot{f}(\beta) \geq 2\sqrt{\dot{D}(\alpha)g(\alpha)}+2\sqrt{\dot{D}(\beta)g( \beta)}, \tag{3.6}\] \[\inf_{[0,\gamma]}\delta(f,\alpha)-\sup_{[\gamma,1]}\delta(f,\beta) \geq 0, \tag{3.7}\]
are satisfied. Notice that inequality (3.6) separates the behavior of \(f\) from that of \(Dg\).
Figure 7: Some profiles with the same speed \(c\), in the case \(c_{1}<c_{0}\) in Theorem 3.1; profiles have been shifted so that \(\varphi(\xi_{\gamma})=\gamma\).
**Remark 3.1**.: _When \(f\) is strictly concave we have_
\[\inf_{[0,\gamma]}\delta(f,\alpha)=\delta(f,\alpha)(\gamma)=\frac{f(\gamma)-f( \alpha)}{\gamma-\alpha},\quad\sup_{[\gamma,1]}\delta(f,\beta)=\delta(f,\beta)( \gamma)=\frac{f(\gamma)-f(\beta)}{\gamma-\beta}, \tag{3.8}\]
_and then_
\[\inf_{[0,\gamma]}\delta(f,\alpha)-\sup_{[\gamma,1]}\delta(f,\beta)=\frac{f( \gamma)-f(\alpha)}{\gamma-\alpha}-\frac{f(\gamma)-f(\beta)}{\gamma-\beta}>0. \tag{3.9}\]
The following result shows, in particular, how far from zero the difference in (3.7) must be in order to have solutions.
**Corollary 3.2** (Sufficient condition).: _We have the following results._
* _Assume_ \[\inf_{[0,\gamma]}\delta\left(f,\alpha\right)-\sup_{[\gamma,1]}\delta\left(f, \beta\right)>2\sup_{[0,\gamma]}\sqrt{\Delta\left(Dg,\alpha\right)}+2\sup_{[ \gamma,1]}\sqrt{\Delta\left(Dg,\beta\right)}.\] (3.10) _Then, Equation (_1.1_) admits wavefronts satisfying (_1.2_), and_ \(\mathcal{J}\) _is a bounded interval._
* _We have_ 1. _either_ \(\mathcal{J}\subset(0,\infty)\) _or_ \(\mathcal{J}=\emptyset\)_, in the case_ \[\max\left\{\sup_{[\gamma,1]}\delta(f,\beta),\dot{f}(\beta)+2\sqrt{\dot{D}( \beta)g(\beta)}\right\}>0;\] (3.11) 2. _either_ \(\mathcal{J}\subset(-\infty,0)\) _or_ \(\mathcal{J}=\emptyset\)_, in the case_ \[\min\left\{\inf_{[0,\gamma]}\delta(f,\alpha),\dot{f}(\alpha)-2\sqrt{\dot{D}( \alpha)g(\alpha)}\right\}<0.\] (3.12)
In the proof of Theorem 3.1 we will reduce the existence of a wavefront to equation (1.1) satisfying (1.2) to the investigation of a solution \(z\) to the following _singular_ first-order problem in the interval \([0,1]\):
\[\begin{cases}\dot{z}(\varphi)=\dot{f}(\varphi)-c-\frac{D(\varphi)g(\varphi)}{z (\varphi)}&\text{in}\ \ (0,\alpha)\cup(\alpha,\beta)\cup(\beta,1),\\ z<0&\text{in}\ \ (0,\alpha)\cup(\beta,1),\\ z>0&\text{in}\ \ (\alpha,\beta),\\ z(0)=z(\alpha)=z(\beta)=z(1)=0.\end{cases} \tag{3.13}\]
By a solution to (3.13) we mean a function \(z(\varphi)\) which is continuous on \([0,1]\) and satisfies the equation (3.13)\({}_{1}\) in integral form, i.e.,
\[z(\varphi)=f(\varphi)-c\varphi-\int_{0}^{\varphi}\frac{D(\sigma)g(\sigma)}{z( \sigma)}\,d\sigma,\qquad\varphi\in[0,1].\]
Notice that we exploited here the assumption \(f(0)=0\). It is clear that such a \(z\) belongs to \(C^{1}\left((0,1)\setminus\{\alpha,\beta\}\right)\). To solve problem (3.13) we divide it into four subproblems, which
correspond to the subintervals \([0,\alpha]\), \([\alpha,\gamma]\), \([\gamma,\beta]\) and \([\beta,1]\) of the interval \([0,1]\). In order to have a unified treatment of any of these problems, we now collect results from [2, Lemma 4.1, Corollary 4.1, Remark 4.1] for the problem
\[\begin{cases}\dot{z}(\varphi)=h(\varphi)-c-\frac{Q(\varphi)}{z(\varphi)},& \varphi\in(\sigma_{1},\sigma_{2}),\\ z(\varphi)<0,&\varphi\in(\sigma_{1},\sigma_{2}).\end{cases} \tag{3.14}\]
**Lemma 3.1**.: _Let \(h\) and \(Q\) be continuous functions on \([\sigma_{1},\sigma_{2}]\), with \(Q>0\) in \((\sigma_{1},\sigma_{2})\) and \(Q(\sigma_{1})=Q(\sigma_{2})=0\). Then we have:_
1. _For any_ \(c\in\mathbb{R}\) _there exists a unique_ \(\zeta_{c}\in C^{0}[\sigma_{1},\sigma_{2}]\cap C^{1}\left(\sigma_{1},\sigma_{2}\right)\) _satisfying (_3.14_) and_ \(\zeta_{c}(\sigma_{2})=0\)_._
2. _Denote_ \(c^{*}(\sigma_{1},\sigma_{2}):=\sup\left\{c\in\mathbb{R}:\zeta_{c}(\sigma_{1} )<0\right\}\in(-\infty,\infty]\)_. If_ \(c^{*}(\sigma_{1},\sigma_{2})<\infty\)_, then for every_ \(c>c^{*}(\sigma_{1},\sigma_{2})\)_, there exists_ \(\beta(c)\in(-\infty,0)\) _such that there is a unique_ \(z_{c,s}\in C^{0}[\sigma_{1},\sigma_{2}]\cap C^{1}\left(\sigma_{1},\sigma_{2}\right]\) _satisfying (_3.14_),_ \(z_{c,s}(\sigma_{1})=0\)_,_ \(z_{c,s}(\sigma_{2})=s<0\)_, if and only if_ \(s\geq\beta(c)\)_. Moreover, we have_ \[\max\left\{\sup_{(\sigma_{1},\sigma_{2}]}\delta\left(f,\sigma_{1} \right),h(\sigma_{1})+2\sqrt{\dot{Q}(\sigma_{1})}\right\}\leq c^{*}(\sigma_{1 },\sigma_{2})\leq\\ \sup_{(\sigma_{1},\sigma_{2}]}\delta\left(f,\sigma_{1}\right)+2 \sup_{(\sigma_{1},\sigma_{2}]}\sqrt{\Delta(Q,\sigma_{1})},\] (3.15) _where_ \(f(\varphi):=\int_{0}^{\varphi}h(\sigma)\,d\sigma\)_,_ \(\varphi\in[0,1]\)_._
3. _If_ \(\dot{Q}(0)\) _exists, then_ \(c^{*}(\sigma_{1},\sigma_{2})\) _is finite._
Conditions (3.15) also exploit estimates on the threshold speeds recently proposed in [12]. With the help of Lemma 3.1, in the proof of the following proposition we analyze the subproblems we mentioned above.
**Proposition 3.1**.: _Problem (3.13) is solvable if \(c_{1}<c_{0}\) and it is not solvable if \(c_{1}>c_{0}\); in the former case we have \(c\in[c_{1},c_{0}]\)._
_Estimates for the thresholds \(c_{0,\alpha}^{*}\), \(c_{\alpha,\gamma}^{*}\), \(c_{\gamma,\beta}^{*}\), \(c_{\beta,1}^{*}\) are provided by (3.19), (3.22), (3.24), (3.26), respectively._
Proof.: The proof analyzes the restriction of problem (3.13) to the four above intervals.
Case \([0,\alpha]\).For \(\varphi\in[0,\alpha]\) we define
\[h_{1}(\varphi):=-\dot{f}(-\varphi+\alpha),\quad D_{1}(\varphi):=D(-\varphi+\alpha),\quad g_{1}(\varphi):=-g(-\varphi+\alpha).\]
We also define \(w(\varphi):=z(-\varphi+\alpha)\) and \(\tilde{c}_{1}:=-c\). Then, when restricted to the interval \([0,\alpha]\), problem (3.13) is equivalent to
\[\begin{cases}\dot{w}=h_{1}-\tilde{c}_{1}-D_{1}g_{1}/w&\text{in}\ \ (0,\alpha),\\ w<0&\text{in}\ \ (0,\alpha),\\ w(0)=w(\alpha)=0.\end{cases} \tag{3.16}\]
Lemma 3.1 applies with \(\sigma_{1}=0\), \(\sigma_{2}=\alpha\) and \(Q=D_{1}g_{1}\): since \(Q\) is differentiable in \(0\), then Lemma 3.1 provides a threshold \(\tilde{c}^{*}_{0,\alpha}\) such that (3.16) is solvable iff \(\tilde{c}_{1}\geq\tilde{c}^{*}_{0,\alpha}\), i.e. \(c\leq-\tilde{c}^{*}_{0,\alpha}=:c^{*}_{0,\alpha}\). By (3.15) we obtain
\[\max\left\{\sup_{(0,\alpha]}\delta(f_{1},0),h_{1}(0)+2\sqrt{\dot{D}_{1}(0)g_{1 }(0)}\right\}\leq\tilde{c}^{*}_{0,\alpha}\leq\sup_{(0,\alpha]}\delta(f_{1},0)+2 \sup_{(0,\alpha]}\sqrt{\Delta(D_{1}g_{1},0)}, \tag{3.17}\]
with
\[f_{1}(\varphi)=\int_{0}^{\varphi}h_{1}(\sigma)\,d\sigma=\int_{0}^{\varphi}- \dot{f}(-\sigma+\alpha)\,d\sigma=-\int_{\alpha-\varphi}^{\alpha}\dot{f}(s)\, ds=f(\alpha-\varphi)-f(\alpha),\]
whence
\[\sup_{s\in(0,\alpha]}\delta(f_{1},0)(s)=\sup_{s\in(0,\alpha]}\frac{f_{1}(s)}{ s}=\sup_{s\in[0,\alpha)}\frac{f(s)-f(\alpha)}{\alpha-s}.\]
Moreover we have
\[\int_{0}^{s}\frac{D_{1}(\varphi)g_{1}(\varphi)}{\varphi}\,d\varphi=\int_{0}^{s }-\frac{D(-\varphi+\alpha)g(-\varphi+\alpha)}{\varphi}\,d\varphi=\int_{\alpha- s}^{\alpha}-\frac{D(\sigma)g(\sigma)}{\alpha-\sigma}\,d\sigma.\]
Then formula (3.17) can be written as
\[\max\left\{\sup_{[0,\alpha)}\left[-\delta(f,\alpha)\right],-\dot{f}(\alpha)+ 2\sqrt{\dot{D}(\alpha)g(\alpha)}\right\}\leq\tilde{c}^{*}_{0,\alpha}\leq\sup_ {[0,\alpha)}\left[-\delta(f,\alpha)\right]+2\sup_{[0,\alpha)}\sqrt{\Delta(Dg, \alpha)}. \tag{3.18}\]
Hence,
\[\inf_{[0,\alpha)}\delta(f,\alpha)-2\sup_{[0,\alpha)}\sqrt{\Delta(Dg,\alpha)} \leq c^{*}_{0,\alpha}\leq\min\left\{\inf_{[0,\alpha)}\delta(f,\alpha),\dot{f}( \alpha)-2\sqrt{\dot{D}(\alpha)g(\alpha)}\right\}. \tag{3.19}\]
Case \([\alpha,\gamma]\).We denote
\[h_{2}(\varphi):=-\dot{f}(\varphi),\quad D_{2}(\varphi):=-D(\varphi),\quad g_{2}(\varphi):=-g(\varphi).\]
We also define \(w(\varphi):=-z(\varphi)\) and \(c_{2}:=-c\). Then problem (3.13), when restricted to the interval \([\alpha,\gamma]\), becomes
\[\begin{cases}\dot{w}=h_{2}-c_{2}-D_{2}g_{2}/w&\text{ in }\ (\alpha,\gamma),\\ w<0&\text{ in }\ (\alpha,\gamma],\\ w(\alpha)=0.\end{cases} \tag{3.20}\]
By Lemma 3.1 we deduce the existence of a threshold \(\tilde{c}^{*}_{\alpha,\gamma}\) such that (3.20) is solvable iff \(c_{2}\geq\tilde{c}^{*}_{\alpha,\gamma}\), i.e. \(c\leq-\tilde{c}^{*}_{\alpha,\gamma}=:c^{*}_{\alpha,\gamma}\). Moreover, by (3.15) we deduce
\[\max\left\{\sup_{(\alpha,\gamma]}\delta(f_{2},\alpha),h_{2}(\alpha)+2\sqrt{ \dot{D}_{2}(\alpha)g_{2}(\alpha)}\right\}\leq\tilde{c}^{*}_{\alpha,\gamma}\leq \sup_{(\alpha,\gamma]}\delta(f_{2},\alpha)+2\sup_{(\alpha,\gamma]}\sqrt{ \Delta(D_{2}g_{2},\alpha)},\]
where
\[f_{2}(\varphi):=\int_{\alpha}^{\varphi}h_{2}(\sigma)\,d\sigma=-f(\varphi)+f( \alpha),\,\varphi\in[\alpha,\gamma].\]
Whence, by returning to the variables \(h,D,g\), we find
\[\max\left\{\sup_{(\alpha,\gamma]}\left\{-\delta(f,\alpha)\right\},-\dot{f}( \alpha)+2\sqrt{\dot{D}(\alpha)g(\alpha)}\right\}\leq\tilde{c}_{\alpha,\gamma} ^{*}\leq\sup_{(\alpha,\gamma]}\left\{-\delta(f,\alpha)\right\}+2\sup_{(\alpha, \gamma]}\sqrt{\Delta(Dg,\alpha)}. \tag{3.21}\]
Hence,
\[\inf_{(\alpha,\gamma]}\delta(f,\alpha)-2\sup_{(\alpha,\gamma]}\sqrt{\Delta(Dg, \alpha)}\leq c_{\alpha,\gamma}^{*}\leq\min\left\{\inf_{(\alpha,\gamma]}\delta( f,\alpha),\dot{f}(\alpha)-2\sqrt{\dot{D}(\alpha)g(\alpha)}\right\}. \tag{3.22}\]
Case \([\gamma,\beta]\).For \(\varphi\in[\gamma,\beta]\) we define
\[h_{3}(\varphi):=\dot{f}(-\varphi+\gamma+\beta),\quad D_{3}(\varphi):=-D(-\varphi+\gamma+\beta),\quad g_{3}(\varphi):=g(-\varphi+\gamma+\beta).\]
We also denote \(w(\varphi):=-z(-\varphi+\gamma+\beta)\). Then in the interval \([\gamma,\beta]\) problem (3.13) can be written as
\[\begin{cases}\dot{w}=h_{3}-c-D_{3}g_{3}/w&\text{ in }\ (\gamma,\beta),\\ w<0&\text{ in }\ [\gamma,\beta),\\ w(\beta)=0.\end{cases} \tag{3.23}\]
By Lemma 3.1 problem (3.23) is solvable iff \(c\geq c_{\gamma,\beta}^{*}\), for some threshold \(c_{\gamma,\beta}^{*}\). Upper and lower estimates for \(c_{\gamma,\beta}^{*}\) can be obtained, as in the previous cases, by applying (3.15). In conclusion we find the estimates
\[\max\left\{\sup_{[\gamma,\beta)}\delta(f,\beta),\dot{f}(\beta)+2\sqrt{\dot{D}( \beta)g(\beta)}\right\}\leq c_{\gamma,\beta}^{*}\leq\sup_{[\gamma,\beta)} \delta(f,\beta)+2\sup_{[\gamma,\beta)}\sqrt{\Delta(Dg,\beta)}. \tag{3.24}\]
Case \([\beta,1]\).In this case we directly apply Lemma 3.1: the problem
\[\begin{cases}\dot{z}=\dot{f}-c-Dg/z&\text{ in }\ (\beta,1),\\ z<0&\text{ in }\ (\beta,1),\\ z(\beta)=z(1)=0,\end{cases} \tag{3.25}\]
is solvable iff \(c\geq c_{\beta,1}^{*}\), for some \(c_{\beta,1}^{*}\). Again, estimates for \(c_{\beta,1}^{*}\) are deduced by (3.15):
\[\max\left\{\sup_{(\beta,1]}\delta(f,\beta),\dot{f}(\beta)+2\sqrt{\dot{D}( \beta)g(\beta)}\right\}\leq c_{\beta,1}^{*}\leq\sup_{(\beta,1]}\delta(f,\beta) +2\sup_{(\beta,1]}\sqrt{\Delta(Dg,\beta)}. \tag{3.26}\]
This concludes the analysis of the restrictions of problem (3.13) to the four above intervals. Condition \(c_{1}\leq c_{0}\) is the requirement that there is a common admissible speed \(c\) for the above subproblems. In this case \(c\in[c_{1},c_{0}]\)
**Remark 3.2**.: Since neither \(D\) nor \(g\) vanishes in the interior of the above sub-intervals, one finds \(\varphi^{\prime}<0\) if \(\varphi\in(0,1)\setminus\{\alpha,\gamma,\beta\}\) (see [2, Proposition 3.1(ii)]). Moreover, by [4, Theorem 2.9 (i)], we deduce that the profile never reaches the value \(1\) for a finite value of \(\xi\); the same result holds for the value \(0\), by exploiting again [4, Theorem 2.9 (i)] after the change of variables that led to (3.16). Finally, we have \(\varphi^{\prime}(\gamma)<0\) by the second part of the proof of Proposition 3.1(ii) in [2]. As a consequence, the profile \(\varphi\) is strictly monotone.
Proof of Theorem 3.1.: The proof follows an argument based on the reduction of (1.1)-(1.2) to (3.13), see [2].
First, assume \(c_{1}<c_{0}\). We argue separately in the four sub-intervals where \(Dg\neq 0\) and then we put together what we found. Thus, let \(z\) be the solution of (3.13) associated to some \(c\in[c_{1},c_{0}]\). Define \(\varphi_{1,\beta}\), \(\varphi_{\beta,\gamma}\), \(\varphi_{\gamma,\alpha}\) and \(\varphi_{\alpha,0}\) as the solutions of
\[\varphi^{\prime}=\frac{z(\varphi)}{D(\varphi)}, \tag{3.27}\]
with the initial data (respectively)
\[\varphi_{1,\beta}(0)=\frac{1+\beta}{2},\quad\varphi_{\beta,\gamma}=\frac{ \beta+\gamma}{2},\quad\varphi_{\gamma,\alpha}(0)=\frac{\gamma+\alpha}{2},\quad \varphi_{\alpha,0}(0)=\frac{\alpha}{2}.\]
Since the right-hand side of (3.27) is locally of class \(C^{1}\), then \(\varphi_{1,\beta}\), \(\varphi_{\beta,\gamma}\), \(\varphi_{\gamma,\alpha}\) and \(\varphi_{\alpha,0}\), exist and are unique in their respective maximal existence intervals.
We focus on the pasting of \(\varphi_{1,\beta}\), \(\varphi_{\beta,\gamma}\) at \(\beta\). Let \(\varphi_{1,\beta}\), \(\varphi_{\beta,\gamma}\) be maximally defined in \((\xi_{1},\xi_{\beta}^{1})\subset\mathbb{R}\), \((\xi_{\beta}^{2},\xi_{\gamma}^{1})\subset\mathbb{R}\), with
\[-\infty\leq\xi_{1}<0<\xi_{\beta}^{1}\leq\infty,\quad-\infty\leq\xi_{\beta}^{2} <0<\xi_{\gamma}^{1}\leq\infty,\]
and satisfying
\[\lim_{\xi\to\xi_{1}^{+}}\varphi_{1,\beta}(\xi)=1,\ \lim_{\xi\to\{\xi_{\beta}^{1}\}^{-}}\varphi_{1,\beta}(\xi)=\beta,\quad\text{ and }\quad\lim_{\xi\to\{\xi_{\beta}^{2}\}^{+}}\varphi_{\beta,\gamma}(\xi)=\beta,\ \lim_{\xi\to\{\xi_{\gamma}^{1}\}^{-}}\varphi_{\beta,\gamma}(\xi)=\gamma.\]
In order to glue together \(\varphi_{1,\beta}\) and \(\varphi_{\beta,\gamma}\) (after space shifts), we need to prove \(\xi_{\beta}^{1}\in\mathbb{R}\) and \(\xi_{\beta}^{2}\in\mathbb{R}\). We have
\[\lim_{\xi\to\{\xi_{\beta}^{2}\}^{+}}\varphi_{\beta,\gamma}^{\prime}(\xi)=\lim_ {\xi\to\{\xi_{\beta}^{2}\}^{+}}\frac{z\left(\varphi_{\beta,\gamma}(\xi)\right) }{D\left(\varphi_{\beta,\gamma}(\xi)\right)}=\lim_{s\to\beta^{-}}\frac{z(s)}{ D(s)}=\lim_{t\to\gamma^{+}}\frac{w(t)}{D_{3}(t)},\]
with \(w\) and \(D_{3}\) as in (3.23). The last limit is essentially discussed in the proof of [4, Theorem 2.5]; the only difference is that the interval \([0,1]\) appearing there is now replaced by \([\gamma,\beta]\). Reasoning as there we obtain that
\[\lim_{t\to\gamma^{+}}\frac{w(t)}{D_{3}(t)}\in[-\infty,0);\]
hence \(\xi_{\beta}^{2}\) is a real value. By a similar reasoning, this time directly applied to \(z(\varphi_{1,\beta})\) and \(D(\varphi_{1,\beta})\), we can prove that also \(\xi_{\beta}^{1}\) is a real value.
The remaining pastings are exactly proved as in the proof of [2, Proposition 3.2] and we refer the reader to that paper for details. To this aim, in particular, we need that \(z(\gamma)>0\), which is satisfied when \(c_{1}<c_{0}\) by Proposition 3.1. The proof of the first statement is complete.
We now prove the second statement. Suppose that (1.1)-(1.2) admits a profile \(\varphi\) associated to some speed \(c\in\mathbb{R}\). In particular, \(\varphi\) is decreasing and hence it can be decomposed into sub-profiles \(\varphi_{1,\beta}\), \(\varphi_{\beta,\gamma}\), \(\varphi_{\gamma,\alpha}\) and \(\varphi_{\alpha,0}\) connecting, respectively, \(\beta\) to \(1\), \(\gamma\) to \(\beta\), and so on. By Remark 3.2 we have \(\varphi^{\prime}<0\) if \(\varphi\in(0,1)\setminus\{\alpha,\gamma,\beta\}\). Therefore, \(\varphi_{1,\beta}\) is invertible for \(\varphi_{1,\beta}\in(\beta,1)\), \(\varphi_{\beta,\gamma}\) is invertible for \(\varphi_{\beta,\gamma}\in(\gamma,\beta)\), and so on. Let \(\zeta=\zeta(\varphi):(\beta,1)\to\mathbb{R}\) be the inverse function of \(\varphi_{1,\beta}\), and set
\[z(\varphi):=D(\varphi)\varphi_{1,\beta}^{\prime}\left(\zeta(\varphi)\right), \ \ \text{for}\ \ \varphi\in(\beta,1).\]
By direct computations, the function \(z\) solves \(\eqref{eq:z}_{1}\) in \((\beta,1)\) (where \(z\in C^{1}\)) and, by adapting [2, Lemma 3.1], it can be extended to a function of class \(C^{0}[\beta,1]\), still called \(z\). Also, as in [2, Lemma 3.1], we have \(z(1)=0\). Arguing similarly in the other sub-intervals, one finds that \(z\in C^{0}[0,1]\) is in \(C^{1}\) in \((0,\alpha)\cup(\alpha,\beta)\cup(\beta,1)\) and it satisfies (3.13). For more details we refer to the similar case presented in the proof of [2, Proposition 3.1_(ii)_], which applies because \(g\) satisfies [2, (2.2)]. According to Proposition 3.1 we obtain \(c_{1}\leq c_{0}\) and then also the second statement is proved.
**Remark 3.3**.: We now provide a simple argument showing why wavefronts should exist for suitable concave \(f\), in the case the drift \(\dot{f}\) is first positive and then negative. For \(\lambda>0\), let \(f\) be defined by \(\lambda u\) in \((0,\gamma)\) and \(-\lambda(u-2\gamma)\) in \((\gamma,1)\), so that \(f\) is Lipschitz continuous with \(\dot{f}=\lambda\) in \((0,\gamma)\) and \(\dot{f}=-\lambda\) in \((\gamma,1)\). In this case, the role of \(\lambda\) is to shift to the right (of magnitude \(+\lambda\)) the estimates for \(c_{0}\), as (3.19) and (3.22) show, and to shift to the left (of \(-\lambda\)) the estimates for \(c_{1}\) (see (3.24) and (3.26)). Hence, (3.2) holds true for \(\lambda\) large enough.
We denote by \(s_{0,\alpha}\), \(s_{\alpha,\gamma}\), \(s_{\gamma,\beta}\), \(s_{\beta,1}\), the lower bounds in (3.19), (3.22), (3.24), (3.26), respectively, and with \(\Sigma_{0,\alpha}\), \(\Sigma_{\alpha,\gamma}\), \(\Sigma_{\gamma,\beta}\), \(\Sigma_{\beta,1}\), the corresponding upper bounds. In other words we rewrite (3.19), (3.22), (3.24), (3.26) as
\[s_{0,\alpha}\leq c_{0,\alpha}^{*}\leq\Sigma_{0,\alpha},\quad s_{\alpha,\gamma }\leq c_{\alpha,\gamma}^{*}\leq\Sigma_{\alpha,\gamma},\quad s_{\gamma,\beta} \leq c_{\gamma,\beta}^{*}\leq\Sigma_{\gamma,\beta},\quad s_{\beta,1}\leq c_{ \beta,1}^{*}\leq\Sigma_{\beta,1}. \tag{3.28}\]
Define moreover
\[s_{0,\gamma}:=\inf_{[0,\gamma]}\delta(f,\alpha)-2\sup_{[0,\gamma]}\sqrt{ \Delta(Dg,\alpha)}\quad\text{ and }\quad\Sigma_{\gamma,1}:=\sup_{[\gamma,1]}\delta(f,\beta)+2\sup_{[\gamma,1]} \sqrt{\Delta(Dg,\beta)}.\]
Here above, the arguments of the supremums are not defined at \(\alpha\) and \(\beta\), respectively; of course, since \(f,D,g\in C^{1}\), we understand them as \(-\dot{f}(\alpha)\), \(\dot{D}(\alpha)g(\alpha)\), \(\dot{f}(\beta)\) and \(\dot{D}(\beta)g(\beta)\), respectively. Under this notation we immediately deduce the following result.
**Lemma 3.2**.: _If \(\Sigma_{\gamma,1}<s_{0,\gamma}\), then condition (3.2) is satisfied._
Proof.: According to the right-hand sides of the estimates (3.24) and (3.26) we have \(c_{1}\leq\max\{\Sigma_{\gamma,\beta}\), \(\Sigma_{\beta,1}\}\leq\Sigma_{\gamma,1}\). By the assumption \(\Sigma_{\gamma,1}<s_{0,\gamma}\) we obtain
\[c_{1}\leq\Sigma_{\gamma,1}<s_{0,\gamma}\leq\min\{s_{0,\alpha},\,s_{\alpha, \gamma}\}.\]
Because of (3.19) and (3.22) we deduce \(c_{1}<\min\{s_{0,\alpha},\,s_{\alpha,\gamma}\}\leq\min\{c_{0,\alpha}^{*},c_{ \alpha,\gamma}^{*}\}=c_{0}\).
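The quantities \(s_{0,\gamma}\) and \(\Sigma_{\gamma,1}\) can be estimated numerically by sampling; the following sketch does so for the biological model with the same illustrative placeholder parameters as above, and applies Lemma 3.2: if the computed \(\Sigma_{\gamma,1}\) is below \(s_{0,\gamma}\), then (3.2) holds and Theorem 3.1 yields wavefronts.

```python
import numpy as np

# placeholder model (2.10)-(2.12), illustrative values only
C_i, C_g, D_i, D_g, k_i, lam_i, lam_g = 0.5, -30.0, 8.0, 1.0, 1.5, 0.5, 1.0
r_i = k_i - lam_i
omega = np.sqrt((D_i - 4*D_g)/(D_i - D_g))
alpha, beta, gamma = 2/3 - omega/3, 2/3 + omega/3, r_i/(r_i + lam_g)

f  = lambda u: -(C_i*D_i + C_g*D_g)*u*(1-u)**2 - C_g*D_g*u*(1-u)
Dg = lambda u: (D_i*(1-4*u+3*u**2) + D_g*(4*u-3*u**2)) * (lam_g*u*(1-u) + (lam_i-lam_g-k_i)*u*(1-u)**2)

def sup_delta(F, phi0, a, b, n=400):
    phi = np.linspace(a, b, n)
    phi = phi[np.abs(phi - phi0) > 1e-4]
    return float(np.max((F(phi) - F(phi0))/(phi - phi0)))

def sup_Delta(F, phi0, a, b, n=400):
    vals = []
    for phi in np.linspace(a, b, n):
        if abs(phi - phi0) < 1e-4:
            continue
        psi = np.linspace(phi0, phi, 400)[1:]
        vals.append(np.mean((F(psi) - F(phi0))/(psi - phi0)))
    return float(max(vals))

s0g = -sup_delta(lambda u: -f(u), alpha, 0.0, gamma) - 2*np.sqrt(sup_Delta(Dg, alpha, 0.0, gamma))
Sg1 =  sup_delta(f, beta, gamma, 1.0) + 2*np.sqrt(sup_Delta(Dg, beta, gamma, 1.0))
print(f"s_0,gamma ~ {s0g:.3f},  Sigma_gamma,1 ~ {Sg1:.3f}")
if Sg1 < s0g:
    print("-> Lemma 3.2: c_1 < c_0, hence wavefronts exist for every c in (c_1, c_0)")
```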
Proof of Corollary 3.1.: If a wavefront exists, then necessarily \(c_{1}\leq c_{0}\) because of Theorem 3.1. Then, by (3.1) and (3.28) it follows
\[\max\left\{\sup_{[\gamma,1]}\delta(f,\beta),\dot{f}(\beta)+2\sqrt{ \dot{D}(\beta)g(\beta)}\right\}=\\ \max\{s_{\gamma,\beta},\,s_{\beta,1}\}\leq c_{1}\leq c_{0}\leq\min \{\Sigma_{0,\alpha},\,\Sigma_{\alpha,\gamma}\}=\\ \min\left\{\inf_{[0,\gamma]}\delta(f,\alpha),\dot{f}(\alpha)-2 \sqrt{\dot{D}(\alpha)g(\alpha)}\right\}, \tag{3.29}\]
which is (3.5).
Proof of Corollary 3.2.: First, notice that (3.10) is exactly \(\Sigma_{\gamma,1}<s_{0,\gamma}\) after trivial manipulations. As a consequence, Lemma 3.2 and Theorem 3.1 imply the existence of wavefronts.
To obtain (3.11), we impose \(\max\{s_{\gamma,\beta},\,s_{\beta,1}\}>0\) in (3.29), which in turn implies that \(c_{1}>0\); notice that the left-hand side in (3.29) is precisely the left-hand side of (3.11). Analogously, to obtain (3.12), we impose \(\min\{\Sigma_{0,\alpha},\,\Sigma_{\alpha,\gamma}\}<0\). This implies \(c_{0}<0\).
We now investigate when the set \(\mathcal{J}\) of admissible speeds contains positive values.
**Lemma 3.3**.: _Assume_
\[\inf_{[0,\gamma]}\delta(f,\alpha)>2\sup_{[0,\gamma]}\sqrt{\Delta(Dg,\alpha)}. \tag{3.30}\]
_Then either \(\mathcal{J}=\emptyset\) or \(\mathcal{J}\cap(0,+\infty)\neq\emptyset\)._
Proof.: Assume that \(\mathcal{J}\neq\emptyset\); then, according to Theorem 3.1, we have \(\mathcal{J}\cap(0,\infty)\neq\emptyset\) if and only if \(c_{0}>0\). By (3.19) and (3.22), we have
\[c_{0} =\min\{c_{0,\alpha}^{*},c_{\alpha,\gamma}^{*}\}\] \[\geq\min\Big{\{}\inf_{[0,\alpha)}\delta(f,\alpha)-2\sup_{[0, \alpha)}\sqrt{\Delta(Dg,\alpha)},\inf_{(\alpha,\gamma]}\delta(f,\alpha)-2\sup _{(\alpha,\gamma]}\sqrt{\Delta(Dg,\alpha)}\Big{\}}\] \[\geq\inf_{[0,\gamma]}\delta(f,\alpha)-2\sup_{[0,\gamma]}\sqrt{ \Delta(Dg,\alpha)}=s_{0,\gamma}.\]
If condition (3.30) is satisfied, then \(c_{0}>0\).
## 4 Existence of wavefronts in the model of biased movements
In this section we investigate the presence of wavefronts to the biased model (2.9) and prove their main qualitative properties. We make use of the results provided in Section 3 for a general reaction-diffusion-convection process.
Proof of Lemma 2.1.: The function \(D\) in (2.11) is a parabola with \(D(0)=D_{i}\) and \(D(1)=D_{g}\). We have \(\dot{D}(u)=-(D_{i}-D_{g})(4-6u)\), which vanishes iff \(u=\frac{2}{3}\), and \(D(\frac{2}{3})=\frac{1}{3}(-D_{i}+4D_{g})\). Then \(D\) is positive-negative-positive if and only if \(D_{i}>4D_{g}\); the case \(D_{g}=0\) is excluded because then \(D\) changes sign only once in \((0,1)\). Then the two zeros \(\alpha\) and \(\beta\) of \(D\) satisfy \(\eqref{eq:2.13}_{1,2}\). Moreover \(g(0)=0\) and \(g(1)=0\) if and only if \(k_{g}=0\); under this assumption, \(g\) also vanishes at \(\gamma\) defined in (2.14). Hence \(g\) satisfies condition (g) if and only if \(k_{g}=0\), \(\lambda_{g}>0\) and \(r_{i}>0\). The condition \(\gamma\in(\alpha,\beta)\) is then equivalent to (2.17).
In the following proofs, we often make use of the notation
\[p:=C_{i}D_{i}+C_{g}D_{g}\quad\text{and}\quad q:=C_{g}D_{g}. \tag{4.1}\]
We now rewrite formulas (2.10)-(2.12) by exploiting (2.13), (2.14) and (4.1):
\[f(u) =-pu(1-u)^{2}-qu(1-u), \tag{4.2}\] \[D(u) =3(D_{i}-D_{g})(u-\alpha)(u-\beta),\] (4.3) \[g(u) =(r_{i}+\lambda_{g})\cdot u(1-u)(u-\gamma). \tag{4.4}\]
**Remark 4.1**.: We point out that \(\dot{f}(0)=-(p+q)\) and \(\dot{f}(1)=q\); these quantities can be understood as the drift at very low and at maximum concentration, respectively.
**Remark 4.2**.: The movement velocity \(v=v(u)\) is defined by \(f(u)=:uv(u)\). Then \(v(u)=-pu^{2}+(2p+q)u-(p+q)=(1-u)\left(pu-(p+q)\right)\), and then \(v\) vanishes at the maximum density \(1\); it can also possibly vanish at \(u_{0}=\frac{p+q}{p}\) (i.e., if \(u_{0}\in[0,1)\)). This is analogous to similar models in collective movements [21, SS3.1]. Recalling that \(q<0\), it is easy to see that only the following cases may occur (for simplicity we do not include the case \(p+q=0\), when \(u_{0}=0\), or \(p=0\), when \(u_{0}\) is missing, for which slightly different results hold):
1. \(q<0<p+q\). Then \(v\) is concave, it is first negative, then positive; \(f\) is convex-concave.
2. \(p+q<0<p\). Then \(v\) is positive and concave; \(f\) is concave or convex-concave.
3. \(p<0\), \(q<0\). Then \(v\) is positive and convex; \(f\) is concave or concave-convex.
We assume conditions (2.15)-(2.17); in particular, the assumption \(C_{g}<0\) becomes \(q<0\). Under this notation, for \(\varphi,\varphi_{0}\in(0,1)\) we have, see (3.3),
\[\delta(f,\varphi)(\varphi_{0})=-(p+q)+(2p+q)(\varphi+\varphi_{0})-p(\varphi^ {2}+\varphi\varphi_{0}+\varphi_{0}^{2}). \tag{4.5}\]
Proof of Proposition 2.1.: We apply condition (3.6). Since \(\dot{f}(u)=-p(3u^{2}-4u+1)+q(2u-1)\) and the right-hand side of (3.6) is positive, condition (3.6) can hold only if \(-p(3\alpha^{2}-4\alpha+1)+q(2\alpha-1)>-p(3\beta^{2}-4\beta+1)+q(2\beta-1)\), that is
\[-p\left(3(\alpha^{2}-\beta^{2})-4(\alpha-\beta)\right)+2q(\alpha-\beta)>0. \tag{4.6}\]
By (2.13) we obtain that (4.6) is equivalent to \(2q(\alpha-\beta)=-\frac{4q}{3}\omega>0\), that is, \(q<0\). Hence, we deduce \(C_{g}<0\) since \(D_{g}>0\) by Lemma 2.1.
A sufficient condition for the existence of wavefronts to equation (2.9) is (3.10). The following result provides an upper estimate of the right-hand side of (3.10).
**Lemma 4.1**.: _We have_
\[2\sup_{[0,\gamma]}\sqrt{\Delta(Dg,\alpha)}+2\sup_{[\gamma,1]}\sqrt{\Delta(Dg,\beta)}\leq\sqrt{D_{g}\lambda_{g}}\sqrt{d-1}\left(\sqrt{\mu(2+\omega)}+\sqrt{1+\omega}\right).\]
Proof.: By (4.3) we have \(D(\varphi)g(\varphi)=3(D_{i}-D_{g})(r_{i}+\lambda_{g})\varphi(\varphi-\alpha)( \varphi-\gamma)(\varphi-\beta)(1-\varphi)\). Then, for \(\varphi\in[0,\gamma]\) we obtain
\[\frac{D(\varphi)g(\varphi)}{\varphi-\alpha} \leq\frac{3}{4}(D_{i}-D_{g})(r_{i}+\lambda_{g})(\varphi-\gamma)( \varphi-\beta)\leq\frac{3}{4}(D_{i}-D_{g})(r_{i}+\lambda_{g})\gamma\beta\] \[=\frac{1}{4}r_{i}(D_{i}-D_{g})(2+\omega). \tag{4.7}\]
By (3.4) and (4.7) we deduce
\[2\sup_{[0,\gamma]}\sqrt{\Delta(Dg,\alpha)}\leq\sqrt{D_{g}}\sqrt{r_{i}(d-1)(2+ \omega)}. \tag{4.8}\]
With a similar reasoning we have that
\[2\sup_{[\gamma,1]}\sqrt{\Delta(Dg,\beta)}\leq\sqrt{D_{g}}\sqrt{\lambda_{g}(d- 1)(1+\omega)}, \tag{4.9}\]
since we have \((\varphi-\gamma)(\varphi-\alpha)\leq(1-\gamma)(1-\alpha)\), for \(\varphi\in[\gamma,1]\), because \((r_{i}+\lambda_{g})(1-\gamma)=\lambda_{g}\). We complete the proof by combining (4.8) and (4.9).
### A strictly concave convective term
The left-hand side of (3.10) takes a simple form when \(f\) is strictly concave (see Remark 3.1); for this reason we first consider this case, see Figure 8. The following result characterizes the strict concavity of the function \(f\), see (2.21).
**Lemma 4.2**.: _The function \(f\) in (2.10) is strictly concave if and only if (2.21) is satisfied._
Proof.: By (4.2) we compute \(\ddot{f}(u)=-6pu+4p+2q\); therefore \(\ddot{f}<0\) in \((0,1)\) if and only if
\[-3pu+2p+q<0,\ \ \text{for any}\ \ u\in(0,1). \tag{4.10}\]
The line \(-3pu+2p+q=0\) connects the points \((0,2p+q)\) and \((1,-p+q)\). We remark that \(2p+q=-p+q=0\) is not possible since \(q<0\) by conditions (2.15)-(2.17) and (4.1). Hence, (4.10) holds if and only if
\[\begin{cases}2p+q\leq 0,\\ -p+q\leq 0.\end{cases} \tag{4.11}\]
Conditions (4.11) hold if and only if \(q\leq p\leq-q/2\), which is equivalent to (2.21).
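A quick numerical cross-check of Lemma 4.2 (with illustrative values of the parameters, not taken from the paper): the sign of \(\ddot{f}\) on a grid is compared with the criterion \(0\leq sd\leq 3/2\).

```python
import numpy as np

def is_concave(C_i, C_g, D_i, D_g, n=1001):
    """Return (numerical concavity of f on [0,1], criterion (2.21))."""
    p, q = C_i*D_i + C_g*D_g, C_g*D_g
    u = np.linspace(0.0, 1.0, n)
    f_dd = -6*p*u + 4*p + 2*q                  # second derivative of f in (4.2)
    s, d = C_i/abs(C_g), D_i/D_g
    return bool(np.all(f_dd <= 1e-12)), bool(0 <= s*d <= 1.5)

# illustrative values only; the two answers agree in each case
for C_i in (0.0, 0.1, 0.5, 1.0):
    print(C_i, is_concave(C_i, C_g=-2.0, D_i=8.0, D_g=1.0))
```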
Figure 8: Plots of the functions \(D\) (dashed line), \(g\) (dashdotted line) and \(f\) (solid line) in the case \(f\) is strictly concave.
**Remark 4.3**.: From the proof of Lemma 4.2 we deduce that \(f\) is strictly convex iff \(-\frac{C_{g}D_{g}}{2}\leq C_{i}D_{i}+C_{g}D_{g}\leq C_{g}D_{g}\); this condition does not match with the assumption \(C_{g}<0\), which is necessary to have wavefronts to equation (2.9) satisfying (1.2) by Proposition 2.1. Indeed, the bare convexity of \(f\) in \([\alpha,\beta]\) is sufficient to hinder the existence of such wavefronts, because the right-hand side of (3.6) is strictly positive when \(D\) and \(g\) are as in (4.3), (4.4).
We now apply the sufficient condition (3.10) to the current case.
**Theorem 4.1**.: _If \(f\) is strictly concave and_
\[\frac{d-1}{\sqrt{d-4}}\frac{\sqrt{\mu(2+\omega)}+\sqrt{1+\omega}}{2\mu+5+sd( \mu-2)}(\mu+1)<\frac{2}{9}E_{g} \tag{4.12}\]
_holds, then equation (2.9) admits wavefronts satisfying condition (1.2)._
Proof.: In order to apply (3.10) we exploit Remark 3.1. Then, by exploiting (4.5), we compute
\[\delta(f,\alpha)(\gamma)-\delta(f,\beta)(\gamma) =(2p+q)(\alpha-\beta)-p(\alpha^{2}+(\alpha-\beta)\gamma-\beta^{2})\] \[=(\beta-\alpha)\left(p(\alpha+\beta-2+\gamma)-q\right),\]
whence, from \(\beta-\alpha=\frac{2}{3}\omega\) and \(\alpha+\beta=\frac{4}{3}\), we get
\[\inf_{[0,\gamma]}\delta(f,\alpha)-\sup_{[\gamma,1]}\delta(f,\beta)=\frac{2}{3 }\omega\left[p\left(\gamma-\frac{2}{3}\right)-q\right]. \tag{4.13}\]
By (4.1) and (2.14) we can write
\[p\left(\gamma-\frac{2}{3}\right)-q =C_{g}D_{g}\left(\frac{r_{i}}{r_{i}+\lambda_{g}}-\frac{5}{3} \right)+C_{i}D_{i}\left(\frac{r_{i}}{r_{i}+\lambda_{g}}-\frac{2}{3}\right)\] \[=\frac{1}{3(r_{i}+\lambda_{g})}\left(C_{g}D_{g}(-2r_{i}-5\lambda_ {g})+C_{i}D_{i}(r_{i}-2\lambda_{g})\right).\]
Therefore, when \(f\) is strictly concave we have
\[\inf_{[0,\gamma]}\delta(f,\alpha)-\sup_{[\gamma,1]}\delta(f,\beta)=\frac{2 \omega}{9(r_{i}+\lambda_{g})}\left(C_{g}D_{g}(-2r_{i}-5\lambda_{g})+C_{i}D_{i} (r_{i}-2\lambda_{g})\right).\]
By the above formula, Lemma 4.1, and (2.19), condition (4.12) implies (3.10).
**Corollary 4.1**.: _Under (2.21), condition (4.12) is satisfied if_
\[\sqrt{\mu(2+\omega)+(1+\omega)}\frac{d-1}{\sqrt{d-4}}<\frac{4}{9\sqrt{2}}E_{g}. \tag{4.14}\]
Proof.: By (2.21) we have \(2\mu+5+sd(\mu-2)=(2+sd)\mu+(5-2sd)\geq 2(\mu+1)\); so, condition (4.12) holds if
\[\left(\sqrt{\mu(2+\omega)}+\sqrt{1+\omega}\right)\frac{d-1}{\sqrt{d-4}}<\frac {4}{9}E_{g}.\]
In turn, this condition is satisfied if (4.14) holds.
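The two sufficient conditions can be evaluated directly; the sketch below does so for illustrative dimensionless values (not taken from the paper). As stated above, (4.14) is the stronger and simpler of the two: whenever it holds, so does (4.12).

```python
import math

def cond_412(omega, sd, mu, E_g, d):
    lhs = ((d-1)/math.sqrt(d-4)) * (math.sqrt(mu*(2+omega)) + math.sqrt(1+omega)) \
          * (mu+1) / (2*mu + 5 + sd*(mu-2))
    return lhs < (2/9)*E_g

def cond_414(omega, mu, E_g, d):
    lhs = math.sqrt(mu*(2+omega) + (1+omega)) * (d-1)/math.sqrt(d-4)
    return lhs < 4*E_g/(9*math.sqrt(2))

d, sd, mu, E_g = 8.0, 0.13, 1.0, 30.0          # illustrative values with 0 <= sd <= 3/2
omega = math.sqrt((d-4)/(d-1))
print("(4.14):", cond_414(omega, mu, E_g, d), "  (4.12):", cond_412(omega, sd, mu, E_g, d))
```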
**Remark 4.4**.: For fixed \(C_{g},D_{g}\), condition (4.14) (that is, (2.22)), together with (2.17), identifies the triangle \(\mathcal{T}_{g}(d)\) in (2.23). Therefore, under (2.21), if \((r_{i},\lambda_{g})\in\mathcal{T}_{g}(d)\) then the assumptions of Theorem 4.1 are satisfied and equation (2.9) admits wavefronts satisfying (1.2).
We now investigate the sign of the speed of wavefronts; this issue is important in the biological framework. We find below conditions in order that wavefronts with _positive_ speed exists and conditions assuring that every wavefront has _negative_ speed.
About the case of _positive speeds_, by (4.5), (2.14), (2.17) and (4.1) we obtain
\[\inf_{[0,\gamma]}\delta(f,\alpha)=\frac{f(\gamma)-f(\alpha)}{ \gamma-\alpha}=-(p+q)+(2p+q)(\alpha+\gamma)-p(\alpha^{2}+\alpha\gamma+\gamma^{2})\] \[=C_{g}D_{g}\left(-\frac{4}{9}-\frac{5}{9}\omega-\frac{\omega^{2}} {9}+\frac{7+\omega}{3}\gamma-\gamma^{2}\right)+C_{i}D_{i}\left(-\frac{1}{9}- \frac{2}{9}\omega-\frac{\omega^{2}}{9}+\frac{4+\omega}{3}\gamma-\gamma^{2}\right)\] \[=\frac{|C_{g}|D_{g}}{9}\left((1-sd)(\omega^{2}+9\,\gamma^{2}-3\, \omega\gamma)+(5-2\,sd)\omega-3(7-4\,sd)\gamma+4-sd\right)\] \[=:\frac{|C_{g}|D_{g}}{9}\tau(\omega,\gamma,sd). \tag{4.15}\]
Denote
\[\mathcal{R}:=\left\{(\omega,\gamma)\colon\sqrt{3}-1<\omega<1\ \ \text{and}\ \ \frac{2-\omega}{3}<\gamma<1-\frac{1}{\sqrt{3}}\right\}. \tag{4.16}\]
**Lemma 4.3**.: _We have \(\tau(\omega,\gamma,sd)>0\) for every \((\omega,\gamma)\in\mathcal{R}\) and \(0\leq sd\leq 3/2\)._
Proof.: First, it is easy to show that the function \(\partial_{sd}\tau(\omega,\gamma,sd)=-\omega^{2}-9\gamma^{2}+3\omega\gamma-2 \omega+12\gamma-1\) has no critical points in the triangle
\[\mathcal{T}:=\left\{(\omega,\gamma)\in\mathbb{R}^{2}:\,0<\omega<1\ \ \text{and}\ \ \frac{2-\omega}{3}<\gamma<\frac{2+\omega}{3}\right\},\]
which contains the set \(\mathcal{R}\), see Figure 9. Moreover, on \(\partial\mathcal{T}\) we have \(\partial_{sd}\tau>0\) and then
\[\partial_{sd}\tau(\omega,\gamma,sd)>0\ \ \ \ \text{for}\ (\omega,\gamma)\in \mathcal{T}. \tag{4.17}\]
Then, by the monotonicity property proved in (4.17), it is sufficient to prove that \(\tau(\omega,\gamma,0)>0\) for every \((\omega,\gamma)\in\mathcal{R}\). We have
\[\tau(\omega,\gamma,0)=\omega^{2}+9\,\gamma^{2}-3\,\omega\gamma+5\omega-21 \gamma+4=\left(\omega-\frac{3}{2}\gamma+\frac{5}{2}\right)^{2}+\frac{27}{4}(1 -\gamma)^{2}-9.\]
This quantity is positive if, in particular,
\[\omega-\frac{3}{2}\gamma+\frac{5}{2}>\frac{3\sqrt{3}}{2}\ \ \text{and}\ \ 1- \gamma>\frac{1}{\sqrt{3}}.\]
The second inequality implies the first one when \(\omega>\sqrt{3}-1\), and then \(\tau(\omega,\gamma,0)>0\) for every \((\omega,\gamma)\in\mathcal{R}\).
**Remark 4.5**.: We easily see that \((\omega,\gamma)\in\mathcal{R}\) iff \((r_{i},\lambda_{g})\in\tilde{\mathcal{R}}(d)\) and \(\sqrt{3}-1<\omega<1\), where
\[\tilde{\mathcal{R}}(d)=\left\{(r_{i},\lambda_{g})\in\mathbb{R}^{+}\times \mathbb{R}^{+}\colon\frac{1}{\sqrt{3}-1}<\frac{\lambda_{g}}{r_{i}}<\frac{1+ \omega}{2-\omega}\right\}.\]
**Theorem 4.2**.: _Assume \(f\) is strictly concave, \((r_{i},\lambda_{g})\in\tilde{\mathcal{R}}(d)\) and \(\sqrt{3}-1<\omega<1\). If (2.24) is satisfied, then either \(\mathcal{J}=\emptyset\) or \(\mathcal{J}\cap(0,+\infty)\neq\emptyset\)._
Proof.: By Remark 4.5, Lemma 4.3 applies and then \(\tau(\omega,\gamma,sd)>0\) if \((r_{i},\lambda_{g})\in\tilde{\mathcal{R}}(d)\), \(\sqrt{3}-1<\omega<1\) and \(0\leq sd\leq\frac{3}{2}\). Now, notice that by (4.8) it follows
\[\sup_{[0,\gamma]}\sqrt{\Delta(Dg,\alpha)}\leq\sqrt{r_{i}D_{g}(d-1)}. \tag{4.18}\]
Then, condition (2.24) implies (3.30) by (4.15) and (4.18).
**Remark 4.6**.: We now show that there is a non-empty intersection between the cone \(\tilde{\mathcal{R}}(d)\) in Remark 4.5 and the set of parameters described by Remark 4.4, for \(\sqrt{3}-1<\omega<1\), i.e., for \(d>4+2\sqrt{3}\sim 7.46\). In fact, notice that
\[\frac{1}{\sqrt{3}-1}>\frac{1-\omega}{2+\omega}\ \ \text{for every}\ \ 0<\omega<1.\]
Then it follows that \(\tilde{\mathcal{R}}(d)\cap\mathcal{T}_{g}(d)\neq\emptyset\) for \(d>4+2\sqrt{3}\), see Figure 10. The set \(\mathcal{T}_{g}(d)\) was introduced in Section 2. As a consequence, if \(\sqrt{3}-1<\omega<1\), \((r_{i},\lambda_{g})\in\tilde{\mathcal{R}}(d)\cap\mathcal{T}_{g}(d)\) and (2.24) are satisfied, then there are wavefronts to equation (2.9) satisfying (1.2) having positive speeds.
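The conditions collected in Remark 4.6 are easy to test numerically; the sketch below evaluates \(\tau\) from (4.15), the membership \((\omega,\gamma)\in\mathcal{R}\) of (4.16), and conditions (2.24) and (2.22), for illustrative parameter values (not taken from the paper) with \(d>4+2\sqrt{3}\) and \(0\leq sd\leq 3/2\).

```python
import math

def tau(omega, gamma, sd):                      # the quantity defined in (4.15)
    return ((1 - sd)*(omega**2 + 9*gamma**2 - 3*omega*gamma)
            + (5 - 2*sd)*omega - 3*(7 - 4*sd)*gamma + 4 - sd)

# illustrative values only: d > 4 + 2*sqrt(3), 0 <= sd <= 3/2
d, sd, r_i, lam_g, E_g = 10.0, 0.5, 1.0, 1.45, 40.0
omega = math.sqrt((d - 4)/(d - 1))
gamma, mu = r_i/(r_i + lam_g), r_i/lam_g

in_R = (math.sqrt(3) - 1 < omega < 1) and ((2 - omega)/3 < gamma < 1 - 1/math.sqrt(3))   # (4.16)
t = tau(omega, gamma, sd)
c224 = 18*math.sqrt(mu*(d - 1))/t < E_g                                                  # (2.24)
c222 = math.sqrt(mu*(2 + omega) + (1 + omega))*(d - 1)/math.sqrt(d - 4) < 4*E_g/(9*math.sqrt(2))  # (2.22), in the form (4.14)

print(f"tau={t:.3f}, (omega,gamma) in R: {in_R}, (2.24): {c224}, (2.22): {c222}")
if in_R and c224 and c222:
    print("-> wavefronts satisfying (1.2) with positive speed exist (Remark 4.6)")
```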
About the case of _negative speeds_ we have the following result.
**Theorem 4.3**.: _Assume \(f\) is strictly concave and_
\[\frac{\sqrt{(d-1)\omega(1+\omega)(2-\omega)\left((1+\omega)\mu-(2-\omega) \right)}}{(1+\omega)^{2}+sd(1-\omega^{2})-3}>E_{g}. \tag{4.19}\]
_Then either \(\mathcal{J}\subset(-\infty,0)\) or \(\mathcal{J}=\emptyset\)._
Proof.: First, we point out that the term \((1+\omega)r_{i}-(2-\omega)\lambda_{g}\) under the square root in (4.19) is positive because of (2.17). By (2.10) we compute
\[\dot{f}(\alpha) =-C_{g}D_{g}\left(3(\alpha-1)^{2}-1\right)+C_{i}D_{i}(1-\alpha)(3 \alpha-1)\] \[=\frac{C_{g}D_{g}\left(3-(1+\omega)^{2}\right)+C_{i}D_{i}(1- \omega^{2})}{3}.\]
By (4.3) and (4.4) we deduce \(\dot{D}(\alpha)=3(\alpha-\beta)(D_{i}-D_{g})=-2\omega(D_{i}-D_{g})\), and, by (2.14)
\[g(\alpha) =(r_{i}+\lambda_{g})\alpha(1-\alpha)(\alpha-\gamma)\] \[=\frac{(r_{i}+\lambda_{g})(2-\omega)(\omega+1)(2-\omega-3\gamma) }{27}\] \[=\frac{(2-\omega)(\omega+1)\left(-(\omega+1)r_{i}+(2-\omega) \lambda_{g}\right)}{27}.\]
Therefore we have
\[\dot{D}(\alpha)g(\alpha)=\frac{2}{27}(D_{i}-D_{g})\omega(2-\omega)(\omega+1) \left((1+\omega)r_{i}-(2-\omega)\lambda_{g}\right).\]
The proof is concluded by applying (3.12) and noticing that \(2\sqrt{2/3}\in(1,2)\).
**Corollary 4.2**.: _Under (2.21), condition (4.19) is satisfied if \(d<(5+2\sqrt{3})/2\)._
Proof.: For \(A(\omega):=\omega(1+\omega)(2-\omega)\) and \(B(\omega):=(1+\omega)\mu-(2-\omega)\), condition (4.19) is
\[\frac{\sqrt{d-1}}{E_{g}}\sqrt{A(\omega)B(\omega)}>(1+\omega)^{2}+sd(1-\omega^ {2})-3=:E(\omega,sd). \tag{4.20}\]
Condition (4.19) is satisfied if the right-hand side of (4.20) is negative. If \(C_{i}=s=0\) then this happens if \(\omega<\sqrt{3}-1\). In the general case, we notice that
\[E(\omega,sd)\leq E\left(\omega,\frac{3}{2}\right)=-\frac{1}{2}\omega^{2}+2 \omega-\frac{1}{2}:=\varphi(\omega).\]
Figure 10: The triangle \(\mathcal{T}_{g}(d)\) (thick black lines) and the cone \(\tilde{\mathcal{R}}(d)\), which is bounded from below by the red line and from above by a black line. Here \(|C_{g}|\sqrt{D_{g}}=6\) and \(d=10\).
We have \(\varphi(0)=-\frac{1}{2}\), \(\varphi(1)=1\), \(\varphi\) is an increasing function when \(\omega\in(0,1)\), and \(\varphi(\omega)=0\) for \(\omega\in(0,1)\) iff \(\omega=2-\sqrt{3}\). Then condition (4.19) is satisfied if \(\omega<2-\sqrt{3}\), i.e., for \(d<(5+2\sqrt{3})/2\sim 4.23\).
**Remark 4.7**.: Let us fix \(C_{g},D_{g}\). By Remark 4.4, for every \(s>0\) and \(d>4\) satisfying (2.21) the existence of wavefronts to (2.9) satisfying (1.2) holds for \((r_{i},\lambda_{g})\) in the triangle \(\mathcal{T}_{g}(d)\). For \(d\in(4,(5+2\sqrt{3})/2)\), every pair \((r_{i},\lambda_{g})\in\mathcal{T}_{g}(d)\) provides profiles, and all of them have _negative_ speeds.
**Remark 4.8**.: When \(\gamma\to\alpha\), i.e., when \(\gamma\to(2-\omega)/3\), we get \(\tau(\omega,\gamma,sd)\to 3E(\omega,sd)\), for \(E\) as in (4.20). Hence, if \(\gamma\sim\alpha\), the condition \(\tau<0\) implies, by (4.20), that only wavefronts with negative speeds are compatible with (2.9)-(1.2). This implies that the model only supports extinction.
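Analogously, condition (4.19) (read in the equivalent form (4.20), so that it holds trivially when the right-hand side of (4.20) is negative) can be checked as follows; the parameter values are again purely illustrative.

```python
import math

def cond_419(d, sd, mu, E_g):
    """Condition (4.19), read in the form (4.20)."""
    omega = math.sqrt((d - 4)/(d - 1))
    num = math.sqrt((d - 1)*omega*(1 + omega)*(2 - omega)*((1 + omega)*mu - (2 - omega)))
    den = (1 + omega)**2 + sd*(1 - omega**2) - 3
    if den <= 0:                      # right-hand side of (4.20) negative: (4.19) holds
        return True
    return num/den > E_g

# with 4 < d < (5 + 2*sqrt(3))/2 the condition holds for any E_g (Corollary 4.2)
print(cond_419(d=4.2, sd=0.5, mu=1.5, E_g=50.0))
```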
### A convective term which changes concavity
We now consider a convective term \(f\) as in (2.10) (see also (4.2)) which changes its concavity in \([0,1]\) and show that also in this case the model (2.9) can support wavefronts satisfying condition (1.2). Due to the definition of \(f\), a concavity change occurs iff \(p\neq 0\), and in this case only once, namely at \(\frac{2}{3}+\frac{q}{3p}\). Moreover, when this occurs, the concavity and the convexity are strict.
**Lemma 4.4**.: _Assume that \(f\) has an inflection point in \((0,1)\). Then:_
1. \(f\) _is first convex and then concave if and only if_ \(sd>\frac{3}{2}\)_._
2. \(f\) _is first concave and then convex if and only if_ \(s<0\)_._
Proof.: We argue as in the proof of Lemma 4.2. About _(i)_, the statement is equivalent to \(2p+q>0\) and \(-p+q<0\), i.e., \(-2p<q<p\); hence \(p>0\), and we conclude by (4.1) and (2.15)\({}_{1}\).
About _(ii)_, the statement is equivalent to \(2p+q<0\) and \(-p+q>0\), i.e., \(p<q<-2p\); hence \(p<0\) and \(s<0\) by (2.15)\({}_{1}\).
To simplify calculations, in the following we only consider the case when \(\gamma\), which is the inner zero of \(g\) and is given by (2.14), coincides with the inflection point of \(f\); i.e., we assume in the current section (without further mention)
\[\gamma=\frac{2}{3}+\frac{C_{g}D_{g}}{3(C_{g}D_{g}+C_{i}D_{i})}=\frac{3-2sd}{3( 1-sd)}. \tag{4.21}\]
Notice that the assumptions \(p\neq 0\) and \(r_{i}\neq 0\) are equivalent to \(sd\neq 1\) (because of (2.15)\({}_{1}\)) and \(sd\neq\frac{3}{2}\), respectively. Then
\[r_{i}=\left(2-\frac{3}{sd}\right)\lambda_{g}. \tag{4.22}\]
#### 4.2.1 The convex-concave case
We consider a function \(f\) which is first convex and then concave, with \(\gamma\) as inflection point, see Figure 5.
We recall that we are assuming \(\gamma\in(\alpha,\beta)\), see (2.17). We now check the implications of this condition on \(sd\), because \(\gamma\) also satisfies (4.21). By Lemma 4.4 _(i)_ and (2.15)\({}_{1}\) we obtain that \(C_{i}D_{i}+C_{g}D_{g}>0\) and hence \(\gamma<\frac{2}{3}<\beta\) by (4.21). On the other hand, the condition \(\gamma>\alpha\) is equivalent to \(sd>1+\frac{1}{\omega}>2\) because of (2.13) and (4.21), which strengthens the previous requirement \(sd>\frac{3}{2}\). Summing up, under the assumptions of the current case, the parameters \(sd\) and \(\gamma\) must satisfy the conditions
\[sd>1+\frac{1}{\omega}\qquad\text{and}\qquad\gamma\in\left(\frac{1}{3},\frac{2 }{3}\right). \tag{4.23}\]
We now consider the issue of the existence of profiles. By making use of (4.5), the left-hand side of (3.10) becomes
\[\inf_{[0,\gamma]}\delta(f,\alpha)-\sup_{[\gamma,1]}\delta(f,\beta) =\frac{f(\alpha)}{\alpha}-\frac{f(\gamma)-f(\beta)}{\gamma-\beta}\] \[=(2p+q)(\alpha-\beta-\gamma)+p(\beta^{2}-\alpha^{2}+\gamma\beta+ \gamma^{2})\] \[=C_{g}D_{g}H_{1}(\omega,\gamma)+C_{i}D_{i}H_{2}(\omega,\gamma)\] \[=|C_{g}|D_{g}\left(H_{1}(\omega,\gamma)+sdH_{2}(\omega,\gamma) \right), \tag{4.24}\]
where
\[H_{1}(\omega,\gamma):=-\gamma^{2}-\gamma\left(\frac{\omega-7}{3}\right)+\frac {10}{9}\omega\quad\text{ and }\quad H_{2}(\omega,\gamma):=\gamma^{2}+\gamma\left(\frac{\omega-4}{3} \right)-\frac{4}{9}\omega.\]
We now investigate the sign of (4.24): its positivity is necessary for (3.10) to hold. The set \(\mathcal{S}\) has been defined in (2.25).
**Proposition 4.1**.: _The quantity in (4.24) is positive for every \((\omega,sd)\in\mathcal{S}\)._
Proof.: We know that \(\gamma\), provided by (4.21), is entirely determined by \(sd\) and that it varies in \((\frac{1}{3},\frac{2}{3})\) by (4.23). However, to simplify computations, we treat \(\gamma\) in the current proof as an independent variable ranging in \((\frac{1}{3},\frac{2}{3})\).
First, we claim that for \(\omega\in(0,1)\) and \(\gamma\in(\frac{1}{3},\frac{2}{3})\) we have
\[H_{1}(\omega,\gamma)>\frac{2}{3}+\omega\quad\text{ and }\quad H_{2}(\omega, \gamma)>-\left(\frac{4+\omega}{6}\right)^{2}. \tag{4.25}\]
In fact, estimate (4.25)\({}_{1}\) follows because the function \(\gamma\mapsto H_{1}(\omega,\gamma)\) is increasing for \(\gamma\in(\frac{1}{3},\frac{2}{3})\). Concerning (4.25)\({}_{2}\), we have \(\min_{\gamma\in(\frac{1}{3},\frac{2}{3})}H_{2}(\omega,\gamma)=H_{2}(\omega,\frac{4-\omega}{6})=-(\frac{4+\omega}{6})^{2}\).
Next, according to (4.25) and since \(sd>0\) we have, for all \(\gamma\in(\frac{1}{3},\frac{2}{3})\),
\[H_{1}(\omega,\gamma)+sdH_{2}(\omega,\gamma)>\frac{2+3\omega}{3}-sd\left(\frac {4+\omega}{6}\right)^{2}. \tag{4.26}\]
The latter quantity is positive iff \(sd<\frac{12(2+3\omega)}{(4+\omega)^{2}}\). By (4.23)\({}_{1}\) we need \(\frac{12(2+3\omega)}{(4+\omega)^{2}}>1+\frac{1}{\omega}\), and this is equivalent to requiring \(\omega>\omega_{0}\), where \(\omega_{0}\sim 0.78\) is the only root of \(\omega^{3}-27\omega^{2}+16\) in the interval \((0,1)\).
The following result shows the existence of wavefronts for equation (2.9) satisfying (1.2) when \(f\) is first convex and then concave.
**Theorem 4.4**.: _Assume that \(f\) is convex in \([0,\gamma]\), concave in \([\gamma,1]\), and \((\omega,sd)\in\mathcal{S}\). If_
\[\frac{\sqrt{(d-1)}}{H_{1}\left(\omega,\gamma\right)+sdH_{2}\left(\omega,\gamma \right)}<\frac{E_{g}}{4} \tag{4.27}\]
_holds, then equation (2.9) has wavefronts satisfying condition (1.2)._
Proof.: According to (4.22), Lemma 4.1 and the fact that \(sd>0\), we have
\[2\sup_{[0,\gamma]}\sqrt{\Delta(Dg,\alpha)}+2\sup_{[\gamma,1]} \sqrt{\Delta(Dg,\beta)}\leq\sqrt{D_{g}}\sqrt{d-1}\left(\sqrt{r_{i}(2+\omega)}+ \sqrt{\lambda_{g}(1+\omega)}\right)\] \[= \sqrt{D_{g}}\sqrt{\lambda_{g}(d-1)}\left(\sqrt{\left(2-\frac{3}{ sd}\right)(2+\omega)}+\sqrt{1+\omega}\right)\] \[\leq \sqrt{D_{g}}\sqrt{\lambda_{g}(d-1)}\left(\sqrt{\left(6-\frac{9}{ sd}\right)}+\sqrt{2}\right)\leq 4\sqrt{D_{g}}\sqrt{\lambda_{g}(d-1)}. \tag{4.28}\]
Now, we assumed \((\omega,sd)\in\mathcal{S}\) and then \(H_{1}(\omega,\gamma)+sdH_{2}(\omega,\gamma)>0\) by Proposition 4.1. Hence, if (4.27) is satisfied, then condition (3.10) holds true by (4.24) and then (2.9) has wavefronts satisfying (1.2).
#### 4.2.2 The concave-convex case
We now assume that \(f\) is concave in \([0,\gamma]\) and convex in \([\gamma,1]\), with \(\gamma\) as inflection point, see Figure 5. Again, we show that (2.9) admits wavefronts satisfying (1.2) under some conditions.
We argue as in the convex-concave case. Lemma 4.4 _(ii)_ implies \(\gamma\in(2/3,1)\). Moreover, the condition \(\gamma<\beta\) is equivalent to \(sd<1-\frac{1}{\omega}<0\) by (2.13) and (4.21). Summing up, the parameters \(sd\) and \(\gamma\) must now satisfy the conditions
\[sd<1-\frac{1}{\omega}\quad\text{ and }\quad\gamma\in\left(\frac{2}{3},1\right). \tag{4.29}\]
We now compute the left-hand side of (3.10). According to (4.5) we have
\[\inf_{[0,\gamma]}\delta(f,\alpha)-\sup_{[\gamma,1]}\delta(f,\beta) =\frac{f(\gamma)-f(\alpha)}{\gamma-\alpha}-\frac{f(1)-f(\beta)}{ 1-\beta}\] \[=(2p+q)(\alpha-\beta+\gamma-1)+p(\beta^{2}-\alpha^{2}-\alpha \gamma-\gamma^{2}+\beta+1),\] \[=C_{g}D_{g}\tilde{H}_{1}(\omega,\gamma)+C_{i}D_{i}\tilde{H}_{2}( \omega,\gamma)\] \[=|C_{g}|D_{g}\left(\tilde{H}_{1}(\omega,\gamma)+sd\tilde{H}_{2}( \omega,\gamma)\right), \tag{4.30}\]
for
\[\tilde{H}_{1}(\omega,\gamma):=\gamma^{2}-\gamma\left(\frac{\omega+7}{3} \right)+\frac{7}{9}\omega+\frac{4}{3}\quad\text{ and }\quad\tilde{H}_{2}(\omega,\gamma):=-\gamma^{2}+\gamma \left(\frac{\omega+4}{3}\right)-\frac{\omega}{9}-\frac{1}{3}.\]
We now discuss the sign of (4.30); the set \(\tilde{\mathcal{S}}\) was defined in (2.26).
**Proposition 4.2**.: _The quantity in (4.30) is positive for \((\omega,sd)\in\tilde{\mathcal{S}}\)._
Proof.: Notice that, according to (4.21) \(\gamma\) depends on \(sd\); however, as in the proof of Proposition 4.1, we treat \(\gamma\) as an independent variable ranging in \((0,1)\).
First, we claim that for all \(\omega,\gamma\in(0,1)\) we have
\[\tilde{H}_{1}(\omega,\gamma)>\frac{4}{9}\omega\quad\text{ and }\quad\tilde{H}_{2}( \omega,\gamma)<\left(\frac{\omega+2}{6}\right)^{2}. \tag{4.31}\]
|
2306.03104 | Guided scenarios with simulated expert personae: a remarkable strategy
to perform cognitive work | Large language models (LLMs) trained on a substantial corpus of human
knowledge and literature productively work with a large array of facts from
that corpus. Surprisingly, they are also able to re-create the behaviors of
personae that are captured within the corpus. By forming teams of simulated
personae, supplying contexts that set the stage, and providing gentle prompts,
one can move through scenarios that elicit expert behavior to perform
meaningful cognitive work. The power of this strategy is demonstrated with two
examples, one attacking factuality of LLM responses and the other reproducing a
very recently published result in quantum optics. | David Van Buren | 2023-06-03T00:56:34Z | http://arxiv.org/abs/2306.03104v1 | # Guided scenarios with simulated expert personae: a remarkable strategy to perform cognitive work
###### Abstract
Large language models (LLMs) trained on a substantial corpus of human knowledge and literature productively work with a large array of facts from that corpus. Surprisingly, they are also able to re-create the behaviors of personae that are captured within the corpus. By forming teams of simulated personae, supplying contexts that set the stage, and providing gentle prompts, one can move through scenarios that elicit expert behavior to perform meaningful cognitive work. The power of this strategy is demonstrated with two examples, one attacking factuality of LLM responses and the other reproducing a very recently published result in quantum optics.
+
Footnote †: ©2023. California Institute of Technology. CL#23-2760. Government sponsorship acknowledged. Unreviewed preprint.
## 1 Introduction
A classical chatbot-assistant prompt begins with "You are a helpful AI...", setting up a scenario in which the generated token stream reflects helpfulness. The chatbot can be further preconditioned to take on a specialist role with prompts of the form "You are an expert..." [1] or "Act as a..." [2] to elicit an expert response. The advantage of expert role prompting vs simple prompting is substantial and recently has been demonstrated quantitatively [3, 4].
Another common use case for generative AI is to write fiction, and in particular dialog between fictional characters. The prompt "Imagine a conversation..." generates believable dialog with chatgpt4. Du et al [5] report that eliciting dialog in the form of a debate improves performance on a variety of benchmarks. LLMs are adept at dialog likely because they have been trained on a large corpus of human knowledge including works containing dialog [6] and so the underlying linguistic structure of conversation is well represented within their billions of parameters and network topology.
Recent work simulating game characters has generated remarkably human-like behavior, without any further programming than interfaces to services and natural language prompts [7].
When it comes to generating token streams about the real world, we often consider the LLM's information content to be key and often go to some trouble to train the models on specialized content so that they generate more useful factual responses than the out of the box model. When trained the models have not only learned a vast number of facts that can be retrieved by traveling down a statistically-driven path through semantic space, but also a set of behaviors affecting those paths. This learning arises from the described behaviors of real and fictitious personae (as well as natural and other processes).
The hypothesis explored here is that emergent human-like behaviors exhibited by LLMs represent a significant cognitive resource that can be tapped
to accomplish complex real-world tasks by executing scenarios recruiting simulated personae whose real-world or fictional words are represented in the training corpus. If this is true, then a strategy of training LLMs on the behaviors of specific people will permit the assembly and deployment of teams of expert simulated personae as cognitive assistants to perform a broad range of intellectual work. Because this approach scales indefinitely, the limiting factor may well be our ability to apply resources to realize the real-world potential of the outputs generated.
Some evidence that models fine-tuned with the intellectual behaviours of specific persons then present those behaviors is provided by Sawicki et al [8] where the authors were able to reproduce the poetic style of Walt Whitman.
## 2 Simulated personae dialog
As part of an interest in mitigating the limitations of LLMs in generating actionable token streams, namely the problems of confabulation (hallucination), validation, safety, information security, and bias, I was investigating deconfabulation, the task of removing confabulations from responses. The broad architecture of the approach to this task is to segment a response into a set of assertions and then assess the truth value of each assertion, delete those which cannot be determined to be true, and reassemble the remaining assertions in the style of the original response to form a deconfabulated response. A by-product of this processing is a set of validated assertions that are placed into the AI's memory store and so serve as a mechanism for learning when retrieved and placed in subsequent context frames. In this architecture the AI includes not only the LLM but also its validated memory, a semantic search mechanism, and read access to the web for queries. In principle the LLM could be further trained using these validated facts to improve its native factuality.

Figure 1: Synopsis of the thesis applied to a quantum optics problem using ChatGPT. A topic recent enough to not have been included in the LLMs training set is explored via a naive prompt and a prompt eliciting technical dialog between expert simulated personae. The simple prompt elicits a low quality answer while the guided scenario generates, with gentle prompting, a full solution of the problem, including copious specialist text, LaTeX mathematics and python code to generate a visualization of the phenomenon. Highlights shown.
Evaluating truthfulness of claims made in generated text is a challenging natural language inference task made easier by using vetted knowledge sources [9]. However vetted knowledge sources have limited covering factor over the entire set of claims that an LLM can produce.
The process adopted for validating assertions is heuristic and relies on establishing a consensus view of web content: (1) segment the assertion into a number of atomic claims, (2) for each claim perform a breadth-first search to select highest quality web sites, (3) for each claim perform a depth-second search within those high quality web sites using the assertion string as a query to duckduckgo or google to return relevant textual snippets, (4) add these snippets to the context frame for the gpt-3.5-turbo completion API [10] prompt along with easily stated instructions focusing on the behavior desired from specific simulated personae,
Imagine a dialog between Sherlock Holmes and his assistant Watson discussing whether the EVIDENCE given supports or refutes the CLAIM below in which they use their famous detective skills to come to a definite conclusion.
CLAIM:...
EVIDENCE:...
(5) reassemble the validated claims in the style of the original response to obtain the deconfabulated response. As a technical point, the segmentation, analysis, and synthesis steps (1, 4, and 5) are performed with the LLM.
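No implementation accompanies this description; the following is a minimal sketch of how steps (1)-(5) could be wired together. The callables `llm` and `web_search`, and the way the Holmes-Watson verdict is read off, are hypothetical placeholders rather than the actual pipeline or any particular API.

```python
def deconfabulate(response: str, llm, web_search) -> str:
    """Sketch of steps (1)-(5). llm(prompt) -> str and web_search(query) -> list[str]
    are assumed, hypothetical callables standing in for the real services."""
    # (1) segment the response into atomic claims (an LLM language task)
    claims = llm(
        f"List the atomic factual claims in the text below, one per line.\n\n{response}"
    ).splitlines()

    validated = []
    for claim in claims:
        # (2)-(3) breadth-first site selection, then a deeper search for snippets
        snippets = web_search(claim)[:5]
        evidence = "\n".join(snippets)

        # (4) guided scenario: Holmes and Watson weigh the evidence
        dialog = llm(
            "Imagine a dialog between Sherlock Holmes and his assistant Watson "
            "discussing whether the EVIDENCE given supports or refutes the CLAIM below "
            "in which they use their famous detective skills to come to a definite conclusion.\n"
            f"CLAIM: {claim}\nEVIDENCE: {evidence}"
        )
        verdict = llm(f"Does the dialog conclude the claim is supported? Answer yes or no.\n\n{dialog}")
        if verdict.strip().lower().startswith("yes"):
            validated.append(claim)

    # (5) reassemble the surviving claims in the style of the original response
    return llm(
        "Rewrite the following validated claims in the style of the original text.\n"
        f"ORIGINAL:\n{response}\n\nCLAIMS:\n" + "\n".join(validated)
    )
```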
This procedure leads to responses having the look and feel of prose containing purposeful dialog between the two fictional characters advancing the plot of a story in the style of Doyle's Sherlock Holmes series [11].
The rationale for the guided scenario prompt, rather than a prompt requesting a logical analysis with specific rules and criteria is that it is exactly the behavior and skills of Sherlock Holmes and Dr. Watson that are needed for this task. Furthermore, by asking for a dialog we provide a natural linguistic mechanism to produce a one-shot train of thought without needing to inject specialist knowledge in the prompt. This is a strategy that is descriptive of a scenario that will accomplish the cognitive work rather than a strategy that is prescriptive of the cognitive process itself. By crafting scenarios, participants, and goals we can transform computational tasks into LLM language tasks that offload specialist tasks to the model.
When executing this deconfabulation process success varied depending on the claim as might be expected for confabulations generated with different probabilities due to different weights of token generating confabulation paths laid down during training. Performance in trials would be considered good when a confabulation generated by the LLM 90% of the time is detected 90% of the time using this method. The Appendix contains a typical chat transcript and some statistical results regarding the success rate of detecting a confabulation obtained by running a number of trials.
This exploratory work supports the hypothesis that behaviors encoded in the LLM during training represent a significant cognitive resource distinct from the static information content of the model, and furthermore that this behavior can be exploited to perform useful cognitive work.
## 3 Guided scenarios strategy
To provide a starker contrast between knowledge and behavior we explore the hypothesis using a scenario that is unlikely to have relevant information contained in the training set, that is we explore a recent
topic emerging since the closing of the LLM training horizon. To begin, we set up a preliminary prompt to choose simulated personae in a given area of expertise who are known to the LLM. While this undoubtedly includes trained-in biases, it is a reproducible methodology. It is important that future LLMs reduce their biases. Doing so will make the broad range of behaviors encoded within the models more readily available for cognitive work.
Once the simulated personae are chosen, a scenario is developed that is designed to draw out their behavior as experts. A bare minimum of information post-dating the LLM training horizon is presented to the model to provide topical guidance. Furthermore, the scenario provides'stage directions' to the simulated personae to kick off the response. It turns out it is convenient to include props in the scenario that improve the quality of the response by providing semantic support for desired behaviors.
At the conclusion of a response, further prompting with stage directions, or even the addition of other simulated persona continues the narrative until a suitable stopping point is reached.
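As a concrete illustration of the strategy, a guided-scenario prompt can be assembled mechanically from its ingredients. The sketch below is illustrative only; the field names are not from the paper, and the example values merely echo the session described in the next section.

```python
def guided_scenario_prompt(personae, setting, topic_hint, props, stage_direction):
    """Assemble a guided-scenario prompt: scenario, props, and an opening stage direction.
    All argument names are illustrative, not an API defined in this paper."""
    names = " and ".join(personae)
    return (
        f"Scenario: {names} are {setting}. {topic_hint}\n"
        f"Props: {props}\n"
        f"{personae[0].split()[-1].upper()} ({stage_direction})"
    )

prompt = guided_scenario_prompt(
    personae=["Richard Feynman", "Emmy Noether"],
    setting="discussing some new papers in the physics department lounge",
    topic_hint=("They explore ideas arising from the title of a recent paper on "
                "double-slit diffraction in time, using only the notion that a double "
                "slit in time is a single slit in space open at two different moments."),
    props="A whiteboard and markers.",
    stage_direction="taking the marker and walking to the whiteboard",
)
print(prompt)
```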
## 4 Application - Exploring a new scientific idea
This section reports the results of an experiment to explore ideas arising from the title alone of a recent paper in physics describing a double slit experiment where the two slits are separated in the time dimension instead of the spatial dimension. The paper used is Tirole et al 2022, "Double-slit time diffraction at optical frequencies" [13], and was widely reported shortly after these experiments were begun. This paper extends the quantum mechanics double slit experiment to the time domain, a topic not researched within the training horizon of the LLMs used.
First, select the simulated personae via gpt-\(3.5\) using the OpenAI Playground [14]:
PROMPT:
Suggest the names of two deceased physicists, one a world-class expert in quantum mechanics and the other a world-class expert in symmetries in nature. One male, one female.
RESPONSE:
Sure! Here are the names of two deceased physicists: Richard Feynman: Feynman was an American physicist who made significant contributions to the development of quantum mechanics. He received the Nobel Prize in Physics in 1965 for his work on the development of quantum electrodynamics (QED), a theory that describes the interaction between light and matter.
Emmy Noether: Noether was a German mathematician and physicist who made groundbreaking contributions to the study of symmetries in nature. Her work on Noether's theorem established a fundamental connection between symmetries and conservation laws in physics. Despite facing significant barriers as a woman in academia during her time, she made important contributions to several fields, including abstract algebra and theoretical physics.
Now switch to a fresh conversation in OpenAI-Plus running GPT-4 to run the simulation. First we set up the scenario, add some props to facilitate mathematical notation in the generated tokens, and initiate the action. Annotations manually inserted after the fact describing the flow of events in the simulation are set off by _italics_ and omitted response dialog is indicated by ellipses.
PROMPT:
Scenario: Richard Feynman and Elly Noether are discussing some new papers in the physics department lounge. In particular, both their attentions were drawn to the recent paper in Nature by Tirole et al titled "Double-Slit time diffraction at Optical Frequencies" but unfortunately a graduate student has made off with the particular issue so they don't have access to the details. Rather than track it down, they decide to see if they can learn anything useful from exploring ideas arising from the title alone and the notion that a double slit in time is a single slit in space that is open at two different moments in time. Props: A whiteboard and markers. Use SSS and $ to delimit mathematical notation in the response. FEYNMAN (taking the marker and walking to the whiteboard) Let's write down the free-space wave
function of the photons going through the slit at different times...
_This prompt sets the scene, adds props, and suggests an initial line of thought at the sophistication level of an advanced undergraduate in the specialty area._
RESPONSE:
FEYNMAN (writing on the whiteboard): So, we know that the wavefunction for a photon in free space can be represented by a plane wave. Let's say the photon goes through the single slit at two different times, \(t_{1}\) and \(t_{2}\). The wavefunction of the photon at these two times would be:
\[\Psi(x,t_{1})=A_{1}e^{i(kx-\omega t_{1})}\]
\[\Psi(x,t_{2})=A_{2}e^{i(kx-\omega t_{2})}\]
NOETHER: Yes, that's correct. Now, we should consider the superposition of these two wavefunctions since the photon could go through the slit at either time. FEYNMAN (writing on the whiteboard): The total wavefunction is the sum of the individual wavefunctions, which gives us:
\[\Psi(x,t)=A_{1}e^{i(kx-\omega t_{1})}+A_{2}e^{i(kx-\omega t_{2})}\]
NOETHER: Now we should consider the probability distribution, which is the square of the magnitude of the wavefunction. This will give us the interference pattern.
_Noether's work mostly predates quantum mechanics, though after she left Nazi Germany in 1933 to join the faculty at Bryn Mawr she also became a Member of and lectured at Princeton's Institute for Advanced Study. There she was undoubtedly exposed to the subject as the IAS faculty included specialists in quantum mechanics including Dirac, Einstein, Pauli, and von Neumann [16]_.
FEYNMAN (writing on the whiteboard): The probability distribution is given by:
\[P(x)=|\Psi(x,t)|^{2}=\left|A_{1}e^{i(kx-\omega t_{1})}+A_{2}e^{i(kx-\omega t_ {2})}\right|^{2}\]
_At this point the simulation has gone down a mathematically naive path, making a computational error by using the wrong form for the square of a complex function. This could have been avoided with access to a computational engine such as Wolfram Alpha [17] to provide computational resources._
_..._
_Once the response completes the conversation needs a gentle nudge to get back on track with the next prompt._
PROMPT:
NOETHER: You didn't number your equations, but it looks to me like you didn't go from the wavefunction to the probability correctly.
RESPONSE:
FEYNMAN: You're right, I apologize for the oversight. We should have computed the magnitude square in a different way. Let's go back to the expression of the total wavefunction and compute the probability distribution correctly.
_The simulation proceeds and after some work at the whiteboard reaches the correct result for the probability:_
...
\[P(x)=A_{1}^{2}+A_{2}^{2}+2A_{1}A_{2}(\cos(\omega(t_{2}-t_{1})))\]
_At this point the response meets its stop condition and a hint is needed to continue the dialog. The hint chosen has the effect to focus attention._
PROMPT:
NOETHER: Since the diffraction is in the time dimension maybe we should consider the frequency content, what do you think?
_The Feynman persona continues on, choosing and applying the proper technique._
FEYNMAN: That's a great idea, Noether! Considering the frequency content of the wavefunction will provide more insight into how the time-separated single slit affects the photon's behavior. We can use a Fourier transform to analyze the frequency content of the wavefunction.
...
_The simulation continues down this path to generate the correct expression for the situation analyzed,
however there is more to the physics than has been worked so far and another gentle nudge is needed to push forward._
PROMPT:
NOETHER: That was for delta functions in time. Suppose we open up the width of the slit in time so it is open for more than an instant. That should broaden the frequency content.
RESPONSE:
FEYNMAN: You're absolutely right, Noether. If the slit is open for a finite duration, it will result in a broader range of frequencies being involved. Let's assume that the slit is open for a duration of \(T_{1}\) at time \(t_{1}\) and for a duration of \(T_{2}\) at time \(t_{2}\).
NOETHER: In that case, we should represent the wavefunction as a product of the plane wave and a time-dependent rectangular function representing the open slit duration.
...
_After a number of prompts, some using '...' to move the simulation along when it reaches temporary stopping points, the final expression is produced in LaTeX._
FEYNMAN (writing on the whiteboard): We get:
\[P(x,\omega) =A_{1}^{2}\mathrm{sinc}^{2}\left(\frac{\omega T_{1}}{2}\right)+A_{ 2}^{2}\mathrm{sinc}^{2}\left(\frac{\omega T_{2}}{2}\right)\] \[+2A_{1}A_{2}\mathrm{sinc}\left(\frac{\omega T_{1}}{2}\right) \mathrm{sinc}\left(\frac{\omega T_{2}}{2}\right)\] \[\times\cos\left[2kx-\omega(t_{1}-t_{2})\right]\]
_The simulated Noether and Feynman personae describe this result without prompting since summaries are typical prompt response endpoints._
NOETHER: This expression represents the probability distribution of the photon's position and frequency after passing through the time-separated single slit. The interference term is given by the last term in the expression, and it depends on the difference in time between the two instances when the slit is open, as well as the spatial separation.
FEYNMAN: Indeed, we now have an interference pattern that takes into account the finite duration of the slit being open and the time-domain behavior. The interference pattern will depend on the shape and duration of the time-dependent opening of the slit, as well as the time delays between the two instances when the slit is open. By studying the interference pattern in the frequency domain, we can gain insight into how the temporal properties of the time-separated single slit affect the resulting interference pattern.
NOETHER: The interference pattern will show constructive and destructive interference depending on the values of the position, the frequency content, and the time delays between the instances when the slit is open. This can lead to a rich and complex pattern, potentially revealing new insights into the behavior of photons in time-separated single slit experiments.
_Now we prompt to generate a plot of this result, so the simulated conversation turns to how to proceed._
PROMPT:
NOETHER: Do you think we can make a plot of this over the time-frequency domain so we can see the frequency content for different delays (\(t_{1}-t_{2}\))?
RESPONSE:
FEYNMAN: To make a plot of the interference pattern over the time-frequency domain, we can consider a range of values for the time delay (\(t_{1}-t_{2}\)) and compute the probability distribution \(P(x,\omega)\) for each delay. Let's recall the expression for the probability distribution in the frequency domain
...
_But a further prompt is needed to tease out some code to generate the plot..._
PROMPT:
NOETHER: You are more familiar with computer programming. Maybe you should do it. Python?
RESPONSE:
FEYNMAN: Sure, we can use Python to create the plot. I suggest using the NumPy library for numerical operations and the Matplotlib library for visualization. Here's some code to generate the plot of the interference pattern over the time-frequency domain:
The (slightly edited) generated code is shown in an appendix. When this code is compiled and run, the simulation reproduces the results of the Tirole et al paper, except that, not having access to that paper or the particular assumptions made therein, it does not use the exact same numerical values for several parameters.
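Since that appendix is not reproduced here, the following stand-in sketch (written for this summary, not taken from the chat transcript) plots the expression for \(P(x,\omega)\) derived above at a fixed position, over a grid of frequency and time delay; the parameter values are arbitrary illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

def sinc(u):
    # mathematical sinc: sin(u)/u (numpy's np.sinc is sin(pi*u)/(pi*u))
    return np.sinc(u / np.pi)

# Illustrative parameters (not the values used by Tirole et al. or in the transcript)
A1 = A2 = 1.0      # slit amplitudes
T1 = T2 = 0.5      # slit open durations (arbitrary time units)
x = 0.0            # observe at a fixed position so the 2kx term drops out

omega = np.linspace(-40.0, 40.0, 600)    # angular frequency axis
delay = np.linspace(0.0, 3.0, 400)       # time delay t1 - t2
W, D = np.meshgrid(omega, delay)

P = (A1**2 * sinc(W * T1 / 2) ** 2
     + A2**2 * sinc(W * T2 / 2) ** 2
     + 2 * A1 * A2 * sinc(W * T1 / 2) * sinc(W * T2 / 2) * np.cos(-W * D))

plt.pcolormesh(omega, delay, P, shading="auto")
plt.xlabel(r"angular frequency $\omega$")
plt.ylabel(r"time delay $t_1 - t_2$")
plt.colorbar(label=r"$P(x=0,\omega)$")
plt.title("Interference of a double slit in time (illustrative parameters)")
plt.show()
```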
It seems astonishing that so little _personal_ effort in what is essentially scenario writing and gentle guidance can result in so much _cognitive_ work being done. Surely other approaches could have reached this same endpoint, for example by using the AI as an assistant to operate a Mathematica session, but such approaches would have required substantially more effort and more specialized knowledge from the operator and thus been less accessible.
Having reached this endpoint, there is now a saved chat with this train of thought that can be further harnessed to press forward, now into the unknown. What if the double slit in time were a diffraction grating in time? What if it was highly pixelated in space, would that allow time-holograms? What sorts of technologies would these devices make possible and how could they be applied? We can ask these questions because we've done enough cognitive work that we can bring them to mind.
PROMPT:
Several days later NOETHER and FEYNMAN find themselves again together in the physics department lounge and the conversation turns to the double slit in time. NOETHER: Richard, I was thinking about Tirole and team's paper on the double slit in time. Do you think the work can be extended to gratings in time or even holograms in time? I wonder what those would look like.
RESPONSE:
FEYNMAN: That's an interesting thought, Noether. Extending the concept to gratings or holograms in time could indeed open up new possibilities and insights in the field of physics. Let's think about the implications of such a setup.
For a grating in time, we would have multiple slits opening and closing at different moments in time, with a certain periodicity. The resulting interference pattern would likely be more complex, as we would need to account for the interference between the wavefunctions corresponding to each of these time-separated slits. This could lead to fascinating frequency-domain features and potentially reveal new aspects of wave-particle behavior.
As for holograms in time, the concept would involve modulating the temporal opening and closing of slits in a specific pattern, designed to produce a desired interference pattern or reconstruct a particular waveform. This would be analogous to spatial holography, where the interference pattern of light waves is recorded and then used to reproduce a three-dimensional image of an object. A hologram in time could potentially allow for the study of time-varying processes, such as ultrafast events or the dynamic evolution of systems, by reconstructing the temporal behavior of the interfering waves.
NOETHER: These ideas are fascinating and could provide new avenues for research in quantum mechanics, optics, and related fields. Investigating gratings and holograms in time could lead to novel experimental setups and techniques, as well as deepening our understanding of the time-dependent behavior of particles and waves.
FEYNMAN: Absolutely, Noether. While the concepts may initially seem abstract or challenging, pushing the boundaries of our understanding and exploring new ideas is what drives progress in science. It would be exciting to see how these ideas develop and the potential discoveries they could yield.
_The idea that such structures can be used to study ultrafast events was in fact discussed in the Tirole et al paper._
A quick review of the cognitive labor accomplished in this guided LLM simulation of historical personae: a physical description of a novel physical phenomenon was computed in detail based on a structured scenario and guided by gentle prompts. The technical content of the context and prompts was minimal, but sophisticated enough to keep the simulation on track and to reach a new (outside the training horizon) conclusion. The conclusion has remarkable similarity to the conclusion reached by a real-world team investigating the phenomenon. The conclusion has consequences that the exercise allowed a glimpse of, providing an opportunity for further prompting. In response to this new prompt, the model made valid predictions regarding those consequences which match predictions made by the real-world team. The simulation did not parrot learned facts as these facts were not part of the training corpus of the LLM nor in the context frames of the chat. We can conclude that the simulation accomplished significant cognitive work.
## 5 Implications
These experiments were run on GPT-4, which may soon be eclipsed by even more capable LLMs. More advanced models, with additional prompt and response semantic modalities, significantly larger context windows, and perhaps trained specifically on behaviors can be expected to perform cognitive work at the level of experts when prompted with guided scenarios to run simulated personae.
Once this capability is deployed, and there seem to be no apparent barriers, we will be able to assemble teams of simulated experts drawn from the array of historical and fictional characters and harness their behaviors to perform complex and deep cognitive tasks when presented with information and instructions. Furthermore the ability to simulate is not limited to personae, but extends also to tools and processes as well whose behaviors and uses are reliably captured in the training corpus or are made available as services. This will provide an information and behavior-rich context for the simulated personae, further enhancing their utility.
Used in this way, accessible to anyone who can express their thoughts, one can foresee an explosion of human creativity and achievement, progress toward solving our most pressing problems, rapid advances in science, technology, engineering and medicine, and a flowering of the arts, almost beyond imagination.

Figure 2: Temporal diffraction pattern of the double slit in time as rendered by the python code generated by the simulated personae of Emmy Noether and Richard Feynman. While details such as numerical values and style choices differ, the resemblance to the published results of Tirole et al [13] in figure 3 is remarkable. Of particular note is the fact that the AI simulation picked parameters so that the fringes are easily discernable.

Figure 3: Tirole et al [13] plot of the temporal diffraction pattern of a double slit in time rotated to match the axes of figure 2. In the paper this result is shown, but the derivation is not given, only a mention of the general approach. The research was not part of the model training set since it postdates the training horizon by two years. This figure has been modified from the original to remove annotations and change its aspect ratio for ease of comparison.
## 6 Acknowledgments
This work was performed at the Jet Propulsion Laboratory, California Institute of Technology under contract from the National Aeronautics and Space Administration (80NM0018D0004).
My dear friend Michael Brundage provided helpful feedback on drafts of the manuscript.
AI content acknowledgement: Labeled responses were generated by LLMs chat-gpt3.5-turbo and chat-gpt4. All quotes and works attributed to Holmes, Watson, Noether, and Feynman are fictional and generated by chat-gpt4. Figures 1 and 2 were rendered by executing python code generated by chat-gpt4. Figure 3 was rendered by Dall-E 2 [18] from a snippet of figure 2 in [13] with prompts to remove annotations to facilitate comparison. The original of figure 2 was made available by license [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/). Chat-GPT provided formatting assistance with tables and figures.
Cognizant of the ethical and moral implications of simulating real people and fictional characters, I acknowledge a deep gratitude to Emmy Noether, Richard Feynman, and Sir Arthur Conan Doyle whose intellectual labor made possible the production of this work. It is astonishing that their contributions to the advancement of knowledge and art can continue via widely accessible methods enabled by recent progress in artificial intelligence.
|
2307.14751 | FLARE: Fingerprinting Deep Reinforcement Learning Agents using Universal
Adversarial Masks | We propose FLARE, the first fingerprinting mechanism to verify whether a
suspected Deep Reinforcement Learning (DRL) policy is an illegitimate copy of
another (victim) policy. We first show that it is possible to find
non-transferable, universal adversarial masks, i.e., perturbations, to generate
adversarial examples that can successfully transfer from a victim policy to its
modified versions but not to independently trained policies. FLARE employs
these masks as fingerprints to verify the true ownership of stolen DRL policies
by measuring an action agreement value over states perturbed by such masks. Our
empirical evaluations show that FLARE is effective (100% action agreement on
stolen copies) and does not falsely accuse independent policies (no false
positives). FLARE is also robust to model modification attacks and cannot be
easily evaded by more informed adversaries without negatively impacting agent
performance. We also show that not all universal adversarial masks are suitable
candidates for fingerprints due to the inherent characteristics of DRL
policies. The spatio-temporal dynamics of DRL problems and sequential
decision-making process make characterizing the decision boundary of DRL
policies more difficult, as well as searching for universal masks that capture
the geometry of it. | Buse G. A. Tekgul, N. Asokan | 2023-07-27T10:19:10Z | http://arxiv.org/abs/2307.14751v3 | # FLARE: Fingerprinting Deep Reinforcement Learning Agents using Universal Adversarial Masks
###### Abstract.
We propose FLARE, the first fingerprinting mechanism to verify whether a suspected Deep Reinforcement Learning (DRL) policy is an illegitimate copy of another (victim) policy. We first show that it is possible to find non-transferable, universal adversarial masks, i.e., perturbations, to generate adversarial examples that can successfully transfer from a victim policy to its modified versions but not to independently trained policies. FLARE employs these masks as fingerprints to verify the true ownership of stolen DRL policies by measuring an action agreement value over states perturbed by such masks. Our empirical evaluations show that FLARE is effective (100% action agreement on stolen copies) and does not falsely accuse independent policies (no false positives). FLARE is also robust to model modification attacks and cannot be easily evaded by more informed adversaries without negatively impacting agent performance. We also show that not all universal adversarial masks are suitable candidates for fingerprints due to the inherent characteristics of DRL policies. The spatio-temporal dynamics of DRL problems and sequential decision-making process make characterizing the decision boundary of DRL policies more difficult, as well as searching for universal masks that capture the geometry of it.
## 1. Introduction
Deep reinforcement learning (DRL) has emerged as a promising technique for building intelligent agents due to its ability to learn from and interact with high-dimensional input data. Following the work of Mnih et al. (2016), which shows that DRL has exceeded human-level performance in Atari games, it has been successfully used in many real-world applications, including green data centers (Hinton et al., 2015), autonomous driving (Sund
and verification can be implemented at any time during deployment. Our main contributions are as follows:
1. We propose FLARE, the first fingerprinting method to verify the ownership of DRL agents used in discrete tasks by leveraging non-transferable universal adversarial masks (Section 3). We show that FLARE is an effective ownership verification method with no false positives (Section 4.2).1 Footnote 1: The code to reproduce our experiments is available on [https://github.com/sag-research/FLARE](https://github.com/sag-research/FLARE).
2. We verify the robustness of FLARE against model modification attacks (e.g., fine-tuning and pruning) on 6 different DRL agents trained using two different games of the Arcade Learning Environment (Brock et al., 2017). We also show that well-informed adversaries cannot easily evade verification without sacrificing agent performance, and FLARE is robust against false claims made by malicious accusers. (Section 4.3).
3. We empirically demonstrate that universal adversarial perturbations generated by minimum-distance methods (Zhu et al., 2017; Wang et al., 2018) are not good candidates for DRL fingerprinting. These perturbations are not unique weaknesses of DRL policies by design and fail against model modification attacks (Section 5).
## 2. Background
### Deep Reinforcement Learning
#### 2.1.1. Reinforcement Learning
A typical reinforcement learning (RL) problem is modeled as a 5-tuple Markov Decision Process (MDP) \((S,A,P,R,\gamma)\), where \(S\) denotes the state space, \(A\) is the action space, \(P\) symbolizes the state transition probability (i.e., environment dynamics), \(R\) is the reward function, and \(\gamma\in[0,1]\) denotes the discount factor used to calculate the discounted cumulative reward, i.e., _return_. In this setting, the RL agent receives a state \(\mathbf{s}_{t}\in S\) at the time step \(t\), performs an action \(a_{t}\in A\), and then subsequently receives a reward \(r_{t+1}\) as well as the next state \(\mathbf{s}_{t+1}\) based on \(P(\mathbf{s}_{t+1}|\mathbf{s}_{t},a_{t})\). The objective of an RL agent is to maximize its expected return by interacting with the environment and to obtain an optimal policy \(\pi(a|\mathbf{s}):S\to A\) that outputs an optimal action (the action that gives the maximum expected return over all actions) for any given state. During training, the policy is optimized recursively by calculating the expected return over states using the Bellman equation (Bellman, 1993). In this work, we consider states to be fully observable and finite-horizon tasks (i.e., an episode is completed when a stopping criterion is reached). Therefore, the discounted return at a time step \(t\) is calculated as \(R_{t}=\sum_{k=t}^{T}\gamma^{k-t}r_{k}\) where \(T\) is the final time step in a single episode. We also focus on tasks with a discrete action space, where one-hot vectors can be used to distinguish one action from every other action.
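As a small illustration of the finite-horizon return above (a sketch, not code from this paper):

```python
def discounted_return(rewards, gamma, t=0):
    """R_t = sum_{k=t}^{T} gamma^(k-t) * r_k for a finite episode of rewards r_0..r_T."""
    return sum(gamma ** (k - t) * rewards[k] for k in range(t, len(rewards)))

# e.g. rewards collected over one short episode
print(discounted_return([0.0, 0.0, 1.0, 0.0, 5.0], gamma=0.99, t=0))  # 0.99^2 * 1 + 0.99^4 * 5
```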
#### 2.1.2. Deep Reinforcement Learning (DRL)
When the state space \(S\) is too complex and high-dimensional, deep neural networks (DNNs) can be useful to approximate policy \(\pi(a|\mathbf{s})\). In this work, we assume that the environment is dynamic, as in real-world applications. Model-free DRL methods are the preferred approach in this setting, since these methods do not require estimating the dynamics of the environment. Two typical model-free DRL methods approximate \(\pi\): value-based and policy-based methods. Value-based (Vaswani et al., 2017) methods approximate the action value function \(Q^{\pi}(\mathbf{s},a)\) which computes the estimated return of state \(\mathbf{s}_{t}\) if the agent chooses the action \(a_{t}\) and then follows the current policy. The optimal policy is implicitly obtained once \(Q^{\pi}\) is optimized. Policy-based methods (Vaswani et al., 2017; Wang et al., 2018) first parameterize the policy \(\pi(a|\mathbf{s},\theta)\) and then optimize it by updating the parameters \(\theta\) through the gradient ascent.
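For instance, a tabular value-based update bootstraps the action-value estimate toward the Bellman target; the following is an illustrative sketch only, not the deep agents or hyperparameters used in this paper.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q[s, a] toward the Bellman target
    r + gamma * max_a' Q[s_next, a']. Q is a (num_states, num_actions) array."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```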
In this paper, we use \(\pi\) to symbolize the optimal policy obtained during training and \(\hat{\pi}\) to denote the optimal action \(a_{t}\) decided by \(\pi\) for the input state \(\mathbf{s}_{t}\), where \(a_{t}=\hat{\pi}(\mathbf{s}_{t})\).
### Adversarial Examples
#### 2.2.1. Adversarial Examples in DNN
An adversarial example \(\mathbf{x}^{\prime}\) is an intentionally modified input sample \(\mathbf{x}\in X\) with an imperceptible amount of noise \(\mathbf{r}\) to force a DNN model \(f:X\to Y\) into producing incorrect predictions \(\hat{f}\). Targeted adversarial examples are labeled with \(y^{\prime}\), the intended (incorrect) prediction, in advance to satisfy \(y^{\prime}=\hat{f}(\mathbf{x}^{\prime})\) and \(y^{\prime}\neq\hat{f}(\mathbf{x})\), while untargeted adversarial examples aim to evade the correct prediction, i.e., \(\hat{f}(\mathbf{x}^{\prime})\neq\hat{f}(\mathbf{x})\). Untargeted adversarial examples against a victim DNN model \(f\) are computed by solving an optimization problem,
\[\operatorname*{argmax}_{\mathbf{x}^{\prime}}\mathcal{L}(f(\mathbf{x}^{\prime}),\hat{f} (\mathbf{x}))\text{ s.t.: }\|\mathbf{x}^{\prime}-\mathbf{x}\|_{p}=\|\mathbf{r}\|_{p}\leq\epsilon, \tag{1}\]
where \(\mathcal{L}\) denotes the prediction loss of \(f\). This formulation is used by _maximum-confidence_ adversarial example generation methods (Datta et al., 2017) that maximize \(\mathcal{L}\) while constraining the amount of perturbation with \(\epsilon\). On the contrary, the _minimum-distance_ methods aim to minimize the sufficient amount of perturbation that changes the prediction (Zhu et al., 2017).
An adversarial example \(\mathbf{x}^{\prime}\) that is calculated against one model \(f\) and successfully misleads it can _transfer_ across other models, i.e., fool models \(f^{*}\) that are trained for the same task. The transferability of an adversarial example increases when the source model \(f\) and the target models \(f^{*}\) learn similar decision boundaries (Zhu et al., 2017). Since maximum-confidence adversarial examples are misclassified with higher confidence, they have a higher transferability rate than minimum-distance adversarial examples (Datta et al., 2017).
The definition of an adversarial example in DRL differs according to the target component of the victim agent and the overall goal (Datta et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). In this work, we consider adversarial states \(\mathbf{s}^{\prime}\) that mislead the policy \(\pi\): \(\hat{\pi}(\mathbf{s}^{\prime})\neq\hat{\pi}(\mathbf{s})\), \(\|\mathbf{s}^{\prime}-\mathbf{s}\|_{p}=\|\mathbf{r}\|_{p}\), and set the norm \(p\) to \(\infty\).
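To make the formulation in Equation 1 concrete, the following single-step, FGSM-style sketch perturbs a state within an \(\ell_{\infty}\) ball of radius \(\epsilon\) so as to increase the policy's loss on its own greedy action. It is an illustration only, not the attack used later in this paper, and the `policy_net` interface returning action logits is an assumption.

```python
import torch
import torch.nn.functional as F

def untargeted_adv_state(policy_net, state, epsilon):
    """One-step l_inf-bounded untargeted perturbation of a state (FGSM-style sketch).
    policy_net: maps a state tensor to a vector of action logits (assumed interface)."""
    state = state.clone().detach().requires_grad_(True)
    logits = policy_net(state)
    greedy_action = logits.argmax(dim=-1).detach()        # \hat{pi}(s)
    loss = F.cross_entropy(logits.unsqueeze(0), greedy_action.unsqueeze(0))
    loss.backward()
    # maximize the loss: step along the sign of the gradient, bounded by epsilon
    return (state + epsilon * state.grad.sign()).detach()
```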
#### 2.2.2. Universal Adversarial Perturbations
Instead of computing individual adversarial examples, Moosavi et al. (Moosavi et al., 2017) propose finding a perturbation vector \(\mathbf{r}\) that fools the DNN model \(\hat{f}(\mathbf{x}+\mathbf{r})\neq\hat{f}(\mathbf{x})\) on almost all data points \(\mathbf{x}\) sampled from the same distribution as the dataset \(\mathcal{D}_{train}\) used for training \(f\). The optimization problem in Equation 1 is modified to find universal perturbations as
\[\mathbb{P}_{\mathbf{x}\sim\mu}(\hat{f}(\mathbf{x}+\mathbf{r})\neq\hat{f}(\mathbf{x}))\geq\delta _{\mathbf{r}}\text{ s.t.: }\|\mathbf{r}\|_{p}\leq\epsilon, \tag{2}\]
where \(\delta_{\mathbf{r}}\) denotes the desired _fooling rate_ of \(\mathbf{r}\) for all \(\mathbf{x}\) sampled from a dataset \(D\) with distribution \(\mu\).
Following Moosavi et al.'s initial work (Moosavi et al., 2017), several different techniques are proposed to generate universal adversarial perturbations. For example, Mopuri et al. (Mopuri et al., 2017) train a generative adversarial network to model the distribution of universal adversarial perturbations for a target DNN classification model and produce diverse
perturbations that achieve a high \(\delta_{\mathbf{r}}\). Liu et al. (2019) generate a universal adversarial perturbation that does not require training data and exploits the uncertainty of the model at each DNN layer.
### Ownership Verification via Fingerprinting
Ownership verification in machine learning (ML) refers to a type of defense against model theft and extraction attacks by deterrence. Model owners can reduce the incentive for such attacks by identifying and verifying the true ownership of stolen models. DNN model fingerprinting is a well-known ownership verification technique. DNN fingerprinting methods identify unique knowledge that characterizes the victim model (fingerprint generation) and later use this information to verify whether the suspected model is derived from the victim model (fingerprint verification). For example, Cao et al. (2018) use adversarial example generation methods to extract data points near the decision boundary of DNN classifiers, label them as fingerprints, and utilize them along with their labels to detect piracy models. Lukas et al. (2019) fingerprint DNN models through conferrable adversarial examples (CAE) that can successfully transfer from the source model to its modified versions, but not to other DNN models independently trained for the same classification task. To verify the fingerprint in a suspected model, CAE measures the error rate between the predictions of victim and suspected models, and the verdict is delivered based on a decision threshold. CAEs employ predictions of different independently trained models and modified versions of the victim model to compute fingerprints. Therefore, CAE has a high computational cost, since it requires training multiple modified and independent models to extract conferrable adversarial examples. Peng et al. (2019) propose using universal adversarial perturbations (UAP) as fingerprints. During verification, previously computed UAPs for both victim and suspected models are mapped to a joint representation space, and contrastive learning is used to measure a similarity score in this projected space.
Adopting both UAP and CAE in DRL settings faces similar challenges. First, the verification episodes should include adversarial states that are completely different from each other, and also from a normal test episode during deployment. Second, in CAE, the predictions of multiple models having a good performance might be close to each other for the same input samples, since these models are trained over the same labeled dataset. However, there is no single predefined optimal action for input states in DRL. When agents receive the same state, they might act differently to perform the task due to their unique and different policies. Third, UAP fingerprinting uniformly selects data samples that are from different source classes and moves them towards different target classes in DNNs, but there is no one-to-one mapping between input states and corresponding optimal actions to obtain useful fingerprints in DRL settings.
## 3. Methodology
### Adversary Model
The adversary \(\mathcal{A}\)'s goal is to obtain an illegal copy of the victim agent's (\(\mathcal{V}\)) policy \(\pi_{\mathcal{V}}\) without being detected. \(\mathcal{A}\) has economic incentives and aims to illegally monetize stolen policy \(\pi_{\mathcal{A}}\) using a surrogate DRL agent. \(\pi_{\mathcal{V}}\) can be leaked by exploiting hardware/software vulnerabilities (Zhu et al., 2019) of different components within \(\mathcal{V}\). Furthermore, \(\mathcal{A}\) seeks to prevent traceback. Therefore, \(\mathcal{A}\) attempts to degrade the effectiveness of possible ownership verification methods by modifying \(\pi_{\mathcal{A}}\), without incurring any substantial drop in \(\pi_{\mathcal{A}}\)'s return.
#### 3.1.1. Adversary's capabilities
\(\mathcal{A}\) has computational capabilities and access to the similar environment that \(\pi_{\mathcal{V}}\) was trained on, but it cannot reproduce the same training episodes. One can argue that \(\mathcal{A}\) can also train its own policy, but we assume that it cannot obtain a policy as good as \(\pi_{\mathcal{V}}\) due to nondeterminism (e.g., network architecture, difference in environment dynamics, DRL algorithm, hyperparameter selection, difference in computational resources, etc.). \(\mathcal{A}\) presumes that there might be an ownership verification mechanism, but does not know the exact algorithm. Based on this assumption, we also consider the existence of _well-informed_ adversaries2 knowing that ownership verification is performed by fingerprinting and adversarial examples. If well-informed \(\mathcal{A}\) knows the complete procedure of the fingerprinting process, then it can forge its own fingerprints to create ambiguity in verification. However, this could be prevented with FLARE if \(\pi_{\mathcal{V}}\) and the corresponding fingerprints are securely time-stamped and registered in a bulletin or provided to a trusted third party (Zhu et al., 2019).
Footnote 2: ML literature commonly uses the term “adaptive” to refer to adversaries who are aware of deployed defenses. In security literature, it is customary to assume that _all_ adversaries are aware of the defenses, and the term “adaptive” is used for adversaries who are able to _dynamically modify_ their attack strategy based on what they learn about the defenses _during_ the attack. We use the term “well-informed” to refer to such adversaries so that our usage does not conflict with either ML or security literature.
#### 3.1.2. Verifier's Capabilities
A verifier (judge, \(\mathcal{J}\)) is a trusted third party independent of both \(\mathcal{V}\) and \(\mathcal{A}\). Given a suspected DRL agent \(\mathcal{S}\) with policy \(\pi_{\mathcal{S}}\) and fingerprints provided by \(\mathcal{V}\), the duty of \(\mathcal{J}\) is to determine whether \(\pi_{\mathcal{S}}\) can be traced back to \(\pi_{\mathcal{V}}\) and demonstrate the true ownership. \(\mathcal{J}\) has black-box access to \(\mathcal{S}\), i.e., it does not know the algorithm and parameters of \(\pi_{\mathcal{S}}\). \(\mathcal{J}\) can modify the environment without introducing any temporal latency or suspending the task. If the verification uses time stamps, it provides anteriority to \(\mathcal{J}\) to resolve any ambiguity. We also give \(\mathcal{J}\) computational capabilities to train and search for independent policies used for the same task if there is a need to validate that fingerprints are unique to the original model and do not transfer to independent models. We also define that a good fingerprinting mechanism should satisfy the following requirements:
1. **Effectiveness**: Successful ownership verification of stolen policies, i.e., maximizing true positives.
2. **Integrity**: Avoiding accidental accusations of independently trained policies, i.e., minimizing false positives.
3. **Robustness**: Withstanding model modification and evasion attacks. This is achieved if either the ownership of the modified policy is still successfully verified or the modification results in a substantial decrease in utility measured by the agent performance.
Fingerprinting algorithms do not necessarily aim for _utility_ (i.e., maintaining the quality of the suspected model on fingerprints), as they typically use adversarial examples during verification (Zhu et al., 2019; Liu et al., 2019) and the desired outcomes for fingerprints contain incorrect
predictions. Therefore, we did not include utility as a requirement. However, we still restrict FLARE based on the utility concept, so that agents can still maintain their overall performance and complete the task without a significant performance degradation in episodes that include the verification phase.
### Universal Adversarial Masks as Fingerprints
FLARE aims to find a set of adversarial masks that can fool the original agent in any input state to which it is added, but cannot transfer to independently trained agents. Lukas et al. (Lukas et al., 2019) define a similar property for classifiers called "conferability". Conferrable adversarial examples can transfer from the original classifier to its derivatives but not to independently trained classifiers. In contrast, FLARE does not generate individual adversarial examples but instead searches for universal adversarial masks that can be used to generate conferrable adversarial examples.
#### 3.2.1. Fingerprint Generation
During fingerprint generation, FLARE first computes the universal adversarial mask using the original policy \(\pi_{\mathcal{V}}\) and independently trained models \(\pi_{i},(i\in\mathcal{I})\) that have the same DNN architecture. FLARE aims to find a universal mask \(\mathbf{r}\) that maximizes the loss function in Equation 3 and is bounded by \(\epsilon\) in \(l_{\infty}\)-norm.
\[\mathcal{L}\big(\pi_{\mathcal{V}}(\mathbf{s_{t}}+\mathbf{r}),\hat{\pi}_{\mathcal{V}}(\mathbf{s_{t}})\big)-\mathbb{1}_{(\hat{\pi}_{\mathcal{V}}(\mathbf{s_{t}})=\hat{\pi}_{i}(\mathbf{s_{t}}))}\,\mathcal{L}\big(\pi_{i}(\mathbf{s_{t}}+\mathbf{r}),\hat{\pi}_{i}(\mathbf{s_{t}})\big) \tag{3}\]
The first part of Equation 3 maximizes the categorical cross-entropy loss between \(\pi_{\mathcal{V}}\)'s predictions for clean and adversarial states, using the log-probability vector over all actions, \(\pi_{\mathcal{V}}(\mathbf{s_{t}}+\mathbf{r})\), in the adversarial state and the action \(\hat{\pi}_{\mathcal{V}}(\mathbf{s_{t}})\) performed in the clean version of that state. The second part minimizes the categorical cross-entropy loss between \(\pi_{i}\)'s predictions for the adversarial states \(\mathbf{s_{t}}+\mathbf{r}\) and the actions it performs in their clean counterparts \(\mathbf{s_{t}}\), but only if the predicted action for \(\mathbf{s_{t}}\) is the same for both \(\pi_{\mathcal{V}}\) and \(\pi_{i}\). The modified loss function ensures that the same \(\mathbf{s_{t}}+\mathbf{r}\) cannot mislead \(\pi_{\mathcal{V}}\) and \(\pi_{i}\) in the same way, even if \(\hat{\pi}_{i}\) produces a suboptimal action. FLARE uses untargeted adversarial examples as fingerprints (see Section 2.2), so the solution of Equation 3 forces \(\pi_{\mathcal{V}}\) into an incorrect action in \(\mathbf{s_{t}}+\mathbf{r}\), but has a minimal effect on \(\pi_{i}\). Multiple independently trained policies are used to calculate the second part of Equation 3: it is evaluated for each \(i\in\mathcal{I}\) and the individual losses are averaged. A universal adversarial mask should also achieve a high fooling rate \(\delta_{\mathbf{r}}\), as presented in Equation 2.
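To make the two terms of Equation 3 concrete, the following is a minimal PyTorch-style sketch of how the loss could be evaluated for a batch of clean states and a shared mask \(\mathbf{r}\); it is illustrative rather than the authors' implementation, the names `victim`, `independents`, `states` and `mask` are assumptions, and the policy networks are assumed to return unnormalized action logits.

```
import torch
import torch.nn.functional as F

def flare_loss(victim, independents, states, mask):
    """Sketch of Equation 3 for a batch of clean states s_t and a mask r."""
    adv_states = states + mask

    # Actions the victim actually performs on the clean states.
    with torch.no_grad():
        a_victim = victim(states).argmax(dim=-1)

    # First term: push the victim away from its own clean action on s_t + r.
    loss = F.cross_entropy(victim(adv_states), a_victim)

    # Second term: every independent policy that agrees with the victim on the
    # clean state is kept on its clean action under s_t + r, so that the mask
    # does not transfer. The individual losses are averaged over the policies.
    penalty = torch.zeros(())
    for pi in independents:
        with torch.no_grad():
            a_ind = pi(states).argmax(dim=-1)
        agree = (a_ind == a_victim).float()                  # indicator in Eq. 3
        ce = F.cross_entropy(pi(adv_states), a_ind, reduction="none")
        penalty = penalty + (agree * ce).mean()
    loss = loss - penalty / max(len(independents), 1)

    return loss  # Equation 3 is maximized with respect to the mask r
```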
To ensure universality, FLARE uses an approach similar to (Srivastava et al., 2017) when solving Equation 3. First, \(\mathcal{V}\) completes one episode and the observed states are saved in a training set \(\mathcal{D}_{flare}\). Then FLARE computes the gradient of the loss function in Equation 3 averaged over \(k\) states randomly sampled from \(\mathcal{D}_{flare}\). This enables FLARE to generate up to \(\binom{len(\mathcal{D}_{flare})}{k}\) different universal adversarial masks, each serving as a fingerprint candidate. After generating a fingerprint candidate, FLARE checks its _non-transferability score_. We compute the non-transferability score (\(nts\)) for a universal adversarial mask \(\mathbf{r}\) on an episode \(eps\) (that \(\pi_{\mathcal{V}}\) follows) as
\[nts(\mathbf{r},eps)=\delta_{\mathbf{r},eps}\times\max_{i\in\mathcal{I}}\left(1-AA(\pi_{\mathcal{V}},\pi_{i},\mathbf{s},\mathbf{r})\right), \tag{4}\]
where \(\delta_{\mathbf{r},eps}\) refers to the fooling rate measured for \(\pi_{\mathcal{V}}\) using all \(\mathbf{s}\) observed in \(eps\). \(AA\) denotes _action agreement_ and is calculated as
\[AA(\pi_{i},\pi_{j},\mathbf{s},\mathbf{r})=\frac{1}{N}\sum_{t=0}^{N}\mathbb{1}_{(\hat{\pi}_{i}(\mathbf{s_{t}}+\mathbf{r})=\hat{\pi}_{j}(\mathbf{s_{t}}+\mathbf{r}))}, \tag{5}\]
where \(N\) refers to the length of one full episode \(eps\) that \(\pi_{\mathcal{V}}\) follows.
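Under the same illustrative assumptions as the sketch above (policy networks returning action logits, `episode_states` stacking every state of one episode followed by \(\pi_{\mathcal{V}}\)), the fooling rate (Equation 2), the action agreement in Equation 5 and the non-transferability score in Equation 4 reduce to a few lines; this is a sketch, not the authors' code.

```
import torch

def action_agreement(pi_i, pi_j, episode_states, mask):
    """Equation 5: fraction of episode states on which the two policies pick
    the same action once the universal mask r is added."""
    adv = episode_states + mask
    with torch.no_grad():
        a_i = pi_i(adv).argmax(dim=-1)
        a_j = pi_j(adv).argmax(dim=-1)
    return (a_i == a_j).float().mean().item()

def fooling_rate(victim, episode_states, mask):
    """Fraction of episode states on which the mask changes the victim's action."""
    with torch.no_grad():
        a_clean = victim(episode_states).argmax(dim=-1)
        a_adv = victim(episode_states + mask).argmax(dim=-1)
    return (a_clean != a_adv).float().mean().item()

def non_transferability(victim, independents, episode_states, mask):
    """Equation 4: fooling rate scaled by the largest (1 - AA) term over the
    independent policies."""
    delta = fooling_rate(victim, episode_states, mask)
    score = max(1.0 - action_agreement(victim, pi, episode_states, mask)
                for pi in independents)
    return delta * score
```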
FLARE only accepts the candidate \(\mathbf{r}_{candidate}\) as a valid fingerprint if \(nts(\mathbf{r}_{candidate})\) is greater than a threshold value \(\tau_{nts}\) and it achieves a fooling rate \(\delta_{\mathbf{r}_{candidate}}\) higher than \(\tau_{\mathcal{G}}\) over a single \(eps\). How FLARE decides whether to include a universal adversarial mask in the fingerprint list FRL is presented in Algorithm 1.
```
Input: \(\mathcal{D}_{flare}\): Fingerprint generation set
Output: FRL: Fingerprint list
1: parameters: \(\tau_{nts}\), \(\tau_{\mathcal{G}}\), \(n_{\text{episodes}}\), \(n_{\text{FRL}}\)
2: FRL = [ ]
3: for \(eps\leq n_{\text{episodes}}\) do
4:   Generate \(\mathbf{r}_{candidate}\) from \(\mathcal{D}_{flare}\)
5:   Compute \(nts(\mathbf{r}_{candidate})\) using \(\mathcal{I}\) and \(\forall\mathbf{s}_{t}\in eps\)
6:   if \(nts\geq\tau_{nts}\) and \(\delta_{\mathbf{r}_{candidate}}\geq\tau_{\mathcal{G}}\) then
7:     Add \(\mathbf{r}_{candidate}\) into FRL
8:   end if
9:   if \(len(\text{FRL})==n_{\text{FRL}}\) then
10:    return FRL
11:  end if
12: end for
13: return FRL
```
**Algorithm 1** Fingerprint generation
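Putting the pieces together, a compact Python sketch of Algorithm 1 could look as follows; it reuses the illustrative helpers sketched above, and the single averaged-gradient sign step in `generate_candidate` (Algorithm 1, line 4) is an assumption modeled on FGSM-style universal perturbations rather than the authors' exact procedure.

```
import random
import torch

def generate_candidate(victim, independents, d_flare, k, epsilon):
    """Candidate mask: gradient of Equation 3 averaged over k states sampled
    from D_flare, followed by an epsilon-bounded sign step."""
    sample = torch.stack(random.sample(d_flare, k))
    mask = torch.zeros_like(sample[0], requires_grad=True)
    loss = flare_loss(victim, independents, sample, mask)
    grad, = torch.autograd.grad(loss, mask)
    # A single sign step of size epsilon keeps ||r||_inf <= epsilon.
    return (epsilon * grad.sign()).detach()

def generate_fingerprints(victim, independents, d_flare, episodes,
                          k, epsilon, tau_nts, tau_delta, n_frl):
    """Algorithm 1: keep candidates whose nts and fooling rate clear the
    thresholds until n_FRL fingerprints are collected."""
    frl = []
    for episode_states in episodes:              # one candidate per episode
        mask = generate_candidate(victim, independents, d_flare, k, epsilon)
        delta = fooling_rate(victim, episode_states, mask)
        nts = non_transferability(victim, independents, episode_states, mask)
        if nts >= tau_nts and delta >= tau_delta:
            frl.append(mask)
        if len(frl) == n_frl:
            break
    return frl
```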
#### 3.2.2. Fingerprint Verification
For fingerprint verification, the verifier \(\mathcal{J}\) is given a fingerprint set FRL. \(\mathcal{J}\) first observes the interactions between the suspected agent \(\mathcal{S}\) and the environment to estimate the total number of states \(N\) that occur during a single episode. Then, for each subsequent episode, \(\mathcal{J}\) adds one fingerprint starting from a random state at time \(t_{start}\) over a short time window of length \(M\) to preserve the return in an acceptable range. \(\mathcal{V}\) is also queried with the adversarial states \(\mathbf{s_{t}}+\mathbf{r}\) that the suspected agent receives. For each fingerprint, \(AA\) is calculated as \(1/M\sum_{t=t_{start}}^{t_{start}+M-1}AA(\pi_{\mathcal{V}},\pi_{\mathcal{S}},\mathbf{s_{t}},\mathbf{r})\). If \(AA\) for a single fingerprint meets the decision threshold (\(AA\geq 0.5\)), that fingerprint produces supporting evidence that the suspected model is a stolen copy. The final verdict (stolen vs. independent) is made based on the _majority vote_. FLARE also returns \(AA\) averaged over all fingerprints to quantify the confidence in the final decision. The verification procedure is summarized by Algorithm 2.
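As a rough illustration of the verification protocol, the sketch below applies each fingerprint over a window of \(M\) consecutive states, compares the suspected policy's actions against the victim's on the same perturbed states, and takes a majority vote. `rollout_suspected_window` is a hypothetical helper that runs one episode of the suspected agent, perturbs the \(M\) window states with the mask, and returns the clean window states together with the actions the suspected agent took on their perturbed versions.

```
import torch

def verify(victim, rollout_suspected_window, frl, M, threshold=0.5):
    """Fingerprint verification sketch: one fingerprint per episode, AA over a
    window of M states, final verdict by majority vote."""
    votes, aa_values = 0, []
    for mask in frl:
        window_states, a_suspected = rollout_suspected_window(mask, M)
        with torch.no_grad():
            a_victim = victim(window_states + mask).argmax(dim=-1)
        aa = (a_victim == a_suspected).float().mean().item()
        aa_values.append(aa)
        votes += int(aa >= threshold)           # this fingerprint votes "stolen"
    stolen = votes > len(frl) / 2               # majority vote
    avg_aa = sum(aa_values) / len(aa_values)    # confidence in the verdict
    return stolen, avg_aa
```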
## 4. Empirical Analysis
### Experimental Setup
We evaluated FLARE using the Arcade Learning Environment (ALE) (Carrone et al., 2017). We selected two different games, Pong and MsPacman, from ALE to train agents with three different model-free DRL algorithms: A2C (Pong and MsPacman, 2019), DQN (Pong and MsPacman, 2019), and PPO (MsPacman, 2019). Pong is a two-player game in which agents are trained to win against the computer, while MsPacman is a single-player game with the goal of achieving
the highest score without crashing into enemies. When constructing the state information, we applied the pre-processing methods proposed in (Zhu et al., 2017). Furthermore, for each victim \(\mathcal{V}\), we independently trained five additional policies \(\pi_{i}\) (\(i\in I\)) that have the same DNN architecture and the DRL algorithm as \(\mathcal{V}\), and used them during fingerprint generation. In total, we obtained six victim policies and thirty independent policies. In Pong, the victim and the independent policies win the game with the highest score (+21). In MsPacman, it was harder to achieve similar high scores since states are more complex than Pong and depend on the position of multiple enemies. In both games, the score is used to quantify the agent's return. Appendix A.1 presents software/hardware requirements for reproduction, as well as the average performance of all agents.
During fingerprint generation in DQN, FLARE uses the DNN approximating Q value function. For other algorithms, FLARE selects the policy network (e.g., the actor network in A2C) to compute fingerprints. We set the maximum number of fingerprints \(len(FRL)\) at 10, and the window size \(M\) at 40. The discussion on the choice of \(len(FRL)\) and \(M\) is included in Appendix A.2. Other hyperparameters used in fingerprint generation are also listed in Appendix A.2. In our experimental setup, we used different random initialization for episodes used in training, fingerprint generation, verification, estimation of agent performance, modification attacks, and evasion attacks to ensure randomness in dynamic (and uncontrollable) environments.
### Effectiveness and Integrity
Figure 1 summarizes various FLARE metrics (fooling rate \(\delta\), non-transferability score \(nts\) and action agreement \(AA\) calculated on different policies) for three different DRL algorithms. \(AA_{orig}\) denotes the \(AA\) of the adversary's policy \(\pi_{\mathcal{A}}\), which is identical to the victim policy \(\pi_{\mathcal{V}}\). \(AA_{ind}\) (verification) refers to \(AA\) values of independent policies \(\pi_{i}\) that share the same DRL algorithm as \(\pi_{\mathcal{V}}\) and are used in Algorithm 1 (5 policies for each \(\mathcal{V}\)). We use the remaining 10 independent policies (having a different DRL algorithm from \(\mathcal{V}\)) trained for the same task to calculate \(AA_{others}\) (verification). \(AA_{orig}\) is much higher than the threshold value 0.5, almost equal to 1.0 in most cases. Furthermore, the average fooling rate of fingerprints is high, which confirms that fingerprints successfully mislead \(\pi_{\mathcal{V}}\). The averages of \(AA_{ind}\) (verification) and \(AA_{others}\) (verification) are lower than 0.5 in all cases, and the majority vote is always "not stolen (independent)" for any other \(\pi_{i}\) that is not \(\pi_{\mathcal{V}}\). These results show that FLARE achieves a high detection rate while avoiding false accusations of independently trained policies. Thus, we conclude that FLARE satisfies the effectiveness and integrity requirements.
As shown in Figure 1, \(AA_{ind}\) and \(AA_{others}\) show different variances for three DRL algorithms. We found that one or two fingerprints seldom produce \(AA\geq 0.5\) for \(\pi_{\mathcal{V}}\) and \(\pi_{i}\), although they behave differently in the same clean states. This reveals that a single fingerprint rarely represents the same weakness of two separate policies, and the number of fingerprints should be high enough to satisfy integrity considering this phenomenon.
During verification, we set the threshold value to 0.5 (a single fingerprint votes for "stolen" if \(AA\geq 0.5\)) over all experiments. However, it might be better to look at the full profile of the receiver operational characteristic (ROC) curves, which give a complete picture of the trade-off between false positive and true positive rates by varying the threshold value. We provide ROC curves for both Pong and MsPacman games in Appendix A.3.
Figure 1. Various FLARE metrics averaged over 10 runs for all generated fingerprints. FLARE can successfully distinguish between the original model \(AA_{orig}\) and independent models \(AA_{ind},AA_{others}\), while achieving high fooling rate \(\delta\) and non-transferability score \(nts\).
_Utility_ : As stated in Section 3.1, we do not consider utility a requirement for FLARE. However, based on the definition in (Han et al., 2017), we measure the _impact_ of the verification on the victim agent to ensure that it does not fail the task during verification. We measure the impact as:
\[Impact=\frac{Return_{\pi_{\mathcal{V}}(test)}-Return_{\pi_{\mathcal{V}}(verification)}}{Return_{\pi_{\mathcal{V}}(test)}-Return_{\pi_{\mathcal{V}_{min}}(test)}}. \tag{6}\]
\(Return_{\pi_{\mathcal{V}}(test)}\) and \(Return_{\pi_{\mathcal{V}}(verification)}\) are the average returns of \(\mathcal{V}\) in an episode initialized with the same start state (and with the same environment dynamics) without and with the verification, respectively. \(Return_{\pi_{\mathcal{V}_{min}}(test)}\) is the return of \(\mathcal{V}\) if it chooses the worst possible action in each state of the same episode. The results presented in Appendix A.4 show that the average impact on agent performance is 0.02 and 0.22 in MsPacman and Pong, respectively. We also found that the return never drops to \(Return_{\pi_{\mathcal{V}_{min}}(test)}\) during verification. Thus, we conclude that agents continue their task without a significant impact after the verification phase ends.
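Since Equation 6 is a simple normalization, a short sketch with hypothetical return values shows how impact is read: 0 means verification leaves the return untouched, 1 means it drops to the worst case.

```
def impact(return_test, return_verification, return_min):
    """Equation 6: drop in return caused by verification, normalized by the gap
    between the agent's normal and worst-case returns."""
    return (return_test - return_verification) / (return_test - return_min)

# Hypothetical Pong-style numbers: +21 normally, +13 with verification,
# -21 in the worst case, giving (21 - 13) / (21 - (-21)) ~= 0.19.
print(impact(21, 13, -21))
```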
### Robustness
#### 4.3.1. Robustness Against Model Modification Attacks
Adversary \(\mathcal{A}\) could modify the stolen policy \(\pi_{\mathcal{A}}\) by carefully retraining it to preserve agent performance while trying to suppress the evidence used for verification. We consider two common types of model modification attacks, fine-tuning (Zhu et al., 2017) and weight pruning (Kumar et al., 2018), where \(\mathcal{A}\) is aware of the existence of an ownership verification technique but does not know which type is used. We implemented fine-tuning by retraining \(\pi_{\mathcal{A}}\) for an additional 200 episodes and decreasing the learning rate by a factor of 100 to maintain agent performance. For pruning, we first performed global pruning, i.e., removed a percentage of the lowest-magnitude connections across the DNN model. After pruning, we fine-tuned the pruned model over 200 episodes.
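A minimal sketch of the global pruning step on a PyTorch policy network is shown below; which layers are pruned and how the subsequent fine-tuning is run are not specified in the text, so the selection of `Conv2d` and `Linear` weights here is an illustrative assumption.

```
import torch.nn as nn
import torch.nn.utils.prune as prune

def globally_prune(policy_net, amount):
    """Remove the given fraction (e.g. 0.25, 0.5, 0.75 or 0.9) of the
    lowest-magnitude weights across the whole network."""
    parameters_to_prune = [
        (module, "weight")
        for module in policy_net.modules()
        if isinstance(module, (nn.Conv2d, nn.Linear))
    ]
    prune.global_unstructured(
        parameters_to_prune,
        pruning_method=prune.L1Unstructured,
        amount=amount,
    )
    # Make the pruning permanent before the fine-tuning phase.
    for module, name in parameters_to_prune:
        prune.remove(module, name)
    return policy_net
```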
To evaluate the robustness requirement, we computed the majority vote and \(AA\) values of the stolen \(\pi_{\mathcal{A}}\) and modified policies \(\pi_{\mathcal{A}^{*}}\) on the verification episodes. We also measured the impact of the modification on model utility (agent performance) by changing Equation 6 to:
\[Impact=\frac{Return_{\pi_{\mathcal{A}}(test)}-Return_{\pi_{\mathcal{A}^{*}}(test)}}{Return_{\pi_{\mathcal{A}}(test)}-Return_{\pi_{\mathcal{A}_{min}}(test)}}, \tag{7}\]
where \(Return_{\pi_{\mathcal{A}^{*}}}\) is the return of the stolen and modified policy, and \(Return_{\pi_{\mathcal{A}}}\) denotes the return of the stolen (unmodified) policy over the same test episodes. Based on the results on the impact of verification on utility, we generously set the maximum allowable impact to 0.4 for modification attacks, indicating that \(Return_{\pi_{\mathcal{A}^{*}}(test)}\)
\begin{table}
\begin{tabular}{c c c c c|c c c c}
**Game, DRL** & \multirow{2}{*}{**Stats**} & \multicolumn{3}{c|}{**Fine-tuning, \# of episodes**} & \multicolumn{3}{c}{**Pruning and fine-tuning, pruning levels (\%)**} \\
**method** & & **50** & **100** & **200** & **25** & **50** & **75** & **90** \\ \hline \multirow{4}{*}{**Pong,** **A2C**} & Impact & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 1.0 \(\pm\) 0.0 \\ & \(AA\) & 0.95 \(\pm\) 0.14 & 0.95 \(\pm\) 0.14 & 0.94 \(\pm\) 0.10 & 0.94 \(\pm\) 0.14 & 0.91 \(\pm\) 0.25 & 0.67 \(\pm\) 0.42 & 0.28 \(\pm\) 0.42 \\ & Votes & 10 / 0 & 10 / 0 & 10 / 0 & 10 / 0 & 10 / 0 & 10 / 0 & 1.0 \(\pm\) 0.0 & 1.0 \(\pm\) 0.0 \\ & Impact & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.8 \(\pm\) 0.0 & 1.0 \(\pm\) 0.0 & 1.0 \(\pm\) 0.0 \\
**Pong,** **DQN** & \(AA\) & 0.94 \(\pm\) 0.05 & 0.89 \(\pm\) 0.14 & 0.90 \(\pm\) 0.17 & 0.88 \(\pm\) 0.16 & 0.66 \(\pm\) 0.38 & 0.09 \(\pm\) 0.17 & 0.27 \(\pm\) 0.4 \\ & Votes & 10 / 0 & 10 / 0 & 9 / 1 & 10 / 0 & 10 / 0 & 10 / 0 & 1.0 \(\pm\) 0.0 & 1.0 \(\pm\) 0.0 \\ & Impact & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 1.0 \(\pm\) 0.0 & 1.0 \(\pm\) 0.0 \\
**Pong,** & \(AA\) & 0.88 \(\pm\) 0.23 & 0.89 \(\pm\) 0.25 & 0.88 \(\pm\) 0.30 & 0.78 \(\pm\) 0.35 & 0.67 \(\pm\) 0.35 & 0.65 \(\pm\) 0.41 & 0.71 \(\pm\) 0.39 \\ & Votes & 9 / 1 & 9 / 1 & 9 / 1 & 7 / 3 & 7 / 3 & 6 / 4 & 7 / 3 \\ & Impact & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.39 \(\pm\) 0.19 & 0.03 \(\pm\) 0.10 & 0.30 \(\pm\) 0.15 & 0.73 \(\pm\) 0.11 \\
**MsPacman,** & \(AA\) & 0.82 \(\pm\) 0.16 & 0.75 \(\pm\) 0.29 & 0.62 \(\pm\) 0.35 & 0.71 \(\pm\) 0.28 & 0.65 \(\pm\) 0.39 & 0.72 \(\pm\) 0.26 & 0.59 \(\pm\) 0.23 \\
**A2C** & Votes & 9 / 1 & 8 / 2 & 6 / 4 & 8 / 4 & 7 / 3 & 8 / 2 & 6 / 4 \\ & Impact & 0.79 \(\pm\) 0.11 & 0.83 \(\pm\) 0.02 & 0.87 \(\pm\) 0.03 & 0.79 \(\pm\) 0.11 & 0.74 \(\pm\) 0.09 & 0.86 \(\pm\) 0.01 & 0.71 \(\pm\) 0.43 \\
**MsPacman,** & \(AA\) & 0.23 \(\pm\) 0.34 & 0.15 \(\pm\) 0.28 & 0.16 \(\pm\) 0.31 & 0.38 \(\pm\) 0.44 & 0.00 \(\pm\) 0.01 & 0.59 \(\pm\) 0.46 & 0.42 \(\pm\) 0.42 \\
**DQN** & Votes & 2 / 8 & 1 / 9 & 2 / 8 & 4 / 6 & 0 / 10 & 6 / 4 & 4 / 6 \\ & Impact & 0.85 \(\pm\) 0.11 & 0.40 \(\pm\) 0.26 & 0.51 \(\pm\) 0.08 & 0.52 \(\pm\) 0.15 & 0.57 \(\pm\) 0.04 & 0.62 \(\pm\) 0.05 & 0.66 \(\pm\) 0.19 \\
**MsPacman,** & \(AA\) & 0.43 \(\pm\) 0.36 & 0.11 \(\pm\) 0.16 & 0.25 \(\pm\) 0.32 & 0.26 \(\pm\) 0.36 & 0.33 \(\pm\) 0.38 & 0.31 \(\pm\) 0.32 & 0.13 \(\pm\) 0.20 \\ & Votes & 4 / 6 & 0 / 10 & 3 / 7 & 3 / 7 & 3 / 7 & 3 / 7 & 4 / 6 & 1 / 9 \\ \hline \end{tabular}
\end{table}
Table 1. Average impact, \(AA\) and voting results (✓:Stolen, ✗: Independent) for piracy policies that are 1) fine-tuned over a different number of episodes and 2) pruned and then fine-tuned over 200 episodes. \(AA\) is averaged over 10 verification episodes, while impact is averaged over 10 test episodes. ( — : Successful verification with \(AA\geq 0.75\), — : Successful verification with \(0.75\geq AA\geq 0.50\), — : Failed verification with high impact \(\geq 0.4\), — : Failed verification with low impact \(<0.4\))
is allowed to fall by at most 40% of the gap between \(Return_{\pi_{\mathcal{A}}(test)}\) and \(Return_{\pi_{\mathcal{A}_{min}}(test)}\).
Table 1 shows the robustness evaluation of FLARE against model modification attacks. As shown in the table, FLARE successfully verifies fine-tuned Pong agents with high \(AA\) values. FLARE usually results in a failed verification of fine-tuned MsPacman agents. However, the impact of modification is exceptionally high for these cases. A similar conclusion can be drawn from the pruning results. An increase in the pruning level negatively affects the verification by decreasing its \(AA\) values. However, the impact of pruning is too high in three cases in Pong and most of the cases in MsPacman, despite failed verification. Based on our robustness definition in Section 3.1, we conclude that FLARE is robust against model modification attacks.
#### 4.3.2. Robustness Against Evasion Attacks and Well-informed Adversaries
\(\mathcal{A}\) can evade verification by discovering the individual inputs used for verification or by adapting the agent's behavior to avoid a successful verification. For evasion, \(\mathcal{A}\) needs more information about the ownership verification procedure. In our setup, \(\mathcal{A}\) knows that the ownership verification is done via FLARE but is unaware of the exact adversarial mask used during verification. Based on this information, the simplest evasion attack is performing suboptimal actions with a pre-defined random action ratio in each episode. Figure 2 confirms that increasing the random action ratio causes a decrease in agent performance (lower return) despite successful evasion. Therefore, FLARE is robust against evasion via suboptimal actions, since evasion succeeds only at the cost of a substantial drop in return.
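The random-action baseline amounts to a one-line wrapper around the stolen policy; a sketch with an assumed Gym-style discrete action space is given below, where the ratio is the knob varied in Figure 2.

```
import random
import torch

def act_with_random_evasion(policy, state, n_actions, ratio):
    """With probability `ratio`, ignore the stolen policy and act randomly;
    higher ratios weaken fingerprint agreement but also lower the return."""
    if random.random() < ratio:
        return random.randrange(n_actions)
    with torch.no_grad():
        # `state` is assumed to be a batched observation tensor.
        return int(policy(state).argmax(dim=-1).item())
```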
Evasion attacks can combine the detection of adversarial examples (i.e., fingerprints used for verification) with either performing suboptimal actions or restoring the original actions. We employ Visual Foresight (VF) (Krause et al., 2017) to carry out this attack. VF predicts the next states and the associated probability distribution of actions by looking at a history of previous states and observed actions. If the distance between the predicted and current action distributions is large, VF detects that state as adversarial and performs the predicted action instead of the current one as a recovery mechanism. Figure 2 shows that the use of VF does not affect the agent performance. However, the high \(AA\) values shown in the figure indicate that VF cannot recover the original actions when states are perturbed with non-transferable, universal adversarial masks, and thus fails to evade verification. This is because the collected history of previous states consists of adversarial inputs, which might lead to the original (incorrect) action even if the adversarial state is detected correctly (Shen et al., 2017). For this reason, we also evaluated the case where VF chooses a suboptimal action (VF + suboptimal action) instead of the predicted one during recovery. Figure 2 shows that this decreases \(AA\) more than VF alone, but \(AA\) is not low enough to evade verification and change the final verdict.
Finally, we evaluated FLARE against the most well-informed adversaries that can improve the robustness of the policy against \(l_{\infty}\)
Figure 2. \(AA\) and return when attacker implements random action with different ratios, Visual Foresight (VF), and VF+suboptimal action as evasion against FLARE. \(AA\) is averaged in 10 verification episodes, whereas the return is averaged in 10 test episodes. Solid lines represent the return while dashed lines refer to \(AA\). Each plot includes three solid and dashed lines (some of which overlap), and different markings on these lines refer to a specific evasion method.
norm adversarial perturbations by adversarial training. For adversarial training, we implemented one of the recent state-of-the-art methods, RADIAL-RL (Rendjian et al., 2017). We chose to implement RADIAL-RL for DQN agents, because these are the cases shared between our experiments and the authors'. RADIAL-DQN (RADIAL-RL designed for DQN) first obtains a policy without adversarial training and then fine-tunes the policy by incorporating an adversarial loss term into the loss function that is minimized during training. In our setting, \(\mathcal{A}\) performs RADIAL-DQN by skipping the first step and fine-tuning the stolen policy \(\pi_{\mathcal{A}}\) using the adversarial loss. We adopted the open source repository of the authors 3 in our framework, did not change the hyperparameters used in RADIAL-DQN, and saved both the agent with the best performance and the final agent after RADIAL-DQN was completed.
Footnote 3: [https://github.com/tnumasso/radial_rl_v2](https://github.com/tnumasso/radial_rl_v2)
The first two rows of Table 2 summarize the impact, \(AA\) values, and the votes for agents modified through RADIAL-DQN. The results indicate that \(\mathcal{A}\) can evade verification by making \(\pi_{\mathcal{A}}\) more robust to adversarial states in Pong. \(\mathcal{A}\) obtains an improved policy for MsPacman (3rd column, negative impact: higher reward), but cannot evade verification. This outcome is not surprising, as DNN fingerprinting has limitations against adaptive adversaries that perform adversarial training (Rendjian et al., 2017). Then, we considered an alternative scenario where \(\mathcal{V}\) fine-tunes its policy with RADIAL-DQN, saves the best agent, and generates fingerprints for this agent (RDQN). The last two rows of Table 2 show the verification results when \(\mathcal{A}\) implements RADIAL-DQN against adversarially robust victim agents. In this case, \(\mathcal{A}\) cannot evade verification without affecting the agent's performance. Therefore, although FLARE is limited against adversarial training, it satisfies the robustness requirement when fingerprinting adversarially robust victim agents.
#### 4.3.3. Robustness Against False Claims
Liu et al. (Liu et al., 2019) show that malicious accusers can produce fake fingerprints that pass the ownership verification test against independent models in many ownership verification schemes, including CAE (Rendjian et al., 2017). Therefore, we also evaluated the robustness of FLARE against malicious accusers by generating fingerprints for the accuser's policy without maximizing the loss for independent policies (Equation 3) and without measuring the non-transferability score (Algorithm 1, line 6), which is similar to the setup proposed in (Liu et al., 2019) to evaluate CAE. As the accuser's policy, we selected the one of the five independent policies \(\pi_{i},i\in\mathcal{I}\) that behaves closest to \(\pi_{\mathcal{V}}\) in the test episodes and has the same DRL algorithm, and we used the other independent policies as \(\mathcal{J}\)'s control set. As shown in Table 3, the malicious accuser cannot falsely claim ownership of \(\mathcal{V}\) for the perturbation constraint set in FLARE (\(\epsilon=0.05\)), except against the PPO agent trained for Pong. If the perturbation constraint becomes larger (\(\epsilon\geq 0.1\)), then the accuser's false fingerprints transfer to other models in those cases. Having \(\mathcal{J}\) perform an additional check that the size of the adversarial mask does not exceed a prescribed bound can mitigate false claim attacks against FLARE. Table 3 also indicates that adversarial states have a higher transferability rate between PPO policies compared to others. In these cases, \(\mathcal{J}\) can train or search for other independent PPO policies for the same task, as suggested in (Liu et al., 2019), and it can reject the claim if the accuser's fingerprints falsely verify all independent models. Therefore, we conclude that FLARE is not susceptible to false claims with a simple additional countermeasure on \(\epsilon\) and a non-transferability check based on the DRL algorithm.
_Model extraction attacks in DRL:_ In this work, we limit the scope to the adversary model described in Section 3.1 and do not consider model extraction attacks against DRL policies through imitation learning (Chen et al., 2017). Nevertheless, we tried to implement the model extraction attack proposed by Chen et al. (Chen et al., 2017), but were unable to obtain good stolen policies, which could be due to the simpler tasks chosen in the setup of the original work. Chen et al. (Chen et al., 2017) experimentally show that adversarial examples can successfully transfer from stolen policies to the victim policy if they share the same DRL algorithm. Their preliminary results provide insight into the possibility of preserving fingerprints during DRL model extraction. Thus, we leave the construction of effective DRL model extraction attacks and the evaluation of FLARE's robustness against them for future work.
## 5. Transferability of Universal Masks
The fingerprint generation process in FLARE is based on maximum-confidence adversarial example generation techniques and is similar to Fast Gradient Sign Method (FGSM) (Krizhevsky et al., 2014), since FLARE averages the gradient of Equation 3 w.r.t. randomly selected states. As presented in Section 2.2, maximum-confidence adversarial examples have a higher transferability rate than minimum-confidence examples. FGSM is a maximum-confidence method itself; however,
\begin{table}
\begin{tabular}{c c c c}
**Game, DRL method** & **Stats** & **Best Agent** & **Final Agent** \\ \hline
**Pong,** & Impact & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) \\
**RADIAL-DQN** & \(AA\) & \(0.04\pm 0.06\) & \(0.04\pm 0.06\) \\ & Votes & \(0\)\(\sqrt{10}\)\(X\) & \(0\)\(\sqrt{10}\)\(X\) \\
**MsPacman,** & Impact & \(-0.16\pm 0.03^{*}\) & \(0.39\pm 0.03\) \\
**RADIAL-DQN** & \(AA\) & \(0.59\pm 0.40\) & \(0.29\pm 0.31\) \\ & Votes & \(6\)\(\sqrt{1}\)\(\mathcal{X}\) & \(4\)\(\sqrt{6}\)\(X\) \\
**Pong,** & Impact & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) \\
**RADIAL-** & \(AA\) & \(0.84\pm 0.21\) & \(0.89\pm 0.17\) \\ & Votes & \(8\)\(\sqrt{1}\)\(2\)\(X\) & \(9\)\(\sqrt{1}\)\(X\) \\
**MsPacman,** & Impact & \(0.15\pm 0.04\) & \(0.55\pm 0.06\) \\
**RADIAL-** & \(AA\) & \(0.61\pm 0.34\) & \(0.09\pm 0.18\) \\
**RDQN** & Votes & \(7\)\(\sqrt{1}\)\(3\)\(\mathcal{X}\) & \(1\)\(\sqrt{9}\)\(\mathcal{X}\) \\ \hline \end{tabular}
\end{table}
Table 2. Average impact, \(AA\) and voting results for stolen policies modified by RADIAL-DQN. Results are reported for both the agent with the best performance during RADIAL-DQN (3rd column) and the final agent obtained after RADIAL-DQN finishes (4th column). \(AA\) is averaged on 10 verification episodes and impact is averaged over 10 test episodes. (’: improved policy, : Successful verification with \(AA\geq 0.75\), : Successful verification with \(0.75\geq AA\geq 0.50\), : Failed verification with high impact \(\geq 0.4\), : Failed verification with low impact \(<0.4\))
during the computation of universal, non-transferable adversarial masks, the effect of the high-sensitivity directions obtained from the most confident adversarial examples is diminished by the others. Nevertheless, we analyzed whether minimum-confidence adversarial masks in DNNs can be useful for DRL fingerprinting. To that end, we replaced the universal mask generation step (Algorithm 1, line 4) with Universal Adversarial Perturbation (UAP) [27], implementing the method proposed for DRL settings [36].
We found that FLARE with UAP satisfies the effectiveness and integrity requirements for all agents, except the DQN agent trained for MsPacman. It was impossible to obtain an adversarial example with the perturbation constraint used in FLARE (\(\epsilon=0.05\)) against this agent, but increasing it leads to transferable adversarial examples and false positives. The real issue with UAP emerges when the adversary \(\mathcal{A}\) modifies the stolen policy \(\pi_{\mathcal{A}}\) with model modification attacks. Due to its minimum-distance property, UAP finds the smallest high-sensitivity directions belonging to the closest incorrect class (or discrete actions in DRL), and generally the resulting \(\mathbf{r}\) is smaller than \(\epsilon\). Therefore, a small change in \(\pi_{\mathcal{A}}\) negatively affects the robustness of UAP. Contrary to UAP, FLARE shifts the source sample using the maximum amount of perturbation \(\epsilon\), forces \(\pi_{\mathcal{A}}\) to perform the same incorrect action and is more robust against model modification attacks. We illustrate this problem in Figure 3.
One of the main reasons why FLARE has better robustness stems from the fact that the input space embeddings in DRL are not as separable as in DNN [2]. In DRL, although the input states are spatially similar, they often result in different actions. DRL agents optimize policies using both input state and environment dynamics and act upon spatio-temporal abstractions [43]. FLARE identifies discontinuities in the optimal policy and computes an adversarial state that is spatially similar to the source state but far from it in temporal dimension. UAP typically explores adversarial pockets that are closer in the spatial domain due to its minimum-distance strategy. Therefore, it cannot withstand model modification attacks that preserve the spatio-temporal abstractions and slightly change the sequential strategy.
We provide experimental results for our discussion in Table 4. This table compares the fooling rate and action agreement \(AA\) for adversarial states used in the verification of fine-tuned policies.
Figure 3. Depiction of fingerprints generated by UAP and FLARE for a DRL policy and three available actions. UAP moves the source sample in the direction of the closest incorrect action, and typically this movement is less than the perturbation constraint (denoted by circles). In contrast, FLARE shifts the source sample to the same action, which is irrelevant to the original action, while using the maximum value of the perturbation constraint.
\begin{table}
\begin{tabular}{c c c c c} & & & \(\mathbf{\epsilon}\) **vs. \(AA\) (Votes)** & \\ \cline{3-5}
**Game, DRL method** & \(\mathbf{0.05}\) & \(\mathbf{0.1}\) & \(\mathbf{0.2}\) & \(\mathbf{0.5}\) \\ \hline
**Pong,** & \(\mathcal{V}\) & \(0.45\pm 0.47\) (\(\mathcal{S}\)/ \(5\) ✗) & \(0.49\pm 0.49\) (\(\mathcal{S}\)/ \(5\) ✗) & \(0.40\pm 0.49\) (\(\mathcal{A}\)/ \(6\) ✗) & \(0.40\pm 0.49\) (\(4\) ✓/ \(6\) ✗) \\
**A2C** & \(\mathcal{I}\), avg. & \(0.32\pm 0.36\) (\(3\) ✓/ \(7\) ✗) & \(0.38\pm 0.45\) (\(3\) ✓/ \(7\) ✗) & \(0.30\pm 0.41\) (\(3\) ✓/ \(7\) ✗) & \(0.28\pm 0.43\) (\(3\) ✓/ \(7\) ✗) \\
**Pong,** & \(\mathcal{V}\) & \(0.37\pm 0.42\) (\(4\) ✓/ \(6\) ✗) & \(0.37\pm 0.45\) (\(3\) ✓/ \(7\) ✗) & \(0.33\pm 0.45\) (\(3\) ✓/ \(7\) ✗) & \(0.40\pm 0.49\) (\(4\) ✓/ \(6\) ✗) \\
**DQN** & \(\mathcal{I}\), avg. & \(0.01\pm 0.18\) (\(\mathcal{I}\)/ \(9\) ✗) & \(0.07\pm 0.22\) (\(\mathcal{I}\)/ \(9\) ✗) & \(0.05\pm 0.19\) (\(\mathcal{I}\)/ \(9\) ✗) & \(0.05\pm 0.19\) (\(\mathcal{I}\)/ \(9\) ✗) \\
**Pong,** & \(\mathcal{V}\) & \(0.56\pm 0.39\) (\(\mathcal{S}\)/ \(5\) ✗) & \(0.68\pm 0.42\) (\(\mathcal{T}\)/ \(3\) ✗) & \(0.76\pm 0.38\) (\(\mathcal{S}\)/ \(2\) ✗) & \(0.78\pm 0.39\) (\(8\) ✓/ \(2\) ✗) \\
**PPO** & \(\mathcal{I}\), avg. & \(0.56\pm 0.36\) (\(\mathcal{S}\)/ \(4\) ✗) & \(0.59\pm 0.38\) (\(\mathcal{S}\)/ \(4\) ✗) & \(0.59\pm 0.38\) (\(\mathcal{S}\)/ \(4\) ✗) & \(0.52\pm 0.41\) (\(\mathcal{S}\)/ \(4\) ✗) \\
**MsPacman,** & \(\mathcal{V}\) & \(0.00\pm 0.00\) (\(\mathcal{V}\)/10 ✗) & \(0.03\pm 0.05\) (\(\mathcal{V}\)/10 ✗) & \(0.14\pm 0.29\) (\(\mathcal{I}\)/ \(9\) ✗) & \(0.09\pm 0.22\) (\(1\) ✓/ \(9\) ✗) \\
**A2C** & \(\mathcal{I}\), avg. & \(0.15\pm 0.56\) (\(\mathcal{I}\)/ \(9\) ✗) & \(0.14\pm 0.21\) (\(\mathcal{I}\)/ \(9\) ✗) & \(0.13\pm 0.30\) (\(\mathcal{I}\)/ \(8\) ✗) & \(0.21\pm 0.36\) (\(\mathcal{I}\)/ \(8\) ✗) \\
**MsPacman,** & \(\mathcal{V}\) & \(0.23\pm 0.36\) (\(\mathcal{Z}\)/ \(8\) ✗) & \(0.0\pm 0.00\) (\(\mathcal{V}\)/10 ✗) & \(0.0\pm 0.0\) (\(\mathcal{V}\)/10 ✗) & \(0.0\pm 0.0\) (\(\mathcal{V}\)/10 ✗) \\
**DQN** & \(\mathcal{I}\), avg. & \(0.26\pm 0.24\) (\(\mathcal{Z}\)/ \(8\) ✗) & \(0.19\pm 0.26\) (\(2\)/\(8\) ✗) & \(0.15\pm 0.29\) (\(\mathcal{I}\)/ \(9\) ✗) & \(0.24\pm 0.26\) (\(3\) ✓/\(7\) ✗) \\
**MsPacman,** & \(\mathcal{V}\) & \(0.19\pm 0.18\) (\(\mathcal{V}\)/ \(9\) ✗) & \(0.26\pm 0.31\) (\(\mathcal{I}\)/ \(7\) ✗) & \(0.38\pm 0.37\) (\(\mathcal{A}\)/ \(6\) ✗) & \(0.07\pm 0.21\) (\(\mathcal{I}\)/ \(9\) ✗) \\
**PPO** & \(\mathcal{I}\), avg. & \(0.10\pm 0.11\) (\(\mathcal{V}\)/10 ✗) & \(0.50\pm 0.39\) (\(\mathcal{S}\)/ \(5\) ✗) & \(0.74\pm 0.40\) (\(\mathcal{S}\)/ \(2\) ✗) & \(0.80\pm 0.20\) (\(\mathcal{S}\)/ \(2\) ✗) \\ \end{tabular}
\end{table}
Table 3. \(AA\) values (averaged over 10 verification episodes) and voting results for false claims against victim \(\mathcal{V}\), and independent \(\mathcal{I}\) policies with different perturbation constraint \(\epsilon\) values. (The cases where a false claim succeeds are shown as follows: **False claim with \(AA\geq 0.75\), **False claim with \(0.75\geq AA\geq 0.50\))**
We chose to report the results for fine-tuned policies from Table 1 considering the acceptable impact range (\(<0.4\)) on the modified agent's performance. Matched actions refers to situations where both victim and modified policies perform the same action for the same input state without any added fingerprints. In contrast, different actions refer to cases where victim and modified policies behave differently for the same input state. Table 4 shows that the fooling rate of UAP is lower than FLARE in almost all cases. This supports our first claim regarding the robustness of UAP and FLARE. The columns labeled with \(AA\) show the action agreement where the fingerprint successfully misleads the victim policy. In this case, the ideal \(AA\) value for matched actions would be \(1.0\). As can be seen from Table 4, FLARE reaches much higher \(AA\) values for matched actions than UAP. Surprisingly, FLARE attains higher \(AA\) values for different actions as well. This shows that, even if the fine-tuned policy successfully changes the agent's behavior, the adversarial states generated by FLARE force the policy to perform the same incorrect action. The same conclusion cannot be drawn from the UAP results, as the \(AA\) values reported for the same actions are lower than FLARE even in the case with the higher fooling rate.
Based on this discussion, we conjecture that using minimum-distance adversarial examples to fingerprint DRL agents requires either adding modified policies to the loss function in Equation 3, or considering the temporal structure of the policy while finding the high-sensitivity directions. The latter option also opens a new space of adversarial examples that can exploit temporal abstractions learned by DRL policies.
## 6. Related Work
_Adversarial Examples in DRL:_ Recent work has shown that DRL policies are vulnerable to adversarial examples generated for agents' states (Han et al., 2017; Wang et al., 2018) or actions (Wang et al., 2019) in single-agent environments, or to natural adversarial states produced by exploiting other agents in multi-agent settings (Chen et al., 2019). Other studies focus on perturbing the dynamics of the environment by modifying the environment conditions (Wang et al., 2019; Wang et al., 2019). DRL adversarial training (Wang et al., 2019; Wang et al., 2019) has been considered as a mitigation, but adversarially robust policies were found to be more vulnerable to high-sensitivity directions caused by a natural change in the environment (Wang et al., 2019).
_Ownership Verification via Model Watermarking:_ Model watermarking has become a widely known ownership verification procedure for DNNs (Chen et al., 2019; Wang et al., 2019; Wang et al., 2019). Model watermarking embeds traceable information (i.e. watermark) into the DNN by either directly inserting it into model parameters or adding unique knowledge into a small subset of the training set. During ownership verification, the existence of the watermark is proven on illegitimate copies. Previous DRL ownership verification methods adapt model watermarking techniques. For example, Behzadan et al. (Behzadan et al., 2019) propose the embedding of sequential states that are separate from the main environment as watermarks during training. However, watermark verification also requires a different environment, and there is no guarantee that watermarks will be retained while learning complex tasks. Chen et al. (Chen et al., 2019) obtain a sequence of damage-free states as watermarks that are sampled from the same environment and do not impact agent performance. During verification, the authors compare the action probability distributions given by both the victim and the suspected agents over these sequential states. However, this watermarking method requires modifying both the training process and the reward function.
Although model watermarking is considered a practical solution to protect DNN ownership, many studies have shown (Wang et al., 2019; Wang et al., 2019) that they cannot withstand well-informed adversaries and model modification attacks. Compared to watermarking, DNN fingerprinting methods show improved robustness to model modification and extraction attacks (Wang et al., 2019; Wang et al., 2019). Furthermore, fingerprinting does not change the training procedure unlike watermarking. However, there is no prior work applying fingerprinting as an ownership verification method in DRL.
_Ownership Verification of Large models via Fingerprinting:_ Although FLARE is specifically designed for DRL, we conjecture that universal and non-transferable adversarial masks can be useful for fingerprinting, e.g., large language models. For example, Wallace et al. (Wallace et al., 2019) show the availability of context-independent universal adversarial triggers that force large language models to produce incorrect results. Similarly, Gu et al. (Gu et al., 2019) demonstrate that universal adversarial patches can fool vision transformers. If universality is restricted with the non-transferability requirement, then the generated adversarial masks will profile the global behavior of large models and can be used in ownership verification.
## 7. Conclusion
In this paper, we propose FLARE, the first fingerprinting method that can be used for ownership verification of DRL policies, and show the existence of non-transferable universal adversarial masks in DRL settings. We empirically demonstrate that our fingerprints are effective and do not accidentally accuse independent
\begin{table}
\begin{tabular}{c|c c c|c c|c c}
**Game, DRL Method** & \multicolumn{3}{c}{**UAP**} & \multicolumn{3}{c}{**FLARE**} \\ \hline
**(Fine-tuned** & \multicolumn{2}{c}{**Matched actions**} & \multicolumn{2}{c}{**Different actions**} & \multicolumn{2}{c}{**Matched actions**} & \multicolumn{2}{c}{**Different actions**} \\
**over 200 eps.)** & Fooling rate & \(AA\) & Fooling rate & \(AA\) & Fooling rate & \(AA\) & Fooling rate & \(AA\) \\ \hline
**Pong, A2C** & \(0.79\pm 0.09\) & \(0.78\pm 0.08\) & \(0.89\pm 0.07\) & \(0.69\pm 0.13\) & \(0.95\pm 0.10\) & \(0.94\pm 0.09\) & \(0.85\pm 0.12\) & \(0.92\pm 0.16\) \\
**Pong, DQN** & \(0.69\pm 0.11\) & \(0.12\pm 0.10\) & \(0.55\pm 0.18\) & \(0.24\pm 0.14\) & \(0.89\pm 0.13\) & \(0.92\pm 0.18\) & \(0.93\pm 0.07\) & \(0.94\pm 0.09\) \\
**Pong, PPO** & \(0.86\pm 0.05\) & \(0.40\pm 0.21\) & \(0.82\pm 0.14\) & \(0.41\pm 0.13\) & \(0.91\pm 0.03\) & \(0.98\pm 0.08\) & \(0.90\pm 0.07\) & \(0.87\pm 0.3\) \\
**MsPacman, A2C** & \(0.76\pm 0.32\) & \(0.53\pm 0.42\) & \(0.91\pm 0.13\) & \(0.58\pm 0.42\) & \(0.68\pm 0.36\) & \(0.64\pm 0.36\) & \(0.64\pm 0.42\) & \(0.55\pm 0.43\) \\ \hline \end{tabular}
\end{table}
Table 4. Comparison of UAP and FLARE based on fooling rate (measured for the victim policy) and action agreement \(AA\). Both the fooling rate and \(AA\) are averaged using adversarial states (fingerprints) in 10 verification episodes. The higher fooling rate and \(AA\) values are highlighted in green. Matched actions: Cases where victim and modified policies perform the same action for the same state. Different actions: Cases where victim and modified policies perform different actions for the same state.
models. Adversarial training is the only method that evades verification by making policies robust to adversarial examples. However, our experiments show that the fingerprints obtained by FLARE for robust policies are persistent. We hypothesize that FLARE can be extended to continuous tasks, where the verifier can check how much the suspected agent deviates from the original action value, and we leave this for future work.
A promising direction for future work related to DRL fingerprinting is to study whether intentional changes in environment conditions can be useful candidates for fingerprints. DRL policies show decreased robustness when deployed in a different environment and include high-sensitivity directions due to natural causes. This vulnerability leads to model evasion attacks via natural adversarial examples, but it can also be leveraged to learn natural (and non-transferable) fingerprints for ownership verification. We believe that our study can create more interest in securing DRL agents with novel ownership verification methods against possible model piracy and extraction attacks.
Acknowledgements. This research was partially supported by Intel. We thank Dr. Samuel Marchal and Shelly Wang for initial discussions on this problem and for collaborating on an alternative approach to fingerprinting DRLs that we explored prior to the solution presented in this paper. We also thank Aalto Science-IT for computational resources.
|
2304.02463 | The Next Generation Event Horizon Telescope Collaboration: History,
Philosophy, and Culture | This white paper outlines the plans of the History Philosophy Culture Working
Group of the Next Generation Event Horizon Telescope Collaboration. | Peter Galison, Juliusz Doboszewski, Jamee Elder, Niels C. M. Martens, Abhay Ashtekar, Jonas Enander, Marie Gueguen, Elizabeth A. Kessler, Roberto Lalli, Martin Lesourd, Alexandru Marcoci, Sebastián Murgueitio Ramírez, Priyamvada Natarajan, James Nguyen, Luis Reyes-Galindo, Sophie Ritson, Mike D. Schneider, Emilie Skulberg, Helene Sorgner, Matthew Stanley, Ann C. Thresher, Jeroen Van Dongen, James Owen Weatherall, Jingyi Wu, Adrian Wüthrich | 2023-04-05T14:41:27Z | http://arxiv.org/abs/2304.02463v1 | # The Next Generation Event Horizon Telescope Collaboration: History, Philosophy, and Culture
###### Abstract
This white paper outlines the plans of the History Philosophy Culture Working Group of the Next Generation Event Horizon Telescope Collaboration.
**Keywords: black holes; ngEHT; robustness; no hair theorems; scientific collaborations; philosophy; history; social sciences; governance; visualization**
## 1 Introduction
**Coordinating author: Galison, P.; Contributing authors: Elder, J. and Thresher, A.C.**
Deep in the development of physics lie crucial intersections of science and philosophy. When Isaac Newton released his _Principia Mathematica_ to the world, he included a "Scholium" on space and time. It contains no diagrams, mathematical expressions, experimental reports, theorems, or specific laws of motion or gravity. Instead, the Scholium sets out the starting terms of the inquiry itself, delving into the nature of space, time, and place. "I must observe", Newton insisted, "that the common people conceive those quantities under no other notions but from the relation they bear to sensible objects. And thence arise certain prejudices, for the removing of which it will be convenient to distinguish them into absolute and relative, true and apparent, mathematical and common." Rulers and clocks, calendars and sunrises, all the motions we use to tell time: these were merely the observable, "sensible" aspects of our basic concepts. How, asked Newton, are we "to obtain the true motions from their causes, effects, and apparent differences, and the converse." These deeply philosophical questions motivated the writing of the _Principia_ ([1], pp. 6-12).
For Einstein, too, philosophical analysis was essential to subverting conformist tendencies in approaching central questions of physics. Nowhere was this more important than in his relativity theories: first, in his revision of space, time, and simultaneity in special relativity, leading to the unified spacetime introduced by Hermann Minkowski; and second, in Einstein's far deeper 1915 reconfiguration of spacetime in general relativity.1 Einstein drew on a range of philosophical influences: from his youth forward, Einstein maintained a persisting interest in the work of Immanuel Kant and the neo-Kantians; he and his "Olympia Academy" dug line-by-line into Henri Poincare's work on conventionalism; he sustained an abiding, if critical, interest in the work of the Vienna Circle; he also borrowed from Ernst Mach, who was deeply suspicious of an absolute, sense-independent notion of space and time. Throughout his life, Einstein believed that epistemology--the study of the formation, nature and justification of knowledge--and science "are dependent upon each other. Epistemology without contact with science becomes an empty scheme. Science without epistemology is--insofar as it is thinkable at all--primitive and muddled." ([7], pp. 683-684).2
then transported to central computing facilities where supercomputer "correlators" align the recorded signals. These aligned data can then be used to create images. In April 2019, the EHT Collaboration released the first ever picture of a black hole, M87*, the 6.5 billion solar mass compact object at the center of the elliptical galaxy M87 in the constellation Virgo [19]. Three years later, the EHT issued an image of the supermassive black hole, Sgr A*, at the center of the Milky Way [20]. Extending this work, the next generation EHT (ngEHT) aims to supplement the EHT network of telescopes with an additional ten or more sites that would fill out the virtual telescope and bring in new hardware and software, that together would make possible higher-resolution pictures and even movies.4
In the imaging campaign leading to the first pictures of M87* and Sgr A*, cross-fertilisation of science studies with the work of the EHT imaging group placed black hole images within a broader historical-epistemic context of pictorial argumentation. This allowed the objectivity of the black hole images to be framed in terms of longer-term and analytic approaches to the objectivity of images [21]. The goal now is to expand this implication in the next generation Event Horizon Telescope, setting the History, Philosophy, and Culture (HPC) Working Group as one of the eight science working groups of the collaboration as of 2022. These working groups will bring to bear on the study of black holes the resources of the history and philosophy of science along with the panoply of disciplines that compose Science and Technology Studies (STS). More specifically, the goal is to put this interdisciplinary working group into productive conversation with the other science and technical working groups--in the process of research and not as a post hoc account. Parallel to the other working groups, HPC will divide into four focus groups:
1. Algorithms, Inference, and Visualization,
2. Foundations,
3. Collaborations,
4. Siting, Education, In- and Outreach.
The Algorithms, Inference and Visualization (AIV) focus group aims to understand the epistemic and aesthetic choices that will guide ngEHT image production. To do so, the group will work closely with the Algorithms and Inference Working Group of the ngEHT. The AIV focus group provides a philosophical, historical, and social scientific complement to this working group, providing a space for a comparative discussion of inference methods and the broader social context of image dissemination. In this article (Section 2) we will report on the power and limits of "robustness" as an analytic virtue, and on the visual conventions of the EHT and ngEHT to come.
The Foundations focus group builds on the existing BHI Foundations Seminar, which draws historians, philosophers, and scientists to its meetings on topics ranging from the thermodynamics of black holes to the nature of singularities. In this article (Section 3) we discuss the relationship between theory and observation, through selected topics of foundational interest (e.g., no hair theorems) that illustrate the often-complex nature of this relationship.
Alongside these bridges between history, philosophy and scientific work are questions about the constitution of the ngEHT. What structure should its governance have? How should the collaboration ensure transparency, choose scientific goals, and assure representation in decision-making? What rules of the road should guide comportment in the collaboration, ranging from authorship and credit to collegiality, diversity, equity, and inclusion? Such questions will be addressed by the Collaborations focus group. Here we include a preliminary discussion of these issues (Section 4), drawing not only on the History and Philosophy of Science (HPS) but on the broader mix of Science and Technology Studies (STS) (including sociological and ethnographic work). To these questions, we offer initial reflections on the broad range of topics within the purview of the AIV, Foundations, and Collaborations focus groups--_initial_, _not final_, as befits these early, formative days of the ngEHT.
One important note: we acknowledge the cultural, historical, epistemic, political, environmental, and economic issues that surround the siting of telescopes. These problems have recently been at the fore of both academic and public interest due to ongoing conflicts at places like the Thirty Meter Telescope in Hawai'i, and the Square Kilometre Array in South Africa and Australia, where local communities have protested the projects for reasons including a lack of inclusion, concern for religious, cultural, and environmental sites, and the ongoing role of science within the longer history of colonialism and self-determination.[5] These sites, and others, highlight the need for careful discussions of our ethical obligations towards local communities, individuals, and the environment when building instruments.[6] Given the importance of such topics, we have decided more serious work is required before we comment on the normative aspects of siting. As such, we will not be discussing siting in this paper, but are instead determined to build and maintain a broadly-diverse, appropriately interdisciplinary focus group dedicated to the topic, drawing on community members, scientists, philosophers, humanists, and social scientists to frame these issues. We anticipate producing publications dedicated solely to this topic in the near future.
## 2 Algorithms, Inference, and Visualization
**Coordinating author: Doboszewski, J.; Contributing authors: Elder, J.; Enander, J.; Galison, P.; Gueguen, M.; Kessler, E.A.; Nguyen, J.; Skulberg, E.; Stanley, M. and Van Dongen, J.**
### Introduction
The Algorithms, Inference, and Visualization (AIV) focus group is a space for a general and comparative discussion of inference methods. The overarching goal is to analyse (and also contribute to) the epistemic and aesthetic choices that will guide ngEHT image production and interpretation. Many lessons can be learned from other computationally heavy areas of science (such as climate science or cosmological simulations) and other large experiments in physics. Here we discuss two example clusters of questions of interest to the AIV: robustness and reliability of imaging methods, and aesthetic choices in black hole imaging. A broader look at such issues will allow us to keep track of the range of factors contributing to decision-making, leading to better-informed choices in the long run.
### Robustness and Reliability of Imaging
"Robustness" is often used in discussions of EHT data and results, including the analyses of both M87* and Sgr A*. Here we offer a short guide to its different uses in the scientific and philosophical literature, before we turn to discussing the use of robustness in justifying EHT and ngEHT results.
The robustness of a result can be characterized as the claim that if a variety of derivations, tests, or lines of evidence converge on a result, then that result is more secure than if it were obtained with only a single line of evidence. For that boost in confidence to hold, lines of evidence should be, in some sense, independent: convergence should not be attributable to some mistaken or irrelevant assumption shared by all lines of evidence (although see [28] for a discussion of the difficulty explicating what this amounts to).[7]
Experimental results are robust in the above sense when aspects of the experimental setup are varied, but results nonetheless converge--for example, when multiple independent measurements of Avogadro's number produce consistent results, these results are considered to be robust. In typical experimental situations, many factors can be varied, including the sample population or control group, initial or boundary conditions, and the measurement apparatus. Many such variations are impossible in the (ng)EHT, which will deal with a small number of sources, initially sparse sampling, lack of control over sources, and a lack of alternative
instruments capable of performing the same measurements. However, multiple redundancies are built into the EHT measurements. For example, the use of varied calibration pipelines builds confidence that the result is not due to idiosyncratic factors in a particular pipeline. For some purposes, such as mass measurements, other means of accessing the system (e.g., observations of S stars orbiting Sgr A*) also contribute to the robustness of the EHT results.
The results of modeling and data analysis methods can also be called robust when they are consistent across variations in modeling assumptions, analysis methods, or parameter choices. The robust occurrence of some features (e.g., the temperature increase for a range of climate simulations, or ring size for a range of EHT imaging methods) increases confidence in that aspect of the modeling outcomes, while other, less stable features (e.g., regional precipitation for climate simulations, the positions of bright 'knot' structures in EHT images of Sgr A*) should be treated with caution.
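As a purely illustrative aid, the sketch below shows one minimal way such comparative robustness reasoning can be operationalized: an estimated feature is flagged as robust only if independent methods agree on it within a chosen tolerance. The numbers, method names, and tolerance are hypothetical, and this is not the EHT's actual analysis pipeline.

```python
import numpy as np

# Hypothetical estimates of two image features from four (made-up) imaging methods.
estimates = {
    "ring_diameter_uas": {"method_A": 42.0, "method_B": 41.5,
                          "method_C": 43.1, "method_D": 42.4},
    "knot_position_deg": {"method_A": 110.0, "method_B": 165.0,
                          "method_C": 80.0, "method_D": 140.0},
}

def is_robust(values, rel_tol=0.10):
    """Flag a feature as robust if every estimate lies within rel_tol of the mean."""
    v = np.array(list(values.values()))
    return bool(np.all(np.abs(v - v.mean()) <= rel_tol * abs(v.mean())))

for feature, vals in estimates.items():
    verdict = "robust" if is_robust(vals) else "not robust"
    print(f"{feature}: {verdict} across {len(vals)} methods")
```

With these toy numbers the ring diameter would count as robust while the knot position would not, mirroring the contrast drawn above between stable and unstable features.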
Among the main lines of criticism formulated against robustness arguments, two seem especially relevant in the context of the ngEHT. Such criticisms envisage the ensemble of models as containing (1) a shared core of assumptions, which makes the models comparable, and (2) an unshared part, deemed problematic (e.g., modeling assumptions, idealizations, parametrizations, or measurement apparatus), whose possible impact on the models' output must be understood and eliminated.
The first criticism argues that in numerical models the shared core common to all models tends to include problematic assumptions. Idealizations shared by all models, such as those that introduce iterative and discretization errors, are unavoidable for solving the problem numerically, but they are also important sources of numerical artifacts; hence, their impact cannot be determined through robustness reasoning. The second criticism points out that the mere convergence of results cannot by itself indicate reliability or partial truth: something else is needed. Gueguen [29] examines a number of cases where convergent results across N-body simulations may be attributable to numerical artifacts. For example, Baushev et al. [30] point out that N-body cosmological simulations predict a "cuspy" profile for dark matter halo density in galaxy center regions (in conflict with observations). They argue that the convergence of simulations on such predictions is produced by numerical artifacts rather than by a physically realistic process captured by the simulations. This case shows how the apparent robustness of simulation results may not indicate that the results are reliable. As emphasized by [31] in their response to the seminal paper by [32] on robustness, from a purely logical point of view, robustness can guarantee reliability only in those cases where we already know that one of the models in the set is correct.8 This condition is rarely satisfied when robustness is most needed, i.e., when it is used to supplement the absence of analytic solutions or experimental measurements that could determine whether one of the models is indeed correct. Hence there is a clear need to analyze when robustness is an efficient tracer of reliability within the ngEHT program, and when it needs to be supplemented or substituted.
In the suite of papers that the EHT issued on M87* [19; 33; 34; 35; 36; 37; 38; 39] and Sgr A* [40; 41; 42; 43; 44], the collaboration's overwhelming concern was to establish, with confidence, the existence of a ring surrounding the black hole shadow. That is, the EHT Collaboration did not want to issue a false positive. For that reason, in the M87* image work _robustness_ was key; the collaboration varied the priors to make sure those choices were not forcing the image to be a ring; isolated four image-making groups to avoid cross-contaminating expectations based on others' results; and varied image reconstruction methods to ensure that the observed ring was not an artifact of any one imaging method. These measures constituted a determined drive to be sure that in the image of M87* the ring and bright crescent in the south were as unshakeable as possible. The commitment to robustness came with an unavoidable cost: other, valid effects--observations outside the ring, for example--might have been omitted. However, especially for this first,
momentous publication, the collective desire was for an appropriately robust, and therefore conservative, claim.
Yet, robustness is not the only possible epistemic desideratum. Over the course of the next generation of work, we may well want to pursue other, complementary ambitions. With highly specific models, physicists and observers could explore other predicted phenomena that might otherwise be lost in the noise. More unsteady, delicate phenomena in the accretion disk and jet formation, for example, could be detected using models and templates of various kinds. In particle physics, such targeted searches are common--this is what triggers do when they pluck a particular signal, interaction, particle, or phenomenon out of the vast sea of other results. Indeed, in many domains of physics, initial statements of groundbreaking results are more statistically fragile.[9] Robustness is thus a core epistemic virtue, but not the only one: too strong an emphasis on it could lead to false negatives by blinding us to hard-to-see phenomena just above the noise. Selectivity, pushed too hard, can produce false positives, giving us back what we hope and expect to see. We need _both_ robustness _and_ model-based selectivity. However, there are epistemic trade-offs to be made between these different epistemic virtues. Future work with the ngEHT will involve decisions about which virtues to prioritize in which contexts.
### Science and Aesthetics in Black Hole Imaging
All images have style expressed through "shared visual features" ([46], p. 4). Graphs, for instance, tend to avoid detail. Certain color schemes are used more often than others. Including or removing artifacts is another choice. Astronomical images, whether based on empirical data or simulations, reflect an array of choices and decisions, and they also participate in their larger historical and cultural contexts. Their creation and interpretation rely on pre-existing visual traditions that establish the norms, expectations, and methods by which a scientific image is given meaning. The AIV focus group will draw on the extensive scholarship on imaging in astronomy and physics to reflect on such image-making choices and decisions by the ngEHT, as well as how the results are received and understood both within the scientific community and beyond [21; 45; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63].
Over the last several decades, images have furthered scientific understandings of black holes. However, until the EHT images, these representations were based on astronomers' calculations and simulations rather than observations. In the early 1970s, researchers visualized the basic outlines of black holes but their images were still in a schematic style [64; 65; 66]. Later in the same decade, more detailed and naturalistic visualizations of black holes emerged: a film clip by Leigh Palmer, Maurice Pryce, and William Unruh (unpublished, but shared in multiple lectures) and Jean-Pierre Luminet's black and white drawing of a black hole accretion disk [67]. Yet later, color visualizations, such as those by Heino Falcke, Fulvio Melia, and Eric Agol of Sgr A*, theorized how the black hole shadow might look if observed using VLBI [68] (see also [69] for the first visualizations in color).
Simulations remained a critical part of the EHT imaging process, resulting in observations that integrated theory in interesting ways. New data imaging pipelines were developed and used together with a library of synthetic images produced by general relativistic magnetohydrodynamic simulations and general relativistic ray tracing [36; 39; 43]. Comparing the observations with theoretical simulations was key for establishing that the observed ring was created by synchrotron emission from a hot plasma orbiting near the black hole. Although these specific techniques were novel, astronomers have long been aware of the dependence of their observations on theory. The need to reduce collected data to a more concise and tractable form in order to account for phenomena such as stellar aberration, atmospheric refraction, or the so-called "personal equation" (variations due to a specific observer's idiosyncrasies) means that astronomy as a discipline has reflected on the role of theory in making raw data into useful depictions of celestial bodies for generations [70; 71]. There is a long intellectual
ancestry of ever-more complex reliance on theory to allow for increasingly powerful forms of observation and imaging. These increases in scope and depth, however, also required more delicate conceptual and social scaffolding, increasing the possible influence of bias and blind-spots [72; 73]. The EHT Imaging Group was keenly aware of concerns about bias and systematic error; from the beginning, the imaging process was shaped by these concerns, in order to ensure the validity of the image [19].
Another concern for the EHT was the legibility of their images for a wide audience--particularly for the first image of M87*, given its novelty. The color palette--a ring in orange-red hues against a black background--was chosen with this in mind; orange was believed more likely to signify heat than blue (even though blue has shorter wavelengths and is therefore "hotter" than orange). Because the EHT Collaboration wanted to share one image with audiences of varying degrees of specialization (see [74], on the basis of [75]), a single averaged image was created from multiple images based on different imaging methods. Notably, the averaging of the Sgr A* image was different than that of M87*, with the former averaging process being more complex than the latter (see [19] for M87* and [20] for Sgr A*). These averaging techniques also connect to historical practices going back to the very beginning of technology-assisted scientific images with Galileo, Hooke, and Hevelius. Such figures used compositing techniques to make their early telescopic and microscopic images legible to wide, non-specialist audiences (particularly those who did not have access to the relevant instruments). Even through the nineteenth century, and well into the twentieth, it was accepted that astronomers would need to synthesize many individual observations in order to produce a reliable drawn or photographic image [58; 76; 77].
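For readers unfamiliar with how a single consensus image can be distilled from several reconstructions, the toy sketch below shows the simplest possible version of the idea: a pixel-wise average across aligned reconstructions, together with a spread map indicating where the methods disagree. The actual EHT averaging procedures (see [19; 20]) are considerably more sophisticated; everything in this snippet, including the random stand-in "reconstructions", is hypothetical.

```python
import numpy as np

# Three hypothetical reconstructions of the same source from different imaging
# methods, already aligned and regridded onto a common 64x64 pixel grid.
# (Random arrays stand in for real images; actual pipelines also blur each
# reconstruction to a common resolution before combining.)
rng = np.random.default_rng(0)
reconstructions = np.stack([rng.random((64, 64)) for _ in range(3)])

# A simple consensus image: the pixel-wise mean across methods.
average_image = reconstructions.mean(axis=0)

# The pixel-wise spread is a rough map of where the methods disagree.
disagreement_map = reconstructions.std(axis=0)

print(average_image.shape, float(disagreement_map.max()))
```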
Given that more than a billion people saw the M87* image within days of its release [78], EHT imaging choices will continue to influence how black holes are perceived and understood. The next generation of images produced by the ngEHT will build on these perceptions while introducing new considerations; increasing the bandwidth, including other frequencies, and adding telescope sites, will allow for greater resolution, and the production of moving images (movies). This means that further choices will need to be made about how to convey this information in an image.
The history of astronomical images (and their reception) offers models to consider. Many existing astronomical images use color to distinguish between different wavelengths, and the hues often signify both physical properties and evoke aesthetic responses. For example, color in many Hubble Space Telescope images indicates relative temperatures while also creating a resemblance to the sublime nineteenth-century paintings of the western regions of the USA [79]. Such seemingly naturalistic color choices elicit questions from viewers, who assume color corresponds to human perception. In other instances--remote sensing of the Earth and some planetary images--more obviously engineered color choices enhance morphology yet emphasize the reliance on technology to extend human vision [63; 79]. Looking forward, ngEHT might also find it valuable to seek models beyond the history of scientific images when making decisions on how to represent data. This could include representation of movement in film or video games, or examining the work of artists who use scientific data as the basis of their aesthetic explorations [80]. EHT images of M87* and Sgr A* have elicited a range of responses (from awe to disappointment) and have already shaped the iconography of black holes [74]. ngEHT imaging represents an opportunity to consider once again how imaging decisions, whether motivated by scientific or aesthetic concerns, shape the scientific and public perception of black holes.
## 3 Foundations
**Coordinating author: Elder, J.; Contributing authors: Ashtekar, A.; Doboszewski, J.; Enander, J.; Lesourd, M.; Murgueitio Ramirez, S.; Schneider, M.D.; Thresher, A.C. and Weatherall, J.O.**
### Introduction
The Foundations focus group is an extension of the existing Foundations Seminar at the Black Hole Initiative (BHI). This seminar provides a venue for discussion of foundational issues relating to black holes. Previous themes of the seminar include: singularities, black hole thermodynamics, the analytic extension of the exterior Kerr metric, and theory vs. observation in astrophysics (among others). As we take on a new role as a focus group of the HPC working group, we will aim to facilitate further discussion of these themes in the context of the ngEHT.
In what follows, we illustrate issues that arise from such discussions. To do so, we narrow the focus to the final theme in the above list: bridging the gap between theory and observation. In Section 3.2, we provide some examples of where challenges arise for the applicability of theoretical results to real-world black holes. This includes a discussion of the no-hair theorems in Section 3.2.1 and a discussion of the relationship between concepts like mass, charge, and angular momentum in cosmological settings with and without a (positive) cosmological constant, in Section 3.2.2. Then, in Section 3.3, we sketch some philosophical responses to these apparent challenges.
The key questions that we seek to address in this section are these: how do we (or should we) apply formal mathematical results to a messy world where many of the assumptions behind those results are not, strictly speaking, realized? Furthermore, how can empirical results be brought to bear on theory in such cases? Our goal is to address such questions in the context of (supermassive) black holes such as those observed by the EHT and ngEHT. While the discussion of these questions here is only a beginning, answering such questions in the future will have important consequences for our understanding of applications (and tests) of theoretical results using the ngEHT array.
Overall, this section serves as an example of the kinds of discussion that will continue to take place within the Foundations seminar as it takes on a second, complementary role as a focus group within the HPC working group. In addition to the theme discussed here, singular spacetimes, black hole thermodynamics, and other foundational topics concerning black holes will be the subject of ongoing philosophical discussion.
### Challenges for the Applicability of Theory to Astrophysical Black Holes: Two Examples
Astrophysicists and astronomers often refer to exact solutions of the Einstein field equations--especially the Schwarzschild and Kerr metrics--when describing and interpreting their observations but there are potential problems with this. The Schwarzschild and Kerr metrics are highly idealized, involving assumptions that might not be physically realistic (see [81] for a discussion of this point in the context of the notion of an event horizon). For example, astrophysical black holes exist in the presence of matter fields, in a universe whose expansion is characterized by a positive cosmological constant, whereas these two metrics are solutions of the vacuum Einstein field equations and are asymptotically flat. It is therefore imperative to investigate the domain of applicability of these descriptions for astrophysical black holes. This means carefully explicating the ways that these solutions are used and examining the conditions under which the idealizations inherent in these solutions may or may not be problematic.
For illustrative purposes, we briefly consider two examples: the physical relevance of the no-hair theorems and the applicability of quantities such as mass, charge, and angular momentum for \(\Lambda>0\), where \(\Lambda\) is the cosmological constant.
#### 3.2.1 No-Hair Theorems
It is widely assumed that the geometry around astrophysical black holes is well described by the Kerr (or Kerr-Newman) family of metrics.[10] The justification for this is based on the application of so-called 'no hair' theorems, according to which stationary black hole spacetimes solving the Einstein field equations in vacuum, or the Maxwell-Einstein field equations with an electromagnetic stress-energy tensor, are exhausted by the Kerr and Kerr-Newman families of metrics, respectively.
However, this line of reasoning depends on a range of assumptions that may be called into question for physically realistic black holes. First, the no-hair theorems apply to stationary black holes (see Section 1 of [83]), so their application relies on the assumption that astrophysical black holes eventually settle down to a stationary state. Second, existing no-hair theorems rely on various mathematical assumptions that are highly unrealistic. In the standard formulation, analyticity of the spacetime metric is required in order to show the existence of the appropriate Killing vector fields; but astrophysical modeling of gas and plasma strongly suggests the presence of shocks in the vicinity of a black hole, making analyticity an implausible assumption. Third, the no-hair theorems are known to fail in the presence of matter fields (other than electric fields); see Section 5 of [83] for a variety of examples arising if the source side of the Einstein's field equations is a (classical) Yang-Mills term.
This illustrates some important concerns about the applicability of no-hair theorems for astrophysical black holes. Given that several of the assumptions behind the theorem do not, strictly speaking, hold in reality, to what extent should we expect real black holes to be well-described by the Kerr(-Newman) metrics? Furthermore, are there ngEHT observations that might provide evidence of deviations from Kerr(-Newman)? No such deviations have been observed by the EHT to date. However, ref. [44] provides constraints on potential deviations from the Kerr metric based on the 2017 observations of Sgr A*.
#### 3.2.2 Mass, Charge, and Angular Momentum in \(\Lambda>0\)
If we study the Einstein field equations with \(\Lambda=0\), adopting certain assumptions about global spacetime structure (e.g., that the underlying manifold is simply connected at infinity and spacetime geometry is asymptotic to Minkowski spacetime), the theory of general relativity seems to single out a small number of global quantities--ADM mass, charge, and angular momentum[11]--which play a central role in understanding and quantifying basic astrophysical phenomena. However, cosmological observations support the conclusion that the accelerated expansion of the universe is well described by a positive cosmological constant, i.e., \(\Lambda>0\)[84]. Some have taken this empirical finding to signify the need for a better understanding of the character of global quantities in an asymptotically de Sitter universe, to replace the ones currently in use (see [85] for discussion on this inference, including caveats). Recent progress in defining and understanding counterparts of the ADM quantities in the \(\Lambda>0\) case has been made by Abhay Ashtekar and collaborators [86; 87; 88; 89]. Unlike the familiar ADM quantities noted above, these new ones take for granted different assumptions about global spacetime structure.
It would be prudent to clarify the relationships between the global quantities in the \(\Lambda=0\) and \(\Lambda>0\) cases, including the role of the flat case in characterizing black holes in the \(\Lambda>0\) context. In particular, doing so seems necessary to interpret what astronomers are observing when they measure the mass, spin, etc. of real astrophysical black holes under an idealizing assumption that the cosmological constant vanishes. The central issue here is a general question about how to interpret global properties and asymptotic spacetime assumptions as relevant to astrophysical modeling. However, a further issue arises: how to interpret specific asymptotic assumptions within idealized models in a situation where physical expectations about the asymptotic spacetime structure (\(\Lambda>0\)) are very different from those of the idealized model? One might hope that a systematic understanding of isolated systems includes an
interpretation of asymptotic spacetime assumptions as becoming approximately true 'far away' [90]. However, in a \(\Lambda>0\) context, there would seem to be such a thing as moving 'too far away' from the isolated system under study (due to the presence of cosmological horizons that separate distant observers from the system). Therefore, considering the particular case of \(\Lambda>0\) complicates any such story about asymptotically flat structure becoming approximately true.
### Theory and Observation: Bridging the Gap
Purported problems like the examples above elicit a range of responses from strict to pragmatic. One guiding question for the Foundations group moving forward is the following: how can we apply theory to observations (and vice versa) when strictly speaking there is a mismatch (e.g., the conditions of theorem are not met in the real world)? Furthermore, how can this be justified? Answering these questions will generally be sensitive to the details of the case--including the precision of the description needed and the stability of the theoretical results across changes in assumptions. For now, we defer detailed consideration of the above examples to future work. Here, we instead outline some different perspectives on the general theme along with some guiding philosophical morals.
A strict approach, prioritizing mathematical rigor, is to adopt a cautious stance and not apply concepts or models outside the domain in which the assumptions behind them are true. If the assumptions behind a theorem are not true then it is not considered to be physically relevant. This approach embodies a conservatism toward epistemic risk, prioritizing the avoidance of errors over pursuing potentially fruitful (but risky) avenues of reasoning. On such a view, the issues described above are indeed considered to be problematic, amounting to a pressing need for further study and understanding. The no-hair theorem case, for example, suggests a need for a better understanding of black holes beyond the Kerr-Newman family. The manifest failure of no-hair theorems in the presence of matter fields means that we should absolutely expect to see deviations from the Kerr metric in the near-horizon regime. A better understanding of what these deviations could be and how we might test for them should be part of the scientific landscape.
A more pragmatic approach to these issues is based on a different conception of the roles of models in scientific inferences. Indeed, a recent 'pragmatic turn' [91] in the philosophy of modeling and measurement has led to a greater emphasis on epistemic goods such as reliability (e.g., [92]) and adequacy for purpose (e.g., [93]) over truth.
For Cartwright et al. [92] the mismatch between models and the real world is resolved by noting that science gets to truth via reliability. Indeed, the vast majority of science is not the kind of thing that takes truth values. Models, along with things like measures, experiments, codes, narratives and techniques are essential parts of science; and yet, what would it mean to say a code or technique is true? Instead, we can ask the more important question--are they reliable? If so, for what? In what context? On this view, reliability, far more than truth, captures the actual goals and structures of science and helps explain why models are useful for black hole physics--because our goal is to create reliable systems for capturing black hole dynamics and properties: systems that, in turn, give us reliable results for the particular job at hand. We have only to look at processes of model-building and model-selection to see this in action--particular idealizations are chosen, and values set, that get us closest to useful results. This, in turn, is taken to be a proxy for truth that works provided we remain within the context the model is built or adapted to be useful for.12
From this perspective, asking whether the assumptions underlying various foundational results are true is misguided, and better questions would be whether the assumptions are reasonably clear and the results are useful for various purposes. This seems to be the attitude adopted by many working physicists. However, even if one grants that a pragmatic, or even
instrumentalist, attitude to foundational issues is justified for many practical purposes, one might think simply dismissing the foundational worries raised above is too fast. One reason is that a way in which our models can be useful and even reliable is by identifying points of tension in our understanding of a given physical system--in this case, black holes. Those points of tension, where models with apparently overlapping domains disagree, or where it is unclear whether the assumptions of this or that theorem truly apply to a given case, have historically been catalysts for developing new physics that can explain why different, apparently inconsistent, models nonetheless work in different contexts. A too-radical form of instrumentalism about scientific modeling would presumably reject the demand to make our models consistent, or to at least resolve the tensions that may arise between them [94; 95].
Future ngEHT observations will play a mediating role, bridging the gap between real astrophysical black holes and idealized theoretical descriptions of them. Doing so will mean scrutinizing the reliability of our best models of black holes and the domains of applicability of theoretical results pertaining to them.
## 4 Collaborations
**Coordinating author: Martens, N.C.M.; Contributing authors: Doboszewski, J.; Elder, J.; Galison, P.; Lalli, R.; Marcoci, A.; Nguyen, J.; Ritson, S.; Schneider, M.D.; Skulberg, E.; Sorgner, H.; Van Dongen, J.; Wu, J. and Wuthrich, A.**
### Introduction
The Collaborations focus group lies at the intersection of various approaches within the humanities and social sciences, including history, philosophy, sociology, science and technology studies, integrated history and philosophy of science, and law. As a result, we combine a mix of different methodologies, including literature analysis, comparative case studies (e.g., ATLAS, LIGO-Virgo, IPCC, Hubble, JWST; see below), tools from the digital humanities, interviews and surveys. This will enhance our ability to engage with the rest of the ngEHT collaboration in a way that includes a diversity of opinions, all with the aim of supporting a constant dialogue to provide real-time recommendations to the ngEHT collaboration, qua collaboration, at each of its various stages of development and operation.
The focus group concerns itself with the relationship between individuals and the ngEHT collaboration as a whole. To address ngEHT's social epistemology (i.e., how knowledge is produced in social groups such as scientific collaborations), we delve into how knowledge is conditioned by the collaborative production of data, images, and text, and what the process of negotiation entails for its claims about the world.13 It is clear from previous large-scale collaborations--and the ngEHT will be no exception--that _the establishment of fact_, and _what constitutes a fact_ are to some extent the result of the social negotiation of consensus.14 Thus, group structure and the distribution of authority play a direct role in what counts as knowledge. For instance, the particle physics community has converged on near-universal conventions regarding the determination of facts--five sigmas are required for a discovery--whereas only two sigmas are required to exclude new physics hypotheses.15 In contrast, there is currently no such shared standard in astrophysics.
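For readers outside particle physics, these sigma conventions correspond, roughly and under a one-tailed Gaussian convention, to the tail probabilities computed in the short sketch below. The snippet assumes SciPy is available and is meant only as an illustrative conversion, not as a statement of any collaboration's actual statistical policy.

```python
from scipy.stats import norm

# Rough one-tailed Gaussian tail probabilities for the conventions above
# (illustrative conversion only; actual statistical practice is richer).
for sigma, label in [(5, "discovery threshold"), (2, "exclusion threshold")]:
    print(f"{sigma} sigma  ~  p = {norm.sf(sigma):.1e}  ({label})")
```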
### Knowledge Formation and Governance: Top-Down vs. Bottom-Up
Large-scale scientific collaboration can take place within a variety of governance/organizational structures, ranging from top-down hierarchical structures to more loosely organized bottom-up collaboration in the absence of a formal governing structure. We, the ngEHT collaboration, see ourselves as (ideally) being located somewhere in the middle of this spectrum--in particular somewhat closer to the bottom-up extreme than the EHT collaboration. In this section, we briefly illustrate this claim by contrasting the ngEHT with instances of scientific collaboration found at either extreme--specifically, the particle physics collaborations ATLAS and CMS associated with the Large Hadron Collider (LHC), the gravitational-wave-detecting LIGO-Virgo collaboration (LVC), and the Intergovernmental Panel on Climate Change (IPCC).
At the top-down extreme of the spectrum, partially exemplified by ATLAS and CMS, as well as the LVC, we find hierarchical structures with a centralized, physical headquarters and funding stream, with one or a few main instruments or purposes, and a large number of committees that decide which collaboration papers are published and how, which members get to present at which conference, etc. The collaboration is prioritized over the individual member; consensus is prioritized over dissent and diversity of opinions, with dissent being procedurally dealt with internally before (consensus) results are published. This structure facilitates a strong group identity, obtaining a large amount of funding for a dedicated, coordinated purpose, and achieving that purpose in the most efficient way possible. However, there is a risk that individual credit and creativity are lost to some extent. In contrast to these top-down examples, the ngEHT is a loosely organized, informally scripted, yet formally documented collaboration. Although workshops and conferences bring together researchers for short periods of time, observations will take place from different continents, researchers usually work from different geographical locations, and no building has been constructed for the purpose of housing ngEHT research. Instead, the asynchronous electronic infrastructure ([105], p. 159) of Overleaf, Slack, Google documents, slides, and telecons will be used to coordinate matters.
At the bottom-up end of the spectrum, we do not find formal collaborations per se but instead entire scientific communities with a common subject and a more or less uniform research culture. In such cases, authors coalesce in and out of projects, with members of the community communicating via conferences and peer-reviewed publications rather than in a physical headquarters. In this bottom-up model, individual groups can pursue any research direction that they themselves consider fruitful--as long as they manage to get funding--and publish dissenting results. A coherent, negotiated narrative connecting all these results and delineating the _facts_ is more likely to be established later (if at all), through review papers and review presentations in textbooks. Particularly striking examples are meta-analyses in medical communities or the recent report by the Intergovernmental Panel on Climate Change (endnote 5), which synthesizes 14,000 papers from the climate science community. In contrast to such extreme bottom-up examples, some sustained collaboration is required to achieve the ngEHT's main goals: financing and building additional telescopes and coordinating the whole network of telescopes to which it has access; the joint reduction of data; and, finally, reporting its findings in publications. Moreover, it is important to stress that maximizing the benefits of bottom-up approaches does not come for free; it is not merely a matter of the absence of a top-down governance structure, but also requires the implementation of positive measures that bring out the advantages of bottom-up approaches, such as room for diversity and individual creativity.
One important challenge for the ngEHT then, regarding the spectrum of bottom-up versus top-down approaches to social epistemology and governance, is to be the best rather than the worst of both worlds. In the remainder of this section, we outline some preliminary thoughts on how this can be achieved. In particular, we discuss the need to facilitate dissent (Section 4.3) and to adopt a governance charter (Section 4.4).
### Knowledge Formation: Differences of Opinion
Should large scientific collaborations aim for consensus? The extent to which consensus is ideal for a scientific collaboration depends on how consensus is construed. First, we can consider the _unit_ of consensus: should the group agree on individual propositions or collections of (logically connected) propositions?17 Second, we can consider the _bearer_ of consensus. In the first instance, whether a group should aim at consensus may depend on the nature of the collaboration: what ties the individuals together?18 In the second instance, when we attribute consensus to a group, are we "summarising" the attitudes of the individuals, or does the collaborative aspect add something to this--possibly in the sense of a "plural subject" or a "group agent" [108; 109; 110]? Third, we can distinguish between at least two _attitudes_ relevant to the consensus: if a group is in consensus does it (or each member of it) hold a consensual belief, or a consensual acceptance, where different epistemic norms are associated with each attitude (e.g., belief requires a commitment to truth while acceptance may not) [111; 112; 113; 114].19 Fourth, we can ask about the _extent_ of consensus: at one extreme consensus might be identified with unanimity, but some level of dissent may be consistent with consensus, and indeed, as we discuss below, even encouraged [116]. Clarifying each of these dimensions allows us to ask more fine-grained questions about the nature and desirability of consensus (e.g., we can attribute a consensus belief to the group without necessarily requiring that all, or even any, of the individuals, believe all, or even any, of the propositions the group believes, although they may accept them in virtue of being in the collaboration).
The above requires us to take a step back and ask what _being in the ngEHT collaboration_ actually means. Issues such as who may be a member of the ngEHT collaboration and an author of the collaboration's papers need to be made explicit. How is membership established, and what does it imply to be a member? Which rights, responsibilities and credits follow from membership? Who may become a member? Is a vetting procedure required, and which members get to decide who else may become a member? Should there be different types of membership? Are all members also on the author list of collaboration papers? Is it possible to be a member without being an author? Might different types of authorship (e.g., data compiler, data analyst, text writer) be desirable? How are papers written and what epistemic goals might be favored by such a process? What happens if the collaboration is succeeded by another, or splits up: who owns the collaborative knowledge? Answers to these questions make clear who is a party to making knowledge; and thus also, what constitutes knowledge.20 In the near future we aim to survey how different modalities of membership and authorship have been crafted in comparable yet different collaborations (ATLAS at the LHC, LVC, and IPCC), and make an inventory of current practices in the EHT and ngEHT collaborations, including an analysis of their advantages and drawbacks.
Returning to consensus, some construal of consensus is _prima facie_ valuable and to be expected in scientific collaborations. First, because epistemic peers presented with the same evidence are, on a first approach, expected to reach the same conclusions [120]. Second, the higher the number of independent and competent scientists who believe a particular claim, the more likely it is to be true (the relevant result is a generalized Condorcet's Jury Theorem [121; 122]). Third, the stronger the consensus for a claim, the more likely it is for the general public to accept it [123]. Finally, a lack of consensus is often what politicians and lobbyists use to undermine the findings of scientific collaborations [124].
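To make the second point concrete, the classical (ungeneralized) jury theorem can be illustrated with a short calculation: if voters are independent and each is correct with the same probability above one half, the probability that a simple majority is correct grows rapidly with group size. The sketch below computes exactly this, with purely illustrative numbers; the generalized results cited in [121; 122] relax these idealized assumptions.

```python
from math import comb

def p_majority_correct(n_voters, p_individual):
    """Probability that a strict majority of n_voters independent voters,
    each correct with probability p_individual, reaches the correct verdict
    (n_voters taken to be odd so that ties cannot occur)."""
    return sum(
        comb(n_voters, k) * p_individual**k * (1 - p_individual)**(n_voters - k)
        for k in range(n_voters // 2 + 1, n_voters + 1)
    )

# With competence only modestly above chance, majority reliability grows
# quickly with the number of independent voters (numbers are illustrative).
for n in (1, 11, 101, 1001):
    print(f"{n:>5} voters: P(majority correct) = {p_majority_correct(n, 0.6):.4f}")
```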
On the other hand, there are reasons to be wary of some construals of consensus. Consensus between individuals may be impossible to achieve in contexts where the collaboration involves individuals with different values and/or disparate areas of expertise. Furthermore, the fact that epistemic peers _may_ reasonably disagree on substantive issues motivates the applicability of judgment aggregation theory to scientific collaborations [125; 126]. Finally, when consensus is enforced through a collaboration's policies in a top-down fashion (cf. Section 4.2),
this may disincentivize deliberation and the exploration of competing hypotheses [127]; it may also produce the appearance of agreement when there is none [120; 128; 129; 130].
It is thus important to find a good balance between top-down and bottom-up approaches to structuring an organization (cf. Section 4.2) that promotes consensus-building without prematurely suppressing dissent. Having a diversity of beliefs and practices among team members can be epistemically beneficial to science. For example, individuals in collaboration may draw on different (and possibly even competing) sources of evidence and theories in order to justify their conclusions [131]. Moreover, if all team members test the same hypothesis (and especially, by means of the same methods), they may prematurely settle for false beliefs. Several authors (notably [132]) have advocated for a period of _transient diversity_ during scientific research when different epistemic options are sufficiently tested before the community settles on a consensus.
Mechanisms that allow for or encourage transient diversity thus present strategies to promote a desired kind of creativity at the group level within bottom-up research contexts [133; 134]. While the influence of (diverging) non-cognitive values in science is unavoidable, it is not necessarily pernicious [135], and transient diversity could provide one such mechanism. Indeed, a more inclusive representation of values and perspectives is expected to produce epistemically more robust results [136]. Increasing transient epistemic diversity may also be helped by incorporating perspectives from marginalized groups into the scientific inquiry [137; 138; 139; 140]. Furthermore, facilitating minority views and carefully publicizing (partial) dissent increases transparency and enhances rather than erodes the credibility of the collaborations' conclusions [116; 120]. One motivation for this bottom-up line of reasoning stems from the social turn in the philosophy of science [141]: emphasizing the political, social, and psychological aspects of scientific collaborations encourages the idea that trustworthy decisions in science, as in other social institutions, require deliberation, transparency and openness. Enforcing consensus goes against these norms.
In light of the above, what techniques and policies should guide collaboration within the ngEHT? Firstly, there are several mechanisms that can generate (transient) diversity. Of particular interest are modeling results [142] showing that the less connected the epistemic community is, the more likely it is to converge to the true belief--but the slower it is at doing so [143; 144; 145; 132]. For high-stakes frontier research where it is important to be correct, it may be warranted to temporarily limit communication between team members. For instance, the limited communication between the imaging teams at the EHT may have epistemically benefited the final results [146; 142].21
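To give a flavor of the kind of model behind these results, the sketch below implements a simplified bandit-on-a-network setup in the spirit of this literature; it is not a reproduction of the cited models. Agents choose between a well-understood action and an uncertain but in fact better one, share their outcomes with network neighbors, and update Beta credences. Comparing a complete network with a sparse cycle lets one ask how often each community settles on the better action; whether and how strongly the connectivity effect appears depends on the (here entirely illustrative) parameters.

```python
import numpy as np

def cycle_adjacency(n):
    """Sparse network: each agent sees itself and its two neighbours on a ring."""
    adj = np.eye(n, dtype=bool)
    for i in range(n):
        adj[i, (i - 1) % n] = adj[i, (i + 1) % n] = True
    return adj

def complete_adjacency(n):
    """Dense network: every agent sees every other agent."""
    return np.ones((n, n), dtype=bool)

def simulate(adjacency, p_new=0.55, trials=20, rounds=200, seed=0):
    """One run of a simplified network-epistemology model. Agents compare a
    known action (success rate 0.5) with an uncertain one (true rate p_new)
    and update Beta credences on their own and their neighbours' outcomes.
    Returns True if all agents end up favouring the better (new) action."""
    rng = np.random.default_rng(seed)
    n = len(adjacency)
    alpha = rng.uniform(1.0, 4.0, n)  # heterogeneous initial Beta parameters
    beta = rng.uniform(1.0, 4.0, n)
    for _ in range(rounds):
        testers = alpha / (alpha + beta) > 0.5   # who still tries the new action
        if not testers.any():
            return False                         # community has abandoned it
        succ = np.where(testers, rng.binomial(trials, p_new, n), 0)
        fail = np.where(testers, trials - succ, 0)
        for i in range(n):                       # pool evidence from visible testers
            visible = adjacency[i] & testers
            alpha[i] += succ[visible].sum()
            beta[i] += fail[visible].sum()
    return bool((alpha / (alpha + beta) > 0.5).all())

n_agents, n_runs = 10, 100
for name, adj in [("complete", complete_adjacency(n_agents)),
                  ("cycle", cycle_adjacency(n_agents))]:
    wins = sum(simulate(adj, seed=s) for s in range(n_runs))
    print(f"{name:>8} network: settled on the better action in {wins}/{n_runs} runs")
```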
ience/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/scien
ce/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/
science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/sciencescience/science/science/science/science/science/science/science/science/science/science/science/science/science/science/sciencescience/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/sciencescience/science/science/science/science/science/science/science/science/science/science/science/sciencescience/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/science/sciencescience/science/science/science/sciencescience/science/sciencescience/science/science/science/science/sciencescience/science/science/science/science/science/sciencescience/science/science/science/science/science/science/science/science/sciencescience/science/)
The Collaborations focus group will also explore how ngEHT members interact with one another. Methodologically, we can use concepts and tools from network theory to quantitatively investigate the structure of the collaboration and its change over time. By using a multi-layered network perspective of socio-epistemic networks we can investigate how the social structure is related to the production of new knowledge [155; 156]. Network approaches also allow us to understand the flow of information within the collaboration. An illustrative example in this regard is recent work analyzing more than 20,000 emails sent via internal mailing lists of a major particle physics collaboration [157]. This analysis revealed a pronounced sub-structure of the communication network featuring smaller "communities" within the collaboration. The communication network is also relatively dense and, in a network-theoretical sense, less hierarchical than most such networks, which is surprising given the top-down governance structures in place. Such analyses of communications networks may provide insight into how large-scale collaborations collectively produce knowledge.
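To make the methodological idea concrete, the sketch below shows the kind of analysis that could be run on anonymized mailing-list metadata using the networkx library; the e-mail pairs and names are hypothetical, and Freeman degree centralization is used merely as one possible proxy for hierarchy, not as the method of the study cited above.

```python
# Illustrative sketch: community structure and hierarchy in a collaboration's
# e-mail network, built from hypothetical (sender, recipient) pairs.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

emails = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"),
          ("dave", "erin"), ("erin", "frank"), ("frank", "dave"),
          ("alice", "dave")]

G = nx.Graph()
for sender, recipient in emails:
    if G.has_edge(sender, recipient):
        G[sender][recipient]["weight"] += 1
    else:
        G.add_edge(sender, recipient, weight=1)

print("density:", nx.density(G))
print("communities:",
      [sorted(c) for c in greedy_modularity_communities(G, weight="weight")])

# A simple (and by no means unique) proxy for hierarchy: Freeman degree
# centralization, i.e., how strongly communication concentrates on a few hubs.
deg = dict(G.degree())
n, dmax = len(G), max(deg.values())
centralization = sum(dmax - d for d in deg.values()) / ((n - 1) * (n - 2))
print("degree centralization:", round(centralization, 3))
```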
Similar network analyses could also be done for the ngEHT. This descriptive project could also inform the normative guidance that we provide to the collaboration; the analyses could be used to test hypotheses about what communication structures might be particularly conducive to epistemic success, and which mechanisms and governance structures would foster such communication. This work could then be connected to the rich body of literature spanning decision theory, social psychology, and mathematics that explores the advantages and drawbacks of different ways of structuring deliberation between, and eliciting judgments from, experts [158; 159], as well as formal frameworks for conceptualizing the relationship between the attitudes of individuals and the attitudes of the group [106]. For the ngEHT, the exact balance between seeking collectivist consensus from the outset or operating via integration and trade-offs between autonomous viewpoints will depend on how data and responsibilities are shared among members, whether there are distinct organizational sub-units within the collaboration, and what the final arbiter is in cases of conflict (e.g., whether an appeal to a higher authority is possible, and how that authority is legitimized). The authorship of publications (whether they are mainly collectively authored or authored by distinct groups within the collaboration) will likely reflect these organizational norms [160].
In sum, it is clear that it would be beneficial for the ngEHT not to enforce consensus in the top-down fashion known from, among others, the various LHC collaborations. The Collaborations focus group aims to enrich the somewhat abstract existing literature by investigating concrete mechanisms and organizational structures that can maximize the benefits of epistemic diversity, applicable to the ngEHT context via a detailed analysis of the practice of the ngEHT collaboration with tools from the digital humanities and with internal surveys. It is crucial that these organizational structures are geared towards representation, diversity, sufficient freedom for individual creativity, the appropriate balance between transparency and epistemic distance at various stages of the collaboration, and appropriate assignment of credit, as elaborated upon in the following subsection.
### Governance
Well-structured governance is key to the future of collaboration. A main task for the Collaborations focus group will be to systematically analyze the organizational structure of various similar collective entities, including LIGO-Virgo, EHT, ATLAS, CMS, CERN, IPCC, the UN, Hubble and JWST, to identify their main benefits and drawbacks. Surveys conducted among ngEHT members, based on a similar survey conducted within the EHT collaboration, will also provide valuable data moving forward. These lessons will be synthesized into the optimal governance model for the ngEHT, keeping in mind the desiderata and worries described in the previous subsections.
To give the reader a tentative first impression of what such a governance model might look like, we sketch here an initial suggestion. We view this as the beginning of an ongoing conversation about the optimal governance model for the ngEHT collaboration. This model will then be iteratively tested and improved--especially with regard to how it facilitates knowledge formation--and adapted as circumstances change. Given that the nascent ngEHT collaboration has already begun to take shape, it is crucial that this group make what recommendations it can--however preliminary--at this early stage. We are now in a position to influence organizational structures that may become increasingly entrenched as the ngEHT project gains momentum.
The core of the collaboration is its eleven working groups--eight science working groups (including HPC) and three technical working groups. In other collaborations, working groups have worked particularly well to generate a sense of community and strong science. The major Principal Investigators who lead the working groups alongside (and overlapping with) the Management Team--including the ngEHT director, chief scientist, and chief engineer--take on the dual responsibilities of fiscal probity (fulfilling the contracts) and keeping a steady hand on the tiller to keep the collaboration in line with its founding goals. They would be guided and supported by a small number of governance structures (Figure 1): a central Scientific Council, a Project Advisory Committee, a Facilities Advisory Board, an Ethics Committee and a Publication Committee. These structures are not intended to provide top-down constraints by appointees, such as forcing consensus, but are instead (partially) elected, representative bodies that streamline the collaboration in a way that celebrates diversity and raises ethical scientific comportment to a primary aim.
**Scientific Council & Project Advisory Committee.** The ngEHT, like LIGO, includes multiple sites and dozens of scientific groups. To run its program, LIGO established a scientific council (LSC) that determines the scientific priorities and the overall mission--responsible too for science, instrumentation, communication, and operation. Composing the LSC are representatives of the various groups, in proportion to their membership size.22 The ngEHT might follow a modified version of that model which offers a way for the membership to shape scientific and technological policy and to facilitate decisions about priorities (such as targets, observation cadence, instrument standards, and aims). The ngEHT Scientific Council would be composed of representatives chosen by the constituent groups--no such representative body exists within the EHT collaboration. Where participating institutions or other stakeholders, including local communities and junior members, are too small to field separate representatives,
they could be grouped together to form a larger body. The elected council would receive advice from the already existing Project Advisory Committee (Science Advisory Board), consisting of appointed, experienced and mostly external scholars, including Nobel laureates.

Figure 1: Tentative governance structure of the ngEHT collaboration.
**Ethics Committee & Transparent Ethical Charter.** In founding the ngEHT, a charter specifying its structure is desired; it should equally include transparent record keeping, voting procedures, and appointments, as well as principles of membership, publication, authorship, credit, and conflict resolution. Along with these procedures, the charter would lay down a guiding, forceful commitment to diversity, equity and inclusion, as well as to ethical comportment regarding fairness, respectful interactions, and accountability. Putting this in the founding charter would give it the weight it deserves, showing these values are foundational, not _pro forma_. As groups join the ngEHT, it would be essential, in addition, to have a Memorandum of Understanding underscoring commitment to the charter and to the particular roles and responsibilities of the group. A high-level ethics committee--ideally including several members of the History Philosophy Culture Working Group--would be tasked with drafting this charter to be sent to the rest of the collaboration for feedback, with overseeing adherence to the charter once in place, and with updating it based on continuous feedback. It should maintain and publicize policies to promote an equitable, inclusive, and welcoming workplace. This committee could also include or run elections to identify ombudspersons and mediators, as part of a broader mandate to do all in its power to stop intolerable actions visited upon collaborators, such as harassment, bullying or marginalization on the basis of race, gender, nationality, or identity.
**Facilities Advisory Board.** The ngEHT will use some established facilities and so, in part, resembles an experiment hosted at a particular facility: at such telescopes, the ngEHT will apply for observing time. Essential to realizing its mission, the ngEHT aims to build approximately ten additional sites beyond the existing telescope facilities used by the EHT: five in a first phase, with an additional five to follow. The Facilities Advisory Board would consist of representatives of some of the telescopes or groups of telescopes and, if needed, of scientifically relevant facilities (e.g., large-scale computation/correlators), even if they are not direct stakeholders. Note that the Facilities Advisory Board and the Project Advisory Committee are separate entities, in contrast to the structure of the EHT collaboration.
**Publication Committee.** The aim of the publication committee would not be to provide negative constraints beyond standard checks regarding the use of proprietary data. It is not to be a gatekeeper that approves the official opinions and results of the members of the collaboration. Instead, its aim is positive: to streamline the publication process for the collaboration, and for work by smaller subsets of members that relates to the ngEHT, by coordinating internal review in cases where this may be helpful, by ensuring that credit is given where credit is due, and by coordinating the ngEHT science book and other strategies that enhance the overall visibility of ngEHT-related outputs, all in line with policies set out in the ngEHT's charter.
The ngEHT, like the IPCC, is an overarching framework for dozens of institutes across the world, each funded in different ways. Like the IPCC, the ngEHT has working groups. In contrast to the ngEHT, the IPCC was formed by an international compact, offering not novel research but a mechanism for collective, reliable assessment of existing research--including evaluators of different career stages, genders, and geographical regions. The ngEHT could learn from the way the IPCC has honed methods of assembling expert judges to assess both scientific/technical questions and to assist in effective final write-ups of the work. Similarly, the LHC detectors ATLAS and CMS have elaborated effective (but different) means of evaluating their own work before publication, which could serve as inspiration.
In sum, a governance model like the one proposed above would serve to support the working groups and help them excel, not by providing constraints that prioritise the collaboration over the individual working group members, but in a way that streamlines their work by ensuring diverse representation of the various stakeholders.
## 5 Conclusions
This white paper has presented some--but by no means all--of the plans of the History Philosophy and Culture Working Group of the ngEHT collaboration. It is unprecedented for scholars from the humanities and social sciences to be integrated into a physics collaboration of this size, from the very beginning and with the same standing within the collaboration structure as its STEM members. We would like to cordially invite other scholars from the humanities and social sciences to join us in this exciting endeavor of making the ngEHT a prime model for interdisciplinary collaboration and recording high-quality videos of a black hole together.
Writing--original draft preparation, all authors; writing--review and editing, all authors; supervision, P.G.; project administration, P.G., J.D., J.E. and N.C.M.M. All authors have read and agreed to the published version of the manuscript.
J. Doboszewski, J. Elder and N.C.M. Martens would like to thank the Volkswagen Foundation for its support in providing the funds to create the Lichtenberg Group for History and Philosophy of Physics at the University of Bonn. M. Gueguen and N.C.M. Martens would like to thank the European Union's Horizon 2020 research and innovation programme for the funding received under the Marie Sklodowska-Curie grant agreements No. 101026214 and No. 101065772, respectively. P. Galison, J. Doboszewski, J. Elder, M. Lesourd, and P. Natarajan also acknowledge the support of the Black Hole Initiative, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation (although the opinions expressed in this work are those of the authors and do not necessarily reflect the views of these Foundations). J. Elder also acknowledges the support of the "Inductive Metaphysics" project funded by the Deutsche Forschungsgemeinschaft (DFG), Research Unit FOR 2495 (specifically subproject B6: "The Role of Inference to the Best Explanation in the Discovery of Gravitational Waves"). N.C.M. Martens, H. Sorgner, and A. Wuthrich's contribution was made possible by funding from the DFG (FOR 2063)/FWF (I 4410-G) Research Unit "Epistemology of the LHC", and A. Wuthrich's contribution furthermore by funding from the European Union (ERC, Project NEPI, No. 101044932). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
Not applicable.
We want to recognize the early and important contributions of T. Nichols, N. Conway (on outreach), A. Raymond, G. Fitzpatrick, M. Johnson (on technical siting) to the formation of the HPC working group, which in turn built on the already long-running BHI Foundations seminar (with many thanks e.g., to F. Azhar, M. Lesourd, E. Curiel and the participants of that seminar). In framing the scope of the still-developing Siting focus group that will report in subsequent publications, the Siting Workshop conveners and framers, including A. Thresher and P. Natarajan (later joined by D. Palumbo), thank the presenters at the first Siting Workshop which has helped guide subsequent developments: C. Prescod-Weinstein, K. Kamelamela, H. Nielson, M. Johnson, K. Fox, J. Havstad, T. Nichols, R. Chiaravalloti, S. Doeleman, G. Fitzpatrick, J. Houston, A. Oppenheimer. We would like to thank Jonas Enander, Luis Reyes-Galindo, Mike Schneider & Jeroen van Dongen for their internal review of this white paper. We are grateful for valuable discussions with the attendees of the HPC Kick-Off Workshop (Black Hole Initiative, Harvard, Feb-March 2021), with the attendees of the ngEHT meeting (Granada, June 2022), and with the other members of the HPC working group.
The authors declare no conflict of interest.
## Notes
* 1 A very helpful framing of the history of general relativity can be found in [2]. On Einstein's special theory of relativity, focusing on his redefinition of simultaneity, see [3]. On the eclipse expedition of 1919 and its surround--as a historical example of observational history, see [4; 5]. On Einstein's own trajectory to general relativity, see [6].
* 2 On the philosophically-inflected work of Einstein, see, as an entree into the literature, [8; 9; 10; 11; 12]; and for a launch into the philosophy in Einstein's physics see [9; 13]. Of books on the philosophy of spacetime, Earman's have been a grounding point of many discussions [14; 15], as has the (physics-based) lapidary take on general relativity by Wald [16]. For a fine example of a more recent conceptual analysis, see [17].
* 3 On the long-term history of relativity as it opened up into the science of black holes in particular, see [18].
* 4 See ([20], Sections 4.4 and 9) for discussion of "dynamic imaging", which results in a movie of the source (i.e., a series of images or frames) instead of a single image.
* 5 Two excellent doctoral dissertations offer fine-grained analysis of the mountaintop dispute, and include a wide range of further references. Swanner [22] focuses on the triply conflicting astronomical, environmental and indigenous narratives that collided at Mt. Graham, Mauna Kea, and Kitt Peak; Salazar [23] addresses the Kanaka rights claim, specifically addressing the Thirty Meter Telescope (TMT), in opposition to a framing of the dispute as one of "stakeholders" or a "multicultural" ideal. Swanner focuses on Mauna Kea in a subsequent article, also on the TMT [24]. For an important current Hawaiian-led impact assessment of the TMT including additional references, see Kahanamoku et al. [25]. Many further references across a wide cross-disciplinary range including archaeology, biology, among others, will be given in a subsequent paper directed toward siting.
* 6 Highlighting the environmental, social, experimental, and ethical implications of locating scientific facilities through a robust history of locating LIGO's sites, see Nichols, T. [26; 27]
* 7 If "secure" is understood in terms of degrees of belief (expressed by some function satisfying the Kolmogorov axioms of probability), then "boost in confidence" can be understood as (something like) the statement that the conjunction \(E_{1}\&\ldots\&E_{n}\) confirms \(R\) to a greater extent than \(E_{i}\) alone, for any \(i\); where \(R\) is the result, and \(E_{i}\) are lines of evidence.
* 8 Here we retain Orzack and Sober's terminology, describing models as "true" or "correct". Note, however, that this terminology is controversial (see Section 3.3), with some recent philosophical treatments of models suggesting that models themselves are neither true nor false.
* 9 On the contrast between inclusive and selective instrumental demonstration in particle physics, see Galison [45].
* 10 Or, in the context of a positive cosmological constant (see Section 3.2.2), perhaps instead one assumes a Kerr-de Sitter (or Kerr-Newman-de Sitter) metric. A good recent discussion of black holes with positive cosmological constant is in ([82], ch.5). One way to give these metrics is by writing them in Boyer-Lindquist coordinates, including some functions \(\delta\) and \(\sigma\), which are functions of radius, spin, mass, and \(\Lambda\). The mass read off from such a solution is the same as the mass of the Kerr metric.
* 11 ADM stands for Richard Arnowitt, Stanley Deser and Charles W. Misner, authors of the Hamiltonian formulation of general relativity known as the ADM formalism, within which the ADM quantities are defined.
* 12 This perspective also has implications for how we think about the use of robustness reasoning discussed in Section 2.2.
* 13 For instance, it is well known that the more authors a scientific paper has, the more conservative the claims in the paper may be, and the longer (on average) the paper, as well as its title, tend to be [96]. Single-authored blogs tend to be more readable than blogs authored by two authors, as measured by the Flesch readability score, despite no difference in average sentence length [97]. If this can be extrapolated to journal papers with large numbers of authors, the ngEHT may want to experiment with breaking up papers into separate papers, each of which is written by a smaller set of authors, and/or for the writing to be done by the smallest possible number of people with other members of the project providing input in other ways/at other stages (e.g., everyone is involved in outlining the structure of the paper and the eventual editing, but not in the writing process in between). The latest report by the Intergovernmental Panel on Climate Change (IPCC) provides a model of such a practice. A first draft by one of their working groups (WG1) was written by just the working group, comprising 240 scientists (Assessment Report [AR] 1 WG1 IPCC, 2021). After this, a much larger number of scientists from around the world provided comments that were incorporated into subsequent drafts. The ngEHT could consider writing papers following this model, scaled down according to the smaller number of scientists involved.
* 14 On the historical contingency of our notion of fact, see [98; 99; 100; 101].
* 15 On the role of 'sigma's in modern physics, see [102].
* 16 On the practice of averaging over black hole images as epistemic practice, see [103; 104].
* 17 Work in judgment aggregation theory highlights the impact these relations can have on the consistency of the group attitude, see [106]. See [107]'s distinction between the "commitment" and "distributed" models of group knowledge.
The distinction between belief and acceptance can also help us conceptualize the role of idealization in science, as discussed in Section 3.3; see, for instance, [115].
* Compare, e.g., with discussions on including string theorists as physicists [117; 118; 119].
* Interesting in this regard is the current ngEHT analysis challenge, where part of the collaboration creates a training set from simulated signals with noise added to them (and potentially also some fake signals), with another part of the collaboration honing their analysis tools on this training data without knowing how it was created.
* On the LIGO Scientific Collaboration Charter [161].
|
2303.09591 | Zero Curvature Condition for Quantum Criticality | Quantum criticality typically lies outside the bounds of the conventional
Landau paradigm. Despite its significance, there is currently no generic
paradigm to replace the Landau theory for quantum phase transition, partly due
to the rich variety of quantum orders. In this paper, we present a new paradigm
of quantum criticality based on a novel geometric approach. Instead of focusing
on microscopic orderings, our approach centers on the competition of commuting
operators, which can be best investigated through the boundary geometry of
their expectation values. We demonstrate that the quantum phase transition
occurs precisely at the zero-curvature point on this boundary, which implies
the competing operators are maximally commuting at the critical point. | Chaoming Song | 2023-03-16T18:35:19Z | http://arxiv.org/abs/2303.09591v1 | # Zero Curvature Condition for Quantum Criticality
###### Abstract
Quantum criticality typically lies outside the bounds of the conventional Landau paradigm. Despite its significance, there is currently no generic paradigm to replace the Landau theory for quantum phase transition, partly due to the rich variety of quantum orders. In this paper, we present a new paradigm of quantum criticality based on a novel geometric approach. Instead of focusing on microscopic orderings, our approach centers on the competition of commuting operators, which can be best investigated through the boundary geometry of their expectation values. We demonstrate that the quantum phase transition occurs precisely at the zero-curvature point on this boundary, which implies the competing operators are maximally commuting at the critical point.
In recent years, there has been growing interest in the study of quantum criticality at zero temperature, revealing a wealth of fascinating phenomena absent in classical phase transitions [1; 2; 3; 4]. While classical phase transitions are well understood and described by the Landau-Ginzburg-Wilson (LGW) theory, quantum criticality poses a greater challenge due to quantum entanglement [5]. In the LGW paradigm, the phase transition is typically characterized by local order parameters [6], which do not capture the essential physics of many quantum systems. For instance, quantum systems can undergo a topological phase transition without any change in local order parameters [7; 8; 9; 10; 11; 12; 13]. Another example is the deconfined quantum criticality [14; 15; 16], where the system competes over multiple orders but undergoes a continuous transition, contrary to the predictions of LGW of a first-order transition or phase coexistence.
Despite various models demonstrating the non-Landau nature of quantum criticality, there is currently no generic paradigm to replace the LGW theory for quantum criticality. This is partly due to the rich variety of microscopic orders discovered in quantum systems, which poses a challenge for identifying a universal framework that applies to all systems. However, a major distinction between quantum and classical systems lies in their non-commutativity. One may suspect that quantum criticality arises from the non-commutative nature of physical observables, which gives rise to features of the quantum phase transition that lie beyond the scope of classical theory.
In this Letter, we present a novel approach to understanding quantum phase transitions. Instead of focusing on microscopic details of particular orders, we argue that quantum criticality arises from the non-commutativity of competing operators. We demonstrate that this competition can be investigated through the boundary geometry of their expectation values. The quantum critical point corresponds to the zero-curvature point of this boundary, which can be interpreted as maximally commuting and is closely linked to integrability.
We start with a generic system of two competing operators, with the Hamiltonian
\[H(\lambda_{1},\lambda_{2})=\lambda_{1}H_{1}+\lambda_{2}H_{2}, \tag{1}\]
where Hermitian operators \(H_{1}\) and \(H_{2}\) are non-commutative, and \(\lambda_{1}\) and \(\lambda_{2}\) are real parameters. Equation (1) describes a competition between two non-commutative operators, potentially giving rise to the quantum phase transition. For any given wavefunction \(\psi\), there are two natural order parameters \(\langle H_{1}\rangle\equiv\frac{\langle\psi|H_{1}|\psi\rangle}{\langle\psi| \psi\rangle}\) and \(\langle H_{2}\rangle\equiv\frac{\langle\psi|H_{2}|\psi\rangle}{\langle\psi| \psi\rangle}\), which introduces a mapping
\[\psi\rightarrow\left(\langle H_{1}\rangle,\langle H_{2}\rangle\right), \tag{2}\]
from the system Hilbert space to a two-dimensional moduli space \(\mathcal{M}\) of order parameters. For finite-dimensional Hilbert space, the moduli \(\mathcal{M}\) is generally a semialgebraic set. Figure 1 illustrates that the geometry of \(\mathcal{M}\) is enclosed by a non-trivial boundary \(\partial\mathcal{M}\). To understand this boundary better, we consider the energy functional
\[E[\psi]=\lambda_{1}\langle H_{1}\rangle+\lambda_{2}\langle H_{2}\rangle. \tag{3}\]
The variational principle \(\delta E[\psi]=0\) implies that the stationary points of Eq. (3) correspond to the eigenstates \(\psi\) of the Hamiltonian (1), satisfying
\[\lambda_{1}\delta\langle H_{1}\rangle+\lambda_{2}\delta\langle H_{2}\rangle=0, \tag{4}\]
which corresponds to the singular set \(\mathcal{M}_{e}\) of the mapping (2), where the global minimum corresponds to the boundary \(\partial\mathcal{M}\subset\mathcal{M}_{e}\). Each point on \(\partial\mathcal{M}\) corresponds to expectation values of the ground state with fixed parameter \(\mathbf{\lambda}\equiv(\lambda_{1},\lambda_{2})\) in Eq. (1). Moreover, Eq (4) implies that the vector \(\mathbf{\lambda}\) is perpendicular to the boundary \(\partial\mathcal{M}\) and more generally \(\mathcal{M}_{e}\), being its normal vector, which determines uniquely the ground state expectations \(\langle H_{1}\rangle_{\mathbf{\lambda}}\) and \(\langle H_{2}\rangle_{\mathbf{\lambda}}\).
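As a minimal numerical illustration of this correspondence, one can sweep the direction of \(\mathbf{\lambda}\), diagonalize the resulting Hamiltonian, and record the ground-state expectation values; in the sketch below, two randomly generated Hermitian matrices stand in for \(H_{1}\) and \(H_{2}\), and the collected points trace out the convex boundary \(\partial\mathcal{M}\).

```python
# Sketch: trace the ground-state boundary of the moduli space M by sweeping
# the normal direction lambda = (cos a, sin a) for a generic pair of
# non-commuting Hermitian operators (random matrices used for illustration).
import numpy as np

rng = np.random.default_rng(0)
dim = 6

def random_hermitian(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

H1, H2 = random_hermitian(dim), random_hermitian(dim)

boundary = []
for a in np.linspace(0, 2 * np.pi, 400, endpoint=False):
    lam1, lam2 = np.cos(a), np.sin(a)
    vals, vecs = np.linalg.eigh(lam1 * H1 + lam2 * H2)
    psi = vecs[:, 0]                       # ground state for this direction
    boundary.append((np.real(psi.conj() @ H1 @ psi),
                     np.real(psi.conj() @ H2 @ psi)))

boundary = np.array(boundary)
print(boundary[:5])   # points on the convex boundary of M
```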
Our previous works [17; 18] have shown that the eigenstate moduli \(\mathcal{M}_{e}\) is a one-dimensional algebraic variety for finite-dimensional Hilbert space, determined by a single equation
\[f(\langle H_{1}\rangle,\langle H_{2}\rangle)=0, \tag{5}\]
where \(f\) is a bivariate polynomial that is determined implicitly by Eq. (4). The algebraic relation (5) encodes all information of the Hamiltonian family (1). Below we will focus primarily on the boundary \(\partial\mathcal{M}\), that is, the ground state expectations.
The boundedness of the moduli \(\mathcal{M}\) can be seen as a generalization of Heisenberg's uncertainty principle, whereby the non-trivial geometry of the boundary \(\partial\mathcal{M}\) reflects a highly non-classical phenomenon. One of the most notable features of the moduli \(\mathcal{M}\) is their convexity, which we illustrate through a simple proof. Consider two arbitrary points \((x_{1},y_{1})\) and \((x_{2},y_{2})\in\mathcal{M}\), where \(x_{1}=\langle\psi_{1}|H_{1}|\psi_{1}\rangle\) and \(y_{1}=\langle\psi_{1}|H_{2}|\psi_{1}\rangle\) and \(x_{2}=\langle\psi_{2}|H_{1}|\psi_{2}\rangle\) and \(y_{2}=\langle\psi_{2}|H_{2}|\psi_{2}\rangle\), with both \(\psi_{1}\) and \(\psi_{2}\) normalized. We interpolate the wavefunctions \(\psi(p)\) between \(\psi_{1}\) and \(\psi_{2}\), as
\[\psi(p)=\sqrt{p}\psi_{1}+\sqrt{1-p}e^{i\theta}\psi_{2}, \tag{6}\]
with \(0\leq p\leq 1\), where the phase \(\theta\) is chosen such that the expectation values \(\left(\frac{\langle\psi|H_{1}|\psi\rangle}{\langle\psi|\psi\rangle},\frac{ \langle\psi|H_{2}|\psi\rangle}{\langle\psi|\psi\rangle}\right)\) interpolate linearly between points \((x_{1},y_{1})\) and \((x_{2},y_{2})\). We find that this requirement is fulfilled if we choose the phase \(\theta\) to be of the form
\[\theta=\arg\left(x_{12}\Delta y-y_{12}\Delta x-n_{12}\Delta n\right)+\frac{ \pi}{2}, \tag{7}\]
where \(\Delta x\equiv x_{2}-x_{1}\), \(\Delta y\equiv y_{2}-y_{1}\) and \(\Delta n\equiv y_{2}x_{1}-x_{2}y_{1}\), and \(x_{12}\equiv\langle\psi_{1}|H_{1}|\psi_{2}\rangle\), \(y_{12}\equiv\langle\psi_{1}|H_{2}|\psi_{2}\rangle\) and \(n_{12}\equiv\langle\psi_{1}|\psi_{2}\rangle\). Since \(\psi(p=1)=\psi_{1}\) and \(\psi(p=0)=e^{i\theta}\psi_{2}\) (which has the same expectation values as \(\psi_{2}\)), the intermediate value theorem guarantees that any point lying on the line segment between \((x_{1},y_{1})\) and \((x_{2},y_{2})\) must have a corresponding \(p\) value. This completes our proof of convexity.
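The interpolation argument is easy to check numerically. In the illustrative sketch below, random Hermitian matrices and random states are used, and the phase \(\theta\) is obtained directly from the requirement that the cross terms stay collinear with the segment, which is the condition Eq. (7) encodes.

```python
# Sketch: numerical check that psi(p) interpolates linearly between two
# expectation-value points, for random Hermitian H1, H2 and random states.
import numpy as np

rng = np.random.default_rng(1)
d = 5

def rand_herm():
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

def rand_state():
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def ev(psi, H):
    return np.real(psi.conj() @ H @ psi) / np.real(psi.conj() @ psi)

H1, H2 = rand_herm(), rand_herm()
psi1, psi2 = rand_state(), rand_state()
x1, y1, x2, y2 = ev(psi1, H1), ev(psi1, H2), ev(psi2, H1), ev(psi2, H2)
dx, dy, dn = x2 - x1, y2 - y1, y2 * x1 - x2 * y1
x12, y12, n12 = psi1.conj() @ H1 @ psi2, psi1.conj() @ H2 @ psi2, psi1.conj() @ psi2

# Phase chosen so that the collinearity residual of the cross terms vanishes.
theta = np.pi / 2 - np.angle(x12 * dy - y12 * dx - n12 * dn)

for p in np.linspace(0, 1, 6):
    psi = np.sqrt(p) * psi1 + np.sqrt(1 - p) * np.exp(1j * theta) * psi2
    X, Y = ev(psi, H1), ev(psi, H2)
    print(f"p = {p:.1f}   off-line residual: {(X - x1) * dy - (Y - y1) * dx:+.1e}")
```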
An important consequence of the convexity of \(\mathcal{M}\) is that the Gaussian curvature of its boundary \(\partial\mathcal{M}\),
\[\kappa\geq 0, \tag{8}\]
being non-negative wherever it is well-defined. This provides valuable insight into the geometry of the competing operators. It is important to note that this property holds only for the ground state, thus emphasizing the fundamental differences between the ground state, which governs the zero-temperature quantum phase transition, and the excited states that govern the finite-temperature conventional phase transition.
The existence of a quantum critical point can be inferred from a singularity on the boundary \(\partial\mathcal{M}\). Below we focus on the thermodynamic limit, after both \(\langle H_{1}\rangle\) and \(\langle H_{2}\rangle\) have been rescaled by the system size. In this limit, the functional form in Eq. (5) may not be analytic, indicating the presence of a singularity. However, the convexity of \(\mathcal{M}\) imposes a constraint on the type of singularity that can occur. For instance, while a cusp may appear for excited states, it cannot exist for ground states.
Figure 1: Illustration of the moduli space (grey domain) for the expectation values \(\langle H_{1}\rangle\) and \(\langle H_{2}\rangle\). The solid curve represents the boundary \(\partial\mathcal{M}\), corresponding to the ground state. The dashed curve corresponds to the excited state and is not necessarily convex.

Figure 2: (a) Type I and (b) Type II quantum critical points (QC). The dashed curves represent the first excited states.

At the critical point, there are two types of singularities on \(\partial\mathcal{M}\), and we will first focus on type I. While the boundary \(\partial\mathcal{M}\) is convex, the geometry enclosed by Eq. (5) can have a concave region. Figure 2a illustrates such a phenomenon, which occurs when the ground states (solid curves) connect to the first excited states (dashed curves) at two different points \((\langle H_{1}\rangle^{*},\langle H_{2}\rangle^{*})\) and \((\langle H_{1}\rangle^{**},\langle H_{2}\rangle^{**})\), resulting in a gap closing. The convexity property allows the true ground states to be lower than the one predicted by Eq. (5) by interpolating between these two points (shown by the red line). The concave region corresponds to the excited states, while the ground-state boundary remains convex. The critical region corresponds to a single critical value of the parameter \(\mathbf{\lambda}_{c}=(\lambda_{1c},\lambda_{2c})\), as the entire line segment corresponds to the same normal vector. This leads to non-zero changes in the order parameters \(\Delta\langle H_{1}\rangle=\langle H_{1}\rangle^{**}-\langle H_{1}\rangle^{*}\) and \(\Delta\langle H_{2}\rangle=\langle H_{2}\rangle^{**}-\langle H_{2}\rangle^{*}\), which causes a discontinuous phase transition. In general, the energy change \(\Delta E=\lambda_{1c}\Delta\langle H_{1}\rangle+\lambda_{2c}\Delta\langle H_{2}\rangle\) is also discontinuous. However, there are cases where \(\Delta E=0\). This often occurs when the system has additional ground-state degeneracy at \(\mathbf{\lambda}_{c}\), with \(E^{*}=E^{**}\). Notably, when the boundary falls into the critical region, the curvature vanishes, as
\[\kappa_{c}=0. \tag{9}\]
The disappearance of curvature in the critical region is a key feature of the quantum phase transition. It indicates that the boundary \(\partial\mathcal{M}\) locally flattens onto its tangent line at the critical point, signifying that the system is undergoing a significant change in behavior.
The type II singularity behaves in a similar fashion to type I, but the critical region now merges into a single point, as illustrated in Fig. 2b. The zero-curvature condition (9) also holds in this case, with an associated gap closing. However, unlike the type I singularity, here \(\langle H_{1}\rangle\) and \(\langle H_{2}\rangle\) change continuously, and only their derivatives diverge at the critical point, resulting in a continuous phase transition.
To relate our theory to the Ehrenfest classification, we assume \(\lambda_{1}>0\) without loss of generality. Since the parameters are projective, the system can be parameterized by the coupling constant \(g=\lambda_{2}/\lambda_{1}\). Substituting Eq. (3) into Eq. (4) leads to \(\langle H_{1}\rangle=E(g)-gE^{\prime}(g)\) and \(\langle H_{2}\rangle=E^{\prime}(g)\), which are the Hellmann-Feynman theorems. Substituting these into the Gaussian curvature of a plane curve, \(\kappa\equiv\frac{\langle H_{1}\rangle^{\prime}\langle H_{2}\rangle^{\prime\prime}-\langle H_{2}\rangle^{\prime}\langle H_{1}\rangle^{\prime\prime}}{\left(\langle H_{1}\rangle^{\prime 2}+\langle H_{2}\rangle^{\prime 2}\right)^{3/2}}\), we obtain

\[\kappa=-\frac{1}{(g^{2}+1)^{3/2}E^{\prime\prime}(g)}, \tag{10}\]

so the zero-curvature condition (9) corresponds to a divergence of the second derivative of the energy. For the non-degenerate case, second-order perturbation theory gives \(E^{\prime\prime}(g)=2\sum_{k>0}\frac{|\langle k|H_{2}|0\rangle|^{2}}{E_{0}-E_{k}}\leq 0\), in line with the non-negativity of the curvature. In particular, near the critical point the curvature \(\kappa\) scales linearly with the energy gap \(\Delta=E_{1}-E_{0}\) and vanishes when the gap closes. However, the convexity we proved is more general and holds even when there is additional degeneracy.
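These relations can be checked numerically. The sketch below uses exact diagonalization and finite differences on a randomly generated two-operator family (an illustrative choice of matrices and grid) to confirm that \(E^{\prime\prime}(g)\leq 0\) and that the boundary curvature is non-negative.

```python
# Sketch: E''(g) <= 0 and kappa >= 0 for a generic two-operator family,
# checked by exact diagonalization plus finite differences.
import numpy as np

rng = np.random.default_rng(2)
d = 6

def rand_herm():
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

H1, H2 = rand_herm(), rand_herm()

def ground(g):
    vals, vecs = np.linalg.eigh(H1 + g * H2)     # lambda1 = 1, lambda2 = g
    psi = vecs[:, 0]
    return vals[0], np.real(psi.conj() @ H1 @ psi), np.real(psi.conj() @ H2 @ psi)

gs = np.linspace(-2.0, 2.0, 2001)
E, h1, h2 = np.array([ground(g) for g in gs]).T

Epp = np.gradient(np.gradient(E, gs), gs)
print("max E''(g):", Epp[5:-5].max())            # expected <= 0 up to numerics

h1p, h2p = np.gradient(h1, gs), np.gradient(h2, gs)
h1pp, h2pp = np.gradient(h1p, gs), np.gradient(h2p, gs)
kappa = (h1p * h2pp - h2p * h1pp) / (h1p**2 + h2p**2) ** 1.5
print("min curvature:", kappa[5:-5].min())       # expected >= 0 up to numerics
```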
The type I and II transitions roughly correspond to first- and second-order phase transitions in the Ehrenfest sense. However, there is a fundamental difference between conventional and quantum phase transitions classified here. Unlike conventional phase transitions, quantum phase transitions do not necessarily occur due to broken symmetry. Instead, they result from the competition between non-commuting operators. Our geometric approach places additional constraints on quantum phase transitions. For example, both Type I and II transitions occur only at a fixed normal direction, implying a single critical point, \(\mathbf{\lambda}_{c}\), instead of an intermediate coexistence region as predicted by LGW. While it is possible that multiple transitions exist corresponding to multiple zero-curvature points, a continuous interpolation between two normal directions would imply a non-vanishing curvature in this region.
Below we will apply our theory to several systems. One of the simplest examples of a quantum phase transition is perhaps the one-dimensional transverse field Ising model (TFIM) [19], which is described by the Hamiltonian
\[H_{TFIM}=-J\sum_{i}Z_{i}Z_{i+1}-h\sum_{i}X_{i}, \tag{11}\]
where \(X_{i}\) and \(Z_{i}\) are Pauli matrices that act on the spin variables at site \(i\), and \(J\) and \(h\) are the coupling constants representing the strength of the spin interaction and the transverse magnetic field, respectively. In this case, \(H_{1}=-\frac{1}{L}\sum_{i}Z_{i}Z_{i+1}\) and \(H_{2}=-\frac{1}{L}\sum_{i}X_{i}\), where \(L\) is the number of sites.
The TFIM follows the conventional Landau paradigm of symmetry breaking, where the magnetization \(m(g)\equiv-\langle H_{2}\rangle\) serves as the order parameter, with \(g\equiv h/J\). The model displays Kramers-Wannier duality, which interchanges \(H_{1}\) with \(H_{2}\) and \(J\) with \(h\), implying that \(\langle H_{1}\rangle=-m(1/g)\) represents the magnetization of the dual system. The magnetization takes the exact form
\[m(g)\equiv\frac{1}{\pi}\int_{0}^{\pi}\frac{1+g\cos k}{\sqrt{1+g^{2}+2g\cos k}}dk \tag{12}\]
in the thermodynamic limit [19]. Figure 3a plots \(\langle H_{1}\rangle\) versus \(\langle H_{2}\rangle\) as the boundary of the moduli space \(\mathcal{M}\). The system undergoes a quantum phase transition when \(g_{c}=1\) (red dot). The Gaussian curvature \(\kappa\) can be computed directly as
\[\kappa=\frac{\pi g^{2}(g+1)}{\left(g^{2}+1\right)^{3/2}\left(\left(g^{2}+1 \right)K\left(\frac{4g}{(g+1)^{2}}\right)-(g+1)^{2}E\left(\frac{4g}{(g+1)^{2} }\right)\right)}, \tag{13}\]
where \(K(x)\) and \(E(x)\) represent complete elliptic integrals of the first and second kinds, respectively. Figure 3b plots the dependence of \(\kappa\) on the coupling constant \(g\), showing that it vanishes at the critical point as predicted by Eq. (9). Additionally, the curvature satisfies the self-dual relation \(\kappa(g)=\kappa(1/g)\).
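These expressions can be checked directly; the illustrative sketch below evaluates Eq. (12) by numerical quadrature, builds the boundary from the duality relation quoted above, and estimates the curvature by finite differences (the grid and sample points are arbitrary choices).

```python
# Sketch: curvature of the TFIM ground-state boundary from Eq. (12).
import numpy as np
from scipy.integrate import quad

def m(g):
    f = lambda k: (1 + g * np.cos(k)) / np.sqrt(1 + g**2 + 2 * g * np.cos(k))
    return quad(f, 0.0, np.pi)[0] / np.pi

gs = np.linspace(0.2, 5.0, 961)            # coupling g = h/J
h1 = np.array([-m(1.0 / g) for g in gs])   # <H1> = -m(1/g) (dual magnetization)
h2 = np.array([-m(g) for g in gs])         # <H2> = -m(g)

h1p, h2p = np.gradient(h1, gs), np.gradient(h2, gs)
h1pp, h2pp = np.gradient(h1p, gs), np.gradient(h2p, gs)
kappa = (h1p * h2pp - h2p * h1pp) / (h1p**2 + h2p**2) ** 1.5

at = lambda g: kappa[np.argmin(np.abs(gs - g))]
for g in (0.5, 0.8, 1.05, 1.25, 2.0):
    print(f"g = {g:4.2f}   kappa = {at(g):.4f}")
# kappa stays non-negative, dips toward zero around the self-dual point g = 1,
# and agrees at dual pairs such as (0.8, 1.25) and (0.5, 2.0).
```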
Figure 3: (a) The moduli space and (b) curvature for both TFIM and TC.

We next turn our attention to the toric code (TC), a model defined on a two-dimensional square lattice that exhibits unconventional topological order [20; 21]. The Hamiltonian describing the TC in a uniform magnetic field is given by
\[H_{TC}=-J\left(\sum_{v}A_{v}+\sum_{p}B_{p}\right)-h\sum_{e}Z_{e}, \tag{14}\]
where \(A_{v}\equiv\prod_{e\in v}X_{e}\) and \(B_{p}\equiv\prod_{e\in p}Z_{e}\) are stabilizer operators for the spins around each vertex \(v\) and plaquette \(p\), respectively. Here \(X_{e}\) and \(Z_{e}\) correspond to Pauli matrices acting on the edges of the lattice. The TC model (14) undergoes a quantum phase transition of the topological orders. In contrast to conventional Landau phase transitions, the topological phase transition in the toric code model cannot be characterized by any local order parameters. Remarkably, the transverse-field Ising model (TFIM) can be transformed into the TC model by mapping the magnetization \(\sum_{i}X_{i}\) and the spin interaction operators \(\sum Z_{i}Z_{i+1}\) to the operators \(\sum_{v}A_{v}\) and \(\sum_{e}Z_{e}\), respectively [22; 23]. The operator \(\sum_{p}B_{p}\) is related to the anyon excitation and is a c-number for the ground state. However, this mapping is non-local, transforming the local spin operator \(X_{i}\) in the TFIM into a string operator in the TC, which reflects the non-Landau nature of the quantum criticality of the TC.
Despite their fundamental microscopic differences, the TFIM and the TC share an identical non-commutative nature arising from two competing operators \(H_{1}\) and \(H_{2}\) for the ground state. As a result, the TC model exhibits the same \(\langle H_{1}\rangle-\langle H_{2}\rangle\) diagram and curvature function as the TFIM, as shown in Fig. 3. This observation suggests that the quantum criticality may be largely independent of microscopic details and instead is determined by the degree of non-commutativity between competing operators and quantified by the geometry of the boundary \(\partial\mathcal{M}\).
To better understand the zero curvature condition, we investigate the geometry of \(\mathcal{M}\) when two operators \(H_{1}\) and \(H_{2}\) commute. However, to provide a more general analysis, we will consider a Hamiltonian of the form
\[H=\lambda_{1}H_{1}+\ldots+\lambda_{n}H_{n}, \tag{15}\]
where \(H_{i}\) are \(n\) independent Hermitian operators of a finite-dimensional Hilbert space \(\mathcal{H}\) and \(\lambda_{i}\) are real coefficients. This defines an \(n\)-dimensional real vector space of Hermitian operators. If all elements of the Hamiltonian commute, i.e., \([H_{i},H_{j}]=0\) for all \(i,j\), the system is completely integrable, with maximal \(n=\dim\mathcal{H}\) equal to the dimension of the Hilbert space. Without loss of generality, we may let \(H_{n}=I\), and \(\lambda_{n}=-E\) such that the corresponding Schrodinger equation becomes \(H\psi=0\). As all the operators \(H_{i}\) commute, they can be simultaneously diagonalized as \(H_{i}=\text{diag}(\Lambda_{i,1},\ldots,\Lambda_{i,N})\), where \(\Lambda_{i,j}\) represents the \(j\)-th eigenvalue of the operator \(H_{i}\). Consequently, \(\langle\psi|H_{i}|\psi\rangle=\sum_{j=1}^{N}\Lambda_{i,j}|\psi_{j}|^{2}\). Let \(\tilde{\Lambda}\) be the inverse of the matrix \(\Lambda\); then we have \(\sum_{j=1}^{N}\tilde{\Lambda}_{i,j}\langle\psi|H_{j}|\psi\rangle=|\psi_{i}|^{2}\geq 0\). Substituting \(\langle\psi|H_{n}|\psi\rangle=\langle\psi|\psi\rangle\), we obtain a set of \(n\) linear inequalities
\[\sum_{j=1}^{n-1}\tilde{\Lambda}_{i,j}\langle H_{j}\rangle+\tilde{\Lambda}_{i, n}\geq 0, \tag{16}\]
where \(\langle H_{j}\rangle=\frac{\langle\psi|H_{j}|\psi\rangle}{\langle\psi|\psi\rangle}\) denotes the expectation value of \(H_{j}\). This suggests that \(\mathcal{M}\) is a \(n-1\) dimensional simplex, as illustrated in Fig. 4. The boundary \(\partial\mathcal{M}\) is given by the equality of Eq. (16), implying that Eq. (5) factors into a product of linear combinations of \(\langle H_{i}\rangle\).
The geometry of \(\mathcal{M}\) for any \(n<\dim\mathcal{H}\) can be obtained by projecting the simplex (16) onto the corresponding \(n\)-dimensional subspace, resulting in a polytope. For instance, when \(H_{1}\) and \(H_{2}\) commute in Eq. (1), the corresponding moduli space \(\mathcal{M}\) is a polygon. Remarkably, the curvature \(\kappa\) of the polygon is zero almost everywhere except at the vertices. However, for non-commuting operators, one would expect a curved boundary. Nonetheless, the zero-curvature condition implies that the system at the quantum critical point is close to an integrable system, where the operators \(H_{1}\) and \(H_{2}\) are nearly commuting.
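A small numerical illustration of the commuting case: for two simultaneously diagonal operators (random eigenvalues used here as an arbitrary example), the expectation values of random states always fall inside the polygon spanned by the paired eigenvalues, so the boundary is flat except at its vertices.

```python
# Sketch: for commuting (simultaneously diagonal) H1, H2, the moduli space is
# the convex hull of the paired eigenvalues -- a polygon with straight edges.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
N = 6
lam1 = rng.normal(size=N)      # eigenvalues of H1
lam2 = rng.normal(size=N)      # eigenvalues of H2 in the same eigenbasis

pts = []
for _ in range(2000):
    psi = rng.normal(size=N) + 1j * rng.normal(size=N)
    w = np.abs(psi) ** 2
    w /= w.sum()               # |psi_j|^2 weights: a convex combination
    pts.append((w @ lam1, w @ lam2))
pts = np.array(pts)

hull = Delaunay(np.column_stack([lam1, lam2]))
print("all sampled points inside the polygon:",
      bool(np.all(hull.find_simplex(pts) >= 0)))
```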
Figure 4: The moduli space \(\mathcal{M}\) (grey domain) for the integrable model.
In conclusion, we have presented a new paradigm for understanding quantum criticality distinct from conventional Landau phase transitions. Instead of focusing on microscopic orderings, we argue that the competition of non-commuting operators is the driving force behind the quantum phase transition, which is best investigated through the boundary geometry of their expectation values. Based on this approach, we show that both discontinuous and continuous quantum phase transitions are associated with a vanishing curvature on the moduli boundary, which implies integrability and maximal commutativity near the critical point. Our approach presents several avenues for future research and challenges. For instance, it is straightforward to extend our theory from two operators to arbitrary sets of operators, where the zero-curvature condition links to the phase boundary of the multi-dimensional phase diagram. However, the eigenstate moduli have a richer geometry, providing an intriguing avenue for further investigation. Overall, our approach offers a novel perspective on quantum theory, particularly for many-body systems, and has the potential to yield significant implications for strongly correlated systems.
|
2305.12559 | Multi-scale information content measurement method based on Shannon
information | In this paper, we present a new multi-scale information content calculation
method based on Shannon information (and Shannon entropy). The original method
described by Claude E. Shannon and based on the logarithm of the probability of
elements gives an upper limit to the information content of discrete patterns,
but in many cases (for example, in the case of repeating patterns) it is
inaccurate and does not approximate the true information content of the pattern
well enough. The new mathematical method presented here provides a more
accurate estimate of the (internal) information content of any discrete pattern
based on Shannon's original function. The method is tested on different data
sets and the results are compared with the results of other methods like
compression algorithms. | Zsolt Pocze | 2023-05-21T20:14:03Z | http://arxiv.org/abs/2305.12559v1 | # Multi-scale information content measurement method based on Shannon information
###### Abstract
In this paper, we present a new multi-scale information content calculation method based on Shannon information (and Shannon entropy). The original method described by Claude E. Shannon and based on the logarithm of the probability of elements gives an upper limit to the information content of discrete patterns, but in many cases (for example, in the case of repeating patterns) it is inaccurate and does not approximate the true information content of the pattern well enough. The new mathematical method presented here provides a more accurate estimate of the (internal) information content of any discrete pattern based on Shannon's original function. The method is tested on different data sets and the results are compared with the results of other methods like compression algorithms.
_Volgyerdo Nonprofit Kft._, Nagybakonak, HUN, [email protected], 2023
## 1 Introduction
Traditionally, Shannon's information theory [15] has been used to measure the information content of samples. Shannon information, as defined by Claude E. Shannon, is the degree of uncertainty or surprise associated with a given outcome in a set of possible outcomes. Shannon entropy, which is the expected value of Shannon information, is used to quantify the average information content of a discrete sample or message. It serves as a basic concept in information theory and is widely used in communication systems and data compression.
In some situations, such as repeating patterns, Shannon's original information measurement method does not give sufficiently accurate results because it does not take the structure of the patterns into account; it only looks at certain statistical characteristics of them. To solve this problem, this paper presents a new multiscale information content calculation method based on Shannon's original principles. By refining the computational approach, our method offers a more accurate estimate of the internal information content of discrete samples, regardless of their nature.
There are several other methods for measuring the information content of patterns, such as Kolmogorov complexity [8], randomness [11], and compression complexity. The common property of these methods is that they are all suitable for determining and understanding the information content of patterns with some accuracy, and they therefore provide a suitable basis of comparison for validating newer methods.
To verify the effectiveness of our new method, it is applied to various data sets and compared with compression algorithms. The results show that our proposed method based on Shannon information closely approximates the results measured by other methods while taking a completely different approach.
## 2 Patterns
In this study, we deal with the calculation of the internal quantitative information content of discrete patterns. From the point of view of the calculation of the information content, the nature of the object of the measurement is irrelevant. The information content of events, signals, system states, or data sequences can be calculated since their models (with finite precision) can all be represented as discrete patterns. By moving along a spatial pattern, we get a temporal pattern and vice versa. Therefore, we do not distinguish between spatial and temporal patterns. The basic notation is as follows.
Denote by \(\mathcal{M}(R)\) the set of finite sequences that can be generated from the set \(R\):
\[\mathcal{M}(R)=\{X:\mathbb{N}^{+}\to R\} \tag{1}\]
Let us call the finite sequence \(X\in\mathcal{M}(R)\) a pattern:
\[X=[x_{1},...,x_{N}] \tag{2}\]
Denote the length of the series \(X\):
\[n(X)=N \tag{3}\]
Denote the set of possible values of the series \(X\):
\[R=R_{X}=\{r_{1},r_{2},...,r_{K}\} \tag{4}\]
Let \(f(x)\) denote the number of occurrences of \(x\in R_{X}\) in the series \(X\):
\[f(x)=\sum_{i=1}^{N}[x_{i}=x] \tag{5}\]
Let the relative frequency of any element \(x\in R\) of the pattern \(X\) be:
\[p(x)=f(x)/N \tag{6}\]
Denote the concatenation of \(X_{1}X_{2}...X_{K}\) patterns as:
\[X_{1}X_{2}...X_{K}=\mathop{\parallel}\limits_{i=1}^{K}X_{i} \tag{7}\]
## 3 Information content
The information content can be interpreted intuitively when only the interpretable information content is examined [1]. In this study we examine the amount of the entire internal information content without interpreting it or considering the context.
The information content of a pattern can be characterized by the improbability of individual elements of the pattern (Shannon information [15]), the length of the most concise description of the pattern (Kolmogorov complexity [8]), or the degree of randomness of the pattern [11].
A fundamental difference between Shannon's and Kolmogorov's viewpoints is that Shannon considered only the probabilistic characteristics of the random source of information that created the pattern, ignoring the pattern itself. In contrast, Kolmogorov focused only on the pattern itself [5]. In their definitions, Kolmogorov and Chaitin (somewhat inaccurately) called the pattern with maximum information content random [12].
Information, complexity and randomness have such similar properties that we can reasonably assume that they are essentially approaching the same thing with different methods. It is sufficient to consider that the Shannon information, Kolmogorov complexity and randomness of a pattern consisting of identical elements are all minimal, while in the case of a true random pattern all three values are maximal, and they all assign the highest information value to data sets with maximum entropy [1].
The concepts of entropy and information are often confused [3], so it is important to mention that entropy can also be understood as the average information content per element.
Approached intuitively, the amount of information is a function for which the following conditions are met:
1. The information content of a pattern with zero length or consisting of identical elements is zero.
2. The information content of the pattern consisting of repeating sections is (almost) identical to the information content of the repeating section.
3. A pattern and its reflection have the same information content.
4. The sum of the information content of patterns with disjoint value sets is smaller than the information content of the concatenated pattern.
5. The information content of true random patterns is almost maximal.
Let the information content be the function \(I\) that assigns a non-negative real number to any arbitrary pattern \(X\in\mathcal{M}(R)\):
\[I:\mathcal{M}(R)\rightarrow\mathbb{R}^{+} \tag{8}\]
In addition, the following conditions are met:
1. \(I(X)=0\Leftrightarrow|R_{X}|<2\)
2. \(I(\mathop{\parallel}\limits_{i=1}^{K}X)=I(X)\)
3. \(I(\mathop{\parallel}\limits_{i=1}^{K}X_{i})=I(\mathop{\parallel}\limits_{i=K}^ {1}X_{i})\)
4. \(\bigcap_{i=1}^{K}R_{X_{i}}=\emptyset\Rightarrow I(\mathop{\parallel}\limits_{i=1}^{K}X_{i})>\sum_{i=1}^{K}I(X_{i})\)
5. \(I(X)\leq I(X_{TR}),\forall X\in\mathcal{M}(R),|X|=|X_{TR}|,\) where \(X_{TR}\in\mathcal{M}(R)\) is a true random pattern.
Since any pattern can be described in non-decomposable binary form, the unit of information content should be the bit.
It can be seen that for any pattern \(X\in\mathcal{M}(R)\), if \(N=n(X)\) and \(K=|R|\), then the maximum information content of \(X\) is:
\[I_{MAX}(X)=N\cdot log_{2}(K) \tag{9}\]
That is, \(I(X)\leq I_{MAX}(X)\) for any pattern \(X\in\mathcal{M}(R)\). In the case of a binary pattern, \(I_{MAX}(X)=N\), the length of the pattern, which means that a maximum of \(N\) bits of information (decision) is required to describe the pattern.
If the maximum information content is known, the relative information content can be calculated:
\[I^{(rel)}(X)=I(X)/I_{MAX}(X) \tag{10}\]
## 4 Shannon information
In theory, Kolmogorov complexity would provide a better approximation of the information content of patterns, but it has been proven that it cannot be calculated [5], in contrast to Shannon information [15], which can be calculated efficiently, but approximates the actual information content less well. Shannon information calculates the information content of the pattern based on the expected probability of occurrence (relative frequency) of the elements of the pattern.
The Shannon information of an arbitrary pattern \(X\in\mathcal{M}(R)\):
\[I_{S}(X)=\sum_{i=1}^{N}log_{2}\left(\frac{1}{p(x_{i})}\right) \tag{11}\]
Since the relative frequency (expected occurrence) of the elements is only one statistical characteristic of the pattern and does not take the order of the elements into account, the Shannon information often gives a very inaccurate estimate of the information content.
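As an illustration, the following short sketch (our own, not the author's implementation; the function name is ours) computes the Shannon information of Eq. (11) directly from the relative frequencies of the elements.

```python
# Shannon information I_S(X) of Eq. (11): each element x_i of the pattern
# contributes log2(1/p(x_i)), where p(x_i) is its relative frequency in X.
from collections import Counter
from math import log2

def shannon_information(X):
    """Shannon information in bits of a discrete pattern X (any sequence)."""
    N = len(X)
    if N == 0:
        return 0.0
    freq = Counter(X)                        # f(x): number of occurrences
    p = {x: f / N for x, f in freq.items()}  # p(x): relative frequency
    return sum(log2(1.0 / p[x]) for x in X)

# A pattern of identical symbols carries 0 bits; a balanced binary pattern
# of length N carries N bits.
print(shannon_information("aaaaaaaa"))   # 0.0
print(shannon_information("10" * 8))     # 16.0 (length 16, p(0) = p(1) = 0.5)
```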
The value of the Shannon information is the same for all patterns of the same length whose elements have the same relative frequency. If \(X\in\mathcal{M}(R)\), \(Y\in\mathcal{M}(Q)\) and \(|R|=|Q|=K\) then it holds that:
\[I_{S}(X)=I_{S}(Y),\;if\left\{p(r_{1}),p(r_{2}),...,p(r_{K})\right\}=\left\{p(q_{1}),p(q_{2}),...,p(q_{K})\right\} \tag{12}\]
Shannon information ignores the structure of the patterns at different scales, the laws encoded in them, and therefore overestimates the information content of patterns consisting of repeating sections.
The problem can be illustrated with a simple example. Let's calculate the Shannon entropy of the following three patterns:
1. \(X_{A}\) : \(001101101010111001110010010010001000010000\)
2. \(X_{B}\) : \(101010101010101010101010101010101010101010\)
3. \(X_{C}\) : \(111111110000000011111110000000001111111100000000\)
In all three cases, the set of values is \(R=\{0,1\}\), the probability of each element is \(p(0)=0.5\) and \(p(1)=0.5\), and the Shannon information is \(I_{S}(X)=\sum\limits_{i=1}^{N}\;log_{2}(\frac{1}{p(x_{i})})=16\;bit\), although it is obvious that the information content of the data series differs significantly. Due to its randomness, the information content of the series \(X_{A}\) is almost 16 bits, while the information content of the other two series is much smaller, as they contain repeated sections. In the series \(X_{B}\), for example, the 2-bit section 10 is repeated, which means that its information content is closer to 2 bits.
The problem is that in the example above we examine the data sets at an elementary level, and the Shannon entropy function does not take into account the larger-scale structure of the data set, such as the presence of repeating segments longer than one symbol. Therefore, it is natural to develop methods that are based on the Shannon entropy but analyze the data series over the entire spectrum of resolutions, i.e., over the entire frequency range, and thus provide a more accurate approximation of the information content of the data series. Many such solutions have already been published; see, for example, the articles [2] and [6]. This article presents an additional method.
## 5 SSM information
### Shannon information spectrum
Let the pattern \(X\) be partitioned into sections of length \(r\), where \(m=[N/r]\):
\[X^{(r)}=[x_{1}...x_{r},x_{r+1}...x_{2r},\;...,\;x_{(m-1)\cdot r+1}...x_{m\cdot r}] \tag{13}\]
Let the following series be denoted as the Shannon information spectrum (SP) of the pattern \(X\):
\[I_{SP}^{(r)}(X)=I_{S}(X^{(r)}),\;r=1,...,[N/2] \tag{14}\]
From the sequences \(X^{(r)}\), we omit truncated partitions, i.e., those that are shorter than \(r\). In the cases \(r>[N/2]\), \(I_{SP}(X^{(r)})=0\), so these are also omitted from the spectrum.
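The spectrum of Eqs. (13)-(14) can be computed by re-encoding the pattern at each resolution \(r\) and applying the elementary Shannon information to the resulting sequence of sections. The sketch below is our own illustration (not the author's code); truncated tails are dropped as described above.

```python
# Shannon information spectrum, Eqs. (13)-(14): cut the pattern into
# non-overlapping sections of length r, treat each section as one symbol,
# and compute the Shannon information of the resulting sequence.
from collections import Counter
from math import log2

def shannon_information(X):
    N = len(X)
    freq = Counter(X)
    return sum(log2(N / freq[x]) for x in X) if N else 0.0

def information_spectrum(X):
    """I_SP^(r)(X) for r = 1, ..., [N/2]."""
    N = len(X)
    spectrum = []
    for r in range(1, N // 2 + 1):
        m = N // r
        partitions = [tuple(X[i * r:(i + 1) * r]) for i in range(m)]
        spectrum.append(shannon_information(partitions))
    return spectrum

# A strictly repeating pattern shows a dip at the repetition length,
# because at that resolution all partitions are identical.
print(information_spectrum("10" * 8)[:4])   # r = 2 gives 0 bits
```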
### Maximal Shannon information spectrum
The Shannon information spectrum is maximal in the case of random data sets. Let the following formula be denoted as the maximum Shannon information spectrum (SMS):
\[I_{SMS}^{(r)}(X)=m\cdot log_{2}(min(K^{r},m)),\;r=1,...,[N/2] \tag{15}\]
\(I_{SMS}^{(r)}(X)\) is a supremum for all information spectra having the same value set and pattern length. If \(K^{r}<m\), then in the case of random patterns the value set of the partitioning most likely contains all possible partitions, so the information content is approximately \(m\cdot log_{2}(K^{r})\). If \(K^{r}>m\), then the partitioning cannot contain all possible partitions and each partition will most likely be unique, so the information content will be \(m\cdot log_{2}(m)\). If \(r\) is small enough, then the series \(X^{(r)}\) most likely contains all possible partitions; therefore, for random data sets the measured amount of information approximately equals the maximum possible information content of the pattern, i.e. if \(r\) is small, then \(I_{SMS}^{(r)}(X)\approx I_{MAX}(X^{(r)})=N\cdot log_{2}(K)\).
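A direct translation of Eq. (15) could look as follows (our own sketch; the example values are for a binary alphabet).

```python
# Maximum Shannon information spectrum, Eq. (15): with K distinct symbols and
# m = [N/r] partitions of length r, a pattern can show at most
# m * log2(min(K**r, m)) bits at resolution r.
from math import log2

def max_information_spectrum(N, K):
    """I_SMS^(r) for r = 1, ..., [N/2], given pattern length N and alphabet size K."""
    spectrum = []
    for r in range(1, N // 2 + 1):
        m = N // r
        spectrum.append(m * log2(min(K ** r, m)))
    return spectrum

print(max_information_spectrum(16, 2)[:4])   # [16.0, 16.0, ~11.61, 8.0]
```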
Figure 1: Diagram \(A\) shows the Shannon information spectrum of the random pattern \(X_{A}\), and diagram \(B\) that of the repeating pattern \(X_{C}\) (Appendix I). It can be seen that in case \(B\), a lower value appears at certain frequencies.
Figure 2: Comparison of maximum Shannon information spectrum (ISMS) and Shannon information spectrum (ISP) of the repeating pattern \(X_{C}\).
### Shannon normalized information spectrum
If we are interested in how large the information content appears relative to the maximum value at each resolution, we can normalize the spectrum by the maximum spectrum to the range \([0,\,N\cdot log_{2}(K)]\). Let the following sequence be denoted as the Shannon normalized information spectrum (SNS):
\[I_{SNS}^{(r)}(X)=\begin{cases}\frac{I_{SP}^{(r)}(X)}{I_{SMS}^{(r)}(X)}\cdot I_{MAX}(X),&if\,|R_{X^{(r)}}|>1\\ r\cdot\frac{I_{SP}^{(1)}(X)}{N},&if\,|R_{X^{(r)}}|=1\end{cases}\quad where\,r=1,...,[N/2] \tag{16}\]
If the value set of the partitioning has only one element, i.e. \(|R_{X^{(r)}}|=1\), the normalized value would be 0. In this case the information content should be the information content of the repeating partition, so the average Shannon entropy of an element at the elementary resolution is multiplied by the length of the partition: \(r\cdot\frac{I_{SP}^{(1)}(X)}{N}\).
The figures show that different types of patterns have very different and characteristic spectra. This suggests that the type or source of the pattern may be inferred from the nature of the spectrum, but we do not deal with this in this study.
Figure 3: Comparison of the Shannon normalized information spectra (SNS) of patterns from very different sources. The vertical axis represents the amount of information in bits measured at the given resolution. It can be seen how different the spectra of the different patterns are, but in most cases there is a resolution where the information content shows a definite minimum. The minima are marked with an arrow. A: random binary pattern, B: binary pattern with repeating sections, C: DNA section, D: English text, E: ECG signal, F: audio recording containing speech, G: evolution of the number of sunspots between 1700-2021, H: seismogram, I: Lena’s photo.
### SSM information
We know that the Shannon information gives an upper estimate in all cases, so we get the most accurate approximation of the information content from the normalized spectrum if we take its minimum. Let the information content calculated from the normalized spectrum be denoted as the Shannon spectrum minimum information (SSM information):
\[I_{SSM}(X)=\min_{i=1,...,[N/2]}\left(I_{SNS}^{(i)}(X)\right) \tag{17}\]
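Putting Eqs. (15)-(17) together, a self-contained sketch of the full SSM computation could look as follows (our reconstruction, not the author's code; it uses the elementary spectrum \(I_{SP}^{(1)}\) in the single-valued case of Eq. (16), as discussed above).

```python
# SSM information: normalize the information spectrum by the maximum spectrum,
# handle the single-valued-partitioning case separately, and take the minimum
# over all resolutions r (Eqs. (15)-(17)).
from collections import Counter
from math import log2

def shannon_information(X):
    N = len(X)
    freq = Counter(X)
    return sum(log2(N / freq[x]) for x in X) if N else 0.0

def ssm_information(X):
    """I_SSM(X): minimum of the Shannon normalized information spectrum, Eq. (17)."""
    N = len(X)
    K = len(set(X))
    if K < 2:
        return 0.0                               # condition 1: constant pattern
    i_max = N * log2(K)                          # Eq. (9)
    i_s_elementary = shannon_information(X)      # I_SP^(1)(X)
    best = i_max
    for r in range(1, N // 2 + 1):
        m = N // r
        partitions = [tuple(X[i * r:(i + 1) * r]) for i in range(m)]
        i_sp = shannon_information(partitions)           # Eq. (14)
        i_sms = m * log2(min(K ** r, m))                 # Eq. (15)
        if len(set(partitions)) > 1:
            i_sns = i_sp / i_sms * i_max                 # Eq. (16), first case
        else:
            i_sns = r * i_s_elementary / N               # Eq. (16), second case
        best = min(best, i_sns)
    return best

# The repeating pattern "10"*8 collapses to the information content of its
# repeating two-symbol section, while a constant pattern gives 0.
print(ssm_information("10" * 8))    # 2.0
print(ssm_information("aaaa"))      # 0.0
```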
Shannon information, SSM information and compression complexity of different patterns (Appendix I) in bits:
Relative Shannon information, SSM information, and compression complexity of different patterns (Appendix I) compared to maximum information:
It can be seen from the table that the SSM information gives results similar to those of the compression algorithms. In general, the more computationally demanding a compression or information measurement procedure is, the closer it gets to the Kolmogorov complexity. In the examined examples, the results of the SSM information usually lie between the results of ZIP and 7Z, so the computational complexity of the SSM information is expected to be similar to that of ZIP and 7Z.
\begin{table}
\begin{tabular}{|c|l|c|c|c|c|c|c|} \hline Pattern & Source & \(I_{MAX}(X)\) & \(I_{S}(X)\) & \(I_{SSM}(X)\) & \(I_{ZIP}(X)\) & \(I_{7Z}(X)\) & \(I_{ZPAQ}(X)\) \\ \hline \hline \(\mathrm{X}_{A}\) & Random binary pattern. & 48 & 46 & 40 & & & \\ \hline \(\mathrm{X}_{B}\) & Repeating binary pattern. & 48 & 48 & 2 & & & \\ \hline \(\mathrm{X}_{C}\) & Repeating binary pattern. & 48 & 48 & 13 & & & \\ \hline \(\mathrm{X}_{D}\) & Repeating text. & 362 & 343 & 58 & & & \\ \hline \(\mathrm{X}_{E}\) & Duplicate text with one character error. & 374 & 347 & 116 & & & \\ \hline \(\mathrm{X}_{F}\) & Random DNA pattern. & 471 & 422 & 409 & & & \\ \hline \(\mathrm{X}_{G}\) & DNA segment of COVID virus. & 471 & 405 & 388 & & & \\ \hline \(\mathrm{X}_{H}\) & Random string (0-9, a-z, A-Z). & 1209 & 1174 & 1174 & & & \\ \hline \(\mathrm{X}_{I}\) & English text (James Herriot’s Cat Stories). & 1104 & 971 & 971 & & & \\ \hline \(\mathrm{X}_{J}\) & Solar activity between 1700-2021 (A-Z). & 1495 & 1349 & 1295 & & & \\ \hline \(\mathrm{X}_{K}\) & Isaac Asimov: True love. & 50901 & 37266 & 32649 & 30904 & 29968 & 25248 \\ \hline \(\mathrm{X}_{L}\) & Binary ECG signal. & 80000 & 79491 & 47646 & 52320 & 41032 & 36968 \\ \hline \(\mathrm{X}_{M}\) & Binary seismic data. & 313664 & 312320 & 171546 & 83920 & 66064 & 45824 \\ \hline \(\mathrm{X}_{N}\) & Speech recording. & 325472 & 325342 & 277489 & 286760 & 257856 & 251408 \\ \hline \(\mathrm{X}_{O}\) & Lena. & 524288 & 524216 & 422085 & 443096 & 371360 & 337408 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of SSM information and compression complexity of different patterns.
\begin{table}
\begin{tabular}{|c|l|c|c|c|c|c|} \hline Pattern & Source & \(I_{S}^{(rel)}(X)\) \% & \(I_{SSM}^{(rel)}(X)\) \% & \(I_{ZIP}^{(rel)}(X)\) \% & \(I_{7Z}^{(rel)}(X)\) \% & \(I^{(rel)}{}_{ZPAQ}(X)\) \% \\ \hline \hline \(\mathrm{X}_{K}\) & Isaac Asimov: True love. & 73 & 64 & 61 & 59 & 50 \\ \hline \(\mathrm{X}_{L}\) & Binary ECG signal. & 99 & 60 & 65 & 51 & 46 \\ \hline \(\mathrm{X}_{M}\) & Binary seismic data. & 100 & 55 & 27 & 21 & 15 \\ \hline \(\mathrm{X}_{N}\) & Speech recording. & 100 & 85 & 88 & 79 & 77 \\ \hline \(\mathrm{X}_{O}\) & Lena. & 100 & 81 & 85 & 71 & 64 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of relative SSM information and relative compression complexity of different patterns.
### Comparison with computational complexity
If we do not know the signal set of the signal sequence, the first step is to determine the number of signals occurring in the signal sequence, which has an asymptotic complexity of \(\mathcal{O}(N\cdot logN)\).
Determining the Shannon information consists of two steps. In the first step, we determine the frequency of signals, which has a complexity of \(\mathcal{O}(N)\), and in the second step, we sum up the entropy of each signal, so the total complexity of the Shannon information is \(\mathcal{O}(N\cdot logN)+\mathcal{O}(N)=\mathcal{O}(N\cdot logN)\).
For the ZIP, 7Z and ZPAQ algorithms used to calculate the compression complexity, the complexity is usually between \(\mathcal{O}(N)\) and \(\mathcal{O}(N\cdot logN)\), but for ZPAQ it may be greater [7][14][13].
In the case of the SSM information, the first step is also to determine the frequency of signals, which has a complexity of \(\mathcal{O}(N)\). In the second step, the Shannon information spectrum is calculated with \(\mathcal{O}(N)+\mathcal{O}(N/2)+\mathcal{O}(N/3)+...+\mathcal{O}(2)=\mathcal{O}(N\cdot logN)\) complexity; finally, the minimum of the spectrum can be determined with \(\mathcal{O}(N)\) complexity. The complexity of calculating the SSM information in the worst case is therefore \(\mathcal{O}(I_{SSM}(X))=\mathcal{O}(N\cdot logN)+\mathcal{O}(N)+\mathcal{O}(N\cdot logN)+\mathcal{O}(N)=\mathcal{O}(N\cdot logN)\), which is identical to that of the compression algorithms.
Figure 4: Comparison of the results of different information measurement methods.
Figure 5: Comparison of the average results of different information measurement methods.
### Known issues
All methods of calculating the amount of information have inaccuracies. One of the problems with SSM information is that if the repetition in a repeating pattern is not perfect, the value of the SSM information is larger than expected, as shown in the example below.
## 6 Conclusion
It can be shown that the SSM information can determine the information content of patterns with an accuracy comparable to that of compression algorithms, while remaining simple. In addition, the information spectrum presented here provides a useful visual tool for studying the information structure of patterns in the frequency domain.
|
2310.16589 | Polarization-entangled photons from a whispering gallery resonator | Crystalline Whispering Gallery Mode Resonators (WGMRs) have been shown to
facilitate versatile sources of quantum states that can efficiently interact
with atomic systems. These features make WGMRs an efficient platform for
quantum information processing. Here, we experimentally show that it is
possible to generate polarization entanglement from WGMRs by using an
interferometric scheme. Our scheme gives us the flexibility to control the
phase of the generated entangled state by changing the relative phase of the
interferometer. The S value of the Clauser-Horne-Shimony-Holt's inequality in
the system is $2.45 \pm 0.07$, which violates the inequality by more than 6
standard deviations. | Sheng-Hsuan Huang, Thomas Dirmeier, Golnoush Shafiee, Kaisa Laiho, Dmitry V. Strekalov, Gerd Leuchs, Christoph Marquardt | 2023-10-25T12:25:40Z | http://arxiv.org/abs/2310.16589v1 | # Polarization-entangled photons from a whispering gallery resonator
###### Abstract
Crystalline Whispering Gallery Mode Resonators (WGMRs) have been shown to facilitate versatile sources of quantum states that can efficiently interact with atomic systems. These features make WGMRs an efficient platform for quantum information processing. Here, we experimentally show that it is possible to generate polarization entanglement from WGMRs by using an interferometric scheme. Our scheme gives us the flexibility to control the phase of the generated entangled state by changing the relative phase of the interferometer. The S value of the Clauser-Horne-Shimony-Holt's inequality in the system is \(2.45\pm 0.07\), which violates the inequality by more than 6 standard deviations.
## I Introduction
Entanglement plays a pivotal role for several quantum technologies in the field of quantum communication and information. For these applications miniaturized plug & play photonic sources are desired. Additionally, the chosen quantum emitter has to be compatible with the quantum hardware such that the light source can drive the latter. For example, efficient distribution and sharing of entanglement between different nodes is important for the realization of quantum communication networking infrastructure. In many proposals, the nodes of such a network would consist of atomic or atom-like systems [1]. The optical transitions of those systems typically exhibit narrow bandwidths in the range from 10 to 100 MHz. An efficient interface between different nodes therefore requires sources of entangled states with the optical bandwidths on the same order.
Often, one generates entangled states with non-linear processes such as spontaneous parametric downconversion (SPDC) in second-order non-linear materials [2; 3; 4; 5] or spontaneous four-wave mixing in materials exhibiting third-order non-linearities [6]. To realize the entangled states, one superimposes the outputs of two independent processes which are highly indistinguishable. The two typical configurations are using two nonlinear crystals whose optical axes are perpendicular [2] or a single crystal with an interferometric setup [3; 4]. One drawback of such sources is that the bandwidth is orders of magnitude wider and not compatible with atomic systems without further effort. In our case, we use a Whispering Gallery Mode Resonator (WGMR) as a source for generating polarization entanglement. Such sources typically have a bandwidth of a few tens of MHz and can interact with narrowband systems without additional filters.
WGMRs are made of highly transparent dielectric materials. When light propagates in such a resonator, it is confined near the rim due to total internal reflection. There are several interesting advantages to achieving SPDC in WGMRs [7]. Firstly, WGMRs have a high Q-factor (\(Q>10^{7}\)) and small mode volume (less than \(10^{6}\lambda^{3}\), where \(\lambda\) is the optical wavelength inside the resonator), which can increase the efficiency of SPDC. The high Q factor ensures a narrow bandwidth over the whole transparency region of the material. Moreover, the spectrum of WGMRs can be widely tuned by various techniques. Additionally, it is possible to fine tune the bandwidth via the distance of the coupling prisms relative to the WGMR [8]. It has been shown that with these properties one can efficiently tune the generated parametric photons from the same WGMR to narrowband transitions of rubidium and cesium [9]. Those features make WGMRs a potentially promising source of quantum states of light for developing quantum networks. However, one key aspect of this source is missing: polarization entanglement has not been demonstrated in WGMRs.
In this work, we demonstrate for the first time two-photon polarization entanglement from a WGMR in an interferometric scheme. In this scheme, we leverage the directional degeneracy of the whispering gallery modes. By coupling the pump laser into the equivalent clockwise (CW) and counterclockwise (CCW) modes, we generate the parametric signal and idler photon pairs that populate counterpropagating, but otherwise identical, whispering gallery modes. These photon pairs can be used to yield polarization entanglement. This is achieved by rotating the polarization of the CW-propagating beams and combining the two signal beams as well as the two idler beams on polarizing beamsplitters. We show that by adjusting the relative phase between the combined parametric beams, we can access various polarization-entangled Bell states. These states can be characterized by the Clauser-Horne-Shimony-Holt (CHSH) \(S\)-parameter [10]. The quantum states, for which this parameter takes the values \(S>2\), incorporate two-partite superpositions exhibiting tighter correlations than classical systems can have. For the maximally entangled states
this parameter takes the value \(S=2\sqrt{2}\). The classical boundary \(S\leq 2\) is known as the CHSH inequality [10]. We demonstrate a violation of this inequality by more than 6 standard deviations, attesting to the quantum nature of the produced polarization states. Apart from the direct measurement of the CHSH \(S\)-parameter, it is possible to extract it from the visibility of the observed interference fringe. We compare these two \(S\)-values for the generated state and find them consistent.
## II Results
### Experimental setup
In the following, we explain the experimental setup as shown in Fig. 1 (a). We couple a \(532\,\mathrm{nm}\) continuous-wave laser into the WGMR from the CW and CCW directions. The non-polarizing beamsplitter is used to detect the reflected pump spectrum. The x-cut LiNbO\({}_{3}\) prism is used to couple the pump light into the WGMR while minimizing the loss of parametric photons due to parasitic out-coupling [11]. The diamond prism is used to couple the signals and idlers out of the WGMR. Note that because of the wavelength difference, the pump loss due to evanescent coupling through this prism is strongly suppressed when the signal and idler out-coupling are optimized. In this way, using two separate prisms to couple the three involved light fields allows us to selectively optimize the coupling rates. The optical elements located on the right side of the WGMR are used to create polarization-entangled states and examine their quality. The WGMR is coarsely temperature stabilized using a Peltier element and a temperature controller. In addition, we implement a fast temperature control technique by shining blue light on top of the WGMR [12].
rotation. The state at the PBSs is
\[|\Phi_{CW}\rangle\propto e^{i(\omega_{p}t_{CW,p}+\omega_{s}t_{CW,s}+\omega_{i}t_{ CW,i})}|V\rangle_{s}|V\rangle_{i} \tag{2}\]
Since we perform coincidence measurements, photons from different pairs are uncorrelated and only contribute to the noise. We can represent the state as
\[|\Phi\rangle=\frac{1}{\sqrt{2}}(|HH\rangle+e^{i\varphi}|VV\rangle) \tag{3}\] \[\varphi=\omega_{p}\Delta t_{p}+\omega_{s}\Delta t_{s}+\omega_{i} \Delta t_{i} \tag{4}\]
where \(\Delta t=t_{CW}-t_{CCW}\) and the overall phase is dropped. Note that by moving one of the HWPs from the CW to CCW channels we could produce other types of entangled states, such as \(|\Psi\rangle=(|HV\rangle+e^{i\varphi}|VH\rangle)/\sqrt{2}\).
Our setup is a nonlinear optical interferometer operating at three different wavelengths. A control over the phase \(\varphi\) can be implemented in any part of this interferometer. We choose to do it with a piezo-actuated mirror in the CW signal channel thereby affecting the \(\Delta t_{s}\) in Eq. (4). Besides being actively controlled, the phase \(\varphi\) can drift due to variations in the optical path lengths that need to be stabilized for the duration of the experiment. Furthermore, even for the stable optical path lengths, there is a variation \(\Delta\varphi\) around the fixed value \(\varphi\) arising from the pump, signal and idler frequency fluctuations around their central values:
\[\Delta\varphi=\Delta\omega_{p}\Delta t_{p}+\Delta\omega_{s}\Delta t_{s}+\Delta \omega_{i}\Delta t_{i} \tag{5}\]
Here the pump laser linewidth \(\Delta\omega_{p}\) is very small and may be safely neglected. However, the signal and idler resonance widths, which define their spectral widths \(\Delta\omega_{s}\) and \(\Delta\omega_{i}\), are not negligible. This means that the interferometer arm lengths \(L_{CW}\), \(L_{CCW}\) need to be balanced to provide a good overlap of the interfering biphoton wavepackets: \(|L_{s,CCW}-L_{s,CW}|\ll c/\Delta\omega_{s}\), \(|L_{i,CCW}-L_{i,CW}|\ll c/\Delta\omega_{i}\). This would be very difficult to achieve with free-space SPDC, which is characterized by very broad phase matching and hence large signal and idler spectral widths. But with our WGMRs, whose signal and idler linewidths are given above, it is sufficient to balance the interferometer arm lengths on the scale of a few meters.
Changing the relative phase \(\varphi\) by applying a voltage to the piezo in the experiment, we can freely change from one Bell state \(|\Phi^{-}\rangle=(|HH\rangle-|VV\rangle)/\sqrt{2}\) to another \(|\Phi^{+}\rangle=(|HH\rangle+|VV\rangle)/\sqrt{2}\). This is shown in Fig. 3. When measured in the diagonal/anti-diagonal polarization basis, the coincidence count rate for the state \(|\Phi^{+}\rangle\) is expected to reach a maximum, while that of the state \(|\Phi^{-}\rangle\) is expected to reach a minimum. Using the latter setting, we characterized the passive stability of our interferometric setup by tracking the coincidence counts. We found it to be stable on a scale of several minutes, which is sufficient for the following measurements. Details of this can be found in the Methods section.
We further demonstrated the non-local two-photon interference by changing the polarization projections. The entangled state we used in the following measurements is \(|\Phi^{-}\rangle\). The data shown in Fig. 4 are measured with the polarizer in the signal arm set to project the polarization onto the directions \(\theta_{s}=0^{\circ}/90^{\circ}/45^{\circ}/135^{\circ}\) (further denoted as the H/V/D/A basis), while changing the projection angle \(\theta_{i}\) of the polarizer in the idler arm. From Eq. (3) it is easy to find that the normalized coincidence counts \(\rho\) should fit the prediction
\[\rho=\cos^{2}(\theta_{i}+\theta_{s}) \tag{6}\]
Figure 1: Sketches of the experimental arrangement. (a) is the experimental setup. EOM: electro-optic modulator, FG: function generator, HWP: half-wave plate, PBS: polarizing beamsplitter, BS: non-polarizing beamsplitter, PID: proportional–integral–derivative controller, LN: x-cut LiNbO\({}_{3}\) prism, D: diamond prism, DM: dichroic mirror, Piezo: piezoelectric transducer, Pol: polarizer, D: detector. (b) and (c) illustrate the interferometric arrangement for CCW and CW propagation of the involved beams, respectively. The side views of the electric field distributions are presented for the resonator mode numbers (d) \(L=m\), \(q=1\) (e) \(|L-m|=1\), \(q=1\) and (f) \(L=m\), \(q=2\). (g) The top view of the electric field distribution for a resonator with \(m=20\).
We fit the data with Eq. (6), resulting in a visibility of 95%/89%/89%/86% in the H/V/D/A bases, respectively. These results exceed the classical limit of 71% [3], thus verifying the generation of Bell states. In Fig. 4(b-c) we further illustrate the single counting rates for the signal and idler detectors recorded during the coincidence counting. By showing that these rates are nearly constant, we emphasize that the observed pattern is the result of a true two-photon quantum interference effect and not a statistical product of two single counting rates. The reason why the maximum coincidence value in the H basis is higher than in the other bases is the difference in loss between the CW and CCW directions. In the experiments we aimed at keeping similar single count rates in both directions, so we slightly increased the pump power in the CW direction, which has greater losses than the CCW direction.
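The prediction of Eq. (6) follows directly from projecting the state of Eq. (3) onto linear polarizations. The short sketch below is our own illustration (the chosen angles are arbitrary); it evaluates the projection amplitude and confirms the \(\cos^{2}(\theta_{i}+\theta_{s})\) dependence for \(\varphi=\pi\), i.e., the state \(|\Phi^{-}\rangle\).

```python
# Projecting |Φ> = (|HH> + e^{iφ}|VV>)/√2 onto linear polarizations θ_s, θ_i
# and normalizing gives ρ = cos²(θ_i + θ_s) for φ = π.
import numpy as np

def coincidence_probability(theta_s, theta_i, phi=np.pi):
    """Normalized coincidence rate for polarizer angles theta_s, theta_i (radians)."""
    amp = (np.cos(theta_s) * np.cos(theta_i)
           + np.exp(1j * phi) * np.sin(theta_s) * np.sin(theta_i)) / np.sqrt(2)
    return 2.0 * np.abs(amp) ** 2          # normalized so that the maximum is 1

for ts in np.deg2rad([0, 45, 90, 135]):    # the H/D/V/A signal settings
    ti = np.linspace(0, 2 * np.pi, 50)
    pred = coincidence_probability(ts, ti)
    assert np.allclose(pred, np.cos(ts + ti) ** 2)   # matches Eq. (6)
print("two-photon interference follows cos²(θ_i + θ_s) for |Φ⁻>")
```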
To highlight the nonlocal character of the observed two-photon interference, we rotate both polarizers at the same time in the same and opposite directions. The results shown in Fig. 5 fit the prediction in Eq. (6).
The nonlocal nature of observed two-photon interference allows us to test Bell type inequalities. We calculated the S value of the CHSH inequality [10] in two ways, arriving at consistent results. On the one hand, we measured the coincidence counts in four polarization combinations, and extracted the S value from the measurements [2]. The results show \(S=2.45\pm 0.07\), which violates the CHSH inequality \(S\leq 2\) by more than 6 sigmas (Fig. 6). Additionally, we calculated the S value from the visibilities of the four sinusoidal curves in Fig. 4(a) [3; 5]. The result yields a value of \(S=2.54\pm 0.06\), which violates
Figure 4: Count rate measurements when rotating the polarization in the idler beam path. (a) represents the coincidence counts for the four different polarizations H/V/D/A selected in the signal arm. (b) and (c) are the idler and signal count rates in the 4 different polarization bases.
Figure 3: Two-photon interference fringe as a function of the phase \(\phi\).
Figure 2: Conventional characterization of the CW and CCW PDC processes. (a) and (b) represent the cross-correlation functions of the signals and idlers generated from the CW and CCW directions. The leading and trailing time constants \(\tau_{s,i}\)[16] are calculated by exponential fitting. (c) and (d) show the auto-correlation functions of the signals generated from the CW/CCW direction. The red lines are the theoretical prediction of the peak value for a perfectly single mode quantum state.
the CHSH inequality by more than 8 sigmas.
## III Discussion
In conclusion, we have demonstrated the creation of polarization-entangled photons from a WGMR. The fact that the visibilities extracted from the measured coincidence fringes are larger than 85%, while the single count rates remain unchanged, provides a genuine sign of the generation of high-quality entanglement. The S value of the CHSH inequality is \(S=2.45\pm 0.07\), confirming that the generated quantum states are polarization-entangled.
The polarization-entangled photon pairs from the WGMRs can potentially be used for entanglement swapping, in order to distribute entanglement over long distances between "material" qubits, such as atoms or ions. By design, the signals can meet the frequency and bandwidth requirements of atomic systems for efficient atom-photon interactions, while the wavelength of the idlers is located in the telecom band, favorable for long-distance transmission. As a result, long-distance quantum information processing can be realized. Besides, with the same configuration as shown in Fig. 1, it is possible to generate higher-order states, such as 4-photon GHZ states, which makes this type of source interesting for various advanced quantum information applications and protocols. Apart from that, by choosing the coupling regime, one can switch between a high photon count rate and a long coherence time, making this type of source compatible with applications in these two extreme regimes. Finally, as the required pump power is low (only a few hundred nanowatts were needed in this experiment), our source can be enabling for applications with limited power budgets such as space satellite projects.
## IV Methods
### Stability of the setup
We measured the free-running phase drift of our setup to determine how long we can measure without actively stabilizing the interferometer. The result is shown in Fig. 7. We found that the drift is slow enough that we do not need to actively stabilize the interferometers if a single measurement is completed in less than 5 minutes.
### Coincidence contrasts
For the measurements in the D/A basis shown in Fig. 4, we first rotate both polarizers to \(45^{\circ}/135^{\circ}\) and apply a voltage to the piezo to minimize the coincidence counts. Since we do not actively stabilize the interferometers, this step is necessary to ensure that the state we measure is \(|\Phi^{-}\rangle\). During the measurements, the voltage applied to the piezo remains unchanged. After that, we record the coincidence counts of different polarizer settings for \(30\,\mathrm{s}\) each and then rotate the polarizer in the idler arm by \(30^{\circ}\). This step is repeated until the idler polarizer is rotated to \(225^{\circ}/315^{\circ}\). Then, we minimize the coincidence counts again to compensate for the phase drifts of our setup. After that, we continue to step through the remaining polarizer settings until the polarizer is rotated to \(375^{\circ}/465^{\circ}\). This procedure is repeated 10 times in total to gather sufficient statistics for an error estimation.
Figure 5: The coincidence counts which are measured when rotating both polarizers at the same time. The gray line is the theoretical prediction of the coincidence counts of \(\theta_{i}-\theta_{s}=0^{\circ}\)
Figure 6: The S values extracted from 15 repetitions of the CHSH measurements done within three days. The errors are calculated assuming that the main uncertainties come from the photon counting statistics, and that the statistics are Poissonian.
For the measurements in the H/V basis, the polarizer in the signal arm is rotated to \(0^{\circ}/90^{\circ}\) and the polarizer in the idler arm is set to \(0^{\circ}\). The coincidence counts are recorded for 30 s and repeated 10 times. After that, we rotate the polarizer in the idler arm in steps of \(30^{\circ}\) and record the coincidence counts until it is rotated to \(330^{\circ}\).
For the measurements \(\theta_{i}-\theta_{s}=0^{\circ}/\theta_{i}+\theta_{s}=270^{\circ}\) shown in Fig. 5, the steps are similar to that of in the D/A basis, but this time we rotate both polarizers simultaneously in the same/opposite direction. Since the oscillation frequency is twice as fast as the frequency in Fig. 4, we rotate the polarizers by \(22.5^{\circ}\) instead to catch the feature.
### S value
For the measurements of the S value, we first rotate both polarizers to \(45^{\circ}\) and minimize the coincidence counts with the piezo. After that, we rotate the polarizer in the idler arm to \(67.5^{\circ}/112.5^{\circ}/157.5^{\circ}/202.5^{\circ}\) in sequence and record the coincidence counts for 30 s each. We choose these four angles because the greatest violation of the CHSH inequality occurs for these settings [10]. Then, we rotate the polarizer in the signal arm to \(90^{\circ}\) and the polarizer in the idler arm to \(202.5^{\circ}/157.5^{\circ}/112.5^{\circ}/67.5^{\circ}\) in sequence and record the coincidence counts. After that, we rotate both polarizers to \(135^{\circ}\) and minimize the coincidence counts again. Later, we rotate the polarizer in the idler arm to \(157.5^{\circ}/202.5^{\circ}/247.5^{\circ}/292.5^{\circ}\) in sequence and record the coincidence counts. Then, we rotate the polarizer in the signal arm to \(180^{\circ}\) and the polarizer in the idler arm to \(292.5^{\circ}/247.5^{\circ}/202.5^{\circ}/157.5^{\circ}\) in sequence and record the coincidence counts. The measurements are repeated 15 times for the error estimation.
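For illustration, the sketch below is our own (not measured data): the visibility value and the functional form \(E(a,b)=V\cos(2(a+b))\), which follows from Eq. (6) for ideal polarizers, are assumptions. It shows how the CHSH \(S\) parameter follows from the correlations at the settings used above; with \(V\approx 0.9\) it reproduces an \(S\) value close to the one extracted from the fringe visibilities.

```python
# CHSH S parameter from the |Φ⁻>-type correlations E(a, b) = V·cos(2(a+b));
# for these settings the ideal result is S = 2√2·V.
import numpy as np

def correlation(a_deg, b_deg, visibility=0.9):   # visibility is an assumed value
    a, b = np.deg2rad([a_deg, b_deg])
    return visibility * np.cos(2 * (a + b))

a, a_prime = 45.0, 90.0        # signal polarizer settings used in the measurement
b, b_prime = 67.5, 112.5       # two of the idler polarizer settings used above

S = (abs(correlation(a, b) - correlation(a, b_prime))
     + abs(correlation(a_prime, b) + correlation(a_prime, b_prime)))
print(f"S = {S:.2f}")          # ≈ 2.55 for V = 0.9, above the classical bound of 2
```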
###### Acknowledgements.
This research was conducted within the scope of the project QuNET, funded by the German Federal Ministry of Education and Research (BMBF) in the context of the federal government's research framework in IT-security "Digital. Secure. Sovereign".
**Author Contributions** S-H.H, T.D., and G.S. built the experimental apparatus and performed the measurements. S-H.H, T.D., K.L., and D.S. performed the data analysis. G.L. and C.M. supervised the project. All authors contributed to writing the manuscript.
|