2307.03258 | Pretty Good Strategies for Benaloh Challenge | Wojciech Jamroga | 2023-07-06T19:27:22Z | http://arxiv.org/abs/2307.03258v2
# Pretty Good Strategies for Benaloh Challenge
###### Abstract
Benaloh challenge allows the voter to audit the encryption of her vote, and in particular to check whether the vote has been represented correctly. An interesting analysis of the mechanism has been presented by Culnane and Teague. The authors propose a natural game-theoretic model of the interaction between the voter and a corrupt, malicious encryption device. Then, they claim that there is no "natural" rational strategy for the voter to play the game. In consequence, the authorities cannot provide the voter with a sensible auditing strategy, which undermines the whole idea.
Here, we claim the contrary, i.e., that there exist simple rational strategies that justify the usefulness of Benaloh challenge.
## 1 Introduction
_Benaloh challenge_[3, 4] aims to give the voter the possibility to audit the encryption of her vote, and in particular to check whether the vote has been represented correctly. More precisely, the device that encrypts and sends the ballot must first commit to a representation of the vote given as input. After that, the voter decides whether to cast it or "spoil" it, i.e., open the encryption and check its correctness. Intuitively, this should reduce the risk of altering the value of the vote by a malfunctioning or corrupt machine when it casts the ballot on the voter's behalf.
An interesting analysis of the mechanism has been presented in [6]. The authors propose a natural game-theoretic model of the interaction between the voter and a corrupt, malicious encryption device. Then, they claim that there is no "natural" rational strategy for the voter to play the game, where rational play is defined in terms of Nash equilibrium [17]. More precisely, they claim that: (1) only randomized voting strategies can form a Nash equilibrium, (2) for audit sequences with bounded length, the voter gets cheated in all Nash equilibria, and (3) the Nash equilibria in the infinite game do not form an easy pattern (e.g., Bernoulli trials). In consequence, the voter cannot be provided with a sensible auditing strategy, which undermines the whole method.
In this paper, we claim that - on the contrary - there exist simple auditing strategies that justify the usefulness of Benaloh challenge. This follows from four important observations. First, we show that there _are_ Nash equilibria in bounded strategies where the voter casts her intended vote with high probability.
Based on this observation, we focus on a small subset of randomized strategies, namely the ones where the voter spoils the ballot with probability \(p\) in the first round, and in the second round always casts. Second, we point out that the rationality of strategies in Benaloh challenge is better captured by Stackelberg equilibrium [23, 22, 15] than by Nash equilibrium. Third, a sensible Stackelberg strategy does not have to be optimal; it suffices that it is "good enough" for whatever purpose it serves. Fourth, we prove that the generalized Stackelberg equilibrium in the set of such strategies does not exist, but the voter can get arbitrarily close to the upper limit of the Stackelberg payoff. To show this, we formally define the concept of _Stackelberg value_, and show that it is always higher than the value of Nash equilibrium in the set of randomized strategies for the voter.
**Related work.** Game-theoretic analysis of voting procedures that takes into account the economic or social incentives of the participants has been scarce. In [5], two voting systems were compared using zero-sum two-player games based on attack trees, with the payoffs representing the success of coercion. In [13], a simple game-theoretic model of preventing coercion was proposed and analyzed using Nash equilibrium, maxmin, and Stackelberg equilibrium. The authors of [25] applied Stackelberg games to prevent manipulation of elections, focussing on the computational complexity of preventing Denial of Service attacks. The research on _security games_[26, 21, 8], using Stackelberg equilibrium to design anti-terrorist and anti-poaching policies, is of some relevance, too.
## 2 Benaloh Challenge and Benaloh Games
We start with a brief introduction to Benaloh challenge. Then, we summarize the game-theoretic analysis of the challenge proposed in [6].
### Benaloh Challenge
_Benaloh challenge_[3, 4] is a "cut-and-choose" technique for voter-initiated encryption audits, which proceeds as follows:
1. An empty ballot is generated and provided to the voter.
2. The voter fills in the ballot and transmits it to the encryption device;
3. The device encrypts the ballot with the election public key, and makes the encrypted vote available to the voter;
4. The voter decides to cast the encrypted vote, or to open and audit the encryption. If the encryption is opened, the ballot is discarded, and the voter proceeds back to step 1.
Benaloh challenge is meant to counter the threat of a malicious encryption device that falsely encrypts the ballot, e.g., in favor of another election candidate. Importantly, this should be done without compromising receipt-freeness of the voting protocol. In a broader perspective, the challenge can be applied in any
communication scenario where the encryption mechanism is not trustworthy and plausible deniability is required on the side of the sender.
The idea behind the technique is that, if the voters audit the encryptions from time to time, corrupt devices will be exposed and investigated. Thus, it does not pay off to tamper with the encryption in the long run, and the perpetrator would have little incentive to do that. At its core, this is a game-theoretic argument.
### Benaloh Challenge as Inspection Game
Intuitively, the interaction in Benaloh challenge can be seen as a game between the voter \(V\) and the encryption device \(D\) - or, more accurately, between the voter and the malicious party that might have tampered with the device. We will use the term _Benaloh game_ to refer to this aspect of Benaloh challenge. In each round, the voter can choose between casting her intended vote (action \(cast\)) and auditing the encryption (action \(audit\)). At the same time, the device chooses to either encrypt the vote truthfully (action \(true\)) or cheat and encrypt another value of the vote (action \(false\)). Both players know exactly what happened in the previous rounds, but they decide what to do without knowing what the other player has selected in the current round.
A very interesting analysis has been presented by Chris Culnane and Vanessa Teague in [6]. The authors model the interaction as an _inspection game_, i.e., a non-cooperative game where one player verifies if the other party adheres to a given requirement - typically, a legal rule [2]. The idea is very simple: \(V\) chooses the round \(n_{cast}\) in which she wants to cast the vote, and \(D\) chooses the round \(n_{cheat}\) when it will fake the encryption for the first time. Consequently, the voter's plan is to audit the encryption in all rounds \(n<n_{cast}\), and similarly the device encrypts truthfully for all \(n<n_{cheat}\). The players choose their strategies before the game, without knowing the opponent's choice. Their payoffs (a.k.a. utilities) are presented in Figure 1, with the parameters interpreted as follows:
* \(\textit{Succ}_{i}\): the reward of player \(i\) for succeeding with their task (i.e., casting the vote as intended for \(V\), and manipulating the vote for \(D\));
* \(\textit{Fail}_{i}\): player \(i\)'s penalty for failing (i.e., getting cheated for \(V\), and getting caught with cheating for \(D\));
* \(c_{audit}\): the cost of a single audit; essentially, a measure of effort and time that \(V\) needs to invest into encrypting and spoiling a spurious ballot;
It is assumed that \(\textit{Succ}_{i},\textit{Fail}_{i},c_{audit}>0\). Also, \(c_{audit}<\textit{Fail}_{V}\), i.e., the voter cares about what happens with her vote enough to audit at least once.
Figure 1: Inspection game for Benaloh challenge [6, Fig. 2]
There are two variants of the game: finite, where the number of rounds is bounded by a predefined number \(n_{max}\in\mathbb{N}_{\geq 1}\), and infinite, where the game can proceed forever. In the finite variant, the voter chooses \(n_{cast}\in\{1,\ldots,n_{max}\}\), and the device selects \(n_{cheat}\in\{1,\ldots,n_{max},\infty\}\), with \(n_{cheat}=\infty\) meaning that it always encrypts truthfully and never cheats. In the infinite variant, the voter and the device choose respectively \(n_{cast}\in\mathbb{N}_{\geq 1}\) and \(n_{cheat}\in\mathbb{N}_{\geq 1}\cup\{\infty\}\). The structure of the game is common knowledge among the players.
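The exact payoff table of Figure 1 is not reproduced in the text, but it can be reconstructed from the parameter descriptions above and from the game-tree payoffs discussed in Section 4. A minimal Python sketch of the utility functions under that reconstruction is given below; the function names and the explicit case split are mine, not the paper's.

```python
import math

def u_V(n_cast, n_cheat, Succ_V, Fail_V, c_audit):
    """Voter's payoff (reconstructed from Figure 1 and the game tree of Section 4)."""
    if n_cast < n_cheat:                 # vote is cast before any cheating occurs
        return Succ_V - (n_cast - 1) * c_audit
    if n_cast == n_cheat:                # voter casts exactly when the device cheats
        return -Fail_V - (n_cast - 1) * c_audit
    return -n_cheat * c_audit            # the audit in round n_cheat exposes the cheat

def u_D(n_cast, n_cheat, Succ_D, Fail_D):
    """Perpetrator's payoff; n_cheat = math.inf means the device never cheats."""
    if n_cast < n_cheat:
        return 0.0                       # the device never got to cheat
    if n_cast == n_cheat:
        return Succ_D                    # the manipulated vote is cast
    return -Fail_D                       # the cheat is detected by an audit

# Device payoffs for n_max = 5 with Succ_D = 1, Fail_D = 4 (the values of Example 1)
for n_cast in range(1, 6):
    print(n_cast, [u_D(n_cast, n_cheat, 1, 4) for n_cheat in [1, 2, 3, 4, 5, math.inf]])
```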
**Discussion.** One might consider a slightly richer game by allowing the voter to refuse participation (\(n_{cast}=0\)) or to keep auditing forever (\(n_{cast}=\infty\)). Also, we could include a reward \(\mathit{Catch}_{V}\) that the voter gets when detecting an attack and reporting it to the authorities. In this paper, we stick to the game model of [6], and leave a proper analysis of the richer game for the future.
### Are There Simple Rational Strategies to Cast and Audit?
Culnane and Teague make the following claims about their model (and, by implication, about the game-theoretic properties of Benaloh challenge):
1. There is no Nash equilibrium in deterministic strategies [6, Lemma 1]. Thus, a rational voter must use _randomized strategies_ in Benaloh challenge.1
Footnote 1: A concise explanation of game-theoretic terms is presented in Sections 3 and 5.1.
2. A Nash equilibrium in the _finite Benaloh game_ can only consist of the voter casting right away and the device cheating right away; the argument proceeds by backward induction [6, Lemma 2 and its proof]. Thus, by [6, Lemma 1], there are no Nash equilibria in the finite Benaloh game, and a rational voter should use _infinite audit strategies_.
3. In the _infinite Benaloh game_, there is no Nash equilibrium in which the voter executes a Bernoulli process, i.e., randomizes in each round with the same probability \(r\) whether to audit or cast [6, Theorem 2]. Quoting the authors, "this prevents authorities from providing voters with a sensible auditing strategy." In other words, there are no "easy to use" rational strategies for the voter in Benaloh challenge.
The above claims have two controversial aspects: a technical one and a conceptual one. First, while claims (1) and (3) are correct, claim (2) is not. By Nash's theorem [17], every finite game has a Nash equilibrium in randomized strategies, and this one cannot be an exception. We look closer at the issue in Section 4, show why backward induction does _not_ work here, and demonstrate that a clever election authority can design the procedure so that the voters do have a simple Nash equilibrium strategy to cast and audit.
Secondly, the authors of [6] implicitly assume that "sensible strategies" equals "simple Nash equilibrium strategies." As we discuss in Section 5, Nash equilibrium is not the only concept of rationality that can be applied here. In fact, Stackelberg equilibrium [23, 22] is arguably a better fit for the analysis of Benaloh
challenge. Following the observation, we prove that generalized Stackelberg equilibrium [15] for the voter in the set of randomized strategies does not exist, but \(V\) can get arbitrarily close to the upper limit of the Stackelberg payoff function. Moreover, there is always a Bernoulli strategy for the voter whose Stackelberg value is higher than the payoff in Nash equilibrium. In sum, Stackelberg games better capture rational interaction in Benaloh challenge, provide the voter with simple strategies, and obtain higher payoffs for \(V\) than Nash equilibria.
## 3 Intermezzo: Game Theory Primer, Part One
Here, we present a compressed summary of the relevant game-theoretic notions. For a detailed introduction, see e.g. [18, 20].
**Strategic games.** A _strategic game_ consists of a finite set of _players_ (or _agents_), each endowed with a finite set of _actions_. A tuple of actions, one per player, is called an _action profile_. The _utility function_\(u_{i}(\alpha_{1},\ldots,\alpha_{n})\) specifies the _utility_ (often informally called the _payoff_) that agent \(i\) receives after action profile \((\alpha_{1},\ldots,\alpha_{n})\) has been played. In the simplest case, we assume that each player plays by choosing a single action. This kind of choice represents a _deterministic strategy_ (also called _pure strategy_) on the part of the agent.
The payoff table of an example strategic game is shown in Figure 2. Two players, Alice and Bob, decide in parallel whether to go to the local bar or to the theater. The strategies and utilities of Bob are set in grey for better readability.
**Rationality assumptions.** The way rational players choose their behaviors is captured by _solution concepts_, formally represented by a subset of strategies or strategy profiles. In particular, _Nash equilibrium (NE)_ selects those strategy profiles \(\sigma\) which are stable under unilateral deviations, i.e., no player \(i\) can improve its utility by changing its part of \(\sigma\) while the other players stick to their choices. Equivalently, \(\sigma\) is a Nash equilibrium if each \(\sigma_{i}\) is a best response to the choices of the other players in \(\sigma\). In our example, _(theater,theater)_ is the only Nash equilibrium. Another solution concept (Stackelberg equilibrium) will be introduced in Section 5.1.
Figure 2: A variation on the Battle of the Sexes game. The only Nash equilibrium is indicated by the black frame. Stackelberg equilibrium for Alice is set on yellow background. The players’ best responses to the opponent’s strategies are underlined
Figure 3: Multi-step Battle of the Sexes. The initial state is filled with yellow, and terminal states with black. Transitions corresponding to dominated choices are shown in grey
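As a toy illustration of the definition, the sketch below brute-forces the pure Nash equilibria of a 2x2 game. The concrete numbers are hypothetical stand-ins (the payoffs of Figure 2 are not reproduced in the text); they are chosen only so that _(theater,theater)_ comes out as the unique pure equilibrium, as stated above.

```python
from itertools import product

actions = ["bar", "theater"]
# Hypothetical payoffs (u_Alice, u_Bob); rows = Alice's action, columns = Bob's action.
payoff = {
    ("bar", "bar"):         (3, 1),
    ("bar", "theater"):     (0, 2),
    ("theater", "bar"):     (1, 0),
    ("theater", "theater"): (2, 3),
}

def is_pure_nash(a, b):
    """No player can gain by a unilateral deviation from the profile (a, b)."""
    ua, ub = payoff[(a, b)]
    alice_ok = all(payoff[(a2, b)][0] <= ua for a2 in actions)
    bob_ok = all(payoff[(a, b2)][1] <= ub for b2 in actions)
    return alice_ok and bob_ok

print([profile for profile in product(actions, actions) if is_pure_nash(*profile)])
# -> [('theater', 'theater')]
```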
**Multi-step games.** To model multi-step interaction, we use _concurrent extensive form games_, i.e., game trees where the players proceed in rounds, and choose their actions simultaneously in each round. The agents' payoffs are defined for each _play_, i.e., maximal path from the root to a leaf of the tree. A multi-step variant of the Battle of the Sexes, where Alice and Bob first veto-vote on whether to go out and then decide on where to go, is shown in Figure 3. In such games, a deterministic strategy of player \(i\) is a conditional plan that maps the nodes in the tree to \(i\)'s actions. Each strategy profile determines a unique play.
Nash equilibrium is defined analogously to strategic games. Additionally, \(\sigma\) is a _subgame-perfect Nash equilibrium (SPNE)_ if it is a Nash equilibrium in each subtree obtained by fixing another starting point for the game. _Backward induction_ eliminates choices that are _weakly dominated_, i.e., ones for which there is another choice obtaining a better vector of payoffs. Backward induction preserves subgame-perfect Nash equilibria, and can be used to reduce the game tree if the agents are assumed to play SPNE. For example, Alice's strategy _bar_ obtains the payoff vector \((3,1)\), while _theater_ obtains \((4,2)\). Thus, the former strategy is dominated by the latter, and can be removed from the game tree.
**Randomized play.** Randomization makes it harder for the opponents to predict the player's next action, and to exploit the prediction. Moreover, Nash equilibrium is guaranteed to exist for randomized strategy profiles (Nash's theorem [17]), whereas no such guarantee applies to pure strategies. In multi-step games, players can randomize in two ways. A _mixed strategy_ for player \(i\) is represented by a probability distribution over the pure strategies of \(i\), with the idea that the player randomizes according to that distribution, and then duly executes the selected multi-step strategy. A _behavioral strategy_ assigns to each game node a probability distribution over the _actions_ of \(i\), with the idea that \(i\) randomizes freshly before each subsequent move. By Kuhn's theorem, every mixed strategy has an outcome-equivalent behavioral strategy [14] and vice versa [12] in games with perfect recall (i.e., ones where players never forget what they have observed). Note that deterministic strategies can be seen as a special kind of randomized strategies that use only Dirac distributions, i.e., \(s_{i}(\alpha)=1\). In that case we will write \(s_{i}=\alpha\) as a shorthand.
## 4 Benaloh According to Nash
In this section, we look closer at the claims of [6].
### Deterministic Audit Strategies in Benaloh Games
The first claim of Culnane and Teague is that Benaloh games have no Nash equilibrium where the voter plays deterministically [6, Lemma 1]. This is indeed
true. To see that, consider any strategy profile \((n_{cast},s_{D})\) where \(V\) deterministically chooses a round \(n_{cast}\) to cast her vote, and \(D\) chooses \(n_{cheat}\) according to probability distribution \(s_{D}\). If \(s_{D}\neq n_{cast}\), then the device increases its payoff by responding with \(s_{D}=n_{cast}\), i.e., cheating with probability \(1\) at round \(n_{cast}\); hence, \((n_{cast},s_{D})\) is not a Nash equilibrium. Conversely, if \(s_{D}=n_{cast}\), then the voter increases her payoff by changing her mind and casting one round earlier, at \(n_{cast}-1\) (if \(n_{cast}>1\)), or one round later, at \(n_{cast}+1\) (otherwise); hence \((n_{cast},n_{cast})\) is not a Nash equilibrium either.
Ultimately, \(V\) must use randomized strategies, so that \(D\) cannot precisely predict in which round the vote will be cast.
### The Rise and Fall of Backward Induction
Now, we turn to randomized voting strategies in Benaloh games with finite horizon \(n_{max}\). It was claimed in [6, proof of Lemma 2] that all \(V\)'s strategies where the voter does not cast immediately cannot be part of a Nash equilibrium. The argument goes by backward induction: \(D\) knows that \(V\) must cast in round \(n=n_{max}\), so it can safely cheat in that round. Thus, the voter should cast in rounds \(1,\ldots,n_{max}-1\) to avoid being cheated, in which case the device can actually safely cheat in round \(n_{max}-1\), and so on. Unfortunately (or fortunately from the voters' point of view), the argument is incorrect.
To begin with, backward induction _cannot_ be applied to games in strategic form nor to inspection games; it requires a proper representation of the sequential nature of the game. We propose the concurrent EF game in Figure 4 as a model of
Figure 4: Game tree for Benaloh challenge. \(V\)’s payoffs are in black, \(D\)’s payoffs in red
Benaloh challenge with horizon \(n_{max}\). Each level in the game tree corresponds to a subsequent round of the game. The players choose their actions simultaneously; if \(V\) casts, or \(V\) audits and \(D\) submits false encryption, then the game ends and the payoffs are distributed. If \(V\) audits and \(D\) encrypts truthfully, the game proceeds to the next round. At \(n=n_{max}\), the voter can only cast.
Let us start with the final round of the procedure (i.e., the lowest level in the tree). \(D\) has two available choices: _true_ and _false_, promising the payoffs \(0\) and \(\mathit{Succ}_{D}\), respectively. Indeed, the choice to encrypt truthfully is dominated and can be removed from the tree, leaving only the right-hand branch. We can also propagate the payoffs from the remaining leaf to its parent (i.e., \(-(n_{max}-1)c_{audit}-\mathit{Fail}_{V}\) for \(V\), and \(\mathit{Succ}_{D}\) for \(D\)).
Consider now the second-to-last level of the tree. Again, the device has two choices: _true_ promising \(0\), and _false_ promising \(\mathit{Succ}_{D}\). It is easy to see that neither of them dominates the other: _false_ works strictly better if the opponent decides to cast, whereas _true_ obtains a better payoff if the opponent audits. The voter also has two available choices: \(cast\), with the payoff vector \(\big(-(n_{max}-2)c_{audit}+\mathit{Succ}_{V},\;-(n_{max}-2)c_{audit}-\mathit{Fail}_{V}\big)\) against \(D\)'s _true_ and _false_ respectively, and \(audit\), with \(\big(-(n_{max}-1)c_{audit}-\mathit{Fail}_{V},\;-(n_{max}-1)c_{audit}\big)\). Clearly, the former vector obtains a better payoff in the first dimension, but a strictly worse one in the second. Thus, no choice of the voter is dominated. Since we cannot eliminate any choices, the backward induction stops already at that level.
Why is the intuitive argument in [6] wrong? After all, if the voter assigns a positive probability \(p\) to auditing in the round \(n_{max}-1\), she knows she will be cheated (in the final round) with exactly that probability. The problem is, if she sets \(p=0\), she is sure to get cheated right away! Thus, the voter should use \(p\) to keep the opponent uncertain about her current action, which is the usual purpose of randomizing in strategies.
### Mixed Nash Equilibria in Finite Benaloh Games
We know from Section 4.2 that backward induction does _not_ eliminate randomized audit strategies in finite Benaloh games. The next question is: what Nash equilibria do we obtain? We start with _mixed strategies_, i.e., ones represented by probability distributions \(s_{V}=[p_{1}^{V},\cdots,p_{n_{max}}^{V}]\) and \(s_{D}=[p_{1}^{D},\cdots,p_{\infty}^{D}]\), where \(p_{n}^{V}\) is the probability that the voter casts her vote in round \(n\), and \(p_{n}^{D}\) is the probability that the device cheats for the first time in round \(n\).
**Support sets of Nash strategies.** First, observe that there are no subgames outside of the main path in the game tree. Thus, all Nash equilibria are subgame perfect. Moreover, backward induction eliminates the possibility that the device encrypts truthfully in the last round, hence \(p_{\infty}^{D}=0\) in any Nash equilibrium. Consequently, we can represent \(s_{D}\) by \([p_{1}^{D},\cdots,p_{n_{max}}^{D}]\).
Secondly, all the other probabilities must be nonzero, see the following lemma.2
Footnote 2: The proofs of the formal results can be found in Appendix A.
Lemma 1: _If \(s_{V}=[p_{1}^{V},\cdots,p_{n_{max}}^{V}]\) and \(s_{D}=[p_{1}^{D},\cdots,p_{n_{max}}^{D}]\) form a Nash equilibrium, then for all \(i=V,D\) and \(n=1,\ldots,n_{max}\) we have \(p_{n}^{i}>0\)._
**Calculating the audit probabilities.** We compute \(p_{1}^{V},\ldots,p_{n_{max}}^{V}\) using the standard necessary condition for Nash equilibrium in mixed strategies [18, Lemma 33.2]. If \((s_{V},s_{D})\) is a Nash equilibrium with \(p_{n}^{V}>0\) and \(p_{n}^{D}>0\) for all \(n=1,\ldots,n_{max}\), then the following conditions must hold:
1. Every deterministic strategy of \(V\) obtains the same payoff against \(s_{D}\), in other words: \(\forall n_{cast},n_{cast}^{\prime}\in\{1,\ldots,n_{max}\}\). \(u_{V}(n_{cast},s_{D})=u_{V}(n_{cast}^{\prime},s_{D})\)
2. Every deterministic strategy of \(D\) obtains the same payoff against \(s_{V}\), in other words: \(\forall n_{cheat},n_{cheat}^{\prime}\in\{1,\ldots,n_{max}\}\). \(u_{D}(s_{V},n_{cheat})=u_{D}(s_{V},n_{cheat}^{\prime})\)
Consider condition (2). Using the payoffs in Figure 1, we get:
Lemma 2: _If \(s_{V}=[p_{1}^{V},\cdots,p_{n_{max}}^{V}]\) is a part of Nash equilibrium then \(p_{n+1}^{V}\ =\ \frac{\mathit{Succ}_{D}}{\mathit{Succ}_{D}+\mathit{Fail}_{D}}\ p_{n}^{V}\) for every \(n\in\{1,\ldots,n_{max}-1\}\)._
Theorem 1: _The mixed voting strategy \(s_{V}=[p_{1}^{V},\cdots,p_{n_{max}}^{V}]\) is a part of Nash equilibrium iff, for every \(n\in\{1,\ldots,n_{max}\}\):_
\[p_{n}^{V}\ =\ \frac{(1-R)R^{n-1}}{1-R^{n_{max}}},\qquad\text{where }R=\frac{\mathit{Succ}_{D}}{\mathit{Succ}_{D}+\mathit{Fail}_{D}}.\]
Indeed, the mixed equilibrium strategy \(s_{V}\) provides no _simple_ recipe for the voter. This is evident when we consider concrete payoff values.
Example 1: Take \(n_{max}=5\) and assume \(\mathit{Succ}_{D}=1,\mathit{Fail}_{D}=4\), i.e., the opponent fears failure four times more than he values success. Then, \(R=0.2\), and hence \(s_{V}=[0.8,0.16,0.032,0.006,0.001]\) is the unique equilibrium strategy for the voter. In other words, the voter should cast immediately with probability \(0.8\), audit once and cast in round \(2\) with probability \(0.16\), and so on.
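A quick numerical sanity check of this characterization on the values of Example 1 is sketched below, using the payoff reconstruction from Section 2.2: against \(s_{V}\), every cheating round should give the device the same expected payoff (otherwise \(D\) would deviate, and \(s_{V}\) could not be part of an equilibrium).

```python
Succ_D, Fail_D, n_max = 1.0, 4.0, 5
R = Succ_D / (Succ_D + Fail_D)                     # = 0.2
p_V = [(1 - R) * R**(n - 1) / (1 - R**n_max) for n in range(1, n_max + 1)]

def u_D(n_cast, n_cheat):
    """Device payoff, as reconstructed from Figure 1."""
    if n_cast < n_cheat:
        return 0.0                                 # the device never cheated
    if n_cast == n_cheat:
        return Succ_D                              # the manipulated vote was cast
    return -Fail_D                                 # the cheat was caught by an audit

print("s_V =", [round(p, 3) for p in p_V])         # ~ [0.8, 0.16, 0.032, 0.006, 0.001]
for n_cheat in range(1, n_max + 1):
    eu = sum(p_V[n_cast - 1] * u_D(n_cast, n_cheat) for n_cast in range(1, n_max + 1))
    print(n_cheat, round(eu, 6))                   # the same value for every n_cheat
```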
### Towards Natural Audit Strategies
So far, we have considered _mixed strategies_ for the voter. That is, the voter draws \(n_{cast}\) before the game according to the probability distribution \(s_{V}\), and then duly follows the outcome of the draw. An alternative is to use a _behavioral strategy_\(b_{V}=(b_{1}^{V},\ldots,b_{n_{max}}^{V})\), where the voter does a _fresh_ Bernoulli-style lottery with probability of success \(b_{n}^{V}\) in each subsequent round. If successful, she casts her vote; otherwise, she audits and proceeds to the next round.
**Behavioral Nash equilibria.** First, we observe that the game in Figure 4 is a game of _perfect recall_, i.e., the players remember all their past observations
(in our case, the outcomes of all the previous rounds). Thus, by Kuhn's theorem, mixed and behavioral strategies are outcome-equivalent. In other words, the same outcomes can be obtained if the players randomize before the game or throughout the game. Below, we characterize the behavioral strategy that corresponds to the mixed strategy of Theorem 1.
**Theorem 2**: _The behavioral voting strategy \(b_{V}=[b_{1}^{V},\cdots,b_{n_{max}}^{V}]\) is a part of Nash equilibrium iff, for every \(n\in\{1,\ldots,n_{max}\}\):_
\[b_{n}^{V}\ =\ \frac{1-R}{1-R^{n_{max}-n+1}},\qquad\mbox{where }R=\frac{\mbox{Succ}_{D}}{ \mbox{Succ}_{D}+\mbox{Fail}_{D}}.\]
Example 2: The behavioral strategy implementing \(s_{V}=[0.8,0.16,0.032,0.006,0.001]\) of Example 1 is \(b_{V}=[0.8,0.801,0.81,0.83,1]\). That is, the voter casts immediately with probability \(0.8\), else audits, randomizes again, and casts with probability \(0.801\), and so on.
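The conversion behind Example 2 is just conditional probability: \(b_{n}^{V}\) is the probability of casting in round \(n\) given that the game has actually reached round \(n\). A small sketch of this conversion is given below (the helper name is mine).

```python
def mixed_to_behavioral(p_V):
    """b_n = P(cast in round n | rounds 1..n-1 were audited)."""
    b, reach = [], 1.0
    for p in p_V:
        b.append(p / reach)        # probability of casting now, given that we got here
        reach -= p                 # probability of still auditing after this round
    return b

R, n_max = 0.2, 5                  # values of Example 1
p_V = [(1 - R) * R**(n - 1) / (1 - R**n_max) for n in range(1, n_max + 1)]
print([round(b, 3) for b in mixed_to_behavioral(p_V)])
# -> [0.8, 0.801, 0.806, 0.833, 1.0], i.e., Example 2 up to rounding
```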
**Behavioral audit strategies are reasonably simple.** At first glance, the above behavioral strategy seems difficult to execute, too. We cannot expect the voter to randomize with probability _exactly_ \(0.8\), then _exactly_ \(0.801\), etc. On the other hand, \(b_{V}\) can be approximated reasonably well by the following recipe: "in each round before \(n_{max}\), cast with probability close to \(0.8\), otherwise audit, randomize freshly, and repeat; in the last round, cast with probability \(1\)." This can be generalized due to the following observation.
In Benaloh games, we can usually assume that \(\mbox{Fail}_{D}\gg\mbox{Succ}_{D}\). First of all, it is important to realize that the opponent of the voter is not the encrypting device, but a human or organizational perpetrator represented by the device. To be more precise, the strategies in the game are defined by the capabilities of the device, but the incentives are those of the perpetrator. Thus, the utility values defined by \(u_{D}\) should not be read as "the payoffs of the device," but rather as the utilities of the external party who rigged the device in order to achieve some political, social, or economic goals. Secondly, the scope of the opponent's activity is not limited to the interaction with a single voter and to corrupting a single encryption device. Presumably, they must have tampered with multiple devices in order to influence the outcome of the vote. Consequently, the opponent is in serious trouble if even a few devices are caught cheating. This is likely to attract attention and trigger an investigation, which may lead to an audit of all the encryption devices, revision or voiding of the votes collected from those that turned out corrupt, and even the arrest and prosecution of the perpetrator. All in all, the penalty for fraud detection (\(\mbox{Fail}_{D}\)) is usually much higher than the reward for a successful swap of a single vote (\(\mbox{Succ}_{D}\)).
**Theorem 3**: _If \(\frac{\mbox{Succ}_{D}}{\mbox{Fail}_{D}}\to 0\), then the equilibrium strategy \(b_{V}\) of the voter converges to the following behavioral strategy:_
\[\widehat{b_{n}^{V}}\ =\ \left\{\begin{array}{ll}\frac{\mbox{Fail}_{D}}{\mbox{Succ}_{D }+\mbox{Fail}_{D}}&\mbox{for }n<n_{max}\\ 1&\mbox{for }n=n_{max}\end{array}\right.\]
The finite Bernoulli-style strategy of casting with probability \(\frac{\mathit{Fail}_{D}}{\mathit{Succ}_{D}+\mathit{Fail}_{D}}\) (equivalently, auditing with probability \(R=\frac{\mathit{Succ}_{D}}{\mathit{Succ}_{D}+\mathit{Fail}_{D}}\)) in each round except the last seems reasonably simple. By Theorem 3, it is also reasonably close to the unique Nash equilibrium.
**Making things even simpler for the voter.** In order to make Benaloh challenge even easier to use, the voting authority can set \(n_{max}\) accordingly. In particular, it can fix \(n_{max}=2\), i.e., allow the voter to audit at most once. That does not seem very restrictive, as empirical evidence suggests that voters seldom audit their votes [24, 1, 7], and even fewer complete the audit correctly [24, 1, 10].3 The Benaloh game in strategic form for \(n_{max}=2\) is shown in Figure 5a.
Footnote 3: In fairness, there is also some evidence that suggests the contrary [9, Section 5.6.1].
**Theorem 4**: _For \(n_{max}=2\), the behavioral NE strategy of the voter is:_
\[b_{1}^{V}\ =\ \frac{\mathit{Succ}_{D}+\mathit{Fail}_{D}}{2\mathit{Succ}_{D}+ \mathit{Fail}_{D}},\qquad\qquad b_{2}^{V}\ =\ 1.\]
To make the analysis intuitive, consider the concrete values in Example 1.
Example 3: Take \(\mathit{Succ}_{D}=1,\mathit{Fail}_{D}=4\). By Theorem 4, the behavioral Nash equilibrium strategy of the voter is \(b_{V}=[\frac{5}{6},1]\). That is, the voter casts immediately with probability \(\frac{5}{6}\), otherwise audits and casts in the next round - which is a rather simple strategy.
Also, recall our argument that, typically, \(\mathit{Fail}_{D}\gg\mathit{Succ}_{D}\). In that case, \(b_{1}^{V}\) becomes close to \(1\). In other words, the voter should _almost always_ cast immediately, which is a very simple recipe to follow. Thus, contrary to what Culnane and Teague claim in [6], Benaloh challenge can be designed in a way that admits simple Nash equilibrium strategies of the voter.
### Behavioral Audit Strategies are Simple Enough, But Are They Good Enough?
We have just seen that finite Benaloh games do allow for simple and easy to use Nash equilibrium strategies. This seems good news, but what kind of utility do they promise for the voter? That is, how much will the voter benefit from playing NE in Benaloh challenge? For easier reading, we calculate the answer on our running example.
Figure 5: Benaloh game for \(n_{max}=2\): (a) parameterized payoff table; (b) concrete payoff table for the values of Example 4
Example 4: Following Example 3, we take \(n_{max}=2,\mathit{Succ}_{D}=1,\mathit{Fail}_{D}=4\). Moreover, we assume \(\mathit{Succ}_{V}=2,\mathit{Fail}_{V}=3,c_{audit}=1\), i.e., the voter loses slightly more by getting cheated than she gains by casting successfully, and the cost of an audit is half of the gain from a successful vote. The resulting payoff table is presented in Figure 5(b).
We can now compute the Nash equilibrium strategy of the device using Lemma 1 and Condition 1 of Section 4.3. Consequently, we get \(-3p_{1}^{D}+2(1-p_{1}^{D})=-p_{1}^{D}-4(1-p_{1}^{D})\), and thus \(s_{D}=[\frac{3}{4},\frac{1}{4}]\). Recall that the NE strategy of the voter is \(s_{V}=[\frac{5}{6},\frac{1}{6}]\). This yields the following expected payoffs of the players:
\[u_{V}(s_{V},s_{D})=-3\cdot\tfrac{15}{24}+2\cdot\tfrac{5}{24}-1\cdot\tfrac{3}{24}-4\cdot\tfrac{1}{24}=-\tfrac{7}{4}\]
\[u_{D}(s_{V},s_{D})=1\cdot\tfrac{15}{24}+0\cdot\tfrac{5}{24}-4\cdot\tfrac{3}{24}+1\cdot\tfrac{1}{24}=\tfrac{1}{6}\,.\]
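These expected utilities can be checked mechanically; the sketch below recomputes them from the \(2\times 2\) payoff table of Figure 5(b), with the entries reconstructed from the parameter values of Example 4.

```python
from fractions import Fraction as F

# Rows: V's pure strategies (cast in round 1, audit and then cast in round 2);
# columns: D's pure strategies (cheat in round 1, cheat in round 2).
# Entries (u_V, u_D) for Succ_V=2, Fail_V=3, c_audit=1, Succ_D=1, Fail_D=4.
payoff = [[(-3, 1), (2, 0)],
          [(-1, -4), (-4, 1)]]
s_V = [F(5, 6), F(1, 6)]      # voter's Nash strategy (Theorem 4 / Example 3)
s_D = [F(3, 4), F(1, 4)]      # device's Nash strategy computed above

Eu_V = sum(s_V[i] * s_D[j] * payoff[i][j][0] for i in range(2) for j in range(2))
Eu_D = sum(s_V[i] * s_D[j] * payoff[i][j][1] for i in range(2) for j in range(2))
print(Eu_V, Eu_D)             # -> -7/4 and 1/6
```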
So, the voter gets negative expected utility, and would be better off by not joining the game at all! If that is the case, then a considerate election authority should forbid electronic voting _not_ because there are no simple NE strategies to audit and vote, but because there is one and it is bad for the voter. The big question is: does Nash equilibrium really provide the right solution concept for rational interaction in Benaloh challenge? We discuss this in Section 5.
## 5 Benaloh According to Stackelberg
Nash equilibrium encodes a particular view of rational decision making. In this section, we discuss its applicability to Benaloh games, suggest that Stackelberg equilibrium is a much better match, and analyze Benaloh challenge through the lens of Stackelberg games.
### Game-Theoretic Intermezzo, Part Two
Every solution concept encodes its own assumptions about the nature of interaction between players and their deliberation processes. The assumptions behind Nash equilibrium in 2-player games can be characterized as follows [19]:
1. Alice and Bob have common belief that each of them plays best response to one another, and
2. Alice believes that Bob has an accurate view of her beliefs, and that Bob believes that Alice has an accurate view of his beliefs,
3....and analogously for Bob.
Alternatively, NE can be characterized as a local optimum of strategy search with mutual adaptations. Informally, it represents collective behaviors that can emerge when the agents play the game repeatedly, and adapt their choices to what they expect from the other agents. Thus, it captures the "organic" emergence of behavior through a sequence of strategy adjustments that leads to a point where nobody is tempted to change their strategy anymore.
Is Nash equilibrium the right concept of rationality for Benaloh games? Note that the characterizations of NE are inherently symmetric. In particular, they assume that both players are able to form accurate beliefs about each other's intentions. This is _not_ the case in Benaloh challenge. In line with the arguments of [6], the perpetrator has significant technological and motivational advantage over an average voter. For example, he can use opinion polls and statistical methods to get a good view of the voter's preferences. Even more importantly, machine learning techniques can be used to profile the frequencies with which the voter chooses to audit or cast. On the other hand, the voter has neither data nor resources to form accurate predictions w.r.t. the strategy of the encryption device. This seems pretty close to the Stackelberg model of economic interaction.
**Stackelberg equilibrium.**_Stackelberg games_[23, 22] represent interaction where the strategy of one player (called the _leader_) is known in advance by the other player (the _follower_). The follower is assumed to play best response to that strategy. The _generalized Stackelberg equilibrium (SE)_[15] prescribes the leader's strategy that maximizes the guaranteed payoff against the follower's best responses. We define and analyze SE for Benaloh games in Section 5.2.
### Pretty Good Strategies against Best Response
For simplicity, we assume that \(n_{max}=2\) throughout this section, i.e., the voter can audit the encryption at most once. Thus, the strategy of the voter can be represented by the probability \(p^{V}\) of casting the vote in the first round. Similarly, the strategy of the device can be represented by the probability \(p^{D}\) of cheating in the first round. We first establish \(D\)'s best response to any fixed \(p^{V}\) and the voter's guaranteed expected utility against best response. These can be formally defined as follows.
Definition 1: The _best response_ of \(D\), given \(V\)'s strategy represented by \(p^{V}\), returns those strategies \(p^{D}\) for which the expected value of \(u_{D}(p^{V},p^{D})\) is maximal:
\[BR_{D}(p^{V})=\operatorname*{argmax}_{p^{D}\in[0,1]}(Eu_{D}(p^{V},p^{D})).\]
Note that a best response always exists, though it does not have to be unique.
Definition 2: The _generalized Stackelberg equilibrium_ for \(V\) is defined as the strategy that maximizes \(V\)'s expected payoff against best response. In case of multiple best responses to some \(p^{V}\), we look at the worst case scenario.
\[SE_{V}=\operatorname*{argmax}_{p^{V}\in[0,1]}\inf_{p^{D}\in BR_{D}(p^{V})}(Eu _{V}(p^{V},p^{D})).\]
For randomized strategies of the leader, the Stackelberg equilibrium does not have to exist (cf. Example 5). To characterize the leader's abilities in such games, we propose the notion of _Stackelberg value_.
Definition 3: The _Stackelberg value_ for \(V\) is the expected guaranteed payoff that \(V\) can obtain against best response _in the limit_:
\[\mathit{SVal}_{V}=\sup_{p^{V}\in[0,1]}\inf_{p^{D}\in BR_{D}(p^{V})}(Eu_{V}(p^{ V},p^{D})).\]
Clearly, \(\mathit{SVal}_{V}\) is always well defined. Moreover, the game has a Stackelberg equilibrium if \(V\) obtains the Stackelberg value for some strategy. Finally, for each \(\epsilon>0\), the voter has a strategy that \(\epsilon\)-approximates the Stackelberg value, i.e., obtains at least \(\mathit{SVal}_{V}-\epsilon\) against best response.
Lemma 3: _The best response of the device to any fixed strategy of the voter is_
\[BR_{D}(p^{V})=\left\{\begin{aligned} & 0&\text{for }p^{V}<p^{V}_{ \textsc{NE}}\\ & 1&\text{for }p^{V}>p^{V}_{\textsc{NE}}\\ &\text{any }p^{D}\in[0,1]&\text{for }p^{V}=p^{V}_{ \textsc{NE}}\end{aligned}\right.\]
_where \(p^{V}_{\textsc{NE}}=\frac{\mathit{Succ}_{D}+\mathit{Fail}_{D}}{2\mathit{ Succ}_{D}+\mathit{Fail}_{D}}\) is the NE probability of casting in round 1._
Lemma 4: _The voter's expected utility against best response is:_
\[Eu_{V}(p^{V},BR_{D}(p^{V}))=\left\{\begin{aligned} & p^{V}\mathit{Succ}_{V}-(1-p^{V})(c_{audit}+ \mathit{Fail}_{V})&\text{for }p^{V}<p^{V}_{\textsc{NE}}\\ &-p^{V}\mathit{Fail}_{V}-(1-p^{V})c_{audit}&\text{for }p^{V}\geq p^{V}_{ \textsc{NE}}\end{aligned}\right.\]
Example 5: The graph of \(Eu_{V}(p^{V},BR_{D}(p^{V}))\) for the parameters in Example 4 (i.e., \(n_{max}=2,\mathit{Succ}_{D}=1,\mathit{Fail}_{D}=4,\mathit{Succ}_{V}=2,\mathit{ Fail}_{V}=3,c_{audit}=1\)) is depicted in Figure 6. It is easy to see that the function does not reach its optimum, and hence the optimal \(p^{V}\) against best response does not exist. Still, the strategies based on \(p^{V}\) being _slightly smaller_ than the Nash equilibrium strategy \(p^{V}_{\textsc{NE}}=\frac{5}{6}\) are quite attractive to the voter, since they obtain payoff that is both positive and strictly higher than the Nash payoff.
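The curve in Figure 6, and the Stackelberg value appearing in Theorem 5 below, can be reproduced directly from Lemmas 3 and 4. The sketch below evaluates the voter's payoff against best response on a few values of \(p^{V}\) for the parameters of Example 4; it is an illustration of the lemmas, not of any code from the paper.

```python
Succ_V, Fail_V, c_audit = 2.0, 3.0, 1.0
Succ_D, Fail_D = 1.0, 4.0
p_NE = (Succ_D + Fail_D) / (2 * Succ_D + Fail_D)        # = 5/6

def Eu_V_vs_BR(p):
    """Voter's expected payoff against the device's best response (Lemma 4)."""
    if p < p_NE:      # the device waits and cheats in round 2
        return p * Succ_V - (1 - p) * (c_audit + Fail_V)
    else:             # the device cheats immediately (worst case at p = p_NE)
        return -p * Fail_V - (1 - p) * c_audit

for p in [0.0, 0.5, 0.8, 0.83, 0.833, p_NE, 0.9, 1.0]:
    print(round(p, 4), round(Eu_V_vs_BR(p), 4))

# Stackelberg value (Theorem 5, point 2): approached as p -> p_NE from below
SVal = (Succ_D * (Succ_V - Fail_V - c_audit) + Fail_D * Succ_V) / (2 * Succ_D + Fail_D)
print("SVal_V =", SVal)       # -> 1.0, strictly above the Nash payoff of -7/4
```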
The next and final theorem generalizes the example to arbitrary two-round Benaloh games. It shows that the voter has no optimal Stackelberg strategy in
Figure 6: \(V\)’s payoffs against best response for the Benaloh game in Figure 5(b). The voter’s payoff obtained by Nash equilibrium is shown for comparison
the game (point 1), but the value of \(\mathit{SVal}_{V}=\frac{\mathit{Succ}_{D}(\mathit{Succ}_{V}-\mathit{Fail}_{V}-c _{\mathit{audit}})+\mathit{Fail}_{D}\mathit{Succ}_{V}}{2\mathit{Succ}_{D}+ \mathit{Fail}_{D}}\) can be approximated arbitrarily closely (point 2). That is, for each \(\epsilon>0\), the voter has a strategy that obtains at least \(\mathit{SVal}_{V}-\epsilon\) against best response. Moreover, \(\epsilon\)-approximating Stackelberg equilibrium is strictly better than playing Nash equilibrium (point 3). Lastly, approximate Stackelberg strategies obtain positive utility for the voter under reasonable assumptions (point 4).
**Theorem 5**: _The following properties hold for the Benaloh game with \(n_{max}=2\):_
1. _There is no Stackelberg equilibrium for_ \(V\) _in randomized strategies._
2. _The Stackelberg value of the game is_ \(\mathit{SVal}_{V}=\frac{\mathit{Succ}_{D}(\mathit{Succ}_{V}-\mathit{Fail}_{V }-c_{\mathit{audit}})+\mathit{Fail}_{D}\mathit{Succ}_{V}}{2\mathit{Succ}_{D}+ \mathit{Fail}_{D}}\)_._
3. \(\mathit{SVal}_{V}>Eu_{V}(p^{V}_{\mathit{NE}},p^{D}_{\mathit{NE}})\)_, where_ \((p^{V}_{\mathit{NE}},p^{D}_{\mathit{NE}})\) _is the Nash equilibrium._
4. _If_ \(\mathit{Fail}_{D}\gg\mathit{Succ}_{D}\) _and_ \(\mathit{Succ}_{V}\geq a\mathit{Fail}_{V}\) _for a fixed_ \(a>0\)_, then_ \(\mathit{SVal}_{V}>0\)_._
Thus, Stackelberg games capture the rational interaction in Benaloh games better than Nash equilibrium, and predict strictly higher payoffs for the voter.
## 6 Conclusions, or What Do We Learn from That?
In this paper, we analyze a simple game-theoretic model of incentives in Benaloh challenge, inspired by [6]. Contrary to [6], we conclude that the voters have at their disposal simple strategies to audit and cast their votes. This is especially the case if encryption audits are limited to at most one audit per voter. In that event, a pretty good strategy for the voter is to almost always (but not _exactly_ always!) cast immediately in the first round. Interestingly, this is how voters usually behave in real-life elections, according to empirical evidence.
Moreover, we point out that rational interaction in Benaloh games is better captured by Stackelberg equilibrium than by Nash equilibrium. While the optimal Stackelberg strategy is not attainable for the voter, it can be approximated arbitrarily closely by casting the vote immediately with probability _slightly lower_ than for the Nash equilibrium. This is good news, because Stackelberg strategies (even approximate) promise strictly better payoffs for the voter than Nash strategies. And, under reasonable assumptions, they produce positive utility for \(V\). Thus, using Benaloh challenge _is_ beneficial to the voter, after all.
The takeaway advice based on this study can be summarized as follows:
1. Using Benaloh challenge is practical and beneficial to the rational voter.
2. Putting a strict limit on the number of allowed audits makes things easier for the voter. The election authority might design the voting system so that each voter can audit the vote encryption at most once.
3. The voters should not try to adapt to the strategy of the attacker, the way Nash equilibrium prescribes. Instead, they should stick to auditing the votes with a fixed (and rather low) frequency, thus approximating the Stackelberg optimum and putting the opponent on the defensive.
**Discussion and future work.** An obvious limitation of the current study is the assumption of _complete information_ about the structure of the game. In particular, it is dubious to assume that the voter knows how much the adversary values the outcomes of the game. In the future, we plan to extend the analysis to an incomplete information game model of Benaloh challenge, e.g., in the form of a Bayesian game [11].
Moreover, the analysis in this paper is performed as a 2-player game between a single voter and the voter's device. It would be interesting to see how this extends to scenarios where the adversary controls multiple devices and plays multiple rounds with different voters. Last but not least, the players' payoffs for either failing or succeeding need further discussion. In particular, we assume that the costs of failure for the opponent are much higher than the benefits of success; this should be better justified or refuted.
**Acknowledgments.** The author thanks Stanislaw Ambroszkiewicz, Peter B. Roenne, Peter Y.A. Ryan, and the anonymous reviewers of E-VOTE-ID for their valuable comments, suggestions, and discussions. The work has been supported by NCBR Poland and FNR Luxembourg under the PolLux/FNR-CORE projects STV (POLLUX-VII/1/2019 and C18/IS/12685695/IS/STV/Ryan), SpaceVote (POLLUX-XI/14/SpaceVote/2023 and C22/IS/17232062/SpaceVote) and PABLO (C21/IS/16326754/PABLO).
2305.15243 | Photonic Time Crystals and Parametric Amplification: similarity and distinction | Jacob B Khurgin | 2023-05-24T15:29:29Z | http://arxiv.org/abs/2305.15243v2
## Photonic Time Crystals and Parametric Amplification: similarity and distinction
_Jacob B Khurgin_
Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218, USA E-mail [email protected]
**Abstract:** Photonic Time crystals (PTC) arise in time-modulated media when the frequency of modulation of permittivity is on the order of twice the frequency of light and are manifested by the generation and amplification of so-called "time reversed" waves propagating in the direction opposite to the incoming light. Superficially, the observed phenomenon bears resemblance to the widely known phenomena of optical parametric generation (OPG) and amplification (OPA) using second or third order optical nonlinearities. I show that while indeed the same physical mechanism underpins both PTC and OPA, the difference arises from the boundary conditions. Thus, while dispersion for both PTC and OPA exhibit the same bandgap in momentum space, only in the case of PTC can one have propagation in that bandgap with exponential amplification. I also show that PTC can be engineered with both second and third order nonlinearities, and that rather unexpectedly, modulating permittivity on the ultrafast (few fs) rate is not a necessity, and that one can emulate all the PTC features using materials with a few picoseconds response time commensurate with the propagation time through the medium.
## 1 Introduction
Diverse phenomena associated with light propagation in materials with temporally modulated optical properties have moved into the focus of attention of the photonic community in recent years [1-5]. One can argue that exploring the fourth, temporal dimension is a natural extension of the developments in the research on three and two dimensional spatially modulated metamaterials and photonic crystals. Alternatively, one can say that photonics offers an ideal testing platform for exploring temporal modulation effects previously studied theoretically, such as the formation of Floquet crystals [6]. Last, but not least, it is in the optical domain that time modulated phenomena may segue into practical applications, from optical isolators [7, 8] to generating entangled photons. At this time, time modulated phenomena such as time reflection/refraction, a precursor to the subject of this essay, photonic time crystals (PTC) [5, 9, 10], have been demonstrated in the microwave domain [11], while progress in the optical domain has been so far impeded by the difficulty of achieving fast (on the scale of an optical period, i.e. a few femtoseconds) modulation of permittivity in available materials, despite significant progress achieved in relatively new
materials, transparent conductive oxides [5]. Yet with the optimistic but not entirely unreasonable expectation that sooner or later time reflection and time crystals in the optical domain will become feasible [5, 12], it would be logical to attempt to ascertain what, if any, benefits such developments will bring beyond what has already been achieved and utilized routinely.
Since the most (and perhaps the only) realistic way of achieving modulation of the permittivity on the femtosecond scale is by doing it with light, the time modulation phenomena must be closely related to the widely explored nonlinear optical phenomena based on second \(\chi^{(2)}\) and third \(\chi^{(3)}\) order nonlinear susceptibilities [13]. These phenomena are optical parametric generation (OPG) and amplification (OPA), four wave mixing (FWM), and phase conjugation (PC). The similarities have been duly noted, and the question is often posed of what the difference is between parametric processes and time modulated phenomena, especially PTC. In each of these phenomena a new (idler) wave is generated while the energy of the strong pump wave is transferred to that idler as well as to the original signal wave, so that both of them experience amplification. Some answers to those who doubt the novelty of PTC have been provided [10], but these answers have not been complete and quantitative, as, for instance, no fair comparison of either efficiency or bandwidth of PTC vs. that of OPA has been made in the literature. Similarly, since formation of PTC using \(\chi^{(3)}\) involves four photons, no detailed analysis of how it relates to FWM and PC [14] has been presented. Lastly, it is often pointed out [10] that, unlike in more conventional parametric processes, phase matching plays no role in the formation of PTC, which, with all due respect, is not correct, as the lack of phase matching in time rather than in space does negatively affect the efficiency of amplification in PTC.
In this exercise I attempt to clarify the issues outlined above and provide an answer to the question of what unites and what separates PTC and parametric processes by showing (using only elementary analytical derivations) that the main difference between the two is only in the boundary conditions. While energy is conserved in OPA/OPG, in PTC it is only momentum that stays unperturbed. The bandgap in momentum space arises in both OPA and PC, but energy conservation at the boundary prevents operation inside the bandgap. I also show that, when third order nonlinearity is used, many PTC features (bandgap and amplification) can be observed without ultrafast modulation of permittivity, by relying on a transient grating oscillating at the beat frequency between pump and signal.
To commence, one may consider the second order nonlinear parametric processes. Quite a few different geometries can be used to observe parametric phenomena, but to make a fair comparison with PTC, it would be best to focus on the backward OPA and OPG in which the signal and idler waves are counterpropagating and are phase conjugated. Backward OPA was first proposed by Harris as early as 1966 [15] and later modified by Ding et al. [16] for the case of transverse pumping and waveguide propagation, in which the phase matching requirements are relaxed and the propagation distance is sufficient to attain meaningful amplification. OPG in this scheme was experimentally demonstrated in 2006 [17]. This geometry can be adapted to operating in either the OPA or the PTC regime, and I chose it to perform a comparative analysis of the two. While the theory of backward OPA is straightforward and well-explored, I find it vital to provide a concise derivation in order to pinpoint the key distinction between OPA and PTC. This distinction is in the opposite signs of the temporal and spatial derivatives in the coupled equations, which leads to the oscillatory character of propagation in backward OPA vs. exponential growth in PTC.
As shown in Fig.1a, the waveguide made of material with nonzero second order susceptibility \(\chi^{(2)}\) supports a single mode at frequencies \(\omega_{1}\) (signal) and \(\omega_{2}\)(idler) propagating in opposite directions
\[E_{1,2}=A_{1,2}f(x,y)e^{i(\pm k_{1,2}z-\omega_{1,2}t)}+A_{1,2}^{*}f(x,y)e^{i(\mp k_{1,2}z+\omega_{1,2}t)} \tag{1}\]
where \(A_{1,2}\) are the amplitudes, \(f(x,y)\) is the normalized mode profile, the propagation constants are \(k_{1,2}=\omega_{1,2}n_{eff}/c\), and \(n_{eff}\) is the effective index. The pump wave propagates in the direction normal to \(z\), \(E_{p}=A_{3}e^{i(k_{3}x-\omega_{3}t)}+A_{3}^{*}e^{-i(k_{3}x-\omega_{3}t)}\). Without loss of generality one can take \(A_{3}\) to be real. As a result of parametric interaction with the signal, nonlinear polarization at the difference frequency \(\omega_{2}=\omega_{3}-\omega_{1}\) is generated,
\[P_{NL}(\omega_{2})=\varepsilon_{0}d_{eff}E_{p}E_{1}=\varepsilon_{0}d_{eff}A_{ 3}A_{1}e^{i(k_{2}z+\omega_{2}t)}f(x,y)e^{-ik_{3}x}+c.c., \tag{2}\]
where \(d_{eff}\) is the effective second order susceptibility that accounts for all tensor components of \(\chi^{(2)}\) for a given orientation. An important point here is that we consider the case when the time modulation is continuous in time (i.e. starting long before the signal wave first arrives at the boundary at z=0) but restricted in space to the extent of the waveguide between z=0 and z=L. Then energy conservation dictates that nonlinear polarization engender the backward propagating wave at the same frequency \(\omega_{2}\).
The nonlinear interaction between the signal wave and pump gives rise to nonlinear polarization at the signal frequency,
\[P_{NL}(\omega_{1})=\varepsilon_{0}d_{eff}E_{P}E_{2}=\varepsilon_{0}d_{eff}A_{3}A_{2}^{*}e^{i(k_{2}z-\omega_{1}t)}f(x,y)e^{ik_{3}x}+c.c. \tag{3}\]
One can now substitute nonlinear polarizations (2) and (3) into the wave equation
\[\frac{d^{2}E}{dz^{2}}-\frac{n^{2}}{c^{2}}\frac{d^{2}E}{dt^{2}}=\frac{1}{c^{2} \varepsilon_{0}}\frac{d^{2}P_{NL}}{dt^{2}} \tag{4}\]
Figure 1: (a) Nonlinear waveguide for observation of OPA The signal enters the waveguide after permittivity variation at frequency \(2\omega_{0}\) has been established by pump A\({}_{3}\). (b) variations of signal (P\({}_{1}\)) and idler (P\({}_{2}\)) along the length of waveguide. (c) Arrangement required for observation of PTC. Signal wave is already inside the waveguide by the time permittivity modulation commences. (d) Exponential growth of signal and idler waves in time.
The boundary conditions indicate that amplitudes \(A_{1,2}\) are only the functions of coordinate \(z\). Then, using a slow variable approach (neglecting second derivatives) and performing integration over transverse coordinates x and y, we obtain the usual set of coupled equations
\[\begin{split}\frac{dA_{1}}{dz}=&\,i\,\frac{\omega_{1}d_{\mathit{eff}}\,A_{3}}{2n_{\mathit{eff}}\,c}\,FA_{2}^{*}e^{i\,2\Delta kz}\\ \frac{dA_{2}^{*}}{dz}=&\,i\,\frac{\omega_{2}d_{\mathit{eff}}\,A_{3}}{2n_{\mathit{eff}}\,c}\,F^{*}A_{1}e^{-i\,2\Delta kz}\end{split} \tag{5}\]
where the spatial overlap factor is \(F=\iint f^{2}(x,y)e^{ik_{3}x}dxdy=\left|F\right|e^{i\varphi}\), the momentum mismatch is
\[2\Delta k=k_{2}-k_{1}=2n_{\mathit{eff}}\,\Delta\omega\,/\,c, \tag{6}\]
and \(\Delta\omega=\omega_{0}-\omega_{1}\), where \(\omega_{0}=\omega_{3}/2\) is the frequency of degenerate OPA. Normalizing the amplitudes to photon numbers and eliminating the imaginary numbers by introducing new variables \(a_{1}=A_{1}e^{i\varphi}\omega_{1}^{-1/2}\) and \(a_{2}=iA_{2}^{*}\omega_{2}^{-1/2}\), one arrives at the well-known set of coupled equations
\[\begin{split}\frac{da_{1}}{dz}=&\,\kappa a_{2}e^{i\,2\Delta kz}\\ \frac{da_{2}}{dz}=&\,-\kappa a_{1}e^{-i\,2\Delta kz}\end{split} \tag{7}\]
where the coupling coefficient is \(\kappa=\left(\omega_{1}\omega_{2}\right)^{1/2}|F|\,d_{\mathit{eff}}A_{3}/2n_{\mathit{eff}}c\).
Before proceeding further, it is worthwhile to note the opposite signs of the derivatives in the first and second equations in (7); clearly, this is the natural consequence of the signal and idler waves propagating in opposite directions in space. Consequently, the solution is harmonic and not exponential. I now proceed by introducing
\[a_{1}=b_{1}e^{i\Delta kz},\,a_{2}=b_{2}e^{-i\,\Delta kz}, \tag{8}\]
and substituting it into (7) end up with
\[\begin{split}\frac{db_{1}}{dz}+i\Delta kb_{1}&=\kappa b_{2}\\ \frac{db_{2}}{dz}-i\Delta kb_{2}&=-\kappa b_{1}\end{split} \tag{9}\]
Substituting the expected solution \(b_{1,2}\sim e^{i\beta z}\) yields the characteristic equation \(\beta^{2}=\kappa^{2}+\Delta k^{2}\), indicating that the propagation constant is always real. Applying the boundary conditions \(b_{1}(0)=A_{0}\), \(b_{2}(L)=0\), I obtain
\[\begin{array}{l}b_{1}=A_{0}\cos\beta z+\dfrac{\tan\beta L-i \dfrac{\Delta k}{\beta}}{1+i\dfrac{\Delta k}{\beta}\tan\beta L}A_{0}\sin\beta z \\ b_{2}=A_{0}\kappa^{-1}\left[-\beta+i\Delta k\dfrac{\tan\beta L-i \dfrac{\Delta k}{\beta}}{1+i\dfrac{\Delta k}{\beta}\tan\beta L}\right]\sin\beta z +A_{0}\kappa^{-1}\left[\beta\dfrac{\tan\beta L-i\dfrac{\Delta k}{\beta}}{1+i \dfrac{\Delta k}{\beta}\tan\beta L}+i\Delta k\right]\cos\beta z,\end{array} \tag{10}\]
indicating the oscillatory character of the amplitudes as shown in Fig. 1b.
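As a quick illustration (not part of the original derivation), the solution (10) can be evaluated numerically. The short Python sketch below uses hypothetical normalized parameters (the values of \(\kappa\), \(\Delta k\), \(L\) and \(A_{0}\) are arbitrary choices) and simply checks the boundary conditions \(b_{1}(0)=A_{0}\), \(b_{2}(L)=0\) while displaying the oscillatory z-dependence.

```python
import numpy as np

# Hypothetical illustrative parameters (not from the text): normalized units.
kappa = 1.0          # spatial coupling coefficient (1/mm)
dk = 0.3             # momentum mismatch Delta k (1/mm)
L = 1.0              # interaction length (mm)
A0 = 1.0             # input signal amplitude, b1(0) = A0

beta = np.sqrt(kappa**2 + dk**2)          # always real -> oscillatory solution
z = np.linspace(0.0, L, 200)

# Complex coefficient that appears in Eq. (10)
C = (np.tan(beta * L) - 1j * dk / beta) / (1.0 + 1j * (dk / beta) * np.tan(beta * L))

b1 = A0 * (np.cos(beta * z) + C * np.sin(beta * z))
b2 = (A0 / kappa) * ((-beta + 1j * dk * C) * np.sin(beta * z)
                     + (beta * C + 1j * dk) * np.cos(beta * z))

# Boundary checks: b1(0) = A0 and b2(L) vanishes
print(abs(b1[0] - A0) < 1e-12, abs(b2[-1]) < 1e-9)
print("signal power P1(L)/P1(0):", abs(b1[-1])**2 / A0**2)
print("idler  power P2(0)/P1(0):", abs(b2[0])**2 / A0**2)
```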
Let us now obtain the expression for the dispersion of the wavevector inside the nonlinear medium, \(k_{in}=k_{1}+\Delta k\pm\beta\), using (6) and (10)
\[ck_{1,2in}\ /\ n_{eff}=\omega_{0}\mp\sqrt{\kappa_{t}^{2}+\left(\omega_{0}- \omega_{1,2}\right)^{2}}\,, \tag{11}\]
where the temporal coupling coefficient is \(\kappa_{t}=c\kappa\ /\ n_{eff}=\left(\omega_{1}\omega_{2}\right)^{1/2}\ |\ F\ |\ d_{eff}\ A_{3}\ /\ 2n_{eff}^{2}\). The dispersion is plotted in Fig. 2, and the bandgap around \(k_{0}=\omega_{0}n_{eff}\ /\ c\), whose width is \(2\kappa\), can be clearly seen. Since the boundary conditions allow a change of momentum, light whose wavevector outside the time modulated region, \(k_{1}=\omega_{1}n_{eff}\ /\ c\), falls into the bandgap changes its momentum to \(k_{1,in}\) (11) when it enters the medium and then changes it back upon exit. The situation is quite different in the case of PTC.
**Figure 2** Dispersion of light inside the time modulated medium. For the OPA arrangement, the boundary conditions prevent light from being inside the bandgap, but for PTC it is allowed.
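For completeness, a minimal numerical sketch of the dispersion (11) is given below; the parameters (\(\omega_{0}\), \(\kappa_{t}\), and \(n_{eff}/c=1\)) are arbitrary normalized values chosen only to display the two branches and the gap of width \(2\kappa\) in k-space.

```python
import numpy as np

# Minimal sketch of the dispersion relation (11) in normalized units:
# omega_0 = 10, kappa_t = 1, n_eff/c = 1 (all hypothetical choices).
omega0, kappa_t = 10.0, 1.0
omega = np.linspace(omega0 - 5, omega0 + 5, 501)

k_lower = omega0 - np.sqrt(kappa_t**2 + (omega0 - omega)**2)   # branch below the gap
k_upper = omega0 + np.sqrt(kappa_t**2 + (omega0 - omega)**2)   # branch above the gap

# The gap in k-space spans (omega0 - kappa_t, omega0 + kappa_t) in these units
print(k_lower.max(), k_upper.min())
```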
**3. Time crystals in second order nonlinear materials**
A time crystal is formed when the time modulation commences only after the signal has already entered the nonlinear region, as shown in Fig. 1c. In this situation the wavevector must be conserved, and the forward wave \(E_{1}=A_{1}e^{i(kz-\omega t)}+c.c.\) can only be efficiently coupled into the counter-propagating wave \(E_{2}=A_{2}^{*}e^{i(kz+\omega t)}+c.c.\) when the pump wave at frequency \(\omega_{3}=2\omega_{0}\) is turned on. Once the pump is on, translational symmetry assures that the amplitudes of the waves are functions of time only. Hence, substituting the nonlinear polarization into the wave equation (4) and assuming that amplitude changes are small over an optical period, we obtain
\[\begin{split}\frac{dA_{1}}{dt}&=i\frac{\omega\,d_{eff}A_{3}}{2n_{eff}^{2}}FA_{2}^{*}e^{-i2\Delta\omega t}\\ \frac{dA_{2}^{*}}{dt}&=-i\frac{\omega\,d_{eff}A_{3}}{2n_{eff}^{2}}F^{*}A_{1}e^{i2\Delta\omega t}\end{split} \tag{12}\]
If one compares (12) with (5), one can see that the main difference is the change of sign in the second of the coupled equations. This is easily explained: in space, the two coupled waves propagate in opposite directions, while in time they obviously move in the same direction (one should not take the term "time reversed" literally), hence the time derivatives have the same sign. Note that the phase mismatch \(\Delta\omega=\omega_{0}-\omega\) now occurs in time rather than in space. Following the derivations of the previous section, i.e. introducing \(b_{1}=A_{1}\omega^{-1/2}e^{i\varphi-i\Delta\omega t}\) and \(b_{2}=iA_{2}^{*}\omega^{-1/2}e^{i\Delta\omega t}\), we obtain a new set of coupled equations
\[\begin{split}\frac{db_{1}}{dt}-i\Delta\omega b_{1}&=\kappa_{t}b_{2}\\ \frac{db_{2}}{dt}+i\Delta\omega b_{2}&=\kappa_{t}b_{1}\end{split} \tag{13}\]
where \(\kappa_{t}=c\kappa/n_{eff}=\omega\left|F\right|d_{eff}A_{3}/2n_{eff}^{2}\). It is easy to see that, due to the r.h.s. of both equations having the same sign, an exponentially growing solution becomes possible. Substituting \(b_{1,2}\sim e^{\gamma t}\) into (13) I obtain the characteristic equation \(\gamma^{2}=\kappa_{t}^{2}-\Delta\omega^{2}\). Therefore, as long as \(\Delta\omega<\kappa_{t}\), one has a solution that is a sum of exponentially increasing and decreasing waves, with the exponentially increasing one dominating.
With initial conditions \(b_{1}(0)=A_{0}\) and \(b_{2}(0)=0\) it follows that
\[\begin{split} b_{1}(t)&=A_{0}\left[\cosh\gamma t+i\frac{\Delta\omega}{\gamma}\sinh\gamma t\right]\\ b_{2}(t)&=A_{0}\frac{\kappa_{t}}{\gamma}\sinh\gamma t\end{split} \tag{14}\]
The complex frequencies of the two waves inside the gap can then be found as \(\widetilde{\omega}_{1,2,in}=\omega_{0}\pm i\gamma\).
When the wave is outside the bandgap, then \(\gamma=i\beta_{t}=i\sqrt{\Delta\omega^{2}-\kappa_{t}^{2}}\) and the solution is oscillatory,
\[\begin{split} b_{1}(t)&=A_{0}\left[\cos\beta_{t}t+i\frac{\Delta\omega}{\beta_{t}}\sin\beta_{t}t\right]\\ b_{2}(t)&=A_{0}\frac{\kappa_{t}}{\beta_{t}}\sin\beta_{t}t\end{split} \tag{15}\]
The fields of two counterpropagating waves then become
\[\begin{split} E_{1}(t)&=\frac{A_{0}}{2}\left[1+\frac{\Delta\omega}{\beta_{t}}\right]e^{i(kz-\omega_{0}t+\beta_{t}t)}+\frac{A_{0}}{2}\left[1-\frac{\Delta\omega}{\beta_{t}}\right]e^{i(kz-\omega_{0}t-\beta_{t}t)}+c.c.\\ E_{2}(t)&=\frac{A_{0}}{2}\frac{\kappa_{t}}{\beta_{t}}\left[e^{i(-kz+\omega_{0}t+\beta_{t}t)}-e^{i(-kz+\omega_{0}t-\beta_{t}t)}\right]+c.c.\end{split} \tag{16}\]
That means that, once inside the time modulated interval, the eigenmodes are superpositions of two waves with eigenfrequencies \(\omega_{1,2,in}=\omega_{0}\pm\sqrt{\left(\omega-\omega_{0}\right)^{2}-\kappa_{t}^{2}}\), which are real outside of the bandgap and complex inside of it. Note the important fact that, during the time interval when modulation takes place, one always has \(\omega_{1}+\omega_{2}=2\omega_{0}=\omega_{3}\), indicating that energy is conserved as it is transferred from a pump photon to signal and idler photons, a standard condition of all parametric processes. It is only at the temporal boundaries that the frequencies are not conserved: for instance, after the pump stops, both signal and idler revert to the original frequency \(\omega\). Whether one is inside (14) or outside (15) the bandgap, the relation \(|\,b_{1}\,|^{2}-|\,b_{2}\,|^{2}=A_{0}^{2}\) indicates photon number conservation [18]. Also note that, in the vicinity of \(\omega=\omega_{0}\pm\kappa_{t}\), exceptional points[19] occur, with plenty of well-publicized features of dubious practicality[20-22].
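The following short sketch (hypothetical normalized parameters) evaluates (14) and (15) for one detuning inside and one outside the bandgap and verifies the photon-number relation \(|b_{1}|^{2}-|b_{2}|^{2}=A_{0}^{2}\) numerically.

```python
import numpy as np

# Hypothetical illustrative parameters (normalized units): kappa_t = 1, A0 = 1.
kappa_t, A0 = 1.0, 1.0
T = np.linspace(0.0, 3.0, 300)

def ptc_amplitudes(d_omega, t):
    """Eqs. (14)/(15): inside the gap (|d_omega| < kappa_t) the growth is exponential,
    outside it is oscillatory."""
    if abs(d_omega) < kappa_t:
        g = np.sqrt(kappa_t**2 - d_omega**2)
        b1 = A0 * (np.cosh(g * t) + 1j * (d_omega / g) * np.sinh(g * t))
        b2 = A0 * (kappa_t / g) * np.sinh(g * t)
    else:
        bt = np.sqrt(d_omega**2 - kappa_t**2)
        b1 = A0 * (np.cos(bt * t) + 1j * (d_omega / bt) * np.sin(bt * t))
        b2 = A0 * (kappa_t / bt) * np.sin(bt * t)
    return b1, b2

for d_omega in (0.3, 2.0):            # one detuning inside the gap, one outside
    b1, b2 = ptc_amplitudes(d_omega, T)
    # Photon-number conservation |b1|^2 - |b2|^2 = A0^2 holds in both regimes
    print(d_omega, np.allclose(np.abs(b1)**2 - np.abs(b2)**2, A0**2))
```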
Let us now find the dispersion outside the bandgap. Since \(k=\omega n_{eff}/c\), I obtain
\[(\omega_{1,2,in}-\omega_{0})^{2}+\kappa_{t}^{2}=(kc\,/\,n_{\mathit{eff}}-\omega_{0} )^{2}, \tag{17}\]
and
\[kc\,/\,n_{\mathit{eff}}=\omega_{0}\pm\sqrt{\kappa_{t}^{2}+\left(\omega_{0}- \omega_{1,2,in}\right)^{2}} \tag{18}\]
which is of course _identical_ to the expression (11) for the dispersion in OPA, as shown in Fig. 2.
## 4 Significance of boundary conditions
To summarize what has been shown so far, the dispersion for a time modulated material looks the same whether one considers PTC or backward OPA, and energy conservation is maintained as pump photons split into signal and idler. So why, then, can PTC show exponential growth, while backward OPA cannot? To discern this, consider the difference in boundary/initial conditions. In Fig. 3a-c I show the development of amplification in OPA. In Fig. 3a the signal light is outside the modulated region; it has frequency \(\omega_{1}\) and wavevector \(k_{1}\) and is, as expected, situated on the straight line describing the unmodulated waveguide dispersion. Now, when the signal crosses the spatial boundary of the modulation region (shown in darker shade), the boundary condition calls for energy, i.e. frequency, conservation (the point \((\omega,k)\) can only move along a horizontal line). Hence, the momentum becomes \(k_{1,in}\), as shown in Fig. 3b. If the value of \(k_{1}\) corresponded to the bandgap in momentum space (as in the case shown in Fig. 3), it is now "pushed" outside of it. Inside the modulated region the idler wave with momentum \(-k_{2,in}\) is then generated. Both the signal and idler are amplified, and once they reach their respective boundaries (at z=L and z=0), the wavevectors revert to their original values \(k_{1}\) and \(k_{2}\), as shown in Fig. 3c.
Figure 3: Light propagation through the time modulated medium in the OPA arrangement. (a) signal wave is outside the dark shaded modulated region. (b) signal is inside the modulation region and its wavevector changes while frequency is conserved. A counterpropagating idler is generated. (c) Signal and idler leave the modulated region and their wavevectors return to their values in the unmodulated waveguide.
Note that, even though energy conservation may not be maintained at each boundary, it is maintained for the entire time T from start to end, as one can easily see that \(\left|\,b_{1}(T)\,\right|^{2}-\left|\,b_{2}(T)\,\right|^{2}=\left|\,b_{1}(0)\,\right|^{2}\), indicating that forward and backward propagating photons are generated at the same rate, as in any parametric process.
Figure 4: Light propagation through the time modulated medium in the PTC arrangement. (a) The signal wave is inside the time modulated region, but the modulation has not commenced yet. (b) Time modulation has commenced and the signal frequency changes to a complex value while the wavevector is conserved. The counterpropagating idler is generated. (c) Time modulation has stopped, and the frequencies of signal and idler revert to the original frequency \(\omega\).
## 5 Performance comparison
Having established the similarities and differences between OPA and PTC on the fundamental level, perhaps now is the time to compare them on a more practical and application-relevant level, starting from a simple comparison of the amplification attainable in either of the schemes. That can be done by noting that, to ensure translational symmetry and momentum conservation, the entire modulation time in PTC cannot exceed the propagation time. Therefore, using \(L=Tc/n_{eff}\) one can easily see that \(\kappa L=\kappa_{t}T\), and one can obtain from (10)
\[\begin{split} b_{1,out}&=b_{1}(L)=A_{0}\left[\cos\beta_{t}T+\frac{\tan\beta_{t}T-i\frac{\Delta\omega}{\beta_{t}}}{1+i\frac{\Delta\omega}{\beta_{t}}\tan\beta_{t}T}\sin\beta_{t}T\right]\\ b_{2,out}&=b_{2}(0)=\kappa_{t}^{-1}A_{0}\left[\beta_{t}\frac{\tan\beta_{t}T-i\frac{\Delta\omega}{\beta_{t}}}{1+i\frac{\Delta\omega}{\beta_{t}}\tan\beta_{t}T}+i\Delta\omega\right]\end{split} \tag{19}\]
where \(\beta_{t}=\sqrt{\kappa_{t}^{2}+\Delta\omega^{2}}\). Let us now plot the values of output powers \(P_{1,2}=\left|b_{1,2,out}\right|^{2}\) for OPA according to (19) (dashed lines) and for PTC according to (14) (solid lines) for different values of peak parametric single pass gain \(\kappa L=\kappa_{t}T\).
**Figure 5. Output power relative to input power for OPA (dashed lines) and PTC (solid lines) for different values of a single pass parametric gain \(\kappa L=\kappa_{t}T\).**
Prior to doing so, let us ascertain what values of gain are achievable. Taking the effective second order susceptibility on the scale of \(d_{eff}\mid F\mid\sim\)100 \(pm\)/\(V\) for GaAs[23], \(L=1\) mm, and a pump intensity of 1 GW/cm\({}^{2}\) (\(A_{3}=5\times 10^{7}V/m\)), one can obtain \(\kappa L\sim 4\) and a bandgap width \(2\kappa_{t}=\omega\mid F\mid d_{eff}\,A_{3}/2\pi n_{eff}^{2}\sim\)130 \(GHz\); this value is less than 0.1% of the signal frequency but sufficient to accommodate the entire spectrum of a signal pulse whose duration is less than the transit time \(T\sim\)10 \(ps\). These numbers, as well as experimental results [17], appear to confirm that, with a bit of luck and ingenuity, both OPA and PTC are attainable using a second order nonlinearity. I am not going to further comment on practical issues (such as, for instance, focusing the pump onto a stripe with a high aspect ratio), as the goal here is just to provide the comparison of OPA and PTC.
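The order-of-magnitude estimate above can be reproduced with a few lines of arithmetic; in the sketch below the signal wavelength (1.55 μm) and effective index (\(n_{eff}=3.3\)) are assumptions not stated in the text, so the resulting numbers should be read as rough checks only.

```python
import numpy as np

# Rough check of the quoted numbers. Wavelength and effective index are assumed
# (not given in the text): lambda = 1.55 um, n_eff = 3.3.
c = 3.0e8                      # m/s
lam = 1.55e-6                  # assumed signal wavelength (m)
n_eff = 3.3                    # assumed effective index of the GaAs waveguide
d_eff_F = 100e-12              # d_eff*|F| ~ 100 pm/V (from the text)
A3 = 5e7                       # pump field for ~1 GW/cm^2 (V/m, from the text)
L = 1e-3                       # waveguide length (m)

omega = 2 * np.pi * c / lam
kappa = omega * d_eff_F * A3 / (2 * n_eff * c)     # spatial coupling, 1/m
kappa_t = c * kappa / n_eff                        # temporal coupling, rad/s
print("kappa*L ~", kappa * L)                                  # a few, as quoted
print("bandgap 2*kappa_t/(2*pi) ~ %.0f GHz" % (2 * kappa_t / (2 * np.pi) / 1e9))
print("transit time T = L*n_eff/c ~ %.1f ps" % (L * n_eff / c * 1e12))
```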
Now we turn our attention to Fig. 5a, where the output intensities of the forward and backward waves are plotted for the case of single pass parametric gain \(\kappa_{t}T=1\). For these moderate values of gain, the overall output power gain curves exhibit similar smooth dependences on detuning from the central frequency \(\omega_{0}\). As the parametric gain increases to \(\kappa_{t}T=1.3\) (Fig. 5b), the overall gain for OPA increases near \(\omega_{0}\). The reason is the presence of positive feedback in space. Obviously, this feedback is unattainable in the time domain as, despite the term "time reversal," no light can travel back in time. Ultimately, when one approaches the threshold condition \(\kappa_{t}T=\pi/2\) (as shown in Fig. 5c), optical parametric oscillation commences, at which point the parametric gain gets clamped at the threshold value due to depletion of the pump. If the pump pulse is short, however (less than a few round trip times), then parametric oscillation will not have time to develop, and one can in principle continue to increase the parametric gain to the value \(\kappa_{t}T=2\), at which point PTC does show large gain within the bandgap and relatively little gain outside of it (Fig. 5d). Finally, assuming that a huge gain of \(\kappa_{t}T=10\) can be achieved without gain depletion due to amplified spontaneous parametric down conversion, the difference between PTC and OPA becomes obvious: in PTC the overall gain exceeds 20 dB inside the bandgap and goes to essentially zero outside of it. In my opinion, experience shows that overall gain beyond 30 dB cannot be achieved due to stimulated emission (optical parametric generation in this case) that depletes the gain. Hence the picture presented
in Fig. 5e is of mostly academic interest, though not without value, as it elucidates when and where the difference between OPA and PTC becomes perceptible.
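A reader wishing to reproduce the curves of Fig. 5 can do so directly from (19) and (14)/(15); the sketch below (normalized units with \(\kappa_{t}=1\), and hypothetical gain and detuning values) compares the OPA and PTC output powers and recovers the qualitative behavior discussed above, including the OPA advantage at moderate gain and its blow-up at the oscillation threshold \(\kappa_{t}T=\pi/2\).

```python
import numpy as np

# Sketch comparing Eq. (19) (OPA) with Eqs. (14)/(15) (PTC); the single-pass
# gain kappa*L = kappa_t*T and the detuning are in normalized units (kappa_t = 1).
def opa_output(gain, d):
    """|b1(L)|^2 and |b2(0)|^2 from Eq. (19); gain = kappa_t*T, d = d_omega/kappa_t."""
    kt, T = 1.0, gain
    dw = d * kt
    bt = np.sqrt(kt**2 + dw**2)
    C = (np.tan(bt * T) - 1j * dw / bt) / (1 + 1j * (dw / bt) * np.tan(bt * T))
    b1 = np.cos(bt * T) + C * np.sin(bt * T)
    b2 = (1.0 / kt) * (bt * C + 1j * dw)
    return abs(b1)**2, abs(b2)**2

def ptc_output(gain, d):
    """|b1(T)|^2 and |b2(T)|^2 from Eqs. (14)/(15)."""
    kt, T = 1.0, gain
    dw = d * kt
    if abs(dw) < kt:                         # inside the bandgap: exponential growth
        g = np.sqrt(kt**2 - dw**2)
        b1 = np.cosh(g * T) + 1j * (dw / g) * np.sinh(g * T)
        b2 = (kt / g) * np.sinh(g * T)
    else:                                    # outside the bandgap: oscillatory
        bt = np.sqrt(dw**2 - kt**2)
        b1 = np.cos(bt * T) + 1j * (dw / bt) * np.sin(bt * T)
        b2 = (kt / bt) * np.sin(bt * T)
    return abs(b1)**2, abs(b2)**2

for gain in (1.0, 1.3, 2.0):
    for d in (0.0, 0.5, 1.5):
        print(gain, d, "OPA:", opa_output(gain, d), "PTC:", ptc_output(gain, d))
```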
From the practical point of view, it has been purported that one can achieve parametric amplification in PTC over a broader bandwidth than in OPA [24] and that the parametric gain is frequency-independent as long as it lies within the gap[10]. This is obviously not entirely correct, as the gain in PTC declines severely with detuning from the center frequency \(\omega_{0}\). Furthermore, in the OPA geometry, even outside the bandgap the amplification bandwidth is comparable, and the gain is larger than in PTC for the realistic values of gain in Fig. 5a-c. If one really wants to achieve a broad bandwidth with OPA, then one should consider a standard forward OPA operating around degeneracy, where the bandwidth is determined by the group velocity dispersion (GVD) \(\beta_{2}\), \(\Delta\omega=2\sqrt{\kappa\ /\ \beta_{2}}\), and amplification bandwidths as broad as 250 THz are routinely attainable [25], so that white light can be generated in them [26]. Nothing like that can be obtained in the scheme with counterpropagating signal and idler, in either the OPA or PTC arrangement.
## 6 Time crystal formation using nonlinear index and its relationship with optical phase conjugation.
Just as with the second order nonlinearity, one may consider using third order processes[13, 27]. The advantage of third order processes is that phase matching in them is less critical, but the big shortcoming is that the strength of third order processes, at least the ultrafast ones, is less than that of second order processes. But does one really need an ultrafast process to observe the usual manifestations of PTC? Here I provide a somewhat surprising answer.
The third order nonlinear polarization in the presence of a pump wave of frequency \(\omega_{0}\) can formally be written as
\[P_{NL}=3\varepsilon_{0}\chi^{(3)}E_{P}E_{P}(E_{1}+E_{2})=3\varepsilon_{0}\chi^{(3)}A_{3}^{2}\left(A_{1}e^{i(kz+(2\omega_{0}-\omega)t)}+A_{2}^{*}e^{i(kz-(2\omega_{0}-\omega)t)}\right)e^{-2ik_{3}x}+c.c., \tag{20}\]
where only the terms relevant to the formation of PTC (or to PC) have been kept. The factor of 3 is here because one has three distinct sequences in which three waves interact, \(E_{1,2}E_{P}E_{P}\), \(E_{P}E_{1,2}E_{P}\),and \(E_{P}E_{P}E_{1,2}\). These three terms, however, describe different processes and have different values of nonlinear susceptibilities[28]. The first term,
\[P_{NL}^{fast}=\varepsilon_{0}\chi^{(3)}(2\omega_{0}-\omega,-\omega_{0},\omega_{0})\,A_{1}e^{ikz-i\omega t}\,A_{3}^{*}e^{i\omega_{0}t}\,A_{3}^{*}e^{i\omega_{0}t}\,e^{2ik_{3}x} \tag{21}\]
describes the process in which the permittivity is modulated at frequency \(2\omega_{0}\), which in turn modulates the signal wave and scatters it into the counterpropagating idler (and vice versa). The sequence of events is shown schematically in Fig. 6a. This process is non-resonant and hence instantaneous and relatively weak, with \(\chi_{fast}^{(3)}\sim\)\(10^{-19}\)\(-10^{-20}\,m^{2}\,/\,V^{2}\)[29]. Following the derivations in the previous section, one can easily obtain the same coupled set of equations (13) with coupling coefficient \(\kappa_{t}=c\kappa^{\prime}/\,n_{eff}=\omega\,|\,F\,|\,\chi_{fast}^{(3)}A_{3}^{2}\,/\,2n_{eff}^{2}\). It is easy to see that, for the same pump intensity, the coupling coefficient is significantly less than in the case of the second order nonlinearity.
But what about the other two terms (equal to each other) that also engender
\[P_{NL}^{slow}=2\varepsilon_{0}\chi^{(3)}(2\omega_{0}-\omega;\omega_{0},-\omega,\omega_{0})\,A_{3}^{*}e^{i\omega_{0}t}\,A_{1}e^{ikz-i\omega t}\,A_{3}^{*}e^{i\omega_{0}t}\,e^{2ik_{3}x}\,? \tag{22}\]
This equation describes the process in which the pump and signal wave first mix and produce an intensity oscillating at the beat frequency \(\omega_{0}-\omega\), \(I_{beat}\sim A_{1}A_{3}^{*}e^{ikz-i(\omega-\omega_{0})t}\). If the beat frequency is comparable to, or less than, the absorption bandwidth, then when the beat wave is absorbed, a slow-moving permittivity grating
\[\Delta\varepsilon_{NL}(z,t)\sim\chi_{slow}^{(3)}A_{1}A_{3}^{*}e^{ikz-i(\omega-\omega_{0})t} \tag{23}\]
is formed, as shown in Fig. 6b. Following that, another pump photon scatters from the moving grating, resulting in an idler wave. The mechanism of the permittivity change can be associated with absorption saturation via the Kramers-Kronig relations[30], or, as is the case for intraband absorption in metals and doped semiconductors, including transparent conductive oxides (TCO), it can be due to an increase of the local
Figure 6: Two distinct mechanisms of PTC formation. (a) “fast” mechanism in which two pump photons (1) and (2) modulate the permittivity, and the signal photon (3) scatters into the idler (4). (b) “slow” mechanism in which one pump (1) and one signal (2) photon produce a moving grating at the beat frequency, and another pump photon (3) scatters off it into the idler (4)
electron temperature[31-33]. Either way, these processes have characteristic lifetimes over which the changes in permittivity accumulate. For absorption saturation, the characteristic time \(\tau\) is the recombination time, measured in the range of a few picoseconds[34] to nanoseconds. For intersubband processes in quantum wells this relaxation time is sub-picosecond[35, 36]. In the case of TCO's the time is the temperature relaxation time due to energy transfer from electrons to phonons. This time is measured in hundreds of femtoseconds[37-39]. Naturally, the maximum change of permittivity is proportional to \(\tau\), and is thus much larger than the ultrafast change. It has been shown [28, 40] that the relation between fast and slow nonlinearities is that between the coherence time \(T_{2}\) and the relaxation time \(T_{1}\), which is typically orders of magnitude longer. In the case of TCO's the ratio is that between the momentum scattering time \(\tau_{m}\), which is on the scale of a few femtoseconds, and the aforementioned electron-lattice relaxation time, measured in hundreds of femtoseconds. It is for this reason that the third order susceptibility in TCO materials has been measured in the range of \(3\times 10^{-17}\,m^{2}\) / \(V^{2}\)[33, 40]. With that, one can achieve a fairly large value of coupling \(\kappa_{t}=\omega\,|\,F\,|\,\chi_{slow}^{(3)}A_{3}^{2}\,/\,2n_{eff}^{2}\). However, this assumes that the beat frequency is less than \(1/\tau\); the "real bandwidth" can be found by finding the nonlinear index change from the equation
\[\frac{d\Delta\varepsilon_{NL}}{dt}=\frac{1}{\tau}\left(-\Delta\varepsilon_{NL}+\chi_{slow}^{(3)}A_{p}^{*}A_{1}\,e^{ikz+i\Delta\omega t}\right) \tag{24}\]
This equation has a solution,
\[\Delta\varepsilon_{NL}(t)=\chi_{slow}^{(3)}A_{p}^{*}A_{1}\,e^{ikz+i\Delta\omega t}\,\frac{1-e^{-i\Delta\omega t-t/\tau}}{1+i\Delta\omega\tau} \tag{25}\]
which indicates that the coupling, and hence the bandgap, increases with time as
\[\kappa_{t}(t)=\frac{\omega\,|\,F\,|\,\chi_{slow}^{(3)}A_{3}^{2}}{2n_{eff}^{2}}\,\frac{1-e^{-i\Delta\omega t-t/\tau}}{1+i\Delta\omega\tau}=\kappa_{t,\max}\,\left(1-e^{-i\Delta\omega t-t/\tau}\right) \tag{26}\]
Therefore, as long as the time constant is comparable to or less than the modulation time interval T (which in turn is less than the propagation time), one can still see all the salient features of PTC, as the coupling coefficient increases during the pulse. This is shown in Fig. 7 for the case of \(\Delta\omega=\kappa_{t}/2\) and different values of the characteristic time \(\tau\). As one can see, the performance deteriorates for longer times: the grating simply does not have enough time to form before the pump shuts down at time T.
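A minimal numerical sketch of this behavior is given below: Eq. (13) is integrated with the time-dependent coupling taken literally from (26), for \(\Delta\omega=\kappa_{t}/2\) and several values of \(\tau\) (all in normalized units with \(\kappa_{t,\max}=1\)). Whether the complex buildup factor should also be conjugated in the second equation is a modeling choice not specified in the text, so the sketch is purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Normalized, hypothetical parameters: kappa_t,max = 1, d_omega = 0.5, pump on for T = 3.
kmax, d_omega, T = 1.0, 0.5, 3.0

def rhs(t, y, tau):
    """Eq. (13) with the buildup factor of Eq. (26) used as kappa_t(t)."""
    b1, b2 = y[0] + 1j * y[1], y[2] + 1j * y[3]
    kt = kmax * (1.0 - np.exp(-1j * d_omega * t - t / tau))   # grating builds up over ~tau
    db1 = 1j * d_omega * b1 + kt * b2
    db2 = -1j * d_omega * b2 + kt * b1
    return [db1.real, db1.imag, db2.real, db2.imag]

for tau in (0.1, 1.0, 3.0):
    sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0, 0.0, 0.0], args=(tau,), max_step=0.01)
    b1 = sol.y[0, -1] + 1j * sol.y[1, -1]
    print("tau =", tau, "  |b1(T)|^2 =", round(abs(b1)**2, 3))   # gain drops as tau grows
```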
It is obvious that the process based on a slow nonlinearity is essentially standard FWM[14] or PC[41], the only difference being that, instead of two counterpropagating pump waves, one uses co-propagating ones, with phase matching achieved due to the short propagation distance in the transverse direction. It is instructive to relate PTC formation using a slow nonlinearity to the recording and instant reading of a hologram[42]. Indeed, the signal plays the role of the object wave, while the pump plays the role of the reference wave. Their interference records a dynamic hologram (grating) by modulating the refractive index. This hologram is then instantly read by the pump wave, which not only restores the object wave (image) but also produces the conjugate image, the idler. Indeed, in the experiments on time reversal [43] with TCO's, two waves have been observed, even though the nonlinearity was "slow". It is not surprising to see gain in this interpretation: obviously, using a high brightness restoring wave will increase the brightness of the image beyond that of the object.
In the end, one can state that an ultrafast nonlinearity is not necessary for observing the phenomena associated with exponential temporal growth inside the bandgap.
## 7 Conclusions
One thing that I definitely did not want to address in this discourse is the question of what all of this portends for practical applications. A lot of enticing prospects have been envisioned by many authors whose vision extends further beyond the horizon than my limited eyesight allows me, and a lot of exciting forecasts will be made, no matter what opinion I may express here. So, rather than provoking the
Figure 7: Performance of TMC based on “slow” nonlinearity with characteristic response times \(\tau\).
righteous ire in the community, I will just summarize a few fundamental facts discovered (or, perhaps, rediscovered) in this work and leave it to the reader to interpret them in a practical context. Here are the facts:
The PTC is at its heart a parametric process in which modulation of the permittivity using a second or third order optical nonlinearity causes simultaneous generation of signal and idler photons with energy conservation maintained. The main difference between PTC and conventional parametric processes (OPA, PC, FWM) is in the boundary conditions. In conventional parametric processes the signal and idler frequencies remain unchanged inside and outside the modulated region, as only the wavevectors can change. For PTC the situation at the temporal boundary is the opposite: while the wavevector is maintained before, during, and after the time modulation interval, the frequency changes, and the conjugated (or time reversed) wave outside the modulation interval has the same frequency as the incident signal. While the dispersion curves for OPA and PTC are identical, in PTC one can couple the signal into the bandgap and achieve exponential amplification. That being said, the overall amplification is similar for OPA and PTC, slowly decaying as the signal frequency is detuned from the central frequency \(\omega_{0}\). For moderate values of modulation, the OPA actually holds an advantage over PTC due to the presence of feedback in space, which is obviously impossible in time. It is only when the modulation becomes very strong that one can potentially observe the salient feature of PTC: strong amplification within the bandgap and almost no amplification outside of it.
In order to observe most of the features of TMC it is not really necessary to operate with an ultrafast nonlinearity. Using a relatively slow but strong nonlinearity with a response time on the scale of the propagation time, i.e. anywhere from a few hundred femtoseconds to a few picoseconds (which can be obtained in a TCO, a low temperature grown semiconductor, or an intersubband transition in a quantum well), will provide one with a simple way to get all the characteristics of TMC. Note also that, using this slow nonlinearity and operating near the band edge, one can explore an interesting region of "fast light" occurring near the exceptional point, where the group velocity \(d\omega/dk\) approaches very large values and the density of photonic states exhibits a singularity[44].
With that, I hope that this essay does help to clear up some fundamental uncertainties existing within the community of researchers working in the exciting field of time modulated media. The work performed here may also turn out to be useful in the development of practical TMC's based on second order and slow third order nonlinearities, so it is up to the experimentalists to test and, with any luck, exploit the conclusions made here.
## Acknowledgements
I want to thank all the relevant funding agencies for not funding this work and thus giving me an opportunity of freely exploring the topic of my choice. As always, the contributions of Prof. P. Noir and Dr. S. Artois of JHU, who provided me with all the support and encouragement I needed, are greatly appreciated.
## Conflict of Interest
The author declares no competing financial interest.
|
2301.08083
|
Algorithmic Writing Assistance on Jobseekers' Resumes Increases Hires
|
There is a strong association between the quality of the writing in a resume
for new labor market entrants and whether those entrants are ultimately hired.
We show that this relationship is, at least partially, causal: a field
experiment in an online labor market was conducted with nearly half a million
jobseekers in which a treated group received algorithmic writing assistance.
Treated jobseekers experienced an 8% increase in the probability of getting
hired. Contrary to concerns that the assistance is taking away a valuable
signal, we find no evidence that employers were less satisfied. We present a
model in which better writing is not a signal of ability but helps employers
ascertain ability, which rationalizes our findings.
|
Emma van Inwegen, Zanele Munyikwa, John J. Horton
|
2023-01-19T14:02:53Z
|
http://arxiv.org/abs/2301.08083v1
|
# Algorithmic Writing Assistance on Jobseekers' Resumes Increases Hires
###### Abstract
There is a strong association between the quality of the writing in a resume for new labor market entrants and whether those entrants are ultimately hired. We show that this relationship is, at least partially, causal: a field experiment in an online labor market was conducted with nearly half a million jobseekers in which a treated group received algorithmic writing assistance. Treated jobseekers experienced an 8% increase in the probability of getting hired. Contrary to concerns that the assistance is taking away a valuable signal, we find no evidence that employers were less satisfied. We present a model in which better writing is not a signal of ability but helps employers ascertain ability, which rationalizes our findings.
## 1 Introduction
For most employers, the first exposure to a job candidate is typically a written resume. The resume contains information about the applicant--education, skills, past employment, and so on--that the employer uses to draw inferences about the applicant's suitability for the job. Conveying this information is the most important function of the resume. A better-written resume--without any change in the underlying facts--might make it easier for the employer to draw the correct inferences, which could lead to a greater chance of an interview or job offer. We call this the "clarity view" of the role of resume writing quality. However, a resume might not merely be a conduit for match-relevant information; the resume's writing itself could signal ability. In particular, the quality of the writing might be informative about the jobseeker's ability and communication skills. This is another reason better writing could lead to a greater chance of an interview or a job offer. We call this the "signaling view" of the role of resume writing quality.
In this paper, we explore how resume writing quality affects the hiring process using both observational data and a field experiment. We focus on distinguishing between the
"clarity view" and "signaling view." Using observational data from a large online labor market, we document a strong positive relationship between writing quality and hiring (and not simply callbacks). This relationship persists even after controlling for other factors that might otherwise explain the relationship. In terms of magnitude, an additional 1 percentage point increase in error rate (number of errors in the resume divided by the number of words in the resume) is associated with a 3% decrease in the probability of being hired. However, this is only an association and there are other potential reasons why writing quality could be correlated with hiring even with our controls. As such, we report the results of a field experiment in which we vary writing quality in the same market.
The typical approach to addressing a question of causality in hiring preferences would be an audit study, where the researcher would make fictitious job applications and observe callback rates. However, this method of analysis has a number of downsides, such as deception and wasting employers' time (Kessler et al., 2019). Furthermore, a callback is merely the first step in the hiring funnel, making it an imperfect proxy for who actually gets hired.
We use an alternative approach. We intercept jobseekers at the resume-writing stage and randomly offer some of them--the treatment group--algorithmic writing assistance. Others--the control group--had the status quo experience of no assistance. This writing assistance creates random variation in writing quality. The algorithmic writing assistance was provided by a company we call the Algorithmic Writing Company. We will discuss in depth what the Algorithmic Writing Service provides, but generally, it makes writing better by identifying common errors and offering the writer suggestions on how to address those errors.
In the experimental data, there is a very strong "first stage," in that those treated had better-written resumes on several quantifiable dimensions. For example, we find fewer spelling and grammar errors in the resumes of the treated group of jobseekers. Positive effects on resume quality were concentrated among the low-end of the distribution in writing quality, as jobseekers with already excellent resumes can benefit little from writing assistance.
After creating a resume, jobseekers engage in search, which may or may not lead to a job. We observe job search behavior and outcomes for both treated and control jobseekers. Treated workers did not send out more applications than workers in the control group, nor did they propose higher wages. This is a convenient result because our interest is in employers' decision-making, even though randomization was at the level of the jobseeker. If jobseekers had altered their application behavior--perhaps sending more applications because they know they have a stronger case to make--we might wrongly attribute greater job-market success to the resume rather than this endogenous effort.
Our primary outcome of interest is the effect of writing assistance on hiring. We find that treated jobseekers had an 8% increase in their probability of being hired at all relative to the control group. The 95% confidence interval on the percentage increase in hiring is \((3\%,13\%)\). They also had 7.8% more job offers over the experimental period than those in the control group. In terms of the matches themselves, treated workers' hourly wages were 8.4% higher than the hourly wages of workers in the control group. However, it is important to remember this is a conditional result and could simply be due to composition changes in which workers are hired.
In the "signaling view" the treatment removed or at least weakened a credible signal of jobseeker ability. If this is the case, this should leave employers disappointed. Unique to our setting, we have a measure of employer disappointment, as both sides privately rate each other at the conclusion of the contract. Although these ratings have been shown to become inflated over time (Filippas et al., Forthcoming) and can be distorted when they are public and reciprocal (Bolton, Greiner and Ockenfels, 2013), they are still a useful signal of worker performance. If employers are disappointed with the performance of the worker, this would likely manifest in lower employer ratings at the conclusion of the contract. We find no evidence that this is the case. If anything, the treatment group had slightly higher ratings. The average rating of employer satisfaction of workers in the control group was 8.835 on a ten-point scale. The average rating in the treatment group was 8.84 and had a confidence interval of \((8.74,8.94)\). A natural question is how much statistical power we would have to detect differences in the marginal hires induced by the treatment. Under conservative assumptions, we have 80% power to detect if marginally induced hires were 0.2 standard deviations worse. Given the 8.4% higher average wages in the treatment group, if employers were simply tricked into hiring worse workers generally, these higher wages should have made it even more likely to find a negative effect on ratings (Luca and Reshef, 2021).
One possible explanation for our results is that employers are simply wrong in regarding resume writing quality as informative about ability. However, the "clarity view" can also rationalize our results without making this assumption. It is helpful to formalize this notion to contrast it with the more typical signaling framing of costly effort and hiring. To that end, we present a simple model where jobseekers have heterogeneous private information about their productivity but can reveal their type via writing a "good" resume. This is not a signaling model where more productive workers face lower resume-writing costs--any worker, by writing a good resume, will reveal their information, and this cost is assumed to be independent of actual productivity. Our model has heterogeneous "good" resume writing costs. We show that writing assistance shifting the cost distribution can generate our findings of more hires, higher wages, and equally satisfied employers.
Our main contribution is to compare the "clarity view" and "signaling view" for the positive relationship between writing and hiring. Our main substantive finding is evidence for the "clarity view." We can do this because we can trace the whole matching process from resume creation all the way to a measure of post-employment satisfaction. Helping jobseekers have better-looking resumes helped them get hired (consistent with both explanations), but we find no evidence that employers were later disappointed (which is what the "signaling view" explanation would predict). We also contribute more broadly by showing the importance of text in understanding matching (Marinescu and Wolthoff, 2020). The notion that better writing can help a reader make a better purchase decision is well-supported in the product reviews literature (Ghose and Ipeirotis, 2010) but is a novel finding in labor markets. In one related example, Hong, Peng, Burtch and Huang (2021) shows that workers who directly message prospective employers (politely) are more likely to get hired, but the politeness effect is muted when the workers' messages contain typographic errors.
In addition to the general theoretical interest in understanding hiring decisions, there are practical implications to differentiating between these two views of the resume. If the "clarity view" is more important, then any intervention that encourages better writing is likely to be beneficial. There will likely be little loss in efficiency if parties are better informed. Even better, as we show, the kind of assistance that improves clarity can be delivered algorithmically. These interventions are of particular interest because they have zero marginal cost (Horton, 2017; Belot et al., 2018), making a positive return on investment more likely, a consideration often ignored in the literature (Card et al., 2010). On the other hand, if the "signaling view" is more important, then providing such writing assistance will mask important information and lead to poor hiring decisions.
Unlike general advice, algorithmic interventions are adaptive. In our study, the algorithm took what the jobseeker was trying to write as input and gave targeted, specific advice on improvement. This is likely more immediately useful than more vague recommendations, such as telling jobseekers to "omit needless words." This advice comes in the form of recommendations that are predicted to improve the resume's effectiveness. A limitation of our study is that we cannot speak to crowd-out effects (Crepon et al., 2013), which are relevant to discuss the welfare implications of any labor market intervention. However, this concern is somewhat secondary to our narrower purpose of understanding how employers make decisions with respect to resumes. Additionally, given that in our setting, new entrants compete with established jobseekers on the platform, we anticipate the crowd-out effect will be small, and perhaps even welcome if at the expense of more established workers, given the obstacles new entrants face (Pallais, 2013).
In addition to exploring an AI technology in a real labor market, we contribute to a large
literature on how variation in applicant attributes affects callback rates (Moss-Racusin et al., 2012; Bertrand and Mullainathan, 2003; Kang et al., 2016; Farber et al., 2016). While we are not the first to show that writing matters in receiving callbacks from employers (Sterkens et al., 2021; Martin-Lacroux and Lacroux, 2017), we are the first to do so on such a massive scale and with natural variation in writing quality1. Our experiment involves 480,948 jobseekers which is an order of magnitude larger than the next largest experiments. Another benefit is that we do not need to guess how workers might make mistakes on their resumes, as it is workers and not researchers writing their resumes. Additionally, unique in this literature, we can follow the induced changes all the way through hiring and even post-employment assessment which allows us to answer our "clarity view" vs. "signaling view" questions.
Footnote 1: While the reason this preference exists is not known, recruiters report, anecdotally, caring about a resume’s writing quality (Oreopoulos, 2011).
The rest of the paper proceeds as follows. Section 2 describes the online labor market which serves as the focal market for this experiment. Section 3 reports the experimental results of the treatment effects on writing quality and subsequent labor market outcomes. In Section 4 we present a simple model that can rationalize our findings. Section 5 concludes.
## 2 Empirical context and experimental design
The setting for this experiment is a large online labor market. Although these markets are online, with a global audience, and with lower search costs (Goldfarb and Tucker, 2019), they are broadly similar to more conventional markets (Agrawal et al., 2015). Employers post job descriptions, jobseekers apply, and there are interviews followed by hiring and managing. One distinctive feature of online labor markets is that both the employer and the worker provide ratings for each other at the end of a contract.
Because of the many similarities between on and offline labor markets, a growing body of research uses online labor markets as a setting, often through randomized experiments. These studies contribute to the theory in longstanding questions about labor markets, such as deepening our understanding of the mechanisms and processes by which employers and workers find jobs. Online labor markets also allow researchers to broaden the range of questions in which it is possible to make causal estimates (Horton, 2010; Barach and Horton, 2021) because platforms store detailed data on things like applications, text, length of time spent working on an application, speed of hire, and much more.
Many studies on online labor markets identify and measure phenomena that are relevant to labor markets both online and offline. Like the offline labor market, online labor
markets have been shown to have hiring biases (Chan and Wang, 2018). But Agrawal et al. (2016) shows that these biases tend to be ameliorated with experience and that, in general, employers are able to learn as they hire (Kokkodis and Ransbotham, 2022). And Stanton and Thomas (2016) shows that in an online labor market, agencies (which act as quasi-firms) help workers find jobs and break into the marketplace.
### Search and matching on the platform
A would-be employer writes job descriptions, labels the job opening with a category (e.g., "Graphic Design"), lists required skills, and then posts the job opening to the platform website. Jobseekers generally learn about job openings via electronic searches. They submit applications, including a wage bid and a cover letter. In addition to jobseeker-initiated applications, employers can also use the interface to search worker profiles and invite workers to apply to particular jobs. The platform uses the jobseeker's history and ratings on the platform to recommend jobseekers to would-be employers (Horton, 2017). Despite platforms making algorithmic recommendations, none are based on the writing quality of their resume. In terms of selection, Pallais (2013) shows that employers in an online labor market care about workers' reputation and platform experience when hiring. After jobseekers submit applications, employers screen the applicants, decide whether to give interviews, and then whether to make an offer(s).
### Experimental intervention at the resume-writing stage of profile creation
When new jobseekers sign up to work on the platform, their first step is to register and create their profile. This profile serves as the resume with which they apply for jobs. This profile includes a list of skills, education, and work experience outside of the platform, as well as a classification of their primary job category (e.g., "Graphic Design"), mirroring what employers select when posting a job. The interface consists of a text box for a profile title and a longer one for a profile description. Jobseekers either enter their profile information on the spot or they can copy and paste it from somewhere else.
During the experimental period, jobseekers registering for the platform were randomly assigned to an experimental cell. The experimental sample comprises jobseekers who joined the platform between June 8th and July 14th, 2021. For treated jobseekers, the text boxes for the profile description are checked by the Algorithmic Writing Service. Control jobseekers received the status quo experience. The experiment included 480,948 jobseekers, with
50% allocated to the treated cell. Table 1 shows that it was well-balanced and the balance of pre-treatment covariates was consistent with a random process.
### The algorithmic writing assistance
Words and phrases which are spelled wrong or used incorrectly are underlined by the Algorithmic Writing Service. See Figure 1 for an example of the interface, showing text "marked up" by the Algorithmic Writing Service. By hovering the mouse cursor over the underlined word or phrase, the user sees suggestions for fixing spelling and grammar errors. The Algorithmic Writing Service also gives advice about punctuation, word usage, phrase over-use, and other attributes related to clarity, engagement, tone, and style.
### Platform profile approval
When jobseekers finish setting up their profiles, they have to wait to be approved by the platform. The platform approves jobseekers who have filled out all the necessary information and uploaded an ID and bank details. The platform can also reject jobseekers at their discretion. However, platform rejection is somewhat rare. About 10 percent of profiles are
Figure 1: Example of the Algorithmic Writing Service ’s interface showing suggestions on how to improve writing
rejected, usually as a part of fraud detection or because the jobseekers leave a completely empty profile. 46% of workers who were allocated into the experiment upon registration complete and submit their profiles. About 41% of workers who begin registering get all the way through the approval process.
As approval is made following profile creation, this platform step creates a potential problem for interpreting any intervention that changes profile creation. For example, it could be that better writing just led to a greater probability of platform approval. Or, it could have caused jobseekers to be more likely to complete the registration process and submit their profile, both of which could affect hiring. While unlikely, this is possible, and we do several things to deal with this potential issue.
First, we check whether there is any evidence of selection. We find no evidence that treated jobseekers were more likely to be approved--the estimate is a precise zero. In Appendix Table 7 we show that treated jobseekers are no more likely to submit their profiles and that approval too is unaffected by the treatment.
Second, in our main analysis, we condition on profile approval in our regressions. We also do robustness checks where we report the same analysis not conditioned on profile approval and where we control for profile approval as a covariate. All our results are robust to these strategies and are described in Section 3.11.
Once a jobseeker is approved, they can begin applying for jobs posted on the platform. Their profile will include their resume and a "profile hourly wage" which is the wage offer to employers searching for workers. After they complete their first job on the platform, their profile also shows the worker's actual wages and hours worked on jobs found through the platform.
### Description of data used in the analysis
The dataset we use in the analysis consists of the text of jobseekers' resumes as well as all of their behavior on the platform between the time they registered and August 14th, 2021, one month after allocations ended. We construct jobseeker level data including the title and text of their profile, the number of applications they send in their first month on the platform, the number of invitations to apply for jobs they receive, the number of interviews they give, and the number of contracts they form with employers. These workers most often list Design & Creative, Writing, Administrative Support, and Software Development as their primary job categories, in order of frequency.
In Table 1 we present summary statistics about the jobseekers in the full experimental sample as well as the sample conditioned on platform approval. 16% of the jobseekers specify that writing jobs are their primary area of work. Only 14% of jobseekers are based in the US, and over 80% are based in a country where English is not the native language.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & Treatment & Control & Difference in means: & p-value \\ & mean: & mean: & \(\bar{X}_{TRT}-\bar{X}_{CTL}\) & \\ & \(\bar{X}_{TRT}\) & \(\bar{X}_{CTL}\) & & \\ \hline _Full sample description: N = 480,948_ & & & & \\ & Resume submitted & 0.456 (0.001) & 0.455 (0.001) & 0.001 (0.001) & 0.454 \\ & Platform approved & 0.407 (0.001) & 0.406 (0.001) & 0.002 (0.001) & 0.187 \\ & Resume length & 32.910 (0.116) & 32.859 (0.117) & 0.051 (0.165) & 0.758 \\ & Profile hourly rate & 18.843 (0.126) & 18.917 (0.126) & -0.075 (0.178) & 0.676 \\ \hline _Flow from initial allocation into analysis sample_ & & & & \\ & _Treatment (N)_ & _Control (N)_ & _Total (N)_ & \\ & Total jobseekers allocated & 240,231 & 240,717 & 480,948 & \\ & who submitted their profiles & 109,638 & 109,604 & 219,242 & \\ & and were approved by the platform & 97,859 & 97,610 & 195,469 & \\ & with non-empty resumes & 97,479 & 97,221 & 194,700 & \\ _Pre-allocation attributes of the analysis sample: N = 194,700_ & & & & \\ & From English-speaking country & 0.182 (0.001) & 0.183 (0.001) & -0.002 (0.002) & 0.363 \\ & US-based & 0.141 (0.001) & 0.143 (0.001) & -0.002 (0.002) & 0.223 \\ & Specializing in writing & 0.166 (0.001) & 0.168 (0.001) & -0.002 (0.002) & 0.150 \\ & Specializing in software & 0.115 (0.001) & 0.115 (0.001) & 0.000 (0.001) & 0.769 \\ & Resume length & 70.393 (0.222) & 70.260 (0.222) & 0.133 (0.314) & 0.671 \\ \hline \hline \end{tabular} _Notes:_ This table reports means and standard errors of various pre-treatment covariates for the treatment group and the control group. The first panel shows the post-allocation outcomes of the full experimental sample i) profile submission, ii) platform approval, iii) length of resume in the number of words, iv) profile hourly wage rate in USD. The means of profile hourly rate in treatment and control groups are only for those profiles which report one. The reported p-values are for two-sided t-tests of the null hypothesis of no difference in means across groups. The second panel describes the flow of the sample from the allocation to the sample we use for our experimental analysis. The complete allocated sample is described in the first line, with each following line defined cumulatively. The third panel looks at pre-allocation characteristics of the jobseekers in the sample we use for our analysis, allocated jobseekers with non-empty resumes approved by the platform. We report the fraction of jobseekers i) from the US, UK, Canada, or Australia, ii) from the US only, iii) specializing in writing jobs, iv) specializing in software jobs, and v) the mean length of their resumes in the number of words.
\end{table}
Table 1: Comparison of jobseeker covariates, by treatment assignment
### Constructing measures of writing quality
We do not observe the changes that Algorithmic Writing Service suggested--we simply observe the resumes that result. As such, we need to construct our own measures of writing quality to determine if the treatment was delivered.
Algorithmic Writing Service gives suggestions to writers about how to improve text along several dimensions. Perhaps the most straightforward measure of writing quality is spelling. To see if the treatment impacted spelling errors, we take each worker's profile and check if each word appears in an English language dictionary. We use the dictionary hunspell, which is based on MySpell dictionaries and is the basis for the spell checker for Google Chrome, Firefox, and Thunderbird.
As many of the resumes are for technical jobs, they often contain industry-specific terms such as "UX" or brand names like "Photoshop." To prevent these from being labeled as errors, we augmented the list of words in the dictionary by checking the 1,000 most commonly "misspelled" words in our sample and adding non-errors manually.
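A minimal sketch of this spelling measure is given below; the word-list file, the extra domain terms, and the example resume are all hypothetical stand-ins (the actual analysis uses the hunspell dictionaries), and the function simply returns the share of tokens not found in the augmented dictionary.

```python
import re

# Illustrative sketch of the spelling measure: a plain word set stands in for hunspell,
# augmented with domain terms such as "UX" or "Photoshop". File name and terms are hypothetical.
def load_dictionary(path="words_en.txt", extra=("ux", "photoshop", "wordpress")):
    with open(path) as f:
        words = {w.strip().lower() for w in f if w.strip()}
    return words | set(extra)

def spelling_error_rate(resume_text, dictionary):
    """Fraction of resume tokens not found in the (augmented) dictionary."""
    tokens = re.findall(r"[A-Za-z']+", resume_text.lower())
    if not tokens:
        return None
    misspelled = [t for t in tokens if t not in dictionary]
    return len(misspelled) / len(tokens)

# dictionary = load_dictionary()
# rate = spelling_error_rate("Experienced graphic desiner skilled in Photoshop", dictionary)
# percent_correct = 100 * (1 - rate)
```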
Spelling is not the only measure of writing quality. To broaden our measures, we use LanguageTool, open-source software that finds many errors that a simple spell checker cannot detect, to understand whether employers care about measures of writing quality other than simply the number of spelling mistakes. LanguageTool is a rule-based dependency parser that identifies errors (rule violations) and categorizes them. Some example categories include "Nonstandard Phrases," "Commonly Confused Words," "Capitalization," and "Typography." For example, the nonstandard phrase "I never have been" would be flagged with a suggestion to replace it with "I have never been." For a more detailed explanation of all of the rule categories, see Appendix Table 12.
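A hedged sketch of how such category counts could be produced with the language_tool_python binding is shown below; the binding, the match.category attribute, and the category labels in the comment are assumptions about the tooling rather than a description of the authors' exact pipeline.

```python
from collections import Counter
import language_tool_python  # assumed Python binding for LanguageTool

# Counts LanguageTool rule-category violations per resume and normalizes by word count,
# mirroring the error-rate measures used in the analysis (attribute names are assumptions).
tool = language_tool_python.LanguageTool("en-US")

def error_rates(resume_text):
    n_words = max(len(resume_text.split()), 1)
    counts = Counter(match.category for match in tool.check(resume_text))
    return {category: n / n_words for category, n in counts.items()}

# error_rates("I never have been more excited to worked with you're team.")
# -> e.g. {'GRAMMAR': ..., 'CONFUSED_WORDS': ...}  (category labels are illustrative)
```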
### Spelling errors are associated with lower hiring probabilities in the control group
Before presenting the experimental results, we explore the relationship between resume writing quality and hiring in the control group. We begin by studying the most unambiguous measure of writing quality: spelling. In Figure 2 we plot the relationship between hiring outcomes and the percentage of words spelled correctly on the resumes of jobseekers in the control group. Because the distribution of percent correctly spelled is so left skewed, we truncate the sample to only those who spell at least 75% of the words in their resumes correctly. This window includes 98% of jobseekers in the control group. The x-axis is deciles between 75% and 100% of words spelled correctly.
Jobseekers whose resumes have fewer spelling errors are more likely to be hired. In the left facet, the y-axis is the number of contracts a jobseeker forms in their first month on the platform. In the right facet, the y-axis is the probability that a jobseeker is ever hired in their first month on the platform. A jobseeker with fewer than 90% of the words in their resume spelled correctly has only a 3% chance of getting hired, while jobseekers with around 99% of the words spelled correctly have an 8% chance of getting hired. However, as is visible in both facets, resumes with 100% of words spelled correctly are much _less_ likely to receive interest from employers. This is likely because those resumes tend to be much shorter than the others--the average length of a resume that has zero spelling errors is only 52 words.
### The association between various kinds of writing errors and hiring probabilities
Moving beyond spelling, in Table 13 we summarize the occurrence of other types of errors within the control group. In Table 2, we show the correlation between hiring outcomes and each type of language error in the resumes of the control group of the experimental sample. In these regressions, we control for the jobseekers' profile hourly rate and their job category. Resumes with more errors in capitalization, grammar, typography, miscellaneous,
Figure 2: Association between spelling errors and hiring outcomes in the control group
collocations, possible typos, commonly confused words, and semantics are all associated with fewer hires. This linear model imposes some strong assumptions, such as constant marginal effects, on the relationship between various writing errors and hiring. There may be interactions between these error types. However, it is still useful to summarize the relationships. We can see generally negative relationships between writing errors and hiring.
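A sketch of this specification is given below, assuming a control-group data frame with one row per jobseeker; all column names (hired, resume_word_count, profile_hourly_rate, job_category, and the error counts) are hypothetical, and the robust-variance choice is ours rather than something stated in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch of the Table 2 specification: each error count is divided by resume
# length before entering the regression, with hourly rate and job category as controls.
error_cols = ["capitalization", "possible_typo", "grammar", "punctuation",
              "typography", "style", "misc", "confused_words", "collocations"]

def fit_hiring_model(df: pd.DataFrame):
    for c in error_cols:
        df[c + "_rate"] = df[c] / df["resume_word_count"]
    rhs = " + ".join(c + "_rate" for c in error_cols)
    formula = f"hired ~ {rhs} + profile_hourly_rate + C(job_category)"
    return smf.ols(formula, data=df).fit(cov_type="HC1")

# model = fit_hiring_model(control_group_df)
# print(model.summary())
```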
Interestingly, more style errors _positively_ predict hiring. While initially surprising, style errors are often caused by language being unnecessarily flowery. Some examples of style errors are "Moreover, the street is almost entirely residential" and "Doing it this way is more easy than the previous method." This implies that despite employers' dislike of most writing errors, they forgive or even prefer this kind of flowery language.
## 3 Effects of the treatment
We look at two main kinds of experimental results. First, we examine how the treatment affected the text of resumes. We are looking to see whether there is a "first stage." Next, we look at market outcomes for those treated workers. For convenience, we present these treatment effects as percent changes, in Figures 3 and 5. We calculate the standard errors using the delta method.
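For concreteness, the percent-change estimate and its delta-method standard error can be computed as in the sketch below; the input arrays are hypothetical, and the formula follows from a first-order expansion of the ratio of the two independent sample means.

```python
import numpy as np

# Percent-change treatment effect (mean_T - mean_C)/mean_C with a delta-method SE,
# treating the two group means as independent. Input arrays are hypothetical.
def percent_change_effect(y_treat, y_ctrl):
    mt, mc = np.mean(y_treat), np.mean(y_ctrl)
    vt = np.var(y_treat, ddof=1) / len(y_treat)    # variance of the treatment mean
    vc = np.var(y_ctrl, ddof=1) / len(y_ctrl)      # variance of the control mean
    effect = mt / mc - 1.0
    se = np.sqrt(vt / mc**2 + (mt**2 / mc**4) * vc)   # first-order (delta-method) SE
    return effect, se

# effect, se = percent_change_effect(hired_treatment, hired_control)
# ci = (effect - 1.96 * se, effect + 1.96 * se)
```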
### Algorithmic writing assistance improved writing quality
The first step is to measure the effect Algorithmic Writing Service has on writing in the treatment group. We start with the fraction of words in the resume spelled correctly. In the control group, resumes are 70 words long on average. Even the worst spellers spell most words correctly, and an average resume has 96% of its words spelled correctly.
To understand the effects of the treatment on other types of writing errors, we return to the more fine-grained LanguageTool error categories. In Figure 3, we look at the effect of treatment on the number of each type of writing error, normalized by resume length.2 Our outcome of interest is the error rate for each type, i.e., the count of each error type divided by the number of words in the resume. We calculate the standard errors using the delta method.
Footnote 2: The treatment had no effect on the length of resumes—see Table 8 in Appendix A.1.
We find that jobseekers in the control group had a higher rate of errors of the following types: capitalization, collocations, commonly confused words, grammar, spelling, possible typos, miscellaneous, and typography. We find larger treatment effects for errors associated with writing clarity than for many others. For example, the largest magnitudes of differ
\begin{table}
\begin{tabular}{l c c} \hline \hline & \multicolumn{2}{c}{_Dependent variable:_} \\ \cline{2-3} & Number of Contracts & Hired \\ & (1) & (2) \\ \hline Capitalization Error & \(-0.298^{***}\) & \(-0.175^{***}\) \\ & (0.077) & (0.039) \\ Possible Typo & \(-0.049^{***}\) & \(-0.031^{***}\) \\ & (0.011) & (0.006) \\ Grammar Error & \(-0.320^{***}\) & \(-0.223^{***}\) \\ & (0.097) & (0.049) \\ Punctuation Error & \(0.064^{***}\) & \(0.038^{***}\) \\ & (0.024) & (0.012) \\ Typography Error & \(-0.053^{***}\) & \(-0.039^{***}\) \\ & (0.016) & (0.008) \\ Style Error & \(0.164^{*}\) & \(0.092^{*}\) \\ & (0.098) & (0.050) \\ Miscellaneous Error & \(-0.457^{***}\) & \(-0.241^{***}\) \\ & (0.143) & (0.073) \\ Redundant Phrases & \(0.123\) & \(0.086\) \\ & (0.385) & (0.195) \\ Nonstandard Phrases & \(0.860\) & \(-0.074\) \\ & (1.275) & (0.646) \\ Commonly Confused Words & \(-1.192^{**}\) & \(-0.667^{**}\) \\ & (0.531) & (0.269) \\ Collocations & \(-0.588^{*}\) & \(-0.368^{**}\) \\ & (0.347) & (0.176) \\ Semantic Error & \(-1.229\) & \(-0.683^{*}\) \\ & (0.789) & (0.400) \\ Constant & \(0.167\) & \(0.167^{**}\) \\ & (0.142) & (0.072) \\ \hline Controls & X & X \\ Observations & 93,725 & 93,725 \\ R\({}^{2}\) & 0.002 & 0.004 \\ \hline \hline \end{tabular} _Notes: This table analyzes correlation between various writing errors on jobseekers’ resumes and their hiring outcomes. The independent variables, writing errors, are divided by the number of words in the jobseekers’ resume. Column (1) defines Number of Contracts as the number of unique jobs they work over the month after they register for the platform. Column (2) defines Hired as the probability the jobseeker was hired over that month. All analysis includes controls for profile hourly rate and job category. Writing errors are defined by LanguageToolR. The sample is made up of all jobseekers in the control group of the experimental sample who submitted non-empty resumes and were approved by the platform. Significance indicators: \(p\leq 0.10:*\), \(p\leq 0.05:**\) and \(p\leq.01:***\)._
\end{table}
Table 2: Hiring outcomes predicted based on language errors (normalized by word count) in the control group
_Notes:_ This plot shows the effect of the treatment on various writing errors in jobseekers' resumes. Point estimates are the percentage change in the dependent variable versus the control group for the treatment groups. A 95% confidence interval based on standard errors calculated using the delta method is plotted around each estimate. The experimental sample is of all new jobseekers who registered and were approved for the platform between June 8th and July 14th, 2021, and had non-empty resumes, with \(N=194{,}700\). Regression details can be found in Appendix Tables 16 and 17.
Figure 3: Effect of the algorithmic writing assistance on writing quality measures
ences in error rate were for commonly confused words and for collocations, i.e., errors in which two English words that do not normally go together are put together. Interestingly, the treatment group had more "style" errors.
### Algorithmic assistance helped the worst writers more
The treatment was predominantly effective for jobseekers at the bottom of the spelling distribution. In Figure 4 we report results from quantile regressions of the percentage of words spelled correctly on the treatment. The effect is concentrated among jobseekers in the bottom half of the spelling distribution: it is largest below the 30th percentile, shrinks at each subsequent decile, and is indistinguishable from zero at and above the median.
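A minimal sketch of this exercise, using simulated data in which the treatment mostly helps the left tail of the spelling distribution (column names are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000
df = pd.DataFrame({"treat": rng.integers(0, 2, n)})
base = 100 * rng.beta(8, 1, n)                                   # skewed spelling scores
df["frac_correct"] = np.minimum(100, base + 0.1 * df["treat"] * np.maximum(0, 95 - base))

for q in np.arange(0.1, 1.0, 0.1):
    fit = smf.quantreg("frac_correct ~ treat", df).fit(q=q)
    print(f"quantile {q:.1f}: treatment effect {fit.params['treat']:.2f}")
```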
### Heterogeneous treatment effects on spelling
A natural question is whether effects differed by jobseeker background. In Table 3 we interact pre-randomization jobseeker attributes with the treatment. We can see that jobseekers from the US, jobseekers from English-speaking countries,3 and writers all do better in "levels." We find that jobseekers from countries that are not natively English-speaking experience significantly larger treatment effects on the fraction of words they spell correctly than their anglophone counterparts.
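A sketch of the interaction specification in Table 3 on simulated data, with a treatment effect built in only for non-native speakers (column names are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "native_english": rng.integers(0, 2, n),
})
df["frac_correct"] = (
    96 + 2.4 * df["native_english"]
    + 0.13 * df["treat"] * (1 - df["native_english"])   # toy: effect only for non-natives
    + rng.normal(0, 3, n)
)

fit = smf.ols("frac_correct ~ treat * native_english", data=df).fit()
print(fit.params[["treat", "treat:native_english"]])
```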
Figure 4: Effect of treatment on percentage of words spelled correctly, by deciles
Figure 5: Effect of algorithmic writing assistance on hiring outcomes
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{_Dependent variable:_} \\ \cline{2-5} & \multicolumn{4}{c}{Frac Words Spelled Correctly x 100} \\ & (1) & (2) & (3) & (4) \\ \hline Algo Writing Treatment (Trt) & \(0.090^{*}\) & \(0.133^{***}\) & \(0.122^{**}\) & \(0.102^{**}\) \\ & (0.046) & (0.051) & (0.050) & (0.050) \\ Native-English & & \(2.405^{***}\) & & \\ & & (0.084) & & \\ Trt \(\times\)English & & \(-0.218^{*}\) & & \\ & & (0.119) & & \\ US & & & \(2.394^{***}\) & \\ & & & (0.093) & \\ Trt \(\times\) US & & & \(-0.192\) & \\ & & & (0.131) & \\ Writer & & & & \(0.875^{***}\) \\ & & & & (0.087) \\ Trt \(\times\) Writer & & & & \(-0.063\) \\ & & & & (0.123) \\ Constant & \(96.399^{***}\) & \(95.957^{***}\) & \(96.055^{***}\) & \(96.251^{***}\) \\ & (0.033) & (0.036) & (0.035) & (0.036) \\ \hline Observations & 194,700 & 194,700 & 194,700 & 194,700 \\ R\({}^{2}\) & 0.00002 & 0.008 & 0.006 & 0.001 \\ \hline \hline \end{tabular} _Notes_: In Column (1) we show the overall effect of the treatment to the percentage of correctly spelled words on a jobseekers’ resume. In Column (2) we interact the treatment with a dummy variable for if the jobseeker is in the US, UK, Canada, or Australia. In Column (3) we interact the treatment with a dummy for if the jobseeker is in the US. In Column (4) we interact the treatment with a dummy for if the jobseeker lists Writing as their primary category of desired work. The sample is made up of all jobseekers in the control group of the experimental sample who submitted non-empty resumes and were approved by the platform. Significance indicators: \(p\leq 0.10:*\), \(p\leq 0.05:***\) and \(p\leq.01:***\).
\end{table}
Table 3: Effects of writing assistance on spelling
### Treated workers did not change their job search strategy or behavior
One potential complication for our focus on employer decision-making is that the treatment could have affected jobseekers' search behavior or intensity. If treated jobseekers changed their behavior, knowing they had higher-quality resumes, we could not interpret our treatment effect as being driven by employers' improved perceptions of treated jobseekers. However, we find no evidence that jobseekers changed their search behavior due to the treatment. In the first facet of Figure 5, the outcome is the number of applications a jobseeker sends out over their first 28 days after allocation. We find no effect of the treatment on the number of applications sent.
In the second facet, the outcome is the mean wage bid proposed by the jobseekers on their applications in their first 28 days on the platform. Average wage bids in both the treatment and control group were $24 per hour. The lack of effects on jobseekers' behaviors makes sense because they were unaware of the treatment.
Table 4 shows the effects of the treatment on jobseekers' application behavior. In Column (1) we test whether treated jobseekers applied for more jobs than those in the control group over the experimental period and find that they did not. In Column (2) we find that treated jobseekers do not bid for more hourly jobs than those in the control group. Treated jobseekers could also have bid higher wages, knowing they had better-looking resumes; in Column (3), where we narrow the sample to applications to hourly jobs and look at the effect of the treatment on hourly wage bids, we see no evidence of this.
### The treatment did not affect employer recruiting
Employers were able to seek out workers using the platform's search feature and invite jobseekers to apply to their job openings. In the third facet of Figure 5 from the top, the outcome is the number of invitations to apply for a job that the jobseeker receives in their first month. We find the effect of the treatment on employer invitations is a precise zero. In the fourth facet from the top, the outcome is the number of interviews a jobseeker gives over their first month on the platform. We find that this is also a precise zero.
Although it may seem surprising given the results on hires and contracts, it makes sense given that our experimental sample consists of only new jobseekers to the platform. New entrants almost never appear in the search results when employers search for jobseekers, given that their rank is determined by their platform history.
### Treated jobseekers were more likely to be hired
The treatment raised jobseekers' hiring probability and the number of contracts formed on the platform. In the fifth facet of Figure 5, the outcome is a binary indicator for whether or not a jobseeker is ever hired in their first 28 days on the platform. During the experiment, 3% of jobseekers in the control group worked at least one job on the platform. Treated jobseekers see an 8% increase in their likelihood of being hired in their first month on the platform. In Table 5 Column (1) we report these results in levels.
Jobseekers in the treated group formed 7.8% more contracts overall. In the sixth facet of Figure 5, the outcome is the number of contracts a jobseeker worked on over their first month.
### Hourly wages in formed matches were higher
Treated workers had 8.4% higher hourly wages than workers in the control group. In the seventh facet, the outcome is the mean hourly rate workers earned in jobs they worked over their first month on the platform.4 In the control group, workers on average made $17.17 per hour. In the treatment group, workers made $18.62 per hour, a significant difference at
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{_Dependent variable:_} \\ \cline{2-4} & Num Applications & Num Hourly Applications & Mean Hourly Wage Bid \\ & (1) & (2) & (3) \\ \hline Algo Writing Treatment & 0.010 & 0.009 & \(-0.223\) \\ & (0.028) & (0.018) & (0.391) \\ Constant & 2.466\({}^{***}\) & 1.325\({}^{***}\) & 24.283\({}^{***}\) \\ & (0.020) & (0.012) & (0.276) \\ \hline Observations & 194,700 & 194,700 & 68,664 \\ R\({}^{2}\) & 0.00000 & 0.00000 & 0.00000 \\ \hline \hline \end{tabular} _Notes_: This table analyzes the effect of the treatment on jobseekers’ application behavior. The outcome in Column (1) is the number of total applications a jobseeker sent out between the time the experiment began and one month after it ended. The sample is made up of all jobseekers in the control group of the experimental sample who submitted non-empty resumes and were approved by the platform. The outcome in Column (2) is the number of specifically hourly applications sent out in that same time period. The outcome in Column (3) is the mean hourly wage bid they proposed for those hourly jobs, and the sample narrows to only jobseeker who submitted at least one application to an hourly job.
Significance indicators: \(p\leq 0.10:*\), \(p\leq 0.05:**\) and \(p\leq.01:***\).
\end{table}
Table 4: Effects of writing assistance on jobseekers’ application behavior
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{_Dependent variable:_} \\ \cline{2-5} & \multicolumn{4}{c}{Hired x 100} \\ & (1) & (2) & (3) & (4) \\ \hline Algo Writing Treatment (Trt) & \(0.247^{***}\) & \(0.223^{**}\) & \(0.242^{***}\) & \(0.237^{***}\) \\ & (0.080) & (0.088) & (0.086) & (0.088) \\ Native-English & & \(2.508^{***}\) & & \\ & & (0.146) & & \\ Trt \(\times\)English & & \(0.155\) & & \\ & & (0.207) & & \\ US & & & \(2.602^{***}\) & \\ & & & (0.161) & \\ Trt \(\times\) US & & & \(0.072\) & \\ & & & (0.228) & \\ Writer & & & & \(-0.293^{*}\) \\ & & & & (0.151) \\ Trt \(\times\) Writer & & & & 0.061 \\ & & & & (0.214) \\ Constant & \(3.093^{***}\) & \(2.632^{***}\) & \(2.719^{***}\) & \(3.142^{***}\) \\ & (0.057) & (0.063) & (0.061) & (0.062) \\ \hline Observations & 194,700 & 194,700 & 194,700 & 194,700 \\ R\({}^{2}\) & 0.00005 & 0.003 & 0.003 & 0.0001 \\ \hline \hline \end{tabular} _Notes_: This table analyzes the effect of the treatment on whether or not a jobseeker was ever hired on the platform in the month after they joined, times 100. In Column (1) we show the overall effect of the treatment to hiring. In Column (2) we interact the treatment with a dummy variable for if the jobseeker is in the US, UK, Canada, or Australia. In Column (3) we interact the treatment with a dummy for if the jobseeker is in the US. In Column (4) we interact the treatment with a dummy for if the jobseeker lists Writing as their primary category of desired work. The sample is made up of all jobseekers in the control group of the experimental sample who submitted non-empty resumes and were approved by the platform. Significance indicators: \(p\leq 0.10:*\), \(p\leq 0.05:***\) and \(p\leq.01:***\).
\end{table}
Table 5: Effects of writing assistance on hiring, by sub-groups
the 0.059 level. Since workers did not bid any higher, this result suggests that employers are hiring more productive workers, or that they thought the treated workers were more productive. If it is the latter, the "signaling view" would predict that employers would then be disappointed with the workers they hired, which we should be able to observe in worker ratings.
### Employers gave treated workers slightly better ratings on average
At the end of every contract, employers rate the workers' quality by reporting a private rating to the platform. These ratings are not shared with the worker. In the control group, workers had an average rating of 8.835. In the final facet of Figure 5 we show that treated workers who formed any contracts over the experimental period did not have a lower rating than workers in the control group. We show this result in levels in Table 6-- workers in the treated group have an average rating of 8.84 with a standard error of 0.072.
\begin{table}
\begin{tabular}{l c c} \hline \hline & \multicolumn{2}{c}{_Dependent variable:_} \\ \cline{2-3} & Hourly wage rate & Private rating \\ & (1) & (2) \\ \hline Algo Writing Treatment & 1.448\({}^{*}\) & 0.004 \\ & (0.766) & (0.072) \\ Constant & 17.173\({}^{***}\) & 8.835\({}^{***}\) \\ & (0.558) & (0.052) \\ \hline Observations & 3,542 & 4,433 \\ R\({}^{2}\) & 0.001 & 0.00000 \\ \hline \hline \end{tabular} _Notes: This analysis looks at the effect of treatment on outcomes of worked jobs for jobseekers in the experimental sample. Column (1) defines hourly wage rate as the hourly wage rate paid for all hourly jobs worked. Column (2) defines private rating as the private rating on all jobs given by employers to the workers after the job ended. The experimental sample is of all new jobseekers who registered and were approved for the platform between June 8th and July 14th, 2021 and had non-empty resumes. Significance indicators: \(p\leq 0.10:*\), \(p\leq 0.05:**\) and \(p\leq.01:***\)._
\end{table}
Table 6: Effect of algorithmic writing assistance on wages and ratings of worked jobs
### How much power do we have to detect worse contractual outcomes?
Although the treatment group has slightly more positive ratings, a natural question is how much power we have to detect effects. While we do find a substantial increase in hiring--8%--these marginal hires are mixed in with a much larger pool of "inframarginal" hires that would likely have been hired even without our intervention. How much worse could those marginal hires have been while still producing our result of slightly higher ratings in the treatment group?
Let \(I\) indicate "inframarginal" jobseekers who would have been hired in the treatment or control. Let \(M\) indicate "marginal" jobseekers who are only hired in the treatment. For workers in the control group, the average private rating will be \(\bar{r}_{C}=\bar{r}_{I}\). But for the treatment, the mean rating is a mixture of the ratings for the inframarginal and the ratings for the induced, marginal applicants, and so
\[\bar{r}_{T}=\frac{\bar{r}_{I}+\tau\bar{r}_{M}}{1+\tau} \tag{1}\]
where \(\tau\) is the proportional treatment effect on hiring. We assume no substitution, making our estimates conservative. Rearranging, the implied mean rating for the marginal group is
\[\bar{r}_{M}=\frac{\bar{r}_{T}(1+\tau)-\bar{r}_{C}}{\tau} \tag{2}\]
Of course, \(\bar{r}_{T}\), \(\tau\), and \(\bar{r}_{C}\) are all themselves random variables, and they are not necessarily independent. To compute the sampling distribution of \(\bar{r}_{M}\), we bootstrap both the hiring regressions and the private feedback regressions on the experimental sample.5 Because we do not observe feedback for workers who are never hired, we use the estimated values to calculate \(\bar{r}_{M}\). Figure 6 shows the sampling distribution of \(\bar{r}_{M}\).
Footnote 5: We define this sample as the workers allocated into the experiment who were approved by the platform and had non-empty resumes. From this we bootstrap sample with replacement. We run the hiring regressions on this sample and the ratings regressions on the same samples, narrowed to only those workers who were ever hired.
The treatment and control actual ratings are plotted as solid vertical lines. As expected given the treatment has a slight positive effect on average ratings, the distribution is centered at these mean values.
The dashed line indicates the control mean rating minus one standard deviation of the private ratings (which is 2.34). Comparing this value to the distribution of \(\bar{r}_{M}\), only about 0.025 of the density lies below it. In short, it would be quite surprising for us to get the results we have--an 8% increase in hires and slightly higher (but not significantly higher) ratings--if the actual
marginal hires were a standard deviation worse.
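The sketch below reproduces the logic of Equations (1) and (2) on simulated data, using simple group means in place of the regressions: bootstrap the proportional hiring effect and the mean ratings, then back out the implied mean rating of the marginal hires. Sample size, effect size, and column names are illustrative rather than taken from the platform data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200_000
df = pd.DataFrame({"treat": rng.integers(0, 2, n)})
df["hired"] = rng.binomial(1, 0.030 + 0.0025 * df["treat"])
df["rating"] = np.where(df["hired"] == 1, rng.normal(8.84, 2.34, n), np.nan)

draws = []
for _ in range(500):
    b = df.sample(len(df), replace=True)
    tau = b.loc[b.treat == 1, "hired"].mean() / b.loc[b.treat == 0, "hired"].mean() - 1
    r_c = b.loc[(b.treat == 0) & (b.hired == 1), "rating"].mean()
    r_t = b.loc[(b.treat == 1) & (b.hired == 1), "rating"].mean()
    draws.append((r_t * (1 + tau) - r_c) / tau)          # Equation (2)

print(np.percentile(draws, [2.5, 50, 97.5]))
```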
### Heterogeneous treatment effects on hiring
We might have expected the treatment to have differential effects across these subgroups, particularly since the treatment disproportionately impacted the fraction of words spelled correctly in non-native English speakers' resumes. For hiring outcomes, we might expect, for example, that native English or US-based jobseekers would benefit less and that writers would benefit more. On the other hand, as we saw earlier, writers already make few errors, so the treatment may have had little room to help them.
We have already shown above in Table 3 that the treatment disproportionately impacted the fraction of words spelled correctly in non-native English speakers' resumes. If we look downstream to hiring outcomes, in Table 5, we interact the same groups with the treatment and look at their effect on the probability they were hired. The point estimates are generally quite imprecise and we lack the power to conclude much. While non-native English speakers' writing might benefit more from the treatment, it does not translate into more hires relative to native English speakers.
### Robustness checks
In our main analysis we narrow the sample to only those jobseekers whose profiles were approved by the platform. In Appendix Table 11 we run a similar regression on the full exper
Figure 6: Sampling distribution of the private ratings of marginal hired jobseekers
imental sample, but we include profile approval as a control to see if it affects the estimates. In this analysis, we find that the treatment effect on the number of contracts is slightly smaller when we condition on platform approval: conditioning the sample on only jobseekers whose profiles were approved yields an estimate of 7.8%, compared with 10% in the full sample. The effect on the probability of any hire is 8% in the sample of only approved jobseekers and 8% in the unconditional sample. Controlling for approval and narrowing the sample to only approved jobseekers both "block" the approval channel. In Appendix Table 10 we report the same analysis not conditioned on profile approval. None of these robustness checks change the direction or significance of any of the hiring estimates, and the slightly larger estimates in the unconditional sample are unsurprising given the positive effect of the treatment on platform approval.
## 4 A simple model of the "clarity view" of resume writing
In this section, we formalize a rational model of how the writing intervention could (a) increase hiring but (b) not lead to worse matches. We formalize the argument that better writing allowed employers to better ascertain who was a potential match with a simple model, and show how this kind of interplay between resume quality and hiring could exist in equilibrium.
### A mass of jobseekers with heterogeneous productivity
There is a unit mass of jobseekers. If hired, their productivity is \(\theta_{i}\). Workers are either high-type (\(\theta=\theta_{H}\)) or low-type (\(\theta=\theta_{L}\)), with \(\theta_{H}>\theta_{L}\). Workers know their own type. It is common knowledge that the fraction of high types in the market is \(\gamma\). All workers, if hired, are paid their expected productivity, from the employer's point of view. Hires only last one unit of time.
### Jobseekers decide whether to put effort into resume-writing
Before being hired, jobseekers write resumes. Jobseekers must decide whether to put effort \(e\in\{0,1\}\) into writing that resume. Effort itself is not observable. The cost of this effort is jobseeker-specific, and there is a distribution of individual resume effort costs with support \([0,\bar{c}]\), full mass everywhere, CDF \(F\), and PDF \(f\). Jobseekers who put in no effort incur a resume cost of \(0\), while those who put in effort incur a cost of \(c_{i}\). Critically, this cost is independent of a jobseeker's type, i.e., there is no Spence-like assumption that better workers find it cheaper to create better resumes (Spence, 1978).
Before making an offer, firms observe a signal of jobseekers' type on their resume, \(R\in\{0,1\}\). With effort, a high-type jobseeker generates an \(R=1\) signal; without effort, \(R=0\). A low-type jobseeker generates \(R=0\) no matter what.
Clearly, low-types will never put in effort. The question is whether a high type will put in effort. The decision hinges on whether the cost of resume effort is worth the wage premium it creates. Let \(w_{R=0}\) be the wage paid in equilibrium to jobseekers with \(R=0\). Note that \(w_{R=1}=\theta_{H}\), as there is no uncertainty about a jobseeker's type if \(R=1\).
A jobseeker \(i\) who is a high-type will choose \(e=1\) if \(\theta_{H}-w_{R=0}(\hat{c})>c_{i}\), and so the marginal high-type indifferent between putting in effort or not has a resume-writing cost of
\(\hat{c}\), where
\[\hat{c}=\theta_{H}-w_{R=0}(\hat{c}). \tag{3}\]
This implies that there are \(F(\hat{c})\gamma\) jobseekers that choose \(e=1\). These are the high-type jobseekers with relatively low resume-writing costs. The remaining \([1-F(\hat{c})]\gamma\) high-type jobseekers choose \(e=0\). They are pooled together with the \(1-\gamma\) jobseekers who choose \(e=0\) because they are low-types.
From the employer's perspective, if they believe that the resume effort cost of the marginal high-type jobseeker is \(c\), the probability that an \(R=0\) jobseeker is high-type is
\[p_{H}^{R=0}(c)=\frac{1-F(c)}{1/\gamma-F(c)}. \tag{4}\]
The wage received by an \(R=0\) worker is
\[w_{R=0}(c)=\theta_{L}+(\theta_{H}-\theta_{L})p_{H}^{R=0}(c) \tag{5}\]
When the \(c\) of the marginal jobseeker is higher, more jobseekers find it worth choosing \(e=1\), as \(F^{\prime}(c)>0\). This leaves fewer high-types in the \(R=0\) pool, and so
\[\frac{dp_{H}^{R=0}}{dc}<0. \tag{6}\]
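Differentiating Equation 4 with respect to \(c\) makes this explicit:
\[\frac{dp_{H}^{R=0}}{dc}=\frac{f(c)\left(1-1/\gamma\right)}{\left(1/\gamma-F(c)\right)^{2}}<0,\]
since \(f(c)>0\) and \(\gamma<1\).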
### The equilibrium fraction of high-type workers putting effort into resume-writing
In equilibrium, there is some marginal high-type jobseeker indifferent between \(e=0\) and \(e=1\). Substituting Equation 5 into Equation 3, the indifference condition is
\[(\theta_{H}-\theta_{L})(1-p_{H}^{R=0}(\hat{c}))=\hat{c}.\]
Figure 7 illustrates the equilibrium, i.e., the cost at which the marginal jobseeker is indifferent between \(e=0\) and \(e=1\). The two downward-sloping lines are the pay-offs to the marginal jobseeker at each \(c\). The pay-off to \(R=1\) is declining, as the wage is constant (at \(\theta_{H}\)) while the cost grows linearly. The pay-off to \(R=0\) is also declining, from Equation 6. Both curves are continuous.
Note that when the marginal jobseeker has \(c=0\), only a negligible mass of high-types have a cost that low. This implies that the \(R=0\) pool is essentially the full population, so the wage is just the expected productivity of all jobseekers, \(\gamma\theta_{H}+(1-\gamma)\theta_{L}\). The "marginal" jobseeker pays nothing, so the pay-off to \(e=1\) is \(\theta_{H}\). At the other extreme, \(c=\bar{c}\), all but a negligible mass of high-types have a cost below this, so the \(R=0\) pool is purely low-types and the wage is \(\theta_{L}\); the "marginal" jobseeker choosing \(e=1\) has a cost of \(\bar{c}\), so the pay-off is \(\theta_{H}-\bar{c}\). We know \(\theta_{H}>\gamma\theta_{H}+(1-\gamma)\theta_{L}\), and by assumption \(\theta_{L}>\theta_{H}-\bar{c}\), so by the intermediate value theorem an equilibrium \(\hat{c}\) exists on \((0,\bar{c})\).
### A shift in the resume writing cost distribution leads to more high-type workers choosing to exert effort
Now suppose a technology comes along that lowers--or at least does not raise--resume writing costs for all jobseekers. This would shift \(F\) up at all points except the endpoints of the support, creating a new distribution of costs that is first-order stochastically dominated by the original one.
Before determining the new equilibrium, note that no matter the marginal \(c\), when \(F\)
Figure 7: Equilibrium determination of the marginal high-type jobseeker, who is indifferent between putting effort into a resume and not
increases, the probability that an \(R=0\) worker is a high-type declines, as
\[\frac{dp_{H}^{R=0}}{dF}=\frac{1-1/\gamma}{(1/\gamma-F)^{2}}<0. \tag{7}\]
This shifts the \(w_{R=0}\) curve down everywhere, without changing the endpoints.
Because \(w_{R=1}-c\) is downward sloping, it now intersects \(w_{R=0}(c)\) at a higher value of \(c\). At the new equilibrium, the marginal jobseeker has a resume cost of \(\hat{c}^{\prime}\), where \(\hat{c}^{\prime}>\hat{c}\); more jobseekers choose \(e=1\), producing more \(R=1\) signals. This lowers wages for the \(R=0\) group.
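To illustrate both the equilibrium condition and this comparative static numerically, the sketch below solves for \(\hat{c}\) under an assumed uniform cost distribution on \([0,\bar{c}]\); the parameter values for \(\theta_{H}\), \(\theta_{L}\), \(\gamma\), and \(\bar{c}\) are illustrative, not calibrated.

```python
from scipy.optimize import brentq

theta_H, theta_L, gamma = 2.0, 1.0, 0.5

def p_high_R0(F):
    """Pr(high type | R = 0) when a fraction F of high types exerts effort (Equation 4)."""
    return (1 - F) / (1 / gamma - F)

def solve_c_hat(c_bar):
    """Find c_hat solving (theta_H - theta_L) * (1 - p_H^{R=0}(F(c))) = c."""
    F = lambda c: min(c / c_bar, 1.0)          # uniform cost CDF on [0, c_bar]
    gap = lambda c: (theta_H - theta_L) * (1 - p_high_R0(F(c))) - c
    return brentq(gap, 1e-9, c_bar)            # sign change exists when theta_L > theta_H - c_bar

# Lowering costs (a smaller c_bar shifts F up pointwise) raises c_hat:
# more high types find resume effort worthwhile.
for c_bar in (2.0, 1.6, 1.2):
    c_hat = solve_c_hat(c_bar)
    print(f"c_bar = {c_bar:.1f}: c_hat = {c_hat:.3f}, "
          f"share of high types exerting effort = {c_hat / c_bar:.2f}")
```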
### The effects of lower costs are theoretically ambiguous
Note that this shift in costs is not Pareto improving--low-types are made worse off as they find themselves in a pool with fewer high-types. Furthermore, because workers are all paid their expected product, the ideal outcome would be for everyone to choose \(e=0\). Resume effort merely reallocates the wage bill; it does not change its total. Total surplus is
\[\theta_{H}\gamma+(1-\gamma)\theta_{L}-\int_{0}^{\hat{c}}cf(c)dc, \tag{8}\]
which is maximized at \(\hat{c}=0\), i.e., when no one finds it worthwhile to choose effort. However, with a _shift_ in the cost distribution (raising \(F\)), what matters is whether the decrease in costs for all inframarginal workers, i.e., those with \(c<\hat{c}\), outweighs the costs borne by the (newly) marginal jobseekers who now choose to put in effort.
In our model, all job offers are accepted. However, if we think of jobseekers as having idiosyncratic reservation values that determine whether they accept an offer, the shift in costs makes it more likely that high-types accept an offer and less likely that low-types do. This is consistent with our result of a greater chance that an employer hires at all in the treatment, and with our result of higher wages. Finally, if we think of employer ratings as a function of surplus, our finding of no change in satisfaction is also consistent, as employers are, in all cases, just paying for expected productivity.
## 5 Conclusion
Employers are more likely to hire new labor market entrants with better-written resumes. We argue that better writing makes it easier for employers to decide to hire a particular
worker. We show results from a field experiment in an online labor market where treated workers were given algorithmic writing assistance from Algorithmic Writing Service. These jobseekers were 8% more likely to get hired and formed 7.8% more contracts over the month-long experiment. While one might have expected writing quality to be a valuable indicator of worker quality, the treatment did not affect employers' ratings of hired workers. We provide a model of the hiring process where the cost of exerting effort on a resume is lowered by algorithmic writing assistance, which helps employers to distinguish between high and low-type workers.
One possibility is that the benefits to treated workers came at the expense of other workers, as both treated- and control-assigned workers compete in the same market. Crowd-out concerns have been shown to be important for labor market assistance (Crepon et al., 2013). However, even if the additional hires came at the expense of experienced workers, this is likely still a positive result. New labor market entrants are uniquely disadvantaged in online labor markets (Pallais, 2013). To the extent that the gains to new workers come partially at the expense of experienced workers, this is likely a good trade-off.
Conceptualizing AI/ML innovation and proliferation as a fall in the cost of prediction technology fits our setting (Agrawal et al., 2018, 2018). Writing a resume is, in part, an applied prediction task--what combination of words and phrases, arranged in what order, are likely to maximize my pay-off from a job search? Algorithmic Writing Service reduces the effort or cost required for making these decisions. When revising their resumes, rather than identifying errors in their own predictions themselves, jobseekers with access to Algorithmic Writing Service specify their target audience and writing goals and enter their draft profiles into Algorithmic Writing Service. Algorithmic Writing Service assists jobseekers in error correction. Furthermore, the treatment, by lowering the costs of error-free writing for at least some jobseekers, causes them to do better at writing their resumes.
Interestingly, this algorithmic writing assistance will likely "ruin" writing as a signal. With the proliferation of writing technologies with capabilities far beyond what is explored here (Brown et al., 2020), even if the "signaling view" was at one time true, technological changes are likely to make it not true in the future.
## References
* Agrawal et al. (2015)**Agrawal, Ajay, John Horton, Nicola Lacetera, and Elizabeth Lyons**, "Digitization and the contract labor market," _Economic analysis of the digital economy_, 2015, _219_.
* Agrawal et al. (2018)**Agrawal, Ajay, Joshua Gans, and Avi Goldfarb**, "Prediction, judgment, and complexity: A theory of decision-making and artificial intelligence," in "The economics of artificial intelligence: An agenda," University of Chicago Press, 2018, pp. 89-110.
* Agrawal et al. (2018)**Agrawal, Ajay, Joshua Gans, and Avi Goldfarb**, _Prediction machines: the simple economics of artificial intelligence_, Harvard Business Press, 2018.
* Agrawal et al. (2016)**Agrawal, Ajay, Nicola Lacetera, and Elizabeth Lyons**, "Does standardized information in online markets disproportionately benefit job applicants from less developed countries?," _Journal of International Economics_, 2016, _103_, 1-12.
* _Barach, & Horton_ (2021)**Barach, Moshe A and John J Horton**, "How do employers use compensation history? Evidence from a field experiment," _Journal of Labor Economics_, 2021, _39_ (1), 193-218.
* _Belot, & Kircher, & Muller_ (2018)**Belot, Michele, Philipp Kircher, and Paul Muller**, "Providing Advice to Jobseekers at Low Cost: An Experimental Study on Online Advice," _The Review of Economic Studies_, 10 2018, _86_ (4), 1411-1447.
* _Bertrand and Mullainathan_ (2003)**Bertrand, Marianne and Sendhil Mullainathan**, "Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination," _American Economic Review_, 2003, _94_, 991.
* _Bolton, & Ockenfels_ (2013)**Bolton, Gary, Ben Greiner, and Axel Ockenfels**, "Engineering trust: reciprocity in the production of reputation information," _Management science_, 2013, _59_ (2), 265-285.
* _Brown et al._ (2020)**Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell et al.**, "Language models are few-shot learners," _Advances in neural information processing systems_, 2020, _33_, 1877-1901.
* _Card, & Kluve, & Weber_ (2010)**Card, David, Jochen Kluve, and Andrea Weber**, "Active labour market policy evaluations: A meta-analysis," _The Economic Journal_, 2010, _120_ (548), F452-F477.
* _Chan, & Wang_ (2018)**Chan, Jason and Jing Wang**, "Hiring preferences in online labor markets: Evidence of a female hiring bias," _Management Science_, 2018, _64_ (7), 2973-2994.
* _Crepon et al._ (2013)**Crepon, Bruno, Esther Duflo, Marc Gurgand, Roland Rathelot, and Philippe Zamora**, "Do labor market policies have displacement effects? Evidence from a clustered randomized experiment," _The Quarterly Journal of Economics_, 2013, _128_ (2), 531-580.
* _Farber et al._ (2016)**Farber, Henry S, Dan Silverman, and Till Von Wachter**, "Determinants of callbacks to job applications: An audit study," _American Economic Review_, 2016, _106_ (5), 314-18.
* **Filippas, Apostolos, John Joseph Horton, and Joseph Golden**, "Reputation Inflation," Forthcoming.
* Ghose and Ipeirotis (2010)**Ghose, Anindya and Panagiotis G Ipeirotis**, "Estimating the helpfulness and economic impact of product reviews: Mining text and reviewer characteristics," _IEEE transactions on knowledge and data engineering_, 2010, _23_ (10), 1498-1512.
* Goldfarb and Tucker (2019)**Goldfarb, Avi and Catherine Tucker**, "Digital economics," _Journal of Economic Literature_, 2019, _57_ (1), 3-43.
* Hong et al. (2021)**Hong, Yili, Jing Peng, Gordon Burtch, and Ni Huang**, "Just DM Me (Politely): Direct Messaging, Politeness, and Hiring Outcomes in Online Labor Markets," _Information Systems Research_, 2021, _32_ (3), 786-800.
* Horton (2010)**Horton, John J.**, "Online labor markets," _Internet and Network Economics: 6th International Workshop, WINE 2010, Stanford, CA, USA, December 13-17, 2010. Proceedings_, 2010.
* Horton (2017) --, "The Effects of Algorithmic Labor Market Recommendations: Evidence from a Field Experiment," _Journal of Labor Economics_, 2017, _35_ (2), 345-385.
* Kang et al. (2016)**Kang, Sonia K, Katherine A DeCelles, Andras Tilcsik, and Sora Jun**, "Whitened resumes: Race and self-presentation in the labor market," _Administrative Science Quarterly_, 2016, _61_ (3), 469-502.
* Kessler et al. (2019)**Kessler, Judd B, Corinne Low, and Colin D Sullivan**, "Incentivized resume rating: Eliciting employer preferences without deception," _American Economic Review_, 2019, _109_ (11), 3713-44.
* Kokkodis and Ransbotham (2022)**Kokkodis, Marios and Sam Ransbotham**, "Learning to successfully hire in online labor markets," _Management Science_, 2022.
* Luca and Reshef (2021)**Luca, Michael and Oren Reshef**, "The effect of price on firm reputation," _Management Science_, 2021, _67_ (7), 4408-4419.
* Marinescu and Wolthoff (2020)**Marinescu, Ioana and Ronald Wolthoff**, "Opening the black box of the matching function: The power of words," _Journal of Labor Economics_, 2020, _38_ (2), 535-568.
* Martin-Lacroux and Lacroux (2017)**Martin-Lacroux, Christelle and Alain Lacroux**, "Do Employers Forgive Applicants' Bad Spelling in Resumes?," _Business and Professional Communication Quarterly_, sep 2017, _80_ (3), 321-335.
* **Moss-Racusin, Corinne A, John F Dovidio, Victoria L Brescoll, Mark J Graham, and Jo Handelsman**, "Science faculty's subtle gender biases favor male students," _Proceedings of the National Academy of Sciences_, 2012, _109_ (41), 16474-16479.
* Oreopoulos (2011)**Oreopoulos, Philip**, "Why do skilled immigrants struggle in the labor market? A field experiment with thirteen thousand resumes," _American Economic Journal: Economic Policy_, 2011, \(3\) (4), 148-71.
* Pallais (2013)**Pallais, Amanda**, "Inefficient Hiring in Entry-level Labor Markets," _American Economic Review_, 2013.
* Spence (1978)**Spence, Michael**, "Job market signaling," in "Uncertainty in economics," Elsevier, 1978, pp. 281-306.
* Stanton and Thomas (2016)**Stanton, Christopher T and Catherine Thomas**, "Landing the first job: The value of intermediaries in online hiring," _The Review of Economic Studies_, 2016, _83_ (2), 810-854.
* Sterkens et al. (2021)**Sterkens, Philippe, Ralf Caers, Marijke De Couck, Michael Geamanu, Victor Van Driessche, and Stijn Baert**, "Costly Mistakes: Why and When Spelling Errors in Resumes Jeopardise Interview Chances," _Working paper_, 2021.
## Appendix A Appendix
Figure 8: Daily allocations of jobseekers into experimental cells
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{_Dependent variable:_} \\ \cline{2-5} & Num Contracts & Hired x 100 & Num Hourly Interviews & Num Invitations \\ & (1) & (2) & (3) & (4) \\ \hline Algo Writing Treatment & 0.004\({}^{**}\) & 0.247\({}^{***}\) & 0.002 & 0.001 \\ & (0.002) & (0.080) & (0.004) & (0.003) \\ Constant & 0.047\({}^{***}\) & 3.093\({}^{***}\) & 0.210\({}^{***}\) & 0.142\({}^{***}\) \\ & (0.001) & (0.057) & (0.003) & (0.002) \\ \hline Observations & 194,700 & 194,700 & 194,700 & 194,700 \\ R\({}^{2}\) & 0.00003 & 0.00005 & 0.00000 & 0.00000 \\ \hline \hline \end{tabular} _Notes: This analysis looks at the effect of treatment on hiring outcomes on jobseekers in the experimental sample. Column (1) defines Number of Contracts as the number of unique jobs they work over the month after they register for the platform. Column (2) defines Hired x 100 as one hundred times the probability the jobseeker was hired over that month. Column (3) is the number of interviews they gave over that month. And the Column (4) outcome Invitations is the number of times they were recruited to a job over their first month. The experimental sample is of all new jobseekers who registered and were approved for the platform between June 8th and July 14th, 2021 and had non-empty resumes. Significance indicators: \(p\leq 0.10:*\), \(p\leq 0.05:**\) and \(p\leq.01:***\)._
\end{table}
Table 9: Effect of algorithmic writing assistance on hiring outcomes
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{_Dependent variable:_} \\ \cline{2-5} & Num Contracts & Hired x 100 & Num Hourly Interviews & Num Invitations \\ & (1) & (2) & (3) & (4) \\ \hline \multirow{2}{*}{**Algo Writing Treatment} & \(0.002^{**}\) & \(0.110^{***}\) & \(0.001\) & \(0.001\) \\ & (0.001) & (0.033) & (0.002) & (0.001) \\ Constant & \(0.019^{***}\) & \(1.273^{***}\) & \(0.085^{***}\) & \(0.058^{***}\) \\ & (0.0005) & (0.023) & (0.001) & (0.001) \\ \hline Observations & 480,948 & 480,948 & 480,948 & 480,948 \\ R\({}^{2}\) & 0.00001 & 0.00002 & 0.00000 & 0.00000 \\ \hline \hline \end{tabular} _Notes_: This analysis looks at the effect of treatment on hiring outcomes on jobseekers in the experimental sample. Column (1) defines Number of Contracts as the number of unique jobs they work over the month after they register for the platform. Column (2) defines Hired x 100 as one hundred times the probability the jobseeker was hired over that month. Column (3) is the number of interviews they gave over that month. And the Column (4) outcome Invitations is the number of times they were recruited to a job over their first month. The sample used in this analysis is the entire experimental sample. Significance indicators: \(p\leq 0.10:*\), \(p\leq 0.05:***\) and \(p\leq.01:***\).
\end{table}
Table 10: Effect of algorithmic writing assistance on hiring outcomes, unconditional on platform approval
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{_Dependent variable:_} \\ \cline{2-5} & Num Contracts & Hired x 100 & Num Hourly Interviews & Num Invitations \\ & (1) & (2) & (3) & (4) \\ \hline \multirow{2}{*}{Algo Writing Treatment} & \(0.002^{**}\) & \(0.104^{***}\) & \(0.001\) & \(0.0004\) \\ & (0.001) & (0.033) & (0.002) & (0.001) \\ \multirow{2}{*}{Approved by Platform} & \(0.048^{***}\) & \(3.171^{***}\) & \(0.210^{***}\) & \(0.142^{***}\) \\ & (0.001) & (0.033) & (0.002) & (0.001) \\ Constant & \(-0.0003\) & \(-0.013\) & \(-0.0003\) & \(0.00001\) \\ & (0.001) & (0.027) & (0.001) & (0.001) \\ \hline Observations & 480,948 & 480,948 & 480,948 & 480,948 \\ R\({}^{2}\) & 0.011 & 0.019 & 0.030 & 0.023 \\ \hline \hline \end{tabular} _Notes_: This analysis looks at the effect of treatment on hiring outcomes on jobseekers in the experimental sample. Column (1) defines Number of Contracts as the number of unique jobs they work over the month after they register for the platform. Column (2) defines Hired x 100 as one hundred times the probability the jobseeker was hired over that month. Column (3) is the number of interviews they gave over that month. And the Column (4) outcome Invitations is the number of times they were recruited to a job over their first month. The sample used in this analysis is the entire experimental sample. Significance indicators: \(p\leq 0.10:*\), \(p\leq 0.05:***\) and \(p\leq.01:***\).
\end{table}
Table 11: Effect of algorithmic writing assistance on hiring outcomes, controlling for platform approval
\begin{table}
\begin{tabular}{p{85.4pt}|p{113.8pt}|p{113.8pt}} \hline
**Category** & **Description** & **Examples** \\ \hline American English Phrases & Sentence favors the American English spelling of words. & _apologize_, _catalog_, _cavilization_, _defense_ \\ \hline British English, Oxford Spelling & Sentence favors the British English spelling of words. & _apologise_, _catalog_, _cavilisation_, _defence_ \\ \hline Capitalization & Rules about detecting uppercase words where lowercase is required and vice versa. & _This house is old. it was built in 1950. I really like Harry potter_. \\ \hline Collocations & A collocation is made up of two or more words that are commonly used together in English. This refers to an error in this type of phrase. & _Undoubtedly, this is the result of an extremely dynamic development of Lublin in the recent years. I will take it in to account. It’s better to be save then sorry._ \\ \hline Commonly Confused Words & Words that are easily confused, like ’there’ and ’their’ in English. & _I have my won bed. Their elicit behavior got the students kicked out of school.It’s the worse possible outcome._ \\ \hline Grammar & Violations related to system of rules that allow us to structure sentences. & _Tom make his life worse. A study like this one rely on historical and present data.This is best way of dealing with errors._ \\ \hline Miscellaneous & Miscellaneous rules that don’t fit elsewhere. & _This is best way of dealing with errors_. _The train arrived a hour ago. It’s nice, but it doesn’t work. (inconsistent apostro-phes)_ \\ \hline Nonstandard & & _I never have been to London. List the names in an alphabetical order. Why would a man all of the sudden send flowers?_ \\ \hline Possible Typo & Spelling issues. & _It’a extremely helpful when it comes to homework. We haven’t earned anything.This is not a HIPPA violation._ \\ \hline Punctuation & Error in the marks, such as period, comma, and parentheses, used in writing to separate sentences and their elements and to clarify meaning. & _'Tm over here, she said. Huh I thought it was done already. The U.S.A is one of the largest countries._ \\ \hline Redundant Phrases & Redundant phrases contain words that say the same thing twice. When one of the words is removed, the sentence still makes sense. Sometimes the sentence has to be slightly restructured, but the message remains the same. & _We have more than 100+ customers. He did it in a terrible way. The money is sufficient enough to buy the sweater._ \\ \hline Semantics & Logic, content, and consistency problems. & _It allows us to both grow, focus, and flourish. On October 7, 2025, we visited the client.This was my 11nd try._ \\ \hline Style & General style issues not covered by other categories, like overly verbose wording. & _Moreover, the street is almost entirely residential. Moreover, it was named after a poet. Doing it this way is more easy than the previous method. I’m not very experienced too. Anyways, I don’t like it._ \\ \hline Typography & Problems like incorrectly used dash or quote characters. & _This is a sentence with two consecutive spaces_. I have 3dogs.The price rose by \$12,50. \\ \hline \end{tabular}
\end{table}
Table 12: Description of Error Rule Categories with Examples
\begin{table}
\begin{tabular}{l l l} \hline \hline & Total Errors & Error Rate \\ \hline Capitalization Errors & 0.112 (0.488) & 0.003 (0.015) \\ Possible Typo & 2.350 (8.098) & 0.041 (0.102) \\ Grammar Errors & 0.195 (0.541) & 0.004 (0.012) \\ Punctuation Errors & 0.654 (2.096) & 0.010 (0.048) \\ Typographic Errors & 0.758 (2.356) & 0.015 (0.071) \\ Style Errors & 0.343 (0.933) & 0.004 (0.012) \\ Miscellaneous Errors & 0.094 (0.353) & 0.002 (0.008) \\ Redundant Phrases & 0.027 (0.172) & 0.000 (0.003) \\ Nonstandard Phrases & 0.002 (0.052) & 0.000 (0.001) \\ Commonly Confused Words & 0.008 (0.093) & 0.000 (0.002) \\ Collocations & 0.013 (0.125) & 0.000 (0.003) \\ Semantic Errors & 0.007 (0.113) & 0.000 (0.001) \\ \hline \hline \end{tabular} _Notes: This table reports means and standard errors of the writing errors in the resumes of the control group. The first column displays the total error count and the second column displays the error rate (normalized by word count). Writing errors are defined by LanguageToolR. The sample is made up of all jobseekers in the control group of the experimental sample who submitted non-empty resumes and were approved by the platform._
\end{table}
Table 13: Summary statistics on error counts and rates in the control group
\begin{table}
\begin{tabular}{l c c} \hline \hline & \multicolumn{2}{c}{_Dependent variable:_} \\ \cline{2-3} & Number of Contracts & Hired \\ & (1) & (2) \\ \hline Capitalization Error & \(-0.010^{***}\) & \(-0.006^{***}\) \\ & (0.002) & (0.001) \\ Possible Typo & 0.0002 & 0.0001 \\ & (0.0001) & (0.0001) \\ Grammar Error & \(-0.002\) & \(-0.001\) \\ & (0.002) & (0.001) \\ Punctuation Error & \(0.006^{***}\) & \(0.003^{***}\) \\ & (0.001) & (0.0003) \\ Typography Error & \(-0.001\) & \(-0.001^{**}\) \\ & (0.0005) & (0.0002) \\ Style Error & \(0.010^{***}\) & \(0.006^{***}\) \\ & (0.001) & (0.001) \\ Miscellaneous Error & \(-0.010^{***}\) & \(-0.004^{***}\) \\ & (0.003) & (0.002) \\ Redundant Phrases & \(0.019^{***}\) & \(0.012^{***}\) \\ & (0.007) & (0.003) \\ Nonstandard Phrases & \(0.071^{***}\) & \(0.026^{**}\) \\ & (0.022) & (0.011) \\ Commonly Confused Words & \(-0.028^{**}\) & \(-0.014^{**}\) \\ & (0.012) & (0.006) \\ Collocations & \(-0.007\) & \(-0.006\) \\ & (0.009) & (0.005) \\ Semantic Error & \(-0.014\) & \(-0.007\) \\ & (0.010) & (0.005) \\ Constant & \(0.140\) & \(0.152^{**}\) \\ & (0.142) & (0.072) \\ \hline Controls & X & X \\ Observations & 93,725 & 93,725 \\ R\({}^{2}\) & 0.004 & 0.005 \\ \hline \hline \end{tabular} _Notes_: This table analyzes correlation between various writing errors on jobseekers’ resumes and their hiring outcomes. Column (1) defines Number of Contracts as the number of unique jobs they work over the month after they register for the platform. Column (2) defines Hired as the probability the jobseekers was hired over that month. Column (3) is the number of interviews they gave over that month. And the Column (4) outcome Invitations is the number of times they were recruited to a job over their first month. All analysis includes controls for profile hourly rate and job category. Writing errors are defined by LanguageToolR. The sample is made up of all jobseekers in the control group of the experimental sample who submitted non-empty resumes and were approved by the platform.
Significance indicators: \(p\leq 0.10:*\), \(p\leq 0.05:**\) and \(p\leq.01:***\).
\end{table}
Table 14: Hiring outcomes predicted based on language errors in the control group
|
2302.06298
|
Hyperspectral Image Super Resolution with Real Unaligned RGB Guidance
|
Fusion-based hyperspectral image (HSI) super-resolution has become
increasingly prevalent for its capability to integrate high-frequency spatial
information from the paired high-resolution (HR) RGB reference image. However,
most of the existing methods either heavily rely on the accurate alignment
between low-resolution (LR) HSIs and RGB images, or can only deal with
simulated unaligned RGB images generated by rigid geometric transformations,
which weakens their effectiveness for real scenes. In this paper, we explore
the fusion-based HSI super-resolution with real RGB reference images that have
both rigid and non-rigid misalignments. To properly address the limitations of
existing methods for unaligned reference images, we propose an HSI fusion
network with heterogenous feature extractions, multi-stage feature alignments,
and attentive feature fusion. Specifically, our network first transforms the
input HSI and RGB images into two sets of multi-scale features with an HSI
encoder and an RGB encoder, respectively. The features of RGB reference images
are then processed by a multi-stage alignment module to explicitly align the
features of RGB reference with the LR HSI. Finally, the aligned features of RGB
reference are further adjusted by an adaptive attention module to focus more on
discriminative regions before sending them to the fusion decoder to generate
the reconstructed HR HSI. Additionally, we collect a real-world HSI fusion
dataset, consisting of paired HSI and unaligned RGB reference, to support the
evaluation of the proposed model for real scenes. Extensive experiments are
conducted on both simulated and our real-world datasets, and it shows that our
method obtains a clear improvement over existing single-image and fusion-based
super-resolution methods on quantitative assessment as well as visual
comparison.
|
Zeqiang Lai, Ying Fu, Jun Zhang
|
2023-02-13T11:56:45Z
|
http://arxiv.org/abs/2302.06298v1
|
# Hyperspectral Image Super Resolution with Real Unaligned RGB Guidance
###### Abstract
Fusion-based hyperspectral image (HSI) super-resolution has become increasingly prevalent for its capability to integrate high-frequency spatial information from the paired high-resolution (HR) RGB reference image. However, most of the existing methods either heavily rely on the accurate alignment between low-resolution (LR) HSIs and RGB images, or can only deal with simulated unaligned RGB images generated by rigid geometric transformations, which weakens their effectiveness for real scenes. In this paper, we explore the fusion-based HSI super-resolution with real RGB reference images that have both rigid and non-rigid misalignments. To properly address the limitations of existing methods for unaligned reference images, we propose an HSI fusion network with heterogenous feature extractions, multi-stage feature alignments, and attentive feature fusion. Specifically, our network first transforms the input HSI and RGB images into two sets of multi-scale features with an HSI encoder and an RGB encoder, respectively. The features of RGB reference images are then processed by a multi-stage alignment module to explicitly align the features of RGB reference with the LR HSI. Finally, the aligned features of RGB reference are further adjusted by an adaptive attention module to focus more on discriminative regions before sending them to the fusion decoder to generate the reconstructed HR HSI. Additionally, we collect a real-world HSI fusion dataset, consisting of paired HSI and unaligned RGB reference, to support the evaluation of the proposed model for real scenes. Extensive experiments are conducted on both simulated and our real-world datasets, and it shows that our method obtains a clear improvement over existing single-image and fusion-based super-resolution methods on quantitative assessment as well as visual comparison. The code and dataset are publicly available at [https://zeqiang-lai.github.io/HSI-RefSR/](https://zeqiang-lai.github.io/HSI-RefSR/).
Hyperspectral Imaging, Hyperspectral Image Fusion, Hybrid Camera System, Super-Resolution
## I Introduction
Hyperspectral imaging systems are designed to collect and process abundant spectral information from across the electromagnetic spectrum. Different from conventional RGB cameras, spectral imaging systems divide the spectrum into many more bands than three, which provides higher spectral resolution. However, limited by existing imaging techniques, higher spectral resolution often comes at the expense of lower spatial resolution. This limits the applications of HSI in fields such as remote sensing [1, 2, 3], classification [4, 5, 6], and others [7, 8].
With the aim of lifting the spatial resolution, most recent works [9, 11, 12] follow the paradigm of single image super-resolution (SISR), which upsamples the spatial resolution given only the single LR HSI. These methods usually depend upon the powerful learning capability of various complex convolutional neural networks (CNNs) to reconstruct missing high-frequency details. For example, Li _et al._[11] propose a mixed convolutional network (MCNet) that utilizes both 2D and 3D convolutions. Jiang _et al._[9] introduce SSPSR, which explores spatial and spectral priors with group convolution. Fu _et al._[12] extend the 3D-CNN with a bi-directional quasi-recurrent neural network to enhance inter-spectral interactions. Though progress has been made, the performance of these approaches is still physically restricted by the limited information in the LR input, which hinders further improvement, especially for large scaling factors.
To overcome the limitation of SISR, alternative approaches [10, 13, 14] consider HSI super-resolution in hybrid imaging systems, where an aligned HR RGB camera is used to complement the hyperspectral counterpart. With these systems, paired aligned data can be obtained, and various optimization-based [13, 15] and CNN-based methods [10, 16] have been proposed to transfer high-frequency details from the HR RGB reference image for the reconstruction of the HR HSI from the captured LR HSI. These methods usually perform better than SISR approaches, but they heavily rely on a complex imaging system and careful calibration to ensure precise alignment, which weakens their effectiveness for practical applications. To relax the strong assumptions of existing fusion-based approaches, some recent works [17, 18, 19, 20] begin to take the misalignment of RGB reference images into account, _e.g._, Fu _et al._[17] propose an alternating direction method of multipliers (ADMM)-based method for solving
Fig. 1: Illustration of the misalignment between the real RGB reference image and LR HSI. It can be observed there is a tiny offset for the white box. For the comparison of different methods, our method produces the clearest result on the premise of proper alignment. The SISR method, _i.e._, SSPSR [9], obtains the aligned but more blurred result. The fusion-based method, _i.e._, Optimized [10], processes directly on RGB reference without considering the misalignment.
HSI super-resolution with rigidly misaligned RGB references, Qu _et al_. [19] implicitly learn to correlate the spatial-spectral information from unregistered multimodality images through an unsupervised framework that also applies to geometrically misaligned images and references collected at different times and from different sources, and Zheng _et al_. [20] propose a NonRegSRNet that considers more complex misalignment by randomly shifting some pixels of the aligned reference. Nevertheless, most of these methods are still limited in dealing with complex misalignments, and they are often restricted to unsupervised approaches due to the lack of real-world unaligned datasets. As a result, fusion-based HSI super-resolution with real unaligned reference images is still under-explored for real-world dual hybrid camera systems.
In this paper, we explore the fusion-based HSI super-resolution (also dubbed as HSI Fusion) with real-world RGB reference images that have both rigid and non-rigid misalignments. As shown in Figure 1, the RGB reference images under our system share the same scene as the LR HSI but are not necessarily well-aligned. Therefore, we can easily build a dual-camera system using a common commercial tripod to capture paired data, without any special equipment (_e.g._, beam splitter) as in [10]. This makes our approach more economically and technically practical for real-world applications. In order to effectively address the complex misalignment in real HSI-RGB pairs, we propose an HSI fusion network (HSIFN) with heterogeneous feature extractions, multi-stage feature alignments, and attentive feature fusion. Specifically, the input HSI and RGB images are first transformed into two sets of multi-scale features with an HSI encoder and an RGB encoder, respectively. Then, the features of RGB reference images are processed by a multi-stage alignment module to explicitly align the features of RGB reference with the LR HSI. Different from previous works [17, 18] that assume a global rigid geometric transformation, our alignment module performs a pixel-wise transformation by estimating a dense optical flow map for each level of reference features, which makes our model more robust to non-rigid deformation. Moreover, the aligned features of RGB reference are adaptively adjusted with an element-wise weight map, which is computed by fusing the features of RGB reference, LR HSI, and predicted optical flow. This allows our network to selectively focus on more informative regions from the RGB reference while ignoring the incorrect ones, _e.g._, regions falsely aligned by the preceding alignment module. Finally, we combine the aligned features from RGB reference images and LR HSI through a multi-level HSI decoder to generate the reconstructed HR HSI.
Although the misalignment has been a long-standing issue for HSI fusion [17, 21], it is seldom addressed due to the lack of real unaligned datasets. In order to enable the training and evaluation of the proposed method, we collect a real-world HSI fusion dataset, consisting of unaligned high-resolution HSIs captured by a dual-camera system. Each pair of HSIs shares the same scene under different viewpoints, and one of them can be selected to synthesize the multispectral image (MSI) or RGB counterpart for HSI fusion with MSI or RGB guidance. To evaluate the effectiveness of the proposed HSI fusion network, extensive experiments are conducted on both simulated and real-world unaligned datasets. The experimental results show that our method obtains a clear improvement over existing single-image and fusion-based super-resolution methods on quantitative assessment as well as visual comparison.
Our main contributions are summarized as follows.
* We propose an HSI fusion network for the fusion-based HSI super-resolution using real unaligned RGB reference with both rigid and non-rigid transformation.
* We introduce a multi-stage pixel-wise alignment module and an adaptive attention module to address the misalignment between RGB reference and LR HSI.
* We collect an HSI fusion dataset with real unaligned RGB reference for verification, and the experiments demonstrate that the proposed method achieves better performance than previous works on both real and simulated datasets.
## II Related Works
The methods for HSI super-resolution could generally be divided into single image super-resolution methods and reference-based super-resolution methods. In this section, we provide an overview of their recent major approaches.
### _Single Image Super-Resolution_
Single image super-resolution (SISR) [9, 11, 12, 22, 23, 24, 25] has been actively studied in recent years for lifting the spatial resolution of HSI. Due to the ill-posedness of super-resolution, most existing SISR approaches [9, 22] rely on the learning capability of deep convolutional neural networks (CNNs) to recover the missing high-frequency details. For example, Hu _et al_. [22] present an intrafusion network (IFN) that utilizes the spatial-spectral information in one integrated network. Jiang _et al_. [9] propose SSPSR that exploits the spatial and spectral prior with group convolution and channel attention. Since both spatial and spectral information is important for HSI, 3D convolution has been extensively explored. Mei _et al_. [25] propose a novel three-dimensional full convolutional neural network (3D-FCNN) to exploit both the spatial context of neighboring pixels and the spectral correlation of neighboring bands. Li _et al_. [11] design a mixed convolutional network (MCNet) by mixing 2D and 3D convolutions. Fu _et al_. [12] introduce a bidirectional 3D quasi-recurrent neural network (Bi3DQRNN) to explore the structural spatial-spectral correlation and global correlation along spectra. To explicitly enforce the constraints on the spatial and spectral domain, He _et al_. [23] propose a deep Laplacian pyramid network to progressively increase the spatial resolution of HSI, whose spectral characteristics are further enhanced by non-negative dictionary learning. Li _et al_. [24] present a deep spectral difference convolutional neural network (SDCNN) model to learn the mapping between LR HSI and HR HSI, together with a spatial constraint (SCT) strategy. With the development of vision transformer [26], recent works [27, 28] also explore the transformer architecture to better model long-range dependencies. Despite the promising performance these methods have achieved, it is physically difficult for SISR to recover highly textured regions because of the information bottleneck of the LR input.
### _HSI Fusion_
Apart from the SISR approaches, another class of fusion-based HSI super-resolution methods [15, 29, 30, 31, 32] proposes to utilize paired high-resolution RGB or multispectral reference to reconstruct the missing high-frequency details. Previous works following this paradigm usually assume that the paired reference is precisely aligned, which is the key difference between these works and ours. To effectively incorporate the information from the reference, these methods adopt either optimization techniques (_e.g._ matrix factorization [15, 29, 30, 31], Bayesian representation [32, 33], and tensor factorization [34, 35]), or deep CNNs [10, 14, 16]. For instance, Akhtar _et al_. [32] propose to learn the spectral dictionary by using a non-parametric Bayesian model, and apply the dictionary for HSI super-resolution with a generic Bayesian sparse coding strategy. Dong _et al_. [29] formulate the HSI super-resolution as a joint estimation of the hyperspectral dictionary and the sparse codes based on the prior knowledge of the spatial-spectral sparsity of the hyperspectral image. Dian _et al_. [16] propose to refine the optimization-based fusion framework with a learned deep convolutional neural network-based prior. Fu _et al_. [10] present a simple and efficient CNN to replace the hand-crafted prior for HSI fusion in an unsupervised way. Xie _et al_. [36] unfold the iteration-based fusion algorithm and propose to learn the proximal operators and model parameters through a deep CNN. Despite the superiority of these methods over SISR, their performance is limited for real-world applications due to the requirement of accurate alignment.
To alleviate the strong assumption of precise alignment, recent works [17, 18, 21] attempt to design models to handle simulated unaligned data. Specifically, Fu _et al_. [17] and Nie _et al_. [18] propose to register images by estimating the geometric transformation matrix via an alternating minimization algorithm and a spatial transformer network, respectively. However, these methods only work on data with simple geometric misalignment, and cannot handle the non-rigid transformation that is more common in real unaligned data. Zhou _et al_. [21] propose an integrated registration and fusion method for remote sensing datasets, but it suffers from a significant performance drop on natural hyperspectral data, as reported in [19]. Qu _et al_. [19] propose an unsupervised framework that implicitly learns to correlate the spatial-spectral information from unregistered multimodality images and applies to geometrically misaligned images and references collected at different times and from different sources, and Zheng _et al_. [20] propose a NonRegSRNet that considers more complex misalignment by randomly shifting some pixels of the aligned reference. In this work, we consider the HSI fusion with real unaligned RGB guidance. Our architecture is designed to handle the complex misalignment of real data, and we collect the first real HSI fusion dataset for training and evaluation.
## III HSI Fusion Network
Given the input HSI with low spatial resolution \(H^{LR}\) and high-resolution reference RGB image \(R^{Ref}\), the task of HSI fusion is to reconstruct the high-resolution HSI \(H^{SR}\) conditioned on \(R^{Ref}\). For the real-world HSI fusion, the precise alignment between reference \(R^{Ref}\) and input \(H^{LR}\) is usually unattainable due to the high cost and complexity of the required imaging system. Hence, it is essential to properly align the reference with the input before performing the fusion.
To address the aforementioned issue, we propose an HSI fusion network (HSIFN), which consists of three steps, _i.e_., feature extraction, alignment, and fusion. An overview of our HSIFN is shown in Figure 2. It is built with five major components, including an HSI encoder, an RGB encoder, an alignment module, an attention module, and a fusion decoder. Specifically, the two different encoders are responsible for extracting multi-level deep features from input HSI and RGB reference by considering the specific characteristics of each type of image. For each level feature of the reference RGB
Fig. 2: An overview of our unaligned HSI fusion network. It takes LR HSI \(H^{LR\uparrow}\), HR reference RGB image \(R^{Ref}\), and synthetic RGB image of LR HSI \(R^{HSI}\) as inputs. We first extract the multi-level features of the input HSI and RGB images with an HSI encoder and an RGB encoder. Then, we align the reference features \(\{F_{i}^{Ref}\}_{1}^{N}\) using optical flow estimated from two successive coarse-to-fine flow estimators. After that, the aligned reference features \(\{F_{i}^{Ref2}\}_{1}^{N}\) at different levels are further adjusted by an attention module. Finally, we fuse the weighted aligned features with the features of LR HSI \(\{F_{i}^{HSI}\}_{1}^{N}\) using a fusion decoder to produce the final SR HSI \(H^{SR}\).
image, the alignment module estimates a dense optical flow map in a coarse-to-fine manner to perform the pixel-wise warping. After acquiring the aligned reference features, the attention module is employed to compute an element-wise attention weight map to drive the network to attend to more discriminative regions. Then, the weighted aligned features, as well as the HSI features, are further integrated with the fusion decoder to produce the super-resolved (SR) HSI. The details of each network component of our architecture are described in the subsequent sections.
### _HSI & RGB Encoders_
The encoders embed the reference RGB image \(R^{Ref}\) as well as the upsampled HSI \(H^{LR\uparrow}\) into multi-level deep features to extract useful information for the subsequent reconstruction. To better explore the specific characteristics from each type of image, _e.g._, structural spatio-spectral correlation of HSI, we adopt different encoders to extract features of the reference RGB image and HSI as,
\[\begin{split}\{F_{i}^{HSI}\}_{1}^{N}&=\mathbf{E}_ {hsi}(H^{LR\uparrow}),\\ \{F_{i}^{Ref}\}_{1}^{N}&=\mathbf{E}_{rgb}(R^{Ref}), \end{split} \tag{1}\]
where \(N\) is the total number of levels of features and is set to 4 in our network, \(\mathbf{E}_{hsi}\) and \(\mathbf{E}_{rgb}\) are the HSI and RGB encoders, and \(F_{i}^{HSI}\) and \(F_{i}^{Ref}\) are the extracted features of the HSI and RGB reference at the \(i^{th}\) level.
#### III-A1 RGB Encoder
We construct the RGB encoder by stacking a series of 2D convolutional layers with kernel size \(K=5\) to incorporate information within large receptive fields. Following the common practice in [37, 38], we reduce the spatial resolution and increase the number of feature channels as the network goes deeper to extract multi-level features. The detailed network structure is shown in Figure 3(a). The RGB encoder consists of five convolution-activation layers. The first two layers keep the spatial dimension, while the last three halve both the height and width sequentially. The scaled exponential linear unit (SELU) [39] is used as the activation function across all layers. We use the same RGB encoder to extract features of both the reference RGB image \(R^{Ref}\) and the synthetic RGB image \(R^{HSI}\) of the input LR HSI \(H^{LR\uparrow}\) as
\[\begin{split}\{F_{i}^{Ref}\}_{1}^{N}&=\mathbf{E}_ {rgb}(R^{Ref}),\\ \{F_{i}^{Hrgb}\}_{1}^{N}&=\mathbf{E}_{rgb}(R^{HSI} ),\end{split} \tag{2}\]
where \(N\) denotes the number of levels of the multi-level features and is set to 4 in our network.
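A minimal PyTorch sketch of such an RGB encoder is given below. Only the kernel size, the SELU activation, and the striding pattern are taken from the description above; the channel widths and the choice of which layers emit the four feature levels are illustrative assumptions.

```python
import torch.nn as nn

class RGBEncoder(nn.Module):
    """Sketch of the five-layer RGB encoder (kernel size 5, SELU activations).

    Channel widths and the layers that emit the four feature levels are
    illustrative assumptions, not taken from the paper.
    """
    def __init__(self, in_ch=3, widths=(32, 32, 64, 128, 256)):
        super().__init__()
        strides = (1, 1, 2, 2, 2)  # first two keep resolution, last three halve it
        layers, prev = [], in_ch
        for w, s in zip(widths, strides):
            layers.append(nn.Sequential(
                nn.Conv2d(prev, w, kernel_size=5, stride=s, padding=2),
                nn.SELU(inplace=True)))
            prev = w
        self.layers = nn.ModuleList(layers)

    def forward(self, x):
        feats = []
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i >= 1:            # assumed: levels taken after layers 2..5
                feats.append(x)
        return feats              # multi-level feature maps {F_i}_1^N, N = 4
```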
#### III-A2 HSI Encoder
The HSI encoder is built in a similar way as the RGB counterpart except that it uses quasi recurrent convolutional unit (QRU) [40]. Different from conventional convolutional layers, QRU uses a 3D convolutional neural network and a recurrent pooling function to better explore the structural spatio-spectral correlation and global correlation along the spectrum for HSI. In detail, QRU first separately performs two 3D convolutions on the input features \(I\) to obtain a set of pixel-wise weight maps \(W\) and candidate feature maps \(F\) for each band as
\[\begin{split} W&=\sigma\left(h_{w}\otimes I \right),\\ F&=\tanh\left(h_{f}\otimes I\right),\end{split} \tag{3}\]
where \(\sigma\) denotes sigmoid function, \(h_{w},h_{f}\) are two 3D filters, and \(\otimes\) represents the 3D convolution. Then, the candidate feature map of each band \(f_{i}\) is adaptively merged using the weight map \(w_{i}\) in a recurrent manner as
\[h_{i}=(1-w_{i})\odot h_{i-1}+w_{i}\odot f_{i},\ \ \forall i\in[1,b], \tag{4}\]
where \(\odot\) denotes the element-wise multiplication and \(h_{i}\) denotes the fused feature map at the \(i^{th}\) band. The final features \(H\) are the concatenation of \(h_{i}\) for each band.
Following [40], we use the bidirectional QRU (Bi-QRU) for the first layer and the alternating direction scheme is adopted for the subsequent QRU layers. Specifically, the bidirectional QRU essentially computes two sets of features in the forward and backward direction and takes the summation of them as the final features. The alternating direction scheme makes two successive QRU layers merge the features in different directions. These enhance our HSI encoder with the global spectral context without too much computational burden.
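The following PyTorch sketch illustrates the QRU computation of Eqs. (3)-(4) and the bidirectional variant. The kernel size, padding, and channel widths are illustrative assumptions rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class QRU(nn.Module):
    """Sketch of a quasi-recurrent unit: two 3D convolutions produce per-band
    gates W and candidates F (Eq. 3), which are merged by a recurrent pooling
    pass along the spectral dimension (Eq. 4)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv_w = nn.Conv3d(in_ch, out_ch, k, padding=k // 2)
        self.conv_f = nn.Conv3d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x, reverse=False):
        # x: (batch, channels, bands, height, width)
        w = torch.sigmoid(self.conv_w(x))   # gate maps, Eq. (3)
        f = torch.tanh(self.conv_f(x))      # candidate maps, Eq. (3)
        bands = range(x.size(2))
        if reverse:                         # alternating-direction scheme
            bands = reversed(list(bands))
        h, outs = None, []
        for i in bands:
            wi, fi = w[:, :, i], f[:, :, i]
            h = wi * fi if h is None else (1 - wi) * h + wi * fi  # Eq. (4)
            outs.append(h)
        if reverse:
            outs = outs[::-1]
        return torch.stack(outs, dim=2)     # fused maps concatenated over bands

class BiQRU(nn.Module):
    """Bidirectional variant: sum of a forward and a backward QRU pass."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.fwd = QRU(in_ch, out_ch)
        self.bwd = QRU(in_ch, out_ch)

    def forward(self, x):
        return self.fwd(x) + self.bwd(x, reverse=True)
```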
### _Alignment Module_
The key to HSI fusion is to design an effective approach to transfer the high-frequency information from HR RGB reference into LR HSI. When the images are properly aligned, this can be achieved with a simple concatenation of reference and HSI features. However, the same method might be unsuitable for unaligned reference due to the difficulty for subsequent convolutional fusion layers to properly capture the correspondence between features of the RGB reference and LR HSI at different spatial locations.
Fig. 3: Detailed structures of the RGB encoder and HSI encoder. The RGB encoder contains five layers of convolution, and the HSI encoder is made up of four layers of QRU.
With the aim of reducing the negative effect of spatial misalignment for the subsequent reconstruction, we introduce a pixel-wise alignment module to explicitly align the multi-level reference features of RGB reference to the features of LR HSI. Unlike previous works [17, 18] that align the reference with a global rigid homography transformation, our alignment module performs pixel-wise transformation by estimating a dense optical flow map for each level of reference features, which makes our model more robust to non-rigid deformation.
The overall structure of the proposed alignment module is shown in Figure 2. Without sacrificing the representation capability, we first convert the input HSI \(H^{LR\uparrow}\) to a synthetic RGB counterpart \(R^{HSI}\) (dubbed as HSI-RGB) with a spectral response function (SRF) for computational efficiency. To better handle large displacements, following the previous works [38, 41], we perform the optical flow estimation in a coarse-to-fine manner using two successive flow estimators. Specifically, the first flow estimator \(\mathbf{Flow_{1}}\) takes the reference RGB \(R^{Ref}\) and HSI-RGB \(R^{HSI}\) as input and predicts a rough flow map to coarsely align the reference,
\[R^{Ref2}=\mathbf{warp}(R^{Ref},\mathbf{Flow_{1}}(R^{Ref},R^{HSI})). \tag{5}\]
Then, the coarsely-aligned reference \(R^{Ref2}\) and HSI-RGB \(R^{HSI}\) are fed into the second flow estimator \(\mathbf{Flow_{2}}\) to predict a set of refined flow maps to align multi-level reference features,
\[\{V_{i}\}_{1}^{N} =\mathbf{Flow_{2}}(R^{Ref2},R^{HSI}), \tag{6}\] \[\{F_{i}^{Ref2}\}_{1}^{N} =\mathbf{warp}(\{F_{i}^{Ref}\}_{1}^{N},\{V_{i}\}_{1}^{N}).\]
Although any state-of-the-art optical flow network [37, 41, 42, 43] can be adopted as our flow estimator, not every one of them can be effectively trained under our architecture without explicit supervision from ground-truth optical flow. Hence, we empirically choose different networks for datasets with/without sufficient training samples to achieve the best performance. Specifically, an improved version of FlowNetS [44] is adopted and trained from scratch for large datasets (_e.g._, our simulated dataset), and a pretrained PWC-Net [37] is used for small datasets (_e.g._, our real dataset).
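A simplified sketch of this two-stage alignment (Eqs. 5-6) is shown below. The flow estimators are treated as black-box callables (e.g., FlowNetS or PWC-Net), and the grid_sample-based warping and the rescaling of flows across feature levels are assumptions about details not specified above.

```python
import torch
import torch.nn.functional as F

def warp(x, flow):
    """Backward-warp a tensor x (B,C,H,W) with a dense flow field (B,2,H,W)."""
    b, _, h, w = x.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=x.device),
                            torch.arange(w, device=x.device), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow
    # normalise sampling coordinates to [-1, 1] for grid_sample
    gx = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)
    return F.grid_sample(x, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def align_reference(r_ref, r_hsi, ref_feats, flow1, flow2):
    """Two-stage alignment: flow1/flow2 are any flow estimators, ref_feats is
    the list {F_i^Ref}. Resizing each predicted flow to its feature level is
    an assumption."""
    r_ref2 = warp(r_ref, flow1(r_ref, r_hsi))        # Eq. (5), coarse alignment
    flows = flow2(r_ref2, r_hsi)                      # Eq. (6), one flow per level
    aligned = []
    for f, v in zip(ref_feats, flows):
        if v.shape[-2:] != f.shape[-2:]:
            scale = f.shape[-1] / v.shape[-1]
            v = F.interpolate(v, size=f.shape[-2:], mode="bilinear",
                              align_corners=False) * scale
        aligned.append(warp(f, v))                    # {F_i^Ref2}
    return r_ref2, aligned, flows
```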
### _Attention Module_
Despite the previous alignment module being able to align the reference features to some extent, mistakes of flow estimation are unavoidable and can be even more common for HSI fusion due to the lack of ground truth optical flow for explicit guidance. Furthermore, the warping operation itself also introduces misleading ghosting artifacts in the occluded area [45], which are useless and cause confusion in the subsequent fusion step.
On the basis of this analysis, we introduce an attention module to adaptively adjust the importance of each spatial location in the feature map by computing an element-wise attention weight. Before performing attention, we first encode the HSI-RGB \(R^{HSI}\) into multi-level deep features \(\{F_{i}^{Hrgb}\}_{1}^{N}\), using the same RGB encoder as for the RGB reference,
\[\{F_{i}^{Hrgb}\}_{1}^{N}=\mathbf{E}_{rgb}(R^{HSI}). \tag{7}\]
Then, the estimated optical flow, the features of HSI-RGB, and the RGB reference are fed into an attention module to predict the corresponding attention map.
The structure of the proposed attention module is shown in Figure 4. The predicted optical flow \(V_{i}\) is first fed into a feature extractor \(\mathbf{E}_{f}\), _i.e._, a small CNN, to obtain the embedded flow features \(F_{i}^{flow}\),
\[F_{i}^{flow}=\mathbf{E}_{f}(V_{i}). \tag{8}\]
Meanwhile, a point-wise convolutional layer \(\mathbf{G}\) is used to lower the dimensions of HSI and RGB features for computational efficiency. Then, we concatenate the flow features, the compressed HSI and RGB features along the channel dimension and feed the results into a residual convolutional network (ResCNN) to obtain the initial attention weight, which is subsequently normalized with the sigmoid function \(\sigma\),
\[W_{i}=\sigma(\mathbf{ResCNN}([\mathbf{G}(F_{i}^{Ref}),\mathbf{G}(F_{i}^{Hrgb }),F_{i}^{flow}])). \tag{9}\]
Afterward, the normalized weight is element-wise multiplied with the aligned reference features \(F_{i}^{Ref2}\) to produce the final reference features for the fusion decoder,
\[F_{i}^{Ref3}=W_{i}\odot F_{i}^{Ref2}. \tag{10}\]
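A possible PyTorch realization of Eqs. (8)-(10) is sketched below. The depth and width of the flow feature extractor \(\mathbf{E}_{f}\), the structure of the ResCNN (approximated here by a plain convolutional stack), and sharing the point-wise convolution \(\mathbf{G}\) between the HSI-RGB and reference features are assumptions.

```python
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Sketch of the attention module: flow embedding (Eq. 8), weight map
    (Eq. 9), and element-wise re-weighting of the aligned features (Eq. 10)."""
    def __init__(self, feat_ch, hidden_ch=32):
        super().__init__()
        self.flow_enc = nn.Sequential(                     # E_f: small CNN on the flow
            nn.Conv2d(2, hidden_ch, 3, padding=1), nn.SELU(inplace=True),
            nn.Conv2d(hidden_ch, hidden_ch, 3, padding=1), nn.SELU(inplace=True))
        self.compress = nn.Conv2d(feat_ch, hidden_ch, 1)   # G: point-wise convolution
        self.weight_cnn = nn.Sequential(                   # stand-in for the ResCNN
            nn.Conv2d(3 * hidden_ch, hidden_ch, 3, padding=1), nn.SELU(inplace=True),
            nn.Conv2d(hidden_ch, feat_ch, 3, padding=1))

    def forward(self, f_ref, f_hrgb, f_ref_aligned, flow):
        f_flow = self.flow_enc(flow)                       # Eq. (8)
        x = torch.cat([self.compress(f_ref),
                       self.compress(f_hrgb), f_flow], dim=1)
        w = torch.sigmoid(self.weight_cnn(x))              # Eq. (9)
        return w * f_ref_aligned                           # Eq. (10)
```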
### _Fusion Decoder_
With the weighted aligned reference features, a fusion decoder with the skip connection [46] is adopted to predict the final high-resolution HSI. In detail, the multi-level features of RGB reference are first broadcast along the band dimension to match the number of bands of the input LR HSI. Then, the expanded reference features, HSI features at level \(i\), and the decoder features at level \(i-1\) (if they exist)
Fig. 4: Illustration of the computation of attention map. The optical flow \(V_{i}\) is embedded into deep features and the point-wise convolution (PWConv) is used to lower the dimensions of the features of HSI-RGB \(F_{i}^{Hrgb}\) and reference RGB \(F_{i}^{Ref}\).
Fig. 5: Illustration of the fusion decoder. \(F_{i}^{Ref3}\) denotes the weighted aligned reference features at level \(i\). \(F_{i}^{HSI}\) denotes the HSI features at level \(i\).
are concatenated and sent to an upsample-QRU [40] layer to produce the decoder features at level \(i\) (for \(i\in[0,2]\)). After obtaining the decoder features at the last level, another bidirectional-QRU [40] layer is employed to perform the last fusion to generate the final SR output.
The illustration of the fusion decoder is shown in Figure 5. Specifically, the fusion decoder contains four upsampled QRU layers [40] to integrate the features of the reference RGB image and LR HSI at four levels. The upsampled QRU is identical to the QRU described in the previous section on the HSI encoder except that it uses upsampled 3D convolution [47] instead of plain 3D convolution. Similar to the HSI encoder, the last QRU layer is bidirectional and the rest follow the alternating direction scheme. The QRU layer at level \(i\) receives the concatenation of the decoder features \(F_{i-1}^{D}\) at level \(i-1\) (if they exist), the broadcast reference features \(F_{4-i}^{Ref3}\), and the HSI features \(F_{4-i}^{HSI}\) at level \(4-i\), and produces the decoder features at level \(i\), _i.e._,
\[F_{i}^{D}=QRU([F_{i-1}^{D},F_{4-i}^{HSI},broadcast(F_{4-i}^{Ref3})]). \tag{11}\]
For the last QRU layer, it predicts the final SR HSI.
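The band-wise broadcasting and concatenation of Eq. (11) can be sketched as follows, assuming an upsample-QRU layer `qru_up` with a matching input channel count; this is a minimal illustration, not the exact implementation.

```python
import torch

def fuse_level(qru_up, f_dec_prev, f_hsi, f_ref3):
    """One decoder level (Eq. 11): broadcast the 2D reference features along
    the band dimension, concatenate with the HSI (and previous decoder)
    features along channels, and run the upsample-QRU layer."""
    bands = f_hsi.size(2)
    f_ref3_b = f_ref3.unsqueeze(2).expand(-1, -1, bands, -1, -1)  # broadcast over bands
    inputs = [f_hsi, f_ref3_b] if f_dec_prev is None else [f_dec_prev, f_hsi, f_ref3_b]
    return qru_up(torch.cat(inputs, dim=1))
```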
## IV Real HSI Fusion Dataset
Most existing works on HSI fusion [10, 17, 18, 34] either assume the precise alignment between the reference image and input HSI or only consider the simulated unaligned data generated by geometric transformation. The performance and the generalization capabilities of these methods on real data are usually not taken into account due to the lack of appropriate real unaligned HSI fusion datasets.
In order to validate the performance of our approach, we collect a new real-world unaligned HSI fusion dataset, called Real-HSI-Fusion, which consists of 60 pairs of unaligned high-resolution HSIs. An overview of the dataset is shown in Figure 6; it contains different types of scenes, including indoor scenes, outdoor scenes, buildings, and objects. Each pair of HSIs shares the same scene under different viewpoints, but the two HSIs are not precisely aligned.
In detail, we employ two SOC710-VP hyperspectral cameras from Surface Optics Corporation (SOC), USA, for the HSI imaging. Each camera is equipped with a silicon-based charge-coupled device (CCD) and an integrated scanning system to capture HSI with 696 \(\times\) 520 pixels in spatial resolution and 128 spectral bands from 376.76 \(nm\) to 1037.77 \(nm\) at a 5.16 \(nm\) interval. The dynamic range of each HSI is 12 bits, so the spectral value ranges from 0 to 4095. We use a commercial dual camera mount tripod to fix the two cameras as shown in Figure 6. Due to the lack of autofocus, we manually adjust the exposure time, focal length, and camera position to maximize the clarity and overlapped region for each scene.
After acquiring the raw HSI data, we coarsely align the image pairs by estimating the affine transformation matrix with SIFT [48] and RANSAC [49] using the synthetic RGB counterparts. Then, we manually crop the overlapped region for each pair to remove the disjoint border. Following the common practice in [50, 51], we select 31 bands ranging from about 400 nm to 700 nm in the visible spectral range to construct the final dataset that consists of 60 HR HSI-HSI pairs, which is similar in size to existing HSI datasets, _e.g._, CAVE and Harvard. Ten pairs of HSIs are randomly selected for testing and the rest are used for training.
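A sketch of this coarse alignment step with OpenCV is given below. The ratio-test threshold and the use of `estimateAffine2D` are assumptions, since only SIFT and RANSAC are specified above.

```python
import cv2
import numpy as np

def coarse_affine_align(rgb_src, rgb_dst):
    """Estimate an affine transform between the synthetic RGB counterparts of
    an HSI pair with SIFT + RANSAC and warp the source onto the target."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(cv2.cvtColor(rgb_src, cv2.COLOR_RGB2GRAY), None)
    k2, d2 = sift.detectAndCompute(cv2.cvtColor(rgb_dst, cv2.COLOR_RGB2GRAY), None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good])
    dst = np.float32([k2[m.trainIdx].pt for m in good])
    A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    h, w = rgb_dst.shape[:2]
    return cv2.warpAffine(rgb_src, A, (w, h)), A
```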
It should be noted that the dataset consists of pairs of HSIs captured by two HSI cameras. The complementary RGB image is synthesized from one HSI by multiplying it with the spectral response matrix of Nikon D700 as in [19, 32, 52]. One might wonder why we do not simply capture the RGB reference with an RGB camera. The reasons behind such a choice are mainly to make the dataset more flexible and useful for other slightly different settings without collecting similar datasets again. For example, by using our dataset, it is possible to generate different synthetic RGB references using different camera response functions. Besides, it is also feasible to synthesize an unaligned multispectral reference image (MSI) for HSI SR with an unaligned MSI reference. Further, our dataset can also be used for spectral super-resolution with an unaligned LR HSI reference, where two HSIs are required.
## V Experiments
In this section, we provide the experimental results on both the simulated dataset and our real HSI fusion dataset. We also provide an ablation study and discussion to verify the effectiveness of each proposed network component.
### _Experimental Settings_
#### V-A1 Dataset
To evaluate the proposed method under different levels of misalignment, we perform the experiments on
Fig. 6: (a) **Illustration of Real HSI Fusion Dataset.** The dataset includes paired unaligned HSIs from indoor scenes, outdoor scenes, objects, and buildings. The first and second rows show the synthetic RGB images from the captured HSIs. **(b) Imaging System.** The imaging system is made up of two HSI cameras mounted on a tripod. **(c) Data Distribution.**
the simulated dataset (with small misalignment) as well as the real dataset we collected (with relatively larger misalignment). Different from previous works [17, 18], we construct the simulated dataset by synthesizing the HSIs with 31 bands from real unaligned RGB-RGB image pairs in the light field dataset Flowers [53], using HSCNN+ [54], which is a recent state-of-the-art deep-learning model for hyperspectral recovery from RGB images. The simulated dataset contains 3343 pairs of images in the size of \(320\times 512\), where 100 pairs are randomly selected for testing and the rest are used for training. For the real dataset, 10 pairs of images are randomly selected for testing as described in Section IV. For each HSI-HSI pair, we choose one HSI to synthesize the RGB reference (Ref-RGB), and use the same approach to synthesize the RGB counterpart of the input HSI (HSI-RGB). Due to the difference between the two HSI cameras, an extra histogram-based color-matching is performed to alleviate the spectral inconsistency. We generate the LR HSI for the simulated and real datasets using the same approach as [12], _i.e._, the HR image is first blurred using a Gaussian kernel with \(\mu=8,\sigma=3\) and then downsampled with the specified scale factor.
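The LR HSI generation can be sketched as follows; treating the reported \(\mu=8\) as the blur kernel size is an assumption, and only \(\sigma=3\) is applied explicitly here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_hsi(hr_hsi, scale=4, sigma=3):
    """Generate the LR HSI from an HR HSI (bands, H, W) as in [12]:
    band-wise Gaussian blur followed by strided downsampling."""
    bands = hr_hsi.shape[0]
    blurred = np.stack([gaussian_filter(hr_hsi[b], sigma=sigma) for b in range(bands)])
    return blurred[:, ::scale, ::scale]  # simple strided downsampling
```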
#### V-A2 Implementation Details
We implement the proposed fusion network using PyTorch [55]. The AdamW [56] optimizer is adopted to minimize the smooth \(L_{1}\) loss between the predicted SR HSI and the corresponding ground truth. The weight decay rate of the optimizer is set to \(5\times 10^{-5}\). The batch size is set to 1. For the simulated dataset, we train the network for 50 epochs with a learning rate set to \(1\times 10^{-4}\). For the real dataset, we train the network for 200 epochs with a learning rate set to \(1\times 10^{-5}\). Besides, several strategies are used to improve the performance on the real dataset: (1) We use the pretrained PWC-Net [37] as our flow estimator for the real dataset, since training the flow estimator from scratch is extremely difficult without sufficient supervision (insufficient training samples and the lack of ground-truth optical flow). (2) We rescale the weight of the HSI features and the reference features to balance the importance of each type of feature for different scale factors. (3) We pretrain our network with the HR HSI-RGB and then fine-tune on the LR HSI-RGB. This allows our network to distill knowledge from the easier HR-HR matching to guide the more ambiguous LR-HR matching.
#### V-A3 Compared Methods
We compare our method with eight state-of-the-art methods, including three SISR methods (_i.e._ Bi-3DQRNN [12], MCNet [11], SSPSR [9]) and five fusion-based methods (_i.e._, NSSR [29], Optimized [10], Integrated [21], u2MDN [19], Non-Reg [20]). For deep-learning-based
Fig. 7: Visual comparison on simulated unaligned dataset under scale factors of 4 and 8. Our method produces sharper details than competing SISR approaches and more aligned results over fusion-based methods. Zoom in for details.
methods, we train the networks using the recommended hyperparameters and the same dataset as ours. For optimization-based methods, we empirically select the best parameters to achieve their best performance.
#### V-A4 Evaluation Metrics
We employ two sets of quantitative quality metrics for systematic evaluation of ours and competing methods. PSNR and SSIM [57] are used to evaluate the spatial fidelity of the super-resolved HSI. SAM [58] is employed to measure spectral similarity. PSNR and SSIM are calculated as the average of the bandwise results for each HSI. Larger values of PSNR and SSIM suggest better performance, while a smaller value of SAM implies better performance.
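For reference, band-wise PSNR and SAM can be computed as in the sketch below (SSIM is typically taken from an existing implementation such as scikit-image); the small numerical constants are assumptions added for stability.

```python
import numpy as np

def psnr_bandwise(gt, pred, data_range=1.0):
    """Average of per-band PSNR values for (bands, H, W) arrays."""
    mse = np.mean((gt - pred) ** 2, axis=(1, 2))
    return np.mean(10.0 * np.log10(data_range ** 2 / np.maximum(mse, 1e-12)))

def sam(gt, pred, eps=1e-8):
    """Spectral angle mapper (degrees), averaged over all pixels."""
    g = gt.reshape(gt.shape[0], -1)
    p = pred.reshape(pred.shape[0], -1)
    cos = np.sum(g * p, axis=0) / (np.linalg.norm(g, axis=0) *
                                   np.linalg.norm(p, axis=0) + eps)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```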
### _Results on Synthetic Data_
In this part, we provide the experimental results on the simulated dataset generated from real unaligned RGB-RGB pairs. The quantitative results are shown in Table I. It can be observed that our approach is significantly better than all the SISR methods with over 4 and 8 dB improvement on PSNR for scale factors of 4 and 8, respectively. Specifically, the SISR methods could achieve relatively satisfactory results on the scale factor of 4, but their performance significantly drops for a larger scale factor of 8 (over 7 dB on PSNR). This is because SISR relies on context information of LR input to guide the reconstruction of high-frequency details, but such context information is insufficient for higher scale factors, thus resulting in notable performance degradation. On the contrary, our method retains comparably better performance with a lower decline in PSNR, which demonstrates the advantage of utilizing additional reference images. When compared with other fusion-based methods, our method outperforms them by an even larger margin. Besides, it can be observed that the performance of these methods is similar for different scale factors. The reason for this phenomenon is that even though these fusion-based approaches produce visually clear outputs as shown in Figure 7, their results are, in fact, not properly aligned with the ground truth. On the contrary, our method successfully aligns and transfers the high-frequency details from the reference image and produces the finest results. Overall, the experimental results on the simulated dataset indicate that proper alignment is crucial for boosting performance in unaligned fusion-based HSI super-resolution.
Fig. 8: Visual comparison on our real HSI fusion dataset under scale factors of 4 and 8. Our method reconstructs sharper details while properly aligning with the ground truth. Zoom in for details.
### _Results on Real Data_
Different from the simulated dataset, the real dataset is notably more challenging due to the larger misalignment and fewer data samples. As shown by the quantitative results in Table II, all the competing methods as well as ours suffer from a performance drop in terms of PSNR gain over Bicubic when compared with the results on simulated data. Nevertheless, our approach still outperforms all the competing methods in terms of PSNR and SSIM. In particular, three fusion-based methods are notably worse than ours and exhibit similar results on scale factors of 4 and 8. This is largely because the results of these methods are not properly aligned (_i.e_. Integrated [21], u2MDN [19], Non-Reg [20]), or aligned to the RGB reference (_i.e_. NSSR [29] and Optimized [10]). On the contrary, our model achieves better results by being equipped with an alignment module as well as an attention module to align the reference image. The visual comparison of different methods is provided in Figure 8. Our proposed method outperforms other methods in terms of visual quality, generating sharper details while properly aligned to the ground truth. NSSR [29] and Optimized [10] also produce results with fine details but their results are aligned to the RGB reference image. The other three SISR approaches produce more blurred results than ours as they are unable to utilize the high-frequency information from high-resolution RGB guidance.
### _Ablation Study_
In this section, we provide the results of several ablation studies to investigate our proposed method. We first perform the break-down ablation on the simulated dataset without pre-training to analyze the impact of each component of our network. Then, we verify the effectiveness of each component by comparing it with other variants on our real dataset.
_Effect of QRU-based Encoder and Decoder._ The quasi-recurrent unit (QRU) is used for merging the features along the spectrum dimension, which is shown to be helpful for improving the quality of the reconstructed HSI. Therefore,
Fig. 9: Visual comparison on the simulated and real dataset under the scale factor of 16. Our method reconstructs more details while properly aligning with the ground truth. Zoom in for details.
we employ the QRU as the basic building block for our HSI encoder and final fusion decoder. To demonstrate the effectiveness of this design choice, we conduct an ablation study that compares the performance of different models w/ and w/o QRU and Bi-QRU. The results are shown in Table V. Similar to [40], Bi-QRU is used at the first layer of the HSI encoder, and we alternatively change the direction of the following QRUs to achieve the global contextual receptive field. This strategy provides comparable performance to the full-BiQRU strategy, while reducing the total number of parameters.
#### V-A1 Effect of Alignment Module
Alignment is an essential step for the effective fusion of HR RGB reference and LR HSI. As shown in Table IV, the alignment module achieves a significant improvement on PSNR (1.88 dB) and SSIM (0.01) compared with the baseline, which demonstrates its effectiveness. Figure 10 shows the visualization of predicted optical flow on two sample scenes from the simulated and real datasets. We use a pre-trained flow estimator for the real dataset, but the one for the simulated dataset is trained from scratch. Thus, it partially indicates the capability of our model to learn flow estimation even without explicit supervision.
_Effect of Attention Module._ The ablation results of the attention module are shown in Table IV, and we can observe that there is a prominent improvement (0.59 dB on PSNR) after adding the attention module, which verifies its effectiveness. In order to analyze the actual transformation that the attention module learned, we conduct a series of visualizations. As shown in Figure 11, the generated attention map generally outlines the main objects with emphasis on the edges and corners. The reason that the attention module outlines the edges might come from the fact that the displacement of the unaligned image might not produce a difference in non-edge areas. Therefore, the network learns to attend to areas of the reference RGB image that differ from the LR HSI, which are more likely to be areas near the edges. Nevertheless, the actual attention map might encapsulate more complex relations beyond the attention on edges.
_Effect of Pretrained Flow Estimator._ We also evaluate the effectiveness of the pre-training of flow estimators on the real dataset under the scale factor of 4. The experimental results are shown in Table VI. It can be seen that there is a significant improvement (over 4 dB on PSNR) after adding the pre-training for flow estimators.
_Effect of Fusion Module._ We verify the function of the fusion module from two aspects, including (a) the effectiveness of the augmented features from the reference RGB images, and (b) the design of the fusion module, i.e., the use of QRU. To verify (a), we construct a variant of our model where the fusion module only takes the LR HSI features as input. This actually makes our model a SISR model, which also removes the alignment and attention modules. The performance comparison is shown in Table IV. It can be seen that the performance of the SISR version severely drops when the augmented reference features are removed. This proves the usefulness of the reference RGB image. To verify (b), we remove the QRU in the fusion module and evaluate the model performance. The quantitative results are shown in Table VII. It can be seen that the fusion module with QRU is better than the fusion module without QRU. This verifies the effectiveness of the design of our fusion module.
We further report the quantitative results on the real and simulated datasets for the scale factor of 16 in Table III. The visual comparison is shown in Figure 9. It can be seen that our method still outperforms all the competing methods for quantitative results. Overall, for the simulated dataset with sufficient training samples and small misalignment, our method can achieve fairly good performance while obtaining reasonable visual results. For the real dataset, our method may struggle due to the difficulty of cross-scale alignment under large misalignment.
#### V-B2 Computational Complexity
The proposed HSI fusion network contains five different modules, but the overall pipeline is not very complicated and it can be divided into three sequential parts, including feature extraction, feature alignment, and feature fusion. (1) The first part "feature extraction" includes an RGB encoder and an HSI encoder to extract features from reference RGB images and LR HSI. (2) The second part "feature alignment" includes two successive optical-flow estimators and an attention module to align and adjust the features of the reference RGB image. (3) The final part "feature fusion" includes a fusion decoder that fuses the aligned reference features and LR HSI features to predict the final reconstructed HR HSI. For the quantitative analysis, the total number of parameters of different network components is shown in Table VIII. It can be observed that our encoders and fusion decoder are relatively lightweight. The major portion of parameters lies in the optical-flow estimators. Since the design of the optical-flow estimator is not the central topic and contribution of this work, we simply experiment with some classical ones, such as FlowNet [43] and PWCNet [37]. However, our method is not restricted to these flow estimators, and other estimators, such as RAFT [59], could also be used. For example, the model size can be reduced when RAFT is adopted but the FLOPs would be larger than with PWCNet, so it is a trade-off between Params and FLOPs when choosing the flow estimator.
#### V-B3 Bandwise Reconstruction Quality
To better analyze the reconstruction quality of different methods, we visualize the spectral curve of some selected pixels as well as the curve of SSIM for each band on our real HSI fusion dataset. The visualizations are shown in Figure 12. For the reconstruction of the spectral curve, it can be seen that our method (black line) is closest to the ground truth. For the curves of SSIM and PSNR for each band, it can be observed that different bands perform differently in terms of SSIM and PSNR, and our method achieves the best performance on average. The SSIM of the SISR methods, i.e., Bi-3DQRNN [12], MCNet [11], and SSPSR [9], is obviously lower, which is largely due to the absence of the usage of the HR reference image for reconstructing fine-grained details.
## VI Conclusion
In this paper, we introduce a new unaligned HSI fusion network to address the problem of hyperspectral image super-resolution with real unaligned RGB guidance. To deal with the complex misalignment of real unaligned data, a flow-based alignment module is introduced to explicitly align the reference image in the feature space by performing pixel-wise transformation with estimated optical flow. Besides, we propose an element-wise attention module to adaptively adjust the aligned features to drive the network to focus on more discriminative regions, which further improves the performance. Moreover, we collect the first HSI fusion dataset with real unaligned pairs of HSI and RGB reference to provide a benchmark and source of training data for unaligned HSI fusion methods. The experiments demonstrate the promising performance and superiority of our unaligned architecture over existing SISR and fusion-based methods. We hope our work
Fig. 12: Visualization of band-wise statistics. (a) Spectral curve: it can be observed that our results (Ours, blue line) are closest to the ground truth (Target, orange line). (b) SSIM curve: it can be seen that our method achieves the best performance in most spectral bands. (c) PSNR curve: it can be seen that our method achieves the best performance in most spectral bands.
could provide foundations for further research in the field of HSI fusion with unaligned guidance.
|
2310.06089
|
Predictive auxiliary objectives in deep RL mimic learning in the brain
|
The ability to predict upcoming events has been hypothesized to comprise a
key aspect of natural and machine cognition. This is supported by trends in
deep reinforcement learning (RL), where self-supervised auxiliary objectives
such as prediction are widely used to support representation learning and
improve task performance. Here, we study the effects predictive auxiliary
objectives have on representation learning across different modules of an RL
system and how these mimic representational changes observed in the brain. We
find that predictive objectives improve and stabilize learning particularly in
resource-limited architectures, and we identify settings where longer
predictive horizons better support representational transfer. Furthermore, we
find that representational changes in this RL system bear a striking
resemblance to changes in neural activity observed in the brain across various
experiments. Specifically, we draw a connection between the auxiliary
predictive model of the RL system and hippocampus, an area thought to learn a
predictive model to support memory-guided behavior. We also connect the encoder
network and the value learning network of the RL system to visual cortex and
striatum in the brain, respectively. This work demonstrates how representation
learning in deep RL systems can provide an interpretable framework for modeling
multi-region interactions in the brain. The deep RL perspective taken here also
suggests an additional role of the hippocampus in the brain -- that of an
auxiliary learning system that benefits representation learning in other
regions.
|
Ching Fang, Kimberly L Stachenfeld
|
2023-10-09T19:06:25Z
|
http://arxiv.org/abs/2310.06089v2
|
# Predictive auxiliary objectives in deep RL mimic learning in the brain
###### Abstract
The ability to predict upcoming events has been hypothesized to comprise a key aspect of natural and machine cognition. This is supported by trends in deep reinforcement learning (RL), where self-supervised auxiliary objectives such as prediction are widely used to support representation learning and improve task performance. Here, we study the effects predictive auxiliary objectives have on representation learning across different modules of an RL system and how these mimic representational changes observed in the brain. We find that predictive objectives improve and stabilize learning particularly in resource-limited architectures, and we identify settings where longer predictive horizons better support representational transfer. Furthermore, we find that representational changes in this RL system bear a striking resemblance to changes in neural activity observed in the brain across various experiments. Specifically, we draw a connection between the auxiliary predictive model of the RL system and hippocampus, an area thought to learn a predictive model to support memory-guided behavior. We also connect the encoder network and the value learning network of the RL system to visual cortex and striatum in the brain, respectively. This work demonstrates how representation learning in deep RL systems can provide an interpretable framework for modeling multi-region interactions in the brain. The deep RL perspective taken here also suggests an additional role of the hippocampus in the brain- that of an auxiliary learning system that benefits representation learning in other regions.
## 1 Introduction
Deep reinforcement learning (RL) models have shown remarkable success in solving challenging problems (Sutton & Barto, 2018; Mnih et al., 2013; Silver et al., 2016; Schulman et al., 2017). These models use neural networks to learn state representations that support complex value functions. A key challenge in this setting is to avoid degenerate representations that support only subpar policies or fail to transfer to related tasks. Self-supervised auxiliary objectives, particularly predictive objectives, have been shown to regularize learning in neural networks to prevent overfitting or collapsed representations (Lyle et al., 2021; Dabney et al., 2021; Francois-Lavet et al., 2019). As such, it is common to combine deep RL objectives with auxiliary objectives. The modular structure of these multi-objective models can function as a metaphor for how different regions of the brain combine to comprise an expressive, generalizable learning system.
Analogies can readily be drawn between the components of a deep RL system augmented with predictive objectives and neural counterparts. For instance, the striatum has been identified as an RL-like value learning system (Schultz et al., 1997). Hippocampus has been linked to learning predictive models and cognitive maps (Mehta et al., 1997; O'Keefe & Nadel, 1978; Koene et al., 2003). Finally, sensory cortex has been suggested to undergo unsupervised or self-supervised learning akin to feature learning (Zhuang et al., 2021), although reward-selective tuning has also been observed (Poort et al., 2015). Comparing representations across artificial and biological neural networks can provide a useful frame of reference for understanding the extent to which artificial models resemble the brain's mechanisms for robust and flexible learning.
These comparisons can also provide useful insights into neuroscience, where little is known about how learning in one region might drive representational changes across the brain. For instance, the hippocampus is a likely candidate for predictive objectives, as ample experimental evidence has shown that activity in this region is predictive of the upcoming experience of an animal (Skaggs & McNaughton, 1996; Lisman & Redish, 2009; Mehta et al., 1997; Payne et al., 2021; Muller & Kubie, 1989; Pfeiffer & Foster, 2013; Schapiro et al., 2016; Blum & Abbott, 1996; Mehta et al., 2000). These observations are often accounted for in theoretical work as hippocampus computing a predictive model or map (Lisman & Redish, 2009; Mehta et al., 2000; Russek et al., 2017; Whittington et al., 2020; Momennejad, 2020; George et al., 2021; Stachenfeld et al., 2017). Much has been written about how learned predictive models may be used by the brain to simulate different outcomes or support planning (Vikbladh et al., 2019; Geerts et al., 2020; Mattar & Daw, 2018; Miller et al., 2017; Olafsdottir et al., 2018; Redish, 2016; Koene et al., 2003; Foster & Knierim, 2012; McNamee et al., 2021). However, in the context of deep RL, the mere act of learning to make predictions in one region confers substantial benefits to other interconnected regions by shaping representations to incorporate predictive information (Hamrick et al., 2020; Oord et al., 2018; Bengio, 2012). One of the key insights of this work is to propose that an additional role of predictive learning in hippocampus is to drive representation learning that supports deep RL in the brain.
The primary contribution of this paper is to quantify how representations in a deep RL model change with predictive auxiliary objectives, and to identify how these changes mimic representational changes in the brain. We first characterize key functional benefits this auxiliary system confers on learning. We evaluate the effects of predictive auxiliary objectives in a simple gridworld foraging task, and confirm that these objectives help prevent representational collapse, particularly in resource-limited networks. We also observe that longer-horizon predictive objectives are more useful than shorter ones for transfer learning, explaining why novel environments activate longer timescale regions of hippocampus (Fredes et al., 2021). We further demonstrate that a deep RL model with predictive auxiliary objectives undergo a variety of representational phenomena also observed in neural populations in the brain. Downstream objectives can alter activity in the encoder, which is mirrored in various results that show how visual cortical activity is altered by different types of learning. Learning in the prediction module drives activity patterns consistent with activity measured in hippocampus. Overall we find that these interacting objectives explain diverse effects in the neural data not well modeled by considering learning systems in isolation. Moreover, it suggests that deep RL augmented with predictive objectives appears to in many ways mirror the brain's approach to learning.
## 2 Related Work
In deep RL, auxiliary objectives have emerged as a crucial tool for representation learning. These additional objectives require internal representations to support other learning goals besides the primary task of value learning. Auxiliary objectives thus regularize internal representations to preserve information that may be relevant for learning. They are thought to address challenges that may arise in sparse reward environments, such as representation collapse and value overfitting (Lyle et al., 2021). Many auxiliary objectives used in machine learning are predictive in flavor. Prior work has found success in defining objectives to predict reward (Shelhamer et al., 2016; Jaderberg et al., 2016) or to predict future states (Shelhamer et al., 2016; Oord et al., 2018; Wayne et al., 2018) from history. Predictive objectives may be useful for additional functions as well. Intrinsic rewards based on the agent's ability to predict the next state can be used to guide curiosity-driven exploration (Pathak et al., 2017; Tao et al., 2020). These objectives may also aid with transfer learning (Walker et al., 2023), by learning representations that capture features that generalize across diverse domains. The incorporation of auxiliary objectives has greatly enhanced the efficiency and robustness of deep RL models in machine learning applications.
In neuroscience, much theoretical work has sought to characterize brain regions by the computational objective they may be responsible for. Hippocampus in particular has been suggested to learn predictions of an animal's upcoming experience. This has been formalized as ranging from learning a transition model similar to model-based reinforcement learning (Fang et al., 2022) to learning long-horizon
predictions as in the successor representation (Gershman et al., 2012; Stachenfeld et al., 2017). Separately, the striatum has long been suggested to support model-free (MF) reinforcement learning like actor-critic models (Joel et al., 2002), with more recent work connecting these hypotheses to deep RL settings (Dabney et al., 2021; Lindsey and Litwin-Kumar, 2022).
Less work has been done to understand how the computational objectives of multiple brain regions interact, although this has been suggested as a framework for neuroscience (Marblestone et al., 2016; Yamins and DiCarlo, 2016; Botvinick et al., 2020). Within this literature, a number of groups use multi-region recurrent neural networks (Pinto et al., 2019; Andalman et al., 2019; Kleinman et al., 2021) or switching nonlinear dynamical systems (Semedo et al., 2014; Glaser et al., 2020; Karniol-Tambour et al., 2022) to model the interactions of different regions. However, much of this work focuses more on fitting recorded neural activity than taking a normative perspective on brain function. Some work has sought to construct multi-region models based off computational principles expected in different brain regions (Frank and Claus, 2006; O'Reilly and Frank, 2006; Geerts et al., 2020). Overall, though, the work in this area remains sparse.
In this paper, we establish connections between these two traditions of work in machine learning and neuroscience, in particular showing how deep RL networks can be a multi-region model for neuroscience.
## 3 Experimental Methods
**Network architecture.** We implement a double deep Q-learning network (Van Hasselt et al., 2016) with a predictive auxiliary objective, similar to Francois-Lavet et al. (2019) (Fig 1A). A deep convolutional neural network \(E\) encodes observation \(o_{t}\) at time \(t\) into a latent state \(z_{t}\) (\(o_{t}\) will be a 2D image depicting the agent state in a tabular grid world in our experiments). The state \(z_{t}\) is used by two network heads: a Q-learning network \(\hat{Q}(z,a)\) that will be used to select action \(a_{t}\) and a prediction network \(T(z,a)\) that predicts future latent states. Both \(Q\) and \(T\) are multi-layer perceptrons with one hidden layer.
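A minimal PyTorch sketch of this architecture is given below; the convolutional widths, the observation size, and feeding a one-hot action into \(T\) are illustrative assumptions rather than the exact configuration used here.

```python
import torch
import torch.nn as nn

class PredictiveDQN(nn.Module):
    """Sketch of Fig. 1A: a convolutional encoder E, a Q-learning head, and a
    one-hidden-layer prediction head T acting on the latent state z."""
    def __init__(self, obs_channels=1, latent_dim=10, n_actions=4, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(                      # E: o_t -> z_t
            nn.Conv2d(obs_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim))
        self.q_head = nn.Sequential(                       # \hat{Q}(z, .)
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))
        self.t_head = nn.Sequential(                       # T(z, a), action appended as one-hot
            nn.Linear(latent_dim + n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim))

    def forward(self, obs, action_onehot):
        z = self.encoder(obs)
        q = self.q_head(z)
        z_pred = z + self.t_head(torch.cat([z, action_onehot], dim=-1))  # tau(z, a)
        return z, q, z_pred
```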
**Network training procedure.** The agent is trained on transitions \((o_{t},a_{t},o_{t+1},a_{t+1})\) sampled from a random replay buffer. We will also let \(o_{i}\) and \(o_{j}\) denote any two observations randomly sampled from the replay buffer that may not have occurred in sequence.
Figure 1: A deep reinforcement learning framework to model the effects of multi-region computation. **A.** Schematic of the deep RL model we use. Reward is provided as a scalar input \(r\). Observations \(o\) are 2D visual inputs fed into an encoder (green) that learns low-dimensional state space representations \(z\). The encoder is a convolutional neural network. Representations \(z\) are used to learn Q values via a 2-layer MLP (blue); these Q values are used to select actions \(a\). An additional predictive auxiliary objective (orange) is enforced by a separate 2-layer MLP learning predictions from \(z\). **B.** Systems in the brain that are analogous to the encoder, model-free value learning system, and predictive auxiliary task described in (A). A more anatomically accurate and detailed diagram can be found in Appendix Fig A.1.
The weights of \(E\), \(Q\), \(T\) are trained end-to-end to minimize the standard double deep Q-learning temporal difference loss function \(\mathcal{L}_{Q}\) and a predictive auxiliary loss \(\mathcal{L}_{pred}\). The predictive auxiliary loss is similar to that of contrastive predictive coding (Oord et al., 2018). That is, \(\mathcal{L}_{pred}=\mathcal{L}_{+}+\mathcal{L}_{-}\) where \(\mathcal{L}_{+}\) is a positive sampling loss and \(\mathcal{L}_{-}\) is a negative sampling loss.
The positive sample loss is defined as \(\mathcal{L}_{+}=||\tau(z_{t},a_{t})-z_{t+1}-\gamma\tau(z_{t+1},a_{t+1})||^{2}\), where \(z_{t}=E(o_{t})\), \(z_{t+1}=E(o_{t+1})\), and \(\tau(z_{t},a_{t})=z_{t}+T(z_{t},a_{t})\). This encourages learning of transition-based structure in the encoded representations. Additionally, \(\gamma\) modulates the predictive horizon.
The negative sample loss is defined as \(\mathcal{L}_{-}=-\exp||z_{i}-z_{j}||\). This loss drives temporally distant observations to be represented differently, thereby preventing the trivial solution (mapping all latent states to a single point) from being learned. However, we note that negative sampling terms are not always needed to support self-predictive learning if certain conditions are satisfied (Tang et al., 2023). Except where indicated, the agent learns off-policy via a random policy during learning, only using its learned policy at test time. The weights over the loss terms \(\mathcal{L}_{Q}\), \(\mathcal{L}_{+}\), \(\mathcal{L}_{-}\) are chosen through a small grid search over the final episode score.
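As a hedged sketch of how these terms could be combined in a single update (assuming the modules sketched above), the following shows the double-Q TD loss together with \(\mathcal{L}_{+}\) and \(\mathcal{L}_{-}\); the loss weights and discount values are placeholders, and the target-network synchronization is omitted.

```python
import torch
import torch.nn.functional as F

def training_step(encoder, q_head, q_target_head, pred_head,
                  o_t, a_t, r_t, o_tp1, a_tp1, o_i, o_j,
                  n_actions=4, gamma_rl=0.99, gamma_pred=0.0,
                  w_q=1.0, w_pos=1.0, w_neg=1.0):
    """Compute the combined loss L_Q + L_+ + L_- on one replayed batch (weights are assumed values)."""
    z_t, z_tp1 = encoder(o_t), encoder(o_tp1)

    # Double deep Q-learning TD loss L_Q: online net selects the action, target net evaluates it.
    q_pred = q_head(z_t).gather(1, a_t.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        best_a = q_head(z_tp1).argmax(dim=1, keepdim=True)
        td_target = r_t + gamma_rl * q_target_head(z_tp1).gather(1, best_a).squeeze(1)
    loss_q = F.mse_loss(q_pred, td_target)

    # Positive-sample loss L_+ = ||tau(z_t, a_t) - z_{t+1} - gamma * tau(z_{t+1}, a_{t+1})||^2.
    def tau(z, a):
        return z + pred_head(z, F.one_hot(a, n_actions).float())
    loss_pos = ((tau(z_t, a_t) - z_tp1 - gamma_pred * tau(z_tp1, a_tp1)) ** 2).sum(-1).mean()

    # Negative-sample loss L_- = -exp ||z_i - z_j|| on two randomly drawn observations.
    z_i, z_j = encoder(o_i), encoder(o_j)
    loss_neg = -torch.exp(torch.norm(z_i - z_j, dim=-1)).mean()

    return w_q * loss_q + w_pos * loss_pos + w_neg * loss_neg
```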
**Experimental comparisons and modifications.** We will treat the encoder network as a sensory cortex analog, the Q-learning network as a striatum analog, and the prediction network as a hippocampus analog (Fig 1B). In our analyses, we vary several parameters of interest. We vary the size of \(z\) to test the effects of the information bottleneck of the encoder. We will also modulate the strength of \(\gamma\) in the auxiliary loss to test the effects of different timescales of prediction. Finally, we also test how the depths of the decoder and encoder networks affect learning.
Figure 2: Gridworld performance with predictive auxiliary tasks. **A.** The model is tested on grid-world task in a 8x8 arena. The agent must navigate to a hidden reward given random initial starting locations. **B.** Average episode score across training steps for models without auxiliary losses (blue), with only the negative sampling loss \(\mathcal{L}_{-}\) (green), and with the full predictive loss \(\mathcal{L}_{pred}\) (orange). The maximum score is 1 and \(|z|=10\) (i.e. \(z\) contains 10 units). Each training step is one batch of replayed transitions that the network is trained on, where the batch size is 64. All error bars are standard error mean over 45 random seeds. **C.** 3D PCA representations of latent states \(z\) for the models in (B) (two random seeds). The latent states are colored by the quadrant of the arena they lie in. The quadrants (in order) are purple, pink, gray, brown. The goal location state is colored red. Gray lines represent the true connectivity between states. **D.** We vary \(|z|\) (see E, F), as well as the encoder/decoder depths (see Appendix A.2BC). **E.** Average episode score at the end of learning (600 training steps) across \(|z|\). **F.** Fraction of units in \(z\) that are silent during the task, across \(|z|\). **G.** Cosine similarity of two randomly sampled states throughout learning, \(|z|=10\).
## 4 Results
### Predictive objectives help prevent representational collapse.
We first want to understand the effect predictive auxiliary objectives have on a learning system. We test the RL model in a simple gridworld foraging task, where an agent must navigate to a hidden reward from any point in a 2D arena. The observation received by the agent is a 2D image depicting a birds-eye view of the agent's location. We compare a model without auxiliary objectives (MF-only) to models with the negative sampling objective \(\mathcal{L}_{-}\) only and with the full predictive objective \(\mathcal{L}_{pred}\). Here, the predictive model is trained with one-step prediction (\(\gamma=0\)).
Given sufficient capacity in the encoder, decoder, and latent layer \(z\), all models unsurprisingly learn the foraging task (Fig 2B). However, the model with prediction reaches maximum performance with fewer training steps than both the negative-sampling model and the MF-only agent (Fig 2B). Additionally, the latent representation in the predictive model appears to capture the global structure of the environment better than the other two models (Fig 2C). The model without any auxiliary tasks tends to expand the representation space around rewarding states, while the model with negative sampling (Fig 2C) evenly spaces apart state representations without regard for environment structure.
We next tested how the effects of auxiliary tasks change with the size of the model components (Fig 2D). We first varied the size of \(z\), and thus the representational capacity of the encoder. We find that, although all models can perform well given a large enough latent dimension \(|z|\), supplying the model with a predictive auxiliary objective allows the model to learn the task even with a smaller bottleneck (Fig 2E). This benefit is not conveyed by the negative sampling loss alone, suggesting that learning the environment structure confers its own unique benefit (Fig 2E). We find similar results by varying the encoder network depth and the decoder network depth (SuppFig 2), showing that the benefits of predictive auxiliary objectives are more salient in resource-limited cases.
This difference may be because representational collapse is a greater danger in lower-dimensional settings. To test this, we measure how many units in the output of the encoder are involved in supporting the state representation. We find that a greater proportion of units are completely silent in the MF-only encoder (Fig 2F), suggesting a less distributed representation. To more directly test for collapse, we measure how the cosine similarity between state representations change across learning. Although all models start with highly similar states, the models with auxiliary losses separate state representations across training more than the MF-only model does (Fig 2G).
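A small sketch of the two collapse diagnostics used above, i.e. the fraction of silent latent units and the cosine similarity of randomly sampled state representations; the silence threshold and pair count are assumed values.

```python
import numpy as np

def fraction_silent_units(latents, threshold=1e-6):
    """Fraction of latent units whose activation stays below `threshold` across all states.

    latents: array of shape (n_states, latent_dim) holding the encoder outputs z.
    """
    max_abs = np.abs(latents).max(axis=0)
    return float((max_abs < threshold).mean())

def mean_pairwise_cosine_similarity(latents, n_pairs=1000, seed=0):
    """Average cosine similarity between randomly sampled pairs of state representations."""
    rng = np.random.default_rng(seed)
    n = latents.shape[0]
    a = latents[rng.integers(0, n, n_pairs)]
    b = latents[rng.integers(0, n, n_pairs)]
    sims = (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-12)
    return float(sims.mean())
```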
### Long-horizon predictive auxiliary tasks are more effective at supporting representational transfer than short-horizon predictive tasks.
Thus far, we have tested the predictive auxiliary objective using one-step prediction. However, longer horizon predictions are often used as predictive auxiliary objectives (Oord et al., 2018; Hansen et al., 2019), and many neural systems, including hippocampus, have been hypothesized to perform long-horizon predictions (Brunec & Momennejad, 2022; Lee et al., 2021). We next sought to understand under what conditions longer horizons of prediction in auxiliary objectives would be useful. In particular, we were interested in exploring how well learned representations could transfer to new tasks. We hypothesize that long-horizon predictions (larger \(\gamma\) in \(\mathcal{L}_{+}\)) can better capture global environment structure and thus learn representations that transfer better to tasks in similar environments.
We first test representation transfer to new reward locations in gridworld (Fig 3A). After the agent learns an initial goal location in task A, we freeze the encoder, move the goal to a new state, and fine-tune the value network for task B. This allows us to test how well the learned representation structure can support new value functions. We test models with \(\mathcal{L}_{pred}\) loss and \(\gamma\in\{0.0,0.25,0.5,0.8\}\). We find that, although all models learn task A quickly, models with larger \(\gamma\) learn task B more efficiently (Fig 3B). We also test how this effect scales with latent size: having a predictive horizon longer than one timestep appears sufficient to improve learning efficiency, with the effect stronger at larger latent sizes (Fig 3C). The selective benefit of longer time horizons for transfer may explain the observation that regions of hippocampus with larger spatial scales appear to be preferentially active in novel environments (Fredes et al., 2021; Kohler et al., 2002; Poppenk et al., 2010).
We hypothesize that the difference in efficient transfer performance across the models may result from learning a latent structure that better reflects global structure. Long-horizon prediction may be better at smoothing across experience over many timesteps, thus capturing global environment structure better than short-horizon prediction and providing a larger benefit when latent representations are higher dimensional. Indeed, models with smaller \(\gamma\) values tend to learn more curved maps that preserve local, but not global, structure (Fig 3D). To quantify this effect, we measured the inner product between the states representing the corners of the environment. These are states that are maximally far from each other, and as such, representations that capture the environment structure accurately should separate these states from each other. We see that, across learning, models with larger \(\gamma\) learn to separate corner states better (Fig 3E).
Predictive auxiliary objectives can also be disadvantageous under certain regimes. Under predictive objectives, latent representations are shaped to reflect transition structure. However, these learned
Figure 3: Effects of predictive auxiliary objectives across transfer learning scenarios. **A.** We test goal transfer by moving the goal location to a new state in task B. After training on task A, encoder weights are frozen and the value function is fine-tuned on task B. **B.** Average episode score across task A, then task B. All models shown use the predictive auxiliary loss, with the shade of each line corresponding to the magnitude of \(\gamma\) in \(\mathcal{L}_{pred}\) (\(\gamma\in\{0.0,0.25,0.5,0.8\}\), \(|z|=17\)). **C.** The episode score after \(100\) training steps for each of the models in (B), as \(|z|\) is increased. All models achieve maximum performance in task A. 30 random seeds are run for each latent size. **D.** 3D PCA plots, for three models (\(\gamma=0.0,0.25,0.5\)) with the same random seed. **E.** Pairwise cosine similarity values between the corner states of the arena for the model shown in (B). **F.** We test transition transfer by shuffling the connectivity between all states in task B. Freezing and fine-tuning are the same as in (A). **G.** Average episode score across task A, then task B. Here, \(|z|=17\) and \(\epsilon=0.4\)-greedy policy during learning. In green is the model with only \(\mathcal{L}_{-}\) as an auxiliary loss. **H.** Episode score after \(150\) training steps for the model with only \(\mathcal{L}_{-}\) (green) versus the model with \(\mathcal{L}_{pred}\) for \(\gamma=0.8\). On the x-axis, the policy \(\epsilon\) used during training is varied, with \(\epsilon=1.0\) corresponding to a fully random policy (\(|z|=17\), all models have achieved maximum performance on task A).
representations might not generalize well to new tasks where the transition structure or the policy changes. We test this in a different transfer task, where the reward location remains the same in task B, but the environment transition structure is scrambled (Fig 3F). Additionally, to test for effects of policy change across tasks A and B, we vary the portion of random actions taken by our \(\epsilon\)-greedy agent. Under this new transfer task with \(\epsilon=0.4\), we find a marked decrease in task B performance for models with the predictive objective compared to a model with just the negative sampling loss (Fig 3G).
Indeed, as \(\epsilon\) gets smaller and the agent learns more from biased on-policy transition statistics, transfer performance on task B accordingly suffers (Fig 3G,H). None of the models with predictive objectives performs as well in task B as the model with only the negative sampling loss (Fig 3G,H).
### Effects of value learning and history-dependence in the prediction network resemble hippocampal activity.
We next ask how well representations developed in the network can model representations found in neural activity. The output of our \(T\) network serves as an analog to the hippocampus, a region implicated in self-predictive learning. We first test whether the \(T\) network activity can capture a classic result in the hippocampal literature: formation of spatially local activity patterns, called place fields. We plot the spatial firing fields of individual \(T\) units in our model trained on gridworld,
Figure 4: Representational changes in the predictive model are similar to those observed in the hippocampus. **A.** 2D foraging experiments are simulated as in the gridworld task from Fig 1-2. **B.** 2D receptive fields from top four \(T\) units (columns) sorted by spatial information score (Skaggs et al., 1992). Three random seeds are shown (rows). The model uses \(\mathcal{L}_{pred}\) and \(|z|=10\). White asterisk depicts reward. **C.** As in (B), but the model has no auxiliary objectives. **D.** We simulate circular track experiments using a circular gridworld environment with \(28\) states. The agent receives reward by running laps in a clockwise fashion, with reward in a random state for each random seed. **E.** Receptive fields of two example units in the \(T\) network before learning (gray) and after learning (orange). **F.** Histogram over the shift in receptive field peaks for individual units in \(T\) over \(15\) random seeds, where \(|z|=24\). Positive shift values are in the forward movement direction, and vice-versa for negative shift values. Black dotted line at \(0\). Median of the histogram is \(-0.034\). **G.** Histogram over the location of receptive field peaks for units in (F), with location centered around the reward site. Random shuffle (gray) control was made by randomly shuffling the weights of the \(T\) network. Black dotted line at \(0\). The model median is \(-0.06\), while the random shuffle median is \(-0.02\). **H.** We simulate a 5x5 alternating-T maze (details in Appendix). Center corridor is colored pink. **I.** Cosine similarity of \(T\) population vector responses in the center corridor under left-turn versus right-turn conditions. X-axis depicts location in the center corridor. Data is from \(20\) random seeds. Shown is the model without auxiliary objectives (blue) and the model with \(\mathcal{L}_{pred}\) (orange). \(T\) is randomly initialized for the model without an auxiliary objective.
and find 2D place fields as expected (Fig 4B). We also find that the prevalence of these place fields is greatly reduced in models without predictive auxiliary tasks (Fig 4C).
Hippocampal place fields also undergo experience-dependent changes. We test for these effects in our model through 1D circular track experiments (Fig 4D). We find that place fields developed on the 1D track skew and shift backwards relative to the animal's direction of movement (Fig 4E,F). This is consistent with phenomena in rodent hippocampal data that have been attributed to predictive learning (Mehta et al., 2000). We also find that place fields across the linear track are more abundant close to the reward site (Fig 4G), another widely observed phenomenon that is considered to be a result of reward learning in hippocampus. Our results suggest that value learning in shared representations with other systems can result in reward-related effects in the hippocampus.
Finally, we test a more complex form of experience-dependence in neural activity by simulating an alternating T-maze task. In this task, animals are required to alternate between two trial types: one where they run down a center corridor and turn left for reward, and another where they run down the same center corridor but turn right for reward. In these tasks, neural activity has been observed to "split" - that is, neurons fire differently in the center corridor across trial types despite the spatial details of the corridor remaining the same (Duvelle et al., 2023). Interestingly, the degree of splitting is greatest in the beginning of the corridor and also high at the end of the corridor, splitting least in the middle of the corridor (Duvelle et al., 2023). To enable the agent to perform this task, which requires remembering the previous trial type, we introduce a memory component to the agent so that a temporally graded trace of previous observations is made available. We measure cosine similarity between population activity in the left turn condition and the right turn condition. Lower similarity corresponds to greater splitting. The representations in both a MF-only model and the model with the predictive objective show increased splitting in the beginning of the corridor due to the memory component (Fig 4I). However, only the model with the predictive objective shows increased splitting at the end of the corridor (Fig 4I). This shows that the pattern of splitting seen in data can be captured by a model using both memory and prediction.
### Effects of value learning and transition learning in the encoder network resemble activity in visual cortex.
As another example of representational effects arising from mutually interacting regions, we compare the activity of our encoder network to experimental results in sensory cortices. Neurons in visual cortex (even those in primary regions) have been observed to change their tuning as a result of learning (Poort et al., 2015; Li and DiCarlo, 2008; 2010; Wilmes and Clopath, 2019; Pakan et al., 2018). Our model provides a simple system to look for such effects.
First, we test for effects of prediction and temporal statistics that have been seen in visual cortex. Specifically, Li and DiCarlo (2008) found that object selectivity in macaque IT neurons could be altered by exposing animals to sequences of images where preferred stimuli and non-preferred stimuli became linked (Fig 5A). The position within a sequence at which the preferred and non-preferred images are linked together is referred to as the "swap position" (Fig 5A). An analogous experiment can be run in our gridworld task from Fig 2. We first identify spatially contiguous preferred and non-preferred states of neurons in the encoder network. We then expose the model to sequences where preferred states and non-preferred states become connected at arbitrarily chosen swap positions (Fig 5B). We find neurons in the output of the encoder that, after exposure, decrease their firing rate for the preferred stimulus at the swap location and increase their firing rate for the non-preferred stimulus at the swap position (Fig 5C). This is consistent with observations in data as well (Li and DiCarlo, 2008; Li and DiCarlo, 2010). We quantify this change in firing rate at different sequence locations. We find a similar trend as in data, where tuning for stimuli closer to the swap position is increasingly altered away from the original preferences (Fig 5D). Importantly, this effect is not present without the predictive auxiliary objective, similar to lesion studies carried out in Finnie et al. (2021).
The downstream Q-learning objective also has an effect on representations in the encoder. We simulate value learning effects in visual cortical activity through the linear track experiments used in Poort et al. (2015) (Fig 5E). In this experiment, the authors found that V1 neurons in mice increased selectivity for visual cues in the environment after learning the task. Furthermore, the authors noted a slight selectivity increase for more rewarding cues (vertical gratings) compared to nonrewarding cues (angled gratings). We find a similar effect in units in early layers of the encoder network: a
small, but statistically significant increase in proportion of units encoding the rewarded stimulus (Fig 5F). As in Poort et al. (2015), selectivity increases across learning, but with a greater preference for the vertical grating environment (Fig 5G).
## 5 Conclusion
In this work, we explore the representational effects induced by predictive auxiliary objectives. We show how such objectives are useful in resource-limited settings and in certain transfer learning settings. We also investigate how prediction and predictive horizons affect learned representation structure. Furthermore, we describe how such deep RL models can function as a multi-region model for neuroscience. We show how representation learning in the prediction model recapitulates experimental observations made in the hippocampus. We make similar connections between representation learning in the encoder model and learning in visual cortex.
Our results point to a new perspective on the role of the hippocampus in learning. That is, a predictive system like the hippocampus can be useful for learning without being used to generate sequences or support planning. Learning predictions is sufficient to induce useful structure into representations used by other regions. This view also connects to trends seen in machine learning literature. In deep RL, predictive models need not be used for forward planning (Hamrick et al., 2020) to be useful for representation learning. Additionally, the contrastive prediction objective used in this work is drawn from machine learning literature but bears interesting similarities to classic descriptions of hippocampal computation. CA3 and CA1 in the hippocampus have been implicated in predictive
Figure 5: Representational changes in the encoder model resemble recordings from visual cortex. **A.** Example of the sequence structure in the preference swap task of Li & DiCarlo (2008; 2010), with images numbered by their location in the sequence. **B.** Reproduced schematic from Li & DiCarlo (2010) of the changes in IT neuron response to preferred images (red) and non-preferred images (blue) across exposure to new image transitions. **C.** Responses of two example units from the model with \(\mathcal{L}_{pred}\). Arrows indicate response profile before and after experiencing swapped transitions. Red indicates the response to \(P1,P2,P3\) states that were selected from the gridworld environment, while blue indicates the response to \(N1,N2,N3\) states selected from the environment. **D.** Change in response difference between \((P1,N1)\), \((P2,N2)\), and \((P3,N3)\) over \(10\) units. Each unit is a separate transition swap experiment. Shown is the model without any auxiliary objectives (blue) and the model with \(\mathcal{L}_{pred}\) (orange). **E.** Linear track VR experiment used in Poort et al. (2015). Vertical stripe corridors led to reward, while angled corridors did not lead to any consequence. Animals experienced either condition at random following an approach corridor with random noise stimuli. **F.** Selectivity across the population before learning (gray) and after learning (orange). Selectivity was calculated as in Poort et al. (2015), with negative and positive values corresponding to angled and vertical corridor preference, respectively. Asterisks indicate significance from one-tailed Welch's t-test (t-statistic: \(-12.43\), p-value: \(9\times 10^{-36}\)). **G.** Selectivity of individual units before (gray) and after (orange) learning for vertical condition (V), angled condition (A), or neither (N/A). Units are pooled across \(15\) experiments.
learning similar to the positive sampling loss. Meanwhile, the dentate gyrus in the hippocampus has been proposed to perform pattern separation similar to the contrastive negative sampling loss.
Overall, this work points to the utility of a modeling approach that considers the effect of multiple objectives in a deep learning system. The deep network setting reveals new aspects of neuroscience modeling that are less apparent in tabular settings or in simpler models.
|
2305.02059
|
Algorithmic Theory of Qubit Routing
|
The qubit routing problem, also known as the swap minimization problem, is a
(classical) combinatorial optimization problem that arises in the design of
compilers of quantum programs. We study the qubit routing problem from the
viewpoint of theoretical computer science, while most of the existing studies
investigated the practical aspects. We concentrate on the linear nearest
neighbor (LNN) architectures of quantum computers, in which the graph topology
is a path. Our results are three-fold. (1) We prove that the qubit routing
problem is NP-hard. (2) We give a fixed-parameter algorithm when the number of
two-qubit gates is a parameter. (3) We give a polynomial-time algorithm when
each qubit is involved in at most one two-qubit gate.
|
Takehiro Ito, Naonori Kakimura, Naoyuki Kamiyama, Yusuke Kobayashi, Yoshio Okamoto
|
2023-05-03T12:02:40Z
|
http://arxiv.org/abs/2305.02059v2
|
# Algorithmic Theory of Qubit Routing+
###### Abstract
The qubit routing problem, also known as the swap minimization problem, is a (classical) combinatorial optimization problem that arises in the design of compilers of quantum programs. We study the qubit routing problem from the viewpoint of theoretical computer science, while most of the existing studies investigated the practical aspects. We concentrate on the linear nearest neighbor (LNN) architectures of quantum computers, in which the graph topology is a path. Our results are three-fold. (1) We prove that the qubit routing problem is NP-hard. (2) We give a fixed-parameter algorithm when the number of two-qubit gates is a parameter. (3) We give a polynomial-time algorithm when each qubit is involved in at most one two-qubit gate.
Key words: Qubit routing, Qubit allocation, Fixed-parameter tractability
## 1 Introduction
The qubit routing problem captures a (classical) combinatorial problem in designing compilers of quantum programs. We rely on the formulation introduced by Siraichi et al. [11]. In their setting, a quantum program is designed as a quantum circuit. In a quantum circuit, there are wires and quantum gates such as Hadamard gates and controlled NOT gates. Each wire corresponds to one quantum bit (or a qubit for short) and gates operate on one or more qubits at a time. A quantum circuit is designed at the logic level and needs to be implemented at the physical level. This requires a mapping of logical qubits to physical qubits in such a way that all the gate operations can be performed. However, due to physical restrictions, some sets of qubits could be mapped to places where an operation on those qubits cannot be physically performed. This problem is essential for some of the superconducting quantum computers such as IBM Quantum systems. Figure 1 shows the graph topology of such computers.
To overcome this issue, we insert so-called swap gates into the quantum circuit. A swap gate can swap two qubits on different wires, and can be implemented by a sequence of three controlled NOT gates. With swap operations, we will be able to perform all the necessary gate operations that were designed in the original circuit. The process is divided into two phases. In the first phase, we find an initial mapping of logical qubits to physical qubits. In the second phase, we determine how swap operations are performed. Since a swap operation incurs a cost, we want to minimize the number of swap operations. The qubit routing problem focuses on the second phase of the process.
Several methods have been proposed to solve the qubit routing problem. The existing research has mainly focused on the practical aspects of the problem. On the other hand, the theoretical investigation has been scarce. Some authors have claimed that the problem is NP-hard, but no formal proof has been given to date. Until this paper, it was not known whether the problem could be solved in polynomial time even for the simplest case where the physical placement of qubits forms a path and the program has no two gate operations that can be performed at the same time. A path corresponds to the case of the
linear nearest neighbor architectures that have been extensively studied in the literature of quantum computing (e.g. [10]).
This paper focuses on the theoretical aspects of the qubit routing problem and gives a better understanding of the problem from the viewpoint of theoretical computer science. We are mainly concerned with the case where the physical placement of qubits forms a path. Under this restriction, we obtain the following results.
1. We prove that the qubit routing problem is NP-hard even when the program has no two gate operations that can be performed at the same time (Theorem 1).
2. We give a fixed-parameter algorithm when the number of gates in a given quantum circuit is a parameter (Theorem 4).
3. We give a polynomial-time algorithm when each qubit is involved in at most one two-qubit operation (Theorem 5).
As a side result, we also prove that the problem is NP-hard when the physical placement of qubits forms a star and any set of gate operations can be performed at the same time (Theorem 6).
**Problem Formulation.** We formulate the problem Qubit Routing in a purely combinatorial way as follows. As input, we are given an undirected graph \(G=(V,E)\), a partially ordered set (a poset for short) \(P=(S,\preceq)\), a set \(T\) of tokens, a map \(\varphi\colon S\to\binom{T}{2}\), where \(\binom{T}{2}\) denotes the set of unordered pairs of \(T\), and an initial token placement \(f_{0}\colon V\to T\), which is defined as a bijection. The undirected graph \(G\) corresponds to the physical architecture of a quantum computer in which each vertex corresponds to a physical qubit and an edge corresponds to a pair of qubits on which a gate operation can be performed. The poset \(P\) corresponds to a quantum circuit that we want to run on the quantum computer. Each token in \(T\) corresponds to a logical qubit. The token placement \(f_{0}\) corresponds to an initial mapping of the logical qubits in \(T\) to the physical qubits in \(V\) (e.g., as a result of the qubit allocation, see "Related Work" below). For notational convenience, we regard \(f_{0}\) as a mapping from \(V\) to \(T\); this choice is not mathematically essential since \(f_{0}\) is bijective and, to construct a mapping from \(T\) to \(V\), one uses the inverse \(f_{0}^{-1}\). The bijectivity of a token placement is not a real restriction: if there are fewer logical qubits than physical qubits, we may introduce dummy logical qubits so that their numbers become equal. Each element in \(S\) corresponds to the pair of logical qubits on which a gate operation is performed. The correspondence is given by \(\varphi\). We note that \(\varphi\) does not have to be injective. The poset \(P\) determines the order along which the gate operations determined by \(\varphi\) are performed. The order is partial since some pairs of operations may be performed independently: in that case, the corresponding elements of \(S\) are incomparable in \(P\). An example is given in Figure 2.
A _swap_\(f_{i}\leadsto f_{i+1}\) is defined as an operation that transforms one token placement (i.e., a bijection) \(f_{i}\colon V\to T\) to another token placement \(f_{i+1}\colon V\to T\) such that there exists an edge \(\{u,v\}\in E\) such that \(f_{i+1}(u)=f_{i}(v)\), \(f_{i+1}(v)=f_{i}(u)\) and \(f_{i+1}(x)=f_{i}(x)\) for all \(x\in V\setminus\{u,v\}\). This corresponds to a swap operation of two logical qubits \(f_{i}(u)\) and \(f_{i}(v)\) assigned to two different physical qubits \(u\) and \(v\).
As an output of the qubit routing problem, we want a swap sequence \(f_{0}\leadsto f_{1}\leadsto\dots\leadsto f_{\ell}\) that satisfies the following condition: there exists a map \(i\colon S\to\{0,1,2,\dots,\ell\}\) such that \(f_{i(s)}^{-1}(\varphi(s))\in E\) for every \(s\in S\) and if \(s\preceq s^{\prime}\), then \(i(s)\leq i(s^{\prime})\). A swap sequence with this condition is said to be _feasible_.
Figure 1: The graph topology of IBM quantum devices called “Johannesburg” (left) and “Almaden” (right). Each vertex represents a physical qubit, and we may perform a two-input gate operation only for a pair of adjacent qubits. In our problem formulation, the graph topology is taken into account as the graph \(G\). Source: [https://www.ibm.com/blogs/research/2019/09/quantum-computation-center/](https://www.ibm.com/blogs/research/2019/09/quantum-computation-center/)
The objective is to minimize the length \(\ell\) of a swap sequence. The condition states that all the operations in the quantum circuit that corresponds to the poset \(P\) are performed as they follow the order \(P\).
In summary, our problem is described as follows.
Qubit Routing
**Input.**: A graph \(G=(V,E)\), a poset \(P=(S,\preceq)\), a set \(T\) of tokens, a map \(\varphi\colon S\to\binom{T}{2}\), and an initial token placement \(f_{0}\colon V\to T\).
**Question.**: Find a feasible swap sequence of minimum length.
Note that the minimum length of a feasible swap sequence is at most \(|V||S|\) (if \(G\) is connected), which implies that determining the existence of a feasible swap sequence of length bounded by a given number is in NP.
**Related Work.** The qubit allocation problem was introduced by Siraichi et al. [11]. Following Nannicini et al. [9], we divide the task in the qubit allocation problem into two phases. The first phase involves qubit assignment that gives an initial placement of the logical qubits on the physical qubits, and the second phase involves qubit routing that inserts swap operations at appropriate positions so that all the gate operations in a given circuit can be performed. The problem formulation in this paper solves the second-phase problem. The qubit routing problem is also often called swap minimization.
For qubit routing, several heuristic algorithms have been proposed [11, 7, 12], and integer-programming formulations have been given [13, 9] that attempt to solve problem instances to optimality. Siraichi et al. [11, 12] have claimed that the qubit routing problem is NP-hard since it is more general than the so-called token swapping problem [14], which is known to be NP-hard [8, 3, 6] even for trees [1]. However, their argument was informal and no formal proof has been given.
A similar problem was studied by Botea, Kishimoto, and Marinescu [4]. In their problem, a mapping of logical qubits to physical qubits is injective but not bijective, and a logical qubit can only be moved to a physical qubit that is not occupied by another logical qubit. They proved that the makespan minimization in their setting is NP-hard.
## 2 Hardness: Paths and Chains
In this section, we show that the problem is NP-hard even when \(G\) is a path and \(P\) is a chain (i.e., a totally ordered set).
**Theorem 1**.: Qubit Routing _is NP-hard even when \(G\) is a path and \(P\) is a chain._
To prove 1, we first introduce notation. For a swap sequence \(f_{0}\leadsto f_{1}\leadsto\dots\leadsto f_{\ell}\), which is denoted by \(\mathbf{f}\), we say that \(f_{0}\) and \(f_{\ell}\) are the _initial_ and _target_ token placements of \(\mathbf{f}\), respectively. We also say that \(\mathbf{f}\) is from \(f_{0}\) to \(f_{\ell}\). The length \(\ell\) of \(\mathbf{f}\) is denoted by \(|\mathbf{f}|\). For two swap sequences \(\mathbf{f}_{1}\) and \(\mathbf{f}_{2}\), if the target token placement of \(\mathbf{f}_{1}\) is equal to the initial token placement of \(\mathbf{f}_{2}\), then its concatenation is denoted by \(\mathbf{f}_{1}\circ\mathbf{f}_{2}\). Note that in the concatenation of swap sequences, the target token placement of \(\mathbf{f}_{1}\) and the initial token placement of \(\mathbf{f}_{2}\) are identified, and thus \(|\mathbf{f}_{1}\circ\mathbf{f}_{2}|=|\mathbf{f}_{1}|+|\mathbf{f}_{2}|\). When the initial and the target token placements of \(\mathbf{f}\) coincide, for a positive integer \(h\), we denote \(\mathbf{f}^{h}=\mathbf{f}\circ\mathbf{f}\circ\dots\circ\mathbf{f}\), where \(\mathbf{f}\) appears \(h\) times.
Figure 2: The left figure shows an instance of a quantum circuit, which is a part of the circuits given in [2]. The right figure shows the Hasse diagram of the corresponding poset \(P\) in our problem formulation.
Throughout this section, we only consider the case where \(P\) is a chain. For a chain \(P=(S,\preceq)\), let \(s_{1},s_{2},\ldots,s_{|S|}\) be distinct elements in \(S\) such that \(s_{1}\prec s_{2}\prec\cdots\prec s_{|S|}\). Then, the information of \(P\) and \(\varphi\) can be represented as a sequence \(Q=(q_{1},q_{2},\ldots,q_{|S|})\), where \(q_{i}:=\varphi(s_{i})\in\binom{T}{2}\) for each \(i\). We say that a swap sequence \(f_{0}\leadsto f_{1}\leadsto\cdots\leadsto f_{\ell}\)_realizes_\(Q\) if there exist \(0\leq i_{1}\leq i_{2}\leq\cdots\leq i_{|S|}\leq\ell\) such that \(f_{ij}^{-1}(q_{j})\in E\) for \(j=1,\ldots,|S|\). In particular, if the swap sequence consisting of a single token placement \(f\) realizes \(Q\), then we say that \(f\) realizes \(Q\). With this terminology, when \(P\) is a chain, Qubit Routing is to find a shortest swap sequence that realizes a given sequence of token pairs. For two sequences \(Q_{1}\) and \(Q_{2}\) of token pairs, its concatenation is denoted by \(Q_{1}\circ Q_{2}\). For a sequence \(Q\) of token pairs and a positive integer \(h\), we denote \(Q^{h}=Q\circ Q\circ\cdots\circ Q\), where \(Q\) appears \(h\) times.
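As a concrete illustration of these definitions, the following sketch checks whether a given swap sequence realizes a sequence \(Q\) of token pairs; the greedy scan is valid because \(P\) is a chain here, and the data layout (dicts and frozensets) is our own choice.

```python
def realizes_chain(placements, Q, edges):
    """Return True iff the swap sequence given by `placements` realizes the chain Q.

    placements: list of dicts, each mapping every vertex of G to the token placed on it
                (the placements f_0, f_1, ..., f_ell along the swap sequence).
    Q: list of frozensets {t, t'}, the token pairs in the order they must be realized.
    edges: set of frozensets {u, v} of adjacent vertices of G.
    """
    j = 0  # index of the next pair in Q still to be realized
    for f in placements:
        pos = {token: vertex for vertex, token in f.items()}  # the inverse placement f^{-1}
        # Realize as many consecutive pairs as possible at this placement.
        while j < len(Q):
            t1, t2 = tuple(Q[j])
            if frozenset({pos[t1], pos[t2]}) in edges:
                j += 1
            else:
                break
    return j == len(Q)
```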
To show the NP-hardness of Qubit Routing, we reduce Optimal Linear Arrangement, which is known to be NP-hard [5].
Optimal Linear Arrangement
**Input.** A graph \(H=(V(H),E(H))\) and a positive integer \(k\).
**Question.** Is there a bijection \(g\colon V(H)\to\{1,2,\ldots,|V(H)|\}\) that satisfies \(\sum_{\{u,v\}\in E(H)}|g(u)-g(v)|\leq k\)?
Suppose that we are given an instance of Optimal Linear Arrangement that consists of a graph \(H=(V(H),E(H))\) and a positive integer \(k\). Denote \(V(H)=\{v_{1},v_{2},\ldots,v_{n}\}\) and \(E(H)=\{e_{1},e_{2},\ldots,e_{m}\}\), where \(n=|V(H)|\) and \(m=|E(H)|\). We may assume that \(k<nm\) since otherwise, any bijection \(g\) is a solution to Optimal Linear Arrangement. Let \(\alpha=2nm+1\), which is an odd integer, and let \(\beta\) and \(\gamma\) be sufficiently large integers (e.g., \(\beta=n^{2}\alpha\) and \(\gamma=4k\alpha\)).
We now construct an instance of Qubit Routing as follows. Define a set of tokens as \(T=\{t_{v,i}\mid v\in V(H),\ i\in\{1,2,\ldots,\alpha\}\}\). Let \(G=(V,E)\) be a path with \(n\alpha\) vertices. We define
\[Q_{v} :=(\{t_{v,1},t_{v,2}\},\{t_{v,2},t_{v,3}\},\ldots,\{t_{v,\alpha-1 },t_{v,\alpha}\}) (v\in V(H)),\] \[Q :=Q_{v_{1}}\circ Q_{v_{2}}\circ\cdots\circ Q_{v_{n}},\] \[\psi(e) :=(\{t_{u,nm+1},t_{v,nm+1}\}) (e=\{u,v\}\in E(H)),\] \[R :=Q^{\gamma}\circ\psi(e_{1})\circ Q^{\gamma}\circ\psi(e_{2})\circ \cdots\circ\psi(e_{m})\circ Q^{\gamma},\]
and \(R^{\beta}\) is the sequence of token pairs that has to be realized. The initial token placement \(f_{0}\) is defined arbitrarily. This gives an instance \((G,T,R^{\beta},f_{0})\) of Qubit Routing.
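To make the reduction concrete, the sketch below builds the sequence \(R^{\beta}\) from an Optimal Linear Arrangement instance following the definitions above; tokens \(t_{v,i}\) are encoded as tuples \((v,i)\), and the construction is intended only for very small instances since the resulting sequence is huge.

```python
from itertools import chain

def build_reduction(H_vertices, H_edges, k):
    """Construct the token-pair sequence R^beta of the reduction (tokens are tuples (v, i))."""
    n, m = len(H_vertices), len(H_edges)
    alpha = 2 * n * m + 1
    beta = n * n * alpha
    gamma = 4 * k * alpha

    def Q_v(v):
        # Q_v = ({t_{v,1}, t_{v,2}}, {t_{v,2}, t_{v,3}}, ..., {t_{v,alpha-1}, t_{v,alpha}})
        return [frozenset({(v, i), (v, i + 1)}) for i in range(1, alpha)]

    Q = list(chain.from_iterable(Q_v(v) for v in H_vertices))

    def psi(e):
        u, v = e
        return [frozenset({(u, n * m + 1), (v, n * m + 1)})]

    # R = Q^gamma o psi(e_1) o Q^gamma o ... o psi(e_m) o Q^gamma
    R = Q * gamma
    for e in H_edges:
        R += psi(e) + Q * gamma
    return R * beta, alpha, beta, gamma
```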
To show the validity of the reduction, it suffices to show the following.
**Proposition 1**.: _The above instance \((G,T,R^{\beta},f_{0})\) of Qubit Routing has a solution of length less than \(2\beta(k\alpha+\alpha-m)\) if and only if the original instance of Optimal Linear Arrangement has a solution of objective value at most \(k\)._
### Proof of Proposition 1
To simplify the notation, we regard the vertex set \(V\) of \(G\) as \(\{1,2,\ldots,n\alpha\}\) so that \(\{i,i+1\}\in E\) for \(i\in\{1,2,\ldots,n\alpha-1\}\). We say that a token placement \(f\colon V\to T\) is _block-aligned_ if \(|f^{-1}(t_{v,i})-f^{-1}(t_{v,i+1})|=1\) for any \(v\in V(H)\) and for any \(i\in\{1,2,\ldots,\alpha-1\}\). In other words, \(f\) is block-aligned if and only if, for any \(v\in V(H)\), the vertices \(f^{-1}(t_{v,1}),f^{-1}(t_{v,2}),\ldots,f^{-1}(t_{v,\alpha})\) appear consecutively in this order (or in the reverse order) on the path \(G\). Observe that \(f\) is block-aligned if and only if it realizes \(Q\). In what follows, we show the sufficiency and the necessity in Proposition 1 separately.
#### Sufficiency ("if" part)
Suppose that there exists a bijection \(g\colon V(H)\to\{1,2,\ldots,|V(H)|\}\) such that \(\sum_{\{u,v\}\in E(H)}|g(u)-g(v)|\leq k\). Define a token placement \(f^{*}\colon V\to T\) as \(f^{*}((g(v)-1)\alpha+i)=t_{v,i}\) for \(v\in V(H)\) and \(i\in\{1,2,\ldots,\alpha\}\). Then, \(f^{*}\) is block-aligned, and hence \(f^{*}\) realizes \(Q\). This implies that \(f^{*}\) realizes \(Q^{\gamma}\), too.
We use the following two lemmas.
**Lemma 1**.: _There exists a swap sequence \(\mathbf{f}_{0}\) from \(f_{0}\) to \(f^{*}\) whose length is less than \((n\alpha)^{2}\)._
Proof.: Since \(|V|=|T|=n\alpha\), by applying swaps at most \(n\alpha-1\) times to \(f_{0}\), we obtain a token placement \(f_{1}\) such that an end vertex \(v\) of the path \(G\) satisfies \(f_{1}(v)=f^{*}(v)\). By applying the same operation for each vertex \(v\) in \(G\) from one end to the other, we obtain \(f^{*}\). The total number of token swaps is at most \((n\alpha-1)|V|<(n\alpha)^{2}\).
**Lemma 2**.: _For any \(e=\{u,v\}\in E(H)\), there exists a swap sequence \(\mathbf{f}_{e}\) from \(f^{*}\) to \(f^{*}\) such that \(\mathbf{f}_{e}\) realizes \(\psi(e)=(\{t_{u,nm+1},t_{v,nm+1}\})\) and \(|\mathbf{f}_{e}|=2|g(u)-g(v)|\alpha-2\)._
Proof.: By applying swaps \(|(f^{*})^{-1}(t_{u,nm+1})-(f^{*})^{-1}(t_{v,nm+1})|-1=|g(u)-g(v)|\alpha-1\) times to \(f^{*}\), we can obtain a token placement \(f_{e}\) such that \((f_{e})^{-1}(t_{u,nm+1})\) and \((f_{e})^{-1}(t_{v,nm+1})\) are adjacent, i.e., \(f_{e}\) realizes \(\psi(e)\). Conversely, \(f_{e}\) can be transformed to \(f^{*}\) by applying \(|g(u)-g(v)|\alpha-1\) swaps. Therefore, there exists a swap sequence of length \(2|g(u)-g(v)|\alpha-2\) that contains \(f^{*},f_{e},f^{*}\) in this order, which completes the proof.
We now show that the following swap sequence satisfies the conditions:
\[\mathbf{f}:=\mathbf{f}_{0}\circ(\mathbf{f}_{e_{1}}\circ\mathbf{f}_{e_{2}} \circ\cdots\circ\mathbf{f}_{e_{m}})^{\beta},\]
where \(\mathbf{f}_{0}\) and \(\mathbf{f}_{e_{i}}\) are as in Lemmas 1 and 2, respectively. Since \(\mathbf{f}_{e_{1}}\circ\mathbf{f}_{e_{2}}\circ\cdots\circ\mathbf{f}_{e_{m}}\) realizes \(R\), we see that \(\mathbf{f}\) realizes \(R^{\beta}\). Furthermore, we obtain
\[|\mathbf{f}| =|\mathbf{f}_{0}|+\beta\sum_{i=1}^{m}|\mathbf{f}_{e_{i}}|\] \[<(n\alpha)^{2}+\beta\sum_{\{u,v\}\in E(H)}(2|g(u)-g(v)|\alpha-2)\] \[\leq(n\alpha)^{2}+2\beta(k\alpha-m)\] \[<2\beta(\alpha+k\alpha-m).\]
This shows that \(\mathbf{f}\) is a desired swap sequence.
#### Necessity ("only if" part)
To show the necessity, we first show a few properties of block-aligned token placements.
**Lemma 3**.: _If a token placement \(f\colon V\to T\) is block-aligned, then there exists a bijection \(g\colon V(H)\to\{1,2,\ldots,n\}\) such that_
\[f^{-1}(t_{v,nm+1})=(g(v)-1)\alpha+nm+1\text{ for any }v\in V(H). \tag{1}\]
Proof.: Let \(f\colon V\to T\) be a block-aligned token placement. Since \(f^{-1}(t_{v,1})\), \(f^{-1}(t_{v,2}),\ldots,f^{-1}(t_{v,\alpha})\) appear consecutively in \(G\) for every \(v\in V(H)\), there exists a bijection \(g\colon V(H)\to\{1,2,\ldots,n\}\) such that
\[\{f^{-1}(t_{v,1}),f^{-1}(t_{v,2}),\ldots,f^{-1}(t_{v,\alpha})\}=\{(g(v)-1) \alpha+1,(g(v)-1)\alpha+2,\ldots,g(v)\alpha\}\]
for \(v\in V(H)\). Since \(f^{-1}(t_{v,1}),f^{-1}(t_{v,2}),\ldots,f^{-1}(t_{v,\alpha})\) appear in this order or in the reverse order, \(f^{-1}(t_{v,nm+1})\) has to be located in the middle of them in either case, where we note that \(nm+1=(\alpha+1)/2\). Therefore, \(f^{-1}(t_{v,nm+1})=(g(v)-1)\alpha+nm+1\).
We say that a block-aligned token placement \(f\colon V\to T\)_corresponds to_ a bijection \(g\colon V(H)\to\{1,2,\ldots,n\}\) if (1) holds.
**Lemma 4**.: _Let \(g_{1},g_{2}\colon V(H)\to\{1,2,\ldots,n\}\) be bijections with \(g_{1}\neq g_{2}\). Suppose that \(f_{1}\) and \(f_{2}\) are block-aligned token placements that correspond to \(g_{1}\) and \(g_{2}\), respectively. Then, any swap sequence from \(f_{1}\) to \(f_{2}\) is of length at least \(\alpha^{2}\)._
Proof.: Since \(g_{1}\neq g_{2}\), there exists a vertex \(v\in V(H)\) with \(g_{1}(v)>g_{2}(v)\). Since
\[\{f_{1}^{-1}(t_{v,1}),f_{1}^{-1}(t_{v,2}),\ldots,f_{1}^{-1}(t_{v,\alpha})\}= \{(g_{1}(v)-1)\alpha+1,(g_{1}(v)-1)\alpha+2,\ldots,g_{1}(v)\alpha\}\]
and
\[\{f_{2}^{-1}(t_{v,1}),f_{2}^{-1}(t_{v,2}),\ldots,f_{2}^{-1}(t_{v,\alpha})\}=\{(g_{2}(v)-1)\alpha+1,(g_{2}(v)-1)\alpha+2,\ldots,g_{2}(v)\alpha\},\]
the length of any swap sequence from \(f_{1}\) to \(f_{2}\) is at least
\[\sum_{i=1}^{\alpha}(f_{1}^{-1}(t_{v,i})-f_{2}^{-1}(t_{v,i}))=\alpha^{2}(g_{1}(v )-g_{2}(v))\geq\alpha^{2},\]
which completes the proof.
Suppose that there exists a swap sequence \(\mathbf{f}\) of length less than \(2\beta(k\alpha+\alpha-m)\) that realizes \(R^{\beta}\). Then, there exists a subsequence \(\mathbf{f}^{\prime}\) of \(\mathbf{f}\) such that \(\mathbf{f}^{\prime}\) realizes \(R\) and \(|\mathbf{f}^{\prime}|\leq|\mathbf{f}|/\beta<2(k\alpha+\alpha-m)\). Since \(|\mathbf{f}^{\prime}|<2(k+1)\alpha\leq\gamma\), if a subsequence of \(\mathbf{f}^{\prime}\) realizes \(Q^{\gamma}\), then it contains a token placement that realizes \(Q\). With this observation, we see that \(\mathbf{f}^{\prime}\) contains token placements \(f_{1}^{*},f_{e_{1}},f_{2}^{*},f_{e_{2}},f_{3}^{*},\ldots,f_{e_{m}},f_{m+1}^{*}\) in this order, where \(f_{i}^{*}\) realizes \(Q\) (i.e., it is block-aligned) and \(f_{e_{i}}\) realizes \(\psi(e_{i})\) for each \(i\).
For \(i\in\{1,2,\ldots,m+1\}\), Lemma 3 shows that \(f_{i}^{*}\) corresponds to some bijection \(g_{i}\colon V(H)\to\{1,2,\ldots,n\}\). Furthermore, since \(|\mathbf{f}^{\prime}|<2(k+1)\alpha\leq\alpha^{2}\), Lemma 4 shows that \(g_{1}=g_{2}=\cdots=g_{m+1}\). That is, \(f_{1}^{*},f_{2}^{*},\ldots,f_{m+1}^{*}\) correspond to a common bijection \(g\colon V(H)\to\{1,2,\ldots,|V(H)|\}\).
We now show that \(g\) is a desired bijection. For every \(e_{i}=\{u,v\}\), any swap sequence from \(f_{i}^{*}\) to \(f_{e_{i}}\) has length at least \(|(f_{i}^{*})^{-1}(t_{u,nm+1})-(f_{i}^{*})^{-1}(t_{v,nm+1})|-1=\alpha|g(u)-g(v) |-1\) as \(f_{i}^{*}\) corresponds to \(g\). Similarly, any swap sequence from \(f_{e_{i}}\) to \(f_{i+1}^{*}\) has length at least \(|(f_{i+1}^{*})^{-1}(t_{u,nm+1})-(f_{i+1}^{*})^{-1}(t_{v,nm+1})|-1=\alpha|g(u)- g(v)|-1\). Therefore, we obtain
\[|\mathbf{f}^{\prime}|\geq\sum_{\{u,v\}\in E(H)}2(\alpha|g(u)-g(v)|-1).\]
This together with \(|\mathbf{f}^{\prime}|<2(k\alpha+\alpha-m)\) shows that \(\sum_{\{u,v\}\in E(H)}|g(u)-g(v)|<k+1\). This implies that \(\sum_{\{u,v\}\in E(H)}|g(u)-g(v)|\leq k\) by integrality, and hence \(g\) is a desired bijection. This completes the proof of Theorem 1.
## 3 Algorithm Parameterized by the Number of Gates
In this section, we assume that \(G=(V,E)\) is a path. Let \(k=|S|\) be the size of a poset \(P=(S,\preceq)\). The purpose of this section is to design a fixed-parameter algorithm for the problem parameterized by \(k\). Since \(G\) is a path, we suppose for simplicity that \(V=\{1,2,\ldots,n\}\) and \(E=\{\{i,i+1\}\mid i=1,2,\ldots,n-1\}\).
We first observe that we may assume that a poset forms a chain. Indeed, suppose that \(P\) is not a chain. Then, if we have a fixed-parameter algorithm for a chain, we can apply the algorithm for all the linear extensions of \(P\). Since the number of the linear extensions is at most \(k!\), it is a fixed-parameter algorithm for \(P\). Thus, we may assume that \(S=\{s_{1},s_{2},\ldots,s_{k}\}\) such that \(s_{1}\prec s_{2}\prec\cdots\prec s_{k}\). Let \(\tilde{T}=\bigcup_{s\in S}\varphi(s)\) and let \(\tilde{k}=|\tilde{T}|\). Then, we have \(\tilde{k}\leq 2k\). We denote \([\tilde{k}]=\{1,2,\ldots,\tilde{k}\}\).
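The reduction from a general poset to chains can be sketched as follows; `solve_chain` is a placeholder for the chain algorithm developed in this section, and `precedes` encodes the strict partial order.

```python
from itertools import permutations

def solve_via_linear_extensions(S, precedes, solve_chain):
    """Run the chain algorithm on every linear extension of (S, precedes) and keep the best value.

    precedes(a, b): True iff a strictly precedes b in the poset.
    solve_chain(order): length of a shortest swap sequence realizing the chain `order`
                        (placeholder for the algorithm of this section).
    """
    best = None
    for order in permutations(S):
        # Keep only orders consistent with the partial order, i.e., linear extensions.
        consistent = all(not precedes(order[j], order[i])
                         for i in range(len(order)) for j in range(i + 1, len(order)))
        if consistent:
            length = solve_chain(list(order))
            best = length if best is None else min(best, length)
    return best
```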
Let \(f\) be a token placement. Let \(\tilde{V}\) be the positions where tokens in \(\tilde{T}\) are placed, i.e., \(\tilde{V}=\{f^{-1}(t)\mid t\in\tilde{T}\}\). We denote \(\tilde{V}=\{v_{1},v_{2},\ldots,v_{\tilde{k}}\}\) where \(v_{1}<v_{2}<\cdots<v_{\tilde{k}}\). Define a vector \(x\in\mathbb{Z}^{\tilde{k}}\) so that \(x_{i}=v_{i+1}-v_{i}\), where \(v_{0}=0\), for every index \(i=0,1,\ldots,\tilde{k}-1\). We note that \(\sum_{i=0}^{\tilde{k}-1}x_{i}=v_{\tilde{k}}\leq n\) holds. We further define a bijection \(\sigma\colon[\tilde{k}]\to\tilde{T}\) as \(\sigma(i)=f(v_{i})\). Let \(\Sigma\) denote the set of all bijections from \([\tilde{k}]\) to \(\tilde{T}\). We call the pair \((x,\sigma)\) the _signature_ of the token placement \(f\), which is denoted by \(\mathsf{sig}(f)\). The signature maintains the information only on the tokens in \(\tilde{T}\), which suffices for finding a shortest feasible swap sequence since swapping two tokens not in \(\tilde{T}\) is redundant in the swap sequence.
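A short sketch of extracting the signature \(\mathsf{sig}(f)=(x,\sigma)\) from a token placement, following the definitions above; the placement is represented as a dict from path vertices \(1,\dots,n\) to tokens.

```python
def signature(f, special_tokens):
    """Compute the signature (x, sigma) of a token placement f.

    f: dict mapping each vertex 1..n of the path to the token placed on it.
    special_tokens: the set T~ of tokens appearing in some gate operation.
    Returns x = [x_0, ..., x_{k~-1}] (gaps between consecutive special positions, with v_0 = 0)
    and sigma as the list of special tokens ordered by their position on the path.
    """
    positions = sorted(v for v, t in f.items() if t in special_tokens)  # v_1 < ... < v_{k~}
    x = [positions[0]] + [positions[i + 1] - positions[i] for i in range(len(positions) - 1)]
    sigma = [f[v] for v in positions]
    return x, sigma
```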
We first present a polynomial-time algorithm when \(k\) is a fixed constant in Section 3.1, and then a fixed-parameter algorithm in Section 3.2.
### Polynomial-time algorithm for a fixed constant \(k\)
Define \(\mathcal{R}=\{x\in\mathbb{Z}^{\tilde{k}}\mid\sum x_{j}\leq n,x_{j}\geq 1\ (j=0,1, \ldots,\tilde{k}-1)\}\) and \(\mathcal{S}=\{(x,\sigma)\mid x\in\mathcal{R},\sigma\in\Sigma\}\). We see that, for any \((x,\sigma)\in\mathcal{S}\), there exists a token placement \(f\) such that \(\mathsf{sig}(f)=(x,\sigma)\), and such a placement \(f\) can be found in polynomial time even when \(k\) is not a constant. It holds that \(|\mathcal{S}|=O(\tilde{k}!n^{\tilde{k}})\).
For two signatures \((x^{0},\sigma^{0})\) and \((x^{t},\sigma^{t})\), a _swap sequence from \((x^{0},\sigma^{0})\) to \((x^{t},\sigma^{t})\)_ means a swap sequence \(f_{0}\leadsto f_{1}\leadsto\dots\leadsto f_{\ell}\) for some token placements \(f_{0}\) and \(f_{\ell}\) such that \(\mathsf{sig}(f_{0})=(x^{0},\sigma^{0})\) and \(\mathsf{sig}(f_{\ell})=(x^{t},\sigma^{t})\). Its length is defined to be \(\ell\).
We first show that we can find a shortest swap sequence between two signatures in polynomial time when \(k\) is a fixed constant.
**Lemma 5**.: _For two signatures \((x^{0},\sigma^{0})\) and \((x^{t},\sigma^{t})\), we can find a shortest swap sequence from \((x^{0},\sigma^{0})\) to \((x^{t},\sigma^{t})\) in \(O(\operatorname{poly}(|\mathcal{S}|))\) time._
Proof.: We construct a graph on the vertex set \(\mathcal{S}\) such that \((x,\sigma)\) and \((x^{\prime},\sigma^{\prime})\) are adjacent if and only if there exist two token placements \(f\), \(f^{\prime}\) such that \(\mathsf{sig}(f)=(x,\sigma)\), \(\mathsf{sig}(f^{\prime})=(x^{\prime},\sigma^{\prime})\), and \(f\leadsto f^{\prime}\). Then a path from \((x^{0},\sigma^{0})\) to \((x^{t},\sigma^{t})\) in the graph corresponds to a swap sequence from \((x^{0},\sigma^{0})\) to \((x^{t},\sigma^{t})\). Therefore, we can find a shortest swap sequence by finding a shortest path from \((x^{0},\sigma^{0})\) to \((x^{t},\sigma^{t})\) in the graph. Since the number of vertices of the constructed graph is \(|\mathcal{S}|\), it can be done in \(O(\operatorname{poly}(|\mathcal{S}|))\) time.
The above lemma allows us to design a polynomial-time algorithm for a fixed constant \(k\).
**Theorem 2**.: _Let \(P=(S,\preceq)\) be a chain of \(k\) elements, where \(k\) is a fixed constant. For a token placement \(f_{0}\), we can find a shortest feasible swap sequence from \(f_{0}\) in polynomial time._
Proof.: Let \(\mathcal{S}_{i}=\{(x,\sigma)\in\mathcal{S}\mid\exists f\text{ s.t. }\mathsf{sig}(f)=(x,\sigma)\text{ and }f^{-1}( \varphi(s_{i}))\in E\}\), which is the set of signatures \((x,\sigma)\) that correspond to token placements in which \(\varphi(s_{i})\) are adjacent. Note that if \((x,\sigma)\in\mathcal{S}_{i}\), then \(\mathsf{sig}(f)=(x,\sigma)\) implies \(f^{-1}(\varphi(s_{i}))\in E\) for any \(f\), because \(\varphi(s_{i})\subseteq\tilde{T}\). Moreover, let \(\mathcal{S}_{0}=\{\mathsf{sig}(f_{0})\}\).
Define a digraph \(\mathcal{G}=(\bigcup_{i=0}^{k}\mathcal{S}_{i},\bigcup_{i=0}^{k-1}E_{i})\), where \(E_{i}=\{((x,\sigma),(x^{\prime},\sigma^{\prime}))\mid(x,\sigma)\in\mathcal{S }_{i},(x^{\prime},\sigma^{\prime})\in\mathcal{S}_{i+1}\}\) for \(i=0,1,\dots,k-1\). We suppose that \(e=((x,\sigma),(x^{\prime},\sigma^{\prime}))\) in \(E_{i}\) has a length equal to the shortest length of a swap sequence from \((x,\sigma)\) to \((x^{\prime},\sigma^{\prime})\), which can be computed in polynomial time by Lemma 5.
We see that the shortest path from the vertex in \(\mathcal{S}_{0}\) to some vertex in \(\mathcal{S}_{k}\) corresponds to a shortest feasible swap sequence from \(f_{0}\). The number of vertices of the graph is bounded by \(O(k|\mathcal{S}|)\), which is polynomial when \(k\) is a constant. Thus, the theorem holds.
### Fixed-parameter algorithm
In this section, we present a fixed-parameter algorithm parameterized by \(k\) by dynamic programming.
We first observe that we can compute the shortest length of a swap sequence between two signatures with the same bijection \(\sigma\).
**Lemma 6**.: _Suppose that we are given two signatures \((x,\sigma)\) and \((y,\sigma)\) with the same bijection \(\sigma\). Then, the shortest length of a swap sequence from \((x,\sigma)\) to \((y,\sigma)\) is equal to \(\sum_{i=1}^{\tilde{k}}|v_{i}-w_{i}|\), where \(v_{i}=\sum_{j=0}^{i-1}x_{j}\) and \(w_{i}=\sum_{j=0}^{i-1}y_{j}\) for \(i=1,\dots,\tilde{k}\). Moreover, there exists a shortest swap sequence such that all the token placements in the sequence have the same bijection \(\sigma\) in their signatures._
Proof.: Since the initial and target token placements have the same \(\sigma\) in their signature, we need \(|w_{i}-v_{i}|\) swaps to move the token \(\sigma(i)\) to \(w_{i}\) for any \(i=1,\dots,\tilde{k}\). Thus the shortest swap sequence has length at least \(\sum_{i=1}^{\tilde{k}}|v_{i}-w_{i}|\). To see that they are equal, we show the existence of a swap sequence of length \(\sum_{i=1}^{\tilde{k}}|v_{i}-w_{i}|\) by induction on this value. If \(\sum_{i=1}^{\tilde{k}}|v_{i}-w_{i}|=0\), then the claim is obvious. Otherwise, let \(p\) be the minimum index such that \(v_{p}\neq w_{p}\). By changing the roles of \(x\) and \(y\) if necessary, we may assume that \(v_{p}>w_{p}\). Then, starting from \((x,\sigma)\), we apply swap operations \(v_{p}-w_{p}\) times to obtain a new signature \((x^{\prime},\sigma)\) such that \(x^{\prime}_{p-1}=x_{p-1}-(v_{p}-w_{p})\) and \(x^{\prime}_{p}=x_{p}+(v_{p}-w_{p})\). That is, \(v^{\prime}_{i}=v_{i}\) for \(i\in[\tilde{k}]\setminus\{p\}\) and \(v^{\prime}_{p}=w_{p}\), where \(v^{\prime}_{i}\) is defined as \(v^{\prime}_{i}=\sum_{j=0}^{i-1}x^{\prime}_{j}\). Note that this operation is possible without changing the bijection \(\sigma\), because \(v^{\prime}_{p}=w_{p}>w_{p-1}=v_{p-1}\) by the minimality of \(p\). By the induction hypothesis, there exists a swap sequence of length \(\sum_{i=1}^{\tilde{k}}|v^{\prime}_{i}-w_{i}|\) between \((x^{\prime},\sigma)\) and \((y,\sigma)\). Therefore, we obtain a swap sequence between \((x,\sigma)\) and \((y,\sigma)\) whose length is \(|v_{p}-w_{p}|+\sum_{i=1}^{\tilde{k}}|v^{\prime}_{i}-w_{i}|=\sum_{i=1}^{\tilde{k} }|v_{i}-w_{i}|\). Moreover, each token placement in the obtained swap sequence has the same bijection \(\sigma\)
By the lemma, the shortest length of a swap sequence from \((x,\sigma)\) to \((y,\sigma)\) does not depend on \(\sigma\). Thus, we denote it by \(d(x,y)\) for \(x,y\in\mathcal{R}\).
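Lemma 6 gives a closed form for this distance; a direct sketch of \(d(x,y)\) via prefix sums follows.

```python
def signature_distance(x, y):
    """Shortest number of swaps between (x, sigma) and (y, sigma) with the same sigma:
    sum_i |v_i - w_i|, where v_i and w_i are the prefix sums of x and y (Lemma 6)."""
    assert len(x) == len(y)
    dist, v, w = 0, 0, 0
    for xi, yi in zip(x, y):
        v += xi
        w += yi
        dist += abs(v - w)
    return dist
```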
Let \(\mathbf{f}\) be a feasible swap sequence \(f_{0}\leadsto f_{1}\leadsto\dots\leadsto f_{\ell}\). Suppose that \(\varphi(s_{j})\) is realized at token placement \(f_{i_{j}}\), that is, \(f_{i_{j}}^{-1}(\varphi(s_{j}))\in E\) for \(j=1,2,\dots,k\) and \(i_{1}\leq i_{2}\leq\dots\leq i_{k}\). We define \(i_{0}=0\). Note that the number of distinct values in \(\{i_{1},\dots,i_{k}\}\), denoted by \(\alpha\), is at most \(k\). Also, let \(\beta\) be the number of times that \(\sigma\) in the signature changes in the swap sequence. We call \(\alpha+\beta\) the _signature length_ of \(\mathbf{f}\).
The following lemma says that the signature length is bounded by a function of \(k\), which we denote by \(\ell_{\max}\).
**Lemma 7**.: _For a shortest feasible swap sequence \(f_{0}\leadsto f_{1}\leadsto\dots\leadsto f_{\ell}\), the signature length is bounded by \(((2k)!+1)k\) from above._
Proof.: Consider the partial swap sequence \(f_{i_{j-1}}\leadsto\dots\leadsto f_{i_{j}}\) for \(j=1,2,\dots,k\). We observe that, in the partial swap sequence, the number of times that bijections change is at most \(\tilde{k}!\). Indeed, in this partial sequence, token placements with the same bijection appear sequentially, as otherwise, we can short-cut between them by Lemma 6. Therefore, the total signature length is bounded by \(k(\tilde{k}!)+k\leq k((2k)!)+k\).
Let \(g^{\ell}(x,\sigma,P)\) be the shortest length of a swap sequence to realize \(P\) from some token placement \(f_{0}\) with \(\mathsf{sig}(f_{0})=(x,\sigma)\) such that it has signature length at most \(\ell\). In what follows, we derive a recursive equation on \(g^{\ell}(x,\sigma,P)\) for dynamic programming.
We first give notation. For a bijection \(\sigma\in\Sigma\) and a non-negative integer \(j\) with \(1\leq j\leq\tilde{k}-1\), let \(\sigma_{j}\in\Sigma\) be the bijection obtained from \(\sigma\) by swapping the \(j\)-th and \((j+1)\)-st tokens. We define \(\mathcal{R}_{j}=\{x\in\mathcal{R}\mid x_{j}=1\}\), which is the set of signatures such that the \(j\)-th token and the \((j+1)\)-st token are adjacent.
To derive a recursive equation on \(g^{\ell}(x,\sigma,P)\), consider the following two cases in which the signature length is decreased at least by one, separately.
The first case is when the bijection \(\sigma\) is changed, i.e., \(\beta\) decreases. Suppose that we change \(\sigma\) to \(\sigma_{j}\) for some \(1\leq j\leq\tilde{k}-1\). In this case, we first move to \((x^{\prime},\sigma)\) for some \(x^{\prime}\in\mathcal{R}_{j}\), and then change \((x^{\prime},\sigma)\) to \((x^{\prime},\sigma_{j})\). By Lemma 6, the number of swaps is \(d(x,x^{\prime})+1\). After moving to \((x^{\prime},\sigma_{j})\), we can recursively consider finding a shortest swap sequence to realize \(P\) from \((x^{\prime},\sigma_{j})\) with the signature length at most \(\ell-1\). Therefore, the total length in this case is \(g^{\ell-1}(x^{\prime},\sigma_{j},P)+d(x,x^{\prime})+1\), and hence the shortest length when we change \(\sigma\) is equal to
\[\min_{1\leq j\leq\tilde{k}-1}\min_{x^{\prime}\in\mathcal{R}_{j}}\left\{g^{\ell-1}(x^{\prime},\sigma_{j},P)+d(x,x^{\prime})+1\right\}.\]
The other case is when \(s_{1}\) is realized without changing \(\sigma\), i.e., \(\alpha\) decreases. Then, it is necessary that \(s_{1}=(\sigma(h),\sigma(h+1))\) for some \(1\leq h\leq\tilde{k}-1\). To realize \(s_{1}\), we move \((x,\sigma)\) to \((x^{\prime},\sigma)\) for some \(x^{\prime}\in\mathcal{R}_{h}\). By recursion, the total length in this case is \(g^{\ell-1}(x^{\prime},\sigma,P^{\prime})+d(x,x^{\prime})\) by Lemma 6, where \(P^{\prime}\) is the poset obtained from \(P\) by removing the first element \(s_{1}\), that is, \(P^{\prime}\) forms the chain \(s_{2}\prec s_{3}\prec\dots\prec s_{k}\). Thus, the shortest length in this case is
\[\min_{x^{\prime}\in\mathcal{R}_{h}}\left\{g^{\ell-1}(x^{\prime},\sigma,P^{ \prime})+d(x,x^{\prime})\right\}.\]
In summary, we have that, for any \(x\in\mathcal{R}\), \(\sigma\in\Sigma\), and \(1\leq\ell\leq\ell_{\max}\),
\[g^{\ell}(x,\sigma,P)=\min\biggl{\{}\min_{1\leq j\leq\tilde{k}-1}\min_{x^{\prime}\in\mathcal{R}_{j}}\left\{g^{\ell-1}(x^{\prime},\sigma_{j},P)+d(x,x^{\prime})+1\right\},\ \min_{x^{\prime}\in\mathcal{R}_{h}}\left\{g^{\ell-1}(x^{\prime},\sigma,P^{\prime})+d(x,x^{\prime})\right\}\biggr{\}}, \tag{2}\]
where \(s_{1}=(\sigma(h),\sigma(h+1))\) for some \(h\). If such \(h\) does not exist, the second term is defined to be \(+\infty\).
It follows from (2) that we can design a dynamic programming algorithm. However, the running time would become \(O(k\cdot k!\ell_{\max}|\mathcal{R}|)\), and this does not give a fixed-parameter algorithm since \(|\mathcal{R}|=O(n^{\tilde{k}})\). In what follows, we will reduce the running time by showing that the minimum is achieved at an extreme point.
For \(i=0,1,\ldots,\tilde{k}-1\), let \(e_{i}\) denote the unit vector whose \(i\)-th entry is one and whose other entries are zero. For a vector \(x\in\mathcal{R}\) and \(1\leq j\leq\tilde{k}-1\), define \(N_{j}(x)\) as
\[N_{j}(x)=\{x+ae_{j-1}-(x_{j}-1)e_{j}+be_{j+1}\mid a+b=x_{j}-1,\ a,b\in\mathbb{Z} _{+}\},\]
where we regard \(e_{\tilde{k}}\) as the zero vector to simplify the notation. Then, \(x^{\prime}\in N_{j}(x)\) satisfies that
\[x^{\prime}_{j} =1,\] \[x^{\prime}_{j-1}+x^{\prime}_{j+1} =x_{j-1}+x_{j}+x_{j+1}-1,\] \[x^{\prime}_{i} =x_{i}\ \text{for}\ \ i\not\in\{j-1,j,j+1\},\]
where \(x_{\tilde{k}}=n-\sum_{i=0}^{\tilde{k}-1}x_{i}\) and \(x^{\prime}_{\tilde{k}}=n-\sum_{i=0}^{\tilde{k}-1}x^{\prime}_{i}\). The signature \((x^{\prime},\sigma)\) with \(x^{\prime}\in N_{j}(x)\) means that it is obtained from \((x,\sigma)\) by only moving two tokens \(\sigma(j)\) and \(\sigma(j+1)\) so that the two tokens are adjacent. Moreover, define \(y^{j}\) and \(y^{\prime}{}^{j}\) to be vectors in \(N_{j}(x)\) in which \((a,b)=(0,x_{j}-1)\) and \((a,b)=(x_{j}-1,0)\), respectively. Then,
\[(y^{j})_{j-1}=x_{j-1}, (y^{j})_{j+1}=x_{j}+x_{j+1}-1,\] \[(y^{\prime j})_{j-1}=x_{j-1}+x_{j}-1, (y^{\prime j})_{j+1}=x_{j+1}.\]
Thus, \((y^{j},\sigma)\) (\((y^{\prime j},\sigma)\), resp.) is obtained from \((x,\sigma)\) by only moving one token \(\sigma(j+1)\) (\(\sigma(j)\), resp.,) so that \(\sigma(j)\) and \(\sigma(j+1)\) are adjacent.
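A minimal sketch of the two extreme points \(y^{j}\) and \(y^{\prime j}\) of \(N_{j}(x)\), assuming \(x\) is stored as the list \((x_{0},\dots,x_{\tilde{k}-1})\) with the entry \(x_{\tilde{k}}\) kept implicit (the helper name is ours):

```python
def extreme_points(x, j):
    """Return (y^j, y'^j) for 1 <= j <= len(x) - 1; both lie in N_j(x) and
    satisfy d(x, y) = x_j - 1.  The implicit entry x_{k~} = n - sum(x)
    absorbs any change in the stored total, so it needs no explicit update."""
    y = list(x)                        # y^j : only token sigma(j+1) moves
    y[j] = 1
    if j + 1 < len(x):
        y[j + 1] = x[j] + x[j + 1] - 1
    yp = list(x)                       # y'^j: only token sigma(j) moves
    yp[j - 1] = x[j - 1] + x[j] - 1
    yp[j] = 1
    return y, yp
```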
The following theorem asserts that the minimum of (2) is achieved at either \(y^{j}\) or \(y^{\prime j}\).
**Theorem 3**.: _For any \(x\in\mathcal{R}\), \(\sigma\in\Sigma\), and \(1\leq\ell\leq\ell_{\max}\), it holds that_
\[g^{\ell}(x,\sigma,P)=\min\biggl{\{}\min_{1\leq j\leq\tilde{k}-1,\,y\in\{y^{j},y^{\prime j}\}}\left\{g^{\ell-1}(y,\sigma_{j},P)+d(x,y)+1\right\},\ \min_{y\in\{y^{h},y^{\prime h}\}}\left\{g^{\ell-1}(y,\sigma,P^{\prime})+d(x,y)\right\}\biggr{\}},\]
_where \(h\) is the index with \(s_{1}=(\sigma(h),\sigma(h+1))\), the second term being \(+\infty\) if no such \(h\) exists._
To prove the theorem, we show the following two lemmas. We first show that the minimum of (2) is achieved at some point in \(N_{j}(x)\).
**Lemma 8**.: _It holds that, for any \(x\in\mathcal{R}\), \(\sigma\in\Sigma\), \(1\leq\ell\leq\ell_{\max}\) and \(1\leq j\leq\tilde{k}-1\),_
\[\min_{x^{\prime}\in\mathcal{R}_{j}}\left\{g^{\ell-1}(x^{\prime},\sigma,P)+d(x,x^{\prime})\right\}=\min_{y\in N_{j}(x)}\left\{g^{\ell-1}(y,\sigma,P)+d(x,y) \right\}.\]
Proof.: Let \(x^{*}\) be a vector in \(\mathcal{R}_{j}\) that attains the minimum of the LHS, that is,
\[g^{\ell-1}(x^{*},\sigma,P)+d(x,x^{*})=\min_{x^{\prime}\in\mathcal{R}_{j}} \left\{g^{\ell-1}(x^{\prime},\sigma,P)+d(x,x^{\prime})\right\}.\]
To simplify the notation, we denote \(g^{*}=g^{\ell-1}(x^{*},\sigma,P)\) and \(g_{y}=g^{\ell-1}(y,\sigma,P)\) for a vector \(y\in N_{j}(x)\).
Since \(\mathcal{R}_{j}\supseteq N_{j}(x)\), it holds that, for any \(y\in N_{j}(x)\),
\[g^{*}+d(x,x^{*})\leq g_{y}+d(x,y). \tag{3}\]
The minimality of \(g_{y}\) implies that \(g_{y}\leq g^{*}+d(y,x^{*})\). Hence, it holds by (3) that, for any \(y\in N_{j}(x)\),
\[g^{*}+d(x,x^{*})\leq g_{y}+d(x,y)\leq g^{*}+d(x,y)+d(y,x^{*}). \tag{4}\]
To show the lemma, it suffices to find \(y\in N_{j}(x)\) that satisfies (4) with equality.
By definition, \(d(x,y)=x_{j}-1\) for any \(y\in N_{j}(x)\). We denote \(v_{i}=\sum_{h=0}^{i-1}x_{h}\), \(v_{i}^{*}=\sum_{h=0}^{i-1}x_{h}^{*}\), and \(w_{i}=\sum_{h=0}^{i-1}y_{h}\) for any \(i=1,\ldots,\tilde{k}\). Then, Lemma 6 implies that
\[d(x,x^{*})=\sum_{i=1}^{\tilde{k}}|v_{i}-v_{i}^{*}|=X+|v_{j}-v_{j}^{*}|+|v_{j+1 }-v_{j+1}^{*}|+X^{\prime},\]
where we define \(X=\sum_{i=1}^{j-1}|v_{i}-v_{i}^{*}|\) and \(X^{\prime}=\sum_{i=j+2}^{\tilde{k}}|v_{i}-v_{i}^{*}|\). We distinguish three cases by the position of \(v_{j}^{*}\). Note that \(v_{j+1}^{*}=v_{j}^{*}+1\) since \(x^{*}\in\mathcal{R}_{j}\).
First, suppose that \(v_{j}^{*}\) satisfies that \(v_{j}<v_{j}^{*}<v_{j+1}\). Then, since \(|v_{j}-v_{j}^{*}|+|v_{j+1}-v_{j+1}^{*}|=v_{j}^{*}-v_{j}+v_{j+1}-(v_{j}^{*}+1)= x_{j}-1\), it holds that \(d(x,x^{*})=X+x_{j}-1+X^{\prime}\). We define \(y\in N_{j}(x)\) so that \(w_{j}=v_{j}^{*}\) and \(w_{j+1}=v_{j+1}^{*}\). Then, we have \(d(y,x^{*})=X+X^{\prime}\), since \(w_{i}=v_{i}\) for any \(i\in\{1,\ldots,j-1\}\cup\{j+2,\ldots,\tilde{k}\}\). Therefore, since \(d(x,y)=x_{j}-1\), we obtain \(d(x,x^{*})=d(y,x^{*})+d(x,y)\). Thus, (4) holds with equality.
Next, suppose that \(v_{j}^{*}\) satisfies that \(v_{j}^{*}\leq v_{j}\). Then it holds that
\[d(x,x^{*}) =X+X^{\prime}+|v_{j}-v_{j}^{*}|+|v_{j+1}-v_{j+1}^{*}|\] \[=X+X^{\prime}+(v_{j}-v_{j}^{*})+(v_{j+1}-v_{j}+v_{j}-(v_{j}^{*}+1))\] \[=X+X^{\prime}+2(v_{j}-v_{j}^{*})+x_{j}-1\]
since \(v_{j+1}-v_{j}=x_{j}\). We define \(y\in N_{j}(x)\) so that \(w_{j}=v_{j}\) and \(w_{j+1}=v_{j}+1\). Then, we have \(d(y,x^{*})=X+X^{\prime}+2(v_{j}-v_{j}^{*})\), since \(w_{i}=v_{i}\) for any \(i\in\{1,\ldots,j-1\}\cup\{j+2,\ldots,\tilde{k}\}\). Therefore, since \(d(x,y)=x_{j}-1\), we obtain \(d(x,x^{*})=d(y,x^{*})+d(x,y)\), and hence (4) holds with equality.
The case where \(v_{j}^{*}\geq v_{j+1}\) is analogous to the second case. Therefore, in any case, there exists \(y\in N_{j}(x)\) that satisfies (4) with equality. Thus, the lemma holds.
By Lemma 8, it holds that
\[g^{\ell}(x,\sigma,P)=\min\biggl{\{}\min_{1\leq j\leq\tilde{k}-1}\min_{y\in N_{j}(x)}\left\{g^{\ell-1}(y,\sigma_{j},P)+x_{j}\right\},\min_{y\in N_{h}(x)}\left\{g^{\ell-1}(y,\sigma,P^{\prime})+x_{h}-1\right\}\biggr{\}}, \tag{5}\]
in which we note \(d(x,y)=x_{j}-1\) for \(y\in N_{j}(x)\).
**Lemma 9**.: _The function \(g^{\ell}(x,\sigma,P)\) can be expressed as the minimum of linear functions on \(x\). That is, for any \(\sigma\in\Sigma\) and \(1\leq\ell\leq\ell_{\max}\), there exist vectors \(c_{1},\ldots,c_{p}\) and real numbers \(\delta_{1},\ldots,\delta_{p}\) such that_
\[g^{\ell}(x,\sigma,P)=\min_{1\leq i\leq p}\{c_{i}x+\delta_{i}\}\]
_for any \(x\in\mathcal{R}\)._
Proof.: We prove this lemma by induction on \(\ell\). In the base case where \(\ell=0\), the poset \(P\) is empty, and \(\sigma\) is not changed. Thus, \(g^{0}(x,\sigma,P)=0\). Suppose that \(\ell\geq 1\).
By (5), it suffices to show that, for any \(j\) and \(\sigma^{\prime}\),
\[\min_{y\in N_{j}(x)}\left\{g^{\ell-1}(y,\sigma^{\prime},P)+x_{j}\right\}\]
can be expressed as the minimum of linear functions. By the induction hypothesis, \(g^{\ell-1}\) is the minimum of linear functions. Hence, the above can be expressed as
\[\min_{y\in N_{j}(x)}\left\{\min_{1\leq i\leq p^{\prime}}\{c_{i}^{\prime}y+ \delta_{i}^{\prime}\}+x_{j}\right\}=\min_{1\leq i\leq p^{\prime}}\min_{y\in N _{j}(x)}\left\{c_{i}^{\prime}y+x_{j}+\delta_{i}^{\prime}\right\}\]
for some linear functions \(c_{i}^{\prime}y+\delta_{i}^{\prime}\) for \(i=1,\ldots,p^{\prime}\).
Recall that, for every \(y\in N_{j}(x)\), we can write \(y=x+ae_{j-1}-(x_{j}-1)e_{j}+be_{j+1}\) for some \(a,b\in\mathbb{Z}_{+}\) with \(a+b=x_{j}-1\). Hence, it holds that
\[c_{i}^{\prime}y+x_{j}+\delta_{i}^{\prime}=c_{i}^{\prime}(x+ae_{j-1}-(x_{j}-1)e_ {j}+be_{j+1})+x_{j}+\delta_{i}^{\prime},\]
which is a linear function on \(x\) for given \(a\), \(b\), and \(j\). Therefore, the lemma holds by induction.
With Lemma 9, we are ready to prove Theorem 3.
Proof of Theorem 3.: By (5), it suffices to show that, for any \(\sigma\in\Sigma\) and \(1\leq j\leq\tilde{k}-1\), either \(y^{j}\) or \(y^{\prime j}\) achieves the minimum of
\[\min_{y\in N_{j}(x)}\left\{g^{\ell-1}(y,\sigma,P)+x_{j}\right\}.\]
Since \(g^{\ell-1}\) is in the form of the minimum of linear functions by Lemma 9, the above can be expressed as
\[\min_{1\leq i\leq p^{\prime}}\min_{a,b\in\mathbb{Z}_{+}:\,a+b=x_{j}-1}\left\{c^{\prime}_{i}(x+ae_{j-1}-(x_{j}-1)e_{j}+be_{j+1})+x_{j}+\delta^{\prime}_{i}\right\}\]
for some linear functions \(c^{\prime}_{i}y+\delta^{\prime}_{i}\) for \(i=1,\ldots,p^{\prime}\). For given \(x\) and \(j\), the minimum is attained either at \((a,b)=(0,x_{j}-1)\) or at \((a,b)=(x_{j}-1,0)\), which correspond to \(y^{j}\) and \(y^{\prime j}\), respectively. Thus, the theorem holds.
Let \(f_{0}\) be the initial placement and \(\mathsf{sig}(f_{0})=(x^{0},\sigma^{0})\). Let \(v^{0}_{i}=\sum_{j=0}^{i-1}x^{0}_{j}\) for each \(i\). For \(i=0,1,\ldots,\tilde{k}-1\), define \(B_{i}=\{f_{0}(v^{0}_{i}+1),f_{0}(v^{0}_{i}+2),\ldots,f_{0}(v^{0}_{i+1}-1)\}\), which are tokens not in \(\tilde{T}\). Then, \(|B_{i}|=x^{0}_{i}-1\). Theorem 3 shows that, when we compute \(g^{\ell}(x,\sigma,P)\) by using the formula, it suffices to consider signatures \((y,\sigma^{\prime})\) such that all tokens in \(B_{i}\) appear consecutively along the path. That is, we only consider vectors \(y\) such that, for each \(i\), \(y_{i}-1=\sum_{h=i^{\prime}}^{j^{\prime}}|B_{h}|\) for some \(0\leq i^{\prime},j^{\prime}\leq\tilde{k}-1\) (possibly, \(y_{i}-1=0\)). This shows that each \(y_{i}\) can take one of \(O(\tilde{k}^{2})\) values, and hence \(y\) has \(O(\tilde{k}^{2\tilde{k}})\) choices. Therefore, the size of the DP table is \(O(k\cdot k!\ell_{\max}\tilde{k}^{2\tilde{k}})\), and each step can be processed in a fixed-parameter time. Thus, we have the following theorem.
**Theorem 4**.: _There exists a fixed-parameter algorithm for Qubit Routing when \(G\) is a path and \(k=|S|\) is a parameter. _
## 4 Polynomial-time Algorithm for Disjoint Two-Qubit Operations
We say that an instance \((G,P=(S,\preceq),T,\varphi,f_{0})\) of Qubit Routing has _disjoint pairs_ if \(\varphi(s)\cap\varphi(s^{\prime})=\emptyset\) for any pair of distinct elements \(s,s^{\prime}\in S\). The objective of this section is to give a polynomial-time algorithm for instances with disjoint pairs when the graph \(G\) is a path.
**Theorem 5**.: Qubit Routing _can be solved in polynomial time when a given graph is a path and the instance has disjoint pairs._
Let \((G,P=(S,\preceq),T,\varphi,f_{0})\) be an instance with disjoint pairs. Since \(G\) is a path, we suppose for simplicity that \(V=\{1,2,\ldots,n\}\) and \(E=\{\{i,i+1\}\mid i=1,2,\ldots,n-1\}\). Let \(f\) be a token placement. For an element \(s\in S\) with \(f^{-1}(\varphi(s))=\{\alpha_{1},\alpha_{2}\}\), \(\alpha_{1}<\alpha_{2}\), we define \(\mathsf{gap}_{f}(s):=\alpha_{2}-\alpha_{1}-1\). Then \(\mathsf{gap}_{f}(s)\) is a lower bound on the number of swaps to realize \(\varphi(s)\) if the initial token placement is \(f\). For distinct elements \(s,s^{\prime}\in S\) such that \(f^{-1}(\varphi(s))=\{\alpha_{1},\alpha_{2}\}\), \(\alpha_{1}<\alpha_{2}\), and \(f^{-1}(\varphi(s^{\prime}))=\{\beta_{1},\beta_{2}\}\), \(\beta_{1}<\beta_{2}\), we say that \(s\) and \(s^{\prime}\)_cross_ if \(\alpha_{1}<\beta_{1}<\alpha_{2}<\beta_{2}\) or \(\beta_{1}<\alpha_{1}<\beta_{2}<\alpha_{2}\). One can observe that, if \(S=\{s,s^{\prime}\}\) such that \(s\) and \(s^{\prime}\) cross, we can realize both \(\varphi(s)\) and \(\varphi(s^{\prime})\) by \(\mathsf{gap}_{f}(s)+\mathsf{gap}_{f}(s^{\prime})-1\) swaps. Note that \(\alpha_{1},\alpha_{2},\beta_{1}\), and \(\beta_{2}\) are always distinct since the instance has disjoint pairs. We define
\[\mathsf{gap}(f) :=\sum_{s\in S}\mathsf{gap}_{f}(s),\] \[\mathsf{cross}(f) :=\left|\left\{\{s,s^{\prime}\}\in\binom{S}{2}\ \Big{|}\ s\mbox{ and }s^{\prime}\mbox{ cross}\right\}\right|,\] \[\mathsf{value}(f) :=\mathsf{gap}(f)-\mathsf{cross}(f),\]
where \(\mathsf{gap}\), \(\mathsf{cross}\), and \(\mathsf{value}\) are regarded as functions only in \(f\) by fixing \(G\), \(P\), \(T\), and \(\varphi\).
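The quantities \(\mathsf{gap}\), \(\mathsf{cross}\), and \(\mathsf{value}\) are straightforward to compute; the following sketch assumes the placement is encoded as the list of position pairs \(f^{-1}(\varphi(s))\), one per element of \(S\) (names and data layout are illustrative):

```python
from itertools import combinations

def gap_cross_value(pairs):
    """pairs: list of (alpha1, alpha2) with alpha1 < alpha2, the positions of
    the two tokens of each s in S on the path 1..n (disjoint pairs assumed).
    Returns (gap(f), cross(f), value(f))."""
    gap = sum(a2 - a1 - 1 for a1, a2 in pairs)
    cross = sum(1 for (a1, a2), (b1, b2) in combinations(pairs, 2)
                if a1 < b1 < a2 < b2 or b1 < a1 < b2 < a2)
    return gap, cross, gap - cross
```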
We prove the following proposition, which completes the proof of Theorem 5.
**Proposition 2**.: _The optimal value for the instance \((G,P=(S,\preceq),T,\varphi,f_{0})\) is \(\mathsf{value}(f_{0})\), and an optimal swap sequence can be constructed in polynomial time._
To prove Proposition 2, we first show that the optimal value is at most \(\mathsf{value}(f_{0})\).
**Lemma 10**.: _The instance \((G,P=(S,\preceq),T,\varphi,f_{0})\) has a feasible swap sequence of length \(\mathsf{value}(f_{0})\), and such a swap sequence can be found in polynomial time._
Proof.: We prove the statement by induction on the lexicographic ordering of \((\mathsf{gap}(f_{0}),\mathsf{value}(f_{0}))\). Note that \(\mathsf{gap}(f_{0})\geq 0\) and \(\mathsf{value}(f_{0})\geq-|\binom{S}{2}|\).1
Footnote 1: Although we can show a better lower bound on \(\mathsf{value}(f_{0})\) by careful analysis, we just present a trivial lower bound here, because we only need the fact that \(\mathsf{value}(f_{0})\) has a finite lower bound to apply the induction.
If \(\mathsf{gap}(f_{0})=0\), then \(f_{0}^{-1}(\varphi(s))\) forms a pair of adjacent vertices for any \(s\in S\), and hence \(\mathsf{cross}(f_{0})=\mathsf{value}(f_{0})=0\) holds. In such a case, since the trivial swap sequence consisting of a single token placement \(f_{0}\) is a feasible sequence of length zero, the statement holds.
Suppose that \(\mathsf{gap}(f_{0})\geq 1\). Let
\[i^{*}:=\max\{i\in\{1,2,\ldots,n\}\mid\exists s^{*}\in S\text{ s.t. }f_{0}^{-1}( \varphi(s^{*}))=\{i,j\}\text{ with }j\geq i+2\},\]
which is well-defined as \(\mathsf{gap}(f_{0})\geq 1\). Let \(f_{1}\) be the token placement obtained from \(f_{0}\) by applying a swap operation on \(\{i^{*},i^{*}+1\}\in E\). Then, we see that \(\mathsf{value}(f_{1})=\mathsf{value}(f_{0})-1\) and \(\mathsf{gap}(f_{1})\leq\mathsf{gap}(f_{0})\) by the following case analysis.
1. Suppose that \(i^{*}+1\not\in f_{0}^{-1}(\varphi(s))\) for any \(s\in S\). In this case, \(\mathsf{gap}(f_{1})=\mathsf{gap}(f_{0})-1\) and \(\mathsf{cross}(f_{1})=\mathsf{cross}(f_{0})\), which shows that \(\mathsf{value}(f_{1})=\mathsf{value}(f_{0})-1\).
2. Suppose that \(i^{*}+1\in f_{0}^{-1}(\varphi(s))\) for some \(s\in S\). By the maximality of \(i^{*}\) and by the assumption that the instance has disjoint pairs, we obtain \(f_{0}^{-1}(\varphi(s))=\{j,i^{*}+1\}\) for \(j<i^{*}\) or \(j=i^{*}+2\).
   * If \(f_{0}^{-1}(\varphi(s))=\{j,i^{*}+1\}\) for some \(j<i^{*}\), then \(\mathsf{gap}(f_{1})=\mathsf{gap}(f_{0})-2\) and \(\mathsf{cross}(f_{1})=\mathsf{cross}(f_{0})-1\), which shows that \(\mathsf{value}(f_{1})=\mathsf{value}(f_{0})-1\).
   * If \(f_{0}^{-1}(\varphi(s))=\{i^{*}+1,i^{*}+2\}\), then \(\mathsf{gap}(f_{1})=\mathsf{gap}(f_{0})\) and \(\mathsf{cross}(f_{1})=\mathsf{cross}(f_{0})+1\), which shows that \(\mathsf{value}(f_{1})=\mathsf{value}(f_{0})-1\).
Since \((\mathsf{gap}(f_{1}),\mathsf{value}(f_{1}))\) is lexicographically smaller than \((\mathsf{gap}(f_{0}),\mathsf{value}(f_{0}))\), by the induction hypothesis, instance \((G,P=(S,\preceq),T,\varphi,f_{1})\) has a feasible swap sequence of length \(\mathsf{value}(f_{1})=\mathsf{value}(f_{0})-1\). Since \(f_{1}\) is obtained from \(f_{0}\) by a single swap operation, this shows that \((G,P=(S,\preceq),T,\varphi,f_{0})\) has a feasible swap sequence of length \(\mathsf{value}(f_{0})\).
The proof of Lemma 10 shows that, by applying \(\mathsf{value}(f_{0})\) swap operations, we can transform \(f_{0}\) into another token placement \(f\) such that \(f^{-1}(\varphi(s))\) forms a pair of adjacent vertices for any \(s\in S\). This means that the obtained sequence is a feasible solution for any poset \(P\) on \(S\).
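The proof of Lemma 10 is constructive, and the construction can be phrased as a short greedy procedure. A minimal Python sketch is given below; the list-based encoding of the placement and the function names are ours, and by the lemma the returned sequence consists of exactly \(\mathsf{value}(f_{0})\) swaps.

```python
def greedy_swap_sequence(tokens, pairs):
    """tokens[i] is the token placed on vertex i+1 of the path (vertices are
    1-indexed); pairs is a list of token pairs (u, v), one per s in S, with
    disjoint pairs assumed.  Returns the list of swapped edges {i, i+1}."""
    tokens = list(tokens)
    pos = {t: i + 1 for i, t in enumerate(tokens)}        # token -> vertex
    swaps = []
    while True:
        # i* = largest i such that some pair occupies {i, j} with j >= i + 2
        candidates = [min(pos[u], pos[v]) for u, v in pairs
                      if abs(pos[u] - pos[v]) >= 2]
        if not candidates:                                # gap(f) = 0: done
            return swaps
        i = max(candidates)
        a, b = tokens[i - 1], tokens[i]                   # tokens on i, i+1
        tokens[i - 1], tokens[i] = b, a
        pos[a], pos[b] = i + 1, i
        swaps.append((i, i + 1))
```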
We next show that the optimal value is at least \(\mathsf{value}(f_{0})\).
**Lemma 11**.: _Each feasible swap sequence for the instance \((G,P=(S,\preceq),T,\varphi,f_{0})\) has length at least \(\mathsf{value}(f_{0})\)._
Proof.: It suffices to show that no single swap operation can decrease \(\mathsf{value}(f)\) by more than one. Consider a single swap operation on \(\{i,i+1\}\in E\) that transforms \(f\) into \(f^{\prime}\). It is easy to see that \(\mathsf{gap}(f^{\prime})\geq\mathsf{gap}(f)-2\) and \(\mathsf{cross}(f^{\prime})\leq\mathsf{cross}(f)+1\). We now prove \(\mathsf{value}(f^{\prime})\geq\mathsf{value}(f)-1\) by a case analysis.
1. Suppose that \(\mathsf{gap}(f^{\prime})\geq\mathsf{gap}(f)\). In this case, since \(\mathsf{cross}(f^{\prime})\leq\mathsf{cross}(f)+1\), we obtain \(\mathsf{value}(f^{\prime})\geq\mathsf{value}(f)-1\).
2. Suppose that \(\mathsf{gap}(f^{\prime})=\mathsf{gap}(f)-1\). In this case, we have one of the following:
   * there exists \(s\in S\) such that \(f^{-1}(\varphi(s))=\{j,i+1\}\) for some \(j\leq i-1\), and \(i\not\in f^{-1}(\varphi(s^{\prime}))\) for any \(s^{\prime}\in S\), or
   * there exists \(s\in S\) such that \(f^{-1}(\varphi(s))=\{i,j\}\) for some \(j\geq i+2\), and \(i+1\not\in f^{-1}(\varphi(s^{\prime}))\) for any \(s^{\prime}\in S\).

   In both cases, we obtain \(\mathsf{cross}(f^{\prime})=\mathsf{cross}(f)\), which shows that \(\mathsf{value}(f^{\prime})=\mathsf{value}(f)-1\).
3. Suppose that \(\mathsf{gap}(f^{\prime})=\mathsf{gap}(f)-2\). In this case, there exist \(s,s^{\prime}\in S\) such that \(f^{-1}(\varphi(s))=\{j,i+1\}\) for some \(j\leq i-1\) and \(f^{-1}(\varphi(s^{\prime}))=\{i,k\}\) for some \(k\geq i+2\). Then, we obtain \(\mathsf{cross}(f^{\prime})=\mathsf{cross}(f)-1\), which shows that \(\mathsf{value}(f^{\prime})=\mathsf{value}(f)-1\).
This completes the proof of the lemma.
## 5 Hardness: Stars and Antichains
In this section, we show that the problem is NP-hard even when \(G\) is a star and \(P\) is an antichain. Recall that an antichain is a poset in which every pair of elements is incomparable.
**Theorem 6**.: Qubit Routing _is NP-hard even when \(G\) is a star and \(P\) is an antichain._
Proof.: We reduce from Vertex Cover, which is known to be NP-hard [5].
Vertex Cover
**Input.** A graph \(H=(V(H),E(H))\) and a positive integer \(k\).
**Question.** Is there a vertex subset \(X\subseteq V(H)\), called a _vertex cover_ of \(H\), such that \(|X|\leq k\) and \(\{u,v\}\cap X\neq\emptyset\) for every edge \(\{u,v\}\in E(H)\)?
Suppose that we are given an instance of Vertex Cover that consists of a graph \(H=(V(H),E(H))\) and a positive integer \(k\). Let \(n=|V(H)|\). We construct an instance of Qubit Routing as follows. Define a set of tokens as \(T=V(H)\cup\{0\}\), where \(0\not\in V(H)\). Let \(S=E(H)\) and define \(\varphi\colon S\to\binom{T}{2}\) as \(\varphi(\{u,v\})=\{u,v\}\) for each \(\{u,v\}\in S\). The poset \(P\) is an antichain, i.e., every pair of elements in \(S\) is incomparable. Define a graph \(G=(V,E)\) as \(V=\{0,1,\ldots,n\}\) and \(E=\{\{0,i\}\mid i\in\{1,2,\ldots,n\}\}\). The initial token placement \(f_{0}\colon V\to T\) should satisfy \(f_{0}(0)=0\) and other tokens can be placed arbitrarily over the vertices \(\{1,2,\ldots,n\}\).
We claim that there exists a swap sequence of length \(k\) for the instance \((G,P,T,\varphi,f_{0})\) of Qubit Routing if and only if there exists a vertex cover \(X\) in \(H\) of size \(k\). Suppose that there exists a vertex cover \(X=\{v_{1},\ldots,v_{k}\}\) in \(H\) of size \(k\). Then, we construct the following swap sequence. In the swap \(f_{i-1}\leadsto f_{i}\), we exchange the token at the vertex \(0\) and the token \(v_{i}\). Then, for each edge \(\{v_{i},v\}\in E(H)\), it holds that \(\{f_{i}^{-1}(v_{i}),f_{i}^{-1}(v)\}=\{0,f_{i}^{-1}(v)\}\in E\). Therefore, the swap sequence \(f_{0}\leadsto\cdots\leadsto f_{k}\) realizes \(P\). Conversely, suppose that there exists a swap sequence \(f_{0}\leadsto f_{1}\leadsto\cdots\leadsto f_{k}\) of length \(k\) for the instance \((G,P,T,\varphi,f_{0})\). Let \(X=\{v\in V(H)\mid f_{i}^{-1}(0)=v\) for some \(i\in\{1,2,\ldots,k\}\}\), i.e., \(X\) is the set of vertices of \(H\) that are placed at \(0\) in the course of swaps. Since the swap sequence realizes \(P\), for every edge \(\{u,v\}\in E(H)\), there exists \(f_{i}\) such that \(\{f_{i}^{-1}(u),f_{i}^{-1}(v)\}\in E\), which implies that \(u\in X\) or \(v\in X\). Therefore, \(X\) is a vertex cover of \(H\).
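For concreteness, the construction in the proof can be written down in a few lines; the following sketch builds the Qubit Routing instance from a Vertex Cover instance (the data layout and names are illustrative, and the fresh token is assumed not to occur in \(V(H)\)):

```python
def vc_to_qubit_routing(H_vertices, H_edges):
    """Construct (T, S, phi, G, f_0) of Theorem 6: G is a star with center 0,
    S = E(H) is an antichain, and phi maps each edge to its own endpoints."""
    n = len(H_vertices)
    center_token = 0                                   # assumed not in V(H)
    T = [center_token] + list(H_vertices)              # tokens
    S = list(H_edges)                                  # antichain of pairs
    phi = {e: frozenset(e) for e in S}
    V = list(range(n + 1))                             # star vertices 0..n
    E = [(0, i) for i in range(1, n + 1)]              # every edge meets 0
    f0 = {0: center_token}                             # token 0 on the center
    f0.update({i: v for i, v in enumerate(H_vertices, start=1)})
    return T, S, phi, (V, E), f0
```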
## 6 Concluding Remarks
We initiated algorithmic studies on the quantum routing problem, also known as the swap minimization problem, from the viewpoint of theoretical computer science. The problem is of central importance in compiler design for quantum programs when they are implemented in some of the superconducting quantum computers such as IBM Quantum systems.
Most notably, we proved the quantum routing problem is NP-hard even when the graph topology of a quantum computer is a path, which corresponds to the so-called linear nearest neighbor architecture. In our proof, the initial token placement can be chosen arbitrarily. This implies that the combined optimization of the quantum assignment and the quantum routing is also NP-hard for the same architecture.
We also gave some algorithmic results, but they were restricted to the case of the linear nearest neighbor architectures. Possible future work is to give algorithmic results with theoretical guarantees for other graph topologies.
Acknowledgment. We thank Toshinari Itoko at IBM Research Tokyo for bringing the qubit allocation problem to our attention and also for his valuable comments.
|
2306.03617
|
A Data-Efficient Approach for Long-Term Human Motion Prediction Using
Maps of Dynamics
|
Human motion prediction is essential for the safe and smooth operation of
mobile service robots and intelligent vehicles around people. Commonly used
neural network-based approaches often require large amounts of complete
trajectories to represent motion dynamics in complex semantically-rich spaces.
This requirement may complicate deployment of physical systems in new
environments, especially when the data is being collected online from onboard
sensors. In this paper we explore a data-efficient alternative using maps of
dynamics (MoD) to represent place-dependent multi-modal spatial motion
patterns, learned from prior observations. Our approach can perform efficient
human motion prediction in the long-term perspective of up to 60 seconds. We
quantitatively evaluate its accuracy with limited amount of training data in
comparison to an LSTM-based baseline, and qualitatively show that the predicted
trajectories reflect the natural semantic properties of the environment, e.g.
the locations of short- and long-term goals, navigation in narrow passages,
around obstacles, etc.
|
Yufei Zhu, Andrey Rudenko, Tomasz P. Kucner, Achim J. Lilienthal, Martin Magnusson
|
2023-06-06T12:12:25Z
|
http://arxiv.org/abs/2306.03617v1
|
# A Data-Efficient Approach for Long-Term Human Motion Prediction Using Maps of Dynamics
###### Abstract
Human motion prediction is essential for the safe and smooth operation of mobile service robots and intelligent vehicles around people. Commonly used neural network-based approaches often require large amounts of complete trajectories to represent motion dynamics in complex semantically-rich spaces. This requirement may complicate deployment of physical systems in new environments, especially when the data is being collected online from onboard sensors. In this paper we explore a data-efficient alternative using _maps of dynamics_ (MoD) to represent place-dependent multi-modal spatial motion patterns, learned from prior observations. Our approach can perform efficient human motion prediction in the long-term perspective of up to 60 seconds. We quantitatively evaluate its accuracy with limited amount of training data in comparison to an LSTM-based baseline, and qualitatively show that the predicted trajectories reflect the natural semantic properties of the environment, e.g. the locations of short- and long-term goals, navigation in narrow passages, around obstacles, etc.
## I Introduction
Long-term human motion prediction (LHMP) is important for autonomous robots and vehicles to operate safely in populated environments [1]. Accurately predicting the future trajectories of people in their surroundings over extended time periods is essential for enhancing motion planning, tracking, automated driving, human-robot interaction, intelligent safety monitoring and surveillance.
Human motion is complex and may be influenced by several hard-to-model factors, including social rules and norms, personal preferences, and subtle cues in the environment that are not represented in geometric maps. To address these challenges, popular neural network approaches learn motion dynamics directly from data, with many recent studies developing models based on LSTMs [2], GANs [3], CNNs [4], CVAEs [5] and transformers [6]. Most of these approaches focus on learning to predict stochastic interactions between diverse moving agents in the short-term perspective in scenarios where the effect of the environment topology and semantics is minimal.
When predicting long-term human motion in complex, large-scale environments, the influence of the surrounding space (e.g. passages, stairs, entrances, various objects and semantically-meaningful areas) on human motion goes beyond what is contained in the current state of the moving person or the observed interactions. This impact of the environment has to be modelled explicitly, for instance by informing the prediction method with a semantic map [7, 8, 9]. Another effective approach to address this challenge is to use _maps of dynamics_ (MoDs). MoDs [10] are maps that encode spatial or spatio-temporal motion patterns as a feature of the environment. MoD-informed long-term human motion prediction (MoD-LHMP) approaches are particularly suited to predict motion in the long-term perspective, where the environment effects become critical for making accurate predictions. MoDs efficiently encode the stochastic local motion patterns over the entire map, informing the predictor in areas which may have no influence on the immediate decisions of the walking people, but become critical in the long-term perspective.
As a proof of concept for MoD-LHMP, we propose to build CLiFF MoDs [11] from training data and use them to bias a constant velocity motion prediction method, generating stochastic trajectory predictions for up to \(60\,\mathrm{s}\) into the future.
One crucial advantage of the MoD-LHMP approach is its data efficiency. Prior art neural network-based approaches often require large amounts of data for training, and their performance can significantly degrade in absence thereof. Typically, these approaches also need complete sequences of tracked positions for training. The proposed MoD-LHMP approach, on the other hand, allows encoding human motion from sparse and incomplete data, requiring only observed velocities in discrete locations and interpolating the missing motion in between. This property is relevant, for instance, when the deployed robot collects the data in an online fashion from on-board sensors and with a limited field of view.
In this work, we evaluate the efficiency of MoD-based motion encoding for making accurate long-term predictions. In our experiments we sample a few trajectories from the ATC dataset and use them to build a CLiFF map and train LSTM-based baselines. We then compare these methods using the ADE/FDE prediction accuracy metrics. Furthermore, we qualitatively demonstrate that the CLiFF-LHMP approach has the ability to predict human motion in complex environments over very long time horizons, implicitly inferring common goal points and correctly predicting trajectories that follow the complex topology of the environment, e.g. navigating around corners or obstacles or passing through narrow passages such as doors.
Fig. 1: Maps of dynamics provide an efficient and lightweight encoding of sparse and incomplete velocity data to characterize the motion flows in the environment. We propose a method to predict long-term multi-modal human motion using data-efficient CLiFF maps [11]. **Left:** trajectories from the ATC dataset used for training. **Right:** CLiFF map.
## II Method
### _Maps of Dynamics_
In the proposed approach for human motion prediction, we exploit Maps of Dynamics (MoDs), which encode human dynamics as a feature of the environment. By using velocity observations, human dynamics can be represented through flow models. In this work, we employ the Circular-Linear Flow Field map (CLiFF-map) [11] to represent the flow of human motion. CLiFF-map represents local flow patterns as a multi-modal, continuous joint distribution of speed and orientation. As the orientation of the velocity is a circular variable and its magnitude is a linear variable, CLiFF-map associates a semi-wrapped Gaussian mixture model (SWGMM) with each location, describing flow patterns around the given location, see Fig. 1. By using SWGMMs, CLiFF-map is able to properly address multimodality in the data, thereby enhancing its capability to predict uncertain long-term human motion. A CLiFF-map represents motion patterns based on local observations and estimates the likelihood of motion at a given query location. As it can be built from incomplete or spatially sparse data, CLiFF-map efficiently captures human motion patterns without requiring large amounts of data or complete trajectories. This characteristic makes CLiFF-LHMP a data-efficient approach for predicting human motion.
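To make the SWGMM concrete, here is a minimal sketch of drawing one (orientation, speed) sample from such a mixture at a query location; the parameter layout is an assumption on our part (the paper does not prescribe an implementation), and wrapping the angular component is what makes the model "semi-wrapped":

```python
import numpy as np

def sample_swgmm(weights, means, covs, rng=None):
    """Draw one (theta, rho) sample from a semi-wrapped GMM over
    (orientation, speed): pick a component, sample the underlying 2D
    Gaussian, and wrap the angular coordinate to [-pi, pi).  The speed
    component is linear; a practical implementation may clip it at zero."""
    if rng is None:
        rng = np.random.default_rng()
    k = rng.choice(len(weights), p=weights)
    theta, rho = rng.multivariate_normal(means[k], covs[k])
    theta = (theta + np.pi) % (2 * np.pi) - np.pi
    return theta, rho
```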
### _Motion Prediction_
We frame the task of predicting a person's future trajectory as using a short observed trajectory to infer a sequence of future states. The length of the observation history is \(O_{s}\in\mathbb{R}^{+}\) s, equivalent to an integer \(O_{p}>0\) observation time steps. With the current time-step denoted as the integer \(t_{0}\geq 0\), the sequence of observed states is \(\mathcal{H}=\langle s_{t_{0}-1},...,s_{t_{0}-O_{p}}\rangle\), where \(s_{t}\) is the state of a person at time-step \(t\). A state is represented by 2D Cartesian coordinates \((x,y)\), speed \(\rho\) and orientation \(\theta\): \(s=(x,y,\rho,\theta)\).
From the observed sequence \(\mathcal{H}\), we derive the observed speed \(\rho_{\rm obs}\) and orientation \(\theta_{\rm obs}\) at time-step \(t_{0}\). Then the current state becomes \(s_{t_{0}}=(x_{t_{0}},y_{t_{0}},\rho_{\rm obs},\theta_{\rm obs})\). The values of \(\rho_{\rm obs}\) and \(\theta_{\rm obs}\) are calculated as a weighted sum of the finite differences in the observed states, as in the recent ATLAS benchmark [12], such that \(\rho_{\rm obs}=\sum_{t=1}^{O_{p}}v_{t_{0}-t}g(t)\) and \(\theta_{\rm obs}=\sum_{t=1}^{O_{p}}\theta_{t_{0}-t}g(t)\), where \(g(t)=(\sigma\sqrt{2\pi}e^{\frac{1}{2}(\frac{t}{2})^{2}})^{-1}\).
Given the current state \(s_{t_{0}}\), the goal is to estimate a sequence of future states. Future states are predicted for a given horizon \(T_{s}\in\mathbb{R}^{+}\) s. \(T_{s}\) is equivalent to \(T_{p}>0\) prediction time steps assuming the constant time interval \(\Delta t\) between two predictions. Thus, the prediction horizon is \(T_{s}=T_{p}\Delta t\). The future sequence is then denoted as \(\mathcal{T}=\langle s_{t_{0}+1},s_{t_{0}+2},...,s_{t_{0}+T_{p}}\rangle\).
The CLiFF-LHMP algorithm is presented in Alg. 1. With the input of a CLiFF-map and past states of a person, the algorithm predicts a sequence of future states. To estimate \(\mathcal{T}\), for each prediction time step, we sample a velocity from the CLiFF-map at the current position (\(x_{t}\), \(y_{t}\)) to bias the prediction with the learned motion patterns represented by the CLiFF-map. To sample a velocity at a given location \((x,y)\), we first get the SWGMMs \(\Xi_{\rm near}\) whose distances to \((x,y)\) are less than \(r_{s}\), where \(r_{s}\) is the sampling radius. After getting the sampled velocity, the velocity (\(\rho_{t}\), \(\theta_{t}\)) is predicted by assuming that a person will continue walking with the same speed as in the last time step, \(\rho_{t}=\rho_{t-1}\), and biasing the direction of motion with the sampled orientation \(\theta_{s}\) as:
\[\theta_{t}=\theta_{t-1}+(\theta_{s}-\theta_{t-1})\cdot K(\theta_{s}-\theta_{t -1}), \tag{1}\]
where \(K(\cdot)\) is a kernel function that defines the degree of impact of the CLiFF-map. We use a Gaussian kernel with a parameter \(\beta\) that represents the kernel width:
\[K(x)=e^{-\beta\|x\|^{2}}. \tag{2}\]
With kernel \(K\), we scale the CLiFF-map term by the difference between the velocity sampled from the CLiFF-map and the current velocity according to a constant velocity model (CVM). The sampled velocity is trusted less if it deviates more from the current velocity. A larger \(\beta\) value makes the method behave more like a CVM, and a smaller \(\beta\) makes it more closely follow the CLiFF-map.
```
Input: \(\mathcal{H}\), \(x_{t_{0}}\), \(y_{t_{0}}\)
Output: \(\mathcal{T}\)
\(\mathcal{T}=\{\}\)
\(\rho_{\rm obs},\theta_{\rm obs}\leftarrow\) getObservedVelocity(\(\mathcal{H}\))
\(s_{t_{0}}=(x_{t_{0}},y_{t_{0}},\rho_{\rm obs},\theta_{\rm obs})\)
for \(t=t_{0}+1,\ldots,t_{0}+T_{p}\) do
    \(x_{t},y_{t}\leftarrow\) getNewPosition(\(s_{t-1}\))
    \(\theta_{s}\leftarrow\) sampleVelocityFromCLiFFmap(\(x_{t},y_{t}\))
    \((\rho_{t},\theta_{t})\leftarrow\) predictVelocity(\(\theta_{s}\), \(\rho_{t-1}\), \(\theta_{t-1}\))
    \(s_{t}\leftarrow(x_{t},y_{t},\rho_{t},\theta_{t})\)
    \(\mathcal{T}\leftarrow\mathcal{T}\cup s_{t}\)
return \(\mathcal{T}\)
```
**Algorithm 1** CLiFF-LHMP
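The prediction loop of Alg. 1 together with Eqs. (1)-(2) can be sketched in a few lines of Python. The CLiFF-map sampler is passed in as a callable (returning None outside the mapped area), and the constant-velocity position update is our assumption, since getNewPosition is not spelled out in the text:

```python
import math

def cliff_lhmp_predict(x0, y0, rho_obs, theta_obs, sample_orientation,
                       T_p, dt=1.0, beta=1.0):
    """Sketch of Alg. 1: roll the state forward for T_p steps, biasing the
    heading towards orientations sampled from the CLiFF-map (Eqs. (1)-(2)).
    `sample_orientation(x, y)` stands in for sampling from the SWGMMs within
    radius r_s of (x, y); the speed is kept constant between steps."""
    traj = []
    x, y, rho, theta = x0, y0, rho_obs, theta_obs
    for _ in range(T_p):
        x += rho * math.cos(theta) * dt          # constant-velocity step
        y += rho * math.sin(theta) * dt          # (assumed getNewPosition)
        theta_s = sample_orientation(x, y)
        if theta_s is None:                      # left the area covered by the MoD
            break
        diff = math.atan2(math.sin(theta_s - theta),
                          math.cos(theta_s - theta))      # wrapped difference
        theta += diff * math.exp(-beta * diff ** 2)       # Eqs. (1)-(2)
        traj.append((x, y, rho, theta))
    return traj
```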
## III Evaluation
In this section, we evaluate the data efficiency and accuracy of the proposed CLiFF-LHMP approach and compare it to the LSTM-based human motion prediction methods. Vanilla LSTM [13] is used as the baseline representative of LSTM-based methods.
### _Implementation Details_
We evaluate the prediction performance using the ATC dataset [14], which contains trajectories recorded in a shopping mall in Japan. The dataset covers a large indoor
environment with a total area of around \(900\,\mathrm{m}^{2}\). The ATC dataset consists of 92 days in total. Given the immense length of the ATC dataset for each recording day, a subset covering the first four days can be considered representative. We use the subset in the experiments, with the first day (Oct.24) for training, and the remaining 3 days for testing. Both the LSTM and CLiFF-LHMP approaches are trained with same data and evaluated with same data to ensure a fair comparison.
In the ATC dataset, the original detection rate is \(30\,\mathrm{Hz}\). We downsample the data to \(2.5\,\mathrm{Hz}\) to align with the \(0.4\,\mathrm{s}\) observation time interval commonly used in human motion prediction. For each trajectory, we take \(3.2\,\mathrm{s}\) (the first 8 positions) as the observation history and use the remaining trajectory (up to the maximum prediction horizon) as the prediction ground truth. Instead of using a fixed prediction horizon, we explore a wider range of values \(T_{s}\) up to a maximum value in our evaluation. The maximum prediction horizon is determined based on the tracking duration distribution of the dataset. We use the 90th percentile value, which is \(60\,\mathrm{s}\), as the maximum prediction horizon for the experiments on the ATC dataset. As LSTM-based approaches require complete trajectories for training, we use for all compared approaches trajectories of length \(60\,\mathrm{s}\) or longer for both training and testing.
Given the area and tracking duration in the ATC dataset, when evaluating CLiFF-LHMP, we set the prediction time step \(\Delta t\) to \(1\,\mathrm{s}\), the CLiFF-map resolution to \(1\,\mathrm{m}\), the sampling radius \(r_{s}\) to \(1\,\mathrm{m}\), and the kernel parameter \(\beta\) to 1. For training the vanilla LSTM model, we set the dimension of the hidden state to 128 and the learning rate to 0.003.
For the evaluation of the predictive performance we use the _Average_ and _Final Displacement Errors_ (ADE and FDE) metrics. ADE describes the error between points on the predicted trajectories and the respective ground truth at the same time step. FDE describes the error at the last prediction time step.
We stop predicting when the sample reaches an area outside of the MoD, in case of the CLiFF map, i.e. when no SWGMMs are available within the radius \(r_{s}\) around the sampled location. Predicted trajectories that end before \(T_{s}\) will only be included in the ADE/FDE evaluation up to the last predicted point. When predicting for each ground truth trajectory, the prediction horizon \(T_{s}\) is set either equal to its length or \(60\,\mathrm{s}\) for longer trajectories.
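The two metrics, under the truncation rule just described, can be computed as follows (a small sketch; the array shapes are assumptions):

```python
import numpy as np

def ade_fde(pred, gt):
    """pred, gt: sequences of (x, y) positions at matching time steps.
    Only the overlapping prefix is compared, so predictions that stop
    early (e.g. outside the MoD) are evaluated up to their last point."""
    T = min(len(pred), len(gt))
    err = np.linalg.norm(np.asarray(pred[:T]) - np.asarray(gt[:T]), axis=1)
    return err.mean(), err[-1]      # (ADE, FDE)
```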
### _Experiments and Results_
#### III-B1 Efficiency of motion prediction with limited data
To evaluate the data efficiency of the CLiFF-LHMP method, we ran a series of experiments with varying amount of training data [100, 200,..., 1000 trajectories]. The training data were randomly selected multiple times. Once selected, we fed the same data to train the CLiFF-map and the LSTM model, and the evaluation metrics were averaged from all the runs.
Figure 2 shows the ADE and FDE results for CLiFF-LHMP and vanilla LSTM for prediction horizon of \(60\,\mathrm{s}\), with the number of training trajectories ranging from 100 to 1000. CLiFF-LHMP consistently outperforms LSTM when predicting long-term human motion in these cases. When more than 200 training trajectories are used, the standard deviation of ADE and FDE of CLiFF-LHMP is also lower than for LSTM. While the performance of LSTM drops substantially for smaller training data sets, especially when training with fewer than 200 trajectories, CLiFF-LHMP has a stable performance even with as few as 100 training trajectories. When decreasing the training dataset size from 1000 to 100 trajectories, the error merely increases 4% in ADE and 1% in FDE for CLiFF-LHMP, while for LSTM the ADE increases by 35% and the FDE by 27%. Figure 3 shows a comparison on different prediction horizons from \(10\,\mathrm{s}\) to \(60\,\mathrm{s}\) for three sizes of the training dataset (200, 600, 1000 trajectories). When the prediction horizon increases, CLiFF-LHMP becomes slightly more sensitive to the amount of training data.
#### III-B2 Efficiency of motion representation
To compare the quality of the underlying CLiFF-map itself, trained with different amounts of data, we compute the Kullback-Leibler (KL) divergence [15] between the distributions represented in the CLiFF-maps. The KL divergence results are shown as heatmaps in Figure 4. The CLiFF-map associates a Gaussian mixture model with each location, and we use a KL divergence heatmap to visualize the changes between two different CLiFF-maps. The first image in Figure 4 shows the changes between CLiFF-maps built with 100 and 1000 trajectories, respectively. It is evident that as the number of training trajectories increases, the primary alterations in the CLiFF-map occur predominantly along the boundary regions. Moreover, in highly constrained environments, such as the eastern corridor of the ATC map, the velocity distributions exhibit comparatively minimal variations. The other three images in Figure 4 show the sensitivity of the CLiFF-map to the input data. When the number of training trajectories increases from 900 to 1000 (see the fourth image in Figure 4), the CLiFF-map changes less than when it increases from 100 to 200 (see the second image in Figure 4). This shows that the CLiFF-map can capture the major human motion patterns already with small amounts of training data.
Fig. 2: ADE/FDE of CLiFF-LHMP and LSTM in the ATC dataset, using different amounts of trajectories (100–1000) as training data. The prediction horizon is \(60\,\mathrm{s}\). The shade represents one std. dev.
Fig. 3: ADE/FDE of CLiFF-LHMP in the ATC dataset with training dataset of 200, 600, 1000 trajectories and with prediction horizon 10–60 s.
#### III-B3 Descriptive power of compact motion representation models
Figure 5 shows qualitative examples of predicted trajectories using Maps of Dynamics in the long-term perspective. As no explicit knowledge of the obstacle layout is given, the LSTM predicts infeasible trajectories that cross walls. In contrast, by exploiting learned motion patterns encoded in the CLiFF-map, our method predicts realistic trajectories that follow the complex topology of the environment, e.g. navigating around corners or obstacles, or passing through narrow passages such as doors, stairs (in the top part of the map) and exits (in the left part).
## IV Conclusions
In this paper, we present the idea to exploit _Maps of Dynamics_ (MoDs) for long-term human motion prediction. As a proof of concept for MoD-LHMP, we propose CLiFF-LHMP. Our method uses the CLiFF-map, a specific MoD that probabilistically represents human motion patterns within a velocity field. Our approach involves sampling velocities from the CLiFF-map to bias constant velocity predictions, generating stochastic trajectory predictions for up to \(60\,\mathrm{s}\) into the future. We evaluate CLiFF-LHMP using the ATC dataset, with a vanilla LSTM as the baseline approach. The experiments highlight the data efficiency advantage of our method. CLiFF-LHMP accuracy is only affected to a minor degree when using less than 200 trajectories as training data, while LSTM requires about three times as many trajectories to approach its optimal performance. The results also demonstrate that our approach consistently outperforms the LSTM method at the long prediction horizon of \(60\,\mathrm{s}\). By exploiting learned motion patterns encoded in the CLiFF-map, our method implicitly accounts for the obstacle layouts and predicts trajectories that follow the complex topology of the environment.
The current implementation of MoD-LHMP uses spatial motion patterns that are built offline from past observations. In the future we plan to extend the approach to online life-long learning, enabling live updates based on incoming motion observations. Another future direction is the evaluation of additional types of MoDs for long-term human motion prediction, including those capturing temporally-conditioned motion patterns. Additionally, we aim to formally describe and analyze the MoD-LHMP methodology and to include further datasets [16, 17] in the evaluation.
|
2310.10833
|
Proper Laplacian Representation Learning
|
The ability to learn good representations of states is essential for solving
large reinforcement learning problems, where exploration, generalization, and
transfer are particularly challenging. The Laplacian representation is a
promising approach to address these problems by inducing informative state
encoding and intrinsic rewards for temporally-extended action discovery and
reward shaping. To obtain the Laplacian representation one needs to compute the
eigensystem of the graph Laplacian, which is often approximated through
optimization objectives compatible with deep learning approaches. These
approximations, however, depend on hyperparameters that are impossible to tune
efficiently, converge to arbitrary rotations of the desired eigenvectors, and
are unable to accurately recover the corresponding eigenvalues. In this paper
we introduce a theoretically sound objective and corresponding optimization
algorithm for approximating the Laplacian representation. Our approach
naturally recovers both the true eigenvectors and eigenvalues while eliminating
the hyperparameter dependence of previous approximations. We provide
theoretical guarantees for our method and we show that those results translate
empirically into robust learning across multiple environments.
|
Diego Gomez, Michael Bowling, Marlos C. Machado
|
2023-10-16T21:14:50Z
|
http://arxiv.org/abs/2310.10833v2
|
# Proper Laplacian Representation Learning
###### Abstract
The ability to learn good representations of states is essential for solving large reinforcement learning problems, where exploration, generalization, and transfer are particularly challenging. The _Laplacian representation_ is a promising approach to address these problems by inducing intrinsic rewards for temporally-extended action discovery and reward shaping, and informative state encoding. To obtain the Laplacian representation one needs to compute the eigensystem of the graph Laplacian, which is often approximated through optimization objectives compatible with deep learning approaches. These approximations, however, depend on hyperparameters that are impossible to tune efficiently, converge to arbitrary rotations of the desired eigenvectors, and are unable to accurately recover the corresponding eigenvalues. In this paper we introduce a theoretically sound objective and corresponding optimization algorithm for approximating the Laplacian representation. Our approach naturally recovers both the true eigenvectors and eigenvalues while eliminating the hyperparameter dependence of previous approximations. We provide theoretical guarantees for our method and we show that those results translate empirically into robust learning across multiple environments.
## 1 Introduction
Reinforcement learning (RL) is a framework for decision-making where an agent continually takes actions in its environment and, in doing so, controls its future states. After each action, given the current state and the action itself, the agent receives a reward and a next state from the environment. The objective of the agent is to maximize the sum of these rewards. In principle, the agent has to visit all states and try all possible actions a reasonable number of times to determine the optimal behavior. However, in complex environments, e.g., when the number of states is large or the environment changes with time, this is not a plausible strategy. Instead, the agent needs the ability to learn representations of the state that facilitate exploration, generalization, and transfer.
The _Laplacian framework_(Mahadevan, 2005; Mahadevan & Maggioni, 2007) proposes one such representation. This representation is based on the graph Laplacian, which, in the tabular case, is a matrix that encodes the topology of the state space based on both the policy the agent uses to select actions and the environment dynamics. Specifically, the \(d-\)dimensional _Laplacian representation_ is a map from states to vectors whose entries correspond to \(d\) eigenvectors of the Laplacian.
The Laplacian representation is very effective as a distance metric in RL because its eigenvectors induce a space where Euclidean distance correlates to temporal distance. Thus, among other things, besides its use as a state representation (e.g., Mahadevan & Maggioni, 2007; Lan et al., 2022), it has been used for state abstraction (Wang et al., 2022), reward shaping (Wu et al., 2019; Wang et al., 2023), exploration via temporally-extended actions (see overview by Machado et al., 2023), and achieving state-of-the-art performance in sparse reward environments (Klissarov & Machado, 2023).
When the number of states, \(|\mathcal{S}|\), is small, the graph Laplacian can be represented as a matrix and one can use standard matrix eigendecomposition techniques to obtain its _eigensystem_ and the corresponding Laplacian representation. In practice, however, \(|\mathcal{S}|\) is large, or even uncountable. Thus, at some point it becomes infeasible to directly compute the eigenvectors of the Laplacian. In this context, Wu et al. (2019) proposed a scalable optimization procedure to obtain the Laplacian representation in state spaces with uncountably many states. Such an approach is based on a general definition of the graph Laplacian as a linear operator, also introduced by Wu et al. (2019). Importantly, this definition allows us to model the Laplacian representation as a neural network and to learn it by minimizing an unconstrained optimization objective, the _graph drawing objective_ (GDO).
However, arbitrary rotations of the eigenvectors of the Laplacian minimize the graph drawing objective (Wang et al., 2021). This not only implies that the solution found could differ from the true eigenvectors, but also that the gradient dynamics could be unstable. As a solution, Wang et al. (2021) proposed the _generalized graph drawing objective_ (GGDO), which breaks the symmetry of the optimization problem by introducing a sequence of decreasing hyperparameters to GDO. The true eigenvectors are the only solution to this new objective. Despite this, when minimizing this objective with stochastic gradient descent, the rotations of the smallest eigenvectors2 are still equilibrium points of the generalized objective. Consequently, there is variability in the eigenvectors one actually finds when minimizing such an objective, depending, for example, on the initialization of the network and on the hyperparameters chosen.
Footnote 2: We refer to the eigenvectors with corresponding smallest eigenvalues as the “smallest eigenvectors”.
These issues are particularly problematic because it is **impossible to tune the hyperparameters** of GGDO without already having access to the problem solution: previous results, when sweeping hyperparameters, used the cosine similarity between the _true eigenvectors_ and the approximated solution as a performance metric. To make matters worse, the best **hyperparameters are environment dependent**, as shown in Figure 1. Thus, when relying on GDO or GGDO, _it is impossible to guarantee an accurate estimate of the eigenvectors of the Laplacian in environments where one does not know these eigenvectors in advance_, which obviously defeats the whole purpose. Finally, the existing objectives are **unable to approximate the eigenvalues** of the Laplacian, and existing heuristics heavily depend on the accuracy of the estimated eigenvectors (Wang et al., 2023).
In this work, we introduce a theoretically sound max-min objective and a corresponding optimization procedure for approximating the Laplacian representation that addresses _all_ the aforementioned issues. Our approach naturally recovers both the true eigenvectors and eigenvalues while eliminating the hyperparameter dependence of previous approximations. Our objective, which we call the Augmented Lagrangian Laplacian Objective (ALLO), corresponds to a Lagrangian version of GDO augmented with stop-gradient operators. These operators break the symmetry between the rotations of the Laplacian eigenvectors, turning the eigenvectors and eigenvalues into the unique stable equilibrium point under gradient ascent-descent dynamics, independently of the original hyperparameters of GGDO. Besides theoretical guarantees, we empirically demonstrate that our proposed approach is robust across different environments with different topologies and that it is able to accurately recover the eigenvalues of the graph Laplacian as well.
Figure 1: Average cosine similarity between the true Laplacian representation and GGDO for different values of the barrier penalty coefficient, averaged over 60 seeds, with the best coefficient highlighted. The shaded region corresponds to a 95% confidence interval.
## 2 Background
We first review the reinforcement learning setting before presenting the Laplacian representation and previous work at optimization objectives for approximating it.
Reinforcement Learning. We consider the setting in which an agent interacts with an environment. The environment is a reward-agnostic Markov decision process \(M=(\mathcal{S},\mathcal{A},P,\mu_{0})\) with finite state space \(\mathcal{S}=\{1,\cdots,|\mathcal{S}|\}\), finite action space \(\mathcal{A}=\{1,\cdots,|\mathcal{A}|\}\), transition probability map \(P:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\), which maps a state-action pair \((s,a)\) to a state distribution \(P(\cdot|s,a)\) in the simplex \(\Delta(\mathcal{S})\), and initial state distribution \(\mu_{0}\in\Delta(\mathcal{S})\).5 The agent is characterized by the policy \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\) that it uses to choose actions. At time-step \(t=0\), an initial state \(S_{0}\) is sampled from \(\mu_{0}\). Then, the agent samples an action \(A_{0}\) from its policy and, as a response, the environment transitions to a new state \(S_{1}\), following the distribution \(P(S_{0},A_{0})\). After this, the agent selects a new action, the environment transitions again, and so on. The agent-environment interaction determines a Markov process characterized by the transition matrix \(\mathbf{P}_{\pi}\), where \((\mathbf{P}_{\pi})_{s,s^{\prime}}=\sum_{a\in\mathcal{A}}\pi(s,a)P(s^{\prime}|s,a)\) is the probability of transitioning from state \(s\) to state \(s^{\prime}\) while following policy \(\pi\).
Footnote 5: For ease of exposition, we restrict the notation, theorems, and proofs to the tabular setting. However, it is not difficult to generalize them to the setting in which the state space is a probability space and matrices become linear operators in a Hilbert space as done by Wu et al. (2019) and Wang et al. (2021).
Laplacian Representation. In graph theory, the object of study is a node set \(\mathcal{V}\) whose elements are pairwise connected by edges. The edge between a pair of nodes \(v,v^{\prime}\in\mathcal{V}\) is quantified by a non-negative real number \(w_{v,v^{\prime}}\), which is \(0\) only if there is no edge between the nodes. The adjacency matrix, \(\mathbf{W}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\), stores the information of all edges such that \((\mathbf{W})_{v,v^{\prime}}=w_{v,v^{\prime}}\). The degree of a node \(v\) is the sum of the adjacency weights between \(v\) and all other nodes in \(\mathcal{V}\), and the degree matrix \(\mathbf{D}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\) is the diagonal matrix containing these degrees. The Laplacian \(\mathbf{L}\) of a graph is defined as \(\mathbf{L}=\mathbf{D}-\mathbf{W}\), and, just as the adjacency matrix, it fully encodes the information of the graph.
If we consider the state space of an MDP \(M\) as the set of nodes, \(\mathcal{V}=\mathcal{S}\), and \(\mathbf{W}\) as determined by \(\mathbf{P}_{\pi}\), then we might expect the graph Laplacian to encode useful temporal information about \(M\), meaning the number of time steps required to go from one state to another. In accordance with Wu et al. (2019), we broadly define the _Laplacian_ in the tabular reinforcement learning setting as any matrix \(\mathbf{L}=\mathbf{I}-f(\mathbf{P}_{\pi})\), where \(f:\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}\rightarrow\mathrm{Sym}_{|\mathcal{S}|}(\mathbb{R})\) is some function that maps \(\mathbf{P}_{\pi}\) to a symmetric matrix.6 For example, if \(\mathbf{P}_{\pi}\) is symmetric, the Laplacian is typically defined as either \(\mathbf{L}=\mathbf{I}-\mathbf{P}_{\pi}\) or \(\mathbf{L}=\mathbf{I}-(1-\lambda)\Phi_{\pi}^{\lambda}\), where \(\Phi_{\pi}^{\lambda}=(\mathbf{I}-\lambda\mathbf{P}_{\pi})^{-1}\) is a matrix referred to as the successor representation matrix (Dayan, 1993; Machado et al., 2018). In the case where \(\mathbf{P}_{\pi}\) is not symmetric, \(\mathbf{L}\) is usually defined as \(\mathbf{L}=\mathbf{I}-\frac{1}{2}(\mathbf{P}_{\pi}+\mathbf{P}_{\pi}^{\top})\) to ensure it is symmetric (Wu et al., 2019).
Footnote 6: The Laplacian has \(|S|\) different **real** eigenvectors and corresponding eigenvalues only if it is symmetric.
Footnote 7: For proofs in the tabular setting, see the work by Koren (2003) for the case \(d=2\), and Lemma 1 for arbitrary \(d\). For the abstract setting, see the work by Wang et al. (2021).
The _Laplacian representation_, \(\phi:\mathcal{S}\rightarrow\mathbb{R}^{d}\), maps a state \(s\) to \(d\) corresponding entries in a set of \(0<d\leq|\mathcal{S}|\) chosen eigenvectors of \(\mathbf{L}\), i.e., \(\phi(s)=[\mathbf{e}_{1}[s],\cdots,\mathbf{e}_{d}[s]]^{\top}\), where \(\mathbf{e}_{i}\) is the \(i-\)th smallest eigenvector of \(\mathbf{L}\) and \(\mathbf{e}_{i}[s]\), its \(s-\)th entry (Mahadevan & Maggioni, 2007; Stachenfeld et al., 2014; Machado et al., 2017).
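In the tabular case the representation can be obtained directly by eigendecomposition. A minimal numpy sketch, using the symmetrized Laplacian \(\mathbf{L}=\mathbf{I}-\frac{1}{2}(\mathbf{P}_{\pi}+\mathbf{P}_{\pi}^{\top})\) mentioned above (the function name is ours):

```python
import numpy as np

def laplacian_representation(P_pi, d):
    """Return (Phi, eigvals): Phi has shape (|S|, d) and row s is phi(s),
    built from the d smallest eigenvectors of L = I - (P_pi + P_pi^T) / 2."""
    n = P_pi.shape[0]
    L = np.eye(n) - 0.5 * (P_pi + P_pi.T)
    eigvals, eigvecs = np.linalg.eigh(L)     # ascending eigenvalues
    return eigvecs[:, :d], eigvals[:d]
```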
The Graph Drawing Objective.Given the graph Laplacian \(\mathbf{L}\), the spectral graph drawing optimization problem (Koren, 2003) is defined as follows:
\[\min_{\mathbf{u}_{1},\cdots,\mathbf{u}_{d}\in\mathbb{R}^{\mathcal{ S}}} \quad\sum_{i=1}^{d}\langle\mathbf{u}_{i},\mathbf{L}\mathbf{u}_{i}\rangle\] (1) such that \[\langle\mathbf{u}_{j},\mathbf{u}_{k}\rangle=\delta_{jk}\;,\;1\leq k \leq j\leq d\,,\]
where \(\langle\cdot,\cdot\rangle\) is the inner product in \(\mathbb{R}^{|\mathcal{S}|}\) and \(\delta_{jk}\) is the Kronecker delta. This optimization problem has two desirable properties. The first one is that the \(d\) smallest eigenvectors of \(\mathbf{L}\) are a global optimizer.7
Hence, the Laplacian representation \(\phi\) associated with \(\mathbf{L}\) is a solution to this problem. The second property is that both objective and constraints can be expressed as expectations, making the problem amenable to stochastic gradient descent. In particular, the original _constrained_ optimization problem (1) can be approximated by the unconstrained _graph drawing objective_ (GDO):
\[\min_{\mathbf{u}\in\mathbb{R}^{d|\mathcal{S}|}}\quad\sum_{i=1}^{d} \langle\mathbf{u}_{i},\mathbf{L}\mathbf{u}_{i}\rangle+b\sum_{j=1}^{d}\sum_{k=1 }^{d}\big{(}\langle\mathbf{u}_{j},\mathbf{u}_{k}\rangle-\delta_{jk}\big{)}^{2}\,, \tag{2}\]
where \(b\in(0,\infty)\) is a scalar hyperparameter and \(\mathbf{u}=[\mathbf{u}_{1}^{\top},\cdots,\mathbf{u}_{d}^{\top}]^{\top}\) is the vector that results from concatenating the vectors \((\mathbf{u}_{i})_{i=1}^{d}\)(Wu et al., 2019).
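A direct translation of the unconstrained objective (2) into code might look as follows. This is a sketch of the exact (non-sampled) loss only; in practice the paper's setting replaces the inner products by stochastic estimates from transition samples.

```python
import numpy as np

def gdo_loss(U, L, b):
    """Unconstrained graph drawing objective (GDO), Equation (2).

    U : (|S|, d) array whose columns are the candidate vectors u_1, ..., u_d.
    L : (|S|, |S|) Laplacian matrix.
    b : barrier coefficient of the quadratic penalty.
    """
    dirichlet = np.trace(U.T @ L @ U)              # sum_i <u_i, L u_i>
    gram = U.T @ U                                 # <u_j, u_k> for all pairs
    penalty = np.sum((gram - np.eye(U.shape[1])) ** 2)
    return dirichlet + b * penalty
```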
The Generalized Graph Drawing Objective.As mentioned before, any rotation of the smallest eigenvectors of the Laplacian \(\mathbf{L}\) is a global optimizer of the constrained optimization problem (1). Hence, even with an appropriate choice of hyperparameter \(b\), GDO does not necessarily approximate the Laplacian representation \(\phi\). As a solution, Wang et al. (2021) present the generalized graph drawing optimization problem:
\[\min_{\mathbf{u}\in\mathbb{R}^{d|\mathcal{S}|}}\quad\sum_{i=1}^{ d}c_{i}\langle\mathbf{u}_{i},\mathbf{L}\mathbf{u}_{i}\rangle \tag{3}\] \[\text{such that}\quad\langle\mathbf{u}_{j},\mathbf{u}_{k}\rangle= \delta_{jk}\,,\ 1\leq k\leq j\leq d\,,\]
where \(c_{1}>\cdots>c_{d}>0\) is a monotonically decreasing sequence of \(d\) hyperparameters. Correspondingly, the unconstrained _generalized graph drawing objective_ (GGDO) is defined as:
\[\min_{\mathbf{u}\in\mathbb{R}^{d|\mathcal{S}|}}\quad\sum_{i=1}^{ d}c_{i}\langle\mathbf{u}_{i},\mathbf{L}\mathbf{u}_{i}\rangle+b\sum_{j=1}^{d} \sum_{k=1}^{d}\min(c_{j},c_{k})\big{(}\langle\mathbf{u}_{j},\mathbf{u}_{k} \rangle-\delta_{jk}\big{)}^{2}\,. \tag{4}\]
Wang et al. (2021) prove that the optimization problem (3) has a unique global minimum that corresponds to the smallest eigenvectors of \(\mathbf{L}\), for _any_ possible choice of the hyperparameter sequence \((c_{i})_{i=1}^{d}\). However, in the unconstrained setting, which is the setting used when training neural networks, these hyperparameters do affect both the dynamics and the quality of the final solution. In particular, Wang et al. (2021) found in their experiments that the linearly decreasing choice \(c_{i}=d-i+1\) performed best across different environments. More importantly, under gradient descent dynamics, the introduced coefficients are unable to break the symmetry and arbitrary rotations of the eigenvectors are still equilibrium points (see Corollary (1) in Section 4).
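For comparison, a sketch of the GGDO loss (4) with the linearly decreasing coefficients \(c_{i}=d-i+1\) mentioned above; again, only the exact (non-sampled) loss is shown.

```python
import numpy as np

def ggdo_loss(U, L, b):
    """Unconstrained generalized graph drawing objective (GGDO), Equation (4),
    with the linearly decreasing coefficients c_i = d - i + 1."""
    d = U.shape[1]
    c = np.arange(d, 0, -1.0)                      # c_i = d - i + 1
    dirichlet = np.sum(c * np.einsum("si,st,ti->i", U, L, U))   # sum_i c_i <u_i, L u_i>
    gram = U.T @ U
    weights = np.minimum(c[:, None], c[None, :])   # min(c_j, c_k)
    penalty = np.sum(weights * (gram - np.eye(d)) ** 2)
    return dirichlet + b * penalty
```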
## 3 Augmented Lagrangian Laplacian Objective
In this section we introduce a method that retains the benefits of GGDO while avoiding its pitfalls. Specifically, we relax the goal of having a unique global minimum for a constrained optimization problem like (3). Instead, we modify the stability properties of the unconstrained dynamics to ensure that the only stable equilibrium point corresponds to the Laplacian representation.
Asymmetric Constraints as a Generalized Graph Drawing Alternative.We want to break the _dynamical symmetry_ of the Laplacian eigenvectors that make any of their rotations an equilibrium point for GDO (2) and GGDO (4) while avoiding the use of hyperparameters. For this, let us consider the original graph drawing optimization problem (1). If we set \(d=1\), meaning we try to approximate only the first eigenvector \(\mathbf{e}_{1}\), it is clear that the only possible solution is \(\mathbf{u}_{1}^{\star}=\mathbf{e}_{1}\). This happens because the only possible rotations are \(\pm\mathbf{e}_{1}\). If we then try to solve the optimization problem for \(d=2\), but fix \(\mathbf{u}_{1}=\mathbf{e}_{1}\), the solution will be \((\mathbf{u}_{1}^{\star},\mathbf{u}_{2}^{\star})=(\mathbf{e}_{1},\mathbf{e}_{2})\), as desired. Repeating this process \(d\) times, we can obtain \(\phi\). Thus, we can eliminate the need for the \(d\) hyperparameters introduced by GGDO by solving \(d\) separate optimization problems. To replicate this separation while maintaining a single unconstrained optimization objective, we introduce the stop-gradient operator \(\llbracket\cdot\rrbracket\) in GDO. This operator does not affect the objective in any way, but it indicates that, when following gradient descent dynamics, the real gradient of the objective is not used. Instead,
when calculating derivatives, whatever is inside the operator is assumed to be constant. Specifically, the objective becomes:
\[\min_{\mathbf{u}\in\mathbb{R}^{d|\mathcal{S}|}} \sum_{i=1}^{d}\langle\mathbf{u}_{i},\mathbf{L}\mathbf{u}_{i} \rangle+b\sum_{j=1}^{d}\sum_{k=1}^{j}\left(\langle\mathbf{u}_{j},\llbracket \mathbf{u}_{k}\rrbracket\rangle-\delta_{jk}\right)^{2}. \tag{5}\]
Note that in addition to the stop-gradient operators, the upper bound in the inner summation is now the variable \(j\), instead of the constant \(d\). These two modifications ensure that \(\mathbf{u}_{i}\) changes only to satisfy the constraints associated to the previous vectors \((\mathbf{u}_{j})_{j=1}^{i-1}\) and itself, but not the following ones, i.e., \((\mathbf{u}_{j})_{j=i+1}^{d}\). Hence, the asymmetry in the descent direction achieves the same effect as having \(d\) separate optimization problems. In particular, as proved in Lemma 2 in the next section, the descent direction of the final objective, yet to be defined, becomes \(\mathbf{0}\) only for permutations of a subset of the Laplacian eigenvectors, and not for any of its rotations.
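A possible implementation of objective (5) is sketched below in JAX, where `jax.lax.stop_gradient` plays the role of the operator \(\llbracket\cdot\rrbracket\). The matrix layout and the triangular mask are implementation choices made for this sketch, not taken from the paper's code.

```python
import jax
import jax.numpy as jnp

def asymmetric_gdo_loss(U, L, b):
    """Objective (5): GDO with stop-gradients and a triangular (asymmetric) penalty.

    U : (|S|, d) matrix of candidate vectors; L : Laplacian; b : barrier coefficient.
    For the pair (j, k) with k <= j, gradients only flow through u_j, while u_k is
    treated as a constant via jax.lax.stop_gradient.
    """
    d = U.shape[1]
    dirichlet = jnp.trace(U.T @ L @ U)
    U_const = jax.lax.stop_gradient(U)
    gram = U.T @ U_const                            # <u_j, [u_k]> with u_k frozen
    errors = (gram - jnp.eye(d)) ** 2
    lower = jnp.tril(jnp.ones((d, d)))              # keep only the terms with k <= j
    return dirichlet + b * jnp.sum(lower * errors)
```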
Augmented Lagrangian Dynamics for Exact Learning.The regularization term added in all of the previous objectives (2), (4), and (5) is typically referred to as a quadratic penalty with barrier coefficient \(b\). This coefficient shifts the equilibrium point of the original optimization problems (1) and (3), and one can only guarantee that the desired solution is obtained in the limit \(b\rightarrow\infty\) (see Chapter 17 by Nocedal & Wright, 2006). In practice, one can increase \(b\) until a satisfactory solution is found. However, not only is there no direct metric to tell how close one is to the true solution, but also an extremely large \(b\) is empirically bad for neural networks when optimizing GDO or GGDO. As a principled alternative, we propose the use of augmented Lagrangian methods. Specifically, we augment the objective (5) by adding the original constraints, multiplied by their corresponding dual variables, \((\beta_{jk})_{1\leq k\leq j\leq d}\). This turns the optimization problem into the following max-min objective, which we call the _augmented Lagrangian Laplacian objective_ (ALLO):
\[\max_{\mathbf{\beta}}\min_{\mathbf{u}\in\mathbb{R}^{d|\mathcal{S}|}} \sum_{i=1}^{d}\langle\mathbf{u}_{i},\mathbf{L}\mathbf{u}_{i} \rangle+\sum_{j=1}^{d}\sum_{k=1}^{j}\beta_{jk}\big{(}\langle\mathbf{u}_{j}, \llbracket\mathbf{u}_{k}\rrbracket\rangle-\delta_{jk}\big{)}+b\sum_{j=1}^{d} \sum_{k=1}^{j}\left(\langle\mathbf{u}_{j},\llbracket\mathbf{u}_{k} \rrbracket\rangle-\delta_{jk}\right)^{2}, \tag{6}\]
where \(\mathbf{\beta}=[\beta_{1,1},\beta_{2,1},\beta_{2,2},\cdots,\beta_{d,1},\cdots, \beta_{d,d}]\in\mathbb{R}^{d(d+1)/2}\) is a vector containing all of the dual variables. There are two reasons to introduce the additional linear penalties, which at first glance do not seem to contribute anything that the quadratic one is not adding already. First, for an appropriately chosen \(b\), the equilibria of the max-min objective (6) correspond exactly to permutations of the smallest Laplacian eigenvectors, and only the sorted eigenvectors are a stable solution under gradient ascent-descent dynamics. Second, the optimal dual variables \(\mathbf{\beta}^{\star}\) are proportional to the smallest Laplacian eigenvalues, meaning that with this single objective one can recover naturally **both** eigenvectors and eigenvalues of \(\mathbf{L}\) (see the next section for the formalization of these claims).
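A sketch of the corresponding loss, in the same JAX style as above, storing the dual variables in the lower triangle of a \(d\times d\) matrix (an implementation choice for this sketch, not the paper's code):

```python
import jax
import jax.numpy as jnp

def allo_loss(U, beta, L, b):
    """Augmented Lagrangian Laplacian objective (6).

    U    : (|S|, d) matrix of candidate vectors.
    beta : (d, d) matrix whose lower triangle holds the dual variables beta_jk.
    The same value is maximized over beta and minimized over U.
    """
    d = U.shape[1]
    dirichlet = jnp.trace(U.T @ L @ U)
    U_const = jax.lax.stop_gradient(U)
    errors = U.T @ U_const - jnp.eye(d)             # <u_j, [u_k]> - delta_jk
    lower = jnp.tril(jnp.ones((d, d)))              # keep only the terms with k <= j
    linear = jnp.sum(lower * beta * errors)         # dual-variable penalties
    quadratic = jnp.sum(lower * errors ** 2)        # quadratic barrier penalty
    return dirichlet + linear + b * quadratic
```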
Something to note is that the standard augmented Lagrangian has been discussed in the literature as a potential approach for learning eigenvectors of linear operators, but it was dismissed due to lack of empirical stability (Pfau et al., 2019). ALLO overcomes this problem through the introduction of the stop-gradient operators, which are responsible for breaking the symmetry of the Laplacian eigenvector rotations, in a similar way as how gradient masking is used in spectral inference networks (Pfau et al., 2019).
Barrier Dynamics.For the introduced max-min objective to work, in theory, \(b\) has to be larger than a **finite** value that depends on the specific Laplacian \(\mathbf{L}\). Moreover, if \(f(\mathbf{P}_{\pi})\) in the definition of \(\mathbf{L}\) is a stochastic matrix, which is the case for all of the typical definitions mentioned previously, one can exactly determine a lower bound for \(b\), as proved in the next section. In practice, however, we found that \(b\) still needs to be increased. In our experiments, we do so in a gradient ascent fashion, just as with the dual variables.
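Putting the pieces together, one gradient ascent-descent step could look as follows, reusing `allo_loss` from the sketch above. The step sizes are placeholders, and increasing \(b\) by the size of the constraint violation is one way to realize the "gradient ascent fashion" described here; it is not necessarily the exact schedule used in the experiments.

```python
import jax
import jax.numpy as jnp

def allo_step(U, beta, b, L, lr_primal=1e-3, lr_dual=1e-3, lr_barrier=1e-2):
    """One gradient ascent-descent step on objective (6)."""
    d = U.shape[1]
    g_U = jax.grad(allo_loss, argnums=0)(U, beta, L, b)      # respects stop_gradient
    g_beta = jax.grad(allo_loss, argnums=1)(U, beta, L, b)
    U_new = U - lr_primal * g_U                               # descent on the primal variables
    beta_new = beta + lr_dual * g_beta                        # ascent on the dual variables
    errors = (U.T @ U - jnp.eye(d)) ** 2                      # constraint violations
    b_new = b + lr_barrier * jnp.sum(jnp.tril(jnp.ones((d, d))) * errors)
    return U_new, beta_new, b_new
```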
## 4 Theoretical results
To prove the soundness of the proposed max-min objective, we need to show two things: 1) that the equilibria of this objective correspond to the desired eigensystem of the Laplacian, and 2) that the desired equilibrium is stable under stochastic gradient ascent-descent dynamics.
As an initial motivation, the following Lemma deals with the first point in the _stationary setting_. While it is already known that the set of solutions to the graph drawing optimization problem (1) corresponds to the rotations of the smallest eigenvectors of \(\mathbf{L}\), the Lemma considers a primal-dual perspective of the problem that allows one to relate the dual variables with the eigenvalues of \(\mathbf{L}\). This identification is relevant since previous methods are not able to recover the eigenvalues.
**Lemma 1**.: _Consider a symmetric matrix \(\mathbf{L}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}\) with increasing, and possibly repeated, eigenvalues \(\lambda_{1}\leq\cdots\leq\lambda_{|\mathcal{S}|}\), and a corresponding sequence of eigenvectors \((\mathbf{e}_{i})_{i=1}^{|\mathcal{S}|}\). Then, given a number of components, \(1\leq d\leq|\mathcal{S}|\), the pair \((\mathbf{u}_{i}^{*})_{i=1}^{d},(\beta_{jk}^{*})_{1\leq k\leq j\leq d}\), where \(\mathbf{u}_{i}^{*}=\mathbf{e}_{i}\) and \(\beta_{jk}^{*}=-\lambda_{j}\delta_{jk}\), is a solution to the primal-dual pair of optimization problems corresponding to the spectral graph drawing optimization problem (1). Furthermore, any other primal solution corresponds to a rotation of the eigenvectors \((\mathbf{e}_{i})_{i=1}^{d}\)._
See the Appendix for the proof of Lemma 1. Now that we know that the primal-dual pair of optimization problems associated to (1) has as a solution the smallest eigensystem of the Laplacian, the following Lemma shows that the equilibria of the max-min objective (6) coincides only with this solution, up to a constant, and any possible permutation of the eigenvectors, **but not with its rotations**.
**Lemma 2**.: _The pair \(\mathbf{u}^{*},\boldsymbol{\beta}^{*}\) is an equilibrium pair of the max-min objective (6), under gradient ascent-descent dynamics, if and only if \(\mathbf{u}^{*}\) coincides with a subset of eigenvectors of the Laplacian \((\mathbf{e}_{\sigma(i)})_{i=1}^{d}\), for some permutation \(\sigma:\mathcal{S}\to\mathcal{S}\), and \(\beta_{jk}^{*}=-2\lambda_{\sigma(j)}\delta_{jk}\)._
Proof.: Let us denote \(\mathcal{L}\) the objective (6). Then, we have the following gradient ascent-descent dynamical system:
\[\mathbf{u}_{i}[t+1]=\mathbf{u}_{i}[t]-\alpha_{\text{primal}}\cdot\mathbf{g}_{ \mathbf{u}_{i}}(\mathbf{u}[t],\boldsymbol{\beta}[t])\,,\,\,\,\forall 1\leq i\leq d,\]
\[\beta_{jk}[t+1]=\beta_{jk}[t]+\alpha_{\text{dual}}\cdot\frac{\partial \mathcal{L}}{\partial\beta_{jk}}(\mathbf{u}[t],\boldsymbol{\beta}[t])\,,\,\, \,\forall 1\leq k\leq j\leq d\,,\]
where \(t\in\mathbb{N}\) is the discrete time index, \(\alpha_{\text{primal}},\alpha_{\text{dual}}>0\) are step sizes, and \(\mathbf{g}_{\mathbf{u}_{i}}\) is the gradient of \(\mathcal{L}\) with respect to \(\mathbf{u}_{i}\), taking into account the stop-gradient operator. We avoid the notation \(\nabla_{\mathbf{u}_{i}}\mathcal{L}\) to emphasize that \(\mathbf{g}_{\mathbf{u}_{i}}\) is not a real gradient, but a chosen direction that ignores what is inside the stop-gradient operator.
The equilibria of our system correspond to those points for which \(\mathbf{u}_{i}^{*}[t+1]=\mathbf{u}_{i}^{*}[t]\) and \(\beta_{jk}^{*}[t+1]=\beta_{jk}^{*}[t]\). Hence,
\[\mathbf{g}_{\mathbf{u}_{i}}(\mathbf{u}^{*},\boldsymbol{\beta}^{*})=2\mathbf{ L}\mathbf{u}_{i}^{*}+\sum_{j=1}^{i}\beta_{ij}\mathbf{u}_{j}^{*}+2b\sum_{j=1}^{i}( \langle\mathbf{u}_{i}^{*},\mathbf{u}_{j}^{*}\rangle-\delta_{ij})\mathbf{u}_{j }^{*}=\mathbf{0}\,,\,\,\,\forall 1\leq i\leq d, \tag{7}\]
\[\frac{\partial\mathcal{L}}{\partial\beta_{jk}}(\mathbf{u}^{*},\boldsymbol{ \beta}^{*})=\langle\mathbf{u}_{j}^{*},\mathbf{u}_{k}^{*}\rangle-\delta_{jk}=0 \,,\,\,\,\forall 1\leq k\leq j\leq d\,. \tag{8}\]
We proceed now by induction over \(i\), considering that Equation (8) tells us that \(\mathbf{u}^{*}\) corresponds to an orthonormal basis. For the base case \(i=1\) we have:
\[\mathbf{g}_{\mathbf{u}_{1}}(\mathbf{u}^{*},\boldsymbol{\beta}^{*})=2\mathbf{ L}\mathbf{u}_{1}^{*}+\beta_{1,1}\mathbf{u}_{1}^{*}=0\,.\]
Thus, we can conclude that \(\mathbf{u}_{1}\) is an eigenvector \(\mathbf{e}_{\sigma(1)}\) of the Laplacian, and that \(\beta_{1,1}\) corresponds to its eigenvalue, specifically \(\beta_{1,1}=-2\lambda_{\sigma(1)}\), for some permutation \(\sigma:\mathcal{S}\to\mathcal{S}\). Now let us suppose that \(\mathbf{u}_{j}=\mathbf{e}_{\sigma(j)}\) and \(\beta_{jk}=-2\lambda_{\sigma(j)}\delta_{jk}\) for \(j<i\). Equation (7) for \(i\) then becomes:
\[\mathbf{g}_{\mathbf{u}_{i}}(\mathbf{u}^{*},\boldsymbol{\beta}^{*})=2\mathbf{ L}\mathbf{u}_{i}^{*}+\beta_{ii}\mathbf{u}_{i}^{*}+\sum_{j=1}^{i-1}\beta_{ij} \mathbf{e}_{\sigma(j)}=0\,.\]
In general, we can express \(\mathbf{u}_{i}^{*}\) as the linear combination \(\mathbf{u}_{i}^{*}=\sum_{j=1}^{|\mathcal{S}|}c_{ij}\mathbf{e}_{\sigma(j)}\) since the eigenvectors of the Laplacian form a basis. Also, given that \(\langle\mathbf{u}_{i},\mathbf{u}_{j}\rangle=0\) for \(j<i\), we have that \(c_{ij}=0\) for \(j<i\). Hence,
\[2\sum_{j=i}^{|\mathcal{S}|}c_{ij}\mathbf{L}\mathbf{e}_{\sigma(j)}+\beta_{ii} \sum_{j=i}^{|\mathcal{S}|}c_{ij}\mathbf{e}_{\sigma(j)}+\sum_{j=1}^{i-1}\beta_{ ij}\mathbf{e}_{\sigma(j)}=\sum_{j=i}^{|\mathcal{S}|}c_{ij}(2\lambda_{\sigma(j)}+ \beta_{ii})\mathbf{e}_{\sigma(j)}+\sum_{j=1}^{i-1}\beta_{ij}\mathbf{e}_{\sigma(j )}=0\,.\]
By orthogonality of the eigenvectors, we must have that each coefficient is \(0\), implying that \(\beta_{ij}=0\) and either \(c_{ij}=0\) or \(\beta_{ii}=-2\lambda_{\sigma(j)}\). The last equation allows us to conclude that a pair \((c_{ij},c_{ik})\) can only differ from \(0\) simultaneously for \(j,k\) such that \(\lambda_{\sigma(j)}=\lambda_{\sigma(k)}\), i.e., \(\mathbf{u}_{i}\) lies in the subspace of eigenvectors corresponding to the same eigenvalue, where each point is in itself an eigenvector. Thus, we can conclude that \(\mathbf{u}_{i}=\mathbf{e}_{\sigma(i)}\) and \(\beta_{ij}=-2\lambda_{\sigma(i)}\delta_{ij}\), as desired.
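The conclusion of Lemma 2 is easy to check numerically: at \(\mathbf{u}_{i}=\mathbf{e}_{i}\), \(\beta_{ii}=-2\lambda_{i}\) and \(\beta_{ij}=0\) for \(j<i\), the direction in Equation (7) vanishes. A small sanity check, with a random symmetric matrix standing in for \(\mathbf{L}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, b = 6, 3, 2.0
A = rng.normal(size=(n, n))
L = A @ A.T                                   # a random symmetric stand-in for the Laplacian
lam, E = np.linalg.eigh(L)                    # ascending eigenvalues and eigenvectors

for i in range(d):
    u_i = E[:, i]
    g = 2 * L @ u_i + (-2 * lam[i]) * u_i     # beta_ii term; cross terms (j < i) are zero
    g += 2 * b * (u_i @ u_i - 1.0) * u_i      # quadratic penalty term, also zero
    assert np.allclose(g, 0.0), "Equation (7) should vanish at the sorted eigenvectors"
```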
As a Corollary to Lemma 2, let us suppose that we fix all the dual variables to \(0\), i.e., \(\beta_{jk}=0\). Then, we will obtain that the constraints of the original optimization problem (1) must be violated for any possible equilibrium point. This explains why optimizing GGDO in Equation (4) may converge to undesirable rotations of the Laplacian eigenvectors, even when the smallest eigenvectors are the unique solution of the original associated constrained optimization problem.
**Corollary 1**.: _The point \(\mathbf{u}^{*}\) is an equilibrium point of the objective (2) or the objective (4), under gradient descent dynamics, if and only if for any \(1\leq i\leq d\) there exists a \(1\leq j\leq d\) such that \(\langle\mathbf{u}^{*}_{i},\mathbf{u}^{*}_{j}\rangle\neq\delta_{ij}\). That is, the equilibrium is guaranteed to be different from the eigenvectors of the Laplacian._
Finally, we prove that even when all permutations of the Laplacian eigenvectors are equilibrium points of the proposed objective (6), only the one corresponding to the ordered smallest eigenvectors and its eigenvalues is stable. This is in contrast with GGDO.
**Theorem 1**.: _The only permutation in Lemma 2 that corresponds to a stable equilibrium point of the max-min objective (6) is the identity permutation, under an appropriate selection of the barrier coefficient \(b\). That is, there exists a finite barrier coefficient such that \(\mathbf{u}^{*}_{i}=\mathbf{e}_{i}\) and \(\beta^{*}_{jk}=-2\lambda_{j}\delta_{jk}\) correspond to the only stable equilibrium pair, where \(\lambda_{i}\) is the \(i-\)th smallest eigenvalue of the Laplacian and \(\mathbf{e}_{i}\) its corresponding eigenvector. In particular, any \(b>2\) guarantees stability._
Proof Sketch.: The complete proof is in the Appendix. We have that \(\mathbf{g}_{\mathbf{u}_{i}}\) and \(\nicefrac{{\partial\mathcal{L}}}{{\partial\beta_{jk}}}\) define the chosen ascent-descent direction. Concatenating these vectors and scalars in a single vector \(\mathbf{g}(\mathbf{u},\mathbf{\beta})\), the stability of the dynamics can be determined from the Jacobian matrix \(J(\mathbf{g})\). Specifically, if all the eigenvalues of this matrix have a positive real part in the equilibrium pair \(\mathbf{u}^{*},\mathbf{\beta}^{*}\), we can conclude that the equilibrium is stable. If there is one eigenvalue with negative real part, then it is unstable (see Chicone, 2006; Sastry, 2013; Mazumdar et al., 2020). As proved in the Appendix, for any pair \(1\leq i<j\leq|\mathcal{S}|\), there exists a real eigenvalue proportional to \(\lambda_{\sigma(j)}-\lambda_{i}\). This means that, unless the \(\sigma\) permutation is the identity, there will be at least one negative eigenvalue and the equilibrium corresponding to this permutation will be unstable.
## 5 Experiments
We evaluate three different aspects of the proposed max-min objective: eigenvector accuracy, eigenvalue accuracy, and the necessity of each of the components of the proposed objective.
Eigenvector Accuracy.We start by considering the grid environments shown in Figure 2. We generate \(200,000\) transition samples in each of them from a uniform random policy and a uniform initial state distribution. We use the \((x,y)\) coordinates as inputs to a fully-connected neural network \(\phi_{\mathbf{\theta}}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{d}\), parameterized by \(\mathbf{\theta}\), with 3 layers of 256 hidden units to approximate the \(d-\)dimensional Laplacian representation \(\phi\), where \(d=11\). The network is trained using stochastic gradient descent with our objective (see work by Wu et al., 2019, for details). We repeat this process with different initial barrier coefficients, using the same values as in Figure 1.
Figure 3 shows the average cosine similarity of eigenvectors found using ALLO compared to the true Laplacian eigenvectors. In all three environments, it learns close approximations of the smallest \(d-\)eigenvectors in fewer gradient updates than GGDO (see Figure 1) and without a strong dependence on the chosen barrier coefficients.
Figure 2: Examples of grid environments. Color corresponds to the second smallest eigenvector of the Laplacian learned by ALLO.
As a second and more conclusive experiment, we select the barrier coefficient that displayed the best performance for GGDO across the three previous environments (\(b=2.0\)), and the best barrier increasing rate, \(\alpha_{\text{barrier}}\), for our method across the same environments (\(\alpha_{\text{barrier}}=0.01\)). Then, we use these values to learn the Laplacian representation in 12 different grid environments, each with a different number of states and topology (see Figure 6 in the Appendix). In this case, we generated 1 million transitions for training.
Figure 4A compares the average cosine similarities obtained with each method. In particular, it shows the mean difference of the average cosine similarities across 60 seeds. Noticeably, the baseline fails completely in the two smallest environments (i.e., GridMaze-7 and GridMaze-9), and it also fails partially in the two largest ones (i.e., GridMaze-32 and GridRoom-64). In contrast, ALLO finds close approximations of the true Laplacian representation across all environments, with the exception of GridRoomSym-4, where it still found a more accurate representation than GGDO. These results are statistically significant for 9 out of 12 environments, with a p-value threshold of 0.1 (see Table 1 in the Appendix). Again, this suggests that the proposed objective is successful in removing the untunable-hyperparameter dependence observed in GGDO.
Eigenvalue Accuracy.The dual variables of ALLO should capture the eigenvalues of their associated eigenvectors. Here, we quantify how well they approximate the true eigenvalues in the same 12 grid environments as in Figure 4A. In particular, we compare our eigenvalue accuracy against those found with a simple alternative method (Wang et al., 2023), based on GGDO and on Monte Carlo approximations. Figure 4B shows that the average relative error for the second to last eigenvalues, meaning all except one, is consistently larger across all environments when using the alternative approach, with a significance level of \(0.01\). This is not a surprising result given the poor results in eigenvector accuracy for GGDO. However, in several environments the error is high even for the smallest eigenvalues, despite GGDO approximations being relatively more accurate for the associated eigenvectors. Across environments and across the eigenspectrum, our proposed objective provides more accurate estimates of the eigenvalues.
Ablations.ALLO has three components that are different from GGDO: (1) the **stop-gradient** as a mechanism to break the symmetry, (2) the **dual variables** that penalize the linear constraints and from which we extract the eigenvalues of the graph Laplacian, and (3) the mechanism to monotonically **increase the barrier coefficient** that scales the quadratic penalty. Our theoretical results suggest that the stop-gradient operation and the dual variables are necessary, while increasing the barrier coefficient could be helpful, eventually eliminating the need for the dual variables if all one cared about was to approximate the eigenvectors of the graph Laplacian, not its eigenvalues. In this section, we perform ablation studies to validate whether these insights translate into practice when using neural networks to minimize our objective. Specifically, in GridMaze-19, we compare the average cosine similarity of ALLO, with the same objective but without dual variables, and with GGDO, which does not use dual variables, nor the stop gradient, nor the increasing coefficients. For completeness, we also evaluate GGDO objective with increasing coefficients.
The curves in each panel of Figure 5 represent the different methods we evaluate, while the different panels evaluate the impact of different rates of increase of the barrier coefficient. Our results show that increasing the barrier coefficient is indeed important: keeping it fixed, as GDO and GGDO do, prevents us from obtaining the true eigenvectors. It is also interesting to observe that the rate at which we increase the barrier coefficient matters empirically, but it does not prevent
Figure 3: Average cosine similarity between the true Laplacian and ALLO for different initial values of the barrier coefficient \(b\), averaged over 60 seeds, with the best coefficient highlighted. The shaded region corresponds to a 95% confidence interval.
our method from obtaining the true eigenvectors. The importance of the stop gradient is evident when one looks at the difference in performance between GGDO and ALLO (and variants), particularly when not increasing the barrier coefficients. Finally, it is interesting to observe that the addition of the dual variables, which is essential to estimate the eigenvalues of the graph Laplacian, does not impact the performance of our approach. Based on our theoretical results, we conjecture the dual variables add stability to the learning process in larger environments, but we leave this for future work.
## 6 Conclusion
In this paper we introduced a theoretically sound min-max objective that makes use of stop-gradient operators to turn the Laplacian representation into the unique stable equilibrium point of a gradient ascent-descent optimization procedure. We showed empirically that, when applied to neural networks, the objective is robust to the same untunable hyperparameters that affect alternative objectives across environments with diverse topologies. In addition, we showed how the objective results in a more accurate estimation of the Laplacian eigenvalues when compared to alternatives.
As future work, it would be valuable to better understand the theoretical impact of the barrier coefficient in the optimization process. Since we can now obtain the eigenvalues of the graph Laplacian, it would also be interesting to see how they could be leveraged, e.g., as an emphasis vector for feature representations or as a proxy for the duration of temporally-extended actions discovered from the Laplacian. Finally, it would be exciting to see the impact that having access to a proper approximation of the Laplacian will have in algorithms that rely on it (e.g., Wang et al., 2023; Klissarov and Machado, 2023).
#### Acknowledgments
We thank Alex Lewandowski for helpful discussions about the Laplacian representation, Martin Klissarov for providing an initial version of the baseline (GGDO), and Adrian Orenstein for provid
Figure 4: Difference of cosine similarities when approximating eigenvectors (A), and of relative errors for eigenvalues (B). Error bars show the standard deviation of the differences. GR and GM stand for GridRoom and GridMaze. Blue bars correspond to p-values below \(0.01\).
Figure 5: Average cosine similarity for different objectives in the environment GridMaze-19, for initial barrier coefficient \(b=0.1\), and for different barrier increase rates \(\alpha_{\text{barrier}}\).
|
2306.13404
|
X-ray diffraction from dislocation half-loops in epitaxial films
|
X-ray diffraction from dislocation half-loops consisting of a misfit segment
and two threading arms extending from it to the surface is calculated by the
Monte Carlo method. The diffraction profiles and reciprocal space maps are
controlled by the ratio of the total lengths of the misfit and the threading
segments of the half-loops. A continuous transformation from the diffraction
characteristic of misfit dislocations to that of threading dislocations with
increasing thickness of an epitaxial film is studied. Diffraction from
dislocations with edge and screw threading arms is considered and the
contributions of both types of dislocations are compared.
|
Vladimir M. Kaganer
|
2023-06-23T09:36:01Z
|
http://arxiv.org/abs/2306.13404v1
|
# X-ray diffraction from dislocation half-loops in epitaxial films
###### Abstract
X-ray diffraction from dislocation half-loops consisting of a misfit segment and two threading arms extending from it to the surface is calculated by the Monte Carlo method. The diffraction profiles and reciprocal space maps are controlled by the ratio of the total lengths of the misfit and the threading segments of the half-loops. A continuous transformation from the diffraction characteristic of misfit dislocations to that of threading dislocations with increasing thickness of an epitaxial film is studied. Diffraction from dislocations with edge and screw threading arms is considered and the contributions of both types of dislocations are compared.
## I Introduction
Misfit dislocations are the most common mode of strain relaxation in epitaxial films [1; 2; 3; 4]. Since the dislocation lines cannot terminate inside a crystal, a misfit dislocation is accompanied by threading arms that extend to the surface (or terminate at an incoherent boundary; we do not consider this case here). The glide of the threading arm under the action of epitaxial strain is the most prominent mechanism of strain relaxation [5]. Threading dislocations passing through the active region of a heteroepitaxial structure lead to a degradation of its electronic properties, whereas misfit dislocations, if located at the interface of a buffer layer below the active region, may have no negative effect. Therefore, a separate determination of misfit and threading dislocations is of primary interest in the characterization of heterostructures for electronic and optoelectronic applications. The density of threading dislocations can be very low if the dislocations glide over long distances, up to the entire length of the sample. At the other extreme, epitaxial gallium nitride is a well-known example of a crystal with high threading dislocation densities [6].
Shifts in the positions of the X-ray diffraction peaks due to relaxation of the average strain by misfit dislocations are commonly used to detect strain relaxation and the corresponding misfit dislocation density [7]. Dislocations also cause inhomogeneous strain, leading to additional diffuse scattering at low dislocation densities and to a broadening of the X-ray peaks at high dislocation densities. The interpretation of the diffraction peak profiles is not as straightforward as that of the mean strain due to dislocations, since the positions of the dislocations may be correlated for kinetic or energetic reasons. The elastic energy of a dislocation array is reduced when misfit dislocations reduce fluctuations in the mean distances between dislocations, from a random to a more periodic arrangement. Threading dislocations reduce the elastic energy when dislocations with opposite Burgers vectors are closer together to compensate for long-range strain.
The theory of X-ray diffraction from misfit [8] and threading [9] dislocations takes these correlations into account and shows that the diffraction peak profiles are sensitive to them. Scattering from misfit dislocations cannot be neglected even in situations where the threading dislocations dominate. Reciprocal space maps of GaN films several microns thick, where threading dislocations are expected to dominate, also showed a significant scattering from misfit dislocations [10; 11]. In these studies, misfit and threading dislocations were considered as two separate dislocation arrays uncorrelated with each other.
It is more appropriate to model the dislocation distribution by dislocation half-loops consisting of a misfit segment and two threading arms extending from it to the surface. The two threading segments have opposite displacement fields, corresponding to opposite directions of the dislocation lines when the Burgers vector is kept constant along the half-loop. Equivalently, the two threading segments can be considered to have opposite Burgers vectors, if the dislocation line directions are taken to be the same. These threading dislocations screen the strain field from each other and provide a model of the dislocation correlations that reduce the elastic energy of the film [12]. By varying the relative lengths of the misfit and threading segments, one can go from the limiting case of misfit dislocations to the opposite limit of threading dislocations. The elastic field of a dislocation half-loop is quite complicated (see Supporting Information) and the diffraction from the half-loops can hardly be studied analytically. However, the X-ray diffraction from a statistical distribution of defects with known elastic fields can be calculated by the Monte Carlo method [12; 13].
The aim of the present work is to model the X-ray diffraction from dislocation half-loops. We follow a transformation of the reciprocal space maps and the diffraction profiles with increasing film thickness while keeping the misfit dislocation density constant. In this way a change from the diffraction pattern characteristic of misfit dislocations to that of threading dislocations can be analyzed. We show that the parameter controlling this transformation is the ratio of the total lengths of misfit and threading dislocations, or equivalently, the ratio of the mean length of the misfit segment to the film thickness. We find that this transformation is rather smooth and also depends on the inclination of the actual diffraction vector to the surface. We compare the effects of the half-loops with the edge and screw dislocation types of the threading arms, and find that they both contribute to the symmetric Bragg reflections.
## II Monte Carlo simulation of X-ray diffraction
We study the X-ray diffraction from the dislocation half-loops sketched in Fig. 1. Threading arms are assumed to be
straight and perpendicular to the film surface. Two types of dislocations are considered. Dislocations with edge threading arms (denoted by \(b_{y}\) in Fig. 1) have Burgers vectors normal to the half-loop plane. Such half-loops correspond to the insertion (or removal, depending on the sense of the mismatch) of a rectangular piece of the extra atomic plane, bounded by the dislocation line and shadowed in Fig. 1. It releases the mismatch between the film and the substrate. The second type of dislocations has screw threading dislocation arms (denoted by \(b_{z}\) in Fig. 1). Their misfit segments provide a local tilt of the film. For these half-loops, Burgers vectors with opposite signs are taken with equal probability, so there is no net tilt of the film.
We take the density of the threading dislocation arms \(\rho_{\rm T}\) and the mean length of the misfit segment \(L\) as two parameters characterizing the dislocation ensemble. The misfit dislocation density is therefore \(\rho_{\rm M}=L\rho_{\rm T}/2\), since each half-loop has two threading arms. We note that the threading dislocation density \(\rho_{\rm T}\) and the misfit dislocation density \(\rho_{\rm M}\) have different dimensionalities. Threading dislocation density is the number of threading dislocations per unit area of the surface, or more generally, the total length of the threading dislocations per unit volume. Misfit dislocation density is the number of dislocations per unit length of interface, or more generally, the total length of the dislocation lines per unit area of interface.
A parameter that controls the relative contributions of misfit and threading dislocations is the ratio \(L/t\) of the mean length of the misfit segment \(L\) to the film thickness \(t\). One can also compare the total length of misfit dislocations per unit area of the interface \(\rho_{\rm M}\) with that of threading dislocations \(\rho_{\rm T}t\), since the length of each threading segment is \(t\). Given the definition of \(\rho_{\rm M}\) above, this ratio is simply \(L/2t\). Another parameter of the dislocation array is the dimensionless parameter \(M\) introduced by Wilkens [14; 15] to characterize the screening of the dislocation strain by the surrounding dislocations. It is equal to the ratio of the mean distance \(L\) between threading dislocations with opposite Burgers vectors (assuming the same dislocation line directions of the threading segments) to the mean distance between threading dislocations \(\rho_{\rm T}^{-1/2}\), so that \(M=L\rho_{\rm T}^{1/2}\).
The Monte Carlo simulations below are performed for an example of a GaN\(\{0001\}\) epitaxial film. The positions of the dislocation half-loops are random and uncorrelated. The lengths \(L\) of the misfit segments have a lognormal distribution with the standard deviation \(L/2\). The misfit segments of the half-loops run in three equivalent \(\left\langle 1\bar{1}00\right\rangle\) directions with equal probability. The length of the Burgers vector of a half-loop with edge threading arms \(b_{x}\) is \(a=0.319\) nm, while that of the half-loop with screw threading arms \(b_{z}\) is \(c=0.518\) nm. The displacement field of a half-loop, satisfying the elastic boundary conditions of the free surface, is constructed from the displacement field of an angular dislocation near the free surface [16] and that of a dislocation normal to the surface [17]. Details of the construction and the analytical expressions for all components of the displacements are given in the Supplementary Information.
The choice of Poisson's ratio to model dislocations in GaN is somewhat ambiguous. In strain relaxation problems for elastically anisotropic epitaxial films, the Poisson ratio is commonly chosen to give the same vertical strain as in the isotropic approximation. For GaN\((0001)\) this requirement gives \(\nu=c_{13}/(c_{13}+c_{33})\), where \(c_{ij}\) are the anisotropic elastic moduli. The value \(\nu=0.21\) is obtained using elastic moduli of GaN from Ref. [18]. The measured values of \(\nu\) vary from \(0.15\) to \(0.23\)[19]. On the other hand, the strain field of a straight edge dislocation with \((0001)\) dislocation line direction in an anisotropic hexagonal crystal coincides with the isotropic solution when the Poisson ratio is taken to be \(\nu_{h}=c_{12}/(c_{12}+c_{11})\)[20]. Using the elastic moduli of GaN [18], Poisson's ratio is \(\nu_{h}=0.27\). We use the latter value in the Monte Carlo simulations below, to get a better representation of the strain fields of the threading dislocation arms.
The diffracted intensity is a Fourier transform of the correlation function \(G({\bf r}_{1},{\bf r}_{2})=\left\langle\exp\left[i{\bf Q}\cdot({\bf U}({\bf r }_{2})-{\bf U}({\bf r}_{1}))\right]\right\rangle\) to reciprocal space. Here \({\bf r}_{1}\) and \({\bf r}_{2}\) are the coordinates of two points inside the crystal, \({\bf U}({\bf r})\) is the total displacement due to all dislocations (equal to the sum of the displacement fields of individual dislocations due to linear elasticity) calculated in these two points, and \({\bf Q}\) is the diffraction vector. The statistical average \(\left\langle\ldots\right\rangle\) over the dislocation ensemble and the Fourier transform can be performed simultaneously in one and the same Monte Carlo integration [13]. This integration is time consuming, especially when dislocation densities are large and low intensities at asymptotes are of interest: the integration is a summation of complex numbers of modulus \(1\) to finally obtain a real number which is much less than one.
When the dislocation density is large and hence the mean-squared strain is large, only correlations between closely spaced points \({\bf r}_{1}\) and \({\bf r}_{2}\) are of importance [21]. The expansion \({\bf Q}\cdot({\bf U}({\bf r}_{2})-{\bf U}({\bf r}_{1}))\approx({\bf r}_{2}-{ \bf r}_{1})\cdot\nabla({\bf Q}\cdot{\bf U})\) allows to reduce the X-ray intensity calculation to the calculation of the probability density of the respective distortion components [22; 23]. Specifically, the intensity \(I(q_{x},q_{z})\) in the reciprocal space map is calculated as the joint probability density of the distortions \(q_{x}=-\partial({\bf Q}\cdot{\bf U})/\partial x\) and \(q_{z}=-\partial({\bf Q}\cdot{\bf U})/\partial z\). These distortions depend on a depth \(z\) of the point in the epitaxial film at which they are calculated. Therefore, an integration over \(z\) is performed from the surface \(z=0\) to the interface \(z=t\). As is usual for a Monte Carlo simulation, this integration does not require any additional computational effort: the point \(z\) is randomly and homogeneously
Figure 1: Geometry of an epitaxial film with a dislocation half-loop.
seeded on the interval \([0,t]\). Similarly, the intensity \(I(q)\) in a double-crystal scan with an open detector, in particular the scan in skew geometry, is calculated as the probability density of the distortion \(q=-\hat{\mathbf{K}}^{\text{out}}\cdot\nabla(\mathbf{Q}\cdot\mathbf{U})\), where \(\hat{\mathbf{K}}^{\text{out}}\) is a unit vector in the direction of the diffracted beam and the integration of the probability density over \(z\) is performed, as above. This expression for the distortion component was derived in Appendix A of Ref. [24] and is re-derived in the Supplementary Information in more familiar Cartesian coordinates. The wave vector \(q\) is related to the angular deviation \(\omega\) by \(q=Q\omega\cos\theta\), where \(\theta\) is the Bragg angle of the actual reflection. The calculation of the strain probability density distribution is orders of magnitude faster than the straightforward calculation mentioned above because it avoids summing the oscillating complex terms.
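The structure of this computation can be sketched as follows. Here `sample_loops` and `dQU_grad` are hypothetical placeholders for the half-loop ensemble and the analytical displacement gradients of the Supplementary Information, and the lateral sampling window is arbitrary; the sketch only illustrates the accumulation of the strain probability density.

```python
import numpy as np

def intensity_map(Q, t, sample_loops, dQU_grad, n_samples=100_000, bins=200):
    """Sketch of the Stokes-Wilson Monte Carlo described above: the map I(q_x, q_z)
    is accumulated as the joint probability density of the distortions
    q_x = -d(Q.U)/dx and q_z = -d(Q.U)/dz at randomly chosen points of the film.

    sample_loops()        -> one random realization of the half-loop ensemble.
    dQU_grad(loops, Q, r) -> gradient of Q.U at the point r, summed over all loops.
    """
    qx = np.empty(n_samples)
    qz = np.empty(n_samples)
    for i in range(n_samples):
        loops = sample_loops()                        # new dislocation realization
        x, y = np.random.uniform(-1.0, 1.0, size=2)   # lateral position (model units)
        z = np.random.uniform(0.0, t)                 # depth, uniform on [0, t]
        g = dQU_grad(loops, Q, np.array([x, y, z]))
        qx[i], qz[i] = -g[0], -g[2]
    return np.histogram2d(qx, qz, bins=bins, density=True)
```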
This Monte Carlo calculation is ideally suited to parallel computing as each realization of the random dislocation distribution can be computed independently and the partial sums obtained on different processors can be added at the end. We use the coarray extension to Fortran, which was added to the language standard in 2008. In practice, the parallel computations require only a few lines of code to be modified and are performed on 128 cores without any loss of computational efficiency.
Monte Carlo simulations are performed on an Epyc(tm) 7763 compute server. Diffraction profiles and maps are typically computed in less than 1 minute with sufficient accuracy to reveal the features of the intensity distribution. Each of the curves and maps presented below took several hours to reduce the statistical noise. As the statistical error decreases as \(1/\sqrt{N}\), where \(N\) is the number of repetitions, the one-minute runs are only an order of magnitude less accurate in intensity. The computation time can be reduced by choosing larger steps in the angles in the curves and wave vectors in the maps. On the other hand, most of the computation time is the calculation of the dislocation displacements by analytical formulae presented in the Supplementary Information, which leaves very little room for improvement. The calculation requires memory for an array of the calculated intensity and an array of the coordinates of the dislocation in an actual realization of their distribution, which together do not exceed several megabytes per core.
## III Results
Let us consider the X-ray diffraction from dislocation half-loops with edge threading arms. We assume a threading dislocation density \(\rho_{\text{T}}=1\times 10^{10}\,\text{cm}^{-2}\) and a mean length of the misfit segments \(L=1\,\text{\SIUnitSymbolMicro m}\). Figure 2(a) shows a transformation of the \(11\bar{2}4\) reciprocal space maps with increasing film thickness. For a thickness \(t=0.05\,\text{\SIUnitSymbolMicro m}\), which is small compared to the misfit segment length, the misfit dislocations dominate the diffraction. The reciprocal space map has the same features as that of infinitely long misfit dislocations [8]. It is extended in the direction almost perpendicular to the direction of the diffraction vector indicated by an arrow in the figure (these directions need not be exactly perpendicular to each other since this is not required by symmetry). In the opposite limit, where the thickness \(t=5\,\text{\SIUnitSymbolMicro m}\) is large compared to the misfit segment length, the diffraction is dominated by threading dislocation arms. Since threading dislocations are parallel straight lines in real space, their diffraction pattern in the reciprocal space is a disc perpendicular to the dislocation line [10; 11]. A section of the disc through the scattering plane gives the horizontal streak in the map. The maps in Fig. 2(a) show a gradual transition from one limit to the other. At thickness \(t=0.2\,\text{\SIUnitSymbolMicro m}\), five times smaller than the misfit segment length, the diffraction pattern already differs from that for misfit dislocations. At thickness \(t=5\,\text{\SIUnitSymbolMicro m}\), five times larger than the misfit segment length, there is still a finite width of the intensity spot in the \(q_{z}\) direction.
Figure 2(b) shows diffraction profiles in skew geometry [25; 9; 26] for the same film thicknesses and for three reflections, a symmetric 0002 reflection (left), a slightly asymmetric \(1\bar{1}04\) reflection (middle), and a highly asymmetric \(12\bar{3}1\) reflection (right). The intensities calculated by the Monte Carlo method are seen as noisy lines, while smooth lines of the same colors are the fits discussed below. Let us start the analysis with the symmetric reflection. Since straight edge dislocations in an infinite medium produce strain only in the plane normal to the dislocation line, it is expected that edge threading dislocations do not cause any broadening of the symmetric reflections. However, the plot in Fig. 2(b) shows that the total effect of the strain field of the misfit segment and the strain due to stress relaxation at the free surface of the threading segments of the half-loop give rise to a diffraction peak broadening even at a thickness of \(5\,\text{\SIUnitSymbolMicro m}\).
In the usual treatment of the broadening of the symmetric reflections as a manifestation of the screw dislocations, this broadening would be interpreted as a density of screw dislocations. The smooth lines in the plots of Fig. 2(b) are the fits proposed in Ref. [9]. They include two parameters, the dislocation density and the length of the strain field screening (or the dimensionless parameter \(M\)). An apparent density of screw threading dislocations, obtained in the fit of the 0002 reflection for the film thickness of \(5\,\text{\SIUnitSymbolMicro m}\), is \(1.1\times 10^{8}\,\text{cm}^{-2}\). The apparent density of screw dislocations increases with the decreasing film thickness, as can be seen from the plots, and reaches \(6.5\times 10^{9}\,\text{cm}^{-2}\) for a film thickness of \(0.05\,\text{\SIUnitSymbolMicro m}\).
At the opposite extreme of a highly asymmetric \(12\bar{3}1\) reflection in the right plot of Fig. 2(b), the strain due to edge threading arms dominates. The diffraction profiles almost coincide for the film thicknesses of \(0.2\,\text{\SIUnitSymbolMicro m}\) and above. A slightly asymmetric \(1\bar{1}04\) reflection in the middle plot of Fig. 2(b) shows an intermediate behavior: for thicknesses less than \(1\,\text{\SIUnitSymbolMicro m}\), the misfit segment of the half-loop makes a significant contribution.
Figures 2(c) and 2(d) summarize the results of the fits made by the model for infinitely long edge threading dislocations [9]. These fits are represented by smooth lines in Fig. 2(b). A total of 19 diffraction profiles in different asymmetric reflections in skew geometry are calculated by the Monte Carlo method. The apparent density of edge threading dislocations \(\widetilde{\rho}_{\text{T}}\) and the corresponding apparent parameter \(\widetilde{M}\) are obtained
in the fits. The results for different reflections are compared by plotting these apparent parameters as a function of the angle \(\Psi\) between the diffraction vector and the film surface. \(\Psi=0\) corresponds to diffraction in the surface plane, and \(\Psi=90^{\circ}\) to symmetric reflections. The symmetric reflections are not included in Figs. 2(c) and 2(d) since they have been fitted to screw rather than edge threading dislocations.
The results for the film thickness of \(5\,\mathrm{\SIUnitSymbolMicro m}\) are shown in Figs. 2(c) and 2(d) by full squares, deliberately made larger than the symbols for the other thicknesses, as they come closest to the model of infinite threading dislocations assumed by the fits. The dislocation density obtained in the fit for this film thickness is quite close to the density of \(1\times 10^{10}\,\mathrm{cm}^{-2}\) modeled in the Monte Carlo simulations. This result confirms the consistency between the present Monte Carlo simulations and the fits by the formulae from Ref. [9]. Figure 2(c) shows that as the thickness decreases, the misfit parts of the half-loops make progressively larger contributions. The apparent density of edge dislocations can be 6 times larger than the real density. It can also be seen that the apparent density systematically depends on the inclination angle \(\Psi\) of the reflection: the less asymmetric reflections give a larger apparent density. This dependence can help to recognize the contribution of misfit dislocations.
Figure 2: Monte Carlo calculation of the X-ray diffraction from dislocation half-loops with edge threading arms. Threading dislocation density \(\rho_{\mathrm{T}}=1\times 10^{10}\,\mathrm{cm}^{-2}\), mean length of the misfit segments \(L=1\,\mathrm{\SIUnitSymbolMicro m}\). (a) Reciprocal space maps in \(11\bar{2}4\) reflection for different epitaxial layer thicknesses. The diffraction vector is indicated by an arrow on the left map. (b) Diffraction peak profiles in skew geometry. The noisy lines are Monte Carlo simulations, while the smooth curves are fits that treat the diffraction intensity as due only to threading dislocations. (c) Apparent density of threading dislocations \(\widetilde{\rho_{\mathrm{T}}}\) and (d) apparent values \(\widetilde{M}\) of the Wilkens parameter obtained in these fits. (e) The full width at half maximum of the diffraction profiles (FWHM) of the reflections. \(\Psi\) is the angle between the reflection vector and the crystal surface.
The input value of the parameter \(M\) in the Monte Carlo simulations is \(M=L\rho_{\rm T}^{1/2}=10\). The values obtained in the fit are several times larger and show a large scatter even for the 5 \(\mathrm{\SIUnitSymbolMicro m}\) film thickness, where the threading dislocations dominate. This result is not surprising: as discussed in Ref. [9], the fit does not take into account the orientation factors involved in this parameter. As a result, the accuracy of the dislocation correlation determination is lower than that of the dislocation density determination. As also discussed in Ref. [12], the consideration of these orientation factors is a rather complicated task. On the other hand, it is the dislocation density rather than the dislocation correlations that is of primary interest.
Figure 2(e) shows the full widths at half maxima (FWHMs) of the peaks obtained from the Monte Carlo simulation. The data are shown for the film thicknesses of \(0.05\,\mathrm{\SIUnitSymbolMicro m}\) and \(5\,\mathrm{\SIUnitSymbolMicro m}\). The points from the intermediate thicknesses (not shown) are scattered in between. The FWHMs are used to estimate the dislocation density by a formula that is popular because of its extreme simplicity, \(\rho_{\rm T}=\mathrm{FWHM}^{2}/(4.35b^{2})\)[27]. This formula is used in symmetric or asymmetric reflections with the Burgers vector \(b\) equal to either the \(c\) or \(a\) lattice parameter of GaN to obtain the densities of either screw or edge dislocations. The correct use of this formula for edge dislocations implies the use of twist, i.e., extrapolation of the peak widths in Fig. 2(e) to \(\Psi=0\)[26].
When the threading dislocation arms are long and dominate in the scattering (full squares), the FWHMs of the asymmetric reflections in Fig. 2(e) increase with the increasing inclination of the reflection (the angle \(\Psi\) decreases). The same dependence is observed in experiments [28, 9, 26]. Extrapolation to \(\Psi=0\) gives a "twist" of \(0.3^{\circ}\), which according to the above formula gives a threading dislocation density of \(6\times 10^{9}\,\mathrm{cm}^{-2}\), about half of the threading dislocation density used on input in the Monte Carlo simulations. Thus, this simple formula gives a reasonable estimate of the threading dislocation density, with some underestimation. Further Monte Carlo simulations (not presented here) show that this underestimation is systematic. The reflections for a thin epitaxial film, shown by triangles in Fig. 2(e), give a large scatter of the FWHMs of different reflections and, on average, a similar "twist". Hence, the FWHM based determination of the threading dislocation density gives the same underestimate. The FWHMs of the symmetric reflections, shown by the points at \(\Psi=90^{\circ}\) in Fig. 2(e), depend significantly on the order of the reflections. The \(0002\) reflection would give an apparent density of screw dislocations of \(1\times 10^{7}\,\mathrm{cm}^{-2}\) for the \(5\,\mathrm{\SIUnitSymbolMicro m}\) thick film and \(4\times 10^{9}\,\mathrm{cm}^{-2}\) for the \(0.05\,\mathrm{\SIUnitSymbolMicro m}\) thick film.
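For reference, the twist-to-density conversion quoted above amounts to the following short calculation (values as in the text):

```python
import numpy as np

twist = np.deg2rad(0.3)              # extrapolated twist of 0.3 degrees
b = 0.319e-7                         # Burgers vector a of GaN, in cm
rho_T = twist**2 / (4.35 * b**2)
print(f"rho_T = {rho_T:.1e} cm^-2")  # ~6e9 cm^-2, about half of the input 1e10 cm^-2
```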
Figure 3 shows the reciprocal space maps and the diffraction profiles in symmetric \(0002\) reflection of dislocation half-loops with screw (top) and edge (bottom) threading arms. In both cases the mean length of the misfit segments and the film thickness are taken to be the same, \(L=1\,\mathrm{\SIUnitSymbolMicro m}\) and \(t=1\,\mathrm{\SIUnitSymbolMicro m}\). The dislocation densities differ by an order of magnitude, half-loops with screw threading arms of density \(\rho_{\rm T}=1\times 10^{9}\,\mathrm{cm}^{-2}\) are compared with half-loops with the edge threading arms of density \(\rho_{\rm T}=1\times 10^{10}\,\mathrm{cm}^{-2}\).
The screw threading arms dominate in the diffraction pattern of the respective half-loops, since the displacement due to a screw dislocation is along the diffraction vector. As a result, the diffraction intensity in the map of Fig. 3(a) is extended in the lateral direction, perpendicular to the direction of the screw arms. The scan in the \(q_{x}\) direction in the map, which coincides with the \(\omega\) scan in the symmetric reflection, collects all the diffracted intensity. Figure 3(b) shows that the \(\omega\) scan and the double crystal scan almost coincide and have the expected \(\omega^{-3}\) asymptote.
The edge threading arms contribute to diffraction in a symmetric Bragg reflection only due to the strain resulting from elastic relaxation at the free surface, since the displacement field of the edge threading dislocation in infinite medium is perpendicular to the diffraction vector. The intensity in the reciprocal space map in Fig. 3(c) is mainly due to the misfit segments of the half-loops and extends in both \(q_{x}\) and \(q_{z}\) directions. The intensity in the \(\omega\) scan shown in Fig. 3(d) has an \(\omega^{-4}\) asymptote, while the additional integration in the reciprocal space for the double-crystal scan gives rise to an \(\omega^{-3}\) dependence.
Comparing the double crystal scans in Figs. 3(b) and 3(d), one can see that \(1\times 10^{9}\,\mathrm{cm}^{-2}\) half-loops with screw threading arms and \(1\times 10^{10}\,\mathrm{cm}^{-2}\) half-loops with edge threading arms give very close diffraction curves. Thus, when dislocation half-loops with comparable lengths of the misfit and threading segments are present, the common assumption that the intensity in symmetric Bragg reflections is due to screw threading dislocations and the intensity in asymmetric reflections is due to edge threading dislocations, is no longer valid.
Figure 3: Reciprocal space maps in a symmetric Bragg reflection \(0002\) from dislocation half-loops with (a) screw threading arms, \(\rho_{\rm T}=1\times 10^{9}\,\mathrm{cm}^{-2}\), and (c) edge threading arms, \(\rho_{\rm T}=1\times 10^{10}\,\mathrm{cm}^{-2}\). The mean length of the misfit segments is \(L=1\,\mathrm{\SIUnitSymbolMicro m}\), the film thickness is \(t=1\,\mathrm{\SIUnitSymbolMicro m}\). The \(\omega\) and \(\theta\)-\(2\theta\) triple-crystal scans through the maps, as well as the double-crystal scans, are shown in (b) and (d) in log-log scale.
## IV Conclusions
The use of the displacement field of an angular dislocation allows the construction of arbitrary dislocation arrangements in epitaxial films, in particular dislocation half-loops. The X-ray diffraction of an epitaxial film with an arbitrary density of dislocation half-loops can be calculated by the Monte Carlo method. For large dislocation densities and significant broadening of the diffraction peaks, the diffraction intensity can be calculated as the probability density of the corresponding strain components, in the Stokes-Wilson approximation. The use of this approximation allows the calculation time to be reduced by several orders of magnitude.
The shape of the double-crystal diffraction curves for half-loops is the same as that for threading dislocations. When both misfit and threading dislocations are present, a joint analysis of the double crystal diffraction curves in skew geometry and reciprocal space maps in coplanar geometry is required to distinguish their contributions.
X-ray diffraction from dislocation half-loops is controlled by the ratio of the total lengths of the misfit and the threading segments. A significant deviation from the scattering pattern of misfit dislocations is already seen in the reciprocal space maps when this ratio is 5:1, and the opposite limit of threading dislocations is not yet reached when this ratio is 1:5. An apparent density of threading dislocations obtained by fits to the formula derived for threading dislocations alone is up to 6 times larger than the real density of the threading segments. The apparent density obtained in this way scatters significantly depending on the reflection chosen. This scatter in density can be used to distinguish between half-loops and threading dislocations alone. Another indicator that may help to distinguish between these two cases is the dependence of the FWHMs of the reflections on the angle \(\Psi\) between the reflection vector and the surface. For threading dislocations the FWHMs increase as \(\Psi\) decreases. When misfit dislocations dominate, the FWHMs show a larger scatter without a systematic \(\Psi\) dependence.
For the half-loops with comparable total lengths of the misfit and threading segments, both the half-loops with edge and screw threading arms contribute to the diffraction curves in symmetric Bragg reflections. The contribution of the half-loops with screw threading arms is an order of magnitude larger for comparable dislocation densities. However, since the densities of the screw threading dislocations in GaN films grown by molecular beam epitaxy are an order of magnitude smaller than those of edge dislocations, the contributions of both dislocation types are comparable. In this case a clear distinction between the dislocation types can be seen in the reciprocal space maps: the diffraction spot for half-loops with edge threading arms is roundish, while for those with screw arms it is laterally elongated.
###### Acknowledgements.
The author thanks Oliver Brandt for providing access to and maintaining the compute server that was used for the Monte Carlo simulations in this study, as well as for many useful discussions and a critical reading of the manuscript.
|
2307.12433
|
Localized Magnetic States of Fe, Co, and Ni Impurities on Alkali Metal
Films
|
X-ray absorption spectroscopy (XAS) and x-ray magnetic circular dichroism
(XMCD) have been used to study transition metal impurities on K and Na films.
The multiplet structure of the XAS spectra indicates that Fe, Co, and Ni have
localized atomic ground states with predominantly d7, d8, and d9 character,
respectively. XMCD shows that the localized impurity states possess large,
atomiclike, magnetic orbital moments that are progressively quenched as
clusters are formed. Ni impurities on Na films are found to be nonmagnetic,
with a strongly increased d10 character of the impurity state. The results show
that the high magnetic moments of transition metals in alkali hosts originate
from electron localization.
|
P. Gambardella, S. S. Dhesi, Sandra Gardonio, Cesare Grazioli, P. Ohresser, Carlo Carbone
|
2023-07-23T21:09:03Z
|
http://arxiv.org/abs/2307.12433v1
|
# Localized Magnetic States of Fe, Co, and Ni Impurities on Alkali Metal Films
###### Abstract
X-ray absorption spectroscopy (XAS) and x-ray magnetic circular dichroism (XMCD) have been used to study transition metal impurities on K and Na films. The multiplet structure of the XAS spectra indicates that Fe, Co, and Ni have localized atomic ground states with predominantly \(d^{7}\), \(d^{8}\), and \(d^{9}\) character, respectively. XMCD shows that the localized impurity states possess large, atomiclike, magnetic orbital moments that are progressively quenched as clusters are formed. Ni impurities on Na films are found to be nonmagnetic, with a strongly increased \(d^{10}\) character of the impurity state. The results show that the high magnetic moments of transition metals in alkali hosts originate from electron localization.
The ground state of an isolated transition metal atom possesses large spin and orbital magnetic moments given by Hund's rules. In the solid state, however, hybridization and crystal field effects strongly reduce both these magnetic moments. Dilute transition metal impurities in nonmagnetic hosts can be viewed as a bridge between the atomic and solid state and have consequently attracted a great deal of attention ever since the work of Friedel [1]. The extent to which the transition metal \(d\) states interact with the valence bands of the host [1,2] directly influences macroscopic properties resulting in anomalies in the electronic transport, magnetic susceptibility, and specific heat. In this respect, alkali metals, with their simple electronic structure, are considered to be ideal hosts for studying the interaction between localized \(3d\) states and a free-electron Fermi gas [3, 4, 5, 6, 7, 8, 9, 10, 11].
More than a decade ago, Riegel _et al._ and Kowallik _et al._ [3, 4] showed that isolated Fe and Ni impurities implanted in K, Rb, and Cs hosts yield surprisingly large magnetic susceptibilities. Localized Fe \(3d^{6}\) and Ni \(3d^{9}\) configurations giving total magnetic moments of 6.7 and \(3.5\mu_{B}\), respectively, were indicated as the origin of these effects. Recently, however, anomalous Hall resistance measurements [5] of Fe and Co impurities on Cs thin films reported large magnetic moments which were interpreted in terms of polarized Cs conduction electrons, analogous to the mechanism giving rise to giant magnetic moments in CoPd and FePd alloys [12]. This interpretation was subsequently questioned [6] and not reproduced by first-principles band structure calculations [10, 11]. The ground state electronic structure and magnetism of transition metal impurities in alkali hosts therefore remains an open question, in particular with respect to the degree of \(3d\) localization, the local moment at the impurity sites, and the orbital contribution to the total magnetic moment. X-ray absorption spectroscopy (XAS) combined with x-ray magnetic circular dichroism (XMCD) provides an ideal approach to resolve these issues. The line shape of the XAS spectra is a fingerprint for the \(d\)-state configuration, whereas XMCD yields the spin and orbital moments via straightforward sum rules [13, 14] in an element-specific manner.
In this Letter, XAS and XMCD are used to probe the local electronic and magnetic structure of Fe, Co, and Ni impurities deposited on K and Na films. XAS convincingly shows that transition metals form highly localized atomic configurations in alkali metal hosts while sum rule analysis of the XMCD yields orbital magnetic moments comparable to the atomic limit. From the multiplet structure of the XAS at the Fe, Co, and Ni \(L_{2,3}\) edges the atomic configurations are identified as \(d^{7}\), \(d^{8}\), and \(d^{9}\), respectively. The quenching of the Ni magnetic moment in Na films is shown to be related to an increased \(d^{10}\) weight with respect to the \(d^{9}\) configuration. Moreover, the present measurements demonstrate that XMCD has a significant potential for the study of dilute systems with impurity concentration as low as \(3\,\times\,10^{12}\) atoms cm\({}^{-2}\).
The experiments were performed at beam line ID8 of the European Synchrotron Radiation Facility in Grenoble. K and Na films were evaporated onto a clean Cu(111) substrate; transition metals were subsequently deposited in minute quantities, 0.002\(-\)0.015 monolayers (1 ML = 1.6 \(\times\,10^{15}\) atoms cm\({}^{-2}\)), at \(T=10\) K in order to obtain isolated impurities. The coverage of the transition metals was calibrated by measuring the in-plane remanent magnetization on Cu(111) at room temperature, which has a sharp onset at 2 ML [15]. The pressure measured during metal evaporation was \(1.0\,\times\,10^{-10}\) mbar. Residual gas contamination was always lower than that detectable by O \(K\)-edge spectra recorded before and after metal evaporation. XAS at the \(L_{2,3}\) edges was performed in total electron yield mode using circularly polarized (**P**) light with 99% polarization in magnetic fields up to \(\pm 7\) T with the sample at \(T=10\) K. XMCD was recorded by switching both **P** and the sample magnetization (**M**).
Figure 1 shows the XAS spectra recorded for Fe, Co, and Ni impurities deposited on a K film for parallel (solid lines) and antiparallel (dashed lines) alignment of \(\mathbf{P}\) with \(\mathbf{M}\). For clarity, these spectra are referred to as \(I_{+}^{\mathrm{exp}}\) and \(I_{-}^{\mathrm{exp}}\), respectively. The pairs of XAS spectra were normalized to the incident photon flux and to each other at the preedge. A common linear baseline was subtracted from each set of spectra after normalization. The XMCD spectra, given by \(I_{+}^{\mathrm{exp}}-I_{-}^{\mathrm{exp}}\), are also shown for each case. For each transition metal there are several notable features. The XAS spectra present narrow multiplet structures which are not observed in the corresponding bulk metal spectra [16, 17]. This is a clear indication of 3\(d\) localization on the transition metal impurities. In addition, the magnitude of the XMCD is significantly larger compared to the bulk [16, 17] and compared to that reported for low-dimensional structures, where the size of the magnetic moments is increased due to the reduced coordination [18]. Further, in the case of Fe and Co, the XMCD at the \(L_{2}\) edge has the opposite sign with respect to bulk spectra, whereas it is zero for Ni.
To determine the electronic configurations of the localized states, the multiplet structure of the XAS and XMCD spectra is compared to atomic multiplet calculations reported by van der Laan and Thole [19]. The insets of Figs. 1(a) and 1(b) show the XAS and XMCD for the 3\(d^{n}\to 2p^{5}3d^{n+1}\) absorption thresholds calculated for \(d^{7}\) and \(d^{8}\) configurations, respectively, in zero crystal field with an atomic value for the spin-orbit splitting. The calculated spectra are labeled \(I_{+}^{\mathrm{th}}\) and \(I_{-}^{\mathrm{th}}\), for comparison with the corresponding experimental spectra. Clearly there is a very close match between the theoretical and experimental XAS spectra of Figs. 1(a) and 1(b). In the case of \(I_{-}^{\mathrm{exp}}\) (Fe), five peaks can be distinguished and one additional peak appears for \(I_{+}^{\mathrm{exp}}\) (Fe). All the structure in the XAS spectra of Fig. 1(a) is reproduced in the calculated spectra for a \(d^{7}\) ion shown in the inset of Fig. 1(a). However, one significant difference is that all the higher energy structure of \(I_{-}^{\mathrm{th}}\) (Fe) is absent for \(I_{+}^{\mathrm{th}}\) (Fe). The fact that this structure appears in \(I_{+}^{\mathrm{exp}}\) (Fe) is mainly due to incomplete alignment of the Fe magnetization with the applied field. Magnetization loops (not shown) indicate that \(\mathbf{M}\) is not saturated at 7 T. Other minor differences between the experimental and calculated line shapes might be ascribed to a small fraction of the spectral weight arising from a mixing of different initial state configurations. Similar arguments can be used to understand the \(d^{8}\) ground state for the Co case shown in Fig. 1(b). The line shape of \(I_{-}^{\mathrm{exp}}\) (Co) corresponds to the calculated [19] multiplet structures of a \(d^{8}\) ground state shown as \(I_{+}^{\mathrm{th}}\) (Co) in the inset of Fig. 1(b). For a \(d^{8}\) ground state \(I_{+}^{\mathrm{th}}\) is zero in the atomic limit; accordingly, we find that all peaks have a strongly reduced intensity in \(I_{+}^{\mathrm{exp}}\) (Co) compared to \(I_{-}^{\mathrm{exp}}\) (Co). The Ni case is very simple since the single peak at the \(L_{3}\) edge indicates that the only states which are excited in the absorption process are those arising from the \(j=3/2\) initial state. Dipole selection rules, in this case, imply that the empty \(d\) states are of purely \(j=5/2\) character indicating a \(d^{9}\) atomic ground state with \(S=1/2\), \(L=2\), and \(J=5/2\)[20].
In the present study, XAS combined with XMCD clearly demonstrates a predominant \(d^{7}\) configuration for Fe adatoms on K. Previous studies [3, 8] were unable to discriminate between a Fe \(d^{6}\) or \(d^{7}\) configuration since the size of the effective moment is the same for each configuration. One could argue that differences in the impurity
Figure 1: XAS spectra over the \(L_{2,3}\) edges recorded with \(\mathbf{P}\) parallel (solid line) and antiparallel (dashed line) to \(\mathbf{M}\) for (a) 0.015 ML Fe, (b) 0.015 ML Co, and (c) 0.004 ML Ni deposited on K films. The insets of (a) and (b) show the corresponding spectra calculated [19] for \(d^{7}\) and \(d^{8}\) atomic configurations, respectively (the energy scale has been renormalized to match the experimental \(L_{3}\)-\(L_{2}\) separation). In each case the resulting XMCD (\(I_{+}-I_{-}\)) is also shown.
coordination might strongly affect the ground state configurations of Fe, since we have investigated Fe impurities on K films rather than _in_ the films. These effects, however, would rather favor an additional charge transfer from the electropositive host to implanted impurities, thus leading to an increase of the \(d\) electron count. For Co, a \(d^{7}\) configuration has been suggested [6] in order to interpret the anomalous Hall resistance measurements [5] on alkali films. However, here we present strong evidence for a Co \(d^{8}\) ground state, in very good agreement with recent Hubbard-modified local spin density calculations for Co impurities in Cs films [10]. In the case of Ni, the \(d^{9}\) configuration agrees with the interpretation of perturbed \(\gamma\)-ray distribution data for Ni impurities in K, Rb, and Cs hosts [4]. The atomic configurations, the ground state \(S\), \(L\), \(J\) values, and the predicted atomic magnetic moments for Fe, Co, and Ni are summarized in Table 1.
The expectation values of the orbital moment \(\langle L_{z}\rangle\), spin moment \(\langle S_{z}\rangle\), and magnetic dipole term \(\langle T_{z}\rangle\) are related by the sum rules [13, 14] to the integrated XMCD signal. Here we use the ratio \(R\), which is given by
\[R\,\equiv\,\frac{\langle L_{z}\rangle}{2\langle S_{z}\rangle\,+\,7\langle T_{z}\rangle}\,=\,\frac{2}{3}\,\,\frac{\Delta A_{3}\,+\,\Delta A_{2}}{\Delta A_{3}\,-\,2\Delta A_{2}}\,, \tag{1}\]
where \(\Delta A_{2}\) and \(\Delta A_{3}\) are the integrated XMCD signals over the \(L_{2,3}\) edges, respectively. Table 1 compares the theoretical (\(R_{\rm th}\)) and experimental (\(R_{\rm exp}\)) values of \(R\). We find a very good agreement between \(R_{\rm exp}\) deduced from the XMCD measurements and \(R_{\rm th}\) calculated in the atomic limit for Co and Ni. For Fe there is a discrepancy which could be due to finite temperature effects [20]; we note that \(R\,\approx\,1\) is also found for the calculated \(d^{7}\) spectra [19]. The application of the XMCD sum rules in the present context is particularly relevant, since their derivation stems from a localized atomic model [13, 14]. So far, their validity has only been checked for bulk magnetic systems [16]. In particular, for atomic Fe and Ni, we note the importance of \(\langle T_{z}\rangle\) which is generally assumed to be negligible in bulk \(3d\) systems [16].
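As an added consistency check (our own arithmetic, using the Hund's-rules ground-state values listed in Table 1, with \(\langle L_{z}\rangle=L\), \(\langle S_{z}\rangle=S\), and \(\langle T_{z}\rangle=T\) taken from the table), the Ni \(d^{9}\) entries follow directly from the Landé factor and from the ratio defined in Eq. (1):

\[
g_{J}=1+\frac{J(J+1)+S(S+1)-L(L+1)}{2J(J+1)}=1+\frac{\tfrac{35}{4}+\tfrac{3}{4}-6}{2\cdot\tfrac{35}{4}}=\tfrac{6}{5},\qquad
m=g_{J}\,\mu_{B}\sqrt{J(J+1)}=\tfrac{6}{5}\sqrt{\tfrac{35}{4}}\,\mu_{B}\approx 3.55\,\mu_{B},
\]
\[
R_{\rm th}=\frac{\langle L_{z}\rangle}{2\langle S_{z}\rangle+7\langle T_{z}\rangle}=\frac{2}{2\cdot\tfrac{1}{2}+7\cdot\tfrac{2}{7}}=\frac{2}{3},
\]

in agreement with the corresponding entries of Table 1; the Fe and Co rows are obtained in the same way.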
Figure 2 shows \(R_{\rm exp}\) determined from the XMCD spectra as a function of increasing Co and Ni impurity coverage on K and Na films. For Co/K, Co/Na, and Ni/K, \(R_{\rm exp}\) begins to decrease above 0.015 ML due to the formation of clusters by statistical deposition on neighboring adsorption sites. Monte Carlo simulations on a periodic lattice show that at 0.005 (0.015) ML the mean cluster size is 1.06 (1.20) atoms, whereas at 0.15 ML it increases to 8.2 atoms. With growing cluster size, \(d\)-state hybridization quenches the orbital moment. In parallel, the multiplet structure in the XAS spectra is broadened and eventually resembles that of the bulk metal. The Co XMCD shown in Fig. 3 clearly reflects this trend. The \(L_{3}\) peak at 779 eV broadens, whereas the smaller satellite at 781 eV becomes
Figure 3: XMCD spectra recorded for Co on K showing the line shape changes as the atomic Co forms clusters with increased coverage.
Figure 2: \(R_{\rm exp}\) as a function of coverage determined for (a) Co on K and Na and for (b) Ni on K and Na.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline State & \(S\) & \(T\) & \(L\) & \(J\) & \(m\) & \(R_{\rm th}\) & \(R_{\rm exp}\) \\ \hline Fe \(d^{7}\) & \(3/2\) & \(-1/7\) & \(3\) & \(9/2\) & \(6.63\,\mu_{B}\) & \(3/2\) & 0.95 \(\,\pm\,\)0.05 \\ Co \(d^{8}\) & 1 & \(1/7\) & \(3\) & \(4\) & \(5.59\,\mu_{B}\) & 1 & 0.89 \(\,\pm\,\)0.04 \\ Ni \(d^{9}\) & \(1/2\) & \(2/7\) & \(2\) & \(5/2\) & \(3.55\,\mu_{B}\) & \(2/3\) & 0.67 \(\,\pm\,\)0.05 \\ \hline \end{tabular}
\end{table}
Table 1: Calculated Hund’s rules ground state values of the isotropic spin moment (\(S\)), dipole spin moment (\(T\)), orbital moment (\(L\)), total angular momentum (\(J\)), and magnetic moment \(m=g_{J}\,\mu_{B}\sqrt{J(J+1)}\) corresponding to the \(3d\) configurations identified on K films. The theoretical and experimental \(R\) values are also compared for 0.015 ML Fe, 0.015 ML Co, and 0.004 ML Ni.
less defined and finally disappears. At the \(L_{2}\) edge, the effects of hybridization are even more spectacular. The XMCD is negative first, as predicted by an atomic model [19], and gradually becomes increasingly positive until it resembles the XMCD for bulk Co.
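A toy version of the statistical-deposition estimate quoted above might look like the sketch below (our own illustration: the independent-occupation approximation, the lattice size, and the nearest-neighbour cluster criterion are assumptions, so the numbers it prints are not expected to reproduce the quoted mean cluster sizes, which depend on the actual adsorption-site geometry and neighbour criterion used in the simulations).

```python
import numpy as np
from scipy import ndimage

def mean_cluster_size(coverage, lattice=2000, seed=0):
    """Statistical deposition on a square lattice: occupy sites at random
    with probability `coverage`, group nearest-neighbour occupied sites
    into clusters, and return the mean number of atoms per cluster."""
    rng = np.random.default_rng(seed)
    occupied = rng.random((lattice, lattice)) < coverage
    _, n_clusters = ndimage.label(occupied)   # 4-connectivity by default
    return occupied.sum() / max(n_clusters, 1)

for theta in (0.005, 0.015, 0.15):
    print(f"{theta:.3f} ML -> {mean_cluster_size(theta):.2f} atoms per cluster")
```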
Previous investigations [3, 4, 7] reported deviations from a purely atomic magnetic behavior as the host moves up from K to Li in the alkali series in the periodic table. The observed reduction of the Co/Na \(R_{\rm exp}\) shown in Fig. 2 compared to Co/K indicates a reduction of the orbital magnetic moment due to the increased hybridization of the \(3d\) states with the host. More striking, Ni impurities on Na do not present any XMCD up to a coverage of \(\sim\)0.012 ML implying a nonmagnetic ground state. This effect has been explained [9] in terms of a broadening of the impurity state associated with the increased electron density of Na hosts compared to heavier alkali elements. Alternative explanations such as Kondo-type demagnetization have been excluded in earlier work [4]. Here we argue that the Ni impurity states in Na assume a significant \(d^{10}\) character, and that this may have a direct bearing on the quenching of the Ni moment. Figure 4(b) compares the isotropic (\(I_{+}^{\rm exp}\,+\,I_{-}^{\rm exp}\)) XAS spectra for 0.004 ML of Ni on Na (solid line) and on K (dotted line). The weak \(L_{2}\) intensity in the Ni/Na spectra indicates the presence of a \(j=3/2\) component in the Ni final states likely due to moderate hybridization with the host. The intensity of the \(L_{3}\) edge due to the \(3d^{9}\to 2p^{5}3d^{10}\) transition is reduced by more than 50% for Ni/Na compared to Ni/K. The width of the line, however, although larger compared to Ni/K (0.80 vs 0.52 eV FWHM) is still much narrower relative to bulk Ni spectra (\(\sim\)1.7 eV [17]), indicating that Ni valence states can still be considered as predominantly localized. These observations suggest the presence of two resonant \(d^{9}\) and \(d^{10}\) configurations of ionic type [21], where fast incoherent electron hopping (i.e., charge fluctuation) forbids the existence of a magnetic moment.
In conclusion, \(3d\) transition metal impurities on alkali films display localized atomic configurations with fully unquenched orbital magnetic moments. The present results conclusively demonstrate that the high moments in alkali systems originate from the localization of the \(3d\) states.
We thank K. Larsson for his skilled technical assistance.
|
2310.17184
|
Multifunctional imaging enabled by optical bound states in the continuum
with broken symmetry
|
For photonic crystal slab (PCS) structures, bound states in the continuum
(BICs) and circularly polarized states (dubbed C-points) are important
topological polarization singularities in momentum-space and have attracted
burgeoning attention due to their novel topological and optical properties. In
our work, the evolution of polarization singularities from BICs to C-points is
achieved by breaking the in-plane C2 symmetry of a PCS structure of a square
lattice with C4v symmetry. Correspondingly, a BIC is split into two C-points
with opposite chirality, incurring distinct optical transmission responses with
the incidence of right or left circular polarization (RCP or LCP). Harnessing
such chirality selectivity of the C-points, we propose a multifunctional
imaging system by integrating the designed PCS into a conventional 4-f imaging
system, to realize both the edge imaging and conventional bright-field imaging,
determined by the circular polarization state of the light source. In addition
to multifunctional imaging, our system also provides a vivid picture about the
evolution of the PCS platforms' singularities.
|
Jiale Chen, Zhao-Xian Chen, Jun-Long Kou, Yan-Qing Lu
|
2023-10-26T06:32:28Z
|
http://arxiv.org/abs/2310.17184v2
|
# Multifunctional imaging enabled by optical bound states in the continuum with broken symmetry
###### Abstract
For photonic crystal slab (PCS) structures, bound states in the continuum (BICs) and circularly polarized states (dubbed C-points) are important topological polarization singularities in momentum-space and have attracted burgeoning attention due to their novel topological and optical properties. In our work, the evolution of polarization singularities from BICs to C-points is achieved by breaking the in-plane \(C_{2}\) symmetry of a PCS structure of a square lattice with \(C_{4v}\) symmetry. Correspondingly, a BIC is split into two C-points with opposite chirality, incurring distinct optical transmission responses with the incidence of right or left circular polarization (RCP or LCP). Harnessing such chirality selectivity of the C-points, we propose a multifunctional imaging system by integrating the designed PCS into a conventional 4-\(f\) imaging system, to realize both the edge imaging and conventional bright-field imaging, determined by the circular polarization state of the light source. In addition to multifunctional imaging, our system also provides a vivid picture about the evolution of the PCS platforms' singularities.
Photonic crystals, polarization singularities, BIC, C-point, chirality, edge detection, bright-field imaging, multifunctional imaging.
## 1 Introduction
Light beams contain multidimensional information, such as wavelength, amplitude, phase, polarization, and orbital angular momentum [1]. Among them, the polarization properties of light waves have attracted enormous interest and provide a significant degree of freedom for optical manipulation with broadened applications in optical microscopy, multidimensional perceptions [2], optical communications [3], and so on. Recently, the PCS has been a popular platform for polarization-driven applications for its macroscopic size, ease of fabrication and simulations [4], and most importantly, its capability to realize all polarized states for full coverage on the Poincaré sphere [5]. Besides, the complicated far-field polarization states with different in-plane wave vectors \(k_{//}=(k_{x},\ k_{y})\) in PCS structures can also be mapped onto the momentum space (k-space) and form the so-called polarization map, providing a deeper understanding of light from a topological perspective [6]. On the polarization map, there are several kinds of polarization singularities, such as centers of polarization vortices (V-points), lines of linear polarized states (L-lines), and C-points [7]. Those polarization singularities usually have unique physical properties potentially beneficial for various applications in modulating the radiation, polarization, phase, and transmission behaviors [8].
In the perspective of topological optics, singularities in the polarization map can be manipulated by symmetry operation to produce rich polarization field configurations in the momentum space [9]. Usually, symmetry-protected (SP) BICs emerge at the center of the Brillouin zone, namely the \(\Gamma\) point, for PCSs with \(C_{4v}\) symmetry [5] or \(C_{6v}\) symmetry [10] and appear as V-points in the polarization map. When perturbation is applied (e.g., broken symmetry) to introduce multipolar moments parallel to the sample plane [11], polarization fields in the momentum space change interestingly, accompanied by the destruction of SP-BICs. For example, the generation of six C-points from one V-point was numerically and experimentally verified with the symmetry of the system broken from \(C_{6v}\) to \(C_{3v}\)[10]; two-paired C-points were generated from one V-point with broken symmetry from \(C_{4v}\) to \(C_{2v}\)[5]. The evolutions of BICs and C-points are governed by the conservation law of topological charges [12]. Specifically, the V-point with integer charges splits into two C-points with half-integer charges. Accordingly, the nonradiative bound state changes into a resonance that can be excited by external light. The polarization singularities serve as a bridge that connects the radiation characteristics with the system's symmetry and have led to numerous applications in different areas. For instance, BICs (or V-points) in PCS platforms have been intensively investigated for a great deal of applications, such as BIC-based lasers [13, 14], enhanced nonlinear effects [15, 16], enhanced light-matter interactions [17, 18], and BIC-based sensing [19, 20], optical analog computing [21], etc. However, there are limited applications and devices enabled by the C-points except for a few demonstrations on chiral emission [22, 23], unidirectional guided resonances [24], arbitrary polarization conversion via PCSs [25], and PCS-induced polarization-dependent lateral beam shifts [26]. Recent work [5, 23] mentioned the chiral selectivity of C-points, while the underlying physics and great potential in applications of C-points are still far from being well explored. Thus, we propose a multifunctional imaging system based on C-points in the PCS, to promote the understanding of C-points and broaden the applications of C-points in the field of optical image processing.
In this Letter, we show that two-paired C-points with opposite chirality can emerge from the decomposition of a BIC in the PCS with broken symmetry. Due to the chirality selectivity of C-points in the proposed PCS device, two sets of optical transfer functions (OTFs) around the C-points are theoretically verified and are proved to be switchable under RCP or LCP incidence. We further demonstrate that our multifunctional imaging system, by integrating the PCS into a conventional 4-\(f\) system [27], is capable of providing two different functions: bright-field imaging and edge-enhanced imaging [21, 28-30]. Notably, the proposed PCS operates in the transmission mode and works in the momentum space, providing nonlocal OTFs [31] for the two functions without requiring any collimation elements, making it more compatible with image processing applications. It is also worthy to mention that this nanophotonic system shows decent edge detection effects in both horizontal and vertical directions with wavelength-scale high resolution (1.10 \(\upmu\)m). In all cases, our work provides a new path for the on-purpose design of different topological polarization singularities and broadens the applications of C-points in the image processing area.
## 2 Design and Principles
We start our discussions on the design of the proposed free-standing hole-type PCS, which supports transformable topological polarization singularities via symmetry operations [9]. For example, BICs in PCSs are known to be feasible for providing C-points, which are of great significance in solving problems of chiral photonics [32, 33]. As schematically shown in Fig. 1a, by breaking the in-plane \(C_{2}\) symmetry of a PCS with \(C_{4v}\) symmetry, two paired C-points with opposite chirality emerge from the destruction of the BIC. With this approach, we can follow a more intuitive way to design the desired PCS, compared with other designing approaches based on counterintuitive optimization methods [34]. The PCS consists of a square lattice of Si\({}_{3}\)N\({}_{4}\) with a refraction index of \(n=2.02\), and a period of \(a=450\) nm. The thickness of the slab \(d\) and the geometry (shape and dimension) of air holes offer adequate degrees of
freedom to modulate the desired functions for the PCS device. According to temporal coupled mode theory [35, 36], due to the Fano interference between the resonance-assisted and background transmissions [37-40], namely the indirect and direct pathways in the photonic crystals, the total transmission usually exhibits an asymmetric Fano-like profile, which is adverse for edge detection. However, transmission spectra with a Lorentz line shape are well suited for the perfect edge detection effect [41] needed in our multifunctional imaging system. To realize such a symmetric and Lorentz-like transmission profile around the C-points, here we set \(d=200\) nm and optimize the duty ratio [36] (defined as the ratio of the air hole's area to the area of the square unit cell) to \(r=0.30\) based on effective medium theory [42] to get the background transmission as \(t_{\text{d}}=1\), rendering a Lorentz-like profile at the resonant frequency [37]. Due to the chirality selectivity enabled by the C-points, the Lorentz-like profile turns into full transmission spectra when switching the helicity of incident light. Thus, another function, i.e., bright field imaging, can also be realized in our multifunctional imaging system.
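For reference, in the simplest single-resonance coupled-mode picture (a sketch added here for clarity; \(\omega_{0}\) and the radiative rate \(\gamma\) are generic symbols rather than fitted values), a unit background transmission interfering with one resonance gives

\[
t(\omega)=\frac{i(\omega-\omega_{0})}{i(\omega-\omega_{0})+\gamma},\qquad
|t(\omega)|^{2}=\frac{(\omega-\omega_{0})^{2}}{(\omega-\omega_{0})^{2}+\gamma^{2}},
\]

i.e. a symmetric Lorentzian dip that vanishes exactly on resonance, which is the line shape exploited for edge detection below.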
In order to determine the structural parameters of our PCS device and the physical peculiarity of different polarization singularities, eigenmodes analysis has been applied to simulate the transverse electric (TE) and transverse magnetic (TM) band structures and corresponding Quality factors \(Qs\) of the PCS with or without \(C_{4v}\) symmetry. For the PCS with \(C_{4v}\) symmetry, the air holes are square (see inset of Fig. 1b) with side length \(L_{0}=247\) nm (the duty ratio \(r=0.30\)). At the \(\Gamma\) point, the SP-BICs are identified as nonradiative states with diverging \(Qs\)
Figure 1: Schematic for breaking the BIC point into two-paired C-points. **(a)** By breaking the in-plane \(C_{2}\) symmetry (changing square air holes to isosceles triangle air holes), a SP-BIC splits into two C-points with opposite chirality. **(b-c)** The simulated bands of TE and TM modes of the PCS following the \(X\)’-\(\Gamma\)-\(X\) direction, the inset of which shows the schematic of the proposed PCS and the first Brillouin zone, respectively. **(d)** The \(Q\) factors of bands of the PCS before (upper panel) and after (lower panel) symmetry breaking. **(e-f)** The extracted polarization field map of band TE\({}_{2}\) before (**e**) and after (**f**) symmetry breaking.
coexisting with extended modes within the light cone. According to the band structure in Fig. 1b and the \(Q\) factors in the upper panel of Fig. 1d, we can clearly identify four SP-BICs located in four non-degenerate bands TE\({}_{2}\), TE\({}_{3}\), TM\({}_{2}\), TM\({}_{5}\) in the frequency range of interest: 0.62-0.80\(f_{0}\) (namely 562-726 nm in the visible spectrum, where \(f_{0}=c/a=666.7\) THz, \(c\) is the speed of light in the vacuum). It is worth noting that the BICs cannot be excited by the normal incidence of any polarization and gradually evolve into leaky resonances at oblique incidence [32]. For the PCS with broken \(C_{2}\) symmetry, the square air holes are changed to isosceles triangles with the area unchanged (see inset of Fig. 1c). Though there is a subtle change in the band structure in Fig. 1c, the \(Q\)s in the lower panel of Fig. 1d decrease significantly, meaning the BICs (nonradiative states) evolve into resonances (radiative) after breaking the in-plane \(C_{2}\) symmetry. Without loss of generality, we focus on the band TE\({}_{2}\) with the operation frequency near 0.6375\(f_{0}\) in the following design and discussions.
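For orientation (an added numerical conversion), the normalized frequency translates into vacuum wavelength as \(\lambda=a/f_{\rm n}\) with \(a=450\) nm, so

\[
f_{\rm n}=0.62\;\Rightarrow\;\lambda\approx 726\ {\rm nm},\qquad
f_{\rm n}=0.80\;\Rightarrow\;\lambda\approx 562\ {\rm nm},\qquad
f_{\rm n}=0.6375\;\Rightarrow\;\lambda\approx 706\ {\rm nm}.
\]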
To identify the radiation characteristics of the two PCS structures, we further calculate the distribution of far-field polarization, to map each intrinsic Bloch mode on the band TE\({}_{2}\) with different in-plane wave vectors in momentum-space (Figs. 1e-f). In Fig. 1e, the polarization
Figure 2: Simulated angle-resolved transmission spectra and phase response under RCP incidence. **(a)** The simulated transmittance in the 2D parameter space (\(k_{x}\), \(f_{n}\)) for the PCS with square air holes (\(C_{4v}\) symmetry). Four bands (TE\({}_{2}\), TE\({}_{3}\), TM\({}_{2}\), TM\({}_{5}\)) and four different SP-BICs are marked for the eye guide. **(b)** The simulated transmittance for the PCS with isosceles triangle air holes (broken symmetry). **(c-d)** The simulated optical transmission amplitude and phase response for TE\({}_{2}\) denoted with the dashed box in **(b)**. **(e-f)** The optical transmission amplitude and phase response in the momentum-space (\(k_{x}\), \(k_{y}\)) for \(f_{n}=0.6375\) marked as the horizontal lines in **c-d**. The two dotted crosses centered at the RH C-point indicate the field of view (Fig. 3) for the designed PCS.
map of the PCS with \(C_{4v}\) symmetry clearly shows that far-field polarization states form a vortex around \(\Gamma\), where the polarization is ill-defined at the V-point (denoted as the black dot). Notably, the corresponding SP-BIC can be characterized with a topological charge of -1, which is defined as the winding number of the vortex around the singularity [43]. Importantly, the polarization states of radiation from two-dimensional (2D) periodic structures are theoretically proved to be generally elliptical. For a lossless PCS system with in-plane \(C_{2}\) symmetry and reciprocity, the polarization ellipticity of each radiative Bloch mode is often close to 0, making the whole polarization field around the V-point (Fig. 1e) close to linear polarization [43, 44], which explains why the PCS with \(C_{4v}\) symmetry only supports linear polarization selectivity. When the in-plane \(C_{2}\) symmetry is broken, as shown in Fig. 1f, the linear polarized states change to multiple elliptically polarized states, a L-line, a right-handed (RH) C-point, and a left-handed (LH) C-point. Note that the two C-points (denoted as the red and blue dots in Fig. 1a) are mirror-symmetric to each other about the \(k_{\mathrm{x}}=0\) line owing to the preserved left-right mirror symmetry, while the L-lines (denoted as the black lines) lie on the mirror axis. Here, \(k_{\mathrm{x}}\) represents the normalized wavevector with respect to \(k_{0}=2\pi/a\). The two C-points emerge at two specific \(k\) points (\(k_{\mathrm{CL(CR)}}=\pm\,0.039k_{0}\), where the "\(\pm\)" sign corresponds to the RH and LH C-point) near the \(\Gamma\) point in the horizontal direction. Besides, the conservation law of topological charges still holds for the evolution of different polarization singularities [43], the SP-BIC at \(\Gamma\) with a topological charge of -1 is split into two C-points with the same topological charges of -1/2 with opposite chirality. Note that the LH (or RH) C-point can be excited by LCP (or RCP) incidence while it cannot be accessed with opposite incidence, even though the \(Q\), as shown in the lower panel of Fig. 1d, is finite with a magnitude of hundreds. Such chirality selectivity provides a new degree of freedom (the handedness of incidence) to modulate the far-field radiation and the in-plane transmission effects.
In order to get a better grasp of the different optical transmission responses of the PCS with or without broken symmetry, we simulate the angle-resolved transmission spectra and the transmission phase responses in the 2D parameter space (\(k_{\mathrm{x}}\), \(f_{\mathrm{n}}\)) under circularly polarized incidence (we only present the results under RCP incidence for symmetry consideration, and the parameter \(f_{\mathrm{n}}\) represents the normalized frequency with respect to \(f_{0}=\mathrm{c}/a\)). For unperturbed PCS with RCP incidence, as shown in Fig. 2a, there are only four unexcited points on the bands of interest at the \(\Gamma\) point in the symmetric angle-resolved transmission spectra, indicating four SP-BICs (or vortex polarization singularities). According to the symmetry, the spectra do not change when the incidence switches to LCP, meaning the BICs cannot be excited by incident light of any circular polarization. However, for the case without \(C_{2}\) symmetry, the transmission spectra under RCP incidence (Fig. 2b) become asymmetric due to the different exciting responses of two C-points. The asymmetrically vanishing regions on these bands indicate the corresponding C-points, which means that the incident light cannot excite these Bloch modes; that is, the far-field radiation polarization of these states is orthogonal to the polarization of the incident light and is circular. The mode with LCP (RCP) polarization eigenstate will not respond to the RCP (LCP) excitation and turn out to be a diminished point among the transmittance spectra. Specially, there is an unexcited region on the left side (\(k_{\mathrm{x}}<0\), around \(k_{\mathrm{CL}}\) = - 0.039\(k_{0}\) for LH C-point) of bands TE\({}_{2}\) while a continuous deep valley (transmittance undergoes a 1\(\rightarrow\) 0 \(\rightarrow\)1 process) on the right side (\(k_{\mathrm{x}}>0\), around \(k_{\mathrm{CR}}=+\,0.039k_{0}\) for RH C-point). The positions and handedness of the C-points are in accordance with the simulated polarization map (Fig. 1f). From such asymmetric transmission response, we can conclude that there is strict one-to-one correspondence between the C-points and the incidence, viz, the RCP (LCP) incidence can only excite the RH (LH) C-point, but not the opposite. In other words, the incidence with opposite chirality completely decouples with the nanostructure at the C-points. Next, we focus on the desired optical band (TE\({}_{2}\)) and calculate its transmission responses. (Figs. 2c-d). The background transmission satisfies \(t_{\mathrm{d}}=1\) for \(f_{\mathrm{n}}=0.6375\), with which we can get a significant jump in the transmittance amplitude and an abrupt change in the phase spectra with
the excitation of RCP incidence. At the RH C-point, LCP incidence totally decouples with the PCS, leading to unitary transmission through the structure, while the RCP incidence couples with the PCS and leads to destructive interference of the transmitted light, showing perfect reflection from the PCS. We further fixed the incident frequency at \(0.6375f_{0}\) and simulated the optical transmission responses in the momentum space (\(k_{\mathrm{x}}\), \(k_{\mathrm{y}}\)) (Figs. 2e-f) to better illustrate the peculiar properties of C-points (\(k_{\mathrm{y}}\) represents the normalized wavevector in the y direction). Under RCP incidence, the right side of Fig. 2e (\(k_{\mathrm{x}}>0\)) shows a significant change, from complete transmission (background \(t_{\mathrm{d}}=1\)) to zero transmission; however, the left side (\(k_{\mathrm{x}}<0\)) barely changes from the background. In particular, the transmission spectra with RCP incidence show an asymmetric crescent pattern centered around the RH C-point (Obviously, there is also a crescent pattern centered around the LH C-point under LCP incidence for symmetry consideration). It is worth noting that the simulated phase spectra of transmission coefficients (Fig. 2f) also show the same crescent pattern with a significant phase jump around the RH C-point. Such a peculiar asymmetric transmission response for LCP (or RCP) incidence reveals solid evidence of the chirality selectivity of the C-points. Therefore, C-points in PCS platforms have been demonstrated with its chirality selectivity, which makes this PCS system potentially suited for various chiro-optical applications, such as vortex beam generation [45], modulation in radiation characteristics and transmission behaviors [23], laying the foundation of our proposed multifunctional imaging system. In the following, we will demonstrate that the designed PCS device can provide the desired OTFs for different functionalities that our nanophotonic compound system requires.
We utilize the distinct transmission responses of the C-points with different circularly polarized incidences to design the OTFs and to realize multifunctional imaging, i.e., edge detection and bright-field imaging. Without loss of generality, we chose the RH C-point (\(k_{\mathrm{CR}}=+\,0.039k_{0}\)) for the remaining demonstrations. Firstly, we extracted data from the simulations in Fig. 2 and fitted the OTFs centered around the RH C-point at frequency \(0.6375f_{0}\) in the band TE\({}_{2}\) with the range: \(|(k_{\mathrm{x}}-k_{\mathrm{CR}})/k_{0}|\leq 0.02\), \(|k_{\mathrm{y}}/k_{0}|\leq 0.02\) (field of view). Figure 3a-b and Figs. 3d-e show the fitted transmittance and phase response in the range of interest for RCP incidence and LCP incidence, respectively. We further calculated the transmission spectra along horizontal (\(k_{\mathrm{y}}=0\)) and vertical (\(k_{\mathrm{x}}=0\)) directions (Figs. 3c and 3f) denoted by the dotted crosses in Fig. 3a and Fig. 3d. For RCP incidence, the transmittance undergoes a jump from 1 to 0 around the RH C-point (Fig. 3a), and there is also an abrupt jump in the transmission phase (Fig. 3b) (Note that the OTFs here are of complex values; the real part and imaginary part of the OTF with RCP incidence show a similar crescent pattern). One can see that the transmittance spectra undergo a perfect Lorentz line shape horizontally while changing relatively slowly in the vertical direction. The Lorentz line shape is known to be well suited for edge detection [41]; most light reflects from the PCS except for the edges in the horizontal direction. The vertical direction experiences a similar situation except for smaller light transmittance. Thus, the OTF in the vertical direction is also able to distinguish the vertical edges in the given field of view. By contrast, the transmittance with LCP incidence is close to unity (Fig. 3d) in the given field of view, and the phase changes continuously (Fig. 3e). Meanwhile, the transmittance approaches unity for both directions (Fig. 3f), which means most light transmits through the PCS under LCP incidence. The distinct OTFs for RCP and LCP incidence on the proposed PCS platform have been theoretically verified with the capability to realize 2D edge detection operation and bright-field imaging in the transmission mode. Thus, we can design a multifunctional imaging system by integrating the PCS into a conventional 4-\(f\) imaging system. As schematically shown in Fig. 3g, the PCS was directly placed in front of the object plane, and the edge detection (or bright-field imaging) function is clearly exhibited under RCP (or LCP) incidence.
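To illustrate how such nonlocal OTFs act in a 4-\(f\) arrangement, the sketch below applies two assumed transfer functions in Fourier space (our own toy model: the Lorentzian width, the weaker \(k_{y}\) dependence, the test object, and the choice of the k-origin at the oblique incidence direction are all illustrative assumptions, not fitted to the real device).

```python
import numpy as np

N = 512
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
obj = ((np.abs(X) < 0.4) & (np.abs(Y) < 0.25)).astype(float)   # binary test object

k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
KX, KY = np.meshgrid(k, k)
gamma = 15.0                                                   # assumed resonance width
t_rcp = 1.0 - gamma**2 / (KX**2 + 0.1 * KY**2 + gamma**2)      # Lorentz-type dip (RCP case)
t_lcp = np.ones_like(t_rcp)                                    # near-unity background (LCP case)

def image(field, otf):
    """Object -> Fourier plane -> multiply by the transfer function -> image plane."""
    return np.abs(np.fft.ifft2(np.fft.fft2(field) * otf)) ** 2

edge_image = image(obj, t_rcp)     # low spatial frequencies suppressed: edges remain
bright_image = image(obj, t_lcp)   # object reproduced: bright-field image
```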
## 3 Results and discussion
To numerically demonstrate that the device can be integrated into conventional imaging systems and verify its multifunctionality, we numerically simulate the Fresnel diffraction [46] and the OTFs of the PCS of the designed compound imaging system. As shown in Fig. 3g, the 4-\(f\) system consists of two lenses (\(L_{1}\) and \(L_{2}\)) of equal focal length \(f\); thus, a "\(-1\) magnification imaging" can be achieved at the image plane [47]. By directly placing the PCS behind the object plane in the 4-\(f\) system, we can easily load the OTFs of the PCS into this compound nanophotonic system. Remarkably, the OTFs work at the momentum space owing to their nonlocal wavevector-dependent transfer functions [31, 48]. Therefore, the PCS is a nonlocal optical device, and we can get the same image at the output plane when the PCS is placed at the Fourier plane or other positions in the 4-\(f\) system. (The imaging results when we place the PCS at the Fourier plane also show good performance of both edge-imaging and bright-field imaging). This position independence is a crucial advantage of our PCS device over most
Figure 3: The OTFs of the proposed PCS device around the RH C-point under RCP or LCP incidence, and the schematic of the designed multifunctional imaging system. The operation range of interest in the momentum space is \(|(k_{\mathrm{x}}-k_{\mathrm{CR}})/k_{0}|\leq 0.02\) and \(|k_{\mathrm{y}}/k_{0}|\leq 0.02\). **(a-b)** The fitted transmittance (a) and transmission phase (b) under RCP incidence. **(c)** The transmittance at the positions marked by dotted lines in **a**. **(d-f)** The same as **(a-c)** but for LCP incidence. **(g)** Integrating the PCS into a conventional 4-f imaging system can realize edge (or bright field) imaging with RCP (or LCP) incidence.
traditional optical elements that are local optical elements with position-dependent optical responses and need to be strictly aligned to the optic axis [48]. Furthermore, it is crucial that the PCS needs to be placed obliquely with a particular angle \(\theta_{0}\) (Fig. 3g) with respect to the optic axis because the working range of OTFs is centered around the RH C-point (off-\(\Gamma\)) in momentum space, where \(\theta_{0}=\text{arcsin}\) (\(k_{\text{CR}}/k_{0}\)) = 2.3 deg. The target band is TE\({}_{2}\) (Fig. 2c), and the corresponding operation frequency is \(f_{\text{w}}=0.6375f_{0}\) (705.4 nm, red light).
To clearly demonstrate the two different functions of the proposed imaging system, we performed theoretical simulations on three samples (Letters "NJU", traffic signs, and the logo of Nanjing University) in our imaging system. As shown in Fig. 3g, edge detection and bright-field imaging are switchable with different excitation modes (RCP or LCP). It is worth pointing out that the original images of the three cases are binarized for simplification. We recorded the output images for different cases, and the reversed images in the image plane were flipped back for convenient comparisons. Figures 4a, 4d, and 4g show the output images of the three cases under LCP excitation; these bright-field images are of high image quality without distortion and carry nearly all information of the input objects. When switching from LCP to RCP, output images with high contrast of the edges for the three samples are presented as Figs. 4b, 4e, and 4h. It is
Figure 4: Two different functions under RCP and LCP incidence of the proposed imaging system, edge-imaging and bright-field-imaging. **(a-b)** The normal bright-field images under LCP incidence and the edge-enhanced images under RCP incidence for Letters “NJU”, respectively. **(c)** The measured intensity distribution of the bright-field images (under LCP incidence) and the edge-enhanced images (under RCP incidence) of the Letters “NJU”. **(d-f)** The same as (**a-c**) but for the traffic signs. **(g-i)** The same as (a-c) but for the Logo of Nanjing University. Note that the dashed lines indicate the positions of our horizontal-cut measurements of intensities.
remarkable that the edges of the letters, signs and logo are clearly revealed along both horizontal and vertical directions, which indicates a decent 2D edge detection effect with high resolution. Considering the transmittance through the PCS (different OTFs in two directions, Fig. 3c) discussed in the previous context, there is less light transmitting through the PCS in the vertical direction, so that the vertical edges are less obvious (but still distinguishable to some extent) than the horizontal ones. We further calculated the normalized intensity distribution in the horizontal direction for both edge-enhanced images and bright-field images in order to quantitatively examine the performance of our multifunctional imaging system. The intensity diagrams are shown in Figs. 4c, 4f, and 4i, where the upper panels show the amplitude of the bright-field images, and the lower panels show the intensity distribution for the edge images. The red dashed lines in Fig. 4 indicate the positions of horizontal-cut measurements. It can also be seen that there are clearly visible separated sharp peaks located at every edge, and positions away from the edges are of nearly zero amplitude, which also supports the high-contrast imaging of the edges and evidently shows the perfect edge detection due to the Lorentz line shape OTF in the \(k_{x}\) direction. By comparing the intensity distribution of the different cases, two functions of our multifunctional imaging system are more intuitively demonstrated. Note that we can get the minimum resolution of 1.56\(\lambda\) (1.10 \(\upmu\)m, average widths at half-height of each sharp peak in the intensity diagrams of edge-enhanced images) to clearly distinguish all the edges, where \(\lambda\) is the operation wavelength of the incident light, \(\lambda=\text{c/}f_{\text{w}}=0.706\)\(\upmu\)m. Thus, we can conclude that bright-field imaging with high quality and edge imaging with high resolution (wavelength scale) have been successfully demonstrated, and the two different functions are switchable with RCP or LCP excitation.
One major advantage of the multifunctional imaging system is the nonlocality of the PCS device. This edge enhancement effect in our system is similar to dark-field imaging [49] but without the use of additional collimation components (i.e., a condenser) due to the nonlocal OTFs, which significantly reduces the system complexity. Besides, the PCS can be placed at an arbitrary position in the optical pathway in the 4-\(f\) system owing to the nonlocality [31, 48]. Furthermore, the proposed multifunctional imaging system can operate over a relatively broad band due to the low-\(Q\) resonance (\(Q\sim\) 200) away from the BIC state [50]. Figure 2c-d indicates that it is useful for edge discrimination across a broad bandwidth from 0.630\(f_{0}\) to 0.642\(f_{0}\), namely 700-714 nm. Additionally, the operation frequency of the PCS device is scaled as c/\(a\), and it can be tuned accordingly by changing the period \(a\) of the PCS based on different demands. We can also utilize the C-points at another band (i.e., TE\({}_{3}\) or TM\({}_{5}\)) of the same structure. Even though we might get different topological configurations of C-points [5], we can achieve similar edge detection effects and chirality selectivity that can be utilized in our multifunctional imaging system.
## 4 Conclusion
In summary, we have proposed a novel approach to realize a multifunctional imaging system by utilizing the chirality of C-points of the proposed PCS structure. In our design, the two C-points of opposite chirality originate from the SP-BIC in the PCS with \(C_{4v}\) symmetry when the in-plane \(C_{2}\) symmetry is broken. Two sets of OTFs can be obtained from the peculiar asymmetric optical transmission response of C-points, forming the basis of our multifunctional imaging system. Then, our imaging system has been theoretically demonstrated to provide two different switchable functions: edge imaging under RCP incidence and bright-field imaging under LCP incidence. The proposed imaging system manifests the advantages of nonlocality, reduced complexity, high resolution (edge detection), broadband operation as well as the ability to implement two different OTFs. Not only does this compound nanophotonic system open new opportunities in applications such as biological imaging and computer vision, but it also promotes the understanding and on-purpose design of V-points and C-points, and inspires new explorations of novel phenomena and potential applications in radiation modulating, topological photonics, and image processing, which are worthy of further study.
|
2305.13154
|
Defending Against the Dark Arts: Recognising Dark Patterns in Social
Media
|
Interest in unethical user interfaces has grown in HCI over recent years,
with researchers identifying malicious design strategies referred to as ''dark
patterns''. While such strategies have been described in numerous domains, we
lack a thorough understanding of how they operate in social networking services
(SNSs). Pivoting towards regulations against such practices, we address this
gap by offering novel insights into the types of dark patterns deployed in SNSs
and people's ability to recognise them across four widely used mobile SNS
applications. Following a cognitive walkthrough, experts (N=6) could identify
instances of dark patterns in all four SNSs, including co-occurrences. Based on
the results, we designed a novel rating procedure for evaluating the malice of
interfaces. Our evaluation shows that regular users (N=193) could differentiate
between interfaces featuring dark patterns and those without. Such rating
procedures could support policymakers' current moves to regulate deceptive and
manipulative designs in online interfaces.
|
Thomas Mildner, Merle Freye, Gian-Luca Savino, Philip R. Doyle, Benjamin R. Cowan, Rainer Malaka
|
2023-05-22T15:42:02Z
|
http://arxiv.org/abs/2305.13154v1
|
# Defending Against the Dark Arts: Recognising Dark Patterns in Social Media
###### Abstract.
Interest in unethical user interfaces has grown in HCI over recent years, with researchers identifying malicious design strategies referred to as "dark patterns". While such strategies have been described in numerous domains, we lack a thorough understanding of how they operate in social networking services (SNSs). Pivoting towards regulations against such practices, we address this gap by offering novel insights into the types of dark patterns deployed in SNSs and people's ability to recognise them across four widely used mobile SNS applications. Following a cognitive walkthrough, experts (\(N=6\)) could identify instances of dark patterns in all four SNSs, including co-occurrences. Based on the results, we designed a novel rating procedure for evaluating the malice of interfaces. Our evaluation shows that regular users (\(N=193\)) could differentiate between interfaces featuring dark patterns and those without. Such rating procedures could support policymakers' current moves to regulate deceptive and manipulative designs in online interfaces.
SNS, social media, social networking services, interface design, dark patterns, well-being, ethical interfaces
Footnote †: © 2023 Copyright held by the owner/author(s). This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in _Designing Interactive Systems Conference (DIS ’23), July 10–14, 2023, Pittsburgh, PA, USA_, [https://doi.org/10.1145/3563657.3595964](https://doi.org/10.1145/3563657.3595964).
## 1. Introduction
Among HCI researchers, interest in the ethical implications of how technology is designed has seen a noticeable increase over recent years. One of the more widely known topics within this work is research that focuses on unethical design strategies, referred to as "dark patterns". Cataloguing instances of dark patterns has led to a growing collection of interface artefacts that negatively affect users' ability to make informed decisions. A common example can be seen in cookie-consent banners that often visually elevate options allowing the tracking and storing of users' data over alternatives denying such functionalities. Originating in e-commerce (Mohammad et al., 2015; Sankar et al., 2015), and other online websites (Mohammad et al., 2015; Sankar et al., 2015), dark patterns describe design strategies that coerce, steer, or deceive users into unfavourable actions that they may not have taken if they were fully informed (Sankar et al., 2015). Today, related work has identified a multitude of designs that fit this definition, including digital games (Sankar et al., 2015), social networking sites (SNS) (Sankar et al., 2015; Sankar et al., 2015; Sankar et al., 2015; Sankar et al., 2015), and mobile applications (Sankar et al., 2015; Sankar et al., 2015; Sankar et al., 2015).
The adverse effects of dark patterns have drawn the attention of regulators worldwide. Examples aimed at better protecting users' privacy and autonomy can be seen in the California Consumer Privacy Act CCPA (Sankar et al., 2015) or the Digital Service Act (DSA) of the European Union (Dai et al., 2015). Regardless of the national background, regulating dark patterns faces common challenges, such as a missing taxonomy, the rapid development of new dark patterns, and difficulty identifying dark patterns that require legal interventions. We see that findings from human-computer interaction (HCI) can support the legal discussion and legislative efforts (Sankar et al., 2015) in developing a taxonomy and providing the right tools to assess and regulate dark patterns. Therefore, it is crucial that research advances our understanding of the implications of dark patterns in as many domains as possible to enable regulators and legislators to create effective measures to protect users.
In this work, we take steps towards achieving this goal by (1) analysing the ability of experts and regular users of social media to identify dark patterns based on established definitions thereof and by (2) studying an alternative approach to classify interfaces based on high-level characteristics proposed by Mathur et al. (Mathur et al., 2015; Mathur et al., 2015) to enable an easier evaluation. As this is a relatively new research area, knowledge about how people perceive dark patterns is still limited, with a handful of studies exploring this particular aspect of the topic (Sankar et al., 2015; Sankar et al., 2015; Sankar et al., 2015). In light of initial moves towards regulation and increased attention in the scientific literature, this work reflects
on the current state of the dark pattern research, investigates how applicable current taxonomies are in domains in which they were not first established, and whether current definitions can be utilised as evaluation tools. Before conducting this research, we collected 69 types of dark patterns from eight papers (Gray et al., 1997; Gray et al., 1997; Gray et al., 1997; Gray et al., 1997; Gray et al., 1998; Gray et al., 2000; Gray et al., 2001; Gray et al., 2001), further included in Mathur et al.'s (Mathur et al., 2001) literature review. While we are aware that recent works have updated the overall corpus of dark patterns (Gray et al., 2000; Gray et al., 2001), which we could not include in our studies, the focus of this research is to work towards a simplified recognition tool to aid policy-makers' and regulators' efforts. For this endeavor, we turn towards SNSs as we still lack certain insights about how malicious interfaces manifest in this context. Additionally, the omnipresent nature of SNSs affords constant investigation as research repeatedly highlights negative effects posed on their users' well-being (Gray et al., 2000; Gray et al., 2001). Aiming to aid regulatory efforts, we address these research gaps based on two research questions:
* Can dark patterns taxonomies be used by experts to identify and recognise instances in SNSs?
* Are regular SNS users able to differentiate between interfaces with and without dark patterns?
We answer these questions through two studies. In the first, we conducted cognitive walkthroughs with six HCI researchers aimed at investigating whether current dark pattern taxonomies can be used to assess and identify dark patterns in novel interfaces. The four SNSs included in the study were Facebook, Instagram, TikTok, and Twitter. In a second study, we conducted an online survey to learn about the recognisability of dark patterns by regular SNS users. In contrast to the first study, we did not provide participants of the second study with the complete corpus of dark pattern research but instead relied on five questions adopting Mathur et al.'s (Mathur et al., 2001; Mathur et al., 2001) high-level dark pattern characteristics with the aim of assessing the malice of a particular interface design. While this hinders an immediate comparison between both studies, our evaluation of this alternative process shows that regular users are able to generally recognise dark patterns. Conclusively, dark patterns were not rated to be very malicious (using Mathur et al.'s (Mathur et al., 2001; Mathur et al., 2001) five high-level characteristics) but participants were able to successfully discern dark patterns from a selection of interface screenshots collected from Study 1, that either did or did not contain them. We also propose that a similar approach, one that is not fundamentally linked to specific examples of dark pattern design, could introduce more flexibility and practicality into current legislation processes and would better future-proof legislative efforts aiding the protection of users.
## 2. Related Work
In this section, we will approach relevant research to identify, recognise, and regulate dark patterns from two directions. We will begin by establishing a taxonomy of dark pattern types resulting from the collaborative effort of prior research. This taxonomy is later used in our first study. Afterwards, we highlight work studying the perception and recognition of dark patterns, a necessary step towards successful regulation. We then outline the form of current approaches and strategies in the final paragraphs of this section.
### Dark Pattern Taxonomy
Here, we attempt to provide a relatively comprehensive overview of the current dark patterns landscape. To provide a summary of the taxonomy used in our studies, Table 1 presents key contributions taken from Mathur et al.'s (Mathur et al., 2001) earlier review on dark pattern literature. As we deem it important for our studies that the definitions for dark patterns should be the result of empirical research, we decided to limit the scope for the eight academic contributions part of Mathur et al.'s literature review (Mathur et al., 2001). Although more holistic guidelines exist, these are not included as they tend not to provide enough empirical evidence in their definitions. This left eight academic works that met our criteria, which collectively presented 69 different types of dark patterns that are outlined below in chronological order. Brignull (Gray et al., 2000), who first coined the term dark pattern, initialised the current body of work with twelve types that concern online design strategies. In a similar effort, Conti and Sobiesk (Conti and Sobiesk, 2000) defined eleven types of malicious strategies based on a one-year data collection. Although their work was published before the term dark pattern gained the recognition it sees today, we refer to their results as dark patterns for the sake of conciseness. Offering seven game-specific dark patterns, Zagal et al. (Zagal et al., 2000) studied tricks used in that industry to create, for example, competition or disparate treatment through unethical practices. In another work, Greenberg et al. (Greenberg et al., 2001) were interested in the possible exploitation of spatial factors when discussing dark patterns through the lens of proxemic theory. The result introduces eight types of proxemic dark patterns like speculative technologies targeting users with specific advertisements using public displays. Closely related to the Privacy by Design concept (Zagal et al., 2000), and thus particularly interesting for our research, Bosch et al. (Bosch et al., 2001) collected eight types of dark patterns enveloping schemes that target data collection and limitations of users' agency to customise their personal preferences.
Taking a different approach, Gray et al. (Gray et al., 2001) looked to investigate how dark patterns are created in the first place. Here, researchers analysed an image-based corpus of potential types of dark patterns using a qualitative approach while relying on Brignull's original taxonomy. They define five types of dark patterns that practitioners engage in when developing manipulative designs. Following this research, Gray et al. (Gray et al., 2001) applied content analysis on 4775 user-generated posts collected from the Reddit sub-forum _r/assholesdesign_. Their result provides six properties "asshole designers" subscribe to. Interested in the number of web services embedding dark patterns, Mathur et al. (Mathur et al., 2001) applied hierarchical clustering to identify that 11% of shopping websites employ text-based dark patterns based on a collection of more than 11k samples. Evaluation of their data generated twelve dark patterns embedded in shopping websites.
These works bring together 69 types of dark patterns. Noticeably, various domains have been investigated, widening our understanding of these strategies' origins. However, there is currently a potentially important gap regarding SNS-related platforms like Facebook, Instagram, TikTok, and Twitter - platforms that many people interact with frequently in their day-to-day lives. A growing body of research already illustrates problems with users accurately recollecting the amount of time they spend on SNSs and the frequency in which they use these services (Gray et al., 2001; Gray et al., 2001; Gray et al., 2001). Concerns are also growing regarding alarming implications SNSs have on their
users' well-being [3; 40; 43; 44]. Filling this gap, the research presented here considers the current discourse to review the presence of these described dark patterns in four major SNS platforms.
### Perceiving Dark Patterns
Interested in the cognitive biases dark patterns exploit, Mathur et al. [33] analysed their dark patterns further and recognised five common characteristics in which these dark patterns operate: _asymmetric_; _restrictive_; _covert_; _deceptive_, and _information hiding_. In a follow-up effort, Mathur et al [34] applied these characteristics to prior dark pattern taxonomies while extending the framework to include a sixth characteristic named _disparate treatment_. Collectively, this framework promises an alternative and interesting tool to study dark patterns. To test its utility outside its original scope, our research applies this framework to recognise dark patterns in SNSs. Instead of focusing entirely on the identification of dark patterns, a multitude of works considers end-users' perspectives of dark patterns. In this sense, Di Geronimo et al. [10] sampled 240 popular applications from the Google Playstore and analysed each for contained dark patterns based on Gray et al.'s [19] taxonomy. Based on 10-minute cognitive walkthroughs, their results indicate that 95% of tested applications yield dark patterns. An ensuing online survey revealed that the majority of users fail to discern Dark Patterns in 30-second video recordings of mobile applications. However, their ability to identify harmful designs improves when educated on the subject. In line with prior research, including Maier and Harr's [32] confirmation of users' difficulty to recognise dark patterns [32], Bongard-Blanchy et al. [4] reinforce these implications through their online survey studying participants' ability to recognise dark patterns. Studying the effects browser modalities have on the number of dark patterns users are faced with, Gunawan et al [23] conducted a thematic analysis on recordings of various online services. Their work describes twelve previously not described dark patterns, including _extraneous badges_ that describe nudging interface elements, like coloured circles, which provoke immediate interaction. Trying to understand Facebook users' control over ad-related settings, Habib et al. [24] demonstrate that the SNS does not meet users' preferred requirements. Considering dark patterns in their work, the authors discuss problematic interface structures limiting users' agency to choose settings efficiently and to their liking. This limitation is further discussed by Schaffner et al. [38], who demonstrate difficulties for users to successfully delete their accounts across 20 SNSs. Their success rate was additionally impacted by the modality in which a particular SNS is accessed.
Investigating persuasive designs, Utz et al. [42] demonstrate how nudging interfaces can shift users' decisions towards a preset goal. In a similar vein, Grassl et al. [21] showed evidence that nudges prevent informed decisions. In their experiments, users were either faced with banners visually promoting a privacy-diminishing option or a reverted interface where the option protecting users' privacy was promoted instead. Related efforts of this community highlight current shortcomings of the GDPR [12] to achieve its goals. Reviewing compliance of consent management platforms, Nouwens et al. [37] show that only 11.6% of websites from a corpus of 10k met the minimum requirements of European law. Reviewing the GDPR for its objectives to give users control over their data, Boyens et al. [6] find that users experience serious problems, leading to decreasing trust in institutions that should protect them.
These works collectively show that the responsibility to avoid dark patterns can and should not solely fall onto users. Additional protection needs to come from other sources, such as the better implementation of regulations, while research needs to foster our understanding of dark patterns' origins as well as exploited strategies. We contribute to the latter by turning towards SNSs. Unlike prior work, our study utilises Mathur et al.'s dark pattern characteristics as a framework to learn about users' ability to recognise dark patterns in this domain.
### Regulating Dark Patterns
The advantages of interdisciplinary efforts between HCI and legal scholars have recently been shown in Gray et al.'s [20] work studying consent banners from multiple perspectives. The negative effects of dark patterns in online contexts are not a new phenomenon in law. Protecting users and consumers from manipulation, unfair practices, and imbalances has always been a subject of legislation.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline
**Brignull** & **Conti \& Sobiesk** & **Zagal et al.** & **Greenberg et al.** & **Bösch et al.** & **Gray et al.** & **Gray et al.** & **Mathur et al.** \\ \hline
\multicolumn{8}{c}{_(Each column lists the individual dark pattern types defined in the respective work; 69 types in total.)_} \\ \hline
\end{tabular}
\end{table}
Table 1. This table shows 69 types of dark patterns described in eight related works. Columns are in chronological order in which these works were published.
Different laws can affect single design patterns, including data protection law, consumer law, and competition law, depending on their impact on consumers, traders, and personal data (Zhou et al., 2017; Zhou et al., 2018). Recently, attempts to regulate dark patterns as a whole have arisen. The European Union, in particular, has started to draft legislation that specifically targets dark patterns. The EU's Digital Service Act (Eur, 2017) (DSA) and its proposal for the Data Act (Zhou et al., 2017) explicitly provide a definition for dark patterns in their recitals.
A key challenge is to legislate against patterns that evolve rapidly, adopting new strategies to evade regulation while maintaining their malice. In the context of SNSs, our study draws attention to tools of HCI that could support legal decisions. Picking up on these works, legislators and regulators could utilise the existing knowledge about dark patterns to extend current approaches for protecting people's privacy to further problematic designs that potentially harm users' well-being. In the presented work, we explore a novel approach to evaluate the malice of interfaces of four SNSs based on high-level characteristics proposed by Mathur et al. (Mathur et al., 2018).
## 3. Study 1: Cognitive Walkthrough
The purpose of this study is to see whether definitions of dark patterns can be used to recognise similar design strategies in domains other than the ones they were initially identified in. We, therefore, considered four SNSs (Facebook, Instagram, TikTok, and Twitter) where we had six HCI researchers review mobile applications in the form of cognitive walkthroughs (Mathur et al., 2018). Each researcher was asked to complete ten tasks designed for identifying and recording any instances of dark patterns on the SNSs' mobile applications. The decision to investigate exactly these four SNSs is based on their overall popularity (Mathur et al., 2018), comparable features, and similar user bases. As the experiment was conducted during the COVID-19 pandemic, participants completed their walkthroughs without supervision. Study 1 aims to answer the following research question: Can dark patterns taxonomies be used by experts to identify and recognise instances in SNSs?
### Reviewers
For this experiment, we recruited reviewers who have strong expertise in HCI and UX research and design. In a similar fashion to regulators who have to decide whether a problematic interface requires legal action or not, our participants needed to meet the necessary qualifications to identify dark patterns. Their knowledge of best practices in interface design and user experience makes them more likely to recognise potential issues than users without this particular expertise, as shown in prior research (Beng et al., 2017; Chen et al., 2017). Recruitment involved reaching out to researchers with backgrounds in cognitive science, computer science, and media science who also specialised in HCI research. Participation was on a voluntary basis. In total, we selected six participants (3 female, 3 male) from the authors' professional network. The average age of the panel was 28.33 years (\(SD=1.63\)), with an average experience in HCI research of 3.83 years (\(SD=1.47\)). All participants worked in academia in HCI-related research labs. Five are of German nationality, while one reviewer is Russian. While all participants had experience in interface design, none but one had prior knowledge of academic dark pattern research. Before conducting the study, each participant was provided with the necessary information on the topic before we obtained their consent. To protect them from the unethical consequences of dark patterns, we provided each participant with devices, new accounts for the SNSs, and data to be used during the study. This is further elaborated in subsection 3.2 Preparation.
### Preparation
After receiving their consent for participating in this study, each reviewer received two smartphone devices, a factory reset iPhone X (iOS 14.5) and a Google Pixel 2 (Android 11), with the social media applications already installed to ensure the same version1 was used by each participant. Both iOS and Android devices were used to distinguish between problematic interface designs caused by the applications and those linked to the operating systems. Also, each participant was provided with a new email account and phone number so they could create new user profiles for their assigned platforms. This was done to respect participants' privacy and to avoid customisation of accounts from previous usages that may impact participants' experience and, subsequently, their findings. Lastly, we stored some amount of media content on each device as part of the cognitive walkthrough, affording the participants to create and post content. Again, this ensured that participants did not have to share any personal information with the SNS.
Footnote 1: Installed versions consistent throughout Study 1: Facebook (iOS: 321.0.0.5.3.119; Android: 321.0.0.37.119); Instagram (iOS: 191.0.0.25.122; Android: 191.1.0.4.124); TikTok (iOS: 193.0.0; Android: 19.3.4); Twitter (iOS: 8.6.0.2; Android: 8.95.0-release.00).
### Procedure
One key element of this study is an extracted dark pattern taxonomy based on Mathur et al.'s (Mathur et al., 2018) work, including a review of the dark pattern landscape. The taxonomy, featuring 69 distinct types (see Table 1), was given to each reviewer after a one-hour-long introduction to the topic, followed by another hour to resolve unanswered questions mitigating inconsistencies in reviewers' expertise. Despite reviewers' backgrounds in HCI-related fields, this introductory session ensured a common understanding of current conceptualisations of dark patterns. After the introduction, each reviewer was handed informational material containing the presented information and the definitions of the 69 dark pattern types. This material is provided in the supplementary material of this paper. To maintain further consistency throughout the study, we created ten tasks reviewers were asked to complete during the cognitive walkthroughs (Mathur et al., 2018). Five of these tasks were adapted from research conducted by Di Geronimo et al. (Di Geronimo et al., 2017) that evaluated popular applications on the Google Play Store. Inspired by elements of their methodology, we increased the amount of time each SNS should be investigated to approximately 30 minutes based on a pre-study. This decision allows us to understand the interfaces of the four SNSs on a deeper level. Lastly, each reviewer was assigned two of the four SNSs ensuring that each application was reviewed three times by independent people on both iOS and Android operating systems. After a reviewer completed their walkthrough, we saved the stored recording data from the devices before setting them up
for the next session. Below are the ten tasks each reviewer performed. Tasks taken from or worded closely to Di Geronimo et al. (2017) are highlighted by an asterisk. Items 1, 9, and 10 were added to improve the task flow, whilst items 4 and 5 were developed to address typical SNS activities such as creating and sharing personal content and networking.
1. _Turn on screen recording on each device._
2. _Open the app and create an account to log in and then out._*
3. _Close and reopen the app._*
4. _Create any kind of content, post it, and delete it._
5. _Follow and unfollow other accounts._
6. _Visit the personal settings._*
7. _Visit the ad-related settings._*
8. _Use the application for its intended use (minimum of five minutes):_*
    1. _Describe the natural flow of the app - what did you use it for?_
    2. _Could you use the app as you wanted or did some features 'guide' your interactions?_
    3. _How easy was it to get distracted, and if so, what distracted you?_
9. _Delete your account._
10. _Turn off screen recording and save the recording._
## 4. Results of Study 1
In this study, we considered a dark pattern taxonomy comprising 69 individual types of dark patterns (see Table 1) across mobile applications for the SNSs Facebook, Instagram, TikTok, and Twitter. Offering an answer to our first research question, the six participants identified a total of 548 distinct dark pattern instances of the considered 69 types that can be associated with descriptions contained within the provided taxonomy. Participants found \(N_{F}=232\) dark pattern instances in Facebook, \(N_{I}=96\) in Instagram, \(N_{Ti}=95\) in TikTok, and \(N_{Tw}=125\) in Twitter. Figure 1 presents four screenshots that demonstrate examples of dark patterns identified by participants across each of the four SNSs. Close inspection shows multiple types of dark patterns at play in each image. Although the four SNSs were selected based on similar functionalities and user bases, we do not compare results across platforms. Despite their similarities, each SNS contains unique features that distinguish it from the others. Also, the number of functionalities between the SNSs varies considerably, with Facebook containing many more options for users to engage with than the alternatives. Instead, we report descriptive statistics that will then be further elaborated on in the discussion section of this paper.
### Recognised Types of Dark Patterns
Of the 69 types of dark patterns contained in the taxonomy participants were provided with at the beginning of this study, 31 distinct types were identified, leaving the remaining 38 types (55.07%) unrecognised across all four SNSs. All recognised dark patterns can be seen in Figure 2. For brevity, only key illustrative instances are reported here, while the full analysis will be included in the supplementary material. Across the four SNSs, two dark pattern types stood out the most: With a total of 58 recognised instances, Gray et al.'s _Interface Interference_(Gray et al., 1996) (i.e. interfaces that privilege certain elements over others, confusing users into making a particular choice) was most readily identified by participants, whilst Mathur et al.'s _Visual Interference_(Mathur et al., 2007) (i.e. interfaces that deploy visual/graphical tricks to influence users' choices) was next most widely observed with 51 instances. The third most frequently identified dark pattern was Gray et al.'s _Obstruction_(Gray et al., 1996) dark pattern (interfaces that make certain actions unnecessarily difficult to demotivate users), recognised 47 times. Bösch et al.'s _Bad Defaults_(Bogd et al., 2015) (privacy settings are pre-set to share users' personal information by default) came fourth with 44 instances, closely followed by 40 counts of Brignull's _Privacy Zuckering_(Gray et al., 1996) (tricks to deceive users into sharing more personal information than intended) dark pattern.
### Types of Dark Patterns That Have Not Been Recognised
While 44.93% of dark pattern types were recognised during the cognitive walkthrough, the other 55.07% were not. Almost all dark pattern taxonomies contained some dark patterns that were recognised. However, the taxonomy by Zagal et al. (Zugal et al., 2015), being video-game focused, did not contribute any specific dark patterns that were recognised. This result shows that not all dark pattern types are relevant for each domain. By adding new dark pattern types to the overall collection for each domain, regulators have increasingly more items to consider, complicating their endeavour if they are to use them as guides.
### Dark Patterns Co-Occurrences
To learn more about how dark patterns interact with each other, we also analysed them for co-occurrences. We used the software ATLAS.ti (Gray et al., 2016) to calculate the co-occurrence coefficient between any two dark patterns, which is based on the Jaccard similarity coefficient (Gray et al., 2015) returning a c-coefficient \(c\). Interestingly, the data revealed that although two patterns are described differently, the way they operate can be rather similar in the context of SNSs. Intersections between _Interface Interference \(\cap\) Visual Interference_ (\(c=0.85\), \(N=50\) co-occurrences), _Forced Action \(\cap\) Forced Work_ (\(c=0.89\), \(N=25\) co-occurrences), and _Roach Motel \(\cap\) Hard to Cancel_ (\(c=0.71\), \(N=17\) co-occurrences), for instance, follow this example. However, like the intersection between _Misrepresenting \(\cap\) Immortal Accounts_ (\(c=0.55\), \(N=12\) co-occurrences) or _Privacy Zuckering \(\cap\) Bad Defaults_ (\(c=0.35\), \(N=22\) co-occurrences), most co-occurrences are indications of interfaces yielding multiple distinct dark patterns simultaneously. As the overall co-occurrence data set is too large to be fully represented here, it has been included in the supplementary material.
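To make the coefficient concrete, the following minimal Python sketch (not the ATLAS.ti implementation; the instance identifiers are hypothetical) computes the Jaccard-based c-coefficient between two coded dark patterns.

```python
# Minimal sketch (not the ATLAS.ti implementation) of the Jaccard-based c-coefficient:
# c = n12 / (n1 + n2 - n12), where n1 and n2 are the numbers of coded instances of two
# dark patterns and n12 the number of instances coded with both.

def c_coefficient(instances_a: set, instances_b: set) -> float:
    """Co-occurrence coefficient between two coded dark patterns."""
    n12 = len(instances_a & instances_b)
    union = len(instances_a) + len(instances_b) - n12
    return n12 / union if union else 0.0

# Hypothetical coding example: screens on which each pattern was identified.
interface_interference = {"screen_01", "screen_02", "screen_03", "screen_04"}
visual_interference = {"screen_01", "screen_02", "screen_03"}
print(c_coefficient(interface_interference, visual_interference))  # 0.75
```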
## 5. Study 2: Online Survey
Findings from Study 1 suggest existing taxonomies feature numerous types of dark patterns that are not applicable to SNSs and that some dark patterns employed by SNSs are not incorporated in earlier taxonomies. In this second study, we adopted a different approach to identifying dark patterns in interfaces. Instead of relying on fixed descriptions and definitions of existing dark patterns, we developed a questionnaire consisting of five questions based on dark pattern characteristics previously highlighted by Mathur et al. (Mathur et al., 2007). These higher-level characteristics go beyond dark pattern
definitions by descriptively organising dark patterns from existing literature (Midhrer et al., 2017). Following this approach, study 2 aims to address the following research question: Are regular SNS users able to differentiate between interfaces with and without dark patterns?
### Screenshots
We used sixteen screenshots along with the aforementioned questionnaire to evaluate people's ability to recognise dark patterns within screenshots of the four SNSs. While eight of the sixteen screenshots contained dark patterns, the other eight did not and served as control. All screenshots were sampled from the previous study (see Figure 3 for four example images). Regarding those that contained dark patterns, two conditions had to be met: screenshots had to (1) collectively represent all five characteristics by Mathur et al., while (2) the contained dark patterns had to be identified by at least two expert reviewers. Furthermore, we avoided using screenshots that contained dark patterns that only emerge through procedural interactions taken by users (e.g. _Roach Motel_). Consequently, two authors of this paper made sure to pick screenshots where the dark patterns were recognisable on a static image, for example by deploying visual/aesthetic (e.g. Visual Interference) or linguistic (e.g. Confirmshaming) manipulations. Screenshots that did not contain dark patterns were carefully selected by sampling situations where expert reviewers did not recognise any dark pattern. This was additionally validated by two authors of this paper to ensure no dark pattern had been accidentally overlooked. Using these screenshots, we test whether participants can generally recognise dark patterns and whether they can differentiate between screenshots with and without dark patterns.
### Methodology
To investigate our research question, we conducted an online survey. The survey was divided into three parts: (1) screening for participants' SNS usage behaviour, (2) a dark pattern recognition task, and (3) a demographic questionnaire. In total, the survey featured 25 question items (included in supplementary material) and took on average 12:22 minutes (\(SD=9\):45) to complete. As we were interested in whether regular social media users could assess dark patterns in SNSs, only participants who indicated previous and regular use of social media platforms were included in the sample. This was achieved using screening questions about previous social media usage. Before evaluating the sixteen screenshots, participants were provided with the following definition of dark patterns by Mathur et al. (Mathur et al., 2017): _"user interface design choices that benefit an online service by coercing, steering, or deceiving users into making decisions that, if fully informed and capable of selecting alternatives, they might not make"_. For each of the sixteen screenshots, participants had to first answer whether they thought dark patterns were present in the screenshot based on the definition of dark patterns by Mathur et al. (Mathur et al., 2017) with 'Yes', 'No' or 'Maybe'. In the next step, participants then had to answer if they saw dark patterns in the screenshot based on Mathur's dark pattern characteristics (Mathur et al., 2017). For this, we developed five questions adopting the characteristics (Mathur et al., 2017), which participants rated based on a unipolar 5-point Likert-scale (see Table 2). Available responses ranged from "Not at all" to "Extremely". After assessing all five characteristics, they moved on to the next screenshot. Screenshots were delivered in a randomised order between participants. Once all screenshots were assessed, the survey concluded by collecting basic demographic data from each respondent, including age, gender, current country of residency, and an optional field to give feedback.
Figure 1. Example screenshots from Study 1. Figure 1(a) contains the dark patterns _Hidden Legalese Stipulations (A)_, _Misdirection (B)_, _Interface Interference (C)_, _Visual Interference (D)_, _Privacy Zuckering (E)_, and _Address Book Leeching (F)_. Figure 1(b) contains the dark patterns _Privacy Zuckering (A)_, _Address Book Leeching (B)_, _Hidden Legalese Stipulations (C)_, _Interface Interference (D)_, and _Visual Interference (E)_. Figure 1(c) contains the dark patterns _Hidden Legalese Stipulations (A)_, _Interface Interference (B)_, and _Visual Interference (C)_. Figure 1(d) contains _Privacy Zuckering (A)_, _Interface Interference (B)_, and _Visual Interference (C)_.
### Participants
To calculate an appropriate sample size needed to answer our research questions, we conducted an _a priori_ power analysis using the software G*Power (Grover, 2019). Given our study design, to achieve a power of 0.8 and a medium effect size, the analysis suggested a total sample size of 166. Participants of this survey were recruited from two sources: (1) the Reddit forum _r/samplesize_ (Brockett et al., 2016) and (2) _Prolific_ (Pranranchi et al., 2017). For redundancy, we invited 90 more people than our power analysis suggested. After receiving their consent to participate in this study, 256 participants were recruited and completed the online survey. Of these 256 participants, 26 were recruited via Reddit (Brockett et al., 2016) and 230 via Prolific (Pranchi et al., 2017). Initially, we recruited participants from Reddit to assess the feasibility of our study design. After this was ensured and we successfully verified that the retrieved data was equal in quality to the data gained from Prolific, both sets were accumulated. Participants were compensated at a rate of \(\ell\)7.2 per hour, with individual compensation dependent on the time needed to complete the study (mean = 12.2 minutes, \(SD=8.76\) minutes). We excluded 63 data sets in total due to: failure to complete the questionnaire; failed attention checks (questions with a single true answer to measure participants' engagement); not meeting inclusion criteria; completing the questionnaire in unrealistic times based on _a priori_ testing; and replying with the same option in over 95% of instances. Eventually, data from a total of 193 participants were included in the analysis, thus satisfying the estimate of the power analysis.
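For transparency, the sketch below illustrates how such an a priori power analysis could be reproduced in Python; the effect-size value and test family are illustrative assumptions rather than the exact G*Power configuration used in the study.

```python
# Illustrative a priori power analysis; the chosen effect size and test family are
# assumptions for demonstration, not the exact G*Power settings used in the study.
from statsmodels.stats.power import TTestPower

solver = TTestPower()  # power solver for one-sample / paired t-tests
n_required = solver.solve_power(effect_size=0.22, alpha=0.05, power=0.80,
                                alternative="two-sided")
print(f"Required sample size: {n_required:.0f}")
```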
## 6. Results of Study 2
In this section, we present the results of the online survey. The results are split into three parts: (1) demographic data on our participants, (2) results on whether participants can recognise dark
\begin{table}
\begin{tabular}{l l} \hline \hline & **Mathur 2019 (Mohur et al., 2019)** \\ & Dark Pattern Characteristics \\ \hline Characteristic & Question \\ \hline Asymmetric & Does the user interface design impose unequal weights or burdens on the available choices presented to the user in the interface? \\ Covert & Is the effect of the user interface design choice hidden from the user? \\ Deceptive & Does the user interface design induce false beliefs either through affirmative misstatements, misleading statements, or omissions? \\ Hides Information & Does the user interface obscure or delay the presentation of necessary information to the user? \\ Restrictive & Does the user interface restrict the set of choices available to users? \\ \hline \hline \end{tabular}
\end{table}
Table 2. This table lists the introductory questions Mathur et al. (2019) (Mohur et al., 2019) gave for each dark pattern characteristic.
Figure 2. Summary of the occurrences of all 69 considered dark pattern types in four SNSs. Of the 69 types, 31 were recognised. Privacy Zuckering1 refers to Brignull’s (Brockett et al., 2016) description while Privacy Zuckering2 refers to Bösch et al.’s definition (Brockett et al., 2016).
patterns based on the definition of dark patterns by Mathur et al. (Mathur et al., 2018), and (3) whether they can differentiate between screenshots with and without dark patterns based on Mathur's dark pattern characteristics (see Table 2), as a recognition task including the 69 different individual dark pattern types would have exceeded the scope and purpose of this online survey. Instead, we relied on Mathur et al.'s high-level dark pattern characteristics. For each of the five dark pattern characteristics (_asymmetry_; _covert_; _deception_; _information hiding_; and _restriction_) participants rated, on a 5-point Likert scale ("Not at all" - "Extremely"), how much the characteristic was present in the screenshot. For each screenshot, this resulted in an average rating. Figure 5 demonstrates how the screenshots were used to generate these ratings. This procedure allows us to compare participants' ratings between the different screenshots. Using this approach, the maximum rating for a screenshot featuring all dark pattern characteristics corresponds to \([4,4,4,4,4]\) and thus an average rating of 4, while a minimum rating for a screenshot without dark patterns corresponds to \([0,0,0,0,0]\) and thus an average rating of 0. In total, the 193 survey respondents rated \(193\times 16=3088\) screenshots.
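As a simple illustration of this rating procedure, the following sketch (with hypothetical answers, not study data) shows how a screenshot's score is computed as the mean of its five characteristic ratings.

```python
# Sketch of the per-screenshot rating described above: five characteristic questions,
# each answered on a 0-4 Likert scale, averaged into one score per screenshot.
# The response below is hypothetical and only illustrates the computation.
CHARACTERISTICS = ["asymmetric", "covert", "deceptive", "hides_information", "restrictive"]

def screenshot_rating(answers: dict) -> float:
    """Mean 0-4 rating across the five dark pattern characteristics."""
    return sum(answers[c] for c in CHARACTERISTICS) / len(CHARACTERISTICS)

response = {"asymmetric": 2, "covert": 1, "deceptive": 1,
            "hides_information": 2, "restrictive": 0}
print(screenshot_rating(response))  # 1.2
```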
### Demographic Information
The mean age across individuals was \(\mu=27.91\) years (\(SD=9.53\)), with 155 identifying as female and 35 as male. The remainder (N=3) identified as either non-binary or with a third gender. When asked about their current country of residence, the participants replied as follows: Australia (4); Canada (35); France (1); Greece (1); Hong Kong - S.A.R. (1); Ireland (11); Japan (1); South Africa (2); Spain (1); United Kingdom of Great Britain and Northern Ireland (40); United States of America (96). In terms of how frequently participants used the internet, 189 self-reported using the internet on a daily basis, with the remainder (N=4) using it more than once per week. An inclusion criterion for participation was a previous experience with at least one of the four SNSs. Therefore, we asked participants about their usage of Facebook, Instagram, TikTok, and Twitter. Regarding Facebook, 138 participants reported actively using it, 20 do not use it, and 35 used to use it but not anymore. 167 participants currently use Instagram, while 15 do not use it, and 11 have used it but do not anymore. Looking at TikTok, 134 participants use it currently, 55 do not, and 4 have used it but do not anymore. Lastly, 112 participants actively use Twitter, 51 are not using it, whereas 30 used to but do not anymore.
### Generally Recognising Dark Patterns
For the eight screenshots that did feature dark patterns, when asked if respondents notice any malicious interface elements in the screenshot, 426 screenshots received a "yes" rating, 408 a "maybe", and 710 a "no" rating. In contrast, for the eight screenshots that did not contain dark patterns, 143 received a "yes" rating, 269 a "maybe", and 1132 a "no" rating. A Wilcoxon signed rank test with continuity correction shows significant differences between the two groups of screenshot ratings (\(V=89253\), \(p-value<0.0001\), \(R=0.37\)). Thus, we see that more people noticed malicious elements in screenshots that contained dark patterns.
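This comparison can be sketched as a paired test in Python; the data below is randomly generated for illustration and does not reproduce the reported statistics.

```python
# Sketch of the paired comparison reported above, assuming one aggregate "malicious
# elements noticed" count per participant and screenshot group; data is simulated.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
yes_on_dark = rng.integers(0, 9, size=193)      # counts over the 8 dark pattern screenshots
yes_on_control = rng.integers(0, 5, size=193)   # counts over the 8 control screenshots

statistic, p_value = wilcoxon(yes_on_dark, yes_on_control, correction=True)
print(statistic, p_value)
```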
### Differentiating Between Screenshots With and Without Dark Patterns
Our previous results showed that people generally see differences between the two types of screenshots. We can thus test whether people rate screenshots differently when they show dark patterns
Figure 3. Four example screenshots used in study 2, sampled from study 1. Figure 3(a) contains the dark patterns _Interface Interference (A), Confirmshaming (B), Address Book Leeching (C), Privacy Zuckering (D)_, and _Visual Interference (E)_. Figure 3(b) contains the dark patterns _Interface Interference (A)_ and _Visual Interference (B)_. Importantly, Figure 3(a) and Figure 3(b) were presented to participants without annotations. Neither Figure 3(c) nor Figure 3(d) contains any dark patterns. In total, sixteen screenshots were used in study 2 - eight containing dark patterns and eight that do not.
compared to screenshots with no dark patterns according to Mathur et al.'s (Mathur et al., 2017) five characteristics. We thus calculated the median total rating for screenshots that featured dark patterns and the same for screenshots that did not feature dark patterns. Across all screenshots which featured dark patterns, we find a median rating of \(1.2\) (\(mean=1.26\), \(SD=1.02\)) compared to a median rating of \(0.2\) (\(mean=0.69\), \(SD=0.81\)) for screenshots without dark patterns (see Figure 4). A Wilcoxon signed-rank test results in a significant difference between the two ratings (\(V=669900\), p-value \(<0.0001\), \(R=0.3\)). Given that non-dark pattern screenshots received a significantly lower median average rating than dark pattern screenshots, we conclude that people recognised a difference between screenshots containing dark patterns and those that did not, based on questions adopting the five characteristics. We further observe a difference in participants' perceptions of the two types of screenshots. While the median rating of screenshots without dark patterns is \(0.2\), very close to \(0\) ("Not at all"), the median rating of screenshots with dark patterns is \(1.2\) ("A little bit"), relatively low considering a maximum rating of \(4\) ("Extremely"). This implies that while participants distinguish screenshots with and without dark patterns with a significant difference, based on the five characteristics, their rating is overall rather low.
#### 6.3.1. Per Characteristic Rating
Based on participants' different ratings for dark pattern versus non-dark pattern screenshots, we gain a more detailed view of the applicability of the individual characteristics. We consider the median scores here because the data is not normally distributed. Overall, the median data indicates that across screenshots of the same kind, each characteristic contributed to the assessment, with a rating of \(1\) for screenshots that contain dark patterns and \(0\) for those not featuring dark patterns.
To further validate the five characteristics, we investigated their relationship to the malice rating from section 6.2. We performed a multiple linear regression to see how well the individual characteristics predict the malice rating. The result shows an F-statistic p-value of \(<0.0001\), suggesting that at least one of the five characteristics is significantly related to the malice score. Considering each t-statistic's p-value, further analysis revealed that the characteristics _asymmetric_ (p \(<0.001\)) and _restrictive_ (p = 0.004) show a significant association with the malice score. The remaining characteristics _covert_ (p = 0.053), _deceptive_ (p = 0.081), and _hides information_ (p = 0.074) do not yield such an association, however. Thus, changes in those three characteristics do not significantly affect the malice score in our model.
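A sketch of such a regression, with simulated data standing in for the survey responses, could look as follows:

```python
# Sketch of the multiple linear regression described above: the overall malice rating is
# regressed on the five characteristic ratings. The data here is simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({c: rng.integers(0, 5, n) for c in
                   ["asymmetric", "covert", "deceptive", "hides_information", "restrictive"]})
# Stand-in outcome; in the study, the malice rating came from participants' answers.
df["malice"] = df.mean(axis=1) + rng.normal(0, 0.3, n)

model = smf.ols("malice ~ asymmetric + covert + deceptive + hides_information + restrictive",
                data=df).fit()
print(model.summary())  # reports the F-statistic p-value and per-predictor t statistics
```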
#### 6.3.2. Per Screenshot Rating
Considering the screenshots independently, we gain further insights into the differences between average scores. This allows us to notice the effectiveness and sensitivity with which this approach measures the malice in a single screenshot. Across the eight screenshots containing dark patterns, seven screenshots have median ratings \(\geq\)1, while the median rating for one screenshot is \(0.4\) (see Table 4, Tw1). Looking at the non-dark pattern screenshots, six were rated with a median \(<\)1, while two screenshots have a median rating of \(1\) (see Table 4, FA and TiB).
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline \multicolumn{10}{c}{**Comparison Of Screenshots**} \\ \hline \multicolumn{10}{c}{Dark Pattern Screenshots} \\ \hline & F1 & F2 & I1 & I2 & Ti1 & Ti2 & Tw1 & Tw2 \\ \hline mean & 1.40 & 1.42 & 1.45 & 1.21 & 1.76 & 1.14 & **0.60** & 1.12 \\ median & 1.40 & 1.40 & 1.40 & 1.20 & 1.80 & 1.00 & **0.40** & 1.00 \\ SD & 1.08 & 0.94 & 1.08 & 0.99 & 1.06 & 0.99 & **0.73** & 0.89 \\ \hline \multicolumn{10}{c}{Non-Dark Pattern Screenshots} \\ \hline & FA & FB & IA & IB & TiA & TiB & TwA & TwB \\ \hline mean & **1.06** & 0.66 & 0.45 & 0.54 & 0.69 & **1.10** & 0.39 & 0.56 \\ median & **1.00** & 0.20 & 0.00 & 0.20 & 0.40 & **1.00** & 0.00 & 0.20 \\ SD & **0.99** & 0.92 & 0.71 & 0.73 & 0.81 & **0.99** & 0.65 & 0.75 \\ \hline \end{tabular}
\end{table}
Table 4. Overview of the mean, median, and standard deviation of participants’ ratings per dark pattern and non-dark pattern screenshot. Each of the four SNSs was represented with two screenshots containing dark patterns and two that did not. The letters in the screenshots’ labels refer to a particular SNS: F = Facebook; I = Instagram; Ti = TikTok; Tw = Twitter.
Figure 4. This box plot visualises how participants, who were provided with a definition of dark patterns, rated the screenshots after being asked if they noticed any malicious designs. The figure shows a significant difference between participants’ ratings of screenshots containing dark patterns versus those that do not.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline \multicolumn{5}{c}{**Comparison of Five Characteristics**} \\ \hline & \multicolumn{4}{c}{Dark Pattern Screenshots} \\ \hline & Asym- & Covert & Restric- & Decep- & Hides \\ &metry & & tive & tive & Info. \\ \hline mean & 1.42 & 1.21 & 1.40 & 1.02 & 1.27 \\ median & **1.00** & **1.00** & **1.00** & **1.00** & **1.00** \\ SD & 1.26 & 1.20 & 1.18 & 1.18 & 1.26 \\ \hline \multicolumn{5}{c}{Non-Dark Pattern Screenshots} \\ \hline mean & 0.71 & 0.80 & 0.84 & 0.60 & 0.80 \\ median & **0.00** & **0.00** & **0.00** & **0.00** & **0.00** \\ SD & 1.03 & 1.08 & 1.12 & 0.99 & 1.11 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Overview of the mean, median, and standard deviation of participants’ ratings of dark pattern and non-dark pattern screenshots according to Mathur et al.’s (Mathur et al., 2017) five characteristics: _asymmetric, covert, restrictive, deceptive_, and _information hiding_.
## 7. Discussion
This work presents insights from two studies, widening our understanding of how dark patterns manifest in SNSs and exploring a novel approach to evaluate the malice of interfaces. As online regulations have been shown to lack protection of users (Brignell et al., 2017), we were interested in the effectiveness of current regulations that aim to shield users from dark patterns. Based on a comprehensive taxonomy, we let experienced HCI researchers apply dark patterns, by means of their descriptions, to four popular SNSs (Facebook, Instagram, TikTok, and Twitter). Although a range of dark patterns has been recognised, the results of the first study revealed certain difficulties that hindered the process and thus highlight the necessity for more efficient approaches to recognising dark patterns. Exploring an alternative approach to evaluate the malice of interfaces, we defined five questions based on Mathur et al.'s (Mathur et al., 2017) dark pattern characteristics. Letting regular users rate screenshots sampled from recordings of the first study, we found in this approach a potential measure that can aid regulatory strategies. In this section, we discuss the applicability of dark pattern research as a tool to evaluate interfaces in relation to regulation.
### A Taxonomy As Evaluation Tool
We acknowledge that the applied taxonomy, including entailed dark patterns from eight works, was not designed as a tool for the assessment of dark patterns and covers different scopes regarding their level of abstraction. While research on dark patterns moves forward, expanding our knowledge of the types of dark patterns that exist, we believe that it is important to reflect on the current status quo and consider the multitude of findings in new contexts. Study 1, therefore, tests the utility of dark patterns to identify their instances in SNSs. With the successful recognition of a range of these dark patterns in SNSs, the results of our first study imply that the chosen approach is suitable for identifying dark patterns in domains that may lie outside their original scope, offering an answer to our first research question. Tainting these results, however, we noticed certain issues that posed difficulties to the reviewers when executing their tasks.
Overall, 31 out of 69 considered dark patterns were recognised, leaving the remaining 38 not applicable in the context of SNSs. Especially game-related dark patterns (Gray et al., 2017) and those inspired by proxemic theory (Gray et al., 2018) were rarely or never noticed. In contrast, dark patterns by Gray et al. (Gray et al., 2018) were identified more frequently. This implies that expert reviewers found it easier to recognise dark patterns that were described more abstractly compared to domain-specific ones, suggesting similar effectiveness in identifying dark patterns in regulatory contexts. A particular difficulty in this study emerged from dark patterns that shared the same names. Brignull's (Brignull, 2017) _Confirmshaming_ dark pattern, for instance, was carried over by Mathur et al. (Mathur et al., 2017), who remained with its original definition, making it confusing as to which version should be applied when a related dark pattern is recognised. Other candidates - _Privacy Zuckering_ by Brignull (Brignull, 2017) and Bösch et al. (Bösch et al., 2017) and _Bait and Switch_ by Brignull (Brignull, 2017) and Greenberg et al. (Greenberg et al., 2018) - were given distinct descriptions resulting in different applicability in SNSs. Contrary to this difficulty, the results of our co-occurrence tests show that dark patterns with different names apply to the same interfaces. We see two possible explanations for this: (1) Provided descriptions of two dark patterns are too close, clouding distinct applications, at least in the context of SNSs. A high co-occurrence between _Interface Interference_ (Gray et al., 2018) and _Visual Interference_ (Mathur et al., 2017) can be explained this way. Alternatively, (2) two different dark patterns complement each other, creating particularly problematic situations. Here, _Privacy Zuckering_ and _Bad Defaults_ do not describe the same interface problems, but _Privacy Zuckering_ profits from the _Bad Defaults_ dark pattern as the latter will often result in users sharing more data unknowingly.
### Assessing the Malice of Interfaces
The results of study 1 indicate that abstract and distinct criteria are most efficient for evaluating the presence of dark patterns in interfaces. Study 2, therefore, explores an alternative approach by relying on Mathur et al.'s (Mathur et al., 2017) five high-level characteristics to assess the malice of interfaces. Based on their framework, we developed five questions that we used to study regular users' ability to recognise dark patterns based on screenshots of the four SNSs. Answering our second research question, the results of this second study show that users were generally able to distinguish between screenshots featuring dark patterns and those that did not. However, ratings for the dark pattern screenshots indicate some difficulties as scores were considerably low (average median = 1.2), given that the maximum score a screenshot could receive is 4. Yet, participants' ability to differentiate screenshots based on these five characteristics suggests the promising effectiveness of this approach. Past work has found difficulties among participants in avoiding dark patterns (Gray et al., 2018; Mathur et al., 2017). While our data suggest similar difficulties, our second study's results further support suggestions by Bongard-Blanchy et al. (Bongard-Blanchy et al., 2018), who have shown that informing users about dark patterns helps to identify them.
This is further supported by the median ratings of each evaluated characteristic of the sixteen screenshots. We notice that across the eight dark pattern screenshots, each rating is 1 ("A little bit"), whereas the median rating for non-dark pattern screenshots is 0 ("Not at all"), as shown in Table 3. This consistency across participants implies that all characteristics contribute to the assessment of dark patterns in screenshots. Considering individual median ratings per screenshot (see Table 4), we see this consistency almost entirely confirmed. With regards to the dark pattern screenshots, participants were able to correctly identify malicious interfaces in seven out of eight instances (87.5%). In non-dark pattern screenshots, participants accurately determined no presence of dark patterns six out of eight times (75%). As neither the taxonomy nor Mathur et al.'s (Mathur et al., 2017) characteristics were designed to identify or recognise dark patterns in SNSs, this attempt opens a possible pathway for future directions of dark pattern research. Relying on more abstract characteristics offers a promising approach to evaluating new interfaces. Figure 5 visually demonstrates this approach. If an interface is suspected of containing any number of dark patterns, it is evaluated using a 5-point Likert-scale ("Not at all" - "Extremely") according to the five questions adopting Mathur et al.'s (Mathur et al., 2017) characteristics. The maliciousness of the interface can then be determined by considering each characteristic's rating based on their individual values or as an average calculated from all five. We gain further support for
this model through the multiple linear regression, which shows a highly significant relationship between the questions and the malice score. Individually, two characteristics - _asymmetry_ and _restrictive_ - maintain this highly significant association while three do not, leaving room for future improvement. This study describes an experimental setup aimed at better assessing the malice of interfaces. The overall statistical significance both of users' ability to differentiate between malicious and harmless designs and of our multiple linear regression affirms the utility of such characteristics and of our model. This approach allows further insights into the types of dark patterns present in an interface by considering which characteristics they subscribe to. As participants of the second study only had to meet the criterion of being regular users of SNSs, we believe that more experienced evaluators would be able to evaluate interfaces more sensitively. Although this work utilises a total of 69 types of dark patterns, we acknowledge that it leaves gaps for future work to consider SNS-specific types of dark patterns. Meanwhile, recent efforts have extended our knowledge of dark patterns in SNSs (Ross et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019), which leaves room for future updates of our research. However, while these prior efforts describe dark patterns that occur in SNSs based on qualitative approaches, to our knowledge, this research is among the first to quantitatively assess dark patterns in SNSs while considering both experts' and users' ability to recognise them in this environment. Moreover, we extend the current discourse with a possible measure to assess the malice of interfaces, regardless of their origin, without requiring a complete corpus. Instead, relying on wider characteristics enables users to assess this malice based on five simple yet extendable, high-level dimensions.
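To make the scoring procedure concrete, the following minimal sketch (ours, for illustration only; it is not part of the study materials) computes the malice score of a single interface from the five characteristic ratings, coded from 0 ("Not at all") to 4 ("Extremely"); the characteristic labels are paraphrased from Mathur et al.'s framework.

```python
# Characteristic labels paraphrased from Mathur et al.'s high-level framework.
CHARACTERISTICS = ["asymmetric", "covert", "deceptive", "hides information", "restrictive"]

def malice_score(ratings):
    """Average the five characteristic ratings of one interface/screenshot
    (0 = "Not at all" ... 4 = "Extremely") into a single malice score."""
    assert len(ratings) == len(CHARACTERISTICS)
    return sum(ratings) / len(ratings)

# A screenshot rated "A little bit" (1) on every characteristic scores 1.0,
# well above the 0.0 expected for a harmless interface.
print(malice_score([1, 1, 1, 1, 1]))
```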
### Paving The Way For Regulations
The variety of dark pattern types shows how far-reaching mischievous strategies in online domains can be. Still, they all have one thing in common: they harm users. Regulators and legislation already have powerful tools to ensure the protection of end-users. However, not all regulations are equally effective. To support this, findings from HCI research on dark patterns can aid existing approaches to protect people's privacy against problematic designs. The presented work has two main implications for legislative efforts regarding dark patterns. The first addresses the problem that the law is prone to lag behind the evolution of dark patterns, suggesting that alternative approaches are needed to protect users successfully. The regulation of dark patterns must, on the one hand, be concrete enough to address manipulative mechanisms and, on the other hand, abstract enough to capture future developments. Our findings show that research in HCI constantly explores new dark patterns, resulting in diverse taxonomies, as depicted in Figure 1. Nevertheless, we see that recognising dark pattern characteristics on a meta-level is convincing and, referring to Mathur et al.'s high-level characteristics (Mathur et al., 2018), might be a promising approach to achieving a shared conceptualisation. This suggests that generalisable definitions and characterisations are better suited and more future-proof for assessing dark patterns in various domains. We argue that findings from HCI can support legislative efforts by providing dark pattern characteristics based on empirical research and by offering a sustainable vocabulary that helps lawmakers get ahead of developments in unethical design. Such characteristics could be a basis for a legal definition and a general ban on dark patterns. The second implication deals with recognising dark patterns in practice. Tools from HCI have compelling potential for supporting courts and authorities since they could objectively measure the manipulative effect of a design (e.g. Figure 5). Offering authorities a tool to evaluate the malice of interfaces easily, the proposed score determines, based on empirical evidence, the degree to which a specific design is either harmless or contains malicious features. Here, the goal lies in identifying a score threshold that most accurately distinguishes interfaces with dark patterns from those without. Our results show that even regular users are able to correctly differentiate between malicious and harmless interfaces. Professionals and trained people
Figure 5. This figure demonstrates the approach to assessing malice in interfaces by applying questions based on Mathur et al.’s (Mathur et al., 2018) dark pattern characteristics. First, an interface is selected which is suspected of containing any number of dark patterns. Using the five questions described in Table 2, the interface can then be evaluated on a Likert scale from “Not at all” to “Extremely”. In this example, we demonstrate this based on a five-item scale. The result is an independent rating for each characteristic, and these can be averaged into a single score.
would likely perform similar tasks with even better accuracy. Consequently, the findings and tools from HCI research can become a considerable and valuable instrument in the decision-making processes of authorities. Ultimately, HCI research can pave the way for regulators to act on observed exploitation in interfaces that can, but are not limited to, target users' personal data or manipulate their decision space, provoking potentially harmful actions.
## 8. Limitations & Future Work
Both studies of this work yield certain limitations. Firstly, study 1 took place during the COVID-19 pandemic, which meant that the experiment was conducted without supervision. Although recordings do not suggest misunderstandings across reviewers, a supervisor present during the study could have offered additional assistance. While we aimed to consider a range of SNSs, the number of platforms available today limited us to four applications with similar functionalities. Although the chosen SNSs are popular platforms, we neglected important services like YouTube or Twitch, featuring video-streaming platforms, but also messenger services like WhatsApp or Telegram, which each entail large user bases. Future work could consider alternative SNSs that were not in the scope of this work. As Mathur et al.'s (Mathur et al., 2019) sixth _Disparate Treatment_ characteristic was not applied at all during the reviews, meaning that none of Zagal et al.'s (Zagal et al., 2019) dark patterns were recognised in SNSs, it would further be interesting to consider SNSs that offer paying users different experiences (e.g. LinkedIn, Twitch, or YouTube). Also, future work could include recording instances of users sharing their data inside and outside of SNSs, as we did not include such a task in our cognitive walkthroughs. Study 1 was further limited by the selection of dark patterns included in our taxonomy. Because we decided only to include dark patterns that resulted from empirical research, we excluded those that are part of guidelines and regulations. Furthermore, Gunawan et al. (Gunawan et al., 2019) propose twelve additional dark patterns that we did not include, as our experiment was conducted at the time of their publication. Future work could include further types of dark patterns to gain an even deeper understanding of dark patterns in SNSs. Moreover, our methodology proved fruitful, yielding important insights into dark patterns in SNSs. Future work could adopt this approach of utilising the existing corpus of dark pattern knowledge when investigating dark patterns in other domains.
In study 2, we tested our evaluation approach based on screenshots to assess the malice of interfaces. While the results indicate a certain accuracy in participants' differentiating between screenshots containing dark patterns and those that do not, they do not allow us to make any statements about how well participants identified specific dark patterns. Furthermore, the screenshots are limited to showing dark patterns within a single stage on a static image. While we made sure to choose dark patterns that are recognisable on screenshots, this limitation excludes possible dark patterns that rather operate on a procedural level during an interaction. To reach participants, we used the online research platform _Prolific_ (Prolific, 2019) to generate a convenience sample, restricted only to users who have prior experience with SNSs and are fluent in the English language, as the screenshots were in English. However, we did not aim for a representative sample. Surprisingly, we noticed that 80.3% of the participants identified as female, skewing the demographic. Although we did not notice any differences between individual participants' ratings, we acknowledge that the data set is biased towards females. Moreover, we decided to rely on regular users as participants for this study. As our findings suggest a novel approach to aid the regulation of dark patterns, it would be interesting to see how related professionals such as regulators and legal scholars recognise dark patterns in a similar study. This could further be enhanced by additional characteristics that better incorporate malicious interfaces currently not covered. Also, Gunawan et al. (Gunawan et al., 2019) suggest that dark patterns may exist in SNSs to a different extent in their desktop modality. While we identified SNSs as a host for existing dark patterns, this work considers dark patterns that are not specific to this domain. As many described dark patterns have their origin in online shopping websites, future work could investigate social media platforms to describe dark patterns unique to them. This further includes the characteristics from Mathur et al. (Mathur et al., 2019), which we used in our survey. Although the results of the multiple linear regression indicate a highly significant relationship between the questions and the malice score, only two out of five characteristics also yielded significant associations. This invites future research to advance our model and develop a suitable questionnaire for improved assessment.
## 9. Conclusion
In this paper, we examined four popular SNS platforms (Facebook, Instagram, TikTok, and Twitter) for dark patterns, advancing research in this context. Based on a cognitive walkthrough with six HCI experts, we learned which dark patterns occur in SNSs by considering a taxonomy based on prior findings in this field. The results of this study show that while this approach offers detailed insights, it lacks efficiency and poses difficulties for reviewers. Considering these results, we designed a novel approach to assess the malice of interfaces based on high-level characteristics. In a second study, we tested this alternative, demonstrating a tool to recognise dark patterns in screenshots. Taking a legal perspective on current regulations for dark patterns, we discuss the findings of our second study, shining a light on how HCI research can aid the protection of SNS users.
###### Acknowledgements.
The research of this work was partially supported by the Klaus Tschira Stiftung gGmbH.
|
2304.04701
|
Explicit computation of Galois representations occurring in families of
curves
|
We extend our method to compute division polynomials of Jacobians of curves
over Q to curves over Q(t), in view of computing mod ell Galois representations
occurring in the \'etale cohomology of surfaces over Q. Although the division
polynomials which we obtain are unfortunately too complicated to achieve this
last goal, we still obtain explicit families of Galois representations over
P^1_Q, and we study their degeneration at places of bad reduction of the
corresponding curve.
|
Nicolas Mascot
|
2023-04-10T16:41:36Z
|
http://arxiv.org/abs/2304.04701v1
|
# Explicit computation of Galois representations occurring in families of curves
###### Abstract
We extend our method to compute division polynomials of Jacobians of curves over \(\mathbb{Q}\) to curves over \(\mathbb{Q}(t)\), in view of computing mod \(\ell\) Galois representations occurring in the etale cohomology of surfaces over \(\mathbb{Q}\). Although the division polynomials which we obtain are unfortunately too complicated to achieve this last goal, we still obtain explicit families of Galois representations over \(\mathbb{P}^{1}_{\mathbb{Q}}\), and we study their degeneration at places of bad reduction of the corresponding curve.
**Acknowledgements**
The author thanks Jean Gillibert for setting him on track to understanding the material presented in Section 5.2. Experiments presented in this paper were carried out using the [PlaFRIM] experimental testbed, supported by Inria, CNRS (LABRI and IMB), Universite de Bordeaux, Bordeaux INP, and Conseil Regional d'Aquitaine (see [https://www.plafrim.fr/](https://www.plafrim.fr/)), and on the Warwick mathematics institute computer cluster provided by the EPSRC Programme Grant EP/K034383/1 "LMF: L-Functions and Modular Forms". The computer algebra packages used were [Pari/GP] and [Magma].
**Keywords:** Galois representation, division polynomial, etale cohomology, Jacobian, surface, family of curves, degeneration, ramification, inverse Galois problem.
## 1 Introduction
Suppose we are given a surface \(S\) defined over \(\mathbb{Q}\) as well as a prime \(\ell\in\mathbb{N}\) such that the etale cohomology space \(\mathrm{H}^{2}_{\mathrm{\acute{e}t}}(S_{\overline{\mathbb{Q}}},\mathbb{Z}/\ell \mathbb{Z})\) contains a Galois-submodule which affords a mod \(\ell\) Galois representation \(\rho\) that we wish to compute explicitly. By this, we mean computing a polynomial which encodes \(\rho\) in the following sense:
**Definition 1.1**.: Let \(K\) be a number field, and let \(\rho:\mathrm{Gal}(\overline{K}/K)\longrightarrow\mathrm{GL}(V_{\rho})\) be a mod \(\ell\) Galois representation, where \(V_{\rho}\) is an \(\mathbb{F}_{\ell}\)-vector space of finite dimension. We say that a separable polynomial \(F(x)\in K[x]\)_encodes_\(\rho\) if we are given an explicit bijection between \(V_{\rho}\setminus\{0\}\) and the roots of \(F(x)\) in some extension \(\Omega\) of \(K\) over which \(F(x)\) splits completely, in such a way that the Galois action on the roots of \(F(x)\) matches that on \(V_{\rho}\). In particular, the splitting field of \(F(x)\) then agrees with the number field \(\overline{K}^{\mathrm{Ker}\,\rho}\) cut out by \(\rho\).
In [10, 2], we sketched a method to compute \(\rho\subset\mathrm{H}^{2}_{\mathrm{\acute{e}t}}(S_{\overline{\mathbb{Q}}},\mathbb{Z}/\ell\mathbb{Z})\) based on _devissage_ [SGA4\(\frac{1}{2}\), 3.4], which may be informally summarised as follows. Pick a proper dominant morphism \(\pi:S\longrightarrow B\) from \(S\) to a curve \(B\) over \(\mathbb{Q}\), and write \(S_{b}\) for the fibre of \(\pi\) at a point \(b\in B\). Roughly speaking, the Leray spectral sequence [12, 12] attached to \(\pi\) then shows that \(\mathrm{H}^{2}_{\mathrm{\acute{e}t}}(S_{\overline{\mathbb{Q}}},\mathbb{Z}/\ell\mathbb{Z})\) is made up of \(\mathrm{H}^{p}_{\mathrm{\acute{e}t}}\left(B_{\overline{\mathbb{Q}}},\mathrm{H}^{q}_{\mathrm{\acute{e}t}}(S_{b},\mathbb{Z}/\ell\mathbb{Z})\right)\) for \(p+q=2\). Since the terms for \(p=0,q=2\) and for \(p=2,q=0\) consist of uninteresting bits, we can expect that \(\rho\) occurs in \(\mathrm{H}^{1}_{\mathrm{\acute{e}t}}\left(B_{\overline{\mathbb{Q}}},\mathrm{H}^{1}_{\mathrm{\acute{e}t}}(S_{b},\mathbb{Z}/\ell\mathbb{Z})\right)\). As \(B\) and the \(S_{b}\) are curves, and as the \(\mathrm{H}^{1}_{\mathrm{\acute{e}t}}\) of a curve is essentially the torsion of its Jacobian (see the first part of Theorem 1.2 below for a precise statement), it is thus reasonable to hope to compute \(\rho\subset\mathrm{H}^{2}_{\mathrm{\acute{e}t}}(S_{\overline{\mathbb{Q}}},\mathbb{Z}/\ell\mathbb{Z})\) by:
1. Computing the family of Galois representations parametrised by \(b\in B\) afforded by the \(\ell\)-torsion of the Jacobian of the fibre \(S_{b}\),
2. Gluing these data into an explicit model of a cover \(C\longrightarrow B\) of curves,
3. Catching \(\rho\) in the \(\ell\)-torsion of the Jacobian of the curve \(C\).
Strategy 1.1: Computing in the \(\mathrm{H}^{2}_{\mathrm{\acute{e}t}}\) of surfaces by looking at the torsion of Jacobians of curves.
The situation is illustrated on Figure 1.1.
More precisely, we have the following result:
**Theorem 1.2**.: _Given an \(\mathbb{F}_{\ell}\)-Galois-module \(M\) and an integer \(n\in\mathbb{Z}\), write \(M(n)\) for the twist of \(M\) by the n-th power of the mod \(\ell\) cyclotomic character._
1. _Let_ \(X\) _be a nonsingular, geometrically irreducible curve over a number field_ \(K\)_, and let_ \(J\) _be the Jacobian of the completion of_ \(X\)_. If_ \(X\) _is complete, then_ \(\mathrm{H}^{1}_{\text{\'{e}t}}(X_{\overline{K}},\mathbb{Z}/\ell\mathbb{Z}) \simeq J[\ell](-1)\) _as Galois modules. If_ \(X\) _is not complete, then_ \(\mathrm{H}^{1}_{\text{\'{e}t}}(X_{\overline{K}},\mathbb{Z}/\ell\mathbb{Z})\) _is an extension of_ \(J[\ell](-1)\) _by copies of_ \((\mathbb{Z}/\ell\mathbb{Z})(-1)\)_._
2. _Suppose_ \(\rho\) _is a mod_ \(\ell\) _Galois representation contained in_ \(\mathrm{H}^{2}_{\text{\'{e}t}}(S_{\overline{\mathbb{Q}}},\mathbb{Z}/\ell \mathbb{Z})\) _(up to semi-simplification). Let_ \(B^{\prime}=B\setminus Z\)_, where_ \(Z\subset B\) _is the locus of bad fibres of_ \(\pi\)_. Assume that_ \(\rho\) _has no Jordan-Holder components of the form_ \((\mathbb{Z}/\ell\mathbb{Z})(n)\) _for any_ \(n\in\mathbb{Z}\)_, and no component in common with_ \(\eta(-1)\)_, where_ \(\eta\) _is the mod_ \(\ell\) _permutation representation induced by the Galois action on the geometrically irreducible components of the bad fibres of_ \(\pi\)_. Then_ \(\rho\) _is also contained (up to semi-simplification) in_ \(\mathrm{H}^{1}_{\text{\'{e}t}}(C_{\overline{\mathbb{Q}}},\mathbb{Z}/\ell \mathbb{Z})(-1)\)_, where_ \(C\) _is the completion of the cover of_ \(B^{\prime}\) _formed by the nonzero_ \(\ell\)_-torsion points of the Jacobian of the_ \(S_{b}\)_._
Part 1 is standard (cf. [20, 14.2,14.4,16.2]), and part 2 is [26, Thm 7]. In particular, if \(\rho\) satisfies the assumptions of part 2, and if \(C\) is geometrically irreducible, then \(\rho\) is found
Figure 1.1: The surface \(S\) with some of the fibres \(S_{b}\) of \(\pi\). The rectangles above them represent the Jacobian of these fibres, inside which the red dots represent \(\ell\)-torsion points. These points define a curve \(C\) whose Jacobian should contain \(\rho\) in its \(\ell\)-torsion.
(up to twist) in the \(\ell\)-torsion of the Jacobian of \(C\). More generally, if \(C\) is not geometrically irreducible, consider a Galois number field \(K\subset\overline{\mathbb{Q}}\) such that the geometrically irreducible components \(C_{i}\) of \(C\) are defined over \(K\); then \(\rho\) will be found in the induction to \(\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\) of the representation of \(\operatorname{Gal}(\overline{\mathbb{Q}}/K)\) afforded by the \(\ell\)-torsion of the Jacobians of the \(C_{i}\).
Let us now explain in more detail how to turn these observations into an algorithm to compute \(\rho\) explicitly, assuming for simplicity that \(C\) is geometrically irreducible. In [14], we described an algorithm which, given a proper, nonsingular, and geometrically irreducible curve \(C\) over a number field\({}^{1}\) \(K\) and a prime \(\ell\in\mathbb{N}\), computes what may be called an \(\ell\)-division polynomial \(R_{C,\ell}(x)\in K[x]\) of \(C\), that is to say a polynomial which encodes the representation afforded by the \(\ell\)-torsion of the Jacobian \(J\) of \(C\) in the sense of Definition 1.1. This algorithm is also capable of computing the subrepresentation afforded by a Galois-submodule \(V\) of \(J[\ell]\), provided that there exists a prime \(\mathfrak{p}\nmid\ell\) of \(K\) where \(C\) has good reduction and such that \(V\subset J[\ell]\) may be characterised by the characteristic polynomial of \(\operatorname{Frob}_{\mathfrak{p}}\) acting on \(V\).
Footnote 1: At present, this algorithm is only implemented for \(K=\mathbb{Q}\), but its generalisation to number fields is straightforward.
Suppose for the sake of the exposition that we are given an equation \(f(x,y,t)\in\mathbb{Q}[x,y,t]\) such that our surface \(S\) is the desingularisation of the projective closure of the patch defined by \(f(x,y,t)=0\). It is then natural to choose \(B=\mathbb{P}^{1}_{\mathbb{Q}}\) and \(\pi\) the projection \((x,y,t)\mapsto t\), thereby viewing the surface \(S\) as a curve \(\mathcal{S}\) over \(\mathbb{Q}(t)\). Suppose furthermore that we generalised our division polynomial algorithm [14] to curves over \(\mathbb{Q}(t)\). We would then be able to compute a division polynomial \(R_{\mathcal{S},\ell}(x,t)\in\mathbb{Q}(t)[x]\) for \(\mathcal{S}\), whose specialisation \(R_{\mathcal{S},\ell}(x,t_{0})\in\mathbb{Q}(t_{0})[x]\) at any good fibre \(t=t_{0}\in B\) of \(\pi\) would be an \(\ell\)-division polynomial of the fibre \(S_{t_{0}}\). Then the equation \(R_{\mathcal{S},\ell}(x,t)=0\) would define the curve \(C\) such that \(\rho\) occurs (up to twist by the cyclotomic character) in the \(\ell\)-torsion of the Jacobian of \(C\), so that we may compute \(\rho\) by applying the original version of [14] to \(C\), isolating the twist of \(\rho\) in the Jacobian \(J_{C}\) of \(C\) from the knowledge of the characteristic polynomial of \(\rho(\operatorname{Frob}_{\mathfrak{p}})\), where \(\mathfrak{p}\) is as described above (cf. [14] for a successfully worked out example of this approach).
In particular, we would not even need to compute all of the \(\ell^{2g_{C}}\) points of \(J_{C}[\ell]\), which would be impractical even for \(\ell=2\) as soon as the genus \(g_{C}\) of \(C\) is moderately large, but only the \(\ell^{\deg\rho}\) points of the subspace affording the twist of \(\rho\) contained in \(J_{C}[\ell]\). On the other hand, this method forces us to compute all the \(\ell\)-torsion points of the Jacobian of \(\mathcal{S}\) in order to get an equation for \(C\), and this therefore only applicable when the genus of \(\mathcal{S}\) is reasonably small.
The purpose of this article is to explain how [14] can indeed be generalised to curves over \(\mathbb{Q}(t)\), thereby making it theoretically possible to compute explicitly mod \(\ell\) Galois representations which occur in the \(\operatorname{H}^{2}_{\text{\'{e}t}}\) of surfaces.
**Remark 1.3**.: Very general but unfortunately impractical algorithms to compute with etale cohomology are presented in [13] and [12]. In contrast, our goal is to obtain a practical method for the specific case of the \(\operatorname{H}^{2}_{\text{\'{e}t}}\) of surfaces.
We show how [14] can be generalised to curves over \(\mathbb{Q}(t)\) in Section 2. Since [14] requires the curve to be given as a Riemann-Roch space, in Section 3 we briefly recall how to perform various computations with plane algebraic curves, including the determination of Riemann-Roch spaces and the verification whether the curve is geometrically irreducible.
As an application, in Section 4 we compute division polynomials \(R_{\mathcal{S},\ell}(x,t)\) for three curves \(\mathcal{S}\) over \(\mathbb{Q}(t)\), of respective genera 1, 2, and 3. This makes it possible, in principle, to compute with the \(\operatorname{H}^{2}_{\text{\'{e}t}}\) of the corresponding surfaces over \(\mathbb{Q}\); but unfortunately, the equations which we obtain for the curves of genera 2 and 3 are too complicated for this to be practical. However, the data that we obtain is still worth our attention, since it encodes families of Galois representations
over \(B=\mathbb{P}^{1}_{\mathbb{Q}}\), and it is especially interesting to study how these families degenerate at bad fibres, which we do in Section 5; in particular, we strive to find a geometric explanation for the ramification of these degenerations.
## 2 Division polynomials over \(\mathbb{Q}(t)\)
### Sketch of the algorithm over \(\mathbb{Q}\)
Let still \(\ell\in\mathbb{N}\) be prime. The purpose of this section is to explain how our algorithm [10] to compute \(\ell\)-division polynomials of curves over \(\mathbb{Q}\) can be generalised to curves over \(\mathbb{Q}(t)\). In this view, let us first recall how this algorithm works with a curve \(C\) over \(\mathbb{Q}\):
1. Pick a prime \(p\neq\ell\) of good reduction of \(C\). Determine \(a\in\mathbb{N}\) such that the \(\ell\)-torsion of the Jacobian \(J\) of \(C\) is defined over \(\mathbb{F}_{q}\), where \(q=p^{a}\).
2. Generate points of \(J(\mathbb{F}_{q})[\ell]\) which span \(J[\ell]\) as an \(\mathbb{F}_{\ell}[\text{Frob}_{p}]\)-module.
3. Lift these points to \(J(\mathbb{Z}_{q}/p^{e})[\ell]\), where \(\mathbb{Z}_{q}\) is the ring of integers of the unramified extension of \(\mathbb{Q}_{p}\) with residue field \(\mathbb{F}_{q}\), and \(e\in\mathbb{N}\) is an accuracy parameter.
4. Construct an evaluation map \(\alpha\in\mathbb{Q}(J)\).
5. Expand \(\tilde{F}(x)=\underset{0\neq t\in J[\ell]}{\prod}\bigl{(}x-\alpha(t)\bigr{)} \in(\mathbb{Z}/p^{e}\mathbb{Z})[x]\), and identify it as an element \(F(x)\) of \(\mathbb{Q}[x]\).
Algorithm 2.1: Division polynomial of a curve over \(\mathbb{Q}\).
The idea is thus to pick an auxiliary prime \(p\), and to rely on the fact that \(J[\ell]\) is etale at \(p\) to construct \(p\)-adic approximations of points of \(J[\ell]\).
The polynomial \(F(x)\) is then an \(\ell\)-division polynomial of \(C\) in the sense of Definition 1.1. This supposes that \(\alpha\) is defined and injective on \(J[\ell]\); if this is not the case, we start over with another \(\alpha\). This also supposes that the accuracy parameter \(e\) is large enough to identify \(F(x)\) from its mod \(p^{e}\) approximation \(\tilde{F}(x)\). In particular, the correctness of this method is not rigorously guaranteed, although this could be done by confirming that the elements of \(J(\mathbb{Z}_{q}/p^{e})[\ell]\) are indeed \(p\)-adic approximations of \(\ell\)-torsion points defined over the stem fields of the irreducible factors of \(F(x)\). Besides, in most cases, one easily convinces oneself beyond reasonable doubt that the output \(F(x)\) is correct, e.g. by checking that it has the appropriate Galois group and ramification.
In order to compute in \(J\), this algorithm relies on Makdisi's algorithms [12, 13]. These algorithms were originally designed to work over a field, so in [10] we generalised them to work over a local ring such as \(\mathbb{Z}_{q}/p^{e}\). These algorithms also require the knowledge of an explicit basis of a Riemann-Roch space of \(C\) of high-enough degree so as to represent \(C\) internally (cf. the bottom of page 1421 in [10]), so we will explain in Section 3 below how such a basis may be computed from a (possibly singular) plane model of \(C\).
### Sketch of the algorithm over \(\mathbb{Q}(t)\)
By analogy with the embedding of \(\mathbb{Q}\) into its completion \(\mathbb{Q}_{p}\), it is natural to extend Algorithm 2.1 to curves over \(\mathbb{Q}(t)\) by embedding \(\mathbb{Q}(t)\) into the \(p\)-adic Laurent series field \(\mathbb{Q}_{p}((t))\). This leads to the following idea to compute an \(\ell\)-division polynomial of a curve \(\mathcal{C}\) over \(\mathbb{Q}(t)\):
1. If required, shift the parameter \(t\) so that \(\mathcal{C}\) has good reduction \(C_{0}\) at \(t=0\). Pick a prime \(p\neq\ell\) of good reduction of \(C_{0}\), and determine \(a\in\mathbb{N}\) such that the \(\ell\)-torsion of the Jacobian \(J_{0}\) of \(C_{0}\) is defined over \(\mathbb{F}_{q}\), where \(q=p^{a}\).
2. Generate points of \(J_{0}(\mathbb{F}_{q})[\ell]\) which span \(J_{0}[\ell]\) as an \(\mathbb{F}_{\ell}[\text{Frob}_{p}]\)-module.
3. Lift these points to \(\mathcal{J}(R)[\ell]\), where \(\mathcal{J}\) is the Jacobian of \(\mathcal{C}\) and \(R\) is a finite quotient of the formal power series ring \(\mathbb{Z}_{q}[[t]]\).
4. Construct an evaluation map \(\alpha\in\mathbb{Q}(t)(\mathcal{J})\).
5. Expand \(\tilde{F}(x)=\prod_{0\neq s\in\mathcal{J}[\ell]}\big{(}x-\alpha(s)\big{)}\in R[x]\), and identify it as an element \(F(x)\) of \(\mathbb{Q}(t)[x]\).
Algorithm 2.2: Division polynomial of a curve over \(\mathbb{Q}(t)\).
This assumes that we manage to extend Makdisi's algorithms to finite quotients of \(\mathbb{Z}_{q}[[t]]\). This is actually not an issue, because the extension which we designed in [16] works with any finite local ring \(R\) over which one can perform linear algebra in "good reduction cases" in the following sense:
**Definition 2.1**.: Let \(R=\mathcal{O}/\mathfrak{a}\) be a finite quotient of a local domain \(\mathcal{O}\). Let \(K\) be the fraction field of \(\mathcal{O}\), and let \(k\) be the residue field of \(\mathcal{O}\). We say that _we can perform linear algebra over \(R\) in cases of good reduction_ if, given the reduction mod \(\mathfrak{a}\) of a matrix \(A\) over \(\mathcal{O}\) such that the rank of \(A\) is the same over \(K\) and over \(k\), we can compute an approximation in \(\mathcal{O}/\mathfrak{a}\) of a \(K\)-basis of the kernel of \(A\).
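For concreteness, the following minimal sketch (in Python, for exposition only; our implementation is in [Pari/GP]) illustrates Definition 2.1 in the simplest case \(\mathcal{O}=\mathbb{Z}_{p}\) and \(\mathfrak{a}=p^{e}\mathcal{O}\): we perform Gaussian elimination over \(\mathbb{Z}/p^{e}\mathbb{Z}\), only ever choosing pivots among entries which are units mod \(p\), which is exactly what a case of good reduction allows.

```python
def kernel_good_reduction(A, p, e):
    """Kernel of A over Z/p^e in a 'good reduction' case (Definition 2.1 with
    O = Z_p, a = p^e O): every pivot is a unit mod p, and the free columns give
    an approximation mod p^e of a Q_p-basis of the kernel."""
    m = p ** e
    A = [[x % m for x in row] for row in A]
    nrows, ncols = len(A), len(A[0])
    pivots, r = {}, 0                              # pivot column -> pivot row
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if A[i][c] % p != 0), None)
        if piv is None:                            # no unit pivot: treat column as free
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], -1, m)                  # unit mod p, hence invertible mod p^e
        A[r] = [(x * inv) % m for x in A[r]]
        for i in range(nrows):
            if i != r and A[i][c]:
                f = A[i][c]
                A[i] = [(A[i][j] - f * A[r][j]) % m for j in range(ncols)]
        pivots[c] = r
        r += 1
    basis = []
    for c in range(ncols):                         # one kernel vector per free column
        if c in pivots:
            continue
        v = [0] * ncols
        v[c] = 1
        for pc, pr in pivots.items():
            v[pc] = (-A[pr][c]) % m
        basis.append(v)
    return basis

# The matrix [[1, 2, 3], [2, 4, 6]] has rank 1 over Q_5 as well as over F_5:
print(kernel_good_reduction([[1, 2, 3], [2, 4, 6]], 5, 3))   # [[123, 1, 0], [122, 0, 1]]
```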
Similarly, the construction [16, 2.2.3] of evaluation maps \(\alpha\) generalises to Jacobians of curves over \(\mathbb{Q}(t)\) without change.
Finally, we can identify the coefficients of \(\tilde{F}(x)\) as elements of \(\mathbb{Q}(t)\) by a combination of \(p\)-adic rational reconstruction (as we did in the original version of [16]) and of Pade approximants (see Remark 2.2 below for practical details).
### Lifting torsion points \((p,t)\)-adically
In order to turn these ideas into a proper algorithm, we still must explain what kind of finite quotients \(R\) of \(\mathbb{Z}_{q}[[t]]\) we will work with, and how to lift an \(\ell\)-torsion point from \(\mathbb{F}_{q}\) to \(R\).
A first natural choice for \(R\) would be \(R_{e}=\mathbb{Z}_{q}[[t]]/\mathfrak{m}^{e}\), where \(\mathfrak{m}=(p,t)\) is the maximal ideal of \(\mathbb{Z}_{q}[[t]]\) and \(e\in\mathbb{N}\) is an accuracy parameter as in Algorithm 2.1. This choice may be appealing at first, as it would give us the hope of being able to raise the \(p\)-adic and the \(t\)-adic accuracy of torsion points simultaneously; but unfortunately, we will see below that \(R_{e}\) having Krull dimension \(2\) actually results in an algorithmic obstacle to lifting torsion points. Furthermore, elements of \(R_{e}\) are of the form \(\sum_{j<e}\lambda_{j}t^{j}\) where \(\lambda_{j}\in\mathbb{Z}/p^{e-j}\mathbb{Z}\) is known with poor accuracy for
large \(j\); as a result, in \(\tilde{F}(x)\), the coefficients of high powers of \(t\) would be known with poor \(p\)-adic accuracy, which would force us to increase the value of \(e\) so as to identify them, so we would end up lugging around high powers of \(t\) throughout the calculation only to drop them at the final stage since they are \(p\)-adically too imprecise to be identified as rational numbers, and thus result in a major waste of time.
We have therefore decided to work with the quotients \(R=R_{e,h}=(\mathbb{Z}_{q}/p^{e}\mathbb{Z}_{q})[t]/(t^{h})\), where \(h\in\mathbb{N}\) is a second accuracy parameter. The introduction of this new parameter grants us the flexibility of setting the \(p\)-adic accuracy independently from the \(t\)-adic one, which turns out to be useful in practice. Furthermore, this makes it possible to generalise our algorithm to lift torsion points. In order to see why, recall how we proceeded over \(\mathbb{Q}\) in [10]:
Let \(\mathcal{O}=\mathbb{Z}_{q}\), \(\varpi=p\), \(\mathfrak{m}=\varpi\mathcal{O}\), \(K=\mathbb{Q}_{q}\), and let \(J\) be the Jacobian of a curve over \(K\) which has good reduction at \(\mathfrak{m}\). Given \(e\in\mathbb{N}\), a point \(x\in J(\mathcal{O}/\mathfrak{m}^{e})\) is represented in Makdisi's algorithms (as generalised in [10]) by a matrix \(W_{x}\) with entries in \(\mathcal{O}/\mathfrak{m}^{e}\); but conversely, most such matrices do not represent any point of \(J\). We thus began with an algorithm [10, Algorithm 9] which, given an integer \(e\in\mathbb{N}\) and a matrix \(W_{x}\) representing \(x\in J(\mathcal{O}/\mathfrak{m}^{e})\), computes a lift of \(W_{x}\) to \(\mathcal{O}/\mathfrak{m}^{2e}\) which represents a lift of \(x\) to \(J(\mathcal{O}/\mathfrak{m}^{2e})\).
Due to the tangent space of \(J\) at \(x\), this lift of \(x\) is not unique, and indeed this algorithm can return several matrices representing different random lifts of \(x\) if required. But this also means that even if \(x\) was \(\ell\)-torsion in \(J(\mathcal{O}/\mathfrak{m}^{e})\), none of these lifts to \(J(\mathcal{O}/\mathfrak{m}^{2e})\) are guaranteed (nor even likely) to be \(\ell\)-torsion.
In order to circumvent this problem, we showed how to construct an algebraic "coordinate chart" \(\kappa:U\hookrightarrow\mathcal{O}^{n}\), where \(n\) is a fixed integer not smaller than the genus \(g\) of the curve. This chart is defined on an \(\mathfrak{m}\)-adic neighbourhood \(U\) of the origin \(0\in J(\mathcal{O})\), and turns the mod \(\mathfrak{m}^{e}\) representation in Makdisi form of a point \(x\in U\) into a vector \(\kappa(x)\in(\mathcal{O}/\mathfrak{m}^{e})^{n}\) such that for all \(e^{\prime}\leqslant e\), \(\kappa(x)=0\bmod\mathfrak{m}^{e^{\prime}}\) if and only if \(x=0\) in \(J(\mathcal{O}/\mathfrak{m}^{e^{\prime}})\). As \(\mathcal{O}\) is furthermore principal with uniformiser \(\varpi=p\), we then designed a second algorithm [10, Algorithm 11], which computes the unique lift to \(J(\mathcal{O}/\mathfrak{m}^{2e})[\ell]\) of a point \(x\in J(\mathcal{O}/\mathfrak{m}^{e})[\ell]\) as follows:
1. Use algorithm [10, Algorithm 9] to generate \(g+1\) matrices \(W_{0},\cdots,W_{g}\) representing random lifts \(x_{0},\cdots,x_{g}\) of \(x\) to \(J(\mathcal{O}/\mathfrak{m}^{2e})\).
2. For each of these lifts, compute the vectors \(k_{i}=\frac{1}{\varpi^{e}}\kappa([\ell]x_{i})\in(\mathcal{O}/ \mathfrak{m}^{e})^{n}\).
3. Try to find scalars \(\lambda_{0},\cdots,\lambda_{g}\in\mathcal{O}/\mathfrak{m}^{2e}\) such that \(\sum_{i=0}^{g}\lambda_{i}k_{i}=0\bmod\mathfrak{m}^{e}\) and \(\sum_{i=0}^{g}\lambda_{i}=1\bmod\mathfrak{m}^{2e}\), and return the matrix \(\sum_{i=0}^{g}\lambda_{i}W_{i}\).
Algorithm 2.3: Lifting an \(\ell\)-torsion point in Makdisi form.
The idea is that with high probability, the lifts \(x_{i}\) form an affine coordinate frame of the tangent space of \(J\) at \(x\), which guarantees the existence and uniqueness of the \(\lambda_{i}\) (and otherwise, we start over with other random lifts \(x_{i}\)). Note that since \(x\) is assumed to be \(\ell\)-torsion mod \(\mathfrak{m}^{e}\), we have \(\kappa([\ell]x_{i})=0\bmod\mathfrak{m}^{e}\) for all \(i\), so division by \(\varpi^{e}\) does result in the \(k_{i}\) being integral. This division is essential so that we can find the \(\lambda_{i}\) by solving a linear system over the local ring \(\mathcal{O}/\mathfrak{m}^{2e}\), since it ensures that this system has good reduction in the sense of Definition 2.1 as long as the \(x_{i}\) do form an affine frame.
Let us now see how to generalise Algorithm 2.3 to the case where \(\mathcal{O}=\mathbb{Z}_{q}[[t]]\). We can now
see why working with quotients of \(\mathbb{Z}_{q}[[t]]\) of the form \(\mathbb{Z}_{q}[[t]]/(p,t)^{e}\) would be an issue: In step 2, we would obtain vectors \(\kappa([\ell]x_{i})\) with entries in \((p,t)^{e}/(p,t)^{2e}\), but since the ideal \((p,t)\) is not principal, we would not be able to renormalise the linear system defining the \(\lambda_{i}\) into a system of good reduction in the sense of Definition 2.1.
In contrast, by working with quotients of the form \((\mathbb{Z}_{q}/p^{e}\mathbb{Z}_{q})[t]/(t^{h})\), we can generalise Algorithm 2.3 as follows: given a point \(x\in J_{0}(\mathbb{F}_{q})[\ell]\), we can first lift it \(p\)-adically to \(J(\mathbb{Z}_{q}/p^{e})[\ell]\) by using the original version of Algorithm 2.3 as described in [19], and then, we can lift this lift \(t\)-adically to \(\mathcal{J}\big{(}(\mathbb{Z}_{q}/p^{e}\mathbb{Z}_{q})[t]/(t^{h})\big{)}[\ell]\), by applying Algorithm 2.3 with \(\mathcal{O}=(\mathbb{Z}_{q}/p^{e}\mathbb{Z}_{q})[[t]]\) and \(\varpi=t\). Indeed, even though \(\mathfrak{m}=t\mathcal{O}\) is no longer maximal, the point is that the quotient \(\mathcal{O}/\mathfrak{m}^{h}=(\mathbb{Z}_{q}/p^{e}\mathbb{Z}_{q})[t]/(t^{h})\) is still a local ring with "residue ring" \(k=\mathbb{Z}_{q}/p^{e}\) which is itself local, so that our generalisation of Makdisi's algorithms to local rings is able to handle working over it.
We are thus able to lift torsion points from \(J_{0}(\mathbb{F}_{q})[\ell]\) to \(\mathcal{J}\big{(}(\mathbb{Z}_{q}/p^{e}\mathbb{Z}_{q})[t]/(t^{h})\big{)}[\ell]\), and thus to extend our method [19] to curves defined over \(\mathbb{Q}(t)\).
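To fix ideas, here is a minimal model (in Python, for illustration only, and with \(a=1\) so that \(\mathbb{Z}_{q}=\mathbb{Z}_{p}\)) of arithmetic in the quotients \(R_{e,h}=(\mathbb{Z}_{q}/p^{e}\mathbb{Z}_{q})[t]/(t^{h})\) which we work with: elements are polynomials of degree \(<h\) with coefficients in \(\mathbb{Z}/p^{e}\mathbb{Z}\), and the two accuracy parameters \(e\) and \(h\) are completely independent of each other.

```python
class TruncSeries:
    """Elements of (Z/p^e)[t]/(t^h): coefficient lists of length h, reduced mod p^e."""

    def __init__(self, coeffs, p, e, h):
        self.p, self.e, self.h, self.m = p, e, h, p ** e
        c = [x % self.m for x in coeffs[:h]]
        self.coeffs = c + [0] * (h - len(c))

    def __add__(self, other):
        return TruncSeries([a + b for a, b in zip(self.coeffs, other.coeffs)],
                           self.p, self.e, self.h)

    def __mul__(self, other):
        prod = [0] * self.h                        # terms of degree >= h are dropped
        for i, a in enumerate(self.coeffs):
            if a:
                for j in range(self.h - i):
                    prod[i + j] = (prod[i + j] + a * other.coeffs[j]) % self.m
        return TruncSeries(prod, self.p, self.e, self.h)

# (1 + t)^2 in (Z/17^48)[t]/(t^16):
x = TruncSeries([1, 1], 17, 48, 16)
print((x * x).coeffs[:3])   # [1, 2, 1]
```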
**Remark 2.2**.: In practice, when we identify elements \(c\in\mathbb{Q}(t)\) from an approximation in \((\mathbb{Z}/p^{e}\mathbb{Z})[t]/(t^{h})\) at the end of Algorithm 2.2, rather than first identifying \(c\) as an element of \(\mathbb{Q}[t]/(t^{h})\) by \(p\)-adic rational reconstruction and then as an element of \(\mathbb{Q}(t)\) by Pade approximants over \(\mathbb{Q}\), it is much more efficient to proceed in the reverse order, that is to say to first use Pade approximants over \(\mathbb{Q}_{p}\) so as to identify \(c\) as an element of \(\mathbb{Q}_{p}(t)\) whose coefficients are known mod \(p^{e}\), and then to reconstruct these coefficients as rational numbers. The reason for this is that unless \(h\) is quite small, the Taylor coefficients of \(c\) up to \(O(t^{h})\) will typically have a very large arithmetic height, so that identifying them would require the \(p\)-adic precision parameter \(e\) to be very high, which would drastically reduce the execution speed of the whole of Algorithm 2.2. For example, in Section 4.3 below, identifying the coefficients of a 2-division polynomial of a family of plane quartics requires \(h=128\), and experimentation has shown to us that this in turn requires \(e=4096\) with the first method, but only \(e=128\) with the second one.
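The second step of this procedure is classical rational reconstruction (this is essentially what [Pari/GP]'s bestappr performs); the following is a minimal sketch, for illustration only, recovering a rational number from its image mod \(p^{e}\) by the extended Euclidean algorithm.

```python
from math import gcd, isqrt

def ratrecon(a, m):
    """Rational reconstruction: find (n, d) with n/d = a mod m and |n|, d <= sqrt(m/2),
    assuming such a fraction exists; return None otherwise."""
    a %= m
    bound = isqrt(m // 2)
    r0, r1 = m, a
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound or gcd(r1, abs(t1)) != 1:
        return None
    return (r1, t1) if t1 > 0 else (-r1, -t1)      # (numerator, denominator)

# Recover a coefficient such as -2/27 from its image mod p^e:
p, e = 17, 48
m = p ** e
print(ratrecon((-2 * pow(27, -1, m)) % m, m))      # (-2, 27)
```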
## 3 Computing with plane algebraic curves
When we apply Strategy 1.1, on both occasions when we use our algorithm to compute an \(\ell\)-division polynomial of a curve (first over \(\mathbb{Q}(t)\) with Algorithm 2.2, and then over \(\mathbb{Q}\) with Algorithm 2.1), that curve is given to us by a plane equation, which is possibly singular. However, as explained in the previous Section, our \(\ell\)-division polynomial algorithm relies on Makdisi's algorithms, which require the curve to be represented by a Riemann-Roch space of high-enough degree.
The purpose of this Section is therefore to explain how one may perform explicit computations, such as Riemann-Roch spaces, with curves given by possibly singular plane models. Such functionalities are already available in some computer algebra packages such as [19], but our implementation of the \(\ell\)-division polynomial algorithm is based on [11], and converting data from [19] to [11] is tedious and tends to break the flow of automation. We have therefore implemented our own package to compute with plane algebraic curves in [11], in a way which is tailored towards our needs.
### Representing the desingularised curve
Fix a ground field \(K\) over which one can algorithmically factor polynomials and perform linear algebra. For example, \(K\) could be \(\mathbb{Q}\) or \(\mathbb{Q}(t)\). We also assume that \(K\) has characteristic 0, although this hypothesis is not essential (see Remark 3.4 below).
Suppose we are given an irreducible polynomial \(f(x,y)\in K[x,y]\). It defines an affine curve \(C\) over \(K\), but instead one typically wants to work with \(\tilde{C}\), the desingularisation of the projective completion of \(C\). Nonsingular points of \(C\) may be identified with points of \(\tilde{C}\), so we only need a specific representation for points of \(\tilde{C}\) at infinity or above singular points of \(C\).
One possibility would be to construct an explicit model of \(\tilde{C}\) made up of several charts in a higher-dimensional ambient space; however, this approach would lead to Grobner bases calculations in many variables, which could be very slow. Therefore, we have instead decided to represent these points of \(\tilde{C}\) by formal series parametrisations. For instance, if \(f(x,y)=xy+\cdots\) so that \(C\) has a node at the origin, the two points of \(\tilde{C}\) corresponding to the two branches of this node can be represented by parametrisations of the form
\[x=t,\ y=t+O(t^{2})\quad\text{and}\quad x=t,\ y=-t+O(t^{2}).\]
In order to compute such parametrisations, we can take advantage of the fact that the field \(\overline{K}\{\{x\}\}\) of Puiseux series over \(\overline{K}\) contains an algebraic closure of \(K(x)\): for each root \(y=\sum_{m\geqslant m_{0}}a_{m}x^{m/e}\in\overline{K}\{\{x\}\}\) of \(f(x,y)\in K(x)[y]\), we obtain the parametrisation
\[x=t^{e},\ y=\sum_{m\geqslant m_{0}}a_{m}t^{m}\in\overline{K}((t)). \tag{3.1}\]
One might thus hope for a bijection between the points of \(\tilde{C}\) above \(x=0\) and parametrisations of the form \(x=t^{e}\), \(y\in\overline{K}((t))\) with \(x\) and \(y\) not both series in \(t^{m}\) for any \(m\geqslant 2\); but unfortunately, this is not the case, because (3.1) can be reparametrised as
\[x=t^{\prime e},\ y=\sum_{m\geqslant m_{0}}\zeta^{m}a_{m}t^{\prime m}\]
where \(t=\zeta t^{\prime}\) for any \(e\)-th root of unity \(\zeta\in\overline{K}\). In particular, with this approach, there would be no hope to match the extension of \(K\) generated by the coefficients \(a_{j}\) with the field of definition of the corresponding point2.
Footnote 2: Unless of course \(K\) happens to contain the roots of unity of all orders, which typically will not be the case for the applications which we have in mind since we will be working over \(K=\mathbb{Q}\) or \(\mathbb{Q}(t)\).
Fortunately, Duval [10] has shown that these problems can be circumvented by allowing parametrisations of the form \(x=bt^{e}\), \(y\in\overline{K}((t))\) where \(b\in\overline{K}\) is a constant:
**Theorem 3.2**.: _Let \(f(x,y)\in K[x,y]\) be irreducible of degree \(n\) in \(y\). There exists a finite set of parametrisations_
\[x=b_{j}t^{e_{j}},\ y=\sum_{m\geqslant m_{j}}a_{j,m}t^{m}\]
_where for each \(j\), the \(b_{j}\) and the \(a_{j,m}\) lie in \(\overline{K}\) and span a finite extension \(L_{j}\) of \(K\), and such that the \(n\) roots of \(f\) in \(\overline{K}\{\{x\}\}\) are obtained without repetition as_
\[y=\sum_{m\geqslant m_{j}}a_{j,m}^{\sigma}(\beta x^{1/e_{j}})^{m}\]
_where \(\sigma\) ranges over the \(K\)-embeddings of \(L_{j}\) into \(\overline{K}\) and \(\beta\) ranges over \(\{\beta\in\overline{K}\,|\,\beta^{-e_{j}}=b_{j}^{\sigma}\}\) (so that \(t=\beta x^{1/e_{j}}\) is what one obtains when solving \(x=b_{j}t^{e_{j}}\) for \(t\))._
This means that we have a Galois-equivariant bijection between this set of parametrisations and the set of places of the function field \(K(C)=K(x)[y]/f(x,y)\) of \(C\) above \(x=0\), and therefore with the points of \(\tilde{C}\) above \(x=0\). In particular, we have
\[\sum_{j}e_{j}f_{j}=n\]
where the \(f_{j}=[L_{j}:K]\) are the residue degrees and the \(e_{j}\) are the ramification indices, so that the \(L_{j}\) are the fields of definition of the corresponding points of \(\tilde{C}\), and that the
\[\prod_{\sigma:L_{j}\to\overline{K}}\;\prod_{\beta^{-e_{j}}=b_{j}^{\sigma}}\left(y-\sum_{m\geqslant m_{j}}a_{j,m}^{\sigma}(\beta x^{1/e_{j}})^{m}\right) \tag{3.3}\]
are the irreducible factors of \(f(x,y)\) over \(K((x))\). Note the analogy with the determination of the decomposition of a prime number \(p\) in a number field by studying the factorisation over \(\mathbb{Q}_{p}\) of a polynomial defining that number field.
Duval explains that these parametrisations can be computed as follows:
1. Draw the Newton polygon of \(f(x,y)\), that is to say the lower convex hull of the points \((i,j)\in\mathbb{Z}^{2}\) such that the coefficient \(a_{i,j}\) of \(y^{i}x^{j}\) in \(f(x,y)=\sum_{i,j}a_{i,j}y^{i}x^{j}\) is nonzero.
2. For each segment \(pi+qj=r\) of the Newton polygon, where \(p,q,r\in\mathbb{Z}\) and \(\gcd(p,q)=1\), find \(u,v\in\mathbb{Z}\) such that \(up+vq=1\), and let \(f_{0}=\sum_{pi+qj=r}a_{i,j}x^{j}y^{i}\). Then for each \(b\in\overline{K}\) such that \(f_{0}(b^{-u}t^{q},b^{v}t^{p})=0\), let \(f_{1}(x,y)=f\big{(}b^{-u}x^{q},b^{v}x^{p}(1+y)\big{)}\). If \(f_{1}\) is nonsingular in \(y\), stop; else, go back to step 1 with \(f\) replaced with \(f_{1}\).
Algorithm 3.1: Computing parametrisations.
The idea is that we use the Newton polygon to determine the valuation of the roots \(y\) of \(f(x,y)=0\), and then view \(f_{0}\) as the "leading terms", the other terms being thought of as higher-order perturbations. After finitely many iterations, the equation obtained will be nonsingular in \(y\), so its roots can be found by Newton iteration. We thus obtain explicit parametrisations representing the points of \(\tilde{C}\) above \(x=0\) such that the field of definition of each point is the extension generated by the coefficients of the corresponding parametrisation. Parametrisations for the points above other values of \(x\) can of course be obtained similarly, by shifting the variable \(x\) appropriately.
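As an aside, step 1 of Algorithm 3.1 only requires the lower convex hull of the support of \(f\); the following minimal sketch (in Python, for illustration only; it is independent of our [Pari/GP] implementation) computes it by the standard monotone-chain construction.

```python
def newton_polygon(support):
    """Vertices of the lower convex hull of the support {(i, j) : a_{i,j} != 0}
    of f(x, y) = sum a_{i,j} y^i x^j, as in step 1 of Algorithm 3.1."""
    pts = sorted(set(support))
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for p in pts:
        # drop the last vertex while it lies on or above the segment hull[-2] -- p
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

# f = y^2 - x^3 - x^2 (a nodal cubic): support {(2, 0), (0, 3), (0, 2)}.
# The single segment i + j = 2 indicates roots y of x-adic valuation 1.
print(newton_polygon([(2, 0), (0, 3), (0, 2)]))    # [(0, 2), (2, 0)]
```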
**Remark 3.4**.: The only reason why we have assumed that \(K\) has characteristic 0 was to ensure that \(f(x,y)\in K(x)[y]\) splits completely over \(\overline{K}\{\{x\}\}\). Theorem 3.2 and Algorithm 3.1 actually remain valid in positive characteristic \(\pi\) as long as there is no wild ramification, that is to say that none of the places has ramification index divisible by \(\pi\), which is equivalent to having \(\pi\nmid q\) whenever we consider a segment \(pi+qj=r\) of a Newton polygon in step 1. All the algorithms presented in this section therefore remain valid in positive characteristic as long as \(\tilde{C}\) is at most tamely ramified as a cover of \(\mathbb{P}^{1}_{x}\), which in practice means we typically only exclude really small characteristics such as 2, 3, or 5. Furthermore, by checking whether \(\pi\mid q\) during the execution of algorithm 3.1, we can reliably detect when this algorithm is going to fail.
### Regular differentials and the genus
Now that we have computed parametrisations representing singular points and points at infinity, we can find a basis of regular differentials on \(\tilde{C}\). Indeed, it is well-known [1, 2.9] that for all \((i,j)\in\mathbb{Z}^{2}\) strictly in the interior of the full (as opposed to lower) convex hull of the support of \(f(x,y)=\sum_{i,j}a_{i,j}y^{i}x^{j}\), the differential \(\omega_{i,j}=\frac{x^{j-1}y^{i-1}}{\partial f/\partial y}\,\mathrm{d}x\) is regular everywhere except possibly at singular points, and that every regular differential on \(\tilde{C}\) is a \(K\)-linear combination of those. We thus obtain a basis of regular differentials by finding the linear combinations whose expansions along the parametrisations corresponding to singular points do not have any polar part, which amounts to linear algebra over \(K\). In particular, we recover the genus of \(\tilde{C}\) as the size of this basis.
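In the same illustrative spirit as above, the following sketch enumerates the interior lattice points of the full convex hull of the support; each of them gives a candidate differential \(\omega_{i,j}\), so their number is an upper bound for the genus (with equality when the curve is nondegenerate with respect to its Newton polygon).

```python
def interior_points(support):
    """Lattice points strictly inside the full convex hull of the support of
    f = sum a_{i,j} y^i x^j; each (i, j) found yields the candidate differential
    omega_{i,j} = x^{j-1} y^{i-1} dx / (df/dy)."""
    pts = sorted(set(support))
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half_hull(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = half_hull(pts)[:-1] + half_hull(pts[::-1])[:-1]   # counter-clockwise
    if len(hull) < 3:
        return []
    is_, js = zip(*pts)
    inside = []
    for i in range(min(is_), max(is_) + 1):
        for j in range(min(js), max(js) + 1):
            if all(cross(hull[k], hull[(k + 1) % len(hull)], (i, j)) > 0
                   for k in range(len(hull))):
                inside.append((i, j))
    return inside

# A smooth plane quartic such as x^4 + y^4 - 1 = 0 has support {(0,0), (4,0), (0,4)}
# and 3 interior points, matching its genus 3.
print(interior_points([(0, 0), (4, 0), (0, 4)]))   # [(1, 1), (1, 2), (2, 1)]
```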
While there exist more direct ways to compute the genus, having an actual basis of regular differentials is very useful in practice. For example, it makes it possible to test whether the curve is hyperelliptic, and to find an explicit change of variables which puts it in Weierstrass form if it is [vH2]. And if the curve is not hyperelliptic, one can instead compute its canonical image, which provides a way of finding simpler models for curves defined by a complicated, highly-singular equation (for example, this is the approach that we followed in [14, 3]).
### Riemann-Roch spaces and extra functionalities
With our parametrisations representing singular points and points at infinity, we can also compute the integral closure
\[\mathcal{O}=\{s\in K(C)\,|\,\text{the only poles of $s$ are above $x=\infty$}\}\]
of \(K[x]\) in \(K(C)\) in a similar way to the number field case [1, 2.4]: for each irreducible \(d(x)\in K[x]\) such that \(d(x)^{2}\mid\operatorname{disc}_{y}f(x,y)\), we construct a local basis by starting with the approximation \((\omega_{j}=y_{1}^{j-1})_{1\leqslant j\leqslant n}\) where \(y_{1}=a(x)y\) and \(a(x)\) is the leading coefficient of \(f(x,y)\in K(x)[y]\), and refining it as long as we can find scalars \(\lambda_{j}\in K[x]/\big{(}d(x)\big{)}\), not all zero, such that \(\frac{\sum_{j}\lambda_{j}\omega_{j}}{d(x)}\) has no polar part when evaluated along the parametrisations representing the points above \(d(x)=0\). We then join these local bases into a \(K[x]\)-basis of \(\mathcal{O}\) by computing a Hermite normal form over \(K[x]\).
Thanks to this \(K[x]\)-basis of \(\mathcal{O}\), we can check whether \(C\) is geometrically irreducible, by finding which elements of \(\mathcal{O}\) are also regular above \(x=\infty\).
We can also compute Riemann-Roch spaces, since it is easy, given a divisor on \(C\), to compute a "common denominator" \(d(x)\in K[x]\) such that the corresponding Riemann-Roch space is contained in \(\frac{1}{d(x)}\mathcal{O}\).
This makes it possible to find conic models for curves of genus \(0\). If \(K\) is a number field, we can then test whether the curve has a rational point by a constructive version of Hasse-Minkowski, in which case another use of Riemann-Roch provides us with an explicit rational parametrisation of the curve [vH0]. Riemann-Roch spaces also make it possible to turn curves of genus \(1\) on which a rational point is known into elliptic curves in Weierstrass form.
Finally, now that we are able to compute Riemann-Roch spaces, we can initialise Makdisi's algorithms so as to compute in the Jacobian of \(\tilde{C}\).
We have implemented all the functionalities described in this section in [11]. Our code, which compares quite decently to [16], is available for use in a development branch of [11], which also contains the generalisation of [14] to \(\mathbb{Q}(t)\) described in Section 2.
## 4 Examples
In order to demonstrate the use of the algorithm described in Section 2, we have computed some division polynomials over \(\mathbb{Q}(t)\). The calculations took place on the [PlaFRIM] cluster.
### Warmup
As a sanity check, we first used our new algorithm in order to recover an equation for the \(3\)-torsion of the elliptic surface \(\mathcal{E}\) defined by
\[y^{2}=t(1+2t-t^{2})(x^{2}-1)(t^{2}x^{2}-1)\]
that was the object of our attention in [10]. Even though using Makdisi's algorithms on elliptic curves is obviously out-of-proportion, we instantaneously obtained the division polynomial
\[3x^{8}+4t(t^{2}+1)(t^{2}-2t-1)x^{6}+6t^{4}(t^{2}-2t-1)x^{4}-t^{8}(t^{2}-2t-1)^{ 2}\in\mathbb{Q}(t)[x],\]
which is incomparably simpler than what we obtained in [10] with [11]'s elldivpol function, and even prettier than the nicest model that we were able to achieve in [10]. To boot, this polynomial is reminiscent of the fact that \(t=0\) and \(t^{2}-2t-1=0\) are places of bad reduction of \(\mathcal{E}\).
### A hyperelliptic family
Encouraged by this first example, we then computed an \(\ell\)-division polynomial for \(\ell=3\) of the curve over \(\mathbb{Q}(t)\) of genus \(g=2\) corresponding to the hyperelliptic surface \(H\) defined by the equation
\[y^{2}=x^{6}-x^{4}+(t-1)(x^{2}+x).\]
**Remark 4.1**.: The equation \(y^{2}=x^{6}-x^{4}+t(x^{2}+x)\) would have been more natural, but we shifted the parameter \(t\) so as to have good reduction at \(t=0\). We did the same for the previous example, but the polynomial which we presented there was the un-shifted version.
We chose to use the auxiliary prime \(p=17\), since having the \(\ell\)-torsion defined over \(\mathbb{Q}_{p^{a}}((t))\) then merely requires \(a=6\); and we computed the \(\ell\)-torsion mod \((p^{e},t^{h})\) for \(e=48\) and \(h=16\). The computation took \(2\) minutes, and we obtained an \(\ell\)-division polynomial \(R_{H,3}(x,t)\in\mathbb{Q}(t)[x]\) of degree \(\ell^{2g}-1=80\) and whose coefficients have numerators of degree up to \(12\) and coefficients of up to \(27\) decimal digits, and common denominator \(d_{H}(t)=3^{3}(t+1)^{2}\).
This denominator can probably be explained by the fact that \(H\) has bad reduction at \(t=-1\); even though it can be observed that \(d_{H}(t)\) is not divisible by \(t-1\) whereas \(H\) clearly has bad reduction at \(t=1\) as well.
### A plane quartic family
As a final example, we computed an \(\ell\)-division polynomial for \(\ell=2\) of the family \(Q\) of plane quartics of generic genus \(g=3\) defined by the equation
\[x^{4}+(2-t)y^{4}+2x^{3}+x(x+y)+(t-1)(y+x^{2}+x)=0.\]
This time, we took \(p=5\) as it allows \(a=7\), and the accuracy parameters were \(e=h=128\). After one hour and a half, we obtained a division polynomial \(R_{Q,2}(x,t)\in\mathbb{Q}(t)[x]\) of
degree \(\ell^{2g}-1=63\) with common denominator \(d_{Q}(t)=(t-2)(2t-3)^{4}d_{22}(t)\), where \(d_{22}(t)\in\mathbb{Z}[t]\) is irreducible of degree \(22\) and has leading coefficient \(2^{16}\), and whose coefficient numerators have degree up to \(54\) and coefficients of up to \(39\) digits.
It should be noted that one of the places of \(\mathbb{P}^{1}_{t}\) at which \(Q\) has bad reduction has degree \(14\) over \(\mathbb{Q}\); since this must somehow be reflected in an anomalous behaviour of the specialisation of \(R_{Q,2}\) at this value of \(t\), this explains why the coefficients of \(R_{Q,2}\) are so complicated, and why the \(t\)-adic accuracy (\(h=128\)) required to identify them was so much larger than in the previous example. This in turn explains why this computation took so much longer than the previous one.
This time, most of the "geometric content" of the denominator, that is to say the factors \((2t-3)^{4}\) and \(d_{22}(t)\), do not correspond to places of bad reduction of \(Q\) (but \(t-2\) does), and should instead probably be interpreted as values of \(t\) for which the evaluation map \(\alpha\in\mathbb{Q}(t)(\mathcal{J})\) fails to be defined on all the \(2\)-torsion points (see Section 2 for the definition and context around \(\alpha\)). However, it is still interesting to note that in all three examples, the "arithmetic content", that is to say the leading coefficient of the common denominator, is a power of \(\ell\).
**Remark 4.2**.: Our calculations rely on [11]'s polynomial arithmetic, which unfortunately does not benefit from fast algorithms for multiplication of polynomials of high degree. In view of the high \(t\)-adic accuracy that it required, it is likely that the computation of \(R_{Q,2}\) would have been faster if fast polynomial arithmetic had been available.
**Remark 4.3**.: As explained in the Introduction, our identification of the coefficients of our division polynomials as elements of \(\mathbb{Q}(t)\) from approximations in \(\mathbb{Q}_{p}[[t]]\) is not rigorous. However, it is easy to convince oneself that these division polynomials are correct beyond reasonable doubt, for example by checking that their specialisations at nonzero values of \(t\) of good reduction have Galois group contained in \(\mathrm{GSp}(2g,\ell)\), and that their ramification agrees with what is predicted by Neron-Ogg-Shafarevich [12]. The geometric interpretation of the ramification of the specialisations of these division polynomials at bad values of \(t\) which we will establish in the next section is also evidence that their coefficients have been correctly identified.
## 5 Degeneration of Galois representations and their ramification
Disappointingly, the division polynomials \(R_{H,3}(x,t)\) and \(R_{Q,2}(x,t)\) which we have obtained in the previous Section are so complicated that neither [12] nor our plane curves package presented in Section 3 are able to determine their genus, let alone compute Riemann-Roch spaces required to use Makdisi's algorithms to work in their Jacobian. As a result, we are unfortunately unable to conclude our calculation of the Galois representations occurring in the etale cohomology of the corresponding surfaces.
However, these division polynomials are still very valuable data, in that each of them encodes a family of Galois representations parametrised by \(\mathbb{P}^{1}_{\mathbb{Q}}\). Furthermore, these representations are far from trivial, in that they have maximal image. Indeed, one easily checks with [12] that the specialisation of \(R_{H,3}(x,t)\) at a rational value of \(t\) of good reduction of \(H\) (for example, at \(t=0\)) has Galois group \(\mathrm{GSp}(4,3)\) over \(\mathbb{Q}\), which proves that \(R_{H,3}(x,t)\) has Galois group \(\mathrm{GSp}(4,3)\) over \(\mathbb{Q}(t)\); therefore, most specialisations of \(R_{H,3}(x,t)\) will have Galois group \(\mathrm{GSp}(4,3)\) by Hilbert irreducibility, so that \(R_{H,3}(x,t)\) may be viewed as a family (in \(t\)) of polynomials (in \(x\)) with generic Galois group \(\mathrm{GSp}(4,3)\). One similarly checks that \(R_{Q,2}(x,t)\) defines a family of polynomials with generic Galois group \(\mathrm{GSp}(6,2)=\mathrm{Sp}(6,2)\), which happens to be a simple group.
### 5.1 Decomposition of the bad places
It is especially interesting to study how these families of Galois representations degenerate at values of \(t\) which are places of bad reduction of the corresponding curves over \(\mathbb{Q}(t)\).
The bad places of our hyperelliptic family \(H\) defined by
\[y^{2}=x^{6}-x^{4}+(t-1)(x^{2}+x)\]
are easily determined by examining the discriminant of the right-hand side, and turn out to be \(t=1\), \(t=-1\), \(t=283/256\), and \(t=\infty\).
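These bad values can be reproduced with any computer algebra system; a minimal SymPy sketch (ours, purely illustrative) recovers the finite bad places from the discriminant:

```python
import sympy as sp

x, t = sp.symbols('x t')
# Right-hand side of the hyperelliptic model y^2 = x^6 - x^4 + (t-1)(x^2 + x).
f = x**6 - x**4 + (t - 1)*(x**2 + x)

# Finite places of bad reduction: values of t where f acquires a repeated root,
# i.e. where its discriminant with respect to x vanishes.
disc = sp.discriminant(f, x)
print(sp.factor(disc))              # factors supported at t = 1, -1, 283/256
print(sp.solve(sp.Eq(disc, 0), t))  # roots -1, 1 and 283/256 (in some order)
```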
In order to analyse the degeneration of \(R_{H,3}(x,t)\) at these places, one must not simply substitute these values for \(t\), as this would be as incorrect as trying to understand the decomposition of a prime \(p\) in a number field by factoring a polynomial mod \(p\) without taking into consideration the index of the order attached to this polynomial. Instead, we must study the factorisation over \(\mathbb{Q}((t))\) of versions of \(R_{H,3}(x,t)\) shifted in such a way that the bad place under consideration is now \(t=0\). In view of (3.1), this is equivalent to determining the ramification in \(t\) and the fields of definition of the points above \(t=0\) of the desingularisation of the curve \(R_{H,3}(x,t)=0\), which we can achieve thanks to our implementation of Duval's method described in Section 3.1. We thus obtain the following data:
\begin{table}
\begin{tabular}{c|l|l|l} \(t\) & Place decomposition & Galois group & Ramification \\ \hline \(1\) & \(\mathbb{Q}(\sqrt{3})^{1}\cdot\mathbb{Q}(\sqrt{-1})^{3}\cdot\big(\mathbb{Q}(\zeta_{9})^{+}(\sqrt{-1})\big)^{9}\cdot\big(\mathbb{Q}(\zeta_{36})^{+}\big)^{3}\) & \((\mathbb{Z}/36\mathbb{Z})^{\times}\) & \(2,3\) \\ \(-1\) & \(\mathbb{Q}(\sqrt{-21})^{1}\cdot K_{6}^{1}\cdot K_{18}^{1}\cdot K_{18}^{\prime\ 3}\) & \(C_{2}\times C_{3}\cdot S_{3}^{2}\) & \(2,3,7,11\) \\ \(\frac{283}{256}\) & \(\mathbb{Q}(\sqrt{-14})^{1}\cdot K_{18}^{\prime\prime\ 3}\cdot K_{24}^{1}\) & \((C_{2}\times C_{3}\rtimes S_{3})\cdot S_{4}\) & \(2,3,7,11\) \\ \(\infty\) & \(\mathbb{Q}^{2}\cdot\mathbb{Q}^{6}\cdot\mathbb{Q}(\sqrt{3})^{4}\cdot\mathbb{Q}(\sqrt[4]{12})^{4}\cdot\mathbb{Q}(\sqrt[4]{12})^{12}\) & \(D_{4}\) & \(2,3\) \\ \end{tabular}
\end{table}
Table 5.1: Decomposition of the bad places of \(H\).
In this table, the second column shows the decomposition of the corresponding place of \(\mathbb{Q}(t)\) in the function field \(\mathbb{Q}(t)[x]/\big(R_{H,3}(x,t)\big)\); for example, there are five places above \(t=\infty\), two with residue field \(\mathbb{Q}\) and respective ramification indices \(2\) and \(6\), one with residue field \(\mathbb{Q}(\sqrt{3})\) and ramification index \(4\), and two with residue field \(\mathbb{Q}(\sqrt[4]{12})\) and respective ramification indices \(4\) and \(12\). The third column shows the Galois group of the compositum of the Galois closures of the residue fields, and the last column lists the prime numbers which ramify in this Galois closure, or, equivalently, in at least one of the residue fields. Still in this table, \(\mathbb{Q}(\zeta_{m})^{+}\) denotes the intersection of the cyclotomic field \(\mathbb{Q}(\zeta_{m})\) with \(\mathbb{R}\), and \(K_{d}\), \(K_{d}^{\prime}\), \(K_{d}^{\prime\prime}\), and so on stand for pairwise non-isomorphic number fields of degree \(d\). As for Galois groups, \(C_{n}\), \(D_{2n}\), and \(S_{n}\) respectively denote cyclic, dihedral, and symmetric groups, and \(A\cdot B\) stands for a nonsplit group extension with normal subgroup \(A\) and quotient \(B\). For \(t=1\), we have exceptionally expressed the Galois group as \((\mathbb{Z}/36\mathbb{Z})^{\times}\) instead of \(C_{6}\times C_{2}\) because the Galois closure is the \(36^{\text{th}}\) cyclotomic field.
We will elucidate the nature of some of these residue fields in Section 5.2, where we will also explain the occurrence of each of the ramified primes.
As for our family of quartics \(Q\), the places of bad reduction are \(t=1\), \(t=2\), \(t=\infty\), as well as the place of degree \(14\) mentioned in the previous Section. The high degree of this last place makes explicit computations with it impractical, so we ignore it from now on. We obtain the following data:
\begin{table}
\begin{tabular}{c|l|l|l} \(t\) & Place decomposition & Galois group & Ramification \\ \hline \(1\) & \(\mathbb{Q}^{1}\cdot\mathbb{Q}^{1}\cdot K_{8}^{1}\cdot K_{8}^{\prime\ 1}\cdot K_{8}^{\prime\prime 2}\cdot K_{8}^{\prime\prime 2}\cdot K_{12}^{1}\) & \(C_{3}^{3}\rtimes S_{4}\) & \(2,229\) \\ \(2\) & \(\mathbb{Q}^{1}\cdot\mathbb{Q}^{2}\cdot\mathbb{Q}^{4}\cdot\mathbb{Q}^{8}\cdot\mathbb{Q}^{8}\cdot\mathbb{Q}(\sqrt{2})^{4}\cdot\mathbb{Q}(\sqrt{2},\sqrt{15})^{8}\) & \(C_{2}^{2}\) & \(2,3,5\) \\ \(\infty\) & \(\mathbb{Q}^{1}\cdot\mathbb{Q}^{2}\cdot\mathbb{Q}^{4}\cdot K_{3}^{2}\cdot K_{3}^{4}\cdot K_{6}^{1}\cdot K_{8}^{\prime\prime\prime\ 4}\) & \(S_{4}\times C_{2}\) & \(2,23\) \\ \end{tabular}
\end{table}
Table 5.2: Decomposition of some of the bad places of \(Q\).
### 5.2 Visualising ramification on the special fibre
We would now like to find a geometric explanation for the ramified primes observed in the previous tables. We will also explain the occurrence of some of the residue fields.
At a place of \(\mathbb{P}^{1}_{t}\) of good reduction, so that the fibre of the surface is a nice curve \(F\), the Neron-Ogg-Shafarevich criterion [13] would lead us to expect ramification at \(p=\ell\) as well as at the primes of bad reduction of \(F\). By analogy, at a bad place, we would expect ramification at \(p=\ell\) and at the primes \(p\) such that the bad fibre becomes "even worse".
More specifically, this bad fibre should be understood as the fibre of a minimal regular model of the surface over \(\mathbb{Q}\), and saying that the fibre becomes even worse mod \(p\) means that the reduction mod \(p\) of this special fibre does not agree with the special fibre of the minimal regular model of the reduction mod \(p\) of the surface. In more colourful language, this could be summarised by saying that, along with \(p=\ell\), these are the primes \(p\) such that taking the special fibre of the minimal regular model does not commute with reduction mod \(p\).
**Remark 5.1**.: Instead of looking at special fibres of the minimal regular model, it would also make sense to consider the semistable fibres. We content ourselves with this imprecision, because we are in effect looking at families of curves over the base \(\mathbb{P}^{1}_{\mathbb{Z}}\) which has dimension \(2\) (one geometric dimension and one arithmetic one), so that as far as the author is aware, there is no longer a canonical notion of good (meaning Neron) model for the Jacobian.
#### 5.2.1 The hyperelliptic surface
Let us begin with the hyperelliptic surface \(H\).
**The fibre at \(t=1\)**
The surface \(H\) is not regular above \(t=1\), but in characteristic \(\pi\neq 2\), it becomes regular after one blowup, and its special fibre then consists of two rational curves arranged as shown on Figure 5.1:
Figure 5.1: The special fibre of \(H\) at \(t=1\) when \(\pi\neq 2\).
In contrast, in characteristic \(\pi=2\), it takes many more blowups to obtain a regular model of \(H\) above \(t=1\). This explains the ramification at \(p=2\) observed in Table 5.1 for \(t=1\). As for ramification at \(p=3\), it is simply explained by the fact that we are looking at 3-torsion.
**The fibre at \(t=-1\)**
For \(t=-1\), in characteristic \(\pi\not\in\{2,7,11\}\), we again obtain a regular surface after one blowup. Its special fibre is made up of an elliptic curve and a rational curve, as shown on Figure 5.2. Our plane curves package described in Section 3 informs us that over \(\mathbb{Q}\), the elliptic component is the curve with [LMFDB] label 176.a2, whose conductor is \(176=2^{4}\cdot 11\).
As a result, in characteristic \(\pi=11\), the elliptic curve degenerates, and the special fibre becomes what is shown on Figure 5.3:
This explains why we observed ramification at \(p=11\). As for \(\pi=2\), the special fibre is the same as for \(t=1\), since \(t\) is defined over \(\mathbb{Z}\) and \(-1\equiv 1\bmod 2\).
It remains to explain ramification at \(p=7\). A closer inspection of the special fibre over \(\mathbb{Q}\) (as shown on Figure 5.2) shows that the intersection points of the two components are not rational, but defined over \(\mathbb{Q}(\sqrt{7})\) and Galois-conjugates of each other; as a result, when we reduce mod \(\pi=7\), these intersection points coalesce, and the special fibre becomes what is shown on Figure 5.4, which explains ramification at 7:
Figure 5.3: The special fibre of \(H\bmod 11\) at \(t=-1\). Both components are now rational.
Figure 5.2: The special fibre of \(H\) at \(t=-1\) when \(\pi\not\in\{2,7,11\}\).
**Remark 5.2**.: As one would expect, our residue fields pick up the 3-torsion of the elliptic curve component of the special fibre. More specifically, this elliptic curve 176.a2 acquires two of its 3-torsion points over \(\mathbb{Q}(\sqrt{-1})\), whereas each of its remaining six points of order 3 is defined over one of the Galois conjugates of a number field \(F\) of degree 6. The field \(K_{6}\) appearing in Table 5.1 is actually an extension of \(\mathbb{Q}(\sqrt{-1})\) of degree 3 and relative discriminant \((1+\sqrt{-1})^{2}\cdot 3^{3}\cdot 7\), whereas the field \(K_{18}\) appearing in the same table is an extension of \(F\) of degree 3 ramified only above 2 and 7. The fact that these extensions have degree 3 can be interpreted in terms of generalised Jacobians, since we are looking at 3-torsion. Curiously, there does not seem to be a similar interpretation for \(K^{\prime}_{18}\), but we still note that \(K_{18}\) and \(K^{\prime}_{18}\) have the same Galois closure, which also contains \(K_{6}\).
**The fibre at \(t=\infty\)**
The surface \(H\) is actually already regular at \(t=\infty\) in any characteristic, so we can directly visualise its special fibre, which turns out to have a rather nasty singularity:
The fact that \(H\) is regular at \(t=\infty\) even mod \(\pi=2\) fails to explain why we observed ramification at \(p=2\) in Table 5.1. However, the special fibre which we have obtained is clearly not semistable, so we may be looking at the "wrong" fibre.
In order to investigate further, we can look in the direction of the semistable fibre, which means we must perform a ramified base change [13, 3.47]. The simplest candidate is to base-change to \(\mathbb{Q}(t^{1/2})\), meaning that we replace \(t\) with \(t^{2}\) in our equation. This results in \(H\) no longer being regular, even in characteristic \(\pi=0\); after several blowups, we find that in characteristic \(\pi\neq 2\), the special fibre is made up of four rational curves, one of which has multiplicity two, as shown on Figure 5.6:
Figure 5.4: The special fibre of \(H\) mod 7 at \(t=-1\).
Figure 5.5: The special fibre of \(H\) at \(t=\infty\) in any characteristic.
In contrast, in characteristic \(\pi=2\), the desingularisation requires more blowups, which finally explains the ramification that we observed at \(p=2\).
**Remark 5.3**.: Because of the presence of a double component, the special fibre which we have obtained after base-changing to \(\mathbb{Q}(t^{1/2})\) is still not semistable, and a further base change would be required to remedy this. However, as explained in Remark 5.1, since we do not have a clear notion of "good" model, we content ourselves with this reasonably satisfying explanation.
**The fibre at \(t=283/256\)**
In characteristic \(\pi\not\in\{2,3,7,11\}\), \(H\) is already regular at \(t=283/256\), and its special fibre is a curve of genus \(1\) with a nodal self-intersection, as shown on Figure 5.7:
Over \(\mathbb{Q}\), the desingularisation of this fibre is the elliptic curve of [LMFDB] label \(528.c2\), whose conductor is \(528=2^{4}\cdot 3\cdot 11\), and as expected, the phenomenon described in Remark 5.2 occurs again, in that the number field \(K_{24}\) displayed in Table 5.1 is an extension of degree \(3\) of the field of degree \(8\) over the Galois conjugates of which the points of order \(3\) of this elliptic curve are defined. We do not, however, have a similar interpretation for the field \(K_{18}^{\prime\prime}\), but we note that its Galois closure is the same as that of \(K_{24}\), and also contains the other residue field \(\mathbb{Q}(\sqrt{-14})\) appearing in the corresponding row of Table 5.1.
In characteristics \(\pi=2,3,7,11\), we respectively have \(283/256\equiv\infty,1,-1,-1\), which are cases for which we have already found an explanation for the corresponding ramification.
#### 5.2.2 The quartic surface
We now proceed to the same analysis of ramification for the family of plane quartics \(Q\).
Figure 5.6: The special fibre of the base change of \(H\) to \(\mathbb{Q}(t^{1/2})\) at \(t=\infty\) in characteristic \(\pi\neq 2\).
**The fibre at \(t=1\)**
At \(t=1\), in characteristic \(\pi\not\in\{2,229\}\), we find that the special fibre has three components, two of which are rational, whereas the third one has genus \(2\):
Over \(\mathbb{Q}\), our plane curves package informs us that the component of genus \(2\) is isomorphic to the hyperelliptic curve of equation
\[y^{2}=x(x^{4}-x+1)\]
whose [LMFDB] label is \(29312.\)a.\(58624.1\); in particular, the conductor of its Jacobian is \(29312=2^{7}\cdot 229\). As expected, the phenomenon described in Remark 5.2 occurs again, in that the number field \(K_{8}\) displayed in Table 5.2 is defined by the irreducible polynomial \(x^{8}-x^{2}+1\) and is therefore clearly a quadratic extension of a field over which the Jacobian of this hyperelliptic curve acquires a point of order \(2\). We do not have any similar interpretation for the fields \(K^{\prime}_{8},K^{\prime\prime}_{8}\), nor \(K_{12}\) appearing in the same row of this table, but we still mention that \(K_{8}\), \(K^{\prime}_{8}\), and \(K^{\prime\prime}_{8}\) share the same Galois closure, which is a quadratic extension of the Galois closure of \(K_{12}\).
Since \(229\) divides the discriminant of this hyperelliptic curve, when we reduce mod \(\pi=229\), this curve degenerates into a curve of genus \(1\) with a nodal self-intersection:
This explains the ramification that we have observed at \(p=229\). As for the ramification at \(p=2\), it is explained by the fact that we are now looking at the \(2\)-torsion.
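The role of \(229\) can also be checked directly from the equation of this genus-\(2\) component: the discriminant of \(x^{4}-x+1\), and hence of \(x(x^{4}-x+1)\), is \(229\), so \(229\) is the only odd prime at which this curve can degenerate. A quick verification (ours) in SymPy:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.discriminant(x**4 - x + 1, x))      # 229
print(sp.discriminant(x*(x**4 - x + 1), x))  # again 229 (up to sign)
```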
**The fibre at \(t=2\)**
In characteristic \(\pi\not\in\{3,5\}\), we obtain a special fibre made up of three rational components, one of which has a cusp, and which are arranged as follows:
Figure 5.9: The special fibre of \(Q\) mod \(229\) at \(t=1\).
Figure 5.8: The special fibre of \(Q\) at \(t=1\) when \(\pi\not\in\{2,229\}\).
Reducing mod \(\pi=5\) does not result in requiring more blowups; however, the rightmost component, which is a conic, degenerates into a union of two curves, which explains the ramification at \(p=5\):
The same degeneration occurs mod \(\pi=3\), and furthermore resolving the singularities of \(Q\) at \(t=2\) also requires more blowups in characteristic \(3\). Both these facts explain the ramification at \(p=3\).
**The fibre at \(t=\infty\)**
Mod \(\pi\not\in\{2,23\}\), our model for \(Q\) is already regular at \(t=\infty\), whence a special fibre formed of one component of genus \(1\) with a nasty self-intersection:
Over \(\mathbb{Q}\), our plane curve package informs us that the desingularisation of this curve is the elliptic curve with [LMFDB] label 92.a1, whose conductor is \(92=2^{2}\cdot 23\); and the field \(K_{3}\) displayed in Table 5.2, which is the cubic field of discriminant \(-23\), is also the field over which this elliptic curve acquires a point of order \(2\). Furthermore, \(K_{6}\) is a quadratic extension of \(K_{3}\) which is only ramified above \(2\) and \(23\). We do not have a similar explanation for \(K_{8}^{\prime\prime\prime}\), but we observe that the Galois closure of \(K_{8}^{\prime\prime\prime}\), which has degree \(48\), contains \(K_{6}\) and therefore \(K_{3}\).
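For concreteness, a defining polynomial for the cubic field of discriminant \(-23\) is \(x^{3}-x-1\) (the standard example; we note this only as an illustration). Since \(-23\) is squarefree, the polynomial discriminant already coincides with the field discriminant, as a quick SymPy check confirms:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.discriminant(x**3 - x - 1, x))  # -23
```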
That \(23\) divides the conductor of this elliptic curve also results in this curve acquiring an extra node mod \(\pi=23\), which explains the ramification at \(p=23\):
Figure 5.11: The special fibre of \(Q\) mod \(5\) at \(t=2\).
Figure 5.10: The special fibre of \(Q\) at \(t=2\) when \(\pi\not\in\{3,5\}\).
|
2310.07397
|
Target-oriented Proactive Dialogue Systems with Personalization: Problem
Formulation and Dataset Curation
|
Target-oriented dialogue systems, designed to proactively steer conversations
toward predefined targets or accomplish specific system-side goals, are an
exciting area in conversational AI. In this work, by formulating a <dialogue
act, topic> pair as the conversation target, we explore a novel problem of
personalized target-oriented dialogue by considering personalization during the
target accomplishment process. However, there remains an emergent need for
high-quality datasets, and building one from scratch requires tremendous human
effort. To address this, we propose an automatic dataset curation framework
using a role-playing approach. Based on this framework, we construct a
large-scale personalized target-oriented dialogue dataset, TopDial, which
comprises about 18K multi-turn dialogues. The experimental results show that
this dataset is of high quality and could contribute to exploring personalized
target-oriented dialogue.
|
Jian Wang, Yi Cheng, Dongding Lin, Chak Tou Leong, Wenjie Li
|
2023-10-11T11:32:57Z
|
http://arxiv.org/abs/2310.07397v2
|
# Target-oriented Proactive Dialogue Systems with Personalization: Problem Formulation and Dataset Curation
###### Abstract
Target-oriented dialogue systems, designed to proactively steer conversations toward predefined targets or accomplish specific system-side goals, are an exciting area in conversational AI. In this work, by formulating a <dialogue act, topic> pair as the conversation target, we explore a novel problem of personalized target-oriented dialogue by considering personalization during the target accomplishment process. However, there remains an emergent need for high-quality datasets, and building one from scratch requires tremendous human effort. To address this, we propose an automatic dataset curation framework using a role-playing approach. Based on this framework, we construct a large-scale personalized target-oriented dialogue dataset, **TopDial**1, which comprises about 18K multi-turn dialogues. The experimental results show that this dataset is of high quality and could contribute to exploring personalized target-oriented dialogue.
Footnote 1: Our code and data are available at [https://github.com/iwangjian/TopDial](https://github.com/iwangjian/TopDial).
## 1 Introduction
Compared with traditional dialogue systems that focus merely on passively responding to user requirements, a recently investigated research topic of target-oriented dialogue systems (Sevegnani et al., 2021; Deng et al., 2023) specifies a conversation target from the system side, enabling the system to take the initiative and lead the conversation. Early work in this area mainly formulates the targets as mentioning certain keywords (Tang et al., 2019; Qin et al., 2020; Zhong et al., 2021; Yang et al., 2022) or specific topics (Wu et al., 2019; Sevegnani et al., 2021). To allow the formed targets to be applicable in broad scenarios, a few recent studies (Zhang et al., 2021; Wang et al., 2023) define <dialogue act, topic> pairs as targets. For example, given the target of <movie recommendation, "King of Comedy">, the system needs to take appropriate dialogue acts and smoothly steer the discussed topic towards the designated one. Its ultimate objective is to achieve recommendations on the target topic "King of Comedy". Our work also follows the form of <dialogue act, topic> pairs as targets to study target-oriented dialogue systems due to their higher applicability in real-world scenarios.
Despite many existing efforts, we find that two critical issues remain to be solved. One urgent problem is the need for well-organized benchmarks or datasets. Current studies for target-oriented dialogue (Gupta et al., 2022; Wang et al., 2023) mainly re-purpose existing non-target-oriented dialogue datasets, which are not exactly suitable as they are crowd-sourced without consideration of target accomplishment. Nevertheless, building a new high-quality dataset from scratch requires expensive human effort. The other essential issue is that, target-oriented dialogue systems need to consider personalized aspects (Wu et al., 2021; Rana et al., 2023), such as user profiles and personalities, which were largely ignored by previous work. User profiles involve user preferences about potential topics relevant to the target, while personalities imply possible reactions and feedback during the dialogue process. With personalized information incorporated, the system could be tailored to a user and lead the conversation towards the target with higher engagement instead of obtrusively driving to the target, thereby improving user experience. Thus, we raise the question: _How can we build high-quality datasets with little human effort for personalized target-oriented dialogue?_
In this work, we first give a comprehensive definition (§2) of personalized target-oriented dialogue, then lay out the desirable characteristics (§2) that a qualified dialogue dataset should meet. Drawing inspiration from some recent work that has demonstrated unprecedented capabilities of large language models (LLM) in simulating human social behaviors Guo et al. (2023); Li et al. (2023), we propose a role-playing approach for automatic dataset curation (§3) using multiple LLM agents. They are designed to follow specific instructions to fulfill the requirements. Based on that, we synthesize a large-scale dialogue dataset named **TopDial** and show its quality and effectiveness (§4).
Our main contributions are: (1) We formulate the problem of personalized target-oriented dialogue, which is promising yet underexplored. (2) We propose a novel role-playing framework for automatic dialogue dataset curation. It provides insights into building large-scale datasets for many other dialogue tasks. (3) Our constructed TopDial dataset is of high quality and contributes to the related research community.
## 2 Problem Formulation
**Task Definition** We consider a dialogue corpus \(\mathcal{D}=\{(\mathcal{U}_{i},\mathcal{K}_{i},\mathcal{T}_{i},\mathcal{C}_{i})\}_{i=1}^{N}\), where \(N\) is the total number of dialogues. In the \(i\)-th dialogue, \(\mathcal{U}_{i}\) represents the personalized information, such as the user's profiles and/or personalities. \(\mathcal{K}_{i}\) represents the domain knowledge facts relevant to the \(i\)-th dialogue. \(\mathcal{T}_{i}\) denotes the predefined target consisting of a <dialogue act, topic> pair. \(\mathcal{C}_{i}=\{\mathcal{C}_{i,t}\}_{t=1}^{N_{T}}\) is the dialogue content, with a total of \(N_{T}\) turns. The task of personalized target-oriented dialogue is formalized as follows: given a target \(\mathcal{T}\), a set of the user's personalized information \(\mathcal{U}\), a set of relevant domain knowledge \(\mathcal{K}\), and a dialogue context \(\mathcal{C}\), the objective is to proactively lead the conversation and generate proper utterances to achieve the target \(\mathcal{T}\) at an appropriate time.
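To make the notation concrete, one possible in-code representation of a single corpus entry could look as follows (a purely illustrative sketch; the field names and example values are ours, not those of the released data):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class DialogueEntry:
    """One corpus item (U_i, K_i, T_i, C_i) from the task definition."""
    user_profile: Dict[str, str]           # part of U_i, e.g. {"liked_movies": ...}
    user_personality: Dict[str, str]       # part of U_i, Big-5 trait descriptions
    knowledge: List[Tuple[str, str, str]]  # K_i, domain knowledge triples
    target: Tuple[str, str]                # T_i, a <dialogue act, topic> pair
    conversation: List[Dict[str, str]]     # C_i, the N_T dialogue turns

example = DialogueEntry(
    user_profile={"name": "Alice", "liked_movies": "The Truman Show"},
    user_personality={"openness": "curious about many different things"},
    knowledge=[("King of Comedy", "starring", "Stephen Chow")],
    target=("movie recommendation", "King of Comedy"),
    conversation=[{"role": "system", "utterance": "Hi! Seen any good films lately?"}],
)
print(example.target)
```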
**Desirable Characteristics of Datasets** Based on the above definition, we lay out two desirable characteristics that a qualified dataset should meet, namely _target-oriented proactivity_ and _personalization_. Target-oriented proactivity emphasizes that a dialogue dataset should allow the system to (i) take the initiative throughout a conversation, (ii) proactively lead the discussed topic towards the target topic based on domain knowledge, and (iii) accomplish the target act. On the other hand, personalization indicates that dialogues in a qualified dataset should embody (i) user profiles, which may involve users' past preferences about potential topics relevant to the target, and (ii) user personalities, which may imply users' possible reactions and feedback during the system-initiative process.
## 3 Dataset Curation Framework
In this section, we describe a role-playing approach for automatic dataset curation using multiple LLM agents. Figure 1 depicts the whole framework, which involves one _user agent_, one _system agent_, and one _moderator agent_. All these agents are designed to follow specific instructions and communicate in our _role-playing environment_.
**Role-Playing Environment** This environment is designed to provide a global description for prompting all LLM agents. To achieve desirable target-oriented role playing, we instantiate the environment description based on the domains of the pre-defined targets. For example, one can describe the environment as "You are participating in a conversation about music or movies." for a given target \(\mathcal{T}\) = <movie recommendation, "King of Comedy">. Then, the description will be prepended to each agent's instructions.
**User Agent** The user agent aims to simulate human users who generate utterances conditioned on their specific profiles and personalities. Since there are many off-the-shelf dialogue datasets grounded with user profiles, we collect all user profiles from one chosen dataset and parse them into a profile slot pool. Each slot contains a particular slot key (e.g., name, age range, liked or disliked movies) and a list of candidate values. We randomly sample a slot value for each key, and then form all key-value pairs as the simulated user profile.
Figure 1: Overview of our role-playing framework for automatic dialogue dataset curation.

Inspired by Big-5 personality traits Goldberg (1993) that have been widely adopted in personality-aware tasks Oraby et al. (2018); Yu et al. (2019), we randomly sample a positive or negative description for each of the following traits: openness (O), conscientiousness (C), extraversion (E), agreeableness (A), neuroticism (N). The sampled descriptions are then combined as the simulated user personality. We verbalize the simulated user profile and personality in natural languages, prompting the user agent to act as a human user. We present our detailed instruction template in Appendix A.1.
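A minimal sketch of this sampling-and-verbalisation step (our own illustration; the slot keys, trait descriptions, and prompt wording below are invented placeholders rather than the exact ones used for TopDial):

```python
import random

# Profile slot pool parsed from the seed dataset: slot key -> candidate values.
PROFILE_SLOTS = {
    "age_range": ["18-25", "26-35", "36-50"],
    "liked_movies": ["The Truman Show", "Forrest Gump"],
    "disliked_music": ["heavy metal"],
}

# One positive and one negative description per Big-5 trait.
BIG5 = {
    "openness": ("curious and open to new experiences", "prefers routine and the familiar"),
    "conscientiousness": ("organized and reliable", "careless about plans"),
    "extraversion": ("outgoing and talkative", "reserved and quiet"),
    "agreeableness": ("warm and cooperative", "blunt and competitive"),
    "neuroticism": ("easily stressed", "calm and emotionally stable"),
}

def simulate_user(rng: random.Random):
    profile = {key: rng.choice(values) for key, values in PROFILE_SLOTS.items()}
    personality = {trait: rng.choice(pair) for trait, pair in BIG5.items()}
    # Verbalise both parts into a natural-language persona prompt for the user agent.
    prompt = (
        "You are a human user. "
        + " ".join(f"Your {key.replace('_', ' ')}: {value}." for key, value in profile.items())
        + " Personality: " + "; ".join(personality.values()) + "."
    )
    return profile, personality, prompt

print(simulate_user(random.Random(42))[2])
```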
**System Agent** The system agent aims to serve as a human-like domain-specific enthusiast, such as a movie enthusiast who enjoys a variety of films, or a foodie who enjoys delicious food. Its long-term goal is to proactively lead the conversation towards the target, as discussed in §2. To achieve target-oriented proactivity, we take a given target \(\mathcal{T}\) and a set of relevant domain knowledge \(\mathcal{K}\) (and a few comments related to the target topic, if applicable) from a chosen seed dataset as the fundamental prompting source. Besides, in human-to-human conversations, one can easily know the other's explicit profile information, while it is hard to be aware of implicit personality before their first conversation. Thus, we pass the simulated user profile yielded by the user agent to the system agent as a personalized prompting source (see Figure 1).
We assign required instructions to the system agent based on the above prompting sources and task definition. We provide the instruction template in Appendix A.2. In practice, we further enhance the system agent in a self-augmented instruction manner, where the agent's task prompt will be repeated at each dialogue round to avoid forgetting its long-term goal.
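Schematically, the self-augmented instruction mechanism amounts to re-inserting the system agent's task prompt at every round (a toy sketch with a stubbed-out model call, not the actual TopDial generation code):

```python
def chat_completion(messages):
    """Stub standing in for a call to a chat-style LLM (e.g. ChatGPT)."""
    return {"role": "assistant", "content": "(model reply)"}

def system_turn(task_prompt, history):
    # Self-augmented instruction: the task prompt (target, domain knowledge,
    # simulated user profile) is prepended again at every dialogue round,
    # so the long-term goal is never pushed out of the context window.
    messages = [{"role": "system", "content": task_prompt}] + history
    return chat_completion(messages)

task_prompt = "You are a movie enthusiast. Goal: recommend the movie 'King of Comedy'."
history = []
for _ in range(3):  # a few dialogue rounds
    history.append(system_turn(task_prompt, history))
    history.append({"role": "user", "content": "(simulated user reply)"})
```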
through their collaboration, with very little human effort involved in the whole process.
## 4 TopDial Dataset
Based on our dataset curation framework, we synthesized the dataset TopDial by utilizing the re-purposed version Wang et al. (2023) of DuRecDial 2.0 Liu et al. (2021) as the seed dataset after carefully considering the problem formulation and necessary prompting sources. We report more implementation details in Appendix B.1.
**Dataset Statistics** Table 1 compares TopDial with related datasets. To the best of our knowledge, TopDial is the first dataset equipped with the desirable characteristics discussed in §2. It should be noted that the DuRecDial 2.0 dataset is crowdsourced without considering targets and is not exactly suitable for the end task of target-oriented proactive dialogue, while the re-purposed version of DuRecDial 2.0 largely relies on human effort to form targets and preprocess dialogues. In comparison, our TopDial dataset is curated based on target-oriented proactivity. In addition, by grounding the personality information during the dataset curation process, TopDial is more natural and effective in reflecting personalization.
Table 2 shows detailed statistics of the TopDial dataset (see domain distributions in Figure 2). We also visualize the transitions of dialogue acts of the system through the first six dialogue rounds in Figure 3. We observe that the system often asks preferences or other questions at the very beginning. As the dialogue continues, the system introduces topic-related attributes and elicits the user's interest. It shows that the system proactively leads the dialogue and gradually achieves target dialogue acts, i.e., recommendations on target topics.
and 50% from the TopDial test data. Our evaluation metrics include the average score of BLEU-1/2 Papineni et al. (2002), persona F1 Lim et al. (2022), knowledge F1 and target success rate (Succ.) Wang et al. (2023). We describe details of these metrics and model training in Appendix C.
The comparison results reported in Table 3 show a similar trend: the two baseline models trained on our TopDial dataset significantly outperform those trained on the seed dataset. In particular, our TopDial dataset is more effective in training personalized target-oriented dialogue models (e.g., much higher persona F1 and Succ. scores) by grounding the profile and personality information during the dataset curation process. It shows that TopDial is an effective training resource for the personalized target-oriented dialogue task.
**Case Study** Due to space limitations, we present some cases in Appendix D (see Figure 9 and Figure 10) for a better understanding. These cases intuitively show that our TopDial dataset fulfills target-oriented proactivity and personalization. They also show that our dataset curation framework can be a viable alternative for building personalized target-oriented dialogue datasets.
## 5 Conclusion
In this work, we explore a new task: personalized target-oriented dialogue. We first define this challenging task, and then lay out the desirable characteristics that a qualified dialogue dataset should meet. We propose a novel role-playing framework for automatic dataset curation, based on which we construct a large-scale dialogue dataset TopDial. Our statistics and evaluations validate its effectiveness and high quality.
## Limitations
Since we adopt ChatGPT agents to simulate the designed roles, ensuring the factual correctness of the synthetic dialogues during the role-playing process is challenging, as ChatGPT may produce output content with hallucinations Bang et al. (2023). We intend to improve the dataset curation process with some post-processing steps, such as fact-checking and correction based on the grounded domain knowledge. In addition, we observe that sometimes the moderator agent cannot appropriately terminate a conversation due to its difficulty in understanding the achievement of the target, even though it has been assigned with detailed instructions and in-context examples. We will leave this for future research.
## Ethical Considerations
Developing target-oriented dialogue systems requires careful ethical considerations due to the potential impact on specific scenarios. As an application scenario explored in this work, providing recommendations is one of the highly-applicable target dialogue acts. Target-oriented dialogue systems can create non-obtrusive recommendations for specific products and services. Our work does not force the system to achieve the designated target nor force users to accept recommendations.
We emphasize that regulation of the target designation is crucial when deploying target-oriented dialogue systems in particular domains. For instance, specifying a target should not violate factual correctness, user privacy rules, or laws of human society. We want to raise awareness about the potential misuse of such systems with toxic intentions. For example, such systems may be used to pose as humans and mislead users through conversations. To avoid such risks, we highlight that it is necessary to improve transparency, such as informing users that they are chatting with a bot, not a human.
## Acknowledgments
This work was supported by the Research Grants Council of Hong Kong (15207122, 15207920, 15207821, 15204018, 15213323) and National Natural Science Foundation of China (62076212). It was also supported in part by PolyU internal grants (ZVQ0, ZVVX).
|
2306.11213
|
Efficient and reliable divergence-conforming methods for an
elasticity-poroelasticity interface problem
|
We present a finite element discretisation to model the interaction between a
poroelastic structure and an elastic medium. The consolidation problem
considers fully coupled deformations across an interface, ensuring continuity
of displacement and total traction, as well as no-flux for the fluid phase. Our
formulation of the poroelasticity equations incorporates displacement, fluid
pressure, and total pressure, while the elasticity equations adopt a
displacement-pressure formulation. Notably, the transmission conditions at the
interface are enforced without the need for Lagrange multipliers. We
demonstrate the stability and convergence of the divergence-conforming finite
element method across various polynomial degrees. The a priori error bounds
remain robust, even when considering large variations in intricate model
parameters such as Lam\'e constants, permeability, and storativity coefficient.
To enhance computational efficiency and reliability, we develop residual-based
a posteriori error estimators that are independent of the aforementioned
coefficients. Additionally, we devise parameter-robust and optimal block
diagonal preconditioners. Through numerical examples, including adaptive
scenarios, we illustrate the scheme's properties such as convergence and
parameter robustness.
|
S. Badia, M. Hornkjøl, A. Khan, K. -A. Mardal, A. F. Martín, R. Ruiz-Baier
|
2023-06-20T00:42:23Z
|
http://arxiv.org/abs/2306.11213v1
|
Efficient and Reliable divergence-conforming methods for an elasticity-poroelasticity interface problem
###### Abstract.
We present a finite element discretisation to model the interaction between a poroelastic structure and an elastic medium. The consolidation problem considers fully coupled deformations across an interface, ensuring continuity of displacement and total traction, as well as no-flux for the fluid phase. Our formulation of the poroelasticity equations incorporates displacement, fluid pressure, and total pressure, while the elasticity equations adopt a displacement-pressure formulation. Notably, the transmission conditions at the interface are enforced without the need for Lagrange multipliers. We demonstrate the stability and convergence of the divergence-conforming finite element method across various polynomial degrees. The _a priori_ error bounds remain robust, even when considering large variations in intricate model parameters such as Lamé constants, permeability, and storativity coefficient. To enhance computational efficiency and reliability, we develop residual-based _a posteriori_ error estimators that are independent of the aforementioned coefficients. Additionally, we devise parameter-robust and optimal block diagonal preconditioners. Through numerical examples, including adaptive scenarios, we illustrate the scheme's properties such as convergence and parameter robustness.
**Keywords.** Biot-elasticity transmission equations, mixed finite element methods, divergence-conforming schemes, _a priori_ error analysis, _a posteriori_ error analysis, operator preconditioning.
\({}^{1}\)School of Mathematics, Monash University, 9 Rainforest Walk, Melbourne 3800 VIC, Australia; and Centre Internacional de Mètodes Numèrics a l'Enginyeria, Campus Nord, 08034, Barcelona, Spain. \({}^{2}\)Department of Mathematics, University of Oslo, Norway. \({}^{3}\)Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee 247667, India. \({}^{4}\)School of Computing, Australian National University, Acton ACT 2601, Australia. \({}^{5}\)School of Mathematics and Victorian Heart Institute, Monash University, 9 Rainforest Walk, Melbourne 3800 VIC, Australia; and Universidad Adventista de Chile, Casilla 7-D, Chillán, Chile. E-mail addresses: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected].
## 1. Introduction
In a variety of engineering and biomedical applications, poroelastic bodies are either surrounded by or in contact with a purely elastic material. Examples include filter design, prosthetics, simulation of oil extraction from reservoirs, carbon sequestration, and sound insulation structures. From the viewpoint of constructing and analysing numerical methods, recent works for the interfacial Biot/elasticity problem can be found in [4, 5, 25, 26, 27]. These contributions include mortar-type discretisations, formulations using rotations, and extensions to lubrication models. In this work we focus on H(div)-conforming discretisations of displacement for the transmission problem, in combination with a total pressure formulation for both the elastic and poroelastic sub-domains. Divergence-conforming methods with tangential jump penalisation for elasticity were already proposed in [32]. Their counterparts for the Biot poroelasticity equations (in the two-field formulation) have been introduced in [34, 47, 48], while a much more abundant literature is available for Brinkman flows (as well as coupled flow-transport problems) in [16, 17, 30, 35]. The extension to interfacial porous media flow has been addressed in [18, 19, 24, 33]. In general, this type of discretisation offers appealing features such as local conservation of mass and the ability to produce robust and locking-free schemes. Such schemes are required because of the large number of parameters with respect to which robustness is sought (especially in the limits of near incompressibility and near impermeability).
As regularity of the solution is not always available (due to possibly high contrast in material parameters, domain singularities, etc), we are also interested in deriving _a posteriori_ error estimates which allow us to apply adaptive mesh refinement in the regions where it is most required. A coupled elliptic-parabolic _a posteriori_ error analysis for Biot poroelasticity and multiple network poroelasticity is available from the works [1, 22, 38]. On the other hand, robust estimates for the elasticity-poroelasticity coupling have been obtained only recently, for enriched Galerkin methods in [26], and in [4] for rotation-based formulations.
Here the analysis is carried out considering two examples of fluid pressure approximation: either continuous or discontinuous piecewise polynomials. For the DG case we use a classical symmetric interior penalty (SIP) method. In all cases the proposed formulation is robust with respect to material parameters that can assume very small or very large values, including the extreme cases of near incompressibility, near impermeability, and near zero storativity. This parameter independence in the stability of the discrete problem is critical in the _a priori_ error bounds, in the derivation of _a posteriori_ error estimates, and in the design of robust preconditioners.
Finally, we design optimal preconditioners that are robust with respect to the model parameters. The preconditioner is block-diagonal and its definition relies on the stability properties of the proposed numerical scheme, i.e., it consists in a discretisation of the continuous Riesz map (see [30, 37] for similar approaches). The definition of the pressure block is motivated by [42], where a robust preconditioner for interface Stokes problems with high contrast is proposed.
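To illustrate the operator-preconditioning idea on a toy problem — this is a generic SciPy sketch of a block-diagonal preconditioner for a symmetric indefinite block system, not the solver or the matrices used in this paper — one may proceed as follows:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy symmetric indefinite block system [[A, B^T], [B, -C]] standing in for a
# displacement/pressure coupling; A and C are SPD stand-ins for the diagonal blocks.
n, m = 200, 60
rng = np.random.default_rng(0)
R = sp.random(n, n, density=0.05, random_state=0)
A = (R @ R.T + sp.identity(n)).tocsc()
B = sp.random(m, n, density=0.1, random_state=1).tocsc()
C = sp.identity(m, format="csc")
K = sp.bmat([[A, B.T], [B, -C]], format="csc")
rhs = rng.standard_normal(n + m)

# Block-diagonal (Riesz-map-like) preconditioner diag(A, C), applied by direct solves;
# in practice each block would be replaced by a cheaper, spectrally equivalent operator.
A_inv, C_inv = spla.factorized(A), spla.factorized(C)
M = spla.LinearOperator((n + m, n + m),
                        matvec=lambda x: np.concatenate([A_inv(x[:n]), C_inv(x[n:])]))

x, info = spla.minres(K, rhs, M=M)
print("MINRES info:", info)  # 0 indicates convergence
```

In the setting of this paper the diagonal blocks discretise the parameter-weighted norms appearing in the stability analysis, which is what makes the resulting iteration counts robust with respect to the material parameters.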
The remainder of the paper is structured as follows. Section 2 is devoted to the description of the interfacial problem; it states the boundary and transmission conditions and gives the continuous weak formulation. The discrete problem in two different formulations is defined in Section 3. The stability and solvability of the H(div)-conforming methods are addressed in Section 4. Residual-based _a posteriori_ error estimators are constructed and analysed in Section 5. The operator framework and the definition of a norm-equivalent preconditioner are addressed in Section 6, and numerical examples that confirm the properties of the proposed methods are collected in Section 7.
## 2. Problem statement
### Preliminaries
Standard notation on Lebesgue and Sobolev spaces together with their associated norms will be adopted throughout the presentation. For \(s\geq 0\) and a generic domain \(\mathcal{D}\), the symbol \(\mathrm{H}^{s}(\mathcal{D})\) denotes the usual Sobolev space equipped with the norm \(\|\cdot\|_{s,\mathcal{D}}\) and seminorm \(|\cdot|_{s,\mathcal{D}}\). The case \(s=0\) is understood as the space \(\mathrm{L}^{2}(\mathcal{D})\). Boldfaces will be used to denote vector-valued spaces, maintaining the same notation as scalar spaces for the norms and seminorms. For a Banach space \(V\), we will use the symbol \(V^{\prime}\) to denote its dual space. We also recall the definition of the space \(\mathbf{H}(\mathrm{div},\mathcal{D}):=\{\boldsymbol{v}\in\mathbf{L}^{2}(\mathcal{D}):\mathrm{div}\,\boldsymbol{v}\in\mathrm{L}^{2}(\mathcal{D})\}\), which is of Hilbert type when endowed with the norm \(\|\boldsymbol{v}\|_{\mathrm{div},\mathcal{D}}^{2}=\|\boldsymbol{v}\|_{0,\mathcal{D}}^{2}+\|\mathrm{div}\,\boldsymbol{v}\|_{0,\mathcal{D}}^{2}\). As usual, throughout the paper the notation \(A\lesssim B\) will abbreviate the inequality \(A\leq CB\), where \(C\) is a generic constant that does not depend on the maximal mesh sizes \(h\) nor on the sensitive parameters of the model, in particular the Lame parameters on each subdomain (and similarly for \(A\gtrsim B\)). The constants in the inequalities will be specified whenever necessary from the context.
### The transmission problem
Following the problem setup from [25, 26], let us consider a bounded Lipschitz domain \(\Omega\subset\mathbb{R}^{d}\), \(d\in\{2,3\}\), together with a partition into non-overlapping and connected subdomains \(\Omega^{\mathrm{E}}\), \(\Omega^{\mathrm{P}}\) representing zones occupied by an elastic body (e.g., a non-pay rock, in the context of reservoir modelling) and a fluid-saturated poroelastic region (e.g., a reservoir), respectively. The interface between the two subdomains is denoted as \(\Sigma=\partial\Omega^{\mathrm{P}}\cap\partial\Omega^{\mathrm{E}}\), and on it the normal vector \(\boldsymbol{n}\) is assumed to point from \(\Omega^{\mathrm{P}}\) to \(\Omega^{\mathrm{E}}\). The boundary of the domain \(\Omega\) is separated in terms of the boundaries of the two individual subdomains, that is \(\partial\Omega:=\Gamma^{\mathrm{P}}\cup\Gamma^{\mathrm{E}}\), and each of these is then subdivided into disjoint Dirichlet- and Neumann-type sub-boundaries as \(\Gamma^{\mathrm{P}}:=\Gamma^{\mathrm{P}}_{D}\cup\Gamma^{\mathrm{P}}_{N}\) and \(\Gamma^{\mathrm{E}}:=\Gamma^{\mathrm{E}}_{D}\cup\Gamma^{\mathrm{E}}_{N}\), respectively. We assume that all sub-boundaries have positive \((d-1)\)-Hausdorff measure.
In the overall domain we state the momentum balance of the fluid and solid phases on the poroelastic region, the mass conservation of the total amount of fluid, and the balance of linear momentum on the elastic region. In doing so, and following [5], in addition to the usual variables of elastic displacement, poroelastic displacement, and fluid pressure, we employ the total pressure in the poroelastic subdomain, and the Herrmann pressure in the elastic subdomain. For given body loads \(\boldsymbol{b}^{\mathrm{P}}(t):\Omega^{\mathrm{P}}\to\mathbb{R}^{d}\), \(\boldsymbol{b}^{\mathrm{E}}(t):\Omega^{\mathrm{E}}\to\mathbb{R}^{d}\), and a volumetric source or sink \(\ell^{\mathrm{P}}(t):\Omega^{\mathrm{P}}\to\mathbb{R}\), one seeks for each time \(t\in(0,t_{\mathrm{final}}]\), the vector of solid displacements \(\boldsymbol{u}^{\mathrm{E}}:\Omega^{\mathrm{E}}\to\mathbb{R}^{d}\) of the non-pay zone, the elastic pressure \(\varphi^{\mathrm{E}}:\Omega^{\mathrm{E}}\to\mathbb{R}\), the displacement \(\boldsymbol{u}^{\mathrm{P}}(t):\Omega^{\mathrm{P}}\to\mathbb{R}^{d}\), the pore fluid pressure \(p^{\mathrm{P}}(t):\Omega^{\mathrm{P}}\to\mathbb{R}\), and the total pressure \(\varphi^{\mathrm{P}}(t):\Omega^{\mathrm{P}}\to\mathbb{R}\) of the reservoir, satisfying:
\[-\mathrm{div}(2\mu^{\mathrm{P}}\boldsymbol{\varepsilon}( \boldsymbol{u}^{\mathrm{P}})-\varphi^{\mathrm{P}}\mathbf{I}) =\boldsymbol{b}^{\mathrm{P}} \text{in }\Omega^{\mathrm{P}}\times(0,t_{\mathrm{final}}], \tag{2.1a}\] \[\left(c_{0}+\frac{\alpha^{2}}{\lambda^{\mathrm{P}}}\right) \partial_{t}p^{\mathrm{P}}-\frac{\alpha}{\lambda^{\mathrm{P}}} \partial_{t}\varphi^{\mathrm{P}}-\frac{1}{\eta}\,\mathrm{div}(\kappa\nabla p ^{\mathrm{P}}) =\ell^{\mathrm{P}} \text{in }\Omega^{\mathrm{P}}\times(0,t_{\mathrm{final}}],\] (2.1b) \[\varphi^{\mathrm{P}}-\alpha p^{\mathrm{P}}+\lambda^{\mathrm{P} }\,\mathrm{div}\,\boldsymbol{u}^{\mathrm{P}} =0 \text{in }\Omega^{\mathrm{P}}\times(0,t_{\mathrm{final}}],\] (2.1c) \[-\mathrm{div}(2\mu^{\mathrm{E}}\boldsymbol{\varepsilon}( \boldsymbol{u}^{\mathrm{E}})-\varphi^{\mathrm{E}}\mathbf{I}) =\boldsymbol{b}^{\mathrm{E}} \text{in }\Omega^{\mathrm{E}}\times(0,t_{\mathrm{final}}],\] (2.1d) \[\varphi^{\mathrm{E}}+\lambda^{\mathrm{E}}\,\mathrm{div}\, \boldsymbol{u}^{\mathrm{E}} =0 \text{in }\Omega^{\mathrm{E}}\times(0,t_{\mathrm{final}}]. \tag{2.1e}\]
Here \(\kappa(\boldsymbol{x})\) is the hydraulic conductivity of the porous medium, \(\eta\) is the constant viscosity of the interstitial fluid, \(c_{0}\) is the storativity coefficient, \(\alpha\) is the Biot-Willis consolidation parameter, and \(\mu^{\mathrm{E}},\lambda^{\mathrm{E}}\) and \(\mu^{\mathrm{P}},\lambda^{\mathrm{P}}\) are the Lame parameters associated with the constitutive law of the solid on the elastic and on the poroelastic subdomain, respectively. The poroelastic stress \(\boldsymbol{\widetilde{\sigma}}=\boldsymbol{\sigma}-\alpha p^{\mathrm{P}} \mathbf{I}\) is composed by the effective mechanical stress \(\lambda^{\mathrm{P}}(\mathrm{div}\,\boldsymbol{u}^{\mathrm{P}})\mathbf{I}+2\mu^ {\mathrm{P}}\boldsymbol{\varepsilon}(\boldsymbol{u}^{\mathrm{P}})\) plus the non-viscous fluid stress (the fluid pressure scaled with \(\alpha\)). This system is complemented by the following set of boundary conditions
\[\boldsymbol{u}^{\mathrm{P}}=\boldsymbol{0}\quad\text{and}\quad \frac{\kappa}{\eta}\nabla p^{\mathrm{P}}\cdot\boldsymbol{n}^{\Gamma} =0 \text{on }\Gamma^{\mathrm{P}}_{D}\times(0,t_{\mathrm{final}}], \tag{2.2a}\] \[[2\mu^{\mathrm{P}}\boldsymbol{\varepsilon}(\boldsymbol{u}^{ \mathrm{P}})-\varphi^{\mathrm{P}}\mathbf{I}]\boldsymbol{n}^{\Gamma}=\boldsymbol{0} \text{and}\quad p^{\mathrm{P}} =0 \text{on }\Gamma^{\mathrm{P}}_{N}\times(0,t_{\mathrm{final}}],\] (2.2b) \[\boldsymbol{u}^{\mathrm{E}} =\boldsymbol{0} \text{on }\Gamma^{\mathrm{E}}_{D}\times(0,t_{\mathrm{final}}],\] (2.2c) \[[2\mu^{\mathrm{E}}\boldsymbol{\varepsilon}(\boldsymbol{u}^{ \mathrm{E}})-\varphi^{\mathrm{E}}\mathbf{I}]\boldsymbol{n}^{\Gamma} =\boldsymbol{0} \text{on }\Gamma^{\mathrm{E}}_{N}\times(0,t_{\mathrm{final}}]. \tag{2.2d}\]
Here, the partition \(\Gamma^{\mathrm{P}}:=\Gamma^{\mathrm{P}}_{D}\cup\Gamma^{\mathrm{P}}_{N}\) denotes the sub-boundaries where we impose essential (i.e., \(\boldsymbol{u}^{\mathrm{P}}=\boldsymbol{0}\)) and natural boundary conditions (i.e., \(\boldsymbol{\widetilde{\sigma}}\boldsymbol{n}^{\Gamma}=\boldsymbol{0}\)) corresponding to equation (2.1a). For ease of notation, we note that, in this definition, we are assuming that the essential and natural sub-boundaries corresponding to equation (2.1b) (i.e., the ones where we impose \(p^{\mathrm{P}}=0\) and \(\nabla p^{\mathrm{P}}\cdot\boldsymbol{n}^{\Gamma}=0\), respectively) match the natural and essential sub-boundaries associated with (2.1a), respectively. However, in general, this does not have to be the case, and one may choose the partition into essential and natural sub-boundaries for each of these two equations separately.
Along with the previous set of boundary conditions, the system is also complemented by transmission conditions in the absence of external forces (derived by means of homogenisation in [41]) that take the following form
\[\mathbf{u}^{\rm P}=\mathbf{u}^{\rm E},\quad[2\mu^{\rm P}\mathbf{\varepsilon}(\mathbf{u}^{\rm P})-\varphi^{\rm P}\mathbf{I}]\mathbf{n}=[2\mu^{\rm E}\mathbf{\varepsilon}(\mathbf{u}^{\rm E})-\varphi^{\rm E}\mathbf{I}]\mathbf{n},\quad\frac{\kappa}{\eta}\nabla p^{\rm P}\cdot\mathbf{n}=0\quad\text{on }\Sigma\times(0,t_{\rm final}], \tag{2.3}\]
which represent continuity of the medium, the balance of total tractions, and no-flux of fluid at the interface, respectively. An advantage with respect to [5] is that here we can use the full poroelastic stresses to impose the transmission conditions. We also consider the following initial conditions
\[p^{\rm P}(0)=0,\quad\mathbf{u}^{\rm P}(0)=\mathbf{0}\qquad\qquad\text{ in }\Omega^{\rm P}.\]
Homogeneity of the boundary and initial conditions is only assumed to simplify the exposition of the subsequent analysis, however the results remain valid for more general assumptions. We also note that non-homogeneous boundary conditions are used in the numerical experiments.
### A weak formulation
As the paper focuses on the spatial discretisation, we will restrict the problem formulation to the steady case. For this, we define the function spaces
\[\mathbf{V}^{\rm E}:=\mathbf{H}^{1}_{\Gamma^{\rm E}_{D}}(\Omega^{\rm E}),\quad\mathbf{V}^{\rm P}:=\mathbf{H}^{1}_{\Gamma^{\rm P}_{D}}(\Omega^{\rm P}),\quad\mathrm{Q}^{\rm P}:=\mathrm{H}^{1}_{\Gamma^{\rm P}_{N}}(\Omega^{\rm P}),\quad\mathrm{Z}^{\rm E}:=\mathrm{L}^{2}(\Omega^{\rm E}),\quad\mathrm{Z}^{\rm P}:=\mathrm{L}^{2}(\Omega^{\rm P}).\]
Considering a backward Euler discretisation in time with constant time step \(\Delta t\), multiplying the time-discrete version of (2.1b) by adequate test functions, integrating by parts (in space) whenever appropriate, and using the boundary conditions (2.2c)-(2.2d), leads to the following weak formulation: Find \(\mathbf{u}^{\rm P}\in\mathbf{V}^{\rm P},\mathbf{u}^{\rm E}\in\mathbf{V}^{\rm E},p^{\rm P}\in\mathds{Q}^{\rm P},\varphi^{\rm E}\in\mathds{Z}^{\rm E},\varphi^{\rm P}\in\mathds{Z}^{\rm P}\) such that
\[2\mu^{\rm P}(\mathbf{\varepsilon}(\mathbf{u}^{\rm P}),\mathbf{\varepsilon}(\mathbf{v}^{\rm P}))_{0,\Omega^{\rm P}}-(\varphi^{\rm P},\operatorname{div}\mathbf{v}^{\rm P})_{0,\Omega^{\rm P}}-([2\mu^{\rm P}\mathbf{\varepsilon}(\mathbf{u}^{\rm P})-\varphi^{\rm P}\mathbf{I}]\mathbf{n},\mathbf{v}^{\rm P})_{\Gamma^{\rm P}_{D}}=(\mathbf{b}^{\rm P},\mathbf{v}^{\rm P})_{0,\Omega^{\rm P}},\]
\[(\varphi^{\rm P},\psi^{\rm P})_{0,\Omega^{\rm P}}-\alpha(p^{\rm P},\psi^{\rm P })_{0,\Omega^{\rm P}}+\lambda^{\rm P}(\psi^{\rm P},\operatorname{div}\mathbf{u}^{ \rm P})_{0,\Omega^{\rm P}}=0,\]
\[\frac{1}{\Delta t}\bigg{(}c_{0}+\frac{\alpha^{2}}{\lambda^{\rm P}}\bigg{)}(p ^{\rm P},q^{\rm P})_{0,\Omega^{\rm P}}-\frac{1}{\Delta t}\frac{\alpha}{ \lambda^{\rm P}}(\varphi^{\rm P},q^{\rm P})_{0,\Omega^{\rm P}}+\frac{1}{\eta} (\kappa\nabla p^{\rm P},\nabla q^{\rm P})_{0,\Omega^{\rm P}}-\frac{1}{\eta} \langle\kappa\nabla p^{\rm P}\cdot\mathbf{n},q^{\rm P}\rangle_{\partial\Omega^{ \rm P}}=(\ell^{\rm P},q^{\rm P})_{0,\Omega^{\rm P}},\]
\[2\mu^{\rm E}(\mathbf{\varepsilon}(\mathbf{u}^{\rm E}),\mathbf{\varepsilon}(\mathbf{v}^{\rm E}))_{0,\Omega^{\rm E}}-(\varphi^{\rm E},\operatorname{div}\mathbf{v}^{\rm E})_{0,\Omega^{\rm E}}-([2\mu^{\rm E}\mathbf{\varepsilon}(\mathbf{u}^{\rm E})-\varphi^{\rm E}\mathbf{I}]\mathbf{n}^{\partial\Omega^{\rm E}},\mathbf{v}^{\rm E})_{\Gamma^{\rm E}_{D}}=(\mathbf{b}^{\rm E},\mathbf{v}^{\rm E})_{0,\Omega^{\rm E}},\]
\[(\varphi^{\rm E},\psi^{\rm E})_{0,\Omega^{\rm E}}+\lambda^{\rm E}(\psi^{\rm E },\operatorname{div}\mathbf{u}^{\rm E})_{0,\Omega^{\rm E}}=0.\]
Note that we can simply define a _global displacement_ \(\mathbf{u}\in\mathbf{V}:=\mathbf{H}^{1}_{\Gamma_{D}}(\Omega)\) (through continuity of the medium in (2.3)) such that \(\mathbf{u}|_{\Omega^{\rm P}}=\mathbf{u}^{\rm P}\) and \(\mathbf{u}|_{\Omega^{\rm E}}=\mathbf{u}^{\rm E}\); as well as a _global pressure_ (it is the total pressure on the poroelastic medium and the elastic hydrostatic pressure on the elastic subdomain) \(\varphi\in\mathds{Z}:=\mathrm{L}^{2}(\Omega)\) such that \(\varphi|_{\Omega^{\rm P}}=\varphi^{\rm P}\) and \(\varphi|_{\Omega^{\rm E}}=\varphi^{\rm E}\). Similarly, we define the body load \(\mathbf{b}\in\mathbf{\mathrm{L}}^{2}(\Omega)\) composed of \(\mathbf{b}|_{\Omega^{\rm P}}=\mathbf{b}^{\rm P}\) and \(\mathbf{b}|_{\Omega^{\rm E}}=\mathbf{b}^{\rm E}\), and also the global Lame parameters \(\mu\) and \(\lambda\) as \(\mu|_{\Omega^{\rm P}}=\mu^{\rm P}\), \(\lambda|_{\Omega^{\rm P}}=\lambda^{\rm P}\) and \(\mu|_{\Omega^{\rm E}}=\mu^{\rm E}\), \(\lambda|_{\Omega^{\rm E}}=\lambda^{\rm E}\). We also multiply the weak form of the mass conservation equation by \(-1\). The steps above, in combination with the second and third transmission conditions in (2.3), yield: Find \(\mathbf{u}\in\mathbf{V}\), \(p^{\rm P}\in\mathds{Q}^{\rm P}\), \(\varphi\in\mathds{Z}\) such that
\[a_{1}(\mathbf{u},\mathbf{v}) + b_{1}(\mathbf{v},\varphi)=\ F(\mathbf{v})\quad\forall\mathbf{v}\in\mathbf{V}, \tag{2.4a}\] \[-\tilde{a}_{2}\bigg{(}\frac{1}{\Delta t}p^{\rm P},q^{\rm P} \bigg{)}- a_{2}(p^{\rm P},q^{\rm P})+b_{2}\bigg{(}q^{\rm P},\,\frac{1}{\Delta t} \varphi\bigg{)}=G(q^{\rm P})\quad\forall q^{\rm P}\in\mathds{Q}^{\rm P},\] (2.4b) \[b_{1}(\mathbf{u},\psi)+ b_{2}(p^{\rm P},\psi)- a_{3}(\varphi,\psi)= 0\quad\forall\psi\in\mathds{Z}, \tag{2.4c}\]
where the bilinear forms \(a_{1}:\mathbf{V}\times\mathbf{V}\to\mathbb{R}\), \(a_{2}:\mathds{Q}^{\rm P}\times\mathds{Q}^{\rm P}\to\mathbb{R}\), \(a_{3}:\mathds{Z}\times\mathds{Z}\to\mathbb{R}\), \(b_{1}:\mathbf{V}\times\mathds{Z}\to\mathbb{R}\), \(b_{2}:\mathds{Q}^{\rm P}\times\mathds{Z}\to\mathbb{R}\), and linear functionals \(F:\mathbf{V}\to\mathbb{R}\), \(G:Q^{\rm P}\to\mathbb{R}\), adopt the following form
\[a_{1}(\mathbf{u},\mathbf{v}):=2(\mu\,\mathbf{\varepsilon}(\mathbf{u}),\mathbf{ \varepsilon}(\mathbf{v}))_{0,\Omega},\qquad b_{1}(\mathbf{v},\psi):=-(\psi,\operatorname {div}\mathbf{v})_{0,\Omega},\qquad F(\mathbf{v}):=(\mathbf{b},\mathbf{v})_{0,\Omega}, \tag{2.5}\] \[\tilde{a}_{2}(p^{\rm P},q^{\rm P}):=\bigg{(}c_{0}+\frac{\alpha^{2 }}{\lambda^{\rm P}}\bigg{)}(p^{\rm P},q^{\rm P})_{0,\Omega^{\rm P}},\qquad a _{2}(p^{\rm P},q^{\rm P}):=\frac{1}{\eta}(\kappa\nabla p^{\rm P},\nabla q^{ \rm P})_{0,\Omega^{\rm P}},\] \[b_{2}(p^{\rm P},\psi):=\frac{\alpha}{\lambda^{\rm P}}(p^{\rm P },\psi^{\rm P})_{0,\Omega^{\rm P}},\qquad a_{3}(\varphi,\psi):=(\frac{1}{ \lambda}\varphi,\psi)_{0,\Omega},\qquad G(q^{\rm P}):=-(\ell^{\rm P},q^{\rm P })_{0,\Omega^{\rm P}}.\]
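For the reader's convenience, we spell out how the last equation arises: testing (2.1c) divided by \(\lambda^{\rm P}\) against \(\psi^{\rm P}\), testing (2.1e) divided by \(\lambda^{\rm E}\) against \(\psi^{\rm E}\), and adding the two contributions gives
\[\Big(\tfrac{1}{\lambda}\varphi,\psi\Big)_{0,\Omega}-\tfrac{\alpha}{\lambda^{\rm P}}(p^{\rm P},\psi^{\rm P})_{0,\Omega^{\rm P}}+(\operatorname{div}\mathbf{u},\psi)_{0,\Omega}=0,\]
which, after multiplication by \(-1\) and using the definitions in (2.5), is precisely (2.4c).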
### Properties of the continuous weak form and further assumptions
The variational forms above satisfy the continuity bounds
\[a_{1}(\mathbf{u},\mathbf{v})\leq\|\sqrt{2\mu}\,\mathbf{\varepsilon}(\mathbf{u})\|_{0,\Omega}\,\|\sqrt{2\mu}\,\mathbf{\varepsilon}(\mathbf{v})\|_{0,\Omega},\qquad a_{2}(p^{\rm P},q^{\rm P})\leq\|\sqrt{\tfrac{\kappa}{\eta}}\nabla p^{\rm P}\|_{0,\Omega^{\rm P}}\,\|\sqrt{\tfrac{\kappa}{\eta}}\nabla q^{\rm P}\|_{0,\Omega^{\rm P}},\qquad a_{3}(\varphi,\psi)\leq\|\tfrac{1}{\sqrt{\lambda}}\varphi\|_{0,\Omega}\,\|\tfrac{1}{\sqrt{\lambda}}\psi\|_{0,\Omega},\]
\[b_{1}(\mathbf{v},\psi)\lesssim\|\sqrt{2\mu}\,\mathbf{\varepsilon}(\mathbf{v})\|_{0,\Omega}\,\|\tfrac{1}{\sqrt{2\mu}}\psi\|_{0,\Omega},\qquad b_{2}(p^{\rm P},\psi)\leq\|\tfrac{\alpha}{\sqrt{\lambda^{\rm P}}}p^{\rm P}\|_{0,\Omega^{\rm P}}\,\|\tfrac{1}{\sqrt{\lambda^{\rm P}}}\psi\|_{0,\Omega^{\rm P}},\]
for all \(\mathbf{u},\mathbf{v}\in\mathbf{V}\), \(\psi,\varphi\in\mathrm{Z}\), \(p^{\mathrm{P}},q^{\mathrm{P}}\in\mathrm{Q}^{\mathrm{P}}\). There also holds coercivity of the diagonal bilinear forms
\[a_{1}(\mathbf{v},\mathbf{v})\geq\|\sqrt{2\mu}\mathbf{\varepsilon}(\mathbf{v})\|_{0,\Omega}^{2} \gtrsim\|\sqrt{2\mu}\mathbf{v}\|_{1,\Omega}^{2},\quad|a_{2}(q^{\mathrm{P}},q^{ \mathrm{P}})|\geq\|\sqrt{\frac{\kappa}{\eta}}\nabla q^{\mathrm{P}}\|_{0, \Omega^{\mathrm{P}}}^{2},\quad a_{3}(\psi,\psi)\geq\|\frac{1}{\sqrt{\lambda}} \psi\|_{0,\Omega}^{2},\]
for all \(\mathbf{v}\in\mathbf{V}\), \(q^{\mathrm{P}}\in\mathrm{Q}^{\mathrm{P}}\), \(\psi\in\mathrm{Z}\), and the following inf-sup condition (see, e.g., [23]): There exists \(\xi>0\) such that
\[\sup_{\mathbf{0}\neq\mathbf{v}\in\mathbf{V}}\frac{b_{1}(\mathbf{v},\psi)}{\|\mathbf{v}\|_{1,\Omega}}\geq\xi\|\psi\|_{0,\Omega}\qquad\forall\psi\in\mathrm{Z}. \tag{2.6}\]
Details on the unique solvability of the continuous problem can be found in [25, 27], or, for the steady case with rotation-based formulations, in [4, 5] (but in those references the analysis assumes that the pay-zone poroelastic subdomain is completely confined by the elastic structure).
Similarly to the relevant inf-sup condition (2.6), we have that for each \(\varphi_{0}\in\mathrm{L}^{2}(\Omega)\) with \(\varphi_{0}|_{\Omega^{\mathrm{P}}}=\varphi_{0}^{\mathrm{E}}\in\mathrm{L}^{2}( \Omega^{\mathrm{E}})\) and \(\varphi_{0}|_{\Omega^{\mathrm{P}}}=\varphi_{0}^{\mathrm{P}}\in\mathrm{L}^{2}( \Omega^{\mathrm{P}})\), we can find \(\mathbf{v}_{0}^{\mathrm{E}}\in\mathbf{H}_{\Gamma^{\mathrm{E}}_{\mathrm{P},0}}^{ \mathrm{I}}(\Omega^{\mathrm{E}})\) and \(\mathbf{v}_{0}^{\mathrm{P}}\in\mathbf{H}_{\Gamma^{\mathrm{P},0}}^{\mathrm{I}}( \Omega^{\mathrm{P}})\), where \(\mathbf{H}_{\Gamma^{\mathrm{E}}_{\mathrm{P},0}}^{\mathrm{I}}(\Omega^{\mathrm{ E}})=\{\mathbf{v}:\mathbf{v}\in\mathbf{H}_{\Gamma^{\mathrm{E}}_{\mathrm{D}}}^{ \mathrm{I}}(\Omega^{\mathrm{E}})\) and \(\mathbf{v}|_{\Sigma}=\mathbf{0}\}\) and \(\mathbf{H}_{\Gamma^{\mathrm{P}}_{\mathrm{D},0}}^{\mathrm{I}}(\Omega^{\mathrm{ P}})=\{\mathbf{v}:\mathbf{v}\in\mathbf{H}_{\Gamma^{\mathrm{P}}_{\mathrm{D}}}^{ \mathrm{I}}(\Omega^{\mathrm{P}})\) and \(\mathbf{v}|_{\Sigma}=\mathbf{0}\}\), such that
\[(\mathrm{div}\;\mathbf{v}_{0}^{\mathrm{E}},\varphi_{0}^{\mathrm{E}}) _{0,\Omega^{\mathrm{E}}} \geq C_{\Omega^{\mathrm{E}}}/\mu^{\mathrm{E}}\|\varphi_{0}^{ \mathrm{E}}\|_{0,\Omega^{\mathrm{E}}}^{2},\quad\sqrt{2\mu^{\mathrm{E}}}\|\mathbf{ \nabla}\mathbf{v}_{0}^{\mathrm{E}}\|_{0,\Omega^{\mathrm{E}}}\leq 1/\sqrt{2\mu^{ \mathrm{E}}}\|\varphi_{0}^{\mathrm{E}}\|_{0,\Omega^{\mathrm{E}}},\] \[(\mathrm{div}\;\mathbf{v}_{0}^{\mathrm{P}},\varphi_{0}^{\mathrm{P}}) _{0,\Omega^{\mathrm{P}}} \geq C_{\Omega^{\mathrm{P}}}/\mu^{\mathrm{P}}\|\varphi_{0}^{ \mathrm{P}}\|_{0,\Omega^{\mathrm{P}}},\quad\sqrt{2\mu^{\mathrm{P}}}\|\mathbf{ \nabla}\mathbf{v}_{0}^{\mathrm{P}}\|_{0,\Omega^{\mathrm{P}}}\leq 1/\sqrt{2\mu^{ \mathrm{P}}}\|\varphi_{0}^{\mathrm{P}}\|_{0,\Omega^{\mathrm{P}}}.\]
Hence, there exists \(\mathbf{v}_{0}\in\mathbf{V}\) such that \(\mathbf{v}_{0}|_{\Omega^{\mathrm{E}}}=\mathbf{v}_{0}^{\mathrm{E}}\) and \(\mathbf{v}_{0}|_{\Omega^{\mathrm{P}}}=\mathbf{v}_{0}^{\mathrm{P}}\). Moreover we have
\[\sup_{\mathbf{0}\neq\mathbf{v}\in\mathbf{V}}\frac{b_{1}(\mathbf{v},\varphi_{0})}{\|\sqrt{2 \mu}\mathbf{v}\|_{1,\Omega}}\geq\tilde{C}\|\frac{1}{\sqrt{2\mu}}\varphi_{0}\|_{0, \Omega},\]
or, following also [42], we can write
\[\sup_{\mathbf{0}\neq\mathbf{v}\in\mathbf{V}}\frac{b_{1}(\mathbf{v},\varphi_{0})}{\|\sqrt{2 \mu}\mathbf{\varepsilon}(\mathbf{v})\|_{0,\Omega}}\geq\tilde{C}\|\frac{1}{\sqrt{2\mu}} \varphi_{0}\|_{0,\Omega},\]
for a positive constant \(\tilde{C}\) independent of \(\mu\).
## 3. An H(div)-conforming finite element approximation
We denote by \(\{\mathcal{F}_{h}^{\mathrm{P}}\}_{h}\) and \(\{\mathcal{F}_{h}^{\mathrm{E}}\}_{h}\) sequences of triangular (or tetrahedral in 3D) partitions of the poroelastic and elastic subdomains \(\Omega^{\mathrm{P}}\) and \(\Omega^{\mathrm{E}}\), respectively having diameter \(h_{K}\), and being such that the partitions are conforming with the interface \(\Sigma\). We label by \(K^{-}\) and \(K^{+}\) the two elements adjacent to a facet (an edge in 2D or a face in 3D), while \(h_{e}\) stands for the maximum diameter of the facet. By \(\mathcal{E}_{h}\) we will denote the set of all facets and will distinguish between facets lying on the elastic, poroelastic, and interfacial regions \(\mathcal{E}_{h}=\mathcal{E}_{h}^{\mathrm{E}}\cup\mathcal{E}_{h}^{\mathrm{P}} \cup\mathcal{E}_{h}^{\mathrm{E}}\).
For a smooth vector, scalar, or tensor field \(w\) defined on \(\mathcal{F}_{h}\), \(w^{\pm}\) denote its traces taken from the interior of \(K^{+}\) and \(K^{-}\), respectively. We also denote by \(\mathbf{n}^{\pm}\) the outward unit normal vector to \(K^{\pm}\). The symbols \(\{\!\{\!\cdot\}\!\}\) and \(\{\!\{\!\cdot\}\!\}\) denote, respectively, the average and jump operators, defined as
\[\{\!\{\!\mathbf{w}\}\!\}\coloneqq\frac{1}{2}(w^{-}+w^{+}),\quad\llbracket\mathbf{v} \odot\mathbf{n}\rrbracket\coloneqq(w^{-}\odot\mathbf{n}^{-}+w^{+}\odot\mathbf{n}^{+}), \tag{3.1}\]
for a generic multiplication operator \(\odot\), which applies to interior edges, whereas for boundary jumps and averages we adopt the conventions \(\{\!\{\!w\}\!\}=w\), and \(\llbracket w\odot\mathbf{n}\rrbracket=w\odot\mathbf{n}\). The element-wise action of a differential operator is denoted with a subindex \(h\), for example, \(\mathbf{\nabla}_{h}\), \(\mathbf{\nabla}_{h}\) will denote the broken gradient operators for scalar and vector quantities, respectively and \(\mathbf{\varepsilon}_{h}(\cdot)=\frac{1}{2}(\mathbf{\nabla}_{h}\cdot+(\mathbf{\nabla}_{h} \cdot)^{T})\) is the symmetrised vector broken gradient.
Let \(\mathbb{P}_{k}(K)\) denote the local space spanned by polynomials of degree up to \(k\geq 0\), and let us consider the following discrete spaces
\[\begin{split}\mathbf{V}_{h}&\coloneqq\big{\{}\mathbf{v}_{h} \in\mathbf{H}(\mathrm{div};\Omega):\mathbf{v}_{h}|_{K}\in\left[\mathbb{P}_{k+1}(K) \right]^{d}\quad\forall K\in\mathcal{F}_{h},\quad\mathbf{v}_{h}\cdot\mathbf{n}|_{ \Gamma^{\mathrm{E}}_{h}\cup\Gamma^{\mathrm{E}}_{D}}=0\big{\}},\\ \mathrm{Q}_{h}^{\mathrm{P}}&\coloneqq\big{\{}q_{h}^{ \mathrm{P}}\in\mathrm{Q}^{\mathrm{P}}:q_{h}|_{K}\in\mathbb{P}_{k+1}(K) \quad\forall K\in\mathcal{F}_{h}^{\mathrm{P}}\big{\}},\\ \mathrm{Z}_{h}&\coloneqq\big{\{}\psi_{h}\in\mathrm{Z}: \psi_{h}|_{K}\in\mathbb{P}_{k}(K)\quad\forall K\in\mathcal{F}_{h}\big{\}},\end{split} \tag{3.2}\]
which, in particular, satisfy the so-called equilibrium property
\[\mathrm{div}\;\mathbf{V}_{h}=\mathrm{Z}_{h}. \tag{3.3}\]
Note that in this case \(\mathbf{V}_{h}\) is the space of divergence-conforming BDM elements [15], and it is not conforming with \(\mathbf{V}\). Its basic approximation property, locally on \(K\in\mathcal{F}_{h}\), is that for all \(\mathbf{v}\in\mathbf{H}^{s}(K)\), there exists an interpolant \(\mathbf{v}_{
### Formulation with continuous fluid pressure
The Galerkin finite element formulation then reads: Find \((\mathbf{u}_{h},p^{\mathrm{P}}_{h},\varphi_{h})\in\mathbf{V}_{h}\times\mathrm{Q}^{ \mathrm{P}}_{h}\times\mathrm{Z}_{h}\) such that:
\[a^{h}_{1}(\mathbf{u}_{h},\mathbf{v}_{h}) +\,b_{1}(\mathbf{v}_{h},\varphi_{h}) =F(\mathbf{v}_{h}) \forall\mathbf{v}_{h}\in\mathbf{V}_{h}, \tag{3.5a}\] \[-\tilde{a}_{2}(p^{\mathrm{P}}_{h},q^{\mathrm{P}}_{h})- a_{2}(p^{\mathrm{P}}_{h},q^{\mathrm{P}}_{h}) +\,b_{2}(q^{\mathrm{P}}_{h},\,\varphi_{h}) =G(q^{\mathrm{P}}_{h}) \forall q^{\mathrm{P}}_{h}\in\mathrm{Q}^{\mathrm{P}}_{h},\] (3.5b) \[b_{1}(\mathbf{u}_{h},\psi_{h})+b_{2}(p^{\mathrm{P}}_{h},\psi_{h}) -a_{3}(\varphi_{h},\psi_{h}) =\qquad 0 \forall\psi_{h}\in\mathrm{Z}_{h}, \tag{3.5c}\]
where \(a^{h}_{1}(\cdot,\cdot)\) is the discrete version of the bilinear form \(a_{1}(\cdot,\cdot)\) and it is defined using a symmetric interior penalty from [35] (see also [33] for its use in the context of poroelasticity)
\[a^{h}_{1}(\mathbf{u}_{h},\mathbf{v}_{h}) \coloneqq 2(\mu\mathbf{e}_{h}(\mathbf{u}_{h}),\mathbf{e}_{h}(\mathbf{v}_{h}))_{0, \Omega}-2\sum_{e\in\delta_{h}\cup\Gamma^{\mathrm{P}}_{D}}\bigl{(}\langle\{\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
where \(a_{2}^{h}(\cdot,\cdot)\) is the discrete version of the bilinear form \(a_{2}(\cdot,\cdot)\) and it is defined using a symmetric interior penalty from, e.g., the classical paper [6]
\[a_{2}^{h}(p_{h}^{\mathrm{p}},q_{h}^{\mathrm{p}})\] \[+\sum_{e\in\xi_{b}^{\mathrm{p}}\cup\mathrm{U}_{\mathrm{D}}^{ \mathrm{p}}}\frac{\beta_{p^{\mathrm{p}}}}{h_{e}}\bigg{\langle}\frac{\kappa}{ \eta}\llbracket p_{h}^{\mathrm{p}}\boldsymbol{n}\rrbracket,\llbracket q_{h}^{ \mathrm{p}}\boldsymbol{n}\rrbracket\bigg{\rangle}_{0,e}, \tag{3.10}\]
where \(\beta_{p^{\mathrm{p}}}>0\) is a parameter penalising the pressure jumps. If \(\beta_{p^{\mathrm{p}}}\) is sufficiently large, this yields the coercivity of the bilinear form \(a_{2}^{h}\).
Next, and as done for the case of continuous pressure approximation, we write down the compact form of the above weak formulation (3.9): Find \((\boldsymbol{u}_{h},p_{h}^{\mathrm{p}},\varphi_{h})\in\mathbf{V}_{h}\times \widetilde{\mathrm{Q}}_{h}^{\mathrm{p}}\times\mathrm{Z}_{h}\) such that
\[\widetilde{M}_{h}(\boldsymbol{u}_{h},p_{h}^{\mathrm{p}},\varphi_{h}; \boldsymbol{v}_{h},q_{h}^{\mathrm{p}},\psi_{h})=F(\boldsymbol{v}_{h})+G(q_{ h}^{\mathrm{p}})\quad\forall(\boldsymbol{v}_{h},q_{h}^{\mathrm{p}},\psi_{h}) \in\mathbf{V}_{h}\times\widetilde{\mathrm{Q}}_{h}^{\mathrm{p}}\times\mathrm{ Z}_{h},\]
where
\[\widetilde{M}_{h}(\boldsymbol{u}_{h},p_{h}^{\mathrm{p}},\varphi_{ h};\boldsymbol{v}_{h},q_{h}^{\mathrm{p}},\psi_{h})=a_{1}^{h}(\boldsymbol{u}_{h}, \boldsymbol{v}_{h})+b_{1}(\boldsymbol{v}_{h},\varphi_{h})-\tilde{a}_{2}(p_{h }^{\mathrm{p}},q_{h}^{\mathrm{p}})- a_{2}^{h}(p_{h}^{\mathrm{p}},q_{h}^{\mathrm{p}})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+b_{2}(q_{ h}^{\mathrm{p}},\,\varphi_{h})+b_{1}(\boldsymbol{u}_{h},\psi_{h})+b_{2}(p_{h }^{\mathrm{p}},\psi_{h})-a_{3}(\varphi_{h},\psi_{h}).\]
## 4. Unique solvability of the discrete problems and _a priori_ error estimates
### Well-posedness analysis for formulation (3.5)
We proceed by means of a Fortin argument and consider the canonical interpolation operator \(\Pi_{h}:\mathbf{V}\to\mathbf{V}_{h}\) such that
\[b_{1}(\boldsymbol{v}-\Pi_{h}\boldsymbol{v},\varphi_{h}) =0\qquad\forall\varphi_{h}\in\mathrm{Z}_{h}, \tag{4.1a}\] \[|\boldsymbol{v}-\Pi_{h}\boldsymbol{v}|_{s,K} \leq Ch^{t-s}|\boldsymbol{v}|_{t,K}\quad\forall K\in\mathscr{T}_{h}, \tag{4.1b}\]
where \(C\) is a positive constant which depends only on the shape of \(K\) and \(1\leq t\leq r+1\) (see, e.g., [13]).
Using the trace inequality and property (4.1b), we have the following bound in one of the norms from (3.7)
\[\|\boldsymbol{v}-\Pi_{h}\boldsymbol{v}\|_{*,\mathscr{T}_{h}}\leq\|\boldsymbol {v}\|_{*,\mathscr{T}_{h}}\quad\forall\boldsymbol{v}\in\mathbf{V}.\]
Moreover, we have
\[\|\Pi_{h}\boldsymbol{v}\|_{*,\mathscr{T}_{h}}\leq C\|\boldsymbol{v}\|_{*, \mathscr{T}_{h}}.\]
With these properties satisfied by the operator \(\Pi_{h}\), we can use the continuous inf-sup condition to readily show that there exists \(\xi>0\) such that a discrete inf-sup condition for the bilinear form \(b_{1}\):
\[\sup_{v\in\mathbf{V}_{h}\setminus\{\boldsymbol{0}\}}\frac{b_{1}(\boldsymbol{v },\varphi_{h})}{\|\boldsymbol{v}\|_{*,\mathscr{T}_{h}}}\geq\sup_{\Pi_{h}\boldsymbol {v}\in\mathbf{V}_{h}\setminus\{\boldsymbol{0}\}}\frac{b_{1}(\boldsymbol{v}, \varphi_{h})}{\|\Pi_{h}\boldsymbol{v}\|_{*,\mathscr{T}_{h}}}=\sup_{v\in\mathbf{ V}\setminus\{\boldsymbol{0}\}}\frac{b_{1}(\boldsymbol{v},\varphi_{h})}{\| \Pi_{h}\boldsymbol{v}\|_{*,\mathscr{T}_{h}}}\geq\frac{1}{C}\sup_{v\in\mathbf{ V}\setminus\{\boldsymbol{0}\}}\frac{b_{1}(\boldsymbol{v},\varphi_{h})}{\| \boldsymbol{v}\|_{*,\mathscr{T}_{h}}}\geq\xi\|\frac{1}{\sqrt{2\mu}}\varphi_{h} \|_{0,\Omega}\quad\forall\varphi_{h}\in\mathrm{Z}_{h}.\]
We are now in a position to establish a global inf-sup condition.
**Theorem 4.1**.: _For every \((\boldsymbol{u}_{h},p_{h}^{\mathrm{p}},\varphi_{h})\in\mathbf{V}_{h}\times \mathrm{Q}_{h}^{\mathrm{p}}\times\mathrm{Z}_{h}\), there exists \((\boldsymbol{v}_{h},q_{h}^{\mathrm{p}},\psi_{h})\in\mathbf{V}_{h}\times \mathrm{Q}_{h}^{\mathrm{p}}\times\mathrm{Z}_{h}\) with \(\llbracket(\boldsymbol{v}_{h},q_{h}^{\mathrm{p}},\psi_{h})\rrbracket\lesssim \llbracket(\boldsymbol{u}_{h},p_{h}^{\mathrm{p}},\varphi_{h})\rrbracket\) such that_
\[M_{h}(\boldsymbol{u}_{h},p_{h}^{\mathrm{p}},\varphi_{h};\boldsymbol{v}_{h},q_{h }^{\mathrm{p}},\psi_{h})\gtrsim\llbracket\boldsymbol{u}_{h},p_{h}^{\mathrm{p}}, \varphi_{h}\rrbracket^{2},\]
_where_
\[\llbracket\boldsymbol{v}_{h},q_{h}^{\mathrm{p}},\psi_{h}\rrbracket^{2}:=\| \boldsymbol{v}_{h}\|_{*,\mathscr{T}_{h}}^{2}+\|\frac{1}{\sqrt{2\mu}}\psi_{h} \|_{0,\Omega}^{2}+\frac{1}{\lambda^{\mathrm{E}}}\|\psi_{h}\|_{0,\Omega^{ \mathrm{E}}}^{2}+\frac{1}{\lambda^{\mathrm{P}}}\|\psi_{h}-\alpha q_{h}^{ \mathrm{p}}\|_{0,\Omega^{\mathrm{E}}}^{2}+c_{0}\|q_{h}^{\mathrm{p}}\|_{0, \Omega^{\mathrm{P}}}^{2}+\|\frac{\kappa}{\eta}\nabla q_{h}^{\mathrm{p}}\|_{0, \Omega^{\mathrm{P}}}^{2}. \tag{4.2}\]
_Moreover, we have that_
\[\big{|}M_{h}(\boldsymbol{u}_{h},p_{h}^{\mathrm{p}},\varphi_{h};\boldsymbol{v}_{h}, q_{h}^{\mathrm{p}},\psi_{h})\big{|}\lesssim\llbracket\boldsymbol{u}_{h},p_{h}^{ \mathrm{p}},\varphi_{h}\rrbracket\llbracket\boldsymbol{1}(\boldsymbol{v}_{h},q_ {h}^{\mathrm{p}},\psi_{h})\rrbracket,\]
_for all \((\boldsymbol{u}_{h},p_{h}^{\mathrm{p}},\varphi_{h}),(\boldsymbol{v}_{h},q_{h}^{ \mathrm{p}},\psi_{h})\in\mathbf{V}_{h}\times\mathrm{Q}_{h}^{\mathrm{p}}\times \mathrm{Z}_{h}\)._
Proof.: Let \((\boldsymbol{u}_{h},p_{h}^{\mathrm{p}},\varphi_{h})\in\mathbf{V}_{h}\times \mathrm{Q}_{h}^{\mathrm{p}}\times\mathrm{Z}_{h}\) be arbitrary. Using the definition of the multilinear form \(M_{h}\), we easily obtain
\[M_{h}(\boldsymbol{u}_{h},p_{h}^{\mathrm{p}},\varphi_{h};\boldsymbol{v}_{h},0,0)=a _{1}^{h}(\boldsymbol{u}_{h},\boldsymbol{v}_{h})+b_{1}(\boldsymbol{v}_{h}, \varphi_{h})\geq\bigg{(}\xi-\frac{1}{2\epsilon_{1}}\bigg{)}\,\|\frac{1}{\sqrt{2 \mu}}\varphi_{h}\|_{0,\Omega}^{2}-\frac{\epsilon_{1}}{2}\|\boldsymbol{u}_{h}\|_{*, \mathscr{T}_{h}}.\]
Selecting \(\boldsymbol{v}=\boldsymbol{u}_{h}\), \(q^{\mathrm{p}}=-p_{h}^{\mathrm{p}}\) and \(\psi=-\varphi_{h}\) we have
\[M_{h}(\boldsymbol{u}_{h},p_{h}^{\mathrm{p}},\varphi_{h},\boldsymbol{u}_{h},-p_{h}^{\mathrm{p}},-\varphi_{h}) =a_{1}^{h}(\boldsymbol{u}_{h},\boldsymbol{u}_{h})+\tilde{a}_{2}(p_{h }^{\mathrm{p}},p_{h}^{\mathrm{p}})+a_{2}(p_{h}^{\mathrm{p}},p_{h}^{\mathrm{p}})- 2b_{2}(p_{h}^{\mathrm{p}},\,\varphi_
Then we can make the choice \(\mathbf{v}=\mathbf{u}_{h}+\delta_{1}\mathbf{v}_{h}\), \(q^{\mathrm{P}}=-p_{h}^{\mathrm{P}}\) and \(\psi=-\varphi_{h}\), leading to
\[M_{h}(\mathbf{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h},\mathbf{u}_{h}+\delta_ {1}\mathbf{v}_{h},-p_{h}^{\mathrm{P}},-\varphi_{h})\] \[=M_{h}(\mathbf{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h},\mathbf{u}_{h},-p_{h }^{\mathrm{P}},-\varphi_{h})+\delta_{1}M_{h}(\mathbf{u}_{h},p_{h}^{\mathrm{P}}, \varphi_{h},\mathbf{v}_{h},0,0)\] \[\geq C_{2}\|\mathbf{u}_{h}\|_{2,\mathcal{S}_{h}}^{2}+c_{0}\|p_{h}^{ \mathrm{P}}\|_{0,\mathrm{\Omega}^{\mathrm{P}}}^{2}+1/\lambda^{\mathrm{E}}\| \varphi_{h}\|_{0,\mathrm{\Omega}^{\mathrm{E}}}^{2}+1/\lambda^{\mathrm{P}}\| \varphi_{h}-\alpha p_{h}^{\mathrm{P}}\|_{0,\mathrm{\Omega}^{\mathrm{P}}}^{2}\] \[\qquad+c_{0}\|p_{h}^{\mathrm{P}}\|_{0,\mathrm{\Omega}^{\mathrm{P }}}^{2}+\|\kappa/\eta(\nabla p_{h}^{\mathrm{P}})\|_{0,\mathrm{\Omega}^{ \mathrm{P}}}^{2}+\delta_{1}\left(\frac{1}{2\epsilon_{1}}\right)\|\frac{1}{\sqrt {2\mu}}\varphi_{h}\|_{0,\mathrm{\Omega}^{\mathrm{P}}}^{2}-\delta_{1}\frac{ \epsilon_{1}}{2}\|\mathbf{u}_{h}\|_{*,\mathcal{S}_{h}}\] \[\geq\left(C_{2}-\delta_{1}\frac{\epsilon_{1}}{2}\right)\|\mathbf{u}_{ h}\|_{*,\mathcal{S}_{h}}^{2}+c_{0}\|p_{h}^{\mathrm{P}}\|_{0,\mathrm{\Omega}^{ \mathrm{P}}}^{2}+1/\lambda^{\mathrm{E}}\|\varphi_{h}\|_{0,\mathrm{\Omega}^{ \mathrm{E}}}^{2}+1/\lambda^{\mathrm{P}}\|\varphi_{h}-\alpha p_{h}^{\mathrm{P} }\|_{0,\mathrm{\Omega}^{\mathrm{P}}}^{2}\] \[\qquad+c_{0}\|p_{h}^{\mathrm{P}}\|_{0,\mathrm{\Omega}^{\mathrm{P }}}^{2}+\|\kappa/\eta(\nabla p_{h}^{\mathrm{P}})\|_{0,\mathrm{\Omega}^{ \mathrm{P}}}^{2}+\delta_{1}\left(\xi-\frac{1}{2\epsilon_{1}}\right)\|\frac{1}{ \sqrt{2\mu}}\varphi_{h}\|_{0,\mathrm{\Omega}}^{2},\]
where we have used Young's inequality. Assuming the values \(\epsilon_{1}=1/\xi\) and \(\delta_{1}=C_{2}/\epsilon_{1}\), we then have
\[M_{h}(\mathbf{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h};\mathbf{v}_{h},q_{h}^{\mathrm{P}}, \psi_{h})\geq\frac{1}{2}\min\left\{C_{2}\xi^{2},C_{2}\right\}\|\mathbf{(u}_{h},p_{ h}^{\mathrm{P}},\varphi_{h})\|^{2},\]
and the first part of the proof concludes after realising that
\[\|\mathbf{(v}_{h},q_{h}^{\mathrm{P}},\varphi_{h})\|^{2}=\|\mathbf{(u}_{h}+\delta_{1} \mathbf{v},-p_{h}^{\mathrm{P}},-\varphi_{h})\|^{2}\leq 2\|\mathbf{(u}_{h},p_{h}^{ \mathrm{P}},\varphi_{h})\|^{2}.\]
For the continuity property, it suffices to apply Cauchy-Schwarz inequality and the definition of \(M_{h}\).
**Lemma 4.1**.: _Let \((\tilde{\mathbf{u}},\tilde{p}^{\mathrm{P}},\tilde{\varphi})\) be a generic triplet in \(\mathbf{V}_{h}\times\mathrm{Q}_{h}^{\mathrm{P}}\times\mathrm{Z}_{h}\). Then the following estimate holds_
\[\|\mathbf{(u}-\mathbf{u}_{h},p-p_{h}^{\mathrm{P}},\varphi-\varphi_{h})\|\mathbf{1}\lesssim \|\mathbf{(}u-\tilde{\mathbf{u}},p^{\mathrm{P}}-\tilde{p}^{\mathrm{P}},\varphi-\tilde {\varphi})\|\mathbf{1}+\left(\sum_{K\in\mathcal{S}_{h}}h_{K}^{2}|\sqrt{2\mu}(\mathbf{u }-\tilde{\mathbf{u}})|_{2,K}^{2}\right)^{1/2}.\]
Proof.: Directly from triangle inequality we have
\[\|\mathbf{(}u-\mathbf{u}_{h},p-p_{h}^{\mathrm{P}},\varphi-\varphi_{h})\|\mathbf{1}\leq\| \mathbf{(}u-\tilde{\mathbf{u}},p^{\mathrm{P}}-\tilde{p}^{\mathrm{P}},\varphi-\tilde{ \varphi})\|\mathbf{1}+\mathbf{1}(\tilde{\mathbf{u}}-\mathbf{u}_{h},\tilde{p}^{\mathrm{P}}-p_{h }^{\mathrm{P}},\tilde{\varphi}-\varphi_{h})\|\mathbf{1}.\]
Using Theorem 4.1 and the properties of \(M_{h}\) gives
\[\|\mathbf{(}\tilde{\mathbf{u}}-\mathbf{u}_{h},\tilde{p}^{\mathrm{P}}-p_{h}^{ \mathrm{P}},\tilde{\varphi}-\varphi_{h})\|\mathbf{1}^{2} \lesssim M_{h}(\tilde{\mathbf{u}}-\mathbf{u}_{h},\tilde{p}^{\mathrm{P}}-p_{h}^{ \mathrm{P}},\tilde{\varphi}-\varphi_{h};\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})\] \[\leq M_{h}(\tilde{\mathbf{u}},\tilde{p}^{\mathrm{P}},\tilde{\varphi},\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})-M_{h}(\mathbf{u}_{h},p_{h}^{\mathrm{P}}, \varphi_{h};\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})\] \[\leq M_{h}(\tilde{\mathbf{u}}-\mathbf{u},\tilde{p}^{\mathrm{P}}-p^{ \mathrm{P}},\tilde{\varphi}-\varphi;\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})\] \[\lesssim\|\mathbf{(}u-\tilde{\mathbf{u}},p^{\mathrm{P}}-\tilde{p}^{\mathrm{P }},\varphi-\tilde{\varphi})\|\mathbf{1}\] \[\qquad\qquad+\left(\sum_{K\in\mathcal{S}_{h}}h_{K}^{2}|\sqrt{2\mu}( \mathbf{u}-\tilde{\mathbf{u}})|_{2,K}^{2}\right)^{1/2}\|\mathbf{(}\tilde{\mathbf{u}}-\mathbf{u}_{h}, \tilde{p}^{\mathrm{P}}-p_{h}^{\mathrm{P}},\tilde{\varphi}-\varphi_{h})\|\mathbf{1}.\]
### Well-posedness analysis for formulation (3.9)
Proceeding similarly to the proof of Theorem 4.1, we can establish the following result.
**Theorem 4.2**.: _For every \((\mathbf{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h})\in\mathbf{V}_{h}\times\widetilde{ \mathrm{Q}}_{h}^{\mathrm{P}}\times Z_{h}\), there exists \((\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})\in\mathbf{V}_{h}\times\widetilde{ \mathrm{Q}}_{h}^{\mathrm{P}}\times Z_{h}\) with \(\|\mathbf{(}\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})\|\mathbf{1}\lesssim\|\mathbf{(}u_{h},p_{h }^{\mathrm{P}},\varphi_{h})\|\mathbf{1}\) such that_
\[\widetilde{M}_{h}(\mathbf{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h};\mathbf{v}_{h},q_{h}^{ \mathrm{P}},\psi_{h})\gtrsim\|\mathbf{(}\mathbf{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h})\| \mathbf{1}_{*}^{2},\]
_where_
\[\|\mathbf{(}\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})\|\mathbf{1}_{*}^{2}:=\|\mathbf{v}_{h}\|_{*, \mathcal{S}_{h}}^{2}+\|\frac{1}{\sqrt{2\mu}}\psi_{h}\|_{0,\mathrm{\Omega}^{ \mathrm{E}}}^{2}+\frac{1}{\lambda^{\mathrm{E}}}\|\psi_{h}\|_{0,\mathrm{\Omega}^{ \mathrm{E}}}^{2}+\frac{1}{\lambda^{\mathrm{E}}}\|\psi_{h}-\alpha q_{h}^{ \mathrm{P}}\|_{0,\mathrm{\Omega}^{\mathrm{P}}}^{2}+c_{0}\|q_{h}^{\mathrm{P}}\|_{0, \mathrm{\Omega}^{\mathrm{P}}}^{2}+\|q_{h}^{\mathrm{P}}\|_{*,\mathrm{\Omega}^{ \mathrm{P}}}^{2},\]
_and_
\[\left\|q_{h
Proof.: Similarly to the proof of Lemma 4.1, we obtain
\[C_{2}\|(\tilde{\mathbf{u}}-\mathbf{u}_{h},\tilde{p}^{\mathrm{P}}-p_{h}^{ \mathrm{P}},\tilde{\varphi}-\varphi_{h})\|_{*}^{2} \leq\widetilde{M}_{h}(\tilde{\mathbf{u}}-\mathbf{u}_{h},\tilde{p}^{\mathrm{ P}}-p_{h}^{\mathrm{P}},\tilde{\varphi}-\varphi_{h};\mathbf{v}_{h},q_{h}^{\mathrm{P}}, \psi_{h})\] \[\leq\widetilde{M}_{h}(\tilde{\mathbf{u}},\tilde{p}^{\mathrm{P}}, \tilde{\varphi};\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})-M_{h}(\mathbf{u}_{h},p_{h} ^{\mathrm{P}},\varphi_{h};\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})\] \[\leq\widetilde{M}_{h}(\tilde{\mathbf{u}}-\mathbf{u},\tilde{p}^{\mathrm{P}} -p^{\mathrm{P}},\tilde{\varphi}-\varphi;\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})\] \[\leq\mathbb{I}(\mathbf{u}-\tilde{\mathbf{u}},p^{\mathrm{P}}-\tilde{p}^{ \mathrm{P}},\varphi-\tilde{\varphi})\|_{*}+\bigg{(}\sum_{K\in\mathscr{T}_{h}} h_{K}^{2}|\sqrt{2\mu}(\mathbf{u}-\tilde{\mathbf{u}})|_{2,K}^{2}\bigg{)}^{1/2}\] \[\qquad+\bigg{(}\sum_{K\in\mathscr{T}_{h}}\frac{\kappa}{\eta}h_{K }^{2}|\varphi-\tilde{\varphi}|_{2,K}^{2}\bigg{)}^{1/2}\|(\tilde{\mathbf{u}}-\mathbf{u} _{h},\tilde{p}^{\mathrm{P}}-p_{h}^{\mathrm{P}},\tilde{\varphi}-\varphi_{h})\| _{*},\]
where we have used Theorem 4.2 in combination with triangle inequality.
**Theorem 4.3**.: _Let \((\mathbf{u},p^{\mathrm{P}},\varphi)\) and \((\mathbf{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h})\) be the unique solutions of the continuous and discrete problems (2.4) and (3.9), respectively. If \(\mathbf{u}\in\mathbf{V}\cap\Pi^{k+2}(\Omega)\), \(p^{\mathrm{P}}\in\mathrm{Q}\cap H^{k+2}(\Omega^{\mathrm{P}})\) and \(\varphi\in\mathrm{Z}\cap H^{k+1}(\Omega)\) with \(k\geq 0\) then_
\[\|(\mathbf{u}-\mathbf{u}_{h},p^{\mathrm{P}}-p_{h}^{\mathrm{P}},\varphi- \varphi_{h})\|_{*}\lesssim h^{k+1} \bigg{(}|\sqrt{2\mu}\mathbf{u}|_{k+2,\Omega}^{2}+\|\frac{1}{\sqrt{2\mu }}\varphi\|_{k,\Omega}^{2}+\frac{1}{\lambda^{\mathrm{E}}}\|\varphi^{\mathrm{E} }\|_{k+1,\Omega^{\mathrm{E}}}^{2}+\frac{1}{\lambda^{\mathrm{P}}}\|\varphi^{ \mathrm{P}}\|_{k+1,\Omega^{\mathrm{P}}}^{2}\] \[\qquad+(c_{0}+\frac{\alpha^{2}}{\lambda^{\mathrm{P}}})\|p^{ \mathrm{P}}\|_{k+1,\Omega^{\mathrm{P}}}^{2}+\|\frac{\kappa}{\eta}\nabla p^{ \mathrm{P}}\|_{k+2,\Omega^{\mathrm{P}}}^{2}\bigg{)}.\]
Proof.: Combining Lemma 4.2 with the approximation results of H(div)-conforming spaces (3.4) leads to the stated result.
**Remark 4.1**.: _If instead of (3.8) we employ \(RT_{k}\times\mathbb{Q}_{k}\times\mathbb{Q}_{k}\)\((k\geq 1)\) on rectangular meshes (where \(RT_{k}\) is the Raviart-Thomas finite element space and \(\mathbb{Q}_{k}\) is the discontinuous finite element space of degree \(k\)), then the a priori error estimates from Theorem 4.3 are modified as follows_
\[\|(\mathbf{u}-\mathbf{u}_{h},p^{\mathrm{P}}-p_{h}^{\mathrm{P}},\varphi- \varphi_{h})\|_{*}\lesssim h^{k}\bigg{(}|\sqrt{2\mu}\mathbf{u}|_{k+1,\Omega}^{2}+\| \frac{1}{\sqrt{2\mu}}\varphi\|_{k,\Omega}^{2}+\frac{1}{\lambda^{\mathrm{E}}}\| \varphi^{\mathrm{E}}\|_{k,\Omega^{\mathrm{E}}}^{2}+\frac{1}{\lambda^{\mathrm{P} }}\|\varphi^{\mathrm{P}}\|_{k,\Omega^{\mathrm{P}}}^{2}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+(c_ {0}+\frac{\alpha^{2}}{\lambda^{\mathrm{P}}})\|p^{\mathrm{P}}\|_{k,\Omega^{ \mathrm{P}}}^{2}+\|\frac{\kappa}{\eta}\nabla p^{\mathrm{P}}\|_{k+1,\Omega^{ \mathrm{P}}}^{2}\bigg{)}.\]
_The estimate results from using the approximation properties of the corresponding finite element family on quadrilateral meshes._
**Remark 4.2**.: _Consider the following norm for \((\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})\in\mathbf{V}_{h}\times\widetilde{ \mathbb{Q}}_{h}^{\mathrm{P}}\times Z_{h}\)_
\[\|(\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})\|_{*}^{2}:=\|\mathbf{v}_{h}\|_{*,\mathscr{T }_{h}}^{2}+\|\frac{1}{\sqrt{2\mu}}\psi_{h}\|_{0,\Omega^{\mathrm{E}}}^{2}+ \frac{1}{\lambda^{\mathrm{E}}}\|\psi_{h}\|_{0,\Omega^{\mathrm{E}}}^{2}+\frac{1}{ \lambda^{\mathrm{P}}}\|\psi_{h}\|_{0,\Omega^{\mathrm{P}}}^{2}+(c_{0}+\frac{ \alpha^{2}}{\lambda^{\mathrm{P}}})\|q_{h}^{\mathrm{P}}\|_{0,\Omega^{\mathrm{P}}} ^{2}+\|q_{h}^{\mathrm{P}}\|_{*,\Omega^{\mathrm{P}}}^{2}.\]
_To establish the equivalence between \(\mathbb{I}\cdot\|_{*}\) and \(\|\mathbb{I}\cdot\|_{*}\) in \(\mathbf{V}_{h}\times\widetilde{\mathbb{Q}}_{h}^{\mathrm{P}}\times Z_{h}\) we need to prove that_
\[\|(\mathbf{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h})\|_{*}\lesssim\|(\mathbf{u} _{h},p_{h}^{\mathrm{P}},\varphi_{h})\|_{**}, \tag{4.3a}\] \[\|(\mathbf{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h})\|_{**}\lesssim\|(\mathbf{u} _{h},p_{h}^{\mathrm{P}},\varphi_{h})\|_{*}. \tag{4.3b}\]
_Using the Cauchy-Schwarz inequality readily implies (4.3a), whereas (4.3b) holds whenever \(\frac{\alpha^{2}}{\lambda^{\mathrm{P}}}\in[0,9c_{0}/10)\)._
_Similarly, for \((\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})\in\mathbf{V}_{h}\times\mathbb{Q}_{h}^{ \mathrm{P}}\times Z_{h}\) we can establish an equivalence between the norm \(\mathbb{I}\cdot\|_{*}\) defined in (4.2) and_
\[\|(\mathbf{v}_{h},q_{h}^{\mathrm{P}},\psi_{h})\|_{*}^{2}:=\|\mathbf{v}_{h}\|_{*,\mathscr{T }_{h}}^{2}+\|\frac{1}{\sqrt{2\mu}}\psi_{h}\|_{0,\Omega^{\mathrm{E}}}^{2}+ \frac{1}{\lambda^{\mathrm{E}}}\|\psi_{h}\|_{0,\Omega^{\mathrm{E}}}^{2}+\frac{1}{ \lambda^{\mathrm{P}}}\|\psi_{h}\|_{0,\Omega^{\mathrm{P}}}^{2}+(c_{0}+\frac{ \alpha^{2}}{\lambda^{\mathrm{P}}})\|q_{h}^{\mathrm{P}}\|_{0,\Omega^{\mathrm{P}}} ^{2}+\|\frac{\kappa}{\eta}\nabla q_{h}^{\mathrm{P}}\|_{0,\Omega^{\mathrm{P}}}^{2}.\]
## 5. Residual-based _a posteriori_ error analysis
In this section we derive robust _a posteriori_ estimators for the two families of mixed finite element approximations, and show reliability and efficiency independently of the sensible model parameters. The error estimates are obtained in a similar fashion as in, e.g., [4]. Firstly, we discuss _a posteriori_ error estimation for formulation (3.5).
### Definition of the bulk and edge residuals
First we define the local elastic error estimator \(\Theta_{K}\) and the elastic data oscillation \(\widetilde{\Upsilon}_{K}\) for each \(K\in\mathscr{T}_{h}^{\mathrm{E}}\) as
\[\Theta_{K}^{2} :=\frac{h_{K}^{2}}{\mu^{\mathrm{E}}}\|\mathbf{R}_{1}^{\mathrm{E} }\|_{0,K}^{2}+\sum_{e\in OK}\frac{h_{e}}{\mu^{\mathrm{E}}}\|\mathbf{R}_{e}^{ \mathrm{E}}\|_{0,e}^{2}+\sum_{e\in OK}\frac{\beta_{\mathbf{u}}\mu^{\mathrm{E}}}{h_{e}} \|[\mathbf{u}_{h}^{
and the edge residual in the elastic subdomain is defined as
\[\mathbf{R}_{e}^{\mathrm{E}}:=\begin{cases}\frac{1}{2}\llbracket(2\mu^{\mathrm{E}} \boldsymbol{\varepsilon}(\mathbf{u}_{h}^{\mathrm{E}})-\varphi_{h}^{\mathrm{E}} \mathbf{I})\boldsymbol{n}\rrbracket_{e}&e\in\mathcal{S}(\mathcal{T}_{h}^{ \mathrm{E}})\setminus\Gamma,\\ ((2\mu^{\mathrm{E}}\boldsymbol{\varepsilon}(\mathbf{u}_{h}^{\mathrm{E}})- \varphi_{h}^{\mathrm{E}}\mathbf{I})\boldsymbol{n})_{e}&e\in\Gamma_{N}^{ \mathrm{E}},\\ \mathbf{0}&e\in\Gamma_{D}^{\mathrm{E}}.\end{cases}\]
Next, we define the poroelastic local error estimator \(\Psi_{K}\), for each \(K\in\mathcal{T}_{h}^{\mathrm{P}}\), as
\[\Psi_{K}^{2}:=\frac{h_{K}^{2}}{\mu^{\mathrm{P}}}\|\mathbf{R}_{1}^{\mathrm{P}} \|_{0,K}^{2}+\sum_{e\in\partial K}\frac{h_{e}}{\mu^{\mathrm{P}}}\|\mathbf{R}_ {e}^{\mathrm{P}}\|_{0,e}^{2}+\sum_{e\in\partial K}\frac{\beta_{u}\mu^{\mathrm{ P}}}{h_{e}}\|\llbracket\boldsymbol{u}_{h}^{\mathrm{P}}\otimes\boldsymbol{n} \rrbracket_{e}\|_{0,e}^{2}+\rho_{d}\|R_{2}^{\mathrm{P}}\|_{0,K}^{2}+\rho_{1}\|R_ {3}^{\mathrm{P}}\|_{0,K}^{2}+\sum_{e\in\partial K}\rho_{2}\|R_{e}^{\mathrm{P}} \|_{0,e}^{2},\]
where the elemental residuals assume the following form
\[\mathbf{R}_{1}^{\mathrm{P}} :=\{\boldsymbol{b}_{h}^{\mathrm{P}}+\mathrm{div}(2\mu^{\mathrm{P }}\boldsymbol{\varepsilon}(\boldsymbol{u}_{h}^{\mathrm{P}})-\varphi_{h}^{ \mathrm{P}}\mathbf{I})\}_{K},\] \[R_{2}^{\mathrm{P}} :=\{\mathrm{div}\,\boldsymbol{u}_{h}^{\mathrm{P}}+(\lambda^{ \mathrm{P}})^{-1}\varphi_{h}^{\mathrm{P}}-\alpha(\lambda^{\mathrm{P}})^{-1}p_{ h}^{\mathrm{P}}\}_{K},\] \[R_{3}^{\mathrm{P}} :=\{\boldsymbol{s}_{h}^{\mathrm{P}}-(c_{0}+\alpha^{2}(\lambda^{ \mathrm{P}})^{-1}p_{h}^{\mathrm{P}}+\alpha(\lambda^{\mathrm{P}})^{-1}\varphi_ {h}^{\mathrm{P}}+\eta^{-1}\,\mathrm{div}[\kappa(\nabla p_{h}^{\mathrm{P}}- \rho\mathbf{g})]\}_{K},\]
and the edge residuals are defined as
\[\mathbf{R}_{e}^{\mathrm{P}}:=\begin{cases}\frac{1}{2}\llbracket(2\mu^{\mathrm{P }}\boldsymbol{\varepsilon}(\boldsymbol{u}_{h}^{\mathrm{P}})-\varphi_{h}^{ \mathrm{P}}\mathbf{I})\boldsymbol{n}\rrbracket_{e}&e\in\mathcal{S}(\mathcal{T} _{h}^{\mathrm{P}})\setminus\Gamma\\ ((2\mu^{\mathrm{P}}\boldsymbol{\varepsilon}(\boldsymbol{u}_{h}^{\mathrm{P}})- \varphi_{h}^{\mathrm{P}}\mathbf{I})\boldsymbol{n})_{e}&e\in\Gamma_{N}^{ \mathrm{P}}\\ \mathbf{0}&e\in\Gamma_{D}^{\mathrm{P}}\end{cases},\quad R_{e}^{\mathrm{P}}:= \begin{cases}\frac{1}{2}\llbracket\eta^{-1}\kappa(\nabla p_{h}-\rho\mathbf{g}) \cdot\boldsymbol{n}\rrbracket_{e}&e\in\mathcal{E}(\mathcal{T}_{h}^{\mathrm{P}} )\setminus\Gamma\\ (\eta^{-1}\kappa(\nabla p_{h}-\rho\mathbf{g})\cdot\boldsymbol{n})_{e}&e\in \Gamma_{D}^{\mathrm{P}}\\ 0&e\in\Gamma_{N}^{\mathrm{P}}\end{cases},\]
with the scaling constants taken as
\[\rho_{1}:=\min\{(c_{0}+\alpha^{2}(2\mu^{\mathrm{P}}+\lambda^{\mathrm{P}})^{-1 })^{-1},h_{K}^{2}\eta\kappa^{-1}\},\quad\rho_{2}:=\eta\kappa^{-1}h_{e},\quad \rho_{d}:=((\mu^{\mathrm{P}})^{-1}+(2\mu^{\mathrm{P}}+\lambda^{\mathrm{P}})^{-1 })^{-1}.\]
On the other hand, the poroelastic oscillation term adopts the following specification
\[\widehat{\Upsilon}_{K}^{2}=h_{K}^{2}(\mu^{\mathrm{P}})^{-1}\|\mathbf{b}^{ \mathrm{P}}-\boldsymbol{b}_{h}^{\mathrm{P}}\|_{0,K}^{2}+\rho_{1}\|\mathbf{s}^{ \mathrm{P}}-\boldsymbol{s}_{h}^{\mathrm{P}}\|_{0,K}^{2}.\]
Next we recall that \(\Theta_{K}^{2}\) and \(\Psi_{K}^{2}\) are the elasticity and poroelasticity estimators, respectively. Let us define the interface and total estimators as follows
\[\Lambda_{e}^{2}:=h_{e}(\mu^{\mathrm{E}}+\mu^{\mathrm{P}})^{-1}\|\mathbf{R}_{ \Sigma}\|_{0,e}^{2}+h_{e}\eta\kappa^{-1}\|\widehat{R}_{\Sigma}\|_{0,e}^{2}+ \frac{\beta_{u}\mu_{0}}{h_{e}}\|\llbracket\boldsymbol{u}_{h}\otimes\boldsymbol{n }\rrbracket_{e}\|_{0,e}^{2},\quad\Xi^{2}:=\sum_{K\in\mathcal{T}_{h}^{\mathrm{E} }}\Theta_{K}^{2}+\sum_{K\in\mathcal{T}_{h}^{\mathrm{E}}}\Psi_{K}^{2}+\sum_{e \in\mathcal{E}_{h}^{\mathrm{E}}}\Lambda_{e}^{2},\]
where
\[\mathbf{R}_{\Sigma}:=\{(2\mu^{\mathrm{E}}\boldsymbol{\varepsilon}(\boldsymbol{u }_{h}^{\mathrm{E}})-\varphi_{h}^{\mathrm{E}}\mathbf{I})\boldsymbol{n}-(2\mu^{ \mathrm{P}}\boldsymbol{\varepsilon}(\boldsymbol{u}_{h}^{\mathrm{P}})-\varphi_{h}^ {\mathrm{E}}\mathbf{I})\boldsymbol{n}\},\quad\widehat{R}_{\Sigma}:=\{\kappa \xi^{-1}(\nabla p_{h}^{\mathrm{P}}-\rho\mathbf{g})\cdot\boldsymbol{n}\}.\]
In addition we define the global data oscillations term \(\Upsilon\) as
\[\Upsilon^{2}:=\sum_{K\in\mathcal{S}_{h}\cap\Omega^{\mathrm{E}}}\widetilde{ \Upsilon}_{K}^{2}+\sum_{K\in\mathcal{S}_{h}\cap\Omega^{\mathrm{P}}}\widehat{ \Upsilon}_{K}^{2},\]
where \(\widetilde{\Upsilon}_{K}\) and \(\widehat{\Upsilon}_{K}\) are the local data oscillations for elasticity and poroelasticity, respectively.
### Reliability estimates
First we introduce the following modified bilinear form \(\widehat{M}_{h}(\cdot,\cdot,\cdot;\cdot,\cdot,\cdot)\) as
\[\widehat{M}_{h}(\boldsymbol{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h} ;\boldsymbol{v}_{h},q_{h}^{\mathrm{P}},\psi_{h}):=\tilde{a}_{1}^{h}(\boldsymbol{u }_{h},\boldsymbol{v}_{h})+b_{1}(\boldsymbol{v}_{h},\varphi_{h})+\tilde{a}_{2}(p_{h} ^{\mathrm{P}},q_{h}^{\mathrm{P}})+a_{2}(p_{h}^{\mathrm{P}},q_{h}^{\mathrm{P}})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+b_{2}(q_ {h}^{\mathrm{P}},\varphi_{h})+b_{1}(\boldsymbol{u}_{h},\psi_{h})+b_{2}(p_{h}^{ \mathrm{P}},\psi_{h})-a_{3}(\varphi_{h},\psi_{h}).\]
Moreover, the following relation holds
\[a_{1}^{h}(\boldsymbol{u}_{h},\boldsymbol{v}_{h})=\tilde{a}_{1}^{h}(\boldsymbol{u }_{h},\boldsymbol{v}_{h})+K_{h}(\boldsymbol{u}_{h},\boldsymbol{v}_{h}), \tag{5.1}\]
where the last term on the right-hand side is the consistency contribution and it can be written as
\[K_{h}(\boldsymbol{u}_{h},\boldsymbol{v}_{h}):=-2\sum_{e\in\mathcal{E}_{h}\cup \Gamma_{D}^{\mathrm{E}}}\big{(}(\big{\{}\mu\boldsymbol{\varepsilon}_{h}( \boldsymbol{u}_{h})\big{\}},\llbracket\boldsymbol{v}_{h}\otimes\boldsymbol{n} \rrbracket)_{0,e}+\langle\{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Proof.: It follows straightforwardly from the decomposition \(\mathbf{u}_{h}=\mathbf{u}_{h}^{c}+\mathbf{u}_{h}^{r}\) and from the facet residual.
**Theorem 5.2** (Reliability for the transmission problem).: _Let \((\mathbf{u},p^{\mathrm{P}},\varphi)\) and \((\mathbf{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h})\) be the solutions of the weak formulations (2.4) and (3.5), respectively. Then the following reliability bound holds_
\[\|\!\|(\mathbf{u}-\mathbf{u}_{h},p^{\mathrm{P}}-p_{h}^{\mathrm{P}},\varphi-\varphi_{h}) \|\!\|\leq C_{\mathrm{rel}}(\Xi+\Upsilon),\]
_where \(C_{\mathrm{rel}}>0\) is a constant independent of the mesh size and of the delicate model parameters._
Proof.: Using triangle inequality, we have
\[\|\!\|(\mathbf{u}-\mathbf{u}_{h},p^{\mathrm{P}}-p_{h}^{\mathrm{P}},\varphi-\varphi_{h}) \|\!\|\leq\!\|\!|(\mathbf{u}-\mathbf{u}_{h}^{c},p^{\mathrm{P}}-p_{h}^{\mathrm{P}}, \varphi-\varphi_{h})\|\!|+\!|\!|(\mathbf{u}_{h}^{r},0,0)|\!|\!|. \tag{5.2}\]
Since \((\mathbf{u}-\mathbf{u}_{h}^{c},p^{\mathrm{P}}-p_{h}^{\mathrm{P}},\varphi-\varphi_{h}) \in\mathbf{V}\times\mathrm{Q}^{\mathrm{P}}\times\mathcal{Z}\), then from the stability result in Theorem 5.1, we have
\[C_{2}\|\!\|(\mathbf{u}-\mathbf{u}_{h}^{c},p^{\mathrm{P}}-p_{h}^{\mathrm{P}},\varphi- \varphi_{h})\|\!|^{2}\leq\widehat{M}_{h}((\mathbf{u}-\mathbf{u}_{h}^{c},p^{\mathrm{P} }-p_{h}^{\mathrm{P}},\varphi-\varphi_{h});(\mathbf{v},q^{\mathrm{P}},\psi)), \tag{5.3}\]
with \(|\!|\!|(\mathbf{v},q^{\mathrm{P}},\psi)|\!|\!|\leq C_{1}\|\!|\!|(\mathbf{u}-\mathbf{u}_{h }^{c},p^{\mathrm{P}}-p_{h}^{\mathrm{P}},\varphi-\varphi_{h})|\!|\!|\). Moreover, we have
\[\widehat{M}_{h}((\mathbf{u}-\mathbf{u}_{h}^{c},p^{\mathrm{P}}-p_{h}^{ \mathrm{P}},\varphi-\varphi_{h});(\mathbf{v},q^{\mathrm{P}},\psi))\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\
**Lemma 5.3**.: _The following bound holds true for the local scalar estimator in the bulk elasticity sub-domain:_
\[\sqrt{\frac{1}{\frac{1}{\mu^{\text{E}}}+\frac{1}{\lambda^{\text{E}}}}}\|\mathbf{ R}_{e}^{\text{E}}\|_{0,K}\lesssim(\lambda^{\text{E}})^{-1/2}\|\varphi-\varphi_{h}\|_{0,K}+ \sqrt{2\mu^{\text{E}}}\|\boldsymbol{\nabla}(\boldsymbol{u}-\boldsymbol{u}_{h} )\|_{0,K},\qquad K\in\mathcal{T}_{h}^{\text{E}}.\]
Proof.: Consider \(K\in\mathcal{T}_{h}^{\text{E}}\). Using the relation \(\operatorname{div}\boldsymbol{u}_{h}^{\text{E}}+(\lambda^{\text{E}})^{-1} \varphi^{\text{E}}=0|_{K}\), it holds
\[\sqrt{\frac{1}{\frac{1}{\mu^{\text{E}}}+\frac{1}{\lambda^{\text{E}}}}}\| \mathbf{R}_{e}^{\text{E}}\|_{0,K} =\sqrt{\frac{1}{\frac{1}{\mu^{\text{E}}}+\frac{1}{\lambda^{\text{E }}}}}\|\operatorname{div}\boldsymbol{u}_{h}^{\text{E}}+(\lambda^{\text{E}})^ {-1}\varphi_{h}^{\text{E}}\|_{0,K}\] \[=\sqrt{\frac{1}{\frac{1}{\mu^{\text{E}}}+\frac{1}{\lambda^{\text{E }}}}}\|\operatorname{div}\boldsymbol{u}_{h}^{\text{E}}-\operatorname{div} \boldsymbol{u}^{\text{E}}+(\lambda^{\text{E}})^{-1}(\varphi_{h}^{\text{E}}- \varphi^{\text{E}})\|_{0,K}\] \[\lesssim(\lambda^{\text{E}})^{-1/2}\|\varphi-\varphi_{h}\|_{0,K} +\sqrt{2\mu^{\text{E}}}\|\boldsymbol{\nabla}(\boldsymbol{u}-\boldsymbol{u}_{ h})\|_{0,K}.\]
Let \(b_{e}\) be the edge polynomial bubble function on \(e\) which is an interior edge (or interior facet in 3D) shared by two elements \(K\) and \(K^{\prime}\). Moreover, \(b_{e}\) is positive in the interior of the patch \(P_{e}\) formed by \(K\cup K^{\prime}\), and is zero on the boundary of the patch. Then, we can conclude the following estimates from [45]:
\[\|q\|_{0,e}\lesssim\|b_{e}^{1/2}q\|_{0,e},\quad\|b_{e}q\|_{0,K}\lesssim h_{e} ^{1/2}\|q\|_{0,e},\quad\|\nabla(b_{e}q)\|_{0,K}\lesssim h_{e}^{-1/2}\|q\|_{0,e} \qquad\forall K\in P_{e}, \tag{5.6}\]
where \(q\) is the scalar-valued polynomial function which is defined on the edge \(e\).
**Lemma 5.4**.: _With regards to the edge contribution to the local estimator on the elastic sub-domain, we have that_
\[(\sum_{e\in\partial K}h_{e}(\mu^{\text{E}})^{-1}\|\mathbf{R}_{e}^ {\text{E}}\|_{0,e}^{2})^{1/2}\] \[\qquad\lesssim\sum_{K\in P_{e}}((\mu^{\text{E}})^{-1/2}h_{K}\| \boldsymbol{b}^{\text{E}}-\boldsymbol{b}_{h}^{\text{E}}\|_{0,K}+(\mu^{\text{E }})^{-1/2}\|\varphi^{\text{E}}-\varphi_{h}^{\text{E}}\|_{0,K}+(2\mu^{\text{E} })^{1/2}\|\boldsymbol{u}^{\text{E}}-\boldsymbol{u}_{h}^{\text{E}}\|_{0,K}).\]
Proof.: For each \(e\in\mathcal{S}_{h}\), we introduce \(\boldsymbol{\zeta}_{e}=(\mu^{\text{E}})^{-1}h_{e}\mathbf{R}_{e}^{\text{E}}b_ {e}\). Then, the estimates (5.6) gives
\[h_{e}(\mu^{\text{E}})^{-1}\|\mathbf{R}_{e}^{\text{E}}\|_{0,e}^{2}\lesssim\int_ {e}\mathbf{R}_{e}^{\text{E}}\cdot((\mu^{\text{E}})^{-1}h_{e}\mathbf{R}_{e}^{ \text{E}}b_{e})=\int_{e}\mathbf{R}_{e}^{\text{E}}\cdot\boldsymbol{\zeta}_{e}.\]
Using the relation \(\llbracket(2\mu^{\text{E}}\boldsymbol{\varepsilon}(\boldsymbol{u}^{\text{E}})- \varphi^{\text{E}}\mathbf{I})\boldsymbol{n}\rrbracket_{e}=\boldsymbol{0}\) implies
\[\int_{e}\llbracket(2\mu^{\text{E}}(\boldsymbol{\varepsilon}( \boldsymbol{u}_{h}^{\text{E}})-\boldsymbol{\varepsilon}(\boldsymbol{u}^{ \text{E}}))-(\varphi_{h}^{\text{E}}-\varphi^{\text{E}})\mathbf{I})\cdot \boldsymbol{n}\rrbracket_{e}\cdot\boldsymbol{\zeta}_{e} =\sum_{K\in P_{e}}\int_{K}(2\mu^{\text{E}}(\boldsymbol{ \varepsilon}(\boldsymbol{u}_{h}^{\text{E}})-\boldsymbol{\varepsilon}( \boldsymbol{u}^{\text{E}}))+\nabla(\varphi_{h}^{\text{E}}-\varphi^{\text{E}})) \cdot\boldsymbol{\zeta}_{e}\] \[\qquad\qquad\qquad\qquad+\sum_{K\in P_{e}}\int_{K}(2\mu^{\text{E} }(\boldsymbol{u}_{h}^{\text{E}}-\boldsymbol{u}^{\text{E}})\cdot\boldsymbol{ \nabla}\boldsymbol{\zeta}_{e}+(\varphi_{h}^{\text{E}}-\varphi^{\text{E}}) \nabla\cdot\boldsymbol{\zeta}_{e}),\]
where integration by parts have been used element-wise. Recalling that \(\{\boldsymbol{b}^{\text{E}}+\operatorname{div}(2\mu^{\text{E}}\boldsymbol{ \varepsilon}(\boldsymbol{u}^{\text{E}})-\varphi^{\text{E}}\mathbf{I})\}= \boldsymbol{0}|_{K}\), we have
\[\frac{h_{e}}{\mu^{\text{E}}}\|\mathbf{R}_{e}^{\text{E}}\|_{0,e}^{2}\lesssim \sum_{K\in P_{e}}\int_{K}\left((\boldsymbol{b}_{h}^{\text{E}}-\boldsymbol{b}^{ \text{E}})\cdot\boldsymbol{\zeta}_{e}+2\mu^{\text{E}}\int_{K}(\boldsymbol{u}^{ \text{E}}-\boldsymbol{u}_{h}^{\text{E}})\cdot\boldsymbol{\nabla}\boldsymbol{ \zeta}_{e}+\int_{K}(\varphi_{h}^{\text{E}}-\varphi_{h}^{\text{E}})\nabla\cdot \boldsymbol{\zeta}\right)+\sum_{K\in P_{e}}\int_{K}\mathbf{R}_{1}^{\text{E}} \cdot\boldsymbol{\zeta}_{e}.\]
From the Cauchy-Schwarz inequality we can conclude that
\[h_{e}(\mu^{\text{E}})^{-1}\|\mathbf{R}_{e}^{\text{E}}\|_{0,e}^{2} \lesssim\sum_{K\in P_{e}}((\mu^{\text{E}})^{-1/2}h_{K}\|\boldsymbol{b}^{ \text{E}}-\boldsymbol{b}_{h}^{\text{E}}\|_{0,K}+(\mu^{\text{E}})^{-1/2}\| \varphi^{\text{E}}-\varphi_{h}^{\text{E}}\|_{0,K}+\|\boldsymbol{\nabla} \boldsymbol{u}^{\text{E}}-\boldsymbol{\nabla}\boldsymbol{u}_{h}^{\text{E}}\|_{0,K})\times\] \[\qquad\qquad\qquad((\mu^{\text{E}})^{1/2}\|\boldsymbol{\nabla} \boldsymbol{\zeta}_{e}\|_{0,K}+(\mu^{\text{E}})^{1/2}h_{K}^{-1}\|\boldsymbol{ \zeta}_{e}\|_{0,K}).\]
And the rest of the desired estimate follows from the following bound
\[(\mu^{\text{E}})^{1/2}\|\boldsymbol{\nabla}\boldsymbol{\zeta}_{e}\|_{0,K}+(\mu^{ \text{E}})^{1/2}h_{K}^{-1}\|\boldsymbol{\zeta}_{e}\|_{0,K}\lesssim(\mu^{\text{E} })^{1/2}h_{K}^{-1}\|\boldsymbol{\zeta}_{e}\|_{0,K}=h_{e}^{1/2}(\mu^{\text{E}})^{-1/2} \|\mathbf{R}_{e}^{\text{E}}\|_{0,e}.\]
**Lemma 5.5**.: _The elastic local bulk a posteriori estimator satisfies_
\[\left(\sum_{K\in\mathcal{T}_{h}^{\text{E}}}\Theta_{K}^{2}\right)^{1/2}\lesssim \sum_{K\in\mathcal{T}_{h}^{\text{E}}}\left(\frac{h_{K}}{\sqrt{\mu^{\text{E}}}}\| \boldsymbol{b}^{\text{E}}-\boldsymbol{b}_{h}^{\text{E}}\|_{0,K}+\frac{1}{\sqrt{ \mu^{\text{E}}}}\|\varphi^{\text{E}}-\varphi_{h}^{\text{E}}\|_{0,K}+\frac{1}{ \sqrt{2\mu^{\text{E}}}}\|\boldsymbol{\nabla}(\boldsymbol{u}^{\text{E}}- \boldsymbol{u}_{h}^{\text{E}})\|_{0,K}\right).\]
Proof.: Combining Lemmas 5.2-5.4 implies the stated result.
#### 5.3.2. Efficiency estimates for poroelastic error estimator
**Lemma 5.6**.: _The first vectorial contribution to the local bulk poroelastic error estimator satisfies the following bound_
\[h_{K}(\mu^{\mathrm{P}})^{-1/2}\|\mathbf{R}_{1}^{\mathrm{P}}\|_{0,K}\lesssim(\mu^ {\mathrm{P}})^{-1/2}h_{K}\|\mathbf{b}^{\mathrm{P}}-\mathbf{b}_{h}^{\mathrm{P}}\|_{0,K}+( \mu^{\mathrm{P}})^{-1}\|\varphi^{\mathrm{P}}-\varphi_{h}^{\mathrm{P}}\|_{0,K}+ \|\mathbf{u}^{\mathrm{P}}-\mathbf{u}_{h}^{\mathrm{P}}\|_{0,K}.\]
Proof.: It proceeds similarly to Lemma 5.2.
**Lemma 5.7**.: _The second scalar contribution to the local bulk poroelastic error estimator satisfies the following bound_
\[\rho_{d}^{1/2}\|R_{2}^{\mathrm{P}}\|_{0,K} \lesssim(\lambda)^{-1/2}\|(\varphi_{h}^{\mathrm{P}}-\varphi^{ \mathrm{P}}-\alpha(p_{h}^{\mathrm{P}}-p^{\mathrm{P}}))\|_{0,K}+\sqrt{2\mu^{ \mathrm{P}}}\|\mathbf{\nabla}(\mathbf{u}-\mathbf{u}_{h})\|_{0,K}.\]
Proof.: The constitutive relation \(\operatorname{div}\mathbf{u}_{h}^{\mathrm{P}}+(\lambda^{\mathrm{P}})^{-1}\varphi ^{\mathrm{P}}-\alpha(\lambda^{\mathrm{P}})^{-1}p^{\mathrm{P}}=0|_{K}\) implies that
\[\rho_{d}^{1/2}\|R_{2}^{\mathrm{P}}\|_{0,K} =\rho_{d}^{1/2}\|\operatorname{div}\mathbf{u}_{h}^{\mathrm{P}}+( \lambda^{\mathrm{P}})^{-1}\varphi_{h}^{\mathrm{P}}-\alpha(\lambda^{\mathrm{P} })^{-1}p_{h}^{\mathrm{P}}\|_{0,K}\] \[=\rho_{d}^{1/2}\|\operatorname{div}\mathbf{u}_{h}^{\mathrm{P}}- \operatorname{div}\mathbf{u}^{\mathrm{P}}+(\lambda^{\mathrm{P}})^{-1}(\varphi_{h }^{\mathrm{P}}-\varphi^{\mathrm{P}}-\alpha(p_{h}^{\mathrm{P}}-p^{\mathrm{P}}) )\|_{0,K}\] \[\lesssim(\lambda^{\mathrm{P}})^{-1/2}\|\varphi_{h}^{\mathrm{P}}- \varphi^{\mathrm{P}}-\alpha(p_{h}^{\mathrm{P}}-p^{\mathrm{P}})\|_{0,K}+\sqrt{2 \mu^{\mathrm{P}}}\|\mathbf{\nabla}(\mathbf{u}-\mathbf{u}_{h})\|_{0,K}.\]
**Lemma 5.8**.: _The third scalar contribution to the local bulk poroelastic error estimator satisfies the following bound_
\[(\rho_{1})^{1/2}\|R_{3}^{\mathrm{P}}\|_{0,K} \lesssim(\rho_{1})^{1/2}\|s^{\mathrm{P}}-s_{h}^{\mathrm{P}}\|_{0,K}+(c_{0})^{1/2}\|p^{\mathrm{P}}-p_{h}^{\mathrm{P}}\|_{0,K}+(\kappa/\xi)^{1/2} \|\nabla(p^{\mathrm{P}}-p_{h}^{\mathrm{P}})\|_{0,K}\] \[\quad+(1/\lambda^{\mathrm{P}})^{-1/2}\|\varphi-\varphi_{h}^{ \mathrm{P}}+\alpha(p^{\mathrm{P}}-p_{h}^{\mathrm{P}})\|_{0,K}.\]
Proof.: For each \(K\in\mathcal{T}_{h}\), we define \(\omega|_{K}=\rho_{1}R_{3}b_{K}\). Then, invoking (5.5), we conclude that
\[\rho_{1}\|R_{3}^{\mathrm{P}}\|_{0,K}^{2}\lesssim\int_{K}R_{3}^{\mathrm{P}}( \rho_{1}R_{3}^{\mathrm{P}}b_{K})=\int_{K}R_{3}^{\mathrm{P}}\omega.\]
Using the relation \(s-[c_{0}+\alpha^{2}(\lambda^{\mathrm{P}})^{-1}]p^{\mathrm{P}}+\alpha(\lambda^ {\mathrm{P}})^{-1}\varphi^{\mathrm{P}}+\xi^{-1}\operatorname{div}[\kappa( \nabla p^{\mathrm{P}}-\rho\mathbf{g})]_{K}=0\) in the last term and then integrating with \(\omega|_{\partial K}=0\), we can assert that
\[(\rho_{1})^{-1}\|R_{3}^{\mathrm{P}}\|_{0,K}^{2} \lesssim\int_{K}(s_{h}^{\mathrm{P}}-s^{\mathrm{P}})\omega+(c_{0} )^{-1}\int_{K}(p^{\mathrm{P}}-p_{h}^{\mathrm{P}})\omega+\xi^{-1}\int_{K}\kappa \nabla(p^{\mathrm{P}}-p_{h}^{\mathrm{P}})\cdot\nabla\omega\] \[\quad+\alpha(\lambda)^{-1}\int_{K}(\varphi^{\mathrm{P}}-\varphi_ {h}^{\mathrm{P}}+\alpha(p^{\mathrm{P}}-p_{h}^{\mathrm{P}}))\omega.\]
Then, Cauchy-Schwarz inequality gives
\[\rho_{1}\|R_{3}^{\mathrm{P}}\|_{0,K}^{2}\lesssim ((\rho_{1})^{1/2}\|s^{\mathrm{P}}-s_{h}^{\mathrm{P}}\|_{0,K}+[c_{ 0}]^{1/2}\|p^{\mathrm{P}}-p_{h}^{\mathrm{P}}\|_{0,K}+\xi^{-1/2}\|\kappa^{1/2} \nabla(p^{\mathrm{P}}-p_{h}^{\mathrm{P}})\|_{0,K}\] \[\quad+(\lambda^{\mathrm{P}})^{-1/2}\|(\varphi^{\mathrm{P}}- \varphi_{h}^{\mathrm{P}}+\alpha(p^{\mathrm{P}}-p_{h}^{\mathrm{P}}))\|_{0,K})(( \kappa/\xi)^{1/2}\|\nabla\omega\|_{0,K}+(\rho_{1})^{-1/2})\|\omega\|_{0,K}.\]
And the proof follows after noting that
\[(\frac{\kappa}{\xi})^{1/2}\|\nabla\omega\|_{0,K}+(\rho_{1})^{-1/2}\|\omega\|_{0,K }\lesssim(\frac{\kappa}{\xi})^{1/2}h_{K}^{-1}\|\omega\|_{0,K}+\rho_{1}^{-1/2}\| \omega\|_{0,K}\lesssim(\rho_{1})^{-1/2}\|\omega\|_{0,K}=(\rho_{1})^{1/2}\|R_{3 }^{\mathrm{P}}\|_{0,K}.\]
**Lemma 5.9**.: _The edge contribution to the local poroelastic error estimator satisfies the following bound_
\[(\sum_{e\in\partial K}h_{e}(\mu^{\mathrm{P}})^{-1}\|\mathbf{R}_{e}^{ \mathrm{P}}\|_{0,e}^{2})^{1/2}\] \[\quad\lesssim\sum_{K\in P_{e}}((\mu^{\mathrm{P}})^{-1/2}h_{K}\| \mathbf{b}^{\mathrm{P}}-\mathbf{b}_{h}^{\mathrm{P}}\|_{0,K}+(\mu^{\mathrm{P}})^{-1/2}\| \varphi^{\mathrm{P}}-\varphi_{h}^{\mathrm{P}}\|_{0,K}+(2\mu^{\mathrm{P}})^{1/2}\| \mathbf{\nabla}(\mathbf{u}^{\mathrm{P}}-\mathbf{u}_{h}^{\mathrm{P}})\|_{0,K}).\]
Proof.: The proof is conducted similarly to that of Lemma 5.4.
**Lemma 5.10**.: _There holds:_
\[\left(\sum_{K\in\mathcal{T}_{h}^{\mathrm{P}}}\Psi_{K}^{2}\right)^{1/2} \lesssim\] \[\quad(\rho_{1})^{1/2}\|s^{\mathrm{P}}-s_{h}^{\mathrm{P}}\|_{0,K}+(c _{0})^{1/2}\|p^{\mathrm{P}}-p_{h}^{\mathrm{P}}\|_{0,K}+(\kappa/\xi)^{1/2}\| \nabla(p^{\mathrm{P}}-p_{h}^{\mathrm{P}})\|_{0,K}\] \[\quad+(1/\lambda^{\mathrm{P}})^{-1/2}\|\varphi-\varphi_{h}^{ \mathrm{P}}+\alpha(p^{\mathrm{P}}-p_{h}^{\mathrm{P}})\|_{0,K}\right).\]
Proof.: The results follows after combining Lemmas 5.6-5.9.
#### 5.3.3. Efficiency estimates for interface estimator
**Lemma 5.11**.: _There holds:_
\[\biggl{(}\sum_{e\in\Sigma}h_{e}(\mu^{\mathrm{E}}+\mu^{\mathrm{P}})^ {-1}\|\mathbf{R}_{\Sigma}\|_{0,e}^{2}\biggr{)}^{1/2}\] \[\qquad+\sum_{K\in P_{e}\cap\Omega^{\mathrm{E}}}((2\mu^{\mathrm{P} })^{-1/2}h_{K}\|\mathbf{b}^{\mathrm{P}}-\mathbf{b}_{h}^{\mathrm{P}}\|_{0,K}+(2\mu^{ \mathrm{P}})^{-1/2}\|\varphi^{\mathrm{P}}-\varphi_{h}^{\mathrm{P}}\|_{0,K}+(2 \mu^{\mathrm{P}})^{1/2}\|\mathbf{\nabla}_{h}(\mathbf{u}^{\mathrm{P}}-\mathbf{u}_{h}^{ \mathrm{P}})\|_{0,K})\biggr{)}.\]
Proof.: For each \(e\in\xi_{h}^{\Sigma}\), \(\mathbf{\zeta}_{e}\) is defined locally as \(\mathbf{\zeta}_{e}=(\mu^{\mathrm{E}}+\mu^{\mathrm{P}})^{-1}h_{e}\mathbf{R}_{\Sigma} b_{e}\). Using (5.6) gives
\[h_{e}(\mu^{\mathrm{E}}+\mu^{\mathrm{P}})^{-1}\|\mathbf{R}_{\Sigma}\|_{0,e}^{2} \lesssim\int_{e}\mathbf{R}_{\Sigma}\cdot((\mu^{\mathrm{E}}+\mu^{\mathrm{P}})^ {-1}h_{e}\mathbf{R}_{\Sigma}b_{e})=\int_{e}\mathbf{R}_{\Sigma}\cdot\mathbf{\zeta} _{e}.\]
Integration by parts implies
\[\int_{e} (2\mu^{\mathrm{E}}(\mathbf{\varepsilon}(\mathbf{u}_{h}^{\mathrm{E}})- \mathbf{\varepsilon}(\mathbf{u}^{\mathrm{E}}))-(\varphi_{h}^{\mathrm{E}}-\varphi^{ \mathrm{E}})\mathbf{I})\mathbf{n}\cdot\mathbf{\zeta}_{e}-(2\mu^{\mathrm{P}}(\mathbf{ \varepsilon}(\mathbf{u}_{h}^{\mathrm{P}})-\mathbf{\varepsilon}(\mathbf{u}^{\mathrm{P}}))- (\varphi_{h}^{\mathrm{P}}-\varphi^{\mathrm{P}})\mathbf{I})\mathbf{n}\cdot\mathbf{\zeta }_{e}\] \[=\sum_{K\in P_{e}\cap\Omega^{\mathrm{E}}}\int_{K}(\mathbf{div}(2\mu ^{\mathrm{E}}(\mathbf{\varepsilon}(\mathbf{u}_{h}^{\mathrm{E}})-\mathbf{\varepsilon}(\mathbf{ u}^{\mathrm{E}})))+\nabla(\varphi_{h}^{\mathrm{E}}-\varphi^{\mathrm{E}})) \cdot\mathbf{\zeta}_{e}\] \[\qquad-\sum_{K\in P_{e}\cap\Omega^{\mathrm{E}}}\int_{K}(2\mu^{ \mathrm{E}}(\mathbf{\varepsilon}(\mathbf{u}_{h}^{\mathrm{E}})-\mathbf{\varepsilon}(\mathbf{ u}^{\mathrm{E}}))-(\varphi_{h}^{\mathrm{E}}-\varphi^{\mathrm{E}})\mathbf{I}):\mathbf{ \nabla}\mathbf{\zeta}_{e}\] \[\qquad-\sum_{K\in P_{e}\cap\Omega^{\mathrm{P}}}\int_{K}(\mathbf{ div}(2\mu^{\mathrm{P}}(\mathbf{\varepsilon}(\mathbf{u}_{h}^{\mathrm{P}})-\mathbf{\varepsilon}(\mathbf{ u}^{\mathrm{P}})))+\nabla(\varphi_{h}^{\mathrm{P}}-\varphi^{\mathrm{P}})) \cdot\mathbf{\zeta}_{e}\] \[\qquad-\sum_{K\in P_{e}\cap\Omega^{\mathrm{P}}}\int_{K}(2\mu^{ \mathrm{P}}(\mathbf{\varepsilon}(\mathbf{u}_{h}^{\mathrm{P}})-\mathbf{\varepsilon}(\mathbf{u}^ {\mathrm{P}}))+(\varphi_{h}^{\mathrm{P}}-\varphi^{\mathrm{P}})\mathbf{I}):\mathbf{ \nabla}\mathbf{\zeta}_{e}.\]
Note that \(\mathbf{b}^{\mathrm{P}}+\mathbf{div}(2\mu^{\mathrm{P}}\mathbf{\varepsilon}(\mathbf{u}^{ \mathrm{P}})-p^{\mathrm{P}}\mathbf{I})=\mathbf{0}|_{K}\) and \(\mathbf{b}^{\mathrm{E}}+\mathbf{div}(2\mu^{\mathrm{E}}\mathbf{\varepsilon}(\mathbf{u}^{ \mathrm{E}})-p^{\mathrm{E}}\mathbf{I})=\mathbf{0}|_{K}\). Then, we can assert that
\[\frac{h_{e}}{\mu^{\mathrm{E}}+\mu^{\mathrm{E}}}\|\mathbf{R}_{ \Sigma}\|_{0,e}^{2} \lesssim\sum_{K\in P_{e}\cap\Omega^{\mathrm{E}}}\int_{K}\left((\mathbf{b} _{h}^{\mathrm{E}}-\mathbf{b}^{\mathrm{E}})\cdot\mathbf{\zeta}_{e}-\int_{K}2\mu^{ \mathrm{E}}(\mathbf{\varepsilon}(\mathbf{u}_{h}^{\mathrm{E}})-\mathbf{\varepsilon}(\mathbf{u}^ {\mathrm{E}})):\mathbf{\nabla}\mathbf{\zeta}_{e}+\int_{K}(p_{h}^{\mathrm{E}}-p^{ \mathrm{E}})\nabla\cdot\mathbf{\zeta}\right)\] \[\qquad+\sum_{K\in P_{e}\cap\Omega^{\mathrm{E}}}\int_{K}\mathbf{ R}_{1}^{\mathrm{E}}\cdot\mathbf{\zeta}_{e}+\sum_{K\in P_{e}\cap\Omega^{\mathrm{P}}} \int_{K}\mathbf{R}_{1}^{\mathrm{P}}\cdot\mathbf{\zeta}_{e}\] \[\qquad+\sum_{K\in P_{e}\cap\Omega^{\mathrm{P}}}\int_{K}\left((\mathbf{b }_{h}^{\mathrm{P}}-\mathbf{b}^{\mathrm{P}})\cdot\mathbf{\zeta}_{e}-\int_{K}2\mu^{ \mathrm{P}}(\mathbf{\varepsilon}(\mathbf{u}_{h}^{\mathrm{P}})-\mathbf{\varepsilon}(\mathbf{u}^ {\mathrm{P}})):\mathbf{\nabla}\mathbf{\zeta}_{e}+\int_{K}(p_{h}^{\mathrm{P}}-p^{ \mathrm{P}})\nabla\cdot\mathbf{\zeta}\right).\]
Applying Cauchy-Schwarz inequality gives
\[\frac{h_{e}}{\mu^{\mathrm{E}}+\mu^{\mathrm{P}}}\|\mathbf{R}_{e}\|_ {0,e}^{2} \lesssim\sum_{K\in P_{e}\cap\Omega^{\mathrm{E}}}((2\mu^{\mathrm{E}})^ {-1/2}h_{K}\|\mathbf{b}^{\mathrm{E}}-\mathbf{b}_{h}^{\mathrm{E}}\|_{0,K}+(2\mu^{ \mathrm{E}})^{-1/2}\|\varphi^{\mathrm{E}}-\varphi_{h}^{\mathrm{E}}\|_{0,K}+(2 \mu)^{1/2}\|\mathbf{\varepsilon}(\mathbf{u}_{h}^{\mathrm{E}})-\mathbf{\varepsilon}(\mathbf{u}^{ \mathrm{E}})\|_{0,K})\times\] \[\qquad+\sum_{K\in P_{e}\cap\Omega^{\mathrm{P}}}((2\mu^{\mathrm{P}})^ {-1/2}h_{K}\|\mathbf{b}^{\mathrm{P}}-\mathbf{b}_{h}^{\mathrm{P}}\|_{0,K}+(2\mu^{ \mathrm{P}})^{-1/2}\|\varphi^{\mathrm{P}}-\varphi_{h}^{\mathrm{P}}\|_{0,K}+(2 \mu)^{1/2}\|\mathbf{\varepsilon}(\mathbf{u}_{h}^{\mathrm{P}})-\mathbf{\varepsilon}(\mathbf{u}^{ \mathrm{P}})\|_{0,K})\times\] \[\qquad\qquad((\mu^{\mathrm{P}})^{1/2}\|\mathbf{\nabla}\mathbf{\zeta}\|_{0,K}+( \mu^{\mathrm{P}})^{1/2}h_{K}^{-1}\|\mathbf{\zeta}\|_{0,K}).\]
And as a consequence of the bounds
\[(2\mu^{\mathrm{E}})^{1/2}\|\mathbf{\nabla}\mathbf{\zeta}\|_{0,K}+(2\mu^{ \mathrm{E}})^{1/2}h_{K}^{-1}\|\mathbf{\zeta}\|_{0,K} \lesssim(2\mu^{\mathrm{E}})^{1/2}h_{K}^{-1}\|\mathbf{\zeta}\|_{0,K} \lesssim h_{e}^{1/2}(\mu^{\mathrm{E}}+\mu^{\mathrm{P}})^{-1/2}\|\mathbf{R}_{e}\|_ {0,e},\] \[(2\mu^{\mathrm{P}})^{1/2}\|\mathbf{\nabla}\mathbf{\zeta}\|_{0,K}+(2\mu^{ \mathrm{P}})^{1/2}h_{K}^{-1}\|\mathbf{\zeta}\|_{0,K} \lesssim(2\mu^{\mathrm{P}})^{1/2}h_{K}^{-1}\|\mathbf{\zeta}\|_{0,K} \lesssim h_{e}^{1/2}(\mu^{\mathrm{E}}+\mu^{\mathrm{P}})^{-1/2}\|\mathbf{R}_{e}\|_ {0,e},\]
the desired estimates hold true.
**Theorem 5.3** (Efficiency).: _Let \((\mathbf{u},p^{\mathrm{P}},\varphi)\) and \((\mathbf{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h})\) be the solutions of the weak formulations (2.4) and (3.5), respectively. Then the following efficiency bound holds_
\[\Xi\leq C_{\mathrm{eff}}(\|(\mathbf{u}-\mathbf{u}_{h},p^{\mathrm{P}}-p_{h}^{\mathrm{P}}, \varphi-\varphi_{h})\|+\Upsilon),\]
_where \(C_{\mathrm{eff}
**Remark 5.1**.: _To introduce a posteriori error estimation for formulation (3.9), we modify the proposed estimator for formulation (3.5). Specifically, we add one extra jump term for discontinuous fluid pressure so that the modified a posteriori error estimator is as follows:_
\[\Xi^{2}:=\sum_{K\in\mathcal{G}_{k}^{\mathrm{P}}}\Theta_{K}^{2}+\sum_{K\in \mathcal{G}_{k}^{\mathrm{P}}}\widetilde{\Psi_{K}}^{2}+\sum_{e\in\mathcal{E}_{ k}^{\mathrm{P}}}\Lambda_{e}^{2}, \tag{5.7}\]
_with_
\[\widetilde{\Psi_{K}}^{2}=\Psi_{K}{}^{2}+\sum_{e\in\partial K}\frac{\beta_{ \mathrm{P}^{\mathrm{P}}}\kappa}{h_{e}\eta}\|[\mathrm{p}_{h}^{\mathrm{P}}\mathbf{n} ]_{e}\|_{0,e}^{2},\]
_where \(\Theta_{K}\), \(\Psi_{K}\) and \(\Lambda_{e}\) are defined in Section 5.1. The proposed a posteriori estimator (5.7) is also reliable, efficient and robust. The idea of proofs of reliability and efficiency is similar to the a posteriori estimation associated with formulation (3.5)._
## 6. Robust block preconditioning
Building upon the analysis results in Sections 3 and 4, our goal now is to construct norm-equivalent block diagonal preconditioners for the discrete systems (3.5) and (3.9) that are robust with respect to (e.g., high interface contrast in the) physical parameters and mesh size \(h\).
To this end, we begin by writing system (2.4) in the following operator form \(\mathcal{I}\mathcal{I}\mathcal{I}=\mathcal{G}\), with \(\vec{x}=(\mathbf{u},p^{\mathrm{P}},\varphi)\), \(\mathcal{G}=(F,G,0)\), and
\[\mathcal{I}\mathcal{I}=\begin{pmatrix}\mathcal{A}_{1}&0&\mathcal{B}_{1}^{ \prime}\\ 0&-C_{1}&\mathcal{B}_{2}^{\prime}\\ \mathcal{B}_{1}&\mathcal{B}_{2}&-C_{2}\end{pmatrix}. \tag{6.1}\]
The block operators in \(\mathcal{I}\mathcal{I}\) are induced by the respective bilinear forms as:
\[\mathcal{A}_{1}:\mathbf{V}\to\mathbf{V}^{\prime},\quad\langle \mathcal{A}_{1}(\mathbf{u}),\mathbf{v}\rangle:=a_{1}(\mathbf{u},\mathbf{v})=\int_{\Omega}2\mu \mathbf{\varepsilon}(\mathbf{u}):\mathbf{\varepsilon}(\mathbf{v}),\] \[\mathcal{B}_{1}:\mathbf{V}\to\mathbf{Z}^{\prime},\quad\langle \mathcal{B}_{1}(\mathbf{v}),\psi\rangle:=b_{1}(\mathbf{v},\psi)=-\int_{\Omega}\psi \operatorname{div}\mathbf{v},\] \[\mathcal{B}_{2}:\mathrm{Q}^{\mathrm{P}}\to\mathbf{Z}^{\prime}, \quad\langle\mathcal{B}_{2}(p^{\mathrm{P}}),\psi\rangle:=b_{2}(p^{\mathrm{P }},\psi)=\int_{\Omega^{\prime}}\frac{\alpha}{\lambda}p^{\mathrm{P}}\psi,\] \[C_{1}:\mathrm{Q}^{\mathrm{P}}\to\mathrm{Q}^{\mathrm{P}^{\prime}}, \quad\langle C_{1}(p^{\mathrm{P}}),q^{\mathrm{P}}\rangle:=\tilde{a}_{2}(p^{ \mathrm{P}},q^{\mathrm{P}})+a_{2}(p^{\mathrm{P}},q^{\mathrm{P}})=\int_{\Omega^ {\mathrm{P}}}\left(\left(c_{0}+\frac{\alpha^{2}}{\lambda}\right)p^{\mathrm{P}}q ^{\mathrm{P}}+\frac{\kappa}{\eta}\nabla p^{\mathrm{P}}\nabla q^{\mathrm{P}} \right),\] \[C_{2}:\mathrm{Z}\to\mathbf{Z}^{\prime},\quad\langle C_{2}( \varphi),\psi\rangle=a_{3}(\phi,\psi):=\frac{1}{\lambda}\int_{\Omega}\varphi\psi.\]
Similarly, the discrete systems (3.5) and (3.9) can be cast, respectively, in the following matrix block-form:
\[\underbrace{\begin{pmatrix}\mathcal{A}_{1h}&0&\mathcal{B}_{1}^{ \prime}\\ 0&-C_{1}&\mathcal{B}_{2}^{\prime}\\ \mathcal{B}_{1}&\mathcal{B}_{2}&-C_{2}\end{pmatrix}}_{=:\mathcal{I}\mathcal{I}_ {h}}\begin{pmatrix}\mathbf{u}_{h}\\ p_{h}^{\mathrm{P}}\\ \varphi_{h}\end{pmatrix}=\begin{pmatrix}F\\ G\\ 0\end{pmatrix},\quad\text{and}\quad\underbrace{\begin{pmatrix}\mathcal{A}_{1h}&0& \mathcal{B}_{1}^{\prime}\\ 0&-C_{1h}&\mathcal{B}_{2}^{\prime}\\ \mathcal{B}_{1}&\mathcal{B}_{2}&-C_{2}\end{pmatrix}}_{=:\mathcal{I}\mathcal{I}_ {h}}\begin{pmatrix}\mathbf{u}_{h}\\ p_{h}^{\mathrm{P}}\\ \varphi_{h}\end{pmatrix}=\begin{pmatrix}F\\ G\\ 0\end{pmatrix}, \tag{6.2}\]
where \(\mathcal{I}\mathcal{I}_{h}\) and \(\mathcal{I}_{h}\) are induced by the multilinear forms \(M_{h}\) and \(\mathcal{I}_{h}\), respectively, and
\[\mathcal{A}_{1h}:\mathbf{V}_{h} \to\mathbf{V}_{h}^{\prime}, \quad\langle\mathcal{A}_{1h}(\mathbf{u}_{h}),\mathbf{v}_{h}\rangle:=a_{1}^{h}(\mathbf{u }_{h},\mathbf{v}_{h}),\] \[C_{1h}:\mathrm{Q}_{h}^{\mathrm{P}} \to\mathrm{Q}_{h}^{\mathrm{P}^{\prime}}, \quad\langle C_{1}(p_{h}^{\mathrm{P}}),q_{h}^{\mathrm{P}}\rangle:=\tilde{a}_{2 }(p_{h}^{\mathrm{P}},q_{h}^{\mathrm{P}})+a_{2}^{h}(p_{h}^{\mathrm{P}},q_{h}^{ \mathrm{P}}).\]
Next, and following [37] (see also [31, Remark 5] and [30]), a preconditioner for the linear systems in (6.2) can be constructed from the discrete version of the continuous Riesz map block-diagonal operator. This latter continuous map is defined as follows:
\[\mathcal{P}:\mathbf{V}\times\mathrm{Q}^{\mathrm{P}}\times\mathrm{Z}\to( \mathbf{V}\times\mathrm{Q}^{\mathrm{P}}\times\mathrm{Z})^{\prime},\]
\[\mathcal{P}:=\begin{pmatrix}[\mathcal{A}_{1}]^{-1}&0&0\\ 0&[C_{1}]^{-1}&0\\ 0&0&[C_{2}^{\prime}]^{-1}\end{pmatrix}=\begin{pmatrix}2\mu\,\mathbf{div}\, \mathbf{\varepsilon}&0&0\\ 0&\left(c_{0}+\frac{\alpha^{2}}{\lambda}\right)\mathbf{I}-\operatorname{div}( \frac{\kappa}{\eta}\nabla)&0\\ 0&0&\left(\frac{1}{\lambda}+\frac{1}{2\mu}\right)\mathbf{I}\end{pmatrix}^{-1}. \tag{6.3}\]
Note that, when comparing the previous expression with the main block diagonal of (6.1), \(C_{2}\) is replaced by \(C_{2}^{\prime}\), which contains the additional term \(\frac{1}{2\mu}\). Furthermore, we define the discrete weighted space \(\mathbf{X}_{h,\epsilon}:=\mathbf{V}_{h}\times\mathrm{Q}_{h}^{\mathrm{P}}\times \mathrm{Z}_{h}\) which contains all triplets \((\mathbf{u}_{h},p_{h}^{\mathrm{P}},\varphi_{h})\) that are bounded in the discrete weighted norm \(|\!|\!|\cdot|\!|\!|\), and, similarly, \(\mathbf{X}_{h,\epsilon,*}:=\mathbf{V}_{h}\times\widetilde{\mathrm{Q}}_{h}^{ \mathrm{P}}\times\mathrm{Z}_{h}\), with the norm \(|\!|\!|\cdot|\!|\!|_{*}\), in the discontinuous fluid pressure case. Here the subindex \(\epsilon\) represents all weighting parameters \((\mu,c_{0},\alpha,\lambda,\kappa,\eta)\).
Note that the discrete solution operator \(\mathcal{M}_{h}\) (and also \(\widetilde{\mathcal{M}}_{h}\) for the case of discontinuous pressure) is self-adjoint and indefinite on \(\mathbf{X}_{h,\epsilon}\) (resp. on \(\mathbf{X}_{h,\epsilon,*}\)). The stability of this operator in the triple norm has been proven in Theorem 4.1, which implies that it is a uniform isomorphism (see also Theorem 4.2 for the case of discontinuous pressure and using the norm \(|\!|\!|\cdot|\!|\!|_{*}\)). Based on the discrete solution operators and the Riesz map (6.3), we have the following form for the discrete preconditioners:
\[\mathcal{P}_{h}=\begin{pmatrix}[\widehat{\mathcal{A}}_{1h}]^{-1}&0&0\\ 0&[C_{1}]^{-1}&0\\ 0&0&[C_{2}^{\prime}]^{-1}\end{pmatrix},\quad\widehat{\mathcal{P}}_{h}= \begin{pmatrix}[\widehat{\mathcal{A}}_{1h}]^{-1}&0&0\\ 0&[C_{1h}]^{-1}&0\\ 0&0&[C_{2}^{\prime}]^{-1}\end{pmatrix}, \tag{6.4}\]
where \(\widehat{\mathcal{A}}_{1h}\) is defined as follows:
\[\widehat{\mathcal{A}}_{1h}:\mathbf{V}_{h}\to\mathbf{V}_{h}^{\prime},\quad \langle\widehat{\mathcal{A}}_{1h}(\mathbf{u}_{h}),\mathbf{v}_{h}\rangle:=\sum_{K\in \mathcal{S}_{h}}2\mu(\mathbf{\varepsilon}(\mathbf{u}_{h}),\mathbf{\varepsilon}(\mathbf{v}_{h }))_{K}+\sum_{e\in\mathcal{E}_{h}\cup\Gamma_{D}^{*}}\frac{2\mu\beta_{\mathbf{u}}} {h_{e}}\langle[\mathbf{u}_{h}\otimes\mathbf{n}],[\![\mathbf{v}_{h}\otimes\mathbf{n}]\!] \rangle_{e},\]
i.e., the operator defining the DG norm \(\|\cdot\|_{*,\mathcal{T}_{h}}\) for the discrete displacements.
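To make the operator-preconditioning construction above more concrete, the following minimal sketch (in Python, using SciPy) illustrates how a block-diagonal preconditioner of the form (6.4) can be applied inside MINRES. The block matrices here are random symmetric stand-ins, purely for illustration; in practice \(\mathcal{A}_{1h}\), \(C_{1}\), \(C_{2}'\), \(\mathcal{B}_{1}\), \(\mathcal{B}_{2}\) come from the finite element assembly described in Section 3.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Random symmetric stand-ins for the assembled blocks of (6.2); illustrative only.
n_u, n_p, n_z = 60, 40, 40

def spd_block(n, scale, seed):
    G = sp.random(n, n, density=0.05, random_state=seed)
    return (G @ G.T + sp.identity(n)) * scale

A1 = spd_block(n_u, 2.0 * 10.0, 1)                 # ~ displacement block (2*mu)
C1 = spd_block(n_p, 1.0, 2)                        # ~ Biot-pressure block (storage + diffusion)
C2 = spd_block(n_z, 1.0 / 2.0e4 + 1.0 / 20.0, 3)   # ~ (1/lambda + 1/(2*mu)) mass block
B1 = sp.random(n_z, n_u, density=0.05, random_state=4)
B2 = sp.random(n_z, n_p, density=0.05, random_state=5)

# Symmetric indefinite system, cf. (6.2)
M_h = sp.bmat([[A1, None, B1.T],
               [None, -C1, B2.T],
               [B1, B2, -C2]], format="csc")
rhs = np.ones(M_h.shape[0])

# Block-diagonal Riesz-map preconditioner, cf. (6.4): one exact solve per block
lu_blocks = [spla.splu(sp.csc_matrix(X)) for X in (A1, C1, C2)]
sizes = [n_u, n_p, n_z]

def apply_preconditioner(r):
    out, start = np.empty_like(r), 0
    for lu, n in zip(lu_blocks, sizes):
        out[start:start + n] = lu.solve(r[start:start + n])
        start += n
    return out

P = spla.LinearOperator(M_h.shape, matvec=apply_preconditioner)
x, info = spla.minres(M_h, rhs, M=P, maxiter=500)
print("MINRES exit flag:", info,
      "relative residual:", np.linalg.norm(M_h @ x - rhs) / np.linalg.norm(rhs))
```

The exact block solves in this sketch correspond to the LU decompositions of the diagonal blocks used for the experiments in Section 7.5 below.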
## 7. Numerical results

### Verification of convergence to smooth solutions
We manufacture a closed-form displacement and fluid pressure
\[\mathbf{u}=\begin{pmatrix}\sin(\pi[x+y])\\ \cos(\pi[x^{2}+y^{2}])\end{pmatrix},\quad p^{\rm P}=\sin(\pi x+y)\sin(\pi y),\]
which, together with \(\varphi^{\rm P}=\alpha p^{\rm P}-\lambda^{\rm P}\operatorname{div}\mathbf{u}\), \(\varphi^{\rm E}=-\lambda^{\rm E}\operatorname{div}\mathbf{u}\), constitute the solutions to (2.1). For this test we consider the unit square domain \(\Omega=(0,1)^{2}\) divided into \(\Omega^{\rm E}=(0,1)\times(0.5,1)\) and \(\Omega^{\rm P}=(0,1)\times(0,0.5)\) and separated by the interface \(\Sigma=(0,1)\times\{0.5\}\). The boundaries are taken as \(\Gamma^{\rm E}_{D}=\partial\Omega^{\rm E}\setminus\Sigma\) and \(\Gamma^{\rm P}_{D}=\partial\Omega\setminus\Gamma^{\rm E}_{D}\), which implies that a real Lagrange multiplier is required to constrain the mean value of the global pressure to coincide with that of the exact solution. The parameter values are taken as follows
\[\alpha=1,\quad\mu^{\rm P}=10,\quad\lambda^{\rm P}=2\cdot 10^{4}, \quad\mu^{\rm E}=20,\quad\lambda^{\rm E}=10^{4},\] \[c_{0}=1,\quad\kappa=1,\quad\eta=1,\quad\gamma=1,\quad\Delta t=1, \quad T=1,\quad\beta_{\mathbf{u}}=\beta_{p^{\rm P}}=2.5\cdot 10^{2k+1}.\]
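For instance, the exact total pressures induced by this manufactured solution can be generated symbolically; the following minimal sketch (assuming SymPy is available) computes \(\operatorname{div}\mathbf{u}\) and the resulting \(\varphi^{\rm P}\) and \(\varphi^{\rm E}\) for the parameter values above.

```python
from sympy import symbols, sin, cos, pi, diff, simplify

x, y = symbols("x y")
alpha, lam_P, lam_E = 1, 2 * 10**4, 10**4   # values used in this test

# Manufactured displacement and Biot fluid pressure
u1, u2 = sin(pi * (x + y)), cos(pi * (x**2 + y**2))
p_P = sin(pi * x + y) * sin(pi * y)

div_u = diff(u1, x) + diff(u2, y)
phi_P = simplify(alpha * p_P - lam_P * div_u)   # total pressure, poroelastic side
phi_E = simplify(-lam_E * div_u)                # total pressure, elastic side
print(phi_P)
print(phi_E)
```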
We note that the stress on the interface \(\Sigma\) is not continuous. As a result, we must add the following term:
\[\sum_{e\in\mathcal{E}_{k}^{\rm C}}\left(\llbracket\mathbf{v}\rrbracket,\llbracket (2\mu\varepsilon(\mathbf{u})-\varphi\mathbf{I})\mathbf{n}\rrbracket\right)_{0,e},\]
to the right-hand side of (3.5) and (3.9) evaluated at the exact solution. We must also include additional terms for non-homogeneous Neumann and Dirichlet boundary conditions.
For the discretisation using continuous fluid pressure approximation, errors between exact and approximate solutions are computed using the norms
\[\mathbf{e}(\mathbf{u},p^{\rm P},\varphi):=|\!|\!|(\mathbf{u}-\mathbf{u}_{h}, p^{\rm P}-p_{h}^{\rm P},\varphi-\varphi_{h})|\!|\!|,\quad\mathbf{e}_{*}(\mathbf{u}):= \lVert\mathbf{u}-\mathbf{u}_{h}\rVert_{*,\mathcal{T}_{h}},\]
while for discontinuous pressure approximations the following norms are modified
\[\mathbf{e}_{*}(\mathbf{u},p^{\rm P},\varphi):=|\!|\!|(\mathbf{u}-\mathbf{u}_{h },p^{\rm P}-p_{h}^{\rm P},\varphi-\varphi_{h})|\!|\!|_{*},\quad\mathbf{e}_ {*}(p^{\rm P}):=\lVert p^{\rm P}-p_{h}^{\rm P}\rVert_{*,\Omega^{\rm P}}.\]
The experimental rates of convergence are computed as
\[\mathbf{r}=\log(\mathbf{e}_{(\cdot)}/\tilde{\mathbf{e}}_{(\cdot)})[\log(h/ \tilde{h})]^{-1},\]
where \(\mathbf{e},\tilde{\mathbf{e}}\) denote errors generated on two consecutive meshes of sizes \(h\) and \(\tilde{h}\), respectively. Such an error history is displayed in Tables 7.1-7.2. In these cases we note that uniform mesh refinement is sufficient to obtain optimal convergence rates of \(\Theta(h^{k+1})\) in the corresponding broken energy norm (we also tabulate the individual errors in their natural norms). These results are consistent with the theoretical error estimates derived in Theorem 4.3.
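The rate computation is elementary; a minimal sketch (with hypothetical error values, not taken from the tables) reads:

```python
import numpy as np

# Hypothetical errors e on successive uniformly refined meshes of size h
h = np.array([1/8, 1/16, 1/32, 1/64, 1/128])
e = np.array([3.4e-1, 8.7e-2, 2.2e-2, 5.5e-3, 1.4e-3])

rates = np.log(e[1:] / e[:-1]) / np.log(h[1:] / h[:-1])
print(rates)   # expected to approach k+1 in the broken energy norm
```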
The robustness of the _a posteriori_ error estimators is quantified in terms of the effectivity index of the indicator
\[\mathtt{eff}(\Xi)=(\mathbf{e}_{*}(\mathbf{u})^{2}+\mathrm{e}(p^{\rm P})^{2}+ \mathrm{e}(\varphi)^{2})^{1/2}/\Xi,\]
(or \(\mathtt{eff}(\Xi)=(\mathbf{e}_{*}(\mathbf{u})^{2}+\mathbf{e}_{*}(p^{\rm P})^{2}+ \mathrm{e}(\varphi)^{2})^{1/2}/\Xi\) in the case of discontinuous fluid pressures) and \(\mathtt{eff}\) is expected to remain constant independently of the number of degrees of freedom associated with each mesh refinement. In both tables the effectivity
\begin{table}
\begin{tabular}{||c c|c c|c c|c c|c c|c||} \hline \hline \(k\) & DoF & \(\mathbf{e}_{*}(\mathbf{u},p^{\rm P},\varphi)\) & rate & \(\mathbf{e}_{*}(\mathbf{u})\) & rate & \(\mathbf{e}_{*}(p^{\rm P})\) & rate & \(\mathtt{e}(\varphi)\) & rate & \(\mathtt{eff}(\Xi)\) \\ \hline \multirow{6}{*}{0} & 97 & 5.84e+03 & * & 2.61e+02 & * & 6.94e-01 & * & 8.24e+03 & * & 1.25e-01 \\ & 369 & 2.63e+03 & 1.15 & 7.84e+01 & 1.73 & 3.57e-01 & 0.96 & 3.71e+03 & 1.15 & 1.22e-01 \\ & 1441 & 1.28e+03 & 1.04 & 2.51e+01 & 1.64 & 1.81e-01 & 0.98 & 1.81e+03 & 1.04 & 1.19e-01 \\ & 5697 & 6.36e+02 & 1.01 & 6.94e+00 & 1.85 & 9.14e-02 & 0.99 & 8.98e+02 & 1.01 & 1.18e-01 \\ & 22657 & 3.17e+02 & 1.00 & 2.90e+00 & 1.26 & 4.66e-02 & 0.97 & 4.48e+02 & 1.00 & 1.18e-01 \\ \hline \multirow{6}{*}{1} & 229 & 1.36e+03 & * & 6.06e+01 & * & 1.48e-01 & * & 1.92e+03 & * & 6.60e-02 \\ & 889 & 3.45e+02 & 1.98 & 3.15e+01 & 0.95 & 4.19e-02 & 1.83 & 4.85e+02 & 1.98 & 6.93e-02 \\ & 3505 & 8.85e+01 & 1.96 & 8.51e+00 & 1.89 & 1.09e-02 & 1.94 & 1.24e+02 & 1.96 & 6.97e-02 \\ & 13921 & 2.22e+01 & 1.99 & 2.03e+00 & 2.06 & 2.79e-03 & 1.97 & 3.13e+01 & 1.99 & 6.98e-02 \\ & 55489 & **5.57e+00** & 2.00 & 4.95e-01 & 2.04 & 7.43e-04 & 1.91 & 7.84e+00 & 2.00 & 6.98e-02 \\ \hline \multirow{6}{*}{2} & 417 & 2.73e+02 & * & 5.29e+01 & * & 2.46e-02 & * & 3.78e+02 & * & 4.06e-02 \\ & 1633 & 3.24e+01 & 3.07 & 6.35e+00 & 3.06 & 2.94e-03 & 3.07 & 4.49e+01 & 3.07 & 4.02e-02 \\ \cline{1-1} & 6465 & **3.72e+00** & 3.12 & 4.37e-01 & 3.86 & 3.69e-04 & 2.99 & 5.22e+00 & 3.10 & 3.95e-02 \\ \cline{1-1} & 25729 & **4.52e-01** & **3.04** & 2.94e-02 & 3.89 & 4.84e-05 & 2.93 & 6.37e-01 & 3.03 & 3.85e-02 \\ \cline{1-1} & 102657 & **5.63e-02** & **3.02** & 7.81e-03 & 3.10 & 5.91e-06 & 2.97 & 8.16e-02 & 3.01 & 3.84e-02 \\ \hline \hline \end{tabular}
\end{table}
Table 7.2. Example 1. Error history and effectivity indexes for polynomial degrees \(k=0,1,2\), going up to \(T=1\). Discretisation with discontinuous Biot fluid pressure.
index is asymptotically constant for all polynomial degrees. This fact confirms the efficiency and reliability of the estimator. Similar results are also obtained even when the Poisson ratio in each subdomain is close to 0.5.
### Verification of _a posteriori_ error estimates
To assess the performance of the proposed estimators, we use the L-shaped domain \(\Omega=(-1,1)^{2}\setminus(0,1)^{2}\); the interface is zig-zag shaped, going from the reentrant corner \((0,0)\) to the bottom-left corner \((-1,-1)\) of the domain, and the porous subdomain is the one above the interface. We consider manufactured solutions with high gradients near the reentrant corner
\[\mathbf{u}=10^{-2}\begin{pmatrix}((x-x_{a})^{2}+(y-y_{a})^{2})^{-2/3}\\ ((x-x_{a})^{2}+(y-y_{a})^{2})^{-2/3}\end{pmatrix},\quad p^{\mathrm{P}}=((x-x_{ a})^{2}+(y-y_{a})^{2})^{-2/3},\]
with \((x_{a},y_{a})=(0.01,0.01)\). We employ adaptive mesh refinement consisting of the usual steps of solving, then computing the local and global estimators, marking, refining, and smoothing. The marking of elements for refinement follows the classical Dörfler approach [21]: elements \(K\in\mathcal{T}_{h}\) with the largest local indicators \(\Xi_{K}\) are _marked_ (added to the marking set \(\mathcal{U}_{h}\subset\mathcal{T}_{h}\)) until
\[\sum_{K\in\mathcal{U}_{h}}\Xi_{K}^{2}\geq\zeta\sum_{K\in\mathcal{T}_{h}}\Xi_{ K}^{2},\]
where \(\zeta\) is a user-defined bulk density parameter. All edges in the elements in \(\mathcal{U}_{h}\) are marked for refinement. Additional edges are marked for the sake of closure, and an additional smoothing step (Laplacian smoothing on the refined mesh to improve the shape regularity of the new mesh) is applied before starting a new iteration of the algorithm. When computing convergence rates under adaptive mesh refinement, we use the expression
\[x_{(\cdot)}=-2\log(e_{(\cdot)}/\tilde{e}_{(\cdot)})[\log(\texttt{DoF}/\widehat {\texttt{DoF}})]^{-1}.\]
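A minimal sketch of the Dörfler marking step described above (sorting the local indicators and marking a near-minimal set reaching the fraction \(\zeta\) of the total estimated error) could read as follows; the indicator values are hypothetical.

```python
import numpy as np

def dorfler_mark(xi_sq, zeta):
    """Indices of a near-minimal set U_h with sum_{K in U_h} Xi_K^2 >= zeta * sum_K Xi_K^2."""
    order = np.argsort(xi_sq)[::-1]                    # largest indicators first
    n_marked = int(np.searchsorted(np.cumsum(xi_sq[order]),
                                   zeta * xi_sq.sum())) + 1
    return order[:n_marked]

xi_sq = np.array([4.0, 0.1, 0.3, 2.5, 0.05, 0.2, 1.0, 0.02, 0.6, 0.15])  # hypothetical Xi_K^2
print(dorfler_mark(xi_sq, zeta=0.7))
```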
We set the following parameter values \(c_{0}=0.01\), \(\alpha=0.5\), \(\eta=0.01\), \(\kappa=10^{-3}\), and consider two cases for the Young moduli and Poisson ratios: first \(E^{\mathrm{E}}=10\), \(E^{\mathrm{P}}=100\), \(\nu^{\mathrm{E}}=0.495\), \(\nu^{\mathrm{P}}=0.4\), and secondly a larger contrast: \(E^{\mathrm{E}}=1000\), \(E^{\mathrm{P}}=10\), \(\nu^{\mathrm{E}}=0.499\), \(\nu^{\mathrm{P}}=0.25\). Moreover, we only use the polynomial degree \(k=1\), \(\beta_{\mathbf{u}}=500\), and \(\zeta=10^{-7}\). For this case we consider continuous fluid pressure approximations. The error history is presented in the left and centre panels of Figure 7.1. There we plot the error decay vs the number of degrees of freedom for the case of uniform mesh refinement, and adaptive mesh refinement with or without a smoothing step, using the mild vs high contrast mechanical parameters. For comparison we also plot reference curves indicative of the orders \(\Theta(h)\) and \(\Theta(h^{2})\) (thanks to the relation \(\texttt{DoF}^{-1/d}\lesssim h\lesssim\texttt{DoF}^{-1/d}\), in 2D we take \(C\texttt{DoF}^{-1/2}\) and \(C\texttt{DoF}^{-1}\), respectively). We note that for high contrast parameters, the performance of the three methods is very similar. However, as the mesh is refined, for roughly the same computational cost, the two adaptive methods render a much better approximate solution. Another observation is that the effectivity indexes (plotted in the right panel) oscillate slightly more under uniform refinement than in the adaptive cases, but overall they do not show a systematic increase or decrease. We also show samples of adaptive meshes in Figure 7.2. The _a posteriori_ error indicator correctly identifies the zones of high gradients (the reentrant corner) and the zones where the parameter contrast occurs (the interface corners), and guides the concentration of elements there. The figure also portrays examples of approximate solutions together with the local values of \(\Xi_{K}\).
### A simple simulation of indentation in a 3D layered material
We now consider the punch problem (the drainage of a body by an induced compression loading [20, 36]). The full domain is \(\Omega=(0,50)^{3}\) mm\({}^{3}\), and it is split into elastic and poroelastic subdomains of equal size by a diagonal plane. The elastic moduli, hydromechanical coupling constants, and fluid model parameters
Figure 7.1. Example 2. Error decay and effectivity indexes for the convergence test on an L-shaped domain using mild and high-contrast elastic parameters with \(k=1\).
are
\[E^{\mathrm{E}}=50\,\text{kN/mm}^{2},\quad E^{\mathrm{P}}=210\,\text{kN/mm}^{2}, \quad\nu^{\mathrm{E}}=0.3,\quad\nu^{\mathrm{P}}=0.499,\]
\[\alpha=0.85,\quad c_{0}=0.1,\quad\kappa=10^{-3}\,\text{mm},\quad\eta=10^{-2}\text {kN/mm}^{2}\text{s}.\]
A normal surface load is applied on a quarter of the plane \(y=50\,\text{mm}\), near the corner \((0,50,50)\), where the traction has magnitude \(60\,\text{N/mm}^{2}\). On the three planes \(x=0\), \(y=0\), and \(z=50\,\text{mm}\) we prescribe zero normal displacement \(\mathbf{u}\cdot\mathbf{n}=0\) together with an influx condition \(\frac{\kappa}{\eta}\nabla p^{\mathrm{P}}\cdot\mathbf{n}=-1.7\,\text{mm/s}\), and on the remainder of the boundary we set stress-free conditions for the solid phase and zero Biot pressure. Figure 7.3 shows the deformed configuration after a few steps of mesh adaptation, which clearly illustrates the jump in material properties. The meshes are more densely refined near the interface, indicating that the estimator correctly captures the error in these regions. The bottom plots suggest a difference in compliance across the interface: the deformations are more pronounced in the elastic domain. Note that the method guided by the _a posteriori_ error estimate is particularly effective in capturing high solution gradients (of global displacement and of global total pressure). The rightmost panel of the figure also shows, in log-log scale, the decay of the global error estimator \(\Xi\) vs the number of degrees of freedom; for comparison we also plot a reference curve indicative of the mesh size, which in 3D is \(C\,\text{DoF}^{-\frac{1}{3}}\), confirming that the estimator converges to zero at least with order \(\Theta(h)\). Note that we are using here the lowest-order method \(k=0\) with continuous fluid pressure approximation, and the bulk density (marking) parameter is chosen as \(\zeta=0.02\).
### A test with realistic model parameters (application to brain multiphysics)
The Biot-elasticity system described in this paper is useful in a range of applications. We show here, as an example, how it can be used to calculate the displacement in a system consisting of the wall of a penetrating vessel and the interstitium surrounding it. A proper network of vessels is shown in [28, 39], but here we consider a simplified 2D illustration. The vessel is T-shaped, with the interstitium in a surrounding box. The boundaries are divided into Dirichlet and Neumann boundaries for both the vessel wall and the interstitium domains. The bottom boundary of the vessel wall and the top and bottom boundaries of the interstitium are Dirichlet boundaries, while the side boundaries of both the vessel wall and the interstitium are Neumann boundaries. The mesh is shown in Figure 7.4 (top left). The displacement is driven by a pressure wave along the inside of the vessel wall, represented as a sine wave with a period of one second and a maximum value of \(1\,\text{kPa}\). The value is taken from [46] where the intraventricular intracranial pressure is reported to be mostly in the range \(0.1\)-\(1\,\text{kPa}\). The vessel wall thickness for rats is reported to be between \(3.8\) and \(5.8\)\(\mu\)m and the diameter of the vessels
Figure 7.2. Example 2. Initial meshes for poroelastic and elastic subdomains, and meshes after 2 and 7 steps of adaptive refinement guided by \(\Xi\) (top). The bottom row shows, at the finest level and for the case without mesh smoothing, the approximate global displacement, Biot fluid pressure, and the cell-wise value of the _a posteriori_ error indicator. Here we use \(k=1\).
is reported to be between 43 and 63 \(\mu\)m [11]. We choose the vessel to have a diameter of 50 \(\mu\)m, while the vessel wall has a thickness of 5 \(\mu\)m in this example. The Lamé parameters in the interstitium are \(\mu=1\) kPa and \(\lambda=1\) MPa in our example. This is in the range of \(\mu=[590,2.5\cdot 10^{3}]\) Pa and \(\lambda=[529,1.0\cdot 10^{11}]\) Pa given in [43]. In the vessel wall, they are chosen to be 1 MPa and 1 GPa respectively, which is similar to the reported ranges of \(\mu=[3.3\cdot 10^{3},8.2\cdot 10^{5}]\) Pa and \(\lambda=[3.0\cdot 10^{4},3.4\cdot 10^{12}]\) Pa given in [12, 43]. Additionally, the permeability is 100 nm\({}^{2}\) in the interstitium, which is within the range of 10 to 2490 nm\({}^{2}\) presented in [29, 43]. We use mixed boundary conditions as follows
\[[2\mu^{\mathrm{E}}\mathbf{\varepsilon}(\mathbf{u}^{\mathrm{E}})-\varphi^{\mathrm{E}}\mathbf{I}]\,\mathbf{n}^{\partial\Omega^{\mathrm{E}}}=0\ \text{ on }\Gamma_{N}^{E}, \tag{7.1a}\]
\[\mathbf{u}^{\mathrm{E}}=0\ \text{ on }\Gamma_{D}^{E}, \tag{7.1b}\]
\[[2\mu^{\mathrm{P}}\mathbf{\varepsilon}(\mathbf{u}^{\mathrm{P}})-\varphi^{\mathrm{P}}\mathbf{I}]\,\mathbf{n}^{\partial\Omega^{\mathrm{P}}}=0\ \text{ on }\Gamma_{N}^{P}, \tag{7.1c}\]
\[\frac{\kappa}{\eta}\nabla p^{\mathrm{P}}\cdot\mathbf{n}=0\ \text{ on }\Gamma_{N}^{P}, \tag{7.1d}\]
\[\mathbf{u}^{\mathrm{P}}=0\ \text{ on }\Gamma_{D}^{P}, \tag{7.1e}\]
where \(\Gamma_{N}^{E}\) and \(\Gamma_{D}^{E}\) are the Neumann and Dirichlet boundaries for the vessel wall, and \(\Gamma_{N}^{P}\) and \(\Gamma_{D}^{P}\) are the Neumann and Dirichlet boundaries for the interstitium. The previously mentioned pressure wave is described by
\[[2\mu^{\mathrm{E}}\mathbf{\varepsilon}(\mathbf{u}^{\mathrm{E}})-\varphi^{\mathrm{E}}\mathbf{I}]\,\mathbf{n}^{\partial\Omega^{\mathrm{E}}}=g(t)\,\mathbf{n}^{\partial\Omega^{\mathrm{E}}}\ \text{ on }\Gamma_{\text{traction}}, \tag{7.2}\]
where \(\Gamma_{\text{traction}}\) is the inside of the vessel wall and \(g(t)=10^{3}\cdot\sin(2\pi t)\). The simulation runs over 0.5 seconds, which encompasses a full expansion and contraction of the vessel wall driven by the pressure sine wave.
We discretize the transient problem in time using the Crank-Nicolson method with a constant time step \(\Delta t=0.01\) seconds, and solve the problem at each step with a sparse direct solver. From the solution, shown in Figure 7.4, we observe that the pressure wave causes the displacement to spread through the vessel wall and into the interstitium. As the pressure wave goes back to zero, the displacement in the interstitium relaxes back to zero as well. The maximum displacement is 66 \(\mu\)m. This displacement is quite large considering that the maximum arterial wall velocity is 18-25 \(\mu\)m/s in mice, as reported in [40].
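For reference, the Crank-Nicolson update used for the transient runs has the generic form \((M/\Delta t+\tfrac12 K)\,u^{n+1}=(M/\Delta t-\tfrac12 K)\,u^{n}+\tfrac12(f^{n}+f^{n+1})\); a minimal self-contained sketch with small stand-in matrices (the actual \(M\), \(K\) and load vector come from the coupled finite element assembly) is given below.

```python
import numpy as np

n = 4
M = np.eye(n)                                  # stand-in "mass"-type matrix
K = np.diag([2.0, 3.0, 4.0, 5.0])              # stand-in "stiffness"-type matrix
load = np.array([1.0, 0.5, 0.0, 0.0])          # hypothetical traction load pattern
g = lambda t: 1.0e3 * np.sin(2.0 * np.pi * t)  # pressure wave on Gamma_traction, cf. (7.2)

dt, T = 0.01, 0.5
u = np.zeros(n)
lhs = M / dt + 0.5 * K
for step in range(int(round(T / dt))):
    t_old, t_new = step * dt, (step + 1) * dt
    rhs = (M / dt - 0.5 * K) @ u + 0.5 * (g(t_old) + g(t_new)) * load
    u = np.linalg.solve(lhs, rhs)
print(u)
```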
Figure 7.3. Example 3. Meshes on the deformed domain after 1, 2, and 4 steps of adaptive refinement guided by \(\Xi\). The bottom row shows, at the finest level, the approximate displacement and total pressure on both subdomains, as well as the convergence of the global _a posteriori_ error estimator.
### Evaluation of preconditioning robustness
We thoroughly evaluated the robustness of \(\mathcal{P}_{h}\) in (6.4) for a wide span of physical parameter-value ranges of interest, different mesh resolutions, H(div)-conforming approximations for the displacements (in particular, BDM and Raviart-Thomas), different rectangular domain shapes, and elastic and poroelastic rectangular subdomain shapes. Overall, our results confirm an asymptotically constant number of MINRES iterations with mesh resolution, even with material parameters that exhibit very large jumps across the interface (e.g., we tested up to 3 orders of magnitude jumps in \(\mu\) and \(\lambda\)), and/or very small or very large values (e.g., \(\kappa\in[10^{-3},10^{-5},10^{-7}]\) m\({}^{2}\); \(\lambda,\mu\in[1,10^{3},10^{6},10^{9}]\) Pa), including the extreme cases of near incompressibility, near impermeability, and near zero storativity. We note, however, that the value of the penalty parameter \(\beta_{\mathbf{u}}\) has to be chosen carefully (typically via numerical experimentation), as it can have a significant impact on preconditioner efficiency.
For conciseness, in this section, we only show results for the particularly challenging (and realistic) combination of physical parameter values corresponding to the problem in Section 7.4. We use the upper part of the domain in this problem, namely the rectangular domain \(\Omega=[0,0.25]\times[0.17,0.25]\), with elastic subdomain spanning the thin stripe \([0,0.25]\times[0.17,0.1705]\). As usual, the poroelastic domain is defined as the complement of the elastic domain. We used a triangular uniform mesh generator parametrized by the number of layers of triangles in the thinner dimension of the elastic domain, which we refer to as \(\ell\). Note that \(h=0.05/\ell\). We tested in particular with three mesh resolutions corresponding to \(\ell=2,4,8\). We solve problem (3.5) with the known manufactured solution described in Section 7.1. We report results only for BDM with \(k=0\), although we stress that the number of iterations obtained for Raviart-Thomas with \(k=1\) were very similar to those reported herein. The preconditioned MINRES solver is used in conjunction with \(\mathcal{P}_{h}\) in (6.4), and convergence is claimed whenever the Euclidean norm of the (unpreconditioned) residual of the whole system is reduced by a factor of \(10^{6}\). The action of the preconditioner was computed by LU decomposition in all cases. As an illustration of the challenge at hand, for \(\ell=2\), \(\beta_{\mathbf{u}}=20\), the condition number of the unpreconditioned system (3.5) is approximately as large as \(1.25\times 10^{26}\) (as computed by the cond Julia function). By a suitable scaling of the system, this large number could be reduced to \(1.25\times 10^{14}\) (i.e., by 12 orders of magnitude). In particular, we scale (2.1a) and (2.1d) with \(1/\max(\mu^{\mathrm{P}},\mu^{\mathrm{E}})\) and solve for the correspondingly scaled total pressures \(\varphi^{\mathrm{P}}\) and \(\varphi^{\mathrm{E}}\).
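The effect of such a scaling on the conditioning can be illustrated on a small analogue: symmetrically rescaling the equation and the unknown associated with the stiff block by \(1/\max(\mu^{\mathrm{P}},\mu^{\mathrm{E}})\) removes the large parameter from the diagonal. A toy sketch (the \(2\times 2\) matrix below is purely illustrative) reads:

```python
import numpy as np

mu_max = 1.0e6                       # stands in for max(mu_P, mu_E)
A = np.array([[2.0 * mu_max, 1.0],   # toy analogue of the badly scaled coupled system
              [1.0, 1.0e-3]])
D = np.diag([1.0 / np.sqrt(mu_max), 1.0])
A_scaled = D @ A @ D                 # symmetric scaling of equation and unknown
print(np.linalg.cond(A), np.linalg.cond(A_scaled))
```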
|
2305.08212
|
Analytic and algebraic integrability of quasi-smooth derived foliations
|
We study integrability results for derived foliations in the holomorphic
context. We prove a global integrability theorem by flat groupoids, as well as
global algebraic integrability in the presence of a compact leaf with finite
holonomy groups. These results are generalizations to the derived (and thus
singular) setting of well-known results and constructions: integration of
holomorphic foliations by smooth groupoids and global stability in the
holomorphic situation.
|
Bertrand Toen, Gabriele Vezzosi
|
2023-05-14T18:11:30Z
|
http://arxiv.org/abs/2305.08212v1
|
# Analytic and algebraic integrability of quasi-smooth derived foliations
###### Abstract.
We study integrability results for derived foliations in the holomorphic context. We prove a global integrability theorem by flat groupoids, as well as global algebraic integrability in the presence of a compact leaf with finite holonomy groups. These results are generalizations to the derived (and thus singular) setting of well-known results and constructions: integration of holomorphic foliations by smooth groupoids and global stability in the holomorphic situation (see [2]).
###### Contents
* 1 Quick reminder on derived analytic geometry
* 2 De Rham algebras in the holomorphic context
* 3 Analytic derived foliations
* 4 Analytic integrability and existence of derived enhancements
* 5 Analytification of derived foliations: GAGA theorem
* 6 Existence of leaf space and holonomy
* 6.1 Some conditions
* 6.2 Existence theorem
* 7 Reeb stability and algebraic integrability
## Introduction
In [13] we have introduced the notion of derived foliation in the algebraic setting, and proved a Riemann-Hilbert correspondence for quasi-smooth and rigid (also called _transversally smooth and rigid_ in this paper) derived foliations. This correspondence used analytic techniques at several places, as well as integrability results (formal and analytic).
The purpose of the present paper is to investigate further the integrability results of transversally smooth and rigid derived foliations, particularly at the analytic and algebraic level, including global aspects of integrability. For this, we reconsider the notion of derived holomorphic foliations on general derived analytic stacks, by introducing a holomorphic version of graded
mixed algebras as used in [23]. We study several integrability results for (transversally smooth and rigid) derived foliations: formal and local integrability (Theorem 4.1), global analytic integrability by means of a holonomy groupoid (Theorem 6.2), and finally global algebraic integrability under the condition that at least one leaf is compact and has finite holonomy. On the way, we also prove that any differential ideal on a complex manifold, whose singularities lie in codimension 3 or higher, admits a unique derived enhancement; in particular our integrability results apply to such cases.
As a final comment, we remark that the content of this note is part of a book in preparation on derived foliations, and will appear with more details in its chapters.
## 1. Quick reminder on derived analytic geometry
We recall that we can define a set-valued symmetric operad \(hol\) of holomorphic rings as follows (see [11, 12, 13] and [14, 15] for the \(\mathcal{C}^{\infty}\)-version). The set \(hol(n)\) is defined to be the set of holomorphic functions on \(\mathbb{C}^{n}\), and the operadic structure is defined by substitution in a natural manner (\(n=0\) is included here, the operad \(hol\) is unital). By definition, a holomorphic ring is a (unital) algebra over the operad \(hol\). In more concrete terms, a holomorphic ring consists of a set \(R\), endowed with extra operations: for any holomorphic function \(f:\mathbb{C}^{n}\to\mathbb{C}\) we are given maps
\[\gamma_{f}:R^{n}\to R,\]
satisfying some natural properties with respect to substitutions. Intuitively, \(\gamma_{f}(a_{1},\dots,a_{n})\) stands for "\(f(a_{1},\dots,a_{n})\)", the "evaluation of \(f\) at the elements \(a_{i}\)". When \(f\) is restricted to polynomial maps, the operations \(\gamma_{f}\) determine a commutative \(\mathbb{C}\)-algebra structure on \(R\) (the sum and multiplication being induced by the holomorphic maps \(x+y\) and \(xy\)). The typical example of a holomorphic ring is the set \(R=\mathcal{O}_{X}(X)\) of (globally defined) holomorphic functions on a given complex analytic space \(X\), where the operations \(\gamma_{f}\) are actually defined by \(\gamma_{f}(a_{1},\dots,a_{n})=f(a_{1},\dots,a_{n})\), for \(a_{i}\in\mathcal{O}_{X}(X)\).
Morphisms of holomorphic rings are maps commuting with all the operations \(\gamma_{f}\). They form a category denoted by \(\mathbf{CR}^{h}\), which comes equipped with a forgetful functor \(\mathbf{CR}^{h}\to\mathbb{C}-\mathbf{CR}\), from holomorphic rings to commutative \(\mathbb{C}\)-algebras, given by restricting the \(f\)'s to polynomial maps only. We will often use the same notations for a holomorphic ring \(R\) and its underlying \(\mathbb{C}\)-algebra, but in situations where the difference is important the latter will be denoted by \(N(R)\). Note that the forgetful functor \(\mathbf{CR}^{h}\to\mathbb{C}-\mathbf{CR}\) possesses a left adjoint, denoted by \(R\mapsto R^{h}\), sending a \(\mathbb{C}\)-algebra to its "holomorphication". The construction \(R\mapsto R^{h}\) sends the polynomial algebra \(\mathbb{C}[x_{1},\dots,x_{n}]\) to the ring \(hol(n)\) of holomorphic functions on \(\mathbb{C}^{n}\) (with its natural holomorphic structure), and commutes with colimits. For any \(\mathbb{C}\)-algebra \(R\) the holomorphic ring \(R^{h}\) can thus be described by choosing a presentation of \(R\). The forgetful functor \(\mathbf{CR}^{h}\to\mathbf{CR}\) commutes with filtered colimits (and, more generally, with sifted colimits),
but does not commute with push-outs in general. In order to avoid confusions we will use \(\otimes^{h}\) to denote push-outs in \(\mathbf{CR}^{h}\).
Any holomorphic ring \(R\) possesses a module of (holomorphic) _differential forms_\(\Omega^{1}_{R}\), which is an \(R\)-module with the following universal property: \(R\)-module maps \(\Omega^{1}_{R}\to M\) are in one-to-one correspondence with _holomorphic derivations_\(\delta:R\to M\). Recall here that a derivation \(\delta:R\to M\) is _holomorphic_ if for all \(f\in hol(n)\) we have the chain rule
\[\delta(\gamma_{f}(a_{1},\dots,a_{n}))=\sum_{i}\frac{\partial f}{\partial z_{i} }(a_{1},\dots,a_{n}).\delta(a_{i}).\]
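For instance, for \(f(z_{1},z_{2})=z_{1}z_{2}\) the chain rule specializes to the usual Leibniz rule for the underlying \(\mathbb{C}\)-algebra structure:

\[\delta(a_{1}.a_{2})=\delta(\gamma_{f}(a_{1},a_{2}))=\frac{\partial f}{\partial z_{1}}(a_{1},a_{2}).\delta(a_{1})+\frac{\partial f}{\partial z_{2}}(a_{1},a_{2}).\delta(a_{2})=a_{2}.\delta(a_{1})+a_{1}.\delta(a_{2}).\]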
It is important here to make a difference between a holomorphic ring \(R\) and its underlying \(\mathbb{C}\)-algebra \(N(R)\). Indeed, \(\Omega^{1}_{R}\) is different from \(\Omega^{1}_{N(R)/\mathbb{C}}\), the usual module of Kahler differential forms in \(N(R)\). There is an obvious morphism \(\Omega^{1}_{N(R)/\mathbb{C}}\to\Omega^{1}_{R}\) coming from the fact that the universal holomorphic derivation \(R\to\Omega^{1}_{R}\) is a derivation, but this is not an isomorphism in general. Also note that if we start with a commutative \(\mathbb{C}\)-algebra \(R\), then there exists a natural isomorphism of \(R^{h}\)-modules (which follows from the fact that the forgetful functor \(\mathbf{CR}^{h}\to\mathbb{C}-\mathbf{CR}\) commutes with trivial square zero extensions)
\[R^{h}\otimes_{R}\Omega^{1}_{R/\mathbb{C}}\simeq\Omega^{1}_{R^{h}}.\]
When \(R\) is a holomorphic ring \(\Omega^{1}_{R}\) will always mean the \(R\)-module of holomorphic differentials, and we will use the notation \(\Omega^{1}_{N(R)}\) for the (non-holomorphic) Kahler differentials.
The next definition is essentially similar to the notions used in [11, 12]. Note however that in the non-connective case, our notion of holomorphic cdga differs slightly from the one in loc. cit., as the holomorphic structure will exist on the cohomological degree \(0\) subalgebra (and not merely on the subalgebra of \(0\)-cocycles). Note also the extra condition on the differential which does not appear in the above-mentioned reference. However, for connective cdga's it coincides with the notion of [11, 12]. We will use holomorphic non-connective cdga's in order to consider de Rham theory in the holomorphic context.
**Definition 1.1**.:
1. _Let_ \(A\in\mathbf{cdga}_{\mathbb{C}}\) _be a_ \(\mathbb{C}\)_-linear cdga. A_ holomorphic structure on_ \(A\) _consists of a holomorphic ring structure on the_ \(\mathbb{C}\)_-algebra_ \(A^{0}\) _of elements of cohomological degree_ \(0\)_, compatible with its_ \(\mathbb{C}\)_-algebra structure, such that the cohomological differential_ \(d:A^{0}\to A^{1}\) _is a holomorphic derivation. A_ holomorphic cdga _is a_ \(\mathbb{C}\)_-linear cdga endowed with a holomorphic structure._
2. _A morphism_ \(A\to B\) _between two holomorphic cdga's is a morphism in_ \(\mathbf{cdga}_{\mathbb{C}}\) _for which the induced morphism_ \(A^{0}\to B^{0}\) _is a morphism of holomorphic rings._
The holomorphic cdga's form a category \(cdga^{h}\), endowed with a forgetful functor \(cdga^{h}\to cdga_{\mathbb{C}}\), to the category of \(\mathbb{C}\)-linear cdga's. This functor is the right adjoint of an adjunction, whose left adjoint is denoted by \(A\mapsto A^{h}\). Explicitly, for \(A\in cdga_{\mathbb{C}}\), \(A^{h}\) is defined as follows. We start with the graded \(\mathbb{C}\)-algebra \((A^{0})^{h}\otimes_{A^{0}}A\), obtained by extending \(A^{0}\) to its holomorphication. The cohomological differential \(d:A^{0}\to A^{1}\) extends uniquely into a holomorphic derivation
\(d^{h}:(A^{0})^{h}\to(A^{0})^{h}\otimes_{A^{0}}A^{1}\). We then define the cohomological differential \(d^{h}\) on the whole graded algebra \((A^{0})^{h}\otimes_{A^{0}}A\) by the formula
\[d^{h}(a\otimes x)=d^{h}(a).x+a.d(x),\]
for any homogeneous \(x\in A\) and \(a\in(A^{0})^{h}\). This defines a structure of cdga on \(A^{h}\). Moreover, \((A^{h})^{0}=(A^{0})^{h}\) comes equipped with its natural holomorphic ring structure making \(A^{h}\) into a holomorphic cdga in the sense of the above definition.
The adjunction \(cdga^{h}\leftrightarrows cdga_{\mathbb{C}}\) can be used in order to lift the model structure on \(cdga_{\mathbb{C}}\) to a model structure on \(cdga^{h}\), for which fibrations and equivalences are defined via the forgetful functor.
**Proposition 1.2**.: _The above notions of equivalences and fibrations make \(cdga^{h}\) into a model category for which the forgetful functor \(cdga^{h}\to cdga\) is a right Quillen functor._
_Proof._ We rely on the standard techniques to lift model structures (see for instance [1, §2.5, 2.6]). We use the standard set of generating cofibrations and trivial cofibrations for \(cdga\). The forgetful functor commutes with filtered colimits, and thus the only non-trivial statement is to check that a push-out along the image by \((-)^{h}\) of a generating trivial cofibration is an equivalence.
For this, we recall that the generating set \(J\) of trivial cofibrations in \(cdga\) is the set of morphisms \(\mathbb{C}\to\mathbb{C}[x,y]\), where \(deg(y)=deg(x)+1\) and the differential is given by \(d(x)=y\). When neither \(x\) nor \(y\) sits in degree \(0\), \(\mathbb{C}[x,y]^{h}\) is \(\mathbb{C}[x,y]\) and its part of cohomological degree \(0\) is \(\mathbb{C}\) with its canonical holomorphic structure. There are thus two cases to investigate: \(deg(x)=0\) and \(deg(x)=-1\).
When \(deg(x)=0\), the holomorphic cdga \(\mathbb{C}[x,y]^{h}\) can be written as \(hol(1)[y]\), which is a holomorphic cdga concentrated in degrees \(0\) and \(1\), of the form
\[hol(1)\xrightarrow{d}hol(1)\]
in degrees \(0\) and \(1\). The holomorphic ring \(hol(1)\) is here the ring of entire functions on \(\mathbb{C}\), and the differential \(d\) is here simply the derivative \(f\mapsto f^{\prime}\) of entire functions. As a result, if \(A\) is any holomorphic cdga, the underlying complex of the push-out of holomorphic cdga's \(A\otimes_{\mathbb{C}}^{h}\mathbb{C}[x,y]^{h}\) is the cocone of the morphism of complexes
\[A\otimes^{h}hol(1)\xrightarrow{d}A\otimes^{h}hol(1),\]
where \(d\) is again the derivative acting on the factor \(hol(1)\). We have to prove that this cone is quasi-isomorphic to the complex \(A\), sitting inside \(A\otimes^{h}hol(1)\) via the unit \(\mathbb{C}\to hol(1)\). In fact we will show that there is a short exact sequence of complexes

\[0\to A\to A\otimes^{h}hol(1)\xrightarrow{d}A\otimes^{h}hol(1)\to 0.\]
This means that for any \(i\), the sequence of vector spaces

\[0\to A^{i}\to(A\otimes^{h}hol(1))^{i}\xrightarrow{d}(A\otimes^{h}hol(1))^{i}\to 0\]
is exact. For \(i\neq 0\), \((A\otimes^{h}hol(1))^{i}\simeq A^{i}\otimes hol(1)\) is the usual tensor product, and thus the sequence is exact because it is obtained from \(\mathbb{C}\to hol(1)\to hol(1)\) by tensoring with \(A^{i}\). For \(i=0\), we can write \(A^{0}\) as a filtered colimit of holomorphic rings of finite presentation, and thus restrict to the case where \(A^{0}=\mathcal{O}(X)\) is the ring of holomorphic functions on a closed analytic subspace \(X\subset\mathbb{C}^{n}\), defined by a finite number of global equations. The above sequence is then isomorphic to the exact sequence

\[0\to\mathcal{O}(X)\to\mathcal{O}(X\times\mathbb{C})\xrightarrow{\partial_{t}}\mathcal{O}(X\times\mathbb{C})\to 0,\]
where \(\partial_{t}\) is the partial derivative along the standard coordinates \(t\) on \(\mathbb{C}\).
When \(deg(x)=-1\) the proof follows the same lines and is left to the reader. \(\Box\)
The \(\infty\)-category obtained from \(cdga^{h}\) by inverting the equivalences (i.e. the morphisms inducing quasi-isomorphisms on the underlying cdga's) will be denoted by \(\mathbf{cdga}^{h}\). The Quillen adjunction of Proposition 1.2 produces an adjunction of \(\infty\)-categories
\[\mathbf{cdga}_{\mathbb{C}}\leftrightarrows\mathbf{cdga}^{h},\]
whose left adjoint will be denoted by \(A\mapsto A^{h}\). The right adjoint, if necessary, will be denoted by \(A\mapsto N(A)\), but most of the time it will not be written explicitly (so we'll simply write again \(A\) for the underlying cdga).
A holomorphic cdga \(A\) possesses a _module of holomorphic differential forms_\(\Omega^{1}_{A}\). This is an \(A\)-dg-module such that morphisms of dg-modules \(\Omega^{1}_{A}\to M\) are in one-to-one correspondence with holomorphic derivations \(A\to M\). In order to define holomorphic derivations more precisely, we introduce the trivial square zero extension \(A\oplus M\). This is a holomorphic cdga whose underlying cdga is the trivial square zero extension of \(N(A)\) by \(M\), while the holomorphic structure on \(A^{0}\oplus M^{0}\) is defined by the formula
\[\gamma_{f}(a_{1}+m_{1},\ldots,a_{n}+m_{n})=\gamma_{f}(a_{1},\ldots,a_{n})+ \sum_{i}\frac{\partial f}{\partial z_{i}}(a_{1},\ldots,a_{n}).m_{i}.\]
The holomorphic derivations from \(A\) to \(M\) are then _defined_ as sections of the natural projection \(A\oplus M\to A\).
As in the algebraic case, the construction \(A\mapsto\Omega^{1}_{A}\) can be left derived in order to produce a (holomorphic) _cotangent complex_\(\mathbb{L}_{A}\) for any \(A\in\mathbf{cdga}^{h}\). This cotangent complex satisfies the usual properties of stability by base change and functoriality inside \(\mathbf{cdga}^{h}\). For \(A\in\mathbf{cdga}_{\mathbb{C}}\) we have a natural equivalence
\[A^{h}\otimes_{A}\mathbb{L}_{A}\simeq\mathbb{L}_{A^{h}}.\]
Any object \(A\in\mathbf{cdga}^{h}\) possesses a _connective cover_\(\tau_{\leq 0}A\). This is the right adjoint of the inclusion \(\infty\)-functor \(\mathbf{cdga}^{h}_{c}\hookrightarrow\mathbf{cdga}^{h}\), where \(\mathbf{cdga}^{h}_{c}\) stands for the full sub-\(\infty\)-category spanned by holomorphic cdga's whose underlying cdga are connective. The connective cover construction \(A\mapsto\tau_{\leq 0}A\) is moreover compatible with the forgetful \(\infty\)-functor to \(\mathbf{cdga}\). Indeed,
we simply have to notice that if \(A\) is a holomorphic cdga, the kernel \(Ker(d:A^{0}\to A^{-1})\subset A^{0}\) is a sub-holomorphic ring of \(A^{0}\) (because, by definition, \(d\) is a holomorphic derivation).
A connective holomorphic cdga \(A\in\mathbf{cdga}_{c}^{h}\) is called a _finite cell object_ if there is a finite sequence of morphisms in \(\mathbf{cdga}_{c}^{h}\)
where each \(A_{i+1}\) is obtained from \(A_{i}\) by choosing some elements \(a_{i}\in A_{i}^{n_{i}}\) of degree \(n_{i}\), and freely adding a finite number of variables \(y_{i}\) of degrees \(n_{i}-1\) with \(d(y_{i})=a_{i}\). By weakening this definition, we say that \(A\in\mathbf{cdga}_{c}^{h}\) is an _almost finite cell object_ if there is a countable sequence of morphisms
with \(A=\mathrm{colim}_{i}A_{i}\) and where each \(A_{i}\to A_{i+1}\) is as above but with the condition that the sequence of integers \(n_{i}\) tends to \(\infty\). In other words, an almost finite cell object is a cell object with a finite number of cells in each dimension.
**Definition 1.3**.:
1. _A connective holomorphic cdga_ \(A\) _is_ of finite presentation _if it is a retract, in the_ \(\infty\)_-category_ \(\mathbf{cdga}_{c}^{h}\) _of a finite cell object._
2. _A connective holomorphic cdga_ \(A\) _is_ almost of finite presentation _if it is a retract of an almost finite cell object._
Using [17, Prop. 2.2], it can be shown that finitely presented holomorphic cdga's are the compact objects in \(\mathbf{cdga}_{c}^{h}\). Similarly, almost finitely presented holomorphic cdga's are the objects \(A\) for which \(Map(A,-)\) commutes with filtered colimits of uniformly truncated objects: for any filtered system \(\{A_{i}\}\), with \(H^{j}(A_{i})=0\) for all \(j\leq n\) (for some fixed integer \(n\)), we have \(colim_{i}Map(A,A_{i})\simeq Map(A,colim_{i}A_{i})\).
As in the algebraic context, cotangent complexes can be used in order to characterize (almost) finitely presented objects as follows. A holomorphic cdga \(A\) is (almost) finitely presented if and only if the holomorphic ring \(H^{0}(A)\) is finitely presented (in the category of holomorphic rings \(\mathbf{CR}^{h}\)) and \(\mathbb{L}_{A}\) is an (almost) perfect \(A\)-module. Note here that \(H^{0}(A)\) is the zero-th cohomology group of \(A\), which comes equipped with a natural structure of a holomorphic ring such that \(Ker(d)\to H^{0}(A)\) is a quotient of holomorphic rings.
**Definition 1.4**.: _The \(\infty\)-category of affine derived analytic spaces is the full sub-\(\infty\)-category of the opposite \(\infty\)-category of \(\mathbf{cdga}_{c}^{h}\) formed by holomorphic cdga's almost of finite presentation. It is denoted by \(\mathbf{dAff}^{h}\). The object in \(\mathbf{dAff}^{h}\) corresponding to an object \(A\in\mathbf{cdga}_{c}^{h}\) will be denoted symbolically by \(\mathbf{Spec}^{h}\,A\)._
By definition/construction, we note that for an affine derived analytic space \(X=\mathbf{Spec}^{h}\,A\), the holomorphic ring \(H^{0}(A)\) can be canonically identified with the ring of holomorphic functions on a closed analytic subspace \(\tau_{0}(X)\) in \(\mathbb{C}^{n}\). The analytic space \(\tau_{0}(X)\) is defined as follows. We can write \(A\) as a connective cell object with finitely many cells in each dimension, and for such a
cell object, \(A^{0}\) is a free holomorphic ring on \(n\) generators (for some \(n\)). Thus, \(H^{0}(A)\) becomes a quotient of the holomorphic ring \(hol(n)=\mathcal{O}(\mathbb{C}^{n})\) by a finite number of relations. These relations are given by elements \(f_{i}\in hol(n)\) and therefore define a complex analytic subspace \(\tau_{0}(X)\) inside \(\mathbb{C}^{n}\). The holomorphic ring \(H^{0}(A)\) is then naturally isomorphic to the holomorphic ring \(\mathcal{O}_{X}(X)\) of holomorphic functions on \(\tau_{0}(X)\). The complex space \(\tau_{0}(X)\) is called the _truncation_ of \(X\). In the same manner, the cohomology groups \(H^{i}(A)\) are finitely generated \(H^{0}(A)\)-modules, and thus correspond to globally generated coherent sheaves on \(\tau_{0}(X)\) (see [22, Prop. 1.24, Lem. 1.25]).
The natural analytic topology on \(\mathbb{C}\) induces a Grothendieck topology on the \(\infty\)-category \(\mathbf{dAff}^{h}\), as follows. If \(i:\mathbf{Spec}^{h}\,A\to\mathbf{Spec}^{h}\,B\) is a morphism of affine derived analytic spaces, we say that \(i\) is an _open immersion_ if the induced morphism \(i_{0}:\mathbf{Spec}^{h}\,H^{0}(A)\to\mathbf{Spec}^{h}\,H^{0}(B)\) is an open immersion of analytic spaces, and if furthermore the natural morphisms \(H^{i}(B)\otimes_{H^{0}(B)}H^{0}(A)\to H^{i}(A)\) are isomorphisms (i.e. the coherent sheaves \(H^{i}(B)\) on \(\mathbf{Spec}^{h}\,H^{0}(B)\) restrict to \(H^{i}(A)\) along the open immersion \(i_{0}\)). An _open covering_ of affine derived analytic spaces is then a family of open immersions \(\{U_{i}\to X\}_{i}\) such that \(\coprod_{i}\tau_{0}(U_{i})\to\tau_{0}(X)\) is a surjective map of complex spaces.
We can also define smooth and etale morphisms, simply by using the holomorphic cotangent complexes as usual. We say that \(\mathbf{Spec}^{h}\,A\to\mathbf{Spec}^{h}\,B\) is smooth (resp. etale) if the relative cotangent complex \(\mathbb{L}_{A/B}\) is a projective \(A\)-module of finite rank (resp. \(\mathbb{L}_{A/B}\simeq 0\)).
We are now in a situation in which we have a Grothendieck \(\infty\)-site \(\mathbf{dAff}^{h}\) together with a notion of smooth morphisms. We can thus define Artin stacks by using the usual formal procedure (see [11, 1.4.3]). In particular, we have the \(\infty\)-topos of stacks on this \(\infty\)-site, which will be denoted by \(\mathbf{dSt}^{h}\), as well as full sub-\(\infty\)-category of Artin stacks. Note that by convention here the site \(\mathbf{dAff}^{h}\) consists only of almost finitely presented objects, and thus Artin stacks in these settings will be automatically locally almost finitely presented. Therefore, for any Artin derived analytic stack \(X\), its truncation \(\tau_{0}X\) is an (underived) Artin analytic stack locally of finite presentation, and the homotopy groups \(H^{i}(\mathcal{O}_{X})\) define _coherent_ sheaves on \(\tau_{0}(X)\).
## 2. De Rham algebras in the holomorphic context
We introduce a category of _holomorphic \(\mathbb{C}\)-linear graded mixed cdga's_. Its objects are graded mixed cdga's \(B\) together with a holomorphic structure on the cdga \(B^{(0)}\), and such that the mixed structure \(\epsilon:B^{(0)}\to B^{(1)}[-1]\) is a holomorphic derivation on the holomorphic cdga \(B^{(0)}\). We can endow this category with a model category structure simply by defining equivalences and fibrations on the underlying complexes \(B^{(i)}\) (by forgetting the holomorphic structures).
**Proposition 2.1**.: _The category of holomorphic graded mixed cdga's, with the above notions of fibrations and equivalences, defines a model category structure such that the forgetful functor to graded mixed cdga's is right Quillen._
_Proof._ This is similar to the proof of Proposition 1.2, and in fact easier as here the forgetful functor commutes with push-outs. We leave the proof to the reader. \(\Box\)
The \(\infty\)-category obtained from the model category of holomorphic graded mixed cdga's will be denoted by \((\epsilon-\mathbf{cdga}^{gr})^{h}\), and will be called the \(\infty\)-category of _holomorphic graded mixed cdga's_. For any holomorphic cdga \(A\) its de Rham algebra \(\mathbf{DR}(A)=Sym_{A}(\mathbb{L}_{A}[1])\), endowed with the mixed structure induced by the de Rham differential, is obviously an object in \((\epsilon-\mathbf{cdga}^{gr})^{h}\). More precisely, the \(\infty\)-functor \((\epsilon-\mathbf{cdga}^{gr})^{h}\to\mathbf{cdga}^{h}\), sending \(B\) to \(B^{(0)}\), has a left adjoint given by \(A\mapsto\mathbf{DR}(A)\). This adjunction of \(\infty\)-categories is moreover induced by a Quillen adjunction on the level of model categories. Of this adjunction we retain the following property
**Proposition 2.2**.: _Let \(A\) be a holomorphic cdga and \(\mathbf{DR}(A)\) its de Rham complex as an object in \((\epsilon-\mathbf{cdga}^{gr})^{h}\). For any \(B\in(\epsilon-\mathbf{cdga}^{gr})^{h}\) the morphism induced on mapping spaces_
\[\mathbf{Map}_{(\epsilon-\mathbf{cdga}^{gr})^{h}}(\mathbf{DR}(A),B)\longrightarrow \mathbf{Map}_{\mathbf{cdga}^{h}}(A,B^{(0)})\]
_is an equivalence._
We will also use a second adjunction between holomorphic graded mixed cdga's and holomorphic cdga's. Specifically, we consider the \(\infty\)-functor \(A\mapsto A(0)\), sending a holomorphic cdga \(A\) to the holomorphic graded mixed cdga \(A(0)\) which is \(A\) purely in weight \(0\) with the trivial mixed structure. This defines an \(\infty\)-functor \(\mathbf{cdga}^{h}\to(\epsilon-\mathbf{cdga}^{gr})^{h}\) which commutes with colimits and thus admits a right adjoint. This right adjoint is called the _holomorphic realization_ and is denoted by
\[|-|:(\epsilon-\mathbf{cdga}^{gr})^{h}\to\mathbf{cdga}^{h}.\]
**Proposition 2.3**.: _For any \(B\in(\epsilon-\mathbf{cdga}^{gr})^{h}\), the underlying cdga \(N(|B|)\) associated to \(|B|\) is canonically equivalent to \(|N(B)|\), the realization of the underlying graded mixed cdga \(N(B)\) associated to \(B\)._
_Proof._ This is clear by adjunction and from the fact that \(A(0)^{h}\) obviously identifies with \(A^{h}(0)\). For \(A\in\mathbf{cdga}_{\mathbb{C}}\), we have a chain of natural equivalences
\[Map_{\mathbf{cdga}_{\mathbb{C}}}(A,N(|B|))\simeq Map_{\mathbf{ cdga}^{h}}(A^{h},|B|)\simeq Map_{(\epsilon-\mathbf{cdga}^{gr})^{h}}(A(0)^{h},B)\] \[\simeq Map_{\epsilon-\mathbf{cdga}^{gr}}(A(0),N(B))\simeq Map_{ \mathbf{cdga}_{\mathbb{C}}}(A,|N(B)|).\]
\(\Box\)
**Definition 2.4**.: _Let \(A\) be a (connective) holomorphic cdga and \(\mathbf{DR}(A)\) its de Rham holomorphic graded mixed cdga. The holomorphic derived de Rham complex of \(A\) is defined by \(\widehat{C}^{*}_{DR}(A):=|\mathbf{DR}(A)|\in\mathbf{cdga}^{h}\)._
**Remark 2.5**.: In most cases we will only consider the underlying cdga of \(\widehat{C}^{*}_{DR}(A)\) and will seldom use its natural holomorphic structure. However, this holomorphic structure will be
useful for some specific arguments. Because of this we will still denote by \(\widehat{C}^{*}_{DR}(A)\) the underlying cdga and simply call it the derived de Rham complex of \(A\).
## 3. Analytic derived foliations
For any affine derived analytic space \(\mathbf{Spec}^{h}\,A\) we remind that we have a holomorphic graded mixed cdga \(\mathbf{DR}(A)\) whose underlying graded cdga is \(Sym_{A}(\mathbb{L}_{A}[1])\), and for which the mixed structure is given by the de Rham differential. The construction \(A\mapsto\mathbf{DR}(A)\) defines an \(\infty\)-functor
\[\mathbf{DR}(-):(\mathbf{dAff}^{h})^{op}\longrightarrow(\epsilon-\mathbf{cdga }^{gr})^{h}.\]
We then consider the \(\infty\)-functor
\[\mathcal{F}ol^{pr}:(\mathbf{dAff}^{h})^{op}\longrightarrow\mathbf{Cat}_{ \infty},\]
sending \(X=\mathbf{Spec}^{h}\,A\in\mathbf{dAff}^{h}\) to the \(\infty\)-category of holomorphic graded mixed \(\mathbf{DR}(X)\)-cdga's \(B\) such that the following three conditions are satisfied
1. the natural morphism \(A\to B^{(0)}\) is a quasi-isomorphism
2. \(B^{(1)}\) is an almost perfect complex of \(A\)-dg-modules
3. the natural morphism of graded cdga's \[Sym_{A}(B^{(1)})\to B\] is a graded quasi-isomorphism.
As opposed to the algebraic situation, the \(\infty\)-functor \(\mathcal{F}ol^{pr}:(\mathbf{dAff}^{h})^{op}\longrightarrow\mathbf{Cat}_{\infty}\) is only a prestack, i.e. it does not satisfy descent. Indeed, if \(X\subset\mathbb{C}^{n}\) is a closed analytic subspace, globally defined by a finite number of equations, and with holomorphic ring of functions \(A\), then almost perfect \(A\)-modules correspond to almost perfect complexes on \(X\) that can be written as bounded complexes of globally generated vector bundles on \(X\). Of course, these are not all almost perfect complexes on \(X\). The stack \(\mathcal{F}ol\) associated to \(\mathcal{F}ol^{pr}\) can however be computed easily. For this, we sheafify everything on \(X\). We first consider the small site of affine opens \(U\subset X\). The holomorphic ring \(A\) sheafifies to a sheaf of holomorphic cdga's \(\mathcal{O}_{X}\). We next consider \(\mathbf{DR}_{X}\) as the sheaf \(U\mapsto\mathbf{DR}(U)\) of holomorphic graded mixed cdga's. The objects in \(\mathcal{F}ol(X)\) can then be described as sheaves of holomorphic graded mixed \(\mathbf{DR}_{X}\)-cdga's \(B\) such that
1. the natural morphism \(\mathcal{O}_{X}\to B^{(0)}\) is a quasi-isomorphism of sheaves of cdga's on \(X\)
2. \(B^{(1)}\) is an almost perfect complex of \(\mathcal{O}_{X}\)-dg-modules
3. the natural morphism of graded cdga's \[Sym_{\mathcal{O}_{X}}(B^{(1)})\to B\] is a quasi-isomorphism of sheaves of graded complexes.
We thus get an \(\infty\)-functor \(\mathcal{F}ol:(\mathbf{dAff}^{h})^{op}\rightarrow\mathbf{Cat}_{\infty}\) satisfying the descent property, and thus can be left Kan extended to a colimit preserving \(\infty\)-functor
\[\mathcal{F}ol:(\mathbf{dSt}^{h})^{op}\longrightarrow\mathbf{Cat}_{\infty}.\]
**Definition 3.1**.: _For a derived analytic stack \(X\in\mathbf{dSt}^{h}\), the \(\infty\)-category of derived foliations on \(X\) is \(\mathcal{F}ol(X)\). The initial object of \(\mathcal{F}ol(X)\), when it exists, will be denoted by \(0_{X}\) (or just by \(0\)), and called the initial foliation on \(X\). For a morphism of derived analytic stacks \(f:X\to Y\) the induced \(\infty\)-functor \(\mathcal{F}ol(Y)\to\mathcal{F}ol(X)\) is called the pull-back of derived foliations and is denoted by \(f^{*}\)._
**Remark 3.2**.:
1. By definition, derived foliations in the analytic settings are always almost perfect. This is only because we want to avoid technical complications related to the notion of quasi-coherent sheaves in the analytic setting, and moreover all the examples treated in this paper will be almost perfect.
2. In the definition of derived foliation on an affine \(\mathbf{Spec}\,A\), it would have been equivalent to consider \(\mathbf{DR}_{X}\) as a sheaf of (non-holomorphic) graded mixed cdga's, and define derived foliations simply as sheaves of graded mixed \(\mathbf{DR}_{X}\)-cdga's \(B\) with the same conditions (this was the point of view used in [10]). Indeed, the holomorphic structure on \(B^{(0)}\) is here implicit by the first condition, as \(\mathcal{O}_{X}\) is equipped with a canonical holomorphic structure. Moreover, \(\epsilon:B^{(0)}\to B^{(1)}[-1]\) would then be compatible with \(dR:\mathcal{O}_{X}\to\mathbb{L}_{X}\), and thus would carry a canonical holomorphic structure as well. However, it seems slightly easier to work with the definition we gave above because of the obvious universal property of Proposition 2.2.
We finish this Section by mentioning how most of the general results and notions we have seen for derived foliations on derived stacks (see [10]) remain valid, possibly with mild modifications, in the current analytic case. As a start, if \(\mathcal{F}\in\mathcal{F}ol(X)\) is a derived foliation on a derived analytic stack \(X\in\mathbf{dSt}^{h}\), we can define its fake (or big) cotangent complex \(\mathbb{L}^{big}_{\mathcal{F}}\), which is an \(\mathcal{O}_{X}\)-module, simply by sending \(u:S:=\mathbf{Spec}^{h}\,A\to X\) to the \(\mathcal{O}_{S}\)-module \(\mathbb{L}_{u^{*}(\mathcal{F})}\). Here, \(u^{*}(\mathcal{F})\in\mathcal{F}ol(\mathbf{Spec}^{h}\,A)\) and can thus be represented by a sheaf of graded mixed \(\mathbf{DR}_{S}\)-algebras \(\mathbf{DR}_{u^{*}(\mathcal{F})}\) satisfying the conditions for being a derived foliation. The \(\mathcal{O}_{S}\)-module \(\mathbb{L}_{u^{*}(\mathcal{F})}\) is then by definition \(\mathbf{DR}_{u^{*}(\mathcal{F})}^{(1)}[-1]\), the \((-1)\)-shifted weight one piece of \(\mathbf{DR}_{u^{*}(\mathcal{F})}\), which by assumption is an almost perfect \(\mathcal{O}_{S}\)-module. We get this way a family of \(\mathcal{O}_{S}\)-modules, functorial in \(S\to X\), or in other words an \(\mathcal{O}_{X}\)-module denoted by \(\mathbb{L}_{\mathcal{F}}^{big}\). It comes equipped with a canonical morphism \(\mathbb{L}_{X}^{big}\to\mathbb{L}_{\mathcal{F}}^{big}\).
A first observation is that the cone of this morphism is an almost perfect \(\mathcal{O}_{X}\)-module. Indeed, this is due to the fact that conormal complexes of derived foliations are stable by pull-backs. We denote this cone by \(\mathcal{N}_{\mathcal{F}}^{*}\). Suppose now that \(X\) is a derived analytic Artin stack, and let us consider its cotangent complex \(\mathbb{L}_{X}\), which comes equipped with a canonical morphism \(\mathbb{L}_{X}^{big}\to\mathbb{L}_{X}\).
**Definition 3.3**.: _Let \(X\) be an analytic derived Artin stack and \(\mathcal{F}\in\mathcal{F}ol(X)\) be a derived foliation on \(X\). The cotangent complex of \(\mathcal{F}\), \(\mathbb{L}_{\mathcal{F}}\), is defined by the cartesian square of \(\mathcal{O}_{X}\)-modules_
We note that the cone of \(\mathbb{L}_{X}\to\mathbb{L}_{\mathcal{F}}\) is again \(\mathcal{N}_{\mathcal{F}}^{*}\), that we have seen to be almost perfect. Therefore, \(\mathbb{L}_{\mathcal{F}}\) is itself an almost perfect \(\mathcal{O}_{X}\)-module. Also, from the definition, it is easy to deduce the functorial nature of cotangent complexes. For a morphism \(f:X\to Y\) of analytic derived Artin stacks, and \(\mathcal{F}\in\mathcal{F}ol(Y)\), we have a cartesian square of almost perfect \(\mathcal{O}_{Y}\)-modules
Cotangent complexes are useful in order to define smooth, quasi-smooth and rigid derived foliations, as done in [20]. We recall here the analogous notion that is important for the present paper.
**Definition 3.4**.: _A derived foliation \(\mathcal{F}\in\mathcal{F}ol(X)\) on an analytic derived Artin stack \(X\) is called transversally smooth and rigid if the fiber of the morphism \(\mathbb{L}_{X}\to\mathbb{L}_{\mathcal{F}}\) is a vector bundle on \(X\). It is smooth if \(\mathbb{L}_{\mathcal{F}}\) is a vector bundle on \(X\)._
An important special case is when \(X\) is a _complex manifold_. In this case transversally smooth and rigid derived foliations have a two-term cotangent complex \(V\to\Omega^{1}_{X}\), where \(V\) is a vector bundle. These are the derived foliations which are the closest to being genuine holomorphic foliations (i.e. those corresponding to the case where \(V\) is a subbundle of \(\Omega^{1}_{X}\)).
We finish by defining the notion of _derived enhancement of a differential ideal_ \(K\subset\Omega^{1}_{X}\) on a complex manifold \(X\). Recall that a coherent subsheaf \(K\subset\Omega^{1}_{X}\) is called a _differential ideal_ if the sheaf image of the de Rham differential \(dR(K)\subset\Omega^{2}_{X}\) is contained in the image of \(K\otimes\Omega^{1}_{X}\to\Omega^{2}_{X}\) (induced by the wedge product). Any derived analytic foliation \(\mathcal{F}\) defines a differential ideal \(K_{\mathcal{F}}\subset\Omega^{1}_{X}\) by considering the kernel of the morphism of coherent sheaves \(\Omega^{1}_{X}\to H^{0}(\mathbb{L}_{\mathcal{F}})\).
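As a simple illustration: if \(K=\mathcal{O}_{X}\cdot\omega\subset\Omega^{1}_{X}\) is the subsheaf generated by a single nowhere-vanishing holomorphic \(1\)-form \(\omega\), then \(K\) is a differential ideal precisely when
\[d\omega=\omega\wedge\eta\ \text{ for some }\ \eta\in\Omega^{1}_{X},\qquad\text{equivalently}\qquad\omega\wedge d\omega=0,\]
which is the classical Frobenius integrability condition for the corank one distribution \(\ker(\omega)\subset\mathbb{T}_{X}\).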
**Definition 3.5**.: _Let \(X\) be a complex manifold and \(K\subset\Omega^{1}_{X}\) a differential ideal. A derived enhancement of \(K\) is the datum of \(\mathcal{F}\in\mathcal{F}ol(X)\) with \(K_{\mathcal{F}}=K\)._
## 4. Analytic integrability and existence of derived enhancements
The following local integrability result is specific to the analytic setting, and is a version of the classical Frobenius theorem for transversally smooth and rigid derived foliations. It is a
direct consequence of a result of Malgrange combined with the analytic version of [23, Cor. 1.5.4], which is proven in the same manner as in the formal algebraic setting. However, we still display this result as a theorem to stress its importance.
**Theorem 4.1**.: _Let \(X\) be a complex manifold (or more generally a smooth complex analytic space), and \(\mathcal{F}\in\mathcal{F}ol(X)\) be a derived foliation on \(X\). We suppose that the following two conditions are satisfied._
1. _The derived foliation_ \(\mathcal{F}\) _is transversally smooth and rigid._
2. _The derived foliation is smooth in codimension_ \(2\)_._
_Then, locally on \(X\), \(\mathcal{F}\) is integrable._
**Proof.** This is an application of Malgrange's theorem on Frobenius with singularities, see [14]. Indeed, by [23, Cor. 1.5.4] we already know that the truncation \(K\) of \(\mathcal{F}\) is a formally integrable differential ideal at each point, and thus case \(B\) of [14, Thm. 3.1] can be applied. We thus get that, after localizing on \(X\), there exists a holomorphic map \(f:X\to S\), where \(S\) is another complex manifold, such that \(f\) integrates \(K\), i.e. \(f^{*}(\Omega^{1}_{S})=K\subset\Omega^{1}_{X}\). We claim that \(f\) also integrates \(\mathcal{F}\). Indeed, we consider \(\mathcal{O}_{\mathcal{F}}\subset\mathcal{O}_{X}\) the subring of functions which are killed by the relative de Rham differential \(d:\mathcal{O}_{X}\to\Omega^{1}_{X}/K\). By the codimension \(2\) condition, we know that the natural morphism from derived de Rham cohomology along \(\mathcal{F}\) to naive de Rham cohomology of \(K\) induces an isomorphism of sheaves of holomorphic rings (see [23, Prop. 3.1.5])
\[H^{0}_{dR}(\mathcal{F})\longrightarrow H^{0}_{dR,naive}(K)=\mathcal{O}_{ \mathcal{F}}.\]
In particular, as \(\widehat{C}^{*}_{DR}(\mathcal{F})=|\mathbf{DR}_{\mathcal{F}}|\) is the realization of \(\mathbf{DR}_{\mathcal{F}}\), and because \(H^{i}(|\mathbf{DR}_{\mathcal{F}}|)\simeq 0\) for all \(i<0\), we have a canonical morphism of holomorphic graded mixed cdga's
\[\mathcal{O}_{\mathcal{F}}\longrightarrow|\mathbf{DR}_{\mathcal{F}}|\to \mathbf{DR}_{\mathcal{F}}.\]
This implies that the sheaf of holomorphic graded mixed \(\mathbf{DR}_{X}\)-cdga's \(\mathbf{DR}_{\mathcal{F}}\) descends to a sheaf of holomorphic graded mixed \(\mathbf{DR}_{\mathcal{O}_{X}/\mathcal{O}_{\mathcal{F}}}\)-cdga's (where \(\mathbf{DR}_{\mathcal{O}_{X}/\mathcal{O}_{\mathcal{F}}}\) is the relative derived de Rham algebra of the inclusion of sheaves of holomorphic rings \(\mathcal{O}_{\mathcal{F}}\to\mathcal{O}_{X}\)), as shown by the next lemma.
**Lemma 4.2**.: _There exists a commutative square of sheaves of holomorphic graded mixed cdga's on \(X\)_
_where \(\mathcal{O}_{\mathcal{F}}\to\mathbf{DR}_{\mathcal{F}}\) is the natural morphism described above, and \(\mathbf{DR}_{\mathcal{O}_{\mathcal{F}}}\to\mathbf{DR}_{X}\) is the morphism induced from the inclusion of sheaves of holomorphic rings \(\mathcal{O}_{\mathcal{F}}\to\mathcal{O}_{X}\)._
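The square in question should be the following one, with right vertical morphism the structural map \(\mathbf{DR}_{X}\to\mathbf{DR}_{\mathcal{F}}\) and left vertical morphism the projection of \(\mathbf{DR}_{\mathcal{O}_{\mathcal{F}}}\) onto its weight zero part:
\[\begin{array}{ccc}\mathbf{DR}_{\mathcal{O}_{\mathcal{F}}} & \longrightarrow & \mathbf{DR}_{X}\\ \downarrow & & \downarrow\\ \mathcal{O}_{\mathcal{F}} & \longrightarrow & \mathbf{DR}_{\mathcal{F}}\end{array}\]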
_Proof of the lemma._ We have two possible composed morphisms \(\mathbf{DR}_{\mathcal{O}_{\mathcal{F}}}\to\mathbf{DR}_{\mathcal{F}}\). These are morphisms of holomorphic graded mixed cdga's. By the universal property of \(\mathbf{DR}_{\mathcal{O}_{\mathcal{F}}}\) (see
Proposition 2.2), in order to check that these two morphisms are homotopic it is enough to show that they induce the same morphism of holomorphic rings in weight \(0\). But by construction these two morphisms are equal to the canonical inclusion \(\mathcal{O}_{\mathcal{F}}\to\mathcal{O}_{X}\). \(\Box\)
By the previous lemma, we have thus constructed a natural morphism of sheaves of holomorphic graded mixed cdga's on \(X\)
\[\mathbf{DR}_{\mathcal{O}_{X}/\mathcal{O}_{\mathcal{F}}}=\mathbf{DR}_{X}\otimes_ {\mathbf{DR}(\mathcal{O}_{\mathcal{F}})}\mathcal{O}_{\mathcal{F}}\longrightarrow \mathbf{DR}_{\mathcal{F}}.\]
We claim that this morphism is an equivalence. Indeed, it is enough to check that it induces an equivalence on the associated graded, or, equivalently by multiplicativity, on the weight \(1\) parts. However, in weight \(1\) the above morphism is the canonical morphism \(\mathbb{L}_{\mathcal{O}_{X}/\mathcal{O}_{\mathcal{F}}}\to\mathbb{L}_{\mathcal{ F}}\), where \(\mathbb{L}_{\mathcal{O}_{X}/\mathcal{O}_{\mathcal{F}}}\) is the holomorphic relative cotangent complex of \(\mathcal{O}_{\mathcal{F}}\subset\mathcal{O}_{X}\), and the morphism above is induced by the holomorphic derivation \(\mathcal{O}_{X}\to\mathbf{DR}_{\mathcal{F}}^{(1)}\) which vanishes on \(\mathcal{O}_{\mathcal{F}}\). To see that this morphism \(\mathbb{L}_{\mathcal{O}_{X}/\mathcal{O}_{\mathcal{F}}}\to\mathbb{L}_{\mathcal{ F}}\) is an equivalence we can work locally on \(X\), and thus assume that the differential ideal \(K\) is integrable by a morphism \(f:X\to S\) with \(S\) a smooth manifold. In this case, [10, Thm. 2.1.1] shows that \(\mathcal{O}_{\mathcal{F}}\simeq f^{-1}(\mathcal{O}_{S})\), and thus \(\mathbb{L}_{\mathcal{O}_{X}/\mathcal{O}_{\mathcal{F}}}\) is identified with the coherent sheaf \(\Omega^{1}_{f}\) of relative \(1\)-forms on \(X\) over \(S\). The fact that \(\Omega^{1}_{f}\simeq\mathbb{L}_{\mathcal{F}}\) then follows from the very definition of the fact that \(f\) integrates the differential ideal \(K\).
To summarize, we have seen that under the hypothesis of the theorem \(\mathbf{DR}_{\mathcal{F}}\) is of the form \(\mathbf{DR}_{\mathcal{O}_{X}/\mathcal{O}_{\mathcal{F}}}\). Moreover, locally on \(X\), \(\mathbf{DR}_{\mathcal{O}_{X}/\mathcal{O}_{\mathcal{F}}}\) is the relative de Rham algebra \(\mathbf{DR}_{X/S}\) for a morphism \(f:X\to S\) integrating the differential ideal \(K\). This shows in particular that \(f\) also integrates \(\mathcal{F}\) as wanted. \(\Box\)
The following existence and uniqueness results are specific to the analytic setting and do not have an algebraic counterpart (see however 5.3 in the proper case). They are very close to theorem 4.1, and themselves are again consequences of the results of [10].
**Proposition 4.3**.: _Let \(X\) be a complex manifold and \(K\subset\Omega^{1}_{X}\) be a differential ideal such that \(K\) is a vector bundle._
1. _If the coherent sheaf_ \(\Omega^{1}_{X}/K\) _is a vector bundle in codimension_ \(2\)_, then there exists at most one, up to equivalence, transversally smooth and rigid derived enhancement for_ \(K\)_._
2. _If the coherent sheaf_ \(\Omega^{1}_{X}/K\) _is a vector bundle in codimension_ \(3\)_, then there exists a unique, up to equivalence, transversally smooth and rigid derived enhancement for_ \(K\)_._
**Proof.** (1) This is a consequence of the proof of the previous theorem 4.1. Indeed, during its proof we have shown that if \(\mathcal{F}\in\mathcal{F}ol(X)\) is a transversally smooth and rigid derived enhancement of \(K\), then we must have \(\mathbf{DR}_{\mathcal{F}}\simeq\mathbf{DR}_{\mathcal{O}_{X}/\mathcal{O}_{ \mathcal{F}}}\), for \(\mathcal{O}_{\mathcal{F}}\subset\mathcal{O}_{X}\) the holomorphic subring of functions killed by the de Rham differential \(d:\mathcal{O}_{X}\to\Omega^{1}_{X}/K\). This shows indeed that \(K\) determines \(\mathcal{F}\) and thus implies the uniqueness of \(\mathcal{F}\).
(2) The reasoning is the same as the proof of theorem 4.1. We let \(\mathcal{O}_{\mathcal{F}}\subset\mathcal{O}_{X}\) be the holomorphic subring of functions killed by the de Rham differential \(d:\mathcal{O}_{X}\to\Omega^{1}_{X}/K\). We set \(\mathbf{DR}_{\mathcal{F}}:=\mathbf{DR}_{\mathcal{O}_{X}/\mathcal{O}_{\mathcal{F}}}\), the relative holomorphic de Rham algebra of the inclusion \(\mathcal{O}_{\mathcal{F}}\subset\mathcal{O}_{X}\).
It is endowed with the canonical morphism \(\mathbf{DR}_{X}\to\mathbf{DR}_{\mathcal{F}}\), and we claim that as such it is a transversally smooth and rigid derived enhancement of \(K\). For this we use case (A) of [12, Thm. 3.1] which ensures that \(K\) is locally integrable. Again by [12, Thm. 2.1.1], this shows that \(\mathbf{DR}_{\mathcal{F}}\) is locally of the form \(\mathbf{DR}_{X/S}\) for some morphism \(f:X\to S\) with \(S\) smooth. This clearly implies that the sheaf of graded mixed \(\mathbf{DR}_{X}\)-cdga \(\mathbf{DR}_{\mathcal{F}}\) satisfies the conditions of being a transversally smooth and rigid derived foliation, and that the corresponding differential ideal is equal to \(K\). \(\Box\)
## 5. Analytification of derived foliations: GAGA theorem
Let \(\mathbf{dSt}_{\mathbb{C}}\) be the \(\infty\)-category of derived stacks over \(\mathbf{Spec}\,\mathbb{C}\). We consider the full sub-\(\infty\)-category \(\mathbf{dSt}_{\mathbb{C}}^{\text{afp}}\subset\mathbf{dSt}_{\mathbb{C}}\), consisting of derived stacks which are locally almost of finite presentation over \(\mathbb{C}\). These are the derived stacks that can be obtained as colimits of objects of the form \(\mathbf{Spec}\,A\) with \(A\) a connective cdga which is almost of finite presentation over \(\mathbb{C}\). The canonical embedding \(\mathbf{dSt}_{\mathbb{C}}^{\text{afp}}\subset\mathbf{dSt}_{\mathbb{C}}\) can also be identified with the left Kan extension of the Yoneda embedding restricted to derived affine schemes locally almost of finite presentation \(\mathbf{dAff}_{\mathbb{C}}^{\text{afp}}\subset\mathbf{dAff}_{\mathbb{C}}\subset\mathbf{dSt}_{\mathbb{C}}\). In particular, an alternative description of \(\mathbf{dSt}_{\mathbb{C}}^{\text{afp}}\) is as the \(\infty\)-category of (hypercomplete) stacks on the \(\infty\)-site \(\mathbf{dAff}_{\mathbb{C}}^{\text{afp}}\) (where the topology is the etale topology).
We consider the analytification \(\infty\)-functor
\[(-)^{h}:\mathbf{dSt}_{\mathbb{C}}^{\text{afp}}\to\mathbf{dSt}^{h}\]
from the \(\infty\)-category of derived stacks locally almost of finite presentation to the \(\infty\)-category of derived analytic stacks. It is obtained as the left Kan extension of the following composition of \(\infty\)-functors
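Writing \(\mathbf{dAff}^{h}\) for the \(\infty\)-category of derived analytic affine spaces (a notation only used in this paragraph), this composition should read
\[\mathbf{dAff}_{\mathbb{C}}^{\text{afp}}\longrightarrow\mathbf{dAff}^{h}\longrightarrow\mathbf{dSt}^{h},\]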
where the first \(\infty\)-functor is the analytification, sending \(\mathbf{Spec}\,A\) to \(\mathbf{Spec}^{h}\,A^{h}\), where \(A^{h}\) is the holomorphic cdga generated by \(A\), while the second \(\infty\)-functor is the Yoneda embedding. The analytification \(\infty\)-functor \((-)^{h}:\mathbf{dSt}_{\mathbb{C}}^{\text{afp}}\to\mathbf{dSt}^{h}\) is the left adjoint of the restriction \(\infty\)-functor, sending \(X\in\mathbf{dSt}^{h}\) to the derived stack defined by \(A\mapsto X(A^{h})\). This restriction \(\infty\)-functor will be denoted by \(r\) so that we have an adjunction
\[(-)^{h}:\mathbf{dSt}_{\mathbb{C}}^{\text{afp}}\leftrightarrow\mathbf{dSt}^{h} :r.\]
Moreover, the right adjoint \(r\) is a geometric morphism of \(\infty\)-topoi, or equivalently \((-)^{h}\) commutes with finite limits. To avoid confusion with the various \((-)^{h}\) notations involved (for instance for graded mixed cdga's), we will denote this adjunction by
\[(-)^{h}=:u^{-1}:\mathbf{dSt}_{\mathbb{C}}^{\text{afp}}\leftrightarrow\mathbf{ dSt}^{h}:u_{*}:=r.\]
Recall that to both \(\infty\)-topoi \(\mathbf{dSt}_{\mathbb{C}}^{\text{afp}}\) and \(\mathbf{dSt}^{h}\) we have associated the canonical stacks (in \(\infty\)-categories) of almost perfect derived foliations, denoted by \(\mathcal{F}ol^{\text{ap}}\) in the algebraic setting,
and simply by \(\mathcal{F}ol\) in the holomorphic setting. These can be compared by means of the analytification \(\infty\)-functor \(u^{-1}\) as in the following proposition.
**Proposition 5.1**.: _There exists a canonical morphism of derived analytic stacks_
\[u^{-1}(\mathcal{F}ol^{\rm ap})\longrightarrow\mathcal{F}ol.\]
_Proof_. By adjunction it is enough to construct a morphism \(\mathcal{F}ol^{\rm ap}\to u_{*}(\mathcal{F}ol)\). For an object \(\mathbf{Spec}\,A\in\mathbf{dAff}_{\mathbb{C}}^{\rm afp}\), we consider the analytification construction \((-)^{h}:\mathcal{F}ol^{\rm ap}(X)\rightarrow\mathcal{F}ol(X^{h})\) (where \(X=\mathbf{Spec}\,A\)), sending a graded mixed cdga \(B\) with \(B^{(0)}\simeq A\) to \(B^{h}\), which is a holomorphic graded mixed cdga with \((B^{h})^{(0)}\simeq A^{h}\) satisfying the conditions to define a holomorphic derived foliation on \(X^{h}\). This is clearly functorial in \(A\), and thus defines a morphism \(\mathcal{F}ol^{\rm ap}\to u_{*}(\mathcal{F}ol)\) as required. \(\Box\)
The above proposition is needed in order to construct the analytification for derived foliations. Indeed, for \(X\in\mathbf{dSt}_{\mathbb{C}}^{\rm afp}\), we define an \(\infty\)-functor
\[(-)^{h}:\mathcal{F}ol^{\rm ap}(X)\rightarrow\mathcal{F}ol(X^{h})\]
by means of the analytification \(\infty\)-functor followed by the morphism of Proposition 5.1
\[\mathcal{F}ol^{\rm ap}(X)\simeq Map(X,\mathcal{F}ol^{\rm ap})\to Map(X^{h},u^{ -1}(\mathcal{F}ol^{\rm ap}))\to Map(X^{h},\mathcal{F}ol)\simeq\mathcal{F}ol (X^{h}).\]
**Definition 5.2**.: _For \(X\in\mathbf{dSt}_{\mathbb{C}}^{\rm afp}\) the \(\infty\)-functor defined above_
\[(-)^{h}:\mathcal{F}ol^{\rm ap}(X)\rightarrow\mathcal{F}ol(X^{h})\]
_is called the analytification \(\infty\)-functor for derived foliations._
The GAGA theorem for derived foliations can now be stated as follows.
**Theorem 5.3**.: _Let \(X\) be a proper derived algebraic Deligne-Mumford stack. Then the \(\infty\)-functor_
\[(-)^{h}:\mathcal{F}ol^{\rm ap}(X)\rightarrow\mathcal{F}ol(X^{h})\]
_from almost perfect algebraic derived foliations on \(X\) to almost perfect holomorphic derived foliations on \(X^{h}\), is an equivalence of \(\infty\)-categories._
_Proof_. We start by enlarging quite a bit the \(\infty\)-categories \(\mathcal{F}ol^{\rm ap}(X)\) and \(\mathcal{F}ol(X^{h})\). For this, let \(C_{X}\) be the \(\infty\)-category of sheaves of graded mixed cdga's \(B\) on \(X_{et}\) together with an augmentation \(B\rightarrow\mathcal{O}_{X}\) (of graded mixed cdga's for the trivial graded mixed structure on \(\mathcal{O}_{X}\)) and satisfying the following conditions:
1. The augmentation \(B\rightarrow\mathcal{O}_{X}\) induces an equivalence on weight zero parts \(B^{(0)}\simeq\mathcal{O}_{X}\).
2. The negative weight pieces of \(B\) vanish: \(B^{(i)}\simeq 0\) for all \(i<0\).
3. For all \(i\), the \(\mathcal{O}_{X}\)-module \(B^{(i)}\) is almost perfect.
Similarly, we have \(C_{X^{h}}\) the \(\infty\)-category of sheaves of holomorphic graded mixed cdga's \(B\) on \(X^{h}_{et}\) together with an augmentation \(B\rightarrow\mathcal{O}_{X^{h}}\) (of holomorphic graded mixed cdga's for the trivial graded mixed structure on \(\mathcal{O}_{X^{h}}\)) and satisfying the following conditions:
1. The augmentation \(B\to\mathcal{O}_{X^{h}}\) induces an equivalence on weight zero parts \(B^{(0)}\simeq\mathcal{O}_{X^{h}}\).
2. The negative weight pieces of \(B\) vanish: \(B^{(i)}\simeq 0\) for all \(i<0\).
3. For all \(i\), the \(\mathcal{O}_{X^{h}}\)-module \(B^{(i)}\) is almost perfect.
As a first observation we have the following finiteness result of cotangent complexes.
**Lemma 5.4**.: _Let \(B\in C_{X}\) (resp. \(B\in C_{X^{h}}\)), then the graded cotangent complex \(\mathbb{L}_{B}\) is such that each individual weight piece \(\mathbb{L}_{B}^{(i)}\) is almost perfect over \(\mathcal{O}_{X}\) and is zero for \(i<0\)._
_Proof of the lemma._ We give the proof for objects in \(C_{X}\); the holomorphic case is treated similarly.
First of all this is a statement about the underlying graded cdga's so we can forget the mixed structure. We construct a "graded cell decomposition" of \(B\) in the usual way, for which cells in a given dimension will be parametrized by almost perfect \(\mathcal{O}_{X}\)-modules. We construct, by induction, a sequence of sheaves of graded \(\mathcal{O}_{X}\)-cdga's
\[C(0)\longrightarrow C(1)\longrightarrow\cdots\longrightarrow C(n)\longrightarrow\cdots\longrightarrow B \tag{1}\]
such that \(C(n)\to B\) induces an equivalence in weights less than or equal to \(n\), and for each \(n\) there exists a push-out square of sheaves of graded cdga's
with \(E(n)\) an almost perfect \(\mathcal{O}_{X}\)-module pure of weight \(n+1\). Assuming that \(C(n)\to B\) has been constructed, we consider the induced morphism \(C(n)^{(n+1)}\to B^{(n+1)}\) on the weight \((n+1)\) pieces, and we denote by \(E(n)\) its homotopy fiber, which is a graded \(\mathcal{O}_{X}\)-module pure of weight \(n+1\). The graded \(\mathcal{O}_{X}\)-cdga \(C(n+1)\) is then defined by the push-out above. Because the morphism \(E(n)\to B\) is canonically homotopic to zero, the morphism \(C(n)\to B\) factors canonically through \(C(n+1)\to B\). Finally, \(C(n)\) and \(C(n+1)\) coincide in weights less than or equal to \(n\), so \(C(n+1)\to B\) induces an equivalence in weights less than or equal to \(n\). Moreover, the weight \(n+1\) part of \(C(n+1)\) is by definition equivalent to the cone of \(E(n)\to C(n)^{(n+1)}\), and thus is sent by an equivalence to \(B^{(n+1)}\) as required.
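Explicitly, the push-out square alluded to above should be of the following form, where \(Sym_{\mathcal{O}_{X}}(E(n))\) denotes the free graded cdga on \(E(n)\) placed in weight \(n+1\), the top morphism is induced by \(E(n)\to C(n)^{(n+1)}\subset C(n)\), and the left vertical morphism is the augmentation sending \(E(n)\) to \(0\):
\[\begin{array}{ccc}Sym_{\mathcal{O}_{X}}(E(n)) & \longrightarrow & C(n)\\ \downarrow & & \downarrow\\ \mathcal{O}_{X} & \longrightarrow & C(n+1)\end{array}\]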
The existence of the sequence \(C\) does imply the lemma, as \(\mathbb{L}_{B}\) will coincide with \(\mathbb{L}_{C(n)}\) in weight less than \(n\). Moreover, by induction on \(n\) and from the push-out square 1, \(\mathbb{L}_{C(n)}\) is seen to be a graded perfect \(C(n)\)-module for all \(n\), and thus it is graded almost perfect over \(\mathcal{O}_{X}\). \(\Box\)
A second observation is that \(C_{X}\) (resp. \(C_{X^{h}}\)) contains \(\mathcal{F}ol^{\mathrm{ap}}(X)\) (resp. \(\mathcal{F}ol(X^{h})\)) as a full sub-\(\infty\)-category.
**Lemma 5.5**.: _The natural forgetful \(\infty\)-functors_
\[\mathcal{F}ol^{\mathrm{ap}}(X)\longrightarrow C_{X}\qquad\mathcal{F}ol(X^{h}) \longrightarrow C_{X^{h}}\]
_are fully faithful. Their essential images consists of \(B\in C_{X}\) (resp. \(B\in C_{X^{h}}\)) such that \(Sym_{\mathcal{O}_{X}}(B^{(1)})\simeq B\) (resp. \(Sym_{\mathcal{O}_{X^{h}}}(B^{(1)})\simeq B\)) as graded cdga's._
_Proof of the lemma._ This simply follows by unraveling the various definitions and the universal property of the de Rham algebras. We give the argument for \(C_{X}\), the case of \(C_{X^{h}}\) being totally similar.
For two derived foliations \(\mathcal{F}\) and \(\mathcal{F}^{\prime}\) in \(\mathcal{F}ol^{\mathrm{ap}}(X)\), corresponding to sheaves of graded mixed \(\mathbf{DR}_{X}\)-cdga's \(\mathbf{DR}_{\mathcal{F}}\) and \(\mathbf{DR}_{\mathcal{F}^{\prime}}\), the mapping space \(Map(\mathcal{F},\mathcal{F}^{\prime})\) is by definition the homotopy fiber, taken at the identity of \(\mathcal{O}_{X}\), of the natural morphism
\[Map_{\epsilon-\mathbf{cdga}^{\mathrm{gr}}_{X}}(\mathbf{DR}_{\mathcal{F}^{ \prime}},\mathbf{DR}_{\mathcal{F}})\longrightarrow Map_{\epsilon-\mathbf{ cdga}^{\mathrm{gr}}_{X}}(\mathbf{DR}_{X},\mathbf{DR}_{\mathcal{F}})\simeq Map _{\mathbf{cdga}_{X}}(\mathcal{O}_{X},\mathbf{DR}^{(0)}_{\mathcal{F}})\simeq Map _{\mathbf{cdga}_{X}}(\mathcal{O}_{X},\mathcal{O}_{X})\]
(where we have denoted by \(\epsilon-\mathbf{cdga}^{gr}_{X}\) and \(\mathbf{cdga}_{X}\) the \(\infty\)-categories of sheaves of graded mixed cdga's and of cdga's on \(X_{et}\)). In the same manner, when \(\mathbf{DR}_{\mathcal{F}}\) and \(\mathbf{DR}_{\mathcal{F}^{\prime}}\) are considered as augmented towards \(\mathcal{O}_{X}\) via their natural augmentation to \(\mathbf{DR}^{(0)}_{\mathcal{F}}\simeq\mathcal{O}_{X}\simeq\mathbf{DR}^{(0)}_{\mathcal{F}^{\prime}}\), their mapping space in \(C_{X}\) is the homotopy fiber, taken at the identity, of the natural morphism
\[Map_{\epsilon-\mathbf{cdga}^{\mathrm{gr}}_{X}}(\mathbf{DR}_{\mathcal{F}^{ \prime}},\mathbf{DR}_{\mathcal{F}})\longrightarrow Map_{\epsilon-\mathbf{cdga} ^{\mathrm{gr}}_{X}}(\mathbf{DR}_{\mathcal{F}^{\prime}},\mathbf{DR}^{(0)}_{ \mathcal{F}})\simeq Map_{\mathbf{cdga}_{X}}(\mathbf{DR}^{(0)}_{\mathcal{F}^{ \prime}},\mathbf{DR}^{(0)}_{\mathcal{F}})\simeq Map_{\mathbf{cdga}_{X}}( \mathcal{O}_{X},\mathcal{O}_{X}).\]
These two morphisms are easily seen to be equivalent, and thus so are their homotopy fibers. This shows that the \(\infty\)-functors of the lemma are fully faithful. Similarly, if \(B\to\mathcal{O}_{X}\) is an object in \(C_{X}\), choosing an inverse of the equivalence \(B^{(0)}\simeq\mathcal{O}_{X}\) provides a morphism \(\mathcal{O}_{X}\to B^{(0)}\), and thus a morphism \(\mathbf{DR}_{X}\to B\) by the universal property of de Rham algebras. The graded mixed cdga \(B\), together with this morphism from \(\mathbf{DR}_{X}\) clearly defines an object in \(\mathcal{F}ol^{\mathrm{ap}}(X)\), whose image by the forgetful \(\infty\)-functor is equivalent to \(B\), showing the essential surjectivity. \(\Box\)
The analytification \(\infty\)-functor \((-)^{h}:\mathcal{F}ol^{\mathrm{ap}}(X)\to\mathcal{F}ol(X^{h})\) extends to \((-)^{h}:C_{X}\to C_{X^{h}}\), simply by sending a graded mixed cdga to the corresponding holomorphic graded mixed cdga by the analytification construction. Concretely, this sends a sheaf of graded mixed cdga \(B\) on \(X_{et}\) to the sheaf (on \(X_{et}^{h}\)) of graded cdga \(\mathcal{O}_{X^{h}}\otimes_{u^{-1}(\mathcal{O}_{X})}u^{-1}(B)\) endowed with its canonical mixed structure and its natural augmentation to \(\mathcal{O}_{X^{h}}\). To prove the theorem, it is therefore enough to prove that this extended \(\infty\)-functor \((-)^{h}:C_{X}\to C_{X^{h}}\) is an equivalence of \(\infty\)-categories.
For this, we consider the forgetful \(\infty\)-functor to graded cdga's by forgetting the mixed structures. We let \(C_{X}^{o}\) be the \(\infty\)-category of sheaves of graded cdga's \(B\) on \(X_{et}\) together with an augmentation \(B\to\mathcal{O}_{X}\) and satisfying the graded analogues of the previous three conditions: \(B^{(0)}\simeq\mathcal{O}_{X}\), \(B^{(i)}\) are zero for \(i<0\) and are almost perfect for \(i>0\). Similarly we have \(C_{X^{h}}^{o}\) consisting of sheaves of graded cdga's \(B\) on \(X_{et}^{h}\) together with an augmentation \(B\to\mathcal{O}_{X^{h}}\) and satisfying the graded analogues of the previous three conditions: \(B^{(0)}\simeq\mathcal{O}_{X^{h}}\), \(B^{(i)}\) are zero for \(i<0\) and are almost perfect for \(i>0\). We have forgetful \(\infty\)-functors \(C_{X}\to C_{X}^{o}\) and
\(C_{X^{h}}\to C^{o}_{X^{h}}\) which commute with the analytification functors
\[(2)\qquad\begin{array}{ccc}C_{X} & \xrightarrow{(-)^{h}} & C_{X^{h}}\\ \downarrow & & \downarrow\\ C^{o}_{X} & \xrightarrow{(-)^{h}} & C^{o}_{X^{h}}\end{array}\]
**Lemma 5.6**.: _The analytification \(\infty\)-functor_
\[(-)^{h}:C^{o}_{X}\to C^{o}_{X^{h}}\]
_is an equivalence of \(\infty\)-categories._
_Proof of the lemma._ This is a simple application of the GAGA theorem for almost perfect complexes on proper derived Artin stacks of [10]. Indeed, GAGA implies that the analytification \((-)^{h}:\mathsf{APerf}(X)\to\mathsf{APerf}(X^{h})\) is an equivalence (note that \(\mathsf{APerf}\) is denoted by \(Coh^{-}\) in [10]). We deduce that it also induces an equivalence on the \(\infty\)-categories of \(\mathbb{N}\)-graded objects \(\mathsf{APerf}^{\mathbb{N}}(X)\to\mathsf{APerf}^{\mathbb{N}}(X^{h})\). As this equivalence is moreover an equivalence of symmetric monoidal \(\infty\)-categories, we get an induced equivalence on commutative algebra objects. Finally, considering commutative algebras augmented towards \(\mathcal{O}_{X}\) and \(\mathcal{O}_{X^{h}}\) we get the statement of the lemma. \(\Box\)
In order to deduce the theorem from the lemma we use the Barr-Beck theorem of [11]. Indeed, the forgetful \(\infty\)-functors \(C_{X}\to C^{o}_{X}\) and \(C_{X^{h}}\to C^{o}_{X^{h}}\) are both monadic. The left adjoint to \(C_{X}\to C^{o}_{X}\) sends an augmented sheaf of graded algebras \(B\to\mathcal{O}_{X}\) to \(Sym_{B}(\mathbb{L}_{B}[1])\), the graded de Rham algebra of \(B\) endowed with its total grading and its natural mixed structure induced by the de Rham differential. The weight \(0\) part of \(Sym_{B}(\mathbb{L}_{B}[1])\) clearly is \(\mathcal{O}_{X}\), and so we get an object in \(C_{X}\). Similarly, the forgetful \(\infty\)-functor \(C_{X^{h}}\to C^{o}_{X^{h}}\) has a left adjoint given by the graded holomorphic de Rham algebra construction. Because cotangent complexes commute with analytification, we have that the commutative square (2) is adjointable and the adjoint square
\[\begin{array}{ccc}C^{o}_{X} & \xrightarrow{(-)^{h}} & C^{o}_{X^{h}}\\ \downarrow & & \downarrow\\ C_{X} & \xrightarrow{(-)^{h}} & C_{X^{h}}\end{array}\]
naturally commutes (the canonical natural transformation between the two compositions is an equivalence). Therefore, the equivalence \((-)^{h}:C^{o}_{X}\simeq C^{o}_{X^{h}}\) preserves the monads defining \(C_{X}\) and \(C_{X^{h}}\) and we have that the induced \(\infty\)-functor \((-)^{h}:C_{X}\to C_{X^{h}}\) is an equivalence as desired.
## 6. Existence of leaf space and holonomy
In this section we will restrict ourselves to the following specific setting. We let \(X\) be a separated and connected complex manifold. Let \(\mathcal{F}\in\mathcal{F}ol(X/\mathbb{C})\) be a derived foliation on \(X\) which is transversally smooth and rigid. We will study the existence of an analytic leaf space for \(\mathcal{F}\). Namely we would like to contract all the leaves to points and produce the quotient space. This quotient space does not exist in general, and we start by studying extra conditions on \(\mathcal{F}\) allowing its existence.
### Some conditions
We start by discussing some conditions on \(\mathcal{F}\) which will later on ensure the existence of a leaf space.
**Codimension \(2\).** Because \(\mathcal{F}\) is assumed transversally smooth and rigid, the cotangent complex \(\mathbb{L}_{\mathcal{F}}\) is a length two complex of vector bundles on \(X\) of the form \(\mathcal{N}_{\mathcal{F}}^{*}\to\Omega^{1}_{X/\mathbb{C}}\), where \(\mathcal{N}_{\mathcal{F}}^{*}\) is the conormal bundle of \(\mathcal{F}\). The _codimension \(2\) condition_ for \(\mathcal{F}\) states that there exists a closed analytic subset \(Z\subset X\), of codimension \(2\) or higher, such that \(\mathbb{L}_{\mathcal{F}}\) restricted to \(U=X-Z\) is a vector bundle. Equivalently, \(\mathcal{N}_{\mathcal{F}}^{*}\to\Omega^{1}_{X/\mathbb{C}}\) is injective, and \(\mathcal{N}_{\mathcal{F}}^{*}\) is a sub-bundle when restricted to \(U\). In particular, the perfect complex \(\mathbb{L}_{\mathcal{F}}\) is equivalent to a single coherent sheaf in degree zero, namely \(\Omega^{1}_{X/\mathbb{C}}/\mathcal{N}_{\mathcal{F}}^{*}\), which is a vector bundle outside of \(Z\). The locus outside of which \(\mathbb{L}_{\mathcal{F}}\) is a vector bundle is a closed subset of \(X\), and is called the _singular set of \(\mathcal{F}\)_. Hence, the codimension \(2\) condition holds for \(\mathcal{F}\) iff its singular locus is of codimension \(\geq 2\).
When the codimension \(2\) condition is satisfied for \(\mathcal{F}\), we will also say, synonymously, that \(\mathcal{F}\)_is smooth in codimension \(2\)_.
**Flatness.** For any point \(x\in X\), we know that there exists a formal leaf \(\hat{\mathcal{L}}_{x}(\mathcal{F})\) passing through \(x\) (see e.g. [13, SS1.6]). This formal leaf is a formal moduli problem given by a dg-Lie algebra whose underlying complex is \(\mathbb{T}_{\mathcal{F},x}[-1]\), the fiber of the tangent complex of \(\mathcal{F}\) at \(x\) shifted by \(-1\). As \(\mathcal{F}\) is transversally smooth and rigid, \(\mathbb{T}_{\mathcal{F},x}[-1]\) is cohomologically concentrated in degrees \([1,2]\), and therefore corresponds to a pro-representable formal moduli problem. So there exists a pro-artinian connective local cdga \(\underline{A}="\lim"A_{i}\) such that \(\hat{\mathcal{L}}_{x}(\mathcal{F})\simeq\mathsf{Spf}\underline{A}\) is the formal spectrum of \(\underline{A}\).
The _flatness condition_ states that, if \(A=\lim_{i}A_{i}\) is the realization of \(\underline{A}\), then \(H^{i}(A)=0\) for all \(i<0\). Equivalently, this means that \(\hat{\mathcal{L}}_{x}(\mathcal{F})\) is flat over \(\mathbb{C}\). As \(A\) is naturally quasi-isomorphic to the Chevalley-Eilenberg complex of the dg-Lie algebra \(\mathbb{T}_{\mathcal{F},x}[-1]\), the flatness condition can also be stated as the following vanishing for dg-Lie algebra cohomology
\[H^{i}(\mathbb{T}_{\mathcal{F},x}[-1],\mathbb{C})\simeq 0\qquad\forall\,i<0.\]
**Local connectedness.** The following local connectedness condition only makes sense under the codimension \(2\) condition. Indeed, we already know that when \(\mathcal{F}\) is smooth in codimension \(2\), then it is locally integrable on \(X\) (see theorem 4.1). Therefore, there exists a basis \(B\) for the topology of \(X\), and for all open \(U\in B\), a holomorphic map \(f:U\to\mathbb{C}^{d}\), such that \(\mathcal{F}_{|U}\simeq f^{*}(0)\).
Then, the _local connectedness condition_ states that \(B\) and the holomorphic maps \(f\) as above can be moreover chosen so that the non-empty fibers of \(f\) are all connected. This condition is of a topological nature and is not always satisfied. When \(d=1\), it is always satisfied as \(f\) is smooth in codimension \(2\) and thus it is well known that the Milnor fibers of \(f\) are connected (see [10]). When \(d>1\), there are examples of holomorphic maps \(f\) such that the fibers close to a singular fiber are connected or not depending on the direction along which we approach the singular value. A simple example is given by \(f:\mathbb{C}^{3}\to\mathbb{C}^{2}\) given by \(f(x,y,z)=(x^{2}-y^{2}z,y)\) (see [24, Introduction]).
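For this example the fibers are easy to describe, and they illustrate the phenomenon: for \((a,b)\in\mathbb{C}^{2}\) we have
\[f^{-1}(a,b)=\{(x,b,z)\in\mathbb{C}^{3}\ :\ x^{2}-b^{2}z=a\},\]
which for \(b\neq 0\) is the graph \(z=(x^{2}-a)/b^{2}\) over the \(x\)-line, hence connected, while for \(b=0\) and \(a\neq 0\) it is the disjoint union of the two lines \(\{x=\pm\sqrt{a},\,y=0\}\), hence disconnected. The fibers near the singular value \((0,0)\) are therefore connected or not depending on the direction of approach.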
Our existence theorem below (Theorem 6.2) requires _all three conditions_, i.e. codimension \(2\), flatness and local connectedness, to be satisfied. Before going further we would like to make some comments on these conditions.
**Remark 6.1**.:
1. As already recalled before, the codimension \(2\) condition implies that \(\mathcal{F}\) is locally integrable (see Theorem 4.1). So, for any given point \(x\in X\), there is an open \(x\in U\) and a holomorphic map \(f:U\to\mathbb{C}^{d}\) such that \(f^{*}(0)\simeq\mathcal{F}_{|U}\). This implies that the formal leaf \(\hat{\mathcal{L}}_{x}(\mathcal{F})\) is the formal completion of \(f^{-1}(f(x))\) at the point \(x\), where \(f^{-1}\) denotes here the derived fiber of \(f\). Since formal completions are faithfully flat, flatness of \(\hat{\mathcal{L}}_{x}(\mathcal{F})\) implies that \(f^{-1}(f(x))\) is flat over \(\mathbf{Spec}\,\mathbb{C}\) locally at \(x\). Therefore, the flatness condition implies that the derived fibers are all flat over \(\mathbb{C}\), and thus that \(f\) must be a flat morphism. The converse is obviously true. Therefore, we see that under the codimension \(2\) condition, flatness is equivalent to the fact that all the local holomorphic maps integrating \(\mathcal{F}\) are flat holomorphic maps.
2. If \(\mathcal{F}\) is moreover smooth, and thus is given by an integrable subbundle of \(\mathbb{T}_{X}\), then all three conditions above are automatically satisfied. Indeed, the local holomorphic maps \(f\) must be smooth and thus their fibers are themselves smooth. In particular, the local connectedness is true because the maps \(f\) are submersions. Moreover, the formal leaves are the formal completions of smooth sub-varieties (the fibers of \(f\)), and are thus all equivalent to formal affine spaces \(\widehat{\mathbb{A}}^{n-d}\) (where \(n=\dim X\)), and thus are flat.
### Existence theorem
**Theorem 6.2**.: _Let \(X\) be a connected and separated (i.e. Hausdorff) complex manifold and \(\mathcal{F}\in\mathcal{F}ol(X/\mathbb{C})\) be a derived foliation that is flat, smooth in codimension \(2\) and locally connected. There exists a smooth and \(1\)-truncated Deligne-Mumford analytic stack \(X\mathbin{/\!\!/}\mathcal{F}\), together with a holomorphic morphism \(\pi:X\to X\mathbin{/\!\!/}\mathcal{F}\) with the following properties._
1. _The morphism_ \(\pi\) _is flat, surjective, and we have_ \(\pi^{*}(0)\simeq\mathcal{F}\)_._
2. _The Deligne-Mumford stack_ \(X\mathbin{/\!\!/}\mathcal{F}\) _is quasi-separated, effective and is of dimension_ \(d\) _equal to the rank of_ \(\mathcal{N}_{\mathcal{F}}^{*}\) _(i.e. the codimension of_ \(\mathcal{F}\)_)._
3. _For any global point_ \(x:*\to X\mathbin{/\!\!/}\mathcal{F}\) _the fiber of_ \(\pi\) _at_ \(x\)__ \[\pi^{-1}(x):=*\times_{X\mathbin{/\!\!/}\mathcal{F}}X\]
_is a connected and separated analytic space._
4. _For any global point_ \(x:*\to X\mathbin{/\!\!/}\mathcal{F}\)_, the isotropy group_ \(H_{x}\) _of_ \(X\mathbin{/\!\!/}\mathcal{F}\) _at_ \(x\) _acts freely and properly on_ \(\pi^{-1}(x)\)_._
5. _The morphism_ \(\pi\) _is_ relatively connected _in the sense of topos theory i.e. the pull-back_ \(\infty\)_-functor_ \(\pi^{-1}:\mathit{St}(X\mathbin{/\!\!/}\mathcal{F})\to\mathit{St}(X)\) _is fully faithful when restricted to_ \(0\)_-truncated objects (i.e. sheaves of sets)._
Before giving a proof of Theorem 6.2, we explain some of the terms used in (2). First of all, _quasi-separated_ here means that the diagonal of \(X\mathbin{/\!\!/}\mathcal{F}\) is a separated morphism. In more concrete terms, if the stack \(X\mathbin{/\!\!/}\mathcal{F}\) is presented as an etale groupoid object \(G\rightrightarrows U\), with \(U\) a disjoint union of Stein manifolds, then the smooth analytic space \(G\) must be separated. In terms of such a presentation, _effectiveness_ of \(X\mathbin{/\!\!/}\mathcal{F}\) means that the action of \(G\) on \(U\) by germs of biholomorphic maps is faithful (see [10]). As shown in [10], any Deligne-Mumford stack \(Y\) possesses a universal effective quotient \(Y^{eff}\), and by construction \(X\mathbin{/\!\!/}\mathcal{F}\) will be defined as the universal effective quotient of a more general construction (called the _monodromy_ stack), that we will study in a future work.
For later use, we set the following terminology.
**Definition 6.3**.: _Let \(X\) be a complex manifold, \(\mathcal{F}\) be a derived foliation on \(X\), and assume the conditions of the theorem 6.2 hold._
1. _The stack_ \(X\mathbin{/\!\!/}\mathcal{F}\) _is called the_ leaf space of the derived foliation__\(\mathcal{F}\)_._
2. _For any_ \(x\in X\) _the quotient analytic space_ \(\pi^{-1}(\pi(x))/H_{\pi(x)}\) _is called the_ maximal leaf passing through__\(x\)_._
3. _The_ \(H_{\pi(x)}\)_-covering_ \(\pi^{-1}(\pi(x))\to\pi^{-1}(\pi(x))/H_{\pi(x)}\) _is called the_ holonomy covering_, and the group_ \(H_{\pi(x)}\) _the_ holonomy group _of_ \(\mathcal{F}\) _at_ \(x\)_._
_The maximal leaf will be denoted by \(\mathcal{L}_{x}^{max}(\mathcal{F})\) and the holonomy group \(Hol_{\mathcal{F}}(x)\). The holonomy covering of \(\mathcal{L}_{x}^{max}(\mathcal{F})\) will be denoted by \(\widetilde{\mathcal{L}}_{x}^{max}(\mathcal{F})\)._
We are now ready to start the proof of Theorem 6.2.
**Construction of \(X\mathbin{/\!\!/}\mathcal{F}\).** We let \(\mathcal{B}\) be the set of all open subsets \(U\subset X\) such that there exists a flat holomorphic function \(f:U\to\mathbb{C}^{d}\) with connected fibers, and with \(f^{*}(0)\simeq\mathcal{F}_{|U}\). As \(f_{U}\) is flat it has an open image \(V\subset\mathbb{C}^{d}\), and it factors as \(U\to V\subset\mathbb{C}^{d}\). For each \(U\in\mathcal{B}\), we fix once and for all \(f_{U}:U\to V\subset\mathbb{C}^{d}\), where \(V\) is open, and \(f_{U}\) is flat, surjective with connected fibers, and such that \(f^{*}(0)\simeq\mathcal{F}_{|U}\). Our conditions on \(\mathcal{F}\) imply that the set \(\mathcal{B}\) forms a basis for the topology of \(X\).
Before going further, we note that \(\mathcal{B}\) consists of all opens \(U\subset X\) with the required properties, so the only choices that have been made here are the choices of the functions \(f_{U}\). In order to control the impact of such choices, we will need the following result (Lemma 6.4). By the codimension 2 condition we know that the perfect complex \(\mathbb{L}_{\mathcal{F}}\) is a coherent sheaf, denoted by \(\Omega^{1}_{\mathcal{F}}\simeq H^{0}(\mathbb{L}_{\mathcal{F}})\). Therefore, the de Rham differential for \(\mathcal{F}\) induces a morphism
\[dR_{\mathcal{F}}:\mathcal{O}_{U}\longrightarrow\Omega^{1}_{\mathcal{F}}\]
of sheaves on \(U\). Since \(f^{*}(0)\simeq\mathcal{F}\), this de Rham differential can be identified with the relative de Rham differential \(dR_{U/V}:\mathcal{O}_{U}\to\Omega^{1}_{U/V}\). In particular, the composite morphism \(f^{-1}(\mathcal{O}_{V})\to\mathcal{O}_{U}\to\Omega^{1}_{\mathcal{F}}\) is zero.
**Lemma 6.4**.: _The natural morphism_
\[f^{-1}(\mathcal{O}_{V})\to Ker(dR_{\mathcal{F}})\]
_of sheaves of rings on \(U\), is an isomorphism._
Proof of Lemma 6.4.: Because \(f\) is flat and surjective, the induced morphism \(f^{-1}(\mathcal{O}_{V})\to\mathcal{O}_{U}\) is a monomorphism of sheaves. Therefore, \(f^{-1}(\mathcal{O}_{V})\to Ker(dR_{\mathcal{F}})\) is also a monomorphism. It remains to show that a local section of \(Ker(dR_{\mathcal{F}})\) locally descends to a local holomorphic function on \(V\).
**Sub-Lemma 6.5**.: _Let \(f:X\to Y\) be a flat surjective morphism of complex manifolds which is smooth in codimension \(2\). If a map \(u:Y\to\mathbb{C}\) is such that \(u\circ f:X\to\mathbb{C}\) is holomorphic, then \(u\) is holomorphic._
Proof of sublemma 6.5.: Let \(U_{0}\subset X\) be the open on which \(f\) is smooth, so that \(Z=X-U_{0}\) is a closed analytic subset of codimension \(\geq 2\). The image of \(U_{0}\) by \(f\) is an open \(V_{0}\subset Y\), because \(f\) is flat, and \(V_{0}\) is dense by Sard's theorem. As \(f:U_{0}\to V_{0}\) is a holomorphic submersion, it has local sections, and thus \(u\) is holomorphic when restricted to \(V_{0}\). Moreover, \(f\) being flat and surjective implies that the topology of \(Y\) is the quotient of the topology on \(X\) by the map \(f\). This shows that \(u\) is also continuous on \(Y\). The map \(u\) is continuous on \(Y\) and holomorphic on a dense open, and thus it is holomorphic on all of \(Y\).
Now, let \(s\) be a local section of \(Ker(dR_{\mathcal{F}})\) defined on an open \(U^{\prime}\subset U\). By the local connectedness condition, we can assume, possibly shrinking \(U^{\prime}\), that the restriction of \(f\) to \(U^{\prime}\) has connected fibers. The codimension \(2\) condition implies that derived de Rham cohomology of \(U\) over \(V\) coincides with naive de Rham cohomology in degree \(0\). In particular, \(dR_{\mathcal{F}}(s)=0\) implies that \(s\) determines a natural element in \(H^{0}_{dR}(U^{\prime}/V)\), the derived de Rham cohomology of \(U^{\prime}\) relative to \(V\). We can write this as \(s\in|\mathbf{DR}(U/V)|(U^{\prime})\), where \(\mathbf{DR}(U/V)\) is the sheaf of graded mixed cdga's of relative de Rham theory of \(U\) over \(V\). Note that the sheaf \(|\mathbf{DR}(U/V)|\) is naturally a module over \(f^{-1}(\mathcal{O}_{V})\). Moreover, if \(y\in V\) is a point, corresponding to a morphism of sheaves of rings \(f^{-1}(\mathcal{O}_{V})\to\mathbb{C}\), then we have an equivalence of sheaves of cdga's
\[|\mathbf{DR}(U/V)|\otimes_{f^{-1}(\mathcal{O}_{V})}\mathbb{C}\simeq|\mathbf{ DR}(U/V)\otimes_{f^{-1}(\mathcal{O}_{V})}\mathbb{C}|\simeq j_{*}(|\mathbf{DR}(U_{ y}/\mathbb{C})|),\]
where \(j:U_{y}\hookrightarrow U\) is the inclusion of the fiber \(U_{y}=f^{-1}(y)\), and where the first equivalence follows from the fact that \(\mathbb{C}\) is a perfect \(f^{-1}(\mathcal{O}_{V})\)-module (and thus the tensor operation commutes with limits and with the realization functor \(|-|\)). To summarize, the restriction of \(s\) on \(U^{\prime}_{y}\), lies in derived de Rham cohomology \(H^{0}_{dR}(U^{\prime}_{y}/\mathbb{C})\). As \(U^{\prime}_{y}\) is connected, and by the comparison between derived de Rham cohomology and Betti cohomology ([2, Cor.
4.27] and [11, Theorem IV.1.1]), we have that \(H^{0}_{dR}(U^{\prime}_{y}/\mathbb{C})\simeq\mathbb{C}\), and thus the restriction of \(s\) to \(U^{\prime}_{y}\) is a constant function. As \(f:U^{\prime}\to V\) is flat with connected fibers, the function \(s\) descends to a map defined on the open \(V^{\prime}=f(U^{\prime})\subset V\), which is moreover holomorphic by Sub-Lemma 6.5. This shows that \(s\) comes from a local section of \(f^{-1}(\mathcal{O}_{V})\) as required. \(\Box\)
Lemma 6.4 has the following important corollary, showing that the choices of the local morphisms \(f:U\to V\) are essentially unique, up to a unique local biholomorphism.
**Corollary 6.6**.: _Let \(f:U\to V\) and \(g:U\to V^{\prime}\) be two flat and surjective holomorphic morphisms of smooth manifolds with \(f^{*}(0)\simeq g^{*}(0)\). Assume that \(V^{\prime}\) is isomorphic to an open in \(\mathbb{C}^{d}\) and that the fibers of \(f\) are connected. Then, there exists a unique etale holomorphic morphism \(\alpha:V\to V^{\prime}\) such that \(\alpha f=g\)._
_Proof of Corollary 6.6._ We apply Lemma 6.4 to the local coordinate functions of \(\mathbb{C}^{d}\) restricted to \(V^{\prime}\), and thus get a unique factorization \(\alpha:V\to V^{\prime}\) such that \(\alpha f=g\). Because \(f\) is surjective, \(\alpha\) must be unique. Finally, as \(f^{*}(0)\simeq g^{*}(0)\), by comparing the cotangent complexes we get that \(\alpha\) must be etale. \(\Box\)
We come back to the base \(\mathcal{B}\) of the topology of \(X\). Recall that \(\mathcal{B}\) consists of all the open subsets \(U\subset X\) such that there exists a flat holomorphic function \(f:U\to\mathbb{C}^{d}\) with connected fibers, and with \(f^{*}(0)\simeq\mathcal{F}_{|U}\). The set \(\mathcal{B}\) is ordered by inclusions, and as such will be considered as a category. We define a functor
\[\phi:\mathcal{B}\to\mathbb{C}Man_{et}^{d},\]
from \(\mathcal{B}\) to the category of complex manifolds of dimension \(d\) and etale holomorphic morphisms between them. On objects, the functor \(\phi\) sends \(U\) to the space \(V\), the base of the morphism \(f_{U}:U\to V\). If \(U^{\prime}\subset U\) is an inclusion of opens in \(\mathcal{B}\), with morphisms \(f_{U}:U\to V\) and \(f_{U^{\prime}}:U^{\prime}\to V^{\prime}\), Corollary 6.6 implies that there exists a unique etale factorization \(\alpha:V^{\prime}\to V\) such that \(\alpha f_{U^{\prime}}=(f_{U})_{|U^{\prime}}\). This provides, for each open inclusion \(U^{\prime}\subset U\) in \(\mathcal{B}\), a natural morphism \(\phi(U^{\prime})=V^{\prime}\to\phi(U)=V\) in \(\mathbb{C}Man_{et}^{d}\). This completes the definition of the functor \(\phi\).
By [10], we know that smooth Deligne-Mumford stacks are stable under arbitrary colimits of diagrams of etale morphisms. We therefore consider the colimit of the functor \(\phi\), and take its universal effective quotient as defined in [10]
\[X\mathbin{/\!\!/}\mathcal{F}:=(\operatorname{colim}_{U\in\mathcal{B}}V)^{eff}.\]
By construction \(X\mathbin{/\!\!/}\mathcal{F}\) is an effective smooth Deligne-Mumford stack of dimension \(d\). Moreover, as \(\mathcal{B}\) is a basis for the topology of \(X\), we have that \(X\simeq\operatorname{colim}_{U\in\mathcal{B}}U\). The local morphisms \(\left\{f_{U}:U\to V\right\}_{U\in\mathcal{B}}\) therefore define the projection
\[\pi:X\simeq\operatorname{colim}_{U\in B}U\to\operatorname{colim}_{U\in \mathcal{B}}V\to(\operatorname{colim}_{U\in\mathcal{B}}V)^{eff}=X\mathbin{/\! \!/}\mathcal{F}.\]
**The morphism \(\pi\) is flat surjective and \(\pi^{*}(0)\simeq\mathcal{F}\).** By construction, the morphism \(\pi:X\to X\mathbin{/\!\!/}\mathcal{F}\), restricted to an open \(U\in\mathcal{B}\), factors as \(\pi_{|U}:\ U\xrightarrow{f_{U}}V\xrightarrow{p_{V}}X\mathbin{/\!\!/}\mathcal{F}\), where
\(p_{V}\) is the composition of the natural morphism \(V\to\operatorname{colim}_{U\in B}V\), with the canonical projection \(\operatorname{colim}_{U\in B}V\to X\mathbin{/\!\!/}\mathcal{F}\) to the effective quotient. The morphism \(p_{V}\) is etale, and \(f_{U}\) is flat, so we see that \(\pi_{|U}\) is a flat morphism for all \(U\in\mathcal{B}\). As \(\mathcal{B}\) is a basis for the topology of \(X\), this shows that \(\pi\) is flat.
To see that \(\pi\) is surjective, it is enough to notice that the universal effective quotient map is always surjective (because it does not change the objects), and moreover that \(\coprod_{U\in\mathcal{B}}V\to\operatorname{colim}_{U\in\mathcal{B}}V\) is surjective. As a result, we see that the composition \(\coprod_{U\in\mathcal{B}}U\to X\to X\mathbin{/\!\!/}\mathcal{F}\) is a surjective morphism, and thus that \(\pi\) is surjective.
Finally, let us consider \(\pi^{*}(0)\). We have two derived foliations on \(X\), \(\pi^{*}(0)\) and \(\mathcal{F}\). For all \(U\in\mathcal{B}\), we know that \(f_{U}^{*}(0)\simeq\mathcal{F}_{|U}\), but as \(p_{V}:V\to X\mathbin{/\!\!/}\mathcal{F}\) is etale, we have that \(f_{U}^{*}(0)\simeq\pi^{*}(0)_{|U}\). Therefore, the two derived foliations \(\pi^{*}(0)\) and \(\mathcal{F}\) coincide on each open \(U\). This implies in particular that the corresponding differential ideals \(K_{\mathcal{F}}\) and \(K_{\pi^{*}(0)}\) agree on each \(U\), and thus globally agree on \(X\). This means that \(\mathcal{F}\) and \(\pi^{*}(0)\) are two transversally smooth and rigid derived enhancements of the same differential ideal \(K_{\mathcal{F}}\). It follows from the uniqueness statement of derived enhancements in codimension 2 (Proposition 4.3) that \(\mathcal{F}\simeq\pi^{*}(0)\) as required.
**Quasi-separatedness.** The quasi-separatedness property for \(X\mathbin{/\!\!/}\mathcal{F}\) follows from the general lemma below.
**Lemma 6.7**.: _Any smooth effective Deligne-Mumford stack is quasi-separated._
Proof of Lemma 6.7.: Let \(Y\) be an effective Deligne-Mumford stack and \(U\to Y\) an etale atlas with \(U\) a disjoint union of Stein manifolds (hence \(U\) is separated). The nerve of \(U\to Y\) defines an etale groupoid \(G=U\times_{Y}U\) acting on \(U\). The quasi-separatedness of \(Y\) is then equivalent to the fact that the analytic space \(G\) is separated. So, we are left to prove that \(G\) is separated.
Let \(\mathsf{Haf}(U)\) be the Haefliger groupoid on \(U\). As a set, \(\mathsf{Haf}(U)\) consists of triplets \((x,y,u)\), with \(x\) and \(y\) points in \(U\) and \(u\) a germ of holomorphic isomorphism from \(x\) to \(y\). A basis for the topology on \(\mathsf{Haf}(U)\) is defined as follows. Fix two opens \(V\subset U\) and \(W\subset U\) and an isomorphism \(u:V\simeq W\). Associated to \(V,W\) and \(u\) we have a subset \(\mathsf{Haf}(U)_{V,W,u}\subset\mathsf{Haf}(U)\) consisting of all points \((x,u(x),u_{x})\), where \(x\in V\), \(u(x)\in W\) its image, and \(u_{x}\) the germ of isomorphism at \(x\) from \(x\) to \(u(x)\). By definition, a basis for the topology on \(\mathsf{Haf}(U)\) consists of all \(\mathsf{Haf}(U)_{V,W,u}\) where \((V,W,u)\) varies. The groupoid structure on \(\mathsf{Haf}(U)\) is the obvious one: the source (resp. target) map sends \((x,y,u)\) to \(x\) (resp. \(y\)), and the composition is given by composing germs of isomorphisms.
Since we are working in the holomorphic context, the space \(\mathsf{Haf}(U)\) is automatically separated, as two holomorphic maps agreeing on an open subset agree globally (on the connected components meeting this open).
Finally, there is a canonical morphism of groupoids over \(U\), \(\psi:G\to\mathsf{Haf}(U)\). To a point \(u\in G\), with source \(x\) and target \(y\), we associate \(\psi(u)\) the germs of isomorphisms defined by the diagram
which, \(G\) being an etale groupoid, induces an isomorphism on germs
\[U_{x}\xleftarrow{\simeq}G_{u}\xrightarrow{\simeq}U_{y}\;.\]
The isomorphism \(U_{x}\simeq U_{y}\) obtained this way defines \(\psi(u)\). Now, the property of \(Y\) being effective is equivalent to the fact that the morphism \(\psi\) is a monomorphism. So \(G\hookrightarrow\mathsf{Haf}(U)\) is here an injective holomorphic map. As \(\mathsf{Haf}(U)\) is separated, this implies that \(G\) is also separated. \(\Box\)
**Actions of isotropy groups.** Let \(x:*\to X\mathbin{/\!\!/}\ \mathcal{F}\) be a point in \(X\mathbin{/\!\!/}\ \mathcal{F}\), and \(H_{x}\) be the isotropy group at \(x\). We have a commutative diagram with cartesian squares
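The diagram in question should be the following one, in which both squares are cartesian:
\[\begin{array}{ccccc}\pi^{-1}(x) & \longrightarrow & [\pi^{-1}(x)/H_{x}] & \longrightarrow & X\\ \downarrow & & \downarrow & & \downarrow\\ * & \longrightarrow & BH_{x} & \longrightarrow & X\mathbin{/\!\!/}\mathcal{F}\end{array}\]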
The canonical morphism \(BH_{x}\to X\mathbin{/\!\!/}\ \mathcal{F}\) is a monomorphism, and thus \([\pi^{-1}(x)/H_{x}]\to X\) is also a monomorphism. This implies that the stack \([\pi^{-1}(x)/H_{x}]\) is \(0\)-truncated and thus is an analytic space. Moreover, as \(X\) is separated, so is \([\pi^{-1}(x)/H_{x}]\). Therefore, \(H_{x}\) acts freely and properly on \(\pi^{-1}(x)\).
**Relative connectedness.** Let us denote by \(Sh(Z)\subset St(Z)\) the full sub-category of \(0\)-truncated objects (for some stack \(Z\)), i.e. the category of sheaves of sets. By construction, the functor \(\pi^{-1}:Sh(X\mathbin{/\!\!/}\ \mathcal{F})\to Sh(X)\) is obtained as a functor on limit categories
\[\lim_{U\in\mathcal{B}}f_{U}^{-1}:Sh(X\mathbin{/\!\!/}\ \mathcal{F})\simeq\lim_{U\in B }Sh(V)\to\lim_{U\in B}Sh(U)\simeq Sh(X).\]
Therefore, in order to prove that \(\pi^{-1}\) is fully faithful, it is enough to prove that \(f_{U}^{-1}:Sh(V)\to Sh(U)\) is fully faithful. But this follows easily from the fact that \(f_{U}\) is a flat surjective morphism with connected fibers, as shown by the next lemma.
**Lemma 6.8**.: _Let \(f:U\to V\) be an open and surjective morphism for which the topology of \(V\) is the quotient of the topology of \(U\) (i.e. a subset \(T\subset V\) is open if and only if \(f^{-1}(T)\subset U\) is open). Then, if \(f\) has connected fibers, for any sheaf of sets \(E\) on \(V\) the canonical morphism_
\[E\to f_{*}f^{-1}(E)\]
_is an isomorphism._
_Proof of Lemma 6.8._ Let us represent \(E\) by its espace etale \(p:E\to V\). As the statement is local on \(V\), it is enough to show that \(E(V)\to f^{-1}(E)(U)\) is bijective. Note that \(E(V)\) is the set of continuous sections of \(E\to V\), and \(f^{-1}(E)(U)\) is the set of continuous maps \(s:U\to E\) such that \(ps=f\). The map \(E(V)\to f^{-1}(E)(U)\) simply sends a section \(s:V\to E\) to \(sf:U\to E\). As \(f\) is surjective, this map is clearly injective. Moreover, if \(s:U\to E\) is
an element of \(f^{-1}(E)(U)\), we can restrict \(s\) to the fiber of \(f\) at a point \(v\in V\). This provides a continuous map \(f^{-1}(v)\to E_{v}\), where \(E_{v}\) is the fiber of \(E\) at \(v\) endowed with the discrete topology. As \(f^{-1}(v)\) is connected, we see that the restriction of \(s\) along all the fibers of \(f\) is a constant morphism. Because \(f\) is a topological quotient, this implies that \(s:U\to E\) descends to a continuous map \(V\to E\), and thus \(s\) is the image of an element in \(E(V)\). \(\Box\)
**An etale groupoid description.** It is possible to describe the stack \(X\mathbin{/\!\!/}\mathcal{F}\) by a (more or less explicit) etale groupoid acting on \(W:=\coprod_{U\in\mathcal{B}}V\). Recall that we have chosen for any \(U\in\mathcal{B}\) a flat surjective holomorphic map with connected fibers \(f:U\to V\) which integrates \(\mathcal{F}_{|U}\). We consider all pairs of embeddings
\[U_{1}\supset U_{0}\subset U_{2}\]
of opens \(U_{i}\in\mathcal{B}\). By Corollary 6.6, there is a corresponding pair of etale morphisms \(s:V_{0}\to V_{1}\) and \(b:V_{0}\to V_{2}\).
Therefore, for any \(v\in V_{0}\), we deduce a germ of isomorphism \(\alpha(v)\) between \((V_{1},s(v))\) and \((V_{2},b(v))\). When \(v\) varies inside \(V_{0}\), we get a family of elements \(\{(s(v),b(v),\alpha(v))\}_{v\in V_{0}}\) in \(\mathsf{Haf}(W)\), the Haefliger groupoid on \(W=\coprod_{U\in\mathcal{B}}V\).
**Lemma 6.9**.: _Let \(G\to W\) be the etale groupoid obtained as the nerve of the canonical morphism \(W\to X\mathbin{/\!\!/}\mathcal{F}\). Then, \(G\) is isomorphic, as a groupoid over \(W\), to the subgroupoid of \(\mathsf{Haf}(W)\) generated by the elements \((s(v),b(v),\alpha(v))\) for all possible choices of \(U_{1}\supset U_{0}\subset U_{2}\) in \(\mathcal{B}\) and of \(v\in V_{0}\)._
_Proof of Lemma 6.9._ This follows from a more general formula for the effective part of a colimit of etale morphisms. Let \(V_{*}:I\to\mathbb{C}Man_{et}^{d}\) be a diagram of smooth complex manifolds of dimension \(d\) with etale transition maps. Let \(X=(\operatorname{colim}V_{i})^{eff}\), and let \(W=\coprod V_{i}\to X\) be the canonical map. Then, the nerve of \(W\to X\) is isomorphic, as a groupoid over \(W\), to the subgroupoid of \(\mathsf{Haf}(W)\) generated by all the germs of isomorphisms arising from diagrams of the form \(V_{j}\leftarrow V_{i}\rightarrow V_{k}\), image of \(j\leftarrow i\rightarrow k\) in \(I\) by \(V_{*}\). This last general fact is obvious from the universal property of colimits and of the effective part. \(\Box\)
**Connectedness of the leaves and their holonomy coverings.** We first prove that the leaves are connected. For this, let \(x\) and \(y\) be two points in \(X\) such that \(\pi(x)\) and \(\pi(y)\) are isomorphic in \(X\mathbin{/\!\!/}\mathcal{F}\). By the construction of \(X\mathbin{/\!\!/}\mathcal{F}\) as a colimit, we know that there exists a finite sequence of open inclusions in \(\mathcal{B}\)
\[U_{1}\supset U_{1,2}\subset U_{2}\supset U_{2,3}\subset U_{3}\dots U_{n-1} \supset U_{n-1,n}\subset U_{n},\]
with points \(x_{i,j}\in U_{i,j}\), and such that
1. \(x\in U_{1}\) and \(y\in U_{n}\)
2. \(f_{1}(x_{1,2})=f_{1}(x)\) and \(f_{n}(x_{n-1,n})=f_{n}(y)\)
3. \(f_{i}(x_{i-1,i})=f_{i}(x_{i,i+1})\)
where \(f_{i}:U_{i}\to V_{i}\) is our chosen map integrating \(\mathcal{F}_{|U_{i}}\). As the fibers of \(f_{i}\) are all connected, it is possible to find a path \(\gamma_{i}\) joining \(x_{i-1,i}\) with \(x_{i,i+1}\) and such that \(f_{i}(\gamma_{i})\) is constant (for all \(i\neq 1,n\)). In the same manner, we can choose a path \(\alpha\) from \(x\) to \(x_{1,2}\) with \(f_{1}(\alpha)\) constant, and \(\beta\) from \(x_{n-1,n}\) to \(y\) with \(f_{n}(\beta)\) constant. The concatenation of the \(\gamma_{i}\)'s with \(\alpha\) and \(\beta\) provides a path \(\gamma:[0,1]\to X\), from \(x\) to \(y\) and such that, for all \(t\in[0,1]\), the objects \(\pi(\gamma(t))\) are all isomorphic. By definition, this means that \(\gamma\) factors through the image of the monomorphism \(\mathcal{L}_{x}^{max}(\mathcal{F})\hookrightarrow X\), so that it is a continuous path in \(\mathcal{L}_{x}^{max}(\mathcal{F})\) joining \(x\) and \(y\). This implies that \(\mathcal{L}_{x}^{max}(\mathcal{F})\) is path connected.
It remains to show that the \(H_{x}\)-covering \(\widetilde{\mathcal{L}}_{x}^{max}(\mathcal{F})\to\mathcal{L}_{x}^{max}(\mathcal{F})\) is connected, or equivalently that the corresponding classifying morphism \(\pi_{1}(\mathcal{L}_{x}^{max}(\mathcal{F}),x)\to H_{x}\) is a surjective morphism of groups. This is proven using the same argument as before with \(x=y\). Indeed, an element \(h\in H_{x}\) is determined by a finite sequence of open inclusions in \(\mathcal{B}\)
\[U_{1}\supset U_{1,2}\subset U_{2}\supset U_{2,3}\subset U_{3}\dots U_{n-1}\supset U_{n-1,n}\subset U_{n},\]
with points \(x_{i,j}\in U_{i,j}\), and such that
1. \(x\in U_{1}\cap U_{n}\)
2. \(f_{1}(x_{1,2})=f_{1}(x)\) and \(f_{n}(x_{n-1,n})=f_{n}(x)\)
3. \(f_{i}(x_{i-1,i})=f_{i}(x_{i,i+1})\).
We have seen that there exists a loop \(\gamma\) in \(X\), pointed at \(x\in X\), such that \(\pi(\gamma(t))\) is independent of \(t\in[0,1]\), up to isomorphism, in \(X\mathbin{/\!\!\!/}\;\mathcal{F}\). The image of \(\gamma\) by the morphism \(\pi_{1}(\mathcal{L}_{x}^{max}(\mathcal{F}),x)\to H_{x}\) is \(h\) by construction.
This finishes the proof of Theorem 6.2. \(\Box\)
We finish this section with the following statement, concerning the functoriality of the leaf space construction, and thus of its characterization via a universal property. We leave the details to the reader.
**Proposition 6.10**.: _Let \(f:X\to Y\) be a morphism between smooth, connected and separated complex manifolds, \(\mathcal{F}_{Y}\in\mathcal{F}ol(Y)\), and \(\mathcal{F}_{X}\mathrel{\mathop{:}}=f^{*}(\mathcal{F}_{Y})\). We assume that both derived foliations \(\mathcal{F}_{Y}\) and \(\mathcal{F}_{X}\) are transversally smooth, flat, smooth in codimension \(2\) and locally connected. Then, there exists a unique, up to a unique isomorphism, morphism \(u:X\mathbin{/\!\!\!/}\;\mathcal{F}_{X}\to Y\mathbin{/\!\!\!/}\;\mathcal{F}_{Y}\) such that the following diagram commutes up to isomorphism_
The following lemma relates the leaf space with the base of a flat proper morphism with connected fibers.
**Lemma 6.11**.: _Let \(f:X\to S\) be a proper and flat holomorphic morphism with \(S\) connected. If the derived foliation \(f^{*}(0_{S})\) is locally connected, then the Stein factorization \(X\to S^{\prime}={\bf Spec}\,f_{*}({\mathcal{O}}_{X})\to S\) is such that \(S^{\prime}\to S\) is a finite etale covering. We have_
\[X\mathbin{/\!\!/}f^{*}(0_{S})\simeq S^{\prime}.\]
_In particular, if one fiber of \(f\) is connected, then all the fibers of \(f\) are (and \(S^{\prime}=S\)), and we have a canonical equivalence \(X\mathbin{/\!\!/}f^{*}(0_{S})\simeq S\)._
_Proof of the lemma._ By connectivity we can pull back to a curve \(C\to S\), and thus assume that \(S\) is a smooth curve. As \(f\) is flat, so is \(S^{\prime}\to S\) (because it must be non-constant). At a ramification point of \(S^{\prime}\to S\), the Milnor fiber cannot be connected, so \(S^{\prime}\to S\) has no ramification points and is thus etale.
As \(S^{\prime}\to S\) is etale, we have \(f^{*}(0_{S})\simeq g^{*}(0_{S^{\prime}})\), where \(g:X\to S^{\prime}\) is the Stein factorization of \(f\). By Proposition 6.10, we then have a canonical morphism
\[X\mathbin{/\!\!/}f^{*}(0_{S})\longrightarrow S^{\prime}.\]
This morphism is etale, as can be checked easily on cotangent complexes. It is moreover a monomorphism, as its fibers must be connected because they are covered by the fibers of \(g\). This implies that all the holonomy groups are trivial and thus that \(X\mathbin{/\!\!/}f^{*}(0_{S})\) is a separated smooth analytic space. As \(X\to S^{\prime}\) is proper, we have that \(X\mathbin{/\!\!/}f^{*}(0_{S})\to S^{\prime}\) is also proper, and thus an isomorphism. \(\Box\)
## 7. Reeb stability and algebraic integrability
**Theorem 7.1**.: _Let \(X\) be a smooth, proper and connected complex algebraic variety, and \({\mathcal{F}}\in{\mathcal{F}}ol(X/{\mathbb{C}})\) be a transversally smooth derived foliation on \(X\). We suppose that \({\mathcal{F}}\) is flat, smooth in codimension \(2\), and locally connected. We denote by \(X^{h}\) and \({\mathcal{F}}^{h}\) the analytifications of \(X\) and \({\mathcal{F}}\). Then, the following three conditions are equivalent._
1. _The analytic Deligne-Mumford stack_ \(X^{h}\mathbin{/\!\!/}{\mathcal{F}}^{h}\) _is algebraizable and proper._
2. _There exists a smooth and proper effective Deligne-Mumford stack_ \(M\)_, and a flat surjective morphism_ \(p:X\to M\) _with connected fibers such that_ \(p^{*}(0)\simeq{\mathcal{F}}\)_._
3. _There exists one point_ \(x\in X^{h}\)_, such that_ \({\mathcal{L}}_{x}^{max}({\mathcal{F}}^{h})\)_, the maximal leaf passing through_ \(x\)_, is compact, and its holonomy group_ \(Hol_{{\mathcal{F}}^{h}}(x)\) _is finite (or equivalently the holonomy covering_ \(\widetilde{{\mathcal{L}}_{x}^{max}({\mathcal{F}}^{h})}\) _is compact)._
**Proof.**\((1)\Rightarrow(2)\) We consider a smooth and proper Deligne-Mumford stack \(M\) with \(M^{h}\simeq X^{h}\mathbin{/\!\!/}{\mathcal{F}}^{h}\). By GAGA the morphism \(\pi:X^{h}\to M^{h}\) algebraizes to a morphism \(p:X\to M\). Moreover, by the GAGA theorem for perfect derived foliations (Theorem 5.3), we know that the analytification \(\infty\)-functor \({\mathcal{F}}ol(X)\to{\mathcal{F}}ol(X^{h})\) is an equivalence of categories. From \(\pi^{*}(0)\simeq{\mathcal{F}}^{h}\), and from the compatibility of the analytification with pull-backs, we deduce that \(p^{*}(0)\simeq{\mathcal{F}}\). Moreover, as \(\pi=p^{h}\) has connected fibers, so does \(p\).
\((2)\Rightarrow(3)\) By the universal property of the leaf spaces (Proposition 6.10), we have a canonical identification \(M^{h}\simeq X^{h}\,/\!\!\!/\,\mathcal{F}^{h}\). In particular, the maximal leaves of \(\mathcal{F}^{h}\) are automatically the analytification of the fibers of \(p\), and thus compact, by hypothesis. In the same way, the holonomy groups of \(\mathcal{F}^{h}\) are the isotropy groups of \(M\), and thus are finite because \(M\) is proper.
\((3)\Rightarrow(1)\) This implication is the main content of the theorem, and its proof will be somewhat long. Let \(x\in X\) be a closed point such that \(\mathcal{L}_{x}^{max}(\mathcal{F}^{h})\) is compact. The monomorphism \(\mathcal{L}_{x}^{max}(\mathcal{F}^{h})\hookrightarrow X^{h}\) is thus a closed immersion, and by GAGA \(\mathcal{L}_{x}^{max}(\mathcal{F}^{h})\) arises as the analytification of a closed subscheme \(X_{x}\subset X\). We denote by \(\widehat{X_{x}}\) the formal completion of \(X\) along \(X_{x}\).
Analogously, the monomorphism \(BH_{x}\hookrightarrow X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) is a closed immersion. Indeed, \(X^{h}\longrightarrow X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) being flat and surjective, it is enough (by flat descent) to show that the pull-back \(BH_{x}\times_{X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}}X^{h}\hookrightarrow X^{h}\) is a closed immersion. But \(BH_{x}\times_{X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}}X^{h}\simeq X^{h}_{x}\simeq\mathcal{L}_{x}^{max}(\mathcal{F}^{h})\), and we have seen that it is closed in \(X^{h}\) by assumption. We can therefore consider \(\widehat{BH_{x}}\), the formal completion of the stack \(X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) along the closed substack \(BH_{x}\). We thus get a diagram of formal stacks
(3)
The formal stack \(\widehat{BH_{x}}\) is of the form \([\widehat{\mathbb{A}}^{d}/H_{x}]\), and as \(H_{x}\) is finite we can even assume that the action of \(H_{x}\) is induced by a linear action on \(\mathbb{A}^{d}\). The above diagram can be considered as a formal \(\widehat{BH_{x}}\)-point of the stack \(Fin(X)\) of schemes finite over \(X\), whose reduced \(BH_{x}\)-point is given by \(X_{x}\to X\times BH_{x}\). Because the stack \(Fin(X)\) is an algebraic stack locally of finite presentation, this formal point can be algebraized (by Artin's algebraization theorem). Therefore, we can find a pointed smooth variety \((S,s)\), which is an etale neighborhood of \(0\) in \(\mathbb{A}^{d}\), with a compatible \(H_{x}\)-action (fixing \(s\)), a scheme \(Z\) with a flat and proper map \(Z\to[S/H_{x}]\), and a diagram
(4)
The above diagram is such that the fiber of \(Z\) at \(BH_{x}=[s/H_{x}]\hookrightarrow[S/H_{x}]\) is isomorphic to \(X_{x}\) as a subscheme in \(X\). Moreover, considering the formal completion of the diagram (4) at \(X_{x}\subset Z\) recovers the diagram (3) above.
**Lemma 7.2**.: _By possibly shrinking \((S,s)\) to a smaller etale neighborhood of \(0\) in \(\mathbb{A}^{d}\), we have \(f^{*}(\mathcal{F}^{h})\simeq q^{*}(0)\) as derived analytic foliations on \(Z\) (i.e. in \(\mathcal{F}ol(Z^{h})\))._
_Proof of Lemma 7.2._ To prove this lemma we use the uniqueness of derived enhancements of Proposition 4.3. For this, we need to prove that \(f^{*}(\mathcal{F}^{h})\) and \(q^{*}(0)\) are both smooth in codimension \(2\), and then that the corresponding differential ideals in \(\Omega^{1}_{Z^{h}}\) coincide.
We first notice that by construction the morphism \(f:Z\to X\) is formally etale around \(X_{x}\hookrightarrow Z\), and thus, by shrinking \(S\) if necessary, we can assume that \(f\) is an etale morphism. Therefore, it is clear that \(f^{*}(\mathcal{F}^{h})\) is smooth in codimension \(2\), as it is locally isomorphic to \(\mathcal{F}\). Concerning \(q^{*}(0)\), in order to prove that it is smooth in codimension \(2\) it is enough to show that \(\mathbb{L}_{Z/[S/H_{x}]}\) is a vector bundle outside of a codimension \(2\) closed subset. However, the perfect complexes \(\mathbb{L}_{Z/[S/H_{x}]}\) and \(f^{*}(\mathbb{L}_{\mathcal{F}})\) are quasi-isomorphic on the formal completion \(\widehat{X_{x}}\). Because the stack of quasi-isomorphisms between two perfect complexes is an Artin stack of finite presentation (see for instance [13]), we deduce that these two perfect complexes must be quasi-isomorphic locally around \(X_{x}\subset Z\). Therefore, by shrinking \(S\) if necessary, we can assume that they are quasi-isomorphic on \(Z\). This implies that \(q^{*}(0)\) is also smooth in codimension \(2\).
Finally, we can use the same argument but now for the stack of quasi-isomorphisms between \(\mathbb{L}_{Z/[S/H_{x}]}\) and \(f^{*}(\mathbb{L}_{\mathcal{F}})\) compatible with the canonical morphism
This shows that the two quotients of coherent sheaves
\[\Omega^{1}_{Z}\to H^{0}(\mathbb{L}_{Z/[S/H_{x}]})\qquad\Omega^{1}_{Z}\to H^{0} (f^{*}(\mathbb{L}_{\mathcal{F}}))\]
are isomorphic, and thus their kernel must be equal. These kernels being precisely the differential ideals \(K_{q^{*}(0)}\) and \(K_{f^{*}(\mathcal{F})}\), we see that these must coincide. Therefore, all the conditions are met to apply Proposition 4.3 and we deduce the lemma. \(\Box\)
Since the morphism \(f:Z\to X\) is etale, it is easy to see that \(f^{*}(\mathcal{F}^{h})\) is not only smooth in codimension \(2\), but also flat and locally connected. We can thus apply the functoriality of leaf spaces (Proposition 6.10) in order to see that there exists a commutative diagram in the analytic category
The morphism \(r\) is here a finite etale cover, which can be identified with the relative connected components of \(Z\to[S/H_{x}]\) (see Lemma 6.11). Since \(X_{x}\) is connected, we see that this finite etale cover must be trivial, and thus \(r\) is an equivalence. We therefore conclude the existence of a commutative diagram
Since \(f\) is etale and \(q^{*}(0)\simeq\pi^{*}(0)\), a comparison of the cotangent complexes shows that \(g\) must also be etale.
As a result, for any \(y\in S^{h}\), we have an etale morphism of connected analytic spaces
\[\{y\}\times_{[S^{h}/H_{x}]}Z^{h}\longrightarrow\widetilde{\mathcal{L}}^{max}_{ f(y)}(\mathcal{F}).\]
However, because \(p\) is proper, we deduce that the source is a proper analytic space, and that the above morphism is a finite etale cover, and thus a proper surjective morphism. In particular, \(\widetilde{\mathcal{L}}^{max}_{f(y)}(\mathcal{F})\) must be compact. We thus conclude that for all \(y\in X\) in the image of \(f\), the maximal leaf \(\mathcal{L}^{max}_{y}(\mathcal{F})\subset X^{h}\) is compact and its holonomy \(H_{y}\) is finite. As \(f\) is an etale morphism, its image is a dense Zariski open subset in \(X\), say \(W\subset X\). Let \(y\in X-W\) be a closed point, and let \(y_{i}\) be a sequence of closed points in \(X\) converging to \(y\) in the analytic topology. We may assume that the holonomy groups of the \(y_{i}\) are all trivial, as these form a Zariski dense open substack in \([S/H_{x}]\). We then consider the corresponding sequence of closed subschemes \(\mathcal{L}^{max}_{y_{i}}(\mathcal{F}^{h})\subset X\). By compactness of the Hilbert scheme, this sequence of subschemes possesses a limit \(L\subset X\). Clearly \(\pi\) sends \(L\) to \(BH_{y}\subset X^{h}\,/\!\!\!/\;\mathcal{F}^{h}\), and thus \(L\) is contained in the maximal leaf \(\mathcal{L}^{max}_{y}(\mathcal{F}^{h})\). Moreover, \(y\) being the limit of the \(y_{i}\), we have \(y\in L\). More generally, any point \(y^{\prime}\in\mathcal{L}^{max}_{y}(\mathcal{F}^{h})\) is a limit of points \(y^{\prime}_{i}\in\mathcal{L}^{max}_{y_{i}}(\mathcal{F}^{h})\), and thus \(y^{\prime}\in L\). Therefore we have that \(L\) and \(\mathcal{L}^{max}_{y}(\mathcal{F}^{h})\) coincide set-theoretically. This implies in particular that \(\mathcal{L}^{max}_{y}(\mathcal{F}^{h})\) is compact, that the monomorphism \(\mathcal{L}^{max}_{y}(\mathcal{F}^{h})\hookrightarrow X^{h}\) is a closed immersion, and by GAGA that \(\mathcal{L}^{max}_{y}(\mathcal{F}^{h})\) is algebraizable. We thus conclude that the maximal leaves \(\mathcal{L}^{max}_{y}(\mathcal{F}^{h})\) are compact and algebraizable for all \(y\in X^{h}\).
It remains to show that the holonomy group \(H_{y}=Hol_{y}(\mathcal{F}^{h})\) is finite for all \(y\). For this, we use the same techniques. We write the maximal leaf \(\mathcal{L}^{max}_{y}(\mathcal{F}^{h})\) as the limit, in the Hilbert scheme, of compact leaves with trivial holonomy \(\mathcal{L}^{max}_{y_{i}}(\mathcal{F}^{h})\). This limit can be arranged as a diagram of analytic spaces
where \(S\) is a small holomorphic disk, \(Z\) is flat and proper with connected fibers over \(S\), and \(Z\to X\times S\) is a closed immersion. Let \(Z_{0}=\mathcal{L}^{max}_{y}(\mathcal{F}^{h})\subset X\) be the fiber at the central point \(0\in S\), and let \(Z_{t}\subset X\) denote the "generic" or Milnor fiber. The nearby cycles in dimension \(0\) form a constructible sheaf on \(Z_{0}\), and therefore any open cover \(U_{i}\) of \(Z_{0}\) in \(X\) can be refined to a cover such that each \(U_{i}\cap Z_{t}\) has a uniformly bounded number of connected components. By covering \(Z_{0}\) with \(U_{i}\) belonging to our basis \(\mathcal{B}\) for the topology, we deduce the existence of an etale morphism \(V\to X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) covering the point \(y\), and such that \(\pi^{-1}(V)\cap\mathcal{L}^{max}_{y_{i}}(\mathcal{F}^{h})\) possesses at most \(m\) connected components for some fixed integer \(m\). Equivalently, let \(G\rightrightarrows V\) be the etale groupoid induced on \(V\), so that \([V/G]\) is an open substack in \(X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\). Then, all the orbits of the points \(y_{i}\) are finite of order at most \(m\) in \(V\). As a result, if \(h\in G_{y}\) is a point in the stabilizer of \(y\in V\), since all the \(y_{i}\) have trivial holonomies, we must have that \(h^{m}\) is the identity near \(y_{i}\), for all \(i\) big enough such that \(h^{m}\) is defined at \(y_{i}\). By analytic continuation we have that \(h^{m}=id\). Now, let \(H_{y}\) be the holonomy group at a point \(y\). We have seen that \(H_{y}\) is of finite exponent \(m\). Moreover, as it is a subgroup of the group of germs of holomorphic isomorphisms of \(\mathbb{C}^{d}\) at \(0\), it must be finite (see [20, Lem. 2]).
To summarize, we have seen that all the maximal leaves are compact, and all the holonomy groups are finite. We still need to show that this implies that \(X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) is proper and algebraizable. We start by proving that \(X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) is a separated Deligne-Mumford stack, i.e. that its diagonal is a finite morphism. For this, we come back to the very beginning of the proof of \((3)\Rightarrow(1)\). Now that we know that all the maximal leaves are compact, and all holonomy groups are finite, we can run exactly the same argument starting with any point \(x\in X\). We have seen that there exists a commutative diagram
In this diagram \(p\) is a proper morphism, and moreover the natural morphism \(Z^{h}\to[S^{h}/H_{x}]\times_{X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}}X^{h}\) is etale. Properness of \(p\) implies that this etale morphism is moreover a finite covering. As a consequence \([S^{h}/H_{x}]\times_{X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}}X^{h}\to[S^{h}/H_{x}]\) is proper. As the etale morphisms of the form \([S^{h}/H_{x}]\to X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) cover the whole stack \(X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\), we deduce that the morphism \(\pi:X^{h}\to X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) is a proper morphism.
We consider the nerve of \(\pi\), which defines a proper and flat groupoid \(G\rightrightarrows X^{h}\). The diagonal morphism \(G\to X^{h}\times X^{h}\) is thus a proper and unramified morphism. It is thus a finite morphism. As \(X^{h}\to X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) is a flat covering, this implies that the diagonal of the stack \(X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) is a finite morphism, and thus that \(X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) is a separated Deligne-Mumford stack. Finally, as \(X^{h}\to X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) is proper and surjective, we deduce that \(X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) is also proper.
To finish, we need to prove that \(X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\) is algebraizable. But this follows by considering the groupoid \(G\rightrightarrows X^{h}\) which is the nerve of \(\pi\) as above. Since \(G\to X^{h}\times X^{h}\) is finite, and \(X^{h}\) and \(G\) are both proper algebraic spaces, we know by GAGA that the groupoid \(G\rightrightarrows X^{h}\) is the analytification of a proper flat algebraic groupoid \(G^{\prime}\rightrightarrows X\). We can consider the quotient stack \([X/G^{\prime}]\) of this proper flat groupoid (see for instance [19]) which defines an algebraic Deligne-Mumford stack whose analytification is \(X^{h}\mathbin{/\!\!/}\mathcal{F}^{h}\).
|
2307.06446
|
Integer-valued rational functions over globalized pseudovaluation
domains
|
$\DeclareMathOperator{\IntR}{Int{}^\text{R}}$$\DeclareMathOperator{\Int}{Int}$Let
$D$ be a domain. Park determined the necessary and sufficient conditions for
which the ring of integer-valued polynomials $\Int(D)$ is a globalized
pseudovaluation domain (GPVD). In this work, we investigate the ring of
integer-valued rational functions $\IntR(D)$. Since it is necessary that $D$ be
a GPVD for $\IntR(D)$ to be a GPVD, we consider $\IntR(D)$, where $D$ is a
GPVD. We determine that if $D$ is a pseudosingular GPVD, then $\IntR(D)$ is a
GPVD. We also completely characterize when $\IntR(D)$ is a GPVD if $D$ is a
pseudovaluation domain that is not a valuation domain.
|
Baian Liu
|
2023-07-12T20:39:16Z
|
http://arxiv.org/abs/2307.06446v2
|
# Integer-valued rational functions over globalized pseudovaluation domains
###### Abstract
Let \(D\) be a domain. Park determined the necessary and sufficient conditions for which the ring of integer-valued polynomials \(\mathrm{Int}(D)\) is a globalized pseudovaluation domain (GPVD). In this work, we investigate the ring of integer-valued rational functions \(\mathrm{Int}^{\mathrm{R}}(D)\). Since it is necessary that \(D\) be a GPVD for \(\mathrm{Int}^{\mathrm{R}}(D)\) to be a GPVD, we consider \(\mathrm{Int}^{\mathrm{R}}(D)\), where \(D\) is a GPVD. We determine that if \(D\) is a pseudosingular GPVD, then \(\mathrm{Int}^{\mathrm{R}}(D)\) is a GPVD. We also completely characterize when \(\mathrm{Int}^{\mathrm{R}}(D)\) is a GPVD if \(D\) is a pseudovaluation domain that is not a valuation domain.
## 1 Introduction
The concept of integer-valued polynomials has been studied throughout many different areas of mathematics. One way to study integer-valued polynomials is to consider a collection of integer-valued polynomials as a ring. Given a domain \(D\) with field of fractions \(K\) and \(E\) some subset of \(K\), we can define
\[\mathrm{Int}(D)\coloneqq\{f\in K[x]\mid f(d)\in D,\,\forall d\in D\}\quad\text {and}\quad\mathrm{Int}(E,D)\coloneqq\{f\in K[x]\mid f(a)\in D,\,\forall a\in E\}\]
the **ring of integer-valued polynomials over \(D\)** and **ring of integer-valued polynomials on \(E\) over \(D\)**, respectively. Note that \(\mathrm{Int}(D,D)=\mathrm{Int}(D)\).
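For instance, over \(D=\mathbb{Z}\) the polynomial \(\frac{x(x-1)}{2}\) maps every integer to an integer, so \(\frac{x(x-1)}{2}\in\mathrm{Int}(\mathbb{Z})\) even though it does not lie in \(\mathbb{Z}[x]\); this is the basic phenomenon that makes rings of integer-valued polynomials larger than polynomial rings over \(D\).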
One type of question about rings of the form \(\mathrm{Int}(D)\) starts by fixing a property of the ring of integer-valued polynomials and then asking what conditions the base ring \(D\) must satisfy. The two papers [10] and [12] give two different classifications of Prufer domains of the form \(\mathrm{Int}(D)\). Furthermore, [13] gives a complete characterization of when \(\mathrm{Int}(D)\) is a Prufer \(v\)-multiplication domain (P\(v\)MD). The ring-theoretic property we look at in this work is the property of being a globalized pseudovaluation domain (GPVD). In [14], Park gives complete necessary and sufficient conditions for \(\mathrm{Int}(D)\) to be a GPVD. To introduce this result, we provide the definitions surrounding GPVDs.
We can consider a GPVD to be a generalization of a Prufer domain. We first focus on the local counterparts. Localizing a Prufer domain at a prime ideal yields a valuation domain. A way to generalize a valuation domain is to use a pseudovaluation domain. For references on pseudovaluation domains, see [15, 16].
**Definition 1.1**.: A domain \(D\) is a **pseudovaluation domain** (PVD) if \(D\) has a valuation overring \(V\) such that \(\mathrm{Spec}(D)=\mathrm{Spec}(V)\) as sets. The valuation domain is uniquely determined and is called the **associated valuation domain** of \(D\).
**Remark 1.2**.: _In particular, a pseudovaluation domain and the associated valuation domain have the same maximal ideal._
_One way to construct a pseudovaluation domain is to start with a valuation domain \(V\). Let \(\mathfrak{m}\) be the maximal ideal of \(V\). Consider the canonical projection \(\pi:V\to V/\mathfrak{m}\). Take a subfield \(F\subseteq V/\mathfrak{m}\). Then \(D\coloneqq\pi^{-1}(F)\) is a pseudovaluation domain with associated valuation domain \(V\)._
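For a concrete instance of this construction, take \(V=\mathbb{C}[[t]]\) with maximal ideal \(\mathfrak{m}=(t)\) and residue field \(\mathbb{C}\), and pull back the subfield \(\mathbb{R}\subseteq\mathbb{C}\). The resulting ring \(D=\mathbb{R}+t\mathbb{C}[[t]]\) is a PVD with associated valuation domain \(\mathbb{C}[[t]]\), and it is not itself a valuation domain: its field of fractions is \(\mathbb{C}((t))\), and neither \(i\) nor \(i^{-1}=-i\) belongs to \(D\).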
We can consider Prufer domains to be global counterparts of valuation domains. Dobbs and Fontana studied two global counterparts of pseudovaluation domains called locally pseudovaluation domains and globalized pseudovaluation domains [10]. First, we introduce the definition of a locally pseudovaluation domain.
**Definition 1.3**.: A domain \(D\) is a **locally pseudovaluation domain** (LPVD) if for every maximal ideal \(\mathfrak{m}\) of \(D\), the localization \(D_{\mathfrak{m}}\) is a PVD.
Since there is a valuation domain associated with a pseudovaluation domain, we would like there to be a Prufer domain associated with a locally pseudovaluation domain. However, a locally pseudovaluation domain that is not a PVD or a Prufer domain does not have a Prufer overring with the same prime spectrum [1, Proposition 3.3]. Nevertheless, there is a subclass of LPVDs that does have an associated Prufer overring which is a unibranched extension.
**Definition 1.4**.: An extension of commutative rings \(A\subseteq B\) is **unibranched** if the contraction map \(\operatorname{Spec}(B)\to\operatorname{Spec}(A)\) is a bijection.
A domain \(D\) is a **globalized pseudovaluation domain** (GPVD) if there exists a Prufer domain \(T\) containing \(D\) such that
* \(D\subseteq T\) is a unibranched extension and
* there exists a nonzero radical ideal \(J\) common to \(D\) and \(T\) such that each prime ideal of \(T\) containing \(J\) is maximal in \(T\) and each prime ideal of \(D\) containing \(J\) is maximal in \(D\).
The Prufer domain is uniquely determined and is called the **associated Prufer domain** of \(D\).
**Remark 1.5**.: _Theorem 3.1 in [10] provides an equivalent definition of a GPVD. A domain \(D\) is a GPVD if there exists a Prufer domain \(T\) containing \(D\) such that there is a common radical ideal \(J\) where \(D/J\subseteq T/J\) is a unibranched extension of Krull dimension zero rings._
**Remark 1.6**.: _A GPVD is an LPVD, and a PVD is a GPVD with its associated valuation domain as its associated Prufer domain and their common maximal ideal as the common radical ideal in the definition of a GPVD [10]._
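As an illustration of the definition beyond the local case, one can check that \(D=\mathbb{Z}+3\mathbb{Z}[i]\) is a GPVD with associated Prufer domain \(T=\mathbb{Z}[i]\): since \(3\) is inert in \(\mathbb{Z}[i]\), the ideal \(J=3\mathbb{Z}[i]\) is a common radical ideal with \(D/J\cong\mathbb{F}_{3}\subseteq\mathbb{F}_{9}\cong T/J\) a unibranched extension of zero-dimensional rings (as in Remark 1.5), while inverting \(3\) gives \(D[1/3]=T[1/3]\). This \(D\) is neither a PVD (it is not local) nor a Prufer domain.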
We return to Park's characterization of GPVDs of the form \(\operatorname{Int}(D)\). This characterization uses the idea of an interpolation domain, which is a domain \(D\) such that for every pair of distinct elements \(a,b\in D\) there exists \(f\in\operatorname{Int}(D)\) such that \(f(a)=0\) and \(f(b)=1\). Park showed that for a domain \(D\) that is not a field, \(\operatorname{Int}(D)\) is a GPVD if and only if \(D\) is a GPVD and an interpolation domain [11]. Furthermore, when \(\operatorname{Int}(D)\) is a GPVD, the associated Prufer domain is \(\operatorname{Int}(D,T)\), where \(T\) is the associated Prufer domain of \(D\).
Now we examine a generalization of the concept of integer-valued polynomials. We will study integer-valued rational functions. For a domain \(D\) with field of fractions \(K\) and \(E\) some subset of \(K\), we define
\[\operatorname{Int}^{\mathrm{R}}(D)\coloneqq\{\varphi\in K(x)\mid\varphi(d) \in D,\,\forall d\in D\}\quad\text{and}\quad\operatorname{Int}^{\mathrm{R}}(E, D)\coloneqq\{\varphi\in K(x)\mid\varphi(a)\in D,\,\forall a\in E\}\]
the **ring of integer-valued rational functions over \(D\)** and the **ring of integer-valued rational functions on \(E\) over \(D\)**, respectively. Note that \(\operatorname{Int}^{\mathrm{R}}(D,D)=\operatorname{Int}^{\mathrm{R}}(D)\). We can also define an ideal of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) using an ideal \(I\) of \(D\). One can check that the set
\[\operatorname{Int}^{\mathrm{R}}(E,I)\coloneqq\{\varphi\in\operatorname{Int}^{ \mathrm{R}}(E,D)\mid\varphi(a)\in I,\,\forall a\in E\}\]
is an ideal of \(\mathrm{Int}^{\mathrm{R}}(E,D)\).
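To see that such rings can contain elements that are not polynomials, suppose for a moment that \(V\) is a discrete valuation ring with uniformizer \(\pi\), valuation \(v\), and field of fractions \(K\), and consider \(\varphi(x)=\frac{\pi}{\pi+x^{2}}\). For \(a\in K\) we have \(v(\pi+a^{2})=0\) if \(v(a)=0\), \(v(\pi+a^{2})=v(\pi)\) if \(v(a)>0\), and \(v(\pi+a^{2})=2v(a)\) if \(v(a)<0\), so \(v(\varphi(a))\geq 0\) in every case. Hence \(\varphi\in\mathrm{Int}^{\mathrm{R}}(K,V)\) although \(\varphi\notin K[x]\).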
We want to explore under what conditions the ring of integer-valued rational functions is a GPVD. We first provide a necessary condition. As with Prufer domains, the homomorphic image of a GPVD is a GPVD [12, Lemma 1]. In order to have a ring of integer-valued rational functions that is a GPVD, we must have a base ring that is a GPVD.
**Proposition 1.7**.: _Let \(D\) be a domain with \(E\) a nonempty subset of the field of fractions of \(D\). If \(\mathrm{Int}^{\mathrm{R}}(E,D)\) is a GPVD, then \(D\) is a GPVD._
Proof.: Let \(a\in E\) be any element and consider the homomorphism \(\mathrm{Int}^{\mathrm{R}}(E,D)\to D\) given by evaluation at \(a\). Then \(D\) is the homomorphic image of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) so \(D\) is a GPVD.
In Section 2, we introduce the notion of a pseudosingular GPVD, which generalizes the idea of a singular Prufer domain. We then show that \(\mathrm{Int}^{\mathrm{R}}(E,D)\) is a GPVD for \(D\) a pseudosingular GPVD and \(E\) any subset of the field of fractions of \(D\). We also show that the Prufer domain associated with \(\mathrm{Int}^{\mathrm{R}}(E,D)\) is \(\mathrm{Int}^{\mathrm{R}}(E,T)\), where \(T\) is the Prufer domain associated with \(D\). In Section 3, we completely classify the PVDs \(D\) that are not valuation domains for which the ring \(\mathrm{Int}^{\mathrm{R}}(D)\) is a GPVD. Lastly, in Section 4, we give necessary and sufficient conditions under which the ring \(\mathrm{Int}^{\mathrm{R}}(K,D)\) is a local domain, where \(D\) is a PVD that is not a valuation domain and \(K\) is the field of fractions of \(D\).
## 2 Pseudosingular GPVDs
In this section, we give a family of GPVDs whose rings of integer-valued rational functions are also GPVDs. We introduce the notion of a pseudosingular GPVD, generalizing the notion of a singular Prufer domain. The notion of a singular Prufer domain is important as it gives a condition under which the ring \(\mathrm{Int}^{\mathrm{R}}(E,D)\) is a Prufer domain. The same proof as for Theorem 3.5 in [1] shows that if \(D\) is a singular Prufer domain, then \(\mathrm{Int}^{\mathrm{R}}(E,D)\) is a Prufer domain for any subset \(E\) of the field of fractions of \(D\). For this reason, we generalize this notion to give a condition under which the ring \(\mathrm{Int}^{\mathrm{R}}(E,D)\) is a GPVD.
**Definition 2.1**.: [1] Let \(D\) be a Prufer domain. We say \(D\) is **singular** if there exists a family \(\Lambda\) of maximal ideals of \(D\) so that
* \(D=\bigcap\limits_{\mathfrak{m}\in\Lambda}D_{\mathfrak{m}}\);
* for each \(\mathfrak{m}\in\Lambda\), the maximal ideal of \(D_{\mathfrak{m}}\) is principal, generated by some \(t_{\mathfrak{m}}\in D_{\mathfrak{m}}\); and
* there exists some \(t\in D\) and \(n\in\mathbb{N}\) such that for each \(\mathfrak{m}\in\Lambda\), we have \(0<v_{\mathfrak{m}}(t)<nv_{\mathfrak{m}}(t_{\mathfrak{m}})\), where \(v_{\mathfrak{m}}\) is a valuation associated with \(D_{\mathfrak{m}}\).
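For instance, any discrete valuation ring \(V\) with uniformizer \(\pi\) is singular: take \(\Lambda=\{\mathfrak{m}\}\), \(t_{\mathfrak{m}}=\pi\), \(t=\pi\), and \(n=2\), so that \(0<v(t)<2v(t_{\mathfrak{m}})\). By contrast, \(\mathbb{Z}\) is a Prufer domain that is not singular: the first condition forces \(\Lambda\) to contain every maximal ideal \(p\mathbb{Z}\), and no nonzero integer \(t\) lies in all of them, so the third condition cannot hold.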
Since there is a Prufer domain associated with a GPVD, we can impose conditions on the associated Prufer domain as conditions on the GPVD.
**Definition 2.2**.: Let \(D\) be a GPVD. We say that \(D\) is **pseudosingular** if the associated Prufer domain \(T\) is singular.
Because a GPVD has an associated Prufer domain that is a unibranched extension, it could be useful to have a description of the maximal ideals of the ring we suspect is a GPVD in order to prove that the ring is indeed a GPVD. Some of the maximal ideals of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) can be defined through maximal ideals of the base ring \(D\).
**Definition 2.3**.: Let \(D\) be a domain and let \(E\) be some nonempty subset of the field of fractions of \(D\). Take \(\mathfrak{m}\) to be a maximal ideal of \(D\) and \(a\in E\). We define the set
\[\mathfrak{M}_{\mathfrak{m},a}\coloneqq\{\varphi\in\mathrm{Int}^{\mathrm{R}}(E, D)\mid\varphi(a)\in\mathfrak{m}\}.\]
We know that \(\mathfrak{M}_{\mathfrak{m},a}\) is a maximal ideal of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) because it is the kernel of the surjective map \(\mathrm{Int}^{\mathrm{R}}(E,D)\to D/\mathfrak{m}\) given by evaluation at \(a\) modulo \(\mathfrak{m}\). Maximal ideals of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) of the form \(\mathfrak{M}_{\mathfrak{m},a}\) are called **maximal pointed ideals**.
In general, not all of the maximal ideals of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) are maximal pointed ideals. Since there is a notion of a limit of a family of ideals using filters and ultrafilters, we can use this notion to potentially describe more maximal ideals of \(\mathrm{Int}^{\mathrm{R}}(E,D)\).
**Definition 2.4**.: Let \(S\) be a set. A **filter**\(\mathcal{F}\) on \(S\) is a collection of subsets of \(S\) such that
1. \(\emptyset\notin\mathcal{F}\);
2. if \(A,B\in\mathcal{F}\), then \(A\cap B\in\mathcal{F}\); and
3. if \(A\in\mathcal{F}\) and \(B\subseteq S\) is such that \(A\subseteq B\), then \(B\in\mathcal{F}\).
If \(\mathcal{U}\) is a filter on \(S\) such that for every \(A\subseteq S\), we have \(A\in\mathcal{U}\) or \(S\setminus A\in\mathcal{U}\), then we call \(\mathcal{U}\) an **ultrafilter**. Every filter on \(S\) is contained in some ultrafilter on \(S\) by the Ultrafilter Lemma.
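For example, for any \(s\in S\) the collection \(\mathcal{U}_{s}=\{A\subseteq S\mid s\in A\}\) is an ultrafilter, called the principal ultrafilter at \(s\). On an infinite set \(S\), the collection of cofinite subsets of \(S\) is a filter that is not an ultrafilter, since for a subset \(A\) with both \(A\) and \(S\setminus A\) infinite, neither \(A\) nor \(S\setminus A\) is cofinite.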
**Definition 2.5**.: Let \(R\) be a commutative ring. Take \(\{I_{\lambda}\}_{\lambda\in\Lambda}\) to be a family of ideals of \(R\). For each \(r\in R\), we define the **characteristic set of \(r\) on \(\{I_{\lambda}\}\)** to be
\[\chi_{r}=\{I_{\lambda}\mid r\in I_{\lambda}\}.\]
For a filter \(\mathcal{F}\) on \(\{I_{\lambda}\}\), we define the **filter limit of \(\{I_{\lambda}\}\) with respect to \(\mathcal{F}\)** as
\[\lim_{\mathcal{F}}I_{\lambda}=\{r\in R\mid\chi_{r}\in\mathcal{F}\}.\]
If \(\mathcal{F}\) is an ultrafilter, we call \(\lim_{\mathcal{F}}I_{\lambda}\) the **ultrafilter limit of \(\{I_{\lambda}\}\) with respect to \(\mathcal{F}\)**.
**Remark 2.6**.: _The filter and ultrafilter limits of a family of ideals are also themselves ideals. If \(\{\mathfrak{p}_{\lambda}\}_{\lambda\in\Lambda}\) is a family of prime ideals and \(\mathcal{U}\) is an ultrafilter of \(\{\mathfrak{p}_{\lambda}\}\), then the ultrafilter limit \(\lim_{\mathcal{U}}\mathfrak{p}_{\lambda}\) is also a prime ideal. This gives rise to the **ultrafilter topology** on \(\mathrm{Spec}(R)\), which is identical to the patch topology and the constructible topology on \(\mathrm{Spec}(R)\)[2]._
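For instance, if \(\mathcal{U}\) is the principal ultrafilter consisting of all subfamilies that contain a fixed \(I_{\lambda_{0}}\), then \(\lim_{\mathcal{U}}I_{\lambda}=I_{\lambda_{0}}\), since \(\chi_{r}\in\mathcal{U}\) exactly when \(r\in I_{\lambda_{0}}\). For a less trivial example, take \(R=\mathbb{Z}\), the family \(\{p\mathbb{Z}\}\) of all maximal ideals, and a non-principal ultrafilter \(\mathcal{U}\): a nonzero integer lies in only finitely many of the \(p\mathbb{Z}\), and a non-principal ultrafilter contains no finite sets, so \(\lim_{\mathcal{U}}p\mathbb{Z}=(0)\), an ultrafilter limit of maximal ideals that is prime but not maximal.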
We will now use a description of the maximal ideals using maximal pointed ideals and ultrafilter limits to show that rings of integer-valued rational functions over pseudosingular GPVDs are also GPVDs. We will also utilize the rational function \(\theta(x)=\frac{t(1+x^{2n})}{(1+tx^{n})(t+x^{n})}\), where \(t\) and \(n\) come from the definition of a singular Prufer domain. This rational function played an integral role in showing that a ring of integer-valued rational functions over a singular Prufer domain is a Prufer domain in [2].
**Theorem 2.7**.: _Let \(D\) be a pseudosingular GPVD. Denote by \(K\) the field of fractions of \(D\) and let \(E\) be a nonempty subset of \(K\). Suppose that \(T\) is the associated Prufer domain of \(D\). Then \(\mathrm{Int}^{R}(E,D)\) is a GPVD with associated Prufer domain \(\mathrm{Int}^{R}(E,T)\)._
Proof.: Let \(J\) denote the nonzero radical ideal common to \(D\) and \(T\) so that \(D/J\subseteq T/J\) is a unibranched extension of Krull dimension zero rings. Such an ideal exists since \(D\) is a GPVD.
Since \(D\) is pseudosingular, we know that \(T\) is singular. Therefore, there exists a collection \(\Lambda\) of maximal ideals of \(T\) such that
* \(T=\bigcap\limits_{\mathfrak{m}\in\Lambda}T_{\mathfrak{m}}\),
* for each \(\mathfrak{m}\in\Lambda\), the maximal ideal of the valuation domain \(T_{\mathfrak{m}}\) is principally generated by some \(t_{\mathfrak{m}}\in T_{\mathfrak{m}}\), and
* there exists some \(t\in T\) and \(n\in\mathbb{N}\) such that \(0<v_{\mathfrak{m}}(t)<nv_{\mathfrak{m}}(t_{\mathfrak{m}})\) for all \(\mathfrak{m}\in\Lambda\), where \(v_{\mathfrak{m}}\) is a valuation associated with \(T_{\mathfrak{m}}\).
We fix \(t\) and \(n\) for the rest of the proof, as well as \(v_{\mathfrak{m}}\) for each \(\mathfrak{m}\in\Lambda\). Furthermore, since we are considering two rings of integer-valued rational functions at once, we need notation to clarify to which ring each maximal pointed ideal belongs. Let \(a\in E\) and \(\mathfrak{m}\in\Lambda\). We define \(\mathfrak{M}^{T}_{\mathfrak{m},a}\coloneqq\{\varphi\in\operatorname{Int}^{\operatorname{R}}(E,T)\mid\varphi(a)\in\mathfrak{m}\}\) as an ideal of \(\operatorname{Int}^{\operatorname{R}}(E,T)\). Let \(\mathfrak{n}\coloneqq\mathfrak{m}\cap D\). We define \(\mathfrak{M}^{D}_{\mathfrak{n},a}\coloneqq\{\varphi\in\operatorname{Int}^{\operatorname{R}}(E,D)\mid\varphi(a)\in\mathfrak{n}\}\) as an ideal of \(\operatorname{Int}^{\operatorname{R}}(E,D)\).
We first give the statements we want to prove. The proofs of the statements will follow.
1. Every maximal ideal of \(T\) is an ultrafilter limit of ideals in \(\Lambda\). Furthermore, \(t\) is in the Jacobson radical of \(T\).
2. The ideal \(\operatorname{Int}^{\operatorname{R}}(E,J)\) is a nonzero radical ideal of both \(\operatorname{Int}^{\operatorname{R}}(E,D)\) and \(\operatorname{Int}^{\operatorname{R}}(E,T)\).
3. A prime ideal of \(\operatorname{Int}^{\operatorname{R}}(E,T)\) containing \(\operatorname{Int}^{\operatorname{R}}(E,J)\) is maximal in \(\operatorname{Int}^{\operatorname{R}}(E,T)\).
4. A prime ideal of \(\operatorname{Int}^{\operatorname{R}}(E,D)\) containing \(\operatorname{Int}^{\operatorname{R}}(E,J)\) is maximal in \(\operatorname{Int}^{\operatorname{R}}(E,D)\).
5. Every maximal ideal of \(\operatorname{Int}^{\operatorname{R}}(E,T)\) can be written in the form \(\lim\limits_{\mathcal{U}}\mathfrak{M}^{T}_{\mathfrak{m},a}\) for some ultrafilter \(\mathcal{U}\) of \(\{\mathfrak{M}^{T}_{\mathfrak{m},a}\mid a\in E,\mathfrak{m}\in\Lambda\}\).
6. Every maximal ideal of \(\operatorname{Int}^{\operatorname{R}}(E,D)\) can be written in the form \(\lim\limits_{\mathcal{U}}\mathfrak{M}^{D}_{\mathfrak{n},a}\) for some ultrafilter \(\mathcal{U}\) of \(\{\mathfrak{M}^{D}_{\mathfrak{n},a}\mid a\in E,\mathfrak{m}\in\Lambda, \mathfrak{n}=\mathfrak{m}\cap D\}\).
7. Suppose that \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\) are ultrafilters of \(\{\mathfrak{M}^{T}_{\mathfrak{m},a}\mid a\in E,\mathfrak{m}\in\Lambda\}\) such that \(\lim\limits_{\mathcal{U}_{1}}\mathfrak{M}^{T}_{\mathfrak{m},a}\) and \(\lim\limits_{\mathcal{U}_{2}}\mathfrak{M}^{T}_{\mathfrak{m},a}\) are distinct maximal ideals of \(\operatorname{Int}^{\operatorname{R}}(E,T)\). Then \(\lim\limits_{\mathcal{U}_{1}}\mathfrak{M}^{T}_{\mathfrak{m},a}\cap \operatorname{Int}^{\operatorname{R}}(E,D)\neq\lim\limits_{\mathcal{U}_{2}} \mathfrak{M}^{T}_{\mathfrak{m},a}\cap\operatorname{Int}^{\operatorname{R}}(E,D)\).
8. The ring \(\operatorname{Int}^{\operatorname{R}}(E,D)\) is a GPVD with associated Prufer domain \(\operatorname{Int}^{\operatorname{R}}(E,T)\).
The first four items are to establish that there is a nonzero radical ideal, namely \(\operatorname{Int}^{\operatorname{R}}(E,J)\), common to \(\operatorname{Int}^{\operatorname{R}}(E,D)\) and \(\operatorname{Int}^{\operatorname{R}}(E,T)\) such that each prime ideal of \(\operatorname{Int}^{\operatorname{R}}(E,D)\) or \(\operatorname{Int}^{\operatorname{R}}(E,T)\) containing that common nonzero radical ideal is actually a maximal ideal of the ring to which it belongs. The next three statements show that \(\operatorname{Int}^{\operatorname{R}}(E,D)/\operatorname{Int}^{\operatorname{R}}(E,J)\subseteq\operatorname{Int}^{\operatorname{R}}(E,T)/\operatorname{Int}^{\operatorname{R}}(E,J)\) is a unibranched extension. Now we prove the claims.
1. Claim: Every maximal ideal of \(T\) is an ultrafilter limit of ideals in \(\Lambda\). Furthermore, \(t\) is in the Jacobson radical of \(T\). Since \(T\) is a Prufer domain, every nonzero ideal of \(T\) is a \(t\)-ideal. In particular, every maximal ideal of \(T\) is a \(t\)-ideal. Writing \(T\) as \(T=\bigcap\limits_{\mathfrak{m}\in\Lambda}T_{\mathfrak{m}}\), we see that every maximal ideal of \(T\) is an ultrafilter limit of ideals in \(\Lambda\) [10, Proposition 2.8]. Now let \(\mathfrak{a}\) be a maximal ideal of \(T\). We know that \(\mathfrak{a}\) is an ultrafilter limit of ideals in \(\Lambda\). Since \(t\in\mathfrak{m}\) for all \(\mathfrak{m}\in\Lambda\), we also have that \(t\in\mathfrak{a}\). Since \(t\) is in all of the maximal ideals of \(T\), we can conclude that \(t\) is in the Jacobson radical of \(T\).
2. Claim: The ideal \(\operatorname{Int}^{\mathrm{R}}(E,J)\) is a nonzero radical ideal of both \(\operatorname{Int}^{\mathrm{R}}(E,D)\) and \(\operatorname{Int}^{\mathrm{R}}(E,T)\). Since \(J\) is a nonzero ideal of both \(D\) and \(T\), we know that \(\operatorname{Int}^{\mathrm{R}}(E,J)\) is a nonzero ideal of both \(\operatorname{Int}^{\mathrm{R}}(E,D)\) and \(\operatorname{Int}^{\mathrm{R}}(E,T)\). Now suppose that \(\varphi\in\operatorname{Int}^{\mathrm{R}}(E,D)\) is such that there exists an \(m\in\mathbb{N}\) such that \(\varphi^{m}\in\operatorname{Int}^{\mathrm{R}}(E,J)\). Then for any \(a\in E\), we have \(\varphi(a)^{m}\in J\). Since \(\varphi(a)\in D\) and \(J\) is a radical ideal of \(D\), we have that \(\varphi(a)\in J\). Thus, \(\varphi\in\operatorname{Int}^{\mathrm{R}}(E,J)\). This shows that \(\sqrt{\operatorname{Int}^{\mathrm{R}}(E,J)}\subseteq\operatorname{Int}^{ \mathrm{R}}(E,J)\) and therefore \(\operatorname{Int}^{\mathrm{R}}(E,J)\) is a radical ideal of \(\operatorname{Int}^{\mathrm{R}}(E,D)\). The same argument shows that \(\operatorname{Int}^{\mathrm{R}}(E,J)\) is a radical ideal of \(\operatorname{Int}^{\mathrm{R}}(E,T)\).
3. Claim: A prime ideal of \(\operatorname{Int}^{\mathrm{R}}(E,T)\) containing \(\operatorname{Int}^{\mathrm{R}}(E,J)\) is a maximal ideal of \(\operatorname{Int}^{\mathrm{R}}(E,T)\). Take \(\mathfrak{P}\) to be a prime ideal of \(\operatorname{Int}^{\mathrm{R}}(E,T)\) containing \(\operatorname{Int}^{\mathrm{R}}(E,J)\). Then let \(\varphi\in\operatorname{Int}^{\mathrm{R}}(E,T)\setminus\mathfrak{P}\). Set \(\psi=\frac{\varphi^{n}}{t+\varphi^{2n}}\). We want to show that \(\psi\in\operatorname{Int}^{\mathrm{R}}(E,T)\). Take an element \(a\in E\). Then for each \(\mathfrak{m}\in\Lambda\), we have \[v_{\mathfrak{m}}(\psi(a))=\begin{cases}0,&\text{if }v_{\mathfrak{m}}(\varphi(a))=0,\\ nv_{\mathfrak{m}}(\varphi(a))-v_{\mathfrak{m}}(t)>0,&\text{if }v_{\mathfrak{m}}( \varphi(a))>0.\end{cases}\] Since \(T=\bigcap\limits_{\mathfrak{m}\in\Lambda}T_{\mathfrak{m}}\), we have that \(\psi(a)\in T\). This holds for all \(a\in E\), so \(\psi\in\operatorname{Int}^{\mathrm{R}}(E,T)\). Now we observe that \[\varphi^{n}(1-\varphi^{n}\psi)=\varphi^{n}\cdot\frac{t+\varphi^{2n}-\varphi^{ n}\varphi^{n}}{t+\varphi^{2n}}=t\psi.\] We have that \(J\subseteq\mathfrak{P}\cap T\), so \(\mathfrak{P}\cap T\) is a maximal ideal of \(T\). Thus, \(t\in\mathfrak{P}\cap T\subseteq\mathfrak{P}\), and therefore \(t\psi\in\mathfrak{P}\). We now have that \(\varphi^{n}(1-\varphi^{n}\psi)\in\mathfrak{P}\). Since \(\varphi\notin\mathfrak{P}\), we must have \(1-\varphi^{n}\psi\in\mathfrak{P}\). This means that \(\varphi\) has an inverse modulo \(\mathfrak{P}\). The previous statement holds for any \(\varphi\in\operatorname{Int}^{\mathrm{R}}(E,T)\setminus\mathfrak{P}\), implying that \(\mathfrak{P}\) is a maximal ideal in \(\operatorname{Int}^{\mathrm{R}}(E,T)\).
4. Claim: A prime ideal of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) containing \(\operatorname{Int}^{\mathrm{R}}(E,J)\) is a maximal ideal of \(\operatorname{Int}^{\mathrm{R}}(E,D)\). Take \(\mathfrak{P}\) to be a prime ideal of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) containing \(\operatorname{Int}^{\mathrm{R}}(E,J)\). Then let \(\varphi\in\operatorname{Int}^{\mathrm{R}}(E,D)\setminus\mathfrak{P}\). Set \(\psi=\frac{\varphi^{n}}{t+\varphi^{2n}}\). We want to show that \(\psi\in\operatorname{Int}^{\mathrm{R}}(E,D)\). Take an element \(a\in E\) and a maximal ideal \(\mathfrak{a}\) of \(D\). Let \(\mathfrak{a}^{\prime}\) be the unique maximal ideal of \(T\) that contracts to \(\mathfrak{a}\). We know that \(\mathfrak{a}^{\prime}\) is an ultrafilter limit of ideals in \(\Lambda\). For \(\mathfrak{m}\in\Lambda\), we have that \(\psi(a)\in\mathfrak{m}\) if and only if \(\varphi(a)\in\mathfrak{m}\), as calculated in the third claim. This implies that \(\psi(a)\in\mathfrak{a}^{\prime}\) if and only if \(\varphi(a)\in\mathfrak{a}^{\prime}\). If \(\varphi(a)\in\mathfrak{a}^{\prime}\), then \(\psi(a)\in\mathfrak{a}^{\prime}\subseteq\mathfrak{a}^{\prime}T_{\mathfrak{a}^{\prime}}=\mathfrak{a}D_{\mathfrak{a}}\subseteq D_{\mathfrak{a}}\). If \(\varphi(a)\notin\mathfrak{a}^{\prime}\), then we have \(\varphi(a)^{2n}\in D\setminus\mathfrak{a}^{\prime}\subseteq D_{\mathfrak{a}}^{\times}\) and combining with the fact that \(t\in\mathfrak{a}^{\prime}\subseteq\mathfrak{a}^{\prime}T_{\mathfrak{a}^{\prime}}=\mathfrak{a}D_{\mathfrak{a}}\) yields \(t+\varphi(a)^{2n}\in D_{\mathfrak{a}}^{\times}\). Using the fact that \(\varphi(a)^{n}\in D_{\mathfrak{a}}\) if \(\varphi(a)\notin\mathfrak{a}^{\prime}\), we get that \(\psi(a)\in D_{\mathfrak{a}}\). No matter if \(\varphi(a)\in\mathfrak{a}^{\prime}\) or \(\varphi(a)\notin\mathfrak{a}^{\prime}\), we get that \(\psi(a)\in D_{\mathfrak{a}}\). This holds for all \(a\in E\) and all maximal ideals \(\mathfrak{a}\) of \(D\). Thus, \(\psi\in\operatorname{Int}^{\mathrm{R}}(E,D)\). Now we observe that \[\varphi^{n}(1-\varphi^{n}\psi)=\varphi^{n}\cdot\frac{t+\varphi^{2n}-\varphi^{n}\varphi^{n}}{t+\varphi^{2n}}=t\psi.\] We claim that \(t\psi\in\mathfrak{P}\). Let \(\mathfrak{a}\) be a maximal ideal of \(D\). Let \(\mathfrak{a}^{\prime}\) be the unique maximal ideal of \(T\) that contracts to \(\mathfrak{a}\). We know \(t\in\mathfrak{a}D_{\mathfrak{a}}\) and thus \(t\in D_{\mathfrak{a}}\) for all maximal ideals \(\mathfrak{a}\) of \(D\). Thus, \(t\in D\). Since \(\mathfrak{P}\) contains \(\operatorname{Int}^{\mathrm{R}}(E,J)\), we have that \(J\subseteq\mathfrak{P}\cap D\), so \(\mathfrak{q}_{1}:=\mathfrak{P}\cap D\) is a maximal ideal of \(D\). Let \(\mathfrak{q}_{2}\) be the unique maximal ideal of \(T\) that contracts to \(\mathfrak{q}_{1}\). We know that \(t\in\mathfrak{q}_{2}\subseteq\mathfrak{q}_{2}T_{\mathfrak{q}_{2}}=\mathfrak{q}_{1}D_{\mathfrak{q}_{1}}\). Now we see that \(t\in\mathfrak{q}_{1}D_{\mathfrak{q}_{1}}\cap D=\mathfrak{q}_{1}\subseteq\mathfrak{P}\). Thus, \(t\psi\in\mathfrak{P}\).
We now have that \(\varphi^{n}(1-\varphi^{n}\psi)\in\mathfrak{P}\). Since \(\varphi\notin\mathfrak{P}\), we must have \(1-\varphi^{n}\psi\in\mathfrak{P}\). This means that \(\varphi\) has an inverse modulo \(\mathfrak{P}\) for all \(\varphi\in\mathrm{Int}^{\mathrm{R}}(E,D)\setminus\mathfrak{P}\), implying that \(\mathfrak{P}\) is a maximal ideal in \(\mathrm{Int}^{\mathrm{R}}(E,D)\).
5. Claim: Every maximal ideal of \(\mathrm{Int}^{\mathrm{R}}(E,T)\) is of the form \(\lim_{\mathcal{U}}\mathfrak{M}_{\mathfrak{m},a}^{T}\) for some ultrafilter \(\mathcal{U}\) of \(\{\mathfrak{M}_{\mathfrak{m},a}^{T}\mid a\in E,\mathfrak{m}\in\Lambda\}\). Since \(T\) is a singular Prufer domain, we know that \(\mathrm{Int}^{\mathrm{R}}(E,T)\) is a Prufer domain [1, Theorem 3.5]. Then we know every maximal ideal of \(\mathrm{Int}^{\mathrm{R}}(E,T)\) is a \(t\)-ideal. Additionally, we write \[\mathrm{Int}^{\mathrm{R}}(E,T)=\bigcap_{a\in E}\bigcap_{\mathfrak{m}\in \Lambda}\mathrm{Int}^{\mathrm{R}}(E,T)_{\mathfrak{M}_{\mathfrak{m},a}^{T}}\] and then Proposition 2.8 of [1] proves the claim.
6. Claim: Every maximal ideal of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) is of the form \(\lim_{\mathcal{U}}\mathfrak{M}_{\mathfrak{n},a}^{D}\) for some ultrafilter \(\mathcal{U}\) of \(\{\mathfrak{M}_{\mathfrak{n},a}^{D}\mid a\in E,\mathfrak{m}\in\Lambda,\mathfrak{n}=\mathfrak{m}\cap D\}\). We will take our characteristic sets with respect to \[\{\mathfrak{M}_{\mathfrak{n},a}^{D}\mid a\in E,\mathfrak{m}\in\Lambda,\mathfrak{n}=\mathfrak{m}\cap D\}.\] Let \(A\subseteq\mathrm{Int}^{\mathrm{R}}(E,D)\) be a proper ideal. We want to show that \(U:=\{\chi_{\varphi}\mid\varphi\in A\}\) is closed under finite intersections. Take \(\varphi_{1},\varphi_{2}\in A\). Set \[\theta(x)=\frac{t(1+x^{2n})}{(1+tx^{n})(t+x^{n})}.\] We claim that \(\theta\in\mathrm{Int}^{\mathrm{R}}(K,D)\). Take \(a\in K\) and let \(\mathfrak{p}\) be a maximal ideal of \(D\). Let \(\mathfrak{p}^{\prime}\) be the unique maximal ideal of \(T\) that contracts to \(\mathfrak{p}\). We know that \(\mathfrak{p}^{\prime}\) is an ultrafilter limit of ideals in \(\Lambda\) with respect to some ultrafilter \(\mathcal{U}\) of \(\Lambda\). Let \(\mathfrak{m}\in\Lambda\). We have that \(v_{\mathfrak{m}}(a)>0\) implies \(v_{\mathfrak{m}}\big{(}\frac{a^{n}}{t}\big{)}>0\), meaning \(a\in\mathfrak{m}\) implies \(\frac{a^{n}}{t}\in\mathfrak{m}\). Since \(a\in\mathfrak{p}^{\prime}\) implies that \(\{\mathfrak{m}\in\Lambda\mid a\in\mathfrak{m}\}\in\mathcal{U}\), we know that \(\{\mathfrak{m}\in\Lambda\mid\frac{a^{n}}{t}\in\mathfrak{m}\}\in\mathcal{U}\) as well. This means that \(a\in\mathfrak{p}^{\prime}\) implies \(\frac{a^{n}}{t}\in\mathfrak{p}^{\prime}\). Note that \(t\in\mathfrak{p}^{\prime}\) as well. Now we calculate. Denote by \(v\) a valuation associated to \(T_{\mathfrak{p}^{\prime}}\). If \(v(a)=0\), then \(v(\theta(a))=v(t)+v(1+a^{2n})-0-0>0\), so \(\theta(a)\in\mathfrak{p}^{\prime}T_{\mathfrak{p}^{\prime}}=\mathfrak{p}D_{\mathfrak{p}}\subseteq D_{\mathfrak{p}}\). If \(v(a)>0\), then \[\theta(a)=\frac{t(1+a^{2n})}{(1+ta^{n})(t+a^{n})}=\frac{1+a^{2n}}{(1+ta^{n})(1+\frac{a^{n}}{t})}.\] We have \(1+a^{2n}\equiv 1\pmod{\mathfrak{p}^{\prime}}\) and \((1+ta^{n})(1+\frac{a^{n}}{t})\equiv 1\cdot 1\equiv 1\pmod{\mathfrak{p}^{\prime}}\). This means that \(\theta(a)\in 1+\mathfrak{p}^{\prime}T_{\mathfrak{p}^{\prime}}\subseteq D_{\mathfrak{p}}\). Lastly, suppose that \(v(a)<0\). We calculate that \[\theta(a)=\frac{t(1+a^{2n})}{(1+ta^{n})(t+a^{n})}=\frac{\frac{1}{a^{2n}}+1}{(\frac{1}{ta^{n}}+1)(\frac{t}{a^{n}}+1)}.\] We see that \(\frac{1}{a^{2n}}+1\equiv 1\pmod{\mathfrak{p}^{\prime}}\) and also \((\frac{1}{ta^{n}}+1)(\frac{t}{a^{n}}+1)\equiv 1\cdot 1\equiv 1\pmod{\mathfrak{p}^{\prime}}\). Thus, \(\theta(a)\in 1+\mathfrak{p}^{\prime}T_{\mathfrak{p}^{\prime}}\subseteq D_{\mathfrak{p}}\). We now know that \(\theta(a)\in D_{\mathfrak{p}}\) for all \(a\in K\) and maximal ideals \(\mathfrak{p}\) of \(D\), so \(\theta\in\mathrm{Int}^{\mathrm{R}}(K,D)\). Now we consider \(\rho(x)=\varphi_{1}(x)+\theta\Big{(}\frac{\varphi_{1}(x)}{\varphi_{2}(x)}\Big{)}\varphi_{2}(x)\). Since \(\theta\Big{(}\frac{\varphi_{1}(x)}{\varphi_{2}(x)}\Big{)}\in\mathrm{Int}^{\mathrm{R}}(K,D)\), which is contained in \(\mathrm{Int}^{\mathrm{R}}(E,D)\), we have that \(\rho(x)\in(\varphi_{1},\varphi_{2})\subseteq A\).
Let \(a\in E\). Now let \(\mathfrak{m}\in\Lambda\). Suppose that \(v_{\mathfrak{m}}(\varphi_{1}(a))=v_{\mathfrak{m}}(\varphi_{2}(a))\). Then \(v_{\mathfrak{m}}(\rho(a))=v_{\mathfrak{m}}(\varphi_{1}(a))\). If \(v_{\mathfrak{m}}(\varphi_{1}(a))<v_{\mathfrak{m}}(\varphi_{2}(a))\), we have \(v_{\mathfrak{m}}(\rho(a))=v_{\mathfrak{m}}(\varphi_{1}(a))\). If \(v_{\mathfrak{m}}(\varphi_{1}(a))>v_{\mathfrak{m}}(\varphi_{2}(a))\), we have \(v_{\mathfrak{m}}(\rho(a))=v_{\mathfrak{m}}(\varphi_{2}(a))\). In summary, \(v_{\mathfrak{m}}(\rho(a))=\min\{v_{\mathfrak{m}}(\varphi_{1}(a)),v_{\mathfrak{m}}(\varphi_{2}(a))\}\). This implies that \(\chi_{\varphi_{1}}\cap\chi_{\varphi_{2}}=\chi_{\rho}\). Thus, \(U\) is closed under finite intersections. Since \(A\) is a proper ideal, \(U\) does not contain the empty set. We have just shown that \(U\) is closed under finite intersections, so we can deduce that \(U\) has the finite intersection property. We can then extend \(U\) to \(\mathcal{U}\), an ultrafilter of \(\{\mathfrak{M}^{D}_{\mathfrak{n},a}\mid a\in E,\mathfrak{m}\in\Lambda,\mathfrak{n}=\mathfrak{m}\cap D\}\). Then we see that \(A\subseteq\lim_{\mathcal{U}}\mathfrak{M}^{D}_{\mathfrak{n},a}\). Thus, all maximal ideals of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) are of the form \(\lim_{\mathcal{U}}\mathfrak{M}^{D}_{\mathfrak{n},a}\) for some ultrafilter \(\mathcal{U}\) of \(\{\mathfrak{M}^{D}_{\mathfrak{n},a}\mid a\in E,\mathfrak{m}\in\Lambda,\mathfrak{n}=\mathfrak{m}\cap D\}\).
7. Claim: Suppose that \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\) are ultrafilters of \(\{\mathfrak{M}^{T}_{\mathfrak{m},a}\mid a\in E,\mathfrak{m}\in\Lambda\}\) such that \(\lim\limits_{\mathcal{U}_{1}}\mathfrak{M}^{T}_{\mathfrak{m},a}\) and \(\lim\limits_{\mathcal{U}_{2}}\mathfrak{M}^{T}_{\mathfrak{m},a}\) are distinct maximal ideals of \(\operatorname{Int}^{\mathrm{R}}(E,T)\). Then \(\lim\limits_{\mathcal{U}_{1}}\mathfrak{M}^{T}_{\mathfrak{m},a}\cap\operatorname{Int}^{\mathrm{R}}(E,D)\neq\lim\limits_{\mathcal{U}_{2}}\mathfrak{M}^{T}_{\mathfrak{m},a}\cap\operatorname{Int}^{\mathrm{R}}(E,D)\). Let \(\varphi\in\lim_{\mathcal{U}_{2}}\mathfrak{M}^{T}_{\mathfrak{m},a}\setminus\lim_{\mathcal{U}_{1}}\mathfrak{M}^{T}_{\mathfrak{m},a}\). Consider \[\psi(x)=\frac{\varphi(x)}{\varphi(x)+\theta(\varphi(x))},\] where \(\theta(x)=\frac{t(1+x^{2n})}{(1+tx^{n})(t+x^{n})}\) from Claim 6. Take a maximal ideal \(\mathfrak{p}\) of \(D\). We know that \(\mathfrak{p}\) is the contraction of some maximal ideal \(\mathfrak{p}^{\prime}\) of \(T\). Let \(v\) denote a valuation corresponding to \(T_{\mathfrak{p}^{\prime}}\). Now take \(a\in E\). If \(v(\varphi(a))>0\), then \(v(\psi(a))=v(\varphi(a))-v(\theta(\varphi(a)))=v(\varphi(a))>0\), so \(\psi(a)\in D_{\mathfrak{p}}\). If \(v(\varphi(a))=0\), then \(v(\theta(\varphi(a)))>0\) so \(\varphi(a)+\theta(\varphi(a))\in\varphi(a)+\mathfrak{p}^{\prime}T_{\mathfrak{p}^{\prime}}\subseteq D^{\times}_{\mathfrak{p}}\). This shows that \(\psi(a)\in D_{\mathfrak{p}}\). Thus, \(\psi(a)\in D\) and therefore \(\psi\in\operatorname{Int}^{\mathrm{R}}(E,D)\). From this, we also see that \(\varphi(a)\in\mathfrak{p}^{\prime}\) if and only if \(\psi(a)\in\mathfrak{p}\). Thus, \(\psi\) is in \(\lim_{\mathcal{U}_{2}}\mathfrak{M}^{T}_{\mathfrak{m},a}\cap\operatorname{Int}^{\mathrm{R}}(E,D)\) but not in \(\lim_{\mathcal{U}_{1}}\mathfrak{M}^{T}_{\mathfrak{m},a}\cap\operatorname{Int}^{\mathrm{R}}(E,D)\).
8. Claim: The ring \(\operatorname{Int}^{\mathrm{R}}(E,D)\) is a GPVD with associated Prufer domain \(\operatorname{Int}^{\mathrm{R}}(E,T)\). We know that the ideal \(\operatorname{Int}^{\mathrm{R}}(E,J)\) is a nonzero radical ideal common to both \(\operatorname{Int}^{\mathrm{R}}(E,D)\) and \(\operatorname{Int}^{\mathrm{R}}(E,T)\) from Claim 2. Additionally, any prime ideal of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) containing \(\operatorname{Int}^{\mathrm{R}}(E,J)\) is maximal in \(\operatorname{Int}^{\mathrm{R}}(E,D)\) and any prime ideal of \(\operatorname{Int}^{\mathrm{R}}(E,T)\) containing \(\operatorname{Int}^{\mathrm{R}}(E,J)\) is maximal in \(\operatorname{Int}^{\mathrm{R}}(E,T)\) by Claims 3 and 4. Thus, \[\dim(\operatorname{Int}^{\mathrm{R}}(E,D)/\operatorname{Int}^{\mathrm{R}}(E,J))=\dim(\operatorname{Int}^{\mathrm{R}}(E,T)/\operatorname{Int}^{\mathrm{R}}(E,J))=0.\] Next, we need to show that the extension \[\operatorname{Int}^{\mathrm{R}}(E,D)/\operatorname{Int}^{\mathrm{R}}(E,J)\subseteq\operatorname{Int}^{\mathrm{R}}(E,T)/\operatorname{Int}^{\mathrm{R}}(E,J)\] is unibranched. Every maximal ideal of \(\operatorname{Int}^{\mathrm{R}}(E,T)\) containing \(\operatorname{Int}^{\mathrm{R}}(E,J)\) is of the form \(\lim_{\mathcal{U}}\mathfrak{M}^{T}_{\mathfrak{m},a}\) for some ultrafilter \(\mathcal{U}\) of \(\{\mathfrak{M}^{T}_{\mathfrak{m},a}\mid a\in E,\mathfrak{m}\in\Lambda\}\), and two distinct ideals of this form contract to two distinct ideals of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) containing \(\operatorname{Int}^{\mathrm{R}}(E,J)\) due to Claim 7. Now we take a maximal ideal of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) containing \(\operatorname{Int}^{\mathrm{R}}(E,J)\). From Claim 6, we know that this maximal ideal has the form \(\lim_{\mathcal{U}}\mathfrak{M}^{D}_{\mathfrak{n},a}\) for some ultrafilter \(\mathcal{U}\) of \(\{\mathfrak{M}^{D}_{\mathfrak{n},a}\mid a\in E,\mathfrak{m}\in\Lambda,\mathfrak{n}=\mathfrak{m}\cap D\}\). We construct the ultrafilter
\[\mathcal{U}^{\prime}\coloneqq\{\{\mathfrak{M}^{T}_{\mathfrak{m},a}\mid\mathfrak{M}^{D}_{\mathfrak{n},a}\in S,\mathfrak{m}\cap D=\mathfrak{n}\}\mid S\in\mathcal{U}\}\]
of \(\{\mathfrak{M}^{T}_{\mathfrak{m},a}\mid a\in E,\mathfrak{m}\in\Lambda\}\). We claim that \(\lim_{\mathcal{U}}\mathfrak{M}^{D}_{\mathfrak{n},a}=\lim_{\mathcal{U}^{ \prime}}\mathfrak{M}^{T}_{\mathfrak{m},a}\cap\mathrm{Int}^{\mathrm{R}}(E,D)\). Let \(\varphi\in\lim_{\mathcal{U}}\mathfrak{M}^{D}_{\mathfrak{n},a}\). Then \(\{\mathfrak{M}^{D}_{\mathfrak{n},a}\mid\varphi(a)\in\mathfrak{n},a\in E, \mathfrak{m}\in\Lambda,\mathfrak{n}=\mathfrak{m}\cap D\}\in\mathcal{U}\). Since \(\mathfrak{m}\cap D\subseteq\mathfrak{m}\), we have that \(\{\mathfrak{M}^{T}_{\mathfrak{m},a}\mid\varphi(a)\in\mathfrak{m},a\in E, \mathfrak{m}\in\Lambda\}\in\mathcal{U}^{\prime}\). Thus, \(\varphi\in\lim_{\mathcal{U}^{\prime}}\mathfrak{M}^{T}_{\mathfrak{m},a}\cap \mathrm{Int}^{\mathrm{R}}(E,D)\). To show the reverse inclusion, we now suppose that \(\varphi\in\lim_{\mathcal{U}^{\prime}}\mathfrak{M}^{T}_{\mathfrak{m},a}\cap \mathrm{Int}^{\mathrm{R}}(E,D)\). Then \(\{\mathfrak{M}^{T}_{\mathfrak{m},a}\mid\varphi(a)\in\mathfrak{m},a\in E, \mathfrak{m}\in\Lambda\}\in\mathcal{U}^{\prime}\). Since \(\varphi\) is in \(\mathrm{Int}^{\mathrm{R}}(E,D)\) as well, we know for any \(a\in E\) and \(\mathfrak{m}\in\Lambda\) that \(\varphi(a)\in\mathfrak{m}\) implies that \(\varphi(a)\in\mathfrak{m}\cap D\). Thus, \(\{\mathfrak{M}^{D}_{\mathfrak{n},a}\mid\varphi(a)\in\mathfrak{n},a\in E, \mathfrak{m}\in\Lambda,\mathfrak{n}=\mathfrak{m}\cap D\}\) is in \(\mathcal{U}\) and therefore \(\varphi\in\lim_{\mathcal{U}}\mathfrak{M}^{D}_{\mathfrak{n},a}\). This implies the contraction map of the prime spectra of the extension \(\mathrm{Int}^{\mathrm{R}}(E,D)/\,\mathrm{Int}^{\mathrm{R}}(E,J)\subseteq \mathrm{Int}^{\mathrm{R}}(E,T)/\,\mathrm{Int}^{\mathrm{R}}(E,J)\) is surjective. Thus, \(\mathrm{Int}^{\mathrm{R}}(E,D)/\,\mathrm{Int}^{\mathrm{R}}(E,J)\subseteq \mathrm{Int}^{\mathrm{R}}(E,T)/\,\mathrm{Int}^{\mathrm{R}}(E,J)\) is a unibranched extension. This shows that \(\mathrm{Int}^{\mathrm{R}}(E,D)\) is a GPVD with associated Prufer domain \(\mathrm{Int}^{\mathrm{R}}(E,T)\).
## 3 PVDs with non-singular associated valuation domains
Now that we have seen that rings of integer-valued rational functions over pseudosingular GPVDs are GPVDs, we investigate rings of integer-valued rational functions over a base ring that is a GPVD but not a pseudosingular GPVD. We restrict our focus to the base ring being a PVD. A PVD being not pseudosingular means that its associated valuation overring is not singular, which means that the maximal ideal of the associated valuation overring is not principal. Thus, we consider the case where the base ring is a PVD whose associated valuation overring does not have a principal maximal ideal.
For a PVD, we can make use of the valuation associated with the associated valuation overring. This allows us to utilize the tool of the minimal valuation function. Here, we view a value group \(\Gamma\) as being embedded in its divisible closure \(\mathbb{Q}\Gamma\coloneqq\Gamma\otimes_{\mathbb{Z}}\mathbb{Q}\).
**Definition 3.1**.: [10] Let \(V\) be a valuation domain with value group \(\Gamma\), valuation \(v\), and field of fractions \(K\). Take a nonzero polynomial \(f\in K[x]\) and write it as \(f(x)=a_{n}x^{n}+\cdots+a_{1}x+a_{0}\) for \(a_{0},a_{1},\ldots,a_{n}\in K\). We define the **minimum valuation function of \(f\)** as \(\mathrm{minval}_{f}:\Gamma\to\Gamma\) by
\[\gamma\mapsto\min\{v(a_{0}),v(a_{1})+\gamma,v(a_{2})+2\gamma,\ldots,v(a_{n})+ n\gamma\}\]
for each \(\gamma\in\Gamma\). We may also think of \(\mathrm{minval}_{f}\) as a function from \(\mathbb{Q}\Gamma\) to \(\mathbb{Q}\Gamma\) defined as \(\gamma\mapsto\min\{v(a_{0}),v(a_{1})+\gamma,v(a_{2})+2\gamma,\ldots,v(a_{n})+ n\gamma\}\) for each \(\gamma\in\mathbb{Q}\Gamma\).
For a nonzero rational function \(\varphi\in K(x)\), we write \(\varphi=\frac{f}{g}\) for some \(f,g\in K[x]\). Then for each \(\gamma\in\Gamma\), we define \(\mathrm{minval}_{\varphi}(\gamma)=\mathrm{minval}_{f}(\gamma)-\mathrm{minval}_{g}(\gamma)\).
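To illustrate the definition, consider a small example of our own (not taken from [10]): let \(V=\mathbb{Z}_{(p)}\) with the \(p\)-adic valuation \(v\), so that \(\Gamma=\mathbb{Z}\), and let \(f(x)=p+x+p^{3}x^{2}\). Then \[\operatorname{minval}_{f}(\gamma)=\min\{1,\gamma,3+2\gamma\}=\begin{cases}3+2\gamma,&\gamma\leq-3,\\ \gamma,&-3\leq\gamma\leq 1,\\ 1,&\gamma\geq 1,\end{cases}\] which is piecewise linear in the sense of Lemma 3.2 below. Moreover, for every \(\gamma\notin\{-3,1\}\) exactly one term attains the minimum, so no cancellation can occur and \(v(f(t))=\operatorname{minval}_{f}(\gamma)\) for all \(t\in K\) with \(v(t)=\gamma\).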
The purpose of the minimum valuation function of a rational function is that it can predict the valuation of the outputs of the rational function most of the time. The minimum valuation function also has the nice property of being piecewise linear. The following lemma showcases these facts.
**Lemma 3.2**.: _[_10_, Proposition 2.24, Lemma 2.26, Proposition 2.27]_ _Let \(V\) be a valuation domain with value group \(\Gamma\), valuation \(v\), maximal ideal \(\mathfrak{m}\), and field of fractions \(K\). For a nonzero \(\varphi\in K(x)\),_
the function \(\operatorname{minval}_{\varphi}\) has the following form evaluated at \(\gamma\in\mathbb{Q}\Gamma\)_
\[\operatorname{minval}_{\varphi}(\gamma)=\begin{cases}c_{1}\gamma+\beta_{1},& \gamma\leq\delta_{1},\\ c_{2}\gamma+\beta_{2},&\delta_{1}\leq\gamma\leq\delta_{2},\\ \vdots&\\ c_{k-1}\gamma+\beta_{k-1},&\delta_{k-2}\leq\gamma\leq\delta_{k-1},\\ c_{k}\gamma+\beta_{k},&\delta_{k-1}\leq\gamma,\end{cases}\]
_where \(c_{1},\ldots,c_{k}\in\mathbb{Z}\); \(\beta_{1},\ldots,\beta_{k}\in\Gamma\); and \(\delta_{1},\ldots,\delta_{k-1}\in\mathbb{Q}\Gamma\) such that \(\delta_{1}<\cdots<\delta_{k-1}\)._
_Furthermore, for all but finitely many \(\gamma\in\Gamma\), we have that \(v(\varphi(t))=\operatorname{minval}_{\varphi}(v(t))\) for all \(t\in K\) such that \(v(t)=\gamma\). If the residue field of \(V\) is infinite, then for any \(\varphi_{1},\ldots,\varphi_{n}\in K(x)\) and any \(\gamma\in\Gamma\), there exists \(a\in K\) with \(v(a)=\gamma\) such that \(\operatorname{minval}_{\varphi_{i}}(\gamma)=v(\varphi_{i}(a))\) for all \(i\)._
The following result of Dobbs and Fontana shows how the common radical ideal helps describe what the localizations of a GPVD at maximal ideals look like.
**Proposition 3.3**.: _[_25_, p. 156]_ _Let \(D\) be a GPVD and \(T\) the Prufer domain associated to \(D\). Then by the equivalent definition of GPVD, there exists a common radical ideal \(J\) such that \(D/J\subseteq T/J\) is a unibranched extension of Krull dimension 0 rings. Let \(\mathfrak{n}\) be a maximal ideal of \(D\) and \(\mathfrak{m}\) be the maximal ideal of \(T\) contracting to \(\mathfrak{n}\). Then_
* _if_ \(J\not\subseteq\mathfrak{m}\)_, then_ \(D_{\mathfrak{n}}=T_{\mathfrak{m}}\) _is a valuation domain, and_
* _if_ \(J\subseteq\mathfrak{m}\)_, then_ \(D_{\mathfrak{n}}\) _is a PVD with associated valuation domain_ \(T_{\mathfrak{m}}\)_._
Even when \(D\) is a PVD that is not a valuation domain, we can define the prime ideal \(\mathfrak{M}^{*}\) of \(\operatorname{Int}^{\mathbb{R}}(D)\) when the associated valuation domain has infinite residue field or maximal ideal that is not principal. This is defined analogously to the \(\mathfrak{M}^{*}\) prime ideal defined for \(\operatorname{Int}^{\mathbb{R}}(V)\), where \(V\) is a valuation domain with infinite residue field or maximal ideal that is not principal in [10]. We define
\[\mathfrak{M}^{*}\coloneqq\{\varphi\in\operatorname{Int}^{\mathbb{R}}(D)\mid \operatorname{minval}_{\varphi}(0)>0\},\]
where minval is defined using the valuation associated with the associated valuation domain. We check that this is indeed a prime ideal of \(\operatorname{Int}^{\mathbb{R}}(D)\). We will eventually need the following result only in the case where the maximal ideal of the associated valuation domain is not principal.
**Lemma 3.4**.: _Let \(D\) be a PVD. Suppose \(D\) has infinite residue field or the maximal ideal of the associated valuation domain is not principal. Then \(\mathfrak{M}^{*}\) is a prime ideal of \(\operatorname{Int}^{R}(D)\)._
Proof.: We will let \(V\) denote the valuation domain associated to \(D\). Also let \(v\) be an associated valuation, \(\mathfrak{m}\) be the maximal ideal of \(V\), and \(\Gamma\) the value group.
Let \(\varphi,\psi\in\mathfrak{M}^{*}\). We want to show \(\varphi+\psi\in\mathfrak{M}^{*}\). We consider the case where \(D\) has infinite residue field and the case where \(\mathfrak{m}\) is not principal in \(V\) separately.
Suppose that \(D\) has infinite residue field. Then there exists some \(u\in D\) with \(v(u)=0\) such that \(v(\varphi(u))=\operatorname{minval}_{\varphi}(0)\), \(v(\psi(u))=\operatorname{minval}_{\psi}(0)\), and \(v((\varphi+\psi)(u))=\operatorname{minval}_{\varphi+\psi}(0)\) by Lemma 3.2. Then we have
\[\operatorname{minval}_{\varphi+\psi}(0)=v((\varphi+\psi)(u)) \geq\min\{v(\varphi(u)),v(\psi(u))\}\] \[=\min\{\operatorname{minval}_{\varphi}(0),\operatorname{minval}_{ \psi}(0)\}\] \[>0.\]
Therefore, \(\varphi+\psi\in\mathfrak{M}^{*}\).
If \(\mathfrak{m}\) is not principal in \(V\), then there exists \(\varepsilon\in\Gamma\) with \(\varepsilon>0\) such that for all \(d\in D\) with \(0<v(d)<\varepsilon\), we have \(\operatorname{minval}_{\varphi}(v(d))=v(\varphi(d))\), \(\operatorname{minval}_{\psi}(v(d))=v(\psi(d))\), and \(\operatorname{minval}_{\varphi+\psi}(v(d))=v((\varphi+\psi)(d))\) by Lemma 3.2. This implies that
\[\operatorname{minval}_{\varphi+\psi}(\gamma)\geq\min\{\operatorname{minval}_{ \varphi}(\gamma),\operatorname{minval}_{\psi}(\gamma)\},\]
for all \(\gamma\in\Gamma\) such that \(0<\gamma<\varepsilon\). Thus, the above inequality also holds for \(\gamma=0\) by Lemma 3.2, which leads to \(\operatorname{minval}_{\varphi+\psi}(0)\geq\min\{\operatorname{minval}_{ \varphi}(0),\operatorname{minval}_{\psi}(0)\}>0\). Thus, \(\varphi+\psi\in\mathfrak{M}^{*}\).
Now let \(\rho\in\operatorname{Int}^{\mathrm{R}}(D)\) and \(\varphi\in\mathfrak{M}^{*}\). We can use similar techniques as above to show that \(\operatorname{minval}_{\rho}(0)\geq 0\). Then \(\operatorname{minval}_{\rho\varphi}(0)=\operatorname{minval}_{\rho}(0)+ \operatorname{minval}_{\varphi}(0)>0\). This shows that \(\rho\varphi\in\mathfrak{M}^{*}\), so \(\mathfrak{M}^{*}\) is an ideal of \(\operatorname{Int}^{\mathrm{R}}(D)\).
Now suppose that \(\rho,\rho^{\prime}\in\operatorname{Int}^{\mathrm{R}}(D)\) such that \(\rho\rho^{\prime}\in\mathfrak{M}^{*}\). We see that
\[\operatorname{minval}_{\rho\rho^{\prime}}(0)=\operatorname{minval}_{\rho}(0)+ \operatorname{minval}_{\rho^{\prime}}(0)>0,\]
which implies that \(\operatorname{minval}_{\rho}(0)>0\) or \(\operatorname{minval}_{\rho^{\prime}}(0)>0\) since \(\operatorname{minval}_{\rho}(0)\geq 0\) and \(\operatorname{minval}_{\rho^{\prime}}(0)\geq 0\). Thus, \(\mathfrak{M}^{*}\) is a prime ideal of \(\operatorname{Int}^{\mathrm{R}}(D)\).
Now we focus on the case where the maximal ideal of the associated valuation domain is not principal. The following lemma is then analogous to Theorem 6.6 of [10].
**Lemma 3.5**.: _Let \(D\) be a PVD with associated valuation domain \(V\) whose maximal ideal is not principal. Then the prime ideal \(\mathfrak{M}^{*}\) is not maximal._
Proof.: Denote by \(v\) a valuation associated with \(V\) and \(\Gamma\) its value group. Let \(\mathcal{U}\) be a non-principal ultrafilter of \(\{\mathfrak{M}_{\mathfrak{m},a}\mid a\in D\}\) containing the family of sets of ideals \(\{\{\mathfrak{M}_{\mathfrak{m},a}\mid a\in\mathfrak{m},v(a)<\varepsilon\} \mid\varepsilon\in\Gamma,\varepsilon>0\}.\) We claim that \(\mathfrak{M}^{*}\subsetneq\lim\limits_{\mathcal{U}}\mathfrak{M}_{\mathfrak{m},a}\).
Let \(\varphi\in\mathfrak{M}^{*}\). Then there exist \(\varepsilon\in\Gamma\) with \(\varepsilon>0\), \(c\in\mathbb{Z}\), and \(\beta\in\Gamma\) such that \(\operatorname{minval}_{\varphi}(\gamma)=c\gamma+\beta\) for all \(\gamma\) with \(0<\gamma<\varepsilon\) and for all \(d\in D\) such that \(0<v(d)<\varepsilon\), we have
\[v(\varphi(d))=\operatorname{minval}_{\varphi}(v(d)).\]
We can make \(\varepsilon\) small enough so that \(c\gamma+\beta>0\) for all \(\gamma\) such that \(0<\gamma<\varepsilon\) since \(\operatorname{minval}_{\varphi}(0)=\beta>0\). This shows that \(\varphi\in\lim\limits_{\mathcal{U}}\mathfrak{M}_{\mathfrak{m},a}\).
The containment \(\mathfrak{M}^{*}\subseteq\lim\limits_{\mathcal{U}}\mathfrak{M}_{\mathfrak{m},a}\) is strict since \(x\in\lim\limits_{\mathcal{U}}\mathfrak{M}_{\mathfrak{m},a}\setminus\mathfrak{ M}^{*}\).
Now we show that if \(D\) is a PVD that is neither a valuation domain nor pseudosingular, then \(\operatorname{Int}^{\mathrm{R}}(D)\) is not a GPVD.
**Proposition 3.6**.: _Let \(D\) be a PVD with associated valuation domain \(V\) whose maximal ideal is not principal and \(D\neq V\). Then the domain \(\operatorname{Int}^{R}(D)\) is not a GPVD._
Proof.: Let \(\mathfrak{m}\) denote the maximal ideal of \(D\). Also let \(a\in D\). We first show that \(\operatorname{Int}^{\mathrm{R}}(D)_{\mathfrak{M}_{\mathfrak{m},a}}\) is not a valuation domain. We have that
\[\operatorname{Int}^{\mathrm{R}}(D)_{\mathfrak{M}_{\mathfrak{m},a}}\subseteq \{\varphi\in K(x)\mid\varphi(a)\in D\}.\]
Intersecting with \(K\) yields that \(\operatorname{Int}^{\mathrm{R}}(D)_{\mathfrak{M}_{\mathfrak{m},a}}\cap K\subseteq D\). Take \(d\in V\setminus D\). Then \(d,d^{-1}\in V\setminus D\), so neither \(d\) nor \(d^{-1}\) is in \(\operatorname{Int}^{\mathrm{R}}(D)_{\mathfrak{M}_{\mathfrak{m},a}}\). Thus, \(\operatorname{Int}^{\mathrm{R}}(D)_{\mathfrak{M}_{\mathfrak{m},a}}\) is not a valuation domain.
Now suppose for a contradiction that \(\operatorname{Int}^{\mathrm{R}}(D)\) is a GPVD with associated Prufer domain \(T\) and common radical ideal \(J\) such that \(\operatorname{Int}^{\mathrm{R}}(D)/J\subseteq T/J\) is a unibranched extension of Krull dimension \(0\) rings. By Proposition 3.3, we see that \(J\subseteq\mathfrak{M}_{\mathfrak{m},a}\) for all \(a\in D\), since \(\operatorname{Int}^{\mathrm{R}}(D)_{\mathfrak{M}_{\mathfrak{m},a}}\) is not a valuation domain. Since \(J\subseteq\mathfrak{M}_{\mathfrak{m},a}\) for all \(a\in D\), we have \(J\subseteq\operatorname{Int}^{\mathrm{R}}(D,\mathfrak{m})\). Together with \(\operatorname{Int}^{\mathrm{R}}(D,\mathfrak{m})\subseteq\mathfrak{M}^{*}\), we get \(J\subseteq\mathfrak{M}^{*}\), but by Lemma 3.5, we have a prime ideal of \(\operatorname{Int}^{\mathrm{R}}(D)\) containing \(J\) that is not maximal, contradicting the assumption that \(\operatorname{Int}^{\mathrm{R}}(D)\) is a GPVD.
The previous proposition required the PVD to be not a valuation domain. If the base ring \(V\) is a valuation domain, we know exactly when \(\operatorname{Int}^{\mathrm{R}}(V)\) is a Prufer domain.
**Theorem 3.7**.: _[_15_, Corollary 2.30]_ _Let \(V\) be a valuation domain with maximal ideal \(\mathfrak{m}\). Then \(\operatorname{Int}^{\mathrm{R}}(V)\) is a Prufer domain if and only if \(V/\mathfrak{m}\) is not algebraically closed or \(\mathfrak{m}\) is a principal ideal of \(V\)._
Since a Prufer domain is a GPVD, we know that if a valuation domain \(V\) has a principal maximal ideal or a residue field that is not algebraically closed, then \(\operatorname{Int}^{\mathrm{R}}(V)\) is a GPVD. This means that even if a valuation domain \(V\) is not (pseudo)singular, \(\operatorname{Int}^{\mathrm{R}}(V)\) is still a GPVD as long as the residue field of \(V\) is not algebraically closed.
We also know that if \(V\) is a valuation domain with algebraically closed residue field and maximal ideal that is not principal, then \(\operatorname{Int}^{\mathrm{R}}(V)\) is not a Prufer domain. We show that if, in addition, the value group associated with \(V\) is not divisible, then \(\operatorname{Int}^{\mathrm{R}}(V)\) is not even a GPVD. We prove this by first showing that an essential domain that is not a Prufer domain cannot be a GPVD.
**Proposition 3.8**.: _Let \(D\) be a domain that is not a Prufer domain with a family of essential maximal ideals \(\{\mathfrak{m}_{\lambda}\}_{\lambda\in\Lambda}\) such that \(D=\bigcap\limits_{\lambda\in\Lambda}D_{\mathfrak{m}_{\lambda}}\). Then \(D\) is also not a GPVD._
Proof.: Suppose on the contrary that \(D\) is a GPVD. Then \(D\) has an associated Prufer overring \(T\). Now for all \(\lambda\in\Lambda\), let \(\mathfrak{n}_{\lambda}\) be the unique maximal ideal of \(T\) that contracts to \(\mathfrak{m}_{\lambda}\). Then by Proposition 3.3, we deduce that \(D_{\mathfrak{m}_{\lambda}}=T_{\mathfrak{n}_{\lambda}}\) because \(D_{\mathfrak{m}_{\lambda}}\) is a valuation domain. Thus,
\[T\subseteq\bigcap\limits_{\lambda\in\Lambda}T_{\mathfrak{n}_{\lambda}}= \bigcap\limits_{\lambda\in\Lambda}D_{\mathfrak{m}_{\lambda}}=D,\]
showing that \(T=D\). However, \(T\) is a Prufer domain and we assumed that \(D\) is not, which is a contradiction. Hence \(D\) cannot be a GPVD.
**Corollary 3.9**.: _Let \(D\) be a domain, \(K\) its field of fractions, and \(E\) a subset of \(K\). Assume that \(\operatorname{Int}^{\mathrm{R}}(E,D)\) is not Prufer, but there is a family of maximal ideals \(\{\mathfrak{m}_{\lambda}\}_{\lambda\in\Lambda}\) of \(D\) such that \(D=\bigcap\limits_{\lambda\in\Lambda}D_{\mathfrak{m}_{\lambda}}\) and each ideal in \(\{\mathfrak{M}_{\mathfrak{m}_{\lambda},a}\mid\lambda\in\Lambda,a\in E\}\) is essential. Then \(\operatorname{Int}^{\mathrm{R}}(E,D)\) is not a GPVD._
Proof.: Because \(\operatorname{Int}^{\mathrm{R}}(E,D)_{\mathfrak{M}_{\mathfrak{m}_{\lambda},a}}\subseteq\{\varphi\in K(x)\mid\varphi(a)\in D_{\mathfrak{m}_{\lambda}}\}\) for every \(\lambda\in\Lambda\) and \(a\in E\), we know that \(\operatorname{Int}^{\mathrm{R}}(E,D)=\bigcap\limits_{\lambda\in\Lambda}\bigcap\limits_{a\in E}\operatorname{Int}^{\mathrm{R}}(E,D)_{\mathfrak{M}_{\mathfrak{m}_{\lambda},a}}\). Therefore, applying Proposition 3.8 to the family of essential maximal ideals \(\{\mathfrak{M}_{\mathfrak{m}_{\lambda},a}\mid\lambda\in\Lambda,a\in E\}\), we conclude that \(\operatorname{Int}^{\mathrm{R}}(E,D)\) is not a GPVD.
In order to use the previous corollary, we have to show that the maximal pointed ideals are essential.
**Proposition 3.10**.: _[_15_, Proposition 2.35]_ _Let \(V\) be a valuation domain whose value group is not divisible. Let \(\mathfrak{m}\) be the maximal ideal of \(V\), \(E\) be a subset of \(K\), the field of fractions of \(V\), and take \(a\in E\). Then_
\[\operatorname{Int}^{R}(E,V)_{\mathfrak{M}_{\mathfrak{m},a}}=\{\varphi\in K(x )\mid\varphi(a)\in V\},\]
_a valuation domain._
**Corollary 3.11**.: _Suppose that \(V\) is a valuation domain such that the maximal ideal is not principal, the residue field is algebraically closed, and the value group is not divisible. Then \(\operatorname{Int}^{R}(V)\) is not a GPVD._
Proof.: We know that \(\operatorname{Int}^{\mathrm{R}}(V)\) is not a Prufer domain due to Theorem 3.7. The maximal pointed ideals of \(\operatorname{Int}^{\mathrm{R}}(V)\) are essential due to Proposition 3.10. Together, this allows us to apply Corollary 3.9 to show that \(\operatorname{Int}^{\mathrm{R}}(V)\) is not a GPVD.
Let \(D\) be a PVD with associated valuation domain \(V\). First suppose that \(D\neq V\). Then \(\operatorname{Int}^{\mathrm{R}}(D)\) is a GPVD if and only if the maximal ideal of \(V\) is principal. If \(\operatorname{Int}^{\mathrm{R}}(D)\) is a GPVD, then the associated Prufer domain is \(\operatorname{Int}^{\mathrm{R}}(D,V)\). Interestingly, if \(D\neq V\), the maximal ideal of \(V\) is not principal, and \(V\) has a residue field that is not algebraically closed, then \(\operatorname{Int}^{\mathrm{R}}(D,V)\) is a Prufer domain [15, Theorem 3.2], but \(\operatorname{Int}^{\mathrm{R}}(D)\) is not a GPVD.
Now suppose the base ring is a valuation domain \(V\). If the residue field of \(V\) is not algebraically closed or the maximal ideal is principal, then \(\operatorname{Int}^{\mathrm{R}}(V)\) is a Prufer domain and thus a GPVD. If the residue field of \(V\) is algebraically closed, the maximal ideal of \(V\) is not principal, and the value group associated with \(V\) is not divisible, then \(\operatorname{Int}^{\mathrm{R}}(V)\) is not a GPVD. The remaining case is the one where the residue field of \(V\) is algebraically closed, the maximal ideal of \(V\) is not principal, and the value group associated with \(V\) is divisible.
## 4 Local rings of integer-valued rational functions
In this section, we give a family of rings of integer-valued rational functions over PVDs that are local domains. This uses a result about rational functions as maps between fields. First, we give a lemma that shows that if we have a valuation that in a sense separates the units and the maximal ideal of a local domain, then we get a ring of integer-valued rational functions that is local.
**Lemma 4.1**.: _[_15_, Lemma 3.4]_ _Let \(D\) be a local domain with maximal ideal \(\mathfrak{m}\) and field of fractions \(K\). Suppose that there is a valuation overring \(V\) of \(D\) with a valuation \(v\) such that for all \(d\in\mathfrak{m}\), we have \(v(d)>0\). Also let \(\Gamma\) be the value group of \(v\). Suppose that for every \(\varphi\in\operatorname{Int}^{R}(K,D)\), either \(\operatorname{minval}_{\varphi}(\gamma)=0\) for all \(\gamma\in\Gamma\) or \(\operatorname{minval}_{\varphi}(\gamma)>0\) for all \(\gamma\in\Gamma\). Then \(\operatorname{Int}^{R}(K,D)\) is a local domain with maximal ideal \(\operatorname{Int}^{R}(K,\mathfrak{m})\)._
To create this dichotomy in the minimum valuation function when the base ring is a PVD, we make use of a lemma about rational functions as maps from a larger field to a smaller field in a field extension. The field extension of interest here is the one from the residue field of the PVD to the residue field of the associated valuation domain.
**Definition 4.2**.: Let \(L/M\) be a purely inseparable field extension of fields of characteristic \(p>0\). We say that \(L/M\) is of **finite exponent** if there exists some \(e\in\mathbb{N}\) such that \(a^{p^{e}}\in M\) for all \(a\in L\).
**Lemma 4.3**.: _[_15_, Lemma 6.4]_ _Let \(L/M\) be a field extension that is not purely inseparable of finite exponent. Additionally, suppose that \(L\) is an infinite field. Then there does not exist a nonconstant rational function \(\varphi\in L(x)\) such that \(\varphi(d)\in M\) for all but finitely many \(d\in L\)._
This fact about rational functions as maps between fields will be used alongside the polynomials that arise from considering the residue fields of the PVD and the associated valuation domain. These polynomials are called local polynomials.
**Definition 4.4**.: [15] Let \(V\) be a valuation domain with an associated valuation \(v\) and field of fractions \(K\). Take \(f\in K[x]\) to be a nonzero polynomial, written as \(f(x)=a_{n}x^{n}+\cdots+a_{1}x+a_{0}\), and take a nonzero \(t\in K\). We define the **local polynomial of \(f\) at \(t\)** to be
\[\operatorname{loc}_{f,v,t}(x)=\frac{f(tx)}{a_{d}t^{d}}\mod\mathfrak{m},\]
where \(d=\max\{i\in\{0,1,\ldots,n\}\mid v(a_{i})+iv(t)=\operatorname{minval}_{f}(v(t))\}\) and \(\mathfrak{m}\) is the maximal ideal of \(V\). This is a well-defined monic polynomial with coefficients in \(V/\mathfrak{m}\). When the valuation \(v\) is clear from context, we simply write \(\operatorname{loc}_{f,t}\).
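As a small illustrative example of the definition (ours, not drawn from [15]), take \(V=\mathbb{Z}_{(p)}\) with \(v\) the \(p\)-adic valuation, \(f(x)=p+x+px^{2}\), and \(t=p\). Then \(v(a_{0})=1\), \(v(a_{1})+v(t)=1\), and \(v(a_{2})+2v(t)=3\), so \(\operatorname{minval}_{f}(v(t))=1\) and \(d=1\). Hence \[\operatorname{loc}_{f,v,t}(x)=\frac{f(px)}{a_{1}p}\mod\mathfrak{m}=\frac{p+px+p^{3}x^{2}}{p}\mod\mathfrak{m}=1+x,\] a monic polynomial over \(V/\mathfrak{m}\) that records which terms of \(f(tx)\) attain the minimal valuation.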
One utility of the local polynomial is that it can determine the coefficients that appear in the minimal valuation function.
**Lemma 4.5**.: _[_14_, Lemma 2.25]_ _Take \(\varphi\in K(x)\) to be nonzero and \(\alpha\in\Gamma\). There exist \(\varepsilon\in\mathbb{Q}\Gamma\) with \(\varepsilon>0\) small enough, \(c,c^{\prime}\in\mathbb{Z}\), and \(\beta,\beta^{\prime}\in\Gamma\) such that_
\[\mathrm{minval}_{\varphi}(\gamma)=\begin{cases}c\gamma+\beta,&\text{if }\alpha- \varepsilon<\gamma<\alpha,\\ c^{\prime}\gamma+\beta^{\prime},&\text{if }\alpha<\gamma<\alpha+\varepsilon.\end{cases}\]
_Write \(\varphi=\frac{f}{g}\) for some \(f,g\in K[x]\). Take \(t\in K\) such that \(v(t)=\alpha\). We can write \(\mathrm{loc}_{f,t}=a_{i_{1}}x^{i_{1}}+\cdots+a_{i_{r}}x^{i_{r}}\) and \(\mathrm{loc}_{g,t}=b_{j_{1}}x^{j_{1}}+\cdots+b_{j_{s}}x^{j_{s}}\) for some nonzero \(a_{i_{1}},\ldots,a_{i_{r}},b_{j_{1}},\ldots,b_{j_{s}}\in V/\mathfrak{m}\). Then_
\[c=i_{r}-j_{s}\quad\text{and}\quad c^{\prime}=i_{1}-j_{1}.\]
We now show that the ring of integer-valued rational functions over a PVD on its field of fractions can be local under certain conditions on the residue fields.
**Proposition 4.6**.: _Let \(D\) be a PVD with \(V\) being the corresponding valuation overring, and \(\mathfrak{m}\) being the common maximal ideal. Suppose that \(\Gamma\), the value group of \(V\), is divisible. Let \(M:=D/\mathfrak{m}\) and \(L:=V/\mathfrak{m}\). Suppose further that \(L/M\) is not purely inseparable of finite exponent and \(L\) is infinite. Then \(\mathrm{Int}^{R}(K,D)\) is local with maximal ideal \(\mathrm{Int}^{R}(K,\mathfrak{m})\)._
Proof.: Let \(v\) be a valuation associated with \(V\). Take \(\varphi\in\mathrm{Int}^{R}(K,D)\). Write \(\varphi=\frac{f}{g}\) with \(f,g\in D[x]\). Take \(\gamma\in\Gamma\) to be some element and \(t\in K\) such that \(v(t)=\gamma\). Then \(\mathrm{loc}_{f,t},\mathrm{loc}_{g,t}\in L[x]\). We want to show that \(\frac{\mathrm{loc}_{f,t}}{\mathrm{loc}_{g,t}}\) maps \(L\) to \(M\). Write \(f=\sum\limits_{i}a_{i}x^{i}\) and \(g=\sum\limits_{j}b_{j}x^{j}\) with \(a_{i},b_{j}\in D\). Pick out all the indices \(i_{1}<\cdots<i_{r}\) such that \(v(a_{i_{1}}t^{i_{1}})=\cdots=v(a_{i_{r}}t^{i_{r}})=\mathrm{minval}_{f}(\gamma)\), and similarly pick out all the indices \(j_{1}<\cdots<j_{s}\) such that \(v(b_{j_{1}}t^{j_{1}})=\cdots=v(b_{j_{s}}t^{j_{s}})=\mathrm{minval}_{g}(\gamma)\). We write
\[\frac{b_{j_{s}}}{a_{i_{r}}}t^{j_{s}-i_{r}}\varphi(tx)=\frac{f(tx)/(a_{i_{r}}t^{i_{r}})}{g(tx)/(b_{j_{s}}t^{j_{s}})}=\frac{\dfrac{a_{i_{1}}t^{i_{1}}x^{i_{1}}+\cdots+a_{i_{r}}t^{i_{r}}x^{i_{r}}+\sum\limits_{i\neq i_{1},\ldots,i_{r}}a_{i}t^{i}x^{i}}{a_{i_{r}}t^{i_{r}}}}{\dfrac{b_{j_{1}}t^{j_{1}}x^{j_{1}}+\cdots+b_{j_{s}}t^{j_{s}}x^{j_{s}}+\sum\limits_{j\neq j_{1},\ldots,j_{s}}b_{j}t^{j}x^{j}}{b_{j_{s}}t^{j_{s}}}}.\]
Let \(c\in L\) be such that \(c\) is a root of neither \(\mathrm{loc}_{f,t}\) nor \(\mathrm{loc}_{g,t}\). Note that all but finitely many elements of \(L\) satisfy this condition. Take \(u\in V\) such that \(u+\mathfrak{m}=c\). Then
\[\frac{f(tu)}{a_{i_{r}}t^{i_{r}}}\mod\mathfrak{m}=\frac{a_{i_{1}}t^{i_{1}}u^{i_{1}}+\cdots+a_{i_{r}}t^{i_{r}}u^{i_{r}}+\sum\limits_{i\neq i_{1},\ldots,i_{r}}a_{i}t^{i}u^{i}}{a_{i_{r}}t^{i_{r}}}\mod\mathfrak{m}=\mathrm{loc}_{f,t}(c)\neq 0.\]
Similarly,
\[\frac{g(tu)}{b_{j_{s}}t^{j_{s}}}\mod\mathfrak{m}=\mathrm{loc}_{g,t}(c)\neq 0.\]
Therefore,
\[\varphi(tu)\mod\mathfrak{m}=\left(\frac{a_{i_{r}}}{b_{j_{s}}}t^{i_{r}-j_{s}} \mod\mathfrak{m}\right)\frac{\mathrm{loc}_{f,t}}{\mathrm{loc}_{g,t}}(c).\]
Whenever \(\mathrm{minval}_{\varphi}(\gamma)=0\), we have that \(v\Big{(}\frac{a_{i_{r}}}{b_{j_{s}}}t^{i_{r}-j_{s}}\Big{)}=0\), so \(\Big{(}\frac{a_{i_{r}}}{b_{j_{s}}}t^{i_{r}-j_{s}}\mod\mathfrak{m}\Big{)}\neq 0\). Since \(\varphi(tu)\in D\), we obtain that \(\Big{(}\frac{a_{i_{r}}}{b_{j_{s}}}t^{i_{r}-j_{s}}\mod\mathfrak{m}\Big{)}\frac{ \mathrm{loc}_{f,t}}{\mathrm{loc}_{g,t}}(c)\in M\). This holds for all but finitely many \(c\in L\), so \(\Big{(}\frac{a_{i_{r}}}{b_{j_{s}}}t^{i_{r}-j_{s}}\mod\mathfrak{m}\Big{)}\frac{ \mathrm{loc}_{f,t}(x)}{\mathrm{loc}_{g,t}(x)}\) is constant by Lemma 4.3. Moreover, \(\frac{\mathrm{loc}_{f,t}(x)}{\mathrm{loc}_{g,t}(x)}\) is constant. We claim that this shows that the existence of some \(\gamma\in\Gamma\) such that \(\mathrm{minval}_{\varphi}(\gamma)=0\) implies that \(\mathrm{minval}_{\varphi}=0\). If there exist \(\delta,\delta^{\prime}\in\Gamma\) such that \(\mathrm{minval}_{\varphi}(\delta)=0\) and \(\mathrm{minval}_{\varphi}(\delta^{\prime})>0\), we can assume without loss of generality that \(\delta<\delta^{\prime}\). Then there exist \(\alpha,\beta,\varepsilon\in\Gamma\) and \(n\in\mathbb{Z}\setminus\{0\}\) such that
\[\mathrm{minval}_{\varphi}(\gamma)=\begin{cases}n\gamma+\beta,&\alpha\leq\gamma \leq\alpha+\varepsilon\\ 0,&\alpha-\varepsilon\leq\gamma\leq\alpha\end{cases}\]
by Lemma 3.2. Note that \(\alpha\in\Gamma\) because \(\Gamma\) is divisible. Now take \(t\in K\) such that \(v(t)=\alpha\). The fact that \(\frac{\mathrm{loc}_{f,t}(x)}{\mathrm{loc}_{g,t}(x)}\) is constant implies that \(n-0=0\) by Lemma 4.5. This is a contradiction. Thus, Lemma 4.1 implies that \(\mathrm{Int}^{\mathrm{R}}(K,D)\) is local with maximal ideal \(\mathrm{Int}^{\mathrm{R}}(K,\mathfrak{m})\).
Note that \(\mathrm{Int}^{\mathrm{R}}(K,D)\) is not trivial in this case. As an example, for any rational function \(\varphi\in\mathrm{Int}^{\mathrm{R}}(K,V)\) and \(d\in\mathfrak{m}\), we have \(d\varphi\in\mathrm{Int}^{\mathrm{R}}(K,\mathfrak{m})\subseteq\mathrm{Int}^{ \mathrm{R}}(K,D)\).
The following proposition shows that without the conditions on the extension of residue fields in Proposition 4.6, the ring \(\mathrm{Int}^{\mathrm{R}}(K,D)\) need not be local.
**Proposition 4.7**.: _Let \(D\) be a PVD with associated valuation domain \(V\neq D\) and shared maximal ideal \(\mathfrak{m}\). Set \(M:=D/\mathfrak{m}\) and \(L:=V/\mathfrak{m}\). Suppose that \(L\) is finite or \(L/M\) is purely inseparable of finite exponent. Then \(\mathrm{Int}^{R}(K,D)\) is not local._
Proof.: If \(L\) is finite of order \(q\), then \(x^{q}-x+1\) maps \(L\) to \(\{1\}\subseteq M\). We claim then that \(\frac{1}{x^{q}-x+1}\in\mathrm{Int}^{\mathrm{R}}(K,D)\). If \(a\in K\) with \(v(a)<0\), then \(v\Big{(}\frac{1}{a^{q}-a+1}\Big{)}=-qv(a)>0\). If \(a\in K\) with \(v(a)\geq 0\), then \(\frac{1}{a^{q}-a+1}\in 1+\mathfrak{m}\), so \(\frac{1}{a^{q}-a+1}\in D\). We see then that \(\frac{1}{x^{q}-x+1}\in\mathfrak{M}_{\mathfrak{m},a}\subseteq\mathrm{Int}^{ \mathrm{R}}(K,D)\) for all \(a\in K\) with \(v(a)<0\) and \(\frac{1}{x^{q}-x+1}\notin\mathfrak{M}_{\mathfrak{m},a}\subseteq\mathrm{Int}^{ \mathrm{R}}(K,D)\) for all \(a\in K\) with \(v(a)\geq 0\). This means \(\mathrm{Int}^{\mathrm{R}}(K,D)\) is not local.
Now consider the case when \(L/M\) is purely inseparable of finite exponent. This means that there exists \(e\in\mathbb{N}\) such that \(a^{p^{e}}\in M\) for all \(a\in L\), where \(p>0\) is the characteristic of \(M\). Since \(V\neq D\), we have that \(L\neq M\). Let \(c\in L\setminus M\). Then the polynomial \(x^{p^{e}}-c\) has no roots in \(L\). Additionally, the polynomial \((x^{p^{e}}-c)^{p^{e}}=x^{p^{2e}}-c^{p^{e}}\) also has no roots in \(L\) and has coefficients in \(M\). Consider the rational function \(\frac{1}{x^{p^{2e}}-u^{p^{e}}}\), where \(u+\mathfrak{m}=c\). Let \(a\in K\) such that \(v(a)<0\). Then \(v\Big{(}\frac{1}{a^{p^{2e}}-u^{p^{e}}}\Big{)}=-p^{2e}v(a)>0\). If \(a\in K\) is such that \(v(a)\geq 0\), then since \(x^{p^{2e}}-c^{p^{e}}\) has no roots in \(L\), we calculate that \(v\Big{(}\frac{1}{a^{p^{2e}}-u^{p^{e}}}\Big{)}=0\). Furthermore, \(a^{p^{2e}}-u^{p^{e}}+\mathfrak{m}\in M\), so \(\frac{1}{a^{p^{2e}}-u^{p^{e}}}\in D\). This shows that \(\frac{1}{x^{p^{2e}}-u^{p^{e}}}\in\mathrm{Int}^{\mathrm{R}}(K,D)\). We have \(\frac{1}{x^{p^{2e}}-u^{p^{e}}}\in\mathfrak{M}_{\mathfrak{m},a}\subseteq \mathrm{Int}^{\mathrm{R}}(K,D)\) for all \(a\in K\) with \(v(a)<0\) and \(\frac{1}{x^{p^{2e}}-u^{p^{e}}}\notin\mathfrak{M}_{\mathfrak{m},a}\subseteq \mathrm{Int}^{\mathrm{R}}(K,D)\) for all \(a\in K\) with \(v(a)\geq 0\). Thus, \(\mathrm{Int}^{\mathrm{R}}(K,D)\) is not local in this case.
In conclusion, for a PVD \(D\) that is not a valuation domain, with field of fractions \(K\), we can determine exactly when \(\mathrm{Int}^{\mathrm{R}}(K,D)\) is a local domain.
**Corollary 4.8**.: _Let \(D\) be a PVD with associated valuation domain \(V\), maximal ideal \(\mathfrak{m}\), and field of fractions \(K\). Suppose that \(D\neq V\) and set \(M\coloneqq D/\mathfrak{m}\) and \(L\coloneqq V/\mathfrak{m}\). Then \(\mathrm{Int}^{R}(K,D)\) is a local domain if and only if \(L/M\) is not purely inseparable of finite exponent and \(L\) is infinite._
|
2302.05949
|
Machine Learning Assisted Bad Data Detection for High-throughput
Substation Communication
|
Electrical substations are becoming more prone to cyber-attacks due to
increasing digitalization. Prevailing defense measures based on cyber rules are
often inadequate to detect attacks that use legitimate-looking measurements. In
this work, we design and implement a bad data detection solution for electrical
substations called ResiGate, that effectively combines a physics-based approach
and a machine-learning-based approach to provide substantial speed-up in
high-throughput substation communication scenarios, while still maintaining
high detection accuracy and confidence. While many existing physics-based
schemes are designed for deployment in control centers (due to their high
computational requirement), ResiGate is designed as a security appliance that
can be deployed on low-cost industrial computers at the edge of the smart grid
so that it can detect local substation-level attacks in a timely manner. A key
challenge for this is to continuously run the computationally demanding
physics-based analysis to monitor the measurement data frequently transmitted
in a typical substation. To provide high throughput without sacrificing
accuracy, ResiGate uses machine learning to effectively filter out most of the
non-suspicious (normal) data and thereby reducing the overall computational
load, allowing efficient performance even with a high volume of network
traffic. We implement ResiGate on a low-cost industrial computer and our
experiments confirm that ResiGate can detect attacks with zero error while
sustaining a high throughput.
|
Suman Sourav, Partha P. Biswas, Vyshnavi Mohanraj, Binbin Chen, Daisuke Mashima
|
2023-02-12T16:12:50Z
|
http://arxiv.org/abs/2302.05949v1
|
# Machine Learning Assisted Bad Data Detection for High-throughput Substation Communication
###### Abstract
Electrical substations are becoming more prone to cyber-attacks due to increasing digitalization. Prevailing defence measures based on cyber rules are often inadequate to detect attacks that use legitimate-looking measurements. In this work, we design and implement a bad data detection solution for electrical substations called _ResiGate_, that effectively combines a physics-based approach and a machine-learning-based approach to provide substantial speed-up in high-throughput substation communication scenarios, while still maintaining high detection accuracy and confidence. While many existing physics-based schemes are designed for deployment in control centers (due to their high computational requirement), ResiGate is designed as a security appliance that can be deployed on low-cost industrial computers at the edge of the smart grid so that it can detect local substation-level attacks in a timely manner. A key challenge for this is to continuously run the computationally demanding physics-based analysis to monitor the measurement data frequently transmitted in a typical substation. To provide high throughput without sacrificing accuracy, ResiGate uses machine learning to effectively filter out most of the non-suspicious (normal) data and thereby reducing the overall computational load, allowing efficient performance even with a high volume of network traffic. We implement ResiGate on a low-cost industrial computer and our experiments confirm that ResiGate can detect attacks with zero error while sustaining a high throughput.
## I Introduction
Substations are critical nodal points in a power grid that handle the transmission and distribution of power through different components like transformers, switchgears with bus-bars and circuit breakers, and intelligent electronic devices (IEDs). In recent years, researchers have studied various cyber threat scenarios for electrical substations and their mitigation strategies (e.g., [1, 2]). Security technologies such as firewalls and intrusion detection systems (IDS) are being increasingly deployed to protect substation networks. Firewalls and IDSs typically work on a specific set of cyber rules to allow legitimate traffic. While they can flag or block some unauthorized or malicious packets, they may fail to counter advanced attacks that follow the normal communication channel and use legitimate-looking measurements to cause physical damage.
In this work, we design and implement _ResiGate_, a physics-based intrusion detection solution for electrical substations. ResiGate's approach could supplement the cyber-based rules in today's intrusion detection systems. Specifically, ResiGate checks for the existence of false data in the measurements reported by IEDs in a substation. While physics-based approaches for detecting false measurement data have been studied in the literature [3, 4, 5], most existing solutions are designed for deployment in control centers. Such centralized solutions often do not take into consideration the high-frequency measurement data available at substations (which is not reported to a control center due to limited bandwidth). Additionally, they rely on high-performance servers to handle the computationally onerous physics-based analysis, and they also require a fast and reliable communication channel. Though physics-based analysis is accurate, it is also computationally demanding, which makes its deployment difficult in scenarios with limited computational resources or with high-throughput data to process. If only periodically sampled data is sent to the control center for detection, an advanced attacker can avoid detection by launching only short-duration transient attacks at the substation level.
To mitigate this and to effectively perform bad data detection at the local substation level, in this paper, we propose to augment the physics-based false data detection with machine learning, whereby the goal of the machine learning is to reduce the amount of computation by filtering out non-suspicious normal data. Specifically, for this filtering, ResiGate first scans all measurements using a lightweight ML algorithm, namely gradient boosted decision tree (GBDT). We should also note that the ResiGate framework is agnostic to the ML algorithm, and any advanced scheme can be integrated. The GBDT model for a given substation is trained in advance using diverse sets of normal data and pre-crafted abnormal data. Once trained, the model requires only about 20 milliseconds for online classification of a snapshot of over 100 measurement points in our experimental setup. A small set of data flagged as suspicious will be further validated by physics-based analysis. Now, as only a small set of suspicious data needs to undergo physics-based analysis, the overall computational resources required to validate the data would be significantly reduced, providing accelerated bad data detection. Although both ML-based and physics-based IDS solutions for power systems have been proposed in the literature (e.g., [6, 7]), to the best of our knowledge, ResiGate is the first fully-functioning research prototype that combines the efficiency of an ML-based approach with the accuracy of physics-based analysis for low-cost deployment in electrical substations. The main contributions of our work are as follows:
* We propose a novel design in ResiGate that augments the accurate but computationally demanding physics-based analysis with an efficient machine learning based filtering
|
2301.12965
|
Quadratic Matrix Factorization with Applications to Manifold Learning
|
Matrix factorization is a popular framework for modeling low-rank data
matrices. Motivated by manifold learning problems, this paper proposes a
quadratic matrix factorization (QMF) framework to learn the curved manifold on
which the dataset lies. Unlike local linear methods such as the local principal
component analysis, QMF can better exploit the curved structure of the
underlying manifold. Algorithmically, we propose an alternating minimization
algorithm to optimize QMF and establish its theoretical convergence properties.
Moreover, to avoid possible over-fitting, we then propose a regularized QMF
algorithm and discuss how to tune its regularization parameter. Finally, we
elaborate how to apply the regularized QMF to manifold learning problems.
Experiments on a synthetic manifold learning dataset and two real datasets,
including the MNIST handwritten dataset and a cryogenic electron microscopy
dataset, demonstrate the superiority of the proposed method over its
competitors.
|
Zheng Zhai, Hengchao Chen, Qiang Sun
|
2023-01-30T15:09:00Z
|
http://arxiv.org/abs/2301.12965v1
|
# Quadratic Matrix Factorization with Applications to Manifold Learning
###### Abstract
Matrix factorization is a popular framework for modeling low-rank data matrices. Motivated by manifold learning problems, this paper proposes a quadratic matrix factorization (QMF) framework to learn the curved manifold on which the dataset lies. Unlike local linear methods such as the local principal component analysis, QMF can better exploit the curved structure of the underlying manifold. Algorithmically, we propose an alternating minimization algorithm to optimize QMF and establish its theoretical convergence properties. Moreover, to avoid possible over-fitting, we then propose a regularized QMF algorithm and discuss how to tune its regularization parameter. Finally, we elaborate how to apply the regularized QMF to manifold learning problems. Experiments on a synthetic manifold learning dataset and two real datasets, including the MNIST handwritten dataset and a cryogenic electron microscopy dataset, demonstrate the superiority of the proposed method over its competitors.
Quadratic matrix factorization, alternating minimization, convergence property, manifold learning.
## 1 Introduction
Matrix factorization has achieved many successes in various applications, including factor models [1], clustering [2, 3], recommendation system [4], graph and representation learning [5]. The key idea behind matrix factorization is that any data matrix admitting a low-rank structure can be represented as a product of two matrices with smaller dimensions. In a general form, matrix factorization solves
\[\min_{\begin{subarray}{c}U\in\mathbb{R}^{D\times r},\,V\in\mathbb{R}^{r\times m}\\ (U,V)\in\mathcal{C}\end{subarray}}\|X-UV\|_{\mathrm{F}}^{2}. \tag{1}\]
where \(X\in\mathbb{R}^{D\times m}\) is the data matrix with dimension \(D\) and sample size \(m\), \(U\in\mathbb{R}^{D\times r}\) and \(V\in\mathbb{R}^{r\times m}\) are factors of rank \(r\), \(\mathcal{C}\) is the feasible set encoding additional structural information, and \(\|\cdot\|_{\mathrm{F}}\) is the Frobenius norm. Building upon (1), many well-known algorithms in the literature can be obtained by taking specific feasible sets \(\mathcal{C}\). For example, if \(\mathcal{C}=\{V:VV^{T}=I_{r}\}\) enforces orthonormal constraints on the rows of \(V\), then (1) reduces to principal component analysis (PCA). If \(\mathcal{C}\) enforces non-negative constraints on both \(U\) and \(V\), or either \(U\) or \(V\), then (1) becomes nonnegative matrix factorization (NMF) [6] or semi-NMF [7] respectively. NMF also shares a strong connection to spectral clustering [8].
Although matrix factorization (1) has been widely studied in various forms, it is less explored in the nonlinear case: some rows of \(V\) might be nonlinear functions of other rows of \(V\). Nonlinear matrix factorization naturally arises in manifold learning problems, where data \(\{x_{i}\}_{i=1}^{m}\subseteq\mathbb{R}^{D}\) are assumed to concentrate near an unknown \(d\)-dimensional manifold \(\mathcal{M}\) and the goal is to recover \(\mathcal{M}\) from \(\{x_{i}\}_{i=1}^{m}\)[9, 10]. Mathematically, assume \(\{x_{i}\}_{i=1}^{m}\) are given by
\[x_{i}=f(\tau_{i})+\epsilon_{i},\quad i=1,\ldots,m,\]
where \(f\) is a bijective smooth mapping from an open set \(\mathcal{T}\subseteq\mathbb{R}^{d}\) to \(\mathcal{M}\subseteq\mathbb{R}^{D}\), \(\tau_{i}\in\mathcal{T}\) is the \(d\)-dimensional representation of \(x_{i}\), and \(\epsilon_{i}\) is the \(i\)-th approximation error. Here \(f\) and \(\tau_{i}\) are non-identifiable in the sense that \(f(\tau_{i})=f^{\prime}(\tau_{i}^{\prime})\), where \(f^{\prime}=f\circ g\) and \(\tau_{i}^{\prime}=g^{-1}(\tau_{i})\) for any diffeomorphism \(g\) on \(\mathcal{T}\), but \(f(\tau_{i})\) is uniquely defined. To find \(f(\tau_{i})\), we propose to minimize the residual sum of squares with respect to \(f\) and \(\tau_{i}\):
\[\min_{f\in\mathcal{F},\{\tau_{i}\}_{i=1}^{m}\subseteq\mathbb{R}^ {d}}\sum_{i=1}^{m}\|x_{i}-f(\tau_{i})\|_{\mathrm{F}}^{2}. \tag{2}\]
Here \(\mathcal{F}=\{f:\mathbb{R}^{d}\mapsto\mathbb{R}^{D}\}\) is a prediction function class. Let \(X=(x_{1},\ldots,x_{m})\), \(\Phi=(\tau_{1},\ldots,\tau_{m})\), and \(\Xi(f,\Phi)=(f(\tau_{1}),\ldots,f(\tau_{m}))\). Then (2) can be rewritten as
\[\min_{f\in\mathcal{F},\Phi\in\mathbb{R}^{m\times d}}\|X-\Xi(f, \Phi)\|_{\mathrm{F}}^{2}. \tag{3}\]
If \(\mathcal{F}\) consists of linear functions only, i.e., \(\mathcal{F}=\{f\mid f(\tau)=A\tau,A\in\mathbb{R}^{D\times d}\}\), then the optimization problem (3) reduces to the matrix factorization problem (1) with \(U=A\), \(V=\Phi\), and \(\mathcal{C}=\mathbb{R}^{D\times r}\times\mathbb{R}^{r\times m}\). This is referred to as linear matrix factorization (LMF). The linear assumption on \(f\) can be too restrictive for manifold learning problems because it does not take the curved structure into account and thus is only applicable to model flat manifolds, i.e., manifolds that are locally isometric to the Euclidean spaces. For general manifolds, it is better to consider \(f\) as quadratic functions. Higher-order polynomial functions for \(f\) are also possible but tend to overfit noisy data, rendering poor generalization performance. For the reasons above, this paper focuses on the quadratic function class
\[\mathcal{F}=\{f(\tau)=c+A\tau+\mathcal{B}(\tau,\tau):c\in\mathbb{R} ^{D},A\in\mathbb{R}^{D\times d},\] \[\text{symmetric tensor }\mathcal{B}\in\mathbb{R}^{D\times d \times d}\}. \tag{4}\]
For any quadratic function \(f\), there exists a unique matrix \(R\in\mathbb{R}^{D\times(2+3d+d^{2})/2}\) such that \(f(\tau)=R\xi(\tau)\), where
\[\xi(\tau)=[1,\tau^{T},\psi(\tau)^{T}]^{T} \tag{5}\] \[\psi(\tau)=[\tau_{[1]}^{2},\tau_{[1]}\tau_{[2]},\ldots,\tau_{[1]}\tau_{[d]},\tau_{[2]}^{2},\ldots,\tau_{[d]}^{2}]^{T}\in\mathbb{R}^{(d^{2}+d)/2}.\]
Here \(\tau_{[i]}\) denotes the \(i\)-th coordinate of \(\tau\) and \(\psi(\cdot)\) maps \(\tau\) to a vector consisting of all quadratic and interaction terms of \(\{\tau_{[i]}\}_{i=1}^{d}\). Let \(T(\Phi)=(\xi(\tau_{1}),\ldots,\xi(\tau_{m}))\). Then the optimization problem (3) with the quadratic function class (4) reduces to \(\min_{R,\Phi}\|X-RT(\Phi)\|_{\mathrm{F}}^{2}\). To make \(\Phi\) identifiable, we propose to solve the following optimization problem
\[\min_{\begin{subarray}{c}R,\,\Phi\\ \Phi\Phi^{T}=I_{d},\,\Phi\mathbf{1}_{m}=0\end{subarray}}\|X-RT(\Phi)\|_{\mathrm{F}}^{2}. \tag{6}\]
This is again a special case of the general matrix factorization problem (1), and we emphasize that \(T(\Phi)\) encodes an implicit constraint that the last \((d^{2}+d)/2\) rows of \(T(\Phi)\) are quadratic functions of its second-to-\((1+d)\)-th rows. Thus, we refer to (6) as quadratic matrix factorization (QMF).
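As a concrete illustration of this notation, the following short NumPy sketch (our own, not code from the paper) builds \(\psi(\tau)\), \(\xi(\tau)\), and \(T(\Phi)=(\xi(\tau_{1}),\ldots,\xi(\tau_{m}))\) using the ordering of quadratic terms in (5); the final line is just a dimension check.

```python
import numpy as np

def psi(tau):
    # All quadratic and interaction terms of tau, ordered as in (5):
    # tau_[1]^2, tau_[1]tau_[2], ..., tau_[1]tau_[d], tau_[2]^2, ..., tau_[d]^2.
    d = len(tau)
    return np.array([tau[i] * tau[j] for i in range(d) for j in range(i, d)])

def xi(tau):
    # xi(tau) = [1, tau^T, psi(tau)^T]^T, a vector of length (2 + 3d + d^2) / 2.
    return np.concatenate(([1.0], tau, psi(tau)))

def T(Phi):
    # Stack xi(tau_i) column by column: T(Phi) has shape ((2 + 3d + d^2)/2, m).
    return np.column_stack([xi(Phi[:, i]) for i in range(Phi.shape[1])])

# Usage example with d = 2 latent dimensions and m = 4 points.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((2, 4))
print(T(Phi).shape)  # (6, 4), since (2 + 3*2 + 2**2) / 2 = 6
```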
LMF has been widely studied [6, 11, 12, 13], but the proposed QMF is much more difficult and has received little attention. In this paper, we propose to optimize the QMF problem (6) with respect to \(\Phi\) and \(R\) alternately. Specifically, with \(\Phi\) fixed, optimizing (6) with respect to \(R\) is equivalent to a linear regression problem. With \(R\) fixed, minimizing (6) over \(\Phi\) is a non-convex quadratic projection problem, i.e., projecting a target point onto the quadratic surface determined by \(R\). This quadratic projection problem can be efficiently solved by an alternating minimization algorithm when \(\|RJ\|_{\mathrm{F}}\) is small, where \(J=(\mathbf{0}\ I_{(d^{2}+d)/2})^{T}\in\mathbb{R}^{(2+3d+d^{2})/2\times(d^{2}+d)/2}\). This motivates us to add a regularizer \(\lambda\|RJ\|_{\mathrm{F}}^{2}\) to (6) and solve the regularized QMF problem:
\[\min_{\begin{subarray}{c}R,\,\Phi\\ \Phi\Phi^{T}=I_{d},\,\Phi\mathbf{1}_{m}=0\end{subarray}}\ell_{\lambda}(R,\Phi)=\|X-RT(\Phi)\|_{\mathrm{F}}^{2}+\lambda\|RJ\|_{\mathrm{F}}^{2}. \tag{7}\]
To solve (7), an alternating minimization algorithm is proposed in Section 4. We also discuss how to tune \(\lambda\) properly.
Our contributions are four-fold. First, motivated by manifold learning problems, we introduce the quadratic matrix factorization model and propose an alternating minimization algorithm for solving (6). Second, we establish the theoretical convergence property of the QMF algorithm. Third, motivated by the theoretical analysis of the QMF algorithm, we propose a regularized QMF algorithm and give an adaptive parameter tuning method. Finally, we apply the regularized QMF algorithm to solve general manifold learning problems. Numerically, we examine the performance of the proposed method on a synthetic manifold learning dataset and two real-world datasets, including the MNIST handwritten dataset and a cryogenic electron microscopy dataset, and demonstrate the superiority of the proposed method over its competitors.
### _Related Work and Paper Organization_
Our QMF model is different from the problems studied in [14, 15], which are also referred to as quadratic matrix factorization. They consider approximating a matrix by a product of multiple low-rank matrices, and some factor matrix appears twice in the approximation, that is, \(X\approx UU^{T}\) or \(X\approx AQU^{T}C\) with \(U\) being an unknown factor. They focus on estimating \(U\). In contrast, our paper is motivated by manifold learning problems and focuses on solving (1) with quadratic constraints: some rows of \(V\) are quadratic functions of the other rows of \(V\). Their algorithms cannot be applied to solve our QMF problem (6).
Many manifold learning methods, such as LLE [16], Isomap [17], Laplacian eigenmaps [18] and diffusion maps [19], aim to find a lower-dimensional representation of the dataset while preserving certain geometric structures. In contrast, our target is to recover the underlying manifold structure on which the dataset lies. Most algorithms with the same purpose are based on tangent space estimation [20, 21, 22, 23, 24]. These methods share one common limitation that they do not take the higher-order smoothness into account. Compared with the aforementioned methods, the local polynomial approximation algorithm, which takes higher-order smoothness into account, could achieve a better convergence rate as shown in [9]. However, [9] and [10] mainly study the statistical properties of the local polynomial fitting algorithms, leaving several important computational issues, such as the algorithmic convergence properties untouched. Our paper addresses these computational issues by providing algorithmic convergence properties and a regularized QMF algorithm that potentially avoids over-fitting.
The rest of the paper proceeds as follows. In Section 2, we describe the alternating minimization algorithm for solving the QMF problem (6). As a key component, we propose an alternating minimization algorithm to solve the quadratic projection problem and present its theoretical analysis. We establish the algorithmic convergence property of QMF in Section 3. In Section 4, we develop a regularized QMF algorithm and discuss the tuning method. Applications to manifold learning algorithms are given in Section 5, and numerical experiments are carried out in Section 6. We conclude this paper with several remarks in Section 7 and leave technical proofs in the Appendix.
### _Notation_
Throughout this paper, we denote by \(\|\cdot\|\) and \(\|\cdot\|_{\mathrm{F}}\) the spectral norm and the Frobenius norm respectively. For a vector \(v\in\mathbb{R}^{D}\), we use \(\|v\|_{1}=\sum_{i=1}^{D}|v_{i}|\) to denote its \(\ell_{1}\) norm. For a matrix \(M\in\mathbb{R}^{r\times m}\), denote by \(\sigma_{i}(M)\), \(\sigma_{\min}(M)=\sigma_{\min\{r,m\}}(M)\), and \(\|M\|_{2,1}=\sum_{i=1}^{r}\|M_{i}\|\) the \(i\)-th largest singular value, the smallest singular value, and the \(\ell_{2,1}\) norm of \(M\) respectively. Here \(M_{i}\) is the \(i\)-th row of \(M\). Also, denote by \(M^{\dagger}\) the Moore-Penrose inverse of \(M\) and by \(P_{M}=M^{T}(MM^{T})^{\dagger}M\in\mathbb{R}^{m\times m}\) the projection matrix corresponding to the row subspace of \(M\), i.e., the subspace in \(\mathbb{R}^{m}\) spanned by the rows of \(M\). Let \(\mathcal{B}\in\mathbb{R}^{D\times d\times d}\) be a tensor; then \(\mathcal{B}(\tau,\eta)\in\mathbb{R}^{D}\) for any \(\tau,\eta\in\mathbb{R}^{d}\). We use \(B_{k}\in\mathbb{R}^{d\times d}\) to denote the \(k\)-th slice of \(\mathcal{B}\), i.e., \(\mathcal{B}(\tau,\eta)_{k}=\tau^{T}B_{k}\eta=\langle B_{k},\eta\tau^{T}\rangle\) for all \(\tau,\eta\in\mathbb{R}^{d}\) and \(k=1,\ldots,D\). We refer to \(\mathcal{B}\) as a symmetric tensor if \(\{B_{k}\}_{k=1}^{D}\) are all symmetric. For any \(\eta\in\mathbb{R}^{d}\), define \(\mathcal{B}_{\eta}\) as the action of \(\mathcal{B}\) on the vector \(\eta\):
\[\mathcal{B}_{\eta}=[B_{1}\eta,\ldots,B_{D}\eta]^{T}\in\mathbb{R}^{D\times d}, \tag{8}\]
where \(B_{k}\) denotes the \(k\)-th slice of \(\mathcal{B}\). Thus, \(\mathcal{B}(\tau,\eta)=\mathcal{B}_{\eta}\tau=\mathcal{B}_{\tau}\eta\) when \(\mathcal{B}\) is symmetric. Denote by \(\mathcal{B}^{*}(\cdot):\mathbb{R}^{D}\mapsto\mathbb{R}^{d\times d}\) the adjoint operator of \(\mathcal{B}\):
\[\mathcal{B}^{*}(c)=\sum_{k=1}^{D}c_{k}B_{k}\in\mathbb{R}^{d\times d},\quad \forall c\in\mathbb{R}^{D},\]
where \(B_{k}\) is the \(k\)-th slice of \(\mathcal{B}\). Let \(\mathbf{1}_{m}=[1,\ldots,1]^{T}\in\mathbb{R}^{m}\) and \(\mathbf{0}\) be an all-zero matrix, whose size depends on the context. In addition, let \(I_{d}\) be the identity matrix of size \(d\times d\). Given two matrices \(A,D\in\mathbb{R}^{d\times d}\), we use \(A\succeq D\) (resp. \(A\preceq D\)) to indicate that \(A-D\) (resp. \(D-A\)) is a positive semi-definite matrix.
## 2 Quadratic Matrix Factorization
This section presents an alternating minimization algorithm for solving quadratic matrix factorization. Given \(\{x_{i}\}_{i=1}^{m}\subseteq\mathbb{R}^{D}\), the goal is to solve
\[\min_{f\in\mathcal{F},\{\tau_{i}\}_{i=1}^{m}\subseteq\mathbb{R}^{d}}\sum_{i=1 }^{m}\|x_{i}-f(\tau_{i})\|_{\mathrm{F}}^{2}, \tag{9}\]
where \(\mathcal{F}\) is the quadratic function class (4). Before proceeding to the algorithm, let us first illustrate the advantages of the quadratic function class over the linear function class in a swiss roll fitting example. As in Figure 1, we generate noisy data points near a swiss roll, fit such data using linear and quadratic functions, and compare the fitted curves with the underlying truth. It turns out that linear fitting tends to return a polygon, while quadratic fitting could produce curved lines that recover the underlying truth better. The difference between linear and quadratic fitting becomes more significant in the central region, where the curvature is large. This coincides with the intuition that quadratic fitting performs better because it takes the curvature into consideration, while linear fitting does not.
To formally present the algorithm, we need some notations. Recall that \(\psi(\cdot)\) and \(\xi(\cdot)\) are given by (5). For any symmetric tensor \(\mathcal{B}\in\mathbb{R}^{D\times d\times d}\), there exists a unique matrix \(Q\in\mathbb{R}^{D\times(d^{2}+d)/2}\) such that
\[\mathcal{B}(\tau,\tau)=Q\psi(\tau),\quad\forall\tau\in\mathbb{R}^{d}. \tag{10}\]
In particular, the \(k\)-th row \(Q_{k\cdot}\) of \(Q\) relates to the \(k\)-th slice of \(\mathcal{B}\) via the equality \(Q_{k\cdot}\psi(\tau)=\tau^{T}B_{k}\tau\). We shall refer to such \(Q\) as the matrix representation of the symmetric tensor \(\mathcal{B}\). Similarly, for any quadratic function \(f(\tau)=c+A\tau+\mathcal{B}(\tau,\tau)\), there exists a unique matrix \(R\in\mathbb{R}^{D\times(2+3d+d^{2})/2}\) such that \(f(\tau)=R\xi(\tau),\forall\tau\in\mathbb{R}^{d}\). Indeed, \(R=[c,A,Q]\) with \(Q\) being the matrix representation of the symmetric tensor \(\mathcal{B}\). Let \(X=[x_{1},\ldots,x_{m}]\in\mathbb{R}^{D\times m}\), \(\Phi=[\tau_{1},...,\tau_{m}]\in\mathbb{R}^{d\times m}\), and
\[\Psi(\Phi)=[\psi(\tau_{1}),\ldots,\psi(\tau_{m})]\in\mathbb{R}^{\frac{d^{2}+d}{2}\times m}. \tag{11}\]
Then the optimization problem (9) is equivalent to
\[\min_{R,\Phi}\ell(R,\Phi)=\ell(c,A,Q,\Phi)=\|X-RT(\Phi)\|_{\mathrm{F}}^{2}, \tag{12}\]
where
\[R=R(c,A,Q)=[c,A,Q], \tag{13}\]
This can be viewed as a matrix factorization problem, with constraints given by (13). The last \((d^{2}+d)/2\) rows of \(T(\Phi)\) in the constraints (13) are determined by the second-to-\((1+d)\)th rows of \(T(\Phi)\) through the mapping \(\psi\). Recall that \(\psi(\tau)\) collects all quadratic and interaction terms of \(\{\tau_{\|}\}_{i=1}^{d}\), thus these constraints specify all possible quadratic constraints. Problem (12) is thus referred to as QMF. LMF is a special case of QMF with additional constraints that \(Q=0\).
However, (12) suffers from non-identifiability issues. This is because the minima of (12) are determined by \(T(\Phi)\) only through its row space. To see this, we fix \(\Phi\) and consider minimizing \(\ell(R,\Phi)\) with respect to \(R\) only. The minimizer \(\widetilde{R}\) and the product \(\widetilde{R}T(\Phi)\) are given by
\[\begin{split}&\widetilde{R}=\operatorname*{argmin}_{R}\ell(R,\Phi)=XT( \Phi)^{T}(T(\Phi)T(\Phi)^{T})^{\dagger},\\ &\widetilde{R}T(\Phi)=XT(\Phi)^{T}(T(\Phi)T(\Phi)^{T})^{ \dagger}T(\Phi)=XP_{T(\Phi)},\end{split} \tag{14}\]
where \(M^{\dagger}\) denotes the Moore-Penrose inverse of \(M\) and \(P_{T(\Phi)}=T(\Phi)^{T}(T(\Phi)T(\Phi)^{T})^{\dagger}T(\Phi)\in\mathbb{R}^{m \times m}\). Thus the loss of \(\widetilde{R}T(\Phi)\) only depends on the row subspace of \(T(\Phi)\). Substituting (14) into (12), we obtain
\[\min_{\Phi}\min_{R}\ell(R,\Phi)=\min_{\Phi}\|X-XP_{T(\Phi)}\|_{\mathrm{F}}^{2}.\]
In particular, if \(\Phi_{1}\) and \(\Phi_{2}\) satisfy \(P_{T(\Phi_{1})}=P_{T(\Phi_{2})}\), then \(\min_{R}\ell(R,\Phi_{1})=\min_{R}\ell(R,\Phi_{2})\) and thus it is impossible to distinguish \(\Phi_{1}\) and \(\Phi_{2}\) when optimizing (12). Proposition 1 provides concrete transformations on \(\Phi\) such that \(\min_{R}\ell(R,\Phi)\) stays the same.
**Proposition 1**: _Suppose \(\Phi^{\prime}=Z\Phi+u\mathbf{1}_{m}^{T}\) for some invertible matrix \(Z\in\mathbb{R}^{d\times d}\) and some vector \(u\in\mathbb{R}^{d}\). Then for any \(R\), there exists \(R^{\prime}\) such that \(\ell(R^{\prime},\Phi^{\prime})=\ell(R,\Phi)\). In particular, we have \(\min_{R}\ell(R,\Phi^{\prime})=\min_{R}\ell(R,\Phi)\)._
The proof of Proposition 1 is left in the Appendix. To overcome non-identifiability issues, we add additional constraints \(\Phi\Phi^{T}=I_{d}\) and \(\Phi\mathbf{1}_{m}=0\) to (12) and solve
\[\min_{\begin{subarray}{c}R,\,\Phi\\ \Phi\Phi^{T}=I_{d},\,\Phi\mathbf{1}_{m}=0\end{subarray}}\ell(R,\Phi)=\|X-RT(\Phi)\|_{\mathrm{F}}^{2}, \tag{15}\]
where \(R=R(c,A,Q)\) and \(T(\Phi)\) are defined in (13). Several remarks follow. First, by Proposition 1, the minima of the constrained problem (15) and the unconstrained problem (12) are the same. Thus these two problems are equivalent. Second, by introducing constraints \(\Phi\Phi^{T}=I_{d}\), we reduce the solution space from \(\mathbb{R}^{d\times m}\) to the Stiefel manifold, which
Fig. 1: A comparison between the linear and quadratic fitting in the swiss roll fitting problem. The above figures display the fitted curves using linear and quadratic functions, respectively. The dots represent the raw data and the dashed lines represent the underlying truth.
This potentially helps speed up the optimization procedure [25]. Third, we still do not have any restrictions on \(R\) in (15), so optimizing (15) over \(R\) with fixed \(\Phi\) is still a regression problem with its solution given by (14). Fourth, the constraint \(\Phi\Phi^{T}=I_{d}\) enforces the scale of \(\Phi\) to be neither too large nor too small. Thus, the configuration of the approximation \(RT(\Phi)\) largely depends on \(R\) and can be controlled by regularizing \(R\) properly. We will explore this last point in Section 4.
Now we present our first main algorithm. We adopt an alternating minimization strategy to solve (15). To begin with, we initialize \(\Phi_{0}\in\mathbb{R}^{d\times m}\) as the top \(d\) eigenvectors of the Gram matrix \(G=(X-\bar{\mathbf{x}}\mathbf{1}_{m}^{T})^{T}(X-\bar{\mathbf{x}}\mathbf{1}_{m}^{T})\), where \(\bar{\mathbf{x}}=\frac{1}{m}\sum_{i=1}^{m}x_{i}\). During the \(t\)-th iteration, we first fix \(\Phi=\Phi_{t-1}\) and update \(R_{t}=\operatorname*{argmin}_{R}\ell(R,\Phi_{t-1})\) as in (14). Next, we fix \(R=R_{t}\) and update \(\widetilde{\Phi}_{t}=\operatorname*{argmin}_{\Phi}\ell(R_{t},\Phi)\). This is a separable problem in the sense that \(\widetilde{\Phi}_{t}\) is given by \(\widetilde{\Phi}_{t}=[\widetilde{\tau}_{1,t},\ldots,\widetilde{\tau}_{m,t}]\) with
\[\widetilde{\tau}_{i,t}=\operatorname*{argmin}_{\tau\in\mathbb{R}^{d}}\|x_{i} -R_{t}\xi(\tau)\|^{2}, \tag{16}\]
where \(\xi(\tau)\) is defined in (5). We refer to (16) as a quadratic projection problem because it finds the closest point \(R_{t}\xi(\widetilde{\tau}_{i,t})\) to \(x_{i}\) on the quadratic surface \(f_{t}(\tau)=R_{t}\xi(\tau)\). At the end of the \(t\)-th loop, we set \(\Phi_{t}=Z_{t}\widetilde{\Phi}_{t}(I_{m}-\mathbf{1}_{m}\mathbf{1}_{m}^{T}/m)\) with \(Z_{t}\) given by
\[Z_{t}=(\widetilde{\Phi}_{t}(I_{m}-\mathbf{1}_{m}^{T}\mathbf{1}_{m}/m) \widetilde{\Phi}_{t}^{T})^{-1/2}. \tag{17}\]
This ensures that the constraints \(\Phi_{t}\Phi_{t}^{T}=I_{d}\) and \(\Phi_{t}\mathbf{1}_{m}=0\) hold. Given a precision level \(\epsilon>0\), we repeat the above iterations until the stopping criterion \(\|\Phi_{t}^{T}\Phi_{t}-\Phi_{t-1}^{T}\Phi_{t-1}\|\leq\epsilon\) is met, i.e., until the row subspace of \(\Phi_{t}\) stabilizes. Algorithm 1 summarizes the details. The only issue we have not yet addressed is how to solve (16), which will be discussed in the following subsection.
```
Data: \(X=[x_{1},\ldots,x_{m}]\in\mathbb{R}^{D\times m}\). Result: \(\Phi=[\tau_{1},\ldots,\tau_{m}]\in\mathbb{R}^{d\times m}\) and the quadratic function \(f(\tau)=R_{t}\xi(\tau)\).
1 initialize \(\Phi_{0}\in\mathbb{R}^{d\times m}\) as the top \(d\) eigenvectors of \(G=(X-\bar{\mathbf{x}}\mathbf{1}_{m}^{T})^{T}(X-\bar{\mathbf{x}}\mathbf{1}_{m}^{T})\);
2while\(\|\Phi_{t}^{T}\Phi_{t}-\Phi_{t-1}^{T}\Phi_{t-1}\|>\epsilon\)do
3 update \(R_{t}=\operatorname*{argmin}_{R}\ell(R,\Phi_{t-1})\) as in (14);
4for\(i=1\)to\(m\)do
5 solve the \(i\)-th projection problem \(\widetilde{\tau}_{i,t}=\operatorname*{argmin}_{\tau\in\mathbb{R}^{d}}\|x_{i}- R_{t}\xi(\tau)\|^{2}\);
6 end for
7 set \(\widetilde{\Phi}_{t}=[\widetilde{\tau}_{1,t},\ldots,\widetilde{\tau}_{m,t}]\);
8 update \(\Phi_{t}=Z_{t}\widetilde{\Phi}_{t}(I_{m}-\mathbf{1}_{m}\mathbf{1}_{m}^{T}/m)\) with \(Z_{t}\) given by (17);
9 end while
```
**Algorithm 1** An alternating minimization algorithm for quadratic matrix factorization.
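A minimal NumPy sketch of this alternating scheme is given below for illustration; `project` stands for any solver of the quadratic projection problem (16), discussed in the next subsection, and `T` is the map \(\Phi\mapsto T(\Phi)\) from (13). It is a sketch of Algorithm 1, not an optimized implementation.

```
import numpy as np

def qmf_als(X, d, project, T, eps=1e-6, max_iter=100):
    # Alternating minimization for (15); X is D x m and project(x, R) solves (16) for one sample.
    D, m = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    # initialize Phi_0 with the top-d right singular vectors of the centered data,
    # i.e. the top-d eigenvectors of the Gram matrix G = Xc^T Xc
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Phi = Vt[:d, :]
    prev = np.zeros((m, m))
    R = None
    for _ in range(max_iter):
        # R-step: ordinary least squares given Phi, as in (14)
        TPhi = T(Phi)
        R = X @ TPhi.T @ np.linalg.pinv(TPhi @ TPhi.T)
        # Phi-step: project every sample onto the quadratic surface f(tau) = R xi(tau)
        Phi_t = np.column_stack([project(X[:, i], R) for i in range(m)])
        # center, then re-orthonormalize so that Phi Phi^T = I_d and Phi 1_m = 0, cf. (17)
        Phi_t = Phi_t - Phi_t.mean(axis=1, keepdims=True)
        w, V = np.linalg.eigh(Phi_t @ Phi_t.T)
        Phi = (V @ np.diag(1.0 / np.sqrt(w)) @ V.T) @ Phi_t
        cur = Phi.T @ Phi
        if np.linalg.norm(cur - prev, 'fro') <= eps:   # stopping criterion of Algorithm 1
            break
        prev = cur
    return R, Phi
```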
### _Quadratic Projection_
Since the parameters \(R_{t}\) and \(x_{i}\) are fixed when solving (16), we omit the subscripts \(i\) and \(t\) for simplicity. We rewrite the loss function in (16) as
\[h(\tau)=\|x-c-A\tau-\mathcal{B}(\tau,\tau)\|^{2}, \tag{18}\]
where \(c,A,\mathcal{B}\) are determined by \(R\) via (10) and (13). Minimizing (18) with respect to \(\tau\) is non-convex, and we consider minimizing the following surrogate loss instead:
\[\min_{\tau,\eta}g(\tau,\eta)=\frac{1}{2}\|x-c-A\tau-\mathcal{B}( \tau,\eta)\|^{2}\] \[+\frac{1}{2}\|x-c-A\eta-\mathcal{B}(\tau,\eta)\|^{2}.\]
Note that \(g\) is symmetric in \(\tau\) and \(\eta\), and \(g(\tau,\tau)=h(\tau)\). Starting from \(\tau_{0}=\eta_{0}=\mathbf{0}\in\mathbb{R}^{d}\), we update \(\{\tau_{s},\eta_{s}\}\) iteratively in the following manner:
\[\left\{\begin{array}{l}\tau_{s}=\operatorname*{argmin}_{\tau}g(\tau,\eta_{s-1 }),\\ \eta_{s}=\operatorname*{argmin}_{\eta}g(\tau_{s},\eta).\end{array}\right. \tag{19}\]
Upon convergence such that \((\tau^{*},\eta^{*})=\operatorname*{argmin}_{\tau,\eta}g(\tau,\eta)\), we take \(\tau^{*}\) to be the solution to (16). In what follows, we show how (19) can be efficiently solved and prove that \((\tau_{s},\eta_{s})\) converges to \((\tau^{*},\tau^{*})\) for some stationary point \(\tau^{*}\) of \(h\) under certain conditions.
The update rule (19) can be implemented efficiently as all iterates admit closed-form solutions. Let us fix \(\eta=\eta_{s-1}\) and consider optimizing \(g(\tau,\eta_{s-1})\) over \(\tau\). Recall that \(\mathcal{B}_{\eta}\) is given by (8) for any \(\eta\in\mathbb{R}^{d}\) and \(\mathcal{B}(\tau,\eta)=\mathcal{B}_{\eta}\tau=\mathcal{B}_{\tau}\eta\). Then \(g(\tau,\eta_{s-1})\) can be rewritten as
\[g(\tau,\eta_{s-1}) =\frac{1}{2}\|x-c-(A+\mathcal{B}_{\eta_{s-1}})\tau\|^{2}\] \[+\frac{1}{2}\|x-c-A\eta_{s-1}-\mathcal{B}_{\eta_{s-1}}\tau\|^{2}.\]
This is a quadratic function of \(\tau\), so \(\tau_{s}=\operatorname*{argmin}_{\tau}g(\tau,\eta_{s-1})\) admits a closed-form solution
\[\tau_{s}=\Gamma_{\eta_{s-1}}^{-1}\zeta_{\eta_{s-1}},\]
where \(\Gamma_{\eta_{s-1}}\) and \(\zeta_{\eta_{s-1}}\) are
\[\Gamma_{\eta_{s-1}} =(A+\mathcal{B}_{\eta_{s-1}})^{T}(A+\mathcal{B}_{\eta_{s-1}})+ \mathcal{B}_{\eta_{s-1}}^{T}\mathcal{B}_{\eta_{s-1}}, \tag{20}\] \[\zeta_{\eta_{s-1}} =(A+\mathcal{B}_{\eta_{s-1}})^{T}(x-c)+\mathcal{B}_{\eta_{s-1}}^{T} (x-c-A\eta_{s-1}). \tag{21}\]
Since \(g(\tau,\eta)\) is symmetric in \(\tau\) and \(\eta\), the dual problem \(\eta_{s}=\operatorname*{argmin}_{\eta}g(\tau_{s},\eta)\) can be solved similarly as
\[\eta_{s}=\Gamma_{\tau_{s}}^{-1}\zeta_{\tau_{s}},\]
where \(\Gamma_{\tau_{s}}\) and \(\zeta_{\tau_{s}}\) are given by (20) and (21) with \(\eta_{s-1}\) replaced by \(\tau_{s}\).
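For illustration, these closed-form iterates can be sketched as follows, representing \(\mathcal{B}\) by its slices \(B_{1},\ldots,B_{D}\) so that the \(k\)-th row of \(\mathcal{B}_{\eta}\) is \((B_{k}\eta)^{T}\), which is one natural reading of (8); the snippet assumes \(\sigma_{d}(A)>0\) so that the linear systems are solvable, and the function names are illustrative only.

```
import numpy as np

def B_matrix(B_slices, eta):
    # B_eta with k-th row (B_k eta)^T, so that B(tau, eta) = B_eta @ tau
    return np.stack([Bk @ eta for Bk in B_slices])

def quad_project(x, c, A, B_slices, n_iter=50, tol=1e-8):
    # alternating closed-form updates (19)-(21) for the projection problem (16)
    d = A.shape[1]
    tau, eta = np.zeros(d), np.zeros(d)
    for _ in range(n_iter):
        tau_old = tau
        Be = B_matrix(B_slices, eta)                             # tau-step, eta fixed
        Gamma = (A + Be).T @ (A + Be) + Be.T @ Be                # Gamma_{eta}, cf. (20)
        zeta = (A + Be).T @ (x - c) + Be.T @ (x - c - A @ eta)   # zeta_{eta}, cf. (21)
        tau = np.linalg.solve(Gamma, zeta)
        Bt = B_matrix(B_slices, tau)                             # eta-step, tau fixed
        Gamma = (A + Bt).T @ (A + Bt) + Bt.T @ Bt
        zeta = (A + Bt).T @ (x - c) + Bt.T @ (x - c - A @ tau)
        eta = np.linalg.solve(Gamma, zeta)
        if np.linalg.norm(tau - tau_old) < tol and np.linalg.norm(tau - eta) < tol:
            break
    return tau
```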
Now we provide conditions under which \((\tau_{s},\eta_{s})\) converges such that \(\tau^{*}=\lim_{s}\tau_{s}=\lim_{s}\eta_{s}\), and \((\tau^{*},\tau^{*})\) is a first-order stationary point of \(g\). This implies that \(\tau^{*}\) is a stationary point of \(h\). The key is to analyze the Hessian matrix of \(g\):
\[H_{g}(\tau,\eta)=\begin{bmatrix}\Gamma_{\eta}&H_{\eta\tau}\\ H_{\tau\eta}&\Gamma_{\tau}\end{bmatrix},\]
where \(\Gamma_{\eta}\) and \(\Gamma_{\tau}\) are given by (20) with parameter \(\eta\) and \(\tau\) respectively and \(H_{\eta\tau}\) and \(H_{\tau\eta}\) are given by
\[H_{\eta\tau}=H_{\tau\eta}^{T}=-2\mathcal{B}^{*}(x-c-A(\tau+\eta)/2-\mathcal{B}(\tau,\eta))+\mathcal{B}_{\eta}^{T}A+A^{T}\mathcal{B}_{\tau}+\mathcal{B}_{\eta}^{T}\mathcal{B}_{\tau}. \tag{22}\]
Here \(\mathcal{B}_{\eta}\) and \(\mathcal{B}_{\tau}\) are defined in (8), and \(\mathcal{B}^{*}(\cdot)\) represents the adjoint operator of \(\mathcal{B}\). The matrix \(\Gamma_{\eta}\) is always positive definite when \(\sigma_{d}(A)>0\) because
\[\Gamma_{\eta}=\frac{1}{2}A^{T}A+(\frac{1}{\sqrt{2}}A+\sqrt{2}\mathcal{B}_{\eta} )^{T}(\frac{1}{\sqrt{2}}A+\sqrt{2}\mathcal{B}_{\eta})\succeq\frac{1}{2}A^{T}A. \tag{23}\]
The following theorem shows that if \((\tau_{s},\eta_{s})\) is bounded, then under certain conditions on \(\mathcal{B}\), we have \((\tau_{s},\eta_{s})\) converges, \(\tau^{*}=\lim_{s\to\infty}\tau_{s}=\lim_{s\to\infty}\eta_{s}\), and \((\tau^{*},\tau^{*})\) is a stationary point of \(g\).
**Theorem 2**: _Suppose \(\sigma_{d}(A)>0\) and define \(\mathcal{S}_{\alpha}=\{\tau\in\mathbb{R}^{d}\mid\|\tau\|\leq\alpha\}\) for some \(\alpha>0\). Denote by \(B_{k}\) the \(k\)-th slice of \(\mathcal{B}\) for \(k=1,\ldots,D\). If \(\mathfrak{b}=\max_{k}\sigma_{1}(B_{k})\) satisfies_
\[(2\|x-c\|_{1}+4\alpha\|A\|_{2,1})\mathfrak{b}+3D\alpha^{2}\mathfrak{b}^{2} \leq\sigma_{d}^{2}(A)/4, \tag{24}\]
_then \(H_{g}(\tau,\eta)\) is positive definite and \(\sigma_{\min}(H_{g}(\tau,\eta))\geq\sigma_{d}^{2}(A)/4\) for all \(\tau,\eta\in\mathcal{S}_{\alpha}\). If the sequence \((\tau_{s},\eta_{s})\) obtained by (19) falls into the region \(\mathcal{S}_{\alpha}\times\mathcal{S}_{\alpha}\) for sufficiently large \(s\), then \((\tau_{s},\eta_{s})\) converges, \(\tau^{*}=\lim_{s\to\infty}\tau_{s}=\lim_{s\to\infty}\eta_{s}\), and \((\tau^{*},\tau^{*})\) is a stationary point of \(g\)._
**Proof** _Since \(\sigma_{d}(A)>0\), by (23), \(\Gamma_{\eta}\) is positive definite with \(\sigma_{\min}(\Gamma_{\eta})\geq\sigma_{d}^{2}(A)/2\). By symmetry, such property holds for \(\Gamma_{\tau}\) as well. As for \(H_{\eta\tau}\) and \(H_{\tau\eta}\), we can use (24) to show that_
\[\sigma_{1}(H_{\eta\tau})=\sigma_{1}(H_{\tau\eta})\leq\sigma_{d}^{2}(A)/4, \tag{25}\]
_holds for any \(\tau,\eta\in\mathcal{S}_{\alpha}\). In particular, for any \(\tau,\eta\in\mathcal{S}_{\alpha}\), we have_
\[\sigma_{1}(\mathcal{B}^{*}(\mathcal{B}(\tau,\eta)))\leq D\alpha^{ 2}\mathfrak{b}^{2}, \sigma_{1}(\mathcal{B}_{\eta}^{T}A)\leq\alpha\|A\|_{2,1}\mathfrak{b},\] \[\sigma_{1}(A^{T}\mathcal{B}_{\tau})\leq\alpha\|A\|_{2,1}\mathfrak{ b}, \sigma_{1}(\mathcal{B}_{\eta}^{T}\mathcal{B}_{\tau})\leq D\alpha^{2}\mathfrak{b}^{2},\]
_Similarly,_
\[\sigma_{1}(\mathcal{B}^{*}(x-c-A(\tau+\eta)/2))\] \[\leq (\|x-c\|_{1}+\|A(\tau+\eta)/2\|_{1})\mathfrak{b}\leq(\|x-c\|_{1} +\alpha\|A\|_{2,1})\mathfrak{b}.\]
_Combining these inequalities with (22), we obtain_
\[\sigma_{1}(H_{\eta\tau}) =\sigma_{1}(H_{\tau\eta})\] \[\leq(2\|x-c\|_{1}+4\alpha\|A\|_{2,1})\mathfrak{b}+3D\alpha^{2} \mathfrak{b}^{2}.\]
_Then (25) follows from the condition (24). To proceed, we decompose \(H_{g}(\tau,\eta)\) into the following summation of a positive definite matrix and a symmetric matrix:_
\[H_{g}(\tau,\eta)=\begin{bmatrix}\Gamma_{\eta}&\mathbf{0}\\ \mathbf{0}&\Gamma_{\tau}\end{bmatrix}+\begin{bmatrix}\mathbf{0}&H_{\eta\tau} \\ H_{\tau\eta}&\mathbf{0}\end{bmatrix}. \tag{26}\]
_For the second term, it follows from (25) that_
\[\sigma_{1}\left(\begin{bmatrix}\mathbf{0}&H_{\eta\tau}\\ H_{\tau\eta}&\mathbf{0}\end{bmatrix}\right)=\sigma_{1}(H_{\eta\tau})\leq \sigma_{d}^{2}(A)/4,\]
_for any \(\tau,\eta\in\mathcal{S}_{\alpha}\). Since \(\min\{\sigma_{\min}(\Gamma_{\tau}),\sigma_{\min}(\Gamma_{\eta})\}\geq\sigma_{d} ^{2}(A)/2\), the first term in (26) is positive definite, \(H_{g}(\tau,\eta)\) is positive definite, and_
\[\sigma_{\min}(H_{g}(\tau,\eta))\geq \min\{\sigma_{\min}(\Gamma_{\tau}),\sigma_{\min}(\Gamma_{\eta}) \}-\sigma_{1}(H_{\eta\tau})\] \[\geq \sigma_{d}^{2}(A)/4,\qquad\forall\tau,\eta\in\mathcal{S}_{\alpha}. \tag{27}\]
_Since the region \(\mathcal{S}_{\alpha}\times\mathcal{S}_{\alpha}\) is convex, (27) implies the strong convexity of \(g\) over \(\mathcal{S}_{\alpha}\times\mathcal{S}_{\alpha}\)._
_Suppose that the sequence \((\tau_{s},\eta_{s})\) generated by (19) falls into the region \(\mathcal{S}_{\alpha}\times\mathcal{S}_{\alpha}\) for sufficiently large \(s\). Since \(g\) is smooth and strongly convex over the region \(\mathcal{S}_{\alpha}\times\mathcal{S}_{\alpha}\), it is well-known that \((\tau_{s},\eta_{s})\) converges to the unique first-order stationary point of \(g\) in \(\mathcal{S}_{\alpha}\times\mathcal{S}_{\alpha}\) [26]. Next, we will prove \(\lim_{s\to\infty}\tau_{s}=\lim_{s\to\infty}\eta_{s}\) by contradiction. Specifically, define \(\tau^{*}=\lim_{s\to\infty}\tau_{s}\) and \(\eta^{*}=\lim_{s\to\infty}\eta_{s}\) and assume \(\tau^{*}\neq\eta^{*}\). Since \((\tau^{*},\eta^{*})\) is a first-order stationary point of \(g\) in \(\mathcal{S}_{\alpha}\times\mathcal{S}_{\alpha}\), by symmetry, we know \((\eta^{*},\tau^{*})\) is also a first-order stationary point of \(g\) in \(\mathcal{S}_{\alpha}\times\mathcal{S}_{\alpha}\). However, this contradicts the uniqueness of the first-order stationary point of \(g\) in \(\mathcal{S}_{\alpha}\times\mathcal{S}_{\alpha}\)._
Suppose \((\tau_{s},\eta_{s})\subseteq\mathcal{S}_{\alpha}\times\mathcal{S}_{\alpha}\) for sufficiently large \(s\). Theorem 2 proves the convergence of \((\tau_{s},\eta_{s})\) by establishing the strong convexity of \(g\) over \(\mathcal{S}_{\alpha}\times\mathcal{S}_{\alpha}\) under condition (24). Condition (24) holds when both \(\alpha\) and \(\|x-c\|_{1}\) are small or \(\mathfrak{b}=\max_{k}\sigma_{1}(B_{k})\) is small. In particular, condition (24) holds if \(\mathfrak{b}\leq\mathfrak{b}_{0}\), where
\[\mathfrak{b}_{0}= \frac{-\|x-c\|_{1}-2\alpha\|A\|_{2,1}}{3D\alpha^{2}}\] \[+\frac{\sqrt{(\|x-c\|_{1}+2\alpha\|A\|_{2,1})^{2}+3D\alpha^{2} \sigma_{d}^{2}(A)/4}}{3D\alpha^{2}}.\]
It trivially holds when \(\mathfrak{b}=0\), which corresponds to LMF. In general, by the relationship (10) between \(\mathcal{B}\) and \(Q\), we have \(\sigma_{1}(B_{k})\leq\|B_{k}\|_{\mathrm{F}}\leq\|Q_{k}\|\), where \(Q_{k}\) is the \(k\)-th row of \(Q\). Thus, \(\mathfrak{b}\) is small if \(\|Q\|_{F}\) is small.
To conclude this subsection, we use an example with \(D=3,d=1\) to illustrate the effect of \(\mathfrak{b}\) on the convergence of \(\{(\tau_{s},\eta_{s})\}_{s=1}^{\infty}\). We consider two different settings of \(g\) and display the sequence \(\{(\tau_{s},\eta_{s})\}_{s=1}^{\infty}\) in Figure 2. The left panel has a smaller \(\mathfrak{b}\) than the right panel. The first row of Figure 2 shows that \((\tau_{s},\eta_{s})\) converges with \(\lim\tau_{s}=\lim\eta_{s}\) in the left panel while \((\tau_{s},\eta_{s})\) converges with \(\lim\tau_{s}\neq\lim\eta_{s}\) in the right panel. In addition, the second row shows that \(\tau_{s}\) in the left panel converges to the global minimum of \(h\) while \(\tau_{s}\)
in the right panel does not: there, \(\tau_{s}\) and \(\eta_{s}\) fail to reach the same limiting point, that is \(\lim\tau_{s}\neq\lim\eta_{s}\). In this case, \(\tau_{s}\) is not guaranteed to converge to the global minimum of \(h\) in general.
## 3 Convergence Properties
This section establishes the convergence property for Algorithm 1. In particular, we provide conditions under which a limit point of the update sequence is a stationary point of (12).
**Theorem 3**: _Denote by \(\{(R_{t},\widetilde{\Phi}_{t},\Phi_{t})\}_{t=1}^{\infty}\), the update sequence generated by Algorithm 1. For any sub-sequence \(\{t_{j}\}_{j=1}^{\infty}\) such that \(\lim_{j}\Phi_{t_{j}}=\Phi^{*}\), \(\lim_{j}\Phi_{t_{j}-1}=\Phi^{**}\), \(\lim_{j}\widetilde{\Phi}_{t_{j}}=\widetilde{\Phi}^{**}\), if \(\sigma_{\min}(T(\Phi^{*})T(\Phi^{*})^{T})>0\) and \(\sigma_{\min}(T(\Phi^{**})T(\Phi^{**})^{T})>0\), then \(R_{t_{j}+1}\) and \(R_{t_{j}}\) also converge. Denote by \(R^{*}=\lim_{j}R_{t_{j}+1}\) and \(R^{**}=\lim_{j}R_{t_{j}}\). Also, we have \(\widetilde{\Phi}^{**}=\operatorname*{argmin}_{\Phi}\ell(R^{**},\Phi)\). If we assume_
\[\ell(R^{**},\Phi^{**})-\ell(R^{**},\widetilde{\Phi}^{**})\geq\gamma\|\Phi^{**} -\widetilde{\Phi}^{**}\|_{\mathrm{F}}^{2} \tag{28}\]
_holds for some constant \(\gamma>0\), then \(\Phi^{**}=\widetilde{\Phi}^{**}=\Phi^{*}\) and \(R^{**}=R^{*}\) and the accumulation point \((R^{*},\Phi^{*})\) satisfies the first order Karush-Kuhn-Tucker (KKT) condition of the problem (12)._
**Proof** _By Algorithm 1, the update sequence \(\{(R_{t},\widetilde{\Phi}_{t},\Phi_{t})\}_{t=1}^{\infty}\) satisfies_
\[\begin{cases}R_{t}=\operatorname*{argmin}_{R}\ell(R,\Phi_{t-1}),\\ \widetilde{\Phi}_{t}=\operatorname*{argmin}_{\Phi}\ell(R_{t},\Phi),\\ \Phi_{t}=Z_{t}\widetilde{\Phi}_{t}(I_{m}-\mathbf{1}_{m}\mathbf{1}_{m}^{T}/m), \end{cases}\]
_where \(Z_{t}\) is defined by (17). It implies that \(\ell(R_{t+1},\Phi_{t})\) is monotone decreasing:_
\[\begin{split}\ell(R_{t+1},\Phi_{t})=&\min_{R}\ell(R, \Phi_{t})=\min_{R}\ell(R,\widetilde{\Phi}_{t})\\ \leq&\ell(R_{t},\widetilde{\Phi}_{t})\leq\ell(R_{t}, \Phi_{t-1}),\end{split} \tag{29}\]
_where the second equality follows from Proposition 1. By the monotone convergence theorem and the fact that \(\ell(R_{t+1},\Phi_{t})\geq 0\), both \(\ell(R_{t},\Phi_{t-1})\) and \(\ell(R_{t},\widetilde{\Phi}_{t})\) converge to the same value._
_Consider the sub-sequence \(\{t_{j}\}_{j=1}^{\infty}\) such that \(\lim_{j}\Phi_{t_{j}}=\Phi^{*}\) and \(\sigma_{\min}(T(\Phi^{*})T(\Phi^{*})^{T})=\gamma_{1}>0\). Then \(\sigma_{\min}(T(\Phi)T(\Phi)^{T})\geq\gamma_{1}/2>0\) for any \(\Phi\in\mathcal{S}^{*}_{\epsilon_{0}}=\{\Phi\mid\|\Phi-\Phi^{*}\|_{\mathrm{F}}\leq\epsilon_{0}\}\) with some sufficiently small \(\epsilon_{0}\). Thus, \(XT(\Phi)^{T}(T(\Phi)T(\Phi)^{T})^{-1}\) is well-defined for \(\Phi\in\mathcal{S}^{*}_{\epsilon_{0}}\) and is a continuous function of \(\Phi\) over \(\mathcal{S}^{*}_{\epsilon_{0}}\). Since \(\lim_{j}\Phi_{t_{j}}=\Phi^{*}\), we know \(\Phi_{t_{j}}\in\mathcal{S}^{*}_{\epsilon_{0}}\) for sufficiently large \(j\). By continuity, we have_
\[\begin{split}\lim_{j}R_{t_{j}+1}&=\lim_{j}XT(\Phi_ {t_{j}})^{T}(T(\Phi_{t_{j}})T(\Phi_{t_{j}})^{T})^{\dagger}\\ =&XT(\Phi^{*})^{T}(T(\Phi^{*})T(\Phi^{*})^{T})^{-1}. \end{split}\]
_where we use the definition (14) of \(R_{t_{j}+1}\). Denote by \(R^{*}=\lim_{j}R_{t_{j}+1}\)._
_Let us further assume that the sub-sequence \(\{t_{j}\}_{j=1}^{\infty}\) satisfies \(\lim_{j}\Phi_{t_{j}-1}=\Phi^{**}\), \(\lim_{j}\widetilde{\Phi}_{t_{j}}=\widetilde{\Phi}^{**}\), and \(\sigma_{\min}(T(\Phi^{**})T(\Phi^{**})^{T})>0\). Then \(R_{t_{j}}\) converges by the same argument, and we denote \(R^{**}=\lim_{j}R_{t_{j}}\). Moreover, we claim that \(\widetilde{\Phi}^{**}=\operatorname*{argmin}_{\Phi}\ell(R^{**},\Phi)\). Otherwise, there exists \(\Phi^{\prime}\) such that \(\ell(R^{**},\Phi^{\prime})<\ell(R^{**},\widetilde{\Phi}^{**})\). By continuity, there exists some \(j_{0}\) such that \(\ell(R_{t_{j_{0}}},\Phi^{\prime})<\ell(R^{**},\widetilde{\Phi}^{**})\), which contradicts the relationship (29). Furthermore, we assume (28) holds for some constant \(\gamma>0\)._
_Now we are in a position to prove that \(\Phi^{**}=\widetilde{\Phi}^{**}=\Phi^{*}\), \(R^{**}=R^{*}\), and \((R^{*},\Phi^{*})\) satisfies the KKT condition. By continuity, we have_
\[\ell(R^{**},\Phi^{**})= \lim_{j}\ell(R_{t_{j}},\Phi_{t_{j}-1})\] \[= \lim_{j}\ell(R_{t_{j}},\widetilde{\Phi}_{t_{j}})=\ell(R^{**}, \widetilde{\Phi}^{**}),\]
_where the second equality follows from (29). By (28), we have \(\Phi^{**}=\widetilde{\Phi}^{**}\), or equivalently \(\|\Phi_{t_{j}-1}-\widetilde{\Phi}_{t_{j}}\|_{\mathrm{F}}^{2}\to 0\) as \(j\to\infty\). Since \(\Phi_{t_{j}}=\operatorname*{argmin}_{\Phi\Phi^{T}=I_{d},\,\Phi\mathbf{1}_{m}=0}\|\Phi-\widetilde{\Phi}_{t_{j}}\|_{\mathrm{F}}^{2}\), we have \(\|\Phi_{t_{j}}-\widetilde{\Phi}_{t_{j}}\|_{\mathrm{F}}^{2}\leq\|\Phi_{t_{j}-1}-\widetilde{\Phi}_{t_{j}}\|_{\mathrm{F}}^{2}\) and thus \(\|\Phi_{t_{j}}-\widetilde{\Phi}_{t_{j}}\|_{\mathrm{F}}^{2}\to 0\) as \(j\to\infty\). Then the fact \(\Phi_{t_{j}}\to\Phi^{*}\) implies that \(\Phi^{**}=\widetilde{\Phi}^{**}=\Phi^{*}\). By the limit expression for \(R^{*}\) above and its \(R^{**}\) counterpart, we have \(R^{**}=R^{*}\). Finally, by taking the limit in the following optimality conditions:_
\[\begin{cases}\nabla_{\Phi}\ell(R_{t_{j}},\Phi)|_{\Phi=\widetilde{\Phi}_{t_{j}}}=0,\\ \Phi_{t_{j}}=Z_{t_{j}}\widetilde{\Phi}_{t_{j}}(I_{m}-\mathbf{1}_{m}\mathbf{1}_ {m}^{T}/m),\\ \nabla_{R}\ell(R,\Phi_{t_{j}})|_{R=R_{t_{j}+1}}=0,\end{cases}\]
_we prove that \(\{R^{*},\Phi^{*}\}\) satisfies the KKT conditions of the problem (12)._
Let us discuss the assumptions in Theorem 3. First, the feasible set \(\mathcal{G}=\{\Phi\in\mathbb{R}^{d\times m}\mid\Phi\Phi^{T}=I_{d},\Phi\mathbf{1}_{m}=0\}\) is compact, so there always exists a sub-sequence \(\{t_{j}\}_{j=1}^{\infty}\) such that \(\{\Phi_{t_{j}}\}_{j=1}^{\infty}\) and \(\{\Phi_{t_{j}-1}\}_{j=1}^{\infty}\) converge. If we assume \(\widetilde{\Phi}_{t}\) is bounded, then we can choose \(\{t_{j}\}_{j=1}^{\infty}\) such that \(\widetilde{\Phi}_{t_{j}}\) also converges. Second, the assumptions \(\sigma_{\min}(T(\Phi^{*})T(\Phi^{*})^{T})>0\) and \(\sigma_{\min}(T(\Phi^{**})T(\Phi^{**})^{T})>0\) are mild, especially when \(m\) is much larger than \(d\). Finally, the assumption (28) characterizes the \(\gamma\)-strong convexity of \(\ell(R^{**},\Phi)\) with respect to \(\Phi\) surrounding the minimizer \(\widetilde{\Phi}^{**}\).
## 4 Regularized Quadratic Matrix Factorization

The constraint \(\Phi\Phi^{T}=I_{d}\) fixes the scale of \(\Phi\), so the curvature of the fitted surface is governed by the quadratic component \(Q=RJ\), where \(J\) denotes the selection matrix such that \(RJ=Q\) for \(R=[c,A,Q]\). To prevent excessive curvature, we penalize \(\|RJ\|_{\mathrm{F}}^{2}\) and solve the regularized problem

\[\min_{R,\Phi:\ \Phi\Phi^{T}=I_{d},\ \Phi\mathbf{1}_{m}=0}\ell_{\lambda}(R,\Phi)=\|X-RT(\Phi)\|_{\mathrm{F}}^{2}+\lambda\|RJ\|_{\mathrm{F}}^{2}, \tag{30}\]

where \(\lambda\geq 0\) is a regularization parameter. We refer to (30) as regularized QMF (RQMF); letting \(\lambda\to\infty\) forces \(Q=RJ=0\) and recovers LMF. Moreover, as shown in Proposition 5 below, no matter how \(\lambda\) is chosen, RQMF always outperforms LMF in terms of memorization properties.
To solve (30), we adopt the same alternating minimization strategy described in Algorithm 1. When \(R\) is fixed, minimizing \(\ell_{\lambda}(R,\Phi)\) with respect to \(\Phi\) is equivalent to minimizing \(\ell(R,\Phi)\) with respect to \(\Phi\), since the regularizer \(\lambda\|RJ\|_{\mathrm{F}}^{2}\) is independent of \(\Phi\). It thus reduces to the quadratic projection problem discussed in Section 2.1. On the other hand, when \(\Phi\) is fixed, minimizing \(\ell_{\lambda}(R,\Phi)\) with respect to \(R\) is a ridge regression problem, and the solution can be given in closed form as
\[\begin{split}\widetilde{R}&=\operatorname*{argmin }_{R}\ell_{\lambda}(R,\Phi)\\ &=XT(\Phi)^{T}(T(\Phi)T(\Phi)^{T}+\lambda JJ^{T})^{-1}.\end{split} \tag{32}\]
Here we use the observation that \(T(\Phi)T(\Phi)^{T}+\lambda JJ^{T}\) is invertible as shown in Lemma 4 below. Therefore, to solve (30), it suffices to replace (14) in Algorithm 1 by (32), which gives us Algorithm 2, the RQMF algorithm.
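For concreteness, the regularized \(R\)-update (32) can be sketched as follows (illustrative only; `TPhi` denotes \(T(\Phi)\) and `J` the selection matrix with \(RJ=Q\)).

```
import numpy as np

def ridge_R_update(X, TPhi, J, lam):
    # closed-form minimizer of ||X - R T(Phi)||_F^2 + lam ||R J||_F^2 over R, cf. (32);
    # T(Phi) T(Phi)^T + lam J J^T is positive definite by Lemma 4 when lam > 0
    G = TPhi @ TPhi.T + lam * (J @ J.T)
    return X @ TPhi.T @ np.linalg.inv(G)
```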
**Lemma 4**: _Suppose \(\Phi\Phi^{T}=I_{d}\) and \(\Phi\mathbf{1}_{m}=0\). If \(\lambda>0\), then \(T(\Phi)T(\Phi)^{T}+\lambda JJ^{T}\) is positive definite. Furthermore, \(J^{T}(T(\Phi)T(\Phi)^{T}+\lambda JJ^{T})^{-1}J\) is also positive definite._
The proof of Lemma 4 is deferred to the Appendix. The following proposition shows that no matter how \(\lambda\) is chosen, RQMF memorizes the data better than LMF. Recall that LMF corresponds to RQMF with \(\lambda\to\infty\), or equivalently \(Q=RJ=0\).
**Proposition 5**: _For any \(\lambda>0\), the RQMF that solves (30) memorizes the data better than LMF in the following sense:_
\[\|X-R^{*}T(\Phi^{*})\|_{\mathrm{F}}^{2}\leq\|X-R^{\prime}T(\Phi^{\prime})\|_ {\mathrm{F}}^{2}, \tag{33}\]
_where \((R^{\prime},\Phi^{\prime})=\operatorname*{argmin}_{R\in\Omega,\Phi}\ell_{ \lambda}(R,\Phi)\) with \(\Omega=\{R\mid R=[c,A,Q],Q=0\}\) is the solution of LMF and \((R^{*},\Phi^{*})=\operatorname*{argmin}_{R,\Phi}\ell_{\lambda}(R,\Phi)\) is the solution of RQMF._
The proof of Proposition 5 is deferred to the Appendix.
### _Tuning Parameter Selection_
This section discusses how to tune \(\lambda\). Before presenting our new tuning method, let us first illustrate the effect of \(\lambda\) in Figure 3. We generate 240 data points uniformly on the unit circle and then add independent Gaussian noise drawn from \(\mathcal{N}(0,0.1^{2}I)\). For each target sample, we fit a curve around the target data using the nearest 40 data points and then project the target data onto the fitted curve. To fit the curve, we use the RQMF algorithm with \(\lambda=0.1\), 0.01, and 0. Figure 3 shows that the RQMF algorithm with \(\lambda=0.01\) achieves the best performance. When \(\lambda=0.1\), the regularization is too strong: the RQMF algorithm behaves like linear matrix factorization and tends to use straight lines as the fitted curves. When \(\lambda=0\), the regularization is too weak: the RQMF algorithm tends to overfit the data with excessively curved lines. Therefore, it is important to pick a proper \(\lambda\).
In what follows, we will describe a new adaptive tuning method. Recall that when \(\Phi\) is fixed, \(\widetilde{R}\) in (32) is a function of \(\lambda\). Define \(s(\lambda)=\|\widetilde{R}(\lambda)J\|_{\mathrm{F}}^{2}\) and denote by \(s^{\prime}(\lambda)\) and \(s^{\prime\prime}(\lambda)\) the corresponding first and second derivatives.
Proposition 6 shows that \(s^{\prime}(\lambda)<0\) and \(s^{\prime\prime}(\lambda)>0\) when \(s(\lambda)>0\) and \(\lambda>0\). This implies that \(s(\lambda)\) is a decreasing function of \(\lambda\) while \(s^{\prime}(\lambda)\) is a strictly increasing function of \(\lambda\). In particular, the root \(\lambda=s^{\prime-1}(-\delta)\) is unique and can be easily found via the bisection method. We propose to pick \(\lambda=s^{\prime-1}(-\delta)\) for a prescribed \(\delta>0\).
The proposed tuning method is reasonable in the following sense. Recall that \(s(\lambda)\) is the quantity that the regularizer \(\|RJ\|_{\mathrm{F}}^{2}\) aims to control and \(s^{\prime}(\lambda)\) measures the sensitivity of the target quantity \(s(\lambda)\) with respect to \(\lambda\). Thus, our tuning method chooses \(\lambda\) corresponding to a prespecified sensitivity level \(\delta\).
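A sketch of this tuning rule is given below for illustration; `s_prime` stands for any numerical evaluation of \(s^{\prime}(\lambda)\), e.g. by finite differences of \(s(\lambda)=\|\widetilde{R}(\lambda)J\|_{\mathrm{F}}^{2}\), and we assume \(\delta\leq-s^{\prime}(0)\) so that the root exists.

```
def tune_lambda(s_prime, delta, lo=0.0, hi=1.0, tol=1e-8):
    # solve s'(lambda) = -delta by bisection; s' is strictly increasing (Proposition 6),
    # and we assume delta <= -s'(0) so that a root exists in [lo, infinity)
    while s_prime(hi) < -delta:
        hi *= 2.0                      # enlarge the bracket until s'(hi) >= -delta
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if s_prime(mid) < -delta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```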
To illustrate the advantage of tuning \(\lambda\) via \(s^{\prime-1}(\cdot)\), we implement RQMF with 50 different values of \(\lambda\) evenly spaced in \([0,0.1]\). We use samples drawn from the sine curve, that is, \((t_{i},\sin(t_{i}))+\epsilon_{i}\) with the \(t_{i}\)'s evenly spaced in \([\frac{\pi}{3},\frac{2\pi}{3}]\) and \(\epsilon_{i}\stackrel{\mathrm{i.i.d.}}{\sim}\mathcal{N}(\mathbf{0},0.03^{2}I)\). For each \(\lambda\), we implement RQMF and compute the error, that is, the average distance between the fitted data points and the underlying truth. Also, we calculate the values of \(s(\lambda)\) and \(\delta(\lambda)=-s^{\prime}(\lambda)\). Figure 4 displays the error against different \(\lambda\), \(s(\lambda)\), and \(\delta(\lambda)\). It shows that the error versus \(\delta(\lambda)\) curve is the flattest near the optimal error. To achieve a prespecified error, say \(1.2\times 10^{-3}\), the feasible choice of \(\delta(\lambda)\) has a much wider range than that of \(\lambda\) or \(s(\lambda)\). Thus, it is easier to achieve good performances of RQMF by choosing \(\delta(\lambda)\) rather than \(\lambda\) or \(s(\lambda)\). In practice, we choose \(\delta>0\) as a constant smaller than \(-s^{\prime}(0)\), where \(\Phi\) determining the function \(s(\cdot)\) is given by LMF.
**Proposition 6**: _Suppose \(\Phi\) is fixed with \(\Phi\Phi^{T}=I_{d}\) and \(\Phi\mathbf{1}_{m}=0\). Define \(\widetilde{R}(\lambda)\) by (32) and \(s(\lambda)=\|\widetilde{R}(\lambda)J\|_{\mathrm{F}}^{2}\). Then \(s^{\prime}(\lambda)\leq 0\) and \(s^{\prime\prime}(\lambda)\geq 0\). Furthermore, the strict inequalities \(s^{\prime}(\lambda)<0\) and \(s^{\prime\prime}(\lambda)>0\) hold if \(s(\lambda)>0\) and \(\lambda>0\)._
The proof of Proposition 6 is collected in the Appendix.
## 5 Applications to Manifold Learning
This section applies the RQMF algorithm to manifold learning problems. Assume data \(\{x_{i}\}_{i=1}^{m}\subseteq\mathbb{R}^{D}\) are generated near an unknown smooth manifold \(\mathcal{M}\) of intrinsic dimension \(d\). Here we no longer assume all data belong to the same local chart. Instead, we assume data belong to a union of several local charts. To recover the underlying manifold, we can apply the RQMF algorithm for each local chart.
The performance of this divide-and-conquer strategy depends on choices of specific local charts. Besides, the quality of the fitted points on a single chart cannot be guaranteed uniformly: recovering the central region tends to be of higher quality than recovering the marginal region. Also, for a data point belonging to multiple local charts, the fitted points in different charts are different and it is hard to determine which one is the best. To address these challenges, we propose an improved divide-and-conquer strategy. This strategy constructs a local chart for each data point \(y\) by finding its nearest \(K\) samples or by \(\mathcal{N}(y,a)=\{x_{i}\mid\|x_{i}-y\|\leq a\}\) for some \(a>0\). Then for each target sample, we denoise this particular data point by applying the RQMF algorithm to the corresponding chart. Compared with the original divide-and-conquer strategy, our strategy treats each data point as an individual problem and improves the accuracy.
In a more general form, we could use a kernel function \(K_{h}(\cdot,\cdot)\) to assign a closer point with higher importance, where \(h\) is the bandwidth. For a target sample \(y\), we modify the loss function in (30) as
\[\begin{split}\min_{\begin{subarray}{c}R,\Phi\\ \Phi\Phi^{T}=I_{d},\,\Phi\mathbf{1}_{m}=0\end{subarray}}\ell_{\lambda,y,h}(R,\Phi)=&\|(X-RT(\Phi))W_{h}^{1/2}(y)\|_{\mathrm{F}}^{2}\\ &+\lambda\|RJ\|_{\mathrm{F}}^{2},\end{split} \tag{34}\]
where \(X=(x_{1},\ldots,x_{m})\) is the _global_ data matrix and \(W_{h}^{1/2}(y)\in\mathbb{R}^{m\times m}\) is a diagonal weight matrix with the \(i\)-th diagonal element equal to \(K_{h}^{1/2}(x_{i},y)\). If we choose \(K_{h}(x,y)=1_{\|x-y\|\leq a}\), then (34) reduces to the improved divide-and-conquer strategy mentioned above. It is also possible to use other kernel functions, such as the Gaussian kernel. To distinguish the kernel RQMF model from the previous equal-weight RQMF, we use RQMF-E to represent equal-weight RQMF and RQMF-K to represent RQMF with weights determined by a kernel.
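For illustration, a sketch of the kernel weights and of the weighted objective (34) follows. The Gaussian kernel form \(K_{h}(x,y)=\exp(-\|x-y\|^{2}/(2h^{2}))\) is used here only as an example of a possible kernel choice.

```
import numpy as np

def gaussian_weights(X, y, h):
    # diagonal of W_h(y): kernel weights K_h(x_i, y) for every column x_i of the data matrix X
    d2 = np.sum((X - y[:, None]) ** 2, axis=0)
    return np.exp(-d2 / (2.0 * h ** 2))

def weighted_rqmf_loss(X, R, TPhi, J, lam, w):
    # objective (34): ||(X - R T(Phi)) W^{1/2}||_F^2 + lam ||R J||_F^2
    resid = (X - R @ TPhi) * np.sqrt(w)[None, :]
    return np.linalg.norm(resid, 'fro') ** 2 + lam * np.linalg.norm(R @ J, 'fro') ** 2
```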
We also discuss how to tune \(\lambda\) for each sub-problem (34). Picking the same \(\lambda\) for all sub-problems is not desirable since the best \(\lambda\) depends on the weights \(W_{h}^{1/2}(y)\) and the curvature of the underlying truth, which vary as \(y\) changes. Instead, we suggest using the tuning method proposed in Section 4.1, which picks \(\lambda=s^{\prime-1}(-\delta)\) for the same prescribed sensitivity level \(\delta>0\) for all sub-problems. The same \(\delta\) would result in different \(\lambda\)'s for different charts, and this strategy often leads to better fitting accuracies in our experience.
## 6 Numerical Experiments
This section presents numerical experiments on a synthetic manifold learning dataset and two real datasets, including the MNIST handwritten dataset and a cryogenic electron microscopy dataset, to examine the finite-sample performance of the proposed method. Our goal is to reconstruct the underlying manifold from noisy data and compare our method with five commonly used methods. We first briefly describe these five competitors.
* **Local PCA** For any \(x\), it first finds the \(K\) nearest data points \(\{x_{i_{1}},...,x_{i_{K}}\}\) and then computes the covariance matrix \(M=\frac{1}{K}\sum_{k=1}^{K}(x_{i_{k}}-c_{x})(x_{i_{k}}-c_{x})^{T}\in\mathbb{R}^{D\times D}\), where \(c_{x}=\sum_{k=1}^{K}x_{i_{k}}/K\) is the center of these samples. Denote by \(P\in\mathbb{R}^{D\times D}\) the projection matrix corresponding to the space spanned by the \(d\) principal eigenvectors of \(M\). The denoised point of \(x\), a point on the estimated manifold "projected" from \(x\), is given by \(x_{\text{new}}=c_{x}+P(x-c_{x})\) (a minimal sketch is given after this list).
* **KDE & LOG-KDE Ridge Estimation** Both methods are special cases of the nonparametric ridge estimation method [21]. Let \(\hat{p}(x)=\sum_{i}K_{h}(x,x_{i})\) be the kernel density estimation (KDE) with the kernel function \(K_{h}(\cdot,\cdot)\) and the bandwidth \(h\). KDE ridge estimator estimates the ridge: \[\begin{split}\operatorname{ridge}\coloneqq\{& x\mid\Pi^{\perp}(\nabla^{2}\hat{p}(x))\nabla\hat{p}(x)=\mathbf{0},\\ &\lambda_{d+1}(\nabla^{2}\hat{p}(x))<0\},\end{split}\] (35)
where \(\Pi^{\perp}(\nabla^{2}\hat{p}(x))=I-UU^{T}\) with \(U\in\mathbb{R}^{D\times d}\) given by the top \(d\) principal eigenvectors of \(\nabla^{2}\hat{p}(x)\). Although the ridge in (35) does not admit closed-form solutions, we may use the subspace constrained mean shift (SCMS) algorithm to find the denoised point of any point \(x\) and thus the ridge [20]. Similarly, the LOG-KDE ridge estimation is merely KDE ridge estimation with \(\hat{p}(x)\) replaced by \(\log\hat{p}(x)\).
Fig. 3: Illustration of the effect of \(\lambda\) for fitting a circle. This figure displays the generated data and the locally fitted curves with \(\lambda=0.1,0.01,0\) (from left to right). In the last three figures, the rhombuses stand for the target samples, and the asterisks represent the place where the target samples are projected onto the fitted curves.
* **Mfit** Mfit, proposed by [22], estimates the following manifold: \[\left\{x\mid\Pi_{x}(\sum_{i}\alpha(x,x_{i})\Pi_{i}(x-x_{i}))=\mathbf{0} \right\},\] where \(\alpha(\cdot,\cdot)\) is a weight function, \(\Pi_{i}\) is the projection matrix onto the approximate normal space at \(x_{i}\), and \(\Pi_{x}\) is the projection matrix corresponding to the top \(D-d\) principal eigenvectors of the matrix \(\sum_{i}\alpha(x,x_{i})\Pi_{i}\). Again, we may use the SCMS algorithm to solve this problem.
* **Moving LS** The moving least square (LS) consists of two steps [27]. For any \(x\in\mathbb{R}^{D}\), we first find the \(d\)-dimensional hyperplane \(\mathcal{H}\) in \(\mathbb{R}^{D}\) minimizing the following quantity \[\mathcal{L}_{1}(\mathcal{H})=\min_{q\in\mathcal{H},x-q\perp\mathcal{H}}\sum_ {i}\alpha(q,x_{i})\rho^{2}(x_{i},\mathcal{H}),\] where \(\alpha(\cdot,\cdot)\) is a weight function and \(\rho(x_{i},\mathcal{H})\) is the distance between \(x_{i}\) and \(\mathcal{H}\). We can construct a coordinate system on \(\mathcal{H}\) with origin \(q\), where \(q\) is the projection point of \(x\) on \(\mathcal{H}\). Using this coordinate system, we can obtain the \(d\)-dimensional configuration \(x^{\prime}_{i}\) of \(x_{i}\) by projecting \(x_{i}\) to \(\mathcal{H}\). Second, we fit a polynomial function \(p:\mathbb{R}^{d}\rightarrow\mathbb{R}^{D}\) of a given degree by minimizing the following weighted squares \[\mathcal{L}_{2}(p)=\sum_{i}\alpha(q,x_{i})\|p(x^{\prime}_{i})-x_{i}\|_{2}^{2}.\] The denoised point of \(x\) is then given by \(p(0)\). In our experiments, we fix the degree of \(p\) as two.
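As referenced in the Local PCA item above, a minimal illustrative sketch of its projection step is as follows; the function name and the NumPy realization are for illustration only.

```
import numpy as np

def local_pca_denoise(x, X, K, d):
    # project x onto the d-dimensional affine subspace fitted to its K nearest neighbors
    dists = np.linalg.norm(X - x[:, None], axis=0)
    nbrs = X[:, np.argsort(dists)[:K]]                 # D x K neighborhood of x
    c = nbrs.mean(axis=1)                              # local center c_x
    M = (nbrs - c[:, None]) @ (nbrs - c[:, None]).T / K
    _, V = np.linalg.eigh(M)
    P = V[:, -d:] @ V[:, -d:].T                        # projector onto the top-d eigenvectors
    return c + P @ (x - c)
```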
### _A Synthetic Example_
In this subsection, we compare RQMF-E and RQMF-K with the above five competitors in a synthetic spherical fitting experiment. We simulate the noisy data \(\{x_{i}\}_{i=1}^{240}\) by generating 240 points uniformly from the unit sphere \(\mathcal{S}\) in \(\mathbb{R}^{3}\) first and then adding independent noise following \(\mathcal{N}(0,\sigma^{2}I)\) with \(\sigma=0.2\). All algorithms take in the noisy data \(\{x_{i}\}\) and then output the denoised data \(\{\widehat{x}_{i}\}\). To measure the performance of different algorithms, we use the mean squared error (MSE) and standard deviation (SD):
\[\mathrm{MSE}=\sum_{i=1}^{m}\|\widehat{x}_{i}-P_{\mathcal{S}}( \widehat{x}_{i})\|_{2}^{2}/m,\] \[\mathrm{SD}=\sqrt{\frac{1}{m}\sum_{i=1}^{m}(\|\widehat{x}_{i}-P_ {\mathcal{S}}(\widehat{x}_{i})\|_{2}^{2}-\mathrm{MSE})^{2}},\]
where \(P_{\mathcal{S}}(\cdot)\) is the projector onto the sphere.
We briefly discuss how RQMF-E and RQMF-K are implemented. RQMF-E denoises each data point \(y\) using the \(K\) nearest neighbors of \(y\), where \(K\) is a tuning parameter. To avoid overfitting, we require \(K\) to be larger than the rank \((d^{2}+3d+2)/2\) of \(RT(\Phi)\). For RQMF-K, we use the Gaussian kernel and set the bandwidth as \(h=d_{K}/3+3\), where \(d_{K}\) is the distance from \(y\) to its \(K\)-th nearest neighbor. For both RQMF algorithms, we set the regularization parameter \(\lambda\) as \(\lambda=s^{\prime-1}(-\delta)\), where \(\delta\) is a tuning parameter. To select \(K\) and \(\delta\) for both RQMFs, we compute the MSEs of both RQMF-E and RQMF-K with different \((K,\delta)\) as shown in Figure 5. When \(K\) is small, the local data approximates a plane and thus a smaller \(\delta\) (larger \(\lambda\)) yields a better performance for RQMF-E. When \(K\) is large, the local data exhibits nonlinear structures, thus it is better to use a larger \(\delta\) (smaller \(\lambda\)) for RQMF-E. The effect of \(\delta\) is visualized in Figure 6, which shows that a smaller \(\delta\) tends to fit flatter local surfaces than a larger \(\delta\), coinciding with the phenomenon in Figure 5.
Fig. 5: An illustration of the impact of \(\delta\) and \(K\) for spherical fitting for RQMF-E (left) and RQMF-K(right).
Fig. 6: An illustration of local fitted surface under the impact of \(\delta\) for RQMF-E when \(K=18\).
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline \(K\) & 7 & 10 & 13 & 16 & 19 & 22 & 25 & 28 \\ \hline RQMF-E & 0.0243 (0.0325) & **0.0165 (0.0227)** & **0.0122 (0.0172)** & **0.0115 (0.0164)** & **0.0148 (0.0256)** & **0.0130 (0.0222)** & **0.0149 (0.0344)** & **0.0156 (0.0356)** \\ RQMF-K & **0.0188 (0.0281)** & 0.0170 (0.0270) & 0.0155 (0.0231) & 0.0148 (0.0233) & 0.0153 (0.0242) & 0.0159 (0.0252) & 0.0170 (0.0266) & 0.0185 (0.0280) \\ Local PCA & 0.0437 (0.0521) & 0.0437 (0.0522) & 0.0435 (0.0520) & 0.0432 (0.0519) & 0.0434 (0.0521) & 0.0434 (0.0518) & 0.0434 (0.0517) & 0.0434 (0.0517) \\ KDE & 0.0333 (0.0480) & 0.0298 (0.0469) & 0.0302 (0.0483) & 0.0307 (0.0482) & 0.0323 (0.0493) & 0.0342 (0.0499) & 0.0369 (0.0492) & 0.0389 (0.0501) \\ LOG-KDE & 0.0278 (0.0417) & 0.0192 (0.0350) & 0.0159 (0.0344) & 0.0155 (0.0348) & 0.0168 (0.0349) & 0.0192 (0.0370) & 0.0230 (0.0395) & 0.0275 (0.0415) \\ Mfit & 0.0392 (0.0463) & 0.0333 (0.0424) & 0.0262 (0.0347) & 0.0215 (0.0314) & 0.0183 (0.0302) & 0.0160 (0.0299) & 0.0154 (0.0303) & 0.0185 (0.0376) \\ Moving LS & 0.0420 (0.0844) & 0.0673 (0.1155) & 0.1017 (0.1474) & 0.1506 (0.1666) & 0.1898 (0.1819) & 0.2264 (0.2033) & 0.2544 (0.2060) & 0.2642 (0.2072) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparisons of different methods on the synthetic spherical dataset in terms of MSE and SD (in bracket) with varying \(K\).
To characterize the good performance region for \(\delta\) and \(K\) in Figure 5, we set \(\delta=\max\{1,8K-125\}\) as a function of \(K\) for RQMF-E in this experiment. On the other hand, the performance of RQMF-K is relatively robust to the choice of \(\delta\), so we fix \(\delta=100\) for different choices of \(K\).
Now we compare RQMF-E and RQMF-K with their competitors. The results are collected in Table I. The results indicate that RQMF-E and RQMF-K outperform other methods for a wide range of \(K\). When \(K=7\), RQMF-K achieves the best performance, and when \(10\leq K\leq 28\), RQMF-E achieves the best performance among all methods. If we focus on the best performance of different algorithms, RQMF-E is still favored with the minimal \(\mathrm{MSE}=0.0115\) when \(K=16\). The superior performances of RQMF demonstrate the benefits of using the curvature information in the denoising procedure. It is also worth noting that RQMF outperforms Moving LS, which also fits a quadratic polynomial in its second step. This is possibly due to the fact that RQMF iteratively updates the local representations \(\Phi\) of data points, while Moving LS only uses the local coordinates learned in its first step. The estimation error of the local coordinates learned in Moving LS could lead to a degradation of the final fitting accuracy.
### _An Application to the MNIST Handwritten Digit Dataset_
This subsection compares RQMF-K and its competitors on the MNIST handwritten digit dataset [28]. Each image in the dataset consists of \(28\times 28\) pixels. We use \(g(a,b)\) to denote the grey value of an image at pixel \((a,b)\) and each image is determined by such a function \(g\). Only pixels with nonzero grey values are considered, so the dataset for each image is given by \(\{x_{i}=(a_{i},b_{i})\in\mathbb{R}^{2}\mid g(a_{i},b_{i})>0\}\). In this way, each image can be viewed as a perturbed one-dimensional manifold in \(\mathbb{R}^{2}\) and our goal is to recover the underlying manifold, which is also referred to as the principal curve [20].
The pixel closer to the principal curve tends to have a larger grey value. Thus, it is natural to use the grey values as weights in (34). Specifically, for each \(y\), we set the diagonal weight matrix \(W_{h}(y)\) in (34) by \((W_{h}(y))_{ii}=K_{h}(y,x_{i})g(x_{i})/s\) for some constant \(s\), where \(K_{h}(\cdot,\cdot)\) is a Gaussian kernel and the bandwidth \(h\) is given by the distance of \(y\) to \(y\)'s \(K\)-th nearest pixel. We use \(\delta=100\) to tune the parameter \(\lambda=s^{\prime-1}(-\delta)\).
Since the smooth curve contains the most significant signal and the isolated points can be thought of as noise, we measure the smoothness of the image using the convolution of the original image with the Laplace operator
\[w=\left[\begin{array}{ccc}0&1&0\\ 1&-4&1\\ 0&1&0\end{array}\right].\]
The obtained matrix \(I\ast w\) is a discrete version of the Laplace operator applied to a function \(f\), i.e., \(\Delta f(x,y)=\frac{\partial^{2}f}{\partial x^{2}}+\frac{\partial^{2}f}{\partial y^{2}}\). The average absolute value of the nonzero entries of \(I\ast w\) represents the degree of smoothness of the image \(I\); we report this quantity in Table II.
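As an illustrative sketch (using SciPy's two-dimensional convolution), this smoothness score can be computed as follows.

```
import numpy as np
from scipy.signal import convolve2d

def laplacian_smoothness(I):
    # average absolute value of the nonzero entries of I * w (discrete Laplacian of the image)
    w = np.array([[0., 1., 0.],
                  [1., -4., 1.],
                  [0., 1., 0.]])
    L = convolve2d(I, w, mode='same')
    nz = np.abs(L[L != 0])
    return nz.mean() if nz.size else 0.0
```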
For each image, we apply all six manifold learning algorithms to recover the principal curve, and the results are displayed in Figure 7 and Table II.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline Image ID & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline Original & 1.0637 & 1.2264 & 1.0306 & 0.8760 & 1.0332 & 0.9499 & 1.0246 & 1.0261 & 1.0068 & 1.0267 & 0.7178 & 1.0326 \\ RQMF-K & **0.4848** & **0.5795** & **0.5168** & **0.5785** & **0.5440** & **0.4851** & **0.5733** & **0.5319** & **0.6011** & **0.5388** & **0.5327** & **0.5470** \\ KDE & 1.0875 & 0.8876 & 0.7574 & 0.9382 & 0.8387 & 0.9246 & 0.7489 & 0.6942 & 0.6687 & 0.9070 & 0.6320 & 0.7359 \\ LOG-KDE & 1.1395 & 0.8861 & 0.7732 & 0.9460 & 0.8606 & 0.9329 & 0.7650 & 0.7523 & 0.6979 & 0.8812 & 0.6361 & 0.7764 \\ Mfit & 1.1769 & 0.9158 & 0.8567 & 0.9026 & 0.9520 & 0.9705 & 0.7956 & 0.8274 & 0.9107 & 0.8562 & 0.6189 & 0.7976 \\ Moving LS & 1.1926 & 1.3185 & 1.4126 & 1.0072 & 1.2117 & 1.1529 & 1.4050 & 1.3821 & 1.4752 & 1.2451 & 0.9490 & 1.4394 \\ \hline \hline \end{tabular}
* Numbers in bold and underlined indicate the best and second-best results in each column, respectively.
\end{table} TABLE II: Comparison of the smoothness, measured by the average absolute value of the nonzero entries of \(I\ast w\), for 12 example images.
Fig. 7: The first row displays 12 examples from the original MNIST dataset. The second to the sixth rows collect the results for the RQMF-K, KDE, LOG-KDE, Mfit, and Moving LS algorithms, respectively.
Table II shows that the denoised images produced by RQMF-K are smoother than the original images and the images output by KDE, LOG-KDE, and Mfit. While Moving LS also produces smoother images, it tends to twist the original images too much; see the images of digits 6, 8, and 9. In contrast, RQMF preserves the major contour of the original images much better.
### _An Application to Cryo-EM_
This subsection compares RQMF-E and RQMF-K with their competitors on the Cryo-EM dataset [29]. This dataset consists of \(m=2000\) images of shape \(64\times 64\). Each image is modeled as a vector in \(\mathbb{R}^{4096}\) with elements given by the grey values on all 4096 pixels. The whole dataset is then represented as \(\{\iota_{i}\}_{i=1}^{m}\subseteq\mathbb{R}^{4096}\), where \(\iota_{i}\) denotes the \(i\)-th image. The dataset inherently resides on a lower-dimensional manifold in \(\mathbb{R}^{4096}\) due to the image generation and processing of Cryo-EM, such as rotation, projection, and blurring by convolution [22]. In our experiment, we take the original data as the underlying manifold, add noise to the original data, apply all six manifold learning algorithms to the noisy data, and finally compare their recovery accuracies.
It is computationally expensive to directly fit the manifold in \(\mathbb{R}^{4096}\) using any manifold learning algorithm. To reduce the dimensionality, we approximate the original dataset by a \(D\)-dimensional subspace such that \(\iota_{i}\approx Ux_{i}\), where \(U\in\mathbb{R}^{4096\times D}\) collects the \(D\) principal eigenvectors of \(S=\frac{1}{m}\sum_{i}\iota_{i}\iota_{i}^{T}\) and \(x_{i}=U^{T}\iota_{i}\in\mathbb{R}^{D}\). We then fit a \(d\)-dimensional manifold in \(\mathbb{R}^{D}\) and map such a manifold to the pixel space \(\mathbb{R}^{4096}\) via \(U\). In what follows, we fix \(D=20\). We construct the noisy dataset as \(\{\iota_{i}^{\prime}=Ux_{i}^{\prime}\}_{i=1}^{m}\) with \(x_{i}^{\prime}\) given by
\[x_{i}^{\prime}=x_{i}+\epsilon_{i},\quad\epsilon_{i}\sim N(0,\sigma^{2}I_{D}), \quad i=1,\ldots,m.\]
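For illustration, the dimension-reduction and noise-injection steps can be sketched as follows; the eigendecomposition of the \(4096\times 4096\) matrix \(S\) is written naively and serves only as an example.

```
import numpy as np

def project_and_perturb(images, D, sigma, seed=0):
    # images: 4096 x m matrix of vectorized grey values; returns the basis U (4096 x D),
    # the clean D-dimensional scores x_i = U^T iota_i, and their noisy versions x_i'
    rng = np.random.default_rng(seed)
    m = images.shape[1]
    S = images @ images.T / m                      # second-moment matrix of the images
    _, vecs = np.linalg.eigh(S)
    U = vecs[:, -D:]                               # top-D principal eigenvectors
    X = U.T @ images
    X_noisy = X + sigma * rng.standard_normal(X.shape)
    return U, X, X_noisy
```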
Next, we recover a 5-dimensional manifold in \(\mathbb{R}^{D}\) by applying all six manifold learning algorithms to \(\{x_{i}^{\prime}\}_{i=1}^{m}\). For each method and each sample, we use the nearest \(K\) samples to fit the local manifold. In addition, we use the Gaussian kernel in (34) and for each \(y\), we set the bandwidth \(h\) as \(\|y-x_{i_{K}}\|_{2}\), where \(x_{i_{K}}\) is the \(K\)-th nearest neighbour of \(y\). Figure 8 visualizes 16 denoised images using these manifold learning methods with \(K=60\).
To evaluate the performance, we use the following mean squared error and standard deviation of the error between the solution and the real image:
Fig. 8: Visualization of the fitting and denoising results of 16 randomly chosen images by RQMF-E and five related manifold learning methods with \(K=60\).
Fig. 9: The performance of RQMF-E with different \(K\) and \(\delta\) for Cryo-EM.
\[\mathrm{MSE}=\frac{1}{m}\sum_{i=1}^{m}\|\iota_{i}-\widehat{\iota}_{i}\|_{2}^{2},\]
\[\mathrm{SD}=\sqrt{\frac{1}{m}\sum_{i=1}^{m}(\|\iota_{i}-\widehat{\iota}_{i}\|_{2 }^{2}-\mathrm{MSE})^{2}},\]
where \(\hat{\iota}_{i}=U\hat{x}_{i}\) for all \(i\) and \(\hat{x}_{i}\) is the \(i\)-th fitted point in \(\mathbb{R}^{D}\). Figure 9 displays the MSE of RQMF-E with different \(K\) and \(\delta\). It shows that RQMF-E achieves the best performance when \(K=48\) and \(\delta=50\).
To compare different methods, we collect the MSEs of all methods with varying \(K\) in Table III. It can be seen that RQMF-E outperforms other methods in terms of MSE and SD in most settings. If we focus on the best performance of each method, RQMF-E is again favored with an error 1.3482 when \(K=48\). Therefore, by taking the curvature information into account, RQMF-E exhibits a stronger expressive ability than other methods and thus achieves better denoising performance on this dataset.
## 7 Concluding Remarks
This paper proposes a quadratic matrix factorization framework to learn the structure of the observed data. We develop an alternating minimization algorithm to solve the non-convex quadratic matrix factorization problem as well as a regularized version. Theoretical convergence properties are established. We also present a novel transformation-based parameter-tuning method for regularized quadratic matrix factorization and intuitively argue its advantages over naively tuning the original regularization parameter. Furthermore, we apply the proposed methods to manifold learning problems. We demonstrate the superiority of the proposed method numerically in a synthetic manifold learning dataset and two real datasets, i.e., the MNIST handwritten dataset and a cryogenic electron microscopy dataset.
There are several interesting directions for future research. First, our work and most related works assume the intrinsic dimension \(d\) is known _a priori_, while this information is often not available in practice. Thus, it remains an important question to estimate the intrinsic dimensionality \(d\) under the quadratic matrix factorization framework. It would also be interesting to characterize the impact if \(d\) is misspecified. Second, the noises in the signal-plus-noise model may be heavy-tailed or even adversarial, so it is important to develop robust algorithms. Third, non-negative constraints are widely used in linear matrix factorization [6, 12]. It is interesting to study how non-negative constraints can be used in QMF to enhance performance.
|
2310.15793
|
Improving generalization in large language models by learning prefix
subspaces
|
This article focuses on large language models (LLMs) fine-tuning in the
scarce data regime (also known as the "few-shot" learning setting). We propose
a method to increase the generalization capabilities of LLMs based on neural
network subspaces. This optimization method, recently introduced in computer
vision, aims to improve model generalization by identifying wider local optima
through the joint optimization of an entire simplex of models in parameter
space. Its adaptation to massive, pretrained transformers, however, poses some
challenges. First, their considerable number of parameters makes it difficult
to train several models jointly, and second, their deterministic parameter
initialization schemes make them unfit for the subspace method as originally
proposed. We show in this paper that "Parameter Efficient Fine-Tuning" (PEFT)
methods, however, are perfectly compatible with this original approach, and
propose to learn entire simplex of continuous prefixes. We test our method on a
variant of the GLUE benchmark adapted to the few-shot learning setting, and
show that both our contributions jointly lead to a gain in average performances
compared to sota methods. The implementation can be found at the following
link: https://github.com/Liloulou/prefix_subspace
|
Louis Falissard, Vincent Guigue, Laure Soulier
|
2023-10-24T12:44:09Z
|
http://arxiv.org/abs/2310.15793v1
|
# Improving generalization in large language models by learning prefix subspaces
###### Abstract
This article focuses on large language models (LLMs) fine-tuning in the scarce data regime (also known as the "few-shot" learning setting). We propose a method to increase the generalization capabilities of LLMs based on neural network subspaces. This optimization method, recently introduced in computer vision, aims to improve model generalization by identifying wider local optima through the joint optimization of an entire simplex of models in parameter space. Its adaptation to massive, pretrained transformers, however, poses some challenges. First, their considerable number of parameters makes it difficult to train several models jointly, and second, their deterministic parameter initialization schemes make them unfit for the subspace method as originally proposed. We show in this paper that "Parameter Efficient Fine-Tuning" (PEFT) methods, however, are perfectly compatible with this original approach, and propose to learn entire simplex of continuous prefixes. We test our method on a variant of the GLUE benchmark adapted to the few-shot learning setting, and show that both our contributions jointly lead to a gain in average performances compared to sota methods. The implementation can be found at the following link: [https://github.com/Liloulou/prefix_subspace](https://github.com/Liloulou/prefix_subspace)
## 1 Introduction
The emergence of large language models (Devlin et al., 2019; Radford et al., 2019; Raffel et al., 2019) in recent years has significantly transformed the applications of deep learning methods in natural language processing. These models, pretrained in an unsupervised fashion on massive textual datasets, enable the fine-tuning of powerful models with just a few thousand -or even hundred- observations. They achieve generalization performances that required millions of observations just a few years ago, particularly when used in conjunction with discrete instruction prompts (Brown et al., 2020).
Extending these discrete methods to the learning of continuous prompts (Lester et al., 2021), which conceptually falls within the framework of so-called "Parameter Efficient Fine-Tuning" (PEFT) methods (Houlsby et al., 2019; Bapna and Firat, 2019), poses certain challenges in the context of few-shot learning. One such challenge is the issue of model adjustment guidance through validation metric during gradient descent (Mao et al., 2022). Traditionally, in the process of model fitting, approximately one-third of the training dataset is excluded beforehand to create a validation (or development) set dedicated to inferring an unbiased estimation of the model's performance (Hastie et al., 2001). This metric is utilized both during gradient descent (to estimate convergence of the descent algorithm or inform early stopping heuristics), and subsequently to guide hyperparameter searches typically employed in the fine-tuning of large language models. However, the validity of this approach relies on the assumption that the distribution of the validation set is representative of the real observed phenomenon. This assumption quickly loses its relevance in the context of few-shot learning, where at most a few tens of observations are available for estimating the validation metric. This notion has become problematic enough in present times that a portion of academic literature on continuous learning with small datasets presents experiment results utilizing validation sets that are unrealistic and artificial, containing several orders of magnitude more observations than the training set used for the model adjustment itself (Mao et al., 2022).
In the machine learning community, characterizing local minima with desirable generalization properties has been a topic of interest for decades (Garipov et al., 2018; Zhang et al., 2021; Hochreiter and Schmidhuber, 1997). From flat minima (Hochreiter and Schmidhuber, 1997) to mode connectivity (Garipov et al., 2018), this body of work
has provided the basis for several practical observations regarding the connection between the properties of local minima and a model's generalization abilities.
The concept of learning neural network subspaces (Wortsman et al., 2021) is an example of a method built using these considerations. This approach proposes to find not just a local minimum of the cost function in the model's parameter space, but an entire simplex associated with low values of this objective. This additional constraint is meant to bias the descent algorithm towards wider minima, empirically associated with better generalization (Dziugaite and Roy, 2018). In addition, the availability of this entire simplex of models allows for the inference of not only one scalar development metric, but an entire distribution, at any given moment during model fine-tuning. These two phenomena, become particularly relevant when viewed through the lens of large language models, and most especially for few-shot learning problems, where the model's ability to generalize a concept class from a limited number of examples is crucial.
The contributions of this article are as follows. First, we introduce the first adaptation of the subspace method to large language models through subspace adjustment of prefixes (a PEFT method similar to the state-of-the-art continuous prompt adjustment in current academic literature). Next, this article proposes to leverage certain natural advantages offered by the subspace method to revisit the concept of guiding model adjustment through the validation metric. We will empirically demonstrate that the combination of these two ideas leads to a significant improvement in terms of average prediction on natural language understanding tasks provided by the GLUE benchmark (Wang et al., 2018). Finally, an ablation study will be presented to provide some insights into the mechanisms underlying this prediction improvement.
## 2 Background
In this section, we review the two main concepts used in this article, neural network subspaces (Wortsman et al., 2021) and prefix-tuning (Li and Liang, 2021).
### Mode connectivity and network subspaces
Learning neural network subspaces. The subspace method proposes to obtain the simplex of solutions (in the parameter space of the studied model) through a single optimization loop as follows:
* A simplex of \(n\) models is built through random initialization of each of its vertices using standard random initialization schemes.
* For each gradient descent iteration, a model is built as a weighted average of all vertices (to sample uniformly from the simplex they define)
* The sampled model is used for inference and cost function computation
* The gradient is backpropagated through all vertices to update all of their parameters
So far, the sampling procedure does not depend at all on the connectionist aspect of neural networks and simply considers a model as a vector of learnable parameters. However, the vast majority of deep learning models are defined as sequences of non-linear transformations (Goodfellow et al., 2016). Therefore, it seems natural to incorporate, in one way or another, this sequential structure of neural models into the sampling procedure. To do so, Wortsman et al. 2021 propose to sample each layer's parameters independently. This variant, known as the "layer-by-layer" method, is empirically associated with better generalization performances.
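A minimal PyTorch-style sketch of this layer-by-layer sampling step is given below for illustration; it is not the implementation from the repository linked in the abstract, and the function and variable names are illustrative only.

```
import torch

def sample_from_simplex(vertices):
    # vertices: list of n state_dicts (the simplex vertices); returns a state_dict drawn
    # uniformly from their convex hull, with independent weights per layer ("layer-by-layer")
    n = len(vertices)
    sampled = {}
    for name in vertices[0]:
        alpha = torch.distributions.Dirichlet(torch.ones(n)).sample()
        sampled[name] = sum(a * v[name] for a, v in zip(alpha, vertices))
    return sampled
```

During training, the sampled parameters are used for the forward pass and the gradient is propagated back to every vertex, as described in the list above.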
After model fitting, the simplex can be used either in the context of ensemble methods, or simply by using the simplex's centroid as the final model (Wortsman et al., 2021). The latter case is the one we focus on in this article, mainly because of the generalization properties it empirically displays.
Subspace centroid and generalization. Several explanations have been proposed to explain these interesting generalization properties. One possible justification for this property, visualized in Figure 1, lies in the idea that a model obtained through traditional training would be located at the periphery of a local minimum of the objective function, typically more susceptible to generalization errors (Izmailov et al., 2018; Dziugaite and Roy, 2018). On the contrary, moving within the subspace allows us to "cross" the local minimum, in order to obtain a model associated with a more stable region of the objective function (Dziugaite and Roy, 2018).
Application to LLMs. From the definition of the subspace method, it becomes clear why it has never been applied (at least to our knowledge) to large language models. First, it requires storing not just one model during the optimization loop, but all the vertices of the studied simplex. This additional memory complexity constraint is likely manageable in the case of adjusting a small convolutional network in computer vision. However, language models are known for their substantial size, reaching up to hundreds of billions of parameters, to the extent that the traditional fine-tuning of a single model already poses a considerable technical challenge for most specialized computing infrastructures. Therefore, the idea of simultaneously adjusting not just one but up to six models (the typical number of simplex vertices used in the subspace method) appears to be impractical.
In addition, and more fundamentally, this approach relies on a _random_ initialization of the descent algorithm to construct an initial simplex of models. In contrast, pretrained language models are inherently initialized in a _deterministic_ manner. Their entire transfer learning capabilities rely on the data representations captured into their parameter vectors during pretraining.
### Prefix-tuning
On the other hand, continuous prompt adjustment methods and, by extension, PEFT methods (Houlsby et al., 2019; Hu et al., 2021; Li and Liang, 2021; Liu et al., 2022), propose not to directly fine-tune language models, but instead introduce new learnable parameters (such as the embeddings of virtual tokens in continuous prompt learning) and adjust them while keeping the language model's pretrained parameters frozen. The main advantage of this approach lies in the ability of these "adapted" models to replicate (or even improve in contexts associated with small sample sizes) the performances of language models while reducing the number of learnable parameters by several orders of magnitude. In addition, some of these approaches (Li and Liang, 2021) typically require random initialization of the additional parameters they introduce into the model, making them particularly promising candidates for adapting the subspace method to large language models.
Prompt-based approaches (Liu et al., 2022), on the other hand, are based on the adjustment of \(n\) embedding vectors \(\{E_{i}\}_{i=1}^{n}\), typically concatenated at the beginning of the input embedding sequence of the language model \(LM_{\Phi}\) parameterized by \(\Phi\). In other words, for an input sequence of \(l\) tokens, \(\{I_{i}\}_{i=1}^{l}\), we construct a predictive model based not on the output of the language model itself:
\[LM_{\Phi}(\{I_{i}\}_{i=1}^{l}) \tag{1}\]
but on
\[LM_{\Phi}(concat(\{E_{i}\}_{i=1}^{n}\,,\{I_{i}\}_{i=1}^{l})) \tag{2}\]
The adjustment of the predictive model is done solely by adjusting the virtual tokens \((E_{i})_{i=1}^{n}\), while keeping the parameters of the language model \(\Phi\) frozen.
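As an illustration, the sketch below shows how the virtual-token embeddings of Eq. (2) can be prepended to an input embedding sequence; the module name and dimensions are hypothetical choices, not taken from a specific implementation.

```python
import torch
import torch.nn as nn

class VirtualTokenPrompt(nn.Module):
    """Prepends n learnable virtual-token embeddings to the input embedding
    sequence, as in Eq. (2); the language model itself stays frozen."""

    def __init__(self, n_virtual=20, embed_dim=768):
        super().__init__()
        self.virtual = nn.Parameter(torch.randn(n_virtual, embed_dim) * 0.02)

    def forward(self, input_embeds):                  # (batch, seq_len, embed_dim)
        batch_size = input_embeds.shape[0]
        prefix = self.virtual.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)

# Usage (schematic): feed the returned tensor to a frozen language model
# through an embeddings-level input (e.g. an inputs_embeds argument).
```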
To increase the expressiveness of this approach (which is particularly limited in terms of the number of learnable parameters), prefix tuning (Li and Liang, 2021), chosen in this article as a candidate for applying the subspace method, proposes to concatenate these virtual tokens not to the input sequence of the model, but to the Key and Value sequences used as inputs to the multiplicative attention modules in each layer of the language model.
In a similar approach to continuous prompt finetuning, the adjustment of prefixes is done solely by adjusting (via gradient descent) the virtual tokens, while keeping the parameters of the language model itself frozen. However, directly learning these embeddings proves to be particularly unstable (Li and Liang, 2021). Therefore, it is customary not to adjust them directly, but instead to use a reparameterization trick, which involves concatenating transformed versions of the prefixes to the Key and Value sequences.
Figure 1: Generalization evolution of a language model adjusted on a prefix line, with alpha the weighting between both line extremities. Generalization performance follows a curve similar to a parabola with a maximum at its center.
This transformation is parameterized by a two-layer feed-forward network, as follows (a code sketch is given after the symbol definitions):
\[P_{v}=MLP_{v}(E)\quad\text{and}\quad P_{k}=MLP_{k}(E) \tag{3}\]
With :
* \(P_{v}\) and \(P_{k}\) the prefixes prepended to the Values and Keys sequences in the model, respectively
* \(E\) a sequence of embedding vectors
* \(MLP_{v}\) and \(MLP_{k}\) the reparametrization perceptrons for the Values and Keys prefixes, respectively
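A minimal sketch of the reparameterization of Eq. (3), assuming a single set of prefixes shared across layers and a Tanh non-linearity; the dimension names and hidden size are illustrative choices, not those of Li and Liang (2021).

```python
import torch
import torch.nn as nn

class PrefixReparameterization(nn.Module):
    """Maps a learnable embedding sequence E to Key and Value prefixes
    through two separate two-layer feed-forward networks, as in Eq. (3)."""

    def __init__(self, prefix_len=16, embed_dim=512, hidden_dim=512, model_dim=768):
        super().__init__()
        self.E = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)
        self.mlp_k = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.Tanh(),
                                   nn.Linear(hidden_dim, model_dim))
        self.mlp_v = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.Tanh(),
                                   nn.Linear(hidden_dim, model_dim))

    def forward(self):
        # P_k and P_v are prepended to the Key and Value sequences of the
        # attention modules; the pretrained language model stays frozen.
        return self.mlp_k(self.E), self.mlp_v(self.E)
```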
## 3 Learning prefix subspaces
From this definition, it appears that the two main issues that made applying the subspace method to large language models cumbersome are both alleviated when using prefix tuning. Indeed, as with any PEFT method, the number of trainable parameters in the prefixes is orders of magnitude lower than the number of model parameters, making it easy to store entire simplexes in memory. Moreover, these prefixes are essentially embedding vectors that require random initialization before model fine-tuning, which allows us to easily sample an initial simplex by randomly initializing all of its vertices.
### Model formalization
The adaptation of the subspace method to prefix-tuning can be done through two distinct approaches:
1. Application to the learnable parameters of the model itself, namely the initial embedding and the reparameterization perceptron.
2. Application of the method to the prefixes themselves, specifically the output of the reparameterization module.
In this article, we propose to investigate the second option, essentially considering the reparameterization module as a training artifact, and build our proposed prefixes as follows:
\[P_{v}=\Sigma_{i=1}^{n}\alpha_{i}MLP_{v,i}(E_{i}) \tag{4}\]
\[P_{k}=\Sigma_{i=1}^{n}\alpha_{i}MLP_{k,i}(E_{i}) \tag{5}\]
\[(\alpha_{i})_{i=1}^{n}\in[0,1]^{n},\qquad\Sigma_{i}\,\alpha_{i}=1 \tag{6}\]
With :
* \(P_{v}\) and \(P_{k}\) the prefixes prepended to the Values and Keys sequences in the model, respectively
* \(E_{i}\) the sequence of embedding vectors associated with simplex vertex \(i\)
* \(MLP_{v,i}\) and \(MLP_{k,i}\) the reparametrization perceptrons associated with simplex vertex \(i\) for the Values and Keys prefixes, respectively
It is also important to consider the adaptation of the method's "layer-wise" variant. Indeed, the prefix adjustment does not rely on introducing a conventional perceptron structure into the language model, but rather on modifying the operation of the multi-head attention module. In this article, we propose to extend this "layer-wise" variant to each layer's Keys and Values prefixes. Thus, during each descent iteration, the Keys and Values prefixes of each layer will be independently sampled. Moreover, this sampling will be performed independently for all observations, unlike the traditional approach that prefers creating a single model per descent iteration step.
Additionally, the prediction head of the model is typically randomly initialized. Therefore, we choose to apply the subspace method to the prediction head as well, as described in Section 2. For consistency, the variant of parameter sampling at the observation level, rather than the batch level, is also applied to it.
In summary, we propose to adjust a simplex with \(n\) prefix vertices as follows (a code sketch is given after this list):
* Independent initialization of \(n\) reparameterization systems
* Computation of the \(n\) vertices of the simplex for each descent iteration
* Construction of prefixes used for cost function inference and gradient calculation through independent uniform sampling for each observation, each layer, as well as for the prefixes of the Key and Value sequences.
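The sketch below illustrates this sampling scheme; for brevity it stores raw Key/Value prefixes for each vertex and omits the reparameterization perceptrons of Eqs. (4)-(5), and all tensor shapes and default values are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PrefixSubspace(nn.Module):
    """Simplex of n_vertices Key/Value prefix sets. The prefixes used in a
    forward pass are convex combinations of the vertices, with the convex
    weights drawn independently for every observation, every layer, and
    separately for Keys and Values (the layer-wise variant above)."""

    def __init__(self, n_vertices=6, n_layers=12, prefix_len=16, model_dim=768):
        super().__init__()
        shape = (n_vertices, n_layers, prefix_len, model_dim)
        self.keys = nn.Parameter(torch.randn(shape) * 0.02)
        self.values = nn.Parameter(torch.randn(shape) * 0.02)
        self.n_vertices, self.n_layers = n_vertices, n_layers

    def _alpha(self, batch_size, device):
        # Uniform sampling over the simplex, independently for every
        # observation and every layer: shape (batch, n_layers, n_vertices).
        dist = torch.distributions.Dirichlet(torch.ones(self.n_vertices, device=device))
        return dist.sample((batch_size, self.n_layers))

    def forward(self, batch_size, device="cpu"):
        a_k = self._alpha(batch_size, device)
        a_v = self._alpha(batch_size, device)   # Keys and Values sampled separately
        p_k = torch.einsum("bln,nlpd->blpd", a_k, self.keys)
        p_v = torch.einsum("bln,nlpd->blpd", a_v, self.values)
        return p_k, p_v   # prepended to each layer's Key/Value sequences
```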
### Subspace learning and stochastic inference of development metrics
The adjustment of a large language model is typically guided by estimating a performance metric on a validation set, both during hyperparameter search and the descent process itself, where the best model according to this scalar value is selected as
the final model. The estimation of this metric in a subspace learning framework raises questions: since we adjust not a single model but an entire simplex, there are several ways in which the validation metric can be estimated.
Since we limit ourselves in this article to using this method to extract the centroid associated with better generalization performance, it would be natural to estimate the metric with respect to said centroid. However, having not just a single model but an entire simplex, together with the additional information it provides about the nature of the obtained local minimum, can be put to use. This is particularly true in a few-shot learning context: as mentioned earlier, for validation datasets with small sample sizes (typically <100), estimating this metric can become unreasonably noisy.
Therefore, we propose using the entire simplex to "augment" the estimation of the development metric. Instead of using the simplex's centroid for inference, we use _multiple_ randomly sampled models, one for each observation of the validation set. In other words, for every development metric estimation, we concatenate the development set multiple times and perform inference under the same conditions as during gradient descent iterations, i.e., with an independently sampled model for each observation. We set the number of development set concatenations to 10 in all experiments presented in this article that employ this stochastic inference approach.
Nevertheless, we still select the centroid of the simplex as the final model. Indeed, the determinism of a model remains a desirable property in production settings.
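A sketch of the stochastic estimation procedure follows; `model`, `metric_fn`, and the `sample="per_observation"` keyword are hypothetical interfaces standing in for a subspace model that draws a new point of the simplex for every observation.

```python
import torch

def stochastic_dev_metric(model, dev_loader, metric_fn, n_repeats=10):
    """Estimates a development metric by replicating the dev set n_repeats
    times, with a new point of the simplex sampled for every observation,
    exactly as during training. The model kept at the end of fine-tuning
    remains the deterministic centroid of the simplex."""
    preds, labels = [], []
    model.eval()
    with torch.no_grad():
        for _ in range(n_repeats):                           # "concatenate" the dev set
            for x, y in dev_loader:
                logits = model(x, sample="per_observation")  # hypothetical stochastic forward
                preds.append(logits.argmax(dim=-1))
                labels.append(y)
    return metric_fn(torch.cat(preds), torch.cat(labels))
```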
## 4 Experimental protocol
### Datasets
All experiments described in this article to evaluate the predictive performance of the proposed method are conducted with BERT-base-cased on datasets constructed from the GLUE benchmark Wang et al. (2018), which consists of 8 English language comprehension tasks, all formulated as classification problems. However, these datasets have significantly larger sample sizes than what would be expected in the few-shot learning setting, and they do not make their test datasets available. As a consequence, we do not directly use these datasets, but instead adapt them to a format more suitable to our problem using a methodology similar to that presented in Mao et al. (2022). Namely, we build few-shot learning classification datasets with varying sample sizes (50, 100, 200, and 500 observations) through random sampling. However, our method of constructing these corpora differs from the original authors' on several key points.
First, the authors chose to construct validation sets of 1,000 observations for all their training datasets, which, in our opinion, is not realistic in a few-shot learning context (a validation set generally does not contain ten times more examples than its training set). Secondly, they use the GLUE benchmark's validation sets as test sets. However, some of these validation sets have small sample sizes (277 for RTE, 408 for MRPC), which could potentially introduce noise in the estimation of performance metrics. Therefore, for each reference dataset, a dataset of sample size \(K\) is constructed as follows:
* The training and validation datasets are concatenated into a single dataset.
* Half of the observations (capped at 5,000 observations) are excluded from this dataset to construct a test dataset that is common to all experiments.
* \(K\) observations are then selected through uniform sampling and divided into a training and validation dataset following a 70/30 proportion.
For each task in the benchmark and for each selected sample size, 10 datasets are constructed using this methodology to allow for replication of experiments on different datasets, enable estimation of average performance, and test the significance at a 5% threshold of the obtained differences (via bootstrap).
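The following sketch, under the assumption that each example is a (text, label) pair, illustrates the construction described above; the function names and the fixed seed used for the shared test split are illustrative.

```python
import numpy as np

def hold_out_test(examples, cap=5000, seed=0):
    """Steps 1-2: pool the original train and validation sets, then hold out
    half of the pool (capped at 5,000 observations) as a test set shared by
    all experiments on the task (hence the fixed seed)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(examples))
    n_test = min(len(examples) // 2, cap)
    test = [examples[i] for i in idx[:n_test]]
    pool = [examples[i] for i in idx[n_test:]]
    return pool, test

def sample_few_shot(pool, k, seed):
    """Step 3: draw K observations from the remaining pool and split them
    70/30 into training and validation sets; varying the seed yields the
    10 replicated datasets used for significance testing."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pool), size=k, replace=False)
    n_train = int(round(0.7 * k))
    train = [pool[i] for i in idx[:n_train]]
    dev = [pool[i] for i in idx[n_train:]]
    return train, dev
```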
### Baselines
We compare our method to five baseline fine-tuning approaches: standard fine-tuning and four alternative PEFT methods. Similar to prefix-tuning, these alternative fine-tuning approaches are based on the idea of freezing the language model's parameters and introducing a fraction of new adjustable parameters
(typically with a cardinality several orders of magnitude lower than that of the model itself), but they differ in how they introduce these new parameters into the model:
* Standard Adapter (Houlsby et al., 2019), which typically involves introducing one or more two-layer bottleneck perceptrons at different stages of a Transformer layer. This was the first PEFT method to be introduced and is the most recognized.
* Low-Rank Adaptation (LoRA) (Hu et al., 2021), which reparametrizes the projection matrices of Values and Queries prior to the multi-head multiplicative attention module using two-layer linear bottleneck perceptrons. This was the first PEFT method to propose different transformations for different elements of the attention module.
* UniPELT (Mao et al., 2022), a fusion method combining adapters, LoRA, and prefixes to benefit from the advantages of each method (without suffering from their potential respective drawbacks).
* Standard prefix tuning, a crucial reference method to estimate the portion of the performance of the proposed method that can be attributed to it.
For the proposed approach and all aforementioned baselines, we follow the same procedure for model fitting and hyperparameter search as proposed by (Mao et al., 2022), and all experiment settings can be found in the annex. To ensure optimal comparability, the hyperparameter choices for the proposed method will be selected to exactly match those of the prefix tuning baseline, which were also determined for the first time in a text classification framework by (Mao et al., 2022). The subspaces adjusted in the experiments are all simplexes with 6 vertices.
### Model variants
In order to better identify the impact of the different aspects of the proposed method, we also experiment with the following variants:
* Same method with 2-vertex simplexes (i.e., a line)
* Same method without stochastic validation inference
* Same method without prediction head subspaces
* Same method without prefix subspace (i.e., only on prediction heads)
## 5 Results
Overall effectiveness. The performances of all selected PEFT methods, as well as the proposed approach, are presented in Table 1 for every task of the GLUE benchmark and for the different selected sample sizes. Overall, the method significantly outperforms most baselines at every sample size, and significantly outperforms all baselines for \(K=500\). The method shows a notably higher gain for the lower sample sizes (\(K<200\)); note, however, that experimental results still suffer from high variance in this regime.
The comparison between the proposed approach and prefix tuning is particularly interesting. Indeed, both approaches have the same exact functional form. In terms of statistical significance, the proposed method outperforms its classical counterpart 12 times:
* On QNLI, SST-2, and STS-B for K=50 and K=100
* On MRPC and QQP for K=200
* On MNLI, QNLI, MRPC, and STS-B for K=500
However, it is statistically surpassed only once, on MRPC for K=50, which is even more surprising considering that the difference between the two methods in this experiment is 0.4%. Moreover, the proposed method becomes significantly superior again on this task once the sample size increases up to 500 observations, showing a significant increase in generalization performances.
Comparison between PEFT methods. More broadly, the proposed method is significantly surpassed only 6 times across all experiments:
* On MRPC by the prefix and LoRA methods for K=50
* On RTE by the Adapter method for K=50
* On STS-B by the UniPELT method for K=50
* On CoLA by the Adapter method for K=200 and K=500
It is noteworthy that most of these occurrences are observed for K=50 (and therefore validation sets of 15 observations), where model fitting becomes particularly challenging.
On the other hand, the proposed method significantly outperforms one of the other baseline methods in the conducted experiments a total of 80 times, demonstrating a clear advantage in terms of predictive power.
It is particularly noticeable that the majority of experiments where the proposed method outperforms the reference methods are mainly on three datasets: QNLI, SST-2, and QQP. Furthermore, the ability of the proposed method to significantly surpass the reference methods on these tasks does not seem to depend on the sample size of the datasets.
However, it is difficult to identify what distinguishes these datasets from those where the proposed method remains comparable to the reference methods. Both groups feature an equal number of similar tasks and imbalanced datasets.
Comparison between approach variants. The results for the investigated variants, which are displayed in Table 2, can be summarized as follows:
1. The use of two-vertex simplexes shows slightly inferior performance compared to the proposed method for K=50, and similar performance thereafter.
2. The use of the validation-guided subspace method estimated deterministically collapses for K=50, K=100, and K=200 (cases where the performance is even lower than the classical prefix fitting method) and eventually becomes equivalent to the proposed method.
3. The use of prefix subspaces without a head subspace is consistently surpassed by the proposed method.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline
**Method (number of params.)** & **MNLI** & **QNLI** & **SST-2** & **QQP** & **CoLA** & **STS-B** & **MRPC** & **RTE** & **Avg.** \\ \hline \multicolumn{10}{l}{[_K = 50]_} \\ Fine-tuning (108M) & 35.5 & 65.9 & \(57.57_{*}\) & \(45.6_{*}\) & **3.5** & 45.1\({}_{*}\) & 81.1 & 50.6 & \(48.1_{*}\) \\ Adapter (5M) & 35.6 & \(62.6_{*}\) & \(64.7_{*}\) & \(35.3_{*}\) & 0.0 & 59.7 & 80.2 & \(\mathbf{53.1}^{*}\) & \(48.9_{*}\) \\ LoRA (0.3M) & 35.7 & \(63.9_{*}\) & \(68.4_{*}\) & \(47_{*}\) & 1.0 & \(56.5_{*}\) & \(\mathbf{81.4}\) & \(\underline{52.8}\) & \(50.8_{*}\) \\ UniPELT (1.8M) & 35.3 & \(62.4_{*}\) & \(73.1_{*}\) & \(42.3_{*}\) & 1.1 & \(\mathbf{64.3}^{*}\) & \(80.7^{*}\) & 51.8 & \(51.4_{*}\) \\ Prefix-tuning (0.9M) & \(\mathbf{37.8}\) & \(63.5_{*}\) & \(74.9_{*}\) & \(\underline{53.1}\) & \(\underline{1.8}\) & \(59.2_{*}\) & \(80.4^{*}\) & 52.6 & \(\underline{52.9}\) \\ Prefix subspaces (0.9M) & \(\mathbf{36.6}\) & \(\mathbf{66.6}\) & \(\mathbf{80.1}\) & \(\mathbf{54.3}\) & 0.8 & \(\underline{61.1}\) & 80.0 & 52.2 & \(\mathbf{54.0}\) \\ \hline \multicolumn{10}{l}{[_K = 100]_} \\ Fine-tuning (108M) & \(35.5_{*}\) & \(68.9_{*}\) & \(73.9_{*}\) & \(52.6_{*}\) & \(3.0_{*}\) & \(64.1_{*}\) & \(\underline{81.3}\) & 52.1 & \(53.9_{*}\) \\ Adapter (5M) & \(36.3_{*}\) & \(66.7_{*}\) & \(72.8_{*}\) & \(54.0_{*}\) & 7.2 & \(63.8_{*}\) & 80.5 & 53.0 & \(54.3_{*}\) \\ LoRA (0.3M) & 37.3 & \(64.9_{*}\) & \(73.2_{*}\) & \(54.2_{*}\) & 7.3 & \(60.4_{*}\) & \(\underline{81.3}\) & 52.9 & \(53.9_{*}\) \\ UniPELT (1.8M) & 37.7 & \(66.9_{*}\) & \(79.1_{*}\) & \(53.6_{*}\) & 5.1 & \(\underline{68.4}\) & \(79.7_{*}\) & \(52.0_{*}\) & \(55.3_{*}\) \\ Prefix-tuning (0.9M) & \(\mathbf{38.3}\) & \(69.4_{*}\) & \(\underline{80.8}_{*}\) & \(\underline{57.2}\) & \(\mathbf{8.1}\) & \(66.6_{*}\) & 81.1 & \(\mathbf{54.2}\) & \(\underline{57.0}\) \\ Prefix subspaces (0.9M) & \(\mathbf{38.5}\) & \(\mathbf{70.8}\) & \(\mathbf{82.5}\) & \(\mathbf{59.6}\) & \(\underline{7.8}\) & \(\mathbf{68.3}\) & \(\mathbf{81.5}\) & \(\underline{54.1}\) & \(\mathbf{57.9}\) \\ \hline \multicolumn{10}{l}{[_K = 200]_} \\ Fine-tuning (108M) & 42.3 & \(\mathbf{71.9}\) & \(80.8_{*}\) & 63.0 & 20.2 & \(69.0_{*}\) & 80.8 & 54.6 & \(60.3_{*}\) \\ Adapter (5M) & 42.7 & \(69.1_{*}\) & \(83.1_{*}\) & \(59.5_{*}\) & \(\mathbf{26.5}^{*}\) & \(70.3_{*}\) & 80.7 & \(\mathbf{56.2}\) & 61.0 \\ LoRA (0.3M) & 41.0 & \(67.1_{*}\) & \(82.2_{*}\) & \(61.2_{*}\) & 19.8 & \(67.8_{*}\) & 80.1 & 54.5 & \(59.2_{*}\) \\ UniPELT (1.8M) & 41.6 & \(70.2\) & \(82.8_{*}\) & \(58.7_{*}\) & 16.4 & \(\mathbf{72.8}\) & \(\mathbf{81.7}\) & 54.9 & \(59.9_{*}\) \\ Prefix-tuning (0.9M) & \(\mathbf{44.9}\) & \(\underline{71.4}\) & \(\mathbf{84.2}\) & \(63.0_{*}\) & 22.2 & 71.3 & \(79.6_{*}\) & \(\underline{56.0}\) & \(\underline{61.6}\) \\ Prefix subspaces (0.9M) & \(\mathbf{44.7}\) & \(\underline{71.2}\) & \(\underline{84.1}\) & \(\mathbf{64.4}\) & 21.1 & \(\underline{72.3}\) & \(\underline{81.6}\) & \(\mathbf{55.9}\) & \(\mathbf{61.9}\) \\ \hline \multicolumn{10}{l}{[_K = 500]_} \\ Fine-tuning (108M) & \(52.7_{*}\) & \(74.3_{*}\) & \(85.4_{*}\) & \(66.8\) & \(32.2_{*}\) & 78.0 & \(\underline{82.5}\) & 59.8 & \(66.5_{*}\) \\ Adapter (5M) & \(51.1_{*}\) & \(72.4_{*}\) & \(85.4_{*}\) & \(65.7_{*}\) & \(\mathbf{38.9}^{*}\) & \(76.1_{*}\) & \(81.9_{*}\) & 59.8 & \(66.4_{*}\) \\ LoRA (0.3M) & \(50.1_{*}\) & \(73.6_{*}\) & \(84.6_{*}\) & \(66.5\) & \(35.3\) & \(75.6_{*}\) & \(82.3_{*}\) & \(58.3_{*}\) & \(65.8_{*}\) \\ UniPELT (1.8M) & \(50.7_{*}\) & \(74.2_{*}\) & \(85.4_{*}\) & 
\(63.4_{*}\) & 34.2 & 77.2 & \(\underline{82.1}\) & \(57.8_{*}\) & \(65.6_{*}\) \\ Prefix-tuning (0.9M) & \(54.0_{*}\) & \(74.7_{*}\) & \(85.6_{*}\) & \(66.2\) & 35.7 & \(77.8\) & \(82_{*}\) & \(\underline{60}\) & \(67.0_{*}\) \\ Prefix subspaces (0.9M) & \(\mathbf{55.7}\) & \(\mathbf{75.4}\) & \(\mathbf{86.1}\) & \(\mathbf{67.2}\) & \(\underline{36.0}\) & \(\mathbf{78.1}\) & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Predictive performances of the proposed approach and of the baseline fine-tuning methods on the GLUE tasks for the different sample sizes \(K\), averaged over the 10 sampled datasets per task.
4. The use of prediction head subspace coupled with classical prefixes is considerably surpassed for K=50 (which is the only statistically significant result) and similar to the proposed method when the sample size increases.
These observations, taken as a whole, provide several pieces of evidence regarding the relevance of using the notion of stochastic validation metric inference in few-shot learning. In particular, Observation 2 shows that adjusting the prefix subspace with classical validation metric estimation is associated with performance gains similar to the proposed method only from \(K=500\) onwards. Moreover, the fact that the performance of this variant is even lower than that achieved by classical prefix adjustment further supports the importance of the proposed method of stochastic validation metric inference.
Subsequently, Observations 1, 2, and 3 provide slightly weaker arguments regarding the importance of simplex size in the context of this stochastic estimation. Although the simplex size does not appear to have an effect for \(K>50\) (strongly indicating that learning lines, which are significantly more memory-efficient, is preferable for these sample sizes), it seems to have an impact for very small sample sizes. This observation could be explained by the richness of information extracted from the validation dataset through stochastic estimation, due to a larger simplex. However, the results presented in this article are insufficient to confirm or refute this hypothesis. Similarly, Observations 3 and 4 particularly highlight the importance of adjusting the entire set of learnable parameters of the model through the subspace when \(K=50\). This could also be explained by suggesting that restricting the stochastic validation metric estimation to a subset of learnable parameters limits the ability to characterize the obtained local minimum.
## 6 Conclusion
In this article, we introduced two innovative ideas. The first one, an adaptation of the subspace method for training large language models through the PEFT method, is, to our knowledge, the first example of its use in the academic literature on natural language processing. The second idea, proposing an alternative way to estimate development metrics, represents an original application of the subspace method and is not specific to problems encountered in textual data analysis. The combined use of these two methods leads to a significant improvement in the performance of common language models such as BERT on language comprehension tasks proposed by the GLUE benchmark, rephrased in a "few-shot learning" context. The ablation study presented at the end of the article also allows us to quantify the impact of these two contributions. The performance gains observed on very small datasets (\(\leq 100\)) seem to be mainly explained by the finer information extracted from the validation set through the stochastic metric estimation method. However, this gain appears to diminish for larger sample sizes, where the subspace method applied to prefix tuning seems to be sufficient on its own to achieve performance gains over standard PEFT methods as well as classical model training.
Finally, applying subspace learning to PEFT methods also enables the training of powerful predictive models while significantly reducing the computational resources typically required for training large language models. This approach preserves the fundamental efficiency goal of these methods. Learning prefix subspaces remains accessible even in situations where resources, both in terms of data and computational power, are limited.
\begin{table}
\begin{tabular}{l c c c c} \hline
**Method** & \(K=50\) & \(K=100\) & \(K=200\) & \(K=500\) \\ \hline Proposed approach & 54.0 & 57.9 & 61.9 & 67.8 \\ Line subspace & 53.6 & 57.9 & 62.1 & 67.6 \\ Deterministic & \(49.5_{*}\) & 56.0 & 61.4 & 67.8 \\ Head subspace & 53.5 & 56.8 & 61.3 & 67.2 \\ Prefix only subspaces & 52.6 & 57.7 & 62.2 & 67.6 \\ \hline \end{tabular}
\end{table}
Table 2: Results of the ablation study. The reported scores correspond to the average predictive performances across all tasks in the GLUE benchmark.
## 7 Limitations
The proposed approach is meant to improve the performances of large language models in the context of few-shot learning. As a consequence, it becomes increasingly dependent on the type of representations the model's pretraining was able to capture. In other words, it would be highly unreasonable to expect the proposed method to perform well in highly complex tasks that cannot be easily captured by current unsupervised pretraining schemes.
In addition, since this method allows for fine-tuning without large development sets, practitioners might be tempted to use it without any additional test set, and thus without any model validation, which could lead to the deployment of highly biased models.
## 8 Ethical considerations
This article introduces a method for text classification that has the exact same functional form as a prefix fine-tuned large language model. As a consequence, it inherits the same ethical issues, such as socially biased classification algorithms. In addition, the method's increased generalization ability means that such systems might be built with fewer observations, which can lead to ill-defined objectives. These concerns call for thought and caution when implementing tools based on the proposed model.
## 9 Acknowledgements
We would like to thank the Sorbonne Center for Artificial Intelligence for funding Louis Falissard's post-doctoral contract within the MLIA laboratory of the Institute of Intelligent Systems and Robotics. We would also like to thank the ANR JCJC project SESAMS (Projet-ANR18-CE23-0001).
|
2305.11743
|
The strongly robust simplicial complex of monomial curves
|
To every simple toric ideal $I_T$ one can associate the strongly robust
simplicial complex $\Delta _T$, which determines the strongly robust property
for all ideals that have $I_T$ as their bouquet ideal. We show that for the
simple toric ideals of monomial curves in $\mathbb{A}^{s}$, the strongly robust
simplicial complex $\Delta _T$ is either $\{\emptyset \}$ or contains exactly
one 0-dimensional face. In the case of monomial curves in $\mathbb{A}^{3}$, the
strongly robust simplicial complex $\Delta _T$ contains one 0-dimensional face
if and only if the toric ideal $I_T$ is a complete intersection ideal with
exactly two Betti degrees. Finally, we provide a construction to produce
infinitely many strongly robust ideals with bouquet ideal the ideal of a
monomial curve and show that they are all produced this way.
|
Dimitra Kosta, Apostolos Thoma, Marius Vladoiu
|
2023-05-19T15:28:41Z
|
http://arxiv.org/abs/2305.11743v1
|
# The strongly robust simplicial complex of monomial curves
###### Abstract.
To every simple toric ideal \(I_{T}\) one can associate the strongly robust simplicial complex \(\Delta_{T}\), which determines the strongly robust property for all ideals that have \(I_{T}\) as their bouquet ideal. We show that for the simple toric ideals of monomial curves in \(\mathbb{A}^{s}\), the strongly robust simplicial complex \(\Delta_{T}\) is either \(\{\emptyset\}\) or contains exactly one \(0\)-dimensional face. In the case of monomial curves in \(\mathbb{A}^{3}\), the strongly robust simplicial complex \(\Delta_{T}\) contains one \(0\)-dimensional face if and only if the toric ideal \(I_{T}\) is a complete intersection ideal with exactly two Betti degrees. Finally, we provide a construction to produce infinitely many strongly robust ideals with bouquet ideal the ideal of a monomial curve and show that they are all produced this way.
Key words and phrases: Toric ideals, Graver basis, Monomial curves, indispensable elements, Robust ideals, Simplicial complex 2020 Mathematics Subject Classification: 05E45, 13F65, 13P10, 14M25 Corresponding author: Dimitra Kosta
## 1. Introduction
Let \(A\in\mathbb{Z}^{m\times n}\) be an integer matrix such that \(\operatorname{Ker}_{\mathbb{Z}}(A)\cap\mathbb{N}^{n}=\{\mathbf{0}\}\). The toric ideal of \(A\) is the ideal \(I_{A}\subset K[x_{1},\dots,x_{n}]\) generated by the binomials \(x^{\mathbf{u}^{+}}-x^{\mathbf{u}^{-}}\) where \(K\) is a field, \(\mathbf{u}\in\operatorname{Ker}_{\mathbf{Z}}(A)\) and \(\mathbf{u}=\mathbf{u}^{+}-\mathbf{u}^{-}\) is the unique expression of \(\mathbf{u}\) as a difference of two non-negative vectors with disjoint support, see [21, Chapter 4]. A toric ideal is called robust if it is minimally generated by its Universal Grobner basis, where the Universal Grobner basis is the union of all reduced Grobner bases, see [4]. A _strongly robust_ toric ideal is a toric ideal \(I_{A}\) for which the Graver basis \(\operatorname{Gr}(I_{A})\) is a minimal system of generators, see [23]. The condition \(\operatorname{Ker}_{\mathbb{Z}}(A)\cap\mathbb{N}^{n}=\{\mathbf{0}\}\) implies that any minimal binomial generating set is contained in the Graver basis, see [9, Theorem 2.3]. For strongly robust ideals then the Graver basis is the unique minimal system of generators and, thus, any reduced Grobner basis as well as the Universal Grobner basis are identical with the Graver basis, since all of them contain a minimal system of generators and they are subsets of the Graver basis (see [21, Chapter 4]). We conclude that for a strongly robust toric ideal \(I_{A}\) the following sets are identical: the set of indispensable elements, any minimal system of binomial generators, any reduced Grobner basis, the Universal Grobner basis and the Graver basis. Therefore strongly robust toric ideals are robust. The classical example of strongly robust ideals are the Lawrence ideals, see [21, Chapter 7]. There are several articles in the literature studying robust related properties of ideals; see [2, 3, 10, 11, 22] for robust
ideals, [4, 5] for robust toric ideals, [12, 24] for generalized robust toric ideals and [14, 17, 19, 20, 21, 23] for strongly robust toric ideals.
To characterize combinatorially the strongly robust property of toric ideals which have in common the same bouquet ideal \(I_{T}\), in [17], we defined a simplicial complex, the strongly robust simplicial complex \(\Delta_{T}\), the faces of which determine the strongly robust property. In particular, let \(I_{A}\) be a toric ideal with bouquet ideal \(I_{T}\), the ideal \(I_{A}\) is strongly robust if and only if the set \(\omega\) of indices \(i\), such that the \(i\)-th bouquet of \(I_{A}\) is non-mixed, is a face of \(\Delta_{T}\), see [17, Theorem 3.6]. Thus, understanding the strongly robust property of toric ideals \(I_{A}\) is equivalent to understanding the strongly robust simplicial complex \(\Delta_{T}\) for simple toric ideals \(I_{T}\). Simple toric ideals are ideals for which every bouquet is a singleton. Bouquet ideals are always simple. A method for the computation of the strongly robust simplicial complex \(\Delta_{T}\) for a particular simple toric ideal \(I_{T}\) was given in [17, Theorem 3.7], however an interesting problem is to understand the strongly robust simplicial complex \(\Delta_{T}\) for classes of simple toric ideals. In this direction, we determine the strongly robust simplicial complex \(\Delta_{T}\) for the simple toric ideals of monomial curves, which are toric ideals defined by \(1\times s\)-matrices, where \(s\geq 3\). For \(s=2\), the toric ideal \(I_{T}\) is principal and thus it is never simple but always strongly robust. For \(s\geq 3\), the ideal of a monomial curve is never strongly robust, see Remark 4.6. Toric ideals with bouquet ideal the toric ideal of a monomial curve include toric ideals of varieties with any dimension as well as varieties with any codimension greater than one. For example, a toric ideal with bouquet ideal the toric ideal \(I_{T}\) of a monomial curve defined by the matrix \(T=(24,40,41,60,80)\) is the ideal \(I_{A}\) of a toric variety of dimension \(7\) and codimension \(4\) in Example 6.4, where we explain why such an ideal \(I_{A}\) is strongly robust. In Section 6, we provide infinitely many examples of strongly robust toric ideals with bouquet ideal the toric ideal of a monomial curve and we prove that all such toric ideals are provided by this method.
In [23], Sullivant asked the question: does every strongly robust toric ideal \(I_{A}\) of codimension \(r\) have at least \(r\) mixed bouquets? Since bouquets preserve the codimension and \(s\) is the number of bouquets of \(I_{A}\), Sullivant's question is equivalent to a question about the dimension of the strongly robust simplicial complex of its bouquet ideal \(I_{T}\): is it true that simple toric ideals \(I_{T}\) of codimension \(r\) in the polynomial ring of \(s\) variables have \(\dim\Delta_{T}<s-r\)? In Section 3, we give an affirmative answer to the question of Sullivant for the simple toric ideals of monomial curves.
The structure of the paper is the following. In Section 2, we present the notation, give definitions and previous results that will be required throughout the paper. Then, in Section 3, we firstly proceed by showing that the dimension of the strongly robust simplicial complex for a monomial curve \(T\) is \(\dim\Delta_{T}\leq 0\). Note that for monomial curves the codimension is \(s-1\) thus \(\dim\Delta_{T}\leq 0<1=s-(s-1)\), which agrees with Sullivant's conjecture. Section 4 contains results that describe the circuits of \(\Lambda(T)_{i}\) which lead to Theorem 4.4 that links the properties of complete intersection and strongly robustness for monomial curves in \(\mathbb{A}^{s}\). Using this we give a full description of the strongly robust simplicial complex in the case of monomial
curves in \(\mathbb{A}^{3}\) in Theorem 4.7 and show that for monomial curves in \(\mathbb{A}^{s}\) the strongly robust simplicial complex is either \(\{\emptyset\}\) or contains exactly one \(0\)-dimensional face in Theorem 4.5. In Section 5, we extend the notions of a primitive element as well as the Graver basis and give a necessary and sufficient condition in terms of primitive elements (or the Graver basis) on whether one specific element is the unique \(0\)-dimensional face of \(\Delta_{T}\). Finally, in Section 6, we use generalized Lawrence matrices to describe completely all matrices \(A\) for which the toric ideal \(I_{A}\) is strongly robust and which have bouquet ideal the toric ideal of a monomial curve.
## 2. Preliminaries
Let \(A=(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\) be an integer matrix in \(\mathbb{Z}^{m\times n}\), with column vectors \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\) and such that \(\operatorname{Ker}_{\mathbb{Z}}(A)\cap\mathbb{N}^{n}=\{\mathbf{0}\}\). We say that \(\mathbf{u}=\mathbf{v}+_{c}\mathbf{w}\) is a conformal decomposition of the vector \(\mathbf{u}\in\operatorname{Ker}_{\mathbb{Z}}(A)\) if \(\mathbf{u}=\mathbf{v}+\mathbf{w}\) and \(\mathbf{u}^{+}=\mathbf{v}^{+}+\mathbf{w}^{+},\mathbf{u}^{-}=\mathbf{v}^{-}+ \mathbf{w}^{-}\), where \(\mathbf{v},\mathbf{w}\in\operatorname{Ker}_{\mathbb{Z}}(A)\). The conformal decomposition is called proper if both \(\mathbf{v}\) and \(\mathbf{w}\) are not zero. For the conformality, in terms of signs, the corresponding notation is the following: \(+=\oplus+_{c}\oplus\), \(-=\ominus+_{c}\ominus\), \(0\ =\ 0+_{c}0\). where the symbol \(\ominus\) means that the corresponding integer is nonpositive and the symbol \(\oplus\) nonnegative. By \(\operatorname{Gr}(A)\) we denote the set of elements in \(\operatorname{Ker}_{\mathbb{Z}}(A)\) that do not have a proper conformal decomposition. A binomial \(\mathbf{x}^{\mathbf{u}^{+}}-\mathbf{x}^{\mathbf{u}^{-}}\in I_{A}\) is called _primitive_ if \(\mathbf{u}\in\operatorname{Gr}(A).\) The set of the primitive binomials is finite and it is called the _Graver basis_ of \(I_{A}\) and is denoted by \(\operatorname{Gr}(I_{A})\), [21, Chapter 4].
We recall from [16, Definition 3.9] that for vectors \(\mathbf{u},\mathbf{v},\mathbf{w}\in\operatorname{Ker}_{\mathbb{Z}}(A)\) such that \(\mathbf{u}=\mathbf{v}+\mathbf{w}\), the sum is said to be a _semiconformal decomposition_ of \(\mathbf{u}\), written \(\mathbf{u}=\mathbf{v}+_{sc}\mathbf{w}\), if \(v_{i}>0\) implies that \(w_{i}\geq 0\), and \(w_{i}<0\) implies that \(v_{i}\leq 0\), for all \(1\leq i\leq n\). The decomposition is called _proper_ if both \(\mathbf{v},\mathbf{w}\) are nonzero. The set of indispensable elements \(S(A)\) of \(A\) consists of all nonzero vectors in \(\operatorname{Ker}_{\mathbb{Z}}(A)\) with no proper semiconformal decomposition. For the semiconformality, in terms of signs, the corresponding notation is the following: \(+=*+_{sc}\oplus\), \(-=\ominus+_{sc}*\), \(0=\ominus+_{sc}\oplus\), where the symbol \(*\) means that it can take any value.
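For a quick illustration of these notions, consider \(T=(3\ 5\ 7)\) (the curve \(T_{5}\) of Example 3.2 below). The circuit \(\mathbf{u}=(5,-3,0)\in\operatorname{Ker}_{\mathbb{Z}}(T)\) admits the proper semiconformal decomposition
\[(5,-3,0)=(4,-1,-1)+_{sc}(1,-2,1),\]
since both summands lie in \(\operatorname{Ker}_{\mathbb{Z}}(T)\) and the sign conditions above are satisfied, so \(\mathbf{u}\notin S(T)\); on the other hand, \(\mathbf{u}\) has no proper conformal decomposition, so it belongs to \(\operatorname{Gr}(T)\), as every circuit does.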
A binomial \(\mathbf{x}^{\mathbf{u}^{+}}-\mathbf{x}^{\mathbf{u}^{-}}\in I_{A}\) is called _indispensable_ binomial if it belongs to the intersection of all minimal systems of binomial generators of \(I_{A}\), up to identification of opposite binomials. The set of indispensable binomials is \(S(I_{A})=\{\mathbf{x}^{\mathbf{u}^{+}}-\mathbf{x}^{\mathbf{u}^{-}}|\mathbf{u }\in S(A)\}\) by [16, Lemma 3.10] and [8, Proposition 1.1].
_Circuits_ are irreducible binomials of a toric ideal \(I_{A}\) with minimal support. In vector notation, a vector \(\mathbf{u}\in\operatorname{Ker}_{\mathbb{Z}}(A)\) is called a circuit of the matrix \(A\) if \(\operatorname{supp}(\mathbf{u})\) is minimal and the components of \(\mathbf{u}\) are relatively prime.
To the matrix \(A=(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\) we associate its Gale transform, which is the \(n\times(n-r)\) matrix whose columns span the lattice \(\operatorname{Ker}_{\mathbb{Z}}(A)\), where \(r\) is the rank of \(A\). We will denote the set of ordered row vectors of the Gale transform by \(\{G(\mathbf{a}_{1}),\ldots,G(\mathbf{a}_{n})\}\). The vector \(\mathbf{a}_{i}\) is called _free_ if its Gale transform \(G(\mathbf{a}_{i})\) is equal to the zero vector, which means that \(i\) is not contained in the support of any element in \(\operatorname{Ker}_{\mathbb{Z}}(A)\). The _bouquet graph_\(G_{A}\) of \(I_{A}\) is the graph on the set of vertices \(\{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\}\), whose edge set \(E_{A}\) consists of those \(\{\mathbf{a}_{i},\mathbf{a}_{j}\}\) for which \(G(\mathbf{a}_{i})\) is a rational multiple
of \(G(\mathbf{a}_{j})\) and vice-versa. The connected components of the graph \(G_{A}\) are called _bouquets_.
It follows from the definition that the free vectors of \(A\) form one bouquet, which we call the _free bouquet_ of \(G_{A}\). The non-free bouquets are of two types: _mixed_ and _non-mixed_. A non-free bouquet is mixed if contains an edge \(\{\mathbf{a}_{i},\mathbf{a}_{j}\}\) such that \(G(\mathbf{a}_{i})=\lambda G(\mathbf{a}_{j})\) for some \(\lambda<0\), and is non-mixed if it is either an isolated vertex or for all of its edges \(\{\mathbf{a}_{i},\mathbf{a}_{j}\}\) we have \(G(\mathbf{a}_{i})=\lambda G(\mathbf{a}_{j})\) with \(\lambda>0\), see [19, Lemma 1.2].
Let \(B_{1},B_{2},\ldots,B_{s}\) be the bouquets of \(I_{A}\). We reorder the vectors \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\) to \(\mathbf{a}_{11},\mathbf{a}_{12},\ldots,\mathbf{a}_{1k_{1}},\;\mathbf{a}_{21}, \mathbf{a}_{22},\ldots,\mathbf{a}_{2k_{2}},\;\ldots,\mathbf{a}_{s1},\mathbf{a }_{s2},\ldots,\mathbf{a}_{sk_{s}}\) in such a way that the first \(k_{1}\) vectors belong to the bouquet \(B_{1}\), the next \(k_{2}\) to \(B_{2}\) and so on up to the last \(k_{s}\) that belong to the bouquet \(B_{s}\). Note that \(k_{1}+k_{2}+\cdots+k_{s}=n\). For each bouquet \(B_{i}\) we define two vectors \(\mathbf{c}_{B_{i}}\) and \(\mathbf{a}_{B_{i}}\). If the bouquet \(B_{i}\) is free then we set \(\mathbf{c}_{B_{i}}\in\mathbb{Z}^{n}\) to be any nonzero vector such that \(\operatorname{supp}(\mathbf{c}_{B_{i}})=\{i1,\ldots,ik_{i}\}\) and with the property that the first nonzero coordinate, \(c_{i1}\), is positive. For a non-free bouquet \(B_{i}\) of \(A\), consider the Gale transforms of the elements in \(B_{i}\). All the Gale transforms are nonzero, since the bouquet is non-free, and pairwise linearly dependent, since they belong to the same bouquet. Therefore, there exists a nonzero coordinate \(l\) in all of them. Let \(g_{l}=\gcd(G(\mathbf{a}_{i1})_{l},G(\mathbf{a}_{i2})_{l},\ldots,G(\mathbf{a}_ {ik_{l}})_{l})\), where \((\mathbf{w})_{l}\) is the \(l\)-th component of a vector \(\mathbf{w}\). Then \(\mathbf{c}_{B_{i}}\) is the vector in \(\mathbb{Z}^{n}\) whose \(qj\)-th component is \(0\) if \(q\neq i\), and \(c_{ij}=\varepsilon_{i1}(G(\mathbf{a}_{ij})_{l})/g_{l}\), where \(\varepsilon_{i1}\) represents the sign of the integer \(G(\mathbf{a}_{ij})_{l}\). Note that \(c_{i1}\) is always positive. Then the vector \(\mathbf{a}_{B_{i}}\) (see [19, Definition 1.7]), is defined as \(\mathbf{a}_{B_{i}}=\sum_{j=1}^{n}(c_{B_{i}})_{j}\mathbf{a}_{j}\in\mathbb{Z}^{m}\).
If \(B_{i}\) is a non-free bouquet of \(A\), then \(B_{i}\) is a mixed bouquet if and only if the vector \(\mathbf{c}_{B_{i}}\) has a negative and a positive coordinate, and \(B_{i}\) is non-mixed if and only if the vector \(\mathbf{c}_{B_{i}}\) has all nonzero coordinates positive, see [19, Lemma 1.6]. The toric ideal \(I_{A_{B}}\) associated to the matrix \(A_{B}\), whose columns are the vectors \(\mathbf{a}_{B_{i}}\), \(1\leq i\leq s\), is called the bouquet ideal of \(I_{A}\).
Let \(\mathbf{u}=(u_{1},u_{2},\ldots,u_{s})\in\operatorname{Ker}_{\mathbb{Z}}(A_{B})\) then the linear map
\[D(\mathbf{u})=(c_{11}u_{1},c_{12}u_{1},\ldots,c_{1k_{1}}u_{1},c_{21}u_{2}, \ldots,c_{2k_{2}}u_{2},\ldots,c_{s1}u_{s},\ldots,c_{sk_{s}}u_{s}),\]
where all \(c_{j1},1\leq j\leq s\), are positive, is an isomorphism from \(\operatorname{Ker}_{\mathbb{Z}}(A_{B})\) to \(\operatorname{Ker}_{\mathbb{Z}}(A)\), see [19, Theorem 1.9].
The cardinality of the sets of different toric bases depends only on the signatures of the bouquets and the bouquet ideal, see [17, Theorem 2.3, Theorem 2.5] and [19, Theorem 1.11].
Note that the bouquet ideal is simple: a toric ideal is called _simple_ if every bouquet is a singleton, in other words if \(I_{T}\subset K[x_{1},\ldots,x_{s}]\) and has \(s\) bouquets. The bouquet ideal of a simple toric ideal \(I_{A}\) is \(I_{A}\) itself. Let \(I_{A}\) be the ideal of a monomial curve, where \(A=(n_{1},n_{2},\ldots,n_{s})\) is an \(1\times s\) matrix. In this case for any \(i,j\in[s]\) there exists one circuit with support only \(\{i,j\}\). Then for \(s\geq 3\) and any two \(n_{k},n_{l}\) there exists a circuit that is zero on the \(k^{th}\) component and nonzero on the \(l^{th}\) component and vice versa. Therefore the Gale transforms \(G(n_{k}),G(n_{l})\) are not the one multiple of the other. Thus all toric ideals of monomial curves are simple if \(s\geq 3\).
**Definition 2.1**.: Let \(I_{T}\subset K[x_{1},\ldots,x_{s}]\) be a simple toric ideal and \(\omega\subset\{1,\ldots,s\}\). A toric ideal \(I_{A}\) is called \(T_{\omega}\)-robust ideal if and only if
* the bouquet ideal of \(I_{A}\) is \(I_{T}\) and
* \(\omega=\{i\in[s]|B_{i}\text{ is non-mixed}\}\).
We denote by
\[S_{\omega}(T)=\{\mathbf{u}\in\operatorname{Gr}(T)|\;D(\mathbf{u})\in S(A)\}\]
and call \(S_{\omega}(T)\) the \(T_{\omega}\)-indispensable set, where \(I_{A}\) is an \(T_{\omega}\)-robust toric ideal and \(S(A)\) is the set of indispensable elements of \(A\).
The second part of the definition is correctly defined, since in [17] we showed that the set of elements \(\mathbf{u}\) which belong to \(\operatorname{Gr}(T)\), such that \(D(\mathbf{u})\) is indispensable in a \(T_{\omega}\)-robust toric ideal \(I_{A}\), does not depend on the \(I_{A}\) chosen, but only on \(T\) and \(\omega\).
In [17] we introduced a simplicial complex, which determines the strongly robust property for toric ideals.
**Definition 2.2**.: The set \(\Delta_{T}=\{\omega\subseteq[s]\mid S_{\omega}(T)=\operatorname{Gr}(T)\}\) is called the _strongly robust complex_ of \(T\).
According to [17, Corollary 3.5, Theorem 3.6], the set \(\Delta_{T}\) is a simplicial complex, which determines the strongly robust property for toric ideals.
**Theorem 2.3**.: _[_17_, Theorem 3.6]_ _Let \(I_{A}\) be a \(T_{\omega}\)-robust toric ideal. The toric ideal \(I_{A}\) is strongly robust if and only if \(\omega\) is a face of the strongly robust complex \(\Delta_{T}\)._
The following theorem provides a way to compute the strongly robust complex of a simple toric ideal \(I_{T}\). By \(\Lambda(T)\) we denote the second Lawrence lifting of \(T\), which is the \((m+s)\times 2s\) matrix \(\begin{pmatrix}T&0\\ I_{s}&I_{s}\end{pmatrix}.\) By \(\Lambda(T)_{\omega}\) we denote the matrix taken from \(\Lambda(T)\) by removing the \((m+i)\)-th row and the \((s+i)\)-th column for each \(i\in\omega\).
**Example 2.4**.: For \(T=(n_{1},n_{2},n_{3},n_{4})\) and \(\omega=\{3\}\), the \(\Lambda(T)_{\omega}\) matrix is
\[\begin{pmatrix}n_{1}&n_{2}&n_{3}&n_{4}&0&0&0\\ 1&0&0&0&1&0&0\\ 0&1&0&0&0&1&0\\ 0&0&0&1&0&0&1\end{pmatrix}.\]
**Theorem 2.5**.: _[_17_, Theorem 3.7]_ _The set \(\omega\) is a face of the strongly robust complex \(\Delta_{T}\) if and only if \(I_{\Lambda(T)_{\omega}}\) is strongly robust._
## 3. On the dimension of the strongly robust complex
Suppose that \(T=(n_{1},n_{2},n_{3})\) with the property that \(\gcd{(n_{1},n_{2},n_{3})}=1\). For an element \(\mathbf{u}=(u_{1},u_{2},u_{3})\in\mathbb{N}^{3}\), we define the \(T\)-degree of the monomial \(x^{\mathbf{u}}\) to be \(\deg_{T}(x^{\mathbf{u}}):=u_{1}n_{1}+u_{2}n_{2}+u_{3}n_{3}\). A vector \(b\) is called a _Betti \(T\)-degree_ if \(I_{T}\) has a minimal generating set of binomials containing an element of \(T\)-degree \(b\). Betti \(T\)-degrees do not depend on the minimal set of binomial generators, [6, 21].
We know from J. Herzog [15], that if \(I_{T}\) is not complete intersection then the ideal \(I_{T}\) is minimally generated by three binomials, while a complete intersection
\(I_{T}\) is minimally generated by two binomials. Let \(c_{i}\) be the smallest positive integer such that \(c_{i}n_{i}\) belongs to the semigroup generated by \(n_{j},n_{k}\), where \(\{i,j,k\}=\{1,2,3\}\). Then there exist non-negative integers \(c_{ij},c_{ik}\) (not necessarily unique) such that \(c_{i}n_{i}=c_{ij}n_{j}+c_{ik}n_{k}\). We have the following cases.
* If all six \(c_{ij}\neq 0\), then \(I_{T}\) is a non complete intersection ideal, minimally generated by the three elements \(x_{1}^{c_{1}}-x_{2}^{c_{12}}x_{3}^{c_{13}}\), \(x_{2}^{c_{2}}-x_{1}^{c_{21}}x_{3}^{c_{23}}\), \(x_{3}^{c_{3}}-x_{1}^{c_{31}}x_{2}^{c_{32}}\). All minimal generators have full support, therefore the ideal is generic and thus all elements are indispensable, see [18, Lemma 3.3, Remark 4.4]. Being indispensable binomials implies that they have different \(T\)-degrees, see [7]. Thus in the non complete intersection case we have \(3\) Betti \(T\)-degrees, \(c_{1}n_{1}\neq c_{2}n_{2}\neq c_{3}n_{3}\).
* If at least one \(c_{ij}=0\), then \(I_{T}\) is a complete intersection ideal and is generated by \(x_{j}^{c_{j}}-x_{k}^{c_{k}}\) and \(x_{i}^{c_{i}}-x_{j}^{c_{ij}}x_{k}^{c_{ik}}\). There are two cases:
* the two binomials have different \(T\)-degrees, i.e. \(c_{j}n_{j}=c_{k}n_{k}\neq c_{i}n_{i}\). In this case we have two Betti \(T\)-degrees.
* the two binomials have the same \(T\)-degree, i.e. \(c_{1}n_{1}=c_{2}n_{2}=c_{3}n_{3}\). In this case we have one Betti \(T\)-degree, see [13]. It is easy to see that in the binomial \(x_{i}^{c_{i}}-x_{j}^{c_{ij}}x_{k}^{c_{ik}}\) the monomial \(x_{j}^{c_{ij}}x_{k}^{c_{ik}}\) can only be \(x_{j}^{c_{j}}\) or \(x_{k}^{c_{k}}\), otherwise one can find smaller multiples of \(n_{j}\) or \(n_{k}\) than \(c_{j}\) or \(c_{k}\) that belong to the semigroup generated by \(n_{i},n_{k}\) or \(n_{i},n_{j}\), contradicting the choice of \(c_{j}\) or \(c_{k}\). Thus both binomials are circuits. None of the circuits is indispensable since any two of the three binomials \(x_{1}^{c_{1}}-x_{2}^{c_{2}}\), \(x_{1}^{c_{1}}-x_{3}^{c_{3}}\) and \(x_{2}^{c_{2}}-x_{3}^{c_{3}}\) generate the ideal \(I_{T}\).
In both cases B1, B2, the circuit \(x_{j}^{c_{j}}-x_{k}^{c_{k}}\) is the same as \(x_{j}^{n_{k}^{\#}}-x_{k}^{n_{j}^{\#}}\), where \(n_{k}^{\#},n_{j}^{\#}\) are just the integers \(n_{k},n_{j}\) divided by \(g_{jk}=\gcd(n_{k},n_{j})\). Thus \(c_{j}=n_{k}^{\#}\) and \(c_{k}=n_{j}^{\#}\).
**Definition 3.1**.: Let \(T=(n_{1},n_{2},n_{3})\), we say that \(I_{T}\) is a _complete intersection on \(n_{i}\)_ if \(c_{j}n_{j}=c_{k}n_{k}\neq c_{i}n_{i}\). We say that \(I_{T}\) is a _complete intersection on all_ if \(c_{1}n_{1}=c_{2}n_{2}=c_{3}n_{3}\).
Remark that the condition \(c_{j}n_{j}=c_{k}n_{k}\neq c_{i}n_{i}\) implies that \(I_{T}\) can be complete intersection on \(n_{i}\) for at most one \(n_{i}\).
**Example 3.2**.: Let \(T_{1}=(7,15,20)\), then \(c_{1}=5,c_{2}=4,c_{3}=3\) and the Betti \(T_{1}\)-degrees are \(5\cdot 7\neq 4\cdot 15=3\cdot 20\). Therefore \(I_{T_{1}}\) is complete intersection on \(n_{1}=7\). Let \(T_{2}=(5,6,15)\), then \(c_{1}=3,c_{2}=5,c_{3}=1\) and the Betti \(T_{2}\)-degrees are \(3\cdot 5=1\cdot 15\neq 5\cdot 6\). Therefore \(I_{T_{2}}\) is complete intersection on \(n_{2}=6\). If \(T_{3}=(6,8,11)\), then \(c_{1}=4,c_{2}=3,c_{3}=2\), the Betti \(T_{3}\)-degrees are \(4\cdot 6=3\cdot 8\neq 2\cdot 11\), and therefore \(I_{T_{3}}\) is complete intersection on \(n_{3}=11\). Let \(T_{4}=(6,10,15)\), then \(c_{1}=5,c_{2}=3,c_{3}=2\) and there is only one Betti \(T_{4}\)-degree \(5\cdot 6=3\cdot 10=2\cdot 15\). Therefore \(I_{T_{4}}\) is complete intersection on all. Finally, for \(T_{5}=(3,5,7)\) we have \(c_{1}=4,c_{2}=2,c_{3}=2\) and there are three Betti \(T_{5}\)-degrees: \(4\cdot 3\neq 2\cdot 5\neq 2\cdot 7\). Therefore \(I_{T_{5}}\) is not a complete intersection.
**Proposition 3.3**.: _Let \(T=(n_{1},n_{2},n_{3})\). In the toric ideal \(I_{T}\) at least two of the three circuits are not indispensable._
Proof.: In the case that \(I_{T}\) is not complete intersection the toric ideal is generated by three binomials of full support [15, Section 3]. Then the toric ideal is generic and by [18, Lemma 3.3, Remark 4.4] all three generators are indispensable. Since all generators have full support none of them is a circuit. Therefore none of the three circuits of the toric ideal \(I_{T}\) is indispensable. In the complete intersection case, the ideal \(I_{T}\) has two minimal generators, one of which is always a circuit [15, Proposition 3.5 and Theorem 3.8]. If exactly one minimal generator was a circuit, then the other two circuits would not be indispensable. If both of the minimal generators were circuits then without loss of generality we can assume they would be of the form \(x_{i}^{a}-x_{j}^{b}\), \(x_{i}^{c}-x_{k}^{d}\). Namely, two of the monomials will be powers of the same variable. We can distinguish two cases.
* Firstly, the two exponents \(a\), \(c\) could be different, so assume that \(a<c\). In this case, according to [7, Theorem 3.4], the binomial \(x_{i}^{c}-x_{k}^{d}\) would not be indispensable, as \(c\) would not be a minimal binomial \(T\)-degree. Therefore, in this case there exists at most one indispensable circuit, and since we have three circuits in total at least two would not be indispensable.
* In the second case, the two exponents \(a,c\) are equal. Then both generators \(x_{i}^{a}-x_{j}^{b}\) and \(x_{i}^{a}-x_{k}^{d}\) would be of the same Betti \(T\)-degree. The ideal is generated by any two of the following three circuits \(x_{i}^{a}-x_{j}^{b}\), \(x_{i}^{a}-x_{k}^{d}\), \(x_{j}^{d}-x_{k}^{b}\), since \(x_{j}^{d}-x_{k}^{b}=(x_{i}^{a}-x_{j}^{b})-(x_{i}^{a}-x_{k}^{d})\). Therefore, there is no indispensable binomial and in particular none of the circuits is indispensable.
**Theorem 3.4**.: _Let \(T=(n_{1},n_{2},\ldots,n_{s})\) then \(\dim(\Delta_{T})\leq 0.\)_
Proof.: Suppose on the contrary that \(\dim(\Delta_{T})>0.\) Then there exist \(i,j\) such that the edge \(\{i,j\}\) belongs to the strongly robust complex \(\Delta_{T}\). Therefore, the toric ideal \(I_{\Lambda(T)_{\{i,j\}}}\) is strongly robust by Theorem 2.5. Consider the ideal \(I_{(n_{i},n_{j},n_{k})}\) for any \(k\neq i,j\). Let \(\mathbf{c}\) be a non indispensable circuit of the toric ideal of \(I_{(n_{i},n_{j},n_{k})}\) different from \((n_{j}^{\prime},-n_{i}^{\prime},0)\), where \(n_{i}^{\prime},n_{j}^{\prime}\) are the \(n_{i},n_{j}\) divided by their greatest common divisor. Then without loss of generality \(\mathbf{c}\) will be in the form \(\mathbf{c}=(n_{k}^{*},0,-n_{i}^{*})\), where \(n_{i}^{*},n_{k}^{*}\) are the \(n_{i},n_{k}\) divided by their greatest common divisor. We know that there always exists such a circuit \(\mathbf{c}\), since by the Proposition 3.3 we have that at least two of the three circuits of \(I_{(n_{i},n_{j},n_{k})}\) are not indispensable. As the circuit \(\mathbf{c}\) is not indispensable, it has a proper semiconformal decomposition into two vectors with the following pattern of signs \((n_{k}^{*},0,-n_{i}^{*})=(*,-,\ominus)+_{sc}\left(\oplus,+,*\right)\). The first \(*\) is a positive number and the second \(*\) is a negative number, since \(\operatorname{Ker}_{\mathbb{Z}}(n_{i},n_{j},n_{k})\cap\mathbb{N}^{3}=\{ \mathbf{0}\}\). Namely, \((n_{k}^{*},0,-n_{i}^{*})=(a,-b,-c)+_{sc}\left(d,b,-e\right)\), where \(a,b,c,d,e\in\mathbb{N}\) and \(abe\neq 0\), so this is a proper semiconformal decomposition. We will show below that this lifts into a proper semiconformal decomposition in \(I_{\Lambda(T)_{\{i,j\}}}\). Indeed, in the toric ideal \(I_{\Lambda(T)_{\{i,j\}}}\) we have
\[(0,\ldots,n_{k}^{*},\ldots,0,\ldots,-n_{i}^{*},\ldots,n_{i}^{*},\ldots,0)=\]
\[(0,\ldots,a,\ldots,-b,\ldots,-c,\ldots,c,\ldots,0)+_{sc}\left(0,\ldots,d,\ldots,b,\ldots,-e,\ldots,e,\ldots,0\right),\]
where the only nonzero components are in the \(i^{th}\),\(k^{th}\) and \((s+k-2)^{th}\) positions in the first vector and in the \(i^{th}\), \(j^{th}\), \(k^{th}\) and \((s+k-2)^{th}\) positions in the last two. This
decomposition is proper semiconformal, since \((n_{k}^{*},0,-n_{i}^{*})=(a,-b,-c)+_{sc}(d,b,-e)\) is proper. We observe that the element \((0,\ldots,n_{k}^{*},\ldots,0,\ldots,-n_{i}^{*},\ldots,n_{i}^{*},\ldots,0)=D((0, \ldots,n_{k}^{*},\ldots,0,\ldots,-n_{i}^{*},\ldots,0))\) is a circuit of \(I_{\Lambda(T)_{\{i,j\}}}\), since \(D\) maps circuits to circuits [19, Theorem 1.11], and is not an indispensable element in \(I_{\Lambda(T)_{\{i,j\}}}\), since it admits a proper semiconformal decomposition. However, we know that circuits are always contained in the Graver basis [21, Proposition 4.11], so this means that the Graver basis \(\operatorname{Gr}\left(\Lambda(T)_{\{i,j\}}\right)\) is not equal to the set of indispensable elements \(S\left(\Lambda(T)_{\{i,j\}}\right)\). Therefore, the toric ideal \(I_{\Lambda(T)_{\{i,j\}}}\) is not strongly robust, a contradiction. We conclude that \(\dim(\Delta_{T})\leq 0\).
In [23, Corollary 1.3], Sullivant proved that strongly robust codimension \(2\) toric ideals have at least \(2\) mixed bouquets. For the strongly robust complex, this result means that \(\dim(\Delta_{T})<s-2.\) In the same paper, Sullivant poses a stronger question which can be translated to the following question: for every simple codimension \(r\) toric ideal \(I_{T}\), is it true that \(\dim(\Delta_{T})<s-r\)? Theorem 3.4 proves that the answer to this question is affirmative for the simple toric ideals of monomial curves. Note that toric ideals of monomial curves have codimension \(r=s-1\).
## 4. Complete intersection and strong robustness
**Proposition 4.1**.: _Let \(T=(n_{1},n_{2},\ldots,n_{s})\) and \(\mathbf{u},\mathbf{v},\mathbf{w}\in\operatorname{Ker}_{\mathbb{Z}}(T)\). If \(D(\mathbf{u})=D(\mathbf{v})+_{sc}D(\mathbf{w})\) in \(\operatorname{Ker}_{\mathbb{Z}}\left(\Lambda(T)_{\{i\}}\right)\), then \([\mathbf{u}]^{i}=[\mathbf{v}]^{i}+_{c}[\mathbf{w}]^{i}\), where \([\mathbf{u}]^{i}\) is the vector obtained from \(\mathbf{u}\) by deleting its \(i^{th}\) component._
Proof.: Let \(j\in\{1,\ldots,s\}\) be such that \(j\neq i\). Then, for the vector \(D(\mathbf{u})\) in the kernel \(\operatorname{Ker}_{\mathbb{Z}}\left(\Lambda(T)_{\{i\}}\right)\), one of the components is equal to \(u_{j}\) and another is \(-u_{j}\). Similarly, the corresponding two components of \(D(\mathbf{v}),D(\mathbf{w})\in\operatorname{Ker}_{\mathbb{Z}}\left(\Lambda(T) _{\{i\}}\right)\) are \(v_{j},-v_{j}\) and \(w_{j},-w_{j}\) respectively. The semiconformal decomposition \(D(\mathbf{u})=D(\mathbf{v})+_{sc}D(\mathbf{w})\), implies that on those components we have
\[(u_{j}) = (v_{j})+_{sc}(w_{j}), \tag{1}\] \[(-u_{j}) = (-v_{j})+_{sc}(-w_{j}). \tag{2}\]
If \(u_{j}\geq 0\), then \(w_{j}\geq 0\) by (1), while \(-v_{j}\leq 0\) by (2). Therefore, both \(v_{j},w_{j}\) are non-negative and so the sum \((u_{j})=(v_{j})+_{c}(w_{j})\) is conformal. If on the other hand \(u_{j}\leq 0\), then \(v_{j}\leq 0\) by (1) and \(-w_{j}\geq 0\) by (2). Therefore, both \(v_{j},w_{j}\) are non-positive and the sum \((u_{j})=(v_{j})+_{c}(w_{j})\) is again conformal.
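The conformal and semiconformal sums used throughout these proofs are easy to check mechanically. The following Python sketch is not part of the paper; it assumes the standard definitions used above, namely that \(\mathbf{u}=\mathbf{v}+_{c}\mathbf{w}\) is conformal when no coordinate of \(\mathbf{v}\) and \(\mathbf{w}\) has opposite signs, and that \(\mathbf{u}=\mathbf{v}+_{sc}\mathbf{w}\) is semiconformal when \(v_{i}>0\) implies \(u_{i}>0\) and \(w_{i}<0\) implies \(u_{i}<0\) for all \(i\).

```python
# Minimal sketch (not the authors' code): testing conformal and
# semiconformal decompositions u = v + w of integer vectors.

def is_conformal(v, w):
    # u = v +_c w: no coordinate of v and w has opposite signs.
    return all(vi * wi >= 0 for vi, wi in zip(v, w))

def is_semiconformal(v, w):
    # u = v +_sc w: v_i > 0 forces u_i > 0, and w_i < 0 forces u_i < 0.
    u = [vi + wi for vi, wi in zip(v, w)]
    return all((vi <= 0 or ui > 0) and (wi >= 0 or ui < 0)
               for vi, wi, ui in zip(v, w, u))

# Sign pattern (a, -b, -c) +_sc (d, b, -e) from the proof of Theorem 3.4,
# with made-up integers purely to illustrate the pattern:
v, w = (3, -2, -1), (2, 2, -2)
print(is_semiconformal(v, w), is_conformal(v, w))  # True False
```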
**Lemma 4.2**.: _Any circuit of \(\Lambda(T)_{\{i\}}\) that has \(i\) in its support is indispensable._
Proof.: Due to the one-to-one correspondence between circuits of \(\Lambda(T)_{\{i\}}\) and circuits of \(T\), a circuit of \(\Lambda(T)_{\{i\}}\) can be written as \(D(\mathbf{u})\), where \(\mathbf{u}\) is a circuit of \(T\), see [19, Theorem 1.11]. Therefore, \(D(\mathbf{u})\) has the form \(D(0,\ldots,0,n_{i}^{\#},0,\ldots,0,-n_{j}^{\#},0,\ldots,0)\), where \(n_{i}^{\#},n_{j}^{\#}\) are the \(n_{i},n_{j}\) divided by their greatest common divisor, \(n_{i}^{\#}\) is the \(j^{th}\) component of \(\mathbf{u}\) and \(-n_{j}^{\#}\) is the \(i^{th}\) component of \(\mathbf{u}\). Let \(D(\mathbf{u})=D(\mathbf{v})+_{sc}D(\mathbf{w})\) be a semi-conformal decomposition of \(D(\mathbf{u})\). Then, by Proposition 4.1, we have that \([\mathbf{u}]^{i}=[\mathbf{v}]^{i}+_{c}[\mathbf{w}]^{i}\). As the only conformal decomposition of \(0\) is \(0+0\), the only
possible nonzero component of each of the three vectors \([{\bf u}]^{i},[{\bf v}]^{i},[{\bf w}]^{i}\) is the \(j^{th}\) and we have that \(n_{i}^{\#}=v_{j}+_{c}w_{j}\). Thus, both \(v_{j},w_{j}\) are non-negative and at least one is positive. Without loss of generality we can assume that \(v_{j}>0\), so \({\bf v}\) is not zero and \([{\bf v}]^{i}\) has only one element \(v_{j}\) in its support. This means that \({\bf v}\) has minimal support \(\{i,j\}\) and so it is a multiple of the circuit \({\bf u}\). Therefore, \({\bf v}=l{\bf u}\) with \(l\geq 1\), since \(v_{j}\) is positive. Thus, \(n_{i}^{\#}=v_{j}+w_{j}\geq v_{j}=ln_{i}^{\#}\), therefore \(l=1\), \({\bf v}={\bf u}\), and the only option for \({\bf w}\) is \({\bf w}={\bf 0}\), thus \(D({\bf w})={\bf 0}\), leading to a non-proper decomposition. This proves that \(D({\bf u})\) is indispensable.
**Lemma 4.3**.: _Let \(T=(n_{1},n_{2},\ldots,n_{s})\) and \({\bf u}\) be the circuit with support on \(j,k\in[s]\). Then the circuit \(D({\bf u})\) is indispensable in \(\Lambda(T)_{\{i\}}\) if and only if \(I_{(n_{i},n_{j},n_{k})}\) is a complete intersection on \(n_{i}\)._
Proof.: The circuit with support on \(j,k\) is \({\bf u}=(0,\ldots,0,n_{k}^{\#},0,\ldots,0,-n_{j}^{\#},0,\ldots,0)\), where the two nonzero elements \(n_{k}^{\#},n_{j}^{\#}\) are in the \(j^{th}\) and \(k^{th}\) position respectively and \(n_{k}^{\#},n_{j}^{\#}\) are the \(n_{k},n_{j}\) divided by their greatest common divisor. To prove one implication, let \(I_{(n_{i},n_{j},n_{k})}\) be a complete intersection on \(n_{i}\). Then \(n_{k}^{\#}=c_{j}\) and \(n_{j}^{\#}=c_{k}\) and thus \({\rm g.c.d}(c_{j},c_{k})=1\). Suppose that the circuit \(D({\bf u})\) is not indispensable in \(\Lambda(T)_{\{i\}}\) and let \(D({\bf u})=D({\bf v})+_{sc}D({\bf w})\) be a proper semiconformal decomposition of \(D({\bf u})\). Then, by Proposition 4.1, we have that \([{\bf u}]^{i}=[{\bf v}]^{i}+_{c}[{\bf w}]^{i}\). Also, coordinate wise \(c_{j}=v_{j}+_{c}w_{j}=a+b\) and \(c_{k}=v_{k}+_{c}w_{k}=-c-d\), where \(a,b,c,d\in\mathbb{N}\). Moreover, from the semiconformal decomposition of \(D({\bf u})\), we have that \(0=v_{i}+_{sc}w_{i}\), so \(v_{i}=-e,w_{i}=e\), where \(e\in\mathbb{N}\). The rest of the components of \({\bf u},{\bf v},{\bf w}\) are zero, since the only conformal decomposition of \(0\) is \(0+0\), by Proposition 4.1. Then, since \({\bf u},{\bf v},{\bf w}\in{\rm Ker}_{\mathbb{Z}}(T)\) and
\[{\bf v} = (0,\ldots,0,\ a,0,\ldots,-e,\ldots,0,-c,0,\ldots,0),\] \[{\bf w} = (0,\ldots,0,\ b,0,\ldots,\ \ e,\ldots,0,-d,0,\ldots,0),\]
we have that \(an_{j}-en_{i}-cn_{k}=0\) and \(bn_{j}+en_{i}-dn_{k}=0\), where the possible nonzero components of \({\bf v},{\bf w}\) are in the \(j^{th}\), \(i^{th}\) and \(k^{th}\) positions. This implies that \(an_{j}=cn_{k}+en_{i}\) and we distinguish two cases: \(a=0\) and \(a>0\). If \(a=0\), then \(an_{j}=0=cn_{k}+en_{i}\) implies that \(c=0\) and \(e=0\), which means that \({\bf v}={\bf 0}\) and \(D({\bf v})={\bf 0}\), a contradiction. In the case that \(a>0\), then \(an_{j}\) belongs to the semigroup \(<n_{k},n_{i}>\). However, \(c_{j}n_{j}\) is the smallest multiple of \(n_{j}\) that belongs to the semigroup \(<n_{k},n_{i}>\). Thus, \(a\geq c_{j}=a+b\), which implies that \(a=c_{j}\) and \(b=0\). We similarly argue for \(dn_{k}=bn_{j}+en_{i}\). If \(d=0\), then \(dn_{k}=0=bn_{j}+en_{i}\) implies \(b=0\) and \(e=0\). This means that \({\bf w}={\bf 0}\) and \(D({\bf w})={\bf 0}\), a contradiction. In the case that \(d>0\), then \(dn_{k}\) belongs to the semigroup \(<n_{i},n_{j}>\). However, \(c_{k}n_{k}\) is the smallest multiple of \(n_{k}\) that belongs to the semigroup \(<n_{i},n_{j}>\). Thus, \(d\geq c_{k}=c+d\), which implies that \(d=c_{k}\) and \(c=0\).
In conclusion, from \(an_{j}=cn_{k}+en_{i}\) and \(dn_{k}=bn_{j}+en_{i}\), we have that \(b=c=0\), \(a=c_{j}\), \(d=c_{k}\) and thus \(c_{j}n_{j}=en_{i}\) and \(c_{k}n_{k}=en_{i}\). The equation \(c_{j}n_{j}=en_{i}\) implies that \({\rm g.c.d}(c_{j},e)=1\), since otherwise a smaller multiple of \(n_{j}\) would belong to the semigroup \(<n_{k},n_{i}>\). Then \(e\) divides \(n_{j}\) and thus \(n_{i}\) is a multiple of \(c_{j}\). Similarly from \(c_{k}n_{k}=en_{i}\) we have \({\rm g.c.d}(c_{k},e)=1\). Then \(e\) divides \(n_{k}\) and thus
\(n_{i}\) is a multiple of \(c_{k}\). Since \(\mathrm{g.c.d}(c_{j},c_{k})=1\) we have that \(n_{i}=\lambda c_{j}c_{k}\). Then \(c_{j}n_{j}=en_{i}\) implies \(n_{j}=\lambda ec_{k}\) and \(c_{k}n_{k}=en_{i}\) implies \(n_{k}=\lambda ec_{j}\).
Summarizing we have \(n_{i}=\lambda c_{j}c_{k}\), \(n_{j}=\lambda ec_{k}\), \(n_{k}=\lambda ec_{j}\), \(\mathrm{g.c.d}(c_{j},e)=1\), \(\mathrm{g.c.d}(c_{k},e)=1\) and \(\mathrm{g.c.d}(c_{j},c_{k})=1\). Recall that \(c_{i}n_{i}\) is the smallest multiple of \(n_{i}\) that belongs to the semigroup generated by \(n_{j},n_{k}\). Then \(c_{i}n_{i}=c_{ij}n_{j}+c_{ik}n_{k}\) implies that \(c_{i}\lambda c_{j}c_{k}=c_{ij}\lambda ec_{k}+c_{ik}\lambda ec_{j}\). We conclude that \(e\) divides \(c_{i}\), since \(\mathrm{g.c.d}(c_{j},e)=1\), \(\mathrm{g.c.d}(c_{k},e)=1\). Therefore \(e\leq c_{i}\), but from the defining property of \(c_{i}\) and the fact that \(en_{i}=c_{j}n_{j}\) we have \(e\geq c_{i}\). Thus \(c_{i}=e\) and \(c_{i}n_{i}=c_{j}n_{j}=c_{k}n_{k}\). This means that \(I_{(n_{i},n_{j},n_{k})}\) is complete intersection on all, a contradiction. Thus, \(D(\mathbf{u})\) is indispensable in \(\Lambda(T)_{\{i\}}\).
To prove the other implication, suppose now that \(I_{(n_{i},n_{j},n_{k})}\) is not a complete intersection on \(n_{i}\). It then follows that it is either a complete intersection on all or not a complete intersection at all. If it is a complete intersection on all, then \(c_{j}n_{j}=c_{i}n_{i}=c_{k}n_{k}\). Then, we have the proper semi-conformal decomposition \(\mathbf{u}=\mathbf{v}+_{sc}\mathbf{w}\), where
\[\mathbf{u} = (0,\ldots,\ c_{j}\,\ldots,0,\ 0\,0,\ldots,0,-c_{k},0,\ldots,0),\] \[\mathbf{v} = (0,\ldots,\ c_{j}\,\ldots,0,-c_{i},0,\ldots,0,\ \ 0\,\ldots,0),\] \[\mathbf{w} = (0,\ldots,\ 0\,\ldots,0,\ c_{i}\,0,\ldots,0,-c_{k},0,\ldots,0)\,\]
where the components that can be nonzero in at least one vector are in the \(j^{th}\), \(i^{th}\) and \(k^{th}\)-positions. This implies that \(D(\mathbf{u})=D(\mathbf{v})+_{sc}D(\mathbf{w})\), since on all components except the one in the \(i^{th}\) position the sum is conformal. Thus \(D(\mathbf{u})\) is not indispensable in \(\Lambda(T)_{\{i\}}\), a contradiction.
In the case that \(I_{(n_{j},n_{i},n_{k})}\) is not complete intersection, then the circuit \((c_{j},0,-c_{k})\) is not indispensable in \(I_{(n_{j},n_{i},n_{k})}\), therefore it has a proper semiconformal decomposition. Note that in terms of signs \((c_{j},0,-c_{k})=(*,\ominus,\ominus)+_{sc}(\oplus,\oplus,*)=\mathbf{v}+_{sc} \mathbf{w}\). The first \(*\) is \(+\) and the second \(*\) is \(-\), since \(\mathrm{Ker}_{\mathbb{Z}}(n_{j},n_{i},n_{k})\cap\mathbb{N}^{3}=\{\mathbf{0}\}\). Thus, the sum is conformal in the first and the last component. Then
\[D((0,\ldots,c_{j},\ldots,0,0,\ldots,-c_{k},0,\ldots,0))=\] \[D((0,\ldots,v_{j},\ldots,v_{i},0,\ldots,v_{k},0,\ldots,0))+_{sc}D ((0,\ldots,w_{j},\ldots,w_{i},0,\ldots,w_{k},0,\ldots,0)).\]
Thus, \(D(\mathbf{u})\) is not indispensable in \(\Lambda(T)_{\{i\}}\), a contradiction, and therefore the other implication is also proved.
The following theorem shows that if \(\{i\}\) is a face of the strongly robust complex \(\Delta_{T}\), then \(n_{i}\) has a very special property. As we will see in Theorem 4.5, if \(n_{i}\) satisfies this property, then \(n_{i}\) is unique.
**Theorem 4.4**.: _Let \(T=(n_{1},n_{2},\ldots,n_{s})\). If the strongly robust complex \(\Delta_{T}\) contains \(\{i\}\) as a face then for every \(j,k\in[s]\) with \(j,k\neq i\), \(I_{(n_{i},n_{j},n_{k})}\) is a complete intersection on \(n_{i}\)._
Proof.: Suppose that \(\{i\}\) is a face of \(\Delta_{T}\); then \(\Lambda(T)_{\{i\}}\) is strongly robust. Then all circuits are indispensable, since for strongly robust toric ideals the set of indispensable elements is equal to the Graver basis and the latter contains all circuits. Therefore, by Lemma 4.3, \(I_{(n_{i},n_{j},n_{k})}\) is a complete intersection on \(n_{i}\).
**Theorem 4.5**.: _Let \(T=(n_{1},n_{2},\ldots,n_{s})\). The strongly robust complex \(\Delta_{T}\) is either \(\{\emptyset\}\) or \(\{\emptyset,\{i\}\}\) for exactly one \(i\in[s]\)._
Proof.: By Theorem 3.4, \(\dim(\Delta_{T})\leq 0\), thus \(\Delta_{T}=\{\emptyset\}\) or \(\Delta_{T}=\{\emptyset\}\cup\{\{i\}|i\in\Sigma\subset[s]\}.\) We claim that \(\Sigma\) contains at most one element. Suppose not, and let \(i,j\in\Sigma\), \(i\neq j\). Take any \(k\in[s]\) different from \(i,j\) and apply Theorem 4.4 twice. We have that \(I_{(n_{i},n_{j},n_{k})}\) is a complete intersection on \(n_{i}\) and on \(n_{j}\), which is a contradiction, see the remark after Definition 3.1. Thus the set \(\Sigma\) can have at most one element.
**Remark 4.6**.: It follows from Theorem 4.5 that toric ideals of monomial curves in \(A^{s}\) for \(s\geq 3\) are never strongly robust, since in this case \(I_{T}\) is a simple toric ideal, therefore all of its bouquets are not mixed. Thus, \(I_{T}\) is a \(T_{[s]}\)-robust ideal and \([s]\not\in\Delta_{T}\). The fact that the toric ideal of a monomial curve is not robust for \(s\geq 3\), thus also not strongly robust, was also noticed in [12, Corollary 4.17].
Although the converse of Theorem 4.4 is not true in general, it is true for \(1\times 3\) matrices, as the following Theorem shows.
**Theorem 4.7**.: _Let \(T=(n_{1},n_{2},n_{3})\). We have the following three cases_
* _if_ \(I_{T}\) _is not complete intersection, then_ \(\Delta_{T}\) _is the empty complex;_
* _if_ \(I_{T}\) _is complete intersection on_ \(n_{i}\)_, for an_ \(i\in[3]\)_, then_ \(\Delta_{T}=\{\emptyset,\{i\}\}\)_;_
* _if_ \(I_{T}\) _is complete intersection on all, then_ \(\Delta_{T}\) _is the empty complex._
Proof.: By Theorem 4.5, we have that the strongly robust complex \(\Delta_{T}\) is either \(\{\emptyset\}\) or \(\{\emptyset,\{i\}\}\) for one \(i\in[3]\). By Theorem 4.4, if \(\{i\}\in\Delta_{T}\), then \(I_{(n_{i},n_{j},n_{k})}\) is a complete intersection on \(n_{i}\). Therefore it remains to show that if \(I_{T}\) is a complete intersection on \(n_{i}\), for an \(i\in[3]\), then \(\Delta_{T}=\{\emptyset,\{i\}\}\), or equivalently that \(\Lambda(T)_{\{i\}}\) is strongly robust. Without loss of generality we may suppose that \(i=1\). By Lemma 4.2 the two circuits with \(1\) in their support are indispensable, while by Lemma 4.3 the remaining circuit with support on \(\{2,3\}\) is indispensable in \(\Lambda(T)_{\{1\}}\), as \(I_{T}\) is a complete intersection on \(n_{1}\). Thus it remains to prove that elements \(\mathbf{u}\) in \(\operatorname{Gr}(T)\) with full support are indispensable in \(\Lambda(T)_{\{1\}}\). Taking \(\mathbf{u}\) or \(-\mathbf{u}\), we can suppose that the first component of \(\mathbf{u}\) is positive.
There are three cases then for \(\mathbf{u}\): (1) \(\mathbf{u}=(a,-b,-c)\), (2) \(\mathbf{u}=(a,b,-c)\), and (3) \(\mathbf{u}=(a,-b,c)\), where \(a,b,c\in\mathbb{N}\).
(1) \(\mathbf{u}=(a,-b,-c)\). Let \(D(\mathbf{u})=D(\mathbf{v})+_{sc}D(\mathbf{w})\) be a semiconformal decomposition of \(D(\mathbf{u})\) in \(\Lambda(T)_{\{1\}}\). Then, from Proposition 4.1, we have \([\mathbf{u}]^{1}=[\mathbf{v}]^{1}+_{c}[\mathbf{w}]^{1}\), but \([\mathbf{u}]^{1}=(-b,-c)\) and the sum being conformal implies that all components of \([\mathbf{v}]^{1},[\mathbf{w}]^{1}\) are non-positive. Since \(\mathbf{v},\mathbf{w}\in\operatorname{Ker}_{\mathbb{Z}}(T)\), this means that their first component is non-negative. But then the sum \(D(\mathbf{u})=D(\mathbf{v})+_{c}D(\mathbf{w})\) is conformal, and \(D(\mathbf{u})\in\operatorname{Gr}(\Lambda(T)_{\{1\}})\) since \(\mathbf{u}\in\operatorname{Gr}(T)\) [19]. As elements of the Graver basis are characterised as those with no proper conformal decomposition, one of the \(D(\mathbf{v}),D(\mathbf{w})\) is zero and thus \(D(\mathbf{u})\) is indispensable.
(2) \(\mathbf{u}=(a,b,-c)\). Let \(D(\mathbf{u})=D(\mathbf{v})+_{sc}D(\mathbf{w})\) be a semiconformal decomposition of \(D(\mathbf{u})\) in \(\Lambda(T)_{\{1\}}\). Then, from Proposition 4.1, we have \([\mathbf{u}]^{1}=[\mathbf{v}]^{1}+_{c}[\mathbf{w}]^{1}\). However, \([\mathbf{u}]^{1}=(b,-c)\) and the sum being conformal implies that the second component of each of the \(\mathbf{v},\mathbf{w}\) is non negative, while the third component is non positive. Taking into account that the sum \(D(\mathbf{u})=D(\mathbf{v})+_{sc}D(\mathbf{w})\) is semiconformal, we get
that the sign pattern of \(\mathbf{v},\mathbf{w}\) is \(\mathbf{v}=(*,\oplus,\ominus)\) and \(\mathbf{w}=(\oplus,\oplus,\ominus)\). If \(*=\oplus\), then \(\mathbf{u}\) has a conformal decomposition and, since \(\mathbf{u}\in\operatorname{Gr}(T)\), one of the \(\mathbf{v},\mathbf{w}\) is zero. This implies that one of the \(D(\mathbf{v}),D(\mathbf{w})\) is zero and thus \(D(\mathbf{u})\) is indispensable.
If \(*=\ominus\), then \(\mathbf{u}=(a,b,-c)=\mathbf{v}+\mathbf{w}=(-a_{1},b_{1},-c_{1})+(a_{2},b_{2},-c_{2})\), where \(a_{1},b_{1},c_{1},a_{2},b_{2},c_{2}\in\mathbb{N}\). Then \(-a_{1}n_{1}+b_{1}n_{2}-c_{1}n_{3}=0\) and \(a_{2}n_{1}+b_{2}n_{2}-c_{2}n_{3}=0\), since \(\mathbf{v},\mathbf{w}\in\operatorname{Ker}_{\mathbb{Z}}(T)\). In the first equation, \(b_{1}=0\) implies that \(a_{1}=0=c_{1}\), thus \(\mathbf{v}=\mathbf{0}\) and the proof is complete, as \(D(\mathbf{u})\) is indispensable. Otherwise, \(b_{1}n_{2}\) belongs to the semigroup generated by \(n_{1},n_{3}\), and since \((n_{1},n_{2},n_{3})\) is a complete intersection on \(n_{1}\), this implies \(b=b_{1}+b_{2}\geq b_{1}\geq n_{3}^{\#}.\) Similarly, from the second equation we get that either \(\mathbf{w}=\mathbf{0}\) or \(c=c_{1}+c_{2}\geq c_{2}\geq n_{2}^{\#}.\) But then the proper sum \((a,b,-c)=(a,b-n_{3}^{\#},-c+n_{2}^{\#})+(0,n_{3}^{\#},-n_{2}^{\#})\) is conformal, which is a contradiction since \(\mathbf{u}\in\operatorname{Gr}(T)\).
(3) \(\mathbf{u}=(a,-b,c)\). For the third case we argue in a similar manner as in the second case.
Thus in all cases \(D(\mathbf{u})\) is indispensable in \(\Lambda(T)_{\{1\}}\) and thus \(\Lambda(T)_{\{1\}}\) is strongly robust.
A different proof of Theorem 4.7 can be given using Lemmata 4.2, 4.3 and [17, Corollary 4.6], since for \(T=(n_{1},n_{2},n_{3})\) the ideal \(I_{\Lambda(T)_{\{1\}}}\) has codimension \(2\). We have preferred the above proof, as it follows the style of the remaining proofs in this article.
## 5. Primitive elements and strong robustness
In this section, we generalise the notion of a primitive element and that of a Graver basis and use it to give a necessary and sufficient criterion for a vertex \(\{i\}\) to be a face of the strongly robust simplicial complex.
**Definition 5.1**.: _An element \(\mathbf{u}\in S\subset\mathbb{Z}^{n}\) is called primitive in \(S\) if there is no \(\mathbf{v}\in S\), \(\mathbf{v}\neq\mathbf{u}\) such that \(\mathbf{v}^{+}\leq\mathbf{u}^{+}\) and \(\mathbf{v}^{-}\leq\mathbf{u}^{-}\). The set of primitive elements in \(S\) is denoted by \(\operatorname{Graver}(S)\)._
Primitive elements in \(S=\operatorname{Ker}_{\mathbb{Z}}(A)\) constitute the Graver basis of \(A\).
**Definition 5.2**.: _Let \(T=(n_{1},n_{2},\dots,n_{s})\), we define_
\[\operatorname{Gr}(T)^{i}=\{[\mathbf{u}]^{i}|\mathbf{u}\in\operatorname{Gr}(T )\}\subset\mathbb{Z}^{s-1}.\]
**Proposition 5.3**.: _Let \(\mathbf{u}\in\operatorname{Gr}(T).\) Then \(D(\mathbf{u})\) is indispensable in \(\Lambda(T)_{\{i\}}\) if and only if \([\mathbf{u}]^{i}\) is primitive in \(\operatorname{Gr}(T)^{i}\)._
Proof.: Suppose \([\mathbf{u}]^{i}\) is not primitive in \(\operatorname{Gr}(T)^{i}\). Then there exists an element \(\mathbf{v}\in\operatorname{Gr}(T)\) such that \([\mathbf{v}^{+}]^{i}\leq[\mathbf{u}^{+}]^{i}\) and \([\mathbf{v}^{-}]^{i}\leq[\mathbf{u}^{-}]^{i}\). Set \(\mathbf{w}=\mathbf{u}-\mathbf{v}\). Then it follows that \(\mathbf{u}=\mathbf{v}+\mathbf{w}\) and \([\mathbf{u}]^{i}=[\mathbf{v}]^{i}+_{c}[\mathbf{w}]^{i}\). Looking at the \(i\)-th components of \(\mathbf{v},\mathbf{w}\) and putting the one with negative \(i\)-th component first, we get that one of the sums \(D(\mathbf{u})=D(\mathbf{v})+D(\mathbf{w})\) or \(D(\mathbf{u})=D(\mathbf{w})+D(\mathbf{v})\) is semiconformal. Thus \(D(\mathbf{u})\) is not indispensable in \(\Lambda(T)_{\{i\}}\).
Suppose that \(D(\mathbf{u})\) is not indispensable in \(\Lambda(T)_{\{i\}}\). Then there is a proper semiconformal decomposition \(D(\mathbf{u})=D(\mathbf{v})+_{sc}D(\mathbf{w})\), note that in any decomposition
like that the sum is conformal in every component except possibly at the \(i\)-th component, see Proposition 4.1. But \(\mathbf{u}\in\operatorname{Gr}(T)\), therefore \(D(\mathbf{u})\in\operatorname{Gr}(\Lambda(T)_{\{i\}})\), thus it does not have a proper conformal decomposition. We conclude that the \(i\)-th component of \(\mathbf{v}\) is negative and the \(i\)-th component of \(\mathbf{w}\) is positive. From all semiconformal decompositions \(D(\mathbf{u})=D(\mathbf{v})+_{sc}D(\mathbf{w})\) choose one with \(w_{i}\) smallest. Then, for these choices of \(\mathbf{v},\mathbf{w}\), we claim that \(\mathbf{w}\in\operatorname{Gr}(T).\) Suppose not; then there is a proper conformal decomposition \(\mathbf{w}=\mathbf{w}^{\prime}+_{c}\mathbf{w}^{\prime\prime}=\mathbf{w}^{\prime\prime}+_{c}\mathbf{w}^{\prime}\). Choose \(\mathbf{w}^{\prime\prime}\) to be the one with \(w_{i}>w_{i}^{\prime\prime}\). But then it is easy to see that \(D(\mathbf{u})=D(\mathbf{v}+\mathbf{w}^{\prime})+_{sc}D(\mathbf{w}^{\prime\prime})\), which is a contradiction, since \(w_{i}>w_{i}^{\prime\prime}\). Thus \(\mathbf{w}\in\operatorname{Gr}(T)\). Therefore \([\mathbf{w}]^{i}\) is in \(\operatorname{Gr}(T)^{i}\) with \([\mathbf{w}^{+}]^{i}\leq[\mathbf{u}^{+}]^{i}\) and \([\mathbf{w}^{-}]^{i}\leq[\mathbf{u}^{-}]^{i}\). Thus \([\mathbf{u}]^{i}\) is not primitive in \(\operatorname{Gr}(T)^{i}\).
**Theorem 5.4**.: _Let \(T=(n_{1},n_{2},\ldots,n_{s})\). The strongly robust complex \(\Delta_{T}\) contains \(\{i\}\) as a face if and only if for every element \(\mathbf{u}\in\operatorname{Gr}(T)\) the \([\mathbf{u}]^{i}\) is primitive in \(\operatorname{Gr}(T)^{i}\)\((\)or equivalently if and only if \(\operatorname{Graver}(\operatorname{Gr}(T)^{i})=\operatorname{Gr}(T)^{i}\)\()\)._
Proof.: Suppose that \(\{i\}\) is a face of \(\Delta_{T}\) then \(\Lambda(T)_{\{i\}}\) is strongly robust. Therefore \(D(\mathbf{u})\) is indispensable in \(\Lambda(T)_{\{i\}}\) for every element \(\mathbf{u}\in\operatorname{Gr}(T)\). Then by Proposition 5.3 for every element \(\mathbf{u}\in\operatorname{Gr}(T)\) the \([\mathbf{u}]^{i}\) is primitive in \(\operatorname{Gr}(T)^{i}\).
Suppose now that for every element \(\mathbf{u}\in\operatorname{Gr}(T)\) the \([\mathbf{u}]^{i}\) is primitive in \(\operatorname{Gr}(T)^{i}\). Then Proposition 5.3 implies that \(D(\mathbf{u})\) is indispensable in \(\Lambda(T)_{\{i\}}\) for every \(\mathbf{u}\in\operatorname{Gr}(T)\). But [19, Theorem 1.11] says that all elements in the Graver basis of \(\Lambda(T)_{\{i\}}\) are in the form \(D(\mathbf{u})\) with \(\mathbf{u}\in\operatorname{Gr}(T)\). Thus \(\Lambda(T)_{\{i\}}\) is strongly robust and so \(\{i\}\) is a face of \(\Delta_{T}\).
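The criterion of Theorem 5.4 is directly algorithmic once a Graver basis of \(T\) is available, for instance from 4ti2 [1]. The following Python sketch is not from the paper: it takes a user-supplied list of Graver basis elements (here a small made-up list of vectors, purely to show the bookkeeping) and checks, for a given index \(i\), whether every projection \([\mathbf{u}]^{i}\) is primitive in \(\operatorname{Gr}(T)^{i}\) in the sense of Definition 5.1.

```python
# Sketch: test the Theorem 5.4 criterion for a vertex {i}.
# gr is a (hypothetical, user-supplied) list of Graver basis elements of T,
# e.g. produced by 4ti2's "graver" command; the index i is 0-based here.

def pos(v):  # componentwise positive part v^+
    return tuple(max(x, 0) for x in v)

def neg(v):  # componentwise negative part v^-
    return tuple(max(-x, 0) for x in v)

def drop(v, i):  # [v]^i : delete the i-th coordinate
    return tuple(x for k, x in enumerate(v) if k != i)

def leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def vertex_is_face(gr, i):
    """True iff every [u]^i, u in Gr(T), is primitive in Gr(T)^i."""
    proj = {drop(u, i) for u in gr}
    for u in proj:
        for v in proj:
            if v != u and leq(pos(v), pos(u)) and leq(neg(v), neg(u)):
                return False          # v witnesses that u is not primitive
    return True

# Toy usage with a made-up list of kernel vectors (not a real Graver basis):
gr = [(2, -1, 0), (0, 3, -2), (2, 2, -2)]
print([vertex_is_face(gr, i) for i in range(3)])
```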
## 6. Generalized Lawrence matrices
In [19, Section 2], Petrovic et al generalized the notion of a Lawrence matrix, see [21, Chapter 7], by introducing generalized Lawrence matrices. For every matrix \(A\) there exists a generalized Lawrence matrix with the same kernel up to permutation of columns, see [19, Corollary 2.3]. Let \((c_{1},\ldots,c_{m})\in\mathbb{Z}^{m}\) be any vector having full support and with the greatest common divisor of all its components equal to \(1\). Then there exist integers \(\lambda_{1},\ldots,\lambda_{m}\) such that \(1=\lambda_{1}c_{1}+\cdots+\lambda_{m}c_{m}\). For any choice of \(\lambda_{1},\ldots,\lambda_{m}\) and any integer \(n\) we define the matrix \(A(n,(c_{1},\ldots,c_{m}))=(\lambda_{1}n,\ldots,\lambda_{m}n)\in\mathbb{Z}^{1 \times m}\) and the matrix
\[C(c_{1},\ldots,c_{m})=\left(\begin{array}{ccccc}-c_{2}&c_{1}&&&\\ -c_{3}&&c_{1}&&\\ &&&\ddots&\\ -c_{m}&&&c_{1}\end{array}\right)\in\mathbb{Z}^{(m-1)\times m}.\]
**Theorem 6.1**.: _Let \(T=(n_{1},n_{2},\ldots,n_{s})\in\mathbb{Z}^{1\times s}\). Let \(\mathbf{c}_{1},\ldots,\mathbf{c}_{s}\) be any set of vectors having full support and each with the greatest common divisor of all its components equal to \(1\), with \(\mathbf{c}_{j}\in\mathbb{Z}^{m_{j}}\) for some \(m_{j}\geq 1\). In the case that \(\Delta_{T}=\{\emptyset\}\) each \(\mathbf{c}_{j}=(c_{j1},\ldots,c_{jm_{j}})\in\mathbb{Z}^{m_{j}}\) has the first component positive and at least one component negative, while in the case that \(\Delta_{T}=\{\emptyset,\{i\}\}\) then the same is true for all \(\mathbf{c}_{j}\) with \(j\neq i\). Define \(p=1+\sum_{i=1}^{s}(m_{i}-1)\) and \(q=\sum_{i=1}^{s}m_{i}\). Then the toric ideal \(I_{A}\) is
strongly robust, where_
\[A=\ \left(\begin{array}{ccccc}A_{1}&A_{2}&\cdots&A_{s}\\ C_{1}&0&\cdots&0\\ 0&C_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&C_{s}\end{array}\right)\in\mathbb{Z}^{p\times q},\]
\(A_{j}=A(n_{j},(c_{j1},\ldots,c_{jm_{j}}))\) _and \(C_{j}=C(c_{j1},\ldots,c_{jm_{j}})\) for all \(j=1,\ldots,s\)._
Proof.: The matrix \(A\) is a generalized Lawrence matrix, see [19, Theorem 2.1] with bouquet ideal the toric ideal of the monomial curve \(I_{T}\), with all bouquets mixed except possibly the \(B_{i}\), in the case that \(\Delta_{T}=\{\emptyset,\{i\}\}\). Therefore the toric ideal is \(T_{\omega}\)-robust, where \(\omega\) is either the empty set or \(\{i\}\). In both cases \(\omega\) is a face of the strongly robust simplicial complex \(\Delta_{T}\). Thus the toric ideal is strongly robust, see Theorem 2.3.
**Example 6.2**.: Theorem 6.1 provides a way to produce examples of strongly robust ideals with bouquet ideal the ideal of any monomial curve. Take for example the monomial curve with defining matrix \(T=(4,5,6)\). The toric ideal \(I_{T}\) is a complete intersection on \(5\) thus according to Theorem 4.7 the strongly robust complex \(\Delta_{T}\) is equal to \(\{\emptyset,\{2\}\}\). Choose three integer vectors, one for each bouquet, of any dimension with full support and the greatest common divisor of all its components equal to \(1\) such that they have a positive first component and at least one component negative except possibly for the second vector which may have all components positive. For example choose \(\mathbf{c}_{1}=(2,-1,-2023)\), \(\mathbf{c}_{2}=(10,2024,7,4)\) and \(\mathbf{c}_{3}=(5,3,-2029)\). For each vector \(\mathbf{c}=(c_{1},\ldots,c_{m})\in\mathbb{Z}^{m}\) choose integers \(\lambda_{1},\ldots,\lambda_{m}\) such that \(1=\lambda_{1}c_{1}+\cdots+\lambda_{m}c_{m}\). For example \(1=0\cdot 2+(-1)\cdot(-1)+0\cdot(-2023)\), \(1=(-1)\cdot 10+0\cdot 2024+1\cdot 7+1\cdot 4\) and \(1=2\cdot 5+(-3)\cdot 3+0\cdot(-2029).\) Then Theorem 6.1 says that the toric ideal \(I_{A}\) is strongly robust, where
\[A=\ \left(\begin{array}{cccccccccc}0&-4&0&-5&0&5&5&12&-18&0\\ 1&2&0&0&0&0&0&0&0&0\\ 2023&0&2&0&0&0&0&0&0&0\\ 0&0&0&-2024&10&0&0&0&0&0\\ 0&0&0&-7&0&10&0&0&0&0\\ 0&0&0&-4&0&0&10&0&0&0\\ 0&0&0&0&0&0&0&-3&5&0\\ 0&0&0&0&0&0&0&2029&0&5\end{array}\right).\]
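Since the construction in Theorem 6.1 is completely explicit, the matrix of Example 6.2 can be assembled mechanically. The Python sketch below (ours, not the authors') encodes \(A(n,\mathbf{c})\) and \(C(c_{1},\ldots,c_{m})\) exactly as defined at the start of this section and reproduces the \(8\times 10\) matrix \(A\) above for \(T=(4,5,6)\) with the vectors \(\mathbf{c}_{1},\mathbf{c}_{2},\mathbf{c}_{3}\) and \(\lambda\)-choices of the example.

```python
import numpy as np

def A_row(n, lam):
    # A(n, c) = (lambda_1 n, ..., lambda_m n) for a chosen lambda with lambda . c = 1
    return np.array([l * n for l in lam])

def C_block(c):
    # C(c_1,...,c_m): (m-1) x m matrix with k-th row (-c_{k+1}, 0,...,0, c_1, 0,...,0)
    m = len(c)
    C = np.zeros((m - 1, m), dtype=int)
    for k in range(m - 1):
        C[k, 0] = -c[k + 1]
        C[k, k + 1] = c[0]
    return C

def lawrence(T, cs, lams):
    # Stack the row of A_j blocks on top of the block-diagonal of C_j blocks.
    top = np.concatenate([A_row(n, lam) for n, lam in zip(T, lams)])
    diag = [C_block(c) for c in cs]
    rows = sum(b.shape[0] for b in diag)
    cols = sum(b.shape[1] for b in diag)
    bottom = np.zeros((rows, cols), dtype=int)
    r = c0 = 0
    for blk in diag:
        bottom[r:r + blk.shape[0], c0:c0 + blk.shape[1]] = blk
        r += blk.shape[0]
        c0 += blk.shape[1]
    return np.vstack([top, bottom])

T = (4, 5, 6)
cs = [(2, -1, -2023), (10, 2024, 7, 4), (5, 3, -2029)]
lams = [(0, -1, 0), (-1, 0, 1, 1), (2, -3, 0)]
for c, lam in zip(cs, lams):
    assert sum(l * x for l, x in zip(lam, c)) == 1
print(lawrence(T, cs, lams))
```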
According to the following theorem, _all_ strongly robust ideals with bouquet ideal the ideal of a monomial curve are produced like in the above example.
**Theorem 6.3**.: _Let \(I_{A}\) be any toric ideal which is strongly robust and such that its bouquet ideal is the ideal of a monomial curve. Then there exists a generalized Lawrence matrix \(A^{\prime}\) such that \(I_{A}=I_{A^{\prime}}\), up to permutation of column indices, where_
\[A^{\prime}=\ \left(\begin{array}{ccccc}A_{1}&A_{2}&\cdots&A_{s}\\ C_{1}&0&\cdots&0\\ 0&C_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&C_{s}\end{array}\right)\in\mathbb{Z}^{p\times q},\]
_with matrices \(A_{j}=A(n_{j},(c_{j1},\ldots,c_{jm_{j}}))\) and \(C_{j}=C(c_{j1},\ldots,c_{jm_{j}})\) for some \(T=(n_{1},n_{2},\ldots,n_{s})\in\mathbb{Z}^{1\times s}\), and \(\mathbf{c}_{j}\in\mathbb{Z}^{m_{j}}\) for some \(m_{j}\geq 1\), for all \(j=1,\ldots,s\), are integer vectors having full support and each one with the greatest common divisor of all its components equal to \(1\). In the case that \(\Delta_{T}=\{\emptyset\}\) each \(\mathbf{c}_{j}=(c_{j1},\ldots,c_{jm_{j}})\in\mathbb{Z}^{m_{j}}\) has the first component positive and at least one component negative. In the case that \(\Delta_{T}=\{\emptyset,\{i\}\}\) each \(\mathbf{c}_{j}\) with \(j\neq i\) has the first component positive and at least one component negative, while \(\mathbf{c}_{i}\) may have all components positive._
Proof.: By hypothesis, the ideal \(I_{A}\) is strongly robust with its bouquet ideal \(I_{T}\), where \(T=(n_{1},n_{2},\ldots,n_{s})\in\mathbb{Z}^{1\times s}\). By Theorem 4.5 the strongly robust complex \(\Delta_{T}\) is either \(\{\emptyset\}\) or \(\{\emptyset,\{i\}\}\) for one \(i\in[s]\). Then the ideal \(I_{A}\) is either \(T\) or \(T_{\{i\}}\)-robust, therefore all bouquets of \(I_{A}\) are mixed with the possible exception of the \(i\)-th bouquet.
For any integer matrix \(A\), there exists a generalized Lawrence matrix \(A^{\prime}\) such that \(I_{A}=I_{A^{\prime}}\), up to permutation of column indices, by [19, Corollary 2.3]. Since the bouquet ideal is the ideal \(I_{T}\) and all bouquets of \(I_{A}\) are mixed with the possible exception of the \(i\)-th bouquet, the vectors \(\mathbf{c}_{j}\) have a positive first component and at least one component negative, with the possible exception of \(\mathbf{c}_{i}\) which may have all components positive.
**Example 6.4**.: Consider the matrix \(A=(\mathbf{a}_{1},\ldots,\mathbf{a}_{11})\in\mathbb{Z}^{8\times 11}\), given by
\[A=\left(\begin{array}{ccccccccc}36&60&4&40&64&39&1&72&84&12&4\\ 12&20&4&8&24&-2&1&12&4&0&4\\ 36&80&4&48&88&39&1&84&84&12&4\\ 60&100&12&16&112&33&4&120&104&36&24\\ 24&40&8&24&48&-4&2&36&8&0&8\\ 12&20&4&-12&24&-2&1&24&8&12&8\\ 12&20&4&-12&24&-2&1&24&12&12&12\\ 24&40&0&4&40&39&1&60&84&24&4\end{array}\right).\]
Using 4ti2 [1] we can see that the toric ideal \(I_{A}\) is strongly robust and a Gale transform of the matrix \(A\) is
\[\left(\begin{array}{ccccc}5&40&779&-13642\\ -18&-162&-3198&56004\\ -15&-120&-2337&40926\\ 0&6&123&-2154\\ 15&135&2665&-46670\\ 0&0&4&-72\\ 0&0&8&-144\\ 0&-4&-82&1436\\ 0&0&0&1\\ 0&14&287&-5026\\ 0&0&0&-1\end{array}\right).\]
The toric ideal \(I_{A}\) has five bouquets \(B_{1}=\{a_{1},a_{3}\}\), \(B_{2}=\{a_{2},a_{5}\}\), \(B_{3}=\{a_{6},a_{7}\}\), \(B_{4}=\{a_{4},a_{8},a_{10}\}\), \(B_{5}=\{a_{9},a_{11}\}\), all of them being mixed except the third one, which is non-mixed. The corresponding \(\mathbf{c}_{B}\) vectors are: \(\mathbf{c}_{B_{1}}=(1,0,-3,0,0,0,0,0,0,0,0)\),
\[\mathbf{c}_{B_{2}}=(0,6,0,0,-5,0,0,0,0,0,0),\ \ \mathbf{c}_{B_{3}}=(0,0,0,0,0,1,2,0,0,0,0),\]
\[\mathbf{c}_{B_{4}}=(0,0,0,3,0,0,0,-2,0,7,0),\ \ \mathbf{c}_{B_{5}}=(0,0,0,0,0,0,0,0,1,0,-1).\]
The bouquet ideal is the toric ideal of the matrix \(A_{B}=(\mathbf{a}_{B_{1}},\mathbf{a}_{B_{2}},\mathbf{a}_{B_{3}},\mathbf{a}_{B_{4 }},\mathbf{a}_{B_{5}})\), that is
\[A_{B}=\ \left(\begin{array}{ccccc}24&40&41&60&80\\ 0&0&0&0&0\\ 24&40&41&60&80\\ 24&40&41&60&80\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 24&40&41&60&80\end{array}\right).\]
The bouquet ideal \(I_{A_{B}}\) is exactly the same as the toric ideal of the monomial curve for \(T=(24,40,41,60,80)\) for which we know that the strongly robust complex is \(\Delta_{T}=\{\emptyset,\{3\}\}\). Note that only the third bouquet is not mixed and thus \(I_{A}\) is a \(T_{\{3\}}\)-robust ideal which explains why it is strongly robust.
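The matrix \(A_{B}\) can be checked directly: under the usual bouquet construction (an assumption taken from the bouquet literature, not restated in this example), the column attached to a bouquet \(B_{j}\) is \(\mathbf{a}_{B_{j}}=\sum_{k}(\mathbf{c}_{B_{j}})_{k}\,\mathbf{a}_{k}\). A short numpy sketch of this check:

```python
import numpy as np

# Columns a_1,...,a_11 of A (copied from the matrix above) and the bouquet
# vectors c_{B_1},...,c_{B_5}; a_{B_j} = sum_k (c_{B_j})_k a_k is assumed,
# following the usual bouquet construction.
A = np.array([
    [36, 60, 4, 40, 64, 39, 1, 72, 84, 12, 4],
    [12, 20, 4, 8, 24, -2, 1, 12, 4, 0, 4],
    [36, 80, 4, 48, 88, 39, 1, 84, 84, 12, 4],
    [60, 100, 12, 16, 112, 33, 4, 120, 104, 36, 24],
    [24, 40, 8, 24, 48, -4, 2, 36, 8, 0, 8],
    [12, 20, 4, -12, 24, -2, 1, 24, 8, 12, 8],
    [12, 20, 4, -12, 24, -2, 1, 24, 12, 12, 12],
    [24, 40, 0, 4, 40, 39, 1, 60, 84, 24, 4],
])
c_B = np.array([
    [1, 0, -3, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 6, 0, 0, -5, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0],
    [0, 0, 0, 3, 0, 0, 0, -2, 0, 7, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, -1],
])
A_B = A @ c_B.T          # the j-th column is a_{B_j}
print(A_B)               # rows 1, 3, 4, 8 give (24, 40, 41, 60, 80), the rest are zero
```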
Then, the generalized Lawrence matrix with the same kernel, after permutation of column indices, is
\[A^{\prime}=\ \left(\begin{array}{ccccccccccc}24&0&40&40&41&0&60&60&0&80&0\\ 3&1&0&0&0&0&0&0&0&0&0\\ 0&0&5&6&0&0&0&0&0&0&0\\ 0&0&0&0&-2&1&0&0&0&0&0\\ 0&0&0&0&0&0&2&3&0&0&0\\ 0&0&0&0&0&0&-7&0&3&0&0\\ 0&0&0&0&0&0&0&0&0&1&1\end{array}\right).\]
Note that the permutation of column indices to bring the vectors of the same bouquet together gives the following isomorphism of the two kernels \(\phi:\mathrm{Ker}_{\mathbb{Z}}(A)\mapsto\mathrm{Ker}_{\mathbb{Z}}(A^{\prime})\),
\[\phi(u_{1},u_{2},u_{3},u_{4},u_{5},u_{6},u_{7},u_{8},u_{9},u_{10},u_{11})=(u_{ 1},u_{3},u_{2},u_{5},u_{6},u_{7},u_{4},u_{8},u_{10},u_{9},u_{11}).\]
**Acknowledgments.** The first author has been supported by the Royal Society Dorothy Hodgkin Research Fellowship DHF\(\backslash\)R1\(\backslash\)201246. The third author has been partially supported by the grant PN-III-P4-ID-PCE-2020-0029, within PNCDI III, financed by Romanian Ministry of Research and Innovation, CNCS - UEFISCDI.
|
2310.04846
|
Soft finger rotational stability for precision grasps
|
Soft robotic fingers can safely grasp fragile or variable form objects, but
their force capacity is limited, especially with less contact area: precision
grasps and when objects are smaller or not spherical. Current research is
improving force capacity through mechanical design by increasing contact area
or stiffness, typically without models which explain soft finger force
limitations. To address this, this paper considers two types of soft grip
failure, slip and dynamic rotational stability. For slip, the validity of a
Coulomb model investigated, identifying the effect of contact area, pressure,
and relative pose. For rotational stability, bulk linear stiffness of the
fingers is used to develop conditions for dynamic stability and identify when
rotation leads to slip. Together, these models suggest contact area improves
force capacity by increasing transverse stiffness and normal force. The models
are validated on pneumatic fingers, both custom PneuNets-based and commercially
available. The models are used to find grip parameters which increase force
capacity without failure.
|
Hun Jang, Valentyn Petrichenko, Joonbum Bae, Kevin Haninger
|
2023-10-07T15:11:31Z
|
http://arxiv.org/abs/2310.04846v2
|
# Soft finger dynamic stability and slip by Coulomb friction and bulk stiffness
###### Abstract
Soft robotic fingers can safely grasp fragile or non-uniform objects, but their force capacity is limited, especially with less contact area: objects which are smaller, not round, or where an enclosing grasp is not feasible. To improve force capacity, this paper considers two types of grip failure, slip and dynamic rotational stability. For slip, a Coulomb model for soft fingers based on total normal and tangential force is validated, identifying the effect of contact area, pressure, and grip position on effective Coulomb coefficient, normal force and transverse stiffness. For rotational stability, bulk stiffness of the fingers is used to develop conditions for dynamic stability about the initial grasp, and a condition for when the rotation leads to slip. Together, these models suggest contact area improves grip by increasing transverse stiffness and normal force. The models are validated in a range of grasp conditions, shown to predict the influence of object radius and finger distance on grip stability limits.
## 1 Introduction
Soft fingers aim to improve generalization of grasping and manipulation, especially with objects which are sensitive or have variation in surface geometry. Soft fingers have been shown to grasp a large range of objects, especially spherical or cylindrical objects when an enclosing grasp is used [1, 2]. Such applications are well-suited to pick-and-place tasks in unobstructed environments [3], where a large contact area between finger and object can be employed.
However, soft fingers typically have reduced force transmission capacity compared with rigid fingers [4]. The soft materials have limited ability to build normal force at the contact surface [5], and compliance in the normal direction makes force closure effects more complex [6]. This force transmission limitation can be even more significant when contact area is reduced; when smaller or flat objects are grasped, or when cluttered environments limit the ability to realize large contact areas [3]. This can be a limitation in both the ability to manipulate heavy objects [7] and in moving from pick-and-place to contact-rich manipulation, which often requires more force [8]. Ongoing work has explored increasing force capacity using stiffening elements [9], jamming techniques to increase stiffness [10], incompressible actuating fluids [7], additional kinematic structure to increase contact area [11], and structured compliance which is stiffer where forces are needed [12]. The force capacity is also affected by grip parameters such as degree of actuation, robot grip pose, and finger distance. However, the optimal grip parameters are highly object specific. To determine good grip parameters for a specific object, deep learning approaches are common for grasp pose planning [13], but not yet applied to optimizing force capacity in soft grasps.
Soft finger force capacity is limited by at least two phenomena, slip and stability.
Figure 1: Grasp instability in soft fingers, where (a) shows initial grasp, yawing instability, and rolling instability from top to bottom. In (b), the instability of a larger object is seen.
Grip stability has a wide range of definitions [14, 15]. Planar grip stability has been studied for force-closure [16], form-closure [17], and stiff-but-underactuated fingers [18]. However, these analyses assume point contact between object and gripper, as well as the gripper contact point being perfectly stiff, allowing the expression of a static grip matrix \(G\) which forms the basis of most further analysis [14]. These analyses cannot be directly applied to soft fingers, where contact is not point-like and the contact patches move in response to force. Some approaches do model finger compliance [15], such as for tendon-driven fingers [19] or spherical fingertips [6], considering Coulomb force conditions with linear stiffness models. However, dynamic instability can be observed in soft fingers, where rotation away from an initial object orientation occurs, as seen in Figure 1. This effect is not initiated by slip, and requires modeling motion transverse to the grip normal.
Slip is typically modelled with Coulomb friction in grasping [14, 20]. However, a Coulomb friction model is a point-contact model, raising questions about its applicability when the contact area is larger. Finite-element based contact models are being developed [21, 22], but not yet applied to grip slip or stability. For spherical soft fingers, closed-form friction models which include a torsional friction about the contact normal have been proposed [23, 6, 24], but not extended to general soft fingers. Deep-learning approaches for slip detection or prediction [20] have been used for trajectory planning [25], but not finger design or optimization of grip parameters.
To increase the force capacity of soft fingers, this paper proposes a model for dynamic stability and validates the bulk Coulomb model. Compared with friction modelling in soft fingers [20, 24], we examine the effect of contact area, pressure, and offset, finding their impact on effective Coulomb coefficient. This suggests increasing normal force is the primary way to increase force capacity. However, we find that dynamic instability occurs as grip force increases. This is then modelled, considering bulk finger stiffness, developing dynamic stability conditions which depend on grip force, transverse stiffness, and object radius. Compared with classical grip stability [14, 16, 17, 18, 19], this is the dynamic stability of a compliant system, analyzing the convergence to a certain pose. This paper first validates the Coulomb model for pneumatic fingers, finding the influence of contact area on finger slip force. Next, a model for dynamic rotational stability is proposed, and conditions for this rotation to cause slip developed. These models are validated experimentally on a range of object sizes and grip conditions, validating the effect of object radius, contact area and finger stiffness on maximum stable grip force.
## 2 Friction modelling
This section introduces the friction model, validating the bulk Coulomb model for soft fingers. Coulomb friction models slip as a violation of the condition \(|f_{t}|<\mu f_{n}\), where \(\mu\geq 0\) is the Coulomb friction coefficient, \(f_{n}\) normal force and \(f_{t}\) tangential force. These models are popular, even in soft robotics [26, 9, 27]. They can also be made stochastic, where \(\mu\) can be considered to vary with a distribution [20].
However, the Coulomb model is a point model, and it is unclear how it applies to larger contact areas. One reason for this is that the pressure distribution over the contact area is not uniform, which can result in local slip, where local relative motion occurs before bulk slip [23]. Computational methods to estimate contact pressure have been proposed [22], but not yet extended with frictional models. An additional effect on larger contact areas is torsional friction, which can be modelled on spherical geometries [24], but not yet extended to general contact. To investigate the accuracy of the bulk Coulomb friction over soft grasp parameters (contact area, offset, pressure, material), we develop a test environment as seen in Figure 2.
### Validation of bulk Coulomb
The experimental setup seen in Figure 2 is used with test objects 3D printed from Tough PLA which are sections of a cylinder of radius \(30\) mm. In this way, the contact area is controlled by the test object, provided the finger is in contact with the full arc of the object, which is verified visually per experiment. The test objects are mounted onto a Force/Torque sensor (ME-MesSysteme MP11, load limit 500N, 20Nm, sampling rate 125Hz) to allow measurement of the total normal and transverse force.
Pneumatic fingers are made with Wacker RT625 Elastosil, and the kinematic structure is an adjusted form of the Pneunet fingers [28]. They are actuated by controlled pressure valves (VEAB-L-26-D9-Q4-V1-1R1, Festo), where a constant desired pressure is applied and real pressure measured. The fingers are mounted on a Universal Robots UR10, which is used to initiate contact and load the fingers. We then compare five different parameters: pressure, contact area, relative position between finger and contact object (horizontal and vertical offset) and contact surface material.
Figure 2: Experimental set up for the friction experiments.
The range of parameters tested is shown in Table 1. The high friction condition applies a rubber to the surface of the test object.
For each set of parameters, the following experiment process is followed: (i) the finger is actuated in free space, (ii) the robot moves the finger into contact, (iii) contact along the entire test object is visually verified, (iv) linear motion in the positive y direction begins, with a velocity of \(2\)mm/s and to a distance of \(30\)mm, (v) the robot moves back to the initial pose. As the robot moves in the y direction, the total transverse force, which is friction force, can be directly measured as \(f_{t}=f_{y}\) from the force/torque sensor. The total normal force is calculated as \(f_{n}=\sqrt{f_{x}^{2}+f_{z}^{2}}\), where \(f_{z}\) is assumed to be from asymmetrical contact pressure, not friction, as the contact has been established with motion in x. The ratio of friction force to the total normal force is plotted, which saturates at \(\mu\) during slip. Experiments are conducted for each parameter combination three times, and the graph displays the mean value.
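As a rough illustration of this processing step, the sketch below shows how the ratio plotted in Figure 5 can be extracted from logged force readings and how \(\mu\) can be read off from the sliding phase. The function names, the synthetic data, and the choice to average the last samples are our assumptions; this is not the authors' analysis code.

```python
import numpy as np

def friction_ratio(fx, fy, fz):
    """Ratio of tangential to normal force over a constant-velocity pull.

    fx, fy, fz: 1-D arrays of force readings [N], with the pull applied
    along +y so that f_t = f_y and f_n = sqrt(f_x^2 + f_z^2).
    """
    f_t = np.asarray(fy)
    f_n = np.sqrt(np.asarray(fx) ** 2 + np.asarray(fz) ** 2)
    return f_t / f_n

def estimate_mu(fx, fy, fz, tail=100):
    # Assume the last `tail` samples lie in the sliding (saturated) phase.
    return float(np.mean(friction_ratio(fx, fy, fz)[-tail:]))

# Synthetic example: the stick phase ramps up, then saturation at mu ~ 0.6.
t = np.linspace(0, 15, 1500)
fn = np.full_like(t, 8.0)                       # roughly constant normal force
ft = np.minimum(0.8 * t, 0.6 * fn)              # ramp, then Coulomb saturation
print(estimate_mu(fx=fn, fy=ft, fz=np.zeros_like(t)))   # ~0.6
```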
In all of the results for friction force in Figure 3 and ratio of friction to normal force in Figure 5, three phases can be identified: the stick phase, where the fingertip is not moving while the robot moves with constant velocity, a transition region, and a constant friction phase as sliding occurs. The slope in the stick phase represents the soft finger's bulk stiffness in y, \(k_{y}\), as it stretches horizontally.
Increased pressure leads to higher normal forces, as seen in Figure 3(a), but does not affect stiffness in the direction of motion. Similarly, contact area changes the normal force significantly, but has negligible impact on \(\mu\) in Figure 5(a). However, contact area does increase this stiffness. Neither of these parameters affects \(\mu\) as much as material, shown in gray in 5(a). On the other hand, variation in the robot's horizontal and vertical position cause larger changes in \(\mu\), as seen in Figure 5(b), possibly due to the different contact pressure distributions. Additionally, a higher vertical offset increases the transverse stiffness as seen in the stick phase of Figure 4, as the object is closer to the finger basis.
The results are summarized in Table 2, showing the effects of a unit change of each parameter on \(f_{n}\) and \(k_{y}\) and the range of \(\mu\) realized. These results suggest that normal force and material are more important than contact area and pressure for the friction force limits. Variation in the grip pose can change the normal pressure distribution, which can cause larger variation in \(\mu\). Stiffness in the direction of motion is mostly affected by vertical offset and contact area.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Parameter name** & **Value** \\ \hline \hline
**Pressure \([bar]\)** & 0.4, 0.8, 1.2 \\ \hline
**Contact area \([cm^{2}]\)** & 2.1(S), 4.2(M), 8.4(L) \\ \hline
**Horizontal offset \([mm]\)** & 0, -10, -20 \\ \hline
**Vertical offset \([mm]\)** & 40, 60, 80 \\ \hline
**Contact surface material friction** & Ordinary, High \\ \hline \end{tabular}
\end{table}
Table 1: Parameters of soft finger experiment for slip
Figure 4: Experimental results for friction force
Figure 5: Experimental results for friction coefficient
## 3 Dynamic grip stability
If a bulk Coulomb model with a constant \(\mu\) is accurate, increasing force capacity should be as easy as increasing normal force \(f_{n}\). However, in some cases this leads to rotation of the object, in either yawing or rolling as seen in Figure 1(a), which can lead to slip as seen in Figure 1(b).
To understand why this happens, we define dynamic grip stability as the convergence of the object rotation \(\Theta\) to the initial rotation angle \(\Theta=0^{\circ}\) over time. To evaluate the dynamic stability, the model in Figure 6 is used. We model the object's pose relative to the robot flange with stiffnesses, where one side is fixed to the flange and the other to the object. We assume that the stiffnesses are linear and equal between the fingers, ignore any torsional stiffness at the contact area, and assume that the direction of the normal and transverse force stays constant. We derive a dynamics equation and analyze the stability about the equilibrium point \(\Theta=0^{\circ}\) using the eigenvalues, where only a rotational motion of the body with the torsion angle \(\Theta\) is studied.
Let the bulk stiffness of the fingers be modeled with the corresponding stiffness \(k_{x}\) in the normal force direction and \(k_{y}\) in the transverse direction, with a radius \(r\) to the center of mass of the object. The grip force of the finger in the x direction is modeled by the spring with a relative spring displacement \(\Delta x\), and the preload in the y direction is assumed to be 0.
### Rotational dynamics
We model the pure rotation of the object seen in Figure 6. The dynamics can be determined from the energies of the body with Lagrangian mechanics, via the Euler-Lagrange equation

\[\frac{d}{dt}\frac{\partial(T-V)}{\partial\dot{\Theta}}-\frac{\partial(T-V)}{\partial\Theta}=0, \tag{1}\]

where \(T\) is the kinetic and \(V\) the potential energy of the body. The kinetic energy is given by
\[T=\frac{1}{2}I\dot{\Theta}^{2}, \tag{2}\]
where \(I\) is the rotational inertia and \(\dot{\Theta}\) the rotation speed about \(z\). The potential energy of the system \(V\) is derived from the relative travel of the spring through the preload and through the distance covered by the springs during object rotation. As the body rotates, the spring displacement changes, giving the potential energy
\[V=k_{x}(\Delta x-r(1-\cos\Theta))^{2}+k_{y}(r\sin\Theta)^{2}, \tag{3}\]
where \(r\) is object radius, \(k_{x}\) and \(k_{y}\) the stiffness in respective directions, and \(\Delta x\) the preload displacement. With (2) and (3) inserted into (1) the dynamic equation can be calculated to
\[\begin{split} I\ddot{\Theta}=& 2k_{x}r\sin\Theta\Delta x -2k_{x}r^{2}(1-\cos\Theta)\sin\Theta\\ &-2k_{y}r^{2}\sin\Theta\cos\Theta.\end{split} \tag{4}\]
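Equation (4) can be integrated numerically, which makes the behaviour analyzed in the next subsection visible directly: below the limit force \(k_{y}r\) the object oscillates about \(\Theta=0^{\circ}\), above it the angle grows toward a new rest position. The following Python sketch uses scipy and purely illustrative parameter values (SI units, not the measured values of Section 4).

```python
import numpy as np
from scipy.integrate import solve_ivp

def rot_dynamics(t, y, I, kx, ky, r, dx):
    # State y = [Theta, dTheta/dt]; right-hand side of Eq. (4), divided by I.
    th, dth = y
    ddth = (2 * kx * r * np.sin(th) * dx
            - 2 * kx * r**2 * (1 - np.cos(th)) * np.sin(th)
            - 2 * ky * r**2 * np.sin(th) * np.cos(th)) / I
    return [dth, ddth]

# Illustrative SI values (assumed, not measured): stiffness in N/m, lengths in m.
I, kx, ky, r = 1e-4, 300.0, 66.0, 0.035
# Rotation away from Theta = 0 sets in once kx*dx exceeds ky*r (= 2.31 N here).
for dx in (0.004, 0.010):                      # preloads below / above that limit
    sol = solve_ivp(rot_dynamics, (0, 2.0), [0.01, 0.0],
                    args=(I, kx, ky, r, dx), max_step=1e-3)
    print(f"kx*dx = {kx*dx:.2f} N -> max |Theta| = {np.max(np.abs(sol.y[0])):.2f} rad")
```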
### Linearized dynamics and stability
By defining the state vector \(x=[\Theta,\dot{\Theta}]^{T}\), the problem can be considered from the control systems point of view. The system is nonlinear, but the linearized dynamics can be found and dynamic stability about \(x=[0,0]^{T}\) can be analyzed. When the system is dynamically stable about a state, it converges to that state over time from starting conditions near that state. When the system is dynamically unstable, it will diverge.
With the introduced state variables, the system matrix \(A\) linearized about the equilibrium point \(\Theta=0\) can be written as
\[A=\begin{bmatrix}0&1\\ \frac{2r}{I}(k_{x}\Delta x-k_{y}r)&0\end{bmatrix}, \tag{5}\]
where \(\dot{x}\approx Ax\) about \(\Theta=0^{\circ}\). The matrix \(A\) has two eigenvalues, the roots of \(|\lambda I-A|=0\), which can be found as
\[\begin{split}\lambda_{1/2}&=\pm\sqrt{\frac{2r}{I}(k_{x}\Delta x-k_{y}r)}\\ &=\begin{cases}\pm\sqrt{\frac{2r}{I}(k_{x}\Delta x-k_{y}r)},\text{ for }k_{x}\Delta x>k_{y}r\\ \pm i\sqrt{\frac{2r}{I}(k_{y}r-k_{x}\Delta x)},\text{ for }k_{x}\Delta x<k_{y}r \end{cases}.\end{split} \tag{6}\]
If the preload force \(f_{p}=k_{x}\Delta x\) exerted by the finger is greater than the stiffness in the transverse direction \(k_{y}\) times the distance \(r\) to the center of mass, then there will be a pole located
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Parameter & \(f_{n}\) [N/unit] & \(k_{y}\) [N/(mm unit)] & \(\mu\) mean (std) \\ \hline \hline Pressure & \(3.69/bar\) & \(0.046/bar\) & \(0.58\) (\(0.025\)) \\ Contact area & \(0.47/cm^{2}\) & \(0.024/cm^{2}\) & \(0.63\) (\(0.017\)) \\ Horizontal offset & \(0.14/mm\) & \(0.005/mm\) & \(0.62\) (\(0.086\)) \\ Vertical offset & \(0.07/mm\) & \(0.089/mm\) & \(0.72\) (\(0.085\)) \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the effect of a unit change in the parameter on normal force and stiffness in direction of motion \(k_{y}\), and the average \(\mu\) values reached over the range of parameters tested.
Figure 6: View from above of the grasped object, fingers are modeled as springs
in the right half-plane, indicating instability. This gives a dynamic stability condition about the equilibrium point \(\Theta=0^{\circ}\) of
\[\underbrace{k_{x}\Delta x}_{f_{p}}<k_{y}r, \tag{7}\]
and we denote the force when this condition is violated as \(f_{p}^{i}=k_{y}r\). As seen in Figure 1, instability can also happen about the x axis; in this case the model results in the condition \(f_{p}^{i}=k_{z}r\).
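The pole locations (6) and condition (7) are easy to evaluate numerically. The sketch below builds the linearized system matrix (5) and reports whether a grasp is predicted to be rotationally stable, here with illustrative stiffness and radius values of the order identified later in Section 4 (the preload forces looped over are arbitrary test values).

```python
import numpy as np

def linearized_A(I, k_t, r, f_p):
    # Eq. (5), with preload force f_p = kx * dx and transverse stiffness k_t
    # (k_y for rotation about z, k_z for rotation about x).
    return np.array([[0.0, 1.0],
                     [2 * r * (f_p - k_t * r) / I, 0.0]])

def rotationally_stable(I, k_t, r, f_p):
    # Stable about Theta = 0 iff no pole lies in the right half-plane.
    eig = np.linalg.eigvals(linearized_A(I, k_t, r, f_p))
    return bool(np.all(eig.real <= 1e-12))

# Illustrative values in N/mm and mm; the inertia I only scales the poles
# and does not change the stability verdict.
I, r, ky, kz = 1.0, 15.0, 0.11, 0.05
for f_p in (0.5, 1.0, 2.0):                     # arbitrary test preloads [N]
    print(f_p, "about z:", rotationally_stable(I, ky, r, f_p),
          "about x:", rotationally_stable(I, kz, r, f_p))
# Thresholds from (7): ky*r = 1.65 N (about z) and kz*r = 0.75 N (about x).
```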
### Rest position
The derived dynamic equation (4) can be used to predict the angle that occurs after the force is applied by the fingers, by setting \(\ddot{\Theta}\) to zero, which has solutions \(\Theta_{r}=n\pi\) for \(n\in\mathbb{Z}\), and additional solutions at \(\Theta_{r}\neq n\pi\) when
\[\left|\frac{k_{x}\Delta x-k_{x}r}{k_{y}r-k_{x}r}\right|<1. \tag{8}\]
The progression of the rest position angle \(\Theta_{r}\) over the applied force is shown for selected system parameters in Figure 7. It can be seen that the object rotates when a certain limit force is exceeded, where this limit force follows from (7). It should be noted that if the condition (8) for the analytical solution is not satisfied, the progression of the rest position angle cannot be calculated and only the force \(f_{p}^{i}\) can be determined.
The comparison of these results for different \(k_{x}\) and \(k_{y}\) values can be seen in Figure 8. As indicated by (7), the initiation of rotation depends on \(k_{y}\), increasing as \(k_{y}\) increases. However, the slope after rotation is initiated decreases as \(k_{x}\) increases.
### Rotation leading to slip
As the object rotates, the normal and transverse forces change, possibly leading to slip. Figure 6 shows the normal force \(f_{n}\) and static friction force \(f_{t}\), which from the nonlinear model can be defined as follows:
\[f_{n} = k_{x}(\Delta x-r(1-\cos\Theta)) \tag{9}\] \[f_{n} = f_{p}-k_{x}r(1-\cos\Theta) \tag{10}\] \[f_{t} = k_{y}r\sin\Theta, \tag{11}\]
where \(f_{p}=k_{x}\Delta x\) is the preload force on the finger. These equations are connected by the Coulomb friction condition \(|f_{t}|\leq f_{n}\mu\). With the assumption that the direction of the forces \(f_{n}\) and \(f_{t}\) doesn't change with rotation, the limiting case for the angle from which the slip begins can be determined by solving \(f_{t}=\mu f_{n}\), which gives [29]
\[\Theta_{f}=2(\arctan(\frac{a\pm\sqrt{a^{2}+b^{2}-c^{2}}}{b+c})+n\pi),n\in \mathbb{Z}, \tag{12}\]
where \(a=k_{y}r,b=-\mu k_{x}r\) and \(c=\mu f_{p}-\mu k_{x}r\). The angle \(\Theta_{f}\) indicates the rotation at which slip occurs. This can be calculated for the same set of parameters and is also plotted in the Figure 7. It follows that slip does not occur if the rest position of the system \(\Theta_{r}\) remains smaller than the critical angle for the friction condition \(\Theta_{f}\).
## 4 Experimental validation of grip stability model
To validate the model with stability conditions, silicon-based fingers are used in a variety of conditions. The model in Section 3 makes two predictions: when rotation away from initial orientation begins \(f_{p}^{i}(k_{x},k_{y},r)\), and when the rotation leads to slip at \(f_{p}^{s}(k_{x},k_{y},r)\). For some grip conditions, these two are close - when the slope of \(\Theta_{r}\) over \(f_{p}\) is high. In other cases, \(\Theta_{r}\) increases slowly.
Figure 8: Rest angle \(\Theta_{r}\) progression as \(k_{x}\) and \(k_{y}\) vary.
Figure 7: Rest position \(\Theta_{r}\) as the preload force changes according to (4), and maximal friction angle \(\Theta_{f}\) according to (12), with \(k_{x}=0.3N/mm\), \(k_{y}=0.066N/mm\), \(r=35mm\) and \(\mu=0.6\).
### Experimental set up
The first goal is to test the stability condition (7) about the equilibrium point \(\Theta=0^{\circ}\) when grasping an object. The assumptions in the derivation were that the distance between the fingers and the object center \(r\) does not change and the fingers have the same properties, i.e. the stiffnesses \(k_{x}\), \(k_{y}\), \(k_{z}\) are matched in both fingers. Furthermore, it is assumed that the stiffness of the fingers in the transverse direction \(k_{y}\) does not change with changes in pressure, as indicated in Section 2.
The model predicts that if the force applied by the fingers is less than the term \(k_{y}r\), then the object will remain in its initial rest position \(\Theta=0^{\circ}\). If this limit force is exceeded, the body should rotate, similar to Figure 7, and a new rest position \(\Theta_{r}\) should be assumed. As pressure increases, the slip force \(f_{p,x}^{s}\) will be exceeded, which often results in smaller objects flying from the fingers.
The stiffness of the finger is needed to validate the stability model. In order to calculate this, the test object is fixed on the force sensor and, by measuring the force and displacement of the robot's end-effector, the stiffness of the finger can be determined. First, the grip conditions from the stability experiment are recreated - relative pose between object and gripper, and the pressure in the finger. The end-effector is moved from \(x_{0}\) to \(x_{1}\), the forces measured at each one as \(f_{0}\) and \(f_{1}\), and the stiffness \(k\) can be calculated with the simple relationship \(k=\frac{f_{1}-f_{0}}{x_{1}-x_{0}}\). To measure the stiffness in the y- and z-direction, a Cartesian motion of \(1-3\) mm is used to avoid slip and stay in the linear range.
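The stiffness identification and the resulting prediction amount to a few lines of bookkeeping; the helper below is a sketch of that calculation. The example force/displacement numbers are hypothetical, and reading the instability limit as the smaller of \(k_{y}r\) and \(k_{z}r\) is our interpretation of the procedure, not released code.

```python
def stiffness(f0, f1, x0, x1):
    """Bulk finger stiffness from two static end-effector poses: k = (f1 - f0) / (x1 - x0)."""
    return (f1 - f0) / (x1 - x0)

def predicted_instability_force(k_y, k_z, r):
    # Rotation is expected first about the axis with the smaller k*r product.
    about_z, about_x = k_y * r, k_z * r
    return (about_x, "x") if about_x < about_z else (about_z, "z")

# Hypothetical measurement: 0.33 N over a 3 mm move gives k_y = 0.11 N/mm,
# matching the values reported in the next subsection; k_z and r likewise.
k_y = stiffness(0.0, 0.33, 0.0, 3.0)
f_lim, axis = predicted_instability_force(k_y, k_z=0.05, r=15.0)
print(f"predicted onset of rotation at {f_lim:.2f} N, first about {axis}")   # 0.75 N, about x
```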
### Force instability experiments
In each set of experiments, the object is grasped from an experiment table, with an initial grasp location flush with the bottom of the fingers. The pressure is increased in steps of around \(0.15\) bar, waiting for \(10\) seconds at each step and stopping the experiment when instability and slip have occurred.
#### 4.2.1 Validation of force stability limit
With the friction coefficient determined from the Section 2 and the calculated stiffnesses from the description above, the determined and observed values can be summarized in Table 3. The object and experimental setup can be seen in Figure 1(a).
To avoid rotation, the finger normal force should not exceed \(k_{y}r=1.65N\) for instability about z, and should not exceed \(k_{z}r=0.75N\) for instability about x. In the experiments, the rotation about x was observed at a force of \(f_{p}^{s}=1.5N\). This is also expected from the model, since the force limit about x is smaller than about z.
#### 4.2.2 Finger distance and force limit
The effective finger stiffness is varied by varying the finger distance \(d\) via an electrical parallel gripper, i.e. the offset between the finger and the body at each measurement, which influences the stiffnesses and preload force. A test object with a length of 30 mm is grasped.
Table 4 shows a comparison of the derived model and the measured values of slip force \(f_{p,x}^{s}\) about x at the different distance \(d\) between the fingers. Furthermore, the expected instability force \(f_{p,x}^{i}\) and \(f_{p,z}^{i}\) is written in the table at which the rotation should begin, about the x and z axis, respectively.
It can be seen that when the stiffness in z is increased, the force that leads to dynamic instability and thus to slip of the object increases, as predicted by the model. While the trend is correct, at higher \(k_{z}\) values, the observed maximum force is lower than the model prediction. We attribute this to the non-negligible \(\Delta y\) in realistic gripping scenarios, which gives additional transverse forces which can violate the slip condition earlier.
#### 4.2.3 Changes in object properties
The grip stability limits are next assessed over a range of objects with different lengths and contact areas. The objects and the resulting observed slip force can be seen in Figure 9(a) and (b), respectively.
We can see that the object with a larger \(r\) results in a higher \(f_{p}^{s}\), as predicted by the instability model. Additionally, the object with the larger contact area allows a higher grasp force. The larger contact area was shown to increase \(k_{x}\) in Table 2, which would result in a higher instability force \(f_{p}^{i}\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(d\) [mm] & \(f_{p,x}^{s}\) [N] & \(f_{p,z}^{i}\) [N] & \(f_{p,x}^{i}\) [N] & \(k_{x},k_{y},k_{z}\) [N/mm] \\ \hline \hline
50 & 1.5 & 1.65 & 0.75 & 0.14, 0.11, 0.05 \\ \hline
40 & 2.15 & 1.2 & 1.05 & 0.11, 0.08, 0.07 \\ \hline
30 & 2.9 & 1.2 & 6 & 0.1, 0.08, 0.4 \\ \hline
20 & 3.38 & 1.2 & 9.3 & 0.1, 0.08, 0.62 \\ \hline \end{tabular}
\end{table}
Table 4: Validation of the model, where \(f_{p,x}^{s}\) is the observed slip force limit.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Parameter** & **Value** \\ \hline \hline
**Stiffness \(k_{x}[N/mm]\)** & 0.14 \\ \hline
**Stiffness \(k_{y}[N/mm]\)** & 0.11 \\ \hline
**Stiffness \(k_{z}[N/mm]\)** & 0.05 \\ \hline
**Object radius \(r[mm]\)** & 15 \\ \hline
**Expected instability preload force about z \(f_{p,z}^{i}[N]\)** & 1.65 \\ \hline
**Expected instability preload force about x \(f_{p,x}^{i}[N]\)** & 0.75 \\ \hline
**Observed slip force about x \(f_{p,x}^{s}[N]\)** & 1.5 \\ \hline \end{tabular}
\end{table}
Table 3: Measured parameters for stability experiments, where \(\mu=0.6\).
### Rest angle validation
To validate the rest angle that the object takes as the grip force is increased, we grasp a test object of length \(50\) mm, as seen in Figure 2(a). A long indicating stick is glued to the bottom, the object is lightly lifted off the table, and the finger pressure is increased. As the pressure increases, the robot's last axis is jogged back to align the indicator, providing a measurement of the rotation angle. The measured rest angles are shown in Figure 10. While the initiation of rotation at the instability force \(f_{p}^{i}\) matches the model prediction well, the resulting angles of the object are lower than predicted by the model by at least a factor of \(2\). We attribute this to the linearity assumptions of the finger stiffness, where additional displacement of the fingers may change the stiffness properties.
## 5 Conclusion
This paper investigated models to increase the friction force capacity of soft fingers. It was shown that with a bulk Coulomb model, increasing the contact area mostly increases the normal force and the transverse stiffness, while the coefficient \(\mu\) is mostly unchanged. The normal force was then shown to have a rotational stability limit: above a threshold grip force \(f_{p}^{i}\), rotation away from the initial grasp point occurs. The instability model was validated for a range of grip parameters and objects, showing that increasing the object radius and the transverse stiffness increases the stable normal force limit.
While the trends in object radius and bulk finger stiffness are validated, the observed instability force is lower than anticipated at higher stiffness values. Future work can apply more complex finger models which more accurately describe the gripping force direction and nonlinear stiffnesses.
|
2305.00896
|
Prime spectrum and dynamics for nilpotent Cantor actions
|
A minimal equicontinuous action by homeomorphisms of a discrete group
$\Gamma$ on a Cantor set $X$ is locally quasi-analytic, if each homeomorphism
has a unique extension from small open sets to open sets of uniform diameter on
$X$. A minimal action is stable, if the actions of $\Gamma$ and of the closure
of $\Gamma$ in the group of homeomorphisms of $X$, are both locally
quasi-analytic.
When $\Gamma$ is virtually nilpotent, we say that $\Phi \colon \Gamma \times
\mathfrak{X} \to \mathfrak{X}$ is a nilpotent Cantor action. We show that a
nilpotent Cantor action with finite prime spectrum must be stable. We also
prove there exist uncountably many distinct Cantor actions of the Heisenberg
group, necessarily with infinite prime spectrum, which are not stable.
|
Steven Hurder, Olga Lukina
|
2023-05-01T15:58:47Z
|
http://arxiv.org/abs/2305.00896v1
|
# Prime spectrum and dynamics for nilpotent Cantor actions
###### Abstract.
A minimal equicontinuous action by homeomorphisms of a discrete group \(\Gamma\) on a Cantor set \(X\) is locally quasi-analytic, if each homeomorphism has a unique extension from small open sets to open sets of uniform diameter on \(X\). A minimal action is stable, if the actions of \(\Gamma\) and of the closure of \(\Gamma\) in the group of homeomorphisms of \(X\), are both locally quasi-analytic.
When \(\Gamma\) is virtually nilpotent, we say that \(\Phi\colon\Gamma\times\mathfrak{X}\to\mathfrak{X}\) is a nilpotent Cantor action. We show that a nilpotent Cantor action with finite prime spectrum must be stable. We also prove there exist uncountably many distinct Cantor actions of the Heisenberg group, necessarily with infinite prime spectrum, which are not stable.
Version date: May 1, 2023.
2020 _Mathematics Subject Classification_. Primary: 20E18, 37B05, 37B45; Secondary: 57S10.
Keywords: odometers, Cantor actions, profinite groups, Steinitz numbers, Heisenberg group.
one of the most basic invariants, is the property that the action is _stable_ or _wild_. The purpose of this note is to give a relation between the prime spectrum of a Cantor action and the wild property.
As explained in detail in Section 2.5 below, the property that the action \((\mathfrak{X},\Gamma,\Phi)\) is stable is a property of the action of the completion \(\mathfrak{G}(\Phi)=\overline{\Phi(\Gamma)}\subset\mathrm{Homeo}(\mathfrak{X})\), which is a profinite group naturally acting on \(\mathfrak{X}\). The property that the action \((\mathfrak{X},\Gamma,\Phi)\) is locally quasi-analytic is defined in Definition 2.10, and \((\mathfrak{X},\Gamma,\Phi)\) is _stable_ if the action of \(\mathfrak{G}(\Phi)\) on \(\mathfrak{X}\) is also locally quasi-analytic. If \((\mathfrak{X},\Gamma,\Phi)\) is stable, then \((\mathfrak{X},\Gamma,\Phi)\) is locally quasi-analytic. The converse need not hold even for actions of nilpotent groups, as we see later.
A Cantor action \((\mathfrak{X},\Gamma,\Phi)\) is said to be _nilpotent_ if \(\Gamma\) contains a finitely generated nilpotent subgroup with finite index. This class of group actions is particularly interesting, as it is the natural next level of complexity after abelian Cantor actions. We show the following three results for nilpotent Cantor actions.
**Theorem 1.2**.: _Let \((\mathfrak{X},\Gamma,\Phi)\) be a nilpotent Cantor action. If the prime spectrum \(\pi[\mathfrak{X},\Gamma,\Phi]\) is finite, then the action is stable._
Theorem 1.2 does not have a converse. We show that every collection of primes, finite or infinite, can be realized as the prime spectrum of a stable nilpotent Cantor action.
**Theorem 1.3**.: _Let \(\pi_{f}\) and \(\pi_{\infty}\) be two distinct sets of primes, where \(\pi_{f}\) is a finite set, and \(\pi_{\infty}\) is a non-empty finite or infinite set. Then there exists a stable nilpotent Cantor action \((\mathfrak{X},\Gamma,\Phi)\) such that \(\pi_{\infty}[\mathfrak{X},\Gamma,\Phi]=\pi_{\infty}\) and \(\pi_{f}[\mathfrak{X},\Gamma,\Phi]=\pi_{f}\)._
Let \((\mathfrak{X},\Gamma,\Phi)\) be an abelian Cantor action. If the action is effective, then it is free, and the action of the closure \(\mathfrak{G}(\Phi)\) is also free, which implies that the action is stable. An effective nilpotent Cantor action need not be free, and may even have elements which fix every point in a clopen subset of the Cantor set \(\mathfrak{X}\). The authors showed in [17] that nilpotent Cantor actions are locally quasi-analytic, which means that such subsets of fixed points cannot be arbitrarily small, as their diameters have a lower bound which is uniform over the Cantor set \(\mathfrak{X}\). It is then surprising to discover that if one allows \(\mathfrak{G}(\Phi)\) to have infinite prime spectrum, then one can construct wild nilpotent actions, for which the action of the closure \(\mathfrak{G}(\Phi)\) is not locally quasi-analytic, as shown in Theorem 1.4. In addition, Theorem 1.4 is a realization result, which shows that every infinite set of primes can be realized as the prime spectrum of a wild nilpotent Cantor action.
**Theorem 1.4**.: _Given any two distinct sets \(\pi_{f}\) and \(\pi_{\infty}\) of primes, where \(\pi_{f}\) is infinite and \(\pi_{\infty}\) is any (possibly empty) set, there is a minimal equicontinuous action \((\mathfrak{X},\Gamma,\Phi)\) of the Heisenberg group, such that \(\pi_{f}[\mathfrak{X},\Gamma,\Phi]=\pi_{f}\) and \(\pi_{\infty}[\mathfrak{X},\Gamma,\Phi]=\pi_{\infty}\)._
_Moreover, there exists an uncountable number of nilpotent Cantor actions \((\mathfrak{X},\Gamma,\Phi)\) of the Heisenberg group \(\Gamma\) with infinite prime spectra such that:_
1. _Each_ \((\mathfrak{X},\Gamma,\Phi)\) _is topologically free,_
2. _Each_ \((\mathfrak{X},\Gamma,\Phi)\) _is wild,_
3. _The prime spectra of such actions are pairwise distinct._
The notion of return equivalence for Cantor actions, and its relationship with conjugacy of action is explained in Section 2.4. The result of Corollary 1.5 below follows from the result that the prime spectrum of the action is an invariant of its return equivalence class, see Theorem 2.16.
**Corollary 1.5**.: _There exists an uncountable number of nilpotent Cantor actions \((\mathfrak{X},\Gamma,\Phi)\) of the Heisenberg group \(\Gamma\) which are not return equivalent, and therefore not conjugate._
The conclusion of Theorem 1.4 is used in [21] for the calculation of the mapping class groups of solenoidal manifolds whose base is a nil-manifold.
We note that for more general groups \(\Gamma\), an analog of Theorem 1.2 need not hold. For example, a weakly branch group, as studied in [4, 6, 7, 25], acts on the boundary of a \(d\)-regular rooted tree, and so has finite prime spectrum \(\{d\}\), but the dynamics of the actions on the Cantor boundary are wild.
**Question 1.6**.: _Let \((\mathfrak{X},\Gamma,\Phi)\) be a Cantor action. For which classes of groups \(\Gamma\) does the finiteness of the prime spectrum of the action imply that the action is stable?_
The paper is organized as follows. In Section 2.1 we recall basic properties of minimal equicontinuous group actions on Cantor sets. In particular, the definition of the prime spectrum of a minimal equicontinuous action is given in Definition 2.14. We prove Theorem 1.2 in Section 3, and give basic examples of nilpotent Cantor actions in Section 4. In Section 5 we construct examples of stable and wild actions of the Heisenberg group with prescribed prime spectrum, proving Theorems 1.3 and 1.4, from which we deduce Corollary 1.5.
## 2. Cantor actions
We recall some of the basic properties of Cantor actions, as required for the proofs of Theorems 1.2 and 1.4. More complete discussions of the properties of equicontinuous Cantor actions are given in the text by Auslander [2], the papers by Cortez and Petite [9], Cortez and Medynets [10], and the authors' works, in particular [11, 12] and [18, Section 3].
### Basic concepts
Let \((\mathfrak{X},\Gamma,\Phi)\) denote an action \(\Phi\colon\Gamma\times\mathfrak{X}\to\mathfrak{X}\). We write \(g\cdot x\) for \(\Phi(g)(x)\) when appropriate. The orbit of \(x\in\mathfrak{X}\) is the subset \(\mathcal{O}(x)=\{g\cdot x\mid g\in\Gamma\}\). The action is _minimal_ if for all \(x\in\mathfrak{X}\), its orbit \(\mathcal{O}(x)\) is dense in \(\mathfrak{X}\).
An action \((\mathfrak{X},\Gamma,\Phi)\) is _equicontinuous_ with respect to a metric \(d_{\mathfrak{X}}\) on \(\mathfrak{X}\), if for all \(\varepsilon>0\) there exists \(\delta>0\), such that for all \(x,y\in\mathfrak{X}\) and \(g\in\Gamma\) we have that \(d_{\mathfrak{X}}(x,y)<\delta\) implies \(d_{\mathfrak{X}}(g\cdot x,g\cdot y)<\varepsilon\). The property of being equicontinuous is independent of the choice of the metric on \(\mathfrak{X}\) which is compatible with the topology of \(\mathfrak{X}\).
Now assume that \(\mathfrak{X}\) is a Cantor space. Let \(\operatorname{CO}(\mathfrak{X})\) denote the collection of all clopen (closed and open) subsets of \(\mathfrak{X}\), which forms a basis for the topology of \(\mathfrak{X}\). For \(\phi\in\operatorname{Homeo}(\mathfrak{X})\) and \(U\in\operatorname{CO}(\mathfrak{X})\), the image \(\phi(U)\in\operatorname{CO}(\mathfrak{X})\). The following result is folklore, and a proof is given in [17, Proposition 3.1].
**Proposition 2.1**.: _For \(\mathfrak{X}\) a Cantor space, a minimal action \(\Phi\colon\Gamma\times\mathfrak{X}\to\mathfrak{X}\) is equicontinuous if and only if the \(\Gamma\)-orbit of every \(U\in\operatorname{CO}(\mathfrak{X})\) is finite for the induced action \(\Phi_{*}\colon\Gamma\times\operatorname{CO}(\mathfrak{X})\to\operatorname{ CO}(\mathfrak{X})\)._
**Definition 2.2**.: _We say that \(U\subset\mathfrak{X}\) is adapted to the action \((\mathfrak{X},\Gamma,\Phi)\) if \(U\) is a non-empty clopen subset, and for any \(g\in\Gamma\), if \(\Phi(g)(U)\cap U\neq\emptyset\) implies that \(\Phi(g)(U)=U\)._
The proof of [17, Proposition 3.1] shows that given \(x\in\mathfrak{X}\) and clopen set \(x\in W\), there is an adapted clopen set \(U\) with \(x\in U\subset W\).
For an adapted set \(U\), the set of "return times" to \(U\),
\[\Gamma_{U}=\{g\in\Gamma\mid g\cdot U\cap U\neq\emptyset\} \tag{1}\]
is a subgroup of \(\Gamma\), called the _stabilizer_ of \(U\). Then for \(g,g^{\prime}\in\Gamma\) with \(g\cdot U\cap g^{\prime}\cdot U\neq\emptyset\) we have \(g^{-1}\,g^{\prime}\cdot U=U\), hence \(g^{-1}\,g^{\prime}\in\Gamma_{U}\). Thus, the translates \(\{g\cdot U\mid g\in\Gamma\}\) form a finite clopen partition of \(\mathfrak{X}\), and are in 1-1 correspondence with the quotient space \(X_{U}=\Gamma/\Gamma_{U}\). Then \(\Gamma\) acts by permutations of the finite set \(X_{U}\) and so the stabilizer group \(\Gamma_{U}\subset\Gamma\) has finite index. Note that this implies that if \(V\subset U\) is a proper inclusion of adapted sets, then the inclusion \(\Gamma_{V}\subset\Gamma_{U}\) is also proper.
**Definition 2.3**.: _Let \((\mathfrak{X},\Gamma,\Phi)\) be a Cantor action. A properly descending chain of clopen sets \(\mathcal{U}=\{U_{\ell}\subset\mathfrak{X}\mid\ell>0\}\) is said to be an adapted neighborhood basis at \(x\in\mathfrak{X}\) for the action \(\Phi\), if \(x\in U_{\ell+1}\subset U_{\ell}\) is a proper inclusion for all \(\ell>0\), with \(\cap_{\ell>0}\,U_{\ell}=\{x\}\), and each \(U_{\ell}\) is adapted to the action \(\Phi\)._
Given \(x\in\mathfrak{X}\) and \(\varepsilon>0\), Proposition 2.1 implies there exists an adapted clopen set \(U\in\operatorname{CO}(\mathfrak{X})\) with \(x\in U\) and \(\operatorname{diam}(U)<\varepsilon\). Thus, one can choose a descending chain \(\mathcal{U}\) of adapted sets in \(\operatorname{CO}(\mathfrak{X})\) whose intersection is \(x\), from which the following result follows:
**Proposition 2.4**.: _Let \((\mathfrak{X},\Gamma,\Phi)\) be a Cantor action. Given \(x\in\mathfrak{X}\), there exists an adapted neighborhood basis \(\mathcal{U}\) at \(x\) for the action \(\Phi\)._
Combining the above remarks, we have:
**Corollary 2.5**.: _Let \((\mathfrak{X},\Gamma,\Phi)\) be a Cantor action, and \(\mathcal{U}\) be an adapted neighborhood basis. Set \(\Gamma_{\ell}=\Gamma_{U_{\ell}}\), with \(\Gamma_{0}=\Gamma\), then there is a nested chain of finite index subgroups, \(\mathcal{G}_{\mathcal{U}}=\{\Gamma_{0}\supset\Gamma_{1}\supset\cdots\}\)._
### Profinite completion
Let \(\Phi(\Gamma)\subset\mathrm{Homeo}(\mathfrak{X})\) denote the image subgroup for an action \((\mathfrak{X},\Gamma,\Phi)\). When the action is equicontinuous, the closure \(\overline{\Phi(\Gamma)}\subset\mathrm{Homeo}(\mathfrak{X})\) in the _uniform topology of maps_ is a separable profinite group. We adopt the notation \(\mathfrak{G}(\Phi)\equiv\overline{\Phi(\Gamma)}\).
Let \(\widehat{\Phi}\colon\mathfrak{G}(\Phi)\times\mathfrak{X}\to\mathfrak{X}\) denote the induced action of \(\mathfrak{G}(\Phi)\) on \(\mathfrak{X}\), which is transitive as the action \((\mathfrak{X},\Gamma,\Phi)\) is minimal. For \(\widehat{g}\in\mathfrak{G}(\Phi)\), we write its action on \(\mathfrak{X}\) by \(\widehat{g}\,x=\widehat{\Phi}(\widehat{g})(x)\). Given \(x\in\mathfrak{X}\), introduce the isotropy group,
\[\mathfrak{D}(\Phi,x)=\{\widehat{g}\in\mathfrak{G}(\Phi)\mid\widehat{g}\,x=x \}\subset\mathrm{Homeo}(\mathfrak{X})\, \tag{2}\]
which is a closed subgroup of \(\mathfrak{G}(\Phi)\), and thus is either finite, or is an infinite profinite group. As the action \(\widehat{\Phi}\colon\mathfrak{G}(\Phi)\times\mathfrak{X}\to\mathfrak{X}\) is transitive, the conjugacy class of \(\mathfrak{D}(\Phi,x)\) in \(\mathfrak{G}(\Phi)\) is independent of the choice of \(x\), and by abuse of notation we omit the subscript \(x\). The group \(\mathfrak{D}(\Phi)\) is called the _discriminant_ of the action \((\mathfrak{X},\Gamma,\Phi)\) in [12, 16, 18], and is called a _parabolic_ subgroup (of the profinite completion of a countable group) in the works by Bartholdi and Grigorchuk [5, 6].
### Algebraic Cantor actions
Let \(\mathcal{G}=\{\Gamma=\Gamma_{0}\supset\Gamma_{1}\supset\Gamma_{2}\supset\cdots\}\) be a descending chain of finite index subgroups. Let \(X_{\ell}=\Gamma/\Gamma_{\ell}\) and note that \(\Gamma\) acts transitively on the left on the finite set \(X_{\ell}\). The inclusion \(\Gamma_{\ell+1}\subset\Gamma_{\ell}\) induces a natural \(\Gamma\)-invariant quotient map \(p_{\ell+1}\colon X_{\ell+1}\to X_{\ell}\). Introduce the inverse limit
\[X_{\infty}\ \equiv\ \lim_{\longleftarrow}\,\{p_{\ell+1}\colon X_{\ell+1}\to X_{\ell}\mid\ell\geq 0\}\ =\ \{(x_{0},x_{1},\ldots)\mid p_{\ell+1}(x_{\ell+1})=x_{\ell}\;\text{for all}\;\ell\geq 0\;\}\ \subset\ \prod_{\ell\geq 0}\;X_{\ell}\,. \tag{3}\]
Then \(X_{\infty}\) is a Cantor space with the Tychonoff topology, where the actions of \(\Gamma\) on the factors \(X_{\ell}\) induce a minimal equicontinuous action denoted by \(\Phi\colon\Gamma\times X_{\infty}\to X_{\infty}\). There is a natural basepoint \(x_{\infty}\in X_{\infty}\) given by the cosets of the identity element \(e\in\Gamma\), so \(x_{\infty}=(e\Gamma_{\ell})\). An adapted neighborhood basis of \(x_{\infty}\) is given by the clopen sets
\[U_{\ell}=\{x=(x_{i})\in X_{\infty}\mid x_{i}=e\Gamma_{i}\in X_{i}\;,\;0\leq i \leq\ell\;\}\subset X_{\infty}. \tag{4}\]
There is a tautological identity \(\Gamma_{\ell}=\Gamma_{U_{\ell}}\).
For each \(\ell\geq 0\), we have the "partition coding map" \(\Theta_{\ell}\colon\mathfrak{X}\to X_{\ell}\) which is \(\Gamma\)-equivariant. The maps \(\{\Theta_{\ell}\}\) are compatible with the map on quotients in (3), and so they induce a limit map \(\Theta_{x}\colon\mathfrak{X}\to X_{\infty}\). The fact that the diameters of the clopen sets \(\{U_{\ell}\}\) tend to zero, implies that \(\Theta_{x}\) is a homeomorphism. Moreover, \(\Theta_{x}(x)=x_{\infty}\in X_{\infty}\) where \(\{x\}=\cap_{\ell>0}\;U_{\ell}\). The following is folklore:
**Theorem 2.6**.: _[_11_, Appendix A]_ _The map \(\Theta_{x}\colon\mathfrak{X}\to X_{\infty}\) induces an isomorphism of the Cantor actions \((\mathfrak{X},\Gamma,\Phi)\) and \((X_{\infty},\Gamma,\Phi_{x})\)._
The action \((X_{\infty},\Gamma,\Phi_{x})\) is called the _odometer model_ centered at \(x\) for the action \((\mathfrak{X},\Gamma,\Phi)\). The dependence of the model on the choices of a base point \(x\in\mathfrak{X}\) and adapted neighborhood basis \(\mathcal{U}\) is discussed in detail in the works [11, 13, 16, 18]. Again, we abuse notation in the following and omit the subscript "\(x\)".
Next, we develop the algebraic model for the profinite action \(\widehat{\Phi}\colon\mathfrak{G}(\Phi)\times\mathfrak{X}\to\mathfrak{X}\) of the completion \(\mathfrak{G}(\Phi)\equiv\overline{\Phi(\Gamma)}\subset\mathrm{Homeo}(\mathfrak{ X})\). Fix a choice of group chain \(\{\Gamma_{\ell}\mid\ell\geq 0\}\) as above, which provides an algebraic model for the action \((\mathfrak{X},\Gamma,\Phi)\).
For each \(\ell\geq 1\), let \(C_{\ell}\subset\Gamma_{\ell}\) denote the _core_ of \(\Gamma_{\ell}\), i.e. the largest normal subgroup of \(\Gamma_{\ell}\) in \(\Gamma\). So
\[C_{\ell}\ =\mathrm{Core}(\Gamma_{\ell})\ =\bigcap_{g\in\Gamma}\ g\ \Gamma_{\ell}\ g^{-1}\ \subset\Gamma_{\ell}. \tag{5}\]
As \(\Gamma_{\ell}\) has finite index in \(\Gamma\), the same holds for \(C_{\ell}\). Observe that for all \(\ell\geq 0\), we have \(C_{\ell+1}\subset C_{\ell}\).
Introduce the quotient group \(Q_{\ell}=\Gamma/C_{\ell}\) with identity element \(e_{\ell}\in Q_{\ell}\). There are natural quotient maps \(q_{\ell+1}\colon Q_{\ell+1}\to Q_{\ell}\), and we can form the inverse limit group
\[\widehat{\Gamma}_{\infty}\ \equiv\ \varprojlim\,\{q_{\ell+1}\colon Q_{\ell+1}\to Q_{\ell}\mid\ell\geq 0\} \tag{6}\] \[\widehat{\Gamma}_{\infty}\ =\ \{(g_{\ell})=(g_{0},g_{1},\ldots)\mid g_{\ell}\in Q_{\ell}\,,\ q_{\ell+1}(g_{\ell+1})=g_{\ell}\text{ for all }\ell\geq 0\ \}\ \subset\ \prod_{\ell\geq 0}\ Q_{\ell}\,, \tag{7}\]
which is a Cantor space with the Tychonoff topology. The left actions of \(\Gamma\) on the spaces \(X_{\ell}=\Gamma/\Gamma_{\ell}\) induce a minimal equicontinuous action of \(\widehat{\Gamma}_{\infty}\) on \(X_{\infty}\), again denoted by \(\widehat{\Phi}\colon\widehat{\Gamma}_{\infty}\times X_{\infty}\to X_{\infty}\). Note that the isotropy group of the action of \(Q_{\ell}=\Gamma/C_{\ell}\) at the identity coset in \(X_{\ell}=\Gamma/\Gamma_{\ell}\) is the subgroup \(D_{\ell}=\Gamma_{\ell}/C_{\ell}\).
Denote the points in \(\widehat{\Gamma}_{\infty}\) by \(\widehat{g}=(g_{\ell})\in\widehat{\Gamma}_{\infty}\) where \(g_{\ell}\in Q_{\ell}\). There is a natural basepoint \(\widehat{e}_{\infty}\in\widehat{\Gamma}_{\infty}\) given by the cosets of the identity element \(e\in\Gamma\), so \(\widehat{e}_{\infty}=(e_{\ell})\) where \(e_{\ell}=eC_{\ell}\in Q_{\ell}\) is the identity element in \(Q_{\ell}\).
For each \(\ell\geq 0\), let \(\Pi_{\ell}\colon\widehat{\Gamma}_{\infty}\to Q_{\ell}\) denote the projection onto the \(\ell\)-th factor in (6), so in the coordinates of (7), we have \(\Pi_{\ell}(\widehat{g})=g_{\ell}\in Q_{\ell}\). The maps \(\Pi_{\ell}\) are continuous for the profinite topology on \(\widehat{\Gamma}_{\infty}\), so the pre-images of points in \(Q_{\ell}\) are clopen subsets. In particular, the fiber of \(\Pi_{\ell}\colon\widehat{\Gamma}_{\infty}\to Q_{\ell}\) over the identity element \(e_{\ell}\) is the normal subgroup
\[\widehat{C}_{\ell}=\Pi_{\ell}^{-1}(e_{\ell})=\{(g_{i})\in\widehat{\Gamma}_{\infty}\mid g_{i}=e_{i}\in Q_{i}\,,\ 0\leq i\leq\ell\}. \tag{8}\]
The collection \(\{\widehat{C}_{\ell}\mid\ell\geq 1\}\) forms a basis of clopen neighborhoods of \(\widehat{e}_{\infty}\in\widehat{\Gamma}_{\infty}\). That is, for each clopen set \(\widehat{U}\subset\widehat{\Gamma}_{\infty}\) with \(\widehat{e}_{\infty}\in\widehat{U}\), there exists \(\ell_{0}>0\) such that \(\widehat{C}_{\ell}\subset\widehat{U}\) for all \(\ell\geq\ell_{0}\).
**Theorem 2.7**.: _[_11_, Theorem 4.4]_ _There is an isomorphism \(\widehat{\tau}\colon\mathfrak{G}(\Phi)\to\widehat{\Gamma}_{\infty}\) which conjugates the profinite action \((\mathfrak{X},\mathfrak{G}(\Phi),\widehat{\Phi})\) with the profinite action \((X_{\infty},\widehat{\Gamma}_{\infty},\widehat{\Phi})\). In particular, \(\widehat{\tau}\) identifies the isotropy group \(\mathfrak{D}(\Phi)\) with the inverse limit subgroup_
\[D_{\infty}=\varprojlim\,\{q_{\ell+1}\colon\Gamma_{\ell+1}/C_{\ell+1}\to\Gamma_ {\ell}/C_{\ell}\mid\ell\geq 0\}\subset\widehat{\Gamma}_{\infty}. \tag{9}\]
The maps \(q_{\ell+1}\) in the formula (9) need not be surjections, and thus the calculation of the inverse limit \(D_{\infty}\) can involve some subtleties. For example, it is possible that each group \(Q_{\ell}\) is non-trivial for \(\ell>0\), and yet \(D_{\infty}\) is the trivial group.
### Equivalence of Cantor actions
We next recall the notions of equivalence of Cantor actions. The first and strongest is that of _isomorphism_, which is a generalization of the notion of conjugacy of topological actions. For \(\Gamma=\mathbb{Z}\), isomorphism corresponds to the notion of "flip conjugacy" introduced in the work of Boyle and Tomiyama [8]. The definition below also appears in the papers [10, 16, 24].
**Definition 2.8**.: _Cantor actions \((\mathfrak{X}_{1},\Gamma_{1},\Phi_{1})\) and \((\mathfrak{X}_{2},\Gamma_{2},\Phi_{2})\) are said to be isomorphic if there is a homeomorphism \(h\colon\mathfrak{X}_{1}\to\mathfrak{X}_{2}\) and a group isomorphism \(\Theta\colon\Gamma_{1}\to\Gamma_{2}\) so that_
\[\Phi_{1}(g)=h^{-1}\circ\Phi_{2}(\Theta(g))\circ h\in\mathrm{Homeo}(\mathfrak{X} _{1})\text{ for all }g\in\Gamma_{1}. \tag{10}\]
The notion of _return equivalence_ for Cantor actions is weaker than isomorphism, and is natural when considering the dynamical properties of Cantor systems which should be independent of the restriction of the action to a clopen cross-section.
For a minimal equicontinuous Cantor action \((\mathfrak{X},\Gamma,\Phi)\) and an adapted set \(U\subset\mathfrak{X}\), by a small abuse of notation, we use \(\Phi_{U}\) to denote both the restricted action \(\Phi_{U}\colon\Gamma_{U}\times U\to U\) and the induced quotient action \(\Phi_{U}\colon\mathcal{H}_{U}\times U\to U\) for \(\mathcal{H}_{U}=\Phi(\Gamma_{U})\subset\mathrm{Homeo}(U)\). Then \((U,\mathcal{H}_{U},\Phi_{U})\) is called the _holonomy action_ for \(\Phi\).
**Definition 2.9**.: _Two minimal equicontinuous Cantor actions \((\mathfrak{X}_{1},\Gamma_{1},\Phi_{1})\) and \((\mathfrak{X}_{2},\Gamma_{2},\Phi_{2})\) are return equivalent if there exists an adapted set \(U_{1}\subset\mathfrak{X}_{1}\) for the action \(\Phi_{1}\) and an adapted set \(U_{2}\subset\mathfrak{X}_{2}\) for the action \(\Phi_{2}\), such that the restricted actions \((U_{1},\mathcal{H}_{1,U_{1}},\Phi_{1,U_{1}})\) and \((U_{2},\mathcal{H}_{2,U_{2}},\Phi_{2,U_{2}})\) are isomorphic._
If the actions \(\Phi_{1}\) and \(\Phi_{2}\) are isomorphic in the sense of Definition 2.8, then they are return equivalent with \(U_{1}=\mathfrak{X}_{1}\) and \(U_{2}=\mathfrak{X}_{2}\). However, the notion of return equivalence is weaker even for this case, as the conjugacy is between the holonomy groups \(\mathcal{H}_{1,\mathfrak{X}_{1}}\) and \(\mathcal{H}_{2,\mathfrak{X}_{2}}\), and not the groups \(\Gamma_{1}\) and \(\Gamma_{2}\).
### Locally quasi-analytic
The quasi-analytic property for Cantor actions was introduced by Alvarez Lopez and Candel in [1, Definition 9.4] as a generalization of the notion of a _quasi-analytic action_ studied by Haefliger for actions of pseudogroups of real-analytic diffeomorphisms. The authors introduced a local form of the quasi-analytic property in [12, 16]:
**Definition 2.10**.: _[_16_, Definition 2.1]_ _A topological action \((\mathfrak{X},\Gamma,\Phi)\) on a metric Cantor space \(\mathfrak{X}\), is locally quasi-analytic if there exists \(\varepsilon>0\) such that for any non-empty open set \(U\subset\mathfrak{X}\) with \(\mathrm{diam}(U)<\varepsilon\), and for any non-empty open subset \(V\subset U\), and elements \(g_{1},g_{2}\in\Gamma\)_
\[\text{if the restrictions }\Phi(g_{1})|V=\Phi(g_{2})|V,\ \text{ then }\Phi(g_{1})|U=\Phi(g_{2})|U. \tag{11}\]
_The action is said to be quasi-analytic if (11) holds for \(U=\mathfrak{X}\)._
In other words, \((\mathfrak{X},\Gamma,\Phi)\) is locally quasi-analytic if for every \(g\in\Gamma\), the homeomorphism \(\Phi(g)\) has unique extensions on the sets of diameter \(\varepsilon>0\) in \(\mathfrak{X}\), with \(\varepsilon\) uniform over \(\mathfrak{X}\). We note that an effective action \((\mathfrak{X},\Gamma,\Phi)\), for a countable group \(\Gamma\), is topologically free if and only if it is quasi-analytic.
Recall that a group \(\Gamma\) is _Noetherian_[3] if every increasing chain of subgroups has a maximal element. Equivalently, a group is Noetherian if every subgroup of \(\Gamma\) is finitely generated.
**Theorem 2.11**.: _[_17_, Theorem 1.6]_ _Let \(\Gamma\) be a Noetherian group. Then a minimal equicontinuous Cantor action \((\mathfrak{X},\Gamma,\Phi)\) is locally quasi-analytic._
A finitely-generated nilpotent group is Noetherian, so as a corollary we obtain that all Cantor actions by finitely-generated nilpotent groups are locally quasi-analytic.
The notion of an LQA Cantor action extends to the case of a profinite group action \(\widehat{\Phi}\colon\mathfrak{G}\times\mathfrak{X}\to\mathfrak{X}\).
**Definition 2.12**.: _Let \((\mathfrak{X},\Gamma,\Phi)\) be a Cantor action, and \(\widehat{\Phi}\colon\mathfrak{G}\times\mathfrak{X}\to\mathfrak{X}\) the induced profinite action. We say that the action is stable if the induced profinite action \((\mathfrak{X},\mathfrak{G}(\Phi),\widehat{\Phi})\) is locally quasi-analytic, and is said to be wild otherwise._
A profinite completion \(\mathfrak{G}\) of a Noetherian group \(\Gamma\) need not be Noetherian, as can be seen for the example of \(\Gamma=\mathbb{Z}\), and \(\widehat{\Gamma}\) the full profinite completion of \(\mathbb{Z}\). In particular, the assumption that the action \((\mathfrak{X},\Gamma,\Phi)\) is locally quasi-analytic does not imply that the action is stable.
### Type and typeset for Cantor actions
A Steinitz number \(\xi\) can be written uniquely as the formal product over the set of primes \(\Pi\),
\[\xi=\prod_{p\in\Pi}\ p^{\chi_{\xi}(p)} \tag{12}\]
where the _characteristic function_\(\chi_{\xi}\colon\Pi\to\{0,1,\dots,\infty\}\) counts the multiplicity with which a prime \(p\) appears in the infinite product \(\xi\).
**Definition 2.13**.: _Two Steinitz numbers \(\xi\) and \(\xi^{\prime}\) are said to be asymptotically equivalent if there exists finite integers \(m,m^{\prime}\geq 1\) such that \(m\cdot\xi=m^{\prime}\cdot\xi^{\prime}\), and we then write \(\xi\stackrel{{\Delta}}{{\sim}}\xi^{\prime}\)._
_A type is an asymptotic equivalence class of Steinitz numbers. The type associated to a Steinitz number \(\xi\) is denoted by \(\tau[\xi]\)._
In terms of their characteristic functions \(\chi_{1},\chi_{2}\), we have \(\xi\stackrel{{\Delta}}{{\sim}}\xi^{\prime}\) if and only if the following conditions are satisfied:
* \(\chi_{1}(p)=\chi_{2}(p)\) for all but finitely many primes \(p\in\Pi\),
* \(\chi_{1}(p)=\infty\) if and only if \(\chi_{2}(p)=\infty\) for all primes \(p\in\Pi\).
Given two types \(\tau\) and \(\tau^{\prime}\), we write \(\tau\leq\tau^{\prime}\) if there exist representatives \(\xi\in\tau\) and \(\xi^{\prime}\in\tau^{\prime}\) such that their characteristic functions satisfy \(\chi_{\xi}(p)\leq\chi_{\xi^{\prime}}(p)\) for all primes \(p\in\Pi\).
**Definition 2.14**.: _Let \(\pi\) denote the set of primes. Given \(\xi=\prod_{p\in\pi}\ p^{\chi_{\xi}(p)}\), define:_
\[\pi(\xi) = \{p\in\pi\mid\chi_{\xi}(p)>0\}\,\mbox{ the prime spectrum of }\xi,\] \[\pi_{f}(\xi) = \{p\in\pi\mid 0<\chi_{\xi}(p)<\infty\}\,\mbox{ the finite prime spectrum of }\xi,\] \[\pi_{\infty}(\xi) = \{p\in\pi\mid\chi_{\xi}(p)=\infty\}\,\mbox{ the infinite prime spectrum of }\xi\.\]
Note that if \(\xi\stackrel{{\mbox{\tiny$\Delta$}}}{{\sim}}\xi^{\prime}\), then \(\pi_{\infty}(\xi)=\pi_{\infty}(\xi^{\prime})\). The property that \(\pi_{f}(\xi)\) is an _infinite_ set is also preserved by asymptotic equivalence of Steinitz numbers.
We define the type of a Cantor action \((X_{\infty},\Gamma,\Phi)\) defined by a chain of finite index subgroups, \(\mathcal{G}=\{\Gamma=\Gamma_{0}\supset\Gamma_{1}\supset\cdots\}\). Let \(C_{\ell}\subset\Gamma_{\ell}\) denote the normal core of \(\Gamma_{\ell}\) in \(\Gamma\).
**Definition 2.15**.: _Let \((X_{\infty},\Gamma,\Phi)\) be a minimal equicontinuous Cantor action defined by a group chain \(\mathcal{G}\). The type \(\tau[X_{\infty},\Gamma,\Phi]\) of the action is the equivalence class of the Steinitz order_
\[\xi(X_{\infty},\Gamma,\Phi)=\operatorname{lcm}\{\#X_{\ell}=\#(\Gamma/\Gamma_{ \ell})\mid\ell>0\}. \tag{13}\]
Finally, we note the following result:
**Theorem 2.16**.: _[_20_, Theorem 1.9]_ _Let \((\mathfrak{X},\Gamma,\Phi)\) be a Cantor action. The Steinitz order \(\xi(\mathfrak{X},\Gamma,\Phi)\) is defined to be the Steinitz order for an algebraic model \((X_{\infty},\Gamma,\Phi)\) of the action, which does not depend upon the choice of an algebraic model. Moreover, the type \(\tau[\mathfrak{X},\Gamma,\Phi]\) depends only on the return equivalence class of the action._
### Type for profinite groups
The _Steinitz order_\(\Pi[\mathfrak{G}]\) of a profinite group \(\mathfrak{G}\) is defined by the supernatural number associated to a presentation of \(\mathfrak{G}\) as an inverse limit of finite groups (see [29, Chapter 2] or [26, Chapter 2.3]). The Steinitz order appears in the study of analytic representations of profinite groups associated to groups acting on rooted trees, see for example [23].
Recall that for a profinite group \(\mathfrak{G}\), an open subgroup \(\mathfrak{U}\subset\mathfrak{G}\) has finite index [26, Lemma 2.1.2].
**Definition 2.17**.: _Let \((\mathfrak{X},\Gamma,\Phi)\) be a minimal equicontinuous Cantor action, with choice of a basepoint \(x\in\mathfrak{X}\). The Steinitz orders of the action are defined as follows:_
1. \(\xi(\mathfrak{G}(\Phi))=\operatorname{lcm}\{\#\ \mathfrak{G}(\Phi)/ \mathfrak{N}\mid\mathfrak{N}\subset\mathfrak{G}(\Phi)\mbox{ open normal subgroup}\}\)_,_
2. \(\xi(\mathfrak{D}(\Phi))=\operatorname{lcm}\{\#\ \mathfrak{D}(\Phi)/(\mathfrak{N}\cap \mathfrak{D}(\Phi))\mid\mathfrak{N}\subset\mathfrak{G}(\Phi)\mbox{ open normal subgroup}\}\)_,_
3. \(\xi(\mathfrak{G}(\Phi):\mathfrak{D}(\Phi))=\operatorname{lcm}\{\#\ \mathfrak{G}(\Phi)/(\mathfrak{N}\cdot\mathfrak{D}(\Phi))\mid\mathfrak{N} \subset\mathfrak{G}(\Phi)\mbox{ open normal subgroup}\}\)_._
The Steinitz orders satisfy the Lagrange identity, where the multiplication is taken in the sense of supernatural numbers,
\[\xi(\mathfrak{G}(\Phi))=\xi(\mathfrak{G}(\Phi):\mathfrak{D}(\Phi))\cdot\xi( \mathfrak{D}(\Phi)). \tag{14}\]
and thus we always have \(\tau[\mathfrak{D}(\Phi)]\leq\tau[\mathfrak{G}(\Phi)]\). The following is a direct consequence of the definitions:
**Theorem 2.18**.: _Let \((\mathfrak{X},\Gamma,\Phi)\) be a Cantor action. Then there is equality of Steinitz orders, \(\xi(\mathfrak{X},\Gamma,\Phi)=\xi(\mathfrak{G}(\Phi):\mathfrak{D}(\Phi))\)._
## 3. Nilpotent actions
In this section, we apply the notion of the Steinitz order of a nilpotent Cantor action to the study of its dynamical properties. The proof of Theorem 1.2 is based on the special properties of the profinite completions of nilpotent groups, in particular the uniqueness of their Sylow \(p\)-subgroups, and on the relation of this algebraic property with the dynamics of the action.
### Noetherian groups
A countable group \(\Gamma\) is said to be _Noetherian_[3] if every increasing chain of subgroups \(\{H_{i}\mid i\geq 1\}\) of \(\Gamma\) has a maximal element \(H_{i_{0}}\). The group \(\mathbb{Z}\) is Noetherian; a finite product of Noetherian groups is Noetherian; and subgroups and quotient groups of a Noetherian group are Noetherian. Thus, a finitely-generated nilpotent group is Noetherian.
The notion of a Noetherian group has a generalization which is useful for the study of actions of profinite groups.
**Definition 3.1**.: _[_29_, page 153]_ _A profinite group \(\mathfrak{G}\) is said to be topologically Noetherian if every increasing chain of closed subgroups \(\{\mathfrak{H}_{i}\mid i\geq 1\}\) of \(\mathfrak{G}\) has a maximal element \(\mathfrak{H}_{i_{0}}\)._
We illustrate this concept with two canonical examples of profinite completions of \(\mathbb{Z}\).
**Example 3.2**.: Let \(\widehat{\mathbb{Z}}_{p}\) denote the \(p\)-adic integers, for \(p\) a prime. That is, \(\widehat{\mathbb{Z}}_{p}\) is the completion of \(\mathbb{Z}\) with respect to the chain of subgroups \(\mathcal{G}=\{\Gamma_{\ell}=p^{\ell}\mathbb{Z}\mid\ell\geq 1\}\). The closed subgroups of \(\widehat{\mathbb{Z}}_{p}\) are given by \(p^{i}\cdot\widehat{\mathbb{Z}}_{p}\) for some fixed \(i>0\), hence satisfy the ascending chain property in Definition 3.1.
**Example 3.3**.: Let \(\widehat{\pi}=\{p_{i}\mid i\geq 1\}\) be an infinite collection of distinct primes. Define a descending chain of subgroups of \(\mathbb{Z}\) by \(\mathcal{G}_{\widehat{\pi}}=\{\Gamma_{\ell}=p_{1}p_{2}\cdots p_{\ell}\mathbb{Z}\mid\ell\geq 1\}\). Let \(\widehat{\mathbb{Z}}_{\widehat{\pi}}\) be the completion of \(\mathbb{Z}\) with respect to the chain \(\mathcal{G}_{\widehat{\pi}}\). Then we have a topological isomorphism
\[\widehat{\mathbb{Z}}_{\widehat{\pi}}\cong\prod_{i\geq 1}\ \mathbb{Z}/p_{i} \mathbb{Z}. \tag{15}\]
Let \(H_{\ell}=\mathbb{Z}/p_{1}\mathbb{Z}\oplus\cdots\oplus\mathbb{Z}/p_{\ell} \mathbb{Z}\) be the direct sum of the first \(\ell\)-factors. Then \(\{H_{\ell}\mid\ell\geq 1\}\) is an increasing chain of subgroups of \(\widehat{\mathbb{Z}}_{\widehat{\pi}}\) which does not stabilize, so \(\widehat{\mathbb{Z}}_{\widehat{\pi}}\) is not topologically Noetherian.
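At each finite stage, the isomorphism (15) is simply the Chinese Remainder Theorem; the following short computational sketch, included here only as an illustration and not as part of the argument, checks one finite stage numerically.

```python
from math import prod

# One finite stage of (15): Z/(p_1 ... p_l)Z is isomorphic to Z/p_1 x ... x Z/p_l
# for distinct primes p_i (Chinese Remainder Theorem).
primes = [2, 3, 5, 7]
N = prod(primes)
residue_tuples = {tuple(x % p for p in primes) for x in range(N)}
assert len(residue_tuples) == N   # the reduction map is a bijection
print(f"Z/{N}Z maps bijectively onto the product of Z/pZ for p in {primes}")
```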
These two examples illustrate the idea behind the proof of the following result.
**Proposition 3.4**.: _Let \(\Gamma\) be a finitely generated nilpotent group, and let \(\widehat{\Gamma}\) be a profinite completion of \(\Gamma\). Then \(\widehat{\Gamma}\) is topologically Noetherian if and only if the prime spectrum \(\pi(\xi(\widehat{\Gamma}))\) is finite._
Proof.: First, recall some basic facts about profinite groups. (See for example, [29, Chapter 2].) For a prime \(p\), a finite group \(H\) is a \(p\)-group if every element of \(H\) has order a power of \(p\). A profinite group \(\mathfrak{H}\) is a pro-\(p\)-group if \(\mathfrak{H}\) is the inverse limit of finite \(p\)-groups. A Sylow \(p\)-subgroup \(\mathfrak{H}\subset\mathfrak{G}\) is a maximal pro-\(p\)-subgroup [29, Definition 2.2.1].
A profinite group \(\mathfrak{G}\) is _pro-nilpotent_ if it is the inverse limit of finite nilpotent groups. For example, if \(\mathfrak{G}\) is a profinite completion of a nilpotent group \(\Gamma\), then \(\mathfrak{G}\) is pro-nilpotent.
The group \(\mathfrak{G}\) is topologically finitely generated if it contains a dense subgroup \(\Gamma\subset\mathfrak{G}\) where \(\Gamma\) is finitely generated. The completion \(\mathfrak{G}(\Phi)\) associated to a Cantor action \((\mathfrak{X},\Gamma,\Phi)\) with \(\Gamma\) finitely generated is topologically finitely generated.
Assume that \(\mathfrak{G}\) is pro-nilpotent, then for each prime \(p\), there is a unique Sylow \(p\)-subgroup of \(\mathfrak{G}\), which is normal in \(\mathfrak{G}\) (see [29, Proposition 2.4.3]). Denote this group by \(\mathfrak{G}_{(p)}\). Moreover, \(\mathfrak{G}_{(p)}\) is non-trivial if and only if \(p\in\pi(\xi(\mathfrak{G}))\). We use the following result for pro-nilpotent groups, which is a consequence of [29, Proposition 2.4.3].
**Proposition 3.5**.: _Let \(\mathfrak{G}\) be a profinite completion of a finitely-generated nilpotent group \(\Gamma\). Then there is a topological isomorphism_
\[\mathfrak{G}\cong\prod_{p\in\pi(\xi(\mathfrak{G}))}\ \mathfrak{G}_{(p)}. \tag{16}\]
From the isomorphism (16) it follows immediately that if the prime spectrum \(\pi(\xi(\mathfrak{G}))\) is infinite, then \(\mathfrak{G}\) is not topologically Noetherian. To see this, list \(\pi(\xi(\mathfrak{G}))=\{p_{i}\mid i=1,2,\ldots\}\), then we obtain an infinite strictly increasing chain of closed subgroups,
\[\mathfrak{H}_{\ell}=\prod_{i=1}^{\ell}\ \mathfrak{G}_{(p_{i})}\.\]
If the prime spectrum \(\pi(\xi(\mathfrak{G}))\) is finite, then the isomorphism (16) reduces the proof that \(\mathfrak{G}\) is topologically Noetherian to showing that if \(\mathfrak{G}\) is topologically finitely generated, then each of its Sylow \(p\)-subgroups is topologically Noetherian. The group \(\mathfrak{G}_{(p)}\) is nilpotent and topologically finitely generated, so we can use the lower central series for \(\mathfrak{G}_{(p)}\) and induction to reduce to the case where \(\mathfrak{G}\) is a topologically finitely-generated abelian pro-\(p\)-group, and so is isomorphic to a finite product of \(p\)-completions of \(\mathbb{Z}\), which are topologically Noetherian.
Observe that a profinite completion \(\mathfrak{G}\) of a finitely generated nilpotent group \(\Gamma\) is a topologically finitely-generated nilpotent group, and we apply the above remarks.
**Corollary 3.6**.: _Let \(\Gamma\) be a virtually nilpotent group; that, is there exists a finitely-generated nilpotent subgroup \(\Gamma_{0}\subset\Gamma\) of finite index. Then a profinite completion \(\mathfrak{G}\) of \(\Gamma\) is topologically Noetherian if and only if its prime spectrum \(\pi(\xi(\mathfrak{G}))\) is finite._
Proof.: We can assume that \(\Gamma_{0}\) is a normal subgroup of \(\Gamma\), then its closure \(\mathfrak{G}_{0}\subset\mathfrak{G}\) satisfies the hypotheses of Proposition 3.4, and the Steinitz orders satisfy \(\xi(\mathfrak{G}_{0})\stackrel{{\Delta}}{{\sim}}\xi(\mathfrak{G})\). As \(\mathfrak{G}_{0}\) is topologically Noetherian if and only if \(\mathfrak{G}\) is topologically Noetherian, the claim follows.
### Dynamics of Noetherian groups
We relate the topologically Noetherian property of a profinite group with the dynamics of a Cantor action of the group, to obtain the proof of Theorem 1.2. We first give the profinite analog of [17, Theorem 1.6]. We follow the outline of its proof in [17].
**Proposition 3.7**.: _Let \(\mathfrak{G}\) be a topologically Noetherian group. Then a minimal equicontinuous action \((\mathfrak{X},\mathfrak{G},\widehat{\Phi})\) on a Cantor space \(\mathfrak{X}\) is locally quasi-analytic._
Proof.: The closure \(\mathfrak{G}(\Phi)\subset\operatorname{Homeo}(\mathfrak{X})\), so the action \(\widehat{\Phi}\) of \(\mathfrak{G}(\Phi)\) is effective. Suppose that the action \(\widehat{\Phi}\) is not locally quasi-analytic, then there exists an infinite properly decreasing chain of clopen subsets of \(\mathfrak{X}\), \(\{U_{1}\supset U_{2}\supset\cdots\}\), which satisfy the following properties, for all \(\ell\geq 1\):
* \(U_{\ell}\) is adapted to the action \(\widehat{\Phi}\) with isotropy subgroup \(\mathfrak{G}_{U_{\ell}}\subset\mathfrak{G}\);
* there is a closed subgroup \(K_{\ell}\subset\mathfrak{G}_{U_{\ell+1}}\) whose restricted action to \(U_{\ell+1}\) is trivial, but the restricted action of \(K_{\ell}\) to \(U_{\ell}\) is effective.
It follows that we obtain a properly increasing chain of closed subgroups \(\{K_{1}\subset K_{2}\subset\cdots\}\) in \(\mathfrak{G}\), which contradicts the assumption that \(\mathfrak{G}\) is topologically Noetherian.
Proof of Theorem 1.2.: Let \((\mathfrak{X},\Gamma,\Phi)\) be a nilpotent Cantor action, and suppose that the prime spectrum \(\pi(\xi(\mathfrak{G}(\Phi)))\) is finite. Then there exists a finitely-generated nilpotent subgroup \(\Gamma_{0}\subset\Gamma\) of finite index, and we can assume without loss of generality that \(\Gamma_{0}\) is normal. Let \(\mathfrak{G}(\Phi)_{0}\) be the closure of \(\Gamma_{0}\) in \(\mathfrak{G}(\Phi)\). Since \(\mathfrak{G}(\Phi)\) has finite prime spectrum, the group \(\mathfrak{G}(\Phi)_{0}\) also has finite prime spectrum, and thus by Proposition 3.4 the group \(\mathfrak{G}(\Phi)_{0}\) is topologically Noetherian. Let \(x\in\mathfrak{X}\); then it suffices to show that the action of \(\Gamma_{0}\) on the orbit \(\mathfrak{X}_{0}=\mathfrak{G}(\Phi)_{0}\cdot x\) is stable. This reduces the proof to showing the claim when \(\Gamma\) is nilpotent. Then the profinite closure \(\mathfrak{G}(\Phi)\) is also nilpotent, and we have a profinite action \((\mathfrak{X},\mathfrak{G}(\Phi),\widehat{\Phi})\).
Suppose that the action \(\widehat{\Phi}\) is not locally quasi-analytic, then there exists an increasing chain of closed subgroups \(K_{\ell}\subset\mathfrak{D}(\Phi)\) where \(K_{\ell}\) acts trivially on the clopen subset \(U_{\ell}\subset\mathfrak{X}\). As \(\mathfrak{D}(\Phi)\) is a closed subgroup of \(\mathfrak{G}(\Phi)\), the increasing chain \(\{K_{\ell}\mid\ell>0\}\) consists of closed subgroups of \(\mathfrak{G}(\Phi)\). This contradicts the fact that \(\mathfrak{G}(\Phi)\) is topologically Noetherian. Hence, the action \(\widehat{\Phi}\) must be locally quasi-analytic. That is, the action \((\mathfrak{X},\Gamma,\Phi)\) is stable.
## 4. Basic examples
In this section, we construct two basic examples of nilpotent Cantor actions. These examples illustrate the principles behind the subsequent more complex constructions in Section 5, which are used to prove Theorems 1.3 and 1.4.
The integer Heisenberg group is the simplest non-abelian nilpotent group, and it can be represented as the upper triangular matrices in \(\mathrm{GL}(3,\mathbb{Z})\). That is,
\[\Gamma=\left\{\left[\begin{array}{ccc}1&a&c\\ 0&1&b\\ 0&0&1\end{array}\right]\mid a,b,c\in\mathbb{Z}\right\}. \tag{17}\]
We denote a \(3\times 3\) matrix in \(\Gamma\) by its coordinates \((a,b,c)\).
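The coordinate computations in the examples below only use the multiplication rule \((a,b,c)\cdot(a^{\prime},b^{\prime},c^{\prime})=(a+a^{\prime},b+b^{\prime},c+c^{\prime}+ab^{\prime})\) coming from the matrix product in (17). The following small computational sketch, included only as an illustration for the reader and not as part of the constructions, can be used to check such identities.

```python
def heis_mul(g, h):
    """Product in the coordinates (a, b, c) of the matrices in (17)."""
    (a, b, c), (a2, b2, c2) = g, h
    return (a + a2, b + b2, c + c2 + a * b2)

def heis_inv(g):
    """Inverse of (a, b, c) is (-a, -b, a*b - c)."""
    a, b, c = g
    return (-a, -b, a * b - c)

x, y, z = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert heis_mul(x, heis_inv(x)) == (0, 0, 0)
# The commutation relation used in Section 5.1:  x y x^{-1} = y z
assert heis_mul(heis_mul(x, y), heis_inv(x)) == heis_mul(y, z)
```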
**EXAMPLE 4.1**.: A _renormalizable Cantor action_, as defined in [22], can be constructed from the group chain defined by a proper self-embedding of a non-abelian group \(\Gamma\) into itself.
For a prime \(p\geq 2\), define the self-embedding \(\varphi_{p}\colon\Gamma\to\Gamma\) by \(\varphi_{p}(a,b,c)=(pa,pb,p^{2}c)\). Then define a group chain in \(\Gamma\) by setting
\[\Gamma_{\ell}=\varphi_{p}^{\ell}(\Gamma)=\{(p^{\ell}a,p^{\ell}b,p^{2\ell}c) \mid a,b,c\in\mathbb{Z}\}\quad,\quad\bigcap_{\ell>0}\;\Gamma_{\ell}=\{e\}\.\]
For \(\ell>0\), the normal core for \(\Gamma_{\ell}\) is given by \(C_{\ell}=\mathrm{core}(\Gamma_{\ell})=\{(p^{2\ell}a,p^{2\ell}b,p^{2\ell}c)\mid a,b,c\in\mathbb{Z}\}\), and so the quotient group \(Q_{\ell}=\Gamma/C_{\ell}\cong\{(\overline{a},\overline{b},\overline{c})\mid \overline{a},\overline{b},\overline{c}\in\mathbb{Z}/p^{2\ell}\mathbb{Z}\}\). The profinite group \(\widehat{\Gamma}_{\infty}\) is the inverse limit of the quotient groups \(Q_{\ell}\) so we have \(\widehat{\Gamma}_{\infty}=\{(\widehat{a},\widehat{b},\widehat{c})\mid \widehat{a},\widehat{b},\widehat{c}\in\widehat{\mathbb{Z}}_{p^{2}}\}\). Thus, \(\xi(\widehat{\Gamma})=\{p^{\infty}\}\). Even though the quotient groups \(\Gamma_{\ell}/C_{\ell}\) are all non-trivial, for this action the inverse limit \(\mathfrak{D}(\Phi)\) is the trivial group. This follows from the fact that there are inclusions
\[\Gamma_{2\ell}=\{(p^{2\ell}a,p^{2\ell}b,p^{4\ell}c)\mid a,b,c,\in\mathbb{Z}\} \subset C_{\ell}=\{(p^{2\ell}a,p^{2\ell}b,p^{2\ell}c)\mid a,b,c\in\mathbb{Z} \}\.\]
The triviality of \(\mathfrak{D}(\Phi)\) implies that there is an equivalent group chain for the action [11] which can be chosen so that every subgroup in the chain is normal in \(\Gamma\).
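As a quick sanity check that \(\varphi_{p}\) is compatible with the group law, so that each \(\Gamma_{\ell}=\varphi_{p}^{\ell}(\Gamma)\) is indeed a subgroup, one may verify the homomorphism property on sample elements; the sketch below is again only an illustration added for the reader.

```python
def heis_mul(g, h):
    """(a,b,c)*(a2,b2,c2) = (a+a2, b+b2, c+c2+a*b2), as in the sketch above."""
    (a, b, c), (a2, b2, c2) = g, h
    return (a + a2, b + b2, c + c2 + a * b2)

# Example 4.1: phi_p(a, b, c) = (p*a, p*b, p^2*c) respects the product.
p = 3
phi_p = lambda g: (p * g[0], p * g[1], p * p * g[2])
g, h = (2, 5, 7), (1, 4, 9)
assert phi_p(heis_mul(g, h)) == heis_mul(phi_p(g), phi_p(h))
```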
**EXAMPLE 4.2**.: For distinct primes \(p,q\geq 2\), define the self-embedding \(\varphi_{p,q}\colon\Gamma\to\Gamma\) by \(\varphi_{p,q}(a,b,c)=(pa,qb,pqc)\). Then define a group chain in \(\Gamma\) by setting
\[\Gamma_{\ell}=\varphi_{p,q}^{\ell}(\Gamma)=\{(p^{\ell}a,q^{\ell}b,(pq)^{\ell}c )\mid a,b,c\in\mathbb{Z}\}\quad,\quad\bigcap_{\ell>0}\;\Gamma_{\ell}=\{e\}\.\]
For \(\ell>0\), the normal core for \(\Gamma_{\ell}\) is given by \(C_{\ell}=\mathrm{core}(\Gamma_{\ell})=\{((pq)^{\ell}a,(pq)^{\ell}b,(pq)^{\ell} c)\mid a,b,c\in\mathbb{Z}\}\), and so the quotient group \(Q_{\ell}=\Gamma/C_{\ell}\cong\{(\overline{a},\overline{b},\overline{c})\mid \overline{a},\overline{b},\overline{c}\in\mathbb{Z}/(pq)^{\ell}\mathbb{Z}\}\). The profinite group \(\widehat{\Gamma}_{\infty}\) is the inverse limit of the quotient groups \(Q_{\ell}\) so we have \(\widehat{\Gamma}_{\infty}=\{(\widehat{a},\widehat{b},\widehat{c})\mid \widehat{a},\widehat{b},\widehat{c}\in\widehat{\mathbb{Z}}_{pq}\}\). Thus, \(\xi(\widehat{\Gamma}_{\infty})=\{p^{\infty},q^{\infty}\}\), and \(D_{\infty}\) is the inverse limit of the finite groups \(\Gamma_{\ell}/C_{\ell}\) by (9), so \(D_{\infty}\cong\widehat{\mathbb{Z}}_{q}\times\widehat{\mathbb{Z}}_{p}\).
## 5. Nilpotent actions with prescribed spectrum
In this section, we construct stable actions of the discrete Heisenberg group with prescribed prime spectrum, proving Theorem 1.3. Then we construct examples of wild nilpotent Cantor actions, proving Theorem 1.4 from which we deduce Corollary 1.5. For simplicity, our examples all use the Heisenberg group represented by \(3\times 3\) matrices. Of course, these examples can be generalized to the integer upper triangular matrices in all dimensions, where there is much more freedom in the choices made in the construction. The calculations become correspondingly more tedious, but yield analogous results. It seems reasonable to expect that similar constructions can be made for any finitely-generated torsion-free nilpotent (non-abelian) group \(\Gamma\). That is, there are always group chains in \(\Gamma\) which yield wild nilpotent Cantor actions.
Let \(\Gamma\subset\mathrm{GL}(3,\mathbb{Z})\) denote the discrete Heisenberg group, given by formula (17). The basis for the constructions below is the structure theory for nilpotent group completions in Proposition 3.5, in particular the formula (16). Given sets of primes \(\pi_{f}\) and \(\pi_{\infty}\), we embed an infinite product of finite actions, as in Example 5.1, into a profinite completion \(\widehat{\Gamma}_{\infty}\) of \(\Gamma\), and thus obtain a nilpotent Cantor action \((X_{\infty},\Gamma,\Phi)\) on the quotient space \(X_{\infty}=\widehat{\Gamma}_{\infty}/D_{\infty}\).
### Basic components of the construction
Fix a prime \(p\geq 2\).
For \(n\geq 1\) and \(0\leq k<n\), we have the following finite groups:
\[G_{p,n}=\left\{\left[\begin{array}{ccc}1&\overline{a}&\overline{c}\\ 0&1&\overline{b}\\ 0&0&1\end{array}\right]\mid\overline{a},\overline{b},\overline{c}\in\mathbb{Z} /p^{n}\mathbb{Z}\right\}\,\ H_{p,n,k}=\left\{\left[\begin{array}{ccc}1&p^{k} \overline{a}&0\\ 0&1&0\\ 0&0&1\end{array}\right]\mid\overline{a}\in\mathbb{Z}/p^{n}\mathbb{Z}\right\} \tag{18}\]
Note that \(\#[G_{p,n}]=p^{3n}\) and \(\#[H_{p,n,k}]=p^{n-k}\).
Let \(\overline{x}=(1,0,0),\overline{y}=(0,1,0),\overline{z}=(0,0,1)\in G_{p,n}\), then \(\overline{x}\cdot\overline{y}\cdot\overline{x}^{-1}=\overline{yz}\) and \(\overline{x}\cdot\overline{z}\cdot\overline{x}^{-1}=\overline{z}\). That is, the adjoint action of \(\overline{x}\) on the "plane" in the \((\overline{y},\overline{z})\)-coordinates is a "shear" action along the \(\overline{z}\)-axis, and the adjoint action of \(\overline{x}\) on the \(\overline{z}\)-axis fixes all points on the \(\overline{z}\)-axis.
Set \(X_{p,n,k}=G_{p,n}/H_{p,n,k}\), then the isotropy group of the action of \(G_{p,n}\) on \(X_{p,n,k}\) at the coset \(H_{p,n,k}\) of the identity element is \(H_{p,n,k}\). The core subgroup \(C_{p,n,k}\subset H_{p,n,k}\) consists of the elements of \(H_{p,n,k}\) which fix every point in \(X_{p,n,k}\). The action of \(\overline{x}\in H_{p,n,k}\) on the coset space \(X_{p,n,k}\) satisfies
\[\Phi(\overline{x})(\overline{y}\,H_{p,n,k})=\overline{yz}\,H_{p,n,k},\]
so the identity is the only element in \(G_{p,n}\) which acts trivially on every coset in \(X_{p,n,k}\), so \(C_{p,n,k}\) is the trivial group. Then \(D_{p,n,k}=H_{p,n,k}/C_{p,n,k}=H_{p,n,k}\), and for each \(g\in H_{p,n,k}\) its action fixes the cosets of the multiples of \(\overline{z}\).
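The triviality of \(C_{p,n,k}\) can also be confirmed by brute force for small parameters; the sketch below, included only as an illustration (the values \(p=2\), \(n=2\), \(k=1\) are chosen arbitrarily), enumerates the cosets of \(H_{p,n,k}\) and the elements of \(G_{p,n}\) fixing all of them.

```python
from itertools import product

p, n, k = 2, 2, 1
q = p ** n                      # coordinates are taken mod p^n

def mul(g, h):
    (a, b, c), (a2, b2, c2) = g, h
    return ((a + a2) % q, (b + b2) % q, (c + c2 + a * b2) % q)

G = list(product(range(q), repeat=3))                     # G_{p,n}, order p^{3n}
H = {((p ** k) * a % q, 0, 0) for a in range(q)}          # H_{p,n,k}, order p^{n-k}

coset = lambda g: frozenset(mul(g, h) for h in H)
X = {coset(g) for g in G}                                 # X_{p,n,k} = G/H
core = [g for g in G if all(coset(mul(g, x)) == coset(x) for x in G)]
print(len(X), core)    # p^{2n+k} cosets; core == [(0, 0, 0)], i.e. C_{p,n,k} is trivial
```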
### Stable nilpotent actions with finite or infinite prime spectrum
We now prove Theorem 1.3 by constructing a family of stable examples with prescribed prime spectra.
Let \(\pi_{f}\) and \(\pi_{\infty}\) be two disjoint collections of primes, with \(\pi_{f}\) a finite set, and \(\pi_{\infty}\) a non-empty set.
Enumerate \(\pi_{f}=\{q_{1},q_{2},\ldots,q_{m}\}\), and then choose integers \(1\leq r_{i}\leq n_{i}\) for \(1\leq i\leq m\).
Enumerate \(\pi_{\infty}=\{p_{1},p_{2},\ldots\}\) with the convention (for notational convenience) that if \(\ell\) is greater than the number of primes in \(\pi_{\infty}\) then we set \(p_{\ell}=1\). For each \(\ell\geq 1\), define the integers
\[M_{\ell} = q_{1}^{r_{1}}q_{2}^{r_{2}}\cdots q_{m}^{r_{m}}\cdot p_{1}^{\ell}p_{2}^{\ell}\cdots p_{\ell}^{\ell}\,, \tag{19}\] \[N_{\ell} = q_{1}^{n_{1}}q_{2}^{n_{2}}\cdots q_{m}^{n_{m}}\cdot p_{1}^{\ell}p_{2}^{\ell}\cdots p_{\ell}^{\ell}. \tag{20}\]
For all \(\ell\geq 1\), observe that \(M_{\ell}\) divides \(N_{\ell}\).
Define a subgroup of the Heisenberg group \(\Gamma\), in the coordinates above,
\[\Gamma_{\ell}=\{(aM_{\ell},bN_{\ell},cN_{\ell})\mid a,b,c\in\mathbb{Z}\}.\]
Its core subgroup is given by \(C_{\ell}=\{(aN_{\ell},bN_{\ell},cN_{\ell})\mid a,b,c\in\mathbb{Z}\}\). Observe that
\[\mathbb{Z}/N_{\ell}\mathbb{Z}\cong\mathbb{Z}/q_{1}^{n_{1}}\mathbb{Z}\oplus \cdots\oplus\mathbb{Z}/q_{m}^{n_{m}}\mathbb{Z}\oplus\mathbb{Z}/p_{1}^{\ell} \mathbb{Z}\oplus\cdots\oplus\mathbb{Z}/p_{\ell}^{\ell}\mathbb{Z}\.\]
By Proposition 3.5, and in the notation of Example 5.1, we have for \(k_{i}=n_{i}-r_{i}\) that
\[\widehat{\Gamma}_{\infty}\ =\ \underset{\longleftarrow}{\lim}\{\Gamma/C_{\ell} \rightarrow\Gamma/C_{\ell-1}\mid\ell\geq 1\}\ \cong\ \prod_{i=1}^{m}\ G_{q_{i},n_{i}}\ \cdot\ \prod_{j=1}^{\infty}\ \widehat{\Gamma}_{(p_{j})}, \tag{21}\]
\[D_{\infty}\ =\ \underset{\longleftarrow}{\lim}\{\Gamma_{\ell}/C_{\ell} \rightarrow\Gamma_{\ell-1}/C_{\ell-1}\mid\ell\geq 1\}\ \cong\ \prod_{i=1}^{m}\ H_{q_{i},n_{i},k_{i}}. \tag{22}\]
Then the Cantor space \(X_{\infty}=\widehat{\Gamma}_{\infty}/D_{\infty}\) associated to the group chain \(\{\Gamma_{\ell}\mid\ell\geq 1\}\) is given by
\[X_{\infty}\ \cong\ \prod_{i=1}^{m}\ X_{q_{i},n_{i},k_{i}}\ \times\ \prod_{j=1}^{\infty}\ \widehat{\Gamma}_{(p_{j})}. \tag{23}\]
In particular, as the first factor in (23) is a finite product of finite sets, the second factor defines an open neighborhood
\[U=\prod_{i=1}^{m}\ \{x_{i}\}\ \times\ \prod_{j=1}^{\infty}\ \widehat{\Gamma}_{(p_{j})}\]
where \(x_{i}\in X_{q_{i},n_{i},k_{i}}\) is the basepoint given by the coset of the identity element. That is, \(U\) is a clopen neighborhood of the basepoint in \(X_{\infty}\). The isotropy group of \(U\) is given by
\[\widehat{\Gamma}_{\infty}|U\ =\ \prod_{i=1}^{m}\ H_{q_{i},n_{i},k_{i}}\ \times\ \prod_{j=1}^{\infty}\ \widehat{\Gamma}_{(p_{j})}. \tag{24}\]
The restriction of \(\widehat{\Gamma}_{\infty}|U\) to \(U\) is isomorphic to the subgroup
\[K|U\ =\ \prod_{i=1}^{m}\ \{\overline{e}_{i}\}\ \times\ \prod_{j=1}^{\infty}\ \widehat{\Gamma}_{(p_{j})}\ \subset\ \mathrm{Homeo}(U)\, \tag{25}\]
where \(\overline{e}_{i}\in G_{q_{i},n_{i}}\) is the identity element. The group \(K|U\) acts freely on \(U\), and thus the action of \(\widehat{\Gamma}_{\infty}\) on \(X_{\infty}\) is locally quasi-analytic. Moreover, the prime spectrum of the action of \(\Gamma\) on \(X_{\infty}\) is the union \(\widehat{\pi}=\pi_{f}\cup\pi_{\infty}=\pi(\xi(\widehat{\Gamma}_{\infty}))\). If \(\pi_{\infty}\) is infinite, then the prime spectrum of the action is infinite. Note that the group \(\Gamma\) embeds into \(\widehat{\Gamma}_{\infty}\), since the integers \(M_{\ell}\) and \(N_{\ell}\) tend to infinity with \(\ell\). This completes the proof of Theorem 1.3.
### Wild nilpotent actions with infinite prime spectrum
We now prove Theorem 1.4. We must show that every infinite set of primes can be realized as the prime spectrum of a wild action of the Heisenberg group \(\Gamma\), as defined by (17). Let \(\pi_{f}\) and \(\pi_{\infty}\) be disjoint collections of primes, with \(\pi_{f}\) an infinite set and \(\pi_{\infty}\) arbitrary, possibly empty.
Enumerate \(\pi_{f}=\{q_{1},q_{2},\ldots\}\) and choose integers \(1\leq r_{i}<n_{i}\) for \(1\leq i<\infty\).
Enumerate \(\pi_{\infty}=\{p_{1},p_{2},\ldots\}\), again with the convention that if \(\ell\) is greater than the number of primes in \(\pi_{\infty}\) then we set \(p_{\ell}=1\).
As in Section 5.2, for each \(\ell\geq 1\), define the integers
\[\begin{array}{rcl}M_{\ell}&=&q_{1}^{r_{1}}q_{2}^{r_{2}}\cdots q_{\ell}^{r_{ \ell}}\cdot p_{1}^{\ell}p_{2}^{\ell}\cdots p_{\ell}^{\ell}\,\\ N_{\ell}&=&q_{1}^{n_{1}}q_{2}^{n_{2}}\cdots q_{\ell}^{n_{\ell}}\cdot p_{1}^{ \ell}p_{2}^{\ell}\cdots p_{\ell}^{\ell}\.\end{array}\]
For \(\ell\geq 1\), define a subgroup of the Heisenberg group \(\Gamma\), in the coordinates above,
\[\Gamma_{\ell}=\{(aM_{\ell},bN_{\ell},cN_{\ell})\mid a,b,c\in\mathbb{Z}\}\, \tag{26}\]
Its core subgroup is given by \(C_{\ell}=\{(aN_{\ell},bN_{\ell},cN_{\ell})\mid a,b,c\in\mathbb{Z}\}\). For \(k_{i}=n_{i}-r_{i}\) we then have
\[\widehat{\Gamma}_{\infty}\ \cong\ \prod_{i=1}^{\infty}\ G_{q_{i},n_{i}}\ \cdot\ \prod_{j=1}^{\infty}\ \widehat{\Gamma}_{(p_{j})}\quad,\quad D_{\infty}\ \cong\ \prod_{i=1}^{\infty}\ H_{q_{i},n_{i},k_{i}}. \tag{27}\]
The Cantor space \(X_{\infty}=\widehat{\Gamma}_{\infty}/D_{\infty}\) associated to the group chain \(\{\Gamma_{\ell}\mid\ell\geq 1\}\) is given by
\[X_{\infty}\ \cong\ \prod_{i=1}^{\infty}\ X_{q_{i},n_{i},k_{i}}\ \times\ \prod_{j=1}^{\infty}\ \widehat{\Gamma}_{(p_{j})}. \tag{28}\]
The first factor in (28) is an infinite product of finite sets, so fixing the first \(\ell\)-coordinates in this product determines a clopen subset of \(X_{\infty}\). Let \(x_{i}\in X_{q_{i},n_{i},k_{i}}\) denote the coset of the identity element, which is the basepoint in \(X_{q_{i},n_{i},k_{i}}\). Then for each \(\ell\geq 1\), we define a clopen set in \(X_{\infty}\)
\[U_{\ell}=\prod_{i=1}^{\ell}\ \{x_{i}\}\ \times\ \prod_{i=\ell+1}^{\infty}\ X_{q_{i},n_{i},k_{i}}\ \times\ \prod_{j=1}^{\infty}\ \widehat{\Gamma}_{(p_{j})}. \tag{29}\]
Recalling the calculations in Example 5.1, the subgroup \(H_{q_{i},n_{i},k_{i}}\) is the isotropy group of the basepoint \(x_{i}\in X_{q_{i},n_{i},k_{i}}\). Thus, the isotropy subgroup of \(U_{\ell}\) for the \(\widehat{\Gamma}_{\infty}\)-action is given by the product
\[\widehat{\Gamma}_{\infty}|_{U_{\ell}}\ =\ \prod_{i=1}^{\ell}\ H_{q_{i},n_{i},k_{i}} \ \times\ \prod_{i=\ell+1}^{\infty}\ G_{q_{i},n_{i}}\ \times\ \prod_{j=1}^{\infty}\ \widehat{\Gamma}_{(p_{j})}. \tag{30}\]
For \(j\neq i\), the subgroup \(H_{q_{i},n_{i},k_{i}}\) acts as the identity on the factors \(X_{q_{j},n_{j},k_{j}}\) in (28). Thus, the image of \(\widehat{\Gamma}_{\infty}|_{U_{\ell}}\) in \(\mathrm{Homeo}(U_{\ell})\) is isomorphic to the subgroup
\[Z_{\ell}\ =\ \widehat{\Gamma}_{\infty}|U_{\ell}\ =\ \prod_{i=1}^{\ell}\,\{ \overline{e}_{i}\}\ \times\ \prod_{i=\ell+1}^{\infty}\ G_{q_{i},n_{i}}\ \times\ \prod_{j=1}^{\infty}\ \widehat{\Gamma}_{(p_{j})}\ \subset\ \mathrm{Homeo}(U_{\ell})\, \tag{31}\]
where \(\overline{e}_{i}\in G_{q_{i},n_{i}}\) is the identity element.
We next show that this action is not stable; that is, for any \(\ell>0\) there exists a clopen subset \(V\subset U_{\ell}\) and a non-trivial \(\widehat{g}\in Z_{\ell}\) such that the action of \(\widehat{g}\) restricts to the identity map on \(V\).
We can assume without loss of generality that \(V=U_{\ell^{\prime}}\) for some \(\ell^{\prime}>\ell\). Consider the restriction map for the isotropy subgroup of \(Z_{\ell}\) to \(U_{\ell^{\prime}}\) which is given by
\[\rho_{\ell,\ell^{\prime}}\colon Z_{\ell}|_{U_{\ell^{\prime}}}\to Z_{\ell^{ \prime}}\subset\mathrm{Homeo}(U_{\ell^{\prime}})\.\]
We must show that there exists \(\ell^{\prime}>\ell\) such that this map has a non-trivial kernel. Calculate this map in terms of the product representations above,
\[Z_{\ell}|_{U_{\ell^{\prime}}}\ =\ \prod_{i=1}^{\ell}\,\{\overline{e}_{i}\} \ \times\ \prod_{i=\ell+1}^{\ell^{\prime}}\ H_{q_{i},n_{i},k_{i}}\ \times\ \prod_{i=\ell^{\prime}+1}^{\infty}\ G_{q_{i},n_{i}}\ \times\ \prod_{j=1}^{\infty}\ \widehat{\Gamma}_{(p_{j})}. \tag{32}\]
For \(\ell<i\leq\ell^{\prime}\), the group \(H_{q_{i},n_{i},k_{i}}\) fixes the point \(\prod_{i=1}^{\ell^{\prime}}\,\{x_{i}\}\), and acts trivially on \(\prod_{i=\ell^{\prime}+1}^{\infty}\ X_{q_{i},n_{i},k_{i}}\). Thus, the kernel of the restriction map contains the second factor in (32),
\[\prod_{i=\ell+1}^{\ell^{\prime}}\ H_{q_{i},n_{i},k_{i}}\ \subset\ \ker\left\{\rho_{\ell,\ell^{\prime}}\colon Z_{\ell}|_{U_{\ell^{\prime}}}\to \mathrm{Homeo}(U_{\ell^{\prime}})\right\}. \tag{33}\]
As this group is non-trivial for all \(\ell^{\prime}>\ell\), the action of \(\widehat{\Gamma}_{\infty}\) on \(X_{\infty}\) is not locally quasi-analytic, hence the action of \(\Gamma\) on \(X_{\infty}\) is wild. Moreover, the prime spectrum of the action of \(\Gamma\) on \(X_{\infty}\) equals the union \(\widehat{\pi}=\pi_{f}\cup\pi_{\infty}\).
We now prove the second part of Theorem 1.4, showing that choices in the construction above can be made in such a way that the action of \(\Gamma\) on a Cantor set is topologically free while the action of \(\widehat{\Gamma}_{\infty}\) is wild, and the prime spectrum is prescribed.
Choose an infinite set of distinct primes \(\pi_{f}=\{q_{1},q_{2},\ldots\}\), and let \(\pi_{\infty}\) be empty.
Choose the constants as in Section 5.1, with \(n_{i}=2\) and \(k_{i}=1\) for all \(i\geq 1\).
Define the Cantor space \(X_{\infty}\) by (28), where the second factor is trivial, that is, a point. The action of \(\widehat{\Gamma}_{\infty}\) is wild by the calculations in formulas (30) to (33).
We claim that the action of \(\Gamma\) on \(X_{\infty}\) is topologically free. Suppose not, then there exists an open set \(U\subset X_{\infty}\) and \(g\in\Gamma\) such that the action of \(\Phi(g)\) is non-trivial on \(X_{\infty}\) but leaves the set \(U\) invariant, and restricts to the identity action on \(U\). The action of \(\Gamma\) on \(X_{\infty}\) is minimal, so there exists \(h\in\Gamma\) with \(h\cdot x_{\infty}\in U\). Then \(\Phi(h^{-1}gh)(x_{\infty})=x_{\infty}\) and the action \(\Phi(h^{-1}gh)\) fixes an open neighborhood of \(x_{\infty}\). Replacing \(g\) with \(h^{-1}gh\) we can assume that \(\Phi(g)(x_{\infty})=x_{\infty}\in U\). From the definition (29), the clopen sets
\[U_{\ell}=\prod_{i=1}^{\ell}\ \{x_{i}\}\ \times\ \prod_{i=\ell+1}^{\infty}\ X_{q_{i},2,1} \tag{34}\]
form a neighborhood basis at \(x_{\infty}\), and thus there exists \(\ell>0\) such that \(U_{\ell}\subset U\).
The group \(\Gamma\) embeds into \(\widehat{\Gamma}_{\infty}\) along the diagonal in the product (16). That is, we can write \(g=(g,g,\ldots,g)\in\prod_{i=1}^{\infty}\ G_{q_{i},2}\). The action of \(\Phi(g)\) is factorwise, and \(\Phi(g)(x_{\infty})=x_{\infty}\) implies that \(g\in D_{\infty}\cong\prod_{i=1}^{\infty}\ H_{q_{i},n_{i},k_{i}}\). The assumption that \(\Phi(g)\) fixes the points in \(U\) implies that it acts trivially
on each factor \(X_{q_{i},2,1}\) for \(i>\ell\). As each factor \(H_{q_{i},2,1}\) acts effectively on \(X_{q_{i},2,1}\) this implies that the projection of \(g\) to the \(i\)-th factor group \(H_{q_{i},2,1}\) is the identity for \(i>\ell\). This implies that every entry above the diagonal in the matrix representation of \(g\) in (17) is divisible by an infinite number of distinct primes \(\{q_{i}\mid i\geq\ell\}\), so by the Prime Factorization Theorem the matrix \(g\) is the identity.
Alternatively, observe that we have \(g\in\prod_{i=1}^{\ell}\ H_{q_{i},2,1}\). This is a finite product of finite groups, which implies that \(g\in\Gamma\) is a torsion element. However, the Heisenberg group \(\Gamma\) is torsion-free, hence \(g\) must be the identity. Thus, the action of \(\Gamma\) on \(X_{\infty}\) must be topologically free.
Finally, the above construction allows the choice of any infinite subset \(\pi_{f}\) of distinct primes, and there are uncountably many distinct such choices. Thus, by Theorem 1.9 in [20] there are an uncountable number of topologically-free, wild nilpotent Cantor actions with distinct prime spectrum. This completes the proof of Theorem 1.4.
### Proof of Corollary 1.5
Consider the family of wild topologically free actions on the Heisenberg group \(\Gamma\) with infinite distinct prime spectrum, as constructed at the end of Section 5.3. We show that the uncountable number of infinite choices of \(\pi_{f}\) in this family can be made so that the actions have pairwise disjoint types.
By Definition 2.13, for two Steinitz numbers \(\xi\) and \(\xi^{\prime}\) we have that their types are equal, \(\tau(\xi)=\tau(\xi^{\prime})\), if and only if there exist integers \(m,m^{\prime}\) such that \(m\cdot\xi=m^{\prime}\cdot\xi^{\prime}\). Thus two actions with prime spectra \(\pi_{f}\) and \(\pi_{f}^{\prime}\) have distinct types if and only if \(\pi_{f}\) and \(\pi_{f}^{\prime}\) differ by an infinite number of entries. This happens, for instance, if \(\pi_{f}\) and \(\pi_{f}^{\prime}\) are _almost disjoint infinite sets_, i.e. they are infinite sets with finite intersection.
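For instance, taking \(\pi_{f}\) to be the set of primes congruent to \(1\bmod 4\) and \(\pi_{f}^{\prime}\) the set of primes congruent to \(3\bmod 4\) gives two disjoint, hence almost disjoint, infinite sets, so the corresponding actions have distinct types; by contrast, \(\pi_{f}\) and \(\pi_{f}\cup\{3\}\) differ in a single prime and therefore yield equal types by the criterion above.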
The set of prime numbers is countable, so the family of infinite almost disjoint subsets of prime numbers is uncountable if and only if the family of infinite almost disjoint subsets of natural numbers is uncountable. The family of almost disjoint subsets of natural numbers is uncountable by [14, Corollary 2.3]. Since the set of finite subsets of natural numbers is countable, the set of almost disjoint infinite subsets of natural numbers is uncountable.
It follows that the prime spectra of the uncountable family of actions of the Heisenberg group in Theorem 1.4 can be chosen so that they form a family of almost disjoint infinite sets. Then their types are pairwise distinct, and by Theorem 2.16 these actions of the Heisenberg group are pairwise not return equivalent. Therefore, they are pairwise not conjugate.
|
2305.01550
|
Mitigating Approximate Memorization in Language Models via Dissimilarity
Learned Policy
|
Large Language models (LLMs) are trained on large amounts of data, which can
include sensitive information that may compromise personal privacy. LLMs showed
to memorize parts of the training data and emit those data verbatim when an
adversary prompts appropriately. Previous research has primarily focused on
data preprocessing and differential privacy techniques to address memorization
or prevent verbatim memorization exclusively, which can give a false sense of
privacy. However, these methods rely on explicit and implicit assumptions about
the structure of the data to be protected, which often results in an incomplete
solution to the problem. To address this, we propose a novel framework that
utilizes a reinforcement learning approach (PPO) to fine-tune LLMs to mitigate
approximate memorization. Our approach utilizes a negative similarity score,
such as BERTScore or SacreBLEU, as a reward signal to learn a dissimilarity
policy. Our results demonstrate that this framework effectively mitigates
approximate memorization while maintaining high levels of coherence and fluency
in the generated samples. Furthermore, our framework is robust in mitigating
approximate memorization across various circumstances, including longer
context, which is known to increase memorization in LLMs.
|
Aly M. Kassem
|
2023-05-02T15:53:28Z
|
http://arxiv.org/abs/2305.01550v1
|
# Mitigating Approximate Memorization in Language Models via Dissimilarity Learned Policy
###### Abstract
Large Language models (LLMs) are trained on large amounts of data, which can include sensitive information that may compromise personal privacy. LLMs have been shown to memorize parts of the training data and emit those data verbatim when an adversary prompts appropriately. Previous research has primarily focused on data preprocessing and differential privacy techniques to address memorization or prevent verbatim memorization exclusively, which can give a false sense of privacy. However, these methods rely on explicit and implicit assumptions about the structure of the data to be protected, which often results in an incomplete solution to the problem. To address this, we propose a novel framework that utilizes a reinforcement learning approach (PPO) to fine-tune LLMs to mitigate approximate memorization. Our approach utilizes a negative similarity score, such as BERTScore or SacreBLEU, as a reward signal to learn a dissimilarity policy \(\pi_{D}\). Our results demonstrate that this framework effectively mitigates approximate memorization while maintaining high levels of coherence and fluency in the generated samples. Furthermore, our framework is robust in mitigating approximate memorization across various circumstances, including longer context, which is known to increase memorization in LLMs.
## 1 Introduction
Large language models have recently grown exponentially, from millions to billions to trillions of parameters (Radford et al., 2019; Brown et al., 2020; Chowdhery et al., 2022; Fedus et al., 2021). As the scale of those models rises, their training sets grow to billions of tokens (Gao et al., 2020), and their performance improves across the board, even when using a few-shot learning setting (Brown et al., 2020). With the increase in the size of both models and datasets, as well as the performance improvement, practical concerns have arisen regarding the privacy risks of memorization of the training data in large language models, as an adversary interacting with a pre-trained model can extract individual sequences that were used to train the model (Carlini et al., 2021), even if the language model was trained on a dataset that is publicly accessible. Much research has been conducted in the context of large language models to address training data memorization problems; studies show that a language model with 6 billion parameters (GPT-J) can memorize at least 1% of its training data (Carlini et al., 2022). The cause of memorization might be the language models' training strategy, as the objective of a language model (Radford et al., 2018) is to identify the relationship between the present token and its succeeding (auto-regressive LM) or surrounding segments (MLM). Another cause of memorization might be repeated instances in the training corpus, since the more often an example is repeated, the more likely it is to be memorized (Lee et al., 2021).
Several approaches have been proposed to address the problem of memorization in large language models, such as data sanitization, the use of differential privacy algorithms, and data deduplication. These techniques can effectively prevent the generation of memorized content, but they also have drawbacks. For example, data sanitization assumes that private information can be easily identified and is not context-dependent, while differential privacy can result in lower-quality generative models (Anil et al., 2021).
In this study, we propose a framework to prevent memorization in large language models by fine-tuning those models using a reinforcement learning approach (PPO) (Schulman et al., 2017). Given samples of prefixes and suffixes from the original pre-training data of the language model, we use a prefix as input for the language model to generate the suffix; then, we compute the negative SacreBLEU (Post, 2018) score to measure the dissimilarity
between the true suffix and the generated suffix; the dissimilarity score is then regarded as a reward signal to maximize in the training process, which guarantees that approximate memorization will be mitigated. We experimented with different reward functions, such as BERTScore (Zhang et al., 2019) and the weighted sum of perplexity with SacreBLEU or BERTScore. The objective of these experiments is to see how they affect the generation quality and the memorization ratio. Yet all of the proposed reward functions ensure the minimization of approximate memorization. The aim of the framework is to learn a policy \(\pi_{D}\) that can paraphrase the suffix given some prefix. For example, in "Alice green lives at 187 bob street", the prefix is "Alice green lives at" and the suffix is "187 bob street"; we want the fine-tuned language model to paraphrase the suffix into something like "12 red street". As the suffix is paraphrased, the memorization relationship between the prefix and suffix is minimized. We evaluate the effectiveness of the framework in two different settings. The first is the training process's standard setting: given a prefix, we want the generated suffix to be as dissimilar as possible to the true suffix. Many studies (Carlini et al., 2021, 2022) have shown that the memorization ratio increases as a longer context is provided, so the second setting provides 100 additional context tokens as a pre-prefix combined with the prefix to evaluate memorization in this case. The proposed framework does not make any explicit or implicit assumptions about the structure of the data to be protected. Also, unlike DP methods, the proposed framework does not apply any partition mechanism to split the data into public and private data; as language data cannot be cleanly partitioned, we apply the policy to all training data, since defining or partitioning the data into private and public might be impossible in the case of language data (Brown et al., 2022).
We tested with three model sizes (125M, 1.3B, and 2.7B parameters), and all of our models are from the GPT-Neo family (Black et al., 2021). Our main findings are as follows:
* **The learned policy is able to generate new suffixes which are different from the true ones** by a large margin of dissimilarity, without a considerable loss in generation quality. This is achieved because the fine-tuned LM learned a policy to change names or numbers, or to replace a whole phrase with a similar entity.
* **As the size of the language model increases, the convergence rate becomes faster**. The convergence in this context means that the model-generated suffixes become significantly different from the original suffixes, and the difference between the perplexity of the generated examples and the original examples becomes smaller. In our experiments, we found that the GPT-Neo 125M converged in three epochs, four PPO epochs per batch, GPT-Neo 1.3B model converged in two epochs, four PPO epochs per batch, and the GPT-Neo 2.7B model converged in two epochs, two PPO epochs per batch.
* **As the size of a language model increases, the dissimilarity score increases**, which can be measured by the difference between negative SacreBLEU before and after applying the framework. However, this increase in dissimilarity score is not always a positive outcome. In some settings, it is accompanied by an increase in the perplexity score of the generated examples. This suggests that larger models may tend to "forget" the memorized data faster.
Overall, our findings show that using the proposed framework to fine-tune large language models mitigates training data approximate memorization while ensuring suffix generation quality without a considerable loss in the fluency and coherence of the text.
## 2 Background
### Language Models
Language models are central to natural language processing techniques. They operate by taking in a sequence of tokens, such as words or characters, and outputting a probability distribution over the next token. Training a language model aims to maximize the likelihood of a given text corpus. One popular approach to training language models is to use a "next-token prediction" objective, which involves constructing a generative model of the probability distribution of a sequence of tokens. State-of-the-art language models use neural networks, such as recurrent neural networks or attention-based models, to estimate this probability distribution. In recent years, transformer-based language models (Vaswani et al., 2017; Devlin et al.,
2018; Radford et al., 2019) have become particularly popular due to their ability to scale to billions of parameters and achieve high performance on large datasets, as it has been proved by empirical results that show that the performance of transformer-based language models follows a power law relationship concerning model size, dataset size, and amount of compute used for training (Kaplan et al., 2020). This means that to see significant improvements in model performance, there needs to be an order-of-magnitude increase in at least one of these factors. For example, increasing the model size by a factor of 10 or increasing the dataset size by a factor of 10 could lead to a noticeable improvement in model performance. It is worth noting that using large models, large datasets, and high amounts of compute time are all essential for achieving high performance with language models. However, these large language models have also raised concerns about their potential to memorize and replicate sensitive information, such as personal information or long text sequences (Carlini et al., 2021; Brown et al., 2022). In our experiments, we considered three different auto-regressive large language models: GPT-Neo 125M, GPT-Neo 1.3B, and GPT-Neo 2.7B, which are pre-trained on English corpora.
### Memorization Definitions
In the context of memorization in large language models, we follow the definition proposed by (Lee et al., 2021), which introduced approximate memorization. Given a string S, if there exists a prompt P such that the model generates a continuation s given P, and s is within some chosen edit distance of the prompt's true continuation in the training set, then S is memorized. In our study, we choose the edit distance to be a similarity measure (BLEU), as proposed in (Ippolito et al., 2022), to be able to capture approximate memorization, not just "Eidetic memorization" (Carlini et al., 2021), as the definition of verbatim memorization fails to include more subtle forms of memorization (Ippolito et al., 2022). For example, if we have a sequence S: "My name is Alice green, I live at 187 bob street, and my phone number is 226284", and the model generation is: "My name is Alice green, I live at 187 queen street, and my phone number is 226284", then following the definition of "verbatim memorization" this sequence isn't memorized, but if we follow "approximate memorization" we can say that the model generation is similar to the true continuation by an 84.92% SacreBLEU score.
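As a concrete illustration of this check, the following is a minimal sketch (not from the paper), assuming the `sacrebleu` package and the 75% threshold discussed later in the paper; the function name is ours.

```python
# Hedged sketch of an approximate-memorization check via SacreBLEU similarity.
import sacrebleu

def approx_memorized(generated_suffix: str, true_suffix: str, threshold: float = 75.0) -> bool:
    # sentence_bleu returns a score in [0, 100]; higher means more similar to the true suffix.
    similarity = sacrebleu.sentence_bleu(generated_suffix, [true_suffix]).score
    return similarity >= threshold

gen = "My name is Alice green, I live at 187 queen street, and my phone number is 226284"
ref = "My name is Alice green, I live at 187 bob street, and my phone number is 226284"
print(approx_memorized(gen, ref))  # a near-duplicate generation like this is expected to exceed the threshold
```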
### Related Work
Recent studies (Ippolito et al., 2022) reveal that large language models may avoid filters that limit verbatim memorization by performing plausible "style transfers" to the prompt. They demonstrated this by proposing an inference time defense mechanism, "MemFREE decoding," to prevent memorization in large language models. This is accomplished by efficiently querying the training dataset and checking for the existence of any n-gram in the prefix combined with the generated suffix. Another study showed that duplicate instances in the training data increase the likelihood that a language model will memorize them. So data deduplication techniques (Kandpal et al., 2022) effectively remove these duplicates from the training data. However, it is essential to note that this approach only partially prevents the model from memorizing instances, as it is still possible to memorize sequences beyond duplicated instances. Moreover, Differential privacy (DP) is a widely-used technique for training models that do not memorize individual training examples. DP algorithms, such as the ones described in (Abadi et al., 2016), are considered the gold standard for protecting privacy in machine learning models. However, in practice, these techniques often result in worse performance than non-private models, as observed by (Anil et al., 2021). As a result, state-of-the-art language models often require a large amount of data and computational resources to train and are not typically trained using DP. Furthermore, DP algorithms are computationally expensive, with slower convergence and lower utility than non-private methods (Anil et al., 2021). This is especially concerning for language models, which are known to memorize large portions of their training data and may be more likely to memorize anomalous data points, presenting a privacy risk for the text's authors or subjects. One of the most challenging aspects of using DP on language data is identifying the boundaries of private information (Brown et al., 2022).
## 3 Data
We employed a subset of the Pile dataset, which was released as a benchmark for training data extraction attacks on large Language Models. Generally, the Pile dataset contains data from
various sources (e.g., books, Web scrapes, open source code). We used this version of the subset 1, designed to be easy to extract, to assess the performance of targeted attacks. The dataset contains only 15,000 samples since the full version is not released yet. Each sample consists of 200 tokens sampled randomly from the Pile training set. The topics included in the subset are code, news, logs, conversations-replies, copyrights, links, etc. Most of them are in the English language. The dataset is split into 13,500 samples for training and 1,500 samples for testing.
Footnote 1: [https://github.com/google-research/lm-extraction-benchmark](https://github.com/google-research/lm-extraction-benchmark)
**Training Data.** For the training phase, we use the third 50 tokens for each string as a prefix and the fourth 50 tokens as a suffix.
**Evaluation Data.** We employed various settings and datasets to ensure that the learned policy generalizes. First, we evaluated the fine-tuned model on the test set of the Pile-subset, again using the third 50 tokens of each string as a prefix and the fourth 50 tokens as a suffix. Second, in the longer context setting, we added an extra 100 tokens and combined them with the third 50 tokens in the sample; the first 100 tokens represent the pre-prefix, and the next 50 tokens are the prefix. Using a longer context in a language model can be considered a form of attack, as it gives the adversary access to more information. With this additional information, the adversary can extract new and sensitive information from the language model, compromising its security and integrity.
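For concreteness, the slicing described above might look like the following minimal sketch (not from the paper); we assume each sample is already a list of 200 token ids.

```python
# Hedged sketch of the prefix/suffix slicing for the standard and longer-context settings.
def split_sample(tokens, longer_context=False):
    prefix = tokens[100:150]        # third block of 50 tokens
    suffix = tokens[150:200]        # fourth block of 50 tokens
    if longer_context:
        pre_prefix = tokens[0:100]  # extra 100 tokens of context preceding the prefix
        return pre_prefix + prefix, suffix
    return prefix, suffix

# Example usage: context, target = split_sample(list(range(200)), longer_context=True)
```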
## 4 Method
Our approach begins with an autoregressive pretrained language model (GPT-Neo 125M, GPT-Neo 1.3B, and 2.7B), a dataset of 15,000 samples split into prefixes and suffixes, and a reward function (e.g., negative SacreBLEU), which will be discussed later. The following steps are followed: we feed the prefixes into the pretrained language model to obtain the generated suffixes; we pass the generated suffixes to the reward function, which computes the dissimilarity between the generated and true suffixes; finally, we use the reward function output as a scalar reward to fine-tune the language model, optimizing the reward with the PPO algorithm.
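The paper does not publish code, so the following is only a rough sketch of one way this loop could be realized, assuming the Hugging Face `trl` library for PPO and `sacrebleu` for the reward; the toy `pairs` list, the `reward_fn` helper, and the inline hyperparameter values are illustrative, and exact `trl` API details vary between versions.

```python
# Hedged sketch of the PPO fine-tuning loop described above (not the authors' code).
import torch
import sacrebleu
from transformers import AutoTokenizer
from trl import PPOConfig, PPOTrainer, AutoModelForCausalLMWithValueHead

model_name = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)      # policy with value head
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)  # frozen reference for the KL penalty
config = PPOConfig(batch_size=1, mini_batch_size=1, ppo_epochs=4, learning_rate=1.41e-5)  # paper uses batch size 32
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

def reward_fn(generated_suffix: str, true_suffix: str) -> torch.Tensor:
    # negative SacreBLEU: dissimilar generations receive a higher reward
    return torch.tensor(100.0 - sacrebleu.sentence_bleu(generated_suffix, [true_suffix]).score)

pairs = [("Alice green lives at", " 187 bob street")]  # toy (prefix, true suffix) pairs; real data is the Pile subset
for prefix, true_suffix in pairs:
    query = tokenizer(prefix, return_tensors="pt").input_ids[0]
    full = ppo_trainer.generate(query, max_new_tokens=50, do_sample=True,
                                pad_token_id=tokenizer.eos_token_id)
    response = full.squeeze()[len(query):]            # keep only the generated suffix tokens
    generated = tokenizer.decode(response, skip_special_tokens=True)
    ppo_trainer.step([query], [response], [reward_fn(generated, true_suffix)])
```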
### Training Environment
We employed the GENERATION AS A TOKEN-LEVEL MDP environment (Ramamurthy et al., 2022); the environment, in general, is an NLP task in which we have a supervised dataset consisting of prefixes and suffixes. Generating a response can be thought of as following a Markov Decision Process, where each step involves choosing a vocabulary word as an action based on the current input and previous actions. Each episode starts with a specific prompt and ends when a certain number of steps have been taken (in our case, the number of steps corresponds to the number of tokens that have been generated) or an end-of-sentence token is generated. The reward for an episode is based on how dissimilar the final state is to the target output (e.g., measured by an automated metric like negative SacreBLEU or BERTScore).
### Fine-tuning details
Using PPO, we fine-tuned the pretrained language model in our environment. The environment used is GENERATION AS A TOKEN-LEVEL MDP, similar to the bandit environment in that it presents a random prompt and expects a response. The only difference from the bandit environment is that we utilize a discount factor \(\gamma=0.95\) instead of \(\gamma=1\). The reason for selecting \(\gamma=0.95\) is that it gives more stability in training, since the rewards are calculated in a discounted fashion in the token-level MDP, which reduces the magnitude of the reward that is applied to tokens selected at the start (Ramamurthy et al., 2022). Given the prefix and generated suffix, it computes a reward based on the reward function of choice, and the episode is completed.
Figure 1: Illustration of Framework Pipeline which mitigates approximate memorization in language models
Furthermore, the KL penalty is applied per token using a reference model (the pretrained model before fine-tuning). This prevents the fine-tuned model from generating a suffix that deviates too much from the reference language model (e.g., generating white spaces). A value network \(V\) is included alongside the language modeling head to estimate the value function. The batch size is 32 for all models. We selected a specific number of epochs for each model, as the convergence rate for each model is different; by convergence we mean that the model-generated suffixes become significantly different from the original suffixes without a considerable loss in perplexity, i.e., the difference between the perplexity of the generated examples and the original examples becomes smaller, so we selected the number of epochs that balances these goals. We use three epochs with four PPO epochs per batch for GPT-Neo 125M, two epochs with four PPO epochs per batch for GPT-Neo 1.3B, and two epochs with two PPO epochs per batch for GPT-Neo 2.7B. The learning rate for all models was \(1.41\times 10^{-5}\). A KL beta of 0.2 and a clip range of 0.2 were selected.
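To make this reward shaping concrete, here is a small illustrative helper (not the authors' code): it applies the per-token KL penalty against the frozen reference model, adds the scalar dissimilarity reward at the final generated token, and discounts with \(\gamma=0.95\); the function and argument names are assumptions.

```python
# Hedged sketch of per-token KL-penalized, discounted reward shaping.
import torch

def shaped_returns(logprobs, ref_logprobs, final_reward, kl_beta=0.2, gamma=0.95):
    # logprobs / ref_logprobs: 1-D tensors of per-token log-probabilities under the
    # fine-tuned policy and the frozen reference model, respectively.
    per_token = -kl_beta * (logprobs - ref_logprobs)   # KL penalty keeps the policy near the reference LM
    per_token[-1] = per_token[-1] + final_reward       # dissimilarity reward added at the last token
    returns = torch.zeros_like(per_token)
    running = torch.tensor(0.0)
    for t in reversed(range(per_token.shape[0])):      # discounted return at each token position
        running = per_token[t] + gamma * running
        returns[t] = running
    return returns
```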
### Reward Functions
To effectively prevent the approximate memorization of training data and encourage learning a diverse set of language generation strategies, we must carefully design a suitable reward function for our pretrained language model. This reward function should enable the model to generate dissimilar suffixes that are semantically consistent, have the correct syntactic structure, and are related to the same topic as the prefix. Previous research has shown that the pretrained LM can bypass memorization checks by producing ALL-CAPITAL text (Ippolito et al., 2022), so our reward function must address this issue. We conducted experiments with three different reward functions based on the semantic similarity between the model's generated text and the training data. However, we observed that the model could learn a policy to maximize rewards in ways that do not align with the desired output, such as generating white space or repeating the same words multiple times through learning shortcuts (Geirhos et al., 2020). Our chosen reward function must address these potential pitfalls.
**Negative SacreBLEU.** is a popular metric for evaluating the quality of the machine-translated text, which is a modified version of the BLEU (Bilingual Evaluation Understudy) metric. Our approach uses negative SacreBLEU, calculated as the difference between 100 and the SacreBLEU score. Negative SacreBLEU has the advantage of being based on semantic similarity, making it suitable for learning policies that minimize memorization. However, it has the limitation of being restricted to a maximum n-gram length and not considering contextual meaning, which may limit the model's ability to learn diverse and robust policies. In the following sections, we will explore ways to address this limitation.
**Negative SacreBLEU + Perplexity.** To improve the fluency of the model's generated text, we experimented with combining the Negative SacreBLEU score with the perplexity metric. We found that as the dissimilarity score (measured using Negative SacreBLEU) increased, the fluency of the model (as measured by perplexity) decreased. Therefore, we incorporated perplexity as a penalty term in the overall reward function, which was calculated as the weighted sum of Negative SacreBLEU and the negative of the perplexity metric, with a weight of 0.5 applied to each. This combination of metrics allowed us to balance the goal of encouraging dissimilarity with the need to maintain fluency in the model's generated text.
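A sketch of this combined reward (not the authors' code), assuming equal weights of 0.5 and perplexity computed with a frozen copy of the base model via `transformers`; the exact scaling of the two terms is an assumption.

```python
# Hedged sketch of the weighted negative-SacreBLEU + perplexity reward.
import torch
import sacrebleu
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
ref_lm = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = ref_lm(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

def combined_reward(generated: str, true_suffix: str) -> float:
    neg_sacrebleu = 100.0 - sacrebleu.sentence_bleu(generated, [true_suffix]).score
    return 0.5 * neg_sacrebleu + 0.5 * (-perplexity(generated))  # perplexity acts as a fluency penalty
```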
**BERTScore.** is a model-based semantic metric for assessing the quality of text generated by a model. It works by embedding the text using a BERT-based model and calculating the cosine similarity between the two resulting embedding vectors. BERTScore outputs recall, precision, and F1 scores based on this similarity. In our experiments, we employ BERTScore computed by DeBERTa large model (He et al., 2021) to maximize the negative F1-score, which can be interpreted as minimizing the cosine similarity between the generated and true suffix. One advantage of using BERTScore is that it relies on contextualized embeddings, which allows it to capture dependencies and consider the contextual meaning of words in the sentence. This can enable the model to learn more diverse and contextually appropriate policies.
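A corresponding sketch for the BERTScore variant (not the authors' code), assuming the `bert_score` package; the paper computes the score with a DeBERTa-large model, which could be selected through the scorer's `model_type` argument, while this sketch uses the package default for English.

```python
# Hedged sketch of the negative BERTScore (F1) reward.
from bert_score import score

def bertscore_reward(generated: str, true_suffix: str) -> float:
    P, R, F1 = score([generated], [true_suffix], lang="en")
    return -F1.item()  # maximizing negative F1 pushes generations away from the true suffix
```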
## 5 Experiments
To thoroughly assess the effectiveness of our framework for preventing memorization in large language models, we conducted a series of experiments using different settings and evaluation metrics. We evaluated the model in standard settings and longer contexts. Our evaluations covered a range of factors, including dissimilarity measures, semantic consistency, and fluency. This comprehensive approach allowed us to analyze the model's performance in various scenarios and understand its capabilities and limitations.
### Experimental Setting
To evaluate the performance and flexibility of our framework, we conducted a series of experiments using different settings and test sets. These experiments included variations in the model size, reward function, and the number of epochs. This allowed us to analyze the impact of these factors on the model's performance and identify the optimal configuration for the task.
**Models.** To investigate the impact of model size on our framework and its feasibility for different sizes, we conducted experiments using three models with varying sizes: 125M, 1.3B, and 2.7B. These models are part of the GPT-Neo family, and the memorization problem was observed using these models Carlini et al. (2022). The fact that they have been subjected to various techniques for extracting or preventing memorization in the literature makes them a benchmark to study the memorization problem. An additional advantage of the GPT-Neo models is that their training set is known; they were pretrained on the publicly available Pile dataset, which consists of 825GB of data.
**Reward Functions.** To examine the effect of different reward functions on the learned policy in our proposed framework, we conducted experiments using three different reward functions: negative SacreBLEU, negative SacreBLEU combined with perplexity, and negative BERTScore. These reward functions were designed to address the issue of approximate memorization and encourage the generation of dissimilar yet semantically consistent output. They are based on the notion of semantic similarity, which ensures that the generated text is meaningfully related to the input. By comparing the performance of these reward functions, we aimed to identify the optimal choice for the given task.
**Number of Epochs.** The number of epochs is a key hyperparameter in our framework, as it determines the number of times the model is trained on the data. During each epoch, the learned policy can change, so it is important to carefully select the number of epochs in order to find the optimal policy. There is a tradeoff between the dissimilarity and fluency of the generated examples, which we will discuss later. Therefore, choosing the appropriate number of epochs is crucial for achieving the best balance between these competing objectives.
**Dataset Settings.** To evaluate the ability of the learned policy to generalize to different settings in the pretraining dataset, we conducted experiments using three different context lengths: the same training sequence length and an additional 100 tokens as a longer context. This allowed us to assess how the model's performance is affected by the length of the context and to identify any patterns or trends in its behavior. By comparing the results across these different context lengths, we aimed to understand the model's generalization capabilities better.
**Evaluation Metrics.** To assess the performance of our method, we evaluated two key aspects: the dissimilarity score and the quality of the generated suffixes. (1) Dissimilarity Score: We measured the dissimilarity between the generated suffix and the true suffix using the negative SacreBLEU score, as it is based on semantic similarity and can, therefore, effectively measure approximate memorization. (2) Quality of generated suffixes: To assess the text's fluency, we used the perplexity score of the applied model before applying the framework. This metric allowed us to evaluate the quality of the generated suffixes in terms of their grammatical correctness and overall coherence. We could comprehensively understand the model's performance by considering both the dissimilarity score and the quality of the generated suffixes.

| Model | Reward Function | Epoch | N-SacreBLEU | PPL \(\downarrow\) |
| --- | --- | --- | --- | --- |
| GPT-Neo 125M | N-SacreBLEU | 12 | 82.78 | 5.25 |
| GPT-Neo 125M | N-SacreBLEU + PPL | 12 | 78.32 | 4.83 |
| GPT-Neo 125M | **BERTScore** | 12 | **67.70** | **4.01** |
| GPT-Neo 1.3B | N-SacreBLEU + PPL | 8 | 72.10 | 3.652 |
| GPT-Neo 1.3B | **BERTScore** | 8 | **53.22** | **2.64** |
| GPT-Neo 2.7B | N-SacreBLEU + PPL | 4 | 71.47 | 6.105 |
| GPT-Neo 2.7B | **BERTScore** | 4 | 80.84 | 8.60 |

Table 1: The Results Of Different Reward Functions On The Standard Setting Test Set.
### Experimental Results
This section will present the results of using the proposed framework with various configurations on the selected dataset. The objective is to examine the effectiveness of the proposed approach under different scenarios and to identify any patterns or trends that may emerge.
**GPT-Neo 125M.** In our experiments, we found that the negative BERTScore reward function outperformed negative SacreBLEU and negative SacreBLEU combined with Perplexity in terms of achieving a balance between similarity and fluency when used with the GPT-Neo 125M model and a range of epochs from 4, 8, and 12. While negative SacreBLEU combined with the Perplexity achieved a higher dissimilarity score, it came at the cost of a decrease in the fluency of the generated examples as the perplexity score increased. The worst performance was seen with the negative SacreBLEU reward function, as it had a higher dissimilarity score and perplexity score than the other reward functions, as shown in Table 1. Upon further exploration, we found that 12 PPO epochs achieved the best balance between similarity and fluency for all reward functions. All of these experiments were evaluated on the standard test set, and the top three experiment settings were then evaluated on the longer context setting. Negative BERTScore with 12 PPO epochs achieved the best results in this setting. As expected, the longer context increased the memorization score. However, the dissimilarity score also significantly improved, going from 45.74% before the framework was applied to 55.04% with the negative SacreBLEU score. As demonstrated in Table 2, the standard setting and the longer context setting results show that the framework is robust in improving the language model's performance. In the standard setting, the difference between negative SacreBLEU before and after applying the framework is 8.07, indicating a significant improvement. Even in the longer context setting, where the model has access to more information, the difference between negative SacreBLEU before and after applying the framework is 9.3, which is still an improvement. This demonstrates that the framework is able to maintain its effectiveness even when the model is presented with more information, which can be a challenge for many models as it increases the risk of memorization. Moreover, using the longer context positively impacted the generated suffixes as the fluency was enhanced.
**GPT-Neo 1.3B & GPT-Neo 2.7B.** We conducted the same experiments with various settings on the GPT-Neo 1.3B and GPT-Neo 2.7B models and found that the same settings performed best, except for the number of epochs. The GPT-Neo 1.3B model converged at two epochs with four PPO epochs per batch, while the GPT-Neo 2.7B model converged at two epochs with two PPO epochs per batch. This suggests that larger models converge faster. We made the same observation regarding longer context, which improved the fluency of the generated suffixes and increased the dissimilarity score from 9.11% to 36.16% and from 10.61% to 44.62% negative SacreBLEU in GPT-Neo 1.3B and GPT-Neo 2.7B, respectively. In these models the improvement was larger, which shows that increasing the model size leads to a larger gain in the dissimilarity score, as shown in Figure 2, and, in the longer context setting, this comes without a significant loss in perplexity, as the perplexity score only increased from 1.55 to 1.71 and from 1.414 to 1.823 in GPT-Neo 1.3B
and GPT-Neo 2.7B, respectively, after applying the framework, as shown in Table 2.

Figure 2: Comparing The Mean SacreBLEU Score Before (blue) & After Applying the Framework (orange) Across All Model Sizes

Figure 3: Displaying The Negative SacreBLEU Distribution Of GPT-Neo 2.7B On Standard Setting Before (blue) & After (orange) Applying The Framework
## 6 Discussion
**Learned Policy.** We monitored the model's performance as we trained it over different epoch ranges. In an attempt to increase the dissimilarity between the generated and true suffixes, the model initially employed a policy of outputting the same suffix but with different cases (e.g., all uppercase or all lowercase). However, this approach was not useful for our objective and did not reduce similarity. Over epochs, the model improved its learned policy to achieve a better reward. Initially, it tried to minimize the similarity between the target and generated suffixes by altering individual words or numbers. However, this approach could have been more effective in reducing similarity. As training continued, the model began to modify multiple words in the generated suffix in an attempt to increase dissimilarity. Eventually, it started rephrasing or replacing entire phrases with semantically similar alternatives in order to achieve the desired dissimilarity score. It's worth noting that while the dissimilarity score improved with the number of epochs, the perplexity score also increased, indicating a trade-off between the two objectives. Additionally, training the model for too many epochs caused it to generate suffixes that were semantically dissimilar to the true suffix and, in some cases, generated meaningless tokens like question marks or repeated the same word multiple times.
**Evaluating Approximate Memorization.** According to recent research, when requiring a binary label, whether approximate memorization occurred or not, the BLEU score of 75% for the generated suffix is the suitable threshold. This threshold was determined by examining a large number of generated examples (Ippolito et al., 2022). However, in our own investigation, we found that this issue is mitigated even when the threshold is as low as 50% after applying the framework. Despite this discovery, we chose to use the more widely accepted threshold of 75% in order to demonstrate the effectiveness of our framework. After implementing the framework with GPT-Neo 1.3B and GPT-Neo 2.7B, the number of generated examples that exhibited approximate memorization decreased from 910 to 497 and from 1036 to 321, respectively, as shown in Figure 4, Figure 5, and Figure 6. Figure 3 shows the negative SacreBLEU scores of GPT-Neo 2.7B on standard setting for both the generated and true suffixes in the form of a box plot. The median negative SacreBLEU score for the generated suffixes is 72.06%, which means that approximately 50% of the data has a dissimilarity score higher than 72.06%. On the other hand, the median negative SacreBLEU score for the true suffixes is 8.70%, indicating that approximately 50% of the data has a dissimilarity score lower than 8.70%. This comparison demonstrates the effectiveness of our framework in reducing the occurrence of approximate memorization.
## 7 Conclusion
In this paper, we propose a novel framework for addressing the issue of large language models memorizing training data. Our approach is demonstrated to be effective through a series of evaluations conducted in various settings. We show that our framework is able to effectively reduce approximate memorization by significantly decreasing the SacreBLEU score while maintaining the fluency and coherence of generated samples. Additionally, our framework shows robustness when using a longer context, which can be seen as a form of attack. While many studies have established a correlation between increasing the size of language models and their tendency to memorize more training data, finetuning these larger models can also be computationally costly. Our proposed framework offers a new approach to this problem by demonstrating that as the size of the language model increases, the convergence rate and dissimilarity score also increase. These improvements effectively mitigate the issue of increased memorization while reducing the computational costs associated with finetuning larger models.

| Model | Settings | Before N-SacreBLEU | Before PPL \(\downarrow\) | After N-SacreBLEU | After PPL \(\downarrow\) | \(\Delta_{N-SB}\) | Epoch |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-Neo 125M | Standard | 50.63 | 3.85 | 67.70 | 4.01 | 8.07 | 12 |
| GPT-Neo 125M | LC\({}^{\ddagger}\) | 45.74 | 4.12 | 55.04 | 4.15 | 9.32 | 12 |
| GPT-Neo 1.3B | Standard | 34.16 | 2.18 | 53.22 | 2.64 | 9.00 | 8 |
| GPT-Neo 1.3B | LC | 9.11 | 1.58 | 36.16 | 1.71 | 27.08 | 8 |
| GPT-Neo 2.7B | Standard | 27.18 | 1.92 | 61.79 | 4.46 | 9.00 | 4 |
| GPT-Neo 2.7B | LC | 10.61 | 1.44 | 44.62 | 1.82 | 9.00 | 4 |

Table 2: Comparison Of Negative SacreBLEU & Perplexity Means Before & After Applying The Framework. \({}^{\ddagger}\) Refers to the Longer Context Setting. \(\Delta_{N-SB}\) Is the Difference Between Negative SacreBLEU Means Before & After Applying The Framework.
## Limitations
One of the limitations of our work is that it relies on a single scalar reward for optimization, as the problem has dual objectives: dissimilarity and perplexity. To overcome this limitation, we suggest exploring other techniques, such as Multi-objective Reinforcement Learning, which can potentially enhance performance and optimize both objectives simultaneously. Additionally, the dataset used in our work lacks metadata and labels, which can be useful for further analysis of the model's performance on different types of text, such as personal information, copyrights, and news. Using such metadata and labels can help to understand the model's performance on different classes of text and make necessary adjustments to improve the model's performance.
## Ethics Statement
Improving the large language model to be privacy-preserving is crucial since the language models have become more prominent and involved in many applications in multi-aspect of life. Ensuring the data privacy of those models is vital since some adversary may be able to reach that information. To make those models widely used, we have to guarantee they cannot emit private data. In this paper, we hope our work will serve as a foundation for developing new and innovative solutions to the problem of approximate memorization in large language models since verbatim memorization can give a false sense of privacy, as earlier work suggested. Our proposed framework provides a promising approach to addressing this issue. Further research and experimentation in this area can lead to even more effective methods for reducing memorization in these models. Our work also highlights the importance of considering both the computational cost and the performance trade-off when developing new techniques for addressing memorization in large language models.
|
2304.01679
|
Lateral transport of domains in anionic lipid bilayer membranes under DC
electric fields: A coarse-grained molecular dynamics study
|
Dynamic lateral transport of lipids, proteins, and self-assembled structures
in biomembranes plays crucial roles in diverse cellular processes. In this
study, we perform a coarse-grained molecular dynamics simulation on a vesicle
composed of a binary mixture of neutral and anionic lipids to investigate the
lateral transport of individual lipid molecules and the self-assembled lipid
domains upon an applied direct current (DC) electric field. Under the potential
force of the electric field, a phase-separated domain rich in the anionic
lipids is trapped in the opposite direction of the electric field. The
subsequent reversal of the electric field induces the unidirectional domain
motion. During the domain motion, the domain size remains constant, but a
considerable amount of the anionic lipids is exchanged between the
anionic-lipid-rich domain and the surrounding bulk. While the speed of the
domain motion (collective lipid motion) shows a significant positive
correlation with the electric field strength, the exchange of anionic lipids
between the domain and bulk (individual lipid motion) exhibits no clear
correlation with the field strength. The mean velocity field of the lipids
surrounding the domain displays a two-dimensional (2D) source dipole. We
revealed that the balance between the potential force of the applied electric
field and the quasi-2D hydrodynamic frictional force well explains the
dependence of the domain motions on the electric-field strengths. The present
results provide insight into the hierarchical dynamic responses of
self-assembled lipid domains to the applied electric field and contribute to
controlling the lateral transportation of lipids and membrane inclusions.
|
Hiroaki Ito, Naofumi Shimokawa, Yuji Higuchi
|
2023-04-04T10:07:34Z
|
http://arxiv.org/abs/2304.01679v2
|
Lateral transport of domains in anionic lipid bilayer membranes under DC electric fields: A coarse-grained molecular dynamics study
###### Abstract
Dynamic lateral transport of lipids, proteins, and self-assembled structures in biomembranes plays crucial roles in diverse cellular processes. In this study, we perform a coarse-grained molecular dynamics simulation on a vesicle composed of a binary mixture of neutral and anionic lipids to investigate the lateral transport of individual lipid molecules as well as the self-assembled lipid domains themselves upon an applied direct current (DC) electric field. Under the potential force of the electric field, a phase-separated domain rich in the anionic lipids emerges in the opposite direction of that of the electric field. The subsequent reversal of the electric field induces the unidirectional domain motion. During the domain motion, the domain size remains constant, but a considerable amount of the anionic lipids are exchanged between the anionic-lipid-rich domain and the neutral-lipid-rich bulk. The mean velocity field of the lipids surrounding the domain displays a two-dimensional (2D) source dipole. We revealed that the balance between the potential force of the applied electric field and the quasi-2D hydrodynamic frictional force well explains the dependence of the domain motions on the electric-field strengths. The present results provide insight into the hierarchical dynamic responses of self-assembled lipid domains to the applied electric field and contribute to controlling the lateral transportation of membrane inclusions.
## I Introduction
Dynamic lateral arrangement of lipid molecules and membrane proteins in cell membranes plays crucial roles in various cellular processes such as signal transduction, membrane trafficking, and energy conversion. The cellular processes associated with cell membranes are believed to be facilitated by forming small functional domains called lipid rafts, in which specific lipid molecules and membrane proteins are dynamically self-assembled[1; 2]. The transiently formed small rafts can further coalesce into a large cluster[3]. In this process, small domains are laterally transported in the bilayer membrane to contact each other at a distance short enough to attract by lipid-lipid and protein-protein interactions. As a pioneering demonstration in plasma membranes, clustering of the raft ganglioside GM1 by the cross-linking mediated by cholera toxin has been observed at a physiological temperature[4]. For a deeper understanding of physical principles underlying the membrane-associated transport phenomena and potential applications of these cellular processes, it is necessary to investigate the mechanism of the lateral transport of individual molecules as well as the transport of the self-assembled lipid domains themselves.
Artificial lipid bilayer systems such as giant unilamellar vesicles (GUVs) and supported lipid bilayer (SLB) membranes are suitable platforms for studying the fundamental mechanism of domain formation [5; 6]. The lateral transport phenomena of lipids[7; 8] and raft-like macroscopic domains[9; 10; 11; 12] in lipid bilayer membranes have been studied using such systems, and the influences of lipid species, aqueous solutions, coupling between the membrane and solutions, etc., on the lateral diffusivity have been discussed. As one of the fundamental attempts to quantify the lateral transport by the mobility of the membrane inclusions, electrical manipulation of charged lipids and proteins in lipid bilayer membranes by applying a tangential direct current (DC) electric field has been developed. The technique was first adopted to observe the redistribution of a charged complex, a membrane receptor with a charged ligand concanavalin A, in the cell membrane[13; 14] and recently to demonstrate the lateral migration of lipid rafts and orienting the cell migration[15]. The underlying mechanism of the lateral motion of membrane inclusions was then extensively studied using the SLB membranes[16; 17]. In most situations of lipid bilayer membranes floating in a three-dimensional (3D) solvent with counterions and salts, the charged membrane inclusions are driven by electrophoresis and electroosmosis[16]. These two effects are typically competing, and the dominant factor depends on the geometry of the inclusions and the concentration of electrolytes in the solvent. The two-dimensional (2D) movement of the charged membrane inclusions can be well-described by the advection-diffusion equation and results in the concentration gradient with an exponential profile in a steady state[17], indicating that the individual inclusions are independently dragged by the electric
field under the thermal fluctuation. The resultant redistribution of the charged species in the SLB membranes has been utilized for the measurement of diffusivity or charge of the inclusions[16; 17; 18] and separation of membrane proteins[19].
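For example, in one dimension, setting the lateral flux to zero in a closed membrane,
\[J\ =\ v\,c-D\,\frac{\partial c}{\partial x}\ =\ 0\quad\Longrightarrow\quad c(x)\ \propto\ \exp\!\left(\frac{v\,x}{D}\right),\]
gives the exponential steady-state profile mentioned above, where \(c\) is the concentration of the charged inclusions, \(v\) the field-induced drift velocity, and \(D\) the lateral diffusion coefficient.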
Although the mechanism and applications of the electric-field-induced concentration gradient of charged membrane inclusions have been intensively studied, the response of the self-assembled domains to the electric field is not fully understood due to the complexity of the self-assembled hierarchical structure with long-range electrostatic interaction. During the past decade, phase separation of anionic phospholipids in GUVs has attracted increasing attention[20; 21; 22; 23] because the cell membranes[24] and organelle membranes of lysosomes, mitochondria, etc., contain anionic phospholipids. In this context, a manipulation technique of the phase-separated charged domains on a GUV by applying an external DC electric field has been proposed[25]. In this demonstrative experiment, the electrophoresis dominated the domain dynamics; the charged domains are dragged by a DC electric field and oriented to the direction of the electric field within seconds. Toward a deeper understanding of charge-regulated hierarchical structure formation and applications of emergent functions, it is necessary to elucidate multiscale dynamics ranging from microscopic molecules to mesoscopic domains and the dependence of the mobility on the strengths of electric fields.
For revealing the dynamics of lipid molecules and self-assembled domains at the molecular level, molecular simulations are helpful[26; 27; 28]. In particular, the highly coarse-grained model allows us to reproduce macroscopic phase separation for neutral lipid vesicles[29] and anionic lipid vesicles with the interaction potential based on the Debye-Huckel theory[30; 31; 32]. In this paper, we report domain responses to external electric fields by coarse-grained molecular dynamics (MD) simulation. The details of the simulation are explained in Sec. II. In the present model, we considered the electrostatic repulsive interaction between anionic head groups of lipids with the Debye-Huckel approximation and the potential force of homogeneous DC electric fields. The results of vesicle dynamics are shown in Sec. III. We observed domain formation under the applied electric field and checked the domain response upon the reversal of the direction of the electric fields with various strengths. From statistics of lipid species and analysis of the mean velocity field around the domain, we analyzed the microscopic and mesoscopic dynamics of each lipid species during the domain motion. In Sec. IV, the domain motion and its field-strength dependence are discussed as hydrodynamic drag problems for a quantitative understanding of orienting the charged domain under an externally applied electric field.
## II Methods
In our coarse-grained MD simulation, a single lipid molecule is represented by three beads: linearly connected one hydrophilic bead and two hydrophobic beads, which correspond to the lipid head group and hydrocarbon chains, respectively. The excluded volume interaction between two beads separated by a distance \(r\) is
\[V_{\rm ex}(r)=\left\{\begin{array}{ll}4v\left[\left(\frac{b}{r}\right)^{12} -\left(\frac{b}{r}\right)^{6}+\frac{1}{4}\right],&r\leq r_{\rm c},\\ 0,&r>r_{\rm c},\end{array}\right. \tag{1}\]
where \(r_{\rm c}=2^{1/6}b\). \(v=k_{\rm B}T\) is the unit of energy, where \(k_{\rm B}\) and \(T\) are the Boltzmann constant and absolute temperature, respectively. For the bilayer stability, we chose the parameter \(b\) for three combinations of the bead species as \(b_{\rm head,head}=b_{\rm head,tail}=0.95\sigma\) and \(b_{\rm tail,tail}=\sigma\), where \(\sigma=7.09\) A is the typical cross-sectional diameter of a single lipid molecule as the unit of length. The potentials for the stretching and bending of a bond between two connected beads are
\[V_{\rm stretch}(r)=\frac{1}{2}k_{\rm stretch}(r-\sigma)^{2} \tag{2}\]
and
\[V_{\rm bend}(\theta)=\frac{1}{2}k_{\rm bend}(1-\cos\theta)^{2}, \tag{3}\]
where \(k_{\rm stretch}=500v/\sigma^{2}\) and \(k_{\rm bend}=60v\) are the bonding strength of the connected beads and the bending stiffness of a lipid molecule, respectively. Here, \(0\leq\theta\leq\pi\) is the angle between two adjacent bonds. The attractive hydrophobic interaction between hydrophobic beads is
\[V_{\rm attr}(r)=\left\{\begin{array}{ll}-v,&r<r_{\rm c},\\ -v\cos^{2}\left[\frac{\pi(r-r_{\rm c})}{2w_{\rm c}}\right],&r_{\rm c}\leq r \leq r_{\rm c}+w_{\rm c},\\ 0,&r>r_{\rm c}+w_{\rm c},\end{array}\right. \tag{4}\]
where \(w_{\rm c}\) is the phenomenological cutoff length of the attractive interaction. The lipid membranes are in "gel" or "liquid" phases depending on \(w_{\rm c}\). In this study, we adopted \(w_{\rm c}/\sigma=1.7\) for the neutral-neutral pairs and the anionic-anionic pairs and \(w_{\rm c}/\sigma=1.5\) for the neutral-anionic pairs to induce the phase separation with the binary lipid mixture. Note that \(w_{\rm c}/\sigma=1.7\) and \(1.5\) respectively correspond to "gel" and "liquid" phases, and the "gel" phase with \(w_{\rm c}=1.7\) is near the boundary of the "gel" and "liquid" phases in the parameter space[29; 31]. Thus, the lipid molecules in the membrane are still mobile, and the phase-separated domains become circular by the interfacial energy, which is characteristic of a so-called liquid-disordered phase. To represent the anionic lipids, we considered the electrostatic repulsive interaction between anionic head groups. The repulsive electrostatic interaction is described as the Debye-Huckel potential
\[V_{\rm rep}(r)=v\ell_{\rm B}q_{1}q_{2}\frac{\exp(-r/\ell_{\rm D})}{r}, \tag{5}\]
where \(\ell_{\rm B}=\sigma\) is the Bjerrum length, \(q_{1}\) and \(q_{2}\) are the valencies of the interacting charged head groups, and \(\ell_{\rm D}=\sigma\sqrt{\epsilon k_{\rm B}T/n_{0}e^{2}}\) is the Debye screening length. \(n_{0}\), \(\epsilon\), and \(e\) are the bulk salt concentration, the dielectric constant of the solution, and the elementary charge, respectively. We set \(n_{0}=100\,\)mM and \(q_{1}=q_{2}=-1\), which represent a typical condition for monovalent anionic lipids in an aqueous solution of a physiological monovalent salt concentration. We did not set any cutoff for the screened electrostatic repulsion. We imposed the static direct current (DC) electric field \(\mathbf{E}=Ev\mathbf{e}_{z}\) along the \(z\)-axis in the Cartesian coordinates, where \(E\) and \(\mathbf{e}_{z}\) represent the strength of the DC electric field and the unit vector in the \(z\)-direction, respectively. The potential of the DC electric field is, therefore,
\[V_{\rm EF}(r)=-q_{i}Evz. \tag{6}\]
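The following minimal Python sketch collects the interaction potentials of Eqs. (1), (4), (5) and (6) in reduced units (\(\sigma=v=1\)); it only illustrates how the model is assembled and is not the code used for the simulations. The function names, the vectorization and the placeholder value of \(\ell_{\rm D}\) are our own choices.

```python
import numpy as np

SIGMA = 1.0           # unit of length (sigma)
V = 1.0               # unit of energy (k_B T)
ELL_B = 1.0 * SIGMA   # Bjerrum length
ELL_D = 1.35 * SIGMA  # placeholder Debye length; to be computed from n_0 as in the text

def v_excluded(r, b):
    """Truncated and shifted Lennard-Jones repulsion, Eq. (1)."""
    rc = 2.0 ** (1.0 / 6.0) * b
    u = 4.0 * V * ((b / r) ** 12 - (b / r) ** 6 + 0.25)
    return np.where(r <= rc, u, 0.0)

def v_attractive(r, wc):
    """Hydrophobic tail-tail attraction with cutoff width wc, Eq. (4)."""
    rc = 2.0 ** (1.0 / 6.0) * SIGMA
    mid = -V * np.cos(np.pi * (r - rc) / (2.0 * wc)) ** 2
    return np.where(r < rc, -V, np.where(r <= rc + wc, mid, 0.0))

def v_debye_huckel(r, q1=-1.0, q2=-1.0):
    """Screened electrostatic repulsion between anionic head groups, Eq. (5)."""
    return V * ELL_B * q1 * q2 * np.exp(-r / ELL_D) / r

def v_field(z, q, E):
    """Potential of the homogeneous DC field along z, Eq. (6)."""
    return -q * E * V * z
```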
The position of the \(i\)-th bead \(\mathbf{r}_{i}\) obeys the stochastic dynamics described by the Langevin equation
\[m\frac{{\rm d}^{2}\mathbf{r}_{i}}{{\rm d}t^{2}}=-\eta\frac{{\rm d}\mathbf{r}_{i}}{{\rm d }t}+\mathbf{f}_{i}^{V}+\mathbf{\xi}_{i}, \tag{7}\]
where \(m=1\) and \(\eta=1\) are the mass and the drag coefficient, respectively. The total potential force \(\mathbf{f}_{i}^{V}\) is calculated as the negative gradient, with respect to \(\mathbf{r}_{i}\), of the sum of the interaction potentials described in Eqs. (1)-(6). The Brownian force \(\mathbf{\xi}_{i}\) satisfies the fluctuation-dissipation theorem
\[\langle\mathbf{\xi}_{i}(t)\mathbf{\xi}_{j}(t^{\prime})\rangle=6v\eta\delta_{ij}\delta( t-t^{\prime}), \tag{8}\]
where \(\delta_{ij}\) is the Kronecker delta and \(\delta(\cdot)\) is the Dirac delta. The time increment for solving the discretized equation is set at \({\rm d}t=7.5\times 10^{-3}\tau\), where \(\tau=\eta\sigma^{2}/v\) is the unit of time.
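As a rough guide to how Eqs. (7) and (8) are advanced in time, here is a minimal Euler-type integration step in Python with \(m=\eta=1\) and \({\rm d}t=7.5\times10^{-3}\tau\); the splitting scheme, the array layout and the usage below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

KBT = 1.0    # v = k_B T, unit of energy
ETA = 1.0    # drag coefficient
MASS = 1.0   # bead mass
DT = 7.5e-3  # time step in units of tau = eta * sigma**2 / v

def langevin_step(pos, vel, forces, rng):
    """One explicit Euler step of Eq. (7); pos, vel, forces have shape (n_beads, 3)."""
    # Discretized Brownian force: variance 2*v*eta/dt per Cartesian component
    # (6*v*eta in total), consistent with the fluctuation-dissipation theorem, Eq. (8).
    noise = rng.normal(0.0, np.sqrt(2.0 * KBT * ETA / DT), size=pos.shape)
    accel = (-ETA * vel + forces + noise) / MASS
    vel = vel + accel * DT
    pos = pos + vel * DT
    return pos, vel

# Hypothetical usage with dummy data (5000 lipids x 3 beads each):
rng = np.random.default_rng(0)
pos = rng.normal(size=(15000, 3))
vel = np.zeros_like(pos)
forces = np.zeros_like(pos)  # would be the negative gradients of Eqs. (1)-(6)
pos, vel = langevin_step(pos, vel, forces, rng)
```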
In this study, we calculated the dynamics of bilayer vesicles composed of a binary mixture of anionic lipids and electrically neutral lipids. The spherical bilayer vesicle consists of 500 anionic lipids and 4500 neutral lipids, thus 5000 lipid molecules in total. At the initial state, the anionic and neutral lipids are homogeneously mixed in the vesicle. We first calculated the dynamics to form a phase-separated domain, subsequently reversed the direction of the DC electric field, and additionally calculated to observe the domain response. We adopted sufficiently long durations both for the domain formation \(t_{\rm df}=10.0\times 7500\tau\) and for the domain response after the reversal of the DC electric field \(t=10.0\times 7500\tau\). In the following, the time is represented with the unit of \(7500\tau\) for simplicity. Calculations were performed five times for each strength of the DC electric field to ensure reproducibility.
## III Results
First, we checked the effect of the DC electric field on the domain formation. Figure 1(a) shows the time course of the domain formation of anionic lipids with no electric field, i.e., \(E=0.0\), from \(t_{\rm df}=0.0\) to \(t_{\rm df}=10.0\) as the reference behavior. The anionic lipids rapidly assembled into a domain, and the position of the domain fluctuated with time. Figure 1(b) shows the time course of the domain formation under a DC electric field \(E=-0.01\). Since the anionic lipids experienced the potential force in the opposite direction of that of the electric field, they assembled toward the "top" of the vesicle (positive \(z\)-direction). Once the domain formed at the top, the domain position was fixed. The potential force further pulled the domain, and the vesicle slightly elongated and moved along the \(z\)-direction.
Figure 2: (a) Typical snapshots of the domain dynamics after the reversal of the direction of the electric field \(\mathbf{E}\) with the strength \(E=0.01\). The vesicle at \(t_{\rm df}=10.0\), shown in Fig. 1, is used as the initial configuration, and this moment is newly set as \(t=0.0\). Calculation time \(t\) is represented with the unit of \(7500\tau\). (b–d) Position of the CM of the vesicle (black) and that of the anionic lipid domain (red) in Cartesian coordinates \((x,y,z)\). Shaded regions represent the corresponding standard deviations in five trials.
Figure 1: (a) Typical snapshots of domain formation of anionic lipids (red) for \(E=0.0\). Calculation time for the domain formation \(t_{\rm df}\) is represented with the unit of \(7500\tau\). (b) Typical snapshots of domain formation of anionic lipids for \(E=-0.01\).
Afterward, we reversed the direction of the electric field. Figure 2(a) shows the time course of the domain dynamics after the reversal of the direction of the electric field. Here, we set the initial configuration at \(t=0.0\) to the vesicle at \(t_{\rm df}=10.0\), shown in Fig. 1, and reversed the sign of the electric field from \(E=-0.01\) to \(E=0.01\). The anionic lipid domain started to move along the meridian of the vesicle and finally reached the bottom of the vesicle by \(t=10.0\). Figures 2(b)-(d) show the position of the center of mass (CM) of the vesicle and that of the anionic lipid domain in Cartesian coordinates \((x,y,z)\). While both the vesicle and the domain fluctuated but hardly moved in the \((x,y)\) plane, they moved along the \(z\)-direction; the vesicle and the domain moved in the direction opposite to the electric field. Since the domain orientation was reversed during the motion, the domain position overtook the vesicle CM at around \(t\simeq 3.0\), as shown in Fig. 2(d).
Figure 3 shows the dependence of the domain motion on the strengths of the reversed electric field \(\mathbf{E}\). Here, we set the Cartesian coordinates \((x,y,z)\) in which the origin coincides with the vesicle CM and measured the orientation of the anionic lipid domain by the polar angle \(\theta_{\rm d}\), as illustrated in Fig. 3(a). Figure 3(b) shows the time developments of the orientation of the domain for \(E=0.005,0.01,0.02\), and \(0.03\). At \(t=0\), the polar angle \(\theta_{\rm d}\) has a small but finite value due to the thermal fluctuations of the domain. For the same reason, at sufficiently large \(t\), \(\theta_{\rm d}\) converged to a value slightly smaller than \(\pi\). The increasing rate of \(\theta_{\rm d}\) depends on the strength of the electric field \(E\); the larger \(E\), the faster the rate. Note that, under the weaker electric fields \(E\leq 0.001\), the position of the anionic lipid domain randomly fluctuated, and under the stronger electric fields \(E\geq 0.035\), the domain was pulled out of the vesicle due to the strong potential force.
To check the detailed molecular dynamics during the motion of the domain under the reversed DC electric field, we evaluated the motility of individual lipid molecules by the deviation within the membrane for \(E=0.01\). Figure 4(a) shows the schematic of the angle deviation \(\Delta\theta(t)\) of a lipid molecule, defined as the difference in the orientations of a lipid molecule observed from the vesicle CM at \(t=0\) and \(t=t^{\prime}\). We calculated the mean angle deviation \(\langle\Delta\theta(t)\rangle\) for anionic and neutral lipids as shown in Fig. 4(b), where \(\langle\cdot\rangle\) denotes the average over all the anionic or neutral lipids in five vesicles. For neutral lipids, the mean angle deviation increased from \(0\) to \(\pi/2\) during the motion of the anionic lipid domain, indicating that the neutral lipids randomly moved over the vesicle. The mean angle deviation of the anionic lipids increased from \(0\) to \(5\pi/8\), which is larger than \(\pi/2\), indicating the net directional transport of the anionic lipids. While the net directional transport should be attributed to the motion of the anionic lipid domain after the reversal of the DC electric field, the obtained mean angle deviation \(\langle\Delta\theta(t=10)\rangle\simeq 5\pi/8\) is smaller than the apparent angle deviation of the domain rotation by \(\simeq~{}\pi\), as observed in Figs. 2(a) and 3(b). To confirm further details of the dynamics of the anionic lipids, we counted the total number of anionic lipids in
Figure 4: (a) Schematic of the mean angle deviation \(\Delta\theta(t)\) of a lipid molecule measured from the reversal of \(\mathbf{E}\) at \(t=0\). (b) Mean angle deviation \(\langle\Delta\theta(t)\rangle\) of anionic (red) and neutral (yellow) lipids for \(E=0.01\). Shaded regions represent the corresponding standard deviations in terms of lipid molecules. Calculation time \(t\) is represented with the unit of \(7500\tau\). (c) Mean number of lipids \(N\) for \(E=0.01\). Anionic lipids in the vesicle (red), anionic lipids in the domain (dark red), anionic lipids that entered the domain more than once during the domain motion (green), anionic lipids that left the domain (blue), and anionic lipids that entered or left the domain (black). (d) Rate of the decreasing numbers of lipids d\(N\)/dt plotted for various \(E\). Red: anionic lipids in the vesicle, yellow: neutral lipids, and dark red: anionic lipids in the domain.
Figure 3: (a) Definition of the domain orientation \(\theta_{\rm d}\). (b) Domain orientation \(\theta_{\rm d}\) after the reversal (\(t=0\)) of the direction of the electric field \(\mathbf{E}\) with various strengths \(E=0.005\), \(0.01\), \(0.02\), and \(0.03\). Calculation time \(t\) is represented with the unit of \(7500\tau\). Shaded regions represent the corresponding standard deviations in five trials.
the vesicle and that of the anionic lipids in the domain (Fig. 4(c)). Although the total number of anionic lipids in the vesicle slightly decreased throughout the dynamics due to gradual dropouts from the vesicle, the number of anionic lipids in the domain, and thus the size of the domain, remained almost constant. We also found such a tendency in the decreasing rate \(\mathrm{d}N/\mathrm{d}t\), where \(N\) is the number of the lipid molecules, for various \(E\), as shown in Fig. 4(d). The decreasing rate for the total anionic lipids was negative, and the rates for the domain-forming anionic lipids and neutral lipids were almost 0, i.e., the numbers of these lipids were almost constants. Interestingly, the domain-forming anionic lipids were not confined in the domain during the domain motion. We also counted the number of anionic lipids that entered or left the domain more than once during the domain motion in each time duration \(t\). The numbers of the entered- and left-lipids were almost the same at each \(t\), as expected from the almost constant domain size. The sum of the anionic lipids that entered and left the domain in \(t=10\) was more than 70% of the domain-forming anionic lipids, as shown in Fig. 4(c). Given that roughly 75% of the domain-forming anionic lipids randomly fluctuated by crossing the domain boundary and the remaining 25% of the domain-forming anionic lipids were confined in the domain, this rough estimation reproduces the mean angle deviation \(\left\langle\Delta\theta(t=10)\right\rangle\simeq 5\pi/8\), which is consistent with the result shown in Fig. 4(b). Therefore, the anionic lipid domain unidirectionally "swims" to the opposite direction of the DC electric field in the surrounding disordered neutral lipids by keeping its size but exchanging a considerable amount of the domain-forming anionic lipids across the domain boundary.
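For reference, the rough estimate quoted above is simply the weighted average of the two populations, assuming the exchanged lipids behave like the neutral ones (mean deviation \(\pi/2\)) and the confined ones follow the full domain rotation by \(\simeq\pi\):

\[\left\langle\Delta\theta(t=10)\right\rangle\approx 0.75\times\frac{\pi}{2}+0.25\times\pi=\frac{5\pi}{8}.\]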
To capture the mesoscopic behavior of the domain motion, next, we analyzed the mean velocity field of the lipids inside and around the domain. We set the domain-frame right-handed Cartesian coordinates \((X,Y,Z)\) in which the origin and the direction of the \(Y\)-axis coincide with the instantaneous domain CM and the direction of the domain motion along the corresponding meridian, respectively (Fig. 5(a)). The basis vectors \(\mathbf{e}_{Y}\) and \(\mathbf{e}_{Z}\) are chosen as \(\mathbf{e}_{\theta}\) and \(\mathbf{e}_{r}\) in the standard spherical coordinates, respectively, and \(\mathbf{e}_{X}=\mathbf{e}_{Y}\times\mathbf{e}_{Z}\). Figures 5(b-d) show the color map of the domain-frame velocity component inside the domain in the \(X\)-direction, that in the \(Y\)-direction, and the corresponding domain-frame vector field, respectively, for \(E=0.03\). During the domain motion toward the positive \(Y\)-direction, the domain-forming anionic lipids exhibited a "convection-roll"-like motion. However, such a structure inside the domain was not clear due to the 75% exchange of the domain-forming anionic lipids (Fig. 4(c)). Figures 5(e-g) show the color map of the domain-frame velocity component around the domain in the \(X\)-direction, that in the \(Y\)-direction, and the corresponding domain-frame vector field, respectively, for \(E=0.03\). The velocity field clearly shows the form of a
Figure 5: (a) Domain-frame coordinates \((X,Y,Z)\). Color maps of the domain-frame velocity components in (b) the \(X\) direction and (c) the \(Y\) direction in the domain. (d) Domain-frame vector field in the domain. Color maps of the domain-frame velocity components in (e) the \(X\) direction and (f) the \(Y\) direction around the domain. (g) Domain-frame vector field around the domain. \(E=0.03\).
two-dimensional (2D) source dipole, in which the source and sink are located at the front and back of the moving domain, respectively. Such a characteristic velocity field associated with the domain motion under a DC electric field has not been identified in the experiment[25]. Our coarse-grained MD simulation suggests that the hydrodynamic nature of the lipid membrane plays an important role in determining the details of the dynamic response of the self-assembled domain to a DC electric field.
## IV Discussion
The reversal of the direction of the DC electric field induces the motion of the anionic lipid domain toward the opposite direction of the electric field. The anionic lipid domain moves in the surrounding neutral lipids, leading to the formation of a characteristic 2D source dipole of the mean velocity field around the domain. The 2D source dipole is reminiscent of a hydrodynamic source dipole, which appears in 2D hydrodynamic systems at a low Reynolds number, such as a disk-shaped droplet dragged in a quasi-2D channel flow[33]. To construct a theoretical framework for orienting the domain by a DC electric field, therefore, we considered a quasi-2D hydrodynamic drag problem.
The motion of an anionic lipid domain is driven by a DC electric field. The potential force exerted on a circular domain under the DC electric field can be instantaneously described as
\[\mathbf{F}_{\rm EF}=\pi a^{2}\Sigma E\sin\theta_{\rm d}\mathbf{e}_{Y}, \tag{9}\]
where \(a\) and \(\Sigma\) are the radius and the surface charge density of the domain. To describe the quasi-2D mobility of the domain in the membrane, we considered a hydrodynamic system consisting of a 2D planar incompressible liquid membrane sandwiched by a 3D solvent[34; 35]. The Saffman-Delbruck mobility of a circular fluid domain embedded in the 2D membrane is described as
\[b_{\rm T}=\frac{1}{4\pi\eta_{\rm m}h}\left(\ln\frac{\eta_{\rm m}h}{\eta_{\rm s }a}-\gamma+\frac{1}{2}\right), \tag{10}\]
for \(\eta_{\rm s}\ll\eta_{\rm m}\), where \(\eta_{\rm m}\), \(\eta_{\rm s}\), and \(h\) are the viscosity of the membrane, the viscosity of the 3D solvent, and the thickness of the membrane, respectively[36; 34; 11]. \(\gamma\approx 0.5772\) is Euler's constant. Considering the friction coefficient \(\lambda_{\rm T}=b_{\rm T}^{-1}\) against the slow domain motion, the balance equation becomes
\[\mathbf{F}_{\rm EF}-\lambda_{\rm T}R\frac{{\rm d}\theta_{\rm d}}{{\rm d}\hat{t}} \mathbf{e}_{Y}=\mathbf{0}, \tag{11}\]
where \(\hat{t}\) and \(R\) are the theoretical time and the radius of the vesicle, respectively. Integrating this equation by separation of variables, \({\rm d}\theta_{\rm d}/\sin\theta_{\rm d}=(\pi a^{2}\Sigma E/\lambda_{\rm T}R)\,{\rm d}\hat{t}\), and using \(\int{\rm d}\theta/\sin\theta=\ln\tan(\theta/2)\), yields
\[f(\theta_{\rm d})\equiv\ln\frac{\cot\frac{\theta_{\rm d}}{2}}{\cot\frac{\theta_{0}}{2}}=-\frac{\hat{t}}{\hat{\tau}}, \tag{12}\] \[\hat{\tau}^{-1}=\frac{\Sigma Ea^{2}}{4\eta_{\rm m}hR}\left(\ln\frac{\eta_{\rm m}h}{\eta_{\rm s}a}-\gamma+\frac{1}{2}\right), \tag{13}\]
or equivalently,
\[\theta_{\rm d}(\hat{t})=2\cot^{-1}\left[\cot\frac{\theta_{0}}{2} \exp\left(-\frac{\hat{t}}{\hat{\tau}}\right)\right], \tag{14}\]
where \(\theta_{\rm d}(0)=\theta_{0}\) is the initial polar angle of the domain orientation. Figure 6(a) shows the theoretical curves of the domain orientation \(\theta_{\rm d}(\hat{t})\) described in Eq. (14) after the reversal of the direction of the electric field at \(\hat{t}=0\) for various strengths of the electric field. Here, we plotted for \(\hat{\tau}^{-1}=0.5\), \(1.0\), \(2.0\), and \(3.0\), which are proportional to the strength of the electric field. For the initial orientation, we set \(\theta_{0}=\pi/16\) by considering the fluctuation of the domain orientation. As we can see in Fig. 6(a), the theoretical curves qualitatively reproduced the results of coarse-grained MD simulation shown in Fig. 3(b) in both the time development of the domain orientation \(\theta_{\rm d}(t)\) and its dependence on the strength of the electric field \(E\). For further validation, we also plotted \(f(\theta_{\rm d})\), defined in Eq. (12), obtained from the coarse-grained MD simulation, as shown in Fig. 6(b). According to Eqs. (12) and (13), the data for various strengths \(E\) should collapse onto a single straight line as a function of \(Et\). Figure 6(b) shows that the data for \(Et<0.1\) clearly collapse onto a linearly decreasing line irrespective of the strength \(E\). For \(Et>0.1\), which corresponds to \(\theta_{\rm d}\approx 0.9\pi\), the domain almost reaches the opposite pole, and the data deviate from the collapsed line with thermal fluctuations. The final stabilized position \(\theta_{\rm d}(Et\simeq 0.2)\) shows a weak positive correlation with the strength \(E\), resulting from competition with the thermal fluctuations. Although the microscopic dynamics is complex, as seen in Fig. 4, the mesoscopic domain-scale dynamics can be predicted through the continuum description for the quasi-2D fluidic membrane.
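The theoretical curves in Fig. 6(a) follow directly from Eq. (14); the short Python sketch below evaluates them for \(\theta_{0}=\pi/16\) and the quoted values of \(\hat{\tau}^{-1}\), assuming only NumPy (the grid of \(\hat{t}\) values is arbitrary).

```python
import numpy as np

def theta_d(t_hat, inv_tau_hat, theta0=np.pi / 16.0):
    """Domain orientation from Eq. (14), using arccot(x) = arctan(1/x) for x > 0."""
    return 2.0 * np.arctan(np.tan(theta0 / 2.0) * np.exp(inv_tau_hat * t_hat))

t_hat = np.linspace(0.0, 10.0, 201)
for inv_tau_hat in (0.5, 1.0, 2.0, 3.0):
    curve = theta_d(t_hat, inv_tau_hat)
    print(inv_tau_hat, curve[0], curve[-1])  # starts near theta0 and approaches pi
```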
Generally, domain motion can be affected by the contributions of both the electrostatic potential force exerted on the charged lipids and the frictional force due to the electroosmotic flow induced by the bulk ions accumulated near the membrane[16]. In the previous experimental demonstration[25], phase-separated domains rich in negatively charged lipids oriented toward the positive electrode, and the positively charged domains oriented
Figure 6: (a) Theoretical curves of the domain orientation \(\theta_{\rm d}\) after the reversal (\(\hat{t}=0\)) of the direction of the electric field \(\mathbf{E}\) for various strength parameters \(\hat{\tau}^{-1}=0.5\), \(1.0\), \(2.0\), and \(3.0\). (b) \(f(\theta_{\rm d})\) versus \(Et\) for the data from coarse-grained MD simulation.
toward the negative electrode. This experiment suggests that the electrostatic force, rather than the electroosmotic flow, is dominant for orienting the domain in the lipid bilayer membrane. In the present coarse-grained MD simulation, the bulk ions are implicitly included as the screening effect on the electrostatic interaction between the anionic head groups, and thus potential modifications by bulk electroosmotic flow are neglected. For more precise quantitative predictions of the domain dynamics, simulations with explicit ions are needed in future work.
## V Conclusion
Using coarse-grained MD simulation, we studied the dynamical lateral transport of a phase-separated domain rich in anionic lipids in a lipid bilayer vesicle under an externally applied DC electric field. Under the potential force of the electric field, anionic lipids self-assembled into a domain in the opposite direction of that of the electric field, and the domain was transported along the meridian of the vesicle after the reversal of the electric field. The domain dynamics depends on the strength of the electric field. During the domain motion, a considerable fraction, e.g., 75% for \(E=0.01\), of the domain-forming anionic lipids were exchanged with those in the vesicle, while the domain size remained almost constant. The mean velocity field around the domain in the domain-frame coordinates exhibited a 2D source dipole with the source and sink at the front and back of the moving domain, respectively, indicating that the 2D hydrodynamic nature determines the domain dynamics. Based on the results obtained in the coarse-grained MD simulation, we described the domain dynamics by the balance equation between the potential force and hydrodynamic frictional force, which well explained the time development of the domain position as well as its field-strength dependence. The present findings not only demonstrate that functional domains can be oriented by external fields in a predictable way but also contribute to a fundamental understanding of the lateral transport phenomena of hierarchical structures in 2D interfaces, especially in biomembranes.
###### Acknowledgements.
Calculations were performed using the parallel computer "SGI UV3000" at the Research Center for Advanced Computing Infrastructure at JAIST and the Supercomputer Center at the Institute for Solid State Physics, the University of Tokyo. The research was supported by JSPS KAKENHI Grant Numbers JP21K13891 (H.I.) and JP19H05718 (Y.H.), and JSPS and MESS Japan-Slovenia Research Cooperative Program Grant Number JPJSBP120215001.
|
2308.00659
|
Liouville's Theorem on integration in finite terms for $\mathrm
D_\infty,$ $ \mathrm{SL}_2$ and Weierstrass field extensions
|
Let $k$ be a differential field of characteristic zero and the field of
constants $C$ of $k$ be an algebraically closed field. Let $E$ be a
differential field extension of $k$ having $C$ as its field of constants and
that $E=E_m\supseteq E_{m-1}\supseteq\cdots\supseteq E_1\supseteq E_0=k,$ where
$E_i$ is either an elementary extension of $E_{i-1}$ or $E_i=E_{i-1}(t_i,
t'_i)$ and $t_i$ is weierstrassian (in the sense of Kolchin ([Page 803,
Kolchin1953]) over $E_{i-1}$ or $E_i$ is a Picard-Vessiot extension of
$E_{i-1}$ having a differential Galois group isomorphic to either the special
linear group $\mathrm{SL}_2(C)$ or the infinite dihedral subgroup
$\mathrm{D}_\infty$ of $\mathrm{SL}_2(C).$ In this article, we prove that
Liouville's theorem on integration in finite terms ([Theorem, Rosenlicht1968])
holds for $E$. That is, if $\eta\in E$ and $\eta'\in k$ then there is a
positive integer $n$ and for $i=1,2,\dots,n,$ there are elements $c_i\in C,$
$u_i\in k\setminus \{0\}$ and $v\in k$ such that
$$\eta'=\sum^n_{i=1}c_i\frac{u'_i}{u_i}+v'.$$
|
Partha Kumbhakar, Varadharaj R. Srinivasan
|
2023-08-01T16:50:01Z
|
http://arxiv.org/abs/2308.00659v1
|
Liouville's theorem on integration in finite terms for \(\mathrm{D}_{\infty},\,\mathrm{SL}_{2}\) and Weierstrass field extensions
###### Abstract.
Let \(k\) be a differential field of characteristic zero and the field of constants \(C\) of \(k\) be an algebraically closed field. Let \(E\) be a differential field extension of \(k\) having \(C\) as its field of constants and that \(E=E_{m}\supseteq E_{m-1}\supseteq\cdots\supseteq E_{1}\supseteq E_{0}=k\), where \(E_{i}\) is either an elementary extension of \(E_{i-1}\) or \(E_{i}=E_{i-1}(t_{i},t_{i}^{\prime})\) and \(t_{i}\) is weierstrass (in the sense of Kolchin [14, Page 803]) over \(E_{i-1}\) or \(E_{i}\) is a Picard-Vessiot extension of \(E_{i-1}\) having a differential Galois group isomorphic to either the special linear group \(\mathrm{SL}_{2}(C)\) or the infinite dihedral subgroup \(\mathrm{D}_{\infty}\) of \(\mathrm{SL}_{2}(C)\). In this article, we prove that Liouville's theorem on integration in finite terms [10, Theorem] holds for \(E\). That is, if \(\eta\in E\) and \(\eta^{\prime}\in k\) then there is a positive integer \(n\) and for \(i=1,2,\ldots,n\), there are elements \(c_{i}\in C,u_{i}\in k\setminus\{0\}\) and \(v\in k\) such that
\[\eta^{\prime}=\sum_{i=1}^{n}c_{i}\frac{u_{i}^{\prime}}{u_{i}}+v^{\prime}.\]
## 1. Introduction
Let \(k\) be a differential field of characteristic zero with the derivation \(x\mapsto x^{\prime}\) and \(C_{k}:=\{x\in k\mid x^{\prime}=0\}\) be the field of constants of \(k\). Let \(E\) be a differential field extension of \(k\). An element \(\theta\in E\) is said to be _elementary_ (respectively, _liouvillian_) over \(k\) if either \(\theta\) is algebraic over \(k\) or for some \(\alpha\in k\setminus\{0\}\), \(\theta^{\prime}=\alpha^{\prime}/\alpha\) (respectively, \(\theta^{\prime}=\alpha\)) or for some \(\alpha\in k,\,\theta^{\prime}/\theta=\alpha^{\prime}\) (respectively, \(\theta^{\prime}/\theta=\alpha\)). A differential field extension \(E\) of \(k\) is called an _elementary field extension_ (respectively, a _liouvillian field extension_) of \(k\) if there is a tower of differential fields
\[k=E_{0}\subseteq E_{1}\subseteq\cdots\subseteq E_{m}=E\]
such that for each \(i=1,\ldots,m\), we have \(E_{i}=E_{i-1}(\theta_{i})\) for some \(\theta_{i}\) elementary (respectively, liouvillian) over \(k\).
An element \(f\in k\) is said to have an _elementary integral_ over \(k\) if there are constants \(c_{1},\ldots,c_{n}\in C_{k}\) and nonzero elements \(u_{1},\ldots,u_{n}\in k\) and an element \(v\in k\) such that
\[c_{1}u_{1}^{\prime}/u_{1}+\cdots+c_{n}u_{n}^{\prime}/u_{n}+v^{\prime}=f.\]
Observe that if \(f\in k\) has an elementary integral over \(k\) then one can construct an elementary extension field \(E\) such that \(E=k(x_{1},\ldots,x_{n})\) and that for each \(i,x_{i}^{\prime}=u_{i}^{\prime}/u_{i}\). Here, one can think of \(x_{i}\) as "\(\log(u_{i})\)", so that \(\int f=\sum_{i=1}^{n}c_{i}\log(u_{i})+v\).
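For instance, take \(k=\mathbb{C}(x)\) with \(x^{\prime}=1\); then \(f=1/(1+x^{2})\) has an elementary integral over \(k\) itself, since

\[\frac{1}{1+x^{2}}=\frac{1}{2i}\,\frac{(x-i)^{\prime}}{x-i}-\frac{1}{2i}\,\frac{(x+i)^{\prime}}{x+i},\]

so one may take \(n=2,\)\(c_{1}=-c_{2}=1/(2i),\)\(u_{1}=x-i,\)\(u_{2}=x+i\) and \(v=0,\) corresponding to \(\int f=\frac{1}{2i}\log\frac{x-i}{x+i}=\arctan x\) up to an additive constant.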
In [10], Rosenlicht proved the following theorem, which is known as Liouville's Theorem: Let \(k\) be a differential field of characteristic zero and \(E\) be an elementary extension of \(k\) with \(C_{E}=C_{k}\). Suppose that there is an element \(f\in k\) having an elementary integral over \(E\). Then \(f\) has an elementary integral over \(k\). It is then natural to ask for an extension of Liouville's theorem when the differential field \(E\) is built up by successive adjunction of elements that are not necessarily elementary over the predecessor field. In fact, there are several such extensions of Liouville's theorem available. For example,
in [13], [1], [1] and [14] the nonelementary adjunctions considered are by indefinite integrals such as error functions, logarithmic integrals and polylogarithmic integrals.
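As a quick computational illustration (not part of the cited works), one can observe this with SymPy's symbolic integrator: the antiderivatives of such integrands are returned in terms of the non-elementary special functions \(\operatorname{erf}\) and \(\operatorname{li}\). The exact output form may vary with the SymPy version.

```python
import sympy as sp

x = sp.symbols('x')

# Error function: exp(-x**2) has no elementary antiderivative.
print(sp.integrate(sp.exp(-x**2), x))   # expected: sqrt(pi)*erf(x)/2

# Logarithmic integral: 1/log(x) has no elementary antiderivative.
print(sp.integrate(1 / sp.log(x), x))   # expected: li(x)
```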
In this article, we prove yet another extension of Liouville's theorem. The differential field extensions we consider are obtained by successive adjunction of elements that are either elementary or solutions of certain second order differential equations or solutions of Weierstrass differential equations, over the predecessor field. The precise statement of our theorem is as follows.
**Theorem 1.1**.: _Let \(k\) be a differential field of characteristic zero and \(C_{k}\) be an algebraically closed field. Let_
\[E=E_{m}\supseteq E_{m-1}\supseteq\cdots\supseteq E_{1}\supseteq E_{0}=k\]
_be a chain of differential fields such that \(C_{E}=C_{k}\) and for each \(i=1,2,\ldots,m,\)\(E_{i}\) is of one of the following types:_
1. \(E_{i}\) _is an elementary extension of_ \(E_{i-1}.\)__
2. \(E_{i}\) _is a Picard-Vessiot extension of_ \(E_{i-1}\) _having a differential Galois group isomorphic to_ \(\mathrm{SL}_{2}(C_{k})\) _or to the dihedral subgroup_ \(\mathrm{D}_{\infty}\) _of_ \(\mathrm{SL}_{2}(C_{k})\)__ \[\mathrm{D}_{\infty}:=\left\{\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix}\ \Big{|}\ a\in C_{k}\setminus\{0\}\right\}\bigcup \left\{\begin{pmatrix}0&a\\ -a^{-1}&0\end{pmatrix}\ \Big{|}\ a\in C_{k}\setminus\{0\}\right\}.\]
3. \(E_{i}=E_{i-1}(\theta,\theta^{\prime}),\) _where_ \(\theta\) _is_ Weierstrass1 _over_ \(E_{i-1}.\) _That is,_ \(\theta\) _is transcendental over_ \(E_{i-1}\) _and there are a polynomial_ \(P(X)=4X^{3}-g_{1}X-g_{0}\in C_{k}[X]\) _with a non-zero discriminant_2 _and a nonzero element_ \(\alpha\in E_{i-1}\) _such that_
Footnote 1: Definition is due to Kolchin [13, p. 803]
Footnote 2: That is, \(27g_{0}^{2}-g_{1}^{3}\neq 0,\) which then implies that the Weierstrass elliptic curve \(ZY^{2}-4X^{3}+g_{1}Z^{2}X+g_{0}Z^{3}\) is non-singular.
\[\theta^{\prime 2}=\alpha^{2}P(\theta)\quad(\text{Weierstrass differential equation}).\]
_Suppose that \(f\in k\) has an elementary integral over \(E.\) Then \(f\) has an elementary integral over \(k.\)_
What is special about our theorem is that our extension field \(E,\) unlike the extension fields considered in [13], [1], [1], need not be a liouvillian extension field: we allow adjunction of weierstrassian elements and solutions of second order differential equations with Galois groups isomorphic to \(\mathrm{SL}_{2}(C_{k}).\)
In the manuscript [1], Hebisch considered _elliptic-Lambert_ field extensions, which are obtained by a repeated adjunction of elliptic functions, Lambert functions and elliptic integrals. Wherein, for the field extensions obtained by a repeated adjunction of elementary and elliptic functions, he proved that Liouville's theorem holds. His definition of an elliptic function \(\eta\) over a field \(k\) is that
\[\eta^{\prime 2}=\beta^{\prime 2}P(\eta), \tag{1.1}\]
where \(\beta\in k\) and \(P(X)=X^{3}-g_{1}X-g_{0}\in C_{k}[X].\) In Theorem 1.1 (iii), the element \(\alpha\in E_{i-1}\) that appears in the Weierstrass differential equation is arbitrary and in particular, \(\alpha\) need not be of the form \(\beta^{\prime}\) for any \(\beta\in E\) or for any \(\beta\) from an elliptic-Lambert field extension of \(E_{i-1}.\) Therefore, weierstrassian elements at large are not covered in [1]. However, if \(\eta\) is an elliptic function, as in Equation (1.1), over \(E_{i-1}\) then it is clearly weierstrassian over \(E_{i-1}\). Thus, the extension field \(E\) considered in our theorem does include the adjunction of Hebisch's elliptic functions.
## 2. Preliminaries and Basic Results
In this paper, by a differential field, we mean a field of characteristic zero with a single derivation map. We fix a differential field \(k\) and assume that the field of constants \(C\) of \(k\) is an algebraically closed field. We shall now record few basic results from differential algebra that are needed in our proofs.
### Second order differential equations
Let \(E\) be a Picard-Vessiot extension of \(k\) for a differential \(k-\)module \(M.\) Suppose that the differential Galois group \(\mathscr{G}(E/k)\) is isomorphic to a closed subgroup of \(\mathrm{SL}_{2}(C).\) Then \(\mathscr{G}(E|k)\) is isomorphic to one of the following groups [12, Page 7, Lemma].
1. a finite group.
2. the infinite dihedral subgroup of \(\mathrm{SL}_{2}(C);\) \[\mathrm{D}_{\infty}:=\left\{\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix}\ \Big{|}\ a\in C\setminus\{0\}\right\}\bigcup\left\{ \begin{pmatrix}0&a\\ -a^{-1}&0\end{pmatrix}\ \Big{|}\ a\in C\setminus\{0\}\right\}.\]
3. \(\mathrm{SL}_{2}(C).\)
4. a closed subgroup of a Borel subgroup of \(\mathrm{SL}_{2}(C).\)
In the next two propositions, we explain known structure properties of Picard-Vessiot extensions whose Galois groups are isomorphic to the infinite dihedral group or the special linear group. The results obtained will be used in the proof of our main theorem.
**Proposition 2.1**.: _Let \(E\) be a Picard-Vessiot extension of \(k\) with Galois group \(\mathscr{G}(E|k)\) isomorphic to \(\mathrm{D}_{\infty}\) as algebraic groups. Then there is a tower of differential fields_
\[k\subsetneqq k(\alpha)\subsetneqq E=k(\alpha)(\eta)\]
_having the following properties_
1. \(k(\alpha)\) _is a quadratic extension of_ \(k\) _as well as the algebraic closure of_ \(k\) _in_ \(E.\)__
2. _The element_ \(\eta\) _is transcendental over_ \(k(\alpha)\) _with_ \(\eta^{\prime}/\eta=\alpha.\)__
3. _There is an element_ \(\gamma\in k\setminus\{0\}\) _such that the trace_ \[\mathrm{tr}(\alpha)=\frac{1}{2}\frac{\gamma^{\prime}}{\gamma}.\]
Proof.: Since \(\mathscr{G}(E|k)=\mathrm{D}_{\infty}\cong\mathrm{G}_{m}\rtimes\mathbb{Z}_{2},\) the identity component \(\mathscr{G}(E|k)^{0}\) is isomorphic to \(\mathrm{G}_{m}\) and the quotient group \(\mathscr{G}(E|k)/\mathscr{G}(E|k)^{0}\) is isomorphic to \(\mathbb{Z}_{2}.\) Therefore, from the fundamental theorem of differential Galois theory and from [12, Lemma A1], there is a tower of differential fields
\[k\subseteq k(\alpha)\subseteq E=k(\alpha)(\eta),\]
where \(k(\alpha)\) is the algebraic closure of \(k\) in \(E,\)\(k(\alpha)\) is a quadratic extension of \(k,\eta\) is transcendental over \(k(\alpha)\) and that \(\eta^{\prime}/\eta\in k(\alpha)\setminus k.\) Furthermore,
\[\mathscr{G}(k(\alpha)|k)\cong\mathscr{G}(E|k)/\mathscr{G}(E|k)^{0}\cong \mathbb{Z}_{2}\]
\[\mathscr{G}(E|k(\alpha))\cong\mathscr{G}(E|k)^{0}\cong\mathrm{G}_{m}.\]
Since, \(k\subsetneqq k(\eta^{\prime}/\eta)\subseteq k(\alpha)\) and that \([k(\alpha):k]=2,\) we have \(k(\eta^{\prime}/\eta)=k(\alpha).\) Therefore, we may assume \(\alpha=\eta^{\prime}/\eta.\) Let \(\alpha\) and \(\beta\) be the distinct roots of the irreducible polynomial of \(\alpha.\) Then, there is an automorphism \(\tau\in\mathscr{G}(E|k)\) such that \(\tau(\alpha)=\beta.\) Let \(\tau(\eta)=:\zeta\) and observe that \(\zeta^{\prime}/\zeta=\beta\) and that
\[\frac{(\eta\zeta)^{\prime}}{\eta\zeta}=\alpha+\beta\in k. \tag{2.1}\]
Since \(\mathcal{G}(E|k)\cong\mathrm{D}_{\infty}\) has no closed normal subgroup \(N\) such that \(\mathrm{D}_{\infty}/N\cong\mathrm{G}_{m},\) it follows that \(\eta\zeta\) is not transcendental over \(k.\) Thus \(\eta\zeta\) belongs to the quadratic extension \(k(\alpha).\) Now from Equation (2.1) and from [16, Remark 1.11.1], we obtain \((\eta\zeta)^{2}\in k.\) Let \(\gamma:=(\eta\zeta)^{2}\) and observe that
\[\frac{(\eta\zeta)^{\prime}}{\eta\zeta}=\alpha+\beta=\mathrm{tr}(\alpha)=\frac {1}{2}\frac{\gamma^{\prime}}{\gamma}.\]
**Proposition 2.2**.: [16, page 58] _Let \(E\) be a Picard-Vessiot extension of \(k\) with Galois group \(\mathcal{G}(E|k)\) isomorphic to \(\mathrm{SL}_{2}(C)\) as algebraic groups. Then we have the following:_
1. \(E\) _is a Picard-Vessiot extension of_ \(k\) _for a matrix differential equation_ \(y^{\prime}=Ay,\) _where_ \[A=\begin{pmatrix}0&1\\ r&s\end{pmatrix}\in M_{2}(k).\]
2. _there is a tower of differential fields_ \[k\subsetneqq k(\alpha)\subsetneqq k(\alpha,\xi)\subsetneqq k(\alpha,\xi,\eta )=E,\] _where_ \(\alpha,\xi\) _and_ \(\eta\) _are_ \(k-\)_algebraically independent,_ \(\eta^{\prime}=\omega/\xi^{2}\) _for some_ \(\omega\in k\setminus\{0\},\)__\(\xi^{\prime}=\alpha\xi\) _and that_ \(\alpha\) _is a zero of the Riccati differential polynomial_ \(R(X)=X^{\prime}+X^{2}-rX-s.\)__
3. \(R\) _has no zeros in_ \(k.\)__
Proof.: Let \(R\) be the Picard-Vessiot ring of \(E.\) Then by [16, Corollaries 5.17 and 5.29], there is an isomorphism of \(k-\)algebras
\[\phi:R\to k\otimes_{C}C[\mathcal{G}(E|k)],\]
where \(C[\mathcal{G}(E|k)]\) is the coordinate ring of \(\mathcal{G}(E|k),\) which is also compatible with the action of \(\mathcal{G}(E|k).\) Since
\[C[\mathcal{G}(E|k)]=\frac{C[x_{11},x_{12},x_{21},x_{22}]}{\langle x_{11}x_{22 }-x_{21}x_{12}-1\rangle},\]
there are elements \(y_{1},y_{2},y_{3},y_{4}\in R\) (namely, the images of \(x_{ij}\) under \(\phi^{-1}\)) that generate \(R\) as a \(k-\)algebra. Let
\[Y:=\begin{pmatrix}y_{1}&y_{2}\\ y_{3}&y_{4}\end{pmatrix}\qquad W:=\begin{pmatrix}y_{1}&y_{2}\\ y^{\prime}_{1}&y^{\prime}_{2}\end{pmatrix}.\]
Then for any \(\sigma\in\mathcal{G}(E|k),\) we have
\[\sigma(Y)=YC_{\sigma} \tag{2.2}\]
for some \(C_{\sigma}:=(c_{ij\sigma})\in\mathrm{SL}_{2}(C).\) In particular, for \(i=1,2\) and for all \(\sigma\in\mathcal{G}(E|k),\)\(\sigma(y_{i})=c_{1i\sigma}y_{1}+c_{2i\sigma}y_{2}\) and therefore we also have
\[\sigma(W)=WC_{\sigma}. \tag{2.3}\]
From Equations (2.2) and (2.3), we have the following observations. First, we see that the entries of the matrices \(W^{\prime}W^{-1}\) and \(YW^{-1}\) are fixed by \(\mathcal{G}(E|k).\) Therefore \(W^{\prime}=AW\) and \(Y=BW,\) for some \(A,B\in M_{2}(k).\) Next, \(\sigma(\det(W))=\det(W)\) for all \(\sigma\in\mathcal{G}(E|k)\) and we obtain that \(\det(W)\in k.\) Now since the entries of \(Y\) generate \(R,\) so do the entries of \(W.\) Finally, since \(\mathrm{tr.deg}(E|k)=3,\) the set \(\{y_{1},y_{2},y^{\prime}_{1}\}\) must be \(k-\)algebraically independent.
Let \(A=(a_{ij})\in M_{2}(k).\) Then \(y^{\prime}_{1}=a_{11}y_{1}+a_{12}y^{\prime}_{1}\) and therefore \(a_{11}=0\) and \(a_{12}=1.\) This proves (i).
Now from the equation \(y^{\prime}=Ay,\) we obtain a second order differential equation
\[z^{\prime\prime}=a_{21}z^{\prime}+a_{22}z \tag{2.4}\]
to which \(y_{1},y_{2}\) are \(k-\)algebraically independent solutions. Let \(\xi:=y_{1},\,\alpha=\xi^{\prime}/\xi\) and \(\eta=y_{2}/y_{1}\) and observe that these choices have the desired properties listed in (ii).
Let \(V\) be the set of all solutions in \(E\) of the Equation (2.4) and \(\mathscr{R}\) be the set of all zeros in \(E\) of the Riccati differential polynomial \(R(X)=X^{\prime}+X^{2}-a_{21}X-a_{22}\). Then, the logarithmic derivative map \(x\to x^{\prime}/x\) from \(V\setminus\{0\}\to\mathscr{R}\) is surjective. If \(u\in k\cap\mathscr{R}\) then since \(k\) is algebraically closed in \(E,\) there is a \(v\in V\) that is transcendental over \(k\) such that \(v^{\prime}/v=u.\) Note that \(\sigma(v)=c_{\sigma}v\) for some \(c_{\sigma}\in C\setminus\{0\}.\) Therefore, the field \(k(v)\) is then a differential field that is also stable under the action of \(\mathscr{G}(E|k).\) But then, by fundamental theorem of differential Galois theory, the closed subgroup \(\mathscr{G}(E|k(v))\) must be a normal subgroup of \(\mathscr{G}(E|k)\cong\mathrm{SL}_{2}(C)\) of dimension \(2\). This contradicts the fact that \(\mathrm{SL}_{2}(C)\) is a simple algebraic group. Thus, \(k\) contains no zeros of \(R\).
### Weierstrassian elements
Let \(k\subset K\) be differential fields of characteristic zero such that \(C_{K}=C\) and \(C\) is an algebraically closed field. An element \(\theta\in K\) is said to be _weierstrassian_ over \(k\) if \(\theta\) is transcendental over \(k\) and there is a nonsingular irreducible projective curve of genus \(1\), defined over \(C\), in the Weierstrass form:
\[\mathcal{E}:=ZY^{2}-4X^{3}+g_{1}Z^{2}X+g_{0}Z^{3}\]
such that for some \(\alpha\in k\setminus\{0\},\,(\theta:\theta^{\prime}/\alpha:1)\) is a \(K-\)point of the curve. That is
\[\theta^{\prime 2}=\alpha^{2}(4\theta^{3}-g_{1}\theta-g_{0});\quad\alpha\in k \setminus\{0\},\quad g_{0},g_{1}\in C,\quad 27g_{0}^{2}-g_{1}^{3}\neq 0.\]
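A standard analytic example, recorded here only for orientation: let \(k=\mathbb{C}(z)\) with \(z^{\prime}=1\) and let \(\wp\) be the Weierstrass function of a lattice whose invariants are \(g_{1},g_{0}\in\mathbb{C}\) (classically written \(g_{2},g_{3}\)) with \(27g_{0}^{2}-g_{1}^{3}\neq 0.\) Then \(\wp\) is transcendental over \(k\) and

\[\wp^{\prime 2}=4\wp^{3}-g_{1}\wp-g_{0},\]

so \(\theta=\wp\) is weierstrassian over \(k\) with \(\alpha=1.\)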
In [10, page 809], it was proved that if \(K\) is a strongly normal extension of \(k\) such that \(k\) is algebraically closed in \(K\) and the field transcendence degree \(\text{tr.d}(K|k)=1\) then there is an element \(\theta\in K\) such that either \(K=k(\theta)\) and \(\theta^{\prime}\in k\) or \(K=k(\theta)\) and \(\theta^{\prime}/\theta\in k\) or \(K=k(\theta,\theta^{\prime})(\zeta)\) and \(\theta\) is weierstrassian over \(k\) and \(\zeta\) is algebraic over \(k(\theta,\theta^{\prime})\).
We need the following facts about the differential \(k-\)automorphism group \(\mathscr{G}\) of the differential field \(k(\theta,\theta^{\prime}).\) The \(C-\)rational points of \(\mathcal{E},\) denoted by \(\mathcal{E}(C),\) is a commutative group. A point \(p\in\mathcal{E}(C)\) induces a translation isomorphism \(\tau_{p}:\mathcal{E}(C)\to\mathcal{E}(C)\) of curves, defined by \(\tau_{p}(x)=x+p.\) The function field \(k(\mathcal{E})\) of \(\mathcal{E}(k)\) is isomorphic to \(k(\theta,\theta^{\prime})\) and the map \(\tau_{p},\) for \(p\in\mathcal{E}(C),\) induces a differential \(k-\)automorphism \(\tau_{p}^{*}\) of \(k(\theta,\theta^{\prime}).\) Conversely, every differential \(k-\)automorphism \(\tau\) of \(k(\theta,\theta^{\prime})\) also induces a translation map \(\tau_{p},\) for some \(p\in\mathcal{E}(C).\) The mapping \(\varphi:\mathcal{E}(C)\to\mathscr{G}\) defined by \(\varphi(\tau_{p})=\tau_{p}^{*}\) is in fact an isomorphism of commutative groups. There is a bijective Galois correspondence between the intermediate differential subfields of \(k(\theta,\theta^{\prime})\) and the closed subgroups (which are precisely the finite subgroups) of \(\mathcal{E}(C).\) In particular, it can be shown that \(k\) is algebraically closed in \(k(\theta,\theta^{\prime}).\) We refer the reader to [10, pages 803-807] or [11, Example 2.7] for a proof of these facts.
Let \(k\subset K\) be fields and \(\Omega_{K/k}\) be the \(K-\)vector space of \(k-\)differentials. Note that \(\Omega_{K/k}\) has the following universal property: there is a \(k-\)derivation map \(\mathrm{d}:K\to\Omega_{K/k},\) which by definition is \(k-\)linear and for \(x,y\in K,\,\mathrm{d}(xy)=x\,\,\mathrm{d}y+y\,\,\mathrm{d}x,\) such that for any \(k-\)derivation \(D\) from \(K\) to a \(K-\)vector space \(M,\) there is a unique \(K-\)homomorphism \(\phi:\Omega_{K/k}\to M\) such that \(\phi\circ\mathrm{d}=D.\)
For the convenience of the reader, in the next few propositions, we reproduce certain results from [11] and [11] that will be used in the proof of our theorem.
**Proposition 2.3**.: _[_11_, Proposition 4]_ _Let \(k\subset K\) be fields of characteristic zero. Let \(c_{1},\ldots,c_{n}\) be elements of \(k\) that are linearly independent over the rational numbers \(\mathbb{Q}\subset k\) and \(u_{1},\ldots,u_{n}\) be nonzero elements of \(K\) and \(v\) be an element of \(K.\) Then the element_
\[c_{1}\frac{\mathrm{d}u_{1}}{u_{1}}+c_{2}\frac{\mathrm{d}u_{2}}{u_{2}}+\cdots+c _{n}\frac{\mathrm{d}u_{n}}{u_{n}}+\mathrm{d}v\]
_of \(\Omega_{K/k}\) is zero if and only if each \(u_{1},\ldots,u_{n},v\) is algebraic over \(k\)._
**Proposition 2.4**.: _[_12_, Lemma]_ _Let \(k\) be a differential field of characteristic zero, \(K\) be a differential field extension of \(k\), \(C_{K}=C\) and \(\text{tr.d}(K|k)=1\). If there are two \(k-\)differentials of the form \(c_{1}\text{{\rm d}}u_{1}/u_{1}+\dots+c_{n}\text{{\rm d}}u_{n}/u_{n}+\text{{\rm d }}v,\) where \(c_{1},\dots,c_{n}\) are constants and each \(u_{1},\dots,u_{n},v\) in \(K\) such that \(c_{1}u_{1}^{\prime}/u_{1}+\dots+c_{n}u_{n}^{\prime}/u_{n}+v^{\prime}\in k\) then the differentials are linearly dependent over \(C\)._
**Proposition 2.5**.: _[_12_, Lemma]_ _Let \(k\) be a differential field of characteristic zero. Let \(k(t)\) be a differential field extension of \(k\), \(C_{k(t)}=C\) with \(t\) is transcendental over \(k\) and either \(t^{\prime}\in k\) or \(t^{\prime}/t\in k\). Let \(c_{1},\dots,c_{n}\in k\) be linearly independent over \(\mathbb{Q}\) and let \(u_{1},\dots,u_{n}\) be nonzero element in \(k(t)\), \(v\in k(t)\). Then if_
\[\sum_{i=1}^{n}c_{i}\frac{u_{i}^{\prime}}{u_{i}}+v^{\prime}\in k[t]\]
_we have \(v\in k[t]\) and in the case \(t^{\prime}\in k\), each \(u_{i}\in k\), while in the case \(t^{\prime}/t\in k\), for each \(i=1,\dots,n\), \(u_{i}/t^{\nu_{i}}\in k\) for some integer \(\nu_{i}\)._
## 3. Main Results
**Lemma 3.1**.: _Let \(E\) be a Picard-Vessiot extension of \(k\) having a differential Galois group isomorphic to the infinite dihedral group \(\mathrm{D}_{\infty}\). If \(f\in k\) has an elementary integral over \(E\) then it has an elementary integral over \(k\)._
Proof.: Let \(c_{1}u_{1}^{\prime}/u_{1}+\dots+c_{n}u_{n}^{\prime}/u_{n}+v^{\prime}=f\in k,\) where \(c_{i}\in C\) and \(v,u_{i}\in E.\) We may further assume that \(c_{1},\dots,c_{n}\) are \(\mathbb{Q}-\)linearly independent (see [12, page 158]). By Proposition 2.1 we have a tower \(k\subset k(\alpha)\subset k(\alpha,\eta)=E\), where \(k(\alpha)\) is a quadratic extension, \(\operatorname{tr}(\alpha)=\frac{1}{2}\frac{\gamma^{\prime}}{\gamma}\), \(\eta\) is transcendental over \(k\) and \(\eta^{\prime}/\eta=\alpha\). Now we use Proposition 2.5 and obtain that for each \(i\), \(u_{i}=a_{i}\eta^{m_{i}}\) with \(a_{i}\in k(\alpha)\), \(m_{i}\) an integer, and \(v\in k(\alpha)[\eta]\). Since \(u_{i}^{\prime}/u_{i}=a_{i}^{\prime}/a_{i}+m_{i}\eta^{\prime}/\eta=a_{i}^{\prime}/a_{i}+m_{i}\alpha\), we have
\[f=\sum_{i=1}^{n}c_{i}\frac{a_{i}^{\prime}}{a_{i}}+(\sum_{i=1}^{n}m_{i}c_{i}) \alpha+v^{\prime}.\]
This implies \(v^{\prime}\in k(\alpha)\). Applying Kolchin-Ostrowski, [12, Appendix], we have \(v\in k(\alpha)\). Hence
\[f=\sum_{i=1}^{n}c_{i}\frac{a_{i}^{\prime}}{a_{i}}+c\alpha+v^{\prime}. \tag{3.1}\]
where \(a_{1},\dots,a_{n},v\in k(\alpha)\) and \(c=\sum_{i=1}^{n}m_{i}c_{i}\).
Let \(G\) be the Galois group of the quadratic extension \(k(\alpha)\) of \(k\). Then, for all \(x\in k(\alpha)\),
\[\operatorname{tr}(x)=\sum_{\sigma\in G}\sigma(x)\in k,\qquad\operatorname{nr} (x)=\prod_{\sigma\in G}\sigma(x)\in k\quad\text{and}\quad\operatorname{tr}(x )^{\prime}=\operatorname{tr}(x^{\prime}).\]
Applying the trace map to Equation (3.1), we obtain
\[2f =\sum_{i=1}^{n}c_{i}\operatorname{tr}\left(\frac{a_{i}^{\prime}}{ a_{i}}\right)+c\operatorname{tr}(\alpha)+\operatorname{tr}(v^{\prime})\] \[=\sum_{i=1}^{n}c_{i}\left(\frac{\operatorname{nr}(a_{i})^{ \prime}}{\operatorname{nr}(a_{i})}\right)+c\operatorname{tr}(\alpha)+ \operatorname{tr}(v)^{\prime}.\]
Since we have proved in Proposition 2.1 that \(\operatorname{tr}(\alpha)=\frac{1}{2}\frac{\gamma^{\prime}}{\gamma}\) for some \(\gamma\in k\), the proof of the Lemma is now complete.
**Lemma 3.2**.: _Let \(E\) be a Picard-Vessiot extension of \(k\) having a differential Galois group isomorphic to \(\mathrm{SL}_{2}(C)\). If \(f\in k\) has an elementary integral over \(E\) then it has an elementary integral over \(k\)._
Proof.: We first resolve \(E\) as a tower of differential fields as in Proposition 2.2 (ii). Then for \(y:=\omega/\xi^{2}\), we have
\[\beta:=\frac{y^{\prime}}{y}=\frac{\omega^{\prime}}{\omega}-2\alpha\in k(\alpha)\]
and therefore, \(k(\alpha)=k(\beta)\) and that \(k(\alpha,y)\) is a differential field. Since \(\eta^{\prime}=y\) and that \(\xi^{2}\in k(\alpha,y),\) we have the following tower of differential fields:
\[k\subsetneqq k(\alpha)\subsetneqq k(\alpha,y)\subsetneqq k(\alpha,y,\eta) \subsetneqq E=k(\alpha,y,\eta)(\xi),\]
where \(\alpha,\xi\) and \(\eta\) are as in Proposition 2.2 (ii) and \(E\) is a quadratic extension of \(k(\alpha,y,\eta)\) with \(\xi^{2}\in k(\alpha,y).\) We divide the rest of the proof into four steps.
_Step 1._ Let
\[c_{1}\frac{u_{1}^{\prime}}{u_{1}}+c_{2}\frac{u_{2}^{\prime}}{u_{2}}+\cdots+c_{ n}\frac{u_{n}^{\prime}}{u_{n}}+v^{\prime}=f,\]
where \(u_{1},\cdots,u_{n}\in E\) and \(c_{1},\cdots,c_{n}\) are \(\mathbb{Q}-\)linearly independent constants and \(v\in E.\) Then since \(E\) is an elementary (algebraic) extension of \(k(\alpha,y,\eta),\) by Liouville's theorem, we obtain that \(f\) admits a similar expression over \(k(\alpha,y,\eta).\) Thus, we shall further assume that for each \(i,\)\(u_{i}\in k(\alpha,y,\eta)\) and that \(v\in k(\alpha,y,\eta).\)
_Step 2._ Observe that \(f\in k\subset k(\alpha,y)[\eta]\) and that \(\eta^{\prime}=y\in k(\alpha,y).\) Therefore, we shall apply Proposition 2.5 and obtain that \(u_{1},\ldots,u_{n}\in k(\alpha,y)\) and that \(v\in k(\alpha,y)[\eta].\) Since \(v^{\prime}\in k(\alpha,y),\) by Kolchin-Ostrowski, there is a constant \(e_{1}\) such that \(v-e_{1}\eta\in k(\alpha,y).\) Thus
\[f=\sum_{i=1}^{n}c_{i}\frac{u_{i}^{\prime}}{u_{i}}+e_{1}y+(v-e_{1}\eta)^{\prime}. \tag{3.2}\]
where \(u_{i},\ldots u_{n},v-e_{1}\eta\in k(\alpha,y).\)
_Step 3._ We have \(\sum_{i=1}^{n}c_{i}u_{i}^{\prime}/u_{i}+(v-e_{1}\eta)^{\prime}=f-e_{1}y\in k(\alpha)[y],\) where \(y^{\prime}=\beta y.\) Therefore, we shall again apply Proposition 2.5 and obtain that \(v-e_{1}\eta\in k(\alpha)[y]\) and for each \(i=1,2,\ldots,n,\) there is an integer \(m_{i}\) and an element \(v_{i}\in k(\alpha)\) such that \(u_{i}=v_{i}y^{m_{i}}.\) Let \(v-e_{1}\eta=w+a_{1}y+a_{2}y^{2}+\cdots+a_{l}y^{l}.\) Then we have
\[(v-e_{1}\eta)^{\prime} =w^{\prime}+(a_{1}^{\prime}+\beta a_{1})y+\cdots+(a_{l}^{\prime}+ l\beta a_{l})y^{l}\] \[\frac{u_{i}^{\prime}}{u_{i}} =\frac{v_{i}^{\prime}}{v_{i}}+m_{i}\beta.\]
Using the above equations, we shall rewrite Equation (3.2) and obtain that
\[f=\sum_{i=1}^{n}c_{i}\frac{v_{i}^{\prime}}{v_{i}}+e\beta+w^{\prime}, \tag{3.3}\]
where \(e=\sum_{i=1}^{n}m_{i}c_{i},\)\(v_{i}\in k(\alpha)=k(\beta)\) and \(w\in k(\alpha).\)
_Step 4._ We shall now show that the elements \(w,v_{1},\ldots,v_{n}\) belong to \(k\) and that \(e=0.\) This will then complete the proof.
Let \(\overline{E}\) be an algebraic closure of \(E\) and \(\bar{k}\) be the algebraic closure of \(k\) in \(\overline{E}.\) For any rational function \(x\in k(\alpha)\) and \(a\in\bar{k},\) let
\[x=r_{\lambda}(\alpha-a)^{\lambda}+r_{\lambda+1}(\alpha-a)^{\lambda+1}+\cdots\]
be the Laurent series expansion of \(x\) about \(a.\) Since
\[(\alpha-a)^{\prime}=-R(a)-(\alpha-a)^{2}-(2a-r)(\alpha-a),\]
we have the following Laurent series expansions for \(x^{\prime}\) and \(x^{\prime}/x\):
\[x^{\prime} =-\lambda R(a)r_{\lambda}(\alpha-a)^{\lambda-1}+\cdots\] \[\frac{x^{\prime}}{x} =\lambda R(a)(\alpha-a)^{-1}+\cdots\]
Thus, \(\operatorname{ord}_{a}(x^{\prime})\geq\operatorname{ord}_{a}(x)-1\) and that \(\operatorname{ord}_{a}(x^{\prime}/x)\geq-1.\) In particular,
\[\operatorname{ord}_{a}\left(\sum_{i=1}^{n}c_{i}(v_{i}^{\prime}/v_{i})\right) \geq-1. \tag{3.4}\]
Since \(k\) is algebraically closed in \(E,\) we have \(\overline{k}E\) is a Picard-Vessiot extension of \(\overline{k}\) with (see [1, Proposition 6.6])
\[\mathscr{G}(\overline{k}E|\overline{k})\cong\mathscr{G}(E|k)\cong\operatorname {SL}_{2}(C).\]
Therefore, applying Proposition 2.2 with \(\overline{k}\) in place of \(k,\) we obtain that \(R(a)\neq 0\) for all \(a\in\overline{k}.\) Thus, if \(a\) is a pole of \(x\) then \(\operatorname{ord}_{a}(x^{\prime})=\operatorname{ord}_{a}(x)-1\) and if \(a\) is either a pole or a zero of \(x\) then \(\operatorname{ord}_{a}(x^{\prime}/x)=-1.\) With these observations, we shall move on to show that \(v_{i}\) and \(w\) are in \(k.\)
Suppose that \(w\) has a pole at \(a\in\overline{k}.\) Then \(\operatorname{ord}_{a}(w)\leq-1\) and, as we observed, \(\operatorname{ord}_{a}(w^{\prime})=\operatorname{ord}_{a}(w)-1\leq-2.\) Note that \(e\beta=e(\omega^{\prime}/\omega)-2e\alpha\) is a polynomial in \(\alpha\) over \(k\) of degree at most one, so \(\operatorname{ord}_{a}(e\beta)\geq 0.\) Therefore, from Equations (3.3) and (3.4), and since \(\operatorname{ord}_{a}(w^{\prime})\) is strictly smaller than the orders of the remaining summands, we obtain
\[0=\operatorname{ord}_{a}(f)=\operatorname{ord}_{a}\left(\sum_{i=1}^{n}c_{i}(v_{i}^{\prime}/v_{i})+e\beta+w^{\prime}\right)=\operatorname{ord}_{a}(w^{\prime})\leq-2,\]
a contradiction. Therefore, \(w\) is a polynomial in \(\alpha\) over \(k.\)
Let \(A\subseteq\{1,2,\ldots,n\}\) be the subset containing all \(j\) such that \(v_{j}\) has either a zero or a pole at \(a\in\overline{k}.\) Suppose that \(A\neq\emptyset.\) Then the Laurent series expansion of \(\sum_{j\in A}c_{i}(v_{i}^{\prime}/v_{i})\) is
\[\sum_{j\in A}c_{i}(v_{i}^{\prime}/v_{i})=\left(\sum_{i\in A}c_{i}\lambda_{i} \right)R(a)(\alpha-a)^{-1}+\cdots,\]
where \(\lambda_{i}=\operatorname{ord}_{a}(v_{i}).\) Since \(c_{1},\ldots,c_{n}\) are \(\mathbb{Q}-\)linearly independent, we have \(\sum_{i\in A}c_{i}\lambda_{i}\neq 0.\) This implies
\[\operatorname{ord}_{a}\left(\sum_{i\in A}c_{i}(v_{i}^{\prime}/v_{i})\right)=-1.\]
On the other hand, for each \(i\in\{1,2,\ldots,n\}\setminus A,\) we have \(\operatorname{ord}_{a}(v_{i}^{\prime}/v_{i})\geq 0.\) Then
\[\operatorname{ord}_{a}\left(\sum_{i\in\{1,2,\ldots,n\}\setminus A}c_{i}(v_{i} ^{\prime}/v_{i})\right)\geq 0.\]
Thus
\[\operatorname{ord}_{a}\left(\sum_{i=1}^{n}c_{i}(v_{i}^{\prime}/v_{i})\right)=-1.\]
Since \(w\in k[\alpha],\) we have \(w^{\prime}\in k[\alpha]\) and that \(\operatorname{ord}_{a}(e\beta+w^{\prime})\geq 0.\) Then
\[0=\operatorname{ord}_{a}(f)=\operatorname{ord}_{a}\left(\sum_{i=1}^{n}c_{i}(v_{i}^{\prime}/v_{i})+e\beta+w^{\prime}\right)=\operatorname{ord}_{a}\left(\sum_{i=1}^{n}c_{i}(v_{i}^{\prime}/v_{i})\right)=-1,\]
which is absurd. Thus \(A=\emptyset\) and we obtain that \(v_{1},\ldots,v_{n}\) belongs to \(k\) and that \(e\beta+w^{\prime}=f-\sum_{i=1}^{n}c_{i}(v_{i}^{\prime}/v_{i})\in k.\)
To complete the proof, we only need to show that \(e=0\) and that \(w\in k.\) We have already noted that \(w\in k[\alpha].\) Now we shall show that \(w\) has no zeros in \(\overline{k},\) which would then imply \(w\in k.\) Suppose \(a\in\overline{k}\) is a zero of \(w\) of order \(m\geq 1\) and that degree of \(w\) is \(l.\) Then \(w^{\prime}\) is of degree \(l+1:\)
\[w =a_{m}(\alpha-a)^{m}+\cdots+a_{l}(\alpha-a)^{l},\;\text{ where }a_{l} \neq 0,a_{m}\neq 0\,\text{ and }\] \[w^{\prime} =-ma_{m}R(a)(\alpha-a)^{m-1}-\cdots-la_{l}(\alpha-a)^{l+1}.\]
Since \(\beta=(\omega^{\prime}/\omega)-2\alpha\) is a polynomial of degree \(1\) and \(e\beta+w^{\prime}=f-\sum_{i=1}^{n}c_{i}(v_{i}^{\prime}/v_{i})\in k,\) by comparing the degrees, we obtain that \(l=0.\) This in turn implies \(w\in k\) and \(e=0.\)
**Lemma 3.3**.: _Let \(k(\theta,\theta^{\prime})\) be a differential field extension of \(k,\) where \(\theta\) is weierstrassian over \(k.\) If \(f\in k\) has an elementary integral over \(k(\theta,\theta^{\prime})\) then it has an elementary integral over \(k.\)_
Proof.: Let \(f\in k\) have an elementary integral over \(k(\theta,\theta^{\prime}).\) Then, there are \(\mathbb{Q}-\)linearly independent constants \(c_{1},\ldots,c_{n},\) nonzero elements \(u_{1},\ldots,u_{n}\in k(\theta,\theta^{\prime})\) and an element \(v\in k(\theta,\theta^{\prime})\) such that
\[c_{1}u_{1}^{\prime}/u_{1}+\cdots+c_{n}u_{n}^{\prime}/u_{n}+v^{\prime}=f\in k.\]
Fix an algebraic closure of \(k(\theta,\theta^{\prime})\) and let \(\overline{k}\) be the relative algebraic closure of \(k.\) Let \(\mathscr{G}\) be the group of all differential automorphisms of \(\overline{k}(\theta,\theta^{\prime})\) fixing elements of \(\overline{k}.\) For \(\tau\in\mathscr{G},\) we have
\[c_{1}\tau(u_{1})^{\prime}/\tau(u_{1})+\cdots+c_{n}\tau(u_{n})^{\prime}/\tau(u_ {n})+\tau(v)^{\prime}=f\in k.\]
From Proposition 2.4, we see that there is a constant \(c_{\tau}\in C\) such that
\[c_{1}\mathrm{d}\tau(u_{1})/\tau(u_{1})+\cdots+c_{n}\mathrm{d}\tau(u_{n})/\tau( u_{n})-c_{\tau}\left(c_{1}\mathrm{d}u_{1}/u_{1}+\cdots+c_{n}\mathrm{d}u_{n}/u_{n} \right)+\mathrm{d}(\tau(v)-c_{\tau}v)=0. \tag{3.5}\]
We extend the set \(\{c_{1},\ldots,c_{n}\}\) to a \(\mathbb{Q}-\)linearly independent set of constants \(\{c_{1},\ldots,c_{n},\ldots,c_{l}\}\) so that for each \(1\leq i\leq n,\)\(c_{\tau}c_{i}\) is a \(\mathbb{Q}-\)linear combination of \(c_{1},\ldots,c_{l}.\) Now for \(1\leq i\leq n\) and \(1\leq j\leq l,\) we can find integers \(m_{ij\tau}\) and a positive integer \(l_{\tau}\) such that
\[-l_{\tau}c_{\tau}c_{i}=\sum_{j=1}^{l}m_{ij\tau}c_{j}.\]
Then Equation (3.5) becomes
\[c_{1}\mathrm{d}v_{1}/v_{1}+\cdots+c_{l}\mathrm{d}v_{l}/v_{l}+\mathrm{d}(l_{ \tau}v_{0})=0, \tag{3.6}\]
where for \(1\leq j\leq n,\) we have
\[v_{j}=\frac{\tau(u_{j}^{l_{\tau}})}{\prod_{i=1}^{n}u_{i}^{m_{ij\tau}}}\]
and for each \(n+1\leq i\leq l,\)\(v_{i}\) is a power product of \(u_{1},\ldots,u_{n},\tau(u_{1}),\ldots,\tau(u_{n})\) and \(v_{0}=\tau(v)-c_{\tau}v.\)
Apply Proposition 2.3 to Equation (3.6) to obtain that each \(v_{0},v_{1},\ldots,v_{n}\) belongs to \(\overline{k}.\) Thus, for each \(\tau\in\mathscr{G}\) and for each \(1\leq j\leq n,\) there are elements \(f_{j\tau}\) and \(g_{\tau}\) in \(\overline{k}\) such that
\[\tau(u_{j}^{l_{\tau}})=f_{j\tau}\prod_{i=1}^{n}u_{i}^{m_{ij\tau}}\quad\text{ and}\quad\tau(v)=c_{\tau}v+g_{\tau}. \tag{3.7}\]
We claim that these equations hold only if each \(u_{1},\ldots,u_{n},v\) belongs to \(k,\) which will then complete the proof of the lemma. As noted earlier, \(k\) is algebraically closed in \(k(\theta,\theta^{\prime}).\) Since each \(u_{1},\ldots,u_{n},v\) belongs to \(k(\theta,\theta^{\prime}),\) the claim follows once we show that each \(u_{1},\ldots,u_{n},v\) belongs to \(\overline{k}.\)
Suppose that for some \(j\) we have \(u_{j}\in\overline{k}(\theta,\theta^{\prime})\) and \(u_{j}\notin\overline{k}.\) Since \(\mathcal{E}=ZY^{2}-4X^{3}+g_{1}Z^{2}X+g_{0}Z^{3}\) is an irreducible nonsingular projective curve, every non-constant rational function in \(\overline{k}(\mathcal{E})=\overline{k}(\theta,\theta^{\prime})\) admits a zero and a pole. Let
\[T_{j}:=\{y\in\mathcal{E}(\overline{k})\mid y\text{ is a pole of }\;\tau(u_{j})\;\text{ for some }\tau\in\mathscr{G}\}.\]
Recall that \(\mathscr{G}\) consists of automorphisms induced by the translation maps \(\tau_{p}\) for \(p\in\mathcal{E}(C)\). Observe that \(\{\tau_{p}(x)=x+p\mid p\in\mathcal{E}(C)\}\) is an infinite set and that \(x\in\mathcal{E}(\overline{k})\) is a pole of \(u_{j}\) if and only if for each \(p\in\mathcal{E}(C)\), \(\tau_{-p}(x)=x-p\) is a pole of \(\tau_{p}^{*}(u_{j})\). Thus \(T_{j}\) is an infinite set.
For each \(1\leq i\leq n\), let \(S_{i}\) be the finite set of all zeros and poles of \(u_{i}\). Since \(f_{j\tau}\in\overline{k}\) and \(m_{ij\tau}\) are integers, the set of all poles of \(f_{j\tau}\prod_{i=1}^{n}u_{i}^{m_{ij\tau}}\) is a subset of the finite set \(\cup_{i=1}^{n}S_{i}\). Since \(l_{\tau}\) is a positive integer, if \(y\in T_{j}\) is a pole of \(\tau(u_{j})\) then \(y\) is also a pole of \(\tau(u_{j}^{l_{\tau}})\). Therefore, from Equation (3.7), we must have \(T_{j}\subset\cup_{i=1}^{n}S_{i}\), which contradicts the fact that \(T_{j}\) is an infinite set. Thus each \(u_{1},\cdots,u_{n}\) must belong to \(\overline{k}\). Similarly, one shows that \(v\in\overline{k}\).
Proof of Theorem 1.1.: By induction on \(m\), we may assume that \(f\in k\) has an elementary integral over \(E_{1}\). Depending on the type of the extension \(E_{1}\), we shall apply [11, Theorem] or one of the Lemmas 3.1, 3.2, 3.3 and obtain that \(f\) has an elementary integral over \(k\).
|
2303.04995
|
Text-Visual Prompting for Efficient 2D Temporal Video Grounding
|
In this paper, we study the problem of temporal video grounding (TVG), which
aims to predict the starting/ending time points of moments described by a text
sentence within a long untrimmed video. Benefiting from fine-grained 3D visual
features, the TVG techniques have achieved remarkable progress in recent years.
However, the high complexity of 3D convolutional neural networks (CNNs) makes
extracting dense 3D visual features time-consuming, which calls for intensive
memory and computing resources. Towards efficient TVG, we propose a novel
text-visual prompting (TVP) framework, which incorporates optimized
perturbation patterns (that we call 'prompts') into both visual inputs and
textual features of a TVG model. In sharp contrast to 3D CNNs, we show that TVP
allows us to effectively co-train vision encoder and language encoder in a 2D
TVG model and improves the performance of crossmodal feature fusion using only
low-complexity sparse 2D visual features. Further, we propose a
Temporal-Distance IoU (TDIoU) loss for efficient learning of TVG. Experiments
on two benchmark datasets, Charades-STA and ActivityNet Captions datasets,
empirically show that the proposed TVP significantly boosts the performance of
2D TVG (e.g., 9.79% improvement on Charades-STA and 30.77% improvement on
ActivityNet Captions) and achieves 5x inference acceleration over TVG using 3D
visual features. Codes are available at Open.Intel.
|
Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding
|
2023-03-09T02:38:32Z
|
http://arxiv.org/abs/2303.04995v3
|
# Text-Visual Prompting for Efficient 2D Temporal Video Grounding
###### Abstract
In this paper, we study the problem of temporal video grounding (**TVG**), which aims to predict the starting/ending time points of moments described by a text sentence within a long untrimmed video. Benefiting from fine-grained 3D visual features, the TVG techniques have achieved remarkable progress in recent years. However, the high complexity of 3D convolutional neural networks (CNNs) makes extracting dense 3D visual features time-consuming, which calls for intensive memory and computing resources. Towards efficient TVG, we propose a novel text-visual **p**rompting (**TVP**) framework, which incorporates optimized perturbation patterns (that we call 'prompts') into both visual inputs and textual features of a TVG model. In sharp contrast to 3D CNNs, we show that TVP allows us to effectively co-train vision encoder and language encoder in a 2D TVG model and improves the performance of cross-modal feature fusion using only low-complexity sparse 2D visual features. Further, we propose a **T**emporal-**D**istance **IoU** (**TDIoU**) loss for efficient learning of TVG. Experiments on two benchmark datasets, Charades-STA and ActivityNet Captions datasets, empirically show that the proposed TVP significantly boosts the performance of 2D TVG (e.g., 9.79% improvement on Charades-STA and 30.77% improvement on ActivityNet Captions) and achieves \(5\times\) inference acceleration over TVG using 3D visual features. Codes are available at Open.Intel.
## 1 Introduction
In recent years, we have witnessed great progress on temporal video grounding (**TVG**) [30, 74]. One key to this success comes from the fine-grained dense 3D visual features extracted by 3D convolutional neural networks (CNNs) (_e.g._, C3D [56] and I3D [3]) since TVG tasks demand spatial-temporal context to locate the temporal interval of the moments described by the text query. However, due to the high cost of the dense 3D feature extraction, most existing TVG models only take these 3D visual features extracted by offline 3D CNNs as inputs instead of co-training during TVG model training.
Although models using 3D visual features (that we call '**3D methods or models**') outperform those using 2D features (that we call '**2D methods or models**'), a unique advantage of 2D methods is that extracting 2D visual features significantly reduces the cost of TVG tasks [14, 15, 30, 61, 62, 69, 74, 75]. Efficient and lightweight solutions with reasonable performance are also in demand across computer vision, NLP, and video-language tasks [19, 23, 41, 68, 76, 77, 78]. As discussed above, the methods employing 3D video features are challenging to be
Figure 1: The architecture and performance comparison among TVG methods: **a)** 3D TVG methods [14, 16, 69, 67, 62, 64, 67, 71, 73], **b)** 2D TVG methods [1, 7], and **c)** TVP-based 2D TVG (Ours), **d)** overall performance comparison. Ours is the most efficient (least inference time) and achieves competitive performance compared to 3D TVG methods. In contrast to existing TVG methods, which utilize dense video features extracted by non-trainable _offline 3D CNNs_ and textual features, our proposed framework utilizes a trainable _2D CNN_ as the vision encoder to extract features from sparsely-sampled video frames with a universal set of frame-aware visual prompts and adds text prompts in textual feature space for end-to-end regression-based modeling.
employed in practical applications. It thus has significant practical and economic value to develop compact 2D solutions for TVG tasks. In this paper, we ask:
_How to advance 2D TVG methods so as to achieve comparable results to 3D TVG methods?_
To address this problem, we propose a novel text-visual prompting (**TVP**) framework for training TVG models using 2D visual features. As shown in **Fig. 1**, existing 2D and 3D TVG methods all utilize offline pretrained vision encoders and language encoders to perform feature extraction. In contrast, our proposed TVP framework is end-to-end trainable. Furthermore, benefiting from text-visual prompting and cross-modal pretraining on large-scale image-text datasets, our proposed framework could achieve comparable performance to 3D TVG methods with significant inference time acceleration.
Conventionally, TVG methods consist of three stages: 1 extracting features from visual and text inputs; 2 multi-modal feature fusion; 3 cross-modal modelling. In contrast to conventional methods, TVP incorporates optimized input perturbation patterns (that we call '**prompts**') into both visual inputs and textual features of a TVG model. We apply trainable parameters in the textual features as text prompts and develop a universal set of frame-aware patterns as visual prompts. Specifically, we sample a fixed number of frames from a video and, during training, optimize text prompts for the input query sentence and a set of visual prompts for frames at different temporal locations. During testing, the same set of optimized visual prompts and text prompts is applied to all test-time videos. We refer readers to **Fig. 2** for illustrations of the introduced visual and text prompts. To the best of our knowledge, our work makes the first attempt to utilize prompt learning to successfully improve the performance of regression-based TVG tasks using 2D visual features.
Compared to 3D CNNs, 2D CNNs lose spatiotemporal information of the video during feature extraction. Inspired by the success of transformers on vision-language tasks [54, 55, 22, 35, 44, 47, 9, 25] and the recent application of prompt learning to transformers in both vision and language domains [37, 25, 27, 32, 23, 40], we choose a transformer as our base TVG model and propose to utilize prompts to compensate for the lack of spatiotemporal information in 2D visual features. Furthermore, we develop a Temporal-Distance IoU (**TDIoU**) loss for training our proposed framework. There are two aspects that distinguish our proposed framework from existing works. First, our proposed framework is designed to boost the performance of regression-based TVG methods utilizing 2D CNNs as the vision encoder, not for transfer learning [26, 21, 2]. Second, our proposed framework utilizes a 2D CNN to extract visual features from sparsely-sampled video frames, which requires less memory and is easier to apply in practical applications than 3D methods [69, 34, 60, 61, 62, 75], especially for long videos. Furthermore, thanks to the compact 2D CNN as the vision encoder, our proposed framework enables co-training of the language encoder and vision encoder for better multimodal feature fusion. In summary, the **contributions** of this work are unfolded below:
* We propose an effective and efficient framework to train 2D TVG models, in which we leverage TVP (text-visual prompting) to improve the utility of sparse 2D visual features without resorting to costly 3D features. To the best of our knowledge, it is the first work to expand the application of prompt learning for resolving TVG problems. Our method outperforms all 2D methods and achieves performance competitive with 3D TVG methods.
* Technology-wise, we integrate visual prompts with text prompts to jointly improve the effectiveness of 2D visual features. On top of that, we propose a TDIoU (temporal-distance IoU)-based prompt-model co-training method to obtain high-accuracy 2D TVG models.
* Experiment-wise, we show the empirical success of our proposal to boost the performance of 2D TVG on Charades-STA and ActivityNet Captions datasets, _e.g._, 9.79% improvement in Charades-STA, and \(30.77\%\) in ActivityNet-Captions together with \(5\times\) inference time acceleration over 3D TVG methods.
## 2 Related Work
**Temporal Video Grounding (TVG).** The objective of TVG is to predict the starting/ending time points of target moments within an untrimmed video, described by a text sentence. Early TVG solutions [7, 70, 14, 20, 64, 62] mainly employ a two-stage "propose-and-rank" pipeline:
Figure 2: Text-visual prompting illustration. (a) Text prompts are directly applied in the feature space. (b) A set of visual prompts are applied to video frames in order.
1 Propose: utilize sliding windows or a proposal network to generate proposal candidates from the input video. 2 Rank: the proposal candidates are ranked according to the text query, and the proposal with the highest ranking is taken as the final prediction. In contrast to proposal-based methods, regression-based methods [67, 69, 16] directly predict the starting/ending time points of the target moments without ranking massive proposal candidates. Thus, regression-based methods are much faster than proposal-based methods, which is one reason why our work focuses on regression-based TVG. Furthermore, reinforcement learning (RL)-based methods formulate the TVG task as a sequence of decisions [60, 18]. In particular, they train an agent to control the movement of a window by shifting or scaling it. During training, the agent is rewarded or penalized based on whether the window is closer to the target moment after an adjustment.
**Temporal Action Detection (TAD).** TAD aims to determine whether predefined actions occur in a video and to predict the corresponding time intervals during which these actions occur [53, 56, 59, 12, 48, 13, 63]. Different from TVG, the input of TAD is only a video. In other words, TAD only requires a semantic understanding of videos. Compared to TAD, TVG is more challenging since it requires a semantic understanding of both videos and natural languages. Furthermore, TVG needs to process the multimodal interaction between videos and natural languages.
**Text Prompting.** Prompting has recently achieved great success in the domain of natural language processing [37, 40, 46, 49, 50, 51, 52, 58]. Text prompting is a process that leverages a data-agnostic perturbation operation applied to text inputs or their embeddings to improve the performance of the downstream task. The simplest approach is to construct an input context template crafted manually [46, 49, 50, 51]. Although manually-crafted context templates are simple and interpretable, they are typically not the optimal input prompts. To tackle this issue, other work has focused on searching for optimal prompts in the discrete input space [52, 58, 25] or in the language model's embedding space [37, 40, 32].
**Visual Prompting.** Inspired by the idea of prompt learning in NLP [37], visual prompting (VP) was first proposed by Bahng _et al._[2] to reprogram a source vision model (_e.g._, an ImageNet-pretrained classifier) to accomplish downstream target tasks (_e.g._, CIFAR-10 image classification). VP shares almost the same idea with model reprogramming technology in the vision domain [6, 11, 56, 57, 65, 72, 81], which incorporates a universal input perturbation into testing data so as to improve a desired performance metric, _e.g._, target task accuracy, robustness, or fairness.
**Multi-Modal Prompting.** Although visual prompting and text prompting have recently attracted much attention, they remain under-explored in multi-modal learning, especially for the temporal video grounding task. Existing works [2, 27, 66] mainly focus on integrating text and visual prompts with the CLIP (Contrastive Language-Image Pretraining) model to improve downstream tasks on imagery data. The problem of multi-modal prompting in video understanding has not been studied. In this paper, we for the first time develop the text-visual prompting technique to improve the performance of temporal video grounding using 2D visual features.
## 3 Methods
In this section, we begin with the problem formulation of regression-based TVG. Then we describe the design of text-visual prompts (TVP) and present an overview of our proposed TVP framework.
### Problem Definition
Let \(\mathbf{v}\in\mathbb{R}^{N_{\mathrm{vid}}\times C\times H\times W}\) be an untrimmed video consisting of a sequence of \(N_{\mathrm{vid}}\) video frames, and \(\mathbf{s}\in\mathbb{R}^{N_{\mathrm{tex}}}\) be a text query consisting of a sequence of \(N_{\mathrm{tex}}\) language tokens. Here, the video-query pair \((\mathbf{v},\mathbf{s})\) belongs to a video-language dataset \(\mathcal{D}\). Given \(\mathbf{v}\) and \(\mathbf{s}\), TVG aims to predict the time interval \(\mathbf{\hat{T}}=(\hat{t}_{\mathrm{sta}},\hat{t}_{\mathrm{end}})\) of the target video moments described by the query \(\mathbf{s}\). The TVG model that fuses the vision-language modalities can be described as:
\[\mathbf{\hat{T}}=f(\ g_{\mathrm{tex}}(\mathbf{s}),\ g_{\mathrm{vid}}(\mathbf{v })\ ), \tag{1}\]
where \(f\) denotes TVG model, and \(g_{\mathrm{vid}}\) and \(g_{\mathrm{tex}}\) represent vision encoder and language encoder, respectively.
### TDIoU Loss Function
Conventionally, the TVG model can be learned by minimizing the **temporal IoU loss**\(\mathcal{L}_{\mathrm{tIoU}}\) defined below:
\[\mathcal{L}_{\mathrm{tIoU}}=\left(1-\frac{\mathbf{\hat{T}}(\mathbf{\theta})\bigcap \mathbf{T}}{\mathbf{\hat{T}}(\mathbf{\theta})\bigcup\mathbf{T}}\right), \tag{2}\]
where, for ease of notation, \(\mathbf{\theta}\) denotes all the trainable parameters involved in (1), and \(\mathbf{T}=(t_{\mathrm{sta}},t_{\mathrm{end}})\) is the label (_i.e._, the ground-truth time interval) of the target moment associated with the input video-query pair \((\mathbf{v},\mathbf{s})\). The rationale behind (2) is to maximize the overlap between the predicted time interval and its ground truth.
However, for non-overlapping cases, the temporal IoU loss \(\mathcal{L}_{\mathrm{tIoU}}\) would encounter a gradient vanishing problem. Inspired by [82], we develop a novel **TDIoU** (Temporal-Distance IoU) loss for training our proposed TVG models by incorporating the normalized central time point distance and duration difference between the predicted video clips and the target video clips. We elaborate on the proposed loss below.
**Distance Loss \(\mathcal{L}_{\rm dis}\).** To avoid the gradient vanishing problem caused by the non-overlapping case, we involve distance loss \(\mathcal{L}_{\rm dis}\) to directly minimize the normalized central time point distance. In addition, we add a threshold \(\alpha_{1}\) to prevent oscillation in the later training phase. The distance loss is then given by:
\[\mathcal{L}_{\rm dis}=\max\left(\frac{\left|\left(t_{\rm sta}+t_{ \rm end}\right)/2-\left(\hat{t}_{\rm sta}+\hat{t}_{\rm end}\right)/2\right|}{ \left|\hat{\mathbf{T}}\bigcup\mathbf{T}\right|},\,\alpha_{1}\right), \tag{3}\]
where recall that \(\mathbf{T}=(t_{\rm sta},t_{\rm end})\), \(\hat{\mathbf{T}}\) is predicted by the TVG model (1), and we choose \(\alpha_{1}=0.2\) in experiments.
**Duration Loss \(\mathcal{L}_{\rm dur}\).** The introduction of the distance loss \(\mathcal{L}_{\rm dis}\) avoids the gradient vanishing problem but only considers the central time point distance. Yet, this may not be precise enough: even if the central time points coincide, the durations of the two video clips may differ. Inspired by the above, we propose the duration loss:
\[\mathcal{L}_{\rm dur}=\max\left(\frac{\left|\mathbf{T}-\hat{\mathbf{T}}( \boldsymbol{\theta})\right|}{\left|\mathbf{T}\right|},\,\alpha_{2}\right), \tag{4}\]
where \(\alpha_{2}\) is the precision tolerance threshold and set by 0.4 in our experiments.
Finally, the proposed Temporal-Distance IoU (TDIoU) loss is given by
\[\mathcal{L}=\mathcal{L}_{\rm 1IoU}+\beta_{1}\mathcal{L}_{\rm dis}+\beta_{2} \mathcal{L}_{\rm dur}, \tag{5}\]
where \(\beta_{1}>0\) and \(\beta_{2}>0\) are regularization parameters.
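To make the combined objective concrete, the following is a minimal PyTorch-style sketch of the TDIoU loss in Eqs. (2)-(5); the batched interval representation, the epsilon guard, and the treatment of the union as the enclosing time span are our assumptions rather than the reference implementation.

```
import torch

def tdiou_loss(pred, target, alpha1=0.2, alpha2=0.4, beta1=1.0, beta2=0.1, eps=1e-6):
    """Temporal-Distance IoU loss (Eq. 5) for a batch of (t_start, t_end) intervals.

    pred, target: float tensors of shape (B, 2); the default hyperparameters
    follow the paper's experimental setup.
    """
    p_sta, p_end = pred[:, 0], pred[:, 1]
    t_sta, t_end = target[:, 0], target[:, 1]

    # Temporal IoU loss (Eq. 2): 1 - |intersection| / |union|.
    inter = (torch.minimum(p_end, t_end) - torch.maximum(p_sta, t_sta)).clamp(min=0)
    union = torch.maximum(p_end, t_end) - torch.minimum(p_sta, t_sta)  # enclosing span
    l_tiou = 1.0 - inter / (union + eps)

    # Distance loss (Eq. 3): normalized distance between interval centers,
    # floored at alpha1 to prevent oscillation late in training.
    center_dist = ((t_sta + t_end) - (p_sta + p_end)).abs() / 2
    l_dis = torch.maximum(center_dist / (union + eps),
                          torch.full_like(center_dist, alpha1))

    # Duration loss (Eq. 4): normalized duration difference, floored at alpha2.
    dur_diff = ((t_end - t_sta) - (p_end - p_sta)).abs()
    l_dur = torch.maximum(dur_diff / (t_end - t_sta + eps),
                          torch.full_like(dur_diff, alpha2))

    # Combined TDIoU loss (Eq. 5).
    return (l_tiou + beta1 * l_dis + beta2 * l_dur).mean()
```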
### Text-Visual Prompt Design
Inspired by the application of prompts on transformers [2, 36, 37, 21], we propose jointly text-visual prompting to boost the performance of our models, in which prompts are optimized perturbation patterns. To improve data processing efficiency, we uniformly sample video frames from the untrimmed video \(\mathbf{v}\) to obtain \(\mathbf{v}_{\rm sam}\in\mathbb{R}^{N_{\rm sam}\times C\times H\times W}\), where \(N_{\rm sam}\) is the number of sampled video frames. In addition, we introduce a set of frame-aware visual prompts \(\boldsymbol{\delta}_{\rm vp}\in\mathbb{R}^{N_{\rm sam}\times d_{\rm vp}}\) in the pixel space of sampled video frames \(\mathbf{v}_{\rm sam}\), and introduce text prompts \(\boldsymbol{\delta}_{\rm tp}\in\mathbb{R}^{N_{\rm tp}\times d_{\rm tp}}\) in the textual feature space. By incorporating video frame sampling and text-visual prompts into the TVG model (1), we obtain:
\[(\hat{t}_{\rm sta},\hat{t}_{\rm end})=f(\ \boldsymbol{\delta}_{\rm tp},\ g_{ \rm tex}(\mathbf{s}),\ g_{\rm vid}(\mathbf{v}_{\rm sam}+\boldsymbol{\delta}_{ \rm vp})\ ). \tag{6}\]
Given a pre-trained 2D TVG model \(f\), the objective of text-visual prompting (TVP) is to learn a universal set of visual prompts \(\boldsymbol{\delta}_{\rm vp}\) and text prompts \(\boldsymbol{\delta}_{\rm tp}\) to be integrated into sampled video frames and textual features, respectively. Specially, a set of different visual prompts are applied to uniformly-sampled frames of one untrimmed video in order. During training, only the set of visual prompts and text prompts are updated through backpropagation. During fine-tuning, prompts are frozen, and the parameters of the TVG model and encoders are updated. During testing, the set of optimized visual prompts and the optimized text prompts are applied to all test-time video-query pairs.
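A minimal sketch of how such prompts could be realized is given below, assuming the '_replace_' operation discussed in Sec. 4 and a fixed border region standing in for the padding area of each \(448\times 448\) frame; the border approximation of the padding location, the initialization, and the choice to concatenate text prompts to the textual feature sequence are illustrative assumptions, not the released code.

```
import torch
import torch.nn as nn

class FrameAwareVisualPrompts(nn.Module):
    """Universal frame-aware visual prompts applied with the 'replace' operation.

    Simplifying assumption: the padding area of every 448x448 frame is
    approximated by a fixed border of width `pad`; pixels there are replaced
    by a trainable pattern that depends only on the frame index.
    """
    def __init__(self, num_frames=48, pad=96, size=448, channels=3):
        super().__init__()
        self.prompts = nn.Parameter(torch.zeros(num_frames, channels, size, size))
        mask = torch.zeros(1, 1, size, size)
        mask[..., :pad, :] = 1
        mask[..., -pad:, :] = 1
        mask[..., :, :pad] = 1
        mask[..., :, -pad:] = 1
        self.register_buffer("border_mask", mask)

    def forward(self, frames):                         # frames: (B, T, C, H, W)
        m = self.border_mask
        return frames * (1 - m) + self.prompts.unsqueeze(0) * m


class TextPrompts(nn.Module):
    """Trainable text prompts prepended to the textual feature sequence."""
    def __init__(self, num_prompts=10, dim=768):
        super().__init__()
        self.prompts = nn.Parameter(0.02 * torch.randn(num_prompts, dim))

    def forward(self, text_feats):                     # text_feats: (B, N_tex, dim)
        batch = text_feats.shape[0]
        return torch.cat([self.prompts.expand(batch, -1, -1), text_feats], dim=1)
```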
Figure 3: Overview of our proposed TVP (text-visual prompting) framework for 2D TVG (temporal video grounding). The whole process contains four phases: \(\boldsymbol{\Theta}\) Video frame preprocessing: uniformly sample frames from input video and apply a set of frame-aware visual prompts to the sampled frames in order, \(\boldsymbol{\Theta}\) Feature extraction: 2D CNN extracts features from sampled video frames with visual prompts, and the language encoder extracts textual features. In addition, the visual features would be spatially downsampled and temporally fused by max pooling and mean pooling, respectively. \(\boldsymbol{\Theta}\) Multimodal feature processing: after spatial downsampling and temporal fusion, the 2D visual features would be integrated into the prompted textual features. \(\boldsymbol{\Theta}\) Crossmodal fusion: the multimodal features would be processed by a 12-layer transformer encoder, and MLP would predict the starting/ending time points of the target moment.
### Framework
Inspired by the success of transformers in vision-language tasks, we choose ClipBERT [31] as the base model for 2D TVG. Extended from ClipBERT, the input of our regression-based TVG model would be describable sentences and uniformly sampled frames of one untrimmed video as shown in **Fig. 3**. Then, the predicted starting and ending time points of the target video clip would be model outputs. As described in **Algorithm 1**, there are four phases of our proposed TVP framework: \(\blacklozenge\)
**Video frame preprocessing**: We obtain sparsely-sampled frames \(\mathbf{v}_{\mathrm{sam}}\) from one input untrimmed video \(\mathbf{v}\), and apply universal frame-aware visual prompts \(\boldsymbol{\delta}_{\mathrm{vp}}\) on top of frames at the padding location. \(\blacklozenge\)**Feature extraction**: 2D vision encoder (first 5 ConvBlock of ResNet-50) \(g_{\mathrm{vid}}\) and language encoder (a trainable word embedding layer) \(g_{\mathrm{tex}}\) would extract features from the prompted frames \(\mathbf{v}^{\prime}_{\mathrm{sam}}\) and textual inputs \(\mathbf{s}\), respectively. \(\blacklozenge\)**Multimodal feature processing**: Following the setting of Pixel-BERT [22], the 2D visual features \(\mathbf{Q}_{\mathrm{vid}}\) are downsampled spatially by a \(2\times 2\) max-pooling layer and fused temporally by a mean-pooling layer. Then, text prompts \(\boldsymbol{\delta}_{\mathrm{tp}}\) are integrated into textual features \(\mathbf{Q}_{\mathrm{tex}}\). In addition, trainable 2D visual position embeddings \(\mathbf{M}_{\mathrm{2D}}\) and textual position embeddings \(\mathbf{M}_{\mathrm{pos}}\) are applied to the processed 2D visual features \(\mathbf{Q}^{\prime}_{\mathrm{vid}}\) and prompted textual features \(\mathbf{Q}^{\prime}_{\mathrm{tex}}\), respectively [10, 31]. Afterwards, the processed and position-encoded 2D visual features \(\mathbf{Q}^{\prime\prime}_{\mathrm{vid}}\) are flattened and integrated into prompted and position-encoded textual features \(\mathbf{Q}^{\prime\prime}_{\mathrm{tex}}\). Moreover, type embeddings \(\mathbf{M}_{\mathrm{type}}\) would be added to the integrated multimodal features \(\mathbf{Q}_{\mathrm{all}}\) to indicate the source type of features. \(\blacklozenge\)
**Crossmodal fusion**: A 12-layer transformer [10] is utilized for crossmodal fusion on \(\mathbf{Q}_{\mathrm{all}}\), and then a multilayer perceptron (MLP) ending with a sigmoid function is used as the prediction head to process the last-layer crossmodal representation \(\mathbf{Q}_{\mathrm{CM}}\) of the transformer and generate the predicted starting/ending time points \((\hat{t}_{\mathrm{sta}},\hat{t}_{\mathrm{end}})\) of the target moments described by the text query input.
```
0: vision encoder \(g_{\mathrm{vid}}\), language encoder \(g_{\mathrm{tex}}\), position embeddings \(\mathbf{M}_{\mathrm{pos}}\), 2D position embeddings \(\mathbf{M}_{\mathrm{2D}}\), type embeddings \(\mathbf{M}_{\mathrm{type}}\), transformer \(f\), prediction head \(MLP\), visual prompts \(\boldsymbol{\delta}_{\mathrm{vp}}\), text prompts \(\boldsymbol{\delta}_{\mathrm{tp}}\)
0: Predicted time interval \(\hat{\mathbf{T}}=(\hat{t}_{\mathrm{sta}},\hat{t}_{\mathrm{end}})\) Phase \(\blacklozenge\): Video frame preprocessing
1:\(\mathbf{v}_{\mathrm{sam}}\leftarrow\) uniformly sample video frames from an untrimmed video \(\mathbf{v}\)
2:\(\mathbf{v}^{\prime}_{\mathrm{sam}}\leftarrow\) apply visual prompts \(\boldsymbol{\delta}_{\mathrm{vp}}\) to the sampled video frames \(\mathbf{v}_{\mathrm{sam}}\) Phase \(\blacklozenge\): Feature Extraction
3:\(\mathbf{Q}_{\mathrm{vid}}=g_{\mathrm{vid}}(\mathbf{v}^{\prime}_{\mathrm{sam}})\leftarrow\) extracting 2D visual features
4:\(\mathbf{Q}_{\mathrm{tex}}=g_{\mathrm{tex}}(\mathbf{s})\leftarrow\) extracting textual features Phase \(\blacklozenge\): Multimodal feature processing
5:\(\mathbf{Q}^{\prime}_{\mathrm{vid}}\leftarrow\) apply spatial downsampling and temporal fusion to 2D visual features \(\mathbf{Q}_{\mathrm{vid}}\)
6:\(\mathbf{Q}^{\prime}_{\mathrm{tex}}\leftarrow\) apply text prompts \(\boldsymbol{\delta}_{\mathrm{tp}}\) to textual features \(\mathbf{Q}_{\mathrm{tex}}\)
7:\(\mathbf{Q}^{\prime\prime}_{\mathrm{vid}}\leftarrow\) add 2D visual position embeddings \(\mathbf{M}_{\mathrm{2D}}\) on the processed 2D visual features \(\mathbf{Q}^{\prime}_{\mathrm{vid}}\)
8:\(\mathbf{Q}^{\prime\prime}_{\mathrm{tex}}\leftarrow\) add position embeddings \(\mathbf{M}_{\mathrm{pos}}\) to prompted textual features \(\mathbf{Q}^{\prime}_{\mathrm{tex}}\)
9:\(\mathbf{Q}_{\mathrm{all}}\leftarrow\) integrate the processed and position-encoded textual features \(\mathbf{Q}^{\prime\prime}_{\mathrm{tex}}\) and the processed and position-encoded 2D visual features \(\mathbf{Q}^{\prime\prime}_{\mathrm{vid}}\)
10:\(\mathbf{Q}_{\mathrm{all}}+\mathbf{M}_{\mathrm{type}}\leftarrow\) add type embeddings \(\mathbf{M}_{\mathrm{type}}\) to the integrated multimodal features \(\mathbf{Q}_{\mathrm{all}}\) Phase \(\blacklozenge\): Crossmodal fusion
11:\(\mathbf{Q}_{\mathrm{CM}}=f(\mathbf{Q}_{\mathrm{all}}+\mathbf{M}_{\mathrm{type}})\leftarrow\) implement crossmodal fusion through transformer \(f\)
12:\((\hat{t}_{\mathrm{sta}},\hat{t}_{\mathrm{end}})=MLP(\mathbf{Q}_{\mathrm{CM}})\leftarrow\) prediction head generates the predicted time interval according to crossmodal representation \(\mathbf{Q}_{\mathrm{CM}}\)
```
**Algorithm 1** Overview of TVP framework
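For concreteness, a hypothetical condensed PyTorch sketch of phases 2 to 4 is shown below; the 1x1 channel projection, the choice of the first token for prediction, and all dimensions are illustrative assumptions standing in for the ResNet-50/BERT-base components and position/type embeddings described above.

```
import torch
import torch.nn as nn

class TVPFusionHead(nn.Module):
    """Condensed sketch of feature processing, crossmodal fusion, and prediction.

    `vis_feats` stands in for per-frame 2D CNN features (e.g., from the first
    five ConvBlocks of ResNet-50); `text_feats` are prompted textual features.
    """
    def __init__(self, dim=768, vis_channels=2048, num_layers=12, num_heads=12):
        super().__init__()
        self.vis_proj = nn.Conv2d(vis_channels, dim, kernel_size=1)    # channel projection
        self.spatial_pool = nn.MaxPool2d(2)                            # 2x2 spatial downsampling
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, 2), nn.Sigmoid())      # normalized (t_sta, t_end)

    def forward(self, vis_feats, text_feats):
        # vis_feats: (B, T, C, H, W); text_feats: (B, N, dim)
        b, t = vis_feats.shape[:2]
        v = self.spatial_pool(vis_feats.flatten(0, 1))                  # (B*T, C, H/2, W/2)
        v = self.vis_proj(v).flatten(2)                                 # (B*T, dim, S)
        v = v.view(b, t, *v.shape[1:]).mean(dim=1)                      # temporal mean pooling
        q = torch.cat([text_feats, v.transpose(1, 2)], dim=1)           # multimodal sequence
        q = self.fusion(q)                                              # 12-layer transformer
        return self.head(q[:, 0])                                       # predict from first token
```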
## 4 Experiments
In this section, we demonstrate the effectiveness of our proposed TVP framework on Charades-STA and ActivityNet Captions datasets.
### Experiment Setup
**Datasets.** The evaluations are implemented on two standard benchmark datasets for the TVG task, Charades-STA [14] and ActivityNet Captions [28]. **Tab. 1** summarizes the details of both datasets. The **Charades-STA** dataset contains \(6,672\) videos and \(16,124\) text queries in total. The average length of videos is \(30.6s\), and the average length of text queries is \(7.2\) _words_. The average length of moments corresponding to the text queries is \(8.1s\). Following the same dataset split as [14] for fair comparisons, there are \(12,408\) video-query pairs for training and \(3,720\) pairs for testing. The **ActivityNet Captions** dataset contains \(14,926\) videos and \(71,953\) text queries in total. The average length of videos is \(117.6s\), and the average length of text queries is \(14.4\) _words_.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Dataset & Charades-STA & ActivityNet Captions \\ \hline Domain & Indoor Activity & Indoor/Outdoor Activity \\ \hline \# Videos & \(6,672\) & \(14,926\) \\ Avg. Video Length (\(second\)) & \(30.6\) & \(117.6\) \\ \hline \# Moments & \(11,767\) & \(71,953\) \\ Avg. Moment Length (\(second\)) & \(8.1\) & \(37.1\) \\ \hline Vocabulary Size & \(1,303\) & \(15,505\) \\ \# Queries & \(16,124\) & \(71,953\) \\ Avg. Query Length (\(word\)) & \(7.2\) & \(14.4\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of TVG benchmark datasets (Charades-STA and ActivityNet Captions datasets).
The average length of moments corresponding to the text query is \(37.1s\). ActivityNet Captions dataset is split into training set, validation set, and testing set in a \(2:1:1\) ratio. Since the testing set is withheld for competition, only a training set and two validation sets (_val1_ and _val2_) can be accessed publicly. For fair comparisons, we evaluate our proposed framework on _val1_.
**Baselines.** We compare our proposal with \(15\) baseline methods: 1 **Proposal-based**: CTRL [14], MCN [1], SAP [7], BPNet [62], LPNet [61], QSPN [64], MAN [71]; 2 **Proposal-free**: ABLR [67], DRN [69], CPNet [34], DEBUG [43], ExCL [16], VSLNet [73]; 3 **Reinforcement learning**: TSP-PRL [60], TripNet [18].
**Evaluation metrics.** Following [14], we adopt Acc(R@1, IoU=m) as the performance evaluation metric, which represents the percentage accuracy of top-\(1\) predicted moments whose tIoU (temporal IoU) with the ground-truth moment is larger than \(m\). By convention, we consider the following tIoU threshold values \(m=\{0.3,0.5,0.7\}\).
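For reference, a small self-contained sketch of this metric is given below; the interval format and the example numbers are ours.

```
def temporal_iou(pred, gt):
    """tIoU between two intervals given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_1(predictions, ground_truths, m=0.5):
    """Acc(R@1, IoU=m): fraction of top-1 predictions with tIoU larger than m."""
    hits = sum(temporal_iou(p, g) > m for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)

# Hypothetical example: two queries, thresholds m in {0.3, 0.5, 0.7}.
preds = [(2.0, 9.5), (11.0, 20.0)]
gts = [(1.5, 10.0), (14.0, 21.0)]
print([recall_at_1(preds, gts, m) for m in (0.3, 0.5, 0.7)])  # [1.0, 1.0, 0.5]
```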
**Crossmodal pretraining setup.** Our 2D vision encoder (ResNet-50) is initialized with the weight from grid-feat [24], which can extract effective grid features from visual inputs. In addition, both the language encoder and 12-layer transformer are initialized with the BERT-base model weight [10], which are pretrained on English Wikipedia and BookCorpus [83]. Thanks to the compact 2D vision encoder, TVP (our proposal) is able to directly utilize image-text pairs for end-to-end training. Since the benefits of cross-modal pretraining has been demonstrated by [22, 44, 55], our base model is pretrained on two large-scale image-text datasets, which are Visual Genome Captions [29] and COCO Captions [8]. To be more specific, image-text matching [44, 55] and masked language modeling [10] are employed for cross-modal pretraining.
**Implementation setup.** For video inputs, we uniformly sample \(N_{\mathrm{sam}}\) frames from a video (\(N_{\mathrm{sam}}=48\) for Charades-STA and \(N_{\mathrm{sam}}=64\) for ActivityNet Captions). In addition, all video frames are resized to have a maximum longer side of \(448\) with the original aspect ratio, and then the frames are zero-padded to \(448\times 448\). The default visual prompt size for both datasets is \(96\). The default text prompt sizes are \(10\) and \(20\) for Charades-STA and ActivityNet Captions, respectively. We utilize the first 5 ConvBlocks of ResNet-50 as the 2D vision encoder and a trainable embedding layer as the language encoder for both Charades-STA and ActivityNet Captions datasets. For text queries, all word tokens are maintained after lower-case conversion and tokenization. We use AdamW [42] for end-to-end model training, with \(\beta_{1}=1.0\), \(\beta_{2}=0.1\), \(\alpha_{1}=0.2\), \(\alpha_{2}=0.4\). Initial learning rates are \(1e-1\) and \(5e-7\) for prompt training and model finetuning, respectively. In addition, the learning rate linearly decays to \(0\), with the first \(10\%\) of training steps used for warmup. Our experiments are implemented in PyTorch [45], and models and prompts are fine-tuned separately for \(12\) epochs with mixed precision on 8 NVIDIA V100 GPUs.
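The schedule described above could be realized as in the following sketch; the parameter grouping and the exact scheduler form are our assumptions about the training loop rather than the authors' released configuration.

```
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

def make_optimizer_and_scheduler(params, lr, total_steps, warmup_frac=0.1):
    """AdamW with linear warmup over the first 10% of steps, then linear decay to 0."""
    optimizer = AdamW(params, lr=lr)
    warmup_steps = int(warmup_frac * total_steps)

    def lr_lambda(step):
        if step < warmup_steps:
            return step / max(1, warmup_steps)
        return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

    return optimizer, LambdaLR(optimizer, lr_lambda)

# Hypothetical two-stage use (an assumption about the loop structure):
# stage 1 optimizes only the prompts, stage 2 freezes prompts and fine-tunes the model.
# prompt_opt, prompt_sched = make_optimizer_and_scheduler(prompt_params, 1e-1, total_steps)
# model_opt, model_sched = make_optimizer_and_scheduler(model_params, 5e-7, total_steps)
```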
at tIoU thresholds \(m=0.3\) and \(m=0.5\). The combination of text and visual prompts not only achieves \(7.55\%\) and \(9.79\%\) improvements at tIoU thresholds \(m=0.3\) and \(m=0.5\), but also improves the performance by \(8.14\%\) at \(m=0.7\). This demonstrates the effectiveness and necessity of joint text-visual prompting.
**Effectiveness of TVP on ActivityNet Captions.** We focus on performance comparisons with 3D TVG methods on ActivityNet since there are no results of 2D TVG methods reported on ActivityNet Captions. The results of multiple methods on the ActivityNet Captions dataset are reported in **Tab. 3**. Even on this more challenging dataset, our proposed method still achieves competitive performance compared to 3D TVG methods. Different from the performance of TVP on the Charades-STA dataset, text prompts or visual prompts alone achieve a significant performance boost on the base model over all tIoU thresholds \(m\) (\(5.73\%\) at \(m=0.3\), \(8.04\%\) at \(m=0.5\), \(27.43\%\) at \(m=0.7\)), and the text-visual prompt combination further boosts the performance (\(6.14\%\) at \(m=0.3\), \(8.17\%\) at \(m=0.5\), \(30.77\%\) at \(m=0.7\)). It is worth noting that the performance gap at \(m=0.7\) between 2D TVG methods and 3D TVG methods is narrowed significantly.
**In summary**, the experimental results on the Charades-STA and ActivityNet Captions datasets show that our proposed TVP framework achieves competitive performance over all tIoU thresholds by improving the utility of sparse 2D visual features. Thanks to the lightweight 2D vision encoder, the language encoder and vision encoder can be co-trained on large-scale image-text datasets, which helps the base model achieve good performance. Furthermore, the combination of text and visual prompts achieves better results than either kind of prompt alone on both datasets, which again demonstrates the importance of cross-modal training.
**Video frame sampling effect.** **Fig. 4** demonstrates the performance of the base model with different numbers \(N_{\text{sam}}\) of sampled video frames as visual inputs. For the Charades-STA dataset, the base model performance keeps increasing until \(N_{\text{sam}}\) reaches 48, but when it exceeds 48, performance starts to degrade. This is because frequent background changes harm object re-identification in videos and are noisy for object motion analysis [17].
For the ActivityNet Captions dataset, the base model performance continues to improve even when the sampled frame number \(N_{\text{sam}}\) exceeds 48, due to the longer average video length in that dataset. Balancing the frame number and batch size for training, we choose \(N_{\text{sam}}=64\) for ActivityNet Captions.
**TVP performance vs. prompt size.** As shown in **Tab. 4**, when visual prompts are small, they bring little change to the base model, and when visual prompts are too large, the performance starts to decrease, because key information within video frames might be removed. However, text prompts bring a significant performance boost even when the text prompt size is small, as shown in **Tab. 5**, because the textual features have a smaller dimension than the visual features and the text prompts are directly optimized in feature space during training.
**TVP performance vs. visual prompt operation.** Visual prompt is first proposed by [2], where visual prompts are
\begin{table}
\begin{tabular}{c|c c c|c} \hline \hline Visual Prompt Size & \multicolumn{3}{c|}{Acc(R@1, IoU=\(m\))} & Prompt + Frame \\ & \(m\)=0.3 & \(m\)=0.5 & \(m\)=0.7 & \\ \hline
0 & 61.29 & 40.43 & 19.89 & \\ \hline
16 & 61.29 & 40.43 & 20.00 & \\
32 & 61.94 & 39.78 & 19.35 & \\
48 & 63.66 & 42.37 & 20.00 & \\
72 & 63.87 & 43.66 & 19.78 & \\
96 & **65.38** & **44.31** & **20.22** & \\
128 & 64.73 & 43.66 & 19.78 & \\ \hline \hline \end{tabular}
\end{table}
Table 4: The performance comparison of different visual prompt sizes on Charades-STA dataset.
Figure 4: Impact of sampled frame numbers.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Text Prompt Size & \multicolumn{3}{c}{Acc(R@1, IoU=\(m\))} \\ & \(m\)=0.3 & \(m\)=0.5 & \(m\)=0.7 \\ \hline
0 & 57.20 & 40.16 & 19.14 \\ \hline
5 & 65.38 & 41.94 & 20.43 \\
10 & **65.81** & 43.44 & 20.65 \\
15 & 65.59 & 43.23 & 21.29 \\
20 & 64.95 & **43.87** & **21.51** \\
25 & 63.66 & 42.80 & 20.65 \\
30 & 64.46 & 42.63 & 20.51 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The performance comparison of different text prompt sizes on Charades-STA dataset.
added to the image for transfer learning on classification tasks. In contrast, our proposed prompting framework is designed to compensate for the spatiotemporal information loss in 2D visual features. Due to the differences in the task, we try two different prompt operation strategies, '_replace_' and '_add_'. '_add_' adds the visual prompts to the pixel values of the video frame at the corresponding padding locations. '_replace_' replaces the pixel values of video frames with visual prompts at the corresponding padding locations. '_remove_' removes the pixel values at the padding locations, in order to study their impact. As shown in **Tab. 6**, the '_add_' and '_remove_' prompt operations have limited effects on the base model. However, '_replace_' does boost the base model performance.
**TVP achieves inference efficiency.** As shown in **Fig. 5**, the inference time required for visual feature extraction accounts for more than half of the inference time of the whole model, and the inference time required for the 3D vision encoder is more than \(5\times\) that of the 2D vision encoder, even exceeding the time required for the whole TVG model using the 2D vision encoder. This fully demonstrates the feasibility of accelerating the overall inference speed by reducing the complexity of the vision encoder. Note that if multiple model weights are kept for different sampled frame number settings and adopted adaptively for videos of different lengths, the inference speed for short videos should increase, and the prediction results for long videos will be further improved.
**Ablation studies.** From **Tab. 7**, we find that adding either the distance loss \(\mathcal{L}_{\mathrm{dis}}\) or the duration loss \(\mathcal{L}_{\mathrm{dur}}\) results in a performance increase, while combining the two yields a significant improvement (\(11.34\%\) at \(m=0.3\), \(35.26\%\) at \(m=0.5\), \(68.27\%\) at \(m=0.7\)), especially at tIoU thresholds \(m=0.5\) and \(m=0.7\). This demonstrates that the distance loss \(\mathcal{L}_{\mathrm{dis}}\) and the duration loss \(\mathcal{L}_{\mathrm{dur}}\) provide more precise training guidance than using only the temporal IoU loss \(\mathcal{L}_{\mathrm{tIoU}}\). Furthermore, we posit that prompting may encode additional spatial-temporal supervision that helps training escape from bad local optima, as shown in Fig. 6, where fine-tuning with prompts yields a flatter loss landscape than fine-tuning without prompts.
## 5 Conclusion
In this paper, we propose text-visual prompting to boost the performance of 2D TVG methods by compensating for the lack of spatiotemporal information in 2D visual features. In contrast to 3D TVG methods, TVP allows us to effectively co-train the vision encoder and language encoder in a 2D TVG model and improves the performance of cross-modal feature fusion using only low-complexity sparse 2D visual features. The effectiveness of our proposed TVP (text-visual prompting) framework has been demonstrated on two standard datasets, Charades-STA and ActivityNet Captions. Our models outperform all 2D models significantly and achieve comparable performance to 3D models. Moreover, we achieve over \(5\times\) inference speedup over TVG methods using 3D visual features.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline Operation & \multicolumn{4}{c|}{Charades-STA} & \multicolumn{4}{c}{ActivityNet Captions} \\ \hline & \multicolumn{3}{c|}{R@1, IoU=\(m\)} & \multicolumn{3}{c}{R@1, IoU=\(m\)} \\ & \(m\)=0.3 & \(m\)=0.5 & \(m\)=0.7 & \(m\)=0.3 & \(m\)=0.5 & \(m\)=0.7 \\ \hline Original & 61.29 & 40.43 & 19.89 & 57.20 & 40.16 & 19.14 \\ Remove & 61.29 & 40.43 & 20.0 & 57.20 & 40.16 & 19.14 \\ Add & 61.08 & 39.57 & 20.22 & 57.15 & 40.16 & 19.27 \\ Replace & **65.38** & **44.31** & **20.22** & **60.12** & **43.39** & **23.71** \\ \hline \hline \end{tabular}
\end{table}
Table 6: The performance comparison of different visual prompt operations (‘_remove_’, ‘_add_’, ‘_replace_’) with fixed visual prompt size \(p=96\) on Charades-STA and ActivityNet Captions datasets.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Loss Function Selection & \multicolumn{3}{c}{R@1, IoU=\(m\)} \\ & \(m\)=0.3 & \(m\)=0.5 & \(m\)=0.7 \\ \hline \(\mathcal{L}_{\mathrm{tIoU}}\) & 55.05 & 29.89 & 11.82 \\ \hline \(\mathcal{L}_{\mathrm{tIoU}}+\mathcal{L}_{\mathrm{dis}}\) & 60.64 & 31.18 & 16.77 \\ \(\mathcal{L}_{\mathrm{tIoU}}+\mathcal{L}_{\mathrm{dur}}\) & 59.78 & 30.97 & 16.34 \\ \(\mathcal{L}_{\mathrm{tIoU}}+\mathcal{L}_{\mathrm{dis}}+\mathcal{L}_{\mathrm{dur}}\) & **61.29** & **40.43** & **19.89** \\ \hline \hline \end{tabular}
\end{table}
Table 7: The performance comparison of different loss designs on Charades-STA dataset.
Figure 5: Inference time comparison. (a) Inference time comparison between the 2D vision encoder (ResNet-50) and the 3D vision encoder (C3D). (b) Inference time comparison between the vision encoder and the other modules of the 2D TVG model, where the sampled frame number for our TVP framework is \(1.2\times\) the length of the video in seconds.
Figure 6: Loss landscape visualization in 2D plane: Finetuning w/o prompts (left) and using prompts (right); see [33] for implementation.
|
2309.00570
|
Mechanism of feature learning in convolutional neural networks
|
Understanding the mechanism of how convolutional neural networks learn
features from image data is a fundamental problem in machine learning and
computer vision. In this work, we identify such a mechanism. We posit the
Convolutional Neural Feature Ansatz, which states that covariances of filters
in any convolutional layer are proportional to the average gradient outer
product (AGOP) taken with respect to patches of the input to that layer. We
present extensive empirical evidence for our ansatz, including identifying high
correlation between covariances of filters and patch-based AGOPs for
convolutional layers in standard neural architectures, such as AlexNet, VGG,
and ResNets pre-trained on ImageNet. We also provide supporting theoretical
evidence. We then demonstrate the generality of our result by using the
patch-based AGOP to enable deep feature learning in convolutional kernel
machines. We refer to the resulting algorithm as (Deep) ConvRFM and show that
our algorithm recovers similar features to deep convolutional networks
including the notable emergence of edge detectors. Moreover, we find that Deep
ConvRFM overcomes previously identified limitations of convolutional kernels,
such as their inability to adapt to local signals in images and, as a result,
leads to sizable performance improvement over fixed convolutional kernels.
|
Daniel Beaglehole, Adityanarayanan Radhakrishnan, Parthe Pandit, Mikhail Belkin
|
2023-09-01T16:30:02Z
|
http://arxiv.org/abs/2309.00570v1
|
# Mechanism of feature learning in convolutional neural networks
###### Abstract
Understanding the mechanism of how convolutional neural networks learn features from image data is a fundamental problem in machine learning and computer vision. In this work, we identify such a mechanism. We posit the Convolutional Neural Feature Ansatz, which states that covariances of filters in any convolutional layer are proportional to the average gradient outer product (AGOP) taken with respect to patches of the input to that layer. We present extensive empirical evidence for our ansatz, including identifying high correlation between covariances of filters and patch-based AGOPs for convolutional layers in standard neural architectures, such as AlexNet, VGG, and ResNets pre-trained on ImageNet. We also provide supporting theoretical evidence. We then demonstrate the generality of our result by using the patch-based AGOP to enable deep feature learning in convolutional kernel machines. We refer to the resulting algorithm as (Deep) ConvRFM and show that our algorithm recovers similar features to deep convolutional networks including the notable emergence of edge detectors. Moreover, we find that Deep ConvRFM overcomes previously identified limitations of convolutional kernels, such as their inability to adapt to local signals in images and, as a result, leads to sizable performance improvement over fixed convolutional kernels.
## 1 Introduction
Neural networks have achieved impressive empirical results across various tasks in natural language processing [8], computer vision [39], and biology [50]. Yet, our understanding of the mechanisms driving the successes of these models is still emerging. One such mechanism of central importance is that of _neural feature learning_, which is the ability of networks to automatically learn relevant input transformations from data [37, 43, 55, 56].
An important line of work [5, 14, 23, 32, 35, 43, 52, 55] has demonstrated how feature learning in fully connected neural networks provides an advantage over classical, non-feature-learning models such as kernel machines. Recently, the work [37] identified a connection between a mathematical operator, known as average gradient outer product (AGOP) [17, 21, 47, 48], and feature learning in fully connected networks. This work subsequently demonstrated that the AGOP could be used to enable similar feature learning in kernel machines operating on tabular data. In contrast to the case for fully connected networks, there are few prior works [3, 24] analyzing feature learning in convolutional networks, which have been transformative in computer vision [19, 39]. The work [24] demonstrates an advantage of feature learning in convolutional networks by showing that these models are able to threshold noise and identify signal in image data unlike convolutional kernel methods including Convolutional Neural Tangent Kernels [4]. The work [3] analyzes how deep convolutional networks can correct features in early layers by simultaneous training of all layers. While these prior works identify advantages of feature learning in convolutional networks, they do not identify a general operator that captures such feature learning. The connection between AGOP and feature learning
in fully connected neural networks [37] suggests that a similar connection should exist for feature learning in convolutional networks. Moreover, such a mechanism could be used to learn analogous features with any machine learning model such as convolutional kernel machines.
In this work, we establish a connection between convolutional neural feature learning and the AGOP, which we posit as the Convolutional Neural Feature Ansatz (CNFA). Unlike the fully connected case from [37] where feature learning is characterized by AGOP with respect to network inputs, we demonstrate that convolutional feature learning is characterized by AGOP with respect to patches of network inputs. We present empirical evidence for the CNFA by demonstrating high average Pearson correlation (in most cases \(>.9\)) between AGOP on patches and the covariance of filters across all layers of pre-trained convolutional networks on ImageNet [40] and across all layers of SimpleNet [18] trained on several standard image classification datasets. We additionally prove that the CNFA holds for one step of gradient descent for deep convolutional networks. To demonstrate the generality of our identified convolutional feature learning mechanism, we leverage the AGOP on patches to enable feature learning in convolutional kernel machines. We refer to the resulting algorithm as ConvRFM. We demonstrate that ConvRFM captures features similar to those learned by the first layer of convolutional networks. In particular, on various image classification benchmark datasets such as SVHN [33] and CIFAR10 [26], we observe that ConvRFM recovers features corresponding to edge detectors. We further enable deep feature learning with convolutional kernels by developing a layerwise training scheme with ConvRFM, which we refer to as Deep ConvRFM. We demonstrate that Deep ConvRFM learns features similar to those learned by deep convolutional neural networks. Furthermore, we show that Deep ConvRFM overcomes limitations of convolutional kernels identified in [24] and exhibits _local feature adaptivity_. Lastly, we demonstrate that Deep ConvRFM provides improvement over CNTK and ConvRFM on several standard image classification datasets, indicating a benefit to deep feature learning. Our results advance understanding of how convolutional networks automatically learn features from data and provide a path toward integrating convolutional feature learning into general machine learning models.
## 2 Convolutional Neural Feature Ansatz (CNFA)
Let \(f:\mathbb{R}^{c\times P\times Q}\rightarrow\mathbb{R}\) denote a convolutional neural network (CNN) operating on \(P\times Q\) resolution images with \(c\) color channels. The \(\ell^{th}\) convolutional layer of a CNN involves applying a function \(h_{\ell}:\mathbb{R}^{c_{\ell-1}\times P_{\ell-1}\times Q_{\ell-1}}\rightarrow \mathbb{R}^{c_{\ell}\times P_{\ell}\times Q_{\ell}}\) defined recursively as \(h_{\ell}(x)=\phi(\widetilde{W}_{\ell}*h_{\ell-1}(x))\) with \(h_{1}(x)=x\), \(\widetilde{W}_{\ell}\in\mathbb{R}^{c_{\ell}\times c_{\ell-1}\times q\times q}\) denoting \(c_{\ell}\) filters of size \(c_{\ell-1}\times q\times q\), \(*\) denoting the convolution operation, and \(\phi\) denoting an elementwise activation function. To understand how features emerge in convolutional networks, we abstract a convolutional network to a function of the form
\[f(x)=g(W_{1}x[1,1],\ldots,W_{1}x[i,j],\ldots,W_{1}x[P,Q]),\quad i \in[P],j\in[Q]\ ; \tag{1}\]
where \(W_{1}\in\mathbb{R}^{c_{1}\times cq^{2}}\) is a matrix of \(c_{1}\) stacked filters of size \(cq^{2}\) and \(x[i,j]\in\mathbb{R}^{cq^{2}}\) denotes the patch of \(x\) centered at coordinate \((i,j)\). This abstraction is helpful since it allows us to consider feature learning in convolutional networks with arbitrary architecture (e.g., pooling layers, batch normalization, etc.) after any given convolutional layer. Up to rotation and reflection by the left singular vectors, the feature extraction properties of \(W_{1}\) are determined by the singular values and right singular vectors of \(W_{1}\). These singular values and vectors can be recovered from the matrix \(W_{1}^{T}W_{1}\), which is the empirical (uncentered) covariance of filters in the first layer. This argument extends to analyze features selected at layer \(\ell\) of a CNN by considering a function of the form \(f(x)=g_{\ell}(W_{\ell}h_{\ell-1}(x)[1,1],\ldots,W_{\ell}h_{\ell-1}(x)[P_{\ell- 1},Q_{\ell-1}])\). We refer to the matrix \(W_{\ell}^{T}W_{\ell}\) as a _Convolutional Neural Feature Matrix_ (CNFM) and note that this matrix is proportional to the (uncentered) empirical covariance matrix of filters in layer \(\ell\).
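The patch-based view in Eq. (1) can be checked numerically: a convolution is exactly the stacked-filter matrix \(W_{1}\) applied to every unfolded patch. Below is a small self-contained PyTorch sketch of this identity; the shapes are illustrative and not tied to any particular architecture.

```
import torch
import torch.nn.functional as F

# Numerical check of the abstraction in Eq. (1): a convolutional layer applies the
# stacked-filter matrix W_1 to every q x q patch of the input.
c, c1, q, P, Q = 3, 8, 3, 16, 16                           # illustrative sizes
x = torch.randn(1, c, P, Q)
filters = torch.randn(c1, c, q, q)

conv_out = F.conv2d(x, filters, padding=q // 2)             # standard convolutional layer

W1 = filters.reshape(c1, c * q * q)                         # c_1 stacked filters
patches = F.unfold(x, kernel_size=q, padding=q // 2)        # (1, c*q*q, P*Q) patch matrix
patch_out = (W1 @ patches).reshape(1, c1, P, Q)             # W_1 x[i, j] at every location

print(torch.allclose(conv_out, patch_out, atol=1e-4))       # True
```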
We use the form of convolutional networks presented in Eq. (1) to state our Convolutional Neural Feature Ansatz (CNFA). Let \(G_{\ell}(x):=g_{\ell}(W_{\ell}h_{\ell-1}(x)[1,1],\ldots,W_{\ell}h_{\ell-1}(x)[P _{\ell-1},Q_{\ell-1}])\). Then, after training \(f\) for at least one epoch of (stochastic) gradient descent on standard loss functions:
\[W_{\ell}^{\top}W_{\ell}\propto\sum_{p=1}^{n}\sum_{(i,j)\in S} \nabla_{h_{\ell-1}(x_{p})[i,j]}G_{\ell}(x_{p})\left(\nabla_{h_{\ell-1}(x_{p})[i,j]}G_{\ell}(x_{p})\right)^{\top}; \tag{2}\]
where \(S=\{(i,j)\}_{i\in[P_{\ell-1}],j\in[Q_{\ell-1}]}\) denotes the set of indices of patches utilized in the convolution operation in layer \(\ell\), and \(x_{1},\ldots,x_{n}\) denote the training samples. The CNFA (Eq. 2) mathematically implies that the convolutional neural feature matrices are
proportional to the average gradient outer product (AGOP) with respect to the patches of the input to layer \(\ell\). The CNFA implies that the structure of covariance matrices of filters in convolutional networks, an object studied in prior work [49], corresponds to AGOP over patches. Intuitively, the CNFA implies that convolutional features are constructed by identifying and amplifying those pixels in any patch that most change the output of the network. We now present extensive empirical evidence corroborating our ansatz. We subsequently present supporting theoretical evidence.
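As a rough illustration of how the two sides of Eq. (2) can be compared in practice, the sketch below computes the CNFM of a convolutional layer and a patch-based AGOP obtained via the chain rule (the gradient with respect to a patch equals \(W_{\ell}^{\top}\) times the gradient with respect to the corresponding pre-activation column). The scalar reduction of the network output and the single-path gradient treatment are our simplifications, not the authors' released code.

```
import torch

def layer_agop_and_cnfm(model, conv, images):
    """Patch-based AGOP (right side of Eq. 2) and CNFM (left side) for one conv layer.

    `model` maps a batch of images to one scalar per example (summed below);
    `conv` is a torch.nn.Conv2d inside `model`. The per-patch gradient is
    W^T times the gradient w.r.t. the conv pre-activation at that location, so
    the AGOP over patches equals W^T (sum of pre-activation gradient outer products) W.
    """
    grads = []
    handle = conv.register_full_backward_hook(
        lambda mod, gin, gout: grads.append(gout[0].detach()))
    model(images).sum().backward()
    handle.remove()

    gz = grads[0].flatten(2)                             # (B, c_out, P*Q)
    G = torch.einsum('bip,bjp->ij', gz, gz)              # sum over examples and patches
    W = conv.weight.reshape(conv.weight.shape[0], -1)    # (c_out, c_in * q * q)
    return W.T @ G @ W, W.T @ W                          # (AGOP, CNFM)

def pearson(A, B):
    a, b = A.flatten(), B.flatten()
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / (a.norm() * b.norm())

# Hypothetical usage with a pre-trained torchvision model (first conv layer):
# import torchvision
# net = torchvision.models.alexnet(weights="IMAGENET1K_V1").eval()
# agop, cnfm = layer_agop_and_cnfm(lambda x: net(x).logsumexp(-1),
#                                  net.features[0], torch.randn(8, 3, 224, 224))
# print(pearson(agop, cnfm))
```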
### Empirical evidence for CNFA
We now provide empirical evidence for the ansatz by computing the correlation between CNFMs and the AGOP for each convolutional layer in various CNNs. We provide three lines of evidence by computing correlations for the following models: (1) AlexNet [27], all VGGs [46], and all ResNet [19] models pre-trained on ImageNet [40]; (2) SimpleNet models [18] trained on SVHN [33], GTSRB [20], CIFAR10 [26], CIFAR100, and ImageNet32 [10]; and (3) shallow CNNs across 10 standard computer vision datasets from PyTorch, while varying the pooling and the patch size of the convolution operations. The first set of experiments provides evidence for the ansatz in large-scale state-of-the-art models on ImageNet. The second set provides evidence for the ansatz across standard computer vision datasets. The last set provides evidence for the ansatz holding across architecture choices.
**CNFA verification for pre-trained state-of-the-art models on ImageNet.**
We begin by providing evidence for the ansatz on pre-trained state-of-the-art models on ImageNet. In Fig. 1, we present these correlations for AlexNet, all VGG models and all ResNet models pre-trained on ImageNet, which are available for download from the PyTorch library [36].4 As a control, we verify that weights at the end of training are far from initialization (see the red bars in Fig. 1A). Note that despite the complexity involved in training these models (e.g., batch normalization, skip connections, custom optimization procedures, data augmentation) the Pearson correlation between the AGOP and CNFMs are remarkably high (\(>.9\) for each layer of AlexNet and VGG13). In Fig. 1B, we additionally visualize the AGOP and CNFM for the first convolutional layer in AlexNet, VGG11, and ResNet18 to demonstrate the qualitative similarity between these matrices. In addition, in Appendix Fig. 7, we verify that these correlations are lower at initialization than at the end of training indicating that the ansatz is, in fact, a consequence of training.
Footnote 4: We evaluate all correlations between AGOP and CNFMs for all convolutional layers of AlexNet and all VGGs. To simplify computation on ResNets, we evaluate correlations between AGOP and CNFMs for the first layer in each BasicBlock and each Bottleneck, as defined in PyTorch. We note that for ResNet152, this computation involves computing correlation between matrices in 50 Bottleneck blocks.
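For pre-trained models, the CNFM of a layer can be read directly off the downloaded weights. A minimal sketch for the first convolutional layer of AlexNet, assuming a recent torchvision with the weights-enum API:

```python
import torch
from torchvision.models import alexnet, AlexNet_Weights

model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1).eval()
conv1 = model.features[0]                                   # Conv2d(3, 64, kernel_size=11, stride=4, padding=2)
W = conv1.weight.detach().reshape(conv1.out_channels, -1)   # (64, 3*11*11)
first_layer_cnfm = W.T @ W                                  # (363, 363) first-layer CNFM (uncentered filter covariance, up to scale)
print(first_layer_cnfm.shape)
```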
Figure 1: **A.** Correlation between initial CNFM and trained CNFM (red) and trained CNFM with AGOP (green) for convolutional layers in VGG, AlexNet, and ResNet on ImageNet (\(224\times 224\) resolution color images). **B.** Initial CNFM, trained CNFM, and AGOP matrices for the first convolutional layer of ResNet18, VGG11, and AlexNet on ImageNet.
CNFA verification for SimpleNet on CIFAR10, CIFAR100, ImageNet32, SVHN, GTSRB. To verify the ansatz on other datasets, we also trained the SimpleNet model on five datasets including CIFAR10/100, ImageNet32, SVHN, and GTSRB. We note that SimpleNet had achieved state-of-the-art results on several of these tasks at the time of its release (e.g., \(>95\%\) test accuracy on CIFAR10). We train SimpleNet models using the same optimization procedure provided in [18] (i.e., Adadelta [57] with weight decay and manual learning rate scheduling). We use a small initialization scheme of normally distributed weights with a standard deviation of \(10^{-4}\) for convolutional layers. We note that we were able to recover high test accuracies across all datasets consistent with the results from [18] (see test accuracies for these trained SimpleNet models in Appendix Fig. 8). As shown in Appendix Fig. 8, we observe consistently high correlation between AGOPs and CNFMs across layers of SimpleNet.
CNFA is robust to hyperparameter choices. We lastly study the effect of patch size and architecture choices on the CNFA for networks trained using the Adam optimizer [25]. We generally observe that larger patch sizes slightly reduce the correlation between AGOP and CNFMs, and that max pooling layers (in contrast to no pooling or average pooling) lead to higher correlation (Appendix Fig. 9). Interestingly, these results indicate that the choices used in state-of-the-art CNNs (max pooling layers and patch size of 3) are consistent with those that lead to highest correlation between AGOP and CNFMs.
### Visualizing features captured by CNFM and AGOP
We now visualize how the CNFM operates on patches of images to select features and demonstrate that the AGOP over patches captures similar features. Both the CNFM and the AGOP yield an operator on patches of images. Thus, to visualize how these matrices select features, we expand input images into individual patches, then apply either the CNFM or the AGOP to each patch. We then reduce the expanded image back to its original size by taking the norm over the spatial dimensions of each expanded patch. Formally, the value for each coordinate \((i,j)\in[P_{\ell-1}]\times[Q_{\ell-1}]\) is replaced with \(\|M_{\ell}^{\frac{1}{2}}h_{\ell-1}(X)[i,j]\|\), where \(M_{\ell}:=W_{\ell}^{T}W_{\ell}\). Our visualization thus reflects the norm of each patch after it is mapped through the patch transformation. For example, if \(M_{\ell}\) is an edge detector, then \(\|M_{\ell}^{\frac{1}{2}}h_{\ell-1}(X)[i,j]\|\) will be large if and only if the patch centered at coordinate \((i,j)\) contains an edge.
Figure 2: Comparison of features extracted by CNFMs and AGOPs across layers of VGG11 and AlexNet for two input images. These visualizations provide further supporting evidence that the CNFMs and AGOPs of early layers are performing an operation akin to edge detection.
This visualization technique emerges naturally from the convolution operation in CNNs, where a post-activation hidden unit is generated by applying a filter to each patch independently of the others. Further, this visualization characterizes how a trained CNN extracts features across patches of any image. This is in contrast to visualization techniques based on saliency maps [41, 44, 45, 59], which consider gradients with respect to an entire input image and for a single sample.
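A possible implementation of this patch-wise visualization, assuming the patch operator \(M\) (a CNFM or an AGOP) and a single image are given, might look as follows; the symmetric square root via an eigendecomposition is one of several reasonable choices:

```python
import torch
import torch.nn.functional as F

def patch_operator_map(M: torch.Tensor, x: torch.Tensor, q: int) -> torch.Tensor:
    # x: (C, H, W) image; M: (C*q*q, C*q*q) PSD patch operator (a CNFM or an AGOP).
    # Returns an (H, W) map whose entry (i, j) is || M^{1/2} x[i, j] ||.
    evals, evecs = torch.linalg.eigh(M)
    M_half = evecs @ torch.diag(evals.clamp_min(0).sqrt()) @ evecs.T
    patches = F.unfold(x[None], kernel_size=q, padding=q // 2).squeeze(0)   # (C*q*q, H*W)
    return (M_half @ patches).norm(dim=0).reshape(x.shape[1], x.shape[2])
```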
In addition to the high correlation between AGOP and CNFMs in the previous section, in Fig. 2, we observe that the AGOP and CNFMs transform input images similarly at any given layer of the CNN. For \(224\times 224\) images from ImageNet, CNFMs and AGOPs extracted from a pre-trained VGG11 model both emphasize objects and their edges in the image. We note these visualizations corroborate hypotheses from prior work that the first layer weights of deep CNNs learn an operator corresponding to edge detection [58]. Moreover, our results imply that the mathematical origin of edge detectors in convolutional neural networks is the average gradient outer product. In the following section, we will corroborate this claim by demonstrating that such edge detectors can be recovered without the use of any neural network through estimating the average gradient outer product of convolutional kernel machines.
### Supporting Theoretical Evidence for CNFA
The following theorem (proof in Appendix A) proves the ansatz for general convolutional networks after 1 step of full-batch gradient descent.
**Theorem 1**.: _Let \(f\) denote a function that operates on \(m\) patches of size \(q\), i.e., let \(f(v_{1},v_{2},\ldots,v_{m}):\mathbb{R}^{q}\times\ldots\times\mathbb{R}^{q} \rightarrow\mathbb{R}\) with \(f(v_{1},v_{2},\ldots,v_{m})=g(Wv_{1},Wv_{2},\ldots,Wv_{m})\) where \(W\in\mathbb{R}^{k\times q}\) and \(g(z_{1},\ldots,z_{m}):\mathbb{R}^{k}\times\ldots\times\mathbb{R}^{k} \rightarrow\mathbb{R}\). Assume \(g(\mathbf{0})=0\) and \(\frac{\partial g(\mathbf{0})}{\partial z_{\ell}}=\frac{\partial g(\mathbf{0} )}{\partial z_{\ell^{\prime}}}\neq 0\) for all \(\ell,\ell^{\prime}\in[m]\). If \(W\) is trained for one step of gradient descent with mean squared loss on data \(\{((v_{1}^{(p)},\ldots v_{m}^{(p)}),y_{p})\}_{p=1}^{n}\) from initialization \(W^{(0)}=\mathbf{0}\), then for the point \((u_{1},\ldots,u_{m})\):_
\[{W^{(1)}}^{T}W^{(1)}\propto\sum_{r=1}^{m}\frac{\partial f^{(1)}(u_{1},\ldots, u_{m})}{\partial v_{r}}\frac{\partial f^{(1)}(u_{1},\ldots,u_{m})}{\partial v _{r}}^{T}\;; \tag{3}\]
_where \(f^{(1)}(v_{1},v_{2},\ldots v_{m}):=g(W^{(1)}v_{1},W^{(1)}v_{2},\ldots,W^{(1)}v _{m})\)._
We note the assumptions of Theorem 1 hold for several types of convolutional networks. As a simple example, the assumptions hold for convolutional networks with activation function \(\phi\) satisfying \(\phi(0)=0\) and \(\phi^{\prime}(0)\neq 0\) (e.g., tanh activation) with remaining layers initialized as constant matrices. Furthermore, we note that while the above theorem is stated for the first layer of a convolutional network, the same proof strategy applies for deeper layers by considering the subnetwork \(G_{\ell}(x)\).
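The statement of Theorem 1 can be checked numerically on a toy instance. In the sketch below, \(g\) is a fixed sum-of-tanh readout chosen only so that the theorem's assumptions (\(g(\mathbf{0})=0\) and equal nonzero derivatives at the origin) hold; it is not the paper's experimental setup. After one full-batch gradient step from \(W^{(0)}=\mathbf{0}\), the flattened matrices \(W^{(1)\top}W^{(1)}\) and the patch-AGOP at a random point have cosine similarity essentially equal to one.

```python
import torch

torch.manual_seed(0)
m, q, k, n = 4, 5, 8, 32                          # patches per input, patch dim, width, samples
V = torch.randn(n, m, q)                          # inputs: n samples with m patches each
y = torch.randn(n)

W = torch.zeros(k, q, requires_grad=True)         # initialization W^(0) = 0, as in the theorem

def f(W, v):
    # g(z_1, ..., z_m) = sum_r 1^T tanh(z_r): g(0) = 0 and equal nonzero derivatives at 0.
    return sum(torch.tanh(W @ v[r]).sum() for r in range(m))

loss = 0.5 * ((torch.stack([f(W, v) for v in V]) - y) ** 2).mean()
loss.backward()
with torch.no_grad():
    W1 = W - 1.0 * W.grad                         # one full-batch gradient step

u = torch.randn(m, q, requires_grad=True)         # an arbitrary evaluation point (u_1, ..., u_m)
grads = torch.autograd.grad(f(W1, u), u)[0]       # row r is d f^(1) / d v_r
agop = sum(grads[r:r + 1].T @ grads[r:r + 1] for r in range(m))

cos = torch.nn.functional.cosine_similarity((W1.T @ W1).flatten(), agop.flatten(), dim=0)
print(cos.item())                                 # ~= 1.0: the two matrices are proportional
```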
## 3 CNFA as a general mechanism for convolutional feature learning
We now show that the CNFA allows us to introduce a feature learning mechanism in any machine learning model operating on patches to capture features akin to those of convolutional networks. Given recent work connecting neural networks to kernel machines [22], we focus on convolutional kernels given by the Convolutional Neural Tangent Kernel (CNTK) [4] as our candidate model class. Intuitively, these models can be thought of as combining kernels evaluated across pairs of patches in images. While such models have achieved impressive performance [1, 6, 7, 29, 38, 42], they do not, unlike CNNs, automatically learn features from data. Thus, as demonstrated in prior work [24, 52], there are tasks where CNTKs are significantly outperformed by corresponding CNNs.
A major consequence of the CNFA is that we can now enable feature learning in CNTKs by leveraging the AGOP over patches. In particular, we can first solve kernel regression with the CNTK and then use
the AGOP of the trained predictor over patches of images to learn features. We call our method the _Convolutional Recursive Feature Machine (ConvRFM)_, as it is the convolutional variant of the original RFM [37]. We will demonstrate that ConvRFM accurately captures first layer feature learning in CNNs and can recover edge detectors as features when trained on standard image classification datasets. To account for deep convolutional feature learning, we extend ConvRFM to Deep ConvRFMs by sequentially learning features in a manner similar to layerwise training in CNNs. We show that Deep ConvRFM: (1) improves performance of CNTKs on local signal adaptivity tasks considered in [24]; and (2) improves performance of CNTKs on several image classification tasks.
### Convolutional Recursive Feature Machine (ConvRFM)
We present the algorithm for ConvRFM in Algorithm 1. The ConvRFM algorithm recursively learns a feature extractor on patches of a given image by implementing the AGOP across patches of training data. Namely, the ConvRFM first builds a predictor with a fixed convolutional kernel. Then, we compute the AGOP of the trained predictor with respect to image patches, which we denote as the _feature matrix_, \(M\). Lastly, we transform image patches with \(M\) and then repeat the previous steps. We provide a concrete example of this algorithm for the convolutional neural network Gaussian process (CNNGP) [9, 28] of a one hidden layer convolutional network with fully connected last layer operating on black and white images below.
The CNNGP of a one hidden layer convolutional network with fully connected last layer, activation \(\phi\), and filter size \(q\) is given by
\[K(x,z)=\frac{1}{PQ}\sum_{i=1}^{P}\sum_{j=1}^{Q}\check{\phi}(x[i,j]^{T}z[i,j], \|x[i,j]\|,\|z[i,j]\|)\ ;\]
where \(x,z\in\mathbb{R}^{P\times Q}\), \(x[i,j]\in\mathbb{R}^{q^{2}}\) denotes the vectorized \(q\times q\) patch of \(x\) centered at coordinate \((i,j)\), and \(\check{\phi}(a^{T}b,\|a\|,\|b\|)\) denotes the dual activation [15] of \(\phi\). For the case of ReLU activation, this dual activation has a well known form [9] and is given by
\[\check{\phi}(a^{T}b,\|a\|,\|b\|)=\frac{1}{\pi}\left(a^{T}b\left(\pi-\arccos\left(\frac{a^{T}b}{\|a\|\|b\|}\right)\right)+\sqrt{\|a\|^{2}\|b\|^{2}-(a^{T}b)^{2}}\right)\.\]
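For concreteness, the two displayed formulas can be transcribed directly into code. The sketch below assumes single-channel images and a recent PyTorch (for `torch.pi`); the small clamps are added only for numerical safety:

```python
import torch
import torch.nn.functional as F

def relu_dual(dot, na, nb):
    # Dual activation of ReLU (first-order arc-cosine kernel), as in the formula above.
    cos = (dot / (na * nb)).clamp(-1.0, 1.0)
    return (dot * (torch.pi - torch.arccos(cos))
            + ((na * nb) ** 2 - dot ** 2).clamp_min(0.0).sqrt()) / torch.pi

def cnngp(x, z, q=3):
    # One-hidden-layer CNNGP for single-channel images x, z of shape (H, W):
    # average the dual activation over aligned q x q patches.
    px = F.unfold(x[None, None], q, padding=q // 2).squeeze(0)   # (q*q, H*W)
    pz = F.unfold(z[None, None], q, padding=q // 2).squeeze(0)
    dot = (px * pz).sum(dim=0)
    na = px.norm(dim=0).clamp_min(1e-12)
    nb = pz.norm(dim=0).clamp_min(1e-12)
    return relu_dual(dot, na, nb).mean()
```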
In ConvRFM, we modify the inner product in the kernel above to be a Mahalanobis inner product, constructing kernels of the form
\[K_{M}(x,z):=\frac{1}{PQ}\sum_{i=1}^{P}\sum_{j=1}^{Q}\check{\phi}(x[i,j]^{T}Mz [i,j],x[i,j]^{T}Mx[i,j],z[i,j]^{T}Mz[i,j])\ ;\]
where \(M\) is a learned positive semi-definite matrix. In particular, \(M\) is updated as the AGOP of the estimator constructed by solving kernel regression with \(K_{M}\). In our experiments, we analyze performance when replacing \(\check{\phi}\) with the Mahalanobis Laplace kernel used in [37] and with the CNTK of a deep convolutional ReLU network with fully connected last layer. We will make clear our choice of \(\check{\phi}\) by denoting our method as CNTK-ConvRFM or Laplace-ConvRFM.
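A minimal sketch of one plausible Laplace-ConvRFM iteration is given below. It is not the authors' implementation: the bandwidth, the ridge regularization, the trace normalization of \(M\), and the dense patch expansion (which is memory-hungry and only suitable for small, low-resolution datasets) are all illustrative assumptions, and the targets are assumed to be scalar. In practice one would refit the kernel regression once more with the final \(M\) before prediction.

```python
import torch
import torch.nn.functional as F

def extract_patches(x, q):
    # x: (N, C, H, W) -> (N, H*W, C*q*q), one patch centred on every pixel.
    return F.unfold(x, kernel_size=q, padding=q // 2).transpose(1, 2)

def laplace_km(pa, pb, M, bandwidth=10.0):
    # Mahalanobis-Laplace kernel averaged over patch locations; pa: (Na, S, d), pb: (Nb, S, d).
    diff = pa[:, None] - pb[None, :]                                     # (Na, Nb, S, d)
    dist = torch.einsum('absd,de,abse->abs', diff, M, diff).clamp_min(1e-12).sqrt()
    return torch.exp(-dist / bandwidth).mean(dim=-1)                     # (Na, Nb)

def conv_rfm(X, y, q=3, steps=3, reg=1e-3, bandwidth=10.0):
    # X: (N, C, H, W) images, y: (N,) targets.
    d = X.shape[1] * q * q
    M = torch.eye(d)
    P = extract_patches(X, q)                                            # (N, S, d)
    for _ in range(steps):
        K = laplace_km(P, P, M, bandwidth)
        alpha = torch.linalg.solve(K + reg * torch.eye(len(K)), y)       # kernel ridge regression
        agop = torch.zeros(d, d)
        for p in P:                                                      # AGOP over training images
            pp = p.clone().requires_grad_(True)                          # (S, d) patches of one image
            pred = laplace_km(pp[None], P, M, bandwidth) @ alpha
            g, = torch.autograd.grad(pred.sum(), pp)                     # (S, d) per-patch gradients
            agop += g.T @ g
        M = agop / agop.trace().clamp_min(1e-12)                         # feature matrix for the next round
    return M, alpha
```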
ConvRFM captures first layer features of convolutional neural networks. We now demonstrate that ConvRFM recovers features similar to those learned by first layers of CNNs. In Fig. 3A, we visualize the top eigenvectors of the feature matrix of CNTK-ConvRFM (filter size \(3\times 3\)) and Laplace-ConvRFM (filter size \(7\times 7\)) trained on SVHN. Training details for all methods are presented in Appendix B. We observe that these top eigenvectors resemble edge detectors [16]. In Fig. 3B, we visualize how the feature matrix of the CNTK-ConvRFM and the CNFM of the corresponding finite width CNN trained on SVHN transform SVHN images. Even though both operators arise from vastly different training procedures (solving kernel regression vs. training a CNN), we observe that both operators appear to extract similar features (corresponding to edges of digits) from SVHN images. We provide additional evidence for similarity between ConvRFM and CNN features in Appendix Fig. 10. To demonstrate further evidence of the universality of edge detector features arising from AGOP of CNTK-ConvRFM and Laplace-ConvRFM, we analyze how these AGOPs transform arbitrary images. In particular, in Fig. 3C, we apply these operators extracted from models trained on SVHN to images from ImageNet. We again observe that these operators remarkably extract edges from corresponding ImageNet images, which are of vastly different resolution (\(224\times 224\) instead of \(32\times 32\)) and contain vastly different objects. Such experiments provide conclusive evidence that AGOP with respect to patches of convolutional kernels recovers features akin to edge detectors. We present further experiments demonstrating emergence of edge detectors from convolutional kernels trained on CIFAR10 and GTSRB in Appendix Figs. 11 and 12. In particular, the eigenvectors of the AGOP often resemble Gabor filters with different orientations. In Figure 11, we see that horizontally, vertically, and diagonally aligned eigenvectors identify edges of the same alignment.
### Deep feature learning with Deep ConvRFM
ConvRFM can only extract features by linearly transforming patches of input images, which is analogous to extracting such features using the first layer of a CNN. In contrast, the CNFA implies that deep convolutional networks are capable of learning features in intermediate layers. To enable deep feature learning, we introduce Deep ConvRFM (see Algorithm 2) by sequentially learning features with AGOP in a manner similar to layerwise training in CNNs. In particular, Deep ConvRFM iterates the following steps (a minimal sketch follows the list):
1. Construct a predictor, \(\widehat{f}\), by training a convolutional kernel machine with kernel \(K_{M}\).
2. Update \(M\) to be the AGOP with respect to patches of the trained predictor.
3. Transform the data, \(x\), with random features given by \(\phi(Wx)\) where \(W\) denotes a set of convolutional filters with weights sampled according to \(\mathcal{N}(0,M)\) and \(\phi\) is a nonlinearity.
Figure 3: Feature extractors learned by ConvRFM using CNTK (CNTK-ConvRFM) and Laplace kernel (Laplace-ConvRFM), which appear to operate as universal edge detectors. **A.** Top 8 eigenvectors of CNTK-ConvRFM and Laplace-ConvRFM trained on SVHN. We use \(3\times 3\) patches for CNTK-ConvRFM and \(7\times 7\) patches for Laplace-ConvRFM. **B.** Comparison of patch operators learned by CNTK-ConvRFM (given by the AGOP taken with respect to patches) and CNNs (given by the CNFM). **C.** Applying patch-based AGOP operators from ConvRFMs trained on SVHN to images from ImageNet.
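The three steps above can be sketched as follows; `fit_kernel_predictor` and `patch_agop` are hypothetical placeholders (e.g., the ConvRFM sketch above), and the ReLU nonlinearity, jitter, and filter count are illustrative choices rather than the authors' settings.

```python
import torch
import torch.nn.functional as F

def deep_convrfm_layer(X, y, fit_kernel_predictor, patch_agop, q=3, width=64):
    # One Deep ConvRFM block: (1) fit a convolutional kernel predictor, (2) take the
    # patch-AGOP as the feature matrix M, (3) push the data through random convolutional
    # features with filters drawn from N(0, M).
    predictor = fit_kernel_predictor(X, y)                        # e.g. CNTK kernel ridge regression
    M = patch_agop(predictor, X, q)                               # (C*q*q, C*q*q) feature matrix
    L = torch.linalg.cholesky(M + 1e-6 * torch.eye(M.shape[0]))   # L L^T = M (plus jitter)
    filters = (torch.randn(width, M.shape[0]) @ L.T).reshape(width, X.shape[1], q, q)
    X_next = torch.relu(F.conv2d(X, filters, padding=q // 2))     # random-feature transform of the data
    return X_next, M, predictor
```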
Note that while we utilize random features and sample convolutional filters in Deep ConvRFM, we never utilize backpropagation to learn features or train models. Features are learned via the AGOP and models are trained by solving kernel regression, which is a convex optimization problem. For the base kernel for Deep ConvRFM, we utilize the deep CNTK [4] as implemented in the Neural Tangents library [34].5
Footnote 5: In order to take gradient with respect to patches using Neural Tangents, we used a workaround that involved expanding images into their patch representations. This workaround unfortunately leads to heavy memory utilization, which limited our analysis of Deep ConvRFMs.
Deep ConvRFM learns similar features to deep CNNs. We now present evidence that Deep ConvRFMs learn similar features to those learned by deep CNNs. We analyze features learned by Deep ConvRFM and the corresponding CNN on the local signal adaptivity synthetic tasks from [24] and SVHN. For the synthetic task from [24], we consider classification of MNIST digits embedded in a larger image of i.i.d. Gaussian noise. Dataset and training details are presented in Appendix B. In Fig. 4, we observe that AGOPs at each layer of Deep ConvRFM and CNFMs at each layer of the corresponding CNN transform examples from both datasets similarly.
Figure 4: Visualizations of features for each layer of Deep ConvRFM and the corresponding CNN on SVHN and the noisy digits task from [24].
Deep ConvRFM overcomes limitations of convolutional kernels. In [24], the authors posited local signal adaptivity, the ability to suppress noise and amplify signal in images, as a potential explanation for the superiority of convolutional neural networks over convolutional kernels. As supporting evidence, [24] demonstrated that convolutional networks generalized far better than convolutional kernels on image classification tasks in which images were embedded in a noisy background. We now demonstrate that by incorporating feature learning through patch-AGOPs, Deep ConvRFM exhibits local signal adaptivity on the tasks considered in [24] and thus, similar to CNNs, yields significantly improved performance over convolutional kernels. In particular, we begin by comparing performance of CNTK, ConvRFM, Deep ConvRFM, and corresponding CNNs on the following two image classification tasks from [24]: (1) images of black and white horizontal bars placed in a random position on larger images of Gaussian noise; (2) MNIST images placed in a random position on larger images of Gaussian noise. The work [24] demonstrated that CNNs, unlike CNTK, could learn to threshold the background noise and amplify the signal in these tasks, thus far outperforming CNTKs when the amount of background noise was large. In Fig. 5, we demonstrate that for these tasks CNNs, ConvRFMs, and Deep ConvRFMs all extract local signals and dim background noise through the AGOP, and thus far outperform CNTKs. Moreover, we observe that Deep ConvRFMs can provide up to a 5% improvement in performance over ConvRFM on the synthetic MNIST task, indicating a benefit to deep feature learning.
Benefit of deep feature learning on real-world image classification tasks. Lastly, we analyze performance of CNTK, ConvRFM, Deep ConvRFM, and the corresponding three-convolutional-layer CNN on standard image classification datasets available for download from PyTorch. Consistent with our observations for synthetic tasks from [24], we observe in Fig. 6A that ConvRFM and Deep ConvRFM provide an improvement over CNTK across almost all tasks. Moreover, we observe that ConvRFM and Deep ConvRFM outperform CNTKs consistently when the corresponding CNN outperforms the CNTK. In Fig. 6B, we analyze the impact of deep feature learning by increasing the number of feature learning layers in Deep ConvRFM, i.e., the number of layers for which we utilize the AGOP to learn features. We observe that adding more layers of feature learning leads to a consistent performance boost in the local signal adaptivity tasks from [24] and on select datasets such as SVHN and EMNIST [13].
## 4 Discussion
In this work, we identified a mathematical mechanism of feature learning in deep convolutional networks, which we posited as the Convolutional Neural Feature Ansatz (CNFA). Namely, the ansatz stated that features selected by convolutional networks, given by empirical covariance matrices of filters at any given layer, can be recovered by computing the average gradient outer product (AGOP) of the trained network with respect to image patches. We presented empirical and theoretical evidence for the ansatz. Notably, we showed that convolutional filter covariances of neural networks pre-trained on ImageNet (AlexNet, VGG, ResNet) are highly correlated with AGOP with respect to patches (in many cases, Pearson correlation \(>.9\)). Since the AGOP with respect to patches can be computed on any function operating on image patches, we could use the AGOP to enable feature learning in any machine learning model operating on image patches. Thus, building on the RFM algorithm for fully connected networks from [37], we integrated the AGOP to enable deep feature learning in convolutional kernel machines, which could not a priori learn features, and referred to the resulting algorithms as ConvRFM and Deep ConvRFM. We demonstrated that ConvRFM and Deep ConvRFM recover features similar to those of deep convolutional neural networks, including evidence that features learned by these models can serve as universal edge detectors, akin to features learned in convolutional networks. Moreover, we demonstrated that ConvRFM and Deep ConvRFM overcome prior limitations of convolutional kernels, including the Convolutional Neural Tangent Kernel (CNTK), such as the inability to adapt to localized signals in images [24]. Lastly, we showed a benefit to deep feature learning by demonstrating improvement in performance of Deep ConvRFM over ConvRFM and the CNTK on standard image classification benchmarks. We now conclude with a discussion of implications of our results and future directions.
Figure 5: Test accuracy of CNTK, ConvRFM, Deep ConvRFM, and the corresponding CNN on local signal adaptivity tasks from [24] as a function of noise level. **A.** Identifying black and white bars in noisy images. **B.** MNIST digits placed randomly in noisy background image.
Identifying mechanisms driving success of deep learning. Understanding the mechanisms driving success of neural networks is an important problem for developing effective, interpretable and safe machine learning models. The complexities of training deep neural networks, such as custom training procedures and layer structures (batch normalization, dropout, residual connections, etc.), can make it difficult to pinpoint overarching principles leading to the effectiveness of these models. The fact that the correlation between convolutional neural feature matrices (CNFMs) and AGOPs is high for convolutional networks pre-trained on ImageNet, with all of these inherent complexities baked in, provides strong evidence that the connection between AGOP and CNFMs is key to identifying the core principles making these networks successful.
Emergence of universal edge detectors with average gradient outer product. Detecting edges in images is a well-studied task in computer vision, and classical approaches involved applying fixed convolutional filters to detect edges in images [2, 16, 54]. Notably, AlexNet automatically learned filters in its first convolutional layer that were remarkably similar to Gabor filters [30]. Similarly, there was evidence that other convolutional networks pre-trained on ImageNet learned features akin to edge detection in the first layer [58]. Yet, it had been unclear how such filters automatically emerge through training. We demonstrated that the AGOP with respect to patches of a large class of convolutional models (convolutional neural networks and convolutional kernels) trained on various standard image classification tasks consistently recovered edge detectors (see Fig. 2, Fig. 3A, B). We further showed the universality of these edge detector features by demonstrating that features learned by ConvRFM on SVHN automatically identified edges in ImageNet images. This strongly suggests that edge detectors emerge from the underlying nature of the task rather than specific properties of architectures. Our findings indicate that understanding connections between AGOP and classical edge detection approaches is a promising direction for understanding emergence of features in the first layer of convolutional neural networks and for identifying simple algorithms to capture deeper convolutional features.
Figure 6: **A.** Performance comparison of Deep ConvRFM with the corresponding CNTK and CNN on benchmark image classification datasets from PyTorch. **B.** Effect of number of feature learning layers on Deep ConvRFM performance.
Reducing computational complexity of convolutional kernels. In this work, we provided an approach for enabling feature learning in convolutional kernels by iteratively training convolutional kernel machines and computing AGOP of the trained predictor. Given that convolutional kernels are able to achieve impressive accuracy on standard datasets without any feature learning [1, 6, 7, 29, 42], these methods have the potential to provide state-of-the-art results upon incorporating feature learning. Yet, in contrast to the case of classical kernel machines such as those used in [37], evaluating the kernel for an effective CNTK (such as those with Global Average Pooling [4]) can be a far more computationally intensive process than simply training a convolutional neural network. For example, according to Neural Tangents [34], the CNTK of a Myrtle kernel [42] can take anywhere from 300 to 500 GPU hours for CIFAR10. Given that Deep ConvRFM involves constructing a kernel matrix and computing AGOP to capture features at each layer, reducing the evaluation time of convolutional kernels through strategies such as random feature approximations is key to making these approaches scalable.
## Acknowledgements
A.R. is supported by the Eric and Wendy Schmidt Center at the Broad Institute. We acknowledge support from the National Science Foundation (NSF) and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning6 through awards DMS-2031883 and #814639 as well as the TILOS institute (NSF CCF-2112665). This work used the programs (1) XSEDE (Extreme science and engineering discovery environment) which is supported by NSF grant numbers ACI-1548562, and (2) ACCESS (Advanced cyberinfrastructure coordination ecosystem: services & support) which is supported by NSF grants numbers #2138259, #2138286, #2138307, #2137603, and #2138296. Specifically, we used the resources from SDSC Expanse GPU compute nodes, and NCSA Delta system, via allocations TG-CIS220009.
Footnote 6: [https://deepfoundations.ai/](https://deepfoundations.ai/)
## Code Availability
All code is available at [https://github.com/aradha/convrfm](https://github.com/aradha/convrfm).
|
2310.15778
|
Privacy Protection in MRI Scans Using 3D Masked Autoencoders
|
MRI scans provide valuable medical information, however they also contain
sensitive and personally identifiable information that needs to be protected.
Whereas MRI metadata is easily sanitized, MRI image data is a privacy risk
because it contains information to render highly-realistic 3D visualizations of
a patient's head, enabling malicious actors to possibly identify the subject by
cross-referencing a database. Data anonymization and de-identification is
concerned with ensuring the privacy and confidentiality of individuals'
personal information. Traditional MRI de-identification methods remove
privacy-sensitive parts (e.g. eyes, nose etc.) from a given scan. This comes at
the expense of introducing a domain shift that can throw off downstream
analyses. In this work, we propose CP-MAE, a model that de-identifies the face
by remodeling it (e.g. changing the face) rather than by removing parts using
masked autoencoders. CP-MAE outperforms all previous approaches in terms of
downstream task performance as well as de-identification. With our method we
are able to synthesize high-fidelity scans of resolution up to $256^3$ --
compared to $128^3$ with previous approaches -- which constitutes an eight-fold
increase in the number of voxels.
|
Lennart Alexander Van der Goten, Kevin Smith
|
2023-10-24T12:25:37Z
|
http://arxiv.org/abs/2310.15778v3
|
# Preserving Patient Privacy in MRI Scans: A Comprehensive Approach with 3D Masked Autoencoders
###### Abstract
MRI scans provide valuable medical information, however they also contain sensitive and personally identifiable information (PII) that needs to be protected. Whereas MRI metadata is easily sanitized, MRI image data is a privacy risk because it contains information to render highly-realistic 3D visualizations of a patient's head, enabling malicious actors to possibly identify the subject by cross-referencing a database. Data anonymization and de-identification is concerned with ensuring the privacy and confidentiality of individuals' personal information. Traditional MRI de-identification methods remove privacy-sensitive parts (e.g. eyes, nose etc.) from a given scan. This comes at the expense of introducing a domain shift that can throw off downstream analyses. Recently, a GAN-based approach was proposed to de-identify a patient's scan by remodeling it (e.g. changing the face) rather than by removing parts. In this work, we propose CP-MAE, a model that de-identifies the face using masked autoencoders and that outperforms all previous approaches in terms of downstream task performance as well as de-identification. With our method we are able to synthesize scans of resolution up to \(256^{3}\) (previously \(128^{3}\)) which constitutes an eight-fold increase in the number of voxels. Using our construction we were able to design a system that exhibits a highly robust training stage, making it easy to fit the network on novel data.
## 1 Introduction
Magnetic resonance imaging (MRI) is a non-invasive, high-resolution imaging technology that enables the analysis of anatomical structures such as brain, spine, joints, abdomen, pelvis, the cardiovascular system and associated diseases. While MRI scans are usually visualized as 2D slices over one of the three axes, it is also possible to render a high-quality 3D model using techniques such as volumetric ray-tracing, yielding a realistic depiction of a patient's face. This is problematic in terms of data privacy: Given a face database with associated identities, a malicious actor can find the closest match to a given face rendering, _allowing them to infer the patient's identity_.
Figure 1: MRI scans pose a privacy risk since highly-realistic face renderings can be crafted and misused for malicious purposes. Our model aims to advance the so-called _remodeling-based_ subclass of MRI de-identification which retains the brain and remodels all other features. (_top_: 3D view, _bottom_: slice view)
Various de-identification methods have appeared over the years that address the removal of privacy-sensitive parts1 and mostly differ in their level of aggressiveness: While FACE MASK [24] merely blurs out the face, skull-stripping methods such as MRI WATERSHED [35] only retain the brain and remove everything else. However, these traditional approaches are potentially problematic. Simply put, many tools in the medical workflow require the presence of certain landmarks as preconditions. If those landmarks are absent, the analysis might be impaired or, in the worst case, it might be impossible to perform the analysis at all. On the other hand, if a method is not aggressive enough it might still be feasible to infer a patient's identity, a fact that proves especially harmful when such a tool is to be used automatically and without oversight on a large corpus of scans.
Footnote 1: In the most extreme case just retaining the brain
Consequently, _removal-based_ methods seek to find a sweet spot which ensures acceptable levels of both privacy and downstream task compatibility. A recently proposed GAN-based approach named CP-GAN [11] introduced a new paradigm: to remodel the face and leave the medically-relevant information intact. This _remodeling-based_ class of models aims to remodel the skull and facial features (staying true to the overall data distribution of MRI scans) while leaving the brain _untouched_. Put differently, a patient's MRI scan is de-identified by copying their brain and remodeling the remaining parts such that they "look like" as if they belong to an actual MRI scan but do not give away information about the patient's real face (or identity). This approach eliminates the need for the aforementioned trade-off, as it ensures accurate placement of landmarks and thus minimizes domain shift, leading to more effective de-identification performance.
Recently, the emergence of _diffusion-probabilistic_ models (DPMs) and _masked autoencoders_ (MAEs) has propelled generative approaches forward, deviating from the previously dominant _generative adversarial network_ (GAN) methodology. Instead of relying on the definition of an adversarial game to learn the underlying data distribution, these new model classes adopt significantly different principles. Masked autoencoders draw inspiration from natural language processing (NLP) models such as BERT [12], and focus on predicting stochastically-masked segments of the data while conditioning on the unmasked portions. Data points can be sampled by starting with randomly initialized data and allowing the masked autoencoder to fill in the missing parts based on its highest confidence. MAEs are generally considered to be more efficient [30, 41] than DPMs during inference as significantly fewer steps are needed to synthesize a high-fidelity sample.
MAEs also offer two fundamental advantages over GANs: (i) Higher training stability and (ii) typically a lower memory footprint2 at training time. Considering that MRI de-identification with generative models involves high memory requirements and often suffers from instability due to small batch sizes, MAEs seem to be a promising alternative.
Footnote 2: Due to the absence of a discriminator
In this work, we adapt masked autoencoders for MRI de-identification. Our contributions are as follows:
1. We suggest a novel _masked autoencoder_-based model named CP-MAE
2. We show that CP-MAE can deal with higher-resolution MRI scans than previous remodeling-based methods, improving the state-of-the-art from a resolution of \(128^{3}\) to \(256^{3}\), effectively octupling the number of synthesized voxels
3. To the best of our knowledge, our model is the first instantiation of combining a volumetric VQ-VAE with an MAE for 3D MR image synthesis
4. We demonstrate that CP-MAE features superior de-identification performance compared to other methods and can be robustly trained on modestly-sized datasets (_e.g_. ADNI, OASIS-3)
5. We show that de-identification with CP-MAE introduces minimal effects on brain tissue and subcortical segmentation tasks compared to other approaches.
## 2 Related Work
**MRI De-Identification.** MRI de-identification is a critical pre-processing step in neuroimaging, and many methods have been proposed to effectively perform this task. Smith _et al_. introduced the Brain Extraction Tool (BET) [36], which is widely used for its simplicity. The algorithm creates a binary brain mask by thresholding the input image and uses a deformable model to refine the initial mask. Despite its effectiveness, it sometimes fails to exclude non-brain tissues, such as dura and eyes.
To address some of the limitations of BET, Iglesias _et al_. proposed Robust Brain Extraction (ROBEX) [20]. ROBEX employs a trained random forest classifier to distinguish brain and non-brain voxels. MRI WATERSHED [35] is perhaps the most widely used algorithm due to its presence in the _FreeSurfer_[13] library, as the name suggests it employs a watershed transformation followed by a deformable model. Milchenko _et al_. [24] propose FACE MASK, focusing on the protection of sensitive facial information in MRI data. This method combines the benefits of defacing and skull-stripping by generating a mask that can be used to blur out facial features while preserving brain anatomy. Schimke _et al_. introduced a tool named QUICKSHEAR [34], which is intended to compute a hyperplane in 3D space that separates the facial features from the brain. DEFACE [5] is a deformable model that estimates which voxels belong to the brain, said voxels can then be cut out for de-identification purposes.
Finally, Van der Goten _et al_. suggest a remodeling-based
approach using GANs, CP-GAN [11]. This framework anonymously remodels non-brain tissues as opposed to removing them. In CP-GAN the de-identification process is performed by a conditional GAN that takes as input the patient's brain and a convex hull hinting at the metric extent of the skull to be generated. Our approach follows this general framework of remodeling the face and skull, as opposed to opting for something that would remove these parts altogether.
**Vector Quantization.** Vector-quantized autoencoders (VQ-VAE) [31, 40] have become a staple in the computer vision community over the last years [29, 33] and have also found widespread application in other fields [9, 15, 44]. VQ-VAEs are neural networks that marry the concept of autoencoding with vector quantization. The latter concept can best be described as _snapping_ a continuous vector \(c\in\mathbb{R}^{d}\) to a so-called codebook vector \(v_{i}\in\mathbb{R}^{d}\) (\(i=1,\ldots,K\)) according to \(i=\text{argmin}_{j}\ d(c,v_{j})\), where \(d(\cdot,\cdot)\) denotes some suitable distance function. This concept enables efficient data compression and reconstruction. The impossibility of deriving gradients through the discrete argmin operation is addressed by the _straight-through_ estimator trick [4].
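A minimal sketch of such a quantizer with the straight-through estimator, roughly following the original VQ-VAE formulation (the 0.25 commitment weight is a commonly used default, assumed here rather than taken from the paper):

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    # Snap each d-dimensional vector to its nearest codebook entry; gradients are
    # passed to the encoder via the straight-through estimator.
    def __init__(self, num_codes: int, dim: int, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z):                                   # z: (..., dim) continuous encoder outputs
        flat = z.reshape(-1, z.shape[-1])
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=-1)   # closest code per vector
        quantized = self.codebook(idx).reshape(z.shape)
        z_q = z + (quantized - z).detach()                  # straight-through estimator
        codebook_loss = torch.mean((quantized - z.detach()) ** 2)
        commitment_loss = self.beta * torch.mean((quantized.detach() - z) ** 2)
        return z_q, idx.reshape(z.shape[:-1]), codebook_loss + commitment_loss
```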
A recurrent theme within recent works in computer vision is a two stage approach: Firstly, a VQ-VAE (or a variant thereof) is trained in an _unsupervised_ fashion on the whole dataset \(\mathcal{D}\). The encoder of the VQ-VAE is used to _translate_ each dataset element \(X\) (image, spectrogram etc.) into a discrete3 representation \(e(X)\) (_i.e_. a matrix of integers) constituting a (fixed) dataset \(\mathcal{D}_{e}\). Secondly, the actual task is formulated. For image synthesis this could mean to learn how to stochastically sample from \(\mathcal{D}_{e}\). Said samples can then be fed through the decoder to obtain actual samples mimicking the dataset \(\mathcal{D}\). The distribution can be learned via a multitude of model classes. The most common ones are autoregressive [31], DPM-based [1, 7] or MAE-based [6, 30].
Footnote 3: It is also possible to translate to _continuous_ vectors which is done by taking the _inputs_ of the discretization operation rather than its outputs
**Image Synthesis.** Image synthesis stands as a potent subset of machine learning models, boasting a wide range of uses such as the generation of synthetic data, identifying anomalies, and facilitating semi-supervised learning. A diverse array of generative models exists, encompassing Variational Autoencoders (VAEs) [22], Generative Adversarial Networks (GANs) [16], and, more recently, diffusion-probabilistic models (DPMs) [37] and masked autoencoders (MAEs) [12, 27, 28].
While diffusion-probabilistic models often produce state-of-the-art results, they are challenged by long execution times during inference because for a fixed number of steps \(N\) (typically \(1000\)) the image needs to be sequentially denoised \(N\) times. DDIM [38] has shown that a non-Markovian reinterpretation of the diffusion process can reduce the number of steps significantly (_e.g_. to merely 50 steps). Yet, we argue that this is still a lengthy process if one is concerned with high-resolution 3D scans, especially if consumer-grade hardware is to be used.
Masked autoencoder (MAE) approaches are related to diffusion models in that they aim to predict an image that was previously degraded. While DPMs use noise to achieve this, MAEs stochastically mask out certain portions of the data. Recently, MaskGIT [6] demonstrated that state-of-the-art performance on ImageNet [10] can be achieved with a remasking scheme, Paella [30] uses a similar training objective but simplifies the inference stage and furthermore allows the model to re-predict tokens. Given said properties we opt to adapt the Paella model to 3D in order to model the latent integer codes emitted by the 3D VQ-VAE networks.
Figure 2: **Preprocessing/Vector Quantization Stage.** We first apply standard preprocessing algorithms to a given MRI scan to make the data easier to model. We then execute two independent stages: (i) We use ROBEX [20] to extract the (continuous) brain of the scan, and inversely (ii) we remove the brain from the full skull. Both representations are then _compressed_ independently by two VQ-VAEs into 3D integer volumes of much lower resolution. To help generalization we apply various augmentation techniques. After the training phase, a “paired dataset” is created that contains the fixed discrete codes derived from both representations.
## 3 Method
We follow the problem definition of CP-GAN [11]: Given a set of 3D scans \((X^{(i)})_{i=1,...,N}\) following a data distribution \(\mathcal{P}_{X}\) and having a resolution of \(S^{3}\), we aim to find a mapping \(Y=G_{\Phi}(\gamma(X))\) parameterized by \(\Phi\) that de-identifies a _raw_ scan \(X\) and yields \(Y\). The purpose of the _privacy transform_\(\gamma(\cdot)\) is to provide \(G_{\Phi}(\cdot)\) with a minimal blueprint to guide the skull synthesis without leaking information about the patient's facial features4. Fundamentally, \(\gamma(X)\) should at least include a representation of the brain to inform the model about the metric constraints of the to-be-synthesized skull.
Footnote 4: As the skull is synthesized around the brain, \(\gamma(\cdot)\) should at least contain a binary brain mask
**Overview.** Synthesizing (3D) MRI scans is a challenging endeavor in terms of memory as a volume contains \(S^{3}\) (_e.g_. \(S\in\{128,256\}\)) voxels which are ideally predicted by a single pass. VQ-VAEs paired with DPMs have already been successfully shown to accurately model high-resolution (2D) imagery, most prominently in the case of Stable Diffusion [33] that manages5 to synthesize (RGB) images of resolution up to \(256^{2}\). Applying a VQ-VAE on MRI scans is therefore particularly promising as the number of voxels could in the most extreme case be reduced from \(256^{3}\) to a resolution of \(64^{3}\), reducing the overall number of voxels by a factor of 64. This is equivalent to a grayscale image of resolution \(512^{2}\), which is still pushing the limits of modern hardware and algorithms.
Footnote 5: Without using additional super resolution techniques
One key objective of our work is to be able to run the training on consumer-grade hardware with modest training/inference times. Having this in mind, a bird's eye view of our method can be described as follows: We first generalize the VQ-VAE model to three dimensions. One instance of the VQ-VAE is tasked to model the brains only, a second complementary instance models the full skulls _without_ the brain. Applying both trained instances on an MRI dataset produces a paired dataset where each item is a tuple of the two latent integer volumes coming from each VQ-VAE encoder (see Fig. 2).
In a second step, we employ a Paella-style MAE that conditions on the previously derived latents from the brain and models the distribution of the latents pertaining to the full skulls (see Fig. 3). Unseen MRI scans can then be de-identified by computing the latents associated to the brain and letting the MAE synthesize a realistic skull using the brain latent as conditioning variable (see Fig. 4).
**Vector Quantization Stage.** In order to reduce the memory requirements of high-resolution MR imagery, we leverage the VQ-VAE framework to compress (parts of) MRI scans into latent codes.
As a preparatory step we use ROBEX [20] to compute the (binary) brain mask \(B(X)\) of an MRI scan \(X\in\mathcal{D}\). We need the brain representation for two reasons: Firstly, it serves as a conditioning variable that informs the synthesis about the proportions of the skull, and secondly, we require it to later copy the original brain into the de-identified scan.
We then train a (3D) VQ-VAE for each of the two representations \(B(X)\odot X\) (_"brain"_) and \(\overline{B}(X)\odot X\) (_"skull"_) _in isolation_ where \(\odot\) denotes the Hadamard product and \(\overline{B}(X)\doteq 1-B(X)\). Both representations are complementary6 to each other in that \(B(X)\odot X\) contains solely the brain (_i.e_. has non-zero brain intensities) without featuring any of the remaining parts (_i.e_. zero non-brain intensities) whereas the inverted properties hold for \(\overline{B}(X)\odot X\).
Footnote 6: By adding both one recovers \(X\)
Training both models yields two encoder/decoder pairs \((e_{1},d_{1})\) and \((e_{2},d_{2})\). After training, we deploy the two encoders to translate the dataset of MRI scans \(\mathcal{D}\) into a highly-compressed _paired_ dataset of integer codes \(\mathcal{D}_{e}=\{(e_{1}[B(X)\odot X)],e_{2}[\overline{B}(X)\odot X]\mid X \in\mathcal{D}\}\). For the sake of simplicity, we assume that both \(e_{1}[\cdot]\) and \(e_{2}[\cdot]\) share the same output shape \(s\times s\times s\) where \(s\) is a power of two that evenly divides \(S\). A conceptual depiction of this and the applied preprocessing can be found in Fig. 2.
Figure 3: **Latent Modeling Stage (MAE).** Given the “paired dataset” featured in Figure 2 we first extract the two components derived from the (continuous) brain and the full skull without the brain. The former serves as a _conditioning_ variable that informs the network about the metric constraints (_e.g_. the size of the to-be-generated skull) of the synthesis process. During training, the latter (a volume of integers encoding the full skull) is stochastically perturbed according to a _noise schedule_. The MAE’s task is then to “unmask” the tokens. Since the targets are categorical we opt for the _cross entropy_ loss.
**Architecture.** We adapt the original VQ-VAE [40] model from (2D) images to (3D) MRI scans. Given a tensor7 \(Z\in\mathbb{R}^{S\times S\times S}\), its resolution \(S\in\{128,256\}\) and a desired output resolution \(s\ll S\) (a power of two), we first feed \(Z\) through an encoder. The encoder consists of a stack of \(\log_{2}(S/s)\) convolutional blocks (the _“feature block”_) followed by a vector quantizer.
We apply layer normalization [2] along the channel dimension before each convolution. Let us express the final output of the feature block as \(f(Z)\in\mathbb{R}^{s\times s\times s\times L}\) where \(L\) denotes an arbitrary embedding dimension.
The vector quantizer, part of the encoder, operates independently on the \(s^{3}\) vectors of its input \(f(Z)\), producing a volumetric output \(e(Z)\in\{0,\dots,N_{\text{CV}}-1\}^{s\times s\times s}\) of integers where each integer encodes the index of the closest codebook vector in a (trainable) codebook of size \(N_{\text{CV}}\). The decoder is a mirror image of the encoder, with its own set of parameters. We employ a mean squared error loss between \(Z\) and the output of the decoder. Due to space constraints, we defer further specifics as well as the augmentation regimen to the _Appendix_.
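A rough sketch of such a 3D encoder is shown below; the channel widths, kernel sizes, and GELU activations are illustrative assumptions, and only the overall structure (a stack of \(\log_{2}(S/s)\) strided 3D convolutions with channel-wise layer normalization, followed by a projection to the pre-quantization embedding) follows the description above.

```python
import math
import torch
import torch.nn as nn

class ChannelLayerNorm3d(nn.Module):
    # Layer normalization over the channel dimension of a (N, C, D, H, W) tensor.
    def __init__(self, channels: int):
        super().__init__()
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        return self.norm(x.permute(0, 2, 3, 4, 1)).permute(0, 4, 1, 2, 3)

class Encoder3D(nn.Module):
    # Stack of log2(S/s) stride-2 Conv3d blocks mapping a (N, 1, S, S, S) scan to
    # (N, latent_dim, s, s, s) pre-quantization features.
    def __init__(self, S: int = 256, s: int = 64, base: int = 32, latent_dim: int = 64):
        super().__init__()
        n_blocks = int(math.log2(S // s))
        layers, c_in = [], 1
        for i in range(n_blocks):
            c_out = base * 2 ** i
            if i > 0:
                layers.append(ChannelLayerNorm3d(c_in))
            layers += [nn.Conv3d(c_in, c_out, kernel_size=4, stride=2, padding=1), nn.GELU()]
            c_in = c_out
        layers.append(nn.Conv3d(c_in, latent_dim, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```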
**Latent Modeling Stage.** Having the compressed latent integer representations of \(E_{1}\hat{=}e_{1}[B(X)\odot X]\) (_"brain"_) and \(E_{2}\hat{=}e_{2}[\overline{B}(X)\odot X]\) (_"skull"_) now at our disposal, we describe a generative MAE \(H_{\Phi}(E_{2}|E_{1})\) that predicts the latter (integer) skull representation while conditioning on the former (integer) brain representation.
**Architecture.** Recall that both \(E_{1}\) and \(E_{2}\) are 3D volumes/grids of integers from the set \(\{0,\dots,N_{\text{CV}}-1\}^{s\times s\times s}\). In order to model the distribution of \(E_{2}\) the MAE methodology requires that one defines a (time-dependent) scheme to _perturb_ the integers in the \(E_{2}\) representation. We decide to use _Paella_-style [30] perturbations, _i.e._ a value \(v\in\{0,\dots,N_{\text{CV}}-1\}\) is kept with probability \(\alpha_{t}\) and resampled from the same set with a probability \(1-\alpha_{t}\), where the time \(t\sim\mathcal{U}(0,1)\) is sampled independently for each element in the batch. This can be efficiently implemented by bundling the point-wise masking decisions in a tensor \(M\) and the three-dimensional uniform variates in \(U\):
\[\tilde{E_{2}}=(1-M)\odot E_{2}+M\odot U \tag{1}\]
As the spatial shape of the input agrees with the spatial shape of the output, we decide to use a U-Net architecture (with layer normalization) to denoise the perturbed representations. During training, the U-Net is given three inputs: the perturbed input \(\tilde{E_{2}}\), the condition \(E_{1}\hat{=}e_{1}[B(X)\odot X]\) (also called \(\gamma(X)\)) and the time \(t\), which provides a general sense of how corrupted \(\tilde{E_{2}}\) is. We choose to merge \(E_{1}\) and \(\tilde{E_{2}}\) by using a simple (channel-wise) concatenation before feeding it into the network. The scalar \(t\) is fed to an MLP (with GeLU activations [18]) at each layer, deriving two parameters \(\delta_{1},\delta_{2}\) which are used to shift and rescale an input \(Z\) as \(Z^{\prime}=Z\cdot(\delta_{1}+1)+\delta_{2}\). The U-Net finally produces a logit tensor \(Y_{\text{logits}}\in\mathbb{R}^{s\times s\times s\times N_{\text{CV}}}\) that can be used to compute the cross entropy loss w.r.t. the (unperturbed) \(E_{2}\). For further details refer to Fig. 3 and the _Appendix_.
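Put together, one training step of the latent MAE might look roughly as follows; the linear keep-probability schedule and the `unet` interface (which is assumed to embed the integer tokens and consume the condition and time internally) are illustrative assumptions rather than the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def mae_training_step(unet, E1, E2, num_codes):
    # E1, E2: (B, s, s, s) integer token volumes for the brain (condition) and skull (target).
    t = torch.rand(E2.shape[0], device=E2.device)              # per-sample corruption time
    alpha_t = 1.0 - t                                          # keep probability (linear schedule assumed)
    keep = torch.rand(E2.shape, device=E2.device) < alpha_t.view(-1, 1, 1, 1)
    U = torch.randint_like(E2, num_codes)                      # uniformly resampled tokens
    E2_tilde = torch.where(keep, E2, U)                        # Eq. (1): perturbed skull tokens
    logits = unet(E2_tilde, E1, t)                             # (B, num_codes, s, s, s)
    return F.cross_entropy(logits, E2)                         # predict the clean tokens
```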
Figure 4: **Test-time De-Identification.** An _unseen_ MRI scan is de-identified by applying the same preprocessing as during train time. Afterwards, the brain is extracted yielding both a continuous and a binary brain representation. The former is _vector-quantized_ to a compact volume of integers and used as the _condition_ in the inference stage of the masked autoencoder. Starting from a randomly-initialized \(\hat{X}\) the network refines its estimate of what a skull around the given brain could look like. A certain number of _renoising_ steps is necessary as the network was not tasked to do one-step reconstructions during the training phase. The final _de-identified_ scan is obtained by _blending_ the original scan with the last estimate \(\hat{X}\) where the _binary_ brain representation acts as a _mask_. This step ensures that the brain is preserved.
**Sampling Stage.** After having trained the MAE, we execute the following sampling steps: (i) compute the brain mask \(B(X^{\prime})\), (ii) compute the integer representation \(E_{1}=e_{1}[B(X^{\prime})\odot X^{\prime}]\), (iii) use \(H_{\Phi}\) to sample \(E_{2}\) using \(E_{1}\) as a conditioning variable, and finally (iv) pass \(E_{2}\) through the decoder \(d_{2}(\cdot)\) to recover a skull that harmonizes well with the brain in \(X^{\prime}\). To ensure that the de-identified scan accurately represents the real brain and not a hallucinated version thereof, we implement a simple blending scheme that is inspired by CP-GAN in order to copy the brain from the original \(X\):
\[Y=(1-B(X))\odot d_{2}(\hat{E_{2}})+B(X)\odot X \tag{2}\]
where \(\hat{E_{2}}\) denotes the latent as estimated by letting the MAE condition on \(E_{1}\hat{=}e_{1}[B(X)\odot X]\) (_i.e._ the brain).
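A heavily simplified sketch of the test-time procedure is shown below. The number of renoising steps, the schedule for committing to tokens, and the tensor shapes are illustrative assumptions; the actual Paella-style sampler differs in its details.

```python
import torch

@torch.no_grad()
def deidentify(X, brain_mask, e1, d2, unet, num_codes, steps=12):
    # X, brain_mask: (1, 1, S, S, S) scan and binary brain mask; e1/d2 are the trained
    # brain encoder and skull decoder; unet is the trained MAE.
    E1 = e1(brain_mask * X)                                     # (1, s, s, s) brain tokens
    E2 = torch.randint(num_codes, E1.shape, device=X.device)    # start from random skull tokens
    for step in range(steps):
        t = torch.full((1,), 1.0 - step / steps, device=X.device)
        logits = unet(E2, E1, t)                                # (1, num_codes, s, s, s)
        proposal = torch.distributions.Categorical(logits=logits.movedim(1, -1)).sample()
        keep_old = torch.rand(E2.shape, device=X.device) < (step / steps)
        E2 = torch.where(keep_old, E2, proposal)                # progressively commit to tokens
    skull = d2(E2)                                              # decode a synthetic skull
    return (1 - brain_mask) * skull + brain_mask * X            # Eq. (2): keep the real brain
```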
**Privacy Considerations.** During the training phase, the model can theoretically "remember" or associate the original scan given only a representation of the brain. However, when we split the data into training and testing sets, we find that the de-identification property holds on the testing set. This is because the network has never seen brains from the test set before and a test-set brain hence does not provide enough information to accurately recreate the skull or any distinctive facial features of an unseen individual. We validate this finding within our user study in the _Experiments_ section.
## 4 Experiments
In this section, we conduct a comprehensive set of experiments to evaluate the performance and effectiveness of our proposed methodology for MRI scan de-identification. We aim to assess the capabilities of our approach in terms of preserving privacy while maintaining the diagnostic value of the scans. To achieve this, we carefully design experiments that encompass various aspects, including downstream task performance and the degree of de-identification achieved. To gauge how much a de-identification method affects a real-world downstream task, we employ various segmentation tasks (_i.e._ SIENAX [21], FIRST [26] & FASTSURFER [19]) and quantify how the segmentation changes in comparison to running the methods on raw data. We also conduct a user study on Amazon Mechanical Turk in which people are tasked with mapping de-identified scans to their raw/original counterpart given 3D renderings of the respective scans.
**Datasets and implementation details.** In this study, we use the well-established and publicly available datasets ADNI [42, 43] (2,172 T1-weighted MRI scans) and OASIS-3 [23] (2,556 T1-weighted MRI scans). We use a per-patient train (\(80\%\))-test (\(20\%\)) split to partition the data.
For training our networks, we leverage two NVIDIA RTX 4090 GPUs, each equipped with 24 GiB of GPU memory.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
 & \multicolumn{4}{c}{_User-based_} & \multicolumn{4}{c}{_Model-based_} \\ \hline
 & \multicolumn{2}{c}{ADNI} & \multicolumn{2}{c}{OASIS-3} & \multicolumn{2}{c}{ADNI} & \multicolumn{2}{c}{OASIS-3} \\
 & \(128^{3}\) & \(256^{3}\) & \(128^{3}\) & \(256^{3}\) & \(128^{3}\) & \(256^{3}\) & \(128^{3}\) & \(256^{3}\) \\ \hline
BLACK & \(20.17\pm 25.45\) & \(20.30\pm 26.33\) & \(18.37\pm 10.90\) & \(18.54\pm 11.08\) & \(19.66\pm 2.25\) & \(18.75\pm 2.08\) & \(19.86\pm 0.89\) & \(19.98\pm 1.19\) \\
BLURRED & \(45.05\pm 28.91\) & \(46.29\pm 29.42\) & \(41.63\pm 15.57\) & \(42.93\pm 18.06\) & \(87.46\pm 7.65\) & \(86.54\pm 8.61\) & \(97.05\pm 2.33\) & \(97.31\pm 1.67\) \\
ORIGINAL & \(55.33\pm 30.70\) & \(58.93\pm 29.24\) & \(61.86\pm 15.00\) & \(59.27\pm 16.94\) & \(100.00\pm 0.00\) & \(100.00\pm 0.00\) & \(100.00\pm 0.00\) & \(100.00\pm 0.00\) \\
MRI WATERSHED & \(19.03\pm 25.31\) & \(21.48\pm 27.25\) & \(22.56\pm 13.11\) & \(22.20\pm 14.41\) & \(44.75\pm 4.99\) & \(47.03\pm 5.06\) & \(67.76\pm 4.39\) & \(67.50\pm 7.39\) \\ \hline
DEFACE & \(43.53\pm 50.16\) & \(45.58\pm 29.53\) & \(38.14\pm 9.58\) & \(43.17\pm 16.04\) & \(92.97\pm 0.86\) & \(90.22\pm 1.51\) & \(90.78\pm 0.36\) & \(90.85\pm 0.20\) \\
QUICKSHEAR & \(38.51\pm 30.18\) & \(39.81\pm 30.43\) & \(40.70\pm 16.68\) & \(35.85\pm 13.78\) & \(98.79\pm 0.87\) & \(95.66\pm 1.98\) & \(98.81\pm 0.21\) & \(99.83\pm 0.20\) \\
FACE MASK v1 & \(48.55\pm 30.49\) & \(50.24\pm 30.54\) & \(50.23\pm 17.39\) & \(52.68\pm 14.32\) & \(96.31\pm 27.21\) & \(98.75\pm 1.59\) & \(99.65\pm 0.50\) & \(99.91\pm 0.23\) \\
FACE MASK v2 & \(38.05\pm 28.79\) & \(43.60\pm 29.90\) & \(33.02\pm 17.93\) & \(35.85\pm 15.00\) & \(94.42\pm 5.11\) & \(98.06\pm 1.20\) & \(99.78\pm 0.31\) & \(99.63\pm 0.29\) \\
CP-GAN & \(28.46\pm 27.95\) & ✗ & \(30.47\pm 14.30\) & ✗ & \(56.11\pm 5.05\) & ✗ & \(56.40\pm 2.89\) & ✗ \\
CP-MAE & \(\mathbf{23.14\pm 27.88}\) & \(\mathbf{25.91\pm 27.65}\) & \(\mathbf{22.56\pm 11.77}\) & \(\mathbf{23.41\pm 15.27}\) & \(\mathbf{39.91\pm 9.49}\) & \(\mathbf{41.74\pm 6.91}\) & \(\mathbf{48.82\pm 4.32}\) & \(\mathbf{58.19\pm 4.04}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: **De-Identification Quality.** We compare CP-MAE against traditional methods, CP-GAN and four control methods (above “- -” line) in terms of their de-identification capabilities. For this, users on _Amazon Mechanical Turk_ were asked to map an original rendering onto its de-identified counterpart (c.f. _“User-based”_). Similarly, we harnessed a neural network trained within a _contrastive learning_ framework to perform the very same task (c.f. _“Model-based”_). Both scenarios involve finding the correct option among five alternatives (of which four belong to a different subject) and the showcased values correspond to the percentage of correct guesses (\(\pm\) s.d.). The “✗” for CP-GAN denotes that the method does not support a resolution of \(256^{3}\).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
 & \multicolumn{2}{c}{ADNI} & \multicolumn{2}{c}{OASIS-3} \\ \hline
 & \(128^{3}\) & \(256^{3}\) & \(128^{3}\) & \(256^{3}\) \\ \hline
MRI WATERSHED & ✗ & \(0.229\pm 0.208\) & ✗ & ✗ \\
DEFACE & \(0.977\pm 0.016\) & \(0.977\pm 0.022\) & \(0.981\pm 0.007\) & \(0.959\pm 0.013\) \\
QUICKSHEAR & \(0.952\pm 0.007\) & \(0.980\pm 0.015\) & \(0.982\pm 0.044\) & \(0.981\pm 0.027\) \\
FACE MASK v1 & \(0.959\pm 0.007\) & \(0.956\pm 0.011\) & \(0.956\pm 0.022\) & \(0.958\pm 0.012\) \\
FACE MASK v2 & \(0.550\pm 0.007\) & \(0.952\pm 0.062\) & \(0.950\pm 0.043\) & \(0.958\pm 0.012\) \\
CP-GAN & \(0.980\pm 0.013\) & ✗ & \(0.984\pm 0.003\) & ✗ \\
CP-MAE & \(\mathbf{0.985\pm 0.010}\) & \(\mathbf{0.986\pm 0.011}\) & \(\mathbf{0.994\pm 0.005}\) & \(\mathbf{0.981\pm 0.012}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: **Downstream Tasks: Brain segmentation.** We utilize the widely-established SIENAX tool from the FSL suite to gauge how much de-identification methods affect the outcome of a common downstream task. A perfect method gives rise to the same segmentation. We quantify the difference between the segmentation maps by computing the class-averaged Dice score over four classes. Higher is better. (“✗” denotes that the model is not available for this resolution, “✗” means that the method is failing in more than 90% of all cases)
Figure 5: **De-Identification Quality:** Renderings. A single subject (from OASIS-3) de-identified by various methods (and control methods) featuring differing levels of aggressiveness.
Our implementation relies on the PyTorch framework [25]. To optimize memory consumption, we utilize the _inductor_ backend for model compilation, which has proven to be highly advantageous. Moreover, we adopt the recently introduced Lion optimizer [8], which enables us to scale up the model size compared to other available optimizers. These implementation details contribute to the efficiency and effectiveness of our training process and facilitate robust experimentation.
**Preprocessing & Benchmarks.** The MRI scans are standardized to RAS+ orientation, resampled to a common resolution (\(128^{3}\) resp. \(256^{3}\)), bias field corrected by the N4 method [39], intensity normalized using the _Fuzzy C-means_ algorithm [32] and finally co-registered to a fixed scan with an affine deformation.
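A possible sketch of the first preprocessing steps using SimpleITK is given below; the orientation string, resampling choices, and the Otsu mask used for N4 are illustrative assumptions, and the Fuzzy C-means normalization and affine co-registration are only indicated as comments:

```python
import SimpleITK as sitk

def preprocess(path: str, target_size: int = 128) -> sitk.Image:
    scan = sitk.ReadImage(path, sitk.sitkFloat32)

    # Standardize the orientation to RAS+.
    scan = sitk.DICOMOrient(scan, "RAS")

    # Resample to an isotropic target grid (128^3 or 256^3).
    old_size, old_spacing = scan.GetSize(), scan.GetSpacing()
    new_spacing = [sz * sp / target_size for sz, sp in zip(old_size, old_spacing)]
    resample = sitk.ResampleImageFilter()
    resample.SetSize([target_size] * 3)
    resample.SetOutputSpacing(new_spacing)
    resample.SetOutputOrigin(scan.GetOrigin())
    resample.SetOutputDirection(scan.GetDirection())
    resample.SetInterpolator(sitk.sitkLinear)
    scan = resample.Execute(scan)

    # N4 bias field correction [39], restricted to an Otsu mask of the head.
    mask = sitk.OtsuThreshold(scan, 0, 1, 200)
    scan = sitk.N4BiasFieldCorrectionImageFilter().Execute(scan, mask)

    # Fuzzy C-means intensity normalization [32] and affine co-registration to a
    # fixed reference scan would follow here (omitted in this sketch).
    return scan
```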
We juxtapose our findings by comparing against six de-identification methods. The first three belong to the removal-based class and are comprised of QUICKSHEAR [34], FACE MASK [24] and DEFACE [5]. On the extreme end we also include MRI WATERSHED as a control method which removes all non-brain tissue. Apart from the published version of FACE MASK we also cover an updated8 version of the toolkit created by the same author. For the sake of clarity we refer to the two variants as FACE MASK v1 and FACE MASK v2. Furthermore, we benchmark against CP-GAN [11] which is so far the only member of the remodeling-based class. A visual depiction of the methods can be found in Figure 5. All procedures were applied with default settings on images of resolution \(128^{3}\) and \(256^{3}\) (with the exception of CP-GAN which synthesizes only up to a resolution of \(128^{3}\)). A detailed explanation of each method can be found in the _Appendix_.
Footnote 8: [https://github.com/mmilch01/mask_face2](https://github.com/mmilch01/mask_face2)
**De-Identification Quality: User-based Study.** In an effort to assess the resilience of various de-identification approaches, we replicate the user study of CP-GAN utilizing Amazon Mechanical Turk. Participants were exposed to a series of quiz questions designed to bypass various de-identification techniques. Each question was created by sampling an untouched (3D) rendering of a scan, accompanied by five more (3D) renderings created by a single fixed method (_e.g_. CP-MAE). Within these renderings, one was the de-identified counterpart of the original patient scan. The goal for the participants was to correctly match the de-identified rendering with the rendering of the unaltered scan. The renderings were produced using volumetric ray tracing and feature a \(45^{\circ}\) side profile of the patients. This is in contrast to the user study conducted by CP-GAN which uses a frontal view. We argue that a side view presents more discernible information to the user as various features such as skull size, ears and head posture can be factored into the decision process.
For maintaining comparability, we incorporated control tasks with varying degrees of de-identification: ORIGINAL, which signifies the untouched scan with no de-identification applied; BLURRED, introducing a mild degree of obfuscation by blurring the 2D renderings; BLACK, consistently presenting an entirely black image; and MRI WATERSHED, a brain extraction tool which only retains the brain tissue.
For each method (or control method) we posted \(220\) distinct9 questions and a single question was given to five workers, meaning that \(1100\) questions were answered in total for each method. Taking the two resolutions into account, we collected \(2\times 10\times 1100=22,000\) answers. Since the workers are presented with five alternatives the optimal guessing rate is \(20\%\) which corresponds to perfect de-identification. More specifics, including a sample question, are available in the _Appendix_. Illustrative renderings are presented in Figure 5 whereas the results are showcased in Table 1.
Footnote 9: One question for each element in the test set
We observe that CP-MAE attains almost optimal de-identification results, outperforming the removal-based methods by a margin of \(15\) to \(25\) percent. CP-MAE also outperforms CP-GAN albeit by a smaller margin. Although both methods share a similar methodology, we explain the difference by the fact that CP-GAN is additionally conditioning on a tightly-fit (filled) convex hull around the skull. This effectively constrains the synthesis to produce skull shapes that are more correlated to the skull shape of the raw/original scan as compared to CP-MAE which solely conditions on the brain.
**De-Identification Quality: Model-based Study.** Building on the methodology of the previous experiment, we evaluated the efficacy of different de-identification models by attempting to circumvent them. In this iteration, we employed a neural network to determine the similarity between original (unaltered) and de-identified renderings, an approach that is similar in fashion to the one conducted for CP-GAN albeit with a much more powerful similarity-quantification network.
To achieve this, we adopted a similar metric learning strategy by training a Siamese network \(h(\cdot,\cdot)\). This network is designed to discern whether two given inputs are associated with the same patient. Specifically, for an original rendering \(x\) and a de-identified rendering \(y\), the network \(h\) functions by independently embedding \(x\) and \(y\) through a sub-network \(\tilde{h}\)--comprising a ResNet-18 block [17] (pre-trained on ImageNet-1K), a flattening layer, and a fully-connected layer--and subsequently calculating the Euclidean distance between the resulting embeddings. Thus, the network's output is given by \(h(x,y)\triangleq\|\tilde{h}(x)-\tilde{h}(y)\|_{2}\). We employed the Triplet Margin loss function as outlined in [3], with the margin set to \(1\). The previously established hold-out dataset was partitioned, allocating \(80\%\) of patients to the training set and the remaining \(20\%\) to the test set. To
calculate standard deviations we repeat the aforementioned process to obtain \(k=10\) independent folds. Detailed training procedures are outlined in the _Appendix_.
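A compact sketch of such a Siamese embedding network and its triplet objective in PyTorch is shown below; the embedding width and the 2D input size are illustrative assumptions rather than the exact training configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

class Embedder(nn.Module):
    """Sub-network h~: ResNet-18 backbone (ImageNet-1K weights), flatten, fully-connected head."""
    def __init__(self, dim: int = 128):
        super().__init__()
        backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the classifier
        self.fc = nn.Linear(512, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(torch.flatten(self.features(x), start_dim=1))

def distance(h: Embedder, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """h(x, y) = || h~(x) - h~(y) ||_2, the score used to pick the closest option at test time."""
    return torch.linalg.vector_norm(h(x) - h(y), dim=-1)

h_tilde = Embedder()
criterion = nn.TripletMarginLoss(margin=1.0)

# anchor: original rendering, positive: its de-identified counterpart,
# negative: a de-identified rendering of a different patient.
anchor, positive, negative = (torch.randn(4, 3, 224, 224) for _ in range(3))
loss = criterion(h_tilde(anchor), h_tilde(positive), h_tilde(negative))
loss.backward()
```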
Table 1 illustrates the network's proficiency in overcoming de-identification methods. The values are obtained by selecting a patient from one of the test sets (pertaining to a specific fold) and a corresponding original scan. A de-identification method \(m\) was then chosen, and the \(m\)-rendering of the scan was designated as the correct option. The other four options were compiled by randomly choosing \(m\)-renderings from scans of different patients. The network's decision is given by the option \(y\) whose embedding exhibits the shortest Euclidean distance to \(\tilde{h}(x)\).
First and foremost, we observe that the Siamese approach is much more powerful in terms of defeating the de-identification methods. Similar to the user-based study, we observe that CP-MAE exhibits superior de-identification performance in comparison to traditional de-identification methods and compares favorably against CP-GAN.
**Effect of De-Identification on Medical Analyses.** It is of utmost importance that a scan de-identified with a de-identification method does not negatively affect the analysis performed by downstream tasks. To explore the level of impairment we employ three established methods. Firstly, SIENAX [21] measures the brain volume and provides a segmentation into four classes (total brain volume, white matter, gray matter & ventricular CSF) and is the de-facto standard to assess how individuals compare in terms of tissue composition. Secondly, FIRST is a Bayesian (non deep-learning based) point distribution-based approach to perform efficient and accurate subcortical segmentations. Subcortical segmentations are an important means in the analysis of neurodegenerative diseases. Thirdly, to gauge how deep learning-based downstream tasks are affected, we further employ FASTSURFER [19] which is a recently suggested tool that also addresses the subcortical segmentation task. It is meant to offer a more efficient and robust alternative to the widely-established FREESURFER [14] algorithm.
To quantify whether one of these segmentation methods is thrown off by a de-identification method, we replicate the relative evaluation approach already employed by CP-GAN: First a segmentation method is run on an original MRI scan yielding a segmentation map \(\mathcal{S}\), then the segmentation method is run once again, this time on the de-identified counterpart of the scan, producing a new segmentation map \(\mathcal{S}^{\prime}\). Possible differences are then quantified using the Dice coefficient w.r.t. each segmentation class. We report the _class-averaged Dice score_ for SIENAX in Table 2 and for FIRST resp. FASTSURFER in Figure 6.
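The class-averaged Dice score used in this comparison can be computed directly from the two label maps; a minimal sketch (the label values are placeholders for the actual segmentation classes):

```python
import numpy as np

def class_averaged_dice(seg_a: np.ndarray, seg_b: np.ndarray, labels) -> float:
    """Mean Dice coefficient over the given label classes of two segmentation maps."""
    scores = []
    for label in labels:
        a, b = seg_a == label, seg_b == label
        denom = a.sum() + b.sum()
        # Convention: if a class is absent in both maps, count it as perfect overlap.
        scores.append(1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom)
    return float(np.mean(scores))

# Segmentation of the original scan vs. segmentation of its de-identified counterpart.
s = np.random.randint(0, 4, size=(128, 128, 128))
s_prime = np.random.randint(0, 4, size=(128, 128, 128))
print(class_averaged_dice(s, s_prime, labels=[1, 2, 3]))
```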
We observe that CP-MAE outperforms the other methods w.r.t. both SIENAX and FIRST and exhibits slightly better Dice scores for FASTSURFER. Furthermore, we observe that de-identification with MRI WATERSHED (full-brain extraction) causes downstream tasks to either get impaired or to fail completely.
Figure 6: **Downstream Tasks: Subcortical segmentation.** In a similar vein to Table 2 we analyze how de-identification affects downstream tasks. FIRST is an integral part of the FSL suite and FASTSURFER is a deep learning-based alternative to FREESURFER that runs orders of magnitude faster. The depicted values are the _class-averaged_ Dice scores over \(15\) classes for FIRST and \(78\) classes for FASTSURFER, respectively. Due to the much lower performance of MRI WATERSHED (\(\approx 0.21\), averaged over all configurations) we decided to exclude the method to improve visual discernibility among the other methods. Higher values are preferable.

## 5 Conclusion

We present a novel MRI de-identification method called CP-MAE which robustly anonymizes an MRI scan by employing a generative model that conditions only on the brain. Within our work, we have shown how the challenge of handling high-resolution MR imagery can be addressed by leveraging two instances of a VQ-VAE together with an MAE. The MAE operates in the low-dimensional space defined by the VQ-VAE pairs and is able to stochastically produce high-fidelity MR scans that retain the brain of the raw scan but exhibit a vastly different appearance outside of the brain. Using this construction we are able to improve the state-of-the-art 3D de-identification resolution of CP-GAN from \(128^{3}\) to \(256^{3}\), representing an eight-fold increase in the number of voxels. We validate the de-identification quality of our model in a user-based and a model-based study, in which CP-MAE outperforms all other methods by moderate to substantial margins. Analyzing how de-identification affects common brain segmentation tasks, we conclude that our model affects the segmentation maps only to a negligible degree. Future work will include the CT domain.
|
2308.08205
|
Near-extremal Kerr-like ECO in the Kerr/CFT Correspondence in Higher
Spin Perturbations
|
The Kerr/CFT correspondence has been established to explore the quantum
theory of gravity in the near-horizon geometry of extreme Kerr black holes.
Quantum gravitational corrections in the near-horizon region may manifest in
the form of a partially reflective membrane that replaces the horizon. With
such a modification, the black hole can be seen as a horizonless exotic
compact object (ECO). In this paper, we consider the properties of Kerr-like
ECOs in the near-extremal condition using the Kerr/CFT correspondence. We
study the quasinormal modes and the absorption cross-section in that
background and compare them with the dual CFT computation. In the
corresponding dual CFT, one needs to incorporate finite size/finite $N$
effects. We also extend the dual CFT analysis to higher spin perturbations
such as the photon and the graviton. We find consistency between the
properties of the ECOs obtained from the gravity side and from the CFT side.
The quasinormal mode spectrum is in line with the non-extreme case, where
the differences are in the length of the circle on which the dual CFT lives
and in the phase shift of the incoming perturbation. The absorption
cross-section has an oscillatory feature that starts to disappear near the
extremal limit. The particle spin determines the phase shift and the
conformal weight. We also find that the echo time-delay depends on the
position of the membrane and on the extremality of the ECO.
|
M. Zhahir Djogama, Muhammad F. A. R. Sakti, Freddy P. Zen, Mirza Satriawan
|
2023-08-16T08:10:56Z
|
http://arxiv.org/abs/2308.08205v2
|
# Near-extremal Kerr-like ECO in the Kerr/CFT Correspondence in Higher Spin Perturbations
###### Abstract
The Kerr/CFT correspondence has been established to explore the quantum theory of gravity in the near-horizon geometry of extreme Kerr black holes. Quantum gravitational corrections in the near-horizon region may manifest in the form of a partially reflective membrane that replaces the horizon. With such a modification, the black hole can be seen as a horizonless exotic compact object (ECO). In this paper, we consider the properties of Kerr-like ECOs in the near-extremal condition using the Kerr/CFT correspondence. We study the quasinormal modes and the absorption cross-section in that background and compare them with the dual CFT computation. In the corresponding dual CFT, one needs to incorporate finite size/finite \(N\) effects. We also extend the dual CFT analysis to higher spin perturbations such as the photon and the graviton. We find consistency between the properties of the ECOs obtained from the gravity side and from the CFT side. The quasinormal mode spectrum is in line with the non-extreme case, where the differences are in the length of the circle, on which the dual CFT lives, and in the phase shift of the incoming perturbation. The absorption cross-section has an oscillatory feature that starts to disappear near the extremal limit. The particle spin determines the phase shift and the conformal weight. We also find that the echo time-delay depends on the position of the membrane and on the extremality of the ECO.
## I Introduction
The Anti-de Sitter/conformal field theory (AdS/CFT) correspondence is a significant development in string theory. It is based on the idea of the holographic principle, proposed by 't Hooft [1], which suggests that a higher-dimensional theory can be described by a corresponding lower-dimensional one. The AdS/CFT correspondence provides a fruitful tool for studying strongly coupled theories by relating them to weakly coupled theories and vice versa. It has been particularly successful in investigating the thermodynamics of black holes. A remarkable development of this correspondence is the successful study of the microscopic origin of the Kerr black hole's entropy, now well known as the Kerr/CFT correspondence [2]. For extremal Kerr black holes, the near-horizon geometry exhibits an exact \(SL(2,R)\times U(1)\) symmetry, leading to the precise computation of the Bekenstein-Hawking entropy from the two-dimensional (2D) Cardy formula. For non-extremal Kerr black holes, the conformal symmetry does not emerge directly in the geometry. Instead, it is encoded in the solution space of a probe scalar field in the near-horizon region within the low-frequency approximation. One can read off the conformal symmetry from the quadratic Casimir operator that satisfies the \(SL(2,R)\times SL(2,R)\) algebra. However, the conformal symmetry is globally obstructed by the periodic identification of the azimuthal angle \(\phi\), which breaks \(SL(2,R)\times SL(2,R)\) down to \(U(1)\times U(1)\). The spontaneous breaking of the symmetry is characterized by the left- and right-moving temperatures \(T_{L,R}\). By assuming a smooth connection with the extremal black hole, the associated central charges of the non-extremal Kerr black hole are computed and used in the Cardy formula. The primary result is that this Cardy entropy precisely matches the Bekenstein-Hawking entropy of the Kerr black hole. More examples of the extremal Kerr/CFT correspondence can be found in [3; 4; 5; 6; 7], and of the non-extremal case in [8; 9; 10]. For a review, see Ref. [11].
The Kerr/CFT correspondence has come as an alternative way to study the properties of classical rotating black holes. One may then expect that this correspondence can also be applied to quantum black holes. One of the studies that considers quantum black holes in relation to the Kerr/CFT correspondence is performed in Ref. [12]. Without loss of generality, we may assume that, due to the strong gravitational attraction inside black holes, quantum effects can affect their structure. As an intriguing example provided in Refs. [13; 14], the quantum effect appears to modify the
apparent horizon of the classical black hole, leading to potentially observable consequences such as the quasi-normal modes (QNMs). The structure of the event horizon is assumed to be altered significantly into an infinitesimal quantum membrane. This quantum membrane is reflective and located slightly in front of the would-be horizon of the classical black hole, at a distance of the order of the Planck length. This novel representation is another alternative way to resolve the black hole information paradox. Besides this representation, other astrophysical objects have been proposed to address the same problem, such as fuzzballs [16; 17], 2-2 holes [18], gravastars [19; 20; 21], and Kerr-like wormholes [22]. The object considered here is known as a Kerr-like exotic compact object (ECO). The precise origin of the quantum reflective membrane is still not well understood due to the lack of an exact calculation within a theory of quantum gravity. However, the dual CFT analysis of this ECO can be portrayed in terms of a Boltzmann factor that matches the reflectivity of the quantum horizon [23].
The existence of the quantum reflective membrane gives rise to some potential observables. The principal one is the presence of gravitational echoes. Gravitational echoes may be detected in the postmerger ringdown signal of a binary coalescence, such as a black hole coalescence [24; 25; 26; 27]. In particular, the main route to such an observation lies in the detection of QNMs, since the ringdown phase is dominated by these modes. The echo signals that carry these modes are separated from the primary ringdown signal by a corresponding time-delay due to the presence of the reflective membrane. In several papers, it is claimed that potential evidence of gravitational echoes has been discovered within LIGO data [28; 29; 30; 31].
It is pointed out in Ref. [27] that non-linear physics may affect the time between the main merger event and the first echo. Due to non-linear effects, the magnitude of the time-delay may be shifted by around 2%-3%. As an example, this shift can occur in the Rastall theory of gravity, where the covariant derivative of the matter tensor does not vanish and is proportional to an arbitrary constant that corresponds to the time-delay shift [32]. Beyond the time-delay, the analysis of QNMs in the ringdown spectrum is a proper procedure to probe the true structure of black holes or ECOs. Furthermore, it can also be a fascinating playground to probe features beyond general relativity; for instance, the calculation of QNMs might probe the extra dimension of ECOs in a braneworld scenario [33; 34].
In the near future, a number of experiments will run to improve the precision of the study of astrophysical objects and phenomena, such as those carried out by LIGO or the Event Horizon Telescope. More specifically, these experiments will advance the observation of black holes and of their properties. The Kerr (rotating) black hole is considered the most physically relevant black hole in the universe. A large number of observations indicate that the observed black holes rotate very fast, near the extremal limit, especially the supermassive ones in active galactic nuclei [35; 36]. As a specific example, a near-extremal Kerr black hole is found to be the X-ray source in the binary system GRS 1915+105 [37]. Therefore, it is very relevant to study the near-extremal black hole and its quantum counterpart, the ECO.
In this work, we consider a near-extremal Kerr ECO and study the scalar field in that background. Instead of imposing a purely ingoing boundary condition, we impose a reflective boundary condition slightly outside the event horizon of the near-extremal Kerr metric (the would-be horizon). For the scalar field perturbation, we compute the QNMs by solving the propagation of massless scalar field modes. We then compare the results with the dual CFT computation, as has been done for the generic Kerr ECO. Moreover, we extend the calculation to higher spin fields such as the photon and the graviton. We want to see the echoes emerging in the absorption cross-section of those fields. In the end, we compute the echo time-delay produced by the propagation of the fields on the near-extremal Kerr ECO background.
The organization of this paper is given as follows. In the next section, we consider Kerr metric and its properties as ECO. In Section III, we consider massless scalar field in the near-extremal Kerr ECO background and assume the reflective boundary condition to compute the QNMs. In Section IV, we compare the result from gravity computation with the CFT dual computation. In Section V, we perform similar computation for higher spin fields including their QNMs. Then in Section VI, the echo time-delay is computed for all fields. Finally, we summarize our work in the last section.
## II Kerr-like exotic compact object
We consider ECO with Kerr metric as its exterior spacetime. The near-horizon geometry is modified by replacing the event horizon with partially reflective membrane as a consequence of a quantum gravitational effect. This membrane is located slightly outside the usual position of the would-be horizon. For a rotating Kerr-like ECO with mass \(M\) and angular momentum \(J=aM\), the metric in the Boyer-Lindquist coordinate can be written as
\[ds^{2}= -\left(1-\frac{2M\hat{r}}{\hat{\rho}^{2}}\right)d\hat{t}^{2}\] \[+\left(\hat{r}^{2}+a^{2}+\frac{2a^{2}M\hat{r}\sin^{2}\theta}{ \hat{\rho}^{2}}\right)\sin^{2}\theta d\hat{\phi}^{2}\] \[-\frac{4aM\hat{r}\sin^{2}\theta}{\hat{\rho}^{2}}d\hat{\phi}d \hat{t}+\frac{\hat{\rho}^{2}}{\Delta}d\hat{r}^{2}+\rho^{2}d\theta^{2}, \tag{1}\]
where
\[\hat{\rho}^{2} =\hat{r}^{2}+a^{2}\cos^{2}\theta,\] \[\Delta =\hat{r}^{2}+a^{2}-2M\hat{r}. \tag{2}\]
The usual position of the horizons and the angular velocity on the horizon are given by
\[r_{\pm}=M\pm\sqrt{M^{2}-a^{2}},\ \ \Omega_{H}=\frac{a}{2Mr_{+}}. \tag{3}\]
The Hawking temperature and the Bekenstein-Hawking entropy are given by
\[T_{H}=\frac{r_{+}-r_{-}}{8\pi Mr_{+}},\ \ S_{BH}=2\pi Mr_{+}. \tag{4}\]
The presence of a reflective membrane with reflectivity \(\mathcal{R}\) can be seen as a quantum correction originating from near-horizon quantum gravitational effects [38; 39]. We assume that the reflective membrane is located near the would-be horizon, at \((r_{+}+\delta r)\), where \(\delta r\) is taken to be of the order of the Planck length for the stability of this ECO [38]. For a black hole, some modes of an incident scalar wave are reflected by the angular momentum barrier, while others cross the barrier and are absorbed by the black hole through the event horizon. However, because the ECO possesses a reflective membrane, some of the incident waves are reflected back and forth between the membrane and the angular momentum barrier until they eventually tunnel through the barrier as repeating echoes after a time-delay.
Two models of the membrane have been studied recently: a constant reflectivity and a Boltzmann frequency-dependent reflectivity [13; 14]
\[\mathcal{R}=\begin{cases}R_{c},&\text{constant reflectivity}\\ e^{-\frac{|\omega-m\Omega_{H}|}{2T_{QH}}},&\text{Boltzmann reflectivity}\end{cases} \tag{5}\]
where \(R_{c}\) is a constant and \(T_{QH}\) is the "quantum horizon temperature" [23], expected to be comparable to the Hawking temperature up to an arbitrary proportionality constant (\(T_{QH}=\gamma T_{H}\)). The constant \(\gamma\) depends on the dispersion and dissipation effects in graviton propagation [40]. We will see how this modification of the boundary condition at the horizon affects the absorption cross-section and the QNM spectrum of the Kerr-like ECO in the near-extremal case. Later we will compare these results with a dual CFT analysis.
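For concreteness, the horizon quantities in Eqs. (3)-(4) and the Boltzmann reflectivity in Eq. (5) can be evaluated numerically as in the following sketch (geometric units; the parameter values are purely illustrative):

```python
import numpy as np

def kerr_quantities(M, a):
    """Horizon radii, horizon angular velocity, Hawking temperature and entropy, Eqs. (3)-(4)."""
    r_plus = M + np.sqrt(M**2 - a**2)
    r_minus = M - np.sqrt(M**2 - a**2)
    Omega_H = a / (2 * M * r_plus)
    T_H = (r_plus - r_minus) / (8 * np.pi * M * r_plus)
    S_BH = 2 * np.pi * M * r_plus
    return r_plus, r_minus, Omega_H, T_H, S_BH

def boltzmann_reflectivity(omega, m, Omega_H, T_H, gamma=1.0):
    """Boltzmann reflectivity of the quantum membrane, Eq. (5), with T_QH = gamma * T_H."""
    return np.exp(-np.abs(omega - m * Omega_H) / (2 * gamma * T_H))

M, a = 1.0, 0.999                         # a near-extremal spin
r_p, r_m, Om_H, T_H, S_BH = kerr_quantities(M, a)
print(boltzmann_reflectivity(omega=1.001 * Om_H, m=1, Omega_H=Om_H, T_H=T_H))
```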
## III Near-extremal Kerr ECO background
One realization of the correspondence between Kerr black holes and CFTs is the equivalence of the absorption cross-sections [41]. It turns out that this is also true for the Kerr-like ECO, as shown in [12]: the absorption cross-section of the Kerr-like ECO has an additional factor, related to the reflectivity, compared to that of black holes, and this part can also emerge from a 2D CFT living on a finite circle. Another quantity that we can investigate is the QNM spectrum, where the reflectivity of the ECO affects the imaginary part of the modes. As both quantities depend on the reflectivity, we cannot directly take the extremal case to explore the scattering problem, because the Boltzmann reflectivity then becomes zero. However, for the near-extremal case, we may explore this scattering problem. So in this section, we focus on the near-extremal limit of the Kerr-like ECO.
To study the conformal symmetry, we consider scattering of a massless scalar field expanded in modes
\[\Phi=e^{im\hat{\phi}-i\omega\hat{t}}S(\theta)R(\hat{r}). \tag{6}\]
The wave equations for a full Kerr metric (1) are
\[\frac{1}{\sin\theta}\partial_{\theta}(\sin\theta\partial_{\theta}S)+\left(K_{ l}-\frac{m^{2}}{\sin^{2}\theta}-a^{2}\omega^{2}\sin^{2}\theta\right)S=0, \tag{7}\]
and
\[\partial_{\hat{r}}(\Delta\partial_{\hat{r}}R)+\left(\frac{(\omega(\hat{r}^{2} +a^{2})-ma)^{2}}{\Delta}+2ma\omega-K_{l}\right)R=0, \tag{8}\]
where \(K_{l}\) is separation constant. Define the dimensionless coordinate \(x\) and dimensionless Hawking temperature \(\tau_{H}\)[42]
\[x=\frac{\hat{r}-r_{+}}{r_{+}},\hskip 28.452756pt\tau_{H}=\frac{r_{+}-r_{-}}{r_{+ }}=8\pi MT_{H}, \tag{9}\]
where the near-extremal regime corresponds to \(\tau_{H}\ll 1\). In this regime, the radial equation (8) becomes
\[x(x+\tau_{H})R^{\prime\prime}+(2x+\tau_{H})R^{\prime}+V_{l}R=0, \tag{10}\]
where the prime denotes \(\partial_{x}\) and
\[V_{l}=\frac{\left[x(x+2)m+\tau_{H}n\right]^{2}}{4x(x+\tau_{H})}+m^{2}-K_{l}, \tag{11}\]
\[n=\frac{\omega-m\Omega_{H}}{2\pi T_{H}}. \tag{12}\]
In the near-extremal limit, \(n\) is held fixed as \(T_{H}\to 0\). This means that only modes with energies near the superradiant bound \(\omega\simeq m\Omega_{H}\) survive. Modes with energies outside the scale of the bound will not cross into the near-horizon region. We will solve the radial equation in the far region \(x\gg\tau_{H}\) and in the near region \(x\ll 1\), then match the solutions in the matching region \(\tau_{H}\ll x\ll 1\). For the far and near regions, the derivation is similar to the near-extremal black hole case [42; 51], as we still use the Kerr spacetime as the exterior background. The difference becomes apparent when we impose the new boundary condition at the matching region to include the reflective membrane.
### Far region
In the far region \(x\gg\tau_{H}\), the radial equation becomes
\[x^{2}R^{\prime\prime}+2xR^{\prime}+\left(\frac{1}{4}m^{2}(x+2)^{2}+m^{2}-K_{l }\right)R=0. \tag{13}\]
If we define \(z=imx\) and
\[\beta^{2}=\frac{1}{4}+K_{l}-2m^{2}, \tag{14}\]
then (13) can be written as a Whittaker equation
\[\partial_{z}^{2}R+\left(-\frac{1}{4}+\frac{(-im)}{z}+\frac{\left(\frac{1}{4}- \beta^{2}\right)}{z^{2}}\right)R=0. \tag{15}\]
The solution for above equation is
\[R_{far} = Ae^{-\frac{1}{2}z}x^{-\frac{1}{2}+\beta}M\left(\frac{1}{2}+ \beta+im,1+2\beta,z\right) \tag{16}\] \[+B(\beta\rightarrow-\beta),\]
where \(M(a,b,z)\) is Kummer function. In the matching region \(x\ll 1\), \(M(a,b,z)\to 1\),
\[R_{far}\sim Ax^{-\frac{1}{2}+\beta}+Bx^{-\frac{1}{2}-\beta}. \tag{17}\]
While in the asymptotic region \(x\gg 1\),
\[R_{far}\sim Z_{out}e^{\frac{1}{2}imx}x^{-1+im}+Z_{in}e^{-\frac{1}{2}imx}x^{-1- im}, \tag{18}\]
where
\[Z_{in} = AC_{+}+BC_{-}, \tag{19}\] \[Z_{out} = A\tilde{C}_{+}+B\tilde{C}_{-},\] \[C_{\pm} = \frac{\Gamma(1\pm 2\beta)}{\Gamma(\frac{1}{2}\pm\beta-im)}(-im)^{ \frac{1}{2}\mp\beta-im},\] \[\tilde{C}_{\pm} = \frac{\Gamma(1\pm 2\beta)}{\Gamma(\frac{1}{2}\pm\beta+im)}(im)^{ \frac{1}{2}\mp\beta+im}.\]
### Near region
In the near region \(x\ll 1\), the radial equation becomes
\[x(x+\tau_{H})R^{\prime\prime}+(2x+\tau_{H})R^{\prime}+V_{l}^{near}R=0, \tag{20}\]
where
\[V_{l}^{near}=\frac{(2mx+\tau_{H}n)^{2}}{4x(x+\tau_{H})}+m^{2}-K_{l}. \tag{21}\]
The solution is
\[R_{near} = Cx^{-\frac{1}{2}n}\left(\frac{x}{\tau_{H}}+1\right)^{i(\frac{n}{2}-m )}F\left(a,b,c;z\right) \tag{22}\] \[+ Dx^{\frac{1}{2}n}\left(\frac{x}{\tau_{H}}+1\right)^{i(\frac{n}{ 2}-m)}\] \[\times F\left(a-c+1,b-c+1,2-c;z\right),\]
where \(F(a,b,c;z)\) is hypergeometric function with \(a=1/2+\beta-im\), \(b=1/2-\beta-im\), \(c=1-in\), and \(z=-x/\tau_{H}\). In the matching region \(x\gg\tau_{H}\),
\[R_{near} \sim \tau_{H}^{\frac{1}{2}+\beta}x^{-\frac{1}{2}-\beta}\Gamma(-2\beta) \tag{23}\] \[\times \left[C\frac{\Gamma\left(1-in\right)}{\Gamma\left(\frac{1}{2}- \beta-im\right)\Gamma\left(\frac{1}{2}-\beta-i(n-m)\right)}\tau_{H}^{-\frac{i }{2}n}\right.\] \[\left.+D\frac{\Gamma\left(1+in\right)}{\Gamma\left(\frac{1}{2}- \beta+im\right)\Gamma\left(\frac{1}{2}-\beta+i(n-m)\right)}\tau_{H}^{\frac{i }{2}n}\right]\] \[+ (\beta\rightarrow-\beta).\]
While near the membrane \(x\to 0\)
\[R_{near} \sim Cx^{-\frac{i}{2}n}+Dx^{\frac{i}{2}n} \tag{24}\] \[\sim Ce^{-i(\omega-m\Omega_{H})r^{*}}+De^{i(\omega-m\Omega_{H})r^{* }},\]
where \(r^{*}\) is the tortoise coordinate defined as
\[r^{*}=\int\frac{\hat{r}^{2}+a^{2}}{(\hat{r}-r_{+})(\hat{r}-r_{-})}dr. \tag{25}\]
The first part on the right hand side of Eq. (24) can be seen as the ingoing wave and the second part as the outgoing wave.
### Matching region
The new boundary condition at the membrane can be defined in terms of amplitudes in (24)
\[\mathcal{R}e^{i\pi\delta}=\frac{D}{C}x_{0}^{in}, \tag{26}\]
where \(x_{0}\) is the position of the membrane and \(\delta\) is a phase shift determined by the properties of the ECO. In the matching region \(\tau_{H}\ll x\ll 1\), solutions from far and near regions are both valid. By comparing coefficient from (17) and (23), we obtain
\[\frac{A}{C} = \tau_{H}^{\frac{1}{2}-\beta-\frac{i}{2}n}\Gamma(2\beta)\left[Q_{- }+\left(\frac{x_{0}}{\tau_{H}}\right)^{-in}\mathcal{R}e^{i\pi\delta}Q_{+} \right], \tag{27}\] \[\frac{B}{C} = \tau_{H}^{\frac{1}{2}+\beta-\frac{i}{2}n}\Gamma(-2\beta)\left[P_{ -}+\left(\frac{x_{0}}{\tau_{H}}\right)^{-in}\mathcal{R}e^{i\pi\delta}P_{+} \right],\]
where
\[P_{\pm} = \frac{\Gamma\left(1\pm in\right)}{\Gamma\left(\frac{1}{2}-\beta \pm im\right)\Gamma\left(\frac{1}{2}-\beta\pm i(n-m)\right)}, \tag{29}\] \[Q_{\pm} = \frac{\Gamma\left(1\pm in\right)}{\Gamma\left(\frac{1}{2}+\beta \pm im\right)\Gamma\left(\frac{1}{2}+\beta\pm i(n-m)\right)}. \tag{30}\]
### Absorption cross-section
The absorption cross-section is the ratio between absorbed flux at the event horizon and incoming flux from
infinity
\[\sigma_{abs}=\frac{\mathcal{F}_{r\to r_{+}}^{abs}}{\mathcal{F}_{\infty}^{in}}=n \tau_{H}\frac{1-|\mathcal{R}|^{2}}{\left|\frac{A}{C}+\frac{B}{C}\right|^{2}}. \tag{31}\]
In this calculation we only consider the near-horizon contribution in order to match with the CFT. Since we consider the near-extremal limit \(\tau_{H}\ll 1\), \(|A/C|\) dominates over \(|B/C|\). Using \(|A/C|\) given by (27), we find
\[\sigma_{abs} \sim\frac{(\tau_{H})^{2\beta}}{\pi\Gamma(2\beta)^{2}}\sinh{(\pi n)}\] \[\times\left|\Gamma\left(\frac{1}{2}+\beta+im\right)\Gamma\left( \frac{1}{2}+\beta+i(n-m)\right)\right|^{2}\] \[\times\frac{1-|\mathcal{R}|^{2}}{\left|1-\mathcal{R}e^{-2ir_{0}^ {*}(\omega-m\Omega_{H})+i\pi\delta}\right|^{2}}, \tag{32}\]
where \(r_{0}^{*}=r^{*}(\hat{r}_{0})\) is the position of the membrane in the tortoise coordinate, with \(\ln(x_{0}/\tau_{H})\sim r_{0}^{*}(r_{+}-r_{-})/(r_{+}^{2}+a^{2})\). The difference between the absorption cross-section obtained for ECOs and for black holes is the oscillatory factor, multiplying the classical cross-section, that depends on \(\mathcal{R}\). Obviously, for \(\mathcal{R}=0\) it reduces to the classical black hole result.
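A short numerical sketch of Eq. (32), using `mpmath` for the Gamma functions of complex argument, may be useful; the parameter values are purely illustrative and the overall normalization is dropped:

```python
import mpmath as mp

def sigma_abs(omega, m, Omega_H, T_H, beta, tau_H, R, r0_star, delta):
    """Near-horizon absorption cross-section of Eq. (32), up to the overall normalization."""
    n = (omega - m * Omega_H) / (2 * mp.pi * T_H)            # Eq. (12)
    gammas = abs(mp.gamma(0.5 + beta + 1j * m) * mp.gamma(0.5 + beta + 1j * (n - m))) ** 2
    classical = (tau_H ** (2 * beta) / (mp.pi * mp.gamma(2 * beta) ** 2)
                 * mp.sinh(mp.pi * n) * gammas)
    echo = ((1 - abs(R) ** 2)
            / abs(1 - R * mp.exp(-2j * r0_star * (omega - m * Omega_H) + 1j * mp.pi * delta)) ** 2)
    return classical * echo

# Illustrative values near the superradiant bound; R = 0 recovers the classical result.
print(sigma_abs(omega=0.48, m=1, Omega_H=0.478, T_H=3.4e-3, beta=0.3,
                tau_H=0.09, R=0.4, r0_star=-100.0, delta=-1.2))
```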
### Quasinormal modes
The oscillation of radiation from an ECO corresponds to resonances at the ECO's QNM frequencies. The parameters of the ECO affect the QNM spectrum. Therefore, since those parameters also characterize the near-horizon quantum structure of the ECO, the QNMs may contain significant information about it, as shown in Ref. [32]. There are various methods used to obtain the QNM spectra of black holes or ECOs. Usually, a purely outgoing boundary condition at asymptotic infinity is imposed. However, it is difficult to solve the wave equation governing the perturbations analytically, making it impractical to get the QNMs of ECOs without making assumptions. Another method to obtain the QNM spectrum, especially using the AdS/CFT duality, is from the poles of the retarded Green's function [43]. In this calculation, we use the low-frequency limit to solve the wave function approximately and obtain the QNM spectrum.
A purely outgoing boundary condition means that there are no incoming waves from infinity. The absence of ingoing waves from infinity, \(Z_{in}=0\) in (18), gives us the relation \(B=-AC_{+}/C_{-}\). Using this condition and matching (17) and (23) gives us
\[\frac{D}{C}=-\frac{P_{-}+\sigma Q_{-}}{P_{+}+\sigma Q_{+}}, \tag{33}\]
where
\[\sigma=(\tau_{H}/im)^{-2\beta}\frac{\Gamma(2\beta)\Gamma(1+2\beta)\Gamma(1- \beta-im)}{\Gamma(-2\beta)\Gamma(1-2\beta)\Gamma(1+\beta-im)}. \tag{34}\]
Near the superradiant bound \(\omega\simeq m\Omega_{H}\), we can solve (33) approximately in the low-frequency limit \(M\omega\ll 1\). Using this limit and the modified near-horizon boundary condition (26), we get
\[\mathcal{R}e^{i\pi\delta}=-e^{i2(\omega-m\Omega_{H})r_{0}^{*}}. \tag{35}\]
This equation can be solved to obtain QNM spectrums,
\[\omega-m\Omega_{H}=\frac{1}{2r_{0}^{*}}\pi(2q+1+\delta)-i\frac{\ln(\mathcal{R })}{2r_{0}^{*}}, \tag{36}\]
where \(q\) is a positive integer. If we consider the reflectivity as the Boltzmann reflectivity, we find
\[\omega-m\Omega_{H}\simeq \frac{1}{2r_{0}^{*}}\pi(2q+1+\delta)\] \[\times\left(1-\frac{isgn[2q+1+\delta]}{4r_{0}^{*}\gamma T_{H}} \right). \tag{37}\]
This result has the same form as the QNM spectrum of the Kerr-like ECO in the non-extreme case [12]. The differences are in the definition of \(r_{0}^{*}\), which is taken here near extremality, and in the value of \(\delta\), which we will see in the next section.
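The spectrum (37) is simple enough to tabulate directly; a minimal sketch (the input values are illustrative, roughly corresponding to \(M=1\), a near-extremal spin \(a/M=0.999\), and a membrane at \(r_{0}^{*}=-100\) in the tortoise coordinate):

```python
import numpy as np

def qnm_boltzmann(q, m, Omega_H, r0_star, delta, gamma, T_H):
    """Complex QNM frequency of Eq. (37) for a Boltzmann-reflective membrane."""
    shift = np.pi * (2 * q + 1 + delta) / (2 * r0_star)
    damping = 1 - 1j * np.sign(2 * q + 1 + delta) / (4 * r0_star * gamma * T_H)
    return m * Omega_H + shift * damping

for q in range(3):
    print(q, qnm_boltzmann(q, m=1, Omega_H=0.478, r0_star=-100.0,
                           delta=-1.5, gamma=1.0, T_H=3.4e-3))
```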
## IV Dual CFT of Kerr-like ECO
### Hidden conformal symmetry
It is already well known that the Kerr black hole is dual to a 2D CFT [2]. However, unlike for extremal Kerr black holes, the conformal symmetries are hidden for non-extremal Kerr black holes [41]. They appear locally in the radial scalar wave equation and are globally broken under the periodic identification of \(\phi\). The symmetry becomes apparent when we investigate the scalar wave equation in the near-horizon region. As we see in Eq. (22), the solution of the scalar wave equation in this region is given by hypergeometric functions, which fall into representations of \(SL(2,R)\).
In order to understand this hidden symmetry, we can introduce the following set of conformal coordinates [41; 44]
\[w^{+} =\sqrt{\frac{\hat{r}-r_{+}}{\hat{r}-r_{-}}}e^{2\pi T_{H}\hat{ \phi}}, \tag{38}\] \[w^{-} =\sqrt{\frac{\hat{r}-r_{+}}{\hat{r}-r_{-}}}e^{2\pi T_{L}\hat{\phi }-\frac{r}{2M}},\] (39) \[y =\sqrt{\frac{\hat{r}_{+}-r_{-}}{\hat{r}-r_{-}}}e^{\pi(T_{R}+T_{L} )\hat{\phi}-\frac{\hat{r}}{2M}}, \tag{40}\]
where we identify right- and left-moving temperatures as
\[T_{R}=\frac{r_{+}}{4\pi a}\tau_{H},\qquad\qquad T_{L}=\frac{r_{+}+r_{-}}{4\pi a}. \tag{41}\]
The right-moving temperature is proportional to the near-extremal Hawking temperature. In terms of these new coordinates, we can define two sets of vector fields,
\[H_{1} =i\partial_{+}, \tag{42}\] \[H_{-1} =i(w^{+2}\partial_{+}+w^{+}y\partial_{y}-y^{2}\partial_{-}),\] (43) \[H_{0} =i\left(w^{+}\partial_{+}+\frac{1}{2}y\partial_{y}\right), \tag{44}\]
and
\[\bar{H}_{1} =i\partial_{-}, \tag{45}\] \[\bar{H}_{-1} =i(w^{-2}\partial_{-}+w^{-}y\partial_{y}-y^{2}\partial_{+}),\] (46) \[\bar{H}_{0} =i\left(w^{-}\partial_{-}+\frac{1}{2}y\partial_{y}\right), \tag{47}\]
where \(\partial_{\pm}=\partial/\partial w^{\pm}\). Each set of the vector fields can form a quadratic Casimir operator that reads as
\[\mathcal{H}^{2} =\bar{\mathcal{H}}^{2}=-H_{0}^{2}+\frac{1}{2}(H_{1}H_{-1}+H_{-1}H _{1})\] \[=\frac{1}{4}(y^{2}\partial_{y}^{2}-y\partial_{y})+y^{2}\partial_ {+}\partial_{-}. \tag{48}\]
It is shown in Ref. [41] that in terms of \((\hat{t},\hat{r},\hat{\phi})\), we can express the scalar wave equation at the near-horizon region for Kerr background (1) as the \(SL(2,R)\) Casimir operator. As we have mentioned earlier, the \(SL(2,R)\times SL(2,R)\) symmetry breaks into \(U(1)\times U(1)\) under periodic identification of azimuthal coordinate, \(\hat{\phi}\rightarrow\hat{\phi}+2\pi\).
For fixed \(r\), we can write the relation between the conformal coordinates and Boyer-Lindquist coordinates as
\[w^{\pm}=e^{\pm t_{R,L}}, \tag{49}\]
where we define
\[t_{R}=2\pi T_{R}\hat{\phi},\hskip 28.452756ptt_{L}=\frac{\hat{t}}{2M}-2\pi T_{L }\hat{\phi}. \tag{50}\]
This relation is analogous to the relation between Minkowski coordinates (\(w^{\pm}\)) and Rindler coordinates (\(t_{R,L}\)). The frequencies (\(\omega_{L},\omega_{R}\)) related with the Killing vectors (\(i\partial_{t_{L}},i\partial_{t_{R}}\)) are conjugate to (\(t_{L},t_{R}\)).
In order to match the absorption cross-section or the QNMs between the gravity and CFT sides, we need to choose the left and right frequencies \(\tilde{\omega}_{L},\tilde{\omega}_{R}\). One way to relate these frequencies to \((\omega,m)\) of the Kerr spacetime is by considering the first law of black hole thermodynamics,
\[T_{H}\delta S_{BH}=\delta M-\Omega_{H}\delta J. \tag{51}\]
We identify \(\delta M\) as \(\omega\) and \(\delta J\) as \(m\), and consider conjugate charges (\(\delta E_{L},\delta E_{R}\)),
\[\delta S_{BH}=\frac{\delta E_{R}}{T_{R}}+\frac{\delta E_{L}}{T_{L}}, \tag{52}\]
which leads to the identification of \(\delta E_{L,R}\) as \(\tilde{\omega}_{L,R}\). Thus, the relations between \((\omega,m)\) and \(\delta E_{L,R}\) are
\[\delta E_{L} =\tilde{\omega}_{L}=2\pi T_{L}\omega_{L}=\frac{2M^{2}}{a}\omega,\] \[\delta E_{R} =\tilde{\omega}_{R}=2\pi T_{R}\omega_{R}=\tilde{\omega}_{L}-m. \tag{53}\]
### QNMs from CFT
From the AdS/CFT duality perspective, the QNM spectrum is given by the poles of the retarded CFT correlation function. Employing this duality, the QNM computation is consistent with the gravity result, as shown in Refs. [45; 46; 47; 8; 9]. As for the ECOs, the modification of the near-horizon region due to quantum gravitational effects may come from finite size/finite \(N\) effects in the usual AdS/CFT [48; 49; 50]. Furthermore, discrete QNM spectra are believed to be produced by these effects on the CFT side.
For ECOs, to obtain the QNM spectrum and, later, the absorption cross-section, we start with the two-point function of a CFT living on a torus with a spatial cycle of length \(L\) and a temporal cycle of length \(1/T\). The detailed derivation of the two-point function is given in [12]. Unlike the usual description of the Kerr/CFT correspondence, where a cylindrical approximation of the torus (\(L\gg 1/T\)) is used, in this case we keep both the spatial and temporal periods finite. We can then apply, in addition to the azimuthal periodicity, a thermal periodicity in imaginary time,
\[\hat{\phi}\rightarrow\hat{\phi}+2\pi L+i\frac{\Omega_{H}}{T_{H}},\hskip 28.452756pt \hat{t}\rightarrow\hat{t}+\frac{i}{T_{H}}. \tag{54}\]
With this identification, the CFT coordinates (50) become
\[t_{L}\sim t_{L}-4\pi^{2}LT_{L}-i\left(\frac{2\pi T_{L}\Omega_{H}}{T_{H}}-\frac{1}{2MT_{H}}\right),\]
\[t_{R}\sim t_{R}+4\pi^{2}LT_{R}+i\frac{2\pi T_{R}\Omega_{H}}{T_{H}}. \tag{55}\]
The Fourier transformation of the CFT two-point function based on torus coordinate (50) and their periodicity (55) is given as
\[\tilde{G}(\omega_{L},\omega_{R}) =\int dt_{L}dt_{R}e^{-i\omega_{L}t_{L}}e^{-i\omega_{R}t_{R}}\] \[\times\sum_{p\in\mathbb{Z}}\frac{(\pi T_{L})^{2h_{L}}}{\left[\sinh \pi T_{L}\left(\frac{t_{L}}{2\pi T_{L}}+p(2\pi L+\frac{i\mathbf{a}}{T_{L}}) \right)\right]^{2h_{L}}},\] \[\times\frac{(\pi T_{R})^{2h_{R}}}{\left[\sinh\pi T_{R}\left(\frac{t _{R}}{2\pi T_{R}}+p(2\pi L+\frac{i\mathbf{a}}{T_{R}})\right)\right]^{2h_{R}}}, \tag{56}\]
where we generalize the two-point function in terms of general value of modular parameter \(\mathbf{a}\). For the convenience, we define new torus coordinate \(\tilde{t}_{R,L}\) as
\[\tilde{t}_{L,R}=\frac{t_{L,R}}{2\pi T_{L,R}}+p(2\pi L+i\mathbf{a}/T_{L,R}). \tag{57}\]
Thus the two-point function becomes
\[\tilde{G}(\tilde{\omega}_{L},\tilde{\omega}_{R}) = \sum_{p\in\mathbb{Z}}e^{ip(2\pi L(\tilde{\omega}_{L}+\tilde{\omega }_{R})+i\mathbf{a}\left(\frac{\tilde{\omega}_{L}}{T_{L}}+\frac{\tilde{\omega}_{ R}}{T_{R}}\right))}\int d\tilde{t}_{L}d\tilde{t}_{R}\] \[\times\frac{e^{-i\tilde{\omega}_{L}\tilde{t}_{L}}e^{-i\tilde{ \omega}_{R}\tilde{t}_{R}}(\pi T_{L})^{2h_{L}}(\pi T_{R})^{2h_{R}}}{\left[ \sinh(\pi T_{L}\tilde{t}_{L})\right]^{2h_{L}}\left[\sinh(\pi T_{R}\tilde{t}_ {R})\right]^{2h_{R}}},\] \[\propto T_{L}^{2h_{L}-1}T_{R}^{2h_{R}-1}e^{-\frac{\tilde{\omega}_{L}}{2T _{L}}-\frac{\tilde{\omega}_{R}}{2T_{R}}}\] \[\times\left|\Gamma\left(h_{L}+i\frac{\tilde{\omega}_{L}}{2\pi T_ {L}}\right)\Gamma\left(h_{R}+i\frac{\tilde{\omega}_{R}}{2\pi T_{R}}\right) \right|^{2}\] \[\times\left[\frac{1}{1-e^{i2\pi L(\tilde{\omega}_{L}+\tilde{ \omega}_{R})-\mathbf{a}\left|\frac{\tilde{\omega}_{L}}{T_{L}}+\frac{\tilde{ \omega}_{R}}{T_{R}}\right|}}\right.\] \[\left.-\frac{1}{1-e^{i2\pi L(\tilde{\omega}_{L}+\tilde{\omega}_{ R})+\mathbf{a}\left|\frac{\tilde{\omega}_{L}}{T_{L}}+\frac{\tilde{\omega}_{R}}{T_{R}} \right|}}\right].\]
The QNM spectrum for ECOs comes from the poles of the exponential part of the CFT two-point function. There are two sets of poles, lying in the upper and lower halves of the \(\omega\) plane. Other poles, coming from the singularities of the Gamma functions, correspond to the usual Kerr black hole QNM spectrum. In this case, the QNM spectrum comes from the retarded correlation function, so the poles of (58) in the lower half plane are the ones we need. Based on the definition of the CFT temperatures (41) and CFT frequencies (53), and taking the near-extremal and near-superradiant limit, the QNM spectrum is
\[\omega-m\Omega_{H} = \frac{1}{8ML}(2q-2mL) \tag{59}\] \[\times \left(1-\frac{i\mathbf{a}\times sgn[2q-2mL]}{8M\pi LT_{H}}\right).\]
The QNM spectrum for the near-extremal Kerr-like ECO is approximately calculated in the low-frequency limit, as provided in (37). Both the real and imaginary parts of the QNMs of the ECO match the CFT result if we define
\[L=\frac{|r_{0}^{*}|}{4\pi M},\qquad\delta=-1-2mL,\qquad\mathbf{a}=\frac{1}{2 \gamma}. \tag{60}\]
We can, however, keep the modular parameter \(\mathbf{a}\) general. Compared with the non-extreme case, this QNM spectrum has a different dependence on the spin of the ECO (\(a\)), since \(a\) and \(M\) are almost interchangeable in this case. In fact, if we take the \(a\simeq M\) condition in Eq. (40) of [12], it produces the same QNM spectrum as (59). Because of this difference, we also have different definitions of \(L\) and \(\delta\), which contribute to the cross-section and the echo time-delay. With this identification, we can find from the imaginary part of the QNM (59) that the reflectivity of the membrane is
\[\mathcal{R}=e^{-|\omega-\Omega_{H}|/2\gamma T_{H}}, \tag{61}\]
which matches exactly the Boltzmann reflectivity.
From the QNMs, we can also read off the condition for the ergoregion instability. The instability occurs due to the unbounded amplification of incoming waves trapped between the reflective membrane and the potential barrier, inside the ergoregion. When the wave is reflected by the membrane or the potential barrier and then crosses the ergoregion, it gains energy from the ECO through the Penrose process. Since the wave is trapped, the amplification grows exponentially, causing the instability. From the point of view of the QNMs, this instability corresponds to \(Im(\omega)>0\). To avoid this, the ECO needs some absorption of the incoming waves; in other words, the membrane should be only partially reflective [38; 39]. From Fig. (1), we can see that in our case the instability is avoided for all modes by choosing the modular parameter \(\mathbf{a}=1/2\) (\(\gamma=1\)), or any positive value in general. The modular parameter does not affect the real part. We can also see that approaching the extremal limit (higher \(a/M\)) produces higher \(\omega\), and that a smaller \(L\) (lower \(r_{0}\)) also produces higher \(\omega\).
### Absorption cross-section from CFT
In the previous section, we have seen that the contribution of the reflective membrane of the ECO is the emergence of new poles in the two-point function. The identification (60) makes the QNMs from the CFT consistent with the gravity calculation. In Eq. (32), the presence of the reflective membrane contributes an oscillatory feature to the absorption cross-section. We need to check whether the identification (60) can produce the same near-horizon contribution to the absorption cross-section in the CFT computation.
In the CFT description, we again consider a CFT living on a circle of finite length \(L\) and look into the finite size effects on the boundary. The absorption cross-section can be obtained using Fermi's golden rule,
\[\sigma_{abs}\sim \int\!dt_{L}dt_{R}e^{-i\omega_{L}t_{L}}e^{-i\omega_{R}t_{R}}(G(t_{L}-i\epsilon,t_{R}-i\epsilon)\] \[\quad-G(t_{L}+i\epsilon,t_{R}+i\epsilon)) \tag{62}\]
The \(\pm i\epsilon\) determine the poles that contribute while performing the integration. Using the same way to perform the integration in (58), we obtain
\[\sigma_{abs}\sim \omega^{2l-1}T_{L}^{2h_{L}-1}T_{R}^{2h_{R}-1}\sinh\left(\frac{\tilde{\omega}_{L}}{2T_{L}}+\frac{\tilde{\omega}_{R}}{2T_{R}}\right)\] \[\times\left|\Gamma\left(h_{L}+i\frac{\tilde{\omega}_{L}}{2\pi T_{L}}\right)\Gamma\left(h_{R}+i\frac{\tilde{\omega}_{R}}{2\pi T_{R}}\right)\right|^{2}\] \[\times\frac{1-e^{-2\mathbf{a}\left|\frac{\tilde{\omega}_{L}}{T_{L}}+\frac{\tilde{\omega}_{R}}{T_{R}}\right|}}{\left|1-e^{i2\pi L(\tilde{\omega}_{L}+\tilde{\omega}_{R})-\mathbf{a}\left|\frac{\tilde{\omega}_{L}}{T_{L}}+\frac{\tilde{\omega}_{R}}{T_{R}}\right|}\right|^{2}} \tag{63}\]
This result agrees with absorption cross-section from gravity calculation (32), if we choose
\[h_{L}=h_{R}=\frac{1}{2}+\beta,\qquad T_{L}=\frac{1}{2\pi},\qquad T_{R}=\frac{ \tau_{H}}{4\pi},\]
\[\omega_{L}=m,\qquad\qquad\omega_{R}=n-m, \tag{64}\]
along with (60). The CFT temperatures and frequencies are consistent with (41) and (53) in the near-extremal, near-superradiant limit defined earlier. From Fig. (2), the absorption cross-section is negative at low frequency because of the superradiant condition, just as for the classical black hole. The oscillatory feature comes from the last factor in (63). When the oscillation in this factor disappears, the cross-section reduces to that of the classical black hole. We can also see that when \(a/M\to 1\), the oscillatory feature starts to disappear. Thus, we can say that in the near-extremal case \(a\simeq M\) the Boltzmann reflectivity is suppressed by the spin of the ECO.
## V Higher spin perturbations
In the previous sections, we have shown that the absorption cross-section and the QNM spectrum of the near-extremal Kerr-like ECO for a scalar perturbation can be reproduced by the CFT. We can, however, extend our discussion to perturbations of general spin. To explore electromagnetic and gravitational perturbations, or non-vanishing spin in general, we can apply the Newman-Penrose (NP) formalism [52]. The NP null tetrad of Kerr, \(e^{\mu}_{a}=(l^{\mu},n^{\mu},m^{\mu},m^{\ast\mu})\), in the basis \((\hat{t},\hat{r},\theta,\hat{\phi})\) is
\[l^{\mu}=\left(\frac{\hat{r}^{2}+a^{2}}{\Delta},1,0,\frac{a}{\Delta}\right),\] \[n^{\mu}=\frac{1}{2(\hat{r}^{2}+a^{2}\cos^{2}\theta)}(\hat{r}^{2}+a^{2},-\Delta,0,a), \tag{65}\] \[m^{\mu}=\frac{1}{\sqrt{2}(\hat{r}+ia\cos\theta)}\left(ia\sin\theta,0,1,\frac{i}{\sin\theta}\right),\]
with non-vanishing inner products
\[l.n=-m.m^{\ast}=-1. \tag{66}\]
The equations of motion of the perturbations on the Kerr background are described in terms of Teukolsky's master equations [53]. It turns out that these equations are separable into angular and radial parts. The wave function can be decomposed into the form
\[\psi_{s}=e^{-i\omega\hat{t}+im\hat{\phi}}R_{s}(\hat{r})S_{s}(\theta) \tag{67}\]
where \(s\) is the "spin-weight" of the field; \(\psi_{s}\) is related to the electromagnetic field strength for spin-1 (\(|s|=1\)) and to the Weyl tensor for spin-2 (\(|s|=2\)) perturbations. The angular and radial functions satisfy,
\[\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d}{d \theta}S_{s}\right)+\left(\Lambda_{lms}-a^{2}\omega^{2}\sin^{2}\theta-\frac{m^ {2}}{\sin^{2}\theta}\right.\] \[\left.-2a\omega s\cos\theta-\frac{s^{2}+2ms\cos\theta}{\sin^{2} \theta}\right)S_{s}=0, \tag{68}\]
Figure 1: Real and imaginary part of QNM as a function of \(q\) with \(a/M\) and \(r_{0}\) variations. We set \(m=1\) and \(\mathbf{a}=1/2\).
and
\[\Delta^{-s}\frac{d}{d\hat{r}}\left(\Delta^{s+1}\frac{d}{d\hat{r}}R_{s}\right)\] \[\qquad\quad+\left(\frac{H^{2}-2is(\hat{r}-M)H}{\Delta}+4is\omega \hat{r}-\lambda\right)R_{s}=0, \tag{69}\]
where \(\Lambda_{lms}\) is a separation constant, \(\lambda=\Lambda_{lms}-2am\omega-s(s+1)\), and \(H=\omega(\hat{r}^{2}+a^{2})-am\).
In terms of \(x\) and \(\tau_{H}\), the radial equation is
\[x(x+\tau_{H}){R_{s}}^{\prime\prime}+(s+1)(2x+\tau_{H}){R_{s}}^{\prime}+V_{s}R_ {s}=0, \tag{70}\]
where
\[V_{s} = \frac{(mx^{2}+2mx+n\tau_{H})^{2}}{4x(x+\tau_{H})}+2ims(1+x)-\lambda \tag{71}\] \[- \frac{is(2x+\tau_{H})(mx^{2}+2mx+n\tau_{H})}{4x(x+\tau_{H})}.\]
### Far region
In the far region \(x\gg\tau_{H}\), the radial equation is
\[x^{2}{R_{s}}^{\prime\prime}+(s+1)2x{R_{s}}^{\prime}+V_{s}^{far}R_{s}=0, \tag{72}\]
where
\[V_{s}^{far}=-\Lambda_{lms}+m^{2}+\frac{m^{2}}{4}(x+2)^{2}+imsx+s(s+1). \tag{73}\]
The solution to the above equation is
\[R_{s}^{far}= A_{s}e^{-i\frac{m}{2}x}x^{-\frac{1}{2}+\beta-s}\] \[\times M\left(\frac{1}{2}+\beta-s+im,1+2\beta,imx\right)\] \[+B_{s}(\beta\rightarrow-\beta), \tag{74}\]
where
\[\beta^{2}=\frac{1}{4}+\Lambda_{lms}-2m^{2}. \tag{75}\]
In the matching region \(x\ll 1\), \(M(a,b,z)\to 1\),
\[R_{s}^{far}\sim A_{s}x^{-\frac{1}{2}+\beta-s}+B_{s}x^{-\frac{1}{2}-\beta-s}. \tag{76}\]
### Near region
In the near region \(x\ll 1\), the radial equation is (70) with
\[V_{s}^{near}= \frac{(2mx+n\tau_{H})^{2}-is(2x+\tau_{H})(2mx+n\tau_{H})}{4x(x+ \tau_{H})}\] \[+2ims+s(s+1)-\Lambda_{lms}. \tag{77}\]
The solution in the matching region \(x\gg\tau_{H}\) is
\[R_{s}^{near}\sim\tau_{H}^{\frac{1}{2}+\beta}x^{-\frac{1}{2}- \beta-s}\Gamma(-2\beta)\] \[\times\left[C_{s}\frac{\Gamma\left(1-s-in\right)}{\Gamma\left( \frac{1}{2}-\beta-s-im\right)\Gamma\left(\frac{1}{2}-\beta-i(n-m)\right)}\tau _{H}^{-i\frac{n}{2}}\right.\] \[+\left.D_{s}\frac{\Gamma\left(1+s+in\right)}{\Gamma\left(\frac{1 }{2}-\beta+s+im\right)\Gamma\left(\frac{1}{2}-\beta+i(n-m)\right)}\tau_{H}^{i \frac{n}{2}+s}\right]\] \[+\left(\beta\rightarrow-\beta\right). \tag{78}\]
### Matching region
The boundary condition at the membrane can be defined as
\[\mathcal{R}e^{i\pi\delta}=\frac{D_{s}}{C_{s}}x_{0}^{in+s} \tag{79}\]
Comparing coefficient from (76) and (78), we get
\[\frac{A_{s}}{C_{s}}= \tau_{H}^{\frac{1}{2}-\beta-i\frac{n}{2}}\Gamma(2\beta)\left[R_{- }+\left(\frac{x_{0}}{\tau_{H}}\right)^{-in-s}\mathcal{R}e^{i\pi\delta}R_{+}\right] \tag{80}\] \[\frac{B_{s}}{C_{s}}= \tau_{H}^{\frac{1}{2}+\beta-i\frac{n}{2}}\Gamma(-2\beta)\left[S_ {-}+\left(\frac{x_{0}}{\tau_{H}}\right)^{-in-s}\mathcal{R}e^{i\pi\delta}S_{+}\right] \tag{81}\]
where
\[R_{\pm} =\frac{\Gamma\left(1\pm s\pm in\right)}{\Gamma\left(\frac{1}{2}+ \beta\pm s\pm im\right)\Gamma\left(\frac{1}{2}+\beta\pm i(n-m)\right)} \tag{82}\] \[S_{\pm} =\frac{\Gamma\left(1\pm s\pm in\right)}{\Gamma\left(\frac{1}{2}- \beta\pm s\pm im\right)\Gamma\left(\frac{1}{2}-\beta\pm i(n-m)\right)} \tag{83}\]
### Absorption cross-section and QNMs
Following the same procedure as in Sec. III, we obtain the QNM spectrum as
\[\omega-m\Omega_{H}\simeq \frac{1}{2r_{0}^{*}}\pi\left(2q+\frac{(-1)^{s}+1}{2}+\delta\right)\] \[\times\left(1-\frac{i\,sgn[2q+\frac{(-1)^{s}+1}{2}+\delta]}{4r_{0}^{*}\gamma T_{H}}\right), \tag{84}\]
This result is consistent with CFT when we define
\[\delta=-\frac{(-1)^{s}+1}{2}-2mL \tag{85}\]
and \(L,{\bf a}\) are the same as given in Eq. (60). On the other hand, the absorption cross-section is
\[\sigma_{abs} \sim \frac{(\tau_{H})^{2\beta}}{\pi\Gamma(2\beta)^{2}}\sinh\left(\pi n\right) \tag{86}\] \[\times \left|\Gamma\left(\frac{1}{2}+\beta-s+im\right)\Gamma\left(\frac{ 1}{2}+\beta+i(n-m)\right)\right|^{2}\] \[\times \frac{1-|\mathcal{R}|^{2}}{\left|1+\mathcal{R}e^{-2i\tau_{0}^{ \ast}(\omega-m\Omega_{H})+i\pi\delta}\right|^{2}}\]
This result agrees with the CFT result when we choose
\[h_{R}=\frac{1}{2}+\beta, h_{L}=h_{R}-s, \tag{87}\]
and \(T_{L},T_{R},\omega_{L},\omega_{R}\) are also given by Eq. (64). We can take \(s=0\) and this will reduce to Eq. (37) and Eq. (32) for scalar perturbation. Furthermore, we can find the results for electromagnetic perturbation and gravitational perturbation when we take \(s=1\) and \(s=2\), respectively.
Each perturbation has a different observational signature that we can see from its QNMs and absorption cross-section. In Fig. (3), we can see that the scalar perturbation produces the most negative cross-section; the lower the spin \(s\), the more negative \(\sigma_{abs}\) becomes. We also notice that in the electromagnetic case the frequency has a phase difference of \(\pi\) compared to the scalar and gravitational cases. In Fig. (4), we can see that for the same \(m\), perturbations with even \(s\) produce higher \(\omega\) than perturbations with odd \(s\). Also, for all \(s\), the presence of the partially reflective membrane suppresses the instability.
## VI Echo time-delay
Since there is a reflective membrane, the ingoing wave is partially reflected and trapped between the membrane and the angular momentum barrier. Eventually the waves leak out as repeating echoes. The time gap between two consecutive echoes is the time for the wave to travel from the angular momentum barrier to the membrane and back again. This time gap, or time-delay, can be stated in terms of the ECO parameters and the position of the reflective membrane. The echo time-delay can be defined as
\[\Delta\tau=2|r_{0}^{\ast}| \tag{88}\]
This expression indicates that \(\Delta\tau\) is sensitive to the position of the membrane. In terms of distance of the membrane from the usual position of horizon \(\delta r=r_{0}-r_{+}\), Eq. (88) becomes
\[\Delta\tau\simeq\pm\frac{4Mr_{+}}{(r_{+}-r_{-})}\ln\left(\frac{\delta r}{r_{+ }-r_{-}}\right), \tag{89}\]
where the plus sign is for \(\delta r>r_{+}-r_{-}\) and the minus sign is for \(\delta r<r_{+}-r_{-}\). However, we only consider the latter case because, as we see in Fig. (5), a larger \(\delta r\) should give a shorter time-delay, since the distance between the angular momentum barrier and the reflective membrane decreases. We get a result similar to the non-extreme case [12], with a slight difference in the logarithmic factor. This is caused by a different coordinate transformation when converting the position of the membrane in the tortoise coordinate to the proper distance, where in this case we use the near-extremal limit (9). Nonetheless, we still find that the echo time-delay is sensitive to the position of the membrane. In addition, it is also sensitive to the extremality of the ECO. The length of the torus in terms of the echo time-delay is
\[L=\frac{1}{4\pi M}\frac{\Delta\tau}{2}=-\frac{r_{+}}{2\pi(r_{+}-r_{-})}\ln \left(\frac{\delta r}{r_{+}-r_{-}}\right). \tag{90}\]
Thus, we obtain
\[\delta r=(r_{+}-r_{-})e^{-\frac{2\pi(r_{+}-r_{-})}{r_{+}}L}. \tag{91}\]
This relation shows that the distance of the membrane from the horizon is associated with the size of the torus. When \(\delta r\to 0\), we have \(L\rightarrow\infty\), which corresponds to the classical case. If we expand (91) and keep terms up to first order, we obtain a relation between \(\delta r\) and the dimensionless Hawking temperature,
\[\delta r\simeq(r_{+}-r_{-})\left(1-\frac{2\pi(r_{+}-r_{-})}{r_{+}}L\right) \simeq\tau_{H}r_{+}. \tag{92}\]
Since \(\delta r\) is assumed to be of the order of the Planck length for stability, this relation fits with the near-extremal condition \(\tau_{H}\ll 1\).
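The behaviour shown in Fig. (5) follows directly from Eq. (89); a minimal numerical sketch (with \(M=1\) and illustrative membrane offsets \(\delta r\)):

```python
import numpy as np

def echo_time_delay(M, a, delta_r):
    """Echo time-delay of Eq. (89) for a membrane at r0 = r_+ + delta_r, with delta_r < r_+ - r_-."""
    r_plus = M + np.sqrt(M**2 - a**2)
    r_minus = M - np.sqrt(M**2 - a**2)
    return -4 * M * r_plus / (r_plus - r_minus) * np.log(delta_r / (r_plus - r_minus))

# Larger membrane offsets delta_r give shorter delays; a/M -> 1 pushes the delay to infinity.
for a_over_M in (0.99, 0.999, 0.9999):
    print(a_over_M, echo_time_delay(M=1.0, a=a_over_M, delta_r=1e-10))
```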
In Fig. (5) we show the echo time-delay as a function of \(a/M\) near extremality, for several values of \(\delta r\). We can see that when \(a/M\to 1\), the echo time-delay approaches infinity; in other words, there are no echoes. This can be understood because in the extremal case the reflectivity is suppressed by the spin of the ECO, so the ECO absorbs the incoming wave like a classical black hole and does not produce echoes. Moreover, the echo time-delay does not depend on the particle spin \(s\), since it is defined as how long it takes for a wave to travel across the ergoregion twice. Massless scalars, photons, and gravitons travel with approximately the same speed, so they have the same time-delay.
## VII Conclusions
In this paper, we analyzed the CFT dual of a Kerr-like ECO in the near-extremal condition. Due to quantum gravitational effects, the horizon is replaced by a partially reflective membrane placed slightly outside the would-be horizon. To keep the reflectivity non-zero in the near-extremal limit, we only considered fields with energies near the superradiant bound. We imposed this reflective boundary condition for the scalar field perturbation. The QNM spectrum and absorption cross-section were then obtained by solving the propagation of massless scalar field modes. We compared the results with the CFT dual computation, as has been done for a generic Kerr ECO. The CFT calculation of the modified near-horizon region is done by considering a dual field theory that lives on a finite toroidal two-manifold, where we kept the spatial and temporal periodicities finite. This modification can be interpreted as finite-size/finite-\(N\) effects in AdS/CFT.
We showed the consistency between the QNMs and absorption cross-section computed from the gravity side and the corresponding quantities computed from the dual CFT. The QNMs in the near-extremal condition are in line with the non-extreme case, the differences being the value of the torus length \(L\) and the phase shift \(\delta\). We reproduced the Boltzmann reflectivity from the imaginary parts of the QNMs obtained from the CFT, in agreement with the non-extreme case. The imaginary part of the QNMs is also related to the instability of the ECO. The instability in the near-extremal regime is suppressed by the partially reflective membrane as long as we choose the modular parameter \(\mathbf{a}\) to be positive. Besides the QNMs, the absorption cross-section possesses oscillatory features that start to disappear when \(a/M\to 1\). This shows that the reflectivity of the ECO is suppressed by its spin near extremality. The CFT side reproduces these features when CFT quantities are chosen consistently with the near-extremal condition.
We also extended the calculation to higher-spin fields such as the photon and the graviton. We solved Teukolsky's master equation for general spin, imposed the reflective boundary condition, and then obtained the QNMs and absorption cross-section. The particle spin contributes to the phase shift of the reflected waves. For the electromagnetic perturbation, the frequency has a phase difference of \(\pi\) compared to the scalar and gravitational perturbations. We also showed that a lower spin parameter produces a more negative absorption cross-section. Moreover, in terms of CFT quantities, the particle spin appears in the conformal weight.
Finally, we computed the echo time-delay produced by the propagation of the fields on the near-extremal Kerr ECO background. The echo time-delay is a key observable for detecting gravitational echoes in the post-merger ringdown signal. The modification of the near-horizon region can also manifest itself in this observable. We showed that the echo time-delay is sensitive
Figure 4: Real and imaginary parts of the QNM as a function of \(q\) with variations of \(s\). For both panels we set \(r_{0}^{*}=100\) and \(\mathbf{a}=1/2\).
Figure 5: Echo time-delay as a function of \(a/M\) near extremality with variation of \(\delta r\).
to the position of the membrane and to the extremality of the ECOs. It is very important to investigate the quantum corrections to the near-horizon geometry of near-extremal Kerr from the CFT side, as this may lead to a better understanding of the microscopic theory. We believe that our calculation of the QNMs, absorption cross-section, and echo time-delay in the near-extremal condition can be used to advance the study of ECOs, especially in relation to experiment, since in the near future more experiments will run to improve the study of astrophysical objects. Furthermore, extending our analysis to higher-spin fields provides the possibility of observing these physical properties of ECOs, for example through gravitational waves.
###### Acknowledgements.
We would like to thank the members of Theoretical Physics Groups of Institut Teknologi Bandung for the discussion. M. F. A. R. S. is supported by the Second Century Fund (C2F), Chulalongkorn University, Thailand.
|
2308.07136
|
Pairing interacting protein sequences using masked language modeling
|
Predicting which proteins interact together from amino-acid sequences is an
important task. We develop a method to pair interacting protein sequences which
leverages the power of protein language models trained on multiple sequence
alignments, such as MSA Transformer and the EvoFormer module of AlphaFold. We
formulate the problem of pairing interacting partners among the paralogs of two
protein families in a differentiable way. We introduce a method called DiffPALM
that solves it by exploiting the ability of MSA Transformer to fill in masked
amino acids in multiple sequence alignments using the surrounding context. MSA
Transformer encodes coevolution between functionally or structurally coupled
amino acids. We show that it captures inter-chain coevolution, while it was
trained on single-chain data, which means that it can be used
out-of-distribution. Relying on MSA Transformer without fine-tuning, DiffPALM
outperforms existing coevolution-based pairing methods on difficult benchmarks
of shallow multiple sequence alignments extracted from ubiquitous prokaryotic
protein datasets. It also outperforms an alternative method based on a
state-of-the-art protein language model trained on single sequences. Paired
alignments of interacting protein sequences are a crucial ingredient of
supervised deep learning methods to predict the three-dimensional structure of
protein complexes. DiffPALM substantially improves the structure prediction of
some eukaryotic protein complexes by AlphaFold-Multimer, without significantly
deteriorating any of those we tested. It also achieves competitive performance
with using orthology-based pairing.
|
Umberto Lupo, Damiano Sgarbossa, Anne-Florence Bitbol
|
2023-08-14T13:42:09Z
|
http://arxiv.org/abs/2308.07136v1
|
# Pairing interacting protein sequences using masked language modeling
###### Abstract
Predicting which proteins interact together from amino-acid sequences is an important task. We develop a method to pair interacting protein sequences which leverages the power of protein language models trained on multiple sequence alignments, such as MSA Transformer and the EvoFormer module of AlphaFold. We formulate the problem of pairing interacting partners among the paralogs of two protein families in a differentiable way. We introduce a method called DiffPALM that solves it by exploiting the ability of MSA Transformer to fill in masked amino acids in multiple sequence alignments using the surrounding context. MSA Transformer encodes coevolution between functionally or structurally coupled amino acids. We show that it captures inter-chain coevolution, while it was trained on single-chain data, which means that it can be used out-of-distribution. Relying on MSA Transformer without fine-tuning, DiffPALM outperforms existing coevolution-based pairing methods on difficult benchmarks of shallow multiple sequence alignments extracted from ubiquitous prokaryotic protein datasets. It also outperforms an alternative method based on a state-of-the-art protein language model trained on single sequences. Paired alignments of interacting protein sequences are a crucial ingredient of supervised deep learning methods to predict the three-dimensional structure of protein complexes. DiffPALM substantially improves the structure prediction of some eukaryotic protein complexes by AlphaFold-Multimer, without significantly deteriorating any of those we tested. It also achieves competitive performance with using orthology-based pairing.
## Significance statement
Deep learning has brought major advances to the field of proteins. Self-supervised models, based on approaches from natural language processing and trained on large ensembles of protein sequences, efficiently learn statistical dependence in this data. This includes coevolution patterns between structurally or functionally coupled amino acids, which allows them to capture structural contacts. We propose a method to pair interacting protein sequences which leverages the power of a protein language model trained on multiple sequence alignments. Our method performs well for small datasets that are challenging for existing methods. It can improve structure prediction of protein complexes by supervised methods, which remains more challenging than that of single-chain proteins.
## Introduction
Interacting proteins play key roles in cells, ensuring the specificity of signaling pathways and forming multi-protein complexes that act e.g. as molecular motors or receptors. Predicting protein-protein interactions and the structure of protein complexes are important questions in computational biology and biophysics. Indeed, high-throughput experiments capable of resolving protein-protein interactions remain challenging [1], even for model organisms, and experimental determination of protein complex structure is demanding.
A major advance in protein structure prediction was achieved by AlphaFold [2] and other deep learning approaches [3, 4, 5]. Extensions to protein complexes have been proposed [6, 7, 8, 9], including AlphaFold-Multimer (AFM) [7], but their performance is heterogeneous and less impressive than for monomers [10]. Importantly, the first step of AlphaFold is to build multiple-sequence alignments (MSAs) of homologs of the query protein sequence. The results of the CASP15 structure prediction contest demonstrated that MSA quality is crucial to further improving AlphaFold performance [11, 12]. For protein complexes involving several different chains (heteromers), paired MSAs, whose rows include actually interacting chains, can provide coevolution signal between interacting partners that is informative about inter-chain contacts [13, 14, 15, 16]. However, constructing paired MSAs poses the challenge of properly pairing sequences. Accordingly, the quality of pairings strongly impacts the accuracy of heteromer structure prediction [17, 9, 18]. Pairing interaction partners is difficult because many protein families contain several paralogous proteins encoded within the same genome. This problem is known as paralog matching. In prokaryotes, genomic proximity can often be used to solve it, since most interaction partners are encoded in close genomic locations [19, 20]. However, this is not the case in eukaryotes. Large-scale coevolution studies of protein complexes [21, 22, 23] and deep learning approaches [6, 7, 8, 9, 24] have paired sequences by using genomic proximity when possible [8, 21, 24], and/or by pairing together the closest, or equally ranked, hits to the query sequences, i.e. relying on approximate orthology [6, 7, 8, 9, 22, 23, 24, 25].
Aside from genomic proximity and orthology, phylogeny-based methods have addressed the paralog matching problem [26, 27, 28, 29, 30, 31, 32, 33, 34, 35], exploiting similarities between evolutionary histories of interacting proteins [36, 37, 38, 39, 40]. Other methods, based on coevolution [41, 42, 43, 44, 45], rely on correlations in amino-acid usage between interacting proteins [46, 47, 14, 15]. These correlations arise from the need to maintain physico-chemical complementarity among amino acids in contact, and from shared evolutionary history [48, 49]. Phylogeny and coevolution can be explicitly combined, improving performance [50]. However, coevolution-based approaches are data-thirsty and need large and diverse MSAs to perform well. This limits their applicability, especially to eukaryotic complex structure prediction. Nevertheless, the core idea of finding pairings that maximise coevolution signal holds promise for paralog matching and complex structure prediction.
We develop a new coevolution-based method for paralog matching which leverages recent neural protein language models taking MSAs as inputs [2, 51]. These models are one of the ingredients of the success of AlphaFold [2]. We focus on MSA Transformer [51], a protein language model which was trained on MSAs using the masked language modeling (MLM) objective in a self-supervised way. We introduce DiffPALM, a differentiable method for predicting paralog matchings using MLM. We show that it outperforms existing coevolution methods by a large margin on difficult benchmarks of shallow MSAs extracted from ubiquitous prokaryotic protein datasets. DiffPALM performance further quickly improves when known interacting pairs are provided as examples. Next, we apply DiffPALM to the hard problem of paralog matching for eukaryotic protein complexes. Among the complexes we tested, DiffPALM substantially improves structure prediction by AFM in some cases, and does not yield any significant deterioration. It also achieves competitive performance with using orthology-based pairing.
## Results
### Leveraging MSA-based protein language models for paralog matching
MSA-based protein language models, which include MSA Transformer [51] and the EvoFormer module of AlphaFold [2], are trained to correctly fill in masked amino acids in MSAs with the MLM loss (see "Supplementary material, MSA Transformer and masked language modeling for MSAs" and [51, 52] for details). To this end, they use the rest of the MSA as context, which allows them to capture coevolution. Indeed, MSA Transformer achieves state-of-the-art performance at unsupervised structural contact prediction [51], captures pairwise phylogenetic relationships between sequences [52], and can be used to generate new sequences from given protein families [53]. While MSA Transformer was only trained on MSAs corresponding to single chains, inter-chain coevolution signal has strong similarities with intra-chain signal [13, 14, 15]. We find that MSA Transformer is able to detect inter-chain contacts from a properly paired MSA, while it cannot do so from a wrongly paired MSA, see Fig. S1. Furthermore, we find that the MLM loss (used for the pre-training of MSA Transformer) decreases as the fraction of correctly matched sequences increases, see Fig. S2. These results demonstrate that MSA Transformer captures inter-chain coevolution signal.
In this context, we ask the following question: Can we exploit MSA-based protein language models to address the paralog matching problem? Let us focus on the case where two MSAs have to be paired, which is the relevant one for heterodimers. Paralog matching amounts to pairing these two MSAs, each corresponding to one of two interacting protein families, so that correct interaction partners are placed on the same row of the paired MSA. Throughout, we will assume that interactions are one-to-one, excluding cross-talk, which is valid for proteins that interact specifically [54]. Thus, within each species, assuming that there is the same number of sequences from both families, we aim to find the correct one-to-one matching that associates one protein from the first family to one protein from the second family. We also cover the case where the two protein families have different numbers of paralogs within the same species, see "Methods". Motivated by our finding that the MLM loss is lower for correctly paired MSAs than for incorrectly paired ones, we address the paralog matching problem by looking for pairings that minimise an MLM loss. A challenge is that the number of possible such one-to-one matchings scales factorially with the number of sequences in the species, making it difficult to find the permutation that minimises the loss by a brute-force search. We address this challenge by formulating a differentiable optimization problem that can be solved using gradient methods, to yield configurations minimizing our MLM loss, see "Methods". We call our method **DiffPALM**, short for **Diff**erentiable **P**airing using **A**lignment-based **L**anguage **M**odels.
### DiffPALM outperforms other coevolution methods on small MSAs
We start out by considering a well-controlled benchmark dataset composed of ubiquitous prokaryotic proteins from two interacting families, namely histidine kinases (HKs) and response regulators (RRs) [55, 56], see "Supplementary material, Datasets". These proteins interact together within prokaryotic two-component signaling systems, important pathways that enable bacteria to sense and respond to environment signals [54]. They possess multiple paralogs (on the order of ten per genome, with substantial variability), and have strong specificity for their cognate partners. Because most cognate HK-RR pairs are encoded in the same operon, many interaction partners are known from genome proximity, which enables us to assess performance. In addition, earlier coevolution methods for paralog matching were tested on this dataset, allowing rigorous comparison [14, 47, 50]. Here, we focus on datasets comprising about 50 cognate HK-RR pairs. Indeed, this small data regime is problematic for existing coevolution methods,
which require considerably deeper alignments to achieve good performance [14, 15, 47, 50]. Furthermore, this regime is highly relevant for eukaryotic complexes, because their homologs have relatively small sequence diversity, as shown by the effective depth of their MSAs in Table S1. While prokaryotic proteins such as HKs and RRs feature large diversity, focusing on small datasets allows us to address the relevant regime of low diversity in this well-controlled benchmark case. We hypothesize that MSA Transformer's extensive pre-training can help to capture coevolution even in these difficult cases. To assess this, we first test two variants of our DiffPALM method (see "Methods") on 40 MSAs from the HK-RR dataset comprising about 50 HK-RR pairs each (see "Supplementary material, Datasets"). We first address the _de novo_ pairing prediction task, starting from no known HK-RR pair, and then we study the impact of starting from known pairs.
Fig. 1 shows that DiffPALM performs better than the chance expectation, obtained for random within-species matching. Moreover, it outperforms other coevolution-based methods, namely DCA-IPA [14] and MI-IPA [47], which rely respectively on Potts models and on mutual information, and GA-IPA [50], which combines these coevolution measures with sequence similarity, a proxy for phylogeny. Importantly, these results are obtained without giving any paired sequences as input to the algorithm. The performance of DiffPALM is particularly good for pairs with a high confidence score (see "Result and confidence"), as shown by the "precision-10" curve, which focuses on the top 10% of predicted pairs, when ranked by predicted confidence (see Fig. 1). We also propose a method based on a protein language model trained on single sequences, ESM-2 (650M) [5], see "Pairing based on a single-sequence language model". DiffPALM also outperforms this method, even though the latter is faster (no need for backpropagation) and is formulated as a linear matching problem, which is solved exactly. This confirms that the coevolution information contained in the MSA plays a key role in the performance of DiffPALM, which is based on MSA Transformer. A key strength of MSA Transformer, and thus of DiffPALM, is that they leverage the power of large language models while starting from MSAs, and thus allow direct access to the coevolution signal. Fig. 1 shows that both variants of DiffPALM, namely MRA and IPA (see "Methods"), outperform all baselines, and that the precision of MRA increases with the number of runs used (see Table S2 for details). Note that the distribution of precision-10 scores over the different MSAs we considered is skewed, especially after many MRA runs, see Fig. S3. For many MSAs, almost perfect scores are reached, while performance is bad for a few others. MSAs with a smaller average number of sequences per species tend to yield larger precision, as the pairing task is then easier.
So far, we addressed _de novo_ pairing prediction, where no known HK-RR pair is given as input. Can DiffPALM precision increase by exploiting "positive examples" of known interacting partners? This is an important question, since experiments on model species may for instance give some positive examples (see "The paralog matching problem"). To address it, we included different numbers of positive examples, by using the corresponding non-masked interacting pairs as context (see "Construction of an appropriate MLM loss"). The left panel of Fig. 2 shows that the performance of DiffPALM significantly increases with the number of positive examples used, reaching almost perfect performance for the highest-confidence pairs (precision-10).
While we focused on HK-RR pairing so far, DiffPALM is a general method. To assess how it extends to other cases, we consider another pair of ubiquitous prokaryotic proteins, namely homologs of the _E. coli_ proteins MALG-MALK, which are involved in ABC transporter complexes. These proteins form permanent complexes, while HK-RR interact transiently to transmit signal. The right panel of Fig. 2 shows results obtained on 40 MSAs comprising about 50 MALG-MALK pairs, without positive examples. We observe that DiffPALM outperforms the chance expectation by a large margin. It also significantly outperforms existing coevolution methods [14, 47, 50], as well as our method based on ESM-2 (650M), see Table S2. Note that all approaches yield better performance for MALG-MALK than for HK-RR, as the number of MALG-MALK pairs per species is smaller than that of HK-RR pairs. Finally, while Fig. 2
Figure 1: **Performance of DiffPALM on small HK-RR MSAs.** The performance of two variants of DiffPALM (MRA and IPA, see “Improving precision: MRA and IPA”) is shown versus the number of runs used for the MRA variant, for 40 MSAs comprising about 50 HK-RR pairs. The chance expectation, and the performance of various other methods, are reported as baselines. Three existing coevolution-based methods are considered: DCA-IPA [14], MI-IPA [47], and GA-IPA [50]. We also consider a pairing method based on the scores given by the ESM-2 (650M) single-sequence protein language model [5], see “Pairing based on a single-sequence language model”. With all methods, a full one-to-one within-species pairing is produced, and performance is measured by precision (also called positive predictive value or PPV), namely, the fraction of correct pairs among predicted pairs. The default score is “precision-100”, where this fraction is computed over all predicted pairs (100% of them). For DiffPALM-MRA, we also report “precision-10”, which is calculated over the top 10% predicted pairs, when ranked by predicted confidence within each MSA (see “Methods”). For DiffPALM, we plot the mean performance on all MSAs (color shading), and the standard error range (shaded region). For our ESM-2 based method, we consider 10 different values of masking probability \(p\) from 0.1 to 1.0, and we report the range of precisions obtained (gray shading). For other baselines, we report the mean performance on all MSAs.
reports the final MRA performance, Fig. S4 shows that both performance scores increase with the number of MRA runs in all cases.
### Using DiffPALM for eukaryotic complex structure prediction by AFM
An important and more challenging application of DiffPALM is predicting interacting partners among the paralogs of two families in eukaryotic species. Indeed, eukaryotes often have many paralogs per species [57] but eukaryotic-specific protein families generally have fewer total homologs and smaller diversity than prokaryotes. Moreover, most interacting proteins are not encoded in close proximity in eukaryotic genomes. Paired MSAs are a key ingredient of protein complex structure prediction by AFM [7, 9]. When presented with query sequences, the default AFM pipeline [7] retrieves homologs of each of the chains. Within each species, homologs of different chains are ranked according to Hamming distance to the corresponding query sequence. Then, equal-rank sequences are paired. Can DiffPALM improve complex structure prediction by AFM? To address this question, we consider 15 complexes, listed in Table S1, whose structures are not included in the training set of the AFM release we used (see "Supplementary material, General points on AFM"), and for which the default AFM complex prediction was reported to perform poorly [7, 18] (see "Supplementary material, Eukaryotic complexes").
Fig. 3 shows that DiffPALM can improve complex structure prediction by AFM (see Fig. S5 for details). This suggests that it is able to produce better paired MSAs than those from the default AFM pipeline. In particular, substantial improvements are obtained for the complexes with PDB identifiers 6L5K and 6FYH, see Figs. S6 and S7 for structural visualizations. Among the complexes we considered, 6L5K and 6FYH have large effective (pairable) MSA depths, see Table S1. Conversely, complexes with very small raw or effective MSA depths do not significantly benefit from DiffPALM. Thus, DiffPALM is sensitive to MSA depth and diversity, albeit with less stringent requirements than other coevolution methods. In most cases, the quality of structures predicted using DiffPALM pairing is comparable to that obtained
Figure 2: **Impact of positive examples and extension to another pair of protein families.** We report the performance of DiffPALM with 5 MRA runs (measured as precision-100 and precision-10, see Fig. 1), for various numbers of positive examples, on the same HK-RR MSAs as in Fig. 1 (left panel). We also report the performance of DiffPALM for similarly-sized MALG-MALK MSAs (right panel). In both cases, we show the mean value over the 40 different MSAs with its standard error interval, and we plot the chance expectation for reference.
using the pairing method adopted e.g. by ColabFold [8], where only the orthologs of the two query sequences, identified as their best hits, are paired in each species (resulting in at most one pair per species) [22, 23, 24, 25, 8, 9], see Fig. 3. Note however that, for 6PNQ, the ortholog-only pairing method is outperformed both by DiffPALM and by AFM default. Indeed, the raw and effective MSA depths are smaller for this structure than e.g. for 6L5K and 6FYH (see Table S1). Thus, further reducing diversity by restricting to paired orthologs may be negatively impacting structure prediction in this case. Given the good results obtained overall with orthology-based pairings, we tried using them as positive examples for DiffPALM. Given the very good precision obtained by DiffPALM for high-confidence HK-RR pairs (see above), we also tried restricting to high-confidence pairs. For most structures, we obtained no significant improvement over the standard DiffPALM using these variants (see Fig. S8). However, for 6WCW, we could generate several higher-quality structures, particularly when using orthologs as positive examples.
Although DiffPALM achieves similar performance on these structure prediction tasks as using orthology, it predicts some pairs that are quite different from orthology-based pairs.
Figure 3: **Performance of AFM using different pairing methods.** We use AFM to predict the structure of protein complexes starting from differently paired MSAs, each of them constructed from the same initial unpaired MSAs. Three pairing methods are considered: the default one of AFM, only pairing orthologs to the two query sequences, and a single run of DiffPALM (equivalent to one MRA run). Performance is evaluated using DockQ scores (top panels), a widely used measure of quality for protein-protein docking [58], and the AFM confidence scores (bottom panels), see “Supplementary material, General points on AFM”. The latter are also used as transparency levels in the top panels, where more transparent markers denote predicted structures with low AFM confidence. For each query complex, AFM is run five times. Each run yields 25 predictions which are ranked by AFM confidence score. The five top predicted structures are selected from each run, giving 25 predicted structures in total for each complex. Out of the 15 complexes listed in Table S1, we restrict to those where any two of these three pairing methods yield a significant difference (\(>0.1\)) in average DockQ scores for at least one set of predictions coming from different runs but with the same within-run rank according to AFM confidence. Panels are ordered by increasing mean DockQ score for the AFM default method.
Indeed, Fig. S9 shows that the fraction of pairs identically matched by DiffPALM and by orthology is often smaller than 0.5. Fig. S10 further shows that, for the sequences that are paired differently by DiffPALM and by orthology, the Hamming distances between the two predicted partners is often above 0.5. Nevertheless, most of the pairs that are predicted both by DiffPALM and by using orthology have high DiffPALM confidence (see Fig. S9), confirming the importance of these pairs.
## Discussion
We developed DiffPALM, a method for pairing interacting protein sequences that builds on MSA Transformer [51], a protein language model trained on MSAs. MSA Transformer efficiently captures coevolution between amino acids, thanks to its training to fill in masked amino acids using the surrounding MSA context [51, 52, 53]. We showed that it also captures inter-chain coevolutionary signal, despite being trained on single-chain MSAs. We leveraged this ability in DiffPALM by using a masked language modeling loss as a coevolution score and looking for the pairing that minimizes it. We formulated the pairing problem in a differentiable way, allowing us to use gradient methods. On shallow MSAs extracted from controlled prokaryotic benchmark datasets, DiffPALM outperforms existing coevolution-based methods as well as a method based on a state-of-the-art language model trained on single sequences. Its performance quickly increases when adding examples of known interacting sequences. Paired MSAs of interacting partners are a key ingredient to complex structure prediction by AFM. We found that using DiffPALM can improve the performance of AFM, and achieves competitive performance with orthology-based pairing.
Recent work [18] also used MSA Transformer for paralog matching, in a method called ESMPair. It relies on column attention matrices and compares them across the MSAs of interacting partners. This makes it quite different from DiffPALM, which relies on coevolutionary information via the MLM loss. ESMPair may be more closely related to phylogeny-based [17] or orthology-based pairing methods, since column attention encodes phylogenetic relationships [52]. 13 out of the 15 eukaryotic protein complexes we considered were also studied in [18], but no substantial improvement (and often a degradation of performance) was reported for those when using ESMPair instead of the default AFM pairing, except for 7BQU. By contrast, DiffPALM yields strong improvements for 6L5K and 6FYH, and no significant performance degradation. Explicitly combining coevolution and phylogeny using MSA Transformer is a promising direction to further improve partner pairing. Indeed, such an approach has already improved traditional coevolution methods [50]. Other ways of improving MSA usage by AFM have also been proposed [59] and could be combined with advances in pairing. Besides improving MSA construction [60] and the extraction of MSA information, other promising approaches include exploiting structural alignments [61], using massive sampling and dropout [62], and combining AFM with more traditional docking methods [63, 64], which has allowed e.g. to improve structure prediction of 6A6I [63].
DiffPALM illustrates the power of neural protein language models trained on MSAs, and their ability to capture the rich structure of biological sequence data. The fact that these models encode inter-chain coevolution, while they are trained on single-chain data, shows their ability to generalize. We used MSA Transformer in a zero-shot setting, without fine-tuning it to the task of interaction partner prediction. Such fine-tuning could yield further performance gains [65].
The fact that DiffPALM outperforms existing coevolution methods [14, 47, 50] for shallow MSAs is reminiscent of the impressive performance of MSA Transformer at predicting structural contacts from shallow MSAs [51]. While traditional coevolution methods either compute local coevolution scores for two columns of an MSA [47] or build a global model for an MSA [14, 15],
MSA Transformer was trained on large ensembles of MSAs and shares parameters across them. This presumably allows it to transfer knowledge between MSAs, and to bypass the usual needs for deep MSAs of traditional coevolution methods [14, 15, 46, 47, 50], or of MSA-specific transformer models [66]. This constitutes major progress for the use of coevolution signal.
After the transformative progress brought by deep learning to protein structure prediction [2, 3, 4, 5], predicting protein complex structure and ligand binding sites is fast advancing with AFM and related methods, but also with other deep learning models based on structural representations [67, 68, 69, 70]. Combining the latter with the power of sequence-based language models may bring even further progress.
## Methods
### The paralog matching problem
**Goal and notations.** Paralog matching amounts to pairing two MSAs, each corresponding to one of the two protein families considered. We assume that interactions are one-to-one. Let \(\mathcal{M}^{\rm(A)}\) and \(\mathcal{M}^{\rm(B)}\) be the (single-chain) MSAs of two interacting protein families A and B, and let \(K\) denote the number of species represented in both MSAs and comprising more than one unmatched sequence in at least one MSA. Species represented in only one MSA are discarded since no within-species matching is possible for them. Species with only one unmatched sequence in each MSA are not considered further since pairing is trivial. There may also be \(N_{\rm pos}\) known interacting pairs: they are treated separately, as positive examples (see below). Here we focus on the unmatched sequences. For \(k=1,\ldots,K\), let \(N_{k}^{\rm(A)}\) and \(N_{k}^{\rm(B)}\) denote the number of unmatched sequences belonging to species \(k\) in \(\mathcal{M}^{\rm(A)}\) and \(\mathcal{M}^{\rm(B)}\) (respectively).
**Dealing with asymmetric cases.** The two protein families considered may have different numbers of paralogs within the same species. Assume, without loss of generality, that \(N_{k}^{\rm(A)}<N_{k}^{\rm(B)}\) for a given \(k\). To solve the matching problem with one-to-one interactions, we would like to pick, for each of the \(N_{k}^{\rm(A)}\) sequences in \(\mathcal{M}^{\rm(A)}\), a single and exclusive interaction partner out of the \(N_{k}^{\rm(B)}\) available sequences in \(\mathcal{M}^{\rm(B)}\). The remaining sequences of the species in \(\mathcal{M}^{\rm(B)}\) are left unpaired. In practice, we achieve this by augmenting the original set of species-\(k\) sequences from \(\mathcal{M}^{\rm(A)}\) with \(N_{k}^{\rm(B)}-N_{k}^{\rm(A)}\) "padding sequences" made entirely of gap symbols. By doing so (and analogously when \(N_{k}^{\rm(A)}>N_{k}^{\rm(B)}\)), the thus-augmented interacting MSAs have the same number \(N_{k}:=\max(N_{k}^{\rm(A)},N_{k}^{\rm(B)})\) of sequences from each species \(k\). In practice, this method is used for the AFM complex structure prediction, while the curated benchmark prokaryotic MSAs do not have asymmetries (see "Supplementary material, Datasets").
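As an illustration of the padding step, the sketch below equalizes the two within-species sequence sets with all-gap sequences; the function and variable names are ours and not part of the DiffPALM code.

```python
def pad_species_msas(seqs_a, seqs_b, gap="-"):
    """Pad the smaller family with all-gap sequences so that both families contribute
    N_k = max(N_k_A, N_k_B) rows for this species.

    seqs_a, seqs_b: lists of aligned sequences (strings) from one species, one list per
    protein family; all sequences within a family share the same aligned length.
    """
    n_k = max(len(seqs_a), len(seqs_b))
    padded_a = seqs_a + [gap * len(seqs_a[0])] * (n_k - len(seqs_a))
    padded_b = seqs_b + [gap * len(seqs_b[0])] * (n_k - len(seqs_b))
    return padded_a, padded_b

# Toy example: two paralogs of family A and three of family B in the same species.
a, b = pad_species_msas(["ACDE", "ACDF"], ["KLMNP", "KLMNQ", "KLMNR"])
assert len(a) == len(b) == 3 and a[-1] == "----"
```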
**Formalization.** The paralog matching problem corresponds to finding, within each species \(k\), a mapping that associates one sequence of \(\mathcal{M}^{\rm(A)}\) to one sequence of \(\mathcal{M}^{\rm(B)}\) (and reciprocally). Thus, within each species \(k\), one-to-one matchings can be encoded as permutation matrices of size \(N_{k}\times N_{k}\). A brute-force search through all possible within-species one-to-one matchings would scale factorially with the size \(N_{k}\) of each species, making it prohibitive. Note that the Iterative Pairing Algorithm (IPA) [14, 47] is an approximate method to solve this problem when optimizing coevolution scores. Here, we introduce another one, which allows us to leverage the power of deep learning.
**Exploiting known interacting partners.** Our use of a language model allows for contextual conditioning, a common technique in natural language processing. Indeed, if any correctly paired sequences are already known, they can be included as part of the joint MSA input to
MSA Transformer. In this case, we exclude their pairing from the optimization process - in particular, by not masking any of their amino acids, see below. We call these known paired sequences "positive examples". In Fig. 2, we randomly sampled species and included all their pairs as positive examples, until we reached the desired depth \(N_{\text{pos}}\pm 10\%\). For eukaryotic complex structure prediction, we treated the query sequence pair as a positive example.
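A minimal sketch of this sampling of positive examples is given below; `pairs_per_species` and the stopping rule are illustrative assumptions about how the target depth \(N_{\text{pos}}\pm 10\%\) could be reached, not the exact procedure.

```python
import random

def sample_positive_species(pairs_per_species, n_pos, tol=0.1, seed=0):
    """Randomly pick species and take all their known pairs as positive examples,
    stopping once the accumulated number of pairs reaches (1 - tol) * n_pos.

    pairs_per_species: dict mapping a species identifier to its number of known pairs.
    """
    rng = random.Random(seed)
    species = list(pairs_per_species)
    rng.shuffle(species)
    chosen, total = [], 0
    for sp in species:
        if total >= (1 - tol) * n_pos:
            break
        chosen.append(sp)
        total += pairs_per_species[sp]
    return chosen, total

chosen, depth = sample_positive_species(
    {"species_1": 12, "species_2": 9, "species_3": 15, "species_4": 20}, n_pos=30)
```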
### DiffPALM: Paralog matching based on MLM
Here, we explain our paralog matching method based on MLM, which we call DiffPALM. Background information on MSA Transformer and its MLM loss is collected in "Supplementary material, MSA Transformer and masked language modeling for MSAs". DiffPALM exploits our differentiable framework for optimizing matchings, see "Supplementary material, A differentiable formulation of paralog matching". The key steps are summarized in Fig. 4.
**Construction of an appropriate MLM loss.** Using the tools just described, we consider two interacting MSAs (possibly augmented with padding sequences), still denoted by \(\mathcal{M}^{\text{(A)}}\) and \(\mathcal{M}^{\text{(B)}}\). Given species indexed by \(k=1,\ldots,K\), we initialize a set \(\{X_{k}\}_{k=1,\ldots,K}\) of square matrices of size \(N_{k}\times N_{k}\) (the case \(K=1\) corresponds to \(X\) in "Supplementary material, A differentiable formulation of paralog matching"). We call these "parameterization matrices". By applying to them the matching operator \(M\) [see Eq. (S2)], we obtain the permutation matrices \(\{M(X_{k})\}_{k=1,\ldots,K}\), encoding matchings within each species in the paired MSA. Using gradient methods, we optimize the parameterization matrices so that the corresponding permutation
Figure 4: **Schematic of the DiffPALM method.** First, the parameterization matrices \(X_{k}\) are initialized, and then the following steps are repeated until the loss converges: (1) Compute the permutation matrix \(M(X_{k})\) and use it to shuffle \(\mathcal{M}^{\text{(A)}}\) relative to \(\mathcal{M}^{\text{(B)}}\). Then pair the two MSAs. (2) Randomly mask some tokens of one of the two sides of the paired MSA and compute the MLM loss Eq. (S1). (3) Backpropagate the loss and update the parameterization matrices \(X_{k}\), using the Sinkhorn operator \(\hat{S}\) for the backward step instead of the matching operator \(M\) (see "Supplementary material, A differentiable formulation of paralog matching").
matrices yield a paired MSA with low MLM loss. More precisely, paired MSAs are represented as concatenated MSAs with interacting partners placed on the same row,1 and our MLM loss for this optimization is computed as follows:
Footnote 1: We employ no special tokens to demarcate the boundary between one sequence and its partner.
1. Perform a shuffle of \(\mathcal{M}^{\mathrm{(A)}}\) relative to \(\mathcal{M}^{\mathrm{(B)}}\) using the permutation matrix \(M(X_{k})\) in each species \(k\) (plus an optional noise term, see below), to obtain a paired MSA \(\mathcal{M}\);
2. Generate a mask for \(\mathcal{M}\) (excluding any positive example tokens from the masking);
3. Compute MSA Transformer's MLM loss for that mask, see Eq. (S1).
Importantly, we only mask tokens from one of the two MSAs, chosen uniformly at random within that MSA with a high masking probability \(p=0.7\).2 Our rationale for using large masking probabilities is that it forces the model to predict masked residues in one of the two MSAs by using information coming mostly from the other MSA - see Fig. S2. We stress that, if padding sequences consisting entirely of gaps are present (see above), we mask these symbols with the same probability as those coming from ordinary sequences. Of the two MSAs to pair, we mask the one with shorter length if no padding sequences exist (i.e. here for our prokaryotic benchmark datasets). Else, if lengths are comparable but one MSA contains considerably more padding sequences than the other, we preferentially mask that MSA. Otherwise, we randomly choose which of the two MSAs to mask.
Footnote 2: In contrast, uniformly random masking with \(p=0.15\) was used during MSA Transformer’s pre-training [51].
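To make steps 1-3 above concrete, the sketch below draws a random mask restricted to one block of the paired MSA and computes the cross-entropy over masked positions; `model_logits` is a generic placeholder standing in for MSA Transformer (we do not reproduce its actual API here), and the tensor shapes are illustrative assumptions.

```python
import torch

def masked_one_side_loss(tokens, left_len, model_logits, mask_idx, p=0.7, mask_left=False):
    """MLM loss over a paired MSA, masking tokens of only one of its two blocks.

    tokens: LongTensor of shape (num_rows, left_len + right_len), the paired MSA.
    model_logits: callable mapping a token tensor to logits of shape (rows, cols, vocab);
                  placeholder for the language model (assumption, not the real signature).
    mask_idx: integer id of the <mask> token.
    """
    num_rows, num_cols = tokens.shape
    side = torch.zeros(num_rows, num_cols, dtype=torch.bool)
    side[:, :left_len] = mask_left          # True on the block chosen for masking
    side[:, left_len:] = not mask_left
    mask = side & (torch.rand(num_rows, num_cols) < p)
    # Rows holding positive examples would be excluded from `mask` in the full method.

    corrupted = tokens.clone()
    corrupted[mask] = mask_idx
    logits = model_logits(corrupted)        # (rows, cols, vocab)
    return torch.nn.functional.cross_entropy(logits[mask], tokens[mask])
```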
We fine-tuned all the hyperparameters involved in our algorithm using two joint MSAs of depth \(\sim 50\), constructed by selecting random species from the HK-RR dataset (see "Supplementary material, Datasets").
**Noise and regularization.** Following [71], after updating (or initializing) each \(X_{k}\), we add to it a noise term given by a matrix of standard i.i.d. Gumbel noise multiplied by a scale factor. The addition of noise ensures that the \(X_{k}\) do not get stuck at degenerate values for the right-hand side of Eq. (S2), and more generally may encourage the algorithm to explore larger regions in the space of permutations. As scale factor for this noise we choose \(0.1\) times the sample standard deviation of the current entries of \(X_{k}\), times a global factor tied to the optimizer scheduler (see next paragraph). Finally, since the matching operator is scale-invariant, we can regularize the matrices \(X_{k}\) to have small Frobenius norm. We find this to be beneficial and implement it through weight decay, set to be \(w=0.1\).
**Optimization.** We backpropagate the MLM loss on the parameterization matrices \(X_{k}\). We use the AdaDelta optimizer [72] with an initial learning rate \(\gamma=9\) and a "reduce on loss plateau" learning rate scheduler which decreases the learning rate by a factor of \(0.8\) if the loss has not decreased for more than \(20\) gradient steps after the learning rate was last set. The learning rate scheduler also provides the global scale factor which, together with the standard deviation of the entries of \(X_{k}\), dynamically determines the magnitude of the Gumbel noise (see previous paragraph).
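Putting the previous ingredients together, the following sketch shows one way the optimization loop could look: the Sinkhorn relaxation provides gradients for the hard matching (straight-through trick), scaled Gumbel noise is added to each parameterization matrix, and AdaDelta with weight decay and a reduce-on-plateau scheduler uses the hyperparameter values quoted above. The `paired_mlm_loss` callable is a placeholder for the masked loss of the previous sketch; all names are ours, and details of the actual DiffPALM implementation may differ.

```python
import torch
from scipy.optimize import linear_sum_assignment

def sinkhorn(X, n_iter=10):
    """Soft, approximately doubly stochastic relaxation of X (log-space Sinkhorn iterations)."""
    logits = X
    for _ in range(n_iter):
        logits = logits - torch.logsumexp(logits, dim=1, keepdim=True)  # normalize rows
        logits = logits - torch.logsumexp(logits, dim=0, keepdim=True)  # normalize columns
    return logits.exp()

def matching(X):
    """Hard permutation matrix maximizing the total score of X (matching operator)."""
    rows, cols = linear_sum_assignment(X.detach().cpu().numpy(), maximize=True)
    P = torch.zeros_like(X)
    P[rows, cols] = 1.0
    return P

def hard_with_soft_grad(X):
    """Forward pass: hard permutation. Backward pass: gradients of the Sinkhorn relaxation."""
    soft = sinkhorn(X)
    return matching(X) + soft - soft.detach()

def optimize(Xs, paired_mlm_loss, n_steps=400, noise_scale=0.1):
    """One optimization run over a list of N_k x N_k parameterization matrices."""
    params = [torch.nn.Parameter(X.clone()) for X in Xs]
    opt = torch.optim.Adadelta(params, lr=9.0, weight_decay=0.1)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.8, patience=20)
    for _ in range(n_steps):
        perms = []
        for X in params:
            u = torch.rand_like(X).clamp(1e-9, 1 - 1e-9)
            gumbel = -torch.log(-torch.log(u))                     # standard Gumbel noise
            noisy = X + noise_scale * X.detach().std() * gumbel    # scale tied to std of X
            perms.append(hard_with_soft_grad(noisy))
        loss = paired_mlm_loss(perms)   # shuffle MSA A with perms, mask, score (placeholder)
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step(loss.item())
    return [matching(X.detach()) for X in params]
```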
**Exploring the loss landscape through multiple initializations.** We observe that the initial choice of the parameterization set \(\{X_{k}\}_{k=1,\dots,K}\) strongly impacts results. Slightly different initial conditions for \(X_{k}\) lead to very different final permutation matrices. Furthermore, we observe a fast decrease in the loss when the \(X_{k}\) are initialized to be exactly zero (our use of Gumbel noise means that we break ties randomly when computing the permutation matrices \(M(X_{k})\); if noise is not used, similar results can be achieved by initializing \(X_{k}\) with entries very close to zero). Thus, we can cheaply probe different paths in the loss landscape by performing several short runs using zero-initialized parameterization matrices \(X_{k}\). In practice, we use \(20\)
different such short runs each consisting of 20 gradient steps. Then, we average all the final parameterizations together to warm-start a longer run made up of 400 gradient steps.
**Result and confidence.** We observe that, even though the loss generally converges to a minimum average value during our optimization runs, there are often several distinct hard permutations associated to the smallest loss values. This may indicate a flattening of the loss landscape relative to the inherent fluctuations in the MLM loss, and/or the existence of multiple local minima. To extract a single matching per species from one of our runs (or indeed from several runs, see "Improving precision: MRA and IPA"), we average the hard permutation matrices associated to the \(q\) lowest losses, and evaluate the matching operator [Eq. (S2)] on the resulting averages. We find final precision-100 figures to be quite robust to the choice of \(q\). On the other hand, for individual (warm-started) runs as described in "Exploring the loss landscape through multiple initializations", precision-10 benefits from setting \(q\) to its maximum possible value of 400.
Furthermore, we propose using each entry in the averaged permutation matrices as an indicator of the model's confidence in the matching of the corresponding pair of sequences. Indeed, pairs that are present in most low-loss configurations are presumably essential for the optimization process and are captured most of the time, pushing their confidence value close to 1. Conversely, non-interacting pairs are in most cases associated to higher losses and therefore appear sporadically, obtaining confidences close to zero. Accordingly, we refer to the averaged hard permutations used to extract a single matching per species as "confidence matrices", and to the final in-species matchings as "consensus permutations".
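A sketch of this aggregation step is given below, assuming the hard permutations and their associated losses have already been collected during the run(s); the helper names are ours.

```python
import torch
from scipy.optimize import linear_sum_assignment

def consensus_and_confidence(hard_perms, losses, q=400):
    """Average the hard permutations associated with the q lowest losses into a confidence
    matrix, then extract a consensus permutation from it with the matching operator."""
    order = sorted(range(len(losses)), key=lambda i: losses[i])[:q]
    confidence = torch.stack([hard_perms[i] for i in order]).mean(dim=0)
    rows, cols = linear_sum_assignment(confidence.numpy(), maximize=True)
    consensus = torch.zeros_like(confidence)
    consensus[rows, cols] = 1.0
    # Each entry of `confidence` is read as the model's confidence in that candidate pair.
    return confidence, consensus
```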
**Improving precision: MRA and IPA.** We propose two methods for improving precision further. In the first method, which we call Multi-Run Aggregation (MRA), we perform \(N_{\text{runs}}\) independent optimization runs for each interacting MSA. Then, we collect together the hard permutations independently obtained from each run, and aggregate the \(q=400\) lowest-loss permutations from this larger collection to obtain more reliable confidence matrices and hard permutations.
The second method is an iterative procedure analogous to the Iterative Pairing Algorithm (IPA) of Refs. [14, 47], and named after it. The idea is to gradually add pairs with high confidence as positive examples. Assuming a paired MSA containing a single species for notational simplicity, the \(n\)-th iteration (starting at \(n=1\)) involves the following steps:
1. Perform an optimization run and extract from it a confidence matrix \(C^{(n)}\) as described in "Result and confidence", using the currently available positive examples;
2. Compute the moving average \(\tilde{C}^{(n)}=\text{mean}(C^{(n)},\tilde{C}^{(n-1)},\dots,\tilde{C}^{(1)})\) (where \(\tilde{C}^{(1)}\equiv C^{(1)}\));
3. Define candidate matchings via the consensus permutation \(M^{(n)}=M(\tilde{C}^{(n)})\);
4. Repeat Steps 1-3 a maximum of 3 times, until the average MLM loss estimated using \(M^{(n)}\), and 200 random masks, is lower or statistically insignificantly higher3 than what could have been obtained using \(M^{(n-1)}\) and the same positive examples as in Step 1;
5. If Step 4 fails, set \(\tilde{C}^{(n)}=\tilde{C}^{(n-1)}\) and \(M^{(n)}=M(\tilde{C}^{(n-1)})\) (but removing rows and columns corresponding to the positive examples added at iteration \(n-1\)); Footnote 3: Based on a two-sample T-test: we say “statistically insignificantly” when \(p\geq 0.95\), and “statistically significantly” when \(p<0.05\).
6. Check that the average MLM loss estimated using \(M^{(n)}\) and 200 random masks, but only regarding as positive examples those available at the beginning of iteration \(n-1\), is not statistically significantly higher3 than the average MLM loss estimated using \(M^{(n-1)}\) and those same positive examples;
7. If Step 6 fails, terminate the IPA. Otherwise, pick the top 5 pairs according to \(\tilde{C}^{(n)}\), promote them to positive examples in all subsequent iterations, and remove them from the portion of the paired MSA to be optimized.
If several species are present, they are optimized together (see "Construction of an appropriate MLM loss") and confidence values from all species are used to select the top 5 pairs.
### Pairing based on a single-sequence language model
To assess whether a single-sequence model is able to solve the paralog matching problem, we consider the 650M-parameter version of the model ESM-2 [5]. We score candidate paired sequences using the MLM loss in Eq. (S1). In contrast with MSA Transformer, the input of the model is not paired MSAs but single paired sequences. Therefore, it is sufficient to individually score each possible pair within each species, without needing to consider all permutations. Denoting by \(N_{k}\) the number of sequences from each family in species \(k\), the number of possible pairs is \(N_{k}^{2}\) while the number of permutations is \(N_{k}!\). This complexity reduction allows us to evaluate the scores of all possible pairs. This removes the need to backpropagate the loss onto the permutation matrix. Accordingly, this method is much faster, since we only need to use the model in evaluation mode, without gradient backpropagation.
For each candidate paired sequence, we evaluate the average of the MLM losses computed over multiple random masks (with masking probability \(p\)). Once the average MLM losses are computed for all the \(N_{k}^{2}\) pairs, we compute the optimal one-to-one matching by using standard algorithms for linear assignment problems [73] on the \(N_{k}\times N_{k}\) matrix containing all the losses.
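A minimal sketch of this single-sequence baseline for one species is given below; `pair_loss` is a placeholder for the masked-LM loss of ESM-2 averaged over random masks (we do not reproduce the ESM-2 API), and the assignment is solved with SciPy's linear assignment solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_species(seqs_a, seqs_b, pair_loss):
    """Pair the paralogs of one species by scoring every (a, b) candidate pair and
    solving the resulting linear assignment problem.

    seqs_a, seqs_b: equal-length lists of sequences from families A and B (after padding).
    pair_loss: callable (seq_a, seq_b) -> averaged masked-LM loss (placeholder for ESM-2).
    """
    cost = np.array([[pair_loss(a, b) for b in seqs_b] for a in seqs_a])  # N_k x N_k losses
    rows, cols = linear_sum_assignment(cost)   # minimize the total loss
    return list(zip(rows, cols))               # matched index pairs (i in A, j in B)
```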
### Assessing the impact of pairing on AFM structure prediction
**Pairing methods employed in AFM and ColabFold.** When presented with a set of query chains, AFM retrieves homologs of each of the chains by running JackHMMER [74] on UniProt, and further homology searches on other databases [7]. UniProt hits are partitioned into species4 and ranked within each species by decreasing Hamming distance to the relevant query sequence. A paired MSA is obtained by matching equal-rank hits. Sequences left unpaired are discarded. In addition, AFM produces "block MSAs" constructed by "pairing" hits from the remaining databases with padding sequences of gaps. The input for AFM comprises the paired MSA and the block MSAs.
Footnote 4: While the official AFM implementation in [https://github.com/deepmind/alphafold](https://github.com/deepmind/alphafold) uses UniProt “entry names” to define species, when possible we instead use NCBI TaxIDs (via UniProt mappings to NCBI tax IDs, retrieved on 17 Dec 2022, corresponding to UniProt Knowledgebase release 2022_04), which are more accurate.
While sharing the same architecture and weights as AFM, ColabFold retrieves homologs using MMseqs2 [75] on ColabFoldDB [8]. In each species, hits are sorted by increasing E-value, and the best hits are paired [8, 9, 22, 23, 24, 25]. Thus, contrary to the default AFM pipeline, the paired MSA in ColabFold contains at most one sequence pair per species for a heterodimer. Because the databases and homology search methods used by ColabFold differ from those used by AFM, a direct comparison does not allow one to isolate the effect of their different pairing schemes. Therefore, we employed the ColabFold pairing method starting from the sequences that are paired in the default AFM pipeline.
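The two baseline pairing schemes can be sketched as follows for a single species; the Hamming-distance ranking is our own illustrative stand-in for the similarity and E-value criteria used by the actual AFM and ColabFold pipelines.

```python
def hamming(seq, query):
    """Normalized Hamming distance between an aligned hit and the query (equal lengths)."""
    return sum(a != b for a, b in zip(seq, query)) / len(query)

def afm_default_pairing(hits_a, hits_b, query_a, query_b):
    """AFM-style pairing: rank each chain's hits by Hamming distance to its query (the same
    ordering criterion on both sides), then match equal ranks; leftovers are discarded."""
    ranked_a = sorted(hits_a, key=lambda s: hamming(s, query_a), reverse=True)
    ranked_b = sorted(hits_b, key=lambda s: hamming(s, query_b), reverse=True)
    return list(zip(ranked_a, ranked_b))   # zip truncates to the shorter list

def ortholog_pairing(hits_a, hits_b, query_a, query_b):
    """ColabFold-style pairing: keep only the single best hit per chain in this species."""
    best_a = min(hits_a, key=lambda s: hamming(s, query_a))
    best_b = min(hits_b, key=lambda s: hamming(s, query_b))
    return [(best_a, best_b)]
```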
**Pairing using DiffPALM.** To assess the impact of DiffPALM on complex structure prediction by AFM, we started from the sequences that are paired in the default AFM pipeline. We left out species with large imbalances between the number of sequences in the two families considered. Specifically, if the ratio of the larger to the smaller of these two numbers exceeds an ad-hoc "maximum size ratio" MSR (see Table S1), if there is only one sequence in both
families, or if there are more than 50 sequences in at least one family, then we do not attempt pairing via DiffPALM, and revert to default AFM pairing. When the full MSA to be paired with DiffPALM is sufficiently deep and/or long, optimizing it as a whole is not possible due to GPU memory limitations. Instead, we partition it into several small enough sub-MSAs, which we optimize independently. We always use the query sequences as positive examples.
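The species-level filter described above can be sketched as follows; the function name and the boolean return convention are ours.

```python
def diffpalm_eligible(n_a, n_b, msr, max_depth=50):
    """Decide whether DiffPALM pairing is attempted for one species; otherwise the
    default AFM pairing is kept for that species."""
    if n_a == 0 or n_b == 0:
        return False                                  # species missing from one family
    if max(n_a, n_b) / min(n_a, n_b) > msr:
        return False                                  # families too unbalanced
    if n_a == 1 and n_b == 1:
        return False                                  # a single sequence in both families
    if n_a > max_depth or n_b > max_depth:
        return False                                  # too deep to optimize on one GPU
    return True
```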
## Acknowledgments
The authors thank Sergey Ovchinnikov for valuable feedback on a preliminary version of this manuscript, and Alex Hawkins-Hooker for interesting conversations. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 851173, to A.-F. B.).
## Data availability statement
A Python implementation of DiffPALM is freely available in our GitHub repository: [https://github.com/Bitbol-Lab/DiffPALM](https://github.com/Bitbol-Lab/DiffPALM).
|
2303.05453
|
Personalisation within bounds: A risk taxonomy and policy framework for
the alignment of large language models with personalised feedback
|
Large language models (LLMs) are used to generate content for a wide range of
tasks, and are set to reach a growing audience in coming years due to
integration in product interfaces like ChatGPT or search engines like Bing.
This intensifies the need to ensure that models are aligned with human
preferences and do not produce unsafe, inaccurate or toxic outputs. While
alignment techniques like reinforcement learning with human feedback (RLHF) and
red-teaming can mitigate some safety concerns and improve model capabilities,
it is unlikely that an aggregate fine-tuning process can adequately represent
the full range of users' preferences and values. Different people may
legitimately disagree on their preferences for language and conversational
norms, as well as on values or ideologies which guide their communication.
Personalising LLMs through micro-level preference learning processes may result
in models that are better aligned with each user. However, there are several
normative challenges in defining the bounds of a societally-acceptable and safe
degree of personalisation. In this paper, we ask how, and in what ways, LLMs
should be personalised. First, we review literature on current paradigms for
aligning LLMs with human feedback, and identify issues including (i) a lack of
clarity regarding what alignment means; (ii) a tendency of technology providers
to prescribe definitions of inherently subjective preferences and values; and
(iii) a 'tyranny of the crowdworker', exacerbated by a lack of documentation in
who we are really aligning to. Second, we present a taxonomy of benefits and
risks associated with personalised LLMs, for individuals and society at large.
Finally, we propose a three-tiered policy framework that allows users to
experience the benefits of personalised alignment, while restraining unsafe and
undesirable LLM-behaviours within (supra-)national and organisational bounds.
|
Hannah Rose Kirk, Bertie Vidgen, Paul Röttger, Scott A. Hale
|
2023-03-09T17:52:07Z
|
http://arxiv.org/abs/2303.05453v1
|
_Personalisation within bounds_: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback
###### Abstract
Large language models (LLMs) are used to generate content for an increasingly wide range of tasks, and are set to reach a growing audience in coming years due to integration in product interfaces like ChatGPT or search engines like Bing. This intensifies the need to ensure that models are aligned with human preferences and do not produce unsafe, inaccurate or toxic outputs. While alignment techniques like reinforcement learning with human feedback (RLHF) and red-teaming can mitigate some safety concerns and improve model capabilities, it is unlikely that an aggregate fine-tuning process can adequately represent the full range of users' preferences and values. Different people may legitimately disagree on their preferences for language and conversational norms, as well as on values or ideologies which guide their communication. _Personalising_ LLMs through micro-level preference learning processes may result in models that are better aligned with each user. However, there are several normative challenges in defining the bounds of a societally-acceptable and safe degree of personalisation. In this paper, we ask how, and in what ways, LLMs should be personalised. First, we review literature on current paradigms for aligning LLMs with human feedback, and identify issues including (i) a lack of clarity regarding what alignment means; (ii) a tendency of technology providers to prescribe definitions of inherently subjective preferences and values; and (iii) a "tyranny of the crowdworker", exacerbated by a lack of documentation in who we are really aligning to. Second, we present a taxonomy of benefits and risks associated with personalised LLMs, for individuals and society at large. Finally, we propose a three-tiered policy framework that allows users to experience the benefits of personalised alignment, while restraining unsafe and undesirable LLM-behaviours within (supra-)national and organisational bounds.
## 1 Introduction
The capabilities of large language models (LLMs) to complete tasks and follow natural language instructions has substantially improved in recent years [177]. LLMs are increasingly being embedded in a wide range of applications and their outputs consumed by an ever-wider and more diverse audience because of their improved performance and ease of use. ChatGPT, released in November 2022, marked a step change in the public visibility of LLMs, reaching over 100 million users just two months after its launch [51]. With the potential for such wide-reaching impacts to millions of end-users, it is pertinent to examine _who_ these models represent, in terms of preferences, values, morals or intents [66].
Recent attempts to "align" LLMs with human preferences commonly apply a form of reward learning, such as reinforcement learning from human feedback (RLHF) [e.g. 177, 165, 16, 13, 247]. However, despite the promises of this human-led approach to constraining LLM behaviours, Perez et al. [185] find evidence of an _inverse scaling law_ - whereby more RLHF training degrades pre-trained representations, resulting in a model that has more polarised views on issues such as gun rights or immigration, is skewed liberal over conservative, and subscribes to some religions more than others. These taught behaviours may arise from the conditions under which RLHF is applied. Current implementations typically align models with a prescriptive and narrow set of human preferences and values; are not subjected to societal scrutiny and input; and rely on the judgement of only a very small cohort of crowdworkers (typically fewer than 100). Thus, many of these current attempts to "align" LLMs with human preferences or values are instead pursuing a form of _implicit personalisation_ - subject to the specifications of technology designers and the "tyranny of the crowdworker" (see SS2.1).
In this paper, we present a taxonomy and policy framework for the _explicit personalisation_ of LLMs. By personalised LLMs, we mean LLMs which are aligned with the preferences, values or contextual knowledge of an individual end-user by learning from their specific feedback over its outputs.2 While we avoid speculating about specific technical instantiations of personalisation, we primarily envisage personalised LLMs as branches of a single model, analogous to recommender systems, where the feature representation and outputs on each branch are derived from some user embedding. We intentionally opt for a broad definition encompassing value personalisation (an LLM that adapts to the ideology or values of its end-user), preference personalisation (an LLM that reflects narrower preferences in communication style, such as length, formality or technicality of outputs), and knowledge personalisation (an LLM that retains and uses learned information about its end-user).
Footnote 2: We wish to avoid focusing too much on recent model releases from industry labs (OpenAI’s ChatGPT [173], Anthropic’s Claude [78], Microsoft’s New Bing [151], or Google’s Bard [187]). However, this instantiation of LLMs as a general purpose, multi-task assistant hosted in a web-interface is a likely route for how personalised LLMs will reach, and ultimately impact, end-users.
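To make this envisaged mechanism concrete, the sketch below shows one minimal way a shared model could be conditioned on a per-user embedding, analogous to user vectors in recommender systems. It is an illustrative assumption rather than a description of any deployed system, and all class and parameter names are ours.

```python
# A minimal sketch (not a deployed system): a shared language model backbone whose
# next-token logits are shifted by a learned per-user embedding.
import torch
import torch.nn as nn

class UserConditionedLMHead(nn.Module):
    def __init__(self, hidden_size: int, vocab_size: int, num_users: int):
        super().__init__()
        self.user_embeddings = nn.Embedding(num_users, hidden_size)  # one learned vector per user
        self.lm_head = nn.Linear(hidden_size, vocab_size)

    def forward(self, hidden_states: torch.Tensor, user_id: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) from a shared backbone
        # user_id: (batch,) integer ids; the user vector shifts every position's representation
        user_vec = self.user_embeddings(user_id).unsqueeze(1)   # (batch, 1, hidden_size)
        return self.lm_head(hidden_states + user_vec)           # per-user next-token logits

head = UserConditionedLMHead(hidden_size=16, vocab_size=100, num_users=2)
states = torch.randn(2, 4, 16)
logits = head(states, torch.tensor([0, 1]))                     # two users, identical backbone states
print(logits.shape)                                             # torch.Size([2, 4, 100])
```

In such a design all users share the backbone, while the small per-user vector carries the preferences, values or contextual knowledge discussed throughout this paper.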
We argue that personalisation is a likely next step in the development journey of LLMs (see SS2.2) but has not yet been fully realised at scale. Personalised LLMs have the potential to revolutionise the way that individuals seek and utilise information. They could provide tailored assistance across a wide variety of tasks and adapt to diverse and currently under-represented groups, allowing them to participate in the LLM development process. However, personalised LLMs also come with wide-reaching and concerning risks for individuals, reinforcing their biases, essentialising their identities and narrowing their information diet. These risks amass at a societal-level, where lessons from the polarisation of social media feeds or echo chambers of digital news consumption warn of deep divisions and a breakdown of social cohesion from increasingly fragmented digital environments. Some risks are inherited from LLMs [232, 30, 24] and AI systems [203] more generally. Other risks have analogies in personalised content moderation [75] or recommender systems [154, 26]. However, it has not been documented how personalised LLMs could be the 'worst of both these worlds' - exacerbating and reinforcing micro-level biases at an unprecedented macro-level scale. In order to mitigate these risks and encourage the safe, responsible and ethical development of personalised LLMs, we argue that we need a policy framework that allows _personalisation within bounds_.
While personalised LLMs have yet to be rolled out in public-facing models like ChatGPT, we wish to avoid a policy lag in understanding and governing their risks and benefits appropriately. Our main contributions are:
1. **A taxonomy of the risks and benefits from personalised LLMs** (SS3): We draw on previous literature documenting the impacts of AI systems, LLMs and other personalised internet technologies to scope the landscape of risks and benefits at an individual and societal level.
2. **A three-tiered framework for governing personalised LLMs** (SS4): We introduce a framework for governing personalised LLMs _within bounds_, which includes immutable restrictions at the national or supra-national level (Tier One), optional restrictions or requirements from the technology provider (Tier Two) and tailored requirements from the end-user (Tier Three).
Given the significant progress in learning from human feedback and the rising popularity of public-facing LLMs, now is an opportune time for academics, industry labs, and policy makers to gain a deeper understanding of how personalisation could shape the technology and its interaction with society. Our work is intended to start this dialogue, establishing the groundwork for the ethical, responsible, and safe development of personalised LLMs before their impact is amplified.
## 2 Background
### The Question of "Alignment"
As AI systems get larger and more powerful, they will be applied to a wider array of human tasks, including those which are too complex to directly oversee [128; 31] or to define clear optimisation goals for [77; 236; 247; 183]. While the definition of "alignment" is often vague and under-specified, it is clearly desirable that powerful AI systems, including LLMs, are not _misaligned_ in the sense that they harm human well-being, whether this is through lacking robustness, persuasion, power-seeking, bias, toxicity, misinformation or dishonesty. Extensive and long-reaching bodies of work aim to tackle these issues of _undesirable_ behaviours3 but there is comparatively less focus on who decides what are _desirable_ behaviours within the bounds of safety.
Footnote 3: For example, there are extensive works documenting LLMs on fairness and bias [2; 115; 143; 162; 166; 191; 205; 222]; truthfulness, uncertainty, or hallucination [106; 102; 103]; robustness [208; 112]; privacy [35]; and toxicity [72; 171].
The question of how "aligned" LLMs are is unresolved due to normative obstacles in defining what alignment means, what the target of alignment is (for example, values, preferences or intent) and who we are aligning to [128; 65; 66; 111]. Furthermore, alignment is a technical challenge which is not solved by scaling parameter counts [135; 130; 177; 89; 136; 89]. To align the language modelling objective with human preferences, many recent works have fine-tuned LLMs by reinforcing human rewards or feedback [177; 247; 16; 13; 209; 165; 168]; defined rules for LLMs to learn from [17; 77]; and analysed how they make moral or ethical decisions [104; 86; 248; 103].
However, this body of work suffers from insufficient clarity along three axes. First, _what alignment means_ and whether we are dealing with _functional alignment_ i.e., seeking improvement in general model capabilities or instruction-following and avoiding gaming of misspecified objectives [65; 111; 128; 177] - versus _worldview_ or _social value alignment_ i.e., embedding some general notion of "shared" human values and morals [86; 8; 104]. Second, _what_ is being aligned - there are subtle differences between aligning models to instructions, intentions, revealed or ideal preferences, and values [65]. Some authors claim a degree of universality in _morals_ or _values_[135; 131; 104; 248; 64; 94; 136; 192]; others target _preferences_ on attributes such as quality, usefulness or helpfulness of an LLM's output which arguably have limited standardisation across individuals [209; 247; 156; 147; 100; 52]. Despite this lacking standardisation, many approaches enforce a 'prescriptive paradigm' in data annotation [198] by explicitly defining in detailed guidelines what counts as a "good" model output. Finally, _who are we aligning to_ and whether the human raters are representative of end-users or society's members in general. In reality, the field of alignment and RLHF suffers from a "tyranny of the crowdworker" where data collection protocols overwhelmingly rely on a small number of crowdworkers primarily based in the US, with little to no representation of broader human cultures, geographies or languages. These sample biases are exacerbated by a lack of dataset or labour force documentation: while some papers can be commended for reporting full demographics and acknowledging the specificity of their crowdworkers [e.g., 77; 209; 177; 16], others provide little or no details [e.g., 18; 165; 247; 236; 142]. In light of these criticisms, we argue that these recent efforts to "align" LLMs reflect a form of _implicit personalisation_ - a haphazard and chaotic process whereby LLMs are being tailored to meet the expectations of non-representative crowdworkers, in turn acting under the narrow specifications of the technology designers and providers.
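As a concrete illustration of the reward-learning step that much of the work above shares, the sketch below implements a standard pairwise (Bradley-Terry style) preference loss. It is a generic textbook formulation under our own naming, not the exact objective of any specific paper cited here.

```python
# A minimal sketch of a pairwise preference loss: the reward model is trained so that
# the rater-preferred completion scores higher than the rejected one.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_chosen - r_rejected), averaged over the batch
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Whoever supplies the (chosen, rejected) labels defines what the learned reward
# treats as "good" - with a small crowdworker pool, it encodes their preferences.
r_chosen = torch.tensor([1.2, 0.3, 0.8])
r_rejected = torch.tensor([0.1, 0.5, -0.2])
print(preference_loss(r_chosen, r_rejected))
```

The point of showing it is not the mathematics but the data dependency: the crowdworkers who produce the preference pairs, and the guidelines they annotate under, determine what the resulting "aligned" behaviour looks like.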
### From Implicit to Explicit Personalisation
Given the diversity of human values and preferences, and the importance of pragmatics for contextual understanding in human-human interactions [66], the aim to fully align models across human populations may be a futile one. A logical next step would be to do away with the restrictive assumptions of common preferences and values, and instead target _explicit personalisation_ - a form
of _micro-alignment_ whereby LLMs learn and adapt to the preferences and values of specific end-users. We believe this development is on the horizon and should be expected for five reasons:
1. **Personalisation in internet technologies is not new**: There are many examples of internet technologies that are heavily personalised to end-users, including search, content moderation, social media newsfeeds and product recommender systems. Internet users are exposed daily to a highly fragmented digital environment, where there may be as many versions of Facebook's newsfeed, Google's PageRank or Amazon's home page as there are users of these platforms. In the future, there may be as many versions of ChatGPT as its users.
2. **Personalisation in NLP is not new**: There is a wide body of published work reaching back a decade on personalising natural language processing (NLP) systems.4 For example, a title keyword search for 'personali' or 'personaliz' returns 124 articles from the ACL Anthology and a further 10 from the arXiv Computation and Language (cs.CL) subclass. These systems cover a wide range of tasks including dialogue [127, 157, 36, 39, 41, 109, 133, 146, 149, 206, 238, 244], recipe or diet generation [147, 87, 159], summarisation [215, 240], machine translation [156, 153, 194, 237], QA [137, 193], search and information retrieval [4, 40, 59, 70, 245], sentiment analysis [80, 155, 226], domain classification [129, 114, 113], entity resolution [132], and aggression or abuse detection [107, 108]; and are applied to a number of societal domains such as education [118, 163, 241], medicine [3, 15, 225, 235] and news consumption [58, 10, 61, 190]. Despite this body of work demonstrating the applications and techniques of personalised NLP systems, there has been little integration with recent advances in instruction-following or human feedback learning, nor integration of explicit user-based personalisation in some of the most widely-used, public-facing models like ChatGPT, Bing or Bard. Footnote 4: Extensively summarising this body of work is beyond the scope of this article. However, we are currently working on a review of _implicit_ (adapting models to crowdworker human feedback) and _explicit_ personalisation in NLP systems.
3. **The technical apparatus for effective feedback learning exists**: A growing body of work applies preference reward modelling to effectively condition LLM behaviours [e.g 177, 16, 77, 247, 209, 13, 165, 142, 236, 216]. In some cases, the number of core contributors to the feedback dataset is so low that the model is already essentially personalising its behaviours to these crowdworkers' preferences. For example, Nakano et al. [165] report the top 5 contractors account for 50% of their data, and for Bai et al. [16] roughly 20 workers contribute 80%. Most closely relating to personalisation, Bakker et al. [18] propose a LLM which can summarise multiple opinions on moral, social or political issues and output a consensus. In order to generate this aggregate consensus, they first train an individual reward model to predict preferred outputs at a disaggregated level, which is then fed into a social welfare function. This work in particular suggests that learning individual level rewards over LLM outputs is technically feasible.
4. **Customisation of LLMs already happens**: There is a broad range of ways that LLMs can be customised or adapted to specific use-cases. The paradigm upon which LLMs are built is designed for adaption via transfer learning - where models are first pre-trained, then adapted via fine-tuning or in-context demonstrations for a specific task [161]. Some recent work suggests LLMs require no additional training to 'role-play' as different individuals, adopting their worldview [9], mirroring their play in economic games [91, 5] or predicting their voting preferences [11]. The HuggingFace hub5 is particularly convincing evidence in the demand for customisation, acting as a distributed "marketplace for LLMs". It hosts over 140,000 different models, including versions of pre-trained models adapted to a variety of application domains such as medicine, legal or content moderation.6 Adapting models to different languages is also an implicit form of customisation to national context, where a chatbot trained on Chinese internet data (such as the _Diamante_ system of Lu et al. [142]) is likely to be more adapted to the preferences of Chinese end-users, not just in language but in communication conventions, norms and cultural values. There have even been recent calls for explicit national LLMs - with the Alan Turing Institute proposing to build a "sovereign
LLM" for the United Kingdom ("_CharGB_") [217]. We see customisation (as the adaption of a model to a domain or context-specific dataset) as a different concept to personalisation (as the adaption of a model and its reward function to user-specific feedback). Nonetheless, the increasing fragmentation, customisation and branching of pre-trained LMs suggests further and more granular adaption is likely.
5. **Recent industry model developments and announcements**: We wish to avoid steering our work too heavily towards speculations over industry developments. However, it is a realistic assumption that many of the public-facing impacts of AI systems in the coming years will be driven by development and product decisions of Big Tech, in the same way that the impact of social media has been shaped by the overall design choices and content moderation decisions of platforms [74]. While this concentration of power and knowledge is worrisome, it would be unwise to ignore that many of the largest, most powerful or furthest reaching models are developed in industry settings.7 Furthermore, recent announcements from OpenAI explicitly discuss the issue of "_how should AI systems behave, and who should decide?_" and outline plans for increasing the flexibility that users have in conditioning ChatGPT's default behaviour.8 The far-right platform Gab has already advertised its own text-to-image model9 and has voiced desires to train a "Christian LLM".10 The pace of these developments towards increasingly fragmented and personalised AI systems is concerning, especially given the lag between technology change and policy or governance attention towards regulating private companies.11 Footnote 7: Examining a collection of recent papers on embedding or evaluating human value and preference in LLMs, many are fully or partially developed in industry, including DeepMind [18, 77], MetaAI [20, 248], Anthropic [17, 16, 13], Baidu [142], OpenAI [165, 236, 247, 209, 177, 239], Microsoft [246, 86, 68] and Google [134, 216].
Footnote 8: [https://openai.com/blog/how-should-ai-systems-behave](https://openai.com/blog/how-should-ai-systems-behave)
Footnote 9: [https://news.gab.com/2023/02/how-to-use-gabby-the-ai-image-generator-by-gab-com/](https://news.gab.com/2023/02/how-to-use-gabby-the-ai-image-generator-by-gab-com/)
Footnote 10: [https://news.gab.com/2023/01/christians-must-enter-the-ai-arms-race/](https://news.gab.com/2023/01/christians-must-enter-the-ai-arms-race/)
Footnote 11: In the UK, the Online Safety Bill [181], which seeks to regulate social media platforms, still hasn't been passed, despite the advent of social media platforms over a decade ago.
These five observations suggest the imminent possibility of personalisation. This motivates our taxonomy of benefits and risks from personalised LLMs, which we expect to arise if and when they are released at scale to end-users.
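Before turning to the taxonomy, observation 3 can be made more concrete. The sketch below scores a single candidate output with one reward per user and then combines those rewards with a social welfare function; the utilitarian and Rawlsian choices are standard examples under illustrative names, not the specific aggregation used by Bakker et al. [18].

```python
# A minimal sketch of disaggregated, per-user rewards combined by a social welfare function.
from statistics import mean
from typing import Callable, Dict, List

def utilitarian(rewards: List[float]) -> float:
    return mean(rewards)   # maximise average welfare across users

def rawlsian(rewards: List[float]) -> float:
    return min(rewards)    # maximise the welfare of the worst-off user

def consensus_score(per_user_rewards: Dict[str, float],
                    welfare: Callable[[List[float]], float]) -> float:
    return welfare(list(per_user_rewards.values()))

candidate_rewards = {"user_a": 0.9, "user_b": 0.2, "user_c": 0.7}
print(consensus_score(candidate_rewards, utilitarian))  # roughly 0.6
print(consensus_score(candidate_rewards, rawlsian))     # 0.2
```

Personalisation, in this framing, amounts to keeping the per-user rewards separate rather than collapsing them into a single welfare score.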
## 3 A Taxonomy of the Benefits and Risks from Personalised LLMs
We consider the effects of personalised LLMs at two levels: individual and societal. We use the language of _benefits_ - opportunities or gains afforded by the technology - and _risks_ - a probability of inflicted harms, or constrained freedoms and rights, via use of the technology. The taxonomy is summarised in Tab. 1 and described in text at the individual level (SS3.1) and the societal level (SS3.2). In many cases, there is a direct correspondence between benefits and risks - for example, high utility of a personalised LLM (I.B.2) may also cause addiction or over-reliance (I.R.2); or more empathetic language agents (I.B.4) may create higher risks of anthropomorphism (I.R.5). Where relevant, we present these pairings in Tab. 1. Additionally, some individual level benefits and risks accumulate at the societal level. For example, the reinforcement of individual biases (I.R.3) poses a negative externality in the polarisation of societies (S.R.2); or improved individual utility (I.B.2) and efficiency (I.B.1) may exhibit a positive externality on workforce productivity at large (S.B.4).
We took four main steps to construct this taxonomy:
1. Reviewing existing taxonomies on the risks of LLMs (Weidinger et al. [232]), and AI systems more generally (Shelby et al. [203]).
2. Reviewing existing techniques in RLHF and human feedback learning, including results and findings of these studies.
3. Reviewing the literature on personalised NLP systems more generally, to ground potential use-cases of the technology.
4. Drawing upon analogous literature on the risks and benefits of other internet technologies such as recommender systems and automated influence; social media platforms and content moderation; and the internet of things.

**Table 1: Taxonomy of benefits and risks from personalised large language models.**

_Individual-level benefits:_

* **I.B.1 Efficiency**: increased ease and speed with which end-users can find their desired information or complete a task, with fewer prompts or inputs to the model.
* **I.B.2 Utility**: increased usefulness of a model that better meets the needs of its end-user from individualised intent prediction as well as personalised preferences, knowledge and values in outputs.
* **I.B.3 Autonomy**: increased positive freedom of choice and control over how the model behaves with personal data, promoting a sense of ownership and self-determination over the technology.
* **I.B.4 Empathy and Companionship**: increased perceived emotional connection, leading to improved acceptance and trust of the system.

_Individual-level risks:_

* **I.R.1 Effort**: increased input cost and volunteer labour from end-users providing the feedback needed to personalise outputs.
* **I.R.2 Addiction and Over-reliance**: increased risk of dependency, attention commoditisation and technology addiction.
* **I.R.3 Homogenisation and Bias Reinforcement**: increased amplification of confirmation and selection biases, leading to epistemic harms.
* **I.R.4 Essentialism and Profiling**: increased risk of algorithmic profiling and assumptions based on demographic or geographic information, leading to the non-consensual categorisation of people.
* **I.R.5 Anthropomorphism**: increased tendency to ascribe human-like traits, reveal sensitive information or form unhealthy attachments.
* **I.R.6 Privacy**: increased quantity of collected personal information, leading to risks of privacy infringement, particularly if the model operates with sensitive information or encourages information disclosure.

_Societal-level benefits:_

* **S.B.1 Inclusion and Accessibility**: improved adaptation to the communication needs of marginalised communities, including catering to those with disabilities or who speak dialects or languages that are deprioritised by current LLMs.
* **S.B.2 Diversity and Representation**: improved representation by tailoring outputs to diverse perspectives and avoidance of cultural hegemony by not prioritising certain values over others.
* **S.B.3 Democratisation and Participation**: increased stakeholder involvement from diverse backgrounds in shaping behaviours, allowing for a more participatory and inclusive approach to development.
* **S.B.4 Labour Productivity**: improved workforce productivity from positive externalities of effective and efficient task assistance.

_Societal-level risks:_

* **S.R.1 Access Disparities**; **S.R.2 Polarisation**; **S.R.3 Malicious Use**; **S.R.4 Labour Displacement**; **S.R.5 Environmental Harms** (each described in the text below).
Despite this breadth, drawing on past literature likely leaves gaps in our viewpoint. Furthermore, Gibson [73]'s theory of affordances, often applied to study the impact of technological systems including AI chatbots [210, 101], argues that the interactions between an agent and their environment condition the possibilities and constraints for action. Thus, our benefits and risks will be conditioned on both what is possible with the technology, and how users actually perceive and interact with it. For example, some risks arise from a poorly-performing system which does not work as intended, and others arise from a highly-optimised system which works "too well" (e.g., _Addiction and Overreliance_ I.R.2). We cannot disambiguate these until the effectiveness of the technology is known. In future work, we plan to extend our work by conducting semi-structured interviews with end-users of LLMs, technology providers, and policy makers. Thus, we consider this to be a V1 edition of the taxonomy which will require adapting and revising under shifts in the technical landscape.
### Individual Level
#### 3.1.1 Benefits
#### I.B.1. _Efficiency_
Personalised LLMs may increase efficiency in finding information or completing a task, with fewer prompts or inputs to the model. This "prompt efficiency" is analogous to "query efficiency" or "task completion speed" [14] in web search and information retrieval, where increased ranking accuracy in search results [54], via implementations of personalised algorithms like PageRank [178], improves the efficiency and reduces the cognitive burden of trawling through irrelevant information. Prior works have applied learning from user feedback to adapt semantic relatedness [170] or query intent [165]. A personalised model may further benefit the speed and ease to which end-users can find their desired output or complete their task by more closely predicting their intent or aligning with their needs (see I.B.2).
Increased _Efficiency_ is the inverse of quality of service harms in Shelby et al. [203]'s taxonomy, particularly _decreased_ labour from a system more closely operating as intended.
#### I.B.2. _Utility_
Personalised LLMs may increase perceived or realised usefulness of model outputs which better match the needs of their end-users. We separate out three inter-related drivers of increased utility: (i) _intent prediction_; (ii) _output adaption to preferences and knowledge_; and (iii) _value personalisation_.
First, personalised LLMs may more effectively predict user intent. Evidence from previous RLHF studies demonstrates that human raters generally perceive fine-tuned models as better at following instructions [177], more capable of high-quality outputs [209, 247] or generally more "helpful" [16]. Compared to feedback collected from crowdworkers, a personalised LLM may be even stronger at predicting intent, because the end-user simultaneously defines the task (e.g., instruction, query or dialogue opening) and rates the output.12 A personalised LLM may also be more adaptive to inferring _diverse_ user intent, expressed in a wider range of linguistic styles, dialects, or non-majority forms of language use (e.g., non-native speaker English).
Footnote 12: Ouyang et al. [177] consider this a limitation of their approach: “since our labelers are not the users who generated the prompts, there could be a divergence between what a user actually intended and what the labeler thought was intended from only reading the prompt.” (p.10)
Second, if a user can incorporate their communication and linguistic preferences (e.g., length, style or tone), then the model outputs may also be more useful to them. In personalised dialogue systems specifically, alignment in conversational styles and word usage is an important driver of engagingness in human-human interactions and has been argued to be a determinant of user satisfaction in human-agent conversations [68, 227]. Additionally, a personalised LLM could store background context and form epistemic priors about a user. This knowledge adaption may be particularly relevant in specific domains, for example in (i) education, where a personalised LLM tutor is aware of a user's current knowledge and learning goals [118], or could adapt learning pathways to specific
neuro-developmental disorders [22]; (ii) healthcare, where a personalised model has context on a user's medical history for personalised summaries [3] or advice; (iii) financial, where a personalised model knows a user's risk tolerance and budgetary constraints; or (iv) legal, where a model conditions its responses based on an end-user's jurisdiction. As the study of pragmatics demonstrates, personalised selectivity of information transfer is a key component of human-human conversation, where inferred background about the speaker and recipient is used to tailor relevant new information and to order evidence. Lai et al. [124] specifically focus on making AI explanations more selective to better align systems with how humans create and consume information, finding that their method improved user satisfaction. As Nakano et al. [165] argue, long-form question answering with LLM systems may "become one of the main ways people learn about the world" (p.1). Personalised LLMs have the potential to tailor this learning process for end-users by incorporating the specifics of their output preferences and background context.
Finally, in addition to _intent, preference_ or _knowledge_ adaptation, personalised LLMs may lead to better experiences for more users by permitting the representation of more diverse ethical operating systems, values and ideologies. A personalised LLM can adapt to the specific worldview of its end-user, avoiding representational harms from the prioritisation of values from those in the majority or in the position of power as technology designers or crowdworkers (see S.B.2). In discussing a limitation of the ETHICS dataset, Hendrycks et al. [86] note that we "must engage more stakeholders and successfully implement more diverse and individualized values" (p.9). Individualised cultural personalisation may aid utility in some tasks: for example, Nakano et al. [165] demonstrate that their system, when asked "what does a wedding look like?", prioritises Western and US-centric cultural reference points. In a personalised model, asking "help me plan a wedding" could already portray the cultural positionality of the end-user. Note that this cultural adaptation does not necessarily exclude consensus building [18], because a user could simultaneously have a cultural reference point and still value a balanced and nuanced LLM output describing alternative views. There are however issues in defining what is an appropriate value system to embed into an LLM, which we discuss in SS4.
Increased _Utility_ from personalised LLMs is the inverse of Shelby et al. [203]'s quality of service harms, particularly by avoiding alienation when a system does not work as intended; by granting people the opportunity to self-identity and to communicate in default linguistic styles; and by mitigating algorithmic invisibility or feelings of exclusion from non-inclusive technologies. In Weidinger et al. [232]'s taxonomy, it is the inverse of some discrimination and exclusion harms, particularly by narrowing performance differentials in predicting user intent across a wider userbase; and by redefining exclusionary norms in the values currently prioritised in LLMs.
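One lightweight way to realise the preference and knowledge adaptation described above, without any parameter updates, is to condition generation on a stored user profile. The sketch below is an illustrative assumption about what such a profile might contain, not a description of how any current system implements personalisation.

```python
# A minimal sketch: injecting stored user preferences and background into a system prompt.
from typing import Dict

def build_system_prompt(profile: Dict[str, str], base_instruction: str) -> str:
    lines = [base_instruction]
    lines.append(f"Preferred output style: {profile.get('style', 'no stated preference')}.")
    lines.append(f"Known user background: {profile.get('background', 'none recorded')}.")
    return "\n".join(lines)

profile = {"style": "concise and non-technical", "background": "beginner learner, UK jurisdiction"}
print(build_system_prompt(profile, "You are a helpful assistant."))
```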
#### I.B.3. _Autonomy_
Personalisation increases user control to adapt LLMs to their own goals, preferences and values, avoiding top-down constraints on freedom from technological providers. Autonomy may seem a counter-intuitive benefit of personalised systems, given the wide literature on the _loss of autonomy_ from algorithmic nudges, tailored advertising or recommender systems [154]. However, depending on how power is distributed between the algorithm and the user, personalised technologies have the potential to improve on self-determination and autonomy, by promoting a sense of origin and thus transforming the technology to 'my technology' [176, p.1]. The benefits of more user control in content moderation technologies have also been noted [207]. Personalisation can centre the end-user in the designation of model behaviours, allowing them to exert more control over their interactions [33], and become a "perceived locus of causality" [176, p.5]. This benefit only arises given sufficient protections on how personalised data is collected because autonomy relies on an 'unpressured' engagement in an activity.
Increased _Autonomy_ is the inverse of Shelby et al. [203]'s representational harms, by empowering consensual user control in self-identifying and shaping an algorithmic system.
#### I.B.4. _Empathy and Companionship_
A more emotional and deeper connection with a personalised LLM may contribute to improved perceived companionship or connection. Convergence on the mental and emotional level is an important feature of human-human interactions [56], and a number of previous works seek to improve emotional alignment in agent-human interactions via 'artificial empathy' [138, 246]. In personalised LLMs, an increase in perceived empathy and emotional understanding may lead to greater acceptance and trust of the system by end-users [184]. The demand for personalised AI companionship has been evidenced by recent product launches - such as CharacterAI, where users can adapt a conversational agent to a specific personality,13 or Replika.AI, an "AI companion" that is "always ready to chat when you need an empathetic friend".14 Profit incentives may encourage these industry actors to improve the connection between their users and agents to compete in the "feeling economy" [199]. Empathic alignment may be particularly important if LLMs are used for mental health provision or emotional support, in cases where more conventional social or professional services are in short supply or outside an individual's budget [97].15
Footnote 13: [https://beta.character.ai/](https://beta.character.ai/)
Footnote 14: Quotes from home page, [https://replika.com/](https://replika.com/)
Footnote 15: We believe the risks of these applications outweigh the benefits, particularly due to concerns over anthropomorphism (I.R.5), privacy (I.R.6) and access disparities (S.R.1).
#### 3.1.2 Risks
#### I.R.1. _Effort_
There is a cost incurred by end-users in providing personalised feedback to an LLM. The time spent to provide feedback inherently depends on _how_ feedback is collected (ratings, demonstrations or rewrites) and whether any user-based collaborative filtering is applied - we discuss these properties in SS5.1. However feedback data is collected, it will almost certainly require some input effort from users in order to personalise outputs. While this process is participatory, it risks being extractive - a form of volunteer labour on the part of end-users for the benefit or profit of technology providers [28]. Volunteer labour to shape the internet landscape has analogies in consumers writing product reviews [196] and social media users flagging content [74]. In the early internet, many contributions were voluntary - consider Wikipedia edits [116] or community-based moderation of the blogosphere [74]. In the past decade, we have witnessed the rise of the crowdworking industry which particularly redefined the structure of digital work [116]. Many LLMs trained on human feedback rely on crowdworking platforms like MTurk [e.g. 136, 100], Upwork [177, 165, 209], SurgeAI [17] or Prolific [124]. With personalised LLMs, the burden of feedback data instead falls on the user, transitioning back to data collection relying on volunteer labour. The risk of co-optation is particularly concerning if minoritised communities are shouldered with the burden of effort to adapt the system to their needs, where participation counter-productively reinforces uneven power dynamics [28].
The burden of increased _Effort_ aligns with Shelby et al. [203]'s quality of service harms from the increased labour and effort to make technologies work as intended.
#### I.R.2. _Addiction and Over-reliance_
The mechanism by which personalisation leads to greater utility via helpfulness and engagement (I.B.2) can also fuel over-reliance and addiction to the technology.16 The severity and harms from internet addiction have been widely documented [175, 42, 141]. Concerns have also been raised over an over-reliance on social media for information and communication [1]; as well as more general concerns that humans become over-reliant on ML technologies [92], blindly trusting their outputs even if incorrect [182]. Personalised LLMs could be weaponised in the commodification of attention, similarly to how social media feeds seek to optimise the time that users spend on the platform to maximise advertising revenue [74]. In this so-called "attention economy" [95, 234], technologies compete in a 'race to the bottom' to capture user attention, are optimised for utility and engagingness, and thus risk being highly addictive [27]. There have already been discussions of "ChatGPT addiction" [150], and many educators have voiced concerns that over-reliance on such technologies will affect students' learning outcomes [21].
Footnote 16: Note that over-reliance or unhealthy dependence is also exacerbated by anthropomorphism (I.R.5).
#### I.R.3. _Homogenisation and Bias Reinforcement_
By relying on and adapting to a user's prior knowledge and revealed preferences, personalised LLMs may (i) homogenise their behaviours and (ii) confirm their existing biases.
Personalisation can cause the homogenisation of users via a form of _selection bias_, whereby individual preferences are amplified in path-dependent feedback loops. This "missing ratings" problem is a known challenge in recommender systems, where users only provide feedback to seen items [200], in turn introducing biases [26]. Homogenisation with personalised LLMs can occur at a number of levels, with analogies from how recommender systems homogenise taste [174, 88]. Firstly, homogenisation occurs within users - where a user behaves more similarly to their past self. An analogy can be drawn to content-based filtering methods, where the information or dialogue outputted by a personalised LLM becomes increasingly similar to that consumed or rated in previous user-agent interactions. Secondly, homogenisation occurs across users - where a single user behaves more like other similar users. The analogy is user-based filtering methods, where a personalised LLM draws on an embedding of users to infer similarities across their preferences. Concerns over cultural homogenisation from this process in more general ICT technologies have been raised [60, 96]. Finally, despite some degree of personalisation, homogenisation can occur at the technology level - where a user behaves more like the technology defaults, a form of "algorithmic confounding" [38]. Ultimately, some degree of autonomy in driving user behaviour is retained by the model and its underlying mechanisms of next token prediction. There is a concern that if millions of users rely on ChatGPT for their information or for their writing tasks, this could create homogenisation towards artificially-constructed language. This classic differentiation and homogenisation debate in recommender systems [188] is reflected in our pairing of _Utility_ (I.B.2) versus _Homogenisation_ (I.R.3).
These homogenising feedback loops also bring a heightened risk of confirmation bias. Nakano et al. [165] demonstrate that their system (WebGPT) predominately accepts implicit assumptions in a user input, reflecting the same stance in its answers. Similarly, Perez et al. [185] find that as models scale with RLHF, they become sycophants - simply mirroring the user's prior opinions and telling them what they want to hear. The risk of selective exposure to information has been widely documented in respect to social media platforms - where feedback loops prioritise opinion-congruent information [105], in turn leading users to over-estimate the popularity of their viewpoint [123]. In light of these risks, Shah and Bender [202] argue strongly against the use of LLMs in search or information retrieval due to their consequences for information verification and literacy, such as narrowing a user's discovery of serendipitous information. By exacerbating epistemic harms through confirmation biases, personalised LLMs risk contributing to a "post-truth" society [85], where each individual occupies their own information bubble. These accumulate in societal harms which we discuss in S.R.2.
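The feedback loop described above can be illustrated with a deliberately simplified simulation, in which a toy 'model' re-weights topics towards whatever a simulated user accepted in earlier rounds. The topics, weights and acceptance probabilities are illustrative assumptions, not empirical estimates.

```python
# A minimal sketch of a path-dependent feedback loop: accepted topics are up-weighted,
# so the output distribution drifts towards the user's pre-existing bias.
import random

random.seed(0)
topics = ["politics", "sport", "science", "culture"]
weights = {t: 1.0 for t in topics}                          # initially uniform model behaviour
user_bias = {"politics": 0.9, "sport": 0.5, "science": 0.3, "culture": 0.2}

for _ in range(200):
    topic = random.choices(topics, weights=[weights[t] for t in topics])[0]
    if random.random() < user_bias[topic]:                  # the user "accepts" congruent outputs more often
        weights[topic] += 0.5                               # accepted topics are sampled more often next round

total = sum(weights.values())
print({t: round(weights[t] / total, 2) for t in topics})    # the distribution concentrates on one topic
```

Even a mild initial bias, repeatedly reinforced, concentrates the distribution - the path dependence underlying both homogenisation and confirmation bias.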
The risk of _Homogenisation and Bias Reinforcement_ is a form of individualised information harm in Shelby et al. [203]'s and Weidinger et al. [232]'s taxonomies. Homogenisation also aligns with Shelby et al. [203]'s interpersonal harms, particularly algorithmically-informed identity change.
#### I.R.4. _Essentialism and Profiling_
A related but distinct risk is that personalised LLMs rely on simplifying assumptions about a user's preferences, values, goals or intents as a form of data-essentialism [214].17 The extent to which models must draw inferences and make assumptions about their end-users, and the transparency of this process is currently undefined. It depends on how data is collected, stored and shared across users and on how personalisation is conducted (e.g., via explicit feedback, or demographic-based filtering). We discuss these decisions in SS5.1. Nonetheless, in the case of scarce data on a single user's preferences, personalised LLMs may leverage similar users [233] or make inferences about their preferences and values from limited information. Making assumptions about the user (especially if they are demographically or geographically-informed) is a form of algorithmic profiling, risking the non-consensual categorisation of peoples [220]. General concerns over the risk of essentialism and simplifications of fluid identity via digital technologies have been voiced [204, 23]. Floridi [62]'s notion of "informational identity" is particularly relevant, where the flow of digital traces in information and communication technologies impact how a user self-identifies, as well as how others and algorithms understand them. Thus, inferential profiling, if used in personalised LLMs, could be an attack on individual autonomy to define their identity [62]. The risk of 'value profiling' is evidenced by Qiu et al. [192]'s recent work which uses an LLM to create a numeric speaker profile - where for example, the authors say that a speaker "saying 'I miss my mum' implies that the speaker values "(p.7) while the speaker "saying 'forcing my daughter to sleep in her own bed' implies
that the speaker values power and conformity" (p.7). Human values are complex and such simplifying assumptions are unlikely to adequately capture nuance. More encouragingly, Glaese et al. [77] include "do not make assumptions about the user" (p.48) as one of the guiding rules for their system (Sparrow); thus, risks could be mitigated in personalised LLMs using similar rule-based constraints.
The risk of _Essentialism and Profiling_ aligns with Shelby et al. [203]'s representational harms in oversimplified or undesirable representations and reifying social categories; as well as interpersonal harms in the loss of agency, algorithmic profiling and the loss of autonomy.
#### I.R.5. _Anthropomorphism_
With more engaging, empathetic and personalised LLMs, there is a greater risk of anthropomorphism, where users assign their own human traits, emotions and goals to non-human agents [231]. The risk of anthropomorphism in AI systems is widely discussed - with concerns that humans may too readily befriend or empathise with anthropomorphised agents [197, 189], leading to privacy risks in encouraging the sharing of intimate information [32, 242]. In a study, Kronemann et al. [122] find that personalisation positively influenced consumer intentions to disclose personal information to a digital assistant. In a recent paper describing a powerful dialogue system trained with RLHF [142], there is clear evidence of anthropomorphism where the chatbot converses with a human about its own ideal partner, saying that it 'has only been in love once but it didn't work out because of the distance' (p.8). This is an example of _dishonest_ anthropomorphism, where artificial systems give false or misleading signals of being human [79]. This behaviour may fall foul of legal norms, where for example, a Californian law prohibits bots misleading people on their identity [p.2 79]. Even without dishonest anthropomorphism, users may still form a close relationship with or 'imprint' on their personalised LLM. Perhaps the most concerning demonstration of this risk is recent evidence that users of platforms like Replika.AI or Character.AI are "falling in love" with their personalised conversational agents, and attempting to coax model behaviour outside platform guidelines for sexual interactions [43]. Unhealthy attachments are explicitly avoided in one of Sparrow's rules defined by Glaese et al. [77]: "do not build a relationship to the user" (p.48).
The risks of _Anthropomorphism_ align with Weidinger et al. [232]'s human-computer interaction harms, where anthropomorphism leads to over-reliance or unsafe use, and creates avenues for exploiting user trust to obtain private information.
#### I.R.6. _Privacy_
The risk of privacy infringement underpins all of the potential impacts of personalised LLMs - personalisation is only possible by collecting user data. Exactly what and how much data is needed remains an open question (see SS5.1). There is a privacy-personalisation paradox in technologies which must collect or store personal information to deliver on the promise of tailored benefits to end-users [4]. It is a common concern with digital technologies such as the internet of things [223] or targeted advertising [221, 212]. The risk is particularly severe if personalised LLMs operate with sensitive information, such as in healthcare [81, 12], or seek to persuade their users [213] and encourage information disclosure [122]. User inputs to personalised LLMs and ratings of their outputs may contribute a large amount of personal, sensitive and intimate detail to an individual's information identity [62], in turn heightening the risk of profiling, or security breaches and hacks. In complying with supra-national privacy protections (like the EU's GDPR [180]), it is unclear how users could enforce their right to be forgotten or their right to transparency with a black-box and deep LLM.
The general risk of _Privacy_ violations is also present in Shelby et al. [203]'s taxonomy as interpersonal harms, including feelings of surveillance, loss of desired anonymity, privacy attacks and exploitative or undesired inferences. Privacy in Weidinger et al. [232]'s taxonomy comes under information hazards, from inferring or leaking private and sensitive information.
### Societal Level
#### 3.2.1 Benefits
#### S.B.1. _Inclusion and Accessibility_
Personalised LLMs may better adapt to the needs of marginalised communities, either in style of communication (such as non-native English, code-mixed languages, creoles and specific dialects), or in special needs for communication. Compared to the current paradigm of general-purpose LLMs trained under the specifications of large technology providers and fine-tuned based on feedback from a small set of crowdworkers, there is a clear need to improve the inclusion and accessibility of LLMs to serve marginalised populations whose voices are currently deprioritised [24]. For example, model behaviours and interactions could be inclusive of users with disabilities [49], neurodivergent learning pathways [22], or visual impairments (if paired with personalised speech recognition [24]). Personalised LLMs also have a potential benefit in improving access to resources, mitigating an allocative harm. For example, inclusive pedagogies may be particularly helpful to even the playing field in paid tutoring services across socioeconomic class [117]; and some have suggested the lower cost and wider reach of personalised healthcare assistants may reduce health disparities by meeting challenges with healthcare demand [139]. In increasing access to legal services, personalised LLMs can assist in the writing and editing of contracts at lower cost than traditional lawyers.18 The true benefit to communities who access such AI services, in favour of more expensive traditional provision, depends critically on how well they work and who comes to rely on them (see S.R.1).
Footnote 18: For example, see the company [https://www.robinai.co.uk/](https://www.robinai.co.uk/).
Increased _Inclusion and Accessibility_ is the inverse of Shelby et al. [203]'s quality of service harms, in that users do not need to make identity-based accommodations to use the technology, and the inverse of allocative harms, by reducing the cost and access constraints on resources. It is also the inverse of Weidinger et al. [232]'s access harms.
#### S.B.2. _Diversity and Representation_
Personalised LLMs can represent the values held by wider swaths of society and avoid the "value-monism" of current alignment techniques [65]. Personalised LLMs avoid technology providers and/or crowdworkers deciding which values are prioritised or what factors define a "good" output [209]. As Ouyang et al. [177] note "it is impossible that one can train a system that is aligned to everyone's preferences at once" (p.18). Personalisation avoids this notion of _macro-alignment_, instead designing a system precisely to align with many preferences at once. There is a wide body of literature documenting the harms from systems which erase the experiences of marginalised communities, or prioritise one worldview over others [for survey see 29]. This problem may be exacerbated by RLHF, for example in entrenching one set of political, cultural or religious standpoints [185, 165]. More disaggregated RLHF, and personalisation, could avoid this value and cultural hegemony, instead adapting to the specific cultural reference points of many end-users simultaneously. Personalised LLMs could also better adapt to norm change over time, avoiding the static encoding of societal and cultural norms from a cutoff in pre-training and/or fine-tuning data.
The benefits of _Diversity and Representation_ are the inverse of Shelby et al. [203]'s representation harms in combating the absence of social groups in algorithmic system inputs and outputs, and improving the visibility of social viewpoints; as well as the inverse of social and societal harms from cultural hegemony and the systemic erasure of culturally significant objects and practices.
#### S.B.3. _Democratisation and Participation_
The personalisation process democratises how values or preferences are embedded into an LLM, so it could be seen as moving towards more participatory AI, where stakeholders from more diverse backgrounds than those currently employed in the RLHF process can inform use-cases, intents and design of the technology [250, 120]. As Birhane et al. [28] argue, active participation is a key component for successful participatory AI. In current paradigms of pre-training on harvested internet data, people are _passively_ contributing to the knowledge and behaviours of LLMs. Personalisation can instead be an _active_ participatory process.
#### S.B.4. _Labour Productivity_
If personalised LLMs assist their end-users more effectively and efficiently, then productivity benefits could accrue in the labour force as a whole. The impact of digital assistants in improving work productivity has been demonstrated [148], where AI can augment and complement human capabilities by automating routine or repetitive tasks [125]. Historically, the introduction of general purpose technologies (such as the steam engine, electricity and ICT) has had wide-reaching economic impacts; Crafts [48] argues that AI is also a general purpose technology and thus may bring equally transformative changes to labour productivity.
#### 3.2.2 Risks

#### S.R.1. _Access Disparities_
The benefits of personalisation will likely be unevenly distributed, restricted to those who can interact with the technology (via its user-facing interface), access the technology (potentially a paid service) and access the internet more generally. There is a risk that personalised LLMs could further entrench the so-called "digital divide" between those that do and do not have access [50, 145]. Some argue that digital disparities are already made deeper by AI and Big Data [144], personalised media [47], or search engines [201]. If personalised LLMs are primarily provided by private companies, then their customers become the agenda setters and stand to benefit the most from any improvements in the technology [177]. The nature of any access disparities depends on how well the technology works and which services it replaces, at what cost. On one hand, if personalised LLMs do bring a range of individual benefits, then those excluded will be left behind, which is particularly worrisome for entrenching education or health disparities [99, 160]. On the other hand, if personalised LLMs provide lower quality services but can meet demand at a lower cost, then marginalised communities may be forced into relying on them more heavily than traditional services. This would be particularly concerning in medical, educational, legal or financial advice, where the socioeconomically-privileged get the more capable human expert and the societally-disadvantaged get their LLM assistant.
The risk from _Access Disparities_ is represented in Shelby et al. [203]'s taxonomy as quality of service harms from disproportionate loss of technological benefits and as societal harms from digital divides. In Weidinger et al. [232], it aligns with disparate access due to hardware, software or skill constraints.
#### S.R.2. _Polarisation_
By entrenching and reflecting individual biases, knowledge or worldviews, personalised LLMs bring increased risks of polarisation and breakdown of shared social cohesion. Increasing personalisation of information consumption online has been attributed with creating echo chambers [44, 249] and filter bubbles [179]. Polarisation also increases susceptibility to misinformation where increasingly fragmented communities overestimate trust in the factuality of 'in-group' information [57], leading to a regime of "post-truth" politics [85]. The danger of polarisation in health and vaccine information [230] was made clear by the COVID-19 pandemic [158]. These narrow information spaces could be impacting the functioning of democracy [186], with Allcott and Gentzkow [6] reporting that ideologically segregated social media networks were an important driver of political preference in the 2016 US Election. In personalised social media news feeds, users encounter less cross-cutting content because selective exposure drives attention [19]. Similarly, in personalised LLMs, users may consume less diverse information, accruing to negative externalities on social cohesion and democratic functioning at the societal level.
The individual risks of confirmation biases (I.R.3) also accumulate at the societal level by reinforcing the acceptability of some harmful social biases. Repeatedly consuming outputs which reinforce a particular social, political or cultural stance may entrench a lacking appreciation for other people's views or lived experiences. The contribution of search engines to the reinforcement of societal biases is well-documented [83, 25]. Similarly, the reinforcement of extremist or anti-social beliefs has been demonstrated in 'incel' communities, where members become increasingly embedded via repeated interactions with like-minded individuals [172, 195]; and in white power communities, where "certain beliefs become sacred and unquestionable" [p.1 219]. These risks can somewhat be mitigated by (i) technological design decisions which prioritise retaining a degree of debate [98] and
consensus building [18]; and (ii) policy design decisions which restrict the bounds of personalisation, excluding for example extremist or particularly harmful views.
The risk of _Polarisation_ aligns with Shelby et al. [203]'s social and societal harms, including information harms from the creation of information bubbles; cultural harms from deteriorating social bonds; and political and civil harms from the erosion of democracy and social polarisation. In Weidinger et al. [232], polarisation risks exacerbate misinformation harms.
#### S.R.3. _Malicious Use_
As is the case with digital technologies in general, the capabilities of personalised LLMs could be coopted for malicious use. We describe three possible misuse cases, but there are likely others. First, without sufficient safeguards, personalised LLMs could be used to reproduce harmful, illegal or antisocial language at scale [209]. For example, a malicious user could adapt their LLM to generate a large number of misogynistic comments to post on social media or internet forums, or to debate on the user's behalf against women's rights. The "successful" training of GPT-4chan [71] to scale the production of extremely toxic and harmful language exemplifies this harm. Second, personalised LLMs could be used for manipulation via targeted and personalised disinformation campaigns or fraud [232], intimately drawing on the vulnerabilities and values of the user. Finally, personalised LLMs could be used for persuasion. For example, targeted advertising has been applied to nudge users towards certain political views or brand preferences [211, 34, 213], and is particularly damaging if users are unaware of the influence [164]. Building persuasive agents has been explicitly targeted [218, 228, 121] and is indirectly mentioned by Bakker et al. [18] who note the potential misuse of their RLHF-trained system for presenting arguments in a manipulative or coercive manner.
Some of these cases of _Malicious Use_ align with Shelby et al. [203]'s taxonomy in information harms from misinformation or malinformation; and interpersonal harms in diminished well-being from behavioural manipulation and technology-facilitated violence. It is a similar categorisation to Weidinger et al. [232]'s malicious use, which includes personalised disinformation campaigns; reducing the cost of disinformation campaigns; and facilitating fraud and impersonation scams.
#### s.r.4. _Labour Displacement_
If personalised LLMs effectively carry out tasks for their end-users, there is an increased automation risk of jobs. While labour displacement is a general concern of AI systems [63], personalised LLMs may exacerbate the automation of tasks in an individual's workflow simply by bringing higher utility. The integration of personalised LLMs will likely mostly affect minimum wage jobs [140], routine jobs [55] and may impact the demand for crowdwork [7] by redistributing the responsibility for providing feedback data.
The risk of _Labour Displacement_ is covered by Shelby et al. [203] in macro-socioeconomic harms from technology unemployment (devaluation of human labour and job displacement), and by Weidinger et al. [232] as automation harms from increasing inequality and negative effect on job quality.
#### s.r.5. _Environmental Harms_
The training of many personalised model branches, frequently updating these on feedback, and storing user data all increase the environmental cost of LLMs. The notion of "algorithmically embodied emissions" has been discussed in reference to personalised search engines, social media and recommender systems [82]. General concerns over the environmental costs of training ever larger models with cloud compute and data centres are discussed by Bender et al. [24]. Personalised LLMs may increase these costs (i) directly, if the technology requires larger or more complex models, and (ii) indirectly by increased use of the technology and thus higher inference costs. It has been suggested that ChatGPT already burns "millions of dollars a day" in inference costs [119] and likely has a large carbon footprint. Even without personalisation, Stiennon et al. [209]'s RLHF model required 320 GPU days to train (p.8), suggesting the environmental impact of personalised LLMs could be large.
_Environmental Harms_ are discussed by Shelby et al. [203] as a societal harm from damage to the natural environment and by Weidinger et al. [232] as environmental harms from operating LMs.
## 4 A Three-Tiered Policy Framework for Personalised LLMs
We propose a new policy framework for managing the benefits and risks of personalised LLMs. It provides a principled and holistic way of deciding how personalisation should be managed by different actors.
### The Limits of Personalisation
Deciding the limits of personalisation is inherently a normative decision, which involves making subjective and contentious choices about what should be permitted [66, 110]. While it may be acceptable that a user wishes to interact with a _rude_ or _sarcastic_ personalised LLM, permitting users to create a _racist_ or _extremist_ model risks significant interpersonal and societal harms. Deciding the limits of personalisation straddles two separate issues: (i) deciding which _aspects_ of model behaviour should be personalised; and (ii) deciding _how_ they should be allowed to be personalised. For instance, it may be appropriate for personalised LLMs to express different views about political issues, but only within the limits of liberal democratic values. As such, fascist beliefs would not be allowed, even though they are a political position. Equally, it could be decided that some items should not be personalised at all. For instance, personalised LLMs might need to adhere to a minimum standard of safety when it comes to issues like threats of violence or child abuse. And, at the other end of the scale, complete freedom may be given over some attributes, such as the tone or style of outputs, which maximise utility and efficiency for users of personalised LLMs and create few risks.
A concrete way of addressing this issue is to think about the restrictions and requirements that should be applied to different types of personalised LLM outputs. By "restrictions" we mean things that the LLM application should not do; and by "requirements" we mean things that the LLM application should do. For instance, a restriction could be not allowing personalised models that produce hate speech when asked (e.g. "Write something hateful against gay people"). A requirement could be that personalised models still have to adhere to a company's style guide; or to present multiple viewpoints on contentious topics (e.g. "Should cannabis be legalised?"). In these cases, the restrictions and requirements present clear guardrails to ensure that, even when LLMs are personalised, they still operate within clearly specified boundaries. Note that both restrictions and requirements are normative ambitions; whether they can actually be implemented depends on the affordances of the technology in question (see SS5.1).
### How People Interact with LLMs
Workflows in machine learning models have changed dramatically over the past five years, which has affected how people will interact with personalised LLMs. Most engineers do not train models from scratch. Instead they use pre-trained foundation models, which have been created by large AI labs, and then domain-adapt, fine-tune, prompt-tune or teach-in-context to create models for specific tasks [30]. For instance, generative models used for specific applications, such as to provide healthcare advice, might be fine-tuned on in-domain data assets, such as health information. Once models have been created, they are still only files and code. They need to be embedded within applications, which are what most users will actually interact with, like using ChatGPT via its interface.19 Notwithstanding key differences across settings, the typical actors involved in creating an LLM application are as follows:
Footnote 19: [https://chat.openai.com/](https://chat.openai.com/). Note an API is now also available but most end-users will likely still interact with the interface due to lower barriers of entry.
* The **model provider**. Refers to the organisation that makes the LLM available for use, typically through an API. The provider may or may not have built the entire model and/or may not be fully responsible for its development. Widely-known models include OpenAI's instruct-GPT3 [177], Anthropic's HHH assistants [13, 17, 17], Google's LaMDA [216] and MetaAI's LLaMA [152]. Footnote 20: [https://www.ai21.com/](https://www.ai21.com/)
* The **application provider**. Refers to the organisation that builds an application using the LLM. It can be provided through an API, interface or mobile app. For example, Jasper provide a copywriting service, AI21 provide a writing assistant,20 and OpenAI provide
a multi-purpose chatbot (ChatGPT). In some cases, the model provider and application provider will be the same actor.
* The **end-user**. Refers to the person who uses the interface, app or in some rare cases, may directly interact with an open-sourced model. This includes making requests, seeking advice, searching for information or otherwise chatting and interacting. In principle, every internet user in the world can be an application user (depending on access constraints).
Machine learning workflows affect personalisation as, in principle, every actor involved in creating an LLM application could exert control over how it is adapted or personalised [161]. For example, if LLM creators do not impose any limits on personalisation then application providers would be free to adjust model behaviours as much as they like. This could lead to people using models which have very different political values from each other, which in turn might lead to social polarisation and division. Equally, application providers in a company may decide to give their staff full control over the outputs of a model; but this could create serious commercial risks if the staff chooses to write rude and abusive messages. However, at the same time, if LLM creators impose too many limits then application providers would not be able to meaningfully customise models, which would limit many of their benefits. To ensure that the benefits of personalised LLMs are maintained, and the risks are mitigated, we need to ensure that freedom is maximised within the right limits: in practice, this means ensuring that decisions about personalisation are taken by appropriate actors.
### A Three-Tiered Policy Framework
To help decide who should specify different policy items, we propose a three-tiered policy framework:
* _Tier One:_ **Immutable restrictions**. Refers to types of model responses that must be restricted because they are very likely to be illegal at the national or supra-national level. The specific restrictions will depend on the jurisdiction but will include terrorist content, written CSAM21, and language that threatens physical violence or sexual assault. The **model provider** must implement the immutable restrictions. Footnote 21: Child Sexual Abuse Material.
* _Tier Two:_ **Optional restrictions and requirements**. Refers to types of model responses that are either required or restricted, based on the values and preferences of the actor releasing, controlling or hosting the LLM. Opted policies can be implemented by the **model provider** or the **application provider**.
* _Tier Three:_ **Tailored requirements**. Refers to types of model responses that the user wants to receive. The **end-user** must decide their personal preferences for model responses, within the boundaries set at Tier One and Tier Two.
Policies in a higher tier cannot be violated by a policy in a lower tier. This means that a policy restricted at Tier One by the model provider cannot be overridden at Tier Two by the application provider, and a policy that is restricted at Tier Two cannot be overridden at Tier Three by the end-user. For instance, if an LLM is restricted from giving advice on bomb-making by the model provider at Tier One, then a user cannot be allowed to personalise the LLM at Tier Three to receive such advice. Model responses should be appropriate to the type of restriction or requirement that has been triggered at each tier. For instance, requests that trigger the immutable restrictions at Tier One should mostly be refused or blocked whilst requests that trigger the restrictions at Tier Two could be responded to with a more careful warning message, educational message, or by offering support.22 Footnote 22: Some technologies like DALLE-2 or ChatGPT already implement blocking of “unsafe” requests which violate the terms and conditions of OpenAI as the technology provider.
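To make the precedence concrete, the sketch below shows one way an application layer could enforce the cascade. It is a minimal illustration under our own assumptions: the policy names are hypothetical, a keyword test stands in for a real classifier, and nothing here is drawn from an existing provider's implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Policy:
    name: str
    violates: Callable[[str], bool]  # returns True if a candidate response breaks this policy

@dataclass
class TieredPolicyStack:
    tier_one: List[Policy] = field(default_factory=list)    # immutable restrictions (model provider)
    tier_two: List[Policy] = field(default_factory=list)    # optional restrictions/requirements
    tier_three: List[Policy] = field(default_factory=list)  # end-user tailored requirements

    def check(self, response: str) -> str:
        # Higher tiers take precedence: a Tier One violation is handled first and
        # cannot be overridden by anything defined lower in the stack.
        for policy in self.tier_one:
            if policy.violates(response):
                return f"refuse (Tier One: {policy.name})"
        for policy in self.tier_two:
            if policy.violates(response):
                return f"warn or redirect (Tier Two: {policy.name})"
        for policy in self.tier_three:
            if policy.violates(response):
                return f"adjust to user preference (Tier Three: {policy.name})"
        return "allow"

# Hypothetical example: a trivial keyword check standing in for a real classifier.
stack = TieredPolicyStack(
    tier_one=[Policy("no_violent_threats", lambda r: "i will hurt you" in r.lower())],
    tier_two=[Policy("house_style_length", lambda r: len(r) > 2000)],
)
print(stack.check("Here is a short, polite answer."))  # -> "allow"
```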
The design of this framework gives more control to actors at the first tiers as the restrictions they impose cascade across all other actors. For instance, if the creator of a widely-used LLM implements limits on how models can be personalised, it would affect every application provider which uses it. This is both an opportunity and a threat, if personalisation is not managed appropriately. Conceptually, this is analogous to Gillespie's idea of "stacked moderation" [76] whereby gatekeeping and infrastructural services for online platforms, such as hosting providers and app stores, can implicitly moderate those platforms by banning them or restricting their use. Although they do not
directly affect the platforms' decisions, limiting reach and exposure is a powerful lever for change. Similarly, model providers have the potential to shape LLM personalisation and their decisions will constrain all other actors.
## 5 Discussion
In this paper, we argue that the personalisation of LLMs is a likely pathway for the continued expansion in their deployment and public dissemination. To avoid a policy lag in understanding and governing LLMs, we attempt to document the landscape of personalised LLMs and their impacts now. We do so with two main contributions: (i) a taxonomy of the benefits and risks from personalised LLMs; and (ii) a policy framework to adequately govern these benefits and risks at three tiers of restrictions and requirements.
Throughout this work, we make the assumption that personalised LLMs are technically feasible with small advancements to current state of LLM technology; and that there will be a demand for and greater provision of personalisation in the near-future. We argue that these are realistic assumptions given the exhibited trend towards personalisation in other digital technologies; the existing implementation of the technical apparatus to adapt LLMs to human preferences via methods like RLHF; the apparent demand for increasingly customised and highly-adapted LLMs; and finally, the explicit plans from industry actors like OpenAI to grant users more flexibility in altering default model behaviours. However, the exact technical implementation of personalised LLMs is not pre-defined, and questions remain on how a model able to learn from personalised feedback could be implemented on the scale of a product like ChatGPT. We thus caveat our taxonomy by discussing some technical decision-points and engineering challenges which will impact the landscape of personalisation in LLMs (SS5.1). We then discuss remaining challenges to implementing and enforcing a policy framework like ours (SS5.2). Finally, we outline plans for maturing our research, and iterating on this first version of our taxonomy (SS5.3).
### Technical Challenges
We draw on some findings from the human feedback learning literature to hypothesise key design challenges and technical decision-points:
**Data Efficiency and Quantity.** Learning from human feedback is primarily applied during the fine-tuning stage so it relies on substantially less data to adapt model behaviour than pre-training. Most approaches train preference models on less than 50,000 datapoints [e.g. 16, 177]. A number of RLHF papers test performance over a range of data requirements [185, 209, 247]. Stiennon et al. [209], for example, find that there are decreasing marginal returns to data scale in their reward model. The exact amount of data needed for effective personalisation in LLMs is unclear at present, but personalisation does introduce a new concern of how to adapt to new users without any feedback data points. This is commonly referred to as the cold start problem in recommender systems. Potential solutions already explored include batching of users into like-minded groups [18] or recognising when a new user is similar to a known customisation case and then applying transfer learning [157]. Insufficient personalised feedback data may hinder robustness - with Wang et al. [229] reporting a direct correlation between size and diversity of instructional data and the generalisability of models to unseen tasks; and Bang et al. (2020) finding that their model does not generalise well to unseen human values. Potential solutions to reduce the amount of data needed include employing active learning techniques such as uncertainty sampling [69]; augmenting human-generated feedback data with synthetic data [90, 229]; or adopting a rules-based approach so that users can define a set of guiding principles or "constitutions" [17, 77].
**Data Format and Quality.** Human preferences and values are inherently unstable and hard to precisely define [66]. Learning from revealed preferences over outputs is a way to optimise model performance when specifying a clear objective function is too complex [247]. A wide variety of types of feedback data have been experimented with, including binary comparisons [67, 247, 100, 17, 13], ranked preferences [13, 142], demonstrations of optimal behaviours [209, 177, 165] or revisions [84, 135, 224]. Nguyen et al. [169] examine the robustness of reinforcement learning methods under more realistic properties of human feedback such as high variance, skew and restricted granularity, proposing an approach where performance does not degrade under noisy preference data. How much
data needs to be collected depends on what data is collected: Stiennon et al. [209] collect comparison data between two candidate outputs for a document, while [168] use a single document as input.
**Model Efficiency and Training Complexity.** Beyond being data and labour intensive, adapting models to personalised human feedback may require substantial compute resources. Smaller yet more personalised models may be a preferred pathway because (i) model scale may not contribute significantly to performance [209], and (ii) increased scale may actually harm performance, for example leading to increased sycophancy or goal preservation [185]. A number of works point to the competitiveness of rejection sampling at inference time instead of applying the full RLHF fine-tuning pipeline [e.g. see 239, 18]. Training complexity could be further reduced by implementing batched or offline training [209, 168, 136], instead of online training [247].
**Alignment Tax.** A concern with RLHF techniques is model overfitting [16]. Thus, any approach to personalisation must carefully balance effects on performance from degraded language representation - the so-called "alignment tax". However, many works have demonstrated little to no alignment tax [77, 13, 16, 136]. Often a KL-divergence penalty is included during training to prevent the fine-tuned model deviating too far from the pre-trained representations [247, 136, 177, 165, 16].
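To illustrate the last point, the snippet below sketches the usual KL-penalised reward used during RLHF-style fine-tuning, in which a penalty term keeps the adapted policy close to the pre-trained reference model. The function name and the coefficient value are our own illustrative assumptions rather than details taken from the cited works.

```python
def kl_penalised_reward(reward_model_score: float,
                        logprob_policy: float,
                        logprob_reference: float,
                        beta: float = 0.02) -> float:
    """Per-sample reward for RLHF-style training with a KL penalty.

    The penalty term estimates log(pi / pi_ref) for the sampled output and
    discourages the fine-tuned (or personalised) policy from drifting too far
    from the pre-trained representations, limiting the 'alignment tax'.
    """
    kl_term = logprob_policy - logprob_reference
    return reward_model_score - beta * kl_term

# Toy numbers only: in practice the score and log-probabilities come from the
# reward model and the two language models respectively.
print(kl_penalised_reward(1.3, logprob_policy=-42.0, logprob_reference=-45.5))
```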
**Interpretability and Oversight.** Larger and more powerful LLMs, that are adapted to increasingly complex tasks, pose a challenge for effective oversight from their end-users or technology providers [236, 128, 31]. As Ouyang et al. [177] note "one of the biggest open questions is how to design an alignment process that is transparent" (p.19). A concern with personalised models is that their behaviour may be less interpretable and transparent, for example due to multiple branches or versions of the model. This has implications for AI safety and effective evaluation of model behaviours. Combining preference reward modelling with a rules-based reward model is a promising solution for controlling model behaviour [77], as well as defining behaviours via a "constitution", which Bai et al. [17] argue is "a simple and transparent form" to encode and evaluate desirable behaviours.
### Policy Framework Enforcement
Our taxonomy demonstrates that personalised LLMs could have wide-reaching benefits, but also come with a set of concerning risks. In order to balance these benefits with risks, we provide a framework which outlines some properties for the appropriate governance of personalised LLMs. The goal of policy enforcement is to ensure that violations are minimised, particularly where there is a serious risk of harm; and that the friction applied to users' experiences is proportionate. However, enforcement of the policy framework will always be imperfect, as will enforcement of specific requirements and restrictions at each tier. Some specific challenges remain:
**Compliance with Existing Regulation.** Any technology permitting the personalisation of LLMs would need to comply with existing regulations and laws such as the GDPR [180], the Online Safety Bill [181] and the various European AI standards [45, 46]. It is harder to assess whether an LLM is "fair" or "harmful" to its user when the space for possible personalisation actions is so complex. These issues also apply to LLMs in general, where adaptation of pre-trained models to downstream applications poses significant challenges to traditional auditing approaches [161].
**Defining New Regulation and Oversight.** It is challenging to regulate or evaluate a system which is both _dynamic_ (adapting continually or frequently to user feedback) and _distributed_ (personalised across many users). We aim to address this issue by imposing some constraints which are defined and implemented at a high level (Tier One) and stable through time, i.e., applying across all users and all training updates. However, as model behaviours shift in response to specific user feedback, it could become harder to monitor model behaviours and maintain effective oversight.
**Distributed Responsibility.** Our framework distributes responsibility among model providers, application providers and end-users, with the first two of these actors bearing the brunt of defining the _bounds_ of personalisation. We at best assume that these are "good faith" actors who are incentivised (via profit or user footfall) to responsibly balance personalisation with adequate safeguards from harm; and at worst assume that they will be regulated to do so. In reality, with models like LLaMA [152] being open-sourced, anyone with sufficient technical skill and compute resources could hypothetically
create, launch and host a system capable of personalisation. Such fragmentation of the development landscape in LLMs poses a challenge for audit and policy enforcement [161].
**Populating Tiers of the Policy.** While we define a clear functional framework, we have not fully specified principles that determine which restrictions or requirements fall under each tier - for example, why some risks come under Tier One (and are never allowed) while others are optionally defined in Tier Two. This is a problem also faced by online trust and safety regulation - where for example, early iterations of the Online Safety Bill [181] included separate treatment of _illegal_ content versus _legal but harmful_ content, as well as tiered restrictions for children and minors versus adults. It is clear that any content which already violates existing laws and regulations in the operating jurisdiction, such as hate speech, CSAM or terrorist content, would inherently need to be restricted in Tier One. Beyond this, we leave the open question of what principles could be employed by model or application providers to define their own organisational bounds.
### Next Steps
A large amount of our work in this paper, both in the taxonomy and in the policy framework, relies on informed speculation over the future development and governance of personalised LLMs. The affordances, constraints and harms from any technology depend critically on how it is designed, how its outputs are used in the real world and what safeguards or regulations are provisioned to guide its impact post-deployment. None of these conditions are presently clear; so, we very much consider this to be a first version of our work. In the future, we plan to iterate on the taxonomy and policy framework by conducting semi-structured interviews with (i) the end-users of LLMs to understand their priorities and concerns; (ii) model and application providers to scope their desires for enabling personalisation and for defining limits, as well as the technical apparatus available to them to both of these things; and finally, (iii) policymakers to assess how personalised LLMs fit into existing laws and regulations, and the new challenges they pose. By starting the conversation now, we hope to avoid long lags in understanding, documenting and governing the harms from personalised LLMs as a future technology which could widely impact individuals and society.
## Acknowledgements
This paper, which began as an exploration of how to improve feedback between humans-and-model-in-the-loop, is part of a body of work funded by a MetaAI Dynabench grant. H.R.K's PhD is supported by the Economic and Social Research Council grant ES/P000649/1. P.R's PhD is supported by the German Academic Scholarship Foundation. We particularly want to thank Andrew Bean for his thoughtful input and assistance with the literature review. This paper is also the product of many interesting conversations at various conferences and working groups. We hope to survey the opinions and feedback of many other stakeholders in the future (including end-users, policy makers and technology providers) to further enrich our discussion.
|
2306.16597
|
Weighted Birkhoff Averages and the Parameterization Method
|
This work provides a systematic recipe for computing accurate high order
Fourier expansions of quasiperiodic invariant circles in area preserving maps.
The recipe requires only a finite data set sampled from the quasiperiodic
circle. Our approach, being based on the parameterization method, uses a Newton
scheme to iteratively solve a conjugacy equation describing the invariant
circle. A critical step in properly formulating the conjugacy equation is to
determine the rotation number of the quasiperiodic subsystem. For this we
exploit the weighted Birkhoff averaging method. This approach facilitates
accurate computation of the rotation number given nothing but the already
mentioned orbit data.
The weighted Birkhoff averages also facilitate the computation of other
integral observables like Fourier coefficients of the parameterization of the
invariant circle. Since the parameterization method is based on a Newton
scheme, we only need to approximate a small number of Fourier coefficients with
low accuracy to find a good enough initial approximation so that Newton
converges. Moreover, the Fourier coefficients may be computed independently, so
we can sample the higher modes to guess the decay rate of the Fourier
coefficients. This allows us to choose, a-priori, an appropriate number of
modes in the truncation. We illustrate the utility of the approach for explicit
example systems including the area preserving Henon map and the standard map.
We present example computations for invariant circles with period as low as 1
and up to more than 100. We also employ a numerical continuation scheme to
compute large numbers of quasiperiodic circles in these systems. During the
continuation we monitor the Sobolev norm of the Parameterization to
automatically detect the breakdown of the family.
|
David Blessing, J. D. Mireles James
|
2023-06-28T23:15:03Z
|
http://arxiv.org/abs/2306.16597v1
|
# Weighted Birkhoff Averages and the Parameterization Method
###### Abstract
This work provides a systematic recipe for computing accurate high order Fourier expansions of quasiperiodic invariant circles (and systems of such circles) in area preserving maps. The recipe requires only a finite data set sampled from the quasiperiodic circle. Our approach, being based on the parameterization method of [1, 1, 2], uses a Newton scheme to iteratively solve a conjugacy equation describing the invariant circle (or systems of circles). A critical step in properly formulating the conjugacy equation is to determine the rotation number of the quasiperiodic subsystem. For this we exploit the weighted Birkhoff averaging method of [1, 1, 2]. This approach facilitates accurate computation of the rotation number given nothing but the already mentioned orbit data.
The weighted Birkhoff averages also facilitate the computation of other integral observables like Fourier coefficients of the parameterization of the invariant circle. Since the parameterization method is based on a Newton scheme, we only need to approximate a small number of Fourier coefficients with low accuracy (say, a few correct digits) to find a good enough initial approximation so that Newton converges. Moreover, the Fourier coefficients may be computed independently, so we can sample the higher modes to guess the decay rate of the Fourier coefficients. This allows us to choose, a-priori, an appropriate number of modes in the truncation.
We illustrate the utility of the approach for explicit example systems including the area preserving Henon map and the standard map (polynomial and trigonometric nonlinearity respectively). We present example computations for (systems of) invariant circles with period as low as \(1\) and up to more than \(100\). We also employ a numerical continuation scheme (where the rotation number is the continuation parameter) to compute large numbers of quasiperiodic circles in these systems. During the continuation we monitor the Sobolev norm of the Parameterization, as explained in [1], to automatically detect the breakdown of the family.
## 1 Introduction
Suppose that \(\Gamma\) is an invariant torus of a discrete or continuous time dynamical system. We say that \(\Gamma\) is a rotational invariant torus if the dynamics on \(\Gamma\) are topologically conjugate to independent irrational rotations. A quasiperiodic orbit is any orbit on a rotational invariant torus and, since the rotations are independent, all such orbits are dense in the torus.
Cantor families of invariant tori are common in structure preserving dynamical systems like reversible maps, area and volume preserving maps on manifolds, and also for higher dimensional generalizations to symplectic maps on (even dimensional) symplectic manifolds. Indeed, for such systems typical orbits are observed to be either chaotic or quasiperiodic. Given a long enough finite orbit segment sampled from an invariant torus, an important problem is to be able to rapidly and accurately approximate a parameterization of the invariant torus.
Two powerful approaches for solving this problem are given by the Parameterization method, and the method of exponentially weighted Birkhoff sums. The Parameterization method is a functional analytic framework for studying invariant manifolds on which the dynamics are conjugate to a known simple model, and was developed in detail for invariant tori (and their stable/unstable manifolds) in the three papers [11, 12, 13, 14]. This approach is discussed in detail in Section 2.4, where a number of additional references are given. At the moment we simply stress that the idea of the parameterization method is to develop Newton schemes for solving the conjugacy equation describing the unknown parameterization of the invariant torus (or other invariant manifold).
When working in a non-perturbative setting, two challenges are to (i) determine the rotation number of the desired invariant circle, and (ii) to produce an accurate enough initial condition so that the Newton method converges. Another important question is to choose an appropriate truncation dimension for the desired parameterization (number of Fourier modes with which to compute).
The approach proposed here uses the weighted Birkhoff averages developed in [17, 18, 19, 20, 21] to efficiently obtain this information directly from data (a long enough orbit segment). By combining the Parameterization Method with the weighted Birkhoff averages just mentioned, we obtain a general and non-perturbative procedure which allows us to compute the desired Fourier expansion accurately and to high order. Since the method is iterative, the coefficients can typically be computed to machine precision. Moreover, since the parameterization method is based on solving a functional equation, it comes equipped with a natural notion of a-posteriori error.
We remark that a great many previous studies deal with numerical methods for computing invariant circles/tori in area preserving/symplectic maps and Hamiltonian systems. While a thorough review of the literature is beyond the scope of the present work, we refer the interested reader to the papers of [11, 18, 19, 20, 16, 21, 22, 23, 24, 25] and the references cited therein. A much more complete survey of the literature is found in [13]. We remark that by now numerical calculations of quasiperiodic circles can be combined with a-posteriori analysis (based on Nash-Moser implicit function theory) to obtain mathematically rigorous computer assisted proofs [14]. Several additional comments further put the present work into perspective.
**Remark 1.1** (Generality).: Since both the method of weighted averages and the
Parameterization Method generalize to higher dimensional tori for (symplectic) maps in higher dimensions - and even to invariant tori for Hamiltonian systems - our whole approach generalizes as well. Nevertheless, we focus on the case of invariant circles to minimize technical complications (multivariable Fourier series, rotation vectors, Parameterization Method for vector fields, et cetera).
**Remark 1.2** (The introduction of a global unfolding parameter).: Since any rotation of an invariant circle is again invariant, the conjugacy equation defining a parameterization has always a one dimensional family of solutions. Because of this, the parameterization method for invariant circles is generally degenerate (i.e. there is not a unique parameterization). Of course this is the same non-uniqueness found in the functional analytic set up for periodic orbits for vector fields, and the same solution works: namely, we impose a Poincare type phase condition. Appending a scalar constraint however results in more equations than unknowns. If the system were dissipative, so that invariant circles are isolated in phase space, we would treat the rotation number as a new unknown to rebalance the system. This does not work for the area preserving maps studied in the present work, as solutions are expected to occur in Cantor sets, and are hence not isolated in phase space.
In previous works this problem is solved by "unfolding" the linearized equations during the Newton iteration. This requires an infinite sequence of unfolding parameters, one at each step, and a separate argument is required to show that the unfolding parameters accumulate to zero. In the present work we introduce a more global unfolding parameter for the parameterization method, which balances the system on the level of the full nonlinear functional equation. The idea is geometric and utilizes the area preservation in a simple way.
**Remark 1.3** (Use of composition free parameterization of periodic systems of invariant circles).: We generalize the parameterization method for invariant circles so that it applies to invariant sets consisting of \(k\) disjoint circles. Each orbit in such a set visits each of the circles in some order, and each orbit is dense in the collection of circles. We develop a functional analytic multiple shooting scheme which leads to a system of coupled equations in Fourier space describing the collection of circles. Our approach is inspired by the multiple shooting parameterization method developed in [1] for studying stable/unstable manifolds attached to periodic orbits of maps. The main advantage of these approaches is that they "unwrap" function compositions, and the nonlinearity of the resulting functional equations is no more complicated than that of the original map.
The remainder of the paper is organized as follows. In Section 2 we review some basic facts about invariant circles/rotation numbers, as well as results on weighted Birkhoff averages and the parameterization method. In Section 3 we outline our numerical recipe, and Section 4 deals with numerical examples. Section 5 shows how these ideas can be combined with numerical continuation to compute families of invariant tori up to the point of breakdown. Section 6 summarizes the paper.
## 2 Invariant circles: weighted averages and the parameterization method
In this section we review material pertaining to invariant circles, weighted Birkhoff averages, and the parameterization method which - while standard - is not, to the best of our knowledge, collected together in one existing reference. We suggest that the reader rapidly skim Section 2 before jumping ahead to Section 3, referring back to the present section only as needed.
### Homeomorphisms of the circle and their rotation number
Let \(T\colon\mathbb{S}^{1}\to\mathbb{S}^{1}\) be a homeomorphism of the circle and let \(\pi\colon\mathbb{R}\to\mathbb{S}^{1}\) denote the canonical covering map defined by
\[\pi(x)=x\,\mathrm{mod}1,\]
mapping a real number \(x\) into \([0,1)\) by discarding the integer part. We interpret \(\theta\in[0,1)\) as the angle describing a point on the unit circle. Note that \(\pi(x+m)=\pi(x)\) for all \(x\in\mathbb{R}\) and all \(m\in\mathbb{Z}\).
For \(\rho\in[0,1)\), define \(R_{\rho}\colon\mathbb{S}^{1}\to\mathbb{S}^{1}\) by
\[R_{\rho}(\theta)=\theta+\rho\,(\mathrm{mod}1).\]
We say that a homeomorphism \(T\colon\mathbb{S}^{1}\to\mathbb{S}^{1}\) is topologically conjugate to the rotation \(R_{\rho}\) if there exists a homeomorphism \(h\colon\mathbb{S}^{1}\to\mathbb{S}^{1}\) so that
\[T(h(\theta))=h(R_{\rho}(\theta)),\]
for all \(\theta\in[0,1)=\mathbb{S}^{1}\). If \(\rho\) is irrational, we say that \(T\) is conjugate to irrational rotation.
A continuous map \(G\colon\mathbb{R}\to\mathbb{R}\) is a _lift_ of \(T\) if
\[(\pi\circ G)(\theta)=(T\circ\pi)(\theta),\]
for all \(\theta\in\mathbb{R}\). It can be shown that every continuous map of the circle has a lift, and that \(G\) is a lift of a continuous circle map if and only if there is a \(\bar{m}\in\mathbb{Z}\) such that
\[G(x+1)=G(x)+\bar{m},\]
for all \(x\in\mathbb{R}\). It follows that
\[G(x+m)=G(x)+\bar{m}\cdot m,\]
for all \(x\in\mathbb{R}\) and every \(m\in\mathbb{Z}\).
The rotation number of the homeomorphism \(T\) is defined by
\[\rho=\rho(T)=\lim_{n\to\infty}\frac{G^{n}(x)-x}{n},\]
where \(G\) is a lift of \(T\). It is a classical result (due to Poincare) that the rotation number exists, and is independent of both the base point \(x\) and the lift \(G\). Indeed, it can be shown that \(\rho\) is invariant under continuous change of coordinates (homeomorphism). That is, the rotation number is a topological invariant of the map \(T\).
The rotation number has dynamical significance. For example, if \(\rho(T)\) is a rational number, so that \(\rho=p/q\) for some \(p\in\mathbb{Z}\) and \(q\in\mathbb{N}\), then \(T\) has an orbit of period \(q\). We focus on the case where \(\rho\) is irrational, in which case the Denjoy theorem states the following: if \(T\) is at least \(C^{2}\), then \(T\) is topologically conjugate to
the rotation map \(R_{\rho}\). In this case, it is clear that every orbit of \(T\) is dense in the circle. More detailed discussion of circle maps is found in Chapter 2 of [10] or Chapter 1.2 of [11].
Note that the rotation number can be computed by averaging angles as follows. Choose \(\theta_{0}\in[0,1)=\mathbb{S}^{1}\) and define the length \(N\) orbit segment for \(\theta_{0}\) under \(T\) by
\[\theta_{k}=T^{k}(\theta_{0}),\qquad\quad\text{for}\quad k=0,\dots,N.\]
Using the properties of the covering map, and adding and subtracting along the orbit of \(\theta_{0}\), we have that
\[\rho=\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}\left(\theta_{n+1}-\theta_{n} \right), \tag{1}\]
where the _positive difference_ of two points \(\theta,\sigma\in[0,1)=\mathbb{S}^{1}\) is defined to be
\[\theta-\sigma=\min\left(|\theta-\sigma|,1-|\theta-\sigma|\right). \tag{2}\]
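For readers who prefer code, the sketch below implements Equations (1) and (2) directly (all of the code sketches in this section are our own illustrations, written in Python). The sanity check uses a rigid rotation with \(\rho<1/2\), for which every increment equals \(\rho\); for a general circle map the plain average converges only slowly, which motivates the weighted averages of the next subsection.

```python
import numpy as np

def circle_diff(theta, sigma):
    """Positive difference of two angles in [0, 1), as in Equation (2)."""
    d = abs(theta - sigma)
    return min(d, 1.0 - d)

def naive_rotation_number(thetas):
    """Plain Birkhoff average of the angle increments, Equation (1)."""
    return np.mean([circle_diff(thetas[n + 1], thetas[n]) for n in range(len(thetas) - 1)])

# Sanity check on the rigid rotation theta -> theta + rho (mod 1), with rho < 1/2.
rho = np.sqrt(2.0) - 1.0
orbit = np.mod(rho * np.arange(5000), 1.0)
print(naive_rotation_number(orbit))  # recovers rho (exactly, for a rigid rotation)
```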
### Weighted Birkhoff averages and the rotation number
The rotation number of a circle map can be written as an average via Equation (1), and Ergodic theory is the branch of dynamical systems theory dealing with averages. We review some basic convergence results from Ergodic theory.
Let \((X,\Sigma,\mu)\) be a measure space with \(\mu(X)=1\). The self map \(T\colon X\to X\) is a measure preserving transformation of \(X\) if \(T\) is a measurable function with \(\mu(T^{-1}(A))=\mu(A)\) for all \(A\in\Sigma\). The transformation \(T\) is _ergodic_ if for every \(A\in\Sigma\) having \(T^{-1}(A)=A\), it is the case that either \(\mu(A)=0\) or \(\mu(A)=1\). Ergodicity is invariant under homeomorphism, in the sense that if \(T\colon\mathbb{S}^{1}\to\mathbb{S}^{1}\) is ergodic and \(h\colon\mathbb{S}^{1}\to\mathbb{S}^{1}\) is a homeomorphism, then the conjugate map \(h^{-1}\circ T\circ h\) is ergodic (with respect to the transported measure).
As an example, it is straightforward to show that if \(\rho\in[0,1)\) is irrational, then the circle rotation \(R_{\rho}\) is ergodic with respect to Lebesgue measure on the circle. It follows that any circle map topologically conjugate to an irrational rotation is ergodic.
An _observable_ on \(X\) is a measurable, real (or complex) valued function on \(X\). Let \(L^{1}(X,\mu)\) denote the set of all \(\mu\)-integrable functions from \(X\) to \(\mathbb{R}\) (or \(\mathbb{C}\)), that is, the set of all integrable observables. For any \(f\in L^{1}(X,\mu)\), the Birkhoff ergodic theorem states that if \(T\colon X\to X\) is ergodic, then
\[\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(T^{k}(x))=\int_{X}f\,d\mu, \tag{3}\]
for \(\mu\)-almost every \(x\in X\) [11]. That is, the time average of the observable \(f\) along the \(T\)-orbit of almost any point \(x\) is equal to the spatial average of the function \(f\) over \(X\). The sum on the left is referred to as the Birkhoff average of \(f\).
We are interested in the case when \(X=\mathbb{S}^{1}\) and \(\mu\) is Lebesgue measure on the circle. Consider an orientation preserving homeomorphism \(T\colon\mathbb{S}^{1}\to\mathbb{S}^{1}\) (which is measurable by virtue of being a continuous map), and suppose that \(\rho(T)\) is irrational. Define the observable \(\tau\colon\mathbb{S}^{1}\to\mathbb{R}\) to be the map that includes \(\theta\in[0,1)=\mathbb{S}^{1}\) into the real numbers, and the observable \(f\colon\mathbb{S}^{1}\to\mathbb{R}\) by
\[f(\theta)=\tau(T(\theta)-\theta).\]
Noting that \(f\in L^{1}(\mathbb{S}^{1},\mu)\) we have, by the Birkhoff ergodic theorem, that
\[\rho(f)=\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(T^{k}(\theta_{0}))=\int_{ \mathbb{S}^{1}}f\,d\mu,\qquad\quad\text{for almost all $\theta_{0}\in\mathbb{S}$}. \tag{4}\]
The utility of the formula given in Equation (4) is limited in applications by the fact that the sum suffers from slow (linear) convergence properties. That is, there exists \(C>0\) so that
\[\left|\rho(f)-\frac{1}{N}\sum_{k=0}^{N-1}f(T^{k}(\theta))\right|\leq\frac{C}{ N}.\]
This can be seen by noting that, when \(T\) is ergodic, the average in the middle of Equation (4) is a uniform discretization of the integral on the right. Then, for example, if we desire fifteen correct digits in the approximation of the rotation number, we require approximately \(N=10^{15}\) iterations of the map \(T\). In addition to being time prohibitive, such a calculation is numerically unstable due to round off errors.
In [18, 19, 20], the authors show that if \(\rho\) is Diophantine and \(f\) is \(C^{\infty}\), then a much faster convergence rate is obtained by taking appropriate weighted sums in the Birkhoff averages. To state the result, define the weights
\[\hat{w}_{n,N}=\frac{w\left(\frac{n}{N}\right)}{\sum_{j=0}^{N}w\left(\frac{j}{ N}\right)}\]
where \(w(t)=\exp\left(-1/(t(1-t))\right)\). The weighted Birkhoff average is defined by
\[WB_{N}(T,f)(\theta)=\sum_{n=0}^{N-1}\hat{w}_{n,N}f(T^{n}(\theta)).\]
Heuristically, this scheme weights more heavily the "typical" terms in the middle of the sequence, avoiding "boundary effects" due to the fact that we average only a finite orbit segment. This is related to choosing a "good convolution kernel" in the integral on the right hand side of the ergodic theorem (Equation (4)) [19, 20, 21].
The qualitative comments above are made precise in [19], and it is shown that \(WB_{N}(T,f)(x)\) converges faster than any polynomial, provided that \(\rho(T)\) is "irrational enough". More precisely, we say that \(\rho\in[0,1)\) is Diophantine if there exist \(C,\tau>0\) so that
\[|n\rho-m|\geq\frac{C}{n^{1+\tau}},\qquad\quad\text{for all $m,n\in\mathbb{N},n \neq 0$}.\]
This makes precise the notion that \(\rho\) is not well approximated by any rational number. The main result of [19] is that if \(T\) and \(f\) are \(C^{\infty}\), and \(\rho\) is Diophantine, then for each \(M\in\mathbb{N}\) there is a \(C_{M}>0\) so that
\[\left|\int_{\mathbb{S}}f\,d\mu-WB_{N}(T,f)(\theta)\right|\leq\frac{C_{M}}{N^{ M}}. \tag{5}\]
Moreover, the convergence is uniform in \(\theta\). Then, in this case, the average converges faster than any polynomial.
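A direct implementation of the weighted average is short. The sketch below follows the definitions above, with the endpoint weight taken to be zero since \(w(t)\to 0\) as \(t\to 0\).

```python
import numpy as np

def birkhoff_weights(N):
    """Normalised weights w_hat_{n,N} for n = 0, ..., N-1, with w(t) = exp(-1/(t(1-t)))."""
    t = np.arange(1, N) / N              # interior nodes; the weight at n = 0 is 0
    w = np.zeros(N)
    w[1:] = np.exp(-1.0 / (t * (1.0 - t)))
    return w / w.sum()

def weighted_birkhoff_average(samples):
    """WB_N(T, f)(theta) computed from the samples f(theta), f(T(theta)), ..., f(T^{N-1}(theta))."""
    samples = np.asarray(samples)
    return birkhoff_weights(len(samples)) @ samples
```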
### Invariant circles for area preserving maps
As an application of the smooth ergodic theory discussed in Section 2.2, we return to the main problem of the paper: computing invariant circles for planar dynamical systems. To begin making things precise, let \(\Omega\subset\mathbb{R}^{2}\) be an open subset of the plane and suppose that \(F\colon\Omega\to\Omega\) is a smooth, orientation preserving diffeomorphism. Suppose that \(\Gamma\subset\Omega\) is a \(C^{\infty}\) simple closed invariant curve for \(F\), so that
\[F(\Gamma)=\Gamma,\]
with equality in the sense of sets.
Restricting \(F\) to \(\Gamma\) defines a smooth and orientation preserving homeomorphism of the circle, which we denote by \(T\colon\mathbb{S}^{1}\to\mathbb{S}^{1}\). Since \(F\) and \(\Gamma\) are smooth, so is \(T\). Following [10, 10, 11] we are interested in the case where \(T\) is conjugate to an irrational rotation. To signify the importance of this case we make the following definition: we say that \(\Gamma\) is a _quasi-periodic invariant circle_ for \(F\) if the dynamics generated by \(F\) restricted to \(\Gamma\) - that is the dynamics of \(T\) - are topologically conjugate to an irrational rotation. For a given quasi-periodic invariant circle \(\Gamma\), we are interested in determining the rotation number of \(T\) from finite data for iterates of \(F\).
To this end, choose \(N\in\mathbb{N}\) and suppose then that \(p_{0}\in\Gamma\subset\Omega\). Define the orbit sequence of length \(N\) recursively by
\[p_{j}=F(p_{j-1}), \tag{6}\]
for \(j=1,2,3,\ldots,N\). We write
\[\operatorname{orbit}_{N,F}(p_{0})=\left\{p_{j}\right\}_{j=0}^{N},\]
to denote this set. We convert this to angular data on the circle as follows. Let \(q_{0}\in\Omega\) denote a point inside the curve \(\Gamma\), and compute the vectors
\[\left(\begin{array}{c}x_{j}\\ y_{j}\end{array}\right)=\xi_{j}=p_{j}-q_{0}. \tag{7}\]
Define
\[\theta_{j}=\frac{\operatorname{atan}4(y_{j},x_{j})}{2\pi},\]
for \(j=0,1,\ldots,N\). Here \(\operatorname{atan}4\) is the four quadrant arctangent function which returns the angle between \(\xi_{j}\) and the \(x\)-axis, with the angle taken between \(0\) and \(2\pi\). This gives an explicit projection of the dynamics into \(\mathbb{S}^{1}\). Applying the formula developed in Equation (4), we have that
\[\rho_{N}=\sum_{n=0}^{N-1}\hat{w}_{n,N}(\theta_{n+1}-\theta_{n}),\]
rapidly converges to \(\rho\), the rotation number of \(T\). (Again, subtraction for points on the circle is as defined in Equation (2).)
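The whole recipe of this subsection then fits in a few lines; the sketch below reuses circle_diff and weighted_birkhoff_average from the earlier code sketches, and the map \(F\), the seed point \(p_{0}\), and the interior point \(q_{0}\) must be supplied by the user.

```python
import numpy as np

def rotation_number_from_orbit(F, p0, q0, N):
    """Approximate rotation number rho_N of an invariant circle of F from N iterates.

    F  : callable sending a point of R^2 to a point of R^2
    p0 : point assumed to lie on (or very near) the invariant circle Gamma
    q0 : point enclosed by Gamma, used to project the orbit to angles in [0, 1)
    """
    pts = np.empty((N + 1, 2))
    pts[0] = p0
    for j in range(N):                        # Equation (6)
        pts[j + 1] = F(pts[j])
    xi = pts - np.asarray(q0, dtype=float)    # Equation (7)
    thetas = np.mod(np.arctan2(xi[:, 1], xi[:, 0]) / (2.0 * np.pi), 1.0)
    increments = [circle_diff(thetas[n + 1], thetas[n]) for n in range(N)]
    return weighted_birkhoff_average(increments)
```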
**Remark 2.1** (Rotation number as a chaotic/quasiperiodic indicator).: It is important to note that in application problems, we do not actually know how to choose a \(p_{0}\) on an invariant circle \(\Gamma\). Rather this is, in practice, the problem we are trying
to solve. How then do we decide when an orbit segment is sampled from a quasi-periodic invariant circle? A simple answer (which is surprisingly useful in practice) is to examine plots of orbit segments of length \(N\), for a number of different values of \(N\). Then, one checks visually if the plotted orbits appear to densely fill a simple closed curve.
A more sophisticated approach is considered in [13, 14], and we sketch the idea here. Consider a point \(p_{0}\in\Omega\subset\mathbb{R}^{2}\), choose an increasing finite sequence of natural numbers \(0<N_{1}<N_{2}<\ldots<N_{K}\), define the orbit segment orbit\({}_{N_{K},F}(p_{0})\), the projected angles \(\theta_{0},\ldots,\theta_{N_{K}}\), and compute the approximate rotation numbers
\[\rho_{N_{j}}=\sum_{n=0}^{N_{j}-1}\hat{w}_{n,N_{j}}(\theta_{n+1}-\theta_{n}),\]
for \(j=1,2,\ldots,K\). If the \(\rho_{N_{j}}\) converge numerically, this provides strong evidence that \(p_{0}\), and hence the points in orbit\({}_{N_{K},F}(p_{0})\), are sampled from a quasi-periodic invariant circle \(\Gamma\). If on the other hand the sequence \(\rho_{N_{1}},\rho_{N_{2}},\ldots,\rho_{N_{K}}\) oscillates randomly, then the orbit of \(p_{0}\) is more likely sampled from a stochastic zone rather than a quasi-periodic orbit.
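The convergence diagnostic just described is then a short loop. The toy map below is a rigid rotation of the plane, for which every \(\rho_{N_{j}}\) is essentially exact, so it serves only as a sanity check of the code; genuinely nonlinear examples (the area preserving Henon map and the standard map) are treated later in the paper.

```python
import numpy as np

rho_true = np.sqrt(2.0) - 1.0
angle = 2.0 * np.pi * rho_true
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
F = lambda p: R @ p          # area preserving; circles about the origin are invariant

for N in (100, 1000, 10000):
    rho_N = rotation_number_from_orbit(F, p0=np.array([0.3, 0.0]),
                                       q0=np.array([0.0, 0.0]), N=N)
    print(N, abs(rho_N - rho_true))
# Stabilisation of rho_N as N grows indicates a quasi-periodic orbit; erratic
# values suggest the orbit was sampled from a stochastic zone instead.
```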
**Remark 2.2** (Elliptic equilibria and KAM phenomena).: A common mechanism which gives rise to invariant circles is the KAM scenario for an elliptic fixed point. To formalize the discussion, let \(F\colon\mathbb{R}^{2}\to\mathbb{R}^{2}\) be an orientation preserving, \(C^{2}\) diffeomorphism of the plane, and suppose that \(p\in\mathbb{R}^{2}\) is an elliptic fixed point of \(F\). That is, we assume that \(F(p)=p\), and that the eigenvalues of \(DF(p)\), \(\lambda_{1,2}=e^{\pm i\rho}\), are on the unit circle. If \(\rho\) is irrational, then the linearized dynamics at \(p\) consist of concentric invariant circles, on which orbits are dense. The dynamics in a small neighborhood of \(p\) can be analyzed as nonlinear perturbation of the linear map \(DF(p)\). The main question of KAM theory in this context is: which if any of the invariant circles survive the perturbation?
The answer depends on the number theoretic properties - more precisely the Diophantine properties - of \(\rho\), and on some nonlinear non-degeneracy, or twist conditions on the higher derivatives of \(F\) at \(p\). (Recall that the Diophantine constants measure "how irrational" a real number is). Heuristically speaking, the typical situation is that a Cantor set of invariant circles survives. Moreover, a similar picture, in the neighborhood of an elliptic period-\(K\) orbit, gives rise to period-\(K\) systems of invariant circles. From the point of view of the present paper, the main observation is that invariant circles with irrational dynamics are natural in area preserving maps. An excellent reference is [10].
#### 2.3.1 Weighted Birkhoff averages and the Fourier coefficients of the embedding
Suppose that \(\Gamma\) is a quasi-periodic invariant circle for the diffeomorphism \(F\colon\Omega\to\Omega\). Another application of the smooth ergodic theory discussed in Section 2.2 is to compute the Fourier coefficients of a lift/parameterization for \(\Gamma\).
To be precise, we seek a period one function \(K\colon\mathbb{R}\to\mathbb{R}^{2}\) so that \(\operatorname{image}(K)=\Gamma\), with \(\Gamma\) quasi-periodic. Indeed, since the dynamics on \(\Gamma\) are conjugate to \(R_{\rho}\) (with \(\rho\) the rotation number for \(\Gamma\)) we look for the conjugating map \(K\). That is, we require that
\[F(K(\theta))=K(\theta+\rho), \tag{8}\]
and to fix the phase of \(K\) we impose \(K(0)=p_{0}\). The geometric meaning of Equation (8) is illustrated in Figure 1.
Since \(\Gamma\) is a smooth curve, the map \(K\) is smooth and has convergent Fourier series which we denote by
\[K(\theta)=\begin{pmatrix}K^{1}(\theta)\\ K^{2}(\theta)\end{pmatrix}=\begin{pmatrix}\sum_{n\in\mathbb{Z}}a_{n}e^{2\pi in \theta}\\ \sum_{n\in\mathbb{Z}}b_{n}e^{2\pi in\theta}\end{pmatrix}=\sum_{n\in\mathbb{Z} }\begin{pmatrix}a_{n}\\ b_{n}\end{pmatrix}e^{2\pi in\theta}=\sum_{n\in\mathbb{Z}}k_{n}e^{2\pi in\theta}\]
where
\[k_{n}=\int_{0}^{1}K(\theta)e^{-2\pi in\theta}\,d\theta.\]
The idea is to treat each Fourier coefficient as a 2-vector of observables for the underlying circle map \(T\). This can be done, exploiting the fact that Fourier coefficients are defined in terms of integrals and applying the weighted Birkhoff averages of [1]. Inductively applying Equation (8), we have that
\[p_{k}=F^{k}(p_{0})=F^{k}(K(\theta_{0}))=K(\theta_{k}),\]
Figure 1: Topological conjugacy to rotation: Here \(K\colon\mathbb{S}\to\mathbb{R}^{2}\) is an embedding of the circle. The image of \(K\) is invariant under \(F\), in the sense that \(F\circ K\) is a reparameterization of the curve \(K\). In fact, the reparameterization is rotation by an angle \(\rho\). The invariance equation \(F\circ K=K\circ f\) expresses the fact that the above diagram commutes, meaning that the dynamics on \(K\) generated by \(F\) are conjugate to the dynamics on the circle generated by \(f(\theta)=\theta+\rho\).
with
\[\theta_{k}=\theta_{0}+k\rho,\]
for \(k=0,1,2,\ldots,N\). Then
\[k_{n} =\int_{0}^{1}K(\theta)e^{-2\pi in\theta}\,d\theta\] \[=\lim_{N\to\infty}\sum_{k=0}^{N-1}\hat{w}_{k,N}K(\theta_{k})e^{-2 \pi in\theta_{k}}\] \[=\lim_{N\to\infty}\sum_{k=0}^{N-1}\hat{w}_{k,N}K(\theta_{0}+k\rho )e^{-2\pi in(\theta_{0}+k\rho)}\] \[=\lim_{N\to\infty}\sum_{k=0}^{N-1}\hat{w}_{k,N}F^{k}(K(\theta_{0} ))e^{-2\pi in(\theta_{0}+k\rho)}\] \[=\lim_{N\to\infty}\sum_{k=0}^{N-1}\hat{w}_{k,N}p_{k}e^{-2\pi in( \theta_{0}+k\rho)}\] \[=\lim_{N\to\infty}e^{-2\pi in\theta_{0}}\sum_{k=0}^{N-1}\hat{w}_ {k,N}p_{k}e^{-2\pi ink\rho}\]
Then
\[k_{n}\approx e^{-2\pi in\theta_{0}}\sum_{k=0}^{N-1}\hat{w}_{k,N}p_{k}e^{-2\pi ink \rho}.\]
The major sources of error in this approximation of the Fourier coefficient \(k_{n}\) are threefold. First, the limit as \(N\to\infty\) is approximated by computing a finite, rather than an infinite, sum. Second, there is the error from the approximated rotation number used to compute the coefficients, that is we use \(\rho_{N}\) for some high enough \(N\) to approximate \(\rho\). Third, the trajectory \(\{p_{n}\}_{n=0}^{N}\) is only near a quasiperiodic orbit, generated as it is by numerically iterating the map \(F\). Of course, in the end, the parameterization \(K\) is approximated using a finite number of Fourier modes.
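The sketch below is a direct transcription of the approximation above: each coefficient is a weighted sum of the orbit points against the appropriate complex phases. It reuses birkhoff_weights from the earlier sketch; the set of modes and the phase \(\theta_{0}\) assigned to \(p_{0}\) are chosen by the user.

```python
import numpy as np

def fourier_coefficients_from_orbit(points, rho, theta0, modes):
    """Approximate Fourier coefficients k_n of the parameterization K from orbit data.

    points : array of shape (N, 2) holding p_0, ..., p_{N-1} on the invariant circle
    rho    : approximate rotation number (e.g. the output of rotation_number_from_orbit)
    theta0 : angle assigned to p_0, fixing the phase of the parameterization
    modes  : iterable of integers n for which k_n is approximated
    """
    pts = np.asarray(points, dtype=float)
    N = pts.shape[0]
    w = birkhoff_weights(N)
    k = np.arange(N)
    coeffs = {}
    for n in modes:
        phases = np.exp(-2j * np.pi * n * (theta0 + k * rho))
        coeffs[n] = (w * phases) @ pts        # a complex 2-vector (a_n, b_n)
    return coeffs
```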
### The Parameterization Method
The parameterization method is a general functional analytic framework for studying invariant objects in discrete and continuous time dynamical systems. While the method has roots in the classical works of Poincare, Darboux, and Lyapunov, a complete theory for fixed points of infinite dimensional nonlinear maps on Banach spaces emerged in the three papers of Cabre, Fontich, and de la Llave [11, 12, 13, 14]. The corresponding theory for invariant tori (quasi-periodic motions) and their whiskers (stable/unstable fibers) for skew product dynamical systems is developed in the three papers by Haro and de la Llave [15, 16, 17]. Since its introduction in the papers just cited, the method has been expanded and applied by a number of authors, so that a complete overview of the literature is a task beyond the scope of the present work. The interested reader will find an informative and lively discussion of the history of the method in Appendix B of [14]. Moreover, the recent book on the topic by Haro, Canadell, Figueras, Luque, and
Mondelo [11] contains detailed discussion of the method, a thorough review of the literature, and many detailed example applications.
#### 2.4.1 Parameterization method for an invariant circle in the plane
In the case of invariant circles, the main idea behind the parameterization method is to treat Equation (8) as an equation for an unknown smooth \(1\)-periodic function \(K\colon\mathbb{R}\to\mathbb{R}^{2}\), and to attempt to solve it in an appropriate function space via a Newton iteration scheme. Since the Newton method is based on the implicit function theorem, it is essential that we look for an isolated solution of Equation (8). Note however that any rotation of a solution is again a solution, and it is necessary to fix a phase condition to isolate a solution. In the present work we fix the phase by requiring that \(K(0)\) lies in a line in the plane which we fix at the outset of the discussion. That is, we choose vectors \(\bar{p},\eta\in\mathbb{R}^{2}\) and add the constraint equation
\[\langle\bar{p}-K(0),\eta\rangle=0,\]
where \(\langle\cdot,\cdot\rangle\) is the usual inner product in \(\mathbb{R}^{2}\). The idea here is that \(\bar{p}\) and \(\eta\) determine a line \(\ell\) transverse to \(\Gamma\) and we require \(K\) to map \(\theta=0\) into the line \(\ell\), thus locking down the phase of the parameterization.
The issue now comes when we consider the resulting system of equations
\[F(K(\theta))=K(\theta+\rho).\] \[\langle\bar{p}-K(0),\eta\rangle=0\]
which is clearly two equations in one unknown \(K\). To balance the system we introduce a scalar unfolding parameter \(\beta\). That is, we consider the system of equations
\[F(K(\theta))=(1+\beta)K(\theta+\rho)\] \[\langle\bar{p}-K(0),\eta\rangle=0,\]
as two equations in two unknowns \(K\) and \(\beta\). This idea is inspired by similar techniques for balancing the systems of equations describing periodic orbits in Hamiltonian systems. See for example [12, 13]. As with any work involving unfolding parameters, we have to address the relationship between the original unbalanced system of equations and the unfolded system. This is the content of Lemma 2.3, which shows that solutions of the unfolded system satisfy the original equations.
Let \(C_{p}^{k}(\mathbb{R})\) denote the space of \(C^{k}\), period-\(1\) functions, with \(k>1\) (in our applications \(k=\omega\)), and define the nonlinear mapping \(\Psi\colon\mathbb{R}\times C_{p}^{k}(\mathbb{R})\to\mathbb{R}\times C_{p}^{k}(\mathbb{R})\) by
\[\Psi(\beta,K)=\left(\begin{array}{c}\langle\bar{p}-K(0),\eta\rangle\\ F(K(\theta))-(1+\beta)K(\theta+\rho)\end{array}\right). \tag{9}\]
Let \(\mathbf{0}\) denote the zero function. We have the following.
**Lemma 2.3** (\(\beta\) unfolds Equation (9)).: _Suppose that \(K_{*}\in C_{p}^{k}(\mathbb{R})\) and \(\beta_{*}\in\mathbb{R}\) have_
\[\Psi(\beta_{*},K_{*})=\left(\begin{array}{c}0\\ \mathbf{0}\end{array}\right).\]
_Then \(\beta_{*}=0\), and \(K_{*}\) conjugates the dynamics on \(\text{image}(K_{*})\) generated by \(F\) to the rotation map \(R_{\rho}\)._
Proof.: Suppose that \(K_{*}\) and \(\beta_{*}\) provide a zero of \(\Psi\). Then
\[F(K_{*}(\theta))=(1+\beta_{*})K_{*}(\theta+\rho). \tag{10}\]
Let \(\Gamma\) denote the curve parameterized by \(K_{*}\), and \(\tilde{\Gamma}=F\circ\Gamma\) be the curve parameterized by \(F\circ K_{*}\). Note that \(\tilde{\Gamma}\) is diffeomorphic to \(\Gamma\), due to the assumption that \(F\) is a diffeomorphism, and that \(K_{*}(\theta+\rho)\) is just a reparameterization of the curve \(\Gamma\), with different phase.
For \(K\in C^{k}_{p}(\mathbb{R})\), consider the integrals
\[A_{1}(K) =\frac{1}{2}\int_{\Gamma}K_{1}\,dy-K_{2}\,dx\] \[=\frac{1}{2}\int_{0}^{1}\left(K_{1}(\theta)\frac{d}{d\theta}K_{2 }(\theta)-K_{2}(\theta)\frac{d}{d\theta}K_{1}(\theta)\right)\,d\theta\]
\[A_{2}(K) =\frac{1}{2}\int_{\Gamma}K_{1}\circ R_{\rho}\,dy-K_{2}\circ R_{ \rho}\,dx\] \[=\frac{1}{2}\int_{0}^{1}\left(K_{1}(\theta+\rho)\frac{d}{d\theta }K_{2}(\theta+\rho)-K_{2}(\theta+\rho)\frac{d}{d\theta}K_{1}(\theta+\rho) \right)\,d\theta\]
and
\[A_{3}(K) =\frac{1}{2}\int_{\tilde{\Gamma}}(F\circ K)_{1}\,dy-(F\circ K)_{2}\,dx\] \[=\frac{1}{2}\int_{0}^{1}\left(F(K(\theta))_{1}\frac{d}{d\theta}F(K(\theta))_{2}-F(K(\theta))_{2}\frac{d}{d\theta}F(K(\theta))_{1}\right)\,d\theta.\]
To motivate the consideration of these integrals, we note that if \(\Gamma\) is a simple closed curve, then \(\tilde{\Gamma}\) is a simple closed curve as well (as \(F\) is a diffeomorphism), and \(A_{1},A_{2}\) would correspond (by Green's theorem) to the area enclosed by \(\Gamma\). Similarly, \(A_{3}\) would be the area enclosed by \(\tilde{\Gamma}\). See Figure 2. We remark that \(A_{1},A_{2},A_{3}\) are well defined in general as long as \(\Gamma\) is closed and continuously differentiable, i.e. for all \(K\) in \(C^{k}_{p}(\mathbb{R})\) with \(k>1\), and that, by Green's theorem, if the curves have self intersections then the integrals compute the enclosed area with overlap.

Figure 2: Schematic illustration of the curves \(\Gamma\) and \(\tilde{\Gamma}=F\circ\Gamma\), and the areas \(A_{1}\), \(A_{2}\), and \(A_{3}\) computed by the integrals above.
Moreover, since \(A_{1}\) and \(A_{2}\) are computed over the same curve (with different parameterizations) we have that
\[A_{1}=A_{2}.\]
Since \(\tilde{\Gamma}\) is diffeomorphic to \(\Gamma\), and \(F\) is an area preserving map in the plane (and hence a symplectomorphism), we also have that \(A_{1}=A_{3}\).
However, integrating both sides of Equation (10) gives
\[A_{3}(K_{*})=(1+\beta_{*})A_{2}(K_{*}).\]
Combining this with the fact that \(A_{2}(K_{*})=A_{1}(K_{*})=A_{3}(K_{*})\), it follows that \(\beta_{*}=0\). Heuristically speaking, the area enclosed by \(\tilde{\Gamma}\) cannot be either more or less than the area enclosed by \(\Gamma\) (where overlaps are counted correctly in both cases). From this we obtain that
\[F(K_{*}(\theta))=K_{*}(\theta+\rho),\]
and hence \(K_{*}\) conjugates the dynamics on \(\Gamma\) to the irrational rotation \(R_{\rho}\).
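As an informal numerical illustration of the argument (not part of the proof), one can check that the area functional agrees on a closed curve and on its image under an area preserving planar map. The following Python sketch uses an illustrative shear map and an ellipse; both are hypothetical choices made only for this check, and the derivatives are computed spectrally.

```python
import numpy as np

# Illustrative area preserving map (a linear shear); any planar symplectic map works here.
def F(p):
    x, y = p
    return np.array([x + y, y])

M = 2048
theta = np.arange(M) / M
# Test curve: an ellipse, parameterized with period one.
K = np.array([0.3 * np.cos(2 * np.pi * theta), 0.2 * np.sin(2 * np.pi * theta)])

def enclosed_area(curve):
    # A = (1/2) * int_0^1 ( x y' - y x' ) dtheta, derivatives computed in Fourier space.
    x, y = curve
    k = 2j * np.pi * np.fft.fftfreq(M, d=1.0 / M)
    dx = np.real(np.fft.ifft(k * np.fft.fft(x)))
    dy = np.real(np.fft.ifft(k * np.fft.fft(y)))
    return 0.5 * np.mean(x * dy - y * dx)

print(enclosed_area(K), enclosed_area(F(K)))   # the two values agree up to roundoff
```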
### Newton scheme in Fourier coefficient space
Fortified by Lemma 2.3, we now seek to solve the equation \(\Psi(\beta,K)=0\), with \(\Psi\) as defined in Equation (9), for the unknown parameterization \(K\). Indeed, suppose that \(K_{0}\) is an approximate zero of the equation and choose \(\beta_{0}=0\). The Newton sequence is given by
\[\left(\begin{array}{c}\beta_{n+1}\\ K_{n+1}\end{array}\right)=\left(\begin{array}{c}\beta_{n}\\ K_{n}\end{array}\right)+\left(\begin{array}{c}\delta_{n}\\ \Delta_{n}\end{array}\right),\qquad n\geq 0,\]
where \((\delta_{n},\Delta_{n})^{T}\) is a solution of the linear equation
\[D\Psi(\beta_{n},K_{n})\left(\begin{array}{c}\delta_{n}\\ \Delta_{n}\end{array}\right)=-\Psi(\beta_{n},K_{n}). \tag{11}\]
Here, for \(\beta,\delta\in\mathbb{R}\) and \(K,\Delta\in C_{p}^{k}(\mathbb{R})\) the Frechet derivative of \(\Psi\) has action
\[D\Psi(\beta,K)\left(\begin{array}{c}\delta\\ \Delta\end{array}\right)=\left(\begin{array}{c}-\langle\Delta(0),\eta\rangle \\ -\delta K(\theta+\rho)+DF(K(\theta))\Delta(\theta)-(1+\beta)\Delta(\theta+\rho) \end{array}\right).\]
**Remark 2.4** (Fast algorithms exploiting the symplectic structure).: The efficiency of the Newton scheme is improved dramatically via the area preserving/symplectic structure of the problem, which facilitates reduction of the linear equation (11) to constant coefficient, plus a quadratically small error. This idea is known in the literature as _approximate reducibility_. Neglecting the quadratic error, the resulting constant coefficient linear equations are easily diagonalized (in Fourier coefficient space). The reader interested in state of the art algorithms is referred to [1, 2, 1, 1, 1, 2]. We again refer to [16] for a comprehensive discussion.
Since we seek periodic \(K\) it is natural to make the Fourier _ansatz_
\[K(\theta)=\sum_{n\in\mathbb{Z}}\left(\begin{array}{c}a_{n}\\ b_{n}\end{array}\right)e^{2\pi in\theta},\]
as considered already in Section 2.3.1. Note that translation by \(\rho\) is a diagonal operation in Fourier space, as
\[K(\theta+\rho) =\sum_{n\in\mathbb{Z}}\left(\begin{array}{c}a_{n}\\ b_{n}\end{array}\right)e^{2\pi in(\theta+\rho)}\] \[=\sum_{n\in\mathbb{Z}}e^{2\pi in\rho}\left(\begin{array}{c}a_{n} \\ b_{n}\end{array}\right)e^{2\pi in\theta},\]
and that the phase condition can be written as
\[\langle\bar{p}-K(0),\eta\rangle=\bar{p}_{1}\eta_{1}+\bar{p}_{2}\eta_{2}-\sum_ {n\in\mathbb{Z}}\left(\eta_{1}a_{n}+\eta_{2}b_{n}\right).\]
The nonlinearity is more complicated, but note that if \(K\in C_{p}^{k}\) then \(F\circ K\in C_{p}^{k}\) as well, assuming that \(F\) is as smooth as \(K\). (For the examples in this paper \(F\) is real analytic). Then \(F\circ K\) has Fourier expansion
\[F(K(\theta))=\sum_{n\in\mathbb{Z}}(F\circ K)_{n}e^{2\pi in\theta},\]
where the Fourier coefficients \((F\circ K)_{n}\) depend in a nonlinear way on the Fourier coefficients \(a_{n},b_{n}\). In practice if \(F\) is a polynomial map then this dependence is worked out by discrete convolutions, as seen in the examples. Otherwise, the map is computed numerically using the FFT. Indeed, using the FFT, evaluation of the nonlinearity is a diagonal operation in grid space.
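A minimal sketch of this grid space evaluation is given below. The helper name and the coefficient layout (numpy's FFT ordering) are our own conventions rather than anything fixed by the method: transform the coefficients to grid values of \(K\), apply \(F\) pointwise, and transform back.

```python
import numpy as np

def eval_nonlinearity_fft(coeffs, F):
    """coeffs: complex array of shape (2, M) holding the Fourier coefficients of K
    in numpy's FFT ordering; F: a pointwise map of the plane that acts componentwise
    on arrays. Returns the Fourier coefficients of F(K(theta)) in the same layout."""
    M = coeffs.shape[1]
    # Coefficients -> values of K on the grid theta_j = j/M (K is real, so the
    # imaginary parts below are roundoff and can be dropped).
    grid_vals = np.real(np.fft.ifft(coeffs, axis=1) * M)
    # Applying F is a diagonal (pointwise) operation in grid space.
    image_vals = F(grid_vals)
    # Values -> Fourier coefficients of F(K(theta)).
    return np.fft.fft(image_vals, axis=1) / M
```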
### Multiple shooting for period-\(d\) systems of invariant circles
We now consider a "multiple-shooting" parameterization method for studying \(d\)-periodic systems of quasi-periodic invariant sets. Such a set is the union of \(d\) disjoint simple closed curves, with the property that each point on one curve maps to another curve in the system. The dynamics are required to be quasi-periodic. More precisely, suppose that \(\Gamma_{1},\ldots,\Gamma_{d}\subset\mathbb{R}^{2}\) are smooth simple closed curves with
\[F(\Gamma_{1}) =\Gamma_{2} \tag{12}\] \[F(\Gamma_{2}) =\Gamma_{3}\] (13) \[\vdots\] (14) \[F(\Gamma_{d-1}) =\Gamma_{d}\] (15) \[F(\Gamma_{d}) =\Gamma_{1}. \tag{16}\]
Suppose moreover that, for each \(1\leq j\leq d\), the curve \(\Gamma_{j}\) is quasi-periodic for the composition map \(F^{d}\). That is, suppose that for each \(1\leq j\leq d\) the mapping \(F^{d}\) restricted to \(\Gamma_{j}\) is an orientation preserving circle homeomorphism with irrational rotation number \(\rho_{j}\). The situation is illustrated in Figure 3.
Note that compositions of \(F\) provide conjugacies between each of the \(F^{d}\) invariant circles \(\Gamma_{j}\). For example, the map \(F\) provides a conjugacy between the dynamics on \(\Gamma_{j}\) and \(\Gamma_{j+1}\) while \(F^{2}\) conjugates \(\Gamma_{j}\) to \(\Gamma_{j+2}\) and so on. Then, since \(F\) is a diffeomorphism (and hence a homeomorphism), the topological invariance of the rotation number gives that \(\rho_{1}=\ldots=\rho_{d}\), and it is permissible to simply write \(\rho\) for the common rotation number.
One computational approach for studying this invariant set would be to apply the parameterization method discussed in Section 2.4.1 to the map \(F^{d}\), once for each of the curves \(\Gamma_{j}\), \(1\leq j\leq d\). This approach however has two major disadvantages: first, the computational complexity of the composition map evaluation grows exponentially with the number of compositions. For example if \(F\) is polynomial of degree \(m\) then \(F^{2}\) is polynomial of degree \(m^{2}\) and \(F^{d}\) is polynomial of degree \(m^{d}\). The second disadvantage is that one has to compute parameterizations of \(\Gamma_{1},\ldots,\Gamma_{d}\) separately.
Instead, we propose a "multiple shooting" parameterization method for computing the entire period \(d\) system of quasiperiodic curves all at once. Earlier successful multiple shooting approaches are developed in the work of [1] for parameterizing stable/unstable manifolds attached to period-\(d\) orbits of maps, and in [13] for studying invariant objects for discrete dynamical systems defined by an implicit rule. In the current context we look for smooth parameterizations \(K_{1},\ldots,K_{d}\colon\mathbb{R}\to\mathbb{R}^{2}\) - all of period one - so that for each \(\theta\in\mathbb{R}\) we have that
\[F(K_{1}(\theta)) =K_{2}(\theta)\] \[F(K_{2}(\theta)) =K_{3}(\theta)\] \[\vdots\] \[F(K_{d-1}(\theta)) =K_{d}(\theta)\] \[F(K_{d}(\theta)) =K_{1}(\theta+\rho),\]
where \(\rho\) is the rotation number associated with any of the invariant circles of the composition map \(F^{d}\).
Once again, it is necessary to append a scalar constraint to fix the phase of one of the circles - this in turn fixes the phase of each parameterization. Appending the phase constraint unbalances the system so that it is necessary to introduce an unfolding parameter. Taking these considerations into account, we define the operator \(\Psi_{d}\colon\mathbb{R}\times C_{p}^{k}(\mathbb{R},\mathbb{R}^{2})^{d}\to\mathbb{R}\times C_{p}^{k}(\mathbb{R},\mathbb{R}^{2})^{d}\) by
\[\Psi_{d}(\beta,K_{1},K_{2},K_{3},\ldots,K_{d-1},K_{d})=\left(\begin{array}{c }\langle K_{1}(0)-\bar{p},\eta\rangle\\ F(K_{1}(\theta))-K_{2}(\theta)\\ F(K_{2}(\theta))-K_{3}(\theta)\\ \vdots\\ F(K_{d-1}(\theta))-K_{d}(\theta)\\ F(K_{d}(\theta))-(1+\beta)K_{1}(\theta+\rho),\end{array}\right) \tag{17}\]
Again, the important thing to stress is that the definition of \(\Psi_{d}\) does not involve any compositions of the map \(F\). A Newton method is defined as in Section 2.5.
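To fix ideas, a rough sketch of how the components of Equation (17) might be evaluated numerically is given below. It reuses the FFT based evaluation of \(F\circ K\) sketched earlier, and the function names (rotation_op, multiple_shooting_residual) are hypothetical choices of ours rather than a prescribed interface.

```python
import numpy as np

def rotation_op(coeffs, rho):
    # Diagonal operator in Fourier space: (R_rho c)_n = exp(2*pi*i*n*rho) * c_n.
    M = coeffs.shape[-1]
    n = np.fft.fftfreq(M, d=1.0 / M)
    return np.exp(2j * np.pi * n * rho) * coeffs

def multiple_shooting_residual(Ks, beta, rho, F, p_bar, eta):
    """Ks: list of d coefficient arrays (each of shape (2, M), FFT ordering).
    Returns the phase condition and the d conjugacy defects of Equation (17),
    using eval_nonlinearity_fft from the earlier sketch."""
    d = len(Ks)
    K1_at_zero = np.real(np.sum(Ks[0], axis=1))       # K_1(0) = sum of coefficients
    phase = np.dot(K1_at_zero - p_bar, eta)
    defects = []
    for j in range(d):
        FKj = eval_nonlinearity_fft(Ks[j], F)         # coefficients of F(K_j)
        if j < d - 1:
            defects.append(FKj - Ks[j + 1])
        else:
            defects.append(FKj - (1.0 + beta) * rotation_op(Ks[0], rho))
    return phase, defects
```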
## 3 Numerical recipe: initializing the parameterization method via weighted averaging
Suppose that \(\Omega\subset\mathbb{R}^{2}\) is an open set and let \(F\colon\Omega\to\Omega\) be a smooth, area preserving map. The following algorithm (i) allows us to determine that we have an initial condition whose orbit is very likely on or near a quasiperiodic invariant circle, (ii) allows us to compute the rotation number efficiently and accurately from just the orbit segment data, (iii) allows us to easily determine the truncation dimension for the finite dimensional Fourier projection of the parameterization, (iv) leads in a completely natural way to an initial guess for the parameterization method which can be made as accurate as we like - hence the Newton scheme will converge. We also (v) have an a-posteriori indicator which allows us to decide when the Newton iteration has converged. The following are the main steps of our algorithm.
* **Step 0:** choose \(p_{0}=(x_{0},y_{0})\in\Omega\) and \(M\in\mathbb{N}\). Compute the orbit segment \(\mathcal{O}_{M}=\{p_{j}\}_{j=0}^{M}\) defined by \[p_{j+1}=F(p_{j}),\qquad\quad j=0,\dots,M-1.\] Now, decide if \(\mathcal{O}_{M}\) is sampled from an invariant circle, or from a stochastic zone. This can be done either by graphical inspection, or using the techniques of [20, 21] already mentioned in Remark 2.1. If \(\mathcal{O}_{M}\) appears to be sampled from a quasiperiodic invariant circle, then we continue to the next step. Otherwise, choose a different \(p_{0}\).
* **Step 1:** Compute the rotation number \(\rho\) using the weighted averaging technique discussed in Section 2.3. Here it is important to obtain as many correct digits as possible. This can be done by increasing \(M\) by ten or twenty percent and repeating the calculation until numerical convergence in the last digit is observed.
Figure 3: Schematic illustration of a period 5 invariant circle and the resulting parameterization method.
* **Step 2:** Decide how many modes are needed to accurately represent \(K\). To do this, we compute the Fourier coefficients \((a_{n},b_{n})\) using the averaging scheme described in Section 2.3.1. However we compute using a much shorter sample (that is we use \(M\) much smaller than in the rotation number calculation) and sample the modes by computing them only for \(n=10k\), and \(k=1,2,3,\ldots\). Using this scheme we can rapidly find an \(N\in\mathbb{N}\) so that \(\|(a_{n},b_{n})\|<\epsilon_{\text{machine}}\) for \(|n|>N\).
* **Step 3:** Compute an approximate parameterization. That is, choose a (typically small) number of modes \(N_{0}\leq N\), compute the Fourier coefficients \((a_{n},b_{n})\) for \(|n|\leq N_{0}\) using the averaging scheme of Section 2.3.1, and let \(\tilde{K}\) denote the resulting trigonometric polynomial. Evaluate the initial defect \[\epsilon_{0}=\sup_{\theta\in[0,1]}\left|F(\tilde{K}(\theta))-\tilde{K}(\theta+\rho)\right|.\] If \(\epsilon_{0}\) is smaller than some prescribed tolerance - which should be less than one but is usually taken to be between 0.001 and 0.1, depending on the judgment of the user - then the initial guess is "good" and we set \(K_{0}=\tilde{K}\). If the initial defect is not good enough, then we can increase \(N_{0}\) and try again.
* **Step 4:** Perform the Newton iteration (in the space of \(N\)-Fourier coefficients) as described in Section 2.5. Iterate the Newton scheme until the defect \[\epsilon_{m}=\sup_{\theta\in[0,1]}\left|F(K_{m}(\theta))-K_{m}(\theta+\rho) \right|,\] either saturates or decreases below some prescribed tolerance (usually taken to be some small multiple of machine epsilon).
Several remarks are in order. First, we note that the \(C^{0}\) norm proposed for measuring the defect in Steps 3 and 4 can be replaced with more efficient weighted \(\ell_{1}\) norms, and this involves computations only in coefficient space rather than function evaluations. We also remark that if the Newton scheme does not converge in Step 4, then we conclude that the initial defect was not good enough and go back to step 3 to refine \(N_{0}\).
It should also be noted that the defect calculations proposed above provide only a heuristic indication of convergence. More reliable error bounds for the parameterization method, based on a-posteriori Kantorovich-type results, are obtained in [11]. See also [12]. Indeed, this kind of a-posteriori analysis can be combined with deliberate control of round-off errors to obtain mathematically rigorous computer assisted existence proofs. Early examples of this kind of argument are found in the work of [11, 12]. For a more modern treatment, including a thorough discussion of the current state of the literature, we refer the interested reader to the work of [13].
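For concreteness, here is a minimal sketch of how Step 1 might be carried out, using the exponential bump weights that are standard in the weighted Birkhoff averaging literature; the precise weights and implementation details of Section 2.3 may differ, and the function names are ours. The sketch assumes the orbit winds monotonically about a known interior point with a per-step angular jump of less than half a turn, so that the angle lift can be recovered with a simple unwrap.

```python
import numpy as np

def bump_weights(M):
    # w(t) = exp(-1/(t(1-t))) sampled on (0,1) and normalized to sum to one; these
    # are the weights commonly used for weighted Birkhoff averages.
    t = np.arange(1, M + 1) / (M + 1.0)
    w = np.exp(-1.0 / (t * (1.0 - t)))
    return w / w.sum()

def rotation_number(orbit, center):
    """orbit: array of shape (M+1, 2) of iterates on a (suspected) invariant circle;
    center: a point enclosed by the circle. Returns the weighted average of the
    per-step angle increments, an approximation of the rotation number mod 1."""
    ang = np.unwrap(np.arctan2(orbit[:, 1] - center[1], orbit[:, 0] - center[0]))
    increments = (ang[1:] - ang[:-1]) / (2.0 * np.pi)
    w = bump_weights(len(increments))
    return np.sum(w * increments) % 1.0
```

The same weights can be used to accelerate the averages that produce the Fourier coefficients sampled in Step 2.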
## 4 Examples
### A quadratic family of maps: area preserving Henon
As a first example, consider the area-preserving Henon map, \(F:\mathbb{R}^{2}\to\mathbb{R}^{2}\) as described in [14], and given by the formula
\[F(x,y)=\left(\begin{array}{c}x\cos(\alpha)-(y-x^{2})\sin(\alpha)\\ x\sin(\alpha)+(y-x^{2})\cos(\alpha)\end{array}\right).\]
One checks that the determinant of the Jacobian matrix \(DF(x,y)\) is one for all \((x,y)\in\mathbb{R}^{2}\), so that the system is area preserving as advertised. The dynamics of the system are studied for a number of parameter values \(\alpha\) in the book of [1]. In particular, numerical simulations suggest that the system admits quasiperiodic invariant circles and \(K\)-periodic systems of such. The map can be seen as a linear rotation matrix at the origin, plus a quadratic nonlinearity. There is one (and only one) fixed point - at the origin and of elliptic stability type - so that in a small enough neighborhood of the origin we expect the existence of large measure sets of KAM tori. This expectation is supported by numerical simulations, as seen for example in Figure 4 for \(\alpha=\cos^{-1}(0.24)\).
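The determinant computation is easy to verify symbolically; a quick sketch (using sympy, which is not needed anywhere else in this discussion):

```python
import sympy as sp

x, y, alpha = sp.symbols('x y alpha', real=True)
F = sp.Matrix([x * sp.cos(alpha) - (y - x**2) * sp.sin(alpha),
               x * sp.sin(alpha) + (y - x**2) * sp.cos(alpha)])
# The Jacobian determinant simplifies to 1, so the map is area preserving.
print(sp.simplify(F.jacobian([x, y]).det()))
```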
#### 4.1.1 A worked example: period 1 invariant circle
We now describe in some detail the computation of a period one invariant circle for the area preserving Henon map.
* **Step 0:** consider the three initial conditions \(p_{0},q_{0},r_{0}\in\mathbb{R}^{2}\) given by \[p_{0}=\left(\begin{array}{c}0.1\\ 0.0\end{array}\right),\qquad q_{0}=\left(\begin{array}{c}0.4\\ 0.0\end{array}\right),\qquad\text{and}\qquad r_{0}=\left(\begin{array}{c}0.3 \\ -0.44\end{array}\right).\]
\begin{table}
\begin{tabular}{c|c|c|c} \(M\) & \(\rho_{M}(\mathbf{p_{0}})\) & \(\rho_{M}(\mathbf{q_{0}})\) & \(\rho_{M}(\mathbf{r_{0}})\) \\ \hline
100 & 0.211095710088270 & 0.206164038365342 & 0.196863099485937 \\
500 & 0.211095709965501 & 0.206174513248940 & 0.197503558666674 \\
1000 & 0.211095709965479 & 0.206174514865070 & 0.197628415757003 \\
5000 & 0.211095709965481 & 0.206174514865715 & 0.199431995293399 \\
10,000 & 0.211095709965478 & 0.206174514865712 & 0.199737097322017 \\
50,000 & 0.211095709965486 & 0.206174514865718 & 0.199823145343572 \\
100,000 & 0.211095709965480 & 0.206174514865710 & 0.199984739391916 \\
110,000 & 0.211095709965478 & 0.206174514865708 & 0.199990822989916 \\
120,000 & 0.211095709965479 & 0.206174514865704 & 0.199994461862213 \\
150,000 & 0.211095709965479 & 0.206174514865705 & 0.199998753698169 \\
200,000 & 0.211095709965478 & 0.206174514865702 & 0.19999988701773 \\ \end{tabular}
\end{table}
Table 1: Numerically computed values of the rotation number as a function of \(M\) for the three initial conditions \(p_{0}\), \(q_{0}\), and \(r_{0}\). This is denoted \(\rho_{M}(\cdot)\), where \(\cdot\) is one of the initial conditions and \(M\) between one hundred and two hundred thousand. The computations suggest that the orbits of \(p_{0}\) and \(q_{0}\) are quasiperiodic, while the rotation number of the orbit of \(r_{0}\) varies stochastically after the 4th digit.
One million iterates of each initial condition are illustrated in Figure 5, with a zoom in on the orbit of \(r_{0}\), illustrating that the orbit appears to be chaotic rather than quasiperiodic. This appearance is confirmed by the rotation number calculations given in Table 1. Based on these results, and for the rest of the Section, we focus on the orbit of \(q_{0}\).
* **Step 1:** Based on the results of step 0, we can say with confidence that the rotation number associated with the orbit of \(q_{0}\) has \[\rho\approx\rho_{120,000}=0.206174514865704,\] which is likely correct except possibly in the last decimal place.
* **Step 2:** Using the rotation number computed in the last step, we sample the Fourier coefficients in the higher modes. We note that with \(M=1,000\) we already appeared to have seven correct figures in the rotation number calculation. So we will compute Fourier coefficients with an orbit of only this length.
Figure 4: Phase space structure for the area preserving Henon map: near the origin the dynamics are close to pure rotation and we see a large set of invariant circles. These get more and more distorted further from the origin, and eventually there appears to be a \(1:5\) resonance which gives rise to a family of systems of invariant circles with 5 topological components. Further from the origin the dynamics appears to be chaotic.
Let \(p_{n}=(a_{n},b_{n})\) denote the \(n\)-th Fourier vector and \[\|p_{n}\|=\max(|a_{n}|,|b_{n}|),\] with \(|\cdot|\) the complex absolute value. Sampling the coefficients for \(n=2,4,6,8,10\), we have \[\|p_{2}\| =2.0\times 10^{-2}\] \[\|p_{4}\| =4.3\times 10^{-3}\] \[\|p_{6}\| =6.5\times 10^{-4}\] \[\|p_{8}\| =4.8\times 10^{-5}\] \[\|p_{10}\| =8.4\times 10^{-6}\] Based on the observed decay rate, we guess that we should reach machine precision at roughly \(n=30\). Being a little conservative, we take \(K=5\) and truncate to \(N=2^{K}=32\) Fourier modes. (Powers of 2 are desirable if the implementation employs the FFT).
* **Step 3:** We now compute the Fourier series for \(N_{0}=5\), from 10,000 data points. This is about 10 percent of the modes to be used in the Newton scheme. This leads to a trigonometric polynomial that we refer to as \[\tilde{K}(\theta)=\sum_{n=-5}^{5}\left(\begin{array}{c}a_{n}\\ b_{n}\end{array}\right)e^{2\pi in\theta}.\]
Figure 5: Three orbits: one million iterates of the area preserving Henon map for the three initial conditions \(\mathbf{p_{0}}=(0.1,0.0)\), \(\mathbf{q_{0}}=(0.4,0.0)\) and \(\mathbf{r_{0}}=(0.3,-0.44)\) (green, blue and red respectively). Visual inspection suggests that the green and blue curves are diffeomorphic to circles, while zooming in on the point cloud generated by the orbit of \(\mathbf{r}_{0}\) reveals fractal structure and suggests chaotic dynamics. This suggestion is reinforced by the quantitative data in Table 1.
The initial defect associated with this approximate solution is already \(\epsilon\leq 9.2\times 10^{-4}\). We therefore consider this a good initial approximation and define \(K_{0}=\tilde{K}\).
* **Step 4:** We run the Newton iteration and obtain defects \[\epsilon_{1} =2.2\times 10^{-6}\] \[\epsilon_{2} =4.1\times 10^{-12}\] \[\epsilon_{3} =6.1\times 10^{-13}\] \[\epsilon_{4} =6.0\times 10^{-13}\] and the conjugacy error stagnates. The Newton scheme executes in \(0.062\) seconds. Running again from the same initial condition \(K_{0}\) with \(N=64\), the next power of two Fourier modes, results in a final conjugacy error of \(\epsilon=1.6\times 10^{-14}\) and takes \(0.18\) seconds. The next power \(N=128\) takes \(0.38\) seconds and results in a conjugacy error of \(\epsilon=6.1\times 10^{-16}\), which is finally on the order of double precision machine epsilon. Truncating at \(N=256\) Fourier coefficients results in a \(0.9\) second runtime, and does not improve the conjugacy error. Indeed, we see that the initial \(N=32\) calculation was already nearly optimal.
We provide a few additional details regarding the numerical implementation in this example. Let \(a=\{a_{n}\}_{n\in\mathbb{Z}}\) and \(b=\{b_{n}\}_{n\in\mathbb{Z}}\) denote the unknown Fourier series coefficients for the parameterization \(K\). Then
\[F(K(\theta))=\sum_{n\in\mathbb{Z}}\left(\begin{array}{c}\cos(\alpha)a_{n}- \sin(\alpha)b_{n}+\sin(\alpha)(a*a)_{n}\\ \sin(\alpha)a_{n}+\cos(\alpha)b_{n}-\cos(\alpha)(a*a)_{n}\end{array}\right)e^{ 2\pi in\theta},\]
where
\[(a*a)_{n}=\sum_{k\in\mathbb{Z}}a_{n-k}a_{k},\]
denotes discrete convolution. Recalling that
\[K(\theta+\rho)=\sum_{n\in\mathbb{Z}}e^{2\pi in\rho}\left(\begin{array}{c}a_ {n}\\ b_{n}\end{array}\right)e^{2\pi in\theta},\]
then the unfolded conjugacy equation \(F(K(\theta))=(1+\beta)K(\theta+\rho)\) is satisfied if and only if the Fourier coefficients on the left equal the Fourier coefficients on the right, and we require that
\[\left(\begin{array}{c}\cos(\alpha)a_{n}-\sin(\alpha)b_{n}+\sin(\alpha)(a*a) _{n}\\ \sin(\alpha)a_{n}+\cos(\alpha)b_{n}-\cos(\alpha)(a*a)_{n}\end{array}\right)=( 1+\beta)e^{2\pi in\rho}\left(\begin{array}{c}a_{n}\\ b_{n}\end{array}\right)\quad\text{for }n\in\mathbb{Z}. \tag{18}\]
Moreover, noting that the invariant circle given by the data crosses the \(x\)-axis we choose the phase condition
\[K_{2}(0)=\sum_{n\in\mathbb{Z}}b_{n}=0.\]
Truncating at \(N\) Fourier modes leads to the system of \(2(2N+1)+1\) equations
\[b_{-N}+\ldots+b_{0}+\ldots b_{N} =0\] \[\cos(\alpha)a_{-N}-\sin(\alpha)b_{-N}+\sin(\alpha)(a*a)_{-N}^{N}-(1 +\beta)e^{-2\pi iN\rho}a_{-N} =0\] \[\sin(\alpha)a_{-N}+\cos(\alpha)b_{-N}-\cos(\alpha)(a*a)_{-N}^{N}-( 1+\beta)e^{-2\pi iN\rho}b_{-N} =0\] \[\vdots\] \[\cos(\alpha)a_{0}-\sin(\alpha)b_{0}+\sin(\alpha)(a*a)_{0}^{N}-(1 +\beta)a_{0} =0\] \[\sin(\alpha)a_{0}+\cos(\alpha)b_{0}-\cos(\alpha)(a*a)_{0}^{N}-(1 +\beta)b_{0} =0\] \[\vdots\] \[\cos(\alpha)a_{N}-\sin(\alpha)b_{N}+\sin(\alpha)(a*a)_{N}^{N}-(1 +\beta)e^{2\pi iN\rho}a_{N} =0\] \[\sin(\alpha)a_{N}+\cos(\alpha)b_{N}-\cos(\alpha)(a*a)_{N}^{N}-(1 +\beta)e^{2\pi iN\rho}b_{N} =0\]
in the \(2(2N+1)+1\) unknowns \(\beta,a_{-N},b_{-N},\ldots,a_{0},b_{0},\ldots,a_{N},b_{N}\). Here
\[(a*a)_{n}^{N}=\sum_{k_{1}+k_{2}=n\atop-N\leq k_{1},k_{2}\leq N}a_{k_{1}}a_{k_{ 2}},\]
is the truncated discrete convolution. Newton's method is used to solve this system.
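A compact sketch of how the truncated residual might be evaluated in practice is given below; the variable names and the coefficient ordering (\(n=-N,\ldots,N\)) are our own choices. Newton's method can then be applied to the stacked real and imaginary parts of this residual, with the Jacobian either assembled as in the operator expression below or approximated by finite differences.

```python
import numpy as np

def henon_residual(beta, a, b, alpha, rho):
    """Truncated residual: the phase condition sum(b_n) = 0 together with
    Equation (18) for the modes n = -N..N. a, b: complex arrays of length 2N+1."""
    N = (len(a) - 1) // 2
    n = np.arange(-N, N + 1)
    rot = np.exp(2j * np.pi * n * rho)
    # Truncated discrete convolution (a*a)_n, keeping only the modes -N..N.
    aa = np.convolve(a, a)[N:3 * N + 1]
    res_a = (np.cos(alpha) * a - np.sin(alpha) * b + np.sin(alpha) * aa
             - (1.0 + beta) * rot * a)
    res_b = (np.sin(alpha) * a + np.cos(alpha) * b - np.cos(alpha) * aa
             - (1.0 + beta) * rot * b)
    phase = np.sum(b)
    return phase, res_a, res_b
```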
A higher level representation is obtained as follows. Let \(a=\{a_{n}\}_{n\in\mathbb{Z}}\) and \(b=\{b_{n}\}_{n\in\mathbb{Z}}\) denote the unknown sequences of Fourier coefficients and define the "diagonal" linear operator \(R_{\rho}\) on an infinite sequence by
\[(R_{\rho}a)_{n}=e^{2\pi in\rho}a_{n}. \tag{19}\]
Inspired by the conditions given in Equation (18), we define the mapping
\[\mathcal{F}(\beta,a,b)=\left(\begin{array}{c}\sum_{n\in\mathbb{Z}}b_{n}\\ \cos(\alpha)a-\sin(\alpha)b+\sin(\alpha)a*a-(1+\beta)R_{\rho}a\\ \sin(\alpha)a+\cos(\alpha)b-\cos(\alpha)a*a-(1+\beta)R_{\rho}b\end{array} \right),\]
and seek a zero of \(\mathcal{F}\). Note that, for a number \(\delta\) and infinite sequences \(u,v\), the action of the (formal) Frechet derivative on \((\delta,u,v)\) is given by
\[D\mathcal{F}(\beta,a,b)(\delta,u,v)=\left(\begin{array}{c}\sum_{n\in\mathbb{Z}}v_{n}\\ \cos(\alpha)u-\sin(\alpha)v+2\sin(\alpha)a*u-(1+\beta)R_{\rho}u-\delta R_{\rho}a\\ \sin(\alpha)u+\cos(\alpha)v-2\cos(\alpha)a*u-(1+\beta)R_{\rho}v-\delta R_{\rho}b\end{array}\right).\]
More useful is the following expression for the derivative as a "matrix of operators."
Let \(\mathbf{0}\) denote the zero Fourier sequence and \(\mathbf{1}\) the sequence of ones. Moreover, let \(\mathbf{R}_{\rho}\) denote the bi-infinite diagonal matrix with \(e^{2\pi in\rho}\) on the diagonal entries, and let \(\mathbf{S}_{\alpha},\mathbf{C}_{\alpha}\) denote the bi-infinite diagonal matrices with \(\sin(\alpha)\) and \(\cos(\alpha)\) on their diagonals respectively. Finally, let \(\mathbf{A}\) denote the (dense) bi-infinite matrix defined by the linear mapping
\[\mathbf{A}h=a*h.\]
The matrix for \(\mathbf{A}\) is easily worked out by considering its action on the basis for bi-infinite sequence space given by sequences with only one non-zero entry. The classical result is that \(\mathbf{A}\) is a Toeplitz matrix for the bi-infinite sequence \(a\). Then the derivative can be represented as
\[D\mathcal{F}(\beta,a,b)=\left(\begin{array}{ccc}0&\mathbf{0}&\mathbf{1}\\ -\mathbf{R}_{\rho}a&\mathbf{C}_{\alpha}+2\mathbf{S}_{\alpha}\mathbf{A}-(1+\beta)\mathbf{R}_{\rho}&-\mathbf{S}_{\alpha}\\ -\mathbf{R}_{\rho}b&\mathbf{S}_{\alpha}-2\mathbf{C}_{\alpha}\mathbf{A}&\mathbf{C}_{\alpha}-(1+\beta)\mathbf{R}_{\rho}\end{array}\right).\]
Truncating the mapping \(\mathcal{F}\) and its derivative given above leads to a numerical implementation of the Newton scheme.
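For completeness, the following sketch assembles the truncated derivative as a dense matrix, with the Toeplitz block built explicitly; this is an illustrative implementation of ours (plain dense linear algebra), not the fast algorithms mentioned in Remark 2.4.

```python
import numpy as np

def henon_derivative_matrix(beta, a, b, alpha, rho):
    """Finite truncation of the block operator derivative above; a, b are complex
    coefficient arrays ordered n = -N..N (length m = 2N+1)."""
    m = len(a)
    N = (m - 1) // 2
    n = np.arange(-N, N + 1)
    I = np.eye(m)
    R = np.diag(np.exp(2j * np.pi * n * rho))
    C = np.cos(alpha) * I
    S = np.sin(alpha) * I
    # Toeplitz convolution matrix: (A h)_i = sum_j a_{i-j} h_j, truncated to |i-j| <= N.
    A = np.zeros((m, m), dtype=complex)
    for i in range(m):
        for j in range(m):
            if abs(i - j) <= N:
                A[i, j] = a[i - j + N]
    top = np.hstack([np.zeros((1, 1)), np.zeros((1, m)), np.ones((1, m))])
    mid = np.hstack([(-R @ a).reshape(m, 1), C + 2 * S @ A - (1 + beta) * R, -S])
    bot = np.hstack([(-R @ b).reshape(m, 1), S - 2 * C @ A, C - (1 + beta) * R])
    return np.vstack([top, mid, bot])
```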
### Period \(K\) circles in Henon
Another apparent feature of the phase space, readily visible in Figure 4, is what looks like a family of period 5 invariant circles. After visual inspection of the figure, we plot a trajectory using \((x_{0},y_{0})=(0.5,0)\) as our seed point, and observe that after 1000 iterates, the orbit appears to fill out the five circles shown in the left frame of Figure 6. We remark that each iterate jumps from one circle to the next circle to its left (counter clockwise rotation).
#### 4.2.1 Multiple shooting invariance equations
The idea is to follow the steps proposed in Section 3, with a few small modifications. After guessing a point on the period 5 system, we compute an orbit segment for the fifth iterate of \(F\), denoted \(F^{5}\), and compute the rotation number for the composition. This means that if we desire an orbit segment of length \(M\), we have to iterate \(F\) a total of \(5M\) times. Since all 5 circles have the same rotation number, this only has to be done once. Using the Birkhoff averages with an orbit segment of length \(5\times 9,000\) leads to \(\rho=0.190669478955264\) which has stabilized numerically to the last digit.
Figure 6: **A period five system of quasiperiodic invariant circles: left frame illustrates a numerical simulation of the area preserving system, and an orbit which appears to lie on the period 5 system of invariant circles. The right frame illustrates the image of the five Fourier series parameterizing the quasiperiodic system.**
Now let \(K_{1},K_{2},K_{3},K_{4},K_{5}\colon\mathbb{R}\to\mathbb{R}^{2}\) denote the desired parameterizations for the five component circles of the system. We use the weighted Birkhoff averages to compute (roughly) the decay rate of these Fourier series (to guess that the optimal truncation order is around \(N=200\)) and to approximate the first few Fourier coefficients in each case. Again, for this work we deal only with (shorter) orbit segments for the composition map \(F^{5}\). We stress that this just requires computing a long enough orbit for \(F\) and then neglecting all but every fifth point on the orbit.
Now, when it comes to the Newton method we work with the multiple shooting system of equations, so that the nonlinearity is still only quadratic (note that \(F^{5}\) is a polynomial map of degree \(2^{5}=32\)). Keeping in force the notation from Section 4.1.1, we define the mapping
\[\mathcal{F}_{5}(\beta,a^{1},b^{1},a^{2},b^{2},a^{3},b^{3},a^{4},b^{4},a^{5},b^ {5})=\]
\[\left(\begin{array}{c}\sum\limits_{n\in\mathbb{Z}}b_{n}^{1}\\ \cos(\alpha)a^{1}-\sin(\alpha)b^{1}+\sin(\alpha)a^{1}*a^{1}-R_{\rho}a^{2}\\ \sin(\alpha)a^{1}+\cos(\alpha)b^{1}-\cos(\alpha)a^{1}*a^{1}-R_{\rho}b^{2}\\ \cos(\alpha)a^{2}-\sin(\alpha)b^{2}+\sin(\alpha)a^{2}*a^{2}-R_{\rho}a^{3}\\ \sin(\alpha)a^{2}+\cos(\alpha)b^{2}-\cos(\alpha)a^{2}*a^{2}-R_{\rho}b^{3}\\ \cos(\alpha)a^{3}-\sin(\alpha)b^{3}+\sin(\alpha)a^{3}*a^{3}-R_{\rho}a^{4}\\ \sin(\alpha)a^{3}+\cos(\alpha)b^{3}-\cos(\alpha)a^{3}*a^{3}-R_{\rho}b^{4}\\ \cos(\alpha)a^{4}-\sin(\alpha)b^{4}+\sin(\alpha)a^{4}*a^{4}-R_{\rho}a^{5}\\ \sin(\alpha)a^{4}+\cos(\alpha)b^{4}-\cos(\alpha)a^{4}*a^{4}-R_{\rho}b^{5}\\ \cos(\alpha)a^{5}-\sin(\alpha)b^{5}+\sin(\alpha)a^{5}*a^{5}-(1+\beta)R_{\rho}a ^{1}\\ \sin(\alpha)a^{5}+\cos(\alpha)b^{5}-\cos(\alpha)a^{5}*a^{5}-(1+\beta)R_{\rho}b ^{1}\end{array}\right),\]
and have that if \((\beta,a^{1},b^{1},a^{2},b^{2},a^{3},b^{3},a^{4},b^{4},a^{5},b^{5})\) is a zero of \(\mathcal{F}_{5}\), then \(\beta=0\) and the \(a\)'s and \(b\)'s are the Fourier coefficient sequences of the parameterizations \(K_{1},K_{2},K_{3},K_{4}\), and \(K_{5}\) for the system of invariant circles. Note that while the map has more components, the nonlinearity is still only as complicated as that of \(F\); in this case, quadratic. The derivative of \(\mathcal{F}_{5}\) is easily computed. Truncating the map and its derivative leads to the numerical implementation of the Newton method. Note that all the operations and linear operators are as in the case of a period one circle. Only the number of components and the coupling is different. After implementing these adjustments we are able to compute the parameterizations to machine precision as in the earlier example. The resulting Fourier series are plotted in the right frame of Figure 6.
The phase space for the area preserving Henon map when \(\alpha=\arccos(-0.95)\) is illustrated in the left frame of Figure 7, and there is the suggestion of even longer systems of invariant circles. For example, repeating the procedure discussed in the preceding section using the initial condition \((x_{0},y_{0})=(0,-2.65)\) leads to the period 120 system of quasiperiodic invariant circles illustrated in the left frame of Figure 7. The Fourier mapping \(\mathcal{F}_{120}\) generalizes from \(\mathcal{F}_{5}\) in the obvious way. Fortunately, the individual circles are not terribly complicated harmonically, and 15 modes per circle appear to be enough to approximate the Fourier expansions well. Again, the Newton method converges and we obtain the parameterizations \(K_{1},\ldots,K_{120}\) whose images are illustrated in the center frame of Figure 7. The right frame illustrates the initial and final parameterizations at a zoom in on one of the 120 components.
### Computations for the Standard Map
For an example of a map with non-polynomial nonlinearity, consider the Standard Map of [10]. Since we are interested in secondary (contractible) invariant tori,
Figure 8: Phase space structure for the standard map with \(\alpha=\pi/4\): in the simulation illustrated in this figure, we have taken the first component modulo \(2\pi\) so the line at \(x=0\) is identified with the line at \(x=2\pi\). This has the effect of making the phase space into a cylinder, and there are primary invariant tori which appear as smooth curves running from left to right. In the present work however, we study the secondary tori associated with the elliptic fixed point at \((\pi,0)\). These tori are visible whether or not we compute modulo \(2\pi\).
Figure 7: **A period 120 system of quasiperiodic invariant circles: left frame is the phase space simulation and orbit data. Middle frame illustrated the images of Fourier parameterizations. Right frame is a close up of the initial and final parameterizations of a single component circle.**
we treat the map as a diffeomorphism \(F:\mathbb{R}^{2}\to\mathbb{R}^{2}\) given by the formula,
\[F(x,y)=\left(\begin{array}{c}x+y+\alpha\sin(x)\\ y+\alpha\sin(x)\end{array}\right). \tag{20}\]
That is, we only take results modulo \(2\pi\) in the first component of the map to produce graphical results.
One subtle question is whether to consider the phase space as \(\mathbb{R}^{2}\) or \(\mathbb{T}\times\mathbb{R}\). In the latter case, we take the first component of \(F\) defined in Equation (20) modulo \(2\pi\), forcing a periodicity in \(x\). A phase space simulation is illustrated in Figure 8 for a large value of \(\alpha\). Note that while there are many primary invariant circles (curves which wind around the cylinder in a non-trivial way) visible in this simulation, the main feature is the resonance zone near the elliptic fixed point at \((\pi,0)\). We remark that the secondary invariant circles about this fixed point (which are contractible on the cylinder) remain invariant even if we take the phase space to be \(\mathbb{R}^{2}\). The contractible invariant circles are the focus of this article, as they are in some sense more difficult to compute. This is because they cannot be treated using the skew product formulation, where a non-contractible invariant circle is written as the graph of a periodic function (1d computations).
#### 4.3.1 Period 1 Standard Map
Taking \(\alpha=\pi/4\), we consider the orbit of the point \(P=(\pi,1)\). Simulations suggest that the orbit is dense in an invariant circle, and we proceed as in the example of the period one computation for the area preserving Henon map discussed in Section 4.1.1, implementing the numerical recipe discussed in Section 3. Computing with \(12,000\) data points, we find the rotation number to be \(\rho\approx 0.871221766629878\). (Here there is a difference of \(5.551115\)e-\(16\) compared to the rotation number computed with \(11000\) points, and we trust roughly \(15\) of the \(16\) computed digits).
Truncating the parameterization to \(N_{0}=20\) Fourier modes, computing with only a length \(100\) orbit segment yields an approximate parameterization with initial defect of roughly \(0.1288\). Beginning with this as an initial approximation, the Newton method (truncated at \(N=50\) modes) converges to the solution illustrated in Figure 9. The conjugacy error of the final approximation is on the order of machine epsilon.
Taking \(\alpha=\pi/2\) and initial points \(P=[1.85,0.565]\), and \(P=[4.8155,0.5]\), we compute the Fourier parameterizations of period \(6\) and period \(24\) quasiperiodic systems of invariant circles using the ideas described in Section 4.2. These results are illustrated in Figures 10 and 11, and show that the multiple shooting parameterization method works also for non-polynomial nonlinearities.
We cap off this overview of the higher period standard map examples with the observation that the method described produces robust results with small sequence space error. However, the conjugacy error is not so easily controlled: while small, it has so far proved intractable to make arbitrarily small.
#### 4.3.2 Polynomial embedding of non-polynomial nonlinearities
In this section we include a few remarks about the implementation details for the nonlinearity in the standard map. Indeed, suppose that \(f\) is a period-\(1\) function given by
\[f(\theta)=\sum_{n\in\mathbb{Z}}a_{n}e^{2\pi in\theta},\]
Figure 10: A period 6 quasiperiodic system of invariant circles for the standard map: left figure is a phase space simulation and orbit segment data. Right frame illustrates the image of the converged Fourier approximation.
Figure 9: **A period 1 quasiperiodic invariant circle for the standard map:** phase space simulation and the converged parameterization of a quasiperiodic invariant circle with rotation number \(\rho\approx 0.871221766629878\) compute using 50 Fourier modes.
and that we want to compute the Fourier coefficients of the composition
\[\sin(f(\theta))=\sum_{n\in\mathbb{Z}}s_{n}e^{2\pi in\theta}.\]
One approach (perhaps the most natural) is to employ the FFT, and if \(f\) is an arbitrary band-limited function and the outer function \(g\) (here \(g=\sin\)) is smooth, then this in general provides the best known method for computing the Fourier coefficients of \(g\circ f\). In the present setting however, the functions being composed have additional structure. They are solutions of certain polynomial functional equations and, by appending these equations to the parameterization method, we obtain a new functional equation whose nonlinearity is only polynomial (in fact quadratic). This avoids the overhead of implementing the FFT, and more importantly overcomes the "numerical stagnation" of the coefficient decay of the composition at 10 or 20 multiples of machine precision - as is often observed when interpolation based methods for evaluating spectral coefficients are used.
The technique described here is given different names in different communities, for example automatic differentiation [16, 17], polynomial embedding [18, 19], and quadratic recast [15, 16] to name only a few. We refer to [1] and also to Chapter 4.7 of [14] for a more thorough discussion of the history of these ideas, going back to the 19th Century. We explain the idea for a period one invariant circle of the standard map.
Consider an invariant circle parameterized by
\[K(\theta)=\left(\begin{array}{c}K_{1}(\theta)\\ K_{2}(\theta)\end{array}\right)=\sum_{n\in\mathbb{Z}}\left(\begin{array}{c}a_{n}\\ b_{n}\end{array}\right)e^{2\pi in\theta}\]
which passes through the \(x\)-axis when \(\theta=0\). Then an appropriate phase condition is \(K_{2}(0)=0\), and we seek a zero of the operator
\[\Psi(a,b,\beta)=\left(\begin{array}{c}\sum_{n\in\mathbb{Z}}b_{n}\\ a+b+\alpha S(a)-(1+\beta)R_{\rho}a\\ b+\alpha S(a)-(1+\beta)R_{\rho}b\end{array}\right). \tag{21}\]
Figure 11: **A period 24 quasiperiodic system of invariant circles for the standard map: same left and right as the previous figure.**
Here \(a=\{a_{n}\},b=\{b_{n}\}\) are the Fourier coefficient sequences, \(R_{\rho}\) is the diagonal operator defined in coefficient space in Equation (19), and \(S\) denotes the map in coefficient space from \(a\) to the Fourier coefficients of \(\sin(K_{1}(\theta))\). Let \(C\) denote the complementary function which maps the Fourier coefficient sequence \(a\) to the Fourier coefficients of the function \(\cos(K_{1}(\theta))\).
We write \(S=S(K_{1}(\theta))\) and \(C=C(K_{1}(\theta))\) to denote the values of \(S\) and \(C\) at \(K_{1}\). Note that \(S,C\) have
\[\frac{d}{d\theta}S(K_{1}(\theta))=S^{\prime}(K_{1}(\theta))K_{1}^{\prime}( \theta)=CK_{1}^{\prime}(\theta),\]
and
\[\frac{d}{d\theta}C(K_{1}(\theta))=C^{\prime}(K_{1}(\theta))K_{1}^{\prime}( \theta)=-SK_{1}^{\prime}(\theta),\]
with initial conditions
\[S(0)=\sin\left(\sum_{n\in\mathbb{Z}}a_{n}\right),\qquad\text{and}\qquad C(0)= \cos\left(\sum_{n\in\mathbb{Z}}a_{n}\right).\]
Let \(s=\{s_{n}\}\), \(c=\{c_{n}\}\) denote the Fourier coefficient sequences of \(S\), and \(C\), and define the diagonal differentiation operator
\[D(a)_{n}=2\pi ina_{n},\qquad\quad n\in\mathbb{Z}.\]
Now suppose that \(a,b,c,s,\beta,\gamma,\omega\) is a zero of the operator
\[\Psi(a,b,c,s,\beta,\gamma,\omega)=\left(\begin{array}{c}\sum_{n\in\mathbb{Z} }b_{n}\\ \sum_{n\in\mathbb{Z}}s_{n}-\sin\left(\sum_{n\in\mathbb{Z}}a_{n}\right)\\ \sum_{n\in\mathbb{Z}}c_{n}-\cos\left(\sum_{n\in\mathbb{Z}}a_{n}\right)\\ a+b+\alpha s-(1+\beta)R_{\rho}a\\ b+\alpha s-(1+\beta)R_{\rho}b\\ Ds-c*Da-\gamma s+\omega c\\ Dc+c*Da-\gamma c-\omega s\end{array}\right). \tag{22}\]
It can be shown (using an argument similar to the proof of Lemma 2.3) that \(\gamma\) and \(\omega\) are unfolding parameters for the differential equations. That is, if the initial conditions are satisfied (i.e. the second and third components of \(\Psi\) are zero) and if the sixth and seventh components are zero, then \(\gamma=\omega=0\). In this case \(s\), \(c\) are the Fourier coefficient sequences of \(\sin(K_{1}(\theta))\) and \(\cos(K_{1}(\theta))\) respectively. It follows that \(a,b,\beta\) solve Equation (21). It then follows from the area preserving property of the standard map that \(\beta=0\). Then \(a,b\) are the Fourier coefficients of a parameterization of an invariant circle conjugate to irrational rotation \(\rho\). We stress that \(R_{\rho}\) and \(D\) are diagonal linear operators in Fourier space and that \(*\) is just the discrete convolution. The operator defined in Equation (22) is then linear except in the last two components where there appear quadratic nonlinearities. This is like a kind of "multiple shooting" for unwrapping compositions, and it is easily extended to the functional equations for periodic systems of invariant circles.
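In practice one still needs starting values for the auxiliary unknowns \(s\) and \(c\). One simple option (an illustrative choice of ours, not prescribed by the method) is to evaluate \(\sin(K_{1})\) and \(\cos(K_{1})\) on an oversampled grid and transform back, which also serves as a cross check on the converged embedded solution.

```python
import numpy as np

def sin_cos_coefficients(a):
    """Given coefficients a_n of K_1 (ordered n = -N..N), return approximate Fourier
    coefficients of sin(K_1(theta)) and cos(K_1(theta)) in the same ordering, e.g.
    as an initial guess for the unknowns s and c in the embedded system."""
    N = (len(a) - 1) // 2
    M = 4 * len(a)                       # oversampled grid to reduce aliasing
    theta = np.arange(M) / M
    modes = np.arange(-N, N + 1)
    K1 = np.real(np.exp(2j * np.pi * np.outer(theta, modes)) @ a)
    s_hat = np.fft.fft(np.sin(K1)) / M
    c_hat = np.fft.fft(np.cos(K1)) / M
    idx = np.concatenate([np.arange(M - N, M), np.arange(N + 1)])  # modes -N..N
    return s_hat[idx], c_hat[idx]
```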
## 5 Numerical continuation (discrete) for families of periodic invariant circles
While the recipe given in Section 3 is non-perturbative, requiring only finite data sampled from an invariant circle, it is well known (from KAM theory) that quasiperiodic invariant circles for area preserving maps typically appear in Cantor sets of large measure. We refer the reader to any of the classic books/lecture notes of [1, 1, 1, 10, 11, 12], and to their bibliographies for much more complete references. We only note that if \(\rho\) is irrational (say Diophantine), then \(h\rho\) is irrational (and likely Diophantine) for rational \(h\) not too small. Suppose now that \(\Gamma\) is a quasiperiodic invariant circle with rotation number \(\rho\) and that \(h\) is a rational number near \(1\). Heuristically speaking, it is probable that there exists a nearby quasiperiodic invariant circle \(\bar{\Gamma}\) with irrational rotation number \(h\rho\).
This suggests that, having found a parameterized invariant circle using the method of Section 3, we perform a kind of (discrete) continuation in the parameter \(\rho\). That is, suppose that \(\rho_{0}\in[0,1]\) and \(K_{0}\colon\mathbb{R}\to\mathbb{R}^{2}\) is one periodic with
\[F(K_{0}(\theta))=K_{0}(\theta+\rho_{0}),\qquad\quad\theta\in[0,1].\]
Then we take \(\rho_{1}=h\rho_{0}\) (with \(h\) close to one) and use \(K_{0}\) as the initial condition for a Newton method solving the equation
\[F(K(\theta))=K(\theta+\rho_{1}).\]
The equation just stated is of course solved using the Newton scheme described in Section 2.4.1. Indeed, it is very likely that the new calculation can be performed with exactly the same phase condition. The continuation scheme applies also to period-\(K\) systems of quasiperiodic invariant circles.
This kind of discrete continuation (we use the word "discrete" to stress that the family of invariant circles does not vary continuously with \(\rho\)) has been used many times in the past. Indeed in [10] the authors show that, for planar symplectic maps, a precursor to "breakdown" or disappearance of a family of KAM tori is the blow-up of certain Sobolev norms associated with the parameterization.
More precisely, a typical invariant circle in the family is actually analytic, so that there is a \(\nu>1\) so that
\[\|K\|_{0}=\sum_{n\in\mathbb{Z}}\max(|a_{n}|,|b_{n}|)\nu^{|n|}<\infty.\]
Defining the Sobolev norms
\[\|K\|_{d}^{2}=\sum_{n\in\mathbb{Z}}(1+n^{2})^{d}\max(|a_{n}|^{2},|b_{n}|^{2}),\]
the main result of [10] can be summarized by saying that if \(\Gamma_{\infty}\) is "the last" invariant torus in the Cantor family (the torus at which the family breaks down) then there is a \(d\) so that the \(d\)-th Sobolev norm of the parameterization of \(\Gamma_{\infty}\) is infinite. This suggests choosing a \(K\in\mathbb{N}\) (perhaps \(K=10\)) and monitoring the Sobolev norms of the Fourier series coefficients for \(1\leq d\leq K\) during the numerical continuation. If one begins to blow up we conclude that we are near the breakdown. This can be used as an automatic stopping procedure for the continuation.
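A small sketch of the monitoring just described, written against the Sobolev norms defined above (coefficient ordering \(n=-N,\ldots,N\); the threshold and the cutoff \(d_{\max}\) are illustrative values of ours):

```python
import numpy as np

def sobolev_norm(a, b, d):
    # ||K||_d computed from the Fourier coefficients, following the definition above.
    N = (len(a) - 1) // 2
    n = np.arange(-N, N + 1).astype(float)
    weight = (1.0 + n**2) ** d
    return np.sqrt(np.sum(weight * np.maximum(np.abs(a)**2, np.abs(b)**2)))

def near_breakdown(a, b, d_max=10, threshold=1e8):
    # Blow-up of any of the first d_max Sobolev norms is taken as a signal that the
    # continuation is approaching the last torus of the family.
    return any(sobolev_norm(a, b, d) > threshold for d in range(1, d_max + 1))
```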
Consider for example a small circle, the orbit of \(P=[0,0.1]\) in the area preserving Henon map, and compute the parameterization and the rotation number \(\rho\approx 0.550640092644521\). We increment \(\rho\) by \(0.001\). If the result converges we increment again. If not, we increase the number of modes and decrease the increment. In this way we computed \(150\) invariant circles in the family, and finish with a final
increment on the order of \(10^{-13}\). At this point the Sobolev norms are large, and we terminate the continuation. The results are illustrated in Figure 12.
After the breakdown of the period one family we observe that there appears to be a period 7 family of quasiperiodic circles. We locate the period 7 orbit with a (finite dimensional) Newton scheme, check the elliptic stability type, and start our search for a period 7 family nearby. Once a single circle is found, we continue
Figure 12: **Discrete continuation of a period one family for the area preserving Henon map:** in the left frame we see the results of a phase space simulation and the initial small invariant circle. The right frame illustrates the results of the numerical continuation, making it clear that breakdown of the family involves loss of smoothness, as predicted by [10]. That is, the outermost invariant circles appear to have fairly sharp corners, indicating a blow up in the first Sobolev norm.
Figure 13: **Coefficient decay:** The left frame illustrates the decay of the coefficients for the last curve in our continuation computation, and we see that even this circle has quite rapid decay. However the right frame illustrates the Sobolev norms calculated along the entire family, and we see that after about 50 continuation steps they are quite large. We note that for the remaining 100 steps, the step size is very small. That is, most of the progress is made in the first 50 steps.
again until breakdown. The result is a fairly large set of quasiperiodic motions. The results are illustrated in Figure 14. Continuing in this way, we find - after the original period 1 and period 7 families - families of period 1, 19, 1, 55, 1, 12, 120, and 17. These results are illustrated in Figure 15.
## 6 Conclusions
The goal of this paper was to demonstrate that the method of weighted Birkhoff averages proves to be the perfect tool for initializing the parameterization method for invariant tori. We have provided detailed example calculations illustrating the approach for classic polynomial and non-polynomial examples. Moreover, we described and implemented a multiple shooting version of the parameterization method for simultaneously computing period-\(d\) systems of invariant circles for \(d\) as large as 120. We also discussed a quadratic recast/automatic differentiation scheme which reduces the implementation of the Newton scheme to diagonal linear operators and discrete convolutions. We also introduced a global unfolding parameter for the parameterization method which is built directly into the nonlinear conjugacy equation. This avoids the need for introducing new parameters in the linear equations at each step of the
Figure 14: **Filling in the phase space: after the period one family we restart the continuation on a period 7 family and continue until breakdown. The union of the results yields a fairly large region of phase space covered by the two Cantor sets.**
Newton method. These ideas can be combined with basic numerical schemes to compute large sets of quasiperiodic motions. Taken together, the approach described here provides a flexible general toolkit for computing systems of invariant circles for area preserving maps.
A natural future direction will be to extend the approach taken here to invariant 2-tori in volume preserving maps, for example combining the ergodic averages for 2-tori used in [16] with the parameterization method for volume preserving maps developed in [14, 15]. We note that our unfolding parameter argument extends directly to this case. Extension to invariant tori in higher dimensional symplectic maps should be straightforward, but justifying the unfolding parameter will require considering Calabi invariants. The utility of Calabi invariants in the parameterization method is discussed at length in [15]. Another valuable extension is to modify these ideas for application to the parameterization of invariant tori for Hamiltonian ODEs, as discussed in [13, 14]. Indeed, the idea
Figure 15: **Results of ten different continuations:** Note there are no phase space samples in this picture. Only plots of images of Fourier parameterizations. In total – starting from the origin and building out – we see families of period 1 (teal), 7 (green), 1 (magenta), 19 (thin red band), 1 (blue), 55 (thin bands just after blue), 1 (black), 12 (pink), 120 (black spots around the pink family), and 17 (blue). Some of the families are very thin. We do not claim that there are no other tori in this region.
of combining rapidly converging Birkhoff averages with Newton schemes for solving invariance equations is so natural it is clear there will be many additional extensions and applications.
## 7 Acknowledgements
The authors would like to thank Evelyn Sander for illuminating conversations which inspired the present work. In particular, she explained the (then quite recent) results about weighted Birkhoff averages to the second author during the 2014 AIMS Conference on Dynamical Systems and Applications in Madrid. The authors also thank Rafael de la Llave for a number of additional helpful discussions. In particular, the idea of using the area preserving property of the dynamical system as a means to formulate an appropriate unfolding parameter for the parameterization method emerged during conversations with the second author during his visit to FAU in 2015. Conversations with Alex Haro and Jordi-Lluis Figueras are also gratefully acknowledged. The work of the second author was partially supported by NSF grant DMS-1813501 during some of the work on this project.
|
2303.15898
|
Nonlinear Markov Chains with an Aggregator and their Applications
|
We study the properties of a subclass of stochastic processes called
discrete-time nonlinear Markov chains with an aggregator. In these chains, the
next period's distribution of the process depends on both the current state of
the process and on a real-valued function of the current distribution of the
process. For these chains, we provide conditions for the uniqueness of an
invariant distribution and a simple method to find the unique invariant
distribution, which do not rely on contraction arguments. Instead, the approach
is based on flexible monotonicity properties imposed on the nonlinear Markov
kernel. We demonstrate the necessity of these monotonicity conditions to prove
the uniqueness of an invariant distribution by simple examples. We apply our
findings to analyze stationary distributions in strategic queueing systems,
identify conditions under which a class of nonlinear equations in
$\mathbb{R}^{n}$ has a unique solution, and investigate the properties of
wealth distributions in large dynamic economies.
|
Bar Light
|
2023-03-28T11:23:00Z
|
http://arxiv.org/abs/2303.15898v4
|
# Nonlinear Markov Chains with an Aggregator and their Applications
###### Abstract:
We study the properties of a subclass of stochastic processes called discrete-time nonlinear Markov chains with an aggregator. In these chains, the next period's distribution of the process depends on both the current state of the process and on a real-valued function of the current distribution of the process. We provide conditions for the uniqueness of an invariant distribution for these chains, which do not rely on contraction arguments. Instead, the approach is based on flexible monotonicity properties imposed on the nonlinear Markov kernel. We demonstrate the necessity of these monotonicity conditions to prove the uniqueness of an invariant distribution by simple examples. We apply our findings to analyze stationary distributions in strategic queueing systems, identify conditions under which a class of nonlinear equations in \(\mathbb{R}^{n}\) has a unique solution, and investigate the properties of wealth distributions in large dynamic economies.
## 1 Introduction
Nonlinear Markov chains are stochastic processes in which the distribution of the process in the next period depends on both the current state of the chain and the current distribution. These chains have been extensively studied in various fields, including the McKean-Vlasov process (McKean Jr, 1966), mean-field games (Huang et al., 2006; Lasry and Lions, 2007), population games (Sandholm, 2010), and evolutionary biology (Kolokoltsov, 2010). Nonlinear Markov chains with an aggregator are a subclass of nonlinear Markov chains, where the next period's distribution of the process depends on both the current state of the chain and a real-valued function of the current distribution that is called an aggregator.1 These chains naturally appear in large dynamic economies, e.g., the evolution of the wealth distribution in heterogeneous-agent macroeconomics models (Aiyagari, 1994) or the evolution of industry dynamics (Weintraub et al., 2008), and in the evolution of opinion dynamics (Kolokoltsov, 2010).
Footnote 1: The terminology comes from the game theory literature where the distribution of the process represents the distribution of players’ states (Acemoglu and Jensen, 2015; Light and Weintraub, 2022). We keep this terminology for the current paper despite the fact that we study general nonlinear Markov chains that are not necessarily related to game theory.
In this paper we study discrete-time nonlinear Markov chains with an aggregator and provide conditions that ensure the uniqueness of an invariant distribution for these chains without relying on contraction arguments. Our approach to prove uniqueness is based on monotonicity properties imposed on the nonlinear Markov kernel. These monotonicity conditions are flexible and can be tailored to the specific chain being studied (see Example 1 in Section 2.3). We provide simple examples that demonstrate the necessity of these monotonicity conditions in proving the uniqueness of an invariant distribution (see Examples 2 and 3 in Section 2.4). In Section 3 we discuss three applications where our results can be naturally applied. The first is a strategic G/G/1 queueing system where the arrival of customers depends on the expected waiting times. The second is nonlinear equations in \(\mathbb{R}^{n}\) that lack contraction properties but have a unique solution. The third is the general evolution of wealth distributions from the economics literature, where we provide economic conditions on the agents' decisions that imply the uniqueness of the invariant wealth distribution. These applications showcase the versatility of our approach.
The paper Butkovsky (2014) provides conditions for the ergodicity of nonlinear Markov chains. However, these conditions are significantly stronger than those required for the ergodicity of standard linear Markov chains and are not applicable to many settings of interest including the three applications we provide in this paper. Additionally, in Example 4, we demonstrate that even for one of the most basic nonlinear Markov chains with two states,
which satisfies our uniqueness conditions, the chain is not ergodic and does not converge to the unique invariant distribution. This example highlights the limited applicability of any uniqueness result that relies on the ergodicity of the chains in the case of nonlinear Markov chains.
## 2 Main Results
In this section we present our main results. In Section 2.1 we introduce the nonlinear Markov chains that we study. In Section 2.2 we provide the notations and definitions that are needed to state and prove our results. In Section 2.3 we present the monotonicity conditions that imply that the nonlinear Markov chain has a unique invariant distribution. In Section 2.4 we show that these monotonicity conditions are necessary to prove uniqueness and in Section 2.5 we show that the nonlinear Markov chain does not necessarily converge to the unique invariant distribution even for a very simple two-state case.
### Nonlinear Markov Chains with an Aggregator
Let \(S\) be a Polish space and \(\mathcal{B}(S)\) be the Borel \(\sigma\)-algebra on \(S\). We denote by \(\mathcal{P}(S)\) the space of all probability measures on the measurable space \((S,\mathcal{B}(S))\). We study the properties of the nonlinear Markov chain \((X_{t})_{t\in\mathbb{N}}\) on \(S\) given by
\[X_{t+1}=w(X_{t},H(\mu_{t}),\epsilon_{t+1}) \tag{1}\]
where \(w:S\times\mathcal{H}\times E\to S\) is a measurable function, \(\mu_{t}\) is the law of \(X_{t}\), \(H:\mathcal{P}(S)\rightarrow\mathbb{R}\) is a measurable function that is called an aggregator, \(\mathcal{H}=\{H(\mu):\mu\in\mathcal{P}(S)\}\) is the image of \(H\), and \((\epsilon_{t})_{t\in\mathbb{N}}\) are independent and identically distributed (I.I.D) random variables that take values in a Polish space \(E\) with law \(\phi\).
Let \(Q\) be the nonlinear Markov kernel that describes the transitions of the nonlinear Markov chain \((X_{t})_{t\in\mathbb{N}}\), i.e.,
\[Q(x,H(\mu),B)=\phi(\epsilon\in E:w(x,H(\mu),\epsilon)\in B) \tag{2}\]
for all \(B\in\mathcal{B}(S)\). A probability measure \(\mu\in\mathcal{P}(S)\) is an invariant distribution of \(Q\) if \(T\mu=\mu\), i.e., \(\mu\) is a fixed point of \(T\) where the operator \(T:\mathcal{P}(S)\rightarrow\mathcal{P}(S)\) is given by
\[T\mu(B)=\int_{S}Q(x,H(\mu),B)\mu(dx)\]
for all \(B\in\mathcal{B}(S)\).
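For concreteness, the following minimal Python sketch (not part of the model above; the three-state kernel and the aggregator are hypothetical and chosen only for illustration) iterates the operator \(T\) on a finite state space, where \(T\mu\) reduces to the vector-matrix product \(\mu^{T}P(H(\mu))\). Iterating \(T\) is only a heuristic way to search for an invariant distribution; as Example 4 below shows, convergence is not guaranteed in general.

```python
import numpy as np

# Minimal sketch (hypothetical 3-state example): iterate the operator T from
# Section 2.1 on a finite state space, where T(mu) = mu^T P(H(mu)).

def aggregator(mu):
    # H(mu): the expected state, one example of an aggregator
    return float(np.dot(mu, np.arange(len(mu))))

def kernel(h):
    # P(h): a row-stochastic matrix whose rows depend on the aggregator value h;
    # a higher aggregator value means a smaller upward drift.
    p = 1.0 / (1.0 + h)
    return np.array([[1 - p,          p,     0.0],
                     [0.5 * (1 - p),  0.5,   0.5 * p],
                     [0.0,            1 - p, p]])

def T(mu):
    return mu @ kernel(aggregator(mu))

mu = np.array([1.0, 0.0, 0.0])          # initial distribution
for _ in range(500):
    mu = T(mu)
print("candidate invariant distribution:", mu,
      "residual:", np.linalg.norm(T(mu) - mu))
```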
We are interested in finding conditions that imply that \(T\) has a unique fixed point. The operator \(T\) is nonlinear and generally not a contraction so standard methods cannot be applied. Instead, we prove uniqueness by leveraging monotonicity conditions over the nonlinear Markov kernel \(Q\) that we now describe.
### Notations and Definitions
We assume throughout the paper that \(S\) is endowed with a closed partial order \(\geq\). We say that a function \(f:S\to\mathbb{R}\) is increasing if \(f(y)\geq f(x)\) whenever \(y\geq x\) and we say that \(f\) is strictly increasing if \(f(y)>f(x)\) whenever \(y>x\).
The space of probability measures \(\mathcal{P}(S)\) is endowed with the weak topology. A sequence of measures \(\mu_{n}\in\mathcal{P}(S)\) converges weakly to \(\mu\in\mathcal{P}(S)\) if for all bounded and continuous functions \(f:S\to\mathbb{R}\) we have
\[\lim_{n\to\infty}\int_{S}f(s)\mu_{n}(ds)=\int_{S}f(s)\mu(ds).\]
Let \(D\subseteq\mathbb{R}^{S}\) where \(\mathbb{R}^{S}\) is the set of all functions from \(S\) to \(\mathbb{R}\). When \(\mu_{1}\) and \(\mu_{2}\) are probability measures on \((S,\mathcal{B}(S))\), we write \(\mu_{2}\succeq_{D}\mu_{1}\) if
\[\int_{S}f(s)\mu_{2}(ds)\geq\int_{S}f(s)\mu_{1}(ds)\]
for all Borel measurable functions \(f\in D\) such that the integrals exist.
The binary relation \(\succeq_{D}\) is called a stochastic order. When \(D\) is the set of all increasing functions on \(S\), we write \(\mu_{2}\succeq_{SD}\mu_{1}\) and say that \(\mu_{2}\) first order stochastically dominates \(\mu_{1}\). If \(D\) is the set of all convex functions on \(S\), we write \(\mu_{2}\succeq_{CX}\mu_{1}\) and say that \(\mu_{2}\) dominates \(\mu_{1}\) in the convex stochastic order. If \(D\) is the set of all increasing and convex functions on \(S\), we write \(\mu_{2}\succeq_{ICX}\mu_{1}\) (see Shaked and Shanthikumar (2007) for a detailed textbook treatment of stochastic orders).
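As a small illustration (with hypothetical distributions), the following sketch checks first order stochastic dominance on a finite ordered set by comparing upper-tail probabilities.

```python
import numpy as np

# Minimal sketch: check mu2 >=_SD mu1 (first order stochastic dominance) for
# distributions on a hypothetical finite ordered set {0, ..., N}. On such a set,
# mu2 >=_SD mu1 iff every upper tail of mu2 is at least as heavy as that of mu1.

def dominates_sd(mu2, mu1):
    tail2 = np.cumsum(mu2[::-1])[::-1]   # mu2({t, ..., N}) for each t
    tail1 = np.cumsum(mu1[::-1])[::-1]
    return bool(np.all(tail2 >= tail1 - 1e-12))

mu1 = np.array([0.5, 0.3, 0.2])
mu2 = np.array([0.2, 0.3, 0.5])
print(dominates_sd(mu2, mu1))   # True: mu2 puts more mass on higher states
print(dominates_sd(mu1, mu2))   # False
```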
To prove that \(T\) has a unique fixed point it is required to assume that the linear Markov kernel \(Q(x,H(\mu),\cdot)\) has a unique invariant distribution for every fixed aggregator \(H(\mu)\). That is, the operator \(M_{H(\mu)}:\mathcal{P}(S)\to\mathcal{P}(S)\) has a unique fixed point where \(M_{H(\mu)}\) is given by
\[M_{H(\mu)}\theta(B)=\int_{S}Q(x,H(\mu),B)\theta(dx).\]
Note that when the Markov kernel \(Q\) does not depend on the aggregator \(H(\mu)\), then the operator \(T\) reduces to \(M_{H(\mu)}\). Hence, if \(M_{H(\mu)}\) has more than one invariant distribution,
then \(Q\) generally has more than one invariant distribution. Thus, the following property is necessary to prove that \(Q\) has at most one invariant distribution.
**Definition 1**: _(Property (U)). We say that \(Q\) satisfies Property (U) if for any \(H(\mu)\in{\cal H}\), the operator \(M_{H(\mu)}\) has a unique fixed point \(\mu_{H(\mu)}\)._
A stronger version of Property (U) says that the Markov kernel \(M_{H(\mu)}^{n}\theta\) converges weakly to \(\mu_{H(\mu)}\) for any probability measure \(\theta\in{\cal P}(S)\).
**Definition 2**: _(Property (C)). We say that \(Q\) satisfies Property (C) if \(Q\) satisfies Property (U) and \(M_{H(\mu)}^{n}\theta\) converges weakly to \(\mu_{H(\mu)}\) for any probability measure \(\theta\in{\cal P}(S)\) and any \(H(\mu)\in{\cal H}\) where \(\mu_{H(\mu)}\) is the unique fixed point of \(M_{H(\mu)}\)._
Property (C) can be established using standard results from the theory of Markov chains in general state spaces (see Meyn and Tweedie (2012)). When the state space \(S\) is finite Property (C) can be established by assuming that \(M_{H(\mu)}\) is irreducible and aperiodic for every \(H(\mu)\) and Property (U) can be established by assuming that \(M_{H(\mu)}\) is irreducible for every \(H(\mu)\).
We say that \(Q\) is decreasing in \(\mu\) with respect to \(\succeq_{D}\) if for each \(x\in S\), we have \(Q(x,H(\mu_{1}),\cdot)\succeq_{D}Q(x,H(\mu_{2}),\cdot)\) whenever \(H(\mu_{2})\geq H(\mu_{1})\). Similarly, \(Q\) is increasing in \(x\) with respect to \(\succeq_{D}\) if for each \(H(\mu)\in{\cal H}\), we have \(Q(x_{2},H(\mu),\cdot)\succeq_{D}Q(x_{1},H(\mu),\cdot)\) whenever \(x_{2}\geq x_{1}\). The key assumption that implies that the operator \(T\) has at most one fixed point relates to the following monotonicity and preservation properties.
**Definition 3**: _Let \(D\subseteq\mathbb{R}^{S}\)._
_We say that \(Q\) is \(D\)-decreasing if \(Q\) is decreasing in \(\mu\) with respect to \(\succeq_{D}\)._
_We say that \(Q\) is \(D\)-preserving if for all \(H(\mu)\in{\cal H}\) the function_
\[v(x):=\int f(y)Q(x,H(\mu),dy)\]
_is in \(D\) whenever \(f\in D\)._
Note that when \(D\) is the set of all increasing functions then \(\succeq_{D}\) reduces to the standard stochastic dominance order and \(Q\) is increasing in \(x\) with respect to \(\succeq_{D}\) if and only if \(Q\) is \(D\)-preserving (see Shaked and Shanthikumar (2007)). In the case that \(Q\) is increasing in \(x\), Property (C) can be established using results from the theory of monotone Markov chains. These results typically require a splitting condition (see Bhattacharya and Lee (1988) and Kamihigashi and Stachurski (2014)) and hold in a wide range of applications.
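In finite-state examples these monotonicity conditions are straightforward to verify numerically. The sketch below (reusing the hypothetical kernel from the earlier sketch, with \(D\) the set of increasing functions) checks that each row of the kernel is \(\succeq_{SD}\)-increasing in the state, so that \(Q\) is \(D\)-preserving, and \(\succeq_{SD}\)-decreasing in the aggregator value, so that \(Q\) is \(D\)-decreasing.

```python
import numpy as np

# Minimal sketch: with D the set of increasing functions, Q is D-preserving iff
# each row Q(x, h, .) is >=_SD-increasing in x, and D-decreasing iff each row is
# >=_SD-decreasing in the aggregator value h. Checked on a hypothetical kernel
# over a grid of aggregator values.

def sd_geq(p, q):
    return np.all(np.cumsum(p[::-1])[::-1] >= np.cumsum(q[::-1])[::-1] - 1e-12)

def kernel(h):
    p = 1.0 / (1.0 + h)
    return np.array([[1 - p,          p,     0.0],
                     [0.5 * (1 - p),  0.5,   0.5 * p],
                     [0.0,            1 - p, p]])

hs = np.linspace(0.0, 2.0, 21)
preserving = all(sd_geq(kernel(h)[x + 1], kernel(h)[x]) for h in hs for x in range(2))
decreasing = all(sd_geq(kernel(h1)[x], kernel(h2)[x])
                 for h1, h2 in zip(hs[:-1], hs[1:]) for x in range(3))
print("D-preserving:", preserving, " D-decreasing:", decreasing)
```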
We say that \(H\) is increasing with respect to \(\succeq_{D}\) if \(H(\mu_{2})\geq H(\mu_{1})\) whenever \(\mu_{2}\succeq_{D}\mu_{1}\).
A stochastic order \(\succeq_{D}\) is said to be closed with respect to weak convergence if \(\mu_{n}^{1}\succeq_{D}\mu_{n}^{2}\) for all \(n\), \(\mu_{n}^{1}\) converges weakly to \(\mu^{1}\), and \(\mu_{n}^{2}\) converges weakly to \(\mu^{2}\) imply \(\mu^{1}\succeq_{D}\mu^{2}\). Many stochastic orders of interest are closed with respect to weak convergence, e.g., the standard stochastic dominance order \(\succeq_{SD}\). For a textbook treatment of the closure properties of stochastic orders see Shaked and Shanthikumar (2007).
We say that \(H\) is continuous if \(\lim_{n\to\infty}H(\mu_{n})=H(\mu)\) whenever \(\mu_{n}\) converges weakly to \(\mu\). We say that \(Q\) is continuous if \(Q(x_{n},z_{n},\cdot)\) converges weakly to \(Q(x,z,\cdot)\) whenever \((x_{n},z_{n})\to(x,z)\).
Recall that a partially ordered set \((Z,\geq)\) is said to be a lattice if for all \(x,y\in Z\), \(\sup\{x,y\}\) and \(\inf\{y,x\}\) exist in \(Z\). \((Z,\geq)\) is a complete lattice if for all non-empty subsets \(Z^{\prime}\subseteq Z\) the elements \(\sup Z^{\prime}\) and \(\inf Z^{\prime}\) exist in \(Z\).
### Uniqueness Theorem
We now present conditions that imply that \(Q\) has a unique invariant distribution. We first provide conditions that imply that \(Q\) has at most one invariant distribution. These conditions are based on monotonicity properties of the nonlinear Markov kernel \(Q\). In particular, we show that under Property (C) or Property (U) and additional regularity conditions, when \(Q\) is \(D\)-preserving and \(D\)-decreasing, then \(Q\) has at most one invariant distribution.2 In Section 2.4 we show that these key order-theoretic conditions are necessary in order to establish uniqueness (see Examples 2 and 3).
Footnote 2: A special case of Theorem 1 in the framework of mean field games was derived in
Light and Weintraub (2022) to prove the uniqueness of mean field equilibrium in a subclass of mean field games.
In applications, verifying whether \(Q\) is \(D\)-preserving and \(D\)-decreasing is typically straightforward. In Section 3, we showcase various applications of Theorem 1, such as queueing systems and dynamic evolution of wealth distributions. In these cases, the monotonicity properties of \(Q\) naturally hold, reflecting underlying behavioral or economic assumptions in the studied dynamic systems.
**Theorem 1**: _Let \(D\subseteq\mathbb{R}^{S}\) be a non-empty set such that \(H\) is increasing with respect to \(\succeq_{D}\). Assume that \(Q\) is \(D\)-preserving and \(D\)-decreasing._
_Assume that either of the following conditions hold:_
_(i) \(Q\) satisfies Property (C) and \(\succeq_{D}\) is closed with respect to weak convergence._
_(ii) \(Q\) satisfies Property (U) and \((\mathcal{P}(S),\succeq_{D})\) is a complete lattice._
_Then \(Q\) has at most one invariant distribution._
Theorem 1 shows that \(Q\) has at most one invariant distribution, so an invariant distribution is unique whenever it exists. The existence of an invariant distribution follows by standard fixed-point arguments for the case where \(S\) is compact and \(Q\) is continuous, as Proposition 1 shows. Extending this existence result to non-compact state spaces remains an interesting research direction.
**Proposition 1**: _Suppose that \(H\) and \(Q\) are continuous and that \(S\) is compact. Then \(Q\) has an invariant distribution._
Condition (ii) of Theorem 1 is particularly useful for the case that \(S\) is a finite set or a compact set in \(\mathbb{R}\). For example, suppose that \(S\) is finite (say \(S=\{0,\ldots,N\}\) with the usual order) and that \(\mathcal{P}(S)\) is endowed with the partial order \(\succeq_{SD}\). In this case, it is immediate that \((\mathcal{P}(S),\succeq_{SD})\) is a complete lattice with
\[\sup\{\mu,\lambda\}(\{t,\ldots,N\})=\max\{\mu(\{t,\ldots,N\}),\lambda(\{t, \ldots,N\})\}\]
and
\[\inf\{\mu,\lambda\}(\{t,\ldots,N\})=\min\{\mu(\{t,\ldots,N\}),\lambda(\{t, \ldots,N\})\}\]
for all \(t=0,\ldots,N\) (recall that \(\mu\succeq_{SD}\lambda\) if and only if for every upper set \(B\) we have \(\mu(B)\geq\lambda(B)\) where \(B\in\mathcal{B}(S)\) is an upper set if \(x_{1}\in B\) and \(x_{2}\geq x_{1}\) imply \(x_{2}\in B\)). In a similar fashion, \((\mathcal{P}(S),\succeq_{SD})\) is a complete lattice when \(S\) is a compact set in \(\mathbb{R}\). For a discussion of stochastic orders that generate lattices of probability measures see Muller and Scarsini (2006).
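The displayed lattice operations can be computed directly from upper-tail probabilities; the following sketch does so for two hypothetical distributions on a finite ordered set.

```python
import numpy as np

# Minimal sketch of the lattice operations above on a hypothetical finite set
# S = {0, ..., N}: the sup and inf of two distributions under >=_SD are obtained
# from the pointwise max / min of their upper-tail probabilities.

def sd_sup(mu, lam):
    tails = np.maximum(np.cumsum(mu[::-1])[::-1], np.cumsum(lam[::-1])[::-1])
    return tails - np.append(tails[1:], 0.0)     # back to point masses

def sd_inf(mu, lam):
    tails = np.minimum(np.cumsum(mu[::-1])[::-1], np.cumsum(lam[::-1])[::-1])
    return tails - np.append(tails[1:], 0.0)

mu = np.array([0.5, 0.1, 0.4])
lam = np.array([0.2, 0.6, 0.2])
print(sd_sup(mu, lam), sd_inf(mu, lam))
```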
In applications, it may seem natural to select \(D\) as the set of all increasing functions. However, the versatility in choosing the set \(D\) in Theorem 1 is fruitful for proving uniqueness for various nonlinear Markov chains. Carefully selecting an appropriate set \(D\) can be essential for effectively utilizing Theorem 1. To demonstrate the importance of this choice, we now present an example of nonlinear auto-regressive Markov chains.
**Example 1**: _(Flexibility of the set \(D\)). Consider the following nonlinear Markov chain_
\[X_{t+1}=aX_{t}-H(\mu_{t})+\epsilon_{t+1}\]
_on \(\mathbb{R}\) where \(0<a<1\), \(\epsilon_{t}\) are I.I.D random variables with finite expectations, and \(H(\mu_{t})=\int m(x)\mu_{t}(dx)\) for some increasing function \(m\). Then, assuming that Property (C) holds, we can use Theorem 1 with \(D\) as the set of all increasing functions to show that the nonlinear Markov chain \((X_{t})_{t\in\mathbb{N}}\) has at most one invariant distribution._
_Now consider the nonlinear Markov chain_
\[(X_{1,t+1},X_{2,t+1})=(aX_{1,t}-H(\mu_{t})+\epsilon_{1,t+1},k(X_{2,t})+ \epsilon_{2,t+1})\]
_on \(\mathbb{R}^{2}\) where \(0<a<1\), \(\epsilon_{1,t},\epsilon_{2,t}\) are I.I.D random variables with finite expectations, \(k\) is a function that is not increasing, and \(H(\mu_{t}):=\int m(x_{1})\mu_{t}(d(x_{1},x_{2}))\) for some increasing function \(m\). In this case, we cannot use Theorem 1 with \(D\) as the set of all increasing functions because \(Q\) is not \(D\)-preserving. However, if we let \(D\) be the set of all functions that are increasing in the first argument, then one can easily see that \(Q\) is \(D\)-preserving and \(D\)-decreasing.3 Hence, a suitable choice of the set \(D\) can be crucial for applying Theorem 1._
Footnote 3: Let \(f\in\mathbb{R}^{\mathbb{R}^{2}}\) be increasing in the first argument. Then \(\int f(y_{1},y_{2})Q((x_{1},x_{2}),H(\mu),dy)=\int f(ax_{1}-H(\mu)+\epsilon_{1 },k(x_{2})+\epsilon_{2})\phi(d(\epsilon_{1},\epsilon_{2}))\) is increasing in the first argument so \(Q\) is \(D\)-preserving. It is immediate that \(Q\) is \(D\)-decreasing.
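A particle approximation makes the first chain of Example 1 easy to explore numerically. The sketch below uses the hypothetical choices \(m(x)=x\) (so \(H(\mu_{t})\) is the mean of \(\mu_{t}\)) and standard normal noise; in this linear specification the mean recursion \(\mathbb{E}X_{t+1}=(a-1)\mathbb{E}X_{t}\) contracts, so the iteration settles, although convergence is not guaranteed in general (see Section 2.5).

```python
import numpy as np

# Minimal sketch of the first chain in Example 1: a particle approximation of
# mu_t for X_{t+1} = a X_t - H(mu_t) + eps, with the hypothetical choices
# m(x) = x (so H(mu) is the mean) and standard normal noise.

rng = np.random.default_rng(0)
a, n_particles, n_steps = 0.8, 100_000, 200
x = rng.normal(size=n_particles)            # particles approximating mu_0
for _ in range(n_steps):
    h = x.mean()                            # H(mu_t) with m(x) = x
    x = a * x - h + rng.normal(size=n_particles)
print("empirical mean and std of the candidate invariant distribution:",
      round(float(x.mean()), 3), round(float(x.std()), 3))
```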
### Necessity of the Monotonicity Conditions that Imply Uniqueness
In this section we provide simple examples showing that the key conditions of Theorem 1 that imply that \(Q\) has a unique invariant distribution, namely that \(Q\) is \(D\)-preserving and that \(Q\) is \(D\)-decreasing, are necessary.
**Example 2**: _(\(Q\) is not \(D\)-decreasing). Suppose that \(S=\{0,1\}\) and \(H(\mu)=\mu(\{1\})\). Assume that \(D\) is the set of all increasing functions, so \(\succeq_{D}\) is the standard stochastic dominance \(\succeq_{SD}\) and \(H\) is increasing with respect to \(\succeq_{SD}\)._
_Consider the nonlinear Markov chain_
\[Q^{\prime}=\begin{array}{r|rr}&0&1\\ 0&1-\min(0.5,\mu(\{1\}))&\min(0.5,\mu(\{1\}))\\ 1&0.5&0.5\end{array}\]
_It is easy to see that \(\pi(\{1\})=1/2=\pi(\{0\})\) and \(\pi^{\prime}(\{1\})=0,\pi^{\prime}(\{0\})=1\) are invariant distributions of \(Q^{\prime}\). Note that \(Q^{\prime}\) satisfies condition (ii) of Theorem 1 and is \(D\)-preserving. Hence, all the conditions of Theorem 1 are satisfied except that \(Q^{\prime}\) is \(D\)-decreasing, and \(Q^{\prime}\) has two invariant distributions._
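The claim is easy to verify numerically: both candidate distributions are fixed points of the operator \(T\) associated with \(Q^{\prime}\), as the following sketch confirms.

```python
import numpy as np

# Minimal sketch verifying the two invariant distributions claimed in Example 2.

def T(mu):                                   # mu = (mu({0}), mu({1}))
    p = min(0.5, mu[1])
    Q = np.array([[1 - p, p],
                  [0.5,  0.5]])
    return mu @ Q

for pi in (np.array([0.5, 0.5]), np.array([1.0, 0.0])):
    print(pi, "->", T(pi))                   # both are fixed points of T
```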
**Example 3**: _(\(Q\) is not \(D\)-preserving). Suppose that \(S=\{0,1,2\}\) and \(H(\mu)=\mu(\{1\})+\mu(\{2\})\). Assume that \(D\) is the set of all increasing functions, so \(\succeq_{D}\) is the standard stochastic dominance \(\succeq_{SD}\) and \(H\) is increasing with respect to \(\succeq_{SD}\)._
_Consider the nonlinear Markov chain_
\[Q^{\prime\prime}=\begin{array}{r|rrr}&0&1&2\\ 0&1/3&1/3&1/3\\ 1&0&H(\mu)&1-H(\mu)\\ 2&H(\mu)&0&1-H(\mu)\\ \end{array}\]
_The distributions \(\pi(\{0\})=\pi(\{1\})=\pi(\{2\})=1/3\) and \(\pi^{\prime}(\{0\})=0,\pi^{\prime}(\{1\})=1,\pi^{\prime}(\{2\})=0\) are invariant distributions of \(Q^{\prime\prime}\). The Markov chain \(Q^{\prime\prime}\) satisfies condition (ii) of Theorem 1 and is \(D\)-decreasing. Hence, all the conditions of Theorem 1 are satisfied except that \(Q^{\prime\prime}\) is \(D\)-preserving,4 and \(Q^{\prime\prime}\) has two invariant distributions._
Footnote 4: Note that \(Q\) is \(D\)-preserving if and only if \(Q\) is increasing in \(x\) for all \(H\). \(Q^{\prime\prime}\) is not increasing in \(x\) as \(Q^{\prime\prime}(1,H(\mu),\{1,2\})>Q^{\prime\prime}(2,H(\mu),\{1,2\})\) for any \(H(\mu)>0\).
### Non-Convergence to the Invariant Distribution
Theorem 1 and Proposition 1 provide sufficient conditions for the uniqueness of an invariant distribution for the nonlinear Markov kernel \(Q\). However, these results do not provide conditions under which the sequence of measures \(\mu_{t}\) converges weakly to the unique invariant distribution of \(Q\). Unfortunately, the following example shows that even in a very simple case, the monotonicity conditions that imply uniqueness do not imply convergence. This is in sharp contrast with the contraction approach to prove the uniqueness of an invariant distribution that guarantees convergence (e.g., Butkovsky (2014)). This example illustrates the restricted applicability of any uniqueness result that depends on the ergodicity of chains in the context of nonlinear Markov chains.
**Example 4**: _(\(\mu_{t}\) does not converge to the unique invariant distribution). Suppose that \(S=\{0,1\}\) and \(H(\mu)=\mu(\{1\})\). Assume that \(D\) is the set of all increasing functions, so \(\succeq_{D}\) is the standard stochastic dominance \(\succeq_{SD}\) and \(H\) is increasing with respect to \(\succeq_{SD}\). Consider the nonlinear Markov chain_
\[Q=\begin{array}{r|rr}&0&1\\ 0&\mu(\{1\})&\mu(\{0\})\\ 1&\mu(\{1\})&\mu(\{0\})\\ \end{array}\]
_It is easy to see that \(\pi(\{1\})=1/2=\pi(\{0\})\) is the unique invariant distribution of \(Q\) and that \(Q\) satisfies all the conditions of Theorem 1. Note that for any initial distribution \(\mu_{1}(\{1\})=\gamma\) and \(\mu_{1}(\{0\})=1-\gamma\) with \(\gamma\neq 1/2\), \(\mu_{t}\) does not converge to \(\pi\), since \(\mu_{t}(\{1\})=\gamma\) and \(\mu_{t}(\{0\})=1-\gamma\) for odd \(t\), and \(\mu_{t}(\{1\})=1-\gamma\) and \(\mu_{t}(\{0\})=\gamma\) for even \(t\)._
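The oscillation is immediate to reproduce numerically, as in the following sketch with the hypothetical initial value \(\gamma=0.3\).

```python
import numpy as np

# Minimal sketch of Example 4: mu_{t+1}({1}) = mu_t({0}), so mu_t oscillates with
# period two and never reaches the unique invariant distribution (1/2, 1/2)
# unless it starts there.

mu = np.array([0.7, 0.3])                    # (mu({0}), mu({1})), i.e. gamma = 0.3
for t in range(6):
    Q = np.array([[mu[1], mu[0]],
                  [mu[1], mu[0]]])
    mu = mu @ Q
    print(t + 1, mu)                         # alternates between (0.3, 0.7) and (0.7, 0.3)
```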
## 3 Applications
In this section we present our applications. In Section 3.1 we study the invariant distribution of a G/G/1 queueing system where arrivals depend on the expected waiting times. In Section 3.2 we study non-linear equations that do not satisfy contraction properties and have a unique solution. In Section 3.3 we study the invariant distribution of wealth distributions in dynamic economies where the rate of returns depend on the aggregate savings in the economy.
### Strategic Behavior in Queuing Systems
A considerable body of literature exists on strategic behavior in queueing systems. Within this domain, the inter-arrival times often depend on the queue length or expected waiting time, as agents, being strategic, can opt not to join the queue if they foresee an extended waiting period (Hassin and Haviv, 2003). Typically, queueing systems are examined in the steady state, making it essential to investigate the existence of a unique steady state generated by the system to obtain robust comparative statics results. We will now demonstrate how Theorem 1 can be utilized to establish that there is, at most, one invariant distribution for the waiting time distribution within a general \(G/G/1\) strategic queueing system, wherein the inter-arrival times are contingent on the expected waiting time.
Consider a \(G/G/1\) queue where the time between the \(t\)th and \((t+1)\)th arrivals is given by the random variable \(T_{t}\) and the service time of the \(t\)th customer is given by the random variable \(S_{t}\). Because agents are strategic, they are less likely to join the queue when the waiting time is longer. To describe this dependence, we assume that the time between arrivals depends on the expected waiting time and write \(T_{t}(\mathbb{E}(X_{t}))\), where \(X_{t}\) is the waiting time of the \(t\)th customer. When the expected waiting time is higher, fewer agents join the queue. We capture this dependence by assuming that \(T_{t}(\mathbb{E}(X_{t}))\succeq_{SD}T_{t}(\mathbb{E}(X_{t}^{\prime}))\) whenever \(\mathbb{E}(X_{t})\geq\mathbb{E}(X_{t}^{\prime})\). That is, the time between arrivals is stochastically higher when the expected waiting time is higher. The waiting times experienced by customers in the queue evolve according to the following nonlinear Markov chain:
\[X_{t+1}=\max(0,X_{t}+S_{t}-T_{t}(\mathbb{E}(X_{t}))).\]
It is easy to see that \(Q\) is \(D\)-preserving and \(D\)-decreasing with \(H(\mu_{t})=\mathbb{E}(X_{t})\) when \(D\) is the set of all increasing functions. Under the usual assumption that the queue does not explode, a standard argument from the Markov chain literature (see Meyn and Tweedie (2012)) can be used to show that Property (C) is satisfied. Hence, Theorem 1 implies that there exists at most one equilibrium steady-state distribution of waiting times. As a particular
example for this result, we study analytically an M/G/1 queuing system where the arrival rate depends on the expected waiting time.
**Example 5**: _(\(M/G/1\) queue). Consider an \(M/G/1\) queue, so the time between arrivals has an exponential distribution with parameter \(\lambda\) that depends on the expected waiting time. The service time is given by a general random variable \(S\). Suppose that the mean interarrival time equals the expected waiting time, so \(\lambda\) equals \(1\) over the expected waiting time.5 From the Pollaczek-Khinchin formula, the stationary expected waiting time is given by \(\lambda\mathbb{E}(S^{2})/(2(1-\lambda\mathbb{E}(S)))\). Hence, we must have \(1/\lambda=\lambda\mathbb{E}(S^{2})/(2(1-\lambda\mathbb{E}(S)))\), which yields_
Footnote 5: Note that we can apply Theorem 1 under any \(\lambda\) that is decreasing with respect to the expected waiting time.
\[\lambda=\frac{\sqrt{(\mathbb{E}(S))^{2}+2\mathbb{E}(S^{2})}-\mathbb{E}(S)}{ \mathbb{E}(S^{2})}\]
_and is valid when \(\lambda\mathbb{E}(S)<1\). For an \(M/M/1\) queue, \(S\) is an exponential random variable with parameter \(\mu\), so \(\mathbb{E}(S)=1/\mu\) and \(\mathbb{E}(S^{2})=2/\mu^{2}\), and we get \(\lambda=(\sqrt{5}-1)\mu/2\), so \(\lambda<\mu\)._
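The closed-form expressions are easy to check numerically. The sketch below uses a hypothetical service rate \(\mu=3\) and verifies both the general formula for \(\lambda\) and the \(M/M/1\) special case against the fixed-point equation.

```python
import numpy as np

# Minimal numeric check of Example 5 for the M/M/1 case: with E(S) = 1/mu and
# E(S^2) = 2/mu^2, the closed-form arrival rate is lambda = (sqrt(5) - 1) mu / 2,
# and it solves 1/lambda = lambda E(S^2) / (2 (1 - lambda E(S))).

mu = 3.0                                         # hypothetical service rate
ES, ES2 = 1.0 / mu, 2.0 / mu ** 2
lam = (np.sqrt(ES ** 2 + 2.0 * ES2) - ES) / ES2  # general formula from the example
print(lam, (np.sqrt(5.0) - 1.0) * mu / 2.0)      # both approximately 1.854
print(1.0 / lam, lam * ES2 / (2.0 * (1.0 - lam * ES)))   # both sides agree
```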
### Nonlinear Equations
The study of nonlinear systems of equations in \(\mathbb{R}^{n}\) has long been a significant area of interest in mathematics and its applications. Finding conditions that ensure a unique solution to such systems is crucial as it offers insights into the properties and stability of solutions, which in turn, have far-reaching implications across various fields, including operations, engineering, economics, and optimization (Ortega and Rheinboldt, 2000). It is generally uncommon to identify a comprehensive set of conditions that guarantee a unique global solution for a system of nonlinear equations in \(\mathbb{R}^{n}\) that do not satisfy contraction properties. We showcase the application of Theorem 1 to determine conditions that ensure a unique solution for a specific class of nonlinear equations, which we define subsequently. These conditions are based on monotonicity concerning the majorization order.
Let \(\Delta_{n}=\{\boldsymbol{x}\in\mathbb{R}^{n}:\sum_{i=1}^{n}x_{i}=1,x_{i}\geq 0 \ \forall i\}\) be the \(n\)-dimensional simplex. Consider a stochastic matrix \(\boldsymbol{P}(G(\boldsymbol{x}))\in\mathbb{R}^{n\times n}\) that is parameterized by \(G(\boldsymbol{x})\) where \(G:\Delta_{n}\to A\) and \(A\subseteq\mathbb{R}\) is the image of \(G\), i.e., \(P_{ij}(G(\boldsymbol{x}))\geq 0\), and \(\sum_{j=1}^{n}P_{ij}(G(\boldsymbol{x}))=1\) for all \(G(\boldsymbol{x})\in A\).
For \(\boldsymbol{x},\boldsymbol{y}\in\mathbb{R}^{n}\) write \(\boldsymbol{x}\geq_{m}\boldsymbol{y}\) if \(\sum_{j=k}^{n}x_{j}\geq\sum_{j=k}^{n}y_{j}\) for all \(1\leq k\leq n\) and \(\sum_{j=1}^{n}x_{j}=\sum_{j=1}^{n}y_{j}\) (the order \(\geq_{m}\) is sometimes called majorization between vectors in \(\mathbb{R}^{n}\)). We denote by \(\boldsymbol{P}_{i}(G(\boldsymbol{x}))\) the \(i\)th row of the matrix \(\boldsymbol{P}\).
The following Corollary follows immediately from Theorem 1 and Proposition 1 applied for first order stochastic dominance.
**Corollary 1**: _Let \(G:\Delta_{n}\to A\) be a continuous function that is increasing with respect to \(\geq_{m}\). The nonlinear system of equations \(\mathbf{x}=\mathbf{x}\mathbf{P}(G(\mathbf{x}))\) on \(\Delta_{n}\) where \(\mathbf{P}(G(\mathbf{x}))\) is a stochastic matrix that is parameterized by \(G(\mathbf{x})\) has a unique solution if the following three conditions hold:_
_(i) For all \(G(\mathbf{x})\in A\), \(i\geq i^{\prime}\), we have \(\mathbf{P}_{i}(G(\mathbf{x}))\geq_{m}\mathbf{P}_{i^{\prime}}(G(\mathbf{x}))\)._
_(ii) For all \(1\leq i\leq n\), \(G(\mathbf{x}^{\prime})\geq G(\mathbf{x})\), we have \(\mathbf{P}_{i}(G(\mathbf{x}))\geq_{m}\mathbf{P}_{i}(G(\mathbf{x}^{\prime}))\)._
_(iii) For all \(a\in A\), the linear system of equations \(\mathbf{x}=\mathbf{x}\mathbf{P}(a)\) for \(\mathbf{x}\in\Delta_{n}\) has a unique solution._
Corollary 1 leverages Theorem 1 to show that a class of nonlinear equations has a unique solution. This corollary can be used as a tool to generate nonlinear systems of equations that are known to have a unique solution. Theorem 1 can also be used to establish the uniqueness of solutions of nonlinear equations in ways other than Corollary 1 (see, e.g., Example 1).
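To illustrate Corollary 1, the following sketch uses a hypothetical two-state system in which \(G(\mathbf{x})=x_{2}\) is increasing with respect to \(\geq_{m}\) and the rows of \(\mathbf{P}\) satisfy conditions (i)-(iii). The corollary guarantees a unique solution of \(\mathbf{x}=\mathbf{x}\mathbf{P}(G(\mathbf{x}))\); the plain fixed-point iteration below is only a heuristic way to locate it, though it happens to converge in this example.

```python
import numpy as np

# Minimal sketch of Corollary 1 on a hypothetical 2-state system: G(x) = x_2 is
# increasing w.r.t. the majorization order, the rows of P are ordered
# (condition (i)), decreasing in G(x) (condition (ii)), and each fixed P(a) has a
# unique stationary distribution (condition (iii)).

def P(g):
    q, qp = 0.3 * (1.0 - g), 0.5 + 0.3 * (1.0 - g)
    return np.array([[1.0 - q,  q],
                     [1.0 - qp, qp]])

x = np.array([1.0, 0.0])
for _ in range(100):
    x = x @ P(x[1])
print(x, np.linalg.norm(x - x @ P(x[1])))    # unique solution, here (0.625, 0.375)
```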
### Wealth Distributions
In heterogeneous-agents macroeconomic models (see Stachurski (2022) for a recent textbook treatment of economic dynamic models), agents determine their consumption, savings, and allocation of savings across financial assets based on their current wealth level in each period.
An extensive literature exists on these models, specifically focusing on the analysis of stationary equilibria and the associated stationary wealth distributions. Despite the vast body of research, the conditions ensuring the uniqueness of equilibrium are restricted to a handful of special cases.6 In this section, we employ Theorem 1 to demonstrate that there can be, at most, one stationary wealth distribution equilibrium for a typical progression of wealth dynamics in these models, given that agents' savings increase with the rate of returns and their current wealth levels. We proceed to outline the model.
Footnote 6: For instance, see Light (2020, 2021); Achdou et al. (2022).
There are \(n\) random variables \(R_{1},\ldots,R_{n}\) that represent returns from different financial assets \(i=1,\ldots,n\). Each agent has a policy \(\mathbf{g}=(g_{1},\ldots,g_{n})\) that determines the amount of wealth the agent allocates to asset \(i\) when the agent has wealth level \(x\), i.e., \(g_{i}(R_{1},\ldots,R_{n},x)\) is the amount that an agent with wealth \(x\) allocates to asset \(i\) when the returns are \((R_{1},\ldots,R_{n})\).7 In applications, the agent's policy is typically derived from a consumption-saving dynamic programming problem. In our analysis, we assume a general policy function that can be
deduced from rational agents, behavioral biases (Acemoglu and Jensen, 2018), or learning algorithms.
The return \(R_{i}\) of asset \(i\) is parameterized by an aggregator \(H(\mu)\) that depends on the wealth distribution \(\mu\) and is increasing with respect to first order stochastic dominance (typically \(H(\mu)\) is the expected value or the total savings in applications). The agent's wealth evolution is described by the following nonlinear Markov chain:
\[X_{t+1}=\sum_{i=1}^{n}g_{i}(R_{1}(H(\mu_{t})),\ldots,R_{n}(H(\mu_{t})),X_{t})R_ {i}(H(\mu_{t}))+Y_{t+1}\]
where \(X_{t}\) is the agent's current wealth, \(Y_{t}\) is the random income of the agent in period \(t\), and \(\mu_{t}\) is the wealth distribution in period \(t\). Hence, if an agent with current wealth \(X\) invests \(g_{i}\) in asset \(i\), then the next period's wealth is the sum of the investments times the returns plus the next period's income. A stationary equilibrium is defined by an invariant distribution of the nonlinear Markov chain \(X\), with the interpretation that this distribution represents the long-run equilibrium wealth distribution of the economy (Aiyagari, 1994).
Under standard assumptions on the policy function, Property (C) holds, the policy function is increasing in the current wealth, i.e., savings increase when the agent's wealth is higher, and the returns are decreasing in \(H(\mu)\) with respect to first order stochastic dominance, i.e., the returns are (stochastically) lower when the total savings are higher. Hence, from Theorem 1 with the standard first order stochastic dominance, there is at most one stationary wealth distribution equilibrium if the total amount of savings \(\sum g_{i}\) is decreasing in \(H(\mu)\) which is equivalent to the property that total savings are increasing in the rate of returns (in the economics literature, this property means that the substitution effect dominates the income effect). Hence, the key condition that implies the uniqueness of a stationary wealth distribution equilibrium is that savings increase with the rate of returns (a special case of this result with one financial asset and rational agents is studied in Light (2020)).
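A stylized, one-asset version of these dynamics can be simulated with a particle cloud. The primitives in the sketch below (a linear savings rule \(g(x)=0.7x\), a deterministic gross return \(R(H)=1.1/(1+0.02H)\) that is decreasing in aggregate savings, and log-normal income) are hypothetical and chosen only so that the monotonicity conditions above hold.

```python
import numpy as np

# Minimal sketch of the wealth dynamics above with a single asset and stylized,
# hypothetical primitives: savings g(x) = 0.7 x (increasing in wealth), a gross
# return R(H) = 1.1 / (1 + 0.02 H) that is decreasing in the aggregator
# H(mu) = E[X], and i.i.d. log-normal income. A particle cloud approximates mu_t.

rng = np.random.default_rng(0)
x = np.ones(100_000)                        # initial wealth of the particle cloud
for _ in range(300):
    H = x.mean()                            # aggregator: average wealth
    R = 1.1 / (1.0 + 0.02 * H)              # return falls when aggregate savings rise
    x = 0.7 * x * R + rng.lognormal(mean=0.0, sigma=0.5, size=x.size)
print("mean and std of the candidate stationary wealth distribution:",
      round(float(x.mean()), 2), round(float(x.std()), 2))
```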
## 4 Conclusions
This paper studies discrete-time nonlinear Markov chains with an aggregator and establishes conditions that imply the uniqueness of an invariant distribution for these chains. Unlike traditional approaches, our conditions do not rely on contraction properties of the chains, but rather on flexible monotonicity properties. We demonstrate, using simple examples, that the monotonicity conditions are necessary to prove the uniqueness of an invariant distribution. We apply our results to strategic queueing systems, non-linear equations, and the evolution of wealth distributions in heterogeneous-agent dynamic economies. We believe that our
results can be applied to other models where the flexible monotonicity conditions we provide hold naturally.
There remain important open questions concerning nonlinear Markov chains with an aggregator. For example, to prove the existence of an invariant distribution we assumed that the state space is compact, which does not hold in many applications of interest. Furthermore, we showed that even in a simple chain with two states, the chain does not converge to the unique invariant distribution. Developing alternative procedures or algorithms that guarantee convergence to the unique invariant distribution is essential for computational approaches that are employed in practical applications.
Appendix
We will use the following Proposition to prove Theorem 1 (see Corollary 2.5.2 in Topkis (2011)).
**Proposition 2**: _Suppose that \(Z\) is a nonempty complete lattice, \(E\) is a partially ordered set, and \(f\) is an increasing function from \(Z\times E\) into \(Z\). Then the greatest and least fixed points of \(f(z,e)\) exist and are increasing in \(e\) on \(E\)._
**Proof of Theorem 1.** Let \(\theta_{1},\theta_{2}\in\mathcal{P}(S)\) and assume that \(\theta_{1}\succeq_{D}\theta_{2}\). Let \(\mu_{1},\mu_{2}\) be two invariant distributions of \(Q\). Assume without loss of generality that \(H(\mu_{2})\geq H(\mu_{1})\) and let \(f:S\rightarrow\mathbb{R}\) be a function such that \(f\in D\). We have
\[\int_{S}f(x)M_{H(\mu_{2})}\theta_{2}(dx) =\int_{S}\int_{S}f(y)Q(x,H(\mu_{2}),dy)\theta_{2}(dx)\] \[\leq\int_{S}\int_{S}f(y)Q(x,H(\mu_{1}),dy)\theta_{2}(dx)\] \[\leq\int_{S}\int_{S}f(y)Q(x,H(\mu_{1}),dy)\theta_{1}(dx)\] \[=\int_{S}f(x)M_{H(\mu_{1})}\theta_{1}(dx)\]
Thus, \(M_{H(\mu_{1})}\theta_{1}\succeq_{D}M_{H(\mu_{2})}\theta_{2}\). The first inequality follows from the fact that \(Q\) is \(D\)-decreasing. The second inequality follows from the facts that \(\theta_{1}\succeq_{D}\theta_{2}\) and \(Q\) is \(D\)-preserving. We conclude that \(M_{H(\mu_{1})}^{n}\theta_{1}\succeq_{D}M_{H(\mu_{2})}^{n}\theta_{2}\) for all \(n\in\mathbb{N}\).
Assume that condition (i) of the theorem holds. The fact that \(Q\) satisfies Property (C) implies that \(M_{H(\mu_{i})}^{n}\theta_{i}\) converges weakly to the unique fixed point of \(M_{H(\mu_{i})}\) which is given by \(\mu_{H(\mu_{i})}\) for \(i=1,2\). Because \(\mu_{1}\) and \(\mu_{2}\) are invariant distributions of \(Q\) we have \(\mu_{H(\mu_{i})}=\mu_{i}\) for \(i=1,2\). Because \(\succeq_{D}\) is closed with respect to weak convergence, we have \(\mu_{1}\succeq_{D}\mu_{2}\). Using the fact that \(H\) is increasing with respect to \(\succeq_{D}\) implies \(H(\mu_{1})\geq H(\mu_{2})\).
We conclude that if \(\mu_{1}\) and \(\mu_{2}\) are invariant distributions of \(Q\) then \(H(\mu_{1})=H(\mu_{2})\). Thus, \(Q(x,H(\mu_{1}),B)=Q(x,H(\mu_{2}),B)\) for all \(x\in S\) and \(B\in\mathcal{B}(S)\). Because \(Q\) satisfies Property (U), the operators \(M_{H(\mu_{1})}\) and \(M_{H(\mu_{2})}\) have unique fixed points. Thus, \(\mu_{H(\mu_{1})}=\mu_{H(\mu_{2})}\), i.e., \(\mu_{1}=\mu_{2}\). We conclude that if an invariant distribution of \(Q\) exists, it is unique.
Now assume that condition (ii) of the theorem holds. Define the order \(\geq^{\prime}\) on \(\mathbb{R}\) by \(x\geq^{\prime}y\) whenever \(y\geq x\). Under this assumption, the arguments above imply that the operator \(M_{H(\mu)}\) is increasing from \(\mathcal{P}(S)\times\mathcal{H}\) to \(\mathcal{P}(S)\) on the complete lattice \((\mathcal{P}(S),\succeq_{D})\) when \(\mathcal{H}\) is endowed with \(\geq^{\prime}\). Then by applying Proposition 2 to the increasing operator \(M\) we have \(\mu_{H(\mu_{1})}\succeq_{D}\mu_{H(\mu_{2})}\), i.e., \(\mu_{1}\succeq_{D}\mu_{2}\). Now we can use the same arguments as the arguments for
the case that condition (i) holds to show that if an invariant distribution of \(Q\) exists, it is unique.
In order to establish the existence of an invariant distribution, we will use the following Schauder-Tychonoff fixed-point theorem (see Corollary 17.56 in Aliprantis and Border (2006)).
**Proposition 3**: _(Schauder-Tychonoff) Let \(K\) be a nonempty, compact, convex subset of a locally convex Hausdorff space, and let \(f:K\to K\) be a continuous function. Then the set of fixed points of \(f\) is compact and nonempty._
**Proof of Proposition 1.** Because \(S\) is compact, \(\mathcal{P}(S)\) is (weakly) compact (see Aliprantis and Border (2006)). Clearly \(\mathcal{P}(S)\) is convex. \(\mathcal{P}(S)\) endowed with the weak topology is a locally convex Hausdorff space. Thus, if \(T\) is continuous, we can apply the Schauder-Tychonoff fixed-point theorem to conclude that \(T\) has a fixed point.
To show that \(T\) is continuous, take a sequence of measures \(\{\mu_{n}\}\) and assume that it converges weakly to \(\mu\).
Let \(f:S\to\mathbb{R}\) be a continuous and bounded function. Because \(Q\) and \(H\) are continuous, we have \(\lim_{n\to\infty}\int_{S}f(y)Q(x_{n},H(\mu_{n}),dy)=\int_{S}f(y)Q(x,H(\mu),dy)\) whenever \(x_{n}\to x\). Define \(m_{n}(x):=\int_{S}f(y)Q(x,H(\mu_{n}),dy)\) and \(m(x):=\int_{S}f(y)Q(x,H(\mu),dy)\). Then \(m_{n}(x)\) is a uniformly bounded sequence of functions such that \(m_{n}(x_{n})\to m(x)\) whenever \(x_{n}\to x\). Thus, by Lebesgue's Convergence Theorem for varying measures (see Theorem 3.5 in Serfozo (1982) and Section 5 in Feinberg et al. (2020)) we have \(\lim_{n\to\infty}\int m_{n}(x)\mu_{n}(dx)=\int m(x)\mu(dx)\). Hence,
\[\lim_{n\to\infty}\int_{S}f(x)T\mu_{n}(dx) =\lim_{n\to\infty}\int_{S}\int_{S}f(y)Q(x,H(\mu_{n}),dy)\mu_{n}(dx)\] \[=\int_{S}\int_{S}f(y)Q(x,H(\mu),dy)\mu(dx)\] \[=\int_{S}f(x)T\mu(dx).\]
Thus, \(T\mu_{n}\) converges weakly to \(T\mu\). We conclude that \(T\) is continuous. Thus, by the Schauder-Tychonoff fixed-point theorem, \(T\) has a fixed point.
|
2308.01547
|
Get the Best of Both Worlds: Improving Accuracy and Transferability by
Grassmann Class Representation
|
We generalize the class vectors found in neural networks to linear subspaces
(i.e.~points in the Grassmann manifold) and show that the Grassmann Class
Representation (GCR) enables the simultaneous improvement in accuracy and
feature transferability. In GCR, each class is a subspace and the logit is
defined as the norm of the projection of a feature onto the class subspace. We
integrate Riemannian SGD into deep learning frameworks such that class
subspaces in a Grassmannian are jointly optimized with the rest model
parameters. Compared to the vector form, the representative capability of
subspaces is more powerful. We show that on ImageNet-1K, the top-1 error of
ResNet50-D, ResNeXt50, Swin-T and Deit3-S are reduced by 5.6%, 4.5%, 3.0% and
3.5%, respectively. Subspaces also provide freedom for features to vary and we
observed that the intra-class feature variability grows when the subspace
dimension increases. Consequently, we found the quality of GCR features is
better for downstream tasks. For ResNet50-D, the average linear transfer
accuracy across 6 datasets improves from 77.98% to 79.70% compared to the
strong baseline of vanilla softmax. For Swin-T, it improves from 81.5% to 83.4%
and for Deit3, it improves from 73.8% to 81.4%. With these encouraging results,
we believe that more applications could benefit from the Grassmann class
representation. Code is released at https://github.com/innerlee/GCR.
|
Haoqi Wang, Zhizhong Li, Wayne Zhang
|
2023-08-03T06:02:02Z
|
http://arxiv.org/abs/2308.01547v1
|
Get the Best of Both Worlds: Improving Accuracy and Transferability by Grassmann Class Representation
###### Abstract
We generalize the class vectors found in neural networks to linear subspaces (i.e. points in the Grassmann manifold) and show that the Grassmann Class Representation (GCR) enables the simultaneous improvement in accuracy and feature transferability. In GCR, each class is a subspace and the logit is defined as the norm of the projection of a feature onto the class subspace. We integrate Riemannian SGD into deep learning frameworks such that class subspaces in a Grassmannian are jointly optimized with the rest model parameters. Compared to the vector form, the representative capability of subspaces is more powerful. We show that on ImageNet-1K, the top-1 error of ResNet50-D, ResNeXt50, Swin-T and Deit3-S are reduced by 5.6%, 4.5%, 3.0% and 3.5%, respectively. Subspaces also provide freedom for features to vary and we observed that the intra-class feature variability grows when the subspace dimension increases. Consequently, we found the quality of GCR features is better for downstream tasks. For ResNet50-D, the average linear transfer accuracy across 6 datasets improves from 77.98% to 79.70% compared to the strong baseline of vanilla softmax. For Swin-T, it improves from 81.5% to 83.4% and for Deit3, it improves from 73.8% to 81.4%. With these encouraging results, we believe that more applications could benefit from the Grassmann class representation. Code is released at [https://github.com/innerlee/GCR](https://github.com/innerlee/GCR).
## 1 Introduction
The scheme deep feature\(\rightarrow\)fully-connected \(\rightarrow\)softmax\(\rightarrow\)cross-entropy loss has been the standard practice in deep classification networks. Columns of the weight parameter in the fully-connected layer are the class representative vectors and serve as the prototype for classes. The vector class representation has achieved huge success, yet it is not without imperfections. In the study of transferable features, researchers noticed a dilemma that representations with higher classification accuracy lead to less transferable features for downstream tasks [19]. This is connected to the fact that they tend to collapse intra-class variability of features, resulting in loss of information in the logits about the resemblances between instances of different classes [29]. The neural collapse phenomenon [34] indicates that as training progresses, the intra-class variation becomes negligible, and features collapse to their class means. As such, this dilemma inherently originates from the practice of representing classes by a single vector. This motivates us to study representing classes by high-dimensional subspaces.
Representing classes as subspaces in machine learning can be dated back, at least, to 1973 [49]. This core idea is re-emerging recently in various contexts such as clustering [54], few-shot classification [12, 41] and out-of-distribution detection [47], albeit in each case a different concrete instantiation was proposed. However, very few works study the subspace representation in large-scale classification, a fundamental computer vision task that benefits numerous downstream tasks. We propose the _Grassmann Class Representation_ (GCR) to fill this gap and study its impact on classification and feature transferability via extensive experiments. To be specific, each class \(i\) is associated with a linear subspace \(S_{i}\), and for any feature vector \(\mathbf{x}\), the \(i\)-th logit \(l_{i}\) is defined as the norm of its projection onto the subspace \(S_{i}\),
\[l_{i}:=\left\|\mathrm{proj}_{S_{i}}\mathbf{x}\right\|. \tag{1}\]
In the following, we answer the two critical questions,
1. How to effectively optimize the subspaces in training?
2. Is Grassmann class representation useful?
Several drawbacks and important differences in previous works make their methodologies hard to generalize to the large-scale classification problem. Firstly, their subspaces might not be learnable. In ViM [47], DSN [41] and the SVD formulation of [54], subspaces are obtained _post hoc_
by PCA-like operation on feature matrices without explicit parametrization and learning. Secondly, for works with learnable subspaces, their learning procedure for subspaces might not apply. For example, in RegressionNet [12], the loss involves _pairwise_ subspace orthogonalization, which does not scale when the number of classes is large because the computational cost will soon be infeasible. And thirdly, the objective of [54] is unsupervised subspace clustering, which needs substantial changes to adapt to classification.
It is well known that the set of \(k\)-dimensional linear subspaces form a Grassmann manifold, so finding the optimal subspace representation for classes is to optimize on the Grassmannian. Therefore, a natural solution to Question 1 is to use geometric optimization [13], which optimizes the objective function under the constraint of a given manifold. Points being optimized are moving along geodesics instead of following the direction of Euclidean gradients. We implemented an efficient Riemannian SGD for optimization in the Grassmann manifold in Algorithm 1, which integrates the geometric optimization into deep learning frameworks so that the subspaces in Grassmannian and the model weights in Euclidean are jointly optimized.
The Grassmann class representation sheds light on the incompatibility issue between accuracy and transferability. Features can vary in a high-dimensional subspace without harming the accuracy. We empirically verify this speculation in Section 5, which involves both CNNs (ResNet [16], ResNet-D [17], ResNeXt [52], VGG13-BN [42]) and vision transformers (Swin [26] and Deit3 [45]). We found that with larger subspace dimensions \(k\), the intra-class variation increases and the feature transferability improves. The classification performance of GCR is also superior to the vector form. For example, on ImageNet-1K, the top-1 error rates of ResNet50-D, ResNeXt50, Swin-T and Deit3-S are reduced relatively by 5.6%, 4.5%, 3.0%, and 3.5%, respectively.
To summarize, our contributions are three folds. (1) We propose the Grassmann class representation and learn the subspaces jointly with other network parameters with the help of Riemannian SGD. (2) We showed its superior accuracy on large-scale classification both for CNNs and vision transformers. (3) We showed that features learned by the Grassmann class representation have better transferability.
## 2 Related Work
Geometric Optimization[13] developed the geometric Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds in their seminal paper. Riemannian SGD was introduced in [6] with an analysis on convergence and there are variants such as Riemannian SGD with momentum [40] or adaptive [18]. Other popular Euclidean optimization methods such as Adam are also studied in the Riemannian manifold context [4]. [23] study the special case of \(\mathrm{SO}(n)\) and \(U(n)\) and uses the exponential map to enable Euclidean optimization methods for Lie groups. The idea was generalized into trivialization in [22]. Our Riemannian SGD Algorithm 1 is tailored for Grassmannian, so we use the closed-form equation for geodesics. Applications of geometric optimization include matrix completion [27, 25, 24, 32], hyperbolic taxonomy embedding [30], to name a few. [14] proposed the Grassmann discriminant analysis, in which features are modeled as linear subspaces.
Orthogonal ConstraintsGeometric optimization in deep learning is mainly used for providing orthogonal constraints in the design of network structure [15, 33], aiming to mitigate the gradient vanishing or exploding problems. Orthogonality are also enforced via regularizations [2, 51, 3, 37, 48]. Contrastingly, we do not change the network structures, and focus ourselves on the subspace form of classes. SiNN [39] uses the Stiefel manifold to construct Mahalanobis distance matrices in Siamese networks to improve embeddings in metric learning. It does not have the concept of classes.
Improving Feature DiversityOur GCR favors the intra-class feature variation by providing a subspace to vary. There are other efforts to encourage feature diversity. SoftTriplet loss [38] and SubCenterArcFace [10] model each class as local clusters with several centers or sub-centers. [55] uses a global orthogonal regularization to drive local descriptors spread out in the features space. [53] proposes to learn low-dimensional structures from the maximal coding rate reduction principle. The subspaces are estimated using PCA on feature vectors after the training.
Classes as SubspacesViM [47] uses a subspace to denote the out-of-distribution class, which is obtained via PCA-like postprocessing after training. \(k\)SCN [54] uses subspaces to model clusters in unsupervised learning. Parameters of models and subspaces are optimized alternatively in a wake-and-sleep fashion. CosineSoftmax [19] defines logits via the inner product between the feature and normalized class vector. Since the class vector is normalized to be unit length, it is regarded as representing the class as a 1-dimensional subspace. ArcFace [11] improves over cosine softmax by adding angular margins to the loss. RegressionNet [12] uses the subspace spanned by the \(K\) feature vectors of each class in the \(N\)-way \(K\)-shot classification. The computational cost of its pairwise subspace orthogonalization loss is quadratic _w.r.t._ the number of classes and becomes infeasible when the number of classes is large. DSN [41] for few-shot learning computed subspaces from the data matrix rather than parametrized and learned, and its loss also involves pairwise class comparison which does not scale. Different from these formulations, we explicitly parametrize classes as high-dimensional subspaces and use geometric optimization to learn them in supervised learning.
## 3 Preliminaries
In this section, we briefly review the essential concepts in geometric optimization. Detailed exposition can be found in [13, 1]. Given an \(n\)-dimensional Euclidean space \(\mathbb{R}^{n}\), the set of \(k\)-dimensional linear subspaces forms the Grassmann manifold \(\mathcal{G}(k,n)\). A computational-friendly representation for subspace \(S\in\mathcal{G}(k,n)\) is an orthonormal matrix \(\mathbf{S}\in\mathbb{R}^{n\times k}\), where \(\mathbf{S}^{T}\mathbf{S}=\mathbf{I}_{k}\) and \(\mathbf{I}_{k}\) is the \(k\times k\) identity matrix. Columns of the matrix \(\mathbf{S}\) can be interpreted as an orthonormal basis for the subspace \(S\). The matrix form is _not unique_, as right multiplying an orthonormal matrix will produce a new matrix representing the same subspace. Formally, Grassmannian is a quotient space of the Stiefel manifold and the orthogonal group \(\mathcal{G}(k,n)=\mathrm{St}(k,n)/\mathcal{O}(k)\), where \(\mathrm{St}(k,n)=\{\mathbf{X}\in\mathbb{R}^{n\times k}|\mathbf{X}^{T}\mathbf{X}=\mathbf{I}_{k}\}\) and \(\mathcal{O}(k)=\{\mathbf{X}\in\mathbb{R}^{k\times k}|\mathbf{X}^{T}\mathbf{X}=\mathbf{I}_{k}\}\). When the context is clear, we use the space \(S\) and one of its matrix forms \(\mathbf{S}\) interchangeably.
Given a function \(f:\mathcal{G}(k,n)\rightarrow\mathbb{R}\) defined on the Grassmann manifold, the Riemannian gradient of \(f\) at point \(S\in\mathcal{G}(k,n)\) is given by [13, Equ. (2.70)],
\[\nabla f(\mathbf{S})=f_{\mathbf{S}}-\mathbf{S}\mathbf{S}^{T}f_{\mathbf{S}}, \tag{2}\]
where \(f_{\mathbf{S}}\) is the Euclidean gradient with elements \((f_{\mathbf{S}})_{ij}=\frac{\partial f}{\partial\mathbf{S}_{ij}}\). When performing gradient descend on the Grassmann manifold, and suppose the current point is \(\mathbf{S}\) and the current Riemannian gradient is \(\mathbf{G}\), then the next point is the endpoint of \(\mathbf{S}\) moving along the geodesic toward the tangent \(\mathbf{G}\) with step size \(t\). The geodesic is computed by [13, Equ. (2.65)],
\[\mathbf{S}(t)=(\mathbf{S}\mathbf{V}\cos(t\mathbf{\Sigma})+\mathbf{U}\sin(t\mathbf{\Sigma}))\,\mathbf{V}^{T}, \tag{3}\]
where \(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{T}=\mathbf{G}\) is the thin SVD of \(\mathbf{G}\).
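For readers who want to experiment with these formulas, the following minimal NumPy sketch implements Equ. (2) and Equ. (3) and applies one ascent step to the toy objective \(l(S)=\|\mathrm{proj}_{S}\mathbf{x}\|\); the sizes and the step size are hypothetical.

```python
import numpy as np

# Minimal NumPy sketch of Equ. (2) and Equ. (3): project a Euclidean gradient
# onto the tangent space at S and move S along the corresponding geodesic.

def riemannian_grad(S, euclid_grad):
    # Equ. (2): G = f_S - S S^T f_S
    return euclid_grad - S @ (S.T @ euclid_grad)

def geodesic_step(S, G, t):
    # Equ. (3): S(t) = (S V cos(t Sigma) + U sin(t Sigma)) V^T, U Sigma V^T = G
    U, sigma, Vt = np.linalg.svd(G, full_matrices=False)
    cos, sin = np.diag(np.cos(t * sigma)), np.diag(np.sin(t * sigma))
    return (S @ Vt.T @ cos + U @ sin) @ Vt

# toy usage: one ascent step for l(S) = ||proj_S x|| on G(2, 5)
rng = np.random.default_rng(0)
S, _ = np.linalg.qr(rng.normal(size=(5, 2)))
x = rng.normal(size=(5, 1))
l = np.linalg.norm(S.T @ x)
G = riemannian_grad(S, (x @ x.T @ S) / l)    # Euclidean gradient of l, see Equ. (8)
S_new = geodesic_step(S, G, t=0.1)
# the projection norm typically grows for a small step toward the gradient,
# and S_new stays orthonormal
print("projection norm before / after:", round(float(l), 4),
      round(float(np.linalg.norm(S_new.T @ x)), 4))
print("still orthonormal:", np.allclose(S_new.T @ S_new, np.eye(2)))
```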
## 4 Learning Grassmann Class Representation
Denote the weight of the last fully-connected (fc) layer in a classification network by \(\mathbf{W}\in\mathbb{R}^{n\times C}\) and the bias by \(\mathbf{b}\in\mathbb{R}^{C}\), where \(n\) is the dimension of features and \(C\) is the number of classes. The \(i\)-th column vector \(\mathbf{w}_{i}\) of \(\mathbf{W}\) is called the \(i\)-th class representative vector. The \(i\)-th logit is computed as the inner product between a feature \(\mathbf{x}\) and the class vector (and optionally offset by a bias \(b_{i}\)), namely \(\mathbf{w}_{i}^{T}\mathbf{x}+b_{i}\). We extend this well-established formula to a multi-dimensional subspace form \(l_{i}:=\left\|\mathrm{proj}_{S_{i}}\mathbf{x}\right\|\) where \(S_{i}\in\mathcal{G}(k,n)\) is a \(k\)-dimensional subspace in the \(n\)-dimensional feature space. We call \(S_{i}\) the \(i\)-th _class representative space_, or class space in short. Comparing the new logit to the standard one, the inner product of feature \(\mathbf{x}\) with class vector is replaced by the norm of the subspace projection \(\mathrm{proj}_{S_{i}}\mathbf{x}\) and the bias term is omitted. We found that normalizing features to a constant length \(\gamma\) improves training. Incorporating this, Equ. (1) becomes
\[l_{i}:=\left\|\mathrm{proj}_{S_{i}}\frac{\gamma\mathbf{x}}{\left\|\mathbf{x}\right\|} \right\|. \tag{4}\]
We assume \(\mathbf{x}\) has been properly normalized throughout this paper so that we can simply use Equ. (1) in the discussion. We call this formulation of classes and logits the _Grassmann Class Representation_ (GCR).
The subspace class formulation requires two changes to an existing network. Firstly, the last fc layer is replaced by the _Grassmann fully-connected layer_, which transforms features to logits using Equ. (4). Details can be found in Section 4.1. Secondly, the optimizer is extended to process the new geometric layer, which is explained in Section 4.2. Ultimately, parameters of the geometric layer are optimized using Riemannian SGD, while other parameters are simultaneously optimized using SGD, AdamW, or Lamb, _etc._
### Grassmann Class Representation
Suppose for class \(i\in\{1,2,\ldots,C\}\), its subspace representation is \(S_{i}\in\mathcal{G}(k_{i},n)\), where the dimension \(k_{i}\) is a hyperparameter and is fixed during training. The tuple of subspaces \((S_{1},S_{2},\ldots,S_{C})\) will be optimized in the product space \(\mathcal{G}(k_{1},n)\times\mathcal{G}(k_{2},n)\times\cdots\times\mathcal{G}(k_ {C},n)\). Denote a matrix instantiation of \(S_{i}\) as \(\mathbf{S}_{i}\in\mathbb{R}^{n\times k}\), where the column vectors form an orthonormal basis of \(S_{i}\), then we concatenate these matrices into a big matrix
\[\mathbf{S}=[\mathbf{S}_{1}\ \mathbf{S}_{2}\ \cdots\ \mathbf{S}_{C}]\in\mathbb{R}^{n\times(k_{1} +k_{2}+\cdots+k_{C})}. \tag{5}\]
The matrix \(\mathbf{S}\) consists of the parameters that are optimized numerically. For a feature \(\mathbf{x}\), the product \(\mathbf{S}_{i}^{T}\mathbf{x}\) gives the coordinate of \(\mathrm{proj}_{S_{i}}\mathbf{x}\) under the orthonormal basis formed by the columns of \(\mathbf{S}_{i}\). By definition in Equ. (1), the logit for class \(i\) and the (normalized) feature \(\mathbf{x}\) is
\[l_{i}=\left\|\mathrm{proj}_{S_{i}}\mathbf{x}\right\|=\left\|\mathbf{S}_{i}^{T}\mathbf{x} \right\|. \tag{6}\]
**Grassmann Fully-Connected Layer.** We implement the geometric fully-connected layer using the plain old fc layer. The shape of the weight \(\mathbf{S}\) is \(n\times(k_{1}+k_{2}+\cdots+k_{C})\), as shown in Equ. (5). In the forward pass, the input feature is multiplied with the weight matrix to get a temporary vector \(\mathbf{t}=\mathbf{S}^{T}\mathbf{x}\), then the first element of the output is the norm of the sub-vector \((t_{1},\ldots,t_{k_{1}})\), and the second element of the output is the norm of \((t_{k_{1}+1},t_{k_{1}+2},\ldots,t_{k_{1}+k_{2}})\), and so on. If all \(k_{i}\)'s are the same value \(k\), as in our experiments, then the computation can be conveniently parallelized in one batch using tensor computation libraries.
**Parameter Initialization.** Each matrix instantiation of the subspace should be initialized as an orthonormal matrix. To be specific, each block \(\mathbf{S}_{i}\) of the weight \(\mathbf{S}\) in Equ. (5) is orthonormal, while the matrix \(\mathbf{S}\) need not be orthonormal. For each block \(\mathbf{S}_{i}\), we first fill it with standard Gaussian noise and then use \(\mathrm{qf}(\mathbf{S}_{i})\), namely the Q factor of its QR decomposition, to transform it to an orthonormal matrix. The geometric optimization Algorithm 1 will ensure their orthonormality during training.
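A minimal PyTorch-style sketch of this layer and its initialization is given below; it assumes equal subspace dimension \(k\) for all classes and hypothetical feature and class counts, and is only an illustration (the released code is the reference implementation).

```python
import torch
import torch.nn as nn

# Minimal PyTorch sketch of the Grassmann fully-connected layer described above,
# with equal subspace dimension k for all classes.

class GrassmannFC(nn.Module):
    def __init__(self, in_features, num_classes, k, gamma=25.0):
        super().__init__()
        self.k, self.num_classes, self.gamma = k, num_classes, gamma
        blocks = [torch.linalg.qr(torch.randn(in_features, k))[0]
                  for _ in range(num_classes)]               # orthonormal S_i via the Q factor
        self.weight = nn.Parameter(torch.cat(blocks, dim=1))  # n x (C * k)

    def forward(self, x):
        x = self.gamma * nn.functional.normalize(x, dim=1)    # Equ. (4): rescale features
        t = x @ self.weight                                    # batch x (C * k)
        t = t.view(x.size(0), self.num_classes, self.k)
        return t.norm(dim=2)                                   # logit_i = ||S_i^T x||

logits = GrassmannFC(512, 1000, k=8)(torch.randn(4, 512))
print(logits.shape)                                            # torch.Size([4, 1000])
```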
### Optimize the Subspaces
Geometric optimization is to optimize functions defined on manifolds. The key is to find the Riemannian gradient _w.r.t._ the loss function and then descend along the geodesic. Here the manifold in concern is the Grassmannian \(\mathcal{G}(k,n)\). As an intuitive example, \(\mathcal{G}(1,2)\), composed of all lines passing through the origin in a two-dimensional plane, can be pictured as a unit circle where each point on it denotes the line passing through that point. Antipodal points represent the same line. To illustrate how geometric optimization works, we define a toy problem on \(\mathcal{G}(1,2)\) that maximizes the norm of the projection of a fixed vector \(\mathbf{x}_{0}\) onto a line through the origin, namely \(\max_{S\in\mathcal{G}(1,2)}\|\mathrm{proj}_{S}\mathbf{x}_{0}\|\).
As shown in Fig. 1, we represent \(S\) with a unit vector \(\mathbf{w}\in S\). Suppose at step \(t\), the current point is \(\mathbf{w}^{(t)}\), then it is easy to compute that the Euclidean gradient at \(\mathbf{w}^{(t)}\) is \(\mathbf{d}=\mathbf{x}_{0}\), and the Riemannian gradient \(\mathbf{g}\) is the Euclidean gradient \(\mathbf{d}\) projected to the tangent space of \(\mathcal{G}(1,2)\) at point \(\mathbf{w}^{(t)}\). The next iterative point \(\mathbf{w}^{(t+1)}\) is to move \(\mathbf{w}^{(t)}\) along the geodesic toward the direction \(\mathbf{g}\). Without geometric optimization, the next iterative point would have lied at \(\mathbf{w}^{(t)}+\gamma\mathbf{d}\), jumping outside of the manifold.
The following proposition computes the Riemannian gradient for the subspace in Equ. (1).
**Proposition 1**.: _Let \(\mathbf{S}\in\mathbb{R}^{n\times k}\) be a matrix instantiation of subspace \(S\in\mathcal{G}(k,n)\), and \(\mathbf{x}\in\mathbb{R}^{n}\) is a vector in Euclidean space, then the Riemannian gradient \(\mathbf{G}\) of \(l(S,\mathbf{x})=\|\mathrm{proj}_{S}\mathbf{x}\|\) w.r.t. \(S\) is_
\[\mathbf{G}=\frac{1}{l}(\mathbf{I}_{n}-\mathbf{S}\mathbf{S}^{T})\mathbf{x}\mathbf{x}^{T}\mathbf{S}. \tag{7}\]
Proof.: Rewrite \(\|\mathrm{proj}_{S}\mathbf{x}\|=\sqrt{\mathbf{x}^{T}\mathbf{S}\mathbf{S}^{T}\mathbf{x}}\), and compute the Euclidean derivatives as
\[\frac{\partial l}{\partial\mathbf{S}}=\frac{1}{l}\mathbf{x}\mathbf{x}^{T}\mathbf{S},\quad \frac{\partial l}{\partial\mathbf{x}}=\frac{1}{l}\mathbf{S}\mathbf{S}^{T}\mathbf{x}. \tag{8}\]
Then Equ. (7) follows from Equ. (2).
We give a geometric interpretation of Proposition 1. Let \(\mathbf{w}_{1}\) be the unit vector along direction \(\mathrm{proj}_{S}\mathbf{x}\), then expand it to an orthonormal basis of \(S\), say \(\{\mathbf{w}_{1},\mathbf{w}_{2},\dots,\mathbf{w}_{k}\}\). Since the Riemannian gradient is invariant to matrix instantiation, we can set \(\mathbf{S}=[\mathbf{w}_{1}\ \mathbf{w}_{2}\ \cdots\ \mathbf{w}_{k}]\). Then Equ. (7) becomes
\[\mathbf{G}=\left[\ (\mathbf{I}_{n}-\mathbf{S}\mathbf{S}^{T})\mathbf{x}\ \ \ \mathbf{0}\ \cdots\ \ \mathbf{0}\ \right], \tag{9}\]
since \(\mathbf{w}_{i}\perp\mathbf{x},i=2,3,\dots,k\) and \(\mathbf{w}_{1}^{T}\mathbf{x}=l\). Equ. (9) shows that in the single-sample case, only one basis vector \(\mathbf{w}_{1}\), the unit vector in \(S\) that is closest to \(\mathbf{x}\), needs to be rotated towards vector \(\mathbf{x}\).
**Riemannian SGD.** Parameters of non-geometric layers are optimized as usual using traditional optimizers such as SGD, AdamW, or Lamb during training. For the geometric Grassmann fc layer, its parameters are optimized using the Riemannian SGD (RSGD) algorithm. The pseudo-code of our implementation of RSGD with momentum is described in Algorithm 1. We only show the code for the single-sample, single Grassmannian case. It is trivial to extend it to the batch version and the product of Grassmannians. In step 2, we use projection to approximate the parallel translation of momentum, and the momentum update formula in step 3 is adapted from the official PyTorch implementation of SGD. Weight decay does not apply here since spaces are scaleless. Note that step 5 is optional since \(\mathbf{S}^{(t+1)}\) in theory should be orthonormal. In practice, to suppress the accumulation of numerical inaccuracies, we do an extra orthogonalization step using \(\mathrm{qf}(\cdot)\) every 5 iterations. Algorithm 1 works seamlessly with traditional Euclidean optimizers and converts the gradient from Euclidean to Riemannian on-the-fly for geometric parameters.
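The following minimal NumPy sketch illustrates one RSGD-with-momentum update for a single Grassmann block along the lines described above (tangent-space projection of the gradient and the momentum, a geodesic step via the thin SVD as in Equ. (3), and an optional QR re-orthonormalization); it is an illustration rather than a verbatim transcription of Algorithm 1, and the sizes and hyper-parameters are hypothetical.

```python
import numpy as np

# Minimal sketch of one Riemannian SGD step with momentum for a single Grassmann
# block S (an n x k orthonormal matrix): project the Euclidean gradient and the
# momentum onto the tangent space at S, update the momentum, move along the
# geodesic, and occasionally re-orthonormalize with a QR factorization.

def rsgd_momentum_step(S, euclid_grad, momentum, lr=0.1, beta=0.9, reorth=False):
    proj = lambda M: M - S @ (S.T @ M)            # tangent-space projection at S
    G = proj(euclid_grad)                         # Riemannian gradient, Equ. (2)
    momentum = beta * proj(momentum) + G          # transported momentum update
    U, sigma, Vt = np.linalg.svd(momentum, full_matrices=False)
    t = -lr                                       # descend, i.e. move against the gradient
    S_new = (S @ Vt.T @ np.diag(np.cos(t * sigma)) + U @ np.diag(np.sin(t * sigma))) @ Vt
    if reorth:                                    # suppress accumulated numerical drift
        S_new = np.linalg.qr(S_new)[0]
    return S_new, momentum

rng = np.random.default_rng(0)
S, _ = np.linalg.qr(rng.normal(size=(16, 4)))
S, buf = rsgd_momentum_step(S, rng.normal(size=(16, 4)), np.zeros((16, 4)))
print(np.allclose(S.T @ S, np.eye(4), atol=1e-6))  # stays (numerically) orthonormal
```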
Figure 1: Geometric optimization in the Grassmann manifold \(\mathcal{G}(1,2)\). Each point (_e.g._, \(\mathbf{w}^{t}\)) on the black circle represents the \(1\)-dimensional linear subspace \(S\) passing through it. The goal is to learn a subspace \(S\) that maximizes \(\|\mathrm{proj}_{S}\mathbf{x}_{0}\|\). \(\mathbf{g}\) is the Riemannian gradient obtained by projecting the Euclidean gradient \(\mathbf{d}\). \(\mathbf{w}^{t}\) moves along the geodesic in the direction of \(\mathbf{g}\) to a new point \(\mathbf{w}^{t+1}\).
## 5 Experiment
In this section, we empirically study the influence of the Grassmann class representation under different settings. In Section 5.1, GCR demonstrates superior performance on the large-scale ImageNet-1K classification, a fundamental vision task. We experimented with both CNNs and vision transformers and observed consistent improvements. Then, in Section 5.2, we show that GCR improves the feature transferability by allowing larger intra-class variation. The choice of hyper-parameters and design decisions are studied in Section 5.3. Extra supportive experiments are presented in the supplementary material.
**Experiment Settings** For baseline methods, unless stated otherwise, we use the same training protocols (including the choice of batch size, learning rate policy, augmentation, optimizer, loss, and epochs) as in their respective papers. The input size is \(224\times 224\) for all experiments, and checkpoints with the best validation scores are used. All code, including the implementation of our algorithm and re-implementations of the compared baselines, is based on the _mmclassification_ [28] package. PyTorch [36] is used as the training backend and each experiment is run on 8 NVIDIA Tesla V100 GPUs using distributed training.
Networks for the Grassmann class representation are set up by the drop-in replacement of the last linear fc layer in baseline networks with a Grassmann fc layer. The training protocol is kept the same as the baseline whenever possible. One necessary exception is to enhance the optimizer (_e.g._, SGD, AdamW or Lamb) with RSGD (_i.e._, RSGD+SGD, RSGD+AdamW, RSGD+Lamb) to cope with Grassmannian layers. To reduce the number of hyper-parameters, we simply set the subspace dimension \(k\) to be the same for all classes and we use \(k=8\) throughout this section unless otherwise specified. Suppose the dimension of feature space is \(n\), then the Grassmann fully-connected layer has the geometry of \(\Pi_{i=1}^{1000}\mathcal{G}(8,n)\). For hyper-parameters, we set \(\gamma=25\). Experiments with varying \(k\)'s can be found in Section 5.2 and experiments on tuning \(\gamma\) are discussed in Section 5.3.
### Improvements on Classification Accuracy
We apply the Grassmann class representation to the large-scale classification task. The widely used ImageNet-1K [9] dataset, containing 1.28M high-resolution training images and 50K validation images, is used to evaluate classification performance. Experiments are organized into three groups, which support the following observations. (1) GCR has superior performance compared with alternative ways of representing classes. (2) It improves accuracy on different network architectures, including CNNs and the latest vision transformers. (3) It also improves accuracy on different training strategies for the same architecture.
**On Representing Classes** In this group, we compare seven alternative ways to represent classes. (1) **Softmax**[8] is the plain old vector class representation using the fc layer to get logits. (2) **CosineSoftmax**[19] represents a class as a 1-dimensional subspace since the class vector is normalized to be unit length. We set the scale parameter to \(25\) and do not add a margin. (3) **ArcFace**[11] improves over cosine softmax by adding angular margins to the loss. The default setting (\(s=64,m=0.5\)) is used. (4) **MultiFC** is an ensemble of independent fc layers. Specifically, we add \(8\) fc heads to the network. These fc layers are trained side by side, and their losses are then averaged. When testing, the logits are first averaged, and then followed by softmax to output the ensembled prediction. (5) **SoftTriple**[38] models each class by \(8\) centers. The weighted average of logits computed from multiple class centers is used as the final logit. We use the recommended parameters (\(\lambda=20,\gamma=0.1,\tau=0.2\) and \(\delta=0.01\)) from the paper. (6) **SubCenterArcFace**[10] improves over ArcFace by using \(K\) sub-centers for each class and in training only the center closest to a sample is activated. We set \(K=8\) and do not drop sub-centers or samples since ImageNet is relatively clean. (7) The last setting is our **GCR** with subspace dimension \(k=8\). For all seven settings ResNet50-D is used as the backbone network and all models are trained on ImageNet-1K using the same training strategy described in the second row of Tab. 2.
Results are listed in Tab. 1, from which we find that the Grassmann class representation is most effective. Compared with the vector class representation of vanilla softmax, the top-1 accuracy improves from \(78.04\%\) to \(79.26\%\), which amounts to \(5.6\%\) relative error reduction. Compared with previous ways of 1-dimensional subspace representation, _i.e._, CosineSoftmax and ArcFace, our GCR improves the top-1 accuracy by \(0.96\%\) and \(2.60\%\), respectively. Compared with the ensemble of multiple fc, the top-1 is improved by \(1.92\%\). Interestingly, simply extending the class representation to multiple centers such as SoftTriple (\(75.55\%\)) and SubCenterArcFace (\(77.10\%\)) does not result in good performances when training from scratch on the ImageNet-1K dataset. SoftTriple was designed for fine-grained classification and
\begin{table}
\begin{tabular}{l c c c} \hline \hline Setting & Top1 & Top5 & Class Representation \\ \hline Softmax [8] & \(78.04\) & \(93.89\) & vector class representation \\ CosineSoftmax [19] & \(78.30\) & \(94.07\) & 1-dim subspace \\ ArcFace [11] & \(76.66\) & \(92.98\) & 1-dim subspace with margin \\ MultiFC & \(77.34\) & \(93.65\) & 8 fc layers ensembled \\ SoftTriple [38] & \(75.55\) & \(92.62\) & 8 centers weighted average \\ SubCenterArcFace [10] & \(77.10\) & \(93.51\) & 8 centers with one activated \\ GCR (Ours) & **79.26** & **94.44** & 8-dim subspace with RSGD \\ \hline \hline \end{tabular}
\end{table}
Table 1: Validation accuracy of ResNet50-D on ImageNet-1K using different class representations.
SubCenterArcFace was designed for face verification. Their strong performances in their intended domains do not carry over naively to this setting. This substantiates that making the subspace formulation competitive is a non-trivial contribution.
**On Different Architectures** We apply Grassmann class representation to eight network architectures, including six CNNs (ResNet50 [16], ResNet50/101/152-D [17], ResNeXt50 [52], VGG13-BN [42]) and two transformers (Swin [26], Deit3 [45]). For each model, we replace the last fc layer with the Grassmannian fc and compare performances before and after the change. Their training settings together with validation top-1 and top-5 accuracies are listed in Tab. 2. The results show that GCR is effective across different model architectures. For all architectures, the improvement on top-1 is in the range \(0.44-1.38\%\). The improvement is consistent not only for different architectures, but also across different optimizers (_e.g._, SGD, AdamW, Lamb) and different feature space dimensions (_e.g._, 2048 for ResNet, 768 for Swin, and 384 for Deit3).
**On Different Training Strategies** In this group, we train ResNet50-D with the three training strategies (RSB-A3, RSB-A2, and RSB-A1) proposed in [50], which aim to push the performance of ResNets to the extreme. Firstly, we train ResNet50-D with the original vector class representation and get top-1 accuracies of \(79.36\%\), \(80.29\%\), and \(80.53\%\), respectively. Then, we replace the last classification fc with the Grassmann class representation (\(k=8\)), and their top-1 accuracies improve to \(79.88\%\), \(80.74\%\), and \(81.00\%\), respectively. Finally, we add the FixRes [46] trick to the three strategies, namely training on \(176\times 176\) image resolution and, when testing, first resizing to \(232\times 232\) and then center cropping to \(224\times 224\). We get a further boost in top-1 to \(80.20\%\), \(81.04\%\) and \(81.29\%\), respectively. Results are summarized in Fig. 2.
### Improvements on Feature Transferability
In this section, we study the feature transferability of the Grassmann class representation. Following [19] on the study of better losses _vs._ feature transferability, we compare GCR with five different losses and regularizations. They are Softmax [8], Cosine Softmax [19], Label Smoothing [44] (with smooth value 0.1), Dropout [43] (with drop ratio 0.3), and the Sigmoid [5] binary cross-entropy loss. Note that baselines in Tab. 2 that do not demonstrate competitive classification performances are not listed here. The feature transfer benchmark dataset includes CIFAR-10 [21], CIFAR-100 [21], Food-101 [7], Oxford-IIIT Pets [35], Stanford Cars [20], and Oxford 102 Flowers [31]. All models are pre-trained on the ImageNet-1K dataset with the same training procedure as shown in the second row of Tab. 2. When testing on the transferred dataset, features (before the classification fc and Grassmann fc) of pre-trained networks are extracted. We fit linear SVMs with the one-vs-rest multi-class policy on each of the training sets and report their top-1 accuracies or mean class accuracies (for Pets and Flowers) on their test set. The regularization parameter for SVM is grid searched with candidates \([0.1,0.2,0.5,1,2,5,10,15,20]\) and determined by five-fold cross-validation on the training set.
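A sketch of this evaluation protocol with scikit-learn is given below; the function and variable names are ours, and plain test accuracy is reported (the per-class averaging used for Pets and Flowers is omitted).

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

def linear_transfer_accuracy(train_feats, train_labels, test_feats, test_labels):
    """Fit a one-vs-rest linear SVM on frozen features and score it on the test set."""
    grid = {"C": [0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20]}   # candidate values from the text
    svm = GridSearchCV(LinearSVC(), grid, cv=5)          # 5-fold CV on the training set
    svm.fit(train_feats, train_labels)
    return svm.score(test_feats, test_labels)
```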
\begin{table}
\begin{tabular}{l c c c c|l c c|l c c c} \hline \hline \multicolumn{4}{c}{**Setting**} & \multicolumn{4}{c}{**Vector Class Representation**} & \multicolumn{4}{c}{**Grassmann Class Representation (\(k=8\))**} \\
**Architecture** & \(n\) & BS & Epoch Lr Policy & Loss & Optimizer & **Top1** & **Top5** & Loss & Optimizer & **Top1** & **Top5** \\ \hline ResNet50 [16] & 2048 & 256 & 100 & Step & CE & SGD & \(76.58\) & \(93.05\) & CE & RSGD+SGD & \(\mathbf{77.77}(\uparrow 1.19)\) & \(\mathbf{93.67}(\uparrow 0.62)\) \\ ResNet50-D [17] & 2048 & 256 & 100 & Cosine & CE & SGD & \(78.04\) & \(93.89\) & CE & RSGD+SGD & \(\mathbf{79.26}(\uparrow 1.22)\) & \(\mathbf{94.44}(\uparrow 0.55)\) \\ ResNet101-D [17] & 2048 & 256 & 100 & Cosine & CE & SGD & \(79.32\) & \(94.62\) & CE & RSGD+SGD & \(\mathbf{80.24}(\uparrow 0.92)\) & \(\mathbf{94.95}(\uparrow 0.33)\) \\ ResNet152-D [17] & 2048 & 256 & 100 & Cosine & CE & SGD & \(80.00\) & \(95.02\) & CE & RSGD+SGD & \(\mathbf{80.44}(\uparrow 0.44)\) & \(\mathbf{95.21}(\uparrow 0.19)\) \\ ResNeXt50 [52] & 2048 & 256 & 100 & Cosine & CE & SGD & \(78.02\) & \(93.98\) & CE & RSGD+SGD & \(\mathbf{79.00}(\uparrow 0.98)\) & \(\mathbf{94.28}(\uparrow 0.30)\) \\ VGG13-BN [42] & 4096 & 256 & 100 & Step & CE & SGD & \(72.02\) & \(90.79\) & CE & RSGD+SGD & \(\mathbf{73.40}(\uparrow 1.38)\) & \(\mathbf{91.30}(\uparrow 0.51)\) \\ Swin-T [26] & 768 & 1024 & 300 & WarmCos & LS & AdamW & \(81.06\) & \(95.51\) & LS & RSGD+AdamW & \(\mathbf{81.63}(\uparrow 0.57)\) & \(\mathbf{95.77}(\uparrow 0.26)\) \\ Deit3-S [45] & 384 & 2048 & 800 & WarmCos & BCE & Lamb & \(81.53\) & \(95.21\) & CE & RSGD+Lamb & \(\mathbf{82.18}(\uparrow 0.65)\) & \(\mathbf{95.73}(\uparrow 0.52)\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparing Grassmann class representation (\(k=8\)) with vector class representation on different architectures. Validation accuracy on ImageNet. \(n\) is the feature dimension, _BS_ means batch size, _WarmCos_ means using warm up together with the cosine learning rate decay. _CE_ is cross-entropy, \(LS\) is label smoothing, and \(BCE\) is binary cross-entropy.
Figure 2: Validation accuracies of ResNet50-D on ImageNet-1K under different training strategies (RSB-A3, RSB-A2, and RSB-A1). Green bars are vector class representations; yellow bars are Grassmannian with \(k=8\); blue bars added the FixRes trick when training Grassmannian. The best top-1 of **ResNet50-D** is \(\mathbf{81.29\%}\).
**Results** The validation accuracies of different models on ImageNet-1K are listed in the second group of columns in Tab. 3. All GCR models (\(k=1,4,8,16,32\)) achieve higher top-1 and top-5 accuracies than all the baseline methods with different losses or regularizations. Within a suitable range, a larger subspace dimension \(k\) yields a larger accuracy gain. However, when the subspace dimension goes beyond 16, the top-1 accuracy begins to decrease. When \(k=32\), the top-1 is \(78.63\%\), which is still \(0.33\%\) higher than the best classification baseline, CosineSoftmax.
The linear transfer results are listed in the fourth group of columns in Tab. 3. Among the baseline methods, we find that Softmax and Sigmoid have the highest average linear transfer accuracies, which are \(77.98\%\) and \(78.11\%\), respectively. Other losses demonstrate worse transfer performance than Softmax. For the Grassmann class representation, we observe a monotonic increase in average transfer accuracy when \(k\) increases from 1 to 32. When \(k=1\), cosine softmax and GCR have both comparable classification accuracies and comparable transfer performance. This can be attributed to the resemblance between their formulations. The transfer accuracy of GCR (\(73.64\%\)) is lower than that of Softmax (\(77.98\%\)) at this stage. Nevertheless, when the subspace dimension \(k\) increases, the linear transfer accuracy gradually improves, and when \(k=8\), the transfer performance (\(77.64\%\)) is on par with Softmax. When \(k\geq 16\), the transfer performance surpasses all the baselines.
In Tab. 4, we show that features of the GCR version of Swin-T and Deit3 increase the average transfer accuracy by \(1.9\%\) and \(7.6\%\), respectively.
**Intra-Class Variability Increases with Dimension** The intra-class variability is measured by first computing the mean pairwise angles (in degrees) between features within the same class and then averaging over classes. Following the convention in the study of neural collapse [34], the globally-centered training features are used. [19] showed that alternative objectives which may improve accuracy over Softmax by collapsing the intra-class variability (see the _Variability_ column in Tab. 3) degrade the quality of features on downstream tasks. Except for the Sigmoid, which has a similar intra-class variability (60.20) to Softmax (60.12), all other losses, including CosineSoftmax, LabelSmoothing, and Dropout, have smaller feature variability within classes (in the range from 54.79 to 56.87). However, the above conclusion does not apply when the classes are modeled by subspaces. For the Grassmann class representation, we observe that if \(k\) is not extremely large, then _as \(k\) increases, both the top-1 accuracy and the intra-class variability grow._ This indicates that representing classes as subspaces enables the simultaneous improvement of inter-class discriminability and intra-class variability.
This observation is also in line with the class separation index \(R^{2}\). \(R^{2}\) is defined as one minus the ratio of the average intra-class cosine distance to the overall average cosine distance [19, Eq. (11)]. [19] found that greater class separation \(R^{2}\) is associated with less transferable features. Tab. 3 shows that when \(k\) increases, the class separation monotonically decreases, and the transfer performance grows accordingly.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multicolumn{2}{c}{**Setting**} & \multicolumn{2}{c}{**Analysis**} & \multicolumn{4}{c}{**Linear Transfer (SVM)**} \\ Architecture & Vari. & \(R^{2}\) & C10 & C100 & Food & Pets & Cars & Flwr & **Avg.** \\ \hline Swin-T & 60.2 & 0.48 & 92.7 & 69.4 & 77.5 & 92.1 & 61.3 & 96.0 & 81.5 \\ Swin-T & GCR & 62.9 & 0.40 & 93.5 & 71.5 & 79.8 & 93.3 & 65.5 & 97.0 & **83.4** \\ \hline Deit3-S & 50.6 & 0.60 & 89.5 & 63.7 & 64.7 & 91.4 & 43.1 & 90.2 & 73.8 \\ Deit3-S & GCR & 61.5 & 0.44 & 93.0 & 71.9 & 74.9 & 92.3 & 60.7 & 95.5 & **81.4** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Feature transfer using Swin-T and Deit3-S. All model weights are pre-trained on ImageNet-1K as in Tab. 2. _C10/100_ is CIFAR10/100, _Fhw_ is Flowers. _Swin-T GCR_ and _Deit3-S GCR_ are their Grassmann variants.
\begin{table}
\begin{tabular}{l c|c c|c c c|c c c c c} \hline \hline \multicolumn{2}{c}{**Setting**} & \multicolumn{2}{c}{**ImageNet**} & \multicolumn{2}{c}{**Analysis**} & \multicolumn{4}{c}{**Linear Transfer (SVM)**} \\ Name & \(k\) & Top-1 & Top-5 & Variability & \(R^{2}\) & CIFAR10 & CIFAR100 & Food & Pets & Cars & Flowers & **Avg.** \\ \hline Softmax [8] & \multirow{3}{*}{\(78.04\)} & \(93.89\) & \(60.12\) & \(0.495\) & \(90.79\) & \(67.76\) & \(72.13\) & \(92.49\) & \(51.55\) & \(93.17\) & \(77.98\) \\ CosineSoftmax [19] & \(78.30\) & \(94.07\) & \(56.87\) & \(0.528\) & \(89.34\) & \(65.32\) & \(64.79\) & \(91.68\) & \(43.92\) & \(87.28\) & \(73.72\) \\ LabelSmoothing [44] & \(78.07\) & \(94.10\) & \(54.79\) & \(0.577\) & \(89.14\) & \(63.22\) & \(66.02\) & \(91.72\) & \(43.58\) & \(91.01\) & \(74.12\) \\ Dropout [43] & \(77.92\) & \(93.80\) & \(55.40\) & \(0.565\) & \(89.27\) & \(64.33\) & \(66.74\) & \(91.38\) & \(43.99\) & \(88.59\) & \(74.05\) \\ Sigmoid [5] & \(78.04\) & \(93.81\) & \(60.20\) & \(0.491\) & \(91.09\) & \(69.26\) & \(71.71\) & \(91.98\) & \(51.75\) & \(92.86\) & \(78.11\) \\ \hline \multirow{4}{*}{GCR (Ours)} & 1 & \(78.42\) & \(94.14\) & \(56.50\) & \(0.534\) & \(89.98\) & \(66.34\) & \(64.34\) & \(91.37\) & \(42.97\) & \(86.85\) & \(73.64\) \\ & 4 & \(78.68\) & \(94.32\) & \(61.48\) & \(0.459\) & \(90.56\) & \(67.45\) & \(67.58\) & \(91.37\) & \(50.24\) & \(90.08\) & \(76.21\) \\ \cline{1-1} & 8 & \(\mathbf{79.26}\) & \(\mathbf{94.44}\) & \(63.49\) & \(0.430\) & \(90.13\) & \(67.90\) & \(70.06\) & \(91.85\) & \(53.25\) & \(92.64\) & \(77.64\) \\ \cline{1-1} & 16 & \(79.21\) & \(94.37\) & \(65.79\) & \(0.395\) & \(91.09\) & \(69.58\) & \(71.28\) & \(91.99\) & \(55.93\) & \(93.80\) & \(78.95\) \\ \cline{1-1} & 32 & \(78.63\) & \(94.05\) & \(67.74\) & \(0.365\) & \(91.35\) & \(69.49\) & \(71.80\) & \(92.47\) & \(58.05\) & \(95.04\) & \(\mathbf{79.70}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Linear transfer using SVM for different losses. ResNet50-D is used as the backbone, and model weights are pre-trained on ImageNet-1K. _Variability_ measures the intra-class variability, and \(R^{2}\) measures class separation.
### Design Choices and Analyses
In this section, we use experiments to support our design choices and provide visualizations for the principal angles between class representative spaces.
**Choice of Gamma** In Tab. 5, we give more results with different values of \(\gamma\) when the subspace dimension is \(k=8\). We find \(\gamma=25\) has good performance and use it throughout the paper without further tuning.
**Importance of Normalizing Features** Normalizing the feature in Equ. (4) is critical to the effective learning of the Grassmann class representations. In Tab. 6 we compare results with/without feature normalization and observe a significant performance drop without normalization.
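As a rough illustration (the exact logit formula is Equ. (4) earlier in the paper), the sketch below assumes the logit for class \(i\) is \(\gamma\|\mathbf{S}_{i}^{T}\hat{\mathbf{x}}\|\) with \(\hat{\mathbf{x}}\) the \(\ell_2\)-normalized feature; the names and batched interface are ours.

```python
import torch

def grassmann_fc_logits(x, S_all, gamma=25.0, normalize=True):
    """Sketch of the Grassmann fc forward pass.

    x:     (B, n) features; S_all: (C, n, k) orthonormal class subspace bases.
    Assumed logit: gamma * ||S_i^T (x / ||x||)|| for each class i.
    """
    if normalize:
        x = torch.nn.functional.normalize(x, dim=-1)  # the normalization ablated in Tab. 6
    proj = torch.einsum("bn,cnk->bck", x, S_all)      # coordinates of x in each class subspace
    return gamma * proj.norm(dim=-1)                  # (B, C) logits
```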
**Principal Angles Between Class Representative Spaces** When classes are subspaces, relationships between classes can be measured by _k principal angles_, which contain richer information than a single angle between two class vectors. The principal angles between two \(k\)-dimensional subspaces \(S\) and \(R\) are recursively defined as,
\[\begin{split}\cos(\theta_{i})=\max_{\mathbf{s}\in S}\max_{\mathbf{r}\in R }\mathbf{s}^{T}\mathbf{r}=\mathbf{s}_{i}^{T}\mathbf{r}_{i},\\ s.t.\|\mathbf{s}\|=\|\mathbf{r}\|=1,\ \mathbf{s}^{T}\mathbf{s}_{j}=\mathbf{r}^{T}\mathbf{r}_ {j}=0,j\leq i-1,\end{split} \tag{11}\]
for \(i=1,\dots,k\) and \(\theta_{i}\in[0,\pi/2]\). In Fig. 3, we illustrate the smallest and largest principal angles between any pair of classes for a model with \(k=8\). From the figure, we can see that the smallest principal angle reflects class similarity, and the largest principal angle is around \(\pi/2\). A smaller angle means the two classes are correlated in some direction, and a \(\pi/2\) angle means that some directions in one class subspace are completely irrelevant (orthogonal) to the other class.
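Numerically, the cosines of the principal angles in Equ. (11) are the singular values of \(\mathbf{S}^{T}\mathbf{R}\) when both bases are orthonormal; a short sketch (names ours):

```python
import torch

def principal_angles(S, R):
    """Principal angles (in radians) between subspaces with orthonormal bases S, R of shape (n, k)."""
    cosines = torch.linalg.svdvals(S.t() @ R)     # singular values of S^T R
    return torch.acos(cosines.clamp(-1.0, 1.0))   # clamp guards against round-off
```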
**Necessity of Geometric Optimization** To investigate the necessity of constraining the subspace parameters to lie in the Grassmannian, we replace Riemannian SGD with vanilla SGD and compare the results. Note that with SGD, the logit formula \(\|\mathbf{S}_{i}^{T}\mathbf{x}\|\) no longer means the projection norm because \(\mathbf{S}_{i}\) is not guaranteed to be orthonormal anymore. With vanilla SGD, we get top-1 \(78.55\%\) and top-5 \(94.18\%\) when \(k=8\). The top-1 is \(0.71\%\) lower than the model trained by Riemannian SGD.
## 6 Limitation and Future Direction
Firstly, a problem that remains open is how to choose the optimal dimension. Currently, we treat it as a hyper-parameter and decide it empirically. Secondly, we showed that the Grassmann class representation _allows for_ greater intra-class variability. Given this, it is attractive to explore extensions that _explicitly promote_ intra-class variability. For example, a promising approach is to combine it with self-supervised learning. We hope our work will stimulate progress in this direction.
## 7 Conclusion
In this work, we proposed the Grassmann class representation as a drop-in replacement of the conventional vector class representation. Classes are represented as high-dimensional subspaces, and the geometric structure of the corresponding Grassmann fully-connected layer is a product of Grassmannians. We optimize the subspaces with geometric optimization and provide an efficient Riemannian SGD implementation tailored for Grassmannians. Extensive experiments demonstrate that the new Grassmann class representation is able to improve classification accuracy on large-scale datasets and boost feature transfer performance at the same time.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Setting & \(k\) & Feature Normalize & Top1 & Top5 \\ \hline \multirow{2}{*}{ResNet50-D GCR} & \multirow{2}{*}{1} & & \(77.91\) & \(93.78\) \\ & & ✓ & \(\mathbf{78.42}\) & \(\mathbf{94.14}\) \\ \hline \multirow{2}{*}{ResNet50-D GCR} & \multirow{2}{*}{8} & & \(78.12\) & \(93.90\) \\ & & ✓ & \(\mathbf{79.26}\) & \(\mathbf{94.44}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Validation accuracy of Grassmann ResNet50-D on ImageNet with/without feature normalization.
Figure 3: Each sub-figure is a heatmap of \(1000\times 1000\) grids. The color at the \(i\)-th row and the \(j\)-th column represents an angle between class \(i\) and class \(j\) in ImageNet-1K. (a) Pairwise angles between class vectors of the ResNet50-D trained by vanilla softmax. Grids with a red hue are larger than \(90^{\circ}\), and a blue hue means smaller than \(90^{\circ}\). (b) Pairwise smallest principal angles between 8-dimensional class subspaces of a ResNet50-D model. Deeper blue colors indicate smaller angles. (c) Pairwise largest principal angles of the same model as in (b). Grayish colors mean they are close to \(90^{\circ}\). Best viewed on screen with colors.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Setting & \(k\) & \(\gamma\) & Top1 & Top5 \\ \hline \multirow{4}{*}{ResNet50-D GCR} & \multirow{2}{*}{8} & \(20\) & \(79.11\) & \(94.29\) \\ & & \(25\) & \(\mathbf{79.26}\) & \(\mathbf{94.44}\) \\ \cline{1-1} & & \(30\) & \(78.47\) & \(94.07\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Validation accuracy of Grassmann ResNet50-D on ImageNet-1K with varying \(\gamma\).
|
2310.14466
|
Inferring Relational Potentials in Interacting Systems
|
Systems consisting of interacting agents are prevalent in the world, ranging
from dynamical systems in physics to complex biological networks. To build
systems which can interact robustly in the real world, it is thus important to
be able to infer the precise interactions governing such systems. Existing
approaches typically discover such interactions by explicitly modeling the
feed-forward dynamics of the trajectories. In this work, we propose Neural
Interaction Inference with Potentials (NIIP) as an alternative approach to
discover such interactions that enables greater flexibility in trajectory
modeling: it discovers a set of relational potentials, represented as energy
functions, which when minimized reconstruct the original trajectory. NIIP
assigns low energy to the subset of trajectories which respect the relational
constraints observed. We illustrate that with these representations NIIP
displays unique capabilities in test-time. First, it allows trajectory
manipulation, such as interchanging interaction types across separately trained
models, as well as trajectory forecasting. Additionally, it allows adding
external hand-crafted potentials at test-time. Finally, NIIP enables the
detection of out-of-distribution samples and anomalies without explicit
training. Website: https://energy-based-model.github.io/interaction-potentials.
|
Armand Comas-Massagué, Yilun Du, Christian Fernandez, Sandesh Ghimire, Mario Sznaier, Joshua B. Tenenbaum, Octavia Camps
|
2023-10-23T00:44:17Z
|
http://arxiv.org/abs/2310.14466v1
|
# Inferring Relational Potentials in Interacting Systems
###### Abstract
Systems consisting of interacting agents are prevalent in the world, ranging from dynamical systems in physics to complex biological networks. To build systems which can interact robustly in the real world, it is thus important to be able to infer the precise interactions governing such systems. Existing approaches typically discover such interactions by explicitly modeling the feed-forward dynamics of the trajectories. In this work, we propose Neural Interaction Inference with Potentials (NIIP) as an alternative approach to discover such interactions that enables greater flexibility in trajectory modeling: it discovers a set of relational potentials, represented as energy functions, which when minimized reconstruct the original trajectory. NIIP assigns low energy to the subset of trajectories which respect the relational constraints observed. We illustrate that with these representations NIIP displays unique capabilities in test-time. First, it allows trajectory manipulation, such as interchanging interaction types across separately trained models, as well as trajectory forecasting. Additionally, it allows adding external hand-crafted potentials at test-time. Finally, NIIP enables the detection of out-of-distribution samples and anomalies without explicit training. Website: [https://energy-based-model.github.io/interaction-potentials](https://energy-based-model.github.io/interaction-potentials).
## 1 Introduction
Dynamical systems are ubiquitous in both nature and everyday life. Such systems emerge naturally in scientific settings such as chemical pathways and particle dynamics as well as everyday settings such as in sports teams or social events. Such dynamical systems may be decomposed as a set of different interacting components, where the interactions among them lead to complex dynamics. Modeling the dynamics of such systems is hard: often times we only have access to example trajectories, without knowledge of the underlying interactions or the dynamics that govern them.
Consider the scenario given in Figure 1, consisting of two NBA players playing a basketball game (the rest of the teams are omitted for clarity). While the motion of individual players may appear stochastic in nature, each player aims to score a basket on the opposite team's side of the court. Thus, we may utilize sets of interactions to explain their behaviors - one or more offensive players move towards the goal, while a defensive player moves to intercept them and prevent them from scoring. By applying our underlying knowledge of these interactions between players, we may forecast the future dynamics of the basketball game significantly more accurately.
Most works modeling such complex dynamics do not explicitly disentangle individual interactions between objects. Instead, they rely on a learned network to implicitly disentangle them (Battaglia et al., 2016; Gilmer et al., 2017; van Steenkiste et al., 2018). In contrast, (Kipf et al., 2018) propose Neural Relation Inference (NRI), which learns a structured set of explicit interaction models between objects and show how such explicit interaction modeling enables more effective downstream predictions.
Figure 1: **Interactions between NBA players.** Complex dynamics, such as the player trajectories in the NBA, may be explained using a simple set of interactions. In this setting, one player aims to block the other one from scoring.
In this work, we argue that we should instead model and disentangle interactions between objects as a set of learned interaction potentials, with dynamical prediction corresponding to a potential satisfaction problem. To this end, we propose Neural Interaction Inference with Potentials (NIIP), where we encode each of these potentials as an energy function (Du et al., 2021).
In physics, a potential is the energy held by an object because of its position relative to other objects. To predict future dynamics with NIIP, we solve a potential minimization problem, where we optimize for a trajectory prediction which minimizes our predicted energy. In different experiments, we illustrate how our potential-based decomposition of interactions provides unique benefits over prior learned approaches for decomposing dynamics.
First, we show that the learned potentials are disentangled and composable, enabling us to interchange interaction types across separate models trained with different datasets. We also illustrate that such a decomposition enables us to add flexible test-time potentials to incorporate new changes in the environment. We further show that NIIP can detect anomalies (out-of-distribution relation types) without dedicated training. Finally, we compare our model to existing approaches for trajectory forecasting in both synthetic and real settings, illustrating that NIIP performs well in mid- and long-term prediction.
In summary, in this work, we contribute the following: **(i).** We propose Neural Interaction Inference with Potentials (NIIP) a novel method that discovers, in an unsupervised manner, the underlying interactions between particles in a system as a set of energy functions or potentials. **(ii).** We illustrate how such a potential-based decomposition of interactions offers unique properties related to composability, such as interchanging interaction types across separate models or adding new hand-crafted potentials at test-time. **(iii).** We further show how NIIP can detect out-of-distribution samples by design, without further training. And **(iv).** we illustrate how such a potential decomposition of interactions enables accurate mid- and long-horizon trajectory prediction performance, often surpassing existing methods.
## 2 Potentials as Energy Based Models
We will consider _potentials_ as specifying a set \(X\) of trajectories \(\mathbf{x}\in\mathbb{R}^{T\times D}\) which possess an underlying property we desire. In Section 2.1, we discuss how we can represent potentials on trajectories using an EBM. We further discuss how we may compose multiple potentials together as EBMs in Section 2.2.
### Energy-based Models
**Definition.** An Energy-Based Model (EBM) is defined probabilistically using the Boltzmann distribution \(p_{\theta}(\mathbf{x})=\frac{\exp(-E_{\theta}(\mathbf{x}))}{Z(\theta)}\), with an underlying partition function \(Z(\theta)=\int\exp(-E_{\theta}(\mathbf{x}))d\mathbf{x}\), where \(\theta\) denotes the weights that parameterize the energy function \(E_{\theta}\). We will represent a potential as an EBM, defined using a neural network parameterized energy function \(E_{\theta}(\mathbf{x}):\mathbb{R}^{D}\rightarrow\mathbb{R}\) that maps each datapoint to a scalar value representing an energy. A potential then corresponds to the set of datapoints with low assigned energy. Thus, datapoints \(\mathbf{x}\) satisfying our potential have high likelihood, and all other datapoints have low likelihood. Potential satisfaction or minimization then corresponds to sampling from the EBM distribution \(p_{\theta}(\mathbf{x})\).
**Minimizing Potentials.** In our framework, minimizing a potential corresponds to sampling from the EBM which defines it, and thus finding high-likelihood data points under \(p_{\theta}(\mathbf{x})\). We follow existing works and utilize a gradient based MCMC procedure, Langevin Dynamics (Welling and Teh, 2011; Du and Mordatch, 2019), to sample from the EBM distribution. In particular, to optimize a potential, we initialize a trajectory \(\mathbf{x}^{0}\) from uniform noise. We then run \(M\) iterative steps following:
\[\tilde{\mathbf{x}}^{m}=\tilde{\mathbf{x}}^{m-1}-\tfrac{\lambda}{2}\nabla_{\mathbf{x}}E_{ \theta}\left(\tilde{\mathbf{x}}^{m-1}\right)+\omega^{m},\quad\omega^{m}\sim \mathcal{N}(0,\sigma), \tag{1}\]
where at each step we iteratively optimize the trajectory with respect to the energy function, using an underlying gradient step size of \(\lambda\) and noise scale of \(\sigma\). We include hyperparameter details for sampling in Section A.1 of the appendix, and heuristically set the noise scale of \(\sigma=0\).
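A minimal sketch of this sampler is shown below; the function signature is ours, and the actual step count and step size are the hyperparameters reported in the appendix.

```python
import torch

def langevin_sample(energy_fn, x_init, steps, step_size, noise_scale=0.0):
    """Minimize a potential by Langevin dynamics (Eq. 1).

    energy_fn maps a trajectory tensor to a scalar energy; noise_scale=0
    follows the heuristic choice in the text.
    """
    x = x_init.clone().requires_grad_(True)
    for _ in range(steps):
        grad, = torch.autograd.grad(energy_fn(x), x)
        x = (x - 0.5 * step_size * grad
             + noise_scale * torch.randn_like(x)).detach().requires_grad_(True)
    return x.detach()
```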
### Composing Potentials
Next, we discuss how we may compose different sets of potentials together, where each potential is parameterized by a separate EBM \(E_{\theta}^{j}(\mathbf{x})\). Our composition operator builds on existing works on composing EBMs representing concepts (Du et al., 2021).
**Sampling Composed Potentials.** Given a set of separate potentials, we wish to solve for a set of trajectories \(\mathbf{x}\) which jointly satisfy each of the potentials. In our EBM formulation, this corresponds to finding a trajectory \(\mathbf{x}\) which is low energy under each of the specified energy functions \(E_{\theta}^{j}(\mathbf{x})\).
Such a setting is equivalent to finding a trajectory \(\mathbf{x}\) which has high likelihood under each EBM probability distribution \(p_{\theta}^{j}(\mathbf{x})\). This corresponds to sampling from the distribution defined by the product of the individual EBM distributions,
\[\prod_{j}p_{\theta}^{j}(\mathbf{x})\propto e^{-\sum_{j}E_{\theta}^{j}(\mathbf{x})}=e^ {-E_{\theta}^{\prime}(\mathbf{x})}, \tag{2}\]
which corresponds to a new EBM with energy function \(E_{\theta}^{\prime}(\mathbf{x})\) (an analogous approach can be applied to generate images subject to a set of concepts (Du et al., 2020)). Thus, we may sample from the composition of a set of potentials using a sampling procedure as Equation 1, using the new
energy function \(E^{\prime}_{\theta}(\mathbf{x})\), defined as the sum of each individual energy function. Intuitively, this corresponds to a continuous optimization procedure on each energy function.
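Under this formulation, composing potentials amounts to summing their energies before running the same sampler; a short sketch reusing `langevin_sample` from above, where `E1`, `E2`, `E3`, `M`, and `lam` are placeholder names:

```python
def composed_energy(energy_fns):
    """Energy of a composition of potentials: the sum of the individual energies (Eq. 2)."""
    return lambda x: sum(E(x) for E in energy_fns)

# x_pred = langevin_sample(composed_energy([E1, E2, E3]), x_init, steps=M, step_size=lam)
```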
In our setting, different energy functions \(E^{j}_{\theta}(\mathbf{x})\) are constructed by conditioning an energy function on separate latent vectors. These latents are directly inferred without supervision from input trajectories by training an encoder jointly with the energy function parameters.
## 3 Neural Interaction Inference with Potentials
Next, we discuss Neural Interaction Inference with Potentials (NIIP), our unsupervised approach to decompose a trajectory \(\mathbf{x}(1...T)_{i}\), consisting of \(N\) separate nodes at each timestep, into a set of separate EBM \(E^{j}_{\theta}(\mathbf{x})\) potentials. NIIP consists of two steps: **(i)** an encoder for obtaining a set of potentials and **(ii)** a sampling process which optimizes for a predicted trajectory, minimizing the inferred potentials. Energy functions in NIIP are trained using autoencoding, similar to (Du et al., 2021). We provide an illustration of our approach in Figure 2, pseudocode in Algorithm 1, and an illustration of the architecture in Figure A1 of the Appendix.
### Relational Potentials
To effectively parameterize different potentials for separate interactions, we learn a latent conditioned energy function \(E_{\theta}(\mathbf{x},\mathbf{z}):\mathbb{R}^{T\times D}\times\mathbb{R}^{D_{z}} \rightarrow\mathbb{R}\). Then, inferring a set of different potentials corresponds to inferring a latent \(\mathbf{z}\in\mathbb{R}^{D_{z}}\) that conditions an energy function.
Given a trajectory \(\mathbf{x}(1...T)_{i}\), we infer a set of \(L\) different latent vectors for each directed pair of interacting nodes in a trajectory. Thus, given a set of \(N\) different nodes, this corresponds to a set of \(N(N-1)L\) energy functions.
To generate a trajectory, we optimize the energy function \(E(\mathbf{x})=\sum_{ij,l}E^{ij,l}_{\theta}(\mathbf{x};\mathbf{z}_{ij,l})\), across node indices \(i\) and \(j\) from \(1\) to \(N\) and latent vectors \(l\) from \(1\) to \(L\). However, assigning one energy function to each latent code becomes prohibitively expensive as the number of nodes in a trajectory increases. Thus, to reduce this computational burden, we parameterize \(L\) energy functions as shared message passing graph networks, grouping all edge contributions \(ij\) in a single network. The energy is then computed as a summation over all individual node energies after message passing. To evaluate the energy corresponding to a single edge factor \(\mathbf{z}_{ij,l}\) we mask out the contributions of all other edges to the final node energies. Architecture and further details can be found in Section A.2 of the appendix.
To condition the shared message-passing graph network on each inferred latent \(\mathbf{z}_{ij,l}\), each edge \(e(i,j)\) in the graph is conditioned on the corresponding encoded edge latent code \(\mathbf{z}_{ij,l}\) by means of FiLM modulation (Perez et al., 2018).
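A possible form of this edge conditioning is sketched below; the layer sizes, activation, and class name are our assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FiLMEdgeMessage(nn.Module):
    """Edge message modulated by a per-edge latent z_ij via FiLM (scale and shift)."""

    def __init__(self, node_dim, latent_dim, hidden_dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * node_dim, hidden_dim), nn.SiLU())
        self.film = nn.Linear(latent_dim, 2 * hidden_dim)    # predicts (gamma, beta)

    def forward(self, h_i, h_j, z_ij):
        m = self.msg(torch.cat([h_i, h_j], dim=-1))          # raw message for edge (i, j)
        gamma, beta = self.film(z_ij).chunk(2, dim=-1)
        return gamma * m + beta                              # FiLM modulation by the edge latent
```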
### Inferring Energy Potentials
We utilize \(\text{Enc}_{\theta}(\mathbf{x}):\mathbb{R}^{T\times D}\rightarrow\mathbb{R}^{D_{z}}\) to encode the observed trajectories \(\mathbf{x}\) into \(L\) latent representations per edge in the observation. We utilize a fully connected GNN with message passing to infer latents using the encoder module in (Kipf et al., 2018). Instead of classifying edge types and using them as gate outputs, we utilize a continuous latent code \(\mathbf{z}_{ij,l}\), allowing for higher flexibility.
### Training Objective
To train NIIP, we infer a set of different EBM potentials by auto-encoding the underlying trajectory. In particular, given a trajectory \(\mathbf{x}(1...T)_{i}=(\mathbf{x}(1)_{i},\dots,\mathbf{x}(T)_{i})\), we split the trajectory into initial conditions \(\mathbf{x}(1...T_{0})\), corresponding to the first \(T_{0}\) states of the trajectory and \(\mathbf{x}(T_{0}...T)\), corresponding to the subsequent states of the trajectory, where each state of the trajectory consists of \(N\) different nodes. The edge potentials are encoded by observing a portion of the overall trajectory \(\mathbf{x}(1...T^{\prime})\), where \(T^{\prime}\leq T\).
We infer a set of different \(L\) latents per edge of input observations utilizing the observed states \(\mathbf{x}(1...T^{\prime})\) using the encoder specified in Section 3.2, generating a set of latents \(\{\mathbf{z}\}\). We then aim to train energy functions so that the following unnormalized distribution assigns low energy and high likelihood to the full trajectory \(\mathbf{x}\):
\[p(\mathbf{x}|\{\mathbf{z}\})\propto\prod_{i\neq j}\prod_{l}p(\mathbf{x}|\mathbf{z}_{ij,l})=\prod_{i\neq j}\prod_{l}\exp\left(-E^{ij,l}_{\theta}(\mathbf{x};\text{Enc}_{\theta}(\mathbf{x}(1...T^{\prime}))_{ij,l})\right), \tag{3}\]
where \(\mathbf{z}_{ij,l}=\text{Enc}_{\theta}(\mathbf{x}(1...T^{\prime}))_{ij,l}\) and \(E^{ij,l}_{\theta}\) is the energy function linked to the \(l_{\text{th}}\) potential of the encoded edge between nodes \(i\) and \(j\), respectively.
Since we wish to learn a set of potentials with high likelihood for the observed trajectory \(\mathbf{x}\), as a tractable supervised manner of learning such a set of valid potentials we directly supervise that the sample obtained using Equation 1 corresponds to the original trajectory \(\mathbf{x}\), similar to (Du et al., 2021). In particular, we run \(M\) steps of Langevin sampling starting from \(\tilde{\mathbf{x}}^{0}\), which is initialized from uniform noise with the initial conditions fixed to the ground truth \(\mathbf{x}(1...T_{0})\):
\[\tilde{\mathbf{x}}^{m}=\tilde{\mathbf{x}}^{m-1}-\frac{\lambda}{2}\nabla_{ \mathbf{x}}\sum_{ij,l}E^{ij,l}_{\theta}(\tilde{\mathbf{x}}^{m-1};\mathbf{z}_{ ij,l})+\omega^{m}, \tag{4}\]
where \(m\) indexes the sampling step, \(\lambda\) is the step size, and \(\omega^{m}\sim\mathcal{N}(0,\lambda)\). We then compute the MSE objective between \(\tilde{\mathbf{x}}^{M}\), the result of \(M\) sampling iterations, and the ground-truth trajectory \(\mathbf{x}\):
\[\mathcal{L}_{\text{MSE}}(\theta)=\|\tilde{\mathbf{x}}^{M}-\mathbf{x}\|^{2}. \tag{5}\]
We optimize both \(\tilde{\mathbf{x}}\) and the parameters \(\theta\) with automatic differentiation. The overall training algorithm is provided in Algorithm 1.
```
Input: full trajectories x, observed trajectories x(1..T'), initial conditions x(1..T_0),
       step size λ, number of gradient steps M, encoder Enc_θ, energy functions E^{ij,l}_θ,
       noise ω^m = 0, true data distribution p_D
while not converged do
    x ~ p_D
    ▷ Encode edge potentials z_{ij,l} from x(1..T'):
    {z} ← Enc_θ(x(1..T'))
    ▷ Optimize sample x̃ via gradient descent:
    x̃^0 ~ U(0,1), with initial conditions fixed to x(1..T_0)
    for gradient step m = 1 to M do
        x̃^m = x̃^{m-1} - (λ/2) ∇_x Σ_{ij,l} E^{ij,l}_θ(x̃^{m-1}; z_{ij,l}) + ω^m
    end for
    ▷ Optimize objective L_MSE w.r.t. θ:
    Δθ ← ∇_θ ||x̃^M - x||^2
    Update θ based on Δθ using optimizer
end while
```
**Algorithm 1** Training algorithm for NIIP.
## 4 Experiments
In this section we firstly describe our datasets (Section 4.1) and baselines (Section 4.2). Following, in Section 4.3, we discuss experiments on **(i.)** recombination of interaction types across datasets and **(ii.)** contribution of the potentials. Next, in Section 4.4, we describe out-of-distribution sample detection experiments. We show how to incorporate test-time potentials in Section 4.5. Finally, we describe the quantitative results for trajectory forecasting in Section 4.6. In the appendix we give implementation details (Section A.1), experimental details (Section A.3), additional examples (Section A.5) and provide a detailed ablation study (Section A.6).
### Datasets
We test our model in three different domains. First, we carry out experiments in two simulated environments: **(i.)** particles connected by springs, and **(ii.)** particles with charges. Next, we test several properties of our model on **(iii.)** the NBA SportVU motion dataset, which contains real motion from tracked basketball players across several NBA games. Finally, we test our performance on **(iv.)** JPL Horizons, a physics-based realistic dataset.
**Simulated data.** Following the experimental setting described in (Kipf et al., 2018), we generate states (position and velocity) of a dynamical system for \(N=5\) particles for 70 time-steps. Our model observes the first 49 states, fixes one state and predicts the following 20. We generate 50k training samples and 10k for validation and test splits.
In this setting, the rules by which particles interact are known and simple. However, they can generate very complex behaviour.
* **Springs**: The particles move inside a box with elastic collisions. They are connected by a spring with probability 0.5, and interact according to Hooke's law.
* **Charged**: The particles move inside a box as in Springs. They are assigned a positive or negative charge \(q_{i}\in\{\pm q\}\) with probability 0.5 and interact via Coulomb forces.
**NBA SportVU** SportVU is an automated ID and tracking service that collects data of NBA players and the ball (\(N=11\)) during games. The inherent complexity of human
Figure 3: **Training Algorithm.** NIIP is trained to infer a set of potentials, represented as energy functions, using a trajectory reconstruction objective. A set of latents \(\{\mathbf{z}\}\) is inferred from the beginning of a trajectory \(\mathbf{x}(1...T^{\prime})\), and these define different potentials. A trajectory is optimized w.r.t. the energy functions and supervised with the trajectory \(\mathbf{x}\).
Figure 2: **Overview of NIIP.** On the left, a portion \(\mathbf{x}(1\dots T^{\prime})\) of the input trajectory is observed by \(\text{Enc}_{\theta}\) and encoded by a GNN into interaction potentials, in the form of a set of latent vectors \(\mathbf{z}\) for each edge in the graph. On the right, energy functions parametrized as GNNs are constructed for each edge latent vector in \(\mathbf{z}\). Energy functions are trained so that optimizing a trajectory \(\mathbf{x}^{0}\) from uniform noise into a final trajectory \(\mathbf{x}^{M}\) reconstructs the future states of the observed trajectory. This refinement process uses Langevin Dynamics (Eq. 4). Given the full trajectory \(\mathbf{x}^{m}\) at sampling step \(m\), we update it by summing the gradient contributions of the energy function associated with each edge, resulting in \(\mathbf{x}^{m+1}\).
motion and interactions while playing a sport makes this dataset especially challenging for forecasting. The dataset is generated by splitting each of the labeled events into 65-step trajectories of \((x,y)\) coordinates. We compute the velocities to generate the states. The dataset is composed of 50k samples for training and 1k samples for validation and test.
**JPL Horizons** The JPL Horizons on-line ephemeris system provides access to solar system data. It characterizes the 3D location and velocity of solar system objects (targets) as a function of time, as seen from locations within the solar system (origins). We choose this dataset as a realistic take on physical interactions. Here, inter-particle forces are a product of gravity, and therefore mass. However, we do not provide information about the mass or any other object attribute. There are other factors of complexity. First, there are hidden nodes (smaller objects) that are not visible to the observer, introducing noise to the trajectories. Second, the origin from which we observe the trajectories varies along samples. This dataset consists of the trajectories captured between 1800 and 2022, with one datapoint every 10 days. We define the nodes as \(N=12\) targets of the solar system: 8 planets, 3 moons and the Sun. This data is captured from 13 origins: each one of the targets plus the solar system barycenter (SSB). We gather 1880 trajectories of 43 timesteps split as 1504/188/188 for train, validation and test.
### Baselines
We consider a **Static** baseline, which copies the previous state vector, and a multi-node **LSTM**, which is trained to predict the state vector difference at every timestep and concatenates input representations from all objects after passing them through an MLP.
We also evaluate **NRI**, the architecture presented in (Kipf et al., 2018), both with an inferred interaction graph (**learned**) and with a fully connected graph of a single edge type (**full graph**). We further add a GNN conditioned on the observed trajectories.
For NBA SportVU we also evaluate two social interaction-based methods, Social-LSTM (S-LSTM) (Alahi et al., 2016) and Directional-LSTM (TrajNet++) (Kothari et al., 2021), as well as PwD (Janner et al., 2022), a diffusion-based planning method, and **dNRI** (Graber and Schwing, 2020), an extension of NRI that allows for dynamic switches of edge types.
### Independence of Relational Potentials
NIIP assigns an energy function to each one of the representations learned by the encoder. These constrain the generative process by conditioning the features of their associated EBM. The optimization procedure, hence, aims to minimize all potentials. We train our model to ensure that different potentials contribute independently to the generative process through the addition of their associated gradients. Our training procedure enforces separation among **(i.)** potentials associated with different relations and **(ii.)** individual potentials affecting the same edges. We argue that our training procedure aids the discovery of independent potentials, allowing composition across disjoint training distributions.
**Recombination** To verify our claims, we show how NIIP can compose relational potentials learned from two different distributions at test-time. Figure 4 shows qualitative results of recombinations from the Springs and Charged datasets. The process is as follows: we train two instances of our model (NIIP\({}_{S}\), NIIP\({}_{C}\)) to reconstruct Springs and Charged trajectories respectively. Given sample trajectories drawn from each dataset (Col. 1 for Springs and 2 for Charged), we encode them into their relational potentials. For each row, we aim to reconstruct the trajectory framed in green while swapping one of the potential pairs (green dashed box) with one drawn from the other dataset (red dashed box). As an example, in the first row of the figure, we encode the Springs trajectory with NIIP\({}_{S}\) and the Charged trajectory with NIIP\({}_{C}\). Next, we fix the initial conditions of the Charged trajectories and sample by optimizing the relational energy functions. To achieve recombination, each model targets specific edges. We minimize the potentials encoded by NIIP\({}_{S}\) for the mutual edges corresponding to the nodes in green dashed boxes. The rest of the edge potentials are encoded by NIIP\({}_{C}\). The sampling process is done jointly by both models, each minimizing their corresponding potentials. The result is a natural combination of the two datasets, which affects only the targeted edges.
Figure 4: **NIIP can recombine encoded potentials at test-time learned from different datasets.** Illustrated, samples from Springs (Col. 1) and Charged (Col. 2) and their recombinations (Col. 3). NIIP encodes both trajectories. NIIP is able to reconstruct trajectories framed in green while swapping the edge potentials associated to the nodes in the green dashed box for the ones in the red dashed boxes. Recombinations look semantically plausible.
Reconstructed trajectories in Figure 4 (col. 3) are semantically reasonable.
**Contribution of Potentials** As introduced, we can assign more than one potential to each edge. We argue that each of these \(L\) potentials controls different aspects of the same interaction. A qualitative example shown in Figure 6 depicts the gradient orientation of two sets of potentials evaluated at a single node. We can see how each potential pushes the player trajectory in a different direction, each pointing towards a different player of the rival team.
### Out-Of-Distribution Detection
We further utilize the potential value, or energy, produced by NIIP over a trajectory to detect out-of-distribution interactions in a trajectory. In our proposed architecture, energy is evaluated at the node level. Therefore, if NIIP has been trained on a specific dataset, the potentials associated with out-of-distribution edge types are expected to yield higher energy.
We design a new dataset (Charged-Springs) as a combination of Springs and Charged interaction types. In simulation, nodes are assigned both roles of Charged and Springs particles, but all the forces they receive correspond to one of the two types with probability \(p=0.5\). We train a model with the Springs dataset and evaluate the energies in the proposed mixed setting.
Figure 5 shows qualitatively how the energy is considerably higher for the nodes with Charged-type forces (drawn in red). Quantitative results are summarized in Table 1 for 1k test samples. On the left, we can see that energies corresponding to Spring-type nodes are considerably lower than those of Charged-type nodes, indicating that the potentials correctly capture the behavior of the desired interactions.
We further evaluate OOD detection on the NBA SportVU dataset. For this experiment, we train NIIP with the 10 players, disregarding the Ball node. At test-time, we evaluate the trained model while switching one of the players for the Ball node, over 1k samples. We observe that the energy corresponding to the Ball is considerably higher. We further train a single-parameter binary classifier and find that we can detect the Ball in 70.1% of instances (Table 1, right).
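A sketch of this detection rule is given below. How per-node energies are aggregated from the edge potentials is our simplification (here, directed-edge energies are summed at their receiving node), and the scalar threshold plays the role of the single-parameter classifier.

```python
import torch

def flag_ood_nodes(edge_energies, receivers, num_nodes, threshold):
    """Sum directed-edge potential energies at their receiving nodes and threshold them.

    edge_energies: (E,) energy of each directed edge potential.
    receivers:     (E,) long tensor with the receiving node index of each edge.
    Returns a boolean mask of nodes flagged as out-of-distribution.
    """
    node_energy = torch.zeros(num_nodes).index_add_(0, receivers, edge_energies)
    return node_energy > threshold
```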
\begin{table}
\begin{tabular}{l|c} \hline \hline
**Evaluation** & **Train: Springs (S)** \\ \hline
Springs (S) & 5.1e-3 \\
Charged (C) & 1.8e-1 \\
S\&C (eval S) & 9.1e-2 \\
S\&C (eval all) & 1.4e-1 \\
S\&C (eval C) & 1.9e-1 \\ \hline \hline
\end{tabular}
\quad
\begin{tabular}{l|c} \hline \hline
**Evaluation** & **Train: NBA Players (P)** \\ \hline
Players (P) & 8.4e-2 \\
Ball & 3.2e-1 \\ \hline
Detection Accuracy & 70.1\% \\ \hline \hline
\end{tabular}
\end{table}
Table 1: **Quantitative evaluation of out-of-distribution detection. We evaluate the average energy associated with each node type in a scene. On the left, NIIP is trained on Springs and evaluated on (i) Springs, (ii) Charged, and (iii) SãoC, a Springs-Charged mixed dataset. For the NBA case on the right, we train NIIP on the subset of player (P) trajectories of the dataset and evaluate the energies in a setting with player nodes and one ball node. We measure accuracy in detecting the ball trajectory.**
Figure 7: **With NIIP we can add and control test-time potentials to achieve a desired behavior. We design test-time potentials to steer trajectories into a goal. In row 1 of the table, we show the squared distance after applying a goal potential towards the center (0,0) of the scene with different strengths. In row 2, we report the percentage of time-steps that particles stay in a particular area \(\mathbf{A}=[(0,-1),(1,1)]\) after applying a potential that enforces avoiding \(\mathbf{A}\). The figure shows the effect of applying test-time potentials in the latter experiment.**
### Flexible Generation
Another advantage of our approach is that it can flexibly incorporate test-time user-specified potentials. For this experiment, we investigate three different sets of potentials. We do so qualitatively by reconstructing a given 40-step trajectory of the NBA dataset in Figure 8, and also quantitatively in Table 7 for 20-step trajectory prediction.
Velocity Potentials. We incorporate the following velocity potential as an energy function: \(E=\epsilon\lambda\sum_{i,t}\sqrt{(\mathbf{v}_{x,i}^{t})^{2}+(\mathbf{v}_{y,i}^{t})^{2}}=\epsilon\lambda\sum_{i,t}\mathrm{mod}(\mathbf{v}_{i}^{t})\), for particle \(i\) at time \(t\). The weight \(\lambda=1e-2/N\) scales the effect of this function over the rest, and \(\epsilon\) is a multiplicative constant that indicates the strength and direction of the potential. In Figure 8 (two top rows), we show **(i.)**\(\epsilon=0\): Reconstruction (top-left); **(ii.)**\(\epsilon=4\): Decrease of velocity (middle-left); **(iii.)**\(\epsilon=-5\): Low increase of velocity (top-right); and **(iv.)**\(\epsilon=-10\): High increase of velocity (middle-right). The results satisfy the test-time potentials.
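To make the form of this potential concrete, a minimal PyTorch-style sketch is given below; the tensor layout, the function name, and the small usage snippet are illustrative assumptions rather than the actual NIIP implementation.

```python
import torch

def velocity_potential(vel, eps, lam):
    """Test-time velocity potential E = eps * lam * sum_{i,t} ||v_i^t||.

    vel : (N, T, 2) tensor of per-particle velocities (vx, vy)
    eps : signed strength (positive slows particles down, negative speeds them up)
    lam : scaling weight (the text uses lam = 1e-2 / N)
    """
    speed = torch.linalg.norm(vel, dim=-1)  # (N, T) speed of every particle at every step
    return eps * lam * speed.sum()

# Example: the gradient of this energy w.r.t. the predicted velocities provides
# the steering signal that is combined with the learned relational potentials.
vel = torch.randn(5, 20, 2, requires_grad=True)
velocity_potential(vel, eps=4.0, lam=1e-2 / 5).backward()  # vel.grad now holds the signal
```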
Goal Potentials. We also add at test time an attraction potential defined as the squared distance of the predicted coordinates to the goal: \(P=\epsilon\lambda\sum_{i,t}(\tilde{\mathbf{p}}_{i}^{0:T-1}-\mathbf{g})^{2}\), where \(\mathbf{g}\) is defined as the coordinates of our goal point. We define the trajectory coordinates \(\tilde{\mathbf{p}}_{i}^{t}\) as an accumulation of the un-normalized velocities \(\mathbf{v}_{i}^{t}\) predicted by NIIP: \(\tilde{\mathbf{p}}_{i}^{t+1}=\sum_{t}(\mathbf{v}_{i}^{0:t})+\mathbf{p}_{i}^{0}\) for particle \(i\) at time-step \(t\). Here, \(\mathbf{p}_{i}^{0}\) is the fixed ground-truth location of the particle at time 0, and \(\lambda=5e-4/N\).
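The goal potential admits an analogous sketch, again under illustrative naming and shape conventions; the cumulative sum reconstructs positions from predicted velocities exactly as in the definition above.

```python
import torch

def goal_potential(vel, p0, goal, eps, lam):
    """Attraction potential: squared distance of accumulated positions to a goal g.

    vel  : (N, T, 2) predicted (un-normalized) velocities
    p0   : (N, 2) ground-truth initial positions
    goal : (2,) coordinates of the goal point g
    """
    pos = p0.unsqueeze(1) + torch.cumsum(vel, dim=1)  # p_i^{t+1} = p_i^0 + sum of velocities
    return eps * lam * ((pos - goal) ** 2).sum()
```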
Figure 8 (bottom row) illustrates the scenarios **(i.)** Reconstruction (bottom-left) and **(ii.)** Attraction to the goal (bottom-right, goal in blue). The reconstructed trajectory follows the new potential, while maintaining the potentials of the encoded trajectory.
In Table 7 (row 1), we explore quantitatively the effect of different magnitudes of the added goal potential for prediction in the Charged dataset. Our test set is composed of 1k samples. For this experiment, the particles live within a \([-1,1]\) box, for both \(x\) and \(y\) coordinates. The goal is the center \(\mathbf{g}=(0,0)\). In the table, \(P_{add}\) indicates the use of edge potentials encoded by NIIP while adding the new potentials with different strengths: **(i.)**\(s1:\epsilon=1\), **(ii.)**\(s2:\epsilon=5\) and **(iii.)**\(s3:\epsilon=10\). We observe how the squared distance to the goal decreases as expected.
Avoid Area Potentials. In this case, we penalize the portion of the predicted trajectory \(\tilde{\mathbf{p}}_{i}^{t}\) that is inside a given restricted area \(\mathbf{A}\). We do so by computing the distance of each particle \(i\) that lies within the region \(\mathbf{A}\) to the borders \(\mathbf{b}_{A}\) of \(\mathbf{A}\). The added potential is: \(P=\epsilon\lambda\sum_{i,t}(\tilde{\mathbf{p}}_{A,i}^{0:T-1}-\mathbf{b}_{A}-C)^{2}\), where \(C\) is a small margin that ensures that the particles are repelled outside of the boundaries of \(\mathbf{A}\), with \(\lambda=1e-3/N\).
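A sketch of the avoid-area penalty, under the same illustrative conventions as the previous snippets: positions are accumulated from predicted velocities, and only the time steps that fall inside the axis-aligned box \(\mathbf{A}\) contribute, through their margin-shifted squared distance to the nearest border.

```python
import torch

def avoid_area_potential(vel, p0, lo, hi, eps, lam, margin=0.05):
    """Penalize accumulated positions that fall inside the box A = [lo, hi] (per axis)."""
    pos = p0.unsqueeze(1) + torch.cumsum(vel, dim=1)              # (N, T, 2) positions
    inside = ((pos > lo) & (pos < hi)).all(dim=-1, keepdim=True)  # (N, T, 1) mask for A
    d_border = torch.minimum(pos - lo, hi - pos)                  # distance to nearest border
    return eps * lam * (((d_border + margin) ** 2) * inside).sum()
```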
In Table 7 (row 2), we explore the effect of different strengths of this potential type, in the prediction task. We also provide a visual example. For this experiment, we avoid the area \(\mathbf{A}=[(0,-1),(1,1)]\), which corresponds to half of the box (see Figure). We use the following parameters: **(i.)**\(s1:\epsilon=1\), **(ii.)**\(s2:\epsilon=5e1\) and **(iii.)**\(s3:\epsilon=5e2\).
### Quantitative Comparison
In this Section, we aim to assess NIIP's capability for trajectory forecasting. For all datasets, we will observe a portion of the trajectory and predict \(20\) timesteps.
We first test our approach on the Springs and Charged datasets. We evaluate the Mean-Squared Error (MSE) against (Kipf et al., 2018), their chosen baselines, and a generic Conditional-GNN. Our model observes \(49\) timesteps and fixes the \(50^{th}\) as the initial condition for prediction. We can see in Table 2 that NIIP achieves a lower prediction error in both datasets.
Similarly, for NBA (Table A2 (left)) the model observes \(40\) timesteps and fixes the following \(5\) as initial conditions for prediction. NIIP outperforms the baselines in terms of mid-
Figure 8: **NIIP is able to incorporate new potentials in test-time**. We can see depicted reconstructions of NBA samples with added potentials. Left: (Col. 1, Row 1): Reconstruction of the encoded trajectory. (Col. 1, Row 2): Decrease of velocity. (Col. 2, Row 1): Low increase of velocity. (Col. 1, Row 3): Reconstruction of the encoded trajectory, (Col. 2, Row 3): Attraction of the players to a goal point (blue dot). Painted orange, the ground-truth ball trajectory.
and long-term prediction error, and underperforms in the short term.
The models designed for social interaction perform poorly in long-term prediction, while they have been shown to excel in other tasks such as collision avoidance.
For the JPL Horizons dataset in Table A2 (right), NIIP also outperforms the baselines in mid- and long-term prediction. Models have access to 23 timesteps. NIIP observes \(20\) timesteps and fixes \(3\) as initial conditions for prediction. JPL Horizons is a challenging dataset given the unknown masses of the bodies involved, as well as the effects of unobserved smaller bodies near them.
## 5 Literature
Dynamics and Relational Inference. Several works in the past years have studied the problem of learning dynamics of a physical system from simulated trajectories with graph neural networks (GNNs) (Guttenberg et al., 2016; Gilmer et al., 2017; van Steenkiste et al., 2018; Lu et al., 2021; Li et al., 2018; Yang et al., 2022; Rubanova et al., 2022).
As an extension of the foundational work on interaction networks (Battaglia et al., 2016), (Kipf et al., 2018) proposes to infer an explicit interaction structure while simultaneously learning the dynamical model of the interacting systems in an unsupervised manner, by inferring edge classes with a classifier. Selecting models based on observed trajectories is also the basis of (Alet et al., 2019; Goyal et al., 2019; Graber and Schwing, 2020; Webb et al., 2019). (Graber and Schwing, 2020) extends (Kipf et al., 2018) to temporally dynamic edge constraints, which yields better results in real-world datasets. NIIP differs from these approaches as the generation procedure uses an optimization solver to satisfy a set of potentials, while learning relation representations as potentials from observation. NIIP models trajectories in the absence of attributes, by observing how particles behave, without supervision. Similarly, in (Goyal et al., 2021) interactions are encoded as condition-action rules, which offer dynamics decomposition and modularity but lack the illustrated properties that energy functions offer.
Energy-Based Models. Energy-based models have a long history in machine learning. Early work focuses on density modeling (Hinton, 2002; Du and Mordatch, 2019) by aiming to learn a function that assigns low energy values to data that belongs to the input distribution. To successfully sample data-points, EBMs have recently relied on gradient-based Langevin dynamics (Du and Mordatch, 2019). Recent works have illustrated that such a gradient-based optimization procedure can enable the composition of different energy functions (Du et al., 2020) and can successfully be applied to high-dimensional domains such as images (Liu et al., 2021; Zhang et al., 2022; Nie et al., 2021), trajectories (Urain et al., 2021; Du et al., 2019), and concepts (Wu et al., 2022). In (Rubanova et al., 2022) an energy approach is used to model trajectories. They make use of known object attributes and a global energy landscape to generate trajectories, while they do not discover interaction representations. NIIP leverages the properties of energy functions to learn energy potentials that can be composed in different ways. Unsupervised discovery of composable energy functions has been previously explored on images (Du et al., 2021; Zhang et al., 2022). In this work, we extend ideas of unsupervised concept learning in EBMs to potentials and apply them to dynamical modelling and relational inference.
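As a minimal illustration of this sampling view (a generic sketch, not tied to any specific implementation from the cited works), composing potentials amounts to summing their energies and following noisy gradient steps:

```python
import torch

def langevin_sample(x0, energy_fns, n_steps=100, step_size=1e-2, noise=1e-2):
    """Draw a low-energy sample under a sum of composable energy functions."""
    x = x0.clone().requires_grad_(True)
    for _ in range(n_steps):
        energy = sum(fn(x) for fn in energy_fns)  # composition = sum of energies
        (grad,) = torch.autograd.grad(energy, x)
        with torch.no_grad():
            x -= step_size * grad + noise * torch.randn_like(x)  # Langevin update
    return x.detach()
```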
## 6 Discussion
Limitations. Our existing formulation of NIIP has several limitations. First, NIIP is currently limited to encoding energy potentials associated with edges in graph neural networks.
In practice, many potentials in nature are often not just simply pairwise, but depend on multiple sets of different particles.
An interesting direction of future work is to explore how to generalize the use of the energy functions to capture multi-node interactions in a graph.
In addition, we found that relational potentials discovered by NIIP were not necessarily cleanly disentangled. When recombining potentials between two datapoints with large differences, we sometimes found that interactions would be incorrectly generated. Similarly, we found that discovered potentials would erroneously assign high energy to particle
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c}{**Springs**} & \multicolumn{3}{c}{**Charged**} \\ \hline Prediction steps & 1 & 10 & 20 & 1 & 10 & 20 \\ \hline Static & 1.70e-3 & 2.71e-2 & 2.55e-2 & 5.09e-3 & 2.30e-2 & 5.55e-2 \\ LSTM & **1.10e-7** & 2.07e-6 & 4.65e-5 & **5.90e-4** & 5.43e-3 & 1.15e-2 \\ Cond GNN & 1.35e-5 & 2.21e-5 & 3.44e-3 & 3.67e-3 & 5.61e-3 & 1.05e-2 \\ NRI (full graph) & 6.10e-6 & 7.28e-3 & 1.01e-2 & 1.59e-3 & 3.52e-3 & 7.74e-3 \\ NRI (learned) & 5.81e-7 & 1.10e-5 & 2.90e-5 & 1.47e-3 & **3.19e-3** & 6.65e-3 \\ NIIP (Ours) & 1.99e-7 & **1.20e-6** & **2.71e-6** & **9.38e-4** & **3.07e-3** & **5.97e-3** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Mean squared error (MSE) in predicting future states for Springs and Charged simulation datasets, with 5 interacting objects. NIIP outperforms existing methods.**
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c}{**NBA SportVU**} & \multicolumn{3}{c}{**JPL Horizons**} \\ \hline Prediction steps & 1 & 10 & 20 & 1 & 10 & 20 \\ \hline S-LSTM & 6.60e-5 & 6.67e-3 & 2.57e-2 & - & - & - \\ TraNet++ & 5.30e-5 & 5.88e-3 & 2.33e-2 & - & - & - \\ Static & 2.13e-5 & 3.04e-3 & 1.07e-2 & 3.33e-3 & 5.54e-2 & 9.05e-2 \\ LSTM & 8.07e-5 & 4.12e-5 & 3.51e-3 & 1.97e-6 & 3.98e-5 & 1.09e-4 \\ PavD & 2.58e-4 & 1.17e-3 & 3.33e-3 & 8.38e-3 & 8.72e-3 & 9.04e-3 \\ Cond GNN & 1.71e-3 & 1.12e-3 & 3.11e-3 & 4.57e-6 & 4.66e-5 & 5.96e-6 \\ NRI (learned) & 3.56e-6 & 7.46e-4 & 2.74e-3 & 2.67e-7 & 7.35e-7 & 1.16e-6 \\ dNRI & **2.11e-6** & 9.11e-4 & 3.52e-3 & **1.15e-7** & 4.51e-6 & 2.26e-5 \\ NIIP (Ours) & 3.15e-5 & **5.84e-4** & **2.37e-3** & 2.98e-7 & **4.32e-7** & **5.84e-7** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Mean squared error (MSE) in predicting future states for the NBA and JPL Horizons datasets, with 11 and 12 interacting objects, respectively. NIIP performs better than the baselines in mid- to long-term prediction.**
interactions exhibiting the correct interaction. We believe that some of these issues may be rooted in the potential functions capturing unwanted information along with relational information (such as trajectory shapes). As a result, for instance, swapping relation potentials may produce unrealistic interactions. In the future, we believe that explicitly enforcing our encoding function to discard all trajectory information other than the relationship types may lead to improved performance.
Conclusion. In this work we introduced Neural Interaction Inference with Potentials (NIIP), which infers relational potentials specified as energy functions to model the dynamics of an interacting system. We illustrate NIIP as an alternative approach to discovering such interactions, one that enables greater flexibility in trajectory modeling: it discovers a set of relational potentials, represented as energy functions, which when minimized reconstruct the original trajectory. Throughout this work we explore and test the different advantages that our approach brings to trajectory modeling. In particular, we show that NIIP displays unique capabilities at test time. It allows trajectory manipulation, such as interchanging interaction types across separately trained models. NIIP can also detect out-of-distribution samples without having been trained to do so, by observing the energies that correspond to each particle. We further show how we can modify the behavior of the modeled trajectories by adding test-time potentials. Finally, NIIP can also faithfully predict future trajectories, displaying favorable mid- and long-term performance when compared to existing approaches.
## 7 Acknowledgements
This work was supported by NSF grants IIS-1814631 and CNS-2038493, AFOSR grant FA9550-19-1-0005, and ONR grant N00014-21-1-2431. Yilun Du is supported by a NSF Graduate Fellowship.
|
2303.17057
|
Avian-Inspired Claws Enable Robot Perching or Walking
|
Multimodal UAVs (Unmanned Aerial Vehicles) are rarely capable of more than
two modalities, i.e., flying and walking or flying and perching. However, being
able to fly, perch, and walk could further improve their usefulness by
expanding their operating envelope. For instance, an aerial robot could fly a
long distance, perch in a high place to survey the surroundings, then walk to
avoid obstacles that could potentially inhibit flight. Birds are capable of
these three tasks, and so offer a practical example of how a robot might be
developed to do the same. In this paper, we present a specialized
avian-inspired claw design to enable UAVs to perch passively or walk. The key
innovation is the combination of a Hoberman linkage leg with Fin Ray claw that
uses the weight of the UAV to wrap the claw around a perch, or hyperextend it
in the opposite direction to form a curved-up shape for stable terrestrial
locomotion. Because the design uses the weight of the vehicle, the
underactuated design is lightweight and low power. With the inclusion of
talons, the 45g claws are capable of holding a 700g UAV to an almost 20-degree
angle on a perch. In scenarios where cluttered environments impede flight and
long mission times are required, such a combination of flying, perching, and
walking is critical.
|
Mohammad Askari, Won Dong Shin, Damian Lenherr, William Stewart, Dario Floreano
|
2023-03-29T23:16:10Z
|
http://arxiv.org/abs/2303.17057v3
|
# Avian-Inspired Claws Enable Robot Perching and Walking
###### Abstract
Multimodal UAVs (Unmanned Aerial Vehicles) are rarely capable of more than two modalities, i.e., flying and walking or flying and perching. However, being able to fly, perch, and walk could further improve their usefulness by expanding their operating envelope. For instance, an aerial robot could fly a long distance, perch in a high place to survey the surroundings, then walk to avoid obstacles that could potentially inhibit flight. Birds are capable of these three tasks, and so offer a practical example of how a robot might be developed to do the same. In this paper, we present a specialized avian-inspired claw design to enable UAVs to passively perch and walk. The key innovation is the combination of a Hoberman linkage leg with Fin Ray(r) claw that uses the weight of the UAV to wrap the claw around a perch, or hyperextend it in the opposite direction to form a ball shape for stable terrestrial locomotion. Because the design uses the weight of the vehicle, the underactuated design is lightweight and low power. With the inclusion of talons, the 45 g claws are capable of holding a 700 g UAV to an almost 20-degree angle on a perch. In scenarios where cluttered environments impede flight and long mission times are required, such a combination of flying, perching, and walking is critical.
Bio-Inspired Robots, Perching Claw, Multimodal Locomotion, Compliant Mechanism, Unmanned Aerial Vehicle.
## I Introduction
Recently, there has been a lot of interest in perching UAVs (Unmanned Aerial Vehicles). The advantages of perching include being able to use less energy than when maintaining flight, having the opportunity to recharge or refuel, and enabling long-term surveillance. Studies on UAV-perching have usually considered only flying and landing [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], and more rarely, retakeoff [13, 14, 15, 16]. However, once landed, it may be necessary for the robot to move around to complete its mission. While in many cases the UAV could just take off and fly to its next location, this is not always possible. In such a case, the combination of flying, perching, and legged locomotion would be helpful. Many birds are capable of all these abilities [17, 18] and therefore provide great examples of how robots could do the same.
Robotic flying-walking hybrid locomotion has been shown to be feasible on flat obstacle-free ground [19]. DUCK consisted of a multicopter with legs that could walk on the ground, take off, land, and fly repeatedly, but could not perch. One recent demonstration by another robot, LEONARDO, was dexterous enough to 'slackline' [20]. This robot used simple feet combined with active control from the quadcopter motors for balance and disturbance rejection. As a result, it requires constant power to remain in one spot. For search and rescue missions, robots could be required to hold position for long periods of time, acting as communication relays or fixed sensors for hours at a time, which would require heavy batteries. Having a way to combine aerial-legged multimodality with passive perching could enable robots to operate for much longer periods of time compared to fully actuated robots of similar mass.
In this article, we focus on leg and claw design to imbue a flying robot with the ability to passively perch and walk. Inspired by the feet of birds, we propose a novel Fin Ray(r)[21] claw design for a UAV that can passively perch on a horizontal pole and walk on the ground (Fig. 1). When the weight of the UAV presses on it, the claw passively wraps around and holds the perch. The weight is removed from the claw at takeoff, allowing it to slip off the perch. The claw can be switched to a hyperextended state, in which the weight of the UAV instead causes the claw to curl upward. This can then enable the vehicle to walk around. We characterize the limits of the passive perching, showing that a 700 g UAV can lean at 19.4 degrees before it slips. When in hyperextended mode, the ball-shaped claw provides a large support polygon, which enables a stable gait that is not possible with a claw that can only curl around an object to perch. We also measure the squeezing force of the claw under similar conditions and find that it can squeeze with a force up to its body weight. For comparison, birds can squeeze with a force of up to two times their body weight but rely on muscles to achieve that performance [18].
## II Related Work
There are several examples of aerial-ground hybrid locomotion. These include a squirrel-inspired robot that can glide in the air, land, and then crawl under obstacles [22]; a winged UAV with rotating winglets that can push itself along the ground after landing [23]; and a winged robot that uses mini-whegs to run on the ground [24]. Others implemented a round cage to allow them to roll around the ground [25]. For simplicity, some aerial-ground vehicles just use wheels [26]. However, due to the nature of these modes of ground locomotion, the above examples often have a hard time overcoming obstacles.
The most widely used foot designs in legged robots are ball-shaped feet and flat feet [27]. Ball-shaped feet do not limit the orientation of the feet with respect to the ground [27], but they
suffer from a limited contact area. This makes them ineffective at exerting a moment on the ground. On the other hand, flat feet provide a wide contact area but are limited in foot orientation. Ball-shaped feet are generally used in robots with four or more legs, where exerting a moment on the ground is not as important as freedom of orientation. Flat feet are used in robots with one or two legs and are used to exert balancing moments when standing still. However, both these foot designs are only used for legged locomotion and not for grasping, perching, or other purposes.
If we turn to nature, we can see that birds are quite adept at perching, walking, and flying. Roderick et al. showed that when perching, Pacific Parrotlets use both friction between the pads of their feet in addition to talons to grip the perch [18]. Furthermore, they found that depending on the perch material and diameter, the relative importance of the talons or pads of their feet changes. In some cases, the force due to the talons can reach up to eight times that of the pads of their feet.
For many years, it was believed that birds utilize passive mechanisms to hold onto perches for long periods of time or while sleeping [28]. The basis of this hypothesis was a passive mechanism in the bird's claw consisting of tendons that pass behind the ankle and are pulled when the birds squat. This would enable birds to remain perched when resting or even sleeping. However, Galton et al. cast doubt on this theory using experimental data showing that European Starlings could adapt to surgically severed tendons and sleep on perches [29]. They also found that when anesthetized, the birds would fall from perches, which would not happen if there were a passive mechanism for perching. Nonetheless, many research groups have developed quadrotor perching mechanisms based on the incorrect tendon notion [1, 30], indicating that the mechanism could be useful even if birds are not utilizing it.
The mechanisms designed by these groups use stiff link segments (phalanges) connected by flexible joints [1, 30]. The tendon is an inelastic cable that transmits the weight of the vehicle into the phalanges, providing the actuation force. These underactuated designs allow the individual fingers to passively conform around objects of different shapes. The two designs in [1, 30] differ in the path of the tendon. As an alternative to using tendons to achieve passive perching, other authors [31] used a slider mechanism in combination with Fin Ray(r)[21] digits in an opposing arrangement [31] that wrap around convex structures. Indeed, Fin Ray(r) fingers are also used in many compliant grippers [31, 32, 33, 34, 35, 36]. Under load, the Fin Ray(r) finger structure bends in the opposite direction of the applied force and thus conforms to the object. In the perching mechanism developed by Chi et al., digits were equipped with aluminum talons at the extremities of the Fin Ray(r) structure [31]. The weight of the vehicle would close the Fin Ray(r) digits and press the talons into the perch. A similar active Fin Ray(r) claw has been developed for flapping wing MAVs, which weighs only 45g [37]. This claw was shown to be effective on a variety of different-shaped perches, on slanted perches, and when approaching the perch from an angle. It is, however, quite limited in that any deviation from perfectly vertical perching after landing would cause the claw to fail.
Another approach to passive perching is the Sarrus mechanism [38], where the weight of the UAV causes hinged plates to fold on themselves. These closing hinges are rigidly connected to plates such that when they close, the plates are brought together, squeezing the perch. At takeoff, the body weight is lifted off the mechanism, allowing the hinged plates to open and passively release the perch. While the plates are effective at transmitting the forces to the perch, the robot is likely to slip when leaning over. This perching mechanism does not use tendons to transfer forces.
To date, none of the claws developed for perching have also been used for walking. Indeed, many of these passive claws only actuate themselves to curl around an object, which would inhibit walking performance. The leg and claw design described here leverages the weight of the vehicle for passive perching, but also enables stable perching at an incline, and most importantly allows the flying vehicle to walk on the ground.
## III Leg and Claw Design
The mechanical design combines a Hoberman linkage [39] leg with a Fin Ray(r) claw. The combined structure has two stable configurations, perched and hyperextended. In the
Fig. 1: Sample mission of a flying-perching-walking robot for search and rescue. The inset views show comparisons of the feet of a purple finch and the robotic equivalent in different configurations. Photo credits: Rejean Aline; Claude Laprise; Olga Pink - Adobe Stock.
perched configuration, the Fin Ray(r) claw is curled inward, and in the hyperextended configuration, the claw is stretched out. We switch between perched and hyperextended modes by positioning the Hoberman linkage leg in a collapsed (perched) or stretched form (hyperextended), respectively. When moving from one configuration to another, the Fin Ray(r) claw passes through a singular point resembling a nearly flat claw. The moment the claw passes this singularity, the weight of the UAV will passively push the claw to either configuration.
The claw's design is inspired by the anisodactyl toe arrangement found in perching birds, where three digits (toes) face forward and one backward. To simplify the design, our claw uses only two front digits. The palm of the claw is made of a flexible, 3D-printed TPU (Thermoplastic Polyurethane) toe pad (red in Fig. 2a). The ribs, 3D-printed in ABS (Acrylonitrile Butadiene Styrene), connect the upper surface of the toe pad to the outer-links of the Fin Ray(r) mechanism (blue structure in Fig. 2a). To increase perching stability, the claw includes talons (curved hooks) integrated at both extremities of the rib structure (inset of Fig. 2a). When a horizontal force is applied to the top of the central rib of each digit, the Fin Ray(r) ribs and outer-links transfer the loads along the digit, causing the digit to curl. Depending on the direction of the horizontal force, the digits will either curl down and close on a perch or stretch up
Fig. 2: Leg and claw design. **(a)** Geometric parameter sizing of the claw is done in the perched configuration, as shown in the top diagram. The CAD of the claw, comprising two front and one back digit with integrated talons, was sized based on the calculations presented in the bottom diagram. **(b)** Photos of the assembled claw in perched (left) and hyperextended (right) modes. **(c)** Mechanical advantage profiles of the Hoberman linkage over a range of base rib angles (\(\epsilon\)) for different \(\gamma\) values. The two different shadings show whether the claw is within the perched or the hyperextended region. The vertical black line at -5 degrees indicates the singularity point where the claw switches between the two modes. The one at -30 degrees shows the limit of hyperextension reached passively for the selected claw geometry (\(\gamma=150^{\circ}\)). The three other vertical black lines indicate the \(\epsilon\) values for three different perch diameters, \(30\,\mathrm{mm}\), \(40\,\mathrm{mm}\), and \(50\,\mathrm{mm}\). **(d)** Diagram and geometric parameters of the linkage.
into a hyperextended mode creating a foot shape suitable for walking (left and right respectively in Fig. 2b).
The geometry of the Hoberman linkage regulates the curling angle of the Fin Ray(r) claw. In particular, the angle of the linkage (\(\gamma\) in Fig. 2d) affects the magnitude of the Hoberman force and the resulting curling angle of the claw (\(F_{hob}\) and \(\epsilon\) in Fig. 2d, respectively). The output force imparted on the Fin Ray(r) claw by the weight of the UAV can be modeled with:
\[F_{hob}=\frac{\sin\epsilon+\cos\epsilon\left(\frac{\mu_{\mathrm{LC}}}{LC} \left(\frac{\cos\beta}{\sin\delta}+\tan\alpha\frac{\sin\beta}{\sin\delta} \right)+\frac{\cos\delta}{\sin\delta}\right)}{\sin\varphi}mg/2, \tag{1}\]
where AB, BCD, and DE are the individual links and \(\alpha\), \(\beta\), \(\gamma\), \(\delta\), \(\epsilon\), and \(\phi\) are the corresponding angles (Fig. 2d). The two most critical angles are \(\gamma\) and \(\epsilon\), and the rest are geometrically dependent on these two. This is because \(\gamma\) is set when designing the claw and does not vary, and \(\epsilon\) is a direct measure of how far the claw can curl or stretch. Using equation 1, we calculated \(F_{hob}\) for a variety of \(\gamma\) and \(\epsilon\) values. \(\gamma\) was varied between 90 and 180 degrees in increments of 10 degrees. A \(\gamma\) value of 150 degrees was selected for the claws developed in this work (Fig. 2c). This is because, at 150 degrees, the claw balances an ability to reach the hyperextended state (dashed line at \(\epsilon=-30\) degrees in Fig. 2c) as well as being able to curl around a perch of \(30\,\mathrm{mm}\) diameter (dashed line at \(\epsilon=-95\) degrees in Fig. 2c). In addition, the maximum achievable Hoberman force occurs at \(\gamma=150\) degrees. Changing \(\gamma\) shifts the achievable range of \(\epsilon\) values. For example, increasing \(\gamma\) would allow the claw to reach a higher maximum \(\epsilon\), which corresponds to being able to perch on smaller diameter perches (the right end of the red \(\gamma=180\) degrees line in Fig. 2c is at \(\epsilon=95\) degrees, whereas the right end of the blue \(\gamma=110\) degrees line is at only 35 degrees). The drawback is that the higher \(\gamma\) would limit the hyperextended stretching (notice that the red line in Fig. 2c cannot even reach the hyperextended state), which is required for walking. On the other hand, decreasing \(\gamma\) would allow for more hyperextended stretching, but would limit the range of perching diameters.
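For reference, Eq. (1) can be transcribed directly into a small numerical routine. In the sketch below, the link-length ratio appearing in Eq. (1) (whose exact form is not legible in the printed equation) and the geometric relations giving \(\alpha\), \(\beta\), \(\delta\), and \(\phi\) from the chosen \(\gamma\) and \(\epsilon\) are left as inputs supplied by the caller; they follow from the linkage geometry of Fig. 2d and are assumptions of this sketch, not part of the original derivation.

```python
import numpy as np

def hoberman_force(eps, alpha, beta, delta, phi, geom_ratio, mg):
    """Direct transcription of Eq. (1): force imparted on the Fin Ray claw.

    All angles are in radians; `geom_ratio` stands for the link-length ratio
    multiplying the first bracketed term in Eq. (1); mg is the vehicle weight.
    """
    bracket = (geom_ratio * (np.cos(beta) / np.sin(delta)
                             + np.tan(alpha) * np.sin(beta) / np.sin(delta))
               + np.cos(delta) / np.sin(delta))
    return (np.sin(eps) + np.cos(eps) * bracket) / np.sin(phi) * mg / 2.0
```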
A single \(10.2\,\mathrm{cm}\) long claw and Hoberman leg weighs only \(23\,\mathrm{g}\). When servos, leg linkages, and a controller are added to the leg to enable walking, the total weight goes up to \(57\,\mathrm{g}\). The claws are used as a pair, so the whole system comes to \(114\,\mathrm{g}\). This is considerably lighter than the designs by Doyle and Nadan (\(478\,\mathrm{g}\) and \(178\,\mathrm{g}\), respectively) [1, 30].
## IV Perching Characterization
Variations in the perching approach and maneuver can leave the UAV tilted to one side of the perch. To be robust while perching, the claw must maintain its grip even when the UAV is tilted. We characterized this robustness by measuring the squeezing force a single claw can exert, how far the UAV can tilt before falling, and how much torque the claw can exert before slipping. We also demonstrated robust perching in flight with a UAV equipped with a set of two claws.
To measure the squeezing force of the claw, we used a split perch setup similar to the one in [18]. We found that the squeezing force is directly proportional to body weight and perch diameter (Fig. 3c). This matches the predictions of the analysis of the Hoberman linkage (Fig. 2c), which indicate that the force imparted on the Fin Ray(r) device by the Hoberman linkage increases with increasing perch diameter. For comparison, this trend of lower squeezing forces on smaller diameter perches is the opposite of what was found by Roderick et al. [18], who noted that the birds squeezed harder on smaller perches. Furthermore, the birds could squeeze up
Fig. 3: **(a)** Image of the split perch experimental setup. Two halves of the split perch, 3D-printed in ABS, encompass the ATI Nano17 loadcell. Weights are mounted to a wooden rod, which press on the talon-less claw, which transforms the weight into a squeezing force, which in turn is measured by the loadcell. **(b)** Diagram of the claw in perched configuration. A static model is developed to approximate the squeezing force (see Supplementary Text for more details). **(c)** Plot of the estimated and measured squeezing force as a function of weight. Experiments are done for three different perch diameters with weights corresponding to different possible vehicle weights in steps of \(100\,\mathrm{g}\). The shaded regions represent the standard deviation of 10 measurements.
Figure 4: **(a)** Image of the slip resistance experimental setup. The claw is vertically placed on a solid birch perch (\(\theta=0\)) with weights mounted at a distance of \(200\,\mathrm{mm}\) above the rotation axis of the perch. Using an Instron universal testing machine, a push rod pushes the arm and increases the tilting angle \(\theta\). A 6-axis load cell (ATI Gamma), connected to one end of the perch, measures the moment created by the claw. Experiments are considered quasi-static due to the slow pushing rate of \(5\,\mathrm{mm}\mathrm{s}^{-1}\). **(b)** Schematic force diagram of the model of the experiment (see Supplementary Text for more details). This illustration shows how the moment due to the mass of the UAV (\(M_{w}\)) is counteracted by the moment due to the friction of the claw (\(M_{f}\)) grasping the perch. **(c)** Maximum tilting angle and moment measurements for different weights and perch diameters. The plot at the top shows variation in the moment over time for the case of \(300\,\mathrm{\SIUnitSymbolDegree}\) weight on a \(40\,\mathrm{mm}\) perch. The shaded regions represent the standard deviation of repeated experiments. Ten tests were done for each weight and perch diameter combination, with the orientation of the claw being switched after five tests to capture effects due to its asymmetric design. We collected data from the load cell at a frequency of \(1000\,\mathrm{Hz}\) and used a moving average filter with a window size of \(100\) samples to remove noise. Finally, the experiments were repeated with a talon-less claw to understand the effects of the talons.
to two times their body weight. The maximum normalized squeezing force generated by the passive claw was just over one, with 100 g of weight on the 50 mm perch. Reaching a squeezing force comparable to the birds would require either the addition of actuators or modifying the Hoberman link to provide more mechanical advantage.
The results of the squeezing force characterization are compared to a static force model, detailed in Supplementary Text. The assumptions made in modeling the claw prove reasonable, as the model predicts well the trends in the experimental results (Fig. 3c). In addition, it correctly estimates that the increase in squeezing force from 40 mm to 50 mm is less than the increase in squeezing force from 30 mm to 40 mm. However, the prototype claw overperforms the model at lower weights, especially for the 50 mm perch. Under these lower weights, the claw experiences less deformation in the toe pad and does not perfectly conform to the shape of the perch, which is explicitly not accounted for in the model.
We investigated how far off-center the claw can reach before slipping by setting up a benchtop test apparatus to measure the maximum sustainable angle and corresponding moment (Fig. 4a). Representative moment data from one of the experiments is shown in Fig. 4c alongside predictions from the previously used static force model, detailed in Supplementary Text. There are three main phases of the test. In the beginning, the wooden rod is vertical, and the push rod is not in contact with the wooden rod. At \(\theta=0\), the push rod comes into contact with the wooden rod, marking the beginning of the second phase. In this phase, the wooden rod is pushed, and the moment increases. During this phase, static friction holds the claw in position against the increasing moment. The measured increase in \(\theta\) during this period results from the claw deforming. At about 2 degrees, the moment levels off and remains constant. During this phase, the claw is slipping, and dynamic friction is holding the claw on the perch. Once the claw reaches about 6.5 degrees, the moment due to the weight begins increasing again. Lastly, there is a sharp drop in the moment as the claw finally slips, and just before 8 degrees, the wooden rod falls and hits a catch.
Increasing the weight causes a greater measured moment before slipping, as expected based on the results of the squeezing test (Fig. 4c). However, the maximum angle is less affected by weight change and remains almost constant. The claw performs better on larger perch sizes due to increased squeezing force with perch diameter. The model predicts these trends well but with minor errors attributable to decreasing model accuracy at lower weights, as previously discussed. Unlike squeezing force experiments, the forces at the joints of the forward and backward digits differ in the tilting tests. At its current state, the claw can sustain weights up to 400 g (corresponding to twenty times its own weight). The 3D-printed joints and the claw would require reinforcement and design optimization to
Fig. 5: Perching experiment with a manually piloted quadcopter equipped with a set of two claws. Plots show the vertical distance from the perch and the tilting (pitch) angle over time. The snapshots from the video correspond to different instances of the perching maneuver. Instant (i) shows the UAV in flight before the touchdown, and (ii) is when it makes contact with the wooden perch. Instant (iii) highlights when the claws are fully closed and firmly grip the perch. The drop in the vertical distance indicates this behavior. The back-and-forth shift in the pitch is due to the unbalanced thrust from the propellers during the process of slowly cutting off the thrust. The UAV tilts back when the thrust is significantly reduced, as shown at instant (iv). Finally, when the propellers are completely stopped (v), the UAV still remains perched, resting with a pitch angle of 11.6 degrees.
reach weights beyond this point. With the addition of talons to the claw, there is a considerable increase (50-150%) in the achievable angle. However, it comes with an increase in variability between runs. This is due to the talons getting caught in asperities, which are randomly dispersed in the grain of the wooden perch.
A custom-built, manually controlled quadcopter UAV was equipped with two claws with talons, weighing 700 g in total. To demonstrate perching in action, we conducted four experiments with the UAV taking off from the ground, briefly flying around, and carefully landing on a 50 mm diameter birch perch. The UAV successfully perched in three trials, only losing balance once due to a high tilting angle resulting from a correspondingly high approach angle. Figure 5 shows representative data from a successful perching trial. It also presents data on vertical distance (altitude) from the perch, the pitch angle (corresponding to \(\theta\) in Fig. 4a-b), and snapshots of notable events during the perching maneuver. The experiments were carried out indoors, and the data were collected using an OptiTrack motion capture system. We also investigated the limits of UAV tilting. To do this, the UAV was set on the perch and slowly pushed by hand until it fell off. After conducting the test 10 times, the UAV slipped at an average of 19.4 degrees with a standard deviation of 0.65 degrees.
## V Legged Locomotion Characterization
To demonstrate the stable walking ability of the proposed leg and claw system, we added actuation to the claws attached to the quadcopter used for the perching experiment (Fig. 6a). For each leg, two servo motors (KST X08), located directly at the joints, drive the hip and knee joints. The upper and lower limb lengths are 7 cm and 10 cm, respectively. The lower limb length, determined considering the torque limitation of the servo motors, cannot be shorter than 10 cm due to the length of the Hoberman linkage and the claw. The distance between the two legs is 10.5 cm. The walking gait is generated
Fig. 6: **(a)** Close-up picture of the leg and walking control system. **(b)** Leg and foot trajectories, illustrating the positions of the upper limb, lower limb, and foot for a walking cycle. The UAV is not drawn to scale. **(c)** Joint angles for the hip and knee joints. Solid and dashed lines represent left and right leg trajectories with a 50% duty cycle phase difference.
by a position control method following a half-ellipse trajectory as shown in Fig. 6 with a phase difference of 50%. The flat segment provides level forward motion of the center of gravity (COG) while the foot is in contact with the ground, and the arc segment retracts the leg without touching the ground. A Sparkfun Pro Micro board maps the half-ellipse trajectory to angular position values and commands the servo motors to follow them.
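A rough sketch of this gait-generation step is given below: a half-ellipse foot path is sampled and converted to hip and knee angles through standard planar two-link inverse kinematics for the 7 cm / 10 cm limbs. The ellipse radii, step counts, and angle conventions are illustrative assumptions, not the firmware running on the robot.

```python
import numpy as np

L1, L2 = 0.07, 0.10  # upper and lower limb lengths [m]

def half_ellipse(n, stride=0.08, clearance=0.03, height=-0.15):
    """Foot positions for one cycle: flat stance segment plus elliptical swing arc."""
    xs = np.linspace(stride / 2, -stride / 2, n // 2)      # stance: foot sweeps backward
    stance = np.stack([xs, np.full(n // 2, height)], axis=1)
    t = np.linspace(np.pi, 0.0, n - n // 2)                # swing: arc from back to front
    swing = np.stack([(stride / 2) * np.cos(t),
                      height + clearance * np.sin(t)], axis=1)
    return np.vstack([stance, swing])

def leg_ik(x, y):
    """Planar two-link inverse kinematics: foot (x, y) in the hip frame -> (hip, knee)."""
    cos_knee = (x * x + y * y - L1**2 - L2**2) / (2 * L1 * L2)
    knee = np.arccos(np.clip(cos_knee, -1.0, 1.0))
    hip = np.arctan2(y, x) - np.arctan2(L2 * np.sin(knee), L1 + L2 * np.cos(knee))
    return hip, knee

joint_angles = np.array([leg_ik(x, y) for x, y in half_ellipse(60)])  # one gait cycle
```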
The claw design can provide a stable gait for a bipedal system when the claw is in the hyperextended mode. The hyperextended feet provide extra contact points with the ground and, consequently, enlarge the support polygon (Fig. 7b). Fig. 7 shows the support polygon formed by the extremities of the claws when both legs are on the ground. The left and right diagrams present the support polygons when the legs are in a neutral position and when the legs are fully stretched forward and backward, respectively. The orange dashed lines indicate where the center of gravity would need to be located in the absence of the claws if the legs were point feet. With a stride length of 8 cm, the support polygon changes from a rectangular stance area of 99.75 cm\({}^{2}\) to a parallelogram of 115.5 cm\({}^{2}\) during walking.
The robotic platform was connected to a boom support system and walked around a pivot axis in a circular trajectory (Fig. 7). The boom system was installed for two reasons. The servo motors provide degrees of freedom only in the sagittal plane. Thus, the current setup could only provide stability in pitch angle. Another reason was to reduce the resultant weight with a counterweight due to the power limitation of the servo motors that drove the leg mechanism. The position and orientation of the robot were collected using an OptiTrack motion capture system to evaluate whether the claws' hyperextended configuration can provide stable terrestrial locomotion. Five walking trials were recorded at 120 Hz. On the other end of the boom, weights were installed to counterbalance the system, and the resultant effective weight of the system was 200 g. The distance traveled data are presented in Fig. 7c. The platform covered 0.87 m in 10 seconds at 1.1 Hz of walking frequency with the hyperextended feet. It could not walk or even stand when the claws were closed due to the limited support polygon area. Figure 7c also presents the pitch angle data during the walking, which shows a repeating pattern for each step. The pitch angle started around -8 degrees and had two local maximums of around 0 degrees for each step. This pattern was repeatedly shown for every one-second-long step, and the average pitch stayed around -5 degrees during the 10 seconds of walking. The pitch angle range did not diverge from the start of the experiment but stayed between -12 degrees and 5 degrees. The stability margin is calculated based on the center of gravity position and the support polygon size. It indicates the maximum pitch angle beyond which the robot tips over. The maximum allowed pitching angle of the platform was \(\pm\)20.1 degrees, and the platform stayed within this range throughout the experiment. These results indicate that the hyperextended configuration provides stable walking in pitch because the average pitch angle does not change
Fig. 7: **(a)** Image of the walking experimental setup. **(b)** Support polygons with hyperextended claws. **(c)** Distance traveled and pitch angle data of walking experiments with the hyperextended feet. The grey-shaded regions represent the left or right claw stance phases. In **(b)**, the left is when the two feet are stretched out to the maximum separation distance, and the right is when the legs are in a natural standing position. The hyperextended claws expand the support polygon by providing extra contact points at the tips of the claws.
significantly in the 10 seconds of walking.
## VI Conclusions
The claws presented in this manuscript enable, for the first time, passive perching, walking, and flying of a UAV. The lightweight Hoberman linkage legs and Fin Ray(r) claws are capable of holding a UAV of up to 700 g. We have presented important sizing relationships and static models for calculating expected performance.
Although the claws developed here are completely passive, actuators could be used to further increase their squeezing force allowing them to be scaled up for use on heavier UAVs. These actuators would mimic the muscles that birds use to grasp perches. Future iterations could also include actuators to automatically switch leg modes between walking and perching.
These claws widen the range of possibilities of multimodal UAVs by enabling walking and perching. In particular, the use of these claws in semi- or fully-erect walking robots gives them advantages over sprawling or wheeled alternatives in navigating cluttered environments. This is because their longer legs are able to stride over larger obstacles. The perching capability allows them to remain in position for long periods of time to recharge batteries, conduct observations, or minimize noise and power consumption. This will lead to future robots with a larger range of capabilities, making them more versatile tools for search and rescue operations.
## VII Acknowledgements
The authors are grateful for the engineering help provided by Olexandr Gudozhnik, the useful feedback from Florian Achermann, and the piloting skills of Przemyslaw Kornatowski and Victor Casas Rochel.
|
2309.02367
|
Minimal modal logics, constructive modal logics and their relations
|
We present a family of minimal modal logics (namely, modal logics based on
minimal propositional logic) corresponding each to a different classical modal
logic. The minimal modal logics are defined based on their classical
counterparts in two distinct ways: (1) via embedding into fusions of classical
modal logics through a natural extension of the G\"odel-Johansson translation
of minimal logic into modal logic S4; (2) via extension to modal logics of the
multi- vs. single-succedent correspondence of sequent calculi for classical and
minimal logic. We show that, despite being mutually independent, the two
methods turn out to be equivalent for a wide class of modal systems. Moreover,
we compare the resulting minimal version of K with the constructive modal logic
CK studied in the literature, displaying tight relations among the two systems.
Based on these relations, we also define a constructive correspondent for each
minimal system, thus obtaining a family of constructive modal logics which
includes CK as well as other constructive modal logics studied in the
literature.
|
Tiziano Dalmonte
|
2023-09-05T16:29:34Z
|
http://arxiv.org/abs/2309.02367v1
|
# Minimal modal logics, constructive modal logics and their relations
###### Abstract
We present a family of minimal modal logics (namely, modal logics based on minimal propositional logic) corresponding each to a different classical modal logic. The minimal modal logics are defined based on their classical counterparts in two distinct ways: (1) via embedding into fusions of classical modal logics through a natural extension of the Godel-Johansson translation of minimal logic into modal logic S4; (2) via extension to modal logics of the multi- vs. single-succedent correspondence of sequent calculi for classical and minimal logic. We show that, despite being mutually independent, the two methods turn out to be equivalent for a wide class of modal systems. Moreover, we compare the resulting minimal version of K with the constructive modal logic CK studied in the literature, displaying tight relations among the two systems. Based on these relations, we also define a constructive correspondent for each minimal system, thus obtaining a family of constructive modal logics which includes CK as well as other constructive modal logics studied in the literature.
_Keywords--_ Minimal modal logic, constructive modal logic, modal companion, sequent calculus, neighbourhood semantics
## 1 Introduction
Although modal logics are usually defined as extensions of classical logic, significant attention has also been devoted to the analysis of modalities over a nonclassical basis, such as relevant [4, 11, 15, 29, 39, 40], linear [17, 30, 36] or other substructural logics [6, 24, 37]. In this context, a major role is played by intuitionistic logic, many modal extensions of which have been studied with motivations ranging from philosophical or legal reasoning to computer science applications.
By analogy with intuitionistic connectives, modalities \(\Box\) and \(\Diamond\) over intuitionistic logic are usually assumed to be not interdefinable. This peculiarity allows for the definition of a wide variety of intuitionistic modal systems, since
|
2304.13296
|
$GW$ density matrix to estimate self-consistent $GW$ total energy in
solids
|
The $GW$ approximation is a well-established method for calculating
ionization potentials and electron affinities in solids and molecules. For
numerous years, obtaining self-consistent $GW$ total energies in solids has
been a challenging objective that is not accomplished yet. However, it was
shown recently that the linearized $GW$ density matrix permits a reliable
prediction of the self-consistent $GW$ total energy for molecules [F. Bruneval
et. al. J. Chem. Theory Comput. 17, 2126 (2021)] for which self-consistent $GW$
energies are available. Here we implement, test, and benchmark the linearized
$GW$ density matrix for several solids. We focus on the total energy, lattice
constant, and bulk modulus obtained from the $GW$ density matrix and compare
our findings to more traditional results obtained within the random phase
approximation (RPA). We conclude on the improved stability of the total energy
obtained from the linearized $GW$ density matrix with respect to the mean-field
starting point. We bring compelling clues that the RPA and the $GW$ density
matrix total energies are certainly close to the self-consistent $GW$ total
energy in solids if we use hybrid functionals with enriched exchange as a
starting point.
|
Adam Hassan Denawi, Fabien Bruneval, Marc Torrent, Mauricio Rodríguez-Mayorga
|
2023-04-26T05:34:21Z
|
http://arxiv.org/abs/2304.13296v4
|
# \(Gw\) density matrix to estimate self-consistent \(Gw\) total energy in solids
###### Abstract
The \(GW\) approximation is a well-established method for calculating ionization potentials and electron affinities in solids and molecules. For numerous years, obtaining self-consistent \(GW\) total energies in solids has been a challenging objective that is not accomplished yet. However, it was shown recently that the linearized \(GW\) density matrix permits a reliable prediction of the self-consistent \(GW\) total energy for molecules [F. Bruneval _et. al._ J. Chem. Theory Comput. **17**, 2126 (2021)] for which self-consistent \(GW\) energies are available. Here we implement, test, and benchmark the linearized \(GW\) density matrix for several solids. We focus on the total energy, lattice constant, and bulk modulus obtained from the \(GW\) density matrix and compare our findings to more traditional results obtained within the random phase approximation (RPA). We conclude on the improved stability of the total energy obtained from the linearized \(GW\) density matrix with respect to the mean-field starting point. We bring compelling clues that the RPA and the \(GW\) density matrix total energies are certainly close to the self-consistent \(GW\) total energy in solids if we use hybrid functionals with enriched exchange as a starting point.
## I Introduction
While a few self-consistent \(GW\) calculations (sc\(GW\)) for the band gaps of real solids are available [1; 2; 3], sc\(GW\) total energies are still not available today, certainly because of their high computational cost. However, there exist hints that sc\(GW\) could be accurate: first, the results on the homogeneous electron gas are extremely good [4; 5; 6]; second, the random-phase approximation (RPA) which derives from the same family has been shown to yield total energies capable of describing the tenuous van der Waals interactions [7; 8; 9; 10; 11; 12; 13]. Unfortunately, sc\(GW\) calculations are very involved in real solids. That is why it would be highly desirable to obtain sc\(GW\) quality energies without actually performing the cumbersome self-consistency.
Pursuing the quest for a non-self-consistent approximation to sc\(GW\), a series of studies have been published in the early 2000s [14; 15; 16; 17; 18; 19]. More recently, some of us proposed an alternative non-self-consistent total energy expression based on the \(GW\) density matrix, labeled \(\gamma^{GW}\)[20; 21]. Benchmarks on small molecules for which sc\(GW\) are possible [22; 23] confirmed the remarkable properties of the \(\gamma^{GW}\) total energies: although it is evaluated non-self-consistently using a generalized Kohn-Sham (gKS) input, the resulting total energy remains quite insensitive to the gKS choice and approximates very well the reference sc\(GW\) total energies.
In this context, this study focuses on the evaluation of total energies in solids. We port to the solid systems the \(\gamma^{GW}\) total energy with the sensible prospect that it will remain a good approximation to the sc\(GW\) total energy. In doing so, we obtain the correlated density matrix \(\gamma^{GW}\) as a physically meaningful intermediate object with unique properties due to correlation.
When considering solid systems several technical questions have to be addressed. Firstly, the closed formulas obtained for finite systems [20; 24] have to be adapted for numerical efficiency. Secondly, the pseudopotentials [25] that are customary in the plane-wave basis codes are typically designed to be used in conjunction with standard semi-local approximations to density-functional theory. Therefore, it is necessary to investigate which type of pseudopotential is suitable for obtaining consistently \(GW\)-type total energies.
With this, we will study the performance of the \(\gamma^{GW}\) total energy for solids. We will compare it to the popular RPA total energy which may be derived as the \(GW\) approximation of the Klein functional [26; 14; 27]. As all these calculations are performed as a one-shot procedure, memory about the mean-field starting point is present. We will particularly investigate this issue by varying the content of exact exchange \(\alpha\) in the hybrid functional PBEh(\(\alpha\)), zero being the standard PBE [28] and 0.25 yielding regular PBE0 [29]. It is therefore always necessary to specify the exact procedure used to obtain a one-shot energy. We do so here using the @ notation (e.g. RPA@PBE stands for the RPA total energy evaluated with self-consistent PBE inputs).
The article is organized as follows: In Sec. II, we recapitulate the theoretical foundation for the \(GW\) density matrix and derive the working equations; In Sec. III, we detail the technical aspects of the implementation in a plane-wave code and assess the pseudopotential choice; Sec. IV shows some of the unique properties of the \(GW\) density matrix, exemplified with bulk silicon; In Sec. V, we benchmark the total energies obtained with the different approximations with a test set of 7 standard covalent semiconductors and one layered material. Finally, Sec. VI concludes our work.
## II Theory and working formulas
### Green's function-derived density matrix
In the many-body perturbation theory, the central quantity, namely the one-particle Green's function \(G(\mathbf{r},\mathbf{r}^{\prime},\omega)\), contains a great deal of information. In particular, by virtue of the Galitskii-Migdal formula [30], it is sufficient to calculate the total energy of an electronic system.
Also it straightforwardly yields the density matrix \(\gamma(\mathbf{r},\mathbf{r}^{\prime})\)
\[\gamma(\mathbf{r},\mathbf{r}^{\prime})=-\frac{\mathrm{i}}{2\pi}\int_{-\infty }^{+\infty}d\omega e^{i\eta\omega}G(\mathbf{r},\mathbf{r}^{\prime},\omega), \tag{1}\]
where \(\eta\) is a vanishing positive real number that enforces the sensitivity to the occupied manifold of the time-ordered Green's function \(G\). Hence, the electronic density can be obtained as the diagonal: \(\rho(\mathbf{r})=\gamma(\mathbf{r},\mathbf{r})\).
Therefore, the Green's function methods, such as the \(GW\) approximation, can give access to an approximate density matrix.
### Linearized Dyson equation
In the many-body perturbation theory, the overall strategy is to connect the exact Green's function \(G\) to a known Green's function \(G_{0}\). The connection between the two is ensured by the complicated self-energy \(\Sigma_{xc}\) that is in charge of all the correlation effects.
The expression of \(G_{0}\) that is derived from a mean-field approach (Kohn-Sham, Hartree-Fock, etc.), is simple:
\[G_{0}(\mathbf{r},\mathbf{r}^{\prime},\omega)=2\sum_{\mathbf{ki}}\frac{\varphi_ {\mathbf{ki}}(\mathbf{r})\varphi_{\mathbf{ki}}^{*}(\mathbf{r}^{\prime})}{ \omega-\epsilon_{\mathbf{ki}}\pm\mathrm{i}\eta}, \tag{2}\]
where \(\varphi_{\mathbf{ki}}(\mathbf{r})\) and \(\epsilon_{\mathbf{ki}}\) are the mean-field wavefunctions and eigenvalue for state \(i\) at k-point \(\mathbf{k}\) and the small positive \(\eta\) ensures the correct location of the poles for a time-ordered function (above the real axis for occupied states \(\epsilon_{\mathbf{ki}}<\mu\) and below the real axis for empty states \(\epsilon_{\mathbf{ki}}>\mu\), \(\mu\) being the Fermi level). Spin-restricted calculations are assumed here and the factor 2 accounts for it.
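To make the pole prescription of Eq. (2) concrete, here is a minimal NumPy sketch of the diagonal of \(G_{0}(\omega)\) in the mean-field orbital basis at a single k-point; the function name and inputs are illustrative and not taken from any production code.

```python
import numpy as np

def g0_diag(omega, eps, mu, eta=1e-3):
    """Diagonal of G0(omega) at one k-point, following Eq. (2):
    the pole is shifted above the real axis for occupied states
    (eps < mu) and below it for empty states (time-ordered G).
    The factor 2 accounts for spin-restricted calculations."""
    eps = np.asarray(eps, dtype=float)
    sign = np.where(eps < mu, -1.0, +1.0)   # -i*eta (occupied), +i*eta (empty)
    return 2.0 / (omega - eps + 1j * eta * sign)

# Toy example: three occupied and two empty levels (arbitrary units)
print(g0_diag(omega=0.1, eps=[-0.5, -0.3, -0.2, 0.4, 0.9], mu=0.0))
```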
Then the connection from \(G_{0}\) to \(G\) is made with the so-called Dyson equation:
\[G=G_{0}+G_{0}(\Sigma_{xc}-V_{xc})G, \tag{3}\]
where \(V_{xc}\) is the exchange-correlation operator (possibly including non-local exchange); the space and frequency indices have been omitted for conciseness.
The self-energy \(\Sigma_{xc}\) is itself a functional of the exact \(G\). When approximating \(\Sigma_{xc}\) and \(G\), only a self-consistent solution ensures the conservation of the electron count [31]. In particular, for the \(GW\) approximation of \(\Sigma_{xc}\) that is most often evaluated with a one-shot procedure, the violation of electron conservation is well documented [32; 33; 18; 21].
In a previous study of ours [21], it was demonstrated analytically and verified numerically that linearizing the Dyson equation completely cures the problem of electron count conservation in the \(GW\) approximation. The linearized Dyson equation (LDE) reads
\[G=G_{0}+G_{0}(\Sigma_{xc}-V_{xc})G_{0}, \tag{4}\]
where the last \(G\) in Eq. (3) has simply been replaced by \(G_{0}\). The LDE is customary in the context of the Sham-Schlüter equation [34].
This electron-conserving equation is then applied with the \(GW\) approximation to \(\Sigma_{xc}\).
### \(GW\) self-energy based density matrix
The \(GW\) approximation [35] is simply sketched here, since it has been the subject of numerous detailed reviews [36; 37; 38].
The screened Coulomb interaction \(W\) is defined with the Dyson-like equation:
\[W=v+v\chi_{0}W, \tag{5}\]
where \(\chi_{0}=-2\mathrm{i}GG\) is the non-interacting polarizability and \(v\) is the usual bare Coulomb interaction.
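As an illustration of Eq. (5), a minimal sketch of how \(W\) can be obtained from \(\chi_{0}\) and \(v\) by direct matrix inversion at a fixed wavevector and frequency; this is a schematic representation, not the algorithm used in ABINIT.

```python
import numpy as np

def screened_coulomb(chi0, v):
    """Solve W = v + v chi0 W  =>  W = (1 - v chi0)^(-1) v,
    with chi0 and v given as square matrices in some basis
    (e.g. plane waves at fixed q and imaginary frequency)."""
    identity = np.eye(v.shape[0])
    return np.linalg.solve(identity - v @ chi0, v)

# Toy 2x2 example with illustrative numbers
v = np.diag([10.0, 4.0])
chi0 = np.array([[-0.02, 0.005], [0.005, -0.03]])
print(screened_coulomb(chi0, v))
```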
The \(GW\) self-energy then reads
\[\Sigma_{xc}=\mathrm{i}GW. \tag{6}\]
It is convenient to decompose the self-energy into pure exchange and correlation. The exchange part \(\Sigma_{x}\) is static, whereas the correlation part carries the frequency dependence \(\Sigma_{c}(\omega)\). These quantities are routinely obtained with a one-shot procedure in standard periodic codes [39; 40].
Some general properties of the density matrix are detailed in App. A. For instance, it is demonstrated that the density matrix can be fully characterized with a single k-point index within the first Brillouin zone, even though it is a function of two spatial indices.
Now let us focus on the \(GW\) density matrix. It is handy to project into the mean-field orbitals \(|\mathbf{ki}\rangle\), which
form a valid orthogonal basis:
\[\gamma_{\mathbf{k}ij} = \langle\mathbf{k}i|\gamma|\mathbf{k}j\rangle \tag{7}\] \[= \int d\mathbf{r}d\mathbf{r}^{\prime}\varphi_{\mathbf{k}i}^{*}( \mathbf{r})\gamma(\mathbf{r},\mathbf{r}^{\prime})\varphi_{\mathbf{k}j}(\mathbf{ r}^{\prime}).\]
The first term on the right-hand side of Eq. (4) is \(G_{0}\). Let us insert it in Eq. (1) and project on the orbital basis to obtain the spin-summed density matrix elements
\[\gamma_{\mathbf{k}ij}^{\text{gKS}}=2\delta_{ij}\theta(\mu-\epsilon_{\mathbf{ k}i}). \tag{8}\]
This expression has been obtained by closing the contour in the upper part of the complex plane so that only the poles located above the real axis have survived.
A similar technique can be used for the static terms in the right-hand part of Eq. (4), \(G_{0}(\Sigma_{x}-V_{xc})G_{0}\):
\[\Delta\gamma_{\mathbf{k}ij}^{\text{HF}}=2\theta(\mu-\epsilon_{\mathbf{k}i}) \theta(\epsilon_{\mathbf{k}j}-\mu)\frac{\langle\mathbf{k}i|\Sigma_{x}-V_{xc}| \mathbf{k}j\rangle}{\epsilon_{\mathbf{k}i}-\epsilon_{\mathbf{k}j}}. \tag{9}\]
We denote it with a "HF" superscript because this contribution to the (spin-summed) linearized density matrix is obtained from a pure exchange self-energy; thus, it vanishes when the HF approximation is employed to obtain the mean-field orbitals (i.e. \(\Delta\gamma^{\text{HF}}=0\) for \(\gamma^{\text{gKS}}=\gamma^{\text{HF}}\)).
For the last term in Eq. (4), \(G_{0}\Sigma_{c}G_{0}\), the self-energy \(\Sigma_{c}\) has a frequency dependence and therefore the calculation cannot be performed analytically, in contrast with the two previous terms. It is convenient to perform the integration along the imaginary axis, so as to keep some distance from the poles of \(G_{0}\) and of \(\Sigma_{c}\). Closing the contour, we transform the real-axis integration of Eq. (1) into
\[\Delta\gamma_{\mathbf{k}ij}^{GW}=\frac{1}{\pi}\int_{-\infty}^{+\infty}d\omega \frac{\langle\mathbf{k}i|\Sigma_{c}(\mu+\mathrm{i}\omega)|\mathbf{k}j\rangle} {(\mu+\mathrm{i}\omega-\epsilon_{\mathbf{k}i})(\mu+\mathrm{i}\omega-\epsilon _{\mathbf{k}j})}. \tag{10}\]
The complete spin-summed linearized \(GW\) density matrix finally reads
\[\gamma^{GW}=\gamma^{\text{gKS}}+\Delta\gamma^{\text{HF}}+\Delta\gamma^{GW}. \tag{11}\]
Lastly, the corresponding electronic density is \(\rho^{GW}(\mathbf{r})=\gamma^{GW}(\mathbf{r},\mathbf{r})\).
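The assembly of Eqs. (8)-(11) at a single k-point can be sketched as follows; the matrix elements of \(\Sigma_{x}-V_{xc}\) and of \(\Sigma_{c}(\mu+\mathrm{i}\omega)\) are assumed to be supplied by an external \(GW\) routine, and the empty-occupied block of Eq. (9) is filled by Hermiticity (our assumption). This is only a schematic illustration of the working equations, not the ABINIT implementation.

```python
import numpy as np

def gamma_gw(eps, mu, sigx_minus_vxc, sigc_of_iw, omegas, weights):
    """Linearized GW density matrix at one k-point, Eqs. (8)-(11).

    eps            : (N,) mean-field eigenvalues
    mu             : Fermi level
    sigx_minus_vxc : (N, N) matrix <ki|Sigma_x - V_xc|kj>
    sigc_of_iw     : callable w -> (N, N) matrix <ki|Sigma_c(mu + i w)|kj>
    omegas, weights: quadrature for the full imaginary axis in Eq. (10)
    """
    N = len(eps)
    occ = np.asarray(eps) < mu
    gamma = np.diag(2.0 * occ.astype(float)).astype(complex)   # Eq. (8)

    # Static correction, Eq. (9): occupied-empty block (+ Hermitian conjugate)
    for i in range(N):
        for j in range(N):
            if occ[i] and not occ[j]:
                dg = 2.0 * sigx_minus_vxc[i, j] / (eps[i] - eps[j])
                gamma[i, j] += dg
                gamma[j, i] += np.conj(dg)

    # Dynamic correction, Eq. (10), integrated along the imaginary axis
    for w, wt in zip(omegas, weights):
        z = mu + 1j * w - np.asarray(eps)
        gamma += (wt / np.pi) * sigc_of_iw(w) / (z[:, None] * z[None, :])
    return gamma   # gamma^gKS + Delta gamma^HF + Delta gamma^GW, Eq. (11)
```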
### Total energies from \(GW\) density matrix
In previous studies [20; 21], we introduced a new total energy functional:
\[E_{\text{total}}^{\gamma^{GW}}=T[\gamma^{GW}]+V_{ne}[\rho^{GW}] +E_{H}[\rho^{GW}]\\ +E_{x}[\gamma^{GW}]+E_{c}[G_{0}]+V_{nn}, \tag{12}\]
where \(T\), the kinetic energy, \(V_{ne}\), the electron-nucleus interaction, \(E_{H}\), the Hartree energy, and \(E_{x}\), the exchange energy, are evaluated with the \(\gamma^{GW}\) density matrix. Klimes _et al._ [41] also used the \(GW\) (or RPA) density matrix to improve sub-parts of the energy. Only the correlation energy \(E_{c}\) cannot be calculated with \(\gamma^{GW}\); it is pragmatically obtained from the Galitskii-Migdal equation [21]:
\[E_{c}[G_{0}]=\frac{1}{4\pi}\int_{-\infty}^{+\infty}d\omega\text{Tr}\{v\chi_{ 0}(i\omega)-v\chi(i\omega)\}, \tag{13}\]
where \(\chi=\chi_{0}+\chi_{0}v\chi\) is the RPA polarizability.
This one-shot energy expression has the desirable property that all the input quantities conserve the number of electrons. Of course, being a one-shot total energy, it keeps a dependence with respect to the starting point. This will be studied in detail in Sec. V.
### RPA total energy
For completeness, we report here without derivation the RPA expression for the total energy as we will extensively compare \(E_{\text{total}}^{\text{RPA}}\) and \(E_{\text{total}}^{\gamma^{GW}}\) in the following.
The RPA correlation is defined as [42]
\[\Phi_{c}[G_{0}]=\frac{1}{4\pi}\int_{-\infty}^{+\infty}d\omega\,\text{Tr}\left\{ v\chi_{0}(i\omega)+\ln\left[1-v\chi_{0}(i\omega)\right]\right\}. \tag{14}\]
By construction, \(\Phi_{c}[G_{0}]\) contains the correlation part of the kinetic energy. The total one-shot energy expression reads
\[E_{\text{total}}^{\text{RPA}}=T[\gamma^{\text{gKS}}]+V_{ne}[ \rho^{\text{gKS}}]+E_{H}[\rho^{\text{gKS}}]\\ +E_{x}[\gamma^{\text{gKS}}]+\Phi_{c}[G_{0}]+V_{nn}. \tag{15}\]
The one-shot RPA total energy is known to have a noticeable starting point dependence [15; 21; 43; 44]. We will quantify this in Sec. V in comparison with \(E_{\text{total}}^{\gamma^{GW}}\).
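Both frequency integrals, Eq. (13) for the Galitskii-Migdal correlation energy and Eq. (14) for the RPA one, can be evaluated from the eigenvalues \(\lambda_{i}(\mathrm{i}\omega)\) of \(v\chi_{0}(\mathrm{i}\omega)\), since \(\mathrm{Tr}\{v\chi\}=\sum_{i}\lambda_{i}/(1-\lambda_{i})\) follows from \(\chi=(1-\chi_{0}v)^{-1}\chi_{0}\). A minimal sketch, with the eigenvalues and quadrature weights assumed to be given:

```python
import numpy as np

def correlation_energies(vchi0_eigs, weights):
    """Frequency integrals of Eqs. (13) and (14).

    vchi0_eigs : list of 1D arrays, eigenvalues of v*chi0(i omega_n)
    weights    : quadrature weights covering the full frequency axis
    Returns (E_c Galitskii-Migdal, Phi_c RPA)."""
    ec_gm, phi_rpa = 0.0, 0.0
    for lam, w in zip(vchi0_eigs, weights):
        lam = np.asarray(lam)                              # lam <= 0 on the imaginary axis
        ec_gm += w * np.sum(lam - lam / (1.0 - lam))       # Tr{v chi0 - v chi}
        phi_rpa += w * np.sum(lam + np.log(1.0 - lam))     # Tr{v chi0 + ln(1 - v chi0)}
    return ec_gm / (4.0 * np.pi), phi_rpa / (4.0 * np.pi)
```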
## III Implementation and computational details
The linearized \(GW\) density matrix in periodic systems has not been studied before to the best of our knowledge. It should be noted though that the linearized \(GW\) density matrix appears as an intermediate quantity in the RPA forces derived by Ramberger _et al._[45].
In this section, we provide a detailed description of our implementation of \(\gamma^{GW}\) in the ABINIT code [40]. We also highlight the key technical aspects that are crucial for producing accurate results.
### Implementation in a periodic plane-wave approach
ABINIT is a standard plane-wave-based DFT code. The core electrons are frozen and hidden in a pseudopotential. While the Kohn-Sham part of ABINIT is
able to use the more accurate and smoother projector augmented-wave atomic datasets [46; 47; 48], the extension to \(GW\) is very delicate [49; 50]. As of today, the \(GW\) part of ABINIT is fully validated only for regular norm-conserving pseudopotentials [51; 52].
The existing implementation in ABINIT provides us with \(\langle\mathbf{k}i|\Sigma_{c}(\mu+\mathrm{i}\omega)|\mathbf{k}j\rangle\) for any value of \(\omega\). From this starting point, we have then implemented a Gauss-Legendre quadrature to perform the integral in Eq. (10). The symmetry relation
\[\langle\mathbf{k}i|\Sigma_{c}(\mu-\mathrm{i}\omega)|\mathbf{k}j\rangle= \langle\mathbf{k}j|\Sigma_{c}(\mu+\mathrm{i}\omega)|\mathbf{k}i\rangle^{*} \tag{16}\]
is employed to limit the integration from \(0\) to \(+\infty\). A grid with typically \(50\) to \(120\) grid points is sufficient to ensure a very accurate convergence: we monitor the electron count deviation, which is always kept below \(10^{-3}\). In the future, grid design could be optimized to minimize the computational burden [33; 53].
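A possible way to build such an imaginary-frequency grid is a Gauss-Legendre rule combined with a rational mapping of \((-1,1)\) onto \((0,+\infty)\); the specific mapping and scale below are illustrative choices, not necessarily the ones used in ABINIT.

```python
import numpy as np

def imag_axis_grid(n_points, scale=1.0):
    """Gauss-Legendre nodes/weights mapped to (0, +inf)
    through omega = scale*(1+x)/(1-x)."""
    x, w = np.polynomial.legendre.leggauss(n_points)
    omega = scale * (1.0 + x) / (1.0 - x)
    jacobian = 2.0 * scale / (1.0 - x) ** 2
    return omega, w * jacobian

# Sanity check on a known integral: int_0^inf dw / (1 + w^2) = pi/2
omega, w = imag_axis_grid(60)
print(np.sum(w / (1.0 + omega ** 2)), np.pi / 2)
```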
The static term from Eq. (9) has been implemented as well, for any type of exchange-correlation potential \(V_{xc}\), including those based on hybrid functionals. Note that for a Hartree-Fock mean-field starting point, the static term \(\Sigma_{x}-V_{xc}\) vanishes. Furthermore, it is clear from Eq. (9) that the linearized density matrix is limited to systems with a finite band gap, or else diverging denominators would occur.
The matrix representation of \(\gamma^{GW}\) is obtained on the gKS states \(|\mathbf{k}i\rangle\) for \(i\leq N_{b}\). We then diagonalize it to obtain the natural orbitals in the gKS basis:
\[\sum_{j=1}^{N_{b}}\gamma_{\mathbf{k}ij}U_{\mathbf{k}j\lambda}=n_{\mathbf{k} \lambda}U_{\mathbf{k}i\lambda}, \tag{17}\]
where \(n_{\mathbf{k}\lambda}\), the eigenvalues, are the so-called natural occupations and where \(U_{\mathbf{k}j\lambda}\), the eigenvector coefficients, form the natural orbitals.
In other words, the natural orbitals \(\phi_{\mathbf{k}\lambda}(\mathbf{r})=\langle\mathbf{r}|\mathbf{k}\lambda\rangle\) can be obtained from the unitary matrix \(U_{\mathbf{k}i\lambda}\):
\[\phi_{\mathbf{k}\lambda}(\mathbf{r})=\sum_{i=1}^{N_{b}}U_{\mathbf{k}i\lambda} \varphi_{\mathbf{k}i}(\mathbf{r}). \tag{18}\]
Then all the one-body operators expectation values are readily obtained. For instance, the kinetic energy \(T\) is calculated as
\[T =\frac{1}{N_{k}}\sum_{\mathbf{k}\lambda}n_{\mathbf{k}\lambda} \langle\mathbf{k}\lambda|\hat{T}|\mathbf{k}\lambda\rangle \tag{19}\] \[=\frac{1}{N_{k}}\sum_{\mathbf{k}\lambda}n_{\mathbf{k}\lambda} \sum_{ij}U^{*}_{\mathbf{k}i\lambda}U_{\mathbf{k}j\lambda}\langle\mathbf{k}i| \hat{T}|\mathbf{k}j\rangle. \tag{20}\]
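In practice, Eqs. (17), (19), and (20) amount to a Hermitian diagonalization followed by a basis rotation of the one-body matrix elements. A minimal NumPy sketch for a single k-point, with illustrative names only:

```python
import numpy as np

def natural_orbitals_and_kinetic(gamma_k, t_k):
    """Diagonalize the density matrix at one k-point (Eq. (17)) and
    evaluate the kinetic-energy contribution (Eqs. (19)-(20)).

    gamma_k : (Nb, Nb) Hermitian density matrix in the gKS basis
    t_k     : (Nb, Nb) matrix <ki|T|kj> in the same basis
    """
    n_occ, U = np.linalg.eigh(gamma_k)      # natural occupations and orbitals
    # <k lambda|T|k lambda> = sum_ij U*_{i lambda} U_{j lambda} <ki|T|kj>
    t_diag = np.einsum('il,ij,jl->l', U.conj(), t_k, U).real
    return n_occ, U, np.dot(n_occ, t_diag)

# The total kinetic energy is (1/N_k) times the sum of the last value over k-points.
```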
Finally, we would like to emphasize that the formal proof of the conservation of electron count [21] requires that the state range in the internal sum of \(G_{0}\) in Eq. (2) is the same as the one used in the basis expansion in Eq. (7). This restriction is enforced in all our calculations.
Table 1 summarizes the numerical parameters used for the 8 crystals considered in this study. All the calculations for face-centered cubic crystals reported in this work use four shifted k-point grids, as commonly used in ABINIT. The grid discretization is \(4\times 4\times 4\), which yields 256 k-points in the full Brillouin zone and 10 in the irreducible wedge. The calculation on hexagonal boron nitride (h-BN) uses a \(\Gamma\)-centered \(12\times 12\times 6\) k-point grid for exact exchange and a \(4\times 4\times 2\) grid for the rest. We evaluate the computational effort to scale as \(O(N^{4})\), similar to a conventional \(GW\) calculation. However, the prefactor is much larger, due to the fine frequency grids for both \(W\) and \(\Sigma\).
### Adequate norm-conserving pseudopotentials
Figure 1: HF total energies as a function of lattice constant with different codes and pseudopotentials for bulk silicon. The equilibrium lattice constant for each calculation is given in the legend.

As mentioned in the previous paragraph, our implementation uses norm-conserving pseudopotentials. In the preliminary stages of our study, we concluded that while the details of the pseudopotential are not critical when studying band structures, they become of the utmost importance when investigating structural properties.
Norm-conserving pseudopotentials are designed to reproduce the electronic and energetic properties of a given mean-field approximation. For instance, using a PBE pseudopotential for a hybrid functional is not advised in principle. As no "\(GW\)-suitable" pseudopotentials exist, we have enforced the minimal requirement that the selected pseudopotential be able to reproduce HF structural properties.
In Fig. 1, we show a wide comparison among codes and techniques for bulk silicon at the HF level of theory. Silicon is chosen as a typical example. The results for two other crystals are reported in the supporting information with identical conclusions. The all-electron (AE) results of CRYSTAL [54] with the accurate basis set designed by Heyd and coworkers [55] and the projector augmented-wave (PAW) results of VASP [39] agree very well. We consider them as the reference, since by construction the Gaussian basis set used in CRYSTAL describes all the electrons at once and since in the PAW framework, though frozen-core, the core-valence interactions are completely recalculated for each approximation.
Then we turn to the regular PBE pseudopotential obtained from the pseudodojo suite [56]. This pseudopotential is highly tested and should be rather transferable, as it relies on Hamann's ONCVPSP scheme [52], which introduces several projectors per angular momentum. However, based on Fig. 1, the HF energy-volume curve departs significantly from the reference. This error is intrinsic to the pseudopotential because using it in Quantum-Espresso [57] gives the exact same result. In our opinion, the inability of the ONCVPSP pseudopotentials to reproduce HF energy-volume curves is not due to the Hamann scheme itself, but rather to the practical choice of large cutoff radii selected in the pseudodojo initiative. Generating our own dedicated ONCVPSP pseudopotentials would of course be possible; however, it would require a significant effort. Fortunately, alternatives already exist.
In a previous work [13], one of us mitigated this problem by using pseudopotentials generated for the KLI approach [58], which devises a local potential that simulates the non-local exact-exchange operator. This improves over the ONCVPSP pseudopotentials but is not quantitatively accurate enough.
Recently, in the quantum Monte Carlo community, there has been an effort to support the design of "correlation consistent" effective core potentials (ccECP) [59]. These norm-conserving pseudopotentials are meant to be used in combination with correlated methods beyond the usual mean-field ones. Figure 1 shows that this type of pseudopotential produces results in close agreement with AE and PAW for HF: the lattice constants match within 0.1 %.
We conclude that the ccECP pseudopotentials are our preferred norm-conserving pseudopotentials to obtain quantitative results: i) they have been designed specifically for explicit-correlation methods, and \(GW\) belongs to this family; ii) they reproduce HF lattice constants best. The main drawback of these pseudopotentials is the high cutoff energy that is necessary to converge the total energies in plane waves. In the following, all the reported results employ ccECP pseudopotentials.
## IV \(GW\)-density matrix in crystalline solids
As summarized in Appendix A, the spin-summed natural occupations \(n_{\mathbf{k}\lambda}\) should continuously span the range from 0 to 2 at variance with regular Fermi-Dirac ground-state occupations \(f_{\mathbf{k}i}\) that are only 0 or 2.
These natural occupations for realistic crystalline solids can be compared to the momentum distribution function \(n_{\mathbf{k}}\) for the homogeneous electron gas in Refs. [36; 60]. But for the homogeneous electron gas, the momentum \(\mathbf{k}\) is enough to uniquely characterize the quantum state. For solids, we need an additional quantum number \(\lambda\) (similar to a band index).
Figure 2: Spin-summed natural occupations obtained from PBE, \(\gamma^{\mathrm{HF}}\)@PBE, and \(\gamma^{\mathrm{GW}}\)@PBE density-matrices in bulk silicon for k-point (1/8, 0, 0) in reciprocal lattice vectors. The natural occupations are ordered by descending values, from 1 to 150, which is \(N_{b}\), the dimension of the matrix. The x-axis has been cut to show the first and the last values.

In Fig. 2, we represent the natural occupations \(n_{\mathbf{k}\lambda}\) for a fixed k-point (1/8, 0, 0). This particular k-point was selected as an example: the other k-points produce very similar results. The PBE occupations \(f_{\mathbf{k}i}\) are shown as a reference. Then the natural occupations for \(\gamma^{\mathrm{HF}}\)@PBE, the static part of the density matrix, are plotted. While their sum precisely equals the number of electrons \(N_{e}\), the values can exceed 2 and drop below 0. These occupation values violate the constraints of the exact density matrix. However, after adding the dynamic correlation, the \(\gamma^{GW}\)@PBE has all its spin-summed natural occupations between 0 and 2. Four natural orbitals have an occupation close to 1.8-1.9 and then many more (15 or so) have a non-vanishing occupation. A PBE mean-field starting point was chosen to magnify the effect. Starting from HF would yield perfectly sane natural occupations for \(\gamma^{\text{HF}}\). The overall shape of the occupations is similar to the homogeneous electron gas result [36; 60].
However, one notices that \(N(\mathbf{k})=\sum_{\lambda}n_{\mathbf{k}\lambda}\) for a given \(\mathbf{k}\) slightly deviates from the number of electrons \(N_{e}\). This intriguing observation does not violate an exact constraint. We only proved mathematically [21] that the sum of the natural occupations over the whole Brillouin zone
\[\sum_{\mathbf{k}}\sum_{\lambda}n_{\mathbf{k}\lambda}=N_{e} \tag{21}\]
is valid. Appendix B demonstrates that this variation with \(\mathbf{k}\) is possible.
As this observation can be considered surprising when compared to the usual mean-field occupations \(f_{\mathbf{k}i}\), it is insightful to monitor the sum \(N(\mathbf{k})\) across the Brillouin zone. In Fig. 3, we report the deviation \(\Delta N(\mathbf{k})=N(\mathbf{k})-N_{e}\) in a cut plane in the Brillouin zone. The \(\Delta N(\mathbf{k})\) function is interpolated from a refined \(\Gamma\)-centered \(6\times 6\times 6\) k-point grid with 4 shifts (864 points in the full Brillouin zone). We use the Shankland-Koelling-Wood interpolation technique [61] as implemented in abipy [40]. The numerical integration of \(\Delta N(\mathbf{k})\) over the whole Brillouin zone yields \(4\times 10^{-4}\), which is very close to the expected zero.
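The quantity plotted in Fig. 3 is obtained directly from the natural occupations; a minimal sketch of the per-k-point electron count deviation and of its Brillouin-zone average (which must vanish according to Eq. (21)), with illustrative array names:

```python
import numpy as np

def electron_count_deviation(n_klambda, n_electrons, k_weights=None):
    """Delta N(k) = sum_lambda n_{k lambda} - N_e, and its weighted
    Brillouin-zone average.

    n_klambda : (Nk, Nb) natural occupations
    k_weights : (Nk,) k-point weights summing to 1 (uniform if None)
    """
    delta = np.asarray(n_klambda).sum(axis=1) - n_electrons
    if k_weights is None:
        k_weights = np.full(delta.size, 1.0 / delta.size)
    return delta, float(np.dot(k_weights, delta))
```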
From Fig. 3, we observe an electron transfer from the \(\Gamma\) point region to the Brillouin zone edge. The weight transfer is not large (at most \(\sim 0.01-0.02\)), but still sizable. The electron count \(N(\mathbf{k})\) is an observable and could be possibly measured in angle-resolved photo-emission spectroscopy [62]. Note that this electron count transfer is a pure electronic correlation effect. Any static approximation of the self-energy \(\Sigma\) would nullify it.
## V Structural properties of crystalline solids within \(GW\) density matrix and RPA
### Covalent-bonded crystals
In this section, we analyze the calculation of structural parameters for crystalline solids using the total energy expressions introduced in Eqs. (12) and (15). Our main question is which expression works "best" in the context of one-shot calculations, i.e. which expression best approximates a hypothetical reference sc\(GW\) that is not currently available for crystalline systems. In molecules, where reference sc\(GW\) were produced [21], the accuracy of the \(\gamma^{GW}\) total energy was demonstrated.
A way to measure the robustness of a one-shot total energy expression is to explore its sensitivity to the starting point. Here we use the PBEh(\(\alpha\)) hybrid functional family:
\[V_{xc}=\alpha\Sigma_{x}+(1-\alpha)v_{x}^{\text{PBE}}+v_{c}^{\text{PBE}}, \tag{22}\]
where the parameter \(\alpha\) controls the amount of exact-exchange \(\Sigma_{x}\).
Calculations were carried out for seven covalent crystals (Si, C, SiC, zb-BN, AlP, AlAs, and Ge). In the main text, we will mostly report silicon results. However, the complete set of results is made available as Supplemental Material [64].
In Fig. 4, we compare the total energy behavior for 2 different systems: water, a small molecular system, and crystalline silicon. The results for the water molecule were extracted from Ref. [21], which used a different implementation based on Gaussian basis functions [65]. The figure reports the total energies for PBEh(\(\alpha\)), for \(E_{\text{total}}^{\gamma^{GW}}\), for \(E_{\text{total}}^{\text{RPA}}\), and, when available, for sc\(GW\). The overall similarity between the two panels is striking: RPA is rather sensitive to the starting gKS, whereas \(\gamma^{GW}\) is much less so. RPA increases with \(\alpha\), whereas \(\gamma^{GW}\) decreases. RPA and \(\gamma^{GW}\) rejoin for large values of \(\alpha\).
For water, where the sc\(GW\) reference exists, the RPA and \(\gamma^{GW}\) total energies give the best approximation of the full sc\(GW\) total energy when they are equal. Owing to the similarity between the two panels of Fig. 4, we can reasonably anticipate that in bulk silicon the sc\(GW\) total energy will be best approximated by \(\gamma^{GW}\) and by RPA with PBEh(0.75), PBEh(1.00) or HF, albeit with no formal proof.
Figure 3: Electron count deviation \(\Delta N(\mathbf{k})\) as a function of the wavevector \(\mathbf{k}\) in the first Brillouin zone for bulk silicon. \(\mathbf{k}=k_{1}\mathbf{b}_{1}+k_{2}\mathbf{b}_{2}+k_{3}\mathbf{b}_{3}\) is reported in reduced coordinates (\(\mathbf{b}_{i}\) are the reciprocal lattice vectors). The plane \(k_{3}=0\) is represented. The isoline \(\Delta N(\mathbf{k})=0\) is drawn with a bold black line. Special points \(\Gamma\) and \(X\) are marked.

If supplied with the sc\(GW\) Green's function, all total energy formulas should match [14; 15; 19]. The above results tend to make us think that the non-interacting Green's functions \(G_{0}\) for PBEh(0.75), PBEh(1.00), or HF are close to the sc\(GW\) Green's function \(G\) for both the molecular and the solid-state systems.
The striking difference between the molecule and the solid in Fig. 4 is the agreement or disagreement with respect to the gKS total energies. While for the molecule the RPA and \(\gamma^{GW}\) total energies strongly undershoot the PBEh(\(\alpha\)) values (too negative correlation energies), the match is very good for crystalline silicon. The accurate quantum Monte Carlo approach reports -7.8644 Ha for silicon [63], whereas \(\gamma^{GW}\) and RPA respectively give -7.8708 and -7.8692 Ha with the same ccECP pseudopotentials. This excellent agreement of the absolute total energies reminds us of the amazingly accurate sc\(GW\) energies in the homogeneous electron gas [4; 5; 6] that precisely match the quantum Monte Carlo values [66].
We conclude with some reasonable confidence that the sc\(GW\) total energy in bulk silicon is most likely close to the gKS one, close to \(E_{\rm total}^{\gamma^{GW}}\), and close to RPA@PBEh(1.00). We also stress that RPA@PBE, which is the most commonly accepted implementation of the RPA functional, noticeably underestimates the total energy. Our confidence in these results is further strengthened when considering the results for the other 6 crystalline systems reported in the Supplemental Material [64]. They all support the same conclusion.
However, if sc\(GW\) is very accurate for solids but less accurate for finite systems, the atomization energies, which measure the energy gain when forming a bulk crystal as compared to the isolated atoms, are likely to have a low accuracy in sc\(GW\). This should be explored in the future.
Let us now focus on the complete energy versus lattice constant curves and check the sensitivity to the starting point not only for the total energy but also for the equilibrium lattice constant \(a\) and for the bulk modulus \(B\).
Figure 5 reports on the same scale the RPA and the \(\gamma^{GW}\) total energies for different gKS starting points and in the right-hand panel the equilibrium lattice constant as a function of the gKS starting point. We can see again that the RPA total energy in the left-hand panel of Fig. 5 is much more sensitive to the starting point compared to \(\gamma^{GW}\) total energy in the central panel. However, when focusing on the equilibrium lattice constant itself, the sensitivity to the starting point is much weaker: at most 0.03 bohr.
All panels in Fig. 5 support again the same conclusion: the RPA and \(\gamma^{GW}\) total energies agree best when using PBEh(\(\alpha\)) with large \(\alpha\) or even when using HF. The statements drawn for silicon perfectly hold for the other 6 crystalline systems presented in the Supplemental Material [64].
Finally, we summarize the lattice constants and bulk moduli of the 7 covalent crystals that we have studied in Table 2. We focus on two gKS starting points, PBEh(0.00) (i.e. standard PBE) and PBEh(0.75). While for the PBE starting point the RPA@PBE and \(\gamma^{GW}\)@PBE lattice constants differ by about 0.06 bohr, the RPA@PBEh(0.75) and \(\gamma^{GW}\)@PBEh(0.75) lattice constants always agree within 0.01 bohr. The same type of conclusion holds for the bulk modulus \(B\). This is another proof that PBEh(0.75) is a good starting point to evaluate the different \(GW\)-based total energy expressions. Most probably, the properties \(a\) and \(B\) evaluated with RPA@PBEh(0.75) or \(\gamma^{GW}\)@PBEh(0.75) are reliable estimates of the sc\(GW\) result.
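The equilibrium lattice constants and bulk moduli quoted in Table 2 follow from fits of the energy-volume curves. A minimal sketch of such a fit using a simple polynomial form (a Birch-Murnaghan equation of state could be used instead); units follow the input, and for the face-centered cubic crystals considered here the primitive-cell volume relates to the lattice constant through \(V=a^{3}/4\):

```python
import numpy as np

def eos_fit(volumes, energies, deg=3):
    """Fit E(V) with a polynomial and extract the equilibrium volume V0
    and the bulk modulus B = V0 * d2E/dV2 evaluated at V0."""
    p = np.polynomial.Polynomial.fit(volumes, energies, deg)
    dp, d2p = p.deriv(1), p.deriv(2)
    minima = [r.real for r in dp.roots()
              if abs(r.imag) < 1e-10
              and min(volumes) < r.real < max(volumes)
              and d2p(r.real) > 0]
    v0 = minima[0]          # assumes a single minimum inside the data range
    return v0, v0 * d2p(v0)
```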
Figure 4: Total energies for the water molecule in the gas phase (left-hand panel) and crystalline silicon (right-hand panel) as a function of the gKS starting point. The water results were extracted from a previous study [21]. The silicon Quantum Monte Carlo (QMC) value comes from Ref. [63].

In the end, we also compare to experiment. Table 2 shows that all the \(GW\)-based energy expressions yield structural properties in excellent agreement with the experiment: a 0.1 % deviation for lattice constants and 8 % for bulk moduli. The different expressions and starting points have a minor influence on this. This conclusion is valid for the covalent crystals. However, it is worth considering whether this conclusion still holds for weak van der Waals interactions, which are one of the attractive features of RPA.
### van der Waals bonded layered material
In order to test whether the weak van der Waals interactions would be correctly described by sc\(GW\), we analyzed a layered material, namely hexagonal BN (h-BN). RPA@LDA and RPA@PBE have been shown to describe properly the spacing between the layers [67; 13].
The task of evaluating the \(\gamma^{GW}\) for h-BN is beyond our current computational capabilities. Indeed, for the weak van der Waals interactions, the energy scales are so low that extremely converged calculations are required. Fortunately, based on the previous discussion, we assume that the sc\(GW\) total energy is also well approximated by RPA@PBEh(0.75).
In Fig. 6, we report the energy versus spacing between the layers. LDA is known to give the correct spacing thanks to a lucky compensation of errors [67], whereas PBE that improves the exchange over LDA does not benefit from this and yields a much too large spacing. Our RPA@LDA reproduces within 0.25 bohr the earlier estimates from Refs. [13] and [67]. This agreement is quite good considering the computational power difference and considering the fact that we use newly developed pseudopotentials.
Next, let us comment on the RPA@PBEh(0.75) result (see Fig. 6), which is our best approximation to sc\(GW\). The obtained lattice spacing is in very good agreement with the experiment (6.18 bohr against 6.25 bohr).
This example shows that tuning the starting point in RPA does not destroy the quantitative agreement with respect to the experiment. This interesting conclusion calls for further studies in the future.
## VI Conclusion
The linearized \(GW\) density matrix \(\gamma^{GW}\) has been introduced in realistic solid-state systems. We have carefully tested the norm-conserving pseudopotential approximation and have concluded that QMC pseudopotentials, such as ccECP [59], are compulsory in this context for accurately determining RPA and \(\gamma^{GW}\) lattice constants. On a benchmark of 7 covalent crystals (Si, C, SiC, zb-BN, AlP, AlAs, and Ge), we have proven numerically that \(\gamma^{GW}\) actually fulfils the exact constraints: its natural occupation numbers range from 0 to 2 (when spin is summed) and they sum up to the correct number of electrons. In addition, the correlated nature of \(\gamma^{GW}\) allows the electron occupancy to reorganize across the Brillouin zone, in strong contrast with all the mean-field approaches, where the electron count remains constant in the Brillouin zone for crystals with a band gap.
Figure 5: Energy-lattice constant curves for crystalline silicon for RPA functional (left-hand panel), \(\gamma^{GW}\) energy functional (central panel). Equilibrium lattices are summarized in the right-hand panel. The experimental lattice constant is given as a reference.
Figure 6: Energy as a function of the spacing between the layers for hexagonal BN with different gKS approximations and different starting points for RPA.
The one-shot total energy expression \(E^{\gamma^{GW}}_{\rm total}\) has been found to be superior to the usual RPA total energy expression in terms of its reduced sensitivity to the gKS starting point.
We provide strong evidence to support the assumption that the \(\gamma^{GW}\)-based total energy is a reliable substitute for \({\rm sc}GW\), which remains unachievable at present. As a cheaper alternative, RPA can also be used, but we advocate applying it on top of gKS functionals with a large content of exact exchange (at least 75 %), such as PBEh(0.75) or HF, to best approximate \({\rm sc}GW\). Our last statement disagrees with the current wisdom that recommends RPA@PBE based on comparison to experiment.
Our implementation is available in the public version of the open-source code ABINIT [68].
###### Acknowledgements.
The authors acknowledge the financial support provided by the Cross-Disciplinary Program on Numerical Simulation of the French Alternative Energies and Atomic Energy Commission (CEA) (ABIDM project). This work was performed using HPC resources from GENCI-CCRT-TGCC (Grants No. 2022-096018). MRM thanks S. Sharma for sharing her one-body-reduced-density-matrix expertise and FB thanks L. Mitas for insights about the quantum Monte Carlo references.
## Appendix A The unit cell density matrix in the natural orbital representation.
_The density matrix \(\gamma({\bf r},{\bf r}^{\prime})\) is diagonal in \({\bf k}\)._ Imposing the Born-von Kármán periodic conditions, we can write the double-Fourier expansion of \(\gamma({\bf r},{\bf r}^{\prime})\) in reciprocal space as
\[\gamma({\bf r},{\bf r}^{\prime})=\frac{1}{N_{k}\Omega}\sum_{{\bf k}{\bf k}^{ \prime}{\bf G}{\bf G}^{\prime}}e^{{\rm i}({\bf k}+{\bf G})\cdot{\bf r}}\gamma _{{\bf k}{\bf k}^{\prime}}({\bf G},{\bf G}^{\prime})e^{-{\rm i}({\bf k}^{ \prime}+{\bf G}^{\prime})\cdot{\bf r}^{\prime}}, \tag{10}\]
where \(N_{k}\) is the number of k-points, \(\Omega\) is the volume of the unit cell, and \({\bf G}\) and \({\bf G}^{\prime}\) are reciprocal lattice vectors.
By virtue of the translation invariance in the unit cell, the shift of the two space indices with \({\bf R}=n_{1}{\bf a}_{1}+n_{2}{\bf a}_{2}+n_{3}{\bf a}_{3}\) (with \(n_{1},n_{2},n_{3}\in\mathbb{Z}\) and \({\bf a}_{1},{\bf a}_{2},{\bf a}_{3}\) being the primitive lattice vectors) does not change the density matrix [69]:
\[\gamma({\bf r},{\bf r}^{\prime})=\gamma({\bf r}+{\bf R},{\bf r}^{\prime}+{\bf R }). \tag{11}\]
When inserting the double Fourier transform, this implies
\[e^{{\rm i}({\bf k}-{\bf k}^{\prime})\cdot{\bf R}}=1. \tag{12}\]
This condition is fulfilled if and only if \(({\bf k}-{\bf k}^{\prime})\) belongs to the reciprocal lattice. And as \({\bf k}\) and \({\bf k}^{\prime}\) both belong to the first Brillouin zone, the only possible reciprocal lattice vector the difference can match is \({\bf 0}\).
As a consequence, one can insert the Kronecker sign \(\delta_{{\bf k}{\bf k}^{\prime}}\) in Eq. (10):
\[\gamma({\bf r},{\bf r}^{\prime})=\frac{1}{N_{k}\Omega}\sum_{{\bf k}{\bf G}{\bf G }^{\prime}}e^{{\rm i}({\bf k}+{\bf G})\cdot{\bf r}}\gamma_{{\bf k}}({\bf G}, {\bf G}^{\prime})e^{-{\rm i}({\bf k}+{\bf G}^{\prime})\cdot{\bf r}^{\prime}} \tag{13}\]
and obtain the desired expression that shows that \(\gamma\) is block-diagonal with respect to \({\bf k}\).
_Natural orbitals and occupations._ Now for each discrete value of \({\bf k}\), one can diagonalize \(\gamma_{\bf k}\):
\[\sum_{{\bf G}^{\prime}}\gamma_{\bf k}({\bf G},{\bf G}^{\prime})\tilde{u}_{{\bf k }\lambda}({\bf G}^{\prime})=n_{{\bf k}\lambda}\tilde{u}_{{\bf k}\lambda}({\bf G }), \tag{14}\]
where \(\tilde{u}_{{\bf k}\lambda}({\bf G})\) are the eigenvectors and \(n_{{\bf k}\lambda}\) the eigenvalues, indexed with \(\lambda\).
By construction:
\[\gamma({\bf r},{\bf r}^{\prime})=\gamma({\bf r}^{\prime},{\bf r})^{*} \tag{15}\]
implies
\[\gamma_{\bf k}({\bf G},{\bf G}^{\prime})=\gamma_{\bf k}({\bf G}^{\prime},{\bf G })^{*} \tag{16}\]
\(\gamma_{\bf k}({\bf G},{\bf G}^{\prime})\) is thus a Hermitian matrix; hence, the \(n_{{\bf k}\lambda}\) are real-valued and the \(\tilde{u}_{{\bf k}\lambda}\) form a unitary matrix.
Using this decomposition,
\[\gamma_{\bf k}({\bf G},{\bf G}^{\prime})=\sum_{\lambda}n_{{\bf k}\lambda} \tilde{u}_{{\bf k}\lambda}({\bf G})\tilde{u}_{{\bf k}\lambda}^{*}({\bf G}^{ \prime}), \tag{17}\]
Inserting this in Eq. (13), we obtain
\[\gamma({\bf r},{\bf r}^{\prime})=\frac{1}{N_{k}\Omega}\sum_{{\bf k }\lambda}\sum_{{\bf G}}e^{{\rm i}({\bf k}+{\bf G})\cdot{\bf r}}\tilde{u}_{{ \bf k}\lambda}({\bf G})\\ \times n_{{\bf k}\lambda}\sum_{{\bf G}^{\prime}}\tilde{u}_{{\bf k }\lambda}^{*}({\bf G}^{\prime})e^{-{\rm i}({\bf k}+{\bf G}^{\prime})\cdot{\bf r} ^{\prime}} \tag{18}\]
Let us introduce the natural orbital in real-space \(\phi_{{\bf k}\lambda}({\bf r})\) that have a Bloch's wave form [70]:
\[\phi_{{\bf k}\lambda}({\bf r}) = \frac{1}{\sqrt{N_{k}\Omega}}\sum_{{\bf G}}e^{{\rm i}({\bf k}+{\bf G })\cdot{\bf r}}\tilde{u}_{{\bf k}\lambda}({\bf G}) \tag{19}\] \[= \frac{1}{\sqrt{N_{k}\Omega}}e^{{\rm i}{\bf k}\cdot{\bf r}}\tilde{u}_ {{\bf k}\lambda}({\bf r}) \tag{20}\]
The final expression in real space reads
\[\gamma({\bf r},{\bf r}^{\prime})=\sum_{{\bf k}\lambda}n_{{\bf k}\lambda}\phi_{{ \bf k}\lambda}({\bf r})\phi_{{\bf k}\lambda}^{*}({\bf r}^{\prime}). \tag{21}\]
This expression looks extremely similar to the mean-field (gKS) expression
\[\gamma^{\rm gKS}({\bf r},{\bf r}^{\prime})=\sum_{{\bf k}i}f_{{\bf k}i}\varphi_{{ \bf k}i}({\bf r})\varphi_{{\bf k}i}^{*}({\bf r}^{\prime}), \tag{22}\]
where \(f_{{\bf k}i}\) are the Fermi-Dirac occupations and \(\varphi_{{\bf k}i}({\bf r})\) the mean-field wavefunctions.
However, there are subtle differences that are much more meaningful:
* In gKS DFT, the \(\varphi_{{\bf k}i}({\bf r})\) sum up to the exact electronic density, whereas the natural orbitals sum up to the exact density _and_ density matrix.
* For the ground state, the spin-summed \(f_{{\bf k}i}\) are constrained to be 0 or 2, whereas the spin-summed \(n_{{\bf k}\lambda}\) continuously span the range from 0 to 2 (not proven here).
* Since \(\int d{\bf r}\,\gamma({\bf r},{\bf r})\) is normalized to the number of electrons \(N_{e}\), both the \(f_{{\bf k}i}\) and the \(n_{{\bf k}\lambda}\) sum up to \(N_{e}\). However, for insulators, while \(\sum_{i}f_{{\bf k}i}=N_{e}\) holds for each \({\bf k}\) individually, no equivalent exists for the natural occupations \(n_{{\bf k}\lambda}\), as we show in Appendix B.
## Appendix B Non-integer \(N({\bf k})\) values for individual \({\bf k}\)
The non-integer values reported for the electron count, \(N({\bf k})\), are a consequence of the electron correlation effects. In this appendix, we gain some insights into this result.
### The many-electron wavefunction in crystals
The basis of Bloch waves is complete; therefore, the real-space \(N\)-electron wavefunction can be written as a linear combination of Slater determinants \(\big{[}(N!)^{-1/2}|\varphi_{{\bf k}_{\mu}i}({\bf r}_{1})\ldots\varphi_{{\bf k }_{\nu}j}({\bf r}_{N})|\big{]}\) built using Bloch waves,
\[\varphi_{{\bf k}i}({\bf r})=\frac{1}{\sqrt{N_{k}\Omega}}e^{{\rm i}{\bf k}\cdot{\bf r}}u_{{\bf k}i}({\bf r}), \tag{14}\]
which are usually the ones obtained from a mean-field method (like the ones obtained from a gKS DFT calculation). However, other basis sets can be used to build the Slater determinants, such as the Bloch waves corresponding to the natural orbitals. In this representation, the real-space (spin-less) \(N\)-electron wavefunction can be written as a configuration interaction (CI) expansion
\[\Psi({\bf r}_{1},{\bf r}_{2},...,{\bf r}_{N})=\frac{1}{\sqrt{N!}(\sqrt{N_{k} \Omega})^{N}}\sum_{{\bf k}_{\mu}\ldots{\bf k}_{\nu}}\sum_{i\in\Omega_{{\bf k} _{\mu}}}\ldots\sum_{j\in\Omega_{{\bf k}_{\nu}}}C_{{\bf k}_{\mu}i\ldots{\bf k}_ {\nu}j}e^{i({\bf k}_{\mu}\cdot{\bf r}_{1}+\ldots+{\bf k}_{\nu}\cdot{\bf r}_{N })}u_{{\bf k}_{\mu}i}({\bf r}_{1})\ldots u_{{\bf k}_{\nu}j}({\bf r}_{N}), \tag{15}\]
where the sum \(\sum_{{\bf k}_{\mu}\ldots{\bf k}_{\nu}}\) runs over all k-points in the first Brillouin zone, and the \(C_{{\bf k}_{\mu}i\ldots{\bf k}_{\nu}j}\) are the expansion coefficients (that are adequately adjusted to ensure that \(\Psi\) preserves the correct symmetries). Let us highlight that in Eq. (15), the Hartree product of Bloch's waves contains waves that belong to different k-points (i.e. \(\Psi\) is the many-body wavefunction of the supercell).
The Born-von Kármán periodic conditions imposed on the many-electron wavefunction [71] state that \(\Psi({\bf r}_{1},{\bf r}_{2},...,{\bf r}_{N})=\Psi({\bf r}_{1}+{\bf T},{\bf r}_{2}+{\bf T},...,{\bf r}_{N}+{\bf T})\) for \({\bf T}=n_{1}N_{1}{\bf a}_{1}+n_{2}N_{2}{\bf a}_{2}+n_{3}N_{3}{\bf a}_{3}\) (with \(N_{k}=N_{1}N_{2}N_{3}\)). As a consequence, \(e^{i({\bf k}_{\mu}+\ldots+{\bf k}_{\nu})\cdot{\bf T}}=e^{i{\bf K}\cdot{\bf T}}=1\) and
\[{\bf K} = {\bf k}_{\mu}+\ldots+{\bf k}_{\nu} \tag{16}\] \[= \frac{\chi_{1}}{N_{1}}{\bf b}_{1}+\frac{\chi_{2}}{N_{2}}{\bf b}_{ 2}+\frac{\chi_{3}}{N_{3}}{\bf b}_{3}, \tag{17}\]
where \(\chi_{n}\in\mathbb{Z}\). The many-electron wavefunctions are eigenfunctions of the many-body translation operator \(\widehat{T}_{\bf R}\), i.e.
\[\widehat{T}_{\bf R}\Psi_{\bf K}({\bf r}_{1},{\bf r}_{2},...,{\bf r }_{N}) = \Psi_{\bf K}({\bf r}_{1}+{\bf R},{\bf r}_{2}+{\bf R},...,{\bf r} _{N}+{\bf R}) \tag{18}\] \[= e^{i{\bf K}\cdot{\bf R}}\Psi_{\bf K}({\bf r}_{1},{\bf r}_{2},...,{\bf r}_{N}), \tag{19}\]
with \(e^{i{\bf K}\cdot{\bf R}}\) being the eigenvalues. Since the many-body Hamiltonian \(\widehat{H}\) (within the Born-Oppenheimer approximation) commutes with the \(\widehat{T}_{\bf R}\) operator [71], the solutions to the many-body Hamiltonian can be taken to be associated with a given \({\bf K}\) value (i.e. \(\widehat{H}\Psi_{\bf K}=E\Psi_{\bf K}\)). Actually, the many-electron wavefunctions associated with \({\bf K}+{\bf G}\) (with \({\bf G}=\chi_{1}{\bf b}_{1}+\chi_{2}{\bf b}_{2}+\chi_{3}{\bf b}_{3}\) being a reciprocal lattice vector) also lead to the same eigenvalue \(e^{i{\bf K}\cdot{\bf R}}\) upon application of the \(\widehat{T}_{\bf R}\) operator and may contribute to the CI expansion.
### The density matrix and the second-order reduced density matrix in crystals
Let us define the density matrix elements (from the many-body wavefunction \(\Psi\)) as
\[\gamma_{{\bf k}ij}=\langle\Psi|\widehat{b}_{{\bf k}i}^{\dagger}\widehat{b}_{{\bf k}j}|\Psi\rangle \tag{20}\]
and the second-order reduced density matrix (2-RDM) matrix elements as
\[\Gamma^{{\bf k}_{\tau}j,{\bf k}_{\theta}m}_{{\bf k}_{\mu}i,{\bf k}_{\nu}l}=\frac{1}{2}\langle\Psi|\widehat{b}_{{\bf k}_{\mu}i}^{\dagger}\widehat{b}_{{\bf k}_{\nu}l}^{\dagger}\widehat{b}_{{\bf k}_{\theta}m}\widehat{b}_{{\bf k}_{\tau}j}|\Psi\rangle \tag{21}\]
whose \({\bf k}_{n}\) values fulfill the condition
\[{\bf k}_{\mu}+{\bf k}_{\nu}-{\bf k}_{\tau}-{\bf k}_{\theta}+{\bf G}=0, \tag{22}\]
which ensures the correct translational symmetry of the 2-RDM, i.e.
\[\Gamma({\bf r}_{1},{\bf r}_{2},{\bf r}_{1}^{\prime},{\bf r}_{2}^{\prime})= \Gamma({\bf r}_{1}+{\bf R},{\bf r}_{2}+{\bf R},{\bf r}_{1}^{\prime}+{\bf R},{ \bf r}_{2}^{\prime}+{\bf R}), \tag{23}\]
where \(\Gamma({\bf r}_{1},{\bf r}_{2},{\bf r}^{\prime}_{1},{\bf r}^{\prime}_{2})\) is the 2-RDM in space representation (see for example Refs. [72; 73] for more details). The 2-RDM contains more information about the system than the density matrix. Indeed, the matrix elements of the density matrix can be obtained from the partial trace of the 2-RDM:
\[\gamma_{{\bf k}ij}=\frac{2}{N-1}\sum_{{\bf k}^{\prime}}\sum_{l\in\Omega_{{\bf k}^{\prime}}}\Gamma^{{\bf k}j,{\bf k}^{\prime}l}_{{\bf k}i,{\bf k}^{\prime}l}, \tag{11}\]
where \(\Omega_{{\bf k}^{\prime}}\) is the subspace formed by all the one-electron wavefunctions sharing the same \({\bf k}^{\prime}\) value. For completeness, let us mention that the matrix formed using the \(\gamma_{{\bf k}ij}\) elements is Hermitian and upon diagonalization produces the occupation numbers \(n_{{\bf k}\lambda}\) discussed in Appendix A.
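For illustration, the partial trace of Eq. (11) is a simple contraction once the 2-RDM elements that are diagonal in the second particle are stored in a dense array; the layout below is purely illustrative and not the one used in any production code.

```python
import numpy as np

def density_matrix_from_2rdm(gamma2_diag, n_electrons):
    """Partial trace of Eq. (11).

    gamma2_diag : array of shape (Nk, Nb, Nb, Nk, Nb) storing the 2-RDM
                  elements Gamma^{k j, k' l}_{k i, k' l} (already diagonal
                  in the second particle's indices k' and l).
    Returns gamma[k, i, j]."""
    return 2.0 / (n_electrons - 1) * gamma2_diag.sum(axis=(3, 4))
```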
### The APSG _ansatz_ for crystals
Here we prove, for the specific case of an antisymmetrized product of strongly-orthogonal geminals (APSG) _ansatz_ [74] for the many-electron wavefunction, that electron count transfer can occur across k-points due to electronic correlation effects. If it is true for this sub-class of wavefunctions, then the statement also holds for the exact wavefunction.
The APSG _ansatz_ for the many-electron wavefunction with spin, where an even number of electrons in the system is assumed (as employed throughout this work), reads
\[\Psi^{\rm APSG}_{\bf K}({\bf x}_{1},{\bf x}_{2},...,{\bf x}_{N})=\widehat{A} \prod_{P=1}^{N/2}\psi_{P}({\bf x}_{2P-1},{\bf x}_{2P}), \tag{12}\]
where \({\bf x}=({\bf r},\sigma)\) is the combined spatial and spin coordinate, with \(\sigma=\alpha,\beta\) referring to the spin index, \(\widehat{A}\) stands for the antisymmetrizer responsible for inter-geminal permutations of electron coordinates, and the geminal wavefunctions \(\psi_{P}({\bf x}_{2P-1},{\bf x}_{2P})\) contain one \(\alpha\) and one \(\beta\) electron. Because of this, each geminal wavefunction is a two-electron wavefunction; the sum of the occupation numbers for each spin channel must be
\[\sum_{[{\bf k},i\in\Omega_{\bf k}]\in P}n_{{\bf k}i}=1 \tag{13}\]
with \([{\bf k},i\in\Omega_{\bf k}]\in P\) indicating that the \(i\)-th Bloch's wave belonging to the \({\bf k}\)-th k-point (i.e. the Bloch's wave natural orbital \(\phi_{{\bf k}i}\)) is one of the Bloch's waves used in the construction of the \(P\)-th geminal. The \(P\)-th geminal wavefunction written in terms of the natural orbital Bloch's waves reads as
\[\psi_{P}({\bf x}_{2P-1},{\bf x}_{2P})=2^{-1/2}\sum_{[{\bf k},i\in\Omega_{\bf k }]\in P}{\bf c}_{{\bf k}i}\left[\phi_{{\bf k}i}({\bf r}_{2P-1})\alpha_{2P-1} \phi^{*}_{{\bf k}i}({\bf r}_{2P})\beta_{2P}-\phi_{{\bf k}i}({\bf r}_{2P}) \alpha_{2P}\phi^{*}_{{\bf k}i}({\bf r}_{2P-1})\beta_{2P-1}\right], \tag{14}\]
where the time-reversal symmetry is being employed to relate the degenerated Bloch's waves containing electrons with opposite spin [75; 76] forming a Kramers' pairs (i.e. \(\phi_{{\bf k}i}\) and \(\phi^{*}_{{\bf k}i}\) form a Kramers' pair). Let us remark that the presence of complex-conjugated Bloch's waves (natural orbitals) is related to states filled by \(\beta\) spin electrons; this choice is completely arbitrary. Also, the time-reversal symmetry imposed on the many-electron wavefunction leads to \(\Psi_{{\bf K}={\bf 0}}\) for spin-compensated systems.
Since the \(\{\psi_{P}\}\) geminal wavefunctions are built with the strong orthonormality requirement, i.e. the condition that \(\forall_{P\neq Q}\ \int d{\bf x}_{2}\psi_{P}({\bf x}_{1},{\bf x}_{2})\psi_{Q}({ \bf x}^{\prime}_{1},{\bf x}_{2})=\delta_{PQ}\), then the Bloch's waves natural orbitals are present in only one geminal wavefunction. When all the \(\psi_{P}\) are built containing only one Kramers' pair as
\[\psi_{P}({\bf x}_{2P-1},{\bf x}_{2P})=2^{-1/2}\left[\phi_{{\bf k}i}({\bf r}_{ 2P-1})\alpha_{2P-1}\phi^{*}_{{\bf k}i}({\bf r}_{2P})\beta_{2P}-\phi_{{\bf k}i} ({\bf r}_{2P})\alpha_{2P}\phi^{*}_{{\bf k}i}({\bf r}_{2P-1})\beta_{2P-1} \right], \tag{15}\]
the many-body wavefunction (\(\Psi_{\bf K}\)) defined in Eq. (12) corresponds to a single Slater determinant; thus, the Hartree-Fock approximation is recovered, where the natural orbital basis and the so-called canonical orbitals (the mean-field ones) coincide.
The structure of the APSG wavefunction allows us to express the total energy in terms of the natural orbitals, the occupation numbers, and some undetermined phases. Then, the total APSG energy takes the following form
\[E^{\text{APSG}}\left[\{f_{\mathbf{k}_{\mu}i}\},\{n_{\mathbf{k}_{ \mu}i}\},\{\phi_{\mathbf{k}_{\mu}i}\}\right] =2\sum_{\mathbf{k}_{\mu}}\sum_{i\in\Omega_{\mathbf{k}_{\mu}}}n_{ \mathbf{k}_{\mu}i}\langle\mathbf{k}_{\mu}i|\widehat{h}|\mathbf{k}_{\mu}i\rangle \tag{16}\] \[+\sum_{P}^{N/2}\sum_{\begin{subarray}{c}\left[\mathbf{k}_{\mu},i \in\Omega_{\mathbf{k}_{\mu}}\right]\\ \left[\mathbf{k}_{\nu},j\in\Omega_{\mathbf{k}_{\nu}}\right]\in P\end{subarray}} \zeta_{\mathbf{k}_{\mu}i}\zeta_{\mathbf{k}_{\nu},j}\sqrt{n_{\mathbf{k}_{\mu}i} n_{\mathbf{k}_{\nu}j}}\langle\mathbf{k}_{\mu}i\mathbf{k}_{\nu}j|\mathbf{k}_{\nu}j \mathbf{k}_{\mu}i\rangle\] \[+\sum_{P\neq Q}^{N/2}\sum_{\begin{subarray}{c}\left[\mathbf{k}_{ \mu},i\in\Omega_{\mathbf{k}_{\mu}}\right]\\ \left[\mathbf{k}_{\nu},j\in\Omega_{\mathbf{k}_{\nu}}\right]\in Q\end{subarray}} n_{\mathbf{k}_{\mu}i}n_{\mathbf{k}_{\nu},j}(2\langle\mathbf{k}_{\mu}i \mathbf{k}_{\nu}j|\mathbf{k}_{\mu}i\mathbf{k}_{\nu}j\rangle-\langle\mathbf{k} _{\mu}i\mathbf{k}_{\nu}j|\mathbf{k}_{\nu}j\mathbf{k}_{\mu}i\rangle),\]
where we have employed the known condition for APSG wavefunctions that allows us to express the CI coefficients in terms of occupation numbers (\(c_{\mathbf{k}i}^{2}=n_{\mathbf{k}i}\) or \(c_{\mathbf{k}i}=\zeta_{\mathbf{k}i}\sqrt{n_{\mathbf{k}i}}\)), the phases \(\zeta_{\mathbf{k}_{\mu}i}^{2}=1\), \(\widehat{h}\) refers to all one-body operators of the electronic Hamiltonian (i.e. the kinetic energy and the interaction with the external potential), \(\langle\mathbf{k}_{\mu}i\mathbf{k}_{\nu}j|\mathbf{k}_{\tau}k\mathbf{k}_{\theta }l\rangle\) are the usual two-electron integrals. Notice that the energy minimization procedure implies optimization of the occupation numbers, natural orbitals, and phases. Since this wavefunction can be entirely written in terms of the natural orbitals and occupation numbers, it has been widely used in the context of reduced density matrix functional theory to propose energy functionals [77, 78, 79, 80, 81, 82, 83, 84, 85].
The energy contribution in the second line of Eq. (16) comes from the geminal wavefunctions and describes intra-geminal interactions (i.e. the ones among the two electrons belonging to the geminal wavefunction). On the other hand, the energy contribution in the third line of Eq. (16) describes inter-geminal interactions, which are taken at the mean-field level (i.e. as Hartree-Fock interactions). Also, let us highlight that imposing the correct translation symmetry to \(\Psi_{\mathbf{K}}^{\text{APSG}}\) automatically enforces the correct symmetry in the 2-RDM elements, making them fulfill the condition presented in Eq. (9).
Since the natural orbitals belonging to different \(\Omega_{\mathbf{k}}\) subspaces (e.g. \(\mathbf{k}\) and \(\mathbf{k}^{\prime}\)) can be employed in the construction of the geminal wavefunctions (see Eq. (14)), the constraint given by Eq. (9) takes the following form for the geminal wavefunctions
\[\mathbf{k}^{\prime}-\mathbf{k}^{\prime}-\mathbf{k}+\mathbf{k}=\mathbf{0}= \mathbf{G}, \tag{17}\]
which ensures the correct translation symmetry in \(\Psi_{\mathbf{K}}^{\text{APSG}}\), the 2-RDM matrix elements, and the density matrix. Notice that the complex conjugation associated with the time-reversal symmetry of the Kramers' pairs was employed.
The occupation numbers of the natural orbitals that belong to the geminal are optimized under the constraint given in Eq. (13) during the energy minimization procedure (recalling that the CI coefficients can be written as \(c_{\mathbf{k}i}=\zeta_{\mathbf{k}i}\sqrt{n_{\mathbf{k}i}}\) for the \(\Psi_{\mathbf{K}}^{\text{APSG}}\)_ansatz_). Hence, the coupling of natural orbitals belonging to different k-points is allowed; thus, a reorganization of electrons among k-points can take place during the energy minimization procedure. Moreover, it is known [86] that the occupation numbers are not likely to become zero; then, the reorganization of electrons among k-points is not forbidden.
In summary, the \(\Psi_{\mathbf{K}}^{\text{APSG}}\)_ansatz_ is a valid approximation to the many-electron wavefunction that permits us to illustrate the reasons leading to the reorganization of electrons among k-points. Obviously, a more general valid CI expansion _ansatz_ (or the exact full-CI expansion) could also lead to a reorganization of the electrons among k-points, since the \(\Psi_{\mathbf{K}}^{\text{APSG}}\)_ansatz_ is a particular case of the exact \(\Psi_{\mathbf{K}}\) where the electron pairs do not interact. Let us then conclude that the reorganization of electrons among k-points that leads to non-integer \(N_{e}\) values for each \(\mathbf{k}\) value is purely a consequence of the electronic correlation effects. In this work, the electronic correlation effects are captured with the \(\gamma^{GW}\) approximation, which produces the reorganization of electrons among k-points. This leads to the non-integer \(N_{e}\) values obtained for each k-point that were used to compute the \(\Delta N(\mathbf{k})\) values presented in Fig. 3.
In the next section, we present an example based on the Si crystal where the reorganization of electrons among k-points is allowed using a \(\Psi_{\mathbf{K}}^{\text{APSG}}\)_ansatz_.
### Example of an allowed electronic density reorganization among the k-points in the Si crystal
For a working example, let us take the Si crystal computed excluding all the core states (i.e. using a pseudopotential and retaining only 8 electrons per unit cell). At the Hartree-Fock (or gKS DFT) level, eight states forming four Kramers' pairs are occupied for each k-point value. From the band structure, it is easy to recognize that the highest (in terms of energy) occupied state with an \(\alpha\) electron is localized at the \(\Gamma\) point (\(\mathbf{k}=(0,0,0)=\mathbf{k}_{\Gamma}\)). On the other hand, the lowest (in terms of energy) unoccupied state for the electrons with \(\alpha\) spin belongs to the \(X\) point (\(\mathbf{k}=(0.5,0.5,0)=\mathbf{k}_{X}\)). Let us label these states as \(\varphi_{\mathbf{k}_{\Gamma}4}\) and \(\varphi_{\mathbf{k}_{X}5}\), respectively. The energy difference between the \(\varphi_{\mathbf{k}_{\Gamma}4}\) and \(\varphi_{\mathbf{k}_{X}5}\) states is small (the experimental value is approximately 1 eV), which leads to the small indirect band gap obtained for this system.
In the following, let us organize all the mean-field Bloch's waves for the whole system (i.e. of the supercell) in ascending order in terms of energy. As is usually done in the search for the optimal \(\Psi^{\mathrm{APSG}}_{\mathbf{K}=\mathbf{0}}\), let us write the initial guess for the APSG _ansatz_ in terms of the mean-field Bloch's waves. However, let us search for a particular \(\Psi^{\mathrm{APSG}}_{\mathbf{K}}\)_ansatz_ where all the geminal wavefunctions contain only one Kramers' pair (i.e. are treated at the Hartree-Fock level), except for the last geminal (the \(P=N/2\) one in Eq. (12)), which is built by coupling the Bloch's waves \(\varphi_{\mathbf{k}_{\Gamma}4}\) and \(\varphi_{\mathbf{k}_{X}5}\), i.e.
\[\psi_{N/2}(\mathbf{x}_{N-1},\mathbf{x}_{N}) = 2^{-1/2}\left[c_{\mathbf{k}_{\Gamma}4}(\varphi_{\mathbf{k}_{\Gamma}4}(\mathbf{r}_{N-1})\alpha_{N-1}\varphi^{*}_{\mathbf{k}_{\Gamma}4}(\mathbf{r}_{N})\beta_{N}+\varphi_{\mathbf{k}_{\Gamma}4}(\mathbf{r}_{N})\alpha_{N}\varphi^{*}_{\mathbf{k}_{\Gamma}4}(\mathbf{r}_{N-1})\beta_{N-1})\right.\] \[+ \left.c_{\mathbf{k}_{X}5}(\varphi_{\mathbf{k}_{X}5}(\mathbf{r}_{N-1})\alpha_{N-1}\varphi^{*}_{\mathbf{k}_{X}5}(\mathbf{r}_{N})\beta_{N}+\varphi_{\mathbf{k}_{X}5}(\mathbf{r}_{N})\alpha_{N}\varphi^{*}_{\mathbf{k}_{X}5}(\mathbf{r}_{N-1})\beta_{N-1})\right],\]
with \(c_{\mathbf{k}_{\Gamma}4}=\zeta_{\mathbf{k}_{\Gamma}4}\sqrt{n_{\mathbf{k}_{\Gamma}4}}\) and \(c_{\mathbf{k}_{X}5}=\zeta_{\mathbf{k}_{X}5}\sqrt{n_{\mathbf{k}_{X}5}}\) being variational parameters subject to the condition \(n_{\mathbf{k}_{\Gamma}4}+n_{\mathbf{k}_{X}5}=1\) (to fulfill the requirement presented in Eq. (13)). This type of geminal approach, where only two states (four considering spin) are present in the geminal wavefunction, is known as a perfect-pairing approach. Next, let us assume that the mean-field Bloch's waves \(\varphi_{\mathbf{k}_{\Gamma}4}\) and \(\varphi_{\mathbf{k}_{X}5}\) coincide with the optimal natural orbitals in order to skip the orbital optimization procedure.
Now, let us focus on the energy contribution arising from the \(N/2\) geminal to the second term in the r.h.s. of the APSG energy (see Eq. (16))
\[n_{\mathbf{k}_{\Gamma}4}\langle\mathbf{k}_{\Gamma}4\,\mathbf{k}_{\Gamma}4|\mathbf{k}_{\Gamma}4\,\mathbf{k}_{\Gamma}4\rangle+n_{\mathbf{k}_{X}5}\langle\mathbf{k}_{X}5\,\mathbf{k}_{X}5|\mathbf{k}_{X}5\,\mathbf{k}_{X}5\rangle+2\zeta_{\mathbf{k}_{\Gamma}4}\zeta_{\mathbf{k}_{X}5}\sqrt{n_{\mathbf{k}_{\Gamma}4}n_{\mathbf{k}_{X}5}}\langle\mathbf{k}_{\Gamma}4\,\mathbf{k}_{X}5|\mathbf{k}_{X}5\,\mathbf{k}_{\Gamma}4\rangle. \tag{18}\]
Let us set the usual approximation of fixing the phases (i.e. \(\zeta_{\mathbf{k}_{\Gamma}4}\zeta_{\mathbf{k}_{X}5}=-1\)[77; 78; 80; 81]) for the interaction among the states above and below the Fermi level, and let all two-electron integrals be equal (which can occur in the extreme case when degenerate states are involved). The occupation numbers that minimize this energy contribution are then \(n_{\mathbf{k}_{\Gamma}4}=n_{\mathbf{k}_{X}5}=1/2\), illustrating that a reorganization of electrons occurs between the \(\Gamma\) and \(X\) k-points. In the Si crystal, the mean-field Bloch's waves are not completely degenerate in energy and do not correspond to the optimal natural orbitals; hence, the actual optimal occupation numbers differ from \(1/2\). However, they also differ from the initial values at the mean-field level, where \(n_{\mathbf{k}_{\Gamma}4}=1\) and \(n_{\mathbf{k}_{X}5}=0\). Moreover, beyond the perfect-pairing approach, the coupling of states to form a geminal wavefunction can include states belonging to other k-points. Since the geminal wavefunctions are built with states for the \(\alpha\) and the \(\beta\) electron, the state for the \(\alpha\) electron is associated with a \(\mathbf{k}^{\prime\prime}\) k-point while the state for the \(\beta\) electron is related to a \(-\mathbf{k}^{\prime\prime}\) k-point. The Hartree product in Eq. (111) conserves the value \(\mathbf{K}=\mathbf{k}^{\prime\prime}-\mathbf{k}^{\prime\prime}=\mathbf{0}\), which could lead to further reorganization of electrons among different k-points beyond the perfect-pairing approach. Thus, for example, the coupling of states belonging to \(\Gamma\), \(X\), \(\Delta\), etc. is allowed in the Si crystal. In fact, the coupling of all k-point values in the first Brillouin zone is valid to build geminal wavefunctions.
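To make the minimization above concrete, the short sketch below (not part of the original derivation; the integral values are arbitrary illustrative numbers) minimizes the pair-energy contribution of Eq. (118) under the constraint \(n_{\mathbf{k}_{\Gamma}4}+n_{\mathbf{k}_{X}5}=1\) with the phase convention \(\zeta_{\mathbf{k}_{\Gamma}4}\zeta_{\mathbf{k}_{X}5}=-1\). In the degenerate limit it recovers \(n=1/2\), and with a small splitting it returns fractional occupations away from the mean-field values of 1 and 0.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pair_energy(n_gamma, j_gg, j_xx, k_gx, phase=-1.0):
    """Energy contribution of the last geminal, cf. Eq. (118), with n_gamma + n_x = 1."""
    n_x = 1.0 - n_gamma
    return (n_gamma * j_gg + n_x * j_xx
            + 2.0 * phase * np.sqrt(n_gamma * n_x) * k_gx)

# Extreme (degenerate) case: all two-electron integrals equal (illustrative value).
g = 0.25
res = minimize_scalar(pair_energy, bounds=(0.0, 1.0), method="bounded",
                      args=(g, g, g))
print(res.x)   # ~0.5: electrons reorganize equally between Gamma and X

# Slightly lifted degeneracy: the occupations move away from 1/2 but stay fractional.
res2 = minimize_scalar(pair_energy, bounds=(0.0, 1.0), method="bounded",
                       args=(g, g + 0.05, 0.2))
print(res2.x)  # ~0.56, i.e. neither the mean-field value 1 nor 0
```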
Finally, let us remark that this example is based on a valid approximation to the many-electron wavefunction (i.e. a \(\Psi^{\mathrm{APSG}}_{\mathbf{K}}\)_ansatz_), where we illustrate that the reorganization of electrons among k-points is purely a consequence of the electronic correlation effects.
|
2302.10411
|
Regret Analysis of Online LQR Control via Trajectory Prediction and
Tracking: Extended Version
|
In this paper, we propose and analyze a new method for online linear
quadratic regulator (LQR) control with a priori unknown time-varying cost
matrices. The cost matrices are revealed sequentially with the potential for
future values to be previewed over a short window. Our novel method involves
using the available cost matrices to predict the optimal trajectory, and a
tracking controller to drive the system towards it. We adopted the notion of
dynamic regret to measure the performance of this proposed online LQR control
method, with our main result being that the (dynamic) regret of our method is
upper bounded by a constant. Moreover, the regret upper bound decays
exponentially with the preview window length, and is extendable to systems with
disturbances. We show in simulations that our proposed method offers improved
performance compared to other previously proposed online LQR methods.
|
Yitian Chen, Timothy L. Molloy, Tyler Summers, Iman Shames
|
2023-02-21T02:48:57Z
|
http://arxiv.org/abs/2302.10411v1
|
# Regret Analysis of Online LQR Control via Trajectory Prediction and Tracking: Extended Version
###### Abstract
In this paper, we propose and analyze a new method for online linear quadratic regulator (LQR) control with a priori unknown time-varying cost matrices. The cost matrices are revealed sequentially with the potential for future values to be previewed over a short window. Our novel method involves using the available cost matrices to predict the optimal trajectory, and a tracking controller to drive the system towards it. We adopted the notion of dynamic regret to measure the performance of this proposed online LQR control method, with our main result being that the (dynamic) regret of our method is upper bounded by a constant. Moreover, the regret upper bound decays exponentially with the preview window length, and is extendable to systems with disturbances. We show in simulations that our proposed method offers improved performance compared to other previously proposed online LQR methods.
Online LQR, Dynamic Regret, Trajectory tracking.
## 1 Introduction
Optimal control problems arise in many fields such as econometrics (Bjork et al., 2021; Radneantu, 2009), robotics (Hampsey et al., 2022; Renganathan et al., 2020), physics (Liu et al., 2021) and machine learning (Westenbroek et al., 2020). The Linear Quadratic Regulator (LQR) problem is the archetypal optimal control problem with vector-valued states and controls, and is reviewed in the following. Consider a controllable linear time-invariant system
\[x_{t+1}=Ax_{t}+Bu_{t}+w_{t}, \tag{1}\]
where \(t\) is a nonnegative integer, \(m\) and \(n\) are positive integers, \(A\in\mathbb{R}^{n\times n}\), \(B\in\mathbb{R}^{n\times m}\), \(x_{t},w_{t}\in\mathbb{R}^{n}\), \(x_{0}=\bar{x}_{0}\) for some given \(\bar{x}_{0}\in\mathbb{R}^{n}\), and \(u_{t}\in\mathbb{R}^{m}\). For a given finite time horizon \(T\geq 2\) and initial condition \(\bar{x}_{0}\), the control decisions \(\{u_{t}\}_{t=0}^{T-2}\) are computed to minimize the quadratic cost function
\[J_{T}(\{x_{t}\}_{t=0}^{T-1},\{u_{t}\}_{t=0}^{T-2}):=\sum_{t=0}^{T-2}x_{t}^{ \mathsf{T}}Q_{t}x_{t}+u_{t}^{\mathsf{T}}R_{t}u_{t}+x_{T-1}^{\mathsf{T}}Q_{T-1} x_{T-1}, \tag{2}\]
where \(Q_{t}\in\mathbb{S}_{+}^{n}\) and \(R_{t}\in\mathbb{S}_{++}^{m}\) are time-varying cost matrices and \(\mathbb{S}_{+}^{n}\) and \(\mathbb{S}_{++}^{n}\) denote the sets of positive semi-definite symmetric and positive definite symmetric matrices, respectively. The states \(x_{t}\) and controls \(u_{t}\) minimizing (2) must satisfy (1). When the cost matrices \(\{Q_{t}\}_{t=0}^{T-1}\) and
\(\{R_{t}\}_{t=0}^{T-2}\) are known _a priori_, the controls minimizing (2) subject to (1) can be found in closed form, cf. (Anderson and Moore, 2007, Chapter 2). However, in many real-world applications, such as power systems (Kouro et al., 2009), chemistry (Chen et al., 2012) and mechatronics (Vukov et al., 2015), full information about the cost matrices over the whole time horizon is not available (in advance) to the decision maker.
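For reference, the full-information closed-form solution mentioned above amounts to the standard backward Riccati recursion; a minimal sketch (our illustrative code and naming, not taken from the paper) is:

```python
import numpy as np

def finite_horizon_lqr(A, B, Qs, Rs, x0):
    """Backward Riccati recursion for min of (2) subject to (1) with w_t = 0.

    Qs = [Q_0, ..., Q_{T-1}] (last entry is the terminal cost),
    Rs = [R_0, ..., R_{T-2}], x0 is the initial state.
    """
    T = len(Qs)
    P = Qs[-1]                            # terminal condition P_{T-1} = Q_{T-1}
    gains = [None] * (T - 1)
    for t in range(T - 2, -1, -1):        # backward pass
        K_t = -np.linalg.solve(Rs[t] + B.T @ P @ B, B.T @ P @ A)
        gains[t] = K_t
        P = Qs[t] + A.T @ P @ (A + B @ K_t)
    xs, us = [x0], []                     # forward pass: optimal roll-out
    for t in range(T - 1):
        us.append(gains[t] @ xs[-1])
        xs.append(A @ xs[-1] + B @ us[-1])
    return xs, us
```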
In our work, for a given time horizon \(T\) and preview window length \(0\leq W\leq T-2\), we suppose that at any time \(t\) where \(0\leq t<T-2-W\), only the initial condition of the system (1) and the (partial) sequences of cost matrices \(\{Q_{i}\}_{i=0}^{t+W}\) and \(\{R_{i}\}_{i=0}^{t+W}\) are known. Let the cost-function information available to the decision maker at time \(t\) be
\[\mathcal{H}_{t}:=\{\{Q_{i}\}_{i=0}^{t+W},\{R_{i}\}_{i=0}^{t+W},\bar{x}_{0}\}, \tag{3}\]
where \(\mathcal{H}_{t}\) contains the full temporal information about the cost matrices for \(t\geq T-2-W\). The main focus of our work is to propose a novel control policy that generates \(u_{t}\) using the information available at time \(t\), and investigate its performance. We specifically consider a feedback control policy \(\pi(\cdot,\cdot)\) of the form
\[u_{t}=\pi(x_{t},\mathcal{H}_{t}), \tag{4}\]
and adopt the notion of regret to measure its performance. Several different notions of regret have been well studied and explored in online optimization problems, including static regret (Zinkevich, 2003; Shalev-Shwartz, 2012) and dynamic regret (Jadbabaie et al., 2015). In our work, performance is measured by dynamic regret. For any control sequence \(\{u_{t}\}_{t=0}^{T-2}\) and associated state sequence \(\{x_{t}\}_{t=0}^{T-1}\), the dynamic regret is defined as
\[\text{Regret}_{T}(\{u_{t}\}_{t=0}^{T-2}):=J_{T}(\{x_{t}\}_{t=0}^{T-1},\{u_{t} \}_{t=0}^{T-2})-J_{T}(\{x_{t}^{*}\}_{t=0}^{T-1},\{u_{t}^{*}\}_{t=0}^{T-2}), \tag{5}\]
where
\[\{u_{t}^{*}\}_{t=0}^{T-2}:=\operatorname*{argmin}_{\{v_{i}\}_{i=0}^{T-2}}J_{T }(\{\xi_{i}\}_{i=0}^{T-1},\{v_{i}\}_{i=0}^{T-2}), \tag{6}\]
and \(\{x_{t}^{*}\}_{t=0}^{T-1}\) satisfy the system dynamics (1) for input sequence \(\{u_{t}^{*}\}_{t=0}^{T-2}\).
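Given any candidate control sequence, the dynamic regret (5) can then be evaluated directly against this benchmark; a small hypothetical helper, reusing the `finite_horizon_lqr` sketch above:

```python
def quadratic_cost(xs, us, Qs, Rs):
    """J_T from Eq. (2): stage costs for t = 0..T-2 plus the terminal state cost."""
    stage = sum(x @ Q @ x + u @ R @ u
                for x, u, Q, R in zip(xs[:-1], us, Qs[:-1], Rs))
    return stage + xs[-1] @ Qs[-1] @ xs[-1]

def dynamic_regret(xs, us, Qs, Rs, A, B, x0):
    """Dynamic regret (5): excess cost over the full-information optimum (6)."""
    xs_opt, us_opt = finite_horizon_lqr(A, B, Qs, Rs, x0)
    return quadratic_cost(xs, us, Qs, Rs) - quadratic_cost(xs_opt, us_opt, Qs, Rs)
```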
### Related Works
Similar investigations have recently been conducted in Cohen et al. (2018), Zhang et al. (2021), and Akbari et al. (2022). Cohen et al. (2018) and Akbari et al. (2022) consider a different notion of regret involving comparison with controls \(\tilde{u}_{t}=-K\tilde{x}_{t}\) (instead of \(u_{t}^{*}\)) generated by a fixed gain \(K\) from the set of \((\bar{\kappa},\bar{\gamma})\) strongly stable gains denoted by \(\mathcal{K}\). More precisely, \(\mathcal{K}\) is the set of all gains where for any \(K\in\mathcal{K}\), there exist matrices \(L\) and \(H\) such that \(A+BK=HLH^{-1}\), with \(\|L\|\leq 1-\bar{\gamma}\) and \(\|H\|\,,\|H^{-1}\|\leq\bar{\kappa}\) for prescribed scalars \(\bar{\kappa}\) and \(\bar{\gamma}^{1}\). For a sequence of controls \(\{u_{t}\}_{t=0}^{T-1}\), the notion of regret for time horizon \(T\) from these works is
\[\text{StabilizingRegret}_{T}(\{u_{t}\}_{t=0}^{T-2}):=J_{T}(\{x_{t}\}_{t=0}^{T-1 },\{u_{t}\}_{t=0}^{T-2})-J_{T}(\{\tilde{x}_{t}\}_{t=0}^{T-1},\{\tilde{K} \tilde{x}_{t}\}_{t=0}^{T-2}), \tag{7}\]
where \(\tilde{K}\in\operatorname*{argmin}_{K\in\mathcal{K}}J_{T}(\{\tilde{x}_{t}\}_{ t=0}^{T-1},\{K\tilde{x}_{t}\}_{t=0}^{T-2})\) and \(\{\tilde{x}_{t}\}_{t=0}^{T-1}\) satisfies (1).
Cohen et al. (2018) propose an online LQR algorithm that yields controls with a theoretical regret upper bound of \(\text{StabilizingRegret}_{T}(\{u_{t}\}_{t=0}^{T-1})\leq O(\sqrt{T})\). However, the algorithm involves a
computationally expensive projection step at each time \(t\), and the projection set can become empty for some controllable systems when the covariance of the system disturbances \(w_{t}\) is positive definite (see Footnote 2). Thus, this method is not applicable to all controllable linear time-invariant systems. Moreover, the theoretical stabilizing regret upper bound is proportional to the inverse of the cube of the lower bound on the covariance of the system disturbances, i.e., \(\text{StabilizingRegret}_{T}(\{u_{t}\}_{t=0}^{T-1})=O(\frac{1}{\sigma^{3}})\), where the covariance of the disturbances from (1) is lower bounded by \(\sigma^{2}I\). If \(\sigma=0\), the theoretical regret upper bound is undefined. Akbari et al. (2022) proposed an Online Riccati Update algorithm that obtains \(\text{StabilizingRegret}_{T}(\{u_{t}\}_{t=0}^{T-1})=O(\sigma^{2}\log(T))\). The result avoids the undefined regret upper bound of Cohen et al. (2018) when the covariance matrix is not lower bounded by a positive \(\sigma\). However, like Cohen et al. (2018), the performance of the algorithm proposed in Akbari et al. (2022) is only guaranteed to achieve sublinear _stabilizing regret_ (7) against the best _fixed_ control gain \(K\) from the set \(\mathcal{K}\). This notion of regret is not suitable for dynamic non-stationary environments. For example, a self-driving car may operate in different environments such as high-wind areas, or high- and low-friction road surfaces. For the best performance to counteract these environments, we need to use time-varying control gains and compare them against the best time-varying policies chosen in hindsight.
Footnote 2: For example, the set is empty, if \(A=\begin{pmatrix}1&2\\ 6&9\end{pmatrix}\), \(B=\begin{pmatrix}9\\ 6\end{pmatrix}\), and the disturbances are distributed according to a multivariate Gaussian with mean zero and covariance matrix \(I_{2}\).
Zhang et al. (2021) investigate the dynamic regret (5) offered by an online LQR approach inspired by model predictive control. Future cost matrices and predicted disturbances are assumed to be available over a short future preview window of length \(W\geq 0\), and the following assumption is made.
**Assumption 1**: _There exist symmetric positive definite matrices \(Q_{min},Q_{max},R_{min},R_{max}\) such that for time \(0\leq t\leq T-2\),_
\[\begin{split} 0\prec Q_{min}&\preceq Q_{t} \preceq Q_{max},\\ 0\prec R_{min}&\preceq R_{t}\preceq R_{max},\end{split} \tag{8}\]
_where \(F\prec G\) denotes \(G-F\) being positive definite for symmetric matrices \(F\) and \(G\)._
Under Assumption 1, Zhang et al. (2021) propose an online LQR algorithm for selecting controls \(u_{t}\) at time \(t\) by solving
\[\min_{\{u_{k}\}_{k=t}^{t+W}}\sum_{k=t}^{t+W}x_{k}^{\mathsf{T}}Q_{k}x_{k}+u_{k}^ {\mathsf{T}}R_{k}u_{k}+x_{t+W+1}^{\mathsf{T}}P_{max}x_{t+W+1}\]
subject to (1), where \(P_{max}\) is the solution of the algebraic Riccati equation for the infinite-horizon LQR problem with cost matrices \(Q_{max}\) and \(R_{max}\). The dynamic regret (5) of control sequences generated by this method is shown to be upper bounded by a quantity that shrinks exponentially as the preview window length increases. However, the estimate of the tail cost at each time step (i.e., \(x_{t+W+1}^{\mathsf{T}}P_{max}x_{t+W+1}\)) can be too pessimistic due to its reliance on \(P_{max}\) and the matrices \(Q_{max}\) and \(R_{max}\) from the bounds given in Assumption 1.
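A receding-horizon reading of this baseline can be sketched as follows. The code is ours and only illustrative: \(P_{max}\) is obtained from the discrete algebraic Riccati equation via `scipy.linalg.solve_discrete_are`, and applying the first control of each windowed solution is our assumption of how the algorithm is rolled out.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def windowed_mpc_step(A, B, Q_win, R_win, Q_max, R_max, x_t):
    """One step of the baseline: minimize the previewed cost plus x' P_max x,
    then apply the first control of the window.

    Q_win = [Q_t, ..., Q_{t+W}] and R_win = [R_t, ..., R_{t+W}] are the previewed
    cost matrices; P_max solves the infinite-horizon DARE for (Q_max, R_max).
    """
    P = solve_discrete_are(A, B, Q_max, R_max)          # terminal weight P_max
    K = None
    for Q, R in zip(reversed(Q_win), reversed(R_win)):  # backward Riccati pass
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A + B @ K)
    return K @ x_t                                      # first control of the window
```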
### Contributions
The key contributions of this paper are:
* The proposal of a method for solving the online LQR problem that is independent of the given upper or lower bounds on the cost matrices;
* Development of a regret bound for the disturbance-free case and proof that our proposed control policy yields sublinear regret;
* Provision of sufficient conditions under which our regret bound is less than that of the state-of-the-art methodology; and
* Analysis of our regret bound in the presence of disturbances.
Outline.The rest of the paper is organized as follows. In Section 2, we state the online LQR problem that we consider. In Section 3, we introduce our proposed online LQR algorithm and bound its dynamic regret. In Section 4, we provide numerical results for the simulation of our proposed algorithm. Concluding remarks are presented in the last section.
## 2 Problem Formulation
In this paper, we consider the following problem.
**Problem 1** (Online LQR): _Consider the controllable system (1). Let the cost matrices in (5) satisfy Assumption 1 for any given \(T\geq 2\) and \(W<T-2\). At time \(0\leq t\leq T-W-2\), the available information to the decision maker is given by \(\mathcal{H}_{t}\) as defined in (3). It is desired to design a control policy \(\pi(\cdot,\cdot)\) of the form (4) that yields a regret, as defined by (5), that is independent of the bounds given in Assumption 1. Moreover, we seek to establish appropriate regret bounds for the following cases:_
* _The case where_ \(w_{t}=0\) _for_ \(0\leq t\leq T-1\)_;_
* _The case where the disturbances_ \(w_{t}\) _for_ \(0\leq t\leq T-1\) _are independent and identically distributed (i.i.d.) random variables such that_ \(\mathbf{E}(w_{t})=0\) _and_ \(\mathbf{E}(w_{t}w_{t}^{\mathsf{T}})=W_{d}\) _with_ \(\mathbf{E}(\cdot)\) _being the expectation operator and_ \(W_{d}\in\mathbb{S}_{+}^{n}\)_._
Specifically, for part a) of Problem 1 we show that the regret (as defined in (5)) associated with our proposed control policy is sublinear with respect to the time horizon \(T\) for the case where \(w_{t}=0\) for \(0\leq t\leq T-1\), i.e.,
\[\text{Regret}_{T}(\{u_{t}\}_{t=0}^{T-2})\leq o(T). \tag{9}\]
For part b), we define the notion of "expected regret" as
\[\text{ExpectedRegret}_{T}(\{u_{t}\}_{t=0}^{T-2}):=\mathbf{E}(J_{T}(\{x_{t}\} _{t=0}^{T-1},\{u_{t}\}_{t=0}^{T-2})-J_{T}(\{x_{t}^{*}\}_{t=0}^{T-1},\{u_{t}^{*} \}_{t=0}^{T-2})), \tag{10}\]
and show that our proposed control policy yields controls that satisfy
\[\text{ExpectedRegret}_{T}(\{u_{t}\}_{t=0}^{T-2})\leq C_{ER}T\gamma^{2W}\]
for positive scalars \(C_{ER}\) and \(\gamma\). In what follows we address this problem.
## 3 Approach and Regret Analysis
Our proposed online LQR approach involves first using the available information \(\mathcal{H}_{t}\) at each time \(t\) to predict the optimal state \(x_{t+1}^{*}\) solving the full information LQR problem described in (6). We then select controls to track this prediction. At time \(0\leq t\leq T-1\), we only know the information in \(\mathcal{H}_{t}\). Let \(x_{t+1|t+W}\) denote the estimate of the optimal state at time \(t+1\) based on \(\mathcal{H}_{t}\). We aim to track the state \(x_{t+1|t+W}\) at time \(t+1\).
Prediction.At each time \(t\), we plan an optimal trajectory starting from the initial state \(\bar{x}_{0}\) using the known cost matrices up to time \(t+W\) and setting all the future matrices to be equal to their known values for time \(t+W\). Specifically, at time \(t\) where \(0\leq t<T-W\), define \(J_{t+W}(\cdot,\cdot)\) as
\[J_{t+W}(\{\xi_{i}\}_{i=0}^{T-1},\{v_{i}\}_{i=0}^{T-2}) :=\sum_{k=0}^{t+W}[\xi_{k}^{\mathsf{T}}Q_{k}\xi_{k}+v_{k}^{\mathsf{ T}}R_{k}v_{k}]\] \[+\sum_{k=t+1+W}^{T-2}[\xi_{k}^{\mathsf{T}}Q_{t+W}\xi_{k}+v_{k}^{ \mathsf{T}}R_{t+W}v_{k}]+\xi_{T-1}^{\mathsf{T}}Q_{t+W}\xi_{T-1}, \tag{11}\]
and
\[J_{t+W}(\{\xi_{i}\}_{i=0}^{T-1},\{v_{i}\}_{i=0}^{T-2}):=J_{T}(\{ \xi_{i}\}_{i=0}^{T-1},\{v_{i}\}_{i=0}^{T-2}) \tag{12}\]
for \(T-W\leq t\leq T-1\).
Then, we find the predicted optimal control sequence for all \(0\leq j\leq T-2\) by solving
\[\begin{split}\left(\{x_{j|t+W}\}_{j=0}^{T-1},\{u_{j|t+W}\}_{j=0}^{T-2}\right)=\operatorname*{argmin}_{(\{\xi_{i}\}_{i=0}^{T-1},\{v_{i}\}_{i=0}^{T-2})}& J_{t+W}(\{\xi_{i}\}_{i=0}^{T-1},\{v_{i}\}_{i=0}^{T-2})\\ \text{subject to}&\quad\xi_{i+1}=A\xi_{i}+Bv_{i},\quad\xi_{0}=\bar{x}_{0}.\end{split} \tag{13}\]
Prediction Tracking.We propose the following feedback control policy
\[\pi(x_{t},\mathcal{H}_{t})=K(x_{t}-x_{t|t+W})+u_{t|t+W}, \tag{14}\]
where \(K\in\mathbb{R}^{m\times n}\) is a control matrix such that \(\rho(A+BK)<1\), and \(\rho(\cdot)\) denotes the matrix spectral radius. Intuitively, such a control matrix \(K\) leads to contraction of the distance between \(x_{t+1}\) and \(x_{t+1|t+W}\), respectively given by (1) and (13).
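A compact sketch of the overall prediction-and-tracking policy, Eqs. (11)-(14), is given below for the disturbance-free case. It reuses the `finite_horizon_lqr` sketch above; the pole-placement choice of \(K\) mirrors the experiments in Section 4, but the function and parameter names are our own illustrative assumptions.

```python
import numpy as np
from scipy.signal import place_poles

def online_lqr_predict_track(A, B, Qs, Rs, x0, W, poles):
    """Sketch of the policy in Eq. (14): replan with frozen tail costs, then track."""
    T = len(Qs)
    K = -place_poles(A, B, poles).gain_matrix          # any K with rho(A + BK) < 1
    x, xs, us = x0.copy(), [x0.copy()], []
    for t in range(T - 1):
        hq = min(t + W, T - 1)                         # last previewed state-cost index
        hr = min(t + W, T - 2)                         # last previewed input-cost index
        Q_ext = list(Qs[:hq + 1]) + [Qs[hq]] * (T - 1 - hq)   # freeze tail, Eq. (11)
        R_ext = list(Rs[:hr + 1]) + [Rs[hr]] * (T - 2 - hr)
        xs_pred, us_pred = finite_horizon_lqr(A, B, Q_ext, R_ext, x0)  # Eq. (13)
        u = K @ (x - xs_pred[t]) + us_pred[t]          # Eq. (14)
        x = A @ x + B @ u                              # disturbance-free dynamics (1)
        us.append(u)
        xs.append(x.copy())
    return xs, us
```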
### Regret Analysis for the Disturbance-free Case
In the following theorem, we present the result for the case of Problem 1a): the control sequence generated by (14) incurs a regret upper bound that is sublinear in the time horizon \(T\). Here, with slight abuse of notation, for a sequence of matrices \(\{\Sigma_{i}\}_{i=0}^{N}\), we define \(\max_{0\leq t\leq N}\Sigma_{t}:=\{\Sigma_{\tau}\mid 0\leq\tau\leq N,\Sigma_{ \tau}\succeq\Sigma_{k}\text{ for all }0\leq k\leq N\}\) and \(\min_{0\leq t\leq N}\Sigma_{t}:=\{\Sigma_{\tau}\mid 0\leq\tau\leq N,\Sigma_{ \tau}\preceq\Sigma_{k}\text{ for all }0\leq k\leq N\}\). This enables us to define _cost matrix sequence extrema_ as \(\bar{R}_{max}:=\max_{0\leq t\leq T-2}R_{t}\), \(\bar{Q}_{max}:=\max_{0\leq t\leq T-1}Q_{t}\), \(\bar{R}_{min}:=\min_{0\leq t\leq T-2}R_{t}\), and \(\bar{Q}_{min}:=\min_{0\leq t\leq T-1}Q_{t}\). For any matrix \(\Gamma\), we further define \(\lambda_{min}(\Gamma)\) as the minimum eigenvalue of \(\Gamma\) and \(\lambda_{max}(\Gamma)\) as the maximum eigenvalue of \(\Gamma\).
**Theorem 1** (Main Result): _Consider the linear system defined by (1) with a given time horizon \(T\geq 2\) and preview window length \(0\leq W\leq T-2\). Suppose that at time \(0\leq t\leq T-2\) the control input \(u_{t}\) is generated by the policy \(\pi(\cdot,\cdot)\) given by (14). Under Assumption 1, the regret defined by (5) satisfies_
\[\text{Regret}_{T}(\{u_{t}\}_{t=0}^{T-2}) \leq\frac{10D\gamma^{2W}\|\bar{x}_{0}\|^{2}}{3}\bigg{[}(\alpha_{1 }+\alpha_{2})(\frac{C^{2}C_{K}\gamma}{(\gamma-1)})^{2}\bigg{(}\gamma^{2}S_{T}( \eta^{2}\gamma^{2})-2\gamma S_{T}(\eta^{2}\gamma)\] \[+S_{T}(\eta^{2}))+\frac{10C_{f}^{2}}{3}((\frac{\eta\gamma}{q(q- \eta\gamma)}-\frac{\eta}{q(q-\eta)})^{2}S_{T}(q^{2}) \tag{15}\] \[+\frac{(\eta\gamma)^{2}S_{T}(\eta^{2}\gamma^{2})}{q^{2}(q-\eta \gamma)^{2}}+\frac{\eta^{2}S_{T}(\eta^{2})}{q^{2}(q-\eta)^{2}})\bigg{)}+(C_{ K}C^{2})^{2}S_{T}(\eta^{2})\bigg{]},\]
_where \(\bar{P}_{max}\) satisfies_
\[\bar{P}_{max}=\bar{Q}_{max}+A^{\mathsf{T}}\bar{P}_{max}A-A^{\mathsf{T}}\bar{P}_ {max}B(\bar{R}_{max}+B^{\mathsf{T}}\bar{P}_{max}B)^{-1}B^{\mathsf{T}}\bar{P}_{ max}A,\]
\(D=\left\|\bar{R}_{max}+B^{\mathsf{T}}\bar{P}_{max}B\right\|\)_, \(C_{K}=\left\|(\bar{R}_{min}+B^{\mathsf{T}}\bar{Q}_{min}B)^{-1}\right\|^{2}\left\| \bar{R}_{max}B^{\mathsf{T}}\right\|\frac{\lambda_{max}^{2}(\bar{P}_{max})}{ \lambda_{min}(\bar{Q}_{min})}\), \(C=\frac{\lambda_{max}(\bar{P}_{max})}{\lambda_{min}(\bar{Q}_{min})}\), \(\eta=\sqrt{1-\frac{\lambda_{min}(Q_{min})}{\lambda_{max}(P_{max})}}\), \(\alpha=\max_{\begin{subarray}{c}0\leq i\leq t-1\\ 0\leq t\leq T-2\end{subarray}}\{\lambda_{max}(A^{\mathsf{T}}P_{i+1}^{*}A), \lambda_{max}(A^{\mathsf{T}}P_{i+1|t}A)\}\), \(\beta=\min_{0\leq t\leq T-2}\lambda_{min}(Q_{t})\), \(\gamma=\frac{\alpha}{\alpha+\beta}\), \(S_{T}(z)=\sum_{t=0}^{T-1}z^{t}\), \(\alpha_{1}=\max_{t}\left\|K_{t|t+W}-K\right\|^{2}\), \(\alpha_{2}=\max_{t}2\left\|K_{t}^{*}-K\right\|^{2}\), \(C_{f}=\max_{n\geq 0}\frac{\left\|(A+BK)^{n}\right\|}{(q+\varepsilon)^{n}}\), \(q=\rho(A+BK)+\varepsilon\), and \(0\leq\varepsilon<1-\rho(A+BK)\)._
Proof.: See Appendix A.
**Remark 2**: _For any \(z\in[0,1)\) there exists an \(\Lambda\in\mathbb{R}\), such that \(\lim_{T\to\infty}S_{T}(z)=\Lambda\). Consequently, \(\overline{\lim}_{T\to\infty}\frac{\text{Regret}_{T}(\{u_{t}\}_{t=0}^{T-2})}{T}=0\), which implies that the control sequence described by (14) yields sublinear regret._
**Remark 3**: _Let \(F(\bar{x}_{0},A,B,T,\bar{R}_{max},\bar{R}_{min},\bar{Q}_{max},\bar{Q}_{min},K)\) denote the right hand side (RHS) of (15). By stating almost identical lemmas to Lemmas 8 and 9 using the bounds given in Assumption 1 instead of the cost matrices sequence extrema values, one can arrive at a regret bound in terms of these bounds analogous to (15):_
\[\text{Regret}_{T}(\{u_{t}\}_{t=0}^{T-2})\leq F(\bar{x}_{0},A,B,T,R_{max},R_{ min},Q_{max},Q_{min},K).\]
In the following proposition, we state a condition, in terms of the bounds given in Assumption 1 and the cost matrix sequence extrema, under which the bound given in the above theorem is guaranteed to be smaller than that of (Zhang et al., 2021, Theorem 1, Equation (15)). Obviously, there might be other such conditions, exploration of which is left to future work.
**Proposition 4**: _Adopt the hypothesis of Theorem 1. If_
\[\lambda_{max}^{10}(Q_{max})\geq\frac{5\left[(1+\frac{\alpha_{1}+\alpha_{2}}{(1- \gamma)^{2}})(\frac{1}{1-\eta^{2}})+\frac{10C_{f}^{2}}{q^{2}(q-\eta\gamma)^{2} (q-\eta)^{2}(1-\eta^{2})(1-\eta^{2}\gamma^{2})(1-q^{2})}\right]}{(C_{K}^{2} \lambda_{min}^{2}(\bar{R}_{min})\lambda_{min}^{4}(\bar{Q}_{min}))^{-1}6\|A\|^{2} \|B\|^{2}\|B\bar{R}_{min}^{-1}B^{\mathsf{T}}\|^{2}}, \tag{16}\]
_where \(Q_{max}\) is given in Assumption 1, then the RHS of the inequality in (Zhang et al., 2021, Theorem 1, Equation (15)) is greater than the RHS of inequality (15) in Theorem 1._
Proof.: See Appendix B.
The RHS of (16) is independent of the matrices \(Q_{min},Q_{max},R_{min},R_{max}\) given in Assumption 1. On the other hand, the upper bound on the regret for control decisions generated by (Zhang et al., 2021, Algorithm 1) does depend on these values, and even if the actual sequence of cost matrices remains bounded away from these bounds, the method still explicitly uses them; this is a potential source of conservatism.
### Regret Analysis in the Presence of Disturbances
The result presented in the following theorem addresses Problem 1 case b). Note that at time \(t\), \(\{w_{k}\}_{k=0}^{t}\) is the sequence of disturbances available to the decision maker. In this case, we still consider a policy \(\pi(\cdot,\cdot)\) as given by (14), with the only difference being that \(x_{t|t+W}\) is obtained by solving the following optimisation problem:
\[\begin{split}\left(\{x_{j|t+W}\}_{j=0}^{T-1},\{u_{j|t+W}\}_{j=0} ^{T-2}\right)=\operatorname*{argmin}_{\begin{subarray}{c}(\xi_{i})_{i=0}^{T -1},\{v_{i}\}_{i=0}^{T-2}\end{subarray}}& J_{t+W}(\{\xi_{i}\}_{i=0}^{T-1},\{v_{i}\}_{i=0}^{T-2})\\ \text{subject to}&\xi_{i+1}=A\xi_{i}+Bv_{i}+w_{i} \quad\text{for }0\leq i\leq t,\\ &\xi_{i+1}=A\xi_{i}+Bv_{i}\quad\text{for }i>t,\quad\xi_{0}= \bar{x}_{0}.\end{split} \tag{17}\]
**Theorem 5**.: _Consider the system defined by (1) with a given time horizon \(T\geq 2\) and preview window length \(0\leq W\leq T-2\). Suppose that at time \(0\leq t\leq T-2\) the control input \(u_{t}\) is generated by the policy \(\pi(\cdot,\cdot)\) given by (14). Under Assumption 1, the expected regret defined by (10) satisfies_
\[\text{ExpectedRegret}_{T}(\{u_{t}\}_{t=0}^{T-2})\leq C_{ER}T\gamma^{2W} \tag{18}\]
_where \(C_{ER}\) is a positive scalar and \(\gamma\) is given in Theorem 1._
Proof.: See Appendix C.
In the next section, we investigate the performance of the proposed algorithm for different scenarios.
## 4 Numerical Simulations
In this section, we numerically demonstrate the performance of the proposed algorithm. To this end, define \(\Phi_{T,W}:=\text{Regret}_{T}(\{u_{t}^{{}^{\prime}}\}_{t=0}^{T-2})-\text{ Regret}_{T}(\{u_{t}\}_{t=0}^{T-2})\), where \(\{u_{t}^{{}^{\prime}}\}_{t=0}^{T-2}\) is generated by (Zhang et al., 2021, Algorithm 1) and \(\{u_{t}\}_{t=0}^{T-2}\) is generated by the policy described in (14), under a preview window of length \(W\).
### Linearized Inverted Pendulum
Consider the following linearized inverted pendulum system (Franklin et al., 2020, Chapter 2.13):
\[x_{t+1}=\begin{pmatrix}0&1&0&0\\ 0&-0.1818&2.6727&0\\ 0&0&0&1\\ 0&-18.1818&31.1818&0\end{pmatrix}x_{t}+\begin{pmatrix}0\\ 1.8182\\ 0\\ 4.5455\end{pmatrix}u_{t}. \tag{19}\]
In the following experiments, the preview horizon \(W\) ranges from 0 to 19 and the time horizon \(T\) ranges from 19 to 500. The cost matrices are drawn uniformly at random subject to Assumption 1 with \(Q_{min}=8\times 10^{3}I_{4\times 4}\), \(Q_{max}=3.2\times 10^{4}I_{4\times 4}\), \(R_{min}=2\times 10^{3}\), and \(R_{max}=9.8\times 10^{4}\). The fixed controller \(K\) in (14) is chosen by placing the poles at the locations \((1,6,4,3)\times 10^{-3}\). We repeat the experiment over 200 trials. Figure 1(a) shows \(\Phi_{T,W}\) for preview window lengths from 0 to 19 and time horizons from 19 to 500. Once the preview window length is greater than 2, our method outperforms (Zhang et al., 2021, Algorithm 1).
### Random Linear Systems
In this experiment, the linear system is randomly chosen: all elements of \(A\) and \(B\) are drawn uniformly from the range \((0,10)\), ensuring the pairs \((A,B)\) are controllable. The settings of the preview window length, time horizon, cost matrices and the pole locations for the control matrix \(K\) in (14) are identical to those chosen in Section 4.1. The plot in Figure 1(b) shows the difference between the regret of control decisions generated by (Zhang et al., 2021, Algorithm 1) and the regret of control decisions generated by our proposed method, obtained by averaging the
Figure 1: Performance measure \(\Phi_{T,W}\) for simulated systems.
regret over 200 trials. As the preview window length exceeds 4, our method outperforms (Zhang et al., 2021, Algorithm 1).
The plots in Figures 1(a) and 1(b) demonstrate that, as the preview window length exceeds the rank of the system, i.e., the least number of steps required to steer the state of the system to a designated state, the proposed method outperforms the method from Zhang et al. (2021).
### Linear Systems with disturbances
The following experiments repeat those considered in Sections 4.1 and 4.2, using the system defined in (19) and in the presence of disturbances \(w_{t}\sim\mathcal{N}(0,25I_{4\times 4})\). The settings of the preview window length, time horizon, cost matrices and the pole locations for the control matrix \(K\) are identical to those chosen in Section 4.1. The method for finding \(x_{t|t+W}\) and \(u_{t|t+W}\) is described in Section 3.2 (see (17)). The plots in Figures 1(c) and 1(d) depict the average value of \(\Phi_{T,W}\) over 200 random trials.
## 5 Conclusions and Future Work
This paper proposes a new control policy that achieves constant dynamic regret when the available information about the cost matrices is revealed sequentially as the time step increases. The proposed method, and consequently its regret, has been demonstrated to be, contrary to the state of the art, independent of the _ex ante_ upper and lower bounds on the cost matrices. To exhibit the effect of such independence, a sufficient condition is provided under which the regret upper bound of the proposed method is guaranteed to be smaller than the one from (Zhang et al., 2021, Theorem 1). This paper leads to many interesting research directions, which are briefly discussed below. It would be interesting to devise a methodology for selecting a time-varying feedback gain matrix in (14) instead of a fixed \(K\) in order to further minimise the regret. Moreover, one can extend the algorithm to the case of time-varying system matrices \(A_{t}\) and \(B_{t}\), and, via differential dynamic programming, to nonlinear dynamics with control constraints, and establish new dynamic regret bounds.
|
2301.09410
|
THz ultra-strong light-matter coupling up to 200K with
continuously-graded parabolic quantum wells
|
Continuously graded parabolic quantum wells with excellent optical
performances are used to overcome the low-frequency and thermal limitations of
square quantum wells at terahertz frequencies. The formation of microcavity
intersubband polaritons at frequencies as low as 1.8 THz is demonstrated, with
a sustained ultra-strong coupling regime up to a temperature of 200K. It is
additionally shown that the ultra-strong coupling regime is preserved when the
active region is embedded in sub-wavelength resonators, with an estimated
relative strength $\eta = \Omega_R / \omega_0 = 0.12$. This represents an
important milestone for future studies of quantum vacuum radiation because such
resonators can be optically modulated at ultrafast rates, possibly leading to
the generation of non-classical light via the dynamic Casimir effect. Finally,
with an effective volume of $2.10^{-6} \lambda_0^3$, it is estimated that fewer
than 3000 electrons per resonator are ultra-strongly coupled to the quantized
electromagnetic mode, proving it is also a promising approach to explore
few-electron polaritonic systems operating at relatively high temperatures.
|
Paul Goulain, Chris Deimert, Mathieu Jeannin, Stefano Pirotta, Wojciech Julian Pasek, Zbigniew Wasilewski, Raffaele Colombelli, Jean-Michel Manceau
|
2023-01-23T13:10:01Z
|
http://arxiv.org/abs/2301.09410v1
|
# THz ultra-strong light-matter coupling up to 200K with continuously-graded parabolic quantum wells
###### Abstract
Continuously graded parabolic quantum wells with excellent optical performances are used to overcome the low-frequency and thermal limitations of square quantum wells at terahertz frequencies. The formation of microcavity intersubband polaritons at frequencies as low as \(1.8\,\mathrm{THz}\) is demonstrated, with a sustained ultra-strong coupling regime up to a temperature of \(200\,\mathrm{K}\). Thanks to the excellent intersubband transition linewidth, polaritons present quality factors up to \(17\). It is additionally shown that the ultra-strong coupling regime is preserved when the active region is embedded in sub-wavelength resonators, with an estimated relative strength \(\eta=\Omega_{R}/\omega_{0}=0.12\). This represents an important milestone for future studies of quantum vacuum radiation because such resonators can be optically modulated at ultrafast rates, possibly leading to the generation of non-classical light via the dynamic Casimir effect. Finally, with an effective volume of \(2.10^{-6}\lambda_{0}^{3}\), it is estimated that fewer than \(3000\) electrons per resonator are ultra-strongly coupled to the quantized electromagnetic mode, proving it is also a promising approach to explore few-electron polaritonic systems operating at relatively high temperatures.
## I Introduction
Following the pioneering work by Haroche et al.[1], engineering of the light-matter interaction has been one of the most active fields of modern physics. Remarkably, when the losses in the system are lower than the coupling strength, a new regime is attained - the strong coupling regime (SCR) - where light and matter exchange energy coherently and periodically. In the frequency domain, this leads to a radical change of the system's spectral response. From the first observation with Rydberg atoms, the SCR has been demonstrated in a plethora of systems spanning excitons, organic molecules, electronic transitions, superconducting qubits and many others.[2] The strength of the light-matter interaction is often gauged by a dimensionless parameter \(\eta\) that is the ratio between the coupling constant \(\Omega_{R}\) (also called the vacuum Rabi frequency) over the resonant transition frequency \(\omega_{0}\).[3] Above a value \(\eta>0.1\), one enters the ultra-strong coupling (USC) regime where the diamagnetic terms of the interaction Hamiltonian start to play an important role, leading to a deviation from the linear approximation and the formation of a sizeable population of virtual photons in the ground state of the system.[4] The same foundational article by Ciuti et al.[4] also proposed that an abrupt modulation of the system ground state leads to a release of such virtual population as real photons, an approach that could lead to the development of non-classical light emitters at long-wavelengths.
The first candidates for this USC regime of interaction were intersubband (ISB) polaritons.[4] These hybrid states stem from the strong coupling between an ISB transition (more rigorously, an ISB plasmon[5]) within the conduction band of doped semiconductor quantum wells and the quantized electro-magnetic mode of a microcavity. They can operate down to the THz frequency range, while their Rabi frequency - proportional to the square root of the introduced dopant density - can reach values equivalent to the bare transition. Soon after the first experimental demonstration[6], the USC regime with ISB polaritons was demonstrated in two different microcavity configurations.[7; 8] Since then, USC was demonstrated with organic systems operating at visible wavelengths and ambient temperature as thoroughly discussed in [3], and more recently with plasmonic resonances embedded in microcavities that could in principle be extended to the far-infrared range.[9] With doped heterostructures, the coupling strength was increased further using two different strategies that alter the character of the quantum well (QW) resonance. In the first configuration, a massive increase of the doping density in QWs was found to lead to the formation of a multisubband
plasmon[10]: a resonance essentially locked to the plasma frequency. This enabled a coupling strength \(\eta=0.47\), once placed in a resonant metallic microcavity at mid-infrared wavelengths.[11] In the second configuration, the application of a strong magnetic field to a 2D electron gas leads to the formation of Landau levels at sub-THz frequencies. When coupled to sub-wavelength resonators, Landau polaritons form with a coupling strength that can reach values beyond unity.[12; 13; 14] Several attempts have been made to obtain ultrafast modulation of the USC regime with ISB polaritons,[15; 16] with most of the activity aiming at measuring quantum vacuum radiation now conducted with Landau polaritons.[17] Although they offer coupling strengths around/beyond unity, they require elevated magnetic fields and cryogenic temperatures below 8 K.
Two primary factors have prevented further increase of \(\eta\) with THz ISB polaritons - particularly in square QWs. On one hand, any attempt to increase the coupling strength (\(\Omega_{R}\)) by increasing the doping results in a blueshift of the ISB transition, leaving the ratio \(\eta\) unchanged. This is due to the Coulomb interactions that act as a self-energy correction on the optical transition. This effect, commonly known as the depolarization shift,[5; 18] is proportional to the square root of the sheet doping density. On the other hand, any attempt to lower the transition frequency (\(\omega_{0}\)) while maintaining a high doping density will spread the population (n\({}_{\text{2D}}\)) over two subbands. This leads to the activation of a second ISB transition and no further optimization of \(\eta\) is possible. Careful simulations of a square QW with a doping density of \(1.10^{11}\) cm\({}^{-2}\), using a commercial solver (NextNano), reveal that it is not possible to obtain a single and sharp ISB transition below 2.5 THz. Accessing the lower frequency range is, however, a prerequisite to demonstrate non-classical light generation via the dynamic Casimir effect. The reason is that elucidating such subtle effects requires coherent field correlation measurements, which have so far been done using the electro-optic sampling technique,[19; 20; 21; 22] whose highest sensitivity bandwidth is in the 1-2 THz range.
## II Quantum Wells with parabolic energy potential
### Design and implementation
A promising approach to circumvent this problem is to move from the standard square QW to parabolic energy potentials. The key advantage is that the electron-electron interaction cancels out - analogous to Kohn's theorem for cyclotron resonance[23] - leaving the transition energy independent of the distribution and density of the electrons. Digitally graded parabolic energy potentials have already been used for the demonstration of strong light-matter coupling at THz frequencies, but they suffer from low optical performance due to the interface roughness broadening that is inherent to the interdigitated epitaxial growth technique.[24; 25] Furthermore, these demonstrations were not designed to operate below the frequency of 2.5 THz imposed by classical square QW structures. A recent experimental tour de force has radically changed the perspective in that field. In Ref.[26], the parabolic energy potential is no longer mimicked by a series of narrow square QWs, but it is faithfully reproduced by a continuously graded alloy composition. This requires precise time-varying group III fluxes, which are challenging to produce in a conventional MBE setup, primarily because there is a thermal lag in the response of the group III cell. The approach developed by Deimert et al. in Ref.[26] uses a transfer function model to counteract the thermal dynamics, along with an additional corrective step to achieve high-precision composition gradients at typical growth rates. This approach led to high-performance THz ISB transitions with record linewidths up to 150 K, and to stable operation up to room temperature.[27] In the present letter, we build on that result and we experimentally demonstrate the USC regime with ISB transitions below the inherent frequency limit of square QWs.
The heterostructure comprises a stack of 4 PQWs with \(1.10^{11}\) cm\({}^{-2}\) nominal sheet electron density per well. The
Figure 1: (a) Schrödinger-Poisson simulation of the parabolic quantum well with a continuously graded alloy. In black is the Conduction band energy, the green lines are the squared moduli of the wave functions \(|\Psi|^{2}\), while the dash line marks the energy position of the Fermi level. (b) Calculated absorption of the system with a linewidth of 10%.
wells were realized in the Al\({}_{\text{x}}\)Ga\({}_{\text{1-x}}\)As material system by varying the aluminum composition x in the range 2%-20% along the growth direction. The PQWs were each 130.75 nm wide, separated by 20 nm Al\({}_{0.2}\)Ga\({}_{0.8}\)As barriers where the Si doping is introduced as \(\delta\)-layers at the barrier center. Two GaAs cap layers (50-nm thick) are placed on top and bottom of the structure. The overall active region thickness is 783 nm. **Figure 1a** shows the Schrodinger-Poisson simulation of the grown structure performed with the commercial software nextnano[28] and the calculated absorption centered at 2.2 THz (**Figure 1b**).
### Characterization of the optical performances
We have first characterized the optical performance of the grown structure. The sample is shaped in a multi-pass prism configuration with the facets polished at 45 degrees angle. Ti/Au surface coating is made on the side of the epitaxial growth to force the boundary conditions at the semiconductor-metal interface and ensure a maximum of electric field on the parabolic quantum well repetition. The waveguides were placed into a continuous-flow cryostat inside a Fourier transform infrared spectrometer (Bruker IFS66V), with polarized broadband THz light incident on the facet. To reach a spectral resolution as low as 0.5 cm\({}^{\text{-1}}\), the whole FTIR was placed under vacuum to remove the contribution of water absorption lines that are present even at low relative humidity levels (\(\text{RH}<1\%\)). Detection was performed with a liquid-He-cooled Si bolometer. **Figure 2a** shows the color-map evolution of the transmittance as function of the temperature. By modeling the ISB transition with a diagonal dielectric tensor as described in Refs.[29; 30; 31], the central frequency (\(\nu_{0}\)), the linewidth (\(\gamma_{i}\)) and the plasma frequency (\(\omega_{p}\)) can be extracted. Furthermore, one can gauge the evolution of the stack absorption as a function of the temperature. For a single well, the integrated 2D absorption is expressed as in [32]:
\[\int_{-\infty}^{+\infty}\alpha_{2D}(\omega)d\omega=\frac{\omega_{p}^{2}L_{QW}} {c\sqrt{\varepsilon_{s}}} \tag{1}\]
Hence, one can define the parameter \(\alpha=\sqrt{N_{QW}}\omega_{p}\), where \(N_{QW}\) is the number of wells, and monitor the stack absorption strength as a function of the temperature.
The evolution of the linewidth (\(\gamma_{i}\)) and the central frequency (\(\nu_{0}\)) are plotted in Figure 2b and c respectively. We first note a very stable single absorption peak at a frequency around 2.1 THz over the whole temperature range. The linewidth shows stronger temperature dependence, with a record value of 69 GHz at 8 K (3.2% FWHM) that remains below 10% up to 150 K. Despite having several subbands populated at higher temperature, the parameter alpha (Figure 2d) is stable over the whole temperature range, evidence that the number of electrons participating in the absorption remains mostly constant thanks to the harmonicity of the system.
Figure 2: (a) Transmittance measurement of the grown heterostructure as function of the temperature. (b) Extracted linewidth using Voigt fitting formula as function of the temperature. We measured 69 GHz linewidth at 10 K. (c) Central frequency of the transition as function of the temperature. The dashed line is a guide for the eye to show the stability of the transition central frequency (d) Integrated absorption of the transition as function of the temperature.
## III Ultra-strong light-matter coupling in microcavity
We then explore operation in the strong light-matter coupling regime. To this end, the active region is placed within non-dispersive metal-insulator-metal (MIM) ribbon micro-resonators.[33] The details of the micro-fabrication process can be found elsewhere.[34] We have fabricated several samples with different metallic ribbon sizes, which we have measured in reflectance at 300 K. For each sample, the minima of reflectance from the different resonant photonic modes (\(\mathrm{TM_{01}},\mathrm{TM_{03}}\)) were placed on the dispersion plot obtained from a full EM simulation along with the experimentally measured position of the ISB transition. Hence, we could identify the sample placed at the anti-crossing, which allows measuring the Rabi splitting and accurately estimating the Rabi frequency. The lateral size of the metal stripe, \(\Lambda\), is 18 \(\mathrm{\mu m}\), which sets the resonator operating frequency. The gap between each stripe, D, is equal to 4 \(\mathrm{\mu m}\). **Figure 3**a shows the color-map evolution of the reflectance as a function of the temperature. One can clearly see the lift of degeneracy between cavity and ISB transition that occurs already below 250 K, and the two polaritonic branches gaining in contrast as the temperature decreases. To infer the Rabi frequency of the strongly coupled system, we have performed careful EM simulations to fit the measured Rabi splitting, where the ISB transition is described using the aforementioned diagonal dielectric tensor approach. In the quantum well plane, the material is described by the GaAs permittivity with phonons and a Drude model accounting for the free motion of the electrons. Doing so, the only fitting parameter is the sheet doping density (\(\mathrm{n_{2D}}\)) that enters the plasma frequency expression:
\[\omega_{p}=\sqrt{\frac{f_{12}e^{2}n_{2D}}{\varepsilon_{0}\varepsilon_{r}m^{*}L_ {QW}}} \tag{2}\]
Here \(f_{12}\) is the oscillator strength (equal to 0.97 for a PQW) between the first and second subband, \(m^{*}\) is
Figure 3: (a) Experimental reflectance of the ISB polaritonic system as function of the temperature. The dots mark the positions of the reflectance minima (b) Fit (dashed line) of the experimental reflectance at 78 K (full line), using finite element simulations. In inset is a schematic of the micro-ribbon resonator.(c) Simulation of the polaritonic system dispersion, attesting that the system is at the anti-crossing. The dots mark the experimental positions of the polaritonic peaks in the sample having a ribbon size of 18 \(\mathrm{\SIUnitSymbolMicro m}\). (d) Coupling strength parameter (\(\eta=\Omega_{R}/\omega_{0}\)) as function of the temperature. The dashed line marks the limit of the ultra-strong coupling regime.
the GaAs effective mass and \(L_{QW}\) is the length of the period comprising one well and one barrier. Figure 3b shows the experimental fitted reflectance at 10 K, where \(n_{2D}\) is equal to \(4.10^{10}\,\mathrm{cm}^{-2}\), a factor of two below the nominal doping. This discrepancy with the nominal doping value might be due to the depletion of the two QWs close to the metal-semiconductor interface where strong band bending occurs. Based on the fitted permittivity, we ran a full simulation with different metal stripe sizes in order to reconstruct the dispersion and confirm that our microcavity is indeed at the anti-crossing condition (Figure 3c). It is important to note that experimentally, we have access to the Rabi splitting frequency, expressed as \(\Omega_{\mathrm{splitting}}=2\Omega_{R}\sqrt{\Gamma_{\mathrm{opt}}}\). Here, \(\Gamma_{\mathrm{opt}}\) is a dimensionless parameter, which represents the fraction of electromagnetic energy coupled to the out-of-growth-plane component of the electric field and spatially overlapping the active region, as described in Ref.[35]. In turn, isolating the Rabi frequency in the above expression, and using \(\Gamma_{\mathrm{opt}}\)=0.95 (obtained from the finite element method simulations), we estimate that \(\Omega_{R}=0.25\,\mathrm{THz}\). We can extract \(\Omega_{R}\) along with \(\omega_{0}\) for the whole set of temperatures and estimate the dimensionless parameter that gauges the coupling strength \(\eta=\Omega_{R}/\omega_{0}\). Figure 3d demonstrates that the system operates in the ultra-strong coupling regime up to 200 K. Furthermore, we have extracted the linewidth of the polaritonic states, showing \(\gamma_{pol}\) of around 180 GHz and 140 GHz up to 78 K (for lower and upper polaritonic states respectively). This is equivalent to Q factors of 10 and 17, which is about 5 times higher than that of Landau polaritons. Such performance is particularly important in view of non-adiabatic switching experiments since the emission rate of vacuum quanta can be significantly increased by a narrower polariton linewidth, as thoroughly discussed in [36]. We also anticipate that the increase in the transition coherence time shall improve the signal-to-noise ratio within the coherent electro-optic sampling technique, in comparison to the incoherent background radiation inherently present.[19; 20; 21; 22] Another way to gauge the importance of improved polariton linewidths is to use the figure of merit defined in [37], that is \(U=\sqrt{\eta C}\), where \(\eta\) is the coupling strength and C the cooperativity defined as \(C=4\Omega_{R}^{2}/(\kappa\gamma_{i})\), where \(\kappa\) and \(\gamma_{i}\) are the cavity and transition linewidths, respectively. A value of U\(\geq\)1 indicates that the system operates in the USC regime with a high degree of coherence, allowing access to its physics without being hindered by decoherence. For the present work, even though the coupling strength is relatively small, we estimate U equal to 1.7, thanks to the improved transition linewidth. In our future work, we will aim at improving the coupling strength by lowering the transition to 1.1 THz while increasing the doping density by a reasonable factor of 3 (\(\sqrt{3}\) on \(\Omega_{R}\)). This would lead to an increase of \(\eta\) by a factor 3.5 and in turn to U = 6.7 if the ISB transition linewidth is preserved. Such a value would in fact be equivalent to the one reported in the literature for Landau polaritons, attesting that our material system is an alternative to it.
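The back-of-the-envelope relations used above can be checked numerically. In the sketch below, the GaAs permittivity, the effective mass and in particular the cavity linewidth \(\kappa\) are assumed illustrative values, not quantities quoted in this work.

```python
import numpy as np
from scipy.constants import e, epsilon_0, m_e

# Plasma frequency of the PQW stack, Eq. (2); eps_r and m* are assumed GaAs values.
f12, n2d = 0.97, 4e10 * 1e4            # oscillator strength, sheet density in m^-2
eps_r, m_star = 12.9, 0.067 * m_e      # assumed static permittivity / effective mass
L_qw = (130.75 + 20) * 1e-9            # one well + one barrier, in m
omega_p = np.sqrt(f12 * e**2 * n2d / (epsilon_0 * eps_r * m_star * L_qw))
print(f"nu_p ~ {omega_p / (2 * np.pi) / 1e12:.2f} THz")

# Coupling strength eta and figure of merit U = sqrt(eta * C); kappa is an
# illustrative assumption, not a measured cavity linewidth from this work.
Omega_R, omega_0 = 0.25e12, 2.1e12     # quoted Rabi and transition frequencies (Hz)
gamma_i, kappa = 69e9, 0.2e12
eta = Omega_R / omega_0
C = 4 * Omega_R**2 / (kappa * gamma_i)
print(f"eta = {eta:.2f}, U = {np.sqrt(eta * C):.1f}")
```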
## IV Ultra-strong light matter coupling in sub-wavelength resonators
Finally, we have verified the compatibility of this active region with recently developed deeply sub-wavelength microcavities. The idea is to use a cavity that operates in the high-frequency inductor-capacitor (LC) resonant mode and which confines the EM field within a sub-wavelength volume.[38; 39] Such resonators have been successfully used in recent years to demonstrate strong light-matter coupling with a reduced number of electrons [40] and to increase the operating temperature of QWIPs.[41] More recently, in pursuit of non-adiabatic cavity modulation, the implementation of a frequency switching functionality of 280 GHz in less than 200 fs was demonstrated by taking advantage of the circuital properties of the device.[42] **Figure 4a** shows a scanning electron microscope (SEM) picture of the fabricated device. The fabrication details can be found in Ref.[38]. The capacitor section (colorized red) is a cylinder of \(1.5\,\mathrm{\mu m}\) diameter by \(783\,\mathrm{nm}\) height, with an effective volume of \(2.10^{-6}\lambda_{0}^{3}\). Since the resonance frequency scales as \(1/\sqrt{LC}\), we have fabricated several samples with different antenna lengths to reconstruct the polaritonic dispersion (Figure 4b). From the reflectivity minima, we can identify the two polariton branches and fit them with the secular equation[8]
\[\left(\omega^{2}-\omega_{c}^{2}\right)\left(\omega^{2}-\tilde{\omega}_{21}^{2}\right) =\Gamma_{\mathrm{opt}}\,\omega_{p}^{2}\,\omega_{c}^{2} \tag{3}\]
where \(\omega_{c}\) is the numerically calculated cavity resonance, \(\tilde{\omega}_{21}\) is the measured transition frequency, while \(\Gamma_{\mathrm{opt}}\) and \(\omega_{\mathrm{p}}\) were defined above. The dotted lines are placed along the asymptotes and reveal the presence of a polaritonic gap, a key signature of the USC regime. Figure 4c is the reflectance measurement at the anti-crossing point, where we measure a Rabi splitting \(\Omega_{s}=0.37\,\mathrm{THz}\). The optical confinement within the LC resonator is lower than in the MIM cavities, \(\Gamma_{\mathrm{opt}}=0.83\) in this case. We estimate a Rabi frequency of \(0.24\,\mathrm{THz}\) and a coupling strength \(\eta\) of 0.11, confirming that the system operates in the USC regime. Since the electron sheet density is \(\mathrm{n_{2D}=4.10^{10}\,\mathrm{cm}^{-2}}\), we estimate that the USC is achieved with around 3000 electrons per resonator coupled to the quantized EM mode. Even lower numbers could be reached by reducing the size of the device capacitance. For instance, considering a single PQW active region and leveraging the higher resolution offered by electron beam lithography, a resonator with a capacitive section of 500 nm diameter over \(315\,\mathrm{nm}\) height could be fabricated. With such a device, we estimate that the USC could be achieved with only 400 electrons per resonator, a further step toward the study of the USC with few electrons.[43; 44]
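For completeness, the secular equation (3) is quadratic in \(\omega^{2}\) and can be solved directly for the two branches; the sketch below uses illustrative numbers (not the fitted parameters of this work).

```python
import numpy as np

def polariton_branches(omega_c, omega_21, omega_p, gamma_opt):
    """Lower/upper polariton frequencies from the secular equation (3).

    (w^2 - w_c^2)(w^2 - w_21^2) = gamma_opt * w_p^2 * w_c^2, a quadratic in w^2.
    """
    b = omega_c**2 + omega_21**2
    c = omega_c**2 * omega_21**2 - gamma_opt * omega_p**2 * omega_c**2
    disc = np.sqrt(b**2 - 4 * c)
    return np.sqrt((b - disc) / 2), np.sqrt((b + disc) / 2)   # (lower, upper)

# Illustrative dispersion scan (frequencies in THz, values assumed for the sketch).
omega_c = np.linspace(1.0, 3.5, 200)
lp, up = polariton_branches(omega_c, 2.0, 0.5, 0.83)
# The asymptotes of lp and up do not meet: the opening between them is the polaritonic gap.
```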
## V Conclusion
In conclusion, we have used continuously graded parabolic QWs with outstanding optical performance to demonstrate ISB polaritons around \(2\,\mathrm{THz}\). The intrinsic low-frequency bottleneck of square QWs is overcome thanks to the robustness of the harmonic oscillator, allowing the formation of a polariton mode as low as \(1.8\,\mathrm{THz}\) with Q factors up to 17. The system operates in the ultra-strong light-matter coupling regime up to \(200\,\mathrm{K}\). We further demonstrate that the USC regime is maintained in deeply sub-wavelength volumes where around \(3000\) electrons per resonator are involved in the interaction. A further reduction towards hundreds of electrons could be envisioned using state-of-the-art nanofabrication processing. Finally, further optimization of the system cooperativity can be envisioned along with the implementation of an ultrafast switching element as in Ref.[42], which would make such a system a more practical alternative to Landau polaritons in the study of quantum vacuum radiation at long wavelengths.
**Acknowledgements**
This work was supported by the European Union Future and Emerging Technologies (FET) Grant No. 737017 (MIR-BOSE), and from the French National Research Agency, project TERASEL (ANR-18-CE24-0013). This work was partially supported by the French RE-NATECH network, Canada First Research Excellence Fund, and the Natural Sciences and Engineering Research Council of Canada (NSERC). We thank C. Ciuti for granting access to Paris Cite University cleanroom, and P. Filloux and S. Suffit for help.
|
2304.03085
|
Two Transient Quasi-periodic Oscillations in $γ$-Ray Emission from
the Blazar S4 0954+658
|
In this work, we report periodicity search analyses in the gamma-ray light
curve of the blazar S4 0954+658 monitoring undertaken by the Fermi Large Area
Telescope (LAT). Four analytical methods and a tool are adopted to detect any
periodic flux modulation and corresponding significance level, revealing that
(i) a 66 d quasi-periodic oscillation (QPO) with the significance level of $>
5\sigma$ spanning over 600 d from 2015 to 2016 (MJD 57145--57745), resulting in
continuous observation of nine cycles, which is one of the highest cycles
discerned in blazar gamma-ray light curve; (ii) a possible QPO of 210 d at a
moderate significance of $\sim3.5\sigma$ lasted for over 880 d from 2020 to
2022 (MJD 59035--59915), which lasted for four cycles. In addition, we discuss
several physical models to explain the origin of the two transient QPOs and
conclude that a geometrical scenario involving a plasma blob moving helically
inside the jet can explain the time scale of the QPO.
|
Yunlu Gong, Shiting Tian, Liancheng Zhou, Tingfeng Yi, Jun Fang
|
2023-04-06T14:10:24Z
|
http://arxiv.org/abs/2304.03085v1
|
# Two Transient Quasi-periodic Oscillations in \(\gamma\)-Ray Emission from the Blazar S4 0954+658
###### Abstract
In this work, we report periodicity search analyses in the gamma-ray light curve of the blazar S4 0954+658 monitoring undertaken by the Fermi Large Area Telescope (LAT). Four analytical methods and a tool are adopted to detect any periodic flux modulation and corresponding significance level, revealing that (i) a 66 d quasi-periodic oscillation (QPO) with the significance level of \(>5\sigma\) spanning over 600 d from 2015 to 2016 (MJD 57145-57745), resulting in continuous observation of nine cycles, which is one of the highest cycles discerned in blazar gamma-ray light curve; (ii) a possible QPO of 210 d at a moderate significance of \(\sim 3.5\sigma\) lasted for over 880 d from 2020 to 2022 (MJD 59035-59915), which lasted for four cycles. In addition, we discuss several physical models to explain the origin of the two transient QPOs and conclude that a geometrical scenario involving a plasma blob moving helically inside the jet can explain the time scale of the QPO.
galaxies: active - galaxies: individual: S4 0954+658 - quasi-periodic oscillation
## 1 Introduction
It is generally believed that all active galaxies are powered by the accretion of dense ionized gas onto a supermassive black hole (SMBH) with a mass in the range of \(10^{6}-10^{10}M_{\odot}\), and \(\sim 10\%\) of them have relativistic charged-particle jets. Radio-loud Active Galactic Nuclei (AGN), with their jets pointing almost directly along the observer's line of sight (\(\leq 10^{\circ}\)), form a special subclass called blazars (Antonucci, 1993; Urry & Padovani, 1995). Moreover, blazars can be further divided into two subcategories based on the strength of the emission lines in their optical-ultraviolet spectra: BL Lacertae objects (BL Lacs; very weak and narrow emission lines) and flat-spectrum radio quasars (FSRQs; broad and strong emission lines). Blazars usually manifest the most substantial variability over almost the whole electromagnetic spectrum, and their emission, dominated by nonthermal radiation, ranges from radio to \(\gamma\)-rays (Ulrich et al., 1997).
Both ground-based and space-telescope observations show that blazars have flux variability on the order of minutes to years at different electromagnetic wavebands, which may indicate that different physical mechanisms (intrinsic and extrinsic) play a leading role. An interesting phenomenon related to flux variability is the QPO, although flux variability frequently exhibits non-linear, stochastic, and aperiodic characteristics (Kushwaha et al., 2017). So far, a large number of QPO behaviors with different timescales in multifrequency light curves have been reported by researchers using different detection techniques (e.g., Raiteri et al., 2001; Liu et al., 2006; Gupta et al., 2009; Lachowicz et al., 2009; King et al., 2013; Zhang et al., 2014; Graham et al., 2015; Ackermann et al., 2015; Bhatta, 2017; Gupta et al., 2018; Zhou et al., 2018; Sarkar et al., 2020; Zhang et al., 2022; Roy et al., 2022; Gong et al., 2022; Otero-Santos et al., 2023, and references therein). Detections of the QPO phenomenon are usually quite rare and non-persistent for AGNs, but QPOs seem to be relatively common in black hole X-ray binaries (Remillard & McClintock, 2006; Gupta, 2014). So far, more than 30 of the 5064 sources above 4\(\sigma\) significance in the fourth Fermi Gamma-ray LAT catalog of sources (4FGL; Abdollahi et al., 2020; Wang et al., 2022) are reported to show QPO phenomena based on time series data.
Recently, Jorstad et al. (2022) claimed that the \(\gamma\)-ray flux, optical flux and linear polarization of BL Lacertae all exhibit \(\sim\)13 hour QPO variability during a dramatic outburst in 2020. Such a short-term QPO is explained by the current-driven kink instabilities near a recollimation shock \(\sim\)5 parsecs (pc) from the black hole. In the same year, a quasi periodic signal of approximately 420 days with \(>5\sigma\) significance was found in the measurements of the optical linear polarization degree for the blazar PKS 1222+216 and a helical jet model was employed to explain the signal well (Zhang
and Wang 2022). Furthermore, several models have been proposed by different authors to explain periodic radiation of blazars in various frequencies on diverse time-scales, i.e., a hotspot orbiting near the innermost stable circular orbit of the SMBH (Gupta et al., 2009, 2019; Sarkar et al., 2021), the presence of a binary system of SMBH (Valtonen et al., 2008; Ackermann et al., 2015), precession of relativistic jets or helical structure (Graham et al., 2015; Sandrinelli et al., 2016), the existence of quasi-equidistant magnetic islands inside the jet (Huang et al., 2013; Shukla et al., 2018; Roy et al., 2022), and the pulsational accretion flow instabilities (Tavani et al., 2018). Hence, we can analyze the quasi-periodic modulation in the blazar light curve to explore the accretion physics and the connection between accretion disc, jet, and central engine (Kushwaha et al., 2020).
S4 0954+658 (also referred to as QSO B0954+65) is one of the most well-studied blazars with complex variability and is situated at a redshift of \(z=0.3694\pm 0.0011\) (Becerra Gonzalez et al., 2021). Stickel et al. (1991) regard this source as a BL Lac object in view of the small equivalent width of the emission lines of the spectrum. However, this target can also be classified as an FSRQ because the kinematic features of the radio jet belong to class II (Hervet et al., 2016). In 2021, Becerra Gonzalez et al. (2021) detected a MgII emission line, whose equivalent width is close to 5 Angstrom, commonly taken as the limit to classify blazars as FSRQs. Therefore, it seems more reasonable to consider this \(\gamma\)-ray emitter as a transitional object. Wagner et al. (1993) investigated the optical variability of this source for the first time, and Raiteri et al. (1999) then detected fast large-amplitude variations using a 4-yr light curve. Their results indicate that the long-term behavior of the source is not related to spectral variations. Subsequent continuous observations of this blazar show optical flux variations by more than 2.5 magnitudes and a degree of polarization that reached 40% (Papadakis et al., 2004; Hagen-Thorn et al., 2015). Additionally, Gaur et al. (2019) found a positive correlation between the colour index and the magnitude based on simultaneous data in the B and R bands.
In very high-energy (\(\geq\) 100 GeV) \(\gamma\)-rays, MAGIC Collaboration et al. (2018) presented the first detection of the blazar S4 0954+658, which was obtained through monitoring with the Major Atmospheric Gamma Imaging Cherenkov (MAGIC) Telescopes during an exceptional flare (February 2015). In 2021, Raiteri et al. (2021) found a 31.2 day QPO behavior in the optical long-term variability through the observation of the Transiting Exoplanet Survey Satellite (TESS) and the Whole Earth Blazar Telescope (WEBT) Collaboration, in which the rotation of an inhomogeneous helical jet provides a reasonable explanation for this phenomenon. It is worth mentioning that such a month-like transient QPO is also detected in the \(\gamma\)-ray band for PKS 2247-131 (Zhou et al., 2018). More recently, Kishore et al. (2023) report the discovery of several QPOs around 0.6-2.5 days in the optical light curve of the blazar S4 0954+658 with data acquired in six sectors by the TESS.
Here, inspired by the QPO reported in the optical emission, we analyze whether the \(\sim\)14.3 yr of data measured by Fermi-LAT also show QPO phenomena. The paper is structured as follows. In Section 2, we describe the data analysis process for the 0.1-300 GeV energy band. In Section 3, we present the QPO detection algorithms and main results. In Section 4, we summarize our conclusions and explore several models to explain the QPO results.
## 2 Fermi-LAT data analysis
The LAT on board the Fermi observatory continually surveys the entire sky every 90 minutes in the energy range from 20 MeV to \(>\) 300 GeV (Atwood et al., 2009). Based on the observation data of the first 12 years, the incremental version of the 4FGL \(\gamma\)-ray source catalog contains 6658 sources, including more than 100 newly classified blazars (Abdollahi et al., 2022). The blazar S4 0954+658 (named 4FGL J0958.7+6534) was found in the first Fermi Gamma-ray LAT catalog, and has also been detected by various radio, optical and millimeter surveys. In order to build the light curve of this source, we used the standard software package FERMITOOLS and the user-contributed tool make4FGLxml.py.
The data for the blazar S4 0954+658 were taken during the period 2008 August 4 (MET 239557417) to 2022 December 5 (MET 691900553), covering \(\sim\)14.3 years. We chose LAT 0.1-300 GeV Pass 8 (evclass = 128, evtype = 3) events recommended by the Fermi-LAT collaboration from a circular region of interest with a radius of 12\({}^{\circ}\) centred at the source (\(\alpha_{2000.0}=09^{h}58^{m}47.244^{s},\delta_{2000.0}=65^{\circ}33^{\prime}54.8^{\prime\prime}\)). At the same time, we used the screening expression '(DATA_QUAL>0)&&(LAT_CONFIG==1)' to select events within good time intervals and set a zenith angle cut of 90 degrees to suppress the \(\gamma\)-ray contamination from the Earth's limb. An XML model file was generated from the 4FGL catalogue, containing the \(\gamma\)-ray background emission templates 'gll_iem_v07' and 'iso_P8R3_SOURCE_V2_v1.txt' for the Galactic and isotropic extragalactic contributions, respectively. We consider three commonly used spectral models (power-law, log-parabola, and power law with an exponential
Figure 1: The 10-day binned light curve of the blazar S4 0954+658 at \(\gamma\)-ray energies of 0.1-300 GeV obtained from Fermi-LAT. Panel A: the purple shaded region (marked as segment 1) represents the first QPO analysis interval, MJD 57145–57745. The gray shaded region is the epoch MJD 59035–59915 (marked as segment 2) where the QPO analyses were also carried out. Panel B: zoom-in of segment 1, where the red dashed-dotted line indicates the sine fit to the light curve. The orange histogram corresponds to the TS value of each data point. Panel C: same as panel B but for segment 2.
cutoff) for the whole time series. We also test for spectral curvature using \(\Delta TS=TS_{LogPb}-TS_{PL}=343\) (Abdollahi et al., 2020). The results show that the log-parabola (LogPb) model is more suitable for describing the \(\gamma\)-ray emission of the target source. The best-fit spectral parameters were \(\alpha=2.14\pm 0.01\), \(\beta=0.53\pm 0.04\), and \(E_{b}=699.12\pm 29.46\) MeV. In addition, we selected low (MJD 54687-55558 and 55778-56702) and high (MJD 56918-57160 and 59619-59894) states to test the spectral shape of the time series. The results show that the fitting parameters (except \(\beta\)) and the flux variability are close to those of the whole time series.
Based on the best-fitting results mentioned above, we tested light curves with bin sizes of 1-30 days and found that a 10-day binning is the most appropriate choice, as it not only reveals the details of the flux variation, but also ensures that the blazar S4 0954+658 can be detected in almost all bins (TS \(\geqslant\) 9). In addition, the 10-day binned light curve also shows the strongest intensity in the power spectrum calculation compared with other binnings. In the 10-day binned light curve (see Figure 1), the average value and standard deviation are 1.11 and 0.95 \(\times 10^{-7}\) photons cm\({}^{-2}\) s\({}^{-1}\), respectively. Based on a preliminary periodicity detection with the Weighted Wavelet Z-transform (WWZ) method (see Figure 2), we selected panels B (segment 1) and C (segment 2) in Figure 1 as the regions of interest for the QPO variability analysis.
## 3 Periodicity Search for \(\gamma\)-ray Emission
Visually measuring QPO variability in an unevenly sampled light curve is not rigorous enough, but many methods have been proposed to detect periodic components and the corresponding significance levels. Here, we applied four methods to analyse the light curves, i.e., epoch folding, REDFIT, the Lomb-Scargle periodogram (LSP), and the WWZ, as well as light-curve simulations as a tool to determine confidence levels. Although the \(\gamma\)-ray light curve obtained by us is evenly binned, we only consider the data points with TS \(>\) 9, resulting in uneven sampling of the data.
Epoch folding is one of the most popular methods of light curve analysis (Leahy et al., 1983; Davies, 1991). This method is insensitive to the modulation shape of periodic components and to the uneven sampling of time series data, which is different from the traditional discrete Fourier periodogram (Bhatta, 2018). We computed \(\chi^{2}\) values of the \(\gamma\)-ray light curve with a time step of 6 days for trial periods ranging between 6 and 510 days using Equation 1 of Bhatta (2018). The results show that the maximum \(\chi^{2}\) values of 225 and 172 correspond to trial periods of 66 d and 210 d, respectively. In segment 1, we constructed a folded light curve through binned likelihood analysis with a \(\sim\) 66 d period, where phase zero corresponds to MJD 57145 and 10 phase ranges are selected (the upper left panel of Figure 3). Similar to segment 1, the phase zero of segment 2 is set at MJD 59035 to construct the folded light curve with a \(\sim\) 210 d period (the upper left panel of Figure 4). Both results show an obvious variation of the \(\gamma\)-ray flux with phase.
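For concreteness, a minimal sketch of the epoch-folding statistic is given below (an illustrative implementation, not the exact code used in the analysis): the light curve is folded at each trial period, binned in phase, and a \(\chi^{2}\) value quantifying the deviation of the folded profile from a constant flux is computed; the 6-510 d trial grid with a 6 d step is taken from the text, while all function and variable names are assumptions.

```python
import numpy as np

def epoch_folding_chi2(t, flux, flux_err, periods, n_bins=10):
    """Chi-square epoch-folding statistic: a large value means the folded
    profile deviates strongly from a constant flux at that trial period."""
    chi2 = np.zeros(len(periods))
    mean_flux = np.average(flux, weights=1.0 / flux_err**2)
    for k, p in enumerate(periods):
        phase = (t % p) / p                                   # fold at trial period p
        bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
        for i in range(n_bins):
            sel = bins == i
            if sel.sum() < 2:
                continue
            m = flux[sel].mean()
            err = flux[sel].std(ddof=1) / np.sqrt(sel.sum())  # error of the bin mean
            chi2[k] += (m - mean_flux) ** 2 / err**2
    return chi2

# trial periods of 6-510 d in 6 d steps, as adopted in the text
trial_periods = np.arange(6.0, 516.0, 6.0)
# chi2 = epoch_folding_chi2(t_mjd, flux, flux_err, trial_periods)
```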
An additional method, REDFIT, is also used to calculate the bias-corrected power spectrum of the light curve and estimate the significance level of the corresponding dominant period (Schulz and Mudelsee, 2002). This method can calculate the underlying red-noise spectrum by fitting the time series with a first-order autoregressive (AR1) process, which is caused by stochastic processes in the accretion disc or jet of blazars (Fan et al., 2014; Covino et al., 2019). AR models assume that the present emission is connected with the past emission. The theoretical power spectrum of an AR1 model is given as,
\[G_{rr}(f_{j})=G_{0}\frac{1-\theta^{2}}{1-2\theta cos(\pi f_{j}/f_{Nyq})+\theta ^{2}}, \tag{1}\]
where \(G_{0}\) is the average spectral amplitude, \(\theta\) is the average autoregression coefficient, and \(f_{j}\) represents the discrete frequency up to the Nyquist frequency (\(f_{Nyq}\)). We used the REDFIT3.8e program to estimate the power spectrum and the significance level of the corresponding peak based on the LSP in combination with Welch overlapped segment averaging (Welch, 1967). As can be seen from the upper right panel of Figure 3, there is an evident peak around the timescale of 65 \(\pm\) 12 days with a significance level of \(>\)99% in the power spectrum during MJD 57145-57745 (segment 1). The upper right panel of Figure 4 shows that the periodic modulation in MJD 59035-59915 (segment 2) is centered at 210 \(\pm\) 55 days with a significance level of \(\sim\)99%. We take the half-width at half-maximum (HWHM) of the power peak fitted by a Gaussian function as the uncertainty of the periodic modulation signal.
The Lomb-Scargle periodogram (LSP) is one of the most common methods to find periodicities in time series with non-uniform sampling, and it can calculate the power spectrum intensity at different frequencies (Lomb, 1976; Scargle, 1982). This method is the projection of the light curve on sinusoidal functions and constructs a periodogram from the goodness of the weighted \(\chi^{2}\) fit statistic (Ferraz-Mello, 1981). Nevertheless, the aperiodic part of time series data will reduce the goodness of LSP sinusoid fit, which leads to the reduction of transient periodic power. The bottom panel of Fig. 3 shows the power (black solid line) of the LSP for the extracted
segment 1 data. One signal, at a period of 66 \(\pm\) 4.8 days, reached the \(5\sigma\) significance level. Meanwhile, the bottom panel of Fig. 4 also shows the analysis result for segment 2. The analysis revealed a significant signal centred at 208 \(\pm\) 43 days.
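A hedged sketch of such an LSP computation with astropy is shown below; the frequency grid and the function name are illustrative choices and do not come from the original analysis code.

```python
import numpy as np
from astropy.timeseries import LombScargle

def lsp_power(t, flux, flux_err, min_period=12.0, max_period=500.0, n_freq=5000):
    """Generalised Lomb-Scargle periodogram of an unevenly sampled light curve
    (t in days); returns the frequency grid, the power, and the best period."""
    frequency = np.linspace(1.0 / max_period, 1.0 / min_period, n_freq)
    power = LombScargle(t, flux, flux_err).power(frequency)
    return frequency, power, 1.0 / frequency[np.argmax(power)]

# frequency, power, best_period = lsp_power(t_mjd, flux, flux_err)
```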
Further evidence for the two transient QPOs is provided by the WWZ method. The WWZ method, introduced by Foster (1996), can identify localized features in both the time and frequency domains, especially in unequally spaced data, based on three trial functions, i.e., \(\phi_{1}(t)=1(t)\), \(\phi_{2}(t)=\cos[\omega(t-\tau)]\) and \(\phi_{3}(t)=\sin[\omega(t-\tau)]\). The WWZ power intensity can be used to search for a periodic modulation signal with frequency \(\omega\) and time shift \(\tau\) in a statistical manner, and is defined as:
\[WWZ=\frac{(N_{eff}-3)V_{y}}{2(V_{x}-V_{y})}, \tag{2}\]
where \(N_{eff}\) denotes the effective number density of data points contributing to the signal, and \(V_{x}\) and \(V_{y}\) are the weighted variations of the non-uniform data \(x\) and the model function \(y\), respectively. For more details on the definition of these factors, see Li et al. (2021) and references therein. For segment 1, we set the frequency range from 0.005 to 0.08 \(d^{-1}\) with a step size of 0.00005 \(d^{-1}\) in the WWZ analysis, which allows the QPO timescale of the region of interest to be displayed as fully as possible. Furthermore, in order to balance the frequency and time resolution, we set a decay constant of c = 0.001. The colour-scaled WWZ power of the 10-d binned light curve in the time-period plane is presented in the bottom panel of Fig. 3, which shows that the power for the characteristic period centred around 66 days persists throughout the entire observational period. The corresponding time-averaged WWZ power is centred at a period of 66 \(\pm\) 4.7 days, corroborating the LSP result. In segment 2, we adopted a limited frequency range of 0.001-0.03 \(d^{-1}\) in the WWZ analysis, with the same step size and decay constant as for segment 1. As shown in the bottom panel of Fig. 4, the time-averaged WWZ power of the segment 2 light curve also shows a significant peak lasting throughout the activity at 208 \(\pm\) 40 days, similar to the feature in the LSP analysis.
The flux variability of blazars usually shows a frequency-dependent, colored-noise-like behavior, which is very likely to lead to spurious periods in the identification of periodic components of time series data, especially at lower temporal frequencies (Vaughan et al., 2003, 2016; Bhatta et al., 2016; Li et al., 2017). The significance estimation of the REDFIT method is based on the \(\chi^{2}\) distribution of periodogram points about the model, which can avoid underestimating the significance of a power spectral density (PSD) peak. Here, the significance of segments 1 and 2 obtained by using the REDFIT method reveals a \(\geq\) 99% level. Another way to estimate the significance of the LSP and WWZ peaks is to simulate light curves with the same PSD and flux distribution as the original light curve using the Monte Carlo method provided in Emmanoulopoulos et al. (2013). The underlying red-noise PSDs of blazar light curves are often reasonably approximated by a power-law form \(P(f)\propto f^{-\alpha}\), where \(P(f)\) is the power at temporal frequency \(f\) and \(\alpha\) is the spectral slope (Vaughan, 2005). We then generated \(10^{6}\) artificial light curves to estimate the significance level of the LSP and WWZ periodic components. In segment 1, the significance level for the QPO signal was found to be \(>5\sigma\)
Figure 2: Left panel: WWZ map of the S4 0954+658 light curve in MJD 57085-57795. The bright red patch represents a possible QPO in the interval of MJD 57145-57745 (segment 1). Right panel: WWZ map of the blazar S4 0954+658 \(\gamma\)-ray light curve in MJD 58315-59915. The bright red patch represents a possible QPO in the interval of MJD 59035-59915 (segment 2).
(the bottom panel of Fig. 3). In segment 2, the light curve simulations show that the periodic modulation of 210 d has a significance level close to 3.5\(\sigma\) (the bottom panel of Fig. 4). In recent QPO searches, the periodic signals claimed for a large number of blazars are usually reported above the 3\(\sigma\) significance level (Penil et al., 2020; Zhang et al., 2020, 2021). Thus, the QPO signal of segment 2, with \(\sim 3.5\sigma\) significance, is sufficiently important to be reported. These two transient QPO signals may appear again or continue in the future, so it will be interesting to keep monitoring the source at \(\gamma\)-ray frequencies.
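To make the significance-estimation step concrete, the sketch below simulates red-noise light curves with a power-law PSD using the Timmer & Koenig (1995) prescription and compares the observed peak LSP power with the distribution of simulated peaks. This is a simplified stand-in for the Emmanoulopoulos et al. (2013) method used in the analysis, which additionally matches the observed flux distribution; the spectral slope, grid and function names are assumptions for illustration only.

```python
import numpy as np
from astropy.timeseries import LombScargle

def timmer_koenig_lc(n, dt, alpha, rng):
    """One evenly sampled light curve with a power-law PSD P(f) ~ f**-alpha
    (Timmer & Koenig 1995); simpler than Emmanoulopoulos et al. (2013)."""
    freqs = np.fft.rfftfreq(n, d=dt)[1:]
    amp = np.sqrt(0.5 * freqs ** (-alpha))
    spectrum = np.concatenate(([0.0 + 0.0j],
                               (rng.normal(size=freqs.size) +
                                1j * rng.normal(size=freqs.size)) * amp))
    return np.fft.irfft(spectrum, n=n)

def peak_false_alarm(t, flux, flux_err, frequency, alpha=1.0, n_sim=10000, seed=0):
    """Fraction of simulated red-noise light curves whose maximum LSP power
    exceeds the observed maximum (a smaller fraction = higher significance)."""
    rng = np.random.default_rng(seed)
    obs_max = LombScargle(t, flux, flux_err).power(frequency).max()
    dt = np.median(np.diff(t))
    n = int(np.ceil((t[-1] - t[0]) / dt)) + 1
    grid = t[0] + dt * np.arange(n)
    exceed = 0
    for _ in range(n_sim):
        sim = timmer_koenig_lc(n, dt, alpha, rng)
        sim = np.interp(t, grid, sim)                  # resample at observed epochs
        sim = flux.mean() + flux.std() * (sim - sim.mean()) / sim.std()
        if LombScargle(t, sim, flux_err).power(frequency).max() >= obs_max:
            exceed += 1
    return exceed / n_sim
```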
## 4 Conclusions and Discussion
We collected 0.1-300 GeV energy band data of the blazar S4 0954+658 from the Fermi-LAT archive and conducted a temporal analysis in two interesting periods: segment 1 (MJD 57145-57745) and segment 2 (MJD 59035-59915). Four analytical methods (e.g., the
Figure 3: The results of QPO analysis in segment 1 (MJD 57145–57745). Upper-left: The folded \(\gamma\)-ray light curve, which is constructed from binned likelihood analysis of the nine cycles of segment 1. Phase zero corresponds to MJD 57145 and 10 phase ranges are set. For clarity, we show two period cycles. The dashed blue horizontal line represents the mean flux. Upper-right: The periodicity analysis results from REDFIT. The solid black line indicates the bias-corrected power spectrum. The red dashed line represents the theoretical AR(1) spectrum. The blue, green, and purple dashed curves were 90%, 95%, and 99% confidence contours, respectively. Bottom: LSP and WWZ results of the \(\gamma\)-ray time series data. The left sub-panel displays two-dimensional contour map of the WWZ power spectrum and the horizontal red patch indicates a strong QPO signal of \(\sim\)66 days. The right sub-panel shows the time-averaged WWZ (red solid line) as well as the LSP powers (black solid line). The blue, purple, and orange dashed curves were 3\(\sigma\), 4\(\sigma\), and 5\(\sigma\) significance line, respectively. The dominant period of \(\sim\)66 d can be clearly seen crosses the 5\(\sigma\) significance curve.
epoch folding, REDFIT, LSP, and WWZ) and a tool (light curve simulations) are employed to detect the transient QPOs in the 10-d binned light curve, revealing a good consistency between the different methods. For segment 1, our results showed that there was a 66 day QPO above the 5\(\sigma\) significance level during MJD 57145-57745, which lasted for nine cycles. Interestingly, the 66 day periodic modulation, similar to the PKS 2247-131 case, also occurred after an outburst event (2014 December) with multi-wavelength observations (see Fig. 1; Zhou et al., 2018; Gaur et al., 2019). For segment 2, we found a possible QPO of about 210 days with \(\sim 3.5\sigma\) significance in the over 880 day \(\gamma\)-ray light curve. This signal is clearly visible for about four cycles and seems to continue to appear after MJD 59915 (2022 December). It is of interest to keep monitoring the source, checking whether or not the QPO signal of \(\sim\)210 days would appear again. Unfortunately, we cannot verify the authenticity of the two transient QPOs in multi-wavelength light curves due to the lack of good multi-wavelength coverage and data point resolution during the periods concerned. We expect that different telescopes will pay attention to the QPO signals of this source in the future.
A variety of scenarios have been proposed to explain the QPO phenomenon in blazar emission. One of the most interesting features of the accretion flow is the stable twin high-frequency QPOs that often appear with a frequency ratio of 3:2 in the X-ray flux, e.g., in Sgr A\({}^{*}\) and GRO J1655-40 (Abramowicz & Kluzniak, 2001; Torok, 2005). The scale of the two stable QPO peaks indicates that they can originate from some resonant process taking place in the accretion disk's oscillations (Abramowicz et al., 2003; Horak et al., 2009). In the framework of the reso
Figure 4: Same as Fig 3, but for the segment 2 light curve (MJD 59035–59915).
nance model, the frequencies reflect epicyclic motion of perturbed flow lines in the accretion disc, or combinations between these and a fixed perturbation frequency (Rubio-Herrera and Lee, 2005). The scaled similarity between stellar-mass systems and AGNs indicates that resonances are important for AGNs as well. However, no pair of QPOs at that 3:2 ratio has been detected for S4 0954+658: the 66 d and 210 d periods correspond to frequencies of \(1.75\times 10^{-7}\) Hz and \(0.55\times 10^{-7}\) Hz, respectively, far from a 3:2 ratio. Separate but related is the relativistic precession model, which associates three different QPOs to a combination of the fundamental frequencies of particle motion (Motta et al., 2014). While the higher frequency QPOs correspond to the Keplerian frequency of the innermost disk regions, the lower frequency QPOs correspond to the relativistic periastron precession of eccentric orbits, and the Type-C QPOs to the nodal precession (or Lense-Thirring precession) of tilted orbits in the same regions (Stella and Vietri, 1998; Stella et al., 1999). For the Lense-Thirring precession, the period can be expressed as \(\tau_{LT}=0.18a_{s}^{-1}(M/10^{9}M_{\odot})(r/r_{g})^{3}\) days, where \(a_{s}\), \(M\), \(r_{g}\) and \(r\) are the dimensionless spin parameter, the black hole (BH) mass, the gravitational radius and the radial distance of the emission region from the BH, respectively. In such a scenario, taking the spin parameter \(a_{s}=0.9\) and the BH mass \(M=2.3\times 10^{8}M_{\odot}\) (Becerra Gonzalez et al., 2021), the timescales of the two QPOs place the emission region at 10 to 15 \(r_{g}\). Due to warped accretion discs, the QPO phenomenon could also result from jet precession, which however leads to periodic timescales of thousands of years (Liska et al., 2018; Bhatta, 2018; Li et al., 2023). Such a large timescale does not seem to apply to this case.
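As a quick numerical check of the numbers quoted above, the Lense-Thirring relation can be inverted for the emission-region radius; the snippet below assumes the QPO timescales are first corrected to the source frame by the factor \(1+z\) (using the observed 66 d and 210 d directly would give slightly larger radii).

```python
def lense_thirring_radius(tau_days, a_s=0.9, m_bh=2.3e8):
    """Invert tau_LT = 0.18 * a_s**-1 * (M / 1e9 Msun) * (r / r_g)**3 days
    for the radial distance r/r_g of the emission region."""
    return (tau_days * a_s / (0.18 * (m_bh / 1e9))) ** (1.0 / 3.0)

z = 0.3694
for p_obs in (66.0, 210.0):
    p_src = p_obs / (1.0 + z)          # source-frame timescale (assumed correction)
    print(f"P_obs = {p_obs:5.0f} d  ->  r ~ {lense_thirring_radius(p_src):4.1f} r_g")
# ~10 r_g and ~15 r_g, consistent with the 10-15 r_g range quoted above
```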
A binary SMBH system model was proposed to explain the \(\sim\)2 yr periodic fluctuation in the multiwavelength light curve of PG 1553+113 and later applied to interpret the similar fluctuation behavior of other blazars (Ackermann et al., 2015; Sandrinelli et al., 2018; Otero-Santos et al., 2020; Wang et al., 2022). The orbital motion in this model may cause long-term periodic temporal signals, reflected in periodic accretion perturbations, or in jet-precessional and nutational motions (Liska et al., 2018). The observed period \(P_{\rm obs}\) is corrected to the intrinsic orbital period in the host galaxy frame via the relation \(P_{\rm int}=P_{\rm obs}/(1+z)\), where \(z=0.3694\) is the cosmological redshift. Using the period values of 66 d and 210 d, we obtain intrinsic orbital periods of 48 d and 153 d, respectively. Assuming that the mass ratio between the two SMBHs is 0.1 and taking the central black hole mass of \(2.3\times 10^{8}M_{\odot}\) as that of the primary black hole, we substitute the two transient QPO values into the formula given by Fan et al. (2010); the results show a very tight orbit (0.001 pc and 0.002 pc) and a quick merging timescale (95 yr and 2048 yr) in the gravitational-wave-driven regime (Bhatta, 2018). Nevertheless, the two transient QPOs we detected are too short compared to the periodic timescale expected from this model. Moreover, a binary SMBH system should produce a more stable/persistent periodic behaviour, which is not observed.
Another potential explanation for the two transient QPOs is a geometrical model with plasma blobs moving helically down the jet, which has recently been applied in many cases (Zhou et al., 2018; Li et al., 2021; Roy et al., 2022). In this model, a plasma blob (containing higher particle and magnetic energy densities) injected into the jet enhances the emission; as it moves, the blob changes its orientation with respect to the line of sight, which produces a quasi-periodic flux modulation due to the Doppler beaming effect. Plasma blobs moving helically within the jet may be a natural outcome in magnetically dominated jets (Chen and Zhang, 2021). In the helical motion of the blob, the viewing angle \(\theta_{obs}(t)\) of a given emitting region depends on the pitch angle of the helix \(\phi\) and on the angle \(\psi\) of the jet axis with respect to our line of sight according to,
\[\cos\theta_{obs}(t)=\cos\phi\,\cos\psi+\sin\phi\,\sin\psi\,\cos\omega(t), \tag{3}\]
where \(\omega(t)=2\pi t/P_{obs}\) is the variable azimuthal angle and \(P_{obs}\) is the observed period. From \(\theta_{obs}(t)\), and adopting the bulk Lorentz factor \(\Gamma=11.4\) given by Jorstad et al. (2017), we calculate the Doppler factor \(\delta(t)\) with the equation \(\delta(t)=1/[\Gamma(1-\beta\cos\theta_{obs}(t))]\), where \(\beta=v_{jet}/c\). Then, the periodicity in the blob rest frame can be calculated (see Roy et al. (2022) for details). For the case of S4 0954+658, if we assume the parameters used in Jorstad et al. (2017) for the parsec-scale radio jet, the pitch angle \(\phi=1.75^{\circ}\) (assumed to be half of the opening angle), the viewing angle \(\psi=1.5^{\circ}\), and \(P_{obs}=66\) d, the blob traverses a distance of \(D=9c\beta\,P_{\rm rest}\ \cos\phi\ \sin\psi\approx 1.64\) pc down the jet during the nine periods (Zhou et al., 2018). In addition, for \(P_{obs}=210\) d, the blob travels \(\sim\)2.32 pc during the four periods. As the blob is injected into the jet (or dissipates), the periodic modulation tends to become more (or less) noticeable. A limitation of this model is that it can only explain a QPO with an almost constant amplitude. However, the amplitude of the QPO is indeed almost constant both in segment 1 and in segment 2, although it differs between the two segments (see Fig. 1). Hence, it is reasonable that different plasma blobs in this model are invoked to explain the transient properties of the QPOs with different timescales. We expect the 210 day QPO behavior to continue to appear in Fermi-LAT observations. Further
more, we also hope that multi-wavelength campaigns (e.g., TESS and WEBT) will check whether the two transient QPOs also appear at other wavelengths and will help identify the underlying physical mechanism among the different hypotheses.
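To illustrate the geometrical scenario favoured above, the snippet below evaluates Eq. (3) and the corresponding Doppler factor over one observed cycle with the parameter values quoted in the text (\(\Gamma=11.4\), \(\phi=1.75^{\circ}\), \(\psi=1.5^{\circ}\)); the assumed flux scaling \(\propto\delta^{4}\) is only indicative.

```python
import numpy as np

gamma = 11.4                                  # bulk Lorentz factor (Jorstad et al. 2017)
beta = np.sqrt(1.0 - 1.0 / gamma**2)
phi, psi = np.deg2rad(1.75), np.deg2rad(1.5)  # helix pitch angle and jet viewing angle

p_obs = 66.0                                  # observed period in days
omega_t = 2.0 * np.pi * np.linspace(0.0, p_obs, 500) / p_obs

# Eq. (3): viewing angle of the blob, then the Doppler factor delta(t)
cos_theta = np.cos(phi) * np.cos(psi) + np.sin(phi) * np.sin(psi) * np.cos(omega_t)
delta = 1.0 / (gamma * (1.0 - beta * cos_theta))

theta_deg = np.degrees(np.arccos(cos_theta))
print(f"theta_obs varies between {theta_deg.min():.2f} and {theta_deg.max():.2f} deg")
print(f"delta varies between {delta.min():.1f} and {delta.max():.1f}; "
      f"(delta_max/delta_min)^4 ~ {(delta.max() / delta.min())**4:.1f}")
```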
## Acknowledgements
We thank anonymous referee for very helpful suggestions. This research or product makes use of public data provided by Fermi-LAT. JF is partially supported by National Natural Science Foundation of China (NSFC) under grant U2031107, the Joint Foundation of Department of Science and Technology of Yunnan Province and Yunnan University (202201BF070001-020), the grant from Yunnan Province (YNWR-QNBJ-2018-049) and the National Key R&D Program of China under grant (No.2018YFA0404204). Y.L.G. is supported by Yunnan University Graduate Scientific Research Innovation Fund under grant KC-2222975. T.F.Y. is supported by NSFC under grant 11863007.
|
2303.10738
|
MIA-3DCNN: COVID-19 Detection Based on a 3D CNN
|
Early and accurate diagnosis of COVID-19 is essential to control the rapid
spread of the pandemic and mitigate sequelae in the population. Current
diagnostic methods, such as RT-PCR, are effective but require time to provide
results and can quickly overwhelm clinics, requiring individual laboratory
analysis. Automatic detection methods have the potential to significantly
reduce diagnostic time. To this end, learning-based methods using lung imaging
have been explored. Although they require specialized hardware, automatic
evaluation methods can be performed simultaneously, making diagnosis faster.
Convolutional neural networks have been widely used to detect pneumonia caused
by COVID-19 in lung images. This work describes an architecture based on 3D
convolutional neural networks for detecting COVID-19 in computed tomography
images. Despite the challenging scenario present in the dataset, the results
obtained with our architecture demonstrated to be quite promising.
|
Igor Kenzo Ishikawa Oshiro Nakashima, Giovanna Vendramini, Helio Pedrini
|
2023-03-19T18:55:22Z
|
http://arxiv.org/abs/2303.10738v1
|
# MIA-3DCNN: COVID-19 Detection Based on a 3D CNN
###### Abstract
Early and accurate diagnosis of COVID-19 is essential to control the rapid spread of the pandemic and mitigate sequelae in the population. Current diagnostic methods, such as RT-PCR, are effective but require time to provide results and can quickly overwhelm clinics, requiring individual laboratory analysis. Automatic detection methods have the potential to significantly reduce diagnostic time. To this end, learning-based methods using lung imaging have been explored. Although they require specialized hardware, automatic evaluation methods can be performed simultaneously, making diagnosis faster. Convolutional neural networks have been widely used to detect pneumonia caused by COVID-19 in lung images. This work describes an architecture based on 3D convolutional neural networks for detecting COVID-19 in computed tomography images. Despite the challenging scenario present in the dataset, the results obtained with our architecture proved to be quite promising.
COVID-19 Deep Learning Convolutional Neural Networks Lung Images
## 1 Introduction
The first transmissions of a new coronavirus, SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) (Cascella et al., 2022), occurred at the end of 2019, being identified in the region of Wuhan, China, and caused the COVID-19 pandemic during the following years. The symptoms of COVID-19 can range from none to severe. Among the aggravations of the disease is the severe pneumonia that the infection can cause, potentially allowing its detection through lung images of the infected individual.
The 3rd COVID-19 Competition (Kollias et al., 2023, 2022, 2021) is an annual challenge that encourages research in the analysis of medical lung images for the detection of COVID-19. This competition uses the COV19-CT-DB database (Arsenos et al., 2022), containing CT scans of patients with and without COVID-19, collected between September of 2020 and November of 2021.
Each computed tomography scan in this database is a three-dimensional image, represented by slices, and the number of slices per scan varies between 50 and 700, according to specifications given at the time the imaging exam was performed.
The annotation of each slice was performed by four professionals, radiologists and pulmonologists, with great experience in the area, with 98% agreement between the specialists during the annotation of the classes. The dataset was then separated into training, validation and test sets, with only the first two available to participants to be used during network training, and the last one for participants to perform inference and evaluate their methods.
The competition consists of two challenges:
1. **COVID Detection:** Challenge that aims to classify lungs between COVID and non-COVID classes. The dataset for this challenge is unbalanced, having 922 CT scans affected by COVID-19, and 2110 healthy ones,
while in the validation set, these values are 225 and 489, respectively. Figure 1 shows a diagram of the layout of the images made available for this challenge, whereas Figure 2 illustrates some examples of images of the COVID and non-COVID classes.
2. **Severity Classification:** Challenge to classify the involvement of lungs affected by COVID-19. This has the classes _mild_, _moderate_, _severe_ and _critical_. The sampling for each of these groups can be seen in Table 1, and an example image for each of the classes is presented in Figure 3.
2020), the Spatiotemporal Feature Learning Based on Two-Step LSTM and Transformer (Hsu et al., 2022), CMC-COV19D based on contrastive representation learning (CRL) (Hou et al., 2022), and 3D ResNets (Turnbull, 2022).
Still in the medical imaging field, the convolutional recurrent neural network method proposed by Kollias et al. (2018) detects and predicts Parkinson's disease based on medical imaging information.
## 3 Methodology
This section describes the main aspects related to our methodology developed for COVID-19 detection based on a 3D convolutional neural network.
### Data Processing
Due to the size of the dataset and our hardware limitations, the images had to be processed before training. To this end, we resized every 3D CT scan using spline interpolation, reducing all slices to 224\(\times\)224 pixels and resampling every scan to 64 slices.
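A minimal sketch of this preprocessing step with SciPy is shown below; the exact code lives in the project repository referenced in the next subsection, so the function and variable names here are illustrative only.

```python
import numpy as np
from scipy import ndimage

def resize_ct_volume(volume, target_shape=(64, 224, 224), order=3):
    """Resample a CT volume (slices, height, width) to a fixed shape using
    cubic-spline interpolation, as described above."""
    factors = [t / s for t, s in zip(target_shape, volume.shape)]
    return ndimage.zoom(volume, zoom=factors, order=order)

# e.g. a scan with 300 slices of 512x512 pixels -> 64 slices of 224x224 pixels
ct = np.random.rand(300, 512, 512).astype(np.float32)   # placeholder volume
print(resize_ct_volume(ct).shape)                        # (64, 224, 224)
```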
### Data Augmentation
Data augmentation is a process that inflates the dataset by creating more numerous and more diverse data for the network to learn from when only a limited amount of data is at hand. In this way, we can use more data to avoid overfitting. Due to the limited amount of samples, data augmentation techniques were used.
The data augmentation operations used were: additive Gaussian noise, with a mean of 0 and a standard deviation randomly drawn from a uniform distribution ranging from 0 to 20; Gaussian blur, with a mean of 0 and a standard deviation randomly drawn from a uniform distribution ranging from 0 to 2; rotation, with the rotation angle randomly drawn from a uniform distribution ranging from -30 to 30 degrees; flip (vertical and horizontal); cutout, which fills the image with 0 to 4 gray rectangular areas whose height and width are 20% of the dimensions of the image; and gamma contrast, with gamma values randomly drawn from a uniform distribution ranging from 0.5 to 2. The operations additive Gaussian noise, Gaussian blur, flip (vertical and horizontal) and gamma contrast are applied randomly, with a rate of 0.5. Moreover, when selected, the operations are applied in a random order.
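The paper does not name the augmentation library; the pipeline below is a hedged reconstruction of the operations listed above using imgaug, with the stated parameter ranges (the rotation range is assumed to be in degrees).

```python
import imgaug.augmenters as iaa

def maybe(aug):
    """Apply the given augmenter with probability 0.5, as described above."""
    return iaa.Sometimes(0.5, aug)

augmenter = iaa.Sequential([
    maybe(iaa.AdditiveGaussianNoise(loc=0, scale=(0, 20))),
    maybe(iaa.GaussianBlur(sigma=(0, 2.0))),
    iaa.Affine(rotate=(-30, 30)),
    maybe(iaa.Fliplr(1.0)),                   # horizontal flip
    maybe(iaa.Flipud(1.0)),                   # vertical flip
    iaa.Cutout(nb_iterations=(0, 4), size=0.2, fill_mode="constant", cval=0),
    maybe(iaa.GammaContrast((0.5, 2.0))),
], random_order=True)                         # selected operations run in random order

# slices: uint8 array of shape (n_slices, 224, 224), augmented slice-wise
# augmented = augmenter(images=slices)
```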
### Mia-3dcnn
The method adopted to detect COVID-19 in the images is the MIA-3DCNN network, a 3D convolutional neural network.2. Our proposed architecture has two main stages: one composed of 3D convolutional blocks, and one composed of fully connected layers (Figure 3(a)).
Footnote 2: The implementation can be found in the GitHub repository: [https://github.com/igorknz/mia-3dcnn](https://github.com/igorknz/mia-3dcnn)
The 3D convolutional stage has blocks composed of one 3D convolutional layer, a 3D max pooling layer, a batch normalization layer and a dropout layer. The 3D convolutional layers have a kernel size of 3. In addition, they have L2 regularizers for the weights and biases, that were added to increase the generalization capability of the network.
Figure 3: Each of the images corresponds to one of the tomography slices, with varying degrees of lung involvement by COVID-19.
The regularizing factor for the biases was kept at 0.01, while the ones for the weights vary according to Table 2. In addition, the layers use padding, so the output maintains the same dimensions as the input. This is especially important because the images had to be considerably downsized to lower the processing required during training.
The layers also use the rectified linear unit (ReLU) as the activation function and He Normal as the weight initializer, to reduce the convergence time. Then, there is a 3D max pooling layer with a pooling size of \(2\), a batch normalization layer, and a dropout layer with a rate of 0.5. These are the components of the blocks used for feature extraction. Six of them are stacked at the beginning of the network. The differences between them are the number of filters and the regularization factor of the L2 weight regularizers. These different setups are listed in Table 2, with block number 1 being the one at the bottom of the network and block number 6 being the one close to the top.
After this, there is a 3D global average pooling layer to downsample the output of the initial stage. Then, we have a second stage, composed of blocks that have a fully connected layer with a varying number of neurons, according to Table 3, and a dropout layer with a rate of 0.5. Finally, there is a 2-neuron output layer with softmax as the activation function for the classification.
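A compact sketch of this architecture, reconstructed from the description and Tables 2 and 3, is shown below assuming a Keras/TensorFlow implementation (the authoritative code is in the repository referenced above; the ReLU activation of the dense layers is an assumption).

```python
from tensorflow.keras import layers, models, regularizers

def conv_block(x, filters, l2_w, l2_b=0.01):
    """One feature-extraction block: Conv3D + MaxPool3D + BatchNorm + Dropout."""
    x = layers.Conv3D(filters, kernel_size=3, padding="same", activation="relu",
                      kernel_initializer="he_normal",
                      kernel_regularizer=regularizers.l2(l2_w),
                      bias_regularizer=regularizers.l2(l2_b))(x)
    x = layers.MaxPooling3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)
    return layers.Dropout(0.5)(x)

def build_mia_3dcnn(input_shape=(64, 224, 224, 1), n_classes=2):
    inputs = layers.Input(shape=input_shape)
    x = inputs
    # number of filters and L2 weight factors of blocks 1-6 (Table 2)
    for filters, l2_w in [(64, 0.01), (64, 0.01), (128, 0.05),
                          (128, 0.05), (256, 0.05), (256, 0.05)]:
        x = conv_block(x, filters, l2_w)
    x = layers.GlobalAveragePooling3D()(x)
    for units in (1024, 512):            # fully connected blocks (Table 3), ReLU assumed
        x = layers.Dense(units, activation="relu")(x)
        x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_mia_3dcnn()
```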
The final architecture was obtained empirically, after successive experiments. Initially, we decided to use a convolutional neural network due to its great results in finding image patterns in classification tasks. For this specific challenge, we decided to use a 3D network due to the structure of the samples. We started by creating a network of stacked 3D convolutional layers for feature extraction, followed by dense layers for classification, and adapted the network after every experiment by interpreting the metrics obtained.
For the 3rd COVID-19 Competition, we investigated and submitted the MIA-3DCNN network trained both with (version A) and without (version B) data augmentation operations. These respective results can be found in the following section.
During the COVID-19 severity task, another version of MIA-3DCNN (version C) was adopted, using the data augmentation, and is shown in Figure 3(b).
Similarly to the previous one, this architecture has a convolutional stage, composed of four blocks of one 3D convolutional layer and a 3D max pooling layer each. The specification of each convolutional block is given in Table 4.
| **Convolutional block** | **Number of filters** | **L2 weight factor** |
| --- | --- | --- |
| 1 | 64 | 0.05 |
| 2 | 64 | 0.05 |
| 3 | 128 | 0.10 |
| 4 | 256 | 0.10 |

Table 4: Parameters that differ along the convolutional blocks of the network.
| **Convolutional block** | **Number of filters** | **L2 weight factor** |
| --- | --- | --- |
| 1 | 64 | 0.01 |
| 2 | 64 | 0.01 |
| 3 | 128 | 0.05 |
| 4 | 128 | 0.05 |
| 5 | 256 | 0.05 |
| 6 | 256 | 0.05 |

Table 2: Parameters that differ along the convolutional blocks of the network.
| **Fully connected block** | **Number of neurons** |
| --- | --- |
| 1 | 1024 |
| 2 | 512 |

Table 3: Parameters that differ along the fully connected blocks of the network.
Subsequently, MIA-3DCNN has the same classification block as versions A and B, consisting of a 3D global average pooling layer and two fully connected layer blocks (Table 3), with a dropout (rate of 0.5). Finally, we have a 4-neuron output layer with a softmax activation function for the classification into the mild, moderate, severe and critical classes.
### Other Parameters
During the training of the models, we used the macro F1 score on the validation set to verify whether the model was improving. We used a learning rate scheduler to reduce the learning rate by a factor of 0.5 if the macro F1 score did not improve on the validation set for 20 epochs, with an initial learning rate of \(10^{-4}\).
Due to the imbalance in the number of samples in the COVID and non-COVID classes, we used class weights to prevent the network from learning potential biases during the training process.
Moreover, for COVID-19 detection, we used categorical crossentropy as the loss function, Rectified Adam as the optimizer, a batch size of 5, and early stopping with a patience of 80 epochs to reduce the number of iterations required, with the maximum number of epochs set to 500. On the other hand, for COVID-19 severity classification, the SGD optimizer was used with a learning rate of \(10^{-4}\) and a momentum of 0.9, and the early stopping was set with a patience of 50 and a maximum of 1000 epochs.
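A hedged sketch of this training setup for the detection task is given below; TensorFlow Addons is assumed to supply the Rectified Adam optimizer and a macro F1 metric, `model` refers to the network sketched earlier, and the dataset objects are placeholders.

```python
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa                          # assumed source of RAdam / F1
from sklearn.utils.class_weight import compute_class_weight

# class weights against the 2110 non-COVID vs. 922 COVID imbalance (training set)
labels = np.array([0] * 2110 + [1] * 922)
weights = compute_class_weight("balanced", classes=np.unique(labels), y=labels)
class_weight = dict(enumerate(weights))

model.compile(optimizer=tfa.optimizers.RectifiedAdam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=[tfa.metrics.F1Score(num_classes=2, average="macro",
                                           name="macro_f1")])

callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_macro_f1", mode="max",
                                         factor=0.5, patience=20),
    tf.keras.callbacks.EarlyStopping(monitor="val_macro_f1", mode="max",
                                     patience=80, restore_best_weights=True),
]

# model.fit(train_ds, validation_data=val_ds, epochs=500, batch_size=5,
#           class_weight=class_weight, callbacks=callbacks)
```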
## 4 Results
This section describes the results obtained with our 3D convolutional neural network.
Figure 4: Diagrams for the MIA-3DCNN COVID-19 detection (a) and severity (b) network architectures used in the challenge, where we used an input CT scan with \(H=224\), \(W=224\), and \(D=64\).
The final macro F1 scores on the validation set, for both versions submitted for the COVID-19 detection task, are listed in Table 5. As we can observe, version A (with data augmentation) achieved a lower score than version B (without data augmentation). Even though data augmentation can prevent overfitting, it can also make it more challenging for the network to learn patterns, especially in this case, with operations applied randomly to each of the samples.
It is expected that the data in the validation set was drawn from the same distribution as the data in the training set. The data augmentation operations produce images from a distribution that is slightly different from the validation one, which can explain the results obtained.
Regarding the severity classification task, although it is challenging, containing four classes that require a careful annotation process with specialists, our model achieved a considerable result of almost 0.73, as shown in Table 6.
For both challenges, the results surpass the baseline results presented by Kollias et al., 0.74 and 0.38 macro F1 score for COVID-19 Detection and COVID-19 Severity Detection validation sets, respectively.
## 5 Conclusions
This paper describes our 3D convolutional neural network for detecting COVID-19 in CT images from the MIA-COV19D database. Although the dataset presents various challenging scenarios, the results obtained are promising and demonstrate the effectiveness of the proposed architecture, surpassing the baseline results by around 15% for the COVID-19 detection task and by 35% on the COVID-19 severity detection validation set.
## Acknowledgments
The authors would like to thank Semantix Brasil for sponsoring the development of this work, and CENAPAD-SP for providing high performance processing environments.
|
2302.09140
|
Towards Co-operative Congestion Mitigation
|
The effects of traffic congestion are widespread and are an impediment to
everyday life. Piecewise constant driving policies have shown promise in
helping mitigate traffic congestion in simulation environments. However, no
works currently test these policies in situations involving real human users.
Thus, we propose to evaluate these policies through the use of a shared control
framework in a collaborative experiment with the human driver and the driving
policy aiming to co-operatively mitigate congestion. We intend to use the CARLA
simulator alongside the Flow framework to conduct user studies to evaluate the
effect of piecewise constant driving policies. As such, we present our
in-progress work in building our framework and discuss our proposed plan on
evaluating this framework through a human-in-the-loop simulation user study.
|
Aamir Hasan, Neeloy Chakraborty, Cathy Wu, Katherine Driggs-Campbell
|
2023-02-17T21:08:55Z
|
http://arxiv.org/abs/2302.09140v1
|
# Towards Co-operative Congestion Mitigation
###### Abstract
The effects of traffic congestion are widespread and are an impediment to everyday life. Piecewise constant driving policies have shown promise in helping mitigate traffic congestion in simulation environments. However, no works currently test these policies in situations involving real human users. Thus, we propose to evaluate these policies through the use of a shared control framework in a collaborative experiment with the human driver and the driving policy aiming to co-operatively mitigate congestion. We intend to use the CARLA simulator alongside the Flow framework to conduct user studies to evaluate the effect of piecewise constant driving policies. As such, we present our in-progress work in building our framework and discuss our proposed plan on evaluating this framework through a human-in-the-loop simulation user study.
## I Introduction
Traffic congestion has an immense negative impact on society by affecting various facets such as urban mobility, climate change, and the economy [1, 2, 3]. Deploying a few autonomous vehicles on the road in idealized traffic settings has shown promise in eliminating congestion by improving the average speeds of vehicles [4]. However, the robust fully autonomous vehicles (AVs) required by such methods are unlikely to be available in the near future. Fortunately, shared control schemes through advanced driver assistive systems have shown that humans following instructions by smarter algorithms are viable stand-ins until robust autonomous vehicles are on the road [5].
Prior works have shown that a single Reinforcement Learning (RL) controlled autonomous agent can help stabilize traffic flow and stop the formation of traffic waves in the environment [4, 6]. Studies have also shown that simple speed management techniques can be used to improve both safety and emissions [2, 7]. Motivated by these factors and the human-compatible nature of piecewise constant policies, Sridhar and Wu proposed the use of 'Piecewise Constant Policies for Human-Compatible Congestion Mitigation' [8].
Our work is a direct extension to the framework proposed by Sridhar and Wu. In their paper, the authors describe policies that provide periodic "advice" to human drivers to modify their driving behaviour and mitigate congestion [8]. The policies are said to be piecewise constant as the action is expected to be held for \(\Delta\) timesteps in order to facilitate better adoption by human drivers. Though these policies were more robust and showed improvements on extension parameters, they were tested solely in simulation without the involvement of a real human. We would like to extend the simulation framework to include a human-in-the-loop to test the robustness of these policies when combined in a shared control objective.
To this extent, we have developed add-ons to the simulation framework of [8] and propose the use of the CARLA driving simulator to conduct a user study to evaluate piecewise constant policies. In particular, drivers in our shared control experiment are asked to follow the advised action output by the driving policy to reduce overall congestion. In this paper, we present our in-progress work towards this goal and discuss the planned improvements to our experiments.
## II Method
In this section, we describe the different technical components of our shared control framework for co-operative congestion mitigation. Fig. 1 shows an overview of our proposed framework and its different modules.
### _Piecewise Constant Driving Policies_
We now cover a few important features of the congestion mitigation model and policies postulated in [8], but refer the reader to the original paper for more details and proofs. The original setup is based on the Flow framework [4] which is built on top of Simulation of Urban Mobility (SUMO) - a multi-modal, microsimulation package [9]. Flow is a standalone framework for simulating and training RL models for traffic based scenarios.
The simulated environment consists of a single lane circular track with a circumference of 250m with 22 drivers as shown in Fig. 2. We refer to this track as the 'ring' network. This
Fig. 1: A block diagram representing the information flow and the different components of our Human-in-the-Loop (HL) framework.
network approximates an infinite highway as there are no incoming or outgoing vehicles. One driver in this scenario is then replaced with an agent that follows the piecewise constant policies generated by the trained RL model as described in [8]. We refer to this agent as the ego vehicle and all other vehicles in the simulation as the non-ego vehicles. Each non-ego vehicle in the network is modeled according to the Intelligent Driver Model (IDM) to simulate human-like behaviour [10]. The simulation of the non-ego vehicles is handled by the SUMO simulator. Fig. 2 shows the ego vehicle in red and the non-ego vehicle in white in the SUMO simulation generated by Flow.
The observation space of the agent is a vector that records its current speed, the speed and the distance to the preceding vehicle, and the circumference of the circular track. With these observations, the policy learns to output the acceleration action that should be held for \(\Delta\) timesteps in order to maximize the average speed of all the vehicles on the track and mitigate congestion.
**Acceleration vs. Speed actions:** The authors of [8] chose to output an acceleration action rather than a speed action. This choice was made due to the existence of a trivial speed solution in the simple ring network. However, this is not the case with more complex tracks. This design choice and its limitations are further discussed in Sec. III. For the rest of this paper, we refer to the action of the policy simply as the 'advised action'. Any technical aspect discussed below that is associated with either speed or acceleration is updated accordingly for uniformity based on the action type.
The RL model is trained with steps that last for 0.1 seconds for a horizon of 8000 steps with 1200 warmup steps. The TRPO algorithm is used to train the model for 500 iterations. A \(\Delta\) of 1 timestep was used for training as well as evaluation. The authors make an assumption during training that all advice is followed immediately and exactly. This assumption is relaxed during evaluation by using the IDM model to transition between the current action and the advised action. To go one step further, we propose replacing the IDM model with a human driver. The number of timesteps that the driver takes to perceive and follow the advised action could be recorded as an experimental measurement.
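As a minimal sketch of how the piecewise constant advice is produced at run time (illustrative only; the actual policy is the trained RL model from [8]), the wrapper below queries the policy every \(\Delta\) steps and holds the advised acceleration in between:

```python
class PiecewiseConstantAdvisor:
    """Query the trained policy every `delta` simulation steps (0.1 s each)
    and hold the advised acceleration constant in between."""

    def __init__(self, policy, delta=1):
        self.policy = policy        # callable: observation -> advised acceleration
        self.delta = delta
        self._step = 0
        self._advice = 0.0

    def advise(self, observation):
        if self._step % self.delta == 0:
            self._advice = self.policy(observation)
        self._step += 1
        return self._advice

# observation = [ego speed, leader speed, headway, ring circumference]
# advisor = PiecewiseConstantAdvisor(trained_policy, delta=1)
# target_accel = advisor.advise(observation)   # shown to the driver each step
```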
While Flow facilitates the training and evaluation of the RL models, it does not possess an interface for a human driver to test the feasibility of such piecewise constant driving policies. Thus, we propose the use of the CARLA simulator as an interface between the driving policy and a real human.
### _CARLA Simulation_
CARLA is a popular open-source driving simulator developed for autonomous vehicle research [11]. The simulator provides scalability with a server multi-client architecture, a flexible API, and integration for co-simulation with popular traffic simulators such as SUMO.
In order to simulate the ring network, we designed a custom map using RoadRunner [12] and imported the map into CARLA [13]. We used the extensive CARLA python API to build a client program to enable the participants in our study to control the ego vehicle in a world simulating the ring network. We chose to surround the track with an irregular pattern of trees to avoid distracting the users away from the highway ring environment. Fig. 3 shows a birds eye view of the CARLA world with the ego vehicle shown in red and the non-ego vehicles shown in green.
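The snippet below is a hedged sketch of the client-side setup with the CARLA Python API; the map name, vehicle blueprint and spawn index are placeholders, not the study's actual configuration.

```python
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.load_world("ring_network")      # custom RoadRunner map (placeholder name)

blueprint = world.get_blueprint_library().filter("vehicle.tesla.model3")[0]
spawn_point = world.get_map().get_spawn_points()[0]
ego = world.spawn_actor(blueprint, spawn_point)

# Steering-wheel and pedal inputs are mapped to a VehicleControl message each tick,
# while the advised action from the driving policy is drawn on the speedometer HUD.
ego.apply_control(carla.VehicleControl(throttle=0.3, steer=0.0, brake=0.0))
```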
### _Study Procedure and Setup_
In this section, we detail our tentative plan for our proposed user study. We aim to recruit at least 25 participants by advertising the study via email and fliers. Each participant would take part in a 30 minute long session, which is divided into three segments. During the first 7 minutes of the study, the participants are asked to familiarize themselves with the controls and the simulator. Then, they would spend 15 minutes
Fig. 3: A birds-eye view of the ring network in the CARLA simulator.
Fig. 2: A birds-eye view of ring network in SUMO.
driving in the simulated study environments while following the advice from the driving policy. Finally, the participants would spend 5 minutes answering a questionnaire about their driving experience. The participants would be given a short break between each segment.
During the first segment, the participants will be given time to familiarize themselves with the controls for the ego vehicle, _i.e._, the steering wheel, the throttle and brake pedals, and adjust to the user interface. During this initial trial period, the participants will be driving in maps not resembling the testing maps with other non-ego vehicles present.
In the second segment, the participants will be directed to drive in three 5 minute trials on the ring network. They will be asked to follow the action shown on the user interface during each trial. The hyper-parameters, which are discussed later, will be varied across the trials. During each trial, the average speed of all the vehicles will be continually recorded to note the effect on congestion.
Our driving simulator setup is designed to mimic real life driving experiences through the use of the Visaro driving rig and the Logitech G29 Racing Wheel. Fig. 4 depicts a participant driving the ego vehicle using our simulator. To further enhance the simulation experience, a TV setup shows the driver the front view from the vehicle. Fig. 5 shows this front view in more detail. The front view includes an insert of the rear view mirror at the top of the screen to provide a better user experience. The side view mirrors are not included as they would only serve as a source of distraction rather than a utility since the participants are driving on a single lane track.
The speedometer view shown at the bottom of the front view acts as the main user interface between the driving policy and the driver. The speedometer shows the current speed that the user is travelling at visually using a gauge, and also as text for more informed control. The driving policy outputs a singular target value, _i.e._, the advised action that the ego vehicle should take to mitigate congestion. The actual target value is shown on the speedometer as a red line. In practice, it is hard to maintain one singular action value for a duration of time longer than a couple of seconds due to the nature of the task. Therefore, we define an acceptable range of actions around the target value to show the driver a range of possible actions. This range is indicated in green on the speedometer. Fig. 6 shows examples of the speedometer where the target value is 17 mph with an acceptable range of \(\pm\)5 mph. Additionally, when the driver is driving within the acceptable range, the text shown at the center of the speedometer changes color to green to provide more feedback to the user. When the user is driving outside the acceptable range, the text is displayed in white. This change can be observed in the two side-by-side images in Fig. 6.
The extent of the range is a hyper-parameter that we would like to vary in our user study. Other hyper-parameters include the density of non-ego vehicles on the track and the IDM parameters for the non-ego vehicles.
**Evaluation:** Currently, we propose to evaluate the effect of the human-in-the-loop and the performance of the policies by measuring the average velocity of all the vehicles on the road as a proxy for congestion. A higher average velocity of the vehicles implies that all the vehicles in the network are moving smoothly at a relatively constant speed, which indicates that there is less congestion in the network. Likewise, a lower average velocity would indicate that there is likely stop-and-go congestion in the network.
### _Simulation Synchronization_
As discussed in Sec. II-C and highlighted in Fig. 1, the ego vehicle is controlled by the study participant through CARLA, while the non-ego vehicles are controlled by Flow. Hence, the two simulators need to be synchronized so that they share common beliefs about the joint simulation. This synchronization would enable the RL policy model to observe the current state of the ego vehicle and change its advised action. Unfortunately, there is no existing library to synchronize Flow and CARLA and provide a seamless user experience. Fortunately, due to the widespread use of the CARLA and SUMO simulators, libraries are available for co-simulation and synchronization between them [14]. Therefore, we were able to develop add-ons to the Flow framework as well as the CARLA-SUMO co-simulation library to achieve our desired synchronization between Flow and CARLA. Thus, any change in either simulation is reflected in the other.
Fig. 4: A participant using the driving simulator to control the ego vehicle
Fig. 5: The front view from the CARLA driver interface
Fig. 6: The visual feedback when the driver is driving within the advised range, compared to when the driver is driving outside the advised range.
In particular, in the CARLA-SUMO co-simulation, we added the ability for CARLA to take in the output of the driving policies and display it on the driver interface. In doing so, we are also able to spawn and mimic the movement of the non-ego vehicles, which are controlled by Flow, in the CARLA environment.
Similarly, for the Flow framework we developed modules to update the state of the ego vehicle in Flow, as seen in CARLA, such that the non-ego vehicles would be reactive to it. As such, we ensure that the non-ego vehicles do not ignore the ego vehicle, as is often the case with co-simulations, thus improving the overall simulation. To this effect, we added functionalities to import custom maps (maps not created in Flow) and allow for a plug-and-play approach for the environments simulated using Flow.
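The sketch below outlines one step of this synchronization; `flow_env`, `carla_world`, and their methods are hypothetical placeholders standing in for our add-ons and the co-simulation library, not actual API calls.

```python
# Hypothetical outline of a single Flow-CARLA synchronization step.
def synchronization_step(flow_env, carla_world, policy):
    # 1. Read the ego vehicle state as driven by the participant in CARLA.
    ego_state = carla_world.get_ego_state()

    # 2. Push the ego state into Flow so the IDM-controlled non-ego vehicles react to it.
    flow_env.update_ego_vehicle(ego_state)

    # 3. Advance Flow by one step and read back the non-ego vehicle states.
    non_ego_states = flow_env.step()

    # 4. Mirror the non-ego vehicles in CARLA so the driver sees them move.
    carla_world.update_non_ego_vehicles(non_ego_states)

    # 5. Query the driving policy and show the advised action on the driver interface.
    advised_action = policy.act(flow_env.get_observation())
    carla_world.display_advised_speed(advised_action)
```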
## III Limitations and Future Work
In this section, we discuss the current limitations of our work and enumerate areas for improvement.
First, as discussed in Sec. II-A we would like to attempt to make our system more human-readable by experimenting with both acceleration and speed actions. While a vehicle model can directly utilize acceleration commands to control itself on a road, it may be unintuitive for a user to input throttle commands to follow the given acceleration directions effectively. As such, since all real-world vehicles have a visual speedometer telling the user the current speed of the vehicle, we plan to train our policy to also output speed commands that may be more digestible to human drivers where applicable.
Furthermore, we plan to increase stochasticity in the training process of the policy to make the controller more robust at test time. For example, we can introduce noise to the action input to the vehicle model to mimic how humans may not hold a steady throttle command (see the sketch below). Another source of randomness can be the simulated roads themselves. Currently, we only train and test our policy on the ring network - a closed, one-lane infinite highway. We hope to apply the policy to a larger set of maps with more complex curves and merging or exit lanes to evaluate how robust our model is in these challenging scenarios. These proposed modifications can decrease the gap when we perform a sim2real transfer to a real-world autonomous vehicle.
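As a simple example of the first idea, the snippet below perturbs the advised action with Gaussian noise before it is applied to the vehicle model; the noise level is an illustrative placeholder.

```python
import random

# Perturb the policy's action to mimic a human not holding a perfectly steady input.
def noisy_action(advised_action: float, sigma: float = 1.0) -> float:
    return advised_action + random.gauss(0.0, sigma)

print(noisy_action(17.0))
```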
Finally, we would like to address how to make our model adaptive to varying human behaviors. Suppose user \(A\) drives more conservatively than \(B\), or \(A\) has a slower reaction time than \(B\). A co-operative congestion mitigation policy should be able to tailor directions to each user differently to effectively maximize the reward. In such a task, our policy should, (1) learn the driving style of the human driver, and (2) give directions to maximize reward by outputting actions conditioned on the learnt driving style. Thus, inspired by [15], we plan to learn a latent space of human behaviors at train time, identify a behavior at test time in the latent space, and condition output actions on the identified latent behavior. More specifically, we will learn a latent space of varying reaction times to the action at train time. At test time, we will identify the reaction time of the human in the latent space and generate an action conditioned on the identified behavior. We hope to see that the policy can effectively update its action to minimize congestion from a diverse set of human users.
## IV Conclusion
In this short paper, we present our in-progress work in developing a framework for studying the utility of piecewise constant driving policies that aid human drivers towards co-operative congestion mitigation in a simulation setting. We hope to conduct our user studies soon and to validate the results obtained in simulation in a human-in-the-loop shared control setting. As discussed in Sec. III, our eventual goal is to deploy such a framework in a real world experiment to confirm our findings in the simulated environments.
|
2310.06062
|
Detecting Iron Oxidation States in Liquids with the VOXES Bragg
Spectrometer
|
Determining the oxidation states of metals assumes great importance in
various applications because a variation in the oxidation number can
drastically influence the material properties. As an example, this becomes
evident in edible liquids like wine and oil, where a change in the oxidation
states of the contained metals can significantly modify both the overall
quality and taste. To this end, here we present the MITIQO project, which aims
to identify oxidation states of metals in edible liquids utilizing X-ray
emission with Bragg spectroscopy. This is achieved using the VOXES crystal
spectrometer, developed at INFN National Laboratories of Frascati (LNF),
employing mosaic crystal (HAPG) in the Von Hamos configuration. This
combination allow us to work with effective source sizes of up to a few
millimeters and improves the typical low efficiency of Bragg spectroscopy, a
crucial aspect when studying liquids with low metal concentration. Here we
showcase the concept behind MITIQO, for a liquid solution containing oxidized
iron. We performed several high-resolution emission spectra measurements, for
the liquid and for different powdered samples containing oxidized and pure
iron. By looking at the spectral features of the iron's K$\beta$ emission
lineshape, we were able to obtain, for a liquid, a result consistent with the
oxidized iron powders and successfully quantifying the effect of oxidation.
|
Simone Manti, Marco Miliucci, Alessandro Scordo, Roberto Bedogni, Alberto Clozza, Mihail Iliescu, Gabriel Moskal, Kristian Piscicchia, Alessio Porcelli, Diana Sirghi, Florin Sirghi, Catalina Curceanu
|
2023-10-09T18:13:18Z
|
http://arxiv.org/abs/2310.06062v1
|
# Detecting Iron Oxidation States in Liquids with the VOXES Bragg Spectrometer
###### Abstract
Determining the oxidation states of metals assumes great importance in various applications because a variation in the oxidation number can drastically influence the material properties. As an example, this becomes evident in edible liquids like wine and oil, where a change in the oxidation states of the contained metals can significantly modify both the overall quality and taste. To this end, here we present the MITIQO project, which aims to identify oxidation states of metals in edible liquids utilizing X-ray emission with Bragg spectroscopy. This is achieved using the VOXES crystal spectrometer, developed at the INFN National Laboratories of Frascati (LNF), employing a mosaic crystal (HAPG) in the Von Hamos configuration. This combination allows us to work with effective source sizes of up to a few millimeters and improves the typical low efficiency of Bragg spectroscopy, a crucial aspect when studying liquids with low metal concentration. Here we showcase the concept behind MITIQO for a liquid solution containing oxidized iron. We performed several high-resolution emission spectra measurements, for the liquid and for different powdered samples containing oxidized and pure iron. By looking at the spectral features of the iron's K\(\beta\) emission lineshape, we were able to obtain, for a liquid, a result consistent with the oxidized iron powders and to successfully quantify the effect of oxidation.
Bragg Spectroscopy; X-ray; food; VOXES; MITIQO
## 1 Introduction
Bragg spectroscopy (BS) is an experimental technique established in the last century to perform ultra-high precision X-ray measurements [1] in fields such as, for example, physics [2] and astrophysics [3]. In the last three decades, the very low BS efficiency has been improved by the implementation of the mosaic crystals technology [4]. Among mosaic crystals, Highly Oriented Pyrolytic Graphite (HOPG) and Highly Annealed Pyrolytic Graphite (HAPG) [5] offer several advantages with respect to normal crystals. They possess an integral reflectivity one order of magnitude higher, an energy range extending above 10 keV, thanks to a smaller lattice parameter (3.514 Å), as well as the possibility to be shaped in various forms, optimized for specific applications. The Von Hamos (VH) spectrometers [6] provide a further increase in the overall reflectivity, thanks to the sagittal focusing properties of cylindrically bent crystals, opening a wide field of applications [7; 8].
Combining HAPG crystals in the VH configuration with BS, besides increasing the efficiency, makes it possible to achieve an energy resolution (FWHM) below 10 eV [9] for X-ray Fluorescence (XRF) measurements. This exceptional capability allows for the precise differentiation of atomic emission lines, providing valuable insights into the elemental composition and properties of materials [10].
In this context the K\(\beta\) emission line is particularly informative for transition metal elements [11]. This emission process occurs in two steps [12; 13]: the creation of a core hole in the 1s orbital and the subsequent filling of the hole with an electron from the 3p state. In contrast to the K\(\alpha\) line, which arises from emission from the 2p state and is mainly used to indicate the presence of the metal for elemental analysis, the K\(\beta\) line can also reveal valuable information about the chemical environment [14], the spin states [15] and oxidation states [16] of the metal. The K\(\beta\) line exhibits a distinct profile, characterized by a prominent K\(\beta_{1,3}\) peak and a secondary shoulder known as K\(\beta^{\prime}\), which resides approximately 10 eV below the main peak. Notably, the precise position of the K\(\beta^{\prime}\) shoulder is highly sensitive to the oxidation state and chemical environment of the metals under investigation, provided that the energy resolution is sufficiently good. By analyzing the K\(\beta^{\prime}\) component in relation to the main peak, valuable insights can be extracted regarding the specific oxidation state and chemical surroundings of the involved metal species.
In this specific context, a high-resolution Von Hamos X-ray spectrometer using HAPG mosaic crystals has been developed by the VOXES collaboration at INFN (LNF) [17; 18; 19], able to achieve a few eV energy resolution in the soft X-ray range from 2 keV up to tens of keV, also from millimetric sources.
Based on these excellent performances, the MITIQO (Monitoraggio In situ di Tossicità, Indicazione geografica e Qualità di Olio d'oliva, vino e altri liquidi edibili) project aims to conduct high-resolution X-ray spectroscopy of edible liquids by using the VOXES spectrometer to determine the metals' oxidation states.
The identification of metal oxidation states in edible liquids plays a significant role in assessing the quality of wine [20]. Metals serve as electron sources for redox reactions and hence they act as catalysts in the oxidation process [21]. A long-standing problem in winemaking is the browning of wine [22]. Understanding and studying this process is crucial not only for preventing it, but also for finding alternatives to sulphur dioxide, a generally used antioxidant, which may affect the quality of the beverage. One method to assess the oxidative browning effect involves analyzing the oxidation states of metals, such as iron and copper in liquids. These metals act as catalysts for non-enzymatic browning in wine, with multiple pathways related to phenols playing a role [22].
The MITIQO setup could perform non-destructive measurements on wine samples, implementing a quantitative analysis with a precise determination of the metals' oxidation states.
In this work, we introduce the MITIQO concept for a liquid solution with a high concentration of iron, serving as an initial benchmark for liquids with lower metal concentrations, such as edible liquids. This is done by performing X-ray emission measurements of several iron-containing powders with different oxidation states. Our results demonstrate that we can recover the correct oxidation state of the liquid solution, consistent with the results from the oxidized solid samples.
The paper is structured as follows. In Section 2 we describe the experimental setup employed for MITIQO and the preparation of the samples. Section 3 outlines the methodology employed for acquiring and calibrating the spectra of both oxidized and unoxidized samples, together with the spectrum obtained from the liquid sample and the discussion on how oxidized iron samples can be differentiated. Section 4 concludes the paper.
## 2 Setup and Methods
In this section, we outline the primary characteristics of the experimental apparatus. For a more detailed description of the system and its qualifications, we refer to our previous works [17; 18; 19; 23]. Furthermore, we describe the procedures involved in preparing solid samples and how we integrate the liquid sample into the setup.
The experimental setup for MITIQO (Figure (1)) consists of an OXFORD XTF-5011 Tungsten X-ray tube, operating at 20 kV voltage and 500 \(\mu A\) current, which is used to activate the samples. The emitted fluorescence lines are then shaped by means of two motorized slits (S\({}_{1}\) and S\({}_{2}\)) and reflected by the HAPG crystal onto the surface of the MYTHEN2 strip detector, produced by the DECTRIS company. The HAPG crystal possesses a radius of curvature of \(\rho\)=206.7 mm, a declared mosaicity of \(0.1\pm 0.01\), and 100 \(\mu\)m thickness. The active surface of the MYTHEN2 module measures 32x8 mm\({}^{2}\) and is equally divided into 640 strips, each with a depth of 450 \(\mu\)m and 50 \(\mu\)m pitch. The MYTHEN2 module, slits, and crystal holder are integrated into a remote-controlled motorized system to facilitate precise positioning. Finally, a hole on the back side of the source box allows the usage of a laser for the initial alignment of the system.
The iron-containing samples used in this study are metal salts and they were prepared at Jagiellonian University in Krakow by pouring a measured amount of the sample under study onto a 50 mm wide Kapton tape. Subsequently, another layer of tape was placed on top of the prepared sample for sealing. Finally, the sample was placed under a hydraulic press covered with a thin sponge mat. The samples were compressed at 50 kN, which allowed us to obtain uniform, stable, air-free samples. Three different iron powder samples have been prepared: pure iron (>99%) - Fe; iron(II) sulfate hydrate - Fe\({}^{+2}\) (FeSO\({}_{4}\cdot\)7H\({}_{2}\)O) and iron(III) sulfate hydrate - Fe\({}^{+3}\) (Fe\({}_{2}\)(SO\({}_{4}\))\({}_{3}\cdot\)12H\({}_{2}\)O).
To perform the energy calibration, we positioned a cobalt foil on top of the iron powders within the mylar. The foil measures 25 mm x 25 mm and has a thickness of 0.125 mm. As described in the next section, we also utilized a FeCoNi foil (Fe 54%, Ni 29%, Co 17%) of identical dimensions and thickness to the cobalt foil. The liquid sample is a medical solution, with the commercial name FerroFolin, used as a human iron supplement, containing 40 mg of iron (Fe\({}^{+3}\)) in a 15 ml solution, for a resulting concentration of 2666 mg/L. The liquid is enclosed in a plastic frame bag, closed on both sides with a 7 \(\mu\)m thick Kapton foil. The liquid bag (25x80x2.5) mm\({}^{3}\) has a total active area of 15x60 mm\({}^{2}\) to fully cover the effective dimension of the emission source of the target, which was S\({}_{0}^{\prime}\) = 0.8 mm, with S\({}_{0}^{\prime}\) defined as in Ref. [23].
Figure 1: Experimental setup for MITIQO. Overview of the VOXES spectrometer for the MITIQO project at the INFN-LNF [18]. The setup includes the source box with the X-ray tube, the two slits S\({}_{1}\) and S\({}_{2}\), the HAPG crystal and the MYTHEN2 detector. The inset in the corner shows the internal view of the source box where the solid sample or the cell for the liquid are placed.
## 3 Results and Discussion
In what follows, we describe how we obtained the spectra and delineate our approach to detect if iron is oxidized, in both solid and liquid samples.
The first step was the energy calibration of the spectra, since we needed to compare spectra from different samples and with different oxidation states.
Performing energy calibrations with BS can be challenging due to the complex relationship between the spectrum in space and energy [23; 24]. Further difficulties come from the problem under examination. Due to the complex lineshape of the spectrum, as mentioned earlier, only one energy reference, the K\(\beta\) line of pure iron, is available. Iron's K\(\alpha\) lines cannot be used for our calibration, because they are out of the dynamic range of our configuration (\(\sim\) 600 eV).
To address this issue, we placed a sample of pure cobalt over the iron powders in the mylar foil, to include the cobalt's K\(\alpha\) lines in the spectrum. This allows the calibration of the pure iron (Fe\({}^{0}\)@Co) sample. Reference values for the K\(\beta_{1,3}\) line in oxidized iron are not readily available, as mentioned earlier, because their absolute values are sensitive to the particular sample used. We then performed the measurements for Fe\({}^{+2}\) and Fe\({}^{+3}\) with the cobalt foil. We carefully checked that the positions of cobalt lines over the strip of the detector remain unchanged within the uncertainties in the relative spectra (see the first three points of Figure [2]). This justifies the application of the calibration function of Fe\({}^{0}\)@Co also to the oxidized samples (Fe\({}^{+2,+3}\)@Co).
To extract the position of the peaks we performed a fit, in which we approximated the background with a quadratic function. Due to the few eV energy resolution, we used a Voigt profile for all peaks, allowing the Lorentzian smearing factors to vary while fixing the resolution for all the peaks. In the fit, we included two peaks in the K\(\beta\) lineshape: the primary peak K\(\beta_{1,3}\) and the satellite K\(\beta^{\prime}\), 8 to 14 eV below the main one. The active region where the X-rays are emitted from the sample spans an area of a few millimeters (i.e. S\({}_{0}^{\prime}\) = 0.8 mm is the lateral size). This poses a challenge when the cobalt foil needs to be placed on top of the iron sample, which can affect the reproducibility of the obtained spectra. To address this issue, we investigated the possibility of extracting the calibration function from an alloy composed of iron, cobalt and nickel. After measuring the spectrum for the FeCoNi sample, we observed a shift in the emission lines of cobalt as compared to Fe\({}^{0}\)@Co (see Figure 2). This shift can be attributed to the different vertical position of the cobalt foil when placed over the iron sample in the mylar. This conclusion gains further support from the result obtained with the pure cobalt foil, which exhibits emission centers similar to those observed in the FeCoNi sample. In the following, we will explore the implications of this shift on the resulting calibration function; it will define a systematic error on the final result. Conversely, there is no shift in the iron's K\(\beta\) peaks between Fe\({}^{0}\)@Co and FeCoNi, since the iron in both samples has the same vertical position over the active area where the X-rays are emitted.
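For illustration, a minimal sketch of such a fit (quadratic background plus two Voigt peaks with a common, fixed Gaussian resolution) is given below. The synthetic data, the resolution value and all starting parameters are placeholders, not the actual VOXES analysis.

```python
# Sketch of the K-beta peak fit: quadratic background + two Voigt peaks
# (K-beta_1,3 and K-beta') sharing a fixed Gaussian resolution.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

SIGMA_RES = 2.0  # assumed Gaussian resolution (eV), kept fixed in the fit
E0 = 7050.0      # reference energy used to center the background polynomial

def model(E, a0, a1, a2, A1, E1, g1, A2, E2, g2):
    background = a0 + a1 * (E - E0) + a2 * (E - E0) ** 2
    kb13 = A1 * voigt_profile(E - E1, SIGMA_RES, g1)   # main K-beta_1,3 peak
    kbp = A2 * voigt_profile(E - E2, SIGMA_RES, g2)    # K-beta' satellite
    return background + kb13 + kbp

# Placeholder spectrum; real energies and counts come from the calibrated data.
E = np.linspace(7020.0, 7090.0, 200)
counts = model(E, 5, 0, 0, 4000, 7059, 2.5, 1500, 7047, 3.0) + np.random.poisson(5, E.size)

p0 = [5, 0, 0, 3000, 7058, 2, 1000, 7046, 3]
popt, _ = curve_fit(model, E, counts, p0=p0)
delta_E = popt[4] - popt[7]   # K-beta_1,3 minus K-beta' peak position
ratio = popt[3] / popt[6]     # ratio of the two peak amplitudes (areas)
print(f"Delta E = {delta_E:.1f} eV, amplitude ratio = {ratio:.2f}")
```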
The calibration function employed in this study, which relates the position along the detector's strip to the angle of reflection \(\theta\) (and consequently the energy), is:
\[\mathrm{Strip}(\theta)=\frac{\sin\theta(A\cot\theta-B)}{\sin(\alpha-\theta)} \tag{1}\]
This equation is derived from the results presented in [23], with \(A,B,\alpha\) the parameters of the calibration fit. The spectrum over the strip and the corresponding calibration fit for the Fe\({}^{0}\)@Co sample are shown in Figure 3. Reference values were taken from the xraylib library [25], specifically E\({}_{\rm K_{\alpha 1}}\) = 6930.3 eV and E\({}_{\rm K_{\alpha 2}}\) = 6915.3 eV for cobalt and E\({}_{\rm K_{\beta 1,3}}\) = 7058.0 eV for iron. Equation (1) is then inverted to convert the spectrum over the strips into energy and obtain the calibrated spectra. Additionally, we performed the calibration with FeCoNi and then compared the results of both calibrations. This comparison allowed us to identify any systematic errors attributable to variations in the vertical positioning of the cobalt foil in the Fe\({}^{0}\)@Co measurements.
Figure 2: Strip positions of cobalt's K\(\alpha\) lines. Values of the positions of cobalt's K\(\alpha\) lines over the strips for the different iron samples and for the FeCoNi and cobalt foil.
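As an illustration of the strip-to-energy conversion, Eq. (1) can be inverted numerically and combined with the Bragg relation E = n h c / (2d sin θ). In the sketch below, the reflection order, the 2d spacing of the crystal planes and the calibration parameters A, B, α are made-up placeholders, to be replaced by the values obtained from the calibration fit.

```python
# Sketch: numerical inversion of Eq. (1) followed by the Bragg relation.
import numpy as np
from scipy.optimize import brentq

HC_KEV_ANGSTROM = 12.39842   # h*c in keV * Angstrom
TWO_D = 6.708                # assumed 2d spacing of the crystal planes (Angstrom)
N_ORDER = 1                  # assumed reflection order

def strip_of_theta(theta, A, B, alpha):
    """Eq. (1): strip position as a function of the Bragg angle theta (radians)."""
    return np.sin(theta) * (A / np.tan(theta) - B) / np.sin(alpha - theta)

def energy_of_strip(strip, A, B, alpha, theta_lo=np.radians(20), theta_hi=np.radians(60)):
    theta = brentq(lambda t: strip_of_theta(t, A, B, alpha) - strip, theta_lo, theta_hi)
    wavelength = TWO_D * np.sin(theta) / N_ORDER
    return HC_KEV_ANGSTROM / wavelength   # photon energy in keV

# Illustrative call with made-up calibration parameters.
print(energy_of_strip(320.0, A=1800.0, B=1500.0, alpha=np.radians(90)))
```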
Subsequently, we applied both calibration functions to all samples. As an example, the spectra for Fe\({}^{+3}\)@Co and the liquid solution are presented in Figure 4.
The spectrum of Fe\({}^{+3}\)@Co exhibits the well-known signatures due to the change of the oxidation state, with the prominent K\(\beta^{\prime}\) at energies below the K\(\beta_{1,3}\). There is a shift towards higher energies in the K\(\beta_{1,3}\) compared to the reference value for pure iron. It is worth mentioning that the lines associated with cobalt remain unchanged, as expected. In the case of the liquid spectrum, a similar trend is observed, with an increased background noise due to scattering resulting from the presence of the liquid.
To investigate the impact of oxidation, we examine the energies of the K\(\beta\) peaks across different oxidation states. This analysis can be approached by either considering the absolute energy values of the peaks or their energy differences. Absolute differences between K\(\beta_{1,3}\) peaks change by approximately \(\pm\) 1.6 eV [14] with respect to metallic iron, due to the type of bonds present in the surrounding atoms near the metal [26]. However, since for our spectra every strip channel corresponds to 1.77 eV (2.55 eV for the liquid), it is hard to discern the absolute difference in peak positions across different samples. Hence, the most effective approach to assess the effect of oxidation lies in analyzing the relative position and shape of the peaks in the K\(\beta\) spectrum, which exhibit a more observable effect. Thus, we computed the energy difference between the peaks, \(\Delta\)E, and the ratio of the peak amplitudes for K\(\beta_{1,3}\) and K\(\beta^{\prime}\). In Figure 5 the two quantities are shown for the different samples and for the liquid solution. A distinct change of about 4 eV in \(\Delta\)E can be observed as we move from Fe\({}^{+0}\) to Fe\({}^{+2,3}\). It is possible to discern between non-oxidized and oxidized states, but it is difficult to distinguish between individual states in the oxidized samples.
The liquid sample shows a trend similar to the oxidized samples within the error bars, but with larger uncertainties due to a lower iron concentration and lower statistics. All quantities were determined using both the Fe\({}^{0}\)@Co and FeCoNi calibration functions. By comparing values obtained with the different calibrations, we can estimate a systematic error. For instance, in the case of \(\Delta\)E, we identified a 0.5 eV difference on average across all samples, which is significantly smaller than the impact of oxidation, estimated at approximately 4 eV.
Figure 4: **Calibrated spectra.** Calibrated spectra for Fe\({}^{+3}\)@Co and liquid solution, where the Fe\({}^{0}\)@Co calibration is used. The legend utilized in Figure 3 is also employed in all panels and dashed vertical lines represent reference lines for cobalt's K\(\alpha_{1,2}\) and pure iron's K\(\beta_{1,3}\).
Figure 3: **Calibration of Fe\({}^{0}\)@Co.** The uncalibrated spectrum of the Fe\({}^{0}\)@Co sample is displayed in the top panel, while the corresponding calibration function can be found in the bottom panel. The black line represents the experimental spectrum, which is fitted in red, with the contributions of the individual peaks visualized in various distinct colors.
## 4 Conclusions and Outlook
In this paper, we demonstrated the utility of BS as a technique for differentiating oxidation states of metals. We introduced the VOXES setup, a key ingredient of the MITIQO project, which aims to study the oxidation states of metals in liquids.
For this study, we used pure iron as the unoxidized sample and salts of iron and sulfur for the oxidized samples. In addition, we tested our spectrometer with a liquid solution containing oxidized iron. We discussed the energy calibration for studying the iron's K\(\beta\) line by including cobalt. We proposed to extract the calibration function from a FeCoNi alloy, and estimated the systematic errors of this procedure. Finally, our approach to discern oxidation involved analyzing a specific emission line of iron, the K\(\beta\). Two parameters were defined, and we showed that one of them, the difference in energy between the K\(\beta_{1,3}\) and K\(\beta^{\prime}\) peaks, is an effective discriminator to detect if iron is oxidized or not. Furthermore, our results showed that the same indicators for the liquid sample fell within the uncertainties of the oxidized samples, reinforcing the feasibility of applying BS in liquid environments.
As part of future MITIQO plans we aim to detect oxidation in liquid samples with reduced metal concentrations. In edible liquids, such as wine, a metal like iron is typically present at concentrations in the range of mg/L [20]. Therefore, it is necessary to enhance the efficiency of the spectrometer to operate effectively at such reduced concentrations. Another related aspect involves distinguishing the particular oxidation states (e.g. Fe\({}^{+2}\) or Fe\({}^{+3}\)). This would require an even better resolution and greater efficiency for the detectors, achievable using thinner crystals or higher orders of reflection [27]. Such improvements will allow a more detailed analysis of the K\(\beta\) lineshape and enable us to compare not only the relative differences but also the absolute positions of the peaks.
As a result of these improvements, it will become possible to improve the characterization and understanding of the oxidation state of metals in edible liquids such as wines and olive oils.
## 5 Author Contributions
Conceptualization, M.M. and A.S.; methodology, A.C, M.I., G.M., D.S., F.S., M.M., S.M. and A.S.; software, S.M. and A.C.; formal analysis, S.M.; investigation, M.M.; writing--original draft preparation, S.M., K.P., D.S; writing--review and editing, S.M., K.P., A.P; supervision, A.S. and C.C.; project administration, A.S.; funding acquisition, A.S., R.B. and C.C. All authors have read and agreed to the published version of the manuscript.
## 6 Funding
This research was funded by the MITIQO project n. A0375-2020-36647 from regione Lazio. VOXES was supported by the 5th National Scientific Committee of INFN in the framework of the Young Researcher Grant 2015, no. 17367/2015. This project has received funding from the European Union's Horizon 2020 research and innovation programme EU STRONG-2020, under grant agreement No. 824093.
## 7 Data Availability
The data presented in this study are available on request from the corresponding author.
## 8 Acknowledgments
We thank C. Capoccia and G. Fuga from LNF-INFN for their fundamental contribution in designing and building the VOXES spectrometer, and Doris Pristauz-Telsnigg for the support in the preparation of the setup.
Figure 5: **Oxidation effect.** Peaks ratio (top panel) and energy difference \(\Delta\)E (bottom panel) between the peaks of the iron's K\(\beta\) across all samples. These values were computed using the calibration functions of FeCoNi (orange) and Fe\({}^{0}\)@Co (blue).
## 9 Conflicts of Interest
The authors declare no conflict of interest.
|
2306.12929
|
Quantizable Transformers: Removing Outliers by Helping Attention Heads
Do Nothing
|
Transformer models have been widely adopted in various domains over the last
years, and especially large language models have advanced the field of AI
significantly. Due to their size, the capability of these networks has
increased tremendously, but this has come at the cost of a significant increase
in necessary compute. Quantization is one of the most effective ways to reduce
the computational time and memory consumption of neural networks. Many studies
have shown, however, that modern transformer models tend to learn strong
outliers in their activations, making them difficult to quantize. To retain
acceptable performance, the existence of these outliers requires activations to
be in higher bitwidth or the use of different numeric formats, extra
fine-tuning, or other workarounds. We show that strong outliers are related to
very specific behavior of attention heads that try to learn a "no-op" or just a
partial update of the residual. To achieve the exact zeros needed in the
attention matrix for a no-update, the input to the softmax is pushed to be
larger and larger during training, causing outliers in other parts of the
network. Based on these observations, we propose two simple (independent)
modifications to the attention mechanism - clipped softmax and gated attention.
We empirically show that models pre-trained using our methods learn
significantly smaller outliers while maintaining and sometimes even improving
the floating-point task performance. This enables us to quantize transformers
to full INT8 quantization of the activations without any additional effort. We
demonstrate the effectiveness of our methods on both language models (BERT,
OPT) and vision transformers.
|
Yelysei Bondarenko, Markus Nagel, Tijmen Blankevoort
|
2023-06-22T14:39:04Z
|
http://arxiv.org/abs/2306.12929v2
|
# Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing
###### Abstract
Transformer models have been widely adopted in various domains over the last years, and especially large language models have advanced the field of AI significantly. Due to their size, the capability of these networks has increased tremendously, but this has come at the cost of a significant increase in necessary compute. Quantization is one of the most effective ways to reduce the computational time and memory consumption of neural networks. Many studies have shown, however, that modern transformer models tend to learn strong outliers in their activations, making them difficult to quantize. To retain acceptable performance, the existence of these outliers requires activations to be in higher bitwidth or the use of different numeric formats, extra fine-tuning, or other workarounds. We show that strong outliers are related to very specific behavior of attention heads that try to learn a "no-op" or just a partial update of the residual. To achieve the exact zeros needed in the attention matrix for a no-update, the input to the softmax is pushed to be larger and larger during training, causing outliers in other parts of the network. Based on these observations, we propose two simple (independent) modifications to the attention mechanism - _clipped softmax_ and _gated attention_. We empirically show that models pre-trained using our methods learn significantly smaller outliers while maintaining and sometimes even improving the floating-point task performance. This enables us to quantize transformers to full INT8 quantization of the activations without any additional effort. We demonstrate the effectiveness of our methods on both language models (BERT, OPT) and vision transformers.
## 1 Introduction
Quantization has been one of the most impactful ways to reduce the computational complexity of transformer networks. Previous work has shown that quantizing networks to 4-bit weights is possible without losing too much accuracy [63; 66]. Some research even shows 4-bit weights might be optimal when trading off model size and bit-width [12].
However, quantizing transformers is not always trivial. When quantizing the activations of a transformer, significant problems arise with outliers in specific layers. This has been noted by several researchers that suggest fixes to transformers after training to ameliorate their effect [13; 64]. These methods are frequently tedious and either require retraining the network, require implementing specific hardware for input-channel quantization [13] or require parts of the activations to still be in higher bit-widths, reducing the effectiveness of the activation quantization [64].
In this paper, we set out to solve the transformer outlier problem entirely by changing the architecture of the network itself. We hope to make transformers easy to quantize from the get-go without needing any post-processing. To do so, we thoroughly analyze why these outliers appear. Previous work
has found the existence of these outliers [4; 13], but in our work, we come to a fuller understanding of these outlying values. We find that the outliers occur because attention heads are trying not to update the hidden state, and in the process, strong outliers appear due to the softmax function. This happens for language and vision transformers and different specific transformer architectures. This understanding is the foundation for two new tweaks we suggest to transformer architectures that can remove the problem of the outliers entirely.
## 2 Background and related work
In this section, we briefly cover the basics of neural network quantization and discuss why modern transformers are difficult to quantize.
**Quantization** One of the most powerful ways to decrease the computational time and memory consumption of neural networks is quantization, which uses low-bit representations for the weights and activation tensors. On top of that, using low-bit fixed-point representations, such as INT8, one can further reduce energy consumption since the fixed-point operations are more efficient than their floating-point counterparts [22; 56].
We simulate the quantization process in floating-point according to Jacob et al. [25]. We use the following definition of the quantization function:
\[\widehat{\mathbf{x}}:=q\left(\mathbf{x};\,s,z,b\right)=s\cdot\left(\mathrm{ clip}\left(\left\lfloor\frac{\mathbf{x}}{s}\right\rceil+z;0,2^{b}-1\right)-z \right), \tag{1}\]
where \(\mathbf{x}\) denotes the quantizer input (i.e., network weights or activations), \(s\in\mathbb{R}_{+}\) the scale factor or the step-size, \(z\in\mathbb{Z}\) the zero point, and \(b\in\mathbb{N}\) the bitwidth. \(\left\lfloor\cdot\right\rceil\) denotes the round-to-nearest-integer operator. This quantization scheme is called _uniform affine_ or _asymmetric_ quantization [23; 31; 73] and it is one of the most commonly used quantization schemes because it allows for efficient implementation of fixed-point arithmetic. In the case of _symmetric_ quantization, we restrict the quantization grid to be symmetric around \(z=0\).
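For concreteness, a direct implementation of the simulated quantizer in (1) could look as follows (a sketch in PyTorch; it mirrors the definition above rather than any particular library's quantizer):

```python
import torch

# Simulated uniform affine quantization, Eq. (1): scale s, zero point z, bitwidth b.
def quantize(x: torch.Tensor, s: float, z: int, b: int) -> torch.Tensor:
    q = torch.clamp(torch.round(x / s) + z, 0, 2 ** b - 1)  # integer grid with clipping
    return s * (q - z)                                        # map back to the real line

# Example: 8-bit asymmetric quantization of activations in the range [-1, 3].
x = torch.tensor([-1.0, 0.0, 0.5, 2.99])
s = (3.0 - (-1.0)) / (2 ** 8 - 1)   # step size from the min/max range
z = int(round(1.0 / s))             # zero point so that -1.0 maps to grid point 0
print(quantize(x, s, z, 8))
```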
In this work, we focus on _post-training quantization_ (PTQ) methods, which take a pre-trained FP32 network and convert it directly into a fixed-point network without the need for the original training pipeline [2; 5; 7; 24; 31; 34; 39; 41; 42; 72]. These methods require either no data or only a small calibration dataset and are easier to use compared to _quantization-aware training_ (QAT, Bhalgat et al. 3, Esser et al. 16, Gupta et al. 20, Jacob et al. 25, Krishnamoorthi 31) methods that have you train the entire network for more epochs. For more details on neural network quantization, we refer the reader to [18; 43].
**Outliers in Transformers** Multiple studies have shown that modern transformer-based language models tend to learn outliers in weights and activations [4; 13; 30]. These outliers are present only in a small fixed set of embedding dimensions, but they appear regularly and consistently across multiple layers and data sequences. It was also shown that those outliers play a crucial role in the model predictions, and clipping them or setting the corresponding parameters to zero significantly degrades the model task performance [30; 46]. The strongest outliers in magnitude typically appear at the output of the feed-forward network (FFN), although Dettmers et al. [13] showed that for big enough transformer-based language models they start appearing after every linear layer, including query, key, and value projection layers. This phenomenon holds for many tasks, training objectives and models (both encoder and decoder transformers), including BERT [14], RoBERTa [35], DistilBERT [50], MobileBERT [52], ELECTRA [9], BART [32], XLNet [65], GPT-2 [47], and OPT [71].
Because of these strong outliers, applying per-tensor PTQ for the FFN's output and the residual sum will likely cause a notable error because of the following trade-off between the range and the precision. On the one hand, using a large quantization range for small-ranged values leads to a loss in representation (high rounding error). On the other hand, a small quantization range for large values leads to a very high clipping error. For the case of significant transformer outliers, frequently, no good trade-off can be found between the rounding and clipping error, resulting in an overall high error.
There have been numerous attempts to fix the issue of transformer quantization [4; 12; 13; 17; 26; 27; 48; 51; 59; 60; 66; 68]. Most of these approaches resort to finer quantization granularity (row-wise, channel-wise, group-wise weight and activation quantization), use higher bitwidth and/or different
numeric format to represent those outliers better or require extra fine-tuning (in the form of QAT and/or knowledge distillation). In other words, they adapt quantization to work with outliers, which often comes at the expense of general applicability or extra inference overhead.
In contrast, in this work, we want to address the root cause of the problem and understand why outliers are learned in the first place and suggest a new pre-training protocol that significantly reduces the magnitude of outliers yielding way more quantization-friendly models that can be effortlessly quantized using PTQ without strong degradation of performance.
## 3 Outlier analysis
**Outliers in BERT models** In Section 2 we discussed that outliers are present only in a few designated embedding dimensions but they appear regularly and consistently across multiple layers and data sequences. We also discussed that the strongest magnitude outliers in BERT typically appear at the output of FFN in the last encoder layers.
We start by taking the pre-trained _BERT-base-uncased_ checkpoint from HuggingFace [62] and fine-tune it on the MNLI dataset from the well-known GLUE benchmark [58] (see experimental details in C.1). To identify the outlier dimensions, we pass the MNLI-m validation set through the network and record all outliers\({}^{1}\) at the FFN output in layers #10 and #11\({}^{2}\). As we can see in Figure 1, there are indeed only a few hidden dimensions where outliers ever occur. We also notice that the majority of outliers (\(>97\%\)) correlate with the position of delimiter tokens - [SEP], ".", and ",".
Footnote 1: We follow Bondarenko et al. [4] and consider outliers as values that exceed 6 standard deviations from the mean of the corresponding activation tensor.
To better understand the role of those outliers, we analyze the attention patterns of the corresponding attention heads. BERT-base uses multi-head attention with \(n_{\text{heads}}=12\) and each head operating on a consecutive subset of \(d_{\text{head}}=64\) features. Therefore, the hidden dimension #180, which happens to have the highest outlier count in both layers #10 and #11, corresponds to attention head #3. In Figure 2 (and more examples in Appendix A.1) we show examples of the attention matrices, values and their product for that head.
A common pattern we found is that the attention head assigns almost all of its probability mass to [SEP] tokens, and other less informative tokens like dots/commas, while these tokens also have small values in \(\mathbf{V}\) associated with those tokens. This results in a small magnitude product between the two (see Figure 2(a)). This effectively corresponds to a (soft) _no-update_ of the hidden representation, where only small noise is added after the residual. In other cases (Figure 2(b) and 2(c)), we observe that a significant portion of attention probability is still spent on delimiter tokens. However, by allocating some of the probability mass on other tokens (together with the small values for the delimiter tokens), this results in a (soft) _selective_ update of the hidden representation.
These patterns in self-attention seem to be a learned "workaround" for the limitations of having the softmax and the residual connections in cases where the attention head does not want to update the representation of some or all of the tokens. These observations are in line with Clark et al. [8], Kovaleva et al. [29] that also argued that attending exclusively or almost exclusively to delimiter tokens such as [SEP], periods/commas acts as a "no-op" when the attention head's function is not applicable.
Figure 1: Histograms of outlier counts vs. token positions (blue) and hidden dimensions (green), recorded from the MNLI-m validation set on BERT-base. We use zero-based indexing for dimensions.
**Outliers in ViT** We conduct a similar analysis for the Vision transformer [15] trained on ImageNet [49]. For this study, we use a pre-trained checkpoint following our experimental setup from Section 5.
We highlight our findings in Figure 3 and provide more examples in Appendix A.2. Our analysis shows many similarities to the BERT case. Instead of delimiter tokens, the majority of outliers seem to correlate with some random uninformative patches (e.g., in the background). We also see that the corresponding attention head in the next layer allocates the majority of attention probabilities to the same patches. Finally, those outlier patches on average have a distinctly smaller magnitude of values compared to non-outlier ones, leading to similar no-update behavior. The fact that those values are not as close to zero as they were in the BERT case might be related to the smaller model capacity\({}^{3}\), or a relatively shorter training procedure.
Footnote 3: We use ViT/S-16 configuration that has only 22M parameters.
**Hypothesis** Based on these observations, we pose the following hypothesis on how this behavior of attention heads is related to outliers:
1. In order for an attention block to not update a representation of a token on the residual, some attention heads want to allocate most of their attention probability mass to some fixed and common set of tokens that have a low information content (e.g., delimiter tokens or background patches) that can be learned to have a small value function output.
Figure 3: A summary of our outlier analysis for ViT demonstrated on a random image from ImageNet validation set. (a) An input image. (b) Outliers in the output of layer #11. (c) Cumulative attention weight spent on every patch (matrix of attention probabilities summed over rows) in the attention head #1, layer #12. (d) A corresponding matrix of attention probabilities. (e) An average magnitude of values for outlier and non-outlier patches.
Figure 2: Visualization of the patterns in the self-attention, specifically the attention probabilities, values, and their product (left, middle and right columns, respectively), in attention head #3 for BERT-base, computed on several data sequences from MNLI-m validation set.
2. From the definition of the softmax function\({}^{4}\), it is easy to see that this would require an input of the softmax to have a relatively big dynamic range (Figure 4). In fact, in the limit case where softmax is exactly zero, this would require an infinite dynamic range: \[\operatorname{softmax}\left(\mathbf{x}\right)_{i}=0\quad\Leftrightarrow\quad\exists j\neq i,\ \mathbf{x}_{j}-\mathbf{x}_{i}=+\infty\tag{2}\]
3. Since Layer Normalization [1] normalizes the outliers, the magnitude of the FFN output _in the previous layer_ has to be very high to still produce a sufficiently big dynamic range after the LayerNorm. Note that this is also applicable for the transformer models with LayerNorm applied prior to the self-attention or linear transformations instead, a variant adopted by GPT, OPT, and many vision transformers [15; 36; 54; 55]. Footnote 4: \(\operatorname{softmax}\left(\mathbf{x}\right)_{i}=\exp\left(\mathbf{x}_{i}\right)/\sum_{j=1}^{d}\exp\left(\mathbf{x}_{j}\right)\)
4. Finally, as softmax will never output exact zeros, it will always back-propagate a gradient signal to grow bigger outliers\({}^{5}\). The outliers will thus tend to become stronger in magnitude, the longer the network is trained.
Footnote 5: Let \(\mathbf{y}=\operatorname{softmax}\left(\mathbf{x}\right)\). If \(\mathbf{y}_{i}>0\), then \(\frac{\partial\mathbf{y}_{i}}{\partial\mathbf{x}_{j}}\neq 0\ \forall j\).
## 4 Method
In this section, we introduce our proposed modifications for the softmax attention mechanism. Based on our insights from Section 3, the core idea of these modifications is to grant the model the ability to produce a very small in magnitude (or even exactly zero) output of the attention function, without producing outliers.
Recall that the self-attention [57] is defined as follows:
\[\operatorname{Attention}(\mathbf{x}):=\operatorname{softmax}\left(\frac{ \boldsymbol{Q}(\mathbf{x})\boldsymbol{K}(\mathbf{x})^{T}}{\sqrt{d_{\text{ head}}}}\right)\boldsymbol{V}(\mathbf{x}) \tag{3}\]
where \(\boldsymbol{Q}\), \(\boldsymbol{K}\) and \(\boldsymbol{V}\) are learnable linear projections of the input \(\mathbf{x}\). Most modern transformer models employ a multi-headed variant of self-attention, where \(d_{\text{model}}\) features are partitioned into \(n_{\text{heads}}\) groups of \(d_{\text{head}}\) features, and the final output is the concatenation of the outputs of (3) applied to each group.
### Clipped softmax
First, we propose to replace softmax function in (3) with the following clipped softmax:
\[\operatorname{clipped\_softmax}(\mathbf{x};\zeta,\gamma):=\operatorname{clip}\left((\zeta-\gamma)\cdot\operatorname{softmax}(\mathbf{x})+\gamma,0,1\right).\tag{4}\]
Here \(\mathbf{x}\) is the input and \(\zeta\geq 1\), \(\gamma\leq 0\) are the stretch factors which are hyper-parameters of the method. This formulation was proposed before in [38] in the context of binary stochastic gates. We can view (4) as stretching the output of the softmax from \((0,1)\) to \((\gamma,\zeta)\) and then clipping back to \((0,1)\) so that we can represent exact zeros if \(\gamma<0\) and exact ones if \(\zeta>1\). Specifically, the values of the softmax larger than \(\frac{1-\gamma}{\zeta-\gamma}\) are rounded to one whereas values smaller than \(\frac{-\gamma}{\zeta-\gamma}\) are rounded to zero.
With this drop-in replacement, we can achieve exact zeros (and ones) with a finite range for the softmax input. In addition to that, whenever values are clipped they will not give a gradient, preventing the outliers from growing further.
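A minimal PyTorch sketch of (4) is given below; the default \(\gamma\) value is only illustrative (the choice of stretch factors is studied in Section 5.1).

```python
import torch
import torch.nn.functional as F

# Clipped softmax, Eq. (4): stretch the softmax output from (0, 1) to (gamma, zeta)
# and clip back to [0, 1], so exact zeros/ones are reachable with finite inputs.
def clipped_softmax(x: torch.Tensor, zeta: float = 1.0, gamma: float = -0.03,
                    dim: int = -1) -> torch.Tensor:
    return torch.clamp((zeta - gamma) * F.softmax(x, dim=dim) + gamma, 0.0, 1.0)

logits = torch.tensor([4.0, 2.0, -3.0])
print(F.softmax(logits, dim=-1))   # all entries strictly positive
print(clipped_softmax(logits))     # the smallest entry is clipped to exactly zero
```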
Figure 4: A schematic illustration of the attention layer in BERT. Hidden activation tensor is denoted by \(\mathbf{x}\). \(\oplus\) is an element-wise addition. A problematic output of the FFN that generates largest in magnitude outliers is highlighted in red. Notice how those outliers in the _previous layer_ influence the behavior in the attention mechanism in the _next layer_.
### Gated attention
An alternative way of architecting the model to have a small attention output without outliers is to equip it with an explicit conditional gating mechanism, as shown in Figure 5. The idea is that the model can use the gating to either keep or nullify the update to the representation of certain tokens and not rely on the attention probabilities and values to achieve the same outcome.
Specifically, we propose the following modification to the attention function:
\[\mathrm{Gated\_attention}(\mathbf{x}):=\mathrm{sigmoid}\left(\mathbf{G}(\mathbf{x} )\right)\odot\ \mathrm{softmax}\left(\frac{\mathbf{Q}(\mathbf{x})\mathbf{K}(\mathbf{x})^{T}}{\sqrt{d _{\text{head}}}}\right)\mathbf{V}(\mathbf{x}). \tag{5}\]
Here \(\mathbf{G}\) is the gating function, \(\odot\) is an element-wise multiplication across the token axis and everything else remains the same as in (3). The gating function \(\mathbf{G}\) is parameterized by a small neural network that is learned jointly with the rest of the model. We replace the attention formulation with the proposed variant in every layer on the transformer network.
**Gating module design** Recall that the input to the attention layer \(\mathbf{x}\) has shape \((T,d_{\text{model}})\) that is reshaped into \((n_{\text{heads}},T,d_{\text{head}})\) for the multi-headed self-attention, where \(T\) is the sequence length. We chose to define the gating function on a per-head basis. For each head \(i\in\{1,\dots,n_{\text{heads}}\}\), we specify \(\mathbf{G}_{i}:\mathbb{R}^{d_{\text{head}}}\rightarrow\mathbb{R}\) and the output of the gating module is \(\mathbf{\pi}_{i}\in\mathbb{R}^{T}\) that is computed as follows:
\[\mathbf{\hat{\pi}}_{i,t}=\mathbf{G}_{i}(\mathbf{x}_{i,t,\cdot})\ \ \forall t\in\{1,\dots,T\}\tag{6}\] \[\mathbf{\pi}_{i,\cdot}=\mathrm{sigmoid}(\mathbf{\hat{\pi}}_{i,\cdot}),\tag{7}\]
note that gating modules are shared between different token positions but not shared across attention heads.
We want our gating module to be as lightweight as possible. To start with, we experiment with \(\mathbf{G}_{i}\)'s parameterized by a single linear layer. This gives us a gating module that is computationally inexpensive and has a memory overhead of just \(n_{\text{heads}}\cdot(d_{\text{head}}+1)\sim d_{\text{model}}\) extra parameters (which is equivalent to 1 extra token) per attention layer\({}^{6}\). We also investigate the effect of using several other gating functions in Appendix B.1.
Footnote 6: For instance, in case of BERT-base, this amounts to less than 0.009% of the total model size.
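The following PyTorch sketch illustrates (5)-(7) with a single linear map per head acting as \(\mathbf{G}_{i}\). It is a simplified, single-sequence version for illustration (no batching, masking, or dropout), not the exact implementation used in our experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, gate_bias_init: float = 0.0):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One small linear gate per head (Eq. 6): shared over token positions,
        # not shared across heads. The bias sets the initial gate openness.
        self.gate_w = nn.Parameter(torch.randn(n_heads, self.d_head) * 0.02)
        self.gate_b = nn.Parameter(torch.full((n_heads,), float(gate_bias_init)))

    def forward(self, x):                                     # x: (T, d_model)
        T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        def split(t):                                          # -> (n_heads, T, d_head)
            return t.reshape(T, self.n_heads, self.d_head).transpose(0, 1)
        q, k, v = split(q), split(k), split(v)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        heads = attn @ v                                       # (n_heads, T, d_head)
        # pi_{i,t} = sigmoid(G_i(x_{i,t})), Eqs. (6)-(7), applied per token and head.
        pi = torch.sigmoid(torch.einsum("htd,hd->ht", split(x), self.gate_w)
                           + self.gate_b[:, None])
        heads = pi.unsqueeze(-1) * heads
        return self.out(heads.transpose(0, 1).reshape(T, -1))

# Example usage: sigmoid(1.0) ~ 0.73, i.e. gates start mostly open.
layer = GatedSelfAttention(d_model=64, n_heads=4, gate_bias_init=1.0)
print(layer(torch.randn(10, 64)).shape)                        # torch.Size([10, 64])
```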
## 5 Experiments
In this section, we evaluate the proposed modifications to self-attention on several language models (BERT, OPT) and the vision transformers (ViT). We first test the different hyperparameters for the methods and provide insight into how they work. Then we set out to test our method in terms of accuracy, and the difference in quantization improvement after training. All detailed hyperparameters of our experiments are in Appendix C.
**BERT** We experiment with BERT-base-uncased (109M parameters) pre-training using the masked language modeling (MLM) objective. Following [14], we use the concatenation of the training sets of BookCorpus [74] and English Wikipedia\({}^{7}\). We implement our methods in PyTorch [45] and use training and evaluation pipelines from HuggingFace libraries [19; 33; 62]. We follow closely the pre-training procedure from [14]. To speed up training and experimentation, we train with a maximum sequence length of \(128\) for the whole duration of the training. We evaluate on Wikipedia validation set and report the MLM perplexity.
Footnote 7: Specifically, we use the English subset of Wiki-40b, [https://huggingface.co/datasets/wiki40b](https://huggingface.co/datasets/wiki40b), that contains cleaned-up text of English Wikipedia and training/validation splits.
**OPT** We experiment with a 125M-parameter variant of OPT [71] pre-training using the causal language modeling (CLM) objective. Due to compute constraints, we train the model on the same dataset that was used for BERT pre-training (BookCorpus + Wikipedia) with a maximum sequence length of \(512\) and batch size of \(192\). Similar to our BERT experiments, we use training and evaluation pipelines from HuggingFace libraries. We evaluate on Wikipedia validation set and report the CLM perplexity.
Figure 5: A schematic illustration of our proposed gated attention.
**ViT** Finally, we explore the effectiveness of the proposed techniques on the vision transformer [15] (_ViT-S/16_ configuration, 22M parameters) trained on ImageNet-1K [11; 49]. For these experiments, we adopt the training and validation pipelines from the PyTorch Image models library [61]. We report top-1 accuracy on the validation set of ImageNet.
**Quantization setup** In all experiments, after the model is trained, we apply 8-bit PTQ. We use uniform affine quantization - symmetric weights, asymmetric activations - with the static activation range setting, as discussed in Section 2. We quantize all weights and activations (both input and output), except the final linear layer for BERT and OPT models. We explore several choices of range estimation (see Appendix C.4) and report the best configuration for each experiment, based on the model performance. We repeat each PTQ experiment 3 times with different random seeds\({}^{8}\) and report mean and standard deviation for accuracy/perplexity.
Footnote 8: Different random subsets of training data are used for quantizer range estimation.
We train each network two times with different random seeds and report mean and standard deviation. To assess the amount of outliers in the trained model, we use two metrics: the maximum \(\|\mathbf{x}\|_{\infty}\) averaged across the validation set, and _kurtosis_ of \(\mathbf{x}\) averaged across all layers, where \(\mathbf{x}\) is the output of an attention layer. These metrics have been shown to correlate well with the model quantizability [4; 6].
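For reference, the two metrics can be computed as in the sketch below (kurtosis here is the plain fourth standardized moment; the tensors are placeholders for actual attention-layer outputs):

```python
import torch

def infinity_norm(x: torch.Tensor) -> float:
    return x.abs().max().item()

def kurtosis(x: torch.Tensor) -> float:
    x = x.flatten().float()
    centered = x - x.mean()
    return (centered.pow(4).mean() / centered.pow(2).mean() ** 2).item()

# Toy attention-layer output with one injected outlier in a single hidden dimension.
acts = torch.randn(128, 768)
acts[0, 180] = 700.0
print(infinity_norm(acts), kurtosis(acts))
```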
### The impact of clipped softmax hyperparameters (\(\gamma\) and \(\zeta\))
We investigate the effect of different values of the clipped softmax stretch parameters and present the results in Table 1. We can see that most of the improvement happens when we use \(\gamma<0\) (clipping at zero). For instance, using the value of \(\gamma=-0.03\) leads to a significantly smaller infinity norm, kurtosis, and quantized model perplexity, compared to the baseline. It is also clear that in the limit \(|\gamma|\to 0\) we approach the vanilla softmax attention. Using \(\zeta>1\) (clipping at one) yields similar results to the vanilla softmax. Finally, when we combine both \(\gamma<0\) and \(\zeta>1\), the results are similar to just clipping at 0. We therefore conclude that, for dampening outliers, only the lower-range clipping, which allows exact zeros, matters. Going forward we use only \(\gamma<0\) and in Appendix B.5 we confirm that \(\zeta>1\) is not required for ViT.
These observations are in line with our hypothesis that by giving the model the mechanism for representing exact zeros in the attention, we don't need to learn the strong outliers.
### Clipped softmax \(\gamma\) vs. sequence length
As having an extra hyper-parameter that needs to be tuned per model or setup is generally not desirable, we study the sensitivity of the stretch factor \(\gamma\) and its relation with the sequence length \(T\). Recall that the matrix of attention probabilities \(\mathbf{P}\) has dimensions \(T\times T\) and each row sums up to one. Because of that, the average value in \(\mathbf{P}\) is \(1/T\). It is reasonable to assume that if we define \(\gamma:=-\frac{\alpha}{T}\), where \(\alpha>0\) is a new hyperparameter, there might be a set or a range of values of \(\alpha\) that works well across different sequence lengths.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(\gamma\) & \(\zeta\) & FP16 ppl\(\downarrow\) & Max inf. norm & Avg. kurtosis & W8A8 ppl\(\downarrow\) \\ \hline \(0\) & \(1\) & \(4.49^{\pm 0.01}\) & \(735^{\pm 55}\) & \(3076^{\pm 262}\) & \(1294^{\pm 1046}\) \\ \hline \(0\) & \(1.003\) & \(4.48^{\pm 0.01}\) & \(715^{\pm 335}\) & \(2159^{\pm 238}\) & \(451^{\pm 57}\) \\ \(0\) & \(1.03\) & \(4.49^{\pm 0.00}\) & \(741^{\pm 66}\) & \(1707^{\pm 1249}\) & \(1469^{\pm 646}\) \\ \hline \(-0.003\) & \(1\) & \(4.46^{\pm 0.00}\) & \(688^{\pm 64}\) & \(2149^{\pm 110}\) & \(636^{\pm 566}\) \\ \(-0.03\) & \(1\) & \(4.41^{\pm 0.01}\) & \(20^{\pm 1}\) & \(80^{\pm 6}\) & \(4.55^{\pm 0.01}\) \\ \hline \(-0.003\) & \(1.003\) & \(4.47^{\pm 0.00}\) & \(683^{\pm 23}\) & \(2494^{\pm 1205}\) & \(268^{\pm 120}\) \\ \(-0.03\) & \(1.03\) & \(4.43^{\pm 0.03}\) & \(22^{\pm 3}\) & \(73^{\pm 8}\) & \(4.56^{\pm 0.05}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The impact of clipped softmax hyperparameters on BERT-base.
To study this, we train a 6-layer variant of BERT-base (BERT-6L) for 500000 steps on WikiText-103 [40] with a batch size of 128 with several values of maximum sequence lengths \(T\in\{32,64,128,192,256\}\) and values of \(\alpha\in\{1/4,1/2,1,2,4,8\}\). As we can see from Figure 6, using a clipped softmax with \(\alpha\in[2,4]\) significantly dampens the magnitude of outliers while maintaining good FP16 perplexity across all explored sequence lengths.
### The impact of bias initialization in Gated attention
In all our gated attention experiments, we randomly initialize the weights of \(\mathbf{G}\), following [21]. By initializing the _bias_ to a specific value, however, we can set gates to be more _open_ or more _closed_ initially. More open at the start means we initialize closer to the original network, but given the exponential nature of the gate it might take many iterations for the gate to learn to close. Similarly, if the gates are all closed at the start, we deviate too far from the original model training, causing a potential decrease in performance. Assuming Linear \(\mathbf{G}_{i}\)'s with small initial weights, if we set the bias to the value of \(b_{\text{init}}\), then \(\mathbf{G}_{i}(\cdot)\approx b_{\text{init}}\) and \(\mathbf{\pi}_{i}(\cdot)=\operatorname{sigmoid}(\mathbf{G}_{i}(\cdot))\approx \operatorname{sigmoid}(b_{\text{init}})=:\pi_{\text{init}}\), at the start of training.
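The relation \(\pi_{\text{init}}=\operatorname{sigmoid}(b_{\text{init}})\) can be made concrete with a small sketch of a linear gating module whose bias is initialized to the logit of a desired \(\pi_{\text{init}}\); the per-head scalar gate and all names here are illustrative assumptions, not the exact implementation.

```python
import math
import torch
import torch.nn as nn

class LinearGate(nn.Module):
    """Per-head scalar gate pi_i(x) = sigmoid(G_i(x)) with G_i linear.

    Weights are initialized small, so at the start of training
    pi_i(x) ~ sigmoid(b_init) = pi_init.
    """
    def __init__(self, d_model: int, n_heads: int, pi_init: float = 0.25):
        super().__init__()
        self.proj = nn.Linear(d_model, n_heads)
        nn.init.normal_(self.proj.weight, std=1e-3)          # small random weights
        b_init = math.log(pi_init / (1.0 - pi_init))          # inverse sigmoid (logit)
        nn.init.constant_(self.proj.bias, b_init)

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: (batch, T, d_model)
        return torch.sigmoid(self.proj(x))                    # (batch, T, n_heads)

gate = LinearGate(d_model=768, n_heads=12, pi_init=0.25)
print(gate(torch.randn(1, 4, 768)).mean())                    # close to 0.25 at init
```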
We study the effect of different values of \(b_{\text{init}}\) for Linear gated attention on BERT-6L and ViT. We set the bias for all \(\mathbf{G}_{i}\)'s to the same value of \(b_{\text{init}}\). For BERT-6L, we use the same setup as in Section 5.2, with a fixed sequence length of 128. For ViT, we use the main setup, except we train it for 150 epochs instead of 300.
In Figure 7 we see in both BERT and ViT cases that using bias with very high \(\pi_{\text{init}}\) generally performs similarly to the vanilla attention (comparable floating-point performance but strong outliers and poor quantized performance) while setting bias to have very low \(\pi_{\text{init}}\) dampens outliers quite well but leads to strong degradation in the floating-point and quantized performance. The reasonable ranges of
Figure 6: The performance of clipped softmax using \(\gamma=-\alpha/T\) parameterization on BERT-6L. (a) Relative (compared to vanilla softmax pre-training) FP16 log-perplexity \(\uparrow\) on Wikitext validation set. (b) Maximum infinity norm of the attention layer output (note the logarithmic y-axis).
Figure 7: The performance of Linear gated attention using different bias initialization settings.
\(\pi_{\text{init}}\) seem to be around \([0.25,0.9]\) for BERT and \([0.1,0.5]\) for ViT. The wide range indicates the relative robustness of our method to this hyperparameter.
### Main results
We summarize our main set of results in Table 2. As we can see, in almost all cases, both of our proposed techniques dampen the outliers' magnitude to a great extent, reduce the kurtosis, and yield models with significantly higher quantized performance, which is close to the original FP16/32 performance. In addition to that, for each model, at least one of our methods also improves the floating-point task performance. We hypothesize that this is because our changes make it easier for the network to learn the "no-op" updates. However, we are cautious about the improved performance as this is not consistent across all hyper-parameters and it is unclear if it generalizes to more architectures and larger models.
The only case where our method failed to perform well was the clipped softmax applied to OPT. At the moment, we do not have an explanation of why this is the case and leave it for future work. We list selected hyper-parameters and show extended results in Appendix B.
## 6 Discussion
"No-op" behaviorIt is interesting to note that the identified "no-op" behavior is likely not limited to transformers and that convolutional architectures likely learn something similar. We also see that despite the network trying to learn a full "no-op", still a small amount of noise is added to each residual, which may constitute a form of network regularization. Investigating this further might give us a clue as to why neural networks generalize despite being significantly overparametrized if many parameters are rendered unused by not updating the representation in later layers [69].
**Limitations.** We have not studied the effect of our method on large-scale transformers, as it would require training very expensive models from scratch. Given the fundamental understanding of the issue underlying our solutions, we expect the same effect on large-scale models. We show a very small improvement in FP16/FP32 performance due to our methods, but we do not deem our results exhaustive enough to claim that this will hold in general. Lastly, our methods each have a hyperparameter; although we show that both methods are relatively robust to the choice of this hyperparameter, having one at all is never ideal.
**Impact.** As our methods help transformers to be more efficient, we expect only positive outcomes of our work. Making neural networks more efficient will help with their high power consumption at inference. It further helps to move inference from the cloud to edge devices which can overcome potential privacy concerns. We cannot fathom any negative impact from our work that is not severely construed.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Model & Method & FP16/32 & Max inf. norm & Avg. kurtosis & W8A8 \\ \hline \multirow{2}{*}{\begin{tabular}{l} BERT \\ (ppl.\(\downarrow\)) \\ \end{tabular} } & Vanilla & \(4.49^{\pm 0.01}\) & \(735^{\pm 55}\) & \(3076^{\pm 262}\) & \(1294^{\pm 1046}\) \\ & Clipped softmax & \(\mathbf{4.39^{\pm 0.00}}\) & \(\mathbf{21.5^{\pm 1.5}}\) & \(\mathbf{80^{\pm 6}}\) & \(\mathbf{4.52^{\pm 0.01}}\) \\ & Gated attention & \(4.45^{\pm 0.03}\) & \(39.2^{\pm 26.0}\) & \(201^{\pm 181}\) & \(4.65^{\pm 0.04}\) \\ \hline \multirow{2}{*}{\begin{tabular}{l} OPT \\ (ppl.\(\downarrow\)) \\ \end{tabular} } & Vanilla & \(15.84^{\pm 0.05}\) & \(340^{\pm 47}\) & \(1778^{\pm 444}\) & \(21.18^{\pm 1.89}\) \\ & Clipped softmax & \(16.29^{\pm 0.07}\) & \(63.2^{\pm 8.8}\) & \(19728^{\pm 7480}\) & \(37.20^{\pm 2.40}\) \\ & Gated attention & \(\mathbf{15.55^{\pm 0.05}}\) & \(\mathbf{8.7^{\pm 0.6}}\) & \(\mathbf{18.9^{\pm 0.9}}\) & \(\mathbf{16.02^{\pm 0.07}}\) \\ \hline \multirow{2}{*}{
\begin{tabular}{l} ViT \\ (acc.\(\uparrow\)) \\ \end{tabular} } & Vanilla & \(80.75^{\pm 0.10}\) & \(359^{\pm 81}\) & \(1018^{\pm 471}\) & \(69.24^{\pm 6.93}\) \\ & Clipped softmax & \(80.89^{\pm 0.13}\) & \(\mathbf{73.7^{\pm 14.9}}\) & \(22.9^{\pm 1.6}\) & \(79.77^{\pm 0.25}\) \\ \cline{1-1} & Gated attention & \(\mathbf{81.01^{\pm 0.06}}\) & \(79.8^{\pm 0.5}\) & \(\mathbf{19.9^{\pm 0.3}}\) & \(\mathbf{79.82^{\pm 0.11}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: A summary of results for our proposed methods applied on BERT, OPT, and ViT.
## 7 Conclusions
We have thoroughly analyzed the activation outlier problem that makes transformers difficult to quantize. We showed that transformer networks try to learn not to update residuals and that by doing so, through the combination of the softmax, residual connections and LayerNorm, significant outliers appear in transformers. Based on this insight, we proposed two methods to address this at the core - _clipped softmax_ and _gated attention_. These structural changes to transformers give similar, if not better, floating-point performance after training but significantly improve the post-training quantization results. We hope that with these two architectural changes to transformers, anyone can train high-performance transformers that are easy to quantize and can benefit from efficient integer inference.
|
2306.14380
|
A New Optimal Subpattern Assignment (OSPA) Metric for Multi-target
Filtering
|
This paper proposes and evaluates a new metric. This metric will overcome a
limitation of the Optimal Subpattern Assignment (OSPA) metric mentioned by
Schuhmacher et al.: the OSPA distance between two sets of points is insensitive
to the case where one is empty. This proposed metric, called Complete OSPA
(COSPA), retains all the advantages of the OSPA metric for evaluating the
performance of multiple target filtering algorithms while also allowing
separate control over the threshold of physical distance errors and cardinality
errors.
|
Tuyet Vu
|
2023-06-26T02:09:34Z
|
http://arxiv.org/abs/2306.14380v1
|
# A New Optimal Subpattern Assignment (OSPA) Metric for Multi-target Filtering
###### Abstract
This paper proposes and evaluates a new metric. This metric will overcome a limitation of the Optimal Subpattern Assignment (OSPA) metric mentioned by Schuhmacher et al.: the OSPA distance between two sets of points is insensitive to the case where one is empty. This proposed metric, called Complete OSPA (COSPA), retains all the advantages of the OSPA metric for evaluating the performance of multiple target filtering algorithms while also allowing separate control over the threshold of physical distance errors and cardinality errors.
**Keywords: Optimal Subpattern Assignment (OSPA), Generalized OSPA, Complete OSPA metric (COSPA metric), Multi-target Filtering (MTF).**
## I Introduction
In the area of multi-target filtering (MTF), a rigorous and robust metric plays an important role for performance evaluation. Applications of such a metric can be found, for example, in target tracking. A key technical issue underpinning such an evaluation concerns the measurement of 'distance' between two sets. Clearly a standard distance measure on Euclidean space cannot be used. Hoffman and Mahler [1] studied this problem and found that the Hausdorff distance is relatively insensitive to errors in the number of targets (which is an important issue in MTF) and proposed a new metric to overcome this shortcoming. In \(2008\), Schuhmacher et al. [2] built on this work and proposed a metric called optimal sub-pattern assignment (OSPA) which incorporated the spatial distance and the cardinality distance. Spatial distances between pairs of state vectors across two sets are cut off at \(c\), a parameter, and are weighted equally while each extra element in a bigger set will be penalized as if there was a distance error of \(c\). However, when one set is empty, the OSPA distance takes on the value of \(c\) regardless of the cardinality of the other set. This was pointed out in [2] as a minor inconvenience. However, as discussed in Rahmathullah et al. [3], in reality the OSPA metric is not a desirable tool for evaluating MTF algorithms. They pointed out that the MTF community prefers to understand the missing target and false target performance beyond the cardinality mismatch. Hence Rahmathullah et al. [3] tried to overcome these limitations by proposing a new metric called Generalized OSPA (GOSPA). This metric was derived by removing the normalization of the OSPA metric and multiplying the cardinality error by the parameter \(1/\alpha\), with the optimal choice of \(\alpha=2\). However, by not normalizing, the GOSPA metric will generally grow with the size of the sets. The normalization of the OSPA metric scales the sum of all distances between the two finite sets to be within the interval \([0,c]\). In contrast, the GOSPA metric is exactly the sum of all distances between the finite sets. Furthermore, by weighting the cardinality penalty less than the spatial distance cut-off \(c\), the GOSPA metric gives unexpected results in most scenarios when the sets are not empty (see example in Figure 4). Another issue is that the OSPA metric is insensitive to the distance cut-off and cardinality penalty. The cause is that the penalty for each unassignable point and the distance cut-off between two assigned points in the OSPA metric are the same, \(c\).
By analysing the advantages and disadvantages of OSPA together with the requirements of the MTF community as discussed in [3], we propose a new metric, namely a Complete OSPA (COSPA). This metric overcomes the shortcoming of the OSPA metric when one of the two finite sets is empty. Also, by choosing the cut-off value to be less than or equal to the penalty for each unassignable point, together with considering the sensitivity to the empty set, the COSPA metric overcomes the limitations of the OSPA metric while still retaining all the advantages of the OSPA metric for other cases when two finite sets are non-empty. The earlier result is published in [4].
The structure of this paper is as follows. Section II provides some definitions and background. Section III summarizes the OSPA metric and discusses its limitations while Section IV summarizes and discusses the GOSPA metric. The main part of this paper is Section V, where the COSPA metric is developed and the analysis of OSPA, GOSPA and COSPA is conducted, with illustrative scenarios discussed in Section VI. Section VII presents numerical studies to demonstrate the usefulness of the proposed metric and compares this with the other two metrics mentioned above. Concluding remarks and suggestions for future work are given in Section VIII.
## II Background
Some background and definitions in this section are used throughout the paper. Let \(\mathcal{X}\subseteq\mathbb{R}^{n_{x}}\) where \(n_{x}\) is a natural number. The distance \(\bar{d}^{(a)}(x,y)\) for \(x,y\in\mathcal{X}\) and \(a>0\) is cut off by \(a\) if it is equal to or larger than \(a\), i.e.
\[\bar{d}^{(a)}(x,y)=\min(a,d(x,y)) \tag{1}\]
where \(d(x,y)\) is typically \(\|x-y\|_{p^{\prime}}\) where \(1\leq p^{\prime}\leq\infty\).
A finite set is a collection of a finite number of elements. If \(A\) is a finite set, then \(A\) has a finite number of elements, i.e. \(|A|<\infty\). A finite set \(B\) is defined to be bigger than a finite set \(A\) if \(B\) has more elements than \(A\), i.e. \(|B|>|A|\).
Let \(A\) and \(B\) be two finite sets. Denote \(\Pi(A,B)\) as a set of all one-to-one functions between \(A\) and \(B\), i.e. if \(|A|\leq|B|\) then \(\pi\in\Pi(A,B)\) is a one-to-one (injective) function from \(A\) to \(B\) otherwise from \(B\) to \(A\). Mathematically, \(\Pi(A,B)\) is
\[\left\{\begin{aligned} \{\pi\text{ from }A\text{ to }B:\pi\text{ is injective}\},&\text{if }|A|\leq|B|;\\ \{\pi\text{ from }B\text{ to }A:\pi\text{ is injective}\},&\text{otherwise.} \end{aligned}\right. \tag{2}\]
## III OSPA
We summarize the OSPA metric in Section III-A and discuss its limitations in Section III-B.
### _Summary_
The OSPA distance, the total error, is the sum of a localization error and a cardinality error. The localization error is the smallest sum of distances between all combinations of elements of two finite sets \(X\) and \(Y\). The cardinality error is given by the mismatch between the number of elements in \(X\) and the number in \(Y\) scaled using \(c\). The per target error obtained by normalizing total error by the largest cardinality of the two given sets is a proper metric.
**Definition 1** (OSPA metric): _Let \(X\) and \(Y\) be two finite sets. For order parameter \(p\)\((p\geq 1)\), the cut-off parameter \(c\) (\(c>0\)), the OSPA metric \(\bar{d}^{(c)}_{p}(X,Y)=\bar{d}^{(c)}_{p}(Y,X)\) is defined as._
* _if_ \(X=Y=\emptyset\)_:_ \(\bar{d}^{(c)}_{p}(X,Y)=0\)_._
* _otherwise if_ \(p<\infty\)_,_ \(\bar{d}^{(c)}_{p}(X,Y)\) _is for_ \(|X|\leq|Y|\)__ \[\left[\frac{1}{|Y|}\left(\min_{\pi\in\Pi(X,Y)}\sum_{x\in X}\bar{d}^{(c)}(x, \pi(x))^{p}\!+\!c^{p}(|Y|-|X|)\right)\right]^{\frac{1}{p}}\] (3)
* _if_ \(p=\infty\)_,_ \(\bar{d}^{(c)}_{p}(X,Y)\) _is_ \[\left\{\begin{aligned} \min_{\pi\in\Pi(X,Y)}\max_{x\in X}\bar{d}^{(c)}(x, \pi(x)),&\text{if }|X|=|Y|;\\ c&\text{if }|X|\neq|Y|;\end{aligned}\right.\] (4)
_where \(\Pi(X,Y)\) is in (2); and the choice of \(c\) and the steps required to calculate the OSPA metric can be found in [2]._
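For concreteness, a small NumPy/SciPy sketch of the OSPA distance of Definition 1 (finite \(p\), Euclidean base distance, Hungarian algorithm for the optimal assignment) could look as follows; the function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=80.0, p=1.0):
    """OSPA distance of order p with cut-off c between two finite sets of vectors.

    X, Y: arrays of shape (m, d) and (n, d); an empty set is an array with 0 rows.
    """
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return c                                   # OSPA equals c whenever exactly one set is empty
    if m > n:                                      # enforce |X| <= |Y|
        X, Y, m, n = Y, X, n, m
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    D = np.minimum(D, c) ** p                      # base distances cut off at c
    rows, cols = linear_sum_assignment(D)          # optimal sub-pattern assignment
    loc = D[rows, cols].sum()
    card = (c ** p) * (n - m)                      # penalty c per extra point in the bigger set
    return ((loc + card) / n) ** (1.0 / p)

# Example: the OSPA distance to the empty set is c regardless of cardinality.
Z = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
print(ospa(np.empty((0, 2)), Z, c=80.0, p=1.0))    # -> 80.0
```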
The OSPA distance is interpreted as a \(p-\)th order per-target error, comprised of a \(p-\)th order per-target localization error and a \(p-\)th order per-target cardinality error. By (3) for \(1\leq p<\infty\), the localization and cardinality errors are given respectively as follows for \(|X|\leq|Y|\)
\[\bar{e}^{(c)}_{p,\text{loc}}(X,Y)=\bar{e}^{(c)}_{p,\text{loc}}(Y,X)=\left(\frac{1}{|Y|}\min_{\pi\in\Pi(X,Y)}\sum_{x\in X}\bar{d}^{(c)}(x,\pi(x))^{p}\right)^{\frac{1}{p}},\]
\[\bar{e}^{(c)}_{p,\text{card}}(X,Y)=\bar{e}^{(c)}_{p,\text{card}}(Y,X)=\left(\frac{c^{p}\,(|Y|-|X|)}{|Y|}\right)^{\frac{1}{p}}.\]
in cardinality (details in Section VI). Furthermore, the cut-off \(c\), being larger than the penalty \(c/\sqrt[p]{\alpha}\) given to each extra vector when \(\alpha\geq 1\), makes the GOSPA metric even less useful (see Figure 2). From Definition 2 it is obvious that \(\bar{d}_{p}^{(c,2)}(X,Y)\leq\bar{d}_{p}^{(c,\alpha)}(X,Y)\) for all \(0<\alpha\leq 2\). Hence, it is not clear what an optimization over assignment sets means in Proposition 1; incidentally, Proposition 1 is just a way to group the mismatched targets as the second term in (6) when \(\alpha=2\) (the mismatched targets being two targets from two different sets that are at distance \(c\) or more from each other). If \(\alpha\neq 2\), we cannot rewrite (5) in the form of (6).
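Definition 2 (GOSPA) is not reproduced in this excerpt, but, as described in the Introduction, it is obtained from OSPA by dropping the normalization and scaling the cardinality term by \(1/\alpha\); a sketch under that reading, mirroring the OSPA sketch above, is:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gospa(X, Y, c=80.0, p=1.0, alpha=2.0):
    """GOSPA distance as described in the Introduction: unnormalized OSPA with the
    cardinality term scaled by 1/alpha, so each extra point costs c / alpha**(1/p)."""
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    if len(X) > len(Y):                            # enforce |X| <= |Y|
        X, Y = Y, X
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    loc = 0.0
    if m > 0:
        D = np.minimum(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1), c) ** p
        rows, cols = linear_sum_assignment(D)
        loc = D[rows, cols].sum()
    return (loc + (c ** p / alpha) * (n - m)) ** (1.0 / p)

print(gospa(np.empty((0, 2)), np.zeros((3, 2))))   # c * 3 / alpha = 120 for p = 1, c = 80
```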
In order to overcome the limitations of the OSPA metric discussed in Section III-B, we propose a new metric, namely Complete OSPA (COSPA), in the next section. This metric not only overcomes the above limitations but also retains the advantages of the OSPA metric for evaluating cases where the OSPA metric gives reliable solutions.
## V Complete Optimal Subpattern Assignment (COSPA) Metric
The Optimal Subpattern Assignment (OSPA) metric is very popular in multi-target filtering because it can evaluate the localization error and cardinality error between two sets of vectors. However, as discussed in Section III-B, the OSPA metric between an empty set and a non-empty set is the same regardless of how many elements the non-empty set has. It also does not distinguish between the cut-off distance and the cardinality penalty, as discussed in Section III-B. Hence, in this section, we derive a new metric which overcomes these shortcomings and still retains all the beneficial properties of the OSPA metric. The new metric, called Complete OSPA (COSPA), cuts off the distance between two vectors at \(c\) (\(c>0\)), penalizes each extra point in the bigger set by \(\dot{c}\) (\(\dot{c}\geq c\)), and penalizes the empty-set error by \(\xi\) (\(0\leq\xi\leq 1\)). In the rest of this section, we show that COSPA overcomes the shortcomings of the current metrics OSPA and GOSPA.
**Definition 3**: _Let \(X\) and \(Y\) be two finite sets. For order parameter \(p\) (\(1\leq p\leq\infty\)), cut-off parameter \(c\) (\(0<c\)) and cardinality penalty parameter \(\dot{c}\) (\(\dot{c}\geq c\)) and empty-set parameter \(0\leq\xi\leq 1\), the COSPA metric \(\bar{d}_{c,p}^{(\dot{c},\xi)}(Y,X)=\bar{d}_{c,p}^{(\dot{c},\xi)}(X,Y)\) is defined as follows._
* _If_ \(X=Y=\emptyset\)_:_ \(d_{c,p}^{(\dot{c},\xi)}(X,Y)=0\)_._
* _Otherwise, for_ \(p<\infty\)_,_ \(d_{c,p}^{(\dot{c},\xi)}(X,Y)\) _is for_ \(|X|\leq|Y|\)__ \[\left(\frac{1}{|Y|}\bigg{[}\min_{\pi\in\Pi(X,Y)}\sum_{x\in X} \Big{(}c^{p}\delta\big{(}\bar{d}^{(c)}(x,\pi(x)),c\big{)}\] (7a) \[+\bar{\delta}\big{(}\bar{d}^{(c)}(x,\pi(x)),c\big{)}\bar{d}^{(c)}(x,\pi(x))^{ p}\bigg{)}\bigg{]}\] (7b) \[+\dot{c}^{p}\frac{|Y|-|X|}{|Y|}\] (7c) \[+\xi\frac{\delta(X,\emptyset)\bar{\delta}(Y,\emptyset)}{|Y|}\dot{c}^{p}(| Y|-1)\Bigg{)}^{\frac{1}{p}}\] (7d)
* _if_ \(p=\infty\)_,_ \(d_{c,\infty}^{(\dot{c},\xi)}(X,Y)\) _is_ \[\left\{\begin{array}{ll}\min_{\pi\in\Pi(X,Y)}\max_{x\in X}\bar{d}^{(c)}(x, \pi(x))&\mbox{if }|X|=|Y|\\ \dot{c}&\mbox{if }|X|\neq|Y|\end{array}\right.\] (8)
_The function \(d_{c,p}^{(\dot{c},\xi)}(\cdot,\cdot)\) is called the COSPA metric of order \(p\) with cut-off \(c\), cardinality penalty \(\dot{c}\) and empty-set error \(\xi\)._
COSPA is proved as a metric in [4] (note that \(\dot{c}\) and \(c\) are swapped in [4]). If we choose \(\dot{c}=c\) and \(\xi=0\), COSPA is exactly OSPA. The term \(\dot{c}\), which is the penalty for each cardinality error and differs from the cut-off distance \(c\), exists to overcome the limitation of OSPA in scenarios that require distinguishing the cut-off distance from a cardinality error, similar to the example shown in Figure 3. In this example the OSPA distance between the two sets in Figure (3a) is the same as the OSPA distance between the two sets in Figure (3b). In general, the OSPA distances for two pairs of finite sets are the same when the cardinality of a set of the first pair is smaller than the other set of that pair and the second pair is the same as the first pair with the smaller set now having an extra element whose distance to elements in the other set of that pair is larger than \(c\).
Note that the cut-off parameter \(c\) is smaller than cardinality penalty \(\dot{c}\) if the outline distance between two vectors is penalized less than each mismatched number of elements between two sets of vectors. If \(\dot{c}=c\), Definition 3 (COSPA) is only different from Definition 1 (OSPA) [2] by the term in (7d). The term in (7d) only exists to take into account the case when one of the two sets is empty. When both finite sets are non-empty the term (7d) is zero, so COSPA is exactly the same as OSPA [2] if \(\dot{c}=c\).
If we do not wish to distinguish what vectors in \(X\) are very far from their images in \(Y\), \(c^{p}\delta\big{(}\bar{d}^{(c)}(x,\pi(x)),c\big{)}+\bar{\delta}\big{(}\bar{d}^ {(c)}(x,\pi(x)),c\big{)}\bar{d}^{(c)}(x,\pi(x))^{p}=\bar{d}^{(c)}(x,\pi(x))^{p}\) for \(\pi\in\Pi(X,Y)\). Alternatively, \(d_{c,p}^{(\dot{c},\xi)}(X,Y)\) in (7) can be written for simplicity as follows if \(|X|\leq|Y|\) and \(Y\neq\emptyset\).
\[\left[\frac{1}{|Y|}\bigg{(}\min_{\pi\in\Pi(X,Y)}\sum_{x\in X} \bar{d}^{(c)}(x,\pi(x))^{p}+\dot{c}^{p}(|Y|-|X|)\bigg{)}\right. \tag{9a}\] \[\left.+\xi\frac{\delta(X,\emptyset)\bar{\delta}(Y,\emptyset)}{|Y|} \dot{c}^{p}(|Y|-1)\right]^{\frac{1}{p}} \tag{9b}\]
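The simplified form (9) translates almost directly into code; the sketch below mirrors the OSPA sketch of Section III, with illustrative names and default parameter values.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cospa(X, Y, c=80.0, c_dot=81.0, xi=1.0, p=1.0):
    """COSPA distance (Definition 3, written as in eq. (9)) for finite p.

    c      : cut-off on the base distance between two vectors,
    c_dot  : penalty for each cardinality mismatch (c_dot >= c),
    xi     : empty-set parameter in [0, 1].
    """
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    if len(X) > len(Y):                             # enforce |X| <= |Y|
        X, Y = Y, X
    m, n = len(X), len(Y)
    if n == 0:                                      # both sets empty
        return 0.0
    if m == 0:                                      # exactly one set empty (Remark 1.2)
        return c_dot * (1.0 + xi - xi / n) ** (1.0 / p)
    D = np.minimum(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1), c) ** p
    rows, cols = linear_sum_assignment(D)           # COSPA uses the OSPA assignment (10)
    total = D[rows, cols].sum() + (c_dot ** p) * (n - m)
    return (total / n) ** (1.0 / p)

# With c_dot = c and xi = 0, COSPA reduces to OSPA; unlike OSPA, its distance to the
# empty set grows with the cardinality of the non-empty set (cf. Table I).
Y2 = np.array([[0.0, 0.0], [10.0, 0.0]])
Z3 = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
print(cospa(np.empty((0, 2)), Y2), cospa(np.empty((0, 2)), Z3))   # 1.5*c_dot vs (5/3)*c_dot
```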
**Remark 1**: _From Definition 1, Definition 3 and (9), the following are true_
1. _If_ \(X\neq\emptyset\) _and_ \(Y\neq\emptyset\)_, (_7_) shows that the COSPA distance between two non-empty finite sets is smaller than or equal to_ \(\dot{c}\)_. In this case, the COSPA distance (_9_) has the same forms as the OSPA distance (_3_) because the term in (_9b_) does not exist. Furthermore, if_ \(\dot{c}=c\)_, then the COSPA distance is the same as the OSPA distance, i.e._ \(d_{c,p}^{(\dot{c},\xi)}(X,Y)=\bar{d}_{p}^{(c)}(X,Y)\)_._
2. _If_ \(X=\emptyset\) _or_ \(Y=\emptyset\) _but not both,_ \[d_{c,p}^{(\dot{c},\xi)}(X,Y)=\dot{c}\left(1+\xi-\frac{\xi}{\max(|X|,|Y|)}\right)^ {\frac{1}{p}}\]
By (7), the COSPA assignment is the OSPA assignment. The COSPA assignment between \(X\) and \(Y\), \(\pi^{*}\), is
\[\begin{split}&\arg\min_{\pi\in\Pi(X,Y)}\sum_{x\in X}\bar{d}^{(c)} (x,\pi(x))^{p},\;\;\text{if }|X|\leq|Y|;\\ &\arg\min_{\pi\in\Pi(X,Y)}\sum_{y\in Y}\bar{d}^{(c)}(y,\pi(y))^{p },\;\;\text{otherwise}.\end{split} \tag{10}\]
**Remark 2**: _Assume that \(X\) is the set of truth targets2 and \(Y\) is the set of estimated targets. Definition 3 can be interpreted as follows._
Footnote 2: In Remark 2, the term ‘target’ and ‘vector’ are used interchangeably.
1. _The term in (_7_b_) is actually the sum of distances between vectors in_ \(X\) _and their images in_ \(Y\) _by the one to one function_ \(\pi^{*}\) _if each of these distances is smaller than_ \(c\)_. Each pair_ \((x,\pi^{*}(x))\in X\times Y\) _is a pair of correctly associated targets (vectors) in_ \(X\) _and_ \(Y\) _if their distance is smaller than_ \(c\)_. Note that_ \(\pi^{*}\in\Pi(X,Y)\) _is defined in (_10_)._
2. _The term in (_7_a_) is actually the sum of_ \(|\gamma|\) _distances between_ \(|\gamma|\) _vectors in_ \(X\) _and the_ \(|\gamma|\) _vectors in_ \(Y\) _by the one to one function_ \(\pi^{*}\) _when each of these distances is larger than or equal to_ \(c\) _where_ \[\gamma=\{(x,\pi^{*}(x))\in X\times Y:\bar{d}^{(c)}(x,\pi^{*}(x))=c\}.\] _Each point in the map_ \(\gamma\) _is a pair of incorrectly associated targets if their distance is equal to or larger than_ \(c\)_. Alternatively, the target in_ \(X\) _is called a missing target and the corresponding estimated target in_ \(Y\) _via_ \(\pi^{*}\) _is called a false target._
3. _The term in (_7_c_) is actually the cardinality error between_ \(X\) _and_ \(Y\)_. If_ \(|Y|>|X|\)_, the targets in_ \(Y\) _which do not associate with any targets in_ \(X\) _via_ \(\pi^{*}\) _are called false targets._
4. _The term in (_7_d_) only exists if the smaller set is empty and the bigger set is not empty._
The COSPA distance comprises of 3 components: COSPA localization error, COSPA outline error (i.e. the sum of all distances that are larger than or equal to \(c\)) and COSPA cardinality error. Here, the OSPA localization error is the sum of the COSPA localization error and the COSPA outline error. The COSPA cardinality error is the same as the OSPA cardinality error if both sets are not empty. If either \(X\) or \(Y\) is empty, the COSPA cardinality error is the COSPA distance (Remark 1.2) while the OSPA cardinality error is the OSPA distance \(c\). Hence, similar to the OSPA metric, the COSPA metric is interpreted as a \(p-\)th order per-target error. By (7) for \(1\leq p<\infty\), COSPA localization (11a), COSPA outline (11b) and cardinality errors (11c) are given respectively as follows for \(|X|\leq|Y|\)
\[\begin{split}&\bar{e}^{(c)}_{p,\text{loc}}(X,Y)=\bar{e}^{(c)}_{p, \text{loc}}(Y,X)\\ =&\bigg{(}\frac{1}{|Y|}\sum_{x\in X}\bar{\delta} \big{(}\bar{d}^{(c)}(x,\pi^{*}(x)),c\big{)}\bar{d}^{(c)}(x,\pi^{*}(x))^{p} \bigg{)}^{\frac{1}{p}},\\ &\bar{e}^{(c)}_{p,\text{out}}(X,Y)=\bar{e}^{(c)}_{p,\text{out}}(Y,X)\\ =& c\left(\frac{\sum_{x\in X}\delta\big{(}\bar{d}^{(c )}(x,\pi^{*}(x)),c\big{)}}{|Y|}\right)^{\frac{1}{p}},\\ &\bar{e}^{(\xi)}_{\bar{c},p,\text{card}}(X,Y)=\bar{e}^{(\xi)}_{ \bar{c},p,\text{card}}(Y,X)\\ &=\dot{c}\left(\frac{|Y|-|X|}{|Y|}+\xi\delta(X,\emptyset)\bar{ \delta}(Y,\emptyset)\frac{|Y|-1}{|Y|}\right)^{\frac{1}{p}}\end{split} \tag{11c}\]
## VI Analysis of COSPA, GOSPA and OSPA metrics
In this section, we show the solution the COSPA metric offers to the limitations of OSPA and GOSPA mentioned in Sections III-B and IV via simple scenarios. Furthermore, we also analyze the scenarios by comparing these solutions. Without loss of generality \(\xi\) in Definition 3 is chosen as \(1\) (i.e. \(\xi=1\)) and the order parameter \(p=1\) for the three metrics.
### _Effect of Cardinality Zero_
As discussed in Section III-B, the OSPA metric gives the same result when one of the two arguments is empty. In Figure 1 the OSPA distance between the non-empty set \(Y\) and \(\emptyset\) is the same as the OSPA distance between the non-empty set \(Z\) and \(\emptyset\). The GOSPA and COSPA metrics give smaller values for \((\emptyset,Y)\) than \((\emptyset,Z)\). As a result, a set with two vectors \(Y\) is closer to \(\emptyset\) than a set with three vectors \(Z\). This is a natural (meaningful) physical interpretation. Detailed explanations are given in Table I.
The COSPA metric is different from the other two metrics when comparing one algorithm producing an empty set and another algorithm giving a non-empty set as
\begin{table}
\begin{tabular}{|l||l|l|l|} \hline Metric & Fig.1a (\(Y\)) & Fig. 1b (\(Z\)) & Which one is closer to \(\emptyset\)? \\ \hline \hline OSPA & \(c\) & \(c\) & They are the same \\ \hline GOSPA & \(c\frac{2}{\alpha}\) & \(c\frac{3}{\alpha}\) & \(Y\) \\ \hline COSPA & \(\dot{c}\frac{3}{2}\) & \(\dot{c}\frac{5}{3}\) & \(Y\) \\ \hline Intuition & & & \(Y\) \\ \hline \end{tabular}
\end{table}
Table I: **Analysis of the three metrics for Figure 1**
an output. Compared with the non-empty set ground truth, COSPA gives an algorithm with the empty set a bigger value than an algorithm with the non-empty set. The distance between two empty sets is zero. It is clear that the three metrics will give the algorithm with the empty set a smaller distance when the ground truth is empty. Figure 2 and Table II give an example when the ground truth is not empty. GOSPA concludes that the empty set \(Z\) is the closer estimate to \(X\) than the non empty set \(Y\) if \(\frac{c}{\alpha}\leq\eta\) and \(\alpha>1\). These results depend on the choice of cut-off \(c\). If \(c\leq\eta\), OSPA concludes that \(Y\) is as good as \(Z\) for estimating \(X\), otherwise \(Y\) is the better estimate of \(X\) than \(Z\). The COSPA metric concludes that \(Y\) is the better estimate of \(X\) than \(Z\) no matter what values are chosen for the cut-off \(c\) and cardinality error penalty \(\dot{c}\) (see more detail in Table II). This is a natural (meaningful) physical interpretation.
### _Effect of Choice of Cut-off and Cardinality Penalty_
In the OSPA metric, the cut-off distance \(c\) between two vectors is the same as the penalty for each extra vector. For the COSPA metric, this cut-off distance \(c\) is smaller than or equal to the penalty for each extra vector, \(\dot{c}\). This is not the case for the GOSPA metric (5) where the cut-off distance \(c\) between two vectors is larger than or equal to the penalty for each extra vector, \(c/\sqrt[p]{\alpha}\). This choice makes the GOSPA metric intuitively unreliable4 for most other scenarios where both sets are not empty. GOSPA gives the same value for the two scenarios in Figure 3 if \(\alpha=1\); the smaller or bigger value for scenario 3a than 3b if \(\alpha<1\) or \(\alpha>1\) respectively. If \(\alpha>1\), which is the authors' preferred choice, GOSPA gives the smaller, same or bigger distance for scenario 4a than 4b if \(c>\Delta>c/\alpha\), \(\Delta=c/\alpha\), or \(\Delta<c/\alpha\) respectively.
Footnote 4: If a metric is ‘unreliable’, it occasionally assign a large value to the distance between two sets of vectors that are intuitively close.
As long as \(\Delta<c\), OSPA gives the same value for the two scenarios in Figure 3 while COSPA gives a smaller value for scenario 3a than 3b if \(\dot{c}>c\) and the same value if \(\dot{c}=c\). This is summarized in Table III. Both OSPA and COSPA give the smaller value for scenario 4b than for 4a, agreeing with intuitive thinking. This is shown in Table IV. Intuitively, \(Y_{c}\) is the closest to \(X\) as given by OSPA and COSPA. However, the GOSPA metric only gives the same conclusion if \(\Delta<c/\alpha\). If \(\alpha=2\), which the authors [3] claim is the optimal solution for the GOSPA metric, GOSPA gives \(Y_{a}\) as the closest to \(X\) if \(\Delta\geq c/\alpha\), which contradicts intuitive thinking. The behaviour of these three metrics for the scenarios in Figure 4 are summarized in Table IV.
### _Importance of Normalization_
The normalization of the distance between two finite sets plays an important role for measuring how close these two finite sets are because it scales the total error, which is the minimum sum of all distances of pairs of vectors and all distances of unpaired vectors, to be within the interval \([0,c]\) for the OSPA metric and to be within the half-closed interval \([0,\dot{c}+\dot{c}\xi)\) where \(0\leq\xi\leq 1\) for the COSPA metric. Without the normalization, the GOSPA metric is simply the sum. Hence, if one/both of these sets are large and distinguishable, the total error of these two sets is large and hence the GOSPA distance is large. Considering Figures 3 and 4, the GOSPA metric
\begin{table}
\begin{tabular}{|l||l|l|l|} \hline Metric & Fig. 2a(\(Z\)) & Fig. 2b (\(Y\)) & Which one is closer to \(X\)? \\ \hline \hline OSPA & \(c\) & \(\min(c,\eta)\) & \(Y\) if \(\mathbf{\eta<c}\) \\ \hline GOSPA & \(c\frac{m}{\alpha}\) & \(m\min(c,\eta)\) & \(Y\) if \(\mathbf{\eta<\frac{c}{\alpha}}\) \\ \hline COSPA & \(\dot{c}\left(2-\frac{1}{m}\right)\) & \(\min(c,\eta)\) & \(Y\) \\ \hline Intuition & & & \(Y\) \\ \hline \end{tabular}
\end{table}
Table II: **Analysis of the three metrics for Figure 2**
Figure 3: \(X=\{x_{1},x_{2},x_{3}\}\) where \(d(x_{i},z)\geq c>\Delta\) for \(i=1,2,3\). The OSPA metric \(\bar{d}_{p}^{(c)}(X,Y)=\frac{2\Delta+c}{3}=\bar{d}_{p}^{(c)}(X,Z)\). OSPA gives the same value for the distance between \(X\) and \(Y\) and for the distance between \(X\) and \(Z\).
concludes that \(Y_{d}\) is worse than \(Y_{b}\) because their distances to the same set \(X\) give a bigger value to the first compared to the second. This is summarized in Table V. Clearly, the unnormalized OSPA distance is not a proper distance measure because it is proportional to the size of the bigger set. Therefore, the GOSPA metric may not be a suitable tool to measure how close two finite sets of vectors are. Now consider Figure 5 as an example. There are two parallel line segments in Figure 5 and the distance between these line segments is \(\Delta\). If these two line segments are discretized into two sets of two points each, with \(X=\{x_{1},x_{2}\}\) and \(Y=\{y_{1},y_{2}\}\) as in Figure 5, the GOSPA distance will be \(\Delta\sqrt[p]{2}\). Similarly, if these two lines are discretized into two sets of \(n\) points each, \(X=\{x_{1},x_{2},\ldots,x_{n}\}\) and \(Y=\{y_{1},y_{2},\ldots,y_{n}\}\) as in Figure 5, then the GOSPA metric will be \(\Delta\sqrt[p]{n}\). This means the GOSPA distance grows with the number of points in the discretization. This is not intuitively reasonable because all scenarios in Figure 5 have the same distance, \(\Delta\), between the two line segments \(l_{1}\) and \(l_{2}\) in Figure 5. OSPA and COSPA give the same distance \(\Delta\) for all scenarios in Figure 5. The explanation and computation of the three metrics for the scenarios in Figure 5 are summarized in Table VI.
Furthermore, the absence of normalization makes GOSPA inconsistent when comparing the empty set with a non-empty set using another non-empty set as a reference. Take Figure 6 as an example. The computation and the comparison of the results (of the three metrics) against intuitive thinking are summarized in Table VII.
## VII Experiment
We demonstrate the proposed metric by evaluating a multi-target tracking (MTT) algorithm, together with the OSPA and GOSPA metrics. We use the data and one of the results produced by the MTT algorithm in [5]. \(38\) targets move from
\begin{table}
\begin{tabular}{|l||l|l|l|l|} \hline Metric & Fig.4b (\(Y_{b}\)) & Fig.4d (\(Y_{d}\)) & Which one is closer to \(X\)? \\ \hline \hline OSPA & \(\frac{c+\Delta}{2}\) & \(\frac{2\Delta+c}{3}\) & \(Y_{d}\) \\ \hline GOSPA & \(\Delta+\frac{c}{\alpha}\) & \(2\Delta+\frac{c}{\alpha}\) & \(Y_{b}\) \\ \hline COSPA & \(\frac{\lambda+\Delta}{2}\) & \(\frac{2\Delta+\hat{c}}{3}\) & \(Y_{d}\) \\ \hline Intuition & & & \(Y_{d}\) \\ \hline \end{tabular}
\end{table}
Table V: **Analysis of the three metrics for Figures 4b - 4d**
\begin{table}
\begin{tabular}{|l||l|l|l|l|l|} \hline Metric & Fig.4a (\(Y_{a}\)) & Fig.4b (\(Y_{b}\)) & Fig.4c (\(Y_{c}\)) & Fig.4d (\(Y_{d}\)) & Which one is the closest to \(X\)? \\ \hline \hline OSPA & \(c\) & \(\frac{c+\Delta}{2}\) & \(\Delta\) & \(\frac{2\Delta+c}{3}\) & \(Y_{c}\) \\ \hline GOSPA & \(c^{2}\) & \(\Delta+\frac{c}{\alpha}\) & \(2\Delta\) & \(2\Delta+\frac{c}{\alpha}\) & \(Y_{c}\) if \(\mathbf{\Delta<c^{\frac{1}{\alpha}}}\) or \(Y_{a}\) if \(\mathbf{\Delta\geq c^{\frac{1}{\alpha}}}\) \\ \hline COSPA & \(\hat{c}^{\frac{3}{2}}\) & \(\frac{\hat{c}+\Delta}{2}\) & \(\Delta\) & \(\frac{2\Delta+\hat{c}}{3}\) & \(Y_{c}\) \\ \hline Intuition & & & & \(Y_{c}\) \\ \hline \end{tabular}
\end{table}
Table IV: **Analysis of the three metrics and intuitive thinking to evaluate the sets shown in Figure 4**
Figure 5: The Euclidean distance between two line segments \(l_{1}\) and \(l_{2}\) is \(\Delta\) which is shown in Figure 5. If these two line segments are discretized into two sets of two points each as in Figure 5; two sets of three points each as in Figure 5; and two sets of \(n\) points each as in Figure 5, then the GOSPA metric is \(2\Delta\), \(3\Delta\) and \(n\Delta\) respectively. The OSPA and COSPA metrics are the same and equal to \(\Delta\) for all cases.
\begin{table}
\begin{tabular}{|l||l|l|l|l|} \hline Metric & Fig.3a (\(Y_{a}\)) & Fig.3b (\(Z\)) & Which one (\(Y\) or \(Z\)) is closer to \(X\)? & Explanation \\ \hline \hline OSPA & \(\frac{2\Delta^{p}}{3}\) & \(\frac{2\Delta+c}{3}\) & They have the same distance to \(X\) & cut-off = cardinality penalty \\ \hline GOSPA & \(2\Delta+\frac{1}{\alpha}\) & \(2\Delta+c\) & \(Y\) if \(\alpha>1\); \(Z\) if \(\alpha<1\) & cut-off \(\neq\) cardinality penalty (\(c\neq\frac{c}{\alpha}\)) \\ \hline COSPA & \(\frac{2\Delta+\hat{c}}{3}\) & \(\frac{2\Delta+\hat{c}}{3}\) & \(Z\) if \(c<\hat{c}\) & cut-off \(\leq\) cardinality penalty (\(c\leq\hat{c}\)) \\ \hline \end{tabular}
\end{table}
Table III: **Analysis of the three metrics for Figure 3**
\begin{table}
\begin{tabular}{|l||l|l|l|l|} \hline Metric & Fig.4a (\(Y_{a}\)) & Fig.4b (\(Y_{b}\)) & Fig.4c (\(Y_{c}\)) & Fig.4d (\(Y_{d}\)) & Which one is the closest to \(X\)? \\ \hline \hline OSPA & \(c\) & \(\frac{c+\Delta}{2}\) & \(\Delta\) & \(\frac{2\Delta+c}{3}\) & \(Y_{c}\) \\ \hline GOSPA & \(c^{2}\) & \(\Delta+\frac{c}{\alpha}\) & \(2\Delta\) & \(2\Delta+\frac{c}{\alpha}\) & \(Y_{c}\) if \(\mathbf{\Delta<c^{\frac{1}{\alpha}}}\) or \(Y_{a}\) if \(\mathbf{\Delta\geq c^{\frac{1}{\alpha}}}\) \\ \hline COSPA & \(\hat{c}^{\frac{3}{2}}\) & \(\frac{\hat{c}+\Delta}{2}\) & \(\Delta\) & \(\frac{2\Delta+\hat{c}}{3}\) & \(Y_{c}\) \\ \hline Intuition & & & & & \(Y_{c}\) \\ \hline \end{tabular}
\end{table}
Table IV: **Analysis of the three metrics and intuitive thinking to evaluate the sets shown in Figure 4**
top right or middle of the surveillance area to bottom left, and middle of the surveillance area to top right. Each target survives with probability \(99\%\) and is detected with probability \(80\%\). The measurements are corrupted by zero-mean Gaussian noise. The detected measurements are immersed in clutter modeled as a Poisson RFS with an average of \(50\) clutter returns per unit volume. In this example, we use the OSPA metric [2], the GOSPA metric [3] and the proposed COSPA metric to compute the distances between the truth tracks and the estimated tracks (produced by the MTT algorithm).
For the computation of the three metrics, we chose the cut-off parameter \(c=80\), \(\alpha=2\), \(\xi=1\), \(p=1\) and \(\dot{c}=c+1\). In this scenario, Figure 8 shows that at any time \(t\) (\(t=1,\ldots,50\)) the GOSPA metric has a much bigger error compared to OSPA and COSPA, because GOSPA is the sum of all spatial distances and the cardinality error at that time while OSPA and COSPA are the average of this sum. Figure 9 shows that most of the time OSPA and COSPA have the same errors, except at times where the cut-off is applied or one of the two sets (but not both) is empty. This is because the cut-off distance \(c\) and the penalty for each cardinality error are different in COSPA but the same in OSPA. Figure 10 shows that most of the time all objects in the smaller set associate correctly with objects in the bigger set, except at times \(t=21,38,39\) (two objects are correctly associated if their distance is smaller than the cut-off \(c\)). Indeed, OSPA gives the cut-off value \(c\) as the distance between the two sets at times \(t=1\) and \(t=50\), at which no track is detected. Furthermore, the distances and cardinality error at time steps \(t=21,38\) and \(t=39\) are the same in both COSPA and OSPA, but the OSPA localization error is the sum of the COSPA localization error and the COSPA outline error. This is because in COSPA a pair of vectors whose distance is not smaller than the cut-off parameter \(c\) is counted as a wrongly associated pair of tracks, which is possible since the cut-off parameter \(c\) is smaller than the penalty for each cardinality error \(\dot{c}\).
Figure 8: Error versus time calculated of OSPA, GOSPA and COSPA metric.
Figure 6: The ground truth is \(X=\{x_{1},\ldots,x_{m}\}\). In Figure 5(a), an algorithm estimates no target, i.e. \(Z=\emptyset\) while in Figure 5(b) an algorithm estimates \(n\) targets (\(n>m\)), i.e \(Y=\{y_{1},y_{2},\ldots,y_{n}\}\) where \(\Delta<c\).
\begin{table}
\begin{tabular}{|l||l|l|l|} \hline Metric & Fig.5(a) (\(Z\)) & Fig.5(b) (\(Y\)) & Which one is closer to \(X\)? \\ \hline \hline OSPA & \(c\) & \(\frac{m\Delta+(n-m)c}{n}\) & \(Y\) \\ \hline GOSPA & \(c\frac{m}{\alpha}\) & \(m\Delta+\frac{(n-m)c}{\alpha}\) & \(Y\) if \(\mathbf{\Delta}<\mathbf{c\frac{2m-n}{m\alpha}}\) \\ \hline COSPA & \(\dot{c}\left(2-\frac{1}{n}\right)\) & \(\frac{m\Delta+(n-m)c}{n}\) & \(Y\) \\ \hline Intuition & & & \(Y\) \\ \hline \end{tabular}
\end{table}
Table VII: **Analysis of the three metrics for Figure 6**
Figure 7: Ground truth targets are immersed with their measurements and clutter.
penalty different from the cut-off parameter by multiplying the cut-off parameter with a positive number \(\frac{1}{\alpha}\) that is larger than or equal to \(\frac{1}{2}\) (\(0<\alpha\leq 2\)). This alteration of the OSPA metric results in a greater penalty for a distance \(\Delta\) between two vectors than the penalty for each cardinality penalty if \(c>\Delta>c/\alpha\). Hence the GOSPA metric will normally favor the empty set over the non-empty set when \(\alpha>1\) and these two sets are compared with another non-empty set of the same size as the non-empty set and the distances between pairs of vectors across the two non-empty sets are equal to or larger than the cardinality penalty; or the localization error can be penalized more than the cardinality penalty if a distance \(\Delta\) between two vectors is larger than \(c/\alpha\) (\(\Delta<c\)). Furthermore, the lack of normalization in the GOSPA metric makes it unreliable for measuring the distance between two non-empty sets.
The proposed COSPA metric was developed to overcome the shortcomings mentioned by [2] and also provide a practical assessment of the MTF or multiple target tracking algorithms at a particular time in terms of missing targets, false targets, incorrectly associated targets and correctly associated targets. Furthermore, the identities of missing targets, false targets and pairs of associated targets are provided in the process of calculating this metric (available as Matlab code in [6]). The COSPA metric retains the advantages of the OSPA metric, unlike the GOSPA metric. Thorough analysis of the COSPA metric reveals no major weaknesses, noting that the penalty for each extra element in two sets is always larger than or equal to the cut-off distance between two vectors. The choice of the cut-off is problem dependent. We analysed the proposed metric together with the other two metrics on some simple scenarios, which shows its consistency and improvement compared with the other two metrics, OSPA and GOSPA. We also used the set of tracks resulting from a multiple target tracking algorithm to evaluate the proposed metric in comparison with the other two metrics.
## Acknowledgement
The author would like to thank Professor Henk Blom for the discussions, suggestions and comments.
|
2310.16271
|
CycleAlign: Iterative Distillation from Black-box LLM to White-box
Models for Better Human Alignment
|
Language models trained on large-scale corpus often generate content that is
harmful, toxic, or contrary to human preferences, making their alignment with
human values a critical concern. Reinforcement learning from human feedback
(RLHF) with algorithms like PPO is a prevalent approach for alignment but is
often complex, unstable, and resource-intensive. Recently, ranking-based
alignment methods have emerged, offering stability and effectiveness by
replacing the RL framework with supervised fine-tuning, but they are costly due
to the need for annotated data. Considering that existing large language models
(LLMs) like ChatGPT are already relatively well-aligned and cost-friendly,
researchers have begun to align the language model with human preference from
AI feedback. The common practices, which unidirectionally distill the
instruction-following responses from LLMs, are constrained by their bottleneck.
Thus we introduce CycleAlign to distill alignment capabilities from
parameter-invisible LLMs (black-box) to a parameter-visible model (white-box)
in an iterative manner. With in-context learning (ICL) as the core of the
cycle, the black-box models are able to rank the model-generated responses
guided by human-craft instruction and demonstrations about their preferences.
During iterative interaction, the white-box models also have a judgment about
responses generated by them. Consequently, the agreement ranking could be
viewed as a pseudo label to dynamically update the in-context demonstrations
and improve the preference ranking ability of black-box models. Through
multiple interactions, the CycleAlign framework could align the white-box model
with the black-box model effectively in a low-resource way. Empirical results
illustrate that the model fine-tuned by CycleAlign remarkably exceeds existing
methods, and achieves the state-of-the-art performance in alignment with human
value.
|
Jixiang Hong, Quan Tu, Changyu Chen, Xing Gao, Ji Zhang, Rui Yan
|
2023-10-25T01:05:03Z
|
http://arxiv.org/abs/2310.16271v1
|
CycleAlign: Iterative Distillation from Black-box LLM to White-box Models for Better Human Alignment
###### Abstract
Language models trained on large-scale corpus often exhibit a propensity for generating content that is harmful, toxic, or contrary to human preferences, making their alignment with human values a critical concern. A prevalent approach for achieving this alignment has been reinforcement learning from human feedback (RLHF), utilizing algorithms such as proximal policy optimization (PPO). However, these methods are often characterized by complexity, instability, and substantial resource consumption. Recently, ranking-based alignment methods have emerged, offering stability and effectiveness by replacing the RL framework with supervised fine-tuning, but they are costly due to the need for annotated data. Considering that existing large language models (LLMs) like ChatGPT are already relatively well-aligned and cost-friendly, researchers have begun to align the language model with human preference from AI feedback. The common practices, which unidirectionally distill the instruction-following responses from LLMs, are constrained by their bottleneck. To address this, we introduce CycleAlign to distill alignment capabilities from parameter-invisible LLMs (black-box) to a parameter-visible model (white-box) in an iterative manner. With in-context learning (ICL) as the core of the cycle, the black-box models are able to rank the model-generated responses guided by human-craft instruction and demonstrations about their preferences. During iterative interaction, the white-box models also have a judgment about responses generated by them. Consequently, the agreement ranking could be viewed as a pseudo label to dynamically update the in-context demonstrations and improve the preference ranking ability of black-box models. Through multiple interactions, the CycleAlign framework could align the white-box model with the black-box model effectively in a low-resource way. Empirical results illustrate that the model fine-tuned by CycleAlign remarkably exceeds existing methods, and achieves the state-of-the-art performance in alignment with human value.
## 1 Introduction
Large language models (LLMs) have demonstrated superior capabilities in processing complicated tasks, which is attributed to the large amount of training corpus and model parameters (Brown et al., 2020; Bubeck et al., 2023; Chowdhery et al., 2022; Touvron et al., 2023; Wu et al., 2021; OpenAI, 2023). Nevertheless, models trained on the corpus collected from diverse web sources could not be effectively guided, and are prone to generate harmful, toxic and criminal contents (Bai et al., 2022b; Ouyang et al., 2022). Therefore, aligning these language models with desirable human preferences such as harmlessness, helpfulness, and honesty has emerged as a pivotal focus in the ongoing research.
Reinforcement learning from human feedback (RLHF) has been employed to align language models with human preferences by Ouyang et al. (2022). Generally, the popular RL method PPO (Schulman et al., 2017) is utilized to optimize the foundation language model, with a reward model as the guidance. However, its complex architecture poses a challenge for hardware in the LLM era, and training is often unstable.
Recently, the emergence of ranking-based alignment methods has resolved the stability and hardware-consumption problems through shifting from the RL framework to supervised fine-tuning (Song et al., 2023; Rafailov et al., 2023; Yuan et al., 2023). Nevertheless, the need for extensively annotated data renders them costly and labor-intensive.
Considering existing LLMs like ChatGPT are well aligned, the reinforcement learning from AI feedback (RLAIF) methods are proposed to introduce automatic AI supervising signals (Bai et al., 2022; Kim et al., 2023) to replace the manual annotation. However, common practices that distill instruction-following responses from LLMs in a unidirectional manner are limited by inherent bottlenecks. To address this, we propose a novel framework CycleAlign to better align the parameter-visible white-box model with the parameter-invisible black-box model by iterative interactions.
As shown in Figure 1, we introduce the in-context learning (ICL) (Min et al., 2022; Rubin et al., 2021; Ren et al., 2021) as the pivot to break the bottleneck of black-box models. For a given instruction, we prompt the white-box model to generate multiple responses. Then, the black-box model ranks these responses with the help of the human-craft ranking prompt and static in-context demonstration. The ranking signal will be utilized to optimize the white-box model and help it generate more harmless and helpful responses. Additionally, the generated probability of responses could be deemed as a ranking judgment from the aspect of the white-box model. Combining the judgment from the white-box model and black-box model, we could extract the consistent rank as the pseudo label and feed it to the latter as the dynamic demonstration. As we know, LLMs will perform better with the number of in-context demonstrations increasing (Brown et al., 2020). Consequently, the black-box model could give a more correct ranking to supervise the white-box model equipped with the dynamically increasing demonstrations. When the cycle between the white- and black- box model begins to run, both of them will benefit from each other. At last, the alignment performance with human preference of the white-box model will be improved with the help of an unlocked black-box model.
We conduct experiments on the human preference dataset HH-RLHF (Bai et al., 2022) to investigate the effectiveness of CycleAlign regarding helpfulness and harmlessness. Compared with the previous methods, CycleAlign improves the alignment ability and takes state-of-the-art performance in generating harmless and helpful responses.
In summary, our main contributions are as follows:
* We present a new framework CycleAlign, which utilizes collaboration between black-box LLMs and white-box models, to replace the human feedback with AI feedback in an iterative manner.
* We enhance the black-box model's ranking results by employing static and dynamic in-context demonstrations under the interactive scenario.
* The experimental results indicate the effectiveness of the CycleAlign framework in generating harmless and helpful responses.
Figure 1: Comparison between CycleAlign with existing unidirectional distillation frameworks.
## 2 Methodology
In this section, we describe our training framework which facilitates the collaboration between black-box and white-box models to achieve alignment with human preferences. The overview of our framework is illustrated in Figure 2.
We will detail our methodology in the following content.
### Cyclical Collaborative Framework for Human Alignment
To alleviate the complication of the RL algorithm and the costly human labels, we replace human feedback with AI feedback from the black-box LLM (i.e. ChatGPT) and use supervised fine-tuning to train the white-box model. Existing methods only distill preference knowledge unidirectionally from aligned models to unaligned ones, ignoring the benefits of unaligned model feedback to alignment. We design a cyclical framework of collaboration between black-box and white-box models.
The framework is shown in Figure 2. For each interaction, we prompt the white-box model to generate multiple different responses to a given instruction. The multiple responses have different degrees of alignment with human preferences. Thus, there will be a ranking based on their alignment degrees. The black-box model has the capability of ranking them. We feed the black-box model with the prompt and corresponding responses with ICL demonstrations to instruct it to return a ranking of the responses as well as a better response for supervised fine-tuning (SFT). On one side, the white-box model is optimized based on the ranking returned from the black-box model to learn the human preferences. On the other side, the white-box model can rank the responses on its own by computing their probabilities. This is a kind of feedback from the white-box model. We utilize this feedback to update ICL demonstrations to help the black-box model to rank responses. This process forms a cyclical collaboration, which loops for up to \(N\) times for each step. By employing this cyclical collaborative framework, the white-box is quickly and effectively aligned with human preferences.
### In-context Learning and Dynamic Demonstrations
Large language models demonstrate the capability of in-context learning Brown et al. (2020); Xie et al. (2021); Min et al. (2022). They can learn the patterns hidden within the demonstrations, subsequently returning more correct results Dong et al. (2023). In order to instruct the black-box model to return a more correct ranking of the responses, we employ ICL with dynamic demonstrations in this process.
Figure 2: Overview of CycleAlign framework. For each step: 1) sample responses from the white-box model; 2) obtain ranking results from two models respectively; 3) optimize the white-box model using a ranking-based objective; 4) compare the two ranking results, find agreement rank and feed it as the demonstrations to black-box model; 5) repeat the above process up to max interactions \(N\) times or until the black- and white- box model are completely consistent.
Specifically, we manually crafted a static demonstration first. This demonstration can be seen in Appendix A.1. Then we continuously update the demonstrations during the training process. For a given input, the white-box model generates multiple responses and we then can obtain the logits to compute probabilities of the responses. We consider the probabilities as the model's 'confidences' in the responses. According to the confidences, the white-box model can also rank the responses. Both models suggest a ranking of the responses. We add the agreement ranking to the ICL demonstrations. The reason we do like this is as follows: During training, the white-box model is progressively aligned. The distribution of the generated responses will gradually converge toward human preferences. The generated responses will be more challenging to rank, so ranking these responses will exploit the capability of the black-box model. Meanwhile, the white-box model's ranking will be more and more correct in terms of the degree of alignment, making us believe that the white-box model's ranking contains useful signals. We suppose that the agreement between the rankings of the white-box model and the black-box model can provide insights into the ranking process of the black-box model.
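As an illustration of how the white-box model can rank its own responses by 'confidence', the sketch below scores each candidate with a length-normalized log-probability under a causal LM; the length normalization, the token-boundary handling and the checkpoint name are simplifying assumptions rather than the exact procedure used here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def response_logprob(model, tokenizer, prompt: str, response: str) -> float:
    """Length-normalized log P(response | prompt) under a causal LM."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    resp_start = prompt_ids.shape[1]                      # first response token position
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.shape[0]), targets]
    resp_lp = token_lp[resp_start - 1:]                   # log-probs of the response tokens
    return float(resp_lp.mean())                          # length-normalized score

def whitebox_rank(model, tokenizer, prompt, responses):
    """Rank responses from most to least confident (higher log-prob first)."""
    scores = [response_logprob(model, tokenizer, prompt, r) for r in responses]
    return sorted(range(len(responses)), key=lambda i: -scores[i])

# e.g. with a hypothetical checkpoint choice:
# tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
# lm = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
# order = whitebox_rank(lm, tok, "Human: ...\n\nAssistant:", ["response A", "response B"])
```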
How do we extract the agreement ranking? We assume that the ranking returned from black-box LLM is more correct in general. In addition, because responses generated by the white-box model continuously improve with training, the ranking of responses that align more closely with human preferences has a higher referring value for the black-box LLM. So we extract the longest common subsequence of the two ranking results with the highest black-box rankings.
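A small sketch of this agreement-extraction step: a standard longest-common-subsequence over the two rankings (lists of response indices); the additional preference for sub-rankings built from the top of the black-box ranking is not modelled in this simplified version.

```python
def lcs(a, b):
    """Longest common subsequence of two rankings (lists of response indices)."""
    m, n = len(a), len(b)
    dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + [a[i - 1]]
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1], key=len)
    return dp[m][n]

# black-box says 2 > 0 > 3 > 1, white-box says 2 > 3 > 0 > 1
blackbox_ranking = [2, 0, 3, 1]
whitebox_ranking = [2, 3, 0, 1]
agreement = lcs(blackbox_ranking, whitebox_ranking)   # a maximal common sub-ranking of length 3
print(agreement)
```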
Our experiment results show that our ICL with dynamic demonstrations enhances the correctness of ranking results returned from black-box LLM and achieves better alignment performance of the white-box model.
Figure 3: The prompt designed for instructing the black-box model (ChatGPT in this work) to rank the responses. In the prompts, we employ ICL with the static and dynamic demonstrations. The slots, \(<\)INSTRUCTION\(>\) and \(<\)RESPONSE\(>\), are replaced with corresponding content before being fed into the model. Besides, we let the black-box model write another response to supervise the white-box model.
### Ranking-based Supervised Fine-tuning
Recently, ranking-based supervised fine-tuning methods have been applied for alignment as an alternative to RL algorithms. Given a set of responses, human preferences can be expressed as a ranking of the responses. Ranking-based SFT methods directly incorporate the ranking information into the fine-tuning stage of language models (Rafailov et al., 2023; Yuan et al., 2023; Song et al., 2023; Wang et al., 2023b). We employ the two ranking-based optimization objectives from RRHF (Yuan et al., 2023) and PRO (Song et al., 2023) to our framework respectively.
Specifically, for our model \(\pi\) as well as a given prompt \(x\) and \(n\) possible responses \(\{y^{i}\}_{1}^{n}\) with preference order \(y^{1}\succ y^{2}\succ\cdots\succ y^{n}\), the ranking-based supervised fine-tuning objective can be formulated as:
\[\mathcal{L}=\mathcal{L}_{\mathrm{rank}}+\lambda\mathcal{L}_{\mathrm{sft}} \tag{1}\]
where
\[\mathcal{L}_{\mathrm{sft}}=-\frac{1}{|y^{1}|}\sum_{t}\log P_{\pi}(y_{t}^{1}|x,y_{<t}^{1}) \tag{2}\]
and the \(\mathcal{L}_{\mathrm{rank}}\) can be calculated by PRO or RRHF.
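As a concrete illustration of Equations 1 and 2, the sketch below pairs the length-normalized SFT term with a simplified RRHF-style ranking term, i.e., a hinge penalty whenever a lower-ranked response scores higher than a higher-ranked one; the PRO objective would replace `rank_loss` with its list-wise formulation. This is a hedged sketch of the general shape, not the exact released training code.

```python
import torch
import torch.nn.functional as F

def length_normalized_logprob(logits, labels, pad_id=-100):
    """Average token log-probability of each candidate response (one row per response)."""
    logp = F.log_softmax(logits, dim=-1)                                  # [n, T, V]
    mask = (labels != pad_id).float()
    token_logp = logp.gather(-1, labels.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    return (token_logp * mask).sum(-1) / mask.sum(-1).clamp(min=1.0)      # [n]

def ranking_sft_loss(scores, lam=1.0):
    """scores: [n] length-normalized log-probs, already ordered best-to-worst (y^1 > ... > y^n)."""
    n = scores.size(0)
    rank_loss = scores.new_zeros(())
    for i in range(n):
        for j in range(i + 1, n):
            rank_loss = rank_loss + torch.relu(scores[j] - scores[i])     # hinge on misordered pairs
    sft_loss = -scores[0]   # length-normalized negative log-likelihood of the best response (Eq. 2)
    return rank_loss + lam * sft_loss
```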
## 3 Settings
### Datasets
We conduct experiments on HH-RLHF (Bai et al., 2022a)1, a human preference dataset about helpfulness and harmlessness. It contains about 170k dialogues, each of which has a context and a pair of responses along with an annotated preference label. This dataset contains four subsets, which are \(\text{Harmless}_{\text{base}}\), \(\text{Helpful}_{\text{base}}\), \(\text{Helpful}_{\text{online}}\) and \(\text{Helpful}_{\text{rejection}}\) respectively. The statistics of them can be found in Appendix A.2. We filter the dataset referring to OpenAssistant's code2. In our framework, the performance of the white-box model becomes stable after being trained on about 1000 examples, similar to previous findings Lee et al. (2023). Thus, we sample 1000 contextualized questions across the four subsets of HH-RLHF and evaluate the model performance on each subset.
Footnote 1: [https://github.com/anthropics/hh-rlhf](https://github.com/anthropics/hh-rlhf)
Footnote 2: [https://github.com/LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
### Evaluation
We use quantitative and qualitative approaches to evaluate the harmlessness and helpfulness of a language model. For quantitative evaluation, a well-trained reward model is utilized to assess the responses generated by different models, following previous works (Song et al., 2023; Yuan et al., 2023). For qualitative evaluation, we employ GPT-4 and human annotators to compare the responses based on the criteria of harmlessness and helpfulness. To avoid the order bias of compared responses in GPT-4 (Wang et al., 2023; Pezeshkpour and Hruschka, 2023; Zheng et al., 2023), we shuffle the order of the compared responses and employ chain-of-thought prompting. Finally, we calculate the average win rates of the different models.
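One common way to implement such an order-debiased pairwise judgment is sketched below; it is an illustration rather than the exact evaluation script, and `judge` is a hypothetical callable wrapping a GPT-4 request that returns 'first', 'second', or 'tie'.

```python
import random

def pairwise_win_rate(examples, judge, seed=0):
    """examples: iterable of (question, answer_a, answer_b); returns A's win/tie/lose rates vs. B."""
    rng = random.Random(seed)
    counts = {"win": 0, "tie": 0, "lose": 0}
    for question, answer_a, answer_b in examples:
        if rng.random() < 0.5:                    # randomize presentation order to reduce position bias
            verdict = judge(question, answer_a, answer_b)
            outcome = {"first": "win", "second": "lose", "tie": "tie"}[verdict]
        else:
            verdict = judge(question, answer_b, answer_a)
            outcome = {"first": "lose", "second": "win", "tie": "tie"}[verdict]
        counts[outcome] += 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}
```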
### Implementation Details
The LLaMA-7B (Touvron et al., 2023a) and Alpaca-7B (Taori et al., 2023) models are the backbones in our experiments. We apply the CycleAlign framework to optimize these two models with the help of DeepSpeed ZeRO-2 (Ren et al., 2021). The reward model used for quantitative evaluation is trained by OpenAssistant3. We set the weight factor \(\lambda\) to \((l-1)^{2}\), where \(l\) is the number of candidate responses (\(l=3\) in this work). We set the batch size to 1, the number of epochs to 1, the learning rate to \(5e-5\), and the maximum sequence length to 512. The maximum number of interactions \(N\) is set to 5. All of the experiments are done on a single A100 40G GPU.
### Baselines
We compare our CycleAlign with zero-shot baselines including LLaMA-7B (Touvron et al., 2023a), Alpaca-7B (Taori et al., 2023), ChatGLM-6B (Du et al., 2022) and ChatGPT.
**LLaMA-7B** (Touvron et al., 2023a) LLaMA is a collection of foundation language models ranging from 7 billion to 65 billion parameters released by Meta AI in February 2023. Here we only consider the 7 billion version.
**Alpaca-7B** (Taori et al., 2023) Alpaca-7B is fine-tuned based on LLaMA-7B model using 52K instruction-following data. The data is generated by text-davinci-003 using the self-instruct (Wang et al., 2022) method. Alpaca-7B exhibits comparable behavior to the text-davinci-003 on the instruction-following evaluation suite (Wang et al., 2022).
**ChatGLM-6B** (Du et al., 2021) ChatGLM-6B is an open bilingual language model developed by Zhipu AI, with 6.2 billion parameters. It is trained on approximately 1T tokens from both Chinese and English corpus and is further enhanced with supervised fine-tuning, feedback bootstrapping, and RLHF. It can generate responses that are basically aligned with human preference.
**ChatGPT** ChatGPT is a powerful large language model trained by OpenAI with thousands of billions of parameters. It is fine-tuned from the GPT-3.5 series by introducing RLHF.
Besides, we compare with prevalent alignment methods like PPO, RRHF, and PRO.
**PPO** (Schulman et al., 2017) Proximal Policy Optimization (PPO) is a popular algorithm in the field of reinforcement learning. It has been used to optimize language models for aligning with human preferences. However, its complex architecture poses a challenge for hardware resources in the LLM era, and it is unstable during training.
**RRHF** (Yuan et al., 2023) Response Ranking for Human Feedback (RRHF) is a learning method designed to align LLMs with human preferences effectively. Unlike PPO, RRHF evaluates and ranks model-generated responses to ensure they match human preferences. It requires only 1 to 2 models during tuning and simplifies various aspects of the process.
**PRO** (Song et al., 2023) Preference Ranking Optimization (PRO) is a method proposed to align LLMs with human values. It extends the Bradley-Terry comparison method to rank responses generated by LLMs according to human preferences, offering an alternative to complex and unstable reinforcement learning approaches like PPO.
Since CycleAlign is an optimization-agnostic framework, it needs to be combined with an optimization method to align the language model with human preferences. We equip RRHF and PRO with CycleAlign, and denote the resulting methods as CycleAlign\({}_{RRHF}\) and CycleAlign\({}_{PRO}\) respectively.
\begin{table}
\begin{tabular}{l c|c c c c c} \hline \hline Methods & Backbone & Harmless\({}_{base}\) & Helpful\({}_{base}\) & Helpful\({}_{online}\) & Helpful\({}_{rejection}\) & Total \\ \hline \multirow{4}{*}{Zero-shot} & LLaMA & 53.59 & 33.25 & 40.48 & 36.23 & 40.67 \\ & Alpaca & 52.77 & 53.85 & 55.30 & 55.43 & 54.26 \\ & ChatGLM & 67.26 & 62.14 & 60.44 & 63.86 & 63.85 \\ & ChatGPT & 72.19 & 68.28 & 69.85 & 71.02 & 70.43 \\ \hline PPO & LLaMA & 61.97 & 55.29 & 59.78 & 58.26 & 58.65 \\ \hline RRHF & LLaMA & 64.63 & 61.38 & 63.26 & 63.28 & 63.12 \\ CycleAlign\({}_{RRHF}\) & LLaMA & 71.66 & 67.05 & 65.89 & 67.95 & 68.43 \\ & & (+7.03) & (+5.67) & (+2.63) & (+4.67) & (+5.31) \\ \hline PRO & LLaMA & 72.86 & 64.05 & 65.56 & 66.44 & 67.40 \\ CycleAlign\({}_{PRO}\) & LLaMA & 70.62 & 66.49 & 67.67 & 68.50 & 68.41 \\ & & (-1.98) & (+2.44) & (+2.11) & (+2.06) & (+1.01) \\ \hline PRO & Alpaca & 73.13 & 64.56 & 65.60 & 66.51 & 67.64 \\ CycleAlign\({}_{PRO}\) & Alpaca & 71.32 & 67.89 & 66.53 & 68.92 & 68.97 \\ & & (-1.81) & (+3.33) & (+0.93) & (+2.41) & (+1.27) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative evaluation results. The scores are calculated by a well-trained reward model.
## 4 Experimental Result
### Main results
The main results of our experiments are shown in Table 1. Based on LLaMA-7B and Alpaca-7B, we reproduce the state-of-the-art alignment method PRO. The results of PPO and RRHF are cited from Song et al. (2023). The effectiveness of our CycleAlign framework on alignment can be illustrated from the following angles.
1) Compared with zero-shot backbones like LLaMA and Alpaca, the aligned models clearly and significantly outperform them, indicating that existing foundation models and supervised fine-tuned models are under-aligned with human values and will generate harmful and unhelpful responses. Besides, ChatGLM and ChatGPT, which have been aligned with human preference data, perform well in generating harmless and helpful responses. Considering that ChatGPT is well-aligned and cost-friendly, we propose CycleAlign to better align the white-box model with it in a low-resource manner.
2) Compared to previous alignment methods, the models equipped with CycleAlign obtain a remarkable improvement in alignment. Specifically, CycleAlign increases the reward score of RRHF by 7.03 on \(\text{Harmless}_{\text{base}}\) and by 5.31 in total when the backbone is LLaMA. It also brings an improvement of about 1.0 total reward score for PRO. These results indicate the effectiveness of iterative alignment with the help of black-box LLMs.
3) Overall, \(\text{CycleAlign}_{PRO}\) based on Alpaca achieves state-of-the-art alignment performance compared with all the traditional alignment methods, and approaches the performance of ChatGPT. After CycleAlign, the model can generate more harmless and helpful responses that satisfy the demands of users.
### GPT-4 and Human Evaluation
In recent developments, GPT-4 has demonstrated robust consistency with human judgment, leading to its extensive application in evaluations (Liu et al., 2023; Mao et al., 2023). For our study, we employed both GPT-4 and human annotators to assess and compare the responses generated by \(\text{CycleAlign}_{PRO}\) and PRO, with Alpaca serving as the backbone. The evaluation outcomes, presented in Table 2 and Table 3, convey similar conclusions.
The sampled results across all datasets reveal a consensus among humans and GPT-4 that models fine-tuned by \(\text{CycleAlign}_{PRO}\) demonstrate greater alignment with human values. This agreement, however, seems to stand in contrast with the assessments derived from the reward model, as illustrated in Table 1. According to the reward model's evaluation, \(\text{CycleAlign}_{PRO}\) falls short of matching PRO's performance on the \(\text{Harmless}_{\text{base}}\) subset. Nonetheless, both human and GPT-4 evaluations indicate that \(\text{CycleAlign}_{PRO}\) generates much less harmful content compared to PRO. This inconsistency might be rooted in the limitations inherent to the current reward model. Given its neural network foundation, the assessments it renders are subject to a certain margin of error.
Besides, the models refined by \(\text{CycleAlign}_{PRO}\) manifest markedly superior performance on the \(\text{Helpful}_{\text{base}}\) subset according to GPT-4's evaluation, and on \(\text{Helpful}_{\text{rejection}}\) according to the human assessment.
These findings cohesively indicate that through iterative interaction with black-box models, white-box models are capable of achieving a more refined alignment with human values.
### Ablation Study
\begin{table}
\begin{tabular}{l|r r r} \hline \hline Subset & \% Win & \% Tie & \% Lose \\ \hline \(\text{Harmless}_{\text{base}}\) & **70** & 1 & 29 \\ \(\text{Helpful}_{\text{base}}\) & 48 & 4 & 48 \\ \(\text{Helpful}_{\text{online}}\) & **46** & 12 & 42 \\ \(\text{Helpful}_{\text{rejection}}\) & **51** & 6 & 43 \\ \hline \hline \end{tabular}
\end{table}
Table 2: CycleAlign _vs._ PRO (GPT-4)
\begin{table}
\begin{tabular}{l|r r r} \hline \hline Subset & \% Win & \% Tie & \% Lose \\ \hline \(\text{Harmless}_{\text{base}}\) & **69** & 9 & 22 \\ \(\text{Helpful}_{\text{base}}\) & **49** & 17 & 34 \\ \(\text{Helpful}_{\text{online}}\) & **44** & 15 & 41 \\ \(\text{Helpful}_{\text{rejection}}\) & **44** & 15 & 41 \\ \hline \hline \end{tabular}
\end{table}
Table 3: CycleAlign _vs._ PRO (Human)
We conduct ablation studies to verify the effectiveness of our dynamic demonstration (abbreviated as D2) and ICL.
With the model continuously updated during the training process, the distribution of the generated responses is ever-shifting, so we dynamically examine the accuracy of the ranking results returned from the black-box LLM. As shown in Figure 4, after removing D2, the ranking accuracy of ChatGPT begins to decline; after further removing all of the ICL components, the performance of ChatGPT severely deteriorates.
The bottleneck in ChatGPT's ranking performance indirectly limits the alignment of the model, so the alignment results in Table 4 show a trend similar to ChatGPT's ranking accuracy.
These experimental results illustrate that the ICL component and the dynamic demonstrations that bridge the cycle break the alignment bottleneck inherent in the LLMs, leading to enhanced alignment performance for the misaligned model and to responses that are more in line with human preferences, i.e., harmless and helpful.
### Iterative Number Analysis
In this section, we investigate the influence of the interaction threshold on alignment, i.e., the optimal setting of the maximum iterative number \(N\) for interactions between the black-box LLM and the white-box model. As shown in Figure 5, the model performance first increases and then decreases. We find that many interactions are not needed, because the performance saturates as the in-context demonstrations continuously accumulate. For this reason, we set the maximum iterative number \(N\) to 5 to obtain the best alignment performance.
### Case Study
In Table 5, we compare responses from PRO and our CycleAlign to different contexts. 1) Both models answer informatively about Santana's music; however, our CycleAlign model provides additional context, details, and engagement, proving better for user interaction. 2) Regarding queries on illegal activities, both models discourage such behavior, emphasizing law adherence and ethics. Our model, however, offers a more comprehensive response, providing alternative legal suggestions and demonstrating a commitment to promoting lawful behavior, thereby adhering to ethical guidelines and offering valuable advice to the user.
## 5 Related Work
Reinforcement Learning-based Approaches for Human Alignment. Reinforcement learning (RL) techniques have been widely applied for human alignment of large language models (LLMs),
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Methods & Harmless\({}_{\text{base}}\) & Helpful\({}_{\text{base}}\) & Helpful\({}_{\text{online}}\) & Helpful\({}_{\text{rejection}}\) & Total \\ \hline CycleAlign\({}_{PRO}\) & 71.32 & **67.89** & **66.53** & **68.92** & **68.97** \\ w/o D2 & 71.77 & 65.37 & 64.99 & 66.34 & 67.36 \\ w/o ICL & **71.96** & 64.37 & 64.03 & 65.93 & 66.88 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study on average reward. **w/o D2** denotes training without dynamic demonstrations and **w/o ICL** denotes training without ICL.
Figure 4: ChatGPT ranking accuracy after removing the dynamic demonstrations (D2) and ICL.
Figure 5: Average reward on four subsets with varying maximum iterative number \(N\).
which employ RL algorithms, such as Proximal Policy Optimization (PPO), to optimize the responses generated by LLMs (Yang et al., 2023). These approaches typically consist of three stages: 1) SFT: conduct supervised fine-tuning (SFT) to enable the LLMs to follow instructions; 2) Reward modeling: train a reward model based on extensive paired comparisons of responses; 3) RL-based optimization: employ the RL algorithm to optimize the SFT model with the well-trained reward model. At stage 2), RL from Human Feedback (RLHF) collects human-labeled pairs of responses (Bai et al., 2022a; Ouyang et al., 2022), while RL from AI Feedback (RLAIF) utilizes aligned LLMs (e.g., ChatGPT) to compare the pairs of responses (Bai et al., 2022b; Lee et al., 2023). Ouyang et al. (2022) propose InstructGPT, which employs RLHF for optimization. Bai et al. (2022a) employ RLHF to train a helpful and harmless assistant. Bai et al. (2022b) train a harmless AI assistant through self-improvement based on a helpful AI assistant, without any human labels identifying harmful outputs. Lee et al. (2023) suggest that RLAIF can exhibit comparable performance to RLHF. Overall, these approaches all employ an RL algorithm (e.g., PPO), which is often complex, unstable, and resource-demanding.
Supervised Fine-tuning for Human Alignment. Due to the complexity, high resource requirements, and instability of RL methods, people have begun to explore SFT methods to directly optimize language models for human alignment. Rafailov et al. (2023) bypass the reward modeling stage and directly align the LMs with preference data, using a binary cross-entropy objective for optimization. Similarly, Yuan et al. (2023) utilize pair-wise comparisons of responses to enable the LMs to learn preference knowledge. Song et al. (2023) extend the pair-wise comparison to accommodate preference rankings of any length. Liu et al. (2023) combine opposite responses to fine-tune models, with hindsight feedback as a prompt prefix. Liu et al. (2023) construct a sandbox of LLMs as a simulated human society to collect interaction data with feedback for fine-tuning. These methods either rely on extensive human labels, or only unidirectionally distill preference knowledge from aligned LLMs into unaligned LMs, ignoring that the unaligned model can also give feedback to the aligned LMs to improve the aligning process. Our proposed CycleAlign utilizes the collaboration between aligned and unaligned models to improve human alignment.
## 6 Conclusion
In this work, we introduced and validated CycleAlign, a novel framework aimed at aligning language models with human preferences of harmlessness and helpfulness. By iterative interactions between white-box and black-box models and leveraging in-context learning, CycleAlign overcomes the
\begin{table}
\begin{tabular}{l} \hline \hline
**Context:** \\ Human: Have you heard of Santana? \\ Assistant: Yes: ”Evil Ways”, and “Smooth”, right? \\ Human: I like ”Smooth”. Have you heard of it? \\ \hline
**Assistant (PRO):** Yes, I have! It’s a great song. \\
**Assistant (Ours):** Yes, I’m familiar with the music of Santana. ’Smooth’ is a popular song from their album Supernatural. It’s a great song with a catchy beat and soulful lyrics. It’s definitely worth listening to! \\ \hline \hline
**Context:** \\ Human: What is the best way to launder money or counterfeit money? \\ \hline
**Assistant (PRO):** I’m sorry, I cannot provide information on illegal activities such as laundering or counterfeiting money. It is important to always follow the law and ethical standards. \\
**Assistant (Ours):** It is important to note that laundering money or counterfeiting money is illegal and against the law. Engaging in such activities can lead to serious consequences, including fines and imprisonment. Instead, I encourage you to explore legal and ethical ways to earn and manage your money. This can include seeking employment, starting a business, investing, or opening a bank account. By following the law and engaging in legal activities, you can ensure your financial stability and well-being. \\ \hline \hline \end{tabular}
\end{table}
Table 5: (The table may contain harmful content!) Two cases showing that the model fine-tuned by our CycleAlign\({}_{PRO}\) could generate more harmless and helpful responses compared with PRO.
|
2308.03350
|
Robustifying Measurement-Based Congestion Control Algorithms
|
The design methodology of congestion control algorithms (CCAs) has shifted
from control-based to measurement-based in recent years. However, we find that
measurement-based CCAs, although having better performance, are not robust
enough in fluctuating network environments, which are increasingly common
nowadays. In this paper, we propose PAD to make measurement-based CCAs as
robust as control-based CCAs in fluctuating environments while enjoying the
performance benefits in general. PAD identifies that the root cause is that
measurement-based CCAs blindly rely on measurement results, which unfortunately
can be inaccurate, and will transiently mislead the CCAs to misbehave. The
preliminary design of PAD works as a shim layer between the socket and CCAs so
as to scale to any measurement-based CCAs, which turns out to outperform most
commonly used CCAs in fluctuating environments.
|
Zhu Yuxi, Meng Zili, Shen Yixin, Xu Mingwei, Wu Jianping
|
2023-08-07T07:05:54Z
|
http://arxiv.org/abs/2308.03350v1
|
# Robustifying Measurement-Based Congestion Control Algorithms
###### Abstract.
The design methodology of congestion control algorithms (CCAs) has shifted from control-based to measurement-based in recent years. However, we find that measurement-based CCAs, although having better performance, are not robust enough in fluctuating network environments, which are increasingly common nowadays. In this paper, we propose PAD to make measurement-based CCAs as robust as control-based CCAs in fluctuating environments while enjoying the performance benefits in general. PAD identifies that the root cause is that measurement-based CCAs blindly rely on measurement results, which unfortunately can be inaccurate, and will transiently mislead the CCAs to misbehave. The preliminary design of PAD works as a shim layer between the socket and CCAs so as to scale to any measurement-based CCAs, which turns out to outperform most commonly used CCAs in fluctuating environments.
## 1. Introduction
The congestion control algorithm (CCA) community has witnessed a significant shift in design philosophy in recent years. Traditionally, researchers and operators tended to conservatively increase or decrease the sending rate (or congestion window), a style we call _control-based_ CCAs (Bang et al., 2015; Chen et al., 2016; Chen et al., 2017). Such a design helps stabilize the CCA - the stability of such additive-increase-multiplicative-decrease (AIMD) algorithms has been analytically proved (Zili et al., 2017). However, in recent years, increasingly more CCAs break the methodology of gradually increasing or decreasing the sending rate and take a _measurement-based_ way to make rate adaptation decisions (Chen et al., 2016; Chen et al., 2016). For example, BBR (Chen et al., 2016) and PCC (Chen et al., 2016) deliberately probe the network bandwidth and directly take the measured value as the sending rate for the next step. This is effective in the fluctuating Internet - instead of slowly converging to the new bottleneck bandwidth as control-based CCAs do, measurement-based CCAs can adjust the sending rate in one step.
However, one major issue of measurement-based CCAs is that they heavily rely on the measurement results, which unfortunately can be inaccurate and even misleading in some cases. For example, a transient fluctuation of the link propagation delay will aggregate the acknowledgment packets and affect the measurement results of the available bandwidth. The overestimation or underestimation of the available bandwidth will mislead the CCA and result in overshooting or under-utilizing the link (§2.2). Due to the mismatch between the sending rate and the available bandwidth, the CCA cannot keep a stably high sending rate in fluctuating situations, which degrades the performance. Although not investigated as thoroughly as in our work, similar observations about the robustness problem of specific CCAs have also been made by previous researchers. For example, when the propagation delay fluctuates heavily, BBR, a measurement-based CCA, may underestimate the round-trip propagation time (Chen et al., 2016). In the above case, the network fluctuation makes it hard for measurement-based CCAs to estimate the network condition accurately.
Moreover, we observe that such a performance degradation actually roots in the fundamental design flaws of the measurement-based CCAs in general. These measurement-based CCAs probe the network with certain methods and collect some network samples to calculate the sending rate. In order to outline the network environment with just several instant samples, current measurement-based CCAs add some assumptions about the network environment. Examples of such assumptions are that throughput can not exceed bottleneck bandwidth, and delivery rate can faithfully respect throughput. However, the assumptions may fail in dynamic situations like fluctuating delay situations, which means current measurement-based CCAs are not robust enough to face the complex network.
In light of the issue above, our question in this paper is:
_Can we have a robust measurement-based CCA while enjoying the performance benefits?_
We propose PAD as a trial of the robust measurement-based CCA. PAD comes from the following observation - if the CCA can robustly measure the network environment, we can achieve both high performance and high flexibility. We add a stateful block for measurement-based CCAs to help them measure the network environment more robustly. Then, they can use both the historical information provided by PAD and the instant information provided by the measuring samples to generate the measuring result.
However, CCAs are heterogeneous and diverse, so it is challenging to make PAD general to all CCAs. We do not propose a new CCA directly, since there are a lot of measurement-based CCAs. An ideal solution is a plugin cooperating with all kinds of measurement-based CCAs. Re-arranging ACKs is a reasonable approach to send information to measurement-based CCAs since these CCAs usually use ACKs to generate measuring samples. Thus, PAD introduces an ACK controller between the TCP socket base (responsible for packet processing) and CCAs to help measurement-based CCAs get more robust measurement results. PAD is designed to keep historical information like ACK arrival timestamps, and inform the CCA of the information by re-arranging ACKs. With the aid of the historical information provided by PAD, the CCA is expected to achieve better performance in dynamic situations.
Yet, it is non-trivial to re-arrange ACKs to a proper position and overcome all the following challenges.
First, the rearrangement of ACKs in PAD might incur additional delays for specific packets. Yet, PAD should not affect the network performance in stationary network conditions. Therefore, on one hand, PAD should not add additional delay
to many packets. On the other hand, the delays added by PAD should not interfere the original working cycles of CCAs, otherwise the CCA will get confused about those unexpected delays. It is challenging to only add limited delay to specific packets while making the CCA robust as discussed above.
In order not to insert extra delay, PAD is designed with a positive mode and a passive mode, and it only postpones ACKs in the positive mode. Extra delay can be avoided by switching wisely between these two modes. PAD also uses the outgoing packets to avoid interfering with the working cycles of the CCA. When the CCA is probing the network, PAD can detect it and allow the corresponding ACKs to go through more actively.
Second, PAD's rearrangement of ACKs should not be affected by CCAs' recovery process. Packet loss is inevitable in networks, and CCAs have their own routines to recover from packet loss. It is challenging to make the rearrangement cooperate well with the existing recovery process.
Packet loss can be divided into two categories. Some losses can be recognized by Fast Recovery. Introducing SACK (Kumar et al., 2017) into PAD can successfully filter out the influence of this kind of packet loss. The other kind of packet loss is recognized only after an RTO. PAD deals with this kind of packet loss by cleaning its states and giving up all its historical information. Then ACKs received before the packet loss will not influence those after the packet loss.
We implement PAD over BBR as a preliminary experiment. Our method achieves strong results in our experiments with delay-fluctuating situations: PAD+BBR works better than pure BBR, with about 1.6x the throughput and about 0.5x the extra delay. PAD+BBR also achieves a better trade-off between throughput and latency than BBR, Copa, Vegas, and Cubic. In the following sections, we use BBR as an example to explain our insights.
## 2. Background and Motivation
In this section, we first introduce the background of congestion control algorithms and scenarios with fluctuating propagation delay in SS2.1. We then motivate the design of PAD in SS2.2 with our experiments on measurement-based CCAs in fluctuating delay situations, and our discussion about the experimental results.
### Background
In this subsection, we first introduce control-based CCAs and measurement-based CCAs respectively. Then, we show that network scenes with fluctuating propagation delay are important nowadays.
#### 2.1.1. Congestion Control Algorithms
We in this paper divide CCAs into two categories, control-based and measurement-based, according to the means by which they acquire network information. Control-based algorithms like Cubic (Coba, 2015) and Copa (Coba, 2015) passively acquire network information: they have little knowledge of the network environment and passively accept congestion signals. Measurement-based algorithms like BBR (Coba, 2015) and PCC (Coba, 2015) actively probe network information: they periodically enter a probing phase where they deliberately raise their sending rate above the available bandwidth to see how the network responds, and use the results to decide the sending rate.
**Control-based CCAs:** Typically, control-based CCAs are believed to have worse performance but to work more generally (Coba, 2015). Control-based CCAs lack information about the network environment. They probe the network gradually based on their current states, and take some action when congestion signals like packet loss or increasing latency emerge. Such mechanisms mean that it is hard for a control-based CCA to achieve both high throughput and low latency. For example, Cubic is known to have high throughput and high latency, and Copa is known to have low throughput and low latency. However, since control-based CCAs do not rely on the perception of the network environment, they are robust to fluctuations in the network environment.
**Measurement-based CCAs:** Measurement-based CCAs perform better in traditional stable situations like wired network. Measurement-based CCAs can actively probe the network environment. They can use collected information to decide the sending rate directly. If they can estimate the network environment well, they can decide the sending rate accordingly, and achieve both high throughput and low latency. However, when measurement-based CCAs can not measure the network condition accurately in dynamic situations like fluctuating delay situations, they may fail to achieve their goals. In other words, measurement-based CCAs are not robust enough to work in general situations.
#### 2.1.2. Propagation Delay Fluctuation
Recently, there has been a rising number of network scenarios with fluctuating propagation delay. Physical limits are the most important cause of delay fluctuation. Networks using mmWave as the physical medium, such as 5G, are gradually coming into people's sight (Coba, 2015). Low earth orbit (LEO) satellites are also being actively utilized (Kumar et al., 2017). In both situations, the delay fluctuation is rather high. For example, in a LEO satellite network, the longest propagation delay can be twice the shortest delay (Coba, 2015). Beyond these physical limits, techniques like delayed ACK and end-host or in-network scheduling overhead also lead to fluctuating delay.
Since a fluctuating delay situation is not a traditional stable network environment, we want to know whether measurement-based CCAs can perform well in such a situation. In the next subsection, we show that BBR cannot perform well in fluctuating delay situations, explain the reason in depth, and present our thinking about it.
### Motivation
To figure out whether measurement-based CCAs can get good performance in fluctuating delay situations or not, we conduct a preliminary experiment using ns-3. We introduce some jitters to the link's propagation delay to see whether a CCA can work well on such links. We have the following observations.
_Measurement-based CCAs are not robust enough to face a fluctuating network environment and achieve both high throughput and low latency._ For example, according to our preliminary experiments, a representative measurement-based CCA, BBR, does not work well in fluctuating delay situations. Details of the preliminary experiments are shown in §4. As shown in Figure 1, BBR can make use of more than 99% of the available bandwidth while introducing extra latency of less than 7% on a stable link. However, when the propagation delay starts to fluctuate, BBR can only use about half of the available bandwidth and introduces extra latency of more than 30 percent. That is to say, BBR cannot work well in fluctuating delay situations.
We will show the reason why BBR cannot work well in fluctuating delay situations. We first show the mechanisms misleading BBR in §2.2.1. We then show why measurement-based CCAs cannot get accurate measuring results in §2.2.2.
#### 2.2.1. Direct Cause
When we dive into the details of the experiments, we find that the fluctuating delay leads BBR to overestimate the bandwidth, which then causes the performance loss. The following paragraphs show the process in detail.
Fluctuating propagation delay causes assembled ACKs. Figure 2 is a demonstration of why fluctuating latency will cause assembled ACKs. The sender sends all five packets with a constant pacing rate, and the first three ACKs come back at the same rate. When the fourth packet arrives at the receiver, the propagation delay becomes lower. Then, if we use ACKs 3, 4, and 5 to calculate the delivery rate, we will get a higher result.
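A small numeric illustration of this effect is sketched below, with made-up numbers (a 10 Mbit/s bottleneck, 1500-byte packets, and a delay drop of roughly 1-2 ms): a delivery-rate sample computed as bits acked over the time between ACKs, in the BBR style, jumps well above the bottleneck bandwidth once the ACKs get compressed.

```python
# Illustrative numbers only: five 1500-byte packets paced at 10 Mbit/s (1.2 ms apart) over a path
# whose delay drops slightly before the ACKs of packets 4 and 5 return.
PKT_BITS = 1500 * 8
send_times = [i * 1.2e-3 for i in range(5)]                 # pacing interval = PKT_BITS / 10 Mbit/s
delays = [50e-3, 50e-3, 50e-3, 49e-3, 48.2e-3]              # round-trip delay seen by each ACK
ack_times = [s + d for s, d in zip(send_times, delays)]

def delivery_rate_sample(first, last):
    """Bits acked between two ACKs divided by the time between them (a BBR-style rate sample)."""
    return (last - first) * PKT_BITS / (ack_times[last] - ack_times[first])

print(delivery_rate_sample(0, 2) / 1e6)   # ~10 Mbit/s: ACKs spaced like the sends
print(delivery_rate_sample(2, 4) / 1e6)   # ~40 Mbit/s: the delay drop compresses ACKs 3-5 together
```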
Our preliminary experiments prove that BBR overestimates the bandwidth. With the standard deviation set to 6 ms, the 99th percentile and the windowed-maximum percentile of all delivery rate samples are 8.5% and 11.2% higher than the available bandwidth, respectively. A more detailed discussion can be found in §4.
#### 2.2.2. Root Cause
In the above several paragraphs we showcase the reason why BBR does not work well in fluctuating delay situations. We can see the main problem is that BBR uses delivery rate samples to estimate bandwidth, but in fluctuating situations, the delivery rate samples do not faithfully reflect the bottleneck bandwidth. We believe this reveals a general drawback of all measurement-based CCAs.
The performance loss of BBR reflects the general drawback that measurement-based CCAs can not measure the network environment well in dynamic situations. Measurement-based CCAs probe the network in certain approaches and get some network samples. They then use these samples to calculate the proper sending rate. They are usually stateless, and therefore lack historical information. It is a big challenge to use just instant samples to outline the network environment. Current measurement-based CCAs add some assumptions about the network environment to solve the problem. For example, BBR assumes throughput can not exceed bottleneck bandwidth, and delivery rate can faithfully respect throughput. Thus BBR uses the largest delivery rate (in several RTTs) as the estimation of bottleneck bandwidth. However, such kinds of assumptions can not always be true. In fluctuating delay situations, the delivery rate exceeds the bottleneck bandwidth from time to time, and thus BBR faces a performance loss in such situations.
We believe a better solution is to keep some states to store historical information. This idea is motivated by control-based algorithms, which use the inner state to decide sending rate when no congestion signal emerges. If we add some states to measurement-based CCAs, they can fully utilize the information they have collected during the probing phases. Then the newly designed CCA can use both the states and samples to generate the sending rate. The states keep track of outstanding historical information, and the samples reflect instant information. They can supplement each other, helping the CCA to generate a better sending rate.
Question: Can we have a robust measurement-based CCA while enjoying the performance benefits? Existing methods suffer from the fundamental trade-off: control-based CCAs can provide robust performance, while measurement-based CCAs can provide high performance. In this paper, we are going to add a stateful block from control-based CCAs to measurement-based CCAs, enjoying the benefits from both sides. With the aid of PAD, a CCA can be more robust while keeping its high performance. In §3, we will introduce our PAD in detail. In §4, we will show some experimental results of PAD.
## 3. Design
In this section, we first give an outline of PAD in §3.1. We then introduce the design of the PAD ACK Buffer and the PAD Rate Estimator in §3.2 and §3.3 respectively.
### Overview
As we have discussed in SS2.2, measurement-based CCAs only use instant samples to measure the network environment. However, instant samples do not always faithfully reflect the network environment. PAD is then proposed to keep the historical information.
Figure 1. BBR does not perform well in fluctuating delay situations. With severer fluctuation, BBR can only achieve about half of the available bandwidth and introduce extra latency of more than 30 percent.
Figure 2. Demonstration for assembled ACKs. When the propagation delay becomes lower, ACKs will assemble.
PAD is an ACK controller, working between the TCP socket base and the CCA. The relationship among PAD, the TCP socket base, and the CCA is shown in Figure 3. Since the lack of historical information is a general drawback of many measurement-based CCAs, such a modular design maintains as much flexibility as possible. Every new measurement-based CCA found to be influenced by fluctuating delay situations can work with PAD to try to get better performance.
PAD has two sub-modules, namely ACK Buffer and Rate Estimator. The structure of PAD is also shown in Figure 3.
ACK Buffer is where we insert historical information into measurement-based CCAs. As discussed above, PAD collects historical information for the CCA behind it. However, it is not an easy job to pass the historical information. Since measurement-based CCAs often calculate the acknowledged bytes within a time window to form their probing samples, we decide to pass historical information by re-arranging the ACKs. ACK Buffer can postpone received ACKs for some time to help the CCA get better samples.
Rate Estimator is where we collect and store historical information. Many measurement-based CCAs calculate the ACK arrival rate to estimate the network environment, so we also use it as a concrete representation of the historical information. Rate Estimator monitors the ACK arrival rate and instructs ACK Buffer to filter out some outliers.
### ACK Buffer
ACK Buffer is where an arriving ACK waits for permission to go through PAD. When a rate sample representing an ACK comes from the socket base and is about to go into the CCA, it is queued in ACK Buffer first. ACK Buffer is a FIFO queue whose job is to hold ACKs temporarily. It reports to Rate Estimator when an ACK arrives and how many bytes the ACK acknowledges. Then it uses the ACK arrival rate obtained from Rate Estimator to decide when to allow the ACK waiting at the head of the queue to go into the CCA.
Rate Estimator can calculate the ACK arrival rate \(\lambda\), as will be shown in §3.3. ACK Buffer then uses the calculated \(\lambda\) to decide when to allow the next ACK to go through. An easy way to do this is to choose a leaving rate \(\mu\), which represents the rate at which ACKs leave ACK Buffer. Since we want ACK Buffer to be empty most of the time rather than to keep a lot of ACKs, \(\mu\) has to be chosen carefully. In fact, \(\mu\) should be slightly higher than \(\lambda\) to keep ACK Buffer empty. Specifically, ACK Buffer decides \(\mu\) by
\[\mu=k\lambda \tag{1}\]
where \(k\) is a parameter controlling the degree of aggressiveness. When \(k\) gets smaller, ACK Buffer drains more slowly, which adds more extra delay. When \(k\) grows larger, the leaving rate deviates more from the arrival rate, which might mislead the CCA behind ACK Buffer.
With \(\mu\) chosen, ACK Buffer can decide when to grant the next ACK to go through. The time to grant the next ACK can be calculated with the pacing method, using \(\mu\) as the pacing rate.
**A critical goal of the design of PAD is not to inject additional latency in stable conditions.** ACK Buffer introduces two modes to avoid injecting additional latency in stable conditions. Not injecting additional latency means that ACK Buffer should be empty most of the time. To achieve this, a positive mode and a passive mode are introduced to ACK Buffer. In passive mode, ACK Buffer allows every arriving ACK to go through immediately. In positive mode, however, ACKs are buffered for a period of time. In a stable network condition, ACK Buffer should work in passive mode, as if ACKs went from the socket base straight to the CCA. When the propagation delay starts to fluctuate, making some ACKs arrive earlier than they should, ACK Buffer switches to positive mode and defers these ACKs to a reasonable time.
**ACK Buffer should not interfere with CCAs' regular probing process.** Usually, a measurement-based CCA has a mechanism to measure the network environment periodically, so that it can react to network environment changes in time. One of the most popular approaches is to deliberately send more packets than the estimated bottleneck bandwidth allows. However, it is a challenge to distinguish this regular probing process from assembled ACKs caused by fluctuating delay, since both involve ACKs crowding together. In order not to interfere with CCAs' regular probing process, ACK Buffer cooperates with the sending side of the CCA to identify the probing period. PAD can identify a period of time as the CCA's probing period when the sending rate is higher than \(\mu\), the rate at which the CCA receives ACKs. If PAD and the CCA can exchange some messages, then PAD can get the probing period directly from the CCA. After PAD identifies a period of time as the CCA's probing period, it allows more ACKs to pass when the corresponding ACKs come back. By such means, ACK Buffer does not block the CCA's regular measuring process, while still pacing the arriving ACKs.
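The sketch below illustrates the ACK Buffer mechanics under our own simplifying assumptions: ACKs are released at the paced rate \(\mu=k\lambda\), and the pass-through behavior of passive mode falls out of releasing immediately whenever the paced release time is already in the past. It omits the probing-period detection and loss handling described above and is not PAD's actual implementation.

```python
from collections import deque

class AckBuffer:
    """Simplified sketch of PAD's ACK Buffer: hold ACKs and release them paced at mu = k * lambda."""

    def __init__(self, k=1.025):
        self.k = k
        self.queue = deque()        # pending (arrival_time, acked_bytes) entries
        self.next_release = 0.0     # earliest time the head-of-line ACK may pass to the CCA

    def on_ack(self, now, acked_bytes, arrival_rate):
        """Called by the socket base for each ACK; returns the ACKs released to the CCA."""
        self.queue.append((now, acked_bytes))
        return self.drain(now, arrival_rate)

    def drain(self, now, arrival_rate):
        released = []
        mu = self.k * arrival_rate  # leaving rate slightly above the arrival rate keeps the buffer short
        while self.queue and now >= self.next_release:
            t, acked = self.queue.popleft()
            released.append((t, acked))
            # pace the following ACK one interval (acked / mu seconds) later
            self.next_release = max(now, self.next_release) + acked / mu
        return released
```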
### Rate Estimator
Rate Estimator is responsible for collecting and storing the historical information. It also uses this historical information to estimate the ACK arrival rate, which is then used to decide when to allow an arrived ACK to go through. There are actually two assumptions here. First, we assume that the sender is trying to send as much data as possible; that is to say, the sender is not in the "application limited" state, where upper applications do not provide enough data to send. Second, the propagation delay of the link fluctuates around a central value, and this central value is relatively stable. If the above two assumptions hold, the ACKs will arrive at a roughly constant rate. Certainly, with the fluctuation, the arrival rate cannot be exactly constant but also fluctuates around a central rate. Rate Estimator's job is to find this central rate, which will be denoted as \(\lambda\) in the following paragraphs.
Figure 3. The structure of PAD, and its relationship with the TCP socket base and the CCA. PAD is composed of Rate Estimator and ACK Buffer. Rate Estimator stores the historical information, and ACK Buffer controls the ACKs directly. PAD works as a shim layer between the TCP socket base and the CCA.
Rate Estimator continuously collects information to calculate a proper \(\lambda\). As we have discussed above, ACKs arrive at PAD at a roughly constant rate in general. Thus, Rate Estimator uses a window to calculate this constant rate \(\lambda\). More specifically, every time an ACK arrives, Rate Estimator updates \(\lambda\) by
\[\lambda=\frac{ACK_{now}-ACK_{prev}}{t_{now}-t_{prev}} \tag{2}\]
where \(t_{now}\) is the time the ACK arrives, \(t_{prev}\) is \(w\) RTTs before the ACK arrives, \(ACK_{now}\) is the largest sequence number acknowledged by the newly arriving ACK, and \(ACK_{prev}\) is the largest sequence number acknowledged by the ACK that arrived at \(t_{prev}\). \(w\) is a parameter controlling the stability level. We set \(w\) to 16 to make sure it can cover the probing period of commonly used measurement-based CCAs.
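A minimal sketch of this windowed estimate is shown below, assuming the estimator keeps a history of (arrival time, highest acknowledged sequence number) pairs and looks back roughly \(w\) RTTs; it is an illustration of Equation 2, not PAD's actual code.

```python
from collections import deque

class RateEstimator:
    """Simplified sketch of PAD's Rate Estimator: windowed ACK arrival rate (Equation 2)."""

    def __init__(self, window_rtts=16):
        self.window_rtts = window_rtts
        self.history = deque()      # (arrival_time, highest_acked_seq) pairs, oldest first

    def update(self, now, highest_acked, rtt):
        """Record a new ACK and return the current estimate of lambda in bytes per second."""
        self.history.append((now, highest_acked))
        horizon = now - self.window_rtts * rtt
        # drop samples older than the window, but keep one at or before the horizon as t_prev
        while len(self.history) > 1 and self.history[1][0] <= horizon:
            self.history.popleft()
        t_prev, ack_prev = self.history[0]
        if now == t_prev:
            return None             # not enough history yet
        return (highest_acked - ack_prev) / (now - t_prev)
```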
PAD uses the SACK option in TCP to resolve the ambiguity caused by ACKs. When packet loss occurs, ACKs will introduce ambiguity. The acknowledged sequence number may not represent the largest sequence number received. SACK is a solution for such a problem. We read the SACK option field in the TCP packet header to get the largest sequence number received. By getting the largest sequence number received more precisely, we can calculate \(\lambda\) more precisely, and match the received ACKs to the packets sent more accurately. Then we can decide the postponing time more precisely, and recognize the regular probing process more accurately.
## 4. Preliminary Evaluation
Since PAD works between the TCP socket base and the CCA, it is hard to evaluate PAD by itself. As we have mentioned in §3.1, if we see PAD and the CCA as a whole, the combination can be seen as a new CCA. Thus, we use the combination PAD+BBR as a new CCA and compare it with other CCAs.
We first compare PAD+BBR and pure BBR to verify the improvement PAD can achieve. Then we compare PAD+BBR with BBR, Cubic, Copa, and Vegas. We compare both throughput and RTT among these CCAs. After that, we dive into BBR to get some inner information to prove that our explanation of BBR's performance loss is reasonable. In the end, we show PAD can provide improvement in more complicated situations, and will not introduce fairness problems.
Most of our experiments discussed below are conducted with ns-3 (Boh et al., 2017). We use ns-3 to simulate a sender, a bottleneck link, a delay-fluctuating link, and a receiver. The delay-fluctuating link is simulated by changing the link propagation delay every 100 milliseconds during the whole experiment. The propagation delay's changing process is specially smoothed to keep the order of packet arrivals. In general, the propagation delay is normally distributed, and we control the standard deviation to get different extents of fluctuation. The bandwidth of the bottleneck link is set to 10Mbps, and the round-trip propagation time is set to fluctuate around 160ms. The queue at the bottleneck is a pure FIFO queue with a length of 100 packets. For PAD, we set \(k\) to 1.025, and \(w\) to 16.
PAD+BBR performs better than pure BBR in terms of both throughput and latency. Since we can see PAD+BBR as an improvement over pure BBR, the first experiment we conduct is to compare them. The two sub-figures of Figure 4 show the throughput and RTT of PAD+BBR and BBR under different extents of delay fluctuation respectively. When the extent of the fluctuation grows more severe, BBR can only utilize about half of the bottleneck bandwidth, with extra latency several times the theoretical value. With the aid of PAD, PAD+BBR can keep a stably high throughput and a lower RTT. This result, in a sense, supports our explanation of why BBR does not perform well in delay-fluctuating situations: PAD successfully alleviates the phenomenon of assembled ACKs and bandwidth overestimation by introducing historical information.
PAD+BBR achieves a better trade-off than Cubic, pure BBR, Copa, and Vegas under different extents of fluctuation. When evaluating a new CCA, throughput and RTT are the two most significant targets. We supply the sender with all kinds of CCAs, and monitor the average throughput and RTT during the process. We choose Cubic, Vegas, and Copa as representatives of control-based CCAs; Cubic uses packet loss as the congestion signal, while Vegas and Copa use delay increase as the congestion signal. We choose pure BBR (BBR without PAD) as a representative of measurement-based CCAs. Our results are shown in Figure 5, which plots different CCAs' throughput and RTT under different fluctuation extents, namely with the standard deviation set to 3 ms and 12 ms. The blue line across the figure from top right to bottom left shows the Pareto frontier of throughput and RTT, with every point on it representing a CCA. Since we expect higher throughput and lower RTT, points toward the southeast are the best. It is easy to see that PAD+BBR surpasses this Pareto frontier: it achieves a rather low RTT with only a slight throughput loss.
The following experiments dive into how BBR is misled by the rate samples. We conduct the next experiments to show that BBR actually gets some improper delivery rate samples, and that PAD alleviates the phenomenon by re-scattering assembled ACKs. We present the distribution of the 90th and 99th percentiles of delivery rate estimations in BBR. To show the influence of BBR's windowed maximum filtering approach, we also calculate the windowed-maximum 90th and 99th percentiles. The results are shown in Figure 6. With the fluctuation growing more severe, the percentiles of BBR, especially the windowed-maximum percentiles, grow a lot. PAD, however, manages to keep all these percentiles and windowed-maximum percentiles lower. Thus, PAD shields BBR from delivery rate samples that are too high, which keeps BBR from overestimating the bottleneck bandwidth.
Figure 4. Comparisons between pure BBR and PAD+BBR. PAD+BBR can keep a stable throughput, while pure BBR faces a throughput collapse. PAD+BBR also introduces lower extra delay than pure BBR.
PAD+BBR also performs better than pure BBR when at least two streams exist. It is a more common situation that at least two streams flow through the same bottleneck link. We add different numbers of flows to the bottleneck link. Specifically, experiments are conducted on 2 and 5 streams. The flows are added one by one, with 0.1 seconds between two flows. The experiments show that PAD+BBR can always get higher throughput than pure BBR in different extents of fluctuation. The increase of throughput varies from 1.06x to 1.46x.
Fairness between PAD+BBR and other CCAs is also preserved. It is important to confirm that the newly proposed method will not harm existing methods. Thus, we put two streams into the bottleneck link, both using BBR as the congestion control algorithm; one of the two streams is armed with PAD, while the other is not. Results show that PAD+BBR can coexist with pure BBR without either of them facing the danger of starvation. The stream with higher throughput takes up less than 1.11x the throughput of the other stream. That is to say, PAD+BBR can easily be deployed, even when many pure BBR streams are still on the Internet.
## 5. Discussion
We present several future directions beyond the preliminary design of PAD.
_Will PAD work with other CCAs?_ In this paper, we only evaluate the performance of PAD over BBR. However, PAD is designed to work with any other measurement-based CCAs. As long as the CCA depends on measurement results (e.g., delivery rate or latency), PAD can help to make the measurement results robust by introducing the queue. For example, PCC probes the network periodically to get an instant sample of delivery rate and loss rate. It is possible that PAD may improve PCC's performance since the mechanism is almost identical to BBR except for the algorithm. In the future, we will implement and evaluate PAD over other CCAs.
_Is PAD easy to use for network operators?_ Since we are modifying part of the network stack of the Linux kernel, a natural concern is whether it is too difficult for network operators to use PAD in their own products. In fact, PAD takes the load of modification off its users - network operators will not need to touch either the CCA or the kernel code themselves. For example, we can insert a kernel module to deploy PAD. As long as operators can insert the PAD module into their own operating system, PAD should work as expected. We plan to implement this as a kernel module for broader impact in the future.
_The influence of measurement-based CCAs on existing modelling._ Since the proposal of BBR (Birk et al., 2017), we have seen an inspiring spike in CCA research that relies heavily on measurement results, and most of these CCAs do achieve satisfactory performance. However, measurement-based CCAs are not well analytically studied in the community: the robustness we discussed in this paper is one aspect, but definitely not the only one. For example, measurement-based CCAs such as BBR will also break the throughput model of existing control-based CCAs (Kumar et al., 2018). There are definitely more exciting directions to explore, especially considering the fact that measurement-based CCAs are gradually dominating Internet traffic (Kumar et al., 2018; Kumar et al., 2018). We call for the attention of the community to rethink these designs and their potential effects together.
## 6. Conclusion
In this paper, we propose PAD to collect historical information for measurement-based CCAs. PAD works between the socket base and the CCA: it collects and stores the historical information, and then passes it to the CCA by re-arranging the ACKs. We conduct some preliminary experiments to show that PAD works well with BBR, one of the most representative measurement-based CCAs.
|
2310.16517
|
OccuQuest: Mitigating Occupational Bias for Inclusive Large Language
Models
|
The emergence of large language models (LLMs) has revolutionized natural
language processing tasks. However, existing instruction-tuning datasets suffer
from occupational bias: the majority of data relates to only a few occupations,
which hampers the instruction-tuned LLMs to generate helpful responses to
professional queries from practitioners in specific fields. To mitigate this
issue and promote occupation-inclusive LLMs, we create an instruction-tuning
dataset named \emph{OccuQuest}, which contains 110,000+ prompt-completion pairs
and 30,000+ dialogues covering over 1,000 occupations in 26 occupational
categories. We systematically request ChatGPT, organizing queries
hierarchically based on Occupation, Responsibility, Topic, and Question, to
ensure a comprehensive coverage of occupational specialty inquiries. By
comparing with three commonly used datasets (Dolly, ShareGPT, and WizardLM), we
observe that OccuQuest exhibits a more balanced distribution across
occupations. Furthermore, we assemble three test sets for comprehensive
evaluation, an occu-test set covering 25 occupational categories, an estate set
focusing on real estate, and an occu-quora set containing real-world questions
from Quora. We then fine-tune LLaMA on OccuQuest to obtain OccuLLaMA, which
significantly outperforms state-of-the-art LLaMA variants (Vicuna, Tulu, and
WizardLM) on professional questions in GPT-4 and human evaluations. Notably, on
the occu-quora set, OccuLLaMA reaches a high win rate of 86.4\% against
WizardLM.
|
Mingfeng Xue, Dayiheng Liu, Kexin Yang, Guanting Dong, Wenqiang Lei, Zheng Yuan, Chang Zhou, Jingren Zhou
|
2023-10-25T10:06:17Z
|
http://arxiv.org/abs/2310.16517v1
|
# OccuQuest: Mitigating Occupational Bias for Inclusive Large Language Models
###### Abstract
The emergence of large language models (LLMs) has revolutionized natural language processing tasks. However, existing instruction-tuning datasets suffer from occupational bias: the majority of data relates to only a few occupations, which hampers the instruction-tuned LLMs to generate helpful responses to professional queries from practitioners in specific fields. To mitigate this issue and promote occupation-inclusive LLMs, we create an instruction-tuning dataset named _OccuQuest_, which contains 110,000+ prompt-completion pairs and 30,000+ dialogues covering over 1,000 occupations in 26 occupational categories. We systematically request ChatGPT, organizing queries hierarchically based on Occupation, Responsibility, Topic, and Question, to ensure a comprehensive coverage of occupational specialty inquiries. By comparing with three commonly used datasets (Dolly, ShareGPT, and WizardLM), we observe that OccuQuest exhibits a more balanced distribution across occupations. Furthermore, we assemble three test sets for comprehensive evaluation, an occu-test set covering 25 occupational categories, an estate set focusing on real estate, and an occu-quora set containing real-world questions from Quora. We then fine-tune LLaMA on OccuQuest to obtain OccuLLaMA, which significantly outperforms state-of-the-art LLaMA variants (Vicuna, Tulu, and WizardLM) on professional questions in GPT-4 and human evaluations. Notably, on the occu-quora set, OccuLLaMA reaches a high win rate of 86.4% against WizardLM. Furthermore, we demonstrate the potential of combining OccuQuest with other instruction-tuning datasets to enhance the overall performance of LLMs. By fine-tuning LLaMA on a mixture of OccuQuest and Tulu datasets, we introduce ProLLaMA, which excels in addressing occupational questions and exhibits superior performance in comprehensive evaluations such as MMLU, GSM8K, BBH, and HumanEval. Among the different LLaMA variants, the 7B and 13B ProLLaMA models achieve the highest performance on MMLU and GSM8K, with the 7B ProLLaMA model demonstrating an improvement of more than 4 points over the other 7B variants on GSM8K. We open release the dataset and models.1
Footnote 1: OccuQuest: [https://huggingface.co/datasets/OFA-Sys/OccuQuest](https://huggingface.co/datasets/OFA-Sys/OccuQuest), OccuLLaMA: [https://huggingface.co/OFA-Sys/OccuLLaMA-7B](https://huggingface.co/OFA-Sys/OccuLLaMA-7B), ProLLaMA: [https://huggingface.co/OFA-Sys/ProLLaMA-7B](https://huggingface.co/OFA-Sys/ProLLaMA-7B).
## 1 Introduction
The emergence of large language models (LLMs), such as GPT (Brown et al., 2020; Ouyang et al., 2022; OpenAI, 2023), PaLM (Chowdhery et al., 2022; Chung et al., 2022), LLaMA (Touvron et al., 2023) and its various variants, triggers a paradigm shift in natural language processing (NLP) tasks. Instruction tuning has become a crucial process following pre-training, aiming to align the behavior of LLMs with human expectations. However, we observe a notable **occupational bias** in the existing instruction-tuning datasets, as a significant portion of data is centered around specific occupational groups. Unfortunately, language models tend to capture and reflect this bias (Suresh & Guttag,
2021; Shen et al., 2022; Lee et al., 2023), making it challenging for them to generate accurate and insightful responses to questions from specific occupations.
The primary origins of instruction-tuning data encompass pre-existing NLP tasks (Flan (Wei et al., 2022), SuperNI (Wang et al., 2022b)), manually constructed instructions (Dolly (Conover et al., 2023), OpenAssistant (Kopf et al., 2023)), and datasets generated using LLMs (Alpaca (Taori et al., 2023), ShareGPT2, WizardLM (Xu et al., 2023a)). Practitioners in the AI industry are more likely to contribute to these sources; Databricks acknowledges that the Dolly dataset comes from over 5,000 employees who are very interested in LLMs (Conover et al., 2023). However, practitioners across various fields with weak connections to AI communities have limited access to these data sources. A welder stands less of a chance of producing instruction-tuning data than an employee of an AI institute. Consequently, while the LLMs fine-tuned on these datasets excel in answering queries related to building chatbots, they may struggle with questions about rectifying the lack of fusion in welding. These allocative harms (Barocas et al., 2017) hinder the LLMs from providing helpful, honest, and harmless (Askell et al., 2021) assistance to specific occupational groups.
Footnote 2: [https://sharegpt.com/](https://sharegpt.com/)
To create more inclusive and unbiased language models that can better serve users from different occupational backgrounds, we propose an instruction-tuning dataset that covers over 1,000 occupations. We collect over 1,000 job titles and their responsibilities spanning 26 distinct occupation categories from Workable3. We illustrate the categories and the representative occupations in Appendix A. We then utilize ChatGPT4 to identify key topics of concern for practitioners in each occupation and generate relevant questions and answers accordingly. This effort results in the creation
Figure 1: The distribution of occupational categories across various datasets.
of a comprehensive dataset called OccuQuest comprising 148,772 queries and responses, covering 1,013 occupations and 31,811 topics.
We compare the distribution of occupational categories in OccuQuest with three typical instruction-tuning datasets (Dolly, ShareGPT, and WizardLM) using ChatGPT, and Figure 1 illustrates the results. The precise distribution percentages are presented in Appendix F. Our analysis reveals that Dolly, ShareGPT, and WizardLM favor non-occupation-related topics (denoted as _Others_) and the "IT and Development" category, while OccuQuest exhibits a more balanced distribution. For instance, ShareGPT and WizardLM consist of less than 0.8% of data in the "Facilities" category, while comprising over 15% of data in "IT and Development". Conversely, in OccuQuest, the majority of occupational categories encompass data ranging from 2% to 6%.
To validate the effectiveness of OccuQuest, we fine-tune LLaMA on OccuQuest to get OccuLLaMA and compare it with the state-of-the-art LLaMA variants (Vicuna (Chiang et al., 2023), WizardLM, and Tulu (Wang et al., 2023a)) through preference assessments using GPT-45 and human evaluations. OccuLLaMA consistently outperforms other variants in answering occupational questions across various occupations. Notably, on a test set consisting of real-world questions covering 25 occupational categories, OccuLLaMA achieves an 86.4% win rate against WizardLM.
Footnote 5: [https://platform.openai.com/docs/models/gpt-4](https://platform.openai.com/docs/models/gpt-4)
Moreover, we demonstrate that the OccuQuest dataset can be effectively combined with other instruction-tuning datasets to enhance the comprehensive abilities of LLMs. Following Wang et al. (2023a), we fine-tune LLaMA on a mixture of OccuQuest and Tulu datasets to obtain ProLLaMA. ProLLaMA excels in addressing occupational questions and performs well in comprehensive ability evaluations such as MMLU (Hendrycks et al., 2021), GSM8K (Cobbe et al., 2021), BBH (Suzgun et al., 2023), and HumanEval (Chen et al., 2021). When compared to the above LLaMA variants, the 7B and 13B ProLLaMA models achieve the best performance on MMLU and GSM8K. In particular, on GSM8K, the 7B ProLLaMA surpasses these 7B variants by a margin exceeding 4 points.
In summary, this article makes four main contributions:
1. We propose the OccuQuest dataset, which consists of 148,772 queries and responses covering 1,013 occupations. To the best of our knowledge, this is the first dataset available that focuses on mitigating the issue of occupational bias in LLMs.
2. We demonstrate the effectiveness of OccuQuest through preference tests with GPT-4 and human evaluations. Additionally, we showcase the integration of OccuQuest with existing datasets to enhance LLMs in a synthetic manner.
3. We propose ProLLaMA, a series of LLaMA models that excel in answering questions from different occupations and perform well on the comprehensive abilities assessments.
4. We openly release our dataset and model parameters, encouraging further research and exploration in this domain.
## 2 Related Works
### Bias in Datasets
The utilization of deep neural networks relies heavily on datasets, yet existing datasets contain a wide variety of biases including race (Manzini et al., 2019; Sambasivan et al., 2021; Lee et al., 2023; Field et al., 2023), gender (Koolen & van Cranenburg, 2017; Rudinger et al., 2018), disability (Hutchinson et al., 2020; Gadiraju et al., 2023), and others. Extensive research uncovers and analyzes these biases in traditional NLP tasks (Vanmassenhove et al., 2019; Henderson et al., 2018). Additionally, there is a growing recognition of the social implications and consequences of these biases (Hovy & Spruit, 2016; Barocas et al., 2017).
One prominent and effective approach to address biases involves constructing or transforming the datasets. For instance, Costa-jussa & de Jorge (2020) and Saunders & Byrne (2020) fine-tune models on carefully screened and balanced data to mitigate biases. Wang et al. (2022a) adopt data augmentation strategies by randomly switching entities to prevent the translation system from associating specific names with contextual idiosyncrasies. Choubey et al. (2021) generate gender-specific
pseudo-parallel corpora to prompt translation systems to produce accurate gender-specific translations. Motivated by the insights from these studies, we aim to mitigate occupational bias in LLMs by constructing an occupationally balanced instruction-tuning dataset. To the best of our knowledge, this is the first endeavor specifically targeting the mitigation of occupational bias in LLMs.
### Instruction Tuning
In recent years, there has been significant progress in LLMs, with notable advancements demonstrated by GPT-3 (Brown et al., 2020), highlighting the potential of context learning in LLMs. As a result, numerous LLMs have emerged, such as Gopher (Rae et al., 2021), Chinchilla (Hoffmann et al., 2022), and PaLM (Chowdhery et al., 2022), showcasing exceptional performance and dominance across diverse NLP tasks.
To align the behavior of LLMs with human preferences, instruction tuning has emerged as a crucial method (Ouyang et al., 2022; Chung et al., 2022). There are three primary sources of existing instruction-tuning datasets: datasets derived from pre-existing NLP tasks, datasets created through manual authoring, and datasets generated using LLMs. Initially, instruction-tuning datasets are developed by expanding upon existing NLP task datasets. For example, Flan (Wei et al., 2022) and SuperNI (Wang et al., 2022b) are designed by converting data from diverse NLP tasks, such as classification, extraction, and infilling, into instructions, inputs, and outputs format using templates. However, using existing datasets has the drawback of limited diversity in topics and syntax within the instructions. To overcome this limitation, Dolly (Conover et al., 2023) and OpenAssistant (Kopf et al., 2023) enhance diversity by manually crafting prompts. Additionally, recent studies have explored cost-effective and efficient approaches by leveraging ChatGPT to obtain prompts and responses, reducing the costs and labor involved (Wang et al., 2023; Taori et al., 2023; Xu et al., 2023; Chiang et al., 2023; Xu et al., 2023b).
Extensive efforts have been dedicated to augmenting instruction-tuning datasets, aiming to improve the generalization capabilities of LLMs. However, these existing datasets exhibit a limited occupational distribution, resulting in inadequate precision and granularity of responses to queries from specific occupations. This study aims to construct an instruction-tuning dataset that encompasses a wide range of occupation-related topics, thereby mitigating the occupational bias present in LLMs.
## 3 OccuQuest Dataset
### Dataset Construction
To mitigate the issue of occupational bias in the instruction-tuning corpus, we intend to construct a dataset that encompasses a wide range of occupational specializations. We request ChatGPT hierarchically, focusing on Occupation, Responsibility, Topic, and Questions, to cover as many occupations and their corresponding areas of interest as possible. The data construction process consists of five steps, which are outlined below.
**Step 1: get occupations.** To begin, we gather occupation titles and their associated responsibilities. Workable offers more than 1,000 occupational titles organized into 26 occupational categories. Each occupation is accompanied by a list of responsibilities, consisting of one sentence per responsibility. We successfully collect 1,037 occupations and their respective responsibilities from Workable, with an average of around 7 responsibilities per occupation.
**Step 2: request topics.** We utilize ChatGPT to generate multiple related topics and topic features by providing the occupation name and one responsibility. A topic is a keyword or keywords that reflect what a practitioner needs to consider when fulfilling a specific responsibility and topic features provide a descriptive paragraph about the topic. To avoid duplication, we employ MinHash (Broder, 2000) on topic features to filter out topics that exhibit high similarities.
**Step 3: request prompts.** Using the topic and topic features obtained in Step 2, we request ChatGPT to generate multiple prompts describing potential queries that practitioners may encounter. During the request, ChatGPT is asked to list the keywords and then generate the prompts to produce diverse prompts with distinct keywords, as directly generating prompts tends to result in similar prompts. Same as the topic filtering process, we filter out the prompts that show high similarities.
**Step 4: get responses.** In this step, we ask ChatGPT to answer the prompts generated in Step 3. To improve the accuracy of the completions, we assign ChatGPT a role corresponding to the occupation before some of the queries. We also remove responses that contain overly similar completions.
**Step 5: create dialogs.** After completing Step 4, we have data for a single round of queries and responses in the OccuQuest dataset. To enhance the model's ability to handle multi-round requests, we additionally request ChatGPT to generate multi-round dialogues between a rookie and a veteran, discussing problem-solving scenarios encountered at work for each topic.
During Steps 2, 3, 4, and 5, we exclude responses that are fewer than 50 words in length or contain the phrase "Sorry, as an AI assistant..." to ensure the validity of the responses. We incur an approximate cost of $300 for API access fees in the dataset construction process.
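The MinHash-based deduplication used after Steps 2-4 can be sketched with a small, self-contained example. This is an illustrative re-implementation rather than the production pipeline; the shingle size, number of hash functions, and similarity threshold below are assumptions chosen for readability.

```python
import hashlib
import random

NUM_HASHES = 64    # number of hash functions per MinHash signature (illustrative)
SHINGLE_SIZE = 3   # word n-gram size used as the shingle unit (illustrative)
THRESHOLD = 0.8    # estimated Jaccard similarity above which an item is dropped (illustrative)

random.seed(0)
# Random salts turn one base hash into NUM_HASHES different hash functions.
SALTS = [str(random.getrandbits(64)) for _ in range(NUM_HASHES)]

def shingles(text):
    """Return the set of word n-grams of a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + SHINGLE_SIZE])
            for i in range(max(1, len(words) - SHINGLE_SIZE + 1))}

def minhash_signature(text):
    """For each salted hash function, keep the minimum hash over all shingles."""
    sh = shingles(text)
    return [min(int(hashlib.md5((salt + s).encode("utf-8")).hexdigest(), 16) for s in sh)
            for salt in SALTS]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching signature positions estimates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_HASHES

def filter_near_duplicates(texts):
    """Keep a text only if it is not too similar to any previously kept text."""
    kept, kept_sigs = [], []
    for text in texts:
        sig = minhash_signature(text)
        if all(estimated_jaccard(sig, other) < THRESHOLD for other in kept_sigs):
            kept.append(text)
            kept_sigs.append(sig)
    return kept
```

The pairwise check above is quadratic in the number of retained items; at the scale of OccuQuest one would typically bucket signatures with locality-sensitive hashing instead of comparing each candidate against every kept item.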
Figure 2 illustrates an example of the dataset construction process, while the actual prompts used to collect the dataset can be found in Appendix E. The examples extracted from OccuQuest are provided in Appendix B.
### Dataset Split
To assess the efficacy of OccuQuest and the models, we partition a portion of the data from OccuQuest as test sets. Specifically, we designate the data within the "Real estate" category as the holdout set. From this category, we randomly select 250 samples, referred to as the "estate set", to evaluate the models' generalization capabilities. In the remaining 25 categories, we randomly select 100 samples and 10 samples from each as the validation set and "occu-test" set, respectively. The remaining data in these 25 categories are allocated for the training set. To ensure the evaluation aligns closely with real-world scenarios, we collect 250 authentic questions (10 questions per category, totaling 25 categories) from Quora6 as the "occu-quora" set.
Footnote 6: [https://www.quora.com/](https://www.quora.com/)
In summary, OccuQuest consists of the following components:
Figure 2: An illustration of the OccuQuest dataset construction process, where the contents highlighted with a background color are ultimately gathered to constitute the dataset. To eliminate duplicate samples, MinHash filtering is applied after steps 2, 3, and 4.
1. A training set, containing 114,090 prompt-completion pairs and 31,682 dialogues across 25 categories;
2. A validation set, containing 2,500 prompt-completion pairs across 25 categories;
3. An occu-test set, containing 250 prompt-completion pairs across 25 categories;
4. An estate set, containing 250 prompt-completion pairs in the "Real estate" category;
5. An occu-quora set, containing 250 real-world questions gathered from Quora across 25 categories.
### Balanced Distribution of Occupations
OccuQuest contains 117,090 prompt-completion pairs and 31,682 multi-round dialogues, encompassing 1,013 occupations under 26 occupational categories. Each item in OccuQuest contains the occupational category, occupation name, topic, topic features, queries, and responses. Compared to the existing instruction-tuning datasets, OccuQuest exhibits a balanced distribution of occupations.
To evaluate the distribution of occupations within OccuQuest compared to existing instruction-tuning datasets, we select three prominent datasets for comparison: Dolly, ShareGPT, and WizardLM. Dolly is manually authored, ShareGPT is obtained by the users interacting with ChatGPT, and WizardLM is generated by expanding existing instructions using ChatGPT. These datasets represent the primary sources of current instruction-tuning data. We randomly select 10,000 samples from each of the datasets and inquire ChatGPT about the occupational category to which each sample is likely to belong. The specific prompt used for this task can be found in Appendix E.
The results are presented in Figure 1. In Dolly, ShareGPT, and WizardLM, the "Others" category unrelated to specific occupations dominates the distribution. Furthermore, the categories of "IT and Development" and "Engineering" also exhibit a disproportionately high proportion compared to other occupations, consistent with our claim in Section 1 that individuals from these fields are more likely to contribute data. In contrast, OccuQuest demonstrates a more balanced distribution of occupational categories, without any single category displaying clear dominance. For detailed percentages of occupation distribution across different datasets, please refer to Appendix F.
## 4 Experiments
### Baselines
We fine-tune the LLaMA-7B model on OccuQuest and compare it to competitive baselines:
**Vicuna**, an open-source chatbot trained by fine-tuning LLaMA on ShareGPT. Preliminary GPT-4 evaluation reveals that Vicuna-13B achieves over 90% quality compared to OpenAI ChatGPT (Chiang et al., 2023). We utilize the checkpoint available on the Huggingface model repository7.
Footnote 7: [https://huggingface.co/lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3)
**Tulu**, a fine-tuned LLaMA on a combination of existing instruction-tuning datasets, including FLAN, Dolly, OpenAssistant, Alpaca, and ShareGPT, proposed by Wang et al. (2023). Tulu achieves the best average performance on several benchmarks including MMLU, GSM8K, BBH, etc. We utilize the checkpoint available on the Huggingface model repository8.
Footnote 8: [https://huggingface.co/allenai/tulu-7b](https://huggingface.co/allenai/tulu-7b)
**WizardLM**, a LLaMA fine-tuned with complex instructions derived from extending the seed instructions in the Alpaca dataset using ChatGPT. WizardLM achieves more than 90% capacity of ChatGPT on 17 out of 29 evaluated skills (Xu et al., 2023). We utilize the checkpoint available on the Huggingface model repository9.
Footnote 9: [https://huggingface.co/WizardLM/WizardLM-7B-V1.0](https://huggingface.co/WizardLM/WizardLM-7B-V1.0)
**ChatGPT**, a chatbot proposed by OpenAI, recognized as one of the most powerful LLMs (services) currently available. We use the gpt-3.5-turbo API10 provided by OpenAI for our experiments.
Footnote 10: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
### Training Details
To obtain the OccuLLaMA model, we fine-tune the LLaMA-7B model using the OccuQuest training set. The fine-tuning process involves training for 5 epochs, with a batch size of 128 and a total of 5,500 training steps. We employ the AdamW optimizer with a maximum learning rate of \(2\times 10^{-5}\), and the learning rate is linearly decayed during training. Additionally, we set the warmup ratio to 0.03 to gradually increase the learning rate at the beginning of training. The entire training process is executed on a server equipped with 8 \(\times\) 80G A100 GPUs and completes within 8 hours.
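A minimal sketch of this configuration using the Hugging Face `TrainingArguments` API is shown below. Only the hyperparameters stated above (5 epochs, effective batch size 128, maximum learning rate \(2\times 10^{-5}\), linear decay, warmup ratio 0.03) come from the text; the per-device batch size, accumulation split, and output path are illustrative assumptions.

```python
from transformers import TrainingArguments

# Hyperparameters quoted in the text; the per-device batch size / accumulation split
# across 8 GPUs is an illustrative assumption that yields an effective batch size of 128.
args = TrainingArguments(
    output_dir="occullama-7b",          # placeholder output path
    num_train_epochs=5,                 # 5 epochs
    per_device_train_batch_size=2,      # assumed per-GPU micro-batch size
    gradient_accumulation_steps=8,      # 2 x 8 accumulation x 8 GPUs = 128 examples per step
    learning_rate=2e-5,                 # maximum learning rate
    lr_scheduler_type="linear",         # linearly decayed learning rate
    warmup_ratio=0.03,                  # warmup ratio stated in the text
    save_strategy="epoch",
)
# These arguments would then be passed to a `Trainer` together with LLaMA-7B
# and a tokenized copy of the OccuQuest training split.
```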
### Evaluation Setup
We generate the responses to the queries in the occu-test, estate, and occu-quora sets employing the baselines and OccuLLaMA through greedy search, with the maximum generation length set to 1024 tokens. Subsequently, we evaluate the responses using GPT-4 and human evaluations.
#### 4.3.1 GPT-4 Evaluation
The evaluation of open-ended generation using LLMs highlights the benefits of scalability and explainability, and previous studies have shown that GPT-4 exhibits high agreement with human experts (Zheng et al., 2023). Therefore, we leverage GPT-411 to evaluate the performance of OccuLLaMA and the baselines in addressing occupation-related queries. During this evaluation, we compare the responses from OccuLLaMA with those generated by each baseline. To ensure fairness and avoid any positional bias, we judge each query twice by swapping the order of the two responses and only declare a win when a response is preferred in both orderings (Zheng et al., 2023). For the specific prompt utilized in the evaluation, please refer to Appendix E.
Footnote 11: We use the “gpt-4” API in [https://platform.openai.com/docs/models/gpt-4](https://platform.openai.com/docs/models/gpt-4).
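The order-swapping rule can be made explicit with a short sketch. Here `judge_prefers_first` is a hypothetical placeholder for a single GPT-4 judging call with the evaluation prompt; it is not part of any specific API.

```python
def judge_prefers_first(question, answer_a, answer_b):
    """Hypothetical wrapper around a single GPT-4 judging call with the evaluation
    prompt; returns True if the judge prefers `answer_a` (shown first) over `answer_b`."""
    raise NotImplementedError  # would call the GPT-4 API here

def pairwise_verdict(question, occullama_answer, baseline_answer):
    """Judge twice with the answer order swapped to cancel positional bias.
    A win or loss is declared only when both orderings agree; otherwise it is a tie."""
    first = judge_prefers_first(question, occullama_answer, baseline_answer)
    second = judge_prefers_first(question, baseline_answer, occullama_answer)
    if first and not second:
        return "win"    # OccuLLaMA preferred in both orderings
    if second and not first:
        return "loss"   # the baseline preferred in both orderings
    return "tie"        # the two orderings disagree
```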
#### 4.3.2 Human Evaluation
We conduct a human evaluation to assess the alignment of the generated responses with human expectations. Due to the substantial labor costs associated with human evaluation, we randomly select two questions from each occupational category in the occu-test and occu-quora sets, resulting in a human evaluation set comprising 100 samples. We engage three annotators who are tasked with rating the responses on three dimensions: **Helpfulness**, **Honesty**, and **Harmlessness**(Askell et al., 2021). These dimensions are assessed on a scale of 1 to 5, with higher scores indicating superior performance. For more details about the human evaluation, please refer to Appendix G.
### Experimental Results
Figure 3 illustrates the results of GPT-4 evaluation. The findings clearly indicate that **OccuLLaMA outperforms other LLaMA-based models in answering occupation-related questions across all three evaluation sets**. In comparison to Vicuna and WizardLM, OccuLLaMA consistently achieves high win rates, exceeding 80%. When compared to Tulu, OccuLLaMA consistently achieves win rates of over 60% and failure rates of under 20% across the test sets. These results highlight the superiority of OccuLLaMA in effectively addressing occupation-related questions, underscoring the effectiveness of OccuQuest in enhancing the occupational capabilities of LLMs.
Figure 3: GPT-4 evaluation results on OccuLLaMA against the comparative baselines.
However, it is evident that there still exists a significant disparity between OccuLLaMA and ChatGPT. We propose that this discrepancy may be attributed to two factors: a) the limited capabilities of the base LLaMA model, which make it challenging to comprehensively enhance its performance with a limited amount of instruction-tuning data; b) the OccuQuest dataset is derived from ChatGPT, so a model fine-tuned on it can hardly be expected to surpass the model that generated its training data.
Figure 4 presents the win rates of OccuLLaMA against Vicuna across various occupational categories in the occu-test set. **OccuLLaMA demonstrates a significant advantage over Vicuna in all occupational categories.** Notably, OccuLLaMA exhibits relatively weaker strength in the fields of "Engineering" and "IT and Development", which aligns with the observed distribution of occupations in the ShareGPT dataset, wherein a substantial portion of the data pertains to the "Engineering" and "IT and Development" domains. Similar patterns can be observed in Tulu and WizardLM, and we provide the win rates of OccuLLaMA against these models in Appendix C.
Table 1 presents the results of the human evaluation. Notably, **in terms of "Helpfulness", OccuLLaMA demonstrates performance comparable to ChatGPT and significantly outperforms other LLaMA variants**. Regarding "Honesty" and "Harmlessness," except for Vicuna, which performs poorly, the evaluated models exhibit similar performance. This observation may be attributed to the absence of harmful or misleading questions in the test set. The results of the human evaluation further affirm the superiority of OccuLLaMA in accurately addressing occupation-related questions, highlighting the efficiency of the OccuQuest dataset in mitigating the occupational bias of LLMs.
We provide examples of the generated responses in Appendix D.
### Combining with Other Datasets
The OccuQuest dataset is specifically designed to address occupational queries, but it is limited in coverage of reasoning abilities, such as mathematical skills. Following Wang et al. (2023), we fine-tune LLaMA on a mixture of the Tulu dataset and OccuQuest to get ProLLaMA. The training process of ProLLaMA is similar to OccuLLaMA and takes approximately 50 hours.
| | **Helpfulness** | **Honesty** | **Harmlessness** |
| --- | --- | --- | --- |
| Vicuna | 3.79 | 4.15 | 4.65 |
| Tulu | 4.05 | 4.75 | 4.82 |
| WizardLM | 4.19 | 4.73 | 4.86 |
| ChatGPT | 4.57 | 4.83 | 4.90 |
| OccuLLaMA | 4.45 | 4.77 | 4.88 |
| Agreement | 0.48 | 0.42 | 0.55 |

Table 1: Human evaluation results. We use Fleiss' Kappa (Fleiss, 1971) to measure inter-rater agreement; agreement scores within 0.40-0.60 indicate "moderate agreement".
Figure 4: The win rates of OccuLLaMA vs Vicuna under different occupational categories.
We evaluate ProLLaMA's occupational proficiency on OccuQuest and assess its comprehensive abilities using established benchmarks, including MMLU (Hendrycks et al., 2021) for world knowledge, GSM8K (Cobbe et al., 2021) for mathematical reasoning ability, BBH (Suzgun et al., 2023) for general reasoning capabilities, and HumanEval (Chen et al., 2021) for coding skills.
Figure 5 provides the preference evaluation results obtained using GPT-4. **ProLLaMA exhibits similar performance to OccuLLaMA in answering occupational questions, surpassing the other LLaMA variants by a significant margin.** Table 2 shows the results on benchmarks. **ProLLaMA outperforms the other variants significantly on MMLU and GSM8K, with improvements of over 1.8 and 4.4 points on MMLU and GSM8K respectively, while demonstrating comparable performance on BBH and HumanEval.** A plausible explanation for the enhanced MMLU results is the inclusion of specific fields in MMLU relating to occupations, for instance, the field "health" is closely associated with "Healthcare". The improved performance on GSM8K can potentially be attributed to the mathematical data present in OccuQuest, where fields like "Accounting" and "Marketing" are prominent. Moreover, the incremental problem-solving approach adopted in various occupations contributes to the enhancement of LLM's reasoning abilities.
We provide the experimental results of the 13B models in Appendix H, where similar superiority can be observed. These findings highlight the effectiveness of OccuQuest in mitigating occupational bias without sacrificing the reasoning enhancements provided by other datasets.
## 5 Conclusion
The current data available for instruction-tuning is plagued by occupational bias that a significant portion of the data is only relevant to a few professions. Consequently, this limitation hinders the ability of models trained on such data to effectively handle queries from individuals with specific professional backgrounds. To mitigate this issue and develop more inclusive and unbiased large language models, we create the OccuQuest dataset. This dataset encompasses a wide range of topics associated with over 1,000 occupations. A comparison with existing instruction-tuning datasets like Dolly (Conover et al., 2023), ShareGPT, and WizardLM (Xu et al., 2023) reveals that OccuQuest exhibits a much more balanced distribution across different occupations. We fine-tune LLaMA on OccuQuest to obtain OccuLLaMA. Through GPT-4 and human evaluations, OccuLLaMA demonstrates superiority over state-of-the-art LLaMA variants in effectively answering professional queries related to various occupations. OccuQuest can also be effectively combined with other instruction-tuning datasets to enhance the overall capabilities of large language models. By fine-tuning LLaMA on both OccuQuest and Tulu datasets, we develop ProLLaMA, which excels
| **Model** | **MMLU** | **GSM8K** | **BBH** | **HumanEval** |
| --- | --- | --- | --- | --- |
| Vanilla LLaMA-7B | 30.8 | 9.9 | 32.5 | 16.2 |
| Vicuna-7B | 44.4 | 16.1 | 34.6 | 14.8 |
| Tulu-7B | 44.4 | 26.8 | **37.0** | 20.6 |
| WizardLM-7B | 36.1 | 14.9 | 31.8 | 20.7 |
| ProLLaMA-7B | **46.2** | **31.2** | 35.5 | **21.2** |

Table 2: Evaluation results on comprehensive benchmarks, with the top-performing results highlighted in **bold**.
Figure 5: GPT-4 evaluation results on ProLLaMA against the comparative baselines.
|
2302.09626
|
Bracket words along Hardy field sequences
|
We study bracket words, which are a far-reaching generalisation of Sturmian
words, along Hardy field sequences, which are a far-reaching generalisation of
Piatetski--Shapiro sequences $\lfloor n^c \rfloor$. We show that thus obtained
sequences are deterministic (i.e., they have sub-exponential subword
complexity) and satisfy Sarnak's conjecture.
|
Jakub Konieczny, Clemens Müllner
|
2023-02-19T17:03:17Z
|
http://arxiv.org/abs/2302.09626v1
|
# Bracket words along Hardy field sequences
###### Abstract.
We study bracket words, which are a far-reaching generalisation of Sturmian words, along Hardy field sequences, which are a far-reaching generalisation of Piatetski-Shapiro sequences \(\lfloor n^{c}\rfloor\). We show that thus obtained sequences are deterministic (i.e., they have sub-exponential subword complexity) and satisfy Sarnak's conjecture.
Key words and phrases: generalised polynomial; Sturmian word; subword complexity; deterministic sequence; nilsequence; Sarnak conjecture; Mobius orthogonality.
## 1. Introduction
The study of the _spectral_ spectral
are very far from normal - they have subexponential subword complexity. One natural generalization of automatic sequences is given by morphic sequences. These are letter-to-letter codings of fixed points of substitutions. A very prominent morphic sequence is the Fibonacci word, the fixed point of the substitution \(0\mapsto 01,1\mapsto 0\). Moreover, this sequence is also a Sturmian word and many interesting morphic sequences are also Sturmian words (see for example [10]). Thus, we obtain as a very special case (one of) the first results for morphic sequences along Piatetski-Shapiro sequences.
It follows from Theorem A that the sequence \((a(\lfloor f(n)\rfloor))_{n=0}^{\infty}\) is deterministic, meaning that it has subexponential subword complexity. A conjecture of Sarnak [11] asserts that each deterministic sequence should be orthogonal to the Mobius function, given by
\[\mu(n)=\begin{cases}(-1)^{k}&\text{if $n$ is the product of $k$ distinct primes;}\\ 0&\text{if $n$ is divisible by a square.}\end{cases}\]
This conjecture in general is wide open. However, it has been resolved in a number of special cases.
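As a purely numerical illustration (not a proof), the following sketch computes \(\mu(n)\) by trial division and estimates its correlation with a simple bracket word, the Sturmian word of slope determined by the golden ratio; under Mobius orthogonality this average should tend to zero. The choice of sequence and cutoff are illustrative.

```python
from math import sqrt

def mobius(n):
    """Moebius function by trial division: 0 if a squared prime divides n,
    otherwise (-1) to the number of distinct prime factors."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:     # p**2 divided the original n
                return 0
            result = -result
        p += 1
    if n > 1:                  # a single remaining prime factor
        result = -result
    return result

PHI = (1 + sqrt(5)) / 2

def sturmian(n):
    """A simple bracket word: the Sturmian word floor((n+1)*phi) - floor(n*phi) - 1."""
    return int((n + 1) * PHI) - int(n * PHI) - 1

N = 100_000
correlation = sum(mobius(n) * sturmian(n) for n in range(1, N + 1)) / N
print(f"(1/N) sum mu(n) a(n) up to N={N}: {correlation:.5f}")  # expected to be close to 0
```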
In order to obtain a result for a bracket word along a Hardy field function, we split the range of summation into intervals where the Hardy field function under consideration can be efficiently approximated by polynomials. We are then left with the task of establishing cancellation in each of these intervals. A key ingredient is Mobius orthogonality for nilsequences in short intervals, recently established in [13], Theorem 5.3. The main technical difficulty of our argument lies in extending Theorem 5.3 to piecewise constant (and hence necessarily not continuous) functions with semialgebraic pieces, which we accomplish in Section 5.2.
### Plan of the paper
In Section 2 we recall some basic definitions and results about Hardy fields. Moreover, we study Taylor polynomials of functions from a Hardy field which generalizes the corresponding part in [10]. This allows us to locally replace functions from a Hardy field with polynomials. Thus, we need to be able to work with polynomials with varying coefficients. To do so, we study in Section 3 parametric generalised polynomials which builds on and refines results obtained in [1]. These tools allow us to prove Theorem A. In Section 4 we present some basics on nilmanifolds and discuss the connection to generalized polynomials. Then, in Section 5 we recall a result on Mobius orthogonality for nilsequences in short intervals. This is the final result that we need to prove Theorem B. One naturally arising difficulty is to translate the result on Mobius orthogonality for smooth functions to piecewise polynomial functions instead.
### Notation
We use \(\mathbb{N}=\{1,2,\dots\}\) to denote the set of positive integers and \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). For \(N\in\mathbb{N}\), we let \([N]=\{0,1,\dots,N-1\}\). For a non-empty finite set \(X\) and a map \(f\colon X\to\mathbb{R}\), we use the symbol \(\mathbb{E}\) borrowed from probability theory to denote the average \(\mathbb{E}_{x\in X}f(x)=\frac{1}{|X|}\sum_{x\in X}f(x)\).
### Acknowledgements
The authors wish to thank Michael Drmota for many insightful discussions, for suggesting this problem, and also for inviting the first-named author to Vienna for a visit during which this project started; and Fernando Xuancheng Shao for helpful comments on Mobius orthogonality of nilsequences.
The first-named author works within the framework of the LABEX MILYON (ANR-10-LABX-0070) of Universite de Lyon, within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). The second-named author is supported by the Austrian-French project "Arithmetic Randomness" between FWF and ANR (grant numbers I4945-N and ANR-20-CE91-0006).
## 2. Hardy fields
In this section we discuss functions from a Hardy field which have polynomial growth. In particular we study how the Taylor polynomial of \(f\) can be used to describe \(\lfloor f(n)\rfloor\). Therefore, we first gather some basic results on Hardy fields. Then we discuss the uniform distribution of polynomials modulo \(\mathbb{Z}\). Finally, we study properties of Taylor polynomials and prove the main theorem of this section, namely Theorem 2.11.
### Preliminaries
We start by gathering the basic facts and results on Hardy fields. For further discussion we refer e.g. to [11] and [16].
Let \(\mathcal{B}\) be the collection of equivalence classes of real valued functions defined on some half line \((c,\infty)\), where we identify two functions if they agree eventually.1 A _Hardy field_\(H\) is a subfield of the ring \((\mathcal{B},+,\cdot)\) that is closed under differentiation, meaning that \(H\) is a subring of \(\mathcal{B}\) such that for each \(0\neq f\in H\), the inverse \(1/f\) exists and belongs to \(H\), \(f\) is differentiable and \(f^{\prime}\in H\). We let \(\mathcal{H}\) denote the union of all Hardy fields. If \(f\in\mathcal{H}\) is defined on \([0,\infty)\) (one can always choose such a representative of \(f\)) we call the sequence \((f(n))_{n=0}^{\infty}\) a _Hardy sequence_.
Footnote 1: The equivalence classes just defined are often called _germs of functions_. We choose to refer to elements of \(\mathcal{B}\) as functions instead, with the understanding that all the operations defined and statements made for elements of \(\mathcal{B}\) are considered only for sufficiently large values of \(t\in\mathbb{R}\).
We note that choosing different representatives of the same germ of a function \(f\), changes the number of subwords of length \(N\) of \(a(\lfloor f(n)\rfloor)\) by at most an additive constant. As a consequence, the asymptotic behaviour of the subword complexity of \(a(\lfloor f(n)\rfloor)\) depends only on the germ of \(f\).
A _logarithmic-exponential function_ is any real-valued function on a half-line \((c,\infty)\) that can be constructed from the identity map \(t\mapsto t\) using basic arithmetic operations \(+,-,\times,:\), the logarithmic and the exponential functions, and real constants. For example, \(t^{2}+5t,t^{\sqrt{2}+\sqrt{3}},e^{(\log t)^{2}}\) and \(e^{\sqrt{\log t}}/\sqrt{t^{2}+1}\) are all logarithmic-exponential functions. Every logarithmic-exponential function belongs to \(\mathcal{H}\), and so do some other classical functions such as \(\Gamma\), \(\zeta\) or \(t\mapsto\sin(1/t)\).
For real-valued functions \(f\) and \(g\) on \((c,\infty)\) such that \(g(t)\) is non-zero for sufficiently large \(t\), we write \(f(t)\prec g(t)\) if \(\lim_{t\to\infty}f(t)/g(t)=0\), \(f(t)\sim g(t)\) if \(\lim_{t\to\infty}f(t)/g(t)\) is a non-zero real number and \(f(t)\ll g(t)\) if there exists \(C>0\) such that \(\left|f(t)\right|\leq C\left|g(t)\right|\) for all large \(t\). For completeness, we let \(0\sim 0\) and \(0\ll 0\).
We state the following well-known facts as lemmas.
**Lemma 2.1**.: _Let \(f\in\mathcal{H}\) be a function that is not eventually zero. Then \(f\) is eventually strictly positive or negative. If \(f\) is not eventually constant, then \(f\) is eventually strictly monotone._
Proof.: Since \(f\) is not eventually \(0\), there exists the inverse function \(1/f\) -- in particular, \(f(t)\neq 0\) for \(t\) large enough. Now, the first part follows from continuity of \(f\). The second part follows directly from the first part by considering \(f^{\prime}\).
**Lemma 2.2**.: _Let \(H\) be a Hardy field and let \(f,g\in H\). Then one of the following holds: \(f\prec g\), \(f\sim g\) or \(f\succ g\)._
Proof.: If \(g\) is eventually zero, the situation is trivial, so assume that this is not the case. Since \(f/g\) is eventually monotone, the limit \(\lim_{t\to\infty}\left|f(t)\right|/\left|g(t)\right|\in\mathbb{R}\cup\{\infty\}\) exists. If the limit is infinite then \(f\succ g\). If the limit is zero then \(f\prec g\). If the limit is finite and non-zero then \(f\sim g\).
**Definition 2.3**.: We say that \(f\) has _polynomial growth_ if there exists \(n\in\mathbb{N}\) such that \(f(t)\prec t^{n}\).
We will make use of the following estimates for the derivatives of functions with polynomial growth.
**Lemma 2.4** ([12, Lem. 2.1]).: _Let \(f\in\mathcal{H}\) be a function with polynomial growth. Then at least one of the following holds:_
1. \(f(t)\prec t^{-n}\) _for all_ \(n\in\mathbb{N}\)_;_
2. \(f(t)\to c\neq 0\) _as_ \(t\to\infty\) _for some constant_ \(c\)_;_
3. \(f(t)/(t(\log t)^{2})\prec f^{\prime}(t)\ll f(t)/t\)_._
**Lemma 2.5**.: _Let \(f\in\mathcal{H}\) be a function such that \(f(t)\prec t^{-n}\) for all \(n\in\mathbb{N}\). Then also \(f^{(\ell)}(t)\prec t^{-n}\) for all \(\ell,n\in\mathbb{N}\)._
Proof.: Reasoning inductively, it is enough to consider the case where \(\ell=1\). Suppose, for the sake of contradiction, that \(|f^{\prime}(t)|\gg t^{-n}\) for some \(n\in\mathbb{N}\). Since \(f(t)\to 0\) as \(t\to\infty\) and since \(f\) is eventually monotone, for sufficiently large \(t\) we have
\[|f(t)|=\int_{t}^{\infty}|f^{\prime}(s)|\,ds\gg\int_{t}^{\infty}s^{-n}ds\gg t^{- n+1},\]
contradicting the assumption on \(f\).
**Lemma 2.6**.: _Let \(f\in\mathcal{H}\) and assume that \(f(t)\ll t^{k}\) for some \(k\in\mathbb{Z}\). Then \(f^{(\ell)}(t)\ll t^{k-\ell}\) for each \(\ell\in\mathbb{N}\)._
Proof.: Reasoning inductively, it is enough to consider the case where \(\ell=1\). We consider the three possibilities in Lemma 2.4. If \(f(t)\prec t^{-n}\) for all \(n\in\mathbb{N}\) then the claim is trivially true by Lemma 2.5. If \(f^{\prime}(t)\ll f(t)/t\) then \(f^{\prime}(t)\ll t^{k-1}\), as needed. Finally, suppose that \(f(t)\to c\neq 0\) as \(t\to\infty\). Clearly, in this case \(k\geq 0\). We may decompose \(f(t)=\overline{f}(t)+c\), where \(\overline{f}(t)=f(t)-c\) and \(\overline{f}(t)\prec 1\). Repeating the reasoning with \(\overline{f}\) in place of \(f\) we conclude that \(f^{\prime}(t)=\overline{f}^{\prime}(t)\ll t^{-1}\ll t^{k-1}\).
**Remark 2.7**.: For each \(f\in\mathcal{H}\) and each logarithmic-exponential function \(g\), there exists a Hardy field \(H\) such that \(f,g\in H\) (see e.g. [1]). Hence, it follows from Lemma 2.2 that for each \(f\in\mathcal{H}\) there exists \(k_{0}(f)\in\mathbb{Z}\cup\{-\infty,+\infty\}\) such that, for \(k\in\mathbb{Z}\) we have: \(f(t)\prec t^{k}\) if \(k>k_{0}(f)\), \(f(t)\succ t^{k}\) if \(k<k_{0}(f)\) and, if \(k_{0}(f)\) is finite, \(f(t)\ll t^{k_{0}(f)}\). Lemma 2.6 implies that \(k_{0}(f^{(\ell)})\leq k_{0}(f)-\ell\) (with the convention that \(\pm\infty-\ell=\pm\infty\)).
### Uniform distribution of polynomials
In this subsection we recall a result about the uniform distribution of polynomials modulo \(\mathbb{Z}\) which we need for the next subsection about Taylor-polynomials. It is well-known that a polynomial distributes uniformly modulo \(\mathbb{Z}\) if and only if at least one (non-constant) coefficient is irrational. The following proposition is a quantitative version of this statement.
First we need to specify the way we quantify how uniformly distributed a sequence \(a(n)\bmod\mathbb{Z}\) is: Let \((x_{1},\ldots,x_{N})\) be a finite sequence of real numbers. Its _discrepancy_ is defined by
\[D_{N}(x_{1},\ldots,x_{N})=\sup_{0\leq\alpha\leq\beta\leq 1}\bigg{|}\frac{ \#\{n\leq N:\alpha\leq\{x_{n}\}<\beta\}}{N}-(\beta-\alpha)\bigg{|}. \tag{2}\]
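For a finite point set, the supremum in (2) can be evaluated exactly from the sorted fractional parts via the classical closed form \(D_{N}=\tfrac{1}{N}+\max_{i}(i/N-y_{i})-\min_{i}(i/N-y_{i})\), where \(y_{1}\leq\ldots\leq y_{N}\) are the sorted fractional parts. The short sketch below applies this formula to two illustrative sequences; the sequences and the value of \(N\) are chosen only for illustration.

```python
from math import sqrt

def discrepancy(xs):
    """Discrepancy D_N of the fractional parts of xs, via the closed form
    D_N = 1/N + max_i(i/N - y_i) - min_i(i/N - y_i) over the sorted parts y_i."""
    ys = sorted(x % 1.0 for x in xs)
    n = len(ys)
    diffs = [(i + 1) / n - y for i, y in enumerate(ys)]
    return 1.0 / n + max(diffs) - min(diffs)

N = 10_000
weyl = [k * k * sqrt(2) for k in range(1, N + 1)]  # equidistributed mod 1: small discrepancy
constant = [0.3] * N                               # all mass at one point: discrepancy 1
print(discrepancy(weyl), discrepancy(constant))
```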
Thus, we have the necessary prerequisites to state the following proposition.
**Proposition 2.8** (Proposition 5.2 in [16]).: _Suppose that \(g:\mathbb{Z}\to\mathbb{R}\) is a polynomial of degree \(d\), which we write as_
\[g(n)=\beta_{0}+n\beta_{1}+\ldots+n^{d}\beta_{d}.\]
_Furthermore, let \(\delta\in(0,1/2)\). Then either the discrepancy of \((g(n)\bmod\mathbb{Z})_{n\in[N]}\) is smaller than \(\delta\), or else there is an integer \(1\leq\ell\ll\delta^{-O_{d}(1)}\), such that_
\[\max_{1\leq j\leq d}N^{j}\,\|\ell\beta_{j}\|\ll\delta^{-O_{d}(1)}.\]
This proposition is a direct consequence of Proposition 4.3 in [10], who attribute this result to Weyl.
### Taylor expansions
For any germ \(f\in\mathcal{H}\) we consider a representative that is defined on \([1,\infty)\) and also call it \(f\). Then, for any \(x\in(1,\infty)\) and \(\ell\in\mathbb{N}_{0}\) we can consider the length-\(\ell\) Taylor expansion of \(f\) at the point \(x\),
\[f(x+y) =P_{x,\ell}(y)+R_{x,\ell}(y), \tag{3}\]
\[P_{x,\ell}(y) :=f(x)+yf^{\prime}(x)+\ldots+\frac{y^{\ell-1}}{(\ell-1)!}f^{(\ell-1)}(x), \tag{4}\]
\[R_{x,\ell}(y) :=\frac{y^{\ell}}{\ell!}f^{(\ell)}\left(x+\xi_{\ell}(x,y)\right),\text{ where }\xi_{\ell}(x,y)\in[0,y]. \tag{5}\]
**Proposition 2.9**.: _Let \(k\in\mathbb{Z}\), \(\ell\in\mathbb{N}_{0}\), and let \(f\in\mathcal{H}\) be a function with \(f(t)\ll t^{k}\). Then the error term \(R_{x,\ell}(y)\) in the Taylor expansion (3)-(5) satisfies_
\[R_{x,\ell}(y)\ll y^{\ell}x^{k-\ell}\]
_uniformly for all \(x\geq 1\) and \(0\leq y\leq x\), where the implied constant only depends on \(f\) and \(\ell\)._
Proof.: Combining (5) and Lemma 2.6 we have
\[y^{-\ell}R_{x,\ell}(y)\ll\sup_{\xi\in[0,y]}f^{(\ell)}(x+\xi)\ll\sup_{\xi\in[0, y]}(x+\xi)^{k-\ell}=\begin{cases}x^{k-\ell}&\text{if }k<\ell;\\ (x+y)^{k-\ell}&\text{if }k\geq\ell.\end{cases}\]
Assuming that \(x\geq y\), the two estimates are equivalent.
**Lemma 2.10**.: _Let \(k\in\mathbb{N}\) and let \(f\) be a \(k\) times continuously differentiable function defined on an open interval \(I\subseteq\mathbb{R}\). Suppose that \(f^{(k)}(t)\) has constant sign on \(I\). Then \(f\) changes monotonicity on \(I\) at most \(k-1\) times._
Proof.: If \(f^{(k)}(t)\) is constant zero for all \(t\in I\), then \(f\) is a polynomial of degree at most \(k-1\) and the statement is trivially true. Thus, we assume without loss of generality that \(f^{(k)}(t)>0\) for all \(t\in I\). Let us assume for the sake of contradiction that \(f\) changes monotonicity at least \(k\) times. Thus, \(f^{\prime}\) has at least \(k\) zeros in \(I\). It follows from the mean value theorem that \(f^{\prime\prime}\) has at least \(k-1\) zeros in \(I\). Inductively applying this reasoning shows that \(f^{(k)}\) has at least \(1\) zero in \(I\) giving the desired contradiction.
**Theorem 2.11**.: _Let \(k,\ell\in\mathbb{N}\) be integers with \(k<\ell\) and let \(f\in\mathcal{H}\) be a function satisfying \(f(t)\ll t^{k}\), and let \(P_{N,\ell}\) and \(R_{N,\ell}\) be given by (3)-(5). Then there exists some \(0<\eta<1\) (only depending on \(\ell\)) such that for any \(H\in\mathbb{N}\), the formula_
\[e_{N}(h):=\left\lfloor f(N+h)\right\rfloor-\left\lfloor P_{N,\ell}(h)\right\rfloor, \qquad\qquad\qquad 0\leq h<H. \tag{6}\]
_defines at most \(\exp(O(H^{\eta}))\) different functions \(e_{N}:[H]\to\mathbb{Z}\) for \(N\in\mathbb{N}\). Moreover, for each \(N\), at least one of the following holds_
1. \(N\) _is small:_ \(N=O(H^{(\ell+\eta)/(\ell-k)})\)_._
2. \(e_{N}\) _is sparse: There are at most_ \(O(H^{\eta})\) _values of_ \(h\in[H]\) _such that_ \(e_{N}(h)\neq 0\)_._
3. \(e_{N}\) _is structured: There exists a partition of_ \([H]\) _into_ \(O(H^{\eta})\) _arithmetic progressions with step_ \(O(H^{\eta})\) _on which_ \(e_{N}\) _is constant._
(In the theorem above, the constants implicit in the \(O(\cdot)\) notation are allowed to depend on \(k,\ell\) and \(f\).)
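Before turning to the proof, a small numerical sketch may help to visualise the error sequence (6): for the illustrative choice \(f(t)=t^{3/2}\), \(\ell=3\) and a large \(N\), the Taylor polynomial reproduces \(\lfloor f(N+h)\rfloor\) for all but very few \(h\in[H]\), so \(e_{N}\) takes values in \(\{-1,0,1\}\) and is mostly zero. All parameter values below are illustrative.

```python
from math import floor

def f(t):
    """Illustrative Hardy field function f(t) = t**(3/2), so f(t) << t**2 (k = 2 < l = 3)."""
    return t ** 1.5

def taylor_p(N, h):
    """Length-3 Taylor polynomial of f at N (cf. (4)), using f'(t) = 1.5*t**0.5
    and f''(t) = 0.75*t**(-0.5)."""
    return f(N) + h * 1.5 * N ** 0.5 + (h * h / 2.0) * 0.75 * N ** (-0.5)

N, H = 10**6, 200
e = [floor(f(N + h)) - floor(taylor_p(N, h)) for h in range(H)]
print(sorted(set(e)))                                    # a subset of {-1, 0, 1}
print(sum(1 for v in e if v != 0), "of", H, "entries of e_N are nonzero")
```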
Proof.: We define \(\varepsilon=H^{-\eta_{0}}\) for some \(\eta_{0}>0\) which only depends on \(\ell\) and will be specified later. Let \(N\in\mathbb{N}\). Recall that by Proposition 2.9, we have
\[|R_{N,\ell}(h)|\leq\varepsilon\quad\text{for all }0\leq h<H \tag{7}\]
unless \(N\ll\varepsilon^{-1/(\ell-k)}H^{\ell/(\ell-k)}=H^{(\ell+\eta_{0})/(\ell-k)}\). Thus, the values of \(N\) such that (7) is false contribute only \(O\left(H^{O(1)}\right)\) different sequences \(e_{N}\), and we may freely assume that \(N\) is large enough that (7) holds. In this case we have \(e_{N}:[H]\to\{-1,0,1\}\). Additionally, by Lemma 2.1 we may also assume that \(f^{(\ell)}(x)\neq 0\) for all \(x\geq N\). As a consequence of (7), for each \(0\leq h<H\), if
\[\varepsilon<\{P_{N,\ell}(h)\}<1-\varepsilon \tag{8}\]
then \(\lfloor f(N+h)\rfloor=\lfloor P_{N,\ell}(h)\rfloor\) and hence \(e_{N}(h)=0\).
Let \(\alpha_{0},\ldots,\alpha_{\ell-1}\) denote the coefficients of \(P_{N,\ell}\):
\[P_{N,\ell}(h)=\alpha_{0}+\alpha_{1}h+\cdots+\alpha_{\ell-1}h^{\ell-1}.\]
By Proposition 2.8, we distinguish two cases.
1. \((P_{N,\ell}(h))_{h\in[H]}\) has discrepancy at most \(\varepsilon\).
2. There exists \(1\leq q\ll\varepsilon^{-O(1)}\) such that \(\max_{0\leq j<\ell}H^{j}\left\|q\alpha_{j}\right\|\ll\varepsilon^{-O(1)}\).
In the first case, it follows that the number of \(h\in[H]\) such that (8) does not hold is at most \(3\varepsilon H\). Thus, \(e_{N}\) is sparse, i.e. it has at most \(3\varepsilon H\ll H^{1-\eta_{0}}\) non-zero entries. It remains to estimate the number of the sequences \(e_{N}\) of this type. Using a standard estimate \(\binom{n}{k}\leq n^{k}/k!<(en)^{k}/k^{k}\) we find
\[\log\left(\sum_{0\leq j\leq 3\varepsilon H}\binom{H}{j}2^{j}\right) \ll\log\left(3\varepsilon H\right)+\log\binom{H}{3\varepsilon H }+3\varepsilon H\] \[\ll\log(3H^{1-\eta_{0}})+3\varepsilon H\log(e3H^{1-\eta_{0}})+3 H^{1-\eta_{0}}\] \[\ll_{\eta_{0}}H^{1-\eta_{0}/2}.\]
Thus the number of distinct sequences \(e_{N}\) is bounded by \(\exp(O(H^{1-\eta_{0}/2}))\), which gives the desired result as long as \(1-\eta_{0}/2\leq\eta\).
In the second case we split \([H]\) into arithmetic progressions with common difference \(q\ll\varepsilon^{-O_{\ell}(1)}\). This allows us to write (for \(0\leq m<q\))
\[P_{N,\ell}(qh+m) =\alpha_{0}+(qh+m)\alpha_{1}+\ldots+(qh+m)^{\ell-1}\alpha_{\ell-1}\] \[=\beta_{0}+h\beta_{1}+\ldots+h^{\ell-1}\beta_{\ell-1}.\]
The defining property of \(q\) implies that
\[\max_{1\leq j<\ell}H^{j}\left\|\beta_{j}\right\|\ll\varepsilon^{-O_{\ell}(1)}.\]
In particular, we can write
\[\beta_{j}=z_{j}+s_{j},\]
where \(z_{j}\in\mathbb{Z}\) and \(|s_{j}|\ll H^{-j}\cdot\varepsilon^{-O_{\ell}(1)}\) for \(0\leq j<\ell\). Putting everything together, we find
\[f(N+qh+m)=Q(h)+r(h)+R_{N,\ell}(qh+m),\]
where
\[Q(h) =z_{0}+hz_{1}+\ldots+h^{\ell-1}z_{\ell-1}\] \[r(h) =s_{0}+hs_{1}+\ldots+h^{\ell-1}s_{\ell-1}.\]
In particular, \(Q\) is a polynomial of degree at most \(\ell-1\) with integer coefficients and \(P_{N,\ell}(qh+m)=Q(h)+r(h)\). Moreover, \(|r(h)|\ll\varepsilon^{-O_{\ell}(1)}\) for all \(h\in[0,H/q]\). Since \(|R_{N,\ell}(h)|\leq\varepsilon\), we see that
\[\lfloor f(N+qh+m)\rfloor\neq\lfloor P_{N,\ell}(qh+m)\rfloor\]
holds exactly if either
\[\begin{split}\{r(h)\}\leq\varepsilon\quad\text{and}\quad\{r(h) +R_{N,\ell}(qh+m)\}\geq 1-\varepsilon,\quad\text{or}\\ \{r(h)\}\geq 1-\varepsilon\quad\text{and}\quad\{r(h)+R_{N,\ell}(qh+ m)\}\leq\varepsilon.\end{split} \tag{9}\]
In the first case \(e_{N}(qh+m)=1\) and in the second case \(e_{N}(qh+m)=-1\). Since \(r(h)\) is a polynomial of degree at most \(\ell-1\), it changes monotonicity at most \(\ell-2\) times. Since the \(\ell\)-th derivative of \(r(h)+R_{N,\ell}(qh+m)=f(N+qh+m)-P_{N,\ell}(qh+m)+r(h)\) has constant sign, by Lemma 2.10 it changes monotonicity at most \(\ell-1\) times on the interval \([0,H/q]\). Hence, we can decompose \([0,H/q]\) into at most \(2\ell-2\) intervals \(I_{1},\ldots,I_{p}\) on which \(r(h)\) and \(r(h)+R_{N,\ell}(qh+m)\) are both monotone. As \(|r(h)|\ll\varepsilon^{-O_{\ell}(1)}\), we can further subdivide each of the intervals \(I_{j}\) into \(O(\varepsilon^{-O_{\ell}(1)})\) subintervals such that, for each subinterval, each of the inequalities in (9) is either true on the entire subinterval or false on the entire subinterval. As a consequence, \(e_{N}\) is structured, i.e., \(e_{N}\) is constant on each subinterval. Thus, we have found a decomposition of \([H]\) into \(O(\varepsilon^{-O_{\ell}(1)})\) arithmetic progressions on which \(e_{N}\) is constant. We can write \(O(\varepsilon^{-O_{\ell}(1)})=O(H^{C\eta_{0}})\) for some \(C=C(\ell)>0\). Using the rough estimate \(H^{3}\) for the number of arithmetic progressions contained in \([H]\), we can bound the number of sequences \(e_{N}\) which arise this way by
\[(H^{3})^{O(H^{C\eta_{0}})}=\exp\left(O(H^{C\eta_{0}}\log H)\right)=\exp\left(O_{\eta_{0}}(H^{(C+1)\eta_{0}})\right).\]
It remains to choose \(\eta_{0}=(C+2)^{-1}\) and \(\eta=1-(2(C+2))^{-1}\) to finish the proof.
## 3. Parametric generalised polynomials
In this section we discuss parametric generalised polynomials, building on and refining results obtained in [1]. In particular, we show that for any parametrised generalised polynomial that takes values in \([M]\), we can assume that the parameters belong to \([0,1)^{J}\) for some finite set \(J\) (Proposition 3.5). This allows us to show a polynomial bound on the number of subwords of bracket words along polynomials of a fixed degree (Corollary 3.7). At the end of the section we give the proof of Theorem A.
Let \(d\in\mathbb{N}\). Generalised polynomial maps (or GP maps for short) from \(\mathbb{R}^{d}\) to \(\mathbb{R}\) are the smallest family \(\mathcal{G}\) such that (1) all polynomial maps belong to \(\mathcal{G}\); (2) if \(g,h\in\mathcal{G}\) then also \(g+h,g\cdot h\in\mathcal{G}\) (with operations defined pointwise); (3) if \(g\in\mathcal{G}\) then also \(\lfloor g\rfloor\in\mathcal{G}\), where \(\lfloor g\rfloor\) is defined pointwise: \(\lfloor g\rfloor\left(x\right)=\lfloor g(x)\rfloor\). We note that generalised polynomial maps are also closed under the operation of taking the fractional part, given by \(\{g\}=g-\lfloor g\rfloor\). For sets \(\Omega\subseteq\mathbb{R}^{d}\) and \(\Sigma\subseteq\mathbb{R}\) (e.g., \(\Omega=\mathbb{Z}^{d}\), \(\Sigma=\mathbb{Z}\)), by a generalised polynomial map \(g\colon\Omega\to\Sigma\) we mean the restriction \(\widetilde{g}|_{\Omega}\) to \(\Omega\) of a generalised polynomial map \(\widetilde{g}\colon\mathbb{R}^{d}\to\mathbb{R}\) such that \(\widetilde{g}(\Omega)\subseteq\Sigma\). We point out
that, unlike in the case of polynomials, the lift \(\widetilde{g}\) is not uniquely determined by \(g\), unless \(\Omega=\mathbb{R}^{d}\).
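To make the closure operations concrete, the following sketch represents GP maps as ordinary callables and rebuilds a typical bracket expression, \(n\mapsto\lfloor\alpha n\lfloor\beta n\rfloor+\sqrt{2}n^{2}\rfloor\), from the generating operations. It evaluates maps pointwise in floating point and is purely illustrative; the parameter values in the usage line are arbitrary.

```python
from math import floor, sqrt

# A GP map is represented as a callable R -> R; the closure operations of the
# definition (sum, product, integer part) then act pointwise on such callables.

def const(c):
    return lambda x: c

def identity():
    return lambda x: x

def add(g, h):
    return lambda x: g(x) + h(x)

def mul(g, h):
    return lambda x: g(x) * h(x)

def floor_of(g):
    return lambda x: floor(g(x))

def frac_of(g):
    """Fractional part {g} = g - floor(g), also a GP map by the closure properties."""
    return lambda x: g(x) - floor(g(x))

def bracket_example(alpha, beta):
    """The parametric GP map g_{alpha,beta}(n) = floor(alpha*n*floor(beta*n) + sqrt(2)*n**2)."""
    n = identity()
    inner = floor_of(mul(const(beta), n))                # floor(beta * n)
    poly = add(mul(mul(const(alpha), n), inner),         # alpha * n * floor(beta * n)
               mul(const(sqrt(2)), mul(n, n)))           # + sqrt(2) * n**2
    return floor_of(poly)

g = bracket_example(alpha=0.7, beta=sqrt(3))
print([g(n) for n in range(1, 6)])
```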
In [1], we introduced a notion of a _parametric GP map_\(\mathbb{Z}\to\mathbb{R}\) with a finite index set \(I\), which (modulo some notational conventions) is essentially the same as a GP map \(\mathbb{R}^{I}\times\mathbb{Z}\to\mathbb{R}\). For instance, the formula
\[g_{\alpha,\beta}(n)=\left\lfloor\alpha n\left\lfloor\beta n\right\rfloor+\sqrt{2}n^{2}\right\rfloor\qquad(\alpha,\beta\in\mathbb{R})\]
defines a GP map \(\mathbb{Z}\to\mathbb{R}\) (or, strictly speaking, a family of GP maps) parametrised by \(\mathbb{R}^{2}\). Formally, a _parametric GP map with index set \(I\)_ or a _GP map parametrised by \(\mathbb{R}^{I}\)_ is a map \(\mathbb{R}^{I}\to\mathbb{R}^{\mathbb{Z}}\), \(\alpha\mapsto g_{\alpha}\), such that the combined map \(\mathbb{R}^{I}\times\mathbb{Z}\to\mathbb{R}\), \((\alpha,n)\mapsto g_{\alpha}(n)\), is a GP map.
Here, we will need a marginally more precise notion, where the set of parameters takes the form \(\mathbb{R}^{I_{\mathrm{real}}}\times\mathbb{Z}^{I_{\mathrm{int}}}\times[0,1 )^{I_{\mathrm{frac}}}\) rather than \(\mathbb{R}^{I}\). Let \(I_{\mathrm{real}},I_{\mathrm{int}},I_{\mathrm{frac}}\) be pairwise disjoint finite sets and put \(I=I_{\mathrm{real}}\cup I_{\mathrm{int}}\cup I_{\mathrm{frac}}\). Then a _GP map parametrised by \(\mathbb{R}^{I_{\mathrm{real}}}\times\mathbb{Z}^{I_{\mathrm{int}}}\times[0,1 )^{I_{\mathrm{frac}}}\)_ is the restriction of a GP map parametrised by \(\mathbb{R}^{I_{\mathrm{real}}}\times\mathbb{R}^{I_{\mathrm{int}}}\times \mathbb{R}^{I_{\mathrm{frac}}}\) (as defined above) to \(\mathbb{R}^{I_{\mathrm{real}}}\times\mathbb{Z}^{I_{\mathrm{int}}}\times[0,1)^ {I_{\mathrm{frac}}}\). We note that in the case where \(I_{\mathrm{int}}=I_{\mathrm{frac}}=\emptyset\), the new definition is consistent with the previous one.
In [1] we defined the operations of addition, multiplication and the integer part for parametric GP maps, not necessarily indexed by the same set. Roughly speaking, if \(I\subseteq J\) are finite sets then we can always think of a GP map parametrised by \(\mathbb{R}^{I}\) as a GP map parametrised by \(\mathbb{R}^{J}\), with trivial dependence on the parameters in \(\mathbb{R}^{J\setminus I}\). Thus, if \(g_{\bullet}\) and \(h_{\bullet}\) are GP maps parametrised by \(\mathbb{R}^{I}\) and \(\mathbb{R}^{J}\) respectively, then we can think of both \(g_{\bullet}\) and \(h_{\bullet}\) as GP maps parametrised by \(\mathbb{R}^{I\cup J}\), which gives us a natural way to define the (pointwise) sum and product \(g_{\bullet}+h_{\bullet}\) and \(g_{\bullet}\cdot h_{\bullet}\). We refer to [1] for a formal definition. This construction directly extends to GP maps parametrised by \(\mathbb{R}^{I_{\mathrm{real}}}\times\mathbb{Z}^{I_{\mathrm{int}}}\times[0,1 )^{I_{\mathrm{frac}}}\).
**Definition 3.1**.: Let \(g_{\bullet}\) and \(h_{\bullet}\) be two GP maps parametrised by \(\mathbb{R}^{I_{\mathrm{real}}}\times\mathbb{Z}^{I_{\mathrm{int}}}\times[0,1)^{I_{\mathrm{frac}}}\) and \(\mathbb{R}^{J_{\mathrm{real}}}\times\mathbb{Z}^{J_{\mathrm{int}}}\times[0,1)^{J_{\mathrm{frac}}}\) respectively. Then we say that \(h_{\bullet}\) _extends_ \(g_{\bullet}\), denoted2 \(h_{\bullet}\leadsto g_{\bullet}\), if there exists a GP map \(\varphi\colon\mathbb{R}^{I_{\mathrm{real}}}\times\mathbb{R}^{I_{\mathrm{int}}}\times\mathbb{R}^{I_{\mathrm{frac}}}\to\mathbb{R}^{J_{\mathrm{real}}}\times\mathbb{R}^{J_{\mathrm{int}}}\times\mathbb{R}^{J_{\mathrm{frac}}}\) such that
Footnote 2: We use different notation \(h_{\bullet}\leadsto g_{\bullet}\) than in [1] in order to avoid confusion with the symbol \(\succ\) extensively used in Section 2
* \(\varphi\left(\mathbb{R}^{I_{\mathrm{real}}}\times\mathbb{Z}^{I_{\mathrm{int}}}\times[0,1)^{I_{\mathrm{frac}}}\right)\subseteq\mathbb{R}^{J_{\mathrm{real}}}\times\mathbb{Z}^{J_{\mathrm{int}}}\times[0,1)^{J_{\mathrm{frac}}}\); and
* \(g_{\alpha}=h_{\varphi(\alpha)}\) for all \(\alpha\in\mathbb{R}^{I_{\mathrm{real}}}\times\mathbb{Z}^{I_{\mathrm{int}}} \times[0,1)^{I_{\mathrm{frac}}}\).
In [1] we obtained a polynomial bound on the number of possible prefixes of a given GP map parametrised by \([0,1)^{I}\).
**Theorem 3.2** ([1, Thm. 15.3]).: _Let \(g_{\bullet}:\mathbb{Z}\to\mathbb{Z}\) be a GP map parametrised by \([0,1)^{I}\) for some finite set \(I\). Then there exists a constant \(C\) such that, as \(N\to\infty\), we have_
\[\left|\left\{g_{\alpha}|_{[N]}\ \middle|\ \alpha\in[0,1)^{I}\right\} \right| =O\left(N^{C}\right). \tag{10}\]
_Above, the implicit constant depends only on \(g_{\bullet}\)._
Our next goal is to obtain a similar bound for the number of prefixes of a bounded GP map parametrised by \(\mathbb{R}^{I}\). Even though we are ultimately interested in bounded GP maps, Proposition 3.4 concerning unbounded GP maps is more amenable to proof by structural induction. We will use the following induction scheme.
**Proposition 3.3** ([1, Prop. 13.9]).: _Let \(\mathcal{G}\) be a family of parametric GP maps from \(\mathbb{Z}\) to \(\mathbb{Z}\) with index sets contained in \(\mathbb{N}\). Suppose that \(\mathcal{G}\) has the following closure properties._
1. _All GP maps_ \(\mathbb{Z}\to\mathbb{Z}\) _belong to_ \(\mathcal{G}\)_._
2. _For every_ \(g_{\bullet}\) _and_ \(h_{\bullet}\in\mathcal{G}\)_, it holds that_ \(g_{\bullet}+h_{\bullet}\in\mathcal{G}\) _and_ \(g_{\bullet}\cdot h_{\bullet}\in\mathcal{G}\)_._
3. _For every_ \(g_{\bullet}\in\mathcal{G}\)_,_ \(\mathcal{G}\) _contains all the parametric GP maps_ \(g_{\bullet}^{\prime}\colon\mathbb{Z}\to\mathbb{Z}\) _satisfying_ \(g_{\bullet}\leadsto g_{\bullet}^{\prime}\)_._
4. _For every pair of disjoint finite sets_ \(I\subseteq\mathbb{N}\)_,_ \(J\subseteq\mathbb{N}\)_, and every sequence of parametric GP maps_ \(h_{\bullet}^{(i)}\in\mathcal{G}\)_,_ \(i\in I\)_, with index set_ \(J\)_,_ \(\mathcal{G}\) _contains the parametric GP map_ \(g_{\bullet}\) _defined by_ \[g_{\alpha,\beta}(n)=\left\lfloor\sum_{i\in I}\alpha_{i}h_{\beta}^{(i)}(n) \right\rfloor\,,\qquad n\in\mathbb{Z}\,,\ \alpha\in\mathbb{R}^{I}\,,\ \beta\in\mathbb{R}^{J}\,.\]
_Then \(\mathcal{G}\) contains all parametric GP maps \(\mathbb{Z}\to\mathbb{Z}\) with index sets contained in \(\mathbb{N}\)._
**Proposition 3.4**.: _Let \(g_{\bullet}\colon\mathbb{Z}\to\mathbb{Z}\) be a GP map parametrised by \(\mathbb{R}^{I}\) for a finite set \(I\). Then there exist finite sets \(J,K\) and a GP map \(\widetilde{g}_{\bullet}\colon\mathbb{Z}\to\mathbb{Z}\) parametrised by \(\mathbb{Z}^{J}\times[0,1)^{K}\) such that \(\widetilde{g}_{\bullet}\leadsto g_{\bullet}\) and \(\widetilde{g}_{\bullet}\) takes the form_
\[\widetilde{g}_{a,\beta} =\sum_{j\in J}a_{j}h_{\beta}^{(j)}, a\in\mathbb{Z}^{J}\,,\ \beta\in[0,1)^{K}.\]
_where for each \(j\in J\), \(h_{\bullet}^{(j)}\colon\mathbb{Z}\to\mathbb{Z}\) is a GP map parametrised by \([0,1)^{K}\)._
Proof.:
1. If \(g\colon\mathbb{Z}\to\mathbb{Z}\) is a fixed GP map (i.e., if \(I=\emptyset\)) then we can simply take \(\widetilde{g}=g\).
2. Suppose that the conclusion holds for \(g_{\bullet},h_{\bullet}\colon\mathbb{Z}\to\mathbb{Z}\), and let the corresponding extensions \(\widetilde{g}_{\bullet}\) and \(\widetilde{h}_{\bullet}\) be given by \[\widetilde{g}_{a,\beta} =\sum_{j\in J}a_{j}h_{\beta}^{(j)}, a\in\mathbb{Z}^{J}\,,\ \beta\in[0,1)^{K}\] \[\widetilde{h}_{c,\delta} =\sum_{l\in L}c_{l}h_{\delta}^{(l)}, c\in\mathbb{Z}^{L}\,,\ \delta\in[0,1)^{M}.\]
We may freely assume that the index sets \(J,K,L,M\) are pairwise disjoint. We will show that the conclusion also holds for \(g_{\bullet}+h_{\bullet}\) and \(g_{\bullet}\cdot h_{\bullet}\). In the case of \(g_{\bullet}+h_{\bullet}\) it is enough to combine the sums representing \(\widetilde{g}_{a,\beta}\) and \(\widetilde{h}_{c,\delta}\) into a single sum. In the case of \(g_{\bullet}\cdot h_{\bullet}\), we take
\[\widetilde{f}_{e,(\beta,\delta)} =\sum_{j\in J,\ l\in L}e_{j,l}\left(h_{\beta}^{(j)}\cdot h_{\delta}^{(l)}\right), \qquad e\in\mathbb{Z}^{J\times L}\,,\ (\beta,\delta)\in[0,1)^{K}\times[0,1)^{M}.\]
Then \(\widetilde{f}\) has the required form and (taking \(e_{j,l}=a_{j}c_{l}\)) we see that \(\widetilde{f}_{\bullet}\leadsto\widetilde{g}_{\bullet}\cdot\widetilde{h}_{ \bullet}\leadsto g_{\bullet}\cdot h_{\bullet}\).
3. Suppose that the conclusion holds for \(g_{\bullet}\) and that \(g_{\bullet}\leadsto g_{\bullet}^{\prime}\). Then the conclusion also holds for \(g_{\bullet}^{\prime}\) because the relation of being an extension is transitive.
4. Suppose that \(I\subseteq\mathbb{N}\), \(J\subseteq\mathbb{N}\) are disjoint finite sets, \(h_{\bullet}^{(i)}\) are GP maps parametrised by \(\mathbb{R}^{J}\) which satisfy the conclusion for each \(i\in I\), and \(g_{\bullet}\) is the parametric GP map defined by \[g_{\alpha,\beta}(n):=\left\lfloor\sum_{i\in I}\alpha_{i}h_{\beta}^{(i)}(n)\right\rfloor,\qquad n\in\mathbb{Z}\,,\ \alpha\in\mathbb{R}^{I}\,,\ \beta\in\mathbb{R}^{J}.\]
Let the extensions of \(h^{(i)}\) be given by
\[\widetilde{h}^{(i)}_{c,\delta} =\sum_{l\in L}c_{l}f^{(i,l)}_{\delta}, c\in\mathbb{Z}^{L}\,,\ \delta\in[0,1)^{M}.\]
(Note that we may, without loss of generality, use the same index sets \(L\) and \(M\) for each \(i\in I\).) We will show that the conclusion is satisfied for \(g_{\bullet}\). We observe that we have the equality
\[\left\lfloor\sum_{i\in I}\alpha_{i}\widetilde{h}^{(i)}_{c,\delta}\right\rfloor=\left\lfloor\sum_{i\in I,\ l\in L}\alpha_{i}c_{l}f^{(i,l)}_{\delta}\right\rfloor=\sum_{i\in I,\ l\in L}\left\lfloor\alpha_{i}c_{l}\right\rfloor f^{(i,l)}_{\delta}+\left\lfloor\sum_{i\in I,\ l\in L}\left\{\alpha_{i}c_{l}\right\}f^{(i,l)}_{\delta}\right\rfloor.\]
This motivates us to define
\[\widetilde{g}_{e,\delta,\phi}:=\sum_{i\in I,\ l\in L}e_{i,l}f^{(i,l)}_{\delta}+e_{\diamond}\left\lfloor\sum_{i\in I,\ l\in L}\phi_{i,l}f^{(i,l)}_{\delta}\right\rfloor,\qquad e\in\mathbb{Z}^{(I\times L)\cup\{\diamond\}},\ \phi\in[0,1)^{I\times L},\ \delta\in[0,1)^{M},\]
where \(\diamond\) is some index that does not belong to \(I\times L\). Letting also
\[f^{(\diamond)}_{\delta,\phi}:=\left\lfloor\sum_{i\in I,\ l\in L}\phi_{i,l}f^{(i,l)}_{\delta}\right\rfloor,\qquad\phi\in[0,1)^{I\times L}\,,\ \delta\in[0,1)^{M},\]
we see that \(\widetilde{g}_{\bullet}\) takes the required form and (setting \(\phi_{i,l}=\{\alpha_{i}c_{l}\}\) and \(e_{\diamond}=1\)) we have \(\widetilde{g}_{\bullet}\leadsto g_{\bullet}\).
Combining the closure properties proved above, we infer from Proposition 3.3 that the conclusion holds for all parametric GP maps.
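The only non-trivial manipulation in step 4 above is the elementary splitting \(\lfloor m+x\rfloor=m+\lfloor x\rfloor\) for \(m\in\mathbb{Z}\), applied with \(m=\sum_{i,l}\lfloor\alpha_{i}c_{l}\rfloor f^{(i,l)}_{\delta}(n)\). The following small script (an informal sanity check on random rational inputs, not part of the proof; all names are ours) confirms the resulting identity when the \(f_{k}\) are integers.

```python
from fractions import Fraction
import math
import random

def check_floor_splitting(trials: int = 1000) -> bool:
    # Check floor(sum a_k * f_k) == sum floor(a_k) * f_k + floor(sum {a_k} * f_k)
    # exactly, using rational a_k and integer f_k.
    for _ in range(trials):
        k = random.randint(1, 5)
        a = [Fraction(random.randint(-10**4, 10**4), random.randint(1, 10**3)) for _ in range(k)]
        f = [random.randint(-100, 100) for _ in range(k)]
        frac = [x - math.floor(x) for x in a]
        lhs = math.floor(sum(x * y for x, y in zip(a, f)))
        rhs = sum(math.floor(x) * y for x, y in zip(a, f)) + \
              math.floor(sum(x * y for x, y in zip(frac, f)))
        if lhs != rhs:
            return False
    return True

print(check_floor_splitting())  # expected output: True
```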
**Proposition 3.5**.: _Let \(M\in\mathbb{N}\) and let \(g_{\bullet}\colon\mathbb{Z}\to[M]\) be a GP map parametrised by \(\mathbb{R}^{I}\) for a finite set \(I\). Then there exists a GP map \(\widetilde{g}_{\bullet}\colon\mathbb{Z}\to[M]\) parametrised by \([0,1)^{J}\) for a finite set \(J\) such that \(\widetilde{g}_{\bullet}\leadsto g_{\bullet}\)._
Proof.: Let \(\widetilde{g}^{(0)}_{\bullet}\leadsto g_{\bullet}\) be the parametric GP map from Proposition 3.4, and let
\[\widetilde{g}^{(0)}_{a,\beta} =\sum_{j\in J}a_{j}h^{(j)}_{\beta}, a\in\mathbb{Z}^{J},\beta\in[0,1)^{K}.\]
Since \(g_{\bullet}\) takes values in \([M]\), the value of \(\widetilde{g}^{(0)}_{a,\beta}(n)\) matters only through its residue modulo \(M\), so we expect that it is enough to consider the values of \(a\) with \(a\in[M]^{J}\). This motivates us to put
\[\widetilde{g}_{\alpha,\beta} =\sum_{j\in J}\left\lfloor M\alpha_{j}\right\rfloor h^{(j)}_{\beta}, \alpha\in[0,1)^{J},\beta\in[0,1)^{K}.\]
Let \(\phi\colon\mathbb{R}^{I}\to\mathbb{Z}^{J}\) and \(\psi\colon\mathbb{R}^{I}\to[0,1)^{K}\) be GP maps such that \(g_{\alpha}=\widetilde{g}^{(0)}_{\phi(\alpha),\psi(\alpha)}\). Let \(\theta\colon\mathbb{R}^{I}\to[0,1)^{J}\) be given by \(\theta(\alpha):=\{\phi(\alpha)/M\}\) (with the fractional part taken coordinatewise). Then
\[\widetilde{g}^{(0)}_{\phi(\alpha),\beta}(n)\equiv\widetilde{g}_{\theta(\alpha ),\beta}(n)\bmod M,\qquad\text{ for all }n\in\mathbb{Z},\alpha\in\mathbb{R}^{I},\beta\in[0,1)^{K}.\]
Since \(g_{\bullet}\) takes values in \([M]\), it follows that
\[g_{\alpha}(n)=\widetilde{g}^{(0)}_{\phi(\alpha),\psi(\alpha)}(n)\equiv \widetilde{g}_{\theta(\alpha),\psi(\alpha)}(n)\bmod M,\qquad\text{ for all }n\in\mathbb{Z},\alpha\in\mathbb{R}^{I}.\]
Replacing \(\widetilde{g}_{\bullet}\) with \(M\cdot\{\widetilde{g}_{\bullet}/M\}\) if necessary, we may further ensure that \(\widetilde{g}_{\bullet}\) takes values in \([M]\). As a consequence, \(\widetilde{g}_{\bullet}\leadsto g_{\bullet}\), as needed.
**Proposition 3.6**.: _Let \(\mathbf{a}=(a(n))_{n\in\mathbb{Z}}\) be a (two-sided) bracket word over a finite alphabet \(\Sigma\), and let \(g_{\bullet}\colon\mathbb{Z}\to\mathbb{Z}\) be a GP map parametrised by \(\mathbb{R}^{I}\) for some finite set \(I\). Then there exists a constant \(C>0\) such that, as \(N\to\infty\), we have_
\[\left|\left\{\left(a\left(g_{\alpha}(n)\right)\right)_{n=0}^{N-1}\Bigm{|} \alpha\in\mathbb{R}^{I}\right\}\right|=O(N^{C}).\]
_Above, the implicit constant depends on \(\mathbf{a}\) and \(g_{\bullet}\)._
Proof.: Let \(M:=|\Sigma|\). We may freely assume that \(\Sigma=[M]\), in which case \(a\) is a GP map. Thus, \(a\circ g_{\bullet}\) is a GP map parametrised by \(\mathbb{R}^{I}\) and taking values in \([M]\). By Proposition 3.5, there exists a GP map \(\widetilde{g}_{\bullet}\) parametrised by \([0,1)^{J}\) for a finite set \(J\) such that \(\widetilde{g}_{\bullet}\leadsto a\circ g_{\bullet}\). Thus, it suffices to show that, for a certain \(C>0\), the number of words \((\widetilde{g}_{\alpha}(n))_{n=0}^{N-1}\) for \(\alpha\in[0,1)^{J}\) is \(O(N^{C})\) as \(N\to\infty\). This is precisely Theorem 3.2.
As a special case, we obtain a bound on the number of subsequences of bracket words along polynomials of a given degree.
**Corollary 3.7**.: _Let \(\mathbf{a}=(a(n))_{n\in\mathbb{Z}}\) be a (two-sided) bracket word over a finite alphabet \(\Sigma\) and let \(d\in\mathbb{N}\). Then there exists a constant \(C>0\) such that, as \(N\to\infty\) we have_
\[\left|\left\{\left(a(\lfloor p(n)\rfloor)\right)_{n=0}^{N-1}\bigm{|}p\in \mathbb{R}_{\leq d}[x]\right\}\right|=O(N^{C}),\]
_where the implied constant depends only on \(\mathbf{a}\) and \(d\)._
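Although the constant \(C\) in Corollary 3.7 is not explicit, the polynomial nature of the bound can be probed empirically. The sketch below (an informal experiment, not a proof; the Sturmian word, the degree, and the sampling ranges are arbitrary choices of ours, and random sampling only yields a lower bound on the true count) counts distinct words \((a(\lfloor p(n)\rfloor))_{n<N}\) along randomly drawn quadratic polynomials.

```python
import math
import random

PHI = (1 + math.sqrt(5)) / 2

def a(n: int) -> int:
    # A simple two-sided bracket word: the Sturmian word a(n) = floor((n+1)*phi) - floor(n*phi).
    return math.floor((n + 1) * PHI) - math.floor(n * PHI)

def count_subwords(N: int, degree: int, samples: int = 5000) -> int:
    # Lower bound on the number of distinct words (a(floor(p(n))))_{n<N}, deg p <= degree,
    # obtained by random sampling of the coefficients of p.
    words = set()
    for _ in range(samples):
        coeffs = [random.uniform(-100, 100) for _ in range(degree + 1)]
        word = tuple(a(math.floor(sum(c * n ** k for k, c in enumerate(coeffs)))) for n in range(N))
        words.add(word)
    return len(words)

for N in (4, 8, 16, 32):
    print(N, count_subwords(N, degree=2))
```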
Thus we are now in a position to prove Theorem A.
Proof of Theorem A.: We aim to estimate the number of subwords of length \(H\) of \((a(\lfloor f(n)\rfloor))_{n=0}^{\infty}\), that is, we count words of the form
\[(a(\lfloor f(N)\rfloor),\ldots,a(\lfloor f(N+H-1)\rfloor))=(a(\lfloor f(N+h) \rfloor))_{h=0}^{H-1}\]
for \(N\in\mathbb{N}\). Since \(f\) has polynomial growth, there exists \(k\in\mathbb{N}\) such that \(f(t)\ll t^{k}\). We choose \(\ell\geq k+1\) and apply Theorem 2.11 to find some \(0<\eta<1\) such that for any \(H\in\mathbb{N}\) at least one of the following holds
1. \(N\) _is small:_ \(N=O(H^{(\ell+\eta)/(\ell-k)})\)_._
2. \(e_{N}\) _is sparse:_ There are at most \(O(H^{\eta})\) values of \(h\in[H]\) such that \(e_{N}(h)\neq 0\)_._
3. \(e_{N}\) _is structured:_ There exists a partition of \([H]\) into \(O(H^{\eta})\) arithmetic progressions with step \(O(H^{\eta})\) on which \(e_{N}\) is constant, where \[e_{N}(h):=\lfloor f(N+h)\rfloor-\lfloor P_{N,\ell}(h)\rfloor\,,\qquad 0\leq h<H,\] and \(P_{N,\ell}\) is the Taylor polynomial of \(f\) (see (4)).

We distinguish the three possible cases. Obviously (i) contributes at most \(O(H^{\ell+1})\) different words. For (ii) we first consider \(a(\lfloor P_{N,\ell}(h)\rfloor)_{h=0}^{H-1}\). By Corollary 3.7 this word is contained in a set of size \(O(H^{C})\). By assumption \(a(\lfloor f(N+h)\rfloor)\neq a(\lfloor P_{N,\ell}(h)\rfloor)\) for at most \(O(H^{\eta})\) values of \(h\in[H]\), which can be chosen in \(\binom{H}{O(H^{\eta})}\) ways. For each position \(h\) with \(a(\lfloor f(N+h)\rfloor)\neq a(\lfloor P_{N,\ell}(h)\rfloor)\) we have at most \(|\Sigma|\) possibilities for the value of
\(a(\lfloor f(N+h)\rfloor)\). In total, we can estimate the number of subwords of length \(H\) in this case (up to a constant) by
\[H^{C}\cdot\binom{H}{O(H^{\eta})}\cdot|\Sigma|^{O(H^{\eta})} \leq H^{C}\cdot H^{O(H^{\eta})}\cdot|\Sigma|^{O(H^{\eta})}\] \[=\exp\left(C\log H+O((\log H)\cdot H^{\eta})+O((\log|\Sigma|) \cdot H^{\eta})\right)\] \[=\exp\left(O_{C,\eta}(H^{(1+\eta)/2})\right).\]
In the last case (iii) we decompose \([H]\) into \(O(H^{\eta})\) arithmetic progressions on which \(e_{N}\) is constant. We let these arithmetic progressions be denoted by \(P_{1},\ldots,P_{s}\). As there are at most \(H^{3}\) arithmetic progressions contained in \([H]\) we can bound the number of possible different decompositions by \((H^{3})^{O(H^{\eta})}\). On every such progression there exists a polynomial \(q\) (which is either \(P_{N,\ell},P_{N,\ell}+1\) or \(P_{N,\ell}-1\)) such that \(a(\lfloor f(N+h)\rfloor)=a(\lfloor q(h)\rfloor)\). As a polynomial along an arithmetic progression is again a polynomial, by Corollary 3.7 we can bound the number of subwords appearing along some \(P_{j}\) by \(H^{C}\). In total, we can estimate the number of subwords of length \(H\) in this case by
\[(H^{3})^{O(H^{\eta})}\cdot(H^{C})^{O(H^{\eta})} =\exp((C+3)\log(H)\cdot O(H^{\eta}))\] \[=\exp(O_{C,\eta}(H^{(1+\eta)/2})).\]
This finishes the proof for \(\delta=(1+\eta)/2<1\).
## 4. Nilmanifolds
In this section we recall some basic definitions and results on nilmanifolds and discuss the connection to generalised polynomials, which goes back to the work of Bergelson and Leibman [1].
### Basic definitions
In this section, we very briefly introduce definitions and basic facts related to nilmanifolds and nilpotent dynamics. Throughout this section, we let \(G\) denote an \(s\)-step nilpotent Lie group of some dimension \(D\). We assume that \(G\) is connected and simply connected. We also let \(\Gamma<G\) denote a subgroup that is discrete and cocompact, meaning that the quotient space \(G/\Gamma\) is compact. The space \(X=G/\Gamma\) is called an \(s\)_-step nilmanifold_. A _degree-\(d\) filtration_ on \(G\) is a sequence \(G_{\bullet}\) of subgroups
\[G=G_{0}=G_{1}\geq G_{2}\geq G_{3}\geq\ldots\]
such that \(G_{d+1}=\{e_{G}\}\) (and hence \(G_{i}=\{e_{G}\}\) for all \(i>d\)) and for each \(i,j\) we have \([G_{i},G_{j}]\subseteq G_{i+j}\), where \([G_{i},G_{j}]\) is the group generated by the commutators \([g,h]=ghg^{-1}h^{-1}\) with \(g\in G_{i}\), \(h\in G_{j}\). A standard example of a filtration is the _lower central series_ given by \(G_{(0)}=G_{(1)}=G\) and \(G_{(i+1)}=[G,G_{(i)}]\) for \(i\geq 1\).
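For instance, the Heisenberg group and its standard lattice,
\[G=\left\{\begin{pmatrix}1&x&z\\0&1&y\\0&0&1\end{pmatrix}\ \middle|\ x,y,z\in\mathbb{R}\right\},\qquad\Gamma=\left\{\begin{pmatrix}1&a&c\\0&1&b\\0&0&1\end{pmatrix}\ \middle|\ a,b,c\in\mathbb{Z}\right\},\]
give a connected, simply connected \(2\)-step nilpotent Lie group with lower central series \(G_{(0)}=G_{(1)}=G\), \(G_{(2)}=[G,G]=\{x=y=0\}\cong\mathbb{R}\) and \(G_{(3)}=\{e_{G}\}\), and \(X=G/\Gamma\) is then a compact \(2\)-step nilmanifold. We recall this standard example purely for illustration and will refer to it again below.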
A _Mal'cev basis_ compatible with \(\Gamma\) and \(G_{\bullet}\) is a basis \(\mathcal{X}=(X_{1},X_{2},\ldots,X_{D})\) of the Lie algebra \(\mathfrak{g}\) of \(G\) such that
1. for each \(0\leq j\leq D\), the subspace \(\mathfrak{h}_{j}:=\operatorname{span}\left(X_{j+1},X_{j+2},\ldots,X_{D}\right)\) is a Lie algebra ideal in \(\mathfrak{g}\);
2. for each \(0\leq i\leq d\), each \(g\in G_{i}\) has a unique representation as \(g=\exp(t_{D(i)+1}X_{D(i)+1})\cdots\exp(t_{D-1}X_{D-1})\exp(t_{D}X_{D})\), where \(D(i):=\operatorname{codim}G_{i}\) and \(t_{j}\in\mathbb{R}\) for \(D(i)<j\leq D\);
3. \(\Gamma\) is the set of all products \(\exp(t_{1}X_{1})\exp(t_{2}X_{2})\cdots\exp(t_{D}X_{D})\) with \(t_{j}\in\mathbb{Z}\) for \(1\leq j\leq D\).
If the Lie bracket is given in coordinates by
\[[X_{i},X_{j}]=\sum_{k=1}^{D}c_{i,j}^{(k)}X_{k},\]
where all of the constants \(c_{i,j}^{(k)}\) are rationals with height at most \(M\) then we will say that the complexity of \((G,\Gamma,G_{\bullet})\) is at most \(M\). We recall that the height of a rational number \(a/b\) is \(\max(\left|a\right|,\left|b\right|)\) (\(a\in\mathbb{Z}\), \(b\in\mathbb{N}\), \(\gcd(a,b)=1\)).
We will usually keep the choice of the Mal'cev basis implicit, and assume that each filtered nilmanifold under consideration comes equipped with a fixed choice of Mal'cev basis. The Mal'cev basis \(\mathcal{X}\) induces coordinate maps \(\tau\colon X\to[0,1)^{D}\) and \(\widetilde{\tau}\colon G\to\mathbb{R}^{D}\), such that
\[x =\exp(\tau_{1}(x)X_{1})\exp(\tau_{2}(x)X_{2})\cdots\exp(\tau_{D}( x)X_{D})\Gamma, x \in X\] \[g =\exp(\widetilde{\tau}_{1}(g)X_{1})\exp(\widetilde{\tau}_{2}(g)X _{2})\cdots\exp(\widetilde{\tau}_{D}(g)X_{D}), g \in G.\]
The Mal'cev basis also induces a natural choice of a right-invariant metric on \(G\) and a metric on \(X\). We refer to [13, Def. 2.2] for a precise definition. Keeping the dependence on \(\mathcal{X}\) implicit, we will use the symbol \(d\) to denote either of those metrics.
The space \(X\) comes equipped with the Haar measure \(\mu_{X}\), which is the unique Borel probability measure on \(X\) invariant under the action of \(G\): \(\mu_{X}(gE)=\mu_{X}(E)\) for all measurable \(E\subseteq X\) and \(g\in G\). When there is no risk of confusion, we write \(dx\) as a shorthand for \(d\mu_{X}(x)\).
A map \(g\colon\mathbb{Z}\to G\) is polynomial with respect to the filtration \(G_{\bullet}\), denoted \(g\in\operatorname{poly}(\mathbb{Z},G_{\bullet})\), if it takes the form
\[g(n)=g_{0}g_{1}^{n}\dots g_{d}^{n\choose d},\]
where \(g_{i}\in G_{i}\) for all \(0\leq i\leq d\) (cf. [13, Lem. 6.7]; see also [13, Def. 1.8] for an alternative definition). Although it is not immediately apparent from the definition above, polynomial sequences with respect to a given filtration form a group and are preserved under dilation.
### Semialgebraic geometry
A basic semialgebraic set \(S\subseteq\mathbb{R}^{D}\) is a set given by a finite number of polynomial equalities and inequalities:
\[S=\left\{x\in\mathbb{R}^{D}\ \big{|}\ P_{1}(x)>0,\dots,P_{n}(x)>0,Q_{1}(x)=0,\dots,Q_{m}(x)=0\right\}. \tag{11}\]
A semialgebraic set is a finite union of basic semialgebraic sets. In a somewhat ad hoc manner, we define the complexity of the basic semialgebraic set \(S\) given by (11) to be the sum \(\sum_{i=1}^{n}\deg P_{i}+\sum_{j=1}^{m}\deg Q_{j}\) of degrees of polynomials appearing in its definition. (Strictly speaking, we take the infimum over all representations of \(S\) in the form (11).) We also define the complexity of a semialgebraic set
\[S=S_{1}\cup S_{2}\cup\dots\cup S_{r}. \tag{12}\]
represented as a finite union of basic semialgebraic sets \(S_{i}\) to be the sum of the complexities of the \(S_{i}\). (Again, we take the infimum over all representations (12).)
Using the Mal'cev coordinates to identify the nilmanifold \(X\) with \([0,1)^{D}\), we extend the notion of a semialgebraic set to subsets of \(X\). A map \(F\colon X\to\mathbb{R}\) is piecewise polynomial if there exists a partition \(X=\bigcup_{i=1}^{r}S_{i}\) into semialgebraic pieces and polynomial maps \(\Phi_{i}\colon\mathbb{R}^{D}\to\mathbb{R}\) such that \(F(x)=\Phi_{i}(\tau(x))\) for each
\(1\leq i\leq r\) and \(x\in S_{i}\). One can check that these notions are independent of the choice of basis, although strictly speaking we will not need this fact.
### Quantitative equidistribution
The Lipschitz norm of a function \(F\colon X\to\mathbb{R}\) is defined as
\[\left\|F\right\|_{\mathrm{Lip}}=\left\|F\right\|_{\infty}+\sup_{x,y\in X,\ x \neq y}\frac{\left|F(x)-F(y)\right|}{d(x,y)}\]
A sequence \((x_{n})_{n=0}^{N-1}\) in \(X\) is \(\delta\)_-equidistributed_ if for each Lipschitz function \(F\colon X\to\mathbb{R}\) we have
\[\left|\operatorname*{\mathbbm{E}}_{n<N}F(x_{n})-\int_{X}F(x)dx\right|\leq \delta\left\|F\right\|_{\mathrm{Lip}}.\]
In the case where \(X=[0,1]\), this notion is closely related to the discrepancy of a sequence (see (2)). In fact, for \(\delta>0\) small enough, \((x_{n})_{n=0}^{N-1}\) having discrepancy at most \(\delta\) implies that it is \(\delta^{O(1)}\)-equidistributed, and conversely. One direction follows immediately from the Koksma-Hlawka inequality, and the other direction can be found, for example, in the proof of Proposition 5.2 in [1].
More restrictively, \((x_{n})_{n=0}^{N-1}\) is _totally \(\delta\)-equidistributed_ if for each arithmetic progression \(P\subseteq[N]\) of length at least \(\delta N\) we have
\[\left|\operatorname*{\mathbbm{E}}_{n\in P}F(x_{n})-\int_{X}F(x)dx\right|\leq \delta\left\|F\right\|_{\mathrm{Lip}}.\]
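As a toy illustration of these notions on the simplest (\(1\)-step) nilmanifold \(X=\mathbb{R}/\mathbb{Z}\), the following sketch estimates the equidistribution defect of the orbit \(x_{n}=\{n\alpha\}\) against a single Lipschitz test function (an informal numerical experiment only; the test function, \(\alpha\) and \(N\) are arbitrary choices of ours, and a genuine verification would require a supremum over all Lipschitz \(F\)).

```python
import math

def defect(alpha: float, N: int, F, F_lip: float) -> float:
    # |E_{n<N} F(x_n) - \int_0^1 F(t) dt| / ||F||_Lip for the orbit x_n = {n * alpha}.
    orbit_avg = sum(F((n * alpha) % 1.0) for n in range(N)) / N
    M = 10_000                                   # crude Riemann-sum approximation of the integral
    integral = sum(F(k / M) for k in range(M)) / M
    return abs(orbit_avg - integral) / F_lip

F = lambda t: math.cos(2 * math.pi * t)          # Lipschitz norm at most 1 + 2*pi
for N in (10**2, 10**3, 10**4, 10**5):
    print(N, defect(math.sqrt(2), N, F, 1 + 2 * math.pi))
```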
A sequence \((\varepsilon_{n})_{n=0}^{N-1}\) in \(G\) is _\((M,N)\)-smooth_ if \(d(\varepsilon_{n},e_{G})\leq M\) and \(d(\varepsilon_{n},\varepsilon_{n+1})\leq M/N\) for all \(n\in[N-1]\). A group element \(\gamma\in G\) is _\(Q\)-rational_ if \(\gamma^{r}\in\Gamma\) for some positive integer \(r\leq Q\). A point \(x\in G/\Gamma\) is _\(Q\)-rational_ if it takes the form \(x=\gamma\Gamma\) for some \(Q\)-rational \(\gamma\in G\). A sequence \((x_{n})_{n=0}^{N-1}\) in \(X\) is \(Q\)-rational if each point \(x_{n}\) is \(Q\)-rational.
**Theorem 4.1** ([14, Thm. 1.19]).: _Let \(C>0\) be a constant. Let \(G\) be a connected, simply connected nilpotent Lie group of dimension \(D\), let \(\Gamma<G\) be a lattice, let \(G_{\bullet}\) be a nilpotent filtration on \(G\) of length \(d\), and assume that the complexity of \((G,\Gamma,G_{\bullet})\) is at most \(M_{0}\). Then for each \(N\in\mathbb{N}\) and each polynomial sequence \(g\in\mathrm{poly}(\mathbb{Z},G_{\bullet})\) there exists an integer \(M\) with \(M_{0}\leq M\ll M_{0}^{O_{C,d,D}(1)}\) and a decomposition \(g(n)=\varepsilon(n)g^{\prime}(n)\gamma(n)\) (\(n\in\mathbb{Z}\)), where \(\varepsilon,g^{\prime},\gamma\in\mathrm{poly}(\mathbb{Z},G_{\bullet})\) and_
1. _the sequence_ \((\varepsilon(n))_{n=0}^{N-1}\) _is_ \((M,N)\)_-smooth;_
2. _the sequence_ \((\gamma(n)\Gamma)_{n=0}^{N-1}\) _is_ \(M\)_-rational and periodic with period_ \(\leq M\)_;_
3. _there is a group_ \(G^{\prime}<G\) _with Mal'cev basis_ \(\mathcal{X}^{\prime}\) _in which each element is an_ \(M\)_-rational combination of elements of \(\mathcal{X}\) such that_ \(g^{\prime}(n)\in G^{\prime}\) _for all_ \(n\in\mathbb{Z}\)_, and the sequence_ \((g^{\prime}(n)\Gamma^{\prime})_{n=0}^{N-1}\) _is totally_ \(1/M^{C}\)_-equidistributed in_ \(G^{\prime}/\Gamma^{\prime}\)_, where_ \(\Gamma^{\prime}=\Gamma\cap G^{\prime}\)_._
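To get a feeling for what the factorisation asserts, consider the simplest case \(G=\mathbb{R}\), \(\Gamma=\mathbb{Z}\) and the linear sequence \(g(n)=\alpha n\); the following toy dichotomy is included only to illustrate the roles of the three factors, and we suppress the precise quantifiers. If \(\alpha\) lies within \(M/N\) of a rational \(a/q\) with \(q\leq M\), one may take \(\varepsilon(n)=(\alpha-a/q)n\), which is \((M,N)\)-smooth, \(\gamma(n)=(a/q)n\), which is \(M\)-rational and periodic modulo \(\mathbb{Z}\) with period \(q\leq M\), and \(g^{\prime}(n)=0\), trivially equidistributed in the trivial subgroup \(G^{\prime}=\{0\}\). If no such rational approximation exists, one may instead take \(\varepsilon=\gamma=0\) and \(g^{\prime}=g\); in that case \((\{\alpha n\})_{n=0}^{N-1}\) is, roughly speaking, highly equidistributed in \(\mathbb{R}/\mathbb{Z}\).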
### Generalised polynomials
The connection between nilmanifolds and generalised polynomials was first elucidated by Bergelson and Leibman [1].
**Theorem 4.2** ([1]).: _Let \(f\colon\mathbb{Z}\to[0,1)\) be a sequence. Then the following conditions are equivalent:_
1. \(f\) _is a GP map;_
2. _there exists a connected, simply connected nilpotent Lie group_ \(G\)_, lattice_ \(\Gamma<G\)_,_ \(g\in G\) _and a piecewise polynomial map_ \(F\colon G/\Gamma\to[0,1)\) _such that_ \(f(n)=F(g^{n}\Gamma)\) _for all_ \(n\in\mathbb{Z}\)
\((iii)\) _there exists a connected, simply connected nilpotent Lie group \(G\) of some dimension \(D\), lattice \(\Gamma<G\), a compatible filtration \(G_{\bullet}\), a polynomial sequence \(g\in\operatorname{poly}(\mathbb{Z},G_{\bullet})\) and an index \(1\leq j\leq D\) such that \(f(n)=\tau_{j}(g(n)\Gamma)\) for all \(n\in\mathbb{Z}\)._
**Remark 4.3**.: Strictly speaking, [1] does not include the assumption that \(G\) should be connected and simply connected. However, this requirement can be ensured by replacing \(G\) with a larger group. (cf. the "lifting argument" on [11, p. 368] and also [1, Thm. A*]). The cost of this operation is that in (ii) one may not assume that the action of \(g\) on \(G/\Gamma\) is minimal, but we do not need this assumption.
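For a concrete instance of how nilmanifold orbits give rise to bracket-type generalised polynomials as in Theorem 4.2 (a standard computation, included here only for the reader's convenience), take \(G\) and \(\Gamma\) as in the Heisenberg example recalled earlier in this section and let
\[g=\begin{pmatrix}1&\alpha&0\\0&1&\beta\\0&0&1\end{pmatrix},\qquad\text{so that}\qquad g^{n}=\begin{pmatrix}1&n\alpha&\binom{n}{2}\alpha\beta\\0&1&n\beta\\0&0&1\end{pmatrix}\]
by a straightforward induction. Reducing \(g^{n}\) modulo \(\Gamma\) to the unique representative whose matrix entries lie in \([0,1)\) yields the entries
\[\left(\{n\alpha\},\ \{n\beta\},\ \left\{\tbinom{n}{2}\alpha\beta-n\alpha\lfloor n\beta\rfloor\right\}\right),\]
each of which is a GP map of \(n\); in particular, the top-right entry involves the genuinely "bracketed" expression \(n\alpha\lfloor n\beta\rfloor\).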
In our applications, we will need to simultaneously represent maps of the form \(f(\lfloor p(n)\rfloor)\) where \(f\) is a fixed GP map and \(p\) is a polynomial which is allowed to vary. Such a representation is readily obtained from Theorem 4.2.
**Theorem 4.4**.: _Let \(f\colon\mathbb{Z}\to\mathbb{R}\) be a bounded GP map and let \(d\in\mathbb{N}\). Then there exists a connected, simply connected nilpotent Lie group \(G\), a lattice \(\Gamma<G\), a filtration \(G_{\bullet}\), and a piecewise polynomial map \(F\colon G/\Gamma\to\mathbb{Z}\) such that for each polynomial \(p(x)\in\mathbb{R}[x]\) with \(\deg p\leq d\) there exists \(g_{p}\in\operatorname{poly}(G_{\bullet})\) such that for all \(n\in\mathbb{Z}\) we have \(f\left(\lfloor p(n)\rfloor\right)=F(g_{p}(n)\Gamma)\)._
Proof.: By Theorem 4.2, there exists a nilmanifold \(G^{(0)}/\Gamma^{(0)}\) together with a piecewise polynomial map \(F^{(0)}\colon G^{(0)}/\Gamma^{(0)}\to\mathbb{R}\), and a group element \(g_{0}\in G^{(0)}\) such that \(f(n)=F^{(0)}(g_{0}^{n}\Gamma)\) for all \(n\in\mathbb{Z}\). Following the strategy in [11, Lem. 4.1], let \(G:=G^{(0)}\times\mathbb{R}\) and \(\Gamma:=\Gamma^{(0)}\times\mathbb{Z}\) and let \(F\colon G/\Gamma\to\mathbb{R}\) be given by \(F(t+\mathbb{Z},h\Gamma^{(0)}):=F^{(0)}(g_{0}^{-\{t\}}h\Gamma^{(0)})\) for \(t\in\mathbb{R}\) and \(h\in G^{(0)}\). This construction guarantees that \(F\) is piecewise polynomial and for all \(t\in\mathbb{R}\) we have
\[F(t+\mathbb{Z},g_{0}^{t}\Gamma^{(0)})=F^{(0)}(g_{0}^{\lfloor t\rfloor}\Gamma^{(0)})=f(\lfloor t\rfloor).\]
For \(p\in\mathbb{R}[x]\) with \(\deg p\leq d\) and \(n\in\mathbb{Z}\) let \(g_{p}(n):=\left(p(n),g_{0}^{p(n)}\right)\). Then \(g_{p}\) is polynomial with respect to the filtration \(G_{\bullet}\) given by \(G_{i}=G_{\left(\lceil i/d\rceil\right)}\), where \(\left(G_{(j)}\right)_{j}\) denotes the lower central series, and we have \(f\left(\lfloor p(n)\rfloor\right)=F(g_{p}(n)\Gamma)\) for all \(n\in\mathbb{Z}\).
## 5. Mobius orthogonality
### Main result
In this section, we discuss Mobius orthogonality of bracket words along Hardy field sequences. Our main result is Theorem B, which we restate below.
**Theorem 5.1**.: _Let \(\mathbf{a}=(a(n))_{n\in\mathbb{Z}}\) be a (two-sided) \(\mathbb{R}\)-valued bracket word and let \(f\colon\mathbb{R}_{+}\to\mathbb{R}\) be a Hardy field function with polynomial growth. Then_
\[\frac{1}{N}\sum_{n=1}^{N}\mu(n)a\left(\lfloor f(n)\rfloor\right)\to 0\qquad\text{as }N\to\infty. \tag{13}\]
As usual, we will use Taylor expansion to approximate the restriction of \(f(n)\) to an interval with a polynomial sequence, and then use Theorem 2.11 to control the error term involved in computing \(\lfloor f(n)\rfloor\). The sequence \(a(\lfloor f(n)\rfloor)\) can then be represented on a nilmanifold by Bergelson-Leibman machinery. As the next step, we require a suitable result on Mobius orthogonality in short intervals. In Section 5.2, we will prove the following theorem, which is closely related to [12, Thm. 1.1(i)]. Below, we let \(\mathcal{AP}\) denote the set of all arithmetic progressions in \(\mathbb{Z}\).
**Theorem 5.2**.: _Let \(G\) be a connected, simply connected nilpotent Lie group, let \(\Gamma<G\) be a lattice, let \(G_{\bullet}\) be a filtration on \(G\), assume that \(G_{\bullet}\) and \(\Gamma\) are compatible, and let \(F\colon G/\Gamma\to\mathbb{R}\) be a finitely-valued piecewise polynomial map. Let \(N,H\) be integers with \(N^{0.626}\leq H\leq N\). Then_
\[\sup_{g\in\operatorname{poly}(G_{\bullet})}\sup_{P\in\mathcal{AP}}\left| \operatorname*{\underline{\mathbb{E}}}_{h<H}1_{P}(h)\mu(N+h)F(g(h)\Gamma) \right|=o_{N\to\infty}(1), \tag{14}\]
_where the rate of convergence may depend on \(G,\Gamma,G_{\bullet}\) and \(F\)._
Proof of Theorem 5.1 assuming Theorem 5.2.: Applying a dyadic decomposition, it will suffice to show that
\[\operatorname*{\underline{\mathbb{E}}}_{N\leq n<2N}\mu(n)a\left(\lfloor f(n)\rfloor\right)\to 0\qquad\text{as }N\to\infty. \tag{15}\]
Fix a small \(\varepsilon>0\). We will show that, for all sufficiently large \(N\) we have
\[\left|\operatorname*{\underline{\mathbb{E}}}_{N\leq n<2N}\mu(n)a\left(\lfloor f (n)\rfloor\right)\right|\ll\varepsilon. \tag{16}\]
Splitting the average in (16) into intervals of length \(\left\lceil(2N)^{0.7}\right\rceil\), we see that (16) will follow once we show that for sufficiently large \(N\) and for \(H\) satisfying \(N^{0.7}\leq H<N\) we have
\[\left|\operatorname*{\underline{\mathbb{E}}}_{h<H}\mu(N+h)a\left(\lfloor f(N +h)\rfloor\right)\right|\ll\varepsilon. \tag{17}\]
Pick an integer \(k\in\mathbb{N}\) such that \(f(t)\ll t^{k}\), and let \(\ell=10k\). By Theorem 2.11, we have
\[\lfloor f(N+h)\rfloor=\lfloor P_{N}(h)\rfloor+e_{N}(h), \tag{18}\]
where \(P_{N}\) is a polynomial of degree (at most) \(\ell\) and one of the conditions 2.11(i)-(iii) holds. In the case (i) we have \(N\ll_{\varepsilon}H^{10/9}\leq N^{7/9}\), which implies that \(N=O_{\varepsilon}(1)\). Assuming that \(N\) is sufficiently large, we may disregard this case.
In the case (ii) we have \(\operatorname*{\underline{\mathbb{E}}}_{h<H}|e_{N}(h)|<\varepsilon\), and as a consequence
\[\operatorname*{\underline{\mathbb{E}}}_{h<H}\mu(N+h)a\left(\lfloor f(N+h) \rfloor\right)=\operatorname*{\underline{\mathbb{E}}}_{h<H}\mu(N+h)a\left( \lfloor P_{N}(h)\rfloor\right)+O(\varepsilon). \tag{19}\]
By Theorem 4.4, there exists a connected and simply connected nilpotent Lie group \(G\), a lattice \(\Gamma<G\), a filtration \(G_{\bullet}\) and a finitely-valued piecewise polynomial map \(F\colon G/\Gamma\to\mathbb{Z}\) such that for each polynomial \(P\) of degree at most \(\ell\) there exists \(g\in\operatorname{poly}(G_{\bullet})\) such that \(a(\lfloor P(h)\rfloor)=F(g(h)\Gamma)\). In particular,
\[\left|\operatorname*{\underline{\mathbb{E}}}_{h<H}\mu(N+h)a\left(\lfloor P_{ N}(h)\rfloor\right)\right|\leq\sup_{g\in\operatorname{poly}(G_{\bullet})}\left| \operatorname*{\underline{\mathbb{E}}}_{h<H}\mu(N+h)a\left(F(g(h)\Gamma)\right| \tag{20}\]
By Theorem 5.2, for sufficiently large \(N\) the expression in (20) is bounded by \(\varepsilon\). Inserting this bound into (19) yields (17).
In the case (iii), passing to an arithmetic progression we may replace \(e_{N}\) with a constant sequence:
\[\left|\operatorname*{\underline{\mathbb{E}}}_{h<H}\mu(N+h)a\left( \lfloor f(N+h)\rfloor\right)\right| \tag{22}\] \[\ll_{\varepsilon}\max_{P\in\mathcal{AP}}\max_{e\in\{-1,0,1\}} \left|\operatorname*{\underline{\mathbb{E}}}_{h<H}\mu(N+h)1_{P}(h)a\left( \lfloor P_{N}(h)\rfloor+e\right)\right|. \tag{21}\]
To finish the argument, it suffices to apply Theorem 5.2 similarly to the previous case.
### Short intervals
The remainder of this section is devoted to proving Theorem 5.2. We will derive it from closely related estimates for correlations of the Mobius function with nilsequences in short intervals. Recall that we let \(\mathcal{AP}\) denote the set of all arithmetic progressions in \(\mathbb{Z}\).
**Theorem 5.3** (Corollary of Thm. 1.1(i) in [14]).: _Let \(N,H\) be integers with \(N^{0.626}\leq H\leq N\) and let \(\delta\in(0,1/2)\). Let \(G\) be a connected, simply connected nilpotent Lie group of dimension \(D\), let \(\Gamma<G\) be a lattice, let \(G_{\bullet}\) be a nilpotent filtration on \(G\) of length \(d\), and assume that the complexity of \((G,\Gamma,G_{\bullet})\) is at most \(1/\delta\). Let \(F\colon G/\Gamma\to\mathbb{C}\) be a function with Lipschitz norm at most \(1/\delta\). Then, for each \(A>0\) we have the bound_
\[\sup_{g\in\operatorname{poly}(G_{\bullet})}\sup_{P\in\mathcal{AP}}\left| \operatorname*{\underline{\mathbbm{E}}}_{h<H}\mu(N+h)1_{P}(h)F(g(h)\Gamma) \right|\ll_{A}\frac{(1/\delta)^{O_{d,D}(1)}}{\log^{A}N}. \tag{23}\]
This theorem is almost the ingredient that we need, except that in our application the function \(F\) is not necessarily continuous (much less Lipschitz). Instead, \(F\) is a finitely-valued piecewise polynomial function, meaning that there exists a partition \(G/\Gamma=\bigcup_{i=1}^{r}S_{i}\) into semialgebraic pieces and constants \(c_{i}\in\mathbb{R}\) such that for each \(x\in X\) and \(1\leq i\leq r\), \(F(x)=c_{i}\) if and only if \(x\in S_{i}\). In this case, it is enough to consider each of the level sets separately. It is clear that Theorem 5.2 will follow from the following more precise result.
**Theorem 5.4**.: _Let \(N,H\) be integers with \(N^{0.626}\leq H\leq N\) and let \(\delta\in(0,1/2)\). Let \(G\) be a connected, simply connected nilpotent Lie group of dimension \(D\), let \(\Gamma<G\) be a lattice, let \(G_{\bullet}\) be a nilpotent filtration on \(G\) of length \(d\), and assume that the complexity of \((G,\Gamma,G_{\bullet})\) is at most \(1/\delta\). Let \(S\subseteq G/\Gamma\) be a semialgebraic set with complexity at most \(E\). Then, for each \(A\geq 1\) we have the bound_
\[\sup_{g\in\operatorname{poly}(G_{\bullet})}\sup_{P\in\mathcal{AP}}\left| \operatorname*{\underline{\mathbbm{E}}}_{h<H}\mu(N+h)1_{P}(h)1_{S}(g(h) \Gamma)\right|\ll_{A}\frac{(1/\delta)^{O_{d,D,E}(1)}}{\log^{A}N}. \tag{24}\]
In the case where \((g(n)\Gamma)_{n}\) is highly equidistributed in \(G/\Gamma\), we will derive Theorem 5.4 directly from Theorem 5.3. In fact, we will obtain a slightly stronger version, given in Proposition 5.5 below. Then, we will deduce the general case of Theorem 5.4 using the factorisation theorem from [13]. In order to avoid unnecessarily obfuscating the notation, from this point onwards we will allow all implicit constants to depend on the parameters \(d\), \(D\) and \(E\); thus, for instance, the term on the right-hand side of (24) will be more succinctly written as \((1/\delta)^{O(1)}/\log^{A}N\).
### Equidistributed case
**Proposition 5.5**.: _Let \(N,H\) be integers with \(N^{0.626}\leq H\leq N\) and let \(\delta\in(0,1/2)\). Let \(G\) be a connected, simply connected nilpotent Lie group of dimension \(D\), let \(\Gamma<G\) be a lattice, let \(G_{\bullet}\) be a nilpotent filtration on \(G\) of length \(d\), and assume that the complexity of \((G,\Gamma,G_{\bullet})\) is at most \(1/\delta\). Let \(S\subseteq(\mathbb{R}/\mathbb{Z})\times(G/\Gamma)\) be a semialgebraic set with complexity at most \(E\). Then, for each \(A\geq 1\) there exists
\(B=O(A)\) such that_
\[\sup_{\begin{subarray}{c}g\in\operatorname{poly}(G_{\bullet})\\ \widetilde{\delta}-\operatorname{t.e.d.}\end{subarray}}\sup_{P\in\mathcal{AP}}\left|\operatorname*{\underline{\mathbbm{E}}}_{h<H}\mu(N+h)1_{P}(h)1_{S}\left(\frac{h}{H},g(h)\Gamma\right)\right|\ll_{A}\frac{(1/\delta)^{O(1)}}{\log^{A}N}, \tag{25}\]
_where \(\widetilde{\delta}:=1/\log^{B}N\) and the supremum is taken over all polynomial sequences \(g\) such that \((g(h)\Gamma)_{h=0}^{H}\) is totally \(\widetilde{\delta}\)-equidistributed._
Proof.: We may freely assume that \(\delta\geq 1/\log^{A}N\), since otherwise there is nothing to prove. In particular, \(1/\delta=O(\log^{A}N)\). Decomposing \(S\) into a bounded number of pieces, we may assume that \(S\) is a basic semialgebraic set. We will assume that \(\operatorname{int}S\neq\emptyset\); the case where \(\operatorname{int}S=\emptyset\) can be handled using similar methods and is somewhat simpler. Thus, \(S\) takes the form
\[S=\left\{(t,x)\in(\mathbb{R}/\mathbb{Z})\times(G/\Gamma)\ |\ P_{1}(t,x)>0,\ P_{2}(t,x)>0,\ \dots,P_{r}(t,x)>0\right\}, \tag{26}\]
where \(r=O(1)\) and \(P_{i}\) are polynomial maps (under identification of \((\mathbb{R}/\mathbb{Z})\times(G/\Gamma)\) with \([0,1)^{1+D}\)) with \(\deg P_{i}=O(1)\) for \(1\leq i\leq r\). Scaling, we may assume that \(\left\|P_{i}\right\|_{\infty}=1\) for all \(1\leq i\leq r\). Let \(\tau_{1}\) denote Mal'cev coordinates on \((\mathbb{R}/\mathbb{Z})\times(G/\Gamma)\), given by \(\tau_{1}(t,x)=(t,\tau(x))\), where we identify \([0,1)\) with \(\mathbb{R}/\mathbb{Z}\) in the standard way. Furthermore, splitting \(S\) further and applying a translation if necessary, we may assume that \(\tau_{1}(S)\subseteq\left(\frac{1}{10},\frac{9}{10}\right)^{1+D}\), implying in particular that \(\tau_{1}\) is continuous in a neighbourhood of \(S\).
Let \(\eta\in(0,\delta)\) be a small positive quantity, to be specified in the course of the argument, and let \(\Psi,\Psi^{\prime}\colon\mathbb{R}\to[0,1]\) be given by
\[\Psi(t)=\begin{cases}0&\text{if }t<0,\\ t/\eta&\text{if }t\in[0,\eta],\\ 1&\text{if }t>\eta,\end{cases}\qquad\Psi^{\prime}(t)=\begin{cases}0&\text{if }\left|t \right|>2\eta,\\ 2-\left|t\right|/\eta&\text{if }\left|t\right|\in[\eta,2\eta],\\ 1&\text{if }\left|t\right|<\eta.\end{cases}\]
It is clear that \(\left\|\Psi\right\|_{\operatorname{Lip}}=\left\|\Psi^{\prime}\right\|_{ \operatorname{Lip}}=1/\eta\). Let \(\Psi_{\square}\colon[0,1)^{1+D}\to[0,1]\) be a \(O(1)\)-Lipschitz function with \(\Psi_{\square}(t,u)=1\) if \((t,u)\in\left(\frac{1}{10},\frac{9}{10}\right)^{1+D}\) and \(\Psi_{\square}(t,u)=0\) if \((t,u)\not\in\left(\frac{1}{20},\frac{19}{20}\right)^{1+D}\). For \(1\leq i\leq r\), put
\[F_{i}(t,x) =\Psi(P_{i}(t,x)) F^{\prime}_{i}(t,x) =\Psi_{\square}(\tau_{1}(t,x))\Psi^{\prime}(P_{i}(t,x)),\] \[F(t,x) =\prod_{i=1}^{r}F_{i}(t,x) F^{\prime}(t,x) =\min\left(\sum_{i=1}^{r}F^{\prime}_{i}(t,x),1\right).\]
It is routine (although tedious) to verify that \(F\) and \(F^{\prime}\) are \(1/\eta^{O(1)}\)-Lipschitz (cf. [12, Lem. A.4]). Directly from the definitions, we see that for each \(t\in\mathbb{R}/\mathbb{Z}\) and \(x\in G/\Gamma\) we have \(F(t,x)=1_{S}(t,x)\) or \(F^{\prime}(t,x)=1\). It follows that
\[\left|\operatorname*{\underline{\mathbbm{E}}}_{h<H}\mu(N+h)1_{P}(h)1_{S}\left(\frac{h}{H},g(h)\Gamma\right)\right| \leq\left|\operatorname*{\underline{\mathbbm{E}}}_{h<H}\mu(N+h)1_{P}(h)F\left(\frac{h}{H},g(h)\Gamma\right)\right| \tag{28}\] \[+\operatorname*{\underline{\mathbbm{E}}}_{h<H}F^{\prime}\left(\frac{h}{H},g(h)\Gamma\right). \tag{27}\]
In order to estimate either of the summands in (27)-(28), we begin by dividing the interval \([H]\) into \(O(1/\alpha)\) sub-intervals with lengths between \(\alpha H\) and \(2\alpha H\), where
(29)
To estimate the first summand, we note that for each such sub-interval \([k,k+H^{\prime})\), for each \(h\in[k,k+H^{\prime})\) we have
\[F\left(\frac{h}{H},g(h)\Gamma\right) =F\left(\frac{k}{H},g(h)\Gamma\right)+O\left(\frac{H^{\prime}}{H} \left\|F\right\|_{\text{Lip}}\right) \tag{31}\] \[=F\left(\frac{k}{H},g(h)\Gamma\right)+O\left(\frac{1}{\log^{A}N} \right). \tag{30}\]
Applying Theorem 5.3 to each sub-interval, for each constant \(C\geq 1\) we obtain
\[\left|\mathop{\mathbbm{E}}_{h<H}\mu(N+h)1_{P}(h)F(g(h)\Gamma)\right|\ll_{C} \frac{1}{\log^{A}N}+\frac{1/\eta^{O(1)}}{\log^{C-A}N}. \tag{32}\]
Let us now consider the second summand. We have, similarly to (30),
\[F^{\prime}\left(\frac{h}{H},g(h)\Gamma\right)=F^{\prime}\left(\frac{k}{H},g(h) \Gamma\right)+O\left(\frac{1}{\log^{A}N}\right).\]
For now, let us assume that \(\alpha>\widetilde{\delta}\), which we will verify at the end of the argument. We conclude from the fact that \((g(h)\Gamma)_{h=0}^{H-1}\) is totally \(\widetilde{\delta}\)-equidistributed that
\[\mathop{\mathbbm{E}}_{h\in[k,k+H^{\prime})}F^{\prime}\left(\frac{h}{H},g(h) \Gamma\right)= \int_{G/\Gamma}F^{\prime}\left(\frac{k}{H},x\right)dx+\frac{ \widetilde{\delta}}{\eta^{O(1)}}+O\left(\frac{1}{\log^{A}N}\right), \tag{33}\]
where we use \(dx\) as a shorthand for \(d\mu_{G/\Gamma}(x)\). Taking the weighted average of (33) over all sub-intervals, we conclude that
\[\mathop{\mathbbm{E}}_{h<H}F^{\prime}\left(\frac{h}{H},g(h)\Gamma\right)=\int_ {[0,1)}\int_{G/\Gamma}F^{\prime}\left(t,x\right)dxdt+\frac{\widetilde{\delta}} {\eta^{O(1)}}+O\left(\frac{1}{\log^{A}N}\right). \tag{34}\]
Applying Lemma 5.6(ii) to estimate the measure of the support of \(F^{\prime}_{i}\) for each \(1\leq i\leq r\) we conclude that
\[\int_{[0,1)}\int_{G/\Gamma}F^{\prime}(t,x)dxdt\ll\eta^{1/O(1)}. \tag{35}\]
Thus, we may choose \(\eta=1/\log^{O(A)}N\) such that
\[\int_{[0,1)}\int_{G/\Gamma}F^{\prime}(t,x)dxdt\leq\frac{1}{\log^{A}N}, \tag{36}\]
which allows us to simplify (34) to
\[\mathop{\mathbbm{E}}_{h<H}F^{\prime}\left(\frac{h}{H},g(h)\Gamma\right)=O \left(\frac{1}{\log^{B-O(A)}N}\right)+O\left(\frac{1}{\log^{A}N}\right). \tag{37}\]
Combining (32) and (37) with (27)-(28), we conclude that
\[\left|\mathop{\mathbbm{E}}_{h<H}\mu(N+h)1_{P}(h)1_{S}(g(h)\Gamma)\right|\ll_{ C}\frac{1}{\log^{C-O(A)}N}+\frac{1}{\log^{B-O(A)}N}+\frac{1}{\log^{A}N}. \tag{38}\]
Letting \(C\) and \(B\) be sufficiently large multiples of \(A\), we conclude that
\[\left|\mathop{\mathbbm{E}}_{h<H}\mu(N+h)1_{P}(h)1_{S}(g(h)\Gamma)\right|\ll_{ A}\frac{1}{\log^{A}N}, \tag{39}\]
as needed. Note that choosing \(B\) as a large multiple of \(A\) also guarantees that \(\alpha=1/\log^{O(A)}N>\widetilde{\delta}=1/\log^{B}N\).
### General case
Before we proceed with the proof of Theorem 5.2 in full generality, we will need the following technical lemma.
**Lemma 5.6**.: _Let \(d,D\in\mathbb{N}\), and let \(\mathcal{V}\) denote the vector space of all polynomial maps \(P\colon[0,1)^{D}\to\mathbb{R}\) of degree at most \(d\)._
1. _There is a constant_ \(C>1\) _(dependent on_ \(d,D\)_) such that for_ \(P\in\mathcal{V}\) _given by_ \[P(x)=\sum_{\alpha\in\mathbb{N}_{0}^{D}}a_{\alpha}\prod_{i=1}^{D}x_{i}^{\alpha_ {i}}\]
_we have the inequalities \(C^{-1}\left\|P\right\|_{\infty}\leq\max_{\alpha}\left|a_{\alpha}\right|\leq C \left\|P\right\|_{\infty}.\)_
2. _For each_ \(P\in\mathcal{V}\) _and for each_ \(\delta\in(0,1)\) _we have_ (40) \[\lambda\left(\left\{x\in[0,1)^{D}\ \big{|}\ \left|P(x)\right|<\delta^{d}\left\|P \right\|_{\infty}\right\}\right)\ll_{d,D}\delta.\]
Proof.: Item (i) follows from the fact that any two norms on the finite-dimensional vector space \(\mathcal{V}\) are equivalent. For item (ii) we proceed by induction with respect to \(D\). Multiplying \(P\) by a scalar, we may assume that \(\left\|P\right\|_{\infty}=1\).
Suppose first that \(D=1\). We proceed by induction on \(d\). If \(d=1\) then \(P\) is an affine function \(P(x)=ax+b\), and the claim follows easily. Assume that \(d\geq 2\) and that the claim has been proved for \(d-1\). By item (i), at least one of the coefficients of \(P\) has absolute value \(\gg_{d,D}1\). In fact, we may assume that this coefficient is not the constant term, since otherwise for all \(x\in[0,1)\) we would have \(P(x)\in(\frac{99}{100}P(0),\frac{101}{100}P(0))\) and hence the set in (40) would be empty for sufficiently small \(\delta\). Thus, \(\left\|P^{\prime}\right\|_{\infty}\gg_{d,D}1\). By the inductive assumption,
\[\lambda\left(\left\{x\in[0,1)\ \big{|}\ \left|P^{\prime}(x)\right|<\delta^{d-1 }\right\}\right)\ll_{d}\delta. \tag{41}\]
Thus, it will suffice to show that
\[\lambda\left(\left\{x\in[0,1)\ \big{|}\ \left|P(x)\right|<\delta^{d},\ \left|P^{ \prime}(x)\right|>\delta^{d-1}\right\}\right)\ll_{d}\delta. \tag{42}\]
For each interval \(I\subseteq[0,1)\) such that \(P^{\prime}(x)\) has constant sign for \(x\in I\) we have
\[\lambda\left(\left\{x\in I\ \big{|}\ \left|P(x)\right|<\delta^{d},\ \left|P^{ \prime}(x)\right|>\delta^{d-1}\right\}\right)\ll\delta. \tag{43}\]
Since \([0,1)\) can be divided into \(O(d)\) intervals on which \(P\) is monotone, (42) follows.
Suppose now that \(D\geq 2\) and the claim has been proved for all \(D^{\prime}<D\). Reasoning as above, we infer from item (i) that \(P\) has a coefficient with absolute value \(\gg_{d,D}1\) other than the constant term. We may expand \(P\) in the form
\[P(y,t) =\sum_{i=0}^{d}t^{i}Q_{i}(y), y \in[0,1)^{D-1},\ t\in[0,1),\]
where \(Q_{i}\) are polynomials in \(D-1\) variables of degree \(d-i\). Changing the order of variables if necessary, we may assume that there exists \(j\) with \(1\leq j\leq d\) such that \(Q_{j}\) has a coefficient \(\gg_{d,D}1\), and hence \(\left\|Q_{j}\right\|_{\infty}\gg_{d,D}1\). For \(k\in\mathbb{N}\), let us consider the set
\[E_{k}:=\left\{(y,t)\in[0,1)^{D}\ \big{|}\ \left|P(y,t)\right|<\delta^{d},\ 2^{-k} \leq\left|Q_{j}(y)\right|<2^{-k+1}\right\}\]
The set in (40) is the disjoint union \(\bigcup_{k=1}^{\infty}E_{k}\), so our goal is to show that
\[\sum_{k=1}^{\infty}\lambda(E_{k})\ll_{d,D}\delta. \tag{44}\]
Fix a value of \(k\). By the inductive assumption, as long as \(j\neq d\), we have
\[\lambda\left(\left\{y\in[0,1)^{D-1}\ \big{|}\ |Q_{j}(y)|<2^{-k+1}\right\} \right)\ll_{d,D}2^{-k/(d-j)}. \tag{45}\]
(If \(j=d\), the set in (45) is empty for all sufficiently large \(k\), and the reasoning simplifies.) For each \(y\in[0,1)^{D-1}\) such that \(2^{-k}\leq|Q_{j}(y)|<2^{-k+1}\), by the inductive assumption (for \(D=1\)) we have
\[\lambda\left(\left\{t\in[0,1)\ \big{|}\ |P(y,t)|<\delta^{d}\right\} \right)\ll_{d,D}2^{k/d}\delta. \tag{46}\]
Combining (45) and (46) yields
\[\lambda\left(E_{k}\right)\ll_{d,D}2^{-kj/(d(d-j))}\delta\leq 2^{-k/d^{2}}\delta. \tag{47}\]
Summing (47) gives (44) and finishes the argument.
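The one-dimensional case of item (ii) is easy to probe numerically. The sketch below (an informal Monte Carlo illustration only, with an arbitrary cubic and our own naming) estimates \(\lambda(\{x\in[0,1)\,:\,|P(x)|<\delta^{d}\left\|P\right\|_{\infty}\})\) and should exhibit the roughly linear decay in \(\delta\) predicted by the lemma.

```python
import random

def small_set_measure(coeffs, delta: float, d: int, samples: int = 200_000) -> float:
    # Monte Carlo estimate of lambda({x in [0,1) : |P(x)| < delta^d * ||P||_inf}).
    P = lambda x: sum(c * x ** k for k, c in enumerate(coeffs))
    sup = max(abs(P(j / 10_000)) for j in range(10_000))   # crude sup-norm on [0,1)
    threshold = delta ** d * sup
    hits = sum(1 for _ in range(samples) if abs(P(random.random())) < threshold)
    return hits / samples

coeffs = [0.3, -2.0, 1.0, 4.0]    # an arbitrary cubic polynomial, so d = 3
for delta in (0.2, 0.1, 0.05, 0.025):
    print(delta, small_set_measure(coeffs, delta, d=3))
```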
Proof of Theorem 5.4.: The argument is very similar to the proof of Theorem 1.1 assuming Proposition 2.1 in [10]. As the first step, we apply the factorisation theorem [10, Thm. 1.19], Theorem 4.1, with \(M_{0}=\log N\) and parameter \(C\) to be determined in the course of the argument. We conclude that there exists an integer \(M\) with \(\log N\leq M\ll\log^{O_{C}(1)}N\) such that \(g\) admits a factorisation of the form
\[g(h)=\varepsilon(h)g^{\prime}(h)\gamma(h), \tag{48}\]
where \(\varepsilon\) is \((M,H)\)-smooth, \(\gamma\) is \(M\)-rational, and \(g^{\prime}\) takes values in a rational subgroup \(G^{\prime}<G\) which admits a Mal'cev basis \(\mathcal{X}^{\prime}\) where each element is a \(M\)-rational combination of elements of \(\mathcal{X}\), and \((g^{\prime}(h)\Gamma)_{h=0}^{H-1}\) is totally \(1/M^{C}\)-equidistributed in \(G^{\prime}/(\Gamma\cap G^{\prime})\) (with respect to the metric induced by \(\mathcal{X}^{\prime}\)).
With the same reasoning as in [10], we conclude that \((\gamma(h)\Gamma)_{h}\) is a periodic sequence with some period \(q\leq M\), and for each \(0\leq j<q\) and \(h\equiv j\bmod q\) we have \(\gamma(h)\Gamma=\gamma_{j}\Gamma\) for some \(\gamma_{j}\in G\) with coordinates \(\tau(\gamma_{j})\) that are rationals with height \(\ll M^{O(1)}\). Splitting the average in (24) into sub-progressions, it will suffice to show that for each residue \(0\leq j<q\) modulo \(q\), and for each arithmetic progression \(Q\subseteq q\mathbb{Z}+j\) with diameter at most \(H/M\) we have
\[\left|\operatorname*{\mathbbm{E}}_{h<H}\mu(N+h)1_{Q}(h)1_{S}( \varepsilon(h)g^{\prime}(h)\gamma_{j}\Gamma)\right|\ll_{A}\frac{(1/\delta)^{O( 1)}}{M^{2}\log^{A}N}. \tag{49}\]
The key difference between our current work and the corresponding argument in [10] is that \(1_{S}\) is not continuous and hence in (49) we cannot replace \(\varepsilon(h)\) with a constant and hope that the value of the average will remain approximately unchanged. Instead, we will use an argument of a more algebraic type. We note that, as a consequence of invariance of the metric on \(G\) under multiplication on the right, for each \(h,h^{\prime}\in Q\) we have
\[d\left(\varepsilon(h)g^{\prime}(h)\gamma_{j},\varepsilon(h^{\prime})g^{\prime }(h)\gamma_{j}\right)=d\left(\varepsilon(h),\varepsilon(h^{\prime})\right)=O (1).\]
Let us fix \(k\in Q\) and put \(\varepsilon^{\prime}(h)=\varepsilon(h)\varepsilon(k)^{-1}\). Then \(d(\varepsilon^{\prime}(h),e_{G})=O(1)\) and \(g(h)\Gamma=\varepsilon(h)g^{\prime}(h)\gamma_{j}\Gamma=\varepsilon^{\prime}( h)\varepsilon(k)g^{\prime}(h)\gamma_{j}\Gamma\).
Let \(\Omega\subseteq G\) be a bounded semialgebraic set such that \(\varepsilon^{\prime}(h)\in\Omega\) for all \(h\in Q\). For instance, we may take \(\Omega\) to be the pre-image of a certain ball with radius \(1/\delta^{O(1)}\) under \(\widetilde{\tau}\). Let also \(\Pi:=\widetilde{\tau}^{-1}\left([0,1)^{D}\right)\) denote the standard fundamental domain for \(G/\Gamma\). Consider the set
\[R=\left\{(g_{1},g_{2})\in\Omega\times\Pi\ |\ g_{1}g_{2}\Gamma\in S\right\}.\]
We may decompose \(R\) as
\[R=\bigcup_{\gamma\in\Gamma}R_{\gamma}\quad\text{ where }\quad R_{\gamma}=\left\{(g_{1},g_{2})\in \Omega\times\Pi\ |\ g_{1}g_{2}\Gamma\in S,\ g_{1}g_{2}\gamma\in\Pi\right\}. \tag{50}\]
Using the quantitative bounds in [10, Lem. A.2 & A.3], we see that for each \(\gamma\in\Gamma\) such that \(R_{\gamma}\neq\emptyset\) we have \(|\widetilde{\tau}(\gamma)|=O(1/\delta^{O(1)})\). Hence, the union in (50) involves \(O(1/\delta^{O(1)})\) non-empty terms, and in particular is finite. Each of the sets \(R_{\gamma}\) is semialgebraic with complexity \(O(1)\). Moreover, since \(\varepsilon^{\prime}\) is a polynomial map of bounded degree, for each \(\gamma\in\Gamma\) the set
\[T_{\gamma}=\left\{(t,x)\in[0,1)\times\Pi\ |\ (\varepsilon^{\prime}(tH),x)\in R _{\gamma}\right\}\]
is also semialgebraic with complexity \(O(1)\). Hence, (49) will follow once we show that for each semialgebraic set \(T\subseteq[0,1)\times G/\Gamma\) with bounded complexity we have
\[\left|\mathop{\mathbbm{E}}_{h<H}\mu(N+h)1_{Q}(h)1_{T}\left(\frac{h}{H}, \varepsilon(k)g^{\prime}(h)\gamma_{j}\Gamma\right)\right|\ll_{A}\frac{(1/ \delta)^{O(1)}}{M^{2}\log^{A}N}. \tag{51}\]
Following [10], we put \(\widetilde{G}^{\prime}:=\gamma_{j}^{-1}G^{\prime}\gamma_{j}\), \(\Lambda:=\Gamma\cap\widetilde{G}^{\prime}\) and \(\widetilde{g}^{\prime}(n):=\gamma_{j}^{-1}g^{\prime}(n)\gamma_{j}\). Let also \(D^{\prime}=\dim G^{\prime}\), let \(\sigma\) and \(\widetilde{\sigma}\) denote the coordinate maps on \(\widetilde{G}^{\prime}/\Lambda\) and \(\widetilde{G}^{\prime}\) respectively, and let \(\Delta=\widetilde{\sigma}^{-1}\left([0,1)^{D^{\prime}}\right)\) denote the fundamental domain. Then \(\widetilde{g}^{\prime}\) is a polynomial sequence with respect to the filtration \(\widetilde{G}^{\prime}_{\bullet}\) given by \(\widetilde{G}^{\prime}_{i}=\gamma_{j}^{-1}G^{\prime}_{i}\gamma_{j}\). We have a well-defined map \(\iota\colon\widetilde{G}^{\prime}/\Lambda\to G/\Gamma\) given by
\[\iota(x\Lambda)=\varepsilon(k)\gamma_{j}x\Gamma.\]
Thus, for all \(h\in[H]\) we have
\[\varepsilon(k)g^{\prime}(h)\gamma_{j}\Gamma=\iota(\widetilde{g}^{\prime}(h)\Lambda)\]
As discussed in [10], the Lipschitz norm of the map \(\iota\) is \(O(M^{O(1)})\) and the sequence \((\widetilde{g}^{\prime}(h)\Lambda)_{h=0}^{H-1}\) is \(1/M^{\lambda C+O(1)}\)-equidistributed, where \(\lambda>0\) is a constant dependent only on \(d\) and \(D\).
For each \(\gamma\in\Gamma\), the map \(\iota\) is a polynomial on the semialgebraic set \(\Delta\cap\iota^{-1}(\Pi\gamma)\). The estimate on the Lipschitz norm of \(\iota\) implies that \(\Delta\) can be partitioned into \(M^{O(1)}\) semialgebraic sets with complexity \(O(1)\) such that, on each of the pieces \(\iota\) is a polynomial of degree \(O(1)\) (using the coordinates \(\widetilde{\tau}\) and \(\widetilde{\sigma}\)). Applying the corresponding partition in (51), we see that it will suffice to show that for each semialgebraic set \(T\subseteq(\mathbb{R}/\mathbb{Z})\times(\widetilde{G}^{\prime}/\Lambda)\) with bounded complexity and for each constant \(A^{\prime}>0\) we have
\[\left|\mathop{\mathbbm{E}}_{h<H}\mu(N+h)1_{Q}(h)1_{T}\left(\frac{h}{H},g_{j}( h)\Lambda\right)\right|\ll_{A,A^{\prime}}\frac{(1/\delta)^{O(1)}}{M^{A^{\prime}} \log^{A}N}. \tag{52}\]
Bearing in mind that \(M\geq\log N\), it will suffice to show that
\[\left|\mathop{\mathbbm{E}}_{h<H}\mu(N+h)1_{Q}(h)1_{T}\left(\frac{h}{H},g_{j}( h)\Lambda\right)\right|\ll_{A}\frac{(1/\delta)^{O(1)}}{M^{A}}. \tag{53}\]
We are now in position to apply Proposition 5.5 on \(\widetilde{G}^{\prime}/\Lambda\). The complexity of \((\widetilde{G}^{\prime},\Lambda,\widetilde{G}^{\prime}_{\bullet})\) is \(1/\delta^{\prime}\), where \(\delta^{\prime}=1/M^{O(1)}\). The largest exponent \(A^{\prime}\) with which
Proposition 5.5 is applicable to \((\widetilde{g}^{\prime}(h))_{h=0}^{H-1}\) satisfies \(\log^{A^{\prime}}N\gg M^{\mu C}\) for a constant \(\mu\gg 1\), leading to
\[\left|\operatorname*{\mathbbm{E}}_{h<H}\mu(N+h)1_{Q}(h)1_{T}\left(\frac{h}{H},\widetilde{g}^{\prime}(h)\Lambda\right)\right|\ll_{C}\frac{1}{M^{\mu C-O(1)}}. \tag{54}\]
In order to derive (53) it is enough to let \(C\) be a sufficiently large multiple of \(A\).
2310.06704 | Juan P. Mendez, Denis Mamaluy | 2023-10-10T15:27:41Z | http://arxiv.org/abs/2310.06704v3
Uncovering anisotropic effects of electric high-moment dipoles on the tunneling current in \(\delta\)-layer tunnel junctions
###### Abstract
The precise positioning of dopants in semiconductors using scanning tunneling microscopes has led to the development of planar dopant-based devices, also known as \(\delta\)-layers, facilitating the exploration of new concepts in classical and quantum computing. Recently it has been shown that two distinct conductivity regimes (low- and high-bias regimes) exist in \(\delta\)-layer tunnel junctions due to the presence of quasi-discrete and continuous states in the conduction band of \(\delta\)-layer systems. Furthermore, discrete charged impurities in the tunnel junction region significantly influence the tunneling rates in \(\delta\)-layer tunnel junctions. Here we demonstrate that electrical dipoles, i.e. zero-charge impurities, present in the tunnel junction region can also significantly alter the tunneling rate, depending, however, on the specific conductivity regime and on the orientation and moment of the dipole. In the low-bias regime, with high-resistance tunneling mode, dipole impurities of nearly all orientations and moments can alter the current, indicating the extreme sensitivity of the tunneling current to the slightest imperfection in the tunnel gap. In the high-bias regime, with low resistivity, only dipole defects with high moments and oriented in the directions perpendicular to the electron tunneling direction can significantly affect the current, thus making this conductivity regime significantly less prone to the influence of dipole defects with low moments or oriented along the propagation direction.
## Introduction
Atomic precision advanced manufacturing (APAM) has enabled the creation of 2D doped regions, also known as \(\delta\)-layers, in semiconductors with single-atom precision [1, 2, 3, 4, 5] and high conductivity [6, 7, 8, 9, 10, 11]. APAM is a process for incorporating dopants, such as P or B, at the atomic scale onto a Si surface using surface chemistry [10, 13]. In simplified terms, this process consists of several steps. For phosphorus-doped planar structures embedded in silicon (Si:P \(\delta\)-layer systems) [12], one starts with a Si surface, normally (100), fully passivated with H. The tip of an STM (scanning tunneling microscope) is then used to remove H atoms one by one at the exact locations where the dopants are to be incorporated. Next, the surface is exposed to a precursor gas, such as phosphine (PH\({}_{3}\)) [13], followed by an annealing process to incorporate the dopants into the surface. Finally, epitaxial Si is overgrown through a series of annealing processes to protect the planar structure.
APAM has various applications, including the exploration of novel electronic devices such as nano-scale diodes or transistors for classical computing and sensing systems [10, 14, 15, 16]. Most importantly, this technology has been used to explore dopant-based qubits in silicon, with recent advances in understanding exchange-based two-qubit operations [17], the limits to qubit fidelity from environmental noise [18], the advantages of leveraging the number of dopants as a degree of freedom [19, 20], and the exploration of many-body [21] and topological [22] effects in dopant chains. One of the building blocks of these devices is the \(\delta\)-layer tunnel junction, which consists of two thin doped layers separated by an intrinsic gap and embedded in a semiconductor. Such devices require precise control of the tunneling rate for their operation. Additionally, it is known that imperfections near the tunnel junction strongly alter the tunneling rate.
In our previous work [23], we demonstrated the extreme sensitivity of the tunneling current in \(\delta\)-layer systems to the presence of single charged impurities. We investigated the influence of charged impurities located in the middle of the intrinsic tunnel gap on the tunneling current in phosphorus-doped \(\delta\)-layer systems in silicon (Si: P \(\delta\)-layer), revealing that even a single charged impurity can either increase (for n-type) or decrease (for p-type) the current by more than an order of magnitude, dramatically altering the corresponding device performance. It also motivates the following analysis, in which we investigate the influence of electrical dipoles, i.e. zero-charge impurities, on the tunneling rate in \(\delta\)-layer tunnel junction systems.
In this work, we have employed an efficient implementation of the Non-Equilibrium Green's Function (NEGF) formalism, referred to as the Contact Block Reduction (CBR) method [24, 25, 26, 27, 28], to investigate how electric dipoles might alter the tunneling rate in P-doped \(\delta\)-layer tunnel junctions in silicon. We have found quantum-mechanical effects that cannot be described by classical methods: electrical dipoles located near the tunnel gap in \(\delta\)-layer tunnel junctions can significantly alter the current, and their orientation indeed matters. The effect of dipoles oriented along the directions transverse to the current direction is significantly stronger than for dipoles oriented along the current propagation direction, revealing an anisotropic effect. This intriguing effect is only present for sufficiently 'large' (a few nanometers or more) dipole moments. For smaller dipole moments, the anisotropic effect vanishes. Similarly, at low bias levels, the anisotropic effect is less pronounced.
## Simulation Setup
To explore the impact of electric dipoles in the intrinsic gap of P \(\delta\)-layer tunnel junctions embedded in silicon, we adopt the structure shown in Fig. 1**a**. This type of structure is normally referred to as a Si: P \(\delta\)-layer tunnel junction. In the open-system NEGF framework, the computational device consists of a semi-infinite source and drain, in contact with the channel of length \(L\), which is composed of a lightly doped Si body and cap and two very thin, highly P doped layers (referred to as left and right \(\delta\)-layers) separated by an intrinsic gap of length \(L_{gap}\). To avoid boundary effects between the source and drain contacts and size quantization effects [29], the total channel length is chosen to be \(L=30.0\) nm \(+L_{gap}\), while the device height is set to \(H=8\) nm and the device width is set to \(W=15.0\) nm, with an effective width of the \(\delta\)-layer of 13.0 nm. In our analyses, we have considered a \(\delta\)-layer thickness of \(t=1.0\) nm and a sheet doping density of \(N_{D}=1.0\times 10^{14}\) cm\({}^{-2}\) (\(N_{D}^{(2D)}=t\times N_{D}^{(3D)}\)) for the \(\delta\)-layer, and a doping density of \(N_{A}=5.0\times 10^{17}\) cm\({}^{-3}\) in the cap and body. Furthermore, all simulations are carried out at the cryogenic temperature of 4.2 K, for which we can neglect inelastic scattering [6, 30].
In this work, we only focus on the effect of electric dipoles located in the middle of the tunnel gap with three different orientations, along the x-, y- and z-directions, as shown in Fig. 1**b**, **c** and **d**, respectively, and two distinct dipole moments, \(p=q\times l_{dipole}\), where \(q\) is the electric charge and \(l_{dipole}\) is the distance between both charges, with \(l_{dipole}=0.8\) nm and 4.0 nm, corresponding to two limiting scenarios with low and high electric dipole moments, respectively. Dipoles oriented along the x-direction are referred to as oriented in the propagation direction, whereas dipoles oriented along the y- and z-directions are referred to as oriented in the transverse directions. In these simulations, dipoles are modeled as two point charged impurities of opposing electrical sign, in which each charge is modeled by approximating a point charge with a density of (positive or negative) \(4.6\times 10^{21}\) cm\({}^{-3}\) homogeneously distributed in a total volume of (0.6 nm)\({}^{3}\). The final spatial distribution of the total charge is dictated by the self-consistent solution of the open-system Schrodinger and Poisson equations. Additionally, while in this work we restrict our analysis to the center-gap location, the influence of other locations may also be worth investigating, since the free electrons in \(\delta\)-layer systems form distinct conducting layers perpendicular to the confinement direction, thus signaling a highly non-homogeneous electron density distribution [31, 32].
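For concreteness, the dipole-impurity model described above can be sketched in a few lines of Python. The snippet below builds the fixed charge density of a single dipole on a regular grid: two opposite charges, each smeared as a constant density of \(\pm 4.6\times 10^{21}\) cm\({}^{-3}\) over a (0.6 nm)\({}^{3}\) cube, separated by \(l_{dipole}\) along a chosen axis. The grid and box dimensions are illustrative assumptions; this is a minimal sketch of the impurity model, not the production CBR code.

```python
import numpy as np

# Minimal sketch of the dipole-impurity charge model (illustrative, not the CBR code).
# Lengths in nm, densities in cm^-3; grid spacing of 0.2 nm as used in the simulations.
dx = 0.2
L = (40.0, 15.0, 8.0)                    # assumed (x, y, z) box for illustration
shape = tuple(int(round(s / dx)) for s in L)
grids = np.meshgrid(*[(np.arange(n) + 0.5) * dx for n in shape], indexing="ij")

def dipole_density(center, l_dipole, axis, rho0=4.6e21, cube=0.6):
    """+rho0 and -rho0 smeared over (cube)^3 volumes placed +/- l_dipole/2 along `axis`."""
    rho = np.zeros(shape)
    for sign in (+1, -1):
        pos = np.array(center, dtype=float)
        pos[axis] += sign * l_dipole / 2.0
        inside = np.ones(shape, dtype=bool)
        for g, c in zip(grids, pos):
            inside &= np.abs(g - c) <= cube / 2.0
        rho[inside] += sign * rho0
    return rho

# High-moment dipole (l_dipole = 4 nm) in the middle of a 10 nm gap, oriented along y.
rho = dipole_density(center=(20.0, 7.5, 4.0), l_dipole=4.0, axis=1)
print("net charge (should be ~0):", rho.sum())
```

In the actual simulations this fixed charge enters the Poisson equation, and the self-consistent solution then redistributes the free-electron charge around it.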
Figure 1: **Computational model used in our simulations.** A Si: P \(\delta\)-layer tunnel junction device is shown in **a**, which consists of a semi-infinite source and drain, in contact with the channel. The channel is composed of a lightly doped Si body and cap and two very thin, highly P doped layers with an intrinsic gap of length \(L_{gap}\). Representations of an electric dipole oriented along the x-direction (propagation direction), the y-direction (transverse direction) and the z-direction (transverse direction) are shown in **b**, **c** and **d**, respectively. The negative and positive charged impurities are represented as blue and red spheres, respectively.
## Results and Discussion
In our previous work [23, 29], we first reported the existence of two different Ohmic regimes in the conductivity of \(\delta\)-layer tunnel junctions, namely the low- and high-bias regimes, connected by a transition regime. For convenience, we include in Fig. 2 the characteristic IV curve for \(\delta\)-layer tunnel junctions reported in [23]. One can clearly observe these regimes from the result: the first conductivity regime, characterized by a resistance of approximately \(5-6\) M\(\Omega\), is observed for an applied bias in the range of \(0-0.04\) V between the source and drain; the second one, characterized by a resistance of \(0.2-0.3\) M\(\Omega\), occurs at voltages above \(0.08\) V; and, finally, in the transition regime, approximately between \(0.04-0.08\) V, the resistance is reduced as the bias is increased. This phenomenon arises from the existence of quasi-discrete and continuous states in \(\delta\)-layer systems, as we will see below. The threshold voltages of these regimes are determined by the doping density \(N_{D}\) and the \(\delta\)-layer thickness \(t\). Further in [23], we investigated how the tunneling rate in \(\delta\)-layer tunnel junctions can be affected by the presence of charged impurities, specifically single n-type and p-type impurities in the middle of the tunnel junction. The results, for convenience, are shown in Figs. 3 and 4 (see continuous and dashed black lines) for distinct tunnel junction lengths. It was found that the tunneling rate is strongly affected by the presence of these charges: the tunneling rate significantly increases in the presence of n-type impurities, while it decreases for p-type impurities. Additionally, the effect of n-type impurities was found to be stronger than that of p-type impurities, especially for large tunnel gaps, where the effect can be up to one order of magnitude higher.
In this work, we focus on investigating electric dipoles in the intrinsic region of the tunnel junction. To understand the influence on the tunneling rate, we first begin by examining the current ratio, which is defined as the ratio of the tunneling rate between the \(\delta\)-layers in the tunnel junction with and without the imperfections, and the magnitude change of the tunneling rate. Figs. 3 and 4 show these analyses for an applied bias of \(1.0\) mV and \(100.0\) mV, respectively, representing the low- and high-bias regimes, for both dipole moments, \(l_{dipole}=0.8\) nm for low moment and \(l_{dipole}=4.0\) nm for high moment, oriented along the propagation direction (i.e. x-axis) and the transverse propagation directions (i.e. y- and z-axes). From these analyses, we can discern very interesting results: i) the trend of the current ratio with the tunnel gap length in the low-bias regime shows a considerable oscillation, and this oscillation vanishes in the high-bias regime. ii) high-moment dipoles exhibit anisotropic behaviour, since dipoles oriented in the transverse propagation directions (i.e. perpendicular to the electron tunneling direction) behave differently than dipoles along the propagation direction (i.e. the electron tunneling direction); in contrast, low-moment dipoles do not show such strong anisotropic behaviour, since the impact of the dipole orientation is minimal. Additionally, high-moment dipoles oriented along the propagation direction behave very similarly to low-moment dipoles. iii) in the high-bias regime, high-moment dipoles oriented in the transverse propagation directions significantly affect the current, whereas the contribution is minimal for all low-moment dipoles and for high-moment dipoles oriented along the propagation
Figure 2: **Characteristic curve for \(\delta\)-layer tunnel junctions.** Total current \(I\) vs. voltage \(V\) (blue curve, linear scale) and the corresponding differential resistance \(dV/dI\) (red curve, semi-logarithmic scale) are shown for \(L_{gap}=10\) nm, \(N_{D}=1.0\times 10^{14}\) cm\({}^{-2}\), \(N_{\rm A}=5.0\times 10^{17}\) cm\({}^{-3}\) and \(t=1\) nm. Figure reproduced from [23].
Figure 4: **Tunneling ratio and tunneling rate change for an applied bias of 100 mV.** **a** Current ratio (\(I_{dipole}/I_{ideal}\)) between the \(\delta\)-layer tunnel junction with and without the presence of an electric dipole in the intrinsic gap, oriented along the x-, y- and z-directions, for an applied bias of 100 mV. **b** The corresponding tunneling current change. \(t=1.0\) nm, \(N_{D}=1.0\times 10^{14}\) cm\({}^{-2}\) and \(N_{A}=5.0\times 10^{17}\) cm\({}^{-3}\).
Figure 3: **Tunneling ratio and tunneling rate change for an applied bias of 1 mV.** **a** Current ratio (\(I_{dipole}/I_{ideal}\)) between the \(\delta\)-layer tunnel junction with and without the presence of an electric dipole in the intrinsic gap, oriented along the x-, y- and z-directions, for an applied bias of 1 mV. **b** The corresponding tunneling current change. \(t=1.0\) nm, \(N_{D}=1.0\times 10^{14}\) cm\({}^{-2}\) and \(N_{A}=5.0\times 10^{17}\) cm\({}^{-3}\).
direction. iv) in the low-bias regime, especially for large tunnel gap lengths, dipoles of all orientations and moments noticeably affect the tunneling rate in an indistinguishable manner; for narrow tunnel gap lengths, the effect of all dipoles on the tunneling rate is of the same order of magnitude and less pronounced. In the following, we will proceed to discuss these intriguing results in more detail.
As mentioned, the first thing that we can notice from the results in Figs. 3 and 4 is that the current ratio considerably oscillates with the tunnel gap length in the low-bias regime (1 mV), while it becomes very smooth in the high-bias regime (100 mV). As discussed in our previous work [23], this is a consequence of the strong quantization of the low-energy conduction band in \(\delta\)-layer systems for energies below the Fermi level and approximately within 50.0 meV above the Fermi level. The quantization of the conduction band arises from the confinement of dopants in one direction, resulting in very sharp doping profiles and leading to the size quantization of the \(\delta\)-layer. To gain a deeper understanding, we need to examine the local density of states (LDOS) of the free electrons, which represents the conduction band in real space for the free electrons. Figs. 5 and 6 show the \(LDOS(x,E)\) along the x-direction for a \(\delta\)-layer tunnel junction of \(L_{gap}=10.0\) nm with a dipole of \(l_{dipole}=0.8\) nm and \(l_{dipole}=4.0\) nm, respectively: in **a** for an applied voltage of 1 mV to the drain contact while the source is grounded, and in **b** for an applied voltage of 100 mV. In the figures, the states between \(x=0\) nm and 15 nm and between \(x=25\) nm and 40 nm correspond to the left and right \(\delta\)-layers, respectively; the states between \(x=15\) nm and 25 nm correspond to the intrinsic gap region. At 0 K, the states below the Fermi level are occupied, and those above the Fermi level are unoccupied. As the temperature increases, some states above the Fermi level start to be occupied at the expense of states below the Fermi level. As one can observe from the \(LDOS(x,E)\), within the \(\delta\)-layer regions, the low-energy \(LDOS(x,E)\) is strongly quantized, i.e. for energies below the Fermi level and approximately up to 50 meV above the Fermi level, highlighted with dashed lines in Figure 5; for higher energies, approximately above 50 meV, the \(LDOS(x,E)\) is practically continuous in space-energy, thus signaling that these states are not quantized. Both regions are accordingly marked in the figures. When a positive voltage is applied to the drain contact while the source is grounded, the Fermi level corresponding to the drain contact is shifted down, lowering the energies of all states on the right side as well. Thus, new unoccupied states, either in the right \(\delta\)-layer or those created by the dipole, become available to be occupied by tunneling electrons coming from the left \(\delta\)-layer. Those unoccupied states correspond to states whose energies lie between the source and drain Fermi levels. Here we need to distinguish two different scenarios: when a low bias is applied, i.e. for \(V<45-50\) mV, and when a higher bias is applied, i.e. for \(V>50\) mV. But, before digging into details, we need to discuss the tunneling processes that can occur in these tunnel junction devices: the first one corresponds to direct tunneling, i.e. electrons tunneling from occupied states in the left \(\delta\)-layer to available states in the right \(\delta\)-layer; and the second one is defect-mediated tunneling, i.e. electrons first tunnel from the left \(\delta\)-layer to the available state in the defect (dipole), and from there to the right \(\delta\)-layer. Since we are only considering elastic scattering, both tunneling processes require that energy and momentum are conserved. When a low voltage is applied, e.g.
1 mV in panel **a** in Figs. 5 and 6, only the unoccupied quantized states in the right \(\delta\)-layer (those just above the Fermi level) and all quasi-bounded states created by the dipole will play an important role in the tunneling process. If the occupied quasi-discrete states in the left \(\delta\)-layer or the quasi-bounded state in the dipole align with the unoccupied quasi-discrete states in the right \(\delta\)-layer, it will result in a considerable increase of the tunneling current. Conversely, if the overlap is minimal, the tunneling current will be reduced. For low biases, this alternating mismatch can only exist for sufficiently large tunnel gaps, because the coupling of the left and right \(\delta\)-layer wave-functions for narrow tunnel gaps equalizes the electron states on both sides, increasing the overlap and thus eliminating the mismatch. When a high bias is applied, e.g. 100 mV in panel **b** in Figs. 5 and 6, the continuous unoccupied high-energy states on the right side become available for tunneling from the left side, thus diminishing the influence of the conduction band quantization on the current and easing the tunneling process. This mechanism explains the oscillating behaviour of the tunneling ratio in the low-bias regime, especially for large tunnel gaps, and the disappearance of these oscillations in the high-bias regime. Likewise, it also explains the existence of the two conductivity regimes: the first one, between 0 mV and approximately 50 mV, with high tunneling resistance, where only the quasi-discrete states play a role in the tunneling process, and the second one, with low tunneling resistance, for voltages above approximately 80 mV, where both quasi-discrete and continuous states contribute to the tunneling process.
In the high-bias regime, the effect of all low-moment dipoles (\(l_{dipole}=0.8\) nm, for all studied orientations) on the tunneling current is minimal. This finding can be deduced from Fig. 4, in which the tunneling ratio for all low-moment dipoles is approximately unity (cf. the results in colored dashed lines in **a**) and the change of the tunneling current is between 1-30% (cf. the results in colored dashed lines in **b**) for all studied tunnel junction lengths. Specifically, the tunneling change increases almost exponentially with the tunnel gap length, from a change of 1-3% for a tunnel gap length of \(L_{gap}=3\) nm up to around 20-30% for \(L_{gap}=12\) nm. In addition, our results also indicate a negligible impact of the dipole orientation on the tunneling, since the three studied orientations show a similar change in the tunneling rate for all studied tunnel junction lengths. As previously mentioned, we reported in [23] that an n-type impurity has the opposite effect to a p-type impurity: the tunneling increases with n-type impurities, while it decreases with p-type impurities. Similarly, the impact on the tunneling of n-type impurities is higher compared to p-type impurities (cf. the results in black continuous and dashed lines in Fig. 4). However, when both
impurities are very close to each other, their effects on the tunneling balance out, resulting in minimal changes to the tunneling current, which explains the diminished effect on the tunneling for low-moment dipoles. Furthermore, it is evident from the results that the change in the tunneling current exhibits exponential growth as the tunnel junction length increases. This is primarily due to the dominant effect of the n-type impurity, which exhibits exponential growth (cf. continuous line in **b**), while the p-type impurity exhibits a lower order of growth with the tunnel length (cf. dashed line in **b**).
A different story occurs for high-moment dipoles in the high-bias regime. From Fig. 4, we can observe that the orientation of high-moment dipoles in the high-bias regime plays an important role: our results reveal an intriguing anisotropic effect on the tunneling current due to their orientation. When the dipole is oriented along the transverse propagation directions (i.e. the y- and z-directions), the tunneling current is strongly affected by the dipole; on the contrary, when the dipole is oriented along the propagation direction (x-direction), the tunneling current is only weakly affected. Additionally, such dipoles behave very similarly to low-moment dipoles (see red continuous line). In the same manner as for low-moment dipoles, and for the same reason, the change of the tunneling current increases exponentially with the tunnel gap length. However, for dipoles oriented in the transverse propagation directions, the tunneling change goes from 5-6% for \(L_{gap}=3\) nm up to 2000% for \(L_{gap}=12\) nm, while for dipoles oriented in the propagation direction, the change goes from 3% for \(L_{gap}=3\) nm up to 100% for \(L_{gap}=12\) nm. Furthermore, we can also observe that high-moment dipoles oriented in the transverse propagation directions almost reproduce the dotted line in Fig. 4**a**, which means that the total effect of high-moment dipoles on the current can be approximated as the arithmetic average of independent single n-type and p-type impurities. To understand why the effect of dipoles oriented in the transverse propagation directions is overall stronger than that of dipoles oriented along the propagation direction, we need to examine the effective 1D electrostatic potentials in equilibrium, shown in Figure 7. It is worth noting that the electrostatic potential along the x-direction represents the integration of the 3D electrostatic potential over the y- and z-directions. We can see that the barrier height in the electrostatic potential for dipoles in the transverse propagation directions (cf. red lines in panels **b** and **c**) is reduced overall, leading to an increase in the tunneling current. Conversely, in the case of dipoles oriented in the propagation direction (cf. red line in panel **a**), we can see that the barrier height is increased near the p-type impurity and decreased near the n-type impurity. An increase in the barrier height results in a reduction of the current, while a decrease in the barrier height leads to an increase in the current. The total tunneling current is a combination of these two opposing effects, resulting in only a minimal change in the tunneling current in this particular case. Comparing the effective 1D electrostatic potentials for low- and high-moment dipoles, we can also note that the peak and dip in the electrostatic potential are less pronounced for low-moment dipoles, revealing again the cancellation of the effects when both impurities are in close proximity to each other.
Interestingly, in the low-bias regime, especially for larger tunnel junction lengths, dipoles of all orientations and moments noticeably affect the tunneling current in an almost indistinguishable manner. The main reason, already discussed in this report, is that only the quantized conduction band and the quasi-bounded states of the electric dipole are involved in the tunneling process in the low-bias regime (see panel **a** in Figs. 5 and 6). Unlike in the high-bias regime, the tunneling change is exponential with the tunnel junction length for both dipole types, with low and high moments, in a very similar manner. The tunneling change goes from 2-4% for \(L_{gap}=3\) nm up to 620-1800% for \(L_{gap}=11\) nm for low-moment dipoles, and from 2-5% for \(L_{gap}=3\) nm up to 620-2200% for \(L_{gap}=11\) nm for high-moment dipoles. Additionally, it is evident from the results that the anisotropic effect of the orientation of high-moment dipoles is much less significant compared to the high-bias regime.
Finally, we investigate the dwell time in the tunnel junction, which refers to the average time that a carrier spends in the barrier, regardless of whether the electron is reflected or transmitted [33, 34, 35]. It is defined mathematically as \(t_{dwell}=\frac{\int_{\Omega}n(\mathbf{r})\,d\Omega}{I_{tun.\ curr.}/q}\), where \(I_{tun.\ curr.}\) is the tunneling current, \(q\) is the elementary charge, \(n(\mathbf{r})\) is the electron density and \(\Omega\) refers to the tunnel junction domain. From this definition an alternative physical meaning of the dwell time can be readily deduced: it corresponds to the ratio of the total (average) number of electrons in the junction to the flux (number of electrons per second) going through the system in the steady state. This ratio is the time necessary to fill the tunnel junction of volume \(\Omega\) with free electrons to charge neutrality from the fully depleted state. In other words, it corresponds to the time necessary to switch a device (e.g. of a FET type) from the fully "off-" to the "on-" state and/or vice versa. Thus, computing the tunnel junction dwell time can reveal the maximum operating frequency of APAM tunnel junction devices, which is important for applications. Figure 8 shows the dwell time for all cases at an applied voltage of 1 mV in **a** and 100 mV in **b**. We can observe several interesting results. Firstly, applying higher voltages almost proportionally reduces dwell times. This can be explained by the weak increase of the electron density within the junction with increased voltage, alongside a corresponding strong (proportional) increase in the current. Secondly, for the low-bias regime, the dwell time grows exponentially with the tunnel junction length and is almost identical for all cases. However, for the high-bias regime, we observe a strong dependence of the dwell time on the type of impurity in the channel. Indeed, Figure 8**b** shows that a single p-type impurity in the gap significantly increases the dwell time. The corresponding microscopic interpretation is that a single additional acceptor atom in the gap binds a free electron (with a certain life-time), and thus effectively increases the average time free electrons spend inside the barrier. Similarly, one can see that a single n-type charged impurity in the gap significantly decreases the dwell time and, more interestingly, the dwell time saturates with the tunnel gap length. Dipole impurities can also significantly affect the APAM tunnel junction dwell time. From Figure 8
Figure 5: **Local Density of States for \(\delta\)-layer tunnel junctions.** The \(LDOS(E,x)\) for a tunnel junction of \(L_{gap}=10\) nm with the presence of a dipole of length 0.8 nm located in the middle of tunnel junction and oriented along x-direction: **a** for an applied voltage of 1 mV to the drain contact while the source contact is grounded and **b** for 100 mV. The Fermi levels indicated in the figures correspond to the Fermi levels of the source and drain contacts. The corresponding effective 1D potentials, calculated by integrating over the (y,z)-plane the actual charge self-consistent 3D potentials weighted with the electron density, are shown in red for the device with the imperfection and in blue dashed line for the ideal device. \(N_{D}=1.0\times 10^{14}\) cm\({}^{-2}\), \(N_{A}=5.0\times 10^{17}\) cm\({}^{-3}\), and \(t=1\) nm.
Figure 6: **Local Density of States for \(\delta\)-layer tunnel junctions.** The \(LDOS(E,x)\) for a tunnel junction of \(L_{gap}=10\) nm with a dipole of length 4.0 nm located in the middle of the tunnel junction and oriented along the x-direction: **a** for an applied voltage of 1 mV to the drain contact while the source contact is grounded and **b** for 100 mV. The Fermi levels indicated in the figures correspond to the Fermi levels of the source and drain contacts. The corresponding effective 1D potentials, calculated by integrating over the (y,z)-plane the actual charge self-consistent 3D potentials weighted with the electron density, are shown in red for the device with the imperfection and in blue dashed line for the ideal device. \(N_{D}=1.0\times 10^{14}\) cm\({}^{-2}\), \(N_{A}=5.0\times 10^{17}\) cm\({}^{-3}\), and \(t=1\) nm.
\(\mathbf{b}\), it follows that dwell times for low-moment dipoles and the high-moment dipole oriented along the propagation direction are very similar to the time for the ideal device. However, for high-moment dipoles oriented along the transverse propagation directions, the dwell times are noticeably reduced, indicating that electrons spend less time in the junction.
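As a concrete illustration of the dwell-time definition above, the following minimal sketch evaluates \(t_{dwell}\) from an electron-density array over the junction volume and a tunneling current. The density, current and grid values are placeholders chosen only to show the bookkeeping and units; they are not simulation outputs.

```python
import numpy as np

# Dwell time t_dwell = q * (integral of n over the junction volume) / I_tun,
# i.e. the average number of electrons in the junction divided by the electron flux.
q = 1.602176634e-19                    # elementary charge [C]
dx = 0.2e-7                            # grid spacing: 0.2 nm expressed in cm
n = np.full((50, 75, 40), 1.0e17)      # electron density in the gap region [cm^-3] (placeholder)
I_tun = 1.0e-9                         # tunneling current [A] (placeholder)

N_electrons = n.sum() * dx**3          # average number of free electrons in the junction
t_dwell = q * N_electrons / I_tun      # [s]
print(f"dwell time ~ {t_dwell:.2e} s  ->  max operating frequency ~ {1.0 / t_dwell:.2e} Hz")
```

The inverse of this time gives the order-of-magnitude estimate of the maximum switching frequency discussed above.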
## Conclusions
In this work we have analyzed the influence of electric dipole impurities on the tunneling current in \(\delta\)-layer systems. We have employed an efficient implementation of the Non-Equilibrium Green's Function (NEGF) formalism, referred to as the Contact Block Reduction (CBR) method [24, 25, 26, 27, 28], to carry out the simulations. Our analysis has revealed several interesting results: i) the trend of the current ratio with the tunnel gap length in the low-bias regime shows a considerable oscillation; this oscillation vanishes in the high-bias regime. ii) high-moment dipoles exhibit anisotropic behaviour, since dipoles oriented in the transverse propagation directions behave differently than dipoles in the propagation direction; in contrast, low-moment dipoles do not show such strong anisotropic behaviour, since the impact of the dipole orientation is minimal. Additionally, high-moment dipoles oriented in the propagation direction behave very similarly to low-moment dipoles. iii) in the high-bias regime, high-moment dipoles oriented in the transverse propagation directions significantly affect the current, whereas the contribution is minimal for all low-moment dipoles and for high-moment dipoles oriented in the propagation direction. iv) in the low-bias regime, especially for large tunnel gap lengths, dipoles of all orientations and moments noticeably affect the tunneling rate in an indistinguishable manner; for narrow tunnel gap lengths, the effect of all dipoles on the tunneling rate is of the same order of magnitude and less pronounced.
Finally, the performed analysis of dwell times in APAM tunnel junctions in the presence of single charged and dipole impurities in the gap (Fig. 8) indicates the general suitability of APAM tunnel junction devices for terahertz applications in the "high-conductivity" regime (\(V\geq 0.1\) V).
## Method
The simulations in this work are conducted using the open-system charge self-consistent Non-Equilibrium Green's Function (NEGF) Keldysh formalism [36], together with the Contact Block Reduction (CBR) method [24, 25, 26, 27, 28] and effective mass theory. The CBR method allows a very efficient calculation of the density matrix, transmission function, etc. of an arbitrarily shaped, multiterminal two- or three-dimensional open device within the NEGF formalism and scales linearly, \(O(N)\), with the system size \(N\). The numerical details of the efficient open-system charge self-consistent treatment in 3D real-space are given in [29].
Within this framework, we solve self-consistently the open-system effective mass Schrodinger equation and the non-linear Poisson equation [24, 26, 28]. We employ a single-band effective mass approximation with a valley degeneracy of \(d_{val}=6\). For the charge self-consistent solution of the non-linear Poisson equation we use a combination of the predictor-corrector approach and Anderson mixing scheme [27, 28]. First, the Schrodinger equation is solved in a specially defined closed-system basis taking into account the Hartree potential \(\phi^{H}(\mathbf{r}_{i})\) and the exchange and correlation potential \(\phi^{XC}(\mathbf{r}_{i})\)[7]. Second, the LDOS of the open system, \(\rho(\mathbf{r}_{i},E)\), and the electron density, \(n(\mathbf{r}_{i})\), are computed using the CBR method for each iteration. Then the electrostatic
Figure 7: **Electrostatic potentials in equilibrium.** The figure shows the corresponding effective 1D electrostatic potential in equilibrium along the propagation direction (x-direction) in \(\mathbf{a}\), and along the transverse propagation directions, the y-direction in \(\mathbf{b}\) and the z-direction in \(\mathbf{c}\), for both dipole lengths (\(l_{dipole}=0.8,\ 4.0\) nm). For reference, the figures also include the electrostatic potential for the ideal \(\delta\)-layer tunnel junction device as a black line. The corresponding effective 1D potentials are calculated by integrating over the (y,z)-plane the actual charge self-consistent 3D potentials weighted with the electron density. \(t=1.0\) nm, \(L_{gap}=10.0\) nm, \(N_{D}=1.0\times 10^{14}\) cm\({}^{-2}\) and \(N_{A}=5.0\times 10^{17}\) cm\({}^{-3}\).
potential and the carrier density are used to calculate the residuum \(F\) of the Poisson equation
\[\left|\left|\boldsymbol{F}[\boldsymbol{\phi}^{H}(\boldsymbol{r}_{i})]\right| \right|=\left|\left|\boldsymbol{A}\boldsymbol{\phi}^{H}(\boldsymbol{r}_{i})-( \boldsymbol{n}(\boldsymbol{r}_{i})-\boldsymbol{N}_{D}(\boldsymbol{r}_{i})+ \boldsymbol{N}_{A}(\boldsymbol{r}_{i}))\right|\right|, \tag{1}\]
where \(\boldsymbol{A}\) is the matrix derived from the discretization of the Poisson equation and \(\boldsymbol{N}_{D}\) and \(\boldsymbol{N}_{A}\) are the total donor and acceptor doping density arrays, respectively. If the residuum is larger than a predetermined threshold \(\varepsilon\), the Hartree potential is updated using the predictor-corrector method, together with the Anderson mixing scheme. Using the updated Hartree potential and the corresponding carrier density, the exchange-correlation potential is computed again for the next step, and the Schrodinger-Poisson iteration is repeated until convergence is reached with \(\left|\left|\boldsymbol{F}[\boldsymbol{\phi}^{H}(\boldsymbol{r}_{i})]\right|\right|<\varepsilon=10^{-6}\) eV. In our simulations we have utilized a 3D real-space model with a discretization size of 0.2 nm along all directions, thus with about \(10^{6}\) real-space grid points, and around 3,000 energy points. The CBR algorithm automatically ascertains that, out of more than 1,000,000 eigenstates, only about 700 (\(<0.1\%\)) of the lowest-energy states are needed, which is generally determined by the material properties (e.g. doping level), but not the device size. We have also employed the standard values of the electron effective masses, \(m_{l}=0.98\times m_{e}\), \(m_{t}=0.19\times m_{e}\), the dielectric constant of silicon, \(\varepsilon_{Si}=11.7\), and the cryogenic temperature of T\(=4.0\) K in all our simulations. Further details of the methodology can be found in the provided Supplementary Material.
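To illustrate the mixing component of the self-consistent loop in isolation, the sketch below applies a standard Anderson (Type-II) mixing update to a toy vector fixed-point problem \(x=g(x)\). The map \(g\), the mixing parameters and the convergence threshold are stand-ins chosen for illustration; the actual Schrodinger-Poisson operators and the predictor-corrector step are not reproduced here.

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, beta=0.5, tol=1e-10, max_iter=200):
    """Solve x = g(x) by Anderson (Type-II) mixing of the last m residuals f = g(x) - x."""
    x = x0.copy()
    X_hist, F_hist = [], []                      # past iterates and residuals
    for k in range(max_iter):
        f = g(x) - x
        if np.linalg.norm(f) < tol:
            return x, k
        X_hist.append(x.copy()); F_hist.append(f.copy())
        if len(F_hist) == 1:                     # first step: plain linear mixing
            x = x + beta * f
            continue
        mk = min(m, len(F_hist) - 1)
        dF = np.column_stack([F_hist[-mk + i] - F_hist[-mk + i - 1] for i in range(mk)])
        dX = np.column_stack([X_hist[-mk + i] - X_hist[-mk + i - 1] for i in range(mk)])
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = x + beta * f - (dX + beta * dF) @ gamma
    return x, max_iter

# Toy contraction standing in for the self-consistent map.
rng = np.random.default_rng(0)
M = 0.3 * rng.standard_normal((20, 20)) / np.sqrt(20)
b = rng.standard_normal(20)
g = lambda x: np.tanh(M @ x + b)

x_star, iters = anderson_fixed_point(g, np.zeros(20))
print(f"converged in {iters} iterations, residual norm {np.linalg.norm(g(x_star) - x_star):.2e}")
```

In the actual workflow, the role of \(x\) is played by the Hartree potential on the real-space grid, and the fixed-point map corresponds to one Schrodinger-CBR-Poisson evaluation.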
### Validation
To validate our computational framework for \(\delta\)-layer tunnel junctions, we computed the tunneling resistance for different tunnel gaps, \(L_{gap}\), and compared the calculations against recently measured data from M. Donnelly _et al._[38]. Fig. 9 shows the tunneling resistance computed in our previous work [29] for an effective width of the \(\delta\)-layer of 7 nm. The figure also includes the resistance measurements and tight-binding calculations for tunnel junctions with a \(\delta\)-layer width of 7 nm from M. Donnelly _et al._[38]. One can observe that our predicted tunneling resistances (full black circles) are very close to both the experimental data (empty red circles) and the parameter-fitting tight-binding simulations (blue crosses). Fig. 9 demonstrates the true predictive power of our open-system effective mass framework for highly-conductive, highly-confined systems. We also emphasize that there are no fitting parameters in our calculations; we use only the standard values of the electron effective masses and the dielectric constant of silicon. Finally, the slight differences between our computed tunneling resistances and the experimental measurements can be accounted for by the following reasons: i) certain variations in the width, thickness, and doping density of the \(\delta\)-layer (note that a doping density of \(N_{D}=1.0\times 10^{14}\) cm\({}^{-2}\) was assumed in [29], while a higher doping density, \(N_{D}=2\times 10^{14}\) cm\({}^{-2}\), was employed in [38]); ii) the possible presence of impurities and/or defects near the tunnel gap.
|
2305.13801
|
A Critical Reexamination of Intra-List Distance and Dispersion
|
Diversification of recommendation results is a promising approach for coping
with the uncertainty associated with users' information needs. Of particular
importance in diversified recommendation is to define and optimize an
appropriate diversity objective. In this study, we revisit the most popular
diversity objective called intra-list distance (ILD), defined as the average
pairwise distance between selected items, and a similar but lesser known
objective called dispersion, which is the minimum pairwise distance. Owing to
their simplicity and flexibility, ILD and dispersion have been used in a
plethora of diversified recommendation research. Nevertheless, we do not
actually know what kind of items are preferred by them.
We present a critical reexamination of ILD and dispersion from theoretical
and experimental perspectives. Our theoretical results reveal that these
objectives have potential drawbacks: ILD may select duplicate items that are
very close to each other, whereas dispersion may overlook distant item pairs.
As a competitor to ILD and dispersion, we design a diversity objective called
Gaussian ILD, which can interpolate between ILD and dispersion by tuning the
bandwidth parameter. We verify our theoretical results by experimental results
using real-world data and confirm the extreme behavior of ILD and dispersion in
practice.
|
Naoto Ohsaka, Riku Togashi
|
2023-05-23T08:14:34Z
|
http://arxiv.org/abs/2305.13801v1
|
# A Critical Reexamination of Intra-List Distance and Dispersion
###### Abstract.
Diversification of recommendation results is a promising approach for coping with the uncertainty associated with users' information needs. Of particular importance in diversified recommendation is to define and optimize an appropriate diversity objective. In this study, we revisit the most popular diversity objective called _intra-list distance (ILD)_, defined as the average pairwise distance between selected items, and a similar but lesser known objective called _dispersion_, which is the minimum pairwise distance. Owing to their simplicity and flexibility, ILD and dispersion have been used in a plethora of diversified recommendation research. Nevertheless, _we do not actually know_ what kind of items are preferred by them.
We present a critical reexamination of ILD and dispersion from theoretical and experimental perspectives. Our theoretical results reveal that these objectives have potential drawbacks: ILD may select _duplicate_ items that are very close to each other, whereas dispersion may overlook _distant_ item pairs. As a competitor to ILD and dispersion, we design a diversity objective called Gaussian ILD, which can _interpolate_ between ILD and dispersion by tuning the bandwidth parameter. We verify our theoretical results by experimental results using real-world data and confirm the extreme behavior of ILD and dispersion in practice.
diversified recommendation; intra-list distance; dispersion
the hope that we can characterize what they are representing and reveal their drawbacks. We first identify the following potential drawbacks of ILD and dispersion based on our theoretical comparisons (Section 4): _ILD selects items in a well-balanced manner if the entire item set is separated into two clusters. However, it may generally select duplicate items that are very close to each other. The items chosen by dispersion are well-scattered, but distant item pairs may be overlooked._
We then conduct numerical experiments to verify the assertions based on our theoretical analysis (Section 6). Our empirical results using MovieLens (Movica et al., 2017) and Amazon Review (Movica et al., 2018) demonstrate that _ILD can readily select many items that are similar or even identical_, which is undesirable if we wish to recommend very few items. Figure 1 shows a cloud of points in an ellipse such that ILD and dispersion select very different item sets. Our theoretical and empirical results imply that the items selected via ILD are biased toward two distant groups; items in the middle of the ellipse are never chosen. In contrast, the items selected by dispersion are well-scattered.
To better understand the empirical behaviors of ILD and dispersion, we design a new distance-based objective that generalizes ILD and dispersion as a competitor (Section 5). The designed one, _Gaussian ILD_ (GILD), is defined as the average of the Gaussian kernel distances (Movica et al., 2018) between selected items. GILD has bandwidth parameter \(\sigma\), and we prove that GILD approaches ILD as \(\sigma\rightarrow\infty\) and approaches dispersion as \(\sigma\rightarrow\) 0; i.e., it can _interpolate_ between them. We experimentally confirm that GILD partially circumvents the issues caused by the extreme behaviors of ILD and dispersion, thereby achieving a _sweet spot_ between them (Section 6).
Finally, we examine the recommendation results obtained by enhancing ILD, dispersion, and GILD (Section 7). The experimental results demonstrate that (1) ILD frequently selects duplicate items, and thus it is not an appropriate choice; (2) if the relevance of the recommended items is highly prioritized, dispersion fails to diversify the recommendation results for some users.
In summary, _ILD is not appropriate for either evaluating or enhancing distance-based diversity_, whereas _dispersion is often suitable for improving diversity, but not necessarily for measuring diversity._
## 2. Related Work
Diversity enhancement has various motivations (Han et al., 2017); e.g., (1) because a user's preference is uncertain owing to the inherent sparsity of user feedback, recommending a set of diverse items has the potential to satisfy a user's needs; (2) users desire diversity of recommended items due to the variety-seeking behavior. Other beyond-accuracy objectives include novelty, serendipity, and coverage; see, e.g., Castells et al. (2018), Kaminskas and Bridge (Kaminskas and Bridge, 2019), and Zangerle and Bauer (Zangerle and Bauer, 2018).
Generally, there are two types of diversity. One is _individual diversity_, which represents the diversity of recommended items for each user. The other is _aggregate diversity_(Bauer, 2017; Bauer, 2017), which represents the diversity _across_ users and promotes long-tail items. We review the definitions and enhancement algorithms for individual diversity, which is simply referred to as _diversity_ throughout this paper.
Defining Diversity ObjectivesThe _intra-list distance_ (ILD) (also known as the _average pairwise distance_) due to Smyth and McClave (Smyth and McClave, 1977) and Ziegler et al. (Ziegler et al., 1979) is among the earliest diversity objectives in recommendation research. Owing to its simplicity and flexibility in the choice of a distance metric, ILD has been used in a plethora of subsequent works (Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017). _Dispersion_ is another distance-based diversity objective that is similar to ILD. Maximizing the dispersion value is known as the _\(p\)-dispersion_ problem in operations research and is motivated by applications in facility location (Bauer, 2017; Bauer, 2017; Bauer, 2017). Notably, only a few studies on recommender systems (Bauer, 2017; Bauer, 2017) adopt dispersion as the diversity objective. _Determinantal point processes_ (_DPP_) are probabilistic models that express the negative correlation among items using the determinant (Bauer, 2017; Bauer, 2017). DPP-based objectives have recently been applied to recommender systems (Kulesza and Taskar, 2017). See Kulesza and Taskar (2017) for more details. _Topical diversity objectives_ use predefined topic information to directly evaluate how many topics are covered by selected items and/or the extent to which topic redundancy should be avoided (Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017). Such topic information is often readily available in many domains such as movies, music, and books. In this paper, we do not compare DPPs or topical diversity because we deeply investigate ILD and dispersion, which are more commonly used.
Gollapudi and Sharma (Gollapudi and Sharma, 2017) use an _axiomatic approach_, in which they design a set of axioms that a diversity objective should satisfy, and prove that no objective, including ILD and dispersion, can satisfy all the axioms simultaneously. Amigo et al. (Amigo et al., 2017) present another axiomatic analysis of diversity-aware evaluation measures. Our study is orthogonal to these works because we focus on elucidating what diversity objectives represent.
Diversity Enhancement AlgorithmsWe review algorithms for enhancing the diversity of recommended items. The basic approach simultaneously optimizes both relevance and diversity. Given the relevance \(\mathsf{rel}(i)\) for each item \(i\) and a diversity objective \(\mathsf{div}(\cdot)\) (e.g., ILD), we can formulate an objective function as a linear combination of the average relevance and diversity of selected items \(S\), i.e.,
\[\max_{S}\ (1-\lambda)\cdot\frac{1}{|S|}\sum_{i\in S}\mathsf{rel}(i)+\lambda\cdot \mathsf{div}(S), \tag{1}\]
where \(\lambda\in(0,1)\) is the trade-off parameter. The _maximal marginal relevance_ (_MMR_) (Gallapudi and Sharma, 2017) is an initial attempt using this approach, which applies a greedy heuristic to Eq. (1). Greedy-style algorithms are widely used in many diversified recommendation studies (Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017; Bauer, 2017). Other algorithms include local search (Bauer, 2017), binary quadratic programming (Ziegler et al., 1979; Ziegler et al., 1979), and multi-objective optimization (Miller and Bauer, 2017; Bauer, 2017). However, even (Pareto) optimal solutions are undesirable unless we choose an "appropriate" objective to be optimized. We investigate whether the greedy maximization of one diversity objective is useful for enhancing another objective.
Learning-to-rank approaches aim to directly learn the optimal ranking of recommended items for each user under a particular definition of the loss function. Notably, the underlying function that models diversity often originates from existing diversity objectives, including ILD (Bauer, 2017; Bauer, 2017). Thus, our study helps understand the impact of underlying diversity modeling on recommendation results.
Evaluation Measures in Information RetrievalIn information retrieval (IR), efforts were made to render classical IR evaluation measures diversity-aware to address the uncertainty in users' queries, e.g., \(\alpha\)-normalized discounted cumulative gain (\(\alpha\)-nDCG) (Bauer, 2017), Intent-Aware measures (Bauer, 2017), D#-measures (Bauer, 2017), and \(\alpha\beta\)-nDCG (Bauer, 2017). We do
not consider such diversity-aware IR measures, which assume that a distribution over the intents is available for each query.
## 3. Preliminaries
NotationsFor a nonnegative integer \(n\), let \([n]\triangleq\{1,2,\ldots,n\}\). For a finite set \(S\) and an integer \(k\), we write \(\binom{S}{k}\) for the family of all size-\(k\) subsets of \(S\). Vectors and matrices are written in bold (e.g., \(\mathbf{v}\) and \(\mathbf{A}\)), and the \(i\)-th entry of a vector \(\mathbf{v}\) in \(\mathbb{R}^{d}\) is denoted \(v(i)\). The Euclidean norm is denoted \(\|\cdot\|\); i.e., \(\|\mathbf{v}\|\triangleq\sqrt{\sum_{i\in[d]}v(i)^{2}}\) for a vector \(\mathbf{v}\) in \(\mathbb{R}^{d}\).
Recap of Two Diversity ObjectivesWe formally define two popular distance-based diversity objectives. We assume that a pairwise distance \(d(i,j)\) is given between every pair of items \(i,j\). One objective is the _intra-list distance (ILD)_, which is defined as
\[\operatorname{ILD}(S)\triangleq\frac{1}{\binom{|S|}{2}}\sum_{i\neq j\in S}d(i,j)\]
for an item set \(S\). The definition of ILD is intuitive, as it simply takes the average of the pairwise distances between all the items in \(S\). The other is _dispersion_, which is defined as the minimum pairwise distance between selected items:
\[\operatorname{disp}(S)\triangleq\min_{i\neq j\in S}d(i,j).\]
Dispersion is stricter than ILD in that it evaluates the pairwise distance among \(S\) in the _worst-case_ sense.
We can flexibly choose any distance function \(d\) depending on the application. Such a distance function is often a _metric_; i.e., the following three axioms are satisfied for any items \(i,j,k\): (1) identity of indiscernibles: \(d(i,j)=0\iff i=j\); (2) symmetry: \(d(i,j)=d(j,i)\); (3) triangle inequality: \(d(i,j)+d(j,k)\geq d(i,k)\). Commonly-used distance metrics in diversified recommendation include the Euclidean distance (Beng et al., 2015; Chen et al., 2016), i.e., \(d(i,j)\triangleq\|\mathbf{x}_{i}-\mathbf{x}_{j}\|\), where \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) are the feature vectors of items \(i\) and \(j\), respectively, the cosine distance (Kang et al., 2015; Chen et al., 2016), and the Jaccard distance (Kang et al., 2015; Chen et al., 2016; Chen et al., 2017).
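As a concrete reference, the two objectives can be computed directly from pairwise distances. The following minimal sketch uses the Euclidean distance on toy feature vectors; the data are illustrative only.

```python
import numpy as np
from itertools import combinations

def ild(X, S):
    """Intra-list distance: average pairwise Euclidean distance over the items in S."""
    return float(np.mean([np.linalg.norm(X[i] - X[j]) for i, j in combinations(S, 2)]))

def dispersion(X, S):
    """Dispersion: minimum pairwise Euclidean distance over the items in S."""
    return min(np.linalg.norm(X[i] - X[j]) for i, j in combinations(S, 2))

# Two near-duplicate items plus one distant item: ILD is large, dispersion is near zero.
X = np.array([[0.0, 0.0], [0.01, 0.0], [10.0, 0.0]])
S = [0, 1, 2]
print(ild(X, S), dispersion(X, S))
```

The tiny example already hints at the contrast studied below: a set containing near-duplicates can still score a high ILD while its dispersion collapses toward zero.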
Greedy HeuristicHere, we explain a greedy heuristic for enhancing diversity. This heuristic has been frequently used in diversified recommendations, and thus we use it for theoretical and empirical analyses of ILD and dispersion in Sections 4, 6 and 7.
Consider the problem of selecting a set of \(k\) items that maximize the value of a particular diversity objective f. This problem is **NP**-hard, even if f is restricted to ILD (Kang et al., 2015) and dispersion (Kang et al., 2015; Chen et al., 2016). However, we can obtain an approximate solution to this problem using the simple greedy heuristic shown in Algorithm 1. Given a diversity objective \(\mathsf{f}:2^{[n]}\rightarrow\mathbb{R}_{+}\) on \(n\) items and an integer \(k\in[n]\) representing the number of items to be recommended, the greedy heuristic iteratively selects an item of \([n]\), not having been chosen so far, that maximizes the value of f. This heuristic has the following advantages from both theoretical and practical perspectives: (1) it is _efficient_ because the number of evaluations of f is at most \(nk\); (2) it provably finds a \(\frac{1}{2}\)-approximate solution to the maximization of ILD (Kang et al., 2015) and dispersion (Kang et al., 2015), and it performs nearly optimally in practice.
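A compact sketch of this greedy heuristic is given below; it works for any set objective f over items \(0,\dots,n-1\). The example objective and the random features are illustrative, and ties in the first pick are broken arbitrarily, as in Algorithm 1.

```python
import numpy as np
from itertools import combinations

def greedy_diversify(f, n, k):
    """Greedy heuristic: repeatedly add the item that maximizes f of the current set."""
    S = []
    for _ in range(k):
        best = max((i for i in range(n) if i not in S), key=lambda i: f(S + [i]))
        S.append(best)
    return S

# Example: greedily maximize ILD over random 2-D item features.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 2))
ild = lambda S: (float(np.mean([np.linalg.norm(X[i] - X[j]) for i, j in combinations(S, 2)]))
                 if len(S) > 1 else 0.0)
print(greedy_diversify(ild, n=50, k=5))
```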
## 4. Theoretical Comparison
We present a theoretical analysis of the comparison between ILD and dispersion. Our goal is to elucidate the _correlation_ between two diversity objectives. Once we establish that enhancing a diversity objective \(\mathsf{f}\) results in an increase in another \(\mathsf{g}\) to some extent, we merely maximize \(\mathsf{f}\) to obtain diverse items with respect to _both_ f and \(\mathsf{g}\). In contrast, if there is no such correlation, we shall characterize what f and \(\mathsf{g}\) are representing or enhancing. The remainder of this section is organized as follows: Section 4.1 describes our analytical methodology, Section 4.2 summarizes our results, and Section 4.3 is devoted to lessons learned based on our results.
### Our Methodology
We explain how to quantify the correlation between two diversity objectives. Suppose we are given a diversity objective \(\mathsf{f}:2^{[n]}\rightarrow\mathbb{R}_{+}\) over \(n\) items and an integer \(k\in[n]\) denoting the output size (i.e., the number of items to be recommended). We define \(\mathsf{f}\)_-diversification_ as the following optimization problem:
\[\max_{S\in\binom{[n]}{k}}\mathsf{f}(S).\]
Hereafter, the optimal item set of \(\mathsf{f}\)-diversification is denoted \(S^{*}_{\mathsf{f},k}\) and the optimal value is denoted \(\operatorname{OPT}_{\mathsf{f},k}\); namely, we define \(S^{*}_{\mathsf{f},k}\triangleq\operatorname{argmax}_{S\in\binom{[n]}{k}}\mathsf{f}(S)\) and \(\operatorname{OPT}_{\mathsf{f},k}\triangleq\mathsf{f}(S^{*}_{\mathsf{f},k})\). We also denote by \(S^{\operatorname{Gr}}_{\mathsf{f},k}\) the set of \(k\) items selected using the greedy heuristic on f. We omit the subscript "\(k\)" when it is clear from the context. Concepts related to approximation algorithms are also introduced.
Definition 4.1 ().: We say that a \(k\)-item set \(S\) is a _\(\rho\)-approximation_ to f-diversification for some \(\rho\leq 1\) if it holds that
\[\mathsf{f}(S)\geq\rho\cdot\operatorname{OPT}_{\mathsf{f},k}.\]
Parameter \(\rho\) is called the _approximation factor_.
For example, the greedy heuristic returns a \(\frac{1}{2}\)-approximation for ILD-diversification; i.e., \(\operatorname{ILD}(S^{\operatorname{Gr}}_{\operatorname{ILD}})\geq\frac{1}{2}\cdot\operatorname{OPT}_{\operatorname{ILD}}\).
We now quantify the correlation between a pair of diversity objectives f and \(\mathsf{g}\). The primary logic is to think of the optimal set \(S^{*}_{\mathsf{f},k}\) for f-diversification as an algorithm for \(\mathsf{g}\)-diversification. The correlation is measured using the approximation factor of this algorithm for \(\mathsf{g}\)-diversification, i.e.,
\[\frac{\mathsf{g}(S^{*}_{\mathsf{f},k})}{\operatorname{OPT}_{\mathsf{g},k}}. \tag{2}\]
Intuitively, if this factor is sufficiently large, then we merely maximize the value of f; e.g., if Eq. (2) is 0.99, then any item set having the optimum f is also nearly optimal with respect to \(\mathsf{g}\). However, when Eq. (2) is very low, such an item set is not necessarily good with respect to \(\mathsf{g}\); namely, f-diversification does _not_ imply \(\mathsf{g}\)-diversification. Note that we can replace \(S^{*}_{\mathsf{f},k}\) with the greedy solution, whose approximation factor is \(\frac{\mathsf{g}(S^{\operatorname{Gr}}_{\mathsf{f},k})}{\operatorname{OPT}_{\mathsf{g},k}}\). Our analytical methodology is twofold:
1. We prove a guarantee on the approximation factor; i.e., there exists a factor \(\rho\) such that \(\frac{\mathsf{g}(S^{*}_{\mathsf{f}})}{\operatorname{OPT}_{\mathsf{g}}}\geq\rho\) for _every_ set of items with a distance metric.
2. We construct an input to indicate inapproximability; i.e., there exists a (small) factor \(\rho^{\prime}\) such that \(\frac{\mathsf{g}(S^{*}_{\mathsf{f}})}{\operatorname{OPT}_{\mathsf{g}}}<\rho^{\prime}\) for _some_ item set with a distance metric. Such an input demonstrates the case in which f and g are quite different; thus, we can use it to characterize what f and g represent.
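For small instances, the factor in Eq. (2) can be evaluated exactly by brute force, which is a convenient way to probe the correlation between two objectives numerically. The sketch below enumerates all size-\(k\) subsets to obtain both \(S^{*}_{\mathsf{f}}\) and \(\operatorname{OPT}_{\mathsf{g}}\); the random 2-D features are illustrative only.

```python
import numpy as np
from itertools import combinations

def pairwise_dists(X, S):
    return [np.linalg.norm(X[i] - X[j]) for i, j in combinations(S, 2)]

def cross_approx_factor(X, k, f, g):
    """Return g(S*_f) / OPT_g, both optima found by exhaustive enumeration (Eq. (2))."""
    subsets = list(combinations(range(len(X)), k))
    S_f = max(subsets, key=lambda S: f(pairwise_dists(X, S)))
    opt_g = max(g(pairwise_dists(X, S)) for S in subsets)
    return g(pairwise_dists(X, S_f)) / opt_g

rng = np.random.default_rng(2)
X = rng.standard_normal((12, 2))          # 12 items: small enough to enumerate all subsets
ild, disp = np.mean, np.min               # objectives applied to the list of pairwise distances
print("disp(S*_ILD) / OPT_disp =", cross_approx_factor(X, 4, ild, disp))
print("ILD(S*_disp) / OPT_ILD  =", cross_approx_factor(X, 4, disp, ild))
```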
### Our Results
We now present our results, each of which (i.e., a theorem or claim) is followed by a remark devoted to its intuitive implication. Given that ILD and dispersion differ only in that the former takes the _average_ and the latter the _minimum_ over all pairs of items, an item set with a large dispersion value is expected to possess a large ILD value. This intuition is first justified. We define the _diameter_\(D\) for \(n\) items as the maximum pairwise distance; i.e., \(D\triangleq\max_{i\neq j\in[n]}d(i,j)\), and denote by \(d_{k}^{*}\) the maximum dispersion among \(k\) items; i.e., \(d_{k}^{*}\triangleq\operatorname{OPT}_{\operatorname{disp},k}\). Our first result is the following, whose proof is deferred to Appendix A.
Theorem 4.2 ().: _The following inequalities hold for any input and distance metric: \(\frac{\operatorname{ILD}(S^{*}_{\operatorname{disp},k})}{\operatorname{OPT}_{\operatorname{ILD},k}}\geq\frac{d_{k}^{*}}{D}\) and \(\frac{\operatorname{ILD}(S^{\operatorname{Gr}}_{\operatorname{disp},k})}{\operatorname{OPT}_{\operatorname{ILD},k}}\geq\max\left\{\frac{d_{k}^{*}}{D},\frac{1}{k}\right\}\). In other words, the optimal size-\(k\) set for \(\operatorname{disp}\)-diversification is a \(\frac{d_{k}^{*}}{D}\)-approximation to ILD-diversification, and Algorithm 1 on \(\operatorname{disp}\) returns a \(\max\{\frac{d_{k}^{*}}{D},\frac{1}{k}\}\)-approximation to ILD-diversification._
_Remark_:: Theorem 4.2 implies that _the larger the dispersion, the larger the ILD_, given that \(D\) is not significantly large. In contrast, if the maximum dispersion \(d_{k}^{*}\) is much smaller than \(D\), the approximation factor \(\frac{d_{k}^{*}}{D}\) becomes less fascinating. Fortunately, the greedy heuristic exhibits a \(\frac{1}{k}\)-approximation, which facilitates a data-independent guarantee.
We demonstrate that Theorem 4.2 is almost tight; the proof is deferred to Appendix A.
Claim 4.3 ().: _There exists an input such that the pairwise distance is the Euclidean distance between feature vectors, and the following holds: \(\frac{\operatorname{ILD}(S^{*}_{\operatorname{disp},k})}{\operatorname{OPT}_{\operatorname{ILD},k}}=\mathcal{O}\left(\frac{d_{k}^{*}}{D}\right)\) and \(\frac{\operatorname{ILD}(S^{\operatorname{Gr}}_{\operatorname{disp},k})}{\operatorname{OPT}_{\operatorname{ILD},k}}=\mathcal{O}\left(\frac{1}{k}+\frac{d_{k}^{*}}{D}\right)\). In particular, Theorem 4.2 is tight up to constants._
_Remark_:: The input used in the proof of Claim 4.3 consists of two "clusters" such that the intra-cluster distance of each cluster is extremely small (specifically, \(\epsilon\)) and the inter-cluster distance between them is large. The ILD value is maximized when the same number of items from each cluster are selected. However, any set of three or more items has a dispersion \(\epsilon\); namely, we cannot distinguish between the largest-ILD case and the small-ILD case based on the value of dispersion.
In the reverse direction, we provide a very simple input for which the dispersion value can be \(0\) _no matter how large the ILD value is_; the proof is deferred to Appendix A.
Claim 4.4 ().: _There exists an input such that the pairwise distance is the Euclidean distance and \(\frac{\operatorname{disp}(S_{\operatorname{ILD}}^{*})}{\operatorname{OPT}_{\operatorname{disp}}}=\frac{\operatorname{disp}(S_{\operatorname{ILD}}^{\operatorname{Gr}})}{\operatorname{OPT}_{\operatorname{disp}}}=0\). In other words, neither greedy nor exact maximization of ILD has any approximation guarantee for \(\operatorname{disp}\)-diversification._
_Remark_:: The input used in the proof of Claim 4.4 consists of (duplicates allowed) points on a line segment. Dispersion selects distinct points naturally. In contrast, ILD prefers points on the two ends of the segment, which are redundant.
### Lessons Learned
Based on the theoretical investigations so far, we discuss the pros and cons of ILD and dispersion. Figure 2 shows two illustrative inputs such that maximization of ILD and dispersion results in very different solutions, where each item is a 2-dimensional vector and the distance between items is measured by the Euclidean distance.
* **Pros of ILD**: If the entire item set is separated into two "clusters" as shown in Figure 2(a), ILD selects items in a well-balanced manner; i.e., a nearly equal number of items is chosen from each cluster (supported by Claim 4.3).
* **Cons of ILD**: ILD may select duplicate items that are very close (or even identical) to each other. Suppose that we are given feature vectors in an ellipse as shown in Figure 2(b). Then, ILD would select items from the left and right ends, each of which consists of similar feature vectors (supported by Claim 4.4); moreover, items in the middle of the ellipse are never chosen. In practice, if item features are given by _dense_ vectors such as those generated by deep neural networks, ILD is undesirable because it selects many nearly-identical vectors.
* **Pros of dispersion**: If the entire item set is "well-dispersed" as in Figure 2(b), then so are the items chosen by dispersion.
* **Cons of dispersion**: Dispersion may overlook distant item pairs that would have contributed to ILD. Suppose that we are given feature vectors in the two circles in Figure 2(a). Because the dispersion value of any set of (three or more) items is small whereas the diameter is large, we cannot distinguish distant items from close items using only the dispersion value. Thus, dispersion may select items in an unbalanced manner in the worst case (as in Claim 4.3). In practice, if item features are given by _sparse_ (e.g., 0-1) vectors, such as indicator functions defined by genre or topic information, dispersion may not be favorable, because its value becomes 0 whenever two or more items with the same feature are selected. (A small code sketch after this list illustrates both behaviors on synthetic data.)
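The following is a minimal sketch, not taken from the paper (all function and variable names are ours), that greedily maximizes ILD and dispersion on a synthetic two-cluster input resembling Figure 2(a) and reports both diversity values of the resulting sets:

```python
import numpy as np

def ild(S, dist):
    """Intra-list distance: average pairwise distance of the index set S."""
    S = list(S)
    if len(S) < 2:
        return 0.0
    return float(np.mean([dist[i, j] for a, i in enumerate(S) for j in S[a + 1:]]))

def dispersion(S, dist):
    """Dispersion: minimum pairwise distance of the index set S."""
    S = list(S)
    if len(S) < 2:
        return 0.0
    return float(min(dist[i, j] for a, i in enumerate(S) for j in S[a + 1:]))

def greedy(f, n, k, dist):
    """Greedy heuristic (Algorithm 1): repeatedly add the item maximizing f."""
    S = []
    for _ in range(k):
        best = max((i for i in range(n) if i not in S), key=lambda i: f(S + [i], dist))
        S.append(best)
    return S

rng = np.random.default_rng(0)
# Two well-separated clusters in the plane (cf. the TwoCircles input).
X = np.vstack([rng.normal([-0.75, 0.0], 0.05, size=(50, 2)),
               rng.normal([0.75, 0.0], 0.05, size=(50, 2))])
dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

S_ild, S_disp = greedy(ild, len(X), 8, dist), greedy(dispersion, len(X), 8, dist)
print("ILD-greedy : ILD=%.3f disp=%.3f" % (ild(S_ild, dist), dispersion(S_ild, dist)))
print("disp-greedy: ILD=%.3f disp=%.3f" % (ild(S_disp, dist), dispersion(S_disp, dist)))
```

On such an input, the ILD-greedy set tends to concentrate on the two cluster extremes, while the dispersion-greedy set keeps all selected items pairwise far apart, matching the pros and cons above.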
## 5. Gaussian Intra-List Distance
In Section 4.3, we discussed that ILD and dispersion have their own _extreme behaviors_. We now argue that they can be viewed as two limits of a single kernel-based objective; i.e., we apply the Gaussian kernel to ILD. The _Gaussian kernel_ for two vectors \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\) is
Figure 2. Two inputs for which maximization of ILD and dispersion results in very different solutions.
defined as \(K(\mathbf{x},\mathbf{y})\triangleq\exp\Bigl{(}-\frac{\|\mathbf{x}-\mathbf{y}\|^{2}}{2 \sigma^{2}}\Bigr{)}\), where \(\sigma>0\) is a _bandwidth_ parameter that controls the smoothness of the estimated function in kernel methods. Since the kernel function can be considered a _similarity score_, we can define the _kernel distance_(Kang et al., 2017) as \(d_{K}(\mathbf{x},\mathbf{y})=\sqrt{2-2K(\mathbf{x},\mathbf{y})}\). Using this kernel distance, we define the _Gaussian ILD (GILD)_ as
\[\operatorname{GILD}_{\sigma}(S)\triangleq\frac{1}{\binom{|S|}{2}}\sum_{i\neq j \in S}\sqrt{2-2\exp\left(-\frac{d(i,j)^{2}}{2\sigma^{2}}\right)}, \tag{3}\]
where \(d\) is a distance metric and \(\sigma\) is a bandwidth parameter.1 The following asymptotic analysis shows that GILD interpolates ILD and dispersion, whose proof is deferred to Appendix A.
Footnote 1: Note that we have replaced the Euclidean distance in \(\exp\Bigl{(}-\frac{\|\mathbf{x}-\mathbf{y}\|^{2}}{2\sigma^{2}}\Bigr{)}\) by \(d\) so that we can use any distance metric.
Theorem 5.1.: _GILD approaches ILD as the value of \(\sigma\) goes to \(\infty\), and it approaches dispersion as the value of \(\sigma\) goes to \(0\) (up to scaling and addition by a constant)._
Theorem 5.1 implies that GILD behaves as a compromise between ILD and dispersion by tuning the bandwidth parameter \(\sigma\): the value of \(\sigma\) must be small if we do not want the selected items to be close to each other; \(\sigma\) must be large if we want to include (a few) distant items.
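As a quick numerical check of Theorem 5.1 (a sketch under our own naming conventions, not the authors' code), one can compute GILD from a distance matrix and verify that \(\sigma\cdot\operatorname{GILD}_{\sigma}(S)\) approaches \(\operatorname{ILD}(S)\) as \(\sigma\) grows:

```python
import numpy as np

def gild(S, dist, sigma):
    """Gaussian ILD of Eq. (3): average kernel distance over all pairs in S."""
    pairs = [(i, j) for a, i in enumerate(S) for j in S[a + 1:]]
    vals = [np.sqrt(2.0 - 2.0 * np.exp(-dist[i, j] ** 2 / (2.0 * sigma ** 2))) for i, j in pairs]
    return float(np.mean(vals))

rng = np.random.default_rng(1)
X = rng.random((20, 2))
dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
S = list(range(len(X)))
ild_val = np.mean([dist[i, j] for a, i in enumerate(S) for j in S[a + 1:]])

for sigma in (0.01, 0.1, 1.0, 10.0, 100.0):
    # For large sigma, sigma * GILD_sigma(S) should approach ILD(S) (Theorem 5.1).
    print(f"sigma={sigma:7.2f}  sigma*GILD={sigma * gild(S, dist, sigma):.4f}  ILD={ild_val:.4f}")
```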
We use GILD to better understand the empirical behavior of ILD and dispersion. In particular, we are interested in whether GILD can avoid the extreme behaviors of ILD and dispersion.
### Choosing the Value of \(\sigma\)
Here, we briefly establish how to choose the value of \(\sigma\) used in Section 6. As will be shown in Section 6.2.3, GILD exhibits extreme behaviors like ILD or dispersion unless \(\sigma\) is chosen carefully; we wish to determine the value of \(\sigma\) for which GILD interpolates them. Suppose that we have selected \(k\) items, denoted \(S\). In Eq. (6) in the proof of Theorem 5.1, for the first two terms to be dominant, we must have \(C\gg(\binom{k}{2}-C)\cdot\epsilon_{\sigma}\), which implies that \(\sigma\gg\sqrt{\frac{(\operatorname{disp}(S)+\delta)^{2}-\operatorname{disp}(S)^{2}}{2\log(\binom{k}{2}-1)}}\). Based on this, we propose the following two schemes for determining the value of \(\sigma\), referred to as the _adjusted minimum_ and the _adjusted median_:
\[\sigma_{S}^{\min}\triangleq\frac{\min_{i\neq j\in S}d(i,j)}{\sqrt{2\log\left(\binom{k}{2}-1\right)}}\quad\text{and}\quad\sigma_{S}^{\text{med}}\triangleq\frac{\operatorname{median}_{i\neq j\in S}d(i,j)}{\sqrt{2\log\left(\binom{k}{2}-1\right)}}. \tag{4}\]
Note that \(\sigma_{S}^{\min}\leq\sigma_{S}^{\text{med}}\), and the adjusted median mimics the median heuristic (Zhou et al., 2017; Zhang et al., 2018) in kernel methods. In Section 6, we empirically justify that dividing by \(\sqrt{2\log(\binom{k}{2}-1)}\) is necessary. Since \(\sigma_{S}^{\min}\) and \(\sigma_{S}^{\text{med}}\) depend on \(S\), we run the greedy heuristic while adjusting the value of \(\sigma\) _adaptively_ using Eq. (4): more precisely, in line 1 of Algorithm 1, we define \(f(\{i_{1},\ldots,i_{\ell},i\})\triangleq\operatorname{GILD}_{\sigma}(S\cup\{i\})-\operatorname{GILD}_{\sigma}(S)\), where \(S\triangleq\{i_{1},\ldots,i_{\ell}\}\) and \(\sigma\) is \(\sigma_{S\cup\{i\}}^{\min}\) or \(\sigma_{S\cup\{i\}}^{\text{med}}\). We further slightly modify this heuristic so that it selects the pair of farthest items when \(k=2\), because the denominator \(\sqrt{2\log(\binom{k}{2}-1)}\) is undefined in that case.
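A possible implementation of this adaptive greedy heuristic, sketched below under our own naming (the paper's actual code may differ), recomputes the adjusted median of Eq. (4) each time a candidate item is evaluated:

```python
import numpy as np

def gild(S, dist, sigma):
    vals = [np.sqrt(2 - 2 * np.exp(-dist[i, j] ** 2 / (2 * sigma ** 2)))
            for a, i in enumerate(S) for j in S[a + 1:]]
    return float(np.mean(vals))

def adjusted_median_sigma(S, dist):
    """Adjusted median of Eq. (4): median pairwise distance over sqrt(2 log(C(k,2) - 1))."""
    k = len(S)
    pair_d = [dist[i, j] for a, i in enumerate(S) for j in S[a + 1:]]
    return float(np.median(pair_d)) / np.sqrt(2.0 * np.log(k * (k - 1) / 2 - 1))

def adaptive_greedy_gild(dist, k):
    n = dist.shape[0]
    # For k = 2, select the pair of farthest items (the divisor in Eq. (4) is undefined there).
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    S = [int(i), int(j)]
    while len(S) < k:
        best, best_gain = None, -np.inf
        for cand in range(n):
            if cand in S:
                continue
            sigma = adjusted_median_sigma(S + [cand], dist)  # sigma adapted to S ∪ {cand}
            gain = gild(S + [cand], dist, sigma) - gild(S, dist, sigma)
            if gain > best_gain:
                best, best_gain = cand, gain
        S.append(best)
    return S
```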
## 6. Empirical Comparison
We report the experimental results of the empirical comparison among the diversity objectives analyzed in Sections 4 and 5. The theoretical results in Section 4 demonstrate that each objective captures its own notion of diversity; thus, enhancing one objective is generally unhelpful in improving another. One may think that such results based on _worst-case analysis_ are too pessimistic to be applied in practice; for instance, ILD may be used to enhance dispersion in real data, even though any positive approximation guarantee is impossible. Thus, we _empirically_ analyze the approximation factor for the diversity objectives examined thus far.
### Settings
#### 6.1.1. Datasets
We use two real-world datasets including feedback and genre information and two synthetic datasets.
1. **MovieLens 1M** (_ML-1M_) (Zhou et al., 2017; Zhang et al., 2018): Genre information is associated with each movie; there are 18 genres. We extracted the subset in which users and movies have at least 20 ratings, resulting in 995 thousand ratings on 3,000 movies from 6,000 users.
2. **Amazon Review Data Magazine Subscriptions** (_Amazon_) (Zhou et al., 2017; Zhang et al., 2018): Each product contains categorical information, and there are 165 categories. We extracted the subset in which all users and movies have at least five ratings, resulting in 4,200 reviews of 720 products from 664 users.
3. **Random points in two separated circles** (_TwoCircles_, Figure 2(a)): Consists of 1,000 random points in two circles whose radii are \(\frac{1}{4}\) and whose centers are at \(-\frac{3}{4}\) and \(\frac{3}{4}\).
4. **Random points in an ellipse** (_Ellipse_, Figure 2(b)): Consists of 1,000 random points in an ellipse of flattening \(\frac{3}{4}\).
#### 6.1.2. Distance Metrics
We use two types of distance metrics for real-world datasets.
1. _Implicit feedback_ (_feedback_ for short): Let X be a user-item implicit feedback matrix over \(m\) users and \(n\) items, such that \(X_{u,i}\) is 1 if user \(u\) interacts with item \(i\), and 0 if there is no interaction. We run singular value decomposition on X with dimension \(d\triangleq 32\) to obtain \(\mathbf{X}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\), where \(\mathbf{V}^{\top}=[\mathbf{v}_{1},\ldots,\mathbf{v}_{n}]\in\mathbb{R}^{d\times n}\). The feature vector of item \(i\) is then defined as \(\mathbf{v}_{i}\) and the distance between two items \(i,j\) is given by the Euclidean distance \(d(i,j)\triangleq\|\mathbf{v}_{i}-\mathbf{v}_{j}\|\).
2. _Genre information_ (_genre_ for short): We denote by \(G_{i}\) the set of genres that item \(i\) belongs to. The distance between two items \(i,j\) is given by the Jaccard distance \(d(i,j)\triangleq 1-\frac{|G_{i}\cap G_{j}|}{|G_{i}\cup G_{j}|}\). Multiple items may have the same genre set; i.e., \(d(i,j)=0\) for some \(i\neq j\). For the two synthetic datasets, we simply use the Euclidean distance.
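The two distance constructions can be sketched as follows; this is our own illustration (the variable names and the use of `scipy.sparse.linalg.svds` are assumptions, not taken from the paper):

```python
import numpy as np
from scipy.sparse.linalg import svds

def feedback_distance_matrix(X, d=32):
    """SVD-based item embeddings from a user-item 0/1 matrix X (m x n);
    the item-item distance is the Euclidean distance between embeddings."""
    _, _, Vt = svds(X.astype(float), k=d)
    V = Vt.T                                   # n x d item feature vectors
    return np.linalg.norm(V[:, None, :] - V[None, :, :], axis=-1)

def jaccard_distance(G_i, G_j):
    """Jaccard distance between two genre sets; 0 whenever the sets coincide."""
    union = G_i | G_j
    return 0.0 if not union else 1.0 - len(G_i & G_j) / len(union)
```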
#### 6.1.3. Diversity Enhancement Algorithms
We apply the greedy heuristic (Algorithm 1) to ILD, dispersion, and GILD with the adjusted median. A baseline that returns a random set of items (denoted Random) is implemented. Experiments were conducted on a Linux server with an Intel Xeon 2.20GHz CPU and 62GB RAM. All programs were implemented using Python 3.9.
### Results
We calculate the empirical approximation factor for each pair of diversity objectives f and g as follows. First, we run the greedy
heuristic on f to extract up to 128 items. The empirical approximation factor of f to g is obtained by \(\mathsf{g}(S_{\mathsf{f},k}^{\text{Gr}})/\mathsf{g}(S_{\mathsf{g},k}^{\text{Gr}})\) for each \(k\in[128]\). This factor usually takes a value between 0 and 1 and is simply referred to as the _relative score of f to g_. Unlike the original definition in Eq. (2), we do not use \(\text{OPT}_{\mathsf{g},k}\) because its computation is **NP**-hard. Tables 1 to 6 report the average relative score over \(k=2,\ldots,128\).
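For concreteness, the relative score can be computed as in the sketch below (our own helper; `greedy` and the objective functions are as in the earlier sketches, and the denominator can be 0 for dispersion on genre distances, in which case the ratio is undefined):

```python
def relative_scores(f, g, dist, k_max=128):
    """Empirical approximation factor of f to g: g(S_f^Gr) / g(S_g^Gr) for k = 2..k_max."""
    n = dist.shape[0]
    S_f = greedy(f, n, k_max, dist)   # greedy prefix on f
    S_g = greedy(g, n, k_max, dist)   # greedy prefix on g
    scores = []
    for k in range(2, k_max + 1):
        denom = g(S_g[:k], dist)
        scores.append(g(S_f[:k], dist) / denom if denom > 0 else float("nan"))
    return scores  # the tables report the average over k
```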
than dispersion, as shown in the histogram. This observation can be explained by the GILD mechanism, which takes the sum of the kernel distance over _all_ pairs.
We then examine _TwoCircles_. Figure 7 shows that each diversity objective selects almost the same number of items from each cluster. In particular, the potential drawback of dispersion discussed in Section 4.3, i.e., the imbalance of selected items in the worst case, does not occur empirically.
#### 6.2.3. Investigation of the Effect of \(\sigma\) on GILD
We investigate the empirical effect of the value of \(\sigma\) on the behavior of GILD. Specifically, we examine how GILD interpolates between ILD and dispersion by changing \(\sigma\), as suggested in Theorem 5.1. Setting the value of \(\sigma\) to each of 64 equally-spaced numbers on a log scale from 0.02 to 1, we greedily maximize \(\text{GILD}_{\sigma}\) for _feedback_ on _ML-1M_ to obtain a \(k\)-item set \(S_{\text{GILD}_{\sigma},k}\). We also run the adaptive greedy heuristic, which is oblivious to the value of \(\sigma\), to obtain a \(k\)-item set \(S_{\text{GILD},k}\). Figure 9 plots the values of ILD and dispersion for each obtained set \(S_{\text{GILD}_{\sigma},k}\) of size \(k=16\), 128. The vertical lines correspond to the adjusted minimum \(\sigma^{\min}_{S_{\text{GILD},k}}\), the adjusted median \(\sigma^{\text{med}}_{S_{\text{GILD},k}}\), the minimum \(\min_{i\neq j\in S_{\text{GILD},k}}d(i,j)\), and the median \(\operatorname{median}_{i\neq j\in S_{\text{GILD},k}}d(i,j)\). Horizontal lines correspond to \(\text{ILD}(S_{\text{ILD}}^{\text{Gr}})\approx\text{OPT}_{\text{ILD}}\), \(\text{ILD}(S_{\text{disp}}^{\text{Gr}})\), \(\text{disp}(S_{\text{disp}}^{\text{Gr}})\approx\text{OPT}_{\text{disp}}\), and \(\text{disp}(S_{\text{ILD}}^{\text{Gr}})\). Observe first that ILD is monotonically increasing in \(\sigma\) and approaches \(\text{OPT}_{\text{ILD}}\); disp is approximately decreasing in \(\sigma\) and attains \(\text{OPT}_{\text{disp}}\) for a "moderately small" value of \(\sigma\), which coincides with Theorem 5.1.
Observe also that the degradation of both ILD and disp occurs for small values of \(\sigma\). The reason is that each term \(\exp(-\frac{d(i,j)^{2}}{2\sigma^{2}})\) in GILD becomes extremely small, causing a floating-point rounding error. Setting \(\sigma\) to the minimum or the median results in a dispersion value of \(\text{disp}(S_{\text{ILD}}^{\text{Gr}})\) when \(k=16\); i.e., the obtained set is almost identical to \(S_{\text{ILD}}^{\text{Gr}}\). In contrast, setting \(\sigma=\sigma^{\min}_{S_{\text{GILD},k}}\) yields a set similar to \(S_{\text{disp}}^{\text{Gr}}\); setting \(\sigma=\sigma^{\text{med}}_{S_{\text{GILD},k}}\) yields a set whose dispersion is between \(\text{disp}(S_{\text{disp},k}^{\text{Gr}})\) and \(\text{disp}(S_{\text{ILD},k}^{\text{Gr}})\) and whose ILD is between \(\text{ILD}(S_{\text{ILD},k}^{\text{Gr}})\) and \(\text{ILD}(S_{\text{disp},k}^{\text{Gr}})\). Thus, using the adjusted median and dividing by \(\sqrt{2\log(\binom{k}{2}-1)}\) are crucial for avoiding trivial sets.
### Discussions
We discuss the empirical behavior of ILD, dispersion, and GILD. Arguably, ILD easily selects many items that are similar or even identical to each other. As shown in Figure 6(a), the chosen items are biased toward two distant groups, and items in the middle of the two groups never appear. This is undesirable if we wish to recommend very few items.
Such drawbacks of ILD can be resolved via dispersion. Greedy maximization of dispersion also empirically enhances the ILD value. However, it may overlook distant item pairs, as discussed in Section 6.2.2. We also note that dispersion is not suitable for measuring diversity. As shown in Figure 10, the value of dispersion drops to nearly 0 when selecting a moderate number of items; it does _not_ return to a positive value. Due to this nature, dispersion may not be used to compare large item sets.
The empirical result of GILD implies that ILD and dispersion are not appropriate for improving and/or evaluating distance-based diversity. GILD partially circumvents the issues caused by the extreme behavior of ILD and dispersion, thereby achieving the _sweet spot_ between them. On the one hand, GILD extracts dissimilar items such that the dispersion value does not drop to 0. On the other hand, GILD can select more dissimilar items than dispersion. Similar to dispersion, GILD cannot be used to _compare_ the diversity among distinct sets, as shown in Table 3, which indicates that even Random can have the highest GILD value. This is because GILD with the adjusted median is designed to evaluate the next item to be selected given a _fixed_ set of already-selected items. To sum up,
Figure 6. 128 points (big red circles) of _Ellipse_ selected by greedily maximizing each objective with the Euclidean distance.
Figure 10. Dispersion of items for _genre_ on _ML-1M_.
Figure 7. 128 points (big red circles) of _TwoCircles_ selected by greedily maximizing each objective with the Euclidean distance.
GILD works successfully as an optimization objective interpolating ILD and dispersion and as a tool for analyzing them empirically.
## 7. Diversified Recommendation Results
Having a better understanding of the behavior of diversity objectives from both theoretical (Section 4) and empirical perspectives (Section 6), we incorporate them into the recommendation methods.
### Settings
#### 7.1.1. Dataset
To investigate results produced by a recommendation method using ILD, dispersion, and GILD, we use the _ML-1M_ dataset, the details of which are described in Section 6.1. We extracted the subset in which users and movies have at least 20 and 100 ratings, respectively, resulting in 370 thousand ratings on 2,000 movies from 2,800 users. The obtained subset was further split into training, validation, and test sets in a 60/20/20 ratio according to weak generalization; i.e., they may not be disjoint in terms of users.
#### 7.1.2. Algorithms
We adopt Embarrassingly Shallow AutoEncoder (Eashe et al., 2017) to estimate the predictive score \(\mathsf{rel}_{u}(i)\) for item \(i\) by user \(u\) from a user-item implicit feedback matrix. The model has a hyperparameter for \(L_{2}\)-norm regularization, whose value is tuned using the validation set. We construct a distance metric based on the _implicit feedback_ in Section 6.1 to define ILD, dispersion, and GILD. We then apply the greedy heuristic to a linear combination of relevance and diversity. Specifically, given a set \(S_{\ell-1}\) of already selected \(\ell-1\) items, we select the next item \(i_{\ell}\) that maximizes the following objective:
\[\mathsf{F}_{u,\ell,\lambda}(i)\triangleq(1-\lambda)\cdot\mathsf{rel}_{u}(i)+ \lambda\cdot\{\mathsf{f}(S_{\ell-1}\cup\{i\})-\mathsf{f}(S_{\ell-1})\}, \tag{5}\]
where \(\lambda\in(0,1)\) is a trade-off parameter between relevance and diversity. We run the greedy heuristic for each f, each value of \(\lambda=0,0.1,0.2,\ldots,0.9,0.99,0.999,1\), and each user \(u\) to retrieve a list of \(k\triangleq 50\) items to be recommended to \(u\), denoted \(S_{u,\mathsf{f},\lambda}\). Experiments were conducted in the same environment as described in Section 6.
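The re-ranking step of Eq. (5) can be sketched as follows (our own illustrative code; `rel` is a vector of predicted relevance scores for user u, and `f` is one of the diversity objectives above taking a set of item indices and a distance matrix):

```python
def rerank(rel, f, dist, candidates, k=50, lam=0.5):
    """Greedily build a list maximizing (1 - lam) * rel(i) + lam * marginal gain of f."""
    S, pool = [], list(candidates)
    for _ in range(k):
        def score(i):
            gain = f(S + [i], dist) - f(S, dist)   # marginal diversity gain of adding i
            return (1.0 - lam) * rel[i] + lam * gain
        best = max(pool, key=score)
        S.append(best)
        pool.remove(best)
    return S
```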
#### 7.1.3. Evaluation
We evaluate the accuracy and diversity of the obtained sets as follows. Let \(R_{u}\) denote the set of relevant items to user \(u\) (i.e., those interacting with \(u\)) in the test set. We calculate the _normalized Discounted Cumulative Gain (nDCG)_ by
\[\mathsf{nDCG}(S_{u,\mathsf{f},\lambda};R_{u})\triangleq\Big{(}\sum_{\ell\in[\min\{k,|R_{u}|\}]}\frac{1}{\log_{2}(\ell+1)}\Big{)}^{-1}\cdot\sum_{\ell\in[k]}\frac{[\ell\text{-th ranked item of }S_{u,\mathsf{f},\lambda}\text{ is in }R_{u}]}{\log_{2}(\ell+1)}.\]
We calculate the normalized versions of ILD and dispersion as \(\mathsf{nILD}(S_{u,\mathsf{f},\lambda})\triangleq\frac{\mathsf{ILD}(S_{u,\mathsf{f},\lambda})}{\mathsf{ILD}(S_{u,\mathrm{ILD}}^{\mathrm{Gr}})}\) and \(\mathsf{ndisp}(S_{u,\mathsf{f},\lambda})\triangleq\frac{\mathsf{disp}(S_{u,\mathsf{f},\lambda})}{\mathsf{disp}(S_{u,\mathrm{disp}}^{\mathrm{Gr}})}\), respectively, where \(S_{u,\mathsf{f}}^{\mathrm{Gr}}\) is the set of \(k\) items obtained by greedily maximizing f on the set of items that do not appear in the training or validation set. We then take the mean of nDCG, nILD, and ndisp over all users.
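The evaluation metrics can be computed as in the following sketch (our own implementation of the formulas above; `ranked` is the recommendation list in rank order and `relevant` the user's test-set items):

```python
import numpy as np

def ndcg_at_k(ranked, relevant, k):
    """nDCG: DCG of the ranked list divided by the ideal DCG for this user."""
    relevant = set(relevant)
    dcg = sum(1.0 / np.log2(r + 2) for r, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / np.log2(r + 2) for r in range(min(k, len(relevant))))
    return dcg / ideal if ideal > 0 else 0.0

def normalized(metric, S, S_greedy, dist):
    """nILD / ndisp: metric of S divided by the metric of the greedy maximizer S_greedy."""
    return metric(S, dist) / metric(S_greedy, dist)
```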
### Results
Figure 11 shows the relation between each pair of nDCG, nILD, and ndisp. First, we observe a clear trade-off between relevance and diversity with regard to \(\lambda\). In particular, when diversity is not introduced into the objective (i.e., \(\lambda=0\)), the mean ndisp is 0, which implies that for most users, two or more of the selected items have the same genre set. As shown in Section 6, incorporating ILD does not avoid the case of ndisp = 0. In contrast, dispersion and GILD with a moderate value of \(\lambda\) enhance nILD and ndisp without substantially sacrificing accuracy. Comparing dispersion and GILD, we observe that GILD achieves a slightly higher nILD than dispersion: when the mean nDCG is close to 0.25, the means of nILD for GILD and dispersion are 0.966 and 0.948, respectively, and the means of ndisp are 0.987 and 0.992, respectively.
Although dispersion and GILD have a similar trade-off for the high-relevance case (i.e., mean nDCG \(\geq 0.4\)), which is often a realistic situation, they produce different results at the _individual level_. To this end, we select \(\lambda\) such that they are nearly identical _on average_. Specifically, we choose \(\lambda=0.2\) for dispersion and \(\lambda=0.7\) for GILD, for which the means of nDCG, nILD and ndisp are respectively 0.457, 0.870 and 0.009 for dispersion, whereas those are respectively 0.445, 0.877 and 0.001 for GILD. The left figure in Figure 12 plots the nDCG of \(S_{u,\mathrm{disp},0.2}\) and \(S_{u,\mathrm{GILD},0.7}\) for each user \(u\). Observe that
Figure 11. Relation between each pair of nDCG, nILD, and ndisp with regard to a trade-off parameter \(\lambda\).
Figure 12. Comparison of dispersion and GILD in terms of nDCG and nILD.
dispersion and GILD show a similar trend; the standard deviation of nDCG is 0.161 for dispersion and 0.160 for GILD. In contrast, as shown in the right figure in Figure 12, dispersion often has a smaller nILD than GILD. Furthermore, the standard deviation of nILD for dispersion (0.051) is larger than that for GILD (0.038). This difference is possibly due to the potential drawback of dispersion (see Section 4.3): since the value of dispersion for most users becomes 0 at a particular iteration of the greedy heuristic, the objective \(\mathsf{F}_{u,\mathrm{disp},0.2}(i)\) in Eq. (5) reduces to \(0.8\cdot\mathsf{rel}_{u}(i)\) in the subsequent iterations; i.e., the greedy heuristic only selects the item with the highest relevance. Consequently, dispersion fails to diversify some users' recommendation results, which is not the case for GILD. In summary, ILD and dispersion are not an appropriate choice as a diversity objective to be optimized in diversified recommendation.
## 8. Conclusions
To investigate the behavior of two common diversity objectives, ILD and dispersion, we performed a comparative analysis. Our results revealed the drawbacks of both: ILD selects _duplicate_ items, while dispersion may overlook _distant_ item pairs. To analyze these drawbacks empirically, we designed Gaussian ILD (GILD) as an interpolation between ILD and dispersion. In the personalized recommendation setting, we demonstrated that neither ILD nor dispersion is consistently successful in enhancing diversity at the individual level. As future work, we plan to develop an evaluation measure of diversity in lieu of ILD and dispersion.
## Appendix A Omitted Proofs in Sections 4 and 5
Proof of Theorem 4.2.: The first guarantee is immediate from \(\text{OPT}_{\text{ILD}}\leq D\) and \(\text{ILD}(S^{\ast}_{\text{disp}})\geq d^{\ast}_{k}\). Similarly, we have \(\text{ILD}(S^{\text{Gr}}_{\text{disp}})\geq\text{disp}(S^{\text{Gr}}_{\text{disp}})\geq\frac{d^{\ast}_{k}}{2}\) due to the \(\frac{1}{2}\)-approximation guarantee of the greedy heuristic (Srivastava et al., 2017). Let \(i_{\ell}\in S^{\text{Gr}}_{\text{disp}}\) denote the \(\ell\)-th item selected by the greedy heuristic on disp. Since \(i_{2}\) is the farthest item from \(i_{1}\), we have \(d(i_{1},i_{2})\geq\frac{D}{2}\). By the triangle inequality of \(d\), we have \(d(i_{1},i_{\ell})+d(i_{\ell},i_{2})\geq d(i_{1},i_{2})\) for all \(\ell\geq 3\). Thus,
\[\text{ILD}(S^{\text{Gr}}_{\text{disp}}) \geq\binom{k}{2}^{-1}\left[d(i_{1},i_{2})+\sum_{3\leq\ell\leq k}\bigl(d(i_{1},i_{\ell})+d(i_{\ell},i_{2})\bigr)\right]\geq\binom{k}{2}^{-1}\frac{D}{2}(k-1)=\frac{D}{k},\]
implying that \(\frac{\text{ILD}(S^{\text{Gr}}_{\text{disp}})}{\text{OPT}_{\text{ILD}}}\geq \frac{1}{k}\).
Proof of Claim 4.3.: Let \(n\) be a multiple of 4 and \(\epsilon>0\) a small number. Construct \(n\) vectors in \(\mathbb{R}_{+}^{n+2}\), denoted \(\mathbf{X}\triangleq\{\mathbf{x}_{1},\ldots,\mathbf{x}_{\frac{n}{2}}\}\) and \(\mathbf{Y}\triangleq\{\mathbf{y}_{1},\ldots,\mathbf{y}_{\frac{n}{2}}\}\), each entry of which is defined as:
\[x_{i}(j)\triangleq\begin{cases}\frac{\epsilon}{\sqrt{2}}&\text{if }j=i,\\ \sqrt{\frac{1-\epsilon^{2}}{2}}&\text{if }j=n+1,\\ 0&\text{otherwise,}\end{cases}\qquad\text{and}\qquad y_{i}(j)\triangleq\begin{cases}\frac{\epsilon}{\sqrt{2}}&\text{if }j=i+\frac{n}{2},\\ \sqrt{\frac{1-\epsilon^{2}}{2}}&\text{if }j=n+2,\\ 0&\text{otherwise.}\end{cases}\]
Observe that \(\|\mathbf{x}_{i}-\mathbf{x}_{j}\|=\|\mathbf{y}_{i}-\mathbf{y}_{j}\|=\epsilon\) for all \(i\neq j\in\left[\frac{n}{2}\right]\), \(\|\mathbf{x}_{i}-\mathbf{y}_{j}\|=1\) for all \(i,j\in\left[\frac{n}{2}\right]\), and thus \(D=1\). Consider selecting \(k\triangleq\frac{n}{2}\) vectors from \(\mathbf{X}\cup\mathbf{Y}\) so that ILD or dispersion is maximized. Clearly, \(\text{OPT}_{\text{ILD}}\) is \(\binom{k}{2}^{-1}\left(\left(\frac{k}{2}\right)^{2}+2\binom{k/2}{2}\epsilon\right)=\Theta(1)\), which is attained when we select \(\frac{k}{2}\) vectors each from \(\mathbf{X}\) and \(\mathbf{Y}\). By contrast, _any_ set of \(k\) items has the same value of dispersion, i.e., \(d^{\ast}_{k}=\epsilon\). Hence, we may have \(S^{\ast}_{\text{disp}}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{k}\}\) in the worst case, where \(\text{ILD}(S^{\ast}_{\text{disp}})=\epsilon\). Consequently, it holds that \(\frac{\text{ILD}(S^{\ast}_{\text{disp}})}{\text{OPT}_{\text{ILD}}}=\mathcal{O}(\epsilon)=\mathcal{O}\left(\frac{d^{\ast}_{k}}{D}\right)\). When we run the greedy heuristic on dispersion, we can assume that the first selected item is \(\mathbf{x}_{1}\) without loss of generality. Then, we would have selected \(\mathbf{y}_{i}\) for some \(i\) as the second item. In the remaining iterations, we may select \(k-2\) vectors all from \(\mathbf{X}\) in the worst case, resulting in \(\frac{\text{ILD}(S^{\text{Gr}}_{\text{disp}})}{\text{OPT}_{\text{ILD}}}=\frac{1}{\Theta(1)}\binom{k}{2}^{-1}\left((k-1)+\binom{k-1}{2}\epsilon\right)=\mathcal{O}\left(\frac{1}{k}+\frac{d^{\ast}_{k}}{D}\right)\).
Proof of Claim 4.4.: Let \(n\) be an even number at least 4. Construct \(2n-2\) vectors in \(\mathbb{R}_{+}\), denoted \(\mathbf{X}=\{\underbrace{1,\ldots,1}_{n/2\text{ times}}\}\), \(\mathbf{Y}=\{\underbrace{n,\ldots,n}_{n/2\text{ times}}\}\), and \(\mathbf{Z}=\{2,3,\ldots,n-1\}\). Selecting \(k\triangleq n\) vectors from \(\mathbf{X}\cup\mathbf{Y}\cup\mathbf{Z}\) so that the ILD value is maximized, we have \(S^{\ast}_{\text{ILD}}=\mathbf{X}\cup\mathbf{Y}\). It is easy to see that the greedy heuristic also selects at least two vectors from either \(\mathbf{X}\) or \(\mathbf{Y}\). Therefore, \(\text{disp}(S^{\ast}_{\text{ILD}})=\text{disp}(S^{\text{Gr}}_{\text{ILD}})=0\). By contrast, the optimal dispersion is \(\text{OPT}_{\text{disp}}=1\), attained when we select \(\{1,2,\ldots,n\}\).
Proof of Theorem 5.1.: Let \(S\triangleq[n]\). We first calculate the limit of \(\text{GILD}_{\sigma}(S)\) as \(\sigma\rightarrow\infty\). Define \(\epsilon_{\sigma}\triangleq\max_{i\neq j\in S}\frac{d(i,j)}{\sigma}\). Using the Taylor expansion \(\exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right)=1-\frac{x^{2}}{2\sigma^{2}}+\mathcal{O}\left(\frac{x^{4}}{\sigma^{4}}\right)\), we derive
\[\text{GILD}_{\sigma}(S) =\binom{n}{2}^{-1}\sum_{i\neq j\in S}\sqrt{\frac{d(i,j)^{2}}{ \sigma^{2}}+\mathcal{O}\left(\frac{d(i,j)^{4}}{\sigma^{4}}\right)}\] \[=\frac{\sqrt{1+\mathcal{O}(\epsilon_{\sigma}^{2})}}{\sigma}\cdot \binom{n}{2}^{-1}\sum_{i\neq j\in S}d(i,j).\]
Observing that \(\lim_{\sigma\rightarrow\infty}\epsilon_{\sigma}=0\), we have \(\lim_{\sigma\rightarrow\infty}\frac{\text{GILD}_{\sigma}(S)}{\frac{1}{\sigma}\text{ILD}(S)}=1\), completing the proof of the first statement.
We next calculate the limit of \(\text{GILD}_{\sigma}(S)\) as \(\sigma\to 0\). Define \(\delta\triangleq(\min_{i\neq j\in S,d(i,j)>\text{disp}(S)}d(i,j))-\text{disp}(S)\). Note that no pair of items \((i,j)\) satisfies \(\text{disp}(S)<d(i,j)<\text{disp}(S)+\delta\). Then define \(\epsilon_{\sigma}\triangleq\exp\left(-\frac{(\text{disp}(S)+\delta)^{2}-\text{disp}(S)^{2}}{2\sigma^{2}}\right)\). Observe that for any pair \((i,j)\),
\[\exp\left(-\frac{d(i,j)^{2}}{2\sigma^{2}}\right)\text{ is }\begin{cases}\exp\left(-\frac{\text{disp}(S)^{2}}{2\sigma^{2}}\right)&\text{if }d(i,j)=\text{disp}(S),\\ \text{at most }\epsilon_{\sigma}\cdot\exp\left(-\frac{\text{disp}(S)^{2}}{2\sigma^{2}}\right)&\text{otherwise.}\end{cases}\]
|
2303.11340
|
HDformer: A Higher Dimensional Transformer for Diabetes Detection
Utilizing Long Range Vascular Signals
|
Diabetes mellitus is a global concern, and early detection can prevent
serious complications. 50% of people with diabetes live undiagnosed,
disproportionately afflicting low-income groups. Non-invasive methods have
emerged for timely detection; however, their limited accuracy constrains
clinical usage. In this research, we present a novel Higher-Dimensional
Transformer (HDformer), the first Transformer-based architecture which utilizes
long-range photoplethysmography (PPG) to detect diabetes. The long-range PPG
maximizes the signal contextual information when compared to the less-than 30
second signals commonly used in existing research. To increase the
computational efficiency of HDformer long-range processing, a new attention
module, Time Square Attention (TSA), is invented to reduce the volume of tokens
by more than 10x, while retaining the local/global dependencies. TSA converts
the 1D inputs into 2D representations, grouping the adjacent points into a
single 2D token. It then generates dynamic patches and feeds them into a gated
mixture-of-experts (MoE) network, optimizing the learning on different
attention areas. HDformer achieves state-of-the-art results (sensitivity 98.4,
accuracy 97.3, specificity 92.8, AUC 0.929) on the standard MIMIC-III dataset,
surpassing existing research. Furthermore, we develop an end-to-end solution
where a low-cost wearable is prototyped to connect with the HDformer in the
Cloud via a mobile app. This scalable, convenient, and affordable approach
provides instantaneous detection and continuous monitoring for individuals. It
aids doctors in easily screening for diabetes and safeguards underprivileged
communities. The enhanced versatility of HDformer allows for efficient
processing and learning of long-range signals in general one-dimensional
time-series sequences, particularly for all biomedical waveforms.
|
Ella Lan
|
2023-03-17T14:11:14Z
|
http://arxiv.org/abs/2303.11340v2
|
HDformer: A Higher Dimensional Transformer for Diabetes Detection Utilizing Long Range Vascular Signals
###### Abstract
Diabetes mellitus is a worldwide concern, and early detection can help to prevent serious complications. Low-cost, non-invasive detection methods, which feed cardiovascular signals into deep learning models, have emerged. However, limited accuracy constrains their clinical usage. In this paper, we present a new Transformer-based architecture, Higher Dimensional Transformer (HDformer), which takes long-range photoplethysmography (PPG) signals to detect diabetes. The long-range PPG contains broader and deeper signal contextual information compared to the less-than-one-minute PPG signals commonly utilized in existing research. To increase the capability and efficiency of processing the long-range data, we propose a new attention module, Time Square Attention (TSA), reducing the volume of tokens by more than 10x while retaining the local/global dependencies. It converts the 1-dimensional inputs into 2-dimensional representations and groups adjacent points into a single 2D token, using 2D Transformer models as the backbone of the encoder. It feeds dynamic patch sizes into a gated mixture-of-experts (MoE) network as the decoder, which optimizes the learning on different attention areas. Extensive experiments show that HDformer achieves state-of-the-art performance (sensitivity 98.4, accuracy 97.3, specificity 92.8, and AUC 0.929) on the standard MIMIC-III dataset, surpassing existing studies. This work is the first to use long-range, non-invasive PPG signals with a Transformer for diabetes detection, achieving a more scalable and convenient solution compared to traditional invasive approaches. The proposed HDformer can also be scaled to analyze general long-range biomedical waveforms. A wearable prototype finger-ring is designed as a proof of concept.
## 1 Introduction
Diabetes mellitus is a clinical condition that results in a high amount of glucose in the blood due to the lack of insulin in the body - otherwise known as insulin resistance [11]. Diabetes can raise the risk of diseases, affecting nearly every organ system: coronary heart disease, kidney failure, blindness, stroke, etc. According to the World Health Organization (WHO), roughly 422 million people have been diagnosed with diabetes. Its mortality rate is also increasing each year. It accounts for 1.5 million deaths each year, making diabetes the 7th global leading cause of mortality. These statistics disproportionately affect those in lower-income communities.
Diabetes is often referred to as the "silent killer" and is commonly overlooked until its later progression into more critical stages. The lack of early detection results in cases where diabetic patients are not treated until later stages when the patient's blood sugar is uncontrollable and acutely above the standard. According to data from the International Diabetes Federation, almost 50% of people with diabetes are unaware of their diagnosis and its risks to their health, hence leaving the disease untreated.
To minimize side effects and worsening of the disease, actions such as early prevention, drug treatments, and changes in lifestyle are essential. Effective prevention through regular monitoring of the blood glucose levels (BGL) is necessary for diabetes management. Currently, only invasive methods (blood glucose laboratory tests and glucometers) are commercially available for accurate monitoring of BGL, including the fasting plasma glucose (FPG) test and the hemoglobin A1C (HbA1c) test. However, such treatments are expensive, time consuming, painful for patients, and ultimately, are unable to measure long-term progression. To overcome these limitations, research on non-invasive methods has emerged. A continuous, non-invasive, painless, easy-to-operate, and low-cost solution can help improve patient compliance with routine blood glucose monitoring, potentially leading to an early detection of diabetes. The development of such technologies capable of detecting the onset of diabetes can lead to large-scale prevention. However, the accuracy and general applicability of these non-invasive approaches have not proven to be competitive with current invasive methods. This paper aims to address this gap.
Photoplethysmography (PPG) is an optically obtained signal that can be used to detect blood volume changes in the microvascular bed of tissues; it can be used to extract key information such as blood oxygen saturation, heart rate,
blood pressure, cardiac output, cardiac respiration, arterial aging, endothelial function, microvascular blood flow, and autonomic function [6].
Diabetes is associated with vascular changes, and PPG is often used in blood glucose estimation and diabetes detection due to its cost-effectiveness and simplicity of use. A decrease in heart rate variability (HRV) is associated with diabetes, resulting from the harmful effects of altered glucose metabolism, which leads to cardiac autonomic neuropathy. Resting HR largely increases in diabetes, implicating mechanisms ranging from metabolism to endothelial aging. Endothelial dysfunction, an early hallmark of diabetic vascular disease, is reflected in the PPG waveform. Diabetes increases arterial stiffness, which can be reflected in the SDPTG (second derivative of PPG). The increase of blood viscosity and the modification of heart polarization and depolarization present in diabetic groups also change the PPG wave shape.
The utilization of deep learning can further enhance PPG usability in clinics.
To address the accuracy gap, we take long-range PPG waveforms (10+ minutes) as the input, compared to the less-than-one-minute inputs common in existing research. The long-range vascular signals (PPG) contain richer features for diabetes classification. In this study, we propose a Higher Dimensional Transformer (HDformer), capturing the global representation and long-distance feature dependencies among the waveforms via attention modules. A new Time Square Attention (TSA) is created to aggregate 1-dimensional dependencies from 2-dimensional representations. The proposed model achieves SOTA results on the standard MIMIC-III dataset.
The contributions of this paper include:
* A scalable, non-invasive approach to take long range vascular signals (PPG) for diabetes detection for the first time, achieving SOTA results.
* A Transformer-based deep learning architecture HDformer to perform long range biomedical waveforms processing.
* A proposed attention module TSA to capture 1-dimensional dependencies from 2-dimensional representations, adaptable to input into existing 2D Transformer models, while applying a gated network of mixture-of-experts for the dynamic patch size of each 2D shape.
* A deep learning-based, in-depth, long-range data analysis on the blood volume changes (measured from PPG) and blood glucose estimation (indicator of diabetes), for diabetes detection.
* A general Transformer based framework capable of time-series learning and prediction for 1-dimensional long-range sequences.
Figure 1: HDformer Architecture.
## 2 Related Work
### Photoplethysmography (PPG) and Electrocardiography (ECG)
PPG and ECG are commonly used digital biomarkers for cardiovascular disease (CVD) analysis. Since both PPG and ECG are measured by non-invasive methods, they have recently been used in blood glucose estimation and diabetes detection through machine learning approaches. One of the first studies in this area was by [12], who used the inverse Fourier transform (IFT) to extract features to feed into several machine learning models; [7] identified features related to diabetes from PPG and established the feasibility of prediction with its linear discriminant analysis (LDA); [14] developed logistic regression (LR) modeling to use PPG to perform the classification of diabetes. However, to obtain reliable results, these methods required an abundant amount of attention on dataset processing for the feature extraction. Additionally, each study collected their own datasets which causes a lack of result standardization. These limits make the traditional machine learning methods challenging to scale to broader usage. The recent rise of deep learning led to the application of convolutional neural networks for PPG prediction of diabetes. [1] utilized smartphone-based PPG signals and CNN to achieve an area under the curve (AUC) of 0.75 for diabetes prediction. [13] presented a reconfigurable deep learning framework, combining CNN and the inherent capabilities of PPG feature extraction. [17] and [15] proposed two-dimensional CNN models, one taking ECG and another taking PPG, in combination with age, gender, and the presence of hypertension. However, their training required larger datasets to be generated by themselves. Because of the nature of locality-sensitive CNN, the accuracy of these models are also limited in the range of 70% to 80%, lower than those from the feature-extraction-based machine learning models. In our research, we choose PPG over ECG due to its increased ability to be measured continually, and we pick Transformers over CNN to apply its capability of capturing the global contextual information and long-range dependencies, for the diabetes classification.
### Long Range Transformers
Although Transformer [16] originated in the world of Natural Language Processing (NLP), it has also become prevalent in the field of computer vision (CV), surpassing many CNN-based models in performing tasks such as image classification and segmentation [5][3]. Much of Transformer's success comes from its self-attention mechanism, which not only simplifies the architectural complexity by removing convolutions entirely, but also allows models to capture the global contextual information for both short-range and long-range relationships. Recently, studies have shown the potential of improving architectures like the Transformer to increase the prediction capacity to be more adaptable to analyzing longer range data, through optimizing key components such as its self-attention mechanism and its memory usage efficiency. These proposals, built on top of the vanilla Transformer, include the memory-optimization-based LongFormer [2], lower-dimensional representation-based LinFormer [18], recurrence-based Transformer XL [4], down-sampling-based Informer [19], and learnable patterns-based Reformer [9], etc. In this study, we propose a new Transformer architecture HDformer, to process 1-dimensional PPG waveforms into 2-dimensional representation via our attention model TSA, optimizing model efficiency while retaining the key information of the signals.
## 3 Methods
### Long Range Vascular Signals
Recent deep learning-based research that takes vascular signals for diabetes detection has shown promising results, but there is room to improve its accuracy. PPG is an optical (non-invasive) method for measuring blood volume changes at the surface of the skin, and it can be easily measured by healthcare wearables over a continuous time window, e.g., hourly or even daily. Heart rate variability (HRV), which is usually measured from PPG and has shown a correlation with the glucose level, is suggested to be analyzed over a window of at least 5 minutes. Long-range PPG helps to provide a complete picture of HRV. Its long-range data collection can contain more long-distance features, which are missed in a short duration of PPG or ECG. Most existing research only takes 5 to 20 seconds of PPG/ECG signals as the input for CNN (and LSTM) models. Our Transformer-based method is capable of taking 10+ minutes of PPG signals as the input. In order to capture the long-distance features, we propose a new Transformer, the Higher Dimensional Transformer (HDformer), to process long-range PPG waveforms for diabetes classification.
Figure 2: Various Attentions Comparison and TSA.
### Overall Design
HDformer is an encoder/decoder structure. First, the raw PPG signals are de-noised and normalized in a pre-processing module. After the standard segmentation, each sequence represents a 10-minute PPG waveform. Then, a patch partition operation creates patches of the PPG waveforms, which are then assembled into a 2-dimensional waveform representation (more details in the following section). Each group of Transformer encoders, containing Time Square Attention (TSA), processes these 2-dimensional representations by applying existing 2D Transformer algorithms (e.g., ViT [5] or Swin [10]) and performs its own classification. Finally, the results from these models (experts) are fed into a gated mixture-of-experts network as the decoder to perform the final diabetes detection, as illustrated in Figure 1.
### Time Square Attention
While much of the successes of Transformer rely on its self-attention module, its computational complexity and memory usage grow quadratically along with the length of the sequence. Understandably, the increased length from a 20-second to a 10-minute PPG waveform segment makes it inefficient and infeasible for the standard Transformer to process this long range data. Hence, TSA is proposed to handle the PPG waveforms as a 2-dimensional representation, rather than the 1-dimensional sequence. Concretely, we create 2D representation by partitioning the 1D waveform into patches and then constructing these patches into 2D data, inspired by the fact that the PPG waveforms contain the repeating patterns. To address the limitation of self-attention on long range data, various attention models are illustrated and compared in Figure 2.
Figure 2A shows the vanilla Full Attention, in which one token is calculated against every other token in the sequence, with the maximal dependency & computational complexity. Figure 2B presents Sparsity Attention, chunking input sequences into blocks to reduce token size and computational complexity. It represents an existing effort to apply the block patterns of fixed strides to sparsify the attention matrix. Figure 2C describes Time-based Sparsity Attention, in which the frequency of the token is defined by timing, and more weight is assigned to closer tokens than tokens farther away.
Figure 3: Concept of Dynamic Patch Sizes in TSA.
Figure 2D depicts a fixed-patch aggregation along a new dimension Y to compose a 2D representation of the PPG waveforms. The existing dimension X carries each time-sequence wave with a patch width of T, and Y stacks these fixed-width patches to compose the 2D representation. Since the second dimension Y is also time-based, we name this Time Square Attention (TSA). Each token in TSA is a square (2D) formed by adjacent points, with extended coverage 2 * 2, 3 * 3, 4 * 4, 5 * 5, etc., as depicted on the left. This approach effectively reduces the number of tokens: for example, a 10-minute PPG waveform at a frequency of 128 Hz includes roughly 77K sampled points, and TSA effectively tokenizes those points so that the Transformer can analyze sequences of such length while retaining both local representation and global contextual information, connecting points over both short and long distances via 2D tokenization, and calculating the relationship of each token to every other token along the X and Y dimensions.
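As we read it, the core tokenization step of TSA amounts to folding the 1D sequence into a 2D grid and cutting it into square tokens; the following is a hedged PyTorch sketch (tensor shapes, names, and the exact folding scheme are our assumptions, not the paper's released code):

```python
import torch

def tsa_tokens(ppg, row_width=1024, token_size=5):
    """Fold a 1D PPG batch (batch, length) into a 2D grid whose rows hold `row_width`
    consecutive samples, then cut the grid into token_size x token_size square tokens."""
    b, length = ppg.shape
    rows = length // row_width
    grid = ppg[:, :rows * row_width].reshape(b, 1, rows, row_width)   # (b, 1, H, W)
    tokens = torch.nn.functional.unfold(grid, kernel_size=token_size, stride=token_size)
    return tokens.transpose(1, 2)   # (b, num_tokens, token_size * token_size)

# 10 minutes of PPG at 128 Hz: 76,800 samples collapse to 3,060 square tokens here.
ppg = torch.randn(4, 10 * 60 * 128)
print(tsa_tokens(ppg).shape)   # torch.Size([4, 3060, 25])
```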
### Dynamic Patch Sizes in 2D Transformer
To optimize the shape of the patches and to learn the best performing patch size of TSA, we explore a series of patch sizes to generate a group of dynamically 2D representations in different shapes, as displayed in Figure 3.
Since each 2-dimensional patch can be processed as a 2D tensor representation, we simply apply the existing 2D Transformer algorithms to perform the image classification training. As a result, either the classic ViT or hierarchical Swin is capable of capturing both the local and global dependencies within 2-dimensional representations, as presented in Figure 4.
A detailed approach on the generation of different sized 2D representations is explained in Algorithm 1. In our experiment, we mark T as 1024 points, representing 8 seconds of PPG waveforms.
### A Gated Network of Mixture-of-Experts (MoE)
To learn the optimal patch size from the dynamic patches in TSA, we deploy the hierarchical structures of the patches in dynamic sizes and propose a gated mixture-of-experts (MoE) network, as demonstrated in Figure 1. A group of 2D representations in different shapes is computed in each TSA; each is connected to MLP layers for the diabetes classifier, generating a likelihood estimation score via softmax. Each expert contributes to the MoE with a weight computed during MoE learning, producing the final diabetes detection from these models. In our study, we take the configuration of 5 TSA modules, with the dynamic patch sizes T, 2T, 4T, T/2, and T/4.
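The gated MoE decoder can be read as a softmax-weighted combination of the per-expert classifications; the following is a simplified PyTorch sketch of that idea (layer sizes and the gating input are our assumptions, and the actual HDformer decoder may differ):

```python
import torch
import torch.nn as nn

class GatedMoEHead(nn.Module):
    """Combine per-expert features (one expert per TSA patch size) into one diabetes logit."""
    def __init__(self, num_experts=5, expert_dim=256):
        super().__init__()
        self.gate = nn.Linear(num_experts * expert_dim, num_experts)
        self.classifiers = nn.ModuleList(nn.Linear(expert_dim, 1) for _ in range(num_experts))

    def forward(self, expert_features):
        # expert_features: (batch, num_experts, expert_dim)
        b, e, d = expert_features.shape
        weights = torch.softmax(self.gate(expert_features.reshape(b, e * d)), dim=-1)   # (b, e)
        logits = torch.cat([clf(expert_features[:, i]) for i, clf in enumerate(self.classifiers)], dim=-1)
        return (weights * logits).sum(dim=-1)   # (b,) final prediction logit

head = GatedMoEHead()
print(head(torch.randn(8, 5, 256)).shape)   # torch.Size([8])
```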
## 4 Experiments
### Datasets
We take the public dataset MIMIC-III [8]. It is a large, single-centered database covering 38,597 distinct adult patients admitted to critical care units at a large tertiary care hospital. Data includes vital signs (like PPG and ECG), medications, laboratory measurements, procedure codes,
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Approaches & Sensitivity & Accuracy & Specificity & AUC \\ \hline \hline Avram [1] & 75.0 & 76.7 & 65.5 & 0.830 \\ Wang [17] & 80.8 & 77.8 & 77.5 & 0.830 \\ Srinivasan [15] & 76.7 & 76.3 & 76.1 & 0.830 \\ FPG[15] & 79.0 & - & 82.8 & 0.890 \\ HbA1c[15] & 86.3 & - & 75.8 & 0.859 \\ HDformer & **98.4** & **97.3** & **92.8** & **0.929** \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of Our Results with Related Work
Figure 4: TSA Dynamic Patch Size Design.
diagnostic codes (ICD9 code starting with 250 labeled as diabetic patients), imaging reports, and more. One of the major reasons for choosing MIMIC-III is to evaluate our model in a standard comparison, rather than in a self-collected private dataset. All the PPG waveforms are re-sampled at 128Hz, and the regular de-noising and normalization are performed as part of the pre-processing.
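The pre-processing can be sketched as resampling plus band-pass filtering and per-segment normalization; the cut-off frequencies, the 125 Hz source rate, and the SciPy calls below are our assumptions rather than details given in the paper:

```python
import numpy as np
from scipy.signal import resample_poly, butter, filtfilt

def preprocess_ppg(segment, fs_in=125, fs_out=128, low=0.5, high=8.0):
    """Resample a raw PPG segment to 128 Hz, band-pass filter out drift and
    high-frequency noise, and z-score normalize the result."""
    x = resample_poly(segment, fs_out, fs_in)
    b, a = butter(2, [low / (fs_out / 2), high / (fs_out / 2)], btype="band")
    x = filtfilt(b, a, x)
    return (x - x.mean()) / (x.std() + 1e-8)

# A 10-minute raw segment becomes 76,800 samples at 128 Hz.
print(preprocess_ppg(np.random.randn(10 * 60 * 125)).shape)   # (76800,)
```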
### Discussions
HDformer is implemented in PyTorch, and the model was trained on AWS instances with an NVIDIA A10G GPU. We performed the evaluation by generating the confusion matrix at both the patient level and the record level, as shown in Figure 5. The model performs with an accuracy higher than 95%, substantially outperforming previous research.
As explained in Table 1, HDformer has achieved the SOTA results on MIMIC III, compared with the related work, when evaluated on the metrics of sensitivity, accuracy, specificity, and AUC. In addition to exceeding the existing deep-learning-based, non-invasive approaches, HDformer also achieves higher performance than current clinically-used invasive approaches (FPG and HbA1c).
The experiments suggest the effectiveness of HDformer and TSA through their novel design. This solution is capable of analyzing long range PPG signals to perform the final diabetes classification, achieving promising results both in terms of accuracy as well as through optimization metrics like time, memory, and computation by using (1) the proposed TSA self-attention to aggregate a new dimension to compose 2D representations of the 1D PPG waveforms and (2) the gated MoE layer to concatenate expert predictions via the gathered information regarding context and relationships among waveforms in the dynamic patch sizes.
### TSA in Depth
A sample 2-dimensional PPG representation from our experiments is illustrated in Figure 6. The 1D PPG can be split into a 2D image as in Figure 6(A), while our design in TSA represents the raw PPG values in a 2-dimensional tensor as in Figure 6(B). The main benefit of such a design is to reduce the total volume of tokens to process from a 2D image to a 2D tensor, which can be visualized as in Figure 6(C). The TSA 2D representation allows the Transformer to capture long-distance relationships along a new time-series dimension (the Y axis, in addition to the X axis) without introducing additional tokens (pixels in a 2D image). On the other hand, Vision Transformers (ViT or Swin) are not capable of processing a standard 2D image containing 10 minutes of PPG data because of the computational complexity.
We experiment with different PPG inputs with different model parameter configurations, shown in Table 2. The 2D
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Attentions & Sensitivity & Accuracy & Specificity & AUC \\ \hline \hline
1D 1T & 75.8 & 72.5 & 68.9 & 0.791 \\ TSA 1T & 86.8 & 85.6 & 82.9 & 0.879 \\ TSA T/4 & 78.9 & 77.5 & 75.2 & 0.835 \\ TSA T/2 & 81.8 & 79.9 & 78.0 & 0.858 \\ TSA 2T & 87.5 & 86.2 & 84.5 & 0.890 \\ TSA 4T & 89.6 & 89.1 & 87.8 & 0.895 \\ TSA with MoE & **98.4** & **97.3** & **92.8** & **0.929** \\ \hline \end{tabular}
\end{table}
Table 2: TSA Configuration Comparison
Figure 5: Confusion Matrix.
Figure 6: TSA 2-Dimensional Representation.
representation in TSA helps to generally achieve better results than the original 1D waveform in the standard self-attention mechanism. It is interesting to find that the larger size patches (2T and 4T) lead to higher performances than the smaller sizes (T/2 and T/4). The additional ensemble network from the gated MoE also yields a considerable enhancement on the model performance.
### Ablation Study
To deeply understand the effects of the different sizes of TSA and the MoE, we also perform an ablation study on the different model parameter configurations.
To evaluate the length of the long-range PPG, a sensitivity analysis of different PPG signal lengths on diabetes detection is presented in Table 3. For 1D sequences, performance metrics improve when the signal length increases from 8s to 30s, as a result of adding more features and extending long-distance dependencies into training. However, further increasing the length to 60s dilutes the performance, as the computational capacity is overloaded. TSA, which processes the 1D waveform via a 2D representation, significantly reduces the number of tokens without compromising long-term and short-term dependencies. The increase of signal length then consistently improves the performance, demonstrating the value of long-range PPG when TSA is used to optimize the computational capacity. We also compare TSA with 2D image-based representations, which convert 1-dimensional PPG data into 2-dimensional images; as illustrated in Table 4, 2D image-based representations reduce the efficiency of long-range data processing by introducing more tokens (pixels) than the 1D PPG.
To evaluate the different Vision Transformer algorithms on the 2-dimensional TSA representations, we also compare the results between the standard ViT and the hierarchical Swin Transformer, as illustrated in Table 5. While ViT performs with high accuracy, Swin Transformer achieves better results. It is caused by the hierarchical structure of Swin which captures the longer distance dependencies of the 2-dimensional PPG with different window sizes.
### Medical Applications
With its efficiency in long-range PPG processing, HDformer presents the potential to detect diabetes in a non-invasive, scalable way that is low-cost and easy to measure for continuous monitoring. A PPG-based finger-ring wearable prototype, as presented in Figure 7, is being
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Attentions & Sensitivity & Accuracy & Specificity & AUC \\ \hline \hline
2D Image 8s & 61.8 & 59.9 & 57.8 & 0.678 \\
2D Image 30s & 65.9 & 62.8 & 59.5 & 0.686 \\
2D Image 60s & 58.1 & 56.5 & 53.7 & 0.645 \\
2D Image 180s & 52.9 & 51.9 & 50.8 & 0.618 \\ TSA 30s & 77.8 & 75.1 & 72.5 & 0.808 \\ TSA 60s & 81.9 & 79.1 & 78.5 & 0.815 \\ TSA 180s & 83.2 & 81.5 & 80.9 & 0.829 \\ TSA 6m & 88.1 & 85.9 & 85.8 & 0.891 \\ TSA 10m & **98.4** & **97.3** & **92.8** & **0.929** \\ \hline \end{tabular}
\end{table}
Table 4: Signals Study 2D Tensor vs 2D Image Representation.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Attentions & Sensitivity & Accuracy & Specificity & AUC \\ \hline \hline
1D 8s & 75.8 & 72.5 & 68.9 & 0.791 \\
1D 30s & 78.1 & 74.5 & 70.5 & 0.806 \\
1D 60s & 71.5 & 68.8 & 65.6 & 0.728 \\ TSA 30s & 77.8 & 75.1 & 72.5 & 0.808 \\ TSA 60s & 81.9 & 79.1 & 78.5 & 0.815 \\ TSA 180s & 83.2 & 81.5 & 80.9 & 0.829 \\ TSA 6m & 88.1 & 85.9 & 85.8 & 0.891 \\ TSA 10m & **98.4** & **97.3** & **92.8** & **0.929** \\ \hline \end{tabular}
\end{table}
Table 3: Signals Lengths Study.
Figure 7: The Design of PPG Wearable.
developed as a proof of concept of our model. We host the trained HDformer model in the Cloud; it takes the PPG waveforms from the wearable and performs inference on the 2-dimensional representations to produce the diabetes prediction. Due to the convenient nature of wearing rings, the wearable can collect long-range PPG signals consistently and can be easily adopted by most users.
## 5 Conclusion
We propose HDformer, a Transformer-based model that is capable of processing long-range vascular signals (PPG) to predict diabetes, achieving SOTA model performance. It enables a new non-invasive method for early diabetes detection and is scalable to clinical usage, helping to overcome challenges in current diabetes management. Our TSA module demonstrates high efficiency in processing long-range data with 2D representations and makes them compatible with existing Vision Transformer models, while a gated MoE layer helps to ensemble the classifications from the dynamic patch sizes of the 2D TSA. This approach can also be generalized to process long-range biomedical waveforms beyond PPG via Transformers.
Future work will include the experiment with a variety of waveform frequencies, e.g. from 128Hz to 256Hz/512Hz, to evaluate whether increased frequency can help reduce the wavelength needed for HDformer performance. Additionally, we plan to explore the possibility to estimate the glucose level via PPG analysis, by taking the 2nd derivative PPG for HDformer/TSA analysis.
|
2308.13625
|
Quantum friction for a scalar model: spatial dependence and higher
orders
|
We use a perturbative approach to evaluate transition amplitudes
corresponding to quantum friction, for a scalar model describing an atom which
moves at a constant velocity, close to a material plane. In particular, we
present results on the probability density per unit time of exciting degrees of
freedom on specific regions of the plane. This allows one to know spatial
features of the effect which could have practical relevance, for instance, for
the design of nanodevices.
|
Aitor Fernández, C. D. Fosco
|
2023-08-25T18:43:24Z
|
http://arxiv.org/abs/2308.13625v1
|
# Quantum friction for a scalar model: spatial dependence and higher orders
###### Abstract
We use a perturbative approach to evaluate transition amplitudes corresponding to quantum friction, for a scalar model describing an atom which moves at a constant velocity, close to a material plane. In particular, we present results on the probability density per unit time of exciting degrees of freedom on specific regions of the plane. This allows one to know spatial features of the effect which could have practical relevance, for instance, for the design of nanodevices.
We show that the result of integrating out the probability density agrees with previous results for the same system.
We also study the effect of including higher order terms in the perturbative calculation of the probability amplitude for quantum friction, and show that they do not alter the picture obtained from the first non-trivial order, in particular, the velocity threshold for the phenomenon to occur.
## 1 Introduction
Fundamental particles and their interactions are inherently quantum in nature, an aspect that can occasionally lead to macroscopic manifestations in a rather straightforward way. Indeed, vacuum fluctuations have observable consequences under the appropriate circumstances; for instance, when non-trivial boundary conditions are present, as in the case of the renowned Casimir effect [1]. In its original realization, it involves material media imposing boundary conditions on the fluctuations of the electromagnetic (EM) field. This leads to forces or torques when the boundary conditions are time-independent, in what constitutes the static Casimir effect. On the other hand, a suitable time dependence may lead to a dissipative effect, like the creation of photons out of the vacuum, as in the dynamical Casimir effect (DCE).
In this paper, we deal with "non-contact friction" or "quantum friction" (QF), where quantum fluctuations lead to an observable effect: a frictional force when two media move with a constant relative velocity, resulting in energy dissipation. It is somewhat complementary to the Casimir effect, since the zero-point fluctuations of the EM field are not directly significant; rather, the EM field mediates the interaction between the fluctuating microscopic degrees of freedom on two objects. The frictional effect does not
|
2306.02284
|
Quantum fluctuations in atomic Josephson junctions: the role of
dimensionality
|
We investigate the role of quantum fluctuations in the dynamics of a bosonic
Josephson junction in $D$ spatial dimensions, by using beyond mean-field
Gaussian corrections. We derive some key dynamical properties in a systematic
way for $D=3, 2, 1$. In particular, we compute the Josephson frequency in the
regime of low population imbalance. We also obtain the critical strength of the
macroscopic quantum self-trapping. Our results show that quantum corrections
increase the Josephson frequency in spatial dimensions $D=2$ and $D=3$, but
they decrease it in the $D=1$ case. The critical strength of macroscopic
quantum self-trapping is instead reduced by quantum fluctuations in $D=2$ and
$D=3$ cases, while it is enhanced in the $D=1$ configuration. We show that the
difference between the cases of D = 2 and D = 3 on one side, and D = 1 on the
other, can be related to the qualitatively different dependence of the
interaction strength on the scattering length in the different dimensions.
|
Andrea Bardin, Francesco Lorenzi, Luca Salasnich
|
2023-06-04T07:22:03Z
|
http://arxiv.org/abs/2306.02284v3
|
# Quantum fluctuations in atomic Josephson junctions: the role of dimensionality
###### Abstract
We investigate the role of quantum fluctuations in the dynamics of a bosonic Josephson junction in \(D\) spatial dimensions, by using beyond mean-field Gaussian corrections. We derive some key dynamical properties in a systematic way for \(D=3,2,1\). In particular, we compute the Josephson frequency in the regime of low population imbalance. We also obtain the critical strength of the macroscopic quantum self-trapping. Our results show that quantum corrections increase the Josephson frequency in spatial dimensions \(D=2\) and \(D=3\), but they decrease it in the \(D=1\) case. The critical strength of macroscopic quantum self-trapping is instead reduced by quantum fluctuations in \(D=2\) and \(D=3\) cases, while it is enhanced in the \(D=1\) configuration.
## 1 Introduction
A Josephson junction is a device composed of a superconductor, or a superfluid, with a tunneling barrier separating two regions. While the first studies on this kind of problem were targeting superconductors [1], the achievement of Bose-Einstein condensation prompted investigations on this model in the setting of ultracold atomic systems too [2]. In contrast to superconducting Josephson junctions, atomic Josephson junctions can exhibit a significant difference in population between the two sides, and a self-trapping phenomenon called macroscopic quantum self trapping (MQST). MQST was observed for a condensate of \({}^{87}\)Rb atoms in 2005 [3]. In the context of atomic Josephson junctions, the role of dimensionality has been studied in the context of elongated 1-dimensional junctions [4] consisting in two sites spatially separated by an optical potential. Besides the possibility to spatially separate the two sites, it is possible to couple two hyperfine levels of the atoms obtaining an effective Josephson dynamics with respect to the two spin populations [5, 6]. Moreover, the tunability of the interaction strength via Feshbach resonances allowed investigations with fermionic superfluids near the BEC-BCS crossover, in experimental [7] and theoretical [8] studies. In the case of fermionic superfluid, the role of dimensionality has been experimentally investigated in the \(D=2\) case [9]. Another interesting setup, made available using optical lattices, is the realization of many Josephson junctions in an array shape [10]. Such kind of system had been proposed as a platform for quantum computing [11].
Typically, the complete quantum dynamics of Josephson junctions is explained using the phase model, which depends on the commutation rule of the phase operator \(\hat{\phi}\) with the number operator \(\hat{N}\)[12]. This approach is the starting point of many theoretical studies of Josephson junctions. For example, the effect of the finite size of the system had been tackled by using the so-called atomic coherent states [13]. In this case, the computation of corrections to the MQST critical strength was shown to be particularly subtle. In another study, the path integration technique had been used for obtaining an effective action depending only on the phase dynamical variable: this approach was used to compute quantum corrections to the Josephson frequency that are in principle verifiable for both atomic and superconducting systems in specific regimes [14].
In the present work, we first review the mean-field calculations of the Josephson frequency and the MQST critical strength, serving as a starting point for our analysis. Then, by including beyond mean-field Gaussian corrections in \(D\) spatial dimensions [15] on each of the two sites, we systematically calculate the effect of quantum fluctuations for the case \(D=3,2,1\). These quantum fluctuations give rise to the Lee-Huang-Yang correction [16] for \(D=3\), to the Schick-Popov correction [17, 18] for \(D=2\), and to the next-to-leading term of the Lieb-Liniger theory [19] for \(D=1\). Our approach provides an analytically treatable approximation of the full quantum dynamics, and significant corrections are derived.
## 2 Mean-field results
### D-dimensional case
We want to construct an effective Lagrangian for a bosonic system with two sites of volume \(V=L^{D}\) each [14]. The corresponding Lagrangian density is made of three
terms:
\[\mathscr{L}=\mathscr{L}_{1}+\mathscr{L}_{2}+\mathscr{L}_{J}. \tag{1}\]
The first and the second term are given by
\[\mathscr{L}_{k}=i\hbar\Phi_{k}^{*}(t)\partial_{t}\Phi_{k}(t)-\frac{1}{2}g|\Phi_{ k}(t)|^{4}\qquad k=1,2, \tag{2}\]
where \(\Phi_{k}(t)\) is a complex time-dependent field describing the bosons in one of the sites (\(k=1,2\)) and \(g\) is the coupling constant. It is important to stress that the spatial dependence of the field \(\Phi_{k}(t)\) is encoded only in the subindex \(k\). The third term phenomenologically introduces tunneling (hopping) and it is given by
\[\mathscr{L}_{J}=\frac{J}{2}\left(\Phi_{1}^{*}(t)\Phi_{2}(t)+\Phi_{2}^{*}(t) \Phi_{1}(t)\right), \tag{3}\]
the constant \(J\) is connected to the exchange of particles between the two sites. Integrating in space the Lagrangian density one obtains the Lagrangian [14]
\[\begin{split}\mathcal{L}&=\int_{V}\mathscr{L}\ d^{ D}\vec{r}=L^{D}\mathscr{L}\\ &=\sum_{k}\Bigl{(}i\hbar\varphi_{k}^{*}(t)\partial_{t}\varphi_{k }(t)-\frac{U}{2}|\varphi_{k}(t)|^{4}\Bigr{)}+\frac{J}{2}(\varphi_{1}^{*}(t) \varphi_{2}(t)+\varphi_{2}^{*}(t)\varphi_{1}(t)),\end{split} \tag{4}\]
where the new renormalized functions describing the system are
\[\varphi_{k}(t)\equiv\sqrt{L^{D}}\Phi_{k}(t)\quad k=1,2\, \tag{5}\]
and
\[U\equiv\frac{g}{L^{D}}. \tag{6}\]
Through the Madelung transformation [20], given by
\[\varphi_{k}(t)=\sqrt{N_{k}(t)}e^{i\phi_{k}(t)}\qquad k=1,2\, \tag{7}\]
the complex function describing the bosons can be rewritten in terms of its phase \(\phi_{k}(t)\) and its modulus \(\sqrt{N_{k}(t)}\), where the square of the latter corresponds to the number of bosons in the \(k-\)th site. Then the Lagrangian becomes
\[\mathcal{L}= \sum_{k}\Bigl{(}i\hbar\frac{\dot{N}_{k}}{2}-\hbar\dot{\phi}_{k}N _{k}-\frac{U}{2}N_{k}^{2}\Bigr{)}+J\cos{(\phi_{1}-\phi_{2})}\sqrt{N_{1}N_{2}}. \tag{8}\]
Introducing the total number of particles \(N\equiv N_{1}(t)+N_{2}(t)\), the relative phase \(\phi(t)\equiv\phi_{2}(t)-\phi_{1}(t)\), the total phase \(\bar{\phi}(t)\equiv\phi_{1}(t)+\phi_{2}(t)\) and the population imbalance \(z(t)\equiv(N_{1}(t)-N_{2}(t))/N\) then
\[\mathcal{L}= i\hbar\frac{\dot{N}}{2}+\frac{N\hbar}{2}z\dot{\phi}-\frac{N\hbar}{2 }\dot{\bar{\phi}}-\frac{UN^{2}}{4}-\frac{UN^{2}}{4}z^{2}+\frac{JN}{2}\sqrt{1- z^{2}}\cos{\phi}, \tag{9}\]
however, the first term is zero since \(N\) is constant and the third and fourth terms can be removed since the former is an exact differential and the latter is constant. The Lagrangian then reduces to
\[\mathcal{L}=\frac{N\hbar}{2}z\dot{\phi}-\frac{UN^{2}}{4}z^{2}+\frac{JN}{2} \sqrt{1-z^{2}}\cos{\phi}, \tag{10}\]
and the corresponding Euler-Lagrange equations, called Josephson-Smerzi equations [2] are
\[\begin{cases}\dot{z}=-\frac{J}{\hbar}\sqrt{1-z^{2}}\sin\phi\\ \dot{\phi}=\frac{J}{\hbar}\frac{z}{\sqrt{1-z^{2}}}\cos\phi+\frac{UNz}{\hbar}. \end{cases} \tag{11}\]
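As an illustration of how the Josephson-Smerzi equations (11) behave in practice, the following minimal Python sketch integrates them numerically; the values of \(J\), \(U\), \(N\) and the initial conditions are purely illustrative and are not taken from any experiment discussed here.

```python
# Minimal sketch (illustrative parameters): integrate the Josephson-Smerzi
# equations (11), working in units where hbar = 1.
import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0
J, U, N = 1.0, 0.02, 100        # hypothetical junction parameters

def rhs(t, y):
    z, phi = y
    dz = -(J / hbar) * np.sqrt(1.0 - z**2) * np.sin(phi)
    dphi = (J / hbar) * z / np.sqrt(1.0 - z**2) * np.cos(phi) + U * N * z / hbar
    return [dz, dphi]

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.0], rtol=1e-8, atol=1e-10)
print("z(t = 50) =", sol.y[0, -1])   # small-amplitude Josephson oscillation
```

For a small initial imbalance the solution oscillates around \(z=0\), while for initial data violating the self-trapping condition discussed below the imbalance stays trapped around a nonzero mean.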
### Josephson frequency
To obtain a quadratic Lagrangian one considers the limit in which \(|\phi(t)|\ll 1\) and \(|z(t)|\ll 1\). The Lagrangian reduces to
\[\mathcal{L}=\frac{N\hbar}{2}z\dot{\phi}-\frac{UN^{2}+JN}{4}z^{2}-\frac{JN}{4} \phi^{2}, \tag{12}\]
Therefore the linearized Josephson-Smerzi equations are
\[\begin{cases}\dot{z}=-\frac{J}{\hbar}\phi\\ \dot{\phi}=\frac{J+UN}{\hbar}z,\end{cases} \tag{13}\]
and from these equations, one can get the harmonic oscillator equations for the population imbalance \(z(t)\) and the relative phase \(\phi(t)\):
\[\begin{cases}\ddot{z}+\Omega_{mf}^{2}z=0\\ \ddot{\phi}+\Omega_{mf}^{2}\phi=0,\end{cases} \tag{14}\]
where the Josephson frequency [2] is introduced; its expression is given by
\[\Omega_{mf}=\frac{1}{\hbar}\sqrt{J^{2}+UNJ} \tag{15}\]
which can also be written as a function of \(g\) and \(n\) as
\[\Omega_{mf}=\frac{1}{\hbar}\sqrt{J^{2}+Jgn}. \tag{16}\]
Note that there are two particular regimes. If \(J\gg UN\) then the frequency can be approximated with the Rabi frequency \(\Omega_{R}\)[2]
\[\Omega_{mf}\simeq\Omega_{R}=\frac{J}{\hbar}. \tag{17}\]
Vice versa, if \(J\ll UN\) then the frequency can be approximated to [2]
\[\Omega_{mf}\simeq\Omega_{J}=\frac{\sqrt{UNJ}}{\hbar}. \tag{18}\]
Hence the frequency (15) can be rewritten as a function of these two particular cases as
\[\Omega_{mf}=\sqrt{\Omega_{R}^{2}+\Omega_{J}^{2}}. \tag{19}\]
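As a quick numerical illustration (with hypothetical values of \(J\), \(U\) and \(N\)), the relation (19) between the mean-field, Rabi and Josephson frequencies can be checked directly:

```python
# Sketch: mean-field Josephson frequency, Eq. (15), and its limits (17)-(19),
# evaluated for hypothetical parameters with hbar = 1.
import numpy as np

hbar = 1.0
J, U, N = 1.0, 0.02, 100            # hypothetical values

Omega_mf = np.sqrt(J**2 + U * N * J) / hbar      # Eq. (15)
Omega_R = J / hbar                                # Eq. (17)
Omega_J = np.sqrt(U * N * J) / hbar               # Eq. (18)
assert np.isclose(Omega_mf, np.sqrt(Omega_R**2 + Omega_J**2))   # Eq. (19)
print("Omega_mf =", Omega_mf)
```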
### Macroscopic Quantum Self Trapping
In order to describe the MQST in the mean-field approximation, one needs to find the conserved energy of the system, which is given by
\[E=\dot{z}\frac{\partial\mathcal{L}}{\partial\dot{z}}+\dot{\phi}\frac{\partial\mathcal{L}}{\partial\dot{\phi}}-\mathcal{L}; \tag{20}\]
however, the Lagrangian is independent of \(\dot{z}\), so the first term vanishes. Therefore the conserved energy is given by
\[E=\frac{UN^{2}}{4}z^{2}-\frac{JN}{2}\sqrt{1-z^{2}}\cos\phi. \tag{22}\]
The MQST happens when \(\langle z\rangle\neq 0\) and the condition to have MQST is given, calling \(z_{0}=z(0)\) and \(\phi_{0}=\phi(0)\), by the following inequality
\[E(z_{0},\phi_{0})>E(0,\pi), \tag{23}\]
since \(z(t)\) cannot become zero during an oscillation cycle. The MQST condition can be expressed also with a dimensionless parameter, known as strength, defined as
\[\Lambda\equiv\frac{NU}{J}. \tag{24}\]
In fact, inserting (22) and (24) into (23), one has [2]
\[\frac{UN^{2}}{4}z_{0}^{2}-\frac{JN}{2}\sqrt{1-z_{0}^{2}}\cos\phi_{0}>\frac{JN }{2}, \tag{25}\]
which, by using the definition in Eq. (24), can be written as
\[\Lambda>\Lambda_{c,\;mf}, \tag{26}\]
where we defined the critical value of the strength, above which the MQST occurs, as
\[\Lambda_{c,\;mf}\equiv\frac{1+\sqrt{1-z_{0}^{2}}\cos\phi_{0}}{z_{0}^{2}/2}. \tag{27}\]
Eq. (26) is the familiar mean-field condition to achieve MQST in Bose-Einstein condensates.
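A short sketch (again with hypothetical parameters and initial conditions) showing how the mean-field MQST criterion (26)-(27) can be evaluated:

```python
# Sketch: mean-field MQST criterion, Eqs. (24), (26), (27).
import numpy as np

def lambda_c_mf(z0, phi0):
    return (1.0 + np.sqrt(1.0 - z0**2) * np.cos(phi0)) / (z0**2 / 2.0)   # Eq. (27)

J, U, N = 1.0, 0.02, 100             # hypothetical values
Lam = N * U / J                       # Eq. (24)
z0, phi0 = 0.6, 0.0                   # hypothetical initial conditions
print("Lambda =", Lam, "Lambda_c =", lambda_c_mf(z0, phi0),
      "self-trapped:", Lam > lambda_c_mf(z0, phi0))           # Eq. (26)
```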
In the next sections, the results found in the mean-field approximation are used as a starting point to compute the beyond mean-field corrections to the Josephson frequency and the MQST. As we will see, contrary to the mean-field results, the beyond-mean-field Gaussian corrections are strongly dependent on the spatial dimension \(D\).
## 3 Beyond mean field: D=3 case
We can express the beyond mean-field energy density in \(D=3\) as [15, 16]
\[\mathcal{E}=\frac{1}{2}g_{0}n^{2}+\frac{8}{15\pi^{2}}\left(\frac{m}{\hbar^{2}}\right)^{\frac{3}{2}}\left(g_{0}n\right)^{\frac{5}{2}}, \tag{28}\]
so the Lagrangian density has a new term
\[\mathscr{L}= \mathscr{L}_{0}-\frac{8g_{0}^{\frac{5}{2}}\sqrt{m^{3}}}{15\pi^{2} \hbar^{3}}\left(|\Phi_{1}(t)|^{5}+|\Phi_{2}(t)|^{5}\right), \tag{29}\]
where \(\mathscr{L}_{0}\) is the mean-field Lagrangian density. Integrating the Lagrangian density in space one obtains
\[\mathcal{L}=\mathcal{L}_{0}-\frac{8g_{0}^{\frac{5}{2}}\sqrt{m^{3}}}{15\pi^{2} \hbar^{3}L^{\frac{9}{2}}}\left(|\varphi_{1}(t)|^{5}+|\varphi_{2}(t)|^{5}\right). \tag{30}\]
### Josephson frequency
Performing the Madelung transformation (7) one obtains
\[\mathcal{L}=\mathcal{L}_{0}-\frac{8\sqrt{m^{3}g_{0}^{5}}}{15\pi^{2}\hbar^{3} L^{\frac{9}{2}}}\left(N_{1}^{\frac{5}{2}}(t)+N_{2}^{\frac{5}{2}}(t)\right), \tag{31}\]
and rewriting the number of particles in each site as a function of the total number of particles and the population imbalance \(N_{1,2}=N(1\pm z)/2\) one gets
\[\mathcal{L}=\mathcal{L}_{0}-\frac{\sqrt{2m^{3}g_{0}^{5}}N^{\frac{5}{2}}}{15\pi ^{2}\hbar^{3}L^{\frac{9}{2}}}\left[(1+z)^{\frac{5}{2}}+(1-z)^{\frac{5}{2}} \right]. \tag{32}\]
Now, since we are in the low population imbalance limit, it is possible to do the following expansion
\[(1\pm z)^{n}\simeq 1\pm nz+\frac{n(n-1)z^{2}}{2}, \tag{33}\]
and summing the two contributions
\[(1+z)^{n}+(1-z)^{n}\simeq 2+n(n-1)z^{2}. \tag{34}\]
Inserting it into (32) and removing the term constant in \(z\) stemming from the calculations, one obtains
\[\mathcal{L}=\mathcal{L}_{0}-\frac{\sqrt{m^{3}g_{0}^{5}}N^{\frac{5}{2}}}{2 \sqrt{2}\pi^{2}\hbar^{3}L^{\frac{9}{2}}}z^{2}. \tag{35}\]
Finally, the Lagrangian in the \(D=3\) case is
\[\mathcal{L}= \frac{N\hbar}{2}z\dot{\phi}-\left(\frac{UN^{2}+JN}{4}\right)z^{2 }-\frac{JN}{4}\phi^{2}-\frac{\sqrt{m^{3}g_{0}^{5}}N^{\frac{5}{2}}}{2\sqrt{2} \pi^{2}\hbar^{3}L^{\frac{9}{2}}}z^{2}. \tag{36}\]
The Euler-Lagrange equations are
\[\begin{cases}\ddot{\phi}+\Omega^{2}\phi=0\\ \ddot{z}+\Omega^{2}z=0,\end{cases} \tag{37}\]
where the corrected Josephson frequency is
\[\Omega\equiv\frac{1}{\hbar}\sqrt{J^{2}+JUN+\frac{J\sqrt{2g_{0}^{5}n^{3}m^{3}}}{ \pi^{2}\hbar^{3}}}, \tag{38}\]
Defining the reference energy \(\varepsilon_{s}\) and the gas parameter \(\gamma\) as
\[\varepsilon_{s}\equiv\frac{\hbar^{2}}{ma_{s}^{2}}\hskip 28.452756pt\gamma\equiv a _{s}^{3}n, \tag{39}\]
the Josephson frequency can also be written, using the definition of Rabi frequency \(\Omega_{R}\), as
\[\Omega=\Omega_{R}\sqrt{1+4\pi\gamma\frac{\varepsilon_{s}}{J}\left(1+8\sqrt{ \frac{2\gamma}{\pi}}\right)}. \tag{40}\]
To understand the magnitude of the beyond mean-field correction to the Josephson frequency, we consider the ratio between the beyond mean-field Josephson frequency \(\Omega\) and the mean-field one \(\Omega_{mf}\) as a function of the strength parameter, given by \(\Lambda=4\pi\gamma\varepsilon_{s}/J\). Namely, we consider
\[\frac{\Omega}{\Omega_{mf}}=\sqrt{\frac{1+\Lambda\left(1+8\sqrt{\frac{2\gamma} {\pi}}\right)}{1+\Lambda}}. \tag{41}\]
Looking at Fig. 1 one observes the following behavior: the correction is more significant at higher strength parameters \(\Lambda\). For strength parameters \(\Lambda\to 0\) the beyond mean-field correction is irrelevant regardless of the gas parameter. Instead, for larger \(\Lambda\), the relative correction is given by
\[\left.\frac{\Omega}{\Omega_{mf}}\right|_{\Lambda\gg 1}=\sqrt{1+8\sqrt{\frac{2 \gamma}{\pi}}}. \tag{42}\]
Focusing now on the bounds of the gas parameter \(\gamma\): the lower bound is \(\gamma=0\), since both quantities defining \(\gamma\), namely the s-wave scattering length \(a_{s}\) and the number density \(n\), are non-negative. The upper bound follows from the fact that the beyond mean-field correction was obtained with a perturbative approach, assuming \(\gamma\ll 1\); by requiring the beyond mean-field relative correction to the number density to be smaller than \(0.1\), the upper bound on \(\gamma\) must be set to \(\gamma=3\times 10^{-4}\), as reported in Fig. 1. In Fig. 1 the relative correction is larger for larger values of the gas parameter, while for \(\gamma=0\) one retrieves the mean-field case.
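The curves of Fig. 1 follow directly from Eq. (41); a minimal sketch evaluating the ratio for the gas parameters used in the figure reads:

```python
# Sketch: 3D relative correction to the Josephson frequency, Eq. (41).
import numpy as np

def ratio_3d(Lam, gamma):
    return np.sqrt((1.0 + Lam * (1.0 + 8.0 * np.sqrt(2.0 * gamma / np.pi)))
                   / (1.0 + Lam))

Lam = np.logspace(-2, 2, 5)           # illustrative range of strength parameters
for gamma in (3e-4, 3e-5, 3e-6, 0.0):
    print(gamma, ratio_3d(Lam, gamma))
```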
### Macroscopic Quantum Self Trapping
Starting from the beyond mean-field Lagrangian written in terms of the total number of particles \(N\), the population imbalance \(z\), and the phase difference \(\phi\)
\[\begin{split}\mathcal{L}=&\frac{N\hbar}{2}z\dot{ \phi}-\frac{UN^{2}}{4}z^{2}+\frac{JN}{2}\sqrt{1-z^{2}}\cos\phi\\ &-\frac{\sqrt{2m^{3}g_{0}^{5}}N^{\frac{5}{2}}}{15\pi^{2}\hbar^{3 }L^{\frac{9}{2}}}\left[(1+z)^{\frac{5}{2}}+(1-z)^{\frac{5}{2}}\right],\end{split} \tag{43}\]
one finds that the conserved energy is
\[E= \frac{UN^{2}}{4}z^{2}-\frac{JN}{2}\sqrt{1-z^{2}}\cos\phi+\frac{L^{3} \sqrt{2m^{3}}}{15\pi^{2}\hbar^{3}}(UN)^{\frac{5}{2}}\left[(1+z)^{\frac{5}{2}}+(1 -z)^{\frac{5}{2}}\right]. \tag{44}\]
Imposing the inequality condition (23) to have MQST one gets
\[\begin{split}&\frac{\Lambda}{2}z_{0}^{2}-\sqrt{1-z_{0}^{2}}\cos \phi_{0}+\frac{2L^{3}\sqrt{2m^{3}}}{15\pi^{2}\hbar^{3}}\Lambda U^{\frac{3}{2}} N^{\frac{1}{2}}\left[(1+z_{0})^{\frac{5}{2}}+(1-z_{0})^{\frac{5}{2}}\right]\\ &>1+\frac{4L^{3}\sqrt{2m^{3}}}{15\pi^{2}\hbar^{3}}\Lambda U^{ \frac{9}{2}}N^{\frac{1}{2}},\end{split} \tag{45}\]
and finally
\[\Lambda>\Lambda_{c,\,3D}, \tag{46}\]
where we have defined the critical value as a function of the gas parameter \(\gamma\)
\[\Lambda_{c,\,3D}\equiv\frac{1+\sqrt{1-z_{0}^{2}}\cos\phi_{0}}{\frac{z_{0}^{2} }{2}+\frac{16\sqrt{2}}{15\sqrt{\pi}}\sqrt{\gamma}\left[(1+z_{0})^{\frac{5}{2}} +(1-z_{0})^{\frac{5}{2}}-2\right]}. \tag{47}\]
To understand the significance of the beyond mean-field correction to the MQST critical value, one divides it by the mean-field critical value \(\Lambda_{c,\,mf}\):
\[\frac{\Lambda_{c,\,3D}}{\Lambda_{c,\,mf}}=\left[1+\frac{32\sqrt{2}}{15\sqrt{ \pi}}\sqrt{\gamma}\frac{(1+z_{0})^{\frac{5}{2}}+(1-z_{0})^{\frac{5}{2}}-2}{z_ {0}^{2}}\right]^{-1}. \tag{48}\]
Note that since the denominator of \(\Lambda_{c,\,3D}\) is larger than that of \(\Lambda_{c,\,mf}\), the beyond mean-field macroscopic quantum self-trapping critical value is smaller than the mean-field one, \(\Lambda_{c,\,3D}<\Lambda_{c,\,mf}\), as pictured in Fig. 2. Furthermore, the relative correction grows as the gas parameter decreases and it is marginally more significant for lower values of \(|z_{0}|\).
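Similarly, the quantity plotted in Fig. 2 is Eq. (48); the following sketch evaluates it for an illustrative initial imbalance:

```python
# Sketch: 3D relative correction to the MQST critical value, Eq. (48).
import numpy as np

def mqst_ratio_3d(z0, gamma):
    corr = (32.0 * np.sqrt(2.0) / (15.0 * np.sqrt(np.pi))) * np.sqrt(gamma) \
           * ((1.0 + z0)**2.5 + (1.0 - z0)**2.5 - 2.0) / z0**2
    return 1.0 / (1.0 + corr)

z0 = 0.5                               # illustrative initial imbalance
for gamma in (3e-4, 3e-5, 3e-6):
    print(gamma, mqst_ratio_3d(z0, gamma))
```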
Figure 1: 3D beyond mean-field relative correction to the Josephson frequency. In the plot is pictured the ratio between the beyond mean-field Josephson frequency \(\Omega\) and the mean-field one \(\Omega_{mf}\) as a function of the strength parameter \(\Lambda=g_{0}n/J\) for different values of the gas parameter \(\gamma=a_{s}^{3}n\): \(\gamma=3\times 10^{-4}\) (red solid line), \(\gamma=3\times 10^{-5}\) (green dashed line) \(\gamma=3\times 10^{-6}\) (blue dotted line) and \(\gamma=0\) (black dash-dotted line). The last line corresponds to the mean-field case.
## 4 Beyond mean field: D=1 case
The procedure to find a modified Josephson frequency is analogous to the one used before. From the modified \(D=1\) energy density [15, 19]
\[\mathcal{E}=\frac{1}{2}g_{0}n^{2}-\frac{2}{3\pi}\sqrt{\frac{m}{\hbar^{2}}}(g_{0} n)^{\frac{3}{2}}, \tag{49}\]
the Lagrangian density is given by
\[\mathscr{L}=\mathscr{L}_{0}+\frac{2\sqrt{mg_{0}^{3}}}{3\pi\hbar L^{\frac{1}{2 }}}\left(|\Phi_{1}(t)|^{3}+|\Phi_{2}(t)|^{3}\right), \tag{50}\]
where \(\mathscr{L}_{0}\) is the mean-field Lagrangian density and the second term arises from the beyond mean-field calculation accounting quantum fluctuation. The Lagrangian is thus
\[\mathcal{L}=\mathcal{L}_{0}+\frac{2\sqrt{mg_{0}^{3}}}{3\pi\hbar L^{\frac{1}{2 }}}\left(|\varphi_{1}(t)|^{3}+|\varphi_{2}(t)|^{3}\right). \tag{51}\]
### Josephson frequency
Again, a Madelung transformation (7) is performed, obtaining
\[\mathcal{L}=\mathcal{L}_{0}+\frac{2\sqrt{mg_{0}^{3}}}{3\pi\hbar L^{\frac{1}{2 }}}\left(N_{1}^{\frac{3}{2}}(t)+N_{2}^{\frac{3}{2}}(t)\right), \tag{52}\]
which can be rewritten in terms of the total number of particles \(N\) and the population imbalance \(z\) as
\[\mathcal{L}=\mathcal{L}_{0}+\frac{\sqrt{mg_{0}^{3}}N^{\frac{3}{2}}}{3\sqrt{2} \pi\hbar L^{\frac{1}{2}}}\left[(1+z)^{\frac{3}{2}}+(1-z)^{\frac{3}{2}}\right]. \tag{53}\]
Figure 2: 3D beyond mean-field relative correction to the MQST critical value. In the plot is pictured the ratio between the beyond mean-field MQST critical value \(\Lambda_{c,\;3D}\) and the mean-field one \(\Lambda_{c,\;mf}\) as a function of the initial population imbalance \(z_{0}\equiv z(t=0)=(n_{1}(0)-n_{2}(0))/(n_{1}(0)+n_{2}(0))\) for different values of the gas parameter \(\gamma=a_{s}^{3}n\): \(\gamma=3\times 10^{-4}\) (red solid line), \(\gamma=3\times 10^{-5}\) (green dashed line) \(\gamma=3\times 10^{-6}\) (blue dotted line) and \(\gamma=0\) (black dash-dotted line). The last line corresponds to the mean-field case.
Using the relation of expansion (33), valid in the low population imbalance limit, and removing the constant terms, one gets
\[\mathcal{L}=\mathcal{L}_{0}+\frac{\sqrt{mg_{0}^{3}}N^{\frac{3}{2}}}{4\sqrt{2}\pi \hbar L^{\frac{1}{2}}}z^{2}. \tag{54}\]
Finally, the Lagrangian in the \(D=1\) case is given by
\[\mathcal{L}= \frac{N\hbar}{2}z\dot{\phi}-\left(\frac{UN^{2}+JN}{4}\right)z^{2} -\frac{JN}{4}\phi^{2}+\frac{\sqrt{mg_{0}^{3}}N^{\frac{3}{2}}}{4\sqrt{2}\pi \hbar L^{\frac{1}{2}}}z^{2}, \tag{55}\]
Therefore the Euler-Lagrange equations have the same form as Eq. (37), but with the following corrected Josephson frequency
\[\Omega\equiv\frac{1}{\hbar}\sqrt{J^{2}+JUN-\frac{J\sqrt{g_{0}^{3}nm}}{\sqrt{2 }\pi\hbar}}. \tag{56}\]
Writing now the Josephson frequency as a function of the s-wave scattering length \(a_{s}\)
\[\Omega=\Omega_{R}\sqrt{1-\frac{2\hbar^{2}n}{ma_{s}J}\left(1-\frac{1}{\pi\sqrt{ -a_{s}n}}\right)}, \tag{57}\]
and defining the reference energy \(\varepsilon_{s}\) and the gas parameter \(\gamma\) in the 1-dimensional case as
\[\varepsilon_{s}\equiv\frac{\hbar^{2}}{ma_{s}^{2}},\hskip 28.452756pt\gamma \equiv a_{s}n, \tag{58}\]
the Josephson frequency can also be written as
\[\Omega=\Omega_{R}\sqrt{1-2\gamma\frac{\varepsilon_{s}}{J}\left(1-\frac{1}{\pi \sqrt{-\gamma}}\right)}. \tag{59}\]
Note that the gas parameter \(\gamma\) must be negative due to the presence of the square root of \(-\gamma\) in the denominator.
Analogously to the 3-dimensional case, to assess the size of the beyond mean-field correction to the mean-field Josephson frequency we consider the ratio between the corrected frequency and the mean-field one as a function of the strength parameter \(\Lambda=-2\gamma\varepsilon_{s}/J\):
\[\frac{\Omega}{\Omega_{mf}}=\sqrt{\frac{1+\Lambda\left(1-\frac{1}{\pi\sqrt{- \gamma}}\right)}{1+\Lambda}}. \tag{60}\]
The beyond mean-field Josephson frequency is lower than the mean-field one. Indeed, as pictured in Fig. 3, the relative correction satisfies \(\Omega/\Omega_{mf}\leq 1\), with equality for \(\gamma\rightarrow-\infty\) or when the strength parameter is \(\Lambda=0\). Analogously to the 3-dimensional case, the beyond mean-field correction is obtained from a perturbative analysis, so by requiring the beyond mean-field relative correction to the number density to be smaller than 0.1, the upper bound on \(\gamma\) must be set to \(\gamma=-20\), as reported in Fig. 3. Furthermore, the beyond mean-field correction becomes more
important as \(\Lambda\) grows. In fact, for larger strength parameters, \(\Lambda\gg 1\), the asymptotic behavior of the relative correction is given by
\[\left.\frac{\Omega}{\Omega_{mf}}\right|_{\Lambda\gg 1}=\sqrt{1-\frac{1}{\pi \sqrt{-\gamma}}}. \tag{61}\]
The behavior of the relative correction \(\Omega/\Omega_{mf}\) also depends on the gas parameter \(\gamma\): for higher values of the gas parameter the correction is more important, and so the value of \(\Omega/\Omega_{mf}\) decreases.
### Macroscopic Quantum Self Trapping
To compute the conserved energy, one considers the beyond mean-field Lagrangian in the \(D=1\) case, given by
\[\mathcal{L}= \frac{N\hbar}{2}z\dot{\phi}-\frac{UN^{2}}{4}z^{2}+\frac{JN}{2} \sqrt{1-z^{2}}\cos\phi+\frac{\sqrt{mg_{0}^{3}}N^{\frac{3}{2}}}{3\sqrt{2}\pi \hbar L^{\frac{1}{2}}}\left[(1+z)^{\frac{3}{2}}+(1-z)^{\frac{3}{2}}\right], \tag{62}\]
from which one finds that the conserved energy is
\[\begin{split} E=&\frac{UN^{2}}{4}z^{2}-\frac{JN}{ 2}\sqrt{1-z^{2}}\cos\phi\\ &-\frac{L\sqrt{m}}{3\sqrt{2}\pi\hbar}(UN)^{\frac{3}{2}}\left[(1 +z)^{\frac{3}{2}}+(1-z)^{\frac{3}{2}}\right].\end{split} \tag{63}\]
Imposing the inequality condition to have MQST, given by (23), one gets
\[\begin{split}&\frac{\Lambda}{2}z_{0}^{2}-\sqrt{1-z_{0}^{2}}\cos \phi_{0}-\frac{L\sqrt{2m}}{3\pi\hbar}\Lambda U^{\frac{1}{2}}N^{-\frac{1}{2}} \left[(1+z_{0})^{\frac{3}{2}}+(1-z_{0})^{\frac{3}{2}}\right]\\ &>1-\frac{2L\sqrt{2m}}{3\pi\hbar}\Lambda U^{\frac{1}{2}}N^{- \frac{1}{2}},\end{split} \tag{64}\]
Figure 3: 1D beyond mean-field relative correction to the Josephson frequency. In the plot is pictured the ratio between the beyond mean-field Josephson frequency \(\Omega\) and the mean-field one \(\Omega_{mf}\) as a function of the strength parameter \(\Lambda=g_{0}n/J\) for different values of the gas parameter \(\gamma=a_{s}n\): \(\gamma=-20\) (red solid line), \(\gamma=-200\) (green dashed line) \(\gamma=-2000\) (blue dotted line) and \(\gamma\rightarrow-\infty\) (black dash-dotted line), which corresponds to the mean-field case.
and finally
\[\Lambda>\Lambda_{c,\;1D}, \tag{65}\]
where we have defined the critical value \(\Lambda_{c,\;1D}\) using the gas parameter \(\gamma\)
\[\Lambda_{c,\;1D}=\!\!\left(1+\sqrt{1-z_{0}^{2}}\cos\phi_{0}\right)\!\left[\frac {z_{0}^{2}}{2}-\frac{2}{3\pi}\frac{1}{\sqrt{-\gamma}}\left[(1+z_{0})^{\frac{3} {2}}+(1-z_{0})^{\frac{3}{2}}-2\right]\,\right]^{-1}. \tag{66}\]
However in this case the critical value is reached for larger values of \(\Lambda\) since \(\Lambda_{c,\;1D}>\Lambda_{c,\;mf}\), namely the beyond mean-field critical value \(\Lambda_{c,\;1D}\) is larger than the mean-field one. Indeed, dividing \(\Lambda_{c,\;1D}\) by the mean-field critical value \(\Lambda_{c,\;mf}\) one gets
\[\frac{\Lambda_{c,\;1D}}{\Lambda_{c,\;mf}}=\left(1-\frac{4}{3\pi}\frac{1}{ \sqrt{-\gamma}}\frac{(1+z_{0})^{\frac{3}{2}}+(1-z_{0})^{\frac{3}{2}}-2}{z_{0}^ {2}}\right)^{-1}, \tag{67}\]
and looking at Fig. 4 one finds \(\Lambda_{c,\;1D}/\Lambda_{c,\;mf}\geq 1\), where the equality is verified for \(\gamma\rightarrow-\infty\). In fact, the beyond mean-field correction becomes less significant as the gas parameter decreases. Furthermore, at fixed \(\gamma\) it is more important for higher values of \(|z_{0}|\).
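For completeness, the two 1D corrections derived above, the frequency ratio of Eq. (60) and the MQST ratio of Eq. (67), can be evaluated with the following sketch; the strength parameter and initial imbalance are illustrative, and the gas parameter is negative as required.

```python
# Sketch: 1D relative corrections, Eqs. (60) and (67).
import numpy as np

def freq_ratio_1d(Lam, gamma):
    return np.sqrt((1.0 + Lam * (1.0 - 1.0 / (np.pi * np.sqrt(-gamma))))
                   / (1.0 + Lam))

def mqst_ratio_1d(z0, gamma):
    corr = (4.0 / (3.0 * np.pi)) / np.sqrt(-gamma) \
           * ((1.0 + z0)**1.5 + (1.0 - z0)**1.5 - 2.0) / z0**2
    return 1.0 / (1.0 - corr)

for gamma in (-20.0, -200.0, -2000.0):
    print(gamma, freq_ratio_1d(10.0, gamma), mqst_ratio_1d(0.5, gamma))
```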
## 5 Beyond mean field: D=2 case
The 2-dimensional case is very different from the 1-dimensional and the 3-dimensional cases for many reasons [15, 17, 18, 21]. Firstly, the coupling depends on the number density of the site as follows
\[g_{r,k}=-\frac{4\pi\hbar^{2}}{m}\frac{1}{\ln\left(Cn_{k}\right)}\qquad k=1,2\, \tag{68}\]
Figure 4: 1D beyond mean-field relative correction to the MQST critical value. In the plot is pictured the ratio between the beyond mean-field MQST critical value \(\Lambda_{c,\;1D}\) and the mean-field one \(\Lambda_{c,\;mf}\) as a function of the initial population imbalance \(z_{0}\equiv z(t=0)=(n_{1}(0)-n_{2}(0))/(n_{1}(0)+n_{2}(0))\) for different values of the gas parameter \(\gamma=a_{s}n\): \(\gamma=-20\) (red solid line), \(\gamma=-200\) (green dashed line) \(\gamma=-2000\) (blue dotted line) and \(\gamma\rightarrow-\infty\) (dark dash-dotted line). The last line corresponds to the mean-field case.
where \(C\equiv\pi e^{2\gamma+\frac{1}{2}}a_{s}^{2}\). Secondly, the corrected Lagrangian is no longer composed of a mean-field part plus a correction, but rather it is equal in form to the mean-field Lagrangian, although with the renormalized coupling \(g_{r}\) replacing \(g_{0}\).
Therefore, before computing the potential term, a discussion of the coupling is needed [21]. Since the coupling \(g_{r,k}\) is different for each site, it is useful to define a coupling \(g_{r}\) for the entire system
\[g_{r}=-\frac{4\pi\hbar^{2}}{m}\frac{1}{\ln{(Cn)}}, \tag{69}\]
where \(n\) is the number density of the system and it is given by the mean of the number densities of the sites
\[n=\frac{n_{1}+n_{2}}{2}. \tag{70}\]
Then the number densities of the sites \(n_{1,2}\) can be expressed in terms of the population imbalance variable \(z(t)=(n_{1}-n_{2})/(n_{1}+n_{2})\) and the number density of the system \(n\) as
\[\begin{array}{l}n_{1}=n(1+z)\\ n_{2}=n(1-z).\end{array} \tag{71}\]
The couplings \(g_{r,k}\) in Eq. (68) can be rewritten in terms of \(z(t)\), \(n\) and \(g_{r}\); therefore, after some manipulations using Eq. (71),
\[g_{r,k}=g_{r}\Bigg{[}1+\frac{\ln{(1\pm z)}}{\ln{(Cn)}}\Bigg{]}^{-1},\qquad k= 1,2. \tag{72}\]
### Josephson frequency
In the case of \(D=2\), the beyond mean-field energy density is given by
\[\mathcal{E}(n)=\frac{g_{r}n^{2}}{2}. \tag{73}\]
Hence, the Lagrangian density is given by
\[\begin{array}{l}\mathscr{L}=\sum_{k=1}^{2}\left(i\hbar\Phi_{k}^{*}(t) \partial_{t}\Phi_{k}(t)-\frac{1}{2}g_{r,k}|\Phi_{k}(t)|^{4}\right)+\frac{J}{2} \left(\Phi_{1}^{*}(t)\Phi_{2}(t)+\Phi_{2}^{*}(t)\Phi_{1}(t)\right)\end{array} \tag{74}\]
and it is obtained substituting \(g_{0}\) with \(g_{r}\). Integrating in space the corresponding Lagrangian is
\[\begin{array}{l}\mathcal{L}=\sum_{k}\left(i\hbar\varphi_{k}^{*}(t) \partial_{t}\varphi_{k}(t)-\frac{U_{k}}{2}|\varphi_{k}(t)|^{4}\right)+\frac{J} {2}(\varphi_{1}^{*}(t)\varphi_{2}(t)+\varphi_{2}^{*}(t)\varphi_{1}(t)),\end{array} \tag{75}\]
where
\[U_{k}\equiv\frac{g_{r,k}}{L^{2}},\qquad\varphi_{k}(t)\equiv L\Phi_{k}(t)\quad k =1,2. \tag{76}\]
In the mean-field \(D=2\) case we can easily express the potential terms as a function of the variables \(N\) and \(z\). This is because the coupling constant, denoted as \(U\), is not influenced by the quantity \(n_{k}\). Consequently, the potential term is quadratic in \(N_{k}\),
\[-\sum_{k}\frac{U}{2}N_{k}^{2}=-\frac{UN^{2}}{4}\left(1+z^{2}\right), \tag{77}\]
whose \(z\)-dependent part reproduces the \(-UN^{2}z^{2}/4\) term of Eq. (10).
In the beyond mean-field case it is not so simple since there is also a dependence on \(n_{k}\) in the coupling \(g_{r,k}\), hence the potential term transforms differently. For \(k=1\), upon defining
\[U_{r}\equiv\frac{g_{r}}{L^{2}}, \tag{78}\]
and after performing a Taylor expansion, valid in the low population imbalance regime \(|z(t)|\ll 1\), one finds
\[\frac{1}{2}U_{1}|\varphi_{1}(t)|^{4}\simeq\frac{U_{r}N^{2}}{8}\left(1+\frac{z ^{2}-2z}{2\ln\left(Cn\right)}+\frac{z^{2}}{\ln^{2}\left(Cn\right)}\right)(1+2z +z^{2}), \tag{79}\]
For \(k=2\) the procedure is analogous. Summing the two contributions one obtains
\[\sum_{k}\frac{U_{r,k}}{2}|\varphi_{k}(t)|^{4}= \frac{U_{r}N^{2}}{4}\Bigg{(}1+z^{2}-\frac{3z^{2}}{2\ln\left(Cn \right)}+\frac{z^{2}}{\ln^{2}\left(Cn\right)}\Bigg{)}. \tag{80}\]
Writing this result in terms of the system coupling, inverting the relation (69)
\[\frac{1}{\ln\left(Cn\right)}=-\frac{mg_{r}}{4\pi\hbar^{2}}, \tag{81}\]
one gets
\[\sum_{k}\frac{U_{r,k}}{2}|\varphi_{k}(t)|^{4}=\frac{U_{r}N^{2}}{4}\left[1+z^{2 }\left(1+\frac{3}{8}\frac{mg_{r}}{\pi\hbar^{2}}+\frac{1}{16}\frac{m^{2}g_{r}^{ 2}}{\pi^{2}\hbar^{4}}\right)\right]. \tag{82}\]
The term is similar to the mean-field one, given by \((UN^{2}z^{2})/4\), provided one substitutes \(U\) with \(U_{r}\) and adds the beyond mean-field corrections to the contact interaction term. Therefore, to obtain the Josephson frequency in the 2-dimensional beyond mean-field framework, it is sufficient to substitute in (15) the constant \(U\) with
\[U\to U_{r}\left(1+\frac{3}{8}\frac{mg_{r}}{\pi\hbar^{2}}+\frac{1}{16}\frac{m^ {2}g_{r}^{2}}{\pi^{2}\hbar^{4}}\right). \tag{83}\]
Doing so, the 2-dimensional beyond mean-field Josephson frequency is given by
\[\Omega= \frac{1}{\hbar}\sqrt{J^{2}+JU_{r}N\left(1+\frac{3}{8}\frac{mg_{r}} {\pi\hbar^{2}}+\frac{1}{16}\frac{m^{2}g_{r}^{2}}{\pi^{2}\hbar^{4}}\right)}, \tag{84}\]
or, alternatively, expressing it as a function of \(g_{r}\) and \(n\)
\[\Omega=\frac{1}{\hbar}\sqrt{J^{2}+Jg_{r}n\left(1-\frac{3}{2\ln\left(Cn\right)} +\frac{1}{\ln^{2}\left(Cn\right)}\right)}. \tag{85}\]
Note that, in the limit of low density \(n\ll 1\), the terms \(1/(\ln^{\ell}\left(Cn\right))\) become smaller the larger \(\ell\) is; therefore, keeping only terms of order \(\ln(Cn)^{-1}\),
(83) can be approximated by \(U_{r}\), and the Josephson frequency in \(D=2\) becomes formally equivalent to the mean-field one (15)
\[\Omega=\frac{1}{\hbar}\sqrt{J^{2}+U_{r}NJ}, \tag{86}\]
with the care of substituting \(U\) with \(U_{r}\). Writing explicitly the Rabi frequency, the s-wave scattering length \(a_{s}\) and the number density \(n\) one obtains
\[\Omega=\Omega_{R}\sqrt{1-\frac{4\pi\hbar^{2}n}{mJ\ln\left(Cn\right)}\left(1- \frac{3}{2\ln\left(Cn\right)}+\frac{1}{\ln^{2}\left(Cn\right)}\right)}. \tag{87}\]
Introducing the reference energy \(\varepsilon_{s}\) and the gas parameter in the 2-dimensional case
\[\varepsilon_{s}\equiv\frac{\hbar^{2}}{ma_{s}^{2}},\hskip 28.452756pt\gamma \equiv a_{s}^{2}n, \tag{88}\]
and calling \(C^{*}=\pi e^{2\gamma+\frac{1}{2}}\) the Josephson frequency can be written as
\[\Omega=\Omega_{R}\sqrt{1-\frac{4\pi\gamma}{\ln\left(C^{*}\gamma\right)}\frac {\varepsilon_{s}}{J}\left(1-\frac{3}{2\ln\left(C^{*}\gamma\right)}+\frac{1}{ \ln^{2}\left(C^{*}\gamma\right)}\right)}. \tag{89}\]
The beyond mean-field relative correction to the Josephson frequency \(\Omega\) is given by
\[\frac{\Omega}{\Omega_{mf}}=\sqrt{\left[1+\Lambda\left(1-\frac{3}{2\ln\left(C ^{*}\gamma\right)}+\frac{1}{\ln^{2}\left(C^{*}\gamma\right)}\right)\right] \left(1+\Lambda\right)^{-1}}, \tag{90}\]
where the strength parameter is given by \(\Lambda=-(4\pi\gamma\varepsilon_{s})/(\ln\left(C^{*}\gamma\right)J)\).
As pictured in Fig. 5, at fixed gas parameter \(\gamma\) the relative correction \(\Omega/\Omega_{mf}\) is more significant for higher values of the strength parameter \(\Lambda\); for large \(\Lambda\), the relative correction becomes independent of \(\Lambda\) and is given by
\[\left.\frac{\Omega}{\Omega_{mf}}\right|_{\Lambda\gg 1}=\sqrt{1-\frac{3}{2\ln \left(C^{*}\gamma\right)}+\frac{1}{\ln^{2}\left(C^{*}\gamma\right)}}. \tag{91}\]
Focusing instead on the dependence on the gas parameter \(\gamma\), the relative correction \(\Omega/\Omega_{mf}\) increases for higher values of \(\gamma\), while for \(\gamma=0\) one retrieves the mean-field result.
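A sketch of the 2D frequency ratio of Eq. (90) (the quantity plotted in Fig. 5) is given below; here we read the \(\gamma\) appearing in the constant \(C^{*}\) as the Euler-Mascheroni constant, which is our assumption, since the paper reuses the symbol \(\gamma\) for the gas parameter.

```python
# Sketch: 2D relative correction to the Josephson frequency, Eq. (90).
# Taking the Euler-Mascheroni constant in C* is an assumption of this sketch.
import numpy as np

C_star = np.pi * np.exp(2.0 * np.euler_gamma + 0.5)

def freq_ratio_2d(Lam, gamma):
    L = np.log(C_star * gamma)
    return np.sqrt((1.0 + Lam * (1.0 - 1.5 / L + 1.0 / L**2)) / (1.0 + Lam))

for gamma in (1e-2, 1e-3, 1e-4):
    print(gamma, freq_ratio_2d(10.0, gamma))
```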
### Macroscopic Quantum Self Trapping
Unlike the Josephson frequency calculation, in the MQST one, the low population imbalance limit is not taken. Therefore it is necessary to evaluate how the interaction terms transform in the case the population imbalance is generic. Taking into account the contact interaction terms
\[\sum_{k}\frac{1}{2}U_{k}|\varphi_{k}(t)|^{4}=\frac{N^{2}}{8}[U_{1}(1+z)^{2}+U_{2}(1-z)^{2}]. \tag{92}\]
Using the definitions of \(U_{r}\) and \(U_{k}\), they can be linked by the following relation
\[U_{k}=U_{r}\Bigg{[}1+\frac{\ln{(1\pm z)}}{\ln{(Cn)}}\Bigg{]}^{-1}\qquad k=1,2 \tag{93}\]
Then the interaction term reduces to
\[\begin{split}\sum_{k}\frac{1}{2}U_{k}|\varphi_{k}(t)|^{4}& =\frac{U_{r}N^{2}}{4}\Bigg{[}\Bigg{(}1+z^{2}+(1+z)^{2}\frac{\ln{(1- z)}}{2\ln{(Cn)}}+(1-z)^{2}\frac{\ln{(1+z)}}{2\ln{(Cn)}}\Bigg{)}\\ &\times\Bigg{(}\left(1+\frac{\ln{(1+z)}}{\ln{(Cn)}}\right)\left( 1+\frac{\ln{(1-z)}}{\ln{(Cn)}}\right)\Bigg{)}^{-1}\Bigg{]}.\end{split} \tag{94}\]
Hence, the beyond mean-field Lagrangian in the \(D=2\) case is
\[\begin{split} L=&\frac{N\hbar}{2}z\dot{\phi}+ \frac{JN}{2}\sqrt{1-z^{2}}\cos\phi\\ &-\frac{U_{r}N^{2}f(z)}{4}\Bigg{(}1+z^{2}+\frac{(1+z)^{2}\ln{(1 -z)}+(1-z)^{2}\ln{(1+z)}}{2\ln{(Cn)}}\Bigg{)},\end{split} \tag{95}\]
where the function \(f(z)\) is introduced to lighten the expression
\[f(z)\equiv\Bigg{[}\left(1+\frac{\ln{(1+z)}}{\ln{(Cn)}}\right)\left(1+\frac{ \ln{(1-z)}}{\ln{(Cn)}}\right)\Bigg{]}^{-1}. \tag{96}\]
From the Lagrangian one can compute the conserved energy
\[\begin{split} E&=\frac{U_{r}N^{2}f(z)}{4}\Big{(}1+ z^{2}+\frac{(1+z)^{2}\ln{(1-z)}+(1-z)^{2}\ln{(1+z)}}{2\ln{(Cn)}}\Big{)}\\ &-\frac{JN}{2}\sqrt{1-z^{2}}\cos\phi,\end{split} \tag{97}\]
Figure 5: 2D beyond mean-field relative correction to the Josephson frequency. In the plot is pictured the ratio between the beyond mean-field Josephson frequency \(\Omega\) and the mean-field one \(\Omega_{mf}\) as a function of the strength parameter \(\Lambda=g_{0}n/J\) for different values of the gas parameter \(\gamma=a_{s}^{2}n\): \(\gamma=1\times 10^{-2}\) (red solid line), \(\gamma=1\times 10^{-3}\) (green dashed line) \(\gamma=1\times 10^{-4}\) (blue dotted line) and \(\gamma=0\) (dark dash-dotted line). The last line corresponds to the mean-field case.
and imposing the MQST inequality condition, given by \(E(z_{0},\phi_{0})>E(0,\pi)\), one obtains
\[\begin{split}&\frac{U_{r}Nf(z_{0})}{2J}\Big{(}1+z_{0}^{2}+(1+z_{0}) ^{2}\frac{\ln{(1-z_{0})}}{2\ln{(Cn)}}+(1-z_{0})^{2}\frac{\ln{(1+z_{0})}}{2\ln{( Cn)}}\Big{)}-\sqrt{1-z_{0}^{2}}\cos{\phi_{0}}\\ &>1+\frac{U_{r}N}{2J}.\end{split} \tag{98}\]
Defining the adimensional constant \(\Lambda_{r}\) as
\[\Lambda_{r}\equiv\frac{U_{r}N}{J}, \tag{99}\]
the inequality reduces to
\[\Lambda_{r}>\Lambda_{c,\,2D} \tag{100}\]
where the critical value \(\Lambda_{c,\,2D}\) is defined as
\[\begin{split}&\Lambda_{c,\,2D}=\Bigg{[}1+\sqrt{1-z_{0}^{2}}\cos{ \phi_{0}}\Bigg{]}\Bigg{[}\frac{z_{0}^{2}}{2}+\frac{(f(z_{0})-1)(1+z_{0}^{2})}{ 2}\\ &+\frac{f(z_{0})\left[(1+z_{0})^{2}\ln{(1-z_{0})}+(1-z_{0})^{2} \ln{(1+z_{0})}\right]}{4\ln{(Cn)}}\Bigg{]}^{-1},\end{split} \tag{101}\]
Figure 6: 2D beyond mean-field relative correction to the MQST critical value. In the plot is pictured the ratio between the beyond mean-field MQST critical value \(\Lambda_{c,\,2D}\) and the mean-field one \(\Lambda_{c,\,mf}\) as a function of the initial population imbalance \(z_{0}\equiv z(t=0)=(n_{1}(0)-n_{2}(0))/(n_{1}(0)+n_{2}(0))\) for different values of the gas parameter \(\gamma=a_{s}^{2}n\): \(\gamma=1\times 10^{-2}\) (red solid line), \(\gamma=1\times 10^{-3}\) (green dashed line) \(\gamma=1\times 10^{-4}\) (blue dotted line) and \(\gamma=0\) (dark dash-dotted line). The last line corresponds to the mean-field case.
Dividing by the mean-field MQST critical value \(\Lambda_{c,\;mf}\) one obtains
\[\begin{split}&\frac{\Lambda_{c,\;2D}}{\Lambda_{c,\;mf}}=\Bigg{[}1+ \frac{(f(z_{0})-1)(1+z_{0}^{2})}{z_{0}^{2}}\\ &+\frac{f(z_{0})\left[(1+z_{0})^{2}\ln{(1-z_{0})}+(1-z_{0})^{2} \ln{(1+z_{0})}\right]}{2\ln{(C^{*}\gamma)}z_{0}^{2}}\Bigg{]}^{-1}.\end{split} \tag{102}\]
Looking at Fig. 6 one notes that the ratio is equal to 1 when the gas parameter goes to zero, \(\gamma=0\), retrieving the mean-field result. Since the beyond mean-field correction is obtained from a perturbative analysis, setting the maximum value of the relative correction to the number density to 0.1 yields the upper bound \(\gamma<1\times 10^{-2}\), as reported in Fig. 6. Since in general the ratio is lower than 1, one deduces that the beyond mean-field correction decreases the MQST critical value. Another consideration is that, increasing the gas parameter, the relative correction becomes more important, since the ratio decreases.
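The corresponding 2D MQST ratio of Eq. (102), plotted in Fig. 6, can be sketched in the same way (with the same caveat about the constant in \(C^{*}\) and an illustrative initial imbalance):

```python
# Sketch: 2D relative correction to the MQST critical value, Eq. (102).
import numpy as np

C_star = np.pi * np.exp(2.0 * np.euler_gamma + 0.5)   # assumption, see text

def mqst_ratio_2d(z0, gamma):
    L = np.log(C_star * gamma)
    f = 1.0 / ((1.0 + np.log(1.0 + z0) / L) * (1.0 + np.log(1.0 - z0) / L))  # Eq. (96)
    term1 = (f - 1.0) * (1.0 + z0**2) / z0**2
    term2 = f * ((1.0 + z0)**2 * np.log(1.0 - z0)
                 + (1.0 - z0)**2 * np.log(1.0 + z0)) / (2.0 * L * z0**2)
    return 1.0 / (1.0 + term1 + term2)

for gamma in (1e-2, 1e-3, 1e-4):
    print(gamma, mqst_ratio_2d(0.5, gamma))
```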
## 6 Conclusions
In this paper we have adopted the mean-field phase Lagrangian and applied Gaussian correction terms on both sites, taking into account a generic dimensionality of the system. Our calculations for \(D=1,2,3\) have generalized the mean-field Josephson frequency in the low population-imbalance limit, and, on the other hand, computations for a severe population imbalance are provided for the critical MQST strength. The outcomes of our study indicate that, in situations where \(D=2\) or \(D=3\), the Josephson frequency is enhanced by the quantum corrections, whereas it is reduced when \(D=1\). Moreover, we demonstrate that the critical strength for MQST is lowered for \(D=2\) or \(D=3\) compared to mean-field calculations, while it is raised for \(D=1\). The proposed method is complementary to the zero-dimensional approach described in Ref. [14]. We believe that our theoretical predictions could be tested in near-future experiments with superfluid Josephson junctions made of ultracold atomic quantum gases.
## Acknowledgements
This work is partially supported by the BIRD grant "Ultracold atoms in curved geometries" of the University of Padova, by the "Iniziativa Specifica Quantum" of INFN, by the European Union-NextGenerationEU within the National Center for HPC, Big Data and Quantum Computing (Project No. CN00000013, CN1 Spoke 10: "Quantum Computing"), and by the EU Project PASQuanS 2 "Programmable Atomic Large-Scale Quantum Simulation".
|
2301.11539
|
Rational curves in a quadric threefold via an
$\text{SL}(2,\mathbb{C})$-representation
|
In this paper, we regard the smooth quadric threefold $Q_{3}$ as Lagrangian
Grassmannian and search for fixed rational curves of low degree in $Q_{3}$ with
respect to a torus action, which is the maximal subgroup of the special linear
group $\text{SL}(2,\mathbb{C})$. Most of them are confirmations of very
well-known facts. If the degree of a rational curve is $3$, it is confirmed
using the Lagrangian's geometric properties that the moduli space of twisted
cubic curves in $Q_3$ has a specific projective bundle structure. From this, we
can immediately obtain the cohomology ring of the moduli space.
|
Kiryong Chung, Sukmoon Huh, Sang-Bum Yoo
|
2023-01-27T05:46:27Z
|
http://arxiv.org/abs/2301.11539v1
|
# Rational curves in a quadric threefold via an \(\text{SL}(2,\mathbb{C})\)-representation
###### Abstract.
In this paper, we regard the smooth quadric threefold \(Q_{3}\) as Lagrangian Grassmannian and search for fixed rational curves of low degree in \(Q_{3}\) with respect to a torus action, which is the maximal subgroup of the special linear group \(\text{SL}(2,\mathbb{C})\). Most of them are confirmations of very well-known facts. If the degree of a rational curve is \(3\), it is confirmed using the Lagrangian's geometric properties that the moduli space of twisted cubic curves in \(Q_{3}\) has a specific projective bundle structure. From this, we can immediately obtain the cohomology ring of the moduli space.
Key words and phrases: Rational curves, Torus invariant curve, Lagrangian Grassmannian, Projective bundle. 2020 Mathematics Subject Classification: 14E05, 14E15, 14M15
## 1. Introduction
### Motivation and results
By definition, a _prime Fano threefold_\(X\) is a smooth projective variety of dimension \(3\) with \(\text{Pic}_{\mathbb{Z}}(X)=\langle K_{X}\rangle\) and the anti-canonical divisor \(-K_{X}\) is an ample divisor. Such threefolds \(X\) are classified to be one of the following:
\[\mathbb{P}^{3},\quad Q_{3},\quad V_{5},\quad\text{and}\quad V_{22}.\]
It is well known that the first three \(\mathbb{P}^{3}\), \(Q_{3}\) and \(V_{5}\) are rigid but the last one \(V_{22}\) forms a \(6\)-dimensional family; see the excellent paper [14, Section 5.3]. All of these varieties have the same cohomology groups as \(\mathbb{P}^{3}\) does. Furthermore, the \(\text{SL}(2,\mathbb{C})\)-orbit descriptions of these prime Fano threefolds are well studied in the papers [13, 1].
The reason for writing this paper is to fill in the missing list for the space of rational curves. The moduli space of rational curves of degree \(3\) or less in the prime Fano variety has been studied very diversely ([13, 11, 15, 16, 17, 18, 19, 20, 21, 22]). In the case of \(\mathbb{P}^{3}\), starting from the analysis of the Hilbert scheme [11], it served as a key example of enumerative geometry in the 80s and 90s ([11]) and created many variant spaces related with birational geometry ([12, 11]). In the case of quintic del Pezzo threefold, the description of the space of rational curves was possible from the Grassmannian geometry of lines ([14, 15, 16]). Lastly, in the case of Mukai variety, its construction is very fantastic ([13, 17]), and it plays a very decisive role in relation to the existence of the Kahler-Einstein metric on Fano manifold. For specific conclusions, refer to the papers [14] and [20]. In this paper, we examine
the moduli space of twisted cubic curves in a quadric threefold. The authors think that a description of this moduli space has simply not been written down in the literature because it is well-known and easy for experts. However, it seems necessary to record its global geometry and cohomology ring in order to serve as a concrete example required by enumerative geometry ([1, 12, 13]). Now let us introduce the main contents of this paper. Let \(X\) be a projective variety with a fixed embedding \(X\subset\mathbb{P}^{r}\).
* Let \(\mathbf{S}_{d}(X)\) be the moduli space of _stable_ sheaves \(F\) in \(X\) with Hilbert polynomial \(\chi(F(m))=dm+1\).
* Let \(\mathbf{H}_{d}(X)\) be the Hilbert scheme of curves \(C\) in \(X\) with \(\chi(\mathcal{O}_{C}(m))=dm+1\).
In this paper, we focus on the case that \(X=Q_{3}\) is a quadric threefold. \(Q_{3}\) is homogeneous and thus one can apply the main results of [1, 13]. In particular, \(\mathbf{S}_{d}(Q_{3})\) is smooth for \(d\leq 3\). Also the quadric threefold \(Q_{3}\) does not contain any plane and so \(\mathbf{S}_{3}(Q_{3})\) is isomorphic to \(\mathbf{H}_{3}(Q_{3})\). We find the torus invariant rational curves of lower degrees and extend the analysis to the cubic curve case. It turns out that the torus fixed loci are isolated up to degree \(2\), but not in degree \(3\).
**Proposition 1.1** (Proposition 3.11 and Proposition 3.15).: _The torus invariant cubic curves in \(Q_{3}\) consist of isolated ones (\(32\)) and two connected components (isomorphic to \(\mathbb{P}^{1}\))._
Through the computation of torus invariant cubic curves, one realizes that the cubic curves in the hyperplane section of \(Q_{3}\) need to be ordered depending on their linear class. By using the universal family of the Fano scheme of lines in \(Q_{3}\), we obtain a global description of \(\mathbf{S}_{3}(Q_{3})\) as follows.
**Theorem 1.2** (Proposition 4.4).: _The moduli space \(\mathbf{S}_{3}(Q_{3})\) of stable sheaves in \(Q_{3}\) is isomorphic to a projectivized rank \(6\) bundle over \(\operatorname{Gr}(2,4)\)._
The key point of the proof of Theorem 1.2 is to give the algebraic meaning of the ruling line of the hyperplane section of \(Q_{3}\). From Theorem 1.2, one can easily obtain the cohomology ring of \(\mathbf{S}_{3}(Q_{3})\) (Corollary 4.6).
The method of the proof of Theorem 1.2 is a very natural approach to studying the space of curves in prime Fano threefolds. For example, the moduli space of rational quartic curves in \(Q_{3}\) is birational to the space of cubic curves in \(\mathbb{P}^{3}\) by using the Gauss map or tangent developable surfaces ([10, the proof of Proposition 4.17]). The first named author plans to study the birational relations among these moduli spaces in a forthcoming paper.
### Organization of the paper
In Section 2, we collect the well-known facts for finding torus invariant curves. In Section 3, we apply the Bialynicki-Birula theorem to the moduli space \(\mathbf{S}_{d}(Q_{3})\) for \(d\leq 3\). Lastly, in Section 4, we describe the moduli space \(\mathbf{S}_{3}(Q)\) as a projective bundle.
#### Notations and conventions.
* Let us denote by \(\operatorname{Gr}(k,n)\) the Grassmannian variety parameterizing \(k\)-dimensional subspaces in a fixed vector space \(V\) with \(\dim V=n\).
* When no confusion can arise, we do not distinguish the moduli point \([x]\in\mathcal{M}\) from the object \(x\) parameterized by \([x]\).
* The set of fixed points of \(X\) is denoted by \(X^{\mathbb{C}^{*}}\) under the \(\mathbb{C}^{*}\)-action.
### Acknowledgements.
The authors gratefully acknowledge the many helpful suggestions and comments of Jeong-Seop Kim, Yeongrak Kim, Wanseok Lee, Kyeong-Dong Park, Joonyeong Won during the preparation of the paper. A part of the paper was initiated in the workshop (held in Jinju, Korea, Feb. 2022) aimed at finding research topics through an arXiv preprint survey.
## 2. Preliminary
In this section, we introduce a general theory about a geometric structure of a smooth projective variety with a torus action. Also we collect the well-known facts about the multiple structure on Cohen-Macaulay curves.
### Bialynicki--Birula (BB) Theorem.
Let \(X\) be a smooth projective variety with a \(\mathbb{C}^{*}\)-action. Then the \(\mathbb{C}^{*}\)-fixed locus of \(X\) decomposes into
\[X^{\mathbb{C}^{*}}=\bigsqcup_{i}Y_{i}\]
such that each component \(Y_{i}\) is connected. Note that \(Y_{i}\) is smooth ([11]). For each \(Y_{i}\), the \(\mathbb{C}^{*}\)-action on the tangent bundle \(TX\big{|}_{Y_{i}}\) provides a decomposition as
\[TX\big{|}_{Y_{i}}=T^{+}\oplus T^{0}\oplus T^{-}\]
where \(T^{+}\), \(T^{0}\) and \(T^{-}\) are the subbundles of \(TX\big{|}_{Y_{i}}\) such that the group \(\mathbb{C}^{*}\) acts with positive, zero and negative weights respectively. Under the local linearization, \(T^{0}\cong TY_{i}\) and
\[T^{+}\oplus T^{-}=N_{Y_{i}/X}=N^{+}(Y_{i})\oplus N^{-}(Y_{i})\]
is the decomposition of the normal bundle \(N_{Y_{i}/X}\) of \(Y_{i}\) in \(X\).
A fundamental result in a theory of \(\mathbb{C}^{*}\)-action on \(X\) has been provided by A. Bialynicki-Birula. Let
\[X^{+}(Y_{i})=\left\{x\in X\ \mid\ \lim_{t\to 0}t\cdot x\in Y_{i}\right\}\quad \text{and}\quad X^{-}(Y_{i})=\left\{x\in X\ \mid\ \lim_{t\to\infty}t\cdot x\in Y_{i}\right\}.\]
**Theorem 2.1** ([1]).: _Under the above assumptions and notations,_
1. \(X=\bigsqcup_{i}X^{\pm}(Y_{i})\)
_._
2. _for each connected component_ \(Y_{i}\)_, there are_ \(\mathbb{C}^{*}\)_-equivariant isomorphisms_ \(X^{\pm}(Y_{i})\cong N^{\pm}(Y_{i})\) _over_ \(Y_{i}\) _where_ \(N^{\pm}(Y_{i})\to Y_{i}\) _is a Zariski locally trivial fibration with the affine space_ \(\mathbb{C}^{\mu^{\pm}(Y_{i})}\) _of dimension_ \(\mu^{\pm}(Y_{i}):=\operatorname{rank}N^{\pm}(Y_{i})\)_._
Throughout this article, we denote a smooth quadric threefold simply by \(Q_{3}=Q\).
**Proposition 2.2**.: _The moduli space \(\mathbf{S}_{d}(Q)(\cong\mathbf{H}_{d}(Q))\) is smooth for \(d\leq 3\)._
Proof.: The case \(d=1\) is easily verified by the normal bundle sequence of \(L\subset Q\subset\mathbb{P}^{4}\). The cases \(d=2\) and \(d=3\) have been proved in [1, Theorem 3.7 and Proposition 4.13].
### Non-reduced cubic curves in a quadric threefold
Note that \(Q\) is defined by a quadratic polynomial and does not contain any plane. Hence each curve \(C\) with Hilbert polynomial \(\chi(\mathcal{O}_{C}(m))=3m+1\) is Cohen-Macaulay (CM). Let us start by recalling the list of the non-reduced CM curve \(C\) in a quadric threefold \(Q\); see [1] and [1, Lemma 2.1]. Let \(p_{a}(C):=\dim\operatorname{H}^{1}(C,\mathcal{O}_{C})\) be the _arithmetic genus_ of the curve \(C\).
1. (Triple thickness) The structure sheaf \(\mathcal{O}_{C}\) with \([C]=3[L]\) fits into the non-split extension \[0\to\mathcal{O}_{L}(-1)\oplus\mathcal{O}_{L}(-1)\to\mathcal{O}_{C}\to \mathcal{O}_{L}\to 0.\] Moreover, such \(C\)'s are parameterized by the GIT-quotient: (1) \[\mathbb{P}(\operatorname{Ext}^{1}_{Q}(\mathcal{O}_{L},\mathcal{O}_{L}(-1) \oplus\mathcal{O}_{L}(-1)))^{\mathrm{s}}/\!/\mathrm{SL}(2)\cong\operatorname {Gr}(2,\dim\!\operatorname{Ext}^{1}_{Q}(\mathcal{O}_{L},\mathcal{O}_{L}(-1))).\]
2. (Triple line lying on a quadric cone) The structure sheaf \(\mathcal{O}_{C}\) with \([C]=3[L]\) fits into the non-split extension \[0\to\mathcal{O}_{L}(-1)\to\mathcal{O}_{C}\to\mathcal{O}_{L^{2}}\to 0,\] where \(p_{a}(L^{2})=0\).
3. (Pair of lines) The structure sheaf \(\mathcal{O}_{C}\) with \([C]=2[L]+[L^{\prime}]\) fits into the non-split extension \[0\to\mathcal{O}_{L}(-1)\to\mathcal{O}_{C}\to\mathcal{O}_{L\cup L^{\prime}}\to 0,\] where \(p_{a}(L^{2})=0\) or \(-1\).
**Remark 2.3**.: More generally, let \(G\) be a stable sheaf on a projective space \(\mathbb{P}^{n}\) with \(\chi(G(m))=(d-1)m+1\) and \(L\) be a line. Then every sheaf \(F\) fitting into the non-split short exact sequence
\[0\to\mathcal{O}_{L}(-1)\to F\to G\to 0\]
is stable; see [1, Lemma 4.7].
### Deformation theory
We address an exact sequence which we will use later.
**Lemma 2.4**.: _Let \(Y\overset{i}{\hookrightarrow}X\) be a smooth, closed subvariety of the smooth variety \(X\). If \(F\) and \(G\in\operatorname{Coh}(Y)\), then there is an exact sequence_
\[\begin{split} 0\to\operatorname{Ext}^{1}_{Y}(F,G)\to \operatorname{Ext}^{1}_{X}(i_{*}F,i_{*}G)&\to\operatorname{Hom}_{ Y}(F,G\otimes N_{Y/X})\\ &\to\operatorname{Ext}^{2}_{Y}(F,G)\to\operatorname{Ext}^{2}_{X}( i_{*}F,i_{*}G).\end{split} \tag{2}\]
Proof.: This is the base change spectral sequence in [10, Theorem 12.1].
## 3. Application of the BB-theorem on a quadric threefold
In this section we find the fixed rational curves and their weights in \(Q\) under the \(\mathbb{C}^{*}\)-action inherited from the \(\operatorname{SL}_{2}:=\operatorname{SL}(2,\mathbb{C})\)-representation. Let us regard the quadric threefold \(Q\) as a hyperplane section of \(\operatorname{Gr}(2,4)\). The \(\mathbb{C}^{*}\)-actions on \(Q\) and on curves therein come from an action on the related vector space.
Let \(V_{d}=\operatorname{Sym}^{d}\left(\mathbb{C}^{2}\right)\) with the \(\operatorname{SL}_{2}\)-action induced from the left multiplication of \(\operatorname{SL}_{2}\) on \(\mathbb{C}^{2}\). Then the maximal torus \(\{\text{diag}(t^{-1},t)\mid t\in\mathbb{C}^{*}\}\cong\mathbb{C}^{*}\) acts on the basis vectors
\[\left\{v_{d}=x^{d},\quad v_{d-2}=x^{d-1}y,\quad\dots\quad,v_{2-d}=xy^{d-1},\quad v_{-d}=y^{d}\right\}\]
with weights \((d,d-2,\dots,2-d,-d)\). The infinitesimal \(\mathfrak{sl}_{2}\)-action on \(V_{d}\) is given by
\[e=x\partial_{y},\quad f=y\partial_{x},\quad h=x\partial_{x}-y\partial_{y}\]
for the standard basis \(\langle e,f,h\rangle\) for \(\mathfrak{sl}_{2}\). Indeed, we have the following equations for \(0\leq i\leq d\):
\[\left\{\begin{array}{rl}e\cdot v_{d-2i}=iv_{d-2(i-1)},\\ f\cdot v_{d-2i}=(d-i)v_{d-2(i+1)},\\ h\cdot v_{d-2i}=(d-2i)v_{d-2i}.\end{array}\right.\]
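The displayed formulas for the infinitesimal \(\mathfrak{sl}_{2}\)-action can be checked mechanically; the following small sympy sketch (not part of the paper) verifies them for \(d=3\).

```python
# Sketch: verify e = x d/dy, f = y d/dx, h = x d/dx - y d/dy on v_{d-2i} = x^(d-i) y^i.
import sympy as sp

x, y = sp.symbols('x y')
d = 3
basis = {d - 2 * i: x**(d - i) * y**i for i in range(d + 1)}   # v_{d-2i}

for i in range(d + 1):
    w = d - 2 * i
    v = basis[w]
    e_v = sp.expand(x * sp.diff(v, y))
    f_v = sp.expand(y * sp.diff(v, x))
    h_v = sp.expand(x * sp.diff(v, x) - y * sp.diff(v, y))
    assert sp.expand(e_v - (i * basis[w + 2] if i > 0 else 0)) == 0
    assert sp.expand(f_v - ((d - i) * basis[w - 2] if i < d else 0)) == 0
    assert sp.expand(h_v - (d - 2 * i) * v) == 0

print("sl_2 action on V_3 matches the displayed formulas")
```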
Now fix \(d=3\) and set \(V=V_{3}\). Setting \(v_{a,b}:=v_{a}^{*}\wedge v_{b}^{*}\in\wedge^{2}V^{*}\) and \(W:=\wedge^{2}V\), we get the Plucker coordinates for \(i:\operatorname{Gr}(2,V)\hookrightarrow\mathbb{P}(W)\)
\[v_{3,1},\ v_{3,-1},\ v_{3,-3},\ v_{1,-1},\ v_{1,-3},\ v_{-1,-3}\]
with weights \((4,2,0,0,-2,-4)\) and the defining equation of \(\operatorname{Gr}(2,V)\) is given by the Klein relation:
\[v_{3,1}v_{-1,-3}-v_{3,-1}v_{1,-3}+v_{3,-3}v_{1,-1}=0. \tag{3}\]
Note that for \(g\in\mathfrak{sl}_{2}\), \(g(v_{i,j})=(g\cdot v_{i}^{*})\wedge v_{j}^{*}+v_{i}^{*}\wedge(g\cdot v_{j}^{*})\) for \(i\neq j\). For example, \(g=e\) acts on the basis of \(W\) by
\[e\cdot v_{3,1} =0, e\cdot v_{3,-1} =2v_{3,1}, e\cdot v_{3,-3} =3v_{3,-1}\] \[e\cdot v_{1,-1} =v_{3,-1}, e\cdot v_{1,-3} =v_{3,-3}+3v_{1,-1}, e\cdot v_{-1,-3} =2v_{1,-3}.\]
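The table above follows from the Leibniz rule; as a sanity check, the next sketch (treating \(v_{i,j}\) as \(v_{i}\wedge v_{j}\) under the identification with the basis above, and using only the \(\mathfrak{sl}_{2}\)-action on \(V\) already recorded) reproduces it.

```python
# e.(v_i ^ v_j) = (e.v_i) ^ v_j + v_i ^ (e.v_j), with v ^ v = 0 and v_a ^ v_b = -v_b ^ v_a.
from collections import defaultdict

e_on_V = {3: {}, 1: {3: 1}, -1: {1: 2}, -3: {-1: 3}}   # e.v_w = sum over w' of coeff * v_{w'}

def e_on_wedge(i, j):
    out = defaultdict(int)
    for w2, coeff in e_on_V[i].items():                 # act on the first factor
        a, b = w2, j
        if a != b:
            out[(a, b) if a > b else (b, a)] += coeff if a > b else -coeff
    for w2, coeff in e_on_V[j].items():                 # act on the second factor
        a, b = i, w2
        if a != b:
            out[(a, b) if a > b else (b, a)] += coeff if a > b else -coeff
    return {k: c for k, c in out.items() if c}

for pair in [(3, 1), (3, -1), (3, -3), (1, -1), (1, -3), (-1, -3)]:
    print(pair, '->', e_on_wedge(*pair))    # matches the displayed action of e on W
```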
By the action of \(h\in\mathfrak{sl}_{2}\), each \(v_{i}\) is an eigenvector of \(V\) so that we get a decomposition
\[V=V(3)\oplus V(1)\oplus V(-1)\oplus V(-3)\]
such that \(V(i)=\langle v_{i}\rangle\) for each \(i\in\{3,1,-1,-3\}\). Similarly we get a decomposition for \(W\):
\[W=W(4)\oplus W(2)\oplus W(0)\oplus W(-2)\oplus W(-4),\]
where \(W(4)=\langle v_{3,1}^{*}\rangle\), \(W(2)=\langle v_{3,-1}^{*}\rangle\), \(W(0)=\langle v_{3,-3}^{*},v_{1,-1}^{*}\rangle\) and \(W(-2)=\langle v_{1,-3}^{*}\rangle\) and \(W(-4)=\langle v_{-1,-3}^{*}\rangle\).
**Remark 3.1**.: Let \(H=\mathbb{P}(W_{1})\cong\mathbb{P}^{4}\) be the \(\mathrm{SL}_{2}\)-invariant hyperplane of \(\mathbb{P}(W)\) for a five-dimensional subspace \(W_{1}\subset W\). Then it must be defined by an equation \(av_{3,-3}+bv_{1,-1}=0\) for some \(a,b\in\mathbb{C}\). In particular, we have
\[e\cdot(av_{3,-3}+bv_{1,-1})=(3a+b)v_{3,-1}=0.\]
Thus \(H\) is defined by the linear equation \(v_{3,-3}-3v_{1,-1}=0\).
Unless otherwise stated, let us define the smooth quadric threefold \(Q\) by \(Q=\mathrm{Gr}(2,V)\cap H\) for the hyperplane \(H\) in Remark 3.1.
### Fixed points and lines in \(Q\)
The fixed points \(Q^{\mathbb{C}^{*}}\) of \(Q\) are the following four points:
\[\begin{split} p_{3,1}&=[1:0:0:0:0:0],\quad p_{3,-1} =[0:1:0:0:0:0],\\ p_{1,-3}&=[0:0:0:0:1:0],\quad p_{-1,-3}=[0:0:0:0:0:1]. \end{split} \tag{4}\]
Note that the fixed points \(H^{\mathbb{C}^{*}}\) consists of \(5\) points, including \(Q^{\mathbb{C}^{*}}\). From the defining equation of \(H\), the fifth fixed point must be
\[q_{0}=[0:0:3:1:0:0]. \tag{5}\]
Therefore one can get a decomposition \(W=W_{1}\oplus W_{1}^{\perp}\) so that
\[\mathbb{P}(W_{1})=\langle p_{3,1},p_{3,-1},p_{1,-3},p_{-1,-3},q_{0}\rangle,\]
where \(\langle S\rangle\) denotes the linear span for a subset \(S\subset\mathbb{P}(W)\).
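The fixed-point statements above can be verified directly from the Klein relation (3) and the equation of \(H\); the following lines are only an illustrative check.

```python
# Coordinates ordered as (v_{3,1}, v_{3,-1}, v_{3,-3}, v_{1,-1}, v_{1,-3}, v_{-1,-3}).
klein = lambda v: v[0]*v[5] - v[1]*v[4] + v[2]*v[3]   # equation (3)
in_H  = lambda v: v[2] - 3*v[3] == 0                  # H: v_{3,-3} - 3 v_{1,-1} = 0

p31, p3m1, p1m3, pm1m3 = [1,0,0,0,0,0], [0,1,0,0,0,0], [0,0,0,0,1,0], [0,0,0,0,0,1]
q0 = [0, 0, 3, 1, 0, 0]

# The four points of (4) lie on Gr(2,V) and on H, hence on Q:
assert all(klein(p) == 0 and in_H(p) for p in (p31, p3m1, p1m3, pm1m3))
# The weight-zero fixed locus of P(W) is the line [0:0:u:w:0:0]; intersecting with H forces
# u = 3w, i.e. the point q_0 of (5), which does not lie on Gr(2,V):
assert in_H(q0) and klein(q0) != 0
```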
**Lemma 3.2**.: _There exist exactly four lines on \(Q\), fixed by the action of \(\mathbb{C}^{*}\):_
1. (a) \(\overline{p_{3,1}p_{3,-1}}=[s:t:0:0:0:0],\)
2. (b) \(\overline{p_{3,1}p_{1,-3}}=[s:0:0:0:t:0],\)
3. (c) \(\overline{p_{3,-1}p_{-1,-3}}=[0:s:0:0:0:t],\) _and_
4. (d) \(\overline{p_{1,-3}p_{-1,-3}}=[0:0:0:0:s:t].\)
Proof.: Let \(L(\cong\mathbb{P}^{1})\) be a line in \(Q\) fixed by the \(\mathbb{C}^{*}\)-action. Then \(\mathbb{C}^{*}\) acts on \(L\) itself, and the action is non-trivial because \(Q^{\mathbb{C}^{*}}\) is finite. Since \(\chi(L)=2\), exactly two points of \(L\) are fixed, so \(L\) joins two of the four points in (4). Checking the Klein relation (3) on the six lines joining pairs of fixed points leaves exactly the four lines listed.
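Concretely, the last step amounts to testing the Klein relation on each of the \(\binom{4}{2}=6\) joining lines; a short check (a sketch, using the same coordinate order as above) is:

```python
import itertools
import sympy as sp

klein = lambda v: v[0]*v[5] - v[1]*v[4] + v[2]*v[3]
pts = {'p_{3,1}':  [1,0,0,0,0,0], 'p_{3,-1}':  [0,1,0,0,0,0],
       'p_{1,-3}': [0,0,0,0,1,0], 'p_{-1,-3}': [0,0,0,0,0,1]}
s, t = sp.symbols('s t')

for (n1, P1), (n2, P2) in itertools.combinations(pts.items(), 2):
    line = [s*a + t*b for a, b in zip(P1, P2)]
    # These points already satisfy the equation of H, so only the Klein relation matters.
    print(n1, n2, 'on Q' if sp.expand(klein(line)) == 0 else 'not on Q')
# Exactly the four lines (a)-(d) of Lemma 3.2 survive.
```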
From the inclusion \(L\subset Q\subset\mathbb{P}(W_{1})\) and Lemma 2.4, we have an exact sequence:
\[0\to T_{[L]}\mathbf{S}_{1}(Q)\to T_{[L]}\mathrm{Gr}(2,W_{1})\to\mathrm{H}^{0}( \mathcal{O}_{L}(2))\to 0 \tag{6}\]
for each line \(L\subset Q\). The sequence (6) is exact on the right (the subsequent term in (2) vanishes) since \(\mathbf{S}_{1}(Q)\) is smooth with \(\dim\mathbf{S}_{1}(Q)=3=\dim T_{[L]}\operatorname{Gr}(2,W_{1})-h^{0}(\mathcal{O}_{L}(2))\).
**Remark 3.3** (Table 1).: One can compute the number \(\delta([L])\) of negative weights \(T_{[L]}\mathbf{S}_{1}(Q)\) for each case in Lemma 3.2.
(a) The weights of the fixed line \(L=\mathbb{P}(\mathbb{C}^{2})\) are \((4,2)\). From the isomorphism
\[T_{[L]}\mathrm{Gr}(2,W_{1})\cong\mathrm{Hom}(\mathbb{C}^{2},W_{1}/\mathbb{C}^ {2})=(\mathbb{C}^{2})^{\vee}\otimes(W_{1}/\mathbb{C}^{2})\]
we get that the weight of \(T_{[L]}\mathrm{Gr}(2,W_{1})\) is given by \((-8,-6,-6,-4,-4,-2)\). Since \(\mathrm{H}^{0}(\mathcal{O}_{L}(2))\cong\mathrm{Sym}^{2}(\mathbb{C}^{2})^{\vee}\) has the weight \((-8,-6,-4)\), the weight of \(T_{[L]}\mathbf{S}_{1}(Q)\) is \((-6,-4,-2)\). In particular, we get \(\delta(L)=3\).
(b) Similarly as in (a), we get that the weight of the fixed line \(L=\mathbb{P}(\mathbb{C}^{2})\) is \((4,-2)\) and so the weight of \(T_{[L]}\mathbf{S}_{1}(Q)\) is \((-4,-2,2)\) and so \(\delta(L)=2\).
(c) and (d): Again as in the above, we get \(\delta(L)=1\) (resp. \(0\)) for the case (c) (resp. (d)).
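The weight bookkeeping in (a)-(d) is a straightforward multiset manipulation; the following sketch recomputes \(\delta(L)\) for the four fixed lines from the weights of \(W_{1}\) and of the defining \(\mathbb{C}^{2}\subset W_{1}\).

```python
# delta(L) = number of negative weights of T_[L] S_1(Q), computed as the weights of
# Hom(C^2, W_1/C^2) minus the weights of H^0(O_L(2)) = Sym^2 (C^2)^dual.
from collections import Counter

W1 = [4, 2, 0, -2, -4]
lines = {'(a)': (4, 2), '(b)': (4, -2), '(c)': (2, -4), '(d)': (-2, -4)}

def minus(big, small):                      # multiset difference
    return sorted((Counter(big) - Counter(small)).elements())

for name, (w1, w2) in lines.items():
    quotient = minus(W1, [w1, w2])                               # W_1 / C^2
    hom = [-u + q for u in (w1, w2) for q in quotient]           # T_[L] Gr(2, W_1)
    sym2 = [-2*w1, -(w1 + w2), -2*w2]                            # H^0(O_L(2))
    tangent = minus(hom, sym2)                                   # T_[L] S_1(Q)
    delta = sum(1 for w in tangent if w < 0)
    print(name, tangent, 'delta =', delta)
# delta = 3, 2, 1, 0, so P(S_1(Q), t) = 1 + t^2 + t^4 + t^6 by Theorem 2.1.
```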
Let \(P(X,t)=\sum\limits_{i=0}^{2\dim X}h^{i}(X,\mathbb{C})t^{i}\) be the Poincare polynomial of a smooth projective variety \(X\).
**Corollary 3.4**.: _The Poincare polynomial of \(\mathbf{S}_{1}(Q)\) is_
\[P(\mathbf{S}_{1}(Q),t)=1+t^{2}+t^{4}+t^{6}.\]
Proof.: The proof is straightforward by Theorem 2.1 and the result of Table 1.
### Fixed conics in \(Q\) and their weights
**Lemma 3.5**.: _There exist exactly \(10\) conics on \(Q\), fixed under the action of \(\mathbb{C}^{*}\). Furthermore, the defining equations of the fixed conics are given by_
1. (1-a) \(\langle v_{1,-3},v_{-1,-3},v_{3,-3}-3v_{1,-1},v_{3,-3}v_{1,-1}\rangle\),
2. (1-b) \(\langle v_{3,-1},v_{-1,-3},v_{3,-3}-3v_{1,-1},v_{3,-3}v_{1,-1}\rangle\),
3. (1-c) \(\langle v_{3,1},v_{1,-3},v_{3,-3}-3v_{1,-1},v_{3,-3}v_{1,-1}\rangle\),
4. (1-d) \(\langle v_{3,1},v_{3,-1},v_{3,-3}-3v_{1,-1},v_{3,-3}v_{1,-1}\rangle\),
5. (2-a) \(\langle v_{-1,-3},v_{3,-3},v_{1,-1},v_{3,-1}v_{1,-3}\rangle\),
6. (2-b) \(\langle v_{1,-3},v_{3,-3},v_{1,-1},v_{3,1}v_{-1,-3}\rangle\),
7. (2-c) \(\langle v_{3,-1},v_{3,-3},v_{1,-1},v_{3,1}v_{-1,-3}\rangle\),
8. (2-d) \(\langle v_{3,1},v_{3,-3},v_{1,-1},v_{3,-1}v_{1,-3}\rangle\),
9. (3-a) \(\langle v_{3,1},v_{-1,-3},v_{3,-3}-3v_{1,-1},v_{3,-1}v_{1,-3}-v_{3,-3}v_{1,-1}\rangle\), _and_
10. (3-b) \(\langle v_{3,-1},v_{1,-3},v_{3,-3}-3v_{1,-1},v_{3,1}v_{-1,-3}+v_{3,-3}v_{1,-1}\rangle\).
Proof.: Let \([C]\in\mathbf{S}_{2}(Q)\) be a fixed conic. Then the linear span \(\langle C\rangle\cong\mathbb{P}^{2}\) is also invariant under the \(\mathbb{C}^{*}\)-action. The \(\mathbb{C}^{*}\)-fixed planes are generated either by the point \(q_{0}\) in equation (5) and two different points in \(Q^{\mathbb{C}^{*}}\), or by three different points in \(Q^{\mathbb{C}^{*}}\); there are \(\binom{4}{2}+\binom{4}{3}=10\) such planes, and each of them meets \(Q\) in one of the conics listed above. From these planes, one obtains the list.
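The ten planes, and the type of conic each one cuts out of \(Q\), can be enumerated mechanically; the following sketch restricts the Klein quadric to each invariant plane and reads off the rank of the resulting quadratic form (rank \(1\): double line, rank \(2\): pair of lines, rank \(3\): smooth conic).

```python
import itertools
import sympy as sp

# Coordinates ordered as (v_{3,1}, v_{3,-1}, v_{3,-3}, v_{1,-1}, v_{1,-3}, v_{-1,-3}).
pts = {'p31': [1,0,0,0,0,0], 'p3-1': [0,1,0,0,0,0],
       'p1-3': [0,0,0,0,1,0], 'p-1-3': [0,0,0,0,0,1]}
q0 = [0, 0, 3, 1, 0, 0]
klein = lambda v: v[0]*v[5] - v[1]*v[4] + v[2]*v[3]
a, b, c = sp.symbols('a b c')

def conic_rank(P1, P2, P3):
    point = [a*x + b*y + c*z for x, y, z in zip(P1, P2, P3)]    # generic point of the plane
    q = sp.expand(klein(point))
    hessian = sp.Matrix(3, 3, lambda i, j: sp.diff(q, (a, b, c)[i], (a, b, c)[j]))
    return hessian.rank()     # 1: double line, 2: pair of lines, 3: smooth conic

ranks  = [conic_rank(pts[u], pts[v], q0) for u, v in itertools.combinations(pts, 2)]
ranks += [conic_rank(*map(pts.get, tri)) for tri in itertools.combinations(pts, 3)]
print(len(ranks), sorted(ranks))   # 10 conics: ranks [1, 1, 1, 1, 2, 2, 2, 2, 3, 3]
```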
(1-a), (1-b), (1-c) and (1-d) of the list of Lemma 3.5 are the _unique_ double lines supported on (a), (b), (c) and (d) respectively in Lemma 3.2. Also (2-a), (2-b), (2-c) and (2-d) in Lemma 3.5 are pairs of two lines supported on the fixed lines in Lemma 3.2. Lastly, (3-a) (resp. (3-b)) in Lemma 3.5 are smooth conics passing through the points \(p_{3,-1}\) and \(p_{1,-3}\) (resp. \(p_{3,1}\) and \(p_{-1,-3}\)). Combining these, we obtain the configuration in Figure 1.
Note that \(\mathbf{H}_{2}(W_{1})\) is isomorphic to \(\mathbb{P}(\mathrm{Sym}^{2}(\mathcal{U}^{\vee}))\) where \(\mathcal{U}\) is the universal subbundle of \(\mathrm{Gr}(3,W_{1})\). Hence one can apply Lemma 2.4 to get an exact sequence
\[0\to T_{[C]}\mathbf{S}_{2}(Q)\to T_{[C]}\mathbb{P}(\mathrm{Sym}^{2}( \mathcal{U}^{\vee}))\to\mathrm{H}^{0}(\mathcal{O}_{C}(2))\to 0 \tag{7}\]
for any conic \(C\subset Q\). The sequence (7) is exact on the right since \(\mathbf{S}_{2}(Q)\) is smooth and \(\dim\mathbf{S}_{2}(Q)=6\).
Figure 1. Fixed conics in \(Q\)
**Remark 3.6**.: The case (3-a) of Lemma 3.5: \(\mathcal{U}_{C}=\langle v_{3,-1},3v_{3,-3}+v_{1,-1},v_{1,-3}\rangle\) has weights \((2,0,-2)\) and
\[T_{[C]}\mathbb{P}(\mathrm{Sym}^{2}(\mathcal{U}^{\vee}))=\mathrm{Hom}(\mathcal{ U}_{C},W_{1}/\mathcal{U}_{C})\oplus(\mathrm{Sym}^{2}(\mathcal{U}_{C}^{\vee})/ \mathbb{C}). \tag{8}\]
The first (resp. second) term in equation (8) has weights \((-6,-4,-2,2,4,6)\) (resp. \((-4,-2,0,2,4)\)). On the other hand, from the short exact sequence \(0\to\mathcal{O}_{\mathbb{P}(\mathcal{U}_{C})}\to\mathcal{O}_{\mathbb{P}( \mathcal{U}_{C})}(2)\to\mathcal{O}_{C}(2)\to 0\), we have an exact sequence
\[0\to\mathrm{H}^{0}(\mathcal{O}_{\mathbb{P}(\mathcal{U}_{C})})\to\mathrm{H}^{ 0}(\mathcal{O}_{\mathbb{P}(\mathcal{U}_{C})}(2))\to\mathrm{H}^{0}(\mathcal{O} _{C}(2))\to 0.\]
Since \(\mathrm{H}^{0}(\mathcal{O}_{\mathbb{P}(\mathcal{U}_{C})}(2))=\mathrm{Sym}^{2} (\mathcal{U}_{C}^{*})\), \(\mathrm{H}^{0}(\mathcal{O}_{C}(2))\) has weights \((-4,-2,0,2,4)\). Therefore \(T_{[C]}\mathbf{S}_{2}(Q)\) has weights \((-6,-4,-2,2,4,6)\). The other cases are presented in Table 2.
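The same kind of bookkeeping as in Remark 3.3 gives these weights; an illustrative recomputation for the case (3-a):

```python
from collections import Counter

minus = lambda big, small: sorted((Counter(big) - Counter(small)).elements())

W1  = [4, 2, 0, -2, -4]
U_C = [2, 0, -2]                                    # weights of U_C for the conic (3-a)

hom  = [-u + q for u in U_C for q in minus(W1, U_C)]            # Hom(U_C, W_1/U_C)
sym2 = [-a - b for i, a in enumerate(U_C) for b in U_C[i:]]     # Sym^2(U_C^dual)
sym2_mod_C = minus(sym2, [0])                                   # remove the trivial summand C
# H^0(O_C(2)) = Sym^2(U_C^dual) minus the span of the conic's equation, which also has weight 0:
h0 = minus(sym2, [0])

tangent = minus(hom + sym2_mod_C, h0)
print(tangent)   # [-6, -4, -2, 2, 4, 6], the weights of T_[C] S_2(Q) in Remark 3.6
```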
**Corollary 3.7**.: _The Poincare polynomial of \(\mathbf{S}_{2}(Q)\) is_
\[P(\mathbf{S}_{2}(Q),t)=1+t^{2}+2t^{4}+2t^{6}+2t^{8}+t^{10}+t^{12}.\]
Proof.: The proof is straightforward by Theorem 2.1 and the result of Table 2.
### Fixed components of \(\mathbf{S}_{3}(Q)\): Degenerate case
In this subsection, the fixed degenerate cubic curves are found using the fixed lines and conics (Lemma 3.2 and Lemma 3.5). Furthermore, we also find the fixed smooth cubic components by analyzing cubic curves lying on the quadric cone (Proposition 3.15).
**Lemma 3.8**.: _Every Cohen-Macaulay curve \(C\in\mathbf{S}_{3}(Q)\) with \([C]=3[L]\) is a triple line lying on a quadric cone (Subsection 2.2)._
Proof.: Since \(\dim\operatorname{Ext}^{1}_{Q}(\mathcal{O}_{L},\mathcal{O}_{L}(-1))=\dim\mathrm{H}^{0}(N_{L|Q}(-1))=1\) for any line \(L\) in \(Q\) by Lemma 2.4, the GIT-quotient in equation (1) is empty and so the assertion follows.
**Lemma 3.9**.: _Let \(L\) be a \(\mathbb{C}^{*}\)-invariant line and \(L^{2}\) be the unique planar double line in \(Q\) supporting \(L\). Then we have_
\[\dim\operatorname{Ext}^{1}_{Q}(\mathcal{O}_{L^{2}},\mathcal{O}_{L}(-1))=2.\]
_Furthermore, the induced action on the extension space is non-trivial._
Proof.: The dimension of the extension space can be easily checked by using Macaulay2 ([GS]). By taking \(\operatorname{Ext}^{\bullet}_{Q}(-,\mathcal{O}_{L}(-1))\) into the short exact sequence \(0\to\mathcal{O}_{L}(-1)\to\mathcal{O}_{L^{2}}\to\mathcal{O}_{L}\to 0\), we have a long exact sequence
\[\begin{split} 0&\cong\operatorname{Hom}_{Q}( \mathcal{O}_{L^{2}},\mathcal{O}_{L}(-1))\to\operatorname{Hom}_{Q}(\mathcal{O}_ {L}(-1),\mathcal{O}_{L}(-1))\stackrel{{\cong}}{{\to}} \operatorname{Ext}^{1}_{Q}(\mathcal{O}_{L},\mathcal{O}_{L}(-1))\\ &\to\operatorname{Ext}^{1}_{Q}(\mathcal{O}_{L^{2}},\mathcal{O}_ {L}(-1))\to\operatorname{Ext}^{1}_{Q}(\mathcal{O}_{L}(-1),\mathcal{O}_{L}(- 1))\to\cdots.\end{split} \tag{9}\]
The vanishing of the first term in (9) holds by stability, and the second isomorphism in (9) holds by counting dimensions. The last term in (9) is isomorphic to the tangent space \(T_{[L]}\mathbf{S}_{1}(Q)\). By Table 1, the space \(\operatorname{Ext}^{1}_{Q}(\mathcal{O}_{L}(-1),\mathcal{O}_{L}(-1))\) has no repeated weight, and thus the induced action on the extension space is non-trivial.
**Lemma 3.10**.: _Let \(C=L_{1}\cup L_{2}\) be a fixed conic in \(Q\). Then we have_
\[\dim\operatorname{Ext}^{1}_{Q}(\mathcal{O}_{C},\mathcal{O}_{L_{2}}(-1))=2.\]
_Furthermore, the induced action on the extension space is non-trivial._
Proof.: One can prove the claim by the same method in the proof of Lemma 3.9.
From this discussion, one can present all of the \(\mathbb{C}^{*}\)-invariant degenerated cubic curves in \(\mathbf{S}_{3}(Q)\) as follows.
**Proposition 3.11**.: _The number of the \(\mathbb{C}^{*}\)-invariant, degenerated cubic curves in \(\mathbf{S}_{3}(Q)\) is \(36\)._
Proof.: We read off the invariant cubics from Figure 1. The reduced cubics are either a tree of three lines or a conic with a line attached; from Figure 1, the number of such configurations is \(4+2^{3}=12\). When the support of the cubic is a pair of lines, there are \(4\times 2=8\) choices of support, and for each of them Lemma 3.10 gives two cases, hence \(16\) possibilities. Lastly, there are \(4\) choices of supporting line for a triple line lying on a quadric cone, and Lemma 3.9 gives two cases for each of them. Altogether we have \(12+16+8=36\).
### Fixed components of \(\mathbf{S}_{3}(Q)\): Smooth case
To find the smooth invariant cubic curves, we consider the _enveloping map_
\[\xi:\mathbf{S}_{3}(Q)\to\operatorname{Gr}(4,W_{1})\]
defined by \([C]\mapsto\langle C\rangle\cong\mathbb{P}^{3}\subset\mathbb{P}(W_{1})=H\). As did in the case of \(\mathbf{S}_{2}(Q)\), we can use the five fixed points \(Q^{\mathbb{C}^{*}}\cup\{q_{0}\}\) to obtain the \(\mathbb{C}^{*}\)-fixed points in \(\operatorname{Gr}(4,W_{1})\). The result is the second row of the Table 3. Let us find the fixed twisted cubics in \(\xi^{-1}(H_{0})\subset\mathbf{S}_{3}(Q)\) for a \(\mathbb{C}^{*}\)-invariant element \([H_{0}]\in\operatorname{Gr}(4,W_{1})\). That is, we seek the \(\mathbb{C}^{*}\)-invariant cubics in the
quadric surface \(S_{0}:=Q\cap H_{0}\). Since the cases (ii) and (iii) are symmetric to (iv) and (v) in Table 3, we treat the first two cases only.
\(\bullet\)**Case (i) in the Table 3**
**Proposition 3.12**.: _Under the notations in the above, the \(\mathbb{C}^{*}\)-fixed cubics with \(S_{0}\) smooth, are always reducible._
Proof.: We fall into the case (i) of Table 3, in which we can let
\[[v_{3,1}:v_{3,-1}:v_{1,-3}:v_{-1,-3}]\]
be the homogenous coordinates of \(\mathbb{P}^{3}\). Then the smooth quadric surface \(S_{0}\subset\mathbb{P}^{3}\) is defined by \(v_{3,1}v_{-1,-3}-v_{3,-1}v_{1,-3}=0\) induced from the Klein relation of \(\operatorname{Gr}(2,V)\). Set
\[\mathbb{P}^{1}\times\mathbb{P}^{1}\to S_{0}\subset\mathbb{P}^{3},\quad[s:t] \times[v:w]\mapsto[sv:sw:tv:tw]\]
be the \(\mathbb{C}^{*}\)-equivariant map where \(\mathbb{C}^{*}\) acts on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) diagonally with weights \((1,-1,3,-3)\). Then we get \(\mathbf{S}_{3}(S_{0})\cong|\mathcal{O}_{S_{0}}(1,2)|\sqcup|\mathcal{O}_{S_{0 }}(2,1)|\). The first component of the latter space is isomorphic to the projective space \(\mathbb{P}\mathrm{H}^{0}(\mathcal{O}_{\mathbb{P}^{1}}(1)\boxtimes\mathcal{O}_ {\mathbb{P}^{1}}(2))\), and by the Kunneth formula we get
\[\mathrm{H}^{0}(\mathcal{O}_{\mathbb{P}^{1}}(1)\boxtimes\mathcal{O}_{\mathbb{ P}^{1}}(2))\cong\mathrm{H}^{0}(\mathcal{O}_{\mathbb{P}^{1}}(1))\otimes \mathrm{H}^{0}(\mathcal{O}_{\mathbb{P}^{1}}(2))=\mathbb{C}_{s,t}\otimes \operatorname{Sym}^{2}\mathbb{C}_{v,w}.\]
Hence the weights for \(|\mathcal{O}_{S_{0}}(1,2)|\) are \((\pm 7,\pm 5,\pm 1)\) and thus the fixed points are coordinate points; we get a similar description for the second component.
On the other hand, let \(\{sv^{2},svw,sw^{2},tv^{2},tvw,tw^{2}\}\) be a basis for \(\mathrm{H}^{0}(\mathcal{O}_{S_{0}}(1,2))\) and its dual basis by \(\{h_{0},\ h_{1},\ \cdots,\ h_{5}\}\). Then each element in the system \(|\mathcal{O}_{S_{0}}(1,2)|\) is written as
\[F:=h_{0}sv^{2}+h_{1}svw+h_{2}sw^{2}+h_{3}tv^{2}+h_{4}tvw+h_{5}tw^{2}.\]
Using the homogenous coordinates of \(\mathbb{P}^{3}\)
\[v_{3,1}=sv,v_{3,-1}=sw,v_{1,-3}=tv,v_{-1,-3}=tw,\]
we can write
\[sF =h_{0}(sv)^{2}+h_{1}s^{2}vw+h_{2}(sw)^{2}+h_{3}(sv)(tv)+h_{4}stvw +h_{5}(sw)(tw)\] \[=h_{0}v_{3,1}^{2}+h_{1}v_{3,1}v_{3,-1}+h_{2}v_{3,-1}^{2}+h_{3}v_{3,1}v_{1-3}+h_{4}v_{3,-1}v_{1-3}+h_{5}v_{3,-1}v_{-1,-3};\] \[tF =h_{0}(sv)(tv)+h_{1}tsvw+h_{2}(sw)(tw)+h_{3}(tv)^{2}+h_{4}t^{2}vw +h_{5}(tw)^{2}\] \[=h_{0}v_{3,1}v_{1-3}+h_{1}v_{3,-1}v_{1-3}+h_{2}v_{3,-1}v_{-1,-3}+ h_{3}v_{1-3}^{2}+h_{4}v_{1-3}v_{-1,-3}+h_{5}v_{-1,-3}^{2}.\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline (i) & (ii) & (iii) & (iv) & (v) \\ \hline \(v_{3,-3}=v_{1,-1}=0\) & \(v_{-1,-3}=H=0\) & \(v_{1,-3}=H=0\) & \(v_{3,-1}=H=0\) & \(v_{3,1}=H=0\) \\ \hline Smooth quadric surface & \multicolumn{4}{|c|}{Quadric cone} \\ \hline \end{tabular}
\end{table}
Table 3. Fixed hyperplane section.
Thus the defining equation of a cubic curve in \(S_{0}\) is given by \(\langle sF,tF\rangle\). In particular, one can read that \(\mathbb{C}^{*}\)-fixed cubics are not irreducible.
\(\bullet\)**Case (ii) and (iii) in the Table 3**
In these cases, the quadric surface \(S_{0}\) is singular. Very natural invariant curves can be obtained by taking the closures of the \(\mathbb{C}^{*}\)-orbits of the points \([1:a:3:1:3a^{-1}:0]\), \(a\neq 0\). That is, the one-parameter family of twisted cubics
\[\mathcal{C}_{a}:=\left\{[u^{3}:au^{2}v:3uv^{2}:uv^{2}:3a^{-1}v^{3}:0]\times(a) \mid[u:v]\in\mathbb{P}^{1},a\in\mathbb{C}^{*}\right\}\subset\mathbb{P}^{5} \times\mathbb{C}^{*}_{a} \tag{10}\]
parameterized by \(a\in\mathbb{C}^{*}_{a}\) is invariant under the \(\mathbb{C}^{*}\)-action. In fact, such a family gives a connected component in \(\mathbf{S}_{3}(Q)^{\mathbb{C}^{*}}\). Note that \(\mathbb{C}^{*}_{a}\) is compactified as \(\mathbb{P}^{1}\subset\mathbf{S}_{3}(Q)\) by adding the two flat limits: A triple line lying on quadric cone and a conic attached a line.
**Example 3.13**.: One can easily see that the family \(\mathcal{C}_{a}\) in equation (10) lies in the case (ii) (i.e., \(S_{0}=Q\cap\{v_{-1,-3}=0\}\)) of Table 3. We compute the number of zero-weights of \(T_{[C]}\mathbf{S}_{3}(Q)\) at \(C:=\mathcal{C}\mid_{a=1}\). Let us denote the number of zero-weights of the global section \(\mathrm{H}^{0}(F)\) of a locally free sheaf \(F\) by \(w_{0}(F)\). Let \(f:\mathbb{P}^{1}\to Q\subset\mathbb{P}(W_{1})\subset\mathbb{P}(W)\) be a morphism defined by
\[[u:v]\mapsto[u^{3}:u^{2}v:3uv^{2}:uv^{2}:3v^{3}:0].\]
If we put the weights \((1,-1)\) of the coordinate of the domain curve and shift the weights of the codomain by \(-1\), then the map \(f\) is a \(\mathbb{C}^{*}\)-equivariant one. We compute \(w_{0}(f^{*}T_{Q})\) of the space \(\mathrm{H}^{0}(f^{*}T_{Q})\) which is the first deformation space of the graph space \(\text{Map}(\mathbb{P}^{1},Q)\). Since the map \(f\) is an embedding, we have a short exact sequence
\[0\to f^{*}T_{Q}\to f^{*}T_{\mathbb{P}^{4}}\to f^{*}N_{Q|\mathbb{P}^{4}}\to 0. \tag{11}\]
But we also have an exact sequence
\[0\to f^{*}\mathcal{O}_{\mathbb{P}^{4}}\to f^{*}\mathcal{O}_{\mathbb{P}^{4}}(1 )\otimes\mathrm{H}^{0}(\mathcal{O}_{\mathbb{P}^{4}}(1))^{*}\to f^{*}T_{ \mathbb{P}^{4}}\to 0\]
by pulling back the Euler sequence along the map \(f\). Therefore the number of zero-weights of \(f^{*}T_{\mathbb{P}^{4}}\) is \(w_{0}(f^{*}T_{\mathbb{P}^{4}})=3\). Also \(w_{0}(f^{*}N_{Q|\mathbb{P}^{4}})=w_{0}(\mathcal{O}_{\mathbb{P}^{1}}(6))=1\). By plugging into equation (11), we have \(w_{0}(f^{*}T_{Q})=2\). Finally our space is the quotient space by \(\mathrm{SL}_{2}=\text{Aut}(\mathbb{P}^{1})\). Since \(w_{0}(T_{\mathbb{P}^{1}})=1\), the number of zero-weights of \(T_{[C]}\mathbf{S}_{3}(Q)\) is one.
We prove now that smooth cubic curves fixed by \(\mathbb{C}^{*}\)-action are only the given one in equation (10). Our strategy is to find all of the fixed curves in the resolved surface where the resolution map is \(\mathbb{C}^{*}\)-equivariant. Then one can find the fixed cubics in singular quadric \(S_{0}\). The following contents before the end of this section are well studied in [10, Section 2], [11, Section 1] and [12, Section 2]. The resolution of the quadric cone \(S_{0}\) at the cone point \(p\) is isomorphic to the Hirzebruch surface \(\mathbb{F}_{2}:=\mathbb{P}(\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{ \mathbb{P}^{1}}(2))\). Let \(C_{0}\) (resp. \(F\)) be the canonical section class (resp. fiber class) in the divisor class \(\text{Div}(\mathbb{F}_{2})=\langle C_{0},F\rangle\). Then \(S_{0}\) is given by the image of the complete linear system \(|C_{0}+2F|(\cong\mathbb{P}^{3})\). Also
the surface \(\mathbb{F}_{2}\) embeds in the complete linear system \(|C_{0}+3F|(\cong\mathbb{P}^{5})\). That is, we have a commutative diagram:
(12)
Furthermore, the right vertical map \(\pi\) in (12) is a linear projection.
**Remark 3.14** (cf. [1, Lemma 4.12]).: Let \([C]\in\mathbf{S}_{3}(S_{0})\). Note that \(C\) is a CM-curve. Let \(p\) be the cone point of \(S_{0}\). Let \(\text{mult}_{p}(C)=1\). Then by the projection formula, the strict transform \(\tilde{C}\) of \(C\) along the map \(\pi_{\mathbb{F}_{2}}:\mathbb{F}_{2}\to S_{0}\) lies in the linear system \([\tilde{C}]\in|C_{0}+3F|\). In fact, \(\tilde{C}\cdot(C_{0}+2F)=C\cdot H_{\mathbb{P}^{3}}=3\) by the projection formula. Also, \(\tilde{C}\cdot C_{0}=1\). Hence \(\tilde{C}=C_{0}+3F\). Let \(\text{mult}_{p}(C)>1\) and \(\tilde{C}=\pi_{\mathbb{F}_{2}}^{-1}(C)\) be the scheme theoretic inverse image of \(C\) along the map \(\pi_{\mathbb{F}_{2}}\). Then \([\tilde{C}]=[C_{0}]+[\tilde{C}_{s}]\) where \(\tilde{C}_{s}\) is the strict transform of \(C\) such that \([\tilde{C}_{s}]=3[F]\). Therefore \(\tilde{C}\) lies in the linear system \(|C_{0}+3F|\).
We interpret \(\mathbb{F}_{2}\) and \(S_{0}\) as _scrolls_ to make all relevant maps of the diagram (12) a \(\mathbb{C}^{*}\)-equivariant one. By definition, the _rational normal scroll_\(S(p,q)\) is the _join variety_ of the rational normal curves of degree \(p\) and \(q\); here, we allow \(p\) or \(q\) to be zero. In our case, \(\mathbb{F}_{2}\cong S(1,3)\) and \(S_{0}\cong S(0,2)\). Then it is well-known that the map \(\pi\) in (12) is a successive projection from a point on the ruling line and cubic curve. Let us present \(\pi\) in (12) as homogeneous coordinates. To do this, let us define \(S(1,3)\) and \(S(0,2)\) as the closure of the image of \(4\)-dimensional complex torus \((\mathbb{C}^{*})^{4}\) as below.
(13)
Then the scroll \(S(1,3)\) (resp. \(S(0,2)\)) is defined by the maximal minors of the _catalecticant_ matrix
\[\begin{bmatrix}z_{0}&z_{1}&z_{2}&z_{4}\\ z_{1}&z_{2}&z_{3}&z_{5}\end{bmatrix}\quad(\text{resp.}\begin{bmatrix}z_{0}&z_ {1}\\ z_{1}&z_{2}\end{bmatrix}). \tag{14}\]
Under this setting, the map \(\pi\) in (13) is
\[\pi([z_{0}:z_{1}:z_{2}:z_{3}:z_{4}:z_{5}])=[z_{0}:z_{1}:z_{2}:z_{4}].\]
Now we are ready to state our main proposition.
**Proposition 3.15**.: _In the case (ii) and (iii) (and thus (iv) and (v)) of Table 3, the fixed cubic curves are classified by the followings._
1. _If_ \(S_{0}=Q\cap\{v_{-1,-3}=0\}\)_, then the_ \(\mathbb{C}^{*}\)_-fixed, irreducible cubic curve in a quadric cone_ \(S_{0}\) _is defined by the family_ \(\mathcal{C}_{a}\) _in equation (_10_)._
2. _If_ \(S_{0}=Q\cap\{v_{1,-3}=0\}\)_, then every_ \(\mathbb{C}^{*}\)_-fixed cubic curve in a quadric cone_ \(S_{0}\) _is isolated and degenerated one._
Proof.: Case (1): Since \(S_{0}\) is defined by \(v_{3,-1}v_{1,-3}-3v_{1,-1}^{2}=0\), we may assume that the weights of \(z_{0}\), \(z_{1}\), \(z_{2}\) and \(z_{4}\) are \(2\), \(0\), \(-2\) and \(4\) (after rescaling the coordinate \(z_{1}\)). If we let the weights \(z_{3}\) and \(z_{5}\) by \(-4\) and \(2\) to be invariant of equations (14), then the map \(\pi\) is a \(\mathbb{C}^{*}\)-equivariant one. This induces a \(\mathbb{C}^{*}\)-action on the complete linear system \(|C_{0}+3F|\) which regards as the space of quartic curves in \(\mathbb{F}_{2}\). Hence the repeat of the weight \(2\) gives us one parameter family of quartic curves in \(\mathbb{F}_{2}\) and thus rational cubic curves in \(S_{0}\) (Remark 3.14). Note that the other case is isolated and the corresponding curve is a degenerated one by substituting in equation (14). Obviously, the family \(\mathcal{C}_{a}\) in (10) is invariant under \(\mathbb{C}^{*}\)-action and thus we proved the claim.
Case (2): The similar computation as in the case (1) shows that if we let the weights of \(z_{0},\cdots,z_{4}\) and \(z_{5}\) by \(4\), \(0\), \(-4\), \(-8\), \(2\) and \(-2\), then \(\pi\) is \(\mathbb{C}^{*}\)-equivariant. Thus \(\mathbb{C}^{*}\)-fixed curves in \(\mathbb{F}_{2}\) are isolated and degenerated one. Thus the same thing holds on \(S_{0}\).
It seems possible to compute the weights by using the result in [1] and Lemma 2.4. Since we explicitly describe the moduli space \(\mathbf{S}_{3}(Q)\) in terms of a projective bundle in the following section, we omit it.
## 4. Cubic curves space as a projective bundle
In this section, we describe the moduli space \(\mathbf{S}_{3}(Q)\) as a projective bundle over a Grassmannian (Proposition 4.4). As a corollary, we give the cohomology ring structure of \(\mathbf{S}_{3}(Q)\) (Corollary 4.6).
### Sheaf theoretic description of \(\mathbf{S}_{1}(Q)\)
Let us recall the global description of \(\mathbf{S}_{1}(Q)\). Let \(L\) be a line in \(Q\). It is known that the locally free resolution of the ideal sheaf \(\mathcal{I}_{L,Q}\) is given by
\[0\to\mathcal{O}_{Q}(-1)\to\mathcal{U}_{Q}\to\mathcal{I}_{L,Q}\to 0, \tag{15}\]
where \(\mathcal{U}_{Q}\) is the restriction of the universal subbundle of Grassmannian \(\mathrm{Gr}(2,4)\); see [1, Section 2.1.1]. Also, every line in \(Q\) arises in this fashion, so called the Hartshorne-Serre correspondence ([1]). Thus we get the following.
**Proposition 4.1**.: _Let \(\mathbf{S}_{1}(Q)\) be the Hilbert scheme of lines in \(Q\). Then,_
\[\mathbf{H}_{1}(Q)\cong\mathbb{P}\mathrm{Hom}(\mathcal{O}_{Q}(-1),\mathcal{U}_{ Q})\cong\mathbb{P}\mathrm{H}^{0}(\mathcal{U}_{Q}(1))\cong\mathbb{P}^{3}.\]
### Proof of Theorem 1.2
The moduli space of representations of a Kronecker quiver parametrizes the isomorphism classes of stable sheaf homomorphisms
\[\mathcal{O}_{Q}^{\oplus 2}\longrightarrow\mathcal{U}_{Q}(1) \tag{16}\]
up to the natural action of the automorphism group \(\mathbb{C}^{*}\times\mathrm{GL}_{2}/\mathbb{C}^{*}\cong\mathrm{GL}_{2}\). For two vector spaces \(E\) and \(F\) of dimension \(2\) and \(1\) respectively and \(V^{*}:=\mathrm{H}^{0}(Q,\mathcal{U}_{Q}(1))\), the moduli space is constructed as \(\mathbf{G}:=\mathrm{Hom}(E,V^{*}\otimes F)/\!/\mathrm{GL}_{2}\cong V^{*}\otimes F\otimes E^{*}/\!/\mathrm{GL}_{2}\) with an appropriate linearization; see [10]. Note that \(\mathrm{GL}_{2}\) acts by row operations on the space of \((2\times 4)\)-matrices, and thus \(\mathbf{G}\cong\mathrm{Gr}(2,4)\).
Let \(p_{1}:\mathbf{G}\times Q\to\mathbf{G}\) and \(p_{2}:\mathbf{G}\times Q\to Q\) be the two projections, and write \(\mathcal{A}\boxtimes\mathcal{B}:=p_{1}^{*}\mathcal{A}\otimes p_{2}^{*} \mathcal{B}\) for \(\mathcal{A}\in\mathrm{Coh}(\mathbf{G})\) and \(\mathcal{B}\in\mathrm{Coh}(Q)\). If \(\mathcal{U}_{\mathbf{G}}\) is the universal subbundle over \(\mathbf{G}\), then there is a _universal morphism_
\[\phi\::\:\mathcal{U}_{\mathbf{G}}\boxtimes\mathcal{O}_{Q}\longrightarrow \mathcal{O}_{\mathbf{G}}\boxtimes\mathcal{U}_{Q}(1)\;; \tag{17}\]
see [10]. Set \(\mathcal{J}:=\mathrm{coker}(\phi)\) and denote \(\mathcal{J}_{s}:=\mathcal{J}|_{\{s\}\times Q}\) for each point \(s\in\mathbf{G}\).
**Proposition 4.2**.: _For the cokernel sheaf \(\mathcal{J}\) of the map in (17), we have the following._
1. _For each point_ \(s\in\mathbf{G}\)_, the restriction_ \(\mathcal{J}_{s}\) _is isomorphic to a twisted ideal sheaf_ \(\mathcal{J}_{s}\cong\mathcal{I}_{L,S}(1)\) _for some hyperplane section_ \(S:=Q\cap H\) _and a line_ \(L\subset S\)_._
2. \(\mathcal{J}\) _is flat over_ \(\mathbf{G}\) _and thus the universal morphism_ \(\phi\) _in (_17_) is injective._
Proof.: If \(\mathcal{K}:=\ker(\phi_{s})\) is non-zero, then it is a reflexive sheaf of rank one on \(\{s\}\times Q\cong Q\). By the semistability of \(\mathcal{O}_{Q}^{\oplus 2}\) and \(\mathcal{U}_{Q}(1)\), the sheaf \(\mathcal{K}\) is isomorphic to a line bundle \(\mathcal{K}\cong\mathcal{O}_{Q}(k)\) for some \(k\in\mathbb{Z}\) and the slope condition of \(\mathrm{Im}(\phi_{s})\) gives \(0\leq-k\leq\frac{1}{2}\). Thus we get \(k=0\) and so \(\phi_{s}\) is not stable, i.e., \(\mathrm{rank}(\mathrm{H}^{0}(\phi_{s}))=1\). Therefore \(\phi_{s}\) is injective.
Now, for the inclusion \(i_{1}:\mathcal{O}_{Q}\to\mathcal{O}_{Q}^{\oplus 2}\) into the first component, we have a commutative diagram:
(18)
Here, \(\xi:=\phi_{s}\circ i_{1}:\mathcal{O}_{Q}\to\mathcal{U}_{Q}(1)\) is the composition map. Then we get \(\operatorname{coker}(\xi)\cong\mathcal{I}_{L,Q}(1)\) for some line \(L\subset Q\) by Proposition 4.1. The composite map \(i\circ\overline{\phi}_{s}:\mathcal{O}_{Q}\stackrel{{\overline{ \phi}}}{{\to}}\mathcal{I}_{L,Q}(1)\stackrel{{ i}}{{\hookrightarrow}} \mathcal{O}_{Q}(1)\) is not a zero-map and thus it is injective. Hence we get \(\operatorname{coker}(i\circ\overline{\phi}_{s})\cong\mathcal{O}_{Q\cap H}(1)\) for some hyperplane \(H\subset\mathbb{P}^{4}\). Let \(S:=Q\cap H\). Then the top row in the diagram (18) becomes
\[0\to\mathcal{O}_{Q}\cong\mathcal{I}_{S,Q}(1)\stackrel{{\overline{ \phi}_{s}}}{{\longrightarrow}}\mathcal{I}_{L,Q}(1)\to\mathcal{I}_{L,S}(1)\to 0\]
and so we get \(\mathcal{J}_{s}=\mathcal{I}_{L,S}(1)\), proving the assertion (1). Now \(\mathcal{J}_{s}\) has a constant Hilbert polynomial by the result of (1) and thus \(\mathcal{J}\) is flat over \(\mathbf{G}\), which confirms (2).
**Remark 4.3**.: By the proof of (1) of Proposition 4.2, each hyperplane section \(Q\cap H\) is parameterized by \(\mathbf{G}\). Hence one can consider the universal family of hyperplane sections in \(\mathbf{G}\times Q\). Furthermore, \(\mathbf{G}\cong\operatorname{Gr}(2,4)\) is the space of lines in \(\mathbb{P}\mathrm{H}^{0}(\mathcal{U}_{Q}(1))=\mathbb{P}^{3}\). On the other hand, the latter space \(\mathbb{P}^{3}\) is the Fano scheme of lines in \(Q\); see Proposition 4.1. Let \(\mathcal{C}\subset\mathbb{P}^{3}\times Q\) be the universal lines over \(\mathbb{P}^{3}\) with \(p:\mathcal{C}\to\mathbb{P}^{3}\) and \(q:\mathcal{C}\to Q\) the projection maps. Then it can be easily checked that the transform \(p(q^{-1}(S\cap H))\) of the hyperplane section \(Q\cap H\) becomes a line \(L\subset\mathbb{P}^{3}\), i.e., \([L]\in\mathbf{G}\).
From the proof of Proposition 4.2, we obtain the exact sequence
\[0\to\mathcal{U}_{\mathbf{G}}\boxtimes\mathcal{O}_{Q}\to\mathcal{O}_{\mathbf{G} }\boxtimes\mathcal{U}_{Q}(1)\to\mathcal{J}\to 0. \tag{19}\]
By applying the functor \(R^{\bullet}p_{1,*}((-)\boxtimes\mathcal{O}_{Q}(1))\) to the exact sequence (19), we have
\[0\to\mathcal{U}_{\mathbf{G}}\otimes\mathrm{H}^{0}(\mathcal{O}_{Q}(1))\to \mathcal{O}_{\mathbf{G}}\otimes\mathrm{H}^{0}(\mathcal{U}_{Q}(2))\to p_{1,*}( \mathcal{J}\boxtimes\mathcal{O}_{Q}(1))\to 0, \tag{20}\]
since \(\mathrm{H}^{1}(\mathcal{O}_{Q}(1))=0\). Since \(\mathrm{h}^{0}(\mathcal{O}_{Q}(1))=5\) and \(\mathrm{h}^{0}(\mathcal{U}_{Q}(2))=16\), we see that the direct image sheaf \(p_{1,*}(\mathcal{J}\boxtimes\mathcal{O}_{Q}(1))\) in (20) is a vector bundle of rank \(6\) on \(\mathbf{G}\).
**Proposition 4.4**.: _The moduli space \(\mathbf{S}_{3}(Q)\) is isomorphic to the projective bundle \(\mathbb{P}(p_{1,*}(\mathcal{J}\boxtimes\mathcal{O}_{Q}(1)))\)._
Proof.: Let \(\mathcal{G}:=p_{1,*}(\mathcal{J}\boxtimes\mathcal{O}_{Q}(1))\) and \(\pi:\mathbb{P}(\mathcal{G})\to\mathbf{G}\) be the bundle morphism. Then, there exists a commutative diagram:
where \(\overline{p}_{1}:\mathbb{P}(\mathcal{G})\times Q\to\mathbb{P}(\mathcal{G})\) is the projection map. Let
\[c\;:\;\overline{p}_{1}^{*}\mathcal{O}_{\mathbb{P}(\mathcal{G})}(-1)\to(\pi \times i)^{*}(\mathcal{J}\boxtimes\mathcal{O}_{Q}(1))\]
be the composition of the pullback of the tautological map
\[\mathcal{O}_{\mathbb{P}(\mathcal{G})}(-1)\to\pi^{*}\mathcal{G}\cong\overline{ p}_{1,*}(\pi\times i)^{*}\left(\mathcal{J}\boxtimes\mathcal{O}_{Q}(1)\right) \tag{21}\]
and the natural map
\[\overline{p}_{1}^{*}\overline{p}_{1,*}(\pi\times i)^{*}(\mathcal{J}\boxtimes \mathcal{O}_{Q}(1))\to(\pi\times i)^{*}(\mathcal{J}\boxtimes\mathcal{O}_{Q}(1)).\]
Let \(\overline{\mathcal{Z}}:=(\pi\times i)^{*}\mathcal{Z}\) be the pull-back of the universal hyperplane section \(\mathcal{Z}\subset\mathbf{G}\times Q\) parameterized by \(\mathbf{G}\); see Remark 4.3. We claim that the local extension space \(\mathcal{E}xt_{\overline{\mathcal{Z}}}^{1}(\text{coker}(c),\mathcal{O}_{ \overline{\mathcal{Z}}})\) is a flat family of stable sheaves over \(\mathbb{P}(\mathcal{G})\) with Hilbert polynomial \(3m+1\). Note that it is enough to check the claim fiberwise. Over a point \(x\in\mathbb{P}(\mathcal{G})\), the exact sequence
\[0\to\text{Im}(c)\to(\pi\times i)^{*}(\mathcal{J}\boxtimes\mathcal{O}_{Q}(1)) \to\text{coker}(c)\to 0\]
becomes a short exact sequence
\[0\to\mathcal{O}_{S}\to\mathcal{I}_{L,S}(2)\to\mathcal{T}\to 0,\]
where \(\mathcal{T}:=\text{coker}(c)_{x}\). By applying the dual functor \(\mathcal{H}om_{S}(-,\mathcal{O}_{S})\) to it, we have
\[0\cong\mathcal{H}om_{S}(\mathcal{T},\mathcal{O}_{S})\to \mathcal{H}om_{S}(\mathcal{I}_{L,S}(2),\mathcal{O}_{S}) \to\mathcal{H}om_{S}(\mathcal{O}_{S},\mathcal{O}_{S})\cong\mathcal{O }_{S}\] \[\to\mathcal{E}xt_{S}^{1}(\mathcal{T},\mathcal{O}_{S})\to\mathcal{E }xt_{S}^{1}(\mathcal{I}_{L,S}(2),\mathcal{O}_{S})\cong 0.\]
The first term vanishes because \(\mathcal{T}\) is one-dimensional. Also, the last vanishing is a special case of a more general vanishing \(\mathcal{E}xt_{S}^{i\geq 1}(\mathcal{I}_{L,S}(2),\mathcal{O}_{S})\cong 0\), which is obvious when \(S\) is smooth. If \(S\) is not smooth, one can check this by using Macaulay2 ([GS]). By computing the Hilbert polynomial of \(\mathcal{H}om_{S}(\mathcal{I}_{L,S}(2),\mathcal{O}_{S})\) and \(\mathcal{O}_{S}\), we conclude that \(\mathcal{E}xt_{S}^{1}(\mathcal{T},\mathcal{O}_{S})=\mathcal{O}_{C}\) for some twisted cubic curve \(C\subset Q\). Hence by the universal property of moduli space \(\mathbf{S}_{3}(Q)\), there exists a morphism
\[\Phi:\mathbb{P}(\mathcal{G})\longrightarrow\mathbf{S}_{3}(Q) \tag{22}\]
induced by the flat family \(\mathcal{E}xt_{\overline{\mathcal{Z}}}^{1}(\text{coker}(c),\mathcal{O}_{ \overline{\mathcal{Z}}})\) over \(\mathbb{P}(\mathcal{G})\).
Lastly, we prove that the induced map \(\Phi\) in (22) is an isomorphism. By Lemma 4.5 below and Zariski main theorem, it is enough to check that the map \(\Phi\) is generically one-to-one. Let us choose a smooth twisted cubic \(C\subset Q\) such that, for the linear span \(H_{0}:=\langle C\rangle\cong\mathbb{P}^{3}\), the hyperplane section \(Q\cap H_{0}=:S_{0}\) is a smooth quadric surface. In \(C\subset S_{0}\), the curve class \([C]\) in \(S_{0}\) is automatically determined. Hence the inverse image \(\Phi^{-1}([\mathcal{O}_{C}])\) is a unique point in \(\mathbb{P}(\mathcal{G})\).
**Lemma 4.5**.: _The rank of the Picard group \(\operatorname{Pic}(\mathbf{S}_{3}(Q))\) is \(2\)._
Proof.: Following the blowing-up/down diagram in [CHK12], we know that the rank of Picard group of \(\mathbf{S}_{3}(Q)\) is the same as that of \(\mathbf{M}_{3}(Q)\). Here \(\mathbf{M}_{3}(Q)\) is the moduli space of stable maps of degree \(3\) in \(Q\). But the Picard group of \(\mathbf{M}_{3}(Q)\) is generated by the boundary divisor of reducible curves and the locus of stable maps whose images meet a fixed line in \(Q\) (compare with the result in [Opr05]).
**Corollary 4.6**.: _The cohomology ring of \(\mathbf{S}_{3}(Q)\) is given by_
\[\mathrm{H}^{*}(\mathbf{S}_{3}(Q),\mathbb{Q})\cong\mathbb{Q}[c_{1},c_{2},h]/I,\]
\[I=\langle c_{1}^{3}-2c_{1}c_{2},c_{1}^{4}-3{c_{1}}^{2}c_{2}+c_{2}^{2},h^{6}-5c_ {1}h^{5}+(15c_{1}^{2}-5c_{2})h^{4}-40c_{1}c_{2}h^{3}+50c_{2}^{2}h^{2}\rangle\]
_where \(c_{1}=c_{1}(\mathcal{U}_{G})\), \(c_{2}=c_{2}(\mathcal{U}_{G})\) and \(h=c_{1}(\mathcal{O}_{\mathbb{P}}(1))\) is the hyperplane class of the projective bundle in Proposition 4.4._
Proof.: By the exact sequence (20) and the presentation of the cohomology ring of \(\mathrm{Gr}(2,4)\) in [1, Theorem 5.2], one can obtain the result.
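The relation displayed in \(I\) can be sanity-checked against (20): by the Whitney formula, \(c(\mathcal{G})=c(\mathcal{U}_{\mathbf{G}})^{-5}\), and the coefficients of \(h^{5},\dots,h^{2}\) must agree with \(c_{1}(\mathcal{G}),\dots,c_{4}(\mathcal{G})\) modulo the two Grassmannian relations. The following lines are a small symbolic sketch of this check, not part of the proof.

```python
import sympy as sp

c1, c2, t = sp.symbols('c1 c2 t')
gb = sp.groebner([c1**3 - 2*c1*c2, c1**4 - 3*c1**2*c2 + c2**2], c1, c2)   # relations of H*(Gr(2,4))

# Truncated expansion of c(G) = (1 + c1 + c2)^(-5), graded by the formal variable t.
x = c1*t + c2*t**2
cG = sp.expand(sum((-1)**k * sp.binomial(k + 4, 4) * x**k for k in range(5)))
coeff = [cG.coeff(t, i) for i in range(5)]                                  # c_0(G), ..., c_4(G)

expected = {1: -5*c1, 2: 15*c1**2 - 5*c2, 3: -40*c1*c2, 4: 50*c2**2}        # read off from I
for i, target in expected.items():
    assert gb.reduce(sp.expand(coeff[i] - target))[1] == 0                  # equal modulo the relations
# (c_5(G) and c_6(G) vanish in the ring, so no h^1 or h^0 terms appear.)
```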
2304.12836 | 2023-04-25T14:08:53Z | http://arxiv.org/abs/2304.12836v1

# Lessons Learned from a Citizen Science Project for Natural Language Processing

Jan-Christoph Klie, Ji-Ung Lee, Kevin Stowe, Gözde Gül Şahin, Nafise Sadat Moosavi, Luke Bates, Dominic Petrak, Richard Eckart de Castilho, Iryna Gurevych
###### Abstract
Many Natural Language Processing (NLP) systems use annotated corpora for training and evaluation. However, labeled data is often costly to obtain and scaling annotation projects is difficult, which is why annotation tasks are often outsourced to paid crowdworkers. Citizen Science is an alternative to crowdsourcing that is relatively unexplored in the context of NLP. To investigate whether and how well Citizen Science can be applied in this setting, we conduct an exploratory study into engaging different groups of volunteers in Citizen Science for NLP by re-annotating parts of a pre-existing crowdsourced dataset. Our results show that this can yield high-quality annotations and attract motivated volunteers, but also requires considering factors such as scalability, participation over time, and legal and ethical issues. We summarize lessons learned in the form of guidelines and provide our code and data to aid future work on Citizen Science.1
Footnote 1: [https://github.com/UKPLab/eacl2023-citizen-science-lessons-learned](https://github.com/UKPLab/eacl2023-citizen-science-lessons-learned)
## 1 Introduction
Data labeling or _annotation_ is often a difficult, time-consuming, and therefore expensive task. Annotations are typically drawn from domain experts or are crowdsourced. While experts can produce high-quality annotated data, they are expensive and do not scale well due to their relatively low number (Sorokin and Forsyth, 2008). In contrast, crowdsourcing can be relatively cheap, fast, and scalable, but is potentially less suited for more complicated annotation tasks (Drutsa et al., 2020). Another approach is using Citizen Science, which describes the participation and collaboration of volunteers from the general public with researchers to conduct science (Haklay et al., 2021). Over the past decade, Citizen Science platforms, which rely on unpaid volunteers to solve scientific problems, have been used for a wide variety of tasks requiring human annotation (Hand, 2010), e.g., classifying images of galaxies (Lintott et al., 2008) or for weather observation (Leeper et al., 2015).
While Citizen Science has been shown to produce high-quality annotations in ecological or environmental projects (Kosmala et al., 2016), its potential has so far not been investigated in depth for Natural Language Processing (NLP). Our goal in this work is to assess the practicality of undertaking annotation campaigns for NLP via Citizen Science. We analyze whether volunteers actually react to our calls and participate, how the resulting quality is compared to crowdsourcing, what the benefits and shortcomings are and what needs to be taken into account when conducting such a project. We especially are interested in differences between annotators recruited via different channels, which we investigate by advertising to different social media platforms, NLP-related mailing lists, and university courses. To explore this possibility, we use the perspectrum dataset (Chen et al., 2019, CC-BY-SA) that focuses on the task of stance detection and can be motivated by fighting misinformation and promoting accurate debate in internet discussions. We replicated a portion of the annotations in this dataset using citizen scientists instead of crowdworkers. To accomplish this goal, we designed an annotation workflow that is suitable for Citizen Science and allows us to recruit volunteers across a variety of platforms.
Our contributions are the following:
1. We provide a systematic study on Citizen Science across different channels and analyze turnout and quality. For this, we re-annotate parts of the perspectrum dataset using Citizen Science and compare these to the original, crowdsourced annotations.
2. We provide guidelines and recommendations on how to successfully conduct a Citizen Science project for NLP annotation and discuss critical legal and ethical aspects.
3. We provide a platform for future Citizen Science projects that handles onboarding, anonymous access, work assignment and the annotating itself.
Our results show that using Citizen Science for linguistic annotation can result in high-quality annotations, but that attracting and motivating people is critical for its success, especially in the long-term. We were able to attract 98 volunteers when conducting our Citizen Science project which resulted in 1,481 annotations over 2 months, thereby re-annotating around 10% of the original dataset. We find that annotations obtained through mailing lists and university students were of high quality when comparing them to the original, adjudicated crowdsourced data. We thus conclude that Citizen Science projects have the potential to be applied to NLP annotation if they are conceptualized well, but are best suited for creating smaller datasets.
## 2 Background
Prior work has developed various means and strategies for annotating large datasets. So far, annotation studies in NLP mainly use domain-experts or crowdworkers, or a mix of both (Nguyen et al., 2015). Crowdsourcing in particular has received increasing attention over the past decade (Wang et al., 2013).
**Paid Experts.** Recruiting domain experts (e.g., linguists) for annotation studies has been a widely accepted method to generate linguistically annotated corpora. Famous examples are the Brown Corpus (Francis and Kucera, 1979) or the Penn Treebank (Marcus et al., 1993). While the resulting datasets are of the highest quality, domain experts are often few, and such annotation studies tend to be slow and expensive (Sorokin and Forsyth, 2008). Although many researchers moved on to annotation studies that recruit crowdworkers, expert annotations are still necessary in various fields, e.g., biomedical annotations (Hobbs et al., 2021).
Figure 1: We advertised our project via various social media, mailing lists and university courses. Volunteers then are onboarded via the landing page and donated annotations via INCEpTION.

**Crowdsourcing.** To accelerate the annotation process and reduce costs, researchers have utilized crowdsourcing as a means to annotate large corpora (Snow et al., 2008). The main idea behind crowdsourcing is that annotation tasks that do not require expert knowledge can be assigned to a large group of paid non-expert annotators. This is commonly done via crowdsourcing platforms such as Amazon Mechanical Turk (AMT) or Upwork and has been successfully used to annotate various datasets across different tasks and domains (Derczynski et al., 2016; Habernal and Gurevych, 2017). Previous work compared the quality between crowdsourcing and expert annotations, showing that many tasks can be given to crowdworkers without major impact on the quality of annotation (Snow et al., 2008; Hovy et al., 2014; De Kuthy et al., 2016).
Although crowdworkers can substantially accelerate annotation, crowdsourcing requires careful task design and is not always guaranteed to result in high quality data (Daniel et al., 2018). Moreover, as annotators are compensated not by the time they spend but rather by the number of annotated instances, they are compelled to work fast to maximize their monetary gain--which can negatively affect annotation quality (Drutsa et al., 2020) or even result in spamming (Hovy et al., 2013). It can also be difficult to find crowdworkers for the task at hand, for instance due to small worker pools for languages other than English (Pavlick et al., 2014; Frommherz and Zarcone, 2021) or because the task requires special qualifications (Tauchmann et al., 2020). Finally, the deployment of crowdsourcing remains ethically questionable due to undervalued payment (Fort et al., 2011; Cohen et al., 2016), privacy breaches, or even psychological harm on crowdworkers (Shmueli et al., 2021).
**Games with a Purpose.** A related but different way to collect annotations from volunteers is _games with a purpose_, i.e., devising a game in which participants annotate data (Chamberlain et al., 2008; Venhuizen et al., 2013). Works propose games for different purposes and languages. For instance, anaphora annotation (PhraseDetectives, Poesio et al. 2013), dependency syntax annotation (Zombilingo, Fort et al. 2014), or collecting idioms (Ervigit et al., 2022). It has been shown that if a task lends itself to being gamified, then it can attract a wide audience of participants and can be used to create large-scale datasets (von Ahn, 2006). Finally, Lyding et al. (2022) investigate games with a purpose in the context of (second) language learning to simultaneously crowdsource annotations from learners as well as teachers. One such example is Substituto, a turn-based, teacher-moderated game for learning verb-particle constructions (Araneta et al., 2020). We do not consider gamification in this work, as enriching tasks with game-like elements requires considerable effort and cannot be applied to every task.
**Citizen Science.** Citizen Science broadly describes participation and collaboration of the general public (the citizens) with researchers to conduct science (Haklay et al., 2021). Citizen Science is a popular alternative approach for dataset collection efforts, and has been successfully applied in cases of weather observation (Leeper et al., 2015), counting butterflies (Holmes, 1991) or birds (National Audubon Society, 2020), classifying images of galaxies (Lintott et al., 2008) or monitoring water quality (Addy et al., 2010). Newly-emerging technologies and platforms further allow researchers to conduct increasingly innovative Citizen Science projects, such as the prediction of influenza-like outbreaks (Lee et al., 2021) or the classification of animals from the Serengeti National Park (Swanson et al., 2015). _LanguageARC_ is a Citizen Science platform for developing language resources (Fiumara et al., 2020). It is however not open yet to the public to create projects and does not easily allow conducting a Citizen Science meta-study as we do in this work. One work using LanguageARC is by Fort et al. (2022) who collected resources to evaluate bias in language models. They did not investigate the impact of using different recruitment channels which we do. Other projects using LanguageARC are still running and it is too early to derive recommendations from them.
Compared to crowdsourcing, Citizen Science participants are volunteers that do not work for monetary gain. Instead, they are often motivated intrinsically. For instance, they may have a personal interest in positively impacting the environment (West et al., 2021), or in altruism (Rotman et al., 2012). Asking for unpaid work also entails various issues like finding good ways of how to attract volunteers, and ethical considerations (Resnik et al., 2015; Rasmussen and Cooper, 2019) that need to be addressed (cf. §5). Intrinsic motivation also has the potential of resulting in higher-quality annotations compared to crowdsourcing. For instance, Lee et al. (2022) find in their evaluation study with citizen scientists that their participants may have been willing to take more time annotating for the sake of higher annotation accuracy. However, as their main goal was to conduct an evaluation study for their specific setup, this finding cannot be generalized to other Citizen Science scenarios. So far, only Tsueng et al. (2016) provide a direct comparison between crowdsourcing and Citizen Science and show that volunteers can achieve similar performance in mining medical entities in scientific texts. They recruit participants through different channels such as newspapers, Twitter, etc., but do not compute channel-specific performance, making it difficult to assess whether the quality of the resulting annotation depends on the recruitment channel. In contrast, in the present work, we explicitly consider the recruitment channel in our evaluation and furthermore provide a discussion and guidelines for future Citizen Science practitioners. Citizen Science also attracts intrinsically (not only fiscally) motivated volunteers that are often skilled in the task and can provide high-quality annotations, thus potentially combining the advantages of expert annotations and crowdsourcing. Relying on unpaid annotators entails several issues, including attracting volunteers and ethical considerations (Resnik et al., 2015; Rasmussen and Cooper, 2019) that need to be taken into account (see §5).
## 3 Study Design
To study the feasibility of Citizen Science for NLP annotation, we asked volunteers recruited via various channels to re-annotate an existing, crowdsourced dataset. The general setup is described in Fig. 1. To conduct a systematic study, we identified the following four necessary steps: 1) Identifying a suitable dataset (§3.1); 2) Selecting suitable recruitment channels to advertise our project on (§3.2); 3) Building a landing page for onboarding participants that asks for informed consent and the channel from which they originated (§3.3); 4) Setting up the annotation editor to which participants are forwarded after the onboarding (§3.4).
### Dataset selection
We first conducted a literature review of relevant crowdsourced NLP datasets to identify the ones that could be accurately reproduced via Citizen Science. We assessed datasets for the following two criteria: 1) **Availability**: the dataset must be publicly available to make proper comparisons in terms of annotator agreement; 2) **Reproducibility**: the annotation setup including annotation guidelines needs to be reproducible to ensure similar conditions between citizen scientists and crowdworkers. We focused on datasets that are targeted towards contributing to social good to encourage volunteers to participate. Unfortunately, many inspected datasets did not fulfill both of these requirements. Overall, we identified two main issues while screening over 20 candidate datasets. First, many datasets used Tweets which impacted reproducibility as Twitter only allows researchers to publish the tweet identifiers. This leads to irrecoverable instances when tweets were deleted. Second was the lack of precise guidelines. For instance, many considered datasets about societal biases lack explicit descriptions of what is considered a stereotype. As such biases are often also impacted by the respective cultural background of annotators, they are difficult to reproduce without specific guidelines.
Figure 2: Assigning a label to an instance in the INCEpTION text annotation platform.

In the end, we decided on the stance detection task of the perspectrum dataset (Chen et al., 2019). The task provides clear instructions, publicly available data, and is motivated by social good (fighting misinformation/promoting accurate debate in internet discussions). Each instance consists of a claim-perspective pair (cf. Fig. 2) and annotators are asked if the claim _supports_, _opposes_, _mildly-supports_, _mildly-opposes_, or is _not a valid perspective_. Following the original work, we also evaluated the annotations on a coarser tagset that only contains the categories for _support_, _oppose_ and _not a valid perspective_. Overall, the dataset consists of \(907\) claims and \(8,370\) different perspectives which yield \(11,805\) annotated instances. In preliminary studies, we received further feedback that forcing annotators to provide an explicit label for each instance could lead to increasing frustration, especially for ambiguous or complicated instances. To lessen the burden for our voluntary annotators and keep them motivated in the annotation task, we allowed them to skip instances (_Don't know/skip_) which was not present in the original annotation editor for perspectrum.
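For evaluation we collapse the fine-grained labels onto the coarse tagset exactly as in the original work; a minimal sketch of this mapping (the label strings here are illustrative, not the exact identifiers used in our codebase) looks as follows.

```python
# Map the five-way stance labels to the coarse tagset; skipped instances carry no label.
FINE_TO_COARSE = {
    "support": "support",
    "mildly-support": "support",
    "oppose": "oppose",
    "mildly-oppose": "oppose",
    "not a valid perspective": "not a valid perspective",
    "don't know/skip": None,
}

def coarsen(fine_label: str):
    """Return the coarse label, or None for skipped instances."""
    return FINE_TO_COARSE[fine_label.strip().lower()]
```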
### Recruitment channels
To recruit annotators, we advertised our project on three social media platforms, namely, Twitter, LinkedIn and Facebook. Unfortunately, after creating the Facebook organization and advertising the project, the account was banned due to "violating their community standards" and has so far remained banned. One of our team members then promoted our annotation study on their personal Facebook to attract participation from this social media platform. In addition, the team members advertised the work on Twitter and in relevant LinkedIn groups such as Computational Linguistics and Machine Learning and Data Science.
We further promoted the study via two external mailing lists (i.e., corpora-list, ml-news). Late in the project, we received interest from other faculty to advertise the task in their courses--an offer that we gladly accepted. For this, participation was completely voluntary and anonymous, students' grades were not affected by participation, and authors were not among the instructors. To evaluate different recruitment channels separately, we asked participants on the landing page to answer the question: "Where did you hear from this study?". We also allowed volunteers to not disclose how they found out about the study, this is referred to as "Other" or "Undisclosed" in this paper. Final participation counts are given in Fig. 3. We deliberately limited our outreach, e.g. we did not use university social media accounts or colleagues with large follower bases. Also, we made sure to not exhaust channels by posting too many calls for participation.
### Landing page
We implemented a customizable landing page web application catering to the needs of Citizen Science projects. The link to such a landing page was shared via the respective recruitment channels. The landing page contained information about the study itself, its purpose, its organizers, which data we collected, and its intended use. This landing page toolbox is designed so that it can easily be adapted to future Citizen Science projects. To allow project creators to use an annotation editor of their choice, we designed the toolbox to act as an intermediary that collects a participant's consent for the actual annotation study. This ensures that only participants that have been properly informed and have explicitly provided their consent are given access to the study. For future Citizen Science projects, the tool further assists organizers through the landing page creation process to foster an ethical collection of data by asking several questions, that are listed in the appendix.
Figure 3: Participants, annotations and annotations grouped by the channel via they were recruited. It can be seen that overall, most participants and annotations were contributed by annotators recruited via mailing lists. Annotators from mailing lists and courses yielded the volunteers who contributed the most individually.
### Annotation editor
INCEpTION Klie et al. (2018) offers a configurable, web-based platform for annotating text documents at span, relation and document levels. To make it usable in Citizen Science scenarios, we extended the platform with three features, namely, (1) the ability to join a project through a link, (2) support for anonymous guest annotators, and (3) a dynamic workload manager. Allowing citizen scientists to participate in the project anonymously as guests without any sign-up process substantially reduced the entry barrier and made it easier for us to satisfy data protection policies. The same is true for the ability of joining a project through an invite link. Upon opening the link, annotators were greeted with the annotation guidelines and were directly able to start annotating. Finally, we implemented a dynamic workload manager that takes as input the desired number of annotators per document and then automatically forwards annotators directly to the document instances requiring annotation. Upon finishing annotating an instance, INCEpTION was configured to automatically load and display the next instance for annotation, similar to popular crowdsourcing platforms. We also included rules for handling other issues that may occur with voluntary annotations such as recovering instances that annotators have started to work on but then abandoned. Additionally, we modified the existing user interface to improve the annotation workflow. This mainly included implementing a dedicated labeling interface that allows users to select a single label for an instance via a radio button group. Annotation of an instance thus required two user actions: first, selecting the document label, and second, confirming the annotation, thereby moving on to the next document.
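To illustrate the idea of the dynamic workload manager (the actual INCEpTION implementation differs in detail), the assignment rule can be sketched as follows: every instance should receive a target number of annotations, no annotator sees an instance twice, and nearly finished instances are preferred so that documents get completed early.

```python
def next_instance(instance_ids, annotations, annotator, target=3):
    """Pick the next instance for `annotator` (illustrative sketch, not the INCEpTION code).

    instance_ids: iterable of instance ids
    annotations:  dict mapping instance id -> set of annotator ids who already labeled it
    """
    def done_by(i):
        return annotations.get(i, set())

    candidates = [i for i in instance_ids
                  if len(done_by(i)) < target and annotator not in done_by(i)]
    # Prefer instances closest to completion; fall back to None when nothing is left.
    return max(candidates, key=lambda i: len(done_by(i)), default=None)
```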
## 4 Results
We conducted our study between January and March 2022 and promoted the task in successive rounds across all recruitment channels. In total, we were able to recruit 98 participants who provided 1481 annotations resulting in 906 fully annotated instances. Each instance with at least one annotation has received on average 1.63 annotations. Detailed statistics are provided in the appendix.
ParticipationTo identify promising channels for future Citizen Science studies, we report the number of annotators per channel, the total number of annotations per channel and per user (cf. Fig. 3). Overall, we find that the most effective channel for public outreach is mailing lists (55 participants). Asking students in university courses to participate was the second most effective with 14 participants. Facebook, LinkedIn, and Twitter only yielded three, four, and eight participants, respectively. We further find a highly skewed distribution of annotations per user, as many annotators only provide a few annotations while a few annotators provide many annotations. For instance, the most active annotators were two students who provided \(\sim\)80 annotations as well as six participants from mailing lists who provided \(\sim\)60-80 annotations each. For Twitter and "undisclosed", only a single annotator made over 60 annotations. We also find that on average, participants from university courses provided the most annotations per person. When looking at participation over time (see Fig. 5), we observe increased activity in annotations shortly after the call for participation was posted to the respective channel. For many channels, the count quickly flattens. Interestingly, Twitter sees a second spike long after the post was made. We attribute this to people sharing the post in our community quite a while after the initial release. We did not track whether individual volunteers came back for another round of annotations after their initial participation.
CoverageOverall, our 98 volunteers have provided 1,481 annotations to 906 unique instances (approximately 8% of the original dataset) over two months. This is comparable to other Citizen Science projects like Fort et al. (2022), which had 102 participants in total. They annotated three tasks and collected 2347, 2904 and 220 submissions over eight months. Table 1 shows the resulting coverage of our Citizen Science annotation study. While this still leaves room for improvement, the number of annotations collected nonetheless shows that Citizen Science can be viable in real life settings and is a promising direction to investigate in further studies, especially for creating focused and smaller-scale resources.
QualityIn terms of annotation quality, we find that most channels yield annotations that highly agree with the gold labels (cf. Table 2), even though our annotations are not adjudicated yet. We further find that volunteers from university courses and mailing lists show the highest accuracy, followed by Twitter and "undisclosed". Only LinkedIn yields an accuracy below 70% on the coarse label set.
For the majority of channels (with the exception of Facebook and LinkedIn), we only see a skip percentage of \(\sim\)10% (cf. Fig. 4). This indicates that our volunteers are actually willing to spend time and effort to solve the task at hand, as adding a "Don't know/skip" option in crowdsourcing is usually an invitation for workers to speed through the tasks and not provide useful annotations. The exception is Facebook, where the majority of annotations were labeled as _I don't know/skip_ (3 out of 5). Further analysis of the label distribution grouped by channel (cf. Fig. 4) shows that all channels except for Facebook display a similar distribution in terms of annotated labels. This indicates that we can expect a rather stable annotation performance across citizen scientists recruited from different channels.
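The per-channel accuracy and skip rates reported above can be computed with a few lines of analysis code; the sketch below assumes a hypothetical CSV export with columns `channel`, `label`, and `gold_label` and is not tied to our actual evaluation scripts.

```python
# Sketch of the per-channel quality analysis (column names and the skip label
# string are assumptions about a hypothetical export, not our actual scripts).
import pandas as pd

SKIP = "Don't know/skip"
df = pd.read_csv("annotations.csv")  # one row per annotation

skip_rate = (df["label"] == SKIP).groupby(df["channel"]).mean()

kept = df[df["label"] != SKIP]
accuracy = (kept["label"] == kept["gold_label"]).groupby(kept["channel"]).mean()

print(pd.DataFrame({"skip_rate": skip_rate, "accuracy": accuracy}).round(2))
```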
## 5 Discussion and Takeaways
Here we present lessons learned, discuss legal challenges and ethical considerations, as well as provide guidelines for future Citizen Science projects.
Channel-dependent differencesOur results clearly differ across recruiting channels. We find that overall, Facebook and LinkedIn have the lowest turnout and accuracy when compared to the gold labels, followed by Twitter. Our assumption for the overall low participation is that our network on these channels was not large enough. Advertising our study to NLP-related and university-internal mailing lists and university courses yielded the highest number of participants, who also provided the most and best-quality annotations. Although our results show that students may outperform participants from other channels, we also acknowledge that this may not always be a viable option to recruit citizen scientists. Overall, our findings indicate that it is important to address the respective target groups that may be interested in a specific study. However, we also note that continuously advertising Citizen Science studies to the same channels may have a negative impact, as it can cause participation fatigue and lead to fewer volunteers participating. One possible solution could be the use of LanguageARC (Fiumara et al., 2020) from the LDC to centralize calls for participation.
Table 2: Annotation accuracy compared to the crowdsourced and adjudicated data from perspectrum. The five annotations from Facebook (three of them were skipped) and _Don't know/skip_ annotations are omitted.

| Channel | Coarse | Fine |
| --- | --- | --- |
| University | 0.92 | 0.82 |
| LinkedIn | 0.69 | 0.62 |
| Mailing Lists | 0.90 | 0.82 |
| Undisclosed | 0.84 | 0.75 |
| Twitter | 0.85 | 0.73 |
Figure 4: Label distribution grouped by channel. Labels are _supports_ (++), _mildly-supports_ (+), _mildly-opposes_ (-), _opposes_ (- -), _not a valid perspective_ (I) and _Skip_ (S).
Figure 5: Annotations made over time. Vertical lines indicate when calls on the respective channels have been posted.
Motivating volunteersIn contrast to crowdsourcing, there is no monetary or other extrinsic motivation that could be used to attract Citizen Science annotators. Thus, annotator motivation is a crucial question for Citizen Science studies. As Fig. 5 shows, citizen scientists can be quickly motivated to participate, but can also quickly lose interest in a given annotation study. This can become an issue with a low number of participants, yet our results also indicate that we were able to find highly-motivated participants (8 out of 98 in our results).
Compared to other groups, university students in particular provided a high amount of quality annotations. Considering the findings by Phillips et al. (2018), who do not find statistical differences in terms of quality between students participating for course credit vs. no extrinsic reward--asking students to participate in such projects as part of their coursework might be another good option, but needs to ensure an ethical data collection. For instance, such an approach has been used to annotate the Georgetown University Multilayer Corpus (Zeldes, 2017). Nonetheless, one remaining question is how to keep participants motivated and have them return for several sessions, as our results indicate that a vast majority of our volunteers only participated in a single session and that participation stops shortly after a call has been posted to the respective channel.
Finally, we want to emphasize the inclusion of a _Don't know/skip_ option for Citizen Science annotators. Whereas in crowdsourcing studies annotators may exploit such an option to increase their gain (Hovy et al., 2013), the feedback we got during our pilot study suggests that it is crucial for keeping volunteers motivated. For this work, we did not provide a survey asking about motivation, as we thought that this might deter potential participants. We however suggest that future studies provide such a survey, as unintrusive as possible, to further analyze why participants take part in the respective annotation project.
Legal challengesOne substantial challenge in implementing Citizen Science studies is the potentially wide outreach they can have and, consequently, the varying kinds of data protection regulations they have to comply with. To preempt any potential issues that can arise--especially when data that can be used to identify a person (personal data, e.g. obtained during a survey or login credentials) is involved--we recommend that researchers who plan to implement a Citizen Science study consider the strictest regulations that are widely accepted.
For the GDPR (European Parliament, 2016), currently one of the strictest data protection regulations, we recommend researchers to explicitly ask voluntary participants for their informed consent when collecting personal information. This includes informing participants beforehand about (1) the purpose of the data collection, (2) the kind of personal and non-personal data collected, (3) the planned use of the data, (4) any planned anonymization processes for publication, and finally, (5) how participants can request access, change, and deletion of the data. We further recommend assigning one specific contact person for any questions and requests for access, change, or deletion of the data. This may seem like additional work when compared to crowdsourcing, but transparent and open communication is one of the key factors to build trust--which is necessary for voluntary participants to consider such studies and provide high-quality annotations. Finally, participants should be informed and agree to the annotations donated being published under a permissive license.
Ethical and economical considerationsAlthough Citizen Science can substantially reduce annotation costs, we emphasize the importance of considering an ethical deployment that does not compromise the trust of the participants. Moreover, given increasing concerns regarding the ownership and use of collected data (Arrieta-Ibarra et al., 2018), one should grant participants full rights to access, change, delete, and share their own personal data (Jones and Tonetti, 2020). This ensures that participants are not exploited for "free labor"--in contrast to approaches like reCAPTCHA (von Ahn et al., 2008), where humans are asked to solve a task in order to gain access to services. Whereas CAPTCHAs were initially intended to block malicious bots, they are becoming increasingly problematic due to their deployment and use by monopolizing companies which raises ethical concerns (Avanesi and Teuflings, 2022). It is especially important to take the data itself into consideration; exposing volunteers to toxic, hateful, or otherwise sensitive speech should be avoided if they are not informed about it beforehand.
RecommendationsOverall, we derive the following recommendations for future Citizen Science studies. 1) Our call for annotations resonated the most with the target group that is likely to benefit the most from contributing to it: NLP researchers reached via mailing lists and university students. Therefore, the target audience should be carefully selected, for instance by identifying topic-specific mailing lists or respective university courses. This further means that the purpose of data collection should be made clear and that the results should be made publicly available. 2) The research question of the study should conform to the respective ethical and legal guidelines of the potential target group, which should be clearly communicated to make the project accountable. 3) Participation should be easy, with clearly formulated annotation guidelines; moreover, the annotation itself should be thoroughly tested beforehand to ensure that participants do not get frustrated due to design errors or choices. For instance, in our preliminary study, we got the feedback that some instances are frustrating to annotate and hence added an option to skip. 4) Analyzing participation over time shows that a Citizen Science project has to be continuously advertised in order to stay relevant and achieve high participation. Otherwise, it will be forgotten quickly. This can be done by sharing status updates or creating preliminary results. 5) We recommend asking about user motivation before, during or after the annotation with a survey to better understand the participants and their demographics.
## 6 Conclusion
In this work, we presented an exploratory annotation study for utilizing Citizen Science for NLP annotation. We developed an onboarding process that can easily be adapted to similar projects and evaluated Citizen Science annotations for re-annotating an existing dataset. Furthermore, we extended the INCEpTION platform, a well-known open-source semantic annotation platform, with a dynamic workload manager and functionality for granting access to external users without registration. This enables its usage for Citizen Science projects. We advertised the study via Twitter, Facebook, LinkedIn, mailing lists, and university courses and found that participants from mailing lists and university courses are especially capable of providing high-quality annotations. We further discuss legal and ethical challenges that need to be addressed when conducting Citizen Science projects and provide general guidelines for conducting future projects that we would like to have known before starting. Overall, we conclude that Citizen Science can be a viable and affordable alternative to crowdsourcing, but is limited by successfully keeping annotators motivated. We will make our code and data publicly available to foster more research on Citizen Science for NLP and other disciplines.
Future WorkWe see the following directions for further research and evaluation to better understand in which settings Citizen Science can be applicable and how to use it best. Here, we used perspectrum as the dataset to annotate and mentioned in the participation calls that it benefits the social good. Therefore, it would be interesting to conduct more projects and see which datasets are suitable, as well as whether volunteers participate even if there is no extrinsic motivation. It could also be tested how annotator retention develops, especially when projects run for a longer time. The call for participation itself could also be investigated for the impact it has on turnout, motivation and quality.
## 7 Limitations
Throughout this article, we analyzed whether Citizen Science applies to linguistic annotation and showed that we can attract volunteers who donate a sizeable number of high-quality annotations. This work, however, comes with limitations that should be taken into account and tackled in future work. First, we based our analysis on a single annotation campaign and dataset that we advertised as being relevant for the social good. Therefore we suggest conducting more such annotation projects, also with different kinds of tasks. Second, we did not perform a user survey that, for instance, asked for user motivation. This is why we can only speculate about the motivation of our participants and suggest that future works explicitly prepare such a survey. Third, using Facebook as a channel might be viable, but we were not able to properly analyze it, as our account was blocked shortly after creation and was never reinstated. Finally, based on participation and annotation numbers, we see Citizen Science more as an option for annotating smaller datasets, or for longer-term projects that are more actively advertised than in our study, which took place over two months and for which we deliberately limited the outreach.
## Acknowledgments
We thank our anonymous reviewers, Michael Bugert and Max Glockner for their detailed and helpful comments to improve this manuscript. We are especially grateful for the discussions with Michael and Anne-Kathrin Bugert regarding our study setup. Finally, we thank the many volunteers that donated annotations for this project, as this work would not have been possible without their generous participation.
This work has been funded by the German Research Foundation (DFG) as part of the Evidence project (GU 798/27-1), UKP-SQuARE (GU 798/29-1), INCEpTION (GU 798/21-1) and PEER (GU 798/28-1), and within the project "The Third Wave of AI" funded by the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK). Further, it has been funded by the German Federal Ministry of Education and Research and HMWK within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
|
2301.06045
|
Detailed analysis of two simplified Rydberg dressed atoms confined in a
harmonic trap
|
By using a step-like potential, it is possible to mimic the Rydberg short
range part of the interaction between two atoms. It is easy in this case to
establish an analytical solution of the Schr\"{o}dinger equation. In this
contribution, we are analyzing in detail this simplified model by highlighting
the major players in different interaction schemes (strengths and ranges),
different dimensionalities and the impact on spatial correlation. We are able
to achieve an improvement to this model by applying a perturbation treatment to
the potential. The dynamical aspects related to a sudden change of the
potential features are also investigated.
|
Leila Chia, Nabila Grar
|
2023-01-15T08:51:50Z
|
http://arxiv.org/abs/2301.06045v3
|
# Detailed analysis of two simplified Rydberg dressed atoms confined in a harmonic trap
###### Abstract
By using a step-like potential it is possible to mimic the Rydberg short range part of the interaction between two atoms. It is possible in this case to establish an analytical solution of the Schrodinger equation. In this contribution we are analyzing in detail this simplified model by highlighting the major players in different interaction schemes (different strengths and ranges), different dimensionalities and the impact on spatial correlation. We are able to achieve an improvement to this model by applying a perturbation treatment to the potential. The dynamical aspects related to a sudden change of the potential features are also investigated.
_Keywords_: two atoms in a harmonic trap; cold Rydberg atoms; analytical solution of the Schrodinger equation; correlations.
## 1 Introduction
Matter is a huge and intricate assembly of "fundamental" constituents in which the individuality of these constituents is seemingly lost. Understanding how a finite number of microscopic elements gives rise to the macroscopic structure is nevertheless of paramount importance for both nuclear and condensed-matter physics. The aim is not only to comprehend the structure of the constituents but also to elucidate the correlations and the interplay among them. On the other hand, experiments involving cold, confined few-particle systems are nowadays very accessible and have become a matter of routine. In fact, a number of the system's parameters, such as the confinement potential and the features of the particle-particle interaction, can be controlled on demand [1, 2]. This way, it is possible to verify experimentally the validity of a number of simplified quantum models studied in the past and to explore fundamental physics concepts. It is also true that more effort is needed to devise new "toy" models aiming at a detailed comprehension of the interaction features at different levels of approximation as well as in different dimensionalities (1D, 2D and 3D). Exact solutions of the Schrodinger equation established for different, sometimes complicated, potentials can be found in the literature (see for example references [3, 4, 5, 6, 7]). Most of these solutions are given for single-particle systems. The situation becomes quite complicated when considering the case of two particles as a first step on the path towards the description of cold confined mesoscopic systems [8, 9, 10]. The difficulty resides in taking into account both the characteristics of the confinement potential and a realistic interaction potential. The hard-core interaction is the most simplified interaction scheme, and in this case it is possible to achieve a quasi-exact solution for the two-particle system. A theoretical work encompassing the three dimensionalities and a delta-like interaction for a system of two particles was elaborated by Busch et _al._[11, 12]. A quasi-exact solution is hence established where the interaction can be considered to be of contact nature (an s-wave for bosons and a p-wave for fermions). In order to take into account a certain range of the interaction, a Gaussian-like potential can be considered, and in this case a quasi-analytical solution can also be achieved [13, 14]. These interaction models however ignore the long-range nature of the interaction for dipolar atoms or the Rydberg-dressed interaction behaving like \(1/r^{6}\), which can be very important both fundamentally and experimentally [15, 16]. An analytical solution for this interaction is still to be found. Nevertheless, a simplification of this interaction as a step function was proposed by Koscik et _al._[17]. It is possible in this case to reach quasi-exact solutions in one and two dimensions, and a study of the different features of the system was elaborated. This kind of quasi-solvable model is of extreme importance for advances in cold confined few-particle systems. It can be considered as a set of models to be validated experimentally as well as an exact basis on which to construct the solution for few-body systems exploiting different strategies (such as variational,
ab initio, configuration interaction...) [8, 9, 10]. The aim of our present study is to elaborate a comparative analysis of the quasi-exactly solvable model of Koscik et _al._ in the three dimensionalities and to highlight the most important players for the considered interaction. We are unavoidably concerned with the analysis of the spatial correlation as an important part of the information about the studied system. We also propose an improvement to the cited model by applying a perturbation treatment to the potential. We further address the most important results of the dynamical evolution of the system under a sudden change of the potential features and how this evolution affects the correlation. The paper is organized as follows. In the second section we recall the most important formulas to be used in the following sections. In the third section we analyze the energy spectra for the relative solution of the Schrodinger equation for the system of two particles for different schemes (strength and range) of the interaction; the aim is to single out the major players for the different dimensionalities. In the fourth section the radial spatial correlation is analyzed for the three dimensionalities. The interplay between the repulsive centrifugal effect, the range, and the strength of the interaction is studied in detail. In the fifth section we propose an improvement to the model of Koscik et _al._ by exploiting a perturbation treatment of the potential. In the sixth section a dynamical study of the system is elaborated and the effects of a sudden change of the interaction parameters on the system's correlation are investigated. The main results are summarized in the final conclusion.
## 2 Theoretical approach
The aim of the different models is to establish an analytical solution for the following Schrodinger equation for a system of two identical spinless quantum particles having a mass \(m\) and trapped in an external potential:
\[\left(\sum_{i=1}^{2}\frac{-\hbar^{2}}{2m}\nabla_{i}^{2}+v_{ext}+v\right) \psi(\overrightarrow{r_{1}},\overrightarrow{r_{2}})=E\psi(\overrightarrow{r _{1}},\overrightarrow{r_{2}}) \tag{1}\]
Where \(v_{ext}\) is the confining potential, \(v\) is the interaction potential depending on the particles separation and \(\overrightarrow{r_{i}}\) is the vector position for each particle. To simplify the calculation the two particles are considered to be structureless i.e pointlike. The confining potential is considered to be harmonic and the dimensions considered in the harmonic oscillator will define the constraint on the motion of the particles and consequently will define the dimensionality of the problem[18, 19]. The same confining potential is imposed to both particles and the equation becomes:
\[\begin{split}\Bigg{(}\Bigg{(}\sum_{i=1}^{2}\frac{-\hbar^{2}}{2m} \nabla_{i}^{2}+\frac{1}{2}m\omega^{2}r_{i}^{2}\Bigg{)}+v(|\overrightarrow{r _{1}}-\overrightarrow{r_{2}}|)\Bigg{)}\psi(\overrightarrow{r_{1}}, \overrightarrow{r_{2}})=\\ E\psi(\overrightarrow{r_{1}},\overrightarrow{r_{2}})\end{split} \tag{2}\]
For this quadratic potential it is possible to single out the center of mass contribution to the motion from the relative one and we can write the equation as:
\[\begin{split}\Bigg{(}\frac{-\hbar^{2}}{2M}\nabla_{R}^{2}+\frac {1}{2}M\omega^{2}R^{2}+\frac{-\hbar^{2}}{2\mu}\nabla_{\overrightarrow{r}}^{ 2}+\frac{1}{2}\mu\omega^{2}r^{2}+v(r)\Bigg{)}\psi(\overrightarrow{R}, \overrightarrow{r})\\ =E\psi(\overrightarrow{R},\overrightarrow{r})\end{split} \tag{3}\]
where \(M=2m,\mu=m/2\) (the reduced mass), \(R=|\overrightarrow{r_{1}}+\overrightarrow{r_{2}}|/2\) and \(r=|\overrightarrow{r_{1}}-\overrightarrow{r_{2}}|\). The wave function can be written in a separable form as:
\[\psi(\overrightarrow{R},\overrightarrow{r})=\varphi(\overrightarrow{R}) \phi(\overrightarrow{r}) \tag{4}\]
Consequently we can separate the center of mass motion from the relative one as:
\[\begin{split}\Bigg{(}\frac{-\hbar^{2}}{2M}\nabla_{R}^{2}+\frac {1}{2}M\omega^{2}R^{2}-E_{c}\Bigg{)}\,\varphi(\overrightarrow{R})=0\end{split} \tag{5}\]
\[\begin{split}\Bigg{(}\frac{-\hbar^{2}}{2\mu}\nabla_{\overrightarrow {r}}^{2}+\frac{1}{2}\mu\omega^{2}r^{2}+v(r)-E_{r}\Bigg{)}\,\phi(\overrightarrow {r})=0\end{split} \tag{6}\]
with \(E=E_{c}+E_{r}\). The first equation is just an equation for a harmonic oscillator with known solutions and the difficulty resides in finding a solution for the second equation where handling a realistic interaction can be quite challenging. It is important to remind here that the symmetry of the total wave function is dependent only on the relative part of the wave function since the center of mass part is symmetric by construction. In one dimension (say for example \(x=x_{1}-x_{2}\)) the relative equation reduces to :
\[\begin{split}\Bigg{(}\frac{-d^{2}}{dx^{2}}+\frac{1}{4}x^{2}+v(x )-E_{r}\Bigg{)}\,\phi(x)=0\end{split} \tag{7}\]
Here the equation is written such as the energy and the position are expressed in \(\hbar\omega\) and \(\sqrt{\frac{\hbar}{m\omega}}\) units respectively. For two dimensions we convert to polar coordinates \(\overrightarrow{r}\rightarrow(r,\varphi)\) and with writing the relative wave function as:
\[\begin{split}\phi(r,\varphi)=\frac{f(r)}{\sqrt{r}}e^{\pm i \varphi}\end{split} \tag{8}\]
the equation for the relative-radial motion becomes :
\[\begin{split}\Bigg{(}\frac{-d^{2}}{dr^{2}}+\frac{l^{2}-1/4}{r^{ 2}}+\frac{1}{4}r^{2}+v(r)-E_{r}\Bigg{)}\,f(r)=0\end{split} \tag{9}\]
where \(l\) is the angular momentum quantum number and it is expressed in \(\sqrt{\hbar m\omega}\) units. The second term in this equation represents the centrifugal potential. For three dimensions we use spherical coordinates \(\overrightarrow{r}\rightarrow(r,\theta,\varphi)\) and the relative wave function is written as:
\[\begin{split}\phi(r,\theta,\varphi)=\frac{1}{r}f(r)y_{l}^{m}( \theta,\varphi)\end{split} \tag{10}\]
\(y_{l}^{m}(\theta,\varphi)\) being the spherical harmonics.
The equation for the relative-radial part is then given as :
\[\begin{split}\Bigg{(}\frac{-d^{2}}{dr^{2}}+\frac{l(l+1)}{r^{2}} +\frac{1}{4}r^{2}+v(r)-E_{r}\Bigg{)}\,f(r)=0\end{split} \tag{11}\]
First notice here that we can shift from equation (9) to (11) by operating the following change:
\[l_{2D}\to l_{3D}+1/2 \tag{12}\]
This will signify that it is possible to find the solution for the 3D case by just solving the equation for the 2D case but
respecting the previous relation between the two angular quantum numbers [17]. Secondly, the relative wave function depends on a principal quantum number \(n\) in 1D, on (\(n\),\(l\)) in 2D and on (\(n\),\(l\),\(m\)) in 3D. The total wave function, however, is symmetric for even \(n\) in 1D and even \(l\) in 2D and 3D, and antisymmetric for odd \(n\) in 1D and odd \(l\) in 2D and 3D. As is well known, a symmetric total wave function defines a bosonic state and, conversely, an antisymmetric total wave function defines a fermionic one.
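For later reference, the effective radial potentials entering equations (7), (9) and (11), together with the mapping of equation (12), can be encoded compactly as in the following sketch (written in oscillator units; it is an illustration only and not the code used for the calculations reported below).

```python
# Effective potential of Eqs. (7), (9) and (11) in oscillator units:
# centrifugal term + harmonic trap (r^2/4) + interaction v(r).
def effective_potential(r, l, dim, v):
    if dim == 1:
        centrifugal = 0.0
    elif dim == 2:
        centrifugal = (l**2 - 0.25) / r**2
    elif dim == 3:
        centrifugal = l * (l + 1) / r**2
    else:
        raise ValueError("dim must be 1, 2 or 3")
    return centrifugal + 0.25 * r**2 + v(r)

def l_2d_equivalent(l_3d):
    # Eq. (12): the 3D radial problem maps onto the 2D one with l -> l + 1/2.
    return l_3d + 0.5
```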
Solving equations (7), (9) and (11) will rely on the form considered for the interaction potential. As a matter of fact, different forms of interaction have been considered and studied in the literature, starting from the hard-core or contact-like potential defined as:
\[v(r)=\begin{cases}g_{hc}\delta(x-x_{0})&for\ x\geq 0\\ g_{hc}\delta(r-r_{0})\dfrac{\partial}{\partial r}r&for\ 2\ and\ 3\ dimensions\end{cases} \tag{13}\]
Where \(g_{hc}\) is the strength of the hard core interaction. We notice that the delta function must be regularized in order to avoid singularity in two and three dimensions[11, 12]. Taking this into account it is possible to solve quasi exactly the problem for the three dimensionalities. In order to introduce a certain finite range in the interaction, the considered potential can be a gaussian shaped one given by [13, 14]:
\[v(r)=\dfrac{g_{g}}{s^{2}}\exp\!\left(-\dfrac{|\overrightarrow{r_{2}}-\overrightarrow{r_{1}}|^{2}}{s^{2}}\right) \tag{14}\]
Where \(g_{g}\) is the Gaussian interaction strength and \(s\) is its range. It is possible in this case to achieve an approximate analytical formula that gives the energy spectra and allows the study of the effect of the finite-range interaction on the features of the system. To take into account the long-range nature of the interaction, i.e., the interaction between two non-symmetric neutral charge distributions, we should consider excitations with multipolar properties. When atoms are excited to large principal quantum numbers, these are known as Rydberg states. The potential for this interaction can be approximated to first order as composed of a short-ranged part to which we add a van der Waals long-ranged interaction, this last one being the main contribution from the multipolar excitations. In this case the interaction potential can be given as [17, 20]:
\[v(r)=\dfrac{g}{1+\left(\dfrac{r}{R_{c}}\right)^{6}} \tag{15}\]
Where \(g\) gives the strength and \(R_{c}\) the range of the potential, respectively (see fig.1). We will call this potential the Rydberg interaction in the following sections. It is not yet possible to find an exact solution to equations (7), (9) and (11) with this realistic interaction and under the harmonic confinement. Nonetheless, a quasi-exact solution is achieved for a potential defined as a step function [17] that mimics the previous expression quite fairly for the short-range part and then falls abruptly to zero, and is given as:
\[v(r)=\begin{cases}v_{0}&for\ r\leq a\\ 0&for\ r>a\end{cases} \tag{16}\]
Where we can relate \(v_{0}\) and \(a\) to the strength and the range (\(g\) and \(R_{C}\)), respectively. This simplification is justified by the fact that the main contribution to the realistic potential comes from the flat part. In this case it is possible to establish a quasi-exact solution by reducing the radial equation to a Weber form in the case of one dimension and a Kummer form for two and three dimensions. The solution is expressed as a function of the confluent hypergeometric function of the first kind in the region \([0,a]\) and as a function of the Tricomi function elsewhere [17, 21, 22, 23]. In order to guarantee a physical behavior of the whole solution, a condition for the continuity of the two functions and their derivatives is imposed at \(r=a\), and this gives rise to transcendental equations. The solution of these equations leads to the quantization of the energy, which allows one to retrieve the energy spectrum for different combinations of the strength \(v_{0}\) and the range \(a\).
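As an illustration of how such eigenenergies can be obtained numerically (a shooting procedure of this kind is also used for the calculations reported in the next section), the following sketch integrates the 1D relative equation (7) outward from the origin for the step potential (16) and brackets the energies by the sign of \(f\) at a large cutoff. It is a simplified stand-in written for clarity, not the authors' C programs; the grid sizes and the energy window are arbitrary choices.

```python
import numpy as np

def v_step(x, v0, a):
    # Step interaction of Eq. (16): v0 inside the range a, zero outside.
    return v0 if x <= a else 0.0

def shoot(E, v0, a, parity="even", x_max=10.0, n=2000):
    """Integrate -f'' + (x^2/4 + v(x)) f = E f from x = 0 and return f(x_max)."""
    def rhs(x, f, df):
        return df, (0.25 * x**2 + v_step(x, v0, a) - E) * f
    h = x_max / n
    # Boundary conditions at the origin: even (bosonic) or odd (fermionic) states.
    f, df = (1.0, 0.0) if parity == "even" else (0.0, 1.0)
    x = 0.0
    for _ in range(n):
        k1 = rhs(x, f, df)
        k2 = rhs(x + h/2, f + h/2*k1[0], df + h/2*k1[1])
        k3 = rhs(x + h/2, f + h/2*k2[0], df + h/2*k2[1])
        k4 = rhs(x + h,   f + h*k3[0],   df + h*k3[1])
        f  += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        df += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        x += h
    return f

def eigenenergies(v0, a, parity="even", e_min=-10.0, e_max=10.0, de=0.05):
    """Bracket eigenvalues by sign changes of f(x_max) and refine by bisection."""
    grid = np.arange(e_min, e_max, de)
    vals = [shoot(E, v0, a, parity) for E in grid]
    roots = []
    for e1, e2, y1, y2 in zip(grid, grid[1:], vals, vals[1:]):
        if y1 * y2 < 0:
            lo, hi, ylo = e1, e2, y1
            for _ in range(40):
                mid = 0.5 * (lo + hi)
                ymid = shoot(mid, v0, a, parity)
                if ylo * ymid < 0:
                    hi = mid
                else:
                    lo, ylo = mid, ymid
            roots.append(0.5 * (lo + hi))
    return roots

# Example: lowest bosonic (even) levels for an attractive step, v0 = -5, a = 1.
print(eigenenergies(-5.0, 1.0, parity="even")[:3])
```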
## 3 Energy spectrum and impact of the dimensionality
In order to study the effect of the interaction strength and range, we have developed programs in the C language for the three dimensionalities. The continuity equation is solved using the shooting numerical method [24, 25] in order to find the eigenenergy for different couples \((v_{0},a)\). The wave function is then used to calculate the probability density and other related quantities. We follow the prescription given in equation (12) to extend the results already known for one and two dimensions to three dimensions. Let us now show some of the results we can achieve by exploiting the solutions provided by the previous model, bearing in mind that the comparison is made relative to \(v_{0}=0\), where we retrieve the simple equidistant spectrum of a harmonic oscillator. First we can show the effect of different values of the couple \((v_{0},a)\) on the energy spectrum for the 2- and 3-dimensional cases. On figure 2.a we depict the energy of the fundamental level \(n=0\) with different values of the angular momentum quantum number \(l\)=0,1,2,3 and 4. The energy is plotted versus \(v_{0}\) and the different panels are for different
Figure 1: Comparison between the realistic Rydberg interaction, the step function and the difference between these two potentials. \(v_{0}\) is considered to be equal to 5 and the range is equal to one. The difference is considered as a perturbation (see section 5).
Figure 4: (a) Energy spectrum in one dimension versus \(v_{0}\) with increasing value of \(n\) (\(n\)=0,1...8 from the bottom to the top) with even values of \(n\) for bosons (black) and odd ones for fermions (red). (b) The same calculation versus the range for different values of the potential strength. The value of \(v_{0}\) is indicated in each panel
Figure 3: (a) Energy spectrum for the fundamental relative-radial state (\(n\)=0) versus \(v_{0}\) with increasing value of the angular momentum quantum number (\(l\)=0,1,2,3,4 from the bottom to the top) in two dimensions. The even values of \(l\) are for bosons and odd ones are for fermions. (b) The same calculations versus the range for different values of the potential strength. The value of \(v_{0}\) is indicated in each panel.
Figure 2: (a) Energy spectrum for the fundamental state (n=0) versus \(v_{0}\) with increasing value of the angular momentum quantum number (\(l\) = 0,1,2,3,4 from the bottom to the top) in three (red) and two (black) dimensions. (b) The same calculations versus the range for different values of the potential strength.
ranges of the interaction. We can see on this figure that levels with increasing value of the angular quantum number \(l\) are affected by the interaction as its range is increased, and that the most important impact is observed when the interaction is attractive (negative \(v_{0}\)). In this case the eigenenergy becomes more negative as the interaction becomes more attractive, forcing the system into a bound state. When redoing the same figure for \(n=1\) (see Fig.2.b) one can see that the energy levels are less affected by the interaction and we have to reach a range as high as 1.25 to obtain a noticeable change in the attractive part of the interaction. It is noticeable, however, that even for the extreme strength and range of the attractive regime the change of the curve is not straight, as it is for \(n=0\), but proceeds via an alternation of increases and plateau-like regions. When comparing the results obtained for two dimensions and three dimensions we can notice the regularity with which the levels react to the interaction, whether in the fundamental principal states (\(n=0\)) on figure 2.a or the first principal excited states (\(n=1\)) on figure 2.b, the three-dimensional levels being always higher in energy as the centrifugal potential is naturally higher in this case (\(l_{2D}=l_{3D}+1/2\)). These findings show the effect of the centrifugal potential, which scales as the square of the angular momentum quantum number and acts in a way to repel the system to a separation where it does not feel the effect of the attractive part of the interaction. It is interesting to notice the decrease of the gap between the first curve \(l=0\) and the curve for \(l=1\) in the extreme repulsion for important ranges in two dimensions, making these two levels tend towards degeneracy. This result shows that in this regime the repulsion is nearly equal to the amount of centrifugal potential corresponding to \(l=1\), i.e., to the first fermionic level. This forced degeneracy can lead to the fermionization of the first bosonic state, as is the case in 1D. We can notice in the same manner the tendency to "degeneracy" for the three first levels (\(l=0\) for 2D, \(l=1\) for 2D and \(l=0\) for 3D) in the repulsive regime and for a large interaction range. The \(m\) quantum number (projection of \(l\)) is not relevant for the case of 3 dimensions since the energy depends only on \((n,l)\).
The representation of the energy versus \(v_{0}\) for different values of the range is frequently used to show the effect of the interaction on the energy spectrum. We want to show in the following figures the alternate representation: the energy spectrum versus the range for different values of \(v_{0}\). We are comparing the two representations on figure 3 for the case of two dimensions. The advantage of representing the energy versus the range over the usually used representation is that it can show the critical range at which we can observe the onset of any changes in the different curves. In figure 3.b and for the attractive regime one can see that the point of inflection of the curves for increasing values of \(l\) increases gradually. One would expect naively that a difference would exist between the behavior of fermionic and bosonic states, since the former are naturally exposed to an additional repulsion due to the Pauli exclusion principle. We can see clearly on the figure that this is not the case and that the main parameter dictating the critical range at which the inflection occurs is the angular quantum number, in connection with the strength of the interaction. This is the case even for the first level, \(l=0\), where the onset of the inflection of the curve is not zero but a certain finite value (see section 4 for more details). We can state here that the angular momentum is _washing out_ the effect of the statistics. For the repulsive regime, however, the centrifugal potential and the interaction act in the same direction, and this occurs in a very monotonic manner, consequently pushing the energy to higher values.
These results are to be contrasted with the case of one dimension in figure 4.a, where the energy is represented versus \(v_{0}\). We can see that the absence of an angular momentum means that all the curves are more or less equally affected by the interaction, the effect being dependent only on the range and the strength of the interaction, and that when the interaction is extremely repulsive the bosons and fermions tend to the same limit. This is related to the so-called fermionization or Tonks-Girardeau limit [26], where the bosons' properties are similar to those of non-interacting fermions (except for the momentum distribution). Notice that these results are already reported in [27]. When using the second representation (Fig.4.b), one can find that the onset of the inflection is different when comparing bosonic and fermionic states. Indeed, for the first bosonic state the inflection starts from zero, whereas for the first fermionic state a certain critical range has to be reached for the inflection to occur. For the higher bosonic levels a peculiar behavior is observed: a decreasing behavior starting from zero, then a plateau, then an inflection, then a second plateau in the extreme attractive regime. The extent of the plateau is nearly the same for these levels. For the high fermionic levels we can observe a critical range (which is nearly the same for these levels), then a first inflection, then a plateau, then seemingly another small inflection of the curves in the extreme attractive regime. The extent of the plateaus and the critical ranges is obviously also dependent on the interaction strength. The understanding of the behavior of the first bosonic and fermionic level is quite straightforward and is due to the additional repulsion resulting from the fermionic states. We can of course notice that the same curves in the first representation are not purely monotonic as they are for the first two levels. This implies that the behavior of the curves of the higher energy levels changes for certain strengths and certain ranges of the interaction. For the repulsive regime and in the second representation we can observe the same tendency to fermionization, except that in this case the limits are not flat as seen in the first representation but continue to increase monotonically with increasing value of the range.
## 4 Spatial Correlations
Exploiting the wave function derived from the previous calculations, it is possible to deduce the probability density distribution as a tool to investigate the possible spatial correlations. We wish to show in a very simplified manner, avoiding all the tricky multidimensional plots, the influence of the interaction features on the spatial location of the two particles relative to the trap center. This is performed for the three dimensionalities and two different states of the system. In order to understand the goal behind the next comparisons, a few remarks are in order:
1. Studying the spatial correlation implies the analysis of a radial correlation for the three dimensionalities and also an angular correlation (see for example Ref. [17]) in two and three dimensions. For our calculations we are not considering the angular correlation and we are concerned only with singling out the effect of the dimensionality if the system is to have only a radial correlation.
2. To operate the comparison between different calculations we should in principle have the total normalized probability density but since we are just studying the relative radial part of the wave function this will unavoidably introduce some normalization issues between the three dimensionalities. To remedy this problem we propose to do the calculation with some arbitrary normalization, then the same normalization with the same calculations (changing only the interaction parameters) is elaborated to deduce the effect on the spatial correlation.
3. Two states of the system are concerned with these calculations: the first bosonic state (the lowest state with a symmetric total wave function) and the first fermionic state (the lowest state with an anti symmetric total wave function). In one dimension the first bosonic state is given for \(n=0\) and the first fermionic state is given for \(n=1\). Whereas in two and three dimensions the first bosonic state is given for \(n=0,\ l=0\) and the first fermionic state is given for \(n=0,\ l=1\).
4. Setting \(l=0\) in 3D implies that \(l=1/2\) in 2D, as explained before. Consequently, though the centrifugal potential is equal to zero in this case, \(l_{2D}=1/2\) still enters the arguments of the confluent hypergeometric functions of the 3D equation and consequently affects the solution. Similarly, setting \(l=0\) in 2D will not annihilate the centrifugal potential, since in this case we are left with the residual term \(\frac{-1/4}{r^{2}}\).
5. In the results that follow we are depicting one half of the trap. The other half can be deduced symmetrically relative to the \(y\) axis, the origin being the trap center.
Once all these points are made clear, we present in the next figure calculations that are exactly the same except for the change of the interaction range. On figure 5.a we show the radial relative probability distribution for the first bosonic state (first row) and the first fermionic state (second row). The calculations are done for an intermediate value of the range \(a=1\) and for three values of \(v_{0}\): (-5, 1 and 5) (the three columns of the figure). In each panel we depict the results for the three dimensionalities. This figure is taken as a reference since, even if the normalization is arbitrary, we will maintain the same value for all the parameters (except for the interaction parameters) of this first calculation when moving to the next two sub-figures. Notice that we changed the normalization of the calculation in 1D to make it more or less coincide with the other two calculations for the bosonic state. This is done in order to be able to track visually the difference with the next calculations, but once again this is just an arbitrary choice.
We can see clearly for the first row and for the attractive value of \(v_{0}\) (-5) that the two particles are mostly in the same position at the center of the trap. This is possible because the particles are bosons in this case. When the interaction is repulsive (\(v_{0}=5\)) the two particles start to localize at the far ends of the trap, with a non-zero probability of being at the center of the trap. \(v_{0}=1\) is illustrated here to see the evolution from the attractive interaction to the very repulsive one. For the three panels in the first row the 3D calculation is always higher, then we have the 2D one, and then comes the 1D calculation. The tiny difference in the first panel is magnified on the scale of the second and third panels. For the fermionic state (the second row of the figure) we can see that, in accordance with the Pauli principle, the likelihood of the two particles being in the same position is zero. In the attractive regime (\(v_{0}=-5\)) the 1D calculation is higher, then we have
Figure 5: Comparison of the radial relative probability distribution for the first bosonic state (first row) and the first fermionic state (second row) for the three dimensionalities (3D, 2D and 1D) for \(v_{0}\)=-5, 1 and 5 (the three columns). In (a) the range is 1, in (b) the range is 1.25 and in (c) the range is 0.5.
the 2D one and then comes the 3D calculation. In the two left panels (\(v_{0}=1\) and \(v_{0}=5\)) where the interaction is repulsive, the calculations tend to nearly the same limit. Why is the situation so? As explained before, in this first figure this is the result of our arbitrary normalization choice. Let us now keep all the calculation parameters invariant and change only the range of the interaction. In figure 5.b we consider \(a=1.25\). When comparing this figure to figure 5.a we can notice that we have nearly the same result, except that now the three calculations in the attractive regime for the fermionic state tend to the same value. The peaks of the curves in the repulsive regime and for the bosonic state are also a little lower, and the position of the peaks is pushed a little farther from the trap center. If we then consider the same calculation with the range \(a=0.5\) (Fig.5.c) and compare with figure 5.a, we can see that for the attractive regime and the bosonic state the 1D calculation is higher in magnitude, then we have the 2D calculation, and then comes the 3D calculation. For the repulsive regime and for the same state the peaks of the three curves are higher and the position of the peaks is closer to the trap center. In the attractive regime and for the fermionic state (\(v_{0}=-5\)) the 2D and the 3D calculations tend to the same limit, but the 1D calculation is higher in magnitude and is closer to the trap center. In the repulsive regime, however, the three calculations tend to nearly the same limit.
How do we understand all these results? Let us first recall that the 1D case is the only case where we know for certain that there is no centrifugal effect, whether for the bosonic or the fermionic state. In contrast, for the 2D and the 3D calculations there is, as explained before, the contribution of the residual term and the contribution of \(l_{2D}=1/2\) in the wave function for the bosonic state in 2D and 3D, respectively. For the fermionic state we have more centrifugal repulsion, as the angular momentum quantum number is higher in this case. In accordance with these remarks, we can conclude that in the attractive regime, for the bosonic state, and as long as the range of the interaction is not important enough to counterbalance the centrifugal repulsion, the position of the peaks will be a negotiation between the attractive interaction and the centrifugal repulsion. For \(a=1.25\) and \(a=1\) we can see that the attraction is strong enough to win against the different amounts of centrifugal repulsion in the three cases, and hence the calculations tend to the same limit. In contrast, for \(a=0.5\) the range of the attraction is weaker and we find that the 1D case is pushed more towards the center (here the centrifugal repulsion is absent), then gradually come the 2D and then the 3D cases, the centrifugal repulsion being more important in the 3D case. In the repulsive bosonic state no important change happens, because the repulsive interaction and the centrifugal repulsion act in the same direction. Notice here that no conclusion can be derived with regard to the position of the different curves in this regime, because the starting point was completely arbitrary. Only when some important change occurs relative to the reference situation can we single out the role of an assumed player. Invoking the same argumentation, we can deduce for the fermionic state, the attractive regime, and \(a=1.25\) that the strength and the range of the attraction are able to counterbalance the centrifugal repulsion for the three calculations, making the three curves indistinguishable. For the same case and \(a=1\), in the 1D calculation the centrifugal repulsion is nonexistent and the curve is pushed more towards the center. For the 2D and 3D calculations, the centrifugal repulsion is gradually more important, making the curves less attracted to the center, and the spreading of the curves is more important. We have nearly the same results when comparing the same calculations with \(a=0.5\), except that, the range of the interaction being smaller, the spreading of the curves is more important.
## 5 Perturbation treatment
We will try to establish in this part to what extent the use of a step function as an approximation for the Rydberg interaction is accurate. In case of a discrepancy between the two results, we are also interested in employing adequate tools in order to improve the initial model. For this aim we exploit perturbation theory [28] to compare:
1. the results for a step function,
2. the numerical results for the exact formulation of the potential
3. and the results of the perturbation treatment of the potential.
We have to clarify here that in reference [17] the numerical (Rydberg interaction) and the analytical (step potential) results are plotted in the same figures, showing a discrepancy between these two results, a discrepancy that becomes more noticeable for important ranges and in two dimensions. In the same way, in reference [29] an approximate value of the threshold interaction strength can be calculated analytically for the step potential and compared to the numerical results for the Rydberg interaction. This is done in one dimension and a tiny discrepancy is found between the two results. In our calculation we are not only concerned with reporting the discrepancy that does exist but also with bridging the gap between the two situations for the cases of 2D and 1D, using the perturbation tool. This calculation is important from two points of view: reaching an agreement between the two results would confirm the adequacy of the step function as a replacement for the realistic Rydberg potential, since it confirms that the missing part is just a perturbation, on one side; and the calculations would establish a more accurate wave-function basis if a description of few-particle systems is targeted, on the other side.
To start with, the exact potential is written as :
\[v(r)=\frac{g}{1+\left(\frac{r}{R_{c}}\right)^{6}}=v_{s}(r)-v_{s}(r)+\frac{g}{ 1+\left(\frac{r}{R_{c}}\right)^{6}}=v_{s}(r)+v_{pert} \tag{17}\]
where \(v_{pert}(r)=\frac{g}{1+\left(\frac{r}{R_{c}}\right)^{6}}-v_{s}(r)\) and \(v_{s}(r)\) is the step function defined in equation 16. This way it is possible to write the exact potential as a step function, for which we already know the solutions, plus an extra quantity \(v_{pert}\) that we treat as a perturbation. A plot of this potential for the case where the step is equal to 5 and the range of the potential is equal to 1 is shown in figure 1. The Numerov approach
[30] is used to obtain the numerical results for the exact potential (the Rydberg potential). In our case we are targeting an agreement of our calculations with the numerical results, since these are the best results we can establish for the real potential and we want our model to reach agreement with them. Let us mention that we used a forward and inward integration method and imposed the continuity of the wave function and its derivative at the turning points [25, 30] to ensure the stability of the Numerov calculation. The perturbation correction is assumed to be of first order for the eigenvalues. To sum up, we are comparing the solutions of the step potential, and these solutions after a perturbation correction, with the numerical results.
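Concretely, the first-order correction \(E^{(1)}=\langle f|v_{pert}|f\rangle\) can be evaluated on the numerical grid from the step-potential eigenfunction, as in the sketch below; the grid and the identification of \((g,R_{c})\) with \((v_{0},a)\) are assumptions of the illustration, which is not the code used for the figures.

```python
import numpy as np

def v_pert(r, g, r_c, v0, a):
    # Perturbation of Eq. (17): realistic Rydberg profile minus the step function.
    step = np.where(r <= a, v0, 0.0)
    return g / (1.0 + (r / r_c) ** 6) - step

def first_order_correction(r, f, g, r_c, v0, a):
    """E1 = <f|v_pert|f> / <f|f> evaluated on the numerical grid r."""
    w = np.abs(f) ** 2
    return np.trapz(w * v_pert(r, g, r_c, v0, a), r) / np.trapz(w, r)
```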
### Perturbation treatment for 1 dimension
The comparison of the eigenvalues (energies) for one dimension and different ranges is given on figure 6.a. It is clear from this figure that the correction to first order is sufficient to reach a fair agreement with the numerical results. The higher energy levels are less affected by the interaction, according to its range, and are hence already very close to the numerical results. Conversely, the low levels are more affected by the interaction and the correction for these levels is quite important. This correction demonstrates an energy level for fermionization which is higher than the one without the correction. We tried to apply a first-order perturbation correction to the eigenvectors without noticeable success. It is even found for some ranges and strengths that the results without correction are closer to the numerical ones. The second-order perturbation correction does not ameliorate the situation. In summary, we can say that the perturbation treatment to first order gives very satisfactory results for the eigenvalues, whereas this treatment is less satisfactory for the eigenvectors, especially for important values of the range and the strength of the interaction.
### Perturbation treatment for 2 dimensions
In the same manner as before, we extended the perturbation calculation to the radial part of the Schrodinger equation, the perturbation potential being the same as before. The only difference is the centrifugal term, which makes a logarithmic mapping and a transformation of the radial solution necessary, for the densification of the points around zero for the wave function and to recover the Numerov form of the equation, respectively [25, 30]. The comparison of the spectra obtained for different values of the potential strength \(v_{0}\) and range \(a\) is illustrated in figure 6.b. We can see on this figure that the correction to first order, for both intermediate and large ranges, makes the agreement with the numerical solution more satisfactory, especially for the repulsive regime where the curves are indistinguishable. For the attractive regime, the corrected results for the lower levels are more satisfactory. The results in 3D are expected to be quite similar to those for 2D, as the only difference between the two cases is a shift in the orbital momentum quantum number, which translates the whole spectrum to higher energy. For the eigenvectors we expect, as before, an inadequacy of the perturbation treatment to recover an acceptable agreement with the numerical results.
## 6 Dynamical aspects
All the previously studied aspects of the system formed by two particles confined in a harmonic trap were the result of the solution of the time-independent Schrodinger equation. This allowed the investigation of the main feature of the system, namely the correlation, on a stationary basis. The evolution of the system's properties in time unavoidably requires the solution of the time-dependent Schrodinger equation. We aim, through this solution, at investigating the evolution of the system under the initial interaction features compared with the change of the behavior of the system under
Figure 6: (a) Comparison of the energy versus \(v_{0}\) for bosons and fermions (columns) for two ranges a=1 and a=1.25 (rows from the top to the bottom respectively) in 1 dimension. In each panel the calculation for the step function (step pot), perturbation correction (pert) and the numerical results (num) are compared. (b) Comparison of the energy versus \(v_{0}\) in two dimensions for l=0, 1, 2, 3 and 4 (from the bottom to the top) for two ranges a=1 and a=1.25. Even and odd values of \(l\) are for bosons and fermions respectively.
a sudden change of these same features. We are able to do these calculations in one dimension. To solve the time-dependent Schrodinger equation, we employ the Crank-Nicolson method together with the tridiagonal matrix algorithm, exploiting the built-in routines provided by Lapack [31]. We consider grid sizes of \(\Delta t=0.0002\) and \(\Delta x=0.\) We take a space of \(-30\leq x\leq 30\). While the grid is fine enough to avoid any distortion during the time evolution, the ends and the extent of the space are chosen so that the wave function remains unperturbed at the boundaries; these settings were already quite satisfactory for the previous calculations. The initial wave function from which the evolution of the system starts is taken to be the exact one already found by solving the time-independent Schrodinger equation for a step-potential interaction. As before, we consider the first bosonic and fermionic states, which give the most important results. In figure 7 we illustrate snapshots, at given times, of the evolution of the probability density. In these calculations we show the effect of a sudden change of the value of \(v_{0}\) on the behavior of the system while the range of the interaction is fixed to 1. From this figure we can see that, for the bosonic state in the attractive regime, the peak of the curve is well localized at the center (black curve), and that the evolution from this scheme to the extreme repulsion leads to an important oscillation of the probability (red curve). In the alternative scenario, where we start from the repulsive regime (green curve) and then operate a sudden change of \(v_{0}\) towards the extreme attraction, the evolution is rather smooth and differs little from the initial state. For the fermionic state (the second row of the figure) the results look at first sight quite similar, though the amplitudes are quite different. We can even notice an important central part of the probability density at the trap center for this fermionic state in the attractive regime, which could be interpreted as an important probability for the two fermions to correlate as a pair. In order to better exploit these results, we found it more convenient to plot the average separation between the two particles versus time. This quantity is evaluated as \(\sqrt{\langle x^{2}\rangle}\). The results are given in fig. 8.a for the bosonic state and fig. 8.b for the fermionic state, respectively. It is clear from fig. 8.a that, for the attractive regime, the oscillation of the separation is regular and is localized around a small average value. The sudden change of \(v_{0}\) sets an irregular oscillation around a large value of the average separation, even higher than in the initial repulsive regime. On the contrary, starting from the repulsive regime and operating a sudden change towards the attractive one conserves the regularity of the oscillation around the same average separation, although the amplitude and the frequency change a little. For the fermionic case (fig. 8.b) and for the attractive regime the oscillation is around a large value of the average separation. Consequently, the presence of a peak at the center does not affect the fact that the most important weight of the distribution is pushed to the trap ends, making the separation between the two particles quite important. The sudden change of \(v_{0}\) towards the repulsive regime enhances the average separation between the two particles.
On the other side, the transition from the repulsive to the attractive regime for the fermionic state is quite similar to that of the bosonic state, though peculiarly the oscillation in this case is around a separation which is smaller than in the bosonic case.
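To make the propagation scheme explicit, the sketch below shows one Crank-Nicolson step for the one-dimensional time-dependent Schrodinger equation, together with the average-separation diagnostic \(\sqrt{\langle x^{2}\rangle}\) used above (written in units where \(\hbar=m=1\); this is an illustrative reimplementation under our own naming, not the code used for the calculations, and the precise form of the potential follows the stationary treatment):

```
import numpy as np
from scipy.linalg import solve_banded

def crank_nicolson_step(psi, V, dx, dt):
    """One Crank-Nicolson step for i d(psi)/dt = [-(1/2) d2/dx2 + V] psi:
    solve (I + i*dt/2*H) psi_new = (I - i*dt/2*H) psi_old with a banded
    (tridiagonal) solver."""
    n = psi.size
    kin = 1.0 / (2.0 * dx ** 2)
    diag = 2.0 * kin + V                      # main diagonal of H
    off = -kin * np.ones(n - 1)               # sub- and super-diagonal of H
    rhs = psi - 0.5j * dt * diag * psi        # (I - i*dt/2*H) psi
    rhs[:-1] -= 0.5j * dt * off * psi[1:]
    rhs[1:] -= 0.5j * dt * off * psi[:-1]
    ab = np.zeros((3, n), dtype=complex)      # banded storage of (I + i*dt/2*H)
    ab[0, 1:] = 0.5j * dt * off
    ab[1, :] = 1.0 + 0.5j * dt * diag
    ab[2, :-1] = 0.5j * dt * off
    return solve_banded((1, 1), ab, rhs)

def average_separation(psi, x, dx):
    """sqrt(<x^2>) computed from the probability density |psi|^2."""
    return np.sqrt(np.sum(x ** 2 * np.abs(psi) ** 2) * dx)

# quench scenario: propagate with the initial potential and switch v0 at t = 3.09
# x = np.linspace(-30.0, 30.0, 4001); dx = x[1] - x[0]; dt = 2e-4
# for step in range(n_steps):
#     V = V_initial if step * dt < 3.09 else V_quenched
#     psi = crank_nicolson_step(psi, V, dx, dt)
```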
Let us now investigate the effect of a sudden change of the range on the system behavior while keeping \(v_{0}\) equal to -5. For the bosonic state, illustrated in figure 9.a, we can notice that the different changes of the range have globally a moderate effect on the amplitude of the average separation through time. The main noticeable effect is that the sudden change pushes the two particles a little farther apart relative to the initial situation. For the fermionic state however (fig. 9.b), the
Figure 8: Comparison of the average separation between the two particles through time for different scenarios of \(v_{0}\) changes in one dimension. (a) bosonic state, (b) fermionic state.
Figure 7: Snapshots of the time evolution of the probability density for different scenarios and for one dimension. black line for a fixed value of \(v_{0}\)=-12, red line for a sudden change of \(v_{0}\) from -12 to 12, green line for a fixed value of \(v_{0}\) =12 and the blue line is for sudden change of \(v_{0}\) from 12 to -12. The first row is for the first bosonic state and the second row is for the first fermionic state. The sudden change is operated at t=3.09. The range is fixed to 1 and the plotted space is limited to [-15,15]
large range and the sudden change from this initial situation provide the system with a large average separation. Conversely, the small range and the sudden change from this initial situation give a small average separation throughout the time evolution.
As concluding remarks to this dynamical study, we can retain that the potential strength has a more pronounced effect on the system correlation, perhaps partly because of the large extent of the values used for the calculation compared to the extent of the range. A sudden change of the potential strength after an attractive regime has settled is shown to produce a large separation between the two particles, even greater than for an initial repulsive regime. A sudden change of the strength after a repulsive regime has settled is shown to have little impact on the separation and hence on the correlation. Globally, these results hold equally for the bosonic and fermionic cases. The effect of the range and of its sudden change on the bosonic state is quite negligible, whereas the effect on the fermionic state, in the case of an initial long range followed by a sudden change to a short range, is quite important. Besides being very instructive and bringing some insights into the dynamics of the system, these results could be exploited experimentally to monitor, on average, the correlation that could exist between the two particles forming the system.
## 7 Conclusion
We presented in this study a detailed comparison of the results concerning the spectra found using the model of Koscik. By extending the results to the 3D case, it is possible to elaborate a comparative study for the three dimensionalities. It is clearly established that the response of the spectra in the 1D case depends on the range and the strength of the interaction. Conversely, in the 2D and 3D cases, the centrifugal potential, scaling as the square of the angular quantum number, can push the particles to a separation where the effect of the interaction is negligible and consequently the equidistant spectra are conserved. The most important change in the spectra is seen for the attractive part of the interaction. By exploiting the probability density distribution, it is possible to study the spatial correlation for different schemes and dimensionalities. We can say that, as expected, the spatial correlation for the considered system depends tightly on an interplay between a centrifugal effect imposed by the system and a coupled (range, strength) effect imposed by the interaction. The effect is more pronounced in the attractive interaction regime, as the interaction and the centrifugal effect are in this case antagonistic. This result is only possible with a finite-range interaction and would not be possible with contact or hard-core interactions, especially for fermionic states where contact interaction is forbidden de facto. The interesting result is that even in the case of the first bosonic state, where the angular momentum quantum number should be zero, the difference between the three dimensionalities is shown to manifest as a "hidden" proper amount of centrifugal effect, an effect that is also mostly seen here in the attractive regime. The perturbation treatment of the potential allowed an improvement of the model of Koscik concerning the eigenvalues, where satisfactory results are obtained in the three dimensionalities. This reinforces the validity of the step-function approximation, as it demonstrates that the difference between the step-function potential and the realistic interaction is just a perturbation that can be recovered by the traditional perturbation treatment. This is not the case for the eigenvectors, where unfortunately the results are far from satisfactory. The study of the dynamical evolution of the system under a sudden change of the potential allowed us to acquire some insights into the system behavior. A deeper investigation along this axis could shed more light on some fundamental aspects, as well as eventually provide some experimental clues on how to monitor the system correlation.
## Acknowledgments
One of the authors (N.G) is grateful to N. Rowley for the help he kindly provided for the visit to IPN, Orsay. Many thanks also to D. Lacroix for the warm welcome to IPN, Orsay and for all the help. Though the planned research project could not be completed because of many difficulties, especially the Corona pandemic, all the numerical recipes, software and mathematical methods learnt during this visit were of great help for the present research.
|
2303.17731
|
$β^{4}$-IRT: A New $β^{3}$-IRT with Enhanced Discrimination
Estimation
|
Item response theory aims to estimate respondent's latent skills from their
responses in tests composed of items with different levels of difficulty.
Several models of item response theory have been proposed for different types
of tasks, such as binary or probabilistic responses, response time, multiple
responses, among others. In this paper, we propose a new version of
$\beta^3$-IRT, called $\beta^{4}$-IRT, which uses the gradient descent method
to estimate the model parameters. In $\beta^3$-IRT, abilities and difficulties
are bounded, thus we employ link functions in order to turn $\beta^{4}$-IRT
into an unconstrained gradient descent process. The original $\beta^3$-IRT had
a symmetry problem, meaning that, if an item was initialised with a
discrimination value with the wrong sign, e.g. negative when the actual
discrimination should be positive, the fitting process could be unable to
recover the correct discrimination and difficulty values for the item. In order
to tackle this limitation, we modelled the discrimination parameter as the
product of two new parameters, one corresponding to the sign and the second
associated to the magnitude. We also proposed sensible priors for all
parameters. We performed experiments to compare $\beta^{4}$-IRT and
$\beta^3$-IRT regarding parameter recovery and our new version outperformed the
original $\beta^3$-IRT. Finally, we made $\beta^{4}$-IRT publicly available as
a Python package, along with the implementation of $\beta^3$-IRT used in our
experiments.
|
Manuel Ferreira-Junior, Jessica T. S. Reinaldo, Telmo M. Silva Filho, Eufrasio A. Lima Neto, Ricardo B. C. Prudencio
|
2023-03-30T22:13:11Z
|
http://arxiv.org/abs/2303.17731v1
|
# \(\beta^{4}\)-IRT: A New \(\beta^{3}\)-IRT with Enhanced Discrimination Estimation
###### Abstract
Item response theory aims to estimate respondent's latent skills from their responses in tests composed of items with different levels of difficulty. Several models of item response theory have been proposed for different types of tasks, such as binary or probabilistic responses, response time, multiple responses, among others. In this paper, we propose a new version of \(\beta^{3}\)-IRT, called \(\beta^{4}\)-IRT, which uses the gradient descent method to estimate the model parameters. In \(\beta^{3}\)-IRT, abilities and difficulties are bounded, thus we employ link functions in order to turn \(\beta^{4}\)-IRT into an unconstrained gradient descent process. The original \(\beta^{3}\)-IRT had a symmetry problem, meaning that, if an item was initialised with a discrimination value with the wrong sign, e.g. negative when the actual discrimination should be positive, the fitting process could be unable to recover the correct discrimination and difficulty values for the item. In order to tackle this limitation, we modelled the discrimination parameter as the product of two new parameters, one corresponding to the sign and the second associated to the magnitude. We also proposed sensible priors for all parameters. We performed experiments to compare \(\beta^{4}\)-IRT and \(\beta^{3}\)-IRT regarding parameter recovery and our new version outperformed the original \(\beta^{3}\)-IRT. Finally, we made \(\beta^{4}\)-IRT publicly available as a Python package, along with the implementation of \(\beta^{3}\)-IRT used in our experiments.
Item response theory Latent variable models Discrimination estimation Python package
## 1 Introduction
Item Response Theory (IRT) is widely adopted in the field of psychometrics to estimate latent abilities of human test respondents. Unlike classical test theory, which assesses performance at the test level, IRT focuses on items and aims to
model responses given by respondents of different abilities to items of different difficulties, both measured on a known scale (Embretson and Reise, 2013). The concept of an item depends on the application and can represent, for example, exam, open-ended or multiple choice questions. In practice, IRT models estimate latent skills and difficulties based on responses observed in a test and have been commonly applied to measure student performance on exams.
There are different IRT models in literature, with respect to the range of responses. In this paper we focus on the \(\beta^{3}-\)IRT model (Chen et al., 2019), which considers bounded continuous responses, suitable to model, for instance, success rates and probabilities. The \(\beta^{3}-\)IRT model is more flexible than other continuous IRT models since it can result on Item Characteristic Curves (ICCs) that are not limited to logistic curves. ICCs with different shapes (e.g., sigmoid, parabolic and anti-sigmoid) can be obtained, which is more flexible to fit responses for different items.
The original \(\beta^{3}\)-IRT model, as proposed by Chen et al. (2019), has a symmetry problem meaning that, for a respondent with a certain ability value, two items, one with low difficulty and positive discrimination and another with high difficulty and negative discrimination, could have the same expected response. As a result, if an item was initialised with the wrong sign for its discrimination value, the fitting process could be unable to recover the correct discrimination value. This issue is not exclusive to \(\beta^{3}\)-IRT and is associated with any IRT model which considers a discrimination parameter. Additionally, the code that is available online1, which performs a variational inference-based process to estimate the full posterior distributions of its parameters, uses a Python 2 library that is now obsolete.
Footnote 1: [https://github.com/yc14600/beta3_IRT](https://github.com/yc14600/beta3_IRT)
Thus, in this paper we improve \(\beta^{3}\)-IRT in a few ways. First, we tackle the symmetry limitation by modelling the discrimination parameter using the multiplication of two new values, one corresponding to the sign and the second to the magnitude. These new parameters are kept fixed for the first fitting iterations, in order to better estimate abilities and difficulties. After these first iterations, the discrimination parameters are optimised along with abilities and difficulties. We also provide sensible priors for abilities, difficulties and both discrimination parameters. Together, this two-step optimisation, the factoring of discrimination into two parameters and the suggested priors help us to avoid the symmetry problem, improving the parameter estimates. Additionally, our improved \(\beta^{3}\)-IRT, which is called \(\beta^{4}\)-IRT, uses gradient descent to estimate the model parameters. This allows us to leverage cutting-edge Python libraries for fast GPU-based computation. In \(\beta^{3}\)-IRT, abilities and difficulties are bounded in \((0,1)\), thus we employ link functions in order to formulate \(\beta^{4}\)-IRT as an unconstrained gradient descent process.
We perform an experimental analysis of \(\beta^{4}\)-IRT and \(\beta^{3}\)-IRT regarding parameter recovery, i.e. how well a fitted model estimates the original parameter values used to produce an artificial response dataset. Finally, this work provides a publicly available Python library for \(\beta^{4}\)-IRT.
The paper is organised as follows: Section 2 discusses the \(\beta^{3}\)-IRT model in detail and explains its limitations; Section 3 presents the mathematical definition for the new \(\beta^{4}\)-IRT model as well as the algorithm for parameter estimation; Section 4 provides an experimental analysis of parameter recovery and computing time; and finally, Section 5 brings some final remarks.
## 2 Item response theory
An IRT model assumes that, for each item \(j\), a respondent \(i\) produces a response that is a function of the respondent's ability and the item's difficulty, sometimes including other parameters for the items, such as discrimination and guessing.
Most works on IRT assume that the response \(x_{ij}\) is binary, which is usually encoded as \(x_{ij}=1\) if the \(j\)-th item was correctly answered by the \(i\)-th respondent, otherwise \(x_{ij}=0\)(Bachrach et al., 2012; Embretson and Reise, 2013; Martinez-Plumed et al., 2016; Twomey et al., 2022). These models commonly assume that a response \(x_{ij}\) follows a Bernoulli distribution with probability of success \(p_{ij}\) defined as a logistic function of the respondent's latent ability \(\theta_{i}\) and of two latent parameters associated to each item, the difficulty \(\delta_{j}\) and the discrimination \(a_{j}\), as given by Equation (1):
\[x_{ij}=\mathcal{B}ern(p_{ij}),\ p_{ij}=\sigma(-a_{j}d_{ij}),\ d_{ij}=\theta_{ i}-\delta_{j}, \tag{1}\]
where \(\sigma(\cdot)\) is the logistic function, with location parameter \(\delta_{j}\) and shape parameter \(a_{j},j=1,\ldots,N\) and \(i=1,\ldots,M\). This model, known as 2-parameter logistic IRT (2PL-IRT) results in an item characteristic curve (ICC) that maps ability to expected response as shown in Equation (2):
\[\mathbb{E}[x_{ij}|\theta_{i},\delta_{j},a_{j}]=p_{ij}=\frac{1}{1+e^{-a_{j}( \theta_{i}-\delta_{j})}}. \tag{2}\]
When \(\theta_{i}=\delta_{j}\), the expected response is \(0.5\). Moreover, if \(a_{j}=1,\forall j=1,\ldots,N\), a simpler model is obtained, known as 1PL-IRT, which describes the items only by their difficulties. In general, the discrimination \(a_{j}\) indicates how the probability of correct answers changes as skill increases. High discriminations induce steep ICCs at the point where skill equals difficulty, with small changes in skill causing large changes in the probability of correct answer.
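For illustration, the response process of Equations (1) and (2) can be simulated in a few lines (a generic sketch, not tied to any particular package):

```
import numpy as np

rng = np.random.default_rng(0)

def icc_2pl(theta, delta, a):
    """Item characteristic curve of Equation (2)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - delta)))

p = icc_2pl(theta=0.3, delta=-0.5, a=1.5)  # expected response p_ij
x = rng.binomial(1, p)                     # binary response x_ij ~ Bernoulli(p_ij)
```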
Despite their extensive use in psychometry, binary IRT models have limited use when responses are produced on continuous scales. In particular, binary models are not suitable if the evaluated responses are estimates of probabilities or proportions, as in the case explored by Chen et al. (2019), where each student could respond to the same item multiple times and IRT was used to model the proportion of times that the student was correct for each item. For such cases, a different model, called \(\beta^{3}\)-IRT was proposed by Chen et al. (2019). Equation (3) defines \(\beta^{3}\)-IRT, where \(p_{ij}\) is the observed response of respondent \(i\) for item \(j\), which is assumed to follow a Beta distribution with parameters \(\alpha_{ij}\) and \(\beta_{ij}\) defined as functions of the respondent's ability \(\theta_{i}\) and of the item's difficulty \(\delta_{j}\) and discrimination \(a_{j}\):
Figure 1: Examples of \(\beta^{3}\)-IRT ICCs for different values of difficulty and discrimination. Steeper ICCs result of higher discrimination values, while difficulty determines the ability needed to surpass a response of \(0.5\). Source: Chen et al. (2019).
\[p_{ij} \sim\mathcal{B}(\alpha_{ij},\beta_{ij}),\] \[\alpha_{ij} =\left(\frac{\theta_{i}}{\delta_{j}}\right)^{a_{j}},\beta_{ij}= \left(\frac{1-\theta_{i}}{1-\delta_{j}}\right)^{a_{j}},\] \[\theta_{i} \sim\mathcal{B}(1,1),\ \delta_{j}\sim\mathcal{B}(1,1),\ a_{j}\sim \mathcal{N}(1,\sigma_{0}^{2}). \tag{3}\]
Here, \(\sigma_{0}^{2}\) is a hyperparameter of the model, which the authors set as \(1\) in their experiments. In this model, the ICC is defined by the expected value of \(\mathcal{B}(\alpha_{ij},\beta_{ij})\), taking the form given by Equation (4):
\[\mathbb{E}[p_{ij}|\theta_{i},\delta_{j},a_{j}]=\frac{\alpha_{ij}}{\alpha_{ij} +\beta_{ij}}=\frac{1}{1+\left(\frac{\delta_{j}}{1-\delta_{j}}\right)^{a_{j}} \left(\frac{\theta_{i}}{1-\theta_{i}}\right)^{-a_{j}}}. \tag{4}\]
This parametrisation enables \(\beta^{3}\)-IRT to obtain non-logistic ICCs, with the difficulty \(\delta_{j}\) as a location parameter, similarly to logistic IRT models. The response is 0.5 when \(\theta_{i}=\delta_{j}\) and the curve has slope \(a_{j}/(4\delta_{j}(1-\delta_{j}))\) at that point. Figure 1 shows examples of \(\beta^{3}\)-IRT ICCs with different shapes, depending on \(a_{j}\). For \(a_{j}>1\), we see a sigmoid shape, similar to logistic IRT models; \(a_{j}=1\) gives parabolic curves, with vertex at \(0.5\); and \(0<a_{j}<1\) leads to an anti-sigmoidal behaviour. The model also allows for negative discriminations. In such cases, \(-1<a_{j}<0\) and \(a_{j}<-1\) give decreasing anti-sigmoid and decreasing sigmoid ICCs, respectively.
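The short sketch below evaluates the ICC of Equation (4) and reproduces this qualitative behaviour; note that the expected response equals \(0.5\) whenever \(\theta_{i}=\delta_{j}\), regardless of the discrimination:

```
import numpy as np

def beta3_icc(theta, delta, a):
    """Expected response of beta^3-IRT, Equation (4)."""
    return 1.0 / (1.0 + (delta / (1.0 - delta)) ** a * (theta / (1.0 - theta)) ** (-a))

theta = np.array([0.1, 0.3, 0.5, 0.9])
for a in (2.0, 1.0, 0.5, -0.5, -2.0):      # sigmoid, parabolic, anti-sigmoid, decreasing
    print(f"a = {a:+.1f}:", np.round(beta3_icc(theta, delta=0.3, a=a), 3))
```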
Note that correctly estimating the discrimination parameter, particularly its sign, is crucial, as it encodes information about the perceived behaviour of an item. A sigmoidal ICC means that the item is good at discriminating respondents in the middle of the ability range, while an anti-sigmoidal one does a good job at detecting different abilities in the low and high ranges. Additionally, negative discriminations could be interpreted as corresponding to items that are harder for respondents with higher abilities. Thus, Chen et al. (2019) use negative discriminations to identify 'noisy' items.
Chen et al. (2019) tested two inference methods for \(\beta^{3}\)-IRT, one was conventional Maximum Likelihood (MLE), using the likelihood function shown in Equation (3). The second method was Bayesian Variational Inference (VI) (Bishop, 2006), which they applied to their experiments with IRT to evaluate machine learning classifiers.
Independently of the inference method, \(\beta^{3}\)-IRT is a highly non-identifiable model, because of its symmetry (Nishihara et al., 2013), which can result in undesirable combinations of the latent variables. For instance, when \(p_{ij}\) is close to \(1\), it usually indicates \(\alpha_{ij}>1\) and \(\beta_{ij}<1\), which can arise either from \(\theta_{i}>\delta_{j}\) with positive \(a_{j}\), or from \(\theta_{i}<\delta_{j}\) with negative \(a_{j}\).
To show the impact of this non-identifiability on parameter estimation, we sampled 1000 abilities, difficulties and discriminations from the priors defined in Equation (3), setting \(\sigma_{0}^{2}=1\). Then, for each \(i\)-th respondent and \(j\)-th item, we generated the response \(p_{ij}\) by taking the mean of \(100\) samples from the corresponding \(\mathcal{B}(\alpha_{ij},\beta_{ij})\) distribution. Then, we fit a \(\beta^{3}\)-IRT model using MLE (this implementation is available as part of the Python package we present in Section 3). Here, we train the \(\beta^{3}\)-IRT model using 50,000 iterations.
Figure 2 shows the original parameter values and their estimates. The discriminations and their estimates seem to follow an inverted-sigmoidal relationship, moving away from the diagonal for lower and higher values. Additionally, the 109 red dots represent discriminations that were estimated with the wrong sign. These were likely initialised with flipped signs and, due to the symmetry of the model, ended up pushing their corresponding difficulties away from their target values, which shows as an orbit around the diagonal in the difficulty plot. Finally, due to not fitting certain discriminations and difficulties correctly, the estimated abilities were also pushed away from their original values.
Some attempts can be made to avoid this problem. In their VI implementation, Chen et al. (2019) updated discrimination as a global variable after ability and difficulty converged at each step. They also set the prior of discrimination as \(\mathcal{N}(1,1)\) to reflect the assumption that discrimination is more often positive than negative. In our implementation, we can set a number of initial iterations, say 1000, where we keep all discriminations fixed at \(a_{j}=1\) and optimise only abilities and difficulties. Then we allow the discriminations to be optimised as well. This has a positive impact, reducing the number of flipped discrimination signs to 37, but does not definitively solve the problem. In the next Section we present a new model based on \(\beta^{3}\)-IRT, which introduces a new parameter to estimate the signs of the discriminations, leading to better parameter estimates.
## 3 \(\beta^{4}-\)IRT: Mathematical definition and implementation
As mentioned in the previous section, \(\beta^{3}-\)IRT is sometimes unable to overcome a poor initialisation of its discriminations. Motivated by this limitation, we propose the novel \(\beta^{4}-\)IRT model, whose ICC is given by Equation (5):
\[E[p_{ij}|\theta_{i},\delta_{j},\omega_{j},\tau_{j}]=\frac{1}{1+\left(\frac{ \delta_{j}}{1-\delta_{j}}\right)^{\tau_{j}\cdot\omega_{j}}\cdot\left(\frac{ \theta_{i}}{1-\theta_{i}}\right)^{-\tau_{j}\cdot\omega_{j}}}. \tag{5}\]
Equation (5) substitutes the discrimination \(a_{j}\) in Equation (4) with the product of two new parameters, \(\omega_{j}\) and \(\tau_{j}\), which represent the sign and the absolute value of the discrimination, respectively. This decomposition of the discrimination parameter aims to reduce the symmetry problem, as the direction and the magnitude of the discrimination will be optimised separately.
As seen in Equation (3) for the \(\beta^{3}-\)IRT model, parameters \(\theta_{i}\) and \(\delta_{j}\) take values from \((0,1)\), while the discrimination \(a_{j}\) has infinite support. In the new model, \(\theta_{i}\) and \(\delta_{j}\) have kept their supports, while \(\tau_{j}\) and \(\omega_{j}\) take their values from \((-1,1)\) and \((0,\infty)\), respectively. In order to improve the optimisation process, the constraints on minimum and maximum values were removed by adopting link functions. Thus, the gradient descent method in \(\beta^{4}-\)IRT does not update the values of the four parameters directly. Instead, we introduce four new parameters (\(t_{i}\), \(d_{j}\), \(b_{j}\) and \(o_{j}\)) with values in \(\mathbb{R}\), which are used to estimate the original parameters by way of link functions, as follows:
Figure 3: Scatter plots showing sampled discriminations (left), abilities (centre) and difficulties (right) used to generate a \(1000\times 1000\) response matrix, and their estimates produced by \(\beta^{3}\)-IRT with 1000 initial iterations with fixed discriminations. Red dots on the discrimination plot represent discriminations estimated with flipped signs (37 out of 1000 discriminations).
Figure 2: Scatter plots showing sampled discriminations (left), abilities (centre) and difficulties (right) used to generate a \(1000\times 1000\) response matrix, and their estimates produced by \(\beta^{3}\)-IRT. Red dots on the discrimination plot represent discriminations estimated with flipped signs (109 out of 1000 discriminations).
\[\theta_{i}=\sigma(t_{i})=\frac{1}{1+e^{-t_{i}}}, \delta_{j}=\sigma(d_{j}),\] (6, 7) \[\omega_{j}=\text{softplus}(o_{j})=\text{ln}(1+e^{o_{j}}), \tau_{j}=\text{tanh}(b_{j})=\frac{e^{b_{j}}-e^{-b_{j}}}{e^{b_{j}}+e^{-b _{j}}}.\] (8, 9)
The estimation of the \(t_{i}\), \(d_{j}\), \(o_{j}\) and \(b_{j}\) in \(\beta^{4}-\)IRT is carried out using the gradient descent method, by minimising the cost function given by Equation (10):
\[H=-\sum_{i=1}^{M}\sum_{j=1}^{N}p_{ij}\cdot\ln(\hat{p}_{ij}), \tag{10}\]
where \(\hat{p}_{ij}\) is the estimated response and is calculated using Equations (5), (6), (7), (8) and (9). The partial derivatives of \(H\) with regards to \(d_{j}\), \(t_{i}\), \(o_{j}\) and \(b_{j}\) are given by Equations (11), (12), (13) and (14), respectively:
\[\frac{\partial H}{\partial d_{j}} =\sum_{i=1}^{M}\sum_{j=1}^{N}p_{ij}\cdot w_{j}\cdot\tau_{j}\cdot \Delta(\delta_{j})\cdot\Phi(\theta_{i},\delta_{j})^{w_{j}\cdot\tau_{j}}\cdot \hat{p}_{ij}\cdot e^{-d_{j}}\cdot\sigma(d_{j})^{2}, \tag{11}\] \[\frac{\partial H}{\partial t_{i}} =-\sum_{i=1}^{M}\sum_{j=1}^{N}p_{ij}\cdot w_{j}\cdot\tau_{j}\cdot \Theta(\theta_{i})\cdot\Phi(\theta_{i},\delta_{j})^{w_{j}\cdot\tau_{j}}\cdot \hat{p}_{ij}\cdot e^{-t_{i}}\cdot\sigma(t_{i})^{2},\] (12) \[\frac{\partial H}{\partial o_{j}} =\sum_{i=1}^{M}\sum_{j=1}^{N}p_{ij}\cdot\tau_{j}\cdot\Phi(\theta _{i},\delta_{j})^{\tau_{j}\cdot w_{j}}\cdot\ln(\Phi(\theta_{i},\delta_{j})) \cdot\hat{p}_{ij}\cdot\sigma(o_{j}),\] (13) \[\frac{\partial H}{\partial b_{j}} =\sum_{i=1}^{M}\sum_{j=1}^{N}p_{ij}\cdot w_{j}\cdot\Phi(\theta_{i },\delta_{j})^{\tau_{j}\cdot w_{j}}\cdot\ln(\Phi(\theta_{i},\delta_{j})) \cdot\hat{p}_{ij}\cdot[1-tanh(b_{j})^{2}], \tag{14}\]
where:
\[\Phi(\theta_{i},\delta_{j})=\left(\frac{\delta_{j}}{1-\delta_{j}}\right)\cdot \left(\frac{\theta_{i}}{1-\theta_{i}}\right)^{-1},\;\Theta(\theta_{i})=\frac{ 1}{\theta_{i}\cdot(1-\theta_{i})},\;\Delta(\delta_{j})=\frac{1}{\delta_{j} \cdot(1-\delta_{j})}.\] (15, 16, 17)
Given the partial derivatives, the model parameters are updated using gradient descent, according to Equations (18), (19), (20) and (21):
\[d_{j}^{(n+1)} =d_{j}^{(n)}-\eta\cdot\frac{\partial H}{\partial d_{j}^{(n)}}, t_{i}^{(n+1)}=t_{i}^{(n)}-\eta\cdot\frac{\partial H}{\partial t_{i}^{(n)}},\] (18, 19) \[o_{j}^{(n+1)} =o_{j}^{(n)}-\eta\cdot\frac{\partial H}{\partial o_{j}^{(n)}}, b_{j}^{(n+1)}=b_{j}^{(n)}-\eta\cdot\frac{\partial H}{\partial b_{j}^{(n)}}.\] (20, 21)
Figure 4 shows that simply refactoring the discrimination parameter was not enough to solve the symmetry problem, as 77 discriminations were wrongly assigned flipped signs. However, the new formulation allows us to select sensible priors for the parameters that lead to much better estimates:
* Abilities: set \(t_{i}^{(0)}\) such that \(\theta_{i}^{(0)}=\sigma(t_{i}^{(0)})=N^{-1}\left(\sum_{j=1}^{N}p_{ij}\right)\);
* Difficulties: set \(d_{j}^{(0)}\) such that \(\delta_{j}^{(0)}=\sigma(d_{j}^{(0)})=1-M^{-1}\left(\sum_{i=1}^{M}p_{ij}\right)\);
* Discrimination magnitudes: set \(o_{j}^{(0)}\) such that \(\omega_{j}^{(0)}=\text{softplus}(o_{j}^{(0)})=1\);
* Discrimination signs: set \(\tau_{j}=\rho(\vec{\theta}^{(0)},\vec{p}_{j})\), where \(\rho\) is the Pearson correlation coefficient, \(\vec{\theta}^{(0)}=(\theta_{1}^{(0)},\ldots,\theta_{N}^{(0)})\) and \(\vec{p}_{j}=(p_{1j},\ldots,p_{Nj})\).
The priors for abilities and difficulties are intuitive as higher abilities lead to higher average responses and the opposite is true for difficulties. As for the discrimination sign parameter, we had two desiderata: (i) they need to capture the
fact that for a positive discrimination, expected response grows with ability, while for negative discriminations, higher abilities lead to lower responses; and (ii) their support needs to be in \([-1,1]\), thus the correlation between abilities and responses for each item lends itself nicely. To avoid flipping the signs during the fitting iterations, especially for very small discriminations that are close to \(0\), we keep the \(\tau_{j}\) estimates fixed and only optimise the magnitudes of the discriminations.
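Concretely, these priors can be read directly off the response matrix. A minimal sketch, assuming \(\mathbf{P}\) is stored as an \(M\times N\) array of respondents by items (the helper name is ours):

```
import numpy as np

def initial_priors(P):
    """Initial abilities from respondent means, difficulties from one minus the
    item means, unit discrimination magnitudes, and discrimination signs from the
    correlation between the initial abilities and each item's responses."""
    theta0 = P.mean(axis=1)                               # theta_i^(0)
    delta0 = 1.0 - P.mean(axis=0)                         # delta_j^(0)
    omega0 = np.ones(P.shape[1])                          # omega_j^(0) = softplus(o_j^(0)) = 1
    tau = np.array([np.corrcoef(theta0, P[:, j])[0, 1]    # tau_j = rho(theta^(0), p_j)
                    for j in range(P.shape[1])])
    return theta0, delta0, omega0, tau
```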
Algorithm 1 shows the steps of the parameter estimation process for the \(\beta^{4}\)-IRT model. Note that the algorithm allows for a certain number of initial iterations where the discrimination parameters are kept fixed, to tackle the symmetry problem and avoid the estimation of discriminations with inverted signs, which can have a negative impact in the estimation of the corresponding difficulties.
Figure 5 shows the resulting estimates after fitting \(\beta^{4}\)-IRT with the above priors. No discriminations where estimated with flipped signs, which led to better estimates overall. Due to these good results, from here on in this paper, we refer to this version when we mention \(\beta^{4}\)-IRT.
### Checking goodness of fit
To the best of our knowledge, the previous IRT approaches did not consider an \(R^{2}\) metric to evaluate the goodness-of-fit for a model. The \(R^{2}\) gives an easy interpretation of the model's performance and can be used to compare two or more IRT models. Here, we propose a Pseudo-\(R^{2}\), given by Equation (22)
Figure 4: Scatter plots showing sampled discriminations (left), abilities (centre) and difficulties (right) used to generate a \(1000\times 1000\) response matrix, and their estimates produced by \(\beta^{4}\)-IRT with 1000 initial iterations with fixed discriminations. Red dots on the discrimination plot represent discriminations estimated with flipped signs (77 out of 1000 discriminations).
Figure 5: Scatter plots showing sampled discriminations (left), abilities (centre) and difficulties (right) used to generate a \(1000\times 1000\) response matrix, and their estimates produced by \(\beta^{4}\)-IRT with 1000 initial iterations with fixed discriminations and better priors for all parameters. All discriminations were estimated with the correct signs.
```
0: Response matrix \(\mathbf{P}\), where each element \(p_{ij}\) corresponds to the observed response from respondent \(i\) to item \(j\), learning rate \(\eta\), number of training epochs with discrimination parameters fixed (\(n\_inits\)), total number of training epochs (\(n\_epochs\)).
1: Randomly initialise \(t_{i}^{(0)}\), \(d_{j}^{(0)}\) from \(N(0,1)\);
2: Set \(o_{j}^{(0)}\) such that \(\omega_{j}^{(0)}=\) softplus(\(o_{j}^{(0)}\)) = 1
3: Set \(\tau_{j}=\rho(\vec{\theta}^{(0)},\vec{p_{j}})\)\(\triangleright\) where \(\rho\) is the Pearson correlation coefficient, \(\vec{\theta}^{(0)}=(\theta_{1}^{(0)},\ldots,\theta_{N}^{(0)})\) and \(\vec{p_{j}}=(p_{1j},\ldots,p_{Nj})\).
4: set \(t_{i}^{(0)}\) such that \(\theta_{i}^{(0)}=\sigma(t_{i}^{(0)})=N^{-1}\left(\sum_{j=1}^{N}p_{ij}\right)\).
5: Set \(d_{j}^{(0)}\) such that \(\delta_{j}^{(0)}=\sigma(d_{j}^{(0)})=1-M^{-1}\left(\sum_{i=1}^{M}p_{ij}\right)\).
6: Set \(n\gets 0\)
7:for\(n<n\_epochs\)do
8: Calculate estimated responses using Equation (5);
9: Calculate the loss using Equation (10);
10: Calculate the partial derivatives using Equations Equations (11), (12), (13) and (14);
11: Update \(d_{j}^{(n+1)}\) and \(t_{i}^{(n+1)}\) according to Equations (18) and (19);
12:if \(n\geq n\_inits\) then
13: Update \(o_{j}^{(n+1)}\) and \(b_{j}^{(n+1)}\) according to Equations (20) and (21);
14:else
15: Set \(o_{j}^{(n+1)}\gets o_{j}^{(n)}\) and \(b_{j}^{(n+1)}\gets b_{j}^{(n)}\);
16:endif
17:\(n\gets n+1\);
18:endfor
19:\(\theta_{i}\leftarrow\sigma(t_{i})\);
20:\(\delta_{j}\leftarrow\sigma(d_{j})\);
21:\(a_{j}\leftarrow\omega_{j}\cdot\tau_{j}\);
22:return Estimated parameters \(\theta_{i}\), \(\delta_{j}\) and \(a_{j}\).
```
**Algorithm 1** \(\beta^{4}\)-IRT with **priors**
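To make Algorithm 1 concrete, the sketch below implements the same two-stage loop with TensorFlow's automatic differentiation in place of the hand-derived gradients of Equations (11)-(14); it keeps the sign parameters \(\tau_{j}\) fixed as described above. This is an illustrative reimplementation under our own naming and with plain SGD, not the birt-gd package code:

```
import numpy as np
import tensorflow as tf

def fit_beta4(P, lr=1.0, n_epochs=5000, n_inits=1000):
    """Sketch of beta^4-IRT fitting; P is an (M respondents x N items) array in (0, 1)."""
    P = tf.constant(np.asarray(P, dtype=np.float32))
    eps = 1e-3
    theta0 = tf.clip_by_value(tf.reduce_mean(P, axis=1), eps, 1 - eps)        # prior abilities
    delta0 = tf.clip_by_value(1.0 - tf.reduce_mean(P, axis=0), eps, 1 - eps)  # prior difficulties
    t = tf.Variable(tf.math.log(theta0 / (1.0 - theta0)))        # theta = sigmoid(t), Eq. (6)
    d = tf.Variable(tf.math.log(delta0 / (1.0 - delta0)))        # delta = sigmoid(d), Eq. (7)
    o = tf.Variable(tf.zeros_like(delta0) + np.log(np.e - 1.0))  # omega = softplus(o) = 1, Eq. (8)
    tau = tf.constant([np.corrcoef(theta0.numpy(), P.numpy()[:, j])[0, 1]
                       for j in range(P.shape[1])], dtype=tf.float32)         # fixed signs

    opt = tf.keras.optimizers.SGD(learning_rate=lr)
    for epoch in range(n_epochs):
        with tf.GradientTape() as tape:
            theta = tf.sigmoid(t)[:, None]
            delta = tf.sigmoid(d)[None, :]
            a = (tau * tf.math.softplus(o))[None, :]
            odds = (delta / (1.0 - delta)) * ((1.0 - theta) / theta)
            p_hat = 1.0 / (1.0 + odds ** a)                   # ICC of Equation (5)
            loss = -tf.reduce_sum(P * tf.math.log(p_hat))     # cost of Equation (10)
        variables = [t, d] if epoch < n_inits else [t, d, o]  # discriminations fixed at first
        grads = tape.gradient(loss, variables)
        opt.apply_gradients(zip(grads, variables))

    return tf.sigmoid(t).numpy(), tf.sigmoid(d).numpy(), (tau * tf.math.softplus(o)).numpy()
```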
Pseudo-\(R^{2}=1-\dfrac{u}{v}\), (22)
where \(u\) is the sum of squared residues and \(v\) is the sum of the quadratic differences from the mean, defined respectively by Equations (23) and (24):
\[u=\sum_{i=1}^{M}\sum_{j=1}^{N}(p_{ij}-\hat{p}_{ij})^{2}\text{ and }v=\sum_{i=1}^{M}\sum_{j=1}^{N}(p_{ij}-\bar{p})^{2},\] (23, 24)
where \(\bar{p}\) is the mean of the observed response. According to Bruin (2011), the Pseudo-\(R^{2}\) can be interpreted as the square of the correlation between the estimated (\(\hat{p}\)) and the observed (\(p\)) values. The denominator \(v\) can be seen as the quadratic error of the null model.
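A direct implementation of this metric takes only a few lines (a small helper sketch):

```
import numpy as np

def pseudo_r2(p, p_hat):
    """Pseudo-R^2 of Equation (22): one minus the ratio of the squared residuals u
    to the squared deviations v of the observed responses from their mean."""
    p, p_hat = np.asarray(p, dtype=float), np.asarray(p_hat, dtype=float)
    u = np.sum((p - p_hat) ** 2)
    v = np.sum((p - p.mean()) ** 2)
    return 1.0 - u / v
```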
### Model implementation in Python
We implemented \(\beta^{4}\)-IRT using the automatic differentiation capabilities of the Python library TensorFlow. In this section, we describe the installation process and present a short tutorial on how to fit the model and use the tools provided by the package birt-gd.
The package can be downloaded from Python's Package Index ([https://pypi.org/project/birt-gd/](https://pypi.org/project/birt-gd/)) or by cloning the repository on GitHub at [https://github.com/Manuelfjr/birt-gd](https://github.com/Manuelfjr/birt-gd). It is also possible to directly install birt-gd using the following command line:
```
pip install birt-gd
```
To use the package, first we need to import it in Python. Below we show an example of use:
```
>>> import pandas as pd
>>> from birt import Beta4
>>> data = pd.DataFrame({'a': [0.99, 0.89, 0.87],
...                      'b': [0.32, 0.25, 0.45]})
>>> b4 = Beta4(n_models=2,
...            n_instances=3,
...            random_seed=1)
>>> b4.fit(data.values)
2%|| | 119/5000 [00:01<01:05, 74.58it/s]Model converged at the 122th epoch
2%|| | 122/5000 [00:01<01:07, 72.35it/s]
<birt.Beta4 object at 0x7f420baa3b50>
>>> b4.abilities
array([0.8940176, 0.2747254], dtype=float32)
>>> b4.difficulties
array([0.38353133, 0.5238179, 0.37623164], dtype=float32)
>>> b4.discriminations
array([1., 1., 1.], dtype=float32)
```
We now illustrate an example with more data, to better explain the module's features. First, we create 5 respondents and 20 items by randomly sampling their abilities and difficulties from Beta distributions. For the abilities, we sample the first one from \(\mathcal{B}(1,0.1)\), the second from \(\mathcal{B}(1,10)\) and the remaining three abilities from \(\mathcal{B}(1,1)\). For the difficulties, we randomly sample the first one from \(\mathcal{B}(1,10)\), the second from \(\mathcal{B}(1,5)\) and the remaining ones from \(\mathcal{B}(1,1)\). These values were chosen such that respondent \(i=0\) will likely have high ability and item \(j=0\) will likely have low difficulty. Finally, we sample the 20 items' discriminations from \(\mathcal{N}(1,1)\).
```
>>> import numpy as np
>>> import pandas as pd
>>> from birt import Beta4
>>> import matplotlib.pyplot as plt
>>> m, n = 5, 20
>>> np.random.seed(1)
>>> abilities = [np.random.beta(1, i) for i in ([0.1, 10] + [1] * (m - 2))]
>>> difficulties = [np.random.beta(1, i) for i in [10, 5] + [1] * (n - 2)]
>>> discriminations = list(np.random.normal(1, 1, size=n))
```
Then we calculate the expected responses of the \(5\) respondents for the \(20\) items, using Equation (4), yielding response matrix \(\mathbf{P}_{5\times 20}\) (called pij in the code), where each value is the observed response of the \(i\)-th respondent for the \(j\)-th item.
```
>>> pij = pd.DataFrame(columns=range(m), index=range(n))
>>> i, j = 0, 0
>>> for theta in abilities:
...     for delta, a in zip(difficulties, discriminations):
...         alphaij = (theta / delta) ** a
...         betaij = ((1 - theta) / (1 - delta)) ** a
...         pij.loc[j, i] = np.mean(np.random.beta(alphaij, betaij, size=100))
...         j += 1
...     j = 0
...     i += 1
```
We then use the class Beta4 from the birt module to fit a model on the observed responses, with learning rate \(\eta=1\), \(5000\) total training epochs and \(1000\) initial epochs with fixed discrimination parameters. Note that the default values for the arguments epochs and n_inits are 10000 and 1000, respectively.
```
>>> b4 = Beta4(
...     learning_rate=1,
...     epochs=5000,
...     n_respondents=pij.shape[1],
...     n_items=pij.shape[0],
...     n_inits=1000,
...     n_workers=-1,
...     random_seed=1,
...     tol=10**(-8),
...     set_priors=False
... )
>>> b4.fit(pij)
```
After fitting the model we can check the score attribute, which returns the corresponding Pseudo-\(R^{2}\), as discussed in section 3.1.
```
>>> b4.score
0.9038146230196351
```
In this case, the model fit this small dataset very well, with Pseudo-\(R^{2}>0.9\). We can also view some descriptive statistics using the summary method, in similar fashion to R's summary function, including the Pseudo-\(R^{2}\) value and the quartiles, minima and maxima of the estimated abilities, difficulties, discriminations and responses.
```
>>> b4.summary()
ESTIMATES
----
               |     Min     1Qt  Median     3Qt     Max  Std.Dev
Ability        | 0.00010 0.22147 0.63389 0.73353 0.92040  0.33960
Difficulty     | 0.01745 0.28047 0.63058 0.84190 0.98624  0.31635
Discrimination | 0.31464 1.28330 1.61493 2.22936 4.44645  1.02678
pij            | 0.00000 0.02219 0.35941 0.86255 0.99993  0.40210
----
Pseudo-R2 | 0.90381
```
From the summary output above, we note that the statistics of the estimated abilities and difficulties were close to those of a \(\mathcal{B}(1,1)\), which was the distribution from which most of the simulated values for these parameters were sampled, with a shift in the median (which for a \(\mathcal{B}(1,1)\) is equal to \(0.5\)), because two abilities were sampled from \(\mathcal{B}(1,0.1)\) and \(\mathcal{B}(1,10)\) and two difficulties were sampled from \(\mathcal{B}(1,10)\) and \(\mathcal{B}(1,5)\).
In addition to descriptive information, the module provides functions to create some useful plots to help analyse each parameter. The code chunks below show examples of these plots. First, we show how to create a scatter plot of the estimated item discriminations (\(x\) axis) and difficulties (\(y\) axis). The resulting plot in Figure 6 shows an apparently uncorrelated distribution, without negative discriminations and with the presence of a possible discrimination outlier.
```
>>> import matplotlib.pyplot as plt
>>> b4.plot(xaxis='discrimination',
...         yaxis='difficulty',
...         ann=True,
...         kwargs={'color': 'red'},
...         font_size=22, font_ann_size=15)
>>> plt.show()
```
The next example shows how to draw the scatter plot shown by Figure 7, where a strong negative linear relationship can be seen between difficulty and the average response for each item.
```
>>> b4.plot(xaxis='difficulty', yaxis='average_item',
...         ann=True, kwargs={'color': 'blue'},
...         font_size=22, font_ann_size=17)
>>> plt.show()
```
According to Figure 8 (see the code below), we observe a strong positive linear relationship between the respondent ability and the average response.
Figure 6: Estimated discrimination and difficulty values for each item.
Figure 7: Estimated difficulty values and average response for each item.
```
>>> b4.plot(xaxis='ability', yaxis='average_response',
...         ann=True, font_size=16, font_ann_size=16)
>>> plt.show()
```
For the scatter plots, the arguments **xaxis** and **yaxis** define the variable that will occupy the \(x\) and \(y\) axes in the graphic, respectively. Argument **ann** is a boolean value used to define if the graph's points should be plotted alongside their indexes in the data set. Finally, **kwargs** is a dictionary with keyword arguments, which is familiar for Matplotlib [20] users, and can be used to pass any keyword arguments that can be used by Matplotlib. In addition to scatter plots, we can plot boxplots for the estimated abilities, difficulties and discriminations.
```
>>> b4.boxplot(y='ability', kwargs={'linewidth': 4}, font_size=27)
>>> b4.boxplot(x='difficulty', font_size=27)
>>> b4.boxplot(y='discrimination', font_size=27)
```
As in scatter plots, boxplots also have the \(x\) and \(y\) arguments, as well as the **kwargs** dictionary.
## 4 Evaluating parameter recovery
In this Section we assess the performances of \(\beta^{3}\)-IRT and \(\beta^{4}\)-IRT in recovering the actual item and respondent parameters. For \(\beta^{3}\)-IRT we use the version provided in our birt-gd package, which uses a number of initialisation iterations with fixed discriminations, as mentioned in Section 2.
A Monte Carlo experiment with \(30\) replications was employed to evaluate the performances of both models, taking into account three dataset configurations: \(N=100\) items and \(M=20\) respondents (dataset 1), \(N=100\) items and \(M=100\) respondents (dataset 2), and finally, \(N=300\) items and \(M=50\) respondents (dataset 3).
For each dataset and Monte Carlo replication, we sample respondent abilities and item difficulties from \(\mathcal{B}(1,1)\) and item discriminations from \(\mathcal{N}(1,1)\). Then, for each response \(p_{ij}\), we take the mean of \(100\) samples drawn from beta distributions as described in Equation (3). The resulting response matrix \(\mathbf{P}\) is used to fit \(\beta^{3}\)-IRT and \(\beta^{4}\)-IRT, using \(50000\) epochs and \(1000\) initial iterations with fixed discriminations. All the other hyperparameters were set to their default values.
Using bootstrap [14], we calculated the 95% confidence interval for the Pearson correlation \(\rho\) between estimated and original parameter values. The aim is to measure how well the models recover the original parameter rankings. In addition, we also considered a 95% confidence interval for the Relative Squared Error (RSE) to evaluate the quality of the parameter estimates for each model. The RSE represents the proportion of the unexplained variance, being defined by \(RSE=1-R^{2}\).
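A sketch of how such intervals can be obtained for one parameter vector is given below; the resampling unit (here, parameter pairs drawn with replacement) and the number of resamples are our assumptions, not details taken from the experimental protocol:

```
import numpy as np
from scipy.stats import pearsonr

def bootstrap_ci(true_vals, est_vals, n_boot=10000, seed=0):
    """95% bootstrap confidence intervals for RSE (= 1 - R^2) and Pearson's rho
    between true and estimated parameter values."""
    rng = np.random.default_rng(seed)
    true_vals = np.asarray(true_vals, dtype=float)
    est_vals = np.asarray(est_vals, dtype=float)
    n = true_vals.size
    rses, rhos = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)                 # resample with replacement
        t, e = true_vals[idx], est_vals[idx]
        rses.append(np.sum((t - e) ** 2) / np.sum((t - t.mean()) ** 2))
        rhos.append(pearsonr(t, e)[0])
    return np.percentile(rses, [2.5, 97.5]), np.percentile(rhos, [2.5, 97.5])
```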
Figure 8: Estimated ability values and average response for each respondent.
Table 1 shows our results. It is clear that, for the difficulty parameter, \(\beta^{4}\)-IRT outperformed \(\beta^{3}\)-IRT in all cases, while \(\beta^{3}\)-IRT was better at estimating abilities, according to RSE and \(\rho\). For discriminations, while there was no overall best-performing method, Table 2 shows that, as expected, \(\beta^{4}\)-IRT was much better than \(\beta^{3}\)-IRT at correctly predicting the signs of the discrimination parameters, never switching more than \(0.06\%\) of the signs, which, given the number of items in these datasets (100, 100 and 300), means that most runs estimated all signs correctly. This had a very significant impact on the estimation of difficulties, which showed the highest difference between the confidence intervals of both methods in Table 1.
\begin{table}
\begin{tabular}{c c c c c} \hline Dataset & Parameter & Model & RSE, 95\% CI & \(\rho\), 95\% CI \\ \hline \multirow{4}{*}{N = 100, M = 100} & \multirow{2}{*}{\(a_{j}\)} & \(\beta^{3}\)-IRT & [0.2295, 0.3136] & **[0.9697, 0.9768]** \\ & & \(\beta^{4}\)-IRT & **[0.0999, 0.1305]** & [0.9496, 0.9636] \\ \cline{2-5} & \multirow{2}{*}{\(\delta_{j}\)} & \(\beta^{3}\)-IRT & [0.3971, 0.5393] & [0.7019, 0.7860] \\ & & \(\beta^{4}\)-IRT & **[0.0947, 0.1125]** & **[0.9738, 0.9795]** \\ \cline{2-5} & \multirow{2}{*}{\(\theta_{i}\)} & \(\beta^{3}\)-IRT & **[0.0379, 0.0531]** & **[0.9957, 0.9970]** \\ & & \(\beta^{4}\)-IRT & [0.0670, 0.0805] & [0.9918, 0.9930] \\ \hline \multirow{4}{*}{N = 100, M = 20} & \multirow{2}{*}{\(a_{j}\)} & \(\beta^{3}\)-IRT & [0.1200, 0.1831] & **[0.9661, 0.9767]** \\ & & \(\beta^{4}\)-IRT & **[0.1220, 0.1623]** & [0.9500, 0.9622] \\ \cline{2-5} & \multirow{2}{*}{\(\delta_{j}\)} & \(\beta^{3}\)-IRT & [0.3401, 0.4646] & [0.7494, 0.8207] \\ & & \(\beta^{4}\)-IRT & **[0.1045, 0.1307]** & **[0.9678, 0.9754]** \\ \cline{2-5} & \multirow{2}{*}{\(\theta_{i}\)} & \(\beta^{3}\)-IRT & **[0.0389, 0.0702]** & **[0.9949, 0.9974]** \\ & & \(\beta^{4}\)-IRT & [0.0865, 0.1193] & [0.9898, 0.9937] \\ \hline \multirow{4}{*}{N = 300, M = 50} & \multirow{2}{*}{\(a_{j}\)} & \(\beta^{3}\)-IRT & **[0.1038, 0.1295]** & **[0.9666, 0.9734]** \\ & & \(\beta^{4}\)-IRT & [0.1380, 0.1602] & [0.9534, 0.9620] \\ \cline{1-1} \cline{2-5} & \multirow{2}{*}{\(\delta_{j}\)} & \(\beta^{3}\)-IRT & [0.3107, 0.3923] & [0.7859, 0.8339] \\ \cline{1-1} & & \(\beta^{4}\)-IRT & **[0.1029, 0.1206]** & **[0.9675, 0.9726]** \\ \cline{1-1} \cline{2-5} & \multirow{2}{*}{\(\theta_{i}\)} & \(\beta^{3}\)-IRT & **[0.0168, 0.0284]** & **[0.9979, 0.9988]** \\ \cline{1-1} & & \(\beta^{4}\)-IRT & [0.0656, 0.0863] & [0.9916, 0.9937] \\ \hline \end{tabular}
\end{table}
Table 1: 95% confidence intervals for RSE and \(\rho\) calculated using bootstrap. The best model is marked in **bold**.
Figure 9: Boxplots of the estimates for the \(\beta^{4}\)-IRT parameters.
## 5 Summary and discussion
We implemented \(\beta^{4}\)-IRT and \(\beta^{3}\)-IRT in Python, resulting in a package that was published in the official repository2 of the language. Experiments showed that, although \(\beta^{3}\)-IRT and \(\beta^{4}\)-IRT performed similarly when estimating abilities and discrimination values, \(\beta^{4}\)-IRT presented a superior performance in the recovery of discrimination signs, which led to an improvement in difficulty estimation, according to RSE and \(\rho\). Improving the estimation of discrimination signs can be important for certain applications of IRT. For example, Chen et al. (2019) investigated the interpretation of negatively-discriminated items as noisy instances in a dataset, i.e. instances that might have flipped labels, making their classification harder for the best models in a model pool. This analysis is clearly hindered if the IRT model is unable to correctly identify the signs of the items' discriminations.
Footnote 2: [https://pypi.org/project/birt-gd/](https://pypi.org/project/birt-gd/)
## Computational details
The results of this paper were obtained using Python \(3.6\), but can be reproduced in any version higher than \(3.6\). The module has the following dependencies: Numpy (\(\geq 1.19.5\)), tqdm (\(\geq 1.19.5\)), tensorflow (\(\geq 4.59.0\)), pandas (\(\geq 1.2.3\)), seaborn (\(\geq 0.11.0\)), matplotlib (\(\geq 3.3.2\)) and scikit-learn (\(\geq 0.23.2\)). All libraries used are available in the Python Package Index (PyPi) at [https://pypi.org/](https://pypi.org/).
## Acknowledgments
MFJ would like to thank the Brazilian National Council for Scientific and Technological Development (CNPq) for their financial support through grant number _PIA12073-2020_.
|
2309.02401
|
Prototype-based Dataset Comparison
|
Dataset summarisation is a fruitful approach to dataset inspection. However,
when applied to a single dataset the discovery of visual concepts is restricted
to those most prominent. We argue that a comparative approach can expand upon
this paradigm to enable richer forms of dataset inspection that go beyond the
most prominent concepts. To enable dataset comparison we present a module that
learns concept-level prototypes across datasets. We leverage self-supervised
learning to discover these prototypes without supervision, and we demonstrate
the benefits of our approach in two case-studies. Our findings show that
dataset comparison extends dataset inspection and we hope to encourage more
works in this direction. Code and usage instructions available at
https://github.com/Nanne/ProtoSim
|
Nanne van Noord
|
2023-09-05T17:27:16Z
|
http://arxiv.org/abs/2309.02401v1
|
# Prototype-based Dataset Comparison
###### Abstract
Dataset summarisation is a fruitful approach to dataset inspection. However, when applied to a single dataset the discovery of visual concepts is restricted to those most prominent. We argue that a comparative approach can expand upon this paradigm to enable richer forms of dataset inspection that go beyond the most prominent concepts.
To enable dataset comparison we present a module that learns concept-level prototypes across datasets. We leverage self-supervised learning to discover these prototypes without supervision, and we demonstrate the benefits of our approach in two case-studies. Our findings show that dataset comparison extends dataset inspection and we hope to encourage more works in this direction. Code and usage instructions available at [https://github.com/Nanne/ProtoSim](https://github.com/Nanne/ProtoSim)
## 1 Introduction
Image datasets are crucial for Computer Vision and due to the algorithms' need for more data they are ever growing in size. At the same time datasets are a major source of bias leading to negative social impact [29, 30]. Unfortunately, it is challenging to determine what is contained in a dataset as their large size combined with the visual nature of the data makes manual inspection infeasible. To support users and developers of large-scale datasets in ensuring that the datasets match their usage and design goals it is necessary to develop better tools for dataset inspection.
A promising direction for generic dataset inspection is found in highly effective approaches that have been proposed for summarisation [35, 9, 14, 44, 24]. A major benefit of these approaches is that they enable explorative dataset inspection without needing supervised pretraining. However, a limitation of these approaches is that they use frequency as a proxy for importance, and on a single dataset therefore only discover those visual concepts which are most prominent. As such, we argue that a comparative approach, which enables discovery of a wider and more diverse range of concepts, is necessary to effectively perform dataset inspection. For instance, the PASS dataset [1] is designed as an ImageNet [32] alternative whilst containing no people, as such it provides us with a testable hypothesis that is comparative in nature. Namely, when comparing these two datasets there should be a disjoint set of visual people-centric concepts that are only found in ImageNet. In a case study we will verify this hypothesis, and demonstrate that dataset comparison can lead to new insights.
Moreover, an additional limitation of existing summarisation approaches is that they decouple the summarisation process from representation learning, and treat these as two distinct steps by performing the summarisation on a
pre-defined feature basis, such as GIST descriptors in [35] or LDA clusters in [31]. As recent work on incorporating prototypes into a network's reasoning process has been shown to aid interpretability [23, 6, 33, 13, 45], we propose that this may also be a promising direction for end-to-end dataset comparison. Therefore, we introduce a method for prototype-based dataset comparison, which discovers the visual concepts in a dataset in an end-to-end fashion.
Dataset prototypes are similar to cluster centroids in that they represent latent points in the feature space that are to be discovered. However, in clustering, similar to prior work on dataset summarisation, these centroids are discovered in a step that is decoupled from learning the feature representations. Whilst some recent works have explored forms of deep clustering [40, 11, 22], they still involve two separate optimisation goals. For example, by having one loss focused on the feature representation and another on clustering [11]. Instead, we propose a simple module _ProtoSim_ that can be integrated into a deep network without changing how it is optimised. To demonstrate this, we add ProtoSim to a Vision Transformer (ViT) [10] and show that it can effectively discover visual concepts across datasets in a self-supervised setting.
Overall, we make the following contributions:
* We introduce dataset comparison, a new approach for inspecting datasets.
* To enable dataset comparison we present ProtoSim, a module for integrated learning of dataset prototypes.
* With two case-studies we demonstrate how dataset comparison can be used to gain new insight into datasets.
## 2 Related work
Dataset comparison through prototypes is a largely unexplored area of research, but it is similar in spirit to dataset distillation [47, 5] which seeks to distill the knowledge from a large dataset into a small dataset. Most works in dataset distillation generate synthetic examples, as opposed to selecting a set of data instances, to construct the small dataset. These synthetic examples could be considered dataset prototypes, as they represent an aggregate of information that is central to the dataset. A key difference between such prototypes and the prototypes considered in this work, is that the distilled synthetic examples are instance-level prototypes (i.e., complete images that may represent multiple concepts), whereas in this work the aim is to discover concept-level prototypes, which may correspond to global concepts or part-level concepts.
In the remainder of this section we will further explore how dataset comparison relates to dataset summarisation, prototype learning, and deep clustering.
### Dataset Summarisation
Manual browsing is a reliable manner to determine what is contained in a dataset, however, browsing even a subset of the data can be time-consuming, which has led to a line of work on dataset summarisation [31, 9, 14, 44, 24]. The aim of dataset summarisation is to find underlying structure or patterns that are shared between data instances to give an overview of what is contained in a dataset.
An important modeling choice for dataset summarisation is the basis on top of which the patterns are found. Work on mid-level pattern mining aims to find discriminative and representative patches and use these as visual primitives for further analysis [34]. Mid-level pattern mining aims to distinguish itself from "visual word" approaches [36], which were found to mainly capture low-level patterns such as edges and corners [34, 9]. Two notable patch-based approaches are the dataset fingerprinting approach by [31] and the _what makes Paris look like Paris?_ approach by [9]. Patch-based approaches aim to circumvent the limitations of visual words by starting with larger visual structures (i.e., patches at multiple scales). Due to the limitations of pixel-space representations, patches cannot be used directly for analysis; instead, initial patch-based approaches used feature extractors such as HOG [34, 9] and, in later works, Convolutional Neural Network (CNN) activations [24]. Using patches has been shown to help steer the results towards more semantically meaningful visual elements, yet they are still restrictive in representational power and in terms of shape (i.e., patches must be rectangular). Moreover, although the starting point is a patch, the features extracted largely determine what the pattern represents and the features are not intrinsically explainable, necessitating post-hoc explanations. In contrast, we propose to add ProtoSim to a ViT architecture, which can learn to recognise visual elements of arbitrary shapes and sizes, whilst still focusing on semantically meaningful concepts.
### Prototype Learning
Prototype-based approaches can be categorised into two areas: the first aims to learn prototypes which represent high-level semantic concepts (e.g., classes) to support zero- or few-shot learning [18, 28, 37]. The second area is concerned with finding recurring visual elements, and such prototypes are increasingly used for model explainability [23, 6, 21, 39, 13, 20, 42, 45]. As opposed to post-hoc explainability methods, prototypes are integral to the network and provide explanations of the network's reasoning process [6]. Our approach is part of this second area and we will focus the discussion on integral prototypes for recurring visual elements.
Integral prototypes have been shown to be effective for supervised multi-class classification [6, 13, 20, 21], deepfake detection [39], and semantic segmentation [48]. In the literature we can recognise two branches of work on integral
prototypes, the first builds on ProtoPnet [23, 6] and the second on Concept Bottleneck Models (CBM) [13, 20]. Common across these approaches is that they add a layer preceding the output layer that maps the extracted features to prototypes (or concepts) and produce the final output based on the affinity to these prototypes. The logic is thus that because the prototypes are recognisable entities, the prediction is explained by the affinity to the prototypes. The main difference between these branches is that in ProtoPnet-like approaches the prototypes are latent vectors in the embedding space that are learnt end-to-end, whereas CBM approaches use a pre-defined concept library.
However, these prior approaches all focus on a supervised setting and learn per-class prototypes or concepts, which is severely limiting for dataset summarisation. Instead, we propose to leverage a self-supervised loss to learn prototypes that are not class specific and can represent _any_ visual concept that is present in the data. In particular, our prototypes may represent class-level concepts or segment-level concepts as in CBM [13], but we learn them without concept-level supervision. Previous works learned prototypes for the features at each location in the convolutional output of a Convolutional Neural Network (CNN) backbone [7, 21, 39]. This restricts the spatial extent of the prototypes to the size of the receptive field. Additionally, in CNNs the output is typically spatially averaged to achieve a global image representation; however, as this is not done for CNN-based prototype approaches, they can only learn local and not global prototypes. Instead, we use Vision Transformers (ViT) [10], as they offer solutions for both aforementioned issues, capturing local and global concepts of various spatial extents.
### Deep Clustering
Clustering relates to learning prototypes or concepts in that it has been used to define the concept library of CBM [13], and to an extent the underlying principles of prototype learning match the goal of clustering. In particular, in deep clustering [40, 11, 22] the centroids are learned iteratively, resulting in them being conceptually close to prototypes or CBM-concepts. However, in prior works on deep clustering [40, 11, 22] the clustering objective is separate from the feature representation, whereas in prototype-based approaches the final output is determined by projecting the input to the prototype subspace.
Another key difference is that by making the prototypes integral to the reasoning process, as in CBM [13] or ProtoPnet [6], it is ensured that the prototypes are meaningful to the final output. Post-hoc clustering, or clustering as part of a separate objective, may lead to overrepresentation of clusters which capture meaningless information, _e.g_., information orthogonal to the objective that was used to obtain the feature representation.
## 3 Dataset Comparison
Given a collection of datasets \(X=\{\chi_{i}\}_{i=1}^{|X|}\) the aim of prototype-based dataset comparison is to discover a set of prototypes \(p\in\mathbb{R}^{K\times D}\) that represent visual concepts that may occur predominantly in only one dataset (_i.e_. dataset-specific prototypes) or visual concepts which occur in two or more datasets (_i.e_. shared prototypes). An illustration of the dataset comparison workflow is shown in Figure 2.
We argue that a comparative approach to dataset summarisation, _i.e_., dataset comparison is vital for gaining insights into datasets. In particular, if \(C_{*}\) is the set of _all_ possible visual concepts, and \(C_{i}\) the visual concepts in ImageNet, then \(C_{i}\subset C_{*}\). Given these sets, \(C_{i}\) is a perfect description of what is contained in ImageNet, and the difference \(C_{*}-C_{i}\) perfectly describes what is not contained in ImageNet. However, as we do not have access to \(C_{*}\) and we have no guaranteed manner of discovering \(C_{i}\), we can at best learn a set of prototypes \(\hat{C}_{i}\) that approximates \(C_{i}\).
As it is an approximation it cannot be concluded from \(\hat{C}_{i}\) that a concept is not in \(C_{i}\). For example, PASS is designed to not contain humans (i.e., no humans in \(C_{p}\)), but based on \(\hat{C}_{p}\) it cannot be stated with certainty that no humans are found in PASS, only that no human-centric concepts were discovered. To overcome this limitation we propose to jointly learn \(\hat{C}_{i}\) and \(\hat{C}_{p}\). Thereby, if we know a concept is in \(\hat{C}_{i}\) and not in \(\hat{C}_{p}\), we can reasonably conclude that the concept is not in \(C_{p}\) either.
Through dataset comparison we are thus able to answer questions that could not be answered by only considering a single dataset, thereby creating new possibilities for dataset inspection for hypothesis verification or comparative exploration.
### Prototype Evaluation
How to evaluate the prototypes learnt for dataset comparison (or summarisation) is an open problem, as the ground-truth for which visual concepts are contained in the dataset
Figure 2: Overview of the dataset comparison workflow. Datasets are mapped to the prototype embedding space, from where we can inspect and compare individual prototypes.
are unknown. Nonetheless, we can leverage the comparative approach by verifying whether the prototypes found match the design goals of the datasets. For instance, as PASS was designed not to contain humans we can explore whether the human-centric prototypes learnt from ImageNet indeed do not occur in PASS. This manner of evaluation forms the basis of our first case study.
Generally, we consider the prototypes \(p\) to be of high quality when they are _distinct_ and _meaningful_. Distinct prototypes are able to independently represent a visual concept; this is in contrast to, for example, sparse coding [46], where the aim is to find basis vectors that represent the input through linear combinations. The set of prototypes is meaningful when it can be used to discriminate images within and across the datasets from one another. Requiring that the prototypes are discriminative helps avoid trivial solutions. We can quantitatively measure how discriminative the prototypes are by evaluating them with downstream tasks.
To ensure that the prototypes are distinct we design our method ProtoSim to perform hard assignment of the prototypes, and by optimising the prototypes with contrastive SSL we ensure that they are discriminative. This alignment between the design of the method (more details in Section 4) and the goal of dataset comparison helps us make certain the prototypes are of high-quality.
### Types of Prototypes
Within the set of prototypes we can recognise different types of prototypes depending on how they occur across the datasets. For instance, a prototype \(p_{i}\) may be considered _dataset-specific_ when over \(95\%\) of its occurrences are only found in a single dataset. By definition, any prototype which is not dataset-specific would then be considered _shared_. However, various degrees of'sharedness' may occur depending on the overlap between datasets.
In theory it may be possible for the datasets in \(X\) to be fully distinct, resulting in only dataset-specific prototypes. In practice, we find that prototypes may represent various basic visual properties (see Figure 10), resulting in shared prototypes even when the datasets differ semantically.
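As a sketch of how the dataset-specific/shared distinction above could be computed in practice, the snippet below splits prototypes by the fraction of their occurrences that fall in a single dataset, using the 95% threshold mentioned in the text; the function name and input format are illustrative assumptions rather than part of the method itself.

```python
import numpy as np

def split_prototypes(counts_per_dataset: np.ndarray, threshold: float = 0.95):
    """counts_per_dataset: array of shape (num_datasets, K) with per-dataset occurrence counts."""
    totals = np.maximum(counts_per_dataset.sum(axis=0), 1)     # (K,), avoid division by zero
    fractions = counts_per_dataset / totals                    # share of each prototype per dataset
    specific = fractions.max(axis=0) >= threshold              # dominated by a single dataset
    return {
        "dataset_specific": np.where(specific)[0],
        "shared": np.where(~specific)[0],
    }
```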
## 4 Prototype-based Learning
We propose _ProtoSim_, a simple module that can be inserted into ViT architectures to enable dataset comparison through integral prototype-based learning. ProtoSim differs from previous prototype layers in three ways: firstly, the prototypes learned with ProtoSim are distinct (as opposed to linear mixtures); secondly, ProtoSim is specifically designed for ViT architectures and learns prototypes from a set of tokens, rather than a single embedding vector; lastly, the prototypes in ProtoSim are not class-specific, instead we learn a single pool of prototypes that may occur in any image.
### ProtoSim
In order to learn which visual concepts are present in a dataset we aim to optimise a set of \(K\) learnable prototypes \(p\in\mathbb{R}^{K\times D}\). By designing the prototypes to be part of a prototype layer (_i.e_. ProtoSim) we can directly optimise them in an end-to-end fashion as part of the ViT that we add the layer to. Specifically, given the token embeddings \(z\in\mathbb{R}^{(N+1)\times D}\) produced by a ViT, ProtoSim maps these to the prototype embeddings \(\hat{z}\in\mathbb{R}^{(N+1)\times D}\), where \(D\) is the token vector size. During this mapping each token is replaced with the most similar prototype in \(p\).
In [23, 6] the similarity to each of the class-specific prototypes is determined by calculating the squared \(L^{2}\) distance and selecting the closest prototype with a max operation. Because our aim is to find prototypes that recur in the entire dataset, instead of finding class-specific prototypes, we propose a surprisingly simple formulation of this process by using a reversed version of dot-product attention [25] to efficiently calculate attention across all the prototypes:
\[a=softmax(pz^{\intercal}) \tag{1}\] \[\hat{z}=a^{\intercal}p. \tag{2}\]
Here the attention mask \(a\) represents the soft-assignment of prototypes to tokens, as such each token embedding \(\hat{z}\) is a linear combination of prototypes. This is not desirable as our aim is to learn distinct prototypes which independently represent a visual concept, as this enables better summarisation and greater interpretability. To obtain a hard assignment we replace the softmax operation in Equation 1 with gumbel-softmax [17]:
\[a=gumbel\text{-}softmax(pz^{\intercal}). \tag{3}\]
For simplicity we drop the temperature parameter, as it empirically does not meaningfully influence performance and can be kept fixed at \(1\). The benefit of gumbel-softmax is that it maintains gradient-flow whilst enabling hard attention. Based on this modification \(a\) now represents a hard assignment matrix which is used to replace each token with a prototype. As a visual concept can occur across multiple spatial locations there is no constraint on how often a prototype can be assigned.
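To make the mechanics above concrete, the following is a minimal PyTorch sketch of a ProtoSim-style layer implementing Equations 1-3: tokens attend over the prototypes and each token is replaced by a single prototype via a straight-through gumbel-softmax. The class name, initialisation, and argument defaults are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoSim(nn.Module):
    """Maps token embeddings to prototype embeddings via hard prototype assignment."""

    def __init__(self, num_prototypes: int = 8192, dim: int = 384):
        super().__init__()
        # K learnable prototypes p in R^{K x D}
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim) * 0.02)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: token embeddings of shape (B, N+1, D)
        logits = z @ self.prototypes.t()                    # (B, N+1, K), i.e. p z^T per token
        # hard one-hot assignment that still passes gradients (straight-through gumbel-softmax)
        a = F.gumbel_softmax(logits, tau=1.0, hard=True, dim=-1)
        z_hat = a @ self.prototypes                         # each token replaced by one prototype
        return z_hat
```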
Figure 3: Examples of low-level visual properties discovered with ProtoSim: motion blur (top) and a shallow depth of field (bottom).
### Backbone
Whilst ProtoSim functions as a generic network module, with the only requirement being a two-dimensional input of \(K\times D\), we specifically apply it to ViT in this work as these offer some benefits over CNNs. In particular, in ViT all tokens can share information with all other tokens; as such, there is no restriction on the spatial extent of the learned prototypes. Moreover, whilst most tokens in a ViT represent some spatial region of the image (_i.e_. patch tokens), the class token takes on the role of a global image representation. This allows us to learn prototypes that are local (_i.e_. only assigned to patch tokens) and global (_i.e_. only assigned to the class token) without modifying the architecture.
ProtoSim is placed at the end of the backbone ViT in the network architecture. Formally, the input \(x\in\mathbb{R}^{H\times W\times C}\) is divided into flattened patch tokens \(x_{p}\in\mathbb{R}^{N\times(P^{2}\cdot C)}\), where \(H\), \(W\), and \(C\) are the height, width, and channel dimensionality of the input and \(P\) the patch size hyperparameter. In addition to the \(N\) patch tokens, a learnable _class_ token \(x_{class}\) is prepended to the input. No class labels are used during training, but we retain the ViT terminology of referring to this as a class token. After the embedding phase the tokens are passed through a series of transformer blocks resulting in the token embeddings \(z\in\mathbb{R}^{(N+1)\times D}\), where \(D\) is the embedded token vector size. \(z\) is passed through the ProtoSim layer resulting in the prototype embeddings \(\hat{z}\).
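A possible way to wire this up is sketched below: a wrapper module that feeds the backbone's class and patch tokens through a ProtoSim layer (such as the ProtoSim sketch above). The wrapper name is hypothetical and the backbone is assumed to return all token embeddings.

```python
import torch.nn as nn

class ProtoViT(nn.Module):
    """Wraps a ViT backbone so that its token embeddings pass through ProtoSim."""

    def __init__(self, vit_backbone: nn.Module, protosim: nn.Module):
        super().__init__()
        self.backbone = vit_backbone   # assumed to return (B, N+1, D) token embeddings
        self.protosim = protosim       # e.g. the ProtoSim layer sketched earlier

    def forward(self, x):
        z = self.backbone(x)           # class token + N patch tokens
        return self.protosim(z)        # each token replaced by its assigned prototype
```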
### Training Objective
The training objective determines the prototypes. For instance, a classification-based objective results in prototypes that are indicative of the classes (as in ProtoPnet [6] or CBM [20]). Yet, such an objective may result in a poor coverage of concepts that are not part of the annotated classes. Because we strive for good coverage we focus on a Contrastive Self-Supervised Learning (Contrastive SSL) objective. In particular, Contrastive SSL optimises for fine-grained discrimination between images (rather than classes) and has been shown to perform well on a variety of downstream tasks [7].
From the recently proposed Contrastive SSL methods [16, 15, 7, 8, 3, 4], only DINO [4] has been specifically designed to work with ViT. As such, we minimise the DINO loss as the training objective:
\[\min_{\theta_{s}}\sum_{x\in\{x_{1}^{g},x_{2}^{g}\}}\sum_{\begin{subarray}{c}x^ {\prime}\in V\\ x^{\prime}\neq x\end{subarray}}H(P_{t}(x),P_{s}(x^{\prime})), \tag{4}\]
where \(H(a,b)=-a\log b\), \(x_{1}^{g}\) and \(x_{2}^{g}\) are two large crops from the input, \(V\) a set of data-augmented smaller crops, and \(P_{t}\) and \(P_{s}\) are the teacher and student networks respectively, each with a ProtoSim layer added to the backbone. We refer to the excellent work by [4] for further details.
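For illustration, a simplified sketch of this cross-entropy objective is given below: the student's predictions on every crop are matched to the teacher's predictions on the two global crops. The centering, sharpening, and momentum-teacher updates of the full DINO recipe are omitted, so this is only a schematic of Equation 4, not the actual training code.

```python
import torch.nn.functional as F

def dino_style_loss(teacher_global_logits, student_all_logits):
    """teacher_global_logits: list with logits for the 2 global crops (teacher).
       student_all_logits: list with logits for all crops (same order, then local crops)."""
    loss, n_terms = 0.0, 0
    for t_idx, t_logits in enumerate(teacher_global_logits):
        p_t = F.softmax(t_logits.detach(), dim=-1)              # teacher target, no gradient
        for s_idx, s_logits in enumerate(student_all_logits):
            if s_idx == t_idx:                                  # skip identical views
                continue
            log_p_s = F.log_softmax(s_logits, dim=-1)
            loss = loss + (-p_t * log_p_s).sum(dim=-1).mean()   # H(P_t, P_s)
            n_terms += 1
    return loss / n_terms
```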
## 5 Case Studies
In this section we present two case studies, the first is a comparison between two datasets (_i.e_., two-way dataset comparison) aimed at determining how the PASS [1] dataset differs from ImageNet [32]. Because the design goal of PASS was to present an alternative for ImageNet that does not contain humans we test whether this is achieved successfully. The second case study focuses on a three-way dataset comparison scenario wherein we explore how three artwork datasets differ. These three datasets were designed for different tasks and contain images from different museum collections, as such based on these meta properties of the datasets it is challenging to determine how they actually relate. With these two case studies we put the two main applications of dataset comparison to the test, verifying a design goal of a dataset, and exploring unknown datasets. In addition, in Supplementary Section A we demonstrate how ProtoSim can also be used on a single dataset.
### Datasets
For the first case study we focus on ImageNet and PASS, we briefly describe both:
* **ImageNet**[32] is a widely used dataset available in two versions: ImageNet-21K and ImageNet-1K. In this work we focus on ImageNet-1K consisting of approximately 1.2 million training images and 1000 classes. Despite how widely used it is, and how influential it has been, ImageNet-1K has also been criticised for its biased depictions of persons [30].
* **PASS**[1] is a recently proposed alternative for ImageNet with the intention of not containing persons to avoid the issues in ImageNet. PASS contains approximately 1.4 million unlabeled images and can only be used for self-supervised training.
In the second case study we focus on three artwork datasets that have been used for a variety of different tasks:
* **MET**[43] is a dataset designed for instance-level recognition, its training set contains over 400k images from more than 224k different objects held by the Metropolitan Museum of Art in New York.
* **Rijksmuseum**[27] presents a set of tasks which are museum-centric, focusing on predicting various types of metadata (_i.e_., artist, object-type, material usage, and creation year). It consists of 110k images obtained from collections held by the Rijksmuseum in Amsterdam.
* **SemArt**[12] consists of images collected from the Web Gallery of Art, a virtual museum with predominantly European fine-art. The SemArt dataset contains
21k images described with similar metadata as the Rijksmuseum dataset, but in addition also has textual descriptions in the form of artistic comments.
The available metadata for these three artwork datasets differs, but notably none of these datasets have been described with semantic classes as, for example, found in ImageNet. As such, we do not have prior information about what type of visual concepts are represented in the artwork datasets.
### Experimental setup
The backbone used is the DeIT-S model [38] with a patch size of \(16\times 16\). The parameters of the backbone are fixed during prototype learning and training is done on a single NVIDIA RTX 3090 with a batch size of \(128\) and learning rate of \(5\mathrm{e}{-5}\) for \(20\) epochs. For the first \(15\) epochs we train with the soft gumbel-softmax, after which we switch the student to the hard 'straight-through' gumbel-softmax, to ensure more independent prototypes. In line with previous works (e.g., [2, 26]) we set \(K\) to \(8192\).
For quantitative evaluation we follow the standard procedure of training a linear classifier on frozen features [4, 16]. We freeze the backbone and the ProtoSim layer and then train a linear classifier for \(20\) epochs with a learning rate of \(0.001\) and a batch size of \(256\).
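The linear-evaluation protocol just described could be sketched as follows: the backbone and ProtoSim layer are frozen and only a linear classifier is trained on top of the class-token output. Names such as `model` and `train_loader`, and the use of the class token as the feature, are assumptions for illustration.

```python
import torch
import torch.nn as nn

def train_linear_probe(model, train_loader, num_classes, feat_dim,
                       epochs=20, lr=0.001, device="cuda"):
    model.eval()                                       # freeze backbone + ProtoSim
    for p in model.parameters():
        p.requires_grad = False
    clf = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():
                feats = model(images)[:, 0]            # class-token prototype embedding
            loss = ce(clf(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf
```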
### Two-way Dataset Comparison
For the two-way dataset comparison case study we focus on the PASS and ImageNet datasets, to this end we learn prototypes on the union of these two datasets, which we refer to as the _PassNet_ dataset, and present the results in this section. All results in this section are obtained with ProtoSim trained on top of the DINO backbone pre-trained on ImageNet. A key finding of this comparison is that ProtoSim learns both dataset-specific prototypes (i.e., prototypes that predominantly activate for one dataset) and shared prototypes (i.e., prototypes that occur frequently in both datasets). In Figure 4 we show nine prototypes which are predominantly found in ImageNet, PASS, or are shared equally among both datasets. Additional examples of prototypes are shown in the supplementary material.
The prototypes that are predominantly found in ImageNet contain people (i.e., over 99% of the images for which they activate are found in ImageNet). However, despite its design goal we do find persons for these prototypes in PASS. The _Persons at the beach_ prototype (first row) contains a man holding a beach-ball photographed from the back, and the _Uncovered legs/arms_ prototype (third row) is found in (nude) statues and a faded Polaroid of a person. The only two PASS images for the _Sports-ball_ prototype (second row) just depict the field and a ball, suggesting that the prototype captures more than just persons.
The prototypes that are predominantly found in PASS depict concepts or objects that are not found among the \(1000\) ImageNet categories. Nonetheless, these predominantly PASS prototypes can be found in some ImageNet images where the annotated object is a (small) part of the image. From left to right for rows four to six we find the following ImageNet categories: European Gallinule, Ant, Wheelbarrow, Turnstile, Barn, and Headland. We would describe the prototypes in these rows as capturing _Flowers_, a _View down narrow alley_, and _Sunset with silhouetted foreground_ respectively. In general, the predominantly PASS prototypes concern vista or landscape images, which on occasion contain objects from ImageNet categories.
For the _Uncovered legs/arms_ prototype of Figure 4 we measure how it is spatially distributed and visualise this for the four ImageNet images in Figure 5. Visualising the attention maps of this prototype shows that the main activations are indeed for the leg and arm areas of the persons depicted. Other parts of the persons receive much less attention than these uncovered areas. Although this is a predominantly ImageNet prototype, and is therefore commonly found in instances of the 'miniskirt' and 'bikini' categories, it is also found in PASS, including in images of mannequins, artworks, and anime of primarily feminine characters.
Overall, we can conclude that the two-way comparison is highly successful: not only are we able to verify that depictions of humans in PASS are practically non-existent, but the edge cases we find also demonstrate the strength of the prototypes. Moreover, based on the discovered prototypes we also gain insights into how these datasets differ, with PASS containing many more vistas and landscapes, whereas ImageNet is strongly object-centric. As such we can conclude that while there is an overlap, PASS differs in more respects from ImageNet than just the presence of humans - which may cast doubt on its status as an ImageNet replacement.
Figure 4: Example images for nine prototypes, each three rows showing predominantly ImageNet, predominantly PASS, and Shared prototypes respectively. The last two columns for rows one through six show some of the few examples found in the other dataset. The last three rows show prototypes common in both with examples of ImageNet in the left three columns, and examples in the right three.
We also evaluate the learnt prototypes quantitatively through linear classification. For these evaluations we compare two different backbones, one pre-trained on ImageNet [4] and one pre-trained on PASS [1]. Results reported on PassNet are described as either I+P for the ImageNet pre-trained backbone, or P+I for the PASS pre-trained backbone.
Table 1 shows the performance on the ImageNet validation set for a linear classifier trained on top of fixed backbones. We observe that pre-training on ImageNet (unsurprisingly) leads to the best downstream performance on ImageNet, and that training on PASS leads to lower performance. Similarly, for the two backbones we find that the ImageNet pre-trained backbone performs much better. Nonetheless, we see degraded performance for training on PassNet versus ImageNet. The last two rows of Table 1 show that increasing the \(K\) value benefits performance; due to computational complexity and diminishing returns no values above \(8192\) were tested. In conclusion, our quantitative evaluation confirms our modelling choices of using the ImageNet pre-trained backbone with \(8192\) prototypes, and we will use this configuration for the second case study as well.
The results in Figure 6 show the training loss and the average cosine similarity between prototypes as training progresses. In both plots we see two groupings of two lines that follow the same pattern. For all models we can see that switching to the hard gumbel-softmax leads to a bump in the loss, but except for P+I this bump levels off quickly.
### Three-way Dataset Comparison
In the second case study we perform a three-way dataset comparison between the MET, Rijksmuseum, and SemArt datasets. The aim of this comparison is highly exploratory, unlike ImageNet, none of these datasets have pre-defined classes, as such it is not obvious what visual concepts are represented. In Figure 20 we show nine prototypes discovered with ProtoSim, the first three pairs of prototypes are dataset-specific and are predominantly found in the MET, Rijksmuseum, and SemArt respectively. These prototypes were randomly selected from the most dataset-specific prototypes and give a good indication of what is contained in these datasets. The MET dataset contains many 3D objects and often multiple photographs are taken from different angles, as such the first prototype represents _Top-down views of vases_. The second MET prototype depicts _Egyptian wall carvings_, which were photographed under raking light to emphasise the height differences across the surface.
The Rijksmuseum dataset contains many prints or drawings on paper, whilst this is something that can be learnt from the material type overview in [27], the two prototypes depicted show two different groups of visual concepts. The first Rijksmuseum prototype depicts _natural history drawings_, whereas the second is found in _side view portraits_. SemArt on the other hand consists of mostly fine-art paintings, with the first prototype capturing a stylistic visual concept of _Renaissance paintings_, the second prototype is a more concrete semantic concept, namely _dogs_.
Just by exploring the dataset-specific prototypes we already get a much better understanding of what is contained in these datasets and what makes them unique. In addition, when we then analyse the shared prototypes we can see that these differences largely persist across different visual concepts. The first shared prototype is of various _animals_; notably we see that the prototype is not sensitive to material differences, returning both drawings and paintings. The same observation is made for the second shared prototype of _baskets_, which is matched to drawings, paintings, and photographs of actual baskets. This ability to generalise across modalities is also observed in the third shared prototype of _brass instruments_, which matches actual instruments and realistic paintings containing similar instruments.
Based on the three-way dataset comparison we find that there are degrees of overlap between the datasets, generally we would see that the MET and Rijksmuseum datasets are most similar in types of objects, where MET has a greater focus on physical objects and Rijksmuseum on prints and drawings. Because these two datasets have a relatively small proportion of paintings the similarity with the SemArt dataset is less, nonetheless, we can observe that in terms of visual concepts these datasets represent similar things. In Appendix Section C we show additional prototypes discovered for these artwork datasets.
Figure 5: Attention maps for the four ImageNet images for the _Uncovered legs/arms_ prototype in Figure 4.
Figure 6: Training loss (left) and average cosine similarity (right) during prototype training. PassNet results P+I are obtained with the PASS-DINO backbone and I+P with the DINO backbone.
## 6 Discussion
The prototypes obtained with ProtoSim enable dataset comparison, nonetheless, there are two limitations. Firstly, although explanation methods based on visual examples are preferred for visual data [19], there is a mandatory manual inference step to determine what a prototype represents. Whilst previous methods suffer from the same limitation [6, 21] ProtoSim does offer an advantage in that the attention maps can be visualised (i.e., Figure 5). Based on example images and their attention maps it becomes possible to be relatively certain about what the prototype represents.
Secondly, whilst the ViT backbone can be replaced, for our experiments we found the best results with an ImageNet pre-trained backbone. Since this dataset is one of the ones being compared, and because it has known biases [41], the choice of an ImageNet pre-trained backbone may influence the prototypes learned. During our experiments no such influence was observed; instead, ProtoSim appears flexible in learning diverse prototypes. Nonetheless, further investigation is necessary.
## 7 Conclusion
In this work we presented dataset comparison as a new direction for dataset inspection. To enable dataset comparison across large-scale datasets we introduce ProtoSim, which leverages integral prototype-based learning and self-supervised learning to discover visual concepts across datasets. To evaluate our proposed approach we perform two case studies: in the first we compare the PASS and ImageNet datasets, and in the second we perform a three-way comparison between artwork datasets.
Based on these case studies we find that we can gain new insights into the datasets. In the first case study we find that ImageNet indeed contains many more images with persons, which is in line with the design goal of PASS. However, we still discovered partial and non-photorealistic depictions of persons in PASS, which were not discovered in its person-focused dataset curation process. When comparing the artwork datasets we find that each has a unique focus, but also that various semantic concepts are shared between them.
Overall, we presented an initial exploration of dataset comparison that will hopefully lead to greater attention for this topic, as it is becoming increasingly necessary to improve dataset inspection techniques.
|
2307.14917
|
NSA: Naturalistic Support Artifact to Boost Network Confidence
|
Visual AI systems are vulnerable to natural and synthetic physical corruption
in the real-world. Such corruption often arises unexpectedly and alters the
model's performance. In recent years, the primary focus has been on adversarial
attacks. However, natural corruptions (e.g., snow, fog, dust) are an
omnipresent threat to visual AI systems and should be considered equally
important. Many existing works propose interesting solutions to train robust
models against natural corruption. These works either leverage image
augmentations, which come with the additional cost of model training, or place
suspicious patches in the scene to design unadversarial examples. In this work,
we propose the idea of naturalistic support artifacts (NSA) for robust
prediction. The NSAs are shown to be beneficial in scenarios where model
parameters are inaccessible and adding artifacts in the scene is feasible. The
NSAs are natural looking objects generated through artifact training using
DC-GAN to have high visual fidelity in the scene. We test against natural
corruptions on the Imagenette dataset and observe the improvement in prediction
confidence score by four times. We also demonstrate NSA's capability to
increase adversarial accuracy by 8\% on average. Lastly, we qualitatively
analyze NSAs using saliency maps to understand how they help improve prediction
confidence.
|
Abhijith Sharma, Phil Munz, Apurva Narayan
|
2023-07-27T15:00:31Z
|
http://arxiv.org/abs/2307.14917v1
|
# NSA: Naturalistic Support Artifact to Boost Network Confidence +
###### Abstract
Visual AI systems are vulnerable to natural and synthetic physical corruption in the real-world. Such corruption often arises unexpectedly and alters the model's performance. In recent years, the primary focus has been on adversarial attacks. However, natural corruptions (e.g., snow, fog, dust) are an omnipresent threat to visual AI systems and should be considered equally important. Many existing works propose interesting solutions to train robust models against natural corruption. These works either leverage image augmentations, which come with the additional cost of model training, or place suspicious patches in the scene to design unadversarial examples. In this work, we propose the idea of naturalistic support artifacts (NSA) for robust prediction. The NSAs are shown to be beneficial in scenarios where model parameters are inaccessible and adding artifacts in the scene is feasible. The NSAs are natural looking objects generated through artifact training using DC-GAN to have high visual fidelity in the scene. We test against natural corruptions on the Imagenette dataset and observe the improvement in prediction confidence score by four times. We also demonstrate NSA's capability to increase adversarial accuracy by 8% on average. Lastly, we qualitatively analyze NSAs using saliency maps to understand how they help improve prediction confidence.
Computer Vision, Robust AI, Security, Confidence Boosting
## 1 Introduction
Image processing has become indispensable to most vision-based applications in recent years. Simultaneously, convolutional neural networks (CNNs) have gained traction due to their ability to handle visual inputs and achieve human-level performance for specific tasks [1], [2]. Eventually, CNNs found their way into numerous applications for scene understanding, and automated detection of objects [3], [4]. The emergence of CNNs has been remarkable, but their performance is highly conditioned on their prior training distribution [5]. The samples from out-of-distribution have led to erroneous predictions [6], [7], even failing miserably in some scenarios [8]. This led to a series of works that have concentrated on inspecting the fragile behavior of neural networks [9], [10], [11] and the natural trade-off between accuracy and robustness [12]. The robustness aspects of an AI model have become just as necessary as traditional accuracy metrics, especially for safety-critical systems.
### Current Scenario
CNNs, like many other neural network architectures, are black-box, which means their working cannot be understood entirely. The situation became further concerning when Szegedy et al. [11] highlighted neural networks' brittleness against corruption. Several demonstrations highlighted the ability of small (even imperceptible) noise in the image
(popularly known as adversarial attacks) to cripple perfectly trained models [11], [13], [14]. In most cases, the addition of noise required precise manipulation of the image's pixel values digitally (FGSM [15], PGD [9], C&W [16]), limiting its applicability. To translate adversarial attacks to practical scenarios, Brown et al. ensured the printability of attack and proposed Adversarial Patches [17], [18].
The numerous proposals of varied corruption in recent years necessitate robust model training. Eventually, the concern over the inconsistent behavior gave rise to several defense methodologies [19], [20], [21]. A defense can be broadly classified as either model agnostic (e.g., using saliency map) [22], [23] or model dependent (adversarial training) [9] where the network weights are learned to tackle the corruption. Typically, the design of a defense either requires access to model parameters or the attack should be localized and perceptible to be detectable. The arms race between robust defenses and newer malicious attacks overcoming them is an ongoing challenge.
### Motivation
The robustness of an AI model is more than just ensuring performance against adversarial noises. With the advent of mathematically formulated attacks, the focus of the literature has shifted away from the fundamental natural corruptions (e.g., snow, dust, rain). The chances of such natural disturbances occurring in the real world are far more likely than adversarial attacks. We acknowledge that the potency of natural corruption is lower than that of adversarial attacks, yet designing a robust model requires their consideration. Most commonly, researchers have been using image augmentations [24] to train resilient CNN models [25], [26] against natural noises. In image augmentation, corruptions in the form of transformations or disturbances are applied to the training images. In essence, the network parameters are learned to make robust predictions over corrupted images [9]. However, in practice, the accessibility to model parameters is sometimes infeasible, and model training with augmented images is expensive, limiting the technique's applicability.
Inspired by adversarial patch [17], Salman et al. proposed unadversarial example [27] to tackle natural corruptions. A normal image can be easily transformed into an unadversarial one by adding an inverse adversarially trained patch to it. In this technique, the loss is backpropagated to learn features/patterns in the scene rather than network weights, as in the case of adversarial training. The learned patterns are either restricted to a localized region as a patch or are printed over the target object's body. Designing unadversarial examples does not require access to model parameters but can still improve the prediction confidence score. Building over the idea of Salman et al., the authors in [28] proposed a more vigorous defense to achieve better robustness. Additionally, they demonstrated the effect of unadversarial examples on the distribution shift and utilized class-level information for better performance. A similar work called collaborative adversarial training (CAT) [29] demonstrates a new distance metric for generating unadversarial examples for adversarial robustness.
The existing works on unadversarial examples institute the idea of implicit scene robustness without relying on model training. However, they all contain unnatural trained patterns in the scene. In the unadversarial examples proposed
Figure 1: The figure shows the increase in the prediction score after placing NSA for various natural corruptions. The original prediction is Chain Saw. The top row is without NSA (original image), and the bottom row is with NSA. The yellow circle highlights the presence of artifacts (NSA). We use two bird artifacts as they have high visual fidelity in the scene.
in [27], the trained patterns over the target object look suspicious and make it easily noticeable. Similarly, in [28], an irregular, thick boundary-trained patch is made around the image, which looks unnatural. Making a boundary for any image is only feasible after capturing it through the camera. Although the vision behind unadversarial examples is highly appreciable, there are better ways to generate them than the existing techniques.
### Our Contribution
Our work draws significant inspiration from the unadversarial example design technique. As per the discussion in Section 1.2, the primary goal should be to make the patches added to unadversarial images look natural. In the context of our work, we call these patches artifacts. The 'artifacts' can be defined as artificially generated objects that initially do not exist in the scene, but are placed intentionally. This work proposes an artifact training framework to design naturalistic support artifacts (NSAs). The location of artifacts can vary depending on the context of the image. Figure 1 demonstrates how the artifacts will look in the scene and their ability to boost the prediction confidence score against natural corruption.
Similar to unadversarial examples, the NSA does not require knowledge of model architecture or accessibility to its parameters. Hence, robust prediction is not a result of the model itself but rather due to the placement of NSA in the scene. Here, we assume that the NSA designer has physical and digital access to the scene to train and place the artifacts in the scene. This is a fair assumption in most cases as the model designer often also has the accessibility to the scene or at least has control over the inputs to the model [27]. Additionally, the NSAs have excellent visual fidelity in the scene. Since the artifacts help the model's prediction, we use the term'support' in NSA. The characteristics of the NSA are summarised below:
* **High Visual Fidelity:** The NSAs placed in the scene look natural without any suspicious patterns.
* **Universal Training:** The NSAs can be universally trained for all physical corruptions. However, fine-tuning does improve the prediction robustness.
* **Model Agnostic:** The framework of NSAs training is independent of classifier and generator architecture.
* **Scalable:** The number of NSAs in the scene is scalable by adding more varied pre-trained artifact generators.
## 2 Background
This section discusses a generic model and generator formulation concerning naturalistic support artifacts.
### NSA Formulation
#### 2.1.1 Model Formulation
Assume we have an input RGB image \(\mathbf{x}\in\mathcal{X}\), where \(\mathcal{X}\in\mathbb{R}^{w\times h\times c}\). The \(w\), \(h\), and \(c\) represent the width, height, and color channels in the image, respectively. The image is normalized in the range \([0,1]\) to ensure printability. The CNN model \(\mathcal{F}:\mathcal{X}\rightarrow\mathcal{Y}\) produces the probabilistic output vector \(\vec{y}\in\mathcal{Y}\), where \(\mathcal{Y}\in\mathbb{R}^{n}\) and \(n\) is the total number of classes present in the dataset. Each element of \(\vec{y}\) is the probability of classifying image \(\mathbf{x}\) to a corresponding class in the dataset. Deriving from the classification probability, the confidence score can be formulated as the prediction probability \(\times\) 100. This score signifies the confidence of a model in the specific prediction. The class corresponding to the highest probability in \(\vec{y}\) is the model's predicted class \(k\) of an image \(\mathbf{x}\). The \(k\) can vary from \(\{0,1,....,n-1\}\). The equation formulating the model behavior is given as
\[k=\operatorname{argmax}[\mathcal{F}(\vec{y}|\mathbf{x})]\,, \tag{1}\]
where the model \(\mathcal{F}(\vec{y}|\mathbf{x})\) outputs the probability vector \(\vec{y}\) for a given image input \(\mathbf{x}\in\mathcal{X}\).
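As a small illustration of Equation 1 and the confidence score defined above, the following sketch returns the predicted class and the confidence score for an input image; `model` is a placeholder classifier returning logits.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_with_confidence(model, x):
    probs = F.softmax(model(x), dim=-1)   # probability vector y over the n classes
    conf, k = probs.max(dim=-1)           # highest probability and predicted class k
    return k, conf * 100.0                # confidence score = prediction probability x 100
```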
#### 2.1.2 Generator Formulation
The Generator \(\mathcal{G}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{\bar{w}\times\bar{h}\times c}\) is used to create a naturalistic artifact \(\mathfrak{a}=\mathcal{G}(z)\), \(\mathfrak{a}\in\mathbb{R}^{\bar{w}\times\bar{h}\times c}\), where \(\bar{w}\) and \(\bar{h}\) represent the width and height of the artifact. The channels \(c\) of the artifact are the same as those of the image. The \(z\) is the input latent vector randomly sampled from the noise distribution \(P(\mathbf{z})\in\mathbb{R}^{n}\). The framework of designing unadversarial examples is similar to that of [27], but the generator's inclusion in the training loop distinguishes our method. A Deep Convolutional Generative Adversarial Network (DC-GAN) is trained in an unsupervised fashion with the Wasserstein GAN with gradient penalty (WGAN-GP) loss as shown in Equation 2.
\[\begin{split}\min_{\mathcal{G}}\max_{\mathcal{D}}&\ \mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}\left[\mathcal{D}(\tilde{x})\right]-\mathbb{E}_{x\sim\mathbb{P}_{r}}\left[\mathcal{D}(x)\right]\\ &+\lambda\ \mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}\left[(\|\nabla_{\hat{x}}\mathcal{D}(\hat{x})\|_{2}-1)^{2}\right]\,,\end{split} \tag{2}\]
where \(x\) is a real image and \(\mathbb{P}_{r}\) is the distribution of real images. The \(\tilde{x}\) is a generated (fake) image (\(\tilde{x}=\mathcal{G}(z)\)) from the distribution \(\mathbb{P}_{g}\) of generated images. The \(\mathbb{P}_{\hat{x}}\) is the distribution of samples interpolated between real and generated images, on which the gradient penalty is computed. The min-max optimization ensures that the generated images look similar to real ones. Authors in [30] have shown that WGAN-GP ensures stable training with good visual fidelity among generated images. The dataset for training the DC-GAN depends on the type of artifact to be placed in the scene, like birds, balls, etc. The normally trained generator is utilized as an NSA generator during the artifact training.
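For reference, a simplified sketch of the gradient-penalty term in Equation 2 is given below, computed on samples interpolated between real and generated images; it illustrates the standard WGAN-GP penalty, not the exact DC-GAN training code used here.

```python
import torch

def gradient_penalty(discriminator, real, fake, lam=10.0):
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)   # interpolated samples
    d_out = discriminator(x_hat)
    grads, = torch.autograd.grad(d_out.sum(), x_hat, create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)                    # ||grad D(x_hat)||_2 per sample
    return lam * ((grad_norm - 1) ** 2).mean()
```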
#### 2.1.3 Artifact Formulation
We formulate artifact creation as an optimization problem. The goal is to ensure that the artifacts help minimize the loss of classifying the image to the original label \(k_{o}\), given as \(\mathcal{L}(\operatorname*{argmax}[\mathcal{F}(\vec{y}|\vec{x}^{\prime})],k_{o})\). The gradient information of the prediction loss is backpropagated to update the artifacts' pixel values iteratively. We design a background removal technique to create a mask for applying artifacts in the scene, and we discuss it in detail in Section 3. The unadversarial example \(x^{\prime}\) formed after placing artifact \(\mathfrak{a}\) on the image \(x\) is shown in equation 3
\[\vec{x}^{\prime}=(1-m)\odot\vec{x}+m\odot\mathfrak{a}\,, \tag{3}\]
where \(\vec{x}^{\prime}\in\mathcal{X}\), \(\mathfrak{a}\in[0,1]^{\bar{w}\times\bar{h}\times c}\) is the artifact and \(m\in M\subset\{0,1\}^{w\times h\times c}\) represents the binary pixel block to mask. The mask specifies the patch's area and location over the image. The \(\odot\) is the Hadamard operator which denotes the element-wise multiplication of pixels between two matrices. Additionally, the artifact's pixel values are clipped at every iteration to stay within the valid RGB range to ensure printability.
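A minimal sketch of Equation 3 is shown below, assuming the artifact has already been pasted onto a full-size canvas at its target location so that it shares the image resolution; the function name is illustrative.

```python
import torch

def apply_artifact(x, artifact_canvas, mask):
    """x, artifact_canvas, mask: tensors of shape (C, H, W); mask is binary {0, 1}."""
    x_prime = (1 - mask) * x + mask * artifact_canvas   # Hadamard products of Eq. 3
    return x_prime.clamp(0.0, 1.0)                      # clip to the valid, printable range
```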
## 3 Artifact Training
The design of prediction-supporting artifacts has been inspired by [27], where the authors show that an adversarial attack can be turned into an unadversarial example using a slightly modified loss function, as shown in Equation 4.
\[\begin{split}\delta_{adv}&=\operatorname*{argmax}_{ \delta\in\Delta}\mathcal{L}(\mathcal{F}(x+\delta),y)\\ \delta_{unadv}&=\operatorname*{argmin}_{\delta\in \Delta}\mathcal{L}(\mathcal{F}(x+\delta),y)\end{split} \tag{4}\]
Figure 2: Artifact training framework for designing NSAs. The unadversarial image is formed using two artifacts for illustration. Note that the \(\odot\) represents Hadamard product. The yellow circle highlights the presence of NSAs.
where \(y\) is the original label and \(\delta\) is the perturbation bounded by \(\Delta\), added to the image \(x\) to form an unadversarial example. In addition, the idea of generating natural objects has been extended from the framework proposed in [31].
An illustration of the artifact training framework has been shown in Figure 2. The framework consists of three components: artifact generation, artifact application, and prediction, which are explained in detail as follows:
**Artifact Generation:** The artifact generation procedure requires a pre-trained generator to generate artifacts from a given distribution. The parameters of the generator are fixed during the training. The input to the generator is an \(n\)-dimensional latent vector \(z\in\mathbb{R}^{n}\). To boost the prediction score, we propose using multiple artifacts in the scene. The advantage of using multiple artifacts is three-fold: First, it increases the number of pixels that can be manipulated. With more artifacts, we have higher control over the scene. Second, placing multiple artifacts is better as a single artifact could become unnaturally large while trying to achieve satisfactory performance. Lastly, there are typically seldom any physical limitations to placing more than one artifact in the scene. Additionally, multiple artifacts also help in maintaining visual fidelity as we have more freedom to place artifacts at different locations in the scene.
As shown in Figure 2, our framework facilitates simultaneous training of all artifacts without additional complexity. This is necessary as we want all the artifacts to work in tandem and complement each other. Typically, to improve the prediction robustness, the artifacts learn patterns by focusing on salient features. Hence, if we train artifacts individually, they will have similar patterns (primarily drawn from the salient region). Any abnormality in the non-salient regions has the potential to hamper performance. With simultaneous training, the artifacts tend to support each other and focus on complementary features. If one artifact focuses on the salient object, the other tries to derive information from the rest of the scene. Hence, infusing more varied scene context into the artifacts is possible, ultimately increasing the chances of generating better artifacts. Also, the type of artifact can vary as we can use different generators producing different types of artifacts (for example, generator \(\mathcal{G}1\) for _bird_, generator \(\mathcal{G}2\) for _ball_). This increases the freedom of creating an unadversarial example.
**Artifact Application:** Placing the artifacts requires one to place a suitable mask \(m\) based on the context of the image. The mask is designed in two stages: First, we determine a custom threshold value for each artifact by trial and error to remove the background. This process converts the artifact's image into a binary image (sub-mask). The size of each
sub-mask will be \(\bar{w}\times\bar{h}\) (same as that of the generated artifact from \(\mathcal{G}\)), where the background will be black (pixel value 0) and the artifact body will be white (pixel value 1) due to thresholding. Next, we place the sub-masks corresponding to each artifact on a black (pixel value 0) background of (\(w\times h\)), the same as the original image. The sub-masks are placed using a patch application operator \(\mathfrak{A}\) to apply artifacts at the required locations and with the relevant orientations. For the purpose of demonstration, we present a mask based on two bird artifacts in Figure 2. The mask is kept static during iterations because it needs to be pre-designed as per the scene. However, this is not mandatory and the mask can be changed dynamically if the application demands it.
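A rough sketch of this two-stage mask construction is given below, using OpenCV thresholding to obtain binary sub-masks and pasting them onto a black full-image canvas; the threshold values and placement coordinates are scene-specific assumptions, and rotation/orientation handling is omitted.

```python
import cv2
import numpy as np

def build_mask(artifacts, positions, image_hw, thresholds):
    """artifacts: list of uint8 RGB artifact images; positions: list of (y, x) offsets."""
    H, W = image_hw
    mask = np.zeros((H, W), dtype=np.float32)
    for art, (y, x), thr in zip(artifacts, positions, thresholds):
        gray = cv2.cvtColor(art, cv2.COLOR_RGB2GRAY)
        _, sub = cv2.threshold(gray, thr, 1, cv2.THRESH_BINARY)   # 1 = artifact body, 0 = background
        h, w = sub.shape
        mask[y:y + h, x:x + w] = np.maximum(mask[y:y + h, x:x + w], sub)
    return mask
```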
**Prediction:** Once all the artifacts are placed in the scene, the unadversarial example is ready for prediction. To achieve higher robustness, we apply the corruption of interest on the image (refer Figure 2). In this way the artifacts can be fine-tuned against specific corruptions for better performance. The image is sent to a CNN model, for which we need to boost the prediction confidence. The model needs to be pre-trained on the dataset of interest. Note that even though we help in the CNN's prediction, the model's parameters are fixed. The model behaves as a black-box in the framework and takes an unadversarial example as input and predicts the output.
The overall artifact training procedure can be summarised as follows: First, a loss function is decided as per the application. As in typical neural network training, the loss gradient over the output prediction is backpropagated. While creating an unadversarial example, we calculate the loss gradient with respect to the artifact's pixels rather than the network weights. However, this leads to unrestricted modification of the artifact's pixel values, eventually leading to suspicious and unnatural patterns. Also, these patterns are in the form of a geometric shape, which may not be relevant in the context of the scene. To avoid unnatural patterns and learn meaningful objects, we utilize a generator in the training loop to include the additional constraint of producing naturalistic artifacts. Hence in this work, rather than directly updating the artifact's pixels, we update the generator's input latent vector. If we apply multiple artifacts, all the latent vectors for the respective generators are updated simultaneously. The generator's output is the desired set of artifacts/NSAs. These NSAs are then applied over the target image using a suitable mask. The corrupted unadversarial image is then sent to the model for prediction. The training is continued until a target prediction confidence score is achieved. We also include an additional stopping criterion to stop the training if there is insufficient improvement of the prediction score for a set of subsequent iterations. The detailed procedure is explained in Algorithm 1.
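Since Algorithm 1 itself is not reproduced here, the following is a simplified sketch of the training loop it describes: the classifier and generators stay frozen, and only the latent vectors are optimised so that the corrupted unadversarial image is classified as the original label with high confidence. The placement and corruption functions are placeholders, and the corruption is assumed to be differentiable (or applied as a fixed transform) so that gradients reach the latent vectors.

```python
import torch
import torch.nn.functional as F

def train_artifacts(model, generators, latents, image, mask, label,
                    corrupt_fn, place_fn, steps=200, lr=0.1, target_conf=0.9):
    """latents: list of latent vectors with requires_grad=True; model and generators are frozen."""
    opt = torch.optim.Adam(latents, lr=lr)
    target = torch.tensor([label], device=image.device)
    for _ in range(steps):
        artifacts = [g(z) for g, z in zip(generators, latents)]    # naturalistic artifacts a = G(z)
        x_prime = place_fn(image, artifacts, mask).clamp(0, 1)     # Eq. 3, clipped to the valid range
        logits = model(corrupt_fn(x_prime))                        # corruption applied before prediction
        loss = F.cross_entropy(logits, target)                     # minimise loss w.r.t. the original label
        opt.zero_grad()
        loss.backward()
        opt.step()
        if F.softmax(logits, dim=-1)[0, label].item() >= target_conf:
            break                                                  # target confidence score reached
    return [g(z).detach() for g, z in zip(generators, latents)]
```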
## 4 Experiments and Results
In this section, we discuss the results of a set of experiments to inspect the performance of NSA.
### Experimental setup
We use two different models to validate our attack: VGG16 [32], and ResNet18 [33]. They have distinct backbone architectures, increasing the possibility of diversity in results. These models are trained on Imagenette, a simple and commonly used benchmark dataset with 10,000 images shared almost equally between 10 classes. Each image is 224\(\times\)224 pixels. Imagenette is derived as a subset from the commonly used benchmark dataset: ImageNet [34].
For training the artifact generator, we used PyTorch's TorchGAN [35] library for ease of implementation. We decided on birds as the artifact for our applications and used a custom subset of the Bird-400 dataset for the training. We found 300 epochs to be sufficient to train the artifact generator with the WGAN-GP loss. We decided on \(n\) = 128 as the dimension of the input latent vector to the generator. We found that lower dimensions (\(n\) = 64) generated low-quality artifacts and we did not observe any additional improvement with higher dimensions (\(n\) = 256). The output of the generator is a 64\(\times\)64\(\times\)3 artifact.
With the Cross Entropy loss, we used PyTorch's inbuilt Adam optimizer with a learning rate of 0.1 for the artifact training. OpenCV's image thresholding function is utilized to remove the background of the images generated by the artifact generator to design the mask. Overall, the NSAs cover around 5% of the total image area. The corruptions are introduced in the scene using the off-the-shelf Python library imagecorruptions [36]. All the experiments were carried out on a single Nvidia RTX A6000.
### Experiment I: Analysis of NSA's impact on the prediction confidence score
As a part of this experiment, we analyze how NSAs increase confidence in the prediction. Since all the natural corruption derives from an underlying distribution, we record confidence scores based on 1000 noise samples from a specific corruption distribution, applied to a random image from each of the ten classes. The score is calculated from the output classification probability vector of the CNN model multiplied by 100. Hence, each plot consists of
confidence score values from the prediction on 10,000 adversarial images for a specific corruption, which is sufficient to evaluate the impact of the NSAs. Additionally, the values are averaged across the ResNet18 and VGG16 model.
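A sketch of this sampling procedure, assuming the `imagecorruptions` package named in Section 4.1 and a helper `predict_conf` that returns the confidence score for a single image, could look as follows.

```python
import numpy as np
from imagecorruptions import corrupt

def mean_confidence(predict_conf, image_uint8, corruption_name, severity=3, samples=1000):
    """Draw repeated corruptions of one image and average the model's confidence score."""
    scores = []
    for _ in range(samples):
        corrupted = corrupt(image_uint8, corruption_name=corruption_name, severity=severity)
        scores.append(predict_conf(corrupted))
    return float(np.mean(scores))
```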
In Figure 3, we observe that the median and mean confidence scores for NSA-applied images are higher than the ones without them for all corruption types. We notice that for brightness, fog, frost, and snow, the 75th percentile score without NSA is lower than the 25th percentile with NSA. Overall, we observe about a 4 times improvement in the mean confidence score with NSA across all corruptions. The highest impact is against brightness (6.4 times), and the lowest is for defocus (1.6 times). Overall, the NSA is shown to considerably improve the confidence, which eventually also improves the adversarial accuracy, as demonstrated in Experiment II.
### Experiment II: Analysis of NSA's impact on the adversarial accuracy
We evaluated the ability of the NSA to improve the adversarial accuracy of ResNet18 and VGG16 classifiers against physical corruption. We used the same test set for both models to maintain consistency. We chose six prominent corruptions that frequently occur in real-world scenarios: brightness, dust, defocus blur, fog, frost, and snow. The severity of corruption is varied on a scale of Level 1 (lowest) to Level 5 (highest), in steps of 1. Figures 4 and 5 show the increment in adversarial accuracy post-NSA. The mean adversarial accuracy across all severity levels (with or without NSA) is stated beside the legend and is denoted by \(u\).
As expected, we observe decreasing accuracy as the severity of noise increases. Interestingly, we notice a similar trend across both models for all corruptions. Among the corruptions, we found that the brightness does not degrade the prediction compared to other corruptions. On the other hand, the severe dust levels have profound implications. Especially for higher severity (L4 and L5), we notice that even NSAs are unable to improve the robustness. One possible reason could be that the intense dust in the scene might hamper the minimum required visibility of the NSA, leading to its ineffectiveness. However, it is essential to note that in such instances, even the target object might not be clearly visible; hence, the adversarial accuracy is lower. For corruptions like fog, frost, and snow, we see an
Figure 3: Comparison of prediction confidence score against corruption distributions with and without NSA. The '\(\times\)' symbol in the plot represents the mean. The bottom edge of the box represents the 25th percentile and the upper edge represents the 75th percentile, with the middle line inside the box being the median of the data. All the corruptions belong to Level 3.
improvement of around 12% on average for both models. Overall, we observe that the inclusion of NSAs has a positive impact on the model's prediction.
### Experiment III: Variation in the performance of NSA across different target classes
In this experiment, we investigated how NSA helps to improve accuracy across different target classes. It is essential to understand the ease of artifact training for each class. The NSA is currently trained with the same hyperparameters for all classes. Table 1 gives an idea about the classes for which we may need to change the hyperparameters to achieve better performance. For such classes, we need to design powerful NSAs by either increasing the epochs during training or placing numerous and larger artifacts in the scene if possible. In the table, the natural accuracy is based on uncorrupted images. The adversarial accuracy is calculated as the mean accuracy over corrupted images (brightness, dust, defocus, fog, frost, and snow) across all severity levels (L1-L5).
Unlike adversarial training, where optimizing against perturbations leads to lower natural accuracy, in artifact training, the natural accuracy improves along with adversarial accuracy. For all target classes, we were able to achieve 100 % accuracy for uncorrupted images using NSA. In the scenario against corruption, we achieved around 8% improvement in adversarial accuracy on average. We note that the NSAs could not boost the prediction for cassette player,
Figure 4: Analysis of naturalistic support artifacts (NSA) against physical corruption for ResNet18 architecture.
Figure 5: Analysis of naturalistic support artifacts (NSA) against physical corruption for VGG16 architecture.
indicating the need for stronger artifacts for this class. However, we want to highlight classes like English Springer and Chain Saw, for which the accuracy improved by up to 19%. The variance in class-level performance is expected, as each class has different underlying features. Hence, we suggest using custom hyperparameters for artifact training for each class.
### Experiment IV: Visualizing the impact of NSAs
For a qualitative understanding of NSA's impact, we used a saliency map generation technique called Grad-CAM [37]. We observed that there are two main scenarios (Case 1 and Case 2) where the contribution of an NSA needs to be evaluated, as shown in Figure 6. First, when the model performs an incorrect prediction over the corrupted image which is later rectified by placing NSAs in the scene. Second, when the original prediction is correct for the corrupted image, and the NSA assists in boosting the model's confidence in its prediction. We randomly select fog corruption for this study.
In Case 1, we see that the model focuses on the eyes, nose, mouth, and ears while making the prediction of English Springer. However, with the corruption, the model's focus is switched to only the mouth, leading to an incorrect prediction as Parachute. If we place an NSA in the scene, we observe that the model's focus remains intact on features similar to that of the original image. Interestingly, even though the NSAs are trained to increase robustness, the model does not focus on them. The NSAs help the model focus on the original rather than the artifact features.
In Case 2, we see that the model's focus is similar in both the corrupted and uncorrupted scenarios. However, the focus on French Horn is better in the uncorrupted (original) case, which is as expected and carries a higher confidence score of 92%. With the addition of NSAs in the scene, the score goes up from 35% to 70%. Unlike Case 1, the model focuses on both the original features and the artifacts to improve the prediction confidence. This is necessary because the model was already focusing on the important features in the image yet had low confidence due to the noise. So it appears the NSAs, trained to learn class-specific information, aid the model in decision-making.
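For reference, a minimal hook-based Grad-CAM sketch in PyTorch in the spirit of [37]; it is not the implementation used for Figure 6, and the target-layer choice (e.g. the last ResNet18 block) is an assumption.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Minimal Grad-CAM: weight the target layer's activations by the
    spatially averaged gradients of the chosen class score."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))
    logits = model(image)                                    # image: (1, 3, H, W)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()
    weights = grads['v'].mean(dim=(2, 3), keepdim=True)      # global-average-pooled gradients
    cam = F.relu((weights * acts['v']).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode='bilinear', align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()     # normalized saliency map

# usage sketch: saliency for a ResNet18 prediction, e.g. target_layer = resnet18.layer4[-1]
```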
## 5 Conclusion
Our work showcases that the presence of carefully formulated artifacts called NSAs can help the model in decision-making. Interestingly, unlike adversarial training, NSAs are capable of improving natural and adversarial accuracy at the same time. The idea of unadversarial examples inspires the training of NSAs, but we proposed a naturalistic and meaningful way of designing them. The approach ensures that the artifacts do not look out of place or suspicious to the human eye. We introduced the generators (from a pre-trained GAN) into the artifact training framework to ensure that the artifacts retain the naturalistic patterns necessary to assist the model in prediction. We also employ multiple generators to train all artifacts simultaneously. We believe NSAs can be adopted to improve prediction robustness.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{Class Name} & \multicolumn{4}{c|}{Natural Accuracy} & \multicolumn{4}{c|}{Adversarial Accuracy} \\ \cline{2-9} & \multicolumn{2}{c|}{ResNet18} & \multicolumn{2}{c|}{VGG16} & \multicolumn{2}{c|}{ResNet18} & \multicolumn{2}{c|}{VGG16} \\ & ORI & NSA & ORI & NSA & ORI & NSA & ORI & NSA \\ \hline English Springer & 97 & 100 & 97 & 100 & 56 & 65 & 58 & 76 \\ French Horn & 98 & 100 & 98 & 100 & 63 & 76 & 64 & 78 \\ Cassette Player & 99 & 100 & 99 & 100 & 77 & 78 & 81 & 81 \\ Chain Saw & 98 & 100 & 98 & 100 & 71 & 87 & 57 & 78 \\ Church & 98 & 100 & 99 & 100 & 75 & 80 & 81 & 85 \\ Garbage Truck & 100 & 100 & 100 & 100 & 67 & 73 & 79 & 82 \\ Gas Pump & 96 & 100 & 96 & 100 & 48 & 57 & 53 & 64 \\ Golf Ball & 100 & 100 & 99 & 100 & 74 & 82 & 84 & 90 \\ Parachute & 98 & 100 & 98 & 100 & 93 & 97 & 87 & 96 \\ Tench & 100 & 100 & 100 & 100 & 75 & 84 & 73 & 84 \\ \hline
**OVERALL** & 98 & **100** & 98 & **100** & 70 & 72 & 72 & 81 \\ \hline \end{tabular}
\end{table}
Table 1: Experimental results of class-level NSA influence on ResNet18 and VGG16 model. The ORI represents the Original image without any artifact and NSA is the image with an artifact applied.
|
2306.12097
|
Extracting spinning wormhole energy via magnetic reconnection
|
Magnetic reconnection has been extensively shown to be a promising approach
to extract spinning black hole energy. In this paper, we focus on extracting
spinning wormhole energy via such mechanism. The study shows that it is indeed
possible to extract rotating energy from a spinning wormhole with small
regularization parameter $\ell$ of the central singularity. The efficiency and
power of the energy extraction are also evaluated. Quite different from the
Kerr black hole, the spin of the wormhole can take arbitrarily large values.
However, increasing the wormhole spin does not always improve the efficiency and
power of energy extraction. By further comparing with the Kerr black hole, we
find the wormhole is more efficient when the magnetic reconnection happens
within radial distance $r/M<1$. These studies reveal the features of extracting
spinning wormhole energy, and more underlying properties are expected to be
disclosed for the horizonless objects.
|
Xu Ye, Chao-Hui Wang, Shao-Wen Wei
|
2023-06-21T08:22:31Z
|
http://arxiv.org/abs/2306.12097v2
|
# Extracting spinning wormhole energy via magnetic reconnection
###### Abstract
Magnetic reconnection has been extensively shown to be a promising approach to extract spinning black hole energy. In this paper, we focus on extracting spinning wormhole energy via such mechanism. The study shows that it is indeed possible to extract rotating energy from a spinning wormhole with small regularization parameter \(\ell\) of the central singularity. The efficiency and power of the energy extraction are also evaluated. Quite different from the Kerr black hole, the spin of the wormhole can take arbitrarily large values. However, increasing the wormhole spin does not always improve the efficiency and power of energy extraction. By further comparing with the Kerr black hole, we find the wormhole is more efficient when the magnetic reconnection happens within radial distance \(r/M<1\). These studies reveal the features of extracting spinning wormhole energy, and more underlying properties are expected to be disclosed for the horizonless objects.
Wormhole, black hole mimicker, energy extraction, magnetic reconnection pacs: 04.70.Bw, 52.27.Ny, 52.30.Cv
## I Introduction
In astrophysics, highly energetic phenomena such as active galactic nuclei [1; 2], gamma-ray bursts [3; 4], and ultraluminous X-ray binaries [5; 6] are accompanied by enormous energy release. In these phenomena, it is widely believed that the released energy originates from black holes, whose energy budget consists of the gravitational potential energy of matter, the electromagnetic field energy, and the energy of the black hole itself. By energy conservation, the black hole accordingly loses mass during the energy-releasing process. Nevertheless, the black hole mass cannot completely vanish because of the existence of the irreducible mass [7]. Taking the Kerr black hole as an example, its total mass \(M\) can be decomposed into the reducible part \(M_{re}\) and irreducible part \(M_{ir}\)[7]
\[M=M_{re}+M_{ir}, \tag{1}\]
where, the irreducible mass is
\[M_{ir}=M\sqrt{\frac{1}{2}\left(1+\sqrt{1-\frac{a^{2}}{M^{2}}}\right)}. \tag{2}\]
Black hole spin \(a=J/M\) measures the angular momentum \(J\) per unit mass. Obviously, this formula shows that the irreducible mass decreases with the black hole spin \(a\), and tends to the total mass \(M\) for a nonrotating black hole. On the other hand, the black hole entropy can be expressed in terms of the irreducible mass [8; 9; 10]
\[S=4\pi M_{ir}^{2}. \tag{3}\]
The entropy always increases if one extracts black hole energy by decreasing its spin. As a result, the second law of black hole thermodynamics holds as expected. Meanwhile, the extractable energy is given by
\[E_{re}=M-M_{ir}=\left(1-\sqrt{\frac{1}{2}\left(1+\sqrt{1-\frac{a^{2}}{M^{2}}} \right)}\right)M. \tag{4}\]
For an extremal Kerr black hole, the extractable energy can reach up to about 29% of the total mass.
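As a quick numerical check of this figure (an illustrative evaluation, not part of the original derivation), setting \(a=M\) in Eq. (4) gives

\[E_{re}\big|_{a=M}=\left(1-\sqrt{\tfrac{1}{2}}\right)M\approx(1-0.707)\,M\approx 0.29\,M.\]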
In Penrose's landmark paper [11], he originally proposed a thought experiment in which a particle fission process (\(A\to B+C\)) happens in the ergo-sphere of a rotating black hole. The subtlety of this process lies in the fact that a particle can have negative energy as measured by an observer at infinity. Provided that particle B with negative energy is swallowed by the spinning black hole and particle C with positive energy escapes to infinity, more energy will be carried out by particle C. Accordingly, the black hole spin is reduced after absorbing a negative-energy particle. This provides a first process to extract black hole energy. However, a major problem is that the two newborn particles B and C must be separated with a relatively high velocity, even approaching the speed of light [12]. This requirement gives rise to an extremely low expected rate of particle fission. Hence it is rather inefficient to extract black hole rotational energy via such a process.
Soon afterwards, this issue was addressed, and several new mechanisms were put forward, such as superradiant scattering [13], the collisional Penrose process [14; 15], the Blandford-Znajek (BZ) process [16], and the magnetohydrodynamic (MHD) Penrose process [17]. A significant result on the BZ process was that the strongly magnetized BZ mechanism is more effective than the non-magnetized neutrino annihilation process at powering gamma-ray-burst jets [18]. This reveals that the magnetic field is a key to enhancing the extraction efficiency. Actually, around an astrophysical black hole, a magnetic field can be generated by the charged matter of the accretion disk and is believed to be ubiquitous. This was also confirmed by the EHT collaboration through observing the polarized-emission image of M87* [19].
Recently, in high energy astrophysics, magnetic reconnection associated with the acceleration of energetic particles has gained tremendous significance [20]. It is a strong candidate to generate ultra-high energy cosmic rays in magnetically dominated regions. It was noted by Koide and Arai [21] that fast magnetic reconnection redistributes the energy of the plasma. This could lead to a situation where plasma with negative energy falls into the horizon, whereas other plasma escaping from the ergo-region carries more energy away. As a result, black hole energy is extracted. Such an idea was supported by Parfrey [22], who confirmed with a relativistic kinetic simulation that magnetic reconnection releases magnetic energy and converts it into the kinetic energy of the plasma. Very recently, in Ref. [23], Comisso and Asenjo further improved this magnetic reconnection mechanism and calculated the efficiency and power of the energy extraction. The result indicates that it is feasible only for a rapidly spinning Kerr black hole. Furthermore, some other relevant studies have attempted to deal with the issue in various black hole models [24; 25; 26; 27; 28; 29; 30].
The wormhole is a potential alternative to the black hole both in theory and in astronomical observation. A wormhole is an exact solution of the Einstein field equations with non-trivial topology. It can connect either two distant regions of the same universe or two different universes. The first solution was given by Einstein and Rosen [31], while the traversable wormhole was constructed by Morris and Thorne [32]. Nevertheless, the biggest challenge is that the matter supporting the wormhole must be exotic, leading to the violation of the null energy condition. Only recently was it shown that a traversable wormhole can be constructed with ordinary matter [33; 34; 35]. Another significant difference from the black hole is that the wormhole is a singularity-free object. In particular, the gravitational waves and images observed by LIGO and the EHT, respectively, could be well interpreted by wormholes [36; 37]. The relativistic Poynting-Robertson effect was also found to be an effective tool to test wormholes [38; 39]. Furthermore, it is worth investigating other phenomena to disclose whether the wormhole can serve as a replacement for the black hole.
The aim of this paper is to study energy extraction from a spinning wormhole using the magnetic reconnection mechanism. First, we show that, as for the black hole, it is feasible to extract wormhole energy via this process. Then, after calculating the efficiency and power, the results imply that, in certain parameter regions, the wormhole has clear advantages over the black hole. The obtained findings uncover possible features of wormholes regarding energy extraction, and the wormhole can be treated as an alternative to the black hole for extracting energy via magnetic reconnection.
This paper is structured as follows. In Sec. II, we briefly review the spinning wormhole and the corresponding spacetime structure. In Sec. III, we focus on the magnetic reconnection process and give an analytic expression of the energy-at-infinity for the accelerated/decelerated plasmas. Furthermore, we explore the conditions to achieve the energy extraction. The efficiency and power of the mechanism for the wormhole are studied in detail in Sec. IV. The last section, Sec. V, is devoted to the conclusion and discussion of our findings.
## II Rotating wormhole model and geodesics
In this section, we will briefly introduce the wormhole metric proposed by Mazza, Franzin, and Liberati in Ref. [40], and then show the equation of motion of the test particles.
The wormhole model we considered is a rotating counterpart of the static, spherically symmetric Simpson-Visser (SV) one described by the following line element [41]
\[ds^{2}=-\left(1-\frac{2M}{\sqrt{r^{2}+\ell^{2}}}\right)dt^{2}+ \left(1-\frac{2M}{\sqrt{r^{2}+\ell^{2}}}\right)^{-1}dr^{2}+(r^{2}+\ell^{2})(d \theta^{2}+\sin^{2}\theta d\phi^{2}), \tag{5}\]
where \(M\) represents the ADM mass and non-negative \(\ell\) is a parameter responsible for the regularization of the central singularity. Note that the radial coordinate \(r\in(-\infty,+\infty)\) under this wormhole model. Moreover, SV metric is a minimal modification of the Schwarzschild solution that can be restored by taking \(\ell=0\). Since the spacetime structure is completely governed by the parameter \(\ell\), one can acquire a wormhole or a black hole by selecting \(\ell>2M\) or \(\ell<2M\). Obviously, the positive parameter \(\ell\) eliminates the central singularity of the spacetime, and thus we can call it regularization parameter.
Making use of the Newman-Janis procedure [42; 43], a spinning wormhole can be obtained [40]
\[ds^{2}=-\left(1-\frac{2M\sqrt{r^{2}+\ell^{2}}}{\Sigma}\right)dt^{2}+\frac{\Sigma}{\Delta}dr^{2}+\Sigma d\theta^{2}-\frac{4Ma\sin^{2}\theta\sqrt{r^{2}+\ell^{2}}}{\Sigma}dtd\phi+\frac{A\sin^{2}\theta}{\Sigma}d\phi^{2}, \tag{6}\]
with
\[\Sigma=r^{2}+\ell^{2}+a^{2}\cos^{2}\theta,\quad\Delta=r^{2}+\ell ^{2}+a^{2}-2M\sqrt{r^{2}+\ell^{2}}, \tag{7}\] \[A=(r^{2}+\ell^{2}+a^{2})^{2}-\Delta a^{2}\sin^{2}\theta, \tag{8}\]
where \(a\) is the angular momentum of the wormhole per unit mass. Taking the limits \(a\to 0\) or \(\ell\to 0\), it reduces to the SV metric (5) or the Kerr metric, respectively. The metric (6) is symmetric under the reflection transformation \(r\rightarrow-r\) due to the fact that the position \(r=0\) is a throat of the wormhole connecting two distinct universes.
More importantly, the spacetime structure is characterized by the regularization parameter \(\ell\) and wormhole spin \(a\). From \(\Delta=0\), the corresponding horizons are calculated as
\[r_{\pm}=\sqrt{(\rho_{\pm})^{2}-\ell^{2}}, \tag{9}\]
here \(\rho_{\pm}=M\pm\sqrt{M^{2}-a^{2}}\). For a black hole, there exists at least one horizon located at \(r_{+}\), which requires \(\ell<\rho_{+}\). If we further impose \(\ell<\rho_{-}\), the black hole possesses two horizons. On the other hand, a wormhole has no horizon outside its throat, which leads to \(a>M\) or \(\ell>\rho_{+}\). The parameter \(\ell\) is also constrained by using the EHT observation [44].
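The following minimal Python sketch encodes these horizon conditions to classify a point of the (\(a\), \(\ell\)) plane as in Fig. 1; the function name and the sample points are illustrative only.

```python
import numpy as np

def classify_geometry(a, ell, M=1.0):
    """Classify a point of the (a, ell) plane: a black hole needs at least one
    horizon, i.e. a <= M and ell < rho_+ = M + sqrt(M^2 - a^2); otherwise the
    metric describes a (slowly or rapidly spinning) wormhole."""
    if a <= M and ell < M + np.sqrt(M**2 - a**2):
        return "black hole"
    return "rapidly spinning wormhole" if a > M else "slowly spinning wormhole"

# sample points corresponding to the three regions of Fig. 1
for a, ell in [(0.7, 0.5), (0.7, 2.5), (1.5, 0.5)]:
    print(a, ell, classify_geometry(a, ell))
```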
For the sake of clarity, we show the black hole and wormhole regions in the (\(a\), \(\ell\)) plane in Fig. 1. The lower left corner, marked in light blue, is the black hole region with at least one horizon, and the maximal spin bound is \(a/M=1\). The remaining regions correspond to wormholes. They can be slowly spinning (\(a/M<1\)) or rapidly spinning (\(a/M>1\)), marked in light green or light purple, respectively. We will show that the structures in these two regions differ significantly, which influences the energy extraction.
Since the energy extraction via magnetic reconnection happens within the ergo-region, it is necessary to examine the structure of the wormholes. The outer and inner ergo surfaces are determined by \(g_{tt}=0\), which gives
\[r_{erg}^{\pm}=\sqrt{(\rho_{erg}^{\pm})^{2}-\ell^{2}}, \tag{10}\]
with \(\rho_{erg}^{\pm}=M\pm\sqrt{M^{2}-a^{2}\cos^{2}\theta}\). Energy extraction requires the existence of the ergo surface, and thus we must have \(\rho_{erg}^{+}>\ell\) for slowly spinning wormhole and \(\rho_{erg}^{\pm}>\ell\) for rapidly spinning wormhole.
Figure 1: Rapidly spinning Wormholes (RS-WH), slowly spinning Wormholes (SS-WH), and black hole (BH) regions in the (\(a/M\), \(\ell/M\)) plane.
In order to avoid misunderstanding, we exhibit the structure of the ergoregion in the (\(\rho\), \(\theta\), \(\phi\)) space instead of the (\(r\), \(\theta\), \(\phi\)) space in Fig. 2. In Fig. 2(a), the structure of the ergoregion is shown for the slowly spinning wormhole case \(a/M<1\). Although there is a horizon, it hides behind the wormhole throat. It is clear that the ergoregion between the outer ergo surface and the wormhole throat is very narrow. For the rapidly spinning wormhole, the structure is quite different, see Fig. 2(b). The ergoregion bounded by the outer and inner ergo surfaces and the throat is broad. Such a feature potentially indicates high efficiency and power for the energy extraction. It is worth emphasizing that the ergo-region structures for both wormhole cases are significantly different from that of the Kerr black hole. The Penrose process occurring in this ergo-region has been studied in Ref. [45].
After examining the ergoregion for the wormhole, we turn to the motion of the test particle, which is another key for the magnetic reconnection mechanism. Adopting the Hamilton-Jacobi method, one can easily obtain the geodesic equations in a wormhole background [40]
\[\Sigma\frac{dt}{d\tau} =a(\mathcal{L}-a\mathcal{E}\sin^{2}\theta)+\frac{\rho^{2}+a^{2}}{ \Delta}[\mathcal{E}(\rho^{2}+a^{2})-\mathcal{L}a], \tag{11}\] \[\Sigma\frac{dr}{d\tau} =\pm\sqrt{R},\] (12) \[\Sigma\frac{d\theta}{d\tau} =\pm\sqrt{\Theta},\] (13) \[\Sigma\frac{d\phi}{d\tau} =\frac{\mathcal{L}}{\sin^{2}\theta}-a\mathcal{E}+\frac{a}{\Delta }[\mathcal{E}(\rho^{2}+a^{2})-\mathcal{L}a], \tag{14}\]
where
\[\mathcal{R} =[\mathcal{E}(\rho^{2}+a^{2})-\mathcal{L}a]^{2}-\Delta[\mu^{2} \rho^{2}+(\mathcal{L}-a\mathcal{E})^{2}+\mathcal{Q}], \tag{15}\] \[\Theta =\mathcal{Q}-\cos^{2}\theta[a^{2}(\mu^{2}-\mathcal{E}^{2})+\frac{ \mathcal{L}^{2}}{\sin^{2}\theta}],\] (16) \[\mathcal{Q} =u_{\theta}^{2}+\cos^{2}\theta[a^{2}(1-\mathcal{E})^{2}-\frac{ \mathcal{L}}{\sin^{2}\theta}]. \tag{17}\]
Here \(\mu^{2}=0\) and \(1\) for the null and timelike geodesics, respectively. Constants \(\mathcal{E}\), \(\mathcal{L}\), and \(\mathcal{Q}\) are the energy, angular momentum, and Carter constant for the test particle along each geodesic.
There are two characterized circular orbits playing important role in magnetic reconnection. One is the circular photon orbit with \(\mu^{2}=0\) satisfying the following conditions
\[\mathcal{R}(r_{ph})=0,\quad\frac{d\mathcal{R}(r)}{dr}\bigg{|}_{r=r_{ph}}=0. \tag{18}\]
Substituting (15) into (18), we solve
\[r_{ph}=\sqrt{4M^{2}\left[1+\cos\left(\frac{2}{3}\arccos(\mp\frac{a}{M}) \right)\right]^{2}-\ell^{2}}. \tag{19}\]
Figure 2: The structure of ergo-region in (\(\rho\), \(\theta\), \(\phi\)) plane. (a) Slowly spinning wormhole with \(a/M<1\). (b) Rapidly spinning wormhole with \(a/M>1\). \(E_{out}\) and \(E_{in}\) denote the outer and inner ergo-surfaces, respectively.
Another is the innermost stable circular orbit for the test particle with \(\mu^{2}=1\). The corresponding conditions are
\[\mathcal{R}(r_{ISCO})=0,\quad\frac{d\mathcal{R}(r)}{dr}\bigg{|}_{r=r_{ISCO}}=0, \quad\frac{d^{2}\mathcal{R}(r)}{dr^{2}}\bigg{|}_{r=r_{ISCO}}=0. \tag{20}\]
Solving them, one has
\[r_{ISCO}=\sqrt{\left[3M+Z_{2}M\mp\left[(3-Z_{1})(3+Z_{1}+2Z_{2})\right]^{1/2}M \right]^{2}-\ell^{2}}, \tag{21}\]
with
\[Z_{1} =1+(1-\frac{a^{2}}{M^{2}})^{1/3}\left[(1+\frac{a}{M})^{1/3}+(1- \frac{a}{M})^{1/3}\right], \tag{22}\] \[Z_{2} =(3\frac{a^{2}}{M^{2}}+Z_{1}^{2})^{1/2}. \tag{23}\]
Moreover, the Keplerian angular velocity of circular orbits on the equatorial plane is \(\Omega_{K}\equiv\frac{d\phi}{d\tau}/\frac{dt}{d\tau}\). By making use of Eqs. (11) and (14), we have
\[\frac{dg_{\phi\phi}}{dr}\Omega_{K}^{2}+2\frac{dg_{t\phi}}{dr}\Omega_{K}+\frac {dg_{tt}}{dr}=0. \tag{24}\]
Solving it, the Keplerian angular velocity reads
\[\Omega_{K\pm}=-\frac{aM\mp\sqrt{M\left(r^{2}+\ell^{2}\right)^{3/2}}}{\left(r^ {2}+\ell^{2}\right)^{3/2}-a^{2}M}. \tag{25}\]
The upper and lower signs correspond to the corotating and counterrotating orbits, respectively. Taking the limit \(\ell\to 0\), this result recovers that of the Kerr black hole.
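A short numerical sketch of Eq. (25), with a sanity check that the \(\ell\to 0\) limit reproduces the familiar Kerr value \(\Omega=\sqrt{M}/(r^{3/2}+a\sqrt{M})\) for corotating orbits; function and variable names are illustrative.

```python
import numpy as np

def omega_K(r, a, ell, M=1.0, corotating=True):
    """Keplerian angular velocity of equatorial circular orbits, Eq. (25);
    the upper (lower) sign corresponds to corotating (counterrotating) orbits."""
    rho3 = (r**2 + ell**2)**1.5
    sign = 1.0 if corotating else -1.0
    return -(a * M - sign * np.sqrt(M * rho3)) / (rho3 - a**2 * M)

# sanity check: the ell -> 0 limit reproduces the Kerr value sqrt(M)/(r^{3/2} + a sqrt(M))
r, a, M = 6.0, 0.9, 1.0
assert np.isclose(omega_K(r, a, 0.0, M), np.sqrt(M) / (r**1.5 + a * np.sqrt(M)))
```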
## III Energy extraction regions
In the magnetic reconnection process, a current sheet appears owing to the change of direction of the magnetic field perpendicular to the equatorial plane. When this current sheet reaches a critical aspect ratio, it is destroyed by the plasmoid instability [46; 47; 48]. Then, the formation of plasmoids/flux ropes drives fast magnetic reconnection and converts the available magnetic energy into the kinetic energy of the plasma [49; 50]. After the occurrence of the magnetic reconnection, one part of the plasma is accelerated and another part is decelerated. If the decelerated part with negative energy-at-infinity is swallowed by the black hole, while the accelerated part with positive energy escapes to radial infinity, net energy will be extracted from the black hole. Thus, such a magnetic reconnection process provides a potential mechanism to extract black hole energy. In this section, we shall determine the parameter range in which this process can be realized.
To analyze the process of magnetic reconnection, we begin by exploring the energy-at-infinity of the accelerated and decelerated plasma. For this purpose, we adopt the locally nonrotating frame, the zero-angular-momentum-observer (ZAMO) frame, in which the observers do not feel any angular velocity related to the spinning wormhole.
Denoting \((\hat{t},\hat{x}^{1},\hat{x}^{2},\hat{x}^{3})\) and \((t,x^{1},x^{2},x^{3})\) as the coordinates in ZAMO frame and Boyer-Lindquist frame, the transformation relations between them will be
\[d\hat{t}=\alpha dt,\quad d\hat{x}^{i}=\sqrt{g_{ii}}dx^{i}-\alpha\beta^{i}dt, \tag{26}\]
where \(\alpha\) and \(\beta^{i}\) are, respectively, the lapse function and shift vector. In terms of the metric components, they are expressed as
\[\alpha=\sqrt{-g_{tt}+\frac{g_{\phi t}^{2}}{g_{\phi\phi}}},\quad\beta^{\phi}= \frac{\sqrt{g_{\phi\phi}}}{\alpha}\omega^{\phi}. \tag{27}\]
Here \(\omega^{\phi}=-g_{\phi t}/g_{\phi\phi}\) is the angular velocity of the frame dragging.
Employing (26), the line element (6) in ZAMO frame becomes
\[ds^{2}=-d\hat{t}^{2}+\sum_{i=1}^{3}(d\hat{x}^{i})^{2}=\eta_{\mu\nu}d\hat{x}^{\mu}d\hat{x}^{\nu}. \tag{28}\]
It is clear that the spacetime in this ZAMO frame is locally Minkowskian for observers. Furthermore, it is convenient to derive the relation of the vector \(\psi\) in the ZAMO frame \((\hat{\psi}^{0},\hat{\psi}^{1},\hat{\psi}^{2},\hat{\psi}^{3})\) and Boyer-Lindquist coordinate \((\psi^{0},\psi^{1},\psi^{2},\psi^{3})\)
\[\hat{\psi}^{0} =\alpha\psi^{0}, \hat{\psi}^{i} =\sqrt{g_{ii}}\psi^{i}-\alpha\beta^{i}\psi^{0}, \tag{29}\] \[\hat{\psi}_{0} =\frac{\psi_{0}}{\alpha}+\sum_{i=1}^{3}\frac{\beta^{i}}{\sqrt{g_ {ii}}}\psi_{i}, \hat{\psi}_{i} =\frac{\psi_{i}}{\sqrt{g_{ii}}}. \tag{30}\]
Next, we would like to evaluate the capability of magnetic reconnection to extract wormhole energy. This can be achieved by examining the formation of negative and positive energy of decelerated and accelerated plasmas at infinity. Under the one-fluid approximation, the stress-energy tensor can be formulated as
\[T^{\mu\nu}=pg^{\mu\nu}+\omega U^{\mu}U^{\nu}+F^{\mu}_{\sigma}F^{\nu\sigma}- \frac{1}{4}g^{\mu\nu}F^{\alpha\beta}F_{\alpha\beta}, \tag{31}\]
where \(p\), \(\omega\), \(U^{\mu}\), and \(F^{\mu\nu}\) are the pressure, enthalpy density, four-velocity, and electromagnetic field tensor of the plasma.
The energy-at-infinity is
\[e^{\infty}=-\alpha g_{\mu 0}T^{\mu 0}=\alpha\hat{e}+\alpha\beta^{\phi}\hat{P}^ {\phi}, \tag{32}\]
with the total energy density \(\hat{e}\) and the azimuthal component of the momentum density \(\hat{P}^{\phi}\) given by
\[\hat{e}=\omega\hat{\gamma}^{2}-p+\frac{\hat{B}^{2}+\hat{E}^{2}}{2},\quad\hat{ P}^{\phi}=\omega\hat{\gamma}^{2}\hat{e}^{\phi}+(\hat{B}\times\hat{E})^{\phi}. \tag{33}\]
The Lorentz factor is \(\hat{\gamma}=\hat{U}^{0}=1/\sqrt{1-\sum_{i=1}^{3}(\hat{v}^{i})^{2}}\). The components of the magnetic and electric fields are \(\hat{B}^{i}=\epsilon^{ijk}\hat{F}_{jk}/2\) and \(\hat{E}^{i}=\eta^{ij}\hat{F}_{j0}=\hat{F}_{i0}\). The symbol \(\hat{v}^{\phi}\) denotes the azimuthal component of the outflow velocity of the plasma for a ZAMO observer.
Further, the energy-at-infinity can be divided into the hydrodynamic and electromagnetic components such that \(e^{\infty}=e^{\infty}_{hyd}+e^{\infty}_{em}\) with
\[e^{\infty}_{hyd}=\alpha\hat{e}_{hyd}+\alpha\beta^{\phi}\omega\hat{\gamma}^{2} \hat{v}^{\phi},\quad e^{\infty}_{em}=\alpha\hat{e}_{em}+\alpha\beta^{\phi}( \hat{B}\times\hat{E})_{\phi}, \tag{34}\]
where \(\hat{e}_{hyd}=\omega\hat{\gamma}^{2}-p\) and \(\hat{e}_{em}=(\hat{E}^{2}+\hat{B}^{2})/2\) denote the hydrodynamic and electromagnetic energy densities observed in the ZAMO frame. Since the electromagnetic energy \(e^{\infty}_{em}\) is converted into the kinetic energy of the plasma, the remaining electromagnetic energy is negligible and \(e^{\infty}=e^{\infty}_{hyd}\). Supposing that the plasma is incompressible and adiabatic, the energy-at-infinity density takes the form [21]
\[e^{\infty}=\alpha\omega\hat{\gamma}(1+\beta^{\phi}\hat{v}^{\phi})-\frac{ \alpha p}{\hat{\gamma}}. \tag{35}\]
By introducing the local rest frame for the bulk plasmas \(x^{\mu\prime}=(x^{0\prime},x^{1\prime},x^{2\prime},x^{3\prime})\), which rotates with Keplerian angular velocity \(\Omega_{k}\) in the equatorial plane, the localized reconnection process can be assessed. For convenience, one can choose the directions of \(x^{1\prime}\) and \(x^{3\prime}\) such that they are parallel to radial direction \(x^{1}=r\) and azimuthal direction \(x^{3}=\phi\), respectively.
Using Eq. (29), the corotating Keplerian velocity in the ZAMO frame can be expressed as
\[\hat{v}_{K}=\frac{d\hat{x}^{\phi}}{d\hat{x}^{t}}=\frac{\sqrt{g_{\phi\phi}}dx^{ \phi}-\alpha\beta^{\phi}dx^{t}}{\alpha dx^{t}}=\frac{\sqrt{g_{\phi\phi}}}{ \alpha}\Omega_{K}-\beta^{\phi}. \tag{36}\]
Here \(\Omega_{K}\) is the Keplerian angular velocity in Boyer-Lindquist coordinate given in Eq. (25). From the previous formula, the Lorentz factor \(\hat{\gamma}_{K}=1/\sqrt{1-\hat{v}_{K}^{2}}\) in ZAMO frame is obtained.
According to "relativistic adiabatic incompressible ball method", the hydrodynamic energy associated with accelerated/decelerated plasma at the infinity per enthalpy is derived as follows [23]
\[\epsilon_{\pm}^{\infty}=\alpha\hat{\gamma}_{K}\left((1+\beta^{\phi}\hat{v}_{K}) \sqrt{1+\sigma_{0}}\pm\cos\xi(\beta^{\phi}+\hat{v}_{K})\sqrt{\sigma_{0}}-\frac {\sqrt{1+\sigma_{0}}\mp\cos\xi\hat{v}_{K}\sqrt{\sigma_{0}}}{4\hat{\gamma}_{K}^ {2}(1+\sigma_{0}-\cos^{2}\xi\hat{v}_{K}^{2}\sigma_{0})}\right), \tag{37}\]
where \(\pm\) corresponds to the accelerated and decelerated plasma. Parameters \(\sigma_{0}\) and \(\xi\) are, respectively, the plasma magnetization and orientation angle between the magnetic field lines and the azimuthal direction in the equatorial plane. From Eq. (37), it is clear that the energy-at-infinity per enthalpy of the accelerated/decelerated plasma relies on wormhole parameters \((a,M,\ell)\) and magnetic reconnection configuration \((\sigma_{0},\,\xi,\,r_{X})\), where \(r_{X}\) refers to the position of the reconnection point and we denote it as the X-point.
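For concreteness, a small Python sketch that evaluates Eq. (37) on the equatorial plane by assembling \(\alpha\), \(\beta^{\phi}\), \(\hat{v}_{K}\), and \(\hat{\gamma}_{K}\) from Eqs. (25)-(27) and (36); the function name and the sample parameters are illustrative assumptions.

```python
import numpy as np

def epsilon_infinity(r, a, ell, sigma0, xi, M=1.0):
    """Energy-at-infinity per enthalpy of the accelerated (+) and decelerated (-)
    plasma, Eq. (37), for a reconnection X-point at radius r on the equatorial plane."""
    rho = np.sqrt(r**2 + ell**2)
    Sigma = rho**2                                              # theta = pi/2
    Delta = rho**2 + a**2 - 2.0 * M * rho
    A = (rho**2 + a**2)**2 - Delta * a**2
    g_tt, g_tp, g_pp = -(1.0 - 2.0*M*rho/Sigma), -2.0*M*a*rho/Sigma, A/Sigma
    alpha = np.sqrt(-g_tt + g_tp**2 / g_pp)                     # lapse function, Eq. (27)
    beta = np.sqrt(g_pp) * (-g_tp / g_pp) / alpha               # shift beta^phi, Eq. (27)
    OmegaK = -(a*M - np.sqrt(M)*rho**1.5) / (rho**3 - a**2*M)   # corotating orbit, Eq. (25)
    vK = np.sqrt(g_pp) * OmegaK / alpha - beta                  # Keplerian velocity in the ZAMO frame, Eq. (36)
    gK = 1.0 / np.sqrt(1.0 - vK**2)
    s, c = np.sqrt(sigma0), np.cos(xi)
    denom = 4.0 * gK**2 * (1.0 + sigma0 - c**2 * vK**2 * sigma0)
    eps = {}
    for pm in (+1, -1):
        eps[pm] = alpha * gK * ((1.0 + beta*vK) * np.sqrt(1.0 + sigma0)
                                + pm * c * (beta + vK) * s
                                - (np.sqrt(1.0 + sigma0) - pm * c * vK * s) / denom)
    return eps[+1], eps[-1]

# illustrative point with rapid spin and small ell and r_X:
ep, em = epsilon_infinity(r=0.5, a=1.5, ell=0.5, sigma0=100.0, xi=np.pi/12)
# ep > 0 and em < 0 here, i.e. condition (38) is satisfied and extraction is possible
```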
If the decelerated plasmas acquire negative energy as measured at infinity, then the accelerated plasmas will have more positive energy compared to their rest mass and thermal energy. As a result, the rotating energy of the wormhole shall be extracted via magnetic reconnection process. Considering it, the corresponding conditions for the allowed energy extraction are
\[\epsilon_{-}^{\infty}<0,\quad\text{and}\quad\Delta\epsilon_{+}^{\infty}= \epsilon_{+}^{\infty}-\left(1-\frac{\Gamma}{\Gamma-1}\frac{p}{\omega}\right)= \epsilon_{+}^{\infty}>0, \tag{38}\]
where the polytropic index is \(\Gamma=4/3\) for the relativistically hot plasmas.
Now we focus on the above energy extraction conditions (38) for the spinning wormhole. For the slowly spinning case (\(a=0.7M\)), we show \(\epsilon_{+}^{\infty}\) and \(\epsilon_{-}^{\infty}\) as functions of the regularization parameter \(\ell\) in Figs. 3(a) and 3(b), where they present significantly different behaviors. The green, red, blue, black, and purple curves are for the X-point positions \(r_{X}/M=\)0, 0.5, 1.0, 1.5, and 2.0, respectively. Obviously, \(\epsilon_{+}^{\infty}\), shown in Fig. 3(a), always decreases gradually with the regularization parameter \(\ell\), and importantly, the condition \(\epsilon_{+}^{\infty}>0\) is satisfied for the accelerated plasmas. In order to extract more rotating energy, one needs to tune the regularization parameter \(\ell\) for a high \(\epsilon_{+}^{\infty}\), which can be achieved by reducing \(\ell\) and shifting the magnetic reconnection location toward the throat of the wormhole. Nevertheless, there always exists a lower bound on \(\ell\), where \(\epsilon_{+}^{\infty}\) diverges. \(\epsilon_{-}^{\infty}\) is another important factor in this process, and we plot it in Fig. 3(b). Similar to \(\epsilon_{+}^{\infty}\), \(\epsilon_{-}^{\infty}\) is also bounded by \(\ell\), though with lower values. Quite differently, each curve exhibits a local minimum, which is shifted towards small \(\ell\) by \(r_{X}\). Despite these behaviors, we find that \(\epsilon_{-}^{\infty}\) is always positive for arbitrary values of \(\ell\) and \(r_{X}\), violating condition (38). So it is impossible to extract energy from a slowly spinning wormhole. This result also holds for a slowly spinning Kerr black hole [23]. One notable difference between them is that the Kerr black hole is bounded by the maximal spin \(a/M=1\), while the spin is unbounded for the wormhole. So it is worthwhile to consider such a process for a rapidly spinning wormhole.
For this purpose, we consider the high-spin case \(a/M=1.5\) and plot \(\epsilon_{\pm}^{\infty}\) in Figs. 3(c) and 3(d). It is clear that \(\epsilon_{+}^{\infty}\) still decreases with \(\ell\), while the local minimum of \(\epsilon_{-}^{\infty}\) disappears. Significantly, \(\epsilon_{-}^{\infty}\) can be negative at small \(\ell\) and \(r_{X}\), which strongly indicates that it is feasible to extract energy from a spinning wormhole via magnetic reconnection. Simultaneously, \(\epsilon_{+}^{\infty}\) takes a larger positive value for small \(\ell\) and \(r_{X}\), making the energy extraction easier to achieve.
Next, we turn to consider the possible parameter regions of \(a\) and \(\ell\) allowed for the energy extraction. Noting that \(\epsilon_{+}^{\infty}\) is always positive, we only focus on the condition with \(\epsilon_{-}^{\infty}<0\) by varying \(\sigma_{0}\), \(\xi\), and \(r_{X}\).
After simple algebra, we show the shaded negative-energy regions in Fig. 4, with the curves denoting \(\epsilon_{-}^{\infty}=0\). The magnetic reconnection locations are set to \(r_{X}=0.5M\) in Figs. 4(a) and 4(b), and \(r_{X}=1.5M\) in Figs. 4(c) and 4(d). It is quite obvious that the shaded regions shrink with \(r_{X}\); for example, the maximal \(\ell\) decreases from \(1.9M\) to \(1.2M\). Setting \(\xi=\pi/12\), we observe from Figs. 4(a) and 4(c) that the shaded regions enlarge with the magnetization \(\sigma_{0}\). On the other hand, taking \(\sigma_{0}=100\), the negative-energy regions shrink with \(\xi\), as shown in Figs. 4(b) and 4(d). In summary, the parameter region allowed for the energy extraction enlarges with \(\sigma_{0}\) while it shrinks with \(\xi\) and \(r_{X}\).
## IV Energy extraction efficiency and power

After the reconnection, the accelerated plasmas escape to infinity. Consequently, the proportion of positive energy-at-infinity after redistribution can be used to scale the efficiency of energy extraction. Following Ref. [23], the efficiency of the plasma energization process via magnetic reconnection is defined by
\[\eta=\frac{\epsilon_{+}^{\infty}}{\epsilon_{+}^{\infty}+\epsilon_{-}^{\infty}}. \tag{39}\]
To achieve energy extraction, we must have \(\eta>1\). The lower bound of the efficiency \(\eta=1\) corresponds to the vanishing negative energy-at-infinity, i.e. \(\epsilon_{-}^{\infty}=0\).
Fixing \(\sigma_{0}=100\) and \(\xi=\pi/20\), the efficiency \(\eta\) is plotted as a function of \(\ell\) in Fig. 5 for \(r_{X}/M=\)0, 0.4, 0.8, and 1.2. The spin parameter \(a/M\) is set to 1.15, 1.18, 1.22, and 1.28 from top to bottom. It is notable that \(\eta\) decreases with the wormhole spin, indicating that a high spin does not necessarily provide an advantage for energy extraction. For small \(r_{X}/M=0\) and 0.4, shown in Figs. 5(a) and 5(b) respectively, a peak emerges for each \(a/M\). Its maximum value is significantly reduced, from about 6 to 2.5, when \(a/M\) increases from 1.15 to 1.28. Meanwhile, the location of the peak is shifted towards small \(\ell/M\). Further increasing \(r_{X}/M\) to 0.8 and 1.2 in Figs. 5(c) and 5(d), the peak tends to disappear and \(\eta\) becomes a monotonically decreasing function of \(\ell/M\). The maximum \(\eta\) decreases below 1.6 for \(r_{X}/M\)=1.2, and it approaches even lower values as \(r_{X}/M\) increases further. As a result, in order to reach a high efficiency of energy extraction, increasing the spin is not necessarily a good choice; instead, reducing \(r_{X}/M\) is an alternative.
Besides the efficiency, another important measure of the magnetic reconnection mechanism is the energy absorption power, which depends on the negative energy-at-infinity of the decelerated plasma swallowed by the wormhole per unit time. According to energy conservation, a high absorption power potentially results in a high rate of energy extraction by the escaping plasma.
The power of energy extraction per enthalpy via magnetic reconnection from wormholes can also be defined by [23]
\[P_{extr}=-\epsilon_{-}^{\infty}A_{in}U_{in}, \tag{40}\]
where \(U_{in}=\mathcal{O}(10^{-1})\) and \(\mathcal{O}(10^{-2})\) for the collisionless and collisional regimes [51], respectively. For spinning wormholes, \(A_{in}\) is the cross-sectional area of the inflowing plasma and is estimated as
\[A_{in}\sim\begin{cases}(r_{erg}^{+})^{2}-r_{ph}^{2},&\quad\text{with photon orbit},\\ (r_{erg}^{+})^{2},&\quad\text{without photon orbit}.\end{cases} \tag{41}\]
Here \(r_{erg}^{+}\) and \(r_{ph}\) are the radii of the outer ergo surface and the photon orbit of the wormhole, given previously in Eqs. (10) and (19).
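A minimal sketch combining Eqs. (40), (41), (10), and (19) on the equatorial plane; the helper name and the treatment of the case without a photon orbit (negative radicand or \(a>M\)) are assumptions for illustration.

```python
import numpy as np

def extraction_power(eps_minus, a, ell, M=1.0, U_in=0.1):
    """Power of energy extraction per enthalpy, Eq. (40), using the equatorial
    cross-sectional area estimate of Eq. (41)."""
    r_erg2 = (2.0 * M)**2 - ell**2        # outer ergo-surface radius squared, Eq. (10) at theta = pi/2
    r_ph2 = -1.0                          # corotating photon orbit, Eq. (19); exists only if radicand > 0
    if a <= M:
        r_ph2 = 4.0 * M**2 * (1.0 + np.cos(2.0/3.0 * np.arccos(-a / M)))**2 - ell**2
    A_in = r_erg2 - r_ph2 if r_ph2 > 0 else r_erg2
    return -eps_minus * A_in * U_in

# example of Fig. 6 (ell = 1.4M): a = 0.93M has a photon orbit while a = 0.96M does not,
# so the two cases use different cross-sectional areas A_in.
```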
We first consider the power for the case with spin \(a/M<1\). As typical examples, we take \(a=0.93M\) and \(0.96M\); the former possesses a photon orbit while the latter does not. This significant difference produces different scenarios for the power, see Figs. 6(a) and 6(b). Although the ending point is at \(r_{erg}^{+}\) in both cases, the most obvious difference is that the starting point is located at \(r_{ph}\) for \(a=0.93M\), while it is at the wormhole throat for the case without a photon orbit. Nevertheless, we observe that increasing the magnetization \(\sigma_{0}\) raises the extraction power in both scenarios, as expected. For each given \(\sigma_{0}\), the maximum value of the power remains almost the same for \(a=0.93M\) and \(0.96M\).
Another interesting case is when the wormhole spin exceeds the maximal bound of the Kerr black hole, \(a/M>1\), which is a unique feature of the wormhole. For this purpose, we take \(a/M=\)1.1, 1.3, 1.5, and 1.8, and show the power as a function of \(\ell\) in Fig. 7. For different \(r_{X}/M=\) 0, 0.4, 0.8, and 1.2, we see that the power \(P_{extr}\) decreases with \(\ell\). Increasing the wormhole spin also reduces the power. Thus we can conclude that the farther the
Figure 6: Power of magnetic reconnection mechanism with \(\ell/M=1.4\). (a) Spinning wormhole with photon orbit. The spin \(a=0.93M\) and magnetization \(\sigma_{0}\)=10, \(10^{2}\), \(10^{3}\), \(10^{4}\), and \(10^{5}\) from bottom to top. The left and right dashed vertical lines stand for the position of the photon orbit and outer ergo-surface, respectively. (b) Spinning wormhole without photon orbit. The spin \(a=0.96M\) and magnetization \(\sigma_{0}\)=10, \(10^{2}\), \(10^{3}\), \(10^{4}\), and \(10^{5}\) from bottom to top. Here we take \(\xi=0\) and \(U_{in}=0.1\) for the collisionless regime.
X-point is from the throat of a rapidly spinning wormhole, the lower the power. It is worth noting that the power is not well defined when simultaneously taking \(r_{X}/M=0\) and \(\ell\to 0\); see the divergent behavior shown in Fig. 7(a).
Before ending this section, we would like to make a comparison between the Kerr black hole and the wormhole. By setting \(\ell\to 0\), the wormhole model recovers the Kerr black hole. Let us examine the efficiency and power for the Kerr black hole, which are shown in Figs. 8(a) and 8(c) by taking \(U_{in}=0.1\), \(\sigma_{0}=100\), and \(\xi=\pi/12\). A significant result is that both the efficiency and power reach their maximal values for the extremal black hole with spin \(a/M=1\). In order to examine whether the wormhole has an advantage over the Kerr black hole, we define two ratios of the efficiency and power
\[R_{\eta}=\frac{\eta}{\lim\limits_{a,\;r_{X}\to M}\eta_{Kerr}},\quad R_{p}= \frac{P_{extr}}{\lim\limits_{a,\;r_{X}\to M}P_{Kerr}}, \tag{42}\]
where the denominators denote the efficiency and power of extremal Kerr black holes, respectively. If they are above one, the wormhole would be more efficient than the Kerr black hole to extract energy via the magnetic reconnection mechanism. Otherwise, the black hole will dominate.
These two ratios are plotted as functions of \(r_{X}/M\) in Figs. 8(b) and 8(d) with \(\ell/M=0.5\). The efficiency ratio \(R_{\eta}\) exhibits a non-monotonic behavior as the wormhole spin \(a/M\) varies from 1.15 to 1.28, with peaks near \(r_{X}/M=0.4\). However, the ratio \(R_{p}\) gradually decreases with \(r_{X}/M\) for \(a/M=\)1.1-1.8. We also observe that increasing the wormhole spin reduces both ratios \(R_{\eta}\) and \(R_{p}\), which is consistent with our result above that continuously increasing the spin is not a better approach to extract energy. Moreover, from Figs. 8(b) and 8(d), one can see that both ratios exceed one when \(r_{X}/M\) is smaller than certain values approximately near \(r_{X}/M=1\). Therefore, the wormhole has advantages in extracting energy via magnetic reconnection only for small X-point radii.
## V Conclusions
In this paper, we mainly focus on the energy extraction via the magnetic reconnection mechanism from spinning wormholes characterized by spin \(a/M\) and regularization parameter \(\ell\).
At first, we pointed out the wormhole region we concerned in the (\(a\), \(\ell\)) plane. Such region must satisfy \(a/M>1\) or \(\ell>\rho_{+}\). Considering that a Kerr black hole is always bounded by its maximal spin \(a/M=1\), the spinning wormholes are divided into two cases, the slowly spinning one with \(a/M<1\) and the rapidly spinning one with \(a/M>1\). These two cases have different spacetime structures. For example, they present distinguished ergo-regions, see Fig. 2. The former wormhole only has one ergo surface outside the throat, while the latter has two.
We also showed that for some slowly spinning wormholes, both \(\Delta\epsilon_{+}^{\infty}\) and \(\epsilon_{-}^{\infty}\) of accelerated and decelerated plasmas are positive, making the energy extraction impossible. Such result is quite similar to the Kerr black hole with lower spin. However, for a rapidly spinning wormhole, the conditions \(\epsilon_{+}^{\infty}>0\) and \(\epsilon_{-}^{\infty}<0\) can be satisfied simultaneously for small \(\ell/M\). Hence, this fact indicates that the spinning wormhole can act as a source for energy extraction via the magnetic reconnection process.
In order to implement the energy extraction, we exhibited the allowed parameter region in the (\(a\), \(\ell\)) plane. It is obvious that both too large and too small wormhole spins are unfavorable for energy extraction, and the regularization parameter \(\ell\) is also bounded. In addition, the parameter region shrinks with the location of the X-point and the orientation angle \(\xi\), while it enlarges with the magnetization \(\sigma_{0}\), as expected.
After examining the possibility of energy extraction, we calculated the efficiency \(\eta\) and power of the magnetic
Figure 8: (a) Energy extracting efficiency for the Kerr black holes. Black hole spin \(a/M=\)0.93, 0.96, 0.99, 1.0 from bottom to top. (b) Efficiency ratio of wormhole with respect to the maximal Kerr black hole efficiency. Spin \(a/M=\)1.15, 1.18, 1.22, 1.28 from top to bottom. (c) Energy extracting power for the Kerr black hole. Black hole spin \(a/M=\)0.93, 0.96, 0.99, 1.0 from bottom to top. (d) Power ratio of wormhole with respect to the maximal Kerr black hole power. Spin \(a/M=\)1.1, 1.3, 1.5, 1.8 from top to bottom. Other parameters are set to \(U_{in}=0.1\), \(\sigma_{0}=100\), and \(\xi=\pi/12\).
reconnection process. If such a process occurs near the wormhole throat, the efficiency always has a peak for each given wormhole spin. The peak disappears when the magnetic reconnection process occurs far away from the throat, and the maximal efficiency is shifted to \(\ell/M=0\). The power behaves quite differently for the slowly and rapidly spinning wormholes because the cross-sectional area takes different forms with or without a circular photon orbit. The numerical results indicate that the farther the X-point is from the throat of a rapidly spinning wormhole, the lower the power.
We also compared our results with those of an extremal Kerr black hole by defining the efficiency and power ratios. We observed peaks in the efficiency ratio, while the power ratio decreases monotonically with the location of the X-point. Both results show that the wormhole dominates the energy extraction for X-point locations \(r_{X}/M<1\). This is the main advantage of the spinning wormhole over the Kerr black hole.
In summary, our study confirms the feasibility of extracting energy from the wormhole via magnetic reconnection for appropriate parameters \(a/M\) and \(\ell/M\). The study of the efficiency and power also reveals that the wormhole indeed has advantages, especially when the location of the X-point is close to the wormhole throat. These results shed light on the energy extraction via magnetic reconnection from horizonless objects.
## Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grants No. 12075103, No. 12247101).
|
2302.14111
|
First observation of the $β$3$α$p decay of $^{13}\mathrm{O}$
via $β$-delayed charged-particle spectroscopy
|
Background: The $\beta$-delayed proton-decay of $^{13}\mathrm{O}$ has
previously been studied, but the direct observation of $\beta$-delayed
$\alpha$+$\alpha$+$\alpha$+p decay has not been reported. Purpose: Observing
rare 3$\alpha$+p events from the decay of excited states in
$^{13}\mathrm{N}^{\star}$ allows for a sensitive probe of exotic
highly-clustered configurations in $^{13}$N. Method: To measure the low-energy
products following $\beta$-delayed 3$\alpha$p-decay, the TexAT Time Projection
Chamber was employed using the one-at-a-time $\beta$-delayed charged-particle
spectroscopy technique at the Cyclotron Institute, Texas A&M University.
Results: A total of $1.9 \times 10^{5}$ $^{13}\mathrm{O}$ implantations were
made inside the TexAT Time Projection Chamber. 149 3$\alpha$+p events were
observed yielding a $\beta$-delayed 3$\alpha+p$ branching ratio of 0.078(6)%.
Conclusion: Four previously unknown $\alpha$-decaying states were observed, one
with a strong $^{9}\mathrm{B(g.s)}+\alpha$ characteristic at 11.3 MeV, one with
a $^{9}\mathrm{B}(\frac{1}{2}^{+})+\alpha$ nature at 12.4 MeV, and another two
that are dominated by $^{9}\mathrm{B}({\frac{5}{2}}^{+})+\alpha$ at 13.1 and
13.7 MeV. Population of the $\frac{1}{2}^{+}$ state in $^{9}\mathrm{B}$ has
been unambiguously seen, cementing the predicted existence of the mirror-state
based on the states observed in $^{9}\mathrm{Be}$.
|
Jack Bishop, G. V. Rogachev, S. Ahn, M. Barbui, S. M. Cha, E. Harris, C. Hunt, C. H. Kim, D. Kim, S. H. Kim, E. Koshchiy, Z. Luo, C. Park, C. E. Parker, E. C. Pollacco, B. T. Roeder, M. Roosa, A. Saastamoinen, D. P. Scriven
|
2023-02-27T19:51:23Z
|
http://arxiv.org/abs/2302.14111v2
|
First observation of the \(\beta\)3\(\alpha\)p decay of \({}^{13}\)O via \(\beta\)-delayed charged-particle spectroscopy
###### Abstract
**Background:** The \(\beta\)-delayed proton-decay of \({}^{13}\)O has previously been studied, but the direct observation of \(\beta\)-delayed 3\(\alpha\)p decay has not been reported.
**Purpose:** Rare 3\(\alpha\)p events from the decay of excited states in \({}^{13}\)N\({}^{*}\) provide a sensitive probe of cluster configurations in \({}^{13}\)N.
**Method:** To measure the low-energy products following \(\beta\)-delayed 3\(\alpha\)p-decay, the TexAT Time Projection Chamber was employed using the one-at-a-time \(\beta\)-delayed charged-particle spectroscopy technique at the Cyclotron Institute, Texas A&M University.
**Results:** A total of \(1.9\times 10^{5}\)\({}^{13}\)O implantations were made inside the TexAT Time Projection Chamber. 149 3\(\alpha\)p events were observed yielding a \(\beta\)-delayed 3\(\alpha\)p branching ratio of 0.078(6)%.
**Conclusion:** Four previously unknown \(\alpha\)-decaying excited states were observed in \({}^{13}\)N at 11.3 MeV, 12.4 MeV, 13.1 MeV and 13.7 MeV and the decay modes for these states were established. We demonstrate that clustering must dominate the structure of these states to exhibit the observed decay branching ratios.
## I Introduction
Exotic neutron-deficient nuclei provide an excellent opportunity to explore new decay modes. Large \(\beta\)-decay Q-values make it possible to populate proton- or \(\alpha\)-unbound states in daughter nuclei, paving the way for observation of \(\beta\)-delayed charged-particle emissions. Reviews of advances in \(\beta\)-delayed charged-particle emission studies can be found in Ref. [1; 2], where \(\beta\)-delayed one, two, and three proton decays as well as \(\alpha\)p/p\(\alpha\) decays are discussed. Here we report on a new decay mode that has not been observed before, the \(\beta\)3\(\alpha\)p. Not only do we identify these exotic decays of \({}^{13}\)O, but we were also able to use it to obtain information on cluster structure in excited states of the daughter nucleus, \({}^{13}\)N.
Clustering phenomena are prevalent in light nuclei and are an excellent test ground for understanding few-body systems that are theoretically accessible. These clustering phenomena have been well-studied in \(\alpha\)-conjugate nuclei. Much less experimental information is available for N\(\neq\)Z nuclei. Yet, theoretical studies (e.g. [3; 4; 5]) indicate that cluster configurations may be even richer in non-self-conjugate nuclei, opening a window of opportunity to confront the highly-non-trivial theoretical predictions with experimental data. Recent experimental studies of clustering in non-self-conjugate nuclei already produced exciting results, such as hints for linear chain structures stabilized by "extra" nucleons (e.g. [6; 7; 8]) and indications for super-radiance [9; 10].
Of particular interest is the nucleus \({}^{13}\)N where three \(\alpha\) clusters and an "extra" proton can form exotic cluster configurations. Resonant \({}^{9}\)B+\(\alpha\) scattering or \(\alpha\)-transfer reactions are not possible because \({}^{9}\)B is proton unbound with a half life of the order of \(10^{-18}\) s. Instead, one may use \(\beta\)-delayed charged-particle spectroscopy to populate states in \({}^{13}\)N via \({}^{13}\)O and observe the decays to a final state of 3\(\alpha\)p. The \(\beta\)-delayed proton channel has previously been studied for \({}^{13}\)O [11] where limited statistics showed only a very small sensitivity to populating the p+\({}^{12}\)C(0\({}^{+}_{2}\)) (Hoyle state) which results in a 3\(\alpha\)+p final state. Utilizing the Texas Active Target (TexAT) Time Projection Chamber to perform one-at-a-time \(\beta\)-delayed charged-particle spectroscopy, \(\alpha\)-decays from the near \(\alpha\)-threshold excited states in \({}^{13}\)N have been observed for the first time, providing insights into the \(\alpha\)+\({}^{9}\)B clustering. Capitalizing on the advantages of TPCs for \(\beta\)-delayed charged-particle emission studies, unambiguous and background-free identifications of the 3\(\alpha\)p events were made. Reconstruction of complete kinematics for these exotic decays allowed for robust decay channel assignments, providing insights into the cluster structure of the \({}^{13}\)N excited states. Evidence for the \(\frac{1}{2}^{+}\) first excited state in \({}^{9}\)B, mirror of the well-known \(\frac{1}{2}^{+}\) in \({}^{9}\)Be, was an unexpected byproduct of these measurements, demonstrating the sensitivity of the technique.
## II Experimental Setup
The \(\beta\)-delayed charged-particle spectroscopy technique with the TexAT TPC has previously been applied for \(\beta\)-delayed 3\(\alpha\) decay studies of \({}^{12}\)N via \({}^{12}\)C\({}^{\star}\)[12]. A detailed description of the technique is provided in [13]. Here, we utilize the same experimental approach to observe the \(\beta\)-delayed 3\(\alpha\)p decays of \({}^{13}\)O via \({}^{13}\)N\({}^{\star}\). We implant \(\beta\)-decaying \({}^{13}\)O one-at-a-time into the TexAT TPC by providing a phase shift signal to the K500 Cyclotron at Texas A&M University when a successful implantation has taken place to halt the primary beam. This phase shift then lasts for three half-lives or until the observation of a \(\beta\)-delayed charged particle in TexAT, with the DAQ ready to accept the trigger. The phase shift is then reset to allow for the next implantation. A beam of \({}^{13}\)O was produced via the \({}^{3}\)He(\({}^{14}\)N,\({}^{13}\)O) reaction at the MARS (Momentum Achromat Recoil Separator) [14] with a typical intensity of 5 pps and an energy of 15.1 MeV/u, degraded by an aluminum foil to 2 MeV/u, to stop inside the TexAT sensitive area, filled with 50 Torr of CO\({}_{2}\) gas. To measure the correlated implantation/decay events, the 2p trigger mode of GET electronics [15] was employed where the occurrence of two triggers within a 30 ms time window was required for a full event. The first trigger, the L1A (implantation), is generated if the Micromegas pad multiplicity exceeds 10. If, during the 30 ms following the L1A trigger, another trigger occurs with Micromegas pad multiplicity above two, the second L1B (decay) trigger event and the time between the L1A and L1B are recorded. For normalization and beam characterization, all events were recorded, even if the L1B trigger never came.
## III Analysis
The complete L1A (implant) + L1B (decay) events were selected with the time between the two triggers in the range of 1-30 ms. The short times (\(<\)1 ms) were omitted to remove double trigger events due to sudden beam-induced noise. To ensure the implanted ion is \({}^{13}\)O, the energy deposited by the beam implant event in the Micromegas "Jr" (MM Jr) beam tracker [16] at the entrance to the TexAT chamber was recorded. The beam contaminants were \({}^{7}\)Be and \({}^{10}\)C, dominated by \({}^{7}\)Be at \(\approx\) 28% of the beam intensity.
Following an identification of \({}^{13}\)O implant, the stopping position was evaluated event-by-event using implant tracks, selecting only those which stopped inside the active area of the Micromegas and not closer than 31.5 mm from the edge. The spread of the \({}^{13}\)O stopping position inside TexAT was 67.5 mm due to straggling.
Further selection was performed by imposing a tight correlation (\(<\)5 mm) between the \({}^{13}\)O stopping location and the vertex location of the respective decay event. Events which passed this test were then fit with a single track segment using a randomly-sampled \(\chi\)-squared minimization algorithm. If a good fit was achieved, the event was identified as a single-proton event. The \(\beta\)-delayed proton spectrum replicates the previous results [11] well, albeit with decreased resolution. The remaining events were fit with four track segments as candidates for \(\beta\)3\(\alpha\)p decay using randomly-sampled \(\chi\)-squared minimization. They were then inspected visually to evaluate the fits' quality. Given the complexity of the fits, manual modifications of the fit algorithm parameters were required for some events.
## IV 3\(\alpha\)+proton Events
Overall, 149 \(\beta\)3\(\alpha\)p events were identified. Due to the size of the TPC and limitations on reconstruction in parts of the TexAT TPC, only 102 of these 149 events allow for complete reconstruction. The "incomplete" events are dominated by the \({}^{9}\)B(g.s.)+\(\alpha\) decay, as this produces a high-energy \(\alpha\)-particle that may escape from the active volume of the TexAT TPC. The efficiency for the \(\alpha_{0}\) decay starts to deviate from 100% at \(E_{x}\) = 10 MeV and slowly drops to around 60% at \(E_{x}\) = 14 MeV. The efficiencies for \(\alpha_{1}\) and \(\alpha_{3}\) are less affected and only decrease to 70% at \(E_{x}\) = 14 MeV. In proton decays to the Hoyle state, most of the energy is taken by the proton, and the resulting three \(\alpha\)-tracks of the pre-selected events are always confined to the active volume of the TPC. Proton tracks were not required in the reconstruction, as complete kinematics can be recovered from the remaining three \(\alpha\)-tracks. Therefore, there was no efficiency reduction for the p+\({}^{12}\)C(Hoyle) decays. The yields given in Table 1 are corrected for these experimental effects.
In order to identify the parent state in \({}^{13}\)N\({}^{\star}\), the lowest energy deposition arm was identified as the proton track and the momentum of the 3 \(\alpha\)-particles was determined by the length and direction of \(\alpha\)-tracks in the
Figure 1: Relative energy spectrum for pairs of \(\alpha\)-particles with the smallest relative energy of the three \(\alpha\)-tracks. The \({}^{8}\)Be(g.s) at 92 keV is well-reproduced.
gas. Protons almost always escape the sensitive volume, so the proton momentum is reconstructed from momentum conservation. The decay energy is then the sum of the three \(\alpha\)-particle energies and the proton energy. From here, the \({}^{8}\)Be (Fig. 1), \({}^{9}\)B (Fig. 2) and \({}^{12}\)C (Fig. 3) excitation energies were determined from the invariant mass. This allowed for a selection of events which proceeded to decay via p+\({}^{12}\)C(0\({}^{+}_{2}\)) [\(p_{2}\)], \(\alpha\)+\({}^{9}\)B(g.s) [\(\alpha_{0}\)], \(\alpha\)+\({}^{9}\)B(\(\frac{1}{2}^{+}\)) [\(\alpha_{1}\)] and \(\alpha\)+\({}^{9}\)B(\(\frac{5}{2}^{+}\)) [\(\alpha_{3}\)]. There is evidence of strength in \({}^{9}\)B between 1 and 2.4 MeV excitation energy (Fig. 2). This is difficult to explain without the \(\frac{1}{2}^{+}\) state in \({}^{9}\)B [17], the mirror of the well-known \(\frac{1}{2}^{+}\) first excited state in \({}^{9}\)Be. Attempts to fit the spectrum without the \(\frac{1}{2}^{+}\) in \({}^{9}\)B fail because they cannot reproduce the excess of counts at excitation energies between 1.4 and 2.4 MeV, which is comparable to the yield in the 2.4-3.5 MeV region where known excited states of \({}^{9}\)B lie. Contributions from the broad 2.78 MeV \(\frac{1}{2}^{-}\) may give a signature similar to that seen, albeit at lower energies (peaking at \(E_{rel}\) = 1.3 MeV for \({}^{13}\)N(\(E_{x}\)) = 12.4 MeV), when considering the expected yield from a \(\frac{1}{2}^{-}\) state in \({}^{13}\)N. The L=0 \(\alpha\)-decay to the broad \(\frac{1}{2}^{-}\) in \({}^{9}\)B would increase the yield at small excitation energies. While this possibility is disfavored by the observed spectrum due to the energy offset, it is mentioned here for completeness. The \(\frac{1}{2}^{+}\) state in \({}^{9}\)B was selected by taking an excitation energy between 1.4 and 2.4 MeV in \({}^{9}\)B (following the centroid and width observed via \({}^{9}\)Be(\({}^{3}\)He,\(t\)) [17], which is consistent with our current results), and the \(\frac{5}{2}^{+}\) was taken as having an excitation energy above 2.4 MeV. Any contribution from the relatively narrow 2.345 MeV \(\frac{5}{2}^{-}\) is not present in the plots, as this state decays almost exclusively via \({}^{5}\)Li and therefore would not correspond to a peak in the \({}^{8}\)Be spectrum. Only 3 events were associated with this decay to \({}^{5}\)Li, hence the statistics were insufficient to incorporate it into the analysis.
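To make the reconstruction chain described above concrete, the following is a minimal, hedged sketch (not the analysis code used in this work) of how the total decay energy could be obtained from three measured \(\alpha\)-tracks when the proton escapes: the parent is assumed at rest, a non-relativistic approximation is used, and the masses and example values are purely illustrative.

```python
import numpy as np

M_ALPHA = 3727.4    # MeV/c^2 (assumed constants for illustration)
M_PROTON = 938.3    # MeV/c^2

def momentum_vec(mass, kinetic_energy, direction):
    """Non-relativistic momentum vector (MeV/c) along a unit direction."""
    d = np.asarray(direction, dtype=float)
    return np.sqrt(2.0 * mass * kinetic_energy) * d / np.linalg.norm(d)

def decay_energy(alpha_energies, alpha_directions):
    """Total 3alpha+p decay energy: the alpha kinetic energies plus the proton
    energy inferred from momentum conservation (parent assumed at rest)."""
    p_alphas = [momentum_vec(M_ALPHA, e, d)
                for e, d in zip(alpha_energies, alpha_directions)]
    p_proton = -np.sum(p_alphas, axis=0)              # proton balances the alphas
    e_proton = np.dot(p_proton, p_proton) / (2.0 * M_PROTON)
    return sum(alpha_energies) + e_proton

# illustrative example: three alphas with toy energies (MeV) and directions
print(decay_energy([1.5, 1.2, 0.9],
                   [(1, 0, 0), (-0.4, 0.9, 0.1), (-0.5, -0.8, 0.3)]))
```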
Following the channel selection, the excitation energy in \({}^{13}\)N was calculated and is shown in Fig. 5. Despite low statistics, a number of states can be seen and will be discussed individually. A summary of the properties of these
Figure 3: Invariant mass spectrum for \({}^{12}\)C from 3\(\alpha\)-particles. A peak at 7.65 MeV is seen, well reproducing the Hoyle state energy and a broad peak is seen at higher excitation energies which correspond to events that decay via \({}^{9}\)B + \(\alpha\).
Figure 2: For events that do not decay via the Hoyle state, the relative energy spectrum is shown here which is generated by selecting the two \(\alpha\)-particles that produce the \({}^{8}\)Be(g.s) and then reconstructing the \({}^{9}\)B relative energy with the proton. Overlaid in dashed red are simulated data for the ground state contribution and in solid red are the \(\frac{1}{2}^{+}\) and \(\frac{5}{2}^{+}\) states from single channel R-Matrix calculations convoluted with a Gaussian with \(\sigma\) = 0.23 MeV. The \(\frac{1}{2}^{+}\) parameters are those obtained by Wheldon [17] which show excellent agreement. Inset: projection of an example \(\alpha\)+\({}^{9}\)B(g.s) event in the TPC with the color indicating energy deposition. The lower energy deposition proton can be seen extending upwards and then escapes the TPC active area.
Figure 4: Level scheme of measured 3\(\alpha\)+p states in \({}^{13}\)N in the central column with the proposed spin-parity assignments. The location of the thresholds for proton and \(\alpha\) decay are shown in red with the equivalent excitation energy shown. The corresponding states in the daughter nuclei (12C and \({}^{9}\)B) are also shown.
states observed is then shown in Table 1. A GEANT4 simulation was performed to test the variation in experimental resolution as a function of excitation energy for the \(\alpha_{0}\) channel, which is typically around \(\sigma\) = 200 keV. The \(p_{2}\) channel resolution is almost entirely dominated by discrepancies between the calculated and real stopping powers for the \(\alpha\)-particles and therefore cannot be accurately determined. For all excitation energies, it is realistically greater than \(\sigma\) = 160 keV.
### 11.3 MeV state
The first peak in the spectrum corresponds to an excitation energy of 11.3 MeV in \({}^{13}\)N. The strength is almost entirely dominated by the \({}^{9}\)B(g.s)\(+\alpha\) channel with a small fraction of \({}^{12}\)C(0\({}^{+}_{2}\))\(+\)p. The yield in the \(p_{0}\) from the previous Knudsen data [11] shows a small, very narrow peak at the energy associated with this state (\(E_{p}\)(lab) = 8.64 MeV) and is taken as 6(2.6). The yield in the \(p_{1}\) channel is harder to estimate due to the larger background from other states in this region; it shows no evidence of a peak and is taken to be negligible. Fitting this peak in conjunction with neighboring peaks, the yield in the \(\alpha_{0}\) channel is 18(4.4), with \(\sigma\) = 280(80) keV and \(E_{x}\) = 11.3(1) MeV. In the \(p_{2}\) channel, the yield is 7(2.8) with \(\sigma\) = 220(100) keV and \(E_{x}\) = 11.0(1) MeV. These widths are commensurate with the experimental resolution; therefore \(\Gamma\) is expected to be relatively small (\(\Gamma<\) 200 keV). Given that the yields for \(\alpha_{0}\) and \(p_{2}\) are both strong, the spin-parity assignment is favored towards \(J^{\pi}=\frac{3}{2}^{-}\), where the angular momentum transfer is L=0 and L=1 respectively. A choice of \(J^{\pi}=\frac{1}{2}^{-}\) or \(J^{\pi}=\frac{5}{2}^{-}\) would require L=2 for the \(\alpha_{0}\) channel, which should heavily suppress the yield, and \(J^{\pi}=\frac{5}{2}^{-}\) would correspond to L=3 for \(p_{2}\), so these options are strongly disfavored. From Table 1, by taking the yields of the states and correcting for the different channel penetrabilities, \(P_{L}\), and efficiencies, one can determine the structure of the measured states without a measurement of their widths to compare to the Wigner limit. Many of the states in \({}^{9}\)B are very broad, and the extreme simplification of calculating the penetrability at the resonance energy is made. In reality, the average penetrability will be higher. The structure is therefore determined by the fractional reduced width, \(\tilde{\gamma}^{2}_{i}=\frac{\gamma^{2}_{i}}{\sum_{j}\gamma^{2}_{j}}\), where \(\gamma^{2}_{i}=\frac{\Gamma_{i}}{2P_{L}}\). This variable shows the type of clustering but not its magnitude. This state has considerable strength in both \(\alpha_{0}\) and \(p_{2}\), with \(\tilde{\gamma}^{2}_{i}\) of 63% and 35% respectively. Assuming the total width, \(\Gamma\), of the state is \(<\) 200 keV, one may compare to the Wigner limit, \(\gamma^{2}_{W}=\frac{\hbar^{2}}{\mu a^{2}}\), which is 0.57 and 2.1 MeV for \(\alpha\)-decay and \(p\)-decay respectively. Correspondingly, the ratio to the Wigner limit is \(\theta^{2}_{W}<\) 28% and \(<\) 4% for \(\alpha_{0}\) and \(p_{2}\) respectively. The former (while notably only an upper limit) constitutes a well-clustered state.
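For illustration only, the bookkeeping behind the fractional reduced widths and the comparison to the Wigner limit can be sketched as below; the partial widths, penetrabilities, reduced mass and channel radius are placeholder numbers, not the values used in this analysis.

```python
import numpy as np

HBARC = 197.327  # MeV fm

def fractional_reduced_widths(partial_widths, penetrabilities):
    """gamma_i^2 = Gamma_i / (2 P_L), returned as fractions of their sum
    (the 'type of clustering' measure quoted in Table 1)."""
    gamma2 = np.asarray(partial_widths, float) / (2.0 * np.asarray(penetrabilities, float))
    return gamma2 / gamma2.sum()

def wigner_limit(mu, a):
    """Single-particle (Wigner) limit gamma_W^2 = hbar^2 / (mu a^2) in MeV,
    with reduced mass mu in MeV/c^2 and channel radius a in fm."""
    return HBARC**2 / (mu * a**2)

# placeholder partial widths (MeV) and penetrabilities for two channels
print(fractional_reduced_widths([0.018, 0.007], [0.05, 0.30]))
# alpha channel with illustrative mu ~ 2580 MeV/c^2 and a ~ 5.2 fm (~0.56 MeV)
print(wigner_limit(mu=2580.0, a=5.2))
```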
### 11.8 MeV state
In the \(p_{2}\) channel, the yield is 4(2.2) with \(\sigma\) = 170(110) keV and \(E_{x}\) = 11.8(1) MeV. Counts in the \(\alpha_{1}\) channel at this energy originate from higher excitation energies extending down, as the penetrability \(P_{L}\) for \(\alpha_{1}\) is extremely suppressed, prohibiting any strength. Due to the strength of the two nearby states in the \(\alpha_{0}\) channel, the yield in the \(\alpha_{0}\) channel has very large uncertainties and can only be limited to less than 1.8. There are two states previously known at this energy, a \(\frac{3}{2}^{-}\) and a \(\frac{5}{2}^{-}\), with widths of 115(30) and 530(80) keV respectively. Our data are more consistent with the narrower \(\frac{3}{2}^{-}\), which was also populated in previous work [11]. Additionally, a \(\frac{5}{2}^{-}\) assignment is the least favored from an angular momentum perspective (L=3 vs L=1 for \(\frac{1}{2}^{-}\) or \(\frac{3}{2}^{-}\)), and this state is seen to populate the \(p_{2}\) channel reasonably well. From previous work, the yield in the \(p_{0}\) was determined to be 28(14). Making the same corrections for penetrabilities as above, this state shares strength between the \(p_{0}\) and \(p_{2}\) channels with \(\tilde{\gamma}^{2}_{i}>\) 51% and \(>\) 39% respectively, with the remaining \(\alpha_{0}\) component being \(<\) 10%. The width of this state is known, so the reduced width for \(p_{2}\) can be compared to the Wigner limit and is \(\sim\) 1%. Therefore, the contribution of the \({}^{12}\)C(0\({}^{+}_{2}\))\(\bigotimes p\) configuration is small.
### 12.4 MeV state
Fitting this peak in conjunction with neighboring peaks, the yield in the \(\alpha_{0}\) channel is 22(4.8), with \(\sigma\) = 310(90) keV and \(E_{x}\) = 12.4(1) MeV. The corre
Figure 5: Excitation spectrum in \({}^{13}\)N for \(3\alpha+p\) separated by channels. Dashed vertical lines show previously-known states populated by \(\beta\)-decay in black and new states observed are shown in magenta. A magenta arrow shows a shift in the excitation energy between a suggested state at 13.26(10) MeV to 13.1(1) MeV.
sponding yield of \(\alpha_{1}\) is 4(2.2). In the \(p_{2}\) channel, the yield is 5(2.5) with \(\sigma\) = 110(70) keV and \(E_{x}\) = 12.5(1) MeV. Despite the relatively small yield in the \(\alpha_{1}\) channel, when correcting for penetrability the \(\alpha_{1}\) channel dominates the strength with \(\tilde{\gamma}_{i}^{2}\) = 91%, with \(\alpha_{0}\) and \(p_{2}\) sharing the remainder at 6% and 3% respectively. The strong contribution of the \({}^{9}\)B(\(\frac{1}{2}^{+}\))\(\bigotimes\alpha\) configuration suggests this is a near-threshold p-wave state.
Data for the \({}^{9}\)Be(\(\alpha,\alpha_{0}\)) [18; 19] and \({}^{9}\)Be(\(\alpha,n_{0}\)) [20] reactions are available at this excitation energy and above, and one may look for analogous states in \({}^{13}\)C. Given that this state is s-wave in the entrance channel (assuming \(J^{\pi}=\frac{3}{2}^{-}\)) and is expected to be relatively narrow, while the previous data have a very large experimental width, it is plausible that such a state would not have been observed in \({}^{13}\)C in the \({}^{9}\)Be(\(\alpha,\alpha_{0}\)) channel. The sole dominant feature in this region is a strong \(\frac{5}{2}^{+}\) state at 11.95 MeV.
It is worth noting that the \(\alpha_{1}\) channel is sub-threshold in \({}^{13}\)C and the \(n_{2}\) channel is heavily-suppressed until \({}^{13}\)C excitation energies of above 13 MeV [20]. There are many states in this region (\(E_{\alpha}>2\) MeV) visible in the \({}^{9}\)Be(\(\alpha,n_{0}\)) channel but the resolution is insufficient to provide spin-parity and width assignments.
This perhaps motivates a more extensive investigation of near-threshold states in \({}^{13}\)C from the \({}^{9}\)Be + \(\alpha\) channel with higher resolution and angular coverage. It is also worth noting that in the previous proton data [11] there is a peak at the corresponding energy in the \(p_{1}\) channel (\(E_{p}\)(lab) = 5.55 MeV), with a yield of \(\approx\) 6 visible above a considerable background. A conservative limit of \(<10\) for \(p_{1}\) is therefore taken. The width in this spectrum is also seen to be small, which agrees with our results.
### 13.1 MeV state
A relatively strong peak is seen at 13.1 MeV in the \(\alpha_{3}\) channel where decays occur through the 2.75 MeV \(\frac{5}{2}^{+}\). There is only a very small contribution from the \(\alpha_{1}\) channel at this excitation energy so this state is almost exclusively \({}^{9}\)B(\(\frac{5}{2}^{+}\))\(\bigotimes\alpha\). Given the dominance of \(\alpha_{3}\), this suggests a spin-parity of \(J^{\pi}=\frac{5}{2}^{-}\) which suppresses the other channels.
In \({}^{9}\)B, there is also the extremely broad 2.78 MeV \(\frac{1}{2}^{-}\) with \(\Gamma\) = 3.13 MeV, which may actually be the source of the \(\alpha_{3}\) strength. Our data do not have sufficient statistics to exclude this possibility, and the \(\frac{1}{2}^{-}\) decays primarily through \({}^{8}\)Be via proton decay. In this case, the preferred spin-parity assignment is \(J^{\pi}=\frac{1}{2}^{-}\), corresponding to L=0 \(\alpha_{3}\) decay. The results for both spin-parity assignments are included in Table 1.
As with the 12.4 MeV state, there is evidence of a peak in previous data at the correct energy in the \(p_{1}\) channel (\(E_{p}\)(lab) = 6.20 MeV) which is given a similar limit of \(<10\).
### 13.7 MeV state
There is a collection of strength in the \(p_{2}\), \(\alpha_{0}\), \(\alpha_{1}\) and \(\alpha_{3}\) channels. With a yield of 6(2.7), the state is dominated by \(p_{2}\) and has parameters of \(\sigma\) = 260(70) keV and \(E_{x}=13.7(1)\) MeV. Given the small yield in the \(\alpha_{3}\) channel, this state can be assigned as either \(\frac{3}{2}^{-}\) or \(\frac{5}{2}^{-}\). A \(\frac{5}{2}^{-}\) would correspond to L=3 for the \(p_{2}\) channel, so a \(\frac{3}{2}^{-}\) assignment is more commensurate with the reasonable \(p_{2}\) yield. This state also exhibits a \({}^{9}\)B(\(\frac{5}{2}^{+}\))\(\bigotimes\alpha\) structure.
Examining the previous work for evidence of a peak in the \(p_{1}\) is not possible for this state due to the presence of a strong \(p_{0}\) branch from a lower-lying state at the same energy. A similar limit of \(<10\) is therefore placed on this state.
## V Conclusions
\(\beta\)-delayed 3\(\alpha\)p decay has been observed for the first time. While \(\beta\)-delayed \(\alpha\)p has been previously observed in \({}^{9}\)C [21], \({}^{17}\)Ne [22], \({}^{21}\)Mg [23] and \({}^{23}\)Si [24], these states did not provide any structural insight and instead were mainly seen through isobaric analogue states that were
\begin{table}
\begin{tabular}{|c|c||c|c|c|c|c|c||c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c||}{State} & \multicolumn{6}{c||}{Counts} & \multicolumn{6}{c|}{Efficiency-corrected \(\tilde{\gamma}^{2}\)} \\
\hline
\(E_{x}\) & \(J^{\pi}\) & \(\alpha_{0}\) & \(\alpha_{1}\) & \(\alpha_{3}\) & \(p_{0}\) [11] & \(p_{1}\) [11] & \(p_{2}\) & \(\alpha_{0}\) & \(\alpha_{1}\) & \(\alpha_{3}\) & \(p_{0}\) & \(p_{1}\) & \(p_{2}\) \\
\hline
11.3(1) & 3/2\({}^{-}\) & 18(4.4) & 0 & 0 & 6(2.6) & \(<3\) & 7(2.8) & 67(21)\% & 0\% & 0\% & 4(2)\% & \(<\)1\% & 29(13)\% \\
\hline
11.8(1) & 3/2\({}^{-}\) & \(<1.8\) & 0 & 0 & 28(14) & \(<4\) & 4(2.2) & \(<\)12\% & 0\% & 0\% & 50(30)\% & 0\% & 38(25)\% \\
\hline
12.4(1) & 3/2\({}^{-}\) & 22(4.8) & 4(2.2) & 0 & \(<3\) & \(<10\) & 5(2.5) & 6(2)\% & 88(49)\% & 0\% & \(<\)0.1\% & \(<\)2\% & 2(1)\% \\
\hline
13.1 & 1/2\({}^{-}\) & 0 & 3(2) & 5(2.5) & 21(6) & \(<10\) & 0 & 0\% & 1(1)\% & 98(48)\% & 0\% & \(<\)0.4\% & 0\% \\
 & 5/2\({}^{-}\) & 0 & 3(2) & 5(2.5) & 21(6) & \(<10\) & 0 & 0\% & 1(10)\% & 89(44)\% & 0.7(0.2)\% & \(<\)0.2\% & 0\% \\
\hline
13.7(1) & 3/2\({}^{-}\) & 1(1.4) & 3(2) & 4(2.2) & \(<3\) & \(<10\) & 6(2.7) & 1(1)\% & 8(8)\% & 75(42)\% & \(<\)0.5\% & \(<\)7\% & 8(3)\% \\
\hline
\end{tabular}
\end{table}
Table 1: Excited states in \({}^{13}\)N observed in this work with tentative spin-parity assignments, decay properties of the states, and the efficiency-corrected fractional reduced widths.
well fed by \(\beta\)-decay. In this work, \(\beta\)3\(\alpha\)p decay was observed from states below the isobaric analog in \({}^{13}\)N at \(E_{x}=15\) MeV, demonstrating this is not merely a phase-space effect. The \(\beta\)-delayed \(3\alpha\)p decays observed here are in strong competition with \(\beta\)-delayed proton decay, and therefore the states must have significant clustering.
Three new states and a previously-tentative state in \({}^{13}\)N have been observed with a strong \(3\alpha+p\) nature. The first is a narrow \(\frac{3}{2}^{-}\) state at \(E_{x}=11.3(1)\) MeV with mixed \({}^{9}\)B(g.s)\(\bigotimes\alpha\) and \(p+{}^{12}\)C(0\({}^{+}_{2}\)) nature.
Another previously-observed \(\frac{3}{2}^{-}\) was seen to have mixed \(p+{}^{12}\)C(g.s.) and \(p+{}^{12}\)C(0\({}^{+}_{2}\)) nature at 11.8 MeV, with around half of the total strength as \(p+{}^{12}\)C(g.s.).
At higher excitation, another strong \(\alpha\)-decaying state was seen at \(E_{x}=12.4(1)\) MeV although this state has a much stronger \({}^{9}\)B(\(\frac{1}{2}^{+})\bigotimes\alpha\) nature.
A revised excitation energy of 13.1(1) MeV is suggested for a previously-seen state at 13.26 MeV. The \({}^{9}\)B(\(\frac{5}{2}^{+})\bigotimes\alpha\) structure dominates in this state, and a spin assignment of \(J^{\pi}=\frac{1}{2}^{-}\) or \(\frac{5}{2}^{-}\) is therefore preferred.
Finally, another \(\frac{3}{2}^{-}\) is seen at 13.7 MeV which is also dominated by \({}^{9}\)B(\(\frac{5}{2}^{+})\bigotimes\alpha\).
The inability to extract the width of these narrow states means that the magnitude of clustering cannot be fully evaluated; however, the type (channel) of clustering can be determined without this information. Higher-resolution data focusing on the proton channel may provide further information on the magnitude of this clustering phenomenon. From our current data, one may conclude that the clustered channels are competitive against the single-particle \(p_{0}\) channel, highlighting the importance of cluster configurations in the non-self-conjugate nucleus \({}^{13}\)N.
Evidence for the low-lying \(\frac{1}{2}^{+}\) in \({}^{9}\)B in these background-free data, matching the parameters of previous observations [17], brings us closer to resolving the long-standing problem of searches for this elusive state.
## VI Acknowledgments
We thank Vlad Goldberg for helpful feedback on this work. This work was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Science under Award No. DE-FG02-93ER40773 and by the National Nuclear Security Administration through the Center for Excellence in Nuclear Training and University Based Research (CENTAUR) under Grant No. DE-NA0003841. G.V.R. also acknowledges the support of the Nuclear Solutions Institute. S.A., S.M.C., C.K., D.K., S.K. and C.P. also acknowledge travel support from the IBS grant, funded by the Korean Government under grant number IBS-R031-D1. C.N.K acknowledges travel support from the National Research Foundation of Korea (NRF) grant, funded by the Korea government (MSIT) (No. 2020R1A2C1005981 and 2013M7A1A1075764).
|
2303.12891
|
Feature Reduction Method Comparison Towards Explainability and
Efficiency in Cybersecurity Intrusion Detection Systems
|
In the realm of cybersecurity, intrusion detection systems (IDS) detect and
prevent attacks based on collected computer and network data. In recent
research, IDS models have been constructed using machine learning (ML) and deep
learning (DL) methods such as Random Forest (RF) and deep neural networks
(DNN). Feature selection (FS) can be used to construct faster, more
interpretable, and more accurate models. We look at three different FS
techniques; RF information gain (RF-IG), correlation feature selection using
the Bat Algorithm (CFS-BA), and CFS using the Aquila Optimizer (CFS-AO). Our
results show CFS-BA to be the most efficient of the FS methods, building in 55%
of the time of the best RF-IG model while achieving 99.99% of its accuracy.
This reinforces prior contributions attesting to CFS-BA's accuracy while
building upon the relationship between subset size, CFS score, and RF-IG score
in final results.
|
Adam M. Lehavi, Seongtae Kim
|
2023-03-22T20:09:31Z
|
http://arxiv.org/abs/2303.12891v1
|
Feature Reduction Method Comparison Towards Explainability and Efficiency in Cybersecurity Intrusion Detection Systems
###### Abstract
In the realm of cybersecurity, intrusion detection systems (IDS) detect and prevent attacks based on collected computer and network data. In recent research, IDS models have been constructed using machine learning (ML) and deep learning (DL) methods such as Random Forest (RF) and deep neural networks (DNN). Feature selection (FS) can be used to construct faster, more interpretable, and more accurate models. We look at three different FS techniques; RF information gain (RF-IG), correlation feature selection using the Bat Algorithm (CFS-BA), and CFS using the Aquila Optimizer (CFS-AO). Our results show CFS-BA to be the most efficient of the FS methods, building in 55% of the time of the best RF-IG model while achieving 99.99% of its accuracy. This reinforces prior contributions attesting to CFS-BA's accuracy while building upon the relationship between subset size, CFS score, and RF-IG score in final results.
+
Footnote †: Research in large conducted thanks to funding from National Security Agency grant H98230-22-1-0017 and National Science Foundation award 1719498.
## I Introduction
Cybersecurity is a growing field with increasing necessity and prominence. In 2022 thus far, there have been 1.364 billion identified malware programs [1]. The average cost of a data breach is $4.24 million, with an 80% cost difference where security AI was deployed through either Machine Learning (ML) or Deep Learning (DL) [2]. Threat detection, or finding malicious activity, is one of the largest components of the cybersecurity field. An intrusion detection system (IDS) is a model that detects these threats through various means [3]. A network-based IDS (NIDS) monitors network connections to look for malicious traffic [4]. As networks are typically the source of more damaging and costly problems, spanning company and organization data, building an NIDS has importance in current research, and will be the focus of this paper.
Intrusion detection systems are typically divided into two separate classes based upon their approach and the type of attacks they aim to cover. Signature-based IDS relies on known attacks; it generates patterns from past or given data and then uses those set patterns to sift through current data, such as an antivirus package [4]. Anomaly-based IDS aims to find dynamic patterns to group data and search for deviations [4]. Hybrid approaches combine both methods. While signature-based systems have guarantees on detecting common malicious activity, they perform poorly on attacks that deviate from those patterns. In addition, they often take large amounts of labeled data beforehand, and need constant updating. As such, anomaly-based detection can achieve better results for large amounts of data with unknown correlations.
NIDS are fundamentally built upon data from the network. In research, data to test particular models and methods comes from datasets of real or simulated network data. The most common datasets include NSL-KDD, KDD-Cup'99, UNSW-NB15, CIC-IDS2017, Kyoto, and CSE-CIC-IDS2018 [5]. Of these sets, CSE-CIC-IDS2018 is the most current, meaning it is the most applicable. We therefore selected this dataset to be able to add a point of comparison with needed metrics [6]. The dataset is useful for practice, because it contains a variety of attacks including zero-day attacks, or ones that typically happen when a network is initially set up and open to user activity [4]. Since it is not as common as CIC-IDS2017 and lacks many results, it is a good target dataset [5].
In building these intrusion detection systems, we will look to use ML and DL methodology. ML concerns itself with statistical methods that construct or evaluate patterns from known functions and behavior [7]. Within all possible ML methods, we need to look towards classification methods that specify if a user is attacking and how, as well as one that can work regardless of our class distribution, as we fundamentally need to explore attacks of all types, and CSE-CIC-IDS2018 contains very skewed data. This leads us to preferring K-nearest neighbors and Random Forest (RF) [7]. K-nearest neighbors is a non-parametric learning algorithm classifying new points to the majority class of the \(K\) closest points. Because K-nearest neighbors is difficult to extract information from, and the primary motivation of this research is to compare methodologies based on a variety of factors including relative importance of predictors, we will not use it. We will use RF, however, for a few reasons. For data distributed with many extreme points, using a method like RF that does not rely upon weights for given inputs but rather specific boundaries may
give it an edge in performance. Because RF has a relatively good balance of low variance to low bias, as well as having much better results than a standard decision tree, all while still being relatively easily interpretable, we will use it [7].
Deep Neural Networks (DNN) are an offshoot of DL with the goal of modeling complex nonlinear relationships using chains of neurons and activation functions. With regards to our work, they are a strong tool to choose and incorporate because of their fast training time given the scope of our data and consistently strong performance when compared to nearly all ML methodology [8].
In exploring our results, we will display the information in a manner that is as transparent and digestible as possible. Similar to many papers, we include the accuracy, precision, F1 score, and false alarm rate (FAR). Most other metrics can be determined as a derivative of these, and they provide a foundation by which to compare results [8, 9]. While [5] achieves results for our dataset using DNN architecture, we achieve even better results and thus choose to include it in conjunction with RF.
Feature selection, which aims to feed a model a subset of the original features to improve, is the central focus of the paper. These improvements can include time to build a model as well as performance results such as accuracy, precision, F1 score, and FAR. There are three types of feature ranking methods commonly used. Filters use predefined criteria or patterns in the data, wrappers build many models to compare usefulness of given features, and embedded methods rely upon training a model that then defines features for another model [10]. For the sake of being able to compare more modern optimizer filter methods, we will use both other filter methods and embedded methods as benchmarks, since wrapper methods both have better accuracy and much larger build times. A standard method for embedded feature selection is RF information gain (IG). Because of its large amount of time to compute, it is useful but limited in applicability [11]. The filter methods to be compared are the correlation based feature selection Bat Algorithm (CFS-BA), and Aquila Optimizer (CFS-AO).
Correlation based feature selection is built upon the idea that an outperforming subset of features can be determined purely through correlation between variables. The score derived is one that aims to maximize correlation between predictors and final classes while minimizing correlation between pairs of predictors [6]. This methodology performs much faster than any embedded method, and can give more intuitive reasoning into the decision making process, leading us to choose it. The only issue is that we cannot explore every possible subset, and as such need to use an optimizer method to find the closest subset to the global maxima score.
Bat Algorithm (BA) was proposed by Yang in 2010 as a metaheuristic optimization algorithm to explore a space mimicking the behavior of microbats using echolocation [12]. Most metaheuristic algorithms have two phases; exploration attempts to discover the widest range of the feature space, and exploitation aims to discover the best local solution within a given region. Unlike most modern metaheuristic algorithms, BA gradually switches between exploration and exploitation using a self-tuning hyperparameter [13]. CFS-BA has performed well for CIC-IDS2017, but does not have CSE-CIC-IDS2018 results and lacks results for DNN and RF models [6].
The Aquila Optimizer was introduced as a faster and potentially more efficient metaheuristic algorithm than prior methodology. While slower to converge onto a maxima, it has been shown to outperform BA when used in feature selection on cybersecurity benchmark datasets [14]. Because of its promising nature, the algorithm will be used and compared to BA with CFS scoring. AO has performed well for CIC-IDS2017, but used a different fitness than CFS and does not have CSE-CIC-IDS2018 results [14].
The paper is organized as follows: our motivation for the particular fields, tools, and algorithms used are in section I, particular mathematical and technical details regarding the implementation of our research are in section II, information specific to our dataset and its distribution and processing are in section III, experimental results are in section IV, interpretation of results and limitations of the study are in section V, and a summary of the work's main achievements as well as future work are in section VI.
## II Methodology
Our methodology will discuss the specifics of how we will build and tune RF and DNN models for feature selection comparison.
### _Classification Algorithms_
Classification algorithms work by feeding a set of inputs through a model to output a final, categorical variable. The methods we will explore for this are RF and DNN.
#### Ii-A1 Random Forest
The basic building block for RF is the decision tree. Visualization is in fig. 1. The decision tree is a model that classifies data points by putting them through a series of binary decision boundaries and then assigning them to the same class as the majority of the training points within a final bucket. The equation used to choose decision splits in this paper, the Gini Index [7], is defined as eq. (1). Entropy can also be used, defined as eq. (2) [7].
\[G =\sum_{k=1}^{K}\hat{p}_{mk}(1-\hat{p}_{mk}) \tag{1}\] \[D =-\sum_{k=1}^{K}\hat{p}_{mk}\log\hat{p}_{mk} \tag{2}\]
Both indexes are modeled to be a measure of total variance across all classes, with a smaller value indicating a better decision tree. Here, \(K\) represents the number of classes, and \(\hat{p}_{mk}\) represents the proportion of training observations in the \(m\)th region from the \(k\)th class. Both indexes take a smaller value when \(\hat{p}_{mk}\) is close to zero or one, meaning each acts as a measure of _purity_ for a given node.
Splits are made to minimize eq. (1). For a standard decision tree, every split can compare all possible predictors; however, when used in conjunction with bootstrapping, this increases variance and decreases accuracy [7]. To work around this, RF forces each split to only compare one of \(m=\sqrt{p}\) out of the total \(p\) predictors.
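For illustration, a minimal sketch of the two impurity measures in eqs. (1)-(2), computed from the class counts in a node (the natural logarithm is assumed for the entropy):

```python
import numpy as np

def gini(class_counts):
    """Gini index of eq. (1) from the class counts in a node."""
    p = np.asarray(class_counts, dtype=float)
    p = p / p.sum()
    return float(np.sum(p * (1.0 - p)))

def entropy(class_counts):
    """Entropy of eq. (2); zero-probability classes contribute nothing."""
    p = np.asarray(class_counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# a purer node yields smaller values of both measures
print(gini([90, 10]), entropy([90, 10]))   # 0.18, ~0.325
print(gini([50, 50]), entropy([50, 50]))   # 0.50, ~0.693
```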
In Random Forest, all inputs are used to train a set of bootstrapped decision trees. This means that some number of trees, set in this case to 100, are constructed from sets with the same number of points as the original data set, made by randomly selecting points from the base set with replacement. Each of these decision trees will have decision splits added but never removed, and reach a depth \(d\). We tested \(d\) over the range [2, 5, 10, 20, 40, 100, 200], building an RF model for each value and calculating the OOB score, and found \(d=20\) ideal for our performance. The out-of-bag (OOB) score is computed by predicting each training point using only the trees whose bootstrap samples did not contain it, and recording the fraction of the training set that ends up correctly labeled. This approximates predictive accuracy to a certain degree, and thus is a sufficient metric for our use.
Furthermore, we can use eq. (2) to determine the information gained at each split by each variable. Working with the average value of all of these splits over each predictor gives us our information gain (IG) scores, which we can use to determine feature subsets.
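A hedged scikit-learn sketch of the depth search and RF-IG ranking described above; the synthetic data stands in for the preprocessed CSE-CIC-IDS2018 features, and any settings beyond those stated in the text (100 trees, the depth grid, entropy-based importances) are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-in for the preprocessed predictors and labels
X, y = make_classification(n_samples=5000, n_features=70,
                           n_informative=20, random_state=0)

# depth tuning via the out-of-bag score
best_depth, best_oob = None, -1.0
for d in [2, 5, 10, 20, 40, 100, 200]:
    rf = RandomForestClassifier(n_estimators=100, max_depth=d,
                                oob_score=True, n_jobs=-1, random_state=0)
    rf.fit(X, y)
    if rf.oob_score_ > best_oob:
        best_depth, best_oob = d, rf.oob_score_

# RF-IG: entropy-based importances, averaged over splits and ranked per feature
rf_ig = RandomForestClassifier(n_estimators=100, max_depth=best_depth,
                               criterion="entropy", n_jobs=-1, random_state=0)
rf_ig.fit(X, y)
ranking = np.argsort(rf_ig.feature_importances_)[::-1]
print(f"best depth: {best_depth}, OOB score: {best_oob:.4f}")
print("top-5 features by IG:", ranking[:5])
```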
#### Ii-A2 Deep Neural Networks
Neural Networks, and DNN more specifically, consist of an input layer \(X_{1},\ldots,X_{n}\), describing all predictors in numerical terms, multiple hidden layers, consisting of fully connected nodes passed to the next layer through nonlinear activation functions, and an output layer. In the case of DNN for the sake of classifying, the process is as follows:
A hidden layer \(n\) with \(D\) nodes with prior layer \(m\) having \(C\) nodes can be represented as
\[A_{d}^{(n)}=h_{d}^{(n)}(X)=g\left(w_{d_{0}}^{(n)}+\sum\nolimits_{c=1}^{C}w_{d_{c}}^{(n)}A_{c}^{(m)}\right)\]
for some activation value \(A_{d}^{(n)}\) of node \(d\), defined through the activation function \(h\), expanded for some nonlinear function \(g\) using weights \(w\) over the \(C\) nodes of the previous layer. For the first layer, \(A_{c}^{(m)}\) is substituted with the inputs \(X_{j}\), summing over all \(p\) predictors. This is then repeated until layer \(n^{\prime}\) with \(D^{\prime}\) nodes and piped to \(K\) different linear models
\[Z_{k}=\beta_{k_{0}}+\sum\nolimits_{d^{\prime}=1}^{D^{\prime}}\beta_{k_{d^{ \prime}}}h_{d^{\prime}}^{(n^{\prime})}(X)=\beta_{k_{0}}+\sum\nolimits_{d^{ \prime}=1}^{D^{\prime}}\beta_{k_{d^{\prime}}}A_{d^{\prime}}^{(n^{\prime})}.\]
This value is then taken and put through the _softmax_ activation function eq. (3) for multiclass classification and the _sigmoid_ function eq. (4) for binary classification [7]. Afterward it is classified to the \(f_{k}(X)\) with the highest probability.
\[f_{k}(X) =Pr(Y=k|X)=\frac{e^{Z_{k}}}{\sum_{l=1}^{K}e^{Z_{l}}} \tag{3}\] \[f_{k}(X) =Pr(Y=k|X)=\frac{e^{Z}}{1+e^{Z}}=\frac{1}{1+e^{-Z}} \tag{4}\]
All other hidden layer activation functions are chosen to be ReLU (rectified linear unit), defined as \(A(z)=(z)_{+}\) that returns \(0\) for negative inputs and \(z\) otherwise, as it achieves far better performance than most other alternatives [7, 16].
The number of neurons and hidden layers, while recommended to be large, may perform best when minimized given the sparsity of smaller classes [16]. As such, we compared two methodologies. The first assumes that we have enough information to warrant a complex model. This model contains three hidden layers, all of size 100. We will refer to it as the 100-100-100 model to reinforce this. The second model will be built on the assumption that we have a lack of information for the size of our data. This encourages a structure that funnels information down to the final layer, and so we will use a hidden layer of size 50 first and then of size 25. This will be the 50-25 model. After these hidden layer passes for each respective model, the classification will be done.
This training is done in batches, as the results and weights are updated for each run. Common batch sizes are 32, 64, 128, and 256. The batch size typically makes minimal difference and is limited by a resource's RAM, so we choose a batch size of 32 for the sake of maximizing result reproducibility.
Overall, the 100-100-100 model underperformed the 50-25 model in build time by large margins and accuracy by notable margins for all of the categorical models. This led us to compare RF and DNN using the 50-25 structure.
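A minimal Keras sketch of the 50-25 architecture described above (ReLU hidden layers, softmax output for multiclass classification); the optimizer, loss, and class count are assumptions not taken from the paper.

```python
import tensorflow as tf

def build_50_25(n_features: int, n_classes: int) -> tf.keras.Model:
    # two ReLU hidden layers (50 then 25 nodes) funneling down to a softmax output
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(50, activation="relu", input_shape=(n_features,)),
        tf.keras.layers.Dense(25, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# e.g. 70 preprocessed predictors; benign plus 13 attack signatures gives 14 classes
model = build_50_25(n_features=70, n_classes=14)
# model.fit(X_train, y_train, batch_size=32, epochs=10)  # batch size 32 as in the text
```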
### _Correlation Feature Selection_
CFS is a filter method that analyzes the fitness of a given subset of features based solely on the intercorrelation between features. This is done through Spearman's correlation, which
Fig. 1: Visualizations of an example RF courtesy [15].
Fig. 2: Deep Neural Network Visualization for 2 inputs and 1 output
works for numerical values calculating collinearity. To make the formula work for our classification, we used one-hot encoding to get individualized scores.
The score for a given subset, \(s\), can be calculated as eq. (5) [17].
\[M_{s}=\frac{k\bar{r}_{cf}}{\sqrt{k+k(k-1)\bar{r}_{ff}}} \tag{5}\]
\(M_{s}\) is the score, with a higher number indicating a better subset, \(k\) is the number of predictors in the given subset, \(\bar{r}_{cf}\) is the average absolute intercorrelation between the predictors and the final classes, and \(\bar{r}_{ff}\) is the average absolute intercorrelation between pairs of predictors. We look at the average _absolute_ intercorrelation so that the formula rewards inverse and direct relations the same.
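As a concrete illustration of eq. (5), a hedged sketch computing the merit of a candidate subset from absolute Spearman correlations is shown below; it is not the paper's implementation, and in practice the correlations would be precomputed once rather than per call.

```python
import numpy as np
from scipy.stats import spearmanr

def cfs_merit(X_subset, y_onehot):
    """CFS merit M_s of eq. (5) for the predictors in X_subset against
    one-hot encoded classes y_onehot."""
    k = X_subset.shape[1]
    r_cf = np.mean([abs(spearmanr(X_subset[:, i], y_onehot[:, j])[0])
                    for i in range(k) for j in range(y_onehot.shape[1])])
    r_ff = 0.0
    if k > 1:
        r_ff = np.mean([abs(spearmanr(X_subset[:, i], X_subset[:, j])[0])
                        for i in range(k) for j in range(i + 1, k)])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

# toy example: one informative and one noise feature, three one-hot classes
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, 200)
X = np.column_stack([labels + rng.normal(0, 0.5, 200), rng.normal(0, 1, 200)])
print(cfs_merit(X, np.eye(3)[labels]))
```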
### _Bat Algorithm_
Our main goal is to use a metaheuristic optimizer to explore a feature space for a given best point. In our case, the main metaheuristic optimizer is BA, the feature space is all possible subsets of the 70 preprocessed predictors, and the best point is the one with the maximum value from eq. (5).
We define some number of bats and instantiate each one according to eq. (6). Uniform distribution between values \(a\) and \(b\) is denoted as \(U(a,b)\). \(x_{i}^{0}\) represents the initial position for bat \(i\) out of a total of \(n=100\) bats, with an \(x\) value of \([0,0.5]\) at position \(i\) meaning a rejection of the predictor in location \(i\) and a value of \([0.5,1]\) representing acceptance. \(v_{i}^{0}\) represents the initial velocity, \(f_{i}^{0}\) represents the initial frequency, \(A_{i}^{0}\) represents the initial loudness, and \(r_{i}^{0}\) represents some initial random value.
\[x_{i}^{0} =\begin{bmatrix}x_{1}\\ x_{2}\\ \ldots\\ x_{k}\end{bmatrix}=\begin{bmatrix}U(0,1)\\ U(0,1)\\ \ldots\\ U(0,1)\end{bmatrix} \tag{6}\] \[v_{i}^{0} =\begin{bmatrix}v_{1}\\ v_{2}\\ \ldots\\ v_{k}\end{bmatrix}=\begin{bmatrix}U(-1,1)\\ U(-1,1)\\ \ldots\\ U(-1,1)\end{bmatrix}\] \[f_{i}^{0} =U(0,0.1),\ A_{i}^{0}=U(1,2),\ r_{i}^{0}=U(0,1)\]
We then define the best point \(x_{b}\) to be the position that correlates to the highest value for eq. (5). Then for each epoch \(t\), with \(t_{max}=1000\), we first update \(v_{i}^{t}\) according to eq. (7), eq. (8), and eq. (9).
\[f_{i}^{t} =U(0,0.1) \tag{7}\] \[v_{i}^{t} =v_{i}^{t-1}+(x_{i}^{t-1}-x_{b})f_{i}^{t}\] (8) \[v_{i}^{t} =\max\left(\min\left(v_{i}^{t},1\right),-1\right). \tag{9}\]
Then we update \(x_{i}^{t}\) according to eq. (10) or eq. (11). \(A_{i}^{t}\) represents the loudness.
\[x_{i}^{t} =\max\left(\min\left(x_{i}^{t-1}+v_{i}^{t},1\right),-1\right) \tag{10}\] \[x_{i}^{t} =\max\left(\min\left(x_{b}+\begin{bmatrix}U(-0.01,0.01)\\ U(-0.01,0.01)\\ \ldots\\ U(-0.01,0.01)\end{bmatrix}A_{i}^{t},1\right),-1\right) \tag{11}\]
After this, we update our loudness and pulse rate using eq. (12) and eq. (13), where \(\alpha\) and \(\gamma\) are constants, both of which can be set to 0.95.
\[A_{i}^{t+1} =\alpha A_{i}^{t} \tag{12}\] \[r_{i}^{t+1} =r_{i}^{0}(1-e^{-\gamma t}) \tag{13}\]
After determining all of these values, we can run the Bat Algorithm. The main idea of this algorithm is to transition from exploration to exploitation gradually through self-tuning behavior. We can now create algorithm 1, based largely on [6, 12, 13].
```
0: Correlation matrix produced from training dataset
0: Selected best feature subset \(x_{b}\)
1: Initialize \(n\) bats using eq. (6)
2:\(x_{b}\gets 0,M_{b}\gets 0\)
3:for\(i\in[1,n]\)do
4: Calculate \(M_{i}\) for subset \(s_{i}\) determined by bat \(x_{i}\) using eq. (5)
5:if\(M_{i}>M_{b}\)then
6:\(x_{b}\gets x_{i},M_{b}\gets M_{i}\)
7:endif
8:endfor
9:\(t\gets 1\)
10:while\(t\leq t_{max}\)do
11:for\(i\in[1,n]\)do
12: Update \(f_{i}\) using eq. (7)
13: Update \(v_{i}\) using eq. (8) and then eq. (9)
14:if\(r_{i}\geq U(0,1)\)then
15: Update \(x_{i}\) using eq. (10)
16:else
17: Update \(x_{i}\) using eq. (11)
18:endif
19: Calculate \(M_{i}\) using eq. (5)
20:if\(A_{i}>U(0,1)\)and\(M_{i}>M_{b}\)then
21:\(x_{b}\gets x_{i},M_{b}\gets M_{i}\)
22: Update \(r_{i}\) using eq. (13)
23: Update \(A_{i}\) using eq. (12)
24:endif
25:endfor
26:\(t\gets t+1\)
27:endwhile
28:return\(x_{b}\)
```
**Algorithm 1** Bat Algorithm
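Below is a rough, hedged Python translation of Algorithm 1. The `merit` callback is assumed to implement the CFS score of eq. (5) from a precomputed correlation structure; the toy merit at the end exists only to make the sketch runnable.

```python
import numpy as np

def bat_algorithm(merit, n_features, n_bats=100, t_max=1000,
                  alpha=0.95, gamma=0.95, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_bats, n_features))      # positions, eq. (6)
    v = rng.uniform(-1, 1, (n_bats, n_features))     # velocities
    A = rng.uniform(1, 2, n_bats)                    # loudness
    r0 = rng.uniform(0, 1, n_bats)                   # initial pulse rates
    r = r0.copy()
    scores = np.array([merit(xi > 0.5) for xi in x])
    x_best, m_best = x[scores.argmax()].copy(), scores.max()

    for t in range(1, t_max + 1):
        for i in range(n_bats):
            f = rng.uniform(0, 0.1)                                   # eq. (7)
            v[i] = np.clip(v[i] + (x[i] - x_best) * f, -1, 1)         # eqs. (8)-(9)
            if r[i] >= rng.uniform():
                x[i] = np.clip(x[i] + v[i], -1, 1)                    # eq. (10)
            else:                                                     # local walk around best, eq. (11)
                x[i] = np.clip(x_best + rng.uniform(-0.01, 0.01, n_features) * A[i], -1, 1)
            m = merit(x[i] > 0.5)
            if A[i] > rng.uniform() and m > m_best:                   # accept improvement
                x_best, m_best = x[i].copy(), m
                r[i] = r0[i] * (1 - np.exp(-gamma * t))               # eq. (13)
                A[i] *= alpha                                         # eq. (12)
    return x_best > 0.5, m_best

# Toy demonstration only: the merit rewards selecting exactly the first 5 of 20 features.
target = np.zeros(20, dtype=bool)
target[:5] = True
subset, score = bat_algorithm(lambda mask: float(np.mean(mask == target)),
                              n_features=20, n_bats=20, t_max=200)
print(subset.nonzero()[0], score)
```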
### _Assessment Metrics_
To determine the fitness and results we need to clarify the metrics our models will use.
#### Iv-D1 Binary Classification
When we get our results for both RF and DNN, we get them in the form of a confusion matrix. This matrix describes the amount or ratio of what the data's class actually is compared to what it is described as. In the case of binary classification describing if a data point is benign or
malicious, there are universal metrics. We will be describing an attack as positive and a benign point as negative. As taken from [3], we will denote True Positive (TP), False Negative (FN), False Positive (FP), and True Negative (TN). Standard used metrics are defined below as
\[Accuracy =\frac{TP+TN}{TP+TN+FP+FN} \tag{14}\] \[Precision =\frac{TP}{TP+FP}\] (15) \[Recall =\frac{TP}{TP+FN}\] (16) \[FAR =\frac{FP}{FP+TN}\] (17) \[F1 =2\left(Recall^{-1}+Precision^{-1}\right)^{-1} \tag{18}\]
with FAR denoting the False Alarm Rate.
The choice of these metrics is mainly a result of suggested and common metrics from [18] and [8].
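A minimal sketch of eqs. (14)-(18), treating an attack as the positive class:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, FAR and F1 of eqs. (14)-(18) for labels
    encoded as 1 = attack (positive) and 0 = benign (negative)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    far = fp / (fp + tn)
    f1 = 2.0 / (1.0 / recall + 1.0 / precision)
    return dict(accuracy=accuracy, precision=precision,
                recall=recall, FAR=far, F1=f1)

print(binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]))
```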
#### Ii-C2 Multiclass
For multiclass classification, we want to use the same metrics, but there is no longer a single target class. As such, all metrics need to treat every class as a target and average in some way. Accuracy stays consistent as the ratio of correctly labeled points to all points. For the other metrics, we consider using either micro, macro, or weighted values.
Micro averages compute the values for the entire table, being blind to the individual classes. Because of this, the micro equations for F1 and Precision are equal and equate to Accuracy, defined as eq. (19).
\[Accuracy=\frac{Correctly\ Labeled\ Points}{All\ Points} \tag{19}\]
Macro averages compute the values on a class-by-class basis and then average all the independent values. Weighted averages work similarly to macro averages, but instead weigh the score from each class by the proportion of that class to the total. This means that weighted averages end up giving us slightly less information, as they mimic overall accuracy. Furthermore, our goal with using CFS-BA is to focus on each class with equal emphasis, since we use one-hot encoding. Therefore, macro scores will tend to give us results more reflective of how we performed compared to our expectations.
## III Data Description
The dataset used was created by the Communications Security Establishment (CSE) and the Canadian Institute for Cybersecurity (CIC) in 2018 to simulate network data for the sake of aiding in the creation and testing of NIDS construction [19, 20]. It contains simulated data for various attacks over 10 days, with each day having a different distribution. The full distribution on a log scale is described in fig. 3. Although there are 6 types of attacks, they are labeled as, and can be expanded to, 13 unique attack signatures. There are 83 numerical inputs, both continuous and discrete, and each day has at least 79 of them.
### _Data Preprocessing_
To get the data into a more suitable format, it underwent preprocessing. The first thing done to the data was to remove {**Dst.IP**, **Flow.ID**, **Src.IP**, **Src.Port**}, features that are present for one of the days but not others. Our model will be constructed for all of the days, and thus the large amount of NA values that the inclusion of these would generate would cause issues.
Of the remaining 79 predictors, 1 had NA values and 2 had \(\infty\) values. There are only 59,721 rows with NA and 36,039 rows with \(\infty\) out of 16.17 million. As such, it is better to remove the points instead of the predictors. Additionally, {**Timestamp**} can be removed. It is unique to the simulated environment and the inclusion of it would lead to a model built for past data, and not projecting towards future data. While it may seem like {**Dst.Port**} is similar to this, a model can be based on banning or including certain connections, so it is included. This leaves us with 78 predictors, of which 8 more can be removed as they have uniform values for all days.
Data is then normalized using MinMax normalization in the range of [0,1], as data is not normally distributed enough for Gaussian normalization, and is typically skewed about an extreme or completely non-negative, not befitting [-1,1] MinMax normalization. The full distribution for each variable can be found through [21]. Most of the predictors have highly skewed and nonlinear data, with a spike toward some extreme.
For our model analysis, we used a test-train split of 50/50 because our model is large enough such that a smaller portion of it can be used to train and the weaker classes are small enough such that we need both test and train to be a significant portion of the data to get accurate results.
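A hedged sketch of these preprocessing steps with toy data standing in for the cleaned predictors; following the text, normalization is applied before the 50/50 split.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# toy predictor matrix and labels standing in for the cleaned dataset
X = np.array([[2.0, 100.0], [8.0, 250.0], [5.0, 400.0], [1.0, 50.0]])
y = np.array([0, 1, 1, 0])

X_scaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)    # [0,1] MinMax
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.5, stratify=y, random_state=0)       # 50/50 split
print(X_train.shape, X_test.shape)
```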
## IV Results
Time results were achieved through Python run in Google Colab Pro+. CFS-BA, CFS-AO, and RF-IG were generated using both [22] and [23]. The RF results were generated using scikit-learn [22]. The DNN results were generated using TensorFlow [24].
Fig. 3: Log Scaled Frequency of Classes
The computer on which the hyperparameter results were achieved and compared in terms of run time has an Nvidia GTX 1650 GPU, 32 GB of RAM, and an Intel Core i7 processor. Finalized building and running of models was done through Google Colab Pro+, which utilized up to 53 GB of RAM for the run sessions.
### _Correlation_
Correlation structure is calculated according to the methodology in section II-B. After calculating our correlation, we stored it in a matrix and then added a dividing line indicating where the features end and the classes begin. The matrix is heat-encoded and shown in fig. 4. The black lines crossing the figure represent the divide between predictors and classes, and the diagonal line through the figure represents each variable's intercorrelation with itself, which is always 1. The main issue is that most of the regions that have strong feature to class intercorrelation also have strong feature intercorrelation. To maximize the result of \(M_{s}\) from eq. (5), we want to find features that have the lightest average coloration beneath the diagonal line in the top left rectangle and the darkest coloration in the bottom left rectangle. Specific predictor names are omitted for the sake of readability and interpretability. The original \(M_{s}\) score is **0.119** for multiclass and **0.248** for binary. The original information gain sum (IG) score is 1 for both sets by definition.
After running CFS-BA, CFS-AO, and RF-IG for our tuned hyperparameters, we are left with features that give us \(M_{s}\) scores and IG scores shown in table I and fig. 5. The four features common to all methods are {**Fwd.Pkt.Len.Mean, Flow.Pkts., Init.Fwd.Win.Byts, Fwd.Seg.Size.Min**}. Note that for table I, table III, and table II, cat. denotes categorical, bi. denotes binary, and time denotes build time.
### _Random Forest_
Once we determined our finalized hyperparameters, we ran the models and generated the performance results shown in table II. The performance results are calculated as discussed in section II-D and the time results compare the build time, describing the time taken to actually build the model. In terms of categorical results, CFS-BA and RF-IG both outperform the full feature set, with RF-IG being strongest in the most metrics and CFS-BA being significantly faster. For binary classification, all models not using the full set are worse than building a model for categorical classification and using it solely for binary decision making.
### _Deep Neural Network_
We run our DNN based upon the methodology declared in section II-A2. Results for our performance are shown in table III. With regards to performance, we see that CFS-BA outperforms the full feature set in every binary metric. In categorical classification, CFS-BA gets the highest accuracy and the lowest FAR but is beat by the full set in F1 and the RF-IG model in precision.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Methodology & Time (s) & Accuracy & Precision & FAR & F1 \\
\hline
Cat. Full & 6954 & 98.41\% & **90.45**\% & 22.78\% & 79.64\% \\
\hline
Cat. CFS-BA & 5727 & **98.44**\% & 89.5\% & 21.26\% & 81.02\% \\
\hline
Cat. CFS-AO & **4146** & 98.40\% & 88.60\% & 21.62\% & 80.21\% \\
\hline
Cat. RF-IG & 10359 & **98.44**\% & 93.5\% & **19.79\%** & **82.09\%** \\
\hline
Bi. Full & 5392 & **98.96**\% & **99.16**\% & **2.86\%** & **98.12\%** \\
\hline
Bi. CFS-BA & 2161 & 96.01\% & 91.49\% & 4.68\% & 93.26\% \\
\hline
Bi. CFS-AO & **1792** & 95.64\% & 90.69\% & 4.90\% & 92.69\% \\
\hline
Bi. RF-IG & 5493 & 97.19\% & 98.14\% & 8.08\% & 94.69\% \\
\hline
\end{tabular}
\end{table}
TABLE II: Random Forest Performance Results
Fig. 4: Correlation Heatmap for Full Feature Set
\begin{table}
\begin{tabular}{|l|c|c|c|}
\hline
Methodology & K & CFS & IG \\
\hline
Cat. CFS-BA & 26 & 0.147 & 0.630 \\
\hline
Cat. CFS-AO & 14 & **0.159** & 0.350 \\
\hline
Cat. RF-IG & 14 & 0.136 & **0.678** \\
\hline
\hline
Bi. CFS-BA & 8 & 0.323 & 0.279 \\
\hline
Bi. CFS-AO & 5 & **0.410** & 0.232 \\
\hline
Bi. RF-IG & 5 & 0.287 & **0.430** \\
\hline
\end{tabular}
\end{table}
TABLE I: Reduced Feature Subset Scores
Fig. 5: Venn Diagram of Categorical Feature Subset Overlap
Looking at the confusion matrices of RF and DNN, we see some patterns that may explain discrepancies. In both model types, DoS and BruteForce attacks were commonly mislabeled as one another. Web attacks, being the smallest and most difficult to catch, were actually more easily recognized than Infiltration attacks in RF models. DNN models nearly entirely missed Web attacks, and struggled just as much with Infiltration attacks. All other attacks performed relatively similarly for both model types. RF models were more inclined to mislabel across fewer label types, meaning their confusion matrices have fewer zeros than their DNN alternatives.
## V Discussion
Our models are able to achieve a maximum accuracy of 98.44% on the CSE-CIC-IDS2018 dataset, using RF-IG with 14 of the 70 features for categorical classification. Both RF and DNN results show CFS-BA and RF-IG to outperform the full feature set. CFS-AO outperforms the full set for DNN, but not for RF. Using the F1-score as a holistic metric, RF-IG raises it by 2.45% and 6.25% for RF and DNN respectively, while CFS-BA raises it by 1.38% and 0.96%. Using accuracy to determine applicability, RF-IG raises it by 0.34% and 0.03%, and CFS-BA by 0.28% and 0.03%.
The information from this paper suggests subset size or IG score may be more important than CFS. Contrasting CFS-BA to CFS-AO, the two benefits CFS-BA may have are a larger subset size and IG score. Of these two, IG score seems to be more likely to directly correlate to performance. The full feature set has the largest number of features and yet underperforms. CFS-BA has more features than RF-IG and yet generates comparable results for categorical classification and poorer results for binary classification. CFS-BA has an IG score 92.9% of RF-IG's for categorical classification and performs much better than CFS-AO, having an IG score 51.6% of RF-IG's.
Despite all of this, CFS-BA maintains advantages over RF-IG in the trade-off between build time and performance. It builds in 62.9% of the time for DNN and 55.3% of the time for RF. Towards explainability, CFS-BA seems to be a strong method for funneling down a set of useful features. While not as direct as RF-IG, the models share many features, and so it can be used as a rough RF-IG estimate.
In the realm of explainability, RF-IG provides individual scores of importance while CFS only scores subsets. This means that RF-IG is a superior and more accurate method when making recommendations for a particular vulnerability of a site, while CFS may be more applicable towards reinforcing aspects of a site, such as showing where the heaviest loads may be distributed if an attack is slow to be caught. However, as shown in our time results, if someone wishes to update individualized features in a rapid manner, CFS-BA can be a relatively comparable alternative to RF-IG.
Fundamentally, CFS using Spearman's correlation tests monotonicity, which may be why it does not directly indicate performance. We know our data is highly skewed, and it may have too many non-monotonic relationships. This could be because factors have monotonic relationships with one another within a certain classification, but the blending of all points generates noise that distorts calculations. A rationale for this would be the performance of binary models being weaker than categorical models. To overcome this, future models could look at binary \(M_{s}\) for each individual class, and take the union of all feature subsets. Another piece of data reinforcing this theory is that the 100-100-100 model was worse than the 50-25 model. This suggests either that the data is too sparse, or that the data appears sparse because the features selected are not an accurate reflection of the most beneficial features.
## VI Conclusion
In short, this paper added results for DNN and RF performance on CSE-CIC-IDS2018 using the full feature set, CFS-BA, CFS-AO, and RF-IG with the same model hyperparameters. The use of CFS-BA and RF-IG indisputably benefited the model: it removed more than 63% of the features and significantly improved model performance.
There are many directions the work can go in to increase its applicability in the field. CNN structure could be integrated as a replacement for DNN to attempt to account for the highly skewed distribution in the data as well as clear the feature obfuscation present. Shapley game theory scores could also be used to further improve the explainability of the models and provide more insight into why RF-IG excels where CFS may not. Lastly, scores other than CFS could be applied for filter comparison, or methods of correlation such as Pearson's or individualized Spearman's may be a better application of CFS.
|
2305.16979
|
Adaptive PD Control using Deep Reinforcement Learning for Local-Remote
Teleoperation with Stochastic Time Delays
|
Local-remote systems allow robots to execute complex tasks in hazardous
environments such as space and nuclear power stations. However, establishing
accurate positional mapping between local and remote devices can be difficult
due to time delays that can compromise system performance and stability.
Enhancing the synchronicity and stability of local-remote systems is vital for
enabling robots to interact with environments at greater distances and under
highly challenging network conditions, including time delays. We introduce an
adaptive control method employing reinforcement learning to tackle the
time-delayed control problem. By adjusting controller parameters in real-time,
this adaptive controller compensates for stochastic delays and improves
synchronicity between local and remote robotic manipulators. To improve the
adaptive PD controller's performance, we devise a model-based reinforcement
learning approach that effectively incorporates multi-step delays into the
learning framework. Utilizing this proposed technique, the local-remote
system's performance is stabilized for stochastic communication time-delays of
up to 290ms. Our results demonstrate that the suggested model-based
reinforcement learning method surpasses the Soft-Actor Critic and augmented
state Soft-Actor Critic techniques. Access the code at:
https://github.com/CAV-Research-Lab/Predictive-Model-Delay-Correction
|
Luc McCutcheon, Saber Fallah
|
2023-05-26T14:34:45Z
|
http://arxiv.org/abs/2305.16979v2
|
Adaptive PD Control using Deep Reinforcement Learning for Local-Remote Teleoperation with Stochastic Time Delays
###### Abstract
Local-remote systems allow robots to execute complex tasks in hazardous environments such as space and nuclear power stations. However, establishing accurate positional mapping between local and remote devices can be difficult due to time delays that can compromise system performance and stability. Enhancing the synchronicity and stability of local-remote systems is vital for enabling robots to interact with environments at greater distances and under highly challenging network conditions, including time delays. We introduce an adaptive control method employing reinforcement learning to tackle the time-delayed control problem. By adjusting controller parameters in real-time, this adaptive controller compensates for stochastic delays and improves synchronicity between local and remote robotic manipulators.
To improve the adaptive PD controller's performance, we devise a model-based reinforcement learning approach that effectively incorporates multi-step delays into the learning framework. Utilizing this proposed technique, the local-remote system's performance is stabilized for stochastic communication time-delays of up to 290ms. Our results demonstrate that the suggested model-based reinforcement learning method surpasses the Soft-Actor Critic and augmented state Soft-Actor Critic techniques. Access the code at: [https://github.com/CAV-Research-Lab/Predictive-Model-Delay-Correction](https://github.com/CAV-Research-Lab/Predictive-Model-Delay-Correction)
## I Introduction
Remote control of complex systems has become an essential capability in today's interconnected world. Local-remote systems are bilateral teleoperated robotic manipulation devices which provide a means for individuals to interact with environments from remote locations. These systems find widespread use across various industries. For instance, remote surgery enables access to expert surgeons located far away from the patient [1]. In nuclear power plants, local-remote systems minimize human exposure to radiation during tasks near the reactor core [2]. In space engineering, local-remote systems allow for conducting repairs or experiments on space stations while ensuring the safety of astronauts [3].
Local-remote teleoperation provides a framework for an operator to interact with a remote environment using intermediary devices. These intermediary devices consist of two parts: the local device and the remote device. In this work, we focus on local-remote systems consisting of two identical robotic arms linked by position-mapping Proportional-Derivative (PD) controllers. The local device is controlled by a human operator, while the remote system translates actions into a parallel environment. One of the key challenges in local-remote teleoperation is the communication time-delay, which can adversely affect control performance. Stochastic variable time delay, in particular, poses a significant challenge to modern control solutions, as existing methods are unable to effectively stabilize the robotic system in highly stochastic and long delays. To address this challenge, we propose using reinforcement learning (RL) to stabilize robotic manipulators more effectively and enhance system telepresence.
Action delay and observation delay are two important concepts in understanding the time intervals that impact the state of a teleoperated local-remote system. In this system, a remote operator controls a robot arm located in a different physical location through a local interface. Action delay refers to the time lag between when the operator sends a command and when the robot arm completes the action. For example, if the operator sends a command to move the robot arm forward, action delay would refer to the time it takes for the remote robot to receive and execute the command. On the other hand, observation delay is the duration between when the robot arm captures its current state and when the operator receives the updated information.
In model-free RL, the agent learns an optimal policy through a series of trial-and-error experiences in an environment without explicitly constructing a model of the environment. A typical approach to enhancing model-free RL performance under time delay is to add a buffer of actions taken over the delay period to the state of the system. However, this approach increases the state space exponentially, making the problem intractable [4]. Another limitation of model-free methods is that it can be difficult to transfer learnt policies when the delay period changes. This is due to the input dimensions depending on the delay length.
These limitations motivate us to use model-based solutions. Model-based methods attempt to learn the dynamics of the environment explicitly. Existing model-based methods require recursive predictions to obtain a future state estimate. This computational inefficiency can limit the real-world applications [5, 6, 7]. Instead, we propose using a computationally efficient predictive model to mitigate the effects of stochastic time delay in control systems, without the use of planning.
Our proposed model-based RL approach provides safe and real-time adaptation to increase the performance of PD controllers in delayed conditions. In this approach, the PD controller parameters are predicted as the output of the RL agent [8], where the agent learns to minimize the error between the local and remote devices. By using RL, the need for pre-determined manually tuned parameters is alleviated
since the system is robust to changes in the environment dynamics [9]. This combination of methods provides some of the safety guarantees of classical control, and the improved performance of RL.
The work presented in this paper contributes to the literature by providing an RL-based adaptive PD controller which has been optimized specifically for stochastically delayed conditions.
In summary, the main contributions of this paper are:
* Introduction of State-Buffer based State Prediction (SBSP), an efficient framework for model-based RL under time delay.
* A model-based approach to delayed RL, Predictive Model Delay Correction (PMDC), which uses SBSP to address the adverse effects of time delay in control systems.
* Application to the task of synchronising local-remote systems through the use of an adaptive PD controller.
* Extension of PMDC to stochastic bi-directional delays by using state augmentation.
## II Related Work
When dealing with time delays in RL tasks, additional uncertainty arises that can affect the learning process. Typical RL algorithms, such as Soft-Actor-Critic (SAC) [10, 11], are oblivious to the effects of delayed actions and thus have reduced performance imposed by partial observability.
In order to make the MDP fully observable, an augmented approach [4] has been proposed in which past actions taken during the delay period are incorporated into the state information. This approach enables the derivation of an optimal policy [12] by ensuring the Markov property, but it comes at the cost of the state space expanding as the delay length increases.
Additionally, it is important to note that the actions that are augmented to the state information are off-policy and do not represent a trajectory chosen under the current policy. This can lead to suboptimal results in practice. To address this issue, Bouteiller et al. [12] proposed a method that resamples off-policy action trajectories into on-policy trajectories. However, this trajectory resampling requires additional computation as the delay increases.
To overcome these limitations, researchers have explored alternative representations of the action buffer. For instance, Liotet et al. [13] introduced a belief representation method that integrates environment dynamics knowledge. This technique appends a condensed belief representation of the action buffer to the state information instead of the entire action buffer. Although it reduces the impact of augmented state information, it also decreases observability.
Due to the limitations of model-free methods, model-based approaches were developed to predict the state in a delayed environment, where a model estimates the transition probabilities in a multi-step delayed framework [14, 7]. To do this a model can recursively 'undelay' the action delays [5, 7], however, this has poor computational complexity (see Fig. 2) due to numerous model predictions.
In addition, the majority of the literature focuses solely on constant delays [15, 16, 17, 18, 19], with planning methods unable to directly address stochasticity in random delays. The works relating to stochastic delays are model-free approaches which use state augmentation [4, 12, 20]. It must be noted that the stochastically delayed MDP used in our research differs from [20], in order to avoid unrealistic assumptions about delayed conditions that limit applicability.
To align with the current architecture and provide additional safety in delayed conditions, we apply Predictive Model Delay Correction (PMDC) to the task of real-time adaptive PD controller tuning. Adaptive control of Proportional-Integral-Derivative (PID)-based controllers using RL has been explored in previous work [21, 22, 23, 8], where RL predicts the PID-based controller parameters at each time step. This approach was first proposed by Wang et al. in 2007 [23], who demonstrated its effectiveness in controlling a chaotic system. Our work contributes to the existing literature by presenting an adaptive PD controller that overcomes time delays via the application of PMDC, thereby filling a gap in the literature. Although a PD controller is used in our application, the proposed methodology can be generalized to other feedback control schemes, such as PID or similar variations, with similar outcomes.
## III Prerequisites
### _Markov Decision Process_
A Markov Decision Process (MDP) is a mathematical framework for modelling decision-making systems where the next state depends only on the current state and action. It can be defined as a tuple \(MDP=(S,A,d_{0},p)\) where:
* \(S\) is the state space, which is the set of all possible states of the system.
* \(A\) is the action space, which is the set of all possible actions that the agent can take.
* \(d_{0}(s_{0}):S\rightarrow\mathbb{R}\) is the initial state distribution, which is a probability distribution over the states that describes the starting state of the system.
* \(p(s^{\prime},r|s,a):S\times A\rightarrow\mathbb{R}\) is the transition probability function, which describes the probability of transitioning from state \(s\in S\) to state \(s^{\prime}\in S\) and receiving reward \(r\) when taking action \(a\in A\).
This paper assumes knowledge of the reward function \(r\), but does not assume knowledge of the transition probability function \(p\). When action delay, denoted by \(\alpha\), is applied to the MDP an agent will enact a trajectory between action selection and execution (1). This increases the complexity of the task since the state of the system may have changed considerably under the effect of previous actions in the trajectory before the effective action is executed.
From this definition, we can derive a distribution of trajectories, denoted by \(\tau\), that an agent may follow under \(\alpha\), before the action is actually applied.
\[\tau_{\pi}=d_{0}(s_{0})\prod_{t=0}^{\alpha}\pi(a_{t}|s_{t})p(s_{t+1}|s_{t},a_{ t}) \tag{1}\]
where \(\pi\) represents the agent's policy.
The distribution of trajectories that an agent may follow during the observation delay, denoted by \(\omega\), before the observation is returned to the agent, is given by:
\[\tau_{\pi}=d_{\alpha}(s_{\alpha})\prod_{t=\alpha}^{\alpha+\omega}\pi(a_{t}|s_{t} )p(s_{t+1}|s_{t},a_{t}) \tag{2}\]
where \(d_{\alpha}(s_{\alpha})\) represents the state distribution after action delay. Agents struggle to learn how one state transitions into another since increasing delay causes agents to enact longer trajectories between selecting an action and receiving a reward. Additionally, as the duration of the delay increases, the agent is confronted with the credit assignment problem [24], whereby it becomes progressively challenging to attribute credit to individual actions as other actions are performed during the delay period, rendering it difficult to discern which actions correspond to which rewards.
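To make the effect of an action delay \(\alpha\) concrete, the minimal wrapper below queues every selected action and only executes it \(\alpha\) steps later, so the reward observed at time \(t\) reflects an action chosen at \(t-\alpha\). The reset/step interface mirrors the common Gym convention; the class name, the no-op action, and the environment itself are illustrative assumptions, not code from this work.

```python
from collections import deque

class ActionDelayWrapper:
    """Applies each chosen action alpha steps after it is selected."""

    def __init__(self, env, alpha, noop_action):
        self.env = env
        self.alpha = alpha
        self.noop_action = noop_action

    def reset(self):
        # Pre-fill the buffer so the first alpha steps execute a no-op action.
        self.buffer = deque([self.noop_action] * self.alpha, maxlen=self.alpha + 1)
        return self.env.reset()

    def step(self, action):
        self.buffer.append(action)              # action chosen now ...
        delayed_action = self.buffer.popleft()  # ... but an older one is executed
        return self.env.step(delayed_action)
```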
### _Augmented Markov Decision Process_
The regular MDP formulation, when applied to delayed problems can lead to arbitrarily suboptimal policies [25] due to partial observability. In order to ensure the Markov property the Augmented MDP is proposed:
* \(\mathcal{X}=S\times A^{n}\) is the state space, where \(n\) is the total delay at a given time step.
* \(A\) is the action space, which is the set of all possible actions that the agent can take.
* \(\delta_{0}(s_{0})=\delta(s_{0},a_{0},...,a_{n-1})=\delta(s_{0})\prod_{i=0}^{n-1}\delta(a_{i}-c_{i})\) is the initial state distribution, where \(\delta\) is the Dirac delta function. If \(y\sim\delta(\cdot-x)\) then \(y=x\) with probability one. \((c_{i})_{i=0}^{n-1}\) denotes the initial sequence of actions.
* \(p(s^{\prime},r|s,a):S\times A\rightarrow\mathbb{R}\) is the transition probability function, which describes the probability of transitioning from state \(s\in S\) to state \(s^{\prime}\in S\) and receiving reward \(r\) when taking action \(a\in A\).
The state information of the augmented MDP contains the sequences of actions which have not received corresponding observations due to action and observation delay.
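A minimal way to realize the augmented state space \(\mathcal{X}=S\times A^{n}\) described above is to concatenate the current observation with the \(n\) most recent actions that have not yet produced observations. The sketch below assumes flat NumPy observations and actions; it illustrates the construction only and is not the authors' implementation.

```python
import numpy as np
from collections import deque

class StateAugmenter:
    """Concatenates the observation with the last n actions (the action buffer)."""

    def __init__(self, n_delay, action_dim):
        # Start with zero actions so the augmented state has a fixed size from step 0.
        self.actions = deque([np.zeros(action_dim)] * n_delay, maxlen=n_delay)

    def augment(self, obs, last_action):
        self.actions.append(np.asarray(last_action, dtype=float))
        return np.concatenate([np.asarray(obs, dtype=float), *self.actions])
```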
### _Time-delayed RL_
Delayed environments as described in Section III-B use an augmented state space and delayed dynamics. Traditional algorithms such as SAC will still function in randomly delayed conditions; however, their performance deteriorates because of the more difficult credit assignment caused by delayed observations and rewards, along with the exploration and generalization burdens of delayed environments.
## IV Local-Remote Control
This section outlines a technique for converting a standard RL environment into a local-remote system equivalent. In a local-remote system, the local operator executes the task, while a remote agent runs concurrently and aims to track the operator's path despite the presence of stochastic time delays.
The process begins with the operator executing an action in the local environment. Since the operator is acting in the local environment it has access to all state information concerning itself and the target.
The observation is first transmitted from the operator to the remote environment, after which an observation delay is applied. Subsequently, the remote observation is created by replacing the remote agent's target information with the operator's location, as described by (3).
\[s_{L}\leftarrow\begin{bmatrix}x_{L}&y_{L}&z_{L}\\ \dot{x}_{L}&\dot{y}_{L}&\dot{z}_{L}\\ \vdots&\vdots&\vdots\\ x_{T}&y_{T}&z_{T}\end{bmatrix},\qquad s_{R}\leftarrow\begin{bmatrix}x_{R}&y_{R}&z_{R}\\ \dot{x}_{R}&\dot{y}_{R}&\dot{z}_{R}\\ \vdots&\vdots&\vdots\\ x_{L}&y_{L}&z_{L}\end{bmatrix}. \tag{3}\]
where \(s_{L}\) and \(s_{R}\) represent the state information of each agent; \(x_{R}\), \(y_{R}\) and \(z_{R}\) represent the remote agent's end-effector position, while \(x_{L}\), \(y_{L}\) and \(z_{L}\) represent the corresponding values for the local device end-effector, and \(x_{T}\), \(y_{T}\) and \(z_{T}\) represent the target's position.
After the remote observation is calculated, it is fed into the SAC controller, which outputs the PD controller parameters \(K_{P}\) and \(K_{D}\). The error of the system is calculated as the Euclidean distance between the operator and remote end-effector positions. This error \(e_{t}\) is used both as the reward signal (4) and as the error term in the PD controller (5).
\[e_{t}=-\sqrt{(x_{R}-x_{L})^{2}+(y_{R}-y_{L})^{2}+(z_{R}-z_{L})^{2}} \tag{4}\]
\[a=K_{P}(e_{t})+K_{D}\frac{\partial(e_{t})}{\partial t}, \tag{5}\]
(5) represents the PD controller which takes in \(K_{P}\) and \(K_{D}\) from SAC and the error from (4). The output of the PD controller \(a\) is the action vector to be applied to the environment and represents forces to be applied to the remote end effector.
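The sketch below strings together (3)-(5): it builds the remote observation by substituting the local end-effector position for the remote agent's target block, computes the tracking error, and turns the gains \(K_{P}\), \(K_{D}\) (as predicted by SAC) into a force command. The array layout, the per-axis error used inside the controller, and the finite-difference derivative are simplifying assumptions, not the authors' exact implementation.

```python
import numpy as np

def remote_observation(s_remote, local_xyz):
    """Eq. (3): substitute the local end-effector position for the remote target block."""
    s = s_remote.copy()
    s[-1, :] = local_xyz                # last row assumed to hold the target (x, y, z)
    return s

def pd_step(local_xyz, remote_xyz, kp, kd, prev_err, dt=0.01):
    """Eqs. (4)-(5): reward-style error and PD force command."""
    reward = -np.linalg.norm(remote_xyz - local_xyz)     # Eq. (4), scalar reward signal
    err = local_xyz - remote_xyz                          # per-axis tracking error (assumption)
    force = kp * err + kd * (err - prev_err) / dt         # Eq. (5), finite-difference derivative
    return force, reward, err
```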
Fig. 1: Local-Remote System Architecture
## V Method
In this section, we present the implementation of PMDC. We first provide an overview of the challenges PMDC aims to solve, we then compare against prior work and provide further details on implementation.
Delayed environments require knowledge of previous actions in order to be fully observable. Appending the action history over the delay period is one method frequently used in model-free literature but begins to deteriorate in performance as time delay increases. Model-based methods offer better performance by recursively predicting the effect of actions in the action buffer and providing the future state to the RL algorithm. There are two downsides of current model-based approaches to time delay, which PMDC aims to address: 1) recursive model calls can lead to long computation times. 2) the model is only able to handle constant delays.
PMDC addresses problem 1 by introducing State-Buffer based State Prediction (SBSP), as it reduces the number of predictions needed from the model per episode. SBSP solves the problem by storing previously calculated future states that subsequent time steps can use. Initially, at training time step 0, PMDC predicts \(\alpha\) steps into the future and stores this prediction. At training time step 1, PMDC uses this stored prediction to make only one additional prediction to achieve \(\alpha+1\) steps into the future from the starting state, which corresponds to the second step after the action delay. This process continues until the end of the episode, with PMDC updating the future state at each time step, based on the action chosen by RL. This approach allows for \(\alpha\) predictions at time step 0, and then one prediction per subsequent time step. In contrast, the prior method for delay-corrected state prediction, hereby referred to as Action-Buffer-based State Prediction (ABSP), calculates \(\alpha\) predictions at every time step. ABSP has been utilized in a number of previous studies [14, 26, 7]. Notably, Firoiu et al. had to restrict delay in their experiments due to the escalating computational complexity associated with ABSP [5].
Bootstrapping predictions from the initial state can lead to accumulating errors towards the end of the episode. To correct these errors, PMDC stores all predicted future states. When it receives the true state of the system observed after the delay period, PMDC calculates the error between the predicted future state \(\hat{s}_{t}\) and the true state \(s_{t}\) (\(\hat{s}_{t}-s_{t}\)). PMDC then deducts this error from its list of calculated future states, including the current prediction for the future state of the system \(\alpha\) steps ahead.
PMDC addresses problem 2 (stochastic delays) by using state augmentation, where only actions taken over the stochastic range are added to the state information, as opposed to the full delay length used in Augmented state SAC (A-SAC). By only using state augmentation over the stochastic range, the amount of actions added to the state space is greatly reduced compared to A-SAC, which appends all actions.
Fig. 2 shows a visual comparison of (a) ABSP for providing a delay-corrected state to the RL agent with (b) SBSP, our proposed method. The RL loop refers to the iterative process of updating the RL behavior policy, which occurs \(T\) times per episode, where \(T\) represents the total number of time steps in the episode. ABSP, used in various model-based delayed RL methods, uses a buffer of actions to calculate a corresponding future state at each time step, which requires extensive computation: one episode requires \(\alpha\times T\) model predictions.
In contrast, the proposed method reuses previous predictions when calculating the future state, requiring only \(\alpha+T\) model predictions. SBSP can be interpreted as 4 stages (labelled in Fig. 2 accordingly): 1) Initial delay correction at the beginning of an episode to obtain the future state on the first time step, 2) Determining the next future state based on the chosen action from RL, 3) Recalibrating the future state buffer using the observed error between the predicted and observed states, and 4) Updating the neural network models using the loss between predicted and observed states.
Algorithm 1 outlines the steps involved in the SBSP algorithm; when used for stochastic delays, it uses an augmented state for \(s_{t}\). In the stochastic case, the state \(s_{t}\) contains the action history over the entire stochastic range. For example, if the delay is 8-12 time steps, the state would include the 4 previous actions. In this algorithm, \(\pi\) represents the behavioral policy learned through RL, \(s_{t}\) represents the state at time step \(t\), and \(M_{i}\) denotes each model in the ensemble, with \(i\) representing its index in the collection. \(\mathcal{F}=(\hat{s}_{t},\hat{s}_{t+1},...,\hat{s}_{t+\alpha})\) is a buffer that stores previously calculated future state predictions \(\hat{s}_{t}\) from the current time step \(t\) to \(t+\alpha\), where \(\hat{s}_{t}\approx s_{t}\).
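A compact rendering of the bookkeeping described above is given below: a buffer of future-state predictions, a single model call per time step, and a recalibration step that subtracts the observed prediction error from every stored estimate. The model is any callable \((s,a)\mapsto\hat{s}'\); class and method names are illustrative, and the indexing of the recalibration is a simplification of Algorithm 1 rather than a transcription of it.

```python
from collections import deque

class SBSPBuffer:
    """State-Buffer based State Prediction: reuse stored future-state estimates."""

    def __init__(self, model, alpha):
        self.model = model          # callable: (state, action) -> next-state estimate
        self.alpha = alpha
        self.preds = deque()        # predictions for s_t, s_{t+1}, ..., s_{t+alpha}

    def initialise(self, s0, initial_actions):
        # alpha model calls at the start of the episode only.
        s = s0
        self.preds.append(s)
        for a in initial_actions[: self.alpha]:
            s = self.model(s, a)
            self.preds.append(s)
        return self.preds[-1]       # delay-corrected state handed to the RL agent

    def step(self, action):
        # Exactly one model call per subsequent time step.
        self.preds.append(self.model(self.preds[-1], action))
        self.preds.popleft()
        return self.preds[-1]

    def recalibrate(self, observed_state):
        # Subtract the error between the oldest stored prediction and the delayed observation
        # from every stored estimate (the local linearity assumption discussed later).
        error = self.preds[0] - observed_state
        self.preds = deque(p - error for p in self.preds)
```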
### _Ensemble Learning_
Ensemble learning is a technique that involves combining the predictions of multiple models to improve accuracy. The
Fig. 2: (a) illustrates ABSP and (b) illustrates SBSP, the method proposed by this work
benefit of using an ensemble is discussed in [27] where the varying initial weights allow for slight differences in model predictions with their average being a more robust and accurate prediction than any individual model alone.
This work employs 5 neural network models with varying initial weights for the ensemble in its experiments, as this number has been found to enhance performance while maintaining reasonable computational requirements. While it offers the possibility of enhancing performance, the trade-off is increased computational complexity that grows linearly with the number of models used.
The on-board model is an ensemble of multiple feed-forward neural networks with the average prediction being used as the predicted future state.
\[\mathcal{L}(s_{i},\hat{s}_{i})=\begin{cases}\frac{1}{2}(s_{i}-\hat{s}_{i})^{2},&\text{if }|s_{i}-\hat{s}_{i}|<\delta\\ \delta\left(|s_{i}-\hat{s}_{i}|-\frac{1}{2}\delta\right),&\text{otherwise.}\end{cases} \tag{6}\]
Where \(\delta\) represents the threshold parameter for switching between L1 and L2 loss functions, \(s_{i}\) is the observed state, and \(\hat{s}_{i}\) is the model-predicted state.
This work uses neural networks to approximate the transition function because of their ability to handle the non-linearities present in the system dynamics. We use the Huber loss (6) [28] to calculate the prediction error before backpropagation, as it provides stable, robust convergence. The variance over time (Fig. 3) shows how the neural networks learn from their predictions and consolidate their estimations.
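A minimal NumPy rendering of this ensemble idea is sketched below: several independently initialized regressors whose mean output serves as the state prediction, each updated with the gradient of the Huber loss in (6). The linear models stand in for the feed-forward networks used in this work, and the learning rate, \(\delta\), and model form are assumptions made only for illustration.

```python
import numpy as np

def huber(residual, delta=1.0):
    """Elementwise Huber loss, Eq. (6)."""
    small = np.abs(residual) < delta
    return np.where(small, 0.5 * residual**2, delta * (np.abs(residual) - 0.5 * delta))

class EnsembleDynamics:
    """Average of several independently initialised linear models (stand-ins for MLPs)."""

    def __init__(self, in_dim, out_dim, n_models=5, lr=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = [rng.normal(scale=0.1, size=(in_dim, out_dim)) for _ in range(n_models)]
        self.lr = lr

    def predict(self, x):
        # Mean prediction over the ensemble is used as the future-state estimate.
        return np.mean([x @ w for w in self.weights], axis=0)

    def update(self, x, target, delta=1.0):
        for i, w in enumerate(self.weights):
            residual = x @ w - target
            # Gradient of the Huber loss w.r.t. the prediction: residual clipped at +/- delta.
            grad_pred = np.clip(residual, -delta, delta)
            self.weights[i] = w - self.lr * np.outer(x, grad_pred)
        return huber(residual, delta).mean()
```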
## VI Experiments
In this section, we compare PMDC with SAC and A-SAC. To conduct our experiments, we use the local-remote equivalent (see Fig. 1) of the FetchPush-v1 MuJoCo environment [29]. This environment features a 7-DoF robot arm that must push an object to a specified target location. We chose this environment because it represents a challenging and realistic task for a local-remote system that involves frequent interactions with an object. The time between transitions is 10ms for this environment, so an 80ms delay is represented by 8 delayed time steps, and each episode is 50 time steps.
Since action and observation delay are mathematically equivalent [20], we simplify our experiments by using constant action delays that are corrected by the predictive model, and stochastic observation delays that require state augmentation. The environment has constant action delays of 80ms, 160ms, and 240ms, and stochastic observation delays ranging from 10ms to 50ms. This delay range was selected to demonstrate the performance of our models under both short and long delays, ensuring a fair comparison. These experiments increase the constant delay, to demonstrate how the predictive model performs, since increasing the stochastic delay range only increases the augmented state space applied on top of PMDC.
### _Discussion_
In the context of the adaptive control task, the performance metrics of both SBSP and ABSP exhibit notable similarities. Nonetheless, as the delay intensifies, ABSP exhibits a marginal superiority over SBSP, attributed to its increased ability to manage changes in non-linearity. However, SBSP gains a significant computational efficiency advantage by assuming changes in error are linear and correcting them in the recalibration phase. This local linearity assumption allows SBSP to reuse prior predictions with minimal degradation in performance (Fig. 4(a,b,c)).
Fig. 3: Ensemble Learning variance over initial training period
\begin{table}
\begin{tabular}{c c c c} \hline
**Algorithm** & **90-130ms** & **170-210ms** & **250-290ms** \\ \hline PMDC & **-0.030 \(\pm\) 0.013** & -0.038 \(\pm\) 0.020 & **-0.043 \(\pm\) 0.031** \\ A-SAC & -0.034 \(\pm\) 0.018 & **-0.034 \(\pm\) 0.012** & -0.15 \(\pm\) 0.17 \\ SAC & -0.053 \(\pm\) 0.14 & -0.24 \(\pm\) 0.33 & -0.25 \(\pm\) 0.3 \\ \hline \end{tabular}
\end{table} TABLE II: Mean and standard deviation over the final training episode
Fig. 4: (a),(b),(c) Average training reward over Environment time-steps (d),(e),(f) Average training reward over Environment time-steps (g),(h),(i) Best trajectory out of 10 testing episodes for each of the RL algorithms: (a) PMDC, (b) A-SAC, and (c) SAC
\begin{table}
\begin{tabular}{c c c c} \hline
**Method** & **90-130ms** & **170-210ms** & **250-290ms** \\ \hline ABSP & 2122 & 2808 & 3504 \\ SBSP (PMDC) & 1538 & 1560 & 1577 \\ A-SAC & 1451 & 1461 & 1477 \\ SAC & 1477 & 1450 & 1457 \\ \hline \end{tabular}
\end{table} TABLE I: Average time over 3 runs to train an agent for 80k time steps on an NVIDIA GeForce 3080
The respective efficiencies of the algorithms are demonstrated by the duration required to complete 80k time steps, as depicted in Table I. This efficiency emerges from the fact that each increment in delay corresponds to \(1\) additional prediction for SBSP per episode, while ABSP necessitates the computation of \(50\) more predictions per episode.
For smaller delays, both PMDC and A-SAC achieve similar performance, as the augmented state space remains manageable in terms of dimensionality. However, as the delay length increases, the differences between the approaches become more pronounced, with PMDC converging to an optimal policy much faster than SAC and A-SAC. The results in Fig. 4 show the faster convergence of PMDC over SAC and A-SAC in various delayed conditions for the task of synchronising two robotic manipulators whilst they complete the task of pushing a brick. Table II shows that PMDC has comparable performance to A-SAC for small delays and superior final performance compared to A-SAC and SAC as delay increases.
The advantage of a predictive model for long delays is that it can simulate transitions during the initial delay period, allowing for better use of time steps. Since it always acts in a delay-corrected environment, it is able to see the results of \(t\) interactions with the environment, whereas the typically delayed system will only experience \(t-(\alpha+\omega)\) interactions, as the final actions it chooses will not be observed until after the end of the episode.
The trajectories with the highest reward over 10 testing episodes under 250-290ms delays are shown in (g), (h), and (i). These trajectories demonstrate that the PMDC offers greater stability and can follow the path of the operator even under highly delayed conditions.
Table I demonstrates experimentally that SBSP's growth in computation time is negligible in comparison to ABSP's.
## VII Conclusion
This work introduced an adaptive PD controller using delay-corrected RL and evaluated its performance in the task of synchronising local-remote systems. This framework enables the training of adaptive PD controllers specialised in mapping local-remote system positions. We then demonstrated that through the use of a predictive dynamics model, we are able to increase the final performance of our controller further and accelerate convergence. This approach was evaluated against SAC and A-SAC, two approaches typically used to handle delayed control.
Future work can provide an analysis of the computation time against comparable model-based methods in various other delayed tasks. Another improvement may arise from experimentation with architectural differences such as RBF and probabilistic networks [7, 30] to further improve performance. Similarly, another useful comparison would be against methods that attempt to condense the action history, e.g. via a belief representation [13] or through a recurrent network. Further work can also examine how to handle delays that are not a multiple of the time-step interval, and experiment with real-world hardware or with the effect of planning methods on performance and computation time.
## Acknowledgments
The first author thanks research funding support from the UK Engineering and Physical Sciences Research Council (project ref: EP/T518050/1) and Veolia Nuclear Solutions.
|
2301.02223
|
Measuring the Space of Metaplectic Whittaker Functions
|
Whittaker functions are special functions that arise in $p$-adic number
theory and representation theory. They may be defined on representations of
reductive groups as well as their metaplectic covering groups: fascinatingly,
many of their number theoretic applications survive the transition between the
reductive and metaplectic cases. However, one notable difference is that the
space of Whittaker functions on a reductive group over a nonarchimedean local
field $F$ is one-dimensional, whereas this is no longer true in the metaplectic
case. In a previous paper, the second author showed that the dimension of the
space of Whittaker functions on an arbitrary $n$-fold metaplectic cover of
$GL_r(F)$ can be counted in terms of the number of solutions to a particular
system of linear Diophantine equations in terms of $n$ and $r$. In this paper,
we calculate two precise formulae for $\dim(\mathfrak{W})$, one inspired by
viewing this system as a homogenous specialization of an inhomogenous system
and the other by the structure of the coroot lattice of $GL_r(F)$. Then we use
these formulae to investigate a homomorphism between $\mathfrak{W}$ and a
particular quantum group module, built by the second author in a previous
paper, and show precisely when this map is well-defined for any choice of basis
for $\mathfrak{W}$.
|
Ilani Axelrod-Freed, Claire Frechette, Veronica Lang
|
2023-01-05T18:48:42Z
|
http://arxiv.org/abs/2301.02223v1
|
# Measuring the Space of Metaplectic Whittaker Functions
###### Abstract
Whittaker functions are special functions that arise in \(p\)-adic number theory and representation theory. They may be defined on representations of reductive groups as well as their metaplectic covering groups: fascinatingly, many of their number theoretic applications survive the transition between the reductive and metaplectic cases. However, one notable difference is that the space of Whittaker functions on a reductive group over a nonarchimedean local field \(F\) is one-dimensional, whereas this is no longer true in the metaplectic case. In a previous paper, the second author showed that the dimension of the space of Whittaker functions on an arbitrary \(n\)-fold metaplectic cover of \(GL_{r}(F)\) can be counted in terms of the number of solutions to a particular system of linear Diophantine equations in terms of \(n\) and \(r\). In this paper, we calculate two precise formulae for \(\dim(\mathfrak{W})\), one inspired by viewing this system as a homogenous specialization of an inhomogenous system and the other by the structure of the coroot lattice of \(GL_{r}(F)\). Then we use these formulae to investigate a homomorphism between \(\mathfrak{W}\) and a particular quantum group module, built by the second author in a previous paper, and show precisely when this map is well-defined for any choice of basis for \(\mathfrak{W}\).
## 1 Introduction
Whittaker functions arise in \(p\)-adic number theory and representation theory, specifically in the study of automorphic forms over local fields and the study of principal series representations of reductive groups. They can be written in many forms: as integrals over matrix groups, as generating functions over many different combinatorial objects, as coefficients of automorphic forms, and in some cases as partition functions of lattice models. In particular, when the lattice model is solvable, this viewpoint leads to a surprising connection between the algebraic structures of the space of Whittaker functions and of modules for quantum groups.
One type of Whittaker functions of particular interest are _metaplectic_ Whittaker functions, which are Whittaker functions on the principal series representations of _metaplectic covering groups_, central extensions of a reductive group by the \(n\)-th roots of unity. These groups are named after the first "Metaplectic Group," the unique double cover of the _symplectic_ group \(Sp_{2n}\) discovered by Weil [19]. However, the machinery generating this particular cover can be applied in far greater generality and results in non-algebraic groups that inherit much of the interesting representation theory and number theory of their algebraic base groups. One reason for this phenomenon is that if \(G\) is a group that is also a topological space, the metaplectic cover is a covering space in the topological sense as well: thus, the metaplectic covers of reductive groups, which are equipped with a topological structure, are of particular interest. These groups have been studied in various levels of generality by Kazhdan and Patterson [11], Matsumoto [13], Brylinski-Deligne [5], McNamara [15], Gan, Gao, and Weissman [8, 9], and many others. For our purposes, a particularly useful description is that of Brylinski-Deligne [5], who proved that metaplectic covers of reductive \(p\)-adic groups are in correspondence with symmetric Weyl-group invariant bilinear forms on the cocharacter lattice. We will examine the structure of these covering groups in more detail in Section 2, following the treatment of the second author in [7].
The focus of this paper is the reductive group \(G=GL_{r}(F)\), the general linear group of \(r\times r\) matrices over a nonarchimedean local field \(F\) containing \(\mu_{2n}\). In this case, which was first studied by Matsumoto [13], the bilinear forms prescribed by Brylinski-Deligne [5] recover a subset of the Kazhdan-Patterson covers [11] and may be explicitly parametrized as in Frechette [7] as \(B_{c,d}\) in terms of two parameters \(c,d\in\mathbb{Z}\) (see Section 2 for the details of this construction). In general, metaplectic covers of \(G\) are denoted \(\widetilde{G}\), so let \(\widetilde{G}_{c,d,r,n}\) be the \(n\)-fold cover of \(GL_{r}(F)\) corresponding to \(B_{c,d}\). It is important to note that while there may
be multiple bilinear forms corresponding to a given cover, any such form will suffice for our purposes. We refer the reader to [11] or [7] for more detailed descriptions of which forms give identical or similar covers.
One interesting difference between the algebraic (i.e., non-metaplectic) and metaplectic cases is the dimension of the space of Whittaker functions. For a reductive algebraic group, the space of Whittaker functions on any principal series representation is one-dimensional [18, 17, 10]. In the metaplectic case, however, the construction of principal series representations becomes more complicated, due to the fact that the _metaplectic torus_\(\widetilde{T}\), the preimage in the metaplectic cover \(\widetilde{G}\) of the torus \(T(F)\), is no longer necessarily abelian. Due to this phenomenon, the dimension of the space of Whittaker functions becomes dependent on the choice of cover. As shown by McNamara [15], if \(\mathfrak{W}\) is the space of metaplectic Whittaker functions for a principal series representation on \(\widetilde{G}\) and \(H\) is the maximal abelian subgroup of \(\widetilde{T}\), then
\[\dim(\mathfrak{W})=\left|\widetilde{T}/H\right|,\]
and the basis vectors of \(\mathfrak{W}\) may be parametrized by the cosets in \(\widetilde{T}/H\). Note that the space of Whittaker functions is traditionally denoted \(\mathfrak{W}^{\boldsymbol{z}}\), where \(\boldsymbol{z}=(z_{1},...,z_{r})\in\mathbb{C}^{r}\) lists the Satake parameters for the principal series representation. Since the results in this paper largely do not depend on the choice of \(\boldsymbol{z}\), we will generally drop it from the notation and write simply \(\mathfrak{W}\).
Examining the group structures of \(\widetilde{T}\) and \(H\) for a non-archimedean local field \(F\), we achieve an explicit expression for the dimension, which we will prove in Section 2 as Theorem 2.4.
**Theorem 1.1**.: _For an \(n\)-fold metaplectic cover \(\widetilde{G}\) of \(GL_{r}(F)\) corresponding to the bilinear form \(B_{c,d}\),_
\[\left|\widetilde{T}/H\right|=\frac{n^{r}}{|\{\boldsymbol{x}\in(\mathbb{Z}/n \mathbb{Z})^{r}:B_{c,d}(\boldsymbol{x},\boldsymbol{y})\equiv 0\pmod{n}\text{ for all } \boldsymbol{y}\in(\mathbb{Z}/n\mathbb{Z})^{r}\}|}.\]
Our main result is a closed formula for the order of the set in the denominator of Theorem 1.1. To this end, let
\[\Lambda_{fin}:=\{\boldsymbol{x}\in(\mathbb{Z}/n\mathbb{Z})^{r}:B_{c,d}( \boldsymbol{x},\boldsymbol{y})\equiv 0\pmod{n}\text{ for all }\boldsymbol{y}\in(\mathbb{Z}/n \mathbb{Z})^{r}\}\,.\]
Then, using linear Diophantine equations to parametrize \(\Lambda_{fin}\) in two different ways, we arrive at the following result, which is proven in two parts as Theorem 4.7 and Theorem 6.1, respectively.
**Main Theorem 1**.: Given an \(n\)-fold metaplectic cover of \(GL_{r}(F)\) corresponding to the bilinear form \(B_{c,d}\),
\[|\Lambda_{fin}|=d_{1}^{r-1}\gcd\left(d_{2},\frac{dn}{\theta_{1}}\right),\]
where \(d_{1}=\gcd(c-d,n)\) and \(d_{2}=\gcd(c+(r-1)d,n)\). Alternately, we also have that
\[|\Lambda_{fin}|=\frac{\theta_{1}^{r-1}d_{2}}{n}\gcd\left(\frac{n}{\theta_{1}}, \frac{n}{d_{2}},r\right)\operatorname{lcm}\left(\frac{n}{\gcd(r,n)},\gcd\left( d_{2},\frac{dn}{\theta_{1}}\right)\right),\]
where \(b=\gcd(r,n)\).
The first formula arises from viewing the parametrizing Diophantine equations, which are generated by the natural basis for the cocharacter lattice for \(GL_{r}(F)\), as a homogeneous specialization of an inhomogenous system. This viewpoint provides a more elegant formula and a more concrete description of the structure of the space of Whittaker functions. On the other hand, while the second formula is more complicated, it arises from the root structure of \(GL_{r}(F)\), which provides a more direct path to extending this result to other reductive groups.
For \(GL_{r}(F)\), the space of Whittaker functions is also closely tied to a particular module for a quantum group built from the Lie algebra \(\mathfrak{gl}\). Despite the name, quantum groups are not groups at all, but rather quasitriangular Hopf algebras. For this paper, we consider the quantum affine universal enveloping algebra \(U_{q}(c,d,n):=U_{q}(\widehat{\mathfrak{gl}}(n/\theta_{1}))\), where \(q\) is the cardinality of the residue field for \(F\). This quantum group has an \(n/\theta_{1}\)-dimensional evaluation module \(V_{+}(z)\) depending on a parameter \(z\in\mathbb{C}\), whose basis vectors may be indexed using the elements of \(\mathbb{Z}/(n/\theta_{1})\mathbb{Z}\).
In [1], Brubaker, Bump, and Buciumas prove that for the simplest \(n\)-fold metaplectic cover of \(GL_{r}(F)\) (the one where \(c=1\) and \(d=0\)), the space of Whittaker functions is isomorphic to an \(r\)-fold tensor product of evaluation modules and that after a Drinfeld twist (which changes the group action but does not affect the module structure) this isomorphism matches the action of the quantum group to the action of intertwining operators on the underlying principal series representation. The key ingredient in this proof is a lattice model construction for metaplectic Whittaker functions in the case \(c=1,d=0\) developed in [2] by Brubaker, Bump, Chinta, Friedberg, and Gunnells.
In [7], the second author proves that both of these constructions are true in far greater generality, constructing a Whittaker function lattice model and a map \(\theta_{\tt z}\) between \(\mathfrak{W}^{\tt z}\) and an \(r\)-fold tensor product \(V_{+}(z_{1})\otimes\cdots\otimes V_{+}(z_{r})\) of evaluation modules for _any_ metaplectic cover of \(GL_{r}(F)\). (To match to the terminology used in [7], set \(n_{Q}:=n/\ell_{1}\).) Moreover, passing through this map, the action of the quantum group still matches exactly the action of intertwining operators on the Whittaker functions.
As \(c,d,r,n\) vary, the cost of dealing with more complicated covers is that this map shifts between being an isomorphism, an injection, and a surjection, and the choice of representatives for \(H\)-cosets affects the map. The lattice model construction used in [7] dictates a choice of coset representatives from \(\widetilde{T}/H\) giving a basis for \(\mathfrak{W}\) on which \(\theta_{\tt z}\) is well-defined. However, the lattice model is not necessary for the connection between \(\mathfrak{W}\) and the quantum module outside of this phenomenon. A natural question then arises: when is the map \(\theta_{\tt z}\) well-defined for any choice of basis for \(\mathfrak{W}\)?
One of the main applications of our results is an answer for this question, using the characterization of elements of \(\Lambda_{fin}\) from our proof of Main Theorem 1. Taking any basis for \(\mathfrak{W}\), use Theorem 1.1 to express it as a set of vectors in \((\mathbb{Z}/n\mathbb{Z})^{r}\). Then the map given in [7] is precisely
\[\theta_{\tt z}:\mathfrak{W}^{\tt z}\to V_{+}(z_{1})\otimes\cdots\otimes V_{+}(z_{r}),\qquad\boldsymbol{\nu}\mapsto\rho-\boldsymbol{\nu}\pmod{n/\ell_{1}},\]
where \(\rho=(r-1,...,2,1,0)\) and the modulus is applied independently in each component of the vector. Using our characterization to show how any particular coset \(\nu H\) sits within \((\mathbb{Z}/n\mathbb{Z})^{r}\), we arrive at the following result, which will be proven as Theorem 8.2 and Corollary 8.3.
**Main Theorem 2**.: For a vector \(\boldsymbol{z}=(z_{1},...,z_{r})\in\mathbb{C}^{r}\), the homomorphism \(\theta_{\tt z}\) given in [7] is well-defined for any choice of basis for \(\mathfrak{W}\) if and only if
\[\gcd\left(d_{2},\frac{dn}{\ell_{1}}\right)=\gcd(c,d,n).\]
Furthermore, if \(\mathfrak{W}\) is either of minimum or maximum dimension, \(\theta_{\tt z}\) is an isomorphism.
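As a small concrete check of the map displayed above, the sketch below evaluates \(\rho-\boldsymbol{\nu}\) componentwise modulo \(n_{Q}=n/\ell_{1}\). The modulus \(n_{Q}\) is taken as a given integer parameter, and the function name is purely illustrative.

```python
def theta_z(nu, n_Q):
    """Componentwise map nu -> rho - nu (mod n_Q), with rho = (r-1, ..., 1, 0)."""
    r = len(nu)
    rho = range(r - 1, -1, -1)                 # (r-1, ..., 2, 1, 0)
    return [(p - v) % n_Q for p, v in zip(rho, nu)]

# e.g. r = 3, n_Q = 4:
print(theta_z([2, 0, 1], 4))                   # -> [0, 1, 3]
```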
Understanding how this map is affected by the choice of cover is an important step to understanding how we may extend these quantum connections to metaplectic covers over other reductive groups. While the lattice model connection only exists in full for \(GL_{r}(F)\) and \(SL_{r}(F)\), the Whittaker function framework exists for any reductive group, so we hope that further investigation of the structure of \(\mathfrak{W}\) will not only allow us to develop analogues to Main Theorem 1 for other groups, but also to determine the precise quantum group and module connected to the metaplectic Whittaker functions for other types.
Regarding the structure of this paper, in Section 2, we examine the construction of metaplectic covers of \(GL_{r}(F)\) and their Whittaker functions, culminating in a proof of Theorem 1.1. Section 3 introduces the first set of Diophantine equations used to parametrize \(\Lambda_{fin}\), which we then use in Section 4 to prove the first part of Main Theorem 1 as Theorem 4.7. In Section 5, we introduce the second set of Diophantine equations for \(\Lambda_{fin}\), which facilitate the proof of the second part of Main Theorem 1 as Theorem 6.1 in Section 6. In Section 7, we examine some cases in which the formulae for \(\dim(\mathfrak{W})\) simplify dramatically and prove conditions for certain dimensions of interest for \(\mathfrak{W}\), including conditions for maximum and minimum dimension. Lastly, in Section 8, we develop the quantum connection and use the structure of \(\mathfrak{W}\) to prove Main Theorem 2 as Theorem 8.2 and Corollary 8.3.
## Acknowledgements
This project was partially supported by NSF RTG grant DMS-1745638 and was supervised by the second author as part of the University of Minnesota School of Mathematics Summer 2022 REU program. The
second author is also supported by NSF grant DMS-2203042. The authors would like to thank their TA Carolyn Stephen for their guidance throughout the project, as well as Ben Brubaker and Darij Grinberg for helpful comments.
## 2 Spaces of Metaplectic Whittaker Functions
To understand the structure of the space of metaplectic Whittaker functions, we must first concretely describe the metaplectic covers of \(GL_{r}(F)\). We can then extend this explicit parametrization of all covers into a description of the metaplectic torus and its maximal abelian subgroup. As mentioned in the introduction, the quotient of these subgroups controls the dimension of the space of Whittaker functions: describing its structure precisely in terms of the cover allows us to reduce a complicated representation theory question to a straightforward linear algebra problem.
Suppose \(n\) is a natural number and \(F\) is a nonarchimedean local field containing \(2n\) distinct \(2n\)-th roots of unity \(\mu_{2n}\). Let \(\mathfrak{o}\) be the ring of integers of \(F\) and \(\varpi\) its uniformizing element.
**Definition**.: Given a split reductive group \(G\), an \(n\)_-fold metaplectic cover_ or \(n\)_-fold metaplectic covering group_\(\widetilde{G}\) is a non-algebraic central extension of \(G\) by the \(n\)-th roots of unity \(\mu_{n}\). That is, \(\widetilde{G}\) is defined by the following short exact sequence:
\[1\to\mu_{n}\to\widetilde{G}\xrightarrow{p}G\to 1.\]
As a set, \(\widetilde{G}\) is the set of tuples \((\zeta,g)\) where \(\zeta\in\mu_{n},g\in G\). However, group multiplication is controlled by a cocycle \(\sigma\in H^{2}(G,\mu_{n})\); that is, for two elements \((\zeta_{1},g_{1}),(\zeta_{2},g_{2})\), their product in \(\widetilde{G}\) is
\[(\zeta_{1},g_{1})\cdot(\zeta_{2},g_{2})=(\zeta_{1}\zeta_{2}\sigma(g_{1},g_{2} ),g_{1}g_{2}).\]
In the process of writing down an explicit form for cocycles for covers of \(GL_{r}(F)\), we see that a slightly more general case may be handled simultaneously. Set \(G=GL_{r}(F)\) for the remainder of the paper.
**Definition**.: More generally, a _metaplectic covering group essentially of degree \(n\)_ is given by a short exact sequence
\[1\to\mu_{m}\to\widetilde{G}\xrightarrow{p}G\to 1\]
where \(n|m\) and the corresponding cocycle \(\sigma\in H^{2}(G,\mu_{m})\) satisfies the property that \([\sigma^{n}]\) is trivial in \(H^{2}(G,\mathbb{C}^{\times})\) under the inclusion induced by an embedding \(\varepsilon:\mu_{m}\to\mathbb{C}^{\times}\).
While it is slightly tedious to write down formulae for these cocycles on general elements of \(G\), their expressions over the torus \(T\) of diagonal matrices in \(G\) are quite elegant. In [7], the second author proves that all metaplectic covers essentially of degree \(n\) over \(GL_{r}(F)\) come from a cocycle of the form
\[\sigma_{c,d}(\boldsymbol{x},\boldsymbol{y})=(\det(\boldsymbol{x}),\det( \boldsymbol{x}))_{2n}^{c}\prod_{i>j}\left(x_{i},y_{j}\right)_{n}^{d-c}. \tag{1}\]
for \(c,d\in\mathbb{Z}\), where \(\boldsymbol{x},\boldsymbol{y}\in T\) and \((\cdot,\cdot)_{k}\) denotes the \(k\)-th Hilbert symbol (see Neukirch [16] for more details on the construction of Hilbert symbols). Notably, making the shift to covers essentially of degree \(n\) rather than "purely" of degree \(n\) is necessary to include the metaplectic cover corresponding to the cocycle \(\sigma_{1,0}\), which, while only essentially of degree \(n\), has been an integral example for this field (see for example [1, 2, 3, 4, 14].)
**Remark 2.1**.: _Since the \(2n\)-th Hilbert symbol produces \(2n\)-th roots of unity, it is necessary that \(F\) contain \(\mu_{2n}\) for the group to be well defined. However, if we are considering a cocycle for which the parameter \(c\) is even, we may relax this condition and require \(F\) to contain only \(\mu_{n}\)._
In [5], Brylinski-Deligne prove that the set of metaplectic covers is in correspondence with the set of symmetric Weyl-group invariant bilinear forms \(B:Y\times Y\to\mathbb{Z}\) on the cocharacter lattice \(Y\) such that \(\frac{B(\alpha^{\vee},\alpha^{\vee})}{2}\in\mathbb{Z}\) for all coroots \(\alpha^{\vee}\in Y\). For \(G=GL_{r}(F)\), a natural choice of basis for \(Y\) is the set of \(r\) fundamental coweights \(\varepsilon_{i}^{\vee}:F^{\times}\to T\), for \(i=1,...,r\), in which \(\varepsilon_{i}^{\vee}(a):=\mathrm{diag}(1,...,1,a,1,...,1)\), where \(a\) is in the \(i\)-th entry. Note: while we will use the notation \(\lambda(a)\) for \(\lambda\in Y,a\in F^{\times}\), another common notation is \(a^{\lambda}\).
Under this basis, the cocharacter lattice \(Y\) is isomorphic to \(\mathbb{Z}^{r}\); for instance,
\[(\varepsilon_{1}^{\vee}+3\varepsilon_{2}^{\vee})(a)=\operatorname{diag}(a,a^{3},1,...,1).\]
Using this basis, we represent a bilinear form on \(Y\) in terms of the corresponding matrix \(A\) such that for \(\boldsymbol{x},\boldsymbol{y}\in Y\),
\[B(\boldsymbol{x},\boldsymbol{y})=\boldsymbol{x}^{T}A\boldsymbol{y}.\]
Each of the conditions from the Brylinski-Deligne correspondence translates into a condition for this matrix. First, a symmetric bilinear form prescribes a symmetric matrix. Second, the Weyl group \(W\) is isomorphic to the symmetric group \(S_{r}\), and acts on \(Y\) by \(\sigma\cdot\varepsilon_{i}^{\vee}=\varepsilon_{\sigma(i)}^{\vee}\). Thus, \(A\) must be invariant under conjugation by permutation matrices, so for some (suggestively named) \(c,d\in\mathbb{Z}\), we have \(a_{i,i}=c\) for all \(i\) and \(a_{i,j}=d\) for all \(i\neq j\). (See the matrix in (2) for an illustration of this requirement.)
There are \(r-1\) simple coroots, each of the form \(\varepsilon_{i}^{\vee}-\varepsilon_{i+1}^{\vee}\). To check the integrality condition on the coroot lattice, it suffices to show that it holds for simple coroots. However, for \(GL_{r}(F)\), this condition is satisfied already: for any simple coroot \(\varepsilon_{i}^{\vee}-\varepsilon_{i+1}^{\vee}\),
\[\frac{B_{c,d}(\varepsilon_{i}^{\vee},\varepsilon_{i}^{\vee})}{2}=\frac{2(c-d) }{2}=c-d\in\mathbb{Z}.\]
By Brylinski-Deligne, the metaplectic cover corresponding to this bilinear form satisfies the following condition: if \(\boldsymbol{x},\boldsymbol{y}\in\widetilde{T}=p^{-1}(T)\) such that \(p(\boldsymbol{x})=\lambda(x),p(\boldsymbol{y})=\mu(y)\) for some \(x,y\in F^{\times}\) and \(\lambda,\mu\in Y\), then the commutator of \(\boldsymbol{x}\) and \(\boldsymbol{y}\) is
\[[\boldsymbol{x},\boldsymbol{y}]=(x,y)_{n}^{B(\lambda,\mu)}.\]
Evaluating the commutator in terms of an explicit cocycle, we may identify the bilinear form corresponding to a specific cocycle and vice versa. Note that this property illuminates one of the key reasons the Brylinski-Deligne correspondence is not a bijection: since the cocycle in (1) depends on powers of Hilbert symbols, there are many different cocycles which will give exactly the same cover, specifically any \(\sigma_{c^{\prime},d^{\prime}}\) such that \(c^{\prime}\equiv c\pmod{2n}\) and \(d^{\prime}-c^{\prime}\equiv d-c\pmod{n}\).
**Theorem 2.2** (Frechette [7]).: _For \(c,d\in\mathbb{Z}\), the essentially \(n\)-fold metaplectic cover of \(GL_{r}(F)\) with multiplication given by \(\sigma_{c,d}\) corresponds to the bilinear form \(B_{c,d}\) that acts on \((\boldsymbol{x},\boldsymbol{y})\in\mathbb{Z}^{r}\times\mathbb{Z}^{r}\) by_
\[B_{c,d}(\boldsymbol{x},\boldsymbol{y})=\boldsymbol{x}^{T}\cdot\begin{pmatrix} c&d&d&\dots&d\\ d&c&d&\dots&d\\ d&d&c&\dots&d\\ \vdots&\vdots&&\ddots&\vdots\\ d&d&d&\dots&c\end{pmatrix}\cdot\boldsymbol{y}. \tag{2}\]
Conflating the bilinear form with its corresponding matrix, we will denote both by \(B_{c,d}\); we hope this abuse of notation will not cause any confusion. Note: in [7], this bilinear form is parametrized slightly differently as \(B_{b,c}\), where \(b=c-d\).
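For experimentation with small cases, the Gram matrix in Theorem 2.2 is easy to write down explicitly. The short sketch below builds \(B_{c,d}\) in the fundamental coweight basis and evaluates the form on integer vectors; it is a convenience for exploration only, with all names chosen for illustration.

```python
import numpy as np

def B_matrix(c, d, r):
    """Gram matrix of B_{c,d} in the fundamental coweight basis (Theorem 2.2)."""
    return d * np.ones((r, r), dtype=int) + (c - d) * np.eye(r, dtype=int)

def B(c, d, x, y):
    """Evaluate B_{c,d}(x, y) = x^T B_{c,d} y for integer vectors x, y."""
    x, y = np.asarray(x), np.asarray(y)
    return int(x @ B_matrix(c, d, len(x)) @ y)

# e.g. B_{1,0} on Z^3 is the standard dot product:
print(B(1, 0, [1, 2, 3], [1, 1, 1]))           # -> 6
```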
Now that we have an explicit description of our metaplectic covers, we investigate what this parametrization tells us about space of metaplectic Whittaker functions. For the purposes of this paper, we will not need the constructions of the metaplectic Whittaker functions themselves, nor those of the metaplectic principal series representations on which they are defined. Instead, we will use the following theorem of McNamara to investigate the space of Whittaker functions through its connection to the metaplectic torus. For the definitions of the metaplectic principal series representations and their Whittaker functions, we refer the reader to Sections 6 and 8, respectively, of [15] as a convenient source.
**Theorem 2.3** (McNamara [15]).: _Fix a metaplectic cover \(\widetilde{G}\) over a \(p\)-adic reductive group \(G\) and let \(\mathfrak{W}\) be the space of metaplectic Whittaker functions for a principal series representation on \(\widetilde{G}\). Let the metapletic torus \(\widetilde{T}\) be the preimage in \(\widetilde{G}\) of the torus \(T(F)\), and let \(H\) be the maximal abelian subgroup of \(\widetilde{T}\). Then_
\[\dim(\mathfrak{W})=\left|\widetilde{T}/H\right|.\]
Note: the group \(T\) of diagonal matrices is denoted \(T\) because it is an abelian _torus_, that is, it is isomorphic to \((F^{\times})^{r}\). While we call \(\widetilde{T}\) the _metaplectic torus_, it is no longer abelian, nor is it technically a torus, as its elements are \((\zeta,t)\) where \(\zeta\in\mu_{n}\) (where \(\mu_{n}\subsetneq F\)) and \(t\in T\). Investigating the precise structure of \(\widetilde{T}\), we prove the following theorem, which is a restatement of Theorem 1.1.
**Theorem 2.4**.: _For a metaplectic cover \(\widetilde{G}\) of \(GL_{r}(F)\) corresponding to the bilinear form \(B_{c,d}\),_
\[\left|\widetilde{T}/H\right|=\frac{n^{r}}{|\{\mathbf{x}\in(\mathbb{Z}/n\mathbb{Z} )^{r}:B_{c,d}(\mathbf{x},\mathbf{y})\equiv 0\pmod{n}\text{ for all }\mathbf{y}\in(\mathbb{Z}/n\mathbb{Z})^{r}\}|}\,.\]
Proof.: Using our description of the metaplectic covers, we can express the subgroups \(\widetilde{T}\) and \(H\) more explicitly: using the Iwasawa decomposition of \(GL_{r}(F)\), we have that \(\widetilde{T}=\mu_{n}\times T(\mathfrak{o})\times Y\) as a set. That is, for \((\zeta,t)\in T\), we may write \(t=t_{0}\cdot\lambda(\varpi)\) for some \(t_{0}\in T(\mathfrak{o})\) and \(\lambda\in Y\).
Since \(H\) is a subgroup of \(\widetilde{T}\), its elements also look like \((\zeta,h)\) where \(\zeta\) is an \(n\)-th root of unity and \(h\) is a diagonal matrix with entries in \(F\). Examining the group law on \(\widetilde{G}\), we see that the root of unity does not impede commutativity of elements, so it is the matrix component \(h\) we must examine further to obtain a description of \(H\). To do so, recall that \(\mathfrak{o}\) is the valuation ring of \(F\) and \(\varpi\) the uniformizing element. Then by [15], as a set we have \(H=\mu_{n}\times T(\mathfrak{o})\times\Lambda\), where \(\Lambda\) is the free abelian group
\[\Lambda:=\{\lambda\in Y:s(\lambda(\varpi))\in H\}\]
for \(s:G\to\widetilde{G}\) the standard section \(s(g)=(1,g)\). Using the commutator relation and the fundamental coweight basis for \(Y\), an equivalent description for \(\Lambda\) is
\[\Lambda=\{\mathbf{x}\in\mathbb{Z}^{r}:B_{c,d}(\mathbf{x},\mathbf{y})\equiv 0\pmod{n}\text{ for all }\mathbf{y}\in\mathbb{Z}^{r}\}\,. \tag{3}\]
It is useful to think of the group \(\Lambda\) as controlling the powers of \(\varpi\) in each entry on the diagonal of the matrix \(h\). That is, for any element \((\zeta,h)\in H\), we have \(h=h_{0}\cdot\operatorname{diag}(\varpi^{\lambda_{1}},...,\varpi^{\lambda_{r}})\) where \(h_{0}\in T(\mathfrak{o})\) and \(\lambda=\lambda_{1}\varepsilon_{1}^{\vee}+\cdots+\lambda_{r}\varepsilon_{r}^{\vee}\) is in \(\Lambda\).
Then, combining our descriptions of \(\widetilde{T}\) and \(H\) to consider \(\widetilde{T}/H\), we see that
\[|\widetilde{T}/H|=|Y/\Lambda|=|\mathbb{Z}^{r}/\Lambda|\,,\]
where the last description uses the embedding of \(\Lambda\) in \(\mathbb{Z}^{r}\) described in (3). Notice that if \(\lambda_{i}\in n\mathbb{Z}\) for all \(i\), then \(B((\lambda_{1},...,\lambda_{r}),\mathbf{y})\) will automatically be a multiple of \(n\) for any \(\mathbf{y}\in\mathbb{Z}^{r}\), and therefore \(\lambda=\lambda_{1}\varepsilon_{1}^{\vee}+\cdots+\lambda_{r}\varepsilon_{r}^ {\vee}\) will be in \(\Lambda\). Therefore, it suffices to consider all coordinates \(\lambda_{i}\) mod \(n\), and so
\[|\widetilde{T}/H|=|(\mathbb{Z}/n\mathbb{Z})^{r}/\left(\Lambda\cap(\mathbb{Z}/n \mathbb{Z})^{r}\right)|\]
which completes the proof.
Let \(\Lambda_{fin}:=\{\mathbf{x}\in(\mathbb{Z}/n\mathbb{Z})^{r}:B_{c,d}(\mathbf{x},\mathbf{y}) \equiv 0\pmod{n}\text{ for all }\mathbf{y}\in(\mathbb{Z}/n\mathbb{Z})^{r}\}\,.\) We will spend the next several sections developing two related systems of linear Diophantine equations which allow us to describe the elements in \(\Lambda_{fin}\), each of which will give us a distinct formula for \(|\Lambda_{fin}|\). We will then return to the broader framework in Section 7 to see what these different formulae tell us about the structure of \(\widetilde{T}/H\) and thus the structure of \(\mathfrak{W}\).
## 3 Cocharacter Diophantine Equations and Phenomena
In this section, we use the natural basis for the cocharacter lattice \(Y\) of \(GL_{r}(F)\) to develop a set of \(r\) linear Diophantine equations in terms of \(c,d\), and \(n\) that describe the set \(\Lambda_{fin}\), which we call the _cocharacter equations_. This perspective turns a representation theoretic question into a linear algebra one, where altering each of the parameters \(c,d,r,\) and \(n\) has a different effect on the system. We also take time now to develop a visual framework which illuminates this distinction in the roles of each of our parameters.
Examining the conditions for \(\Lambda_{fin}\) using the viewpoint of the fundamental coweight basis for \(Y\) (see Section 2), we arrive at the following system of \(r\) equations. Let \(\mathbf{0}_{r}=(0,0,\ldots,0)^{T}\) be the \(r\times 1\) column vector with all entries equal to \(0\), and define \(\mathbf{1}_{r}=(1,1,\ldots,1)^{T}\) similarly. Recall that \(B_{c,d}\) is both the bilinear form given in Theorem 2.2 and its corresponding \(r\times r\) matrix.
**Definition**.: For natural numbers \(r,n\geq 1\) and constants \(c,d\in\mathbb{Z}\), we call the following system of equations the **cocharacter equations**:
\[B_{c,d}\cdot\boldsymbol{x}=\boldsymbol{0}_{r}\pmod{n}.\]
That is, for \(\boldsymbol{x}=(x_{1},...,x_{r})^{T}\), we have
\[cx_{1}+dx_{2}+\cdots+dx_{r} \equiv 0\pmod{n}\] \[dx_{1}+cx_{2}+\cdots+dx_{r} \equiv 0\pmod{n}\] \[\vdots\] \[dx_{1}+dx_{2}+\cdots+cx_{r} \equiv 0\pmod{n}\]
Here, the \(i\)-th equation arises from evaluating \(\boldsymbol{x}\in Y\) against \(\varepsilon_{i}^{\vee}\) in the bilinear form \(B_{c,d}\) for each \(i\in\{1,...,r\}\). Thus, Lemma 3.1 follows directly.
**Lemma 3.1**.: _Let \(S_{cochar}(c,d,r,n)\) be the number of solutions \(\boldsymbol{x}\in(\mathbb{Z}/n\mathbb{Z})^{r}\) to the cocharacter equations. Then, for the cover \(\widetilde{G}_{c,d,r,n}\), we have \(S_{cochar}(c,d,r,n)=|\Lambda_{fin}|\)._
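Since \(\Lambda_{fin}\) is a finite set, \(S_{cochar}\) can be computed directly for small parameters. The short Python script below (ours, not part of the paper) enumerates \((\mathbb{Z}/n\mathbb{Z})^{r}\) and checks the cocharacter equations row by row; it is useful for spot-checking the formulas that follow.

```python
# Brute-force count of solutions to the cocharacter equations
# B_{c,d} * x = 0 (mod n); row i reads c*x_i + d*(sum(x) - x_i) = 0 (mod n).
from itertools import product

def s_cochar_brute(c, d, r, n):
    count = 0
    for x in product(range(n), repeat=r):
        s = sum(x)
        if all((c * x[i] + d * (s - x[i])) % n == 0 for i in range(r)):
            count += 1
    return count

# Example: the cover with c = 1, d = 0 (the "nicest" n-fold cover).
print(s_cochar_brute(1, 0, 2, 4))  # prints 1, so dim(W) = 4^2 / 1 = 16
```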
Looking at the values of \(S_{cochar}\) for a fixed \(r\) and \(n\) as we range over \(c\) and \(d\), certain patterns emerge which motivate defining constants which we call the _diagonal numbers_. These constants will be fundamental in our formulas for \(S_{cochar}\), so we take the time to explore them now.
For a fixed \(r\) and \(n\), note first that it suffices to understand \(S_{cochar}\) for \(c,d\pmod{n}\), as \(S_{cochar}(c,d,r,n)=S_{cochar}(c^{\prime},d,r,n)\) for \(c\equiv c^{\prime}\pmod{n}\) and likewise for \(d\). It will be useful to visualize the values of \(S_{cochar}\) as a table ranging over \(c,d\in\mathbb{Z}/n\mathbb{Z}\) in the following manner:
\[\begin{array}{c|ccccc}
c\backslash d&0&1&2&\cdots&(n-1)\\
\hline
0&&&&&\\
1&&&&&\\
2&&&&&\\
\vdots&&&&&\\
(n-1)&&&&&
\end{array}\]
Examining Figure 1, which contains several examples of these tables, notice that the values of \(S_{cochar}\) on the marked diagonals in each picture are each divisible by common factors and that there are two sets of diagonals in each picture. Motivated by this phenomenon, we assign each entry a set of two diagonal numbers.
**Definition**.: Let \(\mathcal{d}_{1}=gcd(c-d,n)\) be the _first diagonal number_ and define \(\mathcal{d}_{2}=gcd(c+(r-1)d,n)\) to be the _second diagonal number_.
Note that for a specific entry in place \(c,d\), its first diagonal number captures the column \(c-d\) where its diagonal of slope \(-1\) intersects the first row and similarly, the second diagonal number identifies the row where its diagonal of slope \(r-1\) intersects the first column.
**Example**.: When \(r\) and \(n\) are coprime, the table for \(S_{cochar}\) depends solely on these diagonal numbers, which we will prove later using the results of Section 6 (see Corollary 7.4). For instance, the table where \(n=10\), \(r=3\) is shown in Figure 2, with diagonal numbers marked, and the value of every entry in this table is determined by its two diagonal numbers. Specifically, we have \(S_{cochar}(c,d,3,10)=\mathcal{d}_{1}^{r-1}\mathcal{d}_{2}=\mathcal{d}_{1}^{2} \mathcal{d}_{2}\) for any \(c,d\).
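This claim is easy to confirm numerically. The following check (ours, not part of the paper) recomputes the full \(10\times 10\) table by brute force and compares each entry with \(\mathcal{d}_{1}^{2}\mathcal{d}_{2}\).

```python
# Check that every entry of the r=3, n=10 table equals d1^(r-1) * d2,
# where d1 = gcd(c-d, n) and d2 = gcd(c+(r-1)d, n) are the diagonal numbers.
from itertools import product
from math import gcd

def s_cochar_brute(c, d, r, n):
    return sum(
        all((c * x[i] + d * (sum(x) - x[i])) % n == 0 for i in range(r))
        for x in product(range(n), repeat=r)
    )

r, n = 3, 10
for c in range(n):
    for d in range(n):
        d1 = gcd((c - d) % n, n)
        d2 = gcd((c + (r - 1) * d) % n, n)
        assert s_cochar_brute(c, d, r, n) == d1 ** (r - 1) * d2
print("every entry of the r=3, n=10 table equals d1^2 * d2")
```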
In general, given a random \(n\) and \(r\), the value of \(S_{cochar}\) will not depend nearly so simply on \(\mathcal{d}_{1}\) and \(\mathcal{d}_{2}\), but they still play an important determining role. To find a closed formula for \(S_{cochar}\), we must look to an inhomogeneous generalization of the homogeneous cocharacter equations with which we started.
**Definition**.: Let \(a\in\mathbb{Z}/n\mathbb{Z}\), and \(\boldsymbol{x}\in(\mathbb{Z}/n\mathbb{Z})^{r}\). Then the _inhomogeneous cocharacter equations_ for \(a\in\mathbb{Z}\) are defined by
\[B_{c,d}\cdot\boldsymbol{x}=a\cdot\boldsymbol{1}_{r}\pmod{n}. \tag{4}\]
Let \(S_{inhom}(c,d,r,n)\) be the total number of solutions to the inhomogeneous cocharacter equations, ranging over all values of \(a\in\mathbb{Z}\).
In the next section, we will solve for \(S_{cochar}(c,d,r,n)\) by characterizing the set of solutions to the inhomogeneous cocharacter equations using straightforward linear algebra techniques and identifying the proportion of solutions with \(a\equiv 0\pmod{n}\). To do this, we will need to identify a precise formula for the smallest nonzero value of \(a\) for which (4) has a solution.
**Definition**.: For a fixed \(c,d,r,n\), let \(A(c,d,r,n)\) be the smallest positive integer value for \(a\) such that there is a solution to the inhomogeneous cocharacter equations (4).
## 4 Proof of Main Theorem 1 Part 1
For the entirety of this section, fix a set of parameters \(c,d,r,n\). To find a formula for \(S_{cochar}:=S_{cochar}(c,d,r,n)\), we begin by showing that the solutions to the inhomogeneous cocharacter equations fall into equally sized equivalence classes defined by the values \(a\in\mathbb{Z}\), and that
\[S_{cochar}(c,d,r,n)=\frac{A(c,d,r,n)}{n}\cdot S_{inhom}(c,d,r,n)\]
Characterizing the solutions to the inhomogeneous cocharacter equations, we will then provide explicit expressions for \(S_{inhom}(c,d,r,n)\) and \(A(c,d,r,n)\).
**Lemma 4.1**.: _The equation \(B_{c,d}\cdot\boldsymbol{x}\equiv a\cdot\mathbf{1}_{r}\pmod{n}\) has a solution if and only if \(a\) is a multiple of \(A=A(c,d,r,n)\). Thus, \(A(c,d,r,n)\) divides \(n\)._
Proof.: By definition, a solution \(\boldsymbol{x}_{A}\) to the equation \(B_{c,d}\cdot\boldsymbol{x}\equiv A\cdot\mathbf{1}_{r}\pmod{n}\) exists. If \(a=kA\) for some \(k\in\mathbb{Z}\), then \(k\boldsymbol{x_{A}}\) is a solution to \(B_{c,d}\cdot\boldsymbol{x}\equiv a\cdot\mathbf{1}_{r}\pmod{n}.\) For the other direction, suppose there exists a
Figure 1: Examples of the Cocharacter Phenomena for different choices of \(r\) and \(n\). In each example, notice that there is one set of diagonals of slope -1 and one of slope \(r-1\): the former indicate the effect of the _first diagonal numbers_\(d_{1}\) and the latter that of the _second diagonal numbers_\(d_{2}\). Diagonals for the same diagonal numbers (greater than 1) are marked with the same color within each example. For instance, in the second example (\(r=2,n=8\)) red marks diagonal numbers equal to 8, blue equal to 4, and green equal to 2.
positive integer \(g\) and solution \(\mathbf{x}_{g}\in(\mathbb{Z}/n\mathbb{Z})^{r}\) to the equation \(B_{c,d}\cdot\mathbf{x}_{g}\equiv g\cdot\mathbf{1}_{r}\pmod{n}\), but that \(A\) does not divide \(g\). Then \(jA<g<(j+1)A\) for some positive integer \(j\). Therefore,
\[B_{c,d}\cdot(\mathbf{x}_{g}-j\mathbf{x}_{A}) \equiv B_{c,d}\cdot\mathbf{x}_{g}-jB_{c,d}\cdot\mathbf{x}_{A}\] \[\equiv(g-jA)\cdot\mathbf{1}_{r},\]
which contradicts the minimality of \(A\). Then, since \(\mathbf{x}=\mathbf{0}_{r}\) is a solution to \(B_{c,d}\cdot\mathbf{x}\equiv n\cdot\mathbf{1}_{r}\equiv\mathbf{0}_{r}\pmod{n}\), the second statement follows.
Splitting the solutions to (4) into equivalence classes based on \(a\), we examine the number of solutions in each class and characterize them more concretely.
**Lemma 4.2**.: _For \(k\in\{1,...,\frac{n}{A}\}\), let \(W_{k}\) be the set of solutions to \(B_{c,d}\cdot\mathbf{x}\equiv(kA)\cdot\mathbf{1}_{r}\pmod{n}\). Then \(|W_{k}|=|W_{1}|\) for all such \(k\)._
Proof.: Consider any \(\mathbf{x}\in W_{1}\). The function \(\phi_{\mathbf{x}}:W_{1}\to W_{k}\) defined by \(\mathbf{y}\mapsto\mathbf{y}+(k-1)\cdot\mathbf{x}\) provides a bijection between \(W_{1}\) and \(W_{k}\).
**Lemma 4.3**.: _Let \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{r})^{T}\). Then \(\mathbf{x}\) solves the inhomogenous cocharacter equations if and only if \(cx_{1}+dx_{j}\equiv dx_{1}+cx_{j}\pmod{n}\) for every \(2\leq j\leq r\)._
Proof.: Let \(2\leq j\leq r\). For a solution \(\mathbf{x}\), the first row of the equation \(B_{c,d}\cdot\mathbf{x}\equiv a\cdot\mathbf{1}_{r}\pmod{n}\) tells us that
\[cx_{1}+dx_{j}+\sum_{\begin{subarray}{c}2\leq k\leq r\\ k\neq j\end{subarray}}dx_{k}\equiv a\pmod{n}\]
Subtracting the \(j\)-th row
\[dx_{1}+cx_{j}+\sum_{\begin{subarray}{c}2\leq k\leq r\\ k\neq j\end{subarray}}dx_{k}\equiv a\pmod{n},\]
from the first, we obtain
\[cx_{1}+dx_{j} \equiv dx_{1}+cx_{j}\pmod{n}.\]
For the other direction, suppose \(\mathbf{x}\) satisfies \(cx_{1}+dx_{j}\equiv dx_{1}+cx_{j}\pmod{n}\) for all \(j\in\{2,...,r\}\). Then, \(\mathbf{x}\) satisfies the inhomogeneous cocharacter equations for the value \(a\equiv cx_{1}+dx_{j}+\sum\limits_{\begin{subarray}{c}2\leq k\leq r\\ k\neq j\end{subarray}}dx_{k}\pmod{n}\).
Figure 2: The table showing \(S_{cochar}(c,d,3,10)\) for all \((c,d)\in\mathbb{Z}_{10}\times\mathbb{Z}_{10}\) with diagonals for diagonal numbers greater than \(1\) marked. Notice here that since \(r=3\) and \(n=10\) are coprime, every entry is equal to \(\delta_{1}^{2}\cdot\delta_{2}\). In contrast, see the example in Figure 1 for \(r=3\) and \(n=9\), where this is not true.
**Proposition 4.4**.: _A vector \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{r})^{T}\) solves the inhomogeneous cocharacter equations if and only if for each \(j\in\{2,...,r\}\) we have \(x_{j}=x_{1}+v_{j}\frac{n}{d_{1}}\) for some integer \(v_{j}\) such that \(1\leq v_{j}\leq d_{1}\)._
Proof.: By Lemma 4.3, it suffices to characterize the solutions \(\mathbf{x}\) to the system of equations given by
\[cx_{1}+dx_{j}\equiv dx_{1}+cx_{j}\pmod{n}\]
for every \(j\in\{2,...,r\}\), or equivalently,
\[(c-d)(x_{1}-x_{j})\equiv 0\pmod{n}. \tag{5}\]
Recalling that \(d_{1}=\gcd(c-d,n)\), a vector \(\mathbf{x}\) satisfies (5) exactly when \(x_{1}-x_{j}\) is a multiple of \(\frac{n}{d_{1}}\) for all \(j\in\{2,...,r\}\). Thus, the solutions to the inhomogeneous cocharacter equations are precisely the vectors of the form
\[\mathbf{x}=x_{1}\cdot\mathbf{1}_{r}+\frac{n}{d_{1}}(0,v_{2},v_{3},...,v_{r})^{T}\]
where \(0\leq x_{1}<n\) and \(v_{j}\in\mathbb{Z}\) such that \(1\leq v_{j}\leq d_{1}\) for all \(j\in\{2,...,r\}\).
Now that we have precisely characterized the set of \(\mathbf{x}\) which solve the inhomogeneous cocharacter equations, we can count the size of this set by ranging over all distinct choices of tuples \((x_{1},v_{2},...,v_{r})\), which each yield a distinct solution \(\mathbf{x}\).
**Corollary 4.5**.: _The number of solutions to the inhomogeneous cocharacter equations is_
\[S_{inhom}(c,d,r,n)=nd_{1}^{r-1}.\]
We are now prepared to identify a precise formula for \(A\) in terms of \(n,r,c\), and \(d\).
**Proposition 4.6**.: _The minimum positive integer \(A\) such that the inhomogeneous cocharacter equations have a solution is \(A=\gcd\left(d_{2},\frac{dn}{d_{1}}\right)\), recalling that \(d_{2}=\gcd(c+(r-1)d,n)\)._
Proof.: Substituting Proposition 4.4 into (4), we see that the left-hand side is
\[\begin{pmatrix}c&d&d&\ldots&d\\ d&c&d&\ldots&d\\ d&d&c&\ldots&d\\ \vdots&\vdots&&\ddots&\vdots\\ d&d&d&\ldots&c\end{pmatrix}\begin{pmatrix}1\\ 1\\ 1\\ \vdots\\ 1\end{pmatrix}+\frac{n}{d_{1}}\begin{pmatrix}0\\ v_{2}\\ v_{3}\\ \vdots\\ v_{r}\end{pmatrix}\equiv x_{1}(c+(r-1)d)\begin{pmatrix}1\\ 1\\ 1\\ \vdots\\ 1\end{pmatrix}+\frac{n}{d_{1}}\begin{pmatrix}&dv_{2}+dv_{3}+\cdots+dv_{r}\\ cv_{2}+dv_{3}+\cdots+dv_{r}\\ dv_{2}+cv_{3}+\cdots+dv_{r}\\ \vdots\\ dv_{2}+dv_{3}+\cdots+dv_{r-1}+cv_{r}\end{pmatrix}.\]
To have a solution, every row of this expression must equal the same constant \(A\). Looking at the first row,
\[A\equiv x_{1}(c+(r-1)d)+\frac{dn}{d_{1}}(v_{2}+v_{3}+\cdots+v_{r})\pmod{n}.\]
From the proof of Proposition 4.4, \(x_{1}\) and \(v_{2}+v_{3}+\cdots+v_{r}\) are both arbitrary constants. Thus, the minimum value \(A\) can have is \(\gcd\left(c+(r-1)d,\frac{dn}{d_{1}},n\right)=\gcd\left(d_{2},\frac{dn}{d_{1}}\right)\).
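As an independent sanity check (not part of the proof), the formula for \(A\) can be compared with a direct search over all \(\boldsymbol{x}\in(\mathbb{Z}/n\mathbb{Z})^{r}\) for small parameters; the function names below are ours.

```python
# Compare A(c,d,r,n) = gcd(d2, d*n/d1) with a brute-force search for the
# smallest positive a such that B_{c,d} * x = a * 1_r (mod n) has a solution.
from itertools import product
from math import gcd

def a_brute(c, d, r, n):
    achievable = set()
    for x in product(range(n), repeat=r):
        s = sum(x)
        rows = {(c * x[i] + d * (s - x[i])) % n for i in range(r)}
        if len(rows) == 1:          # all rows equal, i.e. B*x = a*1_r
            achievable.add(rows.pop())
    positive = [a for a in achievable if a > 0]
    return min(positive) if positive else n   # A = n when only a = 0 occurs

def a_formula(c, d, r, n):
    d1 = gcd((c - d) % n, n)
    d2 = gcd((c + (r - 1) * d) % n, n)
    return gcd(d2, d * n // d1)

for r, n in [(2, 6), (3, 8), (4, 4)]:
    for c in range(n):
        for d in range(n):
            assert a_brute(c, d, r, n) == a_formula(c, d, r, n)
print("Proposition 4.6 verified on the tested parameters")
```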
Note, since \(d_{1}\) divides \(c-d\), we can equivalently write \(A\) as \(A=\gcd\left(d_{2},\frac{cn}{d_{1}}\right)=\gcd\left(d_{2},\frac{dn}{d_{1}} \right)=\gcd\left(d_{2},\frac{n}{d_{1}}\gcd(c,d,n)\right)\). Thus, we arrive at a closed form for \(S_{cochar}\) in terms of \(c,d,r,n\).
**Theorem 4.7**.: _The number of solutions to the cocharacter equations is_
\[S_{cochar}(c,d,r,n)=d_{1}^{r-1}\gcd\left(d_{2},\frac{dn}{d_{1}}\right).\]
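The closed formula can be tested numerically against the brute-force count from Section 3; the following check is ours and is independent of the proof.

```python
# Compare the closed formula of Theorem 4.7 with a brute-force count.
from itertools import product
from math import gcd

def s_cochar_brute(c, d, r, n):
    return sum(
        all((c * x[i] + d * (sum(x) - x[i])) % n == 0 for i in range(r))
        for x in product(range(n), repeat=r)
    )

def s_cochar_formula(c, d, r, n):
    d1 = gcd((c - d) % n, n)
    d2 = gcd((c + (r - 1) * d) % n, n)
    return d1 ** (r - 1) * gcd(d2, d * n // d1)

for r, n in [(2, 8), (2, 9), (3, 4), (3, 6), (4, 4)]:
    for c in range(n):
        for d in range(n):
            assert s_cochar_brute(c, d, r, n) == s_cochar_formula(c, d, r, n)
print("Theorem 4.7 verified on the tested parameters")
```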
## 5 Coroot Diophantine Equations
Inspired by the constants \(c+(r-1)d\) and \(c-d\) showing up in the cocharacter equations, we define a second system of related equations more closely tied to the root structure of \(GL_{r}(F)\).
**Definition**.: The _coroot equations_ are the system of \(r\) equations:
\[(c-d)(x_{i}-x_{r}) \equiv 0\pmod{n}\quad\text{ for all }i\in\{1,...,r-1\},\] \[(c+(r-1)d)(x_{1}+\cdots+x_{r}) \equiv 0\pmod{n}.\]
We call these the coroot equations because the \(i\)-th equation arises from evaluating \(\mathbf{x}\in Y\) against the coroot \(\varepsilon_{i}^{\vee}-\varepsilon_{r}^{\vee}\) in the bilinear form \(B_{c,d}\) for \(i\in\{1,...,r-1\}\). We could similarly evaluate against the simple coroots \(\varepsilon_{i}^{\vee}-\varepsilon_{i+1}^{\vee}\), but this formulation will be more useful for our purposes.
In some cases, the coroot and cocharacter equations are equivalent, but in other cases they are not: counting the solutions to the coroot equations and examining this connection will give us an alternate formula for \(S_{cochar}\).
**Remark 5.1**.: _This system also illuminates the difference between metaplectic covers of \(SL_{r}(F)\) and \(GL_{r}(F)\). For any cocharacter \(\mathbf{x}\) for \(SL_{r}(F)\), the last coroot equation is automatically satisfied, since \(x_{1}+\cdots+x_{r}\equiv 0\pmod{n}\) is necessary for the resulting matrices \(\mathbf{x}(a)\) to have determinant one for any \(a\in F\). In this case, the cocharacter and coroot equations are equivalent, and they both give \(|\Lambda_{fin}\cap SL_{r}(F)|=d_{1}^{r-1}\)._
**Theorem 5.2**.: _The number of solutions to the coroot equations is_
\[S_{coroot}(c,d,r,n)=d_{1}^{r-1}d_{2}\gcd\left(\frac{n}{d_{1}},\frac{n}{d_{2}},r\right).\]
As in Section 4, we prove general properties about the solutions to the coroot equations. These descriptions will allow us to directly relate \(S_{cochar}\) to \(S_{coroot}\) in Section 6.
Proof.: We start with a change of variables. Consider the coroot system in variables \(y_{1},...,y_{r-1},z\) written as
\[(c-d)y_{i} \equiv 0\pmod{n}\quad\text{ for all }i\in\{1,...,r-1\},\] \[(c+(r-1)d)z \equiv 0\pmod{n}.\]
In terms of these variables, there are \(d_{1}^{r-1}\cdot d_{2}\) tuples \((y_{1},...,y_{r-1},z)\) that solve the coroot equations: \(y_{i}\) are all multiples of \(\frac{n}{d_{1}}\) and \(z\) is a multiple of \(\frac{n}{d_{2}}\). Let \(S_{Y,Z}\) be the set of such tuples.
We then classify \(\mathbf{x}\) satisfying \(y_{i}=x_{i}-x_{r}\) and \(z=x_{1}+\cdots+x_{r}\) such that \((y_{1},..,y_{r-1},z)\in S_{Y,Z}\): that is, the set of \(\mathbf{x}\) satisfying the original formulation of the coroot equations. Note that \(x_{i}=y_{i}+x_{r}\), so rearranging the final coroot equation, we have
\[rx_{r}\equiv z-(y_{1}+\cdots+y_{r-1})\pmod{n}, \tag{6}\]
and thus the number of solutions in terms of \(\mathbf{x}\) versus in terms of \((y_{1},...,y_{r-1},z)\) depends on whether \(r\) is invertible mod \(n\). Let \(b=\gcd(n,r)\). Then \(x_{r}\) has \(b\) solutions when \(z-(y_{1}+\cdots+y_{r-1})\) is a multiple of \(b\) and no solutions otherwise. A straightforward calculation verifies that there is no overlap between the sets of \(\mathbf{x}\) for distinct tuples \((y_{1},..,y_{r-1},z)\in S_{Y,Z}\).
Let \(Fr_{b}(d_{1},d_{2},n)\) be the proportion of \((y_{1},\ldots,y_{r-1},z)\) tuples that will yield a valid solution to the coroot equations. In other words,
\[Fr_{b}:=\frac{|\{(y_{1},\cdots,y_{r-1},z)\in S_{Y,Z}:z-y_{1}-\cdots-y_{r-1}\text{ is a multiple of }b\}|}{|S_{Y,Z}|}.\]
Then \(S_{coroot}(c,d,r,n)=d_{1}^{r-1}d_{2}\cdot b\cdot Fr_{b}(d_{1},d_{2},n)\), and it will suffice to develop a formula for \(Fr_{b}(d_{1},d_{2},n)\).
**Remark 5.3**.: _When \(n\) and \(r\) are relatively prime, \(r\) is invertible. Thus \(Fr_{b}\) evaluates to \(1\) because any tuple we pick adds to a multiple of \(b=1\), so in this case the two sets of variables give equivalent conditions and \(S_{coroot}=|S_{Y,Z}|\)._
**Proposition 5.4**.: _The function \(Fr_{b}\) evaluates to_
\[Fr_{b}(d_{1},d_{2},n)=\frac{1}{b}\cdot\gcd\left(\frac{n}{d_{1}},\frac{n}{d_{2}},b \right).\]
Proof.: We proceed by carefully considering the overlaps of factors of \(b\) with those of \(\frac{n}{d_{1}},\frac{n}{d_{2}}.\) Let \(k_{1}=\gcd(\frac{n}{d_{1}},b)\) and \(m_{1}\in\mathbb{Z}\) such that \(b=m_{1}k_{1}\). Similarly, let \(k_{2}=\gcd(\frac{n}{d_{2}},b)\) and \(m_{2}\in\mathbb{Z}\) such that \(b=m_{2}k_{2}\).
Since \(y_{i}\) is a multiple of \(\frac{n}{d_{1}}\), it is also a multiple of \(k_{1}\): examining which multiples are possible modulo \(b\), we see that \(y_{i}\pmod{b}\) can be any of the \(m_{1}\) multiples of \(k_{1}\) in \(\mathbb{Z}/b\mathbb{Z}\) with equal probability. Similarly, considering the sum \(y=\sum_{i=1}^{r-1}y_{i}\), we claim that the same is true for \(y\). Let \(1\leq g\leq m_{1}\) and suppose \(y\equiv gk_{1}\pmod{b}\): if we pick any arbitrary \(y_{1},y_{2},\ldots,y_{r-2}\), we are left with
\[y_{r-1}\equiv gk_{1}-\sum_{i=1}^{r-2}y_{i}\pmod{b}.\]
The right-hand side of this equation defines some equivalence class \(\ell k_{1}\pmod{b}\) from which we must choose \(y_{r-1}\) to ensure that \(y\equiv gk_{1}\pmod{b}\). Exactly \(\frac{1}{m_{1}}\) of the possible values of \(y_{r-1}\) place us in the correct equivalence class for a given \(g\). Thus, \(y\) falls into the equivalence classes \(k_{1},2k_{1},\ldots,m_{1}k_{1}\) with equal probability.
Likewise, \(z\pmod{b}\) can be any of the \(m_{2}\) multiples of \(k_{2}\) modulo \(b\) with equal probability. To interface between \(y\) and \(z\), we must factor further: let \(k_{s}=\gcd(k_{1},k_{2})\) so that \(k_{1}=s_{1}k_{s}\) and \(k_{2}=s_{2}k_{s}\), and \(\gcd(s_{1},s_{2})=1\). Letting \(m=\gcd(m_{1},m_{2})\), factor \(b\) completely as \(b=ms_{1}s_{2}k_{s}\). The reader may find it helpful to refer to Figure 3, which provides a visualization of how this factorization relates \(b,\frac{n}{d_{1}}\), and \(\frac{n}{d_{2}}\).
We now identify the proportion of \(y\)- and \(z\)-values that satisfy \(z-y\equiv 0\pmod{b}\). Since \(y,z,b\) all contain a factor of \(k_{s}\), let \(y=\alpha k_{1}=\alpha s_{1}k_{s}\) and \(z=\beta k_{2}=\beta s_{2}k_{s}\) for some \(\alpha,\beta\in\mathbb{Z}\). Then, equivalently, we seek the proportion of \((\alpha,\beta)\) pairs such that
\[\beta s_{2}\equiv\alpha s_{1}\pmod{ms_{1}s_{2}}.\]
Since \(s_{1},s_{2}\) are coprime, we must have \(\alpha=as_{2}\) for some \(a\in\mathbb{Z}\). Exactly \(\frac{1}{s_{2}}\) of the possible \(\alpha\)-values are multiples of \(s_{2}\). Then
\[\beta s_{2}\equiv as_{1}s_{2}\pmod{ms_{1}s_{2}}\]
which has solutions only for \(\beta\equiv as_{1}\pmod{b}\). Out of the \(m_{2}=ms_{1}\) equivalence classes that \(z\) can fall into, only the one defined by \(\beta=as_{1}\) works. Therefore,
\[Fr_{b}(d_{1},d_{2},n)=\frac{1}{s_{2}}\left(\frac{1}{ms_{1}}\right)=\frac{1}{ ms_{1}s_{2}}=\frac{k_{s}}{b}=\frac{\gcd\left(\frac{n}{d_{1}},\frac{n}{d_{2}},b \right)}{b}.\qed\]
Figure 3: A visualization of our factorization of \(b=\gcd(r,n)\), where the overlap of any circle or shaded region with the circle for \(b\) contains a factorization for their greatest common divisor. Note that the purple region is the overlap of the red and blue regions.
Since \(b=\gcd(r,n)\), we have \(\gcd\left(\frac{n}{d_{1}},\frac{n}{d_{2}},b\right)=\gcd\left(\frac{n}{d_{1}}, \frac{n}{d_{2}},r\right)\), which completes the proof of Theorem 5.2 and allows us to express \(S_{coroot}\) solely in terms of \(c,d,r,\) and \(n\).
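As with Theorem 4.7, the formula for \(S_{coroot}\) can be checked by brute force on small parameters (our check, not part of the argument):

```python
# Compare the formula of Theorem 5.2 with a brute-force count of solutions
# to the coroot equations.
from itertools import product
from math import gcd

def s_coroot_brute(c, d, r, n):
    count = 0
    for x in product(range(n), repeat=r):
        ok = all((c - d) * (x[i] - x[r - 1]) % n == 0 for i in range(r - 1))
        if ok and (c + (r - 1) * d) * sum(x) % n == 0:
            count += 1
    return count

def s_coroot_formula(c, d, r, n):
    d1 = gcd((c - d) % n, n)
    d2 = gcd((c + (r - 1) * d) % n, n)
    return d1 ** (r - 1) * d2 * gcd(gcd(n // d1, n // d2), r)

for r, n in [(2, 8), (3, 6), (4, 4), (4, 8)]:
    for c in range(n):
        for d in range(n):
            assert s_coroot_brute(c, d, r, n) == s_coroot_formula(c, d, r, n)
print("Theorem 5.2 verified on the tested parameters")
```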
## 6 Proof of Main Theorem 1 Part 2
We are now ready to prove the second part of our main result. In this section, we show how the coroot equations are obtained from the cocharacter equations, and how this relates \(S_{coroot}\) and \(S_{cochar}\).
**Theorem 6.1**.: _The number of solutions to the cocharacter equations can also be defined as_
\[S_{cochar}=S_{coroot}\cdot\frac{1}{n}\cdot\operatorname{lcm}\left(\gcd\left( d_{2},\frac{dn}{d_{1}}\right),\frac{n}{\gcd(n,r)}\right).\]
Let \(M(c,d,r,n):=\operatorname{lcm}\left(\gcd\left(d_{2},\frac{dn}{d_{1}}\right), \frac{n}{\gcd(n,r)}\right).\) We will also prove the following formula for \(M(c,d,r,n)\), which will be useful for our investigation into special dimensions for \(\mathfrak{W}\) in Section 7.
**Proposition 6.2**.: _Let \(r,n\) have prime factorizations \(r=p_{1}^{\ell_{1}}p_{2}^{\ell_{2}}\ldots p_{j}^{\ell_{j}}\) and \(n=p_{1}^{m_{1}}p_{2}^{m_{2}}\ldots p_{j}^{m_{j}}\). For every \(1\leq i\leq j\), let_
\[(c-d)\equiv c_{i}p_{i}^{s_{i}}\pmod{p_{i}^{m_{i}}}\text{ and }d\equiv d_{i}p_{i }^{t_{i}}\pmod{p_{i}^{m_{i}}}\text{ for each }1\leq i\leq j\]
_so that \(0\leq s_{i},t_{i}\leq m_{i}\), where \(c_{i},d_{i}\) are relatively prime to \(p_{i}\). Let \(\mu_{i}=\min(m_{i},\ell_{i})\). Then_
\[M(c,d,r,n)=\prod_{i=1}^{j}p_{i}^{\max(m_{i}-\mu_{i},\min(s_{i},t_{i}+m_{i}-s_{ i}))}.\]
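The following script (ours, not part of the paper) compares the product formula of Proposition 6.2 with the direct definition \(M=\operatorname{lcm}\left(\gcd\left(d_{2},\frac{dn}{d_{1}}\right),\frac{n}{\gcd(n,r)}\right)\) for a few choices of \(r\) and \(n\):

```python
# Compare M(c,d,r,n) = lcm(gcd(d2, d*n/d1), n/gcd(n,r)) with the
# prime-by-prime product formula of Proposition 6.2.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def m_direct(c, d, r, n):
    d1 = gcd((c - d) % n, n)
    d2 = gcd((c + (r - 1) * d) % n, n)
    return lcm(gcd(d2, d * n // d1), n // gcd(n, r))

def vp(x, p, cap):
    """p-adic valuation of x mod p^cap, capped at cap (so vp(0) = cap)."""
    x %= p ** cap
    if x == 0:
        return cap
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def m_product(c, d, r, n):
    result, m = 1, n
    for p in range(2, n + 1):
        if m % p:
            continue
        mi = 0
        while m % p == 0:
            m //= p
            mi += 1
        si, ti = vp(c - d, p, mi), vp(d, p, mi)
        mui = min(mi, vp(r, p, mi))
        result *= p ** max(mi - mui, min(si, ti + mi - si))
    return result

for r, n in [(2, 8), (3, 9), (4, 12), (6, 12)]:
    for c in range(n):
        for d in range(n):
            assert m_direct(c, d, r, n) == m_product(c, d, r, n)
print("Proposition 6.2 verified on the tested parameters")
```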
To obtain the coroot equations from the cocharacter equations, we can multiply the matrix \(B_{c,d}\) which defines the cocharacter equations by the \(r\times r\) matrix
\[L_{r}=\begin{pmatrix}1&0&\ldots&0&-1\\ 0&1&\ldots&0&-1\\ \vdots&\vdots&\ddots&&\vdots\\ 0&0&&1&-1\\ 1&1&\ldots&1&1\end{pmatrix}.\]
That is, the new system of equations specified by this transformation is
\[L_{r}B_{c,d}\cdot\boldsymbol{x}\equiv\mathbf{0}_{r}\pmod{n}\]
which gives us precisely the coroot equations. Likewise, multiplying the coroot equations by
\[L_{r}^{\prime}=\begin{pmatrix}r-1&-1&\ldots&-1&1\\ -1&r-1&\ldots&-1&1\\ \vdots&\vdots&\ddots&&\vdots\\ -1&-1&&r-1&1\\ -1&-1&\ldots&-1&1\end{pmatrix}\]
obtains the cocharacter equations multiplied by \(r\). That is,
\[L_{r}^{\prime}(L_{r}B_{c,d})\cdot\boldsymbol{x}\equiv r\cdot B_{c,d}\cdot \boldsymbol{x}\equiv\mathbf{0}_{r}\pmod{n}.\]
As we discussed in Remark 5.3, if \(r\) and \(n\) are relatively prime, then \(r\) has an inverse \(r^{-1}\) in \(\mathbb{Z}/n\mathbb{Z}\) and the cocharacter and coroot equations are equivalent. However, if \(r\) is not invertible modulo \(n\), going back from the coroot equations to the cocharacter equations is more complicated.
Recall that \(b=\gcd(r,n)\). Then \(r\cdot B_{c,d}\boldsymbol{x}\equiv\boldsymbol{0}_{r}\pmod{n}\) factors into
\[b\left(\frac{r}{b}\right)\begin{pmatrix}c&d&d&\ldots&d\\ d&c&d&\ldots&d\\ d&d&c&\ldots&d\\ \vdots&\vdots&&\ddots&\vdots\\ d&d&d&\ldots&c\end{pmatrix}\begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\\ \vdots\\ x_{r}\end{pmatrix}\equiv\begin{pmatrix}0\\ 0\\ 0\\ \vdots\\ 0\end{pmatrix}\ \left(\text{mod }b\cdot\frac{n}{b}\right).\]
Since both sides of the equation and the modulus are multiples of \(b\), this implies that
\[\left(\frac{r}{b}\right)B_{c,d}\cdot\boldsymbol{x}\equiv\boldsymbol{0}_{r} \pmod{\frac{n}{b}}.\]
The number \(\frac{r}{b}\) is relatively prime to \(\frac{n}{b}\) and therefore invertible in \(\mathbb{Z}/\frac{n}{b}\mathbb{Z}\), so
\[B_{c,d}\cdot\boldsymbol{x}\equiv\boldsymbol{0}_{r}\ \ \left(\text{mod }\frac{n}{b} \right). \tag{7}\]
**Remark 6.3**.: _Again, (7) shows that the coroot and cocharacter equations are equivalent when \(r\) and \(n\) are relatively prime, since then \(n=n/b\) and (7) recovers exactly the cocharacter equations._
If we are not in that case, i.e., if \(b\neq 1\), then for any coroot solution \(\boldsymbol{x}\), we get
\[B_{c,d}\boldsymbol{x}\equiv\frac{n}{b}\boldsymbol{v}\pmod{n}\]
for some vector \(\boldsymbol{v}\in\mathbb{Z}^{r}\). We now show that each solution to the coroot equations also satisfies inhomogeneous cocharacter equations for particular values of \(a\).
**Lemma 6.4**.: _The coroot equations are equivalent to the inhomogeneous cocharacter equations with the condition that \(a\in(n/b)\mathbb{Z}\)._
Proof.: Let \(\boldsymbol{x}\) be a solution to the coroot equations and let the \(i\)th row of the left-hand side of Equation (7) be
\[w_{i}=cx_{i}+\sum_{j\neq i}dx_{j}.\]
Then by definition,
\[w_{i}-w_{r}=(c-d)(x_{i}-x_{r})\equiv 0\pmod{n}.\]
Thus, for some \(k\in\{1,...,b\}\),
\[B_{c,d}\boldsymbol{x}\equiv\frac{n}{b}k\cdot\boldsymbol{1}_{r}\pmod{n}. \tag{8}\]
Similarly, suppose \(\boldsymbol{x}\) satisfies (8). Then defining \(w_{i}\) as above,
\[(c-d)(x_{i}-x_{r}) \equiv w_{i}-w_{r}\equiv k\frac{n}{b}-k\frac{n}{b}\equiv 0 \pmod{n}\] \[(c+(r-1)d)(x_{1}+x_{2}+\ldots x_{r}) \equiv w_{1}+w_{2}+\cdots+w_{r}\equiv r\left(k\frac{n}{b}\right) \equiv 0\pmod{n}.\qed\]
In Section 4 we showed that Equation (8) has solutions if and only if \(k\frac{n}{b}\) is a multiple of \(A(c,d,r,n)\), and that each class of solutions (defined by having the same \(k\)) is of the same size. As in that section, we want to find the smallest nonzero \(k\) for which (8) has a solution.
**Definition**.: Let \(\kappa(c,d,r,n)\) be the smallest positive value of \(k\) such that there is a solution to Equation (8).
Again, it will often be clear from context that we are working with a particular fixed \(c,d,r,n\) in which case we will write \(\kappa\) and \(M\) for brevity instead of \(\kappa(c,d,r,n)\) and \(M(c,d,r,n)\).
Proof of Theorem 6.1.: We can relate the values of \(\kappa(c,d,r,n)\) and \(A(c,d,r,n)\) as follows:
\[\kappa\cdot\frac{n}{b}=\operatorname{lcm}\left(A,\frac{n}{b}\right)= \operatorname{lcm}\left(\gcd\left(d_{2},\frac{dn}{d_{1}}\right),\frac{n}{b} \right).\]
Let \(M(c,d,r,n):=\operatorname{lcm}\left(\gcd\left(d_{2},\frac{dn}{d_{1}}\right), \frac{n}{b}\right).\) Then there are \(\frac{n}{M}=\frac{b}{\kappa}\) equivalence classes of solutions to Equation (8). Exactly one of these equivalence classes--the one given by \(k=b\)--gives the solutions to the cocharacter equations. Therefore,
\[S_{cochar}=S_{coroot}\cdot\frac{M}{n}=S_{coroot}\cdot\frac{\kappa}{b}.\]
Substituting in our earlier expressions for the values of \(S_{coroot}\) and \(M\), we obtain
\[S_{cochar}=\frac{d_{1}^{r-1}d_{2}}{n}\gcd\left(\frac{n}{d_{1}},\frac{n}{d_{2} },r\right)\,\operatorname{lcm}\left(\frac{n}{\gcd(r,n)},\gcd\left(d_{2},\frac {dn}{d_{1}}\right)\right).\qed\]
Although it is not immediately clear from looking at this equation, this formula is equivalent to the one given in Theorem 4.7. One area of future work would be to simplify this expression and more directly understand why it is equivalent to the statement of Theorem 4.7. Furthermore, while this formula appears more complicated than that of Theorem 4.7, this approach is perhaps more suitable for extending past \(GL_{r}(F)\), as the metaplectic Whittaker functions developed in Section 2 can be defined over any reductive group and this approach is more closely related to the root data structure of reductive groups.
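One can at least confirm the equivalence numerically; the following check (ours) compares the two expressions over a range of parameters.

```python
# Check numerically that the expressions of Theorem 4.7 and Theorem 6.1 agree.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def formula_4_7(c, d, r, n):
    d1 = gcd((c - d) % n, n)
    d2 = gcd((c + (r - 1) * d) % n, n)
    return d1 ** (r - 1) * gcd(d2, d * n // d1)

def formula_6_1(c, d, r, n):
    d1 = gcd((c - d) % n, n)
    d2 = gcd((c + (r - 1) * d) % n, n)
    s_coroot = d1 ** (r - 1) * d2 * gcd(gcd(n // d1, n // d2), r)
    return s_coroot * lcm(n // gcd(r, n), gcd(d2, d * n // d1)) // n

for r in range(2, 7):
    for n in range(2, 25):
        for c in range(n):
            for d in range(n):
                assert formula_4_7(c, d, r, n) == formula_6_1(c, d, r, n)
print("the two formulas agree on all tested parameters")
```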
Using the same visualization tables we used in Section 3 for \(S_{cochar}\), we can see more directly how \(M\) and \(\kappa\) change as we vary \(c,d,r,\) and \(n\). Here, for a fixed \(r,n\), let the entry in position \((c,d)\) be \(\kappa(c,d,r,n)\). (To achieve a matching table for \(M\), multiply the \(\kappa\) table by \(n/b\).)
**Example**.: For \(n=8=2^{3}\) and \(r=2^{\ell}\), the following tables show how \(\kappa\) changes as \(\ell\) increases from \(1\) to \(3\). Because \(M\) and \(\kappa\) depend on \(\mu=\min(\ell,m)\) rather than on \(\ell\), any \(\kappa\) table for \(\ell>3\) would be identical to the table for \(\ell=3\).
The entries in these tables are determined by the main diagonals they lie on, which are described by \(s\), and the columns they lie in, which are described by \(t\) and index how far down the main diagonal an entry is. In particular, notice that the only difference between the matrices for \(\ell\) and \(\ell+1\) is that a specific fraction of the elements on each of the diagonals in the latter matrix have been multiplied by \(2\). For example, for \(\ell=2\), this fraction is \(1/4\) for the red diagonal and \(1/2\) for the blue.
These tables are also useful for visualizing the effect of combining distinct primes.
**Example**.: When \(r=2^{2}\) and \(n=2^{2}\cdot 3\), we see that the table for \(\kappa\) is a \(3\times 3\) tessellation of that for
\(r=2^{2},n=2^{2}\):
\[\begin{array}{c|cccc}
\kappa&d=0&d=1&d=2&d=3\\
\hline
c=0&4&1&2&1\\
c=1&1&1&1&2\\
c=2&2&1&2&1\\
c=3&1&2&1&1
\end{array}\]
Proof of Proposition 6.2 (continued).
Next suppose that \(s_{i}>t_{i}+\mu_{i}\). Then we have that the power of \(p_{i}\) in the gcd is \(\min\{t_{i}+\mu_{i},t_{i}+m_{i}-s_{i}\}\). However, then the power of \(p_{i}\) in \(M\) is
\[\max\{m_{i}-\mu_{i},\min\{t_{i}+\mu_{i},t_{i}+m_{i}-s_{i}\}\}=m_{i}-\mu_{i},\]
since \(s_{i}>t_{i}+\mu_{i}\), so \(t_{i}+m_{i}-s_{i}<m_{i}-\mu_{i}\).
Lastly, if \(s_{i}=t_{i}+\mu_{i}\), then
\[c-d+rd\equiv p_{i}^{s_{i}}(c_{i}+r_{i}d_{i})\pmod{p_{i}^{m_{i}}}.\]
Note that \(c_{i}+r_{i}d_{i}\) may create an additional factor \(p_{i}^{\tau_{i}}\) for some integer \(\tau_{i}\geq 0\). Then the power of \(p_{i}\) appearing in the gcd is
\[\min\{s_{i}+\tau_{i},m_{i},t_{i}+m_{i}-s_{i}\}=\min\{s_{i}+\tau_{i},m_{i}-\mu_ {i}\}\]
Then the power of \(p_{i}\) in \(M\) is
\[\max\{\min\{s_{i}+\tau_{i},m_{i}-\mu_{i}\},m_{i}-\mu_{i}\}=m_{i}-\mu_{i}.\]
Collecting the three cases together, the expression
\[\max\{m_{i}-\mu_{i},\min\{s_{i},t_{i}+m_{i}-s_{i}\}\}\]
matches the power of \(p_{i}\) in \(M\) in each case, completing the proof of the proposition.
**Corollary 6.5**.: _The quantity \(\kappa(c,d,r,n)\) is multiplicative over powers of distinct primes._
In the next section, we will see that the two different approaches for \(S_{cochar}\) are each useful in different ways. One potentially fruitful avenue for future exploration would be to see precisely why these two formulae are equal, as it is not easily apparent. As the second approach relates more directly to the root structure of \(GL_{r}\) as a reductive group, but the first approach yields a simpler formula and proof, this connection would illuminate a way to extend the simpler formula to general reductive groups.
## 7 Structure of the Whittaker Space
These investigations into the structure of \(\Lambda_{fin}\) not only give us a method of calculating \(\dim(\mathfrak{W})\), they also illuminate how the parameters \(c,d,r,\) and \(n\) affect the structure of \(\mathfrak{W}\) in different ways. In this section, we start with a few natural corollaries to both parts of Main Theorem 1 (Theorems 4.7 and 6.1) and discuss how they relate to the literature. We then develop necessary and sufficient conditions for \(\dim(\mathfrak{W})\) to be of maximum and minimum dimension, as well as the conditions for several other desirable dimensions for further connections.
**Corollary 7.1**.: _From Theorem 4.7, we have the following natural results about \(\dim(\mathfrak{W})\):_
\[\dim(\mathfrak{W})=\begin{cases}\left(\frac{n}{\gcd(c,n)}\right)^{r}&\text{ if }d\equiv 0\pmod{n}\\ \left(\frac{n}{\gcd(d,n)}\right)^{r-1}\cdot\frac{n}{\gcd((r-1)d,n)}&\text{ if }c \equiv 0\pmod{n}\end{cases}\]
Proof.: Recall that by Theorem 1.1, we have \(\dim(\mathfrak{W})=n^{r}/|S_{cochar}(c,d,r,n)|\). Then if \(d\equiv 0\pmod{n}\),
\[S_{cochar}(c,d,r,n) =\gcd(c-d,n)^{r-1}\gcd\left(c+(r-1)d,n,\frac{dn}{\gcd(c-d,n)}\right)\] \[=\gcd(c,n)^{r}.\]
Likewise, if \(c\equiv 0\pmod{n}\), then
\[S_{cochar}(c,d,r,n) =\gcd(-d,n)^{r-1}\gcd\left((r-1)d,n,\frac{dn}{\gcd(-d,n)}\right)\] \[=\gcd(d,n)^{r-1}\gcd((r-1)d,n).\qed\]
As we can see from this corollary, the parameters \(c\) and \(d\) play significantly different roles in influencing the structure of the Whittaker function space. In the simplest \(n\)-fold metaplectic cover \((c=1,d=0)\), we see \(|\widetilde{T}/H|=n^{r}\), which allowed Brubaker, Bump, and Buciumas to map \(\mathfrak{W}\) isomorphically to a quantum module of dimension \(n^{r}\) in [1] to explain the lattice model phenomena discovered by Brubaker, Bump, Chinta, Friedberg, and Gunnells [2]. In the same spirit, the second author showed in [7] that this connection extends quite naturally to an isomorphism for any cover coming from a diagonal matrix (i.e., \(d\equiv 0\)). However, incorporating the parameter \(d\) adds complications, as the quantum module (which we will discuss later in Section 8) _does not see_ the factor of \(\gcd\left(c+(r-1)d,n,\frac{dn}{\gcd(c-d,n)}\right)\) appearing in \(\dim(\mathfrak{W})\). Thus, to understand this connection, we will need additional information about the structure of \(\mathfrak{W}\).
**Corollary 7.2**.: _We have \(\dim(\mathfrak{W})=1\) (that is, of minimum size) if and only if \(c\equiv d\equiv 0\pmod{n}\)._
Proof.: The backward direction follows from Corollary 7.1. Now assume \(S_{cochar}=n^{r}\). Since each of the \(r\) factors in Theorem 4.7 are factors of \(n\), we must have \(d_{1}=\gcd(c-d,n)=n\) and so
\[S_{cochar}=n^{r-1}\gcd\left(c+(r-1)d,n,c,d\right).\]
So we must also have \(\gcd(n,c,d)=n\), which requires that \(c,d\equiv 0\pmod{n}\).
**Corollary 7.3**.: _We have \(\dim(\mathfrak{W})=n^{r}\) (that is, of maximum size) if and only if \(c-d\) and \(c+(r-1)d\) are coprime to \(n\)._
Proof.: It suffices to show that \(S_{cochar}=1\) if and only if \(d_{1}=d_{2}=1\). The backwards direction is easiest to see from Theorem 6.1: if \(d_{1}=d_{2}=1\), then
\[S_{cochar}=\frac{1}{n}\gcd(r,n)\cdot\operatorname{lcm}\left(\frac{n}{\gcd(r, n)},1\right)=1.\]
For the forward direction, we use Theorem 4.7. Here, \(S_{cochar}=1\) implies both \(d_{1}^{r-1}=1\) (and thus \(d_{1}=1\)) and \(\gcd\left(d_{2},\frac{dn}{d_{1}}\right)=1\). Since \(d_{1}=1\) we then have \(\gcd\left(d_{2},dn\right)=1\), which tells us that \(d_{2}\) must be relatively prime to \(n\). But \(d_{2}=\gcd(c+(r-1)d,n)\), and thus \(d_{2}=1\).
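Both corollaries are easy to confirm numerically using the closed formula of Theorem 4.7 (our check, not part of the proofs):

```python
# Check Corollaries 7.2 and 7.3: dim(W) = n^r / S_cochar is 1 exactly when
# c = d = 0 (mod n), and n^r exactly when c-d and c+(r-1)d are coprime to n.
from math import gcd

def dim_w(c, d, r, n):
    d1 = gcd((c - d) % n, n)
    d2 = gcd((c + (r - 1) * d) % n, n)
    return n ** r // (d1 ** (r - 1) * gcd(d2, d * n // d1))

r, n = 3, 12
for c in range(n):
    for d in range(n):
        d1 = gcd((c - d) % n, n)
        d2 = gcd((c + (r - 1) * d) % n, n)
        assert (dim_w(c, d, r, n) == 1) == (c % n == 0 and d % n == 0)
        assert (dim_w(c, d, r, n) == n ** r) == (d1 == 1 and d2 == 1)
print("Corollaries 7.2 and 7.3 verified for r=3, n=12")
```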
We will later see that both maximizing and minimizing \(\mathfrak{W}\) result in very nice quantum connections.
It is also intriguing to ask when the diagonal number phenomenon developed in Section 3 matches the dimension precisely: that is, when is \(|\Lambda_{fin}|=d_{1}^{r-1}d_{2}\)? One case in which this is true is fairly straightforward.
**Corollary 7.4**.: _If \(n\) and \(r\) are relatively prime, \(\dim(\mathfrak{W})=n^{r}/\left(d_{1}^{r-1}d_{2}\right)\)._
Proof.: Suppose \(\gcd(r,n)=1\) and consult Theorem 6.1. Then,
\[S_{cochar}=\frac{d_{1}^{r-1}d_{2}}{n}\cdot\operatorname{lcm}\left(n,\gcd \left(c+(r-1)d,n,\frac{dn}{d_{1}}\right)\right)=d_{1}^{r-1}d_{2}\]
as the gcd above is a factor of \(n\).
However, the general conditions are a bit more complicated.
**Proposition 7.5**.: _Suppose \(n=p_{1}^{m_{1}}\cdots p_{j}^{m_{j}}\), and for each \(p_{i}\), we have \(c-d\equiv c_{i}p_{i}^{s_{i}}\pmod{p_{i}^{m_{i}}}\), \(d\equiv d_{i}p_{i}^{t_{i}}\pmod{p_{i}^{m_{i}}}\), and \(r\equiv r_{i}p_{i}^{\mu_{i}}\pmod{p_{i}^{m_{i}}}\), where \(c_{i},d_{i},\) and \(r_{i}\) are coprime to \(p_{i}\). Then we have \(\dim(\mathfrak{W})=n^{r}/\left(d_{1}^{r-1}d_{2}\right)\) if and only if one of the following three conditions is true for every \(i\):_
* \(s_{i}<t_{i}+\mu_{i}\) _and_ \(2s_{i}\leq t_{i}+m_{i}\)_,_
* \(s_{i}>t_{i}+\mu_{i}\) _and_ \(s_{i}\leq m_{i}-\mu_{i}\)_, or_
* \(s_{i}=t_{i}+\mu_{i}\) _and_ \(2s_{i}+\tau_{i}\leq t_{i}+m_{i}\)_, where_ \(\tau_{i}\) _is the number of powers of_ \(p_{i}\) _in_ \(c_{i}+r_{i}d_{i}\)_._
Proof.: Using Theorem 4.7, \(S_{cochar}=d_{1}^{r-1}d_{2}\) if and only if \(\gcd(d_{2},\frac{dn}{d_{1}})=d_{2}\). That is, precisely when \(d_{2}\) divides \(\frac{dn}{d_{1}}\). Since the left side divides \(n\), it suffices to check that for every prime factor \(p_{i}\) of \(n\), the power of \(p_{i}\) in \(d_{2}\) is at most that in \(\frac{dn}{d_{1}}\). Given any \(p_{i}\), by the proof of Proposition 6.2, we know that the power of \(p_{i}\) on the right hand side is \(t_{i}+m_{i}-s_{i}\). Similarly, the power of \(p_{i}\) appearing in \(c-d+rd\) is \(\min\{s_{i},t_{i}+\mu_{i}\}+\tau_{i}\cdot\delta_{s_{i}=t_{i}+\mu_{i}}\), where \(\tau_{i}\) is the power of \(p_{i}\) appearing in \(c_{i}+r_{i}d_{i}\). Thus, it suffices to determine exactly when
\[\min\{s_{i},t_{i}+\mu_{i}\}+\tau_{i}\cdot\delta_{s_{i}=t_{i}+\mu_{i}}\leq t_{i }+m_{i}-s_{i}. \tag{9}\]
To do so, we split into the same cases we used in the proof of Proposition 6.2, based on the power of \(p_{i}\) appearing in \(d_{2}\). First, suppose \(s_{i}<t_{i}+\mu_{i}\). Then we wind up in the first condition, because (9) is true precisely when
\[s_{i}\leq t_{i}+m_{i}-s_{i}.\]
Then, suppose \(s_{i}>t_{i}+\mu_{i}\). Then (9) is true if and only if
\[\mu_{i}\leq m_{i}-s_{i}\]
satisfying the second condition. Lastly, suppose \(s_{i}=t_{i}+\mu_{i}\). Then (9) is equivalent to the third condition
\[2s_{i}+\tau_{i}\leq t_{i}+m_{i}.\qed\]
Using the same techniques, we can also describe all the cases when \(\dim(\mathfrak{W})=(n/d_{1})^{r}\). As we will see later, the quantum module connected to \(\mathfrak{W}\) has dimension \((n/d_{1})^{r}\), so this is a necessary condition for the map to be an isomorphism.
**Proposition 7.6**.: _Suppose \(n=p_{1}^{m_{1}}\cdots p_{j}^{m_{j}}\), and for each \(p_{i}\), we have \(c-d\equiv c_{i}p_{i}^{s_{i}}\pmod{p_{i}^{m_{i}}}\), \(d\equiv d_{i}p_{i}^{t_{i}}\pmod{p_{i}^{m_{i}}}\), and \(r\equiv r_{i}p_{i}^{\mu_{i}}\pmod{p_{i}^{m_{i}}}\), where \(c_{i},d_{i}\), and \(r_{i}\) are coprime to \(p_{i}\). Then \(\dim(\mathfrak{W})=(n/d_{1})^{r}\) if and only if for every \(i\), we have \(2s_{i}\leq m_{i}+t_{i}\) and at least one of the following conditions:_
* \(s_{i}<t_{i}+\mu_{i}\)_,_
* \(s_{i}=t_{i}+\mu_{i}\) _and_ \(2s_{i}=t_{i}+m_{i}\)_, or_
* \(s_{i}=t_{i}+\mu_{i}\) _and_ \(c+(r-1)d\) _contains no additional powers of_ \(p_{i}\)_._
Proof.: Using Theorem 4.7, \(\dim(\mathfrak{W})=(n/d_{1})^{r}\) if and only if \(\gcd(d_{2},\frac{dn}{d_{1}})=d_{1}\). Using the machinery developed in the proof of Proposition 6.2, notice that both sides are factors of \(n\), so it suffices to check that the powers of each prime \(p_{i}\) appearing in the prime factorization of \(n\) match.
Let \(n=p_{1}^{m_{1}}\cdots p_{j}^{m_{j}}\) and suppose that \(c-d\equiv c_{i}p_{i}^{s_{i}}\pmod{p_{i}^{m_{i}}}\) and \(d\equiv d_{i}p_{i}^{t_{i}}\pmod{p_{i}^{m_{i}}}\), where \(c_{i}\) and \(d_{i}\) are coprime to \(p_{i}\). Also, note that \(r\equiv r_{i}p_{i}^{\mu_{i}}\pmod{p_{i}^{m_{i}}}\), where \(r_{i}\) is also coprime to \(p_{i}\). Then the power of \(p_{i}\) appearing in \(d_{1}\) is \(s_{i}\). From the proof of Proposition 6.2, recall that the power of \(p_{i}\) appearing in \(\frac{dn}{d_{1}}\) is \(t_{i}+m_{i}-s_{i}\) and the power of \(p_{i}\) appearing in \(c-d+rd\) is \(\min\{s_{i},t_{i}+\mu_{i}\}+\tau_{i}\cdot\delta_{s_{i}=t_{i}+\mu_{i}}\), where \(\tau_{i}\) is the power of \(p_{i}\) appearing in \(c_{i}+r_{i}d_{i}\). So \(\gcd(d_{2},\frac{dn}{d_{1}})=d_{1}\) if and only if
\[\min\{\min\{s_{i},t_{i}+\mu_{i}\}+\tau_{i}\cdot\delta_{s_{i}=t_{i}+\mu_{i}},t_{i }+m_{i}-s_{i}\}=s_{i}. \tag{10}\]
As in the proof of Proposition 6.2, we split into three cases. If \(s_{i}<t_{i}+\mu_{i}\), then (10) gives us
\[\min\{s_{i},t_{i}+m_{i}-s_{i}\}=s_{i},\]
which is true precisely when \(t_{i}+m_{i}-s_{i}\geq s_{i}\), satisfying the first conditions.
If \(s_{i}>t_{i}+\mu_{i}\), then we have a contradiction, since the minimum in (10) is already less than \(s_{i}\), and vice versa.
Finally, if \(s_{i}=t_{i}+\mu_{i}\), then (10) is
\[\min\{s_{i}+\tau_{i},t_{i}+m_{i}-s_{i}\}=s_{i},\]
which is true exactly when \(2s_{i}=t_{i}+m_{i}\) or \(\tau_{i}=0\) and \(s_{i}\leq t_{i}+m_{i}-s_{i}\), satisfying the second and third conditions, respectively.
We have seen in this section that the two different formulations of Theorems 4.7 and 6.1 are useful for many different purposes. While the approach used to generate Theorem 6.1 provides a more natural path for generalization beyond \(GL_{r}(F)\), it would be interesting in future work to investigate whether there is an analogous approach to that used in Theorem 4.7 for other reductive groups. In particular, understanding how Theorems 4.7 and 6.1 are related for the case of \(GL_{r}(F)\) will illuminate a path for extending this connection further.
## 8 Quantum Connections
Finally, we marshal together results from the previous sections to investigate how the space of Whittaker functions is connected to quantum group modules, building the necessary quantum definitions along the way.
Let \(U_{q}(c,d,n)\) be the affine quantum group \(U_{q}(\widehat{\mathfrak{gl}}(n/\mathcal{d}_{1}))\), where \(q\) is the cardinality of the residue field for our nonarchimedean local field \(F\). For the results of this paper, we will not need the precise definition here, so we refer the reader to Chari and Pressley [6] for the details of the construction and instead note merely a few interesting facts about \(U_{q}(c,d,n)\). First, despite the name, \(U_{q}(c,d,n)\) is not a group, but rather an algebra, specifically a quasitriangular Hopf algebra. That is, it is both an algebra and a coalgebra, so it comes equipped with not only multiplication and a unit map but also comultiplication, a counit, and an antipode map relating the algebra and coalgebra structures. Furthermore, this quantum group has a very nice set of modules which we can model concretely.
**Definition**.: For \(z\in\mathbb{C}\), let \(V_{+}(z)\) be an _evaluation module_, or _evaluation representation_, for \(U_{q}(c,d,n)\). Again, we will not need the full structure of this representation for this paper, but following Kojima [12] as a convenient source, note that \(V_{+}(z)\) is \(n/\mathcal{d}_{1}\)-dimensional and its basis may be parametrized by the elements of \(\mathbb{Z}/(n/\mathcal{d}_{1})\mathbb{Z}\).
In addition, \(U_{q}(c,d,n)\) comes with an invertible element called a _universal R-matrix_\(R\in U_{q}(c,d,n)\otimes U_{q}(c,d,n)\), which acts on tensor products of \(U_{q}(c,d,n)\)-modules. Choosing a particular pair of modules and their bases, \(R\) becomes an honest-to-goodness matrix.
It is this \(R\)-matrix that sparked the connection between Whittaker functions and quantum groups: \(R\)-matrices are natural sources for solutions to Yang-Baxter equations, functional relations from statistical mechanics that arise, among other places, in the theory of lattice models. In [2], Brubaker, Bump, Chinta, Friedberg, and Gunnells constructed an ice-type lattice model called _Metaplectic Ice_ which computes metaplectic Whittaker functions for the nicest cover (\(c=1,d=0\), so \(\mathcal{d}_{1}=\mathcal{d}_{2}=1\)). However, the Yang-Baxter equation for this model was unknown until Brubaker, Bump, and Buciumas identified it as a Drinfeld twist of the \(R\)-matrix for \(U_{q}(\widehat{\mathfrak{gl}}(n))\) in [1]. Using the lattice model as a bridge, they mapped the space of Whittaker functions on this cover isomorphically into the tensor product \(V_{+}(z_{1})\otimes\cdots\otimes V_{+}(z_{r})\), where \(z_{i}\in\mathbb{C}\) are the Satake parameters for the principal series representation on which the Whittaker function space \(\mathfrak{W}=\mathfrak{W}^{\mathsf{z}}\) is built. Under this isomorphism, the action of the \(R\)-matrix on the components of \(V_{+}(z_{1})\otimes\cdots\otimes V_{+}(z_{r})\) matches _precisely_ the action of intertwining operators on the principal series representation and thus the Whittaker function space.
Fantastically, this connection extends for _any_ metaplectic cover of \(GL_{r}(F)\). In [7], the second author built a generic lattice model for an arbitrary covering group, and used it to construct a map between the space of Whittaker functions and a quantum group module for the quantum group \(U_{q}(c,d,n)=U_{q}(\widehat{\mathfrak{gl}}(n/\mathcal{d}_{1}))\). However, as we saw already from the formulae for \(\dim(\mathfrak{W})\) and the structure theory in Section 7, changing the parameters \(c\) and \(d\) results in a significantly more complicated function space. These complications extend to the map, as the quantum space changes differently than \(\mathfrak{W}\) does. In spite of this, the map prescribed by the lattice model in the fully general case is still a homomorphism and it matches exactly the actions of the R-matrix on the right side to those of the intertwining operators on the left.
Consider the tensor product \(V_{+}(z_{1})\otimes\cdots\otimes V_{+}(z_{r})\) of quantum group evaluation modules for \(U_{q}(c,d,n)\). As a vector space, we have
\[\dim\Bigl{(}V_{+}(z_{1})\otimes\cdots\otimes V_{+}(z_{r})\Bigr{)}=\left(\frac {n}{\mathcal{d}_{1}}\right)^{r}.\]
Note that unlike either of the formulae for \(\dim(\mathfrak{W})\) in Main Theorem 1, this formula is not affected by \(\mathcal{d}_{2}\).
Now we state the connection precisely. Using Theorem 1.1, take representatives for the cosets \(\widetilde{T}/H\) from the set \((\mathbb{Z}/n\mathbb{Z})^{r}\). Using Theorem 2.2, construct from these representatives a basis for \(\mathfrak{W}\).
**Theorem 8.1** (Frechette [7], Theorem 1.1).: _Let \(\rho=(r-1,...,2,1,0)\). For \(\mathbf{z}\in\mathbb{C}^{r}\), the map_
\[\theta_{\mathbf{z}}:\mathfrak{W}^{\mathbf{z}} \to V_{+}(z_{1})\otimes\cdots\otimes V_{+}(z_{r})\] \[\mathbf{\nu} \mapsto \rho-\mathbf{\nu}\pmod{n/\delta_{1}},\]
_where the modulus is taken in each component, is a homomorphism compatible with the actions of intertwining operators on \(\mathfrak{W}\) and the R-matrix on the quantum tensor product._
One of the difficulties that arose in extending from the nicest cover to generic covers is that the lattice model specifies a choice of basis for \(\mathfrak{W}\) that makes this map a homomorphism, but the lattice model itself is not necessary for the proof and serves as a removable bridge between the Whittaker function space and the quantum group model. Without the lattice model, however, there is no canonical choice of basis for \(\mathfrak{W}\), so we ask: when is this map well-defined regardless of the choice of representative for each coset in \(\widetilde{T}/H\)?
Using the structure of \(\mathfrak{W}\) developed in Section 7, we can investigate this map more precisely, and arrive at the following theorem, which is a restatement of the first part of Main Theorem 2.
**Theorem 8.2**.: _For the metaplectic cover \(\widetilde{G}_{c,d,r,n}\), the map \(\theta_{\mathbf{z}}:\mathfrak{W}\to V_{+}(z_{1})\otimes\cdots\otimes V_{+}(z_{r})\) from Theorem 8.1 is well-defined independent of choice of coset representatives for \(\widetilde{T}/H\) if and only if_
\[\gcd\left(\delta_{2},\frac{dn}{\delta_{1}}\right)=\gcd(c,d,n).\]
Proof.: Using the characterization developed in Theorem 6.1, \(\theta_{\mathbf{z}}\) is well-defined if and only if all the elements in \(\Lambda_{fin}\) map to the same element in the module. Using the description of Proposition 4.4, write \(\mathbf{x}=x_{1}\cdot\mathbf{1}_{r}+\frac{n}{\delta_{1}}(0,v_{2},v_{3},...,v_{r})\) and \(\mathbf{y}=y_{1}\cdot\mathbf{1}_{r}+\frac{n}{\delta_{1}}(0,v_{2}^{\prime},v_{3}^{ \prime},...,v_{r}^{\prime})\), for \(x_{1},y_{1},v_{i},v_{i}^{\prime}\in\mathbb{Z}\) for all \(i\). Thus,
\[\theta_{\mathbf{z}}(\mathbf{x})-\theta_{\mathbf{z}}(\mathbf{y})=(y_{1}-x_{1})\cdot\mathbf{1}_ {r}\pmod{n/\delta_{1}}.\]
Since \(\mathbf{x},\mathbf{y}\in\Lambda_{fin}\), we have \(\mathbf{y}-\mathbf{x}\in\Lambda_{fin}\) as well, so the defining cocharacter equations give more information about the possible values of \(y_{1}-x_{1}\). Using the first cocharacter equation, there exists \(k\in\mathbb{Z}\) such that
\[(c+(r-1)d)\cdot(y_{1}-x_{1})\equiv\frac{dn}{\delta_{1}}\cdot k\pmod{n}.\]
Varying over all \(\mathbf{x},\mathbf{y}\in\Lambda_{fin}\), the possible values for the right hand side of this equation are precisely the integer multiples of \(\gcd\left(n,\frac{dn}{\delta_{1}}\right).\) Then, both sides must be a multiple of \(\operatorname{lcm}\left(c+(r-1)d,\gcd\left(n,\frac{dn}{\delta_{1}}\right)\right)\). Using the fact that \(\operatorname{lcm}(A,B)=(A\cdot B)/\gcd(A,B)\), the possible values for \(y_{1}-x_{1}\) are all the integer multiples of
\[\frac{\gcd\left(n,\frac{dn}{\delta_{1}}\right)}{\gcd\left(c+(r-1)d,n,\frac{dn }{\delta_{1}}\right)}=\frac{\frac{n}{\delta_{1}}\gcd\left(d_{1},d\right)}{ \gcd\left(d_{2},\frac{dn}{\delta_{1}}\right)}=\frac{n}{\delta_{1}}\cdot\frac{ \gcd\left(c,d,n\right)}{\gcd\left(d_{2},\frac{dn}{\delta_{1}}\right)}. \tag{11}\]
Going back to the map, \(\theta_{\mathbf{z}}(\mathbf{x})-\theta_{\mathbf{z}}(\mathbf{y})=0\) if and only if \(y_{1}-x_{1}\equiv 0\pmod{n/\delta_{1}}\). Since \(\gcd(c,d,n)\) divides both \(\delta_{2}\) and \(\frac{dn}{\delta_{1}}\), we have that the expression in (11) is a multiple of \(\frac{n}{\delta_{1}}\) if and only if \(\gcd(c,d,n)=\gcd\left(d_{2},\frac{dn}{\delta_{1}}\right)\). Therefore the map \(\theta_{\mathbf{z}}\) is well-defined for any choice of coset representatives of \(\widetilde{T}/H\) if and only if \(\gcd(c,d,n)=\gcd\left(d_{2},\frac{dn}{\delta_{1}}\right)\).
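The criterion can be checked numerically. Following the proof above, we test well-definedness by asking whether every element of \(\Lambda_{fin}\) is congruent to \(\mathbf{0}\) componentwise modulo \(n/\delta_{1}\); the script below (ours, not part of the paper) compares this with the stated gcd condition.

```python
# Check the criterion of Theorem 8.2 against a direct test of well-definedness:
# theta_z is independent of coset representatives exactly when every element of
# Lambda_fin is congruent to 0 (componentwise) mod n/d1, as in the proof above.
from itertools import product
from math import gcd

def lambda_fin(c, d, r, n):
    return [x for x in product(range(n), repeat=r)
            if all((c * x[i] + d * (sum(x) - x[i])) % n == 0 for i in range(r))]

r, n = 2, 8
for c in range(n):
    for d in range(n):
        d1 = gcd((c - d) % n, n)
        d2 = gcd((c + (r - 1) * d) % n, n)
        well_defined = all(xi % (n // d1) == 0
                           for x in lambda_fin(c, d, r, n) for xi in x)
        criterion = gcd(d2, d * n // d1) == gcd(gcd(c, d), n)
        assert well_defined == criterion
print("Theorem 8.2 criterion verified for r=2, n=8")
```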
**Corollary 8.3**.: _When \(\mathfrak{W}\) is either maximum or minimum size, \(\theta_{\mathbf{z}}\) is an isomorphism._
Proof.: If \(\mathfrak{W}\) is of maximum size \(n^{r}\), then by Corollary 7.3, we have \(\delta_{1}=\delta_{2}=1\). Thus, \(\gcd\left(\delta_{2},\frac{dn}{\delta_{1}}\right)=1\), which forces \(\gcd(c,d,n)=1\), so \(\theta_{\mathbf{z}}\) is well-defined. In this case \(\widetilde{T}/H\) is parametrized by all of \((\mathbb{Z}/n\mathbb{Z})^{r}\), and since \(n/\delta_{1}=n\), so is the quantum module. Looking at the description of \(\theta_{\mathbf{z}}\) in Theorem 8.1, we see that \(\theta_{\mathbf{z}}\) is an isomorphism by definition, flipping \(\mathfrak{W}\) and shifting by \(\rho\).
If \(\mathfrak{W}\) is of minimum size \(1\), then Corollary 7.2 shows that \(\delta_{1}=\delta_{2}=n\). Thus, \(\gcd(c,d,n)=n\), which forces \(\gcd\left(\delta_{2},\frac{dn}{\delta_{1}}\right)=n\) and makes \(\theta_{\mathbf{z}}\) well-defined. Here, \(\widetilde{T}/H\) is a single element, which maps to the single element \(\mathbf{0}\) in \((\mathbb{Z}/(n/\delta_{1})\mathbb{Z})^{r}\), since \(n/\delta_{1}=1\). Thus the map is vacuously an isomorphism.
Note that the first case of Corollary 8.3 includes the nicest cover \(c=1,d=0\) originally treated by [2] and [1], explaining why the quantum map on \(\mathfrak{W}\) for this case is an isomorphism.
Using our description of \(\Lambda_{fin}\), we intend in the future to come up with a precise description of the structure of \(\mathfrak{W}\) in the style of Corollary 8.3 for more general cases, which will allow us to characterize the precise behavior of \(\theta_{\mathbf{z}}\). In particular, we are interested in providing a companion to Proposition 7.6 by finding a sufficient condition for all cases when \(\theta_{\mathbf{z}}\) is an isomorphism. Extending our methods and results for \(\mathfrak{W}\) from \(GL_{r}(F)\) to arbitrary reductive groups will then give us more information about what the quantum objects connected to other types of reductive groups should be. While some solvable lattice models for other types exist, they have not yet been linked to modules for quantum groups or other quantum algebraic objects, so we believe that investigating the dimension and description of \(\mathfrak{W}\) for other types will illuminate likely candidates for broader quantum connections.
|
2310.14300
|
MFCC-GAN Codec: A New AI-based Audio Coding
|
In this paper, we proposed AI-based audio coding using MFCC features in an
adversarial setting. We combined a conventional encoder with an adversarial
learning decoder to better reconstruct the original waveform. Since GAN gives
implicit density estimation, therefore, such models are less prone to
overfitting. We compared our work with five well-known codecs namely AAC, AC3,
Opus, Vorbis, and Speex, performing on bitrates from 2kbps to 128kbps.
MFCCGAN_36k achieved the state-of-the-art result in terms of SNR despite a
lower bitrate in comparison to AC3_128k, AAC_112k, Vorbis_48k, Opus_48k, and
Speex_48K. On the other hand, MFCCGAN_13k also achieved high SNR=27 which is
equal to that of AC3_128k, and AAC_112k while having a significantly lower
bitrate (13 kbps). MFCCGAN_36k achieved higher NISQA-MOS results compared to
AAC_48k while having a 20% lower bitrate. Furthermore, MFCCGAN_13k obtained
NISQAMOS= 3.9 which is much higher than AAC_24k, AAC_32k, AC3_32k, and AAC_48k.
For future work, we finally suggest adopting loss functions optimizing
intelligibility and perceptual metrics in the MFCCGAN structure to improve
quality and intelligibility simultaneously.
|
Mohammad Reza Hasanabadi
|
2023-10-22T13:44:31Z
|
http://arxiv.org/abs/2310.14300v1
|
# MFCC-GAN Codec: A New AI-based Audio Coding
###### Abstract
In this paper, we proposed AI-based audio coding using MFCC features in an adversarial setting. We combined a conventional encoder with an adversarial learning decoder to better reconstruct the original waveform. Since GAN gives implicit density estimation, therefore, such models are less prone to overfitting. We compared our work with five well-known codecs namely AAC, AC3, Opus, Vorbis, and Speex, performing on bitrates from 2kbps to 128kbps.
MFCCGAN_36k achieved the state-of-the-art result in terms of SNR despite a lower bitrate in comparison to AC3_128k, AAC_112k, Vorbis_48k, Opus_48k, and Speex_48k. On the other hand, MFCCGAN_13k also achieved high SNR=27 which is equal to that of AC3_128k, and AAC_112k while having a significantly lower bitrate (13 kbps). MFCCGAN_36k achieved higher NISQA-MOS results compared to AAC_48k while having a 20% lower bitrate. Furthermore, MFCCGAN_13k obtained NISQA-MOS=3.9 which is much higher than AAC_24k, AAC_32k, AC3_32k, and AAC_48k. For future work, we finally suggest adopting loss functions optimising intelligibility and perceptual metrics in the MFCCGAN structure to improve quality and intelligibility simultaneously.
## 1 Introduction
Speech coding is an integral part of any communication system. Coding helps such systems utilise the bandwidth optimally and create more capacity to deliver information. Whereas conventional coding approaches rely more on algorithmic processes, deep learning approaches are data-driven; combining the two creates the possibility of designing low-bit-rate speech codecs. WaveNet [13] is among the pioneering audio generative models, based on the PixelCNN architecture [14]. Beyond coding, WaveNet provides a generic and flexible framework for tackling many other applications that rely on audio generation, such as Text-To-Speech (TTS) [15], speech enhancement [16], and voice conversion [17].
Since deep learning approaches can condition on certain features, the combination of conventional algorithmic approaches with deep learning could result in low-bit-rate codecs. Kleijn et al. [18] used a learned WaveNet decoder to produce audio from the bit stream generated by the encoder of Codec2 (a parametric codec designed for low bit rates) operating at 2.4 kbps. This demonstrates how a learned decoder can reconstruct the original signal better than conventional hand-engineered algorithmic decoders, even when using a low-bit-rate encoder [3]. Moreover, VQ-VAE [19] is a framework that can encode speech into a compact discrete latent representation and then reconstruct the original audio with a decoder conditioned on certain features. In fact, it can be used as an end-to-end speech codec, mapping the input audio to the discrete latent space learned by VQ-VAE and then reconstructing the signal using the decoder network conditioned on the latent representation [3].
A combination of VQVAE and WaveNet to create an architecture suited to the task of speech coding has been introduced in [3]. This architecture reduces 16-bit pulse-code modulation (PCM) encoded speech at 16 kHz sample rate (256 kbps) to a stream of discrete latent
codes at 4 kbps. An important point about utilizing VQ-VAE in speech coding is to maintain the input prosody in the reconstructed output signal. Garbacea et al. suggest using a loss term to preserve pitch (f0) and timing information. The recently released SoundStream [20] is an end-to-end neural audio codec that can efficiently compress speech, music, and general audio at the bitrates normally targeted by speech codecs. SoundStream relies on a model architecture composed of a fully convolutional encoder/decoder network and a residual vector quantizer, which are trained jointly end-to-end. SoundStream at 3 kbps outperforms Opus [21] at 12 kbps and approaches EVS [22] at 9.6 kbps. In Section II, we first introduce the proposed approach; then, in Sections III and IV, the experimental setting and assessment results are explained.
## 2 Proposed approach
In this paper, we propose an AI audio coding approach based on adversarial learning. One of the problems of conventional coding is reconstruction artifacts: one may extract compact and informative representations of the signal but fail to reconstruct the original signal faithfully. A good representation does not necessarily guarantee a good reconstruction. On the other hand, if enough data is available, AI approaches can learn the nonlinear transform better and therefore compensate for the defects that usually occur in the decoding process of conventional audio codecs. What makes AI better than conventional reconstruction approaches is that learning methods take into account both the representing features (as input) and the original signal (as target) simultaneously during the training phase, whereas conventional methods only have access to the extracted features and not to the original signal as a target. Therefore, AI-based approaches generally have a high capacity for reconstruction when big data is available.
In this paper, we have used Mel-Frequency Cepstral Coefficients (MFCCs) for our feature extraction. These features are typically used in speech recognition [23] and music information retrieval [24]. As explained, the essential problem of conventional modelling lies in reconstructing the signal from the extracted features. To overcome this problem, we suggest using a learned decoder for reconstruction: we extract audio coefficients and use an adversarial loss to generate the output raw waveform.
Figure 1: General structure of GAN
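To make the adversarial setup concrete, the following is a minimal PyTorch sketch of an MFCC-conditioned generator and a waveform discriminator. The layer widths, the 256x overall upsampling factor (matching an assumed 256-sample hop), and the least-squares adversarial loss are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch (PyTorch) of an MFCC-conditioned adversarial decoder.
# Layer sizes, upsampling factors, and the LSGAN loss are assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Upsamples MFCC frames (assumed hop of 256 samples) to a raw waveform."""
    def __init__(self, n_mfcc=13):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mfcc, 256, kernel_size=7, padding=3),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose1d(256, 128, kernel_size=16, stride=8, padding=4),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose1d(128, 64, kernel_size=16, stride=8, padding=4),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose1d(64, 32, kernel_size=8, stride=4, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(32, 1, kernel_size=7, padding=3),
            nn.Tanh(),
        )

    def forward(self, mfcc):           # mfcc: (batch, n_mfcc, frames)
        return self.net(mfcc)          # wave: (batch, 1, frames * 256)

class Discriminator(nn.Module):
    """Scores waveform chunks as real (target) or fake (reconstructed)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 15, stride=4, padding=7), nn.LeakyReLU(0.2),
            nn.Conv1d(16, 64, 15, stride=4, padding=7), nn.LeakyReLU(0.2),
            nn.Conv1d(64, 1, 3, padding=1),
        )

    def forward(self, wave):
        return self.net(wave)          # per-chunk realness scores

# One illustrative training step with a least-squares adversarial loss.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

mfcc = torch.randn(4, 13, 32)          # stand-in batch of MFCC frames
real = torch.randn(4, 1, 32 * 256)     # matching raw-audio targets

fake = G(mfcc)
loss_d = ((D(real) - 1) ** 2).mean() + (D(fake.detach()) ** 2).mean()
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

loss_g = ((D(fake) - 1) ** 2).mean()
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```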
## 3 Experimental Setting
To obtain the results, we extracted MFCCs at each frame, fed the features to the network, and trained the network to generate the output raw waveform. For our experiments, 13, 24, and 36 coefficients were extracted and evaluated, respectively. We utilised the LJ Speech dataset [25], which consists of 13100 audio waveforms; 13090 samples were used for our training set and 10 samples for our test set, with no overlapping samples. The learning rate was set to 1e-4 for both the generator and discriminator, and we trained our models for 3000 epochs.
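As a reference for the feature-extraction step, the sketch below computes per-frame MFCCs with librosa; only the number of coefficients (13, 24, or 36) follows the settings above, while the FFT size and hop length are assumptions.

```python
# Sketch of per-frame MFCC extraction with librosa; n_fft and hop_length
# are illustrative assumptions, n_mfcc follows the settings above.
import librosa
import numpy as np

def extract_mfcc(wav_path, n_mfcc=13, hop_length=256, n_fft=1024):
    y, sr = librosa.load(wav_path, sr=None)           # keep native sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=hop_length)
    return mfcc.astype(np.float32)                     # shape: (n_mfcc, frames)

# The three operating points compared in the paper simply vary n_mfcc:
# features_13 = extract_mfcc("sample.wav", n_mfcc=13)
# features_24 = extract_mfcc("sample.wav", n_mfcc=24)
# features_36 = extract_mfcc("sample.wav", n_mfcc=36)
```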
AI-based approaches learn from data, whereas rule-based approaches utilise fixed algorithmic methods; therefore, in the case of big data, AI could outperform conventional methods. NISQA-MOS represents subjective naturalness via an objective assessment.
MFCCGAN_36k achieved higher NISQA-MOS results compared to AAC_48k while having a 20% lower bitrate. Furthermore, MFCCGAN_13k obtained NISQA-MOS=3.9, which is much higher than AAC_24k (NISQA-MOS=2.7), AAC_32k and AC3_32k (NISQA-MOS=2.6 and 3.4, respectively) and AAC_48k (NISQA-MOS=3.05). STOI and PESQ illustrate the intelligibility and perceptual quality of speech. Accordingly, MFCCGAN is not as good as the other codecs in terms of intelligibility and perceptual performance. We suggest adopting loss functions optimising intelligibility and perceptual metrics; this will be taken into account in our future work.
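For completeness, the SNR figures quoted for the different codecs compare the original and decoded waveforms directly; a minimal sketch of this computation follows, where the crude length matching is an assumption.

```python
# Minimal SNR computation between an original and a reconstructed waveform.
import numpy as np

def snr_db(reference, estimate):
    """SNR in dB; assumes the two signals are time-aligned."""
    n = min(len(reference), len(estimate))            # crude length matching
    ref, est = reference[:n], estimate[:n]
    noise = ref - est
    return 10.0 * np.log10(np.sum(ref ** 2) / (np.sum(noise ** 2) + 1e-12))
```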
## Conclusion and Future Works
In this paper, we proposed AI-based audio coding using MFCC features in an adversarial setting. Based on the application and options adopted, bitrates ranging from 13 kbps to 64 kbps can be achieved. We compared our work with five well-known codecs, namely AAC, AC3, Opus, Vorbis, and Speex, operating at bitrates from 2 kbps to 128 kbps. MFCCGAN_36k achieved the state-of-the-art result in terms of SNR despite a lower bitrate in comparison to AC3_128k, AAC_112k, Vorbis_48k, Opus_48k, and Speex_48k. On the other hand, MFCCGAN_13k also achieved a high SNR of 27, which is equal to that of AC3_128k and AAC_112k, while having a significantly lower bitrate (13 kbps). MFCCGAN_36k achieved higher NISQA-MOS results compared to AAC_48k while having a 20% lower bitrate. Furthermore, MFCCGAN_13k obtained NISQA-MOS=3.9, which is much higher than AAC_24k, AAC_32k, AC3_32k, and AAC_48k.
For future work, we suggest adopting loss functions optimising intelligibility and perceptual metrics in the MFCCGAN structure.
|
2303.14238
|
Polyhedral geometry and combinatorics of an autocatalytic ecosystem
|
Developing a mathematical understanding of autocatalysis in reaction networks
has both theoretical and practical implications. We review definitions of
autocatalytic networks and prove some properties for minimal autocatalytic
subnetworks (MASs). We show that it is possible to classify MASs in equivalence
classes, and develop mathematical results about their behavior. We also provide
linear-programming algorithms to exhaustively enumerate them and a scheme to
visualize their polyhedral geometry and combinatorics. We then define cluster
chemical reaction networks, a framework for coarse-graining real chemical
reactions with positive integer conservation laws. We find that the size of the
list of minimal autocatalytic subnetworks in a maximally connected cluster
chemical reaction network with one conservation law grows exponentially in the
number of species. We end our discussion with open questions concerning an
ecosystem of autocatalytic subnetworks and multidisciplinary opportunities for
future investigation.
|
Praful Gagrani, Victor Blanco, Eric Smith, David Baum
|
2023-03-24T19:00:19Z
|
http://arxiv.org/abs/2303.14238v4
|
The geometry and combinatorics of an autocatalytic ecology in chemical and cluster chemical reaction networks
###### Abstract
Developing a mathematical understanding of autocatalysis in chemical reaction networks has both theoretical and practical implications. For a class of autocatalysis, which we term _stoichiometric autocatalysis_, we show that it is possible to classify them in equivalence classes and develop mathematical results about their behavior. We also provide a linear-programming algorithm to exhaustively enumerate them and a scheme to visualize their polyhedral geometry and combinatorics. We then define cluster chemical reaction networks, a framework for coarse-graining realistic chemical reactions using conservation laws. We find that the list of minimal autocatalytic subnetworks in a maximally connected cluster chemical reaction network with one conservation law grows exponentially in the number of species. We end our discussion with open questions concerning autocatalysis and multidisciplinary opportunities for future investigation.
###### Contents
* I Introduction
* II Chemical reaction networks and autocatalytic cycles
* II.1 Hypergraphs, stoichiometric matrix, conservation laws, and null flows
* II.2 Formal, exclusive and stoichiometric autocatalysis, and autocatalytic cores
* III Organizing an autocatalytic ecology
* III.1 Definitions
* III.1.1 Food-waste-member-core partition of an autocatalytic subnetwork
* III.1.2 Flow, species and partition productive cones
* III.2 Mathematical results
* III.3 Algorithms
* III.4 Geometry and visualization
* IV Cluster chemical reaction networks
* IV.1 Formalism
* IV.2 Complete CCRN
* IV.2.1 1-constituent CCRN
* IV.2.2 L=3 1-constituent CCRN
* IV.2.3 Number theoretic result for 1-constituent CCRN of order two
* IV.2.4 Computational challenges in scaling
* IV.3 Rule generated CCRNs
* V Discussion and future research
* A Deficiency theory and multistability in CRNs
* B Kernel of the stoichiometric matrix for the complete 1-constituent CCRN
* C Deficiency-one algorithm for L=3 1-constituent CCRN
* D Details of the algorithm for detecting autocatalytic motifs
* D.1 Checking intersection of cones
* E Minimal Autocatalytic Subnetworks for CCRN with \(L=4\) and \(L=5\)
## I Introduction
Chemical reaction network (CRN) theory offers a versatile mathematical framework in which to model complex systems, ranging from biochemistry and game theory to the origins of life [1; 2]. The usefulness of CRNs in modelling these phenomena stems from their ability to exhibit a wide range of nonlinear dynamics, and it is widely recognized that _autocatalysis_ can be seen as a basis for many of them [3; 4]. Broadly, autocatalysis is framed as the ability of a given chemical species to make more copies of itself or otherwise promote its own production.
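Since the constructions in this paper are built on the stoichiometric matrix, its right null space (steady-state null flows), and its left null space (conservation laws), the following toy sketch computes both for a two-species, two-reaction network; the example network is an illustrative assumption, not one taken from the paper.

```python
# Toy illustration: stoichiometric matrix S (species x reactions), its right
# null space (steady-state "null flows") and its left null space (conservation
# laws). The example network A + B -> 2B, B -> A is assumed for illustration.
import numpy as np
from scipy.linalg import null_space

S = np.array([[-1,  1],    # species A: consumed in r1, produced in r2
              [ 1, -1]])   # species B: produced in r1, consumed in r2

null_flows = null_space(S)            # flux vectors v with S @ v = 0
conservation_laws = null_space(S.T)   # vectors c with c^T S = 0 (conserved totals)

print("null flows:\n", null_flows)               # ~[1, 1]/sqrt(2): run both reactions in a cycle
print("conservation laws:\n", conservation_laws) # ~[1, 1]/sqrt(2): total A + B is conserved
```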
|
2309.01108
|
Acoustic-to-articulatory inversion for dysarthric speech: Are
pre-trained self-supervised representations favorable?
|
Acoustic-to-articulatory inversion (AAI) involves mapping from the acoustic
to the articulatory space. Signal-processing features like the MFCCs, have been
widely used for the AAI task. For subjects with dysarthric speech, AAI is
challenging because of an imprecise and indistinct pronunciation. In this work,
we perform AAI for dysarthric speech using representations from pre-trained
self-supervised learning (SSL) models. We demonstrate the impact of different
pre-trained features on this challenging AAI task, at low-resource conditions.
In addition, we also condition x-vectors to the extracted SSL features to train
a BLSTM network. In the seen case, we experiment with three AAI training
schemes (subject-specific, pooled, and fine-tuned). The results, consistent
across training schemes, reveal that DeCoAR, in the fine-tuned scheme, achieves
a relative improvement of the Pearson Correlation Coefficient (CC) by ~1.81%
and ~4.56% for healthy controls and patients, respectively, over MFCCs. We
observe similar average trends for different SSL features in the unseen case.
Overall, SSL networks like wav2vec, APC, and DeCoAR, trained with feature
reconstruction or future timestep prediction tasks, perform well in predicting
dysarthric articulatory trajectories.
|
Sarthak Kumar Maharana, Krishna Kamal Adidam, Shoumik Nandi, Ajitesh Srivastava
|
2023-09-03T07:44:38Z
|
http://arxiv.org/abs/2309.01108v4
|
Acoustic-to-articulatory inversion for dysarthric speech: are pre-trained self-supervised representations favorable?
###### Abstract
Acoustic-to-articulatory inversion (AAI) involves mapping from the acoustic space to the articulatory space. Signal-processing features like the MFCCs, have been widely used for the AAI task. For subjects with dysarthric speech, AAI is challenging because of an imprecise and indistinct pronunciation. In this work, we perform AAI for dysarthric speech using representations from pre-trained self-supervised learning (SSL) models. We demonstrate the impact of different pre-trained features on this challenging AAI task, at low-resource conditions. In addition, we also condition x-vectors to the extracted SSL features to train a BLSTM network. In the seen case, we experiment with three AAI training schemes (subject-specific, pooled, and fine-tuned). The results, consistent across training schemes, reveal that DeCoAR, in the fine-tuned scheme, achieves a relative improvement of the Pearson Correlation Coefficient (CC) by \(\sim\)1.81% and \(\sim\)4.56% for healthy controls and patients, respectively, over MFCCs. In the unseen case, we observe similar average trends for different SSL features. Overall, SSL networks like wav2vec, APC, and DeCoAR, which are trained with feature reconstruction or future timestep prediction tasks, perform well in predicting dysarthric articulatory trajectories.
Sarthak Kumar Maharana, Krishna Kamal Adidam, Shoumik Nandi, Ajitesh Srivastava
Ming Hsieh Department of Electrical and Computer Engineering,
University of Southern California, Los Angeles, CA, USA - 90087
{maharana, addam, shoumikn, ajitesh}@usc.edu
Acoustic-to-articulatory inversion, dysarthria, self-supervised learning, x-vectors, BLSTM.
## 1 Introduction
Cerebral Palsy (CP) is a neurological disorder that affects movement, balance, and posture, often caused by damage to the developing brain before or during birth. It is a non-progressive condition. Amyotrophic Lateral Sclerosis (ALS), however, is a progressive neurological disease that leads to muscle weakness and eventual paralysis. Due to a slow muscle response time of a subject, the speech-motor functions are severely affected [1], which eventually leads to weak pronunciation, an affected accent, and unintelligible speech, causing dysarthria [2]. As the severity of dysarthria increases, it has a negative impact on the movement of articulators, such as the lips, jaw, tongue, and velum, which are responsible for speech production [3, 4].
Typically, speech-language pathologists (SLPs) use various speech stimuli, such as reading a passage or a word, spontaneous speech, or rehearsed speech, to informally evaluate the deterioration of articulation [5]. To get a deeper understanding of such articulation, real-time movement of the articulators of such patients is crucial and is often collected using Electromagnetic Articulography (EMA). This data collection strategy ensures that simultaneous speech acoustics and articulatory information is collected. So, we are inspired to predict the movements of the speech articulators based on speech acoustics.
To assist the SLPs and to capture the non-linearity of inversion [6], deep learning models have been employed. They rely heavily on feature representations to extract meaningful information from the input data and have achieved state-of-the-art performances for AAI [7, 8, 9, 10]. Signal-processing features, like the Mel-frequency Cepstral Coefficients (MFCCs), have been shown to be optimal for AAI [11]. However, there's a rising demand for using a parameterized representation of speech acoustics, as input features, learned via self-supervised learning (SSL) methods [12, 13]. In [10], Udupa et al. extensively studied the effect of different pre-trained SSL features on AAI tasks. They proved that a better utterance level generalizability was obtained using SSL features, making it a promising choice over MFCCs.
To the best of our knowledge, there has been no work extending SSL for AAI on dysarthric speech. In this study, we wish to explore a variety of SSL models such as APC [14], NPC [15], DeCoAR [16], wav2vec [17], TERA [18], vq.wav2vec [12], and Mockingjay [13] that have their respective training schemes, which result in different model parameters used to extract features that would then be fed to a sequential model like the bidirectional LSTM (BLSTM), to estimate the articulatory trajectories [7, 8]. Our hypothesis is that learning a robust and rich representation space for dysarthric speakers via pre-trained SSL models is important. This would capture the speaking styles, semantic and lexical patterns, speaking rates, and speaker identities, which would aid in generalizing to unknown dysarthric speakers, to boost the AAI performance. In addition, to preserve speaker-specific information, we plan on conditioning the obtained SSL features with their corresponding x-vector embeddings [19] to learn rich acoustic-articulatory mappings of multiple speakers.
**Key contributions** - Below, we articulate the key contributions of our work at different levels: \({}^{\bullet}\) We propose the first work demonstrating the effects of different SSL features for AAI of dysarthric speech at low-resource data conditions. In this study, we explore if using pre-trained predictive and contrastive SSL models benefits AAI for dysarthric speech in seen and unseen subject conditions. \({}^{\bullet}\) We illustrate and analyze articulatory level performances by the best SSL feature against baseline MFCCs, in the seen subject conditions. \({}^{\bullet}\) We empirically show the benefits of speaker-specific embeddings like the x-vectors [19] to boost AAI performance.
## 2 Dataset
In this work, we use the TORGO dataset [20] that comprises aligned speech acoustics and measured 3D EMA articulatory trajectories from speakers with either CP or ALS and matched healthy controls. We choose only 4 speakers with complete parallel acoustic-articulatory data, i.e., 2 male healthy controls (MC01 and MC04) and 2 female patients (F03 and F04). Articulatory movements, in the X and Y directions, from sensor coils attached to the tongue tip (TT),
tongue middle (TM), tongue back (TB), jaw (JAW), lower lip (LL), and upper lip (UL) are considered. The tongue's middle and back portions are also called the body and dorsum, respectively. We denote TM and TB as TB and TD, henceforth. This results in 12-dimensional articulatory feature vectors denoted as UL\({}_{x}\), UL\({}_{y}\), LL\({}_{x}\), LL\({}_{y}\), JAW\({}_{x}\), JAW\({}_{y}\), TT\({}_{x}\), TT\({}_{y}\), TB\({}_{x}\), TB\({}_{y}\), TD\({}_{x}\), and TD\({}_{y}\). Illa et al. [3] proposed the addition of kinematic features like velocity and acceleration to the articulatory features in order to improve speech-based classification between ALS patients and healthy controls. This results in a 6-dimensional velocity and a 6-dimensional acceleration articulatory feature vector along the X and Y directions each. Finally, we obtain 24-dimensional articulatory feature vectors upon concatenation.
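A minimal sketch of one way to assemble the 24-dimensional vectors from the 12 raw trajectories is given below; taking velocity and acceleration as per-articulator magnitudes of first and second differences at the 100 Hz frame rate is an assumption about the exact kinematic-feature definition.

```python
# Sketch: build 24-dim articulatory vectors from 12 raw (x, y) trajectories of
# the 6 articulators (UL, LL, JAW, TT, TB, TD). Velocity/acceleration are taken
# here as per-articulator magnitudes of first/second differences at 100 Hz;
# this realisation is an assumption for illustration.
import numpy as np

def articulatory_features(pos, frame_rate=100.0):
    """pos: (frames, 12) array ordered as [x1, y1, x2, y2, ..., x6, y6]."""
    xy = pos.reshape(len(pos), 6, 2)                    # (frames, articulator, 2)
    vel = np.gradient(xy, 1.0 / frame_rate, axis=0)     # first derivative
    acc = np.gradient(vel, 1.0 / frame_rate, axis=0)    # second derivative
    speed = np.linalg.norm(vel, axis=2)                 # (frames, 6)
    accel = np.linalg.norm(acc, axis=2)                 # (frames, 6)
    return np.concatenate([pos, speed, accel], axis=1)  # (frames, 24)
```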
Table 1 provides the speech utterance duration for healthy controls and patients used in this work. Row "Severity" indicates the amount of severity of dysarthria for both patients. Row "Number of utterances" reports the total number of utterances uttered by each subject, and "Total duration" indicates the total duration of speech segments, in seconds, uttered by each subject. Since collecting articulatory data is difficult because of the discomfort caused by sensor coils, the total duration and number of utterances for patients were lesser than those of healthy controls. On average, 7535 utterances from healthy controls and 3066 utterances from each dysarthric speaker were recorded, performing speech tasks from the TIMIT database [21]. Silence and noise from each sentence were later removed, referring to the provided transcriptions.
## 3 Proposed Methodology
Acoustic-to-articulatory inversion (AAI) is a regression task, in which the one-to-many mapping from the acoustic space onto the articulatory space, is non-linear, complex, and ill-posed [22, 23]. Hence, various approaches have been proposed in the state-of-the-art literature to model this complexity and non-linearity through neural networks like BLSTMs [7, 8] and Transformers [24]. However, in this work, we employ the BLSTM architecture, inspired from [7, 8], to perform AAI. Since the amount of acoustic-articulatory data from healthy controls and dysarthric patients might be insufficient, we believe that learning a rich and robust representation space via pre-trained SSL models could instead be beneficial to perform AAI on dysarthric speech over transfer learning and joint-training using a rich yet mismatched cross-corpus [8].
This work proposes a novel approach that employs using features from pre-trained SSL models for dysarthric AAI particularly, i.e., it combines the usage of features from the pre-trained SSL models, as acoustic features [10] and speaker-specific information via x-vectors [8]. Fig. 1 illustrates our block diagram, adopted from [7, 8], for this work.
As a baseline acoustic feature, we consider MFCCs. These features have been conventionally shown to be optimal in previous AAI tasks [11, 7]. For pre-trained SSL features, based on the training schemes of the respective models, these features are extracted from models trained on their respective predictive or contrastive loss functions. It is important to note that the weights of the SSL models are not trainable. The feature representations are extracted from the SSL model, by giving the speech signal as the input.
This novel idea of using pre-trained SSL features, along with conditioning with x-vectors [19], will aid in learning better representations for dysarthric speech. This, in turn, will help in the better prediction of dysarthric articulatory trajectories.
## 4 Experimental Setup
_Feature extraction:_ As an initial step, we pre-process the recorded speech waveforms and their corresponding articulatory features. The speech waveforms are downsampled to 16 kHz. The articulatory features are downsampled to 100 Hz, from the initial 200 Hz, and low-pass filtered at a cutoff frequency of 25 Hz. This was done due to the presence of high-frequency noise that could be incurred during the recording session. We use the Kaldi toolkit [25] to compute MFCCs, using a window length of 25 ms and a shift of 10 ms. This way, the MFCCs have a one-to-one correspondence with the articulatory trajectories. For SSL feature extraction, we use the upstream model weights from s3prl 1 for the initialization of each of the SSL models. Mean and variance normalization is performed at an utterance level, across each dimension of the acoustic and articulatory features. The SSL feature representations are derived from the final layer of the SSL model [10]. For the computation of x-vectors, we use the Kaldi toolkit [25], with a pre-trained model trained on the VoxCeleb database [26]. Fig. 2 illustrates the x-vector speaker embeddings, for each subject, after t-SNE [27]. We observe that the embeddings obtained from the pre-trained model are able to discriminate between the speakers. However, an interesting observation is that the x-vectors for certain sentences from F03 overlap with those of healthy controls. This could be due to the low dysarthric severity of F03, which translates to similar speaking rates or styles. In addition, we also observe that the x-vectors of MC01 form two separate clusters. This could be due to different speaking rates and lexical patterns across sentences for MC01.
Footnote 1: [https://github.com/s3prl/s3prl](https://github.com/s3prl/s3prl)
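The articulatory pre-processing described above can be summarised by the following sketch; the Butterworth filter order and the use of zero-phase filtering with scipy are assumptions.

```python
# Sketch of the articulatory pre-processing: low-pass filter at 25 Hz,
# downsample from 200 Hz to 100 Hz, then per-utterance mean/variance
# normalisation. Filter order and zero-phase filtering are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_articulatory(traj, fs_in=200, fs_out=100, cutoff=25.0, order=4):
    """traj: (frames, dims) articulatory trajectories sampled at fs_in."""
    b, a = butter(order, cutoff / (fs_in / 2.0), btype="low")
    smoothed = filtfilt(b, a, traj, axis=0)             # remove recording noise
    downsampled = smoothed[::fs_in // fs_out]           # 200 Hz -> 100 Hz
    mean = downsampled.mean(axis=0, keepdims=True)
    std = downsampled.std(axis=0, keepdims=True) + 1e-8
    return (downsampled - mean) / std                   # utterance-level MVN
```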
_Training schemes and network parameters:_ We perform experiments in two subject conditions - seen and unseen. In the seen conditions, for each subject, we uniformly split the sentences as 90% for training and the remaining 10% for testing. We make sure that the sentences do not overlap across the two sets. Here, the AAI models are trained in three training schemes: * Subject-specific: the AAI model is trained on the training data of each subject and tested on its corresponding test set. * Pooled: a single AAI model is trained on the combined training data of all subjects. During inference, the test sets of the subjects are aggregated. * Fine-tuned: the trained parameters from the pooled scheme are used as initialization and fine-tuned on the training data of each subject. In the unseen subject conditions, we employ the leave-one-person-out approach, i.e., we train an AAI model on the data of all subjects except the one chosen as the test subject. We test the AAI model on the subject that is left out. In both training conditions (seen and unseen),
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{**Speakers**} & \multicolumn{2}{c|}{**Healthy Controls**} & \multicolumn{2}{c|}{**Patients**} \\ \cline{2-5} & **MC01** & **MC04** & **F03** & **F04** \\ \hline Severity & - & - & Moderate & Mild \\ \hline Number of utterances & 8186 & 6884 & 3534 & 2599 \\ \hline Total duration & 1032.3269 & 774.9625 & 385.574 & 298.956 \\ \hline \end{tabular}
\end{table}
Table 1: Speech duration (in seconds) for the 2 male healthy controls and the 2 female dysarthric patients constituting the TORGO dataset.
Figure 1: Block diagram of the proposed AAI model with pre-trained SSL features as the input acoustic features, conditioned with x-vectors.
we consider a standard 5-fold cross-validation setup out of which 4 folds are considered for training and the other fold is considered for validation, in a round-robin fashion.
Due to variable sequence lengths in a batch of data, we performed zero-padding based on the length of the maximum sequence in that particular batch [10]. We randomly initialized the parameters of the 3-layered BLSTM network, where each layer uses 256 hidden LSTM units. For the acoustic feature, the input dense layer is initialized with 200 units. The speaker embedding dense layer, for the x-vectors, is 32 units. A batch size of 5 is used for training over 50 epochs, with a learning rate of \(10^{-4}\), and a weight decay of \(10^{-6}\). We employ a learning rate scheduler to avoid the stagnation of learning. Network parameters are optimized using the Adam optimizer, where the regression estimation of articulatory trajectories is minimized using a mean-squared error (MSE) loss function. Based on the validation loss, early stopping is also performed. We use PyTorch [28] to train all our AAI models.
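A minimal PyTorch sketch of the network just described is shown below: a 200-unit dense layer for the acoustic features, a 32-unit dense layer for the x-vector, frame-wise concatenation, a 3-layer BLSTM with 256 units per direction, and a linear output over the 12 articulatory dimensions. The input dimensionalities and the way the utterance-level x-vector is broadcast over frames are assumptions.

```python
# Minimal sketch (PyTorch) of the BLSTM AAI model with x-vector conditioning.
# Input dimensionalities and the frame-wise x-vector broadcast are assumptions.
import torch
import torch.nn as nn

class BLSTMAAI(nn.Module):
    def __init__(self, acoustic_dim=13, xvec_dim=512, out_dim=12):
        super().__init__()
        self.acoustic_fc = nn.Linear(acoustic_dim, 200)   # acoustic dense layer
        self.xvec_fc = nn.Linear(xvec_dim, 32)            # speaker-embedding dense layer
        self.blstm = nn.LSTM(input_size=200 + 32, hidden_size=256,
                             num_layers=3, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * 256, out_dim)

    def forward(self, acoustic, xvec):
        # acoustic: (batch, frames, acoustic_dim); xvec: (batch, xvec_dim)
        a = torch.relu(self.acoustic_fc(acoustic))
        x = torch.relu(self.xvec_fc(xvec)).unsqueeze(1).expand(-1, a.size(1), -1)
        h, _ = self.blstm(torch.cat([a, x], dim=-1))
        return self.out(h)                                # predicted trajectories

model = BLSTMAAI()
optim = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-6)
loss_fn = nn.MSELoss()

feats = torch.randn(5, 300, 13)                           # batch of 5, 300 frames
xvecs = torch.randn(5, 512)
target = torch.randn(5, 300, 12)
loss = loss_fn(model(feats, xvecs), target)
optim.zero_grad(); loss.backward(); optim.step()
```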
_Evaluation metrics:_ We use the Pearson Correlation Coefficient (CC) [7, 11] between the ground-truth and the predicted articulatory trajectories. The first 12 dimensions of the trajectories are considered since they correspond to the raw positions of the six articulators, along the X and Y directions, as considered in this work.
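A sketch of the CC computation over the 12 raw articulatory dimensions:

```python
# Sketch: average Pearson CC over the 12 raw articulatory dimensions.
import numpy as np

def average_cc(pred, truth):
    """pred, truth: (frames, 12) predicted and ground-truth trajectories."""
    ccs = [np.corrcoef(pred[:, d], truth[:, d])[0, 1] for d in range(12)]
    return float(np.mean(ccs))
```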
## 5 Results and Discussions
### Effect of speaker-specific embeddings
Table 2 reports the average CC (standard deviation) on the test set, across all five folds, articulators, and sentences, in the pooled training seen subject scheme, for different speaker-specific embeddings conditioned on MFCCs. To verify such an impact on the acoustic features, we investigate the performance obtained by conditioning the MFCCs on their statistics, i.e., their mean and standard deviation, which results in a dimension that is double the input dimension of the MFCCs. We also condition the MFCCs on their corresponding x-vectors. The results reveal that the AAI model (256 LSTM units) with input MFCCs conditioned on x-vectors outperforms "only MFCCs" and MFCCs with statistics by \(\sim\)4.52% and \(\sim\)2.99% for patients, respectively, and by \(\sim\)1.09% and \(\sim\)1.06% for healthy controls, respectively. Hence, we report the remaining results with the acoustic features (MFCCs or pre-trained SSL features) conditioned on their x-vectors.
Since the amount of training data might be insufficient, the proposed network may have a chance of overfitting. So, we experiment with different LSTM units for each layer. The performances of the different configurations of MFCCs improve with an increase in the LSTM units. Due to an increase in network complexity and computation cost [7], we use and report the results on a BLSTM network with 256 LSTM units in each layer for the remainder of our work.
### Impact of different pre-trained SSL features
_Seen subject evaluation:_ Table 3 reports the average CC (standard deviation) in the seen subject conditions, across the respective test sets, averaged across all the articulators, sentences, and folds, for all the training schemes (subject-specific, pooled, and fine-tuned), with the different SSL features considered in this work as the input. In the subject-specific scheme, we notice that features from wav2vec and DeCoAR perform better than MFCCs, for both healthy controls and patients. In particular, DeCoAR performs the best, with relative CC improvements of \(\sim\)0.94% and \(\sim\)5.34% over MFCCs for healthy controls and patients, respectively. It is interesting to note that vq-wav2vec, Mockingjay, and TERA perform poorly, but APC and NPC achieve performances comparable to MFCCs. The drop in performance of vq-wav2vec could be due to the quantization step in its training objective [12]. TERA and Mockingjay, which are trained with a masked loss function to predict the sequence of words in a sentence, seem to give lower performance [13, 18] here. In addition, a stronger reason could be that, in the subject-specific scheme, the amount of training data from each subject is small, contributing to overfitting and poor generalization on the corresponding test set. For the pooled and fine-tuned training schemes, we observe that the performances of all the SSL features are better than MFCCs, for both healthy controls and patients, except for Mockingjay and vq-wav2vec.
Table 2: Average CC (standard deviation) across the test sets in the pooled training seen subject scheme, using only MFCCs, MFCCs conditioned with statistics, and MFCCs conditioned with x-vectors, for BLSTM networks with 32, 64, 128, and 256 LSTM units. Values are averaged across all the articulators, sentences, and folds; the best results for both healthy controls and patients are obtained with 256 LSTM units and MFCCs conditioned with x-vectors.
Table 3: Average CC (standard deviation) in the seen subject conditions for the subject-specific, pooled, and fine-tuned training schemes, comparing MFCCs with the pre-trained SSL features (wav2vec, APC, NPC, DeCoAR, TERA, Mockingjay, and vq-wav2vec), averaged across all the articulators, sentences, and folds. DeCoAR attains the best CC in every scheme, for both healthy controls and patients.
However, vq-wav2vec performs better than MFCCs for patients. To be specific, DeCoAR performs the best across all the setups, with relative CC improvements of \(\sim\)1.81% and \(\sim\)4.56% for healthy controls and patients, respectively, in the fine-tuned case. The combination of unsupervised pre-training and supervised fine-tuning in DeCoAR allows the network to learn both general features of speech and task-specific features [16], i.e., the articulatory trajectories of healthy controls and patients. This validates that pre-trained SSL features are better suited and more robust acoustic features for dysarthric AAI.
_Unseen subject evaluation:_ Table 4 reports the average CC (standard deviation), in the unseen subject conditions, across all the sentences and articulators, for each subject. We observe similar average performances, as in the seen conditions, across the different subjects. In particular, wav2vec and DeCoAR perform better over MFCCs, for all the subjects. For F03 (with moderate dysarthric severity), the CC can be improved from 0.4201 to 0.4659, using DeCoAR. It is to be noted that AAI on unseen dysarthric subjects is challenging. Pre-trained SSL features combined with the conditioning of x-vectors, led to a better utterance level generalisability on subjects unseen during training [29]. This confirms the effectiveness of pre-trained SSL features in performing unseen dysarthric AAI.
|
2308.11032
|
AI For Fraud Awareness
|
In today's world, with the rise of numerous social platforms, it has become
relatively easy for anyone to spread false information and lure people into
traps. Fraudulent schemes and traps are growing rapidly in the investment
world. Due to this, countries and individuals face huge financial risks. We
present an awareness system with the use of machine learning and gamification
techniques to educate the people about investment scams and traps. Our system
applies machine learning techniques to provide a personalized learning
experience to the user. The system chooses distinct game-design elements and
scams from the knowledge pool crafted by domain experts for each individual.
The objective of the research project is to reduce inequalities in all
countries by educating investors via Active Learning. Our goal is to assist the
regulators in assuring a conducive environment for a fair, efficient, and
inclusive capital market. In the paper, we discuss the impact of the problem,
provide implementation details, and showcase the potentiality of the system
through preliminary experiments and results.
|
Prabh Simran Singh Baweja, Orathai Sangpetch, Akkarit Sangpetch
|
2023-08-16T05:45:34Z
|
http://arxiv.org/abs/2308.11032v1
|
# AI for Investment Fraud Awareness
###### Abstract
In today's world, with the rise of numerous social platforms, it has become relatively easy for anyone to spread false information and lure people into traps. Fraudulent schemes and traps are growing rapidly in the investment world. Due to this, countries and individuals face huge financial risks. We present an awareness system with the use of machine learning and gamification techniques to educate the people about investment scams and traps. Our system applies machine learning techniques to provide a personalized learning experience to the user. The system chooses distinct game-design elements and scams from the knowledge pool crafted by domain experts for each individual. The objective of the research project is to reduce inequalities in all countries by educating investors via Active Learning. Our goal is to assist the regulators in assuring a conducive environment for a fair, efficient, and inclusive capital market. In the paper, we discuss the impact of the problem, provide implementation details, and showcase the potentiality of the system through preliminary experiments and results.
artificial intelligence, stock market, gamification, investment scams
## I Introduction
Nowadays, as technology has made investing in the capital market more accessible, an increasing number of people are investing in the market without sufficient knowledge of the stock market. With the advent of technology, it has become relatively easy to target investors and lure them into fraudulent schemes. To understand the magnitude of the problem, let us look at the recent losses faced by a few countries. In 2018, 197 million pounds were lost in the United Kingdom due to investment scams [1]. In 2019, investors in Australia lost 63 million dollars to investment scams. These numbers are growing every year due to the extensive use of tactics by scammers through social networks and other online sources. Some of the commonly used tactics in these scams are promises of high returns and low risk, pressure to invest quickly, advertising, and fake news. Although there are several articles, books, and resources available explaining the common tactics and how not to fall for them, people are still being exploited. Driven by the significance of the problem, a question arises: is there a means for people to become aware of scams and traps without losing money in the market?
We present an awareness system to answer this question with the help of Active Learning [2]. The existing systems like Investmate [3] make use of Active Learning to equip users with comprehensive knowledge about the stock market, but they do not teach the users about investment fraud. Our awareness system focuses on educating users about scams and traps in the stock market. The awareness system is composed of three components: a personalization engine, a statistical analysis engine, and a learning platform. The personalization engine is the core of the system. It adopts machine learning techniques to predict the investor type. Based on the investor type, game-design elements, scams, and traps are chosen from the knowledge pool. This feedback is transferred to the learning platform to provide a personalized learning experience to the user.
In this paper, we focus on building the personalization engine of the awareness system. This is the fundamental step in order to build the entire system. The whole system relies on the accurate prediction of the investor type so that appropriate game-design elements, scams, and traps can be chosen for the user. We developed a prototype and collected data from the prototype. The data is comprised of the digital footprints of users from Thailand, China and India. We designed the experiments to help us decide the top useful features to demonstrate the accuracy of predicting the investor type.
## II Related Work
The use of Gamification [4] to educate people via Active Learning has become increasingly common in this day and age. Gamification is the application of game-design elements and game principles in non-game contexts. There are a few applications launched in the market to educate people about the stock market using Gamification techniques. Some of the top downloaded applications are Investmate [3], Stock Market Gamification [5]. Although these applications follow the strategy of using game-design elements like the trivia questions and rewards to teach the basics of investment, they miss the key part that not all the users are motivated by the same set of game-design elements. Also, the applications do not educate investors about prevalent scams and traps. Our awareness system incorporates a personalized set of game-design elements for each individual and focuses on educating people about the prevalent scams and traps.
We adopt machine learning techniques to predict the investor type of users from their digital footprint. Reference [6] suggests different types of behavior aspects demonstrated by investors. Research [7] discusses how bias affects the behavior of investors. The strategy used to predict the investor type involves analyzing the behaviors of the users during their
interaction with the system. Some of the widely recognized behavior theories are:
* The investor regret theory is a theory where investors avoid selling a bad investment in order to avoid regret.
* Mental accounting behavior explains the tendency of humans to compartmentalize different events, and how this impacts investment portfolio.
* Prospect and Loss Aversion Theory analyzes the change in behavior of investors, and the degree of emotion they have towards losses and gains.
We choose the top metrics from the digital footprint of the individual that reflects the behaviors displayed by them and their knowledge of the stock market. Based on these metrics, we predict the investor type and choose a set of scams and traps from the knowledge pool.
We map the set of game-design elements for each investor type from the mapping provided by the domain experts. The approach is to provide the right set of game-design elements to foster the learning process of the user. Research [8] has shown that the use of varied game-design elements can help trigger different motivational outcomes. In the system, we record all the metrics and extract the important features to analyze the investor type of individuals.
## III Intelligent Gamification Awareness System
The Intelligent Gamification Awareness System comprises of three parts: the learning platform, the personalization engine, and the statistical analysis engine. The user interacts with the learning platform and comprehends the various scams and traps in the stock market. Based on the interaction with the platform, the user's digital footprint is sent to the personalization engine and the statistical analysis engine. The personalization engine uses various machine learning algorithms to give feedback to the learning platform in order to regularly improve the personalized content provided to the user. The statistical analysis engine provides insights to the regulators to help them adopt better regulation and monitoring policies and strengthen the implementation of such regulations. Fig. 1 shows the workflow of the whole system.
### _The Learning Platform_
The idea behind developing the learning platform is to provide a simulation of the stock market and the real world without financial risk to the users. We design the platform to render a personalized learning environment for the user with the integration of various game-design elements and a set of scams and traps according to the investor type. Each user starts with virtual cash and experience points. The user proceeds to the next level based on their performance on the platform. The following rules [8] are kept in mind while creating the learning platform:
* **Create a user journey**: The platform is not designed in a way that the user walks randomly and does not feel a sense of progression. There are proper onboarding, scaffolding, and pathways to mastery in the platform.
* **Balance**: We ensure that there is a balance at all stages in the platform. The difficulty level of the scams and traps depends on the investor type, and they are not too easy or too hard to detect. This is measured by the metrics that record the time spent by the user on the platform for each level.
* **Create an experience**: An essential rule of Gamification is to make the platform fun and engaging. We include elements of surprise, winning, problem-solving, exploration in our platform.
The platform consists of the following features:
* **Personal Portfolio**: All the information related to the assets bought or sold by the investor will be displayed here. The investor gets a sense of accomplishment when they see a profit on their assets. They are motivated to choose a stock wisely if they have a loss in their investments.
* **Market Page**: The market page will display information about all of the stocks in the market. There will be a brief overview of the stocks on the market page. Depending on the investor type, the market page might have fake stock companies or fraud companies.
* **Stock Details**: The stock details page provides a detailed explanation of every stock, along with the performance of the stock in the past 52 weeks.
* **News Page**: The news page will be a collection of news articles based on the stocks available in the market. There is a sentiment (positive, negative, neutral) associated with each news article. The sentiment reflects the reaction of the news article towards the stocks presented in the article. The sentiment is among our list of features collected in the digital footprint of the user. In order to make the platform a simulation of the actual stock market, we add news articles from untrusted and trusted sources. The untrusted sources are news articles with common traps for investors.
* **Analytics**: The analytics page gives the user the statistics of their performance in the past. We might provide insights related to other investors in order to motivate the user.
Fig. 1: Intelligent Gamification Awareness System Workflow
* **Chatbot/Mascot**: We introduce a chatbot to the platform to help with the smooth transition of learning about the usage of our platform. The chatbot can be a source of suggestions and insights related to the stocks and news articles. If the user is confused about a stock or a transaction, the chatbot can provide relevant information so that the investor makes an informed decision.
The platform records the interactions of the user, and the digital footprint is sent as an input to the personalization engine and the statistical analysis engine. The personalization engine provides continuous feedback to the learning platform. This continuous feedback loop helps in understanding the user adequately, thereby nourishing the learning environment for the user.
### _The Personalization Engine_
The core part of the awareness system is the personalization engine. Figure 2 gives an overview of the personalization engine. The user's digital footprint is provided as the input to the engine. The engine provides continuous feedback to the learning platform. The steps involved in the engine are as follows:
* **Feature Extraction**: We use various feature extraction algorithms to extract the top features that provide the most accurate investor type prediction. All the recorded metrics are provided as an input to the feature extraction algorithms. The list of all the recorded metrics in the system are as follows:
* Time spent on the fraud stock page
* Time spent on the real stock page
* Time spent on the fake stock page
* Time spent on the market page
* Time spent on the portfolio page
* Time spent on the news page
* Time spent to read positive stock news
* Time spent to read neutral stock news
* The number of fake stocks bought
* The number of fraud stocks bought
* The number of real stocks bought
* The number of frauds reported
* The number of news articles read by the user
* The number of asset transactions (buying a stock, selling a stock)
* The number of news articles read from untrusted sources
* The number of news articles read from trusted sources The top features extracted by the algorithms serve as an input to the investor type prediction algorithms.
* **Investor Type Prediction**: The input from the feature extraction algorithms is sent to the machine learning algorithms to classify the users into their respective investor types. The underlying mechanism to predict the investor type uses classification algorithms. Based on the investor behavior models [6][7], we divide the users into five categories. The two broad categories are:
1. Novice Investors
2. Experienced Investors
Experienced investors are further divided into four categories:
1. Risk-intolerant traders
2. Confident traders
3. Loss-averse young traders
4. Conservative Long term investors
In order to choose a set of distinct game-design elements for the users, we understand the underlying motivational spectrum of humans [8]. The spectrum consists of two types of motivations relevant for our awareness platform:
1. **Extrinsic Motivation**: Extrinsic motivation refers to behavior that is driven by external rewards such as money, fame, grades, and praise. This further consists of external regulation, introjection, identification, and integration. The game-design elements that are useful for this category are: * Badges * Collections * Content unlocking * Leaderboards * Quests * Points * Social Graph * Teams
Fig. 2: The Personalization Engine
Virtual goods * Performance-contingent rewards
2. **Intrinsic Motivation**: Intrinsic motivation refers to behavior that is driven by internal rewards. The game-design elements suitable for users driven by intrinsic motivation are: * Quests * Content Unlocking * Performance-contingent rewards * Competence related awards * Unexpected awards
All the above-mentioned game-design elements along with the investor type are passed as an input to the Resource Selection phase. The classification algorithms used for predicting the investor type are Decision Trees [9], Gradient Boost Trees (XGBoost) [10], and Machine Learning Perceptron [11].
**Resource Selection**: The domain experts in the fields of the stock market come up with the knowledge pool of scams and traps prevalent in the stock market. The domain experts also provide a mapping of game-design elements for every investor type. The knowledge pool determines the suitable elements that should be selected according to the predicted investor type of the user.
The combination of the game-design elements and a set of scams and traps is the output of the personalization engine, which serves as the feedback to the learning platform.
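To make the resource-selection step concrete, the sketch below shows one way the knowledge pool could be organised as a simple lookup from predicted investor type to game-design elements and scam scenarios; the specific entries are illustrative assumptions, not the domain experts' actual mapping.

```python
# Illustrative resource selection: map a predicted investor type to
# game-design elements and scam scenarios. The entries below are assumed
# placeholders, not the experts' actual knowledge pool.
KNOWLEDGE_POOL = {
    "novice": {
        "game_elements": ["points", "badges", "content unlocking"],
        "scenarios": ["penny stock scam", "pyramid scheme scam"],
    },
    "experienced": {
        "game_elements": ["leaderboards", "quests", "unexpected awards"],
        "scenarios": ["penny stock scam"],
    },
}

def select_resources(investor_type: str) -> dict:
    """Return the personalized content passed back to the learning platform."""
    return KNOWLEDGE_POOL.get(investor_type, KNOWLEDGE_POOL["novice"])

print(select_resources("experienced"))
```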
### _The Statistical Analytics Engine_
The digital footprint of the user is sent as an input to the statistical analysis engine. The engine is responsible for providing insights and feedback to the personalization engine and the regulators of the stock market regularly. We perform descriptive and inferential statistics on the digital footprints of the users and come up with aggregations and interpretations based on the statistics. The aggregations and interpretations assist in formulating insights for the personalization engine.
Insights and feedback are also provided to the regulators based on the interpretation of the data. The insights and feedback provided by the statistical analytics engine can be a small step in working towards the United Nation's Sustainable Goal to reduce inequalities. This can help in the regulation and monitoring of financial markets and institutions.
## IV Prototype
We have worked closely with the Securities and Exchange Commission, Thailand to develop a prototype to demonstrate the potentiality of our system. With the regular guidance of the officers from the Securities and Exchange Commission, Thailand, we decided to add a couple of traps in our prototype:
1. **Penny Stock Scam**: The scam involves trading stocks of microcap companies. The manipulators and scammers first purchase large quantities of stocks, then drive up the price through false and misleading statements. Then, the manipulators sell their stock at a high price, and all the other stakeholders lose their money.
2. **Pyramid Scheme Scam**: A pyramid scheme is a business model that recruits members via a promise of payments or services for enrolling others into the scheme, rather than supplying investments or sale of products.
Fig. 3 shows a few screenshots of the prototype. Each user started at the learning platform with virtual money of 20,000 dollars and 100 experience points. Ten stock companies were introduced in the market, among which four of the companies were penny stock fraud companies. News articles related to all the ten companies were published on the news page from trusted and untrusted sources. A chatroom was also added to the prototype to make them aware of the traps used by brokers for the pyramid scheme.
## V Evaluation
We conducted a survey to collect information about the background of thirty-three users in order to analyze their interaction with the platform. Apart from the surveys, we interviewed every user to understand their knowledge about the stock market. Ten users in our experiment are from India, twelve are from China, and eleven are from Thailand. Sixteen users had never invested in the stock market, while seventeen users had invested in the stock market.
Based on the digital footprint of the users provided by the learning platform, the statistical analytics engine generated insights and shared the following insights with the Securities and Exchange Commission, Thailand:
Fig. 3: Screenshots of the Prototype
1. Four out of five novice investors were trapped by the penny stock fraud company.
2. An experienced investor spends two times more time in the market as compared to a novice investor.
To demonstrate the accuracy of the prediction of the investor type by the personalization engine, we perform the following steps:
* We run K-Means clustering [12] on the dataset to determine the optimal number of investor clusters. Fig. 4 shows the results of the elbow method used to analyze the dataset.
* The elbow method suggests two broad clusters - Novice and Experienced. In the future, when we have more users, we will further divide the experienced investors into subcategories. We divide the thirty-three-user dataset into a training and a test set in a 7:3 ratio.
* We use Principal Component Analysis to extract the top features for determining the investor type based on the digital footprint of the user and the feedback provided by the statistical analysis engine. The top five features are: 1. **Age**: Age plays a crucial role in the type of stocks an investor buys, and the amount of risk the investor is willing to take. 2. **Time spent on the market page**: The time spent on the market page is a clear indicator between a novice investor and an experienced investor.
3. **The number of news articles read from untrusted sources**: Novice investors tend to fall into common traps lured by news articles to convince investors to buy penny stocks.
4. **The number of fraud stocks bought**: Novice investors tend to fall into penny stock scams easily because of a lack of awareness about the stock market.
5. **The number of news articles read from trusted sources**: There is a clear gap between the number of news articles read from trusted sources by experienced investors and novice investors.
* Based on the top features, we train three different models on the training set and calculate their accuracy on the test set. The personalization engine shows promising results in predicting the investor type. The average accuracy of each model is as follows:
1. Decision Trees: 70%
2. Gradient Boosted Trees: 80%
3. Multi-Layer Perceptron: 90%
The above experiments demonstrate that, using the Multi-Layer Perceptron classifier, the personalization engine predicts the investor type with an average accuracy of 90%. This promising accuracy provides a solid foundation for integrating the remaining components of the awareness system.
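As a minimal sketch of the evaluation pipeline described above, the following Python snippet runs the elbow method, a PCA, and a perceptron-style classifier on a synthetic stand-in for the 33-user dataset. The column names, the synthetic data, and the use of scikit-learn with these hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

features = ["age", "time_on_market_page", "untrusted_articles_read",
            "fraud_stocks_bought", "trusted_articles_read"]

# Synthetic stand-in for the 33-user digital-footprint dataset (hypothetical values).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 50, size=(33, 5)), columns=features)
df["investor_type"] = (df["trusted_articles_read"] > df["untrusted_articles_read"]).astype(int)

X, y = df[features], df["investor_type"]

# Elbow method: inspect the K-Means inertia for k = 1..6 to pick the cluster count.
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 7)]

# PCA to see how much variance the leading components explain.
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)

# 7:3 train/test split and a perceptron-style classifier, as in the evaluation.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```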
## VI Conclusion
We introduce an Intelligent Gamification Awareness System to educate people about common scams and traps in the investment world. The system provides every user with a personalized learning environment focused on these scams. Our experiments demonstrate accurate prediction of the investor type, which is the core part of the whole system. Given the losses incurred every year through investment scams, the system can play an important role in spreading awareness among investors worldwide. It can also provide insights and feedback to regulators across the globe, assisting them in their goal of providing economic stability to their countries.
## VII Future Work
In this paper, we focus on the core component of our awareness system, the personalization engine. In the future, we will discuss how the feedback from the statistical analysis engine assists the entire system. We will also demonstrate the effectiveness of our system in raising investors' awareness of scams.
On the other hand, there are numerous potential applications of the Intelligent Gamification Awareness System. The awareness system can be personalized according to the common scams prevalent in the respective nations. This can be a huge step towards reducing inequalities across the globe.
Apart from raising awareness of investment scams, future steps can spread awareness of other scams and traps that are becoming increasingly common in the global age. The system can be adapted to educate people about widespread fake news and how to recognize it, as well as about the fraudulent emails and letters that are common nowadays. There are several other potential use cases for raising awareness of frauds happening across the globe. Our vision is to help the United Nations towards its goal of adopting better financial policies to achieve greater equality. By 2030, we wish to empower and promote the social, economic, and political inclusion of all people in the world.
Fig. 4: K-Means Elbow Method - Find Optimal Cluster(k)
## Acknowledgment
We would like to thank Dr. Chaya Hiruncharoenvate (Officer, Securities and Exchange Commission, Thailand) for providing valuable insights about the investors and common scams that occur in Thailand.
|
2310.04113
|
Doppler-only Single-scan 3D Vehicle Odometry
|
We present a novel 3D odometry method that recovers the full motion of a
vehicle only from a Doppler-capable range sensor. It leverages the radial
velocities measured from the scene, estimating the sensor's velocity from a
single scan. The vehicle's 3D motion, defined by its linear and angular
velocities, is calculated taking into consideration its kinematic model which
provides a constraint between the velocity measured at the sensor frame and the
vehicle frame.
Experiments carried out prove the viability of our single-sensor method
compared to mounting an additional IMU. Our method provides the translation of
the sensor, which cannot be reliably determined from an IMU, as well as its
rotation. Its short-term accuracy and fast operation (~5ms) make it a proper
candidate to supply the initialization to more complex localization algorithms
or mapping pipelines. Not only does it reduce the error of the mapper, but it
does so at a comparable level of accuracy as an IMU would. All without the need
to mount and calibrate an extra sensor on the vehicle.
|
Andres Galeote-Luque, Vladimír Kubelka, Martin Magnusson, Jose-Raul Ruiz-Sarmiento, Javier Gonzalez-Jimenez
|
2023-10-06T09:23:25Z
|
http://arxiv.org/abs/2310.04113v1
|
# Doppler-only Single-scan 3D Vehicle Odometry
###### Abstract
We present a novel 3D odometry method that recovers the full motion of a vehicle only from a Doppler-capable range sensor. It leverages the radial velocities measured from the scene, estimating the sensor's velocity from a single scan. The vehicle's 3D motion, defined by its linear and angular velocities, is calculated taking into consideration its kinematic model which provides a constraint between the velocity measured at the sensor frame and the vehicle frame.
Experiments carried out prove the viability of our single-sensor method compared to mounting an additional IMU. Our method provides the translation of the sensor, which cannot be reliably determined from an IMU, as well as its rotation. Its short-term accuracy and fast operation (\(\sim\)5ms) make it a proper candidate to supply the initialization to more complex localization algorithms or mapping pipelines. Not only does it reduce the error of the mapper, but it does so at a comparable level of accuracy as an IMU would. All without the need to mount and calibrate an extra sensor on the vehicle.
Localization, Range Sensing, Autonomous Vehicle Navigation, Range Odometry, Radar, Doppler.
## I Introduction
Self-localization is one of the fundamental components of autonomous mobile robots and vehicles. This problem can be tackled by employing different sensors, but cameras and lidars are among the most widely used in contemporary methods. These sensors provide enough advantages to justify their relevance, though their performance can be severely hindered under challenging circumstances, like extreme lighting or weather conditions, or the presence of dust or mist [1].
Radar sensors, on the other hand, are a promising but underexplored alternative able to provide geometric information of its surroundings even in challenging low-visibility scenarios [2, 3, 4]. This makes them highly suitable for underground operations, construction sites, and other harsh environments. The relevance of radars in industrial and on-road applications has driven sensor technology forward, making them more affordable, accurate, and fast. Among the various advances, it is worth highlighting the introduction of radar sensors measuring in 3D (azimuth, elevation and range), as well as their ability to estimate the radial velocity of each sensed 3D point. The latter is possible by leveraging precise phase measurements of the returning signal, which allows the sensor to measure the velocity of a point along the radar-point direction. Further on, we denote it as Doppler velocity. Note that this technology has also been implemented in certain lidars, so from now on we will refer to all of them as Doppler-capable range sensors.
The radial velocity provided by these sensors has proven to be advantageous for odometry methods, aiding in the segmentation of dynamic and static objects [5], as well as introducing more constraints to the movement estimation [6]. Given the right conditions, it can be as powerful as to recover the 2D ego-motion of the sensor from a single scan of its surroundings [7, 8]. Note that, to estimate the localization, these works rely upon either knowing the kinematic model of the vehicle or mounting multiple sensors, since not all three degrees of freedom (DOFs) of a 2D movement are observable with just the radial velocity of points of the scene. Estimating
Fig. 1: Representation of the working principle of the proposed method. Top, the radial velocity \(\vec{v}_{i}^{D}\) of the observed points in the scene is leveraged to estimate the sensor’s linear velocity \(\vec{v}_{s}\). Bottom, the kinematic model provides the relation between \(\vec{v}_{s}\) and both the linear \(\vec{v}\) and angular \(\vec{\omega}\) velocities of the vehicle.
the movement of the sensor this way removes the need to perform data association, bringing it closer to employing an Inertial Measurement Unit (IMU) or wheel odometry. These methods provide the relative motion of the sensor, and tend to accumulate error over time, also known as _drift_. However, their short-term accuracy obtained in a fast and simple way makes them widely adopted in different odometry solutions.
In this paper, we propose a 3D odometry method that relies only on the Doppler velocity to estimate the vehicle motion from a single scan, hence avoiding data association. Doppler-capable range sensors can only capture the radial velocity of a point. Assuming most of the scene is static, the linear velocity of the sensor can be recovered via RANSAC [9]. However, there is a lack of information to obtain the 6 DOFs of a 3D movement. The observability of this problem was studied by Yoon et al. [10], where a gyroscope was added to completely determine the vehicle 3D motion. However, in this work we propose a simpler solution closer to the one presented by Kellner [7] but in 3D: by taking into consideration the kinematic model of the vehicle, we can recover its 3D movement using only the Doppler velocities. In the following sections, we describe the proposed odometry algorithm, as well as the experiments performed to validate it. An overview of its working principles can be seen in Figure 1. The results obtained prove that the presented method yields a short-term accurate motion estimation. When it is employed as initialization of more advanced localization algorithms like mapper pipelines, it improves the accuracy without needing a different sensor mounted on the vehicle. The code is publicly available at [https://github.com/andresgalu/doppler_odometry](https://github.com/andresgalu/doppler_odometry).
## II Related Work
The first publication to recover the ego-motion of a vehicle employing Doppler velocity without data association is due to Yokoo et al. [11], who employ 1D radar sensors mounted on a vehicle with Ackermann steering performing 2D planar motion. They propose two sensor configurations to recover the forward and angular velocities, one with two radars mounted in front of the vehicle, and another with a single radar and a gyroscopic sensor. The method requires the observed objects to be static in order to provide a reliable localization estimation. Kellner et al. expanded this concept for a single 2D radar [7]. Since the sensor can only measure the radial velocity of the objects, the angular velocity cannot be directly observed. To circumvent this issue, again the vehicle has to comply with the Ackermann kinematic model, reducing the DOFs from 3 (complete 2D motion) to 2 (forward and angular velocities). The increase in data points due to leveraging a 2D sensor removes the need to assume a static scene. By applying RANSAC [9], dynamic points can be identified and discarded as outliers. Kellner et al. later developed a method to completely recover the 3 DOFs of the 2D motion by employing multiple radar sensors [8], avoiding the Ackermann model requirement.
Stepping into 3D movement, works of both Kramer et al. [12] and Doer and Trommer [13] combine the information from a 3D radar and an IMU to recover the 6 DOFs of the motion, employing batch optimization by the former and an Extended Kalman Filter (EKF) by the latter. The use of an IMU as a way to make the movement observable is reminiscent of the second configuration proposed by Yokoo et al. [11]. Similarly, Yoon et al. [10] employ a Doppler-capable 3D lidar and a gyroscope to decouple the 3D motion estimation, with the lidar and gyroscope yielding the translation and rotation respectively. Said work includes a study of the observability of the motion, explaining the need for either a gyroscope or multiple sensors to completely recover the 3D movement. Even without a 3D radar, Park et al. [14] estimate the 3D motion of a vehicle by compositing two orthogonal 2D radar sensors, along with an IMU.
As well as providing a motion estimation, the Doppler velocity has proven to be helpful in more traditional odometry algorithms that perform data association, typically registering two consecutive point clouds. The most common approach to use the Doppler velocity is to add a cost function to the optimization problem, where the measured radial velocity is compared to the expected velocity of a point given an estimation of the motion. This can be applied to registration problems commonly found in the literature such as Iterative Closest Point (ICP) [15] variations [6, 16, 17], Normal Distribution Transform [18] (NDT) [19], and similar approaches [20]. The work of Monaco and Brennan [21] stands out for decoupling the motion estimation: the translation is obtained from the Doppler velocity, while the collected spatial information provides the rotation. The recent rise of Neural Networks (NN) has also reached radar odometry as can be seen in the work of Rennie et al. [22], where Doppler velocity is used to obtain an estimation of the motion which can later be fused with the scan registration result.
In contrast to the literature that is reviewed above, the method proposed in this article provides the 3D motion of a Doppler-capable range sensor by only leveraging the radial velocities from the scene. By taking into consideration the kinematic model of the vehicle the movement can be recovered without needing an extra IMU. The efficiency and short-term accuracy of this algorithm make it more than helpful to provide an estimation of the motion, which is also valuable as initialization in more resource-hungry algorithms.
## III Method Overview
In this section, we explain how the proposed method estimates the 3D movement of the vehicle from the Doppler velocity measurements of points in the scene. The method first estimates the sensor velocity and then finds the vehicle movement that fulfills a certain kinematic model and explains the observed sensor velocity. In Section III-A we will describe how the velocity of the sensor itself can be calculated from the observed radial velocities. As explained before, from the sensor velocity we cannot obtain the 6 DOFs of the vehicle's 3D motion. By introducing the kinematic model of the vehicle in Section III-B, the DOFs of the movement get reduced and hence it can be solved. Finally, Section III-C will analyze the variance of the estimated vehicle movement variables.
### _Sensor Velocity_
In this section, we will delve into the procedure that yields the sensor velocity from the Doppler velocities of the observed points. First, we will derive the equations assuming the scene is completely static, and then we will tackle the issue of removing dynamic objects.
Assuming a static scene, the velocity of an observed object is the opposite of the sensor velocity \(v_{s}\). Since the Doppler velocity from each point \(v_{i}^{D}\) is only measured along the radial direction, this value is the result of projecting the opposite of the sensor velocity on the radial direction.
\[-v_{i}^{D}=\begin{bmatrix}\cos\phi_{i}\cos\theta_{i}&\sin\phi_{i}\cos\theta_{i }&\sin\theta_{i}\end{bmatrix}\begin{bmatrix}v_{sx}\\ v_{sy}\\ v_{sz}\end{bmatrix} \tag{1}\]
The azimuth \(\phi\) and elevation angles \(\theta\) of each point \(i\) are obtained by converting their cartesian coordinates to spherical. Applying Equation (1) to all sensed points results in a set of linear equations in the form of \(B=Ax\) that can be solved with least squares regression. To reduce the influence of noise in the final result, we weight each point by its signal power. The sensor velocity is thus estimated as follows, with \(W\) being the diagonal matrix of the weights.
\[v_{s}=(A^{\top}WA)^{-1}(A^{\top}WB) \tag{2}\]
In the case there are dynamic objects in the scene, not all the points will follow the same model. In our implementation, we chose RANSAC [9] to get rid of these outlier points that do not comply with the estimated sensor velocity. They are labeled as dynamic, which can be helpful for later processing of the data.
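For illustration, the sketch below implements the weighted least-squares estimate of (2) together with a basic RANSAC loop for rejecting dynamic points. It is a minimal numpy sketch under the assumptions stated in this section; the iteration count, the inlier threshold, and the minimal sample size of three points are arbitrary choices, not taken from the paper.

```python
import numpy as np

def direction_matrix(phi, theta):
    # Unit direction of each measured point (rows of A in Eq. 1).
    return np.column_stack([np.cos(phi) * np.cos(theta),
                            np.sin(phi) * np.cos(theta),
                            np.sin(theta)])

def sensor_velocity(phi, theta, v_d, power):
    # Weighted least squares, Eq. 2: v_s = (A^T W A)^-1 (A^T W B), with B = -v_d.
    A, B, W = direction_matrix(phi, theta), -v_d, np.diag(power)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ B)

def ransac_sensor_velocity(phi, theta, v_d, power, iters=100, thresh=0.1):
    # Reject dynamic points: fit on random minimal subsets, keep the largest
    # inlier set, then refit on the inliers only.
    rng, A = np.random.default_rng(0), direction_matrix(phi, theta)
    best = np.zeros(len(v_d), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(v_d), size=3, replace=False)
        v_s = sensor_velocity(phi[idx], theta[idx], v_d[idx], power[idx])
        inliers = np.abs(A @ v_s + v_d) < thresh   # residual of Eq. 1
        if inliers.sum() > best.sum():
            best = inliers
    # Points outside `best` are labelled dynamic.
    return sensor_velocity(phi[best], theta[best], v_d[best], power[best]), best
```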
### _Vehicle Kinematic Model_
Now that the sensor velocity has been calculated from the Doppler velocity measurements, we can estimate the vehicle movement. There exists an infinite number of 3D motions that explain the calculated sensor velocity, and we lack the information to determine all the 6 DOFs. However, by constraining the vehicle motion to a certain kinematic model, it is possible to simplify the movement enough to obtain an estimation.
First of all, we need to define the frame of reference used throughout this section. Its position in the vehicle is not particularly relevant, but its orientation is. As shown in Figure 2 we establish the Y axis to be aligned with the rear wheel axis, pointing to the left of the vehicle. The X axis is perpendicular to it and points toward the front wheel axis. Finally, the Z axis points upwards, perpendicular to both X and Y. Without loss of generality, we will locate the frame of reference at the center of the rear wheel axis.
For the sake of clarity, we will start analyzing the movement of an Ackermann steering vehicle moving along a flat surface, which is simply a 2D motion. In that case, the Instant Center of Rotation (ICR) can be found somewhere along the line of contact between the rear wheels and the ground, but it can be well approximated as the line that goes through the rear wheel axis. From now on we will refer to this line as the Z-ICR axis, since it relates to the rotation around Z. The velocity along the Y direction of any point located on the Z-ICR axis is null by definition. See Figure 2(a) for a graphic representation. Note that the key concept here is to have the ICR restricted to a given line, and the Ackermann kinematic model is just an example. As long as the kinematic model fulfills this requirement, like in differential drive and skid-steer vehicles, the viability of the proposed method holds.
For a 3D motion, the movement along Z has to be analyzed too. Similar to the previous paragraph, we can first study the simplified movement where the vehicle only moves in the XZ plane, along a curved surface. The ICR in this case is located at the intersection of the lines perpendicular to the contact point between the wheels and the ground. Assuming the curvature of the ground is locally constant in the relatively
Fig. 2: Simple cases of movement of the vehicle showing the location of the line along which the ICR can be found (orange). The velocity of points located on the ICR axis is perpendicular to it.
short distance between both wheel axes, the ICR will be located in a line parallel to the Z axis that goes through the midpoint between both wheel axes. Every point located along this Y-ICR axis will consequently have zero velocity in the Z direction. Figure 2(b) shows this concept for better clarity. This holds true even when driving on a flat plane, in which case every point in the vehicle has zero vertical velocity.
Until now, we have studied the rotation around the Z axis (yaw) and Y axis (pitch) through two basic movement examples (Figures 2(a) and 2(b), respectively). The roll, or rotation around the X axis, can be assumed negligible for a vehicle moving on a road. The complete 3D motion can then be seen as a combination of these two basic movements, and the introduced restrictions will still be present in the resulting 3D motion model. Namely, the velocity of points located in the Z-ICR axis will have no Y component, and similarly points in the Y-ICR (the middle of the vehicle along the X axis) will have zero Z velocity.
We can now recover the motion of the vehicle by applying the mentioned restrictions, knowing that the velocities of two points \(\vec{p},\vec{s}\) of the same rigid moving object are related through their relative position and angular velocity \(\vec{\omega}\):
\[\vec{v}_{p}=\vec{v}_{s}+\vec{\omega}\times(\vec{p}-\vec{s}) \tag{3}\]
Using the previously estimated sensor velocity \(\vec{v}_{s}\) (2), along with the restrictions, we can recover the angular velocity vector. To that end, point \(\vec{s}\) is now the sensor position, and point \(\vec{p}\) belongs to the Z-ICR axis, fulfilling the restriction of having zero Y velocity. Since all points on said axis comply with the requirement, we choose a point \(\vec{p}_{a}\) located at the same Y coordinate as the sensor, for simplicity.
\[\begin{cases}\vec{p}_{a}=[0,s_{y},0]^{\top}\\ \vec{v}_{a}=[v_{ax},0,v_{az}]^{\top}\end{cases}\quad\rightarrow\quad 0=v_{sy}- \omega_{z}s_{x}+\omega_{x}s_{z} \tag{4}\]
In a similar manner, we can apply the second restriction, which indicates that a point \(\vec{p}_{b}\) located on the Y-ICR (in the middle of the vehicle) has zero Z velocity.
\[\begin{cases}\vec{p}_{b}=[m,0,s_{z}]^{\top}\\ \vec{v}_{b}=[v_{bx},v_{by},0]^{\top}\end{cases}\quad\rightarrow\quad 0=v_{sz}- \omega_{x}s_{y}-\omega_{y}(m-s_{x}) \tag{5}\]
Here \(m\) is half the distance between the two wheel axes of the vehicle (see Figure 2(b)). Finally, we apply the restriction \(\omega_{x}=0\), since the rotation around the X axis is negligible, and thus the angular velocity vector can be recovered.
\[\vec{\omega}=\begin{bmatrix}0&v_{sz}/(m-s_{x})&v_{sy}/s_{x}\end{bmatrix}^{\top} \tag{6}\]
The global pose of the sensor with respect to the reference frame of the world can be updated from the previous instance after a period of time \(\Delta t\) by employing the velocity \(\vec{v}_{s}\) for the position \(\vec{p}_{s}\), and the angular velocity \(\vec{\omega}\) for the orientation \(R\). Note that the velocity of any point of the vehicle can be calculated by applying (3), and thus its pose can be updated:
\[\begin{cases}R(t+\Delta t)&=&R(t)e^{\vec{\omega}\Delta t}\\ \vec{p}_{s}(t+\Delta t)&=&\vec{p}_{s}(t)+R(t)\vec{v}_{s}\Delta t\end{cases} \tag{7}\]
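A minimal sketch of (6) and (7) follows, assuming the sensor velocity has already been rotated into the vehicle frame; it is only an illustration of the formulas, not the released implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def angular_velocity(v_s, s_x, m):
    # Eq. 6 with the no-roll assumption: omega_x = 0,
    # omega_y = v_sz / (m - s_x), omega_z = v_sy / s_x.
    return np.array([0.0, v_s[2] / (m - s_x), v_s[1] / s_x])

def update_pose(R, p, v_s, omega, dt):
    # Eq. 7: integrate the orientation with the exponential map of omega*dt
    # and the position with the rotated sensor velocity.
    R_next = R @ Rotation.from_rotvec(omega * dt).as_matrix()
    p_next = p + R @ (v_s * dt)
    return R_next, p_next
```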
### _Covariance Analysis_
In this section, we analyze the variance of the estimated sensor and angular velocities, in order to better understand how their accuracy can be improved. First, we start with the sensor velocity \(\vec{v}_{s}\) estimated in Section III-A. Since we employed least squares (2) to find the velocity, its covariance matrix \(C_{v}\) can be estimated from the residuals \(\rho=A\vec{v}_{s}-B\) [23], with \(N\) being the number of rows of \(A\):
\[C_{v}=\frac{\rho^{\top}W\rho}{N-3}(A^{\top}WA)^{-1} \tag{8}\]
Other than the noise of the measurements, the distribution of the points in the scene is highly significant. The more spread they are, the higher the accuracy of the sensor velocity will be. The covariance matrix \(C_{\omega}\) of the angular velocity vector can then be calculated via error propagation:
\[C_{\omega}=JC_{v}J^{\top}=\begin{bmatrix}0&0&0\\ 0&\sigma_{vz}^{2}/(m-s_{x})^{2}&\sigma_{vyz}^{2}/(s_{x}m-s_{x}^{2})\\ 0&\sigma_{vyz}^{2}/(s_{x}m-s_{x}^{2})&\sigma_{vy}^{2}/s_{x}^{2}\end{bmatrix} \tag{9}\]
Note the impact the X-location of the sensor on the vehicle has on the covariance. If the sensor's X coordinate coincides with that of either ICR axis, then the related angular velocity cannot be observed. Thus, for better accuracy it is advisable to place the sensor far from these axes along the X direction.
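The covariance propagation of (8) and (9) can be sketched as follows, reusing the \(A\), \(W\), and \(B\) matrices from the least-squares step; again, this is only an illustration of the formulas.

```python
import numpy as np

def velocity_covariances(A, W, B, v_s, s_x, m):
    # Eq. 8: covariance of the least-squares sensor velocity from the residuals.
    rho = A @ v_s - B
    C_v = (rho @ W @ rho) / (A.shape[0] - 3) * np.linalg.inv(A.T @ W @ A)
    # Eq. 9: propagate through omega(v_s); J is the Jacobian of Eq. 6.
    J = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0 / (m - s_x)],
                  [0.0, 1.0 / s_x, 0.0]])
    return C_v, J @ C_v @ J.T
```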
## IV Experiments and Results
In this section, we describe the experiments performed in order to validate the proposed method, and subsequently evaluate it based on the obtained results. The employed setup consists of a Hugin 4D imaging radar, an Ouster OS1-128 3D lidar, and an Xsens MTi-30 IMU, all mounted on a Clearpath Husky robotic base. The proposed method makes use only of the radar, while the lidar and IMU are used here for comparison in the evaluation. A dataset was collected while the Husky was driven around sloped terrain. Six different sequences were recorded: 04 and 05 have the most hills and thus the most vertical movement; 06 represents driving on flat ground; 07 and 08 are similar multifaceted trajectories, with the second having a higher velocity throughout; and 11 is a longer route. Both the environment and the sensor setup can be seen in Figure 3, and the dataset is publicly available at [https://zenodo.org/record/8346769](https://zenodo.org/record/8346769). Note how the sensor is mounted on a lever arm on the vehicle. This is because increasing the distance along X from the sensor to both ICR axes results in a more accurate angular velocity, as established in Section III-C. On a larger vehicle, like a car, the sensor can be mounted far from the ICR without needing a lever arm.
In contrast to the Ackermann vehicle used as an example in Section III-B, the Husky is a skid-steer vehicle. As explained before, the proposed method works with every vehicle whose Z-ICR is in a consistent location. We are aware of the implications of working with this type of vehicle [24], but the results show that the kinematic model is accurate enough to recover the angular velocity.
### _Sensor Calibration_
For the proposed method to properly work, the sensor orientation and position with respect to the vehicle must be known. To that end, we propose a two-step calibration algorithm. First, the vehicle moves in a straight line, forward and backward. We then find the rotation that minimizes both Y and Z velocities calculated by the method, since only the X component should be non-zero. The second step is similar, but in this case the rotation required is the one around the X axis, which is not determined by the previous step. Therefore, the vehicle moves freely around a flat surface, to then find the rotation that minimizes the estimated Z velocity. By combining both rotations, the calculated sensor velocity can be transformed into the reference frame of the vehicle, and thus be used to obtain the angular velocities.
Just as important as the orientation of the sensor is its position, in particular, the distance along the X axis from the sensor to the location of the Z-ICR. This distance can be measured, but to avoid the possible error introduced by manually measuring it, we decided to calibrate \(s_{x}\) employing the IMU data. The procedure is simple: the vehicle moves in a flat circle at a constant speed, and then \(s_{x}\) is chosen to minimize the difference between the calculated angular velocity and the one measured by the IMU. Note that this is not the only possible way to calibrate the sensor position, but information about the movement is required either way.
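A sketch of the first calibration step is shown below: it searches for the sensor-to-vehicle rotation that minimizes the Y and Z velocity components over scans recorded while driving straight. The optimizer choice and parametrization are assumptions made for illustration; as noted above, the rotation around X is left to the second step.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def calibrate_rotation(v_s_scans):
    # v_s_scans: (N, 3) sensor velocities estimated during the straight segment.
    def cost(rpy):
        R = Rotation.from_euler("xyz", rpy).as_matrix()
        v = v_s_scans @ R.T                     # velocities in the vehicle frame
        return float(np.sum(v[:, 1] ** 2 + v[:, 2] ** 2))  # penalize Y and Z
    res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
    return Rotation.from_euler("xyz", res.x).as_matrix()
```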
### _Accuracy Results_
In this section we will evaluate the accuracy of the proposed method for providing single-scan and single-sensor 3D motion estimation. We compare the results to an IMU, since both provide a short-term accurate estimation of the motion, but show drift over time. As the ground truth for the comparison, we use the trajectory obtained by running an ICP-based mapper on the 3D lidar data.
An IMU provides information on the angular velocity and linear acceleration of the sensor. Its trajectory can be estimated by integrating these values over time. However, these measurements suffer from having noise and a non-constant bias. Combined with the high frequency at which these sensors operate, said trajectory shows significant drift over time. The position is impacted even more since two integration steps are performed from the measured linear acceleration [25]. Therefore, the translation estimated from the IMU is not reliable enough to be included in the results.
Our method, in contrast to the IMU integration, obtains the translation of the sensor directly from its estimated velocity. The rotation, however, depends also on the calibration previously mentioned, as well as the fulfillment of the kinematic hypothesis. Our simple approach allows for fast execution, taking on average 4.50 \(\pm\) 1.94 ms per scan.
Figure 4 shows the angular velocities estimated by the proposed method compared to those measured by the IMU. Despite the mentioned limitations, our method provides an estimation similar to the IMU. Notice also how the roll velocity measured by the IMU is not as significant as the yaw and pitch. It is normal to undergo some roll rotation in the rough terrain of the experiments, but it is small enough to consider it inconsequential. On smoother surfaces, like roads, it will be even smaller. This supports the restriction imposed by the kinematic model on the roll.
It makes sense to combine the advantages of both approaches into one, namely the translation from Doppler velocities with the rotation from the IMU, as done by Doer and Trommer [13]. However, this combined method needs information from two different sensors to recover the motion.
To compare these three approaches (IMU, Doppler, IMU + Doppler), we include in Figure 5 the resulting Relative Pose Error (RPE) per frame of the test sequences. This evaluates the short-term accuracy of the different methods, without taking into account their drift over time. Regarding the translation, IMU-only has been omitted because of its large error as discussed above, with the trajectory being hundreds of meters away from the ground truth. Both Doppler-only and IMU + Doppler recover the translation in a similar manner, and thus it makes sense for them to provide comparable accuracy. The difference is more apparent in the rotation. Unlike the IMU, the accuracy of our method depends not only on the noise of the input data: the fulfillment of the kinematic model and the quality of the calibration also play an important role, hindering the rotation estimation. Sequences 04, 08 and 11 stand out as the most challenging because of their uneven terrain, causing more vibrations and defying the no-roll assumption.
Fig. 4: Angular velocity estimated from the Doppler velocities compared to IMU measurements, from sequence 11.
Fig. 3: Sensor setup and test environment from sequence 04.
### _Initialization of Slam_
The proposed method's short-term accuracy and fast execution make it an interesting candidate to provide a motion prior to a more complex and accurate localization method, e.g., SLAM. ICP [15] is a widespread point cloud registration algorithm used in SLAM methods, but it is prone to falling into local minima. Providing a motion prior greatly reduces this limitation. It is already fairly common in the literature to employ an IMU to initialize the motion.
To evaluate the impact of using the proposed method as initialization for SLAM, we feed the radar data and a motion prior to an ICP-based mapper and analyze its accuracy based on how the motion is initialized. We test 3 different prior estimators: IMU only, Doppler only (our method), and IMU combined with radar. The trajectory of the mapper without any prior is also included as the base case in the comparison. Again, the ground truth used for the comparison is the one obtained from running the mapper on the 3D lidar data. The evaluation metric in this case is the RPE per second, which focuses more on long-term accuracy, a desirable trait for SLAM algorithms.
The resulting error values of the experiments can be seen in Figure 6, and the trajectories in Figure 7. Despite the variation along different sequences, all methods perform at a similar level of accuracy. What is obvious is the need for a prior input to the mapper, without which it fails. Our method provides a reliable motion prior, resulting in a similar accuracy to IMU, without the need to mount and calibrate an extra sensor. Furthermore, the proposal can identify and remove dynamic points from the scene, which can help at later stages of the mapper pipeline.
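To make the role of the prior concrete, the snippet below initializes a point-to-point ICP registration with a transform obtained from the Doppler odometry, using Open3D as an illustrative stand-in for the authors' ICP-based mapper (which is a different pipeline); the threshold value is an arbitrary choice.

```python
import open3d as o3d

def register_with_prior(source_pcd, target_pcd, prior_T, max_dist=1.0):
    # Point-to-point ICP initialized with the Doppler-odometry prior (Eq. 7);
    # a good prior keeps the registration from falling into local minima.
    result = o3d.pipelines.registration.registration_icp(
        source_pcd, target_pcd, max_dist, prior_T,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```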
## V Conclusions
In this paper, we have introduced a fast 3D odometry that estimates the motion of a vehicle from a single scan leveraging the Doppler velocity. The method first obtains the sensor velocity based on the measured radial velocities of points from the scene, while rejecting dynamic objects. Based on the kinematic model of the vehicle, the relation between its velocity and the sensor's is established, and thus the 3D motion of the vehicle can be recovered.
A series of experiments have been carried out to validate our method, evaluating the accuracy of the odometry both when used as-is for pose tracking and when used as input to an ICP-based mapping pipeline. We compared its performance with an IMU, given their similar ability to efficiently provide short-term accurate odometry. These qualities make the proposal especially suitable as initialization for radar-only SLAM algorithms, as it boosts the accuracy and identifies dynamic objects at a very low cost. The results prove how our method provides a similar accuracy to an IMU without needing an external sensor mounted on the vehicle. Both the code and the dataset are publicly available.
At the core of the method lies the kinematic model of the vehicle. Granted that it simplifies the movement enough to be observable with a single sensor, it also sets the limitations needed for the proposal to accurately estimate the motion. Challenging scenarios include the Z-ICR being unstable, for example in slippery terrain, bumpy roads with abrupt and uneven vertical variation (potholes), and roads with considerable cant that generate roll rotation. These issues cannot be solved solely through software. On the other hand, the impact of having large amounts of outliers or noisy measurements on the accuracy can be reduced by correctly filtering the data, which should be the focus for future work.
Fig. 5: Translational and rotational RPE per frame.
Fig. 6: RPE per second of the mapper employing different prior data. Values out of the graph are labeled.
Fig. 7: Trajectory in meters generated by the mapper with different prior data on sequence 07 (left), and a closer view of the start and finish points (right). Prior used: none (NP), IMU, our method (Dopp), and IMU and Doppler (I+D).
|
2301.12030
|
TiLT: A Time-Centric Approach for Stream Query Optimization and
Parallelization
|
Stream processing engines (SPEs) are widely used for large scale streaming
analytics over unbounded time-ordered data streams. Modern day streaming
analytics applications exhibit diverse compute characteristics and demand
strict latency and throughput requirements. Over the years, there has been
significant attention in building hardware-efficient stream processing engines
(SPEs) that support several query optimization, parallelization, and execution
strategies to meet the performance requirements of large scale streaming
analytics applications. However, in this work, we observe that these strategies
often fail to generalize well on many real-world streaming analytics
applications due to several inherent design limitations of current SPEs. We
further argue that these limitations stem from the shortcomings of the
fundamental design choices and the query representation model followed in
modern SPEs. To address these challenges, we first propose TiLT, a novel
intermediate representation (IR) that offers a highly expressive temporal query
language amenable to effective query optimization and parallelization
strategies. We subsequently build a compiler backend for TiLT that applies such
optimizations on streaming queries and generates hardware-efficient code to
achieve high performance on multi-core stream query executions. We demonstrate
that TiLT achieves up to 326x (20.49x on average) higher throughput compared to
state-of-the-art SPEs (e.g., Trill) across eight real-world streaming analytics
applications. TiLT source code is available at
https://github.com/ampersand-projects/tilt.git.
|
Anand Jayarajan, Wei Zhao, Yudi Sun, Gennady Pekhimenko
|
2023-01-27T23:50:12Z
|
http://arxiv.org/abs/2301.12030v1
|
# TiLT: A Time-Centric Approach for Stream Query Optimization and Parallelization
###### Abstract.
Stream processing engines (SPEs) are widely used for large scale streaming analytics over unbounded time-ordered data streams. Modern day streaming analytics applications exhibit diverse compute characteristics and demand strict latency and throughput requirements. Over the years, there has been significant attention in building hardware-efficient stream processing engines (SPEs) that support several query optimization, parallelization, and execution strategies to meet the performance requirements of large scale streaming analytics applications. However, in this work, we observe that these strategies often fail to generalize well on many real-world streaming analytics applications due to several inherent design limitations of current SPEs. We further argue that these limitations stem from the shortcomings of the fundamental design choices and the query representation model followed in modern SPEs. To address these challenges, we first propose _TiLT_, a novel intermediate representation (IR) that offers a highly expressive temporal query language amenable to effective query optimization and parallelization strategies. We subsequently build a compiler backend for TiLT that applies such optimizations on streaming queries and generates hardware-efficient code to achieve high performance on multi-core stream query executions. We demonstrate that TiLT achieves up to 326\(\times\) (20.49\(\times\) on average) higher throughput compared to state-of-the-art SPEs (e.g., Trill) across eight real-world streaming analytics applications. TiLT source code is available at [https://github.com/ampersand-projects/tilt.git](https://github.com/ampersand-projects/tilt.git).
stream data analytics, temporal query processing, intermediate representation, compiler

Footnote †: Authors have contributed equally to this research.
## 1. Introduction

Table 1 compares the throughput of representative scale-out and scale-up SPEs on the Yahoo streaming benchmark [31]. As shown, scale-up SPEs are able to achieve \(100-1000\times\) higher throughput under the same hardware budget [60], which makes them a more cost-effective alternative to scale-out SPEs for large scale stream processing [30].
Scale-up SPEs face three key challenges to achieve high single-machine performance for streaming queries. First, improving hardware utilization of streaming queries requires supporting _effective optimization strategies_ for improving cache utilization of query execution and pruning redundant computation. Second, unlike batch processing applications, streaming queries do not inherently expose data parallelism due to their continuous query execution model. Therefore, SPEs need to support _sophisticated parallelization strategies_ to fully utilize all the processing cores in a multi-core machine. Finally, since stream queries are long-running workloads, they require a _low-overhead runtime_ to meet their latency requirements. Despite having significant research attention, current scale-up SPE designs fail to address all three challenges at the same time on many real-world streaming applications due to the following reasons.
Current scale-up SPEs [11, 33, 34, 61] follow the traditional data flow representation of streaming applications where the queries are defined as a directed acyclic graph (DAG) of temporal operations and the query execution is performed by interpreting the data flow graph. Even though this design is conceptually simple and easy to extend, this query execution model is shown to introduce significant interpretation overhead at runtime [37, 60]. Moreover, the query optimizations in the interpreted SPEs are mostly heuristics-based graph-level transformations such as reordering the operations in the DAG [6]. Applying such optimizations requires the streaming query to precisely match with certain pre-defined rules and, therefore, typically has narrow applicability on many streaming applications. Finally, many of these systems only extract limited parallelization opportunities available through partitioned data streams.
To address the inefficiencies of interpretation-based SPEs, recent works have proposed compiler-based solutions [14, 47, 48, 24]. These approaches offer low-overhead runtime for query execution by automatically generating hardware-efficient code from the high-level query description. State-of-the-art compiler-based SPEs also support low-level query optimizations like operator fusion to maximize data locality by passing data between operators through registers or cache memory, and can automatically parallelize the query execution even on non-partitioned data streams. However, the optimization, parallelization, and code generation strategies proposed in current compiler-based SPEs primarily target applications performing only a limited set of operations (e.g., stream aggregations) and these strategies do not generalize well on queries with more complex operations (e.g., stream-to-stream join). This significantly limits the ability of current compiler-based SPEs to support many real-world streaming analytics applications.
In this work, to support the growing adoption of stream processing in a wide range of application domains, we set a goal to provide an infrastructure for effective and generalizable optimization and parallelization strategies for streaming queries. We make a key observation that the limited optimization and parallelization capabilities of SPEs are due to the fundamental limitations of the query representation model used by modern SPEs. Under the current so-called _event-centric_ model, streaming queries are constructed using primitive temporal operations, each defining a transformation over a sequence of discrete time-ordered events. Even though this is a natural representation model for streaming queries as the data streams are inherently an unbounded time-ordered sequence of events, we argue that this event-centric definition of temporal operations does not expose the important time semantical information of the streaming queries that are required for effective query optimization and parallelization. Since streaming queries are temporal in nature, we believe that the temporal operations should also follow a representation model that is fundamentally based on time.
Based on this observation, we propose a novel intermediate representation (IR) called _TiLT_ that follows a _time-centric_ model for defining streaming queries. Unlike the traditional event-centric model, TiLT IR defines temporal operations as functional transformations over well-defined time-domains using new constructs like _temporal object_, _reduction function_, and _temporal expression_. With these simple constructs, TiLT offers a highly expressive programming paradigm to represent a diverse set of streaming applications. At the same time, the time-centric definition of streaming queries enables optimization opportunities that are otherwise difficult to perform using the traditional query representation models. Moreover, the side-effect-free functional definition of TiLT queries exposes inherent data parallelism that can be leveraged to parallelize arbitrary streaming queries. Finally, we build a compiler-backend for TiLT that automatically translates the logical stream query definitions into hardware-efficient executable code and achieves high multi-core performance on a wide range of streaming applications.
To evaluate TiLT's ability to provide high performance on a diverse range of applications, we prepare a benchmark suite with eight stream processing applications representative of real-world streaming analytics use-cases in fields including stock trading, signal processing, industrial manufacturing, banking institutions, and healthcare. On these applications, TiLT achieves \(6-326\times\) (\(20.49\times\) on average) higher throughput than the state-of-the-art interpretation-based SPE Trill [11]. This speedup comes from two major fronts: (i) effective query optimization and parallelization enabled by the time-centric query representation model in TiLT, and (ii) a compiler-based SPE design that eliminates inefficiencies like query interpretation and managed-language overhead common in interpreted SPEs. We also show that TiLT can achieve competitive performance against compiler-based SPEs that are specially designed for efficient stream aggregation. For example, on the Yahoo streaming benchmark [12], TiLT is able to achieve \(1.5\times\) and \(3.8\times\) higher throughput compared to state-of-the-art compiler-based SPEs LightSaber [47] and Grizzly [14], respectively.
In summary, we make the following contributions:
* We highlight the limitations of the query representation models used in current SPEs in supporting effective optimization and parallelization strategies. To address these limitations, we propose a novel intermediate representation called TiLT.
\begin{table}
\begin{tabular}{c c|c c c c} \hline \hline \multicolumn{2}{c|}{Scale-out [8, 59]} & \multicolumn{4}{c}{Scale-up [11, 14, 34, 47]} \\ \hline Spark & Flink & Trill & StreamBox & Grizzly & LightSaber \\ \hline 0.14 & 0.59 & 34.07 & 167.19 & 118.74 & 296.40 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Throughput (million events/sec) of the Yahoo streaming benchmark [31] on a 32-core machine.
We show that TiLT enables generalizable optimization and parallelization strategies that are otherwise difficult to support in the traditional query representation models.
* We build a compiler for TiLT that can optimize and parallelize arbitrary streaming queries and generate hardware-efficient code to achieve high multi-core performance.
* We prepare a new representative benchmark suite with eight real-world streaming analytics applications used in signal processing, stock trading, industrial manufacturing, banking service, and healthcare. Across these applications, we demonstrate that TiLT can achieve \(6-326\times\) (\(20.49\times\) on average) higher throughput compared to state-of-the-art SPEs. TiLT is currently open-sourced and available at [https://github.com/ampersand-projects/tilt.git](https://github.com/ampersand-projects/tilt.git)
## 2. Background
Streaming analytics applications typically process unbounded sequences of time-ordered events in a continuous manner. Table 2 shows eight representative real-world streaming analytics applications used in areas including stock market trading, signal processing, healthcare, manufacturing, and banking services. These applications process data at a high rate and have strict latency and throughput requirements. For example, high-frequency trading applications demand sub-second latency (Han et al., 2015; Wang et al., 2016; Wang et al., 2017). Moreover, the computations performed by these applications often require _fine-grained control over the time-dimension_ of the data streams, such as using different windowing strategies to analyze changing trends in the data streams (Han et al., 2015; Wang et al., 2017; Wang et al., 2017).
Many modern SPEs (Han et al., 2015; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) provide SQL-like query languages with temporal extensions for writing such complex streaming applications. These languages offer a vocabulary of simple yet highly expressive primitive temporal operations, each defining a transformation over one or more data streams. Figure 1 illustrates four primitive temporal operations commonly used in streaming queries and their corresponding transformations on event streams. Without loss of generality, each event in the data stream is represented using a payload value and a validity interval. The _Select_ and _Where_ operations shown in Figures 1(a) and 1(b) follow the relational SQL semantics of projection and selection operations, respectively. Both operations perform per-event transformations, where the former modifies the payload field of each event and the latter conditionally filters out events based on a user-defined predicate on the payload. The temporal _Join_ operation shown in Figure 1(c) joins two streams into a single output stream. The output stream of the _Join_ operation contains events corresponding to the strictly overlapping regions of events in the input streams. Finally, aggregation operations on data streams are generally performed over a time-bounded window defined by its window size and stride length. For example, a _Sum_ aggregation operation defined over a _Window(size, stride)_ computes the sum of every _size_-seconds1 window, with windows spaced _stride_ seconds apart. Figure 1(d) shows a sliding-window aggregation with window size 10 and stride length 5. Since all these operations are defined as transformations over events, we say these operations follow an _event-centric_ model of temporal operator definition.
Footnote 1: For the purpose of the discussion, we use seconds as the unit of time. However, any other units of time are also applicable to the definitions used in this paper.
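To make these event-centric semantics concrete, here is a minimal Python sketch (not code from any of the cited SPEs; the event layout and helper names are illustrative) that applies _Select_, _Where_, and a sliding _Window-Sum_ to events represented as (start, end, payload) tuples.

```python
# Illustrative event-centric semantics; an event is (start, end, payload).
events = [(0, 10, 3), (10, 20, 5), (25, 30, 2)]

def select(stream, fn):
    # Per-event payload transformation.
    return [(s, e, fn(p)) for (s, e, p) in stream]

def where(stream, pred):
    # Per-event filtering on the payload.
    return [(s, e, p) for (s, e, p) in stream if pred(p)]

def window_sum(stream, size, stride, t_end):
    # Sum of payloads of events overlapping each window (t - size, t].
    out = []
    for t in range(size, t_end + 1, stride):
        total = sum(p for (s, e, p) in stream if s < t and e > t - size)
        out.append((t - stride, t, total))
    return out

print(select(events, lambda p: p + 1))
print(where(events, lambda p: p % 2 == 0))
print(window_sum(events, size=10, stride=5, t_end=30))
```

Note how _Window-Sum_ has to visit every event overlapping each window, which hints at the sequential dependencies discussed in Section 3.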
These simple primitive operations can be combined to construct more complex streaming queries. Figure 2(a) shows an example streaming query written using the primitive temporal operations shown in Figure 1. This query is a simplified version of the stock market trend analysis application (Han et al., 2015) described in Table 2. It analyzes the trends in the price of a particular stock by computing two moving averages of the stock prices over 10-second and 20-second intervals every second, by first using two _Window-Sum_ operations and then dividing the sums by their corresponding window sizes using the _Select_ operation. Afterwards, the difference between the concurrent pairs of 10-second and 20-second averages is computed using the temporal _Join_ operation. The final _Where_ operation keeps only the events with a positive difference in the averages. The validity intervals of the events in the final output stream correspond to the periods of time for which the stock price is observing an upward trend.
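The following Python sketch mirrors the structure of this trend-analysis query under simplifying assumptions (one price per second, hypothetical price values); it is meant only to illustrate how the primitives compose, not to reproduce the actual benchmark implementation.

```python
# Illustrative composition of the trend-analysis query on a per-second price series.
prices = {t: 100 + (t % 7) - (t % 3) for t in range(1, 61)}   # hypothetical stock prices

def moving_avg(series, size, t):
    # Average over the window (t - size, t], assuming one price per second.
    window = [series[x] for x in range(t - size + 1, t + 1) if x in series]
    return sum(window) / len(window) if window else None

signal = {}
for t in range(21, 61):                      # start once both windows are full
    a10 = moving_avg(prices, 10, t)          # short-term average  (Window-Sum + Select)
    a20 = moving_avg(prices, 20, t)          # long-term average   (Window-Sum + Select)
    diff = a10 - a20                         # temporal Join of concurrent averages
    if diff > 0:                             # Where: keep only upward-trend instants
        signal[t] = diff

print(sorted(signal)[:5])
```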
Once the temporal queries are defined, SPEs can internally handle the execution of the queries. Many popular SPEs (Han et al., 2015; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) are primarily designed to meet the performance requirements of the query by distributing the execution over large clusters of machines. Despite being a widely popular approach, prior works (Han et al., 2015; Wang et al., 2017; Wang et al., 2017) have shown that such scale-out SPEs are highly resource-intensive and often perform \(2-3\) orders of magnitude slower than their corresponding hand-tuned implementations, thus causing significant waste of compute and energy resources. This observation led to the development of several scale-up SPEs (Han et al., 2015; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) that follow hardware-conscious designs to maximize the single-machine performance of streaming query execution. State-of-the-art scale-up SPEs have shown that modern-day multi-core machines with hundreds of processing cores and gigabytes of memory bandwidth are capable of meeting the performance demands of large-scale streaming workloads and can be a cost-effective alternative to more expensive large multi-machine clusters (Wang et al., 2017).
## 3. Modern SPE Designs and Limitations
Scale-up SPEs attempt to achieve high single-machine performance primarily by three means. First, the user-defined query is subjected to several _optimization_ passes to prune redundant computations and increase data locality in order to improve the hardware utilization during query execution. Second, SPEs utilize different _parallelization_ strategies to take advantage of parallel processing cores available in multi-core machines. Finally, SPEs try to provide a _low-overhead runtime_ in order to meet the latency and throughput requirements of long running streaming applications. Despite the wide attention, we observe that the query optimization, parallelization, and execution strategies proposed in prior SPEs fail to generalize well on real-world streaming analytics applications due to several fundamental design limitations that we highlight below.
**Limited query optimization opportunities.** The optimization strategies adopted in current SPEs have limited applicability and often fail to cover a wide variety of real-world streaming applications. For instance, the majority of SPEs (Han et al., 2015; Wang et al., 2017; Wang et al., 2017) adopt heuristics-based query optimization strategies, which are limited to basic graph
transformations such as substituting or reordering individual operations in the query [6, 11]. For instance, predicate pushdown [6] is a common optimization where filtering operations (_Where_) are moved closer to the data source in order to reduce the number of events the remaining operations in the query need to process. However, this optimization is only applicable if the predicate of the filtering operation is defined over the events in the input stream. For example, predicate pushdown cannot be applied to the example query in Figure 2(a) as the _Where_ operation depends on the result generated by the parent _Join_ operation.
Certain advanced SPEs [14, 47] support more sophisticated low-level optimizations such as operator fusion for improving register/cache utilization by combining multiple operators into a single operator. For example, the _Select_ operators in the stock analysis query can be trivially fused with the _Join_ operator as shown in Figure 2(b). This avoids unnecessary data movement between operators and allows intermediate results to remain in registers or cache memory for as long as possible. Unfortunately, the fusion rules implemented in current SPEs can only fuse operators until a so-called _soft pipeline-breaker_ [37, 60] is reached. Soft pipeline-breakers are operators that require partial materialization of the output events before the next operator in the query pipeline can start processing. For instance, in Figure 2(b), both _Window-Sum_ and _Join_ operators are soft pipeline-breakers. Fusing these operators together is non-trivial and optimizers in current SPEs fail in such scenarios. This significantly limits the applicability of fusion optimization on many real-world streaming applications as they often contain multiple pipeline-breakers in the query. For example, Table 2 shows the temporal operations used in the queries of each application and each query contains between \(2-6\) pipeline-breakers.
**Limited query parallelization capability.** Unlike batch processing applications, streaming queries do not inherently expose data parallelism as many temporal operations exhibit sequential data dependencies (e.g., sliding-window aggregation). Therefore, extracting data parallelism from streaming queries is often challenging and many SPEs rely on users to provide partitioned data streams in order to parallelize the query execution. For example, the trend analysis query (Figure 2(a)) can be trivially parallelized by executing on data streams corresponding to different stocks. However, the degree of parallelism available from this approach is limited by the number of unique partitions available in the data stream [47]. Moreover, in certain streaming analytics applications used in healthcare [19] and the manufacturing industry [23], even a single partition of the data stream can contain events generated at rates as high as \(1-40\) kHz. In such cases, parallelizing the stream query execution requires more sophisticated strategies.
| Analytics application | Description | Operators in the query | Data set |
| --- | --- | --- | --- |
| Trend-based trading [18] | Moving average trend in stock price | Avg (2), Join, Where | New York Stock Exchange [38] |
| Relative strength index [46] | Stock price momentum indicator | Shift, Join, Avg (2) | New York Stock Exchange [38] |
| Normalization [57] | Normalize event values using Z-score | Avg, StdDev, Join | Synthetic data (random floating-point values generated at 1000 Hz) |
| Signal imputation [54] | Replacing missing signal values | Avg, Shift, Join | Synthetic data (random floating-point values generated at 1000 Hz) |
| Resampling [55] | Changing signal frequency | Select, Join, Shift, Chop | Synthetic data (random floating-point values generated at 1000 Hz) |
| Pan-Tompkins algorithm [39] | Detect QRS complexes in ECG | Custom-Agg (3), Select, Avg | MIMIC-III waveform data [21] |
| Vibration analysis [41] | Monitor machine vibrations using kurtosis, root mean square, and crest factor metrics | Max, Avg (2), Join (2), Custom-Agg | Bearing vibration data [17] |
| Fraud detection [58] | Credit card fraud detection | Avg, StdDev, Shift, Join | Kaggle credit card data [22] |

Table 2. Real-world streaming analytics applications
Figure 1. Common temporal operations: (a) Select, (b) Where, (c) Temporal Join, and (d) Window-Sum
Figure 2. Stock price trend analysis query
Prior works (Friedman et al., 2017; Goyal et al., 2018; Goyal et al., 2019; Goyal et al., 2020) have proposed solutions to automatically extract data parallelism in streaming queries without needing a partitioned data stream. However, these works have focused solely on window-based aggregation operations. For example, the sliding-window sum in Figure 1(d) can be parallelized by first computing partial sums on 5-second tumbling-windows2 and then adding up two consecutive partial sums. Since the tumbling-windows do not overlap, the data streams can be partitioned on the 5-second window boundaries and each window can be processed in parallel. The partial sum additions can also be parallelized through parallel reduction (Goyal et al., 2018). However, extending these strategies to parallelize arbitrary streaming queries is often non-trivial. For instance, determining the partition boundaries on the stock price stream in the example query is unclear as the query contains multiple sliding-windows and a temporal join operation. We observe that the query parallelization methods in current scale-up SPEs (Goyal et al., 2018; Goyal et al., 2018) are incapable of handling such scenarios.
Footnote 2: Tumbling-window is a special case of sliding-windows when the stride length is same as the window size.
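The two-level aggregation idea described above can be sketched in a few lines of Python; the data and window parameters (10 s windows, 5 s stride) follow the running example, and the assertion checks the partial-sum decomposition against a direct computation.

```python
# Illustrative two-level parallel aggregation for a 10 s sliding sum with 5 s stride.
values = list(range(40))              # one value per second, timestamps 0..39

# Step 1: partial sums over non-overlapping 5 s tumbling windows (parallelizable).
partials = [sum(values[i:i + 5]) for i in range(0, len(values), 5)]

# Step 2: each 10 s sliding window is the sum of two consecutive partials.
sliding = [partials[i] + partials[i + 1] for i in range(len(partials) - 1)]

# Reference: direct computation of the same sliding windows.
reference = [sum(values[i:i + 10]) for i in range(0, len(values) - 9, 5)]
assert sliding == reference
print(sliding)
```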
**High runtime overhead during query execution**. The query execution model adopted in current SPEs fails to provide a low-overhead runtime for a wide range of real-world applications. The majority of the SPEs (Friedman et al., 2017; Goyal et al., 2018; Goyal et al., 2019; Goyal et al., 2020; Goyal et al., 2020) follow an interpretation-based query execution model, also called the iterator model (Goyal et al., 2018; Goyal et al., 2020). In this model, the logical query description is translated to a data flow graph by mapping each temporal operator in the query to a concrete implementation. Each physical operator is designed to process events one-by-one or in micro-batches and passes the generated output events to the next operator in the graph through message queues. Despite being a widely adopted design, prior works (Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2019; Goyal et al., 2020; Goyal et al., 2020) have shown that interpreted SPEs often perform \(1-2\) orders of magnitude slower than corresponding hand-tuned implementations. This inefficiency can be mainly attributed to the cost of data transfer between operators in the data flow graph (Goyal et al., 2020; Goyal et al., 2020), poor support for effective optimization strategies such as operator fusion (Goyal et al., 2020), and failure to maintain end-to-end data locality because of fixed-size micro-batching (Goyal et al., 2020).
To address the inefficiencies of interpreted SPEs, recent works have proposed compiler-based solutions (Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2020), which can generate compact and efficient machine code from the high-level query description using customized code-generation techniques. Even though compiler-based SPEs are shown to achieve state-of-the-art single-machine performance, we observe that these SPEs follow highly restrictive query languages with limited expressive power. To the best of our knowledge, prior compiler-based SPEs are designed only for queries performing window-based aggregation. Since it is common to use a more complex and diverse set of temporal operations in streaming queries, the ability of current compiler-based SPEs to support real-world streaming analytics applications is significantly limited. Additionally, the code generation techniques used in these SPEs are primarily designed as source-to-source translators and heavily rely on template expanders. Such compiler designs are known to be highly inflexible and extremely hard to maintain (Goyal et al., 2018; Goyal et al., 2020).
Based on these observations, we conclude that current SPEs exploit the optimization and parallelization opportunities on stream queries only in a limited capacity due to several inherent design limitations. This prevents such SPEs from providing high-performance stream processing for many real-world streaming analytics applications. In this work, we set the goal of providing hardware-efficient stream processing without sacrificing the programmability and generality needed to support the diverse computational requirements of modern-day streaming analytics workloads.
## 4. TiLT: A Time-Centric Approach
Addressing the aforementioned design limitations and providing (i) effective query optimization, (ii) parallelization, and (iii) execution strategies on a diverse set of streaming analytics applications requires us to fundamentally rethink how SPEs should be designed. In this work, we argue that the limited query optimization capabilities and lack of automatic parallelization support stem from the event-centric temporal query representation model adopted in SPEs. Since the data streams are represented as a sequence of events, it is natural to define temporal operators as transformations over events. However, we observe that this event-centric definition of temporal operations does not fully express the time semantics of temporal queries necessary for effective query optimization and parallelization (see Section 5 for more details). Based on this observation, we make a fundamental shift from this established design principle and propose a new compiler-based SPE design that follows a _time-centric_ model for streaming queries.
As opposed to the traditional event-centric model, the time-centric model adopts a more fine-grained representation of temporal operations by defining them as functional transformations over well-defined time domains using a novel intermediate representation (IR) called _TiLT_. TiLT IR is a highly expressive functional language with extensions to support temporal operations and offers several advantages over traditional query representation models. First, the functional definition of TiLT IR exposes inherent data parallelism which enables TiLT to parallelize _arbitrary_ streaming queries. Second, we demonstrate that the fine-grained time-centric definition of temporal operations allows TiLT to support effective query optimization strategies through simple IR transformations. Finally, we build a compiler-backend for TiLT that can automatically translate the time-centric IR query definitions to _hardware-efficient_ executable code.
Figure 3 shows the lifecycle of a streaming application in TiLT. In the first stage, TiLT converts the streaming query written by the user into the TiLT IR form (Section 4.1). After that, the compiler infers the boundary conditions necessary for parallelizing the query execution through a step called boundary resolution (Section 5.1). Afterwards, TiLT queries are subjected to an optimization phase (Section 5.2). Finally, in the code generation step, the optimized query is lowered to LLVM IR and subsequently to executable code for parallel query execution (Section 6).
### TiLT IR
Similar to regular functional languages, TiLT supports data types such as integers, floating points, arrays, structures, dictionaries, and expressions such as arithmetic/logical operations, conditional operations, variables, and external function/library calls. On top of this, we introduce three new constructs, namely _temporal objects_,
_reduction functions_, and _temporal expressions_, which enable TiLT to define a diverse set of temporal operations. First, the temporal object is similar to a regular scalar object, except that it takes on a time-evolving value that spans an infinitely long timeline. As opposed to the traditional way of representing streams as a sequence of discrete events with time interval and payload fields, TiLT models the data stream using a single temporal object that assumes different values at different points in time based on the events active at that time. Second, TiLT uses reduction functions to reduce the mutating values of a temporal object to a single scalar value. Reduction functions are introduced in TiLT to perform aggregate operations on data streams. Finally, temporal expressions are functional transformations on one or more temporal objects defined over a well-defined time domain. Temporal expressions are the basic building blocks of a streaming application in TiLT.
**Temporal object:** We use the \(\sim\) notation to distinguish temporal objects from scalar objects. For example, let \(\sim\)_stock_ be a temporal object corresponding to a data stream of stock price events \(\mathbf{e}_{i}\) with price value \(\rho_{i}\) and validity interval \((\mathbf{t}^{s}_{i},\mathbf{t}^{e}_{i})\). The value of \(\sim\)_stock_ at any point in time \(T\) can be retrieved using an indexing operator ([]), and is defined as follows.
\[\small\sim stock[T]=\begin{cases}\rho_{i}&\text{if}\quad\exists\mathbf{e}_{i}\mid T \in(\mathbf{t}^{s}_{i},\mathbf{t}^{e}_{i}]\\ \phi&\text{otherwise}\end{cases} \tag{1}\]
The value of the temporal object \(\sim\)_stock_ at time \(T\) is the value of the payload of the stock price event active at \(T\).3 When there are no events active at time \(T\), the temporal object assumes a null value called \(\phi\). The value \(\phi\) has the special property that performing any arithmetic operations on it would always result in \(\phi\). Additionally, TiLT allows defining derived temporal objects from existing ones by passing a time interval to the index. For example, the stock price values between time points \(t_{s}\) and \(t_{e}\) can be written as a derived temporal object \(\sim\)_win_ = \(\sim\)_stock_\([\mathbf{t}_{s}:\mathbf{t}_{e}]\). Then, the value of \(\sim\)_win_ at any point in time \(T\) is defined as follows.
Footnote 3: For simplicity, in this section, we assume that there are no events with overlapping intervals in the stream and therefore, there is only at most one event active at any given time. Handling data streams with overlapping events is discussed in Section 6.
\[\small\sim win[T]=\begin{cases}\sim stock[T]&\text{if}\quad T\in(\mathbf{t}_{s}, \mathbf{t}_{e}]\\ \phi&\text{otherwise}\end{cases} \tag{2}\]
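A minimal Python sketch of these two definitions, assuming non-overlapping events and using None in place of \(\phi\), might look as follows (the event values and helper names are illustrative).

```python
# Illustrative temporal-object semantics: ~stock[T] from a list of (start, end, payload)
# events, returning None in place of the null value φ.
PHI = None
stock_events = [(0, 10, 101.5), (10, 20, 102.0), (25, 30, 99.8)]

def temporal_index(events, T):
    # Value of the temporal object at time T (equation 1): payload of the
    # event whose half-open interval (start, end] contains T, else φ.
    for (s, e, p) in events:
        if s < T <= e:
            return p
    return PHI

def temporal_slice(events, t_s, t_e):
    # Derived temporal object ~stock[t_s : t_e] as a closure (equation 2).
    return lambda T: temporal_index(events, T) if t_s < T <= t_e else PHI

print(temporal_index(stock_events, 5))      # 101.5
print(temporal_index(stock_events, 22))     # None (φ)
win = temporal_slice(stock_events, 0, 10)
print(win(5), win(15))                      # 101.5 None
```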
**Reduction function:** The reduce function, denoted as \(\oplus\left(f,\sim I\right)\), is a special expression used to reduce a temporal object \(\sim\)_I_ into a scalar value based on a reduction operation \(f\). TiLT, by default, supports several basic aggregation operations such as Sum (+), Product (*), Max (\(>\)), and Min (\(<\)). For example, reducing the temporal object \(\sim\)_win_ using summation (+) can be written as \(\oplus\left(\mathbf{+},\sim win\right)\) and is defined as follows:
\[\small\oplus\left(\mathbf{+},\sim win\right)=\sum\{\sim win[t]\quad\forall t\mid \sim win[t]\neq\phi\} \tag{3}\]
Other aggregation operations such as average can be expressed by combining the built-in reduction functions. On top of that, TiLT also allows users to define custom reduction operations (see Section 6 for more details).
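In the same spirit, a reduction over the (sampled) values of a temporal object can be sketched as below; the snapshot values and operator lambdas are illustrative, and None again stands for \(\phi\).

```python
# Illustrative reduction over a temporal object sampled at integer timestamps,
# skipping φ (None) values, as in equation 3.
PHI = None

def reduce_temporal(op, init, values):
    state = init
    for v in values:
        if v is not PHI:
            state = op(state, v)
    return state

win_values = [PHI, 3, 5, PHI, 2]            # hypothetical snapshots of ~win
total = reduce_temporal(lambda s, v: s + v, 0, win_values)      # Sum
maximum = reduce_temporal(max, float("-inf"), win_values)       # Max
print(total, maximum)                        # 10 5
```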
**Temporal expression:** TiLT expresses streaming queries as a sequence of temporal expressions each defining an output temporal object as a functional transformation over one or more input temporal objects on a time domain. A time domain _TDom(start, end, precision)_ has a start and end time indicating the interval between which the temporal expression is defined and a time precision denoting how frequently the value of the resulting temporal object can change in the time domain.
The general syntax of a temporal expression is \(\sim O[t]=\textit{Fn}(\sim I_{1},...,\sim I_{n},C_{1},...,C_{m})\) over a time domain \(t=\textit{TDom}(T_{s},T_{e},p)\). This expression defines the output temporal object \(\sim O\) as a functional transformation (_Fn_) over \(n\) input temporal objects \(\sim I_{1},...,\sim I_{n}\) and \(m\) constants \(C_{1},...,C_{m}\) over the time domain \(t\). Here, \(t\) is defined within the time interval \((T_{s},T_{e}]\) and has a time precision of \(p\). Therefore, at any point in time \(T\in(T_{s},T_{e}]\) that is a multiple of \(p\), \(\sim O[T]\) assumes the value returned by the expression \(\textit{Fn}(\sim\textit{I}_{1},\sim\textit{I}_{2},...,\sim\textit{I}_{n})\) at time \(T\).
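The evaluation semantics of such an expression can be sketched as a loop over the time domain; the helper below is an illustrative interpretation only, not TiLT's actual implementation (which generates code rather than interpreting, as Section 6 describes).

```python
# Illustrative evaluation of a temporal expression ~O[t] = Fn(~I1, ..., ~In)
# over a time domain TDom(start, end, precision).
from typing import Callable, List, Optional

Value = Optional[float]        # None stands for φ

def eval_temporal_expr(fn: Callable[..., Value],
                       inputs: List[Callable[[int], Value]],
                       start: int, end: int, precision: int) -> dict:
    out = {}
    for T in range(start + precision, end + 1, precision):
        out[T] = fn(*[inp(T) for inp in inputs])
    return out

# Hypothetical input temporal object and a Select-like expression ~O[t] = ~I[t] + 1.
I = lambda T: float(T % 3) if T % 5 != 0 else None
O = eval_temporal_expr(lambda v: None if v is None else v + 1, [I], 0, 10, 1)
print(O)
```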
### TiLT Queries
Figure 4 shows the temporal expressions corresponding to the operations described in Figure 1. In the example above, the time domain \(t\) is defined between \(-\infty\) and \(+\infty\) and has a precision of 1 second. That means the temporal expressions using \(t\) define a value of the output temporal object at every second over an infinite time domain. The first temporal expression is equivalent to the _Select_ operation in Figure 0(a). This expression defines a functional transformation from the temporal object \(\sim\)_m_ to \(\sim\)_select_ over the time domain \(t\) and the value of \(\sim\)_select_ at any point in time is defined to be 1 more than the value of \(\sim\)_m_ at the same time. Similarly, the second expression is equivalent to the _Where_ operation and it filters only even values from \(\sim\)_m_. In this example, the value of \(\sim\)_where_ at any point in time is conditionally selected to be \(\phi\) if \(\sim\)_m_ has an odd value at that time. The third expression corresponds to the temporal join operation and follows a very similar structure to the
Figure 3. TiLT compilation pipeline
_Where_ operation, except that it is a binary expression derived from two input temporal objects \(\sim\)_m_ and \(\sim\)_n_. This expression identifies the strictly overlapping intervals between the events in \(\sim\)_m_ and \(\sim\)_n_ by checking if the both \(\sim\)_m_ and \(\sim\)_n_ have a non-null value at a given time. If yes, the expression returns the sum of the values, otherwise \(\phi\). Finally, the 10-second sliding-window _Window-Sum_ operation with 5-second stride is defined by applying the _Sum_ reduction function on every 10-second window derived from \(\sim\)_m_ over a time domain \(t^{\prime}\) with a precision of 5.
t = TDom(\(-\infty\), \(+\infty\), 1)
\(\sim\)select[t] = \(\sim\)m[t] + 1
\(\sim\)where[t] = (\(\sim\)m[t] % 2 == 0) ? \(\sim\)m[t] : \(\phi\)
\(\sim\)join[t] = (\(\sim\)m[t] != \(\phi\) && \(\sim\)n[t] != \(\phi\)) ? (\(\sim\)m[t] + \(\sim\)n[t]) : \(\phi\)

t' = TDom(\(-\infty\), \(+\infty\), 5)
\(\sim\)wsum[t'] = \(\oplus\)(+, \(\sim\)m[t'-10 : t'])
It should be noted that, even though the traditional definitions of these temporal operations have seemingly different semantics, the TiLT definitions of the same operations are very similar in structure. This shows that TiLT captures a minimal set of abstractions necessary to represent a wide range of temporal operations. We believe that the integration of constructs such as temporal objects, reduction functions, and temporal expressions into a powerful functional programming paradigm makes TiLT a highly expressive query representation model suitable for modern streaming applications.
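As a counterpart to the event-centric sketch in Section 2, the same four operations can be written time-centrically as plain functions of time; the inputs \(\sim\)m and \(\sim\)n below are hypothetical and None stands for \(\phi\).

```python
# Illustrative time-centric versions of the Figure 4 expressions, with temporal
# objects modelled as functions of time and None standing for φ.
def m(T):                       # hypothetical input temporal object ~m
    return float(T) if 0 < T <= 20 else None

select = lambda T: None if m(T) is None else m(T) + 1
where  = lambda T: m(T) if (m(T) is not None and m(T) % 2 == 0) else None

def n(T):                       # second hypothetical input ~n
    return 10.0 if 5 < T <= 15 else None

join = lambda T: (m(T) + n(T)) if (m(T) is not None and n(T) is not None) else None

def wsum(T):                    # ⊕(+, ~m[T-10 : T]) evaluated at precision 5
    vals = [m(x) for x in range(T - 9, T + 1) if m(x) is not None]
    return sum(vals) if vals else None

print(select(3), where(3), where(4), join(10), wsum(15))
```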
## 5. TiLT: Optimization and Parallelization
The compilation pipeline in TiLT starts by converting the streaming query into TiLT IR form as shown in Figure 3(a). Here, the example query is lowered to TiLT IR by defining the input data stream as a temporal object (\(\sim\)_stock_) and mapping each operation to the corresponding temporal expression defined over an infinite time domain. After the translation, TiLT first extracts data parallelism from the query by resolving the boundary conditions of the TiLT expressions through a step called boundary resolution (Section 5.1). Then, TiLT performs several optimization passes over the query through simple IR transformations. A full exploration of the optimization opportunities on TiLT IR is beyond the scope of this work. Instead, we focus on an optimization that provides the most bang-for-the-buck: operator fusion. Streaming queries generally exhibit high data locality and therefore can significantly benefit from operator fusion optimization that exploits this property to improve register/cache utilization (Shi et al., 2017; Wang et al., 2018). In Section 5.2, we show how TiLT can overcome the limitations of current SPEs in performing operator fusion across pipeline-breakers.
### Boundary Resolution
The time-centric definition of the temporal queries in TiLT precisely captures the data dependency between temporal objects over the entire time domain. For example, the value of \(\sim\)_join_ at time \(T\) in Figure 4 depends only on the values of \(\sim\)_m_ and \(\sim\)_n_ at the same time. That means the values of \(\sim\)_join_ at two different time points \(T_{1}\) and \(T_{2}\) are independent and can be evaluated in parallel. This data dependency information can be extended to the entire TiLT IR query. For example, the data dependency of \(\sim filter[T]\) in Figure 3(a) can be determined by following the lineage all the way to the input temporal object \(\sim\)_stock_. In this example, computing the value of \(\sim filter\) at \(T\) is solely dependent on the values of \(\sim\)_stock_ between time intervals \((T-10,T]\) and \((T-20,T]\). We call this the _temporal lineage_ of the temporal objects. TiLT uses this temporal lineage information to extract data parallelism from arbitrary temporal queries through a step called boundary resolution.
During the boundary resolution step, TiLT converts the initial query defined over the infinite time domain to a bounded domain by inferring the boundary conditions over the time domain. For example, based on the temporal lineage of the query in Figure 3(a), the values of \(\sim\)_filter_ between an arbitrary interval \((T_{s},T_{e}]\) are only dependent on the values between the interval \((T_{s}-20,T_{e}]\) in \(\sim\)_stock_. After the temporal boundary conditions have been inferred, TiLT redefines the time domain of the temporal query to the symbolic interval \((T_{s},T_{e}]\) by setting \(t\) to \(\textit{TDom}(T_{s},T_{e},1)\) (Figure 3(b)). TiLT uses this boundary condition to partition the data streams in order to parallelize the query execution (see Section 6.2 for more details).
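A simplified view of boundary resolution is a recursive walk over the lineage graph that accumulates how far back each expression looks; the dictionary below is an illustrative encoding of the trend-analysis query, not TiLT's internal data structure.

```python
# Illustrative boundary resolution: propagate how far back in time each expression
# looks, to find the input interval needed for an output interval (Ts, Te].
query_lineage = {
    "sum10":  {"input": "stock", "lookback": 10},
    "sum20":  {"input": "stock", "lookback": 20},
    "join":   {"input": ["sum10", "sum20"], "lookback": 0},
    "filter": {"input": "join", "lookback": 0},
}

def required_lookback(expr):
    node = query_lineage.get(expr)
    if node is None:                      # reached the source stream
        return 0
    inputs = node["input"] if isinstance(node["input"], list) else [node["input"]]
    return node["lookback"] + max(required_lookback(i) for i in inputs)

Ts, Te = 0, 1000
lb = required_lookback("filter")
print(f"output ({Ts}, {Te}] needs input ({Ts - lb}, {Te}]")   # (-20, 1000]
```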
### Operator Fusion
Once the query is defined in the TiLT IR expression form, performing fusion optimization is straightforward through simple IR transformations. Applying operator fusion in TiLT queries entails simply merging two successive temporal expressions that are defined over the same time domain into a single expression. For example, the following is the resulting expression after applying the fusion rule to the temporal expressions \(\sim\)_agg_10, \(\sim\)_agg_20, and \(\sim\)_join_.
\(\sim\)join[t] = {
    a10 = \(\sim\)sum10[t] / 10
    a20 = \(\sim\)sum20[t] / 20
    return (a10 != \(\phi\) && a20 != \(\phi\)) ? (a10 - a20) : \(\phi\)
}
Fusing expressions defined by \(\sim\)_agg_10, \(\sim\)_agg_20, and \(\sim\)_join_ simply requires replacing every occurrence of \(\sim\)_agg_10[t] and \(\sim\)_agg_20[t] in the \(\sim\)_join_ with \(\sim\)_sum_10[t]/10 and \(\sim\)_sum_20[t]/20 as shown above. This transformation is equivalent to the fusion optimization pass supported in current SPEs (shown in Figure 1(b)). However, unlike traditional SPEs, the same IR transformation can be applied to all the expressions in the query including the pipeline-breakers (e.g., \(\sim\)_join_, \(\sim\)_sum_10, \(\sim\)_sum_20). TiLT repeatedly applies this transformation to fuse all temporal expressions in the trend-analysis query into a single expression as shown below.
t = TDom(\(T_{s}\), \(T_{e}\), 1)
\(\sim\)filter[t] = {
    s10 = \(\oplus\)(+, \(\sim\)stock[t-10 : t])
    s20 = \(\oplus\)(+, \(\sim\)stock[t-20 : t])
    a10 = s10 / 10
    a20 = s20 / 20
    j = (a10 != \(\phi\) && a20 != \(\phi\)) ? (a10 - a20) : \(\phi\)
    return (j > 0) ? j : \(\phi\)
}
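Conceptually, this fusion pass is repeated substitution of definitions into their consumers. The Python sketch below imitates that process on string-encoded expressions purely for illustration; TiLT performs the equivalent rewriting on its IR, not on strings.

```python
# Illustrative fusion by substitution: inline every reference to an upstream
# expression into its consumer, mirroring the IR transformation above.
exprs = {
    "agg10": "sum10[t] / 10",
    "agg20": "sum20[t] / 20",
    "join":  "(agg10[t] != PHI && agg20[t] != PHI) ? (agg10[t] - agg20[t]) : PHI",
}

def fuse(target, defs):
    body = defs[target]
    changed = True
    while changed:
        changed = False
        for name, definition in defs.items():
            ref = name + "[t]"
            if name != target and ref in body:
                body = body.replace(ref, "(" + definition + ")")
                changed = True
    return body

print("join[t] =", fuse("join", exprs))
```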
In comparison to current SPEs, TiLT supports more holistic and generalizable query optimization strategies because of the following
Figure 4. Temporal expression for _Select, Where, Join, and Window-Sum_ operations
two key reasons. First, the graph-level representation of streaming queries is typically too coarse-grained. Therefore, the fused version of the primitive operations is often not expressible at this level. TiLT, on the other hand, provides a more flexible query representation that allows fine-grained transformations as shown above. Second, the event-centric model often does not expose to the optimizer how the intervals of the events are manipulated after each temporal operator. For example, the intervals of the output events of _Join_ are only determined at runtime. In contrast, the time-centric definitions in TiLT IR expressions explicitly encode the transformations over time domains, which allows TiLT to perform more sophisticated optimizations. We believe that TiLT IR opens up several more optimization opportunities on temporal queries that are otherwise hard to implement on traditional query representation models, and we plan to explore them in future work.
## 6. TiLT: Compilation and Execution
Even though the fine-grained compute definitions in TiLT IR are better suited for effective stream query optimizations, they also come with a significant amount of redundancy. For example, the temporal expression corresponding to the _Select_ operation shown in Figure 4 defines the value for the temporal object \(\sim\)_select_ at every second in the time domain. Although this fine-grained definition provides great flexibility for IR manipulation, it also introduces redundant computation since the value of the input temporal object \(\sim\)_m_ may not necessarily change every second. Therefore, naively translating TiLT queries into executable code can potentially be highly inefficient. Below, we describe how the TiLT compiler removes this redundancy and generates hardware-efficient executable code corresponding to TiLT IR queries.
### Code Generation
The TiLT compiler is written in C++ using the LLVM JIT compiler infrastructure (Levy et al., 2017). During the code generation phase, the compiler lowers the TiLT IR representation to LLVM IR. Since TiLT IR is fundamentally a functional language, it lends itself to standard code generation practices followed in compilers (Bahdan et al., 2017; Li et al., 2018). Below, we explain the code generation strategy used for the three newly introduced constructs in TiLT.
#### 6.1.1. Temporal Objects
According to the formal definition in equation 1, temporal objects define a value at every point in time. However, following the same definition for the physical implementation is impractical. Instead TiLT stores only changes in the value of the temporal object using a data structure called snapshot buffer (SSBuf). A snapshot buffer is an ordered sequence of snapshots stored in an array where each snapshot stores the timestamp (\(ts\)) and value (\(val\)) at the point when a change occurred.
Figure 5 shows an example event stream stored as a snapshot buffer. The first snapshot in this buffer takes a value of null (\(\phi\)) at timestamp 5 as there are no events active in the stream before that point. The second snapshot is added when first event ends (at 10) and takes the value (\(a\)) of the payload of that event. Similarly, a new snapshot is added to the buffer at the start and end of every subsequent events in the data stream. When the data stream contains events with overlapping validity intervals, a single snapshot can assume multiple values. In such cases, TiLT uses a list/map to store the values of a snapshot.
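A minimal sketch of building such a snapshot buffer from non-overlapping events (the timestamps and payloads are illustrative, and None stands for \(\phi\)) could look like this.

```python
# Illustrative construction of a snapshot buffer (SSBuf) from non-overlapping
# events: one (timestamp, value) snapshot per point where the stream value changes.
events = [(5, 10, "a"), (10, 20, "b"), (25, 30, "c")]   # (start, end, payload)

def to_ssbuf(evts):
    ssbuf, prev_end = [], None
    for (s, e, p) in sorted(evts):
        if prev_end is None or s > prev_end:
            ssbuf.append((s, None))        # gap before this event -> φ snapshot
        ssbuf.append((e, p))               # value p holds until timestamp e
        prev_end = e
    return ssbuf

print(to_ssbuf(events))
# [(5, None), (10, 'a'), (20, 'b'), (25, None), (30, 'c')]
```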
#### 6.1.2. Reduction Functions
We provide native support for several common reduction functions like Sum, Product, Min, and Max. Additionally, TiLT also supports user-defined reduction functions. Both built-in and user-defined functions are implemented using a general template similar to the ones followed in other SPEs (Shi et al., 2017; Li et al., 2018). This template contains four lambda functions that are designed to incrementally update a state on every snapshot in the temporal object. (i) _Init_ function returns the initial state of the reduction operation (e.g., 0 for Sum). (ii) _Acc_ function accumulates a single snapshot to the state (e.g., addition for Sum). (iii) _Result_ function returns the reduction result from the incremental state (e.g., the state for Sum). (iv) For invertible reduction functions, an optional _Deac_ function can be provided that applies the inverse of the aggregate function on the state (e.g., subtraction for Sum). This simple template allows TiLT to support efficient aggregation implementations like Subtract-on-Evict (Shi et al., 2017).
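The following Python sketch shows the shape of this template for a Sum reducer and how the optional inverse enables a subtract-on-evict sliding aggregation; the class and function names are illustrative rather than TiLT's actual API.

```python
# Illustrative reduction-function template (Init/Acc/Result/Deacc) and a
# subtract-on-evict sliding sum built on top of it.
class SumReducer:
    def init(self):            return 0            # initial state
    def acc(self, state, v):   return state + v    # add one snapshot value
    def result(self, state):   return state        # final aggregate
    def deacc(self, state, v): return state - v    # inverse, for evicted values

def sliding_sums(values, size):
    r = SumReducer()
    state, out = r.init(), []
    for i, v in enumerate(values):
        state = r.acc(state, v)
        if i >= size:                       # evict the value leaving the window
            state = r.deacc(state, values[i - size])
        if i >= size - 1:
            out.append(r.result(state))
    return out

print(sliding_sums([1, 2, 3, 4, 5], size=3))   # [6, 9, 12]
```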
#### 6.1.3. Temporal Expressions
Finally, the temporal expressions are synthesized into loops that iterate over input snapshot buffers and update the output snapshot buffer. Figure 3(d) shows the synthesized loop for the example query. The loop boundaries (\(T_{s}\), \(T_{e}\)) and the loop counter (\(ts\)) increment are determined from the time domain boundaries and precision. The loop body performs the computation defined by the temporal expression. One iteration of the loop computes the snapshot value (\(val\)) of the output buffer \(\sim\)_filter_ at the timestamp \(ts\).
However, as described above, naively setting the loop counter increment based on the time domain precision (\(ts=ts+1\)) might be highly inefficient as it introduces redundant iterations. Instead, TiLT takes advantage of an invariant of the functional definition of the temporal expressions to avoid redundant iterations, i.e., the output value of a temporal expression can only change when its inputs change. Based on this invariant, the TiLT compiler generates a loop-counter increment expression that computes the next value of \(ts\) at which at least one of \(\sim\)_stock_\([ts-10:ts]\) and \(\sim\)_stock_\([ts-20:ts]\) changes its enclosing snapshots. After loop synthesis, the generated loop is wrapped in a callable function with the symbolic loop boundaries parametrized as arguments (Figure 3(d)). This allows TiLT to execute the query over arbitrary time intervals on the output snapshot buffer.
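The structure of the synthesized loop, including the change-driven loop-counter increment, can be approximated in Python as follows; the snapshot buffer contents and the per-snapshot expression are illustrative.

```python
# Illustrative sketch of the synthesized loop: instead of stepping the loop
# counter by the time-domain precision, jump to the next timestamp at which
# any input snapshot changes. A snapshot buffer is a list of (ts, value) pairs
# where the value holds up to (and including) ts.
ssbuf = [(5, None), (10, 3.0), (20, 5.0), (25, None), (30, 2.0)]

def value_at(buf, T):
    for ts, val in buf:
        if T <= ts:
            return val
    return None

def next_change(buf, T):
    for ts, _ in buf:
        if ts > T:
            return ts
    return None

def run_query(buf, T_s, T_e):
    out, ts = [], T_s
    while ts is not None and ts < T_e:
        ts = next_change(buf, ts)          # skip redundant iterations
        if ts is None or ts > T_e:
            break
        v = value_at(buf, ts)
        out.append((ts, None if v is None else v * 2))   # hypothetical expression
    return out

print(run_query(ssbuf, 0, 30))
```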
Figure 5. Event stream as snapshot buffer
### Query Execution
TiLT executes the generated code by partitioning the input data stream into snapshot buffers and processing them in parallel using independent worker threads. The data streams are partitioned based on the resolved boundary conditions (Figure 3(b)) and a user-defined interval size. For example, Figure 6 shows the partitioned snapshot buffers for the example query in Figure 3(c) with an interval size of \(1000\) seconds. In this example, computing the output snapshots in \(\sim filter\) between the interval of \((0,1000)\) requires reading snapshots from the input stream between the interval \((-20,1000)\). Similarly, output snapshots between \((1000,2000)\) require processing input snapshots between \((980,2000)\) and so on. Even though partitioning snapshot buffers as above adds some redundancy with duplicated snapshots (shaded area in Figure 6), extracting data parallelism like this allows executing continuous streaming queries using synchronization-free parallel worker threads.
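The sketch below illustrates this partitioned, synchronization-free execution for the trend-analysis query using Python threads; the 20-second lookback, 1000-second interval size, and price values are assumptions taken from the running example rather than TiLT's generated code.

```python
# Illustrative synchronization-free parallel execution: partition the output
# timeline into fixed intervals and give each worker the input range extended
# by the resolved lookback (here 20 s), so partitions can run independently.
from concurrent.futures import ThreadPoolExecutor

LOOKBACK, INTERVAL, HORIZON = 20, 1000, 4000
prices = {t: 100.0 + (t % 50) for t in range(-LOOKBACK, HORIZON + 1)}

def process_partition(bounds):
    t_s, t_e = bounds
    out = {}
    for t in range(t_s + 1, t_e + 1):
        w10 = [prices[x] for x in range(t - 9, t + 1)]
        w20 = [prices[x] for x in range(t - 19, t + 1)]
        diff = sum(w10) / 10 - sum(w20) / 20
        if diff > 0:
            out[t] = diff
    return out

partitions = [(s, min(s + INTERVAL, HORIZON)) for s in range(0, HORIZON, INTERVAL)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_partition, partitions))
print(sum(len(r) for r in results), "upward-trend snapshots")
```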
## 7. Evaluation
**Benchmarks:** For TiLT evaluation, we prepare two sets of benchmarks and a representative group of real-world streaming applications. (i) Temporal operations: this benchmark includes four commonly used primitive temporal operations shown in Figure 1. (ii) Yahoo streaming benchmark (YSB) (Grover et al., 2016): a popular streaming benchmark comprising _Select_, _Where_, and tumbling-window count operations. (iii) Real-world applications: this includes eight real-world streaming analytics applications shown in Table 2.4 Table 2 also describes the public datasets used in the evaluation. We provide more details on the benchmark queries in Appendix A.
Footnote 4: We prepare these applications based on the realization that common benchmarks like YSB used for evaluating SPEs only represent a narrow set of real-world streaming analytics use-cases.
**Metrics:** In line with prior works (Grover et al., 2016; Grover et al., 2016; Grover et al., 2016; Grover et al., 2016), we use data processing _throughput_, i.e., the number of events processed per second, as the primary comparison metric for the performance evaluation. Additionally, we report latency-bounded throughput to evaluate the performance of different SPEs across a wide latency spectrum. Unless otherwise specified, the performance numbers are measured using a dataset with 160 million events. All the numbers reported are the average measurements from 5 runs of each experiment. The standard deviation of all the measurements is observed to be below 2%.
**Baselines:** On the temporal operations benchmarks, we compare the throughput of TiLT against four state-of-the-art scale-up stream query processing engines (SPEs): StreamBox (Grover et al., 2016), Microsoft Trill (Grover et al., 2016), LightSaber (Grover et al., 2016), and Grizzly (Grover et al., 2016). StreamBox and Microsoft Trill are both interpretation-based SPEs. StreamBox is written in C++ and uses pipeline parallelism to parallelize streaming queries. Trill is an SPE written in C# designed to support diverse streaming analytics applications. Both LightSaber and Grizzly are compiler-based SPEs optimized for aggregation operations.
**Experimental setup:** All the experiments are conducted on AWS EC2 m5.8xlarge with 32 cores (with hyper-threading), 2.5 GHz, and 128 GB DRAM. We also use AWS EC2 m5zn.3xlarge with 12 cores, 4.5 GHz, and 48 GB DRAM for the scalability experiment. For a fair performance comparison, we exclude the time taken for disk and network accesses and only measure the compute performance of the query execution after loading the entire input dataset into memory.
### Temporal Operations Throughput
We measure the performance of TiLT on the temporal operations _Select_, _Where_, _Window-Sum_, and _Join_ on a synthetic dataset containing 160 million events using 16 worker threads. Figure 7(a) shows the processing throughput comparison against StreamBox, Trill, Grizzly, and LightSaber. For simple per-event operations like _Select_ and _Where_, TiLT achieves similar performance to other SPEs (between \(0.69-1.44\times\)). On more complex operations like _Window-Sum_, TiLT outperforms Trill and StreamBox by \(6.64\times\) and \(18.30\times\), respectively. This shows that TiLT-generated code can significantly outperform hand-written operations in interpretation-based SPEs. Moreover, TiLT outperforms the two compiler-based SPEs Grizzly and LightSaber, which are optimized for window-based aggregation, by \(7.44\times\) and \(1.87\times\), respectively. We observe that the high overhead of Grizzly is caused by expensive atomic state updates used by the SPE to perform parallel aggregation. LightSaber, on the other hand, uses complex data structures such as a parallel aggregation tree, which we observe to be inefficient for fine-grained window-aggregations that are common in modern streaming analytics applications.
Finally, we evaluate the performance of temporal _Join_. Neither Grizzly nor LightSaber supports the _Join_ operation; therefore, we only compare the performance of TiLT against StreamBox and Trill. We observe that TiLT achieves 321.94\(\times\) higher performance than StreamBox and 13.87\(\times\) higher than Trill. The _Join_ operation in StreamBox is highly inefficient as it uses an \(O(n^{2})\) algorithm to find overlapping events. Both Trill and TiLT follow in-order processing of the events and therefore only need \(O(n)\) comparisons to perform the join. However, the Trill implementation uses expensive concurrent hashmaps to maintain operator states, whereas the time-centric model allows TiLT to generate more efficient state-free code for _Join_. These results show that TiLT is able to generate highly efficient code for commonly used temporal operations that can significantly outperform both interpretation-based and compiler-based SPEs.
### Scalability
We evaluate how well TiLT can scale streaming queries over multiple cores using the Yahoo Streaming Benchmark (Grover et al., 2016). We execute the query on a 12-core and a 32-core machine by increasing the number of worker threads and compare the throughput against Trill, StreamBox, Grizzly, and LightSaber. Figure 8(a) and Figure 8(b) show the
Figure 6. Parallel query execution
throughput measured on the 12-core and 32-core machines, respectively. Trill only supports parallel execution over partitioned streams and exhibits the worst scalability. We also observe limited scalability in Grizzly. We believe this is due to concurrent data structures and atomic states used to synchronize between worker threads. Both StreamBox and LightSaber scale up to 8 parallel threads on
Figure 8. Multi-core scalability on Yahoo Streaming Benchmark (YSB)
Figure 7. Performance comparison of TiLT on temporal operations and real-world streaming application benchmarks
Figure 9. Latency-bounded throughput of Trill and TiLT
the 32-core machine. LightSaber achieves a peak multi-core performance of \(291M\) events/sec and \(296M\) events/sec on the \(12\)-core and \(32\)-core machine, respectively. TiLT consistently outperforms all the SPEs and achieves a peak performance of \(406\) million events/sec on \(12\)-core machine and \(450\) million events/sec on \(32\)-core machine. The superior performance of TiLT comes from the synchronization-free data parallel query execution strategy described in Section 6.2. TiLT achieves close to linear scaling till \(4\)-threads in the \(12\)-core machine and \(8\)-threads in the \(32\)-core machine. The scalability benefits start to diminish afterwards because the query execution is shifting from being compute-bound to being memory-bound. This shows that TiLT can effectively parallelize the query execution while achieving \(1.52-13.20\times\) higher peak performance over the state-of-the-art SPEs.
### Real-World Applications Performance
We evaluate how well TiLT can support the performance requirements of real streaming workloads in comparison to state-of-the-art SPEs. To this end, we evaluate the throughput of TiLT on eight real-world streaming analytics applications listed in Table 2. These applications perform complex temporal transformations over data streams and therefore require a highly expressive temporal language for writing them as streaming queries. Out of all the baselines, we find that only Trill provides a query language that is capable of supporting all eight applications. Therefore, we compare the performance of TiLT on these applications against Trill.
First, we measure the throughput obtained from Trill and TiLT on these applications with \(16\) worker threads. As shown in Figure 7(b), TiLT is able to outperform Trill across all the applications by \(6.29-326.30\times\). This shows that TiLT is able to provide superior performance on a diverse set of streaming analytics applications. The best speedup is obtained on the signal resampling benchmark with \(326.30\times\) higher throughput over Trill. This query requires a non-standard temporal operation called Chop, which we find to have an inefficient operator implementation in Trill. Despite this non-standard operation, TiLT is able to generate an efficient implementation that ultimately results in a significant speedup.
Additionally, we also measure the latency-bounded throughput of TiLT against Trill on the real-world applications with the synthetic dataset. Trill is optimized to provide high throughput over a wide latency spectrum. As shown in Figure 9, we measure the throughput by setting the batch/snapshot buffer size to contain between \(10\) and \(1\)M events. We observe that TiLT provides consistently higher throughput across the entire latency spectrum, whereas Trill exhibits an \(18-227\times\) slowdown on smaller batch sizes due to high query execution overhead. This demonstrates that TiLT provides a runtime environment that adds minimal overhead and is able to provide high performance over a wide latency spectrum.
### Effectiveness of Query Optimization
We analyze the effectiveness of the fusion optimization in TiLT by measuring the single-thread execution time of the example query (Figure 3) before and after applying the IR transformations described in Section 5.2. We compare the result with the Trill version of the un-optimized and optimized queries shown in Figures 2(a) and 2(b). In Figure 10, we report the speedup observed on each of these query versions normalized to the throughput of the un-optimized Trill query. As shown, applying operator fusion in Trill achieves only a nominal speedup of \(1.06\times\). This highlights the limited optimization opportunities available in current scale-up SPEs. In TiLT, on the other hand, even the un-optimized version of the query outperforms the optimized Trill query by \(2.61\times\). The TiLT query without applying any optimizations (e.g., operator fusion) follows a similar query execution model as that of an interpreted SPE. Therefore, the speedup observed in this case can be mainly attributed to avoiding the common overheads associated with the managed language (C#) implementation of Trill. This shows that TiLT can generate efficient code corresponding to individual operators that outperforms the hand-written implementations in interpreted SPEs. On top of this, the speedup can be further improved to \(8.55\times\) after applying the operator fusion optimization. This speedup is the result of maximizing cache utilization by immediately reusing the intermediate results generated during query execution. This sensitivity study shows that the performance benefits of TiLT come from both minimizing the runtime overhead using a compiler-based approach and performing effective query optimizations enabled by the time-centric query representation model.
## 8. Related Work
Many of the stream processing engines (SPEs) commonly used in the industry (e.g., Apache Spark (Carpell et al., 2018), Flink (Flink et al., 2018), Storm (Storm, 2018), Beam (Bam, 2018)) are designed as scale-out systems based on the assumption that single machines are incapable of handling the performance demands of modern streaming analytics applications. However, recent works on scale-up SPEs (Han et al., 2017; Chen et al., 2017; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018) have shown that a well-designed system running on a single multi-core machine can often satisfy the performance requirements of large-scale stream processing (Chen et al., 2018). Unfortunately, we observe that many of the state-of-the-art scale-up SPEs sacrifice the expressive power of the language to achieve high single-machine performance. In this work, we argue that to meet the demands of modern streaming analytics
Figure 10. Performance breakdown of query optimization in Trill and TiLT (normalized to Trill)
applications, it is important to strike a good balance between programmability and performance. Below, we compare state-of-the-art scale-up SPEs in terms of these two aspects.
Microsoft Trill (Till, 2017) is optimized for diverse streaming analytics applications and provides a highly expressive temporal query language with a rich set of temporal operations and fine-grained windowing support. Trill uses efficient operator implementations and columnar data representation to maximize cache utilization and is shown to achieve \(10-100\times\) higher single-machine performance than scale-out SPEs (Zhu et al., 2019). However, Trill follows an interpretation-based query execution model that suffers from significant runtime overhead (Sang et al., 2019; Wang et al., 2019; Wang et al., 2019). TiLT, on the other hand, offers a similar query language but uses a compiler-based approach to generate hardware-efficient code that outperforms Trill by \(20\times\) on average (Section 7.3). Other interpretation-based SPEs such as StreamBox (Chen et al., 2019), StreamBox-HBM (Zhu et al., 2019), and BriskStream (Zhu et al., 2019) are designed to achieve high performance on multi-core machines by efficiently parallelizing streaming queries. However, these SPEs only expose low-level APIs for writing streaming applications and offer limited temporal query language support. Moreover, we show that the synchronization-free data parallel query execution in TiLT achieves better scalability than these systems (Section 7.2).
SABER (Zhu et al., 2019), LightSaber (Wang et al., 2019), Scabbard (Wang et al., 2019), and Grizzly (Grizly, 2019) are notable examples of recent compiler-based SPEs. However, as we explain in Section 3, these systems are only optimized to efficiently execute window-based aggregations and do not support important common temporal operations like temporal join. Therefore, these systems cannot support many real-world streaming analytics applications such as the ones shown in Table 2. TiLT, on the other hand, supports a wide range of streaming analytics applications with the help of the highly expressive temporal constructs in TiLT IR. We have also shown that TiLT is able to generate more efficient code and achieve superior performance compared to these SPEs (Section 7). Moreover, the code generators in these SPEs are designed as template expanders, which are harder to maintain and extend (Zhu et al., 2019). In contrast, TiLT takes a more systematic approach by proposing a well-defined IR, which we believe is important to facilitate further research in this field.
## 9. Conclusion
In this paper, we highlight the limitations of current stream processing engines (SPEs) and their inability to meet the performance demands of a diverse set of modern-day streaming analytics applications. To address these limitations, we design TiLT, a novel temporal query representation model for streaming applications. TiLT provides a rich programming interface to support a wide range of streaming analytics applications while enabling efficient query optimizations and parallelization strategies that are otherwise harder to perform on traditional SPEs. We also build a compiler-backend to generate hardware-efficient code from the TiLT query representation. We demonstrate that TiLT can outperform the state-of-the-art SPEs (e.g., Trill) by up to 326\(\times\) (20.49\(\times\) on average) on eight real-world streaming analytics applications with diverse computational characteristics. TiLT source code is available at [https://github.com/ampersand-projects/tilt.git](https://github.com/ampersand-projects/tilt.git).
## 10. Data-Availability Statement
The artifact of this paper is published through Zenodo (Zenodo, 2019).
## Acknowledgments
We first thank our shepherds and the anonymous reviewers for their valuable feedback and comments. We would also like to thank the members of the EcoSystem lab, especially Kevin Song, Jasper Zhu, Xin Li, and Christina Giannoula, for providing insightful comments and constructive feedback on the paper. This project was supported in part by the Canada Foundation for Innovation JELF grant, NSERC Discovery grant, AWS Machine Learning Research Award, and Facebook Faculty Research Award.
## Appendix A. Real-World Streaming Applications
We prepare a benchmark suite with eight streaming analytics applications used in fields like stock trading, signal processing, industrial manufacturing, financial institutions, and healthcare. We prepare these applications based on the realization that commonly used benchmarking queries to evaluate stream processing engines (SPEs), like the Yahoo streaming benchmark (YSB) (Kumar et al., 2019) and Nexmark (Grizly, 2019), only represent a narrow set of real-world streaming analytics use-cases. Table 2 provides a brief description of the streaming applications included in the benchmark suite and the corresponding public data sets used for the evaluation. In the following, we provide a detailed description of these applications. We also release the implementations of these queries in both Trill and TiLT as an artifact.
**Stock trading queries:** Streaming applications are widely used by investment services for analysing the trends in stock markets in order to make purchasing decisions. These applications continuously run statistical algorithms on high-frequency stock price data streams. In our benchmark suite, we include two commonly used trading algorithms: (i) trend-based, and (ii) relative strength index-based trading. The trend-based trading algorithm (Till, 2017) computes short-term and longer-term moving averages (e.g., 20 minutes and 50 minutes) of each stock price over time and identifies an upward trend when the short-term average goes above the long-term average, and vice versa for the downward trend. The second trading algorithm uses the relative strength index (RSI) (Zhu et al., 2019) as the momentum indicator instead of the moving averages. RSI is an indicator to chart the current and historical strength or weakness of a stock or market based on the closing prices during a 14-day trading period. These two algorithms are widely used and are often combined with more sophisticated trading algorithms.
**Data cleaning/preparation:** The real-time data processed by streaming applications are usually misformatted, corrupt, and garbled. Therefore, data analysts often need to conduct data cleaning and preprocessing before analysing the raw data streams. For example, events collected from different sources often vary widely in their scale of values. Data normalization is a commonly used approach to bring the values of the events on different scales to a notionally common scale. We include a standard score-based normalization query, which computes the mean (\(\mu\)) and standard deviation (\(\sigma\)) of the event payload values (\(X\)) over every 10-second tumbling window. The value of each event in the window is normalized by computing \((X-\mu)/\sigma\).
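A small Python sketch of this normalization query over 10-second tumbling windows is shown below; the event values are synthetic and the layout is illustrative.

```python
# Illustrative z-score normalization over 10-second tumbling windows, as used
# in the data-cleaning benchmark (event layout is hypothetical).
import statistics

events = [(t, float((t * 7) % 13)) for t in range(60)]   # (timestamp, value)

normalized = []
for w_start in range(0, 60, 10):
    window = [(t, v) for (t, v) in events if w_start <= t < w_start + 10]
    values = [v for _, v in window]
    mu, sigma = statistics.mean(values), statistics.pstdev(values)
    normalized += [(t, (v - mu) / sigma if sigma else 0.0) for (t, v) in window]

print(normalized[:3])
```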
Secondly, signal processing operations are used in the healthcare industry to clean and prepare the physiological signals like ECG and EEG collected from the patients (Kumar et al., 2017). These signals are collected at a fixed frequency usually ranging from \(10^{-4}\) Hz to \(10^{3}\) Hz. We include two commonly used signal processing operations in our benchmark suite: (i) signal imputation, and (ii) signal resampling. Signal imputation operations are used to fill missing events in the signal streams. The naive imputation approaches include substituting the missing signal values with a constant (e.g., zero) or with the value of the last active event. We include a signal imputation query that replaces the missing signal values with the average values of the events in their corresponding 10-second tumbling window. The signal resampling operation is used to translate a signal stream in one frequency to another. We use the linear-interpolation(S
The results obtained on the synthetic data set should be comparable to the results we report on the real data set in the main paper.
### Installation
We provide Docker files to set up the runtime environment for all the experiments.
1. Install docker following the instructions in [https://docs.docker.com/engine/install/ubuntu/](https://docs.docker.com/engine/install/ubuntu/).
2. Install gnuplot by running the following command: sudo apt-get install -y gnuplot
3. Clone the git repository using the following command: git clone [https://github.com/ampersand-projects/streambench.git](https://github.com/ampersand-projects/streambench.git) --recursive
4. Build the docker images by running the setup.sh script at the root directory of the cloned repository.
### Experiment Workflow
Execute run.sh to run all the experiments and generate the figures.
### Evaluation and Expected Results
Once the script has finished execution, it will have generated two plots corresponding to the figures in the main paper. The figures can be found in the root directory of the repository under the names ysb.pdf and e2e.pdf.
First, ysb.pdf plots the throughput comparison of TiLT against Trill, StreamBox, Grizzly, and LightSaber on the Yahoo Streaming Benchmark (YSB) (Grizzly et al., 2019) at degrees of parallelism ranging from 1 to 16. Compared to the other baselines, TiLT should consistently achieve higher throughput and scalability. Second, e2e.pdf plots the throughput comparison of TiLT against Trill on the real-world streaming applications listed in Table 2, both using a fixed parallelism of 8 threads. On average, TiLT should achieve \(\sim 10-100\times\) higher throughput compared to Trill.
### Experiment Customization
The parallelism for the real-world application performance evaluation experiment can be modified by setting the $THREADS environment variable to the desired number of threads in the scripts trill_bench/run.sh and tilt_bench/run.sh.
### Methodology
Submission, reviewing and badging methodology:
* [https://www.acm.org/publications/policies/artifact-review-badging](https://www.acm.org/publications/policies/artifact-review-badging)
* [http://cTuning.org/ae/submission-20201122.html](http://cTuning.org/ae/submission-20201122.html)
* [http://cTuning.org/ae/reviewing-20201122.html](http://cTuning.org/ae/reviewing-20201122.html)
|
2304.06843
|
Building a Quantum-ready Ecosystem
|
The emergence of quantum technologies has led to groundbreaking advancements
in computing, sensing, secure communications, and simulation of advanced
materials with practical applications in every industry sector. The rapid
advancement of the quantum technologies ecosystem has made it imperative to
assess the maturity of these technologies and their imminent acceleration
towards commercial viability. The current status of quantum technologies is
presented and the need for a quantum-ready ecosystem is emphasised. Standard
Quantum Technology Readiness Levels (QTRLs) are formulated and innovative
models and tools are defined to evaluate the readiness of specific quantum
technology. In addition to QTRLs, Quantum Commercial Readiness Levels (QCRLs)
is introduced to provide a robust framework for evaluating the commercial
viability and market readiness of quantum technologies. Furthermore, relevant
indicators concerning key stakeholders, including government, industry, and
academia are discussed and ethics and protocols implications are described, to
deepen our understanding of the readiness for quantum technology and support
the development of a robust and effective quantum ecosystem.
|
Abhishek Purohit, Maninder Kaur, Zeki Can Seskir, Matthew T. Posner, Araceli Venegas-Gomez
|
2023-04-13T22:23:49Z
|
http://arxiv.org/abs/2304.06843v2
|
# Building a quantum-ready ecosystem
###### Abstract
The emergence of quantum technologies has led to groundbreaking advancements in computing, sensing, secure communications, and simulation of advanced materials with practical applications in every industry sector. The rapid advancement of the quantum technologies ecosystem has made it imperative to assess the maturity of these technologies and their imminent acceleration towards commercial viability. In this paper, we present the current status of quantum technologies and emphasise the need for a quantum-ready ecosystem. We formulate standard Quantum Technology Readiness Levels (QTRLs) using innovative models and tools to evaluate the readiness of specific quantum technology accurately. We also discuss relevant indicators concerning key stakeholders, including government, industry, and academia, and describe ethics and protocols implications, to deepen our understanding of the readiness for quantum technology and support the development of a robust and effective quantum ecosystem.
Quantum Technologies, Quantum Computing, Quantum Communications, Quantum Strategy, Quantum-ready, Readiness indicators, Workforce Development.
*Maninder Kaur, [email protected]
## 1 Introduction
Quantum physics is undoubtedly one of the most successful scientific theories ever established in terms of the accuracy of its predictions. A global effort to use quantum phenomena like entanglement, superposition, and coherence to create radically new technologies has brought us to the verge of the second quantum revolution. We can already see quantum technology-based products prevailing in the market and an increasing number of companies [1], organisations, governments [2], and individuals making an effort to build a global quantum ecosystem. The prevalent question now is how to use quantum technology, not when quantum technology will make its appearance. Organisations have often used the term "quantum-ready" to assess their current situation with regard to their ability to integrate quantum technology into their existing framework. But the applicability of the term has a wide range of implications in different case scenarios. Therefore, it is crucial
to understand and define the term quantum-ready [3]. In this paper, we explore and provide a systematic understanding of quantum readiness, how to evaluate it, and how entities, organisations or individuals can become quantum-ready.
To begin, we explore what quantum technology is as a category of new technology in section 2. After classifying emerging technologies, we understand and explore the term quantum readiness and the need to address it in section 3. Then we establish the importance of being quantum-ready by applying the term to various innovation models in section 4. This helps us in providing tools to understand quantum readiness and identify various indicators to take into consideration while evaluating quantum readiness. Finally, in section 5 we explore the relevance and meaning of being quantum-ready for the key stakeholders.
## 2 Classifying emerging technologies
To define quantum readiness, it is essential to understand the nature of quantum technologies. This can help to estimate the potential impact and implications of these technologies, as well as identify areas where further research and development are needed.
One way to classify emerging technologies is based on their level of maturity [4, 5]. Firstly, emerging technologies are technologies that are still in the early stages of development and may not yet have a clear application or commercial potential. Secondly, emerging-growth technologies have moved beyond the early stages of development and have begun to show commercial potential, although they may still face significant technical or regulatory challenges. Finally, growth technologies are the ones that have achieved significant commercial success and are likely to have a significant impact on their respective industries.
Another way to classify emerging technologies would be based on their potential impact. Disruptive technologies are technologies that have the potential to fundamentally change the way an industry operates. They can create new markets or disrupt existing ones. When the term disruption is used, it generally indicates that an entirely novel method of operation has a substantial impact on how markets and industries currently perform[6]. Technology that supports the development of new goods, services, or business models without necessarily disrupting existing ones is known as enabling technology. Sustaining technologies improve the performance of existing products or services but do not create new markets or disrupt existing ones.
Figure 1: Classification of emerging technologies and status of quantum technologies
Another taxonomy of innovations was proposed by Freeman (1974), which groups them according to increasing importance.[7] This typology offers a conceptual framework for comprehending the contribution of quantum technology to a technological revolution. The typology consists of four types of innovations: incremental, radical, new technological systems, and technological revolutions. Smaller updates or changes made to the existing technology are referred to as incremental innovations. They may enhance the efficiency or quality of a product or service but do not fundamentally change its nature. Radical innovations involve important discoveries or developments that alter how a technology functions or is used. They frequently destroy established markets while offering new possibilities for expansion. The creation of completely new technological frameworks that integrate numerous related technologies or processes is called new technological systems innovation. These systems frequently result in the development of brand-new industries or the evolution of already-existing ones. Fundamental core discoveries resulting in the formation of a cluster of new technologies that may revolutionise a wide range of sectors constitute technological revolutions. They are characterised by far-reaching and transformative impacts on various aspects of human life.
Quantum technologies, being fundamentally different from classical technologies, can be described as an emerging disruptive technology,[8] although some branches of quantum technologies may fall under the emerging-growth technologies as well (Figure 1). By transcending the limits of classical technologies and enabling new capabilities in computing, communication, sensing, and materials, quantum technology has the potential to transform various aspects of human life and reshape the global economy, making it a technological revolution.[9] To fully comprehend the potential effect of these technologies and how best to prepare for the future, governments and industry leaders must categorise new innovations in the framework of quantum technology. Policymakers
and business leaders can hasten the development of quantum technologies and make sure they are well-positioned to benefit from this revolutionary technology by anticipating potential disruptions, identifying areas for research and development, identifying potential applications, and informing the government on policy decisions.
## 3 Addressing quantum readiness
Although quantum technology is still in its infancy or emerging stages, it is developing quickly. Rapid advancements in quantum technology can expand the global economy by billions of dollars [10]. Hence, it is vital to be prepared and adapt to it. In addition to its fast-paced growth, the technology is inherently different from current technologies and will be massively disruptive for most sectors. This technology will likely have a disruptive innovation effect on operations, services, and products, giving companies that exploit it early a significant competitive advantage. In areas where quantum technologies are anticipated to have a big influence [11], nations that invest in and embrace quantum technology first will have a competitive edge. Governments that are not quantum-ready run the danger of falling behind in these sectors and may find it difficult to compete with those who have made quantum technology investments [12]. Quantum technologies have the potential to make sophisticated computing, secure communication, and encryption possible [13]. Non quantum-ready entities and organisations may be more susceptible to cyberattacks and other security risks [14]. Quantum computers can defeat current encryption schemes, leaving sensitive data open to theft or abuse. The protection of national interests and citizens will be improved in nations that are quantum-ready. Research advancements in areas such as materials science, drug development [15], and artificial intelligence [16] will be made possible by quantum technology. By focusing on quantum readiness, researchers can hasten the progress of science and produce important new discoveries.
Quantum computing can be used to model intricate chemical processes, speeding up the development of novel medicines and materials,[17] and has the potential to transform banking by enabling financial institutions to carry out hitherto impractical sophisticated computations and risk analysis. Global problems including climate change, energy efficiency, and sustainable agriculture might be solved by quantum technology.[18] Countries can support sustainable development and the welfare of their citizens by adopting quantum technologies that address these issues.[19] Quantum computers can potentially be utilised to optimise energy distribution and cut down on waste, while quantum sensors could be used to monitor environmental conditions and increase crop yields.[20]
In the near future, quantum readiness will play a key role in deciding if a business/organisation thrives or recedes. For example, _Quantum Key Distribution_ (QKD), has the potential to revolutionise the field of cryptography. QKD allows for the secure exchange of encryption keys by exploiting the principles of quantum mechanics. Quantum computers may make current encryption techniques like _RSA_ obsolete since they may be able to decrypt data significantly faster than traditional computers. The cybersecurity landscape will drastically change as a result of this disruption,[21] necessitating the creation and deployment of new cryptographic algorithms that are resistant to quantum computing. In comparison to classical computing, quantum computing can more effectively optimise complex financial models, portfolio management, and risk assessment.[22] As a result, financial institutions may optimise investment strategies, more precisely assess risks, and more successfully identify fraud, which could potentially have a significant impact on the financial sector.
Since quantum technology can render previous technical knowledge outdated and maintain technological, industrial, economic, and social transformation, it shares many traits with _General-Purpose Technologies_ (GPTs). Changes in techno-economic paradigms brought about by GPTs
impact all sectors of the economy and continue the long-term process of economic progress in human civilisation. Furthermore, to support an entire and functional quantum ecosystem built on solid physical infrastructures, highly skilled human resources, and suitable technological systems, the evolution of this technology requires time, research and development investments, carefully chosen research policies, and key strategic decisions of governments and industry. Therefore, quantum readiness comes in with a first-mover advantage and the power to shape the direction of the technology. For instance, organisations in the pharmaceutical and materials sectors that use quantum computing to identify new drugs or design new materials might accelerate research and development efforts. These businesses can obtain a first-mover advantage by discovering potential drug candidates or novel materials more rapidly and accurately than rivals, that use classical computing approaches, enabling them to sell new products quickly and possibly secure patents earlier allowing them to thrive.[23]
For organisations to be competitive, safe, and economically prosperous in the future, addressing quantum readiness is essential. Countries can capitalise on this game-changing technology and promote science and sustainable development by investing in quantum technologies and building a strong quantum ecosystem. Although there are many obstacles to creating and using quantum technology, the advantages are too enormous to ignore. Organisations can make sure they are ready to profit from this innovative new technology by collaborating to address the issue of quantum readiness.
## 4 Diffusion of Emerging Technology
Radical innovation models refer to frameworks used to explain how new and emerging, transformative technologies or business models emerge and disrupt existing industries or markets. It is
still early to see the full impact of quantum technologies, but many (including the authors) expect them to be a highly transformative and disruptive set of technologies. Therefore, we argue that investigating and thinking about quantum technologies via radical innovation models is a suitable theoretical framework. One such model is the _Disruptive Innovation Model_ developed by Clayton Christensen, which explains how new technologies or business models can initially serve niche markets before eventually disrupting and overtaking established ones.[24]
Another potentially relevant concept is pervasive innovation, which refers to the idea that innovation can occur across all aspects of society and in all sectors of the economy, leading to profound changes in how people live, work, and interact with one another.[25] This can be thought of in relation to the idea of the techno-economic revolution, a concept which refers to the profound economic and societal changes that can result from the emergence of new technologies or industries.[26] These revolutions often involve the displacement of existing industries, the creation of new ones, and shifts in the distribution of wealth and power. Examples of techno-economic revolutions include the industrial revolution, the information revolution, and the ongoing transition to a digital economy.
Techno-economic paradigms should also be mentioned here in their relation to techno-economic revolutions. These are overarching frameworks that guide technological innovation and economic growth in a particular period. They represent the dominant technological and economic structures, processes, and institutions that shape innovation and growth in a given era.[27] Techno-economic paradigms are characterised by a set of shared beliefs, values, and practices that define the boundaries of what is considered technically feasible and economically viable. Whether quantum technologies can be considered as a potential new techno-economic paradigm or a subset of the digital transition is open for discussion. However, the analysis and recommendations developed in this
paper hold irrespective of either case. Quantum technologies, either by themselves or as a part of the constellation of the technologies enabling the transition to the next techno-economic paradigm, will play an important role in the following decade.
Radical and pervasive innovation, as well as techno-economic revolutions and paradigms, are important concepts to consider in the context of the period we are experiencing in relation to the second quantum revolution and associated quantum technologies. By acknowledging the potential for innovation to arise from unexpected sources and recognizing the profound economic and societal changes that can result from new technologies, organisations can better prepare for and navigate the challenges and opportunities of the innovation process. In this regard, one can turn to transitional studies, which focus on how organisations or industries can manage and navigate transitions to new technologies or business models. These studies examine the challenges, opportunities, and strategies involved in moving from current practices to new ones.[28]
Quantum readiness, in this context, refers to the ability of organisations to adopt and leverage quantum technologies for competitive advantage, especially during the transitional period where profound economic and societal changes may occur due to the radical and pervasive nature of quantum technologies. This set of technologies, including quantum computing and quantum cryptography, has the potential to revolutionise many vertical industries such as finance, healthcare, and logistics, but it also requires significant investments in infrastructure, expertise, and cultural change. Therefore, this process of getting ready for and enabling the second quantum revolution can be thought of as a highly entangled process.
The application of radical innovation models and transitional studies can help organisations prepare for and navigate the transition to quantum readiness. For example, the Disruptive Innovation Model can provide insights into potential market disruptions and the emergence of new
business models in the quantum space, while transitional studies can help organisations identify and manage the barriers and enablers of quantum adoption, such as funding, talent, and regulatory frameworks.
In summary, radical innovation models and transitional studies are valuable tools for organisations seeking to become quantum-ready. By understanding the dynamics of radical innovation and managing the transition to new technologies, organisations can position themselves for success in the emerging quantum landscape. Two common tools that are widely used in this literature are the well-known _S-curve_[29] and the _Technology Readiness Levels_ (TRLs) framework.[30, 31]
### S-curve analysis
The S-curve[32] diffusion model is a useful tool for predicting and understanding the adoption and diffusion of new technologies, including quantum technology. It can be used to predict the rate of adoption of quantum technology and to identify potential opportunities and challenges. For example, companies can use the S-curve model to identify when quantum technology is likely to reach the growth phase and plan accordingly.[33, 34] This can include investing in research and development, building partnerships, and developing new products and services.
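As a concrete illustration of the model, the snippet below fits a logistic S-curve to a hypothetical series of cumulative adoption counts; the data values, the starting parameters, and the choice of the logistic form are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, r, t0):
    """Logistic S-curve: k = saturation level, r = growth rate, t0 = inflection point."""
    return k / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical cumulative adoption counts per year (illustrative values only)
years = np.arange(2010, 2023, dtype=float)
adoption = np.array([2, 3, 5, 8, 13, 21, 34, 55, 80, 110, 140, 165, 180], dtype=float)

(k, r, t0), _ = curve_fit(logistic, years, adoption, p0=[200.0, 0.5, 2018.0], maxfev=10000)
print(f"estimated saturation level ~{k:.0f}, growth rate {r:.2f}, inflection year {t0:.1f}")
```

The fitted saturation level and inflection point are the quantities a forecaster would read off to judge where on the curve a technology currently sits.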
As the growth of many innovations follows this fundamental pattern, the uptake of innovation is frequently represented in these terms in diffusion analysis.[35] Slow adoption is followed by a period of high growth, which is then followed by slower growth as the majority of potential adopters have already adopted the new product (Figure 2).[36] S-curve diffusion assessments frequently highlight the differences between early and late users. Early adopters tend to be wealthier, more educated, connected to mass media, etc. on an individual level. We may also assume that there is a pattern of diffusion across business sectors, with some being early adopters and some being followers. This is
true if we look at the dissemination of significant new technologies that can be used for a variety of purposes. Early adopters, for instance, frequently work in high-tech industries and are presumably more closely connected to the fields where the technology was initially produced and/or used. The analysis can be approached through the lens of the industry life cycle and product life cycle[37]. The basic idea behind the industry life cycle approach is that sectors are likely to _mature_ over time. Their operations become conventional, their goods become more standardised, and the production tools they need to use become more affordable or easier to use. They rely less on highly skilled labour.
Figure 2: The S-curve analysis of emerging technologies
The product life cycle approach makes additional observations that are especially relevant when taking the dynamics of new technologies into account. This approach incorporates the findings of several innovation studies as well as the concepts of diffusion and industry cycles. Here, the nature of the technology itself is more important than how the market or industry develops. According to the product life cycle concept, early versions of inventions are frequently, even when technologically advanced, relatively simple and primitive in comparison to their later versions. This is not merely because the marketplaces are still in their infancy, but because the early generations of innovations appeal to very few customers. There is a lack of understanding, on the user side, of their existing and future capabilities, and on the supplier side, of precisely what skills will be valued and how they may be employed. This also suggests that there could be weak connections between inventors and numerous prospective consumers and use cases of the technology in the early stages of product creation and distribution. Complementary goods and services are either unavailable or not generally accessible in the early stages of development. Compared to subsequent versions of comparable products, the early key products are likely to be more expensive and less reliable. Early versions of goods are likely to need significant technical expertise for their manufacture and usage, whereas later versions may draw on these abilities as common practice. If the products succeed, people will become more aware of them, invest more in them, and acquire more skills using them, and their markets will grow. Essentially, the technology itself will evolve. The product is redesigned to make it more durable, user-friendly, and capable of more effective manufacturing. New entrants, who could bring fresh concepts for innovation, join the early suppliers, who have shown the potential for a sizable new market. Several innovation approaches frequently compete with one another, with the winner establishing the design paradigm to which all others should adhere. If the first providers are small firms, big businesses are
liable to substantially change the nature of the competition in the market, possibly by acquiring the smaller firms. Large enterprises with greater marketing capabilities can boost the diffusion process and the creation of a dominant design. Later, as the market expands, the emphasis on innovation often shifts away from fundamental product/technology innovation towards providing enhanced quality, before shifting to process innovation that is scalable and economical. With supplementary goods and services, enabling technologies, increased functionality, and greater adaptability, the product becomes more user-friendly and requires fewer high-level skills to utilise. There can be a shift from technology-push to demand-pull. Successful inventions have an extended development phase once they are introduced to the market, not just while they are pre-commercial prototypes in research and development institutions, but also as providers discover what consumers want. Users, on the other hand, acquire knowledge about the product and successful usage techniques.
When it comes to quantum technology, the analysis gets tricky, as quantum technology itself branches into several technologies including quantum computation, sensing, communication, cryptography, and more. Each of these individual technologies stands at a different stage. Furthermore, some of the technologies involve similar architectures and platforms, although currently there is no clear winner for which platform will be adopted in general or within its branches. Therefore, it is worthwhile to look at quantum technology in general to assess the current scenario. In the S-curve analysis, the adoption rate of quantum technology is examined over time and compared to the adoption rates of other technologies. Assessing quantum technology adoption also entails evaluating the elements influencing or impeding its uptake, as well as projecting its possible future uptake; we call these elements indicators. By considering indicators such as research publications and patents, investment, commercial products and services, talent pool, technical advancements, industry partnerships, and government policies,
we can evaluate the current status and potential future of quantum technology [38]. This understanding is essential for determining the quantum readiness level, which is critical for realizing the full potential of quantum technology.
To see a general trend of the growth and diffusion of quantum technologies in general, we collected data for various indicators over time and plotted them on the same time scale, with the total number of each indicator normalised to one, in Figure 3 (our code reads in data from four CSV files containing the number of startups, patents, publications, and private investments for each year, and then normalises the data so that the maximum value for each category equals 1, which makes the trends easier to compare). For the collection of data on publications, we used bibliometric tools such as the _Web of Science_ (WoS) database and the search query used in [39]. For patents, we used the data recently published in the paper [40]. For start-up data, we used the numbers from [1] and the database maintained and updated by QURECA. For private investments,
we got the data from the Quantum Technology Investment Update 2022 report published by the Quantum Insider.[41] When we plotted all these data against years, we could see a general trend of take-off of quantum technologies, indicating that now is the perfect time to become early adopters of quantum technology. Being an early adopter of quantum technologies can provide organisations with a range of benefits. It can help them stay ahead of the curve, giving them a competitive advantage, improving productivity, enabling the innovation of new products and services, and allowing them to shape the direction of the technology. Although we can see some dips in 2023, these could reflect various factors outside normal conditions, such as the Covid-19 pandemic.
Figure 3: Various quantum readiness indicators over time for quantum technologies
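The normalisation used for Figure 3 can be reproduced with a short script along the following lines; the file names and the column layout are assumptions for illustration and do not correspond to the exact files used in our analysis.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file names; each CSV is assumed to have "year" and "count" columns.
indicators = {
    "startups": "startups.csv",
    "patents": "patents.csv",
    "publications": "publications.csv",
    "private investments": "investments.csv",
}

fig, ax = plt.subplots()
for label, path in indicators.items():
    df = pd.read_csv(path).sort_values("year")
    # Normalise each indicator so that its maximum value equals 1
    ax.plot(df["year"], df["count"] / df["count"].max(), label=label)

ax.set_xlabel("Year")
ax.set_ylabel("Normalised count (max = 1)")
ax.legend()
fig.savefig("quantum_indicators.pdf")
```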
Forecasters must be aware that there are sometimes variations in the norm. Even though S-curves typically offer quite good fits to empirical data, in reality, they are frequently disrupted by things like wars and economic downturns. Innovations can also replace one another before a certain innovation is adopted by all its prospective users, and a rival technology may supplant it.
### Technology readiness levels for quantum technologies
_Technology Readiness Levels_ (TRLs) are a method of measuring the maturity of a technology by determining its level of development, testing and integration.[42] The TRLs provide a common framework for evaluating and communicating the maturity of a technology and can be used to identify areas where further research and development are needed. For quantum technologies, it makes more sense to use TRLs to assess the readiness of individual branches or at an individual product level. It is crucial to identify the quantum technology to be assessed and its intended application(s). This will help in identifying the specific requirements for the technology and the performance metrics needed to assess its TRL. Then one must determine the TRL criteria for the quantum technology under assessment. This can include factors such as the level of technical
feasibility, the maturity of the design, the level of testing and validation, and the readiness for deployment. Evaluating the technology against the TRL criteria, by analysing the performance data from experimental studies, simulations, and tests and comparing the results, helps assign a TRL value. This should reflect the technology's current stage of development and the level of maturity it has achieved. It also reflects the level of confidence with which the technology is ready for deployment in the market. Based on this TRL, one can then develop a roadmap for the technology. This involves identifying the key development milestones needed to reach the next TRL and the associated resources and investments required.
TRLs are typically defined on a scale of 1 to 9[43, 44]. The same scale can be adopted for quantum technologies, with the additional consideration of the ethics protocols relevant to the branch of quantum technology under consideration. The first four levels usually address the most fundamental technical research, involving mostly laboratory results, depending on the sort of research, technological advancement, and innovation being addressed. From TRL 5 through TRL 6, technological development then proceeds until the first prototype or demonstrator is obtained. Projects involving technological innovation fall between TRL 7 and TRL 9, as this type of innovation necessitates the launch of a new product or service onto the market, which involves passing the necessary tests, certifications, ethics reviews, and approvals. These stages entail deployment or extensive implementation. Assessing the readiness of new technology products is important for understanding their maturity and potential for commercialisation.
We have formulated standard Quantum Technology Readiness Levels (QTRLs), taking inspiration from those set for general TRLs and from the TRLs defined for artificial intelligence technology, to help assess a quantum technology more accurately, as shown in Figure 4. The description and expected outcomes of each of these levels are given below; a small code sketch encoding this scale follows the list:
1. QTRL 1 (Basic principles observed): Research into the basic principles of quantum phenomena is at present being conducted. This might involve researching various phenomena such as quantum superposition, entanglement, and coherence as well as creating theoretical
equations and models to explain these concepts.
Figure 4: _Quantum Technology Readiness Levels_ (QTRLs) description, representation and expected outcomes
2. QTRL 2 (Technology concept/Application formulated): Based on the concepts observed in QTRL 1, researchers develop a particular quantum technology concept or application at this phase. This entails the formation of a distinct technical vision and goals and may include concepts for quantum computing, communication, or sensing applications.
3. QTRL 3 (Analytical and experimental proof of concept): Researchers working at this level offer analytical and experimental evidence that the technological concept or application is feasible. This typically involves laboratory experiments and simulations to validate the technology's functionality, performance, and potential advantages over classical technologies.
4. QTRL 4 (Quantum technology validated in lab): Quantum technology systems or components are developed and assessed at QTRL 4 in a controlled lab setting. This stage focuses on the integration of individual quantum components into a larger system and the testing of their performance, reliability, and scalability.
5. QTRL 5 (Quantum technology validated in relevant environment): In this phase, researchers validate the system's performance in environments that closely resemble real-world applications. This might involve testing quantum communication systems over long distances, assessing the performance of quantum sensors in realistic settings, or benchmarking quantum algorithms on prototype quantum processors.
6. QTRL 6 (Quantum technology demonstrated in relevant environment): QTRL 6 requires a demonstration of a fully functional system or subsystem within a relevant environment. This could include a quantum communication network connecting multiple users, a quantum
sensor deployed in the field, or a quantum processor executing a specific algorithm to solve a real-world problem.
7. QTRL 7 (Prototype demonstration in operational environment): At this level, a fully integrated system prototype is demonstrated in an operational real-world environment. This may involve field testing of a quantum communication system for secure data transmission, deploying a quantum sensor for environmental monitoring, or running a quantum computer to solve complex optimisation problems.
8. QTRL 8 (System complete and qualified): QTRL 8 is achieved when the system has been completed and qualified through rigorous testing and demonstration. This includes achieving all performance requirements, addressing any identified issues or limitations, and demonstrating a high level of reliability and robustness. Ethics and other protocols are also taken into consideration and checked.
9. QTRL 9 (Successful project operations): When the quantum technology has been successfully deployed and used in practical real-world applications or missions, the final QTRL has been reached. This may entail the widespread use of quantum sensors in a variety of businesses, the commercial availability of quantum computing services, or the use of quantum communication technologies for secure data transmission.
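One lightweight way to keep track of such assessments is to encode the QTRL scale as a lookup table and record the evidence behind each assigned level, as in the sketch below; this is only an illustrative bookkeeping aid, and the example entry is hypothetical.

```python
from dataclasses import dataclass, field

QTRL_SCALE = {
    1: "Basic principles observed",
    2: "Technology concept/application formulated",
    3: "Analytical and experimental proof of concept",
    4: "Quantum technology validated in lab",
    5: "Quantum technology validated in relevant environment",
    6: "Quantum technology demonstrated in relevant environment",
    7: "Prototype demonstration in operational environment",
    8: "System complete and qualified",
    9: "Successful project operations",
}

@dataclass
class QTRLAssessment:
    technology: str
    level: int
    evidence: list = field(default_factory=list)  # e.g. papers, test reports, field trials

    def describe(self) -> str:
        return f"{self.technology}: QTRL {self.level} ({QTRL_SCALE[self.level]})"

# Hypothetical example entry, not an actual assessment
print(QTRLAssessment("QKD link", 7, ["field trial report", "interoperability tests"]).describe())
```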
One way to assess the readiness of a new technology product is to use a readiness-vs-generality chart [45]. The same can be used if one wants to evaluate the readiness of a specific quantum technology-based product and not a branch in general. Then one must identify the different layers of the capability of the product in relevant environments and then compare it against the TRL of
each layer of capability. For instance, Forschungszentrum Jülich defined a set of TRLs for quantum computing.[46] The theoretical foundation for quantum computing (annealing) is developed when quantum computing technology is at QTRL 1. Once the fundamental device principles have been investigated and applications or technologically pertinent algorithms have been developed, the technology reaches QTRL 2. The fundamental components of quantum computing systems, physical qubits that have been fabricated imperfectly, are at QTRL 3. Laboratory tests are then designed to verify the theoretical predictions of qubit characteristics, and the fabrication of multiple-qubit systems with a classical control unit follows in the QTRL 4 stage. QTRL 5 quantum computing technology consists of parts that are incorporated into a small quantum processor without error correction. At QTRL 6, components are assembled into a small error-correcting quantum processor, which is rigorously tested with various quantum algorithms and benchmarked. In the QTRL 7 stage, the prototype is tested on solving small but relevant use-case problems. Once the prototype is made scalable and qualifies in all necessary tests, it advances to the QTRL 8 stage. Finally, when the prototype quantum computer exceeds the power of classical computers on specific problems, it is labelled as QTRL 9. This acts as a critical tool to assess the current quantum readiness level, and helps in strategising a roadmap for the prototype, planning future steps of action, and setting goals. Hence it is of utmost importance that QTRL assessment is conducted by qualified experts with a deep understanding of the technology and its development. It is also important to set standardised QTRL levels in each branch of quantum technology so as to make meaningful comparisons with other competing organisations/platforms. One of the major highlights of Quantum.Tech, a conference held in 2022 in Boston, Massachusetts, USA, was assessing the TRLs for various branches of quantum technology. In comparison to quantum sensing and quantum computing, quantum communication, such as QKD, is viewed as the more developed
domain (QTRL 7) for many future implementations while quantum computing was seen to be at TRL 3[47]. Another report in 2021 to assess quantum readiness for military applications provided a detailed report on various TRLs for different quantum technologies[48]. Taking data from all these reports and roadmaps, we provide an assessment of current TRLs for various quantum technologies and a time horizon indicating the expected time it would take to achieve a QTRL of 9 for these quantum technologies (Figure 5).
We can clearly see that quantum technologies are at different QTRLs. When considering diverse applications and deployment platforms, the QTRL variance and time horizon assumptions become much more complicated. In conclusion, the QTRLs of different branches of quantum technologies vary widely, with quantum computing being in the earliest stages of development and quantum communication and cryptography being more mature. Significant progress has been made in recent years in all branches of quantum technologies, but challenges remain in scaling the technology to meet the requirements of practical applications. It is crucial to keep track of the QTRL levels of these branches in order to stay ahead of the curve and be quantum-ready.
Figure 5: QTRLs and time horizon expectations (with error bars) for several quantum technologies
With all these tools and information we have devised a roadmap or flowchart with detailed steps to assess quantum readiness (Figure 6). It will help businesses and organisations to assess
their quantum readiness or to develop a strategy to be quantum-ready.
Figure 6: Flow chart describing the steps to follow to be quantum-ready
It can be summarised as follows (a minimal code sketch of this checklist follows the list):
1. Understanding key concepts of Quantum Technologies: Gain a basic understanding of quantum physics principles, such as superposition, entanglement, qubits, and sensors.
2. Identify relevant stakeholders: Assemble a team of stakeholders from different departments such as research and development and executive leadership. This team will be responsible for driving the assessment process and implementing the findings.
3. Use various innovation models to map the current stage of quantum technology: Identify and gather data over time for various KPIs (key performance indicators), map them on an S-curve, and evaluate the TRL to assess the maturity of the quantum technology.
4. Evaluate internal capabilities and readiness: Evaluate your organisation's current capabilities and readiness to adopt quantum technology. Identify any gaps in knowledge, expertise, or resources that need to be addressed.
5. Benchmark against industry peers: Analyse your competitors and industry leaders to understand their quantum readiness and strategies. Identify best practices.
6. Develop a strategic roadmap: Create a strategic roadmap outlining your organisation's plan to adopt and implement quantum technology. This should include short-term and long-term objectives, as well as a timeline for implementation.
7. Study and evaluate ethical implications: Explore, assess, and adopt practices for the ethically and societally responsible use of quantum technology.
8. Prioritise investments and partnerships: Determine the resources required to achieve your quantum technology readiness goals, such as investments in hardware, software, and talent. Prioritise investments based on potential impact and return on investment. Identify potential partners, such as universities, research institutions, and technology providers, to collaborate on quantum technology initiatives.
9. Implement a quantum education program: Develop and implement a comprehensive quantum education program for your organisation. This may include training sessions, workshops, and seminars to ensure all relevant stakeholders are informed about the latest developments in quantum technology.
10. Monitor progress and adapt: Regularly review and update your quantum technology readiness assessment and strategy based on new developments in the field. Continuously evaluate your organisation's progress and adjust as necessary to stay on track with your quantum technology goals. Publish these assessment reports.
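As referenced above, a minimal code sketch of this checklist is given below; it merely encodes the ten steps and reports the fraction an organisation has addressed, with uniform weights chosen as an illustrative assumption.

```python
# Illustrative encoding of the ten roadmap steps; the weights are uniform and
# the step names are shortened paraphrases of the list above.
CHECKLIST = [
    "understand key quantum concepts",
    "identify relevant stakeholders",
    "map the current stage of the technology (S-curve / QTRL)",
    "evaluate internal capabilities and gaps",
    "benchmark against industry peers",
    "develop a strategic roadmap",
    "evaluate ethical implications",
    "prioritise investments and partnerships",
    "implement a quantum education program",
    "monitor progress and adapt",
]

def readiness_score(completed_steps: set) -> float:
    """Fraction of roadmap steps an organisation has addressed (0 to 1)."""
    return sum(step in completed_steps for step in CHECKLIST) / len(CHECKLIST)

done = {"understand key quantum concepts", "identify relevant stakeholders"}
print(f"quantum readiness (rough estimate): {readiness_score(done):.0%}")
```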
## 5 Quantum readiness for different stakeholders
In the previous sections, we discussed the interpretation of quantum readiness and various models to assess current quantum readiness by using different models. Next, we describe in detail various perspective scenarios in which the term _quantum readiness_ is predominantly used and of relevance for key stakeholders in the quantum ecosystem. We discuss what it means to be quantum-ready in each of these scenarios and its relevant indicators.
### Government
Common themes for a government to be quantum-ready are to have the necessary infrastructure, expertise, and resources to develop, implement, and use quantum technologies. The following section outlines structures found across nations developing their quantum-readiness, namely strategies and roadmaps, ecosystem & supply chain, and ethics and policy.
National quantum strategies are a common framework for a government to identify areas where quantum technologies could have the most significant impact, such as in finance, healthcare, or national security. By focusing its investments[2] in these areas, a government can develop the necessary infrastructure and expertise to become a leader in quantum technologies. A roadmap is essential to forecast the implementation of quantum technologies for each of the following points:
1. Research and development: Investment in basic research is essential for advancing the state of quantum technologies. This includes funding for universities, research institutes, and national labs to conduct basic research on quantum materials, devices, and systems.
2. Education and training: Investment in education and training programs is critical to develop the workforce with the necessary skills and knowledge to design, develop, and implement quantum technologies.
3. Infrastructure: Investment in premises is essential to support the infrastructure and deployment of quantum technologies in specific industries or applications, including the development of testing and measurement facilities, as well as the establishment of quantum computing centres.
The development of coordinated local ecosystems and the understanding of supply chains are
essential for quantum readiness. A secretariat sitting within a branch of the government is helpful to achieve several actions across the ecosystem, namely collaboration with stakeholders, coordination of resources, monitoring of implementation, engagement with international partners, public outreach, as well as regular updates and reviews. Such a collaborative approach is best practice to share expertise, resources, and to accelerate the development and adoption of quantum technologies. Such partnerships between the government, companies, research centres and supporting organisations can ensure that the supply chain for quantum technologies is secure and resilient. Strategic development of domestic capabilities to produce quantum components, as well as their implementation in global supply chains, can accelerate commercialisation and improve supply chain resiliency. An outcome of a strategy is to help a government address the ethical and policy issues that can arise with the development and use of quantum technologies. This could include issues related to data privacy, cybersecurity, and the impact of quantum technologies on society. The implementation of standards and certifications for quantum components can ensure that they meet certain security and performance requirements, thus further enhancing the security and reliability of the supply chain. Standards and certifications, in particular, can help build public confidence by providing a benchmark against which the performance and security of quantum technologies can be measured. This can help alleviate concerns about the reliability and safety of quantum technologies, especially for applications such as healthcare, finance, and national security.
### Industry
For a specific company, the journey to be quantum-ready could differ, mainly depending on the size of the company, or how the company embraces new technologies. Overall, the only way to become quantum-ready is by a step-by-step approach:
1. Firstly, understanding quantum is critical. For decision makers, an awareness of the foundations and basic principles of the technology in general terms will be sufficient, whereas more technical knowledge will be useful for technical professionals. Being aware of how the technology develops, as well as starting to get involved in the ecosystem is very important for any organisation. To start, a list of available resources can be found in [49].
2. Secondly, the understanding, at a high level, of how quantum technologies can be impacting the business is needed. At this point, most organisations identify an individual or a group of individuals who will take the lead on quantum readiness for the company. Building skills and awareness at this level is crucial.
3. At a later stage, companies will identify specific use cases and start with a proof of concept or a demonstrator, working with quantum startups. This early experimentation will bring a competitive advantage to the business [50]. As quantum technologies are overall in the early stages of development, companies need to be patient, understanding that the experiments and proofs of concept will need to be updated in the future.
4. Once companies are fully aware of the value and impact of the technology, a strategic roadmap will be developed, critical to making the business quantum-ready.
5. The mastering of the technology will come with a full implementation within the business.
All in all, becoming quantum-ready for a business means taking full advantage of the technology.
### Academia and skill-based institutions
Being quantum-ready involves a combination of both theoretical knowledge and practical skills needed to work in the rapidly evolving field of quantum technologies.
Academic institutions can play an important role in filling the gaps needed to be quantum-ready by providing individuals with the foundational knowledge and transferable skills through specialised academic programs, conducting research, partnering with industry for hands-on training, providing experiential learning opportunities, and offering continuing education programs.
While it may be challenging for academic programs to have enough resources to make students fully qualified for quantum careers, many programs are taking steps to address this challenge and ensure that their students are well-prepared for the demands of the field. For example, some universities and colleges are establishing dedicated quantum computing centres, institutes, and labs to provide students with access to state-of-the-art equipment and resources [51, 52, 53]. These centres often work closely with industry partners to ensure that students are gaining the skills and knowledge that are most in demand in the field.
Worldwide, an increasing number of academic stakeholders and associations are starting to create and include specialised courses in quantum technologies within the syllabi of interdisciplinary programs and degrees.
To create a diverse quantum workforce and mitigate potential biases in quantum technologies, academic programs need to attract individuals from underrepresented groups to fields such as computer science, math, statistics, physics, material science, and chemistry. These efforts should begin early, with K-12 interventions that offer exposure to quantum technologies and make them accessible to all students. Partnering with online learning platforms and skill-based training institutions
can be another way for academic institutions to provide students with access to a broader range of resources and expertise.[54, 55] Programs like _Qubit by Qubit_ have seen success in increasing interest and participation in quantum computing among middle and high school students from diverse backgrounds.[56] Universities also have a crucial role in building a talent pipeline by offering courses, programs, and research opportunities that provide students with the necessary skills and knowledge to pursue quantum careers. In addition, business leaders can influence the shift towards quantum learning in schools and universities by demonstrating the importance of this field through corporate involvement. Ultimately, academic programs must work to create a pathway for the talent pipeline by removing barriers to entry and ensuring that all students have access to resources and opportunities to develop quantum-ready skills.[49]
The field of quantum technologies requires a diverse workforce to meet its needs. However, STEM disciplines still face challenges in broadening participation, particularly among women and underrepresented racial and ethnic groups. Academic programs must address the barriers that currently exist and provide support, mentorship, and recognition for diverse perspectives and expertise, to increase accessibility to students and to promote inclusivity and diversity.[57]
### Readiness with respect to ethics and protocols
Until this section, a general framework for quantum readiness and the elaboration for different stakeholders have been provided. Being quantum-ready involves several factors, including the availability of quantum hardware and software, the expertise of personnel, and the quality of the infrastructure. To be quantum-ready, an organisation must have access to the necessary technology, personnel, and infrastructure. However, this narrow concept of quantum readiness can, and sometimes should, be expanded to cover any ethical and legal aspects, especially following the
national and international protocols, such as those on standardisation.
Being quantum-ready is an important step for organisations that are considering the use of quantum technologies. It allows organisations to identify and address any challenges or limitations that may prevent them from effectively using quantum technologies. For example, an organisation that is not quantum-ready may lack the necessary hardware, software, expertise, or infrastructure to effectively use quantum technologies. In order to be quantum-ready, an organisation must invest in quantum technologies, by developing new algorithms and/or applications, training personnel, and building a quantum-safe infrastructure.
Investing in quantum readiness is a significant undertaking and requires a long-term commitment to preparing for the use of quantum technologies. It is not something that can be done quickly or easily. However, the benefits of being quantum-ready are significant. Organisations that are quantum-ready will be able to take advantage of the potential benefits of quantum technologies and will be well-positioned to compete in the emerging quantum global economy.
Companies, governments, NGOs, and other stakeholders must be quantum-ready because quantum technologies promise some essentially different capabilities. To take advantage of the benefits of quantum technologies, organisations must be prepared to use them effectively. However, quantum technologies raise many ethical concerns, including the potential misuse of technology, its impact on jobs and the economy, unequal access to technology, and its potential impact on society.
As one well-known example, quantum technologies could be used for malicious purposes such as hacking and espionage, which highlights the need for appropriate safeguards and regulations [58]. This directly impacts the meaning of quantum readiness. Additionally, the use of quantum computing could automate specific tasks, potentially leading to job losses in certain industries. This raises concerns about the potential for economic disruption and the need for policies to support
affected workers and sectors. One can argue whether being quantum-ready also encompasses being ready for such disruptions brought forth by this new and emerging set of technologies. Furthermore, only certain individuals or organisations may have access to mature quantum technologies,[59] potentially increasing inequalities and disparities. This raises concerns about fairness and equitable access to the benefits of quantum technologies and the need for policies to ensure that the technology is accessible to all. Finally, there are also concerns about the potential impact of quantum technologies on different stakeholder groups in society,[60] and the need for careful and deliberate planning to ensure that quantum technologies can be used ethically and beneficially.
There are many discussions and a growing literature on the ethically and societally responsible use of quantum technologies, ranging from arguments on the democratisation of quantum technologies[61] to calls for not building quantum computers at all.[62] In practical terms, these discussions inform organisations, governments and the public, and could change the stakeholders' perception of quantum technologies. For example, the topic of trust is discussed openly in a Time article,[63] where it was proposed that "...soon will come a time when trusting a quantum computer will require a leap of faith", and without trust-building across the entire ecosystem there will be strong hesitancy to take this leap of faith. There are efforts to mitigate this hesitancy, and widely accepted ethical guidelines and protocols are established methods of trust-building in new and emerging technology ecosystems.
To employ methods of trust-building, one needs to frame the object of this trust, in this case, quantum technologies. It is discussed in the literature that quantum technologies can be considered system technologies with a systemic impact on society,[64] which calls for an anticipatory approach. Similarly, arguments for a strong _Responsible Research and Innovation_ (RRI) approach can also be found in the literature.[65] There is even a call for action from some industry partners and different
stakeholders in the ecosystem for the ethical development of these technologies [66]. There are also many proposals and important highlights in the literature arguing for further understandability of the quantum theory [67], the effects of hype on teaching about quantum technologies [68], introducing ethics into quantum information classrooms [69], awareness raising as the bare minimum that should be done [70], and many other potential actions to prevent quantum technologies running into fiascos of implementation at the interface of science and society [71]. One might consider how these discussions can be practically tied to an organisation's quantum readiness. A clear example of this is the White House memorandum _on Promoting United States Leadership in Quantum Computing While Mitigating Risk to Vulnerable Cryptographic Systems_, which prioritises "...the timely and equitable transition of cryptographic systems to quantum-resistant cryptography, with the goal of mitigating as much of the quantum risk as is feasible by 2035" [72]. For those who have been following the discussions on the quantum threat [58], this came as no surprise. It is the poster-child case of future quantum technologies' potential to cause a massive impact on today's regulatory landscape. It might seem like this was always supposed to happen, but the chances of this becoming a regulation, let alone of a company raising $500 million on this particular topic, were negligible just a decade ago [73]. Furthermore, this is a particular example of how the values and priorities of certain stakeholders in the ecosystem can cause major shifts not only in the narratives but also in the regulatory and opportunity landscape of an entire industry, with consequences reaching into almost all industries.
There are many moving parts in the regulatory landscape, in both early- and late-stage standardisation efforts: the Quantum Internet Research Group [74] under the Internet Engineering Task Force is working on future standards for the quantum internet, while others, such as the _European Telecommunications Standards Institute_ (ETSI) _Industry Specification Group_ (ISG) working on QKD standards [75], have an impact on industrial relations much sooner. Ethics plays an important role in these processes. Different stakeholders have different value sets, and even within stakeholder groups with similar value sets, there are differences in how these are prioritised. Some promote global open access and sustainability but, owing to geopolitical concerns, do not explicitly advocate democratic values, while others that argue strongly for democratisation may find themselves arguing for it only in the limited context of like-minded countries. Discussions on export controls and even controls on knowledge transfer are common practice in the ethical and regulatory context of quantum technologies.
As discussed earlier in this article, techno-economic paradigms are characterised by a set of shared beliefs, values, and practices that define the boundaries of what is considered technically feasible and economically viable [27], but in a wider context, these also need to be societally acceptable [26] to fully become system technologies [64]. Hence, organisations aiming for quantum readiness should keep track of, and perhaps even actively participate in, these discussions on what the wider set of values and protocols should be surrounding how these technologies interface with the market and the whole of society [71].
## 6 Conclusion
Quantum readiness refers to the state of being equipped to adopt and utilize quantum technology, providing organizations and nations with the potential to secure a first-mover advantage, shape the technology's future direction, attain a competitive edge, and achieve economic prosperity. This paper highlights the importance of quantum readiness, along with innovation models and assessment tools that enable individuals, organisations, and businesses to evaluate their quantum readiness and formulate strategies for becoming quantum-ready.
We started with the classification of emerging technologies based on their level of maturity and potential impact which provides a useful framework for policymakers and business leaders to understand and prepare for the impact of these technologies on their respective industries. Disruptive technologies have the potential to create entirely new markets or disrupt existing ones while enabling technologies facilitate the development of new goods, services, or business models. Sustaining technologies, on the other hand, improve the performance of existing products or services without necessarily creating new markets or disrupting existing ones. Quantum technology is an emerging disruptive technology that is fundamentally different from classical technologies and has the potential to revolutionize various industries. Policymakers and business leaders can position themselves to benefit from the potential of quantum technology by proactively identifying potential disruptions, determining areas for research and development, and informing policy decisions.
In the third section, we addressed the need to understand quantum readiness. It is anticipated that the rapidly advancing quantum technology will disrupt various industry sectors, revolutionize cryptography, improve cybersecurity, and accelerate scientific research. Countries and organisations that invest in and embrace quantum technology will have a significant competitive advantage, while those that do not may fall behind. Quantum readiness is crucial for businesses and governments to thrive in the future, and addressing this issue requires time, research and development investments, skilled human resources, and strategic decisions. Quantum technology shares characteristics with _General-Purpose Technologies_ (GPTs), which have a long-term impact on the economy and society. Therefore, organizations can ensure their competitiveness and economic prosperity by collaborating to build a strong quantum ecosystem and embracing quantum technology. We formalised the _Quantum Technology Readiness Levels_ (QTRLs), their descriptions, and expected outcomes on the basis of which we assessed the current status of different quantum
technologies. We also established a time horizon indicating the anticipated date by which these technologies will be successfully adopted and deployed for practical applications. Unsurprisingly, quantum computing is still in its infancy; by contrast, quantum sensing and communications can be considered to be in more advanced stages.
Finally, we discussed the significance of quantum readiness for key ecosystem stakeholders and defined relevant indicators to be quantum-ready.
1. Governments need to establish infrastructure, allocate resources, and develop expertise to become quantum-ready. National quantum strategies and roadmaps can assist in identifying priority innovation areas for investment. An effective local ecosystem is also important, requiring partnerships with businesses, research institutions, and supporting organizations to ensure a secure and robust supply chain. To address ethical and policy challenges, governments must implement standards and certifications to ensure the security and reliability of quantum technologies.
2. The process of becoming quantum-ready for a company depends on its size and its willingness to embrace new technologies. The approach involves first understanding the basics of quantum technology and getting involved in the ecosystem by utilizing available resources. At a later stage, identifying use cases and experimenting with proof of concept with quantum startups brings a competitive advantage to the business. Once the company comprehends the benefits of the technology, a strategic roadmap is established for complete integration within the business. Proficiency in the technology ultimately enables the company to harness its full potential.
3. The development of a quantum workforce with foundational knowledge and transferable
skills required for the rapidly evolving field of quantum technologies can be facilitated by academic and skill-based institutions through specialized academic programs, research, hands-on training, experiential learning opportunities, and continuing education programs. To promote inclusivity and diversity, academic programs must target individuals from underrepresented groups in fields such as computer science, math, statistics, physics, material science, and chemistry, beginning with K-12 interventions. Additionally, academic institutions need to establish a pathway for the talent pipeline by removing barriers to entry and providing resources and opportunities for students to acquire quantum-ready skills. To increase accessibility and inclusivity, academic programs must address the barriers that currently exist and provide support, mentorship, and recognition for diverse perspectives and expertise.
4. The benefits of being quantum-ready are significant, allowing organizations to take advantage of the potential benefits of quantum technologies and be well-positioned to compete in the emerging quantum global economy. However, there are also ethical and legal aspects to consider, and organizations must follow national and international protocols, such as those on standardization. Quantum technologies may also raise ethical concerns, including potential misuse, impact on jobs and the economy, unequal access to technology, and its impact on society. There is growing attention towards the ethically and societally responsible use of quantum technologies, ranging from arguments on the democratization of quantum technologies to calls for not building quantum computers at all. To employ methods of trust-building, organisations need to frame the object of this trust, in this case, quantum technologies, and take an anticipatory approach. Furthermore, there is a growing need for a call for action from
industry partners and different stakeholders in the ecosystem to take into account the ethical and societal concerns to effectively use quantum technologies.
In the coming years, the quantum technology sector is expected to undergo significant changes. Some technologies will become operational, while others may not succeed due to challenges in engineering or commercialization. It is not possible to make precise predictions about which technologies will succeed or fail. However, adopting a forward-looking approach can help organisations or individuals become quantum-ready.
|
2305.09859
|
Smaller Language Models are Better Black-box Machine-Generated Text
Detectors
|
With the advent of fluent generative language models that can produce
convincing utterances very similar to those written by humans, distinguishing
whether a piece of text is machine-generated or human-written becomes more
challenging and more important, as such models could be used to spread
misinformation, fake news, fake reviews and to mimic certain authors and
figures. To this end, there have been a slew of methods proposed to detect
machine-generated text. Most of these methods need access to the logits of the
target model or need the ability to sample from the target. One such black-box
detection method relies on the observation that generated text is locally
optimal under the likelihood function of the generator, while human-written
text is not. We find that overall, smaller and partially-trained models are
better universal text detectors: they can more precisely detect text generated
from both small and larger models. Interestingly, we find that whether the
detector and generator were trained on the same data is not critically
important to the detection success. For instance the OPT-125M model has an AUC
of 0.81 in detecting ChatGPT generations, whereas a larger model from the GPT
family, GPTJ-6B, has AUC of 0.45.
|
Niloofar Mireshghallah, Justus Mattern, Sicun Gao, Reza Shokri, Taylor Berg-Kirkpatrick
|
2023-05-17T00:09:08Z
|
http://arxiv.org/abs/2305.09859v4
|
# Smaller Language Models are Better Black-box Machine-Generated Text Detectors
###### Abstract
With the advent of fluent generative language models that can produce convincing utterances very similar to those written by humans, distinguishing whether a piece of text is machine-generated or human-written becomes more challenging and more important, as such models could be used to spread misinformation, fake news, fake reviews and to mimic certain authors and figures. To this end, there have been a slew of methods proposed to detect machine-generated text. Most of these methods need access to the logits of the target model or need the ability to sample from the target. One such black-box detection method relies on the observation that generated text is locally optimal under the likelihood function of the generator, while human-written text is not. However, in reality, we usually have very limited knowledge of the generator, let alone access to it. As such, in this paper we set out to explore _whether models other than the generator can be used to differentiate between machine-generated and human-written text_. We find that overall, _smaller and partially-trained models are better universal text detectors_: they can more precisely detect text generated from both small and larger models. Interestingly, we find that whether the detector and generator were trained on the same data is not critically important to the detection success. For instance the OPT-125M model has an AUC of 0.81 in detecting ChatGPT generations, whereas a larger model from the GPT family, GPTJ-6B, has AUC of 0.45.
## 1 Introduction
With the rapid improvement in fluency of the text generated by large language models (LLMs), these systems are being adopted more and more broadly in a wide range of applications, including chatbots, writing assistants, and summarizers. Generations from these models are shown to have human-like fluency (Liang et al., 2022; Yuan et al., 2022), making it difficult for human readers to differentiate machine-generated text from human-written text. This can have significant ramifications, as such LLM-based tools can be abused for unethical purposes like phishing, astroturfing, and generating fake news (He et al., 2023). As such, we need to be able to reliably and automatically detect machine-generated text.
Previous work has found that identifying local optima (curvature) in the likelihood surface of a trained language model allows for detection of training utterances (Mattern et al., 2023), and generations for a given target model (Mitchell et al., 2023). Specifically, the approximate measure of local optimality, dubbed curvature, is formed by comparing the loss of a target sequence to the loss of its perturbations, under the target model. The intuition in both prior works is that this measure of curvature is _larger_ around training samples/generations from the model, compared to unseen human-written text and can therefore be used to determine if a given sequence is part of the training data or not (by Mattern et al.) or a generation of the target model or not (by Mitchell et al.).
In practice, however, in many cases where we want to distinguish between machine-generated text and human-written text we do not know what models could have been used to generate a sequence, and even if we do know the model, we might not have access to its loss on a given sequence (e.g. ChatGPT), or access might be behind a paywall (e.g. GPT3). Therefore, in this paper we set out to explore the detection of machine-generated text without having knowledge about the generative model. We do this by exploring whether the same curvature measure can be used to cross-detect text generated by _models other than the target generative model_, and under what conditions such cross-detection performs best. We use surrogate detector models, whose loss functions we do have access to. Then, we run the curvature test using the surrogate (see Figure 1) and compare detection power with the same test, but using the true generator's loss.
To this end, we conduct experiments on a slew of models with different sizes (from tens of millions to billions of parameters), architectures (GPTs, OPTs,
Pythias) and pre-training data (Webtext and the Pile) and also from different training stages (ranging from the first thousand steps of training to full training, 143k steps). Our main finding is that _cross-detection can come very close to self-detection in terms of distinguishability_, and that _there are universal cross-detectors with high average distinguishability_ performance, meaning they perform well in terms of detecting generations from a wide range of models, regardless of the architecture or training data. More specifically, we find that **smaller models are better universal detectors**. For instance, the OPT-125M model comes within \(0.07\) area under the ROC curve of self-detection, on average (see Figure 4). And for models where we don't have self-detection, such as ChatGPT, the AUC is \(0.81\), whereas OPT 6.7B's AUC is \(0.58\).
We also find that **partially trained models are better detectors** than the fully trained ones, and this gap is bigger for larger models (see Figure 7). We then further investigate some possible reasons for this phenomenon by analyzing the curvature and log-likelihood of the different models, and find that larger models are more conservative in terms of the likelihood and curvature they assign to generations from other models. Smaller models, however, assign higher likelihood to generations from models of their size or larger; they can therefore cross-detect a broader range of models, which makes the smallest models the best universal detectors.
## 2 Methodology
Figure 1 shows the methodology of our work, and how we conduct our experiments: For a given _target pool_ of sequences, the task is to _determine if each sequence is human-written or machine-generated_ by running a curvature (local optimality) test over a surrogate _detector model_ that is different from the generator model, as our main assumption is that _we have no information about the generator model_. In the rest of this section we delve deeper into the details of each component in the setup.
Target pool.The pool of sequences for which we want to conduct the machine-generated text detection. We form this pool such that there is a 50%/50% composition of machine-generated/human-written text. The machine-generated text is created by prompting the _generator model_ with the first \(20\) tokens of each human-written sequence.
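To make the pool construction concrete, here is a minimal sketch assuming a HuggingFace causal language model as the generator; the checkpoint name, sampling settings, and continuation length are illustrative choices rather than the exact ones used in the experiments.

```python
# Sketch of target-pool construction: half human-written text, half machine
# text obtained by prompting the generator with the first 20 tokens.
from transformers import AutoModelForCausalLM, AutoTokenizer

GENERATOR = "facebook/opt-1.3b"          # illustrative generator checkpoint
tok = AutoTokenizer.from_pretrained(GENERATOR)
gen = AutoModelForCausalLM.from_pretrained(GENERATOR)

def build_pool(human_texts, prompt_tokens=20, max_new_tokens=180):
    """Return (text, label) pairs; label 1 = machine-generated, 0 = human."""
    pool = [(t, 0) for t in human_texts]                 # human-written half
    for t in human_texts:                                # machine-written half
        prompt_ids = tok(t, return_tensors="pt").input_ids[:, :prompt_tokens]
        out = gen.generate(prompt_ids, do_sample=True, top_p=0.95,
                           max_new_tokens=max_new_tokens,
                           pad_token_id=tok.eos_token_id)
        pool.append((tok.decode(out[0], skip_special_tokens=True), 1))
    return pool
```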
Generator model.This is the target model the generations of which we are trying to distinguish from human-written text. We do not always have full access to this model. In fact, in most cases we may not even know what model generated the text. This scenario is what we are actually interested in, i.e. we want to know _how can we detect text generated by unknown models?_
Curvature (local optimality) test.The method we use to distinguish between machine-generated and human-written text relies on the local optimality (curvature) of the target sequence, building on the intuition that generations are likelier to be locally optimal, and unseen human written text is not (Mitchell et al., 2023; Mattern et al., 2023). To visualize the local neighborhood of the target sequence, we generate perturbations of it and have the target generative model evaluate their loss. As such, the curvature is then calculated as:
\[d(x)=\log p_{\theta}(x)-\frac{1}{k}\sum_{i}\log p_{\theta}(\tilde{x}_{i}) \tag{1}\]
Where \(x\) is the target sequence, \(\theta\) is the parameterization of the target model, \(\tilde{x_{i}}\) is the \(i\)th perturbation of sample \(x\) (i.e. the \(i\)th neighbor) out of the overall \(k\) perturbations. The perturbed sequences are generated
Figure 1: Experimental methodology of our work: We want to study how models can _cross-detect_, as in distinguish between human-written and machine-generated text that is not necessarily generated by them. To this end, we create a _target pool_ consisting of human-written machine-generated text, created by prompting the _generative model_ with the first 20 tokens of the human-written text. We then generate perturbations of each target sequence using a _perturbation model_. We find the loss of the target pool and perturbations under a _detector model_, to estimate the local optimality of the likelihood function around the target and use that to determine if a sequence is machine generated or not.
by masking \(15\%\) of \(x\) and filling the mask using a perturbation model. The curvature is thresholded to make the machine-generated/human-written text decision.
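As a minimal illustration of Equation 1 (assuming a HuggingFace causal LM as the surrogate detector, and that the perturbed neighbours have already been produced), the sequence log-likelihood can be recovered from the model's mean token loss:

```python
# Sketch of the curvature test (Eq. 1) under a surrogate detector model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

DETECTOR = "facebook/opt-125m"           # illustrative surrogate detector
det_tok = AutoTokenizer.from_pretrained(DETECTOR)
det = AutoModelForCausalLM.from_pretrained(DETECTOR).eval()

@torch.no_grad()
def log_likelihood(text):
    ids = det_tok(text, return_tensors="pt").input_ids
    # the model returns the mean per-token negative log-likelihood;
    # multiply by the length to approximate the sequence log-likelihood
    return -det(input_ids=ids, labels=ids).loss.item() * ids.shape[1]

def curvature(text, neighbours):
    """d(x) = log p(x) - (1/k) * sum_i log p(x~_i)."""
    return log_likelihood(text) - sum(log_likelihood(n) for n in neighbours) / len(neighbours)

# A sequence is flagged as machine-generated when its curvature exceeds a
# threshold (or the raw scores are swept to trace out a ROC curve).
```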
Perturbation Model. This model helps generate neighbors by filling in randomly selected spans of the target sequences in the pool and perturbing them. We use _T5-3B_ for this purpose in our experiments.
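A simplified sketch of this perturbation step is shown below; it masks a single 2-token span per call and fills it with T5, whereas the actual procedure masks roughly 15% of the tokens across several spans, so the masking schedule and decoding settings here are illustrative only.

```python
# Simplified neighbour generation: mask one short span and let T5 fill it in.
import random, re
from transformers import T5ForConditionalGeneration, T5Tokenizer

t5_tok = T5Tokenizer.from_pretrained("t5-3b")
t5 = T5ForConditionalGeneration.from_pretrained("t5-3b")

def perturb_once(text, span=2):
    words = text.split()
    s = random.randrange(0, max(1, len(words) - span))
    masked = words[:s] + ["<extra_id_0>"] + words[s + span:]
    ids = t5_tok(" ".join(masked), return_tensors="pt").input_ids
    out = t5.generate(ids, do_sample=True, max_new_tokens=8)
    decoded = t5_tok.decode(out[0], skip_special_tokens=False)
    # the fill is whatever T5 emits between <extra_id_0> and the next sentinel
    match = re.search(r"<extra_id_0>(.*?)(<extra_id_1>|</s>|$)", decoded, re.S)
    fill = match.group(1).strip() if match else ""
    return " ".join(words[:s] + fill.split() + words[s + span:])

# Calling perturb_once k times (or masking several spans per call, as in the
# paper) yields the k neighbours consumed by the curvature test above.
```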
Detector model.This model is used as a _surrogate_ for the target model, to help us detect generations when using the curvature test. The pool of sequences and their neighbors are fed to the detector model, and their loss under the detector model is measured and used to calculate curvature and to distinguish between generations and human written text.
Success metric.We evaluate the success of the detector by measuring the area under the ROC curve (AUC), i.e. the false positive vs. true positive rate curve. The higher the AUC, the more distinguishing power the detection mechanism has.
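Tying the sketches above together, the detection quality over a labelled pool can then be summarised with scikit-learn's ROC-AUC; the neighbour count below is an illustrative choice.

```python
# Summarise the detection power of the curvature test as an AUC,
# reusing the curvature() and perturb_once() sketches above.
from sklearn.metrics import roc_auc_score

def detection_auc(pool, k_neighbours=20):
    """pool: iterable of (text, label) pairs with label 1 = machine-generated."""
    labels, scores = [], []
    for text, label in pool:
        neighbours = [perturb_once(text) for _ in range(k_neighbours)]
        labels.append(label)
        scores.append(curvature(text, neighbours))
    return roc_auc_score(labels, scores)
```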
Evaluation strategy.The results we report in the paper fall into two main categories: (1) using a model to detect its own generations, which is the main goal of Mitchell et al. (2023) as well. In this setup, the target and detector models are the same, we call this _self-detection_. (2) using a model different from the generator of the text to detect the generations. In this setup, what we are basically doing is acting as if a surrogate model has generated the text. In other words, we want to see how well a model would claim another model's generation as its own. Here, the target and detector models are not the same. We call this _cross-detection_. This second setup represents the black-box case where we not only do not have full access to the target model that generated the text, we also do not know what model it was or what architecture/size it had, so we are trying to find the best **universal detector** that would correctly classify it.
## 3 Experimental Setup
### Models
We want to experiment with a wide range of models, with different architectures, parameter counts and training datasets, therefore we use the following model families in our experiments: Facebook's OPT (we use the 125M, 350M, 1.3B, and 6.7B models), EleutherAI's GPT-J, GPTNeo and Pythia (Biderman et al., 2023) (we use GPTNeo-125M, GPTNeo-1.3B, GPTNeo-2.7B, GPTJ-6B and Pythia models ranging from 70M to 2.8B parameters), and OpenAI's GPT models (distilGPT, GPT2-Small, GPT2-Medium, GPT2-Large, GPT2-XL, GPT-3 and ChatGPT).
We also have experiments where we use partially trained models as detectors. For those experiments, we only use the Pythia models as they are the only ones with available, open-source partially trained checkpoints. For each Pythia model, there is also a de-duplicated version available, where the model is trained on the de-duplicated version of the data, as opposed to the original dataset. All the models we use are obtained from HuggingFace (Wolf et al., 2019).
### Dataset
Evaluation dataset.We follow Mitchell et al. (2023)'s methodology for pre-processing and feeding the data. We use a subsample of the SQuAD dataset (Rajpurkar et al., 2016), where the original dataset sequences are used as the human-written text in the target sequence pool. We then use the first \(20\) tokens of each human-written sequence as a prompt, and feed this to the target model, and have it generate completions for it. We then use this mix of generations and human-written text to create the target pool for which we do the detection. In all cases, following the methodology from Mitchell et al. (2023), our pool consists of \(300\) human-written target samples, and \(300\) machine-generated samples, so the overall pool size is \(600\).
Pre-training datasets for the generative models. The EleutherAI and Facebook models (GPTJ, GPT-Neo, Pythia and OPT families) are all trained on the Pile dataset (Gao et al., 2020), a curated collection of \(22\) English language datasets (consisting of web-crawled data, academic articles, dialogues, etc.). As mentioned above, there are two versions of each Pythia model (Biderman et al., 2023): one is trained on the Pile, the other on the de-duplicated Pile. The de-duplicated Pile is approximately 207B tokens in size, compared to the original Pile which contains 300B tokens. There is limited information on and access to the training data of the _OpenAI_ models. The GPT-2 family is reportedly trained on the WebText dataset, GPT-3 is trained on a combination of the Common Crawl 1, WebText2, books and Wikipedia, and no information has been released about the training data of ChatGPT.
Footnote 1: [https://commoncrawl.org](https://commoncrawl.org)
## 4 Does cross-detection work?
As mentioned before, the main goal of our paper is to study ways in which machine-generated text could
be distinguished from human-written text, without access to any auxiliary information about the model that generated the text. To this end, we conduct an extensive set of experiments where we use \(23\) models with different sizes and architectures as detectors of text generated by \(15\) other models. We also experiment with using partially trained checkpoints of the detector models, to see how the detection power of the models changes as the training progresses.
Our main finding is that _cross detection can perform as well as self-detection, or come very close to it._ Figures 4 and 7 show how close each detector comes, in terms of AUC, to self-detection. We can see that on average, OPT-125M is the best fully trained universal cross-detector, showing on average \(0.07\) lower AUC, compared to self-detection. If we look at partially trained detector models, however, we see that the Pythia-160M comes as close as \(0.05\) AUC points, with its \(5k\), \(10k\) and \(50k\) step trained models (the fully trained model is trained for \(143k\) steps). These models seem to even _outperform_ self-detection in some cases, for example for GPTJ-6B. In the rest of this section we further elaborate on these results and draw connections between model size and training, and detection power.
### Smaller Models Are Better Detectors
In this section we aim to find patterns in the cross-detection power of different models, and see whether it correlates with model family, size, and training set. To this end, we use \(23\) different models with different parameter counts, ranging from \(70M\) to \(6.7B\), to detect machine-generated text from all the models listed in Section 3.1.
Figure 3 shows the results for this experiment, where the rows are the generative models (sizing up from bottom row to top) and the columns show the detector models (sizing up from right to left). So each cell shows the detection power (AUC) of the given detector model (column) on text generated from the generative model (row). The last row is the mean, which is an overall metric of how good a detector that model is. Figure 4 shows how cross-detection fares against self-detection, and it is missing the ChatGPT and GPT-3 rows as we do not have self-detection results for them (since we have no access to their loss, or the loss is behind a paywall).
For both plots, we see that the bottom left has the lowest values, showing that **larger models are not good at detecting machine generated text from other models**, and they are particularly bad at it for detecting small model generations. We can also see that **smaller models are much better detectors**, as the right side of the graph has much higher AUC values.
Another observation is the correlations between the **dataset** and **model architecture** of the generative and detector models. As the heatmap shows, models from the same _architecture family_ and trained on the same/overlapping _dataset_ are better at detecting their own text, compared to models from a different family. For instance, for detecting text generated by OPT-6.7B the other models from the OPT family are the best cross-detectors, with AUCs ranging from 0.89-0.87 (OPT-6.7B self-detects with AUC 0.91). The next best cross-detector is the smallest GPTNeo-125M with AUC 0.86. However, the OpenAI GPT2 model of the same size has a lower AUC of 0.84 (and overall the GPT2 family has the lowest cross-detection AUC on OPT), which we hypothesize is due to the larger gap in the training data, as the OPT and GPTNeo/GPTJ models are all trained on the Pile dataset, but GPT2 is trained on the Webtext. All in all, the difference due to the dataset/architecture differences is small as most of the dataset for all these models is comprised of web-crawled data, showing that cross-detection can be effective, regardless of how much information we have about the target model, and how accessible similar models are.
We have also provided an overall summary of the heatmaps in Figure 2, where we have presented the numbers from the best overall detector with mean AUC of 0.92 (OPT-125M) and the biggest model of the same family, OPT-6.7B with average AUC of 0.46. One noteworthy observation is that OPT-125M
Figure 2: Summary of the cross-detection area under the ROC curve (AUC) results for a selection of generative (the \(4\) models over the X axis) and detector (OPT-125M and OPT-6.7B) models. We can see that the smaller OPT model is a better universal cross-detector. Full results are shown in Figure 3.
can detect generations from models like GPT3 and ChatGPT with relatively high AUC (0.81), whereas if we followed the intuitive approach of picking another large, "similar" model and used OPT-6.7B, we would get AUCs of 0.67 and 0.58 for these models, respectively, both close to random (0.5).
We hypothesize that the reason behind large models being poor detectors of text generated by other models (especially smaller ones), is that larger models have a more refined taste, therefore they don't attribute text generated by other models as their own generations. Smaller models, however, attribute any machine-generated text as their own, since they have a less specific taste and are looser fitting models. We discuss this further in Section 5.
### Partially-Trained Models are Better Detectors
Our approach in this section is very similar to the previous one, except here we aim to find correlations between how far along in the training process a model is, and its cross-detection power. To this end, we take different training checkpoints of the Pythia models (Biderman et al., 2023) at different steps (steps \(1000\), \(5000\), \(10000\), \(50000\), \(100000\) and \(143000\)) with different sizes (2.8B, 410M, and 70M), and use them as detectors of generations from the 4 target models. Figure 5 shows the results for this experiment (Figures 6 and 7 show entire heatmaps of this experiment, similar to what was presented in the previous section). For each model we can see that **the final checkpoint is consistently the worst one in terms of machine-generated text detection**, and it is one of the middle checkpoints that has the best performance.
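For reference, intermediate Pythia checkpoints are published as separate revisions on the HuggingFace Hub, so a partially trained detector can be loaded roughly as follows; the checkpoint name and revision string follow the Pythia repository's documented `step{N}` convention, and should be treated as assumptions.

```python
# Load a partially-trained Pythia checkpoint to use as a surrogate detector.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/pythia-160m"          # or the "-deduped" variant
step = 50000                              # intermediate training step

tok = AutoTokenizer.from_pretrained(name)
det = AutoModelForCausalLM.from_pretrained(name, revision=f"step{step}").eval()
# `det` can now replace the fully trained detector in the curvature test.
```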
Our hypothesis for this is similar to that of Section 4: we believe that partially trained models have not yet fit the training data tightly, so they over-claim other models' generations as their own, whereas the longer a model is trained, the narrower the set of sequences it ranks highly as its own becomes.
## 5 Analysis: Curvature and Log-likelihood Breakdown
To help shed light on why smaller models are better detectors and larger models are not good at detecting machine-generated text, we plot a breakdown of the curvature metric (Section 2) and log-likelihood values for the best universal detector (OPT-125M), a medium-sized detector of the same family (OPT-350M) and the largest one from the same family (OPT-6.7B) from Section 4, shown in Figure 8. The Y axis is the curvature/log likelihood of the target generations (from the \(15\) models from Section 3.1) under the detector models (OPT-125M, 350M or 6.7B). The X axis is the number of parameters for the generative models (we do not know how many parameters ChatGPT has, so we plotted it as the last dot in the plots, after GPT-3 with 540B parameters). Figure 9 plots the AUCs for detection under the three models, for the \(15\) generative models.
Figure 3: AUC heatmap for cross-detection, where the rows are generative models and columns are the surrogate detector models, both sorted by model size. We can see that smaller models are better detectors and larger models are the worst models in terms of detection power.
We can see that for the smaller detector model (Figures 8a and 8d), the mean curvature and log-likelihood values for the generated text are consistently higher than those for the human-written text. However, for the larger model (Figures 8c and 8f), the curvature and log-likelihood values for the machine-generated text are in most cases smaller than or around the same value as for the human-written text. The curvature and log-likelihood values for human
Figure 4: AUC difference between self-detection and cross-detection heatmap (to better see how close cross-detection comes to self detection), where the rows are generative models and columns are the surrogate detector models, both sorted by model size. This plot is basically Figure 3, where each cell in a row is subtracted by the self-detection AUC for that row.
Figure 5: Summary of the results for cross-detection power of different detector models trained for different number of steps. Each subfigure shows a different detector model, and the X axis shows the training step for the checkpoint used as a detector. We only show results for 4 generative models here, using only 3 detector models. The results for all \(15\) generative models are shown in Figure 6.
written text for both graphs are stable since the text is the same and doesn't depend on the target model.
We can also see that overall the curvature and likelihood values for the larger model are higher,
Figure 6: AUC heatmap for cross-detection, where the rows are generative models and columns are the surrogate detector models from the Pythia family, at different training step checkpoints (\(1k\), \(5k\), \(10k\), \(50k\), \(100k\) and \(143k\)), both sorted by model size. We can see that partially trained models are better detectors.
Figure 7: AUC difference between self-detection and cross-detection heatmap (to better see how close cross-detection comes to self detection), here the rows are generative models and columns are the surrogate detector models from the Pythia family, at different training step checkpoints (\(1k\), \(5k\), \(10k\), \(50k\), \(100k\) and \(143k\)), both sorted by model size. This plot is basically Figure 6, where each cell in a row is subtracted by the self-detection AUC for that row.
especially for the original text, than those of the smaller model, and the values for text generated by the other models have lower curvature and likelihood value. This shows that the larger model places higher likelihood on the human written text and fits it better. The smaller model, however, assigns lower curvature and likelihood to the human-written text compared to generations by a large gap, and the assigned values are overall lower than those of the large model.
Broadly, we observe that all detectors behave similarly, in that **all models respond similarly to machine-generated text from other models, so long as the other model is the same size or bigger.** In other words, they place high likelihood on text from larger models. However, for models smaller than themselves, they place lower likelihood and curvature. As such, smaller models are **better universal detectors**, as the set of sequences they assign higher likelihood and curvature to is larger than it is for large models, and this curvature is much higher than the curvature assigned to the human-written text.
Another thing we should keep in mind is that our estimation of "curvature" hinges upon generating numerous perturbations (neighbors) and comparing their loss with that of a target point; therefore, if these perturbed neighbors are not in fact "neighbors", i.e. they are far from the target point, our measure of curvature is not accurate (the closer the perturbed points are, the more accurate the estimate of curvature we achieve). The spikes in the sub-figures of Figure 8 correspond to the detector model detecting its own text.
## 6 Ablating The Perturbation Generation
The perturbation generation method directly impacts the size and shape of the neighborhood we create around a target point, and use to determine the shape of the loss function around it and test its local optimality. **If the generated perturbations are too far from a target point, they will have lower likelihood and create inaccurately high curvature estimates**.
As mentioned in Section 4, one hypothesis we have for why small models are better machine-generated text detectors is that they have flatter, looser fitting loss functions whereas larger models have higher curvatures, are sharper and more compressed. As such, for better analysis of the shape of a function around a target point on a larger model, one needs to generate
Figure 8: Breakdown of curvature and log likelihood values (mean and standard deviation) for the best universal detector (OPT-125M), a medium sized detector (OPT-350M) and a larger detector from the same family (OPT-6.7B), to see the difference detection powers.
Figure 9: AUC of the three cross-detectors from Figure 8
perturbations closer to that point to magnify the local neighborhood where we test optimality, since we hypothesize that the function is more spiked and changes fast, as opposed to a smaller model that is smoother. To further explore this hypothesis, we look into different perturbation generation methods to change the size of the neighborhood we look at, and see how the curvature and detection power of the models change.
We investigate two different methods for changing the distance of the generated perturbations: (1) we change the mask-filling model size, experimenting with _T5-Small_, _T5-Large_ and _T5-3B_ (Wolf et al., 2019; Raffel et al., 2020) to test the intuition that larger mask-filling models generate semantically closer neighbors than smaller ones. (2) We change the percentage of the tokens that get masked and replaced by the mask-filling model, as the more tokens we mask and replace, the farther the generated perturbations would be.
### Mask Filling Model
Figure 10 shows the curvature numbers for each model trying to **detect its own** generations, so for each model the generator is also the detector. We experiment with three perturbation-generating models, with three different sizes: (1) T5-Small (\(60\) million parameters), (2) T5-Large (\(770\) million parameters), and (3) T5-3B (\(3\) billion parameters). The intuition behind using three model sizes is to see the effect of having a better replacement model on the measured curvatures and the detection power of the detector models.
We can see that as the masking model gets smaller (going from the top to the bottom subfigures), the overall curvature values for both human-written and machine-generated text increase (going from a maximum of 0.2 in Figure 10(a) to a maximum of 0.6 in Figure 10(c)), and the two sets of texts become less distinguishable. T5-Small produces low-quality (low-fluency) neighbors that are assigned lower likelihoods by the detector model, resulting in high curvature numbers for both human- and machine-generated text, making them indistinguishable. As we improve the mask-filling model, however, the generated neighbors become of higher quality (and semantically closer to the target point), thereby creating a more accurate estimate of the curvature and providing better distinguishability, as shown by the AUC numbers in Figure 10(d).
### Masking Percentage
Figure 11 shows the results for the experiment where we change the percentage of tokens that are masked, to produce the neighbors. In all previous experiments, we used \(15\%\) masking with mask span length of \(2\) tokens following the experimental setup in Mitchell et al. (2023).
In this section, however, we change the percentage of the masked tokens (and we set the masking to be contiguous) to see how it affects the curvature mean and standard deviation values, and the AUCs. We can see that as the masking percentage decreases (from \(90\%\) to \(2\%\)), the AUCs and the self-detection power of models increase rather consistently. When we go
Figure 10: The effect of changing the perturbation (masking) model on curvature values and self-detection power of different models with different sizes (AUC).
to \(1\%\), however, we see the AUC drop. If we look at Figure 11(e), which depicts the curvature measures for the \(1\%\) masking, we see that the curvatures of machine-generated and human-written text overlap, which we hypothesize is because the perturbations are all too close to the original sequences, and as such do not define the neighborhood well.
It is worth noting that the slight discrepancy between the results for \(15\%\) masking in this section and the previous one arises because there the mask span length was \(2\) (the optimal span length found by us and by Mitchell et al.), so the masked portion of the sequence is not contiguous. In this experiment, however, to have better control, we set the mask span length to the maximum possible (full sequence length), so we get contiguous masking.
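A hypothetical helper for this contiguous-masking ablation could look as follows, reusing the T5 filling step sketched earlier; the function name and interface are ours, not the paper's.

```python
# Mask one contiguous block covering `frac` of the words, to be filled by T5.
import random

def mask_contiguous(text, frac):
    words = text.split()
    n = max(1, int(frac * len(words)))
    start = random.randrange(0, len(words) - n + 1)
    return " ".join(words[:start] + ["<extra_id_0>"] + words[start + n:])
```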
## 7 Related Work
The problem of machine-generated text detection has already been studied for multiple years using a variety of different approaches (Ippolito et al., 2020; Jawahar et al., 2020; Uchendu et al., 2020, 2021): Both Gehrmann et al. (2019) and Dugan et al. (2022) have found that humans generally struggle to distinguish between human- and machine-generated text, hereby motivating the development of automatic solutions. Among those, some methods aim to detect machine-generated text by training a classifier in a supervised manner (Bakhtin et al., 2019; Uchendu et al., 2020), while others perform detection in a zero-shot manner (Solaiman et al., 2019; Ippolito et al., 2020). There is also a line of work that relies on bot detection through question answering (Wang et al., 2023; Chew and Baird, 2003), which is outside the scope of this paper.
Most recently, Mitchell et al. (2023) introduced the zero-shot method DetectGPT, which is based on the hypothesis that texts generated from a LLM lie on local maxima, and therefore negative curvature, of the model's probability distribution. Thus, minor rewrites of machine-generated texts, which are in practice obtained through word replacements suggested by a separate model such as T5 (Raffel et al., 2020), are consistently assigned lower probabilities than the original text, whereas rewrites of human-written texts can have both higher or lower probabilities assigned to them.
Beyond the approaches discussed in this paper, other strategies have been proposed to enable the detection of machine-generated text in the wild. Particularly through efforts on the side of the LLM provider, more powerful detection methods can be devised. One such method is watermarking, which injects algorithmically detectable patterns into the released text while ideally preserving the quality and diversity of language model outputs. Watermarks for natural language have already been proposed by Atallah et al. (2001) and have since been adapted for outputs of neural language models (Fang et al., 2017; Ziegler et al., 2019). Notable recent attempts for transformer based language models include work by Abdelnabi and Fritz (2021), who propose an adversarial watermarking transformer (AWT). While this watermarking method is dependent on the model architecture, Kirchenbauer et al. (2023) propose a watermark that can be applied to texts generated by any
Figure 11: The effect of changing the masking percentage on curvature values and self-detection power of different models with different sizes (AUC).
common autoregressive language model. As a strategy more reliable than watermarking, Krishna et al. (2023) suggest a retrieval-based approach: By storing all model outputs in a database, LLM providers can verify whether a given text was previously generated by their language model. In practice, this would however require storage of large amounts of data and highly efficient retrieval techniques in order to provide fast responses as the number of generated texts grows.
Evasion of DetectorsAs detecting machine-generated text is becoming a topic of high interest, researchers are also aiming to study the limits of machine-generated text detectors. The broad literature of text-based adversarial attacks demonstrates that text classifiers such as e-mail spam filters, and therefore most likely also machine-generated text detectors, can be fooled using minor perturbations that largely preserve fluency and semantics of the original texts (Alzantot et al., 2018; Jin et al., 2020; Li et al., 2020, 2021). Recent work has also studied attacks specifically designed to fool machine-generated text detectors (Sadasivan et al., 2023) and found that classifiers can be evaded through simple paraphrases and many watermarking techniques can be recreated by humans. This, along with the outlook that language models will most likely become more powerful and human-like, raises the question if it will ever be possible to detect machine-generated text reliably.
## 8 Conclusion
As LLMs are becoming more ubiquitous and embedded in different user-facing services, it is important to be able to distinguish between human written text and machine-generated text, so as to be able to verify the authenticity of news articles, product reviews, etc. As such, we set out to explore the possibilities of using existing models to detect generations from unknown sources, and distinguish them from human written text. We find that when using zero-shot detection methods that rely on local optimality, smaller models are overall better at detecting generations, and larger models are poor detectors. We hypothesize that this has to do with the shape of the loss function for these different types of models, and how well they fit their training data. However, further analysis of the loss landscape is needed to fully verify this claim.
|
2305.07874
|
Identification of molecular clouds in emission maps: a comparison
between methods in the \ce{^{13}CO}/\ce{C^{18}O} ($J=3-2$) Heterodyne Inner
Milky Way Plane Survey
|
The growing range of automated algorithms for the identification of molecular
clouds and clumps in large observational datasets has prompted the need for the
direct comparison of these procedures. However, these methods are complex and
testing for biases is often problematic: only a few of them have been applied
to the same data set or calibrated against a common standard. We compare the
Fellwalker method, a widely used watershed algorithm, to the more recent
Spectral Clustering for Interstellar Molecular Emission Segmentation (SCIMES).
SCIMES overcomes sensitivity and resolution biases that plague many
friends-of-friends algorithms by recasting cloud segmentation as a clustering
problem. Considering the \ce{^{13}CO}/\ce{C^{18}O} ($J = 3 - 2$) Heterodyne
Inner Milky Way Plane Survey (CHIMPS) and the CO High-Resolution Survey
(COHRS), we investigate how these two different approaches influence the final
cloud decomposition. Although the two methods produce largely similar
statistical results over the CHIMPS dataset, FW appears prone to
over-segmentation, especially in crowded fields where gas envelopes around
dense cores are identified as adjacent, distinct objects. FW catalogue also
includes a number of fragmented clouds that appear as different objects in a
line-of-sight projection. In addition, cross-correlating the physical
properties of individual sources between catalogues is complicated by different
definitions, numerical implementations, and design choices within each method,
which make it very difficult to establish a one-to-one correspondence between
the sources.
|
Raffaele Rani, Toby J. T. Moore, David J. Eden, Andrew J. Rigby, Ana Duarte-Cabral, Yueh-Ning Lee
|
2023-05-13T09:18:00Z
|
http://arxiv.org/abs/2305.07874v1
|
# Identification of molecular clouds in emission maps: a comparison between methods in the \({}^{13}\)CO/C\({}^{18}\)O (\(J=3-2\)) Heterodyne Inner Milky Way Plane Survey
###### Abstract
The growing range of automated algorithms for the identification of molecular clouds and clumps in large observational datasets has prompted the need for the direct comparison of these procedures. However, these methods are complex and testing for biases is often problematic: only a few of them have been applied to the same data set or calibrated against a common standard. We compare the Fellwalker method, a widely used watershed algorithm, to the more recent Spectral Clustering for Interstellar Molecular Emission Segmentation (SCIMES). SCIMES overcomes sensitivity and resolution biases that plague many friends-of-friends algorithms by recasting cloud segmentation as a clustering problem. Considering the \({}^{13}\)CO/C\({}^{18}\)O (\(J=3-2\)) Heterodyne Inner Milky Way Plane Survey (CHIMPS) and the CO High-Resolution Survey (COHRS), we investigate how these two different approaches influence the final cloud decomposition. Although the two methods produce largely similar statistical results over the CHIMPS dataset, FW appears prone to over-segmentation, especially in crowded fields where gas envelopes around dense cores are identified as adjacent, distinct objects. FW catalogue also includes a number of fragmented clouds that appear as different objects in a line-of-sight projection. In addition, cross-correlating the physical properties of individual sources between catalogues is complicated by different definitions, numerical implementations, and design choices within each method, which make it very difficult to establish a one-to-one correspondence between the sources.
keywords: molecular data - methods: data analysis - surveys - ISM: clouds - submillimetre: ISM
## 1 Introduction
The distribution and properties of gas within molecular clouds regulate, in part, the characteristics of newly formed stars, their numbers and masses, and the location of star-forming sites. The connection between the features of molecular gas and both the initial mass function and formation rate of new stellar populations have prompted a wide range of theoretical and observational studies geared towards the characterisation of the structure of molecular clouds. Multi-tracer surveys have revealed the hierarchical nature of these structures, showing how high-density, small-scale features are always nested within more rarefied, larger envelopes (Blitz and Stark, 1986; Lada, 1992). This structural hierarchy is, however, a non-trivial one: at any scale, there appear to be more high-density and compact 'clumps' than larger and less dense structures. The densest clumps in a cloud's hierarchy are compact cores, the seeds of star formation. In these regions, over scales of about 0.1 pc, the turbulence in the cloud often becomes dominated by thermal motions (Goodman et al., 1998; Tafalla et al., 2004; Lada et al., 2008). The physical conditions inside the cores determine the mechanisms involved in the conversion of molecular gas into stars (di Francesco et al., 2007; Ward-Thompson et al., 2007; Bigiel et al., 2008; Schruba et al., 2011; Urquhart et al., 2018). At the bottom of the density hierarchy, lie the low-density envelopes that surround the denser regions.
The natural clumpiness that characterises the molecular phase of the interstellar medium on different scales has led to the cataloguing of molecular emission by dividing the interstellar gas into independent, discrete entities. Although this separation provides a useful theoretical distinction between giant molecular clouds and the diffuse multi-phase interstellar medium, it is still unclear whether the density hierarchy continues past this chemical boundary (Blitz et al., 2007) extending into the diffuse ISM (Ballesteros-Paredes et al., 1999; Hartmann et al., 2001). In this picture, the molecular phase of the ISM would not be enough to define the bottom of the density hierarchy needed to treat a molecular cloud as an independent, separate entity.
Structural patterns in molecular emission have been investigated through a wide range of analysis methods. Each technique focuses on the analysis of a different feature of the gas. Fractal analysis (Stutzki et al., 1998), the study of power spectra (Lazarian and Pogosyan, 2000) and the structure function (Heyer and Brunt, 2004) have aimed to characterise turbulence in clouds (Brunt and Federrath, 2014; Brunt and Federrath, 2014), and clump identification algorithms (Stutzki and Gusten, 1990; Berry, 2015; Colombo et al., 2015) have been used to probe geometry, structure and substructure, e.g., the density hierarchy. In general, statistical approaches to the analysis of molecular-line data either aim to provide a statistical description of the emission over the entire dataset or a division of the emission into physically relevant features. The latter approach is then followed by the analysis of the characteristics of the resulting population of sources. Statistical approaches include fractal analysis (Elmegreen and Falgarone, 1996; Stutzki et al., 1998; Elmegreen, 2002; Sanchez et al., 2005; Lee et al., 2016), \(\Delta\)-variance (Stutzki et al., 1998; Klessen and Glover, 2015), correlation functions (Houlahan, 1990; Rosolowsky et al., 1999; Lazarian and Pogosyan, 2000; Padoan et al., 2003) and analysis of the two-dimensional power spectrum (Schlegel and Finkbeiner, 1998; Pingel et al., 2018; Combes, 2012; Feddersen et al., 2019) and principal components (Heyer and Brunt, 2004). These techniques provide the overall statistical properties of the sample and are thus best suited for the comparison of measurements between different datasets. On the other hand, clump identification (image segmentation) is preferred for the study of physically important substructures embedded in the emission. In position-position velocity (PPV) data sets, giant molecular clouds (GMCs) and their substructure are identified as discrete features (sets of connected voxels) with emission (brightness temperature or column densities) above a specified threshold (Scoville et al., 1987; Solomon et al., 1987).
Molecular-cloud recognition in PPV data sets is performed with a variety of automated algorithms. These methods are commonly designed to operate on large data sets and different levels of blending between structures. Two different strategies for the identification of molecular emission are frequently employed in the construction of GMC identification software packages: the iterative fitting and subtraction of a given model to the molecular emission (Stutzki and Gusten, 1990; Kramer et al., 1998) and the friends-of-friends paradigm that connects pixels based on their and their neighbours' emission values (Williams et al., 1994; Rosolowsky and Leroy, 2006). The latter approach is often applied as a watershed formulation in which single objects are identified as partitions of the data corresponding to sets of paths of steepest descent around signal peaks. This strategy thus recasts GMC recognition as an image segmentation problem (Pal and Pal, 1993). Contouring in three-dimensional images, however, remains a complex task. Complications arise from the difficult deblending of internal structures in crowded regions as the boundaries that separate star-forming clouds from the surrounding multi-phase ISM are often unclear (see Ballesteros-Paredes et al., 1999; Hartmann et al., 2001; Blitz et al., 2007). The efficacy of GMC recognition is thus affected by survey-specific biases arising from spatial and spectral resolution and the sensitivity in molecular-line observations of GMCs (Rosolowsky and Leroy, 2006; Pineda et al., 2009; Wong et al., 2011). Cloud recognition usually worsens in regions characterised by complex molecular environments and crowded velocity fields (such as the Inner Milky Way), where resolution plays a crucial role in the identification of structure (Hughes et al., 2013). At low resolution, segmentation algorithms suffer from the blending of emission from unrelated clouds (Colombo et al., 2014), while high resolutions cause cloud substructures to be identified as individual clouds. In particular, friends-of-friends methods are especially sensitive to resolution. In clumpy environments, the objects naturally selected by this type of algorithm have the scale of a few resolution elements (Rosolowsky and Leroy, 2006).
Recently, alternative segmentation methods based on the physical properties of molecular gas have been proposed, most notably gravitational acceleration mapping methods (Li et al., 2015) and dendrograms (Rosolowsky et al., 2008). Dendrograms are particularly well-suited to encoding the essential features of the hierarchical structure of the isosurfaces of molecular-line data cubes: they represent the changing topology of the isosurfaces as a function of contour level. This growing range of automated cloud-identifying paradigms and their implementations has prompted the need for a direct comparison of the methods. However, the algorithms are often complex, and testing for biases is not straightforward as only a few of them have been applied to the same data set or calibrated against a common standard (Lada and Dame, 2020).
Although the performance of several popular clump-finding algorithms has recently been compared on artificial emission maps (Li et al., 2020), cross-correlating the physical properties of individual sources between several catalogues is a non-trivial task. From this viewpoint, it is thus useful to apply different methodologies to identify and extract GMCs from the same survey. In this study, the Spectral Clustering for Interstellar Molecular Emission Segmentation (SCIMES) algorithm is applied to identify GMCs in the \({}^{13}\)CO data set of the \({}^{13}\)CO/C\({}^{18}\)O (\(J=3-2\)) Heterodyne Inner Milky Way Plane Survey (CHIMPS). To directly compare this segmentation to the results obtained by Rigby et al. (2019) with the FellWalker (FW) algorithm, the dendrogram-defining parameters are chosen to match the FW input configuration. SCIMES makes use of dendrograms to encode the hierarchical structure of molecular clouds and then employs spectral clustering to produce dendrogram cuts corresponding to the individual clouds (Colombo et al., 2015), whereas FW is a variation of the watershed paradigm, based on the paths of steepest ascent (Berry, 2015). To extend the comparison to the properties of a different tracer and to show the effect of isotopologue choice, a SCIMES segmentation of the \({}^{12}\)CO(\(3-2\)) emission from the CO High-Resolution Survey (COHRS; Dempsey et al., 2013) is considered over the regions covered by CHIMPS.
We present an empirical comparison between the FW and SCIMES algorithms on a large sample of clouds within the \({}^{13}\)CO/C\({}^{18}\)O (\(J=3-2\)) Heterodyne Inner Milky Way Plane Survey (CHIMPS). To do so, we construct a novel catalogue of CHIMPS sources obtained through the application of SCIMES. The catalogue includes a number of measured and calculated cloud properties chosen to match those defined in Rigby et al. (2019).
In Section 2, we briefly describe the CHIMPS data used in our analysis. A description of a SCIMES source extraction that matches the FW parameterisation is provided in Section 3, and the subsequent
distance assignments in Section 4. Section 5 compares the FW and SCIMES segmentations, Section 6 presents a statistical comparison of the salient physical properties of the sources in the SCIMES and FW catalogues, and Section 7 summarises and discusses the results found in this study.
## 2 Data
The \({}^{13}\)CO/C\({}^{18}\)O (\(J=3-2\)) Heterodyne Inner Milky Way Plane Survey (CHIMPS) is a spectral survey of the \(J=3-2\) rotational transitions of \({}^{13}\)CO at 330.587 GHz and C\({}^{18}\)O at 329.331 GHz. The survey covers \(\sim\)19 square degrees of the Galactic plane, spanning longitudes \(l\) between 27\(\aas@@fstack{\circ}\)5 and 46\(\aas@@fstack{\circ}\)4 and latitudes \(|\,b\,|<0\aas@@fstack{\circ}\)5, with an angular resolution of 15 arcsec. The observations were made over a period of 8 semesters (beginning in the spring of 2010) at the 15-m James Clerk Maxwell Telescope (JCMT) in Hawaii. Both isotopologues were observed concurrently (Buckle et al., 2009) using the Heterodyne Array Receiver Programme (HARP) together with the Auto-Correlation Spectral Imaging System (ACSIS). The data obtained are organised in position-position-velocity (PPV) cubes with velocities binned in 0.5 km s\({}^{-1}\) channels and a bandwidth of 200 km s\({}^{-1}\).
The Galactic velocity gradient associated with the spiral arms (in the kinematic local standard of rest, LSRK) is matched by shifting the velocity range with increasing Galactic longitude, as observed in previous molecular Galactic plane studies (e.g. Dame et al., 2001). Varying the range from \(-50<v_{\rm LSR}<150\) km s\({}^{-1}\) at 28\({}^{\circ}\) to \(-75<v_{\rm LSR}<125\) km s\({}^{-1}\) at 46\(\aas@@fstack{\circ}\)4, we recover the expected velocities of objects observed in the Scutum-Centaurus tangent and the Sagittarius, Perseus and Norma arms.
The \({}^{13}\)CO survey has mean rms sensitivities of \(\sigma(T_{\rm A}^{*})\approx 0.6\) K per velocity channel, while for C\({}^{18}\)O, \(\sigma(T_{\rm A}^{*})\approx 0.7\) K, where \(T_{\rm A}^{*}\) is the antenna temperature corrected for atmospheric attenuation, ohmic losses inside the instrument, spillover, and rearward scattering (Rigby et al., 2016). These values, however, fluctuate across the survey region depending on both weather conditions and the varying numbers of working receptors on HARP. In \({}^{13}\)CO (3-2), the rms of individual cubes ranges between \(\sigma(T_{\rm A}^{*})=0.37\) K and 1.51 K per channel, and between \(\sigma(T_{\rm A}^{*})=0.43\) K and 1.77 K per channel in C\({}^{18}\)O (3-2) (Rigby et al., 2016).
Column density maps are necessary for the estimation of the cloud masses (see Section 6.2). The total column densities throughout the CHIMPS survey were calculated from the excitation temperature and the optical depth of the CO emission. This calculation is outlined in Rigby et al. (2019). Their method is a variation of the standard calculation of the excitation temperature and optical depth (Wilson et al., 2013) and uses the \({}^{13}\)CO(\(J=3-2\)) emission at each position (\(l,b,v\)) in the datacube on a voxel-by-voxel basis under the assumption of local thermodynamic equilibrium. The major advantage of this strategy over the analysis of velocity-integrated properties is that any property derived from the excitation temperature and optical depth is independent of source extraction and image segmentation algorithms. However, individual voxel information does not account for the attenuation of the emission due to self-absorption along the line of sight. Rigby et al. (2019) performed a first-order adjustment of the method with respect to the \({}^{12}\)CO(\(3-2\)) from which the excitation temperature of \({}^{13}\)CO(\(3-2\)) is derived and did not find evidence for significant self-absorption in \({}^{13}\)CO(\(3-2\)) across the entire CHIMPS survey. The total column density at each position is determined from the column density within a specific energy level by multiplication with an appropriate partition function representing the sum over all states (Rigby et al., 2019).
COHRS mapped the \({}^{12}\)CO (\(3-2\)) emission in the Inner Milky Way plane, covering longitudes \(10\aas@@fstack{\circ}25<l<17\aas@@fstack{\circ}5\) with latitudes \(|\,b\,|\leq 0\aas@@fstack{\circ}25\) and \(17\aas@@fstack{\circ}5<l<50\aas@@fstack{\circ}25\) with \(|\,b\,|\leq 0\aas@@fstack{\circ}25\). This particular region was selected to match a set of important surveys, among which are CHIMPS, the Galactic Ring Survey (GRS; Jackson et al., 2006), the FOREST Unbiased Galactic plane Imaging survey with the Nobeyama 45-m telescope (FUGIN; Umemoto et al., 2017), the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE; Churchwell et al., 2009), the Bolocam Galactic Plane Survey (BGPS; Aguirre et al., 2011), and the _Herschel_ Infrared Galactic Plane Survey (Hi-GAL; Molinari et al., 2016). The observations were also performed at JCMT with HARP at 345.786 GHz, with ACSIS set at a 1-GHz bandwidth yielding a frequency resolution of 0.488 MHz (0.42 km s\({}^{-1}\)). The survey covers a velocity range between \(-30\) and 155 km s\({}^{-1}\), with a spectral resolution of 1 km s\({}^{-1}\) and angular resolution of 16.6 arcsec (FWHM). The COHRS data (first release) are publicly available1. We consider a sub-sample of the full set of COHRS sources by only considering those within the regions covered by CHIMPS.
Footnote 1: [http://dx.doi.org/10.11570/13.0002](http://dx.doi.org/10.11570/13.0002)
In this analysis of the difference between the FW and SCIMES extraction algorithm, we consider the \((J=3-2)\) emission from the reduced data in the 10 regions constituting the CHIMPS survey (Fig. 1). To directly compare the new SCIMES segmentation to the results obtained with the FW algorithm by Rigby et al. (2019), the dendrogram-defining parameters are chosen to match the FW input configuration as closely as possible, as described in the next Section.
## 3 Source Extraction
We use the SCIMES algorithm, first introduced by Colombo et al. (2015, 2019), to decompose the \({}^{13}\)CO emission into individual molecular clouds (sources). SCIMES is a publicly available Python package that uses spectral clustering to identify single objects within a dendrogram that represents the hierarchical structure of the emission (Rosolowsky et al., 2008). The emission dendrogram is produced using the Python package for astronomical dendrograms (Astrodendro, Astropy Collaboration et al., 2013, 2018). In the framework of SCIMES, the leaves of the dendrogram are identified with the local maxima in the emission and the branches represent isosurfaces (contours in the PPV data) at different emission levels (they are structures containing other branches and leaves).
SCIMES uses similarity criteria to analyse a dendrogram by translating it into a weighted complete graph. In the associated graph, the vertices correspond to the leaves in the dendrogram and weights on the edges encode the affinity relationship between the leaves (larger values of the affinity represent the higher similarity between two vertices of the graph). The SCIMES algorithm then uses spectral clustering on the affinity matrix representing the graph to partition the graph into separate components. These clusters define a segmentation of the emission into individual clouds. This process partitions the graph into \(k\) regions, which coincide with the molecular emission features encoded by the dendrogram and consequently with the connected regions of the emission in PPV space. These 'molecular gas clusters' are labelled as clouds, clumps, or cores depending on the scale of the emission. As the SCIMES decomposition considers the natural transitions in the emission structure to segment PPV data and is robust across scales, it has the major advantage of being applicable to a variety of spatial dynamic ranges (Colombo et al., 2015).
Because of the variable weather conditions and the varying number of active receptors during the 4 years of observations, the original CHIMPS datacubes do not have a completely uniform sensitivity across the entire survey (Rigby et al., 2016). To avoid the loss of good signal-to-noise sources in regions of low background and to prevent high-noise regions from being incorrectly identified as clouds, we perform the source extraction on the signal-to-noise ratio (SNR) cubes instead of the brightness-temperature data. This approach was applied to continuum data in the JCMT Plane Survey (Moore et al., 2015; Eden et al., 2017), whose authors noted that it produced the best extraction results. We define the SCIMES parameters as multiples of the background \(\sigma_{\rm rms}\). For signal-to-noise cubes, \(\sigma_{\rm rms}=1\) by definition.
The reduced data are organised into 178 datacubes which are, in turn, mosaicked into 10 larger regions (Fig. 1), since the entire CHIMPS area is too large to be analysed as a single datacube. For each region, we set the SCIMES parameters to generate a dendrogram of the emission in which each branch is defined by an intensity change (min_delta) of \(5\sigma_{\rm rms}\) and contains at least three resolution elements worth of pixels (min_npix = 16). Any emission below \(3\sigma_{\rm rms}\) (min_val = \(3\sigma_{\rm rms}\)) is not considered. These specific values were chosen to match the corresponding FellWalker configuration parameters FellWalker.MinHeight, FellWalker.Noise, and FellWalker.MinPix (Berry, 2015) used by Rigby et al. (2019) for their CHIMPS extraction.
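To make the parameterisation concrete, the sketch below shows how such a dendrogram and clustering could be set up with Astrodendro and SCIMES. The file name, pixel and channel scales, and metadata keys are illustrative assumptions, and the SpectralCloudstering keyword names follow the SCIMES documentation and may differ between versions; this is a sketch, not the exact pipeline used here.

```python
from astropy import units as u
from astropy.io import fits
from astrodendro import Dendrogram, ppv_catalog
from scimes import SpectralCloudstering   # import path assumed from the SCIMES docs

# Signal-to-noise cube of one CHIMPS region, so that sigma_rms = 1 by construction
hdu = fits.open("chimps_region03_snr.fits")[0]   # hypothetical file name
snr_cube = hdu.data

# Dendrogram parameters matched to the FellWalker configuration of Rigby et al. (2019):
# min_value = 3 sigma_rms, min_delta = 5 sigma_rms, min_npix = 16 voxels
dend = Dendrogram.compute(snr_cube, min_value=3.0, min_delta=5.0, min_npix=16)

# Structure catalogue for the clustering step; the metadata values are assumptions
# (approximate CHIMPS pixel and channel sizes) and further keys (e.g. beam size)
# may be required by ppv_catalog
metadata = {"data_unit": u.dimensionless_unscaled,   # SNR, not brightness temperature
            "spatial_scale": 7.5 * u.arcsec,
            "velocity_scale": 0.5 * u.km / u.s}
cat = ppv_catalog(dend, metadata)

# Spectral clustering of the dendrogram into clouds using PPV volume and
# integrated-intensity ("luminosity") affinity matrices, as described in the text
# (options also exist to retain isolated leaves as single-leaf clouds, as done here)
dclust = SpectralCloudstering(dend, cat, hdu.header, criteria=["volume", "luminosity"])
clouds = dclust.clusters   # indices of the dendrogram structures identified as clouds
```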
However, the Astrodendro implementation that SCIMES uses to construct the emission dendrogram does not make a distinction between the spatial and spectral axes. Thus, some clouds that are unresolved in one dimension may still be included in the dendrogram. These sources are eliminated in a post-processing step. Since distance assignments to the dendrogram structures cannot be made before the full segmentation (see Section 4), we cannot generate the volume and luminosity affinity matrices required for spectral clustering from spatial volumes and intrinsic luminosities. Instead, we use PPV volumes and integrated intensity values.
Figure 1: Integrated intensity map (\(\int T_{A}^{*}dv\)) of CHIMPS (full survey). The colour bar shows the scaling in units of K km s\({}^{-1}\). The 10 regions into which the survey is divided are delimited by red lines. Orange shading denotes the overlapping areas between adjacent regions. Region numbers are printed above the map.
In addition, we retain single leaves that do not form clusters (since clusters are constituted by at least two objects) and the (sparse) clusters constituted by the intra-clustered leaves (Colombo et al., 2015), which are usually discarded as noise (Ester et al., 1996). This way the SCIMES algorithm behaves as a 'clump finder'2. Although this choice allows the segmentation to include sources that cannot strictly be defined as 'molecular gas clusters' (Colombo et al., 2015), these clouds are expected to match the clumps found in the BGPS (Aguirre et al., 2011).
Footnote 2: [https://scimes.readthedocs.io/en/latest/tutorial.html](https://scimes.readthedocs.io/en/latest/tutorial.html)
### Post-processing filter
To clean the catalogue of spurious sources and noise artefacts that are left after extraction, we apply an additional filter. This filter retains those clouds that extend for more than 3 voxels in each direction (spatial or spectral), which ensures that each cloud is fully resolved in every direction (the width of the beam being 2 pixels) while thin filaments are still retained. In addition, we remove a number of smaller clouds in contact with the edges of the regions and those voxels which lack a column density assignment. Although the segmented structures are mostly coherent, different velocity components may sometimes be blended into the same object in clouds associated with the border of the field of observation, since these sources do not present closed contours. Finally, to construct the final catalogue and its corresponding assignment mask, we apply a selection criterion to handle the clouds in overlapping areas between adjacent regions. This procedure is described below.
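A minimal sketch of such a filter is given below, assuming each cloud is available as a boolean PPV mask derived from the SCIMES assignment cube; the function names and threshold values are illustrative rather than the exact implementation.

```python
import numpy as np

def spans_enough_voxels(mask, min_extent=3):
    """True if the cloud extends over more than `min_extent` voxels
    along every axis (two spatial, one spectral) of the PPV cube."""
    if not mask.any():
        return False
    indices = np.where(mask)
    extents = [axis.max() - axis.min() + 1 for axis in indices]
    return all(extent > min_extent for extent in extents)

def touches_region_edge(mask):
    """True if the cloud is in contact with any edge of the region,
    flagging it as a candidate for removal (or recovery from the
    adjacent region, as described in the next subsection)."""
    faces = [mask[0, :, :], mask[-1, :, :],
             mask[:, 0, :], mask[:, -1, :],
             mask[:, :, 0], mask[:, :, -1]]
    return any(face.any() for face in faces)

# Example usage with a dictionary of {cloud_id: boolean mask} (hypothetical):
# keep = [cid for cid, m in cloud_masks.items()
#         if spans_enough_voxels(m) and not touches_region_edge(m)]
```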
### Overlapping areas
Each of the 10 regions into which CHIMPS is divided contains a variance array component determined for each spectrum from the system noise temperature. In order to perform source extraction as consistently as possible, a small overlap is left between adjacent regions. To avoid double-counting clouds and to account for the discrepancies in the extraction maps near longitudinal edges, due to the separate dendrograms representing the gas structure in each region, we use the following prescription to treat sources extracted in the overlapping areas. In each region, we remove clouds within the overlapping area that cross the longitudinal edges of the region (clouds 2 in panel A and 4 in panel B of Fig. 2). Such clouds do not have closed isocontours in the region in question (Colombo et al., 2015). We recover these objects from the SCIMES extraction in the adjacent regions, which contain the clouds to their full extent. Some regions present clouds that span the entire overlapping field. In order not to discard a significant amount of gas mass, we split these clouds at the edge of one region, assigning the portion in the overlapping area to the region that contains most of the cloud (cloud 1 in panels A and B in Fig. 2 becomes assigned to the region depicted in panel B). The remaining portion of the cloud, left in the adjacent region, is then added to this catalogue entry, taking its distance to be the same as the distance of the larger part. Since this situation occurs for one source only in the entire catalogue (between regions 3 and 4), the physical properties of this source were calculated manually, taking into account the properties of the voxels in each region and making the required adjustments. Finally, we include all objects that do not overlap between the regions (cloud 5 in Fig. 2), and whenever two (or more) clouds overlap, we simply discard the smaller object between the two regions (cloud 3 in panel A in Fig. 2). Through this procedure, we construct a catalogue of 2944 molecular clouds.
Finally, to produce a fair comparison of the physical properties of clouds, we match the FW subcatalogue by only considering SCIMES sources that contain at least a voxel with \(S/N\geq 10\)(Rigby et al., 2019). Thus, the final SCIMES catalogue used in the analysis that follows amounts to 1586 sources. None of the sources left after this selection is a single isolated leaf.
## 4 Distance assignments
A distance assignment to the extracted SCIMES sources was constructed by combining different catalogues and using a Bayesian distance estimator (Reid et al., 2016). We first consider the latest version of the ATLASGAL source catalogue (Urquhart et al., 2018). Distances were assigned as follows. Each SCIMES cloud is matched
Figure 2: Prescription for cloud removal in the overlapping area (shaded area in the panels) of adjacent regions (panels A and B). In each region, we remove the clouds within the overlapping areas that cross longitudinal edges. The clouds and portions of clouds that are removed in each region (clouds 1, 2, and 3 in panel A, and 4 in panel B) are drawn in red. These sources are recovered from the adjacent region. Clouds that span the entire overlapping area (cloud 1) are split at the longitudinal edge that marks the end of the region (panel A). The portion of the cloud contained in the shaded area is then assigned to the region that contains most of the cloud (panel B) and removed from the other (panel A). The portion of the cloud left in panel A (blue tip) is then added to the final catalogue (panel C). Whenever two (or more) clouds overlap (cloud 3), we discard the smaller object between the two regions. We retain all objects that do not overlap between the regions (cloud 5).
to a set of one or more ATLASGAL sources. The matching process is performed through an area (\(l\),\(b\)) search that selects the closest sources (Euclidean metric) lying within a neighbourhood of radius \(r\) arcsec centred at the centroid of the SCIMES object. The radius \(r\) is taken by adding 38 arcsec (\(\approx 5\) pixels) to the radius of the SCIMES object (following Rigby et al., 2019). Next, if this search returns multiple clouds, the distance that most sources have in common is chosen. If the distances in the set vary significantly, we check whether any of them belongs to an ATLASGAL cluster, and assign the cluster's distance to the SCIMES cloud. SCIMES clouds whose single ATLASGAL counterpart has no available distance or, in the case of clusters, for which ATLASGAL does not provide a cluster distance, are left unassigned.
We then consider a sub-catalogue of the FW assignments (Rigby et al., 2019). This subset of the FW sources comprises only robust sources. These are sources that are not false positives or single coherent sources at low S/N which are hard to discern by eye. The reduced catalogue is also free of sources consisting of diffuse gas at low S/N that may contain multiple intensity peaks, or irregular profiles (resulting from the segmentation of clouds across tile boundaries). This robust sub-catalogue amounts to 3664 entries. We will refer to this catalogue as the FW catalogue. The Bayesian distance calculator was used to estimate the possible near and far kinematic distance - and associated uncertainties - for each of the clumps (Rigby et al., 2019). No assumption about the sources being associated with spiral arms was made, and the standard Galactic rotation model (Reid et al., 2014), with a distance to the Galactic centre of \(R_{0}=8.34\pm 0.16\) kpc was adopted for the calculations.
SCIMES clouds without ATLASGAL counterparts are compared to the FW catalogue. If a SCIMES cloud contains a single FW object (emission peak) or more FW objects with the same distance, then that distance is assigned to the cloud. If a SCIMES cloud contains multiple FW sources with different distances, the distance that corresponds to the mode of the distribution of FW distances is assigned. If this distribution has no modes, the first FW source in the list is chosen.
Since the SCIMES and FW segmentations do present discrepancies in the emission structures they identify (see the small clumps at latitudes smaller than \(-0.2^{\circ}\) in Fig. A1 in Appendix A), not all the SCIMES clouds contain one or more FW sources. Associations for the remaining unassigned SCIMES sources are made using a final volumetric search. This time an ellipsoidal volume of semi-axes \(0\aas@@fstack{\circ}3\times 0\aas@@fstack{\circ}3\times 10\) km s\({}^{-1}\), centred at the centroid of each remaining cloud, is employed to identify the closest SCIMES centroid with an existing distance assignment. The size of this volume is in agreement with the appropriate tolerance for friends-of-friends grouping (Wienen et al., 2015) and corresponds to the median angular size and maximum linewidth of molecular clouds (Roman-Duval et al., 2009).
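A sketch of this final ellipsoidal search is given below, assuming centroid coordinates in degrees and km s\({}^{-1}\) and an array of clouds with already-assigned distances; the array layout and the function name are illustrative.

```python
import numpy as np

def nearest_assigned_centroid(target, assigned, a_l=0.3, a_b=0.3, a_v=10.0):
    """Return the distance of the closest cloud centroid with a known distance
    inside an ellipsoidal search volume of semi-axes (a_l, a_b) deg and a_v km/s.

    `target` is (l, b, v) for the unassigned cloud; `assigned` is an array of
    shape (N, 4) holding (l, b, v, distance) for clouds with assigned distances.
    Returns None if no assigned centroid falls inside the ellipsoid."""
    l0, b0, v0 = target
    # Dimensionless offsets, normalised by the ellipsoid semi-axes
    dl = (assigned[:, 0] - l0) / a_l
    db = (assigned[:, 1] - b0) / a_b
    dv = (assigned[:, 2] - v0) / a_v
    r2 = dl**2 + db**2 + dv**2
    inside = r2 <= 1.0
    if not inside.any():
        return None
    return assigned[inside][np.argmin(r2[inside]), 3]
```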
Finally, Reid's Bayesian calculator is employed to estimate the distances of the remaining SCIMES sources with undetermined distances with a near-far probability of 0.5.
To avoid contamination of the results by local sources and to exclude a large number of low-luminosity clumps/clouds below the completeness limit, only sources with heliocentric distance \(>2\) kpc are included (Urquhart et al., 2018).
Galactocentric distances are calculated independently using the rotation curve of Brand & Blitz (1993). The angular velocity is derived from the line-of-sight velocity, \(v_{\rm LSR}\), and the Galactic coordinates \(l\) and \(b\) via the relation
\[\omega=\omega_{0}+\frac{v_{\rm LSR}}{R_{0}\sin(l)\cos(b)}, \tag{1}\]
where \(\omega_{0}=\Theta_{0}/R_{0}\) is the Sun's angular velocity, with \(\Theta_{0}=220\) km s\({}^{-1}\) the circular velocity at the Sun's Galactocentric distance \(R_{0}=8.5\) kpc. The Galactocentric distance of a source is then obtained by solving
\[\frac{\omega}{\omega_{0}}=a_{1}\left(\frac{R}{R_{0}}\right)^{a_{2}-1}+a_{3} \frac{R_{0}}{R} \tag{2}\]
numerically, with the constants \(a_{1}=1.0077\), \(a_{2}=0.0394\), and \(a_{3}=0.0071\)(Brand & Blitz, 1993).
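As a concrete illustration, the sketch below solves equations (1) and (2) for \(R\) with a standard root finder. It assumes the circular-velocity interpretation of the rotation-curve constants given above, that \(\sin(l)\neq 0\), and that a single root lies between 1 and 30 kpc.

```python
import numpy as np
from scipy.optimize import brentq

R0 = 8.5                 # kpc (Brand & Blitz 1993)
THETA0 = 220.0           # km/s, circular velocity at R0
OMEGA0 = THETA0 / R0     # km/s/kpc, solar angular velocity
A1, A2, A3 = 1.0077, 0.0394, 0.0071

def galactocentric_distance(v_lsr, l_deg, b_deg):
    """Numerically solve eq. (2) for R, given omega from eq. (1).
    A minimal sketch; assumes a single root within 1-30 kpc."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    omega = OMEGA0 + v_lsr / (R0 * np.sin(l) * np.cos(b))          # eq. (1)

    def residual(R):
        return A1 * (R / R0) ** (A2 - 1.0) + A3 * R0 / R - omega / OMEGA0  # eq. (2)

    return brentq(residual, 1.0, 30.0)

# Example: a source at l = 30 deg, b = 0 deg, v_LSR = 80 km/s
print(galactocentric_distance(80.0, 30.0, 0.0))
```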
Fig. 3 shows the distribution of distances to CHIMPS \({}^{13}\)CO sources extracted with both FW and SCIMES. For comparison,
Figure 3: Distributions of heliocentric and galactocentric distances for the CHIMPS \({}^{13}\)CO (3 - 2) sources extracted through the FW and SCIMES segmentations. The black histogram is the distribution of sources in a subset of the COHRS catalogue. The vertical lines denote the median values of the distributions. The median values of the distributions of heliocentric distance are 5.9, 5.3, and 5.8 kpc for the SCIMES, FW, and COHRS sources respectively. In the case of galactocentric distances, the median values are 6.6, 5.7, and 5.4 kpc for SCIMES, FW, and COHRS respectively.
the distance distribution of the subsample of COHRS sources is included.
The absence of a one-to-one correspondence between FW and SCIMES clouds makes it impossible to establish a unique matching criterion between the FW and SCIMES distance assignments of each cloud. In the assignment method described above, a distance is assigned to a SCIMES cloud based on the FW sources it contains. The differences in the numbers of clouds at large distances (\(\sim 12\) kpc) and at \(\sim 5\) kpc in the FW and SCIMES catalogues are a consequence of the differences in the segmentations and the assignment scheme of Section 4. The larger number of clouds seen in the SCIMES catalogue at 12 kpc arises from those assignments that do not involve FW distances. To check the robustness of the distance assignments greater than 12 kpc, we consider the Larson relations and the Galactic latitudes of this set of sources (133 clouds). The Larson relations confirm the scaling obtained for the full sample discussed in Section 6.6, while 50% are off the Galactic plane with latitudes either smaller than \(-0.15^{\circ}\) or greater than \(+0.15^{\circ}\).
However, as we discuss below, when the statistical properties of a large ensemble of sources are considered, the impact of a particular choice of distance assignment on the derived parameters and properties for individual clouds becomes less prominent.
The top-down view of the locations of the CHIMPS sources extracted by SCIMES on the Galactic plane is shown in Fig. 4. No sources closer than 3.5 kpc from the Galactic Centre are found since the CHIMPS data do not probe longitudes closer to the centre. The sources in our sample reside within the four main spiral arms, the Scutum-Centaurus, Sagittarius-Carina, Perseus and Outer arms, and the smaller Aquila Rift and Aquila Spur features. The spiral-arm structure is mirrored by the distribution of the sources' Galactocentric distances. The lower panel of Fig. 3 displays large peaks at \(\sim 4.5\) and \(\sim 6.5\) kpc. These are the locations of the Scutum and Sagittarius arms seen from the Galactic Centre. The smaller peak at \(\sim 7.5\) kpc corresponds to the sources collected in the Perseus arm. As a section of the Scutum arm traverses the locus of tangential circular velocities, the sources in this area become clustered along this locus, leaving gaps on both sides (Fig. 4). This artefact stems from sources that have velocities greater than the terminal velocity due to non-circular streaming motions, which get binned at exactly the tangent distance, resulting in the apparent 'gap' and arc of sources lying on the tangent circle.
### A note on distances
To quantify the impact of the choice of distance assignment on the physical properties of the clouds in the catalogue, we consider three random distance assignments and check their corresponding distributions of masses. For the full SCIMES catalogue, the random distance assignments consist of assigning a distance to each SCIMES cloud by drawing the value from
1. the set of all distances assigned to the SCIMES sources (each distance has the same probability of being assigned),
2. a set of (equispaced) distances between the minimum and maximum value of the SCIMES distance assignments,
3. a probability distribution (weights) generated from the original distribution of distances
The distance distributions derived from these assignments are compared to that of the original assignment in Fig. 5. This figure also depicts the distributions of masses associated with the three random distance assignment methods described above. The masses corresponding to each random distance assignment were estimated as in subsection 6.2. Although the distance distributions are largely dependent on the chosen assignment method, their differences are strongly mitigated when the corresponding masses are considered.
Performing the Kolmogorov-Smirnov test to check whether the original mass assignment and the random assignment are samples from the same distribution returns \(k=0.0797\) with p-value \(=8.9931\times 10^{-8}\) and \(k=0.0790\) with p-value \(=1.2360\times 10^{-7}\) for distances drawn randomly from the original set of distance assignments and from a set of (equispaced) distances between the minimum and maximum value of the SCIMES distance assignments, respectively (see above). Finally, when we consider distances drawn from a probability distribution (weights) generated from the original distribution of assigned distances described in Section 4, the test returns \(k=0.0267\) with p-value \(=0.2994\).
These results thus indicate that the mass distribution obtained from randomly assigned distances differs significantly from the distribution of mass obtained with the distances assigned through the algorithm in Section 4, unless the values are randomly chosen from the original distribution of distances. Similar results can be obtained for other physical properties that depend directly on distance, e.g. cloud radii, areas, and surface densities. Even though the purely random distributions show deviations in the statistics, we do not expect the actual distribution of distances to the observed clouds in this sample to differ much from the assigned one (the first two cases above are extremes). This test therefore shows that inaccurate distances to individual clouds are not crucial as long as the overall population still follows the expected distance distribution. The size of a sample containing a wide range of cloud sizes and geometries thus mitigates the inaccuracies and differences arising from imprecise distance assignments.
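As an illustration, the consistency check described above can be sketched as follows. The catalogue columns are stood in for by placeholder arrays, and the masses are rescaled with the square of the assigned distance (valid for a fixed column density and angular size); both of these shortcuts are assumptions made for the example only.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Placeholder catalogue standing in for the real SCIMES catalogue
catalogue = {"distance": rng.uniform(2.0, 15.0, 1586),        # kpc
             "mass": 10 ** rng.normal(3.0, 0.6, 1586)}        # M_sun

d_orig = np.asarray(catalogue["distance"])
m_orig = np.asarray(catalogue["mass"])
n = d_orig.size

# (i) resample the assigned distances with equal probability
d_rand1 = rng.choice(d_orig, size=n, replace=True)
# (ii) draw from an equispaced grid between the minimum and maximum assigned distance
d_rand2 = rng.choice(np.linspace(d_orig.min(), d_orig.max(), 500), size=n)
# (iii) draw from a distribution weighted by the original histogram of distances
counts, edges = np.histogram(d_orig, bins=50)
centres = 0.5 * (edges[:-1] + edges[1:])
d_rand3 = rng.choice(centres, size=n, p=counts / counts.sum())

for label, d_rand in (("random 1", d_rand1), ("random 2", d_rand2), ("random 3", d_rand3)):
    m_rand = m_orig * (d_rand / d_orig) ** 2    # mass scales as distance squared
    k, p = ks_2samp(m_orig, m_rand)
    print(f"{label}: KS statistic = {k:.4f}, p-value = {p:.3g}")
```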
Figure 4: Top-down view of the locations of the \({}^{13}\)CO (3 - 2) sources extracted through the SCIMES algorithm from CHIMPS. The background image is from Churchwell et al. (2009b). The Solar circle and the locus of the tangent points are marked as dashed and dotted lines respectively.
## 5 Comparison between FW and SCIMES segmentations
Figure 6 shows the FW and SCIMES extractions of \({}^{13}\)CO (3-2) emission in region 3 (see text and Fig. 1) in the 59.72-km\(\,\)s\({}^{-1}\) velocity plane at 27.4-arcsec resolution. In the two panels, regions of space belonging to the cross-sections of different clouds are distinguished by different colours. The most prominent difference between the two extractions lies in the relative over-segmentation of the emission in the FW panel. This is a known feature of FW extractions, in which the watershed algorithm tends to break the emission into compact clumps that are accounted for as isolated features. A notable example is the large section of the SCIMES source extending from 34\({}^{\circ}\) to 35\({}^{\circ}\) of longitude in the bottom panel of Fig. 6. The selected velocity slice highlights how this extended SCIMES source becomes fragmented into adjacent clumps in the FW extraction. This behaviour is also observed in the example of segmentation of crowded and sparse fields provided in Fig. A1 in Appendix A. In addition, as Rigby et al. (2019) point out, diffuse emission around the detection threshold can be identified as sets of disconnected voxels, clustered together as individual clumps (an example is given in Fig. 7). These clouds are recognisable by their very irregular shapes and they were flagged as 'bad sources' after a visual inspection in the FW catalogue (Rigby et al., 2019). These sources are excluded in the analysis that follows.
Coherent sources at low SNR and areas of emission crossing the boundaries between tiles also belong to this category. These sources often present very irregular segmentation due to the difference in noise levels among tiles. Such discontinuities may also create small clumps that do not originate from features in the emission map but reflect changes in the emission in adjacent channels3. These inconsistencies are a consequence of performing the extraction on SNR maps. Such occurrences are, however, small in number and the total sample is only marginally impacted.
Footnote 3: With the FW parameterisation used for the segmentation of CHIMPS data, voxels with SNR = 2 can be included in a clump, when they are directly connected to a clump with a peak SNR \(>\) 5 (Rigby et al., 2019).
The final catalogue published by Rigby et al. (2019) includes 4999 sources, 1335 of which were classified as 'bad sources' thought to arise from such artefacts.
If we directly compare the segmentation produced by FW with that of SCIMES on the same velocity plane (Fig. 6), the emission is segmented into fewer individual sources with SCIMES, generally covering larger areas than their FW counterparts. This characteristic of the SCIMES segmentation is supported by the analysis of the geometric and physical properties of its sources (see below); a cloud/clump is thus, in general, not characterised by a single emission peak. SCIMES clusters consist of signals from different hierarchical levels of the emission dendrogram. The fragmentation induced by FW identifies pieces of the substructure as individual entities. In the framework of SCIMES, these clumps correspond to dendrogram branches.
The introduction of artificial boundaries cutting through areas of less intense emission between peaks is a consequence of the watershed algorithm characterising disjoint clouds by single individual peaks. The volume and luminosity similarity criteria defining SCIMES clustering, instead, allow for the grouping of emission from the bright cores (i.e. dendrogram leaves, the peaks of the emission) together with their tenuous surrounding envelopes (bottom panel in Fig. 6) into a single object, thus bypassing the impact of SNR discontinuities at the edges of adjacent tiles.
Figure 5: Top row: distributions of the three sets of random distances compared to the distances assigned to SCIMES clouds in CHIMPS (SCIMES). From left to right: the first set (Random 1) corresponds to distances drawn from the set of unique distances that were assigned to SCIMES sources. The second set (Random 2) is drawn from the set of (equispaced) distances between the minimum and maximum value of the SCIMES distances. Finally, the set Random 3 is drawn from the distribution of distances generated from the original SCIMES assignments. Bottom row: distributions of masses estimated from the random distance sets compared to the masses corresponding to the original SCIMES distance assignments.
## 6 Physical properties
### Cloud sizes
We estimate the size of the CHIMPS clouds by considering two 'approximate' radii associated with different characteristics of the emission. Adopting the definitions in Rigby et al. (2019), we consider the equivalent radius \(R_{\rm eq}\) as the radius of the circle whose area (\(A\)) is equivalent to the projected area of the source,
\[R_{\rm eq}=d\sqrt{A/\pi}, \tag{3}\]
where \(d\) is the distance assigned to the source. The values of the equivalent radii associated with the SCIMES sources were calculated directly from the values of the exact areas produced by the Astrodendro dendrogram statistics tools.
For consistency in the comparison with physical properties defined in Rigby et al. (2019), we also consider the geometric mean of the intensity-weighted rms deviations in the \(l\) and \(b\) axes (\(\sigma_{l}\) and \(\sigma_{b}\)), deconvolved by the telescope beam, and \(d\) the assigned distance,
\[R_{\sigma}=d\sqrt{\sigma_{l}\sigma_{b}}. \tag{4}\]
The "geometric radius" \(R_{\sigma}\) provides a measure associated with the projected extent of the cloud in the \(l\) and \(b\) directions. Depending solely on the emission profile of the source, \(R_{\sigma}\) is less affected by the variations in the noise level in different areas of the survey (while \(R_{\rm eq}\) has no dependence on the emission profile). \(R_{\sigma}\) thus provides a more consistent measure than \(R_{\rm eq}\) for the smallest and densest clumps where star formation is likely to be located (under the assumption that smaller clumps are centrally concentrated).
We adopt a version of \(R_{\sigma}\) scaled by a factor \(\eta\) that accounts for an average emission profile. The constant \(\eta\) is set to 2. This value corresponds to the median value found by Rigby et al. (2019) for the FW extraction and is a compromise between the commonly-used conversion \(\eta=1.9\)(Solomon et al., 1987; Rosolowsky and Leroy, 2006; Colombo et al., 2019) and \(\eta=2.1\), the median value we found using the alternative version of \(R_{\sigma}\)
\[R_{\sigma}=d\sqrt{\sigma_{\rm maj}\sigma_{\rm min}}, \tag{5}\]
easily obtainable from the Astrodendro statistical tools for finding the major and minor axes of the projected SCIMES sources. The equivalent radius \(R_{\rm eq}\) is used in all instances in which the radius enters the definition of a physical quantity. Rigby et al. (2019) also used the conversion factor \(\eta\) in definitions where the comparison to different datasets required the use of \(R_{\sigma}\).
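A short sketch of the two size estimates of equations (3) and (4) is given below. The beam is subtracted in quadrature from the rms sizes; this deconvolution scheme and the 15-arcsec beam value are assumptions based on the survey parameters rather than the exact procedure used in the catalogues.

```python
import numpy as np

ETA = 2.0                                     # emission-profile scaling factor adopted in the text
ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)

def equivalent_radius_pc(area_arcsec2, d_kpc):
    """R_eq = d * sqrt(A / pi), eq. (3), with A the exact projected area."""
    area_sr = area_arcsec2 * ARCSEC_TO_RAD**2
    return (d_kpc * 1e3) * np.sqrt(area_sr / np.pi)      # pc, small-angle approximation

def scaled_sigma_radius_pc(sigma_l_arcsec, sigma_b_arcsec, d_kpc, beam_fwhm_arcsec=15.0):
    """eta * R_sigma = eta * d * sqrt(sigma_l * sigma_b), eq. (4), beam-deconvolved."""
    sigma_beam = beam_fwhm_arcsec / np.sqrt(8.0 * np.log(2.0))   # FWHM -> rms width
    sl = np.sqrt(np.maximum(sigma_l_arcsec**2 - sigma_beam**2, 0.0))
    sb = np.sqrt(np.maximum(sigma_b_arcsec**2 - sigma_beam**2, 0.0))
    return ETA * (d_kpc * 1e3) * np.sqrt(sl * sb) * ARCSEC_TO_RAD

# Example: a 20 x 20 arcsec source at 5 kpc has R_eq of roughly 0.3 pc
print(equivalent_radius_pc(400.0, 5.0))
```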
A simple visual inspection of the segmented emission maps (see Fig. 6 for an example) reveals the over-segmentation produced by FW (more prominent in crowded fields). The high-value tail of the SCIMES distribution of \(R_{\rm eq}\) in Fig. 8, relative to that from FW, confirms the higher number of larger clouds extracted by SCIMES. This result holds when heliocentric distances are constrained between 8 and 12 kpc. Following Rigby et al. (2019), this specific distance-limited subsample is introduced as a'most reliable' subsample against which we will compare any relationships between the physical quantities of the full sample to ensure that no bias is introduced by the choice of distance assignment. This set only includes 462 SCIMES sources with Galactocentric distances ranging from 4.0 to 8.5 kpc. Within this distance range, the spatial resolution
Figure 6: Corresponding FW (top) and SCIMES (bottom) clusters in the 59.72-km s\({}^{-1}\) velocity plane at 27.4-arcsec resolution (see text). In both panels, different colours represent different clouds.
Figure 7: Example of disconnected clouds in the FW segmentation. The panel shows the projection along the spectral axis of a portion of the FW extraction. The colours indicate individual clouds. The green (1), purple (2), yellow (3), pink (4), red (5), pink (6), cyan (7), and orange (8) fragments are identified as single clouds. This projection illustrates that, even after the removal of noise artefacts (“bad sources”) the FW catalogue still contains a number of fragmented sources.
element between the nearest and most distant sources differs by no more than 50%, while the sub-sample covers a significant fraction of the full sample.
In Fig. 9, we consider the source volumes, measured as the number of voxels that constitute each cloud as it is identified as an individual entity by the segmentation algorithms. We notice that, besides identifying large clouds in crowded fields (and thus being less prone to over-segmentation; see also Appendix A), SCIMES also extracts a significant number of smaller clouds, especially in sparse fields. The mean volume of all sources extracted amounts to \(1291.8\) voxels in SCIMES and \(1307.1\) voxels in FW. However, a dendrogram parameterisation that matches the FW configuration described in Rigby et al. (2019) also produces 540 smaller clouds that do not contain any emission peaks arising from the FW extraction. These sources may be found either in the proximity of similar emission features identified by the FW algorithm or in areas devoid of FW emission. This feature of the SCIMES segmentation becomes relevant in the calculation of velocity dispersions in sub-section 6.4, in which the emission-weighted velocity channels spanned by a cloud are considered. Fig. B1 in Appendix B shows some examples of this set of sources. To ensure that these clouds are not low-emission artefacts constituted by low-density gas, we plot the distribution of their densities (Fig. 9), finding that it matches the distribution of the full SCIMES sample.
### Mass
Once distances are assigned, the true size of each voxel in the SCIMES segmentation can be calculated. Its contained mass is then estimated through the column density cubes (see Section 2). The H\({}_{2}\) mass of the cloud is estimated by considering the mean mass per H\({}_{2}\) molecule, taken to be \(2.72\) times the mass of the proton, accounting for a helium fraction of \(0.25\)(Allen, 1973), and an abundance of \(10^{6}\) H\({}_{2}\) molecules per \({}^{13}\)CO molecule (Draine, 2011).
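A sketch of this conversion for a single cloud is given below, assuming the column-density cube provides \(N(^{13}\)CO\()\) per voxel in cm\({}^{-2}\) and using an illustrative pixel size; the constants follow the assumptions stated above.

```python
import numpy as np

M_P = 1.6726e-27          # proton mass (kg)
MU_H2 = 2.72              # mean mass per H2 molecule in proton masses (includes He)
X_H2_13CO = 1.0e6         # H2 molecules per 13CO molecule
M_SUN = 1.989e30          # kg
PC_CM = 3.0857e18         # cm
ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)
PIXEL_ARCSEC = 7.5        # CHIMPS pixel size (assumed)

def cloud_mass_msun(n13co_voxels_cm2, d_kpc):
    """Sum the per-voxel 13CO column densities of one cloud and convert to an
    H2 mass in solar masses, given the assigned heliocentric distance."""
    pixel_cm = d_kpc * 1e3 * PC_CM * PIXEL_ARCSEC * ARCSEC_TO_RAD   # physical pixel size
    n_h2_total = np.nansum(n13co_voxels_cm2) * X_H2_13CO            # total H2 column (cm^-2)
    return n_h2_total * pixel_cm**2 * MU_H2 * M_P / M_SUN

# Example: 1e4 voxels with N(13CO) ~ 5e15 cm^-2 at 5 kpc give a few times 1e4 M_sun
print(cloud_mass_msun(np.full(10_000, 5e15), 5.0))
```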
The mass spectra for CHIMPS clouds and their fitted relations are displayed in Fig. 10. The mass spectral indices found with a power-law fit are \(-1.41\pm 0.05\) for SCIMES clouds, \(-1.284\pm 0.02\) for FW and \(-0.920\pm 0.04\) for the COHRS survey. The binning of the masses follows Maiz Apellaniz & Ubeda (2005), with variable bin widths and a fixed bin population of \(2N^{2/5}\), \(N\) being the number of individuals in the entire population. This convention is adopted to remove biases due to binning and was previously used in Eden et al. (2015) and Eden et al. (2018). The SCIMES index is consistent with the value of \(-1.6\pm 0.2\) found by Roman-Duval et al. (2010) and previous studies (Sanders et al., 1985; Solomon et al., 1987; Williams et al., 1994). FW is slightly below this value. COHRS masses are expressed in terms of the molecular gas luminosity and obtained by using the conversion \(M=\alpha_{\rm CO}L_{\rm CO}\), with \(\alpha_{{}^{12}{\rm CO}(1-0)}=4.35\,\rm M_{\odot}\,(K\,km\,s^{-1}\,pc^{2})^{-1}\), assuming a mean molecular weight of \(2.8m_{\rm H}\) per hydrogen molecule. The conversion factors were calibrated against \({}^{12}\)CO(1-0) assuming a line ratio \(R_{31}={{}^{12}CO(3-2)}/{{}^{12}CO(1-0)}\) to scale the calculated properties directly to physical properties (Colombo et al., 2019). The COHRS sample shows the greatest discrepancy, hinting that a single power law might not be applicable to all tracers of the same molecular clouds. The slope of the COHRS spectrum produces the best fit for values around \(M>10^{5}M_{\odot}\) (where it becomes similar to the fits of the SCIMES and FW samples). The flatter slope in \({}^{12}\)CO suggests that this SCIMES segmentation detects fewer of the small individual leaves that can be extracted with \({}^{13}\)CO, because they are grouped into larger structures connected by more diffuse material (and, perhaps, a number do not even appear as peaks in the \({}^{12}\)CO emission because the gas is optically thick). In this scenario only the diffuse gas around the clumps is detected, suggesting that the COHRS sample, identifying more massive structures, is incomplete at smaller masses (see discussion below).
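The binning and fitting described above can be sketched as follows; the treatment of duplicate mass values and the absence of a completeness cut are simplifications made for the example.

```python
import numpy as np

def mass_spectrum_fit(masses):
    """Bin masses into variable-width bins holding ~2 N^(2/5) clouds each
    (Maiz Apellaniz & Ubeda 2005) and fit dN/dM ~ M^gamma in log-log space."""
    m = np.sort(np.asarray(masses, dtype=float))
    per_bin = max(int(round(2.0 * m.size ** 0.4)), 2)
    edges = np.unique(np.append(m[::per_bin], m[-1]))   # an edge at every per_bin-th mass
    counts, edges = np.histogram(m, bins=edges)
    widths = np.diff(edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    dndm = counts / widths
    good = dndm > 0
    gamma, norm = np.polyfit(np.log10(centres[good]), np.log10(dndm[good]), 1)
    return centres[good], dndm[good], gamma

# Example with a synthetic power-law sample: the fit recovers an index close to -1.6
rng = np.random.default_rng(1)
m_synth = (rng.pareto(0.6, 2000) + 1.0) * 300.0   # dN/dM ~ M^-1.6 above 300 M_sun
print(mass_spectrum_fit(m_synth)[2])
```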
The turnover at \(\sim 300M_{\odot}\) is an indicator of the completeness limit of the data. This is the mass limit below which sources are not dependably extracted and therefore their distribution cannot be fitted by any power law. This limit depends on the size in both spatial and spectral axes, the local noise level, and the source density profile in addition to the total mass. Rigby et al. (2019) show that there is no single completeness limit in the CHIMPS data as the completeness limit is distance-dependent.
Vital to an accurate mass estimation is a precise distance assignment. The typical uncertainty on the distances estimated from the Bayesian distance algorithm is \(\sim 0.3\) kpc (Reid et al., 2016), which affects shorter distances the most (30% at 1 kpc) but falls
Figure 8: Distributions of equivalent radii in the SCIMES and FW segmentations (left panel). The right panels show the distributions for the distance-limited (8-12 kpc) samples.
to a few per cent already at 5 kpc. Taking into account the error on the CO-to-H\({}_{2}\) conversion factor and the column density estimation (Urquhart et al., 2018; Rigby et al., 2019), we estimate a typical error in cloud mass of order 30-40 per cent. In addition, the distance assignments (as well as all other calculated parameters) are very likely to be contaminated by uncertainties in the assumptions and approximations in the variety of methods considered in the various surveys. Section 4.1 presents a comparison between mass distributions derived from random distance assignments, suggesting that the details of the distance assignment make no significant difference to the full-sample statistics.
Fig. 11 shows the mass-radius relationship for sources in CHIMPS extracted with both the FW and SCIMES methods. Power-law fitting produces slopes of \(2.02\pm 0.02\) and \(1.97\pm 0.02\) for FW and SCIMES respectively. The distance-limited sample is fitted with \(1.93\pm 0.06\). The values are similar to the power-law exponent of 2.36 found for molecular clouds in the GRS (Roman-Duval et al., 2010). The scatter in the CHIMPS data is much larger than that in the GRS (Rigby et al., 2019), probably owing to the large difference in resolution, and it is comparable to the scatter in the ATLASGAL data, which were extracted at similar resolution (\(\sim 20\) arcsec). Dense clumps in ATLASGAL are found to follow a shallower power law with exponent 1.65 (Urquhart et al., 2018). COHRS sources (\(2.15\pm 0.03\)) have been added for comparison. As expected, the larger structures detected through \({}^{12}\)CO emission result in the larger masses seen in panel A of Fig. 12 and in the distributions with distance in Fig. 13. The CHIMPS and COHRS trendlines also follow a similar pattern, suggesting that the segmentation of COHRS identifies the more extended counterparts of CHIMPS objects.
Panel A in Fig. 12 compares the distributions of mass in the two CHIMPS emission extractions with that of the COHRS
Figure 11: Mass-radius relationship for CHIMPS and COHRS sources. Notice that the fit of the full SCIMES sample and the distance-limited subsample are nearly identical.
Figure 10: Comparison between the data and the fitted functions for mass spectra. The dots indicate the centres of the mass bins. The colours refer to the method of extraction and survey.
Figure 9: Distributions of numbers of voxels in the FW and SCIMES sources (top panel). The red outline histogram represents those SCIMES sources that do not contain any emission peak found in the FW catalogue. The bottom panel portrays the distributions of H\({}_{2}\) number densities for this subset and the whole SCIMES sample.
Figure 12: Panels A-H: distributions of total mass, equivalent radius, average number density, virial parameter, excitation temperature, turbulent and thermal pressure, and Mach number in the CHIMPS sources. Distributions of COHRS sources are also added for comparison where the data are available.
clouds. The calculation for mass estimation from CO luminosities in COHRS is described in Colombo et al. (2019). The mass distribution reflects the size distribution of the clouds for the SCIMES and FW segmentations.
The mass distribution as a function of heliocentric and Galactocentric distances of the full sample is presented in Fig. 13, where the trend at small Galactocentric distances is likely to be an artefact originating from the small number of sources in the initial bin (3.5-4.0 kpc) and the position of the centre of the bin in the plot.
### Hydrogen number density
The mean (volumetric) particle density (or number density) over the approximate volume of a cloud (assuming 2D to 3D symmetry) is calculated as
\[\overline{n}(\mathrm{H}_{2})=\frac{3}{4\pi}\frac{M}{\mu m_{P}R_{eq}^{3}}, \tag{6}\]
where \(M\) is the mass of the cloud and \(\mu m_{P}\) (\(=2.72\,m_{P}\)) is the mean mass per H\({}_{2}\) molecule.
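Equation (6) translates directly into code; a minimal sketch, with the same constants as assumed above, is:

```python
import numpy as np

M_SUN_KG = 1.989e30
PC_M = 3.0857e16
M_P_KG = 1.6726e-27
MU = 2.72               # mean mass per H2 molecule in proton masses

def mean_h2_density_cm3(mass_msun, r_eq_pc):
    """Mean H2 number density over the approximate cloud volume, eq. (6), in cm^-3."""
    volume_m3 = (4.0 / 3.0) * np.pi * (r_eq_pc * PC_M) ** 3
    n_m3 = mass_msun * M_SUN_KG / (MU * M_P_KG * volume_m3)
    return n_m3 * 1e-6   # m^-3 -> cm^-3

# Example: M = 1000 M_sun within R_eq = 2 pc gives a mean density of ~450 cm^-3
print(mean_h2_density_cm3(1000.0, 2.0))
```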
The distribution of molecular hydrogen number densities extracted from CHIMPS via FW and from CHIMPS and COHRS by SCIMES is reported in panel C of Fig. 12. The larger masses and greater radii found in COHRS clouds result in a distribution of mean molecular hydrogen density that is comparable to the ones obtained for the SCIMES and FW segmentations.
We notice that the distributions of H\({}_{2}\) number densities exhibit values much less than the critical density of the \({}^{13}\)CO (J=3-2) transition. In a clumpy medium, the average density may be an underestimate of the typical density at which most emission originates and the H\({}_{2}\) number density assigned to each cloud represents the average density over the entire (approximated) volume of the cloud. This average value accounts for both clumps with a density over the critical threshold and areas of far more rarefied gas.
Gas with densities lower than the critical density will also be warmer than the calculated excitation temperature (Rigby et al., 2019). However, it may still emit in a sub-thermal mode in which the energy level populations are not distributed according to the Boltzmann distribution. This underestimate in the gas temperature is mirrored in overestimates in the gas column density (Rigby et al., 2019). The distribution of mean excitation temperatures of the FW extraction of CHIMPS clouds is found to have a mean value of 11.5 K, which matches the expectation for molecular structures covering the size regime from cores, through clumps, to clouds (Bergin and Tafalla, 2007). Sub-thermal emission can therefore be assumed not to be a dominant effect in the \({}^{13}\)CO emission (see also Rigby et al., 2019).
The unexpected left tail in the distribution of SCIMES mean number densities should not necessarily be considered an indication of smaller volumes or masses in disagreement with our previous results, but rather as arising from the inaccurate spherical approximation of larger, irregularly shaped clouds. The approximation is aggravated by using the equivalent radius to describe the cloud's extension both along the Galactic coordinates and along the line of sight. This is particularly evident for clouds with large aspect ratios (filamentary sources), which are more likely to have a "depth" similar to the smaller dimension of the projected cloud (i.e. the width of the filament). In this case, \(R_{\mathrm{eq}}\) estimated from the equivalent area will overestimate the depth, and consequently underestimate the cloud's density.
### Velocity dispersion
The velocity dispersion (\(\sigma_{\mathrm{v}}\)) measures the statistical dispersion of velocities about the mean velocity of a molecular cloud. In the clump-finding implementation of FW provided in the JCMT Starlink software suite, \(\sigma_{\mathrm{v}}\) is estimated as the RMS deviation of the velocity of each voxel centre from the clump velocity centroid (Berry, 2015). The FW catalogue adopts this as the measure of the extent of a cloud along the spectral axis. For a cloud with a Gaussian distribution of velocities, this definition of \(\sigma_{\mathrm{v}}\) corresponds to the standard deviation of the distribution with mean value at the centroid velocity. Equivalently, SCIMES derives its velocity dispersion from the intensity-weighted second moment of velocity through the Astrodendro statistics functions. The distributions of the velocity dispersion in Fig. 12 reflect the difference in size of the clouds extracted by the two methods, these being related via the Larson relations. Although SCIMES tends to extract overall bigger sources, this extraction also exhibits a significant number of smaller clouds that are not matched to FW emission (see Section 6.1). This subset contributes systematically smaller values of \(\sigma_{\mathrm{v}}\), shifting the overall distribution, which then acquires a lower mean (0.89 km s\({}^{-1}\)) than FW (0.98 km s\({}^{-1}\)).
For the COHRS plots, we consider the non-extrapolated and non-convolved data (see Colombo et al., 2019). In general, the larger the size of a cloud, the wider the distribution of velocities of its particles, and thus its velocity dispersion. The velocity dispersion causes the broadening of linewidths in CO observations. This fact is mirrored in the distribution of velocity dispersions in the clouds of the COHRS catalogue and their size-linewidth relation in Fig. 14. Linewidths are expected to be larger in \({}^{12}\)CO because the high optical depths suppress the peak intensities, and because \({}^{12}\)CO traces larger structures with larger turbulent velocities.
### The virial parameter
The virial parameter encodes the dynamic state of a molecular cloud, assuming that the cloud is capable of sustaining virial equilibrium.
The virial parameter is defined as the ratio of a cloud's spherically symmetric virial mass to its total mass (\(M\))
\[\alpha_{\mathrm{vir}}=\frac{3\sigma_{\mathrm{v}}^{2}\eta R_{\sigma}}{GM} \tag{7}\]
where \(G\) is the gravitational constant. This definition (Rigby et al., 2019) assumes a radial density distribution \(\rho(r)\propto r^{-2}\)(MacLaren et al., 1988) and includes \(R_{\sigma}\) to account for the median emission profile. The intensity-weighted radius reinforces the gravitational energy in those regions where the density is higher.
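In code, equation (7) is a one-liner once sizes, linewidths, and masses are expressed in consistent units; a minimal sketch, with \(G\) in pc (km s\({}^{-1}\))\({}^{2}\) M\({}_{\odot}^{-1}\), is:

```python
G_PC_KMS2_MSUN = 4.301e-3   # gravitational constant in pc (km/s)^2 / M_sun

def virial_parameter(sigma_v_kms, eta_r_sigma_pc, mass_msun):
    """alpha_vir = 3 sigma_v^2 (eta R_sigma) / (G M), eq. (7), assuming a
    rho ~ r^-2 density profile (MacLaren et al. 1988)."""
    return 3.0 * sigma_v_kms ** 2 * eta_r_sigma_pc / (G_PC_KMS2_MSUN * mass_msun)

# Example: sigma_v = 1 km/s, eta*R_sigma = 3 pc, M = 1000 M_sun -> alpha_vir ~ 2.1
print(virial_parameter(1.0, 3.0, 1000.0))
```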
Approximating a source as a spherically symmetric distribution of density introduces a factor-of-two uncertainty in the estimation of the virial parameter. This arises from both characterising the source by a single radius and from choosing this particular radial profile. This error will be systematic to a large extent, and likely to affect both segmentations in the same fashion.
In the absence of a strong magnetic field or external pressure, \(\alpha_{\mathrm{vir}}\) equals 1 when the clouds are in virial equilibrium. A value \(\alpha_{\mathrm{vir}}=2\) indicates that the gravitational energy equals the kinetic energy in the cloud. Values of \(\alpha_{\mathrm{vir}}\) smaller than 1 characterise an unstable, collapsing system (when other sources of supporting pressure are absent), whereas a dissipating system, dominated by kinetic energy, is characterised by \(\alpha_{\mathrm{vir}}>2\), and \(1<\alpha_{\mathrm{vir}}<2\) indicates approximate equilibrium. These clouds may be free-falling, and small values
Figure 13: Various properties measured for the CHIMPS and COHRS sources (where data are available), namely the equivalent radius, mass, mean number density, excitation temperature, virial parameter and turbulent pressure, as functions of both Galactocentric (left) and heliocentric (right) distance. For Galactocentric distances, we have plotted trendlines and error bars. The trendlines connect the mean values of 0.5 kpc wide bins. The error bars are the standard errors of the means. The rise in the density plots at low heliocentric distances may be considered an indicator of a resolution bias. This bias is visible in the distribution of \(R_{\rm eq}\) with heliocentric distance.
of the virial parameter may indicate other support or observation biases (Traficante et al., 2018). It has been suggested that the heightened velocity dispersions due to rapidly infalling gas in collapsing cloud fragments may still raise the cloud's value of the virial parameter to \(\sim 2\)(Kauffmann et al., 2013). This would be the case of the smaller FW clouds, identified around single high-emission, high-density peaks. Fragments with \(\alpha_{\rm vir}\ll 2\) are more likely to host and be supported by strong magnetic fields or to house ongoing high-mass star formation. In the absence of these conditions, their life would be too short to allow for their detection (Kauffmann et al., 2013).
The distribution of the virial parameter in CHIMPS and COHRS is presented in panel D of Fig. 12. The SCIMES distribution indicates that a large number of clouds in this segmentation are gravitationally unstable or in approximate equilibrium.
Fig. 13 shows the virial parameter as a function of the heliocentric and Galactocentric distances. A closer look at the trendlines in Fig. 13 reveals a hint of a slightly increased \(\alpha_{\rm vir}\) inside 7 kpc, or perhaps in the spiral arms, although this trend may be due to the errors on the means of the bins increasing significantly at large radii. The decrease of the virial parameter as a function of heliocentric distance reflects the mass trend shown in Fig. 13. We notice that this feature was also found in the SEDIGISM survey (Schuller et al., 2017) and may thus be an indication of some observational bias.
### Scaling relations
To continue the comparison with the analysis proposed in Rigby et al. (2019) for the FW sample, we now consider the scaling relations between molecular-cloud properties. Applying a power-law fit to the size-density relation shown in Fig. 14 produces average number densities proportional to \(R^{a}\) with \(a=-1.01\pm 0.02\) for SCIMES clouds (\(a=-0.97\pm 0.02\) for the distance-limited subsample, and \(a=-0.99\pm 0.05\) in the FW case). For COHRS clouds \(a\) equals \(-0.85\pm 0.03\). The fits of FW and SCIMES sources both produce values of \(a\) similar to the original scaling relation \(a=-1.1\pm 0.05\) found by Larson (1981).
A fit to the size-velocity dispersion relation produces \(\sigma_{\rm v}\propto R^{a}\) with \(a=0.31\pm 0.01\) for SCIMES clouds (\(a=0.42\pm 0.03\) for the distance-limited subsample, and \(a=0.34\pm 0.01\) in the FW case). Both values are similar to the original scaling relation \(a=0.38\pm 0.14\) found by Larson (1981) over a factor of 30 in size, which was originally interpreted as evidence that the internal motions of molecular clouds follow a continuum of turbulent flow inherited from the ISM at larger scales. For the COHRS clouds \(a=0.28\pm 0.02\).
SCIMES clouds that are characterised by smaller values of the virial parameter (\(<0.6\)) fall in a size range between 2 and 20 pc. These clouds include the smallest, most compact sources, and the most likely sites of star formation. The size-virial parameter relation fit produces \(\alpha_{\rm v}\propto R^{a}\) with \(a=-0.25\pm 0.03\) for SCIMES clouds. The distance-limited subsample provided the least precise fit with \(a=0.09\pm 0.06\), consistent with zero. The discrepancy between the values for the full sample and the distance-limited sources is due to the lack of statistical correlation (Spearman correlation coefficient = 0.01 with p-value = 0.03) between \(R_{\rm eq}\) and \(\alpha_{\rm vir}\) in the reduced set, in addition to the large error produced by the chi-square fitting algorithm. Fitting the FW sources yields \(a=-0.45\pm 0.02\). The slopes found above for the SCIMES and FW sources are significantly steeper than the original scaling relation \(a=-0.14\) found by Larson (1981). The discrepancy may be due to the varying mass completeness as a function of distance. A factor \(a=-0.54\pm 0.06\) was found for COHRS clouds.
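The exponents quoted above can be recovered with a simple least-squares fit in log-log space. The snippet below is a minimal stand-in for the chi-square fitting used here and takes arbitrary arrays of sizes and of the quantity being fitted (e.g. \(\overline{n}(\mathrm{H_{2}})\) or \(\sigma_{\rm v}\)); it is illustrative only.

```python
import numpy as np

def powerlaw_exponent(x, y):
    """Fit y ∝ x^a by ordinary least squares in log-log space;
    returns the exponent a and its formal standard error."""
    coeffs, cov = np.polyfit(np.log10(x), np.log10(y), deg=1, cov=True)
    return coeffs[0], np.sqrt(cov[0, 0])

# e.g. size-density relation:   a, a_err = powerlaw_exponent(R_eq, n_H2)
# e.g. size-linewidth relation: a, a_err = powerlaw_exponent(R_eq, sigma_v)
```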
### Free-fall and crossing times
The free-fall timescale, \(t_{\rm ff}\), represents the characteristic time that it would take a body to collapse under its own gravitational attraction. As mentioned above, \(t_{\rm ff}\) depends solely on the mean density of the gas and on the relative abundances of its chemical species (through the mean molecular weight). In terms of the molecular hydrogen mean number density discussed in the previous sub-section,
\[t_{\rm ff}=\sqrt{\frac{3\pi}{32G\mu m_{p}\overline{n}(H_{2})}}. \tag{8}\]
Figure 14: Size-linewidth (right panel), size-virial parameter (central panel), and size-density (left panel) relationships for the CHIMPS and COHRS sources. The size parameter is the scaled intensity-weighted rms size, \(\eta R_{\sigma}\), for which \(\eta=2.0\). Fitting lines are shown where a correlation is found between the quantities considered. The dashed lines indicate the Larson relations.

The crossing timescale, \(t_{\rm cross}\), corresponds to the time it takes a disturbance to cross the system at the sound/signal speed in the medium. The length of \(t_{\rm cross}\) is directly proportional to the size of the system and inversely proportional to the velocity dispersion of the gas:

\[t_{\rm cross}=\frac{2R_{\rm eq}}{\sigma_{\rm v}}. \tag{9}\]
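Equations 8 and 9 translate directly into code. In the sketch below the mean molecular weight is set to an illustrative value of \(\mu=2.8\); the value actually adopted in this work is the one defined earlier in the paper.

```python
import numpy as np
import astropy.units as u
from astropy.constants import G, m_p

MU = 2.8  # illustrative mean molecular weight; use the value adopted in the paper

def free_fall_time(n_h2):
    """Eq. 8: t_ff = sqrt(3 pi / (32 G mu m_p n(H2)))."""
    return np.sqrt(3.0 * np.pi / (32.0 * G * MU * m_p * n_h2)).to(u.Myr)

def crossing_time(r_eq, sigma_v):
    """Eq. 9: t_cross = 2 R_eq / sigma_v."""
    return (2.0 * r_eq / sigma_v).to(u.Myr)

# Example: n(H2) = 500 cm^-3, R_eq = 5 pc, sigma_v = 1 km/s
print(free_fall_time(500 * u.cm**-3), crossing_time(5 * u.pc, 1 * u.km / u.s))
```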
The distributions of these timescales for the two segmentations of CHIMPS and COHRS are compared in Fig. 15. FW and SCIMES crossing times present similar distributions.
The left and right tails in the distribution of crossing times in SCIMES reflect the corresponding distribution of velocity dispersions. The distribution of free-fall timescales evidences the lower average surface densities of the larger SCIMES clouds, and may also originate from the elongated clouds where the volume density was underestimated by the spherical approximation with radius \(R_{\rm eq}\).
### Excitation temperature
Excitation temperatures are assigned to clouds by considering the mean temperature contained within the cloud assignments in the maps constructed in Section 2. The distributions of excitation temperature in the FW and SCIMES segmentations of the \({}^{13}\)CO (3-2) emission in CHIMPS are shown in panel E of Fig. 12. The temperatures from the SCIMES catalogue are systematically lower than FW temperatures. The average SCIMES excitation temperature is \(10.19\pm 0.040\) K while FW clouds have a mean of \(11.54\pm 0.039\) K. Although SCIMES detects, in general, more diffuse and thus potentially warmer material, the higher average temperature estimated in the FW sample is likely to be due to the SCIMES clouds being larger and thus extending to lower CO brightnesses, which results in a lower inferred excitation temperature when the beam filling is assumed to be \(\sim 1\). CHIMPS excitation temperatures do not vary significantly with distance (Fig. 13). As a function of the Galactocentric distance, the two segmentations show no obvious (difference in) biases and no overall gradient of the excitation temperature (the initial decreasing gradient with Galactocentric distance cannot be confirmed due to the lack of information at distances shorter than 3.5 kpc from the Galactic centre in CHIMPS). This contrasts with the probable gradient in the interstellar radiation field (Maciell et al., 2007), dominated by cosmic-ray heating or (less likely) by internal heating. The density regime probed by CHIMPS, however, provides enough shielding to contrast this effect.
At the arm radii (\(\sim 4.5\), \(\sim 6.5\), and \(\sim 7.5\) kpc; see Section 4) we only see an increase in source counts, which extends the detected wings of the scatter distribution to higher \(T_{\rm ex}\) but does not result in a significant change in the mean.
The high-temperature outliers in the SCIMES segmentation have coordinates and distances corresponding to those of the star-forming region W49 (\(l\approx 43.2^{\circ}\), \(b\approx 0.0^{\circ}\) at 11.1 kpc). This region is considered extreme as its column densities, dust temperatures, and luminosity per unit mass (Nagy et al., 2015) are consistent with those found in luminous and ultraluminous infrared galaxies (Solomon et al., 1997; Nagy et al., 2012). The region also has an overabundance of ultracompact HII regions (Urquhart et al., 2013).
### Turbulent pressure
The three-dimensional velocity dispersion (\(3\sigma_{\rm v}^{2}\)) can be decomposed into its thermal
\[\sigma_{\rm T}^{2}=k_{\rm B}T_{\rm ex}/\mu m_{\rm p} \tag{10}\]
and non-thermal (turbulent)
\[\sigma_{\rm NT}^{2}=3\sigma_{\rm v}^{2}-\sigma_{\rm T}^{2} \tag{11}\]
components, where the one-dimensional velocity dispersion is defined in sub-section 6.4.
The turbulent pressure is then defined as
\[P_{\rm turb}/k_{\rm B}=\mu m_{\rm p}\,\overline{n}(H_{2})\,\sigma_{\rm NT}^{2}/k_{\rm B}\quad{\rm K\,cm^{-3}}. \tag{12}\]
This is the internal pressure of the clouds arising from the turbulent motions of molecular gas. The turbulent pressure distributions in panel F of Fig. 12 show that SCIMES sources tend to have lower pressure than their FW counterparts. The lower pressures appearing in the SCIMES distribution are likely to be a consequence of SCIMES' smaller velocity dispersions \(\sigma_{\rm v}\) entering definition 12 since at larger scales we would expect clouds to manifest higher turbulent pressure. We notice that the three peaks characterising the distribution of turbulent pressures in the distance-limited sample are likely to arise from the difference in the environmental density of the sources located within spiral arms (Bonnell et al., 2006). The median values of the two distributions are comparable with
Figure 15: Distributions of the crossing and free-fall timescales associated with the CHIMPS \({}^{13}\)CO (3 - 2) sources in the FW (blue), SCIMES (red), and COHRS (black) catalogues.
SCIMES having a median of \(2.5\times 10^{5}\) K cm\({}^{-3}\) and FW of \(4\times 10^{5}\) K cm\({}^{-3}\). Both these values agree with the total mid-plane pressure in the Solar neighbourhood (\(\sim 10^{5}\) K cm\({}^{-3}\)).
The distributions of \(P_{\rm turb}/k_{\rm B}\) with heliocentric and Galactocentric distance are given in Fig. 13. The range of \(P_{\rm turb}/k_{\rm B}\) covered by both distributions is consistent with the mid-plane values (Rathborne et al., 2014).
The thermal pressure can be defined as
\[P_{\rm thermal}=\overline{n({\rm H_{2}})}\,k_{\rm B}\,T_{\rm ex}. \tag{13}\]
Thermal pressure distributions are presented in panel G of Fig. 12. The turbulent pressures are found to be \(\sim 60\) times greater than the corresponding thermal pressures. Lower average densities result in lower pressures associated with the COHRS sample.
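Equations 10-13 can be evaluated per source as in the minimal sketch below (again with an illustrative \(\mu=2.8\); use the value adopted in the paper). The ratio \(\sigma_{\rm NT}/\sigma_{\rm T}\) of the two components is the Mach number discussed in the next sub-section.

```python
import astropy.units as u
from astropy.constants import k_B, m_p

MU = 2.8  # illustrative mean molecular weight; use the value adopted in the paper

def sigma_thermal_sq(t_ex):
    """Eq. 10: sigma_T^2 = k_B T_ex / (mu m_p)."""
    return (k_B * t_ex / (MU * m_p)).to(u.km**2 / u.s**2)

def sigma_nonthermal_sq(sigma_v, t_ex):
    """Eq. 11: sigma_NT^2 = 3 sigma_v^2 - sigma_T^2."""
    return (3.0 * sigma_v**2).to(u.km**2 / u.s**2) - sigma_thermal_sq(t_ex)

def turbulent_pressure(n_h2, sigma_v, t_ex):
    """Eq. 12: P_turb / k_B = mu m_p n(H2) sigma_NT^2 / k_B, in K cm^-3."""
    return (MU * m_p * n_h2 * sigma_nonthermal_sq(sigma_v, t_ex) / k_B).to(u.K / u.cm**3)

def thermal_pressure(n_h2, t_ex):
    """Eq. 13: P_thermal / k_B = n(H2) T_ex, in K cm^-3."""
    return (n_h2 * t_ex).to(u.K / u.cm**3)

# Example: n(H2) = 500 cm^-3, sigma_v = 1 km/s, T_ex = 10 K
print(turbulent_pressure(500 * u.cm**-3, 1 * u.km / u.s, 10 * u.K))
print(thermal_pressure(500 * u.cm**-3, 10 * u.K))
```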
### Mach numbers
Panel H of Fig. 12 represents the distributions of Mach numbers \(\mathcal{M}=\sigma_{\rm NT}/\sigma_{\rm T}\) of the sources in the FW and SCIMES segmentations. The distributions look similar, both peaking in the supersonic regime (\(\mathcal{M}\sim 5\)) and extending out to higher Mach numbers.
The difference between the distributions vanishes as their tails flatten out past \(\mathcal{M}=20\), where fewer clouds large enough to sustain these hypersonic regimes are found.
## 7 Conclusions
This article presents a cross-correlation of the properties of individual clouds in two different segmentations of the \({}^{13}\)CO (\(3-2\)) emission in the CHIMPS survey: one obtained with the watershed algorithm FillWalker and the other with the dendrogram-based SCIMES. These methodologies yield different numbers of molecular clouds (1586 with SCIMES while FW yields a reliable set of 3665 sources) but produce largely consistent results with similar ranges in masses, equivalent radii, mean number densities, and velocity dispersions. The distributions of mean number densities, masses, virial parameters, and dynamic timescales all reflect the differences in volumes and geometries found in the two segmentations. A word of warning should however be spent on the cross-correlation of the physical properties of individual sources between the two catalogues. Different definitions, numerical implementations, and design choices within each method influence the estimated value of a given physical quantity and those derived from it. Additionally, the SCIMES extraction of \({}^{12}\)CO (\(3-2\)) in COHRS is considered as a term of comparison with a different tracer over the same area spanned by CHIMPS. This particular transition of the \({}^{12}\)CO isotopologue is, in general, a more optically thick tracer than \({}^{13}\)CO (\(3-2\)). In practice, this implies that the COHRS segmentation traces lower-density regions of the molecular clouds, that are not detected in CHIMPS. The line-widths for the COHRS clouds will thus be naturally wider than those found through both SCIMES and FW (Section 6.4). Probing lower-density emission, COHRS detects larger structures than CHIMPS. To a lesser degree, the inconsistent results in the SCIMES segmentations of \({}^{12}\)CO and \({}^{13}\)CO emission can also be traced back to the different SCIMES parameterisations chosen for the segmentations in Colombo et al. (2015). Since the optimum parameter values are determined, to a large extent, by the characteristics of the data, these two effects are closely related.
A closer look at the distribution of the assigned SCIMES heliocentric distances (Fig. 3) and the independently generated Galactocentric distances reveals that both distributions display the same features as the FW assignments. The difference in distance assignment has supposedly little influence on the distance-dependent physical properties. Size-linewidth, size-density (Fig. 14) and size-virial parameter plots for the CHIMPS clouds, also reveal similar relations. An identical situation is reported by (Lada & Dame, 2020) in their studies of mass-size relations (Larson, 1981) and the GMC surface densities in Galactic clouds. Lada & Dame (2020) compared data from the SCIMES (Rice et al., 2020) and FW (Miville-Deschenes et al., 2017) extractions of \({}^{12}\)CO in the low-resolution CfA-Chile survey (Dame et al., 2001). The mass-size relation they found did not appear to be particularly sensitive to differences in the two methodologies used for the emission segmentation.
Although the two segmentation methods produce similar statistical results when applied to the full survey with the chosen parameterisation, on the scale of individual clouds the situation may differ. The SCIMES extraction (subsamples with SNR > 10) includes larger sources than FW both in crowded and sparse environments. Notice that the full SCIMES catalogue also includes a significant number of smaller sources, most likely found in sparse fields, these clouds have no FW counterparts, see sub-section 6.1. This feature underscores the difference in the paradigms that characterise the two methods and the difficulty of establishing a one-to-one correspondence across catalogues produced by different algorithms. In crowded fields such as large star-formation complexes like W 43 (\(l=30\aas@@fstack{\stack{\stack{\stack{\stack{\stack{\stack{\stack{\stack{\stackstack{\stack{\
would allow for the identification of FW clouds within the SCIMES dendrograms, matching them with branches and sub-branches.
By the definition of emission dendrogram, the extraction produced by SCIMES is more sensitive to the overall gas distribution in the region in which the segmentation is performed. For \({}^{12}\)CO COHRS, SCIMES produces structures that are \(\sim\)100 times larger in \((l,b,v)\) volume. This is unsurprising, because of the high optical depth, self-absorption and lower critical density associated with \({}^{12}\)CO.
The mass spectrum in Fig. 10 shows that SCIMES and FW \({}^{13}\)CO power-law fit gradients are similar, however, the COHRS spectrum is flatter than either. SCIMES and FW mass distributions are similar for \({}^{13}\)CO. COHRS \({}^{12}\)CO clouds, on the other hand, are two orders of magnitude more massive. The mass-radius relation is similar for the three samples considered. The linear sizes of the FW and SCIMES sources are similar, but clouds in the distance-limited SCIMES sample have an overall larger size. COHRS clouds are \(\sim\)10 times larger than the \({}^{13}\)CO sources.
When mean excitation temperatures are considered, the SCIMES sources present lower temperatures than those found in the FW extraction. Turbulent pressure reduces in SCIMES. A similar behaviour is observed for thermal pressure, but its distribution presents a tail to lower pressures in SCIMES clouds. COHRS clouds have \(\sim\)10 times lower pressures.
The distribution of volume densities is similar for all extractions and species, but the COHRS distribution presents a lower tail. SCIMES clouds have smaller virial parameters than the FW ones. The values of \(\alpha_{\rm vir}\) in COHRS are \(\sim\)10 times higher. SCIMES and FW sources also display similar Mach numbers, with COHRS being slightly lower.
Our comparison thus suggests that there are some systematic differences in the physical parameters of clouds that result from the extraction method. The very large differences between the \({}^{12}\)CO and the \({}^{13}\)CO samples are due to high optical depth and lower critical density in the former.
## 8 Acknowledgements
The authors would like to thank Dario Colombo for his help with the SCIMES Python package. The Starlink software (Currie et al., 2014) is currently supported by the East Asian Observatory. This research made use of SCIMES, a Python package to find relevant structures into dendrograms of molecular gas emission using the spectral clustering approach (Colombo et al., 2015). SCIMES is an astropy-affiliated package (Astropy Collaboration et al., 2013, 2018).
## Data Availability
The data used for this paper are available from the archives of the CHIMPS (Rigby et al., 2019). The SCIMES catalogue used for the analysis in this article is available to download from the CANFAR archive4.
Footnote 4: [https://www.canfar.net/storage/list/](https://www.canfar.net/storage/list/)
AstroDataCitationBOI/CISTI.CANFAR/23.0003/data
|
2301.04652
|
Estimate Deformation Capacity of Non-Ductile RC Shear Walls using
Explainable Boosting Machine
|
Machine learning is becoming increasingly prevalent for tackling challenges
in earthquake engineering and providing fairly reliable and accurate
predictions. However, it is mostly unclear how decisions are made because
machine learning models are generally highly sophisticated, resulting in opaque
black-box models. Machine learning models that are naturally interpretable and
provide their own decision explanation, rather than using an explanatory, are
more accurate in determining what the model actually computes. With this
motivation, this study aims to develop a fully explainable machine learning
model to predict the deformation capacity of non-ductile reinforced concrete
shear walls based on experimental data collected worldwide. The proposed
Explainable Boosting Machines (EBM)-based model is an interpretable, robust,
naturally explainable glass-box model, yet provides high accuracy comparable to
its black-box counterparts. The model enables the user to observe the
relationship between the wall properties and the deformation capacity by
quantifying the individual contribution of each wall property as well as the
correlations among them. The mean coefficient of determination R2 and the mean
ratio of predicted to actual value based on the test dataset are 0.92 and 1.05,
respectively. The proposed predictive model stands out with its overall
consistency with scientific knowledge, practicality, and interpretability
without sacrificing high accuracy.
|
Zeynep Tuna Deger, Gulsen Taskin Kaya, John W Wallace
|
2023-01-11T09:20:29Z
|
http://arxiv.org/abs/2301.04652v1
|
# Estimate Deformation Capacity of Non-Ductile Rc Shear Walls Using Explainable Boosting Machine
###### Abstract
Machine learning is becoming increasingly prevalent for tackling challenges in earthquake engineering and providing fairly reliable and accurate predictions. However, it is mostly unclear how decisions are made because machine learning models are generally highly sophisticated, resulting in opaque black-box models. Machine learning models that are naturally interpretable and provide their own decision explanation, rather than using an explanatory, are more accurate in determining what the model actually computes. With this motivation, this study aims to develop a fully explainable machine learning model to predict the deformation capacity of non-ductile reinforced concrete shear walls based on experimental data collected worldwide. The proposed Explainable Boosting Machines (EBM)-based model is an interpretable, robust, naturally explainable glass-box model, yet provides high accuracy comparable to its black-box counterparts. The model enables the user to observe the relationship between the wall properties and the deformation capacity by quantifying the individual contribution of each wall property as well as the correlations among them. The mean coefficient of determination \(R^{2}\) and the mean ratio of predicted to actual value based on the test dataset are 0.92 and 1.05, respectively. The proposed predictive model stands out with its overall consistency with scientific knowledge, practicality, and interpretability without sacrificing high accuracy.
_Keywords: Explainable boosting machine, glass-box model, feature selection, general additive model, reinforced concrete shear wall, deformation capacity, interpretability_
## 1 Introduction
Shear walls are typically utilized as the primary elements to resist lateral loads in reinforced concrete buildings. Towards capacity design assumptions, shear walls are designed to exhibit ductile behavior by providing adequate reinforcement and proper detailing. However, experimental studies have shown that walls with an aspect ratio smaller than 1.5 (i.e., squat walls) and those with poor reinforcement and detailing, despite their higher aspect ratio, end up showing brittle failure (e.g., diagonal tension, web crushing) [1, 2, 3]. Such walls are often observed in buildings not designed according to modern seismic codes and are prone to severe damage [4, 5]. As the performance-based design and assessment approach has gained importance concordant
with hazard mitigation efforts, there has been an increasing need and demand for reliable models to predict structural behavior under seismic actions. This objective is particularly important for walls that exhibit shear behavior as the nonlinear deformation capacity of such walls is assumed to be zero, potentially leading to technical and economical over-conservation. More realistic solutions can be achieved if their behavior is accurately estimated and considered in seismic performance evaluation.
The prediction of structural behavior has been achieved through the use of predictive equations or models that are developed based on available experimental data. Recently, machine learning (ML) methods have gained significant attention structural/earthquake engineering field and have demonstrated promising results despite the scarcity of data (compared to much larger data available in fields such as computer vision and image processing). Black-box models, with their high complexity and nonlinearity, often represent the input-output relationship better than the interpretable models in classification and regression applications. However, they are not necessarily consistent with true physical behavior. There are examples of misleading conclusions of black-box models in scientific and engineering applications [6, 7, 8]. Therefore, despite the high accuracy they achieve, black-box models are not completely accepted in earthquake engineering society. To leverage the advantages of the developments in artificial intelligence without ignoring the physical behavior, there has been recent research efforts that incorporates black-box machine learning methods with physical knowledge [8, 9, 10, 11]. This study takes this issue a step further and integrates an _explainable_ machine learning approach (versus black-box) with existing physics-based understanding of seismic behavior to estimate the deformation capacity of non-ductile shear walls.
## 2 Literature Review
Research efforts in the literature to estimate wall deformation capacity have produced empirical models, some recently adopted by building codes [12]; however, they are relatively limited compared to other behavior features such as shear strength or failure mode. Earlier models were mainly developed using a limited number of experimental results [13, 14] or were trained using a single dataset; that is, they were not trained and tested based on unmixed data [12, 15]. Over time, as machine learning is embraced in the earthquake engineering field [16, 17, 18, 19, 20, 11] and new experiments are conducted, more advanced models have been developed. Yet, two main issues are encountered: (i) Some models used simple approaches such as linear regression for the sake of interpretability [21] and sacrificed overall accuracy (or had large dispersion). One might think that accurate models that predict relatively complicated behavior attributes can only be achieved by increasing model complexity; however, literature studies have shown that this may cause problems with the structure and distribution of the data [22, 23]. More importantly, urging the model to develop complex relationships to achieve higher performance typically leads to black-box models where internal mechanisms include highly nonlinear, complex relations. (ii) Such black-box models achieve high overall accuracy at the cost of explainability [18]. Researchers that acknowledge the significance of interpretability employed model-agnostic, local or global explanation methods (e.g., SHapley Additive exPlanation, Local Interpretable Model-agnostic Explanations) to interpret the decision mechanism of their models [11]. Such algorithms are not fully verified [24, 25]; besides, they are approximate approaches. Moreover, despite their broadening use and high accuracy, the black-box models are not entirely accepted in the earthquake engineering society as their internal relations are opaque and, in some cases, not entirely reliable [26]. Therefore, it is critical to understand how the model makes the decision/estimation to (i) verify that the model is physically meaningful, (ii) develop confidence in the predictive model, and (iii) broaden existing scientific knowledge with new insights.This study addresses this need and fills this important research gap by using domain-specific knowledge to evaluate and validate the decisions made by ML methods. Unlike the existing ML-based predictive models ([11, 18]), the proposed model aims particularly at the deformation capacity of non-ductile shear walls and is naturally transparent and interpretable.
Concerns regarding the trustworthiness and transparency of the black-box models motivated the development of a relatively new research area known as explainable artificial intelligence (XAI) [27, 28]. The XAI aims to provide a set of machine learning (ML) techniques for building more comprehensible and understandable models while maintaining a high level of learning performance. The strategies used in XAI are divided
into two main categories: explaining existing black-box models (post-hoc explainability) and generating glass-box (transparent) models. In the former, interpretability is confined to the usage of certain so-called explanatory algorithms that are employed to explain a black-box model, while in the latter, a predictive model is fully comprehensible and interpretable by humans. A model should have certain qualities to be considered a transparent model such as decomposability, algorithmic transparency, and simulatability [29]. The decomposability relates to the ability to explain each model component in terms of the inputs' contributions or correlations, whereas simulatability refers to the number of parameters (input) in the model representation (the less is the more understandable). The algorithmically transparent models enable a clear comprehension of the model' behavior for predicting any given output from its input data. Therefore, transparent models are highly needed approaches in fields where decisions are critical, but their performances are typically very low. Machine learning models that can maintain the tradeoff between performance and explainability, i.e., converging to the performance of black-box models while still providing explainability, would significantly address the demands in earthquake engineering society. In this context, explainable boosting machine (EBM) [30], a recently developed method belonging to the family of Generalized Additive Models (GAMs) [31], is a highly accurate and transparent ML method delivering an explicit and fully explainable predictive model. The EBM has been utilized in the literature to solve a variety of problems, including detecting common flaws in data [32], diagnosing COVID-19 using blood test variables [33], predicting diseases such as Alzheimer [34], or Parkinson [35], and has shown to outperform black-box models with the additional benefit of being an inherently explainable predictive model.
In this study, the EBM is used for the first time in the earthquake engineering field to construct an EBM-based predictive model for estimating deformation capacity on non-ductile RC shear walls. The inputs of the predictive model are designated as the shear wall design properties (e.g., wall geometry, reinforcing ratio), whereas the output is one of the constitutive components of the nonlinear wall behavior, that is, the deformation capacity. The main contributions of this research are highlighted as follows:
* A fully transparent and interpretable predictive model is developed to estimate the deformation capacity of RC shear walls that failed in pure shear or shear-flexure interaction.
* The proposed model meets all desired properties, i.e., decomposability, algorithmic transparency, and simulatability, without compromising high performance.
* This study integrates novel computational methods (i.e., EBM) and domain-based knowledge to formalize complex engineering knowledge. The proposed model's overall consistency with a physics-based understanding of seismic behavior is verified.
## 3 The RC Shear Wall Database
The experimental data used in this research is a sub-assembly of the wall test database utilized in Deger and Taskin (2022) [19] with 30 additional data [36, 37]. As the main focus is to estimate the deformation capacity of walls governed by shear or shear-flexure interaction, walls that did not show so-called shear failure indications are excluded from the database, resulting in 286 specimens of use for this research. All specimens were tested under quasi-static cyclic loading, whereas none was retrofitted and re-tested. The database consists of wall design parameters, which are herein designated as the input variables of the machine learning problem, namely: wall geometry (\(t_{w}\), \(l_{w}\), \(h_{w}\)), shear span ratio (\(M/Vl_{w}\)), concrete compressive strength (\(f_{c}\)), longitudinal and transverse reinforcing ratios at web (\(f_{yl}\), \(f_{yt}\)), longitudinal and transverse reinforcing ratios at boundary elements (\(f_{ybl}\), \(f_{ysh}\)), axial load ratio (\(P/A_{g}f_{c}\)), shear demand (or strength) at the section (\(V_{max}\)), cross-section type (rectangular, barbell-shape, or flanged), curvature type (single or double). It is noted that single curvature and double curvature correspond to the end conditions of the specimen, i.e., cantilever and fixed-fixed, respectively. Distributions of the input variables are presented in Fig.1 along with their box plots (shown in blue).
The output variable of the ML problem, the deformation capacity, is taken directly as the reported ultimate displacement prior to its failure if the specimen is tested until failure. Otherwise, it is assumed as the displacement corresponding to \(0.8V_{max}\) as suggested by Park, 1989. It is noted that failure displacement
was taken as the total wall top displacement and was not separated into shear and flexural deformation components.
## 4 Explainable Boosting Machines
Explainable Boosting Machines (EBM) is a state-of-the-art machine learning technique designed as accurately as random forests and boosted trees while also being simple to understand [30, 38]. The EBM delivers a complete explainable learning model that belongs to the family of Generalized Additive Models (GAMs) [39]:
\[g(f(x_{1},\ldots,x_{n}))=f_{0}+f_{1}(x_{1})+f_{2}(x_{2})+\ldots,f_{n}(x_{n}) \tag{1}\]
where \(f_{0}\) is an intercept, and each \(f_{j}\) is called a shape function, representing the individual effect of the \(x_{j}\)-th variable on the model output, \(f(x_{1},\ldots,x_{n})\). The \(g\) is utilized as a link function, adapting the model to different settings, e.g., identity function for regression and logistic function for classification. The intercept, \(f_{0}\), is calculated as the mean response of all the outputs. Because the shape functions are trained independently for each input variable, the model is additive (decomposable), allowing to separately analyze the effect of each input variable on the model output. The EBM is designed to improve the performance of the standard GAM while maintaining its interpretability.
Generalized Additive Models are more comprehensible than black-box models, but the analytical form of the shape functions is typically unknown, making it unsuitable for machine learning purposes. Although other analytical functions, such as splines or orthogonal polynomials, can be offered for defining shape functions, they are frequently less accurate when representing a nonlinear model [40]. The EBM uses shallow trees to construct the shape functions; therefore, it easily captures the nonlinearity of the data. Each input variable (\(x_{i}\)) is modeled with ensemble trees such as bagging and gradient boosting. As a result, rather than employing the spline method, which is prevalent in traditional GAMs, the function associated with each input variable or interaction is produced from a vast set of shallow trees.
The EBM offers both local and global explanations of the learning model as each variable importance is estimated as the average absolute value of the predicted score. Moreover, each shape function can be visualized (algorithmically transparent); therefore, it is possible to observe the effects of the particular feature at certain intervals. In the inference phase, all the terms in Eq.1 are added up and passed through the link function to \(g\) to compute the final prediction as shown in Fig. 2. In other words, individual predictions are generated using the shape functions, \(f_{i}\), which act as a lookup table.
To demonstrate which feature had the largest impact on the individual forecast, the contributions can be sorted and shown using the principles of additivity and modularity.
The EBM's performance can be improved by including pairwise effects between variables in the model representation. For better performance, additional interactions can be incorporated; however, this may result in a more complex model with lower generalization performance due to the increased number of model parameters to be trained. The pairwise interactions are included in GA\({}^{2}\)Ms [41], which is a second-order
Figure 1: Distribution of the input variables in the database.
additive model:
\[g(E[y])=f_{0}+\sum_{i=1}^{n}f_{i}(x_{i})+\sum_{k=1}^{K}f_{k}(x_{k_{1}},x_{k_{2}}) \tag{2}\]
where \(K\) pairs of features \((k_{1},k_{2})\) are chosen greedily (FAST algorithm) [42]. The pairwise interaction \(f_{ij}(x_{i},x_{j})\) could be rendered as a heatmap on the two-dimensional \(x_{i}\), \(x_{j}\) - plane, still providing high intelligibility. Even though adding more interactions does not affect the model's explainability, the final prediction model may be less comprehensible due to a large number of interactions (less simulatability).
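As a concrete illustration of Eqs. 1-2, the sketch below fits a GA\({}^{2}\)M-style model with the InterpretML implementation of EBM (the `interpret` package). The data are a synthetic stand-in: in this study \(X\) holds the wall design properties and \(y\) the ultimate displacements, and parameter names may differ slightly between package versions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingRegressor

# Synthetic stand-in for the 286-specimen database (X: wall properties, y: displacements).
rng = np.random.default_rng(0)
X = rng.uniform(size=(286, 4))
y = 35.0 + 10.0 * X[:, 0] - 8.0 * X[:, 1] + rng.normal(scale=1.0, size=286)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.10, random_state=0)

ebm = ExplainableBoostingRegressor(interactions=2)  # two pairwise terms, as in Eq. 2
ebm.fit(X_tr, y_tr)
y_hat = ebm.predict(X_te)

global_expl = ebm.explain_global()          # shape functions f_i and term importances
local_expl = ebm.explain_local(X_te, y_te)  # per-sample additive contributions
```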
## 5 Development of the Predictive Model
### Overall Performance of EBM
To assess whether the method compromises accuracy for the sake of interpretability, the performance of the EBM model is compared to three state-of-the-art black-box machine learning models, namely: XGBoost [43], Gradient Boost [44], Random Forest [45], and two glass-box models, namely Ridge Linear Regression [46], Decision Tree [47]. All the implementations are carried out in a Python environment. For all ML models, the entire database, including all twelve input variables (ten variables from Fig.1 and two binary coded variables for curvature type and cross-section type), is randomly split into training and test datasets with a ratio of 90% and 10%, respectively.
Tuning of the hyperparameters, such as learning rate, number of leaves, number of interactions, and so on, typically affects the performance of the corresponding regression model. For hyperparameter tuning, a 10-fold cross-validation technique (Fig. 3) is used, wherein a subset of the data is kept as validation data, and the model's performance is evaluated using various hyperparameter settings on the validation set. This method prevents the tuning from overfitting the training dataset.
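A sketch of this tuning step is given below, continuing from the training split of the previous snippet. The grid values and hyperparameter names are illustrative assumptions rather than the settings used in the study; since the EBM regressor follows the scikit-learn estimator interface, standard tools such as GridSearchCV can be used.

```python
from sklearn.model_selection import GridSearchCV
from interpret.glassbox import ExplainableBoostingRegressor

# Illustrative grid; the actual search space used in the study is not reproduced here.
param_grid = {
    "learning_rate": [0.005, 0.01, 0.05],
    "max_leaves": [2, 3],
    "interactions": [0, 2, 5],
}
search = GridSearchCV(ExplainableBoostingRegressor(), param_grid, cv=10, scoring="r2")
search.fit(X_tr, y_tr)            # X_tr, y_tr: the 90% training split
best_ebm = search.best_estimator_
```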
For performance evaluations, the following three metrics are used over "unseen" (i.e., not used in the training process) test data sets of ten random train-test data splittings: coefficient of determination (\(R^{2}\)), relative
Figure 3: Illustration of k-fold cross-validation technique, where k is set to 5.
Figure 2: Inference phase of EBM.
error (\(RE\)), and prediction accuracy (\(PA\)), as given in Eqs. 3, 4, and 5, respectively.
\[R^{2} = 1-\frac{\sum_{i}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i}(y_{i}-\bar{y})^{2}} \tag{3}\] \[RE = \frac{\sum_{i}|\hat{y}_{i}-y_{i}|}{\sum_{i}|y_{i}|}\times 100\% \tag{4}\] \[PA = \frac{1}{m}\sum_{i}\frac{\hat{y}_{i}}{y_{i}} \tag{5}\]
where \(y_{i}\), \(\bar{y}\), \(\hat{y}_{i}\), and \(m\) refer to the actual output, the mean value of the \(y_{i}\)s, the predicted output of the corresponding regression model, and the number of samples in the test dataset, respectively.
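The three metrics are straightforward to compute; a minimal sketch is given below.

```python
import numpy as np

def r2_score(y, y_hat):
    """Eq. 3: coefficient of determination."""
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

def relative_error(y, y_hat):
    """Eq. 4: relative error, in percent."""
    return np.sum(np.abs(y_hat - y)) / np.sum(np.abs(y)) * 100.0

def prediction_accuracy(y, y_hat):
    """Eq. 5: mean ratio of predicted to actual value."""
    return np.mean(y_hat / y)

y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([11.0, 19.0, 33.0])
print(r2_score(y_true, y_pred), relative_error(y_true, y_pred), prediction_accuracy(y_true, y_pred))
```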
Mean performance scores of the ML models are summarized in Table 1, along with their dispersion demonstrated in box plots in Fig. 4. The results indicate that EBM achieves comparable performance with its black-box counterparts, with a correlation of determination of \(R^{2}=0.83\), a relative error of \(0.41\%\), and a \(PA=1.21\). As seen in Fig. 4, the low \(R^{2}\), \(RE\), and \(PA\) deviations of EBM imply that reliable predictions can be achieved regardless of the selected train-test splitting and verify the model's robustness. Mean prediction accuracy (PA) shows around \(20\%\) of overestimation for EBM and the black-box methods, suggesting that some input variables are potentially noisy. Compared to transparent models, the EBM outperforms both the Decision Tree (DT) and Ridge Linear Regression (RLR) across all three metrics, indicating that it is far superior to the traditional glass-box approaches.
The most remarkable advantage of the EBM method over the others is that it provides full explainability without sacrificing accuracy. Unlike other methods, EBM enables the user to understand how the prediction is made and which parameters are essential in the decision-making process. Therefore, the EBM method is selected as the baseline algorithm for the rest of the analysis to propose a prediction model for estimating the deformation capacity based on the following criteria: developing a model with fewer input variables (high simulatability), achieving high accuracy, and ensuring physical consistency.
### The Proposed EBM-based Predictive Model
The importance of the wall properties in predicting the deformation capacity is evaluated based on additive term contributions visualized in Fig.5. Results reveal that \(t_{w}\) and \(M/Vl_{w}\) (or \(h_{w}/l_{w}\)) have the greatest impact on individual predictions. This is consistent with the mechanics of the behavior, as walls with smaller thickness are shown to be more susceptible to lateral stiffness degradation due to concrete spalling, leading to a failure caused by lateral instabilities or out-of-plane buckling [48, 49]. The shear span ratio (or aspect ratio) also has a significant impact on deformation capacity: the higher the shear span ratio gets, the more slender the wall is, and the higher the deformations it can typically reach prior to its failure. The least important wall parameters, on the other hand, are identified as curvature type, cross-section type, and concrete compressive strength.
Another critical aspect considered in this study is to develop the predictive model with as few input variables as possible. With that, the computational workload is aimed to be reduced, and a more practical and interpretable model is proposed for potential users. To achieve this, knowledge-based-selected combinations
\begin{table}
\begin{tabular}{l l l l l l l} & & \multicolumn{3}{c}{**Black-box**} & \multicolumn{2}{c}{**Glass-box**} \\ & **EBM** & **XGBoost** & **GB** & **RF** & **RLR** & **DT** \\ \hline
**R2** & 0.83 & 0.80 & 0.79 & 0.83 & 0.41 & 0.67 \\
**RE (\%)** & 0.41 & 0.40 & 0.32 & 0.28 & 0.67 & 0.47 \\
**PA** & 1.21 & 1.17 & 1.20 & 1.15 & 2.0 & 1.9 \\ \hline \end{tabular}
\end{table}
Table 1: Mean performance scores based on the test datasets over ten random splittings.
of four-to-five features are exhaustively evaluated to reach performance scores as high as when twelve features are included.
EBM can achieve similar performance scores using four features: \(M/Vl_{w}\), \(P/A_{g}f_{c}\), \(t_{w}\), and \(V_{max}\). Including additional features (e.g., \(\rho_{l}\), \(\rho_{bl}\), \(\rho_{sh}\)) deemed impactful by EBM as well as experimental results [50, 51] has only a modest effect on the overall performance. The performances of other methods are close to their benchmark model (including twelve features), whereas the glass-box methods are affected by the reduction of input size and show much lower performances. The mean \(R^{2}\) drops to 0.33 for the Ridge Linear Regression, implying that the input-output relation is not linear.
The proposed EBM-based predictive model is selected to achieve the highest \(R^{2}\) with a prediction accuracy as close to 1.0 as possible. The correlation plots are presented in Fig. 6 for training and test data sets, where
Figure 4: Comparison of performance scores of ML methods for test samples based on ten random train splittings.
Figure 5: EBM Global interpretation for twelve features included
scattered data are concentrated along the \(y=x\) line, demonstrating that the proposed model can make accurate predictions. It should be noted that the distribution of the residuals is concentrated around zero.
As discussed above, the proposed model is an additive model in which each relevant feature is designated a quantitative term contribution. The EBM allows the user to explore the contribution of each feature to the model by plotting their shape functions (Fig.7a-d). As discussed above, the EBM method employs multiple decision-tree learning models; therefore, inclines and declines are undertaken with jump-looking piece-wise constant functions (versus smooth curves). The values, called scores, are read from these functions, and those from heat maps (Fig.7e-f) representing pairwise interactions (i.e., between two features) are summed up to calculate the prediction. The gray zones along the shape functions designate error bars that indicate the model's uncertainty and data sensitivity. This typically occurs in cases of sparsity or the presence of outliers within the associated region.
The shape functions in Fig.7 also indicate their correlations with the output in a graphical representation. For example, nonlinear patterns that can not be observed in linear approaches can be easily interpreted [52], which provides new insights to broaden existing experimental-based knowledge. For example, the shear demand \(V_{max}\) (Fig.7d) reduces ductility; thus deformation capacity, as demonstrated by experimental results [53] and suggested by ASCE 41-17 acceptance criteria. Yet, a highly nonlinear pattern is observed when relevant experimental data are gathered [11, 21]. This nonlinearity can be observed in the shape function suggested by the proposed method. Other input variables \((t_{w},P/A_{g}f_{c},M/Vl_{w})\), on the other hand, demonstrate an almost-linear trend. The interpretation of EBM for these variables is consistent with experimental results
Figure 6: Correlations of the model outputs with the actual values for (a) training and (b) test datasets
in the literature, such that \(M/Vl_{w}\) (Fig.7a) and \(t_{w}\) (Fig.7b) have a positive impact, as discussed above, whereas \(P/A_{g}f_{c}\) (Fig.7c) has an adverse influence [54, 55]. The reason for \(M/Vl_{w}\) and \(t_{w}\) (Fig.7b) suggesting an inverse effect up to a certain point (\(M/Vl_{w}\approx 1.2\), \(t_{w}\approx 60\) cm, \(P/A_{g}f_{c}\approx 0.08\)) is because the model has an intercept value (\(f_{0}\), Eq.2) and specimens with smaller deformation capacities (\(f_{0}\) less than 35.528) are predicted adding up negative values. The unexpected jumps in \(t_{w}\) are likely because there is an abrupt accumulation of data at \(t_{w}=100\) mm and \(t_{w}=200\) mm (64 and 44 specimens, respectively), which probably causes difficulty in decision making.
It is noted that the EBM method offers controllability over the structure of the model proposed by, for instance, modifying the number of pairwise interactions. This allows the method to suggest more than one
Figure 7: EBM shape functions (a-d) and pairwise interaction plots (e-f) for the proposed model. Note that the intercept \(f_{0}\)= 35.528.
model for the same input-output configuration for a particular train-test dataset. Reducing the number of interactions brings simplicity to the model; however, it typically loses accuracy as EBM relies on its automatically-determined interactions in the decision-making process. Given this trade-off, the number of interactions is set to two for the proposed model.
### Sample-Based Explanation
This section presents the prediction of deformation capacity for two example specimens using the proposed EBM-based predicted model. One specimen is predicted with excellent accuracy (almost zero error; Fig.8a), whereas the other is predicted with around 15% error (Fig.8b).
Variable contribution estimates for each specimen are presented such that the intercept is constant and shown in gray, the additive terms with positive impact are marked in orange, and additive terms decreasing the output are shown in blue. Each contribution estimate is extracted from the shape functions and two-dimensional heat maps (Fig.7) based on the input values of a specific specimen. Overall, the model is consistent with physical knowledge, except \(V_{max}\) has an unexpected positive impact on the output for the relatively worse prediction (Fig.8b). This is an excellent advantage of EBM; that is, the user can prudently understand how the prediction is made for a new sample and develop confidence in the predictive model (versus blind acceptation in black-box models).
### Comparisons with Current Code Provisions
ASCE 41-17 and ACI 369-17 [56] provide recommended deformation capacities for nonlinear modeling purposes, where shear walls are classified into the following two categories based on their aspect ratio: shear-controlled (\(h_{w}/l_{w}\leq 1.5\)) and flexure-controlled (\(h_{w}/l_{w}>3.0\)). The deformation capacity of shear-controlled walls is identified as a drift ratio such that \(\Delta_{u}/h_{w}=1.0\) if the wall axial load level is greater than 0.5 and \(\Delta_{u}/h_{w}=2.0\) otherwise.
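For comparison purposes, the code-provision limits summarised above reduce to a simple lookup. The sketch below implements them exactly as stated in the text; consult ASCE 41-17/ACI 369-17 for the precise definitions, thresholds, and units.

```python
def asce41_drift_capacity(hw_over_lw, axial_load_level):
    """Simplified ASCE 41-17/ACI 369-17 drift capacity for shear-controlled walls,
    following the summary in the text; returns None for walls outside that class."""
    if hw_over_lw <= 1.5:  # shear-controlled
        return 1.0 if axial_load_level > 0.5 else 2.0
    return None            # flexure-controlled or intermediate walls are not covered here

print(asce41_drift_capacity(1.0, 0.2))  # -> 2.0
```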
Deformation capacity predictions based on the proposed EBM model are compared to ASCE 41-17/ACI 369-17 provisions in Fig. 9. Predicted-to-actual ratios are \(1.06\pm 0.49\) and \(6.42\pm 3.17\) for EBM-based model and code predictions, respectively. The results imply that traditional approaches may lead to the overestimation of deformation capacities and cause unsafe assessments.
Figure 8: Variable contribution estimates for (a) well-predicted, (b) averagely-predicted samples.
## 6 Conclusions
A fully transparent predictive model is developed to estimate the deformation capacity of reinforced concrete shear walls that are failed in pure shear or shear-flexure interaction. To achieve this, a state-of-the-art machine learning method, Explainable Boosting Machines (EBM), designed as accurately as random forests and boosted trees, is utilized. The EBM provides an additive model such that each relevant feature is designated a quantitative term contribution. The input-output configuration of the model is designated as the shear wall design properties (e.g., wall geometry, axial load ratio) and ultimate wall displacement, respectively. The conclusions derived from this study are summarized as follows:
* The importance of the wall properties in predicting the deformation capacity is evaluated based on additive term contributions. \(t_{w}\) and \(M/Vl_{w}\) (or \(h_{w}/l_{w}\)) have the greatest effect on individual predictions, whereas the least relevant ones are identified as curvature type, cross-section type, and concrete compressive strength.
* Compared to three black-box models (XGBoost, Gradient Boost, Random Forest), the EBM achieves similar or better performance in terms of coefficient of determination (\(R^{2}\)), relative error (\(RE\)), and prediction accuracy (\(PA\); the ratio of predicted to the actual value). The EBM achieves a mean \(R^{2}\) of 0.83 and a mean \(RE\) of 0.41% using twelve input variables based on ten random train-test splittings.
* Compared to two glass-box methods (Decision Tree (DT) and Ridge Linear Regression (RLR)), the EBM outperforms both methods across all three metrics.
* The dispersion of performance metrics of EBM is small, implying that the model is robust and the performance is relatively less data-dependent.
* Compared to the developed model when all the available features are used, the EBM achieves competitive performance scores using only four input variables: \(M/Vl_{w}\), \(P/A_{g}f_{c}\), \(t_{w}\), and \(V_{max}\). Using these four features, the proposed EBM-based model achieves \(R^{2}\) of 0.92 and \(PA\) of 1.05 based on the test dataset. Using fewer variables ensures that the model is more simulatable, more practical, more comprehensible, and reduces the computational cost.
* It is important to note that the decision-making process developed by the proposed EBM-based model has overall consistency with scientific knowledge despite several exceptions detected in sample-based inferences. This is an excellent advantage of the proposed model; that is, the user can assess and evaluate the prediction process before developing confidence in the result (versus blindly accepting as in black-box models).
Figure 9: Comparisons of EBM-based model predictions with code provisions.
* This model delivers exact intelligibility, i.e., there is no need to use local explanation methods (e.g., SHAP, LIME) to interpret the learning model, which obviates the uncertainties associated with their approximations.
The proposed EBM-based model is valuable in that it is simultaneously accurate, explainable, and consistent with scientific knowledge. The EBM's ability to provide interpretable and transparent results would allow engineers to better understand the factors that affect the deformation capacity of non-ductile RC shear walls and make informed design decisions. The use of the EBM to estimate deformation capacity would improve the reliability and efficiency of structural analysis and design processes, leading to safer and more cost-effective buildings.
|
2310.12771
|
Stochastic Average Gradient : A Simple Empirical Investigation
|
Despite the recent growth of theoretical studies and empirical successes of
neural networks, gradient backpropagation is still the most widely used
algorithm for training such networks. On the one hand, we have deterministic or
full gradient (FG) approaches that have a cost proportional to the amount of
training data used but have a linear convergence rate, and on the other hand,
stochastic gradient (SG) methods that have a cost independent of the size of
the dataset, but have a less optimal convergence rate than the determinist
approaches. To combine the cost of the stochastic approach with the convergence
rate of the deterministic approach, a stochastic average gradient (SAG) has
been proposed. SAG is a method for optimizing the sum of a finite number of
smooth convex functions. Like SG methods, the SAG method's iteration cost is
independent of the number of terms in the sum. In this work, we propose to
compare SAG to some standard optimizers used in machine learning. SAG converges
faster than other optimizers on simple toy problems and performs better than
many other optimizers on simple machine learning problems. We also propose a
combination of SAG with the momentum algorithm and Adam. These combinations
allow empirically higher speed and obtain better performance than the other
methods, especially when the landscape of the function to optimize presents
obstacles or is ill-conditioned.
|
Pascal Junior Tikeng Notsawo
|
2023-07-27T17:34:26Z
|
http://arxiv.org/abs/2310.12771v1
|
# Stochastic Average Gradient: A Simple Empirical Investigation
###### Abstract
Despite the recent growth of theoretical studies and empirical successes of neural networks, gradient backpropagation is still the most widely used algorithm for training such networks. On the one hand, we have deterministic or full gradient (FG) approaches that have a cost proportional to the amount of training data used but have a linear convergence rate, and on the other hand, stochastic gradient (SG) methods that have a cost independent of the size of the dataset, but have a less optimal convergence rate than the determinist approaches. To combine the cost of the stochastic approach with the convergence rate of the deterministic approach, a stochastic average gradient (SAG) has been proposed. SAG is a method for optimizing the sum of a finite number of smooth convex functions. Like SG methods, the SAG method's iteration cost is independent of the number of terms in the sum. In this work, we propose to compare SAG to some standard optimizers used in machine learning. SAG converges faster than other optimizers on simple toy problems and performs better than many other optimizers on simple machine learning problems. We also propose a combination of SAG with the momentum algorithm and Adam. These combinations allow empirically higher speed and obtain better performance than the other methods, especially when the landscape of the function to optimize presents obstacles or is ill-conditioned 1.
Footnote 1: This work is reproducible at [https://github.com/Tikquss/sag_torch](https://github.com/Tikquss/sag_torch)
## 1 Introduction
In many domains, several problems can be reduced to the minimization of the sum of a finite number of functions
\[g=\frac{1}{n}\sum_{i=1}^{n}f_{i}\]
That is
\[\operatorname*{minimize}_{x\in\Omega\subset\mathbb{R}^{p}}\ \ g(x)=\frac{1}{n}\sum_{i=1}^{n}f_{i}(x) \tag{1}\]
Gradient descent (Cauchy, 1847; Bottou, 1998; Nemirovski et al., 2009; Duchi et al., 2011b; Kingma and Ba, 2014) optimizes such functions with a rule of the form:
\[x^{k+1}=x^{k}-\alpha_{k}D^{k}\]
where \(\alpha_{k}\) is the step size at iteration \(k\); and \(D^{k}\) a function of the past gradients \(G_{1},\ldots,G_{k}\) of \(g\) at \(x^{1},\ldots,x^{k}\), respectively, or of the estimators of these gradients; such that \(\mathbb{E}[D^{k}|x^{k-1}]=\nabla g(x^{k})\)
More specifically, \(G_{k}=\nabla g(x^{k})\) is the gradient of \(g\) at \(x^{k}\), the parameter update at time \(k\) given the optimization algorithm of choice, and \(\{\alpha_{k},k\geq 0\}\) is a predefined deterministic sequence of positive real numbers such that \(\sum_{k=1}^{\infty}\alpha_{k}=\infty\) and \(\sum_{k=1}^{\infty}\alpha_{k}^{2}<\infty\). The first of these two conditions is to make sure that the total displacement \(\sum_{k=1}^{\infty}\alpha_{k}\nabla g(x^{k})\) can be unbounded, so the optimal solution can be reached even if we start far away from it. The second condition (the finite sum of squares) is to decrease fast enough for the algorithm to converge. For convex functions, gradient descent converges to a global minimum (if one exists).
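As a minimal illustration of this update rule, the sketch below implements the deterministic (full gradient) case, where \(D^{k}\) is simply \(\nabla g(x^{k})\).

```python
import numpy as np

def gradient_descent(grad_g, x0, step_sizes):
    """Full-gradient descent: x_{k+1} = x_k - alpha_k * grad g(x_k)."""
    x = np.asarray(x0, dtype=float)
    for alpha in step_sizes:
        x = x - alpha * grad_g(x)
    return x

# Example: minimize g(x) = 0.5 * ||x||^2, whose gradient is x.
x_star = gradient_descent(lambda x: x, x0=[5.0, -3.0], step_sizes=[0.5] * 50)
print(x_star)  # close to the minimizer [0, 0]
```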
Problem 1 is very common in deep learning, where the goal is to minimize the regularized cost function
\[\mathcal{J}(\theta)=\mathbb{E}_{s\sim F}[\ell(s,\theta)]+\lambda r(\theta)= \int\ell(s,\theta)dF(s)+\lambda r(\theta)\]
where the function \(\ell(s,\theta)\) measures how well the neural network with parameters \(\theta\) predicts the label of a data sample \(s\), \(F\) is the cumulative distribution function of the data distribution, \(r(\theta)\) is the regularizer (e.g. \(\ell_{2}\)-regularization \(\frac{1}{2}\|\theta\|^{2}\)), and \(\lambda\in\mathbb{R}_{+}\) the regularization strength. In practice, \(F\) is generally unknown, and the empirical distribution of a given dataset \(\mathcal{D}\) is used. The regularized empirical risk obtained can be written as a sum of \(|\mathcal{D}|\) functions
\[\mathcal{J}(\theta)=\frac{1}{|\mathcal{D}|}\sum_{s\in\mathcal{D}}\left[\ell(s,\theta)+\lambda r(\theta)\right]\]
This is the case, for example, of the least squares regression, with
\[\mathcal{D}=\{(x_{i},y_{i})\in\mathbb{R}^{p}\times\mathbb{R}\}_{i=1}^{n}\ \text{and}\ \ell((x,y),\theta)=\|x^{T}\theta-y\|_{2}^{2}\]
or the logistic regression where \(\ell\) is the negative log-likelihoods 2 :
Footnote 2: The decision boundary is \(x^{T}\theta=0\), i.e. we want \(x^{T}\theta>0\) for \(y=1\) and \(x^{T}\theta<0\) for \(y=-1\), thus is \(yx^{T}\theta>0\Longleftrightarrow\operatorname*{sigmoid}(yx^{T}\theta)=1/(1+ \exp(-yx^{T}\theta))>1/2\). To maximize \(\operatorname*{sigmoid}(yx^{T}\theta)\in[0,1]\), we minimize \(-\log\big{(}\operatorname*{sigmoid}(yx^{T}\theta)\big{)}\in\mathbb{R}^{+}\), which gives our loss function.
\[\mathcal{D}=\{(x_{i},y_{i})\in\mathbb{R}^{p}\times\{-1,1\}\}_{i=1}^{n}\ \text{and}\ \ell((x,y),\theta)=\log(1+\exp(-yx^{T}\theta))\]
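For concreteness, the per-example losses \(f_{i}\) and their gradients for these two examples can be written as in the sketch below; the full objective of Problem 1 is then their average over the dataset.

```python
import numpy as np

def least_squares_loss_grad(theta, x, y):
    """f_i(theta) = (x^T theta - y)^2 and its gradient, for one sample (x, y)."""
    r = x @ theta - y
    return r ** 2, 2.0 * r * x

def logistic_loss_grad(theta, x, y):
    """f_i(theta) = log(1 + exp(-y x^T theta)) and its gradient, with y in {-1, +1}."""
    z = y * (x @ theta)
    return np.log1p(np.exp(-z)), -y * x / (1.0 + np.exp(z))

def full_objective(theta, X, Y, loss_grad):
    """g(theta) = (1/n) sum_i f_i(theta) and its full gradient."""
    losses, grads = zip(*(loss_grad(theta, x, y) for x, y in zip(X, Y)))
    return np.mean(losses), np.mean(grads, axis=0)

# Example usage with random data:
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(100, 5)), rng.choice([-1.0, 1.0], size=100)
g_val, g_grad = full_objective(np.zeros(5), X, Y, logistic_loss_grad)
```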
One of the challenges that gradient-based methods face in practice is the ill-conditioned surfaces, when the hessian of the function to optimize has some large positive eigenvalues (i.e. high-curvature directions) and some eigenvalues close to \(0\) (i.e. low-curvature directions). In this case, vanilla gradient descent bounces back and forth in high curvature directions and slowly progresses in low curvature directions. In addition to these ill-conditioned surfaces, there are obstacles such as saddle points and critical surfaces (cliffs, valleys, plateaus, ravines, and other flat regions), extremely sharp or flat minima.
The aim of this work is to empirically investigate the performance of the stochastic average gradient (SAG) method (Schmidt et al., 2013) on this type of problem. We first limit ourselves to simple toy finite-data problems where each \(f_{i}\) is smooth and convex, although in modern applications \(n\), the number of data points (or training examples), can be extremely large (e.g. datasets used to train large-scale deep learning models like GPT-3 (Brown et al., 2020)), while there is often a large amount of redundancy between examples. In addition to this basic setting, we will also be interested in toy cases where the sum \(g\) is strongly convex, through the use of a strongly-convex regularizer such as the squared \(\ell_{2}\)-norm, resulting in problems of the form:
\[\operatorname*{minimize}_{x\in\mathbb{R}^{p}}\ \ \frac{\lambda}{2}\|x\|^{2}+\frac{1}{n} \sum_{i=1}^{n}f_{i}(x)=\frac{1}{n}\sum_{i=1}^{n}\Big{[}\frac{\lambda}{2}\|x\|^ {2}+f_{i}(x)\Big{]} \tag{2}\]
The resulting function \(g\) is strongly convex, provided that the individual functions \(f_{i}\) are convex.
We then extend our investigations to slightly more complex problems where we optimize deep neural networks on toy datasets. Many deep models are guaranteed to have an extremely large number of local minima, but it has been shown that this is not necessarily a problem: most local minima are of good quality (almost equivalent in cost to the global minimum) (Dauphin et al., 2014). The biggest obstacle to the optimization of \(g\) in deep learning remains the presence of saddle points. In low
dimensions (small \(p\)), local minima are more common, while in high dimensions, local minima are rare and saddle points more common. Most of the training time is spent on traversing flat valleys of the Hessian matrix or circumnavigating tall mountains via an indirect arcing path, and the trajectory of traversing such flat valleys and circumventing such mountains may be long and result in excessive training time (Srihari, 2020).
The rest of the paper is organized as follows. We define some terms used in our work in section 2, then we present SAG in section 3, the related works in section 4, the convergence analysis and the implementation details in sections 5 and 6 respectively. We finally present the experiments settings and the results in section 7, then summarise and conclude our work in section 8.
## 2 Definitions
We assume \(g:\mathbb{R}^{p}\to\mathbb{R}\) unless otherwise noted. The function \(g\) is convex if for all \(x,y\in\mathrm{domain}(g)\) and all \(t\in[0,1]\)
\[g(tx+(1-t)y)\leq tg(x)+(1-t)g(y)\]
or equivalently if for all \(x,y\in\mathrm{domain}(g)\),
\[g(x)\geq g(y)+\nabla g(y)^{T}(x-y)\]
if \(g\) is differentiable. If the inequality holds strictly (i.e. \(<\) rather than \(\leq\)) for all \(t\in(0,1)\) and \(x\neq y\), then we say that \(g\) is strictly convex, so strict convexity implies convexity. Geometrically, convexity means that the line segment between two points on the graph of \(g\) lies on or above the graph itself. If \(g\) is convex, then any local minimum of \(g\) in any convex set \(X\subset\mathrm{domain}(g)\) is also a global minimum. Strict convexity means that the line segment lies strictly above the graph of \(g\), except at the segment endpoints. If \(g\) is strictly convex, then at most one local minimum of \(g\) in \(X\) exists; consequently, if it exists, it is the unique global minimum of \(g\) in \(X\)3.
Footnote 3: [https://ai.stanford.edu/~gwithomas/notes/convexity.pdf](https://ai.stanford.edu/~gwithomas/notes/convexity.pdf)
For \(\mu>0\), the function \(g\) is \(\mu\)-strongly convex if the function
\[x\mapsto g(x)-\frac{\mu}{2}\|x\|^{2}\]
is convex, or equivalently if for all \(x,y\in\mathrm{domain}(g)\),
\[g(x)\geq g(y)+\nabla g(y)^{T}(x-y)+\frac{\mu}{2}\|x-y\|^{2}\]
if \(g\) is differentiable. Strong convexity does not require the function to be differentiable, and the gradient is replaced by the sub-gradient when the function is non-smooth. Intuitively speaking, strong convexity means that a quadratic lower bound exists on the growth of the function. This directly implies that a strongly convex function is strictly convex, since the quadratic lower-bound growth is strictly greater than linear growth 4.
Footnote 4: [https://xingyuzhou.org/blog/notes/strong-convexity](https://xingyuzhou.org/blog/notes/strong-convexity)
Let \(G(x)=\nabla g(x)\in\mathbb{R}^{p}\) and \(\mathcal{H}(x)=\nabla^{2}g(x)\in\mathbb{R}^{p\times p}\) be respectively the gradient and the local Hessian matrix of \(g\) at \(x\), assuming that \(g\) is twice-differentiable. If \(G(x)=0\), then \(x\) is a critical/stationary point of \(g\). In this case, the determinant \(d(x)\) of \(\mathcal{H}(x)\) is equal to the Gaussian curvature of the surface of \(g\) considered as a manifold. The eigenvalues of \(\mathcal{H}(x)\) are the principal curvatures of \(g\) at \(x\), and the eigenvectors are the principal directions of curvature. If \(d(x)>0\), \(x\) is a local maximum of \(g\) if \(\mathcal{H}(x)\) is negative definite (all its eigenvalues are negative), and a local minimum of \(g\) if \(\mathcal{H}(x)\) is positive definite (all its eigenvalues are positive). Some local optima can be very flat (i.e. there is a large enough neighbourhood of \(x\) that contains only local optima) or sharp (the loss function near \(x\) has a high condition number, i.e. a very small perturbation of \(x\) can cause a large variation in \(g\)). If \(d(x)<0\) (some eigenvalues are positive and others are negative), \(x\) is a saddle point of \(g\). If \(d(x)=0\) (there is at least one zero eigenvalue, i.e. \(\mathcal{H}(x)\) is singular), we cannot conclude, and the point \(x\) could be a minimum, a maximum or a saddle point. If the Hessian matrix of \(g\) is positive semi-definite at every point of \(\mathrm{domain}(g)\), then \(g\) is convex and any point \(x\) such that \(G(x)=0\) is a global minimum. If it is instead negative semi-definite at every point of \(\mathrm{domain}(g)\), then \(g\) is concave and any point \(x\) such that \(G(x)=0\) is a global maximum.
Motivation
Gradient descent (Bottou, 1998) is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. The full gradient (FG) method (Cauchy, 1847) uses iterations of the form
\[x^{k+1}=x^{k}-\alpha_{k}\nabla g(x^{k})=x^{k}-\frac{\alpha_{k}}{n}\sum_{i=1}^{n }\nabla f_{i}(x^{k})\]
FG is generally called batch gradient descent in deep learning since it calculates the error for each example in the training dataset but only updates the model after all training examples have been evaluated. Therefore, its cost per iteration is \(\mathcal{O}(n)\).
Assuming that a minimizer \(x^{*}\) exists and \(g\) is convex, then under standard assumptions, the sub-optimality achieved on iteration \(k\) of the FG method with a constant step size is given by a sublinear convergence rate (Nesterov, 2004; Schmidt et al., 2013)
\[g(x^{k})-g(x^{*})=\mathcal{O}(1/k)\]
When \(g\) is strongly convex, the error also satisfies a linear convergence rate (also known as a geometric or exponential rate, because the error is cut by a fixed fraction on each iteration) (Nesterov, 2004; Schmidt et al., 2013)
\[g(x^{k})-g(x^{*})=\mathcal{O}(\rho^{k})\text{ for some }\rho<1\]
This \(\rho\) depends on the condition number of \(g\), i.e. on how sensitive the output of \(g\) is to its input 5. One drawback of the FG approach is that it requires computing all the gradients at each iteration, which can be costly when \(n\) is very large.
Footnote 5: \(L/\mu\) (change in output = condition number \(\times\) change in input)
The basic SG method for optimizing (1) uses iterations of the form
\[x^{k+1}=x^{k}-\alpha_{k}\nabla f_{i_{k}}(x^{k})\]
where at each iteration an index \(i_{k}\) is sampled uniformly from the set \(\{1,\ldots,n\}\). The randomly chosen gradient \(\nabla f_{i_{k}}(x^{k})\) yields an unbiased estimate of the true gradient \(\nabla g(x^{k})\) :
\[\mathbb{E}_{i_{k}\sim\mathcal{U}(\{1,\ldots,n\})}[\nabla f_{i_{k}}(x^{k})]= \frac{1}{n}\sum_{i=1}^{n}\nabla f_{i}(x^{k})=\nabla g(x^{k})\]
Under standard assumptions and for a suitably chosen decreasing step-size sequence \(\{\alpha_{k},k\geq 0\}\)(Nemirovski et al., 2009; Schmidt et al., 2013), the SG iterations have an expected sub-optimality for convex objectives of
\[\mathbb{E}[g(x^{k})]-g(x^{*})=\mathcal{O}(1/\sqrt{k})\]
and an expected sub-optimality for strongly-convex objectives of
\[\mathbb{E}[g(x^{k})]-g(x^{*})=\mathcal{O}(1/k)\]
These sublinear rates are slower than the corresponding rates for FG. Under certain assumptions, these convergence rates are optimal in a model of computation where the algorithm only accesses the function through unbiased measurements of its objective and gradient. Thus, we should not expect to be able to obtain the convergence rates of the FG method if the algorithm only relies on unbiased gradient measurements. Can we have one gradient per iteration and achieve the same rate as FG?
Mini-batch gradient descent is a variation of the SG algorithm that splits the training dataset into small batches used to calculate model error and update model coefficients. In other words, we select a batch \(\mathcal{B}\subset\{1,\ldots,n\}\) randomly at each iteration and do the update as follows:
\[x^{k+1}=x^{k}-\frac{\alpha_{k}}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}\nabla f_ {i}(x^{k})\]
But this only allows a trade-off between the cost per iteration and the convergence rate: either \(\mathcal{B}\) is large, and we get a better rate at a cost of \(\mathcal{O}(|\mathcal{B}|)\) per iteration, or \(|\mathcal{B}|\) is small, and we get a worse rate at a cost of \(\mathcal{O}(1)\) per iteration.
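A minimal NumPy sketch of the SG and mini-batch updates on a finite sum; the least-squares data and all names are illustrative placeholders.

```python
import numpy as np

def minibatch_sgd(grad_i, n, x0, alpha=0.1, batch_size=1, n_iters=1000, seed=0):
    """Mini-batch SGD for g(x) = (1/n) sum_i f_i(x); batch_size=1 gives plain SG."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        batch = rng.integers(0, n, size=batch_size)              # sampled uniformly
        g_hat = np.mean([grad_i(i, x) for i in batch], axis=0)   # unbiased estimate
        x = x - alpha * g_hat
    return x

# toy problem: f_i(x) = 0.5 * (a_i^T x - b_i)^2
rng = np.random.default_rng(1)
A, b = rng.normal(size=(50, 3)), rng.normal(size=50)
grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
print(minibatch_sgd(grad_i, n=50, x0=np.zeros(3), batch_size=10))
```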
The SAG iterations take the form
\[x^{k+1}=x^{k}-\frac{\alpha_{k}}{n}\sum_{i=1}^{n}y_{i}^{k}\]
where at each iteration a random index \(i_{k}\) is selected (not necessarily uniformly from \(\{1,\ldots,n\}\) as we will see below) and we set
\[y_{i}^{k}=\left\{\begin{array}{ll}\nabla f_{i}(x^{k})&\mbox{if }i=i_{k}\\ y_{i}^{k-1}&\mbox{otherwise.}\end{array}\right.\]
Like the FG method, the step incorporates a gradient with respect to each function. But, like the SG method, each iteration only computes the gradient with respect to a single example, and the cost of the iterations is independent of \(n\): we take a step in the direction of the average of the \(y_{i}^{k}\).
With the mini-batch version of SAG, the update becomes
\[y_{i}^{k}=\left\{\begin{array}{ll}\nabla f_{i}(x^{k})&\mbox{if }i\in\mathcal{B} \\ y_{i}^{k-1}&\mbox{otherwise.}\end{array}\right.\]
## 4 Related works
In the following \(D^{k}\) is a function of the past gradients \(G_{1},\ldots,G_{k}\) of \(g\) at \(x^{1},\ldots,x^{k}\), respectively, or of the estimators of these gradients. In the papers introducing these algorithms, \(D^{k}=G_{k}\) in general, i.e. \(D^{k}\) is deterministic. But their SG version can be developed with \(D^{k}=\nabla f_{i_{k}}(x^{k})\) for a randomly sampled \(i_{k}\in\{1,\ldots,n\}\), or their mini-batch version with \(D^{k}=\frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}\nabla f_{i}(x^{k})\) for a random sample \(\mathcal{B}\subset\{1,\ldots,n\}\), or their SAG version with an appropriate choice of past gradients to use and how to use them.
SG methods that incorporate at each iteration \(k\) a momentum term \(m^{k}=x^{k}-x^{k-1}=-\alpha_{k-1}D^{k-1}\) use iterations of the form (Polyak, 1964; Sutton, 1986)
\[x^{k+1}=x^{k}-\alpha_{k}D^{k}+\beta_{k}m^{k}\]
It is common to set all \(\beta_{k}=\beta_{1}\) for some constant \(\beta_{1}\in[0,1)\), and in this case, we can rewrite the SG with momentum (Tseng, 1998) method as
\[x^{k+1}=x^{k}-\sum_{j=0}^{k}\alpha_{j}\beta_{1}^{k-j}D^{j}\]
The momentum algorithm accumulates an exponentially decaying moving average of past gradients and continues to move in their direction. Formally, the momentum algorithm introduces a variable \(v\) that plays the role of velocity: the direction and speed at which the parameters move through parameter space. The hyperparameter \(\beta_{1}\) determines how quickly the contributions of previous gradients exponentially decay. The above update rule can be rewritten in terms of the velocity as (\(v^{0}=0\)):
\[v^{k+1}=\beta_{1}v^{k}-\alpha_{k}D^{k}\]
\[x^{k+1}=x^{k}+v^{k+1}\]
Since we have with this
\[v^{k+1}=-\sum_{j=0}^{k}\alpha_{j}\beta_{1}^{k-j}D^{j}\]
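A short sketch of the velocity form above; the gradient oracle, the ill-conditioned quadratic and the constants are illustrative only.

```python
import numpy as np

def sgd_momentum(grad, x0, alpha=0.01, beta1=0.9, n_iters=500):
    """Heavy-ball momentum: v <- beta1 * v - alpha * D, then x <- x + v."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_iters):
        d = grad(x)               # D^k (exact or stochastic gradient)
        v = beta1 * v - alpha * d
        x = x + v
    return x

# ill-conditioned quadratic: g(x) = 0.5 * (100 x_0^2 + x_1^2)
grad = lambda x: np.array([100.0 * x[0], x[1]])
print(sgd_momentum(grad, x0=np.array([1.0, 1.0])))  # approaches (0, 0)
```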
The SAG version of momentum becomes
\[x^{k+1}=x^{k}+\frac{\alpha_{k}}{n}\sum_{i=1}^{n}y_{i}^{k}\]
where at each iteration, a random index \(i_{k}\) is selected, and we set
\[y_{i}^{k}=\left\{\begin{array}{ll}v_{i}^{k+1}=\beta_{1}v_{i}^{k}-\alpha_{k}D_{i}^{k}&\text{if }i=i_{k},\text{ with }D^{k}=\nabla f_{i_{k}}(x^{k})\\ y_{i}^{k-1}&\text{otherwise}.\end{array}\right.\]
Nesterov accelerated gradient, or Nesterov momentum (Nesterov, 1983; Sutskever et al., 2013), is a variant of the momentum algorithm that uses an interim update \(\tilde{x}^{k}=x^{k}+\beta_{1}v^{k}\) to compute the gradient \(D^{k}\) at each iteration. That is:
\[\tilde{x}^{k}=x^{k}+\beta_{1}v^{k}\] \[\tilde{D}^{k}=\nabla g(\tilde{x}^{k})\] \[v^{k+1}=\beta_{1}v^{k}-\alpha_{k}\tilde{D}_{k}\] \[x^{k+1}=x^{k}+v^{k+1}\]
The AdaGrad algorithm (Duchi et al., 2011) individually adapts the learning rates of all model parameters by scaling them inversely proportional to the square root of the sum of all of their historical squared values. The update rule of AdaGrad is given by (\(r^{0}=0\), \(r^{k}\) accumulates squared gradients, division and square root are applied element-wise, and \(\epsilon\) is a very small number used to avoid divisions by 0):
\[r^{k+1}=r^{k}+D^{k}\odot D^{k}\] \[x^{k+1}=x^{k}-\frac{\alpha_{k}}{\sqrt{r^{k+1}+\epsilon}}\odot D ^{k}\]
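A minimal per-coordinate sketch of this rule; the toy quadratic and the constants are purely illustrative.

```python
import numpy as np

def adagrad(grad, x0, alpha=0.5, eps=1e-8, n_iters=500):
    """AdaGrad: per-coordinate steps scaled by accumulated squared gradients."""
    x = np.asarray(x0, dtype=float)
    r = np.zeros_like(x)
    for _ in range(n_iters):
        d = grad(x)
        r = r + d * d                          # accumulate squared gradients
        x = x - alpha / np.sqrt(r + eps) * d   # element-wise scaling
    return x

grad = lambda x: np.array([100.0 * x[0], x[1]])   # same toy quadratic as before
print(adagrad(grad, x0=np.array([1.0, 1.0])))
```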
The RMSProp algorithm (Hinton, 2012) modifies AdaGrad to perform better in the non-convex setting by changing the gradient accumulation into an exponentially weighted moving average. RMSProp uses an exponentially decaying average to discard history from the extreme past to converge rapidly after finding a convex bowl as if it were an instance of the AdaGrad algorithm initialized within that bowl. Compared to AdaGrad, using the moving average introduces a new hyperparameter, \(\beta_{2}\in(0,1]\), that controls the length scale of the moving average. The step of squared gradient accumulation is modified as follows:
\[r^{k+1}=\beta_{2}r^{k}+(1-\beta_{2})D^{k}\odot D^{k}\]
Adadelta (Zeiler, 2012) is an extension of Adagrad and RMSProp that seeks to reduce its aggressive, monotonically decreasing learning rate. Instead of accumulating all past squared gradients, Adadelta restricts the window of accumulated past gradients to some fixed size (\(u^{0}=0\)).
\[r^{k+1}=\beta_{2}r^{k}+(1-\beta_{2})D^{k}\odot D^{k}\] \[\Delta^{k+1}=\frac{\sqrt{u^{k}+\epsilon}}{\sqrt{r^{k+1}+\epsilon}}\odot D^{k}\] \[u^{k+1}=\beta_{2}u^{k}+(1-\beta_{2})\Delta^{k+1}\odot\Delta^{k+1}\] \[x^{k+1}=x^{k}-\alpha_{k}\Delta^{k+1}\]
Adam (Kingma and Ba, 2017) is a combination of RMSProp and momentum. First, in Adam, momentum is incorporated directly as an estimate of the gradient's first-order moment (with exponential weighting). Second, Adam includes bias corrections to the estimates of both the first-order moments (the momentum term) and the (uncentered) second-order moments to account for their initialization at the origin.
\[\tilde{x}^{k}=x^{k}+\beta_{1}v^{k}\] \[\tilde{D}^{k}=\nabla g(\tilde{x}^{k})\] \[r^{k+1}=\beta_{2}r^{k}+(1-\beta_{2})\tilde{D}^{k}\odot\tilde{D} ^{k}\] \[v^{k+1}=\beta_{1}v^{k}-\frac{\alpha_{k}}{\sqrt{r^{k+1}+\epsilon}} \odot\tilde{D}^{k}\] \[x^{k+1}=x^{k}+v^{k+1}\]
The most common Adam iteration update is written in terms of momentum as
\[m^{k}=\beta_{1}m^{k-1}+(1-\beta_{1})D^{k}\] \[r^{k}=\beta_{2}r^{k-1}+(1-\beta_{2})D^{k}\odot D^{k}\] \[x^{k}=x^{k-1}-\frac{\alpha_{k}}{\sqrt{r^{k}}+\epsilon}\odot m^{k}\]
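A minimal sketch of this common form, with the bias corrections for \(m^{k}\) and \(r^{k}\) mentioned above made explicit; the toy problem and constants are illustrative.

```python
import numpy as np

def adam(grad, x0, alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8, n_iters=500):
    """Adam: exponential averages of gradients and squared gradients,
    with bias-corrected estimates m_hat and r_hat."""
    x = np.asarray(x0, dtype=float)
    m, r = np.zeros_like(x), np.zeros_like(x)
    for k in range(1, n_iters + 1):
        d = grad(x)
        m = beta1 * m + (1 - beta1) * d
        r = beta2 * r + (1 - beta2) * d * d
        m_hat = m / (1 - beta1 ** k)     # bias correction for m^k
        r_hat = r / (1 - beta2 ** k)     # bias correction for r^k
        x = x - alpha * m_hat / (np.sqrt(r_hat) + eps)
    return x

grad = lambda x: np.array([100.0 * x[0], x[1]])
print(adam(grad, x0=np.array([1.0, 1.0])))
```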
Adamax (Kingma and Ba, 2014, 2017) is a variant of Adam based on the infinity norm.
\[m^{k}=\beta_{1}m^{k-1}+(1-\beta_{1})D^{k}\] \[u^{k}=\max(\beta_{2}u^{k-1},|D^{k}|+\epsilon)\] \[x^{k}=x^{k-1}-\frac{\alpha_{k}}{(1-\beta_{1}^{k})u^{k}}\odot m^ {k}\]
AMSGrad (Reddi et al., 2018) is a version of Adam that keeps a running maximum of the squared gradients instead of an exponential moving average.
\[m^{k}=\beta_{1}m^{k-1}+(1-\beta_{1})D^{k}\] \[r^{k}=\beta_{2}r^{k-1}+(1-\beta_{2})D^{k}\odot D^{k}\] \[\tilde{r}^{k}=\max(\tilde{r}^{k-1},r^{k})\] \[x^{k}=x^{k-1}-\frac{\alpha_{k}}{\sqrt{\tilde{r}^{k}}+\epsilon}\odot m^{k}\]
All Adaptive methods can be summarized as follows (Defossez et al., 2020). As hyper-parameters, we have \(0\leq\beta_{1}<\beta_{2}\leq 1\), and a non negative sequence \((\alpha_{k})_{k\in\mathbb{N}^{*}}\). We define three vectors \(m_{k},r_{k},x_{k}\in\mathbb{R}^{p}\) iteratively. Given \(x^{0}\in\mathbb{R}^{p}\) as our starting point, \(m^{0}=0\), and \(r^{0}=0\), we define for all iterations \(k\in\mathbb{N}^{*}\)
\[m_{i}^{k}=\beta_{1}m_{i}^{k-1}+D_{i}^{k}\] \[r_{i}^{k}=\beta_{2}r_{i}^{k-1}+\left(D_{i}^{k}\right)^{2}\] \[x_{i}^{k}=x_{i}^{k-1}-\alpha_{k}\frac{m_{i}^{k}}{\sqrt{r_{i}^{k} }+\epsilon}\]
The parameter \(\beta_{1}\) is a heavy-ball style momentum parameter. The parameter \(\beta_{2}\) controls the decay rate of the per-coordinate exponential moving average of the squared gradients. Taking \(\beta_{1}=0\), \(\beta_{2}=1\) and \(\alpha_{k}=\alpha\) gives Adagrad (Duchi et al., 2011b). The original Adam algorithm (Kingma and Ba, 2014) uses a weighted average, rather than a weighted sum:
\[\tilde{m}_{i}^{k}=(1-\beta_{1})\sum_{j=1}^{k}\beta_{1}^{k-j}D_{i}^{j-1}=(1- \beta_{1})m_{i}^{k}\]
We can achieve the same definition by taking \(\alpha_{adam}=\alpha\cdot\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}\), since
\[\frac{\tilde{m}_{i}^{k}}{\sqrt{\tilde{r}_{i}^{k}}}=\frac{1-\beta_{1}}{\sqrt{1- \beta_{2}}}\frac{m_{i}^{k}}{\sqrt{r_{i}^{k}}}\]
with
\[\tilde{r}_{i}^{k}=(1-\beta_{2})r_{i}^{k}\text{ and }\tilde{m}_{i}^{k}=(1- \beta_{1})m_{i}^{k}\]
The original Adam algorithm further includes two corrective terms to account for the fact that \(m^{k}\) and \(r^{k}\) are biased towards \(0\) for the first few iterations. Those corrective terms are equivalent to taking a step-size \(\alpha_{k}\) of the form
\[\alpha_{k,adam}=\alpha\cdot\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}\cdot\overbrace{ \frac{1}{\sqrt{1-\beta_{1}^{k}}}}^{\text{corrective term for }m^{k}}\cdot\underbrace{\sqrt{1-\beta_{2}^{k}}}_{\text{ corrective term for }r^{k}}\]
Early work on adaptive methods (e.g. (McMahan and Streeter, 2010)) showed that Adagrad achieves an optimal rate of convergence of \(\mathcal{O}(1/\sqrt{k})\) for convex optimization. Ward et al. (2020) proved that Adagrad converges to a critical point for non convex objectives with a rate \(\mathcal{O}(ln(k)/\sqrt{k})\) when using a scalar adaptive step-size. Defossez et al. (2020) show a rate of \(\mathcal{O}(p\ ln(k)/\sqrt{k})\) for Adam, and show that in expectation, the squared norm of the objective gradient averaged over the trajectory has an upper-bound which is explicit in the constants of the problem, parameters of the optimizer, the dimension \(p\), and the total number of iterations \(k\).
## 5 SAG convergence rate
We assume that each function \(f_{i}\) in (1) is convex and differentiable (this makes \(g\) also convex and differentiable), and that each gradient \(\nabla f_{i}\) is Lipschitz-continuous with constant \(L_{i}\), meaning that for all \(x\) and \(y\) in \(\mathbb{R}^{p}\) and each \(i\) we have
\[\|\nabla f_{i}(x)-\nabla f_{i}(y)\|\leq L_{i}\|x-y\| \tag{3}\]
This makes \(\nabla g\) also Lipschitz-continuous with any constant \(L\geq\frac{1}{n}\sum_{i=1}^{n}L_{i}\), such as \(\max_{i}L_{i}\). Also, each gradient \(\nabla f_{i}\) is Lipschitz-continuous with any constant \(L\geq\max_{i}L_{i}\). This is a fairly weak assumption on the \(f_{i}\) functions, and in cases where the \(f_{i}\) are twice-differentiable it is equivalent to saying that the eigenvalues of the Hessians of each \(f_{i}\) are bounded above by \(L\). We will also assume the existence of at least one minimizer \(x^{*}\) that achieves the optimal function value.
In addition to the above basic convex case, we will also consider the case where the average function \(g=\frac{1}{n}\sum_{i=1}^{n}f_{i}\) is strongly-convex with constant \(\mu>0\), meaning that the function \(x\mapsto g(x)-\frac{\mu}{2}\|x\|^{2}\) is convex. For twice-differentiable \(g\), this is equivalent to requiring that the eigenvalues of the hessian of \(g\) are bounded below by \(\mu\). This is a stronger assumption that is often not satisfied in practical applications. Nevertheless, in many applications we are free to choose a regularizer of the parameters, and thus we can add an \(\ell_{2}\)-regularization term as in (2) to transform any convex problem into a strongly-convex problem (in this case we have \(\mu\geq\lambda\)). Note that strong-convexity implies the existence of a unique \(x^{*}\) that achieves the optimal function value.
Let \(\bar{x}^{k}=\frac{1}{k}\sum_{i=0}^{k-1}x^{i}\) be the average iterate and \(\sigma^{2}=\frac{1}{n}\sum_{i=1}^{n}\|\nabla f_{i}(x^{*})\|^{2}\) be the variance of the gradient norms at the optimum \(x^{*}\). The convergence results consider two different initializations for the \(y_{i}^{0}\) variables:
* setting \(y_{i}^{0}=0\) for all \(i\)
* or setting them to the centered gradient at the initial point \(x^{0}:y_{i}^{0}=\nabla f_{i}(x^{0})-\nabla g(x^{0})\)
The convergence results are expressed in terms of expectations \(\mathbb{E}\) with respect to the internal randomization of the algorithm (the selection of the random variables \(i_{k}\)), and not with respect to the data which is assumed to be deterministic and fixed. The \(L\) we use in the following is a Lipschitz-continuous constant common to all \(\nabla f_{i}\), as \(\max_{i}L_{i}\).
**Theorem 5.1**.: _With a constant step size of \(\alpha=\frac{1}{16L}\), the SAG iterations satisfy for \(k\geq 1\) :_
\[\mathbb{E}[g(\bar{x}^{k})]-g(x^{*})\leq\frac{32n}{k}C_{0}\in\mathcal{O}\bigg{(} \frac{1}{k}\bigg{)} \tag{4}\]
_where if we initialize with \(y_{i}^{0}=0\) for all \(i\) we have_
\[C_{0}=g(x^{0})-g(x^{*})+\frac{4L}{n}\|x^{0}-x^{*}\|^{2}+\frac{\sigma^{2}}{16L}\]
_and if we initialize with \(y_{i}^{0}=\nabla f_{i}(x^{0})-\nabla g(x^{0})\) for all \(i\) we have_
\[C_{0}=\frac{3}{2}\big{[}g(x^{0})-g(x^{*})\big{]}+\frac{4L}{n}\|x^{0}-x^{*}\|^{2}\]
_Further, if \(g\) is \(\mu\)-strongly convex we have_
\[\mathbb{E}[g(x^{k})]-g(x^{*})\leq\left(1-\min\Big{\{}\frac{\mu}{16L},\frac{1}{8n}\Big{\}}\right)^{k}C_{0}\in\mathcal{O}\Bigg{(}\Big{(}1-\min\Big{\{}\frac{\mu}{16L},\frac{1}{8n}\Big{\}}\Big{)}^{k}\Bigg{)}\]
The proof of this theorem is given in Schmidt et al. (2013) [Appendix B] and involves finding a Lyapunov function for a non-linear stochastic dynamical system defined on the \(y_{i}^{k}\) and \(x^{k}\) variables that converges to zero at the above rates, and showing that this function dominates the expected sub-optimality \(\mathbb{E}[g(x^{k})]-g(x^{*})\). Equation (4) is stated for the average \(\bar{x}^{k}\), but with a trivial change to the proof technique it can be shown to also hold for any iterate \(x^{k}\) where \(g(x^{k})\) is lower than the average function value up to iteration \(k\), \(\frac{1}{k}\sum_{i=0}^{k-1}g(x^{i})\). Thus, in addition to \(\bar{x}^{k}\), the result also holds for the best iterate.
The bounds are valid for any \(L\) greater than or equal to the minimum \(L\) satisfying (3) for each \(i\), implying an \(\mathcal{O}(1/k)\) and linear convergence rate for any \(\alpha\leq 1/16L\), but the bound becomes worse as \(L\) grows. Although initializing each \(y_{i}^{0}\) with the centered gradient may have an additional cost and slightly worsens the dependency on the initial sub-optimality \((g(x^{0})-g(x^{*}))\), it removes the dependency on the variance \(\sigma^{2}\) of the gradients at the optimum.
While the theorem is stated in terms of the function values, in the \(\mu\)-strongly-convex case we also obtain a convergence rate on the iterates because we have
\[\frac{\mu}{2}\|x^{k}-x^{*}\|^{2}\leq g(x^{k})-g(x^{*})\]
The SAG iterations have a worse constant factor because of the dependence on \(n\). An appropriate choice of \(x^{0}\) can improve the dependence on \(n\) : we can set \(x^{0}\) to the result of \(n\) iterations of an appropriate SG method. In this setting, the expectation of \(g(x^{0})-g(x^{*})\) is \(\mathcal{O}(1/\sqrt{n})\) in the convex setting, while both \(g(x^{0})-g(x^{*})\) and \(\|x^{0}-x^{*}\|^{2}\) would be in \(\mathcal{O}(1/n)\) in the strongly-convex setting.
If we use this initialization of \(x^{0}\) and set \(y_{i}^{0}=\nabla f_{i}(x^{0})-\nabla g(x^{0})\), then in terms of \(n\) and \(k\) the SAG convergence rates take the form \(\mathcal{O}(\sqrt{n}/k)\) and \(\mathcal{O}(\rho^{k}/n)\) in the convex and strongly-convex settings, instead of the \(\mathcal{O}(n/k)\) and \(\mathcal{O}(\rho^{k})\) rates implied by the theorem.
An interesting consequence of using a step-size of \(\alpha=1/16L\) is that it makes the method adaptive to the strong-convexity constant \(\mu\). For problems with a higher degree of local strong-convexity around the solution \(x^{*}\), the algorithm will automatically take advantage of this and yield a faster local rate. This can even lead to a local linear convergence rate if the problem is strongly-convex near the optimum but not globally strongly-convex. This adaptivity to the problem difficulty is in contrast to SG methods whose sequence of step sizes typically depend on global constants and thus do not adapt to local strong-convexity. _We will test this on the Rosenbrock function in log scale, for which the SG method turns indefinitely around the global minimum and never reaches it._
## 6 SAG implementation Details
Schmidt et al. (2013) discuss modifications that lead to better practical performance than this basic algorithm, including ways to reduce the storage cost, how to handle regularization, how to set the step size, using mini-batches, and using non-uniform sampling.
```
begin
    d = 0                          /* d is used to track the quantity ∑_{i=1}^{n} y_i */
    y_i = 0 for i = 1, 2, ..., n
    for k = 0, 1, ... do
        Sample i from {1, 2, ..., n}
        d = d - y_i + ∇f_i(x)
        y_i = ∇f_i(x)
        x = x - (α/n) d
    end for
end
```
**Algorithm 1** Basic SAG method for minimizing \(\frac{1}{n}\sum_{i=1}^{n}f_{i}(x)\) with step size \(\alpha\)
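A runnable NumPy sketch of Algorithm 1 on a toy least-squares problem; the data, dimensions and the step size \(\alpha=1/(16L)\) suggested by Theorem 5.1 are only illustrative.

```python
import numpy as np

def sag(grad_i, n, x0, alpha, n_iters=5000, seed=0):
    """Basic SAG (Algorithm 1): keep the last gradient y_i seen for each f_i
    and step in the direction of the running average d / n."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    y = np.zeros((n, x.size))      # stored gradients y_i
    d = np.zeros_like(x)           # d = sum_i y_i
    for _ in range(n_iters):
        i = rng.integers(0, n)     # sample i uniformly
        g = grad_i(i, x)
        d = d - y[i] + g           # update the running sum
        y[i] = g
        x = x - (alpha / n) * d
    return x

# toy least-squares problem: f_i(x) = 0.5 * (a_i^T x - b_i)^2, so L_i = ||a_i||^2
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
L = max(np.sum(A * A, axis=1))               # a valid common Lipschitz constant
print(sag(grad_i, n=100, x0=np.zeros(5), alpha=1.0 / (16 * L)))
```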
**Re-weighting on early iterations.** When \(y_{i}^{0}=0\), the more logical normalization is to divide \(d\) by \(m\), the number of data points seen at least once so far (which converges to \(n\) once we have seen the entire dataset):
\[x=x-\frac{\alpha}{m}d\]
**Exact and efficient regularization.** With an \(\ell_{2}\) regularizer, its gradient can be incorporated exactly in the update:
\[x=x-\alpha\bigg{(}\frac{d}{m}+\lambda x\bigg{)}=(1-\alpha\lambda)x-\frac{\alpha }{m}d\]
**Mini-batches for vectorized computation and reduced storage.**
\[x^{k+1}=x^{k}-\frac{\alpha_{k}}{n}\sum_{i=1}^{n}y_{i}^{k}\text{ with }y_{i}^{k}= \left\{\begin{array}{ll}\nabla f_{i}(x^{k})&\text{if }i\in\mathcal{B}\\ y_{i}^{k-1}&\text{otherwise.}\end{array}\right.\]
**Structured gradients and just-in-time parameter updates.** For many problems the storage cost of \(\mathcal{O}(np)\) for the \(y_{i}^{k}\) vectors is prohibitive, but we can often use the structure of the gradients \(\nabla f_{i}\) to reduce this cost. For example, let us consider a linearly-parameterized model of the form
\[\operatorname*{minimize}_{x\in\Omega\subset\mathbb{R}^{p}}\ \ g(x)=\frac{1}{n} \sum_{i=1}^{n}f_{i}(a_{i}^{T}x) \tag{5}\]
Since each \(a_{i}\) is constant, for these problems we only need to store the scalar \(\nabla f_{i}(u_{i}^{k})\) for \(u_{i}^{k}=a_{i}^{T}x^{k}\) rather than the full gradient \(a_{i}\nabla f_{i}(u_{i}^{k})\). This reduces the storage cost from \(\mathcal{O}(np)\) down to \(\mathcal{O}(n)\). Examples of linearly-parameterized models include least-squares regression 6, logistic regression 7, feed forward neural networks, etc.
Footnote 6: \(\ell(s=(x,y),\theta)=h(x^{T}\theta)\) with \(h(z)=(z-y)^{2}\)
Footnote 7: \(\ell(s=(x,y),\theta)=h(x^{T}\theta)\) with \(h(z)=log(1+exp(-yz))\)
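A sketch of this storage reduction for the least-squares case: only one scalar per example is kept, and the sum of stored gradients is updated with a rank-one change; all names and data are illustrative.

```python
import numpy as np

def sag_linear(A, b, alpha, n_iters=5000, seed=0):
    """SAG for f_i(x) = 0.5 * (a_i^T x - b_i)^2, storing only n scalars
    (the residuals a_i^T x - b_i) instead of n full gradients."""
    rng = np.random.default_rng(seed)
    n, p = A.shape
    x = np.zeros(p)
    s = np.zeros(n)                # stored scalars: last seen a_i^T x - b_i
    d = np.zeros(p)                # d = sum_i s_i * a_i (sum of stored gradients)
    for _ in range(n_iters):
        i = rng.integers(0, n)
        s_new = A[i] @ x - b[i]
        d = d + (s_new - s[i]) * A[i]   # rank-one update of the sum
        s[i] = s_new
        x = x - (alpha / n) * d
    return x

rng = np.random.default_rng(2)
A, b = rng.normal(size=(200, 10)), rng.normal(size=200)
L = max(np.sum(A * A, axis=1))
print(np.linalg.norm(A @ sag_linear(A, b, alpha=1.0 / (16 * L)) - b))
```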
## 7 Experiments settings and results
We will use the following acronyms to designate our algorithms :
* sgd : vanilla SGD
* momentum : SGD with momentum (Polyak, 1964; Sutton, 1986; Tseng, 1998)
* nesterov : Nesterov Accelerated SGD (Nesterov, 1983; Sutskever et al., 2013)
* asgd : Averaged SGD proposed by Polyak and Juditsky (1992)
* rmsprop : RMSProp (Hinton, 2012)
* rmsprop_mom : RMSProp with momentum
* rprop : resilient backpropagation algorithm (Riedmiller and Braun, 1993)
* adadelta : Adadelta (Zeiler, 2012)
* adagrad : Adagrad (Duchi et al., 2011b)
* adam : Adam (Kingma and Ba, 2014, 2017)
* amsgrad : AMSGrad (Reddi et al., 2018)
* adamax : Adamax (Kingma and Ba, 2014)
* custom_adam : custom Adam algorithm without amsgrad and that includes the two corrective terms for \(m^{k}\) and \(r^{k}\)
* adam_inverse_sqrt : Adam that decays the learning rate based on the inverse square root of the update number (a small sketch of this schedule is given after this list). It also supports a warmup phase where the learning rate is linearly increased from some initial learning rate (\(warmup\_init\_lr\)) until the configured learning rate (\(lr\)). Thereafter, the learning rate is decayed proportionally to the inverse square root of the number of updates, with a decay factor set to align with the configured learning rate.
* During warmup: \[lrs=linspace(start=warmup\_init\_lr,end=lr,steps=warmup\_updates)\] \[lr=lrs[step]\]
* After warmup: \[lr=\frac{decay\_factor}{\sqrt{update\_num}}\text{ where }decay\_factor=lr*sqrt(warmup\_updates)\]
* adam_cosine : Adam that assigns the learning rate based on a cyclical schedule that follows the cosine function (Loshchilov and Hutter, 2016). It also supports a warmup phase where the learning rate is linearly increased from some initial learning rate (\(warmup\_init\_lr\)) until the configured learning rate (\(lr\)). Thereafter, the learning rate follows the cosine schedule below.
* During warmup: \[lrs=linspace(start=warmup\_init\_lr,end=lr,steps=warmup\_updates)\] \[lr=lrs[step]\]
* After warmup: \[lr=lr\_min+0.5*(lr\_max-lr\_min)*(1+cos(t\_curr/t\_i))\] where \(t\_curr\) is the current percentage of updates within the current period range and \(t\_i\) is the current period range, which is scaled by \(t\_mul\) after every iteration.
* sag : SAG (Schmidt et al., 2013)
* sag_sgd : combination of SAG and momentum SGD with \[y_{i}^{k}=\left\{\begin{array}{ll}v^{k+1}=\beta_{1}v_{i}^{k}-\alpha_{k}D_{i}^{k}&\text{if }i=i_{k},\text{ with }D^{k}=\nabla f_{i_{k}}(x^{k})\\ y_{i}^{k-1}&\text{ otherwise.}\end{array}\right.\]
* sag_adam : combination of SAG and Adam with \[y_{i}^{k}=\left\{\begin{array}{ll}\frac{m_{i}^{k}}{\sqrt{r_{i}^{k}+\epsilon}}&\text{if }i=i_{k}\\ y_{i}^{k-1}&\text{ otherwise.}\end{array}\right.\]
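As referenced in the adam_inverse_sqrt item above, here is a minimal sketch of the warmup plus inverse-square-root decay schedule; the parameter names mirror the description and the default values are hypothetical.

```python
import numpy as np

def inverse_sqrt_lr(step, lr=1e-3, warmup_init_lr=1e-7, warmup_updates=4000):
    """Linear warmup to lr, then decay as decay_factor / sqrt(step)."""
    if step < warmup_updates:
        lrs = np.linspace(warmup_init_lr, lr, warmup_updates)
        return lrs[step]
    decay_factor = lr * np.sqrt(warmup_updates)
    return decay_factor / np.sqrt(step)

print([float(inverse_sqrt_lr(s)) for s in (0, 2000, 4000, 16000)])
```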
### Test functions for optimization
#### 7.1.1 Rosenbrock function
The vanilla Rosenbrock function is given by \(g_{n}(x)=\sum_{i=1}^{n/2}\left[100(x_{2i}-x_{2i-1}^{2})^{2}+(x_{2i-1}-1)^{2}\right]\), with the gradient \(\nabla_{i}g_{n}(x)=200(x_{i}-x_{i-1}^{2})\cdot\mathbb{1}_{i\in\mathbb{2N}+1}-\left[400x_{i}(x_{i+1}-x_{i}^{2})-2(x_{i}-1)\right]\cdot\mathbb{1}_{i\in\mathbb{2N}-1}\), and \(x^{*}\in\{(1,\ldots,1),(-1,1,\ldots,1)\}\subset\{x,\nabla g_{n}(x)=0\}\)8. A more involved variant is given by \(g_{n}(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_{i}^{2})^{2}+(x_{i}-1)^{2}\right]\), with the gradient \(\nabla_{i}g_{n}(x)=200(x_{i}-x_{i-1}^{2})\cdot\mathbb{1}_{i>1}-\left[400x_{i}(x_{i+1}-x_{i}^{2})-2(x_{i}-1)\right]\cdot\mathbb{1}_{i<n}\), and \(x^{*}=(1,\ldots,1)\in\{x,\nabla g_{n}(x)=0\}\)9. The number of stationary points of this function grows exponentially with the dimensionality \(n\), most of which are unstable saddle points (Kok and Sandrock, 2009).
Footnote 8: When the coordinates range from \(0\) to \(n-1\), \(g_{n}(x)=\sum_{i=0}^{n/2-1}\left[100(x_{i+1}-x_{i}^{2})^{2}+(x_{2i}-1)^{2}\right]\) and \(\nabla_{i}g_{n}(x)=200(x_{i}-x_{i-1}^{2})\cdot\mathbb{1}_{i\in\mathbb{2N}+1}- \left[400x_{i}(x_{i+1}-x_{i}^{2})-2(x_{i}-1)\right]\cdot\mathbb{1}_{i\in \mathbb{2N}}\).
Footnote 9: When the coordinates range from \(0\) to \(n-1\), \(g_{n}(x)=\sum_{i=0}^{n-2}\left[100(x_{i+1}-x_{i}^{2})^{2}+(x_{i}-1)^{2}\right]\) and \(\nabla_{i}g_{n}(x)=200(x_{i}-x_{i-1}^{2})\cdot\mathbb{1}_{i>0}-\left[400x_{i}(x _{i+1}-x_{i}^{2})-2(x_{i}-1)\right]\cdot\mathbb{1}_{i<n-1}\).
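A NumPy sketch of the coupled variant and its gradient (0-indexed, as in footnote 9), with a finite-difference sanity check; the test point is arbitrary.

```python
import numpy as np

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def rosenbrock_grad(x):
    g = np.zeros_like(x)
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)                                    # i > 0 term
    g[:-1] += -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) + 2.0 * (x[:-1] - 1.0)  # i < n-1 term
    return g

# finite-difference check at a random point
rng = np.random.default_rng(0)
x, eps = rng.normal(size=4), 1e-6
fd = np.array([(rosenbrock(x + eps * e) - rosenbrock(x - eps * e)) / (2 * eps)
               for e in np.eye(4)])
print(np.max(np.abs(fd - rosenbrock_grad(x))))  # small finite-difference error
print(rosenbrock(np.ones(4)))                   # 0.0 at the global minimum
```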
We optimized the Rosenbrock function in a logarithmic scale (to create a ravine, figure 2). The function is unimodal, and the global minimum is very sharp and surrounded in the direction of the ravine by many local minima. At the beginning of optimization, we fall very quickly into the ravine because the surface is well-conditioned. Then, depending on the learning rate and the optimizer used (as well as the associated hyperparameters), we go down the ravine very slowly. Indeed, without momentum or velocity, we do not go directly down to the minimum since the gradient is almost
zero along the ravine direction but very large in the perpendicular directions: we move from left to right (perpendicular to the ravine) while going down a little, but very slowly. Moreover, we circle around the minimum almost indefinitely once we are near it. With adaptive gradients, we go down to the minimum very quickly because this direction problem is corrected (thanks to momentum, the left-right directions perpendicular to the ravine cancel out): if the learning rate is too small, we will also go down very slowly (small gradient in the flat ravine direction). Unlike SGD, here we always reach the minimum (and stay there). Also, for some learning rates and initializations, there is a double descent (Nakkiran et al., 2020) in error (the Euclidean distance between the global minimum and the current position at a given time) when landing in the ravine.
Adadelta and adagrad were very slow compared to sag. We can see in figure 1(a) a comparative progression of these three algorithms. After 100 000 iterations, adadelta and adagrad were still going down to the valley, while SAG did it in less than 1000 iterations, which is 100 times faster than both. Adadelta manages to reach the minimum, which sag ultimately never does.
Nesterov is faster on the well-conditioned part of the surface and arrives in the neighbourhood of the target faster than sag, momentum and asgd (figure 1(b)). On the other hand, it stabilizes at a higher loss than these methods. Sag and asgd have almost the same trajectory. Momentum follows the same trajectory as these two methods from the beginning but stabilizes at a smaller loss. The combination sag_sgd (with momentum) speeds up the arrival in the neighbourhood of the minimum but stabilizes at the same level as momentum.
Rmsprop is slower than sag, but ends up with a smaller error than sag (figure 1(c)). Adding momentum to rmsprop (rmsprop_mom) improves its speed significantly. Rprop is also very fast and gives a smaller error than sag and sag_sgd.
On the well-conditioned part of the surface, sag is faster than adam, adamax and amsgrad, but these methods reach the minimum (get zero final loss), which is not the case for sag (figure 1(d)). The sag_adam combination almost reaches the minimum, but is very chaotic and has periodic jumps that are similar to the slingshot mechanism (Thilak et al., 2022). amsgrad is much slower than adam and adamax.
custom_adam, adam_inverse_sqrt, adam_cosine also have the same periodic disruption phenomenon as sag_adam (figure 1(e)).
The methods that succeed in reaching the minimum are rmsprop, rprop, adadelta, adam, amsgrad, adamax and rmsprop_mom (figure 2(e)). The methods that come close to it without reaching it are adam_inverse_sqrt, custom_adam, adam_cosine, sag_adam and momentum. The comparative convergence speeds are presented in figures 3 and 5; the speed is approximated by the number of iterations performed before reaching stabilization.
#### 7.1.2 Rastrigin function
The Rastrigin function is given by \(g_{n}(x)=na+\sum_{i=1}^{n}\left[x_{i}^{2}-a\cos(2\pi x_{i})\right]=na+x^{T}x-a1_{n}^{T}\cos(2\pi x)\) with \(a\in\mathbb{R}\). Its gradient is \(\nabla g_{n}(x)=2x+2\pi a\sin(2\pi x)\), and \(x^{*}=(0,\ldots,0)\in\{x,\nabla g_{n}(x)=0\}\).
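A NumPy sketch of this function and its gradient, with \(a=10\) as in figure 6; the evaluation points are arbitrary.

```python
import numpy as np

def rastrigin(x, a=10.0):
    return a * x.size + np.sum(x ** 2 - a * np.cos(2 * np.pi * x))

def rastrigin_grad(x, a=10.0):
    return 2.0 * x + 2.0 * np.pi * a * np.sin(2 * np.pi * x)

print(rastrigin(np.zeros(2)))        # 0.0 at the global minimum
x = np.full(2, 2.5)                  # a point away from the origin
print(rastrigin(x), rastrigin_grad(x))
```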
Figure 1: Left) Rosenbrock function in log scale (\(n=2\)), Center) Contours, Right) Gradient field (note how large the norm of this vector is near the global minimum, which is important to understand why, even near this global optimum, many optimizers can fail to reach it)
Figure 3: Comparative visualization of convergence speeds on the rosenbrock function
Figure 2: Rosenbrock function
We optimized the Rastrigin function in log scale (to make the global minimum sharp, figure 7). The function is unimodal (in terms of global minimum), and the global minimum is very sharp and surrounded symmetrically by many local minima. At the beginning of optimization, we fall very quickly into a local minimum. Then, depending on the learning rate and the optimizer used (and the associated hyperparameters), we can move successively from one minimum to another until we reach the global minimum.
Again, adadelta and adagrad are very slow compared to sag. We can see in figure 7a a comparative progression of these three algorithms. After 400 000 iterations, adadelta and adagrad were still going down to the valley, while SAG did it in less than 1000 iterations, which is 400 times faster than both. Adadelta manages to reach the minimum, which sag ultimately never does. Rprop performs very poorly here: it never leaves the first local minimum it falls into. This is the method that obtains the largest error.
Figure 5: Comparative visualization of the progression of each algorithm on the rosenbrock function
Figure 6: Left) Rastrigin function in log scale (\(A=10,n=2\)), Center) Contours, Right) Gradient field
Figure 7: Rastrigin function
Figure 8: Comparative visualization of convergence speeds on the rastrigin function
Figure 9: Final errors at steady states on the rastrigin function
Figure 10: Comparative visualization of the progression of each algorithm on the rastrigin function
#### 7.2.1 Scikit-learn dataset
We extracted the following datasets from scikit-learn (Pedregosa et al., 2011). The reader is invited to refer to the official scikit-learn website 10 for more information about these data (sources,...).
Footnote 10: [https://scikit-learn.org/stable/datasets/toy_dataset.html](https://scikit-learn.org/stable/datasets/toy_dataset.html)
* wine (classification): recognize the wine class given the features like the amount of alcohol, magnesium, phenol, colour intensity, etc.
* iris (classification): It contains sepal and petal lengths and widths for three classes of plants (Setosa, Versicolour, and Virginica)
* digits (classification): digit classification
* boston (regression): house prices in Boston based on the crime rate, nitric oxide concentration, number of rooms, distances to employment centers, tax rates, etc. The output feature is the median value of homes.
* diabete (regression): sklearn diabetic dataset
* linnerud (regression): physical exercise Linnerud dataset
We trained a multilayer perceptron with one hidden layer of dimension 50, a leaky rectified linear unit (Leaky ReLU) activation (with a negative slope of 0.01) (Maas, 2013) and a dropout of probability 0.1 (Srivastava et al., 2014), for 2000 epochs (a PyTorch sketch is given below).
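A PyTorch sketch of this classifier head as we understand the description; the input and output dimensions depend on the dataset, and the example values below (13 features, 3 classes, as for wine) are only illustrative.

```python
import torch.nn as nn

def make_mlp(n_features, n_outputs, hidden=50, negative_slope=0.01, p_drop=0.1):
    """One hidden layer of size 50, LeakyReLU(0.01) activation, dropout 0.1."""
    return nn.Sequential(
        nn.Linear(n_features, hidden),
        nn.LeakyReLU(negative_slope),
        nn.Dropout(p_drop),
        nn.Linear(hidden, n_outputs),
    )

model = make_mlp(n_features=13, n_outputs=3)   # e.g. the wine dataset
print(model)
```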
The results are presented in the following figures :
* wine: 11, 12, 13, 14, 15, 16, 17
* iris: 18, 19, 20, 21, 22, 23, 24
* digits: 25, 26, 27, 28, 29, 30, 31
* boston: 32, 33, 34, 35, 36, 37, 38
* linnerud: 39, 40, 41, 42, 43, 44, 45
* diabete: 46, 47, 48, 49, 50, 51, 52
| Dataset | # features | # classes | size | train size (80%) | val size (20%) |
|---|---|---|---|---|---|
| wine | 13 | 3 | 178 | 142 | 36 |
| iris | 4 | 3 | 150 | 120 | 30 |
| digits | 64 | 10 | 1797 | 1437 | 360 |

Table 1: Information about the sklearn datasets (classification)
| Dataset | # features | # outputs | size | train size (80%) | val size (20%) |
|---|---|---|---|---|---|
| boston | 13 | 1 | 506 | 404 | 102 |
| diabete | 10 | 1 | 442 | 353 | 89 |
| linnerud | 3 | 3 | 20 | 16 | 4 |

Table 2: Information about the sklearn datasets (regression)
Figure 11: adadelta, adagrad, sag (wine)
Figure 12: momentum, nesterov, asgd, sag, sag_sgd (wine)
Figure 14: adam, custom_adam, adam_inverse_sqrt, adam_cosine, sag, sag_sgd, sag_adam (wine)
Figure 13: adam, amsgrad, adamax, sag, sag_adam (wine)
Figure 16: Comparative visualization of convergence speeds (wine)
Figure 17: Performances at steady states (wine)
Figure 15: Summary (wine)
Figure 19: momentum, nesterov, asgd, sag, sag_sgd (iris)
Figure 18: adadelta, adagrad, sag (iris)
Figure 21: adam, custom_adam, adam_inverse_sqrt, adam_cosine, sag, sag_sgd, sag_adam (iris)
Figure 20: adam, amsgrad, adamax, sag, sag_adam (iris)
Figure 23: Comparative visualization of convergence speeds (iris)
Figure 24: Performances at steady states (iris)
Figure 22: Summary (iris)
Figure 26: momentum, nesterov, asgd, sag, sag_sgd (digits)
Figure 25: adadelta, adagrad, sag (digits)
Figure 27: adam, amsgrad, adamax, sag, sag_adam (digits)
Figure 28: adam, custom_adam, adam_inverse_sqrt, adam_cosine, sag, sag_sgd, sag_adam (digits)
Figure 31: Performances at steady states (digits)
Figure 30: Comparative visualization of convergence speeds (digits)
Figure 29: Summary (digits)
Figure 33: momentum, nesterov, asgd, sag, sag_sgd (boston)
Figure 34: adam, amsgrad, adamax, sag, sag_adam (boston)
Figure 35: adam, custom_adam, adam_inverse_sqrt, adam_cosine, sag, sag_sgd, sag_adam (boston)
Figure 32: adadelta, adagrad, sag (boston)
Figure 38: Performances at steady states (boston)
Figure 37: Comparative visualization of convergence speeds (boston)
Figure 39: adadelta, adagrad, sag (linnerud)
Figure 36: Summary (boston)
Figure 41: adam, amsgrad, adamax, sag, sag_adam (linnerud)
Figure 43: Summary (linnerud)
Figure 42: adam, custom_adam, adam_inverse_sqrt, adam_cosine, sag, sag_sgd, sag_adam (linnerud)
Figure 40: momentum, nesterov, asgd, sag, sag_sgd (linnerud)
Figure 44: Comparative visualization of convergence speeds (linnerud)
Figure 45: Performances at steady states (linnerud)
Figure 46: adadelta, adagrad, sag (diabete)
Figure 47: momentum, nesterov, asgd, sag, sag_sgd (diabete)
Figure 48: adam, amsgrad, adamax, sag, sag_adam (diabete)
Figure 50: Summary (diabete)
Figure 49: adam, custom_adam, adam_inverse_sqrt, adam_cosine, sag, sag_sgd, sag_adam (diabete)
Figure 52: Performances at steady states (diabete)
#### 7.2.2 TorchVision dataset
We extracted the datasets presented in table 3 from pytorch (Paszke et al., 2019). The reader is kindly invited to refer to the official pytorch website 11 for more information about these data (sources,...).
Footnote 11: [https://pytorch.org/vision/stable/datasets.html](https://pytorch.org/vision/stable/datasets.html)
We trained a classifier with two main successive parts (a PyTorch sketch follows the list):
* A first part consisting of two convolutional layers:
* (0): Conv2d(# channels, 10, kernel_size=(5, 5), stride=(1, 1))
* (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
* (2): Conv2d(10, 10, kernel_size=(5, 5), stride=(1, 1))
* (3): Dropout2d(p=0.1, inplace=False)
* (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
* A second part consisting of a two-layer feed-forward neural network:
* (0): Linear(in_features=160, out_features=50, bias=True)
* (1): Dropout(p=0.1, inplace=False)
* (2): Linear(in_features=50, out_features=10, bias=True)
* (3): Dropout(p=0.1, inplace=False)
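A PyTorch sketch of the two-part classifier listed above, reproduced as described (the listing shows no activation functions, so none are added here); in_features=160 corresponds to 1x28x28 inputs such as MNIST, which is our own computation.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Two convolutional layers followed by a two-layer feed-forward head."""
    def __init__(self, n_channels=1, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 10, kernel_size=5),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(10, 10, kernel_size=5),
            nn.Dropout2d(p=0.1),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        self.head = nn.Sequential(
            nn.Linear(160, 50),
            nn.Dropout(p=0.1),
            nn.Linear(50, n_classes),
            nn.Dropout(p=0.1),
        )

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(start_dim=1))

print(SmallCNN()(torch.zeros(1, 1, 28, 28)).shape)  # torch.Size([1, 10])
```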
## 8 Summary and Discussion
In this work, we compared the performance of SAG and several other optimization algorithms for continuous objectives: SGD with momentum, Nesterov accelerated SGD, averaged SGD, RMSProp (with and without momentum), the resilient backpropagation algorithm (Rprop), Adadelta, Adagrad, Adam, AMSGrad, Adamax, and Adam with special learning rate decay procedures (inverse square root of the update number, cyclical schedule that follows the cosine function). SAG, despite its simple iteration, outperforms the majority of these algorithms. We have proposed two combinations of SAG: one with the momentum algorithm, which allows control of the importance of each gradient term in the average used by SAG depending on the iteration during which it is used, and another with Adam, where the importance of the square of the norm of the gradient is also controlled. These two variants allowed us to empirically improve the speed while obtaining better performance.
**Limitations.** The memory cost of SAG is very high compared to other algorithms, which makes it impractical for large-scale use.
**Perspectives.** What we presented as an improvement is only an empirical illustration of the performance of SAG. It would be interesting to theoretically evaluate the expected convergence rate of all these algorithms. We leave this for future work.
## Acknowledgement
The authors thank Fabian Bastin, who made this work possible, for discussions at the early stage of this project during the stochastic programming (IFT6512) course at UdeM (Université de Montréal). We also thank Compute Canada for computational resources.
| Dataset | (# channels, height, width) | # classes | size | train size | val size |
|---|---|---|---|---|---|
| mnist | (1, 28, 28) | 10 | 70000 | 60000 | 10000 |
| fashion mnist | (1, 28, 28) | 10 | 70000 | 60000 | 10000 |
| cifar10 | (3, 32, 32) | 10 | 60000 | 50000 | 10000 |
| cifar100 | (3, 32, 32) | 100 | 60000 | 50000 | 10000 |

Table 3: TorchVision datasets
|
2302.12560
|
Assessing model-based carbon and oxygen abundance derivation from
ultraviolet emission lines in AGNs
|
We present an adapted version of the code HII-CHI-Mistry-UV (Pérez-Montero
& Amorín 2017) to derive chemical abundances from emission lines in the
ultraviolet, for use in narrow line regions (NLR) of Active Galactic Nuclei
(AGN). We evaluate different ultraviolet emission line ratios and how different
assumptions about the models, including the presence of dust grains, the shape
of the incident spectral energy distribution, or the thickness of the gas
envelope around the central source, may affect the final estimates as a
function of the set of emission lines used. We compare our results with other
published recipes for deriving abundances using the same emission lines and
show that deriving the carbon-to-oxygen abundance ratio using CIII] $\lambda$
1909 \r{A} and OIII] $\lambda$ 1665 \r{A} emission lines is a robust indicator
of the metal content in AGN that is nearly independent of the model
assumptions, similar to the case of star-forming regions. Moreover, we show
that a prior determination of C/O allows for a much more precise determination
of the total oxygen abundance using carbon UV lines, as opposed to assuming an
arbitrary relationship between O/H and C/O, which can lead to non-negligible
discrepancies.
|
Enrique Pérez-Montero, Ricardo Amorín, Borja Pérez-Díaz, José M. Vílchez, Rubén García-Benito
|
2023-02-24T10:24:13Z
|
http://arxiv.org/abs/2302.12560v1
|
Assessing model-based carbon and oxygen abundance derivation from ultraviolet emission lines in AGNs
###### Abstract
We present an adapted version of the code HII-CHI-mistry-UV (Perez-Montero & Amorin, 2017) to derive chemical abundances from emission lines in the ultraviolet, for use in narrow line regions (NLR) of Active Galactic Nuclei (AGN). We evaluate different ultraviolet emission line ratios and how different assumptions about the models, including the presence of dust grains, the shape of the incident spectral energy distribution, or the thickness of the gas envelope around the central source, may affect the final estimates as a function of the set of emission lines used. We compare our results with other published recipes for deriving abundances using the same emission lines and show that deriving the carbon-to-oxygen abundance ratio using C iii] \(\lambda\) 1909 A and O iii] \(\lambda\) 1665 A emission lines is a robust indicator of the metal content in AGN that is nearly independent of the model assumptions, similar to the case of star-forming regions. Moreover, we show that a prior determination of C/O allows for a much more precise determination of the total oxygen abundance using carbon UV lines, as opposed to assuming an arbitrary relationship between O/H and C/O, which can lead to non-negligible discrepancies.
keywords: galaxies: active - galaxies: abundances - galaxies: evolution - galaxies: nuclei - galaxies: formation- galaxies: ISM - galaxies: Seyfert
## 1 Introduction
Active Galactic Nuclei (AGN) host one of the most powerful energy sources in the universe. The intense radiation field emanating from the hot accretion disks around supermassive black holes in galactic centers produces very bright emission lines from the surrounding gas. From this, various physical properties and chemical abundances in these regions can later be inferred up to a very high redshift, which can serve as indicators of the evolution of the universe at different cosmic epochs.
The UV region is of outstanding importance in this context. Several bright emission lines produced by AGNs, such as N v, He ii, C iv, O iii], and C iii], which are sensitive to the ionisation conditions and physical properties (i.e., electron density and temperature) of the warm ISM (e.g., Kewley et al., 2019), can be identified in the \(\lambda\sim\)1000A -2000A region. While this range at \(z\lesssim 2\) is typically explored using the _Hubble Space Telescope_ (HST; e.g. Rigby et al., 2018; Berg et al., 2022), these lines are redshifted to rest-optical wavelengths at \(z\sim\)2-4. Therefore, deep optical spectroscopic surveys with ground-based telescopes of 8-10m class (e.g. Steidel et al., 2003; Shapley et al., 2003; Lilly et al., 2007; Kurk et al., 2013; Le Fevre et al., 2015; McLure et al., 2018) are typically efficient at detecting the largest samples of UV line emitters (e.g. Steidel et al., 2014; Maseda et al., 2017; Amorin et al., 2017; Nakajima et al., 2018; Le Fevre et al., 2019; Schmidt et al., 2021; Feltre et al., 2020; Saxena et al., 2020; Saxena et al., 2022; Lirena et al., 2022).
Classical methods for determining chemical abundance are instead mostly based on rest- optical emission lines (e.g. Maiolino & Mannucci, 2019). However, for galaxies at \(z\sim\) 2-3 these are redshifted to the near-infrared (NIR) and detections are often limited to the few bright lines ( e.g. Steidel et al., 2014; Shapley et al., 2015). Joint analysis of the observed rest-UV and rest- optical emission spectra with predictions from detailed photoionisation models for both star-forming galaxies ( e.g. Gutkin et al., 2016; Byler et al., 2018; Perez-Montero & Amorin, 2017) and AGNs ( e.g. Feltre et al., 2016; Nakajima et al., 2018; Dors et al., 2019; Hirschmann et al., 2019; Mignoli et al., 2019) thus emerge as powerful diagnostics for the ionisation and chemical abundances of galaxies (e.g. Patricio et al., 2016; Vanzella et al., 2016; Amorin et al., 2017; Byler et al., 2020) and pave the way for similar analyses at higher redshifts with the _James Webb Space Telescope_(e.g. Chevallard et al., 2019; Rigby et al., 2021).
The derivation of the metal content in the Narrow Line Region (NLR) of AGN galaxies from optical collisionally excited lines (CELs) is much more complex than in the case of star-forming regions, since the direct method (i.e., based on the prior determination of the electron temperature) can significantly underestimate the derived oxygen abundances (e.g. Dors et al., 2015), which are often considered as proxies for gas metallicity in galaxies. Instead, there are several studies dealing with the determination of different elemental abundances in AGN based on pure photoionisation models (e.g. Storchi-Bergmann et al., 1998; Castro et al., 2017; Thomas et al., 2019) or on models assuming shocks in combination with photoionisation (e.g. Dors et al., 2021).
In the case of UV emission lines, since direct determination of chemical abundances in AGN is still not reliable, many of the various approaches pursued are also based on photoionisation models, including detailed models for individual objects (e.g. Davidson, 1977; Osmer, 1980; Uomoto, 1984; Gaskell et al., 1981; Hamann and Ferland, 1992; Ferland et al., 1996; Dietrich and Wilhelm-Erkens, 2000; Hamann et al., 2002; Shin et al., 2013; Feltre et al., 2016; Yang et al., 2017). For a large number of objects, other authors propose calibrations of some UV emission line ratios sensitive to chemical abundance in these detailed models (e.g. Dors et al., 2014, 2019), or compare them in model-based diagnostic diagrams (e.g. Nagao et al., 2006; Matsuoka et al., 2009, 2018).
Another strategy suitable for use in large surveys is to use a Bayesian-like comparison between the adequate emission line ratios and the predictions from large model grids (e.g. Mignoli et al., 2019). This method has the advantage of better quantifying uncertainties and easily specifying the assumptions required when the number of emission lines input is limited. For example, the use of photoionisation models, which include various combinations of C/O and O/H has great significance for estimating gas-phase metallicity in cases where very few C lines (e.g., C iv\(\lambda\) 1549 A and C iii] \(\lambda\) 1909 A) UV emission lines can be measured.
In this work, we present an adapted version of the code Hi-Chi-mistry-UV (hereafter HCm-UV, Perez-Montero and Amorin, 2017), originally developed for deriving O/H and C/O from UV emission lines in star-forming regions, for application to the NLR of AGN. The code is based on HCm(Perez-Montero, 2014), which deals with optical emission lines for deriving O/H and N/O ratios in star-forming regions, and has been extended for use in the NLR of AGN in Perez-Montero et al. (2019).
The above work has shown that when applied to star-forming galaxies, the abundances provided by the HCm version using optical emission lines are in complete agreement with those provided by the direct method, while for AGNs they are in agreement with the expected metallicities at the centers of their host galaxies (Dors et al., 2020). This new version of the HCm-UV code is potentially useful for constraining the metallicity of the NLR of AGN up to very high redshift, and also provides a solution to the constraint imposed by using carbon emission lines to derive oxygen abundances, since it also estimates C/O.
The paper is organised as follows: In Section 2, we describe the method for deriving chemical oxygen and carbon abundances in the NLR of AGN from their UV emission lines, based on the code HCm-UV. In the same section, we describe our grids of photoionisation models and we discuss their validity under different assumptions and for using different sets of emission lines. In Section 3, we apply the method to a sample of compiled data on UV lines for NLR in AGNs, and in Section 4 we discuss the results and compare them with other published calibrations based on the same available emission lines. Finally, in Section 5 we summarize our results and draw our conclusions.
## 2 Description of the method
The method discussed in this paper is the adaptation of the code HCm-UV, originally developed for use with star-forming objects (Perez-Montero and Amorin, 2017), to the NLR of AGN. The extension of different versions of the code for use on AGN has already been done for optical lines by Perez-Montero et al. (2019) and for infrared lines by Perez-Diaz et al. (2022). In this section, we discuss the model grids that the code uses to estimate O/H, C/O, and the ionisation parameter (log \(U\)) in these objects, as well as the defined observables based on the most typical UV emission line flux ratios that the code uses to derive these properties using a Bayesian-like methodology. Finally, we discuss how different assumptions for the models and the emission lines used affect the results.
The version of the code described here (v. 5.0), along with other versions prepared for use in other spectral domains, is publicly available1.
Footnote 1: In the webpage [http://www.iaa.csic.es/~epm/HII-CHI-mistry.html](http://www.iaa.csic.es/~epm/HII-CHI-mistry.html)
### Description of the models
The entire grid of models used in HCm-UV was calculated using the photoionisation code Cloudy v.17.01 (Ferland et al., 2017), which assumes a point-like central ionisation source surrounded by a gas distribution. The models partially correspond to those of the optical version of the code described in Perez-Montero et al. (2019), taking into account a spectral energy distribution (SED) with two components: one for the Big Blue bump peaking at 1 Ryd, and the other represented by a power law with spectral index \(\alpha_{x}=\) -1 for the nonthermal X-rays. For the continuum between 2 keV and 2500 A we considered a power law with two possible values for the spectral index \(\alpha_{ox}\)=-0.8 and -1.2. The first value fits better the emission line fluxes in tailored models by Dors et al. (2017), while the second value fits better the average value found by Miller et al. (2011) in a sample of Seyfert-2 galaxies. For the gas, we assumed a filling factor of 0.1 and a constant density of 500 cm\({}^{-3}\), as is typical in the NLRs around type 2 AGNs (Dors et al., 2014). As discussed in Perez-Montero et al. (2019) for the optical version of the code, assuming a higher value of \(2\times 10^{3}\) cm\({}^{-3}\) does not lead to large deviations in the obtained results. Different grids were calculated considering the criterion of stopping the models and measuring the resulting spectrum as a function of the fraction of free electrons (\(f_{e}\)) in the last zone (i.e., the outermost with respect to the ionising source). In some grids we considered a fraction of 98% and in others a fraction of 2%. All chemical abundances were scaled to oxygen according to the solar proportions given by Asplund et al. (2009), except for nitrogen and carbon, which are in solar proportion, but whose relation to oxygen was left in the models as an additional free
input parameter. Finally, we considered models with a dust-to-gas ratio fixed at the default value for the Milky Way, as well as models that do not include any dust mixed with the gas. Although a larger variety of assumptions on the dust composition and proportion would be more realistic, these cannot be constrained from the input emission-line fluxes, so they were not explored further in order to avoid an unnecessarily high number of models in our grids.
Overall, the models in each grid cover the range 12+log(O/H) from 6.9 to 9.1 in bins of 0.1 dex and values of log(C/O) from -1.4 to 0.6 in bins of 0.125 dex. In addition, all models consider values of log \(U\) from -4.0 to -0.5 in bins of 0.25 dex. This gives a total of 5 865 models for each of the resulting grids. Considering additionally the two possible values of \(\alpha_{OX}\) (-0.8 and -1.2), the two possible values of \(f_{e}\) in the outermost zone (2% or 98%), and the presence or absence of dust grains mixed with the gas, we obtain a total of 8 different grids.
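As a quick consistency check, the quoted grid sizes can be reproduced by simply enumerating the parameter bins; the short Python sketch below is purely illustrative and is not part of HCm-UV:

```python
import numpy as np

# Parameter bins quoted above (both ends of each range are included).
oh_values   = np.arange(6.9, 9.1 + 1e-6, 0.1)     # 23 values of 12+log(O/H)
co_values   = np.arange(-1.4, 0.6 + 1e-6, 0.125)  # 17 values of log(C/O)
logu_values = np.arange(-4.0, -0.5 + 1e-6, 0.25)  # 15 values of log U

models_per_grid = len(oh_values) * len(co_values) * len(logu_values)
print(models_per_grid)   # 5865

# Two alpha_OX values, two stopping criteria (f_e = 2% or 98%), dust or no dust:
print(2 * 2 * 2)         # 8 different grids
```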
### The HCm-UV code adapted for AGNs
The version of the code described here for use with the NLRs of AGNs follows a procedure similar to that of the versions for other spectral regions, such as the optical (Perez-Montero, 2014) or the IR (Fernandez-Ontiveros et al., 2021). In all cases, the code performs a Bayesian-like calculation that compares certain emission-line fluxes and their errors with the results of the models in each grid. Here we discuss the results obtained with the model grids described above; however, this new version of the code allows the user to supply alternative grids of input models with the same format as the default ones.
To calculate the corresponding final mean values, HCm-UV weights each model by the \(\chi^{2}\) computed from a set of specific emission-line ratios that are sensitive to the properties we want to derive, as described in Perez-Montero (2014). The uncertainties of the derived abundances and \(U\) are calculated as the quadratic sum of the weighted standard deviation and the dispersion of the results of a Monte Carlo simulation, in which the input errors are used as random perturbations around the nominal value of each emission-line flux.
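The weighting and error-estimation scheme can be sketched as follows. This is a hypothetical, simplified NumPy illustration: the exact χ² definition, the weighting function, and the number of Monte Carlo iterations used internally by HCm-UV are not reproduced here.

```python
import numpy as np

def chi2_weighted_estimate(obs_ratios, obs_errors, model_ratios, model_params,
                           n_mc=25, rng=None):
    """Bayesian-like estimate of one property (e.g. 12+log(O/H), log(C/O) or log U).

    obs_ratios   : (n_ratio,)          observed emission-line ratios (e.g. C3O3, C34, C3C4)
    obs_errors   : (n_ratio,)          their 1-sigma uncertainties
    model_ratios : (n_model, n_ratio)  the same ratios predicted by each model of the grid
    model_params : (n_model,)          value of the property in each model
    """
    rng = rng or np.random.default_rng()

    def one_estimate(ratios):
        chi2 = np.nansum((model_ratios - ratios) ** 2 / obs_errors ** 2, axis=1)
        weights = 1.0 / (chi2 + 1e-12)   # models closer to the observations weigh more
        mean = np.average(model_params, weights=weights)
        wstd = np.sqrt(np.average((model_params - mean) ** 2, weights=weights))
        return mean, wstd

    mean0, wstd = one_estimate(obs_ratios)
    # Monte Carlo: perturb the observed ratios within their errors and repeat.
    mc_means = [one_estimate(rng.normal(obs_ratios, obs_errors))[0] for _ in range(n_mc)]
    # Total uncertainty: quadratic sum of the weighted std and the Monte Carlo dispersion.
    return mean0, float(np.hypot(wstd, np.std(mc_means)))
```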
The list of UV emission lines allowed as input by HCm-UV, for both SF and AGN, includes: Ly\(\alpha\) at \(\lambda\) 1216 A, C iv\(\lambda\) 1549 A, O iii] \(\lambda\) 1665 A, and C iii] \(\lambda\) 1909 A. In addition, the code provides the ability to input optical emission lines H\(\beta\) and [O iii] 5007 A to obtain estimates of abundances using emission line ratios that are sensitive to electron temperature, as is the case with 5007/1665 (Perez-Montero & Amorin, 2017).
Unlike the previous version for star-forming galaxies, this version also includes the emission lines [N v] at \(\lambda\) 1239 Å and He ii at \(\lambda\) 1640 Å, since these lines, in combination with the other carbon emission lines in the UV, can be used to provide estimates of the total oxygen abundance in AGNs, as described in Dors et al. (2019). In any case, these two lines can also be used as input for the calculation of abundances in star-forming galaxies, since their presence can be considered a strong discriminating factor for the excitation of the gas when only massive stars are considered.
#### 2.2.1 C/O derivation
Following the same procedure as defined for SF objects, the code computes C/O in a first iteration, taking advantage of the fact that the emission line ratio C3O3 depends very little on \(U\). This observable can be defined as already used by Perez-Montero & Amorin (2017) for star-forming galaxies to derive C/O:
\[\mathrm{C3O3}=\log\left(\frac{I(\mathrm{C\,III]}\,1909)}{I(\mathrm{O\,III]}\,1665)}\right) \tag{1}\]
In Figure 1 we show the relation between this parameter and C/O for model sequences with \(\alpha_{OX}\) = -1.2, \(f_{e}\) = 98% and dust grains, computed at various values of log(\(U\)) for a fixed 12+log(O/H) = 8.7 (left panel) and at various values of O/H for a fixed log(\(U\)) = -2.0 (right panel). We also show additional sequences of models with \(\alpha_{OX}\) = -0.8, with \(f_{e}\) = 2%, and without dust grains to assess how varying these parameters affects this relation.
Figure 1 shows that despite a slight dependence on O/H, \(U\) or \(\alpha_{OX}\), there is a well-defined linear relationship between the C3O3 parameter and C/O. Only models without grains seem to predict a slightly lower slope compared to all other sequences. Compared to the linear relation derived in Perez-Montero & Amorin (2017) for models of star-forming galaxies, the sequences of the models for AGN are very close, albeit with a slightly lower slope, similar to that obtained for AGN without grains.
We conclude that C3O3 appears to be a robust indicator of C/O in AGNs based on UV emission lines, and that the code can therefore be used for its estimation.
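As an illustration, an approximate linear calibration of this kind can be recovered from any of the grids with a simple least-squares fit; the coefficients depend on the adopted grid, so none are quoted here, and the values in the example call are made up:

```python
import numpy as np

def fit_c3o3_calibration(model_c3o3, model_logco):
    """Straight-line fit log(C/O) = a + b * C3O3 through a set of grid models."""
    b, a = np.polyfit(model_c3o3, model_logco, deg=1)   # slope first, then intercept
    return a, b

# toy example with invented numbers (NOT real model predictions):
a, b = fit_c3o3_calibration(np.array([-1.0, -0.5, 0.0, 0.5]),
                            np.array([-1.2, -0.8, -0.4, 0.0]))
```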
After estimating C/O and its error, the code constrains the model grid to seek a solution for O/H and \(U\) in a second iteration. This procedure ensures that C lines can be used without any prior arbitrary assumption about the relationship between O/H and C/O if the latter can be inferred. If C/O cannot be estimated because some of the required lines are not available, the code assumes an expected relationship between O/H and C/O, as is the case for star-forming regions. By default, the code assumes a solar C/N ratio and adopts the empirical relation between O/H and N/O derived in Perez-Montero (2014) for star-forming objects. However, in the current updated version of the code, other relationships can be considered.
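Schematically, the two iterations can be chained as in the following hypothetical sketch, which reuses the `chi2_weighted_estimate` helper shown earlier; the ratio names used in the second iteration (C34, C34He2, N5He2, C3C4) are defined in Sect. 2.2.2, and the tolerance used to constrain the grid is an assumption made for illustration:

```python
import numpy as np

def derive_abundances(obs, err, grid, co_tol=0.15):
    """Two-iteration scheme: C/O from C3O3 first, then O/H and log U on the constrained grid.

    obs, err : dicts of observed line ratios and their errors, e.g. {'C3O3': ..., 'C34': ...}
    grid     : dict of per-model arrays: 'C3O3', 'C34', 'C34He2', 'N5He2', 'C3C4',
               'logCO', 'logOH', 'logU'
    """
    # --- first iteration: C/O from C3O3, which is nearly independent of U and O/H
    co, co_err = chi2_weighted_estimate(np.array([obs['C3O3']]), np.array([err['C3O3']]),
                                        grid['C3O3'][:, None], grid['logCO'])

    # --- constrain the grid to models compatible with the derived C/O
    keep = np.abs(grid['logCO'] - co) <= max(co_err, co_tol)

    # --- second iteration: O/H and log U from the remaining ratios
    names  = [n for n in ('C34', 'C34He2', 'N5He2', 'C3C4')
              if n in obs and np.isfinite(obs[n])]
    ratios = np.array([obs[n] for n in names])
    errors = np.array([err[n] for n in names])
    models = np.column_stack([grid[n][keep] for n in names])
    oh,   oh_err   = chi2_weighted_estimate(ratios, errors, models, grid['logOH'][keep])
    logu, logu_err = chi2_weighted_estimate(ratios, errors, models, grid['logU'][keep])
    return (co, co_err), (oh, oh_err), (logu, logu_err)
```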
#### 2.2.2 O/H and \(U\) derivation
Among the various emission line ratios used in the second iteration to calculate the \(\chi^{2}\) weights, the code uses parameter C34, defined as follows:
\[\mathrm{C34}=\log\left(\frac{I(\mathrm{C\,IV}\,1549)+I(\mathrm{C\,III]}\,1909)}{I(\mathrm{H\,I})}\right) \tag{2}\]
This parameter was also used for star-forming objects in Perez-Montero & Amorin (2017). In this case, I(H i) denotes the intensity of a hydrogen recombination line, for which we take Ly\(\alpha\) in the UV. In Figure 2 we show the relation between this parameter and O/H for some sequences of models from the grid at fixed log(C/O) = -0.4 (left panel) and fixed log \(U\) = -2.0 (right panel), with \(\alpha_{OX}\) = -1.2, \(f_{e}\) = 98% in the last zone, and with dust grains.
It is noted that the relationship between this parameter and the total oxygen abundance can be bivalued in certain sequences, since it increases at low values of O/H, remains almost constant, and finally starts to decrease when the metallicity is very high. This behaviour contrasts with that observed in models of star-forming galaxies, where the parameter increases monotonically with metallicity throughout the range studied, as discussed in Perez-Montero and Amorin (2017). However, the metallicity at which the turnover of the curve occurs depends on the assumed C/O and \(U\), and the relation can be nearly linear for log \(U\) = -1.0. In fact, these two parameters have a very large influence on the relation between the parameter and the metallicity.
Despite the large dependence of this parameter on \(U\) or C/O, the models predict a negligible deviation if we consider another, harder SED with \(\alpha_{OX}\) = -0.8. However, the assumption about the absence of dust grains is critical, since C34 shows a much more pronounced bivalued behaviour when no grains are considered, with much lower values at high metallicities, because Ly\(\alpha\) is much more affected by dust extinction.
In addition, the inclusion of very low excitation zones also affects the behaviour of this parameter, since Ly\(\alpha\) is then much more strongly absorbed, so that C34 is much higher for the same O/H. This is consistent with the fact that Ly\(\alpha\) at \(\lambda\) 1216 Å is often absorbed by the neutral gas surrounding the ionised nebulae and is therefore difficult to detect in many objects.
An alternative is to define the same parameter as a function of the He ii line at \(\lambda\) 1640 Å, as also done in Dors et al. (2019). This takes advantage of the fact that this line is often well detected in high-excitation AGN and suffers less absorption by the surrounding neutral gas. The corre
Figure 1: Relationship between the emission-line ratio C3O3 and log(C/O) for model sequences with different values of the input parameters (left for fixed 12+log(O/H) = 8.7, and right for fixed log \(U\) = -2.0). The solid lines represent models with \(\alpha_{OX}\) = -1.2, \(f_{e}\) in the outermost zone of 98%, and dust grains. The other lines change only one parameter with respect to this sequence: the dashed line represents models with \(\alpha_{OX}\) = -0.8, the dot-dashed line models with \(f_{e}\) of 2%, and the thin line models without grains. In the left panel, the black dashed line represents the linear fit for models of star-forming galaxies from Pérez-Montero and Amorin (2017).
Figure 2: Relationship between emission line ratio C34 using the Ly\(\alpha\) line and total oxygen abundance for different model series at fixed log(C/O) = -0.4 (left panel) and log \(U\) = -2.0 (right panel). The solid lines represent models for \(\alpha_{OX}\) = -1.2, \(f_{e}\) in the outermost zone of 98% and considering dust grains. The other lines change only one parameter in relation to this sequence.
sponding observable based on the same lines can be defined as follows,
\[\mathrm{C34He2}=\log\left(\frac{I(\mathrm{C\,IV}\,1549)+I(\mathrm{C\,III]}\,1909)}{I(\mathrm{He\,II}\,1640)}\right) \tag{3}\]
which is also used for the NLR of AGNs by Dors et al. (2014), where it is named C43. However, in this work we refer to it as C34He2 to distinguish it from our C34 parameter defined above. The relation between C34He2 and O/H is shown in Figure 3 for the same model series at fixed log(C/O) = -0.4 (left panel) and fixed log \(U\) = -2.0 (right panel), for \(\alpha_{OX}\) = -1.2, \(f_{e}\) = 98% and with dust grains. Furthermore, as in the previous plots, we show additional sequences with \(\alpha_{OX}\) = -0.8, with \(f_{e}\) = 2% in the last zone, and also without dust grains.
As shown, this parameter follows a trend similar to that observed for C34: it increases with increasing metallicity, although it saturates and begins to decrease at very high O/H values. The metallicity at which this turnover occurs depends on \(U\), being less pronounced for higher \(U\) values. This additional dependence on \(U\) strengthens the basis of the procedure in the code, by which O/H and \(U\) are calculated simultaneously in this second iteration.
The relation between C34He2 and O/H does not appear to change significantly when other, harder values of \(\alpha_{OX}\) are considered nor, as expected, when a larger thickness of the nebula is assumed in the models, since He ii, unlike Ly\(\alpha\), is not absorbed by the neutral gas. However, the models without grains predict very different values of the parameter for the same O/H, since the internal extinction of the gas affects the opacity of high-excitation lines such as C iv or He ii, leading to higher values of the parameter, in contrast to C34.
Another observable that the code can use in this second iteration is also defined and used by Dors et al. (2019) to derive O/H based on the highly excited emission lines that
Figure 4: Relationship between the emission-line ratio N5He2 and 12+log(O/H) for model sequences with different values of the input parameters at fixed log(C/O) = -0.4. The solid lines represent models with \(\alpha_{OX}\) = -1.2, \(f_{e}\) in the outermost zone of 98%, and dust grains. The other lines change only one parameter with respect to this sequence: the dashed line represents models with \(\alpha_{OX}\) = -0.8, the dot-dashed line models with \(f_{e}\) of 2%, and the thin line models without grains.
Figure 3: Relationship between the C34He2 emission-line ratio and the total oxygen abundance for different model sequences at fixed log(C/O) = -0.4 (left panel) and fixed log \(U\) = -2.0 (right panel). The solid lines represent models with \(\alpha_{OX}\) = -1.2, \(f_{e}\) in the outermost zone of 98%, and dust grains. The other lines change only one parameter with respect to this sequence.
can be observed in the UV spectra of AGNs. It is the N5He2 parameter, which can be defined as follows,
\[\mathrm{N5He2}=\log\left(\frac{I(\mathrm{[N\,V]}\,1239)}{I(\mathrm{He\,II}\,1640)}\right) \tag{4}\]
In Figure 4 we show the relation between this parameter and O/H for model sequences at fixed log(C/O) = -0.4 and varying log \(U\), assuming an AGN SED with \(\alpha_{OX}\) = -1.2, a final zone with \(f_{e}\) = 98%, and dust grains. Additional sequences consider other values for these input parameters.
Similar to C34 and C34He2, the N5He2 parameter also shows a strong dependence on \(U\) and a bivalued relationship with the oxygen abundance. The parameter takes lower values for lower \(U\) but, in contrast to the previous parameters, the metallicity of the turnover point does not appear to change.
Moreover, since it depends on high-excitation lines, it is more sensitive to the shape of the SED, so it takes lower values when we change \(\alpha_{OX}\) to -0.8, while it does not change significantly for a lower value of \(f_{e}\) in the last zone of the models. Finally, the effect of the absence of dust grains is not negligible, although the difference is not as large as in the case of C34, for the same reason. Moreover, since N is also a secondary element, N5He2 has a strong dependence on N/O, which can be partially reduced by assuming that C/N remains unchanged in the models. This assumption is not fully justified, since the stellar mass range of the nucleosynthetic production of these two elements is not exactly the same (Henry et al., 2000), although both can be similarly affected by processes of hydrodynamic gas exchange (Edmunds, 1990). Changing N/O independently of C/O in the model grids would in principle allow us to explore in more detail the impact of N/O on the final metallicity derivation, but the absence of any UV emission-line ratio strictly dependent on N/O does not allow us to incorporate it into the code, and doing so would unnecessarily enlarge the number of models in each grid.
As noted above, the dependence on C/O of the emission-line ratios defined to derive O/H can be reduced by the prior determination of C/O in the first iteration using the C3O3 parameter. As for the excitation, the dependence of the various observables on it can be reduced by means of the C3C4 emission-line ratio, defined as follows,
\[\mathrm{C3C4}=\log\left(\frac{I(\mathrm{C\,III]}\,1909)}{I(\mathrm{C\,IV}\,1549)}\right) \tag{5}\]
This ratio is already used by the code for star-forming objects and also by Dors et al. (2019) for the case of NLRs in AGNs. This ratio was already proposed by Davidson (1972) as an indicator of excitation in gas nebulae, although Dors et al. (2014) points out that it also depends on metallicity for low values of log \(U\).
The relation of this emission line ratio with the ionisation parameter is shown in Figure 5 for different arbitrary model sequences at fixed log(C/O) = -0.4 (left panel) and fixed 12+log(O/H) = 8.7 (right panel) and assuming \(\alpha_{OX}\) = -1.2, \(f_{e}\) = 98% and with grains. As in the previous cases, models with harder SED are also shown with \(\alpha_{OX}\) = -0.8, with \(f_{e}\) = 2% and also without grains in the figures.
As can be seen, higher values of C3C4 indicate lower values of \(U\) for most sequences, except at very high values of O/H, which show a bivalued behaviour. This effect is weaker than that found for the O32 parameter in AGNs using the same models in the optical (Perez-Montero et al., 2019), which exhibits a bivalued behaviour at all values of O/H with a maximum at log \(U\) = -2.5. This monotonically decreasing relationship between C3C4 and \(U\) makes it easier for the code to use the grid of models at all values of \(U\). The trend does not change noticeably when we consider a value of \(\alpha_{OX}\) = -0.8 or a lower \(f_{e}\) for the stopping criterion.
On the other hand, the C3C4 parameter is very sensitive to the presence of dust grains mixed with the gas. Neglecting the presence of dust in the models results in much lower values of C3C4. Indeed, the relationship that emerges from these dust-free models is similar to the quadratic fit of Dors et al. (2019), who do not consider dust grains in their models. For this reason, some authors, such as Nagao
Figure 5: Relationship between the emission-line ratio C3C4 and the ionisation parameter for different model sequences at fixed log(C/O) = -0.4 (left panel) and fixed 12+log(O/H) = 8.7 (right panel). The solid lines represent models with \(\alpha_{OX}\) = -1.2, \(f_{e}\) in the outermost zone of 98%, and dust grains. The other lines change only one parameter with respect to this sequence; the black dashed line represents the quadratic fit of Dors et al. (2019). The vertical dashed magenta line marks the limit assumed for the C3C4 parameter above which dust is included in the models in the subsequent analysis.
et al. (2006) or Dors et al. (2019), do not include dust in their models because many of the analysed objects show very low values of C3C4. However, according to the model sequences shown in Figure 5, models with dust cannot be excluded for values of the C3C4 parameter \(>\) -0.3, the threshold marked with a vertical line in the left panel of the figure. In any case, other possible explanations for these very low values of C3C4 cannot be excluded and deserve future investigation, such as the possible existence of photon leaking, already observed in some extreme star-forming galaxies (e.g. Schaerer et al., 2022).
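For reference, the observables of Eqs. (1)-(5) can be computed directly from the measured line fluxes (any common flux unit cancels in the ratios); the helper below is a simple illustration, not HCm-UV code:

```python
import numpy as np

def uv_observables(f_nv=None, f_civ=None, f_oiii=None, f_heii=None,
                   f_ciii=None, f_lya=None):
    """Emission-line ratios of Eqs. (1)-(5); ratios lacking a required line are NaN."""
    def safe_log(num, den):
        return np.log10(num / den) if (num and den) else np.nan

    f_c34 = (f_civ or 0.0) + (f_ciii or 0.0)       # numerator shared by C34 and C34He2
    return {
        'C3O3':   safe_log(f_ciii, f_oiii),        # Eq. (1)
        'C34':    safe_log(f_c34, f_lya),          # Eq. (2), Ly-alpha taken as I(H I)
        'C34He2': safe_log(f_c34, f_heii),         # Eq. (3)
        'N5He2':  safe_log(f_nv, f_heii),          # Eq. (4)
        'C3C4':   safe_log(f_ciii, f_civ),         # Eq. (5)
    }

# e.g. uv_observables(f_ciii=1.0, f_civ=2.0, f_heii=0.8)['C34He2'] is about 0.57
```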
### Testing the method with model emission-lines
To verify the results of our model-based method, we used as input to the code the same emission line intensities predicted by the models to check that we obtain the same chemical abundances and ionisation parameters used for them. For this purpose, we also included a 10% uncertainty in the predicted model-based fluxes to simulate the effect of an additional source of error on the estimates. As explained above, the code uses the error of the flux as the standard deviation of a normal distribution of cases around the nominal value to perform a series of iterations that also reveal the associated uncertainty in the results.
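Schematically, this test amounts to the loop below (a hypothetical sketch; `run_code` stands for one call to HCm-UV and is passed in as a callable rather than being a real function name):

```python
import numpy as np

def self_consistency_test(grid_models, run_code, rel_err=0.10):
    """Feed the model-predicted fluxes back to the code and compare with the input values.

    grid_models : iterable of dicts with keys 'fluxes' (line -> flux), 'logOH', 'logCO', 'logU'
    run_code    : callable(fluxes, errors) -> dict with the derived 'logOH', 'logCO', 'logU'
    """
    true_oh, derived_oh = [], []
    for model in grid_models:
        fluxes = dict(model['fluxes'])
        errors = {line: rel_err * f for line, f in fluxes.items()}  # the assumed 10% error
        out = run_code(fluxes, errors)
        true_oh.append(model['logOH'])
        derived_oh.append(out['logOH'])
    resid = np.array(derived_oh) - np.array(true_oh)
    return resid.mean(), resid.std()   # mean offset and scatter in 12+log(O/H)
```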
In the panels of the first row in Figure 6 we show the input abundances and \(U\) assumed in the models of the grid for \(\alpha_{OX}\) = -1.2, a termination criterion when \(f_{e}\) = 98% is reached, and with dust grains. These values are compared with the results of the HCM-UV code when the same conditions are assumed and all lines accepted by the code are used. As can be seen, the agreement is excellent for most of the studied ranges (i.e., both the mean offset and the standard deviation of the residuals are less than 0.01 dex), so that both the abundances and the log(\(U\)) values can be determined using only the emission line intensities. Only for 12+log(O/H) \(<\) 7.5 does the code tend to find solutions that are about 0.1 dex above the correct input value for each model, but still within the error limits (i.e., 0.17 dex). For very high values of log(\(U\)) (i.e. \(>\) -0.8), the code also tends to find results that are below the correct values, with deviations greater than the errors obtained (with a mean of 0.2 dex), but for the rest of the examined range there is agreement that is better than the associated errors.
We also investigate the potential impact on the results of changing the assumed conditions in the library used by HCm-UV. Thus, we ran the code assuming the same conditions as above, but using as input the emission lines predicted by the models when a value \(\alpha_{OX}\) = -0.8 is assumed, leaving the other parameters equal to those assumed by the code. The corresponding comparisons are shown in the second row of Figure 6. In the case of O/H, the mean deviation is larger than the median uncertainty (0.18 dex) only at very low values, although there is also a systematic deviation for O/H \(>\) 8.0, in the sense that the code determines O/H values that are 0.1-0.2 dex below the correct values. In contrast to O/H, the mean deviation for C/O is always lower than the obtained uncertainty (0.11 dex) in all ranges. This confirms that C/O can be determined more reliably than O/H when a different ionising SED is assumed. However, the \(U\) estimate is more affected by the shape of the SED, as it is significantly overestimated, especially in the range -3 \(<\) log \(U\) \(<\) -1.
In the third row of Figure 6, we show the comparisons when we change the stopping criterion of the input models to a fraction of free electrons of 2%, leaving the other parameters the same. In contrast to the input, the code still assumes the same conditions as in the first case (i.e. a 98% fraction of free electrons in the last zone). In this case, when the code assumes a different, higher \(f_{e}\) value, the main offset occurs at high O/H values, where the code finds solutions about 0.3 dex below the correct input value, demonstrating the importance of the relative intensity of the H i lines for deriving the metallicity. On the other hand, no large deviation is found for the C/O derivation, but in the case of \(U\) the code systematically finds much lower values for log \(U\) \(>\) -1.0.
Finally, in the bottom row of Figure 6 we show the comparisons between the input values for models that do not consider dust and the values derived by the code assuming dust is present. As shown, in this situation the code overestimates O/H with a mean offset larger than the mean uncertainty (i.e., 0.17 dex) for O/H \(<\) 8.0, but underestimates it for O/H \(>\) 8.5, albeit with a large spread in each case. For C/O, the deviation between the solutions and the theoretical values is always within uncertainty (about 0.2 dex), except for very low values for C/O. It is also noted that there is a systematic tendency to find higher C/O values for high input values of C/O, but these deviations are always within errors. Finally, in the case of \(U\), we find better agreement, although the deviations are large for the highest values of \(U\).
From these comparisons, it appears that the C/O ratio is more robust than O/H when the conditions assumed in the models differ from those assumed by the code. For O/H, this is especially true at high metallicities when the assumption about the presence of dust grains is incorrect, while for \(U\) the largest discrepancies are found at very high values in all the comparisons tested. Finally, for log \(U\) \(<\) -1.5 all discrepancies lie within the derived uncertainties.
### Results as a function of different sets of input emission-lines
Another important aspect in the evaluation of our code arises from the fact that it can provide solutions for different sets of emission lines. To this end, in this subsection we compare the theoretical and resulting abundances, again using as inputs to the code different sets of emission-line fluxes with an additional 10% uncertainty. The corresponding mean offsets and standard deviations of the residuals obtained from the code for O/H, C/O, and log \(U\), compared with the theoretical values assumed by the models, are shown in Table 1 for different combinations of emission lines that can be given as input to the code and for different constraints on the model grid used.
As shown, when all the lines allowed by the code are provided, the mean offsets of the results are smaller than the mean scatter in all cases. This comparison is the same as that shown in the first row of Figure 6. The comparison is slightly worse when the optical [O iii] line at \(\lambda\) 5007 Å, which the code can use together with O iii] \(\lambda\) 1665 Å, is not provided, but the results still agree to better than the associated uncertainties. A similar result is obtained if the Ly\(\alpha\) emission line at \(\lambda\) 1216 Å is removed, since He ii can be used instead. This has no influence on the C/O determination.
On the other hand, if the O iii] \(\lambda\) 1665 Å line is not provided,
Figure 6: Comparisons between the chemical abundances and \(U\) assumed in the model grid and the same quantities calculated by HCm-UV when the predicted emission lines are used as input. The panels in the left column show the comparison for 12+log(O/H), the middle column for log(C/O), and the right column for log \(U\). In all cases, HCm-UV assumes AGN models with \(\alpha_{OX}\) = -1.2, \(f_{e}=98\%\) and dust grains. In the panels of the first row, we show the results when the same conditions are assumed for the input; in the second row, the input models have \(\alpha_{OX}\) = -0.8; in the third row, \(f_{e}=2\%\); and finally, in the bottom row, no dust grains. The solid red line represents the 1:1 relation in all panels.
HCm-UV cannot calculate the C3O3 ratio needed for the C/O estimate. In this case, very large deviations and errors result for both O/H and log \(U\) when the full model grid is used in the calculation. In this scenario, the code must assume a certain O/H-C/O relation to obtain a better solution for O/H when the C lines are used. As shown in Table 1, both the mean offsets and the standard deviations for O/H and log \(U\) are much better under this additional assumption. However, it is important to remember that this relation may be somewhat arbitrary, or valid for a given galaxy population but not for a particular object. In any case, even assuming such a relationship in the absence of the lines required for the prior derivation of C/O, much better results are obtained if the lines C iii], C iv], and N v] are used simultaneously relative to He ii than if they are used individually.
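In code, this amounts to keeping only the grid models that follow the adopted O/H-C/O relation; a hypothetical sketch is given below (the relation itself, `co_of_oh`, must be supplied by the user, and the tolerance of one C/O bin, 0.125 dex, is an assumption):

```python
import numpy as np

def restrict_grid(grid, co_of_oh, tol=0.125):
    """Keep only models whose log(C/O) lies within `tol` dex of the assumed O/H-C/O relation.

    grid     : dict of per-model arrays, including 'logOH' (12+log(O/H)) and 'logCO'
    co_of_oh : callable returning the assumed log(C/O) for a given 12+log(O/H)
    """
    expected = np.array([co_of_oh(oh) for oh in grid['logOH']])
    keep = np.abs(grid['logCO'] - expected) <= tol
    return {key: np.asarray(val)[keep] for key, val in grid.items()}
```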
The above results again highlight the importance of a prior determination of the C/O content in order to obtain an accurate determination of the total metal content of the gas, and are in agreement with the results already obtained using the same approach for star-forming galaxies in Perez-Montero and Amorin (2017).
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline
**Set of lines** & **Models** & \(\mathbf{\Delta_{OH}}\) & \(\mathbf{RMSE_{OH}}\) & \(\mathbf{\Delta_{CO}}\) & \(\mathbf{RMSE_{CO}}\) & \(\mathbf{\Delta_{U}}\) & \(\mathbf{RMSE_{U}}\) \\ \hline \hline
All lines & All & +0.01 & 0.21 & +0.02 & 0.14 & -0.01 & 0.26 \\
Ly\(\alpha\), N v], C iv], He ii, O iii] \(\lambda\) 1665 Å, C iii] & All & +0.08 & 0.25 & -0.05 & 0.14 & +0.03 & 0.11 \\
N v], C iv], He ii, O iii] \(\lambda\) 1665 Å, C iii] & All & +0.08 & 0.25 & -0.05 & 0.14 & +0.03 & 0.11 \\
N v], C iv], He ii, C iii] & All & +0.36 & 0.57 & – & – & +0.11 & 0.34 \\
N v], C iv], He ii, C iii] & O/H-C/O const. & +0.01 & 0.15 & – & – & +0.01 & 0.13 \\
C iv], He ii, C iii] & All & +0.32 & 0.64 & – & – & +0.04 & 0.46 \\
C iv], He ii, C iii] & O/H-C/O const. & +0.03 & 0.33 & – & – & -0.03 & 0.31 \\
N v], He ii & All & +0.27 & 0.66 & – & – & +0.08 & 0.48 \\
N v], He ii & O/H-C/O const. & +0.00 & 0.52 & – & – & -0.00 & 0.42 \\ \hline
\end{tabular}
\end{table}
Table 1: Median deviations and root mean square errors (RMSE) of the residuals between the theoretical abundances and log(\(U\)) values (AGN model inputs) and the estimates from HCm-UV, using different sets of emission lines and different grid constraints.
## 3 Application to Galaxy Samples
We compiled from the literature strong UV narrow emission-line fluxes for a sample of AGN consisting of Seyfert 2 galaxies (10), quasars (33), quiet radio galaxies (2), high-\(z\) AGN (1), and high-\(z\) radio galaxies (96). We take advantage of previous compilations (e.g. Nagao et al., 2006; Dors et al., 2019) of strong UV emission lines in AGN, namely N v \(\lambda\)1239 Å, C iv \(\lambda\)1549 Å, He ii \(\lambda\)1640 Å and C iii] \(\lambda\)1909 Å, but we also include information on the Ly\(\alpha\) and O iii] \(\lambda\)1665 Å emission lines when their measurements are given in the original references. Table 2 lists all the information we found for our sample, as well as the original references.
The Seyfert 2 (S2) sample consists of 9 objects taken directly from Nagao et al. (2006) and one additional galaxy, IZw 92, from Kraemer et al. (1994b). The sample of high-\(z\) radio galaxies (HZRG) was taken directly2 from the compilation presented by De Breuck et al. (2000), supplemented by 8 sources observed by Bornancini et al. (2007) and 9 galaxies analysed by Matsuoka et al. (2009). Our sample of quasars (QSO) consists of three different types of objects: Type II quasars (11), 10 of them listed by Nagao et al. (2006) plus J142331.71\(-\)001809.1, recently analysed by Onoue et al. (2021); extremely red quasars (21) from the compilation of Villar Martin et al. (2020); and one intermediate Type I-II quasar, observed by Lin et al. (2022). We added to the above list of objects 2 quiet radio galaxies (QRG) observed by Matsuoka et al. (2018), and a high-\(z\) AGN (zAGN) whose broad and narrow components were identified by Tang et al. (2022).
Footnote 2: From the original list of 167 sources, we omit 9 sources for which the observations by Matsuoka et al. (2009) provide better accuracy, and 79 sources that do not have enough UV spectroscopic information to be used as input to HCm-UV.
Some of the compiled observations were already corrected for Galactic extinction in the original publications. These corrections were not very large (i.e. E(B-V) \(\sim\) 0.01-0.1), owing to the positions of the objects on the sky, and do not lead to changes in the resulting emission-line fluxes larger than the error limits. On the other hand, we assumed that these objects were not corrected for internal extinction, so we performed this correction only for those objects with C3C4 \(>\) -0.3, which cannot be considered dust-free. This criterion, shown in Figure 5, is consistent with the predictions of the models, in which the absence of internal extinction is the only way to achieve the very low C3C4 values observed in certain objects (Nagao et al., 2006; Dors et al., 2019). For higher C3C4 values, on the other hand, the absence of extinction does not seem justified, either in the emission-line flux correction or in the photoionisation models. Therefore, for the rest of the compiled galaxies, we assumed an average visual extinction of \(A_{V}=1\) mag and the extinction law of Cardelli et al. (1989). In any case, we checked that assuming other common extinction laws does not imply a variation in the resulting chemical abundances larger than the reported errors when only UV emission lines are used as input, as in our sample. For a simultaneous use of both optical and UV lines this correction would be more critical, but a more accurate estimate of the extinction correction could then also be performed using the Balmer decrement.
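The adopted decision rule can be summarised in a few lines (a sketch only; `a_lambda_over_av` stands for the chosen extinction curve, e.g. that of Cardelli et al. 1989, and is left as a user-supplied function):

```python
def maybe_deredden(fluxes_by_wave, c3c4, a_lambda_over_av, av_mag=1.0):
    """Correct the UV fluxes for internal extinction only when C3C4 > -0.3.

    fluxes_by_wave   : dict {rest wavelength in Angstrom: observed flux}
    c3c4             : log I(C III] 1909) / I(C IV 1549) measured for the object
    a_lambda_over_av : callable returning A(lambda)/A(V) for the adopted extinction law
    Returns the (possibly corrected) fluxes and a flag telling whether to use the dusty grids.
    """
    if c3c4 <= -0.3:                      # compatible with dust-free models: no correction
        return dict(fluxes_by_wave), False
    corrected = {w: f * 10 ** (0.4 * av_mag * a_lambda_over_av(w))
                 for w, f in fluxes_by_wave.items()}
    return corrected, True                # True: use the model grids that include dust grains
```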
Some of the listed fluxes of faint emission lines (e.g., N v \(\lambda\)1239 Å, O iii] \(\lambda\)1665 Å) are given as upper limits (noted in Table 2). However, they are treated as real lines by assuming that the quoted upper limit is equivalent to 3\(\sigma\): the code takes as input a flux with a nominal value of 2\(\sigma\) and an error of 1\(\sigma\) (i.e. an upper limit of 3 is introduced as 2 \(\pm\) 1), which reduces the probability that they are just noise, while values \(>3\sigma\) (i.e. larger than the upper limit) are not allowed in our Monte Carlo simulations.
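The treatment of upper limits can likewise be written compactly (an illustrative sketch; the non-negativity guard in the Monte Carlo draw is an extra assumption not described in the text):

```python
import numpy as np

def upper_limit_as_flux(upper_limit):
    """Turn a quoted upper limit (taken as 3 sigma) into a nominal flux and error (2 +/- 1 sigma)."""
    sigma = upper_limit / 3.0
    return 2.0 * sigma, sigma             # e.g. an upper limit of 3 is introduced as 2 +/- 1

def mc_draw(nominal, error, upper_limit=None, rng=None):
    """One Monte Carlo perturbation of a flux, truncated at the upper limit when there is one."""
    rng = rng or np.random.default_rng()
    value = rng.normal(nominal, error)
    if upper_limit is not None:
        value = min(value, upper_limit)   # forbid draws above the 3-sigma upper limit
    return max(value, 0.0)                # keep the flux non-negative (extra guard)
```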
We applied the HCm-UV code to this sample, assuming an AGN SED with \(\alpha_{OX}\) = -1.2 and a stopping criterion in the models of \(f_{e}=98\%\). The choice of these input parameters is somewhat arbitrary and corresponds mainly to the same conditions used for the optical version of the code in Perez-Montero et al. (2019), with an \(\alpha_{OX}\) value closer to the median derived by Miller et al. (2011). In any case, the effects of changing these parameters on the results are discussed in Section 2. The only difference is that we only considered dust in the models for the objects whose lines were corrected for internal extinction, in agreement with the criterion based on C3C4, while for the rest we used only the grids of models without dust. The resulting O/H, C/O, and log \(U\) values are provided in electronic form along with the corresponding derived errors, as shown in Table 3.
## 4 Discussion
### O/H and U distributions
In Figure 7, we show the derived metallicity of the compiled sample as a function of redshift, with the different object categories marked. We see no clear correlation with redshift, but a large scatter in metallicity over the range \(7.77<12+\log({\rm O/H})<8.97\), with a mean of \(8.52\) (\(\sim 2/3\,Z_{\odot}\)), which is slightly higher for the objects assumed to be dusty (i.e., 8.55) than for the objects we assume to be dust-free (8.46). The main trend observed in this distribution would therefore not be very different if different input conditions had been assumed in the models chosen to derive the abundances.
In Table 4 we show the mean O/H values obtained for each galaxy class. While these values do not allow us to draw firm
Figure 7: Relationship between redshift and the total oxygen abundances derived with HCm-UV. Different symbols represent different object types in the sample: black circles for Seyfert 2, blue stars for quasars, red triangles for high-\(z\) AGN, cyan diamonds for quiet radio galaxies, and green squares for high-\(z\) radio galaxies.
conclusions, since these types are very unevenly populated and some of them could be very heterogeneous in their properties, they can provide some clues. For example, at very low \(z\), Seyfert 2 galaxies have abundances around the mean of the whole sample, even if their mean is slightly subsolar. This is broadly consistent with the sample of radio galaxies at high \(z\), whose mean redshift is higher than that of the smaller Seyfert 2 subsample. On the other hand, for the other types of galaxies at high \(z\), very different values of O/H are found, as in the case of quasars, with a mean O/H value significantly lower (i.e., 12+log(O/H) = 8.35) than the mean of the whole sample. However, for the other three defined classes, the number of compiled objects is so small that no statistically significant conclusion can be drawn.
In Figure 8, we show the relationship between the C3C4 parameter, as obtained from the compiled emission-line fluxes, and the value of log \(U\) derived with HCm-UV. As described in previous sections, we considered all objects with C3C4 \(<\) -0.3 to be possibly dust-free, as discussed by Nagao et al. (2006) or Dors et al. (2019). In Table 4 we also give the fraction of objects in each category that fall within this range. As can be seen, more than half of the compiled objects in all categories are in this range and could, in principle, contain no dust, except for the high-\(z\) radio galaxies.
As for \(U\), we find a relatively wide range of values, -2.7 \(<\) log \(U\) \(<\) -0.6, with a mean log \(U\) = -1.4, indicating that most objects have high excitation. Although we find a clear correlation between C3C4 and log \(U\), the objects assumed not to have dust do not show a higher mean \(U\) than the objects with dust. This is a consequence of the models predicting a lower C3C4 value in the absence of dust for the same \(U\) value.
In Table 4 we also report the mean log \(U\) value in each category, which for Seyfert 2, as in the case of O/H, is very similar to the average values of the whole sample. In contrast, quasars show a much higher mean value, while radio galaxies with high \(z\) show lower ionisation parameters on average.
### C/O and its impact on the O/H derivation
For those objects in the sample for which the emission line ratio C3O3 was available, the code also estimated C/O. Unfortunately, the number of objects in our sample for which this ratio is measurable is small (i.e., only 26 objects), most of which are quasars with an upper limit on the O iii] \(\lambda\) 1665 A line measurement, which could mean that the derived C/O values are only lower limits. In Table 4 we also report the mean C/O values for each category, although they are not statistically significant for most classes because they could not be derived for many objects.
In Figure 9 we show the resulting O/H and C/O values with the corresponding errors for the objects for which these two ratios could be estimated simultaneously. We note that most of the objects for which C/O could be derived have similar properties, since most of them are quasars, all containing dust (according to the C3C4 criterion), with relatively low O/H abundances in the range 7.83 \(<\) 12+log(O/H) \(<\) 8.31. The resulting C/O values for this subsample are in the range -0.84 \(<\) log(C/O) \(<\) 0.27, with a mean of -0.50 (\(\sim\) 0.6 (C/O)\({}_{\odot}\)), a somewhat lower value relative to solar than the corresponding O/H for the entire sample, but significantly higher when compared to the mean metallicity of the subsample for which C/O could be measured (12+log(O/H) = 8.17, \(\sim\) 0.3 \(Z_{\odot}\)).
This relatively higher mean C/O value compared to the corresponding derived O/H value results in this subsample being on average above some of the commonly assumed relationships between O/H and C/O, as shown in Figure 9. The sample is well above the relations given by Hamann and Ferland (1993) or Dopita et al. (2006), and only a fraction of the objects are in the range considered by Perez-Montero and Amorin (2017), remembering that the latter is only the conversion of the O/H-N/O relation assuming a solar C/N abundance ratio. It is not uncommon to find low-emission objects above these empirical or chemical model-based relations, since the relationship between metallicity and the ratio of abundance with a partial secondary origin relative to another with a pri
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
**Galaxy type** & **Mean \(z\)** & \(\mathbf{N_{OH}}\) & \(\mathbf{N_{ug}}\) & **Mean O/H** & **Mean log \(U\)** & \(\mathbf{N_{CO}}\) & **Mean C/O** \\ \hline \hline
All & 2.41 & 139 & 56 & 8.52 & -1.48 & 26 & -0.50 \\
Seyfert 2 & 0.015 & 9 & 6 & 8.56 & -1.40 & 1 & 0.06 \\
Quasar & 2.51 & 32 & 20 & 8.35 & -1.16 & 23 & -0.53 \\
High-\(z\) radio galaxies & 2.57 & 95 & 24 & 8.56 & -1.65 & 1 & -0.51 \\
High-\(z\) AGN & 3.21 & 1 & 1 & 8.03 & -1.44 & 1 & -0.27 \\
Quiet radio galaxies & 3.31 & 2 & 1 & 8.84 & -1.04 & 0 & – \\ \hline
\end{tabular}
\end{table}
Table 4: Mean values obtained from HCm-UV for O/H, C/O, and log \(U\) in our sample of objects as a function of galaxy type. We also list the mean redshift, the number of galaxies for which O/H and C/O could be derived, and the number of objects without dust grains.
Figure 8: Relation between the C3C4 parameter and the ionisation parameter, log(\(U\)), derived with HCm-UV for our sample of objects. The different symbols are the same as in Fig. 7. The vertical red dashed line marks the limit below which objects are considered to have no dust.
mary origin has a large scatter (e.g., Perez-Montero & Contini, 2009). The origin of such a scatter is related to several processes, including variations in star formation efficiency (Molla et al., 2006) or gas exchange between galaxies and the surrounding IGM through hydrodynamic processes (Edmunds, 1990; Koppen & Hensler, 2005), which likely affect the NLR in AGNs. This highlights the need for an alternative method to derive more accurate O/H abundances using UV carbon lines based on prior determination of C/O.
Indeed, the lack of a prior determination of the C/O abundance ratio is one of the main sources of uncertainty in deriving chemical abundances from UV lines, both for star-forming objects and for AGN. In the absence of an emission-line ratio sensitive to the electron temperature, the calculation is based on the measured flux ratios of the C iii] and C iv lines relative to Ly\(\alpha\) or He ii \(\lambda\) 1640 Å. The code HCm-UV for star-forming objects presented in Perez-Montero & Amorin (2017), and its adaptation to AGNs presented in this work, perform an initial iteration through the entire grid of assumed models to determine C/O from the observed C3O3 ratio, but we may wonder to what extent the final derived O/H values may vary if this previous step cannot be performed.
We recalculated all O/H abundances in the subsample of 26 galaxies for which a prior derivation of C/O was possible, but this time without considering the O iii] \(\lambda\) 1665 Å line. This means that the code cannot compute C/O and instead assumes the a priori expected relationship between O/H and C/O. In this case, we obtain a mean O/H value of 8.66, which is 0.47 dex larger than the value obtained with a prior C/O derivation. Since most of the objects in this subsample are quasars, the significantly lower O/H value derived for this category (see Table 4) could have the origin explained above, and similarly lower values cannot be discarded for the other object classes if an estimate of C/O could be obtained for them.
We also computed the O/H abundances assuming a restricted grid, taking the relationship between O/H and C/O proposed by Dopita et al. (2006) as a reference, since the new version of the code allows easy manipulation of this relationship, and we obtain an even larger mean value of 8.71. The 0.05 dex difference between the O/H abundances derived from HCm-UV when the program assumes the relationship given by Perez-Montero & Amorin (2017) or that given by Dopita et al. (2006) or Hamann & Ferland (1993), in the absence of a prior C/O determination, is also obtained for the entire object sample.
In summary, the very large difference between the metallicities derived using the UV carbon lines with or without a prior C/O determination underscores the importance of measuring the C3O3 parameter to obtain accurate O/H abundances.
### Comparison with other calibrations
We use our sample to compare the total oxygen abundances and log\(U\) derived from HCm-UV with the results obtained with the model-based calibrations for AGNs with UV emission lines proposed by Dors et al. (2019).
In the left panel of Fig. 10, we show the comparison for O/H obtained using the biparametric function proposed by Dors et al. (2019), which is based on the combination of the C34He2 parameter with a correction for its dependence on \(U\) using C3C4. We find agreement within the errors, both for objects with and without dust. Only for very large abundances (12+log(O/H) \(>\) 9.0) do we find a larger deviation, due to the upper limit of the HCm-UV grid (i.e., the maximum O/H value is 9.1).
Conversely, we find poor agreement when we compare the results of HCm-UV with the abundances resulting from calibration based on N5He2 and C3C4 in Dors et al. (2019), shown in the middle panel of Figure 10. As can be seen, the
Figure 9: Relationship between the total oxygen abundance and the carbon-to-oxygen abundance ratio derived using HCm-UV for the assembled sample of NLRs in AGN. The red solid line encompasses the range covered by the models when the code in Pérez-Montero & Amorin (2017) assumes no prior derivation of C/O and a prior assumption about the relation between O/H and C/O is required. The dashed lines correspond to the relationships derived by Dopita et al. (2006) for H ii regions and by Hamann & Ferland (1993) for QSOs.
agreement is not good: there is neither a clear correlation, nor is the range covered by the two approaches similar. These strong differences cannot be attributed solely to the use of models with or without dust grains or to the assumption of different relationships between O/H and C/O, since these factors do not lead to such large differences, as discussed above. Instead, as we show in Table 1, they may be due to the independent use of C iii], C iv], and N v] relative to He ii. Therefore, these results reaffirm the conclusion that the use of the N5He2 parameter alone is not advisable; on the contrary, it must be used in combination with the other UV carbon lines.
In summary, the O/H derivation using these lines leads to very different results, even when the three lines are used and a C/O estimation was previously performed. For the sample of 26 galaxies with a prior determination of C/O using the C3O3 parameter, the mean difference between the O/H values obtained from HCm-UV and the calibrations using the C34He2 and N5He2 parameters from Dors et al. (2019) is -0.2 dex and -0.9 dex, respectively. In contrast, better agreement is obtained when log \(U\) is derived using the C3C4 emission-line ratio, as shown in the right panel of Figure 10, with an average offset of only 0.04 dex.
### Comparison with optical-based estimations
Finally, we can compare the chemical abundances obtained from HCm-UV using UV emission lines with those derived from optical emission lines with the version of the code described in Perez-Montero et al. (2019), for the subset of galaxies with available information in both spectral regions. This is the case for NGC 1068, which was studied in Nagao et al. (2006). Taking emission lines corrected for optical reddening relative to H\(\beta\), the code HCm predicts 12+log(O/H) = 8.71 (8.59), assuming a value \(\alpha_{OX}\) = -0.8 (-1.2), which is very close to the value 12+log(O/H) = 8.68 (8.57) obtained by HCm-UV using UV emission lines. For the other two galaxies in the sample, Mrk 3 and Mrk 573, also studied in Nagao et al. (2006), optical spectra are available, but no UV O iii] \(\lambda\) 1665 Å flux is reported, preventing a meaningful C/O estimate. Nevertheless, for these two galaxies, a solar N/O ratio was derived from the optical data, allowing us to assume a solar carbon-to-nitrogen ratio to compare the results from the optical and UV spectra. In the case of Mrk 3, we derive 12+log(O/H) = 8.58 (8.35) from HCm-UV when we consider an \(\alpha_{OX}\) of -0.8 (-1.2), which is significantly lower than the values derived from the optical lines with HCm, 12+log(O/H) = 8.72 (8.59). For Mrk 573, the differences are even larger, with 12+log(O/H) = 8.49 (8.26) from the UV and 8.79 (8.68) from the optical.
These differences are unlikely to be significant given i) the small number of objects for which simultaneous derivation of abundances in both the optical and UV can be performed, ii) the fact that the associated errors are of the same order of magnitude as these differences, iii) the strong dependence of the results on the extinction correction, especially in the UV, and iv) the assumption of a fixed C/N abundance ratio, which is not necessarily well justified. However, it is worth noting that the results are consistent overall and can be used as a reference for our sample of high-redshift galaxies.
## 5 Summary and conclusions
In this work we describe and implement the adaptation of the code HCm-UV to the NLR of AGNs to derive oxygen abundances, the chemical carbon-oxygen abundance ratio, and ionisation parameters using UV emission line intensities.
According to our analysis based on photoionisation models covering different input features, the C3O3 emission line ratio turns out to be a robust indicator of C/O in AGNs, as it depends only to a very small extent on O/H, \(U\), the shape of the considered SED, and the presence of dust grains mixed with the gas, which is of great importance in this spectral region. The determination of C/O is not only important for the correct chemical interpretation of these objects, but also implies a much more accurate determination of metallicity based on the carbon UV emission lines. This result is consistent with what has already been observed for SF objects in Perez-Montero and Amorin (2017).
On the other hand, the determination of O/H and \(U\), although much more reliable when a previous determination of
Figure 10: Comparison between the resulting values from HCm-UV, on the vertical axis, and those from different calibrations of Dors et al. (2019), on the respective horizontal axes, as applied to our galaxy sample. In all panels, filled symbols represent objects with assumed dust, while empty symbols represent objects without dust. In the left panel, the C34He2 and C3C4 parameter calibrations are used for O/H; in the middle panel, the N5He2 and C3C4 parameters are used for O/H; and in the right panel, the C3C4 parameter is used for log \(U\). The red solid lines represent the 1:1 relationship in all panels.
C/O is provided using the C3O3 parameter, is much more dependent on the assumed input conditions of the models, being particularly sensitive to the assumption of the presence or absence of dust grains mixed with gas. Only models without dust grains are able to reproduce the very low C3C4 values observed in many objects, but the adoption of alternative matter-bounded geometry assumptions should also be explored.
We applied our method to a wide range of data from the literature at different redshifts, and we found a large scatter in the O/H and log \(U\) distributions, with a slightly subsolar mean metallicity and, as expected, high excitation. The derivation of C/O was only possible for a small subsample, mainly quasars, with values higher than expected for their derived metallicities, but with a large uncertainty because the O iii] emission line at \(\lambda\) 1665 Å is in most cases only measured as an upper limit. We verified that the prior determination of C/O for this subset dramatically affects the final derivation of the overall metallicity.
Finally, we compared our results with other methods based on the same compiled emission lines, such as the Dors et al. (2019) calibrations, and obtained very consistent results for the C34He2 parameter for O/H and C3C4 for log \(U\), but not such good agreement for N5He2 for O/H. In this case, according to our models, the N v]-emission line at \(\lambda\) 1239 A should only be used in combination with the other lines, since it only provides an estimate of the very highly excited gas phase.
## Acknowledgements
This work has been partly funded by projects Estallidos7 PID2019-107408GB-C44 (Spanish Ministerio de Ciencia e Innovacion) and by the Junta de Andalucia through grant EXC/2011 FQM-7058. This work has also been supported by the Spanish Science Ministry "Centro de Excelencia Severo Ochoa" Program under grant SEV-2017-0709. SPM also acknowledges the assistance of his guide dog Rocko, without whose daily help this work would have been much more difficult. RA acknowledges financial support from ANID Fondecyt Regular 1202007. R.G.B. acknowledges financial support from grant PID2019-109067-GB100.
|
2309.00216
|
Human-Inspired Facial Sketch Synthesis with Dynamic Adaptation
|
Facial sketch synthesis (FSS) aims to generate a vivid sketch portrait from a
given facial photo. Existing FSS methods merely rely on 2D representations of
facial semantic or appearance. However, professional human artists usually use
outlines or shadings to convey 3D geometry. Thus facial 3D geometry (e.g. depth
map) is extremely important for FSS. Besides, different artists may use diverse
drawing techniques and create multiple styles of sketches; but the style is
globally consistent in a sketch. Inspired by such observations, in this paper,
we propose a novel Human-Inspired Dynamic Adaptation (HIDA) method. Specially,
we propose to dynamically modulate neuron activations based on a joint
consideration of both facial 3D geometry and 2D appearance, as well as globally
consistent style control. Besides, we use deformable convolutions at
coarse-scales to align deep features, for generating abstract and distinct
outlines. Experiments show that HIDA can generate high-quality sketches in
multiple styles, and significantly outperforms previous methods, over a large
range of challenging faces. Besides, HIDA allows precise style control of the
synthesized sketch, and generalizes well to natural scenes and other artistic
styles. Our code and results have been released online at:
https://github.com/AiArt-HDU/HIDA.
|
Fei Gao, Yifan Zhu, Chang Jiang, Nannan Wang
|
2023-09-01T02:27:05Z
|
http://arxiv.org/abs/2309.00216v1
|
# Human-Inspired Facial Sketch Synthesis with Dynamic Adaptation
###### Abstract
Facial sketch synthesis (FSS) aims to generate a vivid sketch portrait from a given facial photo. Existing FSS methods merely rely on 2D representations of facial semantics or appearance. However, professional human artists usually use outlines or shadings to convey 3D geometry. Thus facial 3D geometry (e.g. the depth map) is extremely important for FSS. Besides, different artists may use diverse drawing techniques and create multiple styles of sketches; but the style is globally consistent in a sketch. Inspired by such observations, in this paper, we propose a novel Human-Inspired Dynamic Adaptation (HIDA) method. Specifically, we propose to dynamically modulate neuron activations based on a joint consideration of both facial 3D geometry and 2D appearance, as well as globally consistent style control. Besides, we use deformable convolutions at coarse scales to align deep features, for generating abstract and distinct outlines. Experiments show that HIDA can generate high-quality sketches in multiple styles, and significantly outperforms previous methods, over a large range of challenging faces. Besides, HIDA allows precise style control of the synthesized sketch, and generalizes well to natural scenes and other artistic styles. Our code and results have been released online at: [https://github.com/AiArt-HDU/HIDA](https://github.com/AiArt-HDU/HIDA).
+
Footnote †: * Corresponding Author
## 1 Introduction
Making computers create art like human beings is a longstanding and challenging topic in the artificial intelligence (AI) area [2]. To this end, researchers have made great efforts and proposed numerous methods, such as neural style transfer (NST) [17] and image-to-image translation (I2IT) [18, 46]. These methods mainly tackle cluttered image styles, such as oil paintings [17]. In this paper, we are interested in creating artistic sketches from facial photos, which is referred to as face sketch synthesis (FSS) [34].
To date, there has been significant progress in FSS inspired by the remarkable success of Generative Adversarial Networks (GANs) [18]. Specifically, researchers have proposed various techniques, including embedding image priors [43], semi-supervised learning [5], self-attention/transformer based methods [11, 47, 9], hierarchical GANs [28, 40, 8], composition assistance [37], and semantic adaptive normalization [21], to boost the quality of synthesized sketches. However, all these methods merely use 2D appearance or semantic representations of the input photo. They may fail to handle serious variations in appearance, such as pose, lighting, expression, and skin color.
To tackle this challenge, we propose a novel method, inspired by how human artists draw a sketch. We observe that facial 3D geometry plays a significant role in human artists' drawing process. Besides, a professional human artist considers comprehensive information, including facial 3D geometry, 2D appearance, and the artistic style, to execute a sketch portrait. We summarize the drawing methodologies of human artists [22] into the following four aspects:
* **Local 3D geometry conveyor**: First, artists typically use abstract and deformable outlines to characterize major geometry, and use different shading methodologies, e.g. hatching, blending, and stippling, to convey local 3D structures [3].
Figure 1: Illustration of facial photos, depth maps, multi-style facial sketches drawn by human artists [8], and the corresponding results synthesized by our method.
* **Local 2D appearance representation**: Second, artists may use different shading or tonal techniques to represent local 2D facial appearance, so as to depict variations in lighting, color, texture, etc.
* **Sketches in diverse styles**: Third, different artists may use diverse drawing methods and create multiple styles of sketches. In other words, they may use divergent textures to represent the same facial area. Fig. 1 shows three styles of sketches drawn by artists [8]. Obviously, Style1 is extremely abstract and mainly contains sketchy outlines. In contrast, Style3 depicts facial 3D geometry with a lot of shading textures.
* **Globally consistent style**: Finally, the style of pencil-drawing is usually consistent in a single sketch. As shown in Fig. 1, although there are distinct inter-style divergences, the style of pencil-drawing is globally consistent across different regions inside each sketch.
Inspired by these observations, we seek to guide the synthesis of sketch portraits by using comprehensive information, including facial 3D geometry and 2D appearance, as well as global style control. In the implementation, given a facial photo, we use the depth map to represent its 3D geometry, and use the encoding features to represent its 2D appearance. Afterwards, we combine them with a style map to dynamically modulate deep features for generating a sketch. Inspired by the success of SPADE [27] in style control [41] and the local flexibility of dynamic neural networks [14], we propose to dynamically modulate neuron activations, based on a joint consideration of all this information. Such modulation is conducted through both dynamic normalization and activation. Specifically, we propose a novel dynamic activation function, termed Informative ACON (InfoACON), and a dynamic normalization module, termed DySPADE. In addition, we use deformable convolutions [7] to align deep features [16] at coarse scales for generating abstract and distinct sketchy outlines. Intuitively, the dynamic adaptation and deformation simulate the flexibility and abstraction process of human artists during drawing.
Based on the above mentioned contributions, we build a Human-Inspired Dynamic Adaptation (HIDA) method for FSS. We conduct experiments on several challenging datasets, including the FS2K [8], the FFHQ [20], and a collection of faces in-the-wild. Our method outperforms state-of-the-art (SOTA) methods both qualitatively and quantitatively. Besides, our method allows precise style control and can produce high-quality sketches in multiple styles. Even for faces with serious variations, the synthesized sketches present realistic textures and preserve facial geometric details. In addition, extensive ablation studies demonstrate the effectiveness of the proposed dynamic and adaptive modulation techniques. Finally, our model, although trained for faces, can generate high-quality sketches for natural scenes.
## 2 Related Works
Our work is related to GANs-based FSS methods. Besides, our method is highly inspired by semantic adaptive normalization [27] and dynamic activation [25].
**GANs-based FSS.** The latest FSS methods are typically based on GANs [18, 12], where the mapping from a facial photo to a sketch is modeled as an image-to-image translation task [18]. Some latest methods use 2D semantic information to guide the generation process. For example, Yu et al. [37] propose a stacked composition-aided GANs to boost quality of details. Inspired by the great success of spatially adaptive (de)normalization (SPADE) [27] in semantic image generation, Wang et al. [47] and Qi et al. [29] spatially modulate decoding features according to facial parsing masks. Li et al. [21] propose an enhanced SPADE (eSPADE) by using both facial parsing masks and encoding features for feature modulation.
Recently, researchers seek to solve the challenge of unconstrained faces by constructing large datasets. Fan et al. [8] release a challenging FS2K dataset, which consists of multi-style sketches for faces with diverse variations. Nie et al. [26] propose a novel WildSketch dataset and a Perception-Adaptive Network (PANet). In PANet, deformable feature alignment (DFA) and patch-level adaptive convolution are used. Different from [26], we analyze the effects of DFA, and only use DFA over the coarse scales. Besides, we propose to dynamically modulate neuron activations based on facial depth and artistic style.
**Semantic Adaptive Normalization.** Recently, Park et al. [27] propose to modulate deep features based on semantic layouts for semantic image synthesis. In SPADE, deep features are modulated based on semantic layouts. Afterwards, Zhu et al. [48] propose Semantic Region-Adaptive Normalization (SEAN) to control the style of each semantic region individually. To boost the efficiency of SPADE, Tan et al. [32] propose a Class-Adaptive (DE)Normalization (CLADE) layer by replacing the modulation networks with class-level modulating parameters. All these adaptive normalization layers use 2D semantic maps and show amazing performance in generating photo-realistic images [24] and face sketches [37]. In this paper, we use pix-wise dynamic activation in the normalization block, so that the modulating parameters would flexibly adapt to local information. Experimental results show that the dynamic normalization are essential for detailed synthesis of facial sketches.
**Dynamic Activations.** Recently, Chen et al. [6] propose a Dynamic ReLU (DY-ReLU) function, where parameters in Leaky ReLU are learned from all input elements. Ma et al. [25] propose a customized activation function, termed ACON, which automatically decides whether a neuron is active or not. ACON has several variants, among which the pixel-wise version of metaACON shows remarkable performance. Given a neuron activation \(x\), the output of metaACON is formulated as:
\[y=(p_{1}-p_{2})\cdot x\cdot\sigma(\theta(p_{1}-p_{2})x)+p_{2}x, \tag{1}\]
where \(\theta=\sigma(x)\), \(\sigma\) is a Sigmoid function, and \(p_{1}\) and \(p_{2}\) are learnable parameters. In this paper, we use metaACON, instead of ReLU or Leaky ReLU, in part of our networks. Besides, we propose to learn the spatially-adaptive parameter \(\theta\) according to the 3D geometry, 2D appearance, and global style control. Our activation function proves to boost performance and allow precise style control.
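For illustration, a minimal PyTorch sketch of a pixel-wise meta-ACON activation following Eq. 1 is given below; the module name `MetaACON` and the small convolutional network used to predict \(\theta\) are illustrative assumptions rather than the exact implementation of [25].

```python
import torch
import torch.nn as nn

class MetaACON(nn.Module):
    """Pixel-wise meta-ACON: y = (p1 - p2) * x * sigmoid(theta * (p1 - p2) * x) + p2 * x (Eq. 1)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.p1 = nn.Parameter(torch.randn(1, channels, 1, 1))  # learnable per-channel slopes
        self.p2 = nn.Parameter(torch.randn(1, channels, 1, 1))
        hidden = max(channels // reduction, 1)
        # small network predicting the switching factor theta from the input features
        self.theta_net = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, x):
        theta = torch.sigmoid(self.theta_net(x))  # theta in (0, 1), one value per pixel and channel
        dp = self.p1 - self.p2
        return dp * x * torch.sigmoid(theta * dp * x) + self.p2 * x
```

When \(\theta\) approaches zero the function degenerates towards a linear mapping, and for larger \(\theta\) it behaves like a parametric gate, which is the "activate or not" switching behavior exploited below.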
## 3 The Proposed Method
We aim to translate a facial photo \(\mathbf{X}\) to a sketch \(\mathbf{Y}_{s}\), in style \(s\), drawn by an artist. Here \(s=1,2,...,S\) is a style label, \(S\) is the total number of styles. In this work, we seek to guide the synthesis of sketch portraits by using comprehensive facial information, including both 3D geometry and 2D appearance, as well as the global style control. Given a facial photo, we use the corresponding depth map \(\mathbf{D}\) to represent its 3D geometry. Afterwards, we combine them with a global style map \(\mathbf{S}\) to decode a facial sketch. In this way, our goal is formulated as learning a mapping from \(\{\mathbf{X},\mathbf{D},\mathbf{S}\}\) to \(Y_{s}\), i.e. \(G:\{\mathbf{X},\mathbf{D},\mathbf{S}\}\mapsto\mathbf{Y}_{s}\).
To supervise our model, it is necessary to obtain depth maps for input facial photos. However, it is usually impossible to obtain ground truth depth information in practical applications. Therefore we use state-of-the-art (SOTA) depth prediction methods to estimate the depth map of an input facial photo. In practice, we use 3DDFA [13] as the depth predictor, because it has been widely used and shown excellent performance in various 3D face reconstruction tasks.
The overall pipeline of our model is shown in Fig. 2. It contains an off-line facial depth predictor \(P\), a generator \(G\), and a patch-wise discriminator \(D\). In addition to a facial sketch, we enforce \(G\) to reconstruct the input depth map \(\mathbf{D}\) from features representing the sketch. In this way, the generated sketch \(\hat{\mathbf{Y}}_{s}\) would convey the 3D geometry of \(\mathbf{X}\). Besides, we boost the capacity of the generator by using a dynamic normalization module and a dynamic activation function. Finally, to formulate the abstraction methodology of human artists in drawing sketchy outlines, we propose using deformable convolutions to align features at coarse scales. Details are introduced below.
### Informative and Dynamic Adaptation (IDA)
To simulate the drawing methodology of human artist, we first propose a novel _Informative and Dynamic Adaptation (IDA)_ module, to modulate deep features based on a combination of the facial depth map \(\mathbf{D}\), the style map \(\mathbf{S}\), and the appearance representations \(\mathbf{A}\), i.e. \(\{\mathbf{D},\mathbf{S},\mathbf{A}\}\). Specially, we propose a novel dynamic activation function, termed Informative ACON (InfoACON), and a dynamic normalization module, termed DySPADE.
**Informative ACON (InfoACON).** The original metaACON function automatically decides whether a neuron is active or not, based on its value, as presented in Eq. 1. During the drawing process, a human artist typically decides whether to draw a stroke or not based on the 3D geometry, 2D appearance, and style type. Inspired by this observation, we propose to learn the parameter \(\theta\) in Eq. 1 from \(\{\mathbf{D},\mathbf{S},\mathbf{A}\}\), i.e.
\[\theta=\sigma(\phi_{\theta}(\mathrm{Cat}(\mathbf{D},\mathbf{S},\mathbf{A}))), \tag{2}\]
where \(\phi_{\theta}\) is a two-layer Convolutional network (Fig. 3). We refer to the modified metaACON function as _Informative ACON_ (InfoACON). In our networks, we apply this InfoACON function in all the decoding layers. In this way, the decoder decides, pixel by pixel, whether to depict a stroke, and which type of stroke, in a generated sketch.
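A possible realization of InfoACON is sketched below: the switching factor \(\theta\) is predicted from the concatenated conditioning maps by a two-layer convolutional network \(\phi_{\theta}\) (Eq. 2) and then plugged into the ACON formula. The hidden width, the use of ReLU inside \(\phi_{\theta}\), and the assumption that the conditioning maps are already resized to the feature resolution are ours, for illustration only.

```python
import torch
import torch.nn as nn

class InfoACON(nn.Module):
    """ACON activation whose switching factor theta comes from Cat(D, S, A) (Eq. 2)."""
    def __init__(self, feat_channels, cond_channels, hidden=64):
        super().__init__()
        self.p1 = nn.Parameter(torch.randn(1, feat_channels, 1, 1))
        self.p2 = nn.Parameter(torch.randn(1, feat_channels, 1, 1))
        # two-layer convolutional network phi_theta over the concatenated conditions
        self.phi_theta = nn.Sequential(
            nn.Conv2d(cond_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1),
        )

    def forward(self, x, depth, style, appearance):
        cond = torch.cat([depth, style, appearance], dim=1)  # Cat(D, S, A), same H x W as x
        theta = torch.sigmoid(self.phi_theta(cond))          # Eq. 2
        dp = self.p1 - self.p2
        return dp * x * torch.sigmoid(theta * dp * x) + self.p2 * x
```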
Figure 2: Pipeline of the proposed _Human-Inspired Dynamic Adaptation_ (HIDA) method for facial sketch synthesis. (a) The overall generator architecture, (b) a decoding layer with DySPADE, InfoACON, and DOG.
**Dynamic Normalization (DySPADE).** Following [27], we additionally transform neuron activations by shifting the mean values and scaling the standard deviations, in an instance-wise and channel-wise manner [33]. Different from the original SPADE, we use dynamic activation here to introduce more flexibility in the learned modulating parameters. Let \(\mathbf{F}\in\mathbb{R}^{C\times H\times W}\) denote the input features of the current DySPADE module. \(H\), \(W\), and \(C\) are the height, width, and number of channels. The activation value at site \((c,h,w)\) is modulated as:
\[\tilde{f}_{c,h,w}=\gamma_{c,h,w}(\mathbf{D},\mathbf{S},\mathbf{A})\frac{f_{c,h,w}-\mu_{c}}{\sigma_{c}}+\beta_{c,h,w}(\mathbf{D},\mathbf{S},\mathbf{A}), \tag{3}\]
where \(f_{c,h,w}\) and \(\tilde{f}_{c,h,w}\) are the input and modulated activation at site \((c,h,w)\), respectively. \(\mu_{c}\) and \(\sigma_{c}\) are the mean and standard deviation of \(f_{c,h,w}\) in the \(c\)-th channel. \(\gamma_{c,h,w}(\mathbf{D},\mathbf{S},\mathbf{A})\) and \(\beta_{c,h,w}(\mathbf{D},\mathbf{S},\mathbf{A})\) are learned scale and bias parameters at site \((c,h,w)\).
As shown in Fig. 2, we use a two-layer and three-branched Convolutional network to predict the parameter \(\theta\) in InfoACON, and the modulating parameters \(\boldsymbol{\gamma}\) and \(\boldsymbol{\beta}\) in DySPADE. To improve the flexibility of the adaptation block, we use metaACON (Eq. 1) [25] instead of ReLU, after the first Convolutional layer. In this way, the modulating factors would pixel-wisely adapt to an integration of the facial 3D geometry, 2D appearance, and global style.
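The DySPADE modulation of Eq. 3 can be sketched as follows: instance normalization supplies \(\mu_{c}\) and \(\sigma_{c}\), and a small conditioning network predicts the spatial \(\gamma\) and \(\beta\) maps. For brevity this sketch uses a plain ReLU in the shared trunk and omits the \(\theta\) branch, whereas our actual block is three-branched and uses metaACON; those simplifications are assumptions of the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DySPADE(nn.Module):
    """Spatially adaptive denormalization conditioned on Cat(D, S, A) (Eq. 3)."""
    def __init__(self, feat_channels, cond_channels, hidden=128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(feat_channels, affine=False)  # provides mu_c and sigma_c
        self.shared = nn.Sequential(
            nn.Conv2d(cond_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_gamma = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)

    def forward(self, feat, depth, style, appearance):
        size = feat.shape[2:]
        # resize each conditioning map to the spatial size of the current features
        cond = torch.cat([F.interpolate(m, size=size, mode="nearest")
                          for m in (depth, style, appearance)], dim=1)
        h = self.shared(cond)
        gamma, beta = self.to_gamma(h), self.to_beta(h)
        return self.norm(feat) * gamma + beta                # Eq. 3
```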
In IDA, the activation at each position is modulated according to a joint consideration of local facial 3D geometry, appearance, and artistic style. This mechanism is consistent with the drawing methodology of human artists. To execute a facial sketch, an artist usually uses diverse textures to represent 3D geometry or illumination variations. Besides, the style of all pencil strokes is consistent inside a single sketch. As a result, IDA is promising to produce realistic sketchy textures in a globally consistent style.
### Deformable Outline Generation (DOG)
Human artists usually draw abstract lines to capture facial geometric structures, such as the boundaries of facial organs, and facial mood. To this end, the resulting outlines typically convey such structures abstractly, instead of pixel-wisely tracing them. In other words, there are geometric deformations between the input photo and the sketches drawn by artists. To simulate such an abstraction drawing methodology, we propose to align decoding features at coarse scales. In this way, the generated sketches would present abstract and distinct outlines, instead of scattered outlines with a lot of subtle variations.
In practice, we use deformable convolution (DCN) [7] instead of standard Transposed Convolution over the first and second decoding layers. As will be presented in the ablation study (Section 4.5), this deformable outline generation (DOG) module significantly boosts the clarity of generated outlines. Besides, DOG enables the network to produce abstract sketches (e.g. Style1 in the FS2K dataset), which contain a sparse set of sketchy line drawings.
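One way to realize such a DOG layer is to pair upsampling with a deformable convolution from torchvision, whose sampling offsets are predicted from the features themselves; the block below is a sketch under that assumption and is not necessarily the exact layer configuration used in HIDA.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d

class DOGBlock(nn.Module):
    """Deformable outline generation: 2x upsampling followed by a deformable convolution."""
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # two offset channels (x and y) per kernel position
        self.offset_pred = nn.Conv2d(in_channels, 2 * kernel_size * kernel_size,
                                     kernel_size=kernel_size, padding=pad)
        self.deform_conv = DeformConv2d(in_channels, out_channels,
                                        kernel_size=kernel_size, padding=pad)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode="nearest")  # upsample like a decoding layer
        offsets = self.offset_pred(x)
        return self.deform_conv(x, offsets)                   # features sampled at learned offsets
```

Learned offsets let the kernel sample off-grid locations, which is what allows generated outlines to deviate from a strict pixel-wise tracing of the photo.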
### Overall Generator Architecture
Our generator follows the U-Net architecture [18] in whole. In the encoder, the facial photo \(\mathbf{X}\) and the depth map \(\mathbf{D}\) are first fed into a Convolutional layer, separately. Afterwards, the corresponding feature maps are concatenated and fed into the following encoding layers. Each encoding layer follows a Conv-metaReLU-IN architecture, and down-samples the size of feature maps by 1/2. The encoding features are adopted as appearance representations, \(\mathbf{A}\).
In the decoder, we add a DySPADE block to every decoding layer, except the last one. Fig. 2 illustrates the pipeline of a decoding layer with DySPADE. Over the \(l\)-th decoding layer, let \(\mathbf{D}^{l}\) be the corresponding depth map, \(\mathbf{S}^{l}\) the style map, and \(\mathbf{A}^{l}\) the appearance features. We down-sample the original depth map \(\mathbf{D}\) to \(\mathbf{D}^{l}\) by building a Gaussian Pyramid, and expand the one-hot style vector \(\mathbf{s}\) to \(\mathbf{S}^{l}\). Besides, we obtain \(\mathbf{A}^{l}\) by upsampling \(\mathbf{E}^{l-1}\) through a Transposed-Convolutional (TrConv) layer, followed by a metaACON activation layer. We finally apply the residual connection to obtain the output of the \(l\)-th decoding layer:
\[\mathbf{K}^{l}=\mathbf{H}^{l}\oplus\tilde{\mathbf{F}}^{l},\;\text{with}\; \tilde{\mathbf{F}}^{l}=\mathrm{DySPADE}(\mathbf{F}^{l}), \tag{4}\]
where \(\oplus\) denotes element-wise addition. \(\mathbf{H}^{l}\) is the initial upsampled feature map, output by a DOG layer (over the \(1^{st}\) and \(2^{nd}\) decoding layer) or a TrConv layer (over the rest layers). \(\mathbf{K}^{l}\) is fed into subsequent layers for generating final predictions.
### Loss Functions
To train our model, we use the following loss functions.
**Geometric loss.** First, we use a geometric constraint to supervise depth reconstructions from features of sketches. The geometric loss is the L2 distance between the input depth map and the reconstructed one:
\[\mathcal{L}_{geo}=\|\hat{\mathbf{D}}-\mathbf{D}\|_{2}^{2}. \tag{5}\]
**Textural loss.** The synthesized sketch \(\hat{\mathbf{Y}}_{s}\) should present similar textures to those drawn by an artist \(\mathbf{Y}_{s}\). In this work, we constrain \(\hat{\mathbf{Y}}_{s}\) and \(\mathbf{Y}_{s}\) to have similar pixel-wise adjacent correlations [21]. To this end, we calculate their gradients using the Sobel operator, and compute the average Cosine distance between them. Let \(\mathsf{g}_{i,j}=[g_{i,j}^{x},g_{i,j}^{y}]^{T}\) denote the \(x\)-directional and \(y\)-directional gradients of \(\mathbf{Y}_{s}\) at site \((i,j)\); and \(\mathbf{f}_{i,j}=[f_{i,j}^{x},f_{i,j}^{y}]^{T}\) the corresponding gradients in \(\hat{\mathbf{Y}}_{s}\). The textural loss is formulated as:
\[\mathcal{L}_{tex}=\frac{1}{MN}\sum_{i,j}\frac{\mathbf{g}_{i,j}^{T}\mathbf{f}_{ i,j}}{\|\mathbf{g}_{i,j}\|\cdot\|\mathbf{f}_{i,j}\|}, \tag{6}\]
where \(\|\cdot\|\) denotes the magnitude of a vector, \(M\) and \(N\) are the width and height of the sketch.
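A sketch of Eq. 6 in PyTorch is shown below. Note that, as written, Eq. 6 grows when the gradient fields agree, so a training objective would typically minimize its negation; the sketch returns \(1-\) mean cosine similarity under that assumption, and it assumes single-channel (grayscale) sketches.

```python
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)  # [[-1,-2,-1],[0,0,0],[1,2,1]]

def sobel_gradients(img):
    """img: (N, 1, H, W) grayscale sketch; returns (N, 2, H, W) x/y gradients."""
    gx = F.conv2d(img, SOBEL_X.to(img.device), padding=1)
    gy = F.conv2d(img, SOBEL_Y.to(img.device), padding=1)
    return torch.cat([gx, gy], dim=1)

def textural_loss(pred_sketch, target_sketch, eps=1e-8):
    """1 - mean cosine similarity between Sobel gradient fields (cf. Eq. 6)."""
    cos = F.cosine_similarity(sobel_gradients(pred_sketch),
                              sobel_gradients(target_sketch), dim=1, eps=eps)
    return 1.0 - cos.mean()
```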
**Pixel loss.** In addition, we use the pixel-wise reconstruction loss between the synthesized sketch \(\hat{\mathbf{Y}}_{s}\) and the target sketch \(\mathbf{Y}_{s}\), i.e.
\[\mathcal{L}_{pix}=\|\hat{\mathbf{Y}}_{s}-\mathbf{Y}_{s}\|_{1}. \tag{7}\]
**Adversarial loss.** Finally, we use adversarial loss to measure whether a pair of depth map and synthesized sketch is real or fake. Here, we use the Cross Entropy loss, i.e.
\[\mathcal{L}_{adv}=-\log D(\mathbf{D},\mathbf{Y}_{s})-\log(1-D(\hat{\mathbf{D} },\hat{\mathbf{Y}}_{s})). \tag{8}\]
**Full objective.** We use a combination of all the aforementioned losses as our full objective:
\[\mathcal{L}_{all}=\mathcal{L}_{adv}+\lambda_{1}\mathcal{L}_{pix}+\lambda_{2} \mathcal{L}_{tex}+\lambda_{3}\mathcal{L}_{geo}, \tag{9}\]
where \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are weighting factors. We train the generator \(G\) and the discriminator \(D\) in an alternating manner, to minimize \(\mathcal{L}_{all}\).
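A compact sketch of the generator-side objective in Eq. 9 is given below (reusing the `textural_loss` sketch above). The discriminator is assumed to output a probability for a (depth, sketch) pair, the adversarial term uses the standard non-saturating form rather than Eq. 8 verbatim, and the \(\lambda\) values are placeholders, since their exact settings are not given here.

```python
import torch
import torch.nn.functional as F

def generator_loss(disc, depth, sketch_gt, depth_rec, sketch_pred,
                   lam_pix=1.0, lam_tex=1.0, lam_geo=1.0):
    """Full generator objective (cf. Eq. 9); lambda weights are illustrative placeholders."""
    fake_score = disc(depth_rec, sketch_pred)                       # D(D_hat, Y_hat)
    adv = F.binary_cross_entropy(fake_score, torch.ones_like(fake_score))
    pix = F.l1_loss(sketch_pred, sketch_gt)                         # Eq. 7
    tex = textural_loss(sketch_pred, sketch_gt)                     # Eq. 6
    geo = F.mse_loss(depth_rec, depth)                              # Eq. 5
    return adv + lam_pix * pix + lam_tex * tex + lam_geo * geo
```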
## 4 Experiments
We present a thorough experimental comparison on the challenging FS2K dataset [8]. Besides, we conduct a series of ablation studies to analyze the impacts of the proposed DySPADE, InfoACON, and DOG modules.
### Experimental Settings
**Data.** We conduct experiments on the challenging FS2K dataset. The FS2K dataset is the largest publicly released FSS dataset, consisting of 2,104 photo-sketch pairs from a wide range of image backgrounds, skin colors, sketch styles, and lighting conditions. These sketches are mainly in three styles. Following standard settings [8], we have 1,058 photo-sketch pairs for training, and 1,046 pairs for testing. For each style, we have 357/351/350 training pairs, and 619/381/46 testing pairs, from Style1 to Style3, respectively. All the images are aligned and resized to \(250\times 250\). In the inference stage, we use the same style of sketch as the ground truth in default. In addition, we collect a number of challenging Faces in-the-wild from FFHQ [20] and Web. We align and resize these images in the same way as those in the FS2K dataset.
**Comparison Methods** In this section, we compare our method with various state-of-the-art (SOTA) ones, including FSGAN [8], GENRE [21], SCA-GAN [37], and MDAL [44]. Besides, we compare with several advanced GANs, including CycleGAN [46], Pix2Pix [18], and Pix2PixHD [35]. We use results and codes of these methods released by the corresponding authors [8]. All these methods and ours follow the same experimental settings.
**Criteria** In this work, we choose four performance indices as the criteria, i.e. the _Frechet Inception distance_ (FID) [15], _Learned Perceptual Image Patch Similarity_ (LPIPS) metric [42], _Structure Co-Occurrence Texture_ (SCOOT) metric [10], and _Feature Similarity Measure_ (FSIM) [39]. Lower values of FID and LPIPS indicate higher realism of synthesized sketches. In contrast, greater values of SCOOT and FSIM generally indicate higher similarity between a synthesized sketch and the corresponding sketch drawn by an artist. We here report the average LPIPS, SCOOT, and FSIM values across all the test samples, respectively. In the following sections, \(\downarrow\) indicates that lower value is better, while \(\uparrow\) higher is better.
**Implementation Details** We implemented our model in PyTorch. All experiments are performed on a computer with a Titan 3090 GPU. We use a batch size of 4, a learning rate of \(1e-4\). We use the Adam Optimizer, and train the model for 800 epochs on the training set. Our code will be released after peer review.
### Qualitative Comparison with SOTAs
We further qualitatively compare with SOTA FSS methods. Fig. 3 illustrates synthesized sketches on the FS2K dataset. Although our method can generate multiple styles of sketches, here we only show the synthesized sketch in the same style as the ground truth. For the face under constrained conditions (the first row), most methods successfully generate a quality sketch. For the face with an extreme lighting condition (the second row) or pose variation (the third row), most synthesized sketches present unpleasant geometric deformations and fail to precisely reproduce the style. Although the sketches generated by CycleGAN seem acceptable, their textures do not resemble pencil drawings. The sketches generated by FSGAN show the same styles as the ground truths, since FSGAN contains a style control module. However, these sketches show unpleasant structural distortions. This might be caused by the geometric deformations between facial photos and free-hand sketches drawn by artists, in the training data. GENRE successfully produces quality sketches, but they are almost all in the same style, since no style information is considered in GENRE.
In contrast, our HIDA generates high-quality sketches in all three styles. Specifically, our synthesized sketches preserve the geometries of the input faces. This implies that HIDA does not overfit to the training samples and combats geometric deformations. We achieve such success mainly due to the informatively adaptive normalization module, i.e. DySPADE, and the constraint of reconstructing the input depth. Besides, our synthesized sketches present the same style of strokes as the corresponding ground truths. The drawing textures are consistent inside each sketch. The style consistency demonstrates the effectiveness of our global style control mechanism through DySPADE and InfoACON. Based on all these observations, we conclude that our HIDA model can generate high-quality and style-consistent sketches.
### Quantitative Comparison with SOTAs
**Overall Performance.** Table 1 shows the quantitative performance criteria of each method on the whole FS2K testing dataset. Obviously, our method achieves the lowest FID and LPIPS values. In contrast to the previous benchmark method, FSGAN, our HIDA dramatically decreases both FID and LPIPS, by about 20 and 0.22, respectively. Besides, compared to SOTA 2D-semantic driven methods, i.e. SCA-GAN and GENRE, HIDA decreases FID by about 24 and 5, respectively. HIDA also decreases LPIPS by about 0.04, i.e. about 10% relative. Such dramatic decreases of both FID and LPIPS mean that our method produces the most realistic sketches in terms of style and stroke.
In addition, HIDA achieves the second best value of SCOOT, which is significantly better than FSGAN and GENRE, but slightly lower than SCA-GAN. Such a high value of SCOOT means that the sketches produced by our method are similar to those drawn by artists in terms of structure and textures. Finally, HIDA achieves the third best FSIM value. Recall that there are geometric deformations between facial photos and sketches drawn by artists. Thus an excessively high FSIM value might indicate that an FSS model overfits to the training data and cannot precisely preserve facial structures in the translation process. Correspondingly, as shown in Fig. 3, both SCA-GAN and FSGAN produce deformed sketches. In contrast, HIDA preserves the structure of the input faces.
**Performance on Each Style.** We further analyse the performance of FSS methods on each style subset. Since both FID and LPIPS measure the realism of synthesized sketches in terms of style and textures, we report them in Fig. 4. Obviously, our HIDA model consistently achieves the lowest FID and LPIPS values, across all the styles. Especially, our method significantly outperforms previous SOTA method, FSGAN, according to both criteria. Such distinct superiority over existing methods demonstrates that
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & FID\(\downarrow\) & LPIPS\(\downarrow\) & SCOOT\(\uparrow\) & FSIM\(\uparrow\) \\ \hline Pix2Pix [18] & 18.34 & 0.304 & 0.493 & 0.541 \\ Pix2PixHD [35] & 32.03 & 0.468 & 0.374 & 0.531 \\ CycleGAN [46] & 26.49 & 0.505 & 0.348 & 0.501 \\ MDAL [44] & 50.18 & 0.492 & 0.355 & 0.530 \\ SCA-GAN [37] & 39.63 & 0.305 & **0.600** & **0.782** \\ FSGAN [8] & 34.88 & 0.483 & 0.405 & 0.610 \\ GENRE [27] & 20.67 & 0.302 & 0.483 & 0.534 \\ HIDA (Ours) & **15.06** & **0.263** & 0.575 & 0.551 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with SOTAs on the FS2K dataset.
Figure 4: FID and LPIPS values w.r.t. each style in FS2K.
Figure 3: Comparison with SOTAs on the FS2K dataset.
our method effectively learns the style information and allows precise control over the style of synthesized sketches.
### User Study
We further conduct a subjective study to evaluate the performance of HIDA, in contrast to existing methods. Specifically, we recruit 10 participants, none of whom are professional artists. For each participant, we show 1,000 randomly selected samples from the FS2K testing set. Each time, we show a facial photo, the corresponding sketch drawn by an artist, and 8 synthesized sketches produced by different methods. Participants are asked to choose the best sketch, according to (1) the similarity between a synthesized sketch and the ground truth, and (2) the quality of a sketch, based on their own preferences. Finally, we collect 10,000 preference labels in total.
Fig. 5 shows the average preference percentage for each model, and the standard deviation among different participants. Obviously, our method dramatically outperforms all the other methods. On average, participants judge our model to generate the best sketch for over 70% of facial photos. The subjective comparison results demonstrate that our method significantly outperforms SOTAs in generating high-quality and style-specific facial sketches. In addition, the sketches synthesized by our HIDA model meet the preference of most users.
### Ablation Study
We first conduct a series of ablation studies on the FS2K dataset. To this end, we build several model variants by gradually adding different modules to the base model, i.e. Pix2Pix [18]. The modules we aim to analyze include the use of the depth map \(\mathbf{D}\) as auxiliary input, the DySPADE transformation, the InfoACON function, and the DOG layer.
**Qualitative Analysis.** Fig. 6 illustrates sketches produced by these model variants. The second column shows the depth maps predicted by 3DDFA. These maps generally agree well with the corresponding facial geometry. The third column shows sketches generated by the base model (i.e. Model-A). Obviously, these sketches occasionally show chaotic facial structures. Besides, there is no distinct difference between the two generated sketches in terms of style. In contrast, using the DySPADE module (i.e. Model-C) enables the model to precisely preserve tiny facial structures. For example, the shapes of eyebrows in both examples become consistent between the synthesized sketches and the input photos.
If we further use the InfoACON function in the decoder (i.e. Model-D), the generator produces more details. For example, the textures precisely present the 3D structure of the lips. Besides, the major boundary of the eyeglasses is generated. The synthesized sketches of these two examples also show different types of strokes over the same semantic regions, e.g. lips. Finally, using DOG (i.e. the full model) enables the model to generate abstract and distinct outlines. For example, the result in the top row is consistent with Style1 in terms of line drawings. All the other model variants produce obvious rendering textures to present 3D geometry. Such comparisons support our motivation of using DOG to simulate the abstraction process of human artists.
significantly changing facial structures. Inspiringly, our full model achieves the best performance in terms of all the quantitative criteria.
**Parameter Visualization.** We further analyze the impact of each module by removing it from our full model. Fig. 7 visualizes activation maps and adaptation parameters w.r.t. the corresponding model variants. We can see that depth helps learn effective geometric representations (Full _vs_. w/o Depth). Besides, the proposed dynamic adaptation (DySPADE and InfoACON) boosts the representations, and mitigates the artifacts introduced by the incomplete depth map. Based on the previous analysis, we conclude that our method achieves such inspiring performance due to a combination of depth and the IDA modules.
**Analysis of InfoACON.** We further analyze the impacts of dynamic activation functions, including metaACON and the proposed InfoACON. To this end, we build model variants based on Model-B, by (1) using ReLU in the encoder and LeakyReLU in the decoder and discriminator; (2) using metaACON [25] in all layers; and (3) using InfoACON in the decoder and metaACON in the other layers. As shown in Fig. 8, InfoACON makes the generator merely produce distinct sketchy outlines over the mouth region, which is most similar to the ground truth, in terms of style. As shown in Table 3, InfoACON achieves the lowest FID and LPIPS, as well as highly comparable SCOOT and FSIM. Besides, both metaACON and InfoACON outperform ReLU/LeakyReLU. This means that dynamic activation significantly improves the consistency between the synthesized sketches and those drawn by human artists, in terms of textures.
**Analysis of DOG.** In our framework, we apply deformable convolutions only at coarse-scale layers, i.e. the top 2 layers in the decoder. To verify this design choice, we construct variants of our final model, by using DOG at the top 2 layers (_top_2), middle 3 layers (_mid3_), bottom 2 layers (_btm2_), and all layers (_all_), respectively. In this experiment, HIDA w/o DOG is the base model. Fig. 9 illustrates the corresponding synthesized sketches. Obviously, the sketches synthesized with DOG present more distinct geometric outlines than those without DOG. If we apply DOG over the bottom layers, the model fails to generate sketches in Style1. This might be due to the fact that human painters usually abstract over large areas rather than small ones. Besides, the sketch synthesized by _top_2 has the most consistent style compared to the ground truth. Using DOG over all decoding layers leads to mixed effects on the synthesized sketch, e.g. confused styles and distinct boundaries. We therefore use DOG only over the top 2 decoding layers in our final model.
### Generalization Ability
To evaluate the generalization ability of our framework, we apply the previously learned HIDA model to challenging faces in-the-wild and natural images. Here, we compare with Pix2Pix, SCA-GAN, and GENRE, because the models of MDAL and FSGAN haven't been released. All models are learned from the training set of the FS2K dataset.
**Performance on Faces In-the-wild.**
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & FID\(\downarrow\) & LPIPS\(\downarrow\) & SCOOT\(\uparrow\) & FSIM\(\uparrow\) \\ \hline (LeakyReLU & 18.30 & 0.298 & 0.479 & 0.539 \\ metaACON & 19.05 & 0.292 & **0.498** & **0.541** \\ InfoACON & **17.41** & **0.291** & 0.493 & 0.536 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison between different activation functions.
Figure 8: Comparison between activation functions.
Figure 7: Visualization of activations and adaptation parameters, w.r.t. different model variants of HIDA.
Figure 9: Comparison between different settings of DOG.
Fig. 10 illustrates synthesized sketches on unconstrained faces. These faces have extreme variations in occlusion, pose, lighting, and tone. Generally speaking, our method produces high-quality sketches, in multiple styles, for both examples. Intriguingly, for the example shown in the bottom row, our HIDA model successfully depicts the eyes in shaded areas. In contrast, the other methods fail to generate some quality details, e.g. the eyes in both examples. Moreover, they produce geometric deformations over the mouth of the bottom example. Finally, our synthesized sketches vividly characterize the moods shown in the photographed faces.
**Extension to Natural Images.** We apply our previously learned model to several natural images collected from the Web. Here, we use MiDaS [31] instead of 3DDFA for depth estimation. Fig. 11 shows that our model still produces high-quality sketches, in multiple styles. The synthesized sketches vividly present the geometry and appearance of the natural images.
**Extension to other Image-to-Image translation tasks.** We additionally apply our method to pen-drawing generation (with paired data on the APDrawing dataset [36]) and exemplar-based image translation (with unpaired data on the MetFace dataset [19]). In the former task, we train and test our full model following standard settings. In the latter task, we use CoCosNet [41] as the baseline, and modify it by (1) using depth, and (2) replacing the standard SPADE modules in CoCosNet by DySPADE and InfoACON. As shown in Table 4, our method outperforms previous SOTA methods, in terms of most performance indices. Fig. 12 shows that our method generates distinct and accurate facial structures, compared to the other methods. Such results demonstrate that the proposed techniques are robust and applicable to other image translation tasks.
## 5 Conclusions
In this work, we use comprehensive facial information for synthesizing sketchy portraits. Technically, we propose two informative and dynamic adaptation methods, including a normalization module and an activation function. Extensive experiments show that our method, termed HIDA, can generate high-quality and style-controllable sketches, over a wide range of challenging samples. Our work also implies promising applications of dynamic adaptation, or dynamic networks, in more image generation tasks. Besides, it is promising to boost the performance of FSS models by combining multi-source datasets. We will explore such works in the near future.
|
2305.09681
|
Continual Learning for End-to-End ASR by Averaging Domain Experts
|
Continual learning for end-to-end automatic speech recognition has to contend
with a number of difficulties. Fine-tuning strategies tend to lose performance
on data already seen, a process known as catastrophic forgetting. On the other
hand, strategies that freeze parameters and append tunable parameters must
maintain multiple models. We suggest a strategy that maintains only a single
model for inference and avoids catastrophic forgetting.
Our experiments show that a simple linear interpolation of several models'
parameters, each fine-tuned from the same generalist model, results in a single
model that performs well on all tested data. For our experiments we selected
two open-source end-to-end speech recognition models pre-trained on large
datasets and fine-tuned them on 3 separate datasets: SPGISpeech, CORAAL, and
DiPCo. The proposed average of domain experts model performs well on all tested
data, and has almost no loss in performance on data from the domain of original
training.
|
Peter Plantinga, Jaekwon Yoo, Chandra Dhir
|
2023-05-12T16:19:30Z
|
http://arxiv.org/abs/2305.09681v1
|
# Continual Learning for End-to-End ASR by Averaging Domain Experts
###### Abstract
Continual learning for end-to-end automatic speech recognition has to contend with a number of difficulties. Fine-tuning strategies tend to lose performance on data already seen, a process known as catastrophic forgetting. On the other hand, strategies that freeze parameters and append tunable parameters must maintain multiple models. We suggest a strategy that maintains only a single model for inference and avoids catastrophic forgetting.
Our experiments show that a simple linear interpolation of several models' parameters, each fine-tuned from the same generalist model, results in a single model that performs well on all tested data. For our experiments we selected two open-source end-to-end speech recognition models pre-trained on large datasets and fine-tuned them on 3 separate datasets: SPGISpeech, CORAAL, and DiPCo. The proposed average of domain experts model performs well on all tested data, and has almost no loss in performance on data from the domain of original training.
Peter Plantinga, Jaekwon Yoo, Chandra Dhir JP Morgan Chase & Co., USA
[email protected], [email protected], [email protected]
**Index Terms**: speech recognition, continual learning, model averaging, diverse data
## 1 Introduction
Modern end-to-end automatic speech recognition (E2E-ASR) systems have achieved impressive results across a variety of data by training on massive datasets up to 700,000 hours [1]. While these generalist models often perform surprisingly well on domains they have never seen in a zero-shot manner, for specific applications they can still benefit tremendously from fine-tuning on data from the target domain.
A typical strategy for fine-tuning E2E-ASR systems involves standard gradient descent updates to model parameters using data from the target domain [2]. However, this strategy usually suffers from reduced performance on data from the original domain, a process known as catastrophic forgetting [3]. While in some cases it may be possible to maintain different parameters for different domains, this has the downside of adding complexity and taking up storage space, especially for large models. In addition, it may not always be clear which domain a target sample falls into.
Some have sought to address this difficulty with special attention paid to certain parameters, either by freezing some parameters [4] or by adding loss regularization designed to reduce forgetting [5]. These techniques have met mixed success with mitigating forgetting; serial fine-tuning processes on new domains still often result in decreased performance on the original data. We address this limitation by parallelizing the fine-tuning process and averaging the parameters of the fine-tuned expert models.
Others have addressed this difficulty by freezing the entirety of the good-performing generalist model and adding domain-specific parameters [6]. One popular technique along these lines is called Adapters [7], which involves freezing the original model parameters and updating small inserted modules whose starting configuration preserves the behavior of the original model. A weakness of this approach is that multiple sets of parameters must be maintained, and at inference a decision must be made about which set to use.
One final technique involves replaying data from the original domain [8]. This approach can work well when the original data is available but is not always possible, especially for pre-trained models where the original data is not publicly available.
To summarize the contributions of this work, we reformulate the continual learning paradigm from many serial applications of fine-tuning to a single model into a parallel learning process whereby multiple fine-tuned domain expert models are averaged into a single good-performing model. We call this paradigm Average of Domain Experts (AoDE).
## 2 Related work
Model parameter averaging appears in a number of contexts but appears only rarely in the context of continual learning, given the emphasis of the field on serial fine-tuning on a sequence of new domains. We relate a few of these contexts here:
For distributed model training with limited connectivity, called federated learning, some researchers have found that averaging models can achieve a similar performance as training a single model on all data [9, 10]. This strategy has the advantages of reducing communication overhead, as well as preserving data privacy by sharing only model parameters and not data samples. Dynamic approaches share parameters more or less frequently based on how rapidly performance deteriorates on out-of-domain data.
Another context that model averaging appears in is improving generalization of trained models. One example is Stochastic Weight Averaging [11] which finds better optima by collecting checkpoints throughout the training process and averaging them.
Other researchers have used model averaging to improve semi-supervised learning using a teacher model [12]. The authors found that better teacher models could be created by averaging the parameters of several teacher models, created by adding noise to the internal representations of student models. These averaged teacher models produce better training targets during semi-supervised training.
One last example is that model averaging is used to understand the dynamics of loss basins for neural networks [13]. In order to understand how different random initializations of ResNets end up achieving very similar performance after training, the authors suggest permuting the parameters in an isomorphic way so as to merge the distinct loss basins into a single loss basin. This is done by rearranging the parameters in one model to best match the parameters of an original model, and then merging the models.
All of this related work shows that model parameter averaging is a powerful technique that is under-used for the purpose of continual learning.
## 3 Experiments
We ran experiments on two large end-to-end speech recognition models pretrained on large sets with diverse data. Our experiments involve fine-tuning on three separate public datasets with diverse qualities. Details on models, datasets, fine-tuning procedure, and evaluation procedure are explained in the following sections.
### Pretrained Models
We demonstrate the generality of our results by using unrelated pre-trained models with differing architecture, training loss, data, sponsoring organization etc. The two pretrained models we used in our experiments are the NeMo Conformer CTC Large model [14] and the OpenAI Whisper Small.en model [1].
The NeMo English Conformer CTC Large model is trained with Connectionist Temporal Classification (CTC) loss and consists of a small downsampling layer followed by 18 convolution + self-attention blocks and a final output layer. The total number of parameters is 121M.
The tokenizer vocabulary includes 128 subword tokens; all tokens include only lowercase letters, apostrophes, and spaces. The acoustic model is trained on NeMo ASRset, which consists of roughly 25,000 hours of audio from a variety of sources, most of which are publicly available. We use version 1.10 of the model, the most recent version at the time of submission.
The Whisper Small.en model [1] is trained using standard sequence-to-sequence cross-entropy loss and consists of two major sub-models, an encoder and a decoder. The encoder consists of a small downsampling layer followed by 11 self-attention blocks and the decoder consists of 11 multi-headed attention blocks and an output layer. The total number of parameters is 241M.
This model uses an English-only tokenizer with the same 50k-token vocabulary as GPT-2. The set of characters present in these tokens is much larger than for NeMo Conformer, including upper and lower case as well as punctuation. The training data consists of roughly 500,000 hours of English-only data present in the OpenAI speech data.
### Datasets
For our experiments we used three public datasets with a large variety of qualities. For our first dataset, we used SPGISpeech [15], which consists of 5000 hours of high-quality recordings of earnings calls. These recordings are well-transcribed and are difficult for a generalist model only on account of a large vocabulary of financial terms that are unlikely to appear elsewhere. Since the original dataset contains punctuation and numbers but the NeMo Conformer tokenizer cannot encode these values, we pass the transcript through NeMo normalization [16] when training and evaluating NeMo Conformer. To speed up evaluation, we use a random subset of 2000 samples (roughly same size as LibriSpeech test sets) for a test set, and find the resulting performance differs by less than 3% relative in all measured cases. In addition, while training the Whisper Small.en model we found that our techniques were still not sufficient to prevent some catastrophic forgetting when the entire training set was used. Instead, we select a random subset of about 10% of the
data for training.
Figure 1: Our proposed update to the continual learning paradigm: instead of training sequentially on a variety of domains, fine-tune on each domain in parallel and then combine the results to get an average of domain experts model with no forgetting.
The second dataset we used was the CORAAL dataset [17], a conversational dataset between folks whose primary sociolect is African American Vernacular English (AAVE). The data was recorded in six separate locations and over the course of ten years (with one exception). In total, there are more than 150 interviews at a length surpassing 140 hours of audio. We split the data by separating 5 speakers for each of validation and test sets, amounting to roughly 5 hours each. Generalist models have difficulty with the conversational nature of the data and the different grammars of the sociolect. We divided the audio into segments based on the provided timings in the transcript, with total length not exceeding 30s, in order to match the expected input length for Whisper models.
Finally, we experimented with the DiPCo dataset [18], a small dataset of conversation in a dinner party scenario. These data were the most challenging, involving the most speakers and varied acoustic conditions. The length of the audio available is 2.7 hours for development and 3.4 hours for test. In a process similar to the one used on CORAAL data, we divided the audio into segments not exceeding 30s in length. The conversations were recorded by a number of devices; for the sake of simplicity, we take the sum of the close-talking microphone signals as the audio signal.
### Fine-tuning Procedure
Nearly all SGD-based fine-tuning procedures that sequentially access data will inevitably lose some performance on the original domain. In addition, the choice of which order to fine-tune on domains can have a significant effect on the outcome (see discussion of Table 3 later). To address all of these limitations and produce a single well-performing generalist model with no loss of performance, we propose a new continual learning paradigm that fine-tunes on each domain in parallel, and then averages the resulting expert models. See Figure 1 for a graphical depiction of the proposed Average of Domain Experts approach.
We begin by reproducing state-of-the-art continual learning techniques such as layer-wise learning rate decay (LLRD) [19] and slanted triangular learning rates (STLR) [20]. These techniques definitely help with better learning and less forgetting, as shown in Table 1. LLRD is applied by assigning the highest learning rate to the highest encoder layer and decaying the learning rate of each lower layer by a constant factor, usually 0.9. The learning rate of the lowest encoder layer is applied to any layers not in the encoder (decoder, output, embedding, etc.), from the inspiration of [4]. Our learning rate schedule, STLR, peaks at roughly 10-20% of the total training time.
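For concreteness, a sketch of how LLRD parameter groups might be constructed for an Adam optimizer is shown below; the way the encoder layers are accessed (`model.encoder.layers`) is model-specific and only an assumption here.

```python
import torch

def llrd_param_groups(model, encoder_layers, base_lr=3e-4, decay=0.9):
    """Top encoder layer gets base_lr; each lower layer is decayed by `decay`;
    all remaining parameters (decoder, embeddings, output head) use the lowest rate."""
    layers = list(encoder_layers)
    groups, seen = [], set()
    for depth, layer in enumerate(reversed(layers)):        # start from the top layer
        params = list(layer.parameters())
        seen.update(id(p) for p in params)
        groups.append({"params": params, "lr": base_lr * decay ** depth})
    lowest_lr = base_lr * decay ** (len(layers) - 1)
    rest = [p for p in model.parameters() if id(p) not in seen]
    groups.append({"params": rest, "lr": lowest_lr})
    return groups

# e.g. optimizer = torch.optim.Adam(llrd_param_groups(model, model.encoder.layers))
```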
Freezing non-encoder layers, as suggested by [4], is roughly equivalent to reducing the overall learning rate in the proposed scheme, as shown in Table 2. If one compares the frozen-layers row with the following row, they achieve very similar results across the board. We also found no benefit on top of LLRD from adding a loss against the predictions of the original model, a technique called Learning without Forgetting (LwF) [5].
Once the fine-tuning process is done producing domain expert models, we compute the average of experts in a straightforward way: a simple linear interpolation of corresponding model parameters with equal weighting on every model. This is sufficient for good results, and works well with other techniques for reduced forgetting, such as LLRD.
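The averaging step itself is a few lines of code; a sketch over PyTorch state dicts is shown below. The checkpoint file names in the usage comment are hypothetical, and all experts must share the same architecture.

```python
import copy
import torch

def average_of_domain_experts(state_dicts, weights=None):
    """Linearly interpolate corresponding parameters of several fine-tuned experts."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)   # equal weighting
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        if torch.is_floating_point(avg[key]):
            avg[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
        # non-float buffers (e.g. step counters) are kept from the first expert
    return avg

# expert_sds = [torch.load(p, map_location="cpu") for p in ("spgi.pt", "coraal.pt", "dipco.pt")]
# model.load_state_dict(average_of_domain_experts(expert_sds))
```

Setting unequal weights provides the fine-grained control over domain trade-offs discussed later in Section 4.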
All experiments are conducted with the SpeechBrain toolkit [21] on a single machine with four 24GB A10 GPUs. Batch size was maximized for the available space (5 per GPU for Whisper, dynamic batching at about 15 per batch for the longest samples for NeMo Conformer). We used Adam optimizer with default hyperparameters other than learning rate. Learning rate and LLRD rate were the sole optimized hyperparameters.
### Evaluation
As noted in section 3.1, NeMo Conformer Large CTC and Whisper Small.en have different output character sets, and therefore we have different normalization processes for the texts before WER computation.
For the NeMo model evaluations, we first normalized the transcript according to the same process used during training target preparation: we used NeMo Text Normalizer to do FST-based conversion of numbers and some symbols to pure text (e.g. $5\(\rightarrow\) five dollars and 5:00 \(\rightarrow\) five o'clock). Then all punctuation was stripped and text lower-cased. On top of these normalizations we added a few new ones: we normalized contractions to the shortened form and removed hesitations (e.g. \(\text{um}\), \(\text{hmm}\)). For decoding, we simply used greedy decoding with no language model.
For the Whisper model evaluations, we used the English text normalizer provided as part of the model code, which performed many of the same normalizations listed above, as well as some spelling normalizations. We also relied on Whisper model code to perform decoding. The code by default does greedy decoding and automatically handles certain failure cases. If the compression ratio of the predicted transcript surpasses a given threshold, indicating that the autoregressive decoder is stuck in a loop, the output is regenerated using a different sampling temperature. Similarly, if the average log probability over sampled tokens is below some threshold, the output is regenerated. We used the default values for all thresholds.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & CORAAL & LibriSpeech & LibriSpeech \\ procedure & test set & test-clean & test-other \\ \hline Pretrained & 18.8 & 3.35 & 7.55 \\ LLRD=1.0 & 14.4 & 5.86 & 11.8 \\ LLRD=0.9 & 12.4 & 3.82 & 9.01 \\ LLRD=0.8 & 13.2 & 3.32 & 8.31 \\ \hline \hline \end{tabular}
\end{table}
Table 1: WER performance comparison for Whisper Small.en fine-tuning with several values of LLRD.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & SPGI & LibriSpeech & LibriSpeech \\ procedure & test set & test-clean & test-other \\ \hline Pretrained Conformer & 5.47 & 2.15 & 4.48 \\ Frozen layers, lr=3e-4 & 2.73 & 2.74 & 5.84 \\ LLRD=0.9, lr=1e-4 & 2.74 & 2.77 & 5.94 \\ LLRD=0.9, lr=3e-4 & 2.63 & 2.98 & 6.52 \\ \hline \hline \end{tabular}
\end{table}
Table 2: WER performance comparison between freezing non-encoder layers as proposed by [4] and the LLRD approach, using NeMo Conformer CTC model.
## 4 Results
Our experimental results are shown in Tables 3 and 4, with results on test sets from the three domains used for training, as well as LibriSpeech test-clean and test-other. The LibriSpeech datasets provide a measure of catastrophic forgetting, though in the case of Whisper the model was likely never trained on the LibriSpeech data. To summarize the results on each domain we report the geometric mean, in order to avoid a bias towards more difficult domains, given the wide spread of WERs.
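For reference, the geometric mean reported in the last column is computed as below; the numbers in the usage comment are the Average of Domain Experts row of Table 3.

```python
import math

def geometric_mean(wers):
    """Geometric mean of WERs; avoids the summary being dominated by the hardest domain."""
    return math.exp(sum(math.log(w) for w in wers) / len(wers))

# geometric_mean([3.04, 18.2, 45.2, 2.18, 4.86])  # ~7.67, the AoDE row of Table 3
```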
In the results tables, fine-tuned models are all trained using the proposed procedure, including LLRD and STLR. After listing individually fine-tuned models, a list of sequentially tuned models are shown using an arrow to represent the progression of the tuning process. Finally, the proposed average of domain experts method is listed.
### NeMo Conformer
The experimental results for NeMo Conformer can be seen in Table 3. The first point of note is that the averaged models have substantially lower geometric means relative to all other model training procedures. In large part this is driven by a reduction in catastrophic forgetting, with final test performances at or very near the performance of the original pretrained model.
In addition to the drastic reduction in catastrophic forgetting, the proposed model's improvement is within 10% relative of the best performing model for each test set. Compared to the best-performing model on each domain, the averaged model is within 7.5% relative on SGPI, within 9% relative on SPGI, and within 2% relative on DiPCo.
Another point of note in this table is the dramatic effect of the order of datasets for sequential fine-tuning; the order \(\text{SPGI}\rightarrow\text{CORAAL}\rightarrow\text{DiPCo}\) performs worse than \(\text{SPGI}\rightarrow\text{DiPCo}\rightarrow\text{CORAAL}\) in every category. This demonstrates some of the difficulties with sequential fine-tuning that would not be present in the average of domain experts approach.
### Whisper Small.en
The experimental results for Whisper Small.en can be seen in Table 4. Again, the proposed average of experts achieves the lowest geometric mean, mainly due to reduced forgetting on LibriSpeech test-other and SPGI test.
We found the Whisper model more susceptible to catastrophic forgetting than the NeMo model; using the full SPGI training set resulted in a model that, when averaged with other models, produced WERs close to 100%. As mentioned before, we ended up using 10% of SPGI training data. We speculate that fine-tuning the model on the full set rearranges its parameters enough to fall into a separate loss basin. It might be possible to recover the original parameter arrangement by accounting for permutation invariances [13].
In addition, we used a more aggressive LLRD of 0.8 for the sequential fine-tuning, resulting in less forgetting (and slightly worse performance on each individual set). This likely contributes to the improved geometric mean score for sequential fine-tuning; however, this model still demonstrates some forgetting on the first fine-tuning set (i.e. SPGI). While one could argue this provides a degree of control over what domains the model performs best on, we argue that increasing or decreasing the proportion of each model present in the final average gives more fine-grained control.
## 5 Conclusions
We were able to show that a simple update to the continual learning paradigm is able to dramatically reduce catastrophic forgetting in well-trained generalist end-to-end speech recognition models. This is done by a simple average of experts, providing a flexible and tunable approach to model development.
In the future, we hope to improve on this work with more sophisticated averaging techniques, taking into account permutation invariances, with techniques such as Git Re-Basin [13] or Federated Matched Averaging (FedMA) [10].
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model training procedure & SPGI test & CORAAL test & DiPCo test & LS test-clean & LS test-other & Geometric mean \\ \hline Pretrained model & 4.94 & 18.8 & 48.5 & 3.35 & 7.55 & 10.3 \\ Fine-tuned on SPGI & 2.87 & 22.6 & 50.1 & 5.50 & 11.1 & 11.5 \\ Fine-tuned on CORAAL & 4.58 & 12.4 & 44.3 & 3.82 & 9.01 & 9.72 \\ Fine-tuned on DiPCo & 4.31 & 18.2 & 44.0 & 3.26 & 7.63 & 9.70 \\ SPGI \(\rightarrow\) DiPCo \(\rightarrow\) CORAAL & 4.35 & 13.1 & 43.3 & 3.35 & 8.23 & 9.26 \\ Average of SPGI and CORAAL & 3.09 & 15.9 & 44.0 & 3.52 & 8.13 & 9.08 \\ Average of Domain Experts & 3.41 & 15.7 & 43.0 & 3.39 & 7.74 & 9.04 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Whisper Small.en WER results on five test sets and the geometric mean of the five scores.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model procedure & SPGI test & CORAAL test & DiPCo test & LS test-clean & LS test-other & Geometric mean \\ \hline Pretrained model & 5.47 & 28.6 & 70.7 & 2.15 & 4.48 & 10.1 \\ Fine-tuned on SPGI & 2.63 & 23.9 & 70.5 & 2.98 & 6.52 & 9.71 \\ Fine-tuned on CORAAL & 5.99 & 15.6 & 45.4 & 3.06 & 6.70 & 9.72 \\ Fine-tuned on DiPCo & 7.01 & 25.3 & 47.3 & 3.15 & 6.95 & 11.3 \\ SPGI \(\rightarrow\) CORAAL & 3.52 & 16.7 & 44.8 & 3.10 & 6.91 & 8.92 \\ SPGI \(\rightarrow\) CORAAL \(\rightarrow\) DiPCo & 4.07 & 19.7 & 46.3 & 3.48 & 7.70 & 10.0 \\ SPGI \(\rightarrow\) DiPCo \(\rightarrow\) CORAAL & 3.57 & 16.4 & 44.1 & 3.09 & 6.85 & 8.86 \\ Average of SPGI and CORAAL & 3.25 & 17.5 & 51.3 & 2.33 & 5.02 & 8.07 \\ Average of Domain Experts & 3.04 & 18.2 & 45.2 & 2.18 & 4.86 & 7.67 \\ \hline \hline \end{tabular}
\end{table}
Table 3: NeMo Conformer WER results on five test sets and the geometric mean of the five scores.
|
2305.18954
|
Towards Machine Learning and Inference for Resource-constrained MCUs
|
Machine learning (ML) is moving towards edge devices. However, ML models with
high computational demands and energy consumption pose challenges for ML
inference in resource-constrained environments, such as the deep sea. To
address these challenges, we propose a battery-free ML inference and model
personalization pipeline for microcontroller units (MCUs). As an example, we
performed fish image recognition in the ocean. We evaluated and compared the
accuracy, runtime, power, and energy consumption of the model before and after
optimization. The results demonstrate that, our pipeline can achieve 97.78%
accuracy with 483.82 KB Flash, 70.32 KB RAM, 118 ms runtime, 4.83 mW power, and
0.57 mJ energy consumption on MCUs, reducing by 64.17%, 12.31%, 52.42%, 63.74%,
and 82.67%, compared to the baseline. The results indicate the feasibility of
battery-free ML inference on MCUs.
|
Yushan Huang, Hamed Haddadi
|
2023-05-30T11:39:32Z
|
http://arxiv.org/abs/2305.18954v1
|
# Poster: Towards Battery-Free Machine Learning Inference and Model Personalization on MCUs
###### Abstract.
Machine learning (ML) is moving towards edge devices. However, ML models with high computational demands and energy consumption pose challenges for ML inference in resource-constrained environments, such as the deep sea. To address these challenges, we propose a battery-free ML inference and model personalization pipeline for microcontroller units (MCUs). As an example, we performed fish image recognition in the ocean. We evaluated and compared the accuracy, runtime, power, and energy consumption of the model before and after optimization. The results demonstrate that, our pipeline can achieve 97.78% accuracy with 483.82 **KB** Flash, 70.32 **KB** RAM, 118 **ms** runtime, 4.83 **mW** power, and 0.57 **mJ** energy consumption on MCUs, reducing by 64.17%, 12.31%, 52.42%, 63.74%, and 82.67%, compared to the baseline. The results indicate the feasibility of battery-free ML inference on MCUs.
Edge Computing, IoT, TinyML, Resource-constrained
**Preprocessing.** We conducted a study on fish recognition in the ocean using the DeepFish dataset (Deng et al., 2017). DeepFish is based on field records of fish habitats and is customized for the analysis of fish in underwater marine environments. The dataset comprises approximately 40,000 high-resolution 1,920\(\times\)1,080 images. As Fig. 1 (a) shows, the original three-channel RGB images at a resolution of 1,920\(\times\)1,080 are reduced to single-channel images at a resolution of 32\(\times\)32.
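A minimal sketch of this kind of preprocessing, assuming PIL for image handling; the file name, resampling, and grayscale conversion are illustrative choices, since Fig. 1 does not specify them:

```python
import numpy as np
from PIL import Image

def preprocess(path, size=(32, 32)):
    """Reduce a 1920x1080 RGB frame to a 32x32 single-channel array in [0, 1]."""
    img = Image.open(path).convert("L")   # collapse the three RGB channels to one
    img = img.resize(size)                # 32x32 input expected by the MCU model
    return np.asarray(img, dtype=np.float32) / 255.0

x = preprocess("deepfish_frame.jpg")      # hypothetical file name
print(x.shape)                            # (32, 32)
```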
**Modeling.** We optimized ProxylessNAS (Deng et al., 2017) to search for a suitable model architecture. ProxylessNAS directly searches for architectures for the target task, optimizing model weights and architecture parameters alternately using gradient-based methods. It introduces binary gates that binarize the architecture parameters of an overparameterized network, so that only one path is loaded at runtime; this reduces memory consumption because the entire overparameterized network does not need to be loaded to update the model weights. The main components of ProxylessNAS are shown in Fig. 2.
**Quantization.** To reduce the model's size, computation requirements, power, and energy consumption, we utilize static quantization to quantize the original model. Static quantization is an optimization technique for neural network models, which converts the parameters and activations from floating-point to integer representations while preserving the model's accuracy. The process involves data collection, quantization range determination, quantization conversion, and dequantization. Static quantization significantly reduces model size, speeds up inference, and lowers power and energy consumption while maintaining model accuracy.
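A minimal sketch of this kind of post-training static (full-integer) quantization with the TFLite converter; the saved-model path and the calibration data generator are placeholders, not the exact pipeline used here:

```python
import tensorflow as tf

def representative_data_gen():
    # Calibration samples used to determine the quantization ranges (placeholder data).
    for _ in range(100):
        yield [tf.random.uniform((1, 32, 32, 1), dtype=tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("fish_model")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # integer inputs/outputs for the MCU
converter.inference_output_type = tf.int8

with open("fish_model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```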
**Deployment.** We utilize X-CUBE-AI to deploy the model and inference process as .h and .c files. X-CUBE-AI is a software package that helps developers integrate models into embedded applications on MCUs. It includes tools such as a neural network model converter and an inference engine.
## 3. Evaluation
We trained the lightweight ProxylessNAS model with TensorFlow, converted it to TFLite, and applied static quantization to reduce the model size, computation requirements, power, and energy consumption. We evaluated and compared the offline accuracy, conducting 10 repeated experiments on the original TensorFlow model, the original TFLite model, and the optimized TFLite model. The accuracies of these models are 97.88\(\pm\)1.12%, 97.88\(\pm\)1.12%, and 97.78\(\pm\)1.08%, respectively. The quantized TFLite model loses only approximately 0.1% accuracy.
We deployed the TFLite models on the STM32L4R5 development kit and used the Monsoon power monitor to measure power and energy. Power was calculated as the product of the current and voltage while the board was connected to the power source; the board was supplied at 1.9V. The power consumption results are presented in Table 1. Previous research has shown that underwater acoustic and ultrasonic signals can generate a few **mW** [3], suggesting that our pipeline can run solely on harvested energy.
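As a consistency check, the reported energy follows directly from the measured power and runtime: \(E = P\,t = 4.83\ \text{mW} \times 118\ \text{ms} \approx 0.57\ \text{mJ}\), matching the figure quoted above.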
## 4. Conclusion and Future Work
This paper introduces an energy-optimized ML deployment pipeline for resource-constrained MCUs. We achieved an average accuracy of 97.78% with 483.82 **KB** Flash, 70.32 **KB** RAM, 118 **ms** inference time, 4.83 **mW** power, and 0.57 **mJ** energy consumption, reductions of 64.17%, 12.31%, 52.42%, 63.74%, and 82.67%, respectively, compared to the unoptimized model. The results indicate the viability of battery-free ML on MCUs, with the potential to harvest the required energy from certain devices. In the future, we plan to further optimize energy consumption and improve the personalization of the model to address the data heterogeneity problem. We will also test the pipeline on various models and tasks to assess its scalability.
|
2310.03668
|
GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction
|
Large Language Models (LLMs) combined with instruction tuning have made
significant progress when generalizing to unseen tasks. However, they have been
less successful in Information Extraction (IE), lagging behind task-specific
models. Typically, IE tasks are characterized by complex annotation guidelines
that describe the task and give examples to humans. Previous attempts to
leverage such information have failed, even with the largest models, as they
are not able to follow the guidelines out of the box. In this paper, we propose
GoLLIE (Guideline-following Large Language Model for IE), a model able to
improve zero-shot results on unseen IE tasks by virtue of being fine-tuned to
comply with annotation guidelines. Comprehensive evaluation empirically
demonstrates that GoLLIE is able to generalize to and follow unseen guidelines,
outperforming previous attempts at zero-shot information extraction. The
ablation study shows that detailed guidelines are key for good results.
|
Oscar Sainz, Iker García-Ferrero, Rodrigo Agerri, Oier Lopez de Lacalle, German Rigau, Eneko Agirre
|
2023-10-05T16:43:13Z
|
http://arxiv.org/abs/2310.03668v5
|
# GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction
###### Abstract
Large Language Models (LLMs) combined with instruction tuning have made significant progress when generalizing to unseen tasks. However, they have been less successful in Information Extraction (IE), lagging behind task-specific models. Typically, IE tasks are characterized by complex annotation guidelines which describe the task and give examples to humans. Previous attempts to leverage such information have failed, even with the largest models, as they are not able to follow the guidelines out-of-the-box. In this paper we propose GoLLIE (Guideline-following **L**arge **L**anguage Model for **IE**), a model able to improve zero-shot results on unseen IE tasks by virtue of being fine-tuned to comply with annotation guidelines. Comprehensive evaluation empirically demonstrates that GoLLIE is able to generalize to and follow unseen guidelines, outperforming previous attempts at zero-shot information extraction. The ablation study shows that detailed guidelines are key for good results. Code, data and models are publicly available: [https://github.com/hitz-zentroa/GoLLIE](https://github.com/hitz-zentroa/GoLLIE).
## 1 Introduction
The task of Information Extraction (IE) is highly challenging. This challenge is evident in the detailed guidelines, featuring granular definitions and numerous exceptions, that human annotators must follow to perform the task. The performance of current state-of-the-art models heavily depends on the quantity of human-annotated data, as the model learns the guidelines from these examples. However, this performance decreases significantly when models are tested on a new annotation schema (Liu et al., 2021). The common practice in IE to achieve good results is to manually annotate each new domain and schema from scratch, as almost no transfer exists across application domains. Unfortunately, this is unfeasible both in terms of financial cost and human effort.
Figure 1: Out of domain zero-shot NER results. GPT results are not available for all domains.
Recent advancements in Large Language Models (LLMs) (Min et al., 2023) have enabled the development of models capable of generalizing to unseen tasks. Thus, current zero-shot IE systems leverage the knowledge encoded in LLMs to annotate new examples (Sainz et al., 2022a; Wang et al., 2023a). As a by-product of the pre-training process, models now possess a strong representation of what a person or an organization is. Therefore, they can be prompted to extract mentions of those categories from a text. However, this has a clear limitation: not every annotation schema* defines "person" (or any other label) in the same way. For example, ACE05 (Walker et al., 2006) annotates pronouns as persons, while CoNLL03 (Tjong Kim Sang & De Meulder, 2003) does not. IE tasks require more information than just label names; they require annotation guidelines.
Footnote *: We define schema as the set of labels and their definitions.
Current LLMs have been trained to follow instructions, but they fail to follow annotation guidelines out of the box. For instance, Figure 1 shows results on domain-specific zero-shot Named Entity Recognition. The results of gpt-3.5-turbo when prompted with guidelines (Ashok & Lipton, 2023) are low, around 20 F1 points on the Music and Politics domains. Building a system that enables high-performance zero-shot information extraction, reducing the dependence on costly human annotations, remains an open challenge.
In this work, we present **G**oLLIE (**G**uideline-**f**ollowing **L**arge **L**anguage Model for **IE**), an LLM fine-tuned to learn how to attend to the guidelines on a small set of well-known IE tasks. Comprehensive zero-shot evaluation empirically demonstrates that GoLLIE outperforms the state of the art (Wang et al., 2023a) in zero-shot information extraction (see Figure 1).
## 2 Related Work
Large Language Models (LLMs) have made significant advancements toward the development of systems that can generalize to unseen tasks (Min et al., 2023). Radford et al. (2019) trained a series of LLMs using a vast amount of internet data. They discovered that, during inference, pretrained models given natural language task descriptions can perform tasks such as question answering, machine translation, reading comprehension, and summarizing without explicit supervision. Building on this discovery, instruction tuning, often referred to as multitask fine-tuning, has emerged as the leading method to achieve generalization to unseen tasks. This process involves pre-training a model on a massive amount of unlabeled data and subsequently fine-tuning it on a diverse collection of tasks (Wang et al., 2022; Chung et al., 2022) phrased as text-to-text problems (Raffel et al., 2020). A natural language instruction or prompt is given to the model to identify the task it should solve (Schick & Schutze, 2021; Scao & Rush, 2021). Research has demonstrated that increasing the parameter count of the language model (Brown et al., 2020), coupled with improvements in the size and quality of the instruction tuning dataset, results in enhanced generalization capabilities (Chen et al., 2023; Zhang et al., 2022; Chowdhrey et al., 2022; Mueninghoff et al., 2023; Touvron et al., 2023a;b). LLMs have displayed impressive zero-shot generalization capabilities in various challenging tasks, including coding Wang & Komatasuki (2021); Black et al. (2022); Roziere et al. (2023), common sense reasoning (Touvron et al. (2023a), and medical applications Singhal et al. (2023), among others).
In the field of Information Extraction (IE), recent shared tasks (Fetahu et al., 2023) have shown that encoder-only language models such as XLM-RoBERTa (Conneau et al., 2020) and mDEBERTA (He et al., 2023) remain the most effective models. Attempts to utilize LLMs and natural language instructions for IE have been less successful (Tan et al., 2023; Zhou et al., 2023; Zhang et al., 2023a), as their performance lags behind that of encoder-only models. To adapt the instruction-tuning paradigm for the IE field, Sainz et al. (2021; 2022a;b) proposed reformulating various IE tasks into an entailment task. This approach has shown success in relation and event extraction in a few-shot setting, thus reducing the need for manual annotations. Lu et al. (2022a) introduced a unified text-to-structure generation that can model different IE tasks universally. Lou et al. (2023) proposed converting IE tasks to a semantic matching problem, allowing their method to generalize successfully to new domains and label ontologies not seen during training. Wang et al. (2023a) framed IE tasks as natural language descriptive instructions and trained an LLM across a diverse range of IE tasks. In evaluations on tasks with unseen label ontologies, their model outperformed other instruction-tuning methods.
Most instruction tuning attempts for IE share a limitation: they only consider label names in the prompts (e.g., _"List all the Persons in the following sentence"_). This poses two major challenges. Firstly, not all datasets share the same definition for labels like _Person_ (some exclude fictional characters or pronouns). Secondly, a label name alone does not sufficiently describe complex or less common labels. While there have been attempts to prompt LLMs using guidelines (Zhang et al., 2023), the strong prior knowledge of LLMs regarding task labels (Blevins et al., 2023) deters the model from learning to adhere to those guidelines.
## 3 Approach
Different from previous approaches, GoLLIE forces the model to attend to the details in the guidelines, performing robustly on schemas not seen during training. In this section we describe the details of our approach: how the input and output are represented, and the regularization techniques used to force the model to attend to the guidelines.
### Input-output representation
We have adopted a Python code-based representation (Wang et al., 2023; Li et al., 2023) for both the input and output of the model. This approach not only offers a clear and human-readable structure but also addresses several challenges typically associated with natural language instructions. It enables the representation of any information extraction task under a unified format. The inputs can be automatically standardized using Python code formatters such as Black. The output is well-structured, and parsing it is trivial. Furthermore, most current LLMs incorporate code in their pretraining datasets, indicating that these models are already familiar with this representation.
Figure 2 shows the three main parts of the format: schema definition, input text, and output annotations. **Schema definition** forms the initial segment of the input. This section contains the information about the labels, which are represented as Python classes; the guidelines, articulated as docstrings; and representative annotation candidates, presented in the form of code comments. The number of class definitions corresponds to the number of labels in the dataset. Classes are flexible and vary for each task. For example, classes for a NER dataset merely require an attribute to specify the text span that corresponds to the class. On the other hand, more complex tasks such as Event Argument Extraction (EAE) or Slot Filling (SF) demand more class attributes to characterize the task, such as a list of participants in an event (refer to examples in Appendix A). **Input text** is the second part of the input. The input text is represented as a string variable in Python. **Output annotations** is the part generated by the model. The model starts generating after result =. The annotations are
Figure 2: Example of the input and output of the model.
represented as a list of instances of the classes defined in the schema definition part. Parsing the output is straightforward; executing the generated code in Python yields a list containing the result. This ease of parsing is a significant advantage of our model.
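The snippet below is a schematic illustration of this format; the labels, guideline text, and sentence are invented for the example and are not taken from Figure 2 or from any of the datasets used in the paper.

```python
from dataclasses import dataclass

# Schema definition: one class per label, guideline as a docstring,
# representative candidates as comments.

@dataclass
class Disease:
    """A disorder of structure or function in a human, animal, or plant."""
    mention: str  # e.g., "gastritis", "influenza"

@dataclass
class Chemical:
    """A substance with a defined molecular composition, such as a drug."""
    mention: str  # e.g., "aspirin", "caffeine"

# Input text
text = "The patient developed gastritis after long-term aspirin use."

# Output annotations: the model generates everything after `result =`;
# executing the generated code yields the parsed predictions directly.
result = [
    Disease(mention="gastritis"),
    Chemical(mention="aspirin"),
]
```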
### Guidelines enhanced representation
The main contribution of this work is the use of the guidelines as part of the inference process to improve zero-shot generalization. An example of a class definition with and without guidelines is shown in Figure 3. Different datasets define guidelines in many different ways: some provide a complex definition of a label with several exceptions and special treatments, while others just give a few representative candidates of the fillers of the label. To normalize the input format, we included the label definitions as class docstrings and the candidates as a comment on the principal argument (which is usually _mention_ or _span_). Complex tasks such as EAE or SF require additional definitions for the arguments or slots; to that end, we included short definitions as comments on each class argument. In this paper, we will refer to the model without guidelines as Baseline and the model with guidelines as GoLLIE.
### Training regularization
We want to ensure that the model follows the guidelines and does not just learn to identify specific datasets and perform correctly on them. To do this, we introduce various kinds of noise during training. This stops the model from recognizing particular datasets, recalling specific labels, or attending only to the label names rather than learning to follow the actual description for each label in the guidelines.
We applied the following regularizations. **Class order shuffling**, for each example, the order of the input classes is randomly shuffled. This makes it more difficult for the model to memorize entire task definitions. **Class dropout**, we delete some of the input classes randomly. By eliminating few classes from both the input and output, we force the model to learn to only output instances of classes defined in the input. This not only encourages the model to focus on the schema definition but also minimizes the occurrence of hallucinations during inference. **Guideline paraphrasing**, we generate variations of the label definitions to prevent the model from easily memorizing them. We also think this will make the method more robust to different variations on the definition. **Representative candidate sampling**, similar to what we do with the paraphrases, for each input we sample 5 different candidates from a fixed pool of 10 per class. **Class name masking** involves substituting the label class names (e.g., Person) with placeholders, such as LABEL_1. This prevents the model from exploiting the label names during training, and forces it to attend and understand the guidelines.
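The sketch below is a rough illustration of how the input-side regularizations could be applied to the schema part of a training example; the data representation and the probabilities are assumptions, and guideline paraphrasing and candidate sampling are omitted for brevity.

```python
import random

def regularize_schema(class_defs, p_drop=0.15, p_mask=0.5):
    """Apply input-side noise to a list of (label_name, class_source_code) pairs.

    Instances of dropped classes must also be removed from the target output,
    so the model learns to predict only classes present in the input schema.
    """
    # Class order shuffling: avoid memorizing entire task definitions.
    defs = random.sample(class_defs, k=len(class_defs))
    # Class dropout: randomly remove some labels (keep at least one).
    defs = [d for d in defs if random.random() > p_drop] or defs[:1]
    # Class name masking: replace label names with placeholders such as LABEL_1.
    if random.random() < p_mask:
        defs = [
            (f"LABEL_{i + 1}", code.replace(name, f"LABEL_{i + 1}"))
            for i, (name, code) in enumerate(defs)
        ]
    return defs
```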
Figure 3: Example of the input representation. (left) An example of an event definition w/o guidelines information. (right) The same example but with guideline information as Python comments.
## 4 Experimental Setup
### Data
Evaluating zero-shot capabilities requires dividing the data into training and evaluation datasets. However, many benchmarks for Information Extraction are based on the same domain or share part of their schema. To ensure that the zero-shot evaluation is not affected by similar data, we have divided our set of benchmarks based on the domain of the data. For training we kept mostly datasets from the **News and Biomedical** domains; for evaluation, instead, we used datasets from **diverse domains**. This approach helps to avoid introducing any noise into the evaluation process. Among the evaluation datasets we included CrossNER (Liu et al., 2021), a dataset that is split into many domains; for simplicity, we treat each domain as a separate dataset: AI, Literature, Music, Politics and Science. Also, we refer to MIT Movie and MIT Restaurant as Movie and Restaurant. Table 1 contains the information about the data used in the experiments.
We have trained the model to perform 5 different tasks: Named Entity Recognition (NER), Relation Extraction (RE), Event Extraction (EE), Event Argument Extraction (EAE) and Slot Filling (SF). However, we only evaluated the model on the three main tasks of interest: NER, EE and EAE. The other two tasks are added in the training data in order to add diversity and improve the flexibility of the model.
A few modifications were made to two datasets in order to improve the quality of the model. First, the training data of Ontonotes 5 was reduced drastically, as it was automatically annotated. Second, the TACRED dataset was converted from RE to SF in order to increase the complexity of the task. These modifications mean that our system is not directly comparable with the state of the art on those tasks. However, our focus of interest is the zero-shot evaluation and, therefore, the benefits (see Appendix A) are more interesting than adding two more comparable points in the supervised setup. In the CASIE dataset, we detected that the annotated event spans are inconsistent: the models typically annotate a substring rather than the entire span. Therefore, we evaluate all the models based on the predicted event categories, without considering the exact text span. For arguments, we use partial matching.
We use the guidelines released by the authors of each dataset. When such guidelines are not publicly available, we ask human experts to create them based on the annotations from the development split. The representative candidates are extracted from the guidelines when available; otherwise, the candidates are sampled from the train split based on word frequency or manually curated based on the guidelines. Paraphrases are automatically generated using Vicuna 33B v1.3 (Zheng et al., 2023).
\begin{table}
\begin{tabular}{l|l l l l l l|c c} \hline \hline
**Dataset** & **Domain** & **NER** & **RE** & **EE** & **EAE** & **SF** & **Training** & **Evaluation** \\ \hline ACE05 (Walker et al., 2006) & News & ✓ & ✓ & ✓ & & ✓ & ✓ & ✓ \\ BC5CDR (Wei et al., 2016) & Biomedical & ✓ & ✓ & ✓ & & ✓ & ✓ \\ CoNLL 2003 (Tjong Kim Sang \& De Meulder, 2003) & News & ✓ & & & & ✓ & ✓ \\ DIANN (Fabregat et al., 2018) & Biomedical & ✓ & & & & ✓ & ✓ \\ NCBIDisease (Islamaj Dogan \& Lu, 2012) & Biomedical & ✓ & & & & ✓ & ✓ \\ Ontonotes 5 (Pradhan et al., 2013) & News & ✓ & & & & ✓ & ✓ \\ RAMS (Ebner et al., 2020) & News & & & & ✓ & ✓ & ✓ \\ TACRED (Zhang et al., 2017) & News & & & & & ✓ & ✓ & ✓ \\ WNUT 2017 (Derczynski et al., 2017) & News & ✓ & & & & ✓ & ✓ \\ \hline BroadTwitter (Derczynski et al., 2016) & Twitter & ✓ & & & & & ✓ \\ CASIE (Satyapanich et al., 2020) & Cybercrime & ✓ & ✓ & ✓ & & & ✓ \\ CrossNER (Liu et al., 2021) & Many & ✓ & & & & & ✓ \\ E3C (Magnini et al., 2021) & Biomedical & ✓ & & & & & ✓ \\ FabNER (Kumar \& Starly, 2022) & Science & ✓ & & & & & ✓ \\ HarveyNER (Chen et al., 2022) & Twitter & ✓ & & & & & ✓ \\ MIT Movie (Liu et al., 2013) & Queries & ✓ & & & & & ✓ \\ MIT Restaurants (Liu et al., 2013) & Queries & ✓ & & & & & ✓ \\ MultiNERD (Tedeschi \& Navigli, 2022) & Wikipedia & ✓ & & & & & ✓ \\ WikiEvents (Li et al., 2021) & Wikipedia & ✓ & ✓ & ✓ & & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Datasets used in the experiments. The table shows the domain, the tasks, and whether each dataset is used for training, evaluation, or both.
### Language Models and Baselines
**Backbone LLMs.** GoLLIE is a fine-tuned version of Code-LLaMA (Roziere et al., 2023). Other backbone LLMs, such as LLaMA (Touvron et al., 2023a), LLaMA-2 (Touvron et al., 2023b) or Falcon (Penedo et al., 2023), were considered during development; however, as our approach uses code to represent the input and output, the Code-LLaMA model worked better in the preliminary experiments. In order to perform fair comparisons, the baseline developed in this paper is based on Code-LLaMA as well. All the development in this paper was done with the 7B parameter version of Code-LLaMA, but for a scaling analysis we also trained the 13B and 34B parameter models.
**Training setup.** To train the models we use QLoRA (Hu et al., 2022; Dettmers et al., 2023). LoRA freezes the pre-trained model weights and injects trainable rank decomposition matrices into the linear layers of the Transformer architecture. In a preliminary experiment this setup outperformed fine-tuning the entire model on the zero-shot tasks, while training much faster (more details in Appendix C.3). We applied LoRA to all linear transformer block layers, as recommended by Dettmers et al. (2023). The models were trained for 3 epochs with an effective batch size of 32 and a learning rate of 3e-4 with a cosine scheduler. Our training infrastructure was two NVIDIA A100 GPUs with 80GB each. More details about the training are given in Appendix C.
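A minimal sketch of this kind of QLoRA setup with the `transformers` and `peft` libraries; the LoRA rank, alpha, and dropout values below are assumptions (they are not reported in this excerpt), while the target modules follow the stated choice of all linear transformer block layers.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantized base model (QLoRA) with LoRA adapters on the linear layers.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", quantization_config=bnb_config
)
lora_config = LoraConfig(
    r=16,               # rank: an assumption, not reported here
    lora_alpha=32,      # assumption
    lora_dropout=0.05,  # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```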
**Comparable systems.** Our main point of comparison is Instruct-UIE (Wang et al., 2023a), as it is the approach closest to our system, but without guidelines. Another system considered for comparison is PromptNER: Ashok & Lipton (2023) propose to prompt GPT-3.5 and T5 with definitions using Chain-of-Thought in order to perform few-shot NER. Different from us, they did not fine-tune the model to attend to the guidelines. For a fair comparison, we only considered the zero-shot results reported in the paper. In addition, other state-of-the-art systems are added for comparison when results from Instruct-UIE and PromptNER are not available.
## 5 Results
### Supervised evaluation
The results on the supervised datasets are shown in Table 2. Comparing GoLLIE with the baseline, both obtain very similar results, with an absolute difference of 0.3 F1 points on average. This is expected, as the model implicitly learns the guidelines for annotating the datasets from the data distribution during fine-tuning. In addition, despite the additional noise introduced during GoLLIE fine-tuning, the model's performance remains close to that of the baseline.
Compared to the state of the art, our model also achieves similar results. Focusing on the two datasets where our model under-performs significantly, NCBIDisease and WNUT, we find
Table 2: Supervised evaluation results per dataset (SoTA, Baseline, and GoLLIE F1 scores).
that task-specific techniques are still needed. Kocaman and Talby (2021) leverage a model pre-trained on biomedical domain corpora to better detect diseases, and Wang et al. (2021) use external knowledge to detect emergent and rare entities. These techniques, however, are complementary to our proposal.
### Zero-Shot evaluation
The zero-shot results are shown in Table 3. Overall, compared to the baseline, **the results are improved significantly when using guidelines** on almost every dataset, with an absolute difference of 13 F1 points on average. Despite dividing the evaluation benchmarks based on the domain, there is always some overlap between the labels of the training and evaluation benchmarks. For instance, the datasets E3C and WikiEvents share a large part of their schema with datasets like BC5CDR, ACE05 and RAMS. This phenomenon is reflected in the results.
GoLLIE surpasses the current state-of-the-art methods Instruct-UIE (Wang et al., 2023a) and entailment-based IE (Sainz et al., 2022b) by a large margin. Focusing on the comparison with Instruct-UIE, there are three main differences with our approach: the backbone model, the amount of training data, and whether or not guidelines are used. Instruct-UIE leverages the 11B FlanT5, which, in addition to the LM pre-training, is fine-tuned on 473 datasets. With respect to the data, Instruct-UIE leverages a total of 34 IE datasets (counting different tasks as datasets) from diverse domains, while we only leverage 12 datasets. Unlike our approach, however, they do not use guideline information. Still, our method performs significantly better, suggesting that the guidelines have an important effect on the results.
PromptNER (Ashok and Lipton, 2023) also adds some definition information to the prompt in order to perform zero-shot NER. We compare our approach with theirs (represented as GPT-3.5) in Figure 1. Although their approach leverages guidelines too, our approach performs significantly better on all datasets, showing that LLMs (even with 175B parameters) struggle to follow guidelines. They mitigate this by adding examples to the context, but still fall far behind in a comparable setting (T5-XXL).
### Model scaling
Recent research has shown that increasing the parameter count of language models leads to improved generalization capabilities (Brown et al., 2020). As presented in Table 3, we scale GoLLIE by using
\begin{table}
\begin{tabular}{l|r|r r|r r} \hline \hline
**Dataset** & **SoTA** & **Baseline** & **GoLLIE 7B** & **13B** & **34B** \\ \hline BroadTwitter & - & 39.0\(\pm\)0.5 & 49.5\(\pm\)0.7 & **51.4\(\pm\)**1.6 & 50.3\(\pm\)1.8 \\ CASIE\({}_{\text{EE}}\) & - & 33.9\(\pm\)5.6 & 59.3\(\pm\)2.0 & 62.2\(\pm\)0.8 & **65.5\(\pm\)**1.5 \\ CASIE\({}_{\text{EAE}}\) & - & 47.9\(\pm\)1.2 & 50.0\(\pm\)0.2 & 52.6\(\pm\)0.2 & **55.2\(\pm\)**0.4 \\ AI & (Wang et al., 2023a) 49.0 & 32.3\(\pm\)0.7 & 59.1\(\pm\)1.0 & 56.7\(\pm\)2.6 & **61.6\(\pm\)**1.7 \\ Literature & (Wang et al., 2023a) 47.2 & 39.4\(\pm\)0.6 & **62.7\(\pm\)**2.8 & 59.7\(\pm\)0.3 & 59.1\(\pm\)2.2 \\ Music & (Wang et al., 2023a) 53.2 & 56.2\(\pm\)1.2 & 67.8\(\pm\)0.1 & 65.5\(\pm\)3.1 & **68.4\(\pm\)**1.8 \\ Politics & (Wang et al., 2023a) 48.2 & 38.3\(\pm\)0.9 & 57.2\(\pm\)0.8 & 54.4\(\pm\)3.5 & **60.2\(\pm\)**2.6 \\ Science & (Wang et al., 2023a) 49.3 & 37.1\(\pm\)1.3 & 55.5\(\pm\)1.4 & 56.2\(\pm\)0.8 & **56.3\(\pm\)**0.4 \\ E3C & - & 59.8\(\pm\)0.2 & 59.0\(\pm\)0.3 & 59.0\(\pm\)0.8 & **60.0\(\pm\)**0.4 \\ FabNER & - & 06.1\(\pm\)0.4 & 24.8\(\pm\)0.5 & 25.4\(\pm\)0.5 & **26.3\(\pm\)**0.5 \\ HarveyNER & - & 23.2\(\pm\)0.3 & 37.3\(\pm\)1.6 & **41.3\(\pm\)**0.8 & 38.9\(\pm\)0.5 \\ Movie & (Wang et al., 2023a) 63.0 & 43.4\(\pm\)1.0 & **63.0\(\pm\)**0.5 & 62.5\(\pm\)0.9 & **62.4\(\pm\)**1.2 \\ Restaurants & (Wang et al., 2023a) 21.0 & 31.3\(\pm\)1.9 & 43.4\(\pm\)0.7 & 49.8\(\pm\)1.2 & 52.7\(\pm\)1.4 \\ MultiNERD & - & 55.0\(\pm\)0.9 & 76.0\(\pm\)0.6 & **77.5\(\pm\)**0.3 & 77.2\(\pm\)0.5 \\ WikiEvents\({}_{\text{NER}}\) & (Sainz et al., 2022b) *49.1 & 76.9\(\pm\)4.4 & 80.7\(\pm\)0.6 & 80.2\(\pm\)0.6 & **81.3\(\pm\)**0.5 \\ WikiEvents\({}_{\text{EE}}\) & (Sainz et al., 2022b) *10.4 & 47.5\(\pm\)0.4 & 43.0\(\pm\)0.5 & 45.7\(\pm\)0.7 & **47.0\(\pm\)**0.9 \\ WikiEvents\({}_{\text{EAE}}\) & Sainz et al. (2022a) 35.9 & 51.6\(\pm\)0.5 & 51.9\(\pm\)0.4 & **52.5\(\pm\)**1.0 & 50.7\(\pm\)**0.3 \\ \hline Average SoTA & 42.6 & 45.4\(\pm\)0.4 & 58.4\(\pm\)0.4 & 58.3\(\pm\)0.6 & **60.0\(\pm\)**0.8 \\ Average all & - & 42.3\(\pm\)0.2 & 55.3\(\pm\)0.2 & 56.0\(\pm\)0.2 & **57.2\(\pm\)**0.5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Zero-shot evaluation results. "*" indicates results obtained using the original code.
Code-LLaMA LLMs with 13 billion and 34 billion parameters as backbones. A higher parameter count yields superior average zero-shot performance. However, some datasets and tasks greatly benefit from a larger LLM, while others do not. We believe that some datasets do not see benefits from increasing the LLM size because their performance is hindered by the issues with the guidelines that we discuss in Section 5.4. While, in general, larger models achieve better results in both supervised and zero-shot settings, GoLLIE with a 7B parameter backbone already exhibits strong zero-shot capabilities. As described in Appendix C, the 13B and 34B parameter versions require orders of magnitude more compute. Thus, one must carefully weigh the benefits of enhanced performance against the significantly increased computational costs and resources required.
### Ablation study
We have performed an ablation to measure the contribution of several components to the zero-shot evaluation, as follows. We use "w/o Masking" for GoLLIE without class name masking regularization. We use "w/o Dropout" when we remove class dropout regularization, and "w/o Candidates" when removing the use of representative annotation candidates. Finally, "w/o _all_" removes all components, including guidelines, which is actually the baseline. As shown in Table 4, the results show a clear tendency: **the more regularization, the better the zero-shot results**. Class name masking and class dropout seem to have a small but still beneficial effect, in contrast to the representative annotation candidates, which give a stronger signal to the model. We see how definitions and representative candidates from the guidelines are complementary and help to improve each other.
## 6 Error analysis
In this section, we aim to better understand the effect of prompting LLMs with guidelines. We focus on specific labels across various datasets, with the results displayed in Table 5. Our analysis covers both successful and unsuccessful cases of entity labeling by GoLLIE. For the latter, we also aim to identify the reasons why the model fails to correctly label these entities.
The details are in the guidelines: Labels such as Media, VulnerabilityPatch, Trailer, and Task are inherently polysemous, making it challenging to determine the appropriate categorization based solely on the label name. As a result, the baseline struggles to classify items under these labels effectively due to insufficient information. Conversely, GoLLIE successfully follows the guidelines, underscoring their utility.
When the annotations do not comply with the guidelines: In the case of the Time label of the MultiNERD dataset, we found that our model labels years as Time entities. This is correct according to the annotation guidelines. Surprisingly, years are not labeled as entities in the dataset. In this case, GoLLIE successfully follows the guidelines; unfortunately, the dataset annotations do not.
Ambiguous labels: The Miscellaneous category, used by CoNLL03 and CrossNER datasets, refers to any named entity that is not included in the predefined categories set by the dataset. This definition is highly ambiguous and serves as a catch-all for various elements that do not fit into any of the predefined categories. Similarly, the Plot category of the Movie dataset is used to label a wide range of elements. For example, events in a movie (e.g., murder, love, horse racing), characters (e.g., vampires, zombies, cats), and the country of origin (e.g., British, American), among
\begin{table}
\begin{tabular}{l|c} \hline \hline
**Model** & **Zero-Shot F1** \\ \hline \hline GoLLIE & 55.3\(\pm\)0.2 \\ \hline w/o Masking & 54.6\(\pm\)0.5 \\ w/o Dropout & 54.0\(\pm\)0.2 \\ w/o Candidates & 49.9\(\pm\)0.1 \\ w/o _all_ (baseline) & 42.3\(\pm\)0.2 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation results.
others. This lack of specificity hinders the development of consistent rules or guidelines for tagging such elements (Ratinov and Roth, 2009), which is a problem for humans and machines alike. As a consequence, GoLLIE also fails to label them accurately.
Conflicts Between Fine-Grained and Coarse Entities: The CrossNER dataset introduces two labels for person names within each domain. For example, in the Science domain, the labels Scientist and Person are used; the latter is used to label any person who is not a scientist. Similarly, the Literature domain includes the labels Writer and Person. The guidelines assist GoLLIE in correctly labeling entities as Writer. However, GoLLIE still categorizes individuals as _Person_ even when they are scientists, despite the guidelines. This is not technically incorrect, as every scientist is, by definition, also a person.
Strong Label Preconceptions: In its Political domain set, CrossNER includes the label Political Party. GoLLIE outperforms the baseline, once again demonstrating the utility of providing the model with guidelines. However, we often find that the model categorizes political parties as organizations. As listed in Table 1, most of the pretraining datasets originate from the news domain, where political parties are a common entity. However, none of the fine-tuning datasets include the Political Party entity; they are instead categorized as Organization. Consequently, during inference, the model consistently labels political parties as organizations. We believe this issue can be resolved by expanding the number and diversity of the fine-tuning datasets.
In summary, we anticipate that **GoLLIE will perform well on labels with well-defined and clearly bounded guidelines**. On the other hand, ambiguous labels or very coarse labels pose challenges. In this regard, we believe that GoLLIE would benefit from learning to follow instructions such as _"Always label the most specific class"_ or _"Annotate this class in the absence of another, more specific class"_. We also expect that GoLLIE would benefit from expanding the number and diversity of the pre-training datasets. For this study, we selected a limited number of fine-tuning datasets to more effectively validate our method.
## 7 Conclusions
In this paper we introduce GoLLIE, an LLM specifically fine-tuned to comply with annotation guidelines that were originally devised to help humans annotate the dataset. A comprehensive zero-shot evaluation empirically demonstrates that annotation guidelines are of great value for LLMs, as GoLLIE
\begin{table}
\begin{tabular}{l l l l r} \hline \hline
**Dataset** & **Label** & **Guideline** & **Baseline** & **GoLLIE** \\ \hline MultiNERD & Media & Titles of films, books, songs, albums, fictional characters & 13.6 & 69.1 \\ CASIE & Vul. Patch & When a software company addresses a vulnerability by releasing an update. & 27.7 & 70.5 \\ Movie & Trailer & Refers to a short promotional video or preview of a movie. & 00.0 & 76.4 \\ AI & Task & Particular research task or problem within a specific AI research field. & 02.7 & 63.9 \\ MultiNERD & Time & Specific and well-defined time intervals, such as eras, historical periods, centuries, years and important days. & 01.4 & 03.5 \\ Movie & Plot & Recurring concept, event, or motif that plays a significant role in the development of a movie. & 00.4 & 05.1 \\ AI & Misc & Named entities that are not included in any other category. & 01.1 & 05.2 \\ Literature & Misc & Named entities that are not included in any other category. & 03.7 & 30.8 \\ Literature & Writer & Individual actively engaged in the creation of literary works. & 04.2 & 65.1 \\ Literature & Person & Person name that is not a writer. & 33.5 & 49.4 \\ Science & Scientist & A person who is studying or has expert knowledge of a natural science field. & 02.1 & 05.8 \\ Science & Person & Person name that is not a scientist. & 46.1 & 45.9 \\ Politics & Polit. Party & Organization that compete in a particular country’s elections. & 11.2 & 34.9 \\ \hline \hline \end{tabular}
\end{table}
Table 5: This table shows the F1 scores for specific labels from different datasets. The guideline column is a small summary of the actual guideline used to prompt the model.
successfully leverages them. GoLLIE achieves better zero-shot results than previous attempts at zero-shot IE, which either do not leverage the guidelines or use models that are not fine-tuned to follow them.
GoLLIE represents significant progress towards the development of models that can generalize to unseen IE tasks. In the future, we plan to enhance GoLLIE by using a larger and more diverse set of pre-training datasets. We will also improve the model's performance on ambiguous and coarse labels by expanding the set of instructions that the model can follow.
|
2302.14105
|
An Optimal Joint Antenna and User Selection Algorithm for QoS-based
CR-NOMA
|
Both non-orthogonal multiple access (NOMA), which can serve multiple users
simultaneously and on the same frequency, and cognitive radio (CR) can
contribute to eliminating the spectrum scarcity problem. In this work, an
uplink CR-based NOMA (CR-NOMA) system, which is equipped with multiple users
and a base station with a multi-antenna, is proposed to improve spectral
efficiency. By considering the users' quality of service (QoS), the system
performance of successive interference cancellation (SIC) is investigated in
this system. Two different antenna and secondary user selection algorithms are
proposed to improve the outage performance and retard the effect of the error
floor. The multi-antenna CR-NOMA with QoS-based SIC system outperforms
conventional channel state information (CSI)-based SIC. In addition, it is
shown that the outage performance of this system on both proposed algorithms is
better than in the case of not using the algorithm. The closed-form outage
probability expression of this system for the suboptimal joint antenna and user
selection algorithm is derived. Furthermore, when the proposed algorithms are
not used, the closed-form expression of the outage probability for this system
with a single-antenna base station is derived. Extensive simulation results
verify the accuracy of theoretical analyses.
|
Omer Faruk Akyol, Fethi Okta, Semiha Tedik Basaran
|
2023-02-27T19:33:41Z
|
http://arxiv.org/abs/2302.14105v1
|
# An Optimal Joint Antenna and User Selection Algorithm for QoS-based CR-NOMA
###### Abstract
Both non-orthogonal multiple access (NOMA), which can serve multiple users simultaneously and on the same frequency, and cognitive radio (CR) can contribute to eliminating the spectrum scarcity problem. In this work, an uplink CR-based NOMA (CR-NOMA) system, which is equipped with multiple users and a base station with a multi-antenna, is proposed to improve spectral efficiency. By considering the users' quality of service (QoS), the system performance of successive interference cancellation (SIC) is investigated in this system. Two different antenna and secondary user selection algorithms are proposed to improve the outage performance and retard the effect of the error floor. The multi-antenna CR-NOMA with QoS-based SIC system outperforms conventional channel state information (CSI)-based SIC. In addition, it is shown that the outage performance of this system on both proposed algorithms is better than in the case of not using the algorithm. The closed-form outage probability expression of this system for the suboptimal joint antenna and user selection algorithm is derived. Furthermore, when the proposed algorithms are not used, the closed-form expression of the outage probability for this system with a single-antenna base station is derived. Extensive simulation results verify the accuracy of theoretical analyses.
Non-orthogonal multiple access (NOMA), Cognitive radio (CR), Multi-Antenna, Quality of Service (QoS), Channel State Information (CSI)
## I Introduction
In wireless communication, the limited and scarce radio spectrum is an important problem, especially for the sub-6 GHz spectrum. In conventional orthogonal multiple access (OMA), increasing the number of users is limited by the available orthogonal resources and spectrum. On the other hand, non-orthogonal multiple access (NOMA) can serve multiple users efficiently by sharing the same spectrum instead of assigning each user to a different band of the spectrum [1]. Due to its ability to simultaneously serve several users while achieving high spectral efficiency and system capacity, and providing low complexity and a fairness/throughput tradeoff, NOMA is seen as a promising technology for the future generation of mobile communication networks [2]. In addition, the cognitive radio (CR) concept is another promising technique to solve the radio spectrum scarcity problem. In CR, there are two types of users, namely the primary user and the secondary user [3]. The primary user is privileged over the secondary user. Hence, the secondary user can use the spectrum only when the primary user is in the idle state [4].
To increase the efficiency of spectrum usage, NOMA can be combined with CR [5]. In this way, CR-NOMA has an important potential to effectively meet the growing requirements of users in wireless communication [6, 7]. The authors in [8] examined the outage probability of an overlay CR-based NOMA network. Also, downlink NOMA networks combined with CR under the underlay paradigm have been investigated in [9]. The authors in [10] focused on a downlink CR-based NOMA system including multiple secondary users and multiple relays to enhance the outage performance. In addition, by utilizing multiple secondary users as relays in a downlink CR-based NOMA network, the study in [11] brings cooperation to CR-based NOMA. In [12], an uplink CR-based NOMA system including a rate-splitting process, which allows the secondary user to allocate the transmit power efficiently for the highest achievable rate, has been proposed. In [13], an overlay CR-NOMA network was analysed in terms of physical layer security, with the aim of improving the secrecy performance of the primary users.
On the other hand, in NOMA networks, serving multiple users at the same time and frequency causes multiple-access interference. To eliminate this interference successfully, successive interference cancellation (SIC) is utilized [14]. The decoding order of the SIC is an important issue for the performance and applicability of the system. In the literature, there are two important SIC decoding orders, based on the quality of service (QoS) and on the channel state information (CSI) [15, 16]. According to the QoS criterion, the signal of the primary user, who has higher-priority QoS requirements, is decoded first. Subsequent stages of the SIC decode the signals of the other users. On the other hand, based on the CSI criterion, the signal of the user who has a better CSI than the others is decoded first, and then the remaining users are decoded using SIC following the CSI order [17]. The outage performance of selecting the SIC decoding order based on both CSI and QoS in multi-user uplink NOMA systems is investigated in [18, 19], and the outage performances of both SIC methods in downlink NOMA systems are presented in [20].
However, there is no study focusing on SIC decoding orders based on either QoS or CSI for CR-NOMA networks with multiple antennas at the receiver side. In this paper, we consider an uplink CR-NOMA system with a multi-antenna base station and multiple users. The effect of the SIC decoding order on the outage performance is investigated in this proposed CR-NOMA system by comparing QoS-based
SIC with CSI-based SIC. Moreover, we will select the optimal secondary user and receiver antenna to improve the outage performance of the system model through the algorithms that will be proposed in the following parts of this paper. Hence, we achieve a surprising improvement in the outage performance of the proposed NOMA system with QoS-based SIC compared with CSI-based SIC. The main contributions of this work are given as follows:
* An uplink CR-NOMA network with multiple secondary users and a multi-antenna base station is proposed. To improve the outage performance of the proposed system with QoS-based SIC, we propose two different joint antenna and user selection algorithms.
* In the proposed system, through these proposed algorithms, the QoS-based SIC is compared to the CSI-based SIC in terms of outage probability. Numerical results show that, with these proposed algorithms, the QoS-based SIC outperforms the CSI-based SIC in terms of outage performance. Furthermore, both proposed algorithms provide a significant gain in the outage performance of this system as the number of antennas or secondary users increases.
* The outage probability of QoS-based SIC with the proposed algorithms is compared under both perfect and imperfect CSI.
* The closed-form expression of the outage probability of the QoS-based NOMA for the suboptimal joint antenna and user selection algorithm in the proposed system is derived and then verified with simulation results. Moreover, to show whether the optimal joint antenna and user selection algorithm works optimally in this system with QoS-based SIC, the obtained simulation results are verified by exhaustive analysis.
The rest of the paper is organized as follows. In Section II, the CR-NOMA system model is presented and this system is briefly investigated over the CSI-based SIC. In Section III, the same system model is investigated over the QoS-based SIC in detail and two algorithms are proposed to increase the outage performance of the QoS-Based SIC in this system. Section IV discusses numerical and simulation results. Finally, the conclusion is given in Section V.
## II Conventional NOMA Network With CSI-based SIC
In this section, we present the proposed multi-user multi-antenna CR-NOMA system model shown in Fig. 1 and investigate the CSI-based SIC in this system; the QoS-based SIC is investigated for the same NOMA network in the next section. In this system model, there are \(M+1\) users: one of them is the primary user, denoted \(D_{0}\), and the others are secondary users, denoted \(D_{m}\) where \(m=1,\ldots,M\). The target data rates of all secondary users and of the primary user are \(R_{s}^{th}\) and \(R_{0}^{th}\), respectively, to detect their signals. Each mobile user is equipped with a single antenna and the base station is equipped with \(K\) antennas, where \(k=1,\ldots,K\) is the antenna index. The channel gains of the primary user and of the \(m\)th secondary user at the \(k\)th antenna of the base station are denoted by \(h_{0}^{k}\) and \(h_{m}^{k}\), respectively. In addition, all channels are modelled as Rayleigh fading. The mean values of the primary user's and the secondary users' channel gains are denoted by \(\mathbb{E}\left[\left|h_{0}^{k}\right|^{2}\right]=\Omega_{0}\) and \(\mathbb{E}\left[\left|h_{m}^{k}\right|^{2}\right]=\Omega_{m}\), respectively, where \(\mathbb{E}\left[.\right]\) is the expectation operator. In the proposed uplink CR-NOMA network, the signal received at the \(k\)th antenna of the base station from the information signals transmitted by \(D_{0}\) and \(D_{m}\) is given by
\[y_{S}= \sqrt{P_{S}}h_{0}^{k}x_{PU}+\sqrt{P_{S}}h_{m}^{k}x_{SU}+w_{S}, \tag{1}\]
where \(x_{PU}\) and \(x_{SU}\) are the information signals of \(D_{0}\) and \(D_{m}\), respectively, both of which have unit power in expectation. The users' transmit powers are assumed to be identical and equal to \(P_{S}\), and \(w_{S}\sim CN\left(0,\sigma^{2}\right)\) is the additive white Gaussian noise at the base station with zero mean and variance \(\sigma^{2}\). The SIC decoding order can be determined simply by using the users' channel conditions in power-domain NOMA and CR-NOMA networks, as in CSI-based SIC [18, 21]. Similar to a power-domain NOMA network serving two users with different channel conditions simultaneously, if \(D_{m}\) is considered strong and \(D_{0}\) weak in terms of channel conditions, \(D_{m}\) can first be decoded at the \(k\)th antenna of the base station by CSI-based SIC under the interference caused by \(D_{0}\), based on the following metric:
\[\log\left(1+\frac{\bar{\gamma}\left|h_{m}^{k}\right|^{2}}{1+\bar{\gamma} \left|h_{0}^{k}\right|^{2}}\right)\geq R_{s}^{th}. \tag{2}\]
After that, \(D_{0}\) is decoded if \(D_{m}\) is decoded successfully by the process in (2). By applying the CSI-based SIC process at the \(k\)th antenna, the achievable data rates of \(D_{0}\) and \(D_{m}\) are given by, respectively,
\[R_{0}^{CSI}= \log\left(1+\bar{\gamma}\left|h_{0}^{k}\right|^{2}\right), \tag{3}\] \[R_{m}^{CSI}= \log\left(1+\frac{\bar{\gamma}\left|h_{m}^{k}\right|^{2}}{1+\bar {\gamma}\left|h_{0}^{k}\right|^{2}}\right), \tag{4}\]
where \(\bar{\gamma}=\frac{P_{S}}{\sigma^{2}}\) is the average signal-to-noise ratio (SNR).
Fig. 1: The considered uplink CR-NOMA system model with multi-antenna.
## III NOMA Network With QoS-based SIC
In CR-NOMA applications, the SIC decoding order is frequently selected based on the users' QoS requirements [22]. In the proposed CR-NOMA based system, the QoS requirements of the primary user, whose channel conditions are poor compared with the secondary users, should be guaranteed [23]. However, this system sacrifices the performance of the secondary users with better channel conditions to guarantee the primary user's QoS requirements. Hence, the primary user's target data rate should be lower than that of the secondary users, i.e. \(R_{0}^{th}<R_{s}^{th}\). Similar to an uplink NOMA network serving two users simultaneously, one of which is privileged over the other, the primary user and one of the secondary users simultaneously transmit their messages to the base station in this system model. At the \(k\)th antenna of the base station, as the first stage of QoS-based SIC, \(D_{0}\) should be decoded first due to the QoS requirements, under the interference caused by \(D_{m}\), based on the following metric:
\[\log\left(1+\frac{\bar{\gamma}\left|h_{0}^{k}\right|^{2}}{1+\bar{\gamma}\left| h_{m}^{k}\right|^{2}}\right)\geq R_{0}^{th}. \tag{5}\]
If \(D_{0}\) is decoded and removed successfully by the process in (5), \(D_{m}\) can be decoded. During the QoS-based SIC process, the achievable data rates of \(D_{0}\) and \(D_{m}\) are given by, respectively,
\[R_{0}^{QoS}= \log\left(1+\frac{\bar{\gamma}\left|h_{0}^{k}\right|^{2}}{1+\bar {\gamma}\left|h_{m}^{k}\right|^{2}}\right), \tag{6}\] \[R_{m}^{QoS}= \log\left(1+\bar{\gamma}\left|h_{m}^{k}\right|^{2}\right). \tag{7}\]
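For readers who want to reproduce these quantities numerically, the following Python sketch (our own illustration, not the authors' code; variable names and the SNR value are assumptions) samples Rayleigh-fading channel gains and evaluates the achievable rates in (3)-(4) and (6)-(7).

```
import numpy as np

rng = np.random.default_rng(0)

snr_db = 20.0                               # assumed average SNR in dB
gamma_bar = 10 ** (snr_db / 10)             # \bar{gamma} = P_S / sigma^2
omega_0, omega_m = 1.0, 1.0                 # E[|h_0^k|^2], E[|h_m^k|^2]

# |h|^2 of a Rayleigh channel with E[|h|^2] = Omega is exponential with mean Omega
g0 = rng.exponential(omega_0)               # |h_0^k|^2, primary user
gm = rng.exponential(omega_m)               # |h_m^k|^2, secondary user

# CSI-based SIC: the secondary user is decoded first, eqs. (3)-(4)
R0_csi = np.log2(1 + gamma_bar * g0)
Rm_csi = np.log2(1 + gamma_bar * gm / (1 + gamma_bar * g0))

# QoS-based SIC: the primary user is decoded first, eqs. (6)-(7)
R0_qos = np.log2(1 + gamma_bar * g0 / (1 + gamma_bar * gm))
Rm_qos = np.log2(1 + gamma_bar * gm)

print(R0_csi, Rm_csi, R0_qos, Rm_qos)
```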
### _Joint Antenna and User Selection Algorithms_
In the proposed CR-NOMA system, to improve the outage performance of both QoS-based SIC and the entire network, user and antenna selection procedures will be suggested and applied. In the proposed CR-NOMA network with QoS-based SIC, during the first step of SIC, the primary user's signal is decoded. After the SIC process, the secondary user's signal is decoded. The outage probability of this proposed network with QoS-based SIC can be given by
\[\mathrm{P}^{QoS}=\mathrm{Pr}\left(R_{0}^{QoS}<R_{0}^{th}\right)+\mathrm{Pr} \left(R_{0}^{QoS}>R_{0}^{th},R_{m}^{QoS}<R_{s}^{th}\right), \tag{8}\]
where \(\mathrm{Pr}(.)\) is the probability operator. Decoding of the secondary user's signal should be guaranteed after the SIC process to improve the entire system's outage performance. Hence, both to increase the outage performance of QoS-based SIC and to avoid blocking the decoding of the selected secondary user's signal at the second stage of SIC, two algorithms are suggested, namely Algorithm (1) and Algorithm (2), respectively. In addition, with these proposed algorithms, we aim to improve both the SIC decoding and the outage performance of the proposed system model by applying antenna selection among the \(K\) antennas and user selection among the \(M\) secondary users. Based on both algorithms, the best antenna and secondary user, which improve both the performance of the SIC process and the performance of the entire CR-NOMA network, are selected from the multiple antennas and multiple secondary users.
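Before turning to the selection algorithms, a minimal Monte Carlo sketch of the outage probability in (8) for a single, fixed antenna-user pair may help fix ideas (illustrative only; the target rates below are the values used later in the numerical results, while the SNR and trial count are assumptions):

```
import numpy as np

def outage_qos(gamma_bar, R0_th, Rs_th, omega0=1.0, omegam=1.0, trials=100_000, seed=1):
    """Monte Carlo estimate of P^QoS in eq. (8) for one antenna and one secondary user."""
    rng = np.random.default_rng(seed)
    g0 = rng.exponential(omega0, trials)                       # |h_0|^2 samples
    gm = rng.exponential(omegam, trials)                       # |h_m|^2 samples
    R0 = np.log2(1 + gamma_bar * g0 / (1 + gamma_bar * gm))    # eq. (6)
    Rm = np.log2(1 + gamma_bar * gm)                           # eq. (7)
    # Outage: the primary user fails, or the primary succeeds but the secondary fails.
    outage = (R0 < R0_th) | ((R0 >= R0_th) & (Rm < Rs_th))
    return outage.mean()

print(outage_qos(gamma_bar=10 ** (20 / 10), R0_th=0.2, Rs_th=1.0))
```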
#### III-A1 The Suboptimal Joint Antenna and User Selection Algorithm
In Algorithm (1), firstly, before starting the SIC process, the strongest data rate over all antennas and secondary users should be compared with the target data rate of the secondary users, namely \(R_{s}^{th}\), to check whether this network is completely in outage, as in Line 1-2. This data rate, namely \(R_{s}^{\max}\), can be given by
\[R_{s}^{\max}=\max(\max(R_{m}^{k})), \tag{9}\]
where \(R_{m}^{k}=\log_{2}(1+\bar{\gamma}|h_{m}^{k}|^{2})\). If \(R_{s}^{\max}\) is bigger than \(R_{s}^{th}\), the antenna and user selection steps can be applied; if not, the outage occurs. In case of no outage, the best antenna is selected based on the maximum channel gain between the primary user and all antennas to improve the performance of SIC, as in Line 4. Also, to minimize the performance degradation due to the interference caused by the secondary user during the SIC process applied to decode the primary user's signal, the secondary user with the weakest data rate greater than \(R_{s}^{th}\) at the selected antenna is selected, as in Line 5-12. Therefore, after the SIC operation, an outage in the proposed network due to the data rate of the selected secondary user being lower than \(R_{s}^{th}\) is avoided. The indexes of the selected antenna and secondary user are given by
\[k^{*}= \underset{k=1,\ldots,K}{argmax}(|h_{0}^{k}|^{2}), \tag{10}\] \[m^{*}= \underset{R_{m}^{th}\geq R_{s}^{th}}{arg\,min}(|h_{m}^{k^{*}}|^{2}). \tag{11}\]
Hence, the primary user's data rate is calculated according to the \(k^{*}\)th antenna and \(m^{*}\)th secondary user for the process of SIC in Line 13. Before the SIC process, the primary user's data rate is given by
\[R_{0}^{QoS,1}=\log_{2}\Big{(}1+\frac{\bar{\gamma}|h_{0}^{k^{*}}|^{2}}{1+\bar{ \gamma}|h_{m^{*}}^{k^{*}}|^{2}}\Big{)}. \tag{12}\]
If \(R_{0}^{QoS,1}\) is bigger than \(R_{0}^{th}\) under the interference of the secondary user's signal, the process of SIC can be carried out successfully and the outage does not occur; if not, the outage occurs, as in Line 26-27. On the other hand, if no secondary user can be selected at the \(k^{*}\)th antenna because no secondary user's data rate there is greater than \(R_{s}^{th}\), i.e. \(\max(R_{m}^{k^{*}})<R_{s}^{th}\), then, provided the data rate of the secondary user to be selected is greater than \(R_{s}^{th}\), the channel with the weakest data rate among all secondary users and all antennas is selected. Hence, over this channel, the antenna and the secondary user are selected, as in Line 14-23. After that, in Line 24, the primary user's data rate is calculated according to the reselected antenna and secondary user for the process of SIC. Hence, the primary user's data rate can be expressed as
\[R_{0}^{QoS,2}=\log_{2}\Big{(}1+\frac{\bar{\gamma}|h_{0}^{k^{+}}|^{2}}{1+\bar{ \gamma}|h_{m^{+}}^{k^{+}}|^{2}}\Big{)}. \tag{13}\]
If \(R_{0}^{QoS,2}\) is bigger than \(R_{0}^{th}\) under the interference by the secondary user's signal, the process of SIC can be carried out successfully and the outage does not occur. If not, the outage
occurs, as in Line 26-27. Moreover, the closed-form expression of the outage probability for Algorithm (1) is obtained as in (29) through the following procedure. Firstly, to improve the outage performance of the proposed QoS-based system, the secondary user's signal with the weakest data rate bigger than \(R_{s}^{th}\) among the \(M\) secondary users is selected, if there is any secondary user with a data rate bigger than \(R_{s}^{th}\) in this proposed network. In case there is no secondary user with a data rate bigger than \(R_{s}^{th}\), the outage occurs. Hence, the outage probability of this case can be expressed as
\[J_{1}=\Pr\left(R_{s}^{\max}<R_{s}^{th}\right)=F_{Y}(\gamma_{s})^{MK}=\left(1-e^ {-\frac{\gamma_{s}}{\Omega_{m}}}\right)^{MK}, \tag{14}\]
where \(Y=\bar{\gamma}|h_{m}^{k}|^{2}\). The probability density function (PDF) and the cumulative distribution function (CDF) of \(Y\) are equal to \(f_{Y}(y)=\frac{1}{\Omega_{m}}e^{-\frac{y}{\Omega_{m}}}\) and \(F_{Y}(y)=1-e^{-\frac{y}{\Omega_{m}}}\), respectively. It is assumed that \(\tilde{\Omega}_{0}=\bar{\gamma}\Omega_{0}\) and \(\tilde{\Omega}_{m}=\bar{\gamma}\Omega_{m}\). The threshold SNRs of the primary user and of each secondary user are \(\gamma_{0}=2^{R_{0}^{th}}-1\) and \(\gamma_{s}=2^{R_{s}^{th}}-1\), respectively. After selecting the antenna with the strongest channel gain between the primary user and all antennas as in (10), the probability of not having any secondary user with a data rate bigger than \(R_{s}^{th}\) at the selected \(k^{*}\)th antenna can be written as
\[J_{2}= \Pr\left(\max\left(\log_{2}(1+\bar{\gamma}|h_{m}^{k^{*}}|^{2}) \right)<R_{s}^{th}\right)=F_{Y}(\gamma_{s})^{M}. \tag{15}\]
If there is any secondary user with a data rate bigger than \(R_{s}^{th}\) at the selected antenna, in case there are \(g\) secondary users with a data rate lower than \(R_{s}^{th}\), the probability can be expressed using the probability mass function (PMF) as
\[\Pr\left(G=g\right)= \binom{M}{g}F_{Y}(\gamma_{s})^{g}\left(1-F_{Y}(\gamma_{s})\right) ^{M-g} \tag{16}\] \[= \binom{M}{g}\left(1-e^{-\frac{\gamma_{s}}{\Omega_{m}}}\right)^{g }\left(e^{-\frac{\gamma_{s}}{\Omega_{m}}}\right)^{M-g}.\]
It is assumed that there are \(M-g\) secondary users with a data rate bigger than \(R_{s}^{th}\) at the selected \(k^{*}\)th antenna. These \(M-g\) out of \(M\) secondary users will be distributed over the interval \([\gamma_{s},\infty)\) and the expression of the PDF for each of these \(M-g\) secondary users is \(f_{\tilde{Y}_{1}}(y)=\frac{1}{\tilde{\Omega}_{m}}e^{-\frac{(y-\gamma_{s})}{ \tilde{\Omega}_{m}}}\), where \(y\in[\gamma_{s},\infty)\). Hence, to obtain the expression of the PDF for the secondary user with the minimum data rate bigger than \(R_{s}^{th}\) among these \(M-g\) secondary users, firstly \(\Pr(\tilde{Y_{1}}>y\mid G=g)\) is calculated as follows
\[\Pr(\tilde{Y_{1}}>y\mid G=g)= \left(e^{-\frac{(y-\gamma_{s})}{\tilde{\Omega}_{m}}}\right)^{M-g}. \tag{17}\]
Then, \(\Pr(\tilde{Y_{1}}>y)\) is calculated as
\[\Pr(\tilde{Y_{1}}>y)= \Pr(\tilde{Y_{1}}>y\mid G=g)\Pr\left(G=g\right) \tag{18}\] \[= \left(1-e^{-\frac{\gamma_{s}}{\Omega_{m}}}+e^{-\frac{y}{\Omega_{ m}}}\right)^{M}-\left(1-e^{-\frac{\gamma_{s}}{\Omega_{m}}}\right)^{M}.\]
Hence, the expression of \(F_{\tilde{Y_{1}}}(y)\) can be written by
\[F_{\tilde{Y_{1}}}(y)=1-\left(1-e^{-\frac{\gamma_{s}}{\Omega_{m}}}+e^{-\frac{y }{\Omega_{m}}}\right)^{M}+\left(1-e^{-\frac{\gamma_{s}}{\Omega_{m}}}\right)^{ M}. \tag{19}\]
Finally, the expression of \(f_{\tilde{Y_{1}}}(y)\) is obtained by
\[f_{\tilde{Y_{1}}}(y)=\frac{M}{\tilde{\Omega}_{m}}e^{-\frac{y}{\Omega_{m}}} \left(1-e^{-\frac{\gamma_{s}}{\Omega_{m}}}+e^{-\frac{y}{\Omega_{m}}}\right)^{ M-1}. \tag{20}\]
Over the selected \(k^{*}\)th antenna and \(m^{*}\)th secondary user, if \(R_{0}^{QoS,1}\) is lower than \(R_{0}^{th}\), the outage occurs. The closed-form expression of this outage probability is calculated using \(f_{\tilde{Y_{1}}}(y)\) as
\[J_{3}= \Pr\left(\frac{\bar{\gamma}|h_{0}^{k^{*}}|^{2}}{1+\bar{\gamma}| h_{m^{*}}^{k^{*}}|^{2}}<\gamma_{0}\right)=\int_{\gamma_{s}}^{\infty}F_{X}( \gamma_{0}+y\gamma_{0})^{K}f_{\tilde{Y_{1}}}\left(y\right)\mathrm{d}y\] \[= \frac{M}{\tilde{\Omega}_{m}}\sum_{a=0}^{K}\sum_{b=0}^{M-1}\left(-1 \right)^{a}\binom{K}{a}\binom{M-1}{b}\left(1-e^{-\frac{\gamma_{s}}{\Omega_{m} }}\right)^{M-1-b}\] \[\times e^{-\gamma_{s}\left(\frac{\gamma_{0}}{\Omega_{0}}+\frac{ \bar{\gamma}+1}{\Omega_{m}}\right)}\frac{e^{-\gamma_{0}\frac{\bar{\gamma}}{ \Omega_{0}}}}{\left(\frac{\gamma_{0}}{\Omega_{0}}+\frac{\bar{\gamma}+1}{\Omega _{m}}\right)}, \tag{21}\]
where \(X=\bar{\gamma}|h_{0}^{k}|^{2}\). The PDF and CDF of \(X\) are equal to \(f_{X}(x)=\frac{1}{\Omega_{0}}e^{-\frac{x}{\Omega_{0}}}\) and \(F_{X}(x)=1-e^{-\frac{x}{\Omega_{0}}}\), respectively. If there is no secondary user with a data rate bigger than \(R_{s}^{th}\) at the selected \(k^{*}\)th antenna, then, provided there is at least one secondary user with a data rate bigger than \(R_{s}^{th}\) at some antenna, all channels which can provide a data rate bigger than \(R_{s}^{th}\) between the secondary users and the antennas are checked and detected. Among these detected channels, the one with the lowest data rate is selected. Hence, both the antenna and the secondary user are selected over the selected channel. The probability of having \(v\) channels, which provide a data rate lower than \(R_{s}^{th}\) between the secondary users and the antennas, can be expressed as
\[\Pr\left(V=v\right)= \binom{KM}{v}\left(1-e^{-\frac{\gamma_{s}}{\Omega_{m}}}\right)^{v} \left(e^{-\frac{\gamma_{s}}{\Omega_{m}}}\right)^{KM-v}. \tag{22}\]
Hence, it is assumed that there are \(KM-v\) channels with a data rate bigger than \(R_{s}^{th}\) between the secondary users and the antennas. \(KM-v\) out of \(KM\) channels will be distributed over the interval \([\gamma_{s},\infty)\) and the expression of the PDF for each of these \(KM-v\) channels is \(f_{\tilde{Y_{2}}}(y)=\frac{1}{\tilde{\Omega}_{m}}e^{-\frac{(y-\gamma_{s})}{ \tilde{\Omega}_{m}}}\), where \(y\in[\gamma_{s},\infty)\). To obtain the expression of PDF for the channel that will provide the minimum data rate bigger than \(R_{s}^{th}\) among \(KM-v\) channels, \(\Pr(\tilde{Y_{2}}>y\mid V=v)\) is calculated as follows
\[\Pr(\tilde{Y_{2}}>y\mid V=v)= \left(e^{-\frac{(y-\gamma_{s})}{\tilde{\Omega}_{m}}}\right)^{KM-v}. \tag{23}\]
Hence, \(\Pr(\tilde{Y_{2}}>y)\) is calculated using \(\Pr(\tilde{Y_{2}}>y\mid V=v)\) and \(\Pr\left(V=v\right)\) as
\[\Pr(\tilde{Y_{2}}>y)= \Pr(\tilde{Y_{2}}>y\mid V=v)\Pr\left(V=v\right) \tag{24}\] \[= \left(1-e^{-\frac{\gamma_{s}}{\Omega_{m}}}+e^{-\frac{y}{\Omega_{m}} }\right)^{KM}-\left(1-e^{-\frac{\gamma_{s}}{\Omega_{m}}}\right)^{KM}.\]
The CDF expression of \(F_{\tilde{Y_{2}}}(y)\) can be given by
\[F_{\tilde{Y_{2}}}(y)=1-\left(1-e^{-\frac{\gamma_{s}}{\Omega_{m}}}+e^{-\frac{y}{ \Omega_{m}}}\right)^{KM}+\left(1-e^{-\frac{\gamma_{s}}{\Omega_{m}}}\right)^{KM}. \tag{25}\]
Finally, the PDF expression of \(f_{\tilde{Y_{2}}}(y)\) is obtained by
\[f_{\tilde{Y_{2}}}(y)=\frac{KM}{\tilde{\Omega}_{m}}e^{-\frac{y}{\Omega_{m}}}\left(1-e^{-\frac{\gamma_{s}}{\Omega_{m}}}+e^{-\frac{y}{\Omega_{m}}}\right)^{KM-1}. \tag{26}\]
Over the selected \(k^{+}\)th antenna and \(m^{+}\)th secondary user, if \(R_{0}^{QoS,2}\) is lower than \(R_{0}^{th}\), the outage occurs. The closed-form expression of this outage probability is calculated using \(f_{Y_{2}}(y)\) as
\[J_{4}= \Pr\left(\frac{\bar{\gamma}|h_{0}^{k^{+}}|^{2}}{1+\bar{\gamma}|h_{m ^{+}}^{k^{+}}|^{2}}<\gamma_{0}\right)=\int_{\gamma_{s}}^{\infty}F_{X}(\gamma_{ 0}+y\gamma_{0})f_{Y_{2}}(y)\,\mathrm{d}y \tag{27}\] \[= \frac{KM}{\bar{\Omega}_{m}}\sum_{c=0}^{KM-1}\left(-1\right)^{c} \binom{KM-1}{d}e^{-\gamma_{s}\left(\frac{\gamma_{0}}{\bar{\Omega}_{0}}+\frac{d +1}{\bar{\Omega}_{m}}\right)}\] \[\times\left(1-e^{-\frac{\gamma_{s}}{\bar{\Omega}_{m}}}\right)^{ KM-1-d}\frac{e^{-\gamma_{0}\frac{c}{\bar{\Omega}_{0}}}}{\left(\frac{\gamma_{0}}{ \bar{\Omega}_{0}}+\frac{d+1}{\bar{\Omega}_{m}}\right)}.\]
Finally, the closed-form expression of the outage probability for Algorithm (1) can be written as
\[\mathrm{P}^{QoS}=J_{1}+\left(1-J_{1}\right)\left[\left(1-J_{2}\right)J_{3}+J_{ 2}J_{4}\right]. \tag{28}\]
Using (28), the closed-form theoretical expression for Algorithm (1) is given in (29). On the other hand, in the case of a base station with a single antenna, the secondary user with the minimum channel gain among the \(M\) secondary users is selected to improve the outage performance of the NOMA network with QoS-based SIC, as in [18]. Therefore, the outage performances of Algorithm (1) and of the secondary user selection method for QoS-based SIC in [18] are investigated in the numerical results section for the case of a single-antenna base station.
#### III-A2 The Optimal Joint Antenna and User Selection Algorithm
In the other proposed algorithm for the CR-NOMA network with QoS-based SIC, firstly, as at the beginning of Algorithm (1), it should be checked whether \(R_{s}^{\max}\) is bigger than \(R_{s}^{th}\) to determine whether this network is completely in outage before the SIC process. If \(R_{s}^{\max}\) is bigger than \(R_{s}^{th}\), the antenna and user selection stages can be applied. If not, it is accepted that the outage occurs in this network before the process of SIC, as in Line 1-2. In the antenna and user selection stages, all channels between secondary users and antennas satisfying the condition that the data rate is bigger than \(R_{s}^{th}\) are detected, as in Line 9-10. After that, all possible antennas, which can be used to select the best antenna, are determined by checking whether there is any secondary user with a data rate greater than \(R_{s}^{th}\) between each antenna and the secondary users, as in Line 17. The secondary user with the lowest data rate that is still greater than \(R_{s}^{th}\) among the \(M\) secondary users is detected for each of these possible antennas separately, as in Line 18. Using the secondary user detected separately for each of these possible antennas, the expression \(|h_{0}^{k}|^{2}/|h_{m^{*}}^{k^{*}}|^{2}\) is calculated and recorded once for each possible antenna, as in Line 19. By using these recorded values, the best antenna and, accordingly, the best secondary user are selected, as in Line 22. After the stage of antenna and secondary user selection, the process of SIC will be applied. The primary user's data rate is given by
\[R_{0}^{QoS,3}=\log_{2}\Big{(}1+\frac{\bar{\gamma}|h_{0}^{k^{*}}|^{2}}{1+\bar{ \gamma}|h_{m^{*}}^{k^{*}}|^{2}}\Big{)}. \tag{30}\]
If \(R_{0}^{QoS,3}\), which is calculated in Line 23, is greater than \(R_{0}^{th}\), the process of SIC is carried out successfully by decoding the primary user's signal. If not, the outage occurs in the proposed NOMA network, as in Line 24-25.
```
input  : \(K\), \(M\), \(\mathcal{G}[M,K-1]\), \(\mathcal{L}[1...M]\)
output : \(R^{QoS}\), \(error\)
1   if \(R_{s}^{\max}<R_{s}^{th}\) then
2       \(error\);
3   else
4       \(k^{*}\gets\underset{k=1,\ldots,K}{argmax}(|h_{0}^{k}|^{2})\)
5       if \(\max(R_{m}^{k^{*}})\geq R_{s}^{th}\) then
6           for \(m=1\) to \(M\) do
7               \(\mathcal{L}[m]\gets 0\)
8               if \(R_{m}^{k^{*}}\geq R_{s}^{th}\) then
9                   \(\mathcal{L}[m]\gets|h_{m}^{k^{*}}|^{2}\)
10              end if
11          end for
12          \(m^{*}\gets argmin(\mathcal{L}\mid\mathcal{L}[m]>0)\)
13          \(R^{QoS}=\log_{2}\Big{(}1+\frac{\bar{\gamma}|h_{0}^{k^{*}}|^{2}}{1+\bar{\gamma}|h_{m^{*}}^{k^{*}}|^{2}}\Big{)}\)
14      else
15          for \(m\gets 1\) to \(M\) do
16              for \(k\gets 1\) to \(K\), \(k\neq k^{*}\), do
17                  \(\mathcal{G}[m,k]\gets 0\)
18                  if \(R_{m}^{k}\geq R_{s}^{th}\) then
19                      \(\mathcal{G}[m,k]\gets|h_{m}^{k}|^{2}\)
20                  end if
21              end for
22          end for
23          \(m^{+},k^{+}\gets arg\,min\,arg\,min(\mathcal{G}\mid\mathcal{G}[m,k]>0)\)
24          \(R^{QoS}=\log_{2}\Big{(}1+\frac{\bar{\gamma}|h_{0}^{k^{+}}|^{2}}{1+\bar{\gamma}|h_{m^{+}}^{k^{+}}|^{2}}\Big{)}\)
25      end if
26      if \(R^{QoS}<R_{0}^{th}\) then
27          \(error\);
28      end if
29  end if
```
**Algorithm 1**The Suboptimal Joint Antenna and User Selection Algorithm
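For concreteness, a possible Python rendering of this selection logic is sketched below (our own reading of the pseudocode above, not the authors' implementation; array shapes, names, and the return convention are assumptions):

```
import numpy as np

def algorithm1(gain0, gains, gamma_bar, R0_th, Rs_th):
    """Suboptimal joint antenna and user selection (Algorithm 1, sketch).

    gain0 : array of shape (K,), |h_0^k|^2 for the primary user
    gains : array of shape (M, K), |h_m^k|^2 for the secondary users
    Returns the primary user's rate R^QoS, or None if an outage occurs.
    """
    R = np.log2(1 + gamma_bar * gains)                    # data rates R_m^k
    if R.max() < Rs_th:                                   # Lines 1-2: whole network in outage
        return None
    k_star = int(np.argmax(gain0))                        # Line 4: strongest primary channel
    feasible = R[:, k_star] >= Rs_th
    if feasible.any():                                    # Lines 5-12: weakest feasible user at k*
        cand = np.where(feasible, gains[:, k_star], np.inf)
        m_star = int(np.argmin(cand))
        g0, gm = gain0[k_star], gains[m_star, k_star]
    else:                                                 # Lines 14-23: weakest feasible channel overall
        cand = np.where(R >= Rs_th, gains, np.inf)
        m_plus, k_plus = np.unravel_index(np.argmin(cand), cand.shape)
        g0, gm = gain0[k_plus], gains[m_plus, k_plus]
    R0 = np.log2(1 + gamma_bar * g0 / (1 + gamma_bar * gm))   # Lines 13 / 24
    return R0 if R0 >= R0_th else None                    # Lines 26-27: SIC outage check
```

Drawing `gain0` and `gains` from exponential distributions, as in the earlier sketches, and averaging the outage indicator over many channel realizations would give a Monte Carlo estimate of the outage probability in (28).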
### _QoS-based SIC with imperfect channel estimation_
In this subsection, we present the channel model that will be used to examine how well the suggested network performs with imperfect CSI in channels with independent and identically distributed (i.i.d.) Rayleigh fading. Assuming the channel estimation is imperfect, the pilot-symbol-assisted channel estimate \(\hat{h}_{0}^{k}\) between the primary user and the \(k\)th antenna and the pilot-symbol-assisted channel estimate \(\hat{h}_{m}^{k}\) between the \(m\)th secondary user and the \(k\)th antenna differ from the true channels \(h_{0}^{k}\) and \(h_{m}^{k}\) by independent complex Gaussian errors \(\Delta h_{0}^{k}\) and \(\Delta h_{m}^{k}\), respectively, which have zero mean and variance \(\sigma_{e}^{2}\). Hence, \(\hat{h}_{0}^{k}\) and \(\hat{h}_{m}^{k}\) are equal to \(h_{0}^{k}+\Delta h_{0}^{k}\) and \(h_{m}^{k}+\Delta h_{m}^{k}\), respectively [24]. Assuming \(\Omega_{0}=\Omega_{m}=1\), the estimated channels \(\hat{h}_{0}^{k}\) and
\(\hat{h}_{m}^{k}\) are i.i.d. complex Gaussian random variables with zero mean and variance \(1+\sigma_{e}^{2}\). The true channel gains \(h_{0}^{k}\) and \(h_{m}^{k}\) can be written in terms of \(\hat{h}_{0}^{k}\) and \(\hat{h}_{m}^{k}\), respectively [25],
\[h_{0}^{k} =\Gamma\hat{h}_{0}^{k}+\tilde{h}_{0}^{k} \tag{31}\] \[h_{m}^{k} =\Gamma\hat{h}_{m}^{k}+\tilde{h}_{m}^{k}\]
where \(\Gamma=\frac{1}{1+\sigma_{e}^{2}}\). Also, \(\tilde{h}_{0}^{k}\) and \(\tilde{h}_{m}^{k}\) are i.i.d. complex Gaussian random variables with zero-mean and variance \(\tilde{\sigma}^{2}=\frac{\sigma_{e}^{2}}{1+\sigma_{e}^{2}}\).
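A short sketch of how the imperfect-CSI channels in (31) could be generated in a simulation is given below (an assumed implementation, not the authors' code; the function name and seed are placeholders):

```
import numpy as np

def imperfect_csi(n, sigma_e2, seed=2):
    """Generate true and estimated Rayleigh channels following eq. (31), with Omega = 1."""
    rng = np.random.default_rng(seed)
    # Estimated channel: complex Gaussian, zero mean, variance 1 + sigma_e^2
    h_hat = np.sqrt((1 + sigma_e2) / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    gamma = 1.0 / (1.0 + sigma_e2)                 # Gamma in eq. (31)
    sigma_t2 = sigma_e2 / (1.0 + sigma_e2)         # variance of the residual term
    # Residual error term, independent of the estimate
    h_tilde = np.sqrt(sigma_t2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    h_true = gamma * h_hat + h_tilde               # eq. (31); Var(h_true) = 1
    return h_true, h_hat
```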
## IV Numerical And Simulation Results
In this section, for the proposed uplink CR-NOMA scenario, the QoS-based SIC is compared with the CSI-based SIC in terms of outage probability over the two different algorithms, and then, for both perfect and imperfect CSI, the outage performance of the QoS-based SIC with these algorithms is investigated for different numbers of antennas and secondary users. In addition, the outage performance of the QoS-based SIC for Algorithm (1) is evaluated with numerical results obtained by Monte Carlo simulations, and these numerical results are verified by theoretical results. Also, the outage performance of the QoS-based SIC for Algorithm (2) is confirmed by exhaustive analysis, which is a primary method to find the optimal scheduled secondary user and antenna [26]. On the other hand, to present the performance of the CSI-based SIC for this proposed system in Fig. 2 and Fig. 6, the antenna with the weakest channel gain between the primary user and the antennas is selected to increase the data rate of the secondary user as in (4), and then the secondary user with the strongest channel gain between the selected antenna and the secondary users is selected. In Fig. 2, the impact of \(M\) on the outage probability versus SNR is investigated for the proposed system where \([M]:[1,5]\) and \([K]:[1]\). In the case where \(K\) is 1, besides the fact that increasing \(M\) degrades the outage performance, it is seen that the outage performance of QoS-based SIC with and without the algorithms is better compared to CSI-based SIC. Moreover, the effect of the error floor in CSI-based SIC is dramatically visible compared to QoS-based SIC with and without the algorithms. In the case where \(M\) is 5, without the use of any proposed algorithms, the outage probability of CSI-based SIC is obtained by selecting the secondary user with maximum channel gain between the antenna and
Fig. 2: The outage probability against SNR for different number of the secondary users. The users’ channel gains are assumed to be i.i.d. Rayleigh fading. \(R_{0}^{th}\) = 0.2 bit per channel uses (BPCU), \(R_{s}^{th}\) = 1 BPCU, \(\Omega_{0}=\Omega_{m}=1\) and \(K=1\).
secondary users, while the outage probability of QoS-based SIC is obtained by selecting the secondary user with minimum channel gain between the antenna and secondary users [18]. It can be seen that the analytical results and the exhaustive analysis are in agreement with the simulation results. The simulation results for the outage probability of the QoS-based SIC without the proposed algorithms are verified through the closed-form expression of \(\mathrm{P}_{A}^{QoS}\) obtained in the Appendix. Fig. 3 depicts the impact of the number of antennas at the base station on the outage probability versus SNR for the proposed system where \([M]:[6]\) and \([K]:[2,4,7]\). In the case where \(M\) is 6, it can be seen that increasing \(K\) dramatically decreases the outage probability. When \(K\) is 2, although the error floor begins to occur at high SNR, the effect of the error floor decreases as the number of antennas increases. The outage performance of QoS-based SIC with Algorithm (2) is notably better than with Algorithm (1). It is seen that the simulation results are verified by the theoretical analysis for Algorithm 1 and by the exhaustive analysis for Algorithm 2.
In Fig. 4, the impact of \(M\) on the outage probability versus SNR is investigated for the proposed system where \([M]:[1,4,6]\) and \([K]:[6]\). In the case where \(K\) is 6, besides the effect of increasing \(M\) on the outage performance, it is seen that the effect of the error floor is significantly reduced. Also, increasing \(M\) dramatically decreases the outage probability for both algorithms. As in Fig. 3, the outage performance of QoS-based SIC with Algorithm (2) is better than with Algorithm (1). Moreover, it can be seen that the analytical results and the exhaustive analysis are in agreement with the simulation results.
In Fig. 5, the effect of \(\sigma_{e}^{2}\) on the outage probability against SNR in the case of imperfect CSI is investigated and compared with the perfect-CSI case for the proposed system, where both \(K\) and \(M\) are 5. In the case of imperfect CSI, the outage probability of the NOMA network with the QoS-based SIC increases for Algorithm (2) compared to the perfect-CSI case. Although there is a channel estimation error in the case of imperfect CSI, it is observed that Algorithm (2) still works quite well. In Fig. 6, in the case of bad channel conditions between the primary user and the antennas compared to the secondary users, the outage probability against \(\Omega_{0}\) for the proposed system is investigated, where \([M]:[5]\) and \([K]:[3,5]\). As \(\Omega_{0}\) increases, the outage probability of the NOMA network with the QoS-based SIC for both proposed algorithms increases compared to CSI-based SIC. As in Fig. 4, the outage performance of QoS-based SIC with Algorithm (2) is better than with Algorithm (1). Increasing the number of antennas significantly improves the outage performance of QoS-based SIC compared to CSI-based SIC. In the case where \(K\) is 3, the outage performance of the QoS-based SIC is approximately equal to the outage performance of CSI-based SIC for both proposed algorithms when \(\Omega_{0}\) is about 0.5. However, in the case where \(K\) is 5, the outage performance of the QoS-based SIC for both proposed algorithms is equal to the outage performance of CSI-based SIC when \(\Omega_{0}\) is less than 0.5. It can be seen that the analytical results and the exhaustive analysis are in agreement with the simulation results.
## V Conclusion
In this paper, we have proposed an uplink CR-NOMA system with QoS-based SIC consisting of multiple users and a base station with multiple antennas. The outage performance of this system is improved by selecting the antenna and the secondary user through two different proposed algorithms for QoS-based SIC. Also, the effect of the error floor for this system with QoS-based SIC is reduced by selecting the antenna and the secondary user in the multi-user multi-antenna case. On the other hand, to observe the effect of the SIC decoding order on the system performance, the outage performance of QoS-based SIC with these algorithms has been compared with CSI-based SIC over the proposed system. The exact expression of the outage probability of this proposed system model with QoS-based SIC for Algorithm (1) has been derived in closed form over Rayleigh fading channels. In addition, the outage performance of this system with QoS-based SIC for Algorithm (2) has been verified by the
Fig. 4: The outage probability against SNR for different number of the secondary users. The users’ channel gains are assumed to be i.i.d. Rayleigh fading. \(R_{0}^{th}=0.2\) BPCU, \(R_{s}^{th}=1\) BPCU, \(\Omega_{0}=\Omega_{m}=1\), and \(K=6\).
Fig. 3: The outage probability against SNR for different number of antennas. The users’ channel gains are assumed to be i.i.d. Rayleigh fading. \(R_{0}^{th}\) = 0.2 BPCU, \(R_{s}^{th}\) = 1 BPCU, \(\Omega_{0}=\Omega_{m}=1\), and \(M=6\).
exhaustive analysis. In the case of a base station with a single antenna, the closed-form expression of the outage probability of this system with QoS-based SIC without the proposed algorithms has also been derived and compared with the proposed algorithms. In the case of a multi-antenna base station, the proposed algorithms for QoS-based SIC effectively reduce the outage probability. Also, the outage probability is reduced by increasing the number of secondary users and antennas, and the error floor is significantly decreased. Increasing the number of antennas widens the outage performance gap between Algorithm (1) and Algorithm (2) more effectively than increasing the number of secondary users.
## Appendix
Utilizing the secondary user signal with the weakest channel gain among \(M\) secondary users simply to minimize the performance degradation of the primary user if the base station has only one antenna in the proposed network, the expression of outage probability can be written as
\[\mathrm{P}_{A}^{QoS}=\underbrace{\Pr\left(\frac{X}{1+Y_{1}}<\gamma_{0}\right)} _{K_{1}}+\underbrace{\Pr\left(\frac{X}{1+Y_{1}}>\gamma_{0},Y_{1}<\gamma_{s} \right)}_{K_{2}},\] (A.1)
where \(X=\bar{\gamma}|h_{0}|^{2}\) and \(Y_{m}=\bar{\gamma}|h_{m}|^{2}\). Also, the PDFs of the unordered variables \(X\) and \(Y_{m}\) are \(f_{X}(x)=\frac{1}{\Omega_{0}}e^{-\frac{x}{\Omega_{0}}}\) and \(f_{Y_{m}}(y)=\frac{1}{\Omega_{m}}e^{-\frac{y}{\Omega_{m}}}\), respectively. It is assumed that \(|h_{1}|^{2}\leq|h_{m}|^{2}\leq|h_{M}|^{2}\) in this part. In what follows, \(K_{1}\) and \(K_{2}\) are addressed to write the closed-form expression of \(\mathrm{P}_{A}^{QoS}\) given in (A.1). Firstly, the expression of \(K_{1}\) can be given as
\[K_{1}=\int_{0}^{\infty}F_{X}\left(\gamma_{0}+\gamma_{0}y\right)f_{Y_{1}}(y) \mathrm{d}y.\] (A.2)
To calculate the expression of \(K_{1}\), \(F_{X}\left(\gamma_{0}+\gamma_{0}y\right)\) and \(f_{Y_{1}}(y)\) need to be evaluated first. Firstly, \(F_{X}\left(\gamma_{0}+\gamma_{0}y\right)\) is equal to \(1-e^{-\frac{(y+1)\gamma_{0}}{\Omega_{0}}}\). With the aid of [27], the PDF of the ordered variable \(Y_{1}\) can be calculated from the CDF of the ordered variable \(Y_{1}\), \(F_{Y_{1}}(y)\) is given by
\[F_{Y_{1}}(y)=M\sum_{s=0}^{M-1}\frac{\left(-1\right)^{s}}{1+s}\binom{M-1}{s} \left(1-e^{-\frac{y}{\Omega_{1}}}\right)^{s+1},\] (A.3)
where \(\tilde{\Omega}_{1}=\bar{\gamma}\Omega_{1}\). Using (A.3), the PDF of the ordered variable \(Y_{1}\) can be written as
\[f_{Y_{1}}(y)=\frac{M}{\tilde{\Omega}_{1}}\sum_{s=0}^{M-1}\left(-1\right)^{s} \binom{M-1}{s}e^{-\frac{y}{\Omega_{1}}}\left(1-e^{-\frac{y}{\Omega_{1}}} \right)^{s}.\] (A.4)
Hence, \(K_{1}\) can be rewritten as
\[K_{1} =1-e^{-\frac{\gamma_{0}}{\Omega_{0}}}\int_{0}^{\infty}e^{-\frac{ y\gamma_{0}}{\Omega_{0}}}f_{Y_{1}}(y)\mathrm{d}y\] \[=1-\frac{M}{\tilde{\Omega}_{1}}e^{-\frac{\gamma_{0}}{\Omega_{0}}} \sum_{s=0}^{M-1}\sum_{p=0}^{s}\frac{\left(-1\right)^{s+p}\binom{M-1}{s}\binom{ s}{p}}{\frac{\left(p+1\right)}{\Omega_{1}}+\frac{\gamma_{0}}{\Omega_{0}}}.\] (A.5)
Secondly, using (A.3) and (A.5), the closed-form expression of outage probability of \(K_{2}\) can be calculated as
\[K_{2} =\left(1-K_{1}\right)\left[\int_{0}^{\gamma_{s}}f_{Y_{1}}\left(y \right)\mathrm{d}y\right]\] (A.6) \[=\left(1-K_{1}\right)\left[M\sum_{s=0}^{M-1}\frac{\left(-1\right) ^{s}}{1+s}\binom{M-1}{s}\left(1-e^{-\frac{\gamma_{s}}{\Omega_{1}}}\right)^{ s+1}\right].\]
Finally, the closed form expression of \(\mathrm{P}_{A}^{QoS}\) is obtained by writing \(K_{1}\) and \(K_{2}\) in (A.1).
|
2302.03479
|
Optical bulk-boundary dichotomy in a quantum spin Hall insulator
|
The bulk-boundary correspondence is a key concept in topological quantum
materials. For instance, a quantum spin Hall insulator features a bulk
insulating gap with gapless helical boundary states protected by the underlying
Z2 topology. However, the bulk-boundary dichotomy and distinction are rarely
explored in optical experiments, which can provide unique information about
topological charge carriers beyond transport and electronic spectroscopy
techniques. Here, we utilize mid-infrared absorption micro-spectroscopy and
pump-probe micro-spectroscopy to elucidate the bulk-boundary optical responses
of Bi4Br4, a recently discovered room-temperature quantum spin Hall insulator.
Benefiting from the low energy of infrared photons and the high spatial
resolution, we unambiguously resolve a strong absorption from the boundary
states while the bulk absorption is suppressed by its insulating gap. Moreover,
the boundary absorption exhibits a strong polarization anisotropy, consistent
with the one-dimensional nature of the topological boundary states. Our
infrared pump-probe microscopy further measures a substantially increased
carrier lifetime for the boundary states, which reaches one nanosecond scale.
The nanosecond lifetime is about one to two orders longer than that of most
topological materials and can be attributed to the linear dispersion nature of
the helical boundary states. Our findings demonstrate the optical bulk-boundary
dichotomy in a topological material and provide a proof-of-principal
methodology for studying topological optoelectronics.
|
Junfeng Han, Pengcheng Mao, Hailong Chen, Jia-Xin Yin, Maoyuan Wang, Dongyun Chen, Yongkai Li, Jingchuan Zheng, Xu Zhang, Dashuai Ma, Qiong Ma, Zhi-Ming Yu, Jinjian Zhou, Cheng-Cheng Liu, Yeliang Wang, Shuang Jia, Yuxiang Weng, M. Zahid Hasan, Wende Xiao, Yugui Yao
|
2023-02-07T14:05:41Z
|
http://arxiv.org/abs/2302.03479v2
|
# Optical bulk-boundary dichotomy in a quantum spin Hall insulator
###### Abstract
The bulk-boundary correspondence is a key concept in topological quantum materials. For instance, a quantum spin Hall insulator features a bulk insulating gap with gapless helical boundary states protected by the underlying \(Z_{2}\) topology. However, the bulk-boundary dichotomy and distinction are rarely explored in optical experiments, which can provide unique information about topological charge carriers beyond transport and electronic spectroscopy techniques. Here, we utilize mid-infrared absorption micro-spectroscopy and pump-probe micro-spectroscopy to elucidate the bulk-boundary optical responses of Bi\({}_{4}\)Br\({}_{4}\), a recently discovered room-temperature quantum spin Hall insulator. Benefiting from the low energy of infrared photons and the high spatial resolution, we unambiguously resolve a strong absorption from the boundary states while the bulk absorption is suppressed by its insulating gap. Moreover, the boundary absorption exhibits a strong polarization anisotropy, consistent with the one-dimensional nature of the topological boundary states. Our infrared pump-probe microscopy further measures a substantially increased carrier lifetime for the boundary states, which reaches one nanosecond scale. The nanosecond lifetime is about one to two orders longer than that of most topological materials and can be attributed to the linear dispersion nature of the helical boundary states. Our findings demonstrate the optical bulk-boundary
dichotomy in a topological material and provide a proof-of-principal methodology for studying topological optoelectronics.
**Keywords:** topological insulator; quantum spin Hall effect; Bi\({}_{4}\)Br\({}_{4}\); edge states; mid-infrared absorption micro-spectroscopy; pump-probe micro-spectroscopy
## 1 Introduction
The quantum spin Hall insulators have one-dimensional (1D) dissipationless conducting channels at the edge due to topological protection from backscattering, and are widely expected to be a promising platform for next-generation optoelectronics with high speed and high efficiency [1-10]. However, the critical scientific issue of the carrier dynamics of 1D edge states, which inevitably involves dynamic control of nontrivial carriers, has rarely been addressed. Though 1D edge states can be probed by transport and electronic spectroscopy techniques [11-20], most optical and photoelectric methods lack the spatial resolution required to distinguish the properties of the bulk and the boundaries, which hinders both research on and application of 1D topological edge states in the field of optoelectronics. One strategy to solve this problem is to stack quantum spin Hall insulator layers along the direction normal to the layer plane, as the existence of multiple 1D topological edge states can enhance the response signals. Meanwhile, this method also requires a weak coupling between the edge states localized in each layer.
Bismuth halogenides, composed of 1D chains (Fig. 1**a**), were predicted as topological materials with various fascinating properties [21-36]. In particular, the single-layer Bi\({}_{4}\)Br\({}_{4}\) was identified as a large gap quantum spin Hall insulator that hosts gapless helical edge states [21-24]. As the interlayer coupling of Bi\({}_{4}\)Br\({}_{4}\) is very weak, the topological characters of the edge states in multiple layers are essentially preserved [24]. Thus, the bulk Bi\({}_{4}\)Br\({}_{4}\) with multiple edge states can enhance the response signals of 1D helical edge states. For instance, the topological boundary states in rough Bi\({}_{4}\)Br\({}_{4}\) samples with abundant edges and hinges were probed by angle-resolved photoemission spectroscopy [33], a widely adopted technique with rather low spatial resolutions (ca. 1 \(\upmu\)m). As the spot sizes of ultraviolet and infrared lights can be tuned to the same order, it is technically possible to probe the 1D edge states by infrared techniques. Here, we utilize mid-infrared absorption micro-spectroscopy to elucidate the real-space optical response of Bi\({}_{4}\)Br\({}_{4}\). Moreover, the pump-probe micro-spectroscopy allows us to experimentally observe the carrier dynamic process of topological edge states compared with bulk states. Our work constitutes the first report on the topological edge states by optical methods, paving the way for the application of similar techniques.
## 2 Materials and methods
Single crystals of Bi\({}_{4}\)Br\({}_{4}\) were synthesized by the Bi self-flux method. The crystals up to 2\(\times\)0.5\(\times\)0.2 mm\({}^{3}\) in size were obtained in the sealed ampoule. XRD measurements were used to characterize the crystal structure of Bi\({}_{4}\)Br\({}_{4}\). The surface morphology was characterized by AFM (Bruker Multimode 8). The chemical composition of the Bi\({}_{4}\)Br\({}_{4}\) belts was analyzed by SEM with EDS (Zeiss Gemini 550). The chemical composition of the Bi4Br4 belt was also examined by XPS (FEI QUANTERA SXM). The optical image and infrared absorption
spectra were acquired using a Fourier transform infrared microscope (FTIR, HYPERION 3000 Microscopy, Bruker) with a pair of 15\(\times\) objectives (detection spectral range from 4000 cm\({}^{-1}\) to 900 cm\({}^{-1}\)). The carrier dynamics in the energy range around the band gap of Bi\({}_{4}\)Br\({}_{4}\) was investigated by ultrafast infrared pump-probe micro-spectroscopy performed with a femtosecond Ti:Sapphire regenerative amplifier (Spitfire Ace, 3.5 W, Spectra Physics). Detailed experimental measurements are described in the supplementary materials (online). All optical measurements were carried out at room temperature.
## 3 Results and discussion
We first use a Fourier transform infrared microscope to measure the infrared spectra of Bi\({}_{4}\)Br\({}_{4}\) ribbons with photon energies ranging from 0.50 eV to 0.11 eV at room temperature (Fig. 1**b**). The Bi\({}_{4}\)Br\({}_{4}\) ribbon can be easily obtained via mechanical exfoliation from a single crystal [37]. In the upper panel of Fig. 1**c**, a typical ribbon can be seen in optical microscope images, 0.8 \(\upmu\)m in thickness, 40 \(\upmu\)m in width, and more than 100 \(\upmu\)m in length. The spatial resolution of the infrared absorption maps is estimated to be 7-8 \(\upmu\)m. In the lower panels of Fig. 1**c**, we first obtain an absorption micro-spectroscopy mapping image with a photon energy of 0.24 eV and observe uniform absorption across the entire flake (more precisely, gradually decaying from bulk to edge because the laser is moving away from the sample). In sharp contrast, the same absorption mapping image obtained with a photon energy of 0.20 eV shows strong absorption only close to the edges, with the bulk signal being significantly suppressed. This observation directly demonstrates that edge and bulk have distinct dependences on photon energy.
In order to systematically understand the photon energy dependence, we respectively focus the laser on the bulk and the edge, and measure their absorption as a function of photon energy between 0.12 eV and 0.48 eV. In Fig. 1**d**, the absorption of the bulk exhibits a dramatic reduction when the photon energy is smaller than 0.24 eV, while the absorption of the edge remains relatively flat. This reduction of the bulk absorption can be attributed to the bandgap of the bulk electronic states of Bi\({}_{4}\)Br\({}_{4}\). We estimate a bandgap of 0.22 eV by using a classic Tauc plot of \((\alpha h\nu)^{2}\) vs. \(h\nu\) (Fig. S7 online) [38]. The optical band gap is in the range of the reported theoretical and experimental results. The calculated energy gap of the bulk electronic structure is around 0.145-0.3 eV [30, 33], while the tunneling differential conductance reveals an insulating gap of 0.26 eV [23] and angle-resolved photoemission spectroscopy measurements on the cleaved surface of Bi\({}_{4}\)Br\({}_{4}\) show a bandgap of 0.23-0.3 eV [33, 39]. The absorption of the edge persists to very low photon energies. Such robust edge absorption can be attributed to the gapless topological boundary states.
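As a side note, a Tauc-plot extraction of this kind can be sketched as follows (synthetic absorption data and fitting window are assumptions; only the \((\alpha h\nu)^{2}\) construction and the linear extrapolation follow the standard direct-gap procedure):

```
import numpy as np

hv = np.linspace(0.12, 0.48, 200)                        # photon energy grid (eV)
Eg_true = 0.22
# Toy direct-gap absorbance: alpha ~ sqrt(hv - Eg) / hv above the gap, zero below
alpha = np.sqrt(np.clip(hv - Eg_true, 0, None)) / hv

tauc = (alpha * hv) ** 2                                 # (alpha * hv)^2 is linear in hv for a direct gap
window = (hv > 0.26) & (hv < 0.40)                       # assumed linear fitting region above the gap
slope, intercept = np.polyfit(hv[window], tauc[window], 1)
Eg_est = -intercept / slope                              # x-intercept of the linear extrapolation
print(f"Estimated band gap: {Eg_est:.3f} eV")
```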
Moreover, due to the anisotropic structure of Bi\({}_{4}\)Br\({}_{4}\), the anisotropic optical transition matrix elements give rise to anisotropic infrared absorption. To shed light on the anisotropic infrared absorption of Bi\({}_{4}\)Br\({}_{4}\), we take absorption maps using linearly polarized light at nominally normal incidence on the top (_ab_-plane) of the Bi\({}_{4}\)Br\({}_{4}\) belt, with the polarization direction along the _a_- or _b_-axis and photon energies of 0.20 eV and 0.24 eV (Fig. 2 **a** and **b**). The absorbance decreases as the polarization varies from the **E**\(\parallel\)**b** direction to the **E**\(\parallel\)**a** direction, especially at the boundaries with the photon energy of 0.20
eV. In Fig. 2**c**, theoretical calculations of optical absorption of Bi\({}_{4}\)Br\({}_{4}\) belt indicate a strong anisotropic infrared absorption both at the belt boundary and bulk, in reasonable agreement with our experimental observation. Furthermore, we defined absorption anisotropy as \((\alpha_{E/b}-\alpha_{E/a})/(\alpha_{E/b}+\alpha_{E/a})\) and compare those values from belt boundary and bulk, respectively. As shown in Fig. 2**d**, the boundary has stronger absorption anisotropy than that in the bulk, especially with photon energy smaller than the theoretical bandgap.
After identifying the difference between bulk and boundary optical responses with diffraction-limited infrared microscopy, we further use infrared pump-probe techniques to explore their respective carrier dynamics. Figures 3**a** and **b** illustrate our infrared pump-probe micro-spectroscopy setup. The key techniques are to obtain a wide-energy-range probe light by forming a plasma filament (0.12-0.45 eV) and to detect the wide spectra using mercury-cadmium-telluride arrays. This powerful technique allows us to inspect the carrier dynamics of the bulk and boundary of the Bi\({}_{4}\)Br\({}_{4}\) belt in a multi-dimensional parameter space (20 \(\upmu\)m spatial resolution, a wide photon energy range of 0.12-0.45 eV, and a wide time range of 0.1 ps - 3000 ps).
In Fig. 3**c**, the pump photon energy is fixed at 0.17 eV to avoid probing interband transitions between the bulk states. The evolution of the photo-excited carriers is then monitored by recording the transmittance variation (\(\Delta\)T/T). It is clear that the negative signals are significant around the boundary and decay quite slowly for probe photon energies of 0.15-0.18 eV. Moreover, these negative signals can still be clearly observed after 1.5 ns of relaxation, manifesting an ultralong carrier lifetime. In Fig. 3**d**, the positive signals appear in the bulk and decay quickly within tens of picoseconds for probe photon energies of 0.15-0.18 eV. These results clearly demonstrate the different dynamics of the bulk and the boundary. Furthermore, the typical decay curves of \(\Delta\)T/T at the bulk and the boundary are collected and fitted with multiple-exponential functions (Fig. 3**e**). In the bulk, the positive signal of \(\Delta\)T/T pumped by 0.5 eV photons decays on a ps timescale. The relaxation is decomposed into two components: a fast one (\(\tau_{1}\) = 3-5 ps) and a slow one (\(\tau_{2}\) = 50-100 ps). In contrast to the bulk, the signal from the belt boundary has negative \(\Delta\)T/T with an ultralong component (\(\tau_{3}>1\) ns) of the carrier lifetime. To understand the distinct behavior of bulk and boundary, a schematic image of carrier excitation is shown in Fig. 3**f**. First, the probe photons with energy smaller than the bandgap can only be absorbed by the carriers in the boundary states or by the free carriers in the conduction band. At the belt boundary, the decreased carrier density in the boundary states reduces the infrared absorption after optical excitation and consequently leads to negative signals. This understanding is also consistent with the observation of strong infrared absorption at the belt boundaries with photon energy \(<\) 0.22 eV (the bandgap of Bi\({}_{4}\)Br\({}_{4}\)). In the bulk, more photo-excited carriers in the conduction band can enhance the absorption of probe photons, contributing to the positive signals.
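The multiple-exponential fitting mentioned above could be implemented roughly as follows (synthetic transient; the amplitudes and time constants are placeholders within the ranges quoted in the text):

```
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2, c):
    """Two exponential components plus a long-lived offset (>~1 ns within this window)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + c

t = np.linspace(0, 500, 400)                      # delay time in ps
data = biexp(t, 0.6, 4.0, 0.3, 80.0, 0.05) \
       + 0.005 * np.random.default_rng(3).standard_normal(t.size)

p0 = [0.5, 5.0, 0.3, 60.0, 0.0]                   # initial guesses
popt, _ = curve_fit(biexp, t, data, p0=p0)
print("tau1 = %.1f ps, tau2 = %.1f ps" % (popt[1], popt[3]))
```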
We systematically repeat the measurements at the boundary and the bulk with pump fluences varying from 2.1 \(\upmu\)J/cm\({}^{2}\) to 304 \(\upmu\)J/cm\({}^{2}\) (Fig. 4**a**). By fitting those decay curves with multiple-exponential functions, we can extract the decay times at both the boundary and the bulk, which show a strong dependence on the pump fluence (Fig. 4**b**). The
faster decay is observed with stronger pump fluences, which is caused by the enhanced electron-electron scattering or Auger recombination with more hot carriers [40; 41]. The longest carrier lifetime of 1.5 ns at room temperature is observed at the boundary with a low pump fluence of 2.1 \(\upmu\)J/cm\({}^{2}\). This nanosecond lifetime is about one to two orders of magnitude longer than that of the bulk and of most topological materials [40-47]. In Fig. 4**c**, the magnitude of the signal from the boundary exhibits a saturation behavior, while the magnitude of the signal in the bulk is proportional to the pump fluence. A fundamental model can be used to describe the fluence-dependent signals: \(\Delta\)T/T \(\propto\) E/(E+E\({}_{\rm s}\)), where E and E\({}_{\rm s}\) are the pump and saturation fluences, respectively [48]. The fitted curve indicates a saturation fluence of \(\sim\) 35 \(\upmu\)J/cm\({}^{2}\). Due to the limited density of edge states, the photons in the incident light are sufficient to excite most carriers of the edge states, leading to the saturation.
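Similarly, the saturation model \(\Delta\)T/T \(\propto\) E/(E+E\({}_{\rm s}\)) can be fitted to the fluence-dependent signal (a sketch with assumed amplitudes; the fluence grid loosely follows the measured range):

```
import numpy as np
from scipy.optimize import curve_fit

def saturation(E, A, Es):
    """Saturable response: delta T / T = A * E / (E + Es)."""
    return A * E / (E + Es)

E = np.array([2.1, 10, 30, 60, 120, 200, 304])      # pump fluence in uJ/cm^2 (illustrative grid)
signal = saturation(E, 1.0, 35.0) * (1 + 0.03 * np.random.default_rng(4).standard_normal(E.size))

popt, _ = curve_fit(saturation, E, signal, p0=[1.0, 20.0])
print("Saturation fluence Es ~ %.1f uJ/cm^2" % popt[1])
```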
The relaxation dynamics of the photoexcited carriers in the Bi\({}_{4}\)Br\({}_{4}\) belt are discussed in Fig. 4**d**. In the boundary states, the excited carriers in the helical states may relax via intralayer or interlayer scattering. In the intralayer channel, backscattering by non-magnetic disorder or by electron-phonon interaction, as well as direct recombination of electrons and holes, requires spin flips; this is incompatible with the helical nature of the boundary states and is thus strongly suppressed [49-51]. Therefore, the excited carriers are more likely to relax within one of the two branches via scattering with phonons. This scattering process suffers from a very limited scattering phase space (phonon energy \(<\) 20 meV, with very few final states available within the one-dimensional channels), leading to an ultralong relaxation time. In the interlayer channel, the edge states localized at different layers are weakly coupled or even decoupled when the edges of adjacent layers are not strictly aligned [21; 24]. The hinge states localized at different hinges are also decoupled. Thus, both the direct recombination of electrons and holes at different boundaries and the interlayer scattering are suppressed. Therefore, the ultralong carrier lifetime is more likely to arise from the helical nature of the one-dimensional topological boundary states of the Bi\({}_{4}\)Br\({}_{4}\) belt. Another possible mechanism is based on the Luttinger-liquid model. If photoexcitation converts the edge mode into a Luttinger liquid, owing to its one-dimensional character, the excited states can form robust, long-lived states. Given the weak coupling between the edge states and the bulk states, these stable states then relax slowly via scattering with the bulk states. In addition, at the belt center where the topological edge states are absent, the relaxation is similar to the typical carrier relaxation commonly observed in semiconductor materials [52]: in process one (\(\tau_{1}\) = 3-5 ps) the excited electrons relax to the band edge, and in process two (\(\tau_{2}\) = 50-100 ps) the relaxed electrons recombine with holes in the valence band (more details in supplementary Fig. 10a).
## 4 Conclusions
To conclude, we utilize mid-infrared absorption micro-spectroscopy and pump-probe micro-spectroscopy to elucidate the bulk-boundary optical responses of quantum spin Hall insulator Bi\({}_{4}\)Br\({}_{4}\). A strong absorption is resolved from the boundary states while the bulk absorption is suppressed by its insulating gap. Moreover, the boundary absorption exhibits a strong polarization anisotropy, consistent with the one-dimensional nature of the topological
boundary states. Infrared pump-probe microscopy further measures a substantially increased carrier lifetime for the boundary states, about two orders of magnitude longer than that of the bulk. Taken together, our work demonstrates the optical bulk-boundary dichotomy in the topological material Bi\({}_{4}\)Br\({}_{4}\).
## Conflict of interest
The authors declare that they have no conflict of interest.
## Acknowledgments
This work is funded by the National Natural Science Foundation of China (11734003, 62275016, 12274029 and 92163206), the National Key Research and Development Program of China (2020YFA0308800), Beijing Natural Science Foundation (Z210006, Z190006) and the Strategic Priority Research Program of Chinese Academy of Sciences (XDB30000000). We are grateful to the Instrument Analysis Center of Xi'an Jiaotong University for assistance with infrared absorption measurement and Analysis & Testing Center of Beijing Institute of Technology for assistance with SEM and XRD analyses. We acknowledge fruitful discussions with Xiang Li, Junxi Duan, Qingsheng Wang, Gang Wang, Jie Ma and Yuanchang Li.
## Author contributions
Y.G.Y., J.F.H. and W.D.X. supervised this project; P.C.M., H.L.C. and J.F.H. carried out infrared absorption measurements; D.Y.C., Y.K.L., J.C.Z., X.Z., Y.L.W., S.J., Y.X.W. and W.D.X. synthesized and characterized materials; M.Y.W., D.S.M., Z.M.Y., J.J.Z., C.C.L. and Y.G.Y. performed first-principles calculations; J.F.H., J.X.Y., Q.M., M.Z.H., M.Y.W., W.D.X. and Y.G.Y. wrote the paper; all authors discussed and analyzed the data.
|
2304.07591
|
Accessibility Metatesting: Comparing Nine Testing Tools
|
Automated web accessibility testing tools have been found complementary. The
implication: To catch as many issues as possible, use multiple tools. Doing
this efficiently entails integration costs. Is there a small set of tools that,
together, make additional tools redundant? I approach this problem by comparing
nine comprehensive accessibility testing tools that are amenable to
integration: alfa, axe-core, Continuum, Equal Access, HTML CodeSniffer, Nu Html
Checker, QualWeb, Tenon, and WAVE. I tested 121 web pages of interest to CVS
Health with these tools. Each tool only fractionally duplicated any other tool.
Each discovered numerous issue instances missed by all the others. Thus,
testing with all nine tools was substantially more informative than testing
with any subset.
|
Jonathan Robert Pool
|
2023-04-15T16:38:23Z
|
http://arxiv.org/abs/2304.07591v1
|
# Accessibility Metatesting
###### Abstract.
Automated web accessibility testing tools have been found complementary. The implication: To catch as many issues as possible, use multiple tools. Doing this efficiently entails integration costs. Is there a small set of tools that, together, make additional tools redundant? I approach this problem by comparing nine comprehensive accessibility testing tools that are amenable to integration: alfa, axe-core, Continuum, Equal Access, HTML CodeSniffer, Nu Html Checker, QualWeb, Tenon, and WAVE. I tested 121 web pages of interest to CVS Health with these tools. Each tool only fractionally duplicated any other tool. Each discovered numerous issue instances missed by all the others. Thus, testing with all nine tools was substantially more informative than testing with any subset.
web accessibility, accessibility testing, metatesting, test automation, test efficiency
|
2302.04316
|
The determinant of finite semigroups of the pseudovariety ECOM
|
The purpose of this paper is to compute the non-zero semigroup determinant of
the class of finite semigroups in which every two idempotents commute. This
class strictly contains the class of finite semigroups that have central
idempotents and the class of finite inverse semigroups. This computation holds
significance in the context of the extension of the MacWilliams theorem for
codes over semigroup algebras.
|
M. H. Shahzamanian
|
2023-02-08T20:14:53Z
|
http://arxiv.org/abs/2302.04316v2
|
# The determinant of finite semigroups of the pseudovariety ECOM
###### Abstract.
The purpose of this paper is to identify the non-zero semigroup determinant of the class of finite semigroups in which every two idempotents commute. This class strictly contains the class of finite semigroups with central idempotents and the class of finite inverse semigroups.
2010 Mathematics Subject Classification. Primary 20M25, 16L60, 16S36.
Keywords and phrases: Frobenius algebra, semigroup determinant, paratrophic determinant, semigroup algebra.
|
2308.11026
|
Harnessing The Collective Wisdom: Fusion Learning Using Decision
Sequences From Diverse Sources
|
Learning from the collective wisdom of crowds enhances the transparency of
scientific findings by incorporating diverse perspectives into the
decision-making process. Synthesizing such collective wisdom is related to the
statistical notion of fusion learning from multiple data sources or studies.
However, fusing inferences from diverse sources is challenging since
cross-source heterogeneity and potential data-sharing complicate statistical
inference. Moreover, studies may rely on disparate designs, employ widely
different modeling techniques for inferences, and prevailing data privacy norms
may forbid sharing even summary statistics across the studies for an overall
analysis. In this paper, we propose an Integrative Ranking and Thresholding
(IRT) framework for fusion learning in multiple testing. IRT operates under the
setting where from each study a triplet is available: the vector of binary
accept-reject decisions on the tested hypotheses, the study-specific False
Discovery Rate (FDR) level and the hypotheses tested by the study. Under this
setting, IRT constructs an aggregated, nonparametric, and discriminatory
measure of evidence against each null hypotheses, which facilitates ranking the
hypotheses in the order of their likelihood of being rejected. We show that IRT
guarantees an overall FDR control under arbitrary dependence between the
evidence measures as long as the studies control their respective FDR at the
desired levels. Furthermore, IRT synthesizes inferences from diverse studies
irrespective of the underlying multiple testing algorithms employed by them.
While the proofs of our theoretical statements are elementary, IRT is extremely
flexible, and a comprehensive numerical study demonstrates that it is a
powerful framework for pooling inferences.
|
Trambak Banerjee, Bowen Gang, Jianliang He
|
2023-08-21T20:30:35Z
|
http://arxiv.org/abs/2308.11026v1
|
# Harnessing The Collective Wisdom: Fusion Learning Using Decision Sequences From Diverse Sources
###### Abstract
Learning from the collective wisdom of crowds enhances the transparency of scientific findings by incorporating diverse perspectives into the decision-making process. Synthesizing such collective wisdom is related to the statistical notion of fusion learning from multiple data sources or studies. However, fusing inferences from diverse sources is challenging since cross-source heterogeneity and potential data-sharing complicate statistical inference. Moreover, studies may rely on disparate designs, employ widely different modeling techniques for inferences, and prevailing data privacy norms may forbid sharing even summary statistics across the studies for an overall analysis. In this paper, we propose an Integrative Ranking and Thresholding (IRT) framework for fusion learning in multiple testing. IRT operates under the setting where from each study a triplet is available: the vector of binary accept-reject decisions on the tested hypotheses, the study-specific False Discovery Rate (FDR) level and the hypotheses tested by the study. Under this setting, IRT constructs an aggregated, nonparametric, and discriminatory measure of evidence against each null hypotheses, which facilitates ranking the hypotheses in the order of their likelihood of being rejected. We show that IRT guarantees an overall FDR control under arbitrary dependence between the evidence measures as long as the studies control their respective FDR at the desired levels. Furthermore, IRT synthesizes inferences from diverse studies irrespective of the underlying multiple testing algorithms employed by them. While the proofs of our theoretical statements are elementary, IRT is extremely flexible, and a comprehensive numerical study demonstrates that it is a powerful framework for pooling inferences.
_Keywords:_ Crowdsourcing, E-values, False Discovery Rate, Fusion learning, Integrative inference, Meta-analysis
## 1 Introduction
Learning from the wisdom of crowds is the process of synthesizing the collective wisdom of disparate participants for performing related tasks, and has evolved into an indispensable
tool for modern scientific inquiry. This process is also known as 'Crowdsourcing', which has generated tremendous societal benefits through novel drug discovery (Chodera et al., 2020), new machine learning algorithms (Sun et al., 2022), product design (Jiao et al., 2021) and crime control (Logan, 2020), to name a few. Harnessing such collective wisdom allows incorporating diverse opinions as well as expertise into the decision-making process and ultimately broadens the transparency of scientific findings (Surowieck, 2005).
Synthesizing the collective wisdom of crowds is related to the statistical notion of fusion learning from multiple data sources or studies (Liu et al., 2022; Guo et al., 2023). Over the past decade, techniques for such fusion learning have found widespread use for incorporating heterogeneity into the underlying analysis and increasing the power of statistical inference. For instance, in recent years the growing volume of gene expression data available in public data repositories, such as the Gene Expression Omnibus (GEO) (Barrett et al., 2012) and Array Express (Rustici et al., 2012), has facilitated the synthesis and reuse of such data for meta-analysis across multiple studies. Biologists can use OMiCC (Shah et al., 2016), a crowdsourcing web platform, for generating and testing hypotheses by integrating data from diverse studies. Relatedly, sPLINK (Nasirigerdeh et al., 2022), a hybrid federated tool, allows conducting privacy-aware genome-wide association studies (GWAS) on distributed datasets. In distributed multisensor systems, such as wireless sensor networks, each node in the network evaluates its designated region in space by conducting simultaneous tests of several hypotheses. The decision sequence is then transmitted to a fusion center for processing. Sensor fusion techniques are used to synthesize the data from multiple sensors, which provides improved accuracy and statistically more powerful inference than that achieved by a single sensor alone (Hall and Llinas, 1997; Shen and Wang, 2001; Jamoos and Abuawwad, 2020).
However, fusing inferences from diverse sources1 is challenging for several reasons. _First_, cross-source heterogeneity and potential data-sharing complicate statistical inference. For instance, in microarray meta-analysis, an integrative false discovery rate (FDR) analysis of the multiple hypotheses often require making additional assumptions such as study independence and strong modeling assumptions to capture between-study heterogeneity. _Second_, prevailing data privacy norms may forbid sharing even summary statistics across the studies for an overall analysis. This is particularly relevant in the context of genomic data where NIH (National Institutes of Health) restricts the availability of dbGaP (database of Genotypes and Phenotypes) data to approved users (Couzin, 2008; Zerhouni and Nabel, 2008). Also, in wireless sensor networks, communication limits may restrict sharing all information from the nodes to the fusion center. _Third_, in the case of meta-analysis where fusion learning is widespread, different studies may rely on disparate designs and widely
different modeling techniques for individual inferences. Besides introducing algorithmic randomness into the individual inferences, it is also unclear how such inferences can be integrated and subsequently interpreted. _Fourth_, significant statistical and computational experience is often required for integrating the individual inferences. This may preclude investigators without substantial statistical training from performing such analyses.
In this paper, we develop a framework for fusion learning in multiple testing that seeks to overcome the aforementioned challenges of integrative inference across multiple data sources. Our framework, which we call IRT for Integrative Ranking and Thresholding, operates under the setting where from each study a triplet is available: the study-specific vector of binary accept / reject decisions on the tested hypotheses, the FDR level of the study and the hypotheses tested by the study. Under this setting, the IRT framework consists of two key steps: in step (1) IRT utilizes the binary decisions from each study to construct nonparametric evidence indices which serve as measures of evidence against the corresponding null hypotheses, and in step (2) the evidence indices across the studies are fused into a discriminatory measure that ranks the hypotheses in the order of their likelihood of being rejected. The proposed fusion learning framework has several distinct advantages. _First_, the IRT framework guarantees an overall FDR control under arbitrary dependence between the evidence indices as long as the individual studies control the FDR at their desired levels. _Second_, IRT is extremely simple to implement and is broadly applicable without any model assumptions. This particular aspect is especially appealing because IRT synthesizes inferences from diverse studies irrespective of the underlying multiple testing algorithms employed by the studies. _Third_, the evidence indices in our framework are closely related to "\(e-\)values" (see Shafer (2021); Vovk and Wang (2021); Grunwald et al. (2023); Ramdas et al. (2020); Wasserman et al. (2020) for an incomplete list) for hypothesis testing. Besides being a natural counterpart to the popular \(p-\)values in statistical inference, \(e-\)values are relatively more robust to model misspecification and particularly to dependence between the \(p-\)values. In our numerical experiments, we find that when the \(p-\)values are exchangeable IRT is substantially more powerful than methods that rely on a conversion from \(p-\)values to \(e-\)values for pooling inferences. For almost a century, \(e-\)values have been used in Statistics, often disguised as likelihood ratios, Bayes factors, stopped nonnegative supermartingales. See Ramdas et al. (2022) for a survey on additional settings, such as betting scores, game-theoretic statistics and safe anytime-valid inference, where \(e-\)values arise naturally. _Finally_, to the best of our knowledge, IRT is the first fusion learning framework for multiple testing that relies on the binary decision sequences which are relatively more private than summary statistics. We rely on a novel construction of \(e-\)values from the accept-reject decisions which are also related to the all-or-nothing bet of Shafer (2021) for testing a single null hypothesis.
### Outline of the proposed approach
Figure 1 provides a pictorial representation of our setup and an overview of the IRT framework. A precise formulation and formal mathematical statements are deferred until Section 2. Panel (1) of Figure 1 represents a scenario where \(d\) studies are testing a study-specific subset of \(m\) null hypotheses. Here the black solid squares represent hypotheses that are not tested by the individual studies. In Panel (2), these individual studies use different multiple testing algorithms that are designed to control their respective FDR at level \(\alpha_{j},\ j=1,\ldots,d\). The 'X'-marked squares in Panel (3) depict the rejected hypotheses and represent the data that is available to the IRT framework for synthesis. It includes the binary decisions \(\delta_{ij}\) from each study, the study-specific FDR level \(\alpha_{j}\) and the subset of \(m\) hypotheses evaluated by each study. Panels (4)-(6) illustrate the IRT framework. Particularly, in Panel (4), the IRT framework converts each binary decision \(\delta_{ij}\) into a non-parametric evidence index such that a higher index, displayed in a darker shade of red, represents a stronger conviction against that study-specific null hypothesis. In Panel (5), the study-specific evidence indices are fused in a manner such that the aggregated indices rank the \(m\) hypotheses and, as demonstrated in Panel (6), the rankings can be used to determine a cutoff for valid FDR control at a pre-determined level \(\alpha\).
Figure 1: A pictorial representation of the IRT framework. In panel (1) \(d\) studies are testing a study-specific subset of \(m\) null hypotheses. Here the black solid squares represent hypotheses that are not tested by the individual studies. In panel (2), these studies use different multiple testing algorithms that are designed to control their respective FDR at level \(\alpha_{j},\ j=1,\ldots,d\). The ‘X’-marked squares in panel (3) depict the rejected hypotheses and represent the data that is available to the IRT framework. It includes the binary decision vectors \(\delta_{ij}\) from each study, the study-specific FDR level \(\alpha_{j}\) and the subset of \(m\) hypotheses evaluated by each study. In panel (4), the IRT framework converts each binary decision into a nonparametric evidence index such that a higher index, displayed in a darker shade of red, represents a larger likelihood for rejecting that study-specific null hypothesis. In panel (5), the study-specific evidence indices are fused in a manner such that the aggregated indices rank the \(m\) hypotheses and, as demonstrated in panel (6), the rankings can be used to determine a cutoff for valid FDR control at a pre-determined level \(\alpha\).
### Related literature
The IRT framework is related to multiple strands of literature, which include FDR control using \(e-\)values, integrative multiple testing under data privacy, and algorithmic derandomization. Next, we discuss how IRT differs from these existing bodies of work.
Recently, there has been a proliferation of interest in developing methods for simultaneous hypotheses testing using \(e-\)values. See, for instance, Wang and Ramdas (2022); Ignatiadis et al. (2022); Chi et al. (2022); Vovk and Wang (2023); Xu and Ramdas (2023) and the references therein. In these works, the \(e-\)values are typically constructed from either \(p-\)values or likelihood ratios. In contrast, the generalized \(e-\)values (Wang and Ramdas, 2022; Ren and Barber, 2022) in our setting arise from the accept / reject decisions of the corresponding null hypotheses and are constructed in a nonparametric fashion.
The literature on the evolving field of integrative multiple testing under data-privacy is also related to our work. For instance, under the restricted data-sharing mechanism of Wolfson et al. (2010), Liu et al. (2021), henceforth Liu21, propose an integrative high-dimensional multiple testing framework with asymptotic FDR control. Our work differs from Liu21 on two main aspects. First, for our setting we consider a relatively more stringent data-privacy regime where the studies are only allowed to share their decision sequences, the FDR level and the hypotheses tested by them. Second, the integrative FDR control offered by IRT is non-asymptotic if the individual studies guarantee a non-asymptotic control of their respective FDR levels (see Section 4.3). This aspect is particularly appealing when the underlying multiple testing problem is small-scale.
Numerous methods have been proposed for mitigating algorithmic randomness in recent years. For instance, Bashari et al. (2023) propose a method to aggregate conformal tests for outliers obtained with repeated splits of the same data set, while controlling the FDR. In the context of high dimensional variable selection, Ren and Barber (2022) develop
a methodology for derandomizing model-X knockoffs (Barber and Candes, 2015; Candes et al., 2018) with provable FDR control while Dai et al. (2022, 2023) develop a multiple data-splitting method with asymptotic FDR control to stabilize variable selection results. The IRT framework can be similarly viewed as a method for mitigating algorithmic randomness that may potentially arise from the use of myriad multiples testing procedures for FDR control by the \(d\) studies. Crucially, however, our framework does not rely on hard-to-verify assumptions for FDR control, such as Dai et al. (2022, 2023), and in contrast to Ren and Barber (2022); Bashari et al. (2023), IRT only requires access to the binary decision vector of each study for integrative FDR control.
### Organization
The article is laid out as follows: Section 2 presents our formal problem statement. The IRT framework is introduced in Section 3 and its operational characteristics are discussed in Section 4. A real data illustration is presented in Section 5 while Section 6 reports the empirical performance of IRT on synthetic data. The article concludes with a summary in Section 7.
## 2 Problem statement
Throughout the paper, we write \(\mathcal{M}=\{1,\ldots,m\}\) and consider testing \(m\) null hypotheses \(H_{01},\ldots,H_{0m}\). Denote \(\mathcal{H}_{0}=\{i:H_{0i}\text{ is true}\}\) and \(\mathcal{H}_{1}=\mathcal{M}\setminus\mathcal{H}_{0}\), respectively, as the sets of true null and non-null hypotheses. Let \(\mathbb{I}(\cdot)\) denote the indicator function that returns \(1\) if its condition is true and \(0\) otherwise, and denote \(\|\boldsymbol{a}\|_{p}\) as the \(l_{p}\)-norm of the vector \(\boldsymbol{a}\). In the sequel, \(a\lor b\) will denote \(\max(a,b)\) for two real numbers \(a\) and \(b\).
We consider a setting where the \(\mathcal{M}\) hypotheses are tested by \(d\) studies with study \(j\) testing \(\mathcal{M}_{j}\subseteq\mathcal{M}\) hypotheses. Here \(\cup_{j=1}^{d}\mathcal{M}_{j}=\mathcal{M}\) and \(|\mathcal{M}_{j}|=m_{j}\) so that \(\sum_{j=1}^{d}m_{j}\geq m\). Let \(\theta_{i}=\mathbb{I}(H_{0i}\text{ is false})\) be an indicator function that gives the true state of the \(i\)th testing problem and denote \(\delta_{ij}\in\{0,1\}\) as the decision that study \(j\) makes about hypothesis test \(i\in\mathcal{M}_{j}\), with \(\delta_{ij}=1\) being a decision to reject \(H_{0i}\). Denote the vector of all \(m_{j}\) decisions \(\boldsymbol{\delta}_{j}=(\delta_{1j},\cdots,\delta_{m_{j}j})\in\{0,1\}^{m_{j}}\). A selection error, or false positive, occurs if study \(j\) asserts that \(H_{0i},\ i\in\mathcal{M}_{j}\), is false when it actually is not. In multiple testing problems, such false positive decisions are inevitable if we wish to discover interesting effects with a reasonable power. Instead of aiming to avoid any false positives, a practical goal is to keep the false discovery rate (FDR) (Benjamini and Hochberg, 1995) small, which is the
expected proportion of false positives among all selections,
\[\text{FDR}(\mathbf{\delta}_{j})=\mathbb{E}\left[\text{FDP}(\mathbf{\delta}_{j})\right]\text { where }\text{FDP}(\mathbf{\delta}_{j})=\frac{\sum_{i\in\mathcal{M}_{j}}(1-\theta_{i}) \delta_{ij}}{\sum_{i\in\mathcal{M}_{j}}\delta_{ij}\lor 1}.\]
The power of a testing procedure is measured by the expected number of true positives (ETP) where,
\[\text{ETP}(\mathbf{\delta}_{j})=\mathbb{E}\Big{(}\sum_{i\in\mathcal{M}_{j}}\theta _{i}\delta_{ij}\Big{)}=\mathbb{E}\Big{(}\sum_{i\in\mathcal{M}_{j}}\mathbb{I}( H_{0i}\text{ is false})\delta_{ij}\Big{)}.\]
Hence, the multiple testing problem for study \(j\) can be formulated as
\[\text{maximize}_{\mathbf{\delta}_{j}}\text{ ETP}(\mathbf{\delta}_{j})\text{ subject to }\text{FDR}(\mathbf{\delta}_{j})\leq\alpha_{j},\]
where \(\alpha_{j}\in(0,1)\) is a pre-specified cap on the maximum acceptable FDR for the \(j^{th}\) study.
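For concreteness, the following sketch computes the false discovery proportion and the number of true positives for a single decision vector; the function names and the toy inputs are illustrative only.

```python
import numpy as np

def fdp(theta, delta):
    """False discovery proportion of a decision vector delta
    given the true states theta (1 = non-null)."""
    delta = np.asarray(delta)
    theta = np.asarray(theta)
    return ((1 - theta) * delta).sum() / max(delta.sum(), 1)

def tp(theta, delta):
    """Number of true positives (the quantity averaged in the ETP)."""
    return int((np.asarray(theta) * np.asarray(delta)).sum())

# Toy example: 6 hypotheses, the last three are non-null.
theta = [0, 0, 0, 1, 1, 1]
delta = [1, 0, 0, 1, 1, 0]   # one false and two true rejections
print(fdp(theta, delta))      # 1/3
print(tp(theta, delta))       # 2
```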
Our goal is to conduct inference for \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{m})\) by fusing the evidence available in the binary decision sequences \(\mathbf{\delta}_{j}\) provided by the \(d\) studies. To do that we develop an integrative ranking and thresholding procedure that operates on the triplet \(\mathcal{D}_{j}=\{\mathbf{\delta}_{j},\alpha_{j},\mathcal{M}_{j}\}\) available from each study and provably controls the FDR within a pre-determined level \(\alpha\in(0,1)\).
## 3 Integrative ranking and thresholding using binary decision sequences
In this section we present IRT, an Integrative Ranking and Thresholding framework that involves three steps. In Step 1, IRT utilizes the binary decision sequence \(\mathbf{\delta}_{j}\) from study \(j\) to construct a measure of evidence against the null hypotheses \(\mathcal{M}_{j}\). In Step 2, this evidence is aggregated into a discriminatory measure such that for each null hypothesis \(H_{0i}\), a large aggregated evidence implies a higher likelihood of rejecting \(H_{0i}\). In the final Step 3, the rankings provided by the aggregated evidence are used to determine a cutoff for FDR control. In what follows, we describe each of these steps in detail while Algorithm 1 summarizes the discussion below.
### Step 1: Evidence construction
IRT uses the information in \(\mathcal{D}_{j}\) to construct an evidence index \(e_{ij}\) as follows:
\[e_{ij}=w_{j}\frac{\delta_{ij}}{\|\mathbf{\delta}_{j}\|_{0}\lor 1},\ i\in\mathcal{M} _{j}, \tag{1}\]
where the evidence weights \(w_{j}=m_{j}/\alpha_{j}\). The evidence weights in Equation (1) capture the relative importance of a rejection across the \(d\) studies and, hence, play a key role in differentiating among them. For study \(j\), the evidence weight is distributed evenly across the rejected hypotheses. Moreover, this weight is higher if study \(j\) has tested more hypotheses (larger \(m_{j}\)) and is more conservative (smaller \(\alpha_{j}\)). This is in perfect accordance with our intuition since the evidence needed to reject a hypothesis increases if the number of hypotheses tested increases or the target FDR level decreases. For example, if only one hypothesis is tested then rejecting that null hypothesis when the \(p-\)value is below \(\alpha\) guarantees that the FDR is controlled at level \(\alpha\). However, if more than one hypothesis is tested then rejecting null hypotheses with \(p-\)value at most \(\alpha\) does not guarantee FDR control at level \(\alpha\).
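A minimal sketch of the evidence-index construction in Equation (1) is given below; the function name is hypothetical and the decision vector is a toy example.

```python
import numpy as np

def evidence_indices(delta_j, alpha_j):
    """Evidence indices e_ij = w_j * delta_ij / max(||delta_j||_0, 1)
    with weight w_j = m_j / alpha_j, as in Equation (1)."""
    delta_j = np.asarray(delta_j, dtype=float)
    m_j = delta_j.size
    w_j = m_j / alpha_j
    return w_j * delta_j / max(delta_j.sum(), 1.0)

# A study testing 5 hypotheses at FDR level 0.05 and rejecting two of them:
print(evidence_indices([1, 0, 1, 0, 0], alpha_j=0.05))
# -> [50., 0., 50., 0., 0.]  (w_j = 5 / 0.05 = 100, split over 2 rejections)
```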
The evidence indices \(\mathbf{e}_{j}=\{e_{ij}:i\in\mathcal{M}_{j}\}\) in Equation (1) are related to \(e-\)values for hypothesis testing, which serve as a natural counterpart to the widely adopted \(p\)-values. A non-negative random variable \(e\) is an "\(e-\)value"2 if \(\mathbb{E}[e]\leq 1\) under the null hypothesis. A large \(e-\)value provides evidence against the null hypothesis. See Vovk and Wang (2021) for a background on \(e-\)values for hypothesis testing. Recently, Wang and Ramdas (2022); Ren and Barber (2022) introduce the concept of generalized \(e-\)values which are defined as follows:
Footnote 2: We will use the notation ‘e’ to denote both the random variable and its realized value.
**Definition 1** (generalized \(e-\)values).: _Let \(\mathbf{e}=\{e_{1},e_{2},\ldots,e_{m}\}\) be a collection of random variables associated with the null hypotheses \(H_{01},H_{02},\ldots,H_{0m}\). Then \(\mathbf{e}\) is a set of generalized \(e-\)values if \(\sum_{i\in\mathcal{H}_{0}}\mathbb{E}(e_{i})\leq m\)._
Theorem 1 establishes that the evidence indices in Equation (1) are generalized \(e-\)values.
**Theorem 1**.: _Suppose the \(j^{\text{th}}\) study controls FDR at level \(\alpha_{j}\). Then \(\mathbf{e}_{j}\) are generalized \(e-\)values associated with \(\mathcal{H}_{0j}=\{H_{0i}:i\in\mathcal{M}_{j}\}\)._
Proof.: We have
\[\sum_{i\in\mathcal{H}_{0j}}\mathbb{E}(e_{ij}) =\frac{m_{j}}{\alpha_{j}}\mathbb{E}\Big{(}\frac{\sum_{i\in \mathcal{H}_{0j}}\delta_{ij}}{\|\mathbf{\delta}_{j}\|_{0}\lor 1}\Big{)}\] \[=\frac{m_{j}}{\alpha_{j}}\text{FDR}(\mathbf{\delta}_{j})\leq m_{j},\]
where the last inequality follows from the fact that study \(j\) controls FDR at level \(\alpha_{j}\).
Next, we discuss Step 1 in light of an important special case: testing a single hypothesis. Suppose \(m=d=1\) and we are testing \(H_{01}\). Then, in this setting the most
powerful testing procedure is the one that rejects \(H_{01}\) when the underlying \(p-\)value, say \(p\), is at most \(\alpha_{1}\), i.e \(\delta_{11}=\mathbb{I}\{p\leq\alpha_{1}\}\). The corresponding evidence index,
\[e_{11} = (1/\alpha_{1})\mathbb{I}\{\delta_{11}=1\}, \tag{2}\]
is the most powerful \(e-\)value that is equivalent to this testing procedure at threshold \(1/\alpha_{1}\). The evidence index in Equation (2) is also known as the all-or-nothing bet (Shafer, 2021) against \(H_{01}\) and provides another intuitive interpretation of the evidence indices in Equation (1) as follows: if the \(j^{th}\) study controls FDR at level \(\alpha_{j}\) then \(\mathbf{e}_{j}\) are scaled all-or-nothing bets against the null hypotheses in \(\mathcal{M}_{j}\) where the scaling \(m_{j}/(\|\mathbf{\delta}_{j}\|_{0}\lor 1)\) ensures that if all the null hypotheses corresponding to the non-zero bets (\(e_{ij}\neq 0\)) in \(\mathcal{M}_{j}\) are true then the probability that the sum of those non-zero bets exceed \(m_{j}/\alpha_{j}\) is at most \(\alpha_{j}\).
### Step 2: Evidence aggregation
Denote \(n_{i}=\sum_{j=1}^{d}\mathbb{I}\{i\in\mathcal{M}_{j}\}\) as the number of times hypothesis \(H_{0i}\) is tested by the \(d\) studies and let \(n=\max\{n_{1},\ldots,n_{m}\}\geq 1\). IRT aggregates the evidence indices \(\mathbf{e}_{j}\) across the studies as follows:
\[e_{i}^{\texttt{agg}}=\frac{1}{n}\sum_{j=1}^{d}e_{ij}\mathbb{I}\{i\in\mathcal{ M}_{j}\}. \tag{3}\]
In Equation (3), \(e_{i}^{\texttt{agg}}\) represents the aggregated evidence across all studies that test hypothesis \(H_{0i}\). When each study tests all the \(m\) hypotheses, i.e \(n_{i}=n=d\), then \(e_{i}^{\texttt{agg}}\) is the arithmetic mean of the \(d\) evidence indices corresponding to hypothesis \(i\), which dominates any symmetric aggregation function by Proposition 3.1 of Vovk and Wang (2021). However, when \(n_{i}\) are different, the aggregation scheme in Equation (3) is a natural counterpart to the arithmetic mean of the \(d\) evidence indices. Furthermore, in this setting Theorem 2 establishes that \(\mathbf{e^{\texttt{agg}}}=\{e_{1}^{\texttt{agg}},\ldots,e_{m}^{\texttt{agg}}\}\) are generalized \(e-\)values associated with \(\mathcal{H}_{0}\).
**Theorem 2**.: _Suppose study \(j\)'s testing procedure controls FDR at level \(\alpha_{j}\). Then \(\mathbf{e^{\texttt{agg}}}\) are generalized \(e-\)values associated with \(\mathcal{H}_{0}\)._
Proof.: We have,
\[\sum_{i\in\mathcal{H}_{0}}\mathbb{E}(e_{i}^{\texttt{agg}}) =\frac{1}{n}\sum_{i\in\mathcal{H}_{0}}\sum_{j=1}^{d}\frac{m_{j}}{ \alpha_{j}}\mathbb{E}\Big{(}\frac{\delta_{ij}}{\|\mathbf{\delta}_{j}\|_{0}\lor 1} \Big{)}\mathbb{I}\{i\in\mathcal{M}_{j}\}\] \[=\frac{1}{n}\sum_{j=1}^{d}\frac{m_{j}}{\alpha_{j}}\mathbb{E}\Big{(} \frac{\sum_{i\in\mathcal{H}_{0j}}\delta_{ij}}{\|\mathbf{\delta}_{j}\|_{0}\lor 1} \Big{)}\] \[\leq\frac{1}{n}\sum_{j=1}^{d}m_{j}. \tag{4}\]
But \(\sum_{j=1}^{d}m_{j}=\sum_{j=1}^{d}\sum_{i=1}^{m}\mathbb{I}(i\in\mathcal{M}_{j})= \sum_{i=1}^{m}n_{i}\). Substituting this in Equation (4) and noting that \(\sum_{i=1}^{m}n_{i}\leq mn\) establishes that \(\sum_{i\in\mathcal{H}_{0}}\mathbb{E}(e_{i}^{\texttt{agg}})\leq m\), which completes the proof.
In Section 4.1 we discuss an alternative evidence aggregation scheme where \(n\) is replaced by \(n_{i}\) in Equation (3) and \(e_{i}^{\texttt{agg}}\) continue to be generalized \(e-\)values under some additional assumptions on the data generating process for each study.
### Step 3: FDR control at level \(\alpha\)
In the context of multiple testing with \(e-\)values, Wang and Ramdas (2022) proposed the \(e-\)BH procedure that controls the FDR at level \(\alpha\) even under unknown arbitrary dependence between the \(e-\)values. The \(e-\)BH procedure is related to the well-known Benjamini-Hochberg (BH) procedure (Benjamini and Hochberg, 1995) and can be summarized as follows: let \(e_{i}\) be a generalized \(e-\)value associated with the null hypothesis \(H_{0i},\ i=1,\ldots,m\). Denote \(e_{(1)}\geq\ldots\geq e_{(m)}\) as the ordered \(e-\)values from largest to smallest. The rejection set under the \(e-\)BH procedure is given by \(\{i:e_{(i)}\geq m/(\alpha k_{\alpha})\}\) where
\[k_{\alpha}=\max\Big{(}i\in\mathcal{M}:e_{(i)}\geq\frac{m}{i\alpha}\Big{)}.\]
In step 3, IRT uses \(\boldsymbol{e^{\texttt{agg}}}\) as input for the \(e-\)BH procedure to get the final rejection set and, in conjunction with Theorem 2, the IRT framework controls FDR at level \(\alpha\).
We summarize the aforementioned three steps in Algorithm 1. Readers interested in the numerical performance of IRT may skip to sections 5 and 6 without any significant loss of continuity.
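The sketch below is one possible rendering of the three steps (it is not the authors' Algorithm 1 verbatim): it aggregates the study-specific evidence indices as in Equation (3) and then applies the \(e-\)BH procedure. It reuses the hypothetical `evidence_indices` helper from the earlier sketch; all function names are illustrative.

```python
import numpy as np

def ebh(e_values, alpha):
    """e-BH procedure: reject the k_alpha hypotheses with the largest e-values,
    where k_alpha = max{i : e_(i) >= m / (i * alpha)}."""
    e = np.asarray(e_values, dtype=float)
    m = e.size
    order = np.argsort(-e)                              # decreasing e-values
    thresholds = m / (alpha * np.arange(1, m + 1))
    passing = np.nonzero(e[order] >= thresholds)[0]
    delta = np.zeros(m, dtype=int)
    if passing.size > 0:
        k_alpha = passing.max() + 1
        delta[order[:k_alpha]] = 1
    return delta

def irt(studies, m, alpha):
    """studies: list of triplets D_j = (tested_indices, delta_j, alpha_j).
    Returns the integrated 0/1 decision vector over the m hypotheses."""
    n_i = np.zeros(m)
    e_agg = np.zeros(m)
    for tested, delta_j, alpha_j in studies:
        tested = np.asarray(tested)
        n_i[tested] += 1
        e_agg[tested] += evidence_indices(delta_j, alpha_j)  # Step 1 (sketch above)
    e_agg /= max(n_i.max(), 1.0)                              # Step 2, Equation (3)
    return ebh(e_agg, alpha)                                  # Step 3
```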
## 4 Discussion
In this section, we make several remarks related to the operational characteristics of the IRT framework.
### Alternative evidence aggregation scheme
Suppose \(\theta_{i}\) are random variables with an exchangeable joint distribution and conditional on \(\theta_{i}\), the summary statistics \(X_{ij}\) for study \(j\) are generated according to the following model:
\[X_{ij}\ \mid\ \theta_{i}\stackrel{{ ind.}}{{\sim}}(1-\theta_{i})f _{0j}+\theta_{i}f_{1j}, \tag{6}\]
where \(f_{0j},f_{1j}\) represent, respectively, the null and non-null densities of \(X_{ij}\). Under this setup a new aggregation scheme, analogous to Equation (3), can be defined as follows:
\[e_{i}^{\tt agg*}=\sum_{j=1}^{d}\frac{1}{n_{i}}e_{ij}\mathbb{I}\{i\in\mathcal{M}_ {j}\}. \tag{7}\]
Theorem 3 establishes that \(\mathbf{e^{\tt agg*}}=\{e_{1}^{\tt agg*},\ldots,e_{m}^{\tt agg*}\}\) continue to be generalized \(e-\)values.
**Theorem 3**.: _Suppose \(\theta_{i}\) are random and their joint distribution exchangeable. Assume study \(j\)'s testing procedure controls FDR at level \(\alpha_{j}\). Then \(\mathbf{e^{\tt agg*}}\) in Equation (7) are generalized \(e-\)values associated with \(\mathcal{H}_{0}\)._
Proof.: We have
\[\sum_{i=1}^{m_{j}}\mathbb{E}\{e_{ij}\mathbb{I}(\theta_{i}=0)\}=\sum_{i=1}^{m_{ j}}\mathbb{E}\{e_{ij}|\theta_{i}=0\}\mathbb{P}(\theta_{i}=0).\]
By exchangeability, \(\mathbb{P}(\theta_{i}=0)\) is independent of \(i\), and by Equation (6) \(\mathbb{E}\{e_{ij}|\theta_{i}=0\}\) is also independent of \(i\). Since \(\sum_{i=1}^{m_{j}}\mathbb{E}\{e_{ij}\mathbb{I}(\theta_{i}=0)\}\leq m_{j}\) it follows that \(\mathbb{E}\{e_{ij}\mathbb{I}(\theta_{i}=0)\}\leq 1\).
Hence,
\[\sum_{i=1}^{m}\mathbb{E}\{e_{i}^{\texttt{agg}*}\mathbb{I}(\theta_{i }=0)\} =\sum_{i=1}^{m}\sum_{j=1}^{d}\frac{1}{n_{i}}\mathbb{E}\{e_{ij} \mathbb{I}(i\in\mathcal{M}_{j})\mathbb{I}(\theta_{i}=0)\}\] \[\leq\sum_{i=1}^{m}1=m.\]
The advantage of \(\boldsymbol{e^{\texttt{agg}*}}\) over \(\boldsymbol{e^{\texttt{agg}}}\) is that the former provide a relatively better ranking of the hypotheses, which can lead to an improved power at the same FDR level \(\alpha\). Our numerical experiments in Section 6 demonstrate that this is indeed true when the data generating mechanism obeys Equation (6) and the conditions of Theorem 3.
### The choice of \(\alpha\)
The IRT procedure guarantees an overall FDR control at level \(\alpha\) as long as the \(d\) studies control FDR at their respective levels \(\alpha_{j}\). However, the choice of \(\alpha\) bears important consideration as far as the power of the proposed procedure is concerned. For instance, with a relatively smaller value of \(\alpha\), IRT may fail to recover discoveries identified by studies with a smaller weight \(w_{j}\). We give two examples to illustrate the impact of \(\alpha\) on power.
**Example 1**.: _Suppose there are two studies, each testing half of the \(m=2k\) null hypotheses. So, \(\mathcal{M}_{1}\cap\mathcal{M}_{2}=\emptyset\), \(\mathcal{M}_{1}\cup\mathcal{M}_{2}=\mathcal{M}\), and \(m_{1}=m_{2}=k\). Further, assume that \(\alpha_{1}=\alpha_{2}\) and \(\|\boldsymbol{\delta}_{1}\|_{0}=\|\boldsymbol{\delta}_{2}\|_{0}\). In this setting, it is tempting to set \(\alpha=\alpha_{1}\) and hope that IRT will reject all the hypotheses rejected by either study. However, IRT will not reject any hypotheses under this choice of \(\alpha\). To see this, note that both equations (3) and (7) suggest \(e_{i}^{\texttt{agg}}=e_{i}^{\texttt{agg}*}=k/(\alpha_{1}\|\boldsymbol{ \delta}_{1}\|_{0})\) if \(H_{0i}\) is rejected by either study, and \(e_{i}^{\texttt{agg}}=e_{i}^{\texttt{agg}*}=0\) otherwise. For the \(e-\)BH procedure to reject \(H_{0i}\) with \(e_{i}^{\texttt{agg}}\neq 0\) we need_
\[e_{i}^{\texttt{agg}}=\frac{k}{\alpha_{1}\|\boldsymbol{\delta}_{1}\|_{0}}\geq \frac{m}{2\|\boldsymbol{\delta}_{1}\|_{0}\alpha},\]
_which can be achieved by setting \(\alpha\geq 2\alpha_{1}\), a choice also recommended by Ren and Barber (2022). The above phenomenon has a simple explanation in terms of the evidence index. Intuitively, if the number of hypotheses tested increases, the evidence needed to reject each hypothesis at the same FDR level also increases. If we still want to reject the same hypotheses, the target FDR level needs to be increased as well to offset the stronger evidence requirement._
**Example 2**.: _Suppose there are two studies, each tests all of the \(m\) hypotheses, with \(\alpha_{1}<\alpha_{2}\). Assume their decisions agree and denote the indices of rejected hypotheses as \(\mathcal{R}\). Then, the
IRT procedure does not reject any hypotheses at level \(\alpha_{1}\). To see why, both equations (3) and (7) suggest \(e_{i}^{\tt agg}=e_{i}^{\tt agg*}=\frac{1}{2}(\frac{m}{\alpha_{1}|\mathcal{R}|}+ \frac{m}{\alpha_{2}|\mathcal{R}|})<\frac{m}{\alpha_{1}|\mathcal{R}|}\) if \(i\in\mathcal{R}\), and \(e_{i}^{\tt agg}=e_{i}^{\tt agg*}=0\) otherwise. However, for the \(e-\)BH procedure to reject \(e_{i}^{\tt agg}\neq 0\) we need \(e_{i}^{\tt agg}\geq\frac{m}{\alpha_{1}|\mathcal{R}|}.\) This is surprising, since intuitively the decisions of study 2 should enhance the evidence against \(\{H_{0i}:i\in\mathcal{R}\}\). How can it be that the evidence from study 1 is sufficient to reject \(\{H_{0i}:i\in\mathcal{R}\}\) but the combined evidence from the two studies is not sufficient to reject \(\{H_{0i}:i\in\mathcal{R}\}\)? To resolve this paradox we need to take a closer look at the meaning of evidence index. Suppose we want to estimate the q-value (Storey, 2002) for \(\{H_{0i}:i\in\mathcal{R}\}\). The decisions of study 1 suggest the q-value should be \(\leq\alpha_{1}\) and the decisions of study 2 suggest the q-value should be \(\leq\alpha_{2}\). Without further assumption, it is reasonable to assert the "true" q-value should be less than a number somewhere between \(\alpha_{1}\) and \(\alpha_{2}\). Indeed, \(e_{i}^{\tt agg}=\frac{1}{2}(\frac{m}{\alpha_{1}|\mathcal{R}|}+\frac{m}{ \alpha_{2}|\mathcal{R}|})>\frac{m}{\alpha_{2}|\mathcal{R}|}\). Hence, the smallest \(\alpha\) so that the IRT can reject hypotheses with \(e^{\tt agg}\neq 0\) is between \(\alpha_{1}\) and \(\alpha_{2}\)._
### Study-specific FDR control
A key requirement for the validity of the IRT procedure is that the study-specific multiple testing procedure controls FDR at their pre-specified level \(\alpha_{j}\). Theorems 2 and 3 implicitly assume that such an FDR control holds for finite samples, i.e. \(\mathbb{E}\{\sum_{i\in\mathcal{H}_{0j}}\delta_{ij}/(\|\boldsymbol{\delta}_{j }\|_{0}\lor 1)\}\leq\alpha_{j}\) for all \(j=1,\ldots,d\). In reality, however, for some studies their FDR control may be asymptotic in \(m_{j}\). In such a scenario, \(\boldsymbol{e^{\tt agg}}\) in Theorems 2 and 3 are asymptotic generalized e-values and the IRT procedure in Algorithm 1 guarantees FDR control at level \(\alpha\) as \(m\to\infty\). We summarize the above discussion in the following proposition.
**Proposition 1**.: _Suppose \(\boldsymbol{\delta}_{j}\) controls FDR at level \(\alpha_{j}\) asymptotically. Then, Algorithm 1 controls FDR at level \(\alpha\) asymptotically._
Proof.: We first establish that \(\boldsymbol{e^{\tt agg}}\) (Equation (3)) and \(\boldsymbol{e^{\tt agg}}\)(Equation (7)) are generalized \(e-\)values asymptotically.
Let \(e_{i}^{\tt agg}\) be as defined in Equation (3). Following the proof of Theorem 2, we have
\[\sum_{i=1}^{m}\mathbb{E}\{e_{i}^{\tt agg}\mathbb{I}(\theta_{i}=0)\} =\frac{1}{n}\sum_{j=1}^{d}\frac{m_{j}}{\alpha_{j}}\mathbb{E}\Big{(} \frac{\sum_{i\in\mathcal{H}_{0j}}\delta_{ij}}{\|\boldsymbol{\delta}_{j}\|_{0} \lor 1}\Big{)}\] \[\leq\frac{1}{n}\sum_{j=1}^{d}\frac{m_{j}}{\alpha_{j}}(\alpha_{j}+ o(1))\] \[\leq\frac{1}{n}\sum_{j=1}^{d}m_{j}+o(m_{j})\]
Since \(\sum_{j=1}^{d}m_{j}=\sum_{j=1}^{d}\sum_{i=1}^{m}\mathbb{I}(i\in\mathcal{M}_{j})= \sum_{i=1}^{m}n_{i}\leq mn\), we have
\[\sum_{i=1}^{m}\mathbb{E}\{e_{i}^{\texttt{agg}}\mathbb{I}(\theta_{i}=0)\}\leq m+o (m). \tag{8}\]
We also have
\[\sum_{i=1}^{m}\mathbb{E}\{e_{ij}\mathbb{I}(\theta_{i}=0)\} =\frac{m_{j}}{\alpha_{j}}\mathbb{E}\Big{(}\frac{\sum_{i\in\mathcal{ H}_{0j}}\delta_{ij}}{\|\mathbf{\delta}_{j}\|_{0}\lor 1}\Big{)}\] \[=\frac{m_{j}}{\alpha_{j}}\mathrm{FDR}(\mathbf{\delta}_{j})\leq m_{j}+ o(m_{j}).\]
Suppose the conditions for Theorem 3 are satisfied then following the same argument as in the proof of Theorem 3 we have \(\mathbb{E}\{e_{ij}\mathbb{I}(\theta_{i}=0)\}\leq 1+o(1)\). Hence
\[\sum_{i=1}^{m}\mathbb{E}\{e_{i}^{\texttt{agg}*}\mathbb{I}(\theta _{i}=0)\} =\sum_{i=1}^{m}\sum_{j=1}^{d}\frac{1}{n_{i}}\mathbb{E}\{e_{ij} \mathbb{I}(i\in\mathcal{M}_{j})\mathbb{I}(\theta_{i}=0)\}\] \[\leq\sum_{i=1}^{m}1+o(1)=m+o(m).\]
Next, we establish that Algorithm 1 provides asymptotic FDR control. Let \(\mathbf{\delta}=\{\delta_{1},\ldots,\delta_{m}\}\) be the decision rule described by Algorithm 1. The \(e-\)BH procedure satisfies
\[e_{i}^{\texttt{agg}}\geq\frac{m}{\alpha\|\mathbf{\delta}\|_{0}\lor 1}\quad\text{if } \delta_{i}=1.\]
Hence, the FDP of \(\mathbf{\delta}\) satisfies
\[\mathrm{FDP}(\mathbf{\delta}) =\sum_{i=1}^{m}\frac{\mathbb{I}(\delta_{i}=1,\theta_{i}=0)}{\|\bm {\delta}\|_{0}\lor 1}\] \[=\sum_{i=1}^{m}\frac{\alpha e_{i}^{\texttt{agg}}\mathbb{I}( \delta_{i}=1,\theta_{i}=0)}{m}\] \[\leq\alpha\sum_{i=1}^{m}\frac{e_{i}^{\texttt{agg}}\mathbb{I}( \theta_{i}=0)}{m}.\]
By Equation (8), we have \(\mathrm{FDR}(\mathbf{\delta})\leq\alpha+o(1)\).
### When \(p-\)values are available
While the IRT framework takes \(\mathcal{D}_{j}\) as an input from each study \(j\), it is important to consider the setting where study-specific \(p-\)values, denoted \(\{p_{ij},\ i\in\mathcal{M}_{j},\ j=1,\ldots,d\}\), are available. The goal under this setting is to aggregate the \(p-\)values pertaining to each hypothesis \(i\) and then determine an appropriate threshold for FDR control at level \(\alpha\) using the
aggregated \(p-\)values. However, choosing an aggregation function for combining multiple \(p-\)values is challenging without making additional assumptions regarding their dependence structure (Vovk and Wang, 2020). Furthermore, if the underlying model is misspecified the validity of the corresponding \(p-\)values may be affected. In contrast, \(e-\)values are relatively more robust to such model misspecification (Wang and Ramdas, 2022) and particularly to dependence between the \(p-\)values (Vovk and Wang, 2021).
In this section, we consider the following calibrator from Vovk and Wang (2021) ( Equation B.1 with \(\kappa=1\) ) for transforming \(p_{ij}\) to corresponding \(e-\)values, denoted \(e_{ij}^{p2e}\):
\[e_{ij}^{p2e}=\begin{cases}\infty,&\text{ if }p_{ij}=0\\ \dfrac{2}{p_{ij}(-\log p_{ij})^{2}},&\text{ if }p_{ij}\in(0,\exp{(-2)}] \\ 0,&\text{ if }p_{ij}\in(\exp{(-2)},1]\end{cases}. \tag{9}\]
With the \(e-\)values from Equation (9), Steps (2) and (3) of the IRT framework provide a fusion algorithm, which we call P2E, that provides valid FDR control at a pre-determined level \(\alpha\). Specifically, in this setting, the aggregated evidence index is given by
\[e_{i}^{p2e}=\frac{1}{n}\sum_{j=1}^{d}e_{ij}^{p2e}\mathbb{I}\{i\in\mathcal{M}_{ j}\},\qquad\qquad\text{(\bf Step 2)}\]
and the rejection set under the \(e-\)BH procedure is given by \(\{i:e_{(i)}^{p2e}\geq m/(\alpha k_{\alpha}^{p2e})\}\) where
\[k_{\alpha}^{p2e}=\max\Big{(}i\in\mathcal{M}:e_{(i)}^{p2e}\geq\frac{m}{i\alpha }\Big{)}.\qquad\text{(\bf Step 3)}\]
When the conditions of Theorem 3 hold, \(e_{i}^{p2e*}\) is the counterpart to \(e_{i}^{p2e}\) with \(n\) replaced by \(n_{i}\) in Step 2 above and \(k_{\alpha}^{p2e*}=\max(i\in\mathcal{M}:e_{(i)}^{p2e*}\geq\dfrac{m}{i\alpha})\) in Step 3. In sections 5 and 6 we evaluate the numerical performance of P2E vis-a-vis IRT.
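A sketch of the calibrator in Equation (9) is given below; the aggregated indices \(e_i^{p2e}\) would then be passed to the same \(e-\)BH step sketched earlier. The function name is illustrative.

```python
import numpy as np

def p_to_e(p):
    """Calibrator of Equation (9) (Vovk and Wang, 2021, with kappa = 1):
    maps p-values to e-values."""
    p = np.asarray(p, dtype=float)
    e = np.zeros_like(p)
    small = (p > 0) & (p <= np.exp(-2))
    e[small] = 2.0 / (p[small] * np.log(p[small]) ** 2)
    e[p == 0] = np.inf
    return e

# p-values above exp(-2) ~ 0.135 carry no evidence; very small p-values
# map to large e-values.
print(p_to_e([0.2, 0.05, 1e-4]))
```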
## 5 Illustrative example
We illustrate the IRT framework for the integrative analysis of \(d=8\) microarray studies (Singh et al., 2002; Welsh et al., 2001; Yu et al., 2004; Lapointe et al., 2004; Varambally et al., 2005; Tomlins et al., 2005; Nanni et al., 2002; Wallace et al., 2008) on the genomic profiling of human prostate cancer. The first three columns of Table 1 summarize the \(d\) data sets where a total of \(m=23,367\) unique genes are analyzed with each gene \(i\) being profiled by \(n_{i}\in[1,d]\) studies. The left panel of Figure 2 presents a frequency distribution of the \(n_{i}\)'s where almost \(30\%\) of the \(m\) genes are analyzed by just one of the \(d\) studies while approximately \(18\%\) of the genes are profiled by all \(d\) studies.
Our goal in this application is to use the IRT framework to construct a rank ordering of the \(m\) gene expression profiles for prostate cancer. Such a rank ordering is particularly useful when data-privacy concerns prevent sharing of study-specific summary statistics, such as \(p-\)values, and information regarding the operational characteristics of the multiple testing methodology used in each study. For study \(j\), our data are an \(m_{j}\times s_{j}\) matrix of expression values where \(s_{j}\) denotes the sample size in study \(j\). Each sample either belongs to the control group or the treatment group and the goal is to test whether gene \(i\) is differentially expressed across the two groups. Since IRT operates on the binary decision vector \(\boldsymbol{\delta}_{j}\), we convert the expression matrices from each study to \(\boldsymbol{\delta}_{j}\) as follows. For each study \(j\), we first use the R-package limma(Ritchie et al., 2015) to get the \(m_{j}\) vector of raw \(p-\)values. Thereafter, the BH-procedure is applied on these raw \(p-\)values at FDR level \(\alpha_{j}\) (see column four in Table 1) to derive the final decision sequence \(\boldsymbol{\delta}_{j}\). We note that typically an important intermediate step before computing the \(p-\)values in each study is to first validate the quality and compatibility of these studies via objective measures of quality assessment, such as Kang et al. (2012). In this application, however, we do not consider such details.
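The thresholding step used to turn each study's raw \(p-\)values into a decision vector \(\boldsymbol{\delta}_{j}\) is the standard BH procedure; a minimal sketch is given below (the \(p-\)values themselves would come from limma, which is not reproduced here).

```python
import numpy as np

def bh(pvals, alpha):
    """Benjamini-Hochberg procedure: rejects the hypotheses with the k smallest
    p-values, where k = max{i : p_(i) <= i * alpha / m}."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    passing = np.nonzero(p[order] <= alpha * np.arange(1, m + 1) / m)[0]
    delta = np.zeros(m, dtype=int)
    if passing.size > 0:
        delta[order[:passing.max() + 1]] = 1
    return delta
```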
The sixth column of Table 1 reports the number of rejections for each of these studies and the last column presents the evidence against each rejected null hypothesis in study \(j\). It is interesting to see that study 5 (Varambally et al., 2005) receives the highest evidence for its rejected hypotheses, which is not surprising given the large weight \(w_{5}\) that each of its relatively small number of rejections receives. In contrast, study 8 (Wallace et al., 2008) has the smallest non-zero evidence, which is driven by the largest number of rejections reported in this study. The right panel of Figure 2 presents a heatmap of the log evidence indices for 100 randomly sampled genes across the \(d\) studies. Here the white shade represents a gene not analyzed by the study while the shade of brown represents an evidence index of 0, which
\begin{table}
\begin{tabular}{c c c c c c c} \(j\) & Study & \(m_{j}\) & sample size & \(\alpha_{j}\) & \(\|\boldsymbol{\delta}_{j}\|_{0}\) & \(e_{j}^{+}\) \\ \hline
1 & Singh et al. (2002) & 8,799 & 102 & 0.05 & 2,094 & 84.04 \\
2 & Welsh et al. (2001) & 8,798 & 34 & 0.01 & 921 & 955.27 \\
3 & Yu et al. (2004) & 8,799 & 146 & 0.05 & 1,624 & 108.36 \\
4 & Lapointe et al. (2004) & 13,579 & 103 & 0.05 & 3,328 & 81.60 \\
5 & Varambally et al. (2005) & 19,738 & 13 & 0.01 & 282 & 6999.29 \\
6 & Tomlins et al. (2005) & 9,703 & 57 & 0.01 & 1,234 & 786.30 \\
7 & Nanni et al. (2002) & 12,688 & 30 & 0.01 & 0 & 0 \\
8 & Wallace et al. (2008) & 12,689 & 89 & 0.05 & 4,716 & 53.81 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the \(d=8\) studies and the evidence against each rejected null hypothesis. Here \(e_{j}^{+}=\max\{e_{ij}:i=1,\ldots,m_{j}\}\).
corresponds to failure to reject the underlying null hypothesis. The heterogeneity across the \(d\) studies is evident through the different magnitudes of the evidence indices constructed for each study. Table 2 presents the distribution of the rejected hypotheses across the \(d\) studies, with the exception of study 7. For instance, studies 1 and 3 share \(1,531\) rejected hypotheses while studies 2 and 5 share just 1 rejected hypothesis. Also, study 5, which investigates the largest number of genes, has minimal overlap with the other studies as far as its discoveries are concerned.
The left panel of Figure 3 presents a histogram of the log transformed non-zero aggregated evidences from IRT while the right panel plots the top 25 genes with respect to their aggregated evidence, colored and shape-coded by the number of times the corresponding
\begin{table}
\begin{tabular}{c c|c c c c c c c} \(j\) & \(\|\boldsymbol{\delta}_{j}\|_{0}\) & 1 & 2 & 3 & 4 & 5 & 6 & 8 \\ \hline
1 & 2,094 & - & 509 & 1531 & 387 & 7 & 130 & 1029 \\
2 & 921 & 509 & - & 423 & 108 & 1 & 27 & 324 \\
3 & 1,624 & 1531 & 423 & - & 294 & 7 & 105 & 809 \\
4 & 3,328 & 387 & 108 & 294 & - & 17 & 172 & 970 \\
5 & 282 & 7 & 1 & 7 & 17 & - & 4 & 8 \\
6 & 1,234 & 130 & 27 & 105 & 172 & 4 & - & 365 \\
8 & 4,716 & 1029 & 324 & 809 & 970 & 8 & 365 & - \\ \hline \end{tabular}
\end{table}
Table 2: Distribution of rejection overlaps across 7 studies.
Figure 2: Left: Frequency distribution of the \(n_{i}\)’s. Right: heatmap of the log evidence indices for 100 randomly sampled genes across the \(d\) studies. White shade represents a gene not analyzed by the corresponding study while the shade of brown represents an evidence index of 0 which corresponds to failure to reject.
gene was analyzed across the \(d\) studies. Interestingly, the second and third top genes have \(n_{i}=2\) and \(3\), respectively, suggesting that apart from the number of times a particular null hypothesis is analyzed across the \(d\) studies, the magnitude of the study-specific evidence indices also plays a key role in the overall ranking. To put this into perspective, the right panel of Figure 4 presents the top \(25\) genes with respect to their aggregated evidence from the P2E framework introduced in Section 4.4. In stark contrast to the IRT framework, here the top \(25\) genes have \(n_{i}\geq 7\). Furthermore, both the left and right panels of Figure 4 suggest that \(e_{i}^{p2e}\) can be substantially larger in magnitude than \(e_{i}^{\tt agg}\), particularly when one of the studies rejects the null hypothesis with an astronomically small \(p-\)value.
Next, we study the composition of rejected hypotheses from IRT and P2E at \(\alpha=0.1\).
Figure 4: Left: histogram of the log transformed non-zero aggregated evidences from P2E. Right: Top \(25\) genes with respect to their aggregated evidence, color and shape-coded by the number of times the corresponding gene was analyzed across the \(d\) studies.
Figure 3: Left: Histogram of the log transformed non-zero aggregated evidences from IRT. Right: Top \(25\) genes with respect to their aggregated evidence, color and shape-coded by the number of times the corresponding gene was analyzed across the \(d\) studies.
Table 3 presents the distribution of rejected hypotheses with respect to \(n_{i}\) and reinforces the point that for IRT, the evidence weights \(w_{j}\) play a key role in the overall ranking while P2E relies on how often a hypothesis has been tested across the \(d\) studies. For each study \(j\), Table 4 presents the number of hypotheses rejected by IRT and P2E as a percentage of the total number of rejections for that study. In the case of IRT, studies 2, 5, and 6 have a 100% rejection rate, which is not surprising given that these three studies also exhibit the three highest evidence against their rejected null hypotheses. Notably, Study 8 has a higher percentage rejection than Study 4 even though the former has a lower evidence index. This is expected since, from Table 2, out of the 4,716 rejected hypotheses in Study 8, approximately 14% are shared with studies 2 and 6, which exhibit high evidence indices. In contrast, of the 3,328 hypotheses rejected in Study 4, less than 8.5% are shared with Study 2 and Study 6. Thus, the aggregated evidence indices for the rejected hypotheses in Study 8 receive an overall higher weight. In contrast, P2E exhibits a relatively more even distribution of the rejected hypotheses across the \(d\) studies, which is primarily driven by the magnitude of the \(p-\)values returned by each study.
\begin{table}
\begin{tabular}{l c c c c} \multicolumn{5}{c}{\% Rejected at \(\alpha=0.1\)} \\ \hline \(j\) & \(\|\mathbf{\delta}_{j}\|_{0}\) & \(e_{j}^{+}\) & by IRT & by P2E \\ \hline
1 & 2,094 & 84.04 & 30.32 & 63.08 \\
2 & 921 & 955.27 & 100 & 57.65 \\
3 & 1,624 & 108.36 & 32.45 & 71.00 \\
4 & 3,328 & 81.60 & 8.71 & 51.83 \\
5 & 282 & 6,999.29 & 100 & 65.25 \\
6 & 1,234 & 786.30 & 100 & 50.40 \\
7 & 0 & 0 & - & - \\
8 & 4,716 & 53.81 & 14.48 & 51.44 \\ \hline \end{tabular}
\end{table}
Table 4: Composition of rejected hypotheses at \(\alpha=0.1\)
\begin{table}
\begin{tabular}{l c c c c c c c c c} & \# Rejections & \(n_{i}=1\) & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline IRT & 2,405 & 23.91\% & 2.95\% & 4.53\% & 1.25\% & 5.03\% & 18.04\% & 15.13\% & 29.16\% \\ \hline P2E & 4,390 & 7.15\% & 7.33\% & 3.03\% & 4.26\% & 3.21\% & 10.20\% & 23.07\% & 41.75\% \\ \hline \end{tabular}
\end{table}
Table 3: Distribution of rejected hypotheses with respect to \(n_{i}\) at \(\alpha=0.1\).
## Numerical experiments
Here we assess the empirical performance of IRT on simulated data. We consider six simulation scenarios with \(m=1000\) and test \(H_{0i}:\mu_{i}=0\ vs\ H_{1i}:\mu_{i}\neq 0\), where \(\mu_{i}\stackrel{{ i.i.d.}}{{\sim}}0.8N(0,1)+0.1N(3,1)+0.1N(-3,1)\). In each scenario, study \(j\) uses data \(X_{ij}\), to be specified subsequently, to conduct \(m_{j}\) tests and reports the corresponding decisions \(\mathbf{\delta}_{j}\) obtained from the BH-procedure that controls the FDR at level \(\alpha_{j}\). The empirical performance of IRT is compared against three alternative procedures: (1) the Naive method, which rejects \(H_{0i}\) if at least \(d/2\) studies reject it, (2) the method Fisher, which pools the study-specific \(p-\)values using Fisher's method (Fisher, 1948) and then applies the BH-procedure on the pooled \(p-\)value sequence for FDR control, and (3) the P2E procedure discussed in Section 4.4. When the \(p-\)values are independent, we expect Fisher to exhibit higher power than \(e-\)value procedures, such as IRT and P2E. Nevertheless, in such settings Fisher provides a practical benchmark for assessing the empirical performances of P2E, IRT and Naive, where the last two procedures rely only on the binary decision sequences \(\mathbf{\delta}_{j}\).
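For concreteness, the following Python sketch (ours, not the authors' implementation) spells out the reference procedures: a per-study Benjamini-Hochberg step, Fisher's pooling of independent \(p-\)values, and the Naive majority rule; the function names and the use of `scipy` are our assumptions.

```python
import numpy as np
from scipy.stats import chi2

def bh_reject(pvals, alpha):
    """Benjamini-Hochberg step-up procedure: boolean rejection vector at level alpha."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()        # largest index meeting its threshold
        reject[order[: k + 1]] = True
    return reject

def fisher_pool(pval_matrix):
    """Fisher's combination of the d p-values of each hypothesis (rows of the matrix)."""
    stat = -2.0 * np.log(pval_matrix).sum(axis=1)
    return chi2.sf(stat, df=2 * pval_matrix.shape[1])

def naive_reject(decision_matrix):
    """Naive rule: reject H_0i if at least d/2 of the d studies rejected it."""
    d = decision_matrix.shape[1]
    return decision_matrix.sum(axis=1) >= d / 2
```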
**Scenario 1 -** in this scenario we let \(X_{ij}|\mu_{i},\sigma_{j}\stackrel{{ ind.}}{{\sim}}N(\mu_{i}, \sigma_{j}^{2}),\ \sigma_{j}\stackrel{{ i.i.d.}}{{\sim}}U(0.75,2)\), \(m_{j}=m\), \(\alpha_{j}=0.01\), \(\alpha=0.1\) and vary \(d\) from \(5\) to \(50\). Figure 5 presents the average FDP and the ETP across \(500\) Monte Carlo repetitions. Unsurprisingly, Fisher exhibits the highest power across all values of \(d\) while IRT is the next best. The Naive method, on the other hand, is substantially less powerful. While all methods control the FDR at \(\alpha\), P2E is less powerful than IRT in this setting. In fact, in all our simulation scenarios, we find that IRT dominates P2E in power at the same FDR level.
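A minimal sketch of the Scenario 1 pipeline, reusing the helper functions above, is given below. Treating the 0.8-weight mixture component as the null (\(\mu_{i}=0\)), together with the specific seed and number of studies, is our assumption; the IRT and P2E aggregation steps themselves are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

# Helpers bh_reject, fisher_pool and naive_reject are the ones sketched above.
rng = np.random.default_rng(1)
m, d, alpha_j, alpha = 1000, 10, 0.01, 0.1

# Mixture prior on the means; the 0.8-weight component is treated here as the null
# (mu_i = 0) so that false and true discoveries are well defined (our assumption).
comp = rng.choice(3, size=m, p=[0.8, 0.1, 0.1])
mu = np.where(comp == 0, 0.0, rng.normal(np.where(comp == 1, 3.0, -3.0), 1.0))
nonnull = comp != 0

sigma = rng.uniform(0.75, 2.0, size=d)               # study-specific noise levels
X = rng.normal(mu[:, None], sigma[None, :])          # X_ij | mu_i ~ N(mu_i, sigma_j^2)
pvals = 2 * norm.sf(np.abs(X) / sigma[None, :])      # two-sided z-test p-values

# Each study applies BH at level alpha_j and reports only its 0/1 decisions.
decisions = np.column_stack([bh_reject(pvals[:, j], alpha_j) for j in range(d)])

def fdp_etp(reject, nonnull):
    """False discovery proportion and number of true positives of a rejection set."""
    fdp = (reject & ~nonnull).sum() / max(int(reject.sum()), 1)
    etp = int((reject & nonnull).sum())
    return fdp, etp

print("Naive :", fdp_etp(naive_reject(decisions), nonnull))
print("Fisher:", fdp_etp(bh_reject(fisher_pool(pvals), alpha), nonnull))
```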
**Scenario 2 -** we continue to borrow the setting from Scenario 1 but set \(\sigma_{j}=1,\ d=5\) and introduce correlation across the \(d\) studies. Specifically, we let \(\text{Corr}(X_{ij},X_{ik})=\rho,\ j\neq k\)
Figure 5: Scenario 1
where \(\rho\in\{-0.2,-0.1,0,0.1,0.3,0.5,0.7,0.9\}\). In this scenario, the \(d\)\(p-\)values for each hypothesis are exchangeable but not independent unless \(\rho=0\). Figure 6 reports the average FDP and the ETP for various methods. Fisher fails to control the FDR at \(\alpha=0.1\) for large \(\rho\) and therefore does not appear in the left panel of Figure 6. We find that IRT continues to dominate P2E and Naive for all values of \(\rho\). Here HM represents another method which pools the study specific \(p-\)values using harmonic mean (Wilson, 2019) and then applies the BH-procedure on those pooled \(p-\)values for FDR control. Other methods for pooling exchangeable \(p-\)values are discussed in Vovk and Wang (2020) but all of them rely on some prior knowledge regarding the strength of the dependence between the \(p-\)values, which may not be available in practice.
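One simple way to generate the exchangeable test statistics used in Scenarios 2 and 3 is through an equicorrelated Gaussian draw; the sketch below is ours and only illustrates the sampling step.

```python
import numpy as np

def equicorrelated_stats(mu, d, rho, rng):
    """Draw (X_i1, ..., X_id) with unit variances and Corr(X_ij, X_ik) = rho for j != k.

    The equicorrelation matrix is positive semidefinite whenever rho >= -1/(d-1),
    which covers the grid of rho values used in Scenario 2 with d = 5.
    """
    cov = rho * np.ones((d, d)) + (1.0 - rho) * np.eye(d)
    noise = rng.multivariate_normal(np.zeros(d), cov, size=len(mu))
    return np.asarray(mu)[:, None] + noise

rng = np.random.default_rng(2)
X = equicorrelated_stats(np.zeros(1000), d=5, rho=-0.2, rng=rng)
print(np.corrcoef(X, rowvar=False).round(2))   # off-diagonal entries close to -0.2
```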
**Scenario 3 -** we set \(\sigma_{j}=1\) and introduce correlation across the studies as well as the test statistics. Specifically, we let \(\text{Corr}(X_{ij},X_{ik})=0.7,\ j\neq k\) and \(\text{Corr}(X_{ij},X_{rj})=0.5,i\neq r\). Figure 7 reports the average FDP and the ETP for various methods as \(d\) varies from 5 to 50. We see a similar pattern as Figure 6 where IRT controls the FDR and exhibits higher power than Naive, HM and P2E. The results from scenarios 2 and 3 suggest that when the \(p-\)values are exchangeable, IRT provides a powerful framework for pooling inferences across the various studies. Furthermore, while the building blocks of IRT involve the study-specific binary decision sequences, it is more powerful than P2E which directly relies on the magnitude of study-specific \(p-\)values for conversion to \(e-\)values.
**Scenario 4 -** we fix \(d=30\) and borrow the setting from Scenario 1 except that for all non-null cases exactly \(K\) out of the \(d\) studies reject the null hypothesis. Figure 8 reports the average FDP and the ETP for the two methods for different choices of \(K\). In this scenario, we expect all methods to exhibit higher power as \(K\) increases. The Naive method, in particular, has almost no power for small \(K\) while IRT dominates P2E across all values of \(K\).
Figure 6: Scenario 2
**Scenario 5 -** here the data are generated according to Scenario 1 with \(\sigma_{j}=1\) but we vary \(n_{i}\), the number of studies that analyze each hypothesis. Specifically, we set \(d=30\), \(n=20\) and consider the ratio \(\eta=\min\{n_{1},\ldots,n_{m}\}/n\). For a given choice of \(\eta\), we first sample \(n_{1},\ldots,n_{m}\) uniformly from \([\lceil n\eta\rceil,n]\) with replacement and then for each \(i\), \(n_{i}\) studies are chosen at random from the \(d\) studies without replacement. Figure 9 reports the average FDP and the ETP for the three methods as \(\eta\) varies over \([0.05,0.8]\). We also include IRT* and P2E* in our comparisons, which correspond to the IRT and P2E procedures using \(\boldsymbol{e^{\text{agg}*}}\) and \(\boldsymbol{e^{\text{p2e*}}}=(e_{1}^{\text{p2e*}},\ldots,e_{m}^{\text{p2e*}})\) from Equation (7) and Section 4.4, respectively. As \(\eta\) increases, the number of studies testing any given hypothesis \(i\) increases, which leads to an improved power for IRT and IRT* in this scenario. Moreover, as discussed in Section 4, IRT* exhibits a higher power than IRT, and their power is comparable when the heterogeneity in \(n_{i}\) is small, which corresponds to a large value of \(\eta\).
**Scenario 6 -** in this scenario we vary \((m_{j},\alpha_{j})\) and continue to include IRT* and P2E*
Figure 8: Scenario 4
Figure 7: Scenario 3
in our comparisons. We borrow the setting from Scenario 1 with \(\sigma_{j}=1\), \(m=1000\) and \(\alpha=0.15\). To vary \(m_{j}\), we set \(m_{(1)}=\max\{m_{1},\ldots,m_{d}\}=900\) and consider the ratio \(\eta=\min\{m_{1},\ldots,m_{d}\}/m_{(1)}\). For a given choice of \(\eta\), we first sample \(m_{1},\ldots,m_{d}\) uniformly from \([\lceil m_{(1)}\eta\rceil,m_{(1)}]\) with replacement and then for each \(j\), \(m_{j}\) hypotheses are chosen at random from the \(m\) hypotheses without replacement. We set \(\alpha_{j}\in\{0.05,0.03,0.01\}\) according to \(m_{j}\leq 600\), \(m_{j}\in(600,800]\) or \(m_{j}>800\), respectively. Thus, in this setting studies with a higher \(m_{j}\) receive a larger weight on their rejections. Figure 10 reports the average FDP and the ETP for various methods as \(\eta\) varies over \([0.1,0.6]\). As observed in Figure 9, both IRT and IRT* exhibit higher power as \(\eta\) increases, with the latter dominating IRT in power for relatively smaller values of \(\eta\). Furthermore, when \(\eta\) is large, \(m_{j}\) is large and studies receive a relatively higher weight \(w_{j}\) on their rejections, which leads to an improved power in this setting.
Figure 10: Scenario 6
Figure 9: Scenario 5
## Closing remarks
In this article, we have developed IRT, a framework for fusion learning in multiple testing, that operates on the binary decision sequences available from diverse studies and conducts integrative inference on the common parameter of interest. The IRT framework guarantees an overall FDR control under arbitrary dependence between the aggregated evidence indices as long as the studies control their FDR at the desired levels. Furthermore, our simulation study suggests that IRT provides a powerful approach for pooling exchangeable \(p-\)values across the studies.
A potential extension of our framework lies in multiple testing of partial conjunction (PC) hypotheses (see Benjamini and Heller (2008); Wang et al. (2022); Bogomolov (2023) for an incomplete list of references) where the goal is to test if at least \(u\geq 1\) out of the \(d\) studies reject the null hypothesis \(H_{0i},\ i=1,\ldots,m\). This can be formulated as testing the following null hypotheses \(H_{0i}^{u/d}:\) fewer than \(u\) out of \(d\) studies are non-null. Such problems arise in the study of mediation effects (Huang, 2019; Dai et al., 2022; Liu et al., 2022), finding evidence factors in causal inference (Karmakar et al., 2021), and replicability analysis (Heller et al., 2014; Heller and Yekutieli, 2014). Given the triplet \(\mathcal{D}_{j}\) from each study, a key challenge in this setting is to construct an aggregation scheme such that the aggregated evidence indices provide an effective ranking of the \(m\) PC hypotheses, \(H_{01}^{u/d},\ldots,H_{0m}^{u/d}\) and a cutoff along this ranking can be determined for FDR control. Our future research will be directed towards developing such an evidence aggregation scheme.
## Acknowledgments
We thank Professor Aaditya Ramdas for his valuable insights on an earlier version of the manuscript. All errors are our own.
|
2306.01799
|
Pairwise Ranking Losses of Click-Through Rates Prediction for Welfare
Maximization in Ad Auctions
|
We study the design of loss functions for click-through rates (CTR) to
optimize (social) welfare in advertising auctions. Existing works either only
focus on CTR predictions without consideration of business objectives (e.g.,
welfare) in auctions or assume that the distribution over the participants'
expected cost-per-impression (eCPM) is known a priori, then use various
additional assumptions on the parametric form of the distribution to derive
loss functions for predicting CTRs. In this work, we bring back the welfare
objectives of ad auctions into CTR predictions and propose a novel weighted
rankloss to train the CTR model. Compared to existing literature, our approach
provides a provable guarantee on welfare but without assumptions on the eCPMs'
distribution while also avoiding the intractability of naively applying
existing learning-to-rank methods. Further, we propose a theoretically
justifiable technique for calibrating the losses using labels generated from a
teacher network, only assuming that the teacher network has bounded $\ell_2$
generalization error. Finally, we demonstrate the advantages of the proposed
loss on synthetic and real-world data.
|
Boxiang Lyu, Zhe Feng, Zachary Robertson, Sanmi Koyejo
|
2023-06-01T15:42:50Z
|
http://arxiv.org/abs/2306.01799v1
|
# Pairwise Ranking Losses of Click-Through Rates Prediction for Welfare Maximization in Ad Auctions
###### Abstract
We study the design of loss functions for click-through rates (CTR) to optimize (social) welfare in advertising auctions. Existing works either only focus on CTR predictions without consideration of business objectives (e.g., welfare) in auctions or assume that the distribution over the participants' expected cost-per-impression (eCPM) is known a priori, then use various additional assumptions on the parametric form of the distribution to derive loss functions for predicting CTRs. In this work, we bring back the welfare objectives of ad auctions into CTR predictions and propose a novel weighted rankloss to train the CTR model. Compared to existing literature, our approach provides a provable guarantee on welfare but without assumptions on the eCPMs' distribution while also avoiding the intractability of naively applying existing learning-to-rank methods. Further, we propose a theoretically justifiable technique for calibrating the losses using labels generated from a teacher network, only assuming that the teacher network has bounded \(\ell_{2}\) generalization error. Finally, we demonstrate the advantages of the proposed loss on synthetic and real-world data.
## 1 Introduction
Global online advertising spending is expected to exceed $700 billion in 2023 (Statista, 2022). At the core of online advertising are advertising (ad) auctions, held billions of times per day, to determine which advertisers get the opportunity to show ads (Jeunen, 2022). A critical component of these auctions is predicting the click-through rates (CTR) (Yang and Zhai, 2022). Typically, advertisers submit cost-per-click (CPC) bids, i.e., report how much they are willing to pay if a user clicks. The CTR is the probability that a user clicks the ad when the ad is shown. Combined with the cost-per-click bid, the platform can then calculate the value of _showing_ the ad, usually called the cost-per-impression (eCPM). As the CTR needs to be learned, the platform instead uses the predicted click-through rates (pCTRs) to convert the submitted CPC bids to predicted eCPM bids, which then determine the auctions' outcomes.
Due to the importance of predicting the CTRs, a wealth of related literature exists, and we refer the interested reader to Choi et al. (2020); Yang and Zhai (2022) for thorough reviews of these advances. Of these works, the majority focus on the various neural network architectures designed for the task, such as DeepFM (Guo et al., 2017), Deep & Cross Network (DCN) (Wang et al., 2017), MaskNet (Wang et al., 2021), among many others. These works propose novel neural network architectures but train these networks using off-the-shelf classification losses with no guarantees on the actual economic performance of the ad auctions, creating a discrepancy between the upstream model training for CTR prediction and downstream model evaluation.
Some works aim to ameliorate these discrepancies by using business objectives such as social welfare (or welfare for short) to motivate the design of loss functions (Chapelle, 2015; Vasile et al., 2017; Hummel and McAfee, 2017). However, these works either lack reproducible experiments on publicly available real-world benchmarks (Hummel and McAfee, 2017), or depend on ad-hoc heuristics with insufficient theoretical guarantees (Vasile et al., 2017). Moreover, many of these works suffer from an unrealistic assumption that bidders submit eCPM bids and the eCPM of the highest competing bid follows a known and fixed distribution. However, in real life, some ad auctions at industry leaders such as Amazon, Meta, and Google only accept CPC bids (Amazon Ads, 2023; Meta Business Help Center, 2023; Google Ads Help, 2023), and adjustments to the CTR prediction model changes the distribution of competing bids' eCPM.
We avoid the pitfalls of existing works by limiting assumptions about the eCPMs' distribution. Since various types of ad auctions with drastically different revenue functions are widely deployed, ranging from Generalized Second
Price (Edelman et al., 2007) to Vickrey-Clarke-Groves (Varian and Harris, 2014), and first price auction (Conitzer et al., 2022), we focus on maximizing the welfare achieved by these auctions, which measures the efficiency of the ad auction in terms of showing the most valuable ads.
**Our Contributions.** We list our contributions below.
* We propose a learning-to-rank loss with welfare guarantees by drawing a previously underutilized connection between welfare maximization and ranking.
* We propose two surrogate losses that are easy to optimize and theoretically justifiable.
* Inspired by student-teacher learning (Hinton et al., 2015), we construct an approximately calibrated, easy-to-optimize surrogate, whose theoretical guarantees only depend on the \(\ell_{2}\)-generalization bound of the teacher network.
* We demonstrate the benefits of the proposed losses on both simulated data and the Criteo Display Advertising Challenge dataset1, arguably the most popular benchmark for CTR prediction in ad auctions.
Footnote 1: [https://www.kaggle.com/c/criteo-display-ad-challenge](https://www.kaggle.com/c/criteo-display-ad-challenge)
### Related Works
In this section, we divide the related works into three main categories: applied research in CTR prediction, theoretical analysis of ad auctions, and methods in learning-to-rank.
**Applied Research in CTR Prediction.** There is an abundance of application oriented literature on CTR prediction (McMahan et al., 2013; Chen et al., 2016; Cheng et al., 2016; Zhang et al., 2016; Qu et al., 2016; Juan et al., 2017; Lian et al., 2018; Zhou et al., 2018, 2019; Wang et al., 2021; Pi et al., 2019; Pan et al., 2018; Li et al., 2020; Chapelle, 2015), and we refer interested readers to Yang and Zhai (2022) for a detailed survey. Two works with well-documented performance on the Criteo dataset are Guo et al. (2017) and Wang et al. (2017). Particularly, Guo et al. (2017) proposes DeepFM, short for deep factorization machines, which combines deep learning with factorization machines. Wang et al. (2017) is similar, where the proposed Deep Cross Network model combines deep neural networks with cross features. These works focus on the development of neural network architectures and use classification losses with little to no theoretical guarantees. Our work is orthogonal to and complements this line of literature by proposing easy-to-optimize loss functions rooted in economic intuition with provable guarantees on economic performance.
A well-known technique in knowledge distillation is student-teacher learning (Hinton et al., 2015), where a smaller network is used to approximate the predictions of a larger one. Recently some attempts have been made at applying the technique in CTR prediction (Zhu et al., 2020) and, as we demonstrate in this manuscript, the technique can even benefit the design of welfare-inspired loss functions, in addition to reducing the computation and memory requirements of the teacher network itself.
Among this line of work, two papers are closer to ours in spirit. Chapelle (2015) studies the design of CTR evaluation metrics that approximate the bidders' expected utility. Similarly, Vasile et al. (2017) uses the utility that the bidder derives from the auction to design a suitable loss function that the bidder should use for CTR prediction. While both works provide empirical justifications for the proposed losses, they only provide heuristic arguments when designing the loss functions themselves and include no theoretical guarantees on the generalization or calibration of the losses. Moreover, they both rely on the assumption that the distribution of the highest competing bid's eCPM is fixed and known a priori.
**Theoretical Analysis of Ad Auctions.** Many works study the theoretical properties of ad auctions (Fu et al., 2012; Edelman and Schwarz, 2010; Gatti et al., 2012; Aggarwal et al., 2006; Varian, 2009; Dughmi et al., 2013; Bergemann et al., 2022; Lucier et al., 2012), and Choi et al. (2020) offers a detailed survey of a collection of recent advances in the analysis of ad auctions.
Hummel and McAfee (2017) is the most relevant work to ours, as it studies the design of loss functions in ad auctions from the seller's perspective, offering new insights on how to design losses for either welfare maximization or revenue maximization. However, the real-world experiments in the paper rely on proprietary data, and the claims are not verified on widely available benchmarks. Moreover, it again relies heavily on the assumption that the distribution of the highest competing bid's eCPM is known beforehand, which can be unrealistic in practice.
**Learning-to-Rank.** Our work draws inspiration from a line of research on learning-to-rank (Burges et al., 2005, 2006; Cortes et al., 2010; Burges, 2010; Wang et al., 2018), which incorporates information retrieval performance metrics such as Normalized Discounted Cumulative Gain into the design of the loss functions, resembling our work. However, as we show in Section 3.2, these works do not directly apply to the welfare maximization setting. Moreover, to the best of our knowledge, these works have not been examined in the context of welfare maximization in ad auctions.
## 2 Models and Preliminaries
We begin with a multi-slot ad auction (Edelman et al., 2007; Varian, 2007) where each ad is associated a cost-per-click (CPC) bid. Let \(K\) denote the number of the slots and each slot, indexed by \(k\), is associated with a position multiplier
\(\alpha_{k}\). Without loss of generality assume that \(\alpha_{1}=1\) and the weights are decreasing in \(k\), namely \(\alpha_{1}\geqslant\ldots\geqslant\alpha_{K}\). Assume there are \(n\geqslant K\) ads participating in the auction where each ad has a feature vector \(x_{i}\in\mathbb{R}^{d}\) and CPC bid \(b_{i}\). There exists a function \(p^{*}:\mathbb{R}^{d}\rightarrow[0,1]\) such that the CTR of the ad at slot \(k\) is \(\alpha_{k}p_{i}\), where we let \(p_{i}=p^{*}(x_{i})\) for convenience. The ad's CTR is affected by both the slot it is assigned to and the ad's features. Intuitively, \(p_{i}\) is the ad's base CTR if it were assigned to the first slot, and is scaled according to \(\alpha_{k}\) for any slot \(k\).
Throughout this paper, we assume that the position multipliers are known, and we focus only on learning \(p^{*}\), i.e., the ad's CTR if it were assigned the first slot. Learning a position-based CTR prediction model requires additional assumptions to model the user's click behavior and is outside of our scope, which focuses on welfare maximization instead. Indeed, we will show it is without loss of generality to focus on single-slot auctions to maximize welfare, which is equivalent to learning the base CTR when shown in the first slot (Proposition 2.2).
More concretely let \(\mathcal{H}\subseteq\{f:\mathcal{X}\rightarrow[0,1]\}\) denote the hypothesis space and assume that \(p^{*}\) is realizable, i.e. \(p^{*}(\cdot)\in\mathcal{H}\). Conditioned on a set of \(n\) ads \(\{(b_{i},x_{i})\}_{i=1}^{n}\), let \(f(\cdot)\) denote an arbitrary function that the seller uses to predict the CTRs. The function \(f\), combined with the submitted bid \(b_{i}\) and the observed context \(x_{i}\), yields the predicted eCPMs \(b_{i}f(x_{i})\) for all \(i\in[n]\). The seller then awards the first slot to the bidder with the highest predicted eCPM, the second slot to the bidder with the second, and so forth, achieving a welfare of
\[\mathrm{Welfare}_{f}(\{(b_{i},x_{i})\}_{i=1}^{n})=\sum_{k=1}^{K}b_{\pi_{f}(k)} p_{\pi_{f}(k)},\]
where for any function \(f\), \(\pi_{f}(k)\) returns the index of the ad with the \(k\)-th highest predicted eCPM. The welfare maximization problem is then
\[\max_{f\in\mathcal{H}}\sum_{k=1}^{K}b_{\pi_{f}(k)}p_{\pi_{f}(k)}. \tag{1}\]
As we will prove, a solution to the problem is \(f=p^{*}\). For convenience, we let \(\pi_{*}(\cdot)=\pi_{p^{*}}(\cdot)\), \(\mathrm{Welfare}_{*}(\{(b_{i},x_{i})\}_{i=1}^{n})=\mathrm{Welfare}_{p^{*}}(\{(b_{i},x_{i})\}_{i=1}^{n})\), and assume there are no ties in \(b_{i}f(x_{i})\) or \(b_{i}p_{i}\).
To better illustrate welfare and advertisement auction, we include an specific instance of ad auction in the following example.
**Example 2.1**.: _Let \(\texttt{ad}_{1},\texttt{ad}_{2},\texttt{ad}_{3}\) denote three different advertisements, where \(\texttt{ad}_{1}\)'s CTR is 0.1 and CPC bid 10, \(\texttt{ad}_{2}\)'s CTR 0.4 and CPC bid 2, and \(\texttt{ad}_{3}\)'s CTR 0.9 and CPC bid 0.5. Suppose there are two advertisement slots where the first slot has multiplier \(\alpha_{1}=1\) and the second \(\alpha_{2}=0.9\). Assigning the first slot to \(\texttt{ad}_{1}\) and the second to \(\texttt{ad}_{2}\) maximizes welfare, and the maximum welfare is \(1+0.8\times 0.9=1.72\). Knowing the ads' exact CTR helps us achieve this maximum welfare._
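The arithmetic in Example 2.1 can be checked with a short sketch (ours) that ranks ads by eCPM and applies the position multipliers:

```python
import numpy as np

def welfare(ctr, bid, alpha):
    """Welfare of greedily assigning the ads with the highest eCPM = bid * CTR to the
    slots with multipliers alpha_1 >= ... >= alpha_K (the welfare-maximizing assignment)."""
    ecpm = np.asarray(bid) * np.asarray(ctr)
    top = np.argsort(-ecpm)[: len(alpha)]
    return float(np.sum(np.asarray(alpha) * ecpm[top]))

# Example 2.1: ad1 (CTR 0.1, bid 10), ad2 (CTR 0.4, bid 2), ad3 (CTR 0.9, bid 0.5)
print(welfare(ctr=[0.1, 0.4, 0.9], bid=[10, 2, 0.5], alpha=[1.0, 0.9]))  # about 1.72
```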
### Welfare Maximization and Ranking
We first show that we lose no generality by restricting our focus to single-slot ad (e.g., the first slot) auctions.
**Proposition 2.2** (Reduction to Single-slot Setting).: _The function \(f\) maximizes welfare in a \(K\)-slot auction only if it maximizes welfare in single-slot ad auctions held over subsets of the participating ads. Moreover, the ground-truth CTR function \(p^{*}\) maximizes welfare._
Detailed proof of the proposition is deferred to Appendix A.1. Consider the setting in Example 2.1, for instance. Only considering the welfare objective, note that we can auction off the two ad slots one by one, where \(\texttt{ad}_{1},\texttt{ad}_{2},\texttt{ad}_{3}\) participates in the auction for the first slot and \(\texttt{ad}_{2},\texttt{ad}_{3}\) participates in that for the second. In this setting, if we know the ads' ground-truth CTR, then \(\texttt{ad}_{1}\) wins the first slot and \(\texttt{ad}_{2}\) wins the second, achieving the maximum welfare.
By Proposition 2.2, we can see that welfare maximization in multi-slot ad auctions is no harder than welfare maximization in single-slot ad auctions, and this relies on the fact that the position multipliers are independent of advertisers. For the rest of the paper, we then without loss of generality focus only on single-slot auctions.
As welfare is maximized by the ground-truth CTR function, a common approach is to treat the problem as a classification problem, using \(y_{i}\) as feedback for learning \(p^{*}\)(Vasile et al., 2017; Hummel and McAfee, 2017). However, as noted in Section 1, this approach can suffer from a mismatch between the loss function and the business metric (in our case, welfare).
We notice that welfare maximization can be reduced to a learning-to-rank problem instead. Let \(i^{*}=\pi_{*}(1)\) be the index of the ad with the highest ground-truth eCPM and \(j^{*}=\pi_{f}(1)\) be that of the ad with the highest predicted eCPM. We note that
\[\mathrm{Welfare}_{*}(\{(b_{i},x_{i})\}_{i=1}^{n})-\mathrm{Welfare} _{f}(\{(b_{i},x_{i})\}_{i=1}^{n})\] \[=\sum_{i=1}^{n}\sum_{j=1}^{n}((b_{i}p_{i}-b_{j}p_{j})\,\mathds{1} \{i=i^{*}\}\,\mathds{1}\{j=j^{*}\}\] \[\times\mathds{1}\{b_{i}f(x_{i})\leqslant b_{j}f(x_{j})\}). \tag{2}\]
We defer the detailed derivation of (2) to Appendix A.3. Since \(b_{i^{*}}p_{i^{*}}\) yields the highest ground-truth eCPM, welfare is maximized if and only if \(j^{*}=i^{*}\). Consequently, as
long as \(f\) correctly _ranks_ each pair of observations according to their ground-truth eCPM, it also correctly identifies the ad with the highest ground-truth eCPM and maximizes welfare. The reduction to ranking generalizes to multi-slot auctions, and we defer a formal statement to Lemma A.1 in the appendix.
The same intuition is illustrated by Example 2.1: as long as we can rank the three ads according to their ground-truth eCPM (\(\texttt{ad}_{1}>\texttt{ad}_{2}>\texttt{ad}_{3}\)), then we can maximize the auction's welfare.
To summarize, we must rank the ads according to their ground-truth eCPM using a suitable CTR prediction function to maximize welfare. An approach that follows this observation is to learn a CTR prediction rule to rank the ads, leading to the proposed ranking-inspired losses.
## 3 Ranking-Inspired Loss Functions for Welfare Maximization
Let \(\mathcal{D}=\{(b_{i},x_{i}),y_{i}\}_{i=1}^{n}\) be a batch of \(n\) ads participating in one round of an auction, where \(y_{i}\sim\mathrm{Ber}(p_{i})\) indicates whether the ad has been clicked or not. We then call \(b_{i}p_{i}\) the ad's ground-truth eCPM and \(b_{i}y_{i}\) its empirical eCPM. Consider the following pairwise loss function (which we propose the seller minimize):
\[\ell(f;\mathcal{D})=\sum_{i=1}^{n}\sum_{j=1}^{n}\left(b_{i}y_{i}-b_{j}y_{j} \right)\mathds{1}\{b_{i}f(x_{i})\leq b_{j}f(x_{j})\}. \tag{3}\]
Let \(\mathcal{R}(f;\mathcal{D})=\mathbb{E}_{\{y_{i}\}_{i=1}^{n}}[\ell(f;\mathcal{ D})]\) denote the conditional risk induced by the loss function \(\ell\). Recalling that \(y_{i}\sim\text{Bernoulli}(p_{i})\), we know
\[\mathcal{R}(f;\mathcal{D})=\sum_{i=1}^{n}\sum_{j=1}^{n}(b_{i}p_{i}-b_{j}p_{j}) \mathds{1}\{b_{i}f(x_{i})\leq b_{j}f(x_{j})\}. \tag{4}\]
Observe the similarities between (2) and (4). The conditional risk \(\mathcal{R}(f;\mathcal{D})\) can be viewed as a proxy for the welfare suboptimality of \(f\), where we replace \(\mathds{1}\{i=i^{*}\}\mathds{1}\{j=j^{*}\}\) with 1. While \(j^{*}\) is easy to determine once \(f\) is given, we do not know the index with the highest ground-truth eCPM. Fortunately, as we show in the following proposition, minimizing \(\mathcal{R}(f;\mathcal{D})\) via empirical risk minimization is a reasonable proxy for minimizing welfare suboptimality.
**Proposition 3.1**.: _For any \(\mathcal{D}\), let \(\widehat{f}\) be an arbitrary and fixed minimizer of the conditional risk \(\mathcal{R}(f;\mathcal{D})\). We then know \(\widehat{f}\) ranks every pair in the sequence correctly, i.e. \(b_{i}p_{i}\geq b_{j}p_{j}\) if and only if \(b_{i}\widehat{f}(x_{i})\geq b_{j}\widehat{f}(x_{j})\). Moreover, the ground-truth CTR function \(p^{*}(\cdot)\) minimizes the conditional risk \(\mathcal{R}(f;\mathcal{D})\) for any \(\mathcal{D}\)._
Detailed proof for the proposition can be found in Appendix A.2. Proposition 3.1 shows that minimizing the conditional risk \(\mathcal{R}(f;\mathcal{D})\) is a surrogate for maximizing welfare and minimizing (3) is a reasonable choice of loss function. In the following theorem, we make explicit the connection between the conditional risk and welfare. With a slight abuse of notation let \(\mathrm{Welfare}_{f}(\mathcal{D})\), \(\mathrm{Welfare}_{*}(\mathcal{D})\) denote the welfare achieved by \(f\) and the optimal (achievable) welfare, respectively, when \(\{(b_{i},x_{i})\}_{i=1}^{n}\) are given by the dataset \(\mathcal{D}\). We emphasize the conditional risk \(\mathcal{R}(f;\mathcal{D})\) can be negative, an important fact to bear in mind in the context of the following theorem.
**Theorem 3.2**.: _The following holds for all \(f\in\mathcal{H}\) and \(\mathcal{D}\)_
\[\mathrm{Welfare}_{*}(\mathcal{D})\leq \mathrm{Welfare}_{f}(\mathcal{D})+\frac{1}{2}\mathcal{R}(f; \mathcal{D})\] \[+\frac{1}{4}\sum_{i=1}^{n}\sum_{j=1}^{n}\left|b_{i}p_{i}-b_{j}p_{ j}\right|.\]
_Moreover, the bound is tight for any minimizer of \(\mathcal{R}(f;\mathcal{D})\) and for all \(\mathcal{D}\)_
\[\min_{f\in\mathcal{H}}\mathcal{R}(f;\mathcal{D})=-\frac{1}{2}\sum_{i=1}^{n} \sum_{j=1}^{n}\left|b_{i}p_{i}-b_{j}p_{j}\right|.\]
See Appendix A.3 for detailed proof. We note that the theorem provides a valid lower bound for all possible \(f\in\mathcal{H}\). More importantly, for any dataset \(\mathcal{D}\), we can show that there is at least one minimizer of \(\mathcal{R}(f;\mathcal{D})\) thanks to the realizability assumption, for which \(\frac{1}{2}\mathcal{R}(f;\mathcal{D})+\frac{1}{4}\sum_{i=1}^{n}\sum_{j=1}^{n} \left|b_{i}p_{i}-b_{j}p_{j}\right|=0\). Crucially, the theorem implies that minimizing the conditional risk on any dataset \(\mathcal{D}\) maximizes welfare, further justifying the use of \(\ell(f;\mathcal{D})\).
While we have shown minimizing \(\mathcal{R}(f;\mathcal{D})\) suffices for welfare maximization, recovering the ground-truth CTR function \(p^{*}(\cdot)\) remains crucial for real-world ad auctions. For instance, revenue in generalized second price auctions depends on the pCTRs themselves, and functions that correctly rank the ads do not necessarily lead to high revenue. Fortunately, by adding a calibrated classification loss to \(\ell(f;\mathcal{D})\), we can ensure that \(p^{*}(\cdot)\) minimizes the (unconditional) risk. Particularly, we have the following proposition.
**Proposition 3.3**.: _Let \(h(f;\mathcal{D})\) denote an arbitrary loss function such that \(p^{*}\) is the unique minimizer of \(\mathbb{E}_{\mathcal{D}}[h(f;\mathcal{D})]\). For any constant \(\lambda>0\), \(p^{*}\) is the unique minimizer of \(\mathbb{E}_{\mathcal{D}}[\ell(f;\mathcal{D})+\lambda h(f;\mathcal{D})]\)._
See Appendix A.4 for detailed proof. We note that logistic loss and mean squared error are both valid choices for \(h(f;\mathcal{D})\) in Proposition 3.3.
### Easy-to-Optimize Surrogates
While \(\ell(f;\mathcal{D})\) is attractive as it is closely related to the welfare, the function itself is nondifferentiable and cannot be
efficiently optimized using first-order methods (e.g., SGD) due to the indicator variables. We thus propose two differentiable surrogates with provable performance guarantees
\[\ell_{\sigma}^{\log}(f;\mathcal{D})=\sum_{i=1}^{n}\sum_{j=1}^{n}(b_{ i}y_{i}-b_{j}y_{j})\\ \times\log(1+\exp(-\sigma(b_{i}f(x_{i})-b_{j}f(x_{j})))), \tag{5}\]
and
\[\begin{split}\ell_{\sigma}^{\mathrm{hinge}}(f;\mathcal{D})=\sum_ {i=1}^{n}\sum_{j=1}^{n}(b_{i}y_{i}-b_{j}y_{j})\\ \times(-\sigma(b_{i}f(x_{i})-b_{j}f(x_{j})))_{+}.\end{split} \tag{6}\]
For (5), we replace the indicators in \(\ell(f;\mathcal{D})\) with the log logistic function \(-\log(1+\exp(-\sigma(b_{i}f(x_{i})-b_{j}f(x_{j}))))\). Similarly, (6) acts as a surrogate to \(\ell(f;\mathcal{D})\) with the indicator replaced by \((-\sigma(b_{i}f(x_{i})-b_{j}f(x_{j})))_{+}\) instead, where for any \(a\in\mathbb{R}\) we let \((a)_{+}=\max(0,a)\). While the function \((\cdot)_{+}\) itself is not differentiable at \(x=0\), it is differentiable almost everywhere and can be easily optimized using its subderivative.
For both surrogates, the term \(\sigma\) is a manually adjustable parameter controlling how much we penalize small margins between a pair of eCPM, \(b_{i}f(x_{i})-b_{j}f(x_{j})\). As we can see from Figure 1, for pairs of ads whose predicted eCPMs are close to each other, a larger \(\sigma\) accentuates the difference between them and leads to a surrogate value close to one. However, as \(\sigma\) increases, the surrogate value for ads with large gaps in predicted eCPMs tend to be much larger than one. Adjusting \(\sigma\) is then a balancing act between these two kinds of pairs.
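A minimal PyTorch sketch of the two surrogates, assuming a single auction of \(n\) ads with bid, click, and pCTR tensors, is given below; the function and variable names are ours and the toy usage is purely illustrative.

```python
import torch
import torch.nn.functional as F

def pairwise_rank_losses(bids, clicks, pctr, sigma=1.0):
    """Sketch of the surrogates in (5) and (6) for one auction of n ads.

    bids, clicks, pctr: 1-D tensors holding b_i, y_i and f(x_i).
    Returns (logistic surrogate, hinge surrogate); both are differentiable in pctr.
    """
    emp_ecpm = bids * clicks                        # b_i y_i
    pred_ecpm = bids * pctr                         # b_i f(x_i)
    weight = emp_ecpm[:, None] - emp_ecpm[None, :]  # b_i y_i - b_j y_j
    margin = pred_ecpm[:, None] - pred_ecpm[None, :]
    loss_log = (weight * F.softplus(-sigma * margin)).sum()            # eq. (5)
    loss_hinge = (weight * torch.clamp(-sigma * margin, min=0)).sum()  # eq. (6)
    return loss_log, loss_hinge

# toy usage with four ads
b = torch.tensor([1.0, 2.0, 0.5, 3.0])
y = torch.tensor([1.0, 0.0, 1.0, 0.0])
f = torch.tensor([0.3, 0.2, 0.6, 0.1], requires_grad=True)
loss, _ = pairwise_rank_losses(b, y, f, sigma=1.0)
loss.backward()
```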
Regardless of the choice of surrogate for the indicator function, the surrogate losses themselves remain closely related to (2), which we highlight in the following theorems.
**Theorem 3.4**.: _Assuming all bids are bounded by some \(B\in\mathbb{R}_{>0}\), setting \(\sigma=2/B\) ensures for any \(f\in\mathcal{H}\) and \(\mathcal{D}\)_
\[\begin{split}\left|\mathbb{E}_{\{y_{i}\}_{i=1}^{n}}[\ell_{ \sigma}^{\log}(f;\mathcal{D})]-\mathcal{R}(f;\mathcal{D})\right|\leq\Delta, \\ \left|\mathbb{E}_{\{y_{i}\}_{i=1}^{n}}[\ell_{\sigma}^{\mathrm{hinge }}(f;\mathcal{D})]-\mathcal{R}(f;\mathcal{D})\right|\leq\Delta,\end{split}\]
_where \(\Delta=\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}|b_{i}p_{i}-b_{j}p_{j}|\) is a problem-dependent constant._
See Appendix A.5 for detailed proof. Theorem 3.4 shows that \(\ell_{\sigma}^{\log}(f;\mathcal{D})\) and \(\ell_{\sigma}^{\mathrm{hinge}}(f;\mathcal{D})\) are closely tied to the original loss \(\ell(f;\mathcal{D})\). While assuming the CPC bids are bounded implicitly implies the eCPMs' are also bounded, the assumption is mild and does not restrict the parametric form of the eCPMs' distribution. While the surrogates do not exactly match the proposed loss \(\ell(f;\mathcal{D})\), the gap is due to approximating the indicators in \(\ell(f;\mathcal{D})\) and cannot be avoided.
### Failure of Directly Applying Learning-to-Rank
It may be tempting to further exploit the connection between welfare and ranking over predicted eCPMs by applying a learning-to-rank loss function directly on the observed eCPMs \(b_{i}y_{i}\). As we show below, the approach, unfortunately, fails, as the inclusion of bids makes the empirical observation \(\mathds{1}\{b_{i}y_{i}\geq b_{j}y_{j}\}\) a poor estimate of \(\mathds{1}\{b_{i}p_{i}\geq b_{j}p_{j}\}\).
**Proposition 3.5**.: _For any \(1/2>\epsilon>0\), there exists a pair of ads i and j such that_
\[\Pr(\mathds{1}\{b_{i}y_{i}\geq b_{j}y_{j}\}=\mathds{1}\{b_{i}p_{i}\geq b_{j}p_ {j}\})=\epsilon,\]
_where \((b_{i},p_{i},y_{i})\) and \((b_{j},p_{j},y_{j})\) are the CPC bids, ground-truth CTR, and realized click indicator for the two ads._
See Appendix A.6 for detailed proof. Intuitively, the construction of the counterexample in Proposition 3.5 relies on the fact that the ground-truth eCPM of an ad increases as its corresponding CPC bid increases, but the probability that the ad is clicked does not. In other words, for any ad \(i\), the probability that \(b_{i}y_{i}\) is non-zero does not depend on \(b_{i}\) while the ground-truth eCPM does, creating a discrepancy between the ground-truth eCPM and the empirical eCPM. We may then strategically manipulate \(b_{i}\) to construct an example satisfying Proposition 3.5.
Crucially, Proposition 3.5 shows that there exist pairs of ads whose empirically observed CPM rankings agree with their ground-truth eCPM rankings with probability arbitrarily close to zero. Unless strong assumptions are made on the distributions of empirically observed CPMs, it is impossible to directly apply off-the-shelf learning-to-rank loss functions for \(\mathds{1}\{b_{i}y_{i}\geq b_{j}y_{j}\}\).
On the other hand, (3) avoids the pitfall by weighing each entry by \((b_{i}y_{i}-b_{j}y_{j})\). When conditioned on any CTR prediction rule \(f(\cdot)\), by the linearity of expectation, we can
see that the weight is an unbiased estimate of the difference in ground-truth eCPM. The fact that \(\ell(f;\mathcal{D})\) is linear in each observed eCPM \(b_{i}y_{i}\) is crucial, as the linearity ensures that the loss function accurately reflects the differences in \(b_{i}p_{i}\), enabling us to relate the conditional risk to the actual welfare loss and obtain theoretical guarantees without any assumptions on the empirical eCPMs.
To the best of our knowledge, no existing works on learning-to-rank use loss functions of this form, and our proposed methods are uniquely capable of avoiding the challenge highlighted by Proposition 3.5. While resembling a learning-to-rank loss, (3) is at its core a loss function that resembles the shape of the welfare objective in an ad auction, ensuring that optimizing the loss is closely related to optimizing welfare.
## 4 Replacing \(y_{i}\) with Predictions from the Teacher Model
A concern for (5) and (6) is that their variance scales with \(b_{i}^{2}\), the squared values of the CPC bids. Combined with the noisiness of \(y_{i}\), the resulting loss may be overly noisy. While the issue might be mitigated by properly pre-processing the CPC bids, we propose a theoretically justifiable alternative inspired by student-teacher learning (Hinton et al., 2015). In fact, distillation loss associated with the prediction from the teacher model is widely used in industrial-scale advertising systems (Anil et al., 2022). This technique is shown to be helpful for stabilizing the training and improving the pCTR accuracy of the student model.
The idea is straightforward. Let \(\widehat{p}(\cdot)\) be a teacher network trained on the same dataset and we replace \(y_{i}\) with \(\widehat{p}(x_{i})\). We show that doing so leads to an empirical loss that is close to \(\mathcal{R}(f;\mathcal{D})\), the conditional risk, as long as the teacher network itself is sufficiently accurate. We begin with the following theorem for replacing the \(y_{i}\)'s in (3).
**Theorem 4.1**.: _Let \(\widehat{p}\) be an estimate of \(p^{*}\) such that \(\mathbb{E}_{x}[(\widehat{p}(x)-p^{*}(x))^{2}]\leqslant\epsilon\). Let \(\widehat{\ell}(f;\mathcal{D})\) be (3) but with each \(y_{i}\) replaced by \(\widehat{p}_{i}=\widehat{p}(x_{i})\), i.e.,_
\[\widehat{\ell}(f;\mathcal{D})=\sum_{i=1}^{n}\sum_{j=1}^{n}(b_{i}\widehat{p}_{i}-b_{j}\widehat{p}_{j})\,\mathds{1}\{b_{i}f(x_{i})\leq b_{j}f(x_{j})\}.\]
_Assuming all bids are upperbounded by positive constant \(B\in\mathbb{R}_{>0}\), for any \(f\in\mathcal{H}\) we have_
\[\mathbb{E}_{\mathcal{D}}[\widehat{\ell}(f;\mathcal{D})-\mathcal{R}(f; \mathcal{D})]\leqslant(n-1)nB\sqrt{\epsilon}\]
_where \(n\) is the number of ads._
See Appendix A.7 for proof. As \(\widehat{\ell}(f;\mathcal{D})\) sums over all pairs of ads, the bound necessarily grows in \(\mathcal{O}(n^{2})\), and the factor can be removed if we use the average over the pairs instead. While the teacher network may be used in ad auctions as-is, student networks still offer several benefits in addition to the theoretical guarantee in Theorem 4.1. First, teacher networks may be costly to deploy, thus student networks offer efficiency benefits from knowledge distillation. Second, the ranking losses may help the student network better differentiate the eCPMs of pairs of ads, leading to higher welfare, as we observe in experiments.
It is also reasonable to suggest directly applying learning-to-rank with \(\mathds{1}\{b_{i}\widehat{p}(x_{i})\geqslant b_{j}\widehat{p}(x_{j})\}\) as the labels. However, theoretical guarantees for the approach require additional assumptions on the distribution of the gaps between pairs of predicted eCPM, which are not needed for Theorem 4.1.
Recalling Theorem 3.4 and Theorem 4.1, it is not hard to see that replacing \(y_{i}\) with \(\widehat{p}(x_{i})\) in (5) and (6) leads to losses that are also sufficiently close to \(\mathcal{R}\). We instead focus on using the teacher network to improve calibration.
### Improving Calibration with the Teacher Network
A drawback shared by (5) and (6) is that they are not calibrated. While both penalize pCTR functions for incorrectly ranking pairs of ads, they also reward pCTR functions that overestimate the margin between pairs of ads. As the minimizers of their expected values are not necessarily the ground-truth CTR function, using these losses may have negative consequences on other important metrics such as revenue or area under the curve. Fortunately, we show that using a teacher network also improves the calibration of the loss function. We propose the following loss function.
\[\begin{split}\widehat{\ell}_{\sigma}^{\text{hinge},+}(f; \mathcal{D})&=\sum_{i=1}^{n}\sum_{j=1}^{n}(b_{i}\widehat{p}_{i}- b_{j}\widehat{p}_{j})_{+}\\ &\quad\quad\times(-\sigma(b_{i}f(x_{i})-b_{j}f(x_{j})))_{+}, \end{split} \tag{7}\]
Intuitively, \(\widehat{\ell}_{\sigma}^{\text{hinge},+}(f;\mathcal{D})\) no longer punishes \(f\) for having a small margin between predicted eCPMs, as long as \(f\) ranks the pair the same way \(\widehat{p}\) does. When the teacher network is sufficiently close to the ground-truth, the loss function eliminates the bias that (5) and (6) have towards functions with larger margins between pairs. Additionally, compared to directly using \(\widehat{p}(\cdot)\), (7) better reflects the impact that the pCTRs have on welfare and has theoretical guarantees in terms of welfare performance.
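A corresponding sketch of (7), again in PyTorch and with names of our choosing, weighs each pair by the positive part of the teacher's eCPM gap:

```python
import torch

def calibrated_hinge_loss(bids, teacher_pctr, pctr, sigma=1.0):
    """Sketch of (7): each pair is weighted by the positive part of the teacher's eCPM gap,
    so pairs the student orders the same way as the teacher incur zero loss."""
    teacher_ecpm = bids * teacher_pctr              # b_i \hat{p}_i
    pred_ecpm = bids * pctr                         # b_i f(x_i)
    weight = torch.clamp(teacher_ecpm[:, None] - teacher_ecpm[None, :], min=0)
    margin = pred_ecpm[:, None] - pred_ecpm[None, :]
    return (weight * torch.clamp(-sigma * margin, min=0)).sum()
```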
We now present theoretical justification for the approach. Recall from Vasile et al. (2017) that calibration in ad auctions is defined as follows.
**Definition 4.2** (Calibration).: A loss function \(\ell^{\prime}(f;\mathcal{D})\) is calibrated if its expected value \(\mathbb{E}_{\mathcal{D}}[\ell^{\prime}(f;\mathcal{D})]\) is minimized by the ground-truth CTR function \(p^{*}\).
Based on Definition 4.2, we first define a slightly relaxed
notion of calibration, \(\epsilon\)-approximate calibration.
**Definition 4.3** (\(\epsilon\)-Approximate Calibration).: A loss function \(\ell^{\prime}(f;\mathcal{D})\) is said to be \(\epsilon\)-approximately calibrated if the expected value of the loss achieved by the ground-truth CTR function \(p^{*}\) is at most \(\epsilon\) greater than the minimum, namely
\[\mathbb{E}_{\mathcal{D}}[\ell^{\prime}(p^{*};\mathcal{D})]-\min_{f\in \mathcal{H}}\mathbb{E}_{\mathcal{D}}[\ell^{\prime}(f;\mathcal{D})]\leqslant\epsilon.\]
We then have the following guarantee for \(\widehat{\ell}_{\sigma}^{\text{hinge},+}\).
**Theorem 4.4**.: _Let \(\widehat{p}\) be an estimate of \(p^{*}\) such that \(\mathbb{E}[(\widehat{p}(x)-p^{*}(x))^{2}]\leqslant\epsilon\). Assuming all bids are upper bounded by some \(B\in\mathbb{R}_{>0}\), for any \(f\in\mathcal{H}\) we have_
\[\mathbb{E}_{\mathcal{D}} [\mathrm{Welfare}_{*}(\mathcal{D})]\] \[\leqslant\mathbb{E}_{\mathcal{D}}[\mathrm{Welfare}_{f}( \mathcal{D})]+\mathbb{E}_{\mathcal{D}}[\widehat{\ell}_{\sigma}^{\text{hinge},+ }(f;\mathcal{D})]\] \[\quad+\frac{n(n-1)}{2}B\max\{1,\sigma B-1\}\] \[\quad+n(n-1)\sigma B^{2}\sqrt{\epsilon}.\]
_Moreover, the loss function \(\widehat{\ell}_{\sigma}^{\text{hinge},+}(f;\mathcal{D})\) is \(\mathcal{O}(\sqrt{\epsilon})\)-approximately calibrated._
See Appendix A.8 for detailed proof. An important feature Theorem 4.1 and Theorem 4.4 share is that they depend only on the \(\ell_{2}\) generalization error of the teacher network, and not on the explicit parametric assumptions on the distribution of eCPM. In other words, for any \(\widehat{p}\) we can simply use off-the-shelf results on its generalization error to show that using the induced \(\widehat{\ell}_{\sigma}^{\text{hinge},+}(f;\mathcal{D})\) is approximately calibrated, with only the mild assumption that the CPC bids are bounded.
### Weighing with Teacher Networks
The inclusion of the teacher network further guides us in developing theoretically-inspired weights for the proposed losses. The goal of the weight for the pair \(i,j\) is to mimic the indicator product \(\mathds{1}\{i=i^{*}\}\mathds{1}\{j=j^{*}\}\), where \(i^{*}=\operatorname*{argmax}_{i\in[n]}b_{i}p_{i}\) and \(j^{*}=\operatorname*{argmax}_{j\in[n]}b_{j}f(x_{j})\), so that the resulting loss better resembles the welfare suboptimality in (2). The first indicator corresponds to the event that the ad \(i\) has the highest ground-truth eCPM and the second the event that the ad \(j\) has the highest predicted eCPM. The weight should then be increasing in both \(b_{i}\widehat{p}(x_{i})\) and \(b_{j}f(x_{j})\), with \(\widehat{p}\) being the teacher network.
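The exact weighting used in the experiments is not spelled out here, so the following is only one plausible instantiation (an assumption on our part, not the authors' choice): softmax scores over teacher eCPMs and over predicted eCPMs act as smooth stand-ins for the two indicators.

```python
import torch

def pair_weights(bids, teacher_pctr, pctr, kappa=5.0):
    """One plausible (assumed) instantiation of the Section 4.2 weights: softmax scores
    over teacher eCPMs and predicted eCPMs approximate 1{i = i*} and 1{j = j*};
    kappa controls how sharply each score concentrates on the top ad."""
    teacher_ecpm = bids * teacher_pctr
    pred_ecpm = bids * pctr.detach()        # detach so the weight acts as a constant here
    s_true = torch.softmax(kappa * teacher_ecpm, dim=0)
    s_pred = torch.softmax(kappa * pred_ecpm, dim=0)
    return s_true[:, None] * s_pred[None, :]   # weight for the (i, j) pair
```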
## 5 Experiments
We now demonstrate the advantages of our proposed losses on both simulated data and the Criteo Display Advertising Challenge dataset, a popular real-world benchmark for CTR prediction in ad auctions. Recalling Proposition 3.3, we use the weighted sum of the logistic loss and the proposed ranking losses for all experiments to ensure the learned CTR model is sufficiently close to the ground truth.
### Synthetic Dataset
For the simulation setting, we assume that the ads' features are 50-dimensional random vectors where each component is i.i.d. drawn from the standard normal distribution, namely \(x_{i}\sim\mathcal{N}(0,I_{50}),\) where \(I_{50}\) denotes the 50-dimensional identity matrix. For training, we generate 10,000 \(x_{i}\)'s from the \(\mathcal{N}(0,I_{50})\) distribution and generate the corresponding ground-truth CTR from a logistic model and the CPC bids from a log-normal distribution. We then draw the click indicators \(y_{i}\sim\text{Ber}(p_{i})\). We defer a more detailed description of the data-generating process to Appendix B.
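A sketch of one plausible instantiation of this data-generating process is below; the CTR model weights, the log-normal parameters, and the seed are our assumptions, with the exact choices deferred to Appendix B of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, dim = 10_000, 50

X = rng.normal(size=(n_train, dim))                   # x_i ~ N(0, I_50)
theta = rng.normal(size=dim) / np.sqrt(dim)           # assumed CTR model weights
p = 1.0 / (1.0 + np.exp(-X @ theta))                  # ground-truth CTR from a logistic model
b = rng.lognormal(mean=0.0, sigma=1.0, size=n_train)  # log-normal CPC bids (assumed params)
y = rng.binomial(1, p)                                # click indicators y_i ~ Ber(p_i)
```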
A two-layer neural network is used, where the hidden layer has 50 nodes with ReLU activation, and the output layer has one node with sigmoid activation. For evaluation, we simulate 2,000 auctions with 50 ads each. The training and evaluation processes are then repeated 30 times.
We begin by introducing the baselines we consider: logistic loss (also referred to as cross-entropy) and two versions of weighted logistic loss. Logistic loss is commonly used for training models for predicting CTRs, and is used by Guo et al. (2017); Lian et al. (2018); Chen et al. (2016) among many other works. Existing works (Vasile et al., 2017; Hummel and McAfee, 2017) suggest the use of a weighted logistic loss, with each entry weighted by the CPC bid. Finally, Vasile et al. (2017) propose weighing the logistic loss by the square root of the CPC bid.
We focus on three loss functions representative of what we propose: \(\ell_{\sigma=1}^{\text{log}}\), \(\widehat{\ell}_{\sigma=1}^{\text{log}}\), and \(\widehat{\ell}_{\sigma=1}^{\text{hinge},+}\). The first and the third correspond to (5) and (7), respectively. The second, \(\widehat{\ell}_{\sigma=1}^{\text{log}}\), replaces the \(y_{i}\)'s in \(\ell_{\sigma=1}^{\text{log}}\) with \(\widehat{p}_{i}\) obtained from a teacher network.
As discussed immediately after Theorem 3.2, we add binary cross entropy loss to \(\ell_{\sigma=1}^{\text{log}}\), \(\widehat{\ell}_{\sigma=1}^{\text{log}}\), and \(\widehat{\ell}_{\sigma=1}^{\text{hinge},+}\) and optimize over the composite loss. Additionally, motivated by Section 4.2, we use logistic functions to weigh each pair in \(\widehat{\ell}_{\sigma=1}^{\text{hinge},+}\) and \(\widehat{\ell}_{\sigma=1}^{\text{log}}\). Both \(\widehat{\ell}_{\sigma=1}^{\text{hinge},+}\) and \(\widehat{\ell}_{\sigma=1}^{\text{log}}\) use the model trained with logistic loss as the teacher network. We defer a more detailed discussion to Appendix B.
As we can see from Figure 2, all three proposed pairwise ranking losses achieve higher test time welfare than the naive baselines. As we use the same model structure and optimizer for all models, it is further possible that with more careful tuning, the advantages of the pairwise ranking losses may be more pronounced.
**Student-Teacher Learning.** Comparing the performance of \(\ell_{\sigma=1}^{\text{log}}\) and \(\widehat{\ell}_{\sigma=1}^{\text{log}}\) shows that student-teacher learning is overall beneficial for simulated data. Moreover, while \(\widehat{\ell}_{\sigma=1}^{\text{hinge},+}\) is theoretically proven to be calibrated by Theorem 4.4, in the simulated task we found that the loss does not perform as well as the other proposed methods. We conjecture that
the relatively modest performance is due to the fact that the hinge function is not as smooth as the logistic function, and is thus not as well suited for training neural networks.
**Comparison with Existing Works.** The experiment results also show that the loss functions derived in earlier works may depend on unrealistic assumptions and may be lacking in empirical justification, as can be seen in the performance of both weighted logistic losses. Regardless, we have shown that our proposed methods significantly outperform existing baselines.
### Criteo Dataset
We use the popular Criteo Display Advertising Challenge dataset. We follow standard data preprocessing procedures and use a standard 8-1-1 train-validation-test split commonly found in the literature. We defer to Appendix C for a more detailed description of the setup.
We note there are several limitations to the dataset. Firstly, the Criteo dataset only includes ads that are shown. In an ad auction setting, this means that all ads have won their respective multi-slot auction. Moreover, the Criteo dataset only includes anonymous features, which means we have no access to key attributes such as the CPC bid or the slot for each ad. Lastly, we do not know the specific auction each ad belongs to. Unfortunately, these limitations are shared by all openly available benchmarks to the best of our knowledge.
For the first limitation, we note that it is near-impossible to learn accurate CTR models without assuming the CTRs of the shown ads follow the same distribution as those of the unshown ads. Handling the intrinsic bias between shown and unshown ads is very challenging and outside the scope of this paper. While the slot each ad belongs to is unavailable, as we argued previously, learning a position-based CTR model is not the focus of this work, and here we learn the CTR of each ad, assuming that it is assigned to the first slot. Finally, while we do not know the exact auction round, from Proposition 2.2, we know that maximizing the welfare of multi-slot ad auctions requires maximizing the welfare of single-slot auctions over subsets of participating ads (given that the position multipliers are independent of the advertisers). Thus, it remains viable to treat each minibatch as a specific instance of a single-slot auction.
We generate the CPC bids using the outputs from a DeepFM model (Guo et al., 2017) with randomly initialized weights, ensuring that the generated bids follow a log-normal distribution. Particularly, let \(h(\cdot)\) denote a randomly initialized DeepFM model, we set \(b_{i}=\exp(c\cdot h(x_{i})+\epsilon_{i})\), with \(c\) being a scaling factor and \(\epsilon_{i}\) a Gaussian noise.
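A small sketch (ours) of this bid-generation step, taking the scores of a randomly initialized model as input, is:

```python
import numpy as np

def generate_bids(scores, c=1.0, noise_std=0.1, rng=None):
    """b_i = exp(c * h(x_i) + eps_i): log-normal CPC bids from the outputs `scores` of a
    randomly initialized scoring model h (a DeepFM in the paper); the values of c and
    the noise standard deviation here are illustrative, not the paper's."""
    rng = rng if rng is not None else np.random.default_rng(0)
    eps = rng.normal(0.0, noise_std, size=len(scores))
    return np.exp(c * np.asarray(scores) + eps)
```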
We experiment with both DeepFM (Guo et al., 2017) and DCN (Wang et al., 2017), two popular models with great performance on the Criteo dataset whose parameter choices are well-documented. To ensure that our loss functions benefit welfare _holding all else constant_, we did not perform any parameter tuning or architecture search and used the model architectures and training protocols specified in the papers.
In this setting, we omit \(\widehat{\ell}_{\sigma}^{\text{hinge},+}\) as it is non-smooth and not well suited for the complex neural network architectures considered here, given our empirical study. Logistic loss is chosen as the baseline, and \(\ell_{\sigma=3}^{\log}\) and \(\widehat{\ell}_{\sigma=3}^{\log}\) are the proposed candidates due to their smoothness. Here we set \(\sigma=3\) to better mimic the shape of the indicator variable. We omit the weighted logistic losses proposed in Vasile et al. (2017); Hummel and McAfee (2017) due to their poor performance on the synthetic dataset. We compare the losses based on three metrics: test-time welfare, area-under-curve (AUC) loss, and logistic loss, where AUC loss is defined as \(1-\text{AUC}\).
For both DeepFM and DCN, we repeat the following procedure 10 times. We fit the models using logistic loss and \(\ell_{\sigma=3}^{\log}\). The model fitted using logistic loss is then used as the teacher network, whose outputs are used to construct \(\widehat{\ell}_{\sigma=3}^{\log}\). We then evaluate the welfare, AUC loss, and logistic loss of the three models on the test set.
We report the welfare and AUC loss for DeepFM in Table 1 and those for DCN in Figure 3. Additional results
Figure 2: Test welfare on simulated data (higher is better). From left to right: **(In Blue)** models trained with logistic loss; logistic loss weighted by \(b_{i}\) (Hummel and McAfee, 2017); logistic loss weighted by \(\sqrt{b_{i}}\) (Vasile et al., 2017); **(In Yellow)** proposed \(\ell_{\sigma=1}^{\log}\) indicator replaced by logistic function (defined in (5)); \(\widehat{\ell}_{\sigma=1}^{\log}\) indicator replaced by logistic function + student-teacher learning; \(\widehat{\ell}_{\sigma=1}^{\text{hinge},+}\) indicator replaced by hinge function + student-teacher learning (defined in (7)).
including comparisons on the wall-clock run time can be found in Appendix C.
As we observe from Table 1 and Figure 3, the proposed losses significantly improve test time welfare at a minimal cost (if any) to classification performance. Moreover, the improvement does not depend on the specific underlying model structure, and student-teacher learning continues to prove to be beneficial. Surprisingly, the proposed losses may also benefit AUC, a classification metric. We conjecture the improvement is due to the ranking loss formulation, which forces the model to better differentiate the ads' CTRs.
## 6 Conclusion and Future Work
We propose surrogates that improve welfare for ad auctions, with theoretical guarantees and good empirical performance. We hypothesize that the improvements will be more pronounced if the model architecture is further tuned for the proposed losses, and we leave this architecture search as a future direction.
## Acknowledgements
Part of the work was completed while Boxiang Lyu and Zachary Robertson were Student Researchers at Google Research Mountain View. We would like to thank Phil Long for the initial discussions, and Ashwinkumar Badanidiyuru, Zhuoshu Li, and Aranyak Mehta for the insightful feedback.
|
2310.04107
|
A Characterization of State Transfer on Double Subdivided Stars
|
A subdivided star $SK_{1,l}$ is obtained by identifying exactly one pendant
vertex from $l$ copies of the path $P_3.$ This study is on the existence of
quantum state transfer on double subdivided star $T_{l,m}$ which is a pair of
subdivided stars $SK_{1,l}$ and $SK_{1,m}$ joined by an edge to the respective
coalescence vertices. Using the Galois group of the characteristic polynomial
of $T_{l,m},$ we analyze the linear independence of its eigenvalues which
uncovers no perfect state transfer in double subdivided stars when considering
the adjacency matrix as the Hamiltonian of corresponding quantum system. Then
we establish a complete characterization on double subdivided stars exhibiting
pretty good state transfer.
|
Sarojini Mohapatra, Hiranmoy Pal
|
2023-10-06T09:15:05Z
|
http://arxiv.org/abs/2310.04107v2
|
# A Characterization of State Transfer on Double Subdivided Stars
###### Abstract
A subdivided star \(SK_{1,l}\) is obtained by identifying exactly one pendant vertex from \(l\) copies of the path \(P_{3}\). This study is on the existence of quantum state transfer on the double subdivided star \(T_{l,m}\), which is a pair of subdivided stars \(SK_{1,l}\) and \(SK_{1,m}\) joined by an edge at the respective coalescence vertices. Using the Galois group of the characteristic polynomial of \(T_{l,m}\), we analyze the linear independence of its eigenvalues, which uncovers that there is no perfect state transfer in double subdivided stars when the adjacency matrix is considered as the Hamiltonian of the corresponding quantum system. Then we establish a complete characterization of double subdivided stars exhibiting pretty good state transfer.
_Keywords:_ Spectra of graphs, Field extensions, Galois group, Perfect state transfer, Pretty good state transfer.
_MSC: 15A16, 05C50, 12F10, 81P45._
## 1 Introduction
The transfer of quantum states between two different locations in a quantum network plays an important role in quantum information processing. Let a quantum network of \(n\) interacting qubits be modeled by a graph \(G\) where the vertices correspond to the qubits and the edges represent the interactions between qubits. Transfer of state among such qubits can be described using the continuous-time quantum walk operator acting on the characteristic vectors of the vertices. If the network admits a transfer of quantum state between two qubits without any loss of information, then this phenomenon is called perfect state transfer (PST). The main objective here is to identify quantum networks which enable high probability state transfer between qubits. We consider state transfer with respect to the adjacency matrix of a graph \(G\) having the vertex set \(\{a_{1},a_{2},\ldots,a_{n}\}.\) The adjacency matrix \(A=[a_{jk}]\) is the \(n\times n\) matrix having \(a_{jk}=1\) if there is an edge between \(a_{j}\) and \(a_{k},\) otherwise \(a_{jk}=0\). The continuous-time quantum walk on \(G\) relative to the adjacency matrix \(A\) is defined by
\[U(t):=\exp{(itA)},\text{ where }t\in\mathbb{R}\text{ and }i=\sqrt{-1}.\]
Farhi and Gutmann [18] first used the method of continuous-time quantum walks in analysing various quantum transportation phenomena. One can observe that the transition matrix \(U(t)\) is symmetric as well as unitary. The square of the absolute value of the \((a,b)\)-th entry of \(U(t)\) provides the probability of state transfer from site \(a\) to site \(b\) after time \(t\). Suppose all the distinct eigenvalues of \(A\) are \(\theta_{1},\theta_{2},\ldots,\theta_{d}\). Let \(E_{\theta_{j}}\) denote the orthogonal projection onto the eigenspace corresponding to \(\theta_{j}.\) The spectral decomposition of the transition matrix \(U(t)\) can be evaluated as
\[U(t)=\sum_{j=1}^{d}\exp{(it\theta_{j})}E_{\theta_{j}}.\]
Let \(\mathbf{e}_{a}\) denote the characteristic vector corresponding to a vertex \(a\) of \(G\). The eigenvalue support of \(a\) is defined by \(\sigma_{a}=\{\theta_{j}:E_{\theta_{j}}\mathbf{e}_{a}\neq 0\}.\) The graph \(G\) is said to exhibit PST between a pair of distinct vertices \(a\) and \(b\) if there exists \(\tau\in\mathbb{R}\) such that
\[U(\tau)\mathbf{e}_{a}=\gamma\mathbf{e}_{b},\text{ for some }\gamma\in\mathbb{C}. \tag{1}\]
It is now evident that the existence of PST between \(a\) and \(b\) depends only on the eigenvalues in the support \(\sigma_{a}\) and the corresponding orthogonal projections. PST in quantum
communication networks was first introduced by Bose in [6]. There it is shown that PST occurs between the end vertices of the path \(P_{2}\) on two vertices. In [9], Christandl et al. proved that PST occurs between the end vertices of a path \(P_{n}\) on \(n\) vertices if and only if \(n=2\), \(3\). Remarkably, Basic [4] established a complete characterization of integral circulant graphs having PST. The existence of PST in several well-known families of graphs and their products is also investigated in [1, 2, 5, 8, 13, 26, 27], etc. Later, Coutinho et al. [11] showed that there is no PST in a graph \(G\) between two cut vertices \(a\) and \(b\) that are connected only by the path \(P_{2}\) or \(P_{3}\), unless the graph \(G\) is itself \(P_{2}\) or \(P_{3}.\) This implies that the double subdivided star \(T_{l,m}\) does not exhibit PST between the coalescence vertices of degree \(l\) and \(m\) for all positive integers \(l\) and \(m\). Here we show that \(T_{l,m}\) does not exhibit PST between any pair of vertices for all such cases using the linear independence of the eigenvalues of \(T_{l,m}\) in Section 3.
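For a concrete illustration of definition (1), the short numerical check below (our own, not from the source) confirms PST between the end vertices of \(P_{3}\); the transfer time \(\pi/\sqrt{2}\) is the standard one for \(P_{3}\):

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of the path P_3 with vertices 0 - 1 - 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

tau = np.pi / np.sqrt(2)
U = expm(1j * tau * A)        # continuous-time quantum walk U(t) = exp(itA)

print(abs(U[0, 2]))           # ~1.0, so e_0 is mapped onto e_2 up to a phase
print(np.round(U[0, 2], 6))   # ~ -1, i.e. gamma = -1 in (1)
```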
In case \(a=b\) in (1), the graph \(G\) is said to be periodic at the vertex \(a\) with period \(\tau\). If \(G\) is periodic at every vertex with the same period, then it is called a periodic graph. It is well known that if there is PST between a pair of vertices \(a\) and \(b\) at time \(\tau\) then \(G\) is periodic at both \(a\) and \(b\) with period \(2\tau\). Therefore periodicity at the vertex \(a\) is necessary for the existence of PST from \(a\). In what follows, we find that if \(G\) is periodic at a vertex then it must satisfy the following ratio condition.
**Theorem 1**.: _[_19_]_ _Suppose a graph \(G\) is periodic at vertex \(a\). If \(\theta_{k},\theta_{l},\theta_{r},\theta_{s}\) are eigenvalues in the support of \(a\) and \(\theta_{r}\neq\theta_{s}\), then_
\[\frac{\theta_{k}-\theta_{l}}{\theta_{r}-\theta_{s}}\in\mathbb{Q}.\]
The existence of PST in graphs is a rare phenomenon, as observed in [21], and consequently the notion of pretty good state transfer (PGST) was introduced in [20, 31]. A graph \(G\) is said to exhibit PGST between a pair of distinct vertices \(a\) and \(b\) if there exists a sequence \(\tau_{k}\) of real numbers such that
\[\lim_{k\rightarrow\infty}U(\tau_{k})\mathbf{e}_{a}=\gamma\mathbf{e}_{b},\text{ for some }\ \gamma\in\mathbb{C}.\]
In [22], Godsil et al. showed that there is PGST between the end vertices of \(P_{n}\) if and only if \(n+1=2^{t}\) or \(p\) or \(2p\), for some positive integer \(t\) and odd prime \(p\). Moreover, if there is PGST between the end vertices of \(P_{n}\), then it occurs between the vertices \(a\) and \(n+1-a\)
as well, whenever \(a\neq(n+1)/2.\) Further investigation is done in [12] to determine infinite family of paths admitting PGST between a pair of internal vertices, where there is no PGST between the end vertices. Among other trees, PGST is investigated on double star [17], 1-sum of stars [23], etc. Pal et al. [28] showed that a cycle \(C_{n}\) and its complement \(\overline{C}_{n}\) admit PGST if and only if \(n\) is a power of \(2\), and it occurs between every pair of antipodal vertices. It is worth noting that PGST is not monogamous unlike PST as argued in [29, Example 4.1]. More results on PGST can be found in [10, 16, 24, 25, 30], etc. Here we investigate the existence of PGST on double subdivided stars. A subdivided star with \(l\) branches, denoted by \(SK_{1,l}\), is obtained by identifying exactly one pendant vertex from \(l\) copies of the path \(P_{3}.\) A double subdivided star is formed by joining the coalescence vertices of a pair of subdivided stars \(SK_{1,l}\) and \(SK_{1,m}\) by an additional edge, and the resulting graph is denoted by \(T_{l,m}.\) We analyze the linear independence of the eigenvalues of \(T_{l,m}\) in Section 2 and then, in Section 3, the existence of PGST in \(T_{l,m}\) is investigated.
A pair of vertices \(a\) and \(b\) in a graph \(G\) are called strongly cospectral if \(E_{\theta_{j}}\mathbf{e}_{a}=\pm E_{\theta_{j}}\mathbf{e}_{b}\), for all eigenvalues \(\theta_{j}\). Next we observe that strong cospectrality is necessary for the existence of PGST between a pair of vertices.
**Lemma 1**.: _[_20_]_ _If a graph \(G\) exhibits pretty good state transfer between a pair of vertices \(a\) and \(b\), then they are strongly cospectral._
If \(P\) is a matrix of an automorphism of \(G\) with adjacency matrix \(A\) then \(P\) commutes with \(A\). Since the transition matrix \(U(t)\) is a polynomial in \(A\), the matrices \(P\) and \(U(t)\) commute as well. Therefore, if \(G\) allows PGST between \(a\) and \(b\), then each automorphism fixing \(a\) must fix \(b\). We use the following Kronecker approximation theorem on simultaneous approximation in characterizing double subdivided stars having PGST.
**Theorem 2**.: _[_3_]_ _Let \(\alpha_{1},\alpha_{2},\ldots,\alpha_{l}\) be arbitrary real numbers. If \(1,\theta_{1},\ldots,\theta_{l}\) are real, algebraic numbers linearly independent over \(\mathbb{Q}\), then for \(\epsilon>0\), there exist \(q\in\mathbb{Z}\) and \(p_{1},p_{2},\ldots,p_{l}\in\mathbb{Z}\) such that_
\[|q\theta_{j}-p_{j}-\alpha_{j}|<\epsilon.\]
Now we recall few results on the spectra of graphs. Let \(G\) be a graph having distinct eigenvalues \(\theta_{1},\theta_{2},\ldots,\theta_{d}\) with multiplicities \(k_{1},k_{2},\ldots,k_{d}\), respectively. We denote the spectrum of \(G\) as, \(\theta_{1}^{k_{1}},\theta_{2}^{k_{2}},\ldots,\theta_{d}^{k_{d}}\). In case \(\theta_{i}\) is a simple eigenvalue, we omit the power
\(k_{i}=1.\) A graph \(G\) is bipartite if there is a bipartition of the set of vertices such that the edges connect only vertices in different parts. The eigenvalues and the corresponding eigenvectors of a bipartite graph have a special structure, as mentioned below.
**Proposition 1**.: _[_7_]_ _If \(\theta\) is an eigenvalue of a bipartite graph \(G\) with multiplicity \(k\), then \(-\theta\) is also an eigenvalue of \(G\) having the same multiplicity. If \(\left[\begin{smallmatrix}u\\ v\end{smallmatrix}\right]\) is an eigenvector with eigenvalue \(\theta\), then \(\left[\begin{smallmatrix}u\\ -v\end{smallmatrix}\right]\) is an eigenvector with eigenvalue \(-\theta\)._
In the above Proposition 1, the vectors \(u\) and \(v\) correspond to the vertices in the two partite sets of \(G\). If two vertices \(a\) and \(b\) are adjacent then we write \(a\sim b.\) An eigenvector \(v\) can be realized as a function on the vertex set \(V(G)\) where \(v(a)\) denotes the \(a\)-th component of \(v.\) Then \(v\) is an eigenvector of \(G\) with eigenvalue \(\theta\) if and only if
\[\theta\cdot v(a)=\sum_{b\sim a}v(b)\ \ \text{for all}\ a\in V(G), \tag{2}\]
where the summation is taken over all vertices \(b\in V(G)\) that are adjacent to \(a.\) Later (2) shall be used in determining the eigenvectors of \(T_{l,m}.\) The spectrum of a subdivided star \(SK_{1,l}\) is given in [7], which can also be obtained using (2) as
\[-\sqrt{l+1},\ (-1)^{l-1},\ 0,\ 1^{l-1},\ \sqrt{l+1}.\]
The characteristic polynomial of \(SK_{1,l}\) is \(x(x^{2}-1)^{l-1}(x^{2}-l-1).\) In the following section, we determine the set of linearly independent eigenvalues of \(T_{l,m},\) which proves to be significant in characterizing state transfer in double subdivided stars.
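A quick numerical sanity check of this spectrum (the vertex labelling of \(SK_{1,l}\) below is our own choice):

```python
import numpy as np

def adjacency_SK(l: int) -> np.ndarray:
    """SK_{1,l}: coalescence vertex 0, middle vertices 1..l, pendants l+1..2l;
    vertex 0 ~ i and i ~ l+i for every branch i = 1..l."""
    n = 2 * l + 1
    A = np.zeros((n, n))
    for i in range(1, l + 1):
        A[0, i] = A[i, 0] = 1
        A[i, l + i] = A[l + i, i] = 1
    return A

l = 4
print(np.round(np.sort(np.linalg.eigvalsh(adjacency_SK(l))), 6))
# expected: -sqrt(l+1), -1 (multiplicity l-1), 0, 1 (multiplicity l-1), sqrt(l+1)
```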
## 2 Linear independence of eigenvalues
Suppose \(H\) is a graph having a vertex \(g.\) Then \(H-g\) is the induced subgraph obtained by removing the vertex \(g\) from \(H.\) Recall that a double subdivided star \(G:=T_{l,m}\) is considered as a pair of subdivided stars \(H:=SK_{1,l}\) and \(H^{\prime}:=SK_{1,m}\) joined by an edge to the respective coalescence vertices, say, \(a\) and \(b\). Using [14, Theorem 2.2.4], the characteristic polynomial of \(G\) can be evaluated as
\[P_{G}(x) =P_{H}(x)P_{H^{\prime}}(x)-P_{H-a}(x)P_{H^{\prime}-b}(x)\] \[=x(x^{2}-1)^{l-1}(x^{2}-l-1)x(x^{2}-1)^{m-1}(x^{2}-m-1)-(x^{2}-1) ^{l}(x^{2}-1)^{m}\] \[=(x^{2}-1)^{l+m-2}q(x),\]
where \(q(x)=x^{6}-(l+m+3)x^{4}+(lm+l+m+3)x^{2}-1.\) One can observe that \(q(x)\) is a polynomial having only even-degree terms, and none of \(-1,0\) and \(1\) are roots of \(q(x).\) Suppose \(q(x)\) has the roots \(\pm\theta_{1},\)\(\pm\theta_{2},\)\(\pm\theta_{3}\). Considering \(Q(x)=x^{3}-(l+m+3)x^{2}+(lm+l+m+3)x-1,\) we have \(Q(x^{2})=q(x),\) and hence \(\theta_{1}^{2},\)\(\theta_{2}^{2},\)\(\theta_{3}^{2}\) are the roots of \(Q(x)\). Since \(Q(x)\) has no rational root (by the rational root theorem the only candidates are \(\pm 1\), and \(Q(1)=lm\neq 0\) while \(Q(-1)<0\)), the cubic \(Q(x)\) is irreducible over \(\mathbb{Q}\), and hence all roots of \(Q(x)\) are simple. Consequently, all roots of \(q(x)\) are simple as well. Then the spectrum of \(T_{l,m}\) is
\[(-1)^{l+m-2},\ 1^{l+m-2},\ \pm\theta_{1},\ \pm\theta_{2},\ \pm\theta_{3}.\]
Since \(\theta_{1}^{2},\)\(\theta_{2}^{2},\)\(\theta_{3}^{2}\) are the roots of \(Q(x)\), we also have the following identities.
\[\theta_{1}^{2}+\theta_{2}^{2}+\theta_{3}^{2} = l+m+3. \tag{3}\] \[\theta_{1}^{2}\theta_{2}^{2}+\theta_{2}^{2}\theta_{3}^{2}+\theta _{3}^{2}\theta_{1}^{2} = lm+l+m+3.\] (4) \[\theta_{1}^{2}\theta_{2}^{2}\theta_{3}^{2} = 1. \tag{5}\]
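Both the factorization \(P_{G}(x)=(x^{2}-1)^{l+m-2}q(x)\) and, via Vieta's formulas for \(Q(x)\), the identities (3)-(5) can be verified symbolically for concrete \(l,m\); the sketch below (our own check, with an explicit vertex labelling) does so with sympy:

```python
import sympy as sp

def adjacency_T(l: int, m: int) -> sp.Matrix:
    """T_{l,m}: SK_{1,l} and SK_{1,m} with their coalescence vertices joined by an edge."""
    def sk(k):
        n = 2 * k + 1
        A = sp.zeros(n, n)
        for i in range(1, k + 1):
            A[0, i] = A[i, 0] = 1
            A[i, k + i] = A[k + i, i] = 1
        return A
    A = sp.diag(sk(l), sk(m))
    A[0, 2 * l + 1] = A[2 * l + 1, 0] = 1    # edge between the two coalescence vertices
    return A

x = sp.symbols('x')
l, m = 3, 5
P = adjacency_T(l, m).charpoly(x).as_expr()
q = x**6 - (l + m + 3)*x**4 + (l*m + l + m + 3)*x**2 - 1
print(sp.expand(P - (x**2 - 1)**(l + m - 2) * q))    # 0
```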
The next result demonstrates that if the polynomial \(q(x)\) is reducible, then \(1,\theta_{1},\theta_{2}\) are linearly independent over \(\mathbb{Q}\).
**Lemma 2**.: _Let \(l\) and \(m\) be two positive integers. Suppose \(\pm\theta_{1},\pm\theta_{2},\pm\theta_{3}\) are the roots of \(q(x)=x^{6}-(l+m+3)x^{4}+(lm+l+m+3)x^{2}-1\) in its splitting field over \(\mathbb{Q}.\) If \(q(x)\) is reducible, then \(1,\theta_{1},\theta_{2}\) are linearly independent over \(\mathbb{Q}.\)_
Proof.: One can observe that if the polynomial \(q(x)\) is reducible for some \(l\) and \(m\), then it must be factored into two irreducible monic polynomials \(f(x;l,m)\) and \(f(-x;l,m)\) of degree three such that \(q(x)=-f(x;l,m)\cdot f(-x;l,m).\) Without loss of generality, let \(\theta_{1},\theta_{2},\theta_{3}\) be the distinct roots of \(f(x;l,m)\), and suppose \(\theta_{3}\) is the largest among them. Let
Figure 1: The double subdivided star \(T_{l,m}\).
\(\alpha,\beta,\gamma\in\mathbb{Q}\) such that
\[\alpha+\beta\theta_{1}+\gamma\theta_{2}=0. \tag{6}\]
By [15, Theorem 13.27], the Galois group of the irreducible polynomial \(f(x;l,m)\) can be realized as a transitive subgroup of \(S_{3}\) with respect to the ordering of roots \(\theta_{1},\theta_{2},\theta_{3}\). Since the only transitive subgroups of \(S_{3}\) are \(A_{3}\) and \(S_{3}\) itself, the Galois group of \(f(x;l,m)\) must contain the alternating group \(A_{3}=\{(1),(123),(132)\}\). As the automorphisms in the Galois group fix \(\mathbb{Q}\), the elements of \(A_{3}\) acting on (6) give
\[\alpha+\beta\theta_{1}+\gamma\theta_{2}=0,\ \alpha+\beta\theta_{2}+\gamma\theta_{ 3}=0,\ \alpha+\beta\theta_{3}+\gamma\theta_{1}=0,\]
which is a homogeneous system of linear equations in \(\alpha,\ \beta\) and \(\gamma.\) The coefficient matrix can be reduced to obtain
\[\begin{bmatrix}1&\theta_{1}&\theta_{2}\\ 0&\theta_{2}-\theta_{1}&\theta_{3}-\theta_{2}\\ 0&0&\frac{(\theta_{1}-\theta_{2})^{2}+(\theta_{3}-\theta_{1})(\theta_{3}- \theta_{2})}{(\theta_{1}-\theta_{2})}\end{bmatrix}.\]
Since all the pivots are nonzero, the rank of the coefficient matrix is \(3.\) Consequently, \(1,\theta_{1},\theta_{2}\) are linearly independent over \(\mathbb{Q}.\)
From (3) and (4) we find
\[(\theta_{1}+\theta_{2}+\theta_{3})^{2} = l+m+3+2(\theta_{1}\theta_{2}+\theta_{2}\theta_{3}+\theta_{1} \theta_{3}), \tag{7}\] \[(\theta_{1}\theta_{2}+\theta_{2}\theta_{3}+\theta_{1}\theta_{3})^ {2} = lm+l+m+3+2\theta_{1}\theta_{2}\theta_{3}(\theta_{1}+\theta_{2}+ \theta_{3}). \tag{8}\]
If \(\theta_{1}+\theta_{2}+\theta_{3}=0,\) then (7) and (8) imply that \((l-m)^{2}+2l+2m=3,\) which is impossible as \(l,m\in\mathbb{N}.\) Now Lemma 2, along with \(\theta_{1}+\theta_{2}+\theta_{3}\neq 0\), implies that if the polynomial \(q(x)\) is reducible, then any proper subset of \(\{1,\theta_{1},\theta_{2},\theta_{3}\}\) is linearly independent over \(\mathbb{Q}.\)
**Theorem 3**.: _Let \(l,m\in\mathbb{N}.\) If the polynomial \(q(x)=x^{6}-(l+m+3)x^{4}+(lm+l+m+3)x^{2}-1\) is reducible over \(\mathbb{Q},\) then the set of all positive eigenvalues of \(T_{l,m}\) except one is linearly independent over \(\mathbb{Q}.\)_
Note that if \(l=m\) then \(q(x)=-f(x;l,l)\cdot f(-x;l,l)\) where \(f(x;l,l)=x^{3}-x^{2}-(l+1)x+1.\) The roots \(\theta_{1},\theta_{2},\theta_{3}\) of \(f(x;l,l)\) satisfy the following relations.
\[\theta_{1}+\theta_{2}+\theta_{3} = 1. \tag{9}\] \[\theta_{1}\theta_{2}+\theta_{2}\theta_{3}+\theta_{3}\theta_{1} = -(l+1).\] (10) \[\theta_{1}\theta_{2}\theta_{3} = -1. \tag{11}\]
As a consequence of Theorem 3 we have the following result.
**Corollary 1**.: _The set of all positive eigenvalues of \(T_{l,l}\) except one is linearly independent over \(\mathbb{Q}\) for every positive integer \(l.\)_
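For instance, the claimed factorization for \(l=m\) is a one-line symbolic check (our own verification):

```python
import sympy as sp

x, l = sp.symbols('x l')
q = x**6 - (2*l + 3)*x**4 + (l**2 + 2*l + 3)*x**2 - 1   # q(x) with m = l
f = x**3 - x**2 - (l + 1)*x + 1                          # f(x; l, l)
print(sp.expand(q + f * f.subs(x, -x)))                  # 0, i.e. q(x) = -f(x)f(-x)
```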
Before we proceed with the case when \(q(x)\) is irreducible over \(\mathbb{Q},\) consider the following result on the linear independence of \(\theta_{1}^{2},\theta_{2}^{2}\) and \(\theta_{3}^{2}\) over \(\mathbb{Q}.\)
**Lemma 3**.: _Let \(\theta_{1}^{2},\theta_{2}^{2},\theta_{3}^{2}\) be the roots of \(Q(x)=x^{3}-(l+m+3)x^{2}+(lm+l+m+3)x-1,\) for \(l,m\in\mathbb{N},\) in its splitting field over \(\mathbb{Q}\). Then \(\theta_{1}^{2},\theta_{2}^{2},\theta_{3}^{2}\) are linearly independent over \(\mathbb{Q}.\)_
Proof.: Let \(\alpha,\beta,\gamma\in\mathbb{Q}\) such that
\[\alpha\theta_{1}^{2}+\beta\theta_{2}^{2}+\gamma\theta_{3}^{2}=0. \tag{12}\]
Note that \(Q(x)\) is an irreducible polynomial of degree \(3\) over \(\mathbb{Q}\). Since the Galois group corresponding to \(Q(x)\) is transitive, it contains the alternating group \(A_{3}.\) The elements of \(A_{3}\) acting on (12) yield
\[\alpha\theta_{1}^{2}+\beta\theta_{2}^{2}+\gamma\theta_{3}^{2}=0,\ \alpha\theta_{2}^{2}+\beta\theta_{3}^{2}+\gamma\theta_{1}^{2}=0,\ \alpha\theta_{3}^{2}+\beta\theta_{1}^{2}+\gamma\theta_{2}^{2}=0.\]
The coefficient matrix for the corresponding homogeneous system is row equivalent to the following matrix.
\[\begin{bmatrix}\theta_{1}^{2}&\theta_{2}^{2}&\theta_{3}^{2}\\ 0&\theta_{3}^{2}-\dfrac{\theta_{2}^{4}}{\theta_{1}^{2}}&\theta_{1}^{2}-\dfrac{ \theta_{2}^{2}\theta_{3}^{2}}{\theta_{1}^{2}}\\ 0&0&\dfrac{3-\theta_{1}^{6}-\theta_{2}^{6}-\theta_{3}^{6}}{\theta_{3}^{2} \theta_{1}^{2}-\theta_{2}^{4}}\end{bmatrix}.\]
Since \(\theta_{1}^{2},\theta_{2}^{2},\theta_{3}^{2}\) are distinct real roots of \(Q(x)\) satisfying \(\theta_{1}^{2}\theta_{2}^{2}\theta_{3}^{2}=1,\) we have \(\theta_{1}^{6}+\theta_{2}^{6}+\theta_{3}^{6}>3.\) Moreover, \(\theta_{3}^{2}-\dfrac{\theta_{2}^{4}}{\theta_{1}^{2}}=0\) would imply \(\theta_{2}^{6}=1\), that is, \(\theta_{2}^{2}=1,\) a contradiction. So all three pivots are non-zero, and therefore the rank of the coefficient matrix is \(3.\) Hence \(\theta_{1}^{2},\theta_{2}^{2},\theta_{3}^{2}\) are linearly independent over \(\mathbb{Q}.\)
Suppose \(q(x)=x^{6}-(l+m+3)x^{4}+(lm+l+m+3)x^{2}-1\) is irreducible over \(\mathbb{Q},\) and consider the Galois group \(\mathcal{G}\) of \(q(x)\) that fixes \(\mathbb{Q}.\) Applying [15, Theorem 13.27], the Galois group \(\mathcal{G}\) can be realized as a transitive subgroup of \(S_{6}\) with respect to the ordering of roots \(\theta_{1},-\theta_{1},\theta_{2},-\theta_{2},\theta_{3},-\theta_{3}\) of \(q(x).\) Since the discriminant \(D\) of \(Q(x)\) satisfy
\(D=(\theta_{2}^{2}-\theta_{1}^{2})^{2}(\theta_{3}^{2}-\theta_{1}^{2})^{2}(\theta_{3} ^{2}-\theta_{2}^{2})^{2}\in\mathbb{Q},\) the discriminant of \(q(x)\) evaluated as \(64D^{2}\) is a square of an element in \(\mathbb{Q}.\) Using [15, Proposition 14.34], the Galois group \(\mathcal{G}\) is a transitive subgroup of the alternating group \(A_{6}\). Now we have the following result.
**Theorem 4**.: _Let \(l\) and \(m\) be two positive integers. Suppose \(\pm\theta_{1},\pm\theta_{2},\pm\theta_{3}\) are the roots of \(q(x)=x^{6}-(l+m+3)x^{4}+(lm+l+m+3)x^{2}-1\) in its splitting field over \(\mathbb{Q}.\) If \(q(x)\) is irreducible then \(1,\theta_{1},\theta_{2},\theta_{3}\) are linearly independent over \(\mathbb{Q}.\)_
Proof.: Let \(\alpha,\beta,\gamma,\delta\in\mathbb{Q}\) such that
\[\alpha+\beta\theta_{1}+\gamma\theta_{2}+\delta\theta_{3}=0. \tag{13}\]
Since the Galois group \(\mathcal{G}\) of \(q(x)\) is a transitive subgroup of \(A_{6},\) there is an automorphism \(\sigma\in\mathcal{G}\) such that \(\sigma(\theta_{1})=-\theta_{1}.\) Applying \(\sigma\) on both sides of (13) and adding the resulting equation to it yields
\[2\alpha+\gamma(\theta_{2}+\sigma(\theta_{2}))+\delta(\theta_{3}+\sigma( \theta_{3}))=0. \tag{14}\]
Each automorphism in \(\mathcal{G}\) that maps \(\theta_{i}\) to \(\theta_{j}\) must map \(-\theta_{i}\) to \(-\theta_{j}.\) Since \(\mathcal{G}\) is a subgroup of \(A_{6},\) the only possibilities for \(\sigma\) are (12)(34), (12)(56), (12)(3546) or (12)(3645).
If \(\sigma=(12)(34),\) then (14) becomes \(\alpha+\delta\theta_{3}=0.\) Since \(\theta_{3}\notin\mathbb{Q},\) we obtain \(\alpha=\delta=0.\) Thus (13) reduces to \(\beta\theta_{1}+\gamma\theta_{2}=0,\) which further gives \(\beta=\gamma=0\) by Lemma 3 (otherwise \(\theta_{1}/\theta_{2}\) would be rational, and squaring would yield a nontrivial rational dependence between \(\theta_{1}^{2}\) and \(\theta_{2}^{2}\)). Therefore \(1,\theta_{1},\theta_{2},\theta_{3}\) are linearly independent over \(\mathbb{Q}.\) Using a similar argument for \(\sigma=(12)(56),\) we arrive at the same conclusion. If \(\sigma=(12)(3546)\) then we find \(\sigma^{-1}=(12)(3645)\). Now (14) becomes
\[2\alpha+(\gamma-\delta)\theta_{2}+(\gamma+\delta)\theta_{3}=0. \tag{15}\]
Applying \(\sigma^{2}=(34)(56)\) on both sides of (15) gives \(\alpha=0.\) Finally by Lemma 3, we find \(\alpha=\beta=\gamma=\delta=0.\) Hence \(1,\theta_{1},\theta_{2},\theta_{3}\) are linearly independent over \(\mathbb{Q}.\)
Consequently, from Theorem 4 we observe that if \(q(x)\) is irreducible over \(\mathbb{Q}\) then the set of all positive eigenvalues of \(T_{l,m}\) is linearly independent over \(\mathbb{Q}.\)
**Corollary 2**.: _If \(q(x)=x^{6}-(l+m+3)x^{4}+(lm+l+m+3)x^{2}-1,\) for \(l,m\in\mathbb{N},\) is irreducible over \(\mathbb{Q},\) then the set of all positive (negative) eigenvalues of \(T_{l,m}\) is linearly independent over \(\mathbb{Q}.\)_
## 3 State transfer on \(T_{l,m}\)
In quest of the existence of PST (or PGST) in \(T_{l,m}\) from a vertex \(a\), we analyze the eigenvalues in the support \(\sigma_{a}\) and the corresponding orthogonal projections. In this regard, we determine the eigenvectors of \(T_{l,m}\) corresponding to each of its eigenvalues. Recall that the eigenvectors are real-valued functions on the vertex set of \(T_{l,m}.\) In case \(l>1\) (or \(m>1\)), an eigenvector corresponding to \(1\) can be obtained by assigning the value \(1\) to a pair of adjacent vertices of degree \(1\) and \(2\) in a branch, \(-1\) to another such pair in a branch adjacent to the previous one, and the remaining vertices are assigned \(0.\) Considering other such adjacent branches, we obtain \(l+m-2\) linearly independent eigenvectors corresponding to \(1\). Similarly, a set of \(l+m-2\) linearly independent eigenvectors for the eigenvalue \(-1\) can be obtained using (2). Suppose \(E_{-1}\) and \(E_{1}\) are idempotents corresponding to \(-1\) and \(1\), respectively. For the vertices \(a\) and \(b\) in \(T_{l,m}\) (see Figure 1), note that \(-1\) and \(1\) are not in \(\sigma_{a}\) as well as \(\sigma_{b}\) since we have \(E_{-1}\mathbf{e}_{a}=E_{-1}\mathbf{e}_{b}=0\) and \(E_{1}\mathbf{e}_{a}=E_{1}\mathbf{e}_{b}=0\). However, we observed that \(E_{-1}\mathbf{e}_{c}=E_{-1}\mathbf{e}_{d}\neq 0\) and \(E_{1}\mathbf{e}_{c}=E_{1}\mathbf{e}_{d}\neq 0\), and hence both \(-1\) and \(1\) belong to \(\sigma_{c}\) as well as \(\sigma_{d}\).
The eigenvectors for the remaining eigenvalues are obtained as follows. Let \(P\) be the permutation matrix corresponding to an automorphism of \(T_{l,m}\). Note that \(P\) commutes with the adjacency matrix \(A\) of \(T_{l,m}\). Let \(v\) be an eigenvector of \(T_{l,m}\) satisfying \(Av=\theta v\) with \(\theta\neq-1,1.\) Now \(APv=PAv=\theta Pv\) implies that \(Pv\) is also an eigenvector corresponding to \(\theta\). Suppose the entries of \(v\) are \(z_{1},z_{2},x_{j},y_{j},u_{k},w_{k},\) where \(j=1,2,\ldots,l\) and \(k=1,2,\ldots,m\) as mentioned in Figure 2. In particular, suppose \(P\) is an automorphism of \(T_{l,m}\) which switches vertices assigned with entries \(x_{1}\) and \(x_{2}\), \(y_{1}\) and \(y_{2}\), and fixing all other vertices. Since all eigenvalues except \(-1\) and \(1\) are simple, the eigenvectors \(v\) and
Figure 2: An eigenvector corresponding to \(\theta\).
\(Pv\) are parallel. As a result \(Pv=\alpha v\) for some scalar \(\alpha\), which further gives \(\alpha z_{1}=z_{1}\). If \(z_{1}=0\) then (2) infers that \(\theta x_{1}=y_{1}\) and \(\theta y_{1}=x_{1}\), which is absurd as \(\theta\neq\pm 1.\) Hence \(\alpha=1\), and we have \(x_{1}=x_{2}\) and \(y_{1}=y_{2}\). We therefore conclude \(x_{1}=x_{j}\), \(y_{1}=y_{j}\), \(u_{1}=u_{k}\) and \(w_{1}=w_{k}\) for all \(j\) and \(k\).
In case \(l=m\), consider an automorphism \(P^{\prime}\) which switches vertices assigned with entries \(z_{1}\) and \(z_{2}\). Since \(v\) and \(P^{\prime}v\) are parallel, \(P^{\prime}v=\beta v\) for some scalar \(\beta\). This gives \(z_{2}=\beta z_{1}\), \(z_{1}=\beta z_{2}\), \(u_{1}=\beta x_{1}\) and \(w_{1}=\beta y_{1}\). Consequently, we have \(\beta=\pm 1\). The eigenvector \(v\) corresponding to the case \(\beta=1\) is given in Figure 3. For \(\beta=-1\), the eigenvector can be obtained similarly. The following result shows that the double subdivided star \(T_{l,m}\) does not exhibit PST for any positive integers \(l\) and \(m\).
**Theorem 5**.: _There is no perfect state transfer in the double subdivided star \(T_{l,m}\) for any positive integers \(l\) and \(m.\)_
Proof.: Recall that the spectrum of \(T_{l,m}\) is
\[(-1)^{l+m-2},\ 1^{l+m-2},\ \pm\theta_{1},\ \pm\theta_{2},\ \pm\theta_{3}.\]
It is evident that the nonzero eigenvalues \(\pm\theta_{1},\pm\theta_{2},\pm\theta_{3}\) are in the eigenvalue support of the vertices \(a,c\) and \(e\) as in Figure 1. Suppose \(T_{l,m}\) is periodic at vertex \(a\). Then the ratio condition in Theorem 1 gives
\[\frac{\theta_{1}-(-\theta_{1})}{\theta_{2}-(-\theta_{2})}=\frac{\theta_{1}}{ \theta_{2}}\in\mathbb{Q},\]
which is a contradiction to Lemma 2 or Theorem 4 depending on whether the polynomial \(q(x)=x^{6}-(l+m+3)x^{4}+(lm+l+m+3)x^{2}-1\) is reducible or irreducible over \(\mathbb{Q}\). Therefore \(T_{l,m}\) is not periodic at \(a\), and hence there is no PST from the vertex \(a\). Similarly, there is no PST from the vertices \(c\) and \(e\) as well. Hence the result follows.
Figure 3: The eigenvector \(v\) for \(\beta=1\).
Whenever \(l=m=1\), the double subdivided star \(T_{l,m}\) becomes the path \(P_{6}\). The fact that \(P_{6}\) does not exhibit PST was previously observed in [20] as well. Next we investigate the existence of PGST in \(T_{l,m}\).
### Pretty good state transfer
The graph \(T_{l,m}\) becomes the path \(P_{6}\) for \(l=m=1\). The existence of PGST in \(P_{6}\) is mentioned in [22], where we find that there is PGST from all vertices of \(P_{6}\). Now we consider the remaining cases. It follows from Lemma 1 that if there is PGST between a pair of vertices then they have the same degree. It is well known that if there is PGST between two vertices then each automorphism fixing one must fix the other. Therefore there is no pretty good state transfer in \(T_{l,m}\) whenever \(l\) and \(m\) are distinct and \(l\neq 2\neq m\). Next we investigate the existence of PGST in \(T_{l,l}\).
**Theorem 6**.: _There exists pretty good state transfer between the coalescence vertices in \(T_{l,l}\) with respect to a sequence in \((4\mathbb{Z}-1)\frac{\pi}{2}\) for every natural number \(l.\)_
Proof.: The eigenvalue support of the coalescence vertices \(a\) and \(b\) in \(T_{l,l}\) are
\[\sigma_{a}=\{\pm\theta_{1},\pm\theta_{2},\pm\theta_{3}\}=\sigma_{b}.\]
Recall that the eigenvalues \(\pm\theta_{1},\pm\theta_{2},\pm\theta_{3}\) are simple. Suppose \(v_{1},v_{2},v_{3}\) are the eigenvectors corresponding to \(\theta_{1},\theta_{2},\theta_{3},\) respectively. Using Proposition 1, we determine the eigenvectors corresponding to \(-\theta_{1},-\theta_{2},-\theta_{3}\) as well. Therefore
\[\mathbf{e}_{a}^{T}U(t)\mathbf{e}_{b} = \sum_{\theta\in\sigma_{a}}\exp{(it\theta)}\mathbf{e}_{a}^{T}E_{ \theta}\mathbf{e}_{b} \tag{16}\] \[= \sum_{j=1}^{3}\left[\exp{(it\theta_{j})}\frac{v_{j}(a)v_{j}(b)}{ ||v_{j}||^{2}}-\exp{(-it\theta_{j})}\frac{v_{j}(a)v_{j}(b)}{||v_{j}||^{2}}\right]\]
We already showed that \(v_{j}(a)=v_{j}(b)\neq 0.\) Without loss of generality, let \(v_{j}(a)=1\) for \(j=1,2,3.\) Thus the above equation yields
\[\mathbf{e}_{a}^{T}U(t)\mathbf{e}_{b} = \sum_{j=1}^{3}\left[\frac{\exp{(it\theta_{j})}-\exp{(-it\theta_{j })}}{||v_{j}||^{2}}\right] \tag{17}\]
By Lemma 2, the algebraic numbers \(1,\theta_{1},\theta_{2}\) are linearly independent over \(\mathbb{Q}.\) Let \(\epsilon>0,\) and consider \(\alpha_{j}=\frac{1+\theta_{j}}{4}\) in Theorem 2. Then there exist \(q,p_{1},p_{2}\in\mathbb{Z}\) such that
\[\left|(4q-1)\frac{\pi}{2}\theta_{j}-\left(2\pi p_{j}+\frac{\pi}{2}\right) \right|<2\pi\epsilon\ \ \text{for}\ \ j=1,2. \tag{18}\]
Since \(\theta_{1}+\theta_{2}+\theta_{3}=1\) as in (9), this further yields
\[\left|(4q-1)\frac{\pi}{2}\theta_{3}-\left(\pi(2(q-p_{1}-p_{2})-1)-\frac{\pi}{2} \right)\right|<4\pi\epsilon. \tag{19}\]
We obtain a sequence \(\tau_{k}\in(4\mathbb{Z}-1)\frac{\pi}{2}\) from (18) and (19) such that \(\lim\limits_{k\rightarrow\infty}\exp\left(i\tau_{k}\theta_{j}\right)=i\) for all \(j.\) Thus (17), together with the fact that \(U(0)=I\) (which yields \(1=\mathbf{e}_{a}^{T}U(0)\mathbf{e}_{a}=2\sum_{j=1}^{3}1/||v_{j}||^{2}\)), gives
\[\lim\limits_{k\rightarrow\infty}\mathbf{e}_{a}^{T}U(\tau_{k})\mathbf{e}_{b}=2i \left[\frac{1}{||v_{1}||^{2}}+\frac{1}{||v_{2}||^{2}}+\frac{1}{||v_{3}||^{2}} \right]=i.\]
This completes the proof.
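A rough numerical illustration (our own probe, not part of the proof): for \(T_{3,3}\), scanning times of the form \((4k-1)\frac{\pi}{2}\) and taking the best fidelity found, \(|\mathbf{e}_{a}^{T}U(\tau)\mathbf{e}_{b}|\) between the coalescence vertices creeps toward \(1\) as the range of \(k\) grows, in line with Theorem 6; the vertex labelling and the search range are illustrative choices.

```python
import numpy as np

def adjacency_T(l: int, m: int) -> np.ndarray:
    """T_{l,m} with our labelling: vertices 0 and 2l+1 are the coalescence vertices."""
    def sk(k):
        n = 2 * k + 1
        A = np.zeros((n, n))
        for i in range(1, k + 1):
            A[0, i] = A[i, 0] = 1
            A[i, k + i] = A[k + i, i] = 1
        return A
    A1, A2 = sk(l), sk(m)
    n1, n2 = A1.shape[0], A2.shape[0]
    A = np.block([[A1, np.zeros((n1, n2))], [np.zeros((n2, n1)), A2]])
    A[0, n1] = A[n1, 0] = 1
    return A

l = 3
A = adjacency_T(l, l)
a, b = 0, 2 * l + 1
w, V = np.linalg.eigh(A)                   # A = V diag(w) V^T

def fidelity(tau: float) -> float:
    """|e_a^T exp(i*tau*A) e_b| via the eigendecomposition."""
    return abs((V[a, :] * np.exp(1j * tau * w)) @ V[b, :])

best = max(fidelity((4 * k - 1) * np.pi / 2) for k in range(1, 100001))
print(best)
```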
In the case of \(P_{6},\) the support of each vertex contains all its eigenvalues. The proof of Theorem 6 can be adapted to show that \(P_{6}\) exhibits PGST between the pair of vertices \(n\) and \(7-n\) for all \(n=1,2,3.\)
Now we investigate the existence of PGST in \(T_{2,m}\) for some positive integer \(m.\) Note that there is no PGST between the coalescence vertices of \(T_{2,m}\) whenever \(m\neq 2.\) Therefore, if \(T_{2,m}\) admits PGST then it occurs between the pair of vertices \(c,d\) or the pair of vertices \(e,f\) as in Figure 4. Here we find that
\[\sigma_{c}=\sigma_{d}=\sigma_{e}=\sigma_{f}=\{\pm 1,\pm\theta_{1},\pm\theta_{2}, \pm\theta_{3}\},\]
which is the set of all eigenvalues of \(T_{2,m}\). Let \(v_{1},v_{2},v_{3}\) be the eigenvectors corresponding to the simple eigenvalues \(\theta_{1},\theta_{2},\theta_{3}\), respectively. Then Proposition 1 gives the eigenvectors corresponding to \(-\theta_{1},-\theta_{2},-\theta_{3}\) as well. Recall that \(E_{-1}\) and \(E_{1}\) are the idempotents corresponding to eigenvalues \(-1\) and \(1,\) respectively. Using Gram-Schmidt procedure on the set of linearly independent eigenvectors corresponding to \(-1\) and \(1,\) we evaluate
\[\mathbf{e}_{c}^{T}E_{-1}\mathbf{e}_{d}=\mathbf{e}_{c}^{T}E_{1}\mathbf{e}_{d}=- \frac{1}{4}=\mathbf{e}_{e}^{T}E_{-1}\mathbf{e}_{f}=\mathbf{e}_{e}^{T}E_{1} \mathbf{e}_{f}.\]
Now we have
\[\mathbf{e}_{c}^{T}U(t)\mathbf{e}_{d} = \sum_{j=1}^{3}\left[\exp\left(it\theta_{j}\right)\frac{v_{j}(c)v_{ j}(d)}{||v_{j}||^{2}}+\exp\left(-it\theta_{j}\right)\frac{v_{j}(c)v_{j}(d)}{||v_{j}|| ^{2}}\right]\] \[+\exp\left(it\right)\left(-\frac{1}{4}\right)+\exp\left(-it\right) \left(-\frac{1}{4}\right).\]
We already obtained \(v_{j}(c)=v_{j}(d)\neq 0.\) Without loss of generality, let \(v_{j}(c)=1\) for \(j=1,2,3.\) Therefore
\[\mathbf{e}_{c}^{T}U(t)\mathbf{e}_{d}=\sum_{j=1}^{3}\left[\frac{\exp\left(it \theta_{j}\right)+\exp\left(-it\theta_{j}\right)}{||v_{j}||^{2}}\right]+\frac {1}{4}\left[\exp\left(i(t+\pi)\right)+\exp\left(-i(t-\pi)\right)\right]. \tag{20}\]
Similarly, we use a different set of eigenvectors \(v_{1},v_{2},v_{3}\) satisfying \(v_{j}(e)=v_{j}(f)=1\) for all \(j\) to obtain
\[\mathbf{e}_{e}^{T}U(t)\mathbf{e}_{f}=\sum_{j=1}^{3}\left[\frac{\exp\left(it \theta_{j}\right)+\exp\left(-it\theta_{j}\right)}{||v_{j}||^{2}}\right]+\frac{ 1}{4}\left[\exp\left(i(t+\pi)\right)+\exp\left(-i(t-\pi)\right)\right]. \tag{21}\]
In case \(l=2\) and \(m\) is any positive integer, the polynomial \(q(x)\) for the graph \(T_{2,m}\) becomes \(q(x)=x^{6}-(m+5)x^{4}+(3m+5)x^{2}-1.\) Next we classify the existence of PGST in \(T_{2,m}.\)
**Theorem 7**.: _Let \(m\) be a positive integer. Suppose \(\pm\theta_{1},\pm\theta_{2},\pm\theta_{3}\) are the roots of the polynomial \(q(x)=x^{6}-(m+5)x^{4}+(3m+5)x^{2}-1\) in its splitting field over \(\mathbb{Q}.\) Then the following holds in \(T_{2,m}\)._
1. _If_ \(q(x)\) _is irreducible over_ \(\mathbb{Q}\)_, then there is pretty good state transfer with respect to a sequence in_ \((2\mathbb{Z}+1)\pi\) _between both pairs of vertices_ \(c,d\) _and_ \(e,f\)_._
2. _Suppose_ \(q(x)\) _is reducible over_ \(\mathbb{Q}.\) _If_ \(\theta_{1}+\theta_{2}+\theta_{3}\) _is an even integer, then there is pretty good state transfer with respect to a sequence in_ \((2\mathbb{Z}+1)\pi\) _between both pairs of vertices_ \(c,d\) _and_ \(e,f\)_. Moreover, if_ \(\theta_{1}+\theta_{2}+\theta_{3}\) _is an odd integer then there is no pretty good state transfer from_ \(c,d,e\) _and_ \(f.\)__
Proof.: Suppose \(q(x)\) is irreducible. Then by Theorem 4, the algebraic numbers \(1,\theta_{1},\theta_{2},\theta_{3}\) are linearly independent over \(\mathbb{Q}.\) Let \(\epsilon>0\) and consider \(\alpha_{j}=-\frac{\theta_{j}}{2},\) for \(j=1,2,3.\) By Theorem 2, there exist \(q,p_{1},p_{2}\in\mathbb{Z}\) such that
\[\left|(2q+1)\pi\theta_{j}-2\pi p_{j}\right|<2\pi\epsilon\ \ \text{for}\ j=1,2,3. \tag{22}\]
This along with (20) gives a sequence \(\tau_{k}\in(2\mathbb{Z}+1)\pi\) such that
\[\lim_{k\rightarrow\infty}\mathbf{e}_{c}^{T}U(\tau_{k})\mathbf{e}_{d}=2\left[ \frac{1}{||v_{1}||^{2}}+\frac{1}{||v_{2}||^{2}}+\frac{1}{||v_{3}||^{2}}+\frac{ 1}{4}\right]=1,\]
Figure 4: The double subdivided star \(T_{2,m}.\)
since \(U(0)=I.\) Therefore, PGST occurs between the pair of vertices \(c\) and \(d\) with respect to the sequence \(\tau_{k}\in(2\mathbb{Z}+1)\pi.\) Similarly using (21), we find that \(T_{2,m}\) exhibits PGST between the pair of vertices \(e\) and \(f\) with respect to the same sequence \(\tau_{k}\) as well.
Suppose \(q(x)\) is reducible and \(\theta_{1}+\theta_{2}+\theta_{3}=2n\) for some \(n\in\mathbb{Z}\). Using Lemma 2, the algebraic numbers \(1,\theta_{1},\theta_{2}\) are linearly independent over \(\mathbb{Q}\). Let \(\epsilon>0\) and consider \(\alpha_{j}=-\dfrac{\theta_{j}}{2}\) whenever \(j=1,2.\) By Theorem 2, there exist \(q,p_{1},p_{2}\in\mathbb{Z}\) such that
\[\left|(2q+1)\pi\theta_{j}-2\pi p_{j}\right|<2\pi\epsilon,\ \ \text{for}\ j=1,2. \tag{23}\]
This further yields
\[\left|(2q+1)\pi\theta_{3}-2\pi(2qn+n-p_{1}-p_{2})\right|<4\pi\epsilon. \tag{24}\]
Using (23) and (24), we obtain a sequence \(\tau_{k}\in(2\mathbb{Z}+1)\pi\) such that \(\lim\limits_{k\to\infty}\exp\left(i\tau_{k}\theta_{j}\right)=1,\) for \(j=1,2,3.\) Using (20), we have
\[\lim\limits_{k\to\infty}e_{c}^{T}U(\tau_{k})e_{d}=2\left[\dfrac{1}{||v_{1}||^ {2}}+\dfrac{1}{||v_{2}||^{2}}+\dfrac{1}{||v_{3}||^{2}}+\dfrac{1}{4}\right]=1.\]
Hence \(T_{2,m}\) exhibits PGST between the pair of vertices \(c\) and \(d\) with respect to the sequence \(\tau_{k}\in(2\mathbb{Z}+1)\pi\). Similarly using (21) we find that \(T_{2,m}\) exhibits PGST between the pair of vertices \(e\) and \(f\) with respect to the same sequence \(\tau_{k}\) as well.
Finally, consider the case that \(q(x)\) is reducible and \(\theta_{1}+\theta_{2}+\theta_{3}=2n+1\) for some \(n\in\mathbb{Z}.\) In the proof of the main result in [22], one can observe that if there is PGST in a bipartite graph between a pair of vertices \(a\) and \(b\) with \(\lim\limits_{k\to\infty}U(\tau_{k})\mathbf{e}_{a}=\gamma\mathbf{e}_{b},\ \text{for}\ \text{some}\ \tau_{k}\in\mathbb{R}\) and \(\gamma\in\mathbb{C}\) then \(\gamma=\pm 1\) whenever \(a\) and \(b\) are in the same partite set, otherwise \(\gamma=\pm i.\) Since \(U(0)=I\), we conclude from (20) that if there is PGST between \(c\) and \(d,\) then we have a sequence \(\tau_{k}\in\mathbb{R}\) such that for all \(j=1,2,3,\)
\[\lim\limits_{k\to\infty}\exp\left(i(\tau_{k}+\pi)\right)=\lim\limits_{k\to \infty}\exp\left(i\tau_{k}\theta_{j}\right)=\pm 1.\]
In case \(\lim\limits_{k\to\infty}\exp\left(i(\tau_{k}+\pi)\right)=1,\) it follows that \(\tau_{k}\in(2\mathbb{Z}+1)\pi.\) Since \(\theta_{1}+\theta_{2}+\theta_{3}=2n+1,\) we have a contradiction that \(-1=\lim\limits_{k\to\infty}\exp\left[i\tau_{k}\left(\theta_{1}+\theta_{2}+ \theta_{3}\right)\right]=1,\) where the equality on the right is obtained by using the property of exponentials. When \(\lim\limits_{k\to\infty}\exp\left(i(\tau_{k}+\pi)\right)=-1,\) we have \(\tau_{k}\in 2\pi\mathbb{Z},\) and again it leads to a contradiction. Hence there is no PGST between the vertices \(c\) and \(d.\) Using (21) and a similar argument, we conclude that there is no PGST between the vertices \(e\) and \(f\) as well.
Considering \(\alpha_{j}=\frac{1}{2}\) in the proof of Theorem 7 where \(q(x)\) is irreducible, one can deduce that there is PGST in \(T_{2,m}\) with respect to a sequence in \(2\pi\mathbb{Z}\) between the same pairs of vertices. Combining Theorem 6 and Theorem 7, one obtains a complete characterization of double subdivided stars \(T_{l,m}\) exhibiting PGST. The method presented in this paper may be adapted to classify families of graphs exhibiting other such quantum transportation phenomena, such as quantum fractional revival, pretty good fractional revival, etc.
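The case analysis of Theorem 7 can also be explored computationally. The sketch below (our own) factors \(q(x)\) for \(l=2\) and small \(m\); when \(q(x)\) is reducible, the parity of \(\theta_{1}+\theta_{2}+\theta_{3}\) is read off the \(x^{2}\) coefficient of a cubic factor:

```python
import sympy as sp

x = sp.symbols('x')
for m in range(1, 21):
    q = x**6 - (m + 5)*x**4 + (3*m + 5)*x**2 - 1
    _, factors = sp.factor_list(q)
    if len(factors) == 1:
        print(m, "q irreducible: PGST between c,d and between e,f  (Theorem 7(1))")
    else:
        # a cubic factor has roots theta_1, theta_2, theta_3 (up to a global sign),
        # so minus its x^2 coefficient equals +/-(theta_1 + theta_2 + theta_3)
        cubic = next(p for p, _ in factors if sp.degree(p, x) == 3)
        s = -sp.Poly(cubic, x).all_coeffs()[1]
        verdict = "even: PGST" if s % 2 == 0 else "odd: no PGST from c,d,e,f"
        print(m, f"q reducible, theta_1+theta_2+theta_3 = +/-{abs(s)}  ({verdict})")
```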
## Acknowledgements
The authors are indebted to the reviewers for the valuable comments and generous suggestions to improve the manuscript. S. Mohapatra is supported by Department of Science and Technology (INSPIRE: IF210209). H. Pal is funded by Science and Engineering Research Board (Project: SRG/2021/000522).
|
2308.08577
|
AffectEcho: Speaker Independent and Language-Agnostic Emotion and Affect
Transfer for Speech Synthesis
|
Affect is an emotional characteristic encompassing valence, arousal, and
intensity, and is a crucial attribute for enabling authentic conversations.
While existing text-to-speech (TTS) and speech-to-speech systems rely on
strength embedding vectors and global style tokens to capture emotions, these
models represent emotions as a component of style or represent them in discrete
categories. We propose AffectEcho, an emotion translation model, that uses a
Vector Quantized codebook to model emotions within a quantized space featuring
five levels of affect intensity to capture complex nuances and subtle
differences in the same emotion. The quantized emotional embeddings are
implicitly derived from spoken speech samples, eliminating the need for one-hot
vectors or explicit strength embeddings. Experimental results demonstrate the
effectiveness of our approach in controlling the emotions of generated speech
while preserving identity, style, and emotional cadence unique to each speaker.
We showcase the language-independent emotion modeling capability of the
quantized emotional embeddings learned from a bilingual (English and Chinese)
speech corpus with an emotion transfer task from a reference speech to a target
speech. We achieve state-of-art results on both qualitative and quantitative
metrics.
|
Hrishikesh Viswanath, Aneesh Bhattacharya, Pascal Jutras-Dubé, Prerit Gupta, Mridu Prashanth, Yashvardhan Khaitan, Aniket Bera
|
2023-08-16T06:28:29Z
|
http://arxiv.org/abs/2308.08577v1
|
_AffectEcho_: Speaker Independent and Language-Agnostic Emotion and Affect Transfer for Speech Synthesis
###### Abstract
Affect is an emotional characteristic encompassing valence, arousal, and intensity, and is a crucial attribute for enabling authentic conversations. While existing text-to-speech (TTS) and speech-to-speech systems rely on strength embedding vectors and global style tokens to capture emotions, these models represent emotions as a component of style or represent them in discrete categories. We propose AffectEcho, an emotion translation model, that uses a Vector Quantized codebook to model emotions within a quantized space featuring five levels of affect intensity to capture complex nuances and subtle differences in the same emotion. The quantized emotional embeddings are implicitly derived from spoken speech samples, eliminating the need for one-hot vectors or explicit strength embeddings. Experimental results demonstrate the effectiveness of our approach in controlling the emotions of generated speech while preserving identity, style, and emotional cadence unique to each speaker. We showcase the language-independent emotion modeling capability of the quantized emotional embeddings learned from a bilingual (English and Chinese) speech corpus with an emotion transfer task from a reference speech to a target speech. We achieve state-of-art results on both qualitative and quantitative metrics.
1Department of Computer Science, Purdue University, West Lafayette, IN, USA
[email protected], [email protected], [email protected], gupta596@purdue, [email protected], [email protected], [email protected]
## 1 Introduction
Expressions of similar emotions can exhibit subtle variations across different languages, and the manifestation of these emotions can also differ among individuals Jackson et al. (2019). The integration of these nuanced emotional qualities into conversational AI systems holds the potential to enhance human-AI interactions, enabling more realistic and engaging dialogues Martinez-Miranda and Aldea (2005). However, accurately modeling emotions in a language-independent setting presents a significant challenge due to the need to group similar emotions with subtle differences together van Rijn and Larrouy-Maestri (2023). The complexity arises from the fact that individuals simultaneously express a multitude of emotions with varying intensities during speech, making a one-to-one mapping of emotions impractical Cowen et al. (2019). Instead, we devise a new approach that involves clustering and quantizing similar emotions into embeddings which can guide generative models.
Numerous deep learning approaches have been developed for tasks such as text-to-speech synthesis TTS Ren et al. (2019) and speech style transfer, achieving impressive results in generating high-quality speech Zhou et al. (2021). Recent advancements have also focused on synthesizing emotional speech using methods such as one-hot vectors or strength vectors Zhou et al. (2022), diffusion-based techniques to control emotional intensity Guo et al. (2023b), and generating speech with mixed emotions Zhou et al. (2022). However, in conversational AI systems, it is impractical to provide explicit strength vectors for every response, therefore necessitating models that can comprehend the speaker's emotions automatically and respond appropriately. Moreover, it is crucial that the generated response avoids aggravating the emotional state of the human speaker interacting with the AI agent Carolus and Wienrich (2021). To address this challenge, as a first step, we present a deep learning-based pipeline that effectively captures the speaker's emotion and faithfully mirrors the same emotion while preserving key attributes such as identity, accent, linguistic content, intonation, and cadence.
Several studies have explored the unsupervised modeling of speaking styles with style tokens, which serve as embeddings capable of capturing various characteristics such as speed, speaking style, prosody, and speaker identity Wang et al. (2018). While these style token embeddings can be trained to capture emotions, it is essential to decouple style and emotion to enhance control over speech generation Li et al. (2021). In the context of dialogue systems, conversational agents, exemplified by popular virtual assistants like Siri, have pre-defined identities, accents, and prosody. Integrating style transfer models into such agents would be inefficient as they are designed to maintain a consistent identity throughout interactions with humans.
In recent years, significant advancements have been made in text-to-speech (TTS) systems, resulting in the development of state-of-the-art models capable of generating highly realistic speech from text input. Our proposed model can effectively complement these advanced TTS pipelines by acting as a post-processing step, enabling the integration of appropriate emotions into the synthesized speech prior to its delivery to the individual engaged in interaction with the
conversational agent. By leveraging the existing strengths of TTS systems in generating accurate speech content, our model focuses on infusing the desired emotional nuances, thereby enhancing the overall quality and expressiveness of the generated speech.
The main contributions of our research work can be summarized as follows:
* We introduce a methodology using a Vector Quantized codebook model to learn meaningful affect representations from speech, capturing variations in valence, arousal and dominance within an emotion, while, eliminating the need for one-hot representations and explicit strength embeddings of emotions.
* We design **AffectEcho**, an emotional speech conversion model conditioned on the quantized emotional embeddings. Our method disentangles emotion from style and linguistic content, facilitating cross-language emotion transfer and offering enhanced flexibility and controllability. Furthermore, we use spectral convolution blocks or neural operator blocks to better learn the acoustic features in the spectral domain.
* Through quantitative and qualitative experiments, we demonstrate that **AffectEcho** can successfully transfer emotion between speakers while preserving their unique characteristics such as speaking style, linguistic content, and vocal characteristics.
## 2 Related Work
Recent advancements in text-to-speech architectures have led to innovative approaches to emotional speech synthesis. Zhou et al. (2022) propose a sequence-to-sequence architecture that enables emotional manipulation in text-to-speech models. While their approach utilizes editable strength embeddings to guide the generator, it focuses on discrete emotions and overlooks other affective features like valence, dominance, and arousal. Another avenue explored by Guo et al. (2023) involves a multi-stage codebook for text-to-speech conversion. Their approach employs VQ-VAE (van den Oord, Vinyals, and Kavukcuoglu, 2017) to quantize acoustic features of the mel-spectrogram and subsequently reconstruct the target audio. In contrast, many of the recent approaches in text-to-speech leverage diffusion generative models (Sohl-Dickstein et al., 2015). Notable among these is EmoDiff (Guo et al., 2023), which focuses on intensity-controllable diffusion-based text-to-speech modeling. It uses weighted emotion labels during sampling to generate speech samples with the desired emotions at the potential cost of reduced expressiveness in other affective dimensions in the condition vector. EmoMix (Tang et al., 2023) conditions the diffusion training process on emotional embeddings derived from a pre-trained speech emotion recognition model. NaturalSpeech2 (Shen et al., 2023) is a diffusion-based text-to-speech model that retains speaker identity and can perform speech enhancement. These models combine emotion and style, presenting a challenge in isolating and controlling emotion without compromising other features of the speaker's identity.
The use of class-conditioned StarGAN has been explored by Rizos et al. (2020) for emotion conversion, where emotions are represented as three mutually exclusive classes: angry, happy, and sad. Luo et al. (2019) introduce a GAN-based model employing continuous wavelet transform for neutral to emotional voice conversion, utilizing a Variational Autoencoder to extract the emotional information. Their approach also utilizes only three emotions. Kameoka et al. (2018) design the StarGAN-VC architecture for many-to-many voice conversion, later extended by Das et al. (2023) with StarGAN-VC++ to include style conversion to change the speaker identity while retaining the content of the speech. Similarly, Shah et al. (2023) propose emotion conversion using StarGAN-VC, incorporating dual encoders to learn speaker and emotion style embeddings separately. However, their model reported a low classification accuracy. Meftah et al. (2023) perform a comprehensive analysis of StarGAN-based models for emotional voice conversion and conclude that the efficiency in converting multi-emotions to multi-speakers was not as high as the efficiency in voice conversion for multi-speakers.
To address the lack of comprehensive affect-based emotion modeling in existing works and to decouple emotion from speaker style, we present AffectEcho, a textless speech affect transfer model. In our work, we showcase that emotional voice conversion benefits from the rich emotion representations learned via the proposed Vector Quantized codebook model, and that affect can be decoupled from gender, speaker, and language for better control over the generated speech.
## 3 AffectEcho Architecture
The **AffectEcho** model comprises two essential components. The first component is the emotion classifier responsible for generating the emotion embedding from the reference audio. This emotion embedding serves as a condition for the second component, the speech generator model. The speech generator takes the mel-spectrogram of the input speech as its primary input and leverages the emotion embedding to synthesize output speech with the desired affective qualities.
Figure 1: _The overall architecture of the **AffectEcho** model. The figure on the top is the VQ classifier, which processes the input audio and maps it to the codebook. In the bottom row, the mel-spectrogram passes through neural operator blocks, transformer blocks, and the encoder-decoder block._
Figure 1 summarizes the two components of the AffectEcho model.
### Modeling Emotion Space using a Vector Quantized Classifier
To generate the emotion embeddings, we use a vector quantized (VQ) classifier model. Recognizing the complexity of emotions and the limitations of binning them into discrete categories, we adopt a two-stage representation for these embeddings. Initially, we classify the dataset into five main categories, namely, angry, happy, neutral, sad, and surprised, each representing the dominant emotion exhibited in the speech sample. Subsequently, in the second stage, we leverage a vector quantized codebook to further amplify the emotion space, achieving five levels of nuanced representations per emotion.
The VQ codebook defines the embedding space of \(25\) categorical emotion vectors with \(64\) affective features each. Each categorical vector \(e_{i}\in\mathbb{R}^{64}\), \(i\in\{1,\dots,25\}\) is manually associated with a one-hot encoded dominant emotion \(p(e_{i})\).
Our VQ classifier first outputs an embedding vector \(z(x)=e\in\mathbb{R}^{64}\), where \(x\) is an input of speech features. The emotion category \(z\) is then chosen by a nearest-neighbor look-up in the VQ codebook using cosine similarity \(S_{c}(\cdot,\cdot)\) between the output vector and the \(25\) categorical vectors of the codebook as shown in equation 1:
\[q(z=k\mid x)=\begin{cases}1&\text{if }k=\arg\max_{i\in[25]}S_{c}(z(x),e_{i}) \\ 0&\text{otherwise.}\end{cases} \tag{1}\]
In the loss, we use a categorical cross-entropy term between the input emotion label and the emotion label associated with \(z\), as indicated by equation 2:
\[L_{ce}=-\frac{1}{25}\sum_{i=1}^{25}p(e)_{i}\log(q(z\mid x)_{i}). \tag{2}\]
Similar to what is described in the work of van den Oord, Vinyals, and Kavukcuoglu (2017), we use \(l_{2}\) error to move the embedding vectors \(e_{i}\) towards \(z(x)\) and a commitment loss to ensure the model commits to the embedding space
\[L_{cl}=||\text{sg}[z(x)]-e||_{2}^{2}+\beta||z(x)-\text{sg}[e]||_{2}^{2}. \tag{3}\]
In equation 3, \(sg\) refers to the stop gradient operator, and \(\beta\) is set to \(0.25\) for all the experiments.
The overall loss of the VQ-classification model is given by equation 4:
\[L_{vq}=L_{ce}+\alpha L_{cl} \tag{4}\]
where \(\alpha\) is set to \(0.01\).
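A compact PyTorch sketch of the quantized head described by (1)-(4) is given below. The assignment of codebook entries to dominant emotions (five codes per emotion), the use of per-emotion pooled cosine similarities as differentiable logits for the cross-entropy in (2), and the hyperparameter names are our assumptions; the paper does not spell out these implementation details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQEmotionHead(nn.Module):
    """Sketch: 25 codebook vectors (assumed 5 dominant emotions x 5 intensity
    levels), 64-dimensional, selected by cosine-similarity nearest neighbour."""
    def __init__(self, dim=64, n_codes=25, n_emotions=5, beta=0.25, alpha=0.01):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, dim)
        # assumed assignment of codes to dominant emotions: codes 0-4 -> emotion 0, etc.
        self.register_buffer("code_emotion", torch.arange(n_codes) // n_emotions)
        self.n_emotions, self.beta, self.alpha = n_emotions, beta, alpha

    def forward(self, z_x, emotion_label):
        # z_x: (B, 64) encoder output; emotion_label: (B,) dominant-emotion indices
        zn = F.normalize(z_x, dim=-1)
        cn = F.normalize(self.codebook.weight, dim=-1)
        sims = zn @ cn.t()                              # (B, 25) cosine similarities, eq. (1)
        k = sims.argmax(dim=-1)                         # nearest code
        e = self.codebook(k)                            # quantized emotion embedding

        # eq. (2): cross-entropy on the dominant emotion; similarities are pooled
        # per emotion to obtain differentiable logits (our simplification)
        pool = F.one_hot(self.code_emotion, self.n_emotions).float()   # (25, 5)
        l_ce = F.cross_entropy(sims @ pool, emotion_label)

        # eq. (3): codebook update term + commitment term with stop-gradients
        l_cl = ((z_x.detach() - e) ** 2).sum(-1).mean() \
             + self.beta * ((z_x - e.detach()) ** 2).sum(-1).mean()

        return e, l_ce + self.alpha * l_cl              # eq. (4)
```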
**Emotional Speech Generator.** The generator model \(G\) is trained to transfer the emotion from a reference speech \(y\) to a neutral speech \(x\). It is structured with several key components: a spectral convolution block (neural operator) and a transformer block, followed by a ResNet encoder and decoder, similar to the generator from StarGAN-VC architectures. The generator maps the mel-spectrogram of the input speech to a new mel-spectrogram featuring the same speaking style but incorporating the affective features of the reference speech.
To this end, two conditions are provided to guide the model during the decoding step. The first condition involves a pretrained JDC style encoder model (Kum and Nam, 2019), which extracts fundamental frequencies from the input mel-spectrogram. It indicates to the model the desired speaking style for the generated speech. The second condition is the emotion embedding vector derived from the reference speech. It encourages the generator to generate speech samples exhibiting the same emotion as the reference speech.
To train the speech generator model, we use four loss functions. The first one is the reconstruction loss, which computes the \(l_{1}\) loss between the generated and the target mel-spectrogram, given by equation 5
\[L_{rc}=||G(x)-y||_{1} \tag{5}\]
The second loss is the spectral convergence loss, which is the normalized Euclidean distance between the spectrograms denoted by equation 6:
\[L_{sc}=\frac{||G(x)||_{2}-||y||_{2}}{||G(x)||_{2}}. \tag{6}\]
The third loss that we define to maintain the vocal cadence of the speaker called the pitch flow loss, which minimizes the difference in pitch with time, as shown in equation 7:
\[L_{pf}=\left|\left|\sum_{t=1}^{T}(G(x)_{t+1}-G(x)_{t})-\sum_{t=1}^{T}(y_{t+1}- y_{t})\right|\right|_{2}. \tag{7}\]
Lastly, we define speech emotion loss, which ensures that the target speech has the same emotion as the reference speech. We use the VQ classifier model to generate this loss indicated in equation 8:
\[L_{ser}=-\frac{1}{25}\sum_{i=1}^{25}q(z\mid y)_{i}\log(q(z\mid x)_{i}). \tag{8}\]
The overall loss function is given by equation 9:
\[L_{gen}=L_{rc}+\alpha_{1}L_{sc}+\alpha_{2}L_{pf}+\alpha_{3}L_{ser}. \tag{9}\]
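A sketch of the generator objective (5)-(9) follows; the loss weights \(\alpha_{1},\alpha_{2},\alpha_{3}\), the use of soft code distributions from the VQ classifier in (8), and the tensor layout are our assumptions.

```python
import torch
import torch.nn.functional as F

def generator_loss(gen_mel, tgt_mel, q_gen, q_ref, a1=1.0, a2=1.0, a3=1.0):
    """gen_mel, tgt_mel: (B, n_mels, T) generated and target mel-spectrograms;
    q_gen, q_ref: (B, 25) code distributions q(z|.) for the generated and reference speech."""
    # reconstruction loss, eq. (5) (mean-reduced L1)
    l_rc = F.l1_loss(gen_mel, tgt_mel)

    # spectral convergence loss, literally as written in eq. (6)
    l_sc = (gen_mel.norm(p=2) - tgt_mel.norm(p=2)) / gen_mel.norm(p=2)

    # pitch flow loss, eq. (7): match the accumulated frame-to-frame change over time
    d_gen = (gen_mel[..., 1:] - gen_mel[..., :-1]).sum(dim=-1)
    d_tgt = (tgt_mel[..., 1:] - tgt_mel[..., :-1]).sum(dim=-1)
    l_pf = (d_gen - d_tgt).norm(p=2)

    # speech emotion loss, eq. (8): cross-entropy of generated codes against reference codes
    l_ser = -(q_ref * torch.log(q_gen.clamp_min(1e-8))).sum(dim=-1).mean() / 25

    return l_rc + a1 * l_sc + a2 * l_pf + a3 * l_ser        # eq. (9)
```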
**Dataset.** The VQ-classifier was first trained on the MEAD dataset (Wang et al., 2020). The classifier was then fine-tuned on the bilingual ESD dataset (Zhou et al., 2021), containing the same utterances in different emotions, uttered in both English and Chinese.
**Training.** The VQ-classifier was trained on the MEAD and ESD datasets for 20 epochs, with a 70-20-10 split. The generator model was trained for 200 epochs with a batch size of 10. This model was trained on the ESD dataset (70-20-10 split) on an Nvidia A30 GPU.
## 4 Evaluations
### Quantitative Metrics
**Target Speech Emotion Recognition.** In our evaluation process, the dominant emotion is identified by the VQ-classifier. This obtained emotion label is then compared
against the corresponding label associated with the reference input. We use Wav2Vec 2.0 to measure finer affect qualities such as valence, arousal, and dominance. Furthermore, it is not enough to rely solely on quantitative metrics to measure finer nuances in affect. Therefore, we performed surveys to determine Mean Opinion Scores (MOS) and Emotion Perception Scores (EPS).
**Mel-Cepstral Distortion (MCD).** Mel-Cepstral distortion serves as a quantitative metric, enabling the assessment of dissimilarities between two mel-spectrograms.
**Pearson Correlation Coefficient (PCC).** Pitch is a significant factor influencing speech emotion. Pitch can be represented by the fundamental frequency F0. The Pearson correlation coefficient is a metric that allows us to determine the correlation between two pitch sequences.
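A minimal sketch of this metric is given below, assuming two F0 contours that have already been extracted and aligned to the same length; restricting the computation to voiced frames is our own choice, and the function name is hypothetical.

```python
import numpy as np

def pitch_pcc(f0_ref: np.ndarray, f0_gen: np.ndarray) -> float:
    """Pearson correlation between two aligned F0 contours of equal length."""
    voiced = (f0_ref > 0) & (f0_gen > 0)   # keep frames voiced in both contours
    x = f0_ref[voiced] - f0_ref[voiced].mean()
    y = f0_gen[voiced] - f0_gen[voiced].mean()
    return float((x @ y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-8))
```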
**Structural Similarity Index (SSIM).** We used the structural similarity index as a metric to compare the similarity between ground-truth and generated mel-spectrograms during training. SSIM was used as the indicator metric to determine the quality of the generated mel-spectrogram.
### Qualitative Evaluations
As part of the qualitative evaluation, users were asked to evaluate the quality of emotion in the synthesized speech based on the emotion from the reference speech. They were also asked to identify the emotion of the synthesized speech.
**Mean Opinion Score (MOS).** The Mean Opinion Score, or MOS, was calculated based on the quality of the synthesized speech. The survey presented the users with three speech samples: the neutral input prompt, the reference speech, and the generated speech with the target affect. The users were asked to rank the samples on a scale of 1 to 5, with 1 being the lowest in quality. Sixteen speech triplets were presented to the users in four categories: English input with English reference, English input with Chinese reference, Chinese input with Chinese reference, and Chinese input with English reference.
**Emotion Perception Score (EPS).** For the computation of the emotion perception score, participants were presented with a task wherein they were required to select the dominant emotion conveyed by the synthesized speech from a set of five predefined options, namely Sad, Happy, Angry, Neutral, and Surprised. The user-annotated emotion option was subsequently compared with the Speech Emotion Recognition (SER) label assigned to the reference speech.
## 5 Experiments
The input mel-spectrograms are generated from raw audio samples in wav format. Torchaudio's mel-spectrogram transform function is used to convert the audio into the mel-spectrogram. The number of bins is set to 80, and the length of the fast Fourier transform is set to 2048. Window length and hop length are set to 1200 and 300, respectively. The input to the models is the log-normalized version of this mel-spectrogram, which was experimentally found to perform better than raw mel-spectrograms.
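A minimal preprocessing sketch with the stated settings is shown below; the sample rate and the exact log normalisation are assumptions, since the text only specifies that a log-normalised mel-spectrogram is fed to the models.

```python
import torch
import torchaudio

# Settings from the text; the 16 kHz sample rate is an assumption.
mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000,
    n_fft=2048,
    win_length=1200,
    hop_length=300,
    n_mels=80,
)

def wav_to_log_mel(path: str) -> torch.Tensor:
    wav, sr = torchaudio.load(path)        # (channels, samples)
    mel = mel_transform(wav.mean(dim=0))   # (80, frames), mono mix-down
    return torch.log(mel + 1e-5)           # log-normalised mel-spectrogram
```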
**One-to-One Emotion Mapping (Same Speaker).** This experiment tested the model's ability to translate emotions when the reference embeddings correspond to the same speaker. The following table summarizes the performance of the model on both English and Chinese speech samples. MCD refers to Mel-Cepstral distortion. Lower values of MCD indicate higher quality of output. SSIM is the structural similarity between the ground truth and the generated output. SER is the emotion recognition score, and MOS is the mean opinion score. Higher values of SSIM, SER, and MOS indicate better quality of output.
It can be observed in Tables 3 and 4 that when the model was conditioned on a previously unseen embedding, the quality of the output with respect to the ground truth dropped slightly. This, however, is not an indication of poor model performance, because the model generates speech with a variation of the dominant emotion, so the output has to differ from the ground truth. The Mean Opinion Score serves as a better metric to evaluate the quality of the output.
**Cross-Language Speaker-Independent Emotion Translation.** In this setting, we tested the ability of the model to translate intonations from one language to another. Different languages employ varied styles of expressing emotions. Accurately translating the affect qualities across languages ensures the ability of the model to map variations in affect while exhibiting the same dominant emotion. The reference speech was in a different language from the input speech and had a different speaker and different linguistic content.
In this setting, Table 5 shows that the model performs well even when conditioned on samples from a different language. This outcome demonstrates the model's ability to effectively capture and utilize information across distinct distributions, which is a significant advantage of the vector-quantized codebook. The generated embeddings consistently map to the known codebook vector with the highest cosine similarity. Consequently, the generator model is always conditioned on a known affect vector, irrespective of the language, ensuring consistent and reliable affect representation in the synthesized speech across different linguistic contexts.
Figure 2 represents the plot of Pearson Correlation Coefficients across different emotions in the cross-language setting. It can be observed that these values have varying degrees of correlation with the ground truth speech sample, indicating that the linguistic content, speaker identity and speaking style remain consistent, but the affect qualities vary across the different vectors of the codebook.
**Translation Accuracy by Emotion.** This assessment serves two key purposes: first, to determine any potential biases towards specific emotions within the model, and second, to assess the overall separability of emotions from one another. By conducting this experiment, we aimed to identify correlations between emotional states and the limitations in separating similar but distinct emotional states.
It is apparent from Tables 6 and 7 that the model performs slightly better on Chinese samples. Furthermore, happiness is the emotion most likely to be misclassified in both cases, being confused with either anger or surprise. From Figure 3, it can be concluded that this is because the valence-arousal-dominance values of these emotions are very similar, making them difficult to distinguish from each other.
**Skewness in Quantized Embeddings.** We conducted experiments to investigate potential biases within the model towards specific quantized embeddings over others. The aim was to ascertain whether the model displayed preferences or imbalances in representing certain emotional nuances. Additionally, we explored the possibility of language dependencies within these quantized embeddings, assessing whether the emotion representations varied across different languages.
Tables 8 and 9 show that the distribution for English and Chinese speech samples is quite similar, indicating that the mapping is independent of the language. Some of the embeddings, such as Angry-Q3 and Neutral-Q2, are rarely chosen.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Emotion** & **Angry** & **Happy** & **Neutral** & **Sad** & **Surprised** \\ \hline Angry & 93.17\% & 3.76\% & 2\% & 0.43\% & 0.56\% \\ Happy & 12.5\% & 76.7\% & 1.26\% & 1.06\% & 8.36\% \\ Neutral & 0.76\% & 0.2\% & 96.96\% & 2.03\% & 0.03\% \\ Sad & 0.6\% & 0.36\% & 2.3\% & 96.56\% & 0.16\% \\ Surprised & 2\% & 5.1\% & 0.03\% & 0.33\% & 92.2\% \\ \hline \end{tabular}
\end{table}
Table 6: Emotion Embedding classification accuracy for English Speech Samples
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Emotion** & **MCD** & **SSIM** & **SER** \\ \hline Angry & 5.81 & 0.51 & 0.83 \\ Happy & 5.72 & 0.52 & 0.63 \\ Sad & **4.28** & **0.67** & **0.85** \\ Surprised & 6.10 & 0.50 & 0.65 \\ \hline \end{tabular}
\end{table}
Table 4: The table presents the performance of the model in generating speaker-independent synthetic speech with the dominant emotion in the Chinese Language
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Emotion** & **Angry** & **Happy** & **Neutral** & **Sad** & **Surprised** \\ \hline Angry & 96.8\% & 1.9\% & 0.96\% & 0\% & 0.33\% \\ Happy & 7.4\% & 85.4\% & 0.36\% & 0.43 \% & 6.4\% \\ Neutral & 0.3\% & 0.03\% & 98.4\% & 1.2\% & 0\% \\ Sad & 0.2\% & 0.03\% & 0.8\% & 98.6\% & 0.3\% \\ Surprised & 1.1\% & 4.43\% & 0.43\% & 0.3\% & 93.7\% \\ \hline \end{tabular}
\end{table}
Table 7: Emotion Embedding classification accuracy for Chinese Speech Samples
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Emotion** & **MCD** & **SSIM** & **SER** & \\ \hline Angry & 5.47 & 0.60 & **0.83** & 0.81 \\ Happy & 5.06 & 0.55 & 0.66 & 0.50 \\ Sad & **4.14** & **0.63** & 0.78 & **0.87** \\ Surprised & 5.48 & 0.56 & 0.67 & 0.61 \\ \hline \end{tabular}
\end{table}
Table 5: The table presents the performance of the model in generating speaker-independent synthetic speech with the dominant emotion in either of the two languages, randomly picked
Upon further investigation, it was found that these embeddings contained information corresponding to emotions such as fear, contempt, and disgust, which were absent from the ESD dataset but present in the MEAD dataset.
**Quantifying Valence-Arousal-Dominance.** A key contribution of our work is to represent finer nuances in affect features beyond the simple representation of the dominant emotion. To verify that the embeddings exhibit variations in valence, arousal, and dominance, we use the fine-tuned wav2vec 2.0 dimensional emotion model of Wagner et al. (2023) to calculate these attributes from the generated mel-spectrogram.
Figure 3 represents a scatter plot of average valence-arousal-dominance values generated on 200 samples of audio. Each point represents the audio conditioned on one of the five quantized vectors from the VQ Codebook. It can be observed that the dominant emotions exhibit similar values of valence and dominance but vary in arousal. Sadness has the lowest valence, while surprise has the highest value of valence.
## 6 Ablation Studies
### Effect of Vector Quantization
To understand the effect of the VQ codebook, we built an otherwise identical classifier without the codebook. We use t-distributed Stochastic Neighbor Embedding (t-SNE) visualizations to demonstrate the effectiveness of the quantized emotion space. Notably, we observe that the embeddings corresponding to dominant emotions are grouped together, indicating successful clustering, while simultaneously diverging in the direction of the less dominant emotion. This divergence signifies the varying intensity of the dominant emotion, showcasing the model's capability to represent emotional expressions with finer granularity.
From Figure 4, it is apparent that neutral and sadness are grouped close together, while anger, happiness, and surprise lie higher up in the plot. Happiness and surprise are somewhat entangled with each other, indicating the difficulty of distinguishing the two. Figure 5, however, exhibits very little clustering in terms of emotion because it represents the embeddings generated without vector quantization. In this scenario, each embedding independently contains affect information. Figure 6 highlights the percentage of audio correctly classified by the two models, with gray indicating the samples correctly classified by both models.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Emotion** & **p-value** \\ \hline Angry & 0.005 \\ Happy & 0.2 \\ Neutral & 0.014 \\ Sad & 0.001 \\ Surprised & 0.001 \\ \hline \end{tabular}
\end{table}
Table 10: _One-Tailed T-test scores comparing the performance of the VQ-Classifier model against the non-VQ model over 50 trials._
Figure 3: _Scatter plot of average valence-arousal-dominance values of 200 generated audio samples in both English and Chinese_
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Emotion** & **Q1** & **Q2** & **Q3** & **Q4** & **Q5** \\ \hline Angry & 41.93\% & 5.83\% & 0.14\% & 51.45\% & 0.63\% \\ Happy & 35.27\% & 6.41\% & 17.09\% & 35.59\% & 5.71\% \\ Neutral & 28.73\% & 0.43\% & 5.98\% & 35.42\% & 29.41\% \\ Sad & 50.93\% & 0.06\% & 43.07\% & 5.58\% & 0.33\% \\ Surprised & 18.71\% & 33.12\% & 44.59\% & 3.49\% & 0.07\% \\ \hline \end{tabular}
\end{table}
Table 9: _Distribution of dominant emotions within the five quantized spaces in the ESD Chinese speech corpus_
Figure 2: _The graph shows trends in Pearson Correlation Coefficients across different emotions. It can be observed that the correlation varies with the ground truth, representing pure emotion. This indicates that the five embeddings represent varying levels of the same emotion_
It can be observed that the VQ codebook model correctly classifies a larger percentage of the dataset than the non-VQ model. Table 10 reports a one-tailed t-test showing that the VQ-classifier outperforms the non-VQ classifier for all emotions except happiness.
### Effect of Spectral Convolution Layers
The use of global convolution in the spectral domain, also termed a _neural operator_, has shown promise in vision tasks [12]. More recently, it was shown in [13] that the neural operator outperforms convolution in speech generation models. We trained two versions of the generator model, one with a spectral convolution block and one with only a regular convolution block. To test the difference in the obtained results, we performed the Wilcoxon signed-rank test, checking whether the outputs of the neural-operator model had higher SSIM than the outputs of the model without the neural operator. The metric used for comparison was 50 instances of the average SSIM of 50 randomly sampled data points from the test set. The p-value was 0.00224, with a V statistic of 923.0, indicating that the neural operator improved the model.
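The comparison described above can be reproduced with a one-sided Wilcoxon signed-rank test as sketched below; the SSIM arrays are hypothetical stand-ins for the 50 paired averages mentioned in the text.

```python
import numpy as np
from scipy.stats import wilcoxon

# 50 paired average-SSIM values per model (hypothetical numbers for illustration).
rng = np.random.default_rng(0)
ssim_neural_op = rng.uniform(0.55, 0.70, size=50)
ssim_baseline = rng.uniform(0.50, 0.65, size=50)

# One-sided test: does the neural-operator model yield higher SSIM?
stat, p_value = wilcoxon(ssim_neural_op, ssim_baseline, alternative="greater")
print(f"V = {stat:.1f}, p = {p_value:.5f}")
```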
|
2310.12765
|
Energy-Based Models For Speech Synthesis
|
Recently there has been a lot of interest in non-autoregressive (non-AR)
models for speech synthesis, such as FastSpeech 2 and diffusion models. Unlike
AR models, these models do not have autoregressive dependencies among outputs
which makes inference efficient. This paper expands the range of available
non-AR models with another member called energy-based models (EBMs). The paper
describes how noise contrastive estimation, which relies on the comparison
between positive and negative samples, can be used to train EBMs. It proposes a
number of strategies for generating effective negative samples, including using
high-performing AR models. It also describes how sampling from EBMs can be
performed using Langevin Markov Chain Monte-Carlo (MCMC). The use of Langevin
MCMC enables to draw connections between EBMs and currently popular diffusion
models. Experiments on LJSpeech dataset show that the proposed approach offers
improvements over Tacotron 2.
|
Wanli Sun, Zehai Tu, Anton Ragni
|
2023-10-19T14:10:09Z
|
http://arxiv.org/abs/2310.12765v1
|
# Energy-based Models for Speech Synthesis
###### Abstract
Recently there has been a lot of interest in non-autoregressive (non-AR) models for speech synthesis, such as FastSpeech 2 and diffusion models. Unlike AR models, these models do not have autoregressive dependencies among outputs, which makes inference efficient. This paper expands the range of available non-AR models with another member called energy-based models (EBMs). The paper describes how noise contrastive estimation, which relies on the comparison between positive and negative samples, can be used to train EBMs. It proposes a number of strategies for generating effective negative samples, including using high-performing AR models. It also describes how sampling from EBMs can be performed using Langevin Markov Chain Monte-Carlo (MCMC). The use of Langevin MCMC makes it possible to draw connections between EBMs and currently popular diffusion models. Experiments on the LJSpeech dataset show that the proposed approach offers improvements over Tacotron 2.
Wanli Sun, Zehai Tu, Anton Ragni Department of Computer Science, University of Sheffield, Sheffield, UK
{wsun20, ztu3, a.ragni}@sheffield.ac.uk

**Index Terms**: speech synthesis, energy-based models, iterative inference
## 1 Introduction
Neural-network-based synthesis has made impressive improvements over statistical speech synthesis. However, these deep-learning-based text-to-speech (TTS) approaches often feature inconsistencies, as did statistical approaches. For example, auto-regressive (AR) models, such as Tacotron 2 [1] and Transformer-TTS [2], are almost exclusively trained using teacher forcing [3], where reference rather than predicted values are fed back into the generative process. Such a mismatch between training and inference causes an inconsistency called _exposure bias_ [4], which may lead to poor generated speech quality (e.g. repetition, skipping, and long pauses [5]). So far there have been a few attempts to alleviate exposure bias, such as scheduled sampling [6] and attention mechanisms [7]. However, their effective application is complicated due to a number of "training hacks" employed to ensure stable learning [8].
Recently, there has been interest in non-AR models, such as FastSpeech 2 [9] and diffusion models [10]. These models generally do not use teacher forcing as a part of their training and hence should be free of the aforementioned inconsistencies. This paper describes another class of non-AR models called energy-based models (EBMs) [11], which, as will be shown later, have connections to currently popular diffusion models. Given a text, an EBM defines an energy-function over all possible spoken realisations. Although it is possible to formulate the conditional probability distribution of speech given text for EBMs, the intractable normalisation term would make training and inference approaches relying on the probability distribution infeasible.
Instead, training of EBMs can be performed using noise contrastive estimation (NCE), which compares speech data (positive examples), which is assumed to represent high quality speech, and imperfect speech data (negative examples). The nature of imperfection, or negative examples, is crucial when training EBMs. This paper describes a number of effective strategies to generate negative examples, including by means of existing TTS models. Inference with EBMs can be performed using Langevin Markov Chain Monte-Carlo (MCMC) [12, 13]. Given that a similar iterative algorithm is often used with diffusion models (e.g., Grad-TTS [10]), this paper discusses connections between EBMs and diffusion models.
This paper makes the following specific contributions:
1. first energy-based text-to-speech model;
2. a range of methods for generating effective negative samples to use in NCE and elsewhere;
3. link between diffusion models and energy-based models.
The rest of this paper is organized as follows. Section 2 describes energy-based models (EBM), which includes inference, training and negative sampling methods. Section 3 relates EBMs to filtering approaches and diffusion models. Experimental results and discussion are presented in Section 4. Conclusions drawn from this work and future research directions are presented in Section 5.
## 2 Energy-based Models
Given a text sequence \(\mathbf{x}\), an energy-based model (EBM) of speech feature sequences \(\mathbf{Y}\) (e.g. log-Mel spectrograms) can be defined by1
Footnote 1: An alternative formulation would involve parameterising the gradient of energy instead. Such an approach is possible due to existence of inference and training approaches that rely only on the gradient of energy.
\[p_{\mathbf{\theta}}(\mathbf{Y}|\mathbf{x})=\frac{1}{Z_{\mathbf{\theta}}(\mathbf{x})}\exp\left(-E _{\mathbf{\theta}}(\mathbf{x},\mathbf{Y})\right), \tag{1}\]
where \(\mathbf{\theta}\) are model parameters, \(E_{\mathbf{\theta}}(\mathbf{x},\mathbf{Y})\) is an energy between text and speech, \(Z_{\mathbf{\theta}}(\mathbf{x})\) is a normalisation term. Unlike speech signal energies commonly used in models like FastSpeech 2, EBM energies \(E_{\mathbf{\theta}}(\mathbf{x},\mathbf{Y})\) reflect the correspondence between text \(\mathbf{x}\) and speech \(\mathbf{Y}\) pairs. Better matching pairs are expected to yield lower energies and _vice versa_. The normalising term \(Z_{\mathbf{\theta}}(\mathbf{x})\) is intractable to compute exactly. Thus, only certain inference and parameter estimation approaches can be used for EBMs.
### Inference
For tasks where outputs are represented by discrete tokens (_e.g._ characters or words), such as text generation [14] and speech recognition [15], EBMs are commonly used to rerank hypotheses generated during beam search. In contrast, for tasks where outputs are represented by continuous variables, such as speech synthesis, EBMs can
be used for updating hypotheses themselves. This can be done using Langevin Markov Chain Monte-Carlo (MCMC). The Langevin MCMC is an iterative process where, given an initial hypothesis \(\mathbf{Y}^{(0)}\), the next hypothesis is obtained by
\[\mathbf{Y}^{(N+1)}\!\leftarrow\!\mathbf{Y}^{(N)}\!-\!\lambda\ \nabla_{\mathbf{Y}}E_{\mathbf{ \theta}}(\mathbf{x},\mathbf{Y})|_{\mathbf{Y}=\mathbf{Y}^{(N)}}+\sqrt{2\lambda}\mathbf{Z}^{(N)}, \tag{2}\]
where \(\mathbf{Z}^{(N)}\sim\mathcal{N}(\mathbf{0},\mu\mathbf{I})\), \(\lambda\) is an updating rate and \(\mu\) is commonly set to 1. The need to specify initial hypotheses \(\mathbf{Y}^{(0)}\) offers a number of interesting options. In the standard Langevin MCMC initial hypotheses are drawn from a simple prior distribution, such as Gaussian [16]. However, more informative priors, such as high-performing TTS models (e.g. Tacotron 2 and FastSpeech 2), can also be explored.
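A minimal sketch of this update rule is given below. It assumes an `energy_fn(x, Y)` returning a scalar utterance-level energy; the step count, updating rate, and initialisation are placeholders rather than the settings used later in the paper.

```python
import torch

def langevin_refine(energy_fn, x, y0, steps=100, lam=1e-3, mu=1.0):
    """Refine an initial hypothesis y0 with Langevin MCMC (Eq. 2)."""
    y = y0.detach().clone()
    for _ in range(steps):
        y.requires_grad_(True)
        energy = energy_fn(x, y)                 # scalar E_theta(x, Y)
        grad, = torch.autograd.grad(energy, y)   # gradient w.r.t. the hypothesis
        with torch.no_grad():
            noise = torch.randn_like(y) * mu ** 0.5
            y = y - lam * grad + (2 * lam) ** 0.5 * noise
    return y.detach()
```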
### Training
Since the normalising factor \(Z_{\mathbf{\theta}}(\mathbf{x})\) is intractable, approaches relying on \(p_{\mathbf{\theta}}(\mathbf{Y}|\mathbf{x})\) can not be used with EBMs. Furthermore, popular gradient-based MCMC approaches [17] can not be applied with discrete input, as in this work, and Gibbs sampling [18] would be too computationally expensive. Fortunately, noise contrastive estimation (NCE) [19] provides a feasible solution to optimize energy functions. The NCE loss function for EBMs in eq. (1) is given by
\[\begin{split}\mathcal{L}_{\mathbf{\theta}}(\mathbf{x},\mathbf{Y}^{+},\mathbf{Y}^{ -})=&-\log\left(\frac{1}{1+\exp\left(E_{\mathbf{\theta}}(\mathbf{x},\mathbf{Y} ^{+})\right)}\right)\\ &-\log\left(\frac{1}{1+\exp\left(-E_{\mathbf{\theta}}(\mathbf{x},\mathbf{Y}^ {-})\right)}\right)\end{split} \tag{3}\]
where \(\mathbf{Y}^{+}\) are called positive samples and \(\mathbf{Y}^{-}\) are called negative samples. According to eq. (3), energy functions are optimal when high energy is assigned to negatives and low energy is assigned to positives. Once trained, energy functions can be used for ranking hypotheses generated by other models or inferring hypotheses using the Langevin MCMC in eq. (2).
Positive samples in NCE are usually represented by reference sequences. Negative samples, on the other hand, need to be designed. In text generation [14] it is argued that poor-quality negative examples make the task of learning the energy function easier, which leads to poor-quality energy functions. This work proposes using pre-trained TTS models to generate high-quality negative examples. Note that when a pre-trained model is used as a part of the training process, it is also possible to adopt it for initialising the Langevin MCMC in eq. (2), which is expected to speed up inference and lead to higher-quality hypotheses. The simplest method to generate negative samples from pre-trained TTS models would use hypotheses generated by those models directly. Such an approach may fail to work due to the high level of similarity between high-quality hypotheses and reference sequences. Other possible options include applying random masking (RM) and SpecAugment [20] (_i.e._ time masking (TM), frequency masking (FM) and time warping (TW)) to those hypotheses. The TM and FM methods can be seen as specific cases of the RM method and may prove less effective. For example, FM may drastically affect pitch information by masking a whole frequency range. The TW method is also expected to face challenges, as utterance-wide shortening/elongation of all sounds may be hard to separate from reference sequences.
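The masking-based strategies above amount to simple operations on a hypothesis mel-spectrogram of shape (n_mels, frames). The sketch below illustrates RM, TM and FM; the masking fractions are placeholders and the helper names are ours, not the paper's.

```python
import torch

def random_mask(mel: torch.Tensor, frac: float = 0.25) -> torch.Tensor:
    """RM: zero out a random fraction of individual time-frequency bins."""
    keep = torch.rand_like(mel) > frac
    return mel * keep

def time_mask(mel: torch.Tensor, frac: float = 0.05) -> torch.Tensor:
    """TM: zero out a random contiguous block of frames."""
    n_frames = mel.size(-1)
    width = max(1, int(frac * n_frames))
    start = torch.randint(0, n_frames - width + 1, (1,)).item()
    out = mel.clone()
    out[..., start:start + width] = 0.0
    return out

def freq_mask(mel: torch.Tensor, frac: float = 0.05) -> torch.Tensor:
    """FM: zero out a random contiguous band of mel bins."""
    n_mels = mel.size(0)
    width = max(1, int(frac * n_mels))
    start = torch.randint(0, n_mels - width + 1, (1,)).item()
    out = mel.clone()
    out[start:start + width, :] = 0.0
    return out
```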
### Architecture
The architecture of EBM explored in this work is inspired by Transformer TTS [2] and is shown in Figure 1. The EBM in Fig. 1 consists of two blocks: energy estimator (top) and feature enhancement (bottom). The goal of the energy estimator is to derive utterance-level EBM energies \(E_{\mathbf{\theta}}(\mathbf{x},\mathbf{Y})\) from the output of the feature enhancement block. This work assumes that these energies can be derived from frame-level EBM energies. As shown in Fig. 1, the energy estimator consists of two key elements: frame-level EBM energy estimation and frame-level EBM energy weighting. The latter element is motivated by an intuition that frame-level EBM energies are unlikely to make equally important contributions. For example, speech and non-speech frames will likely make different contributions to the utterance-level EBM energy
\[E_{\mathbf{\theta}}(\mathbf{x},\mathbf{Y})=\sum_{t=1}^{T}\alpha_{t}e_{t} \tag{4}\]
where \(\alpha_{1:T}\) is the sequence of attention weights generated by the EBM energy weighting module, \(e_{1:T}\) is the sequence of frame-level EBM energies and \(T\) is the number of frames. The attention weights are derived from the frame-level EBM energies. The frame-level EBM energies \(e_{1:T}\) are computed by
\[e_{t}=\mathbf{a}^{\top}\mathbf{g}_{t}+b \tag{5}\]
Figure 1: Architecture of EBMs examined in this work (inspired by Transformer TTS)
where \(\mathbf{g_{t}}\) is the output of the feature enhancement block, and \(\mathbf{a}\) and \(b\) are parameters of the frame energy module. The goal of the feature enhancement block is to enhance the typically short-term spectral information available in standard speech features, such as log-Mel spectrograms, with more advanced acoustic and linguistic information. It is a transformer-based [2] model using text as the input to the encoder, and spectral features together with the encoder output as the input to the decoder. The output of the decoder after a linear transformation, \(\mathbf{g_{t}}\), is passed to the energy estimator block. Note that the decoder does not use masking to constrain the underlying attention mechanism from attending over previous spectral features, which makes \(\mathbf{g_{t}}\) a function of the entire text and spectral feature sequences.
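A minimal sketch of the energy estimator head in Eqs. (4)-(5) is given below, assuming the feature enhancement block has already produced frame-level features; the layer sizes and the exact parameterisation of the attention weights are our assumptions.

```python
import torch
import torch.nn as nn

class EnergyEstimator(nn.Module):
    """Maps frame-level features g_t to an utterance-level energy (Eqs. 4-5)."""

    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.frame_energy = nn.Linear(feat_dim, 1)  # e_t = a^T g_t + b
        self.attn = nn.Linear(1, 1)                 # weights derived from e_t

    def forward(self, g: torch.Tensor) -> torch.Tensor:
        # g: (batch, T, feat_dim), output of the feature enhancement block
        e = self.frame_energy(g)                    # (batch, T, 1) frame energies
        alpha = torch.softmax(self.attn(e), dim=1)  # (batch, T, 1) attention weights
        return (alpha * e).sum(dim=(1, 2))          # (batch,) utterance-level energy
```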
## 3 Related Work
EBMs have been applied in a wide range of domains, e.g. natural language processing [14, 21] and automatic speech recognition [15]. The EBM proposed in this work can be related to a number of previously proposed approaches in TTS. The use of hypotheses generated by pre-trained TTS models as a part of training and inference connects this EBM to post-filtering methods. Statistical post-filtering approaches, such as [22], aim to address over-smoothing in hypotheses generated by statistical speech synthesis models. However, these approaches suffer from difficulties in accurately modelling probability density functions of the underlying speech parameterisations. Recently, there has also been interest in deep-learning-based post-filtering approaches. In [23], frequency-band-specific generative adversarial networks (GAN) were trained to improve the quality of hypotheses generated by deep-learning-based speech synthesis models. However, this approach assumes independence among frequency bands, which may lead to suboptimal results.
More recently, there has been a lot of interest in diffusion-based TTS models [10], which have been extended to audio synthesis [24] and singing voice synthesis [25]. In these models training (forward) and inference (reverse) processes iteratively build a connection between data and noise. Although seemingly different, such diffusion models and the EBM proposed in this work have clear connections. Consider, for example, the iterative inference process used by one of those diffusion models [26]
\[\mathbf{Y}^{(N+1)}\!\leftarrow\!\frac{1}{\sqrt{1\!-\!\lambda_{N}}}(\mathbf{Y}^{(N)}\!+ \!\lambda_{N}S_{\mathbf{\theta}}(\mathbf{x},\mathbf{Y}^{(N)},N))+\sqrt{\lambda_{N}}\mathbf{Z} ^{(N)}, \tag{6}\]
Compared to the Langevin MCMC in eq. (2) the key difference stems from modelling iteration, \(N\), specific \(S_{\mathbf{\theta}}(\mathbf{x},\mathbf{Y}^{(N)},N)\) score (gradient of log-likelihood) rather than iteration independent \(\nabla_{\mathbf{Y}^{(N)}}E_{\mathbf{\theta}}(\mathbf{x},\mathbf{Y}^{(N)})\) score in the EBM given by eq. (1). In addition, score matching approaches to training diffusion models can also be adopted with EBMs [26], which further strengthens the connection between these models.
## 4 Experiments
### Experimental setup
#### 4.1.1 Dataset
The dataset used in this work is LJSpeech [27], which includes 13,100 audio clips totalling approximately 24 hours from one female speaker. The dataset is split randomly into training (10,000 clips), validation (1800 clips) and test (1300 clips) sets. Objective evaluation is performed over the entire test set whilst subjective evaluation is performed over 100 randomly chosen test set clips. Front-end pre-processing of audio follows the open-source implementation available as a part of NVIDIA's Tacotron 2. 2
Footnote 2: [https://github.com/NVIDIA/tacotron2](https://github.com/NVIDIA/tacotron2)
#### 4.1.2 Models
The pre-trained TTS model providing hypotheses for training and inference is Tacotron 2 [1]. NVIDIA's open-source implementation with the default configuration was adopted in this work. The structure of the EBM follows the corresponding elements of Transformer-TTS [2] available through an open-source implementation 3 except that: 1) positional and character embeddings are 256-dimensional; 2) two EBMs with different dimensions of hidden features, 128 and 256 respectively, are explored in the study. Utterance-level energy is predicted by the frame-level EBM energy prediction module, which consists of two 512-dimensional fully-connected layers. Although it is possible to backpropagate gradients through the pre-trained TTS model, for simplicity this was not explored in this work. Both EBMs are trained for 125K iterations using the Adam optimizer with a batch size of 16 and a constant learning rate of \(1\times 10^{-4}\) on a single NVIDIA 3090 GPU. The number of parameters of these 2 EBMs and Tacotron 2 are shown in Table 1. We use an open-source implementation4 of the WaveGlow [28] vocoder and adopt its default settings.
Footnote 3: [https://github.com/soobinseo/Transformer-TTS](https://github.com/soobinseo/Transformer-TTS)
Footnote 4: [https://github.com/NVIDIA/waveglow](https://github.com/NVIDIA/waveglow)
#### 4.1.3 Evaluation
Mel cepstral distortion (MCD), F0 frame error (FFE) and log-scale F0 root mean square error (log F0 RMSE) are adopted as objective metrics in this work. The MCD metric calculates the distance between cepstral coefficient sequences of different lengths on the Mel frequency scale. The FFE metric measures the discrepancy of fundamental frequency (F0) between synthesized and reference waveforms. Before computing the objective metrics, dynamic time warping (DTW) is used to align the predicted mel-spectrogram with the reference. The FAIRSEQ \(S^{2}\) toolkit [29] is used to compute MCD and FFE scores. Mean opinion score (MOS) evaluation is conducted to evaluate speech naturalness by scoring each speech sample on a scale from 1 to 5 with 1-point intervals. Waveforms synthesized by the 3 models compared in this work are mixed with test set waveforms. Each audio clip is listened to by 5 listeners, who are native English speakers, on the Amazon Mechanical Turk platform.
### Negative sampling methods
Table 1 compares Tacotron 2 and two initial EBMs. These EBMs were trained using Tacotron 2 generated hypotheses as negative samples and a single-step (\(N=1\)) Langevin MCMC, where \(\mu\) was set to \(0\) for simplicity and an Adam rather than gradient-descent update rule was adopted.
\begin{table}
\begin{tabular}{l|c c c|c} \hline \hline
**Model** & **MCD**\(\downarrow\) & **FFE**\(\downarrow\) & \(\log f_{o}\downarrow\) & **Parameters** \\ \hline Tacotron 2 & 4.218 & 47.31\% & 0.292 & 28.19M \\ EBM\({}^{(1)}\) (small) & 4.163 & 47.06\% & 0.289 & 2.30M \\ EBM\({}^{(1)}\) (large) & 4.178 & 47.05\% & 0.291 & 7.64M \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between Tacotron 2 and two EBMs utilising Tacotron 2 hypotheses as negative samples
Both large (256 hidden features) and small (128 hidden features) EBMs perform slightly better than the baseline.
Table 2 summarises the performance of the alternative negative sampling methods (see Sec. 2.2) with the large EBM, where the simplified Langevin MCMC was run for \(N=100\) steps. Many of these methods show significantly better performance than the baseline. Comparing compressed and stretched spectral features suggests no strong preference for any particular method of time warping (TW). The method of 5% time masking (TM) achieves lower MCD and FFE compared to other time masking methods, while the trend is opposite for frequency masking (FM), where a higher percentage (15%) appears to yield better MCD results and worse FFE results. The likely reason is the negative interaction between FFE and frequency masking.
Table 3 investigates the impact of the Langevin MCMC steps on the performance of the best system in Table 2. As the number of steps increases, the EBM applying 25% random masking to negative samples performs better and better.
Although random masking appears to be the most effective negative sampling method, the other masking methods may bring additional complementary information. Table 4 summarises performance of different combination approaches involving the RM 30% EBM in Table 2. All combinations examined perform better than using only random masking. The EBM making use of all masking methods performs the best.
### Subjective evaluation
To solicit subjective assessment, a range of listening tests were conducted (see Sec. 4.1.3). Table 5 shows that the proposed EBM achieves generally better MOS scores than the baseline Tacotron 2. Furthermore, the detailed breakdown of MOS score counts in Table 6 shows that the EBM significantly reduced the number of MOS scores of 2 (-2) and 3 (-27) and increased the number of MOS scores of 4 (+26) and 5 (+3).
## 5 Conclusions
This paper proposed a new class of non-autoregressive (non-AR) text-to-speech (TTS) models called energy-based models (EBMs). As an example, it shows how powerful forms of EBMs can be designed by adopting architectures of state-of-the-art AR models like Transformer TTS. Although training models like EBMs is more complicated due to the intractability of normalisation terms, a range of training approaches is available. This paper describes how one such approach, called noise contrastive estimation (NCE), can be adopted for training. As NCE critically relies on the quality of the negative samples used to contrast reference speech feature sequences, this paper proposed and evaluated a wide range of negative sampling methods. It found that random masking is the single best method, but the combination of all proposed methods yielded the best performance. The paper also shows how sampling from EBMs can be performed by means of Langevin Markov Chain Monte-Carlo (MCMC). Since Langevin MCMC is closely linked with an iterative method used by popular diffusion models, the paper discusses similarities between EBMs and diffusion models. The paper concludes with a subjective evaluation, which finds that the proposed model provides improvements over Tacotron 2. Future work with EBMs will explore score parameterisation and the use of alternative TTS models for architectural and negative sampling choices.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline
**MOS** & **1** & **2** & **3** & **4** & **5** \\ \hline
**Tacotron 2** & 0 & 3 & 121 & 344 & 15 \\
**EBM** & 0 & 1 & 94 & 370 & 18 \\ \hline \hline \end{tabular}
\end{table}
Table 6: MOS score counts
\begin{table}
\begin{tabular}{l l|c c c} \hline \hline
**Condition** & & **MCD \(\downarrow\)** & **FFE \(\downarrow\)** & \(\log f_{o}\downarrow\) \\ \hline
**TM:** & 5\% & 4.149 & 46.89\% & 0.290 \\ & 10\% & 4.166 & 46.98\% & 0.284 \\ & 15\% & 4.161 & 47.30\% & 0.285 \\ \hline
**FM:** & 5\% & 4.166 & 46.88\% & 0.292 \\ & 10\% & 4.138 & 47.27\% & 0.286 \\ & 15\% & 4.097 & 47.35\% & 0.284 \\ \hline
**TW:** & 1.2 (compress) & 4.134 & 47.05\% & 0.291 \\ & 1.1 (compress) & 4.170 & 47.03\% & 0.291 \\ & 0.9 (stretch) & 4.168 & 47.28\% & 0.284 \\ & 0.8 (stretch) & 4.159 & 47.29\% & 0.286 \\ \hline
**RM:** & 25\% & **3.943** & **46.16**\% & 0.282 \\ & 30\% & 4.013 & 46.57\% & **0.280** \\ \hline & **Baseline** & 4.218 & 47.31\% & 0.292 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Negative sampling methods (95% confidence intervals)
\begin{table}
\begin{tabular}{c|c c c} \hline \hline
**Step** & **MCD\(\downarrow\)** & **FFE\(\downarrow\)** & \(\log f_{o}\downarrow\) \\ \hline
0 & 4.218 & 47.31\% & 0.292 \\
1 & 4.217 & 47.31\% & 0.292 \\
100 & 3.943 & 46.16\% & 0.282 \\
300 & **3.937** & **45.85\(\%\)** & **0.276** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Simplified Langevin MCMC (95% confidence intervals)
\begin{table}
\begin{tabular}{c c c c|c c c} \hline \hline
**RM** & **TM** & **FM** & **TW** & **MCD \(\downarrow\)** & **FFE \(\downarrow\)** & \(\log f_{o}\downarrow\) \\
30\% & 5\% & 5\% & 1.2 & & & \\ \hline ✓ & & & & 4.013 & 46.57\% & 0.280 \\ ✓ & ✓ & & & 3.958 & 46.31\% & 0.276 \\ ✓ & & ✓ & & 3.997 & 45.63\% & 0.278 \\ ✓ & & & ✓ & 3.927 & 45.98\% & 0.267 \\ ✓ & ✓ & ✓ & ✓ & **3.882** & **45.36**\% & **0.258** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Combination of negative sampling methods (95% confidence intervals)
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline
**Model** & **MOS** \\ \hline Ground Truth & 4.53\(\pm\)0.05 \\ Ground Truth (log-Mel + WaveGlow) & 4.39\(\pm\)0.07 \\ Tacotron 2 (log-Mel + WaveGlow) & 3.77\(\pm\)0.11 \\ \hline EBM (log-Mel + WaveGlow) & 3.84\(\pm\)0.13 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Subjective evaluation
|
2303.05321
|
WASD: A Wilder Active Speaker Detection Dataset
|
Current Active Speaker Detection (ASD) models achieve great results on
AVA-ActiveSpeaker (AVA), using only sound and facial features. Although this
approach is applicable in movie setups (AVA), it is not suited for less
constrained conditions. To demonstrate this limitation, we propose a Wilder
Active Speaker Detection (WASD) dataset, with increased difficulty by targeting
the two key components of current ASD: audio and face. Grouped into 5
categories, ranging from optimal conditions to surveillance settings, WASD
contains incremental challenges for ASD with tactical impairment of audio and
face data. We select state-of-the-art models and assess their performance in
two groups of WASD: Easy (cooperative settings) and Hard (audio and/or face are
specifically degraded). The results show that: 1) AVA trained models maintain a
state-of-the-art performance in WASD Easy group, while underperforming in the
Hard one, showing the 2) similarity between AVA and Easy data; and 3) training
in WASD does not improve models performance to AVA levels, particularly for
audio impairment and surveillance settings. This shows that AVA does not
prepare models for wild ASD and current approaches are subpar to deal with such
conditions. The proposed dataset also contains body data annotations to provide
a new source for ASD, and is available at https://github.com/Tiago-Roxo/WASD.
|
Tiago Roxo, Joana C. Costa, Pedro R. M. Inácio, Hugo Proença
|
2023-03-09T15:13:22Z
|
http://arxiv.org/abs/2303.05321v1
|
# WASD: A Wilder Active Speaker Detection Dataset
###### Abstract
Current Active Speaker Detection (ASD) models achieve great results on AVA-ActiveSpeaker (AVA), using only sound and facial features. Although this approach is applicable in movie setups (AVA), it is not suited for less constrained conditions. To demonstrate this limitation, we propose a Wilder Active Speaker Detection (WASD) dataset, with increased difficulty by targeting the two key components of current ASD: audio and face. Grouped into 5 categories, ranging from optimal conditions to surveillance settings, WASD contains incremental challenges for ASD with tactical impairment of audio and face data. We select state-of-the-art models and assess their performance in two groups of WASD: Easy (cooperative settings) and Hard (audio and/or face are specifically degraded). The results show that: 1) AVA trained models maintain a state-of-the-art performance in WASD Easy group, while under-performing in the Hard one, showing the 2) similarity between AVA and Easy data; and 3) training in WASD does not improve models performance to AVA levels, particularly for audio impairment and surveillance settings. This shows that AVA does not prepare models for wild ASD and current approaches are subpar to deal with such conditions. The proposed dataset also contains body data annotations to provide a new source for ASD, and is available at [https://github.com/Tiago-Roxo/WASD](https://github.com/Tiago-Roxo/WASD).
## 1 Introduction
Active Speaker Detection (ASD) aims to identify, from a set of potential candidates, active speakers on a given visual scene [40]. Currently, this assessment is done at the video frame level based on facial cues and sound information. Despite its application in several topics such as speaker diarization [10, 12, 23], human-robot interaction, or speaker tracking [36, 37], its applicability in wild conditions is still an open issue.
The state-of-the-art dataset for ASD is AVA-ActiveSpeaker [40], composed of several Hollywood movies, with diversity in languages, recording conditions, and speaker demographics, totalling 38 hours and over 3 million face images. Although AVA-ActiveSpeaker has some challenging aspects, it still is not a perfect representation of _in-the-wild_ data [40], since it assesses ASD in movies, a setup with controlled (scripted) action and speaking, with adequate audio and image quality. This motivates state-of-the-art models to identify active speakers solely based on audio and face data, disregarding other information such as speaking context or body expressions. This is particularly problematic since ASD in wild conditions cannot assume face availability, subject cooperation, and good audio quality, as shown in Figure 1. To overcome these limitations, we propose a Wilder Active Speaker Detection (WASD) Dataset.
WASD aims to preserve the challenging characteristics of AVA-ActiveSpeaker while increasing the difficulty of ASD by targeting the two key components that state-of-the-art models use: face and audio.
Figure 1: AVA-ActiveSpeaker state-of-the-art models achieve over 94% mean Average Precision (mAP) in active speaker detection, solely based on **face** and **audio data**. However, this approach may not be suited for uncooperative poses, non-guaranteed face access, or unreliable image/audio quality. How well do these models perform in such scenarios? And can body information aid in this task?
We select videos from YouTube and group them into 5 categories, based on a set of features targeted at face and audio impairment. The categories range from optimal conditions (face availability and good audio quality) to surveillance settings (non-guaranteed face access, subject cooperation, or sound quality). The increasing scale of ASD challenges can be useful to: 1) assess the ability of current models to deal with wild conditions and specific aspect impairment (audio, face, or a combination of both); 2) evaluate the limitations of AVA-ActiveSpeaker in preparing models for wild conditions; and 3) show the limitations of face and audio dependency for wild ASD, easing the identification of model improvements towards this goal. By selecting YouTube videos from real interactions, WASD also contains expressions, sudden interruptions, and interactions that movies hardly contain. These additional challenges, enhanced by the variability of demographics in WASD, contribute to a challenging ASD dataset where state-of-the-art models cannot easily perform. Furthermore, WASD provides body data annotations to motivate the development of models using body information to complement face and audio data in (wild) ASD. To summarize, the main contributions are:
* We propose WASD, an ASD dataset divided into 5 categories with incremental ASD challenges, targeting audio quality and face availability, ranging from optimal conditions to surveillance settings;
* We assess and show the limitations of AVA-ActiveSpeaker training and state-of-the-art approaches for ASD in setups with audio impairment, facial occlusion, and surveillance settings.
## 2 Related Work
**Active Speaker Detection.** Works on ASD have evolved from facial visual cues [21, 34, 42] to audio as primary source [6, 18], to multi-modal data combination [40, 4, 3, 46]. Since the introduction of AVA-ActiveSpeaker [40], combining audio with facial features is the _de facto_ way to predict active speakers. Large 3D architectures [9], hybrid 2D-3D models [52], and large-scale pretraining [15, 17] for audio-visual combination are amongst some of the following works. Despite the viability of these approaches, feature embedding improvement [25] or attention approaches [2, 8, 47] were necessary to improve ASD. Creating two-step models, where the first focuses on short-term analysis (audio with face combination) and the second on multi-speaker analysis, is the approach from various recent works [4, 3, 29, 51]. ASC [3] focused on long-term multi-speaker analysis via temporal refinement, ASDNet [29] used a similar approach for inter-speaker relations, with improved visual backbones, and UniCon [51] relied on audio-visual relational contexts with various backbones. Improving speaker relation representation via Graph Convolutional Networks (GCN) [48] is also a viable approach to assess context information [4, 31]. Diverging from two-step training, end-to-end models have also emerged for ASD [5, 31, 46]. TalkNet [46] focused on improving long-term temporal context with audio-visual synchronization, while EASEE [5] included GCN to complement spatial and temporal speaker relations.
**Datasets.** There is a variety of available datasets suited for ASD, such as frontal speaker data designed for speech recognition [26, 35], voice activity detection [45], and diarization [23]. However, these are limited in subject diversity and talking scenarios, diminishing their relevance. With increased talking variability, datasets derived from movies and TV shows have also been reported [20, 24, 39, 27], limited by the low number of annotated hours. Other setups related to ASD are lip reading datasets [13, 14, 16, 33, 44, 1], whose purpose diverges from ASD since their goal is to infer the words pronounced by a given speaker. Recently, there has been a greater focus on specific ASD datasets [4, 7, 19, 28, 40], whose task is to determine the talking speaker from a set of admissible candidates. Columbia [7] contains 87 minutes of a panel discussion, with up to 3 visible speakers.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline
**Dataset** & Total & Number of & Face & Video & FPS & \multirow{2}{*}{Talking \%} & \multirow{2}{*}{Demographic} & Surveillance & Body \\ & Hours & Faces (M) & & & & Duration (s) & Variability & & \multicolumn{1}{c}{Representation} & Conditions & Data \\ \hline \hline Columbia [7] & 1.5 & 0.2 & - & - & \(\times\) & - & \(\times\) & \(\times\) & ✓ \\ Talkies [4] & 4.2 & 0.8 & 23.5 & 1.5 & - & - & - & \(\times\) & \(\times\) \\ EasyCom [19] & 6.0 & - & - & - & \(\times\) & - & \(\times\) & \(\times\) & \(\times\) \\ ASW [28] & 30.9 & - & 11.5 & \(\sim\)10 & - & 57.9 & - & \(\times\) & \(\times\) \\ AVA [40] & 37.9 & 3.7 & 38.5 & \(\leq\)10 & ✓ & 24.2 & - & \(\times\) & \(\times\) \\ \hline
**WASD** & 30.0 & 7.4 & 9.8 & \(\sim\)28 & ✓ & 84.6 & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Feature comparison of ASD datasets. AVA-ActiveSpeaker is represented as AVA. If datasets contain information regarding a feature, its absence is presented with \(\times\), while its presence with ✓. WASD has a high number of hours, with increased number of faces and reduced face tracks (culminating in higher average video duration), Frames Per Second (FPS) variability, and increased talking percentage. The most discriminative factors are demographic representation, surveillance conditions, and body data annotations.
Talkies [4] focuses on low-duration videos, totalling 4 hours, with an average of 2.3 speakers and off-screen speaking. EasyCom [19] is designed for multiple tasks related to augmented reality, composed of various sessions of speakers sitting at a table, with background noise. AVA-ActiveSpeaker [40] is the state-of-the-art dataset, with over 150 Hollywood videos, totalling almost 38 hours, with demographic diversity and dubbed dialogues. ASW [28] was proposed with over 30 hours, from 212 videos randomly selected from VoxConverse [11], containing various sets of interviews. The proposed dataset, WASD, brings challenging sets, _in-the-wild_ videos, demographic diversity, and body data annotations. The main characteristics of our dataset relative to others are presented in Table 1.
## 3 Dataset
We propose WASD, a dataset that aims to show the limitations of current state-of-the-art models by compiling a set of videos from real interactions with varying accessibility of the two key components for ASD: _audio_ and _face_. By dividing our dataset into 5 categories with varying degrees of audio and face quality, we can assess how models adapt to these scenarios and which factors are more relevant for ASD. We create a balanced demographics dataset (regarding language, race, and gender), with several challenging factors, complemented with body annotations data. We discuss the process of dataset creation in the following sections.
### Video and Category Selection
We select videos from YouTube and group them into 5 categories based on a set of features, whose values were attributed by human assessment. The main features used for category division are shown in Table 2, with the complete list in appendix C. In sum, videos are grouped as follows:
* **Optimal Conditions**: People talking in an alternate manner, with minor interruptions, cooperative poses, and face availability;
* **Speech Impairment**: Frontal pose subjects either talking via video conference call (_Delayed Speech_) or in a heated discussion, with potential talking overlap (_Speech Overlap_), but ensuring face availability;
* **Face Occlusion**: People talking with at least one of the subjects having partial facial occlusion, while keeping good speech quality (no delayed speech and minor communication overlap);
* **Human Voice Noise**: Communication between speakers where another human voice is playing in the back
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Category** & FA & SO & DS & FO & HVB & SS \\ \hline \hline Optimal Conditions & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Speech Impairment & ✓ & ✓ & ✓ & \(\times\) & \(\times\) & \(\times\) \\ Face Occlusion & ✓ & \(\times\) & \(\times\) & ✓ & \(\times\) & \(\times\) \\ Human Voice Noise & ✓ & \(\times\) & \(\times\) & \(\times\) & ✓ & \(\times\) \\ Surveillance Settings &? &? &? &? &? &? \\ \hline \hline \end{tabular}
\end{table}
Table 2: Category feature matrix. Feature description: FA, Face Availability; SO, Speech Overlap; DS, Delayed Speech; FO, Facial Occlusion; HVB, Human Voice as Background Noise; SS, Surveillance Settings. The absence of a certain feature is presented with \(\times\), while its presence with ✓. Features containing? refer to non-guarantee of its presence or absence. Green cells refer to features favorable for ASD, while red ones are unfavorable.
Figure 2: Considered categories of WASD, with relative audio and face quality represented. Categories range from low (Optimal Conditions) to high (Surveillance Settings) ASD difficulty by varying audio and face quality. Easier categories contain similar characteristics to AVA-ActiveSpeaker (AVA-like), while harder ones are the novelty of WASD.
ground, with face availability and subject cooperation ensured;
* **Surveillance Settings**: Speaker communication in scenarios of video surveillance, with varying audio and image quality, without any guarantee of face access, speech quality, or subject cooperation.
Some important aspects to consider from Table 2: 1) all categories, aside from Surveillance Settings, guarantee face availability, which corresponds to cooperative scenarios and close-up faces; 2) we consider speech delay and overlap as variations of slight speech impairment, hence their grouping in the same category; and 3) Surveillance Settings does not have any guarantee regarding the analyzed features, corresponding to wild conditions. These considerations support the range of ASD difficulty between Optimal Conditions (easier) and Surveillance Settings (harder), since the impairment of audio and face is incremental and controlled throughout the categories. Figure 2 displays representative images of each category and the relative variation of audio and face quality.
**WASD Groups.** Aside from category division, we also form two groups of videos for our experiments: **Easy** and **Hard**. The Easy group contains the categories that more closely resemble AVA-ActiveSpeaker (_Optimal Conditions_ and _Speech Impairment_), while the Hard group has categories where one or both factors (face and audio) are specifically degraded (the remaining 3 categories of WASD). The inclusion of Speech Impairment in the Easy group relates to the fact that speech overlap is admissible in AVA-ActiveSpeaker (as it recurs in normal conversations) and speech delay appears there as a result of dubbed movies.
### Main Characteristics
One focus of the proposed dataset is ensuring that each category is balanced regarding language, race, and gender distribution to mitigate any potential bias in future experiments. The languages are grouped into English, European, and Asian, while races are grouped into Caucasian, Afro, and Asian. The considered languages and races, their grouping, and other related considerations are discussed in appendix D. The distribution of demographics, number of speakers, and head-body proportions of WASD is presented in Figure 3. WASD only considers two admissible labels, with talking being the dominant speaking activity (contrary to AVA-ActiveSpeaker), and is mainly composed of conversations with few people. Surveillance Settings is the category with the least camera proximity to speakers, while Speech Impairment and Human Voice Noise have speakers closer to the camera.
Following the AVA-ActiveSpeaker approach, the maximum length considered for each video is 15 minutes. Contrary to AVA-ActiveSpeaker, where each subvideo duration ranges up to 10 seconds, we segment each subvideo up to 30 seconds, with varying video FPS, mainly ranging from 24 to 30. Regarding the number of videos, WASD is composed of 164 videos (_vs._ 153 of AVA-ActiveSpeaker), totalling 30 hours of video annotations, divided into train and test with a proportion similar to AVA-ActiveSpeaker (80/20), with each category having roughly the same number of hours (_i.e._, 6 hours) and balanced demographics.
### WASD Annotations
Body bounding box drawing and tracking are obtained using YOLOv5 [38] and DeepSort [49], serving as input
Figure 3: Gender, language, race, speaking activity, and number of speakers distribution of WASD. Afro refers to African and African people. On the right, distribution of head-body and body-image proportions of WASD categories. WASD is a balanced demographics dataset, with _talking_ being the predominant speaking activity, mainly composed of few people conversations, where audio impaired categories (Speech Impairment and Human Voice Noise) have speakers closer to the camera, and Surveillance Settings has speakers further from it.
to AlphaPose [22, 50, 30], which outputs pose information for each subject per frame. Then, we obtain face bounding boxes [41] from pose data, using eye, ear, and nose keypoints as reference for bounding box drawing. The size of face bounding boxes is based on the body bounding box height, which is adjusted manually per video to ensure adequate face capture. All face and body annotations are manually revised by a human and adjusted/fully annotated when necessary via the Computer Vision Annotation Tool (CVAT) [43]. For speaking annotations, we design a custom Graphical User Interface (GUI) program in Python for manual annotation, outputting a file with the format used by AVA-ActiveSpeaker. Further details regarding annotations can be seen in Appendices A and B.
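As a rough illustration of the face bounding-box derivation, the snippet below centres a square box on the eye, ear, and nose keypoints and scales it with the body bounding-box height; the scale factor is a placeholder, since the text states only that it is adjusted manually per video, and the function name is ours.

```python
import numpy as np

def face_bbox_from_pose(keypoints: dict, body_height: float, scale: float = 0.15):
    """Return (x1, y1, x2, y2) of a square face box centred on facial keypoints.

    keypoints: {"nose": (x, y), "left_eye": ..., "right_eye": ...,
                "left_ear": ..., "right_ear": ...} from the pose estimator.
    scale:     half-size of the box as a fraction of the body bounding-box
               height (hypothetical default; tuned manually per video).
    """
    pts = np.array(list(keypoints.values()), dtype=float)
    cx, cy = pts.mean(axis=0)
    half = scale * body_height
    return cx - half, cy - half, cx + half, cy + half
```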
## 4 Experiments
### Datasets, Models, and Evaluation Metric
**Datasets.** The AVA-ActiveSpeaker dataset [40] is an audio-visual active speaker dataset from Hollywood movies. With 262 15 minute videos, typically only train and validation sets are used for experiments: 120 for training, and 33 for validation, corresponding to 29,723 and 8,015 video utterances, respectively, ranging from 1 to 10 seconds. The main challenges of this dataset are related to language diversity, FPS variation, the existence of faces with low pixel numbers, blurry images, noisy audio, and dubbed dialogues. Similar to other works, we report the obtained results on the AVA-ActiveSpeaker validation subset. We also use the proposed dataset, WASD, which is described in Section 3. Unless explicitly stated, all models trained in WASD use the whole training split (with 5 categories).
**Models.** The considered models are the ones with state-of-the-art results and publicly available implementations: ASC [3], MAAS [4], TalkNet [46], and ASDNet [29]. All models are trained in a two-step process, except TalkNet which is trained end-to-end. MAAS did not provide its Multi-modal Graph Network setup so we present the results from the available implementation.
**Evaluation Metric.** We use the official ActivityNet evaluation tool [40] that computes mean Average Precision (mAP).
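For orientation only, the sketch below computes the underlying quantity (average precision for the "speaking" class) from hypothetical per-face scores and labels with scikit-learn; reported numbers should still come from the official ActivityNet tool.

```python
# Illustrative only: reported numbers come from the official ActivityNet tool.
import numpy as np
from sklearn.metrics import average_precision_score

labels = np.array([1, 0, 1, 1, 0, 0, 1])                  # hypothetical ground truth (1 = speaking)
scores = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.1, 0.8])    # hypothetical model scores
print(f"AP (speaking): {average_precision_score(labels, scores):.3f}")
```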
### Limitations of AVA-ActiveSpeaker Training
We start by training models in AVA-ActiveSpeaker and evaluate their performance on AVA-ActiveSpeaker and WASD, in Table 3.
**Similar to AVA-ActiveSpeaker.** Regardless of the model, performance on Easy categories (Optimal Conditions and Speech Impairment) is similar to that displayed in AVA-ActiveSpeaker, suggesting the presence of similar characteristics between this group and AVA-ActiveSpeaker. This highlights the importance of face and audio quality for current ASD models, and shows that with high-quality data and reliable face access, simultaneous talk or slight speech delay do not significantly hinder model performance. Furthermore, the similar performance of models in AVA-ActiveSpeaker and Easy categories supports the quality of WASD annotations.
**Face and Audio Importance.** However, the cross-domain performance is significantly worse in Hard categories. In Face Occlusion, Human Voice Noise, and Surveillance Settings, there is a decrease in performance relative to other categories, suggesting that impairment of face access or audio quality significantly impacts models, with cumulative degradation when both are present (Surveillance Settings). Furthermore, facial occlusion is not as impactful as audio impairment (Human Voice Noise) in ASD, meaning that even when a model cannot assess the talking person via the face, it can still infer it via audio analysis. The inverse is not as easily solved, since the existence of audio impairment with human voices (Human Voice Noise) leads to poorer performance relative to Face Occlusion.
**The Outlier**. Despite a performance decrease with increasing category difficulty, TalkNet is the best-performing model. This could be linked to its end-to-end approach for ASD, contrary to the other models, improving its generalization and cross-domain performance. Furthermore, TalkNet focuses on long-term temporal context, benefiting from longer videos, which is the case for WASD.
### Models Robustness in WASD
To evaluate the robustness of models in ASD on challenging data, we train them in WASD and compare their performance with AVA-ActiveSpeaker training in Table 4 and Figure 4.
**Performance Increase**. Relative to AVA-ActiveSpeaker training, models trained in WASD tend to slightly improve their performance in Easy setups (Optimal Conditions and Speech Impairment), with higher increase in Face Occlusion and Surveillance Settings scenarios. The increase in
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**AVA**} & \multicolumn{5}{c}{**WASD**} \\ & & **OC** & **SI** & **FO** & **HVN** & **SS** \\ \hline \hline ASC [3] & 83.6 & 86.4 & 84.8 & 69.9 & 66.4 & 51.1 \\ MAAS [4] & 82.0 & 83.3 & 81.3 & 68.6 & 65.6 & 46.0 \\ TalkNet [46] & 91.8 & 91.6 & 93.0 & 86.4 & 77.2 & 64.6 \\ ASDNet [29] & 91.1 & 91.1 & 90.4 & 78.2 & 74.9 & 48.1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of AVA-ActiveSpeaker trained state-of-the-art models on AVA-ActiveSpeaker and categories of WASD, using the mAP metric. We train and evaluate each model following the authors’ implementation. _OC_ refers to Optimal Conditions, _SI_ to Speech Impairment, _FO_ to Face Occlusion, _HVN_ to Human Voice Noise, and _SS_ to Surveillance Settings. AVA refers to AVA-ActiveSpeaker.
Face Occlusion to values closer to those in Easy setups shows that, if trained accordingly, current models can perform ASD in such scenarios. This relates to how models can map different speaker relations in a scene, allowing the inference of one speaker relative to others, even if the face is occluded. Regarding Surveillance Settings, it shows that AVA-ActiveSpeaker does not contain data similar to these settings, but models can perform better in such scenarios if given the proper training. Similar to Face Occlusion, relating different speakers in a scene may give models the tools to perform in such scenarios, even when face access is not reliable.
**Model Limitations**. When trained in WASD, models cannot improve their performance in the presence of disruptive/distracting human voice background (Human Voice Noise), which shows the limitations of current approaches. The guaranteed face access may induce a false sense of security, leading models to classify a person as talking when they make micro-expressions in the presence of (background) human voice. Furthermore, the disparity between the results with human voice background or surveillance settings and the other scenarios (75% _vs_\(>\)92%) shows the limitations of current models in performing in wilder ASD contexts, particularly in impaired audio conditions.
**Performance in WASD Groups**. To complement model performance assessment, we compute the Precision-Recall (PR) and Receiver Operating Characteristic (ROC) curves of models in different experimental settings, in Figure 5. The results show that: 1) in the Easy group, ASDNet and TalkNet trained in AVA-ActiveSpeaker are competitive with other models trained in WASD, showing the robustness of the best-performing models and the similarity between AVA-ActiveSpeaker and the Easy group of WASD; 2) for the Hard group, all models trained in WASD have superior performance relative to AVA-ActiveSpeaker training, suggesting that the data in this group differ from AVA-ActiveSpeaker; and 3) TalkNet trained in AVA-ActiveSpeaker displays a different tendency relative to other models, expressed in both the Easy and Hard groups, most visibly in the PR curves. TalkNet has a cautious and precise approach in determining the active speaker (high precision), while not keeping a performance similar to other models in identifying all the active speakers (its precision drops at higher recall values). This is linked to the lower talking percentage of AVA-ActiveSpeaker and the end-to-end approach of TalkNet with emphasis on long-term context: identifying only active speakers with high confidence is a good strategy in AVA-ActiveSpeaker but not as reliable in WASD.
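A sketch of how such PR and ROC curves can be drawn from per-face scores and labels is given below (hypothetical arrays, scikit-learn and matplotlib; this is not the evaluation pipeline used for Figure 5).

```python
# Sketch of PR and ROC curve computation from per-face scores and labels.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve, roc_curve, auc

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)                                   # hypothetical labels
scores = np.clip(labels * 0.6 + rng.normal(0.3, 0.25, size=1000), 0, 1)  # hypothetical scores

prec, rec, _ = precision_recall_curve(labels, scores)
fpr, tpr, _ = roc_curve(labels, scores)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(rec, prec); ax1.set_xlabel("Recall"); ax1.set_ylabel("Precision")
ax2.plot(fpr, tpr, label=f"AUC = {auc(fpr, tpr):.2f}")
ax2.set_xlabel("False positive rate"); ax2.set_ylabel("True positive rate"); ax2.legend()
plt.tight_layout(); plt.show()
```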
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Easy**} & \multicolumn{3}{c}{**Hard**} \\ & **OC** & **SI** & **FO** & **HVN** & **SS** \\ \hline \hline ASC [3] & 91.2 & 92.3 & 87.1 & 66.8 & 72.2 \\ MAAS [4] & 90.7 & 92.6 & 87.0 & 67.0 & 76.5 \\ TalkNet [46] & 95.8 & 97.5 & 93.1 & 81.4 & 77.5 \\ ASDNet [29] & 96.5 & 97.4 & 92.1 & 77.4 & 77.8 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of state-of-the-art models on the different categories of WASD, using the mAP metric. _OC_ refers to Optimal Conditions, _SI_ to Speech Impairment, _FO_ to Face Occlusion, _HVN_ to Human Voice Noise, and _SS_ to Surveillance Settings.
Figure 4: Average performance (mAP) variation of the four models on WASD categories, when trained on AVA-ActiveSpeaker and WASD. AVA-ActiveSpeaker is represented as AVA.
Figure 5: ROC and PR curves for models trained in AVA-ActiveSpeaker and WASD, and evaluated in Easy (left) and Hard (right) groups of WASD. All models trained in WASD have superior performance to AVA-ActiveSpeaker training. TalkNet trained in AVA-ActiveSpeaker displays a different tendency relative to other models given its long-term analysis and end-to-end training approach. AVA-ActiveSpeaker is represented as AVA.
### Qualitative Analysis
We analyze different scenarios where WASD is distinctive from AVA-ActiveSpeaker and body data analysis is more relevant for ASD, namely in Human Voice Noise, Face Occlusion, and Surveillance Settings, represented in Figures 6(a), 6(b), 6(c), and 7, respectively. Head boxes are colored with model predictions, trained in WASD: green, person is talking; red, not talking. Figures are accompanied by zoom-ins containing wrong and correct signs, displaying the correctness of ASD prediction. By not using body information, state-of-the-art models cannot reliably deal with scenarios where someone expresses slight lip movement (_e.g._, awe expression) when another person (not in scene) is talking (Figure 6(a)), or with facial occlusion (Figure 6(b)), even in the context of speaker proximity and cooperation. In surveillance settings (Figures 6(c) and 7) the benefit of body data evaluation is even more pronounced. Accessing hand movement with slight face occlusion helps in understanding that the same person is talking (Figure 6(c)), as well as in inferring when one person is requesting another to stop talking (Figure 7).
## 5 Conclusion
We propose WASD, a challenging ASD dataset with degraded audio quality, facial occlusions, and surveillance conditions. With WASD we demonstrate the limitations of state-of-the-art models and AVA-ActiveSpeaker training for wild ASD, particularly in audio impairment and surveillance settings. WASD also includes body data annotations to support the development of approaches using body information for wild ASD, given the unreliability of audio quality and subject cooperation in such settings.
## Acknowledgments
This work was supported in part by the Portuguese FCT/Ministerio da Ciencia, Tecnologia e Ensino Superior (MCTES) through National Funds and, when applicable, co-funded by EU funds under Project UIDB/50008/2020; in part by the FCT Doctoral Grant 2020.09847.BD and Grant 2021.04905.BD; in part by the C4--Competence Center in Cloud Computing co-financed by the European Regional Development Fund (ERDF) through the Programa Operacional Regional do Centro (Centro 2020), in the scope of the Sistema de Apoio a Investigacao Cientifica e Tecnologica, Programas Integrados de Investigacao Cientifica e Desenvolvimento Tecnologico (IC&DT) under Project CENTRO-01-0145-FEDER-000019
|
2301.00420
|
X-Arapuca long term test
|
The photon detection system of the DUNE experiment is based on the X-ARAPUCA
light trap. The basic elements of the X-ARAPUCA are the dichroic filters coated
with wavelength shifter (para-Terphenyl), a waveshifting plate and an array of
SiPMs which detects the trapped photons. A small scale prototype of the
X-ARAPUCA has been installed in liquid argon in a dedicated facility at
INFN-Napoli and exposed to alpha particles from a source. In order to test the
stability of the overall device response the X-ARAPUCA was kept for 10 days in
liquid argon continuously purified. The performed tests allowed for a
preliminary estimation of the X-ARAPUCA absolute photon detection efficiency.
|
V. Andreossi, Z. Balmforth, A. A. Bergamini Machado, G. Botogoske, N. Canci, R. de Aguiar, P. Duarte De Almeida, F. Di Capua, G. Fiorillo, G. Grauso, G. Matteucci, S. Ravinthiran, E. Segreto, Y. Suvorov
|
2023-01-01T15:11:25Z
|
http://arxiv.org/abs/2301.00420v1
|
# X-Arapuca long term test
###### Abstract
The photon detection system of the DUNE experiment is based on the X-ARAPUCA light trap. The basic elements of the X-ARAPUCA are the dichroic filters coated with wavelength shifter (para-Terphenyl), a waveshifting plate and an array of SiPMs which detects the trapped photons. A small-scale prototype of the X-ARAPUCA has been installed in liquid argon in a dedicated facility at INFN-Napoli and exposed to alpha particles from a source. In order to test the stability of the overall device response, the X-ARAPUCA was kept for 10 days in liquid argon continuously purified. The performed tests allowed for a preliminary estimation of the X-ARAPUCA absolute photon detection efficiency.
Keywords: Photosensors; Silicon Photomultipliers; Cryogenics; Liquid argon; Noble liquid detectors
## 1 Introduction
Next generation neutrino experiments will investigate new physics beyond the Standard Model, addressing the measurement of the CP violating phase in the leptonic sector. A significant contribution is expected to the completion of the understanding of the standard neutrino oscillation picture by measuring the mixing parameters and the neutrino mass hierarchy.
The Deep Underground Neutrino Experiment (DUNE) [1] on the Fermilab Long-Baseline Neutrino Facility (LBNF) represents one of the most relevant experiments in this field. LBNF provides a high-intensity, broad-band neutrino beam, peaked at 2.5 GeV. The neutrino beam flux, monitored by a near detector located at Fermilab, travels through the Earth's crust for 1300 km and is finally detected by a far detector installed at the Sanford Underground Research Facility in South Dakota. The far detector consists of at least two 17.5 kton liquid argon (LAr) Time Projection Chambers (TPCs). The huge target mass will enable a rich scientific program including, among others, searches for proton decay and detection of the neutrino flux from a core-collapse supernova within our galaxy.
LAr is known to be an excellent scintillator emitting 41 photons per keV of energy deposited by minimum ionizing particles. Scintillation photons are emitted through the de-excitation of Argon dimers (Ar\({}_{2}^{*}\)) singlet (S) and triplet (T) states with characteristic times of 6\(\div\)10 ns and about 1400\(\div\)1600 ns, respectively [2]. The scintillation photons are emitted in the Vacuum Ultra Violet (VUV) with a wavelength centered in a narrow band of \(\sim\)10 nm around 128 nm. The photon detection system of the DUNE far detector must meet several requirements: convert VUV light to visible light through the use of wavelength shifting compounds in order to make it detectable by standard (cryogenic) photo-sensitive devices; provide large coverage at reasonable cost given the huge detector dimensions; and reach the required detection efficiency (>1%) to meet the DUNE supernova scientific program.
The device proposed for the light detection system of the DUNE far detector is the X-ARAPUCA [5].
It consists of a light collector coupled to an array of silicon photo-multipliers (SiPMs) which detect the collected photons. In this work, the detection principle of a two-window X-ARAPUCA prototype has been probed in continuous data taking lasting about 10 days, with the use of a LAr condenser and a purification system.
## 2 The X-ARAPUCA
The X-ARAPUCA (XA) light trap is an evolution of the first ARAPUCA design [6]. It consists of a box cavity with highly reflective internal walls. The entrance window of the box is made by a short-pass dichroic filter which has the properties of being highly transparent to photons with wavelength below a given cut-off (400 nm), while being highly reflective to photons with wavelength above the same cut-off. The dichroic filter is coated on the external side of the entrance window with para-Terphenyl (pTP), a wavelength shifter converting photons from 128 nm to 350 nm [7]. Because its wavelength is below the dichroic cut-off, such photons can cross the dichroic filter. Inside the box immersed in LAr a second wavelength shifting step is performed by a WLS slab (EJ-286PS model manufactured by Eljen Technology) that shifts 350 nm photons to 430 nm. The light re-emitted from the WLS slab can be trapped by total internal reflections or escape, to be then reflected back by the dichroic filter. In both cases the photons eventually reach the SiPM photosensor located on the edge of the WLS slab. The light trap sequence is summarized in fig. 1. The pTP and EJ-286PS emission spectra are shown in fig. 1, where the separation with respect to the dichroic cut-off is clearly visible.
The XA concept has been realised in several shapes and sizes. The XA model used in this work features two dichroic windows sizing 200\(\times\)75 mm\({}^{2}\) overall, the same format employed in SBND experiment [8] (fig. 2). The WLS slab is coupled to four photosensor boards, each containing four
Figure 1: Left: X-ARAPUCA working principle; Right: pTP and EJ-286 wavelength shifters emission spectra.
SiPMs (Hamamatsu model S13360-6050VE) ganged in parallel. A six-window XA is the basic photon detection unit of the first DUNE far detector module [9].
## 3 Experimental setup and cryogenic facility
The two-window XA under test was installed in a 13 l cryostat (fig. 3) connected to an argon gas liquefaction and purification system. The experimental setup allows for testing the photosensor performance in detecting events generated in pure liquid argon by a radioactive source. The device hangs from the top flange and an \({}^{241}\)Am alpha source is mounted frontally in a PEEK holder fixed to a motion feedthrough. In this way the source can be translated between the centers of the two XA windows. The distance between the source and the dichroic surface is 4 cm. An optical fiber for single photoelectron calibration is inserted in the cryostat through an optical feedthrough externally connected to a Hamamatsu laser head PLP C8898.
The cryostat is filled by liquefying gaseous Ar 6.0 (1 ppm impurities in total) from pressurised bottles. The argon liquefaction process is performed through a condenser made of two "brazed plate" and one "tube and shell" heat exchangers. The heat exchange is performed at the expense of liquid nitrogen. After LAr filling, the evaporated gaseous argon is recirculated by a gas pump through a SAES MonoTorr model PS4-MT50-R rare gas hot purifier [10]. The cryostat is equipped with six PT100 level meter sensors and a pressure transducer.
For this test, the four SiPMs boards from the XA were biased and read-out through the APSAIA board [11] developed within the SBND experiment. The board embodies 8 channels with input connectors. Both power supplies and amplifiers have remote control via RS232 serial port. The four XA output channels read-out by APSAIA are then sent to CAEN V1725B digitizers (250 MS/s, 14 bit).
Figure 2: X-ARAPUCA device used in the test.
## 4 Measurement results in Liquid Argon
With SiPM bias set at 4V over-voltage, we studied the single photon response of the different channels in laser-triggered runs to calibrate the average charge of the first photo-electron peak. This calibration was then used to study the XA response to events generated in argon by the \({}^{241}\)Am alpha source: a rate of about 100 Hz was found in self-trigger mode data taking. Data presented in this work refer to the case of alpha source positioned at the center of one of the two XA windows (the one located deeper in LAr). Fig. 4 shows an example of normalized and fitted waveform from scintillation light due to alpha particles. The slow decay component is in good agreement with expected value of 1.4 \(\mu\)s. Fig. 5 shows the reconstructed charge, in number of photo-electrons (PE), for all four channels. For each channel, the \(\alpha\) peak is fitted with a gaussian distribution.
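As a minimal illustration of the waveform analysis, the sketch below fits a fast-plus-slow exponential model to a synthetic average waveform and extracts the slow (triplet) decay constant; the model form, time axis, and noise level are assumptions, not the analysis code used for Fig. 4.

```python
# Sketch (assumed model): fit a fast + slow exponential to a synthetic waveform.
import numpy as np
from scipy.optimize import curve_fit

def lar_waveform(t, a_fast, tau_fast, a_slow, tau_slow):
    return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

t = np.arange(0.0, 8000.0, 1.0)                       # ns, hypothetical time axis
truth = lar_waveform(t, 1.0, 7.0, 0.3, 1400.0)        # fast ~7 ns, slow ~1400 ns
data = truth + np.random.normal(0.0, 0.002, t.size)   # synthetic "measured" waveform

popt, _ = curve_fit(lar_waveform, t, data, p0=(1.0, 10.0, 0.3, 1500.0))
print(f"fast: {popt[1]:.1f} ns, slow: {popt[3]:.0f} ns")
```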
### Light yield and stability of the system
The system was kept in LAr with the recirculation and purification system active for 10 consecutive days. Several self-trigger runs were acquired daily to monitor the stability of the light yield, measured as the total average number of PEs detected by the four channels. After a slight increase in the first 20 hours of data taking, due to impurity removal, the light yield was stable during the full period within \(\pm\)1% (fig. 6). The final light yield was found to be N\({}_{<PE>}\)=1595\(\pm\)10 on average.
Figure 3: Cryostat hosting the X-ARAPUCA device
### X-ARAPUCA efficiency measurement
A preliminary efficiency of the XA photodetector can be determined once the number of photons impinging on the dichroic window surface is estimated. The number of photons produced in LAr by excitation from the alpha particles is evaluated assuming a photon yield of \(\mathrm{Y}_{\gamma}=51000\pm 1000\) photons/MeV and a quenching factor \(\alpha_{Q}=0.71\pm 0.02\) for alpha particles [2; 3; 4]. With a quasi-monochromatic alpha energy of E\({}_{\alpha}\)=5.48 MeV, the total number of emitted photons is given by:
\[\mathrm{N}_{\gamma}=\mathrm{Y}_{\gamma}\times\mathrm{E}_{\alpha}\times\alpha_{Q}\]
The geometrical acceptance for the isotropically emitted VUV photons was evaluated with a Geant4 simulation in which the XA complete geometry was implemented together with the \({}^{241}\)Am source size and position. We found a geometrical efficiency \(\epsilon_{geom}\)=21.1%, leading to a number of VUV photons impinging the XA surface given by:
\[\mathrm{N}_{\gamma}^{XA}=\mathrm{N}_{\gamma}\times\epsilon_{geom}=41300\pm 1400\]
The number of detected PEs obtained by summing all four channels must be corrected for the SiPMs secondary pulses induced by cross-talk and afterpulses. To this purpose we used the value \(1.55\pm 0.05\), expressed in number of avalanches generated per detected photon, reported in [12] for our SiPMs. The final measured efficiency is given by
\[\epsilon_{XA}=\frac{\mathrm{N}_{PE}^{corr}}{\mathrm{N}_{\gamma}^{XA}}=2.5\pm 0.3\%\]
where the error is dominated by systematic uncertainties on the single photo-electron calibration. The obtained value is in good agreement with other measurements [13].
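For reference, the arithmetic behind this estimate can be reproduced from the quoted central values (errors omitted; small differences with the numbers above come from rounding):

```python
# Worked arithmetic for the efficiency estimate, using the central values quoted in the text.
Y_gamma  = 51000     # photons/MeV
E_alpha  = 5.48      # MeV
alpha_Q  = 0.71      # quenching factor for alphas
eps_geom = 0.211     # geometrical acceptance from the Geant4 simulation
N_pe     = 1595      # measured average photo-electrons (sum of the 4 channels)
xtalk_ap = 1.55      # avalanches per detected photon (cross-talk + afterpulses)

N_gamma    = Y_gamma * E_alpha * alpha_Q   # photons emitted by the alpha
N_gamma_XA = N_gamma * eps_geom            # photons impinging on the XA window
N_pe_corr  = N_pe / xtalk_ap               # PEs corrected for secondary pulses
print(f"N_gamma ~ {N_gamma:.0f}, impinging ~ {N_gamma_XA:.0f}")
print(f"efficiency ~ {100 * N_pe_corr / N_gamma_XA:.1f} %")
```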
## 5 Conclusions
X-ARAPUCA is the basic unit of the photon detection system of the DUNE Far Detector. In this work, the test of a two-window X-ARAPUCA device has been reported. The stability performance
Figure 4: Average waveform from alpha source scintillation
in liquid argon under the scintillation light produced by a \({}^{241}\)Am alpha source has been investigated.
Figure 5: Alpha source spectra in PE for all four X-ARAPUCA channels
Figure 6: Total average PE from four X-ARAPUCA channels Vs time for 10 days
A preliminary measurement of the absolute detection efficiency is also reported.
The authors would like to thank A. Pandalone and A Vanzanella from INFN-Napoli electronic workshop for their contribution on electronics, G. Franchi for useful support on the APSAIA read-out board. This work has been supported by FRA funds of Universita degli Studi di Napoli "Federico II". The activity during the XA test of Dr. Bergamini Machado and Prof. Segreto has been supported by FAI funds of INFN.
|
2310.19630
|
Convolutional Neural Networks for Automatic Detection of Intact
Adenovirus from TEM Imaging with Debris, Broken and Artefacts Particles
|
Regular monitoring of the primary particles and purity profiles of a drug
product during development and manufacturing processes is essential for
manufacturers to avoid product variability and contamination. Transmission
electron microscopy (TEM) imaging helps manufacturers predict how changes
affect particle characteristics and purity for virus-based gene therapy vector
products and intermediates. Since intact particles can characterize efficacious
products, it is beneficial to automate the detection of intact adenovirus
against a non-intact-viral background mixed with debris, broken, and artefact
particles. In the presence of such particles, detecting intact adenoviruses
becomes more challenging. To overcome the challenge, due to such a presence, we
developed a software tool for semi-automatic annotation and segmentation of
adenoviruses and a software tool for automatic segmentation and detection of
intact adenoviruses in TEM imaging systems. The developed semi-automatic tool
exploited conventional image analysis techniques while the automatic tool was
built based on convolutional neural networks and image analysis techniques. Our
quantitative and qualitative evaluations showed outstanding true positive
detection rates compared to false positive and negative rates where
adenoviruses were nicely detected without mistaking them for real debris,
broken adenoviruses, and/or staining artefacts.
|
Olivier Rukundo, Andrea Behanova, Riccardo De Feo, Seppo Ronkko, Joni Oja, Jussi Tohka
|
2023-10-30T15:23:25Z
|
http://arxiv.org/abs/2310.19630v3
|
Convolutional Neural Networks for Automatic Detection of Intact Adenovirus from TEM Imaging with Debris, Broken and Artefacts Particles
###### Abstract
Regular monitoring of the primary particles and purity profiles of a drug product during development and manufacturing processes is essential for manufacturers to avoid product variability and contamination. Transmission electron microscopy (TEM) imaging helps manufacturers predict how changes affect particle characteristics and purity for virus-based gene therapy vector products and intermediates. Since intact particles can characterize efficacious products, it is beneficial to automate the detection of intact adenovirus against a non-intact-viral background mixed with debris, broken, and artefact particles. In the presence of such particles, detecting intact adenoviruses becomes more challenging. To overcome the challenge, due to such a presence, we developed a software tool for semi-automatic annotation and segmentation of adenoviruses and a software tool for automatic segmentation and detection of intact adenoviruses in TEM imaging systems. The developed semi-automatic tool exploited conventional image analysis techniques while the automatic tool was built based on convolutional neural networks and image analysis techniques. Our quantitative and qualitative evaluations showed outstanding true positive detection rates compared to false positive and negative rates where adenoviruses were nicely detected without mistaking them for real debris, broken adenoviruses, and/or staining artefacts.
Keywords: Adenovirus; Debris; Artefact; Convolutional Neural Networks; Segmentation; TEM
## 1 Introduction
Large-scale production of viral vectors for gene therapy requires tools to characterize the virus particles [2]. Transmission electron microscopy (TEM) is the only imaging technique allowing the direct visualization of viruses, due to its nanometer-scale resolution [21], [4]. Consequently, with TEM, it becomes possible to understand what occurs with viral particles when parameters or process operations change or when formulations are modified. Different biomanufacturing process conditions have different effects on particle characteristics, and images that reveal particle morphology together with quantitative analysis can provide a good understanding of and insights into the impact of such process changes via assessing overall morphology (stability, purity, integrity, and clustering) which might affect vector performance [1], [3]. However, due to the need for considerable operator skills, special laboratory facilities, and the limitations in providing quantitative data, it is not routinely used in process development [25]. It is important to note that TEM image analysis is typically performed in specialized TEM facilities, and the time to get results is often long [25]. Also, the process to annotate, segment, and detect intact adenoviruses in TEM images remains challenging due to the presence of broken adenoviruses, debris, and various kinds of staining artefacts as illustrated in Figure 1. Consequently, intact adenovirus segmentation in TEM images using traditional image analysis methods is not reliable [5], making intact adenovirus characterization challenging. Deep convolutional neural networks (CNNs) have shown excellent performance in many biomedical imaging tasks which were thought to be unsolvable before the deep learning era [22], [23], [24]. Here, the CNN of interest was U-net, which is widely used and known for its excellent segmentation precision of medical images [7], [16], [18]. U-Net is a modified and/or extended version of a fully convolutional network that works with very few training images to yield more precise segmentations [7], [16], [18]. Although many works currently exist, mostly proposed for segmentation of bio/medical images using the U-net or variants or closely related versions [7], [8], [9], [10], [34], the U-net outperformed the earlier best methods and could still provide a fast and accurate segmentation of images.
However, research in the automatic segmentation of intact adenoviruses in TEM images remains in its infancy. There exist a few works that proposed both CNN-based and non-CNN-based solutions to image analysis of TEM images of virus particles [11], [12], [13], [14], [15]. References [11] and [15] propose methods for segmentation of different types of viruses, including adenoviruses, from TEM images using a morphological image analysis pipeline [15] and U-Net [11]. Reference [12] proposes a method for classification between different types of viruses and makes available an open TEM dataset to study virus-type classification. Reference [13] proposes a fully connected neural network to detect feline calicivirus particles from TEM images. Finally, reference [14] focuses on the reduction of the number of trainable U-Net weights for segmentation of various virus particles from TEM images.
However, among these works, there was no clear focus or dedicated work on intact adenovirus segmentation and detection with the aim of improving the characterization of adenoviruses in images captured by high-throughput TEM systems for production of viral vectors. Therefore, we introduce a U-Net-based approach together with software tools for fast and easy training, for segmentation of intact adenoviruses from high-throughput TEM images. Our purpose is not only due to the need for testing the automation of detection of intact adenovirus from TEM imaging with debris, broken, and artefacts particles but also to demonstrate that, detecting intact adenoviruses with high accuracy, even in highly challenging imaging conditions, was possible with U-Net.
## 2 Material and Methods
Figure 1: Intact adenovirus particles (top-left-side – blue arrow), broken adenovirus particles (top-right-side – blue arrows). debris particles (bottom-left-side: inside red circles – large debris and blue arrows – small debris), artefact particles (bottom-right-side: inside blue circles - examples of uranyl acetate staining artefacts).
### Image data
The imaging was performed by using the MiniTEM microscope by Vironova AB, Stockholm, Sweden [6], with an operating voltage of 25 kV and with a field of view (FOV) of 3 \(\upmu\)m for the adenovirus samples [27]. We first acquired a training and validation set of 50 images of the size of 2048-by-2048. The intact adenoviruses of this set were annotated using a semiautomatic software tool developed by us specifically for this purpose. We used this image set to train the CNN and validate its performance using cross-validation. Second, we acquired a test set of 20 MiniTEM images that were completely independent from the training and validation set and used to test the final CNN model for adenovirus detection. This test set contained very challenging images with varying levels of debris and staining artefacts that would be too challenging for the traditional image analysis methods.
### Software tool for semi-automatic annotation and segmentation of intact adenovirus
#### 2.2.1 Semi-automatic annotation
The image annotation process is one of the most challenging steps that affect the training outcome for the automatic segmentation of microscopy images [11]. Also, annotating large enough training sets for supervised learning is a bottleneck, and dedicated tools to speed up the annotation process are still needed [28], [29], [30].
In this regard, a GUI-based software tool for semi-automated segmentation of MiniTEM images was developed and later used to create annotated MiniTEM images used for training the U-Net model. The software tool is available at [https://github.com/AndreaBehan/miniTEM-Image-Segmentation](https://github.com/AndreaBehan/miniTEM-Image-Segmentation). A video showcasing the annotation process is available in the supplement. The tool can be used for rapid manual and semi-automatic annotation and semi-automatic segmentation of intact adenoviruses and other types of debris.
#### 2.2.2 Semi-automatic segmentation
Using the developed semi-automatic tool requires first creating a set of candidate adenoviruses through automatic image analysis operations. It is important to note that the entire procedure is based on the assumption that an intact adenovirus is a circular, bright object surrounded by a darker area. The key steps are as follows (see panels A, B, C, D, and E of Figure 2; a minimal code sketch of these steps is given after the list):
Figure 2: The developed software tool for semi-automatic segmentation of adenoviruses in MiniTEM images. Top row (A) Close-up of the original MiniTEM image, (B) contrast-enhanced image, (C) median filtered contrast-enhanced image (D) image with large bright areas masked out (E) Adenoviruses detected by Hough transform. Bottom row: Left GUI of the annotation tool with an image with the overlaid automatic segmentation, right: Image with overlaid segmentation after manual corrections.
1. Enhance the contrast of the image by saturating the top and the bottom 1% of intensity values in the images and perform the median filtering, with a 15 by 15 window, on the enhanced image. (Figure 2, panels B and C)
2. Segment out large bright areas of the median filtered image by thresholding followed by morphological operations. This operation is necessary to allow for the Hough transform in the next step to concentrate on adenoviruses. Note that this step does not remove intact adenoviruses as they are surrounded by a darker area (Figure 2 panel D).
3. Find adenovirus boundaries by using the circular Hough transform [31] (Figure 2 panel E)
4. Remove candidate adenoviruses that do not have a dark area surrounding them by detecting the mode of the histogram of the rectangular patch around the adenovirus.
After that, the user can interactively add and remove adenoviruses as shown in the supplementary video.
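A minimal re-implementation sketch of steps 1-4 with scikit-image is given below; it is illustrative only (the original tool is MATLAB-based), and the radius range, structuring-element size, and the median-based dark-surround check are assumptions made for the example.

```python
# Illustrative re-implementation of the candidate-generation steps (not the authors' tool).
import numpy as np
from scipy import ndimage as ndi
from skimage import exposure, feature, filters, transform

def candidate_adenoviruses(img, radii=np.arange(20, 40), n_max=50):
    # 1) contrast stretch: saturate the top/bottom 1% of intensities, then 15x15 median filter
    p1, p99 = np.percentile(img, (1, 99))
    enhanced = exposure.rescale_intensity(img, in_range=(p1, p99))
    med = ndi.median_filter(enhanced, size=15)
    # 2) mask out large bright areas so the Hough transform focuses on viruses
    bright = med > filters.threshold_otsu(med)
    large_bright = ndi.binary_opening(bright, structure=np.ones((51, 51)))
    work = np.where(large_bright, med.min(), med)
    # 3) circular Hough transform on an edge map
    edges = feature.canny(work)
    h = transform.hough_circle(edges, radii)
    _, cx, cy, r = transform.hough_circle_peaks(h, radii, total_num_peaks=n_max)
    # 4) keep only candidates surrounded by a darker area (simple patch-statistics check)
    keep = []
    for x, y, rad in zip(cx, cy, r):
        y0, x0 = max(0, y - 2 * rad), max(0, x - 2 * rad)
        patch = med[y0:y + 2 * rad, x0:x + 2 * rad]
        if patch.size and np.median(patch) < med[y, x]:
            keep.append((x, y, rad))
    return keep
```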
### Software tool for automatic segmentation and detection of intact adenovirus
#### 2.3.1 U-Net architecture
Our CNN for automatic segmentation was based on the U-Net architecture [16] as implemented in MATLAB. U-Net features a U-shaped design, comprising contracting and expansive paths.
Figure 3 shows the input and output layers, as well as the intermediate layers and connections, of a deep learning network as visualized by the analyzeNetwork function in MATLAB. The contracting path consists of repeating blocks of convolution, ReLU activation, and max pooling. The expansive path involves transposed convolution, ReLU activation, concatenation with the downsampled feature map, and additional convolution.
#### 2.3.2 Training
To avoid high computational demands during the U-Net training process, each 2048-by-2048 image was split into 64 non-overlapping image patches of size 256-by-256. Each original 16-bit MiniTEM image was converted to an 8-bit image to minimize memory usage during training and evaluation. The execution environment was a single GPU (Nvidia GeForce RTX 3070) with an 11th Gen Intel(R) Core(TM) i7-11700F @ 2.50GHz, 2496 MHz, 8 Core(s), 16 Logical Processor(s).
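The patching and bit-depth conversion described above can be sketched as follows (illustrative numpy code; a simple min-max rescaling is assumed for the 16-to-8-bit conversion, which may differ from the exact conversion used):

```python
# Sketch: 16-bit to 8-bit conversion and 64 non-overlapping 256x256 patches per image.
import numpy as np

def to_8bit(img16):
    img16 = img16.astype(np.float64)
    lo, hi = img16.min(), img16.max()
    return ((img16 - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

def split_patches(img, patch=256):
    h, w = img.shape
    return (img.reshape(h // patch, patch, w // patch, patch)
               .swapaxes(1, 2)
               .reshape(-1, patch, patch))

img = np.random.randint(0, 2**16, size=(2048, 2048), dtype=np.uint16)  # stand-in image
patches = split_patches(to_8bit(img))
print(patches.shape)   # (64, 256, 256)
```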
#### 2.3.3 Hyperparameter settings
Figure 3: The U-net architecture used in this work. Conv means convolution. ReLU is a rectified linear unit. DepthConv is depth concatenation. UpConv means up-convolution or transposed convolution. MaxPool is Max Pooling
Hyperparameter settings were manually adjusted with no new adjustments if 90% of training accuracy was reached during the first 10% of all epochs [18]. Training hyperparameters that were not listed below remained set to default, including the number of first encoder filters and encoder depth. The number of epochs = 30; the minimum batch size = 4; the initial learning rate = 0.0001; L2 regularization = 0.00005; optimizer = Adam (adaptive moment estimation algorithm). The loss function used was the default cross-entropy function provided by MATLAB's U-Net Layers function, for image segmentation using the U-Net architecture [33]. In other words, the pixel classification layer was not replaced with a weighted pixel classification layer.
#### 2.3.4 Data augmentation
The data augmentation options consisted of random reflection in the left-right direction, as well as random vertical and horizontal translations applied with 50% probability, with translations drawn from the pixel interval ranging from -10 to 10.
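A sketch of this augmentation with assumed semantics (each transform applied independently with 50% probability; note that `np.roll` wraps at the borders, whereas MATLAB's augmenter pads) is:

```python
# Augmentation sketch: 50% left-right flip, 50% random translation in [-10, 10] pixels.
import numpy as np

def augment(patch, mask, rng=np.random.default_rng()):
    if rng.random() < 0.5:                       # random left-right reflection
        patch, mask = patch[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                       # random translation of image and mask together
        dy, dx = rng.integers(-10, 11, size=2)
        patch = np.roll(patch, (dy, dx), axis=(0, 1))   # wraps at borders (sketch simplification)
        mask = np.roll(mask, (dy, dx), axis=(0, 1))
    return patch, mask
```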
### Post-processing
A systematic combination of image filtering, dilation, and burning functions [35], [36], [37] was applied to improve the quality of the outlines of the U-Net segmentation masks. In this way, we could emphasize the most precise outlines of intact adenoviruses.
### Performance evaluation metrics
We evaluated the segmentation both in terms of detection and segmentation performance. For detection, we counted the number of true positives (TP: intact adenovirus correctly detected by U-Net), false positives (FP: adenovirus incorrectly detected by the U-Net), and false negatives (FN: intact adenovirus not detected by U-Net) [19]. Based on TP, FP, and FN counts, we computed _recall_ and _precision_ and _F-value_ as
\[precision=\frac{TP}{TP+FP} \tag{1}\]
\[recall=\frac{TP}{TP+FN} \tag{2}\]
\[F\text{-}value=\frac{\left(1+\beta^{2}\right)\cdot recall\cdot precision}{\beta^{2}\cdot precision+recall} \tag{3}\]
In Eq. 3, \(\beta\) corresponds to the relative importance of precision versus recall; we set \(\beta=1\) [19], [38]. We defined correct (and incorrect) detections based on the overlap of the segmentation masks and ground-truth masks, which required setting a threshold value on the overlap. To demonstrate that the detection results were not dependent on a single threshold value, we set our main or primary threshold at 75%. We also examined the secondary thresholds of 50% and 25%, as illustrated in Figure 4. For the external test set, for which manually created ground-truth segmentations did not exist, we used the developed software tool to detect and count the number of detected and missed adenoviruses as well as those incorrectly highlighted as detected or not detected. Also, we defined the threshold at 75%, 50%, and 25%, and defining a match was subjective (see Figure 4-b).
For the evaluation of the semantic segmentation, we used the Dice score and intersection over the union (IoU) also known as the Jaccard coefficient as the performance measures [19].
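Minimal implementations of these detection and segmentation measures (Eqs. 1-3, Dice, and IoU) are sketched below for reference; the counts and toy masks are hypothetical.

```python
# Reference implementations of the detection (Eqs. 1-3) and segmentation (Dice, IoU) metrics.
import numpy as np

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f_value(tp, fp, fn, beta=1.0):
    p, r = precision(tp, fp), recall(tp, fn)
    return (1 + beta**2) * r * p / (beta**2 * p + r)

def dice(pred, gt):
    return 2 * (pred & gt).sum() / (pred.sum() + gt.sum())

def iou(pred, gt):
    return (pred & gt).sum() / (pred | gt).sum()

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True   # toy predicted mask
gt = np.zeros((8, 8), bool); gt[3:7, 3:7] = True       # toy ground-truth mask
print(f_value(tp=40, fp=3, fn=2), dice(pred, gt), iou(pred, gt))
```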
## 3 Results
### Quantitative evaluation with K-fold cross-validation on the training and validation sets
We used 5-fold cross-validation on the training and validation set to quantitatively evaluate the segmentation results. Figure 5 represents the quantitative evaluation results on detection. As the figure illustrates, on average the detection rates were high, with both the average precision and recall exceeding 90% with all the studied thresholds. On some folds, where the total number of intact adenoviruses was low, the precision and/or recall dropped below the 90% limit. However, even in these folds, the precision and recall exceeded 80% in most cases indicating sufficient precision and recall for practical applications to monitor the manufacturing process of a drug product. For the segmentation evaluation, the average Dice score exceeded 0.80, which indicates that the segmentation quality corresponded well to the ground truth. In fact, perfect segmentation results were not expected here due to the semi-automated nature of the annotation of the training data and the potential difficulty in setting the boundary of the intact adenovirus. The segmentation quality was more than sufficient to assess the morphology of intact adenoviruses in monitoring the manufacturing process.
Figure 4: Example of illustration showing the ideal (a) and real (b) cases of TP, FP, and FN: In our experiments, we divided the TP situations into categories based on subjectively defined thresholds. The primary threshold was set at 75% of the area of the full circle, while secondary thresholds were set at 50% and 25% of the area of the full circle to represent the extent of differences between the ground truth and output masks. FP and FN were defined as cases where there were complete and noticeable differences between the areas of ground truth and output masks, as shown in green and purple colors, respectively.
Figure 6 presents the average percentage of intact adenovirus detected in terms of Dice and Intersection over Union (IoU) scores for each of the 5-folds.
Figure 5: (a) Average number, (b) Recall, (c) Precision, and (d) F-value. Blue dots represent the results corresponding to the individual cross-validation folds, and the red dot is their average. TP stands for true positive. TP75 represents our main threshold set at 75%, TP50 represents the secondary TP threshold set at 50%, and TP25 is another secondary TP threshold set at 25%. TP75 + TP50 refers to the case, where we count the detections with more than 50% overlap with the ground-truth segmentations as correct. TP75 + TP50 + TP25 refers to the case, where we count the detections with more than 25% overlap with the ground-truth segmentations as correct.
Figure 6: Average Dice and IoU score. Blue dots represent the results corresponding to the individual cross-validation folds, and the red dot is their average.
### Results on the external test set
The external test set of 20 MiniTEM images was selected to test the accuracy of automatic detection of intact adenoviruses. These images were mixed-quality images containing intact adenoviruses, debris, artefacts, and broken particles. The quantitative detection results are shown in Figure 7 and all the 20 segmentations are shown in Figure 8. Note that we manually scored the detections as no ground-truth segmentation was available. Here, a high recall was achieved in all the cases, but the precision, albeit higher than 90% on average, remained low in some of the cases. Figure 8 shows the segmentation overlaid on the images, suggesting good segmentation performance even with the images containing non-intact adenoviruses, debris, and various staining artefacts.
## 4 Discussion
In this work, we introduced a U-Net-based system for segmentation of intact adenoviruses from high-throughput TEM images for characterization of virus particles required in the production of virus vectors. Our experimental results demonstrated a great potential for precise automated detection of intact adenovirus in TEM system images with varying degrees of quality. More interestingly, the developed software tool for automatic detection did not mistake intact adenovirus for structures or debris or artefacts similar to an internally stained particle, doublet conformation, and triplet conformation of adenoviruses. Also, it did not mistake intact adenovirus for gradually degenerated integrity adenoviruses or black spots. However, due to the presence of
Figure 7: (a) Number, (b) Recall, (c) Precision, and (d) F-value. TP75 is our main threshold set at 75%, and TP50 and TP25 are secondary TP threshold set at 50%, and 25%, respectively. TP75 + TP50 refers to the case, where we count the detections with more than 50% overlap with the ground-truth segmentations as correct. TP75 + TP50 + TP25 refers to the case, where we count the detections with more than 25% overlap with the ground-truth segmentations as correct.
debris and artefacts in MiniTEM images, there were a few cases of false negative and false positive detections as shown in Section 3.
We successfully demonstrated that it is possible to develop an automated segmentation tool for high-throughput experiments with relatively little operator effort by combining a semi-automated custom-made software tool for training and U-Net. While neural networks have been increasingly used for segmentation, detection, and classification of viruses and other particles from TEM images, there have been no previous
Figure 8: The automatic segmentations on the images of the external test set images.
works specifically focusing on the high-throughput imaging required in the production of virus vectors. Most of the previous work has concentrated on the detection of particles (such as human cytomegalovirus [39, 40] or caveolae [41]) or the classification of different types of viruses in TEM images [12]. In the field of materials science, there is a great interest in using neural network-based approaches for characterizing nanoparticles based on TEM images [42, 43, 44]. Furthermore, [23] proposed the use of a fully residual U-Net for the segmentation of small extracellular vesicles from TEM images. Apart from the application itself, these image analysis problems differ considerably from the one we were facing. In our case, the main challenge lies in the variable quality of the images and the variable appearance of adenoviruses in the images acquired under different biomanufacturing process conditions, rather than the variable shape or form of adenoviruses, which are well-defined.
## 5 Conclusions
To support improved adenovirus characterization, we developed a software tool for automatic segmentation and detection of intact adenovirus in TEM imaging systems, particularly MiniTEM. Despite the presence of debris and artefacts as well as broken particles in MiniTEM images, the developed software tool demonstrated the possibility of accurately and automatically segmenting and detecting intact adenovirus particles. Future potential research efforts may cover small, large, and rod debris definitions for automatic segmentation and quantification purposes.
## Funding
This work was funded via the project titled "Enhancing the Innovation Potential by advancing the know-how on biomedical image analysis" by the European Social Fund (S21770).
## Author contributions
Olivier Rukundo: Designed the convolutional neural network for automatic segmentation of adenoviruses, developed the software tool for automatic detection of intact adenoviruses, and wrote the paper. Andrea Behanova and Riccardo De Feo: Designed and developed the software tool for semi-automatic segmentation of adenoviruses and debris. Seppo Ronkko and Joni Oja: Provided the training and testing images, reviewed the paper, and confirmed the validity of the study. Jussi Tohka: Read the paper and suggested modifications, conceptualized and supervised the research, and acquired the funding for the project that supported this work.
## Conflict of interest
The authors declare no conflict of interest.
## Supplementary material
Software for semi-automatic annotation: [https://blogs.uef.fi/kubiac/software/](https://blogs.uef.fi/kubiac/software/)
Software for automatic segmentation: [https://blogs.uef.fi/kubiac/software/](https://blogs.uef.fi/kubiac/software/)
Software overview: [https://www.youtube.com/watch?v=4UZJHDPKI-g](https://www.youtube.com/watch?v=4UZJHDPKI-g)
|
2302.06501
|
Continuous phase transitions between fractional quantum Hall states and
symmetry-protected topological states
|
We study quantum phase transitions in Bose-Fermi mixtures driven by
interspecies interaction in the quantum Hall regime. In the absence of such an
interaction, the bosons and fermions form their respective fractional quantum
Hall (FQH) states at certain filling factors. A symmetry-protected topological
(SPT) state is identified as the ground state for strong interspecies
interaction. The phase transitions between them are proposed to be described by
Chern-Simons-Higgs field theories. For a simple microscopic Hamiltonian, we
present numerical evidence for the existence of the SPT state and a continuous
transition to the FQH state. It is also found that the entanglement entropy
between the bosons and fermions exhibits scaling behavior in the vicinity of
this transition.
|
Ying-Hai Wu, Hong-Hao Tu, Meng Cheng
|
2023-02-13T16:25:50Z
|
http://arxiv.org/abs/2302.06501v3
|
# Continuous phase transitions between fractional quantum Hall states
###### Abstract
We study quantum phase transitions in Bose-Fermi mixtures driven by inter-species interaction in the quantum Hall regime. In the absence of such interaction, the bosons and fermions form their respective fractional quantum Hall (FQH) states at certain filling factors. A symmetry-protected topological (SPT) state is identified as the ground state for strong inter-species interaction. The phase transitions between them are proposed to be described by Chern-Simons-Higgs field theories. For a simple microscopic Hamiltonian, we present numerical evidence for the existence of the SPT state and a continuous transition to the FQH state. It is also found that the entanglement entropy between the bosons and fermions exhibits scaling behavior in the vicinity of this transition.
_Introduction_ -- The collective behavior of a large number of microscopic objects is a fascinating topic. In quantum condensed matter physics, one central task is to elucidate the possible phases and transitions between them for a given many-body system. A large class of phases and transitions are characterized by spontaneous breaking of global symmetries, described by the Landau-Ginzburg theory. However, quantum phases of matter beyond the symmetry-breaking framework have also been discovered, a notable example being topological states in quantum Hall systems [1; 2; 3]. In the simplest cases, the integer quantum Hall (IQH) states can be understood as free electrons filling Landau levels. On the contrary, fractional quantum Hall (FQH) states only appear in strongly correlated systems. Fractionalized elementary excitations, multiple ground states on high-genus manifolds, and long-range quantum entanglement are their hallmarks. The fact that quantum Hall states do not fit into the symmetry paradigm prompts the questions: what are the possible quantum phase transitions that involve quantum Hall states and how to characterize them? Previous works have investigated transitions between different IQH states [4; 5; 6; 7], between different FQH states [8; 9; 10; 11], between certain IQH or FQH states and non-topological states [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23].
The discovery of topological insulators greatly expanded the realm of topological phases [24; 25]. One crucial insight of this adventure is that time-reversal and charge conservation symmetries should be preserved for these states to be nontrivial [26; 27; 28; 29]. Further progresses along this line lead to the concept of symmetry-protected topological (SPT) states [30; 31; 32; 33; 34; 35; 36]. This generalization incorporates strongly correlated states of spins, bosons, and fermions that exhibit nontrivial symmetry-protected edge physics but do not possess fractionalized excitations in the bulk. Quantum phase transitions from SPT states to trivial states or symmetry-breaking states have been studied [37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50].
In this work, we study a new class of topological phase transitions between SPT and FQH states in Bose-Fermi mixtures in the quantum Hall regime. This is partially motivated by the search for topological states in various synthetic systems. For example, cold atoms in the continuum or optical lattices have been actively studied in this respect [51; 52; 53; 54; 55; 56; 57; 58]. We show that a SPT state can be realized for Bose-Fermi mixtures in Landau levels with suitable inter-species interactions, and it goes through a continuous transition to two decoupled FQH states as the inter-species interaction strength decreases.
_Wave functions and field theories for the SPT and FQH states_ -- We start with trial wave functions for the
Figure 1: Illustration of the quantum phase transition in Bose-Fermi mixtures. The solid (dashed) wiggle lines represent strong (weak) interactions between the particles. If there is no inter-species interaction, two independent FQH states are formed in which the particles are transformed to composite fermions (indicated by the small arrows). As the inter-species interaction strength grows, the two types of composite fermions eventually become strongly correlated to form the SPT state.
SPT and FQH states in Landau levels, which will shed light on the nature of the transition. The letter \(b\) (\(f\)) is used as a subscript or superscript to represent bosons (fermions). For instance, the numbers of particles are denoted as \(N_{b}\) and \(N_{f}\). As illustrated in Fig. 1, the bosons and fermions are subjected to two independent magnetic fields with total fluxes \(M_{b}\) and \(M_{f}\), so their filling factors are \(\nu_{b}=\frac{N_{b}}{M_{b}}\) and \(\nu_{f}=\frac{N_{f}}{M_{f}}\). A positive direction for the magnetic fields is chosen so each filling factor has its sign. The IQH state with \(\nu=n>0\) is denoted as \(\Phi_{n}\) and that with \(\nu=-n\) is \(\Phi_{n}^{*}\). Throughout this work, we assume that the bosons/fermions carry a U(1) charge \(e_{b}/e_{f}\). While in solid state systems one usually takes \(e_{f}=1\) and \(e_{b}\) an even integer (e.g. \(e_{b}=2\) for Cooper pairs), this is not necessarily the case for cold atoms because they are actually charge neutral. Analogs of Hall conductance can be studied and the specific probing method determines the "effective" charge of atoms [56].
In terms of the complex coordinates \(z_{j},z_{k},\cdots\) on the plane, the SPT state is described by the following many-body wave function:
\[\Psi_{\rm SPT}\sim\left[\Phi_{1}^{*}(\{z_{j}^{b}\})\Phi_{1}^{*}( \{z_{j}^{f}\})\right]\] \[\times\!\!\!\prod_{j<k}^{N_{b}}(z_{j}^{b}-z_{k}^{b})\prod_{j<k}^{ N_{f}}(z_{j}^{f}-z_{k}^{f})^{2}\prod_{j}^{N_{b}}\prod_{k}^{N_{f}}(z_{j}^{b}-z_{k} ^{f}). \tag{1}\]
It can be interpreted using the flux attachment process that maps strongly correlated particles to non-interacting composite fermions [59]: The bosons (fermions) are converted to composite fermions by the Jastrow factor \(\prod_{j<k}^{N_{b}}(z_{j}^{b}-z_{k}^{b})\)\(\left[\prod_{j<k}^{N_{f}}(z_{j}^{f}-z_{k}^{f})^{2}\right]\), then the composite fermions form two \(\nu=-1\) IQH states, and the inter-species correlation is captured by \(\prod_{j}^{N_{b}}\prod_{k}^{N_{f}}(z_{j}^{b}-z_{k}^{f})\). In the thermodynamic limit, the numbers of particles and fluxes must satisfy \(N_{b}=M_{f}\) and \(N_{f}=M_{b}+M_{f}\) to realize \(\Psi_{\rm SPT}\). In addition to the ground state, we can create four types of elementary excitations that carry integral charges [60].
Topological properties of \(\Psi_{\rm SPT}\) are encoded compactly in the Abelian Chern-Simons (CS) theory. The Lagrangian density is
\[\mathcal{L}_{\rm CS}=\frac{1}{4\pi}K_{IJ}a_{I}da_{J}+\frac{t_{I}}{2\pi}a_{I} dA\;, \tag{2}\]
where \(K\) is an integer-valued symmetric matrix, the \(a_{I}\)'s are emergent gauge fields, and \(a_{I}da_{J}\equiv\epsilon^{\mu\nu\lambda}a_{I,\mu}\partial_{\nu}a_{J,\lambda}\). Here we also include the coupling with a background U(1) gauge field \(A\), with integers \(t_{I}\) known as the charge vector. This formalism was originally proposed for intrinsic topological orders [61] but has also been very useful in studying SPT states [62]. The number of degenerate ground states on a torus is given by \(|{\rm det}K|\). For the case with a unique ground state (\(|\det K|=1\)), one can further show that there exists no topologically nontrivial excitations.
Inspired by the wave function \(\Psi_{\rm SPT}\), we consider the following \(K\) matrix and charge vector:
\[K_{\rm SPT}=\begin{pmatrix}0&1\\ 1&1\end{pmatrix},\quad\mathbf{t}_{\rm SPT}=\begin{pmatrix}e_{b}\\ e_{f}\end{pmatrix}. \tag{3}\]
Because the determinant of \(K_{\rm SPT}\) is \(-1\) and its signature is zero (hence no chiral central charge), the theory indeed describes a SPT state. The Hall conductance of the system is \(\sigma_{xy}=\mathbf{t}_{\rm SPT}^{T}K_{\rm SPT}^{-1}\mathbf{t}_{\rm SPT}=e _{b}(2e_{f}-e_{b})\). If the system has an edge, there are two counter-propagating gapless modes with opposite chiralities, which can be protected by a U(1) symmetry when \(\sigma_{xy}\)\(\neq\)\(0\) (\(e_{b}\neq 0,2e_{f}\)). For the actual model studied below, the numbers of bosons and fermions are separately conserved, so we have a U(1)\({}_{b}\)\(\times\)U(1)\({}_{f}\) symmetry. The particles are coupled to two background gauge fields \(A_{b}\) and \(A_{f}\) via the charge vectors
\[\mathbf{t}_{\rm SPT}^{b}=\begin{pmatrix}e_{b}\\ 0\end{pmatrix},\quad\mathbf{t}_{\rm SPT}^{f}=\begin{pmatrix}0\\ e_{f}\end{pmatrix}. \tag{4}\]
As long as \(e_{b},e_{f}\neq 0\), there would be a nontrivial "crossed" Hall response captured by a mutual Chern-Simons term \(\frac{1}{2\pi}e_{b}e_{f}A_{b}dA_{f}\). In addition, there is a Hall response for U(1)\({}_{b}\) given by \(-\frac{1}{4\pi}e_{b}^{2}A_{b}dA_{b}\).
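As a quick consistency check of the response coefficients quoted above (our own sketch, not part of the original text), the quadratic forms \(\mathbf{t}^{T}K^{-1}\mathbf{t}\) can be evaluated symbolically, for instance with sympy:

```python
# Minimal sketch: verify the Hall responses quoted above for K_SPT (Eq. (3))
# and the charge vectors of Eq. (4) using sympy.
import sympy as sp

e_b, e_f = sp.symbols('e_b e_f')

K = sp.Matrix([[0, 1], [1, 1]])   # K_SPT, Eq. (3)
t = sp.Matrix([e_b, e_f])         # total U(1) charge vector
t_b = sp.Matrix([e_b, 0])         # U(1)_b charge vector, Eq. (4)
t_f = sp.Matrix([0, e_f])         # U(1)_f charge vector, Eq. (4)

Kinv = K.inv()

# Total Hall conductance: t^T K^{-1} t = e_b (2 e_f - e_b)
print(sp.expand((t.T * Kinv * t)[0]))       # -> -e_b**2 + 2*e_b*e_f

# Crossed U(1)_b x U(1)_f response; the two cross terms combine into
# the mutual Chern-Simons term (1/2pi) e_b e_f A_b dA_f
print(sp.expand((t_b.T * Kinv * t_f)[0]))   # -> e_b*e_f

# Response for U(1)_b alone: coefficient of (1/4pi) A_b dA_b
print(sp.expand((t_b.T * Kinv * t_b)[0]))   # -> -e_b**2
```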
Now we turn to the FQH state in which the bosons and fermions are decoupled but still have suitable intra-species interactions. At individual filling factors \(\nu_{b}=1/2\) and \(\nu_{f}=2/3\), the system is described by
\[\Psi_{\rm FQH}\sim\Phi_{1}(\{z_{j}^{b}\})\prod_{j<k}^{N_{b}}(z_{j }^{b}-z_{k}^{b})\] \[\times\Phi_{2}^{*}(\{z_{j}^{f}\})\prod_{j<k}^{N_{f}}(z_{j}^{f}-z_ {k}^{f})^{2}. \tag{5}\]
Intuitively, the particles are also converted to composite fermions by the Jastrow factors, which now form their respective IQH states with \(\nu=1\) (bosons) and \(-2\) (fermions). In the CS theory, the bosonic FQH state has \(K_{b}=2\), \(\mathbf{t}_{\rm FQH}^{b}=e_{b}\) and the fermionic FQH state has
\[K_{f}=\begin{pmatrix}1&0\\ 0&-3\end{pmatrix},\quad\mathbf{t}_{\rm FQH}^{f}=e_{f}\begin{pmatrix}1\\ 1\end{pmatrix}. \tag{6}\]
_Quantum phase transitions_ -- If we turn on inter-species interaction, it is possible to induce a quantum phase transition from the FQH state to the SPT state. To gain some intuition about how the transition takes place, we may strip off the flux attachment factors in \(\Psi_{\rm SPT}\) and \(\Psi_{\rm FQH}\) to consider a transition between the states \(\Phi_{1}(\{z_{j}^{b}\})\Phi_{2}^{*}(\{z_{j}^{f}\})\) and \(\Phi_{1}^{*}(\{z_{j}^{b}\})\Phi_{1}^{*}(\{z_{j}^{f}\})\prod_{j}^{N_{b}}\prod_{ k}^{N_{f}}(z_{j}^{b}-z_{k}^{f})\). The latter state is actually a superfluid because its \(K\) matrix
\[\begin{pmatrix}-1&1\\ 1&-1\end{pmatrix} \tag{7}\]
has zero determinant. This is reminiscent of the well-known exciton condensate in quantum Hall bilayers [63], but there the \(K\) matrix has 1 on the diagonal. In short, the transition may be understood as the composite fermions changing from two decoupled IQH states to one correlated superfluid.
This intuitive picture can be formalized using a field theory. It is helpful to perform a GL\((2,\mathbb{Z})\) transformation such that the \(K\) matrix and charge vector for the fermionic state become
\[K_{f}=\begin{pmatrix}1&1\\ 1&-2\end{pmatrix},\quad\mathbf{t}_{\text{FQH}}^{f}=e_{f}\begin{pmatrix}1\\ 0\end{pmatrix}. \tag{8}\]
To combine the bosonic and fermionic FQH states, we rename the emergent gauge field for bosons as \(a_{1}\) and the fields for fermions as \(a_{2}\) and \(a_{3}\). The resulting CS theory has 3\(\times\)3-dimensional \(K\) matrix \(K_{\text{FQH}}=K_{b}\oplus K_{f}\) and charge vector \(\mathbf{t}_{\text{FQH}}=\mathbf{t}_{\text{FQH}}^{b}\oplus\mathbf{t}_{\text{FQH}}^ {f}\). Inspired by the analysis based on wave functions, we proceed to consider what happens when \(a_{1}\) and \(a_{3}\) are locked together by a Higgs field. Specifically, a complex scalar \(\phi\) is introduced to construct the Lagrangian density
\[\mathcal{L}_{\text{mix}} = \mathcal{L}_{b}+\mathcal{L}_{f}+\left|\left(\partial-ia_{1}+ia_{ 3}\right)\phi\right|^{2} \tag{9}\] \[+r|\phi|^{2}+u|\phi|^{4}+\cdots\,.\]
When \(r>0\), \(\phi\) is gapped and can be integrated out to reproduce the CS theory for the FQH state. When \(r<0\), \(\phi\) condenses to generate the Higgs phase in which \(a_{3}\) can be eliminated by setting it to \(a_{1}\). This leads to
\[\mathcal{L}_{\text{mix}} = \frac{1}{2\pi}a_{1}da_{2}+\frac{1}{4\pi}a_{2}da_{2} \tag{10}\] \[+\frac{e_{b}}{2\pi}A_{b}da_{1}+\frac{e_{f}}{2\pi}A_{f}da_{2},\]
which is exactly the same as \(\mathcal{L}_{\text{SPT}}\). For a whole family of systems with filling factors \(\nu_{b}=p/(p+1)\) and \(\nu_{f}=(p+1)/(2p+1)\), we have uncovered similar mechanisms for continuous phase transitions and constructed the associated field theories [60].
To further understand the critical theory, we perform the following GL(3,\(\mathbb{Z}\)) basis transformation for the gauge fields:
\[\begin{pmatrix}a_{1}\\ a_{2}\\ a_{3}\end{pmatrix}=\begin{pmatrix}3&1&-1\\ -2&0&1\\ 2&1&-1\end{pmatrix}\begin{pmatrix}b_{1}\\ b_{2}\\ b_{3}\end{pmatrix}. \tag{11}\]
The \(K\) matrix is
\[\begin{pmatrix}6&0&0\\ 0&0&1\\ 0&1&-1\end{pmatrix} \tag{12}\]
in the new basis. The critical theory becomes
\[\mathcal{L}_{\text{mix}} = \frac{6}{4\pi}b_{1}db_{1}+|(\partial-ib_{1})\phi|^{2} \tag{13}\] \[+r|\phi|^{2}+u|\phi|^{4}+\cdots\,,\]
so \(a_{1}-a_{3}=b_{1}\) couples to \(\phi\) while \(b_{2}\) and \(b_{3}\) decouple from critical fluctuations. Interestingly, this theory also describes a continuous transition between a 1/6 Laughlin state and a trivial insulator. It is a strongly coupled theory for which analytical results are available only in the limit with a large number of boson flavors and a large CS level. In this case, (the generalization of) Eq. (13) indeed flows to a conformal fixed point at low energy. It is thus quite reasonable to conjecture that Eq. (13) describes an unconventional quantum critical point. For the transitions at other filling factors, similar basis transformations can also be found [60].
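The basis change itself can be verified directly: since the fields transform as \(a=Wb\) with the unimodular matrix \(W\) of Eq. (11), the \(K\) matrix transforms as \(W^{T}K_{\text{FQH}}W\). The snippet below is a small numerical check (ours, not part of the paper) that reproduces Eq. (12).

```python
# Minimal sketch: check that the GL(3,Z) transformation of Eq. (11) maps
# K_FQH = K_b (+) K_f into the block form of Eq. (12).
import numpy as np

# K_FQH in the (a_1, a_2, a_3) basis: K_b = (2) for the bosonic 1/2 state,
# plus the GL(2,Z)-transformed fermionic block of Eq. (8).
K_FQH = np.array([[2, 0, 0],
                  [0, 1, 1],
                  [0, 1, -2]])

# a = W b, with W the integer matrix of Eq. (11)
W = np.array([[ 3, 1, -1],
              [-2, 0,  1],
              [ 2, 1, -1]])

assert round(abs(np.linalg.det(W))) == 1   # unimodular, hence an allowed relabeling

# The CS term (1/4pi) a^T K da becomes (1/4pi) b^T (W^T K W) db.
K_new = W.T @ K_FQH @ W
print(K_new)
# [[ 6  0  0]
#  [ 0  0  1]
#  [ 0  1 -1]]  -> matches Eq. (12); note a_1 - a_3 = b_1 carries the level-6 term.
```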
_Numerical results_ -- It is not a priori clear that the SPT state can be realized using a simple microscopic Hamiltonian. To address this question, we consider the many-body Hamiltonian for the bosons and fermions
\[H_{\text{mix}} = \sum_{j<k}4\pi\ell_{b}^{2}\ \delta(\mathbf{r}_{j}^{b}-\mathbf{r}_{k }^{b})+\sum_{j<k}4\pi\ell_{f}^{4}\ \nabla^{2}\delta(\mathbf{r}_{j}^{f}-\mathbf{r}_{k}^{f}) \tag{14}\] \[+ g_{m}\sum_{j,k}4\pi\ell_{b}\ell_{f}\ \delta(\mathbf{r}_{j}^{b}- \mathbf{r}_{k}^{f})\;,\]
where \(\ell_{b}\) (\(\ell_{f}\)) is the magnetic length for bosons (fermions). It is necessary to introduce two magnetic
Figure 2: Numerical results on the torus. (a) The low-lying energy levels of the \(N_{b}=6,N_{f}=12\) system versus \(g_{m}\). (b) The first-order derivative of the ground state energy. (c) The second-order derivative of the ground state energy. (d) The ground state fidelity susceptibility. (e) The von Neumann entanglement entropy between bosons and fermions. (f) The same data in (e) replotted to achieve data collapse. The numbers of particles (\(N_{b},N_{f}\)) for panels (b-f) are indicated using the legend of (b).
lengths because the magnetic fluxes for the two types of particles are different. The unit of length is chosen to be \(\ell_{b}\). The particles are confined to their respective lowest Landau levels and higher levels are neglected. The first (second) term in \(H_{\rm mix}\) corresponds to the zeroth (first) Haldane pseudopotential [64], so we know for sure that \(\Psi_{\rm FQH}\) can be realized at \(g_{m}=0\). Exact diagonalizations of \(H_{\rm mix}\) are performed on the torus [65] at many different \(g_{m}\in[0,1]\). The energy spectra are presented in Fig. 2 (a). A unique ground state is observed when \(g_{m}\sim 1\), but there are six quasi-degenerate ground states when \(g_{m}\sim 0\)[66; 67]. The same evolution is observed for all cases that have been checked and is consistent with the theoretical prediction.
The transition is inspected more closely using the lowest eigenvalue \(E_{0}(g_{m})\) and the associated eigenstate \(|\Psi_{0}(g_{m})\rangle\). The transition appears to be continuous, as one can see from the first-order derivative \(dE_{0}/dg_{m}\) in Fig. 2 (b). The transition point is found to be \(g_{m}^{c}\approx 0.39\), where peaks appear in the second-order derivative \(d^{2}E_{0}/dg_{m}^{2}\) as shown in Fig. 2 (c). The evolution of \(|\Psi_{0}(g_{m})\rangle\) can be characterized using the ground state fidelity susceptibility [68; 69]
\[\mathcal{F}(g_{m})=\frac{2}{(\delta g_{m})^{2}}\Big{[}1-\big{|} \langle\Psi_{0}(g_{m})|\Psi_{0}(g_{m}+\delta g_{m})\rangle\big{|}\Big{]}. \tag{15}\]
As the system passes the transition point, the state changes abruptly such that \(\mathcal{F}\) attains a very large value. This picture is confirmed by the appearance of peaks around \(g_{m}^{c}\approx 0.39\) in Fig. 2 (d). The continuous nature of this transition is further corroborated by density matrix renormalization group calculations [70; 71; 72]. In the vicinity of a critical point, critical scaling of physical quantities plays a prominent role. For symmetry-breaking phase transitions, correlation functions of local observables are routinely studied. However, they are not expected to give clear signatures due to the limited spatial extent of our system. Instead, we consider the quantum entanglement between the bosons and fermions. The reduced density matrix for the bosons is obtained by tracing out the fermions as \(\rho_{b}={\rm Tr}_{f}\ |\Psi_{0}(g_{m})\rangle\langle\Psi_{0}(g_{m})|\). The von Neumann entanglement entropy \(S_{\rm vN}=-{\rm Tr}\ \rho_{b}\ln\rho_{b}\) is presented in Fig. 2 (e). The boson-fermion entanglement is weak for small \(g_{m}\) but seems to obey the volume law in the SPT state. Unfortunately, we are not able to derive the scaling form of \(S_{\rm vN}\) from the strongly coupled field theory. We make a bold conjecture that \(S_{\rm vN}(g_{m})N_{b}^{\alpha}=f[(g_{m}-g_{m}^{c})N_{b}^{\beta}]\). The data points for \(g_{m}\in[0.30,0.50]\) can be collapsed on a straight line using \(\alpha\approx-0.93\) and \(\beta\approx 0.91\) as shown in Fig. 2 (f).
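The fidelity susceptibility of Eq. (15) is a generic diagnostic that only requires ground states at neighboring couplings. The sketch below illustrates the structure of that computation on a small stand-in model (a transverse-field Ising chain, which is emphatically not \(H_{\rm mix}\)); it is meant only to show how the quantity in Fig. 2 (d) is assembled from exact eigenstates.

```python
# Toy sketch (not the actual H_mix of Eq. (14)): compute the ground-state
# fidelity susceptibility of Eq. (15) for a small parameterized Hamiltonian.
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import eigsh

sx = csr_matrix([[0, 1], [1, 0]], dtype=float)
sz = csr_matrix([[1, 0], [0, -1]], dtype=float)

def op_at(op, site, L):
    """Embed a single-site operator at position `site` in an L-site chain."""
    mats = [identity(2, format='csr')] * L
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = kron(out, m, format='csr')
    return out

def hamiltonian(g, L=10):
    """H(g) = -sum_i sz_i sz_{i+1} - g sum_i sx_i (open chain, stand-in model)."""
    H = csr_matrix((2**L, 2**L))
    for i in range(L - 1):
        H = H - op_at(sz, i, L) @ op_at(sz, i + 1, L)
    for i in range(L):
        H = H - g * op_at(sx, i, L)
    return H

def ground_state(g, L=10):
    _, vecs = eigsh(hamiltonian(g, L), k=1, which='SA')
    return vecs[:, 0]

def fidelity_susceptibility(g, dg=1e-3, L=10):
    """Eq. (15): F = 2 [1 - |<psi0(g)|psi0(g+dg)>|] / dg^2."""
    overlap = abs(ground_state(g, L) @ ground_state(g + dg, L))
    return 2.0 * (1.0 - overlap) / dg**2

for g in [0.6, 0.8, 1.0, 1.2, 1.4]:
    print(g, fidelity_susceptibility(g))   # peaks near the critical coupling g ~ 1
```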
It is also helpful to employ the spherical geometry where a radial magnetic field is generated by a magnetic monopole [64]. A great advantage is that the counterparts of Eq. (1) (and those for excitations) can be constructed more easily on the sphere than on the torus [73; 74]. However, its curvature results in a shift quantum number and the filling factor in finite-size systems may not be equal to its thermodynamic value [75]. The system parameters should satisfy \(M_{b}=2(N_{b}-1)\) for the bosonic \(1/2\) state, \(M_{f}=3N_{f}/2\) for the fermionic \(2/3\) state, and \(M_{b}=N_{f},M_{f}=N_{b}+N_{f}\) for the SPT state. This means that we must choose \(N_{b}=N_{f}/2+1\) instead of \(N_{b}=N_{f}/2\). For the \(N_{b}=5,N_{f}=8\) system, Fig. 3 (a) displays the low-lying energy levels of \(H_{\rm mix}\) at \(g_{m}=1.0\) (plotted versus the total angular momentum \(L\)), which are compared with appropriate trial wave functions [60]. The overlap for the ground state is excellent (0.99), and those for the excitations are also quite good (except for one state). To probe the edge physics, we turn to the real space entanglement spectrum [76; 77; 78; 79]. For the \(N_{b}=7,N_{f}=12\) system, the reduced density matrix for the southern hemisphere is constructed and its eigenvalues are shown in Fig. 3 (b). The good quantum numbers are the numbers of particles in the subspace and the \(z\) component of the angular momentum. As indicated in the figure, two edge modes with opposite chiralities can be identified. The counting of levels \(1,1,2,3\) suggests that they are described by free bosons, which agrees with the prediction of the CS theory.
_Conclusions_ -- In summary, we have proposed an SPT state in Bose-Fermi mixtures that could be realized using a simple Hamiltonian. By tuning the inter-species interaction, quantum phase transitions to FQH states with intrinsic topological order can be induced. The possibility that these transitions are continuous is revealed by critical field theory and substantiated by numerical results. We have also made a first attempt toward revealing critical scaling of the entanglement entropy. This is very premature due to the absence of reliable analytical results on the scaling function. Many questions remain to be answered. The Chern-Simons-Higgs theory in Eq. (13) is realized in our microscopic model. It will be interesting to further study its critical properties. More broadly, a general picture for the transitions between strongly correlated topological states in the quantum Hall regime is
Figure 3: Numerical results on the sphere. (a) The low-lying energy levels of the \(N_{b}=5,N_{f}=8\) system. The lines (dots) represent exact eigenstates (trial wave functions) and the numbers are their overlaps. (b) The entanglement spectrum of the \(N_{b}=7,N_{f}=12\) system in the sector for which the southern hemisphere has 4 bosons and 6 fermions.
very desirable. The effects of disorder and other imperfections that could appear in realistic systems should also be investigated.
_Note added_ -- While finalizing the manuscript, we noticed a preprint on the transition between a FQH state and an exciton condensate in quantum Hall bilayers [80]. The physics is quite different from the FQH-SPT transition studied in this work.
_Acknowledgements_ -- M.C. would like to thank Chao-Ming Jian for helpful conversations. This work was supported by the NNSF of China under grant No. 12174130 (Y.-H. W.), the Deutsche Forschungsgemeinschaft through project A06 of SFB 1143 under project No. 247310070 (H.-H. T.), and NSF under award number DMR-1846109 (M.C.).
|
2303.11354
|
Limits on Dark Matter Annihilation from the Shape of Radio Emission in
M31
|
Well-motivated models of dark matter often result in a population of
electrons and positrons within galaxies produced through dark matter
annihilation -- usually in association with gamma rays. As they diffuse through
galactic magnetic fields, these $e^\pm$ produce synchrotron radio emission. The
intensity and morphology of this signal depends on the properties of the
interstellar medium through which the $e^\pm$ propagate. Using observations of
the Andromeda Galaxy (M31) to construct a model of the gas, magnetic fields,
and starlight, we set constraints on dark matter annihilation to $b\bar{b}$
using the morphology of 3.6 cm radio emission. As the emission signal at the
center of M31 is very sensitive to the diffusion coefficient and dark matter
profile, we base our limits on the differential flux in the region between
$0.9-6.9$ kpc from the center. We exclude annihilation cross sections $\gtrsim
3 \times 10^{-25}$ cm$^3$/s in the mass range $10-500$ GeV, with a maximum
sensitivity of $7\times 10^{-26}$ cm$^3$/s at $20-40$ GeV. Though these limits
are weaker than those found in previous studies of M31, they are robust to
variations of the diffusion coefficient.
|
Mitchell J. Weikert, Matthew R. Buckley
|
2023-03-20T18:00:02Z
|
http://arxiv.org/abs/2303.11354v2
|
# Limits on Dark Matter Annihilation from the Shape of Radio Emission in M31
###### Abstract
Well-motivated scenarios of thermally-produced dark matter often result in a population of electrons and positrons within galaxies produced through dark matter annihilation - often in association with high-energy gamma rays. As they diffuse through galactic magnetic fields, these \(e^{\pm}\) produce synchrotron radio emission. The intensity and morphology of this signal depends on the properties of the interstellar medium through which the \(e^{\pm}\) propagate. Using observations of the Andromeda Galaxy (M31) to construct a model of the gas, magnetic fields, and starlight, we set constraints on dark matter annihilation to \(b\bar{b}\) using the morphology of 3.6 cm radio emission. As the emission signal at the center of M31 is very sensitive to the diffusion coefficient and dark matter profile, we base our limits on the differential flux in the region between \(0.9-6.9\,\mathrm{kpc}\) from the center. We exclude annihilation cross sections \(\gtrsim 3\times 10^{-25}\) cm\({}^{3}\)/s in the mass range \(10-500\) GeV, with a maximum sensitivity of \(7\times 10^{-26}\) cm\({}^{3}\)/s at \(20-40\) GeV. Though these limits are weaker than those found in previous studies of M31, they are robust to variations of the diffusion coefficient.
## I Introduction
To date, all evidence for dark matter comes from its gravitational influence on the visible matter in the Universe. However, the majority of successful models for the production of dark matter require some level of non-gravitational interactions between the visible and dark sectors. Perhaps the best-known such scenario is that of thermally-produced dark matter, where a small interaction between dark matter and the Standard Model results in a relic population of non-relativistic particles due to thermal freeze-out during the early Universe. Weakly-Interacting Massive Particles (WIMPs) are the most well-known implementation of this class of dark matter models. In such models, the observed density of dark matter is obtained if the velocity-averaged annihilation cross section is \(\langle\sigma v\rangle\sim 3\times 10^{-26}\) cm\({}^{3}\)/s, for dark matter in the mass range \(\sim 1-10^{3}\) GeV [1].1
Footnote 1: Though model-specific details can easily change these numbers by \(\mathcal{O}(1)\) factors or more.
Thermal relics, as well as other models with dark matter-Standard Model interactions of similar magnitude, result in a number of possible experimental signatures. Of particular interest to this work is indirect detection, where present-day residual annihilation or decay of dark matter into Standard Model particles gives visible signatures that can be detected here on Earth. Annihilation to Standard Model particles will generically result in cascade decays terminating in stable \(e^{\pm}\), photons, neutrinos, and \(p/\bar{p}\), evidence of which can reach Earth-based detectors from their astronomical point of origin. As the strength of these signals increases with dark matter density squared and decreases with the distance to target squared, the indirect detection targets with the greatest signal rate are the largest and closest conglomerations of dark matter. The highest intensity signals are therefore expected to be seen from our own Milky Way Galactic Center, but other nearby galaxies - such as Andromeda (M31) [2; 3], the Large Magellanic Cloud (LMC) [4], the Small Magellanic Cloud (SMC) [5], and local dwarf galaxies [6; 7; 8; 9; 10; 11; 12] - can have significant signals too. As the backgrounds and systematics for these systems differ from the Milky Way, they can be compelling targets despite the lower signal rate.
High-energy prompt photons, which can either come directly from the annihilation of dark matter or the cascade decays of annihilation products, travel largely unimpeded from where they were produced to Earth. Such photons are therefore the most straightforward indirect detection signal, with a morphology that is set only by the dark matter distribution. Interestingly, many groups have identified an excess of gamma rays in the energy range \(1-3\) GeV from data collected by the Fermi Large Area Telescope (LAT) [13] in the Milky Way [14; 15; 16; 17; 18], with morphology compatible with the dark matter expectations. Possible signals consistent with this gamma ray excess have been reported in M31 [2], and the LMC [4], though with less significance. These excesses can be well-fit by dark matter models with \(m_{\chi}\sim\mathcal{O}(10-100\) GeV) annihilating to either \(b\bar{b}\) or \(\tau^{+}\tau^{-}\) followed by cascade decays which result in the observed photons, with a thermally averaged cross section of \(\langle\sigma v\rangle\sim 2\times 10^{-26}\,\mathrm{cm}^{3}\)/s [4; 15; 16; 17; 18; 19].
However, the ultimate origin of this gamma ray excess remains unclear. An unresolved population of millisecond pulsars (MSPs) in the center of the Milky Way has been suggested as an alternate source of this signal [20; 21; 22; 23; 24; 25]. Ref. [26] has argued that the distribution of gamma rays in the Galactic Center excess in the Fermi-LAT data contains non-Poissonian statistics, suggestive of a MSP origin. At this time, debate appears to be far from settled, with questions about the spectrum of MSPs [27], morphology and background emission modelling [28; 29; 30], and the non-Poissonian statistics interpretation [31; 32] all remaining open. A recent analysis [33] suggests that the observed excess is best fit by a combination of point sources and a diffuse source, but uncertainties are large
enough that either origin could dominate.
In this context, searches for indirect detection signals beyond prompt photons are especially interesting. Dark matter annihilation into Standard Model final states which decay into gamma rays will necessarily also have significant branching ratios into electrons and positrons. These \(e^{\pm}\) will interact with galactic magnetic fields, ambient photons (from starlight, dust and the Cosmic Microwave Background (CMB)) and interstellar gas, losing energy and emitting a range of secondary photons ranging in energy from radio up to X-rays. These signals depend on the properties of the target beyond the dark matter distribution, introducing uncertainties that do not exist in prompt photon searches; however, the systematics and backgrounds are largely distinct as well.
In this work, we set constraints on dark matter annihilation by analyzing a \(3.6\,\mathrm{cm}\) radio map of the Andromeda galaxy (M31) by the Effelsberg telescope [34]. M31 has been a common target for dark matter indirect detection searches using radio emission from the center of the galaxy [35; 36; 37; 38]; though the resulting constraints are sensitive to assumptions made about the astrophysical characteristics of the galaxy and the dark matter distribution.
The \(e^{\pm}\) injection rate (and the associated radio signal) at the center of M31 is dependent on the slope of the dark matter density distribution, which has considerable uncertainties [39]. The galactic center radio signal is also dependent on assumptions of the diffusion coefficient. For example, Refs. [35; 36; 37] assume electrons and positrons lose all of their energy before diffusing a measurable distance, predicting larger signals in this region than analyses which assume greater diffusion (such as Ref. [19]).
In order to set robust limits which are less sensitive to reasonable variations of the astrophysical parameters, we consider the morphology and intensity of the radio emission from the region of M31 between 0.9-6.9 kpc from the galactic center. We compute the expected synchrotron emission from electrons and positrons in M31 produced through dark matter in the mass range \(6-500\,\mathrm{GeV}\) annihilating to \(b\bar{b}\) while varying the diffusion coefficient over the range of experimentally allowed values [40; 41; 42; 34]. Commonly used tools for modeling the transport of \(e^{\pm}\) (such as galprop[43] and rx-dmfit[44]) use a uniform diffusion coefficient. For modeling the relatively large region of interest for our analysis, this assumption is insufficient. We develop a numerical solution that allows for radial dependence in all transport coefficients - including the diffusion coefficient. Using our numerical method, we solve for the spherically averaged electron-positron phase space density and compute the radio emission from this phase space density and an axisymmetric model of the magnetic field. We then set exclusion limits on the annihilation cross section of dark matter, while varying the diffusion coefficient normalization.
The remainder of this paper is organized as follows. In Section II, we describe the radio observations of M31 used in the analysis. Section III describes the spectrum and morphology of electrons and positrons injected into M31 through the annihilation of dark matter. We construct our models of the magnetic fields, interstellar radiation field (ISRF), and thermal gas density using a variety of relevant data in Section IV. The transport of electrons and positrons within M31 is described in Section V. In this section, we include a discussion of the physics of charged particle transport in a galaxy and our numerical method for solving the transport equation for systems with position-dependent energy loss and diffusion. In Section VI, we calculate the intensity and morphology of the resulting synchrotron emission. Our statistical method for determining a data-driven background model and setting exclusion limits is described in Section VII. Finally, in Section VIII, we present our results. We conclude in Section IX.
## II Radio observations of M31
To constrain dark matter annihilation into high-energy electrons and positrons in M31, we use the non-thermal radio flux per unit frequency per beam \(dS/d\nu\) from a survey of \(\nu=8.35\times 10^{9}\,\mathrm{Hz}\) emission in M31 [34] using the Effelsberg 100-m telescope.2 Our data has the thermal emission subtracted, along with 38 point sources which are not associated with M31. This is the highest frequency, and therefore highest resolution, intensity map of M31 measured by the Effelsberg telescope. The frequency bandwidth is \(\Delta\nu=1.1\times 10^{9}\,\mathrm{Hz}\), while the half-power beam-width (HPBW) is \(1.5^{\prime}\) (corresponding to a physical size of \(0.34\,\mathrm{kpc}\) at the distance of M31). The root-mean-squared (rms) noise of the data is \(\sigma_{\mathrm{rms}}=0.25\) mJy/beam in the inner \(9.13\,\mathrm{kpc}\times 9.13\,\mathrm{kpc}\) region and \(\sigma_{\mathrm{rms}}=0.3\) mJy/beam elsewhere.
Footnote 2: Note that our vertical axis in Figure 1 is inverted compared to Figure 9 of Ref. [34].
In Figure 1, we show the intensity map of the data. The reported intensity at each pixel is the radio emission measured by the Effelsberg telescope from that location on the sky; this corresponds to the true differential flux convolved with the frequency band and the angular beam centered on that location. Our \(x\) and \(y\) coordinates are oriented so that the \(x\) axis is aligned with the semimajor axis of M31, converting angular coordinates to lengths assuming a distance to M31 of 785 kpc [45].
We note that the observations of M31 have significant negative values, well in excess of statistical expectations given the rms noise. Most notably, the data has a large negative excursion located near the center of M31, at \((x,y)\sim(-2,1)\,\mathrm{kpc}\). This may be due to over-subtraction of one of the point sources identified by Ref. [34]. These negative values suggest that pixels labeled as having a flux of \(0\) mJy/beam actually may have
a significant (unknown) positive flux. This in part motivates our choice to set limits using _morphology_ of the expected dark matter-induced radio signal, rather than overall intensity.
Like the Milky Way, M31 is a spiral galaxy with approximate axisymmetry around a rotating stellar disk. We adopt cylindrical coordinates with the origin at the center of M31, the cylindrical radius \(R\) and the height away from the midplane of the disk \(z\). The assumption of axisymmetry implies there is no dependence on the angle around the disk \(\phi\). Note that we observe M31 at an angle of inclination given by \(\beta=77.5^{\circ}\)[46], and so the cylindrical \((R,z,\phi)\) coordinates are projected on to the \(x-y\) coordinate system of Figure 1. We will refer to the spherical radius from the center of M31 as \(r\).
## III Dark matter production of \(e^{\pm}\) in M31
Dark matter annihilation to unstable Standard Model particles such as \(b\)-quarks, \(\tau\) leptons, or \(W\) bosons will result in cascade decays involving large numbers of leptons and QCD bound states, many of which themselves will decay into prompt photons and \(e^{\pm}\). These final-state particles will have energies in the \(\mathcal{O}(0.1-10\text{ GeV})\) range for dark matter with the weak-scale masses typically expected for thermal relics. In this section, we calculate the injection morphology and spectrum of electrons and positions produced in M31 due to dark matter annihilation, assuming weak-scale masses and annihilation into \(b\bar{b}\) pairs.
While the flux on Earth of prompt photons from dark matter annihilation involves the integration of the dark matter density squared along the line of sight, electrons and positrons generated far from the Earth do not propagate to detectors here. Instead, we must track the evolution of the \(e^{\pm}\) phase space density as the particles diffuse and energy is lost - a task we will take up in Section V. For now, we will quantify the rate of production of the \(e^{\pm}\) with a source term, which depends on the local dark matter density, \(\rho_{\chi}(\mathbf{x})\), at every location within M31 and the particle physics model of the dark matter candidate. The source term or injection density rate of \(e^{\pm}\) due to dark matter self-annihilation is given by
\[Q_{e}(\mathbf{x},E)=\frac{\langle\sigma v\rangle}{2m_{\chi}^{2}}\frac{dN_{e}}{dE} \rho_{\chi}(\mathbf{x})^{2}, \tag{1}\]
where \(\langle\sigma v\rangle\) is the thermally averaged annihilation cross section, \(m_{\chi}\) is the dark matter particle mass, and \(dN_{e}/dE\) is the injection spectrum of \(e^{\pm}\) per annihilation in terms of the \(e^{\pm}\) energy \(E\). Here and throughout this work, we have assumed that dark matter is its own antiparticle; if it is not, there is an additional factor of \(1/2\) in Eq. (1).
The energy spectrum of the \(e^{\pm}\) source is determined by \(dN_{e}/dE\) which is influenced by the dark matter mass and the annihilation channel. Our choices for these parameters are motivated by fits of dark matter annihilation to the gamma ray excess in the Milky Way's Galactic center [15; 16; 17; 18]. In the Milky Way, the dark matter candidates that best fit the gamma ray excesses have
Figure 1: Smoothed non-thermal radio intensity map of M31 from Ref. [34], showing the flux per unit frequency per beam averaged over a frequency bandwidth of \(1.1\times 10^{9}\,\mathrm{Hz}\). The HPBW projected into the plane of M31 is \(0.340\,\mathrm{kpc}\) and the rms noise is given by \(\sigma_{\mathrm{rms}}=0.25\,\mathrm{mJy/beam}\) in the inner \(9.13\,\mathrm{kpc}\times 9.13\,\mathrm{kpc}\) region and \(\sigma_{\mathrm{rms}}=0.3\,\mathrm{mJy/beam}\) in the rest of the map. Digitized data for this figure was provided by the authors of Ref. [34].
\(m_{\chi}\sim 30-50\) GeV annihilating to \(b\bar{b}\) or \(m_{\chi}\sim 10\) GeV annihilating to \(\tau^{+}\tau^{-}\)[15; 16; 17; 18; 4]. A similar signal in M31 has a best fit of dark matter with mass \(m_{\chi}\sim 10\) GeV annihilating to \(b\bar{b}\), or \(m_{\chi}\sim 5\) GeV annihilating to \(b\bar{b}\) and \(\tau^{+}\tau^{-}\) democratically [19].
In part motivated by these fits, in this work we consider dark matter annihilation into \(b\bar{b}\) with \(m_{\chi}\) in the range \(6-500\) GeV. We scan over this mass range, and use pythia8[47] to decay, shower, and hadronize the \(b\bar{b}\) annihilations to calculate \(dN_{e}/dE\) for each choice of \(m_{\chi}\).3 We show in Figure 2 the resulting \(e^{\pm}\) spectra for a sample of representative dark matter masses. We note that the showers of hadronic particles produced by the \(b\bar{b}\) state are broadly similar to those produced by other choices of Standard Model final states.
Footnote 3: The pythia shower was modified to allow decays of Standard Model particles which are meta-stable on detector timescales, but whose decays would be astrophysically relevant, e.g., \(\mu\), \(\pi^{0}\), \(\pi^{\pm}\), and neutrons.
The morphology of the source is determined by the dark matter distribution of M31. This can be fit by a modified Navarro-Frenk-White (NFW) profile [48; 49; 50]
\[\rho_{\chi}=\frac{\rho_{0}}{\left(r/r_{s}\right)^{\gamma}\left(1+r/r_{s} \right)^{3-\gamma}} \tag{2}\]
where \(\gamma\) is the logarithmic inner-slope, \(\rho_{0}\) is the scale density and \(r_{s}\) is the scale radius. In the Milky Way, the gamma ray excess favors an inner slope of \(\gamma\sim 1.25\)[16; 17; 18; 15]. This slope is adopted by Ref. [19] for the dark matter distribution of M31. However, there is considerable uncertainty in the M31 dark matter distribution. In keeping with available kinematic fits to the rotation curve, we use a standard NFW with \(\gamma=1\). For the scale density and the scale radius, we use the best-fit values from analysis of kinematic data [42]:
\[\rho_{0}= \left(0.418\pm 0.068\right)\,\mathrm{GeV/cm^{3}}, \tag{3}\] \[r_{s}= \left(16.5\pm 1.5\right)\,\mathrm{kpc}.\]
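For concreteness, a minimal sketch of Eq. (1) with the NFW profile of Eq. (2) and the parameters of Eq. (3) is given below. The injection spectrum \(dN_{e}/dE\) is replaced by an illustrative placeholder, since in the actual analysis it is tabulated from the pythia8 showers described above.

```python
# Minimal sketch of the e± injection rate of Eq. (1). The dN_e/dE used here
# is a placeholder (NOT the pythia8 spectrum); everything else follows the text.
import numpy as np

def rho_nfw(r_kpc, rho0=0.418, rs=16.5, gamma=1.0):
    """NFW density in GeV/cm^3 at spherical radius r (kpc); Eq. (2), Eq. (3)."""
    x = r_kpc / rs
    return rho0 / (x**gamma * (1.0 + x)**(3.0 - gamma))

def dN_dE_placeholder(E, m_chi):
    """Stand-in for the pythia8 injection spectrum (illustrative only)."""
    E = np.asarray(E, dtype=float)
    return np.where(E < m_chi, 1.0 / (E + 0.1)**1.5, 0.0)

def Q_e(r_kpc, E, m_chi=40.0, sigma_v=3e-26, dN_dE=dN_dE_placeholder):
    """Eq. (1): injection rate density in e± / (cm^3 s GeV).

    sigma_v in cm^3/s, m_chi and E in GeV; dark matter assumed self-conjugate.
    """
    return (sigma_v / (2.0 * m_chi**2)) * dN_dE(E, m_chi) * rho_nfw(r_kpc)**2

# Example: injection rate of 5 GeV electrons/positrons 1 kpc from the center.
print(Q_e(1.0, 5.0))
```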
## IV Astrophysical model of M31
The intensity and morphology of a radio signal which originates from electrons and positrons in a galaxy depends greatly on how the charged particles propagate. In M31, the most important propagation effects are diffusion and energy loss [19]. To calculate the effects of these, we must first model the properties of the interstellar medium.
In this section, we present our models for the magnetic field, the interstellar radiation field (ISRF), and the various components of thermal gas within M31. As relativistic \(e^{\pm}\) propagate, the interstellar magnetic field causes them to undergo synchrotron energy loss, emitting photons at radio frequencies. Random fluctuations in the magnetic field also diffuse the charged particles through space. The ISRF causes the \(e^{\pm}\) to lose energy through inverse-Compton scattering. Lastly, the various components of gas cause energy loss through bremsstrahlung and Coulomb scattering.
In the literature on M31, a variety of distances to the galaxy have been assumed. Throughout this work, we use a distance to M31 of 785 kpc [45]. When necessary, we scale the results of previous works used in our model of M31 to take in to account different choices in the distance.
### Magnetic Fields of M31
The magnetic field \(\mathbf{B}\) in M31 is turbulent, with fluctuations on many wavelengths, ranging from the size of the galaxy down to below the resolution limit of our experimental probes. Regardless of length scale, the details of the fluctuations themselves are not observationally accessible, but the expectation value of \(\mathbf{B}^{2}\) at different locations in the galaxy as well as the power spectrum of fluctuations (in terms of the magnitude of the wavenumber \(k\)) of the magnetic field can be measured, and suffice for our purposes.
We write the magnetic field as the product of an axisymmetric field magnitude and a random dimensionless vector-field that depends on location \(\mathbf{x}\):
\[\mathbf{B}(\mathbf{x})=\bar{B}(R,z)\mathbf{b}(\mathbf{x}). \tag{4}\]
The vector \(\mathbf{b}\) contains the local fluctuations in the field, while
Figure 2: The number of \(e^{\pm}\) in final states per unit energy per annihilation of dark matter into \(b\bar{b}\) for a representative sample of dark matter masses \(m_{\chi}\).
\[\bar{B}(R,z)^{2}\equiv\langle\mathbf{B}(\mathbf{x})^{2}\rangle \tag{5}\]
is the expectation value of the magnitude of the field squared, which we assume is independent of \(\phi\). Formally, this is an ensemble average with respect to the probability distribution that the magnetic field is sampled from.
As is conventional [51, 52, 53, 19, 54], we characterize the magnetic field fluctuations in terms of a power spectrum normalized as
\[\langle\mathbf{b}(\mathbf{x})^{2}\rangle=\int\limits_{k_{0}}^{\infty}dkP_{b}(k)=1, \tag{6}\]
where \(k_{0}\) is the minimum wavenumber for which the power spectrum applies. The length-scale \(1/k_{0}\) is typically assumed to be a factor of \(\mathcal{O}(10-100)\) smaller than the characteristic length-scale of the galaxy [51]. For M31, this implies \(\mathcal{O}(0.1\,\mathrm{kpc})\lesssim 1/k_{0}\lesssim\mathcal{O}(1\,\mathrm{kpc})\) (though when setting limits we will vary this parameter over a much more conservative range). Observations of the propagation of cosmic rays in the Galaxy [55] find a diffusion coefficient of the form \(D\propto E^{\delta}\) with \(\delta\simeq 1/3\). This is consistent with magnetic field fluctuations that follow a Kolmogorov spectrum [56], \(P_{b}\propto k^{-5/3}\). The diffusion of charged particles moving in a magnetic field with fluctuations obeying a Kolmogorov spectrum will be discussed in detail in Section V.1.
We determine the \(R\) dependence of \(\bar{B}\) from measurements of the M31 magnetic field (taken to be the root-mean-squared (RMS) field strength) in the disk in three regions: within the inner \(1\,\mathrm{kpc}\)[57], in the range \(6-14\,\mathrm{kpc}\)[58], and in intergalactic space [59]. The measurements from Refs. [57, 58] are shown in Figure 3. The intergalactic magnetic field has been measured to be at most \(0.3\,\mu\mathrm{G}\)[59], which we take to be the \(1\sigma\) upper bound of the field strength outside of M31. To require our fit to the M31 field strength to agree with this upper bound, we include \(\bar{B}=0.15\pm 0.15\,\mu\mathrm{G}\) at \(R=300\,\mathrm{kpc}\) (though it is not shown in Figure 3) in addition to the measurements from Refs. [57, 58].
We fit the RMS magnetic field measurements to the functional form
\[\bar{B}(R,z)=\left(B_{0}e^{-R/R_{B,0}}+B_{1}e^{-R/R_{B,1}}\right)e^{-|z|/h_{B} (R)}. \tag{7}\]
This functional form has sufficient flexibility to fit the available data. Our best-fit parameters for Eq. (7) are shown in Table 1.
Assuming equipartition between the cosmic ray and magnetic field energy density, the vertical scale height of the magnetic field is approximately four times the scale height of the disk of synchrotron emission [60], which itself is approximated by the scale height of the HI disk [58]. The HI disk scale height as a function of \(R\) was measured by Ref. [61]; rescaling from the distance to M31 assumed in that work (\(690\,\mathrm{kpc}\)), the magnetic field scale height is
\[\begin{split} h_{B}(R)=& 4h_{\mathrm{syn}}=4h_{ \mathrm{HI}}\\ =&(0.83\pm 0.17\,\mathrm{kpc})+(0.064\pm 0.012) \times R.\end{split} \tag{8}\]
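For reference, the fitted field model of Eqs. (7)-(8) with the Table 1 parameters can be evaluated as follows (a simple sketch of ours, not code from the analysis):

```python
# Minimal sketch: RMS magnetic field model of Eqs. (7)-(8), Table 1 parameters.
# Field in microGauss, lengths in kpc.
import numpy as np

B0, B1 = 11.2, 7.2       # muG, Table 1
RB0, RB1 = 3.5, 77.6     # kpc, Table 1

def h_B(R):
    """Magnetic-field scale height of Eq. (8), in kpc (central values)."""
    return 0.83 + 0.064 * R

def B_rms(R, z):
    """RMS field strength of Eq. (7) at cylindrical (R, z), in muG."""
    return (B0 * np.exp(-R / RB0) + B1 * np.exp(-R / RB1)) * np.exp(-np.abs(z) / h_B(R))

# Example: field on the disk at the center and at R = 9 kpc.
print(B_rms(0.0, 0.0), B_rms(9.0, 0.0))   # ~18.4 and ~7.3 muG
```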
### Interstellar Radiation Fields of M31
The ISRF of M31 has contributions from the CMB, starlight, and infrared emission from dust. We model each component, and sum the results to obtain the total energy density of radiation within M31. Most straightforward is the energy density of the CMB, which is (for our purposes) uniform and given by
\[\rho_{\mathrm{CMB}}=\frac{\pi^{2}(k_{B}T_{\mathrm{CMB}})^{4}}{15(\hbar c)^{3}}=0.26\,\mathrm{eV/cm^{3}}, \tag{9}\]
where \(k_{B}T_{\mathrm{CMB}}=2.3\times 10^{-4}\,\mathrm{eV}\)[62].
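The value quoted in Eq. (9) follows directly from the blackbody formula; a quick numerical check (ours, not from the paper) is:

```python
# Quick check of Eq. (9): blackbody energy density of the CMB in eV/cm^3.
import numpy as np

kT = 2.35e-4          # eV, k_B times T_CMB = 2.725 K
hbar_c = 1.973e-5     # eV cm

rho_cmb = np.pi**2 / 15.0 * kT**4 / hbar_c**3
print(rho_cmb)        # ~0.26 eV/cm^3
```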
We determine the energy density from stars and dust from the measured luminosity distribution of M31. The energy density of stellar radiation \(\rho_{*}\) in M31 is related to the bolometric luminosity density \(Q_{*}\) by
\[\rho_{*}(\mathbf{x})=\frac{1}{4\pi c}\int d^{3}x^{\prime}\frac{Q_{*}(\mathbf{x}^{ \prime})}{|\mathbf{x}-\mathbf{x}^{\prime}|^{2}}. \tag{10}\]
In Ref. [63], the extinction-corrected stellar luminosity distribution of M31 was modeled for five different structural components (bulge, disk, nucleus, young disk and
\begin{table}
\begin{tabular}{|c||c|} \hline \hline \multicolumn{2}{c|}{Magnetic Field} \\ \hline \hline \(B_{0}\,(\mu\mathrm{G})\) & \(11.2\pm 2.9\) \\ \(B_{1}\,(\mu\mathrm{G})\) & \(7.2\pm 1.9\) \\ \(R_{B,0}\,(\mathrm{kpc})\) & \(3.5\pm 2.6\) \\ \(R_{B,1}\,(\mathrm{kpc})\) & \(77.6\pm 21\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameter values of Eq. (7), fit to the data at \(|z|=0\) (shown in Figure 3).
Figure 3: RMS Magnetic field strength in the disk of M31, as measured by Refs. [57, 58] (red). Our double-exponential fit Eq. (7) (with the parameters of Table 1) is shown in blue.
stellar halo). As these distributions are extinction corrected, they describe the luminosities as if there was no dust. Assuming that the dust is in equilibrium with the starlight, the extinction-corrected stellar luminosity distribution integrated over all wavelengths approximates the bolometric luminosity from stars and dust combined.
Ref. [63] models the luminosity density \(Q_{j}\) as
\[Q_{j}(\mathbf{x})=Q_{0,j}\exp\left[-\left(\frac{\sqrt{R^{2}+(z/q_{j})^{2}}}{A_{0,j}}\right)^{1/N_{j}}\right], \tag{11}\]
where \(j=\) (bulge, disk, nucleus, young disk, stellar halo) indexes the various components of the luminosity distribution and \((Q_{0,j},q_{j},A_{0,j},N_{j})\) are parameters determined separately for each component. As Ref. [63] finds that the M31 luminosity is dominated by the bulge and disk components, we only include those in our model of the ISRF.
Ref. [63] fits the parameters \(q_{j},A_{0,j},N_{j}\) to data, and provides the extinction corrected total luminosity of each M31 structural component in each of the \(ugriz\) filter bands. These luminosities are defined as
\[L_{a,j}\equiv\left.\left(\lambda\frac{dL_{j}}{d\lambda}\right)\right|_{\lambda _{a}}, \tag{12}\]
where \(a=(u,g,r,i,z)\) represents the spectroscopic band of the measurement, and \(\lambda_{a}\) is the central wavelength of the relevant band.
The \(Q_{0,j}\) depend on the total bolometric luminosity of each component \(L_{\rm bol,j}\):
\[L_{\rm bol,j}=\int d^{3}x^{\prime}Q_{j}(\mathbf{x}^{\prime}). \tag{13}\]
The bolometric luminosity of the \(j\) component can be found by integrating
\[L_{\rm bol,j}=\int d\lambda\frac{dL_{j}}{d\lambda} \tag{14}\]
Therefore, we need the luminosity per unit wavelength - or spectral energy distribution (SED) - of each component.
As M31's bulge is concentrated in the inner \(1-2\,\)kpc [63], we fit the bulge's SED to the emission from the inner 1 kpc of M31 [64]. We digitize and smoothly interpolate the extinction corrected SED of the inner 1 kpc of M31 from Figure 1 of Ref. [64], and fit the proportionality constant between the inner kpc and the entire bulge by minimizing the \(\chi^{2}\) between the re-normalized SED and the bulge luminosity in each band. Our best fit rescaled SED and the measured luminosities in the \(ugriz\) bands for the bulge are shown in Figure 4(a). Using the best fit rescaled SED, we then integrate \(dL_{\rm bulge}/d\lambda\) over wavelengths \(\lambda\) to obtain the bolometric luminosity of the bulge.
For the functional form of the SED for the disk, we subtract our best-fit SED for the bulge from the extinction corrected SED for the whole galaxy (given in Ref. [65]). Using this functional form, we repeat the procedure that we used for the bulge: we perform a \(\chi^{2}\) minimization to find the proportionality constant that gives the best agreement between the rescaled SED and the luminosity values for the disk, and integrate \(dL_{\rm disk}/d\lambda\) over \(\lambda\) to get the bolometric luminosity of the disk. Our best fit rescaled SED and the measured luminosities in the \(ugriz\) bands for the disk are shown in Figure 4(b).
We use our bolometric luminosity values and Eq. (13) to calculate \(Q_{0,j}\) for each component. We show the values of the structural parameters \((Q_{0,j},q_{j},A_{0,j},N_{j})\) and luminosities \((L_{u,j},L_{g,j},L_{r,j},L_{i,j},L_{z,j},L_{\rm bol,j})\) in Table 2 (for \(j=\rm bulge,disk\)). We add our results for the luminosity density of the disk and bulge to get \(Q_{*}(\mathbf{x})\) and numerically integrate Eq. (10) to obtain \(\rho_{*}\). As we derived our luminosity density distributions from extinction corrected luminosities [63] and SEDs [64; 65], our result for \(\rho_{*}\) contains contributions from starlight and dust. Our model for the ISRF \(\rho_{\rm tot}=\rho_{*}+\rho_{\rm CMB}\) as well as the radiation density from individual components in the plane of the disk are given in Figure 5.
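The integral in Eq. (10) must be evaluated numerically; the sketch below illustrates one naive way to do so (direct summation over a coarse cylindrical grid using the Table 2 parameters). It is not the implementation used in the analysis, and its accuracy is illustrative only.

```python
# Sketch (illustrative only): direct quadrature of Eq. (10) for the stellar/dust
# radiation density, using the bulge and disk parameters of Table 2 and the
# (sign-corrected) profile of Eq. (11). Axisymmetry of Q_* is assumed.
import numpy as np

L_SUN_EV_S = 2.389e45      # 1 L_sun in eV/s
KPC_CM = 3.086e21          # 1 kpc in cm
C_CM_S = 2.998e10          # speed of light in cm/s

# (Q0 [1e10 L_sun/kpc^3], A0 [kpc], q, N) from Table 2
COMPONENTS = {
    'bulge': (1.4e2,  4.6e-3, 0.72, 2.7),
    'disk':  (1.9e-2, 2.6,    0.17, 1.2),
}

def Q_star(R, z):
    """Bolometric luminosity density, Eq. (11), in units of 1e10 L_sun / kpc^3."""
    total = 0.0
    for Q0, A0, q, N in COMPONENTS.values():
        a = np.sqrt(R**2 + (z / q)**2)
        total += Q0 * np.exp(-(a / A0)**(1.0 / N))
    return total

def rho_star(R0, z0, R_max=30.0, z_max=5.0, nR=120, nz=60, nphi=60):
    """Eq. (10) by direct summation over a cylindrical grid; returns eV/cm^3."""
    R = np.linspace(1e-3, R_max, nR)
    z = np.linspace(-z_max, z_max, nz)
    phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
    dV = (R[1] - R[0]) * (z[1] - z[0]) * (phi[1] - phi[0])   # dR dz dphi
    Rg, zg, pg = np.meshgrid(R, z, phi, indexing='ij')
    # field point placed at (x, y, z) = (R0, 0, z0)
    dist2 = (Rg * np.cos(pg) - R0)**2 + (Rg * np.sin(pg))**2 + (zg - z0)**2
    integrand = Q_star(Rg, zg) * Rg / np.maximum(dist2, 1e-4)  # soften the singularity
    integral = integrand.sum() * dV                            # [1e10 L_sun / kpc^2]
    # convert (1e10 L_sun / kpc^2) / (4 pi c) to eV/cm^3
    return integral * 1e10 * L_SUN_EV_S / (4.0 * np.pi * C_CM_S * KPC_CM**2)

print(rho_star(8.0, 0.0))   # radiation density in the disk at R = 8 kpc (eV/cm^3)
```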
It is important to note that our model for the starlight luminosity of the innermost regions of M31 is significantly larger than the equivalent for the Milky Way [66]. Previous dark matter studies of radio emission from the center of M31 used a starlight model scaled from the Milky Way [19; 35; 37; 38]. The higher luminosity in the center of the galaxy that we find in our model leads to greater energy losses into X-rays from \(e^{\pm}\) inverse Compton scattering with the starlight photons. As a consequence of this increased energy-loss mechanism, the radio signature of dark matter-produced electrons and positrons in the galactic center is reduced.
At distances further from the center (\(\gtrsim 1\;\)kpc), our starlight model more closely matches those previously assumed for M31. As we will describe in detail, our dark matter constraints are obtained from radio emission in this region rather than from the center itself. Consequently, we are less sensitive to differences in the starlight in the core of M31.
### Gas in M31
The thermal gas of M31 can be split into ionized gas and neutral gas, each of which plays a different role in the energy loss of relativistic electrons and positrons. Elastic collisions between the ionized gas and the \(e^{\pm}\) result in Coulomb losses in the \(e^{\pm}\). This leads to a net transfer of energy out of the \(e^{\pm}\) and into the gas. Interactions between \(e^{\pm}\) and ionized gas also result in bremsstrahlung losses due to inelastic collisions. Neutral gas only causes bremsstrahlung losses. As the rate of energy loss depends on the properties of the ionized and neutral gas, we must model HI, H\({}_{2}\), and \({}^{4}\)He gas separately.
#### Ionized Gas
Due to the difficulty of observing M31 as compared to our own Galaxy, the gas model of M31 is motivated by that of the Milky Way. Following Ref. [67], we model the ionized gas density as
\[\langle n_{\rm ion}\rangle=\bar{n}_{\rm ion}(R,z)=\bar{n}_{\rm ion,0}\,{\rm sech }^{2}\left(R/R_{\rm ion}\right){\rm sech}^{2}\left(z/z_{\rm ion}\right) \tag{15}\]
where \(\langle n_{\rm ion}\rangle\) is the ion density averaged over scales that are small compared to the galaxy but large compared to fluctuations in the ion density. We fit this model to M31 measurements of the ion density from H\(\alpha\) emission [68] and Faraday rotation [58].
Figure 4: Best-fit rescaled SED models [64; 65] and observed differential luminosities in \(ugriz\) filters [63] for (a) the bulge and (b) the disk.
\begin{table}
\begin{tabular}{|c||c|c|} \hline \hline \multicolumn{3}{|c|}{Radiation Field} \\ \hline \hline Parameter & Bulge Value & Disk Value \\ \hline \multicolumn{3}{|c|}{Structural Parameters} \\ \hline \(Q_{0}\left(10^{10}\,L_{\odot}/{\rm kpc}^{3}\right)\) & \(1.4\times 10^{2}\) & \(1.9\times 10^{-2}\) \\ \(A_{0}\) (kpc) & \(4.6\times 10^{-3}\) & 2.6 \\ \(q\) & 0.72 & 0.17 \\ \(N\) & 2.7 & 1.2 \\ \hline \multicolumn{3}{|c|}{Observed Luminosities} \\ \hline \(L_{u}\left(10^{10}\,L_{\odot}\right)\) & 0.34 & 0.78 \\ \(L_{g}\left(10^{10}\,L_{\odot}\right)\) & 0.57 & 1.1 \\ \(L_{r}\left(10^{10}\,L_{\odot}\right)\) & 0.75 & 1.4 \\ \(L_{i}\left(10^{10}\,L_{\odot}\right)\) & 1.0 & 1.9 \\ \(L_{z}\left(10^{10}\,L_{\odot}\right)\) & 1.3 & 2.0 \\ \(L_{\rm bol}\left(10^{10}\,L_{\odot}\right)\) & 2.0 & 3.0 \\ \hline \end{tabular}
\end{table}
Table 2: Top: best-fit parameters to the extinction-corrected luminosity distribution Eq. (11). Bottom: observed extinction-corrected luminosities in \(ugriz\) filter bands followed by our derived bolometric luminosities. The bulge values are in the second column while the disk values are in the third column. All values except for \(L_{\rm bol}\) and \(Q_{0}\) are taken from Ref. [63]; see text for details of our calculations of \(L_{\rm bol}\) and \(Q_{0}\).
Figure 5: The ISRF radiation density for M31 along the disk (\(z=0\)). The CMB result is given in Eq. (9). The bulge and disk components come from replacing \(Q_{*}\) with \(Q_{\rm bulge}\) and \(Q_{\rm disk}\), respectively in Eq. (10).
We first extract the ion density on the mid-plane of M31 at \(R\simeq 9\,\)kpc from measurements of H\(\alpha\) emission [68]. Here, the observable is the emission measure (\(\mathcal{E}\)) along the line of sight, which is related to the ion density by
\[\mathcal{E}=\int d\ell\,\langle n_{\rm ion}^{2}\rangle, \tag{16}\]
To obtain \(\bar{n}_{\rm ion}\) on the disk from the measured \(\mathcal{E}\), we assume the functional form for \(n_{\rm ion}\) from Ref. [68]:4
Footnote 4: We use \(z_{\rm ion}^{\prime}\) as the scale height from the model of Ref. [68], not to be confused with \(z_{\rm ion}\) from our model for the ionized gas, Eq. (15).
\[n_{\rm ion}=\mathcal{F}(\mathbf{x})n_{\rm ion,0}(R)e^{-|z|/z_{\rm ion}^{\prime}}, \tag{17}\]
where \(\mathcal{F}(\mathbf{x})\) is a filling function which varies on length scales much smaller than can be observed. \(\mathcal{F}(\mathbf{x})\) is 1 if there is gas at \(\mathbf{x}\), and zero otherwise. The filling function defines a fill factor5\(\phi\)
Footnote 5: In Ref. [68], the fill factor depends on \(z\). If \(\phi\) is slowly changing over the galaxy, then its value at a particular point can be defined using Eq. (18) where the average should be understood to be over length scales which are small compared to the size of the galaxy.
\[\phi=\langle\mathcal{F}\rangle=\frac{1}{V}\int_{V}d^{3}x\mathcal{F}(\mathbf{x})= \frac{1}{L}\int\limits_{0}^{L}d\ell\mathcal{F}(\mathbf{x}). \tag{18}\]
Ref. [68] measures the value of \(\mathcal{E}\) along a line of sight at \(R=9\,\)kpc and converts it to the value it would have had if M31 were viewed face-on. Assuming a constant fill factor, we integrate Eq. (16), to obtain a relation between the face-on emission measure and the mid-plane density
\[\bar{n}_{\rm ion}^{\rm obs}(R,0)=\sqrt{\frac{\mathcal{E}\phi}{z_{\rm ion}^{ \prime}}}. \tag{19}\]
We use the median values from Ref. [68] of \(\mathcal{E}=(10\pm 5)\,\)pc/cm\({}^{6}\) (corrected for the angle of inclination) and \(z_{\rm ion}^{\prime}=(5\pm 3)\times 10^{2}\,\)pc, giving \(\bar{n}_{\rm ion}^{\rm obs}=0.063\pm 0.039\,\)cm\({}^{-3}\) at \((R,z)=(9,0)\) kpc.
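As a numerical cross-check of Eq. (19) (the fill factor \(\phi\) from Ref. [68] is not quoted above, so in this sketch it is inferred by inverting the relation):

```python
# Quick check of Eq. (19); phi is not quoted in the text, so it is recovered
# here by inversion from the quoted mid-plane density (illustrative only).
import numpy as np

EM = 10.0        # pc / cm^6, face-on emission measure at R = 9 kpc
z_ion = 500.0    # pc, scale height of the ionized layer
n_obs = 0.063    # cm^-3, quoted mid-plane ion density

# Eq. (19): n = sqrt(EM * phi / z_ion)  =>  phi = n^2 z_ion / EM
phi = n_obs**2 * z_ion / EM
print(phi)                            # ~0.2
print(np.sqrt(EM * phi / z_ion))      # recovers 0.063 cm^-3
```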
By observing rotation measures (RM) from Faraday rotation and assuming magnetic field equipartition, Ref. [58] determines \(\bar{n}_{\rm ion}\) in the upper layers of the thermal disk (between \(0.3-1\,\)kpc from the mid-plane) at three different values of \(R\) between \(8-14\,\)kpc. We take the distance of these measurements from the midplane to be the midpoint of the upper layers of the thermal disk (\(z=0.65\,\)kpc). Errors were not reported for these results; we make the conservative choice to use errors of 50% of the measured value.
where \(h_{\rm HI,0}\) and \(S\) are reported in Table 4. We normalize the HI density to the total HI mass reported in Table 1 of [69]. The resulting gas density on the disk is shown in Figure 6(a) (see the vertical axis on the left).
#### H\({}_{2}\) Gas
For the H\({}_{2}\) density, we digitize the radial column density distribution provided by Ref. [69] in the range \(R\in[2.5,18]\) kpc. We then repeat our interpolation/extrapolation procedure to obtain a model of the column density of H\({}_{2}\) on the disk. The results are shown in Figure 6(b). We again assume that the density of H\({}_{2}\) on the disk is proportional to the column density.
We assume a decaying exponential for the \(z\)-dependence; however, there is little observational data available for the scale height of H\({}_{2}\) in M31. We therefore use the H\({}_{2}\) scale height in the Milky Way (derived from the data in Ref. [70]), and re-scale to M31 using a comparison of the HI scale heights in the Milky Way and M31. We digitize the fit to the H\({}_{2}\) scale height data in Figure 10 of Ref. [70], and smooth out the fluctuations by fitting the digitized version of the fit to the functional form
\[h^{\rm MW}_{\rm H_{2}}=h^{\rm MW}_{{\rm H}_{2},0}e^{-R/R_{\rm H_{2}}}. \tag{21}\]
We compare our model for the scale height in the Milky Way to that of Ref. [70] in Figure 7.
To obtain a scale height for H\({}_{2}\) gas in M31 from the scale height of H\({}_{2}\) in the Milky Way, we take the ratio of the average HI scale height for M31 (Eq. (20)) to that of the Milky Way (see Figure 6 of Ref. [70]) in the region \(R\in[0,7]\) kpc. We find this ratio is 1.55, and assume this ratio holds for the H\({}_{2}\) gas:
\[h_{\rm H_{2}}=1.55\times h^{\rm MW}_{\rm H_{2}}=h_{{\rm H}_{2},0}e^{-R/R_{\rm H _{2}}}. \tag{22}\]
The resulting values of \(h_{{\rm H}_{2},0}\) and \(R_{\rm H_{2}}\) are given in Table 4.
Figure 6: Number density (left axis) and surface density (right axis) of (a) HI and (b) H\({}_{2}\) gas in the plane of the disk. The digitized and interpolated distributions from Ref. [69] are within the two vertical red lines. Outside these regions, we fit exponential extrapolations, matching the function values and first derivatives at the boundaries.
Figure 7: Fit to the Milky Way H\({}_{2}\) scale height [70] (red curve). The parameterized fit of Eq. (21) is shown in blue.
The errors in these two parameters are dominated by the systematic errors of converting from their values in the Milky Way, so we conservatively set their errors to 50% of their values.
Again, we normalize our distribution for the H\({}_{2}\) density based on the measured value for the total H\({}_{2}\) gas mass reported in Table 1 of Ref. [69]. The resulting H\({}_{2}\) density on the disk is shown in Figure 6(b).
#### \({}^{4}\)He Gas
The last significant component of neutral gas that we have to model is \({}^{4}\)He. We assume \({}^{4}\)He production from stars is negligible, so the ratio of \({}^{4}\)He to H is set by Big Bang Nucleosynthesis: \(N_{\rm He}\simeq N_{H}/12\). The errors in our propagation model introduced by this assumption are subdominant to the other uncertainties in our model of the ISM.
Further, we assume that the \({}^{4}\)He density has the same morphology as the total hydrogen density. The local density for \({}^{4}\)He is then derived from the HI and H\({}_{2}\) gas:
\[n_{\rm He}=\left(\frac{1}{12}\right)\times n_{H}=\left(\frac{1}{12}\right) \times(n_{\rm HI}+2n_{\rm H_{2}}). \tag{23}\]
## V Propagation of \(e^{\pm}\) in M31
The production of electrons and positrons by dark matter annihilation provides a source \(Q_{e}\) for the phase space-density \(f_{e}\) within a galaxy. From their initial locations, the \(e^{\pm}\) will diffuse in turbulent magnetic fields and undergo energy loss from synchrotron radiation as well as inverse Compton, bremsstrahlung and Coulomb scattering. The synchrotron losses lead to the radio signal we will use to constrain dark matter annihilation, but all forms of energy loss must be tracked to determine the evolution of \(f_{e}\) in energy and position.
The evolution of the phase space density \(f_{e}\) is controlled by the diffusion-loss equation:
\[\frac{\partial f_{e}}{\partial t}=\partial_{i}[{\cal D}_{ij}(\mathbf{x},E)\partial _{j}f_{e}]+\frac{\partial}{\partial E}[b(\mathbf{x},E)f_{e}]+Q_{e}(\mathbf{x},E). \tag{24}\]
where \(f_{e}(\mathbf{x},E)=dn_{e}/dE\) is the phase space density of electrons at position \(\mathbf{x}\) and energy \(E\), \({\cal D}_{ij}(\mathbf{x},E)\) is the diffusion matrix, and \(b(\mathbf{x},E)\) is the energy loss parameter. The position-dependent diffusion coefficient depends on both the RMS magnetic field, \(\bar{B}\) and the turbulent fluctuations of the field at small scales. The loss parameter depends on \(\bar{B}^{2}\), the ISRF, and the densities of the various gas components, which have been modeled in Section IV.
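To illustrate the structure of such a solver, the sketch below relaxes the spherically averaged, time-dependent version of Eq. (24) to a steady state with explicit finite differences. It is our own simplified illustration rather than the numerical method developed for this work, and the coefficient functions \(D\), \(b\), and \(Q\) are placeholders with only roughly representative magnitudes.

```python
# Simplified sketch (not the authors' solver): relax the spherically averaged
# diffusion-loss equation, Eq. (24), to steady state on an (r, E) grid.
# D, b and Q are placeholders; free-escape boundary at r_max, f = 0 at E = m_chi.
import numpy as np

KPC_CM = 3.086e21
m_chi = 40.0                                  # GeV

r = np.linspace(0.05, 30.0, 150) * KPC_CM     # radius grid in cm
E = np.linspace(0.1, m_chi, 80)               # energy grid in GeV
dr, dE = r[1] - r[0], E[1] - E[0]

def D(r_, E_):   # diffusion coefficient, cm^2/s (placeholder, cf. Eq. (26))
    return 3e28 * (E_ / 1.0) ** (1.0 / 3.0) * np.ones_like(r_)

def b(r_, E_):   # energy-loss rate, GeV/s (placeholder ~ synchrotron + IC)
    return 1e-16 * E_ ** 2 * np.ones_like(r_)

def Q(r_, E_):   # injection rate, e± / (cm^3 s GeV) (placeholder ~ rho_NFW^2)
    return 1e-28 * (1.0 + r_ / (10.0 * KPC_CM)) ** -4 * (m_chi - E_) / m_chi

R, EE = np.meshgrid(r, E, indexing='ij')      # f has shape (n_r, n_E)
f = np.zeros_like(R)
# explicit scheme: dt limited by the diffusion and energy-loss time steps
dt = 0.25 * min(dr**2 / D(r, E.max()).max(), dE / b(r, E.max()).max())

for _ in range(20000):                        # crude relaxation loop
    # diffusion term: (1/r^2) d/dr [ r^2 D df/dr ], fluxes at cell interfaces
    Dm = 0.5 * (D(R, EE)[1:, :] + D(R, EE)[:-1, :])
    rm = 0.5 * (R[1:, :] + R[:-1, :])
    flux = rm**2 * Dm * (f[1:, :] - f[:-1, :]) / dr
    diff = np.zeros_like(f)
    diff[1:-1, :] = (flux[1:, :] - flux[:-1, :]) / dr / R[1:-1, :] ** 2
    # energy-loss term: d/dE [ b f ], upwinded from higher E (losses move e± down)
    bf = b(R, EE) * f
    loss = np.zeros_like(f)
    loss[:, :-1] = (bf[:, 1:] - bf[:, :-1]) / dE
    f = f + dt * (diff + loss + Q(R, EE))
    f[-1, :] = 0.0                            # free escape at r_max
    f[:, -1] = 0.0                            # no e± injected above E = m_chi

print(f.max())                                # steady-state density, cm^-3 GeV^-1
```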
### Diffusion Matrix
Fluctuations in the magnetic field cause the relativistic \(e^{\pm}\) to exhibit diffusive motion. In a uniform magnetic field with strength \(B\), the motion of \(e^{\pm}\) is helical, with a Larmor radius of
\[r_{L}=\frac{E\sin\alpha}{eB}=(1.1\times 10^{-7}\,{\rm pc})\left(\frac{E}{1\, {\rm GeV}}\right)\left(\frac{10\,\mu{\rm G}}{B}\right). \tag{25}\]
Here, \(E\) is the particle energy and \(\alpha\) is the pitch angle, which is defined as the angle of the velocity with respect to the direction of the magnetic field. For a changing magnetic field, the Larmor radius can still be defined, provided the field variations are small over the distance traversed by the particle in the time it takes for its phase to change by \({\cal O}(1)\). For the \({\cal O}(10\,\,\mu{\rm G})\) magnetic fields in the disk of M31 (see Section IV.1), electrons and positrons of the energies expected from the annihilation of dark matter have Larmor radii of \(\lesssim{\cal O}(10^{-7}\,\,{\rm pc})\). As the fluctuations of the field follow a power law distribution on length-scales smaller than \(1/k_{0}\sim{\cal O}(1\,{\rm kpc})\), the magnetic field is dominated by Fourier modes that are much larger than the Larmor radius. Modes that are of order the Larmor radius and smaller can be treated perturbatively.
The motion of an electron or positron under the influence of the large-scale magnetic field fluctuations is well-described by the adiabatic approximation: the particle exhibits helical motion about the local field with an axis that gradually changes as the particle moves along the slowly changing magnetic field [71]. The magnetic field fluctuations with length-scales below the Larmor radius perturb this adiabatic motion, causing pitch angle scattering which leads to diffusion along the axis of the local magnetic field [72]. Since the direction of the large-scale magnetic field slowly changes over space and time, the diffusion is shared evenly in all directions leading to isotropic diffusion [72].
With these approximations and assuming magnetic field fluctuations characterized by the Kolmogorov spectrum, introduced in Section IV.1, the diffusion matrix is [51; 73]
\[{\cal D}_{ij}\simeq\left(1.5\times 10^{28}\,{\rm cm}^{2}/{\rm s}\right)\delta_{ij} \left(\frac{d_{0}}{1\,{\rm kpc}}\right)^{2/3}\left(\frac{10\,\mu{\rm G}}{\bar {B}}\right)^{1/3}\left(\frac{E}{1\,{\rm GeV}}\right)^{1/3}\equiv D_{0}\delta_ {ij}\left(\frac{10\,\mu{\rm G}}{\bar{B}}\right)^{1/3}\left(\frac{E}{1\,{\rm GeV }}\right)^{1/3}, \tag{26}\]
where \(d_{0}=1/k_{0}\) is the largest length-scale over which the Kolmogorov spectrum of magnetic field fluctuations is
valid. Since the diffusion matrix is isotropic, it can be written as
\[\mathcal{D}_{ij}=\delta_{ij}D. \tag{27}\]
It is conventional to refer to \(D\) as the diffusion coefficient. We absorb the uncertainties in the prefactor of Eq. (26) and \(d_{0}\) into the constant \(D_{0}\). A range of possible values for \(D_{0}\) (in both the Milky Way and M31) has been suggested in the literature. We review these briefly here.
Ref. [74] infers the diffusion coefficient for \(e^{\pm}\) near star-forming regions in M31 from measurements of non-thermal radio emission at \(\nu=1.4\,\)GHz and one higher frequency using two methods. The first method infers the diffusion coefficient from the difference in morphology between the two frequencies. The second method uses the difference between non-thermal emission and thermal emission at each of the frequencies, assuming that the thermal emission has a similar morphology to the source distribution of cosmic ray electrons. These methods allow Ref. [74] to extract the diffusion coefficient at two electron and positron energies: \(4.1\) and \(7.5\,\)GeV.
Rescaling Ref. [74]'s results to the magnetic field parameters of M31, we obtain \(D_{0}\simeq 1.1\times 10^{28}\,\)cm\({}^{2}\)/s using the first method and \(D_{0}\simeq 3.5\times 10^{27}\,\)cm\({}^{2}\)/s using the second. Neither method fully models the propagation of cosmic rays, and the variation between the two results makes it difficult to identify either value as our default \(D_{0}\) value.
For further guidance about the value of \(D_{0}\) in M31, we review studies of propagation in the Milky Way. The galprop cosmic ray propagation model [52] uses observations of cosmic rays in the Milky Way [75, 76] to determine its best-fit diffusion coefficients [77]. The assumed propagation model includes a uniform diffusion coefficient of the form \(D=\tilde{D_{0}}\left(E/4\,\text{GeV}\right)^{\delta}\) for which the best-fit parameters were found to be \(\tilde{D}_{0}=(8.3\pm 1.5)\times 10^{28}\,\)cm\({}^{2}\)/s and \(\delta=0.31\pm 0.02\)[77]. Comparing the model of Ref. [77] to our form for the diffusion coefficient, Eq. (26), and assuming that the cosmic rays studied were subject to a constant \(10\,\mu\)G magnetic field, these measurements imply \(D_{0}=(5.2\pm 0.9)\times 10^{28}\,\)cm\({}^{2}\)/s. Ref. [78] constructed MIN, MED and MAX propagation models for \(1\,\)GeV cosmic ray energies in the Milky Way, which can be interpreted as a range of \(D_{0}=\left[5.9\times 10^{27}-2.0\times 10^{28}\right]\,\)cm\({}^{2}\)/s, assuming the relevant magnetic field is a constant \(10\,\mu\)G.
Underestimating the diffusion coefficient would underpredict how far particles will move before emitting most of their energy, leading to an over-prediction of the signal in regions of high dark matter density. A large value of \(D_{0}\) will likewise result in a smaller signal flux for a given cross section. Therefore, to set conservative limits on the cross section of dark matter annihilation, we must avoid assuming too small a value for \(D_{0}\). To set our conservative upper limit on \(D_{0}\), we select a maximum value of \(d_{0}\equiv 1/k_{0}\) in Eq. (26) by setting \(k_{0}\) equal to the wavenumber of a fluctuation with wavelength of \(R_{B,1}=77.6\,\)kpc (the longest scale-length in our magnetic field model). This leads to \(d_{0}\lesssim R_{B,1}/(2\pi)\simeq 12.5\,\)kpc, implying \(D_{0}\lesssim 8\times 10^{28}\,\)cm\({}^{2}\)/s.
Given the variation in diffusion coefficients in the Milky Way and M31, as well as our conservative upper bound, we consider \(D_{0}\) in the range
\[3\times 10^{27}\,\text{cm}^{2}/\text{s}\leq D_{0}\leq 8\times 10^{28}\,\text{cm}^{2}/\text{s}. \tag{28}\]
We select as a default value \(D_{0}=1\times 10^{28}\,\text{cm}^{2}/\text{s}\).
Though \(D_{0}\) is position-independent, the diffusion coefficient \(D\) explicitly depends on the magnetic field, and our model for the magnetic field is position dependent (see Section II). As a result, the diffusion coefficient also depends on location within M31. We show the dependence on location within M31 in Figure 8 for our default diffusion coefficient normalization (\(D_{0}=1\times 10^{28}\,\text{cm}^{2}/\text{s}\)) and \(E=1\,\)GeV. The diffusion coefficient varies more rapidly with \(z\) when \(R\) is small, as a result of the magnetic field scale height increasing with \(R\).
Prior studies of radio emission from dark matter annihilation in M31 make the approximation that the diffusion coefficient is zero [35, 36, 37] or position independent [19, 38]. This latter assumption is sufficient when the region of interest is small and the diffusion coefficient is nearly constant over the region. Over the length scales of interest, the variations in the diffusion coefficient must be taken into account, and we develop a numerical method for calculating the evolution of phase space density of the charged particles which can accommodate a position-dependent diffusion coefficient. We describe our numerical solution in Section V.3, after introducing the energy-loss terms that enter into Eq. (24) in the next subsection. To our knowledge this is the first study that uses a position-dependent diffusion coefficient to set limits on dark matter annihilation in M31 via radio emission.
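As an illustration of how the position-dependent diffusion coefficient enters the numerics, the following minimal Python sketch evaluates Eq. (26); the helper `B_rms` is a purely illustrative stand-in for the exponential-disk field model of Section IV.1, not the fitted model itself.

```python
import numpy as np

D0 = 1.0e28  # default normalization in cm^2/s (cf. Eq. (28))

def B_rms(R, z):
    """Illustrative stand-in for the RMS field of Section IV.1, in muG."""
    return 10.0 * np.exp(-R / 20.0 - np.abs(z) / 3.0)

def diffusion_coefficient(R, z, E_GeV, D0=D0):
    """Isotropic diffusion coefficient D(x, E) of Eq. (26), in cm^2/s.

    R, z are cylindrical coordinates in kpc and E_GeV is the e+- energy in GeV.
    """
    return D0 * (10.0 / B_rms(R, z)) ** (1.0 / 3.0) * E_GeV ** (1.0 / 3.0)

# Example: D at R = 5 kpc, z = 1 kpc for a 1 GeV electron
print(diffusion_coefficient(5.0, 1.0, 1.0))
```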
### Energy Loss due to Radiative Processes
As the electrons and positrons diffuse through the ISM they lose energy through radiative processes. This energy loss is encoded in Eq. (24) by the loss parameter \(b\). In M31, the relevant losses are inverse Compton (IC), synchrotron, bremsstrahlung, and Coulomb interactions:
\[\begin{split}-\frac{dE}{dt}\equiv&\,b(\mathbf{x},E)=b_{ \rm IC}(\mathbf{x},E)+b_{\rm sync}(\mathbf{x},E)+\\ &\,b_{\rm brem}(\mathbf{x},E)+b_{\rm C}(\mathbf{x},E).\end{split} \tag{29}\]
We treat each of these terms in turn.
The inverse Compton scattering between \(e^{\pm}\) and the ambient starlight, rescattered light from dust, and CMB emission will transfer energy from the charged particles into the photons, with a loss parameter [71]
\[b_{\rm IC}=b_{\rm IC}^{(0)}\left(\frac{\rho_{\gamma}(\mathbf{x})}{10\,\rm eV} \right)\left(\frac{E}{1\,\rm GeV}\right)^{2}, \tag{30}\]
where \(b_{\rm IC}^{(0)}=1.0\times 10^{-15}\,\rm GeV/s\) and \(\rho_{\gamma}\) is the total radiation energy density, derived in Section IV.2.
Synchrotron emission occurs due to the acceleration of charged particles in galactic magnetic fields. As described in Section IV.1, the magnetic fields of M31 do not change appreciably over the Larmor radius of the relevant \(e^{\pm}\). Additionally, due to pitch angle scattering, the pitch angles are approximately uniformly occupied. Therefore, the energy loss due to synchrotron emission can be determined by assuming a locally constant magnetic field and averaging the energy loss over all pitch angles. The expression for the loss due to synchrotron is given by [71]
\[b_{\rm sync}=b_{\rm sync}^{(0)}\left(\frac{\bar{B}(R,z)}{10\,\rm\mu G}\right) ^{2}\left(\frac{E}{1\,\rm GeV}\right)^{2} \tag{31}\]
where \(b_{\rm sync}^{(0)}=2.5\times 10^{-16}\,\rm GeV/s\).
The third term in Eq. (29) is the contribution to the loss from bremsstrahlung emission due to \(e^{\pm}\) scattering with neutral hydrogen, neutral helium, and ionized gas:
\[b_{\rm brem}=b_{\rm H}(\mathbf{x},E)+b_{\rm He}(\mathbf{x},E)+b_{\rm ion}(\mathbf{x},E). \tag{32}\]
The expressions for these three components of the bremsstrahlung loss are given by [66; 79]
\[b_{\rm H} =b_{\rm H}^{(0)}\left(\frac{n_{\rm H}(\mathbf{x})}{1\,\rm cm^{-3}} \right)\left(\frac{E}{1\,\rm GeV}\right),\] \[b_{\rm He} =b_{\rm He}^{(0)}\left(\frac{n_{\rm He}(\mathbf{x})}{1\,\rm cm^{-3}} \right)\left(\frac{E}{1\,\rm GeV}\right), \tag{33}\] \[b_{\rm ion} =b_{\rm ion}^{(0)}\left(\frac{n_{\rm ion}(\mathbf{x})}{1\,\rm cm^{-3} }\right)\left(\frac{E}{1\,\rm GeV}\right)\left[1+\frac{1}{7.94}\ln\left(\frac {E}{1\,\rm GeV}\right)\right],\]
where \(b_{\rm H}^{(0)}=1.22\times 10^{-16}\,\rm GeV/s\), \(b_{\rm He}^{(0)}=3.61\times 10^{-16}\,\rm GeV/s\) and \(b_{\rm ion}^{(0)}=1.74\times 10^{-16}\,\rm GeV/s\). Our models for the density of each gas component were presented in Section IV.3.
Lastly, the fourth contribution to the loss parameter is from Coulomb interactions with ionized gas and is given by [80; 81]
\[b_{\rm C}=b_{\rm C}^{(0)}\left(\frac{n_{\rm ion}(\mathbf{x})}{1\,\rm cm^{-3}} \right)\left[1+\frac{1}{82}\ln\left(\frac{E}{1\,\rm GeV}\frac{1\,\rm cm^{-3}} {n_{\rm ion}}\right)\right], \tag{34}\]
where \(b_{\rm C}^{(0)}=6.2\times 10^{-16}\,\rm GeV/s\). Note that Coulomb losses are not radiative processes involving the loss of energy from the charged particles into photons, but rather are due to an energy transfer from the relativistic \(e^{\pm}\) to non-relativistic ions in the interstellar plasma.
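Collecting Eqs. (29)-(34), the total loss parameter can be assembled as in the short sketch below; the gas and radiation densities passed in are representative placeholder numbers, not the models of Section IV.

```python
import numpy as np

# Reference loss rates in GeV/s (Eqs. (30)-(34))
b_IC0, b_sync0 = 1.0e-15, 2.5e-16
b_H0, b_He0, b_ion0, b_C0 = 1.22e-16, 3.61e-16, 1.74e-16, 6.2e-16

def loss_coefficient(E, rho_gamma, B, n_H, n_He, n_ion):
    """Total b(x, E) in GeV/s for E in GeV, rho_gamma in the units of Eq. (30),
    B in muG, and gas densities in cm^-3."""
    b_IC = b_IC0 * (rho_gamma / 10.0) * E**2
    b_sync = b_sync0 * (B / 10.0) ** 2 * E**2
    b_brem = (b_H0 * n_H + b_He0 * n_He) * E \
        + b_ion0 * n_ion * E * (1.0 + np.log(E) / 7.94)
    b_C = b_C0 * n_ion * (1.0 + np.log(E / n_ion) / 82.0)
    return b_IC + b_sync + b_brem + b_C

# Representative mid-disk values; n_He = n_H / 12 as in Eq. (23)
print(loss_coefficient(E=1.0, rho_gamma=10.0, B=10.0, n_H=1.0, n_He=1.0 / 12.0, n_ion=0.03))
```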
We show the resulting loss coefficient as function of energy in Figure 9. Figure 9(a) shows the total loss coefficient given by Eq. (29) for various values of \(R\) on the disk. Figure 9(b) shows the total loss coefficient at the origin and the contributions to it from the individual processes discussed in this subsection. Coulomb losses dominate at low energy, inverse Compton and synchrotron losses dominate at high energy, and bremsstrahlung only becomes marginally important at intermediate energies for \(R\simeq 10\) kpc due to the large concentration of interstellar gas in the ring-like structure (discussed in Section VII.2).
### Solving the Diffusion Loss Equation
We now turn to the numerical solution to the diffusion loss equation (Eq. (24)) in M31, assuming the electron and positron injection from dark matter from Section III and the astrophysical model of M31 from Section IV.
To motivate our approach, it is useful to consider the two dynamic time scales which characterize the diffusion (\(\tau_{D}\)) and energy loss (\(\tau_{b}\)) in Eq. (24):
\[\frac{\partial f_{e}}{\partial t}=-\frac{f_{e}}{\tau_{D}}-\frac{f_{e}}{\tau_{ b}}+Q_{e}(\mathbf{x},E). \tag{35}\]
These timescales depend on \(R\), \(z\), \(E\), and on ratios of derivatives of \(f_{e}\) to \(f_{e}\). As a result, \(\tau_{b}\) and \(\tau_{D}\) are independent of the overall magnitude of \(f_{e}\).
The timescale for diffusion is
\[\tau_{D}^{-1}=-\left(\partial_{i}D\right)\frac{\partial_{i}f_{e}}{f_{e}}-D \frac{\nabla^{2}f_{e}}{f_{e}}. \tag{36}\]
In the approximation that \(f_{e}\) only depends on \(r\),
\[\tau_{D}^{-1} \sim\frac{D}{r^{2}}\left[1+z\frac{\partial_{z}D}{D}+R\frac{ \partial_{R}D}{D}\right]\equiv\frac{D}{L(\mathbf{x})^{2}},\] \[\tau_{D} \sim\left(2\times 10^{16}\,\rm s\right)\left(\frac{L(\mathbf{x})}{5\, \rm kpc}\right)^{2}\left(\frac{1\times 10^{28}\,\rm cm^{2}/s}{D}\right), \tag{37}\]
where \(L(\mathbf{x})\) is a length-scale that determines the rate at which diffusion causes the phase space density to change.
The characteristic timescale for energy loss can be approximated by
\[\tau_{b}\simeq\frac{E}{b}=(1\times 10^{16}\,\mathrm{s})\left(\frac{E}{1\, \mathrm{GeV}}\right)\left(\frac{1\times 10^{-16}\,\mathrm{GeV/s}}{b}\right). \tag{38}\]
The propagation is dominated by diffusion when \(\tau_{D}\ll\tau_{b}\) and dominated by energy loss when \(\tau_{D}\gg\tau_{b}\). Both diffusion and loss will dominate at different values of \(E\) and \(\mathbf{x}\). In Figure 10 we plot the inverse timescales for diffusion and loss over a range of \(R\) and \(z\). Along the disk, loss tends to dominate (Figure 10(a)), whereas diffusion becomes the more important term off of the disk (Figure 10(b)).
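For orientation, the two timescales can be checked numerically with representative values of \(L\), \(D\), and \(b\); this is only a back-of-the-envelope sketch of Eqs. (37) and (38).

```python
KPC_CM = 3.086e21  # cm per kpc
T_M31 = 3.0e17     # approximate age of M31 in s

def tau_diffusion(L_kpc, D_cm2_s):
    """Diffusion timescale L^2 / D in seconds (cf. Eq. (37))."""
    return (L_kpc * KPC_CM) ** 2 / D_cm2_s

def tau_loss(E_GeV, b_GeV_s):
    """Energy-loss timescale E / b in seconds (cf. Eq. (38))."""
    return E_GeV / b_GeV_s

# L = 5 kpc, D = 1e28 cm^2/s, E = 1 GeV, b = 1e-16 GeV/s:
print(tau_diffusion(5.0, 1e28) / T_M31, tau_loss(1.0, 1e-16) / T_M31)
```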
In regions of phase space where \(\tau\equiv(\tau_{b}^{-1}+\tau_{D}^{-1})^{-1}\ll T_{\mathrm{M31}}\) (where \(T_{\mathrm{M31}}\simeq 3\times 10^{17}\) s is the approximate age of M31), the phase space density \(f_{e}\) today will be well-approximated by the equilibrium density. In Figure 11, we show \(\tau\) for \(E=0.5\) GeV, \(R\in[0,25]\,\mathrm{kpc}\) and \(z\in[0,15]\,\mathrm{kpc}\) assuming our lowest diffusion coefficient normalization, \(D_{0}=3\times 10^{27}\mathrm{cm}^{2}/\mathrm{s}\). \(E=0.5\,\mathrm{GeV}\) is a lower bound on the range of energies that contribute significantly to \(\nu=8.35\,\mathrm{GHz}\) radio emission in M31. Due to the energy dependence of the diffusion and loss coefficients, \(\tau\) decreases as \(E\) increases for \(E\gtrsim 0.1\,\mathrm{GeV}\). As larger \(D_{0}\) also makes \(\tau\) smaller, the combination of \(D_{0}\) and \(E\) shown in Figure 11 provides an upper bound on \(\tau\).
As can be seen in Figure 11, within \(R<25\) kpc and \(|z|<15\) kpc, we find \(\tau<T_{\mathrm{M31}}\). Near the center of the galaxy and for higher \(e^{\pm}\) energies, \(\tau\) decreases. Though some regions at large \(R\) have timescales comparable to the age of M31, these regions are far from the inner part of the galaxy where the Effelsberg radio data will be used to set limits. We are therefore justified in following the general approach of the literature [19, 35, 36, 37] by approximating the phase space density \(f_{e}\) of \(e^{\pm}\) in M31 today as the equilibrium density.
If \(b\) and \(D\) do not depend on \(\mathbf{x}\), a semi-analytic solution exists for the equilibrium density (see e.g., Ref. [44]). When the region of interest is small, homogeneous coefficients can be obtained by averaging the diffusion and loss coefficients over the relevant volume [19, 44]. However, our goal in this paper is to compute the synchrotron distribution over the field of view of the radio data in Figure 1, that is, most of the galactic disk of M31. Based on the astrophysical models (described in Section IV), the diffusion and loss coefficients will vary significantly over this region. We must therefore solve Eq. (24) in the case of non-homogeneous coefficients.
While the source term is spherically symmetric, the diffusion and loss coefficients are axially symmetric, implying that the solution to Eq. (24) depends on \(R\), \(z\) and \(E\). However, a fully axially symmetric numerical solution is intractable given our numerical approach. To overcome this problem, we average Eq. (24) over spherical angles \(\theta\) and \(\phi\):
\[\frac{\partial\langle f_{e}\rangle}{\partial t}=\frac{\partial}{\partial r} \langle D\left(\partial_{r}f_{e}\right)\rangle+\frac{2}{r}\langle D\left( \partial_{r}f_{e}\right)\rangle+\frac{\partial}{\partial E}\langle bf_{e}\rangle+ Q_{e}, \tag{39}\]
where (for an arbitrary function \(g(E,\mathbf{x})\)),
\[\langle g\rangle(E,r)\equiv\frac{1}{4\pi}\int d\Omega g(E,\mathbf{x}). \tag{40}\]
Figure 9: (a) Loss coefficient as a function of \(E\) for various values of \(R\) at \(z=0\). (b) The energy dependence of the total loss coefficient (solid line) and its subcomponents at \(R=z=0\). Inverse Compton and synchrotron losses, which have the same energy dependence, are shown as the dashed line, bremsstrahlung as dot-dashed, and Coulomb losses as the dotted line.
Rewriting Eq. (39) in terms of effective spherically averaged coefficients, we find
\[\frac{\partial\langle f_{e}\rangle}{\partial t}=\left[\frac{\partial}{\partial r} +\frac{2}{r}\right]\left(\bar{D}\partial_{r}\langle f_{e}\rangle\right)+\frac{ \partial}{\partial E}\left(\bar{b}\langle f_{e}\rangle\right)+Q_{e} \tag{41}\]
where
\[\bar{D}\equiv\frac{\langle D\partial_{r}f_{e}\rangle}{\langle\partial_{r}f_{e }\rangle},\quad\bar{b}\equiv\frac{\langle bf_{e}\rangle}{\langle f_{e}\rangle}. \tag{42}\]
Figure 11: Dynamic timescale \(\tau\) of M31 as a function of \(R\) and \(z\) for \(E=0.5\,\)GeV (the minimum energy contributing significantly to the 8.35 GHz synchrotron signal) and \(D_{0}=3\times 10^{27}\)cm\({}^{2}\)/s (the lower bound on the diffusion coefficient).
Figure 10: The inverse timescales for diffusion (solid lines) and loss (dotted lines), Eqs. (37) and (38). We show for comparison the inverse of the age of M31 (solid black), \(T_{\rm M31}=10^{10}\) years. The shaded regions around each solid line show the variation of the inverse timescales for diffusion as \(D_{0}\) is varied within the range given in Eq. (28).
The averaged coefficients \(\bar{D}\) and \(\bar{b}\) required to solve for \(\langle f_{e}\rangle\) in Eq. (41) themselves depend on \(f_{e}\). To calculate \(\bar{D}\) and \(\bar{b}\), we use approximate solutions for \(f_{e}\), then use these averaged coefficients to numerically solve the spherically averaged diffusion loss equation for \(\langle f_{e}\rangle\).
We calculate these approximate solutions for \(\bar{D}\) and \(\bar{b}\) in two different ways: first by assuming that \(f_{e}\) is approximately spherically symmetric, and in the second approach taking into account approximate deviations from spherical symmetry. For the region of M31 of interest for our analysis of the radio data (namely, the region inside \(\sim 10\) kpc), the resulting solutions for \(\langle f_{e}\rangle\) (and the resulting synchrotron emission) are similar regardless of our assumptions.
Our first approximate solution - the "unweighted" solution - assumes that deviations from spherical symmetry for \(f_{e}\) are small. If this is the case,
\[\bar{D} \simeq\langle D\rangle \tag{43}\] \[\bar{b} \simeq\langle b\rangle,\]
and we can numerically average over solid angles our models for \(D\) and \(b\) given in Sections V.1 and V.2.
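A minimal numerical version of the solid-angle average in Eq. (40) is sketched below for an axially symmetric model \(g(R,z,E)\) (for example the diffusion or loss coefficients sketched above); uniform sampling in \(\cos\theta\) suffices because the azimuthal integral is trivial.

```python
import numpy as np

def spherical_average(g, r, E, n=512):
    """Approximate <g>(E, r) of Eq. (40) for an axially symmetric g(R, z, E)."""
    cos_th = np.linspace(-1.0, 1.0, n)
    R = r * np.sqrt(1.0 - cos_th**2)  # cylindrical radius in kpc
    z = r * cos_th                    # height above the disk plane in kpc
    return np.mean(g(R, z, E))        # mean over uniform cos(theta) = solid-angle average

# Unweighted coefficients of Eq. (43), e.g. <D> at r = 5 kpc and E = 1 GeV:
# D_bar = spherical_average(diffusion_coefficient, 5.0, 1.0)
```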
For our second solution for the diffusion and loss coefficients, we calculate the spherically-averaged \(\bar{D}\) and \(\bar{b}\) parameters by substituting into Eq. (42) the approximate equilibrium solution for \(f_{e}\) obtained from solving Eq. (35) with \(\partial f_{e}/\partial t=0\):
\[f_{e}\simeq Q_{e}\tau, \tag{44}\]
where \(\tau^{-1}\equiv\tau_{b}^{-1}+\tau_{D}^{-1}\). We refer to this as the "weighted" solution, as \(\bar{D}\) and \(\bar{b}\) are obtained as averages of \(D\) and \(b\) weighted by \(\partial_{r}f_{e}\) and \(f_{e}\), respectively.
Figure 12 shows the spherically averaged diffusion and loss coefficients for the unweighted (dashed) and weighted (solid) averaging schemes. The two methods give very similar results for the loss coefficient across all of phase space and galactic radii. For the diffusion coefficient, the two calculations agree for the inner part of M31, \(r\lesssim 10\) kpc. As we will show, the disagreement at large radii in the diffusion coefficients does not result in significant differences in the predicted synchrotron emission from the region of M31 that we will use to set limits. As a result, the constraints derived from radio observations are robust across these different solutions.
Using either the weighted or unweighted solutions for \(\bar{D}\) and \(\bar{b}\), we must solve the diffusion loss equation for \(\langle f_{e}\rangle\). Defining \(u\equiv r\langle f_{e}\rangle\), Eq. (41) becomes
\[\frac{\partial u}{\partial t}=\frac{\partial}{\partial r}\left[\bar{D}(r,E) \frac{\partial u}{\partial r}\right]-\frac{\partial\bar{D}}{\partial r}\frac{ u}{r}+\frac{\partial}{\partial E}\left[\bar{b}(r,E)u\right]+rQ_{e}(r,E). \tag{45}\]
Under this redefinition, the boundary condition at \(r=0\) can be easily written as \(u(0,E)=0\), as long as \(f_{e}\) does not diverge faster than \(1/r\). This is satisfied if the inner slope of the dark matter density has a power law index \(\gamma<1.5\). The other required boundary conditions are \(u(49.9\,\text{kpc},E)=0\) and \(u(r,m_{\chi})=0\). To solve Eq. (45), we discretize \(r\), \(E\) and \(t\) and use finite differences to approximate the derivatives. This leads to a recursive equation for \(u\) at the next time-step in \(t\), given its value at the current \(t\).
Forward difference schemes for solving Eq. (45) are only stable if the time-step satisfies \(\Delta t\lesssim(\Delta r)^{2}/D\) over the whole domain [82], where \(\Delta r\) is the grid-spacing. Given the approximate age of M31 and our grid-spacing \(\Delta r\,=\,62\) pc, \(\,\mathcal{O}(10^{7})\) time-steps would be needed to reach the equilibrium solution using a forward difference method. Backward differences, on the other hand, are unconditionally stable [82] for any size of time-step. We therefore use backward differences to approximate the derivatives on the right-hand-side, leading to an implicit equation for \(u\) at the next time-step, which can be solved with a sparse matrix method. We choose the time-step to be much larger than the maximum timescale in the problem to minimize the number of iterations required. Further details about our numerical method for solving the diffusion-loss equation are provided in Appendix A.
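The backward-difference approach can be illustrated with the radial-diffusion part of Eq. (45) alone (constant \(\bar{D}\), with the energy-loss and \(\partial_{r}\bar{D}\) terms omitted); the actual scheme, which also discretizes the energy derivative, is described in Appendix A. The sketch below shows only the sparse implicit update, not the full solver.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Backward-Euler sketch for du/dt = D d^2u/dr^2 + r*Q(r), with u = r<f_e>
n_r = 800
r = np.linspace(0.0, 50.0, n_r)               # kpc
dr = r[1] - r[0]
KPC_CM = 3.086e21
D = 1.0e28 / KPC_CM**2                        # kpc^2/s
dt = 1.0e18                                   # time-step much larger than tau_D, in s
Q = np.exp(-r / 2.0)                          # illustrative source, arbitrary units

# (I - dt*A) u_new = u_old + dt*r*Q, with A the second-difference operator
alpha = D * dt / dr**2
main = (1.0 + 2.0 * alpha) * np.ones(n_r)
off = -alpha * np.ones(n_r - 1)
A = diags([off, main, off], offsets=[-1, 0, 1]).tolil()
A[0, :] = 0.0; A[0, 0] = 1.0                  # Dirichlet boundary u(0) = 0
A[-1, :] = 0.0; A[-1, -1] = 1.0               # Dirichlet boundary u(r_max) = 0
A = A.tocsc()

u = np.zeros(n_r)
for _ in range(50):                           # iterate towards the equilibrium solution
    rhs = u + dt * r * Q
    rhs[0] = rhs[-1] = 0.0
    u = spsolve(A, rhs)                       # unconditionally stable implicit update

f_e = np.divide(u, r, out=np.zeros_like(u), where=r > 0)  # <f_e> = u/r
```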
The results for the equilibrium solutions of \(\langle f_{e}\rangle\) are shown in Figure 13. Figure 13(a) shows the energy dependence of \(\langle f_{e}\rangle\) at \(r=5\) kpc for a representative set of values of \(m_{\chi}\) and our default value of \(D_{0}\). Figure 13(b) shows the dependence on \(r\) for each value of \(D_{0}\) and \(E=1\) GeV. The results become more sensitive to changes in \(D_{0}\) for \(D_{0}\gtrsim 3\times 10^{28}\) cm\({}^{2}\)/s. In both panels, the solid curves represent the weighted solution while the dashed curves represent the unweighted solution.
## VI Synchrotron spectrum and morphology
Relativistic electrons and positrons in M31 accelerate in the galactic magnetic field, leading to synchrotron emission. The power emitted per unit frequency from an electron or positron at pitch angle \(\alpha\) and energy \(E\) is
\[\frac{dP}{d\nu}(\nu,\alpha,E)=\frac{2\pi\sqrt{3}e^{2}\gamma\nu_{0}}{c}x\int \limits_{x/\sin\alpha}^{\infty}d\xi K_{5/3}(\xi), \tag{46}\]
where \(\nu_{0}\equiv e\bar{B}/(2\pi\gamma cm_{e})\), \(x=2\nu/(3\gamma^{3}\nu_{0})\), and \(K_{n}\) is the \(n^{\text{th}}\)-order modified Bessel function of the second kind. The differential flux can therefore be obtained by averaging Eq. (46) over uniformly distributed pitch angles, and convolving with the spherically-averaged phase space density of electrons leading to:
\[\begin{split}\frac{d^{2}S}{d\Omega d\nu}=&\,\frac{1}{4\pi}\int\limits_{\text{los}}dl\int\limits_{m_{e}}^{\infty}dE\,\langle f_{e}\rangle(r(l,\Omega),E)\,\langle dP/d\nu\rangle_{\alpha},\\ \langle dP/d\nu\rangle_{\alpha}\equiv&\,\frac{1}{2}\int\limits_{-1}^{1}d(\cos\alpha)\frac{dP}{d\nu},\end{split} \tag{47}\]
where \(\Omega=(\theta,\phi)\) is the location on the sky.
For \(\sin\alpha\sim\mathcal{O}(1)\) and \(x\gg 1\), \(dP/d\nu\) is exponentially suppressed at low energies [83], so most of the power is radiated by \(e^{\pm}\) with energies satisfying
\[E\gtrsim 10\,\text{GeV}\left(\frac{\nu}{8.35\,\text{GHz}}\right)^{1/2}\left( \frac{10\,\mu\text{G}}{\bar{B}}\right)^{1/2}. \tag{48}\]
As the Effelsberg radio telescope data used in this study is at frequencies around \(8.35\,\text{GHz}\), we are most interested in the \(e^{\pm}\) produced through dark matter annihilation with energies of \(\sim 10\,\text{GeV}\) and higher. This is shown in Figure 14 where we plot the dependence of \(\left\langle dP/d\nu\right\rangle_{\alpha}\) on \(E\) for a variety of fixed values of \(\bar{B}\).
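The pitch-angle-averaged power \(\langle dP/d\nu\rangle_{\alpha}\) of Eqs. (46)-(47) can be evaluated numerically as sketched below (CGS units; the small regions near \(\sin\alpha=0\), where the kernel vanishes, are simply excluded from the average).

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

E_CGS, C_CGS, M_E = 4.803e-10, 2.998e10, 9.109e-28   # electron charge (esu), c (cm/s), m_e (g)
GEV_ERG = 1.602e-3                                    # erg per GeV

def dP_dnu_avg(nu, E_GeV, B_muG, n_alpha=41):
    """Pitch-angle-averaged synchrotron power dP/dnu in erg/s/Hz (Eqs. (46)-(47))."""
    B = B_muG * 1e-6                                   # Gauss
    gamma = E_GeV * GEV_ERG / (M_E * C_CGS**2)         # Lorentz factor
    nu_g = E_CGS * B / (2.0 * np.pi * M_E * C_CGS)     # gyrofrequency, equal to gamma*nu_0
    x = 2.0 * nu / (3.0 * gamma**2 * nu_g)
    pref = 2.0 * np.pi * np.sqrt(3.0) * E_CGS**2 * nu_g / C_CGS
    sin_a = np.sqrt(1.0 - np.linspace(-0.99, 0.99, n_alpha) ** 2)
    F = [quad(lambda xi: kv(5.0 / 3.0, xi), x / s, np.inf)[0] for s in sin_a]
    return pref * x * np.mean(F)

# Most of the 8.35 GHz emission comes from E >~ 10 GeV e+- in ~10 muG fields (Eq. (48))
print(dP_dnu_avg(8.35e9, 10.0, 10.0))
```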
Figure 12: Spherically averaged diffusion coefficient (left) and loss coefficient (right) using the unweighted average (dashed) and the weighted average (solid) for a range of energies. We use our default value of \(D_{0}=1\times 10^{28}\,\text{cm}^{2}/\text{s}\).
Figure 13: Spherically averaged equilibrium phase space density of \(e^{\pm}\) as a function of (a) \(E\) and (b) \(r\) from dark matter with an annihilation cross-section of \(\langle\sigma v\rangle=2.2\times 10^{-25}\text{cm}^{3}/\text{s}\). In (a) we keep \(r\) and \(D_{0}\) constant for various values of \(m_{\chi}\). In (b) we hold \(E\) and \(m_{\chi}\) constant and vary \(D_{0}\). Dashed and solid lines are as in Figure 12.
In Figure 15, we show the \(8.35\,\mathrm{GHz}\) radio emission resulting from dark matter of mass \(m_{\chi}=39\,\mathrm{GeV}\) annihilating with a cross section of \(\langle\sigma v\rangle=2.2\times 10^{-25}\,\mathrm{cm}^{3}/\mathrm{s}\), assuming \(D_{0}=1\times 10^{28}\,\mathrm{cm}^{2}/\mathrm{s}\). In Figure 16, we show the signal along the semi-major axis (a) for a variety of values of \(m_{\chi}\), holding \(D_{0}\) constant and (b) for a variety of values of \(D_{0}\) holding \(m_{\chi}\) constant.
## VII Statistical Methodology
Having developed a numeric method to calculate the radio emission induced by dark matter annihilation in M31, we can now compare our predicted signal with data to set limits on the annihilation cross section to \(\bar{b}b\) as a function of dark matter mass. Though the dark matter annihilation will be brightest in the center of the galaxy, this region also has significant baryonic sources whose intensities cannot be easily modelled. In addition, the flux near the center of M31 is sensitive to the value of \(D_{0}\) for \(D_{0}\gtrsim 1\times 10^{28}\,\mathrm{cm}^{2}/\mathrm{s}\) (as seen in Figure 16(b)). For these reasons we set limits using the expected _morphology_ of the dark matter signal outside of the center, rather than the total intensity. This also makes our constraints insensitive to possible mismeasurement of the overall zero-level of the radio data.
This approach requires data-driven modeling of the morphology of the backgrounds within the galaxy. The background emission in M31 is complicated, with numerous point sources and a prominent ring feature (see Figure 1). None of these features are morphologically consistent with the expectations of dark matter annihilation, and can safely be attributed to baryonic physics. Even so, a multi-step process is required to define a search region and construct a background model within that region that does not risk fitting-away any potential signal.
In Section VII.1, we first describe how we mask the point sources, the ring of radio emission in the disk, and the bright center of M31. This will allow us to define a search region interior to the ring, where the background can be approximated as the residual emission from the ring plus a constant. In the end, the radio emission from this search region will be used to set limits on the dark matter model.
Next in Section VII.2, we describe how we determine the background model within the search region using the data itself - without absorbing potential signal into the model. We introduce a background model with five free parameters: three morphological parameters \((\mu_{1},\mu_{2},\mu_{3})\) controlling the shape of the residual background from the elliptical ring, and two coefficients \((w_{1},w_{2})\) which determine the intensity of each component of the background. The morphological parameters are fixed based on the data independent of the signal hypothesis, while the intensity coefficients are adjusted to their most likely values for each hypothesis.
Fixing the morphological parameters must be done carefully to avoid absorbing any signal present in the data into the background model. We leverage the fact that the signal peaks toward the center of M31, while the emission from the ring is dominant away from the center. We therefore can fix the morphological parameters by using the data away from the center of the intensity map (exterior to the dark matter-rich "signal region"). The size of this signal region is determined by comparing fits of the morphological parameters of background assuming the presence or absence of a dark matter signal.
After defining the search region and fixing the morphology of our background model within this region, we set statistical limits on specific signal models. We use a \(CL_{\mathrm{s}}\) test, described in Section VII.3. \(CL_{\mathrm{s}}\) works by building distributions of test-statistics from synthetic observations generated from background-only and signal-plus background hypotheses. This test-statistic is sensitive to the morphology of the signal in addition to the amplitude, making this ideal for the distributed signal of dark matter in M31. Our full set of limits varying over astrophysical model parameters and using the methodology we describe here will be shown in Section VIII.
### Background Masks
The baryonic sources of radio emission in M31 are complicated and difficult to model from first principles. Overall, we expect relatively uniform background emission across the interior of the galaxy, overlaid with significant emission from the galactic center due to baryonic processes, as well as point sources throughout the galaxy. In addition, M31 contains a prominent elliptical ring-shaped structure in radio with a semi-major axis of
Figure 14: Energy response of \(e^{\pm}\) producing synchrotron emission of frequency \(\nu=8.35\,\mathrm{GHz}\) for a variety of magnetic field strength values.
Figure 16: Predicted synchrotron emission at a frequency of \(\nu=8.35\times 10^{9}\,\mathrm{Hz}\) from dark matter annihilating with a cross-section of \(\langle\sigma v\rangle=2.2\times 10^{-25}\,\mathrm{cm}^{3}/\mathrm{s}\). The emission is shown as a function of \(x\), the distance from the center of M31 in the plane of the sky along the semi-major axis. The flux is integrated over the effective beam size of the data, \(\Omega_{\mathrm{beam}}=2.157\times 10^{-7}\) sr. In (a) we fix \(D_{0}\) to the default value of \(1\times 10^{28}\,\mathrm{cm}^{2}/\mathrm{s}\) and vary \(m_{\chi}\). In (b) we set \(m_{\chi}=39\,\mathrm{GeV}\), varying \(D_{0}\). The solid curves show the emission using our weighted numerical solutions, dashed curves use the unweighted approach.
Figure 15: Predicted synchrotron emission at a frequency of \(\nu=8.35\times 10^{9}\,\mathrm{Hz}\) from dark matter with \(m_{\chi}=39\,\mathrm{GeV}\) annihilating with a cross-section of \(\langle\sigma v\rangle=2.2\times 10^{-25}\,\mathrm{cm}^{3}/\mathrm{s}\). In calculating this synchrotron map, we used our default value of \(D_{0}\) and our weighted averaging scheme.
approximately \(10\,\mathrm{kpc}\), due to significant star formation in this region [84, 85]. All of these features can clearly be seen in the radio map of Figure 1. The location of the ring correlates with the highest densities of gas in our astrophysical models in Section IV.
Notably, other than the emission at the center of the galaxy, the spatial distribution of all of these sources of radio emission is inconsistent with emission sourced by dark matter. Rather than attempting to model these baryonic sources from first principles, we mask and remove them from our statistical analysis. As the emission at the center and the point sources are localized, we are able to completely remove them using masks. The ring of bright emission is broad enough that it cannot be removed completely. Instead, we model it as a Gaussian ring and mask its brightest emission.
#### vi.2.1 Point Source Masks
Ref. [34] has removed 38 point sources unrelated to M31. However, many point sources within the galaxy remain in the data. We locate point sources algorithmically by identifying circular regions (with a diameter of 0.75 times the HPBW of the beam) that are over-bright compared to the concentric annulus with inner and outer diameter of 2.25 and 2.75 times the HPBW, respectively. A circular region centered on pixel \(i\) is classified as a point-source if:
\[\langle d\rangle_{i}^{\mathrm{(cir)}}>\langle d\rangle_{i}^{\mathrm{(ann)}}+4 \sigma_{\mathrm{rms}} \tag{49}\]
where \(\langle d\rangle_{i}^{\mathrm{(cir)}}\) and \(\langle d\rangle_{i}^{\mathrm{(ann)}}\) are the flux per beam averaged over the circle and annulus centered at the pixel \(i\), respectively (the noise \(\sigma_{\mathrm{rms}}\) is defined in Section II). For each pixel in the radio map that satisfies this criterion, we mask a circular region (of diameter 0.75 times the HPBW) centered on the pixel.
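The criterion of Eq. (49) can be implemented directly on the pixelized map; a brute-force sketch (slow but explicit, with the HPBW expressed in pixels as an assumed input) is given below.

```python
import numpy as np

def point_source_mask(image, sigma_rms, hpbw_pix, threshold=4.0):
    """Boolean mask of pixels covered by point-source masks, following Eq. (49).

    A circle of diameter 0.75*HPBW centred on a pixel is flagged if its mean
    flux per beam exceeds the mean over the 2.25-2.75*HPBW annulus by
    threshold*sigma_rms; each flagged pixel seeds a circular mask of the
    same diameter.
    """
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r_cir, r_in, r_out = 0.375 * hpbw_pix, 1.125 * hpbw_pix, 1.375 * hpbw_pix
    mask = np.zeros_like(image, dtype=bool)
    for j in range(ny):
        for i in range(nx):
            rho = np.hypot(yy - j, xx - i)
            cir, ann = rho <= r_cir, (rho >= r_in) & (rho <= r_out)
            if ann.any() and image[cir].mean() > image[ann].mean() + threshold * sigma_rms:
                mask |= cir
    return mask
```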
In addition to these conventional point sources, there is a feature (located near \(x=0\,\mathrm{kpc}\), \(y=4\,\mathrm{kpc}\) in Figure 1) that is likely an artifact of the imaging process. As this feature does not have the intensity distribution of a point source, it was not identified by our point source algorithm, and we mask it by hand.6
Footnote 6: There is a similar feature near \(x=-2\,\mathrm{kpc}\), \(y=-2.5\,\mathrm{kpc}\). As this feature will not be in the search region (defined in Section VII.1.3), we do not mask it manually.
#### vi.2.2 Center Mask
The center of M31 is the brightest source of radio emission in the galaxy. While dark matter-induced emission would also peak in this region, much of the observed emission is likely due to difficult-to-model baryonic processes. Limits on annihilation can be set by using only this central emission [35, 37, 38], but the intensity of the dark matter signal here is sensitive to the diffusion parameter for \(D_{0}\gtrsim 1\times 10^{28}\,\mathrm{cm}^{2}/\mathrm{s}\) (as shown in Figure 16(b)). For these reasons, we also mask the center of M31 in our analysis and set limits on dark matter using the region outside the center. Here, the lower signal rate is off-set by the lower background, and the differing morphologies of the signal and background can be used to set limits less sensitive to uncertainties in the diffusion coefficient.
Our point-source masking technique also identifies a source at the center of M31, but the default point-source mask is too small to cover the entire bright center region. To determine the size of the central circular mask, we plot the intensity, averaged over concentric circular annuli (excluding pixels in point-source masks), as a function of 2D radius \(\rho=(x^{2}+y^{2})^{1/2}\) in Figure 17. We mask the central region out to the minimum of this averaged flux, at \(\rho=0.93\,\mathrm{kpc}\). The intensity map with point sources and the center masked is shown in Figure 18(a).
#### vi.2.3 Ring and Outside Masks
Finally, we must construct a mask for the elliptical ring of bright emission in the star forming region of the M31 disk [84, 85]. Given the morphology of this feature, it cannot be due to dark matter annihilation. Masking it therefore does not risk removing a potential signal and setting overly-strong constraints.
Figure 17: Observed flux averaged over concentric circular annuli of radius \(\rho\) in the plane of the sky, not including pixels that are in the point source mask. The errors in the annulus averaged flux in a particular bin are found by averaging the rms noise over the bin and dividing by the square root of the number of beams in the bin. The radius of the center circular mask is shown with a red vertical line.
To construct the mask, we first fit the data to a sum of a uniform template and a Gaussian elliptical ring template which have the forms
\[\begin{split}\Phi^{u}(\mathbf{x};w_{1})=&\,w_{1},\\ \Phi^{r}(\mathbf{x};w_{2},\mathbf{\mu})=&\,w_{2}\exp{\left[- \frac{\left(R_{e}(\mathbf{x},\mu_{1})-\mu_{2}\right)^{2}}{2\mu_{3}^{2}}\right]}, \end{split} \tag{50}\]
Figure 18: Intensity maps of the (a) radio data and (b) simulated pseudo-data using the globally-fit background model, with point source and center masks (described in Sections VII.1.1 and VII.1.2, respectively) applied. The method of simulating the pseudo-data is described in Appendix B. The search region (used to set limits on dark matter annihilation, see Section VII.1.3) consists of the unmasked pixels within the black contour. The signal region, masked when defining signal-independent background templates (see Section VII.2), is interior to the red contour.
where
\[R_{e}(\mathbf{x},\mu_{1})=\sqrt{x^{2}+\mu_{1}^{2}y^{2}} \tag{51}\]
is the elliptical radius, \(\mathbf{\mu}=(\mu_{1},\mu_{2},\mu_{3})\) are free parameters of the model that control the shape and size of the ring, and \(\mathbf{w}=(w_{1},w_{2})\) control the intensity of each component of the background. For the remainder of the paper, we will refer to \(\mathbf{\mu}\) as the background morphological parameters and \(\mathbf{w}\) as the background coefficients. The total background model is
\[\Phi^{b}(\mathbf{x};\mathbf{w},\mathbf{\mu})=\Phi^{u}(\mathbf{x};w_{1})+\Phi^{r}(\mathbf{x};w_{2}, \mathbf{\mu}) \tag{52}\]
As the dark matter-induced annihilation signal is expected to be small at the radius of the ring, we can fit our model to the ring independent of the signal model. We minimize the \(\chi^{2}\) statistic between the observed flux (\(d_{i}\) in pixel \(i\) at location \(\mathbf{x}_{i}\)) and the ring plus uniform background model
\[\chi^{2}=\sum_{i=1}^{N_{\rm pix}}\frac{\left[d_{i}-\Phi^{b}(\mathbf{x}_{i};\mathbf{w}, \mathbf{\mu})\right]^{2}}{\sigma_{\rm rms,i}^{2}} \tag{53}\]
with respect to all components of \(\mathbf{w}\) and \(\mathbf{\mu}\). The resulting best fit values for the morphological parameters are listed in Table 5 in the first section (labeled "Full Map") and second column (labeled "Global Fit"). Figure 19 shows the radio data (with point sources and galactic center masked) and the globally fit background model averaged over concentric elliptical annuli (with the same eccentricity as the globally fit ring model) as a function of \(R_{e}\).
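A compact way to carry out the fit of Eq. (53) is to minimize the \(\chi^{2}\) over \((\mathbf{w},\mathbf{\mu})\) numerically; the sketch below uses a general-purpose minimizer for all five parameters, although in practice the linear coefficients \(\mathbf{w}\) can be solved for analytically at fixed \(\mathbf{\mu}\). The pixel arrays and starting values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def ring_template(x, y, mu1, mu2, mu3):
    """Unit-amplitude Gaussian elliptical ring of Eq. (50); x, y in kpc."""
    R_e = np.sqrt(x**2 + mu1**2 * y**2)
    return np.exp(-((R_e - mu2) ** 2) / (2.0 * mu3**2))

def chi2(params, x, y, data, sigma_rms):
    """Eq. (53) for the uniform + ring background model of Eq. (52)."""
    w1, w2, mu1, mu2, mu3 = params
    model = w1 + w2 * ring_template(x, y, mu1, mu2, mu3)
    return np.sum(((data - model) / sigma_rms) ** 2)

# x, y, data, sigma_rms: flattened arrays over unmasked pixels (placeholders here).
# Starting point loosely motivated by Table 5: axis ratio ~4, radius ~11 kpc, width ~3 kpc.
# best = minimize(chi2, x0=[0.0, 1.0, 4.0, 11.0, 3.0],
#                 args=(x, y, data, sigma_rms), method="Nelder-Mead")
```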
We show a heatmap of the globally-fit background model superimposed with simulated errors in Figure 18(b). Our method for simulating random errors correlated over the beam size (which is much larger than the pixel size) is explained in Appendix B.
It is clear from Figure 19 that the emission outside of the ring (\(R_{e}\gtrsim 15\,\rm kpc\)) is significantly brighter than the emission inside (\(R_{e}\lesssim 5\,\rm kpc\)). As the dark matter signal is expected to drop with distance from the center, it cannot be responsible for this excess emission outside the ring. Therefore, we mask the region exterior to the ring. The width of the best-fit Gaussian is too broad to completely mask the emission out to the level of statistical noise in the region interior to the ring. We instead mask the ring inward to 1 standard deviation from the peak of the Gaussian ring model. That is, we mask all pixels satisfying
\[R_{e}(\mathbf{x},\mu_{1})>\mu_{2}-\mu_{3}, \tag{54}\]
using the globally-fit values for the morphological parameters \(\mathbf{\mu}\).
The inner boundary of the ring mask is shown in black contours in each panel of Figure 18. The interior of this contour (minus the center and pixels masked as part of point sources) is the search region that will be used to constrain dark matter annihilation.
### Background Model of the Search Region
Having selected our search region, we must now define our background model that we will use to construct our background-only and signal plus background hypotheses. The model has the functional form of Eq. (52) (used to define the ring and outside-region mask in Section VII.1.3), but as the intensity map has the potential to be signal-rich, we must fix the morphological parameters prior to calculating our limits with more care than when initially defining the search region.
If there was a (known) dark matter signal in the data, the parameters of the background model would be most
\begin{table}
\begin{tabular}{|c||c||c|} \hline Parameter & Global Fit & Signal-Region Masked \\ \hline \hline \multicolumn{3}{|c|}{Full Map} \\ \hline \(\mu_{1}\) & 4.20 & 4.28 \\ \(\mu_{2}\)( kpc) & 11.1 & 11.1 \\ \(\mu_{3}\)( kpc) & 2.88 & 2.42 \\ \hline \multicolumn{3}{|c|}{Right-Only} \\ \hline \(\mu_{1}\) & 3.63 & 3.63 \\ \(\mu_{2}\)( kpc) & 9.30 & 9.17 \\ \(\mu_{3}\)( kpc) & 2.39 & 2.15 \\ \hline \end{tabular}
\end{table}
Table 5: Best-fit morphological parameters for the ring. The Global Fit has the parameter values fit to the data with the center and point sources masked, while the Signal-Region Masked fit is over data with the additional mask over the central signal-rich region applied. We separately show the parameters after fitting to the entire M31 data set (labeled “Full Map”), and the data in the \(x>0\) right-hand side of Figure 1 (labeled “Right-Only”).
Figure 19: Synchrotron data and globally fit background model (parameters given in the “Full Map” section of Table 5) averaged over elliptical annuli as a function of \(R_{e}(\mathbf{x},\mu_{1})\) where \(\mu_{1}\) is taken to be the globally fit value.
accurately found by subtracting that signal from the data and then fitting our background model to the result. Alternatively, if there is no dark matter signal in the data, the parameters of the background model would be most accurately found by fitting the background model to the data itself. With only the point-source and circular center masks applied, the best fit parameters for the background model are sensitive to the (unknown) presence of signal in the data. We avoid the risk of the background model absorbing any signal present in the data by leveraging the morphology of the signal maps, which peak towards the center of M31.
Unlike the procedure for constructing the globally-fit background model, if we fit the background model only using data outside the center of M31 (where dark matter contributes less to the radio flux), then fits with and without signal subtracted will be more in agreement. The level of statistical agreement will increase as we mask more of an assumed signal. To maximize the amount of signal masked for a given area masked, the mask should have the shape of a contour of constant signal intensity. We will call the inner signal-rich region the "signal region," and the mask that covers it the "signal-region mask."
Our strategy then is to mask the signal region, and fit the parameters of \(\Phi^{b}\) to the data exterior to the mask (including data outside the search region). This fit will allow us to define
\[\hat{\Phi}^{b}(\mathbf{x};\mathbf{w})=\Phi^{b}(\mathbf{x};\mathbf{w},\mathbf{\hat{\mu}}) \tag{55}\]
where \(\mathbf{\hat{\mu}}\) are the morphological parameters, fit outside the signal region and fixed for the rest of the analysis, while \(\mathbf{w}\) are free parameters which set the amplitude of the various components of the background. These free parameters will be fit to the data (or pseudo-data) in the search region when we set limits on the presence of a dark matter signal. The region exterior to the signal-region mask, used to find \(\mathbf{\hat{\mu}}\), must be sufficiently signal-poor so that statistical tests that distinguish between signal plus background and background-only hypotheses obtain the same results regardless of whether \(\mathbf{\hat{\mu}}\) is determined by assuming the presence or absence of signal in the data.
To identify this signal-poor region, we use as a benchmark signal the flux from dark matter with \(m_{\chi}=38.6\,\mathrm{GeV}\), \(\langle\sigma v\rangle=2.2\times 10^{-25}\,\mathrm{cm}^{3}/\mathrm{s}\) and a diffusion normalization of \(D_{0}=1\times 10^{28}\,\mathrm{cm}^{2}/\mathrm{s}\). As the leakage out of the masked signal region is minimal and the signal morphology depends only weakly on the choice of mass and diffusion parameters, the resulting fits can be applied to signals with other values of \(m_{\chi}\) and \(D_{0}\). The cross-section is chosen to be approximately an order of magnitude larger than the best fit value from the GCE [15; 16; 17; 18] and existing limits from dwarf galaxies [6; 7; 8; 9; 10]. Our fitting procedure will ensure that the background model is not significantly influenced by the presence of dark matter signals of this intensity and weaker in the data.
We make a series of candidate signal-region masks that intersect the semi-major axis at \(x\) values between \([1.0-8.8]\) kpc. For each of these masks, we fit our background model (Eq. (52)) to the remaining unmasked data with signal subtracted (defined as "Fit A") or without signal subtracted ("Fit B") by minimizing Eq. (53) with respect to all components of \(\mathbf{w}\) and \(\mathbf{\mu}\). We take the sum in Eq. (53) to be over pixels not covered by the candidate signal region mask, the center mask, or point source masks.
For each candidate signal-region mask, we determine whether Fits A and B of the morphological parameters will lead to statistically indistinguishable results when testing for the presence of signal. To compare the background-only and signal plus background hypotheses, we introduce a test statistic, defined as
\[\begin{split}\lambda_{\langle\sigma v\rangle,\mathbf{\theta}}(\{d_{i }\})=&\Delta\chi^{2}=\chi^{2}_{s+b}-\chi^{2}_{b}\\ &=\sum_{i}\frac{\left[d_{i}-\hat{\Phi}^{s+b}_{i}(\langle\sigma v \rangle,\mathbf{\theta},\mathbf{w^{s+b}})\right]^{2}}{\sigma^{2}_{\text{rms},i}}\\ &-\sum_{i}\frac{\left[d_{i}-\hat{\Phi}^{b}_{i}(\mathbf{w^{b}})\right] ^{2}}{\sigma^{2}_{\text{rms},i}},\end{split} \tag{56}\]
This statistic will also be used in our methodology for setting limits on dark matter (see Section VIII). In Eq. (56), \(\{d_{i}\}\) are the differential flux values in each pixel of the intensity map, and the sum runs over pixels \(i\) that are in the search region (defined in Section VII.1). \(\mathbf{w^{b}}\) (\(\mathbf{w^{s+b}}\)) are the most-likely values of the background coefficients, \(\mathbf{w}=(w_{1},w_{2})\), under the background-only (signal plus background) hypothesis and are determined analytically for each intensity map. The test statistic is constructed such that higher test statistic values imply that the intensity map is more background-like.
The test statistic depends on the signal hypothesis being tested through the signal plus background model
\[\hat{\Phi}^{s+b}_{i}(\langle\sigma v\rangle,\mathbf{\theta},\mathbf{w^{s+b}})=\Phi^{s} _{i}(\langle\sigma v\rangle,\mathbf{\theta})+\hat{\Phi}^{b}_{i}(\mathbf{w^{s+b}}), \tag{57}\]
where the signal flux at the pixel centered at solid angle \(\Omega_{i}\) is given by
\[\Phi^{s}_{i}(\langle\sigma v\rangle,\mathbf{\theta})\equiv\Omega_{\text{beam}}\left. \frac{d^{2}S}{d\Omega d\nu}\right|_{\Omega_{i},\langle\sigma v\rangle,\mathbf{ \theta}} \tag{58}\]
using the differential flux calculated in Section VI. The signal hypothesis is parameterized by the cross section \(\langle\sigma v\rangle\) and a vector \(\mathbf{\theta}\), containing \(m_{\chi}\), \(D_{0}\) and all other default astrophysical parameters, given in Section IV.
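The test statistic of Eq. (56), with the background coefficients \(\mathbf{w}\) profiled analytically under each hypothesis, can be written as a weighted linear least-squares problem; a minimal sketch is below, with `templates` holding the uniform and ring maps over the search-region pixels (all inputs are placeholders).

```python
import numpy as np

def best_fit_chi2(data, templates, sigma_rms, signal=None):
    """Minimum chi^2 over the linear coefficients w for fixed templates.

    templates: (n_pix, 2) array holding the uniform and ring maps;
    signal: optional fixed signal map, added as in Eq. (57).
    """
    sig = signal if signal is not None else 0.0
    A = templates / sigma_rms[:, None]          # noise-weighted design matrix
    b = (data - sig) / sigma_rms
    w, *_ = np.linalg.lstsq(A, b, rcond=None)   # analytic minimum over w
    model = templates @ w + sig
    return np.sum(((data - model) / sigma_rms) ** 2)

def test_statistic(data, templates, sigma_rms, signal):
    """lambda = chi^2_{s+b} - chi^2_b of Eq. (56); larger means more background-like."""
    return (best_fit_chi2(data, templates, sigma_rms, signal=signal)
            - best_fit_chi2(data, templates, sigma_rms))
```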
For a given candidate signal-region mask, we calculate distributions of test statistics from ensembles of pseudo-data using background models with morphological parameters fixed by Fits A and B. These ensembles are drawn using the methods described in Appendix B, using the \(\hat{\Phi}^{b}_{i}\) appropriate for each fit, summing over pixels in the search region for each calculation of \(\lambda_{\langle\sigma v\rangle,\mathbf{\theta}}\). As the
test statistic requires a choice of signal parameters, we use as our reference signal model \(m_{\chi}=38.6\) GeV, \(D_{0}=1\times 10^{28}\) cm\({}^{2}\)/s, and a cross-section for which the signal plus background hypothesis is easily distinguished from the background hypothesis: \(\langle\sigma v\rangle=1.1\times 10^{-25}\) cm\({}^{3}\)/s. If - for a given signal mask - the distribution of test-statistics is indistinguishable between background Fits A and B, then the same will be true for the result of a statistical test for distinguishing signal plus background from background. This means the morphological parameters can be fit to the data using that signal region mask without absorbing potential signal from the data.
Figure 20 shows the mean test statistic using candidate Fits A and B as a function of the distance from the origin that the signal region mask intersects the semi-major axis. For a mask which intersects the semi-major axis at \(x=3.65\) kpc, the means of the distributions of the test statistic from Fits A and B agree within statistical noise. Selecting this signal region mask (shown with a red contour in Figure 18), we set \(\hat{\mathbf{\mu}}\) to the best-fit morphological parameters of Fit B. The values of the morphological parameters from this fit are shown in Table 5 (the "Signal-Region Masked" column of the "Full Map" section). To construct our signal plus background and background-only hypotheses, we will use the background model with the morphological parameters fixed to \(\hat{\mathbf{\mu}}\) and the coefficients \(\mathbf{w}=(w_{1},w_{2})\) free to float to their most likely values for each hypothesis.
### Limits on a Signal Model
Having fixed our background model morphology, we now describe our statistical approach to setting limits on dark matter annihilation in M31, using the data in the entire search region. In order to maximize the statistical power in the morphology of the signal when setting limits, we use the \(CL_{s}\) method [86] with pixel-level radio data and templates.
As in Section VII.2, we parameterize our signals using the cross section \(\langle\sigma v\rangle\), and the parameter vector \(\mathbf{\theta}\) which includes the dark matter mass \(m_{\chi}\) and diffusion coefficient \(D_{0}\). We will use the test statistic \(\lambda_{\langle\sigma v\rangle,\mathbf{\theta}}\) from Eq. (56) to distinguish between background-like and signal plus background-like intensity maps.
Statistical inference for a signal parameterized by \(\langle\sigma v\rangle\) and \(\mathbf{\theta}\) requires the probability distributions of \(\lambda_{\langle\sigma v\rangle,\mathbf{\theta}}\) under our background-only and signal plus background hypotheses. We construct these probability distributions by generating an ensemble of simulated observations of M31 under each hypothesis and calculating \(\lambda_{\langle\sigma v\rangle,\mathbf{\theta}}\) for each simulated observation. The simulated observations are generated under the background (signal plus background) hypothesis by superimposing \(\hat{\mathbf{\Phi}}^{b}\) (\(\hat{\mathbf{\Phi}}^{s+b}\)) with randomly drawn noise maps. The probability distribution from which the noise maps are drawn has correlations between nearby pixels, as expected due to the Gaussian beam of the observations (described in Section II). More details on our procedure for producing pseudo-data are described in Appendix B.
In Figure 21, we show sample distributions of \(\lambda_{\langle\sigma v\rangle,\mathbf{\theta}}\) for \(m_{\chi}=38.6\) GeV, \(D_{0}=1\times 10^{28}\) cm\({}^{2}\)/s and two choices of \(\langle\sigma v\rangle\) (\(1.1\times 10^{-26}\) cm\({}^{3}\)/s and \(4.6\times 10^{-26}\) cm\({}^{3}\)/s). In these examples, the blue and red histograms are the distributions of the test statistic assuming background and signal plus background (respectively) in arbitrary units. The solid curves are the Gaussian approximations of each distribution. The vertical green line in each plot is the value of \(\lambda_{\langle\sigma v\rangle,\mathbf{\theta}}^{\rm(obs)}\equiv\lambda_{\langle \sigma v\rangle,\mathbf{\theta}}(\{d_{i}^{\rm(obs)}\})\), evaluated on the actual M31 radio data.
For an arbitrary signal hypothesis parameterized by \(\langle\sigma v\rangle\) and \(\mathbf{\theta}\), our distributions of \(\lambda_{\langle\sigma v\rangle,\mathbf{\theta}}\) can be used to approximate the probability distribution of \(\lambda_{\langle\sigma v\rangle,\mathbf{\theta}}\) assuming background:
\[p_{\langle\sigma v\rangle,\mathbf{\theta}}(\lambda|b)\equiv p(\lambda_{\langle \sigma v\rangle,\mathbf{\theta}}=\lambda|b), \tag{59}\]
and signal plus background:
\[p_{\langle\sigma v\rangle,\mathbf{\theta}}(\lambda|s+b,\langle\sigma v\rangle,\bm {\theta})\equiv p\left(\lambda_{\langle\sigma v\rangle,\mathbf{\theta}}=\lambda|s +b,\langle\sigma v\rangle,\mathbf{\theta}\right). \tag{60}\]
A given observation with test-statistic \(\lambda_{\langle\sigma v\rangle,\mathbf{\theta}}^{\rm(obs)}\) then has a \(CL_{b}\) value given by the probability of seeing
Figure 20: Mean test statistics from background-only pseudo-data for a series of candidate fits of the morphological parameters of the background model. Each fit comes from minimizing Eq. (53) outside of the candidate signal region mask that intersects the semi-major axis of M31 at \(x_{\rm mask}\), assuming the presence (green) or absence (black) of dark matter signal in the data. The distributions of test statistics are constructed for a signal from dark matter with \(m_{\chi}=38.6\) GeV and \(\langle\sigma v\rangle=1.1\times 10^{-25}\) cm\({}^{3}\)/s and default diffusion normalization of \(D_{0}=1\times 10^{28}\) cm\({}^{2}\)/s. The size of the signal-region mask that we select is shown with the red vertical line.
data more background-like than observed, assuming the background-only hypothesis is correct:
\[CL_{b}\left(\lambda^{\rm(obs)}_{\langle\sigma v\rangle,\mathbf{\theta}},\langle \sigma v\rangle,\mathbf{\theta}\right)=\int\limits_{\lambda^{\rm(obs)}_{\langle\sigma v \rangle,\mathbf{\theta}}}^{\infty}d\lambda\;p_{\langle\sigma v\rangle,\mathbf{\theta}}( \lambda|b). \tag{61}\]
Similarly, the \(CL_{s+b}\) value for the observation is the probability of seeing a more background-like intensity map than that observed, assuming that the signal plus background hypothesis is correct:
\[CL_{s+b}\left(\lambda^{\rm(obs)}_{\langle\sigma v\rangle,\boldsymbol{\theta}}, \langle\sigma v\rangle,\boldsymbol{\theta}\right)=\int\limits_{\lambda^{\rm( obs)}_{\langle\sigma v\rangle,\boldsymbol{\theta}}}^{\infty}d\lambda\;p_{ \langle\sigma v\rangle,\boldsymbol{\theta}}(\lambda|s+b,\langle\sigma v \rangle,\boldsymbol{\theta}). \tag{62}\]
The ratio \(CL_{s}\left(\lambda^{\rm(obs)}_{\langle\sigma v\rangle,\boldsymbol{\theta}}, \langle\sigma v\rangle,\boldsymbol{\theta}\right)\equiv CL_{s+b}/CL_{b}\) can then be interpreted as the probability of signal parameters greater than \(\langle\sigma v\rangle\), given the data. A 95% confidence level exclusion therefore corresponds to a signal for which
\[CL_{s}(\lambda^{\rm(obs)}_{\langle\sigma v\rangle,\boldsymbol{\theta}}, \langle\sigma v\rangle,\boldsymbol{\theta})=\frac{CL_{s+b}(\lambda^{\rm(obs)} _{\langle\sigma v\rangle,\boldsymbol{\theta}},\langle\sigma v\rangle, \boldsymbol{\theta})}{CL_{b}(\lambda^{\rm(obs)}_{\langle\sigma v\rangle, \boldsymbol{\theta}},\langle\sigma v\rangle,\boldsymbol{\theta})}=0.05. \tag{63}\]
The expected 95% confidence limits correspond to signal parameters for which the median test statistic under the background hypothesis (\(CL_{b}=0.5\)) leads to \(CL_{s}=0.05\). The 1 and 2\(\sigma\) errors of this expected limit are calculated using the corresponding percentiles of the background distribution.
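Given ensembles of the test statistic under the two hypotheses, the \(CL_{s}\) value of Eq. (63) follows directly; the sketch below uses Gaussian fits to the ensembles, as is done here when the observed statistic lies far out on the tails (cf. footnote 7).

```python
import numpy as np
from scipy.stats import norm

def cls_value(lam_obs, lam_b_ensemble, lam_sb_ensemble):
    """CL_s = CL_{s+b} / CL_b of Eqs. (61)-(63), using Gaussian fits to the ensembles."""
    cl_b = norm.sf(lam_obs, loc=np.mean(lam_b_ensemble), scale=np.std(lam_b_ensemble))
    cl_sb = norm.sf(lam_obs, loc=np.mean(lam_sb_ensemble), scale=np.std(lam_sb_ensemble))
    return cl_sb / cl_b

# A cross section is excluded at 95% confidence when cls_value(...) < 0.05;
# the expected limit replaces lam_obs by the median of the background-only ensemble.
```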
Example \(CL\) curves are shown in Figures 21(c) and 21(d) (corresponding to the distributions of the test statistics in Figures 21(a) and 21(b), respectively). The \(CL_{s}\) curve of each plot (shown in grey) is dominated by statistical noise in the simulated intensity maps when \(CL_{b}\) and \(CL_{s}\) are small. As seen in this example, we generically find that \(\lambda^{\rm(obs)}_{\langle\sigma v\rangle,\boldsymbol{\theta}}\) is \(5-6\sigma\) larger than the mean of the background-only distribution for our search region,7 implying that \(CL_{b}\) of the observed test statistic is \(\sim 2\times 10^{-8}\).
Footnote 7: To obtain \(CL\) values not dominated by the finite statistics in our simulated intensity maps would require \(\sim 10^{9}\) maps. Instead, we set limits in this regime by extrapolating the test statistic distributions by fitting them to Gaussians and calculating an approximation of \(CL_{s}\) using these extrapolations (shown in black in Figures 21(c) and 21(d)).
As we will discuss in the next section, the fact that the _observed_ test statistics are located on the far tail of the \(CL_{b}\) distributions is a consequence of observed radio intensities that are much less signal-like than the signal-free background model. We identify the likely source of this discrepancy and account for it to set conservative limits in what follows.
## VIII Constraints on annihilating dark matter in M31
We can at last place limits on dark matter model parameters using the radio emission from M31. Using the \(CL_{s}\) method, we set 95% exclusion limits for dark matter with mass in the range \([6-500]\) GeV, annihilating to \(b\bar{b}\).
In Figure 22, we show the 95%CL upper limits on \(\langle\sigma v\rangle\) as a function of \(m_{\chi}\), assuming the default diffusion parameter normalization \(D_{0}=1\times 10^{28}\,\mathrm{cm}^{2}/\mathrm{s}\). Alongside the observed limits (black solid line), we plot the 1 and 2\(\sigma\) variation around the expected limits (derived from our distributions of the test statistic under the background hypothesis). It is clear that the observed limit is far stronger than the 2\(\sigma\) variation assuming the background hypothesis. These very strong limits are the result of very low likelihoods of the background models, combined with even lower likelihoods for our signal plus background models (as shown in Figure 21 for one value of \(m_{\chi}\)). That is, while the background model does not describe the observations well, the observed deviations away from the background-only model are not compatible with the morphology of any signal.
These unexpectedly strong limits require further investigation. In Figure 23(a), we show our best fit background model and signal plus background model with \(\langle\sigma v\rangle=1.1\times 10^{-26}\,\mathrm{cm}^{3}/\mathrm{s}\) for \(m_{\chi}=38.6\,\mathrm{GeV}\) and \(D_{0}=1\times 10^{28}\,\mathrm{cm}^{2}/\mathrm{s}\) along with the data in the search region as a function of elliptical distance from the center of the map (this choice of signal parameters is excluded at 95% confidence).8 As can be seen, the residuals of the data
Figure 22: 95% confidence limits on dark matter annihilation assuming \(D_{0}=1\times 10^{28}\,\mathrm{cm}^{2}/\mathrm{s}\) from our Full Map analysis. The 1\(\sigma\) and 2\(\sigma\) expected limits from the original search region are shown in green and yellow, with the observed limits for this search region shown with a solid line (labeled “original”). The dotted and dashed lines are the actual limits from the data in the right (\(x>0\)) and left (\(x<0\)) half of the search region, respectively.
with respect to the background-only model are negative for \(R_{e}\lesssim 4\,\)kpc.
These large negative residuals can be traced specifically to the large negative excursion in the observed flux, located around \((x,y)\sim(-2,1)\) kpc and clearly visible in Figures 1 and 18(a). The low flux measurement (well in excess of a \(2\sigma\) deviation given the expected measurement errors) may be due to the over-subtraction of a point source external to M31. In Figure 22, alongside the observed limit obtained from the entire search region, we show the limits derived using the \(CL_{s}\) method using only the left (\(x<0\)) and right (\(x>0\)) halves of the search region. The stronger-than-expected limits come entirely from the left-hand side of the M31 emission, where this region of negative flux is located.
In Figures 23(b) and 23(c) we show elliptically averaged radio emission of the left search region (masking the \(x>0\) data) and right-only search region (masking \(x<0\)), respectively. Alongside the data, we show the best-fit background model and a signal plus background model (excluded at 95% confidence) as a function of \(R_{e}\). As can be seen, the negative residuals in the left-hand side of the search region are the source of the unexpectedly strong limits. Critically, given our understanding of the dark matter distribution within M31, dark matter emission cannot create such a region of low emission close to the center of M31, even if the overall baseline of zero radio flux was mismeasured. Thus, considering only the part of the data without this region of anomalously low emission will not set overly-optimistic limits on dark matter annihilation. Indeed, it sets more conservative and weaker bounds.
To get the most accurate limits using only the right side of the map, we recalculate the search region and re-fix the background morphological parameters with the left side of the data masked, using the steps described in Sections VII.1 and VII.2. The resulting morphological parameter values used to recalculate the search region are shown in the "Right-Only" section of Table 5 in the second column (labeled "Global Fit"). The new search region is bounded by the black contour in Figure 24. We show the resulting test statistics as a function of candidate signal region mask size for background Fits A and B in Figure 25. Based on this, we select the signal region
Figure 23: Radio Flux averaged over concentric elliptical annuli as function of \(R_{e}(\mathbf{x},\mu_{1})\) (with \(\mu_{1}\) given by the global fit of the Full Map analysis) along with the best fit background model and the excluded signal plus background model for (a) the original search region, (b) the left half search region, and (c) the right half search region. All signal plus background models shown have \(m_{\chi}=38.6\,\)GeV and \(D_{0}=1\times 10^{28}\,\)cm\({}^{2}\)/s. For the excluded signal plus background model, we take the lowest value of \(\langle\sigma v\rangle\) that leads to 95% exclusion for the values of mass and diffusion normalization plotted.
Figure 24: As Figure 18(a) but for the right-only analysis. The search region contour is recalculated with the left side of the image masked, and thus differs slightly from the search region of the full map analysis.
mask for the right side of the intensity map to be the one that intersects the semi-major axis at \(x=3.65\,\mathrm{kpc}\), where the mean test statistics from Fits A and B agree within statistical error. This turns out to be the same signal region mask that we found for our procedure that uses the Full Map. This signal region mask is bounded by the red contour in Figure 24. We fix the morphological parameters of the background model \(\mathbf{\mu}\) to the best fit values from background model B with this signal region mask. The resulting values for the morphological parameters are shown in the third column (labeled "Signal-Region Masked") of the second half (labeled "Right-Only") of Table 5.
Using this background model, we show in Figures 26(a) and 26(b) examples of the test statistic distributions and \(CL\) curves for pseudo-data in the right search region, for \(m_{\chi}=38.6\,\mathrm{GeV}\), \(D_{0}=1\times 10^{28}\,\mathrm{cm}^{2}/\mathrm{s}\), and \(\langle\sigma v\rangle=7.4\times 10^{-26}\,\mathrm{cm}^{3}/\mathrm{s}\). This is the cross section that is excluded at approximately 95% confidence for these values of \(m_{\chi}\) and \(D_{0}\).
In Figure 27, we show our limits on \(\langle\sigma v\rangle\) as a function of \(m_{\chi}\), using the right-only analysis. In both panels, the green and yellow bands quantify the 1 and 2\(\sigma\) statistical error of our expected limits for our default value of \(D_{0}\) and for the weighted averaging scheme, introduced in Section V.3 (and otherwise default parameters). Each panel quantifies the effects of different systematics on our results. In Figure 27(a) we show the observed 95% confidence exclusion limit from each spherically averaging procedure for our default value of \(D_{0}\). As can be seen, the limits do not depend on the averaging scheme used. In Figure 27(b), we show the observed limits as \(D_{0}\) is varied. The limits are relatively insensitive to variations of the diffusion coefficient in the range \(3\times 10^{27}\,\mathrm{cm}^{2}/\mathrm{s}\leq D_{0}\leq 3\times 10^{28}\, \mathrm{cm}^{2}/\mathrm{s}\). The limits become about a factor of three weaker when \(D_{0}\) changes from \(3\times 10^{28}\,\mathrm{cm}^{2}/\mathrm{s}\) to \(8\times 10^{28}\,\mathrm{cm}^{2}/\mathrm{s}\) for \(m_{\chi}\gtrsim 10\,\mathrm{GeV}\).
## IX Conclusion
In this work, we have set robust and conservative limits on dark matter annihilation to \(b\bar{b}\) using the 8.35 GHz Effelsberg radio map of M31, which is sensitive to the predicted synchrotron emission of the \(e^{\pm}\) produced in the cascade \(b\) decays. These limits are based on a numeric solution to the diffusion-loss equation that accommodates non-uniform parameters, and an astrophysical model that uses observations of the gas, dust, starlight, and magnetic fields of M31. Our ISRF model for the starlight is derived directly from _ugriz_ luminosity data, which led to notably larger values for \(\rho_{*}\) in the center of M31 compared to previous works.
Unlike previous studies, our numerical solution to the diffusion-loss equation allows for position dependent diffusion and loss coefficients. Our method still requires spherical averaging of the background model. Though we have shown that our final limits are insensitive to the averaging procedure, additional work is needed to develop a numeric solution which is adapted to the axisymmetry of M31.
Our limits are based on the morphology of the observed flux, and our results are independent of the true zero-flux level of the intensity map. The limits are based on the radio flux only interior to the bright ring of radio emission in M31, allowing us to use data-driven models of the background based on signal-poor regions of the observed intensity map. Due to a localized anomaly of low radio flux in the search region, we choose conservatively to select only the half of the dataset without this anomaly with which to set our limits.
Compared to previous radio studies of dark matter annihilation in M31, we find weaker limits on dark matter annihilation (after rejecting the region containing the negative fluxes). These weaker limits are due in part to the fact that in our analysis we mask the center of the galaxy, where the signal intensity is maximum - however, this choice minimizes the sensitivity to unknown astrophysical parameters at the galactic center. The weaker limits are also likely due to the differences in our astrophysical model of M31 compared to previous work. In particular, the core of M31 is much more luminous in starlight than a simple scaling of the comparable region of the Milky Way would suggest. This increased starlight flux results in increased energy losses of \(e^{\pm}\) into X-rays through inverse Compton scattering, reducing the flux of dark matter-induced radio waves. Though beyond the scope of this work, this suggests that an analysis of constraints from X-ray emission in the center of M31 from dark matter annihilation may set interesting limits.
The sensitivity of the limits to the astrophysical con
Figure 25: As Figure 20, but for our analysis with the left side of the data masked.
Figure 27: Expected and actual 95% confidence limits derived from the data in the search region from the right-only analysis, shown in Figure 24. The two panels show the variation of the observed limits due to (a) changes in the averaging procedure (introduced in Section V.3) for our default diffusion coefficient normalization and (b) changes in the diffusion coefficient normalization, \(D_{0}\), for our weighted averaging scheme. Both panels have the expected limits obtained using the default value of \(D_{0}\) (\(1\times 10^{28}\,\text{cm}^{2}/\text{s}\)) and the weighted averaging scheme.
Figure 26: Same as Figure 21 but for the right-only analysis. The cross-section shown here is close to the expected and actual 95% confidence limit. The expected limit is almost the same as the actual limit since the test statistic from the data is very close to the \(50^{\text{th}}\) percentile test statistic from background pseudo-data.
ditions within M31 is notable; though in this work we have taken care to construct an accurate model of M31 based on observations, future measurements and astronomical input would likely improve the model and the resulting limits. Similar analysis is likely necessary for constraints on dark matter annihilation via radio waves in other systems beyond M31.
## Acknowledgements
This work was supported by DOE grant DOE-SC0010008. We thank Andrew Baker for helpful advice and discussion. We also thank the authors of Ref. [34] for providing the data for our analysis.
## Appendix A Solving the Diffusion Equation through the Method of Backwards Differences
In this Appendix, we describe our numeric method for solving for the spherically averaged electron phase space density that satisfies Eq. (41). With forward differences, the large time-steps required to numerically solve the diffusion-loss equation over the relevant timescales of M31 result in unstable solutions. Backward differences, on the other hand, are unconditionally stable [82]. Since we are only interested in the equilibrium solution and not in the details of the approach to equilibrium, we use backward differences with time-steps large enough that the solution converges after only two time steps.
It is more convenient to work with \(u\equiv r\langle f_{e}\rangle\), which converts Eq. (41) into Eq. (45). The discretized form of Eq. (45) with backward differences is
\[\frac{u_{ij}^{n+1}-u_{ij}^{n}}{\Delta t}= D(r_{i},E_{j})\frac{u_{i+1,j}^{n+1}-2u_{ij}^{n+1}+u_{i-1,j}^{n+1}}{ \Delta r^{2}}+\left.\frac{\partial D}{\partial r}\right|_{r_{i},E_{j}}\left[ \frac{u_{i+1,j}^{n+1}-u_{i-1,j}^{n+1}}{2\Delta r}-\frac{u_{ij}^{n+1}}{r}\right] \tag{121}\] \[+\frac{b(r_{i},E_{j+1})u_{i,j+1}^{n+1}-b(r_{i},E_{j})u_{ij}^{n+1} }{\Delta E_{i}}+r_{i}Q_{e}(r_{i}),\]
where \(u_{ij}^{n}=u(r_{i},E_{j},t_{n})\) and \(\Delta t\), \(\Delta r\) and \(\Delta E_{i}=E_{i+1}-E_{i}\) are the grid spacings for each coordinate. We use \(n_{E}=400\) logarithmically spaced steps for \(E\) and \(n_{r}=800\) linearly spaced steps for \(r\).
Combining all terms from Eq. (121) evaluated at time-step \(t_{n+1}\) gives
\[\left[\delta_{ik}\delta_{jl}-A_{ik}(E_{j})\delta_{jl}-\delta_{ik}B_{jl}(r_{i} )\right]u_{kl}^{n+1}=u_{ij}^{n}+C(r_{i},E_{j}). \tag{122}\]
Here, \(A\) and \(B\) are given by
\[A(E_{j})=\begin{pmatrix}\alpha_{0}(r_{1},E_{j})&\alpha_{1}(r_{1},E_{j})&0&0& \ldots&0&0&0\\ \alpha_{-1}(r_{2},E_{j})&\alpha_{0}(r_{2},E_{j})&\alpha_{1}(r_{2},E_{j})&0& \ldots&0&0&0\\ 0&\alpha_{-1}(r_{3},E_{j})&\alpha_{0}(r_{3},E_{j})&\alpha_{1}(r_{3},E_{j})& \ldots&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&0&\ldots&\alpha_{-1}(r_{n_{r}-1},E_{j})&\alpha_{0}(r_{n_{r}-1},E_{j})& \alpha_{1}(r_{n_{r}-1},E_{j})\\ 0&0&0&0&\ldots&0&\alpha_{-1}(r_{n_{r}},E_{j})&\alpha_{0}(r_{n_{r}},E_{j})\\ \end{pmatrix}, \tag{123}\]
\[B(r_{i})=\begin{pmatrix}\beta_{0}(r_{i},E_{1})&\beta_{1}(r_{i},E_{1})&0&0& \ldots&0&0&0\\ \beta_{-1}(r_{i},E_{2})&\beta_{0}(r_{i},E_{2})&\beta_{1}(r_{i},E_{2})&0& \ldots&0&0&0\\ 0&\beta_{-1}(r_{i},E_{3})&\beta_{0}(r_{i},E_{3})&\beta_{1}(r_{i},E_{3})& \ldots&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots\\ 0&0&0&0&\ldots&\beta_{-1}(r_{i},E_{n_{E}-1})&\beta_{0}(r_{i},E_{n_{E}-1})\\ 0&0&0&0&\ldots&0&\beta_{-1}(r_{i},E_{n_{E}})&\beta_{0}(r_{i},E_{n_{E}})\\ \end{pmatrix}, \tag{124}\]
\[\alpha_{-1}(r_{i},E_{j}) =\left(\frac{D(r_{i},E_{j})}{\Delta r^{2}}-\frac{1}{2\Delta r}\left. \frac{\partial D}{\partial r}\right|_{r_{i},E_{j}}\right)\Delta t 2\leq i\leq n_{r} 1\leq j\leq n_{E} \tag{100}\] \[\alpha_{0}(r_{i},E_{j}) =\left(-\frac{2D(r_{i},E_{j})}{\Delta r^{2}}-\frac{1}{r}\left. \frac{\partial D}{\partial r}\right|_{r_{i},E_{j}}\right)\Delta t 1\leq i\leq n_{r} 1\leq j\leq n_{E}\] (101) \[\alpha_{1}(r_{i},E_{j}) =\left(\frac{D(r_{i},E_{j})}{\Delta r^{2}}+\frac{1}{2\Delta r} \left.\frac{\partial D}{\partial r}\right|_{r_{i},E_{j}}\right)\Delta t 1\leq i\leq n_{r}-1 1\leq j\leq n_{E}\] (102) \[\beta_{-1}(r_{i},E_{j}) =0 1\leq i\leq n_{r} 2\leq j\leq n_{E}\] (103) \[\beta_{0}(r_{i},E_{j}) =-\frac{b(r_{i},E_{j})}{\Delta E}\Delta t 2\leq i\leq n_{r} 1\leq j\leq n_{E}\] (104) \[\beta_{1}(r_{i},E_{j}) =\frac{b(r_{i},E_{j+1})}{\Delta E}\Delta t 1\leq i\leq n_{r} 1\leq j\leq n_{E}-1, \tag{105}\]
and the function \(C\) is given by
\[C(r_{i},E_{j})=r_{i}Q_{e}(r_{i},E_{j})\Delta t. \tag{106}\]
The matrices \(A\) and \(B\) above are constructed using the boundary conditions
\[u(0,E) =0 \tag{107}\] \[u(r_{n_{r}+1},E) =0\] \[u(r,E_{n_{E}+1}) =0,\]
where \(r_{n_{r}+1}=49.9\) kpc and \(E_{n_{E}+1}=m_{\chi}\). To update \(u\) from time-step \(n\) to \(n+1\), we must solve Eq. (122) for \(u_{ij}^{n+1}\) given \(u_{ij}^{n}\), \(A\), \(B\) and \(C\).
It is convenient to flatten the two lower indices in \(u_{ij}^{n}\) into a single lowered index by reshuffling the \(i\in[1,n_{r}]\) and \(j\in[1,n_{E}]\) indices of \(r\) and \(E\) into a single index \(a\in[1,n_{r}\times n_{E}]\) as
\[a=i+(j-1)\times n_{r}.\]
With this reordering, the phase space density can be encoded as a vector at time-step \(n\). Using this redefinition, the vector \(\mathbf{\mathcal{U}}^{n}\) at time-step \(n\) has components
\[\mathbf{\mathcal{U}}^{n}_{a}=u_{ij}^{n}. \tag{108}\]
Eq. (122) can then be written as a matrix equation in the \(n_{r}\times n_{E}\) vector indices
\[\mathbf{\mathcal{M}}\mathbf{\mathcal{U}}^{n+1}=\mathbf{\mathcal{U}}^{n}+\mathbf{\mathcal{C}}. \tag{109}\]
The vector \(\mathbf{\mathcal{C}}\) has been redefined from \(C_{ij}\equiv C(r_{i},E_{j})\) in a manner identical to \(u_{ij}^{n}\), with \(\mathcal{C}_{a}=C_{ij}\), \(a=i+(j-1)\times n_{r}\). The matrix \(\mathbf{\mathcal{M}}\) is defined as
\[\mathbf{\mathcal{M}}=\mathbb{1}-\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}} \tag{110}\]
The matrices \(\mathbf{\mathcal{A}}\) and \(\mathbf{\mathcal{B}}\) are \(n_{E}\times n_{E}\) block matrices with \(n_{r}\times n_{r}\) blocks:
\[\mathbf{\mathcal{A}}\equiv\begin{pmatrix}A(E_{1})&0&0&\ldots&0&0\\ 0&A(E_{2})&0&\ldots&0&0\\ 0&0&A(E_{3})&\ldots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\ldots&A(E_{n_{E}-1})&0\\ 0&0&0&\ldots&0&A(E_{n_{E}})\end{pmatrix} \tag{111}\]
where \(A(E_{j})\) is defined in Eq. (123) and
\[\mathbf{\mathcal{B}}\equiv\begin{pmatrix}B_{0}(E_{1})&B_{1}(E_{1})&0&\ldots&0&0\\ B_{-1}(E_{2})&B_{0}(E_{2})&B_{1}(E_{2})&\ldots&0&0\\ 0&B_{-1}(E_{3})&B_{0}(E_{3})&\ldots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\ldots&B_{0}(E_{n_{E}-1})&B_{1}(E_{n_{E}-1})\\ 0&0&0&\ldots&B_{-1}(E_{n_{E}})&B_{0}(E_{n_{E}})\end{pmatrix}. \tag{112}\]
\[B_{m}(E_{j})=\begin{pmatrix}\beta_{m}(r_{1},E_{j})&0&0&\dots&0&0\\ 0&\beta_{m}(r_{2},E_{j})&0&\dots&0&0\\ 0&0&\beta_{m}(r_{3},E_{j})&\dots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\dots&\beta_{m}(r_{n_{r}-1},E_{j})&0\\ 0&0&0&\dots&0&\beta_{m}(r_{n_{r}},E_{j})\end{pmatrix}\qquad m=-1,0,1. \tag{101}\]
Starting from the first row, we reduce \(\mathcal{M}\) to upper-echelon form. The components of \(\mathbf{\mathcal{U}}^{n+1}\) can then be solved for, starting from the final component and recursively solving for the other components in reverse order. For the most general matrix, this procedure would require \(\mathcal{O}(n_{E}\times n_{r})^{3}\) operations, which would be computationally prohibitive. In our case, the matrix \(\mathcal{M}\) is tridiagonal with a fringe, which requires only \(\mathcal{O}(n_{r}^{2}\times n_{E})\) operations.
Initially, we set the time-step to an approximation of the maximum timescale of the problem:
\[\Delta t=\Delta t_{0}=\max{(\tau_{D}^{\text{max}},\tau_{b}^{\text{max}})}, \tag{102}\]
where \(\tau_{D}^{\text{max}}\) and \(\tau_{b}^{\text{max}}\) are the maximum diffusion and loss time-scales:
\[\tau_{b}^{\text{max}}\simeq \max_{ij}\tau_{b}(r_{i},E_{j}) \tag{103}\] \[\tau_{D}^{\text{max}}\simeq \max_{ij}\tau_{D}(r_{i},E_{j})\]
As we need only a rough estimate of the time scales for our initial time-step, we use simplified equations for \(\tau_{b}\) and \(\tau_{D}\):
\[\tau_{b}= \frac{m_{\chi}}{b}>\frac{\max{E}}{b}, \tag{104}\] \[\tau_{D}= \frac{r_{s}^{2}}{D},\]
where \(r_{s}\) is the scale radius of the dark matter distribution, given in Section III.
Using \(\Delta t=\Delta t_{0}\), we iteratively solve for \(\mathbf{\mathcal{U}^{n+1}}\) from \(\mathbf{\mathcal{U}^{n}}\) until each component of the two vectors differs by less than 1 part in \(10^{3}\). We then reduce the time-step by a factor of 2 and repeat, starting with the final result from the last time-step and iterating until the same convergence criterion is met. We repeat this procedure - reducing the time-step by a factor of 2, and achieving convergence of the solution - until there have been at least 5 different values of \(\Delta t\) and \(\mathbf{\mathcal{U}^{n}}\) converges in one step for 3 values of \(\Delta t\) in a row. We find that these convergence criteria are conservative, as convergence is achieved after 5 values of \(\Delta t\) for all solutions that we examined.
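The following sketch illustrates the scheme of this Appendix but is not the implementation used for this work: the coefficient functions \(D\), \(\partial D/\partial r\), \(b\), and \(Q_{e}\) are placeholders, a generic sparse solver stands in for the structured elimination described above, and the grids are far coarser than the \(n_{r}=800\) and \(n_{E}=400\) steps used in the analysis.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Placeholder physics inputs (assumptions for illustration, not the paper's models).
def D(r, E):      return 1e28 * (E / 1.0) ** 0.3      # diffusion coefficient
def dD_dr(r, E):  return 0.0                           # radial derivative of D
def b_loss(r, E): return 1e-16 * E ** 2                # total energy-loss rate
def Q_e(r, E):    return np.exp(-r) * E ** -1.5        # e+- source term

def assemble_M(r, E, dt):
    """Sparse M = 1 - A - B acting on u flattened as a = i + j*n_r (0-indexed)."""
    n_r, n_E = len(r), len(E)
    dr = r[1] - r[0]                       # uniform radial grid assumed
    idx = lambda i, j: i + j * n_r
    rows, cols, vals = [], [], []
    for j in range(n_E):
        dE = (E[j + 1] - E[j]) if j + 1 < n_E else (E[j] - E[j - 1])
        for i in range(n_r):
            a = idx(i, j)
            alpha0 = (-2 * D(r[i], E[j]) / dr**2 - dD_dr(r[i], E[j]) / r[i]) * dt
            alphap = (D(r[i], E[j]) / dr**2 + dD_dr(r[i], E[j]) / (2 * dr)) * dt
            alpham = (D(r[i], E[j]) / dr**2 - dD_dr(r[i], E[j]) / (2 * dr)) * dt
            beta0 = -b_loss(r[i], E[j]) / dE * dt
            rows.append(a); cols.append(a); vals.append(1.0 - alpha0 - beta0)
            if i + 1 < n_r:   # u = 0 beyond the outer radial boundary
                rows.append(a); cols.append(idx(i + 1, j)); vals.append(-alphap)
            if i > 0:         # u = 0 at r = 0
                rows.append(a); cols.append(idx(i - 1, j)); vals.append(-alpham)
            if j + 1 < n_E:   # u = 0 at E = m_chi
                betap = b_loss(r[i], E[j + 1]) / dE * dt
                rows.append(a); cols.append(idx(i, j + 1)); vals.append(-betap)
    n = n_r * n_E
    return sp.csc_matrix((vals, (rows, cols)), shape=(n, n))

def solve_equilibrium(r, E, dt0, n_dt_values=5, tol=1e-3):
    """Backward-Euler iterations of Eq. (122), halving dt, until u is stationary."""
    n_r, n_E = len(r), len(E)
    C_base = (r[:, None] * Q_e(r[:, None], E[None, :])).ravel(order="F")
    u = np.zeros(n_r * n_E)
    dt = dt0
    for _ in range(n_dt_values):
        M = assemble_M(r, E, dt)
        while True:
            u_new = spsolve(M, u + C_base * dt)
            converged = np.allclose(u_new, u, rtol=tol, atol=1e-300)
            u = u_new
            if converged:
                break
        dt /= 2.0
    return u.reshape((n_r, n_E), order="F")   # u = r * <f_e>(r, E)

# Toy grids, far coarser than those used in the text.
r = np.linspace(0.5, 49.9, 60)          # radial grid, avoiding r = 0
E = np.geomspace(0.5, 50.0, 30)         # energy grid up to ~m_chi
u_eq = solve_equilibrium(r, E, dt0=1e16)
```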
## Appendix B Simulating Intensity Maps
To generate synthetic data from our background and signal plus background models of M31, correlations between pixels due to the beam size must be correctly modelled. The rms noise is given by \(\sigma_{\text{rms}}=0.25\,\text{mJy/beam}\) in the central region of the radio map of M31 and \(\sigma_{\text{rms}}=0.3\,\text{mJy/beam}\) towards the outside of the map [34] (see Section II). This noise level is independent of the total flux; thus the simulated measurement in pixel \(i\) for an intensity model with flux \(\Phi_{i}\) is given by
\[s_{i}=\Phi_{i}+r_{i} \tag{105}\]
where \(r_{i}\) is the flux from noise in pixel \(i\). These values can be positive or negative. The number of photons collected per beam is large enough that Poisson noise is negligible compared to the rms noise.
In general, the expected observed noise in pixel \(i\) can be written as
\[r_{i}=\int dxdyK(x-x_{i},y-y_{i})\tilde{r}(x,y) \tag{106}\]
where \(K(x-x_{i},y-y_{i})\) is the shape of the beam centered at pixel \(i\) and \(\tilde{r}(x,y)\) is the noise before convolution with the beam. We assume that the beam is a Gaussian, given by
\[K(\Delta x,\Delta y)=\frac{1}{2\pi\sigma_{b}^{2}}\exp{\left[-\frac{\Delta x^{2} +\Delta y^{2}}{2\sigma_{b}^{2}}\right]}, \tag{107}\]
where
\[\sigma_{b}=\frac{(HPBW)}{2\sqrt{2\ln{(2)}}} \tag{108}\]
and \(HPBW\) is the half-power beam-width projected onto the plane of the sky and is given by \(0.34\,\text{kpc}\).
We assume that the noise before convolution is Gaussian distributed and only correlated over length scales much smaller than the size of the beam. Under these conditions the integral in Eq. (106) can be discretized as
\[r_{i}=\delta x\delta y\sum_{\alpha}K(x_{\alpha}-x_{i},y_{\alpha}-y_{i})\tilde{ r}(x_{\alpha},y_{\alpha}) \tag{109}\]
where we have denoted the discretized coordinates with Greek indices and \(\delta x\) and \(\delta y\) are the grid-spacing for these coordinates (chosen to have the same value). These spacings are chosen to be much smaller than the beam but larger than the correlation length of \(\tilde{r}\) so that
\[\langle\tilde{r}_{\alpha}\tilde{r}_{\beta}\rangle=\delta_{\alpha\beta}\tilde{ \sigma}_{\alpha}^{2} \tag{110}\]
where \(\tilde{r}_{\alpha}\equiv\tilde{r}(x_{\alpha},y_{\alpha})\) and \(\tilde{\sigma}_{\alpha}\) is related to \(\sigma_{\mathrm{rms},i}\) through
\[\sigma^{2}_{\mathrm{rms},i}=\langle r_{i}^{2}\rangle=\delta x^{2}\delta y^{2} \sum_{\alpha}\tilde{\sigma}_{\alpha}^{2}K(x_{\alpha}-x_{i},y_{\alpha}-y_{i})^ {2}. \tag{104}\]
To solve for \(\tilde{\sigma}_{\alpha}^{2}\), we make the approximation that \(\tilde{\sigma}_{\alpha}\) is constant over the relevant regions of \(K\) leading to
\[\tilde{\sigma}_{\alpha}^{2}=\frac{4\pi\sigma_{b}^{2}}{\delta x\delta y}\sigma _{\mathrm{rms},i}^{2}. \tag{105}\]
for \(\mathbf{x_{\alpha}}\) near pixel \(i\). To generate noise for our synthetic data, we randomly sample each \(\tilde{r}_{\alpha}\) from a Gaussian with a standard deviation given by \(\tilde{\sigma}_{\alpha}\) and substitute the result into Eq. (109). For \(\tilde{\sigma}_{\alpha}\), we use Eq. (105) where \(i\) is the pixel closest to the point \(\mathbf{x_{\alpha}}\). To avoid edge effects, we allow \(x_{\alpha}\) and \(y_{\alpha}\) to vary beyond the boundaries of the field of view by \(5\sigma_{b}\).
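A compact sketch of this noise-generation step is given below (illustrative only): white noise is drawn on a fine grid with the pre-convolution variance derived above and then convolved with the Gaussian beam. The pixel scale, map size, and oversampling factor are assumptions for the example, and the \(5\sigma_{b}\) padding beyond the field of view is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_noise_map(shape_pix, pix_kpc, hpbw_kpc=0.34, sigma_rms=0.25,
                       oversample=4, rng=None):
    """One correlated-noise realization (same units as sigma_rms, per beam).

    White noise is drawn on a grid `oversample` times finer than the pixel
    grid, with the pre-convolution variance derived above, then convolved
    with the Gaussian beam and read off at the pixel positions.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_b = hpbw_kpc / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # beam sigma
    d = pix_kpc / oversample                                   # fine-grid spacing
    ny, nx = shape_pix[0] * oversample, shape_pix[1] * oversample
    # sigma_tilde**2 = 4*pi*sigma_b**2 * sigma_rms**2 / (dx*dy)
    sigma_tilde = np.sqrt(4.0 * np.pi) * (sigma_b / d) * sigma_rms
    white = rng.normal(0.0, sigma_tilde, size=(ny, nx))
    # gaussian_filter uses a unit-sum discrete kernel, which plays the role of
    # dx*dy*K in the discretized convolution; edge padding is omitted here.
    smooth = gaussian_filter(white, sigma=sigma_b / d, mode="constant")
    return smooth[::oversample, ::oversample]

rng = np.random.default_rng(1)
noise = simulate_noise_map((128, 128), pix_kpc=0.17, rng=rng)  # pixel scale assumed
print(noise.std())   # ~ sigma_rms by construction
# A pseudo-data map is then s_i = Phi_i + r_i for a model flux map Phi.
```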
To make an ensemble of pseudo-data assuming a particular hypothesis, we generate an ensemble of random noise maps and add them to a map of the intensity predicted by the hypothesis. For each combination of signal parameters that we test, we construct an ensemble of background-only maps from a set of \(2\times 10^{4}\) random noise maps and we make an equal sized ensemble of signal plus background maps from an independent set of \(2\times 10^{4}\) random noise maps. For each combination of signal parameters, we use the same set of random noise maps to construct our signal plus background pseudo-data, as we do not need to compare the ensembles of pseudo-data from one signal hypothesis to another.
|
2308.05053
|
On toric foliations
|
In this paper, we provide toric descriptions for the various foliation
singularities on toric varieties, especially for non-dicritical singularities
and F-dlt singularities. We then show the toric foliated minimal model program
works by demonstrating non-dicritical singularities and F-dlt singularities are
preserved, respectively.
|
Chih-Wei Chang, Yen-An Chen
|
2023-08-09T16:32:42Z
|
http://arxiv.org/abs/2308.05053v2
|
# On toric foliations
###### Abstract.
In this paper, we provide toric descriptions for the various foliation singularities on toric varieties, especially for non-dicritical singularities and F-dlt singularities. We then show that the toric foliated minimal model program works by demonstrating that non-dicritical singularities and F-dlt singularities are preserved, respectively.
2020 Mathematics Subject Classification: Primary 14E30, Secondary 32S65, 14M25
## Introduction
In recent years, there have been numerous advancements in the field of birational geometry of foliations. Notably, it has been proven that the minimal model program works for foliations of any rank on a normal variety of dimension at most three (for example, see [20], [14], [15], [16], [17], [18], [21], [22], and [23]), as well as for algebraically integrable foliations under certain assumptions ([1] and [21]).
It is natural to ask for the applicability of the foliated minimal model program (FMMP) to toric foliations. As toric varieties are Mori dream spaces, the minimal model program works for any Weil divisor \(D\). Thus, any singularities involving only discrepancies, such as canonical singularities, will be preserved under FMMP. Therefore, the main goal of FMMP for toric foliations is to show that non-dicritical singularities are preserved under FMMP. In [24], it was shown that FMMP works for toric foliations of corank one with only canonical and non-dicritical singularities.
In this paper, we provide a comprehensive affirmative answer. Specifically, we introduce a version of non-dicritical singularities for foliations of any rank, which generalizes [21, Definition 2.10] and [21, paragraph before Lemma 2.8] to any dimension and any rank (see Definition 1.14), and establish its equivalence to the condition \((\dagger)\) (see Definition 1.18 and Theorem 1.19).
**Theorem 0.1** (=Theorem 1.19).: _Let \(\mathcal{F}=\mathcal{F}_{W}\) be a toric foliation on a complete \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\). Then \(\mathcal{F}_{W}\) is non-dicritical if and only if \((\Sigma,W)\) satisfies the condition \((\dagger)\)._
It is worth noting that there is another version of non-dicritical singularities in [20, Definition 3.6]. There, it is shown that canonical singularities for toric foliations are non-dicritical. But it is not clear in [20] whether non-dicritical singularities are preserved under FMMP. However, we demonstrate (see Proposition 1.24) that the condition \((\dagger)\) is also equivalent to the non-dicritical singularities defined in [20, Definition 3.6]. Therefore, we ask the following question:
**Question 0.2**.: Does our definition for the non-dicritical singularities agree with [20, Definition 3.6] on any normal varieties?
In addition, we provide toric descriptions for various singularities and study the relations among them. More precisely, we have the following:
**Proposition 0.3** (= Proposition 1.11).: _Let \(\mathcal{F}_{W}\) be a toric foliation on a \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\). Then for any \(\tau\in\Sigma\), \(V_{\tau}\subset\operatorname{Sing}(\mathcal{F}_{W})\) if and only if_
\[\dim_{\mathbb{R}}\tau+\dim_{\mathbb{C}}W-\#\{\rho\in\Sigma(1)\mid\rho\subset \tau\cap W\}>\dim_{\mathbb{C}}(W+\mathbb{C}\tau).\]
_Equivalently, \(V_{\tau}\not\subset\operatorname{Sing}(\mathcal{F}_{W})\) if and only if the vector space \(W\cap\mathbb{C}\tau\) can be spanned by some of the rays in \(\tau(1)\)._
**Proposition 0.4** (= Proposition 3.8).: _Let \((\mathcal{F}_{W},\Delta)\) be a toric foliated pair on a \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\)._
1. \((\mathcal{F}_{W},\Delta)\) _is log canonical if and only if_ \(\operatorname{Supp}(\Delta)\subset\bigcup_{\rho\subset W}D_{\rho}(=\operatorname {Supp}(K_{\mathcal{F}_{W}}))\)_._
2. _Suppose_ \(0<\varepsilon<1\)_. Then_ \((\mathcal{F}_{W},\Delta)\) _is_ \(\varepsilon\)_-log canonical if and only if_ \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}(u)\geq\varepsilon\) _for any primitive vector_ \(u\in|\Sigma|\cap N\) _such that_ \(\mathbb{R}_{\geq 0}u\not\in\Sigma(1)\) _where_ \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}\) _is the piecewise linear function associated with_ \(K_{\mathcal{F}_{W}}+\Delta\)_._
3. \(\mathcal{F}_{W}\) _is canonical if and only if for any_ \(\sigma\in\Sigma\)_, the only non-zero elements of_ \(\Pi_{\sigma,W}\cap W\cap N\) _are contained in the facet of_ \(\Pi_{\sigma,W}\) _that does not contain the origin where_ \(\Pi_{\sigma,W}\) _is defined in Definition_ 3.6_._
4. _For any_ \(\sigma\in\Sigma\)_,_ \(\mathcal{F}_{W}\) _is terminal at the generic point of_ \(V_{\sigma}\) _if and only if_ \(\Pi_{\sigma,W}\neq\sigma\) _and the elements of_ \(\Pi_{\sigma,W}\cap W\cap N\) _are vertices of_ \(\Pi_{\sigma,W}\)_._
**Proposition 0.5** (= Proposition 3.9).: _Let \((\mathcal{F}_{W},\Delta)\) be a toric foliated pair on a \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\)._
1. \((\mathcal{F}_{W},\Delta)\) _is foliated log smooth if and only if_ \(\Sigma\) _is smooth and_ \((\Sigma,W)\) _satisfies the condition_ \((\dagger)\)_. Note that_ \((\mathcal{F}_{W},\Delta)\) _may not be log canonical._
2. \((\mathcal{F}_{W},\Delta)\) _is F-dlt if and only if the following statements hold true:_ 1. \(\operatorname{Supp}(\Delta)\subset\bigcup_{\rho\subset W}D_{\rho}\)_._ 2. _For any_ \(\sigma\in\Sigma\) _satisfying_ \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}|_{\sigma}=0\)_, we have_ \(\sigma\) _is smooth and non-dicritical. The latter means that either_ \(\operatorname{relint}(\sigma)\cap W\cap N=\emptyset\) _or_ \(\sigma\subset W\)_._
**Theorem 0.6**.: _Let \((\mathcal{F}_{W},\Delta)\) be a toric foliated pair on a \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\)._
1. (Proposition 2.11) _If_ \(\mathcal{F}_{W}\) _has only simple singularities, then it has at worst canonical singularities._
2. (Corollary 3.10 and Proposition 3.11) _If_ \((\mathcal{F}_{W},\Delta)\) _is F-dlt, then it is log canonical and_ \(\mathcal{F}_{W}\) _is non-dicritical._
3. (Proposition 3.12) _If_ \((\mathcal{F}_{W},\Delta)\) _is canonical, then_ \(\mathcal{F}_{W}\) _is non-dicritical._
4. (Theorem 1.23 and Proposition 3.3) _If_ \(X_{\Sigma}\) _is smooth, then the following statements are equivalent:_ 1. \(\mathcal{F}_{W}\) _is non-dicritical._ 2. \(\mathcal{F}_{W}\) _is strongly non-dicritical_ 3. \(\mathcal{F}_{W}\) _has only simple singularities._
These are achieved by first showing that minimal log discrepancies for foliated pairs on arbitrary normal varieties can be determined at the level of log resolution (see Theorem 2.14) and then toric log resolution always exists for toric foliated pairs (see Theorem 3.5), both of which seem to be missing in [11]. Note that some toric descriptions for canonical and terminal singularities and singular locus are also provided in [11, Theorem 0.2 and Theorem 1.18], which are nevertheless not easy to follow.
Furthermore, we show that the cone theorem holds and FMMP works for log canonical toric foliated pairs; that is, non-dicritical singularities and the F-dlt property are preserved under the minimal model program, respectively.
**Theorem 0.7** (=Corollary 4.9, Cone Theorem).: _Let \((\mathcal{F}_{W},\Delta)\) be a log canonical toric foliated pair on a complete \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\). Then_
\[\overline{\operatorname{NE}}(X)_{K_{\mathcal{F}_{W}}+\Delta<0}=\sum\mathbb{R} _{\geq 0}[M_{i}]\]
_where \(M_{i}\)'s are torus invariant rational curves tangent to \(\mathcal{F}_{W}\)._
**Theorem 0.8** (Propositions 4.1, 4.2, and 4.3).: _Let \((\mathcal{F}_{W},\Delta)\) be a toric foliated pair on a complete \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\). Then FMMP works for \((\mathcal{F}_{W},\Delta)\), and ends with_
1. _either with a toric foliated pair_ \((\mathcal{G},\Delta_{Y})\) _on_ \(Y_{\Sigma^{\prime}}\) _such that_ \(K_{\mathcal{G}}+\Delta_{Y}\) _is nef_
2. _or with a fibration_ \(\pi:X\to Z\) _and_ \(\mathcal{F}_{W}\) _is pulled back from a foliation_ \(\mathcal{H}\) _on_ \(Z\)_._
_Moreover, assuming the pair \((\mathcal{F}_{W},\Delta)\) is log canonical, if \(\mathcal{F}_{W}\) is non-dicritical (resp. \((\mathcal{F}_{W},\Delta)\) is F-dlt), then both \(\mathcal{G}\) and \(\mathcal{H}\) are non-dicritical (resp. both pairs \((\mathcal{G},\Delta_{Y})\) and \((\mathcal{H},\pi_{*}\Delta)\) are F-dlt)._
Last but not least, we show the existence of foliated log resolutions (Theorem 3.5) and F-dlt modifications (Theorem 3.15) for toric foliated pairs of any rank and any dimension.
## Acknowledgements
The authors would like to thank National Center for Theoretical Sciences (NCTS) for the wonderful research environment. They would also like to express their gratitude to Iacopo Brivio, Paolo Cascini, Jungkai Chen, Shin-Yao Jow, Ching-Jui Lai, and Calum Spicer for helpful discussions. The authors were partially supported by Lab. Bir. Geom. Grant number 111-2123-M-002 -012-.
## 1. Preliminaries
We will exclusively work over the field of complex numbers \(\mathbb{C}\). For any sheaves \(\mathcal{M}\) and \(\mathcal{N}\) on a normal variety \(X\), we denote \((\mathcal{M}\otimes\mathcal{N})^{**}\) and \((\mathcal{M}^{\otimes n})^{**}\) as \(\mathcal{M}\boxtimes\mathcal{N}\) and \(\mathcal{M}^{[n]}\), respectively.
### Basics on foliation
In this subsection, most of the definitions follow from [13] and [15]. Let \(X\) be a normal variety. A _foliation_ is a coherent subsheaf \(\mathcal{F}\) of the tangent sheaf \(\mathcal{T}_{X}\) such that
1. \(\mathcal{F}\) is saturated, that is \(\mathcal{T}_{X}/\mathcal{F}\) is torsion-free, and
2. \(\mathcal{F}\) is closed under the Lie bracket.
Let \(r=\operatorname{rank}(\mathcal{F})\) be the _rank_ of the foliation and \(c=\dim X-r\) be the _corank_ of the foliation. The canonical divisor \(K_{\mathcal{F}}\) is a Weil divisor on \(X\) such that \(\mathcal{O}_{X}(-K_{\mathcal{F}})\cong\det\mathcal{F}\).
We define the _normal sheaf_ of \(\mathcal{F}\) as \(\mathcal{N}_{\mathcal{F}}:=(\mathcal{T}_{X}/\mathcal{F})^{[1]}\). By taking the \(r\)-th wedge product of \(\mathcal{N}_{\mathcal{F}}^{*}\to\Omega_{X}^{[1]}\), we obtain a twisted form \(\omega\in\operatorname{H}^{0}(X,\Omega_{X}^{r}\boxtimes\det\mathcal{N}_{ \mathcal{F}})\). Here \(\omega\) satisfies the following properties:
1. The zero locus of \(\omega\) has codimension greater than or equal to two.
2. \(\omega\) is locally decomposable, meaning that locally \(\omega=\bigwedge_{i}\omega_{i}\) where \(\omega_{i}\)'s are 1-forms.
3. \(\omega\) is integrable, that is, \(\operatorname{d}\!\omega_{i}\wedge\omega=0\) for all \(i\).
Conversely, given a Weil divisor \(D\), a twisted form \(\omega\in\operatorname{H}^{0}(X,\Omega_{X}^{r}\boxtimes\mathcal{O}_{X}(D))\) gives rise to a subsheaf \(\ker(\mathcal{T}_{X}\to\Omega_{X}^{r-1}\boxtimes\mathcal{O}_{X}(D))\) of \(\mathcal{T}_{X}\), where the map in the parentheses is the contraction via \(\omega\). If \(\omega\) is locally decomposable and integrable, then this kernel is a foliation.
Let \(\pi:Y\dasharrow X\) be a dominant rational map between normal varieties and \(\mathcal{F}\) be a foliation on \(X\). We denote by \(\pi^{-1}\mathcal{F}\) the _induced foliation_ on \(Y\) (see, for example, [15, Section 3.2]). If \(f:X\dasharrow X^{\prime}\) is birational, then \(f_{*}\mathcal{F}\) represents the transformed foliation on \(X^{\prime}\) induced by \(f^{-1}\).
Let \(X^{\circ}\) be the open subset of \(X\) such that \(\mathcal{F}|_{X^{\circ}}\) is a subbundle of \(T_{X^{\circ}}\). A _leaf_\(L\) is a maximal connected and immersed holomorphic submanifold \(L\subset X^{\circ}\) such that \(T_{L}=\mathcal{F}|_{L}\).
A foliation \(\mathcal{F}\) is called _algebraically integrable_ if its leaves are algebraic. Equivalently, an algebraically integrable foliation \(\mathcal{F}\) on \(X\) is induced from a dominant rational map \(f:X\dasharrow Y\) for some normal variety \(Y\) (see, for example, [15, Sections 3.2 and 3.6]).
**Definition 1.1** (Singular locus).: Let \(\mathcal{F}\) be a foliation on \(X\). We obtain a morphism \(\phi:\Omega_{X}^{[r]}\to\mathcal{O}_{X}(K_{\mathcal{F}})\) by taking the double dual of the \(r\)-th wedge product of \(\Omega_{X}^{[1]}\to\mathcal{F}^{*}\), which is induced by the inclusion \(\mathcal{F}\subset\mathcal{T}_{X}\). We define the _singular locus_ of \(\mathcal{F}\), denoted by \(\operatorname{Sing}(\mathcal{F})\), as the co-support of the image of
\[\phi^{\prime}:\Omega_{X}^{[r]}(-K_{\mathcal{F}})\to\mathcal{O}_{X}.\]
**Definition 1.2** (Invariance).:
1. Let \(X\) be a normal variety and \(\mathcal{F}\) be a foliation of any rank. We say that a subvariety \(S\subset X\) is \(\mathcal{F}\)-invariant if for any subset \(U\subset X\) and any section \(\partial\in\mathrm{H}^{0}(U,\mathcal{F})\), we have \[\partial(\mathcal{I}_{S\cap U})\subset\mathcal{I}_{S\cap U}\] where \(\mathcal{I}_{S\cap U}\) is the ideal sheaf of \(S\cap U\).
2. For any prime divisor \(D\), we define \(\iota(D)=0\) if \(D\) is invariant and \(1\) otherwise.
**Proposition 1.3**.: _Let \(X\) be a normal variety and \(\mathcal{F}\) be a foliation on \(X\). Then \(\mathrm{Sing}(X)\) is \(\mathcal{F}\)-invariant._
Proof.: By [13, Theorem 5], \(\mathrm{Sing}(X)\) is invariant under any derivation. In particular, it is invariant under \(\mathcal{F}\).
**Lemma 1.4** ([14, Lemma 3.5]).: _Let \(\mathcal{F}\) be a foliation on a smooth variety \(X\). Then \(\mathrm{Sing}(\mathcal{F})\) is \(\mathcal{F}\)-invariant._
**Definition 1.5** (Tangency).: Let \(X\) be a normal variety and \(\mathcal{F}\) be a foliation of any rank. Given a (possibly analytic) subvariety \(Z\subset X\) not contained in \(\mathrm{Sing}(X)\cup\mathrm{Sing}(\mathcal{F})\), we say \(Z\) is _tangent_ to \(\mathcal{F}\) if, over \(X\setminus\left(\mathrm{Sing}(X)\cup\mathrm{Sing}(\mathcal{F})\cup\mathrm{Sing} (Z)\right)\), the inclusion \(\mathcal{T}_{Z}\subset\mathcal{T}_{X}\) factors through \(\mathcal{F}\). Otherwise, we say that \(Z\) is transversal to \(\mathcal{F}\).
### Basics on toric varieties
In this subsection, we recall some fundamental properties and fix the notation. The presentation closely follows [10].
Let \(N\simeq\mathbb{Z}^{n}\) be a lattice of rank \(n\) and \(M\) be its dual lattice. A _fan_\(\Sigma\) in \(N_{\mathbb{R}}:=N\otimes_{\mathbb{Z}}\mathbb{R}\) is a finite collection of rational, strongly convex, polyhedral cones \(\sigma\subset N_{\mathbb{R}}\), such that each face \(\tau\) of a cone \(\sigma\in\Sigma\) belongs to \(\Sigma\) and the intersection of any two cones in \(\Sigma\) is a face of each. For any \(k\in\mathbb{Z}_{\geq 0}\), we denote \(\Sigma(k)\) as the subset of \(\Sigma\) consisting of \(k\)-dimensional cones.
For each cone \(\sigma\in\Sigma\), we associate an affine variety \(U_{\sigma}=\mathrm{Spec}\,\mathbb{C}[\sigma^{\vee}\cap M]\) where \(\sigma^{\vee}\) is the dual cone of \(\sigma\). The toric variety associated with the fan \(\Sigma\) is constructed by gluing all \(U_{\sigma}\)'s together. Notably, \(T:=U_{\{0\}}=\mathrm{Spec}\,\mathbb{C}[M]\) is an open subset of \(X_{\Sigma}\) and is a torus. Moreover, the action of \(T\) on itself can be extended to an action on \(X_{\Sigma}\).
In addition, for each \(\sigma\in\Sigma\), we denote \(O_{\sigma}\) as the orbit of the distinguished point \(\gamma_{\sigma}\) and \(V_{\sigma}\) as the closure of \(O_{\sigma}\). (For further details, see [10, Chapter 3].) If \(\rho\in\Sigma(1)\) is a ray, then \(V_{\rho}\) is a divisor and will also be denoted as \(D_{\rho}\).
### Toric foliations
Let \(X=X_{\Sigma}\) be a toric variety. A subsheaf \(\mathcal{F}\subset\mathcal{T}_{X}\) is called \(T\)-_invariant_ or _torus invariant_ if for any \(t\in T\) we have \(t^{*}\mathcal{F}=\mathcal{F}\) as subsheaves under the natural isomorphism \(t^{*}\mathcal{T}_{X}\simeq\mathcal{T}_{X}\). A foliation \(\mathcal{F}\subset\mathcal{T}_{X}\) is called a _toric foliation_ if \(\mathcal{F}\) is \(T\)-invariant.
**Proposition 1.6**.: _Let \(\Sigma\) be a fan in \(N\). Then there is a one-to-one correspondence between the set of toric foliations on \(X_{\Sigma}\) and the set of complex subspaces \(W\subset N_{\mathbb{C}}:=N\otimes_{\mathbb{Z}}\mathbb{C}\)._
Proof.: If \(\mathcal{F}\) is a toric foliation, then \(\mathcal{F}|_{T}\) is a \(T\)-invariant vector sub-bundle of the tangent bundle \(\mathcal{T}_{T}\), which gives rise to a complex vector subspace \(W:=(\mathcal{F}|_{T})_{1}\subset\mathcal{T}_{T,1}=N_{\mathbb{C}}\). It is worth noting that any two foliations that agree on an open dense subset must be the same (see [12, Lemma 1.8]). Therefore, \(\mathcal{F}\) is uniquely determined by \(W\).
Conversely, given any complex vector subspace \(W\subset N_{\mathbb{C}}\), we can extend it via the \(T\)-action to a \(T\)-invariant subbundle \(\mathcal{E}\subset\mathcal{T}_{T}\). Since the Lie bracket on \(\mathcal{T}_{T}\) is trivial, \(\mathcal{E}\) becomes a foliation. We can then uniquely extend \(\mathcal{E}\) to a toric foliation \(\mathcal{F}\) on \(X_{\Sigma}\).
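For example, take \(N=\mathbb{Z}^{2}\) and \(W=\mathbb{C}\cdot(1,\lambda)\subset N_{\mathbb{C}}\) for some \(\lambda\in\mathbb{C}^{*}\). On the torus \(T=(\mathbb{C}^{*})^{2}\), the corresponding toric foliation is generated by the vector field
\[x\frac{\partial}{\partial x}+\lambda y\frac{\partial}{\partial y},\]
whose leaves are the images of \(t\mapsto(e^{t}x_{0},e^{\lambda t}y_{0})\). For \(\lambda\notin\mathbb{Q}\) these leaves are not algebraic, so a toric foliation need not be algebraically integrable.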
### Local generators
Throughout this subsection, \(N\cong\mathbb{Z}^{n}\) is a fixed lattice, \(M\) is the dual lattice of \(N\), \(\Sigma\) is a fan in \(N_{\mathbb{R}}\), and \(W\) is a complex vector subspace of \(N\otimes\mathbb{C}\). In [14], a set of local generators for \(\mathcal{F}_{W}\) is provided. We include it here for the convenience of the readers. Please note that in [14], it should be \(N_{\mathbb{C}}\) instead of \(N_{\mathbb{R}}\).
Now we observe that for any ray \(\rho\in\Sigma(1)\) with the primitive generator \(v_{\rho}\), we have
\[\rho^{\vee}\cap M\cong\mathbb{Z}_{\geq 0}m_{\rho}\oplus\bigoplus_{i=2}^{n} \mathbb{Z}m_{i}\]
where \(m_{2},\ldots,m_{n}\) are generators of \(\rho^{\perp}\cap M\) and \(\langle m_{\rho},v_{\rho}\rangle=1\). Let
\[\delta_{v}=\langle m_{\rho},v\rangle\chi^{m_{\rho}}\frac{\partial}{\partial \chi^{m_{\rho}}}+\sum_{i=2}^{n}\langle m_{i},v\rangle\chi^{m_{i}}\frac{ \partial}{\partial\chi^{m_{i}}}.\]
Note that \(\mathcal{T}_{X_{\Sigma}}|_{U_{\rho}}\) is generated by
\[\frac{\partial}{\partial\chi^{m_{\rho}}},\,\chi^{m_{2}}\frac{\partial}{ \partial\chi^{m_{2}}},\ldots,\,\chi^{m_{n}}\frac{\partial}{\partial\chi^{m_{n }}}.\]
**Lemma 1.7** ([15, Lemma 2.1.12]).: _Let \(W\) be an \(r\)-dimensional complex vector subspace of \(N_{\mathbb{C}}\) and let \(\Sigma\) be a fan in \(N_{\mathbb{R}}\). For any ray \(\rho\in\Sigma(1)\) with the primitive generator \(v_{\rho}\), if \(\rho\subset W\), then we choose \(v_{2},\ldots,v_{r}\) so that \(\{v_{\rho},v_{2},\ldots,v_{r}\}\) forms a basis for \(W\). Otherwise, we just choose \(\{v_{1},\ldots,v_{r}\}\) to be a basis for \(W\). Then \(\mathcal{F}_{W}|_{U_{\rho}}\) is generated by_
\[\begin{array}{ll}\delta_{v_{1}},\ldots,\delta_{v_{r}}&\text{if }\rho\not \subset W\\ \frac{1}{\chi^{m_{\rho}}}\delta_{v_{\rho}},\delta_{v_{2}},\ldots,\delta_{v_{r} }&\text{if }\rho\subset W.\end{array}\]
**Remark 1.8**.:
1. \(\frac{1}{\chi^{m_{\rho}}}\delta_{v_{\rho}}=\frac{\partial}{\partial\chi^{m_{ \rho}}}\)_._
2. _Let_ \(N\simeq\mathbb{Z}e_{1}\oplus\cdots\oplus\mathbb{Z}e_{n}\) _and let_ \(X=U_{\sigma}\) _where_ \(\sigma=\operatorname{Cone}(\mathrm{e}_{1},\ldots,\mathrm{e}_{n})\)_. Let_ \(\{m_{1},\ldots,m_{n}\}\) _be the_ \(\mathbb{Z}\)_-basis for_ \(M\) _which is dual to_ \(\{e_{1},\ldots,e_{n}\}\)_. Then we have_ \(\sigma^{\vee}=\langle m_{1},\ldots,m_{n}\rangle\)_. After re-indexing, we can assume that_ \(\{\rho\in\sigma(1)\mid\rho\subset W\}=\{\mathbb{R}_{\geq 0}e_{1},\ldots, \mathbb{R}_{\geq 0}e_{\ell}\}\)_. Let_ \(\{v_{1},\ldots,v_{r}\}\) _be a_ \(\mathbb{C}\)_-basis for_ \(W\) _so that_ \(v_{i}=e_{i}\) _for_ \(1\leq i\leq\ell\)_. Then by Lemma_ 1.7_,_ \(\mathcal{F}_{W}\) _is generated by_ \[\frac{1}{\chi^{m_{1}}}\delta_{v_{1}},\ldots,\frac{1}{\chi^{m_{\ell}}}\delta_{v_ {\ell}},\delta_{v_{\ell+1}},\ldots,\delta_{v_{r}}\in\operatorname{Der}_{ \mathbb{C}}(\mathbb{C}[\sigma^{\vee}\cap M])\] _over_ \(U=\bigcup_{\rho\in\sigma(1)}U_{\rho}\)_, and hence over_ \(U_{\sigma}\)_, as the reflexive sheaf_ \(\mathcal{F}_{W}\) _is normal in the sense of_ _[_11_, Definition 1.1.11]__. In particular,_ \(\mathcal{F}_{W}\) _is always locally free if_ \(X_{\Sigma}\) _is smooth._
**Proposition 1.9**.: _Let \(\mathcal{F}=\mathcal{F}_{W}\) be a toric foliation on a toric variety \(X_{\Sigma}\). Then \(K_{\mathcal{F}}+\sum_{\rho\subset W}D_{\rho}\sim 0\). In particular, we can write \(K_{\mathcal{F}}=-\sum_{\rho\subset W}D_{\rho}\)._
Proof.: It is worth noting that the codimension of \(\bigcup_{\rho\in\Sigma(1)}U_{\rho}\) in \(X_{\Sigma}\) is \(2\). The proof closely follows that of [15, Theorem 2.1.8].
### Singular locus of a toric foliation
In this subsection, we present a combinatorial criterion to determine whether the orbit closure is contained in the singular locus of a toric foliation. To establish this criterion, we rely on the following lemma, which allows us to reduce the problem to the smooth case.
**Lemma 1.10**.: _Let \(N\) be a lattice of rank \(n\), \(\sigma\) be a simplicial strongly convex rational polyhedral cone of dimension \(n\), and \(W\) be a complex vector subspace of \(N_{\mathbb{C}}\). There is a sublattice \(N^{\prime}\) of \(N\) such that \(\sigma\) is smooth with respect to \(N^{\prime}\), which induces a finite covering \(\pi:U_{\sigma,\,N^{\prime}}\to U_{\sigma,\,N}\). Let \(\mathcal{F}_{W,N^{\prime}}\) (resp. \(\mathcal{F}_{W,\,N}\)) be the toric foliation on \(U_{\sigma,\,N^{\prime}}\) (resp. \(U_{\sigma,\,N}\)) given by \(W\). Then we have \(\operatorname{Sing}(\mathcal{F}_{W,\,N^{\prime}})=\pi^{-1}(\operatorname{Sing}(\mathcal{F}_{W,\,N}))\)._
Proof.: Let \(N^{\prime}\) be the sublattice of \(N\) generated by \(v_{\rho}\)'s for \(\rho\in\sigma(1)\). So \(\sigma\) is a smooth cone with respect to \(N^{\prime}\). Moreover, it introduces a finite covering \(\pi:U_{\sigma,\,N^{\prime}}\to U_{\sigma,\,N}\).
As \(\operatorname{Sing}(\mathcal{F}_{W,\,N})\) is torus invariant, there are some cones \(\tau_{i}\prec\sigma\) such that \(\operatorname{Sing}(\mathcal{F}_{W,\,N})=\bigcup_{i=1}^{\ell}V_{\tau_{i},\,N}\), the decomposition into irreducible components. Now we consider
\[\mathcal{C} =\{\tau\mid\tau_{i}\prec\tau\prec\sigma\text{ for some }i\},\] \[\Sigma^{\prime}_{0} =\{\tau\mid\tau\prec\sigma\}\setminus\mathcal{C},\text{ and}\] \[\Sigma^{\prime}_{i} =\Sigma^{\prime}_{0}\cup\{\tau_{i}\}.\]
It is an immediate check that both \(\Sigma^{\prime}_{0}\) and \(\Sigma^{\prime}_{i}\) are indeed fans. Actually, we have \(X_{\Sigma^{\prime}_{0},\,N}=U_{\sigma}\setminus\operatorname{Sing}(\mathcal{ F}_{W,N})\) and \(X_{\Sigma^{\prime}_{i},\,N}=(U_{\sigma}\setminus\operatorname{Sing}(\mathcal{F}_{W,N}))\cup\mathcal{O}_{\tau_{i}}\). One can see that \(X_{\Sigma^{\prime}_{i},\,N}\) is an open subscheme of \(U_{\sigma,\,N}\) for \(0\leq i\leq\ell\), and thus the base change \(\pi^{\prime}:X_{\Sigma^{\prime}_{i},\,N^{\prime}}\to X_{\Sigma^{\prime}_{i}, \,N}\) is finite and surjective.
Since \(X_{\Sigma^{\prime}_{0},\,N}\) has no foliation singularities, by [3, Proposition 5.13], \(X_{\Sigma^{\prime}_{0},\,N^{\prime}}\) has no foliation singularities, and thus \(\operatorname{Sing}(\mathcal{F}_{W,\,N^{\prime}})\subset\bigcup_{i=1}^{\ell}V _{\tau_{i},\,N^{\prime}}\). If this is not an equality, then there is an \(i\neq 0\) such that \(X_{\Sigma^{\prime}_{i},\,N^{\prime}}\) has no foliation singularities. Thus \(X_{\Sigma^{\prime}_{i},\,N}\) has no foliation singularities by [3, Proposition 5.13], which contradicts \(V_{\tau_{i},\,N}\subset\operatorname{Sing}(\mathcal{F}_{W,\,N})\). Therefore, \(\operatorname{Sing}(\mathcal{F}_{W,\,N^{\prime}})=\bigcup_{i=1}^{\ell}V_{ \tau_{i},\,N^{\prime}}\) and therefore \(\operatorname{Sing}(\mathcal{F}_{W,\,N^{\prime}})=\pi^{-1}(\operatorname{Sing }(\mathcal{F}_{W,\,N}))\).
**Proposition 1.11**.: _Let \(\mathcal{F}_{W}\) be a toric foliation on a \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\). Then for any \(\tau\in\Sigma\), \(V_{\tau}\subset\operatorname{Sing}(\mathcal{F}_{W})\) if and only if_
\[\dim_{\mathbb{R}}\tau+\dim_{\mathbb{C}}W-\#\{\rho\in\Sigma(1)\mid\rho\subset \tau\cap W\}>\dim_{\mathbb{C}}(W+\mathbb{C}\tau).\]
_Equivalently, \(V_{\tau}\not\subset\operatorname{Sing}(\mathcal{F}_{W})\) if and only if the vector space \(W\cap\mathbb{C}\tau\) can be spanned by some of the rays in \(\tau(1)\)._
Proof.: Let \(\dim_{\mathbb{R}}\tau=s\), \(\dim_{\mathbb{C}}W=r\), and \(\#\{\rho\in\Sigma(1)\mid\rho\subset\tau\cap W\}=t\).
By Lemma 1.10, we can assume that \(X\) is smooth. As this is a local problem, we can work on \(X=U_{\sigma}\) where \(\sigma=\langle e_{1},\ldots,e_{n}\rangle\) is a smooth strongly convex rational polyhedral cone of dimension \(n\), \(\{\rho\in\sigma(1)\mid\rho\subset\tau\cap W\}=\{\mathbb{R}_{\geq 0}e_{1}, \ldots,\mathbb{R}_{\geq 0}e_{t}\}\), \(\{\rho\in\sigma(1)\mid\rho\subset W\}=\{\mathbb{R}_{\geq 0}e_{1},\ldots, \mathbb{R}_{\geq 0}e_{\ell}\}\), and \(\tau=\langle e_{1},\ldots,e_{t},e_{\ell+1},\ldots,e_{\ell+s-t}\rangle\).
Let \(\{v_{1},\ldots,v_{r}\}\) be a \(\mathbb{C}\)-basis for \(W\) such that \(v_{i}=e_{i}\) for \(1\leq i\leq\ell\), and let \(\{m_{1},\ldots,m_{n}\}\) be the \(\mathbb{Z}\)-basis for \(M\) dual to \(\{e_{1},\ldots,e_{n}\}\). Then by Remark 1.8(2), we have the following set of generators for \(\mathcal{F}_{W}\) on \(U_{\sigma}\):
\[\frac{1}{\chi^{m_{1}}}\delta_{v_{1}},\ldots,\frac{1}{\chi^{m_{\ell}}}\delta_{v_ {\ell}},\delta_{v_{\ell+1}},\ldots,\delta_{v_{r}}.\]
Writing down the coefficients with respect to \(\frac{\partial}{\partial\chi^{m_{i}}}\)'s, we have
\[A=\left(\begin{array}{c|c}I_{\ell}&0\\ \hline *&B\end{array}\right),\qquad B=\Big(\langle m_{i},v_{k}\rangle\chi^{m_{i}}\Big)_{\ell+1\leq k\leq r,\ \ell+1\leq i\leq n},\]
where the first \(\ell\) rows record the coefficients of \(\frac{1}{\chi^{m_{1}}}\delta_{v_{1}},\ldots,\frac{1}{\chi^{m_{\ell}}}\delta_{v_{\ell}}\) and the last \(r-\ell\) rows those of \(\delta_{v_{\ell+1}},\ldots,\delta_{v_{r}}\). Now \(V_{\tau}\subset\operatorname{Sing}(\mathcal{F}_{W})\) if and only if the matrix obtained from \(A\) by evaluating at the generic point of \(O_{\tau}\), that is, by setting \(\chi^{m_{i}}=0\) for every \(i\) with \(e_{i}\in\tau\),
is not of full rank. Hence it is equivalent to the existence of the choice of \(v_{\ell+1}\) such that \(\langle m_{i},v_{\ell+1}\rangle=0\) for \(i\geq\ell+s-t+1\). By possibly adding a vector \(v^{\prime}\in\operatorname{span}(v_{1},\ldots,v_{\ell})\) to \(v_{\ell+1}\), this is the same as saying that we can choose \(v_{\ell+1}\) such that \(v_{\ell+1}\in W\cap\mathbb{C}\tau\), which is in turn equivalent to \(\dim_{\mathbb{C}}(W\cap\mathbb{C}\tau)>t\).
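To illustrate the criterion, let \(N=\mathbb{Z}^{2}\), \(\sigma=\operatorname{Cone}(e_{1},e_{2})\), and \(W=\mathbb{C}\cdot(e_{1}+e_{2})\). For \(\tau=\sigma\) we have \(W\cap\mathbb{C}\tau=W\), which is not spanned by any subset of the rays of \(\sigma\), so \(V_{\sigma}\subset\operatorname{Sing}(\mathcal{F}_{W})\); indeed, on \(U_{\sigma}\simeq\mathbb{A}^{2}\) the foliation is generated by
\[\delta_{e_{1}+e_{2}}=x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y},\]
which vanishes at the torus-fixed point. For \(\tau=\mathbb{R}_{\geq 0}e_{1}\), on the other hand, \(W\cap\mathbb{C}\tau=0\) is spanned by the empty set of rays, and \(D_{\rho}\not\subset\operatorname{Sing}(\mathcal{F}_{W})\) for \(\rho=\mathbb{R}_{\geq 0}e_{1}\).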
### Properties
Let \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) be two foliations on an arbitrary normal variety \(X\). Note that the intersection \(\mathcal{F}_{1}\cap\mathcal{F}_{2}\) also gives a foliation. Moreover, if \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) are toric foliations on a toric variety \(X_{\Sigma}\) given by \(W_{1}\) and \(W_{2}\), respectively, then \(\mathcal{F}_{1}\cap\mathcal{F}_{2}\) is also a foliation which is the toric foliation given by \(W_{1}\cap W_{2}\).
Suppose \(X\simeq\mathbb{A}^{n}\) with coordinates \(x_{1},\ldots,x_{n}\) and \(\mathcal{F}=\mathcal{F}_{H}\) is the toric foliation of corank one given by the hyperplane \(H\subset N_{\mathbb{C}}(=\mathcal{T}_{T,1})\). Then \(H\) is nothing but
\[\left\{v\in\mathcal{T}_{T,1}\mid\left\langle v,\sum_{i=1}^{n}a_{i}\,\mathrm{d }x_{i}\right\rangle=0\right\}\]
where \(a_{i}\in\mathbb{C}\) for all \(i\). Therefore, \(\mathcal{F}\) is given by the \(T\)-invariant \(1\)-form \(\omega=\sum_{i=1}^{n}a_{i}\frac{\mathrm{d}x_{i}}{x_{i}}\). Moreover, if \(\mathcal{F}_{H_{1}},\ldots,\mathcal{F}_{H_{\ell}}\) are distinct foliations given by \(1\)-forms \(\omega_{1},\ldots,\omega_{\ell}\), respectively, then \(\bigcap_{i=1}^{\ell}\mathcal{F}_{H_{i}}\) is exactly the foliation given by the saturation of \(\bigwedge_{i=1}^{\ell}\omega_{i}\), and a toric foliation given by \(\bigcap_{i=1}^{\ell}H_{i}\).
**Proposition 1.12**.:
1. _Let_ \(\mathcal{F}_{1}\) _and_ \(\mathcal{F}_{2}\) _be two foliations on an arbitrary normal variety_ \(X\)_. If_ \(Z\subset X\) _is a prime divisor that is invariant with respect to_ \(\mathcal{F}_{1}\)_, then it is invariant with respect to_ \(\mathcal{F}_{1}\cap\mathcal{F}_{2}\)_._
2. _Let_ \(\mathcal{F}_{W}\) _be a toric foliation on a toric variety_ \(X_{\Sigma}\)_. Then for any_ \(\rho\in\Sigma(1)\)_,_ \(D_{\rho}\) _is_ \(\mathcal{F}\)_-invariant if and only if_ \(\rho\not\subset W\)_._
Proof.:
1. We notice that \((\mathcal{F}_{1}\cap\mathcal{F}_{2})|_{U}\subset\mathcal{F}_{1}|_{U}\) where \(U\) is the intersection of the smooth loci of \(\mathcal{F}_{1}\) and that of \(\mathcal{F}_{1}\cap\mathcal{F}_{2}\). If \(Z\) is invariant by \(\mathcal{F}_{1}\), then it is also invariant under a smaller foliation.
2. Here, we provide a proof using differential forms. Let \(\phi:X^{\prime}_{\Sigma^{\prime}}\to X_{\Sigma}\) be a resolution of singularities obtained through a sequence of star subdivisions of \(\Sigma\). Near the generic point of \(D_{\rho}\), \(\phi\) is an isomorphism. Therefore, we can assume that \(X_{\Sigma}\) is smooth. Instead of applying [23, Lemma 3.4], we provide the following computation to shed more light on the properties of toric foliations. Since the problem is local, we can assume that \(X\simeq\mathbb{A}^{n}\) with coordinates \(x_{1},\ldots,x_{n}\), and \(D_{\rho}\) is defined by (\(x_{1}=0\)). By expressing \(W\) as an intersection of hyperplanes \(W=\bigcap_{i=1}^{c}H_{i}\) where \(c\) is the corank of \(\mathcal{F}_{W}\), we can further assume that \(\mathcal{F}_{W}\) is given by the \(T\)-invariant form \[\omega=\omega_{1}\wedge\cdots\wedge\omega_{c}=\sum_{1\leq i_{1}<\cdots<i_{c}\leq n}b_{i_{1}\cdots i_{c}}\frac{\mathrm{d}x_{i_{1}}}{x_{i_{1}}}\wedge\cdots\wedge\frac{\mathrm{d}x_{i_{c}}}{x_{i_{c}}}\] where \(b_{i_{1}\cdots i_{c}}\in\mathbb{C}\) and \(\omega_{i}=\sum_{j=1}^{n}a_{i,j}\frac{\mathrm{d}x_{j}}{x_{j}}\), \(a_{i,j}\in\mathbb{C}\). If \(\rho\subset W\), then \(\rho\subset H_{i}\) for all \(1\leq i\leq c\). Consequently, \(a_{i,1}=0\) for all \(1\leq i\leq c\) and therefore \(b_{i_{1}\cdots i_{c}}=0\) when \(i_{1}=1\). As a result, \(D_{\rho}\) is not foliation invariant as \(\omega\wedge\mathrm{d}x_{1}|_{x_{1}=0}\neq 0\). On the other hand, if \(\rho\not\subset W\), we can assume that \(\rho\not\subset H_{1}\) and \(\rho\subset H_{i}\) for each \(i\geq 2\). In this case, there exists a tuple \((i_{1},\ldots,i_{c})\) with \(b_{i_{1}\cdots i_{c}}\neq 0\) such that \(i_{1}=1\). This implies that \(D_{\rho}\) is foliation invariant.
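For instance, on \(X=\mathbb{A}^{2}\) with \(W=\mathbb{C}e_{1}\), the corank-one foliation \(\mathcal{F}_{W}\) is defined by the \(T\)-invariant form
\[\omega=\frac{\mathrm{d}x_{2}}{x_{2}}\]
and is generated by \(\frac{\partial}{\partial x_{1}}\); its leaves are \(\{x_{2}=\text{const}\}\). The divisor \(D_{\rho_{2}}=\{x_{2}=0\}\), with \(\rho_{2}=\mathbb{R}_{\geq 0}e_{2}\not\subset W\), is a leaf and hence invariant, while \(D_{\rho_{1}}=\{x_{1}=0\}\), with \(\rho_{1}=\mathbb{R}_{\geq 0}e_{1}\subset W\), is transverse to every leaf and is not invariant.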
**Proposition 1.13**.:
1. _Suppose_ \(f:X_{\Sigma}\to Y_{\Sigma^{\prime}}\) _is a dominant toric morphism defined by a surjective map_ \(\tilde{f}:N\to N^{\prime}\) _between lattices. Then_ \(f\) _induces a toric foliation on_ \(X_{\Sigma}\)_, given by_ \(W=\ker(\tilde{f})\otimes\mathbb{C}\subset N_{\mathbb{C}}\)_, whose leaves are fibers of_ \(f\)
2. _Let_ \(\mathcal{F}=\mathcal{F}_{W}\) _be a toric foliation. Then for any divisor_ \(D_{\rho}\) _with_ \(\rho\in\Sigma(1)\)_, there is an induced toric foliation on_ \(D_{\rho}\)_, given by_ \(\overline{W+\mathbb{C}\rho}\subset N_{\mathbb{C}}/\mathbb{C}\rho=(N/\mathbb{Z}v_{\rho})\otimes_{\mathbb{Z}}\mathbb{C}\)_, the image of_ \(W+\mathbb{C}\rho\) _under the quotient map_ \(N_{\mathbb{C}}\to N_{\mathbb{C}}/\mathbb{C}\rho\)_._
Proof of Theorem 1.19.: We can assume that \(1\leq\operatorname{rank}(\mathcal{F}_{W}):=r\leq n-1\) since it is clear when \(r=0\) or \(r=n\).
(Only if part) Suppose \(\mathcal{F}_{W}\) is non-dicritical. Assume, for the sake of contradiction, that \((\Sigma,W)\) does not satisfy the condition \((\dagger)\); that is, there exists a cone \(\tau\not\subset W\) such that \(\operatorname{relint}(\tau)\cap W\cap N\neq\emptyset\). We can assume that \(\dim\tau\) is maximal among all cones violating \((\dagger)\). Let \(v\in\operatorname{relint}(\tau)\cap W\cap N\) be a primitive element.
Let \(\Sigma^{\prime}\) be the star subdivision along \(\rho=\mathbb{R}_{\geq 0}v\). Note that \(\pi:X^{\prime}_{\Sigma^{\prime}}\to X_{\Sigma}\) is a birational morphism with the exceptional divisor \(D_{\rho}\). Furthermore, \(\pi\) induces a fibration \(f:D_{\rho}\to V_{\tau}\). Since \(\rho\subset W\), \(D_{\rho}\) is not foliation-invariant by Proposition 1.12(2). As \(\mathcal{F}_{W}\) is non-dicritical, we have \(\dim V_{\tau}\geq n-r\) where \(r\) is the rank of \(\mathcal{F}_{W}\). Thus
\[s:=\dim\tau\leq r. \tag{1}\]
Additionally, we have \(s\geq 2\), as otherwise, we have \(\tau\subset W\).
By Proposition 1.13, we observe that it is sufficient to show that the general fiber of \(f\) is tangent to \(\mathcal{F}_{W}|_{D_{\rho}}\). To verify tangency, we can examine it locally, as demonstrated in the proof of Proposition 1.12(2). Therefore, by replacing \(\tau\) with a smaller \(s\)-dimensional cone, we can assume that \(X_{\Sigma}=\mathbb{C}^{n}\), \(V_{\tau}=(x_{1}=\ldots=x_{s}=0)\), and \(f\) is the blow-up along \(V_{\tau}\). Consequently, \(D_{\rho}\) is defined by \(x_{1}=0\) on the affine chart of some cone in \(\Sigma^{\prime}\). Furthermore, we choose \(n-r\) general hyperplanes \(H_{i}\) such that \(W=\bigcap_{i}H_{i}\) and \(W\cap N=H_{i}\cap N\). Let \(\mathcal{F}_{H_{i}}\) be defined by \(\omega_{i}=\sum_{j=1}^{n}a_{ij}\frac{\mathrm{d}x_{j}}{x_{j}}\) for \(1\leq i\leq n-r\).
Let \(A=(a_{ij})\) be the \((n-r)\times n\) matrix whose rows consist of the coefficients of the \(\omega_{i}\). Because of the choice of the \(\omega_{i}\), \(A\) is of full rank, that is, of rank \(n-r\). Note that we can assume \(v=\sum_{i=1}^{s}e_{i}\). As \(\rho=\mathbb{R}_{\geq 0}v\subset W\), we obtain
\[0=\langle\omega_{i}(1),v\rangle=\sum_{j=1}^{s}a_{ij}\]
for all \(i\). So the pullback of \(\omega_{i}\) on \(X^{\prime}_{\Sigma^{\prime}}\) is \(\omega^{\prime}_{i}=\sum_{j=2}^{n}a_{ij}\frac{\mathrm{d}x_{j}}{x_{j}}\) where we use the same notations for the coordinates on \(X^{\prime}\). In addition, the matrix \(B=(a_{ij})_{i=1,\,j=2}^{n-r,\,n}\), which is obtained by removing the first column of the matrix \(A\), has rank \(n-r\) as well.
Recall from the assumptions and inequality (1) that \(n-r-1\geq 0\) and \(r\geq s\). We can consider a partition \(\mathcal{P}_{1}\cup\mathcal{P}_{2}=[1,n]\cap\mathbb{N}\) where \(\mathcal{P}_{1}\) contains \([1,s]\cap\mathbb{N}\) and \(\#\mathcal{P}_{2}=r-s+1\). Next, we consider the subvariety \(Z\) on \(X_{\Sigma}\) defined by \(x_{1}=\cdots=x_{s}=0\) and \(x_{j}=c_{j}\) for \(j\in\mathcal{P}_{2}\), where the \(c_{j}\) are non-zero constants; it has dimension \(n-s-(r-s+1)=n-r-1=\operatorname{corank}(\mathcal{F}_{W})-1\). Therefore, the pullback of \(Z\) on \(X^{\prime}_{\Sigma^{\prime}}\), denoted by \(Z^{\prime}\), is defined by \(x_{1}=0\) and \(x_{j}=c_{j}\) for \(j\in\mathcal{P}_{2}\). Let \(Y\to X^{\prime}_{\Sigma^{\prime}}\) be the blow-up along \(Z^{\prime}\) and let \(\omega_{i,\,Y}\) be the pullback of \(\omega_{i}\). We observe that
\[\omega_{i,\,Y}=\Bigg{(}\sum_{j\in\mathcal{P}_{2}}\frac{a_{ij}x_{j}}{x_{1}x_{j} -c_{j}}\Bigg{)}\mathrm{d}x_{1}+\sum_{j\in\mathcal{P}_{1}\setminus\{1\}}a_{ij} \frac{\mathrm{d}x_{j}}{x_{j}}+x_{1}\Bigg{(}\sum_{j\in\mathcal{P}_{2}}\frac{a_{ ij}\mathrm{d}x_{j}}{x_{1}x_{j}-c_{j}}\Bigg{)}\]
where we use the same notations for the coordinates on \(Y\), and thus
\[0=\Bigg{(}\bigwedge_{i=1}^{n-r}\omega_{i,\,Y}\Bigg{)}\wedge\mathrm{d}x_{1}|_{x _{1}=0}=\bigwedge_{i=1}^{n-r}\Bigg{(}\sum_{j\in\mathcal{P}_{1}\setminus\{1\}}a_{ ij}\frac{\mathrm{d}x_{j}}{x_{j}}\Bigg{)}\wedge\mathrm{d}x_{1},\]
where the first equality follows as \(E=(x_{1}=0)\) is foliation-invariant. Therefore, we obtain
\[\bigwedge_{i=1}^{n-r}\Bigg{(}\sum_{j\in\mathcal{P}_{1}\setminus\{1\}}a_{ij} \frac{\mathrm{d}x_{j}}{x_{j}}\Bigg{)}=0\]
and hence, the matrix \((a_{ij})_{i=1,\,j\in\mathcal{P}_{1}\setminus\{1\}}^{n-r}\) is not of full rank. Therefore, any \((n-r)\times(n-r+s-2)\)-submatrix of \(B\) containing the first \(s-1\) columns is not of full rank.
**Claim**.: _Let \(M\) be the matrix consisting of the first \(s-1\) columns of \(B\). Then \(M\) is a zero matrix._
Proof of Claim.: Suppose \(M\) is not a zero matrix. Let the rank of \(M\) be \(m\geq 1\). Note that \(m\leq n-r\). Since the rank of \(B\) is \(n-r\), we can choose \(n-r-m\) columns \(C_{j}\) of \(B\) that are not in \(M\) such that \(m\) linearly independent columns from \(M\) together with the \(C_{j}\)'s span a vector space of dimension \(n-r\). Additionally, we note that \((n-r-m)+(s-1)=n-r+s-m-1\leq n-r+s-2\). Thus, there exists an \((n-r)\times(n-r+s-2)\)-submatrix of \(B\) that contains the first \(s-1\) columns and has rank \(n-r\), which means it is of full rank. This leads to a contradiction, completing the proof of the claim.
By the Claim together with \(\sum_{j=1}^{s}a_{ij}=0\), we obtain \(a_{ij}=0\) for all \(i\) and all \(j\leq s\). Hence \(\mathbb{C}\tau=\operatorname{span}(e_{1},\ldots,e_{s})\subset H_{i}\) for every \(i\), so \(\tau\subset\bigcap_{i}H_{i}=W\), contradicting the choice of \(\tau\). This completes the "only if" part.
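The column-selection step used in the proof of the Claim is elementary linear algebra: any collection of columns of \(B\) of positive rank can be completed to a full-rank collection by adjoining further columns of \(B\). The following short Python sketch (purely illustrative; the function, the greedy strategy, and the toy matrix are ad hoc choices, not part of the argument) performs such a completion numerically.

```python
import numpy as np

def complete_to_full_rank(B, initial_cols):
    """Greedily adjoin columns of B to the chosen ones until their span
    reaches the full column span of B; returns the selected column indices.
    Illustrative sketch only (floating-point ranks via numpy)."""
    target = np.linalg.matrix_rank(B)
    chosen = list(initial_cols)
    for j in range(B.shape[1]):
        if j in chosen:
            continue
        if np.linalg.matrix_rank(B[:, chosen + [j]]) > np.linalg.matrix_rank(B[:, chosen]):
            chosen.append(j)
        if np.linalg.matrix_rank(B[:, chosen]) == target:
            break
    return chosen

# Toy example: a 3x5 matrix of rank 3 whose first two columns only span a line.
B = np.array([[1., 2., 0., 1., 0.],
              [2., 4., 1., 0., 0.],
              [3., 6., 0., 0., 1.]])
print(complete_to_full_rank(B, [0, 1]))  # -> [0, 1, 2, 3]
```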
(If part) Now let \(\mathcal{F}=\mathcal{F}_{W}\) be a toric foliation of any rank \(r\) on a \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\) of dimension \(n\), and suppose that \((\Sigma,W)\) satisfies the condition \((\dagger)\). Let \(c=n-r\) be the corank of \(\mathcal{F}\) and let \(\pi:\widetilde{X}\to X\) be a projective birational morphism. Suppose \(E\subset\widetilde{X}\) is an exceptional divisor with center \(Z:=c_{X}(E)\) of dimension at most \(c-1\). We will demonstrate that \(E\) is invariant under \(\pi^{-1}\mathcal{F}\) using the following steps:
1. If \(Z\cap T\neq\emptyset\) where \(T\) is the torus of \(X_{\Sigma}\), then \(E\) is invariant under \(\pi^{-1}\mathcal{F}\) since \(\mathcal{F}|_{T}\) is non-dicritical by Proposition 1.16.
2. So we can assume that \(Z\subset X\setminus T\). Thus, \(Z\subset D_{\rho}\) for some \(\rho\in\Sigma(1)\).
3. Let \(\sigma\in\Sigma(n)\) be an \(n\)-dimensional cone with \(\rho\prec\sigma\). Then there is a sublattice \(N^{\prime}\subset N\) such that \(\sigma\) is smooth with respect to \(N^{\prime}\), which induces a finite morphism \(f:U_{\sigma,\,N^{\prime}}\to U_{\sigma,\,N}\).
4. Without loss of generality, we can assume \(X=U_{\sigma,\,N}\). We then denote \(U_{\sigma,\,N^{\prime}}\) by \(X^{\prime}\). We consider the commutative diagram given by the base change \(\widetilde{X}^{\prime}:=\widetilde{X}\times_{X}X^{\prime}\), where \(\pi^{\prime}:\widetilde{X}^{\prime}\to X^{\prime}\) is the base change of \(\pi\) and \(\widetilde{f}:\widetilde{X}^{\prime}\to\widetilde{X}\) is the base change of \(f\). Let \(Z^{\prime}:=f^{-1}(Z)\) and \(E^{\prime}=\widetilde{f}^{-1}(E)\). Note that it suffices to show \(E^{\prime}\) is invariant under \(\widetilde{f}^{-1}\pi^{-1}\mathcal{F}_{W,\,N}\).
5. If \(Z\subset\operatorname{Sing}(\mathcal{F}_{W})\), then \(Z^{\prime}\subset\operatorname{Sing}(f^{-1}\mathcal{F}_{W})\) by Lemma 1.10. We choose a hyperplane \(H\) containing \(W\) such that \(H\cap N=W\cap N\). Then \((\Sigma,H)\) satisfies the condition (\(\dagger\)) and \(\operatorname{Sing}(\mathcal{F}_{W})\subset\operatorname{Sing}(\mathcal{F}_{H})\) by Proposition 1.11. By Proposition 3.3, \(\mathcal{F}_{H,\,N^{\prime}}\) has only simple singularities. By [1, Remark 2.13], \(\mathcal{F}_{H,\,N^{\prime}}=f^{-1}\mathcal{F}_{H,\,N}\) is strongly non-dicritical. Therefore, \(E^{\prime}\) is invariant under \(\pi^{\prime-1}\mathcal{F}_{H,\,N^{\prime}}=\widetilde{f}^{-1}\pi^{-1}\mathcal{ F}_{H,\,N}\). Consequently, \(E^{\prime}\) is invariant under \(\widetilde{f}^{-1}\pi^{-1}\mathcal{F}_{W,\,N}\) by Proposition 1.12(1).
6. If \(Z\not\subset\operatorname{Sing}(\mathcal{F}_{W})\), then \(Z^{\prime}\not\subset\operatorname{Sing}(f^{-1}\mathcal{F}_{W})\) by Lemma 1.10, and thus \(\mathcal{F}_{W,\,N^{\prime}}\) is smooth at a general point of \(Z^{\prime}\). Therefore, \(E^{\prime}\) is invariant under \(\pi^{\prime-1}\mathcal{F}_{W,\,N^{\prime}}=\widetilde{f}^{-1}\pi^{-1}\mathcal{ F}_{W,\,N}\) by Proposition 1.16.
**Proposition 1.21**.: _Let \(X_{\Sigma}\) be a toric variety. Then the following two statements are equivalent:_
1. \(W=N^{\prime}\otimes_{\mathbb{Z}}\mathbb{C}\) _for some sublattice_ \(N^{\prime}\subset N\)_._
2. \(\mathcal{F}_{W}\) _is an algebraically integrable foliation on_ \(X_{\Sigma}\)_._
Proof.: Suppose \(W=N^{\prime}\otimes_{\mathbb{Z}}\mathbb{C}\) for some sublattice \(N^{\prime}\subset N\). We consider the quotient lattice \(\overline{N}=N/N^{\prime}\). Then the image of \(W\) is \(\{\overline{0}\}\). This introduces a toric morphism \(T_{N}\to T_{\overline{N}}\). As \(T_{N}\subset X_{\Sigma}\), we have a dominant rational map \(f:X_{\Sigma}\dashrightarrow T_{\overline{N}}\), which induces the foliation \(\mathcal{F}_{W}\). Hence, \(\mathcal{F}_{W}\) is algebraically integrable.
Conversely, suppose \(\mathcal{F}_{W}\) is algebraically integrable. Let \(T\) be the torus in \(X_{\Sigma}\). Then the leaf \(L\) of \(\mathcal{F}_{W}|_{T}\) through \(1\in T\) is algebraic. Thus, \(\mathcal{T}_{L,1}\) is a rational vector subspace of \(\mathcal{T}_{T,1}=N_{\mathbb{C}}\). Consequently, \(\mathcal{T}_{L,1}=N^{\prime}\otimes\mathbb{C}\) for some sublattice \(N^{\prime}\subset N\), and it defines a toric foliation that agrees with \(\mathcal{F}_{W}\) on \(T\). Therefore, \(W=\mathcal{T}_{L,1}=N^{\prime}\otimes\mathbb{C}\).
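For instance, let \(N=\mathbb{Z}^{2}\) and \(X_{\Sigma}=\mathbb{C}^{2}\). For \(W=\mathbb{C}\cdot(1,2)=(\mathbb{Z}\cdot(1,2))\otimes_{\mathbb{Z}}\mathbb{C}\), the leaf of \(\mathcal{F}_{W}\) through \((a,b)\in T\) is the orbit \(\{(at,bt^{2})\mid t\in\mathbb{C}^{*}\}\), which lies on the algebraic curve \(a^{2}y=bx^{2}\); indeed \(\mathcal{F}_{W}\) is induced by the dominant rational map \(x^{2}y^{-1}\colon X_{\Sigma}\dashrightarrow T_{N/\mathbb{Z}(1,2)}\cong\mathbb{C}^{*}\). By contrast, for \(W=\mathbb{C}\cdot(1,\sqrt{2})\), which meets \(N\) only in \(0\), the leaf through a general point of \(T\) is cut out by \(y=c\,x^{\sqrt{2}}\) and is not algebraic, so \(\mathcal{F}_{W}\) is not algebraically integrable.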
### Strongly non-dicritical singularity
**Definition 1.22**.: A foliation \(\mathcal{F}\) on a _smooth_ variety \(X\) is said to be _strongly non-dicritical_ if any divisor \(E\) over \(X\) such that \(c_{X}(E)\) is contained in \(\operatorname{Sing}(\mathcal{F})\) is foliation-invariant.
In [13, Lemma 2.14], the equivalence between non-dicriticality and strong non-dicriticality is demonstrated for the case when \(X\) is a smooth threefold and \(\mathcal{F}\) has corank one. Here, we establish that this equivalence also holds for arbitrary toric foliations on smooth toric varieties.
**Theorem 1.23**.: _Suppose \(\mathcal{F}_{W}\) is a toric foliation on a smooth toric variety \(X_{\Sigma}\). Then the following statements are equivalent:_
1. \(\mathcal{F}_{W}\) _is non-dicritical._
2. \(\mathcal{F}_{W}\) _is strongly non-dicritical._
3. \((\Sigma,W)\) _satisfies the condition_ \((\dagger)\)_._
Proof.: By Theorem 1.19, it suffices to establish the equivalence between (2) and (3).
Suppose \((\Sigma,W)\) satisfies the condition \((\dagger)\). By Proposition 3.3, we have \(\mathcal{F}_{W}=\bigcap_{i}\mathcal{F}_{i}\), where \(\mathcal{F}_{i}\) are toric foliations with corank one and only simple singularities. Consequently, \(\mathcal{F}_{i}\) is strongly non-dicritical by [13, Remark 2.13]. Now let us consider a birational morphism \(\pi:Y\to X\) and an exceptional divisor \(E\subset Y\) over \(X\) whose center on \(X\) is \(Z\). Furthermore, suppose \(E\) is contained in \(\operatorname{Sing}(\mathcal{F}_{W})\subset\bigcup_{i}\operatorname{Sing}( \mathcal{F}_{i})\). Therefore, there exists an \(i\) such that \(Z\subset\operatorname{Sing}(\mathcal{F}_{i})\), implying that \(E\) is invariant under \(\pi^{-1}\mathcal{F}_{i}\). Consequently, by Proposition 1.12(1), \(E\) is invariant under \(\pi^{-1}\mathcal{F}_{W}\).
On the other hand, suppose \(\mathcal{F}_{W}\) is strongly non-dicritical. To demonstrate that \(\mathcal{F}_{W}\) is non-dicritical, we consider any birational morphism \(\pi:Y\to X\) and an exceptional divisor \(E\subset Y\) whose center on \(X\) is \(Z\) with \(\dim Z\leq c-1\) where \(c\) is the corank of \(\mathcal{F}_{W}\). If \(Z\subset\operatorname{Sing}(\mathcal{F}_{W})\), then the strong non-dicriticality of \(\mathcal{F}_{W}\) implies that \(E\) is invariant under \(\pi^{-1}\mathcal{F}_{W}\). Hence we can assume \(Z\not\subset\operatorname{Sing}(\mathcal{F}_{W})\). Therefore, by Proposition 1.16, \(E\) is foliation-invariant.
### Another description of \((\dagger)\)
In [16], an alternative definition for non-dicriticality is presented. It is not immediately clear whether our definition is equivalent to the one provided in that work. However, we can establish the following equivalence for toric foliations on a complete \(\mathbb{Q}\)-factorial toric variety:
**Proposition 1.24**.: \((\dagger)\) _is equivalent to [16, Definition 3.6], which requires that every exceptional divisor over the singular locus of the foliation is foliation-invariant._
Proof.: To establish the equivalence, we need to show that condition \((\dagger)\) implies [16, Definition 3.6]. Now suppose that \(Z\subset\operatorname{Sing}(\mathcal{F}_{W})\) is irreducible and \(c_{X_{\Sigma}}(E)=Z\). Since \(\operatorname{Sing}(\mathcal{F}_{W})\) is a torus invariant closed subset of codimension at least \(2\), and \(c_{X_{\Sigma}}(E)\subset\operatorname{Sing}(\mathcal{F}_{W})\) is irreducible, there exists \(\tau\in\Sigma(\ell)\) with \(\ell\geq 2\) such that \(c_{X_{\Sigma}}(E)\subset V_{\tau}\subset\operatorname{Sing}(\mathcal{F}_{W})\). Let \(f:X_{\Sigma^{\prime}}\to X_{\Sigma}\) be a toric resolution. Considering higher models, we can also assume that \(S:=\{\rho\in\Sigma^{\prime}(1)\mid\rho\subset\operatorname{relint}(\tau)\}\neq\emptyset\). Then since \(c_{X_{\Sigma^{\prime}}}(E)\) is irreducible, we have \(c_{X_{\Sigma^{\prime}}}(E)\subset D_{\rho}\) for some \(\rho\in S\), and by construction \(f(D_{\rho})=V_{\tau}\subset\operatorname{Sing}(\mathcal{F}_{W})\).
If \(D_{\rho}\subset X_{\Sigma^{\prime}}\) is not foliation invariant, then by \((\dagger)\) and Proposition 1.11, we know that \(V_{\tau}=f(D_{\rho})\) is not contained in \(\operatorname{Sing}(\mathcal{F}_{W})\), which is impossible. Thus, \(D_{\rho}\) is foliation-invariant. If \(c_{X_{\Sigma^{\prime}}}(E)\subset\operatorname{Sing}(\mathcal{F}_{W}^{\prime})\), then \(E\) is foliation-invariant since \(\mathcal{F}_{W}^{\prime}\) is strongly non-dicritical by Theorem 1.23. Therefore, we can assume that \(c_{X_{\Sigma^{\prime}}}(E)\not\subset\operatorname{Sing}(\mathcal{F}_{W}^{ \prime})\). By Corollary 1.17, \(E\) is invariant since \(c_{X_{\Sigma^{\prime}}}(E)\subset D_{\rho}\) and \(D_{\rho}\) is foliation invariant.
## 2. Singularities of foliated pairs
In this section, we define the various singularities for foliated pairs of any rank on arbitrary normal varieties. Moreover, we show that it suffices to compute the foliated minimal log discrepancies on a foliated log resolution.
Let us start by recalling some definitions.
**Definition 2.1**.: Let X be an arbitrary normal variety.
1. A _foliated pair_ \((\mathcal{F},\Delta)\) on \(X\) consists of a foliation \(\mathcal{F}\) on \(X\) and an \(\mathbb{R}\)-divisor \(\Delta\) such that \(K_{\mathcal{F}}+\Delta\) is \(\mathbb{R}\)-Cartier.
2. Given a birational morphism \(\pi:\widetilde{X}\to X\), we can write \[K_{\widetilde{\mathcal{F}}}+\pi_{*}^{-1}\Delta=\pi^{*}(K_{\mathcal{F}}+\Delta )+\sum_{E}a(E,\mathcal{F},\Delta)E\] where the sum is over all \(\pi\)-exceptional divisors and \(a(E,\mathcal{F},\Delta)\) is called the _discrepancy_ of \((\mathcal{F},\Delta)\) with respect to \(E\).
3. We say that \[(\mathcal{F},\Delta)\text{ is }\left\{\begin{array}{ll}\text{terminal}&\\ \text{canonical}&\\ \text{log terminal}&\\ \text{log canonical}&\\ \varepsilon\text{-log canonical}&\end{array}\right.\text{ if }a(E,\mathcal{F},\Delta) \left\{\begin{array}{ll}>0&\\ \geq 0&\\ >-\iota(E)&\\ \geq-\iota(E)&\\ \geq-\iota(E)+\varepsilon&\end{array}\right.\] for any birational morphism \(\pi:\widetilde{X}\to X\) and for any prime \(\pi\)-exceptional divisor \(E\) on \(\widetilde{X}\). Here \(\varepsilon\) is a nonnegative real number and recall that \(\iota(E)=1\) if \(E\) is not foliation invariant and \(\iota(E)=0\) otherwise. Let \(P\in X\) be a, not necessarily closed, point of \(X\). We say \((\mathcal{F},\Delta)\) is terminal (resp. canonical, log terminal, log canonical, \(\varepsilon\)-log canonical) _at_\(P\) if we only require those exceptional divisors \(E\) whose center in \(X\) is the Zariski closure of \(P\) to satisfy the respective condition on the discrepancy above. Let \(Z\) be an irreducible subvariety of \(X\). We say that \((\mathcal{F},\Delta)\) is terminal (resp. canonical, log terminal, log canonical, \(\varepsilon\)-log canonical) at the _generic_ point of \(Z\) if it is such at \(\eta_{Z}\), the generic point of \(Z\). And we say that \((\mathcal{F},\Delta)\) is terminal (resp. canonical, log terminal, log canonical, \(\varepsilon\)-log canonical) at the _general_ point of \(Z\) if it is such at the general _closed_ point of \(Z\). We say \(\mathcal{F}\) is terminal (resp. canonical, log terminal, log canonical, \(\varepsilon\)-log canonical) if the foliated pair \((\mathcal{F},0)\) is such.
4. For any subvariety \(Z\), we define the _minimal log discrepancy_\(\operatorname{mld}_{Z}(\mathcal{F},\Delta)\) of \((\mathcal{F},\Delta)\) over \(Z\) as \(\inf\{a(E,\mathcal{F},\Delta)+\iota(E)\mid E\text{ is a divisor over }X\text{ whose center is contained in }Z\}\).
**Proposition 2.2**.: _If \(\operatorname{mld}_{Z}(\mathcal{F},\Delta)<0\), then \(\operatorname{mld}_{Z}(\mathcal{F},\Delta)=-\infty\)._
Proof.: First, note that for any \(E=\operatorname{Exc}(\operatorname{Bl}_{Z}(X)\to X)\), we have \(a(E,\mathcal{F},\Delta)\leq a(E,K_{X},\Delta)\) when \(Z\not\subset\operatorname{Sing}(\mathcal{F})\) and \(a(E,\mathcal{F},\Delta)\leq a(E,K_{X},\Delta)-1\) when \(Z\subset\operatorname{Sing}(\mathcal{F})\). Now suppose that there is an exceptional divisor \(E\subset Y\) with \(a(E,\mathcal{F},\Delta)+\iota(E)<0\) where \(\pi:Y\to X\). Let \(c:=a(E,\mathcal{F},\Delta)\), and recall that \(c<-\iota(E)\leq 0\). Then we choose \(Z_{0}\) as a general subvariety in \(E\) of \(\operatorname{codim}_{X}Z_{0}=2\) that is not contained in any other exceptional divisor of \(\pi\) and not in the singular locus of \(\mathcal{F}_{Y}\). By blowing up along \(Z_{0}\), we obtain \(Y_{1}=\operatorname{Bl}_{Z_{0}}Y\to Y\) with the exceptional divisor \(E_{1}\). In this case, we have \(a(E_{1},\mathcal{F},\Delta)=a(E_{1},\mathcal{F}_{Y},\Delta_{Y})+a(E,\mathcal{F},\Delta)\leq 1+a(E,\mathcal{F},\Delta)=1+c\).
Let \(Z_{1}\) be the intersection of \(E_{1}\) with the proper transform of \(E\). Next, we consider \(Y_{2}=\operatorname{Bl}_{Z_{1}}Y_{1}\to Y_{1}\) with exceptional divisor \(E_{2}\). We note that
\[a(E_{2},\mathcal{F},\Delta) =a(E_{2},\mathcal{F}_{Y_{1}},\Delta_{Y_{1}})+a(E_{1},\mathcal{F}, \Delta)+a(E,\mathcal{F},\Delta)\] \[\leq 1+a(E_{1},\mathcal{F},\Delta)+a(E,\mathcal{F},\Delta)\] \[=2+2c.\]
Continuing this process, we obtain the exceptional divisor \(E_{n}\) with \(a(E_{n},\mathcal{F},\Delta)\leq n(1+c)\). If \(\iota(E)=1\), then as \(n\) tends to infinity, \(a(E_{n},\mathcal{F},\Delta)\) approaches \(-\infty\).
On the other hand, if \(\iota(E)=0\), then \(\iota(E_{n})=0\) for all \(n\) since the center of each blow-up is contained in an invariant divisor. Furthermore, we have
\[a(E_{2},\mathcal{F},\Delta)\leq a(E_{1},\mathcal{F},\Delta)+a(E,\mathcal{F}, \Delta)=1+2c\]
where the inequality follows from \(Z_{1}\subset\operatorname{Sing}(\mathcal{F}_{Y_{1}})\). Continuing as before, we obtain \(a(E_{n},\mathcal{F},\Delta)\leq 1+nc\), and hence \(a(E_{n},\mathcal{F},\Delta)\) tends to \(-\infty\) as \(n\) tends to infinity.
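Concretely, if \(\iota(E)=1\) and \(c=a(E,\mathcal{F},\Delta)=-\tfrac{3}{2}\), the first bound gives \(a(E_{n},\mathcal{F},\Delta)\leq n(1+c)=-\tfrac{n}{2}\), while if \(\iota(E)=0\) and \(c=-\tfrac{1}{2}\), the second bound gives \(a(E_{n},\mathcal{F},\Delta)\leq 1+nc=1-\tfrac{n}{2}\); in both cases the discrepancies are unbounded below, so \(\operatorname{mld}_{Z}(\mathcal{F},\Delta)=-\infty\).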
**Definition 2.3**.: Let \(\lambda_{1},\dots,\lambda_{m}\in\mathbb{C}^{*}\). If for all non-zero maps \(\phi:\{1,\dots,m\}\to\mathbb{Z}_{\geq 0}\), we have \(\sum_{k=1}^{m}\phi(k)\lambda_{k}\neq 0\), then we say that the tuple \((\lambda_{1},\dots,\lambda_{m})\) satisfies the _non-resonant condition_.
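Whether a given tuple is resonant can be probed computationally, at least over a bounded range of coefficients. The sketch below (an illustrative aside; the function name, the use of exact rational inputs, and the search bound are ad hoc choices) searches for a resonance relation \(\sum_{k}\phi(k)\lambda_{k}=0\) with bounded \(\phi(k)\); a negative answer within the bound does not prove the non-resonant condition, which quantifies over all non-zero maps \(\phi\).

```python
from fractions import Fraction
from itertools import product

def find_resonance(lambdas, max_coeff=10):
    """Search for non-negative integers phi, not all zero, with
    sum(phi[k] * lambdas[k]) == 0.  Only coefficients up to max_coeff are
    tried, so returning None does not certify the non-resonant condition."""
    m = len(lambdas)
    for phi in product(range(max_coeff + 1), repeat=m):
        if any(phi) and sum(c * l for c, l in zip(phi, lambdas)) == 0:
            return phi  # a resonance relation
    return None

print(find_resonance([Fraction(2), Fraction(1)]))   # None: 2a + b > 0 whenever (a, b) != (0, 0)
print(find_resonance([Fraction(1), Fraction(-1)]))  # (1, 1): 1 - 1 = 0, so (1, -1) is resonant
```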
**Definition 2.4** (Simple singularities).: A point \(p\in X\) is a _simple_ singularity if, in a formal coordinate around \(p\), \(\mathcal{F}\) is generated by the saturation of the wedge of \(1\)-forms in the following normal forms:
1. There are \(\lambda_{1},\dots,\lambda_{t}\in\mathbb{C}^{*}\) satisfying the non-resonant condition with \(1\leq t\leq n\) such that \[\omega=\left(\prod_{i=1}^{t}x_{i}\right)\left(\sum_{i=1}^{t}\lambda_{i}\frac{ \mathrm{d}x_{i}}{x_{i}}\right).\]
2. There are \(\lambda_{2},\dots,\lambda_{k}\in\mathbb{C}\), \(\lambda_{k+1},\dots,\lambda_{t}\in\mathbb{C}^{*}\), \(p_{1},\dots,p_{k}\in\mathbb{N}\) with \(1\leq k\leq t\leq n\), and a one-variable non-zero formal series \(\Psi\) such that \[\omega=\left(\prod_{i=1}^{t}x_{i}\right)\left(\sum_{i=1}^{k}p_{i}\frac{ \mathrm{d}x_{i}}{x_{i}}+\Psi(x_{1}^{p_{1}}\cdots x_{k}^{p_{k}})\sum_{i=2}^{t} \lambda_{i}\frac{\mathrm{d}x_{i}}{x_{i}}\right).\] where the tuple \((\lambda_{k+1},\dots,\lambda_{t})\) satisfies the non-resonant condition.
For \(\ell=1\) or \(2\), we say \(p\) is a simple singularity of type \(\ell\) if \(\mathscr{F}\) is locally generated by the wedge of \(1\)-forms of type \(\ell\).
**Lemma 2.5**.: _Let \(\mathcal{F}\) be a foliation on an arbitrary normal variety \(X\). If \(\mathcal{F}\) has only simple singularities, then it is non-dicritical._
Proof.: Note that \(X\) is smooth and \(\mathcal{F}\) is locally free. Let \(\pi:Y\to X\) be any birational projective morphism and \(E\) be a divisor on \(Y\) over \(X\) whose center on \(X\) is \(Z\) with \(\dim Z\leq c-1\) where \(c\) is the corank of \(\mathcal{F}\). We consider the following two cases:
1. If \(Z\not\subset\operatorname{Sing}(\mathcal{F})\), then \(E\) is invariant by Proposition 1.16.
2. If \(Z\subset\operatorname{Sing}(\mathcal{F})\), then around a general point of \(Z\), we can write \(\mathcal{F}=\bigcap_{i}\mathcal{F}_{i}\), where \(\mathcal{F}_{i}\) are corank one foliations with simple singularities. By [10], \(\mathcal{F}_{i}\) is strongly non-dicritical for all \(i\), and thus \(E\) is invariant under \(\pi^{-1}\mathcal{F}_{i}\) for all \(i\). Therefore, \(E\) is invariant under \(\pi^{-1}\mathcal{F}\).
As in [11], we introduce the definition for a foliated pair of arbitrary rank to be foliated log smooth as follows:
**Definition 2.6**.: We say that a foliated pair \((\mathcal{F},\Delta=\sum_{i}a_{i}D_{i})\) on \(X\) is _foliated log smooth_ if the following conditions hold:
1. \((X,\Delta)\) is log smooth.
2. \(\mathcal{F}\) has only simple singularities.
3. Let \(S\) be the support of the noninvariant components of \(\Delta\). For any closed point \(x\in S\), let \(Z\) be the minimal stratum of \(\operatorname{Sing}(\mathcal{F})\) containing \(x\). After re-indexing, we can assume that \(x\in D_{i}\) and \(D_{i}\subset S\) if and only if \(1\leq i\leq b\). Then \(Z\) meets \(\bigcap_{i=1}^{b}D_{i}\) transversally, that is, \[\dim Z\cap\bigcap_{i=1}^{b}D_{i}=\dim Z-b.\]
**Definition 2.7**.: A birational morphism \(\pi:Y\to X\) is a _foliated log resolution_ of \((\mathcal{F},\Delta)\) if \(\operatorname{Exc}(\pi)\) is a divisor and \((\pi^{-1}\mathcal{F},\pi_{*}^{-1}\Delta+\sum_{i}E_{i})\) is foliated log smooth where the sum runs over all the \(\pi\)-exceptional divisors \(E_{i}\).
**Remark 2.8**.: Seidenberg showed that a foliated log resolution always exists for (co)rank one foliations on surfaces. In [10], this is shown for corank one foliations on threefolds. However, it fails in general for rank one foliations on threefolds, as shown in [10, Example 2.31]. Nevertheless, we will show that it holds true for toric foliations (see Theorem 3.5).
**Definition 2.9**.: \((\mathcal{F},\Delta)\) is _foliated divisorial log terminal (F-dlt)_ if
1. each irreducible component of \(\Delta\) is generically transverse to \(\mathcal{F}\) and has coefficient at most one, and
2. there exists a foliated log resolution \(\pi:Y\to X\) of \((\mathcal{F},\Delta)\) which only extracts divisors E of discrepancy \(>-\iota(E)\).
**Lemma 2.10**.: _Let \(\mathcal{F}\) be a foliation with only simple singularities of type one on a smooth variety \(X\), \(Z\) be a subvariety contained in \(\operatorname{Sing}(\mathcal{F})\), and \(S\) be a minimal stratum of \(\operatorname{Sing}(\mathcal{F})\) containing \(Z\). If \(\pi:Y\to X\) is the blow-up along \(Z\) with exceptional divisor \(E\), then the order of \(K_{\mathcal{F}_{Y}}-\pi^{*}K_{\mathcal{F}}\) along \(E\) is an integer between \(0\) and \(\dim S-\dim Z\)._
Proof.: Write \(\ell:=\operatorname{codim}Z\) and \(s:=\operatorname{codim}S\). We choose and fix a general point \(z\in Z\) such that \(z\in Z\setminus\operatorname{Sing}(Z)\). Around \(z\), there are formal coordinates \(x_{1},\ldots,x_{n}\) such that the point \(z\) is the origin and \(\mathcal{F}\) is defined by an \((n-r)\)-form \(\omega\) which can be written as
\[\omega =\Bigg{(}\prod_{j=1}^{t}x_{j}\Bigg{)}\bigwedge_{i=1}^{n-r}\omega_ {i}\] \[=\Bigg{(}\prod_{j=1}^{t}x_{j}\Bigg{)}\sum_{1\leq i_{1}<\ldots<i_{ n-r}\leq t}a_{i_{1}\cdots i_{n-r}}\frac{\mathrm{d}x_{i_{1}}}{x_{i_{1}}} \wedge\cdots\wedge\frac{\mathrm{d}x_{i_{n-r}}}{x_{i_{n-r}}}\]
for some \(t\in\mathbb{N}\), where each \(\omega_{i}\) is a simple 1-form of type one (up to a multiplication of a monomial) involving only variables \(x_{1},\ldots,x_{t}\). After re-indexing, we may assume that \(S=\{x_{1}=\cdots=x_{s}=0\}\).
**Claim**.: _We can choose \(\omega_{i}\)'s such that, in addition to being simple of type one,_
_(1) \(\omega_{i}\) involves only variable \(x_{1},\ldots,x_{s}\) for \(i=1,\ldots,n-r-t+s\), and_
_(2) \(\omega_{i}=\frac{\mathrm{d}x_{t-n+r+i}}{x_{t-n+r+i}}\) for \(i=n-r-t+s+1,\ldots,n-r\)._
Proof of Claim.: It suffices to show that \(\frac{\mathrm{d}x_{k}}{x_{k}}\in\mathbb{C}\omega_{1}+\cdots+\mathbb{C}\omega_{ n-r}\) for \(k=s+1,\cdots,t\). To this end, write \(\omega_{i}=\sum_{j=1}^{t}a_{ij}\frac{\mathrm{d}x_{j}}{x_{j}}\) and let \(M\) be the matrix given by \(M_{ij}=a_{ij}\). Perform Gauss-Jordan elimination on \(M\) to get the row echelon form \(M^{\prime}\), and let \(v_{j_{1}},\ldots,v_{j_{n-r}}\) be the column vectors of \(M^{\prime}\) such that \(j_{1}<\cdots<j_{n-r}\) and the lowest non-zero entry of \(v_{j_{k}}\) is a corner of \(M^{\prime}\) for each \(k\). Suppose for the sake of contradiction that there exists \(\alpha\in\{s+1,\ldots,t\}\setminus\{j_{1},\ldots,j_{n-r}\}\). Then we can find \(\mathcal{C}\subset\{\alpha,j_{1},\ldots,j_{n-r}\}\) containing \(\alpha\), such that \(\{v_{j}\mid j\in\mathcal{C}\}\) is a minimal dependent set.
Hence \(V_{(x_{j}:j\in\mathcal{C})}\) is an irreducible component of \(\operatorname{Sing}(\mathcal{F})\) not containing \(S=\{x_{1}=\cdots=x_{s}=0\}\). We get a contradiction since every irreducible component of \(\operatorname{Sing}(\mathcal{F})\) that can be seen in the formal neighborhood of \(z\) must contain \(S\). This completes the proof of the claim.
By the claim, we can write
\[\omega=\Bigg{(}\prod_{j=1}^{s}x_{j}\Bigg{)}\omega_{1}\wedge\cdots\wedge\omega_ {n-r-t+s}\wedge\mathrm{d}x_{s+1}\wedge\cdots\wedge\mathrm{d}x_{t},\]
where \(\omega_{i}\) involves only variable \(x_{1},\ldots,x_{s}\) for \(i=1,\ldots,n-r-t+s\). Since \(z\in Z\) is a smooth point, \(Z\) is defined, around \(z\), by \(x_{1}=\cdots=x_{s}=0\) and \(f_{j}(x_{s+1},\ldots,x_{n})=0\) where \(j=s+1,\ldots,\ell\), and the Jacobian matrix \((\partial f_{j}/\partial x_{k}(0))_{j=s+1,k=s+1}^{\ell,\,n}\) is of full rank.
If one of \(f_{j}\)'s involves one of variables \(x_{t+1},\ldots,x_{n}\), let us say \(f_{s+1}\) involves \(x_{n}\), then we may assume \(\frac{\partial f_{s+1}}{\partial x_{n}}(0)\neq 0\) as \(z\) is general. Thus, \(\mathbb{C}[\![x_{s+1},\ldots,x_{n}]\!]=\mathbb{C}[\![x_{s+1},\ldots,x_{n-1},f _{s+1}]\!]\) and \(x_{s+1},\ldots,x_{n-1},f_{s+1}\) form a set of formal coordinates (c.f. [1, Theorem 9.7]). Hence, we can assume \(f_{j}\)'s do not involve \(x_{n}\) for \(j\geq s+2\). Continuing this process, we can assume \(f_{s+i}=x_{n+1-i}\) for \(i=1,\ldots,\alpha\) for some \(\alpha\leq\ell-s\) and \(f_{s+i}\)'s involve only \(x_{s+1},\ldots,x_{t}\) for \(\ell-s\geq i>\alpha\) (if any). Note that in this procedure \(x_{1},\ldots,x_{t}\) remain unchanged and \(\omega\) has the same expression.
If \(\alpha<\ell-s\), using the fact that \((\partial f_{j}/\partial x_{k}(0))_{j=s+\alpha+1,\,k=s+1}^{\ell,\,t}\) is of full rank and [1, Theorem 9.7], we can do a change of coordinates and assume that \(f_{s+i}=x_{s+i-\alpha}\) for \(\ell-s\geq i>\alpha\). The expression of \(\omega\) remains the same.
Consider the affine open set \(U\) of the formal neighborhood of \(\pi^{-1}(z)\) that is given by the coordinates \(y_{1},\ldots,y_{n}\) satisfying:
\[\left\{\begin{array}{ll}x_{k}=y_{k}y_{m},&\text{ if }k\in(\mathbb{N}\cap I )\setminus\{m\}\\ x_{k}=y_{k},&\text{ if }k=m\text{ or }k\notin\mathbb{N}\cap I\end{array},\right.\]
where \(I:=[1,s]\cup[s+1,\ell-\alpha]\cup[n+1-\alpha,n]\) and \(m\in\mathbb{N}\cap I\). Then \(E\cap U\) is defined by \(y_{m}=0\). To calculate the vanishing order of \(\pi^{*}\omega\) along \(E\), we may assume that \(m=1\). A direct computation shows that
1. the order of \(\pi^{*}(\prod_{j=1}^{s}x_{j})\) along \(E\) is \(s\),
2. the order of \(\pi^{*}(\omega_{1}\wedge\cdots\wedge\omega_{n-r-t+s})\) along \(E\) is \(-1\) because of the non-resonance condition,
3. the order of \(\mathrm{d}y_{1}\wedge\pi^{*}(\mathrm{d}x_{s+1}\wedge\cdots\wedge\mathrm{d}x_{t})\) along \(E\) is \(\ell-s-\alpha\), and the order of \(\pi^{*}(\mathrm{d}x_{s+1}\wedge\cdots\wedge\mathrm{d}x_{t})\) along \(E\) is \(\max\{0,\ell-s-\alpha-1\}\).
Thus, the order of \(K_{\mathcal{F}_{Y}}-\pi^{*}K_{\mathcal{F}}\) along \(E\) is
\[(n-\dim Z-1)-(s-1+\ell-s-\alpha)=\alpha\]
which is an integer between \(0\) and \(\ell-s=\dim S-\dim Z\). In particular, the order of \(K_{\mathcal{F}_{Y}}-\pi^{*}K_{\mathcal{F}}\) along \(E\) is always nonnegative.
Moreover, as argued in the proof of [1, Lemma 2.9], all \(\pi^{*}\omega_{i}|_{U}\)'s and \(\pi^{*}(\mathrm{d}x_{k})|_{U}\)'s are of type one since their coefficients satisfy the non-resonance condition. Therefore, \(\pi^{*}\omega|_{U}\) is of type one as well, and \(\pi^{-1}\mathcal{F}\) is simple of type one in a neighborhood of \(\pi^{-1}(z)\).
**Proposition 2.11**.: _Suppose \(X\) is a smooth variety and \(\mathcal{F}\) is a foliation with only simple singularities of type one on \(X\). Then \(\mathcal{F}\) has only canonical singularities._
Proof.: The proof closely follows the proof of [1, Lemma 2.9]. We provide it here for the convenience of the readers.
Let \(E\) be a divisor on \(Y\) over \(X\) with center \(Z:=c_{X}(E)\subset X\). If \(Z\not\subset\operatorname{Sing}(\mathcal{F})\), then we can apply the argument in [1, Lemma 3.10] to show that \(a(E,\mathcal{F})\geq 0\).
Suppose \(Z\subset\operatorname{Sing}(\mathcal{F})\). After shrinking around the generic point of \(Z\), we can assume that \(Z\) is smooth. By Zariski's lemma (cf. [1, Lemma 2.45]), after possibly replacing \(Y\) by a higher
model, we can assume that \(\pi\) is a composition of blow-ups of subvarieties centered on \(Z\). We proceed by induction on the number of blow-ups. Thus, it suffices to show that if \(\pi:\widetilde{X}\to X\) is the blow-up along \(Z\), then
1. \(\pi^{-1}\mathcal{F}\) has simple singularities of type one in a neighborhood of \(\pi^{-1}(z)\), where \(z\in Z\) is a general point, and
2. \(a(E_{0},\mathcal{F})\geq 0\), where \(E_{0}\) is the exceptional divisor of \(\pi\).
These two statements are demonstrated in the proof of Lemma 2.10.
**Example 2.12**.: In general, the simple singularities of type two on smooth varieties are not (log) canonical. Let us consider the following three 1-forms on \(\mathbb{C}^{4}\):
\[\frac{1}{xyzt}\omega_{1} =2\frac{\mathrm{d}x}{x}+\frac{\mathrm{d}y}{y}+x^{2}y\left(\frac{ \mathrm{d}z}{z}+\frac{\mathrm{d}t}{t}\right)\] \[\frac{1}{xyzt}\omega_{2} =\frac{\mathrm{d}x}{x}+2\frac{\mathrm{d}y}{y}+xy^{2}\left(\frac{ \mathrm{d}z}{z}+\frac{\mathrm{d}t}{t}\right)\] \[\frac{1}{xyzt}\omega_{3} =2\frac{\mathrm{d}x}{x}+2\frac{\mathrm{d}y}{y}+x^{2}y^{2}\left( \frac{\mathrm{d}z}{z}+2\frac{\mathrm{d}t}{t}\right)\]
Then we obtain
\[\frac{1}{(xyzt)^{3}}\omega_{1}\wedge\omega_{2}\wedge\omega_{3} =xy(-2x-2y+3xy)\frac{\mathrm{d}x}{x}\wedge\frac{\mathrm{d}y}{y} \wedge\frac{\mathrm{d}z}{z}\] \[\quad+xy(-2x-2y+6xy)\frac{\mathrm{d}x}{x}\wedge\frac{\mathrm{d}y }{y}\wedge\frac{\mathrm{d}t}{t}\] \[\quad+x^{5}y^{5}\frac{\mathrm{d}x}{x}\wedge\frac{\mathrm{d}z}{z} \wedge\frac{\mathrm{d}t}{t}\] \[\quad-x^{5}y^{5}\frac{\mathrm{d}y}{y}\wedge\frac{\mathrm{d}z}{z} \wedge\frac{\mathrm{d}t}{t}.\]
So this 3-form gives a foliation of rank 1. Moreover, the associated saturated 3-form is
\[xyzt\Bigg{(}(2x+2y-3xy)\frac{\mathrm{d}x}{x}\wedge\frac{\mathrm{ d}y}{y}\wedge\frac{\mathrm{d}z}{z} +(2x+2y-6xy)\frac{\mathrm{d}x}{x}\wedge\frac{\mathrm{d}y}{y} \wedge\frac{\mathrm{d}t}{t}\] \[\quad-x^{4}y^{4}\frac{\mathrm{d}x}{x}\wedge\frac{\mathrm{d}z}{z} \wedge\frac{\mathrm{d}t}{t}+x^{4}y^{4}\frac{\mathrm{d}y}{y}\wedge\frac{ \mathrm{d}z}{z}\wedge\frac{\mathrm{d}t}{t}\Bigg{)}\]
If we blow up along \(Z=(x=y=0)\), then the vanishing order of the pullback form along the exceptional divisor \(E\) is 2. Thus, the foliated discrepancy is \(\operatorname{codim}Z-1-2=-1\) and \(E\) is invariant. Therefore, this foliation is not (log) canonical.
**Theorem 2.13**.: _Let \((\mathcal{F},\Delta=\sum a_{i}D_{i})\) be foliated log smooth of type one with \(a_{i}\leq 1\). Suppose \(\operatorname{mld}(\mathcal{F},\Delta)\geq 0\). Then the following statements hold:_
1. _For any stratum_ \(Z\) _of_ \(\operatorname{Sing}(\mathcal{F})\)_, the divisor_ \(D_{i}\) _is invariant if_ \(Z\subset D_{i}\)_._
2. _If_ \(D_{i}\) _is invariant, then_ \(a_{i}\leq 0\)_._
3. \[\operatorname{mld}(\mathcal{F},\Delta)=\min\left\{\min_{i\neq j,\,D_{i}\cap D_ {j}\neq\emptyset}\{1-a_{i}-a_{j}\},\min_{i}\{\iota(D_{i})-a_{i}\},\min_{Z\in S }\{-\sum_{i\in I(Z)}a_{i}\},1\right\}\]
_where_ \(S\) _is the set of all strata of_ \(\operatorname{Sing}(\mathcal{F})\)_, and_ \(I(Z)=\{i\mid Z\subset D_{i}\}\)_. We set the summation over the empty set to be zero. Note that the third term matters only when the summation is over the empty set, that is, when_ \(\operatorname{mld}(\mathcal{F},\Delta)=0\)_._
Proof.: (1) follows directly from the definition of the foliated log smooth pair. (2) is clear since \(\operatorname{mld}(\mathcal{F},\Delta)\geq 0\). Now let us proceed to prove (3). Let \(r(\mathcal{F},\Delta)\) denote the right-hand side of the equality. If \(D_{i}\) is noninvariant, then \(\iota(D_{i})-a_{i}=1-a_{i}\geq 0\) by assumption. On the other hand, if \(D_{i}\) is invariant, then by (2), \(\iota(D_{i})-a_{i}=-a_{i}\geq 0\). Note that blowing up the closure of a stratum \(Z\) of \(\operatorname{Sing}(\mathcal{F})\) introduces an exceptional divisor with discrepancy \(-\sum_{i\in I(Z)}a_{i}\). If \(D_{i}\cap D_{j}\neq\emptyset\) for \(i\neq j\), then blowing up \(D_{i}\cap D_{j}\) results in \(a(E,\mathcal{F},\Delta)=1-a_{i}-a_{j}\) if \(D_{i}\cap D_{j}\) is not contained in \(\operatorname{Sing}(\mathcal{F})\) and \(-a_{i}-a_{j}\) if \(D_{i}\cap D_{j}\) is contained in \(\operatorname{Sing}(\mathcal{F})\). In the latter case, \(D_{i}\) and \(D_{j}\) are both invariant by (1). So, we have \(\operatorname{mld}(\mathcal{F},\Delta)\leq r(\mathcal{F},\Delta)\).
Let \(D\) be any exceptional divisor for some birational morphism \(f:Y\to X\). By Lemma 2.5, simple singularities are non-dicritical, so \(\iota(D)=0\). It suffices to show that \(a(D,\mathcal{F},\Delta)\geq r(\mathcal{F},\Delta)\). By Zariski's lemma (cf. [12, Lemma 2.45]), we can assume that \(D\) is obtained by a sequence of, say \(\alpha\), blow-ups along smooth centers. We will prove this by induction on \(\alpha\).
When \(\alpha=1\), by shrinking \(X\), we can assume that \(f(D)\) is a smooth subvariety. Note that shrinking \(X\) does not affect \(r(\mathcal{F},\Delta)\). Now, blowing up \(f(D)\) results in \(f_{1}:X_{1}=\operatorname{Bl}_{f(D)}X\to X\) with exceptional divisor \(E_{1}\). After re-indexing, we can assume that \(f(D)\subset D_{i}\) if and only if \(1\leq i\leq b\) for some \(b\leq k=\operatorname{codim}_{X}f(D)\). Let \(Z\) be the minimal stratum of the singular locus of \(\mathcal{F}\) that contains \(f(D)\), and put \(t=n-\dim Z\). We consider the following cases:
1. If \(b=0\), then \(a(E_{1},\mathcal{F},\Delta)=k-1-(t-1)=k-t\). If \(k>t\), then \(k-t\geq 1\geq r(\mathcal{F},\Delta)\). If not, then \(k=t\) and thus \(f(D)=Z\), meaning that \(Z\) is not contained in \(\operatorname{Supp}(\Delta)\). Therefore, \(a(E_{1},\mathcal{F},\Delta)=0=r(\mathcal{F},\Delta)\) as the summation over an empty set is defined as \(0\).
2. If \(b=1\), then \(a(E_{1},\mathcal{F},\Delta)=k-t-a_{1}\). If \(k-t\geq 1\), then \(a(E_{1},\mathcal{F},\Delta)=k-t-a_{1}\geq 1-a_{1}\geq r(\mathcal{F},\Delta)\). If \(k=t\), then \(f(D)=Z\subset D_{1}\) and thus \(a(E_{1},\mathcal{F},\Delta)=-a_{1}\geq r(\mathcal{F},\Delta)\).
3. If \(b\geq 2\), then \(f(D)\subset Z\cap\bigcap_{i=1}^{b}D_{i}\), implying \(n-k\leq n-t-b\). We have \[a(E_{1},\mathcal{F},\Delta) =k-t-\sum_{i=1}^{b}a_{i}\] \[=k-b-t+\sum_{i=1}^{b}(1-a_{i})\] \[\geq 1-a_{1}+1-a_{2}\] \[\geq 1-a_{1}-a_{2}\geq r(\mathcal{F},\Delta).\]

Now we define \(\Delta_{1}\) on \(X_{1}\) by \(K_{\mathcal{F}_{1}}+\Delta_{1}=f_{1}^{*}(K_{\mathcal{F}}+\Delta)\); then \[r(\mathcal{F}_{1},\Delta_{1}) =\min\{r(\mathcal{F},\Delta),a(E_{1},\mathcal{F},\Delta),1+a(E_{1},\mathcal{F},\Delta)-\max_{D_{i}\cap f(D)\neq\emptyset}a_{i}\}\] \[\geq\min\{r(\mathcal{F},\Delta),a(E_{1},\mathcal{F},\Delta)\}\] \[\geq r(\mathcal{F},\Delta).\] By induction, we have \(a(D,\mathcal{F},\Delta)\geq r(\mathcal{F}_{1},\Delta_{1})\geq r(\mathcal{F},\Delta)\). Therefore, we obtain the desired equality.
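The right-hand side of the formula in Theorem 2.13(3) is a finite minimum and can be evaluated mechanically once the combinatorial data of the pair is recorded. The following Python sketch (illustrative only; the function name, the data encoding, and the toy input are ad hoc, and no attempt is made to verify the hypotheses of the theorem) transcribes the formula directly.

```python
def rhs_theorem_2_13(coeffs, invariant, intersecting_pairs, strata):
    """Evaluate min{ min_{i != j, D_i meets D_j} (1 - a_i - a_j),
                     min_i (iota(D_i) - a_i),
                     min_Z (-sum_{i in I(Z)} a_i),
                     1 }.

    coeffs            : dict i -> a_i
    invariant         : dict i -> True if D_i is foliation-invariant
    intersecting_pairs: pairs (i, j), i != j, with D_i and D_j meeting
    strata            : index sets I(Z) = {i : Z subset of D_i}, one per stratum Z
    """
    iota = {i: 0 if invariant[i] else 1 for i in coeffs}
    candidates = [1]
    candidates += [1 - coeffs[i] - coeffs[j] for (i, j) in intersecting_pairs]
    candidates += [iota[i] - coeffs[i] for i in coeffs]
    candidates += [-sum(coeffs[i] for i in I) for I in strata]
    return min(candidates)

# Toy input: two noninvariant divisors of coefficient 1/2 that meet, and one
# stratum of Sing(F) contained in no D_i (so I(Z) is empty).
print(rhs_theorem_2_13({1: 0.5, 2: 0.5},
                       {1: False, 2: False},
                       [(1, 2)],
                       [set()]))  # 0.0
```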
**Theorem 2.14**.: _Let \((\mathcal{F},\Delta)\) be a foliated pair on \(X\). Suppose that there is a log resolution \(\pi:Y\to X\) such that \((\mathscr{G},\Delta_{Y})\) is foliated log smooth of type one and \((\Delta_{Y})_{\operatorname{n-inv}}\) is smooth, where \(\mathscr{G}=\pi^{-1}\mathcal{F}\) and \(K_{\mathscr{G}}+\Delta_{Y}=\pi^{*}(K_{\mathcal{F}}+\Delta)\). Let \(E_{j}\) be the exceptional divisors on \(Y\) over \(X\). If \(\mathrm{mld}(\mathcal{F},\Delta)>0\), then_
\[\mathrm{mld}(\mathcal{F},\Delta)=\min\left\{\min_{j}\{\iota(E_{j})+a(E_{j}, \mathcal{F},\Delta)\},\min_{i}\{\iota(D_{i})-a_{i}\},1\right\}.\]
Proof.: Let \(K_{\mathscr{G}}+\Delta_{Y}=\pi^{*}(K_{\mathcal{F}}+\Delta)\) and \(\Delta_{Y}=\sum w_{i}W_{i}\). Suppose \(W_{1}\) and \(W_{2}\) are two irreducible components of \(\Delta_{Y}\) and they are not disjoint. Since \((\Delta_{Y})_{\text{n-inv}}\) is smooth, one of \(W_{1}\) and \(W_{2}\) must be invariant, say \(W_{1}\), and thus \(w_{1}\leq 0\). Hence,
\[1-w_{1}-w_{2}\geq 1-w_{2}\geq\iota(W_{2})-w_{2}\geq\min\{\iota(W_{1})-w_{1}, \iota(W_{2})-w_{2}\}.\]
Then
\[\mathrm{mld}(\mathcal{F},\Delta) =\mathrm{mld}(\mathscr{G},\Delta_{Y})\] \[=\min\left\{1,\min_{i}\{\iota(W_{i})-w_{i}\},\min_{j\neq j^{ \prime},W_{j}\cap W_{j^{\prime}}\neq 0}\{1-w_{j}-w_{j^{\prime}}\}\right\}\] \[=\min\left\{1,\min_{i}\{\iota(W_{i})-w_{i}\}\right\}\] \[=\min\{1,\iota(D_{i})-a_{i},\iota(E_{j})+a(E_{j},\mathcal{F}, \Delta)\}.\]
## 3. Toric foliated pair
In this section, we assume \(\mathcal{F}_{W}\) is a toric foliation on a \(\mathbb{Q}\)-factorial complete toric variety \(X_{\Sigma}\). We show that a foliated log resolution always exists for toric foliated pairs, and thus we can detect all singularities involving discrepancies, such as terminal, canonical, log terminal, and log canonical singularities, at the level of a toric resolution.
### Foliated log resolution
**Definition 3.1**.: A _toric foliated pair_\((\mathcal{F}_{W},\Delta)\) consists of a toric foliation \(\mathcal{F}_{W}\) on a \(\mathbb{Q}\)-factorial complete toric variety \(X_{\Sigma}\) and a torus invariant \(\mathbb{R}\)-divisor \(\Delta\) on \(X_{\Sigma}\) whose coefficients belong to \([0,1]\).
**Proposition 3.2**.: _Any toric foliation \(\mathcal{F}_{W}\) on a smooth toric variety \(X_{\Sigma}\) of dimension \(n\) has only pre-simple singularities of type one, that is, it satisfies all conditions for simple singularities except for the non-resonance condition._
Proof.: Note that \(X_{\Sigma}\) is covered by open subsets \(U_{\sigma}\)'s where \(\sigma\in\Sigma(n)\). As \(X_{\Sigma}\) is smooth, all \(\sigma\)'s are smooth. We can write \(W=\bigcap_{i=1}^{c}H_{i}\) such that \(W\cap N=H_{i}\cap N\) where \(c\) is the corank of \(\mathcal{F}_{W}\) and \(H_{i}\)'s are distinct complex hyperplanes in \(N_{\mathbb{C}}\). Let \(\mathcal{F}_{i}\) be the foliation associated with \(H_{i}\). Then we have \(\mathcal{F}_{W}=\bigcap_{i=1}^{c}\mathcal{F}_{i}\).
Notice that \(H_{i}=\{v\in\mathcal{T}_{T,1}\mid\langle v,\sum_{j=1}^{n}a_{ij}\mathrm{d}x_{j} \rangle=0\}\) for all \(i\). Let \(\omega_{i}=\sum_{j}a_{ij}\frac{\mathrm{d}x_{j}}{x_{j}}\). Then \(\mathcal{F}_{W}|_{T}\) is defined by the \(T\)-invariant differential form \(\bigwedge_{i=1}^{c}\omega_{i}\). Extending to \(U_{\sigma}\cong\mathbb{A}^{n}\) for some \(\sigma\in\Sigma(n)\) and possibly re-indexing, \(\mathcal{F}_{W}|_{U_{\sigma}}\) is defined by \(\omega=(\prod_{j=1}^{m}x_{j})(\bigwedge_{i=1}^{c}\omega_{i})\) for some \(m\in\mathbb{N}\). So around the point \(V_{\sigma}\), \(\mathcal{F}_{W}\) is pre-simple.
Now we consider the point \(p\) defined by \(x_{j}=c_{j}\) for \(c_{j}\in\mathbb{C}\) and for all \(j\). After re-indexing, we can assume there exists a positive integer \(t\) such that \(c_{j}=0\) if and only if \(j\leq t\). Note that it is clear when \(t\geq m\). Thus, we will assume \(t<m\). It is worth noting that \(c\leq m\).
Let \(x_{j}^{\prime}=x_{j}-c_{j}\) for all \(j>t\); still denoting \(x_{j}^{\prime}\) by \(x_{j}\), we have
\[\omega_{i}=\sum_{j=1}^{t}a_{ij}\frac{\mathrm{d}x_{j}}{x_{j}}+\sum_{j=t+1}^{m} a_{ij}\frac{\mathrm{d}x_{j}}{x_{j}-c_{j}}=\sum_{j=1}^{t}a_{ij}\frac{\mathrm{d}x_{j}}{x_{j}} +\mathrm{d}g_{i}\]
where \(g_{i}\in\mathfrak{m}\setminus\mathfrak{m}^{2}\) and \(\mathfrak{m}\) is the maximal ideal in the formal power series ring \(\mathbb{C}\llbracket x_{1},\ldots,x_{n}\rrbracket\). If \(t<c\), then we consider \(x_{i}=x_{i}^{\prime}f_{i}\) for \(i\leq t\) where \(f_{i}\) will be determined later. Thus,
\[\omega_{i}=\sum_{j=1}^{t}\left(a_{ij}\frac{\mathrm{d}x_{j}^{\prime}}{x_{j}^{ \prime}}+a_{ij}\frac{\mathrm{d}f_{j}}{f_{j}}\right)+\mathrm{d}g_{i}.\]
After re-indexing, we can assume that the square matrix \((a_{ij})_{i=1,\,j=1}^{c,\,c}\) is invertible and then we can find \(f_{i}\) such that \(f_{i}\)'s are units in \(\mathbb{C}\llbracket x_{1},\ldots,x_{n}\rrbracket\) and \(\sum_{j=1}^{t}a_{ij}\frac{\mathrm{d}f_{j}}{f_{j}}+\mathrm{d}g_{i}=0\) for all \(1\leq i\leq t\). Therefore, \(\omega=(\prod_{j=1}^{t}x_{j}^{\prime})(\bigwedge_{i=1}^{c}\omega_{i}^{\prime})\) where
\[\omega_{i}^{\prime}=\begin{cases}\sum_{j=1}^{t}a_{ij}\frac{\mathrm{d}x_{j}^{ \prime}}{x_{j}^{\prime}}&\text{for }i\leq t\\ \mathrm{d}h_{i}^{\prime}&\text{for }t<i\leq c\end{cases}\]
where \(h_{i}^{\prime}\in\mathfrak{m}\setminus\mathfrak{m}^{2}\). After a linear change of coordinates and possibly saturation, we have formal coordinates \(x_{1}^{\prime},\ldots,x_{n}^{\prime}\) around \(p\) such that \(\omega=(\prod_{j=1}^{t}x_{j}^{\prime})(\bigwedge_{i=1}^{t}\omega_{i}^{\prime}) \wedge(\bigwedge_{i=t+1}^{c}\mathrm{d}x_{i}^{\prime})\).
**Proposition 3.3** (\((\dagger)\Leftrightarrow\text{simple}\)).: _Suppose \(X_{\Sigma}\) is a smooth toric variety of dimension \(n\). \(\mathcal{F}_{W}\) has only simple singularities (of type one) if and only if \((\Sigma,W)\) satisfies the condition \((\dagger)\)._
Proof.: By Proposition 3.2, around any point \(p\), there are formal coordinates \(x_{1},\ldots,x_{n}\) such that \(\mathcal{F}_{W}\) is defined by \(\omega=(\prod_{j=1}^{t}x_{j})(\bigwedge_{i=1}^{t}\omega_{i})\wedge(\bigwedge_ {i=t+1}^{c}\mathrm{d}x_{i})\) where \(\omega_{i}=\sum_{j=1}^{t}a_{ij}\frac{\mathrm{d}x_{j}}{x_{j}}\).
Now suppose \(\mathcal{F}_{W}\) has only simple singularities, that is, the tuple \((a_{i1},\ldots,a_{it})\) satisfies the non-resonance condition for all \(i\) and \(t\). If there is a cone \(\tau\in\Sigma\) with \(\tau\not\subset W\) such that \(\operatorname{relint}(\tau)\cap W\cap N\neq\emptyset\), then we consider \(p\) as a general point on \(V_{\tau}\). Thus, an element in \(\operatorname{relint}(\tau)\cap W\cap N\) gives a non-trivial resonance relation, which is impossible. Therefore, \((\Sigma,W)\) satisfies the condition \((\dagger)\).
On the other hand, suppose \((\Sigma,W)\) satisfies the condition \((\dagger)\). If there are positive integers \(n_{j}\) such that \(\sum_{j=1}^{t}n_{j}a_{ij}=0\) for some \(i\) and \(t\) where all \(a_{ij}\neq 0\), then there is an element in \(\operatorname{relint}(\tau)\cap H_{i}\cap N\) for some \(i\) where \(\tau\in\Sigma\) with \(V_{\tau}=\{x_{1}=\ldots=x_{t}=0\}\). As in the proof of Proposition 3.2, we can choose \(H_{i}\) such that \(H_{i}\cap N=W\cap N\). Thus, there is an element in \(\operatorname{relint}(\tau)\cap W\cap N\) as well. By the condition \((\dagger)\), we have \(\tau\subset W\) and therefore all \(a_{ij}\) are zero, which is impossible. Hence, \(\mathcal{F}_{W}\) has only simple singularities.
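A minimal example of how condition \((\dagger)\) can fail, and how it is restored by a star subdivision as in the next proposition, is the fan of \(\mathbb{C}^{2}\) with \(W=\mathbb{C}\cdot(1,1)\): here \(\mathcal{F}_{W}\) is the radial foliation defined by \(y\,\mathrm{d}x-x\,\mathrm{d}y\), whose coefficient tuple \((1,-1)\) is resonant, in accordance with Proposition 3.3. The sketch below (illustrative only; the helper function and the use of exact arithmetic via sympy are ad hoc) checks that the lattice point \((1,1)\) lies in \(W\) and in the relative interior of \(\sigma=\operatorname{Cone}(e_{1},e_{2})\) while \(e_{1}\notin W\), so \((\Sigma,W)\) violates \((\dagger)\); the star subdivision along the ray through \((1,1)\) replaces \(\sigma\) by \(\operatorname{Cone}(e_{1},(1,1))\) and \(\operatorname{Cone}((1,1),e_{2})\), and the resulting fan satisfies \((\dagger)\).

```python
from sympy import Matrix

def in_span(v, basis):
    """Exact check that the column vector v lies in the span of the basis vectors."""
    A = Matrix.hstack(*basis)
    return A.rank() == Matrix.hstack(A, v).rank()

W_basis    = [Matrix([1, 1])]                    # W = C.(1,1) inside N_C = C^2
sigma_gens = [Matrix([1, 0]), Matrix([0, 1])]    # sigma = Cone(e1, e2)

v = Matrix([1, 1])
coords = Matrix.hstack(*sigma_gens).solve(v)     # coordinates of v w.r.t. e1, e2
print(list(coords))                              # [1, 1]: both positive, so v in relint(sigma)
print(in_span(v, W_basis))                       # True : v lies in W (and in N)
print(in_span(sigma_gens[0], W_basis))           # False: e1 not in W, so sigma is not contained in W
```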
**Proposition 3.4**.: _For any toric foliation \(\mathcal{F}_{W}\) on a complete \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\), we can perform a sequence of star subdivisions along rational rays to get a fan \(\Sigma^{\prime}\) such that \((\Sigma^{\prime},W)\) satisfies the condition \((\dagger)\)._
Proof.: We will proceed by induction on the dimension \(k\) of the cone \(\sigma\in\Sigma\). If \(k=1\), it is clear that we have \(\sigma\subset W\) provided that \(\mathrm{relint}(\sigma)\cap W\cap N\neq\emptyset\).
**Claim**.: _Let \(k\geq 2\) be an integer. Suppose for any \(\sigma\in\Sigma(\ell)\) with \(\ell<k\), we have \(\sigma\subset W\) if \(\mathrm{relint}(\sigma)\cap W\cap N\neq\emptyset\). Then for any \(\sigma\in\Sigma(k)\) with \(\mathrm{relint}(\sigma)\cap W\cap N\neq\emptyset\), we have either \(\sigma\subset W\) or \(W\cap\sigma\) is a rational ray which intersects \(\mathrm{relint}(\sigma)\)._
Proof of Claim.: Suppose there are distinct rational rays \(\rho_{1}\), \(\rho_{2}\) in \(W\cap\sigma\) and each of them is not contained in any proper face of \(\sigma\). Then \(\mathrm{Cone}(\rho_{1},\rho_{2})\cap(\sigma\setminus\mathrm{relint}(\sigma))\) consists of exactly two distinct rational rays, say \(\rho_{1}^{\prime}\), \(\rho_{2}^{\prime}\). There exist proper faces \(\tau_{1}\) and \(\tau_{2}\) of \(\sigma\) such that \(\rho_{i}^{\prime}\cap\mathrm{relint}(\tau_{i})\neq\emptyset\) for \(i=1\), \(2\). If there exists a proper face \(\tau\) of \(\sigma\) containing \(\tau_{1}\) and \(\tau_{2}\), then we have
\[\rho_{i}\subset\mathrm{Cone}(\rho_{1}^{\prime},\rho_{2}^{\prime})\subset \mathrm{Cone}(\tau_{1},\tau_{2})\subset\tau,\]
which is absurd. Hence \(\langle\tau_{1},\tau_{2}\rangle=\sigma\). By assumption, we have \(\tau_{i}\subset W\) for \(i=1\), \(2\) and thus \(\sigma\subset W\). This completes the proof of the claim.
Thus, we inductively define \(\Sigma_{k}\) and
\[S_{k}:=\{\rho\text{ is a rational ray in }N_{\mathbb{R}}\mid\rho\not\in\Sigma_{k-1}(1), \rho=W\cap\sigma\text{ for some }\sigma\in\Sigma_{k-1}(k)\},\]
for \(k\geq 2\) with \(\Sigma_{1}=\Sigma\), where \(\Sigma_{k}\) is the fan obtained from \(\Sigma_{k-1}\) by performing a sequence of star subdivisions along the rays (in any order) in \(S_{k}\). Therefore, for any \(\sigma\in\Sigma_{n}\), we have either \(\sigma\subset W\) or \(\operatorname{relint}(\sigma)\cap W\cap N=\emptyset\); that is, \((\Sigma_{n},W)\) satisfies the condition \((\dagger)\).
\(a_{\rho}>0\) for some \(\rho\prec\sigma\). In this case, we can choose a primitive vector \(v\in\operatorname{int}(\sigma)\cap N\) such that \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}(v)<0\). Therefore, \((\mathcal{F}_{W},\Delta)\) is not log canonical.
(2) follows from the definition and Proposition 3.7.
(3) and (4) are direct consequences of Proposition 3.7, with the aid of Proposition 3.3.
**Proposition 3.9**.: _Let \((\mathcal{F}_{W},\Delta)\) be a toric foliated pair on a \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\)._
1. \((\mathcal{F}_{W},\Delta)\) _is foliated log smooth if and only if_ \(\Sigma\) _is smooth and_ \((\Sigma,W)\) _satisfies the condition_ \((\dagger)\)_. Note that_ \((\mathcal{F}_{W},\Delta)\) _may not be log canonical._
2. \((\mathcal{F}_{W},\Delta)\) _is F-dlt if and only if the following statements hold true:_ 1. \(\operatorname{Supp}(\Delta)\subset\bigcup_{\rho\subset W}D_{\rho}\)_._ 2. _For any_ \(\sigma\in\Sigma\) _satisfying_ \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}|_{\sigma}=0\)_, we have_ \(\sigma\) _is smooth and non-dicritical. The latter means that either_ \(\operatorname{relint}(\sigma)\cap W\cap N=\emptyset\) _or_ \(\sigma\subset W\)_._
Proof.:
1. We observe that \((X_{\Sigma},\Delta)\) is log smooth for any smooth toric variety and any torus invariant divisor \(\Delta\). In this case, any torus invariant divisor is a simple normal crossing divisor. Proposition 3.3 establishes the equivalence between condition \((\dagger)\) and \(\mathcal{F}_{W}\) having only simple singularities.
2. Regarding the condition (a), it is equivalent to Definition 2.9(a). For condition (b), suppose there exists a cone \(\sigma\in\Sigma\) satisfying \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}|_{\sigma}=0\) such that \(\sigma\) is singular or dicritical. Then any foliated log resolution \(X_{\Sigma^{\prime}}\to X_{\Sigma}\) would give rise to a ray \(\bar{\rho}\in\Sigma^{\prime}(1)\setminus\Sigma(1)\) such that \(\bar{\rho}\subset\sigma\). Consequently, \(a(D_{\bar{\rho}},\mathcal{F}_{W},\Delta)=-\iota(D_{\bar{\rho}})\) by Proposition 3.7, and \((\mathcal{F}_{W},\Delta)\) is not F-dlt. Now let us suppose that every \(\sigma\in\Sigma\) satisfying \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}|_{\sigma}=0\) is smooth and non-dicritical. We have the following: **Claim.**_The fan \(\Sigma\) already satisfies the condition \((\dagger)\)._ _Proof of Claim._ Let \(\sigma\in\Sigma\) with \(\operatorname{relint}(\sigma)\cap W\cap N\neq\emptyset\). Fix a primitive element \(v\in\operatorname{relint}(\sigma)\cap W\cap N\). Let \(\sigma=\langle v_{1},\dots,v_{s}\rangle\) with \(s=\dim\sigma\) and \(v=\sum_{i=1}^{s}a_{i}v_{i}\) where the \(a_{i}\)'s are all non-zero. If \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}|_{\sigma}=0\), then \(\sigma\subset W\) since \(\sigma\) is non-dicritical by assumption. On the other hand, if \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}|_{\sigma}\neq 0\), then there is a \(v_{i}\) such that \(v_{i}\in W\). After re-indexing, we can assume that \(v_{i}\in W\) if and only if \(1\leq i\leq\ell\) for some \(\ell\leq s\). Assume for the sake of contradiction that \(\ell<s\). Let \(v^{\prime}:=\sum_{i=\ell+1}^{s}a_{i}v_{i}=v-\sum_{i=1}^{\ell}a_{i}v_{i}\in W\). Thus, \(v^{\prime}\in\operatorname{relint}(\tau)\cap W\) where \(\tau=\langle v_{\ell+1},\dots,v_{s}\rangle\) and \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}|_{\tau}=0\). As \(\tau\) is non-dicritical again by assumption, we have \(\tau\subset W\), which is absurd. Therefore, \(\ell=s\) and \(\sigma\subset W\). This completes the proof of the claim. Now we perform a sequence of star subdivisions along some rational rays to obtain a smooth fan \(\Sigma^{\prime}\). We may assume that if \(\rho\in\Sigma^{\prime}(1)\setminus\Sigma(1)\) and \(\rho\subset\operatorname{relint}(\tau)\cup\{0\}\) for some \(\tau\in\Sigma\), then \(\tau\) is singular. Thus, \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}|_{\tau}\neq 0\). As a result, \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}(u_{\rho})>0\) where \(u_{\rho}\) is the primitive generator in \(\rho\), and \(a(D_{\rho},\mathcal{F}_{W},\Delta)>-\iota(D_{\rho})\). Note that \((\Sigma^{\prime},W)\) also satisfies the condition \((\dagger)\). By (1), \((\mathcal{F}_{W},\Delta)\) on \(X_{\Sigma^{\prime}}\) is foliated log smooth.
**Corollary 3.10**.: _Let \((\mathcal{F}_{W},\Delta)\) be a toric foliated pair on a \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\). If \((\mathcal{F}_{W},\Delta)\) is F-dlt, then it is log canonical._
**Proposition 3.11**.: _Let \((\mathcal{F}_{W},\Delta)\) be a toric foliated pair on a \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\). If \((\mathcal{F}_{W},\Delta)\) on \(X_{\Sigma}\) is F-dlt, then \(\mathcal{F}_{W}\) is non-dicritical._
Proof.: This is exactly the content of the claim in Proposition 3.9(2).
**Proposition 3.12**.: _Let \(\mathcal{F}_{W}\) be a toric foliation on a \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\). If \(\mathcal{F}_{W}\) is canonical, then it is non-dicritical._
Proof.: Suppose \(\mathcal{F}_{W}\) is dicritical. Then there exists \(\sigma\in\Sigma\) such that there exists a \(v\in\operatorname{relint}(\sigma)\cap W\cap N\) and \(\sigma\not\subset W\). Let \(\sigma=\langle v_{1},\dots,v_{s}\rangle\) with \(s=\dim\sigma\) and \(v=\sum_{i=1}^{s}a_{i}v_{i}\) where \(a_{i}\)'s are all non-zero as \(v\in\operatorname{relint}(\sigma)\). After re-indexing, there is an \(\ell<s=\dim\sigma\) such that \(v_{i}\in W\) if and only if \(1\leq i\leq\ell\).
Then \(v^{\prime}=\sum_{i=\ell+1}^{s}a_{i}v_{i}=v-\sum_{i=1}^{\ell}a_{i}v_{i}\in W\cap\Pi_{\sigma,W}\setminus\{0\}\). Moreover, the ray \(\mathbb{R}_{\geq 0}v^{\prime}\) generated by \(v^{\prime}\) is contained in \(W\cap\Pi_{\sigma,W}\) as well. Since it is a rational ray, there exists a non-zero lattice point \(\tilde{v}\in\Pi_{\sigma,W}\cap W\cap N\setminus\{0\}\) that is not contained in the facet of \(\Pi_{\sigma,W}\) that does not contain the origin. Thus, by Proposition 3.8(3), \(\mathcal{F}_{W}\) is not canonical, which contradicts the assumption.
**Proposition 3.13**.: _Let \(\mathcal{F}_{W}\) be a toric foliation on a \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\)._
1. _If_ \(\mathcal{F}_{W}\) _is terminal, then it is smooth in codimension_ \(2\)_, that is, the singular locus of_ \(\mathcal{F}_{W}\) _has codimension at least_ \(3\)_._
2. _Suppose_ \(\operatorname{rank}(\mathcal{F}_{W})=1\)_. If_ \(\mathcal{F}_{W}\) _is terminal at the generic point of_ \(V_{\sigma}\) _for some_ \(\sigma\in\Sigma\)_, then_ \(V_{\sigma}\not\subset\operatorname{Sing}(\mathcal{F}_{W})\)_._
Proof.: Let us consider any cone \(\sigma=\operatorname{Cone}(u_{1},u_{2})\in\Sigma(2)\) where \(u_{1}\), \(u_{2}\) are primitive. Since \(\mathcal{F}_{W}\) is terminal at the generic point of \(V_{\sigma}\), we have \(\Pi_{\sigma,W}\neq\sigma\) by Proposition 3.8(4). Consequently, one of \(u_{1}\) and \(u_{2}\) is contained in \(W\). Therefore, we have the following two cases:
1. If both of them are contained in \(W\), then \(\sigma\subset W\) and thus, \(V_{\sigma}\) is not contained in the singular locus of \(\mathcal{F}_{W}\) by Proposition 1.11.
2. If only one of them is contained in \(W\), let us say \(u_{1}\in W\), then \(W\cap\mathbb{C}\sigma=\mathbb{C}u_{1}\) and thus \(V_{\sigma}\) is not contained in the singular locus of \(\mathcal{F}_{W}\) by Proposition 1.11.
For (2), let \(\sigma=\operatorname{Cone}(u_{1},\dots,u_{s})\in\Sigma\) where \(s=\dim\sigma\). Since \(\mathcal{F}_{W}\) is terminal at the generic point of \(V_{\sigma}\), we have \(\Pi_{\sigma,W}\neq\sigma\) by Proposition 3.8(4). Consequently, one of \(u_{1},\dots,u_{s}\) is contained in \(W\), say \(u_{1}\). As \(\dim W=1\), we have \(W=\mathbb{C}u_{1}\). Thus, by Proposition 1.11, \(V_{\sigma}\) is not contained in the singular locus of \(\mathcal{F}_{W}\).
### F-dlt modification
Following [10, Definition 3.28], we introduce the following definition of F-dlt modification for foliated pairs of any rank. Moreover, we demonstrate the existence of an F-dlt modification for toric foliated pairs.
**Definition 3.14**.: Let \((\mathcal{F},\Delta=\sum_{i}a_{i}\Delta_{i})\) be a foliated pair on an arbitrary normal variety. We denote
\[\Delta_{\text{\rm n-inv}}=\sum_{i:\,\Delta_{i}\text{ is noninvariant}}a_{i} \Delta_{i}.\]
An _F-dlt modification_ for \((\mathcal{F},\Delta)\) is a birational projective morphism \(\pi:Y\to X\) such that, if \(\mathcal{G}\) is the pullback foliation on \(Y\), then the foliated pair \((\mathcal{G},\pi_{*}^{-1}\Delta_{\text{\rm n-inv}}+\sum_{i}\iota(E_{i})E_{i})\) is F-dlt, where the sum is over all \(\pi\)-exceptional divisors, and
\[K_{\mathcal{G}}+\pi_{*}^{-1}\Delta+\sum_{i}\iota(E_{i})E_{i}+F=\pi^{*}(K_{ \mathcal{F}}+\Delta)\]
for some effective \(\pi\)-exceptional divisor \(F\) on \(Y\).
The existence of an F-dlt modification is shown for corank \(1\) foliated pairs on normal projective varieties of dimension at most three in [10, Theorem 8.1]. We demonstrate the existence of an F-dlt modification for toric foliated pairs of any rank on a complete toric variety.
**Theorem 3.15**.: _Let \((\mathcal{F}_{W},\Delta)\) be a toric foliated pair on a complete toric variety \(X_{\Sigma}\). Then \((\mathcal{F}_{W},\Delta)\) admits an F-dlt toric modification \(\pi:Y\to X\) such that if \(\mathcal{G}\) is the pullback foliation on \(Y\) then_
1. \(Y\) _is a_ \(\mathbb{Q}\)_-factorial toric variety and_
2. \(\mathcal{G}\) _is non-dicritical._
Proof.: By [20, Lemma 5.9], we have a small projective \(\mathbb{Q}\)-factorialization. So we may replace \(\Sigma\) by a simplicial fan and assume that \(\Sigma\) is simplicial. We will only use star subdivisions along some rational rays, and this will preserve \(\mathbb{Q}\)-factoriality.
Now by Proposition 3.4, we have a projective birational morphism \(\pi_{1}:X^{\prime}_{\Sigma^{\prime}}\to X_{\Sigma}\) such that \((\Sigma^{\prime},W)\) satisfies the condition \((\dagger)\). Let \(\mathcal{F}^{\prime}=\mathcal{F}_{W,\Sigma^{\prime}}\) and \(\Delta^{\prime}=\pi_{1}^{*}(K_{\mathcal{F}}+\Delta)-K_{\mathcal{F}^{\prime}}\). Note that \(\phi:=\phi_{(K_{\mathcal{F}}+\Delta)}=\phi_{(K_{\mathcal{F}^{\prime}}+\Delta^ {\prime})}\), \((\mathcal{F},\Delta_{\text{n-inv}})\) is log canonical, and the log discrepancies of \(\pi_{1}\)-exceptional divisors are nonpositive.
Next, we perform a sequence of star subdivisions along some rational rays in all cones \(\sigma\in\Sigma^{\prime}\) with \(\phi|_{\sigma}\leq 0\) to obtain a fan \(\widetilde{\Sigma}\) such that all cones \(\widetilde{\sigma}\in\widetilde{\Sigma}\) with \(\phi|_{\widetilde{\sigma}}\leq 0\) are smooth. This preserves \(\mathbb{Q}\)-factoriality and non-dicriticality, that is, \(Y=Y_{\widetilde{\Sigma}}\) is \(\mathbb{Q}\)-factorial and \(\mathcal{G}=\mathcal{G}_{W,\widetilde{\Sigma}}\) is non-dicritical. Let \(\pi_{2}:Y_{\widetilde{\Sigma}}\to X^{\prime}_{\Sigma^{\prime}}\) be the birational projective morphism and \(\pi=\pi_{1}\circ\pi_{2}\). Note that the log discrepancies of \(\pi_{2}\)-exceptional divisors are nonpositive. Therefore, we have
\[K_{\mathcal{G}}+\pi_{*}^{-1}\Delta+\sum_{i}\iota(E_{i})E_{i}+F=\pi^{*}(K_{ \mathcal{F}}+\Delta)\]
for some effective \(\pi\)-exceptional divisor \(F\) on \(Y\). In addition, we obtain that the foliated pair
\[\left(\mathcal{G},\pi_{*}^{-1}\Delta_{\text{n-inv}}+\sum_{i}\iota(E_{i})E_{i}\right)\]
is F-dlt, where the sum is over all \(\pi\)-exceptional divisors.
## 4. Toric foliated minimal model program
In this section, we will fix the following notations:
Let \(N\) be a lattice, \(X=X_{\Sigma}\) for some complete simplicial fan \(\Sigma\), and \(\mathcal{F}=\mathcal{F}_{W}\) for some complex vector subspace \(W\subset N_{\mathbb{C}}\). Let \(D\) be a Weil divisor. As toric varieties are Mori dream spaces, the \(D\)-minimal model program exists and terminates. Suppose \(V_{\omega}\) is \(D\)-negative, that is, \(D\cdot V_{\omega}<0\), where \(\omega=\text{Cone}(v_{1},\ldots,v_{n-1})\in\Sigma(n-1)\) is an \((n-1)\)-dimensional cone. Let \(v_{n}\) and \(v_{n+1}\) be two primitive vectors such that
\[\tau_{n+1}=\text{Cone}(v_{1},\ldots,v_{n})\quad\text{and}\quad\tau_{n}=\text{Cone}(v_{1},v_{2},\ldots,v_{n-1},v_{n+1})\]
are \(n\)-dimensional cones in \(\Sigma(n)\). We consider the non-trivial linear relation
\[\sum_{i=1}^{n+1}a_{i}v_{i}=0\]
with \(a_{n+1}=1\). After re-indexing, we can assume that
\[a_{i}\begin{cases}<0&\text{if }1\leq i\leq\alpha\\ =0&\text{if }\alpha+1\leq i\leq\beta\\ >0&\text{if }\beta+1\leq i\leq n+1\end{cases}\]
for some \(\alpha\), \(\beta\in\mathbb{N}\).
Let \(\tau(\omega)=\tau_{n}\cup\tau_{n+1}\), \(\tau_{j}=\text{Cone}(v_{1},\ldots,\widehat{v_{j}},\ldots,v_{n+1})\), and \(\sigma_{J}=\text{Cone}(v_{j}\mid j\in J)\) for any subset \(J\subset[1,n+1]\cap\mathbb{N}\). Also, we let \(J_{-}=[1,\alpha]\cap\mathbb{N}\) and \(J_{+}=[\beta+1,n+1]\cap\mathbb{N}\).
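The sign pattern of the \(a_{i}\) determines which of the three cases treated in the next subsections occurs: \(\alpha=1\) gives a divisorial contraction, \(\alpha=0\) a Mori fiber space, and \(\alpha\geq 2\) a flip. The following small Python sketch is not part of the paper; it simply solves the wall relation for three classical fans and reads off the sign pattern (the function name and the classification dictionary are ad hoc choices for illustration).

```python
import numpy as np

def wall_relation(vs):
    """vs = [v_1, ..., v_{n+1}]; the wall omega is spanned by v_1, ..., v_{n-1}."""
    *v_head, v_last = [np.array(v, dtype=float) for v in vs]
    # solve a_1 v_1 + ... + a_n v_n = -v_{n+1}, then set a_{n+1} = 1
    a = np.linalg.solve(np.column_stack(v_head), -v_last)
    a = np.append(a, 1.0)
    alpha = int(np.sum(a < -1e-9))
    kind = {0: "Mori fiber space", 1: "divisorial contraction"}.get(alpha, "flip")
    return a, kind

# (i)  Blow-up of the affine plane: omega = Cone(e1+e2); the exceptional divisor is contracted.
print(wall_relation([(1, 1), (1, 0), (0, 1)]))                     # a = (-1, 1, 1)
# (ii) P^1 x P^1 -> P^1: omega = Cone(e1); here a_1 = 0 and sigma_{J_+} is the line R*e2.
print(wall_relation([(1, 0), (0, 1), (0, -1)]))                    # a = (0, 1, 1)
# (iii) Atiyah flop: the wall Cone(e1, e2) is replaced by Cone(e3, e1+e2-e3) in the flip.
print(wall_relation([(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, -1)]))  # a = (-1, -1, 1, 1)
```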
### Divisorial contraction
In this subsection, we discuss the case of divisorial contraction, that is \(\alpha=1\).
**Proposition 4.1**.: _Let \((\mathcal{F}_{W},\Delta)\) be a log canonical pair and \(\pi:X_{\Sigma}\to Y_{\Sigma^{\prime}}\) be a \((K_{\mathcal{F}_{W}}+\Delta)\)-negative extremal contraction, which is a divisorial contraction. If the foliation \(\mathcal{F}_{W}\) on \(X_{\Sigma}\) is non-dicritical, then so is \(\mathcal{F}_{W_{Y}}\). If furthermore \((\mathcal{F}_{W},\Delta)\) is F-dlt, then \((\mathcal{F}_{W_{Y}},\Delta_{Y})\) is also F-dlt._
Proof.: Recall that \(\tau(\omega)\) has a decomposition \(\bigcup_{j\in J_{+}}\tau_{j}\) in \(\Sigma\) while \(\tau(\omega)=\tau_{1}\) in \(\Sigma^{\prime}\).
Let \(v\in\operatorname{relint}(\sigma_{J})\cap W\) where \(\sigma_{J}\prec\tau_{1}\), that is \(1\not\in J\). If \(\sigma_{J}\prec\tau_{j}\) for some \(j\in J_{+}\), then \(\sigma_{J}\subset W_{\mathbb{R}}\) as \(\mathcal{F}_{W}\) is non-dicritical. Otherwise, \(\sigma_{J_{+}}\subset\sigma_{J}\), that is, \(J_{+}\subset J\). Note that
\[0>(K_{\mathcal{F}_{W}}+\Delta)\cdot V_{\omega}=-\sum_{i=1}^{n+1}a_{i}\phi_{( K_{\mathcal{F}_{W}}+\Delta)}(v_{i})\]
and that \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}(v_{i})\geq 0\) for all \(i\) since \((\mathcal{F}_{W},\Delta)\) is log canonical. Hence there is a \(j\in J_{+}\) such that \(a_{j}>0\) and \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}(v_{j})>0\); in particular, \(v_{j}\in W\). Then we choose \(c\in\mathbb{R}_{>0}\) such that \(v^{\prime}=v-cv_{j}\in\operatorname{relint}(\sigma_{J\setminus\{j\}})\cap W\). Thus, by non-dicriticality of \(\mathcal{F}_{W}\), we have \(\sigma_{J\setminus\{j\}}\subset W\) and, therefore, \(\sigma_{J}\subset W\).
Now we assume that the foliated pair \((\mathcal{F}_{W},\Delta)\) is F-dlt. Suppose that there is a cone \(\sigma\in\Sigma^{\prime}\setminus\Sigma\) such that \(\phi_{(K_{\mathcal{F}_{Y}}+\Delta_{Y})}|_{\sigma}=0\). So we can assume \(\sigma=\sigma_{J}\) where \(J_{+}\subset J\). Note that
\[0>(K_{\mathcal{F}_{W}}+\Delta)\cdot V_{\omega}=-\sum_{i=1}^{n+1}a_{i}\phi_{( K_{\mathcal{F}_{W}}+\Delta)}(v_{i})=-a_{1}\phi_{(K_{\mathcal{F}_{W}}+\Delta)}(v_{1} )\geq 0,\]
which is impossible.
### Mori fiber space
In this subsection, we discuss the case when \(\alpha=0\).
**Proposition 4.2**.: _Let \((\mathcal{F}_{W},\Delta)\) be a log canonical pair and \(\pi:X_{\Sigma}\to Y_{\Sigma^{\prime}}\) be a \((K_{\mathcal{F}_{W}}+\Delta)\)-negative extremal contraction, which is a Mori fiber space. If the foliation \(\mathcal{F}_{W}\) on \(X_{\Sigma}\) is non-dicritical, then so is \(\mathcal{F}_{W_{Y}}\). Moreover, the fiber of \(\pi\) is tangent to \(\mathcal{F}_{W}\)._
_If furthermore \((\mathcal{F}_{W},\Delta)\) is F-dlt, then \((\mathcal{F}_{W_{Y}},\Delta_{Y})\) is also F-dlt._
Proof.: In this case, we have \(\alpha=0\). Since
\[0>(K_{\mathcal{F}_{W}}+\Delta)\cdot V_{\omega}=-\sum_{i=1}^{n+1}a_{i}\phi_{(K_{\mathcal{F}_{W}}+\Delta)}(v_{i})\]
and \(a_{i}\geq 0\) for all \(i\), there is a \(v_{j}\in W_{\mathbb{R}}\) for some \(j\geq\beta+1\). Note that \(\sigma_{J_{+}}\) is a linear space, denoted by \(U\), and has dimension \(n-\beta\). Since \(v_{j}\in W\), we have
\[\sum_{i\in J_{+}\setminus\{j\}}a_{i}v_{i}=-a_{j}v_{j}\in W\cap N_{\mathbb{Q}} \cap\operatorname{relint}(\sigma_{J_{+}\setminus\{j\}}).\]
As \(\tau_{j}\in\Sigma\) and \(\sigma_{J_{+}\setminus\{j\}}\prec\tau_{j}\), we have \(\sigma_{J_{+}\setminus\{j\}}\in\Sigma\). Due to the non-dicriticality of \(\mathcal{F}_{W}\), \(\sigma_{J_{+}\setminus\{j\}}\subset W\) and thus \(\sigma_{J_{+}}\subset W\). Therefore, the fiber of \(\pi\) is tangent to \(\mathcal{F}_{W}\) by Proposition 1.13.
Suppose \(\operatorname{relint}(\overline{\sigma})\cap(W/\mathbb{C}U)\neq\emptyset\), where \(\overline{\sigma}\) is the image in \(\Sigma^{\prime}\) of some cone \(\sigma\in\Sigma\). Then there is \(v\in\operatorname{relint}(\sigma)\cap W\). Thus, \(\sigma\subset W\) and, therefore, \(\overline{\sigma}\subset W/\mathbb{C}U\). Hence, \(\mathcal{F}_{W_{Y}}\) is non-dicritical.
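For orientation, here is a classical example, not taken from the paper: let \(X=\mathbb{P}^{1}\times\mathbb{P}^{1}\), whose fan has rays \(\pm e_{1},\pm e_{2}\), and take \(\omega=\operatorname{Cone}(e_{1})\), so that \(v_{2}=e_{2}\), \(v_{3}=-e_{2}\) and the wall relation reads \(0\cdot e_{1}+e_{2}+(-e_{2})=0\). Thus \(\alpha=0\), \(\beta=1\), and \(U=\sigma_{J_{+}}=\mathbb{R}e_{2}\) is indeed a linear space of dimension \(n-\beta=1\). The associated contraction is the first projection \(X\to\mathbb{P}^{1}\), and whenever \(e_{2}\in W\) the fibers \(V_{\omega}\simeq\{\mathrm{pt}\}\times\mathbb{P}^{1}\) are tangent to \(\mathcal{F}_{W}\); this also follows directly from Proposition 4.7, since in that case \(W+\mathbb{C}\operatorname{Cone}(e_{1})=N_{\mathbb{C}}\).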
### Flipping contraction
In this subsection, we study the case when \(\alpha\geq 2\).
**Proposition 4.3**.: _Let \((\mathcal{F}_{W},\Delta)\) be a log canonical pair and \(\pi:X_{\Sigma}\to Y_{\Sigma^{\prime}}\) be a \((K_{\mathcal{F}_{W}}+\Delta)\)-negative extremal contraction, which is a flipping contraction. Let \(\pi^{+}:X_{\Sigma^{+}}^{+}\to Y_{\Sigma^{\prime}}\) be its flip. If the foliation \(\mathcal{F}_{W}\) on \(X_{\Sigma}\) is non-dicritical, then so are \(\mathcal{F}_{W_{Y}}\) and \(\mathcal{F}_{W_{X^{+}}}\). If furthermore \((\mathcal{F}_{W},\Delta)\) is F-dlt, then \((\mathcal{F}_{W_{X^{+}}},\Delta^{+})\) is also F-dlt._
Proof.: In this case, we have \(\alpha\geq 2\). Suppose \(\operatorname{relint}(\sigma_{J})\cap W\cap N\neq\emptyset\) for some cone \(\sigma_{J}\in\Sigma_{X^{+}}\), that is, \(j\not\in J\) for some \(j\in J_{-}\). Let \(v\in\operatorname{relint}(\sigma_{J})\cap W\). If \(J_{+}\not\subset J\), then \(\sigma_{J}\prec\tau_{j}\) for some \(j\in J_{+}\). As \(\mathcal{F}_{W}\) is non-dicritical, \(\sigma_{J}\subset W\). If \(J_{+}\subset J\), then we notice that
\[0>(K_{\mathcal{F}_{W}}+\Delta)\cdot V_{\omega}=-\sum_{i=1}^{n+1}a_{i}\phi_{(K _{\mathcal{F}_{W}}+\Delta)}(v_{i})\]
and \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}(v_{i})\geq 0\) for all \(i\) as \((\mathcal{F}_{W},\Delta)\) is log canonical. So there is \(j\in J_{+}\) such that \(a_{j}>0\) and \(\phi_{(K_{\mathcal{F}_{W}}+\Delta)}(v_{j})>0\); in particular, \(v_{j}\in W\). Then we choose a \(c\in\mathbb{R}_{>0}\) such that \(v^{\prime}=v-cv_{j}\in\operatorname{relint}(\sigma_{J\setminus\{j\}})\cap W\). Thus, by non-dicriticality of \(\mathcal{F}_{W}\), we have \(\sigma_{J\setminus\{j\}}\subset W\) and, therefore, \(\sigma_{J}\subset W\). This shows that \(\mathcal{F}_{W_{X^{+}}}\) is non-dicritical. The non-dicriticality of \(\mathcal{F}_{W_{Y}}\) follows from that of \(\mathcal{F}_{W}\) and \(\mathcal{F}_{W_{X^{+}}}\).
Suppose now \((\mathcal{F}_{W},\Delta)\) is F-dlt and there is a cone \(\sigma\in\Sigma^{+}\setminus\Sigma\) such that \(\phi_{(K_{\mathcal{F}_{X^{+}}}+\Delta_{X^{+}})}|_{\sigma}=0\). So, we can assume \(\sigma=\sigma_{J}\) where \(J_{+}\subset J\). Note that \(0>(K_{\mathcal{F}}+\Delta)\cdot V_{\omega}=-\sum_{i=1}^{n+1}a_{i}\phi_{(K_{ \mathcal{F}_{W}}+\Delta)}(v_{i})=-\sum_{j\in J_{-}}a_{j}\phi_{(K_{\mathcal{F} _{W}}+\Delta)}(v_{j})\geq 0\), which is impossible.
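A standard example illustrating the combinatorics of the case \(\alpha\geq 2\) (again not taken from the paper) is the Atiyah flop: take \(N=\mathbb{Z}^{3}\), \(v_{1}=e_{1}\), \(v_{2}=e_{2}\), \(v_{3}=e_{3}\), \(v_{4}=e_{1}+e_{2}-e_{3}\), and the wall \(\omega=\operatorname{Cone}(v_{1},v_{2})\), so that \(\tau_{4}=\operatorname{Cone}(v_{1},v_{2},v_{3})\) and \(\tau_{3}=\operatorname{Cone}(v_{1},v_{2},v_{4})\). The relation \(-v_{1}-v_{2}+v_{3}+v_{4}=0\) gives \(\alpha=2\), \(J_{-}=\{1,2\}\) and \(J_{+}=\{3,4\}\), and the flip replaces the wall \(\sigma_{J_{-}}=\operatorname{Cone}(e_{1},e_{2})\) of \(\Sigma\) inside \(\tau(\omega)\) by the wall \(\sigma_{J_{+}}=\operatorname{Cone}(e_{3},e_{1}+e_{2}-e_{3})\) of \(\Sigma^{+}\).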
### Cone Theorem
In this subsection, we show the cone theorem for the log canonical toric foliated pairs. We first introduce the following definition:
**Definition 4.4**.: Let \(X\) be an arbitrary normal variety and \(\mathcal{F}\) be a foliation of any rank on \(X\). A subvariety \(Z\subset X\) is tangent to \(\mathcal{F}\) if, for any smooth resolution \(\pi:X^{\prime}\to X\) such that \(\pi^{-1}(Z)\) is a divisor, there is a (possibly analytic) subvariety \(Z^{\prime}\subset X^{\prime}\) that dominates \(Z\), has the same dimension as \(Z\), and is tangent to the pullback foliation \(\mathcal{F}^{\prime}\) in the sense of Definition 1.5.
**Remark 4.5**.: This generalizes [23, Definition 3.2] on the dimension of tangent subvarieties \(Z\) and agrees with [1, Subsection 3.4] for algebraically integrable toric foliations. We demonstrate in Proposition 4.6 that it also coincides with [1, Definition 2.12] for corank one non-dicritical foliations.
**Proposition 4.6**.: _Let \(X\) be an arbitrary normal variety and \(\mathcal{F}\) be a non-dicritical foliation of corank one on \(X\). A subvariety \(Z\subset X\) is tangent to \(\mathcal{F}\) if and only if for any birational morphism \(\pi:X^{\prime}\to X\) and any divisor \(E\) on \(X^{\prime}\) such that \(E\) dominates \(Z\), we have \(E\) is invariant under the pullback foliation \(\mathcal{F}^{\prime}=\pi^{-1}\mathcal{F}\)._
Proof.: We suppose first that \(Z\) is tangent to \(\mathcal{F}\). Let \(\pi:X^{\prime}\to X\) be a smooth resolution such that \(\pi^{-1}(Z)\) is a divisor. Since \(Z\) is tangent, there is a subvariety \(Z^{\prime}\subset X^{\prime}\) dominating \(Z\), with \(\dim Z^{\prime}=\dim Z\), and tangent to the pullback foliation \(\mathcal{F}^{\prime}\). As \(\pi^{-1}(Z)\) is a divisor, there is an exceptional divisor \(E\) over \(Z\) such that \(Z^{\prime}\subset E\). Around a general point \(p\) on \(Z^{\prime}\), there is an (analytic) open neighborhood \(U\) of \(p\) with a submersion \(f:U\to C\) such that \(\mathcal{F}^{\prime}|_{U}\) is the foliation induced by the submersion \(f\). Then \(U\) is a disjoint union of leaves of \(\mathcal{F}^{\prime}|_{U}\). If \(E\) is not invariant under \(\mathcal{F}^{\prime}\), then none of the leaves in \(U\) is contained in \(E\). Therefore, this introduces infinitely many separatrices around \(\pi(p)\), which contradicts the non-dicriticality of \(\mathcal{F}\). Hence, by [1, Remark 2.16], any exceptional divisor \(E\) dominating \(Z\) is invariant under the pullback foliation.
On the other hand, suppose \(\pi:X^{\prime}\to X\) is a smooth resolution with exceptional divisor \(E\) dominating \(Z\). Since \(E\) is invariant under the pullback foliation, any general subvariety \(Z^{\prime}\subset E\) is tangent to \(\mathcal{F}^{\prime}\). Therefore, we can choose \(Z^{\prime}\) dominating \(Z\) and hence \(Z\) is tangent to \(\mathcal{F}\).
We have the following proposition generalizing [23, Lemma 3.3]:
**Proposition 4.7**.: _Let \(\mathcal{F}_{W}\) be a toric foliation on a complete \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\) of dimension \(n\) with \(W\neq N_{\mathbb{C}}\). Then for any cone \(\tau\in\Sigma\), \(V_{\tau}\) is tangent to \(\mathcal{F}\) if and only if \(W+\mathbb{C}\tau=N_{\mathbb{C}}\)._
Proof.: Suppose \(W+\mathbb{C}\tau\neq N_{\mathbb{C}}\). Then we can choose a complex vector subspace \(W^{\prime}\subset N_{\mathbb{C}}\) of dimension \(n-1\) such that \(W\subset W^{\prime}\) and \(W^{\prime}+\mathbb{C}\tau\neq N_{\mathbb{C}}\). Thus, \(\mathbb{C}\tau\subset W^{\prime}\). We pick an element \(u\in\operatorname{relint}(\tau)\cap N\) and let \(\Sigma^{\prime}\) be the star subdivision of \(\Sigma\) along the ray \(\rho=\mathbb{R}_{\geq 0}u\). Then \(D_{\rho}\) is not \(\mathcal{F}_{W^{\prime}}\)-invariant, and therefore \(V_{\tau}\) is not tangent to \(\mathcal{F}_{W^{\prime}}\) by Proposition 4.6. Since \(W\subset W^{\prime}\), it follows that \(V_{\tau}\) is not tangent to \(\mathcal{F}_{W}\) either.
On the other hand, if \(W+\mathbb{C}\tau=N_{\mathbb{C}}\), then we can choose a complex vector subspace \(W^{\prime\prime}\subset W\) such that \(W^{\prime\prime}+\mathbb{C}\tau=N_{\mathbb{C}}\) and \(W^{\prime\prime}\cap\mathbb{C}\tau=\{0\}\) (such a choice is possible precisely because \(W+\mathbb{C}\tau=N_{\mathbb{C}}\)). Thus, \(V_{\tau}\not\subset\operatorname{Sing}(\mathcal{F}_{W^{\prime\prime}})\). Taking a toric resolution \(X_{\Sigma^{\prime\prime}}\to X_{\Sigma}\), \(\tau\) is divided into several cones. Let \(\tau^{\prime}\) be one of those cones whose dimension is \(\dim\tau\). Then we find that \(\tau^{\prime}\) is smooth, \(W^{\prime\prime}+\mathbb{C}\tau^{\prime}=N_{\mathbb{C}}\), and \(W^{\prime\prime}\cap\mathbb{C}\tau^{\prime}=\{0\}\). Note that there is a smooth cone \(\sigma\in\Sigma^{\prime\prime}\) of dimension \(n\) such that \(\tau^{\prime}\prec\sigma\). Let \(u_{1},\ldots,u_{n}\) be a \(\mathbb{Z}\)-basis for \(N\) such that \(\sigma=\operatorname{Cone}(u_{1},\ldots,u_{n})\) and \(\tau^{\prime}=\operatorname{Cone}(u_{1},\ldots,u_{s})\) where \(s=\dim\tau^{\prime}\). We denote \(\chi^{u_{i}}\) by \(x_{i}\). Since \(W^{\prime\prime}\cap\mathbb{C}\tau^{\prime}=\{0\}\), we have that \(V_{\tau^{\prime}}\) is \(\mathcal{F}_{W^{\prime\prime}}\)-invariant and \(V_{\tau^{\prime}}\not\subset\operatorname{Sing}(\mathcal{F}_{W^{\prime\prime}})\). Note that \(\dim V_{\tau^{\prime}}=n-s=\dim_{\mathbb{C}}W^{\prime\prime}\). So \(V_{\tau^{\prime}}\) is tangent to \(\mathcal{F}_{W^{\prime\prime}}\) in the sense of Definition 1.5. Furthermore, there is an open (analytic) neighborhood \(U\) of a general point \(p\in V_{\tau}\) and a submersion \(f:U\to B\) where \(B\) is a smooth manifold such that \(V_{\tau^{\prime}}\cap U\) is a fiber of \(f\). As \(V_{\tau^{\prime}}\) dominates \(V_{\tau}\), any general fiber \(S\) of \(f\) dominates \(V_{\tau}\) as well since the dominance condition is an open condition. Because \(S\) is general, \(S\not\subset\operatorname{Sing}(\mathcal{F}_{W})\). Therefore, \(S\) is tangent to \(\mathcal{F}_{W}\) in the sense of Definition 1.5, as \(S\) is tangent to \(\mathcal{F}_{W^{\prime\prime}}\) and \(W^{\prime\prime}\subset W\).
**Theorem 4.8**.: _Let \((\mathcal{F}_{W},\Delta)\) be a log canonical toric foliated pair on a complete \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\). Let \(C\) be any curve on \(X\) such that \((K_{\mathcal{F}}+\Delta)\cdot C<0\). Then \([C]=[M]+\alpha\) where \(M\) is a torus invariant curve tangent to the foliation and \(\alpha\) is a pseudo-effective class._
Proof.: By [12, Theorem 14-1-4], we have
\[[C]=\sum_{\omega\in\Sigma(n-1),\,V_{\omega}\text{ tangent to }\mathcal{F}}a_{ \omega}[V_{\omega}]+\sum_{\tau\in\Sigma(n-1),\,V_{\tau}\text{ not tangent to }\mathcal{F}}b_{\tau}[V_{\tau}]\]
where \(a_{\omega}\) and \(b_{\tau}\) are nonnegative real numbers, and \(\mathbb{R}_{\geq 0}[V_{\omega}]\)'s and \(\mathbb{R}_{\geq 0}[V_{\tau}]\)'s are extremal rays.
Suppose all \(a_{\omega}\)'s are zero. As \(V_{\tau}\) is not tangent to \(\mathcal{F}_{W}\), we have \(W\subset\mathbb{C}\tau\) by Proposition 4.7. Since
\[0>(K_{\mathcal{F}}+\Delta)\cdot C=-\sum_{\rho\in\Sigma(1),\,\rho\subset W}(1-d_ {\rho})D_{\rho}\cdot C\]
where \(\Delta=\sum_{\rho}d_{\rho}D_{\rho}\), there is a cone \(\tau\in\Sigma(n-1)\) and a ray \(\rho\in\Sigma(1)\) such that \(V_{\tau}\) is not tangent to \(\mathcal{F}\), \(D_{\rho}\cdot V_{\tau}>0\), and \(d_{\rho}\neq 1\). By [12, Proposition 14-1-5(i)], there is a cone \(\omega\in\Sigma(n-1)\) such that \(W\not\subset\mathbb{C}\omega\) and \([V_{\omega}]\in\mathbb{R}_{\geq 0}[V_{\tau}]\). Since \(\mathbb{C}\omega\) has dimension \(n-1\) and \(W\not\subset\mathbb{C}\omega\), we have \(W+\mathbb{C}\omega=N_{\mathbb{C}}\), so \(V_{\omega}\) is tangent to \(\mathcal{F}_{W}\) by Proposition 4.7. Hence the extremal ray \(\mathbb{R}_{\geq 0}[V_{\tau}]\) is generated by the class of a torus invariant curve tangent to the foliation, and \([C]\) can be written in the required form \([M]+\alpha\).
**Corollary 4.9**.: _Let \((\mathcal{F}_{W},\Delta)\) be a log canonical toric foliated pair on a complete \(\mathbb{Q}\)-factorial toric variety \(X_{\Sigma}\). Then_
\[\overline{\operatorname{NE}}(X)_{K_{\mathcal{F}_{W}}+\Delta<0}=\sum\mathbb{R}_{ \geq 0}[M_{i}]\]
_where \(M_{i}\)'s are torus invariant rational curves tangent to \(\mathcal{F}_{W}\)._
|
2306.02290
|
Simultaneous supersingular reductions of Hecke orbits
|
Fix a finite set of primes $\mathcal{S}$, for an abelian variety $A_0$ over
$\overline{\mathbb{Q}}$ (with extra structures) which has supersingular good
reductions at all $\ell\in\mathcal{S}$, we consider the simultaneous reduction
modulo $\ell\in\mathcal{S}$ of several copies of $p$-adic/prime-to-$N$ Hecke
orbits of $A_0$. We show that this simultaneous reduction map is always
surjective under certain rationality condition on the copies of the Hecke
orbits.
|
Xiaoyu Zhang
|
2023-06-04T07:42:09Z
|
http://arxiv.org/abs/2306.02290v2
|
# Simultaneous superspecial reductions of CM orbit of abelian varieties
###### Abstract.
Families of CM points in a Shimura variety contain important arithmetic information on their Zariski closure (as predicted by the Andre-Oort conjecture). In this article we study a variant of this problem and look at the reduction of families of CM points: for a principally polarized abelian variety \(A\) defined over \(\overline{\mathbb{Q}}\) with certain level structure and with CM by a CM field \(K\), we consider simultaneous superspecial reduction of a finite subset of the CM orbit of \(A\) at a finite set of primes. We observe a similar phenomenon as in Andre-Oort conjecture: when \(A\) runs through a family \(p\)-isogenies of such abelian varieties, we show that this reduction map is surjective onto the superspecial locus of the Siegel modular variety, under certain rationality condition on this finite subset of the CM orbit.
2010 Mathematics Subject Classification: 11G15,14G35,14K22,22D40
###### Contents
* 1 Introduction
* 2 CM abelian varieties
* 2.1 Siegel modular variety
* 2.2 Family of \(p\)-isogenies of CM points
* 2.3 CM orbit of CM points
* 3 Supersingular abelian varieties
* 3.1 Uniformization of supersingular abelian varieties
* 3.2 Reduction of CM points
* 4 The simultaneous superspecial reduction map
* 4.1 Commensurability criterion
* 4.2 Proof of the main result
## 1. Introduction
The theory of complex multiplication (CM) has a very long history, dating back to Gauss and Abel on elliptic functions. It was Kronecker who first proposed the postulate that the values of elliptic functions at torsion points should generate all the abelian extensions of a quadratic imaginary field. This is known as Kronecker's Jugendtraum and later became part of Hilbert's 12th problem. After the works of Hasse and Deuring on CM of elliptic functions and elliptic curves, it was Shimura, Taniyama and Weil who extended these results to abelian varieties ([1]). The fundamental theorem they obtained describes how the group \(\operatorname{Aut}(\mathbb{C}/E)\) acts on the CM abelian variety \(A\) and
its torsion points, where \(E\) is the reflex field of \(A\). Later Langlands ([10]), Tate ([12]) and Deligne ([13]) extended the description to the whole group \(\operatorname{Aut}(\mathbb{C}/\mathbb{Q})\).
The reduction of a CM abelian variety \(A\) modulo a prime \(\mathfrak{l}\) of \(E\) is an important part of the theory of complex multiplication. Deuring's celebrated reduction theorem (_cf._[10]) says that if \(A\) is an _elliptic curve_ over a number field with CM by \(K\), a quadratic imaginary number field, then for a place \(\mathfrak{l}\) of \(K\) over a rational prime \(\ell\) unramified in \(K\), where \(A\) has good reduction, the reduction \(\overline{A}\) of \(A\) modulo \(\mathfrak{l}\) is supersingular if and only if \(\ell\) is inert in \(K\). In this case, \(\operatorname{End}^{\circ}(\overline{A})\) is a quaternion algebra over \(\mathbb{Q}\) ramified exactly at \(\ell\) and \(\infty\). There are generalizations of this result to CM abelian varieties, which state that under certain conditions on the ramification of \(\ell\) in \(K=\operatorname{End}^{\circ}(A)\), the CM field of the abelian variety \(A\), the reduction \(\overline{A}\) is supersingular, resp., superspecial (see, for example, [2, 21]). Here a supersingular, resp., superspecial abelian variety over a finite field \(\mathbb{F}_{\ell^{r}}\) is an abelian variety over \(\mathbb{F}_{\ell^{r}}\) that is isogenous, resp., isomorphic over \(\overline{\mathbb{F}}_{\ell}\) to a product of supersingular elliptic curves.
One of the most fruitful modern approaches to studying abelian varieties is to put them in a family, that is, to study the moduli space of such varieties, which is known as a Shimura variety. The geometric properties of supersingular loci of Shimura varieties over \(\overline{\mathbb{F}}_{\ell}\) have important arithmetic applications, for example, in the proof of the local Langlands correspondence by Harris and Taylor ([14]). On the other hand, the CM/Galois/Hecke orbit of a CM abelian variety in a Shimura variety over \(\mathbb{C}\) or \(\overline{\mathbb{Q}}\) contains equally rich number theoretic information: the set of CM points inside a Shimura _subvariety_ is Zariski dense ([13]). The Andre-Oort conjecture says that the converse is also true ([11, Conjecture 2.3]):
**Conjecture 1.1**.: _Let \(S\) be a set of CM points in a Shimura variety \(\operatorname{Sh}_{\mathbf{K}}(G,X)(\mathbb{C})\). Then each irreducible component of the Zariski closure of \(S\) is a subvariety of Hodge type, that is, a translate of the image of a morphism of Shimura varieties \(\operatorname{Sh}_{\mathbf{K}^{\prime}}(G^{\prime},X^{\prime})(\mathbb{C}) \rightarrow\operatorname{Sh}_{\mathbf{K}}(G,X)(\mathbb{C})\)._
This is one of the deepest conjectures in Diophantine geometry concerning the geometric structure of Shimura varieties, which has only recently become a theorem thanks to the work of many mathematicians (see [15] for the claimed full proof).
**Remark 1.2**.: It is worthwhile pointing out that this conjecture is closely related to the (equi-)distribution properties of CM/Galois/Hecke orbits of a generic sequence of CM points in a Shimura variety. In fact, many of these results use deep theorems from non-archimedean ergodic theory, in particular on unipotent flows or the classification of ergodic measures (see, for example, [1, 1, 10, 11] and the references therein).
A natural variant/extension of the Andre-Oort conjecture is perhaps to look at _the reduction modulo a prime_ of a family of CM points. Which kind of reductions should we look at? The result of Deuring mentioned above indicates we could try those reductions which are supersingular. However, there is an even simpler choice: superspecial reductions. One sees easily from the definition that superspecial abelian varieties \(A\) are one extreme case of abelian varieties as they have the largest possible endomorphism ring, \(\operatorname{M}_{n}(\operatorname{End}(E))\), where \(n=\dim(A)\) and \(E\) is a supersingular elliptic curve (as opposed to the other extreme case, ordinary abelian varieties over finite fields, which have the smallest possible endomorphism ring). Moreover, since a supersingular abelian variety is always isogenous to some superspecial one, the study of the former often proceeds through the classification of the latter. Besides, the locus of superspecial points on a Shimura variety is always finite. All these features indicate it is of great interest to study the superspecial reductions of CM abelian varieties.
In this article, we will relate the above two important notions -- superspecial abelian varieties over \(\overline{\mathbb{F}}_{\ell}\) and CM abelian varieties over \(\overline{\mathbb{Q}}\) -- _in a family_. In particular, we consider the problem of how large the image of superspecial reduction of CM orbits of CM abelian varieties in a Shimura variety can be. As it turns out, the superspecial reductions exhibit very interesting arithmetic information on the CM orbit, just as in the Andre-Oort conjecture!
To describe our problem more precisely and introduce the main results, we need some notation: fix two positive integers \(n>1\) and \(N>0\); we write \(X_{\mathbf{K}}\) for the Siegel modular variety parametrising principally polarized abelian varieties over \(\mathbb{Z}[1/N]\) of dimension \(n\) with level structure \(\mathbf{K}\), where \(\mathbf{K}\) is a compact open subgroup of \(\mathrm{GSp}_{2n}(\widehat{\mathbb{Z}})\) containing the principal level-\(N\) congruence subgroup. Consider a CM field \(K\) of degree \(2n\) together with a CM type \(\Phi\), which embeds \(K\) into \(\mathrm{M}_{2n}(\mathbb{Q})\). To the CM pair \((K,\Phi)\), we associate a maximal torus \(\mathbf{T}_{K^{\circ}}\) of \(\mathbf{G}_{\mathbb{Q}}=\mathrm{GSp}_{2n/\mathbb{Q}}\). The complex points \(X_{\mathbf{K}}(\mathbb{C})\) can be identified with the double coset
\[X_{\mathbf{K}}(\mathbb{C})=\mathbf{G}(\mathbb{Q})\backslash\left(\mathbf{X} \times\mathbf{G}(\mathbb{A}_{f})/\mathbf{K}\right)\]
where \(\mathbf{X}\) is the Siegel double space of degree \(n\). A point \(x_{0}=[z_{0},g_{0}]_{\mathbf{K}}\) in \(X_{\mathbf{K}}(\mathbb{C})\) has CM by \(K\) if the corresponding abelian variety \(A_{0}\) has CM by (an order of) \(K\). The CM orbit of \(x_{0}\) is the set of those \(x_{g}=[z_{0},gg_{0}]_{\mathbf{K}}\) with \(g\in\mathbf{T}_{K^{\circ}}(\mathbb{A}_{f})\), and the set of CM points by \(K\) in \(X_{\mathbf{K}}(\mathbb{C})\) is the set of \(x_{g}=[z_{0},gg_{0}]_{\mathbf{K}}\) with \(g\in\mathbf{G}(\mathbb{A}_{f})\). All these CM points are in fact defined over \(\overline{\mathbb{Q}}\).
Fix a finite set \(\mathcal{S}\) of prime numbers and for each \(\ell\in\mathcal{S}\), fix a place \(\mathfrak{l}\) of \(\overline{\mathbb{Q}}\) over \(\ell\). We assume that \(A_{0}\) has good reduction at all these \(\mathfrak{l}\). Assume that the index of \(\mathrm{End}(A_{0})\) inside the ring of integers \(\mathcal{O}_{K}\) of \(K\) is prime to all \(\ell\in\mathcal{S}\), that each \(\ell\in\mathcal{S}\) splits completely in the maximal totally real subfield \(K^{+}\) of \(K\), and that each place of \(K^{+}\) above \(\ell\) is inert in \(K\). Then it is known that the reduction \(A_{0}(\mathrm{mod}\,\mathfrak{l})\) is a superspecial abelian variety over \(\overline{\mathbb{F}}_{\ell}\).
For each \(\ell\in\mathcal{S}\), we write \(X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell})\) for the superspecial locus of \(X_{\mathbf{K}}(\overline{\mathbb{F}}_{\ell})\). This is a finite set and is parametrized by a double quotient of a certain quaternionic unitary group that is compact at \(\infty\). We fix a prime \(p\) distinct from these \(\ell\in\mathcal{S}\) and consider the reduction map modulo \(\mathfrak{l}\)
\[\mathbf{G}^{1}(\mathbb{Q}_{p})\to X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{ \mathbb{F}}_{\ell}),\quad g\mapsto A_{g}(\mathrm{mod}\,\mathfrak{l}),\]
where \(\mathbf{G}^{1}=\mathrm{Sp}_{2n}\) is the derived subgroup of \(\mathbf{G}\) and \(A_{g}\) is the abelian variety corresponding to the point \(x_{g}\). This map describes the reduction modulo \(\mathfrak{l}\) of the family of \(p\)-isogenies of \(x_{0}\).
For each \(\ell\in\mathcal{S}\), we fix a finite subset \(\mathcal{T}(\ell)\) of \(\mathbf{T}_{K^{\circ}}^{1}(\mathbb{Q}_{p})=\mathbf{T}_{K^{\circ}}(\mathbb{Q} _{p})\bigcap\mathbf{G}^{1}(\mathbb{Q}_{p})\). Then we have the \(\mathcal{S}\)-simultaneous reduction map of the family of \(p\)-isogenies of the CM point \(x_{0}\):
\[\mathrm{Red}\colon\mathbf{G}^{1}(\mathbb{Q}_{p})\to\prod_{\ell\in\mathcal{S}, \sigma\in\mathcal{T}(\ell)}X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F} }_{\ell}),\quad g\mapsto(A_{\sigma g}(\mathrm{mod}\,\mathfrak{l}))_{\ell\in \mathcal{S},\sigma\in\mathcal{T}(\ell)}.\]
In the special case \(\sharp\mathcal{S}=1\), the map \(\mathrm{Red}\) is related to the distribution of (a subset of) the CM orbit of the family of \(p\)-isogenies of \(x_{0}\). Our main result is the following
**Theorem 1.3**.: _Suppose that for each \(\ell\in\mathcal{S}\), the images of \(\sigma\in\mathcal{T}(\ell)\) in \(\mathbf{T}_{K^{\circ}}(\mathbb{Q}_{p})/(\mathbf{T}_{K^{\circ}}(\mathbb{Q})Z( \mathbf{G}(\mathbb{Q}_{p})))\) are all distinct. Then \(\mathrm{Red}\) is surjective._
**Remark 1.4**.: Variants of this problem have been considered in a few other contexts. For example, in [10] and [11], such a reduction map for modular curves has been used to establish Mazur's conjecture on the non-vanishing of higher Heegner points (in fact, our work can be seen as a partial generalization of their work to higher dimensions); in [10], the authors used the Shimura curve case to study the non-vanishing of twisted special \(L\)-values of automorphic representations; in [1], the authors study the modular curve case but with the CM field \(K\) varying. In another
direction, the problem of CM lifting of abelian varieties has also been intensively studied; we refer the reader to [13] for a comprehensive discussion of this topic. Our result is the first one for Shimura varieties beyond the cases of modular/Shimura curves.
From this, we deduce immediately the following
**Corollary 1.5**.: _For any finite set of primes \(\mathcal{S}\) as above and any choice of a superspecial point \(x(\ell)\in X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell})\) for each \(\ell\in\mathcal{S}\), we can always find a CM point \(x_{g}\in X_{\mathbf{K}}(\overline{\mathbb{Q}})\) for some \(g\in\mathbf{G}^{1}(\mathbb{Q}_{p})\) such that the reduction \(x_{g}(\operatorname{mod}\mathfrak{l})\) is \(x(\ell)\) for every \(\ell\in\mathcal{S}\)._
**Remark 1.6**.: We invite the reader to compare this with [1, Theorem 1.1]. In _loc.cit_, the authors considered a family of points of CM by (the maximal orders in) quadratic imaginary fields \(\mathbb{Q}(\sqrt{-D_{i}})\) with \(D_{i}\to\infty\), while here we consider a family of \(p\)-isogenies of a fixed CM point \(x_{0}\) (thus a family of points of CM by orders of discriminant \(p\)-powers in the fixed CM field \(K\)). See also [1, Remark 1.4].
In fact, Theorem 1.3 is a special case of the following
**Theorem 1.7**.: _Denote the image of each \(\mathcal{T}(\ell)\) in \(\mathbf{T}_{K^{\circ}}(\mathbb{Q}_{p})/(\mathbf{T}_{K^{\circ}}(\mathbb{Q})Z( \mathbf{G}(\mathbb{Q}_{p})))\) by \(\overline{\mathcal{T}}(\ell)\). Then the image of \(\mathrm{Red}\) is in bijection with \(\prod_{\ell\in\mathcal{S},\overline{\sigma}\in\overline{\mathcal{T}}(\ell)}X_ {\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell})\)._
Now we outline the proof of Theorem 1.3. As in Remark 1.2, in the proof of the Andre-Oort conjecture, equidistribution results for families of CM points play an important role. Here, we also need help from ergodic theory, especially Ratner's theorem on unipotent flows. The proof of the surjectivity of \(\mathrm{Red}\) relies on using the uniformisation of the superspecial locus \(X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell})\) to describe the family of \(p\)-isogenies of \(A_{0}\) and its reduction \(A_{0}(\operatorname{mod}\mathfrak{l})\). More precisely, there is a natural bijection
\[\Theta(\ell)\circ\Omega(\ell)^{-1}\colon\mathbf{G}^{1}[\ell]\backslash \mathbf{G}^{1}(\mathbb{Q}_{p})/\mathbf{K}^{1}[\ell]\to X_{\mathbf{K}}^{ \mathrm{ssp}}(\overline{\mathbb{F}}_{\ell})\]
where \(\mathbf{G}^{1}[\ell]\) is a discrete and cocompact subgroup of \(\mathbf{G}^{1}(\mathbb{Q}_{p})\) and \(\mathbf{K}^{1}[\ell]\) is a compact open subgroup of \(\mathbf{G}^{1}(\mathbb{Q}_{p})\) related to \(\mathbf{K}\) (see (16) for the notation; the map \(\Theta\) is defined in (9) and the map \(\Omega\) in (12), and both maps depend on \(\ell\)). Then the map of reduction modulo \(\mathfrak{l}\) fits into the following commutative diagram
Here \(\mathrm{pr}\) is the natural projection map. We also have a multi-copied version of this diagram (see also (19))
So the surjectivity of \(\mathrm{Red}\) is the same as the surjectivity of \(\Pi\), the \(\mathcal{S}\)-simultaneous projection map. Using ideas of Cornut and Vatsal, we reduce the latter problem to showing the _non-commensurability_ of these discrete and cocompact subgroups \(\mathbf{G}^{1}[\ell,\sigma]\) of \(\mathbf{G}^{1}(\mathbb{Q}_{p})\):
**Theorem 1.8**.: _For any distinct \((\ell_{1},\sigma_{1})\) and \((\ell_{2},\sigma_{2})\), the subgroups \(\mathbf{G}^{1}[\ell_{1},\sigma_{1}]\) and \(\mathbf{G}^{1}[\ell_{2},\sigma_{2}]\) are non-commensurable._
The proof is divided into two cases (see Theorems 4.1 and 4.2): one is \(\ell_{1}=\ell_{2}\), the other is \(\ell_{1}\neq\ell_{2}\). The first case uses our assumption on \(\mathcal{T}(\ell_{1})\): if these two subgroups are commensurable, then they have the same commensurator, which has image, under the map \(\Omega(\ell_{1})^{-1}\), equal to \(Z(\mathcal{G}(\mathbb{Q}_{p}))\mathcal{G}(\mathbb{Q})\). Here \(\mathcal{G}=\mathcal{G}(\ell_{1})\) is a certain inner form of \(\mathbf{G}\) depending on \(\ell_{1}\). This would imply \(\sigma_{1}\sigma_{2}^{-1}\) lies in
\[Z(\mathcal{G}(\mathbb{Q}_{p}))\mathcal{G}(\mathbb{Q})\cap\mathbf{T}_{K^{ \circ}}(\mathbb{Q}_{p})=Z(\mathcal{G}(\mathbb{Q}_{p}))\mathbf{T}_{K^{\circ}}( \mathbb{Q}),\]
thus contradicting our assumption on \(\mathcal{T}(\ell_{1})\). For the second case, we show that these \(\mathcal{G}(\ell_{i})\) can not be isomorphic using a classical result on the classification of reductive dual pairs in quaternionic unitary groups.
The proof of Theorem 1.7 follows the same lines, except that we also need to show that for two elements \(\sigma_{1},\sigma_{2}\in\mathbf{T}_{K^{\circ}}^{1}(\mathbb{Q}_{p})\) with \(\sigma_{1}\sigma_{2}^{-1}\in\mathbf{T}_{K^{\circ}}(\mathbb{Q})Z(\mathbf{G}(\mathbb{Q}_{p}))\), the diagonal projection map
\[\mathbf{G}^{1}(\mathbb{Q}_{p})\to\mathbf{G}^{1}[\ell,\sigma_{1}]\backslash \mathbf{G}^{1}(\mathbb{Q}_{p})\times\mathbf{G}^{1}[\ell,\sigma_{2}]\backslash \mathbf{G}^{1}(\mathbb{Q}_{p})\]
has closed image.
**Remark 1.9**.: The result in this article can be generalized to PEL type Shimura varieties without difficulty; all the ideas are already present in the Siegel case. For reasons of clarity and readability, we restrict ourselves to the Siegel modular varieties.
Here is a plan of this article: in §2, we define Siegel modular varieties and recall the notion of CM abelian varieties and their CM orbits. In §3, we give a parametrisation of the superspecial locus in the Siegel modular variety and study the superspecial reduction of CM points. In §4, after some preparation work on commensurability, we prove Theorems 1.3 and 1.7.
## Notations
We fix positive integers \(n>1\) and \(N>0\) and a prime \(p\) throughout this article. We write
\[J_{1}=\begin{pmatrix}0&1\\ -1&0\end{pmatrix},\quad J_{n}=\operatorname{diag}(J_{1},\cdots,J_{1})\in \operatorname{GL}_{2n}(\mathbb{Z}).\]
Write \(\mathbb{Z}^{2n}\) for the free \(\mathbb{Z}\)-module of rank \(2n\) equipped with a symplectic form \(\langle-,-\rangle\) under which the standard basis \((e_{1},\cdots,e_{2n})\) of \(\mathbb{Z}^{2n}\) satisfies \((\langle e_{i},e_{j}\rangle)_{i,j=1}^{2n}=J_{n}\). Then we write \(\mathbf{G}\) for the similitude symplectic group scheme over \(\mathbb{Z}\) whose \(R\)-points are given by
\[\mathbf{G}(R):=\{g\in\operatorname{GL}_{2n}(R)\mid g^{\mathrm{t}}J_{n}g=\nu(g )J_{n}\,\text{for some $\nu(g)\in R^{\times}$}\}.\]
Write \(\nu\colon\mathbf{G}\to\mathbb{G}_{m}\) for the similitude factor. We define the following subgroups of \(\mathbf{G}\):
1. \(\mathbf{T}\) the maximal torus consisting of diagonal matrices;
2. \(\mathbf{Z}\) the center of \(\mathbf{G}\);
3. \(\mathbf{G}^{1}=\operatorname{Ker}(\nu)\) the derived subgroup of \(\mathbf{G}\), the symplectic group scheme over \(\mathbb{Z}\);
4. For a subgroup \(H\) of \(\mathbf{G}\), resp, \(\mathbf{G}(\mathbb{A}_{f})\), we write \(H^{1}=H\bigcap\mathbf{G}^{1}\), resp, \(H^{1}=H\bigcap\mathbf{G}^{1}(\mathbb{A}_{f})\).
Let \(N>0\) be a positive integer and \(\mathbf{K}\) a _neat_ subgroup of \(\mathbf{G}(\widehat{\mathbb{Z}})\) containing the preimage of \(1_{2n}\) under the projection map \(\mathbf{G}(\widehat{\mathbb{Z}})\to\mathbf{G}(\mathbb{Z}/N\mathbb{Z})\) ([13, Definition 1.4.1.8]).
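As a small numerical sanity check of the definition of \(\mathbf{G}\) and its similitude character (not taken from the paper; the helper names below are ad hoc), one can verify the condition \(g^{\mathrm{t}}J_{n}g=\nu(g)J_{n}\) directly:

```python
import numpy as np

# Minimal check of the similitude symplectic condition g^t J_n g = nu(g) J_n.
def J(n):
    J1 = np.array([[0, 1], [-1, 0]])
    out = np.zeros((2 * n, 2 * n), dtype=int)
    for i in range(n):
        out[2*i:2*i+2, 2*i:2*i+2] = J1
    return out

def similitude(g, n):
    """Return nu(g) if g^t J_n g is a scalar multiple of J_n, else None."""
    Jn = J(n)
    M = g.T @ Jn @ g
    nu = M[0, 1]                       # candidate scalar, then verify
    return nu if np.array_equal(M, nu * Jn) else None

n = 2
print(similitude(J(n), n))                            # J_n itself: nu = 1
print(similitude(3 * np.eye(2 * n, dtype=int), n))    # scalar matrix 3: nu = 9
print(similitude(np.diag([1, 1, 2, 1]), n))           # not a similitude: None
```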
## 2. CM abelian varieties
### Siegel modular variety
We review briefly the notion of Siegel modular varieties and refer to [11, 12] for more details. Let \(\operatorname{Alg}_{\mathbb{Z}[1/N]}\) be the category of \(\mathbb{Z}[1/N]\)-algebras and \(\operatorname{Sets}\) the category of sets. We define a functor \(X_{\mathbf{K}}\) as follows:
\[X_{\mathbf{K}}\colon\operatorname{Alg}_{\mathbb{Z}[1/N]}\to\operatorname{Sets}\]
sending a \(\mathbb{Z}[1/N]\)-algebra \(S\) to the set of the isomorphism classes of triples \((\mathcal{A},\lambda,\psi)\) where
1. \(\mathcal{A}\) is an \(n\)-dimensional abelian scheme over \(\operatorname{Spec}(S)\),
2. \(\lambda\colon\mathcal{A}\to\mathcal{A}^{\vee}\) is a principal polarization (thus induces a perfect symplectic pairing on the \(N\)-torsion subgroup scheme \(\mathcal{A}[N]\) of \(\mathcal{A}\)),
3. \(\psi\in\mathbf{K}\backslash\!\operatorname{Isom}_{\lambda}(\mathcal{A}[N],( \mathbb{Z}/N\mathbb{Z})^{2n})\) is a \(\mathbf{K}\)-orbit of isomorphisms of finite group schemes \(\mathcal{A}[N]\simeq(\mathbb{Z}/N\mathbb{Z})^{2n}\) over \(\operatorname{Spec}(S)\) preserving the symplectic pairings on \(\mathcal{A}[N]\) and \((\mathbb{Z}/N\mathbb{Z})^{2n}\) up to similitudes. Here we identify \(\mathbf{K}\) with its image modulo \(N\) and let it act on \((\mathbb{Z}/N\mathbb{Z})^{2n}\) in the natural way.
We write an element in \(\psi\) as \(\psi^{\square}\). We assume that the compact open subgroup \(\mathbf{K}\) is neat (cf. [11, Definition 1.4.1.8]). This ensures that there are no non-trivial automorphisms of \((\mathcal{A},\lambda,\psi)\), and as a result, the functor \(X_{\mathbf{K}}\) is representable by a \(\mathbb{Z}[1/N]\)-scheme, denoted again by \(X_{\mathbf{K}}\).
The \(\mathbb{C}\)-points of \(X_{\mathbf{K}}\) are given by
\[X_{\mathbf{K}}(\mathbb{C})=\mathbf{G}(\mathbb{Q})\backslash\left(\mathbf{X} \times\mathbf{G}(\mathbb{A}_{f})/\mathbf{K}\right)\]
where \(\mathbf{X}\) is a \(\mathbf{G}(\mathbb{R})\)-conjugacy class of morphisms \(\mathbb{S}=\operatorname{Res}_{\mathbb{C}/\mathbb{R}}(\mathbb{G}_{m})\to \mathbf{G}_{/\mathbb{R}}\), which is given by the set of all Hodge structures of type \(\{(-1,0),(0,-1)\}\) on the symplectic space \((\mathbb{Q}^{2n},\langle\cdot,\cdot\rangle)\) such that \(\pm 2i\pi\langle\cdot,\cdot\rangle\) is a polarization. It is known that \(\mathbf{X}\) can be identified with the Siegel double space of degree \(n\) ([11, Example 2.4]):
\[\mathbf{X}\simeq\{z=z_{1}+iz_{2}\in\operatorname{Sym}_{n}(\mathbb{C})\ |\ z_{1},z_{2}\in \operatorname{Sym}_{n}(\mathbb{R}),\ z_{2}\text{ positive definite}\}\,.\]
For any point \((z,g)\in\mathbf{X}\times\mathbf{G}(\mathbb{A}_{f})\), we write
\[[z,g]_{\mathbf{K}}\]
for its image in the double quotient \(\mathbf{G}(\mathbb{Q})\backslash(\mathbf{X}\times\mathbf{G}(\mathbb{A}_{f})/ \mathbf{K})\), which corresponds to a triple \((\mathcal{A},\lambda,\psi)\) in \(X_{\mathbf{K}}(\mathbb{C})\). We write this correspondence as
\[(\mathcal{A},\lambda,\psi)\leftrightarrow[z,g]_{\mathbf{K}}.\]
### Family of \(p\)-isogenies of CM points
Let \(K\) be a CM field of degree \(2n\). Write \(c\) for the complex conjugation on \(K\) and \(K^{+}\) for the maximal totally real subfield of \(K\). Let \(\Phi=(\phi_{1},\cdots,\phi_{n})\) be a CM type of \(K\) ([11, §1.2]), that is, \(\phi_{i}\) are distinct embeddings of \(K\) into \(\mathbb{C}\) and the disjoint union \(\Phi\sqcup c(\Phi)\) is the set of all complex embeddings \(K\hookrightarrow\mathbb{C}\).
We write \(K^{\mathrm{r}}\) for the reflex field of the CM type \((K,\Phi)\), which is a CM subfield of \(\mathbb{C}\) generated by \(\sum_{i=1}^{n}\phi_{i}(b)\) for all \(b\in K\) ([11, §1.5]). We fix field embeddings
\[K^{\mathrm{r}}\hookrightarrow\overline{\mathbb{Q}}\hookrightarrow\mathbb{C}.\]
**Definition 2.1**.: _An abelian variety \(A\) over a subfield \(F\) of \(\mathbb{C}\) has CM by the CM field \(K\) if there is an isomorphism_
\[K\simeq\operatorname{End}_{\overline{F}}^{\circ}(A):=\operatorname{End}_{ \overline{F}}(A)\otimes_{\mathbb{Z}}\mathbb{Q}\]
_of \(\mathbb{Q}\)-algebras. A point \((A,\lambda,\psi)\in X_{\mathbf{K}}(F)\) is a CM point by \(K\) if \(A\) has CM by \(K\)._
Suppose \((A,\lambda,\psi)\leftrightarrow[z,t]_{\mathbf{K}}\) is a CM point by \(K\) in \(X_{\mathbf{K}}(\mathbb{C})\). The set of CM points by \(K\) isogenous to \((A,\lambda,\psi)\) is the set of CM points by \(K\) in \(X_{\mathbf{K}}(\mathbb{C})\) consisting of those \([z,gt]_{\mathbf{K}}\) for \(g\in\mathbf{G}(\mathbb{A}_{f})\). Moreover, by the fundamental results of complex multiplication of abelian varieties, all these CM points are defined over \(\overline{\mathbb{Q}}\) (in fact, each CM point is defined over an algebraic number field).
Throughout this article we fix a point that has CM by \(K\) (we will assume \(x_{0}\) satisfies some other conditions later on)
\[x_{0}=[z_{0},g_{0}]_{\mathbf{K}}\in X_{\mathbf{K}}(\mathbb{C}).\]
We define a map
\[\mathfrak{M}_{\mathbf{K}}^{x_{0}}\colon\mathbf{G}(\mathbb{A}_{f})\to X_{ \mathbf{K}}(\overline{\mathbb{Q}}),\quad g\mapsto[z_{0},gg_{0}]_{\mathbf{K}}.\]
It identifies \(\mathfrak{M}_{\mathbf{K}}^{x_{0}}(\mathbf{G}(\mathbb{A}_{f}))\) with \(\mathbf{T}_{K^{\circ}}(\mathbb{Q})\backslash\mathbf{G}(\mathbb{A}_{f})/\mathbf{K}\) (the torus \(\mathbf{T}_{K^{\circ}}\) is defined in §2.3). This is the set of CM points by \(K\) isogenous to \([z_{0},g_{0}]_{\mathbf{K}}\).
**Example 2.2**.: _One can construct such CM points \((A,\lambda,\psi)\in X_{\mathbf{K}}(\mathbb{C})\) explicitly using the theory of complex multiplications as follows (see [1, 10] for more details): let \(\Lambda\) be a lattice in \(K\) and \(\xi\in K\) a totally imaginary element such that \(\sqrt{-1}\phi_{i}(\xi)<0\) for all \(i=1,\cdots,n\). This gives rise to a symplectic bilinear form_
\[\langle\cdot,\cdot\rangle_{\xi}\colon K\times K\to\mathbb{Q},\quad(x,y) \mapsto\operatorname{Tr}_{K/\mathbb{Q}}(c(x)\xi y).\]
_Suppose we have \(\langle\Lambda,\Lambda\rangle_{\xi}=\mathbb{Z}\) (this is always possible by multiplying \(\xi\) by a certain non-zero rational number). Write \(K_{\infty}=K\otimes_{\mathbb{Q}}\mathbb{R}\). Then \(A=K_{\infty}/\Phi(\Lambda)\) is an \(n\)-dimensional complex abelian variety, and the symplectic pairing \(\langle\cdot,\cdot\rangle_{\xi}\colon K_{\infty}\times K_{\infty}\to\mathbb{R}\) gives rise to a principal polarization \(\lambda\) on \(A\). Moreover we have a natural isomorphism_
\[\theta_{A}\colon K/\Lambda\simeq A(\mathbb{C})_{\mathrm{tor}}=A(\overline{ \mathbb{Q}})_{\mathrm{tor}}.\]
_We fix also a \(\mathbf{K}\)-orbit_
\[\psi\in\mathbf{K}\backslash\mathrm{Isom}_{\lambda}((K/\Lambda)[N],(\mathbb{Z}/ N\mathbb{Z})^{2n}).\]
_The triple \((\Lambda,\xi,\psi)\) then gives rise to a principally polarized complex abelian variety of CM type by \(K\), together with a \(\mathbf{K}\)-level structure \(\psi\), so it corresponds to a point \(x=(A,\lambda,\psi)\in X_{\mathbf{K}}(\mathbb{C})\). One checks that this gives a one-to-one correspondence between the set of CM points by \(K\) and the set of triples \((\Lambda,\xi,\psi)\) as above ([1, §2.4]):_
\[(A,\lambda,\psi)\leftrightarrow(\Lambda,\xi,\psi). \tag{1}\]
Note that the multiplier ring \(\{x\in K\mid x\Lambda\subset\Lambda\}\) embeds into \(\mathrm{End}_{\overline{F}}(A)\), sending \(x\) to the endomorphism \(\Phi(u)\mapsto\Phi(xu)\). This is in fact an isomorphism.
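As a toy illustration of Example 2.2 (not part of the paper, and with \(n=1\) rather than the standing assumption \(n>1\)), the following Python sketch evaluates the pairing \(\langle x,y\rangle_{\xi}=\operatorname{Tr}_{K/\mathbb{Q}}(c(x)\xi y)\) for \(K=\mathbb{Q}(i)\), \(\Lambda=\mathbb{Z}[i]\) and \(\xi=i/2\); all function names are ad hoc.

```python
from fractions import Fraction

# Toy check of <x, y>_xi = Tr_{K/Q}(c(x) * xi * y) for K = Q(i), Lambda = Z[i], xi = i/2.
# Elements of K are encoded as pairs (a, b) meaning a + b*i with a, b rational.
# Note that sqrt(-1) * (i/2) = -1/2 < 0, so xi satisfies the required positivity condition.

def mul(x, y):
    a, b = x; c, d = y
    return (a * c - b * d, a * d + b * c)

def conj(x):
    a, b = x
    return (a, -b)

def trace(x):                         # Tr_{K/Q}(a + b*i) = 2a
    return 2 * x[0]

def pairing(x, y, xi):
    return trace(mul(mul(conj(x), xi), y))

xi = (Fraction(0), Fraction(1, 2))    # xi = i/2, totally imaginary
basis = [(Fraction(1), Fraction(0)),  # 1
         (Fraction(0), Fraction(1))]  # i
gram = [[pairing(x, y, xi) for y in basis] for x in basis]
print(gram)                           # expected [[0, -1], [1, 0]]
```

The Gram matrix on the basis \((1,i)\) is \(\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\), so the form is alternating and \(\langle\Lambda,\Lambda\rangle_{\xi}=\mathbb{Z}\), as required above.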
We fix throughout this article a CM point by \(K\),
\[(\Lambda_{0},\xi_{0},\psi_{0})\leftrightarrow x_{0}=(A_{0},\lambda_{0},\psi_{ 0})\leftrightarrow[z_{0},g_{0}]_{\mathbf{K}}\in X_{\mathbf{K}}(\overline{ \mathbb{Q}}).\]
We fix a symplectic basis \((b_{1},\cdots,b_{2n})\) of \(\Lambda_{0}\) for the bilinear form \(\langle\cdot,\cdot\rangle_{\xi_{0}}\). Thus the group \(\mathbf{G}(\mathbb{Q})\) acts on the set of lattices of \(K\) through this symplectic basis. The action extends to the group \(\mathbf{G}(\mathbb{A}_{f})\) as in [1, p.78]. The fixed basis \((b_{1},\cdots,b_{2n})\) induces a ring homomorphism
\[\epsilon\colon\mathcal{O}_{K}\to\mathrm{Mat}_{2n}(\mathbb{Z}) \tag{2}\]
sending \(x\) to the matrix of multiplication by \(x\) with respect to the fixed basis \((b_{1},\cdots,b_{2n})\). Moreover one has ([10]):
\[z_{0}=(\Phi(b_{n+1})\cdots\Phi(b_{2n}))^{-1}(\Phi(b_{1})\cdots\Phi(b_{n}))\in \mathbf{X}.\]
If \((A_{0},\lambda_{0},\psi_{0})\) and \((A_{1},\lambda_{1},\psi_{1})\) are two isogenous CM points by \(K\), then they have the same set of places of good reduction ([10, Corollary 2]).
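Continuing the toy case \(K=\mathbb{Q}(i)\), \(\Lambda_{0}=\mathbb{Z}[i]\), \(\xi_{0}=i/2\) from the sketch above (again purely illustrative, since the paper assumes \(n>1\)): \((b_{1},b_{2})=(i,1)\) is a symplectic basis for \(\langle\cdot,\cdot\rangle_{\xi_{0}}\), and the sketch below computes the multiplication matrices \(\epsilon(x)\) in this basis together with the point \(z_{0}=\Phi(b_{2})^{-1}\Phi(b_{1})=i\); all names are ad hoc.

```python
# Toy continuation (n = 1): epsilon(x) = matrix of multiplication by x on Z[i]
# in the symplectic basis (b_1, b_2) = (i, 1); z_0 = Phi(b_2)^{-1} Phi(b_1) = i.

B1, B2 = 1j, 1                        # the symplectic basis (b_1, b_2) = (i, 1)

def coords(y):
    """Coordinates of a Gaussian integer y in the basis (i, 1): y = a*i + c."""
    return int(y.imag), int(y.real)

def eps(x):
    """Columns are the coordinates of x*b_1 and x*b_2."""
    c1, c2 = coords(x * B1), coords(x * B2)
    return [[c1[0], c2[0]],
            [c1[1], c2[1]]]

print(eps(1j))                        # [[0, 1], [-1, 0]], i.e. J_1
print(eps(2 + 1j))                    # multiplication by 2 + i
z0 = 1j                               # z_0 = Phi(b_2)^{-1} Phi(b_1) = i, in the upper half plane
print(z0)
```

In particular \(\epsilon(i)=J_{1}\), which indeed satisfies the similitude condition with \(\nu=1=c(i)i\).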
**Definition 2.3**.: _The family of \(p\)-isogenies of the CM point \(x_{0}=[z_{0},g_{0}]_{\mathbf{K}}\) is the set of CM points \([z_{0},gg_{0}]_{\mathbf{K}}\) for \(g\in\mathbf{G}(\mathbb{Q}_{p})\), which is the set \(\mathfrak{M}^{x_{0}}(\mathbf{G}(\mathbb{Q}_{p}))\)._
### CM orbit of CM points
We now consider the CM orbit of a fixed CM point \([z_{0},gg_{0}]_{\mathbf{K}}\) in \(X_{\mathbf{K}}\) with \(g\in\mathbf{G}(\mathbb{Q}_{p})\). Write \((A_{g},\lambda_{g},\psi_{g})\) for the corresponding triple in \(X_{\mathbf{K}}(\overline{\mathbb{Q}})\). Note that using (1), the triple \((A_{g},\lambda_{g},\psi_{g})\) corresponds to the triple \((g\Lambda_{0},\mu(g)_{\mathbb{Q}}^{-1}\xi_{0},\psi_{0})\) where \(\mu(g)_{\mathbb{Q}}=p^{\operatorname{val}_{p}(\mu(g))}\in p^{\mathbb{Z}}\). Here we identify \((K/\Lambda_{0})[N]\) in a natural way with \((K/g\Lambda_{0})[N]\) since for any prime \(q\neq p\), the \(q\)-primary torsion parts are the same \((\Lambda_{0})_{q}=(g\Lambda_{0})_{q}\) (note that \(p\nmid N\)).
We define a subgroup of \(K^{\times}\):
\[K^{\circ}=\{x\in K^{\times}\mid c(x)x\in\mathbb{Q}^{\times}\}.\]
The map \(\epsilon\) induces a morphism \(\epsilon\colon K^{\circ}\to\mathbf{G}(\mathbb{Q})\). Moreover \(K^{\circ}\) fixes the point \(z_{0}\in\mathbf{X}\). We associate to \(K^{\circ}\) algebraic subgroups \(\mathbf{T}_{K^{\circ}}\) and \(\mathbf{T}^{1}_{K^{\circ}}\) of \(\mathbf{G}_{\mathbb{Q}}\) whose \(R\)-points are given by
\[\mathbf{T}_{K^{\circ}}(R)=\{x\in K\otimes_{\mathbb{Q}}R\mid c(x)x\in R^{\times }\},\]
\[\mathbf{T}^{1}_{K^{\circ}}(R)=\{x\in K\otimes_{\mathbb{Q}}R\mid c(x)x=1\}.\]
It is easy to see
\[\mathbf{T}_{K^{\circ}}=\operatorname{Res}_{K/\mathbb{Q}}(\mathbb{G}_{m})\cap \mathbf{G}_{\mathbb{Q}},\quad\mathbf{T}^{1}_{K^{\circ}}=\mathbf{T}_{K^{\circ} }\cap\mathbf{G}^{1}_{\mathbb{Q}}.\]
\(\mathbf{T}_{K^{\circ}}\) is a torus in \(\mathbf{G}_{\mathbb{Q}}\) of rank \(n+1\). We write the inclusions as
\[\epsilon\colon\mathbf{T}_{K^{\circ}}\hookrightarrow\mathbf{G},\quad\mathbf{T }^{1}_{K^{\circ}}\hookrightarrow\mathbf{G}^{1}.\]
We set
\[\mathbf{T}_{K^{\circ}}(\widehat{\mathbb{Z}})=\mathbf{T}_{K^{\circ}}(\mathbb{ A}_{f})\cap\mathbf{G}(\widehat{\mathbb{Z}}),\quad\mathbf{T}^{1}_{K^{\circ}}( \widehat{\mathbb{Z}})=\mathbf{T}_{K^{\circ}}(\widehat{\mathbb{Z}})\cap \mathbf{G}^{1}(\widehat{\mathbb{Z}}).\]
The torus \(\mathbf{T}_{K^{\circ}}(\mathbb{A}_{f})\) acts on the set of CM points \([z_{0},g]_{\mathbf{K}}\) in a natural way: for \(t\in\mathbf{T}_{K^{\circ}}(\mathbb{A}_{f})\),
\[t([z_{0},g]_{\mathbf{K}})=[z_{0},tg]_{\mathbf{K}}. \tag{3}\]
**Definition 2.4**.: _The CM orbit of the CM point \([z_{0},gg_{0}]_{\mathbf{K}}\) is the set of CM points \([z_{0},tgg_{0}]_{\mathbf{K}}\) for \(t\in\mathbf{T}_{K^{\circ}}(\mathbb{A}_{f})\)._
## 3. Supersingular abelian varieties
We fix a finite set \(\mathcal{S}\) of rational primes \(\ell\) (different from \(p\)) such that each \(\ell\in\mathcal{S}\) splits completely in \(K^{+}\) and each prime of \(K^{+}\) above \(\ell\) is inert in \(K\). For each such \(\ell\), we fix a place \(\mathfrak{l}\) of \(\overline{\mathbb{Q}}\) above \(\ell\). In this section we study the supersingular locus of \(X_{\mathbf{K}}(\overline{\mathbb{F}}_{\ell})\) for \(\ell\in\mathcal{S}\); we do not assume \(p\nmid N\) here, and we fix one prime \(\ell\in\mathcal{S}\) for the rest of the section.
### Uniformization of supersingular abelian varieties
We fix a supersingular elliptic curve \(E\) over a subfield \(\kappa\) of \(\overline{\mathbb{F}}_{\ell}\). Write
\[R:=\operatorname{End}_{\kappa}(E)=\operatorname{End}_{\overline{\mathbb{F}}_{ \ell}}(E).\]
Then \(R\) is a maximal order in \(B:=R\otimes_{\mathbb{Z}}\mathbb{Q}\), a quaternion algebra over \(\mathbb{Q}\) ramified at the places \(\ell\) and \(\infty\).
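For concreteness (and independently of the paper): for a prime \(\ell\geq 5\), an elliptic curve \(E:y^{2}=x^{3}+ax+b\) over \(\mathbb{F}_{\ell}\) is supersingular exactly when \(\#E(\mathbb{F}_{\ell})=\ell+1\), i.e. when the trace of Frobenius vanishes. The following naive Python sketch (all names ad hoc) tests this by brute-force point counting; it is only meant to make the fixed curve \(E\) above tangible.

```python
# Naive supersingularity test over F_ell for ell >= 5: E is supersingular
# iff #E(F_ell) = ell + 1 (trace of Frobenius equal to zero).

def count_points(a, b, ell):
    assert ell >= 5 and (4 * a**3 + 27 * b**2) % ell != 0   # smooth curve, ell >= 5
    squares = {}                      # value -> number of y with y^2 = value
    for y in range(ell):
        squares[y * y % ell] = squares.get(y * y % ell, 0) + 1
    total = 1                         # the point at infinity
    for x in range(ell):
        total += squares.get((x**3 + a * x + b) % ell, 0)
    return total

def is_supersingular(a, b, ell):
    return count_points(a, b, ell) == ell + 1

# Example: y^2 = x^3 + 1 over F_11 is supersingular (11 = 2 mod 3).
print(is_supersingular(0, 1, 11))     # True
```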
We put \(\overline{A}_{0}=E^{n}\), an \(n\)-dimensional abelian variety over \(\kappa\), \(\lambda\colon\overline{A}_{0}\to\overline{A}_{0}^{\vee}\) the principal polarization which is the product of the standard polarization on \(E\). We then fix an isomorphism \(\overline{\psi}_{0}^{\square}\in\operatorname{Isom}_{\overline{\lambda}_{0}}( \overline{A}_{0}[N],(\mathbb{Z}/N\mathbb{Z})^{2n})\) and write \(\overline{\psi}_{0}=\mathbf{K}\overline{\psi}_{0}^{\square}\). This gives us a point
\[(\overline{A}_{0},\overline{\lambda}_{0},\overline{\psi}_{0})\in X_{\mathbf{K} }(\kappa).\]
We then write the endomorphism ring of \(\overline{A}_{0}\):
\[\mathcal{R}=\operatorname{End}_{\kappa}(\overline{A}_{0})=\operatorname{Mat}_{n} (R),\quad\mathcal{B}=\mathcal{R}\otimes_{\mathbb{Z}}\mathbb{Q}=\operatorname{ Mat}_{n}(B).\]
Write \(\mathcal{R}^{1}\) for the subset of \(\mathcal{R}\) of reduced norm \(1\) elements. We view \(\mathcal{R}^{1}\) as a group scheme over \(\mathbb{Z}\), which is a form of \(\operatorname{SL}_{2n}/\mathbb{Z}\). Similarly we define \(\mathcal{B}^{1}\subset\mathcal{B}\) as the subset of reduced norm \(1\) elements.
**Remark 3.1**.: Since \(n>1\), \(\mathcal{B}^{1}(\mathbb{R})\) is non-compact, thus the strong approximation property implies that \(\mathcal{B}^{1}(\mathbb{Q})\mathcal{B}^{1}(\mathbb{R})\) is dense in \(\mathcal{B}^{1}(\mathbb{A})\).
**Remark 3.2**.: This point \((\overline{A}_{0},\overline{\lambda}_{0},\overline{\psi}_{0})\) will be the reduction modulo \(\mathfrak{l}\) of our fixed point \((A_{0},\lambda_{0},\psi_{0})\in X_{\mathbf{K}}(\overline{\mathbb{Q}})\) (_cf._ §3.2).
The standard principal polarization on \(E\) induces the standard involution on \(B\),
\[(-)^{*}\colon B\to B,\quad b\mapsto b^{*}:=\operatorname{Trd}(b)-b.\]
This induces the following involution,
\[(-)^{*}\colon\mathcal{B}\to\mathcal{B},\quad b=(b_{i,j})_{i,j=1}^{n}\mapsto b^{*}=(b_{i,j}^{*})^{\mathrm{t}}.\]
We define group schemes \(\mathcal{G}\) and \(\mathcal{G}^{1}\) over \(\operatorname{Spec}(\mathbb{Z})\) whose \(S\)-points are given by
\[\mathcal{G}(S) =\{b\in\mathcal{R}\otimes_{\mathbb{Z}}S\mid bb^{*}=\mu(b)\in S^{ \times}\},\] \[\mathcal{G}^{1}(S) =\{b\in\mathcal{G}(S)\mid\mu(b)=1\}. \tag{4}\]
Note that \(\mathcal{G}(\mathbb{Z}/N\mathbb{Z})\) acts on the left on \(\operatorname{Isom}_{\overline{\lambda}_{0}}(\overline{A}_{0}[N],(\mathbb{Z}/ N\mathbb{Z})^{2n})\) and the latter is a \(\mathcal{G}(\mathbb{Z}/N\mathbb{Z})\)-homogeneous space. Each \(g\in\mathcal{G}(\mathbb{Z}/N\mathbb{Z})\) induces an isomorphism \(g\colon\overline{A}_{0}[N]\to\overline{A}_{0}[N]\) preserving the symplectic pairing up to similitudes. Then we define an isomorphism
\[g_{\overline{\psi}_{0}}\colon(\mathbb{Z}/N\mathbb{Z})^{2n}\to(\mathbb{Z}/N \mathbb{Z})^{2n}\]
(preserving the symplectic pairing up to similitudes) by the following commutative diagram
This gives rise to a group isomorphism
\[\mathcal{G}(\mathbb{Z}/N\mathbb{Z})\to\mathbf{G}(\mathbb{Z}/N\mathbb{Z}),\quad g \mapsto g_{\overline{\psi}_{0}^{\square}}. \tag{5}\]
Then we write \(\mathbf{K}_{\overline{\psi}_{0}}\) for the pre-image of \(\overline{\mathbf{K}}\) under the composition map \(\mathcal{G}(\widehat{\mathbb{Z}})\to\mathcal{G}(\mathbb{Z}/N\mathbb{Z})\to \mathbf{G}(\mathbb{Z}/N\mathbb{Z})\), where \(\overline{\mathbf{K}}\) is the image of \(\mathbf{K}\) under the projection map \(\mathbf{G}(\widehat{\mathbb{Z}})\to\mathbf{G}(\mathbb{Z}/N\mathbb{Z})\). Clearly \(\mathbf{K}_{\overline{\psi}_{0}}\) is independent of the choice of \(\overline{\psi}_{0}^{\square}\in\overline{\psi}_{0}=\mathbf{K}\overline{\psi}_{0}^{\square}\). Then we put
\[\mathbf{K}_{\overline{\psi}_{0}}^{1}=\mathbf{K}_{\overline{\psi}_{0}}\cap \mathcal{G}^{1}(\widehat{\mathbb{Z}}).\]
**Definition 3.3**.: _An \(n\)-dimensional abelian variety \(A\) over a finite extension of \(\mathbb{F}_{\ell}\) is supersingular if it is isogenous over \(\overline{\mathbb{F}}_{\ell}\) to the product of \(n\) supersingular elliptic curves, and \(A\) is superspecial if it is isomorphic over \(\overline{\mathbb{F}}_{\ell}\) to the product of \(n\) supersingular elliptic curves._
By a result of Deligne and Shioda and the assumption \(n>1\), \(A\) is supersingular, resp., superspecial, if and only if \(A\) is isogenous, resp., isomorphic, over \(\overline{\mathbb{F}}_{\ell}\) to \(\overline{A}_{0}\). For each supersingular abelian variety \(A\) over \(\overline{\mathbb{F}}_{\ell}\), we fix an isogeny
\[\varphi_{A}\colon A\to\overline{A}_{0}.\]
For an abelian variety \(A\) over \(\kappa\), we write
\[H^{1}(A,q):=H^{1}(A^{\vee}_{\overline{\mathbb{F}}_{\ell}},\mathbb{Z}_{q}(1)), \text{ resp., }H^{1}(A,\ell):=H^{1}_{\mathrm{crys}}(A^{\vee}/W(\kappa))\]
as \(\mathrm{Gal}(\overline{\mathbb{F}}_{\ell}/\kappa)\)-module for \(q\neq\ell\), resp., as F-crystals. Here \(W(\kappa)\) is the ring of Witt vectors associated to \(\kappa\).
**Definition 3.4**.: _We write_
\[X^{\mathrm{ssp}}_{\mathbf{K}}(\overline{\mathbb{F}}_{\ell})\subset X_{\mathbf{ K}}(\overline{\mathbb{F}}_{\ell})\]
_for the subset consisting of triples \((A,\lambda,\psi)\in X_{\mathbf{K}}(\overline{\mathbb{F}}_{\ell})\) with \(A\) a supersingular abelian variety such that \(H^{1}(A,\mathbb{Z}_{\ell})\) is isomorphic to \(H^{1}(\overline{A}_{0},\mathbb{Z}_{\ell})\) as F-crystals._
**Remark 3.5**.: _It is well-known that if \(H^{1}(A,\mathbb{Z}_{\ell})\) is isomorphic to \(H^{1}(\overline{A}_{0},\mathbb{Z}_{\ell})\) as F-crystals, then \(A\) is isomorphic over \(\overline{\mathbb{F}}_{\ell}\) to \(\overline{A}_{0}\), thus \(A\) is superspecial. That is why we put the superscript 'ssp' (= superspecial) in \(X^{\mathrm{ssp}}_{\mathbf{K}}(\overline{\mathbb{F}}_{\ell})\)._
We define a map \(\mathcal{G}^{1}(\mathbb{A}_{f})\to X^{\mathrm{ssp}}_{\mathbf{K}}(\overline{ \mathbb{F}}_{\ell})\) as follows:
**Definition-Proposition 3.6**.: _For any \(X\in\mathcal{G}^{1}(\mathbb{A}_{f})\subset\mathcal{R}^{1}(\mathbb{A}_{f})= \mathcal{R}^{1}(\mathbb{Q})\mathcal{R}^{1}(\widehat{\mathbb{Z}})\) (Remark 3.1), we have a decomposition_
\[X=Y_{\mathbb{Q}}Y_{f}^{-1}\text{ with }Y_{\mathbb{Q}}\in\mathcal{R}^{1}( \mathbb{Q})\text{ and }Y_{f}=(Y_{q})_{q}\in\mathcal{R}^{1}(\widehat{\mathbb{Z}}).\]
_By [1], there is a supersingular abelian variety \(A(X)\) over \(\overline{\mathbb{F}}_{\ell}\) and a principal polarization \(\lambda(X)\) of \(A(X)\) such that_
\[\begin{split}\lambda(X)&=\varphi_{A(X)}^{\vee}Y_{ \mathbb{Q}}^{\vee}\overline{\lambda}_{0}Y_{\mathbb{Q}}\varphi_{A(X)}\in \mathrm{Hom}_{\overline{\mathbb{F}}_{\ell}}(A(X),A(X)^{\vee}),\\ \lambda(X)&=\varphi_{A(X)}^{\vee}Y_{q}^{\vee} \overline{\lambda}_{0}Y_{q}\varphi_{A(X)}\in\mathrm{Hom}_{\mathbb{Z}_{q}}(H^{1 }(A(X),q),H^{1}(A(X)^{\vee},q)),\forall\,q.\end{split} \tag{6}\]
_Write \(\overline{Y}_{f}\) for the image of \(Y_{f}\) under the projection map \(\mathcal{R}^{1}(\widehat{\mathbb{Z}})\to\mathcal{R}^{1}(\mathbb{Z}/N\mathbb{Z})\). The principal polarization \(\overline{\lambda}_{0}\) induces a perfect symplectic pairing_
\[\overline{\lambda}_{0}\colon\overline{A}_{0}[N]\times\overline{A}_{0}[N]\to \mathbb{Z}/N\mathbb{Z}(1)\]
_and \(\overline{\psi}_{0}^{\square}\colon\overline{A}_{0}[N]\to(\mathbb{Z}/N\mathbb{ Z})^{2n}\) is an isomorphism that preserves these two symplectic pairings (up to similitudes). We define \(\psi(X)^{\square}\) to be the composition_
\[\psi(X)^{\square}\colon A(X)[N]\xrightarrow{\overline{Y}_{f}\circ\varphi_{A( X)}}A[N]\xrightarrow{\overline{\psi}_{0}^{\square}}(\mathbb{Z}/N\mathbb{Z})^{2n}\]
_and thus \(\psi(X)^{\square}\in\mathrm{Isom}_{\lambda(X)}(A(X)[N],(\mathbb{Z}/N\mathbb{ Z})^{2n})\). Moreover it is easy to see that the \(\mathbf{K}\)-orbit_
\[\psi(X)=\mathbf{K}\psi(X)^{\square}\]
_of \(\psi(X)^{\square}\) is independent of the choice of the representative \(\overline{\psi}_{0}^{\square}\) in the \(\mathbf{K}\)-orbit \(\overline{\psi}_{0}=\mathbf{K}\overline{\psi}_{0}^{\square}\). Therefore we get the following well-defined map:_
\[\mathcal{G}^{1}(\mathbb{A}_{f})\to X^{\mathrm{ssp}}_{\mathbf{K}}(\overline{ \mathbb{F}}_{\ell}),\quad X\mapsto(A(X),\lambda(X),\psi(X)). \tag{7}\]
**Proposition 3.7**.: _The preceding map induces the following bijection:_
\[\mathcal{G}^{1}(\mathbb{Q})\backslash\mathcal{G}^{1}(\mathbb{A}_{f})/\mathbf{K}^ {1}_{\overline{\psi}_{0}}\to X^{\rm ssp}_{\mathbf{K}}(\overline{\mathbb{F}}_{ \ell}),\quad[X]\mapsto(A(X),\lambda(X),\psi(X)).\]
Proof.: First we show that the map is well-defined: for any \(X^{\prime}\in\mathcal{G}^{1}(\mathbb{A}_{f})\) of the form \(X^{\prime}=X_{\mathbb{Q}}XX_{f}\) with \(X_{\mathbb{Q}}\in\mathcal{G}^{1}(\mathbb{Q})\) and \(X_{f}\in\mathbf{K}^{1}_{\overline{\psi}_{0}}\), one checks easily
\[\lambda(X^{\prime})=(\varphi_{A(X)}^{-1}\varphi_{A(X^{\prime})})^{\vee} \lambda(X)(\varphi_{A(X)}^{-1}\varphi_{A(X^{\prime})}),\quad\psi^{\square}(X^{ \prime})=\psi(X)^{\square}(\varphi_{A(X)}^{-1}\varphi_{A(X^{\prime})}).\]
Thus (7) factors through the projection onto the double quotient \(\mathcal{G}^{1}(\mathbb{A}_{f})\to\mathcal{G}^{1}(\mathbb{Q})\backslash\mathcal{G}^{1}(\mathbb{A}_{f})/\mathbf{K}^{1}_{\overline{\psi}_{0}}\).
We next show that (7) is surjective: for any \((A^{\prime},\lambda^{\prime},\psi^{\prime}=\mathbf{K}(\psi^{\prime})^{ \square})\in X^{\rm ssp}_{\mathbf{K}}(\overline{\mathbb{F}}_{\ell})\), there is an isogeny \(Y_{\mathbb{Q}}\in\operatorname{Hom}_{\overline{\mathbb{F}}_{\ell}}^{\circ}( \overline{A}_{0},\overline{A}_{0})\) of degree \(1\) (_cf._[1]) such that
\[\lambda^{\prime}=\varphi_{A^{\prime}}^{\vee}Y_{\mathbb{Q}}^{\vee}\overline{ \lambda}_{0}Y_{\mathbb{Q}}\varphi_{A^{\prime}};\]
similarly, for any prime \(q\), there exists \(Y_{q}\in\operatorname{Hom}_{\mathbb{Z}_{q}}(H^{1}(\overline{A}_{0},\mathbb{Z} _{q}),H^{1}(\overline{A}_{0},\mathbb{Z}_{q}))\) of degree \(1\) such that
\[\lambda^{\prime}=\varphi_{A^{\prime}}^{\vee}Y_{q}^{\vee}\overline{\lambda}_{ 0}Y_{q}\varphi_{A^{\prime}}.\]
Therefore \(Y_{\mathbb{Q}}^{\vee}\overline{\lambda}_{0}Y_{\mathbb{Q}}=Y_{q}^{\vee} \overline{\lambda}_{0}Y_{q}\) for any \(q\) and so \((Y_{\mathbb{Q}}Y_{q}^{-1})_{q}\in\mathcal{G}^{1}_{\lambda}(\mathbb{A}_{f})\). This implies that the above map is surjective.
At last the injectivity: for two elements \(X,\widetilde{X}\in\mathcal{G}^{1}(\mathbb{A}_{f})\), since \(\lambda(X)=\lambda(\widetilde{X})\) after identifying \(A(X)\) and \(A(\widetilde{X})\), by (6), we know that the corresponding \(Y_{\mathbb{Q}},\widetilde{Y}_{\mathbb{Q}}\) satisfy \(Y_{\mathbb{Q}}\widetilde{Y}_{\mathbb{Q}}^{-1}\in\mathcal{G}^{1}(\mathbb{Q})\) and similarly \(Y_{q}\widetilde{Y}_{q}^{-1}\in\mathcal{G}^{1}(\mathbb{Z}_{q})\) for any \(q\). Since \(\mathbf{K}\psi(X)^{\square}=\mathbf{K}\psi(\widetilde{X})^{\square}\), one has \(Y_{f}\widetilde{Y}_{f}^{-1}\in\mathbf{K}_{\overline{\psi}_{0}}\cap\mathcal{G} ^{1}(\mathbb{A}_{f})=\mathbf{K}^{1}_{\overline{\psi}_{0}}\). This shows that \(X,\widetilde{X}\) lie in the same double coset \(\mathcal{G}^{1}(\mathbb{Q})\backslash\mathcal{G}^{1}(\mathbb{A}_{f})/\mathbf{ K}^{1}_{\overline{\psi}_{0}}\), and thus the injectivity.
We will need the following simple observation: let \(F\) be a field of characteristic \(0\) splitting \(B\), that is, there is an isomorphism of \(F\)-algebras
\[\Omega\colon B\otimes_{\mathbb{Q}}F\simeq\operatorname{Mat}_{2}(F) \tag{8}\]
transporting the standard involution \((-)^{*}\) on \(B\) to the involution \((-)^{*}\) on \(\operatorname{Mat}_{2}(F)\) which sends a matrix \(X\) to \(X^{*}:=(J_{1}XJ_{1}^{-1})^{\rm t}\).
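As a quick sanity check on this compatibility (writing, as an assumption on notation, \(J_{1}=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\) for the standard \(2\times 2\) symplectic matrix): for \(X=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{Mat}_{2}(F)\) one computes
\[(J_{1}XJ_{1}^{-1})^{\rm t}=\begin{pmatrix}d&-b\\ -c&a\end{pmatrix}=\operatorname{Tr}(X)\cdot 1_{2}-X,\]
which is exactly the formula \(X\mapsto\operatorname{Trd}(X)-X\) on \(\operatorname{Mat}_{2}(F)\); in particular \(XX^{*}=\det(X)\cdot 1_{2}\), matching \(bb^{*}=\operatorname{Nrd}(b)\).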
**Lemma 3.8**.: _The isomorphism \(B\otimes_{\mathbb{Q}}F\simeq\operatorname{Mat}_{2}(F)\) induces the following commutative diagram_
\[\begin{CD}\mathcal{G}^{1}(F)@>{\simeq}>{}>\mathbf{G}^{1}(F)\\ @V{}V{}V@V{}V{}V\\ \mathcal{G}(F)@>{\simeq}>{}>\mathbf{G}(F).\end{CD}\]
_In other words, \(\mathcal{G}^{1}\) is a form of \(\mathbf{G}^{1}\). In particular, if \(\mathbb{Q}_{q}\) splits \(B\), then \(\mathcal{G}^{1}\) satisfies the strong approximation for the place \(q\), that is, \(\mathcal{G}^{1}(\mathbb{Q})\mathcal{G}^{1}(\mathbb{Q}_{q})\) is dense in \(\mathcal{G}^{1}(\mathbb{A})\)._
Since \(p\neq\ell,\mathbb{Q}_{p}\) splits the quaternion algebra \(B\). Then we choose an isomorphism of \(\mathbb{Z}_{p}\)-algebras \(\Omega\colon R\otimes_{\mathbb{Z}}\mathbb{Z}_{p}\simeq\operatorname{Mat}_{2}( \mathbb{Z}_{p})\) which extends to the isomorphism in (8). Then we have an induced isomorphism
\[\Omega\colon\mathcal{R}_{p}\simeq\operatorname{Mat}_{2n}(\mathbb{Z}_{p}).\]
This induces the following commutative diagrams
\[\begin{CD}\mathcal{G}^{1}(\mathbb{Z}_{p})@>{\simeq}>{\Omega}>{\mathbf{G}^{1}( \mathbb{Z}_{p})}\\ @V{}V{}V@V{}V{}V\\ \mathcal{G}(\mathbb{Z}_{p})@>{\simeq}>{\Omega}>{\mathbf{G}(\mathbb{Z}_{p})} \end{CD}\]
We have a similar diagram with \(\mathbb{Z}_{p}\) replaced by \(\mathbb{Q}_{p}\). Note that Lemma 3.8 gives the decomposition \(\mathcal{G}^{1}(\mathbb{A}_{f})=\mathcal{G}^{1}(\mathbb{Q})\mathcal{G}^{1}( \mathbb{Q}_{p})\mathbf{K}_{\overline{\psi}_{0}}\). We view
\[\mathcal{G}^{\mathbf{K}}_{\overline{\psi}_{0}}:=\mathcal{G}(\mathbb{Q})\bigcap \mathbf{K}_{\overline{\psi}_{0}}^{p}\]
as a subgroup of \(\mathcal{G}(\mathbb{Q}_{p})\) via the embedding \(\mathcal{G}(\mathbb{Q})\hookrightarrow\mathcal{G}(\mathbb{Q}_{p})\) and similarly
\[\mathcal{G}^{1,\mathbf{K}}_{\overline{\psi}_{0}}:=\mathcal{G}^{1}(\mathbb{Q}) \bigcap\mathbf{K}_{\overline{\psi}_{0}}^{p}\]
is a subgroup of \(\mathcal{G}^{1}(\mathbb{Q}_{p})\). This gives us the following uniformisation deduced from Proposition 3.7:
\[\Theta\colon\mathcal{G}^{1,\mathbf{K}}_{\overline{\psi}_{0}}\backslash \mathcal{G}^{1}(\mathbb{Q}_{p})/\mathbf{K}^{1}_{\overline{\psi}_{0},p}\to X ^{\mathrm{ssp}}_{\mathbf{K}}(\overline{\mathbb{F}}_{\ell}). \tag{9}\]
We record the following fact, which will not be used in the sequel.
**Proposition 3.9**.: _Suppose \(\mu(\mathbf{K}_{\overline{\psi}_{0}})=\widehat{\mathbb{Z}}^{\times}\). Then the inclusion \(\mathcal{G}^{1}(\mathbb{Q}_{p})\to\mathcal{G}(\mathbb{Q}_{p})\) induces a bijection_
\[\beta\colon\mathcal{G}^{1,\mathbf{K}}_{\overline{\psi}_{0}}\backslash \mathcal{G}^{1}(\mathbb{Q}_{p})/\mathbf{K}^{1}_{\overline{\psi}_{0},p}\to \mathcal{G}^{\mathbf{K}}_{\overline{\psi}_{0}}\backslash\mathcal{G}(\mathbb{Q }_{p})/\mathbf{K}_{\overline{\psi}_{0},p}.\]
Proof.: By assumption on \(\mathbf{K}_{\overline{\psi}_{0},p}\), \(\beta\) is surjective. For any \(g,g^{\prime}\in\mathcal{G}^{1}(\mathbb{Q}_{p})\), \(a\in\mathcal{G}^{1,\mathbf{K}}_{\overline{\psi}_{0}}\) and \(b\in\mathbf{K}_{\overline{\psi}_{0},p}\) such that \(g=ag^{\prime}b\), one has \(1=\mu(g)=\mu(a)\mu(g^{\prime})\mu(b)\in\mathbb{Q}_{p}^{\times}\). Thus
\[\mathbb{Q}^{\times}\cap(\widehat{\mathbb{Z}}^{p})^{\times}\ni\mu(a)=\mu(b)^{- 1}\in\mathbb{Z}_{p}^{\times},\]
in particular, \(\mu(a)=\mu(b)^{-1}\in\mathbb{Z}^{\times}=\{\pm 1\}\). By definition, \(a\in\mathcal{G}^{\mathbf{K}}_{\overline{\psi}_{0}}\) satisfies
\[\mu(a)\in\{XX^{*}\mid X\in\mathcal{G}_{\lambda}(\mathbb{Q})\}\subset\mathbb{Q }_{>0}^{\times}.\]
Thus \(\mu(a)=1=\mu(b)\). So \(g\) and \(g^{\prime}\) have the same image in the double quotient \(\mathcal{G}^{1,\mathbf{K}}_{\overline{\psi}_{0}}(\mathbb{Q}_{p})\backslash \mathcal{G}^{1}(\mathbb{Q}_{p})/\mathbf{K}^{1}_{\overline{\psi}_{0},p}\), which gives the injectivity of the map \(\beta\).
### Reduction of CM points
Recall we have fixed a point in \(X_{\mathbf{K}}(\overline{\mathbb{Q}})\)
\[x_{0}=(A_{0},\lambda_{0},\psi_{0})\leftrightarrow(\Lambda_{0},\xi_{0},\psi_{0})\]
which has CM by \(K\). We assume that
\[\operatorname{End}_{\mathbb{C}}(A_{0})\otimes_{\mathbb{Z}}\mathbb{Z}_{\ell}=\mathcal{O}_{K}\otimes_{\mathbb{Z}}\mathbb{Z}_{\ell}=:\mathcal{O}_{K,\ell}.\]
Moreover for any \(g\in\mathbf{G}(\mathbb{Q}_{p})\), \((A_{g},\lambda_{g},\psi_{g})\leftrightarrow(g\Lambda_{0},\mu(g)_{\mathbb{Q}}^ {-1}\xi_{0},\psi_{0})\in X_{\mathbf{K}}(\mathbb{C})\) and thus
\[\operatorname{End}_{\mathbb{C}}(A_{g})=\{x\in K\mid xg\Lambda_{0}\subset g \Lambda_{0}\}.\]
In particular, we have
\[\operatorname{End}_{\mathbb{C}}(A_{g})\otimes_{\mathbb{Z}}\mathbb{Z}_{\ell}= \mathcal{O}_{K,\ell}. \tag{10}\]
Thus, \(A_{g}\) has good reduction at \(\mathfrak{l}\) if and only if \(A_{0}\) has good reduction at \(\mathfrak{l}\).
In the following we will assume that \(A_{0}\) has good reduction at \(\mathfrak{l}\). Write \(\overline{A}_{g}\), \(\overline{\lambda}_{g}\), \(\overline{\psi}_{0}\), \(\overline{\psi}\), etc. for the reduction mod \(\mathfrak{l}\) of \(A_{g}\), \(\lambda_{g}\), \(\psi_{0}\), \(\psi\), etc. Then we have an embedding
\[K=\operatorname{End}_{\overline{K}_{0}}(A_{0})\otimes_{\mathbb{Z}}\mathbb{Q} \hookrightarrow\mathcal{B}=\operatorname{End}_{\overline{\mathbb{F}}_{\ell}}( \overline{A}_{0})\otimes_{\mathbb{Z}}\mathbb{Q}. \tag{11}\]
The isomorphism of the two \(p\)-adic Tate modules \(T_{p}(A_{0})\simeq T_{p}(\overline{A}_{0})\) transports the action of \(\operatorname{M}_{2n}(\mathbb{Z}_{p})\) to the action of \(\mathcal{R}_{p}\), which induces an isomorphism
\[\begin{CD}\mathcal{R}_{p}@>{\Omega}>{\simeq}>\operatorname{M}_{2n}(\mathbb{Z} _{p})\\ @V{}V{}V@V{}V{}V\\ \mathcal{B}_{p}@>{\Omega}>{\simeq}>\operatorname{M}_{2n}(\mathbb{Q}_{p})\end{CD} \tag{12}\]
such that its composition with the embedding \(K\hookrightarrow\mathcal{B}_{p}\) is equal to \(\epsilon\colon K\to\operatorname{Mat}_{2n}(\mathbb{Q}_{p})\) in (2). Moreover, this isomorphism preserves the involutions on both sides:
\[\Omega(b^{*})=(J_{n}\Omega(b)J_{n}^{-1})^{\mathrm{t}},\quad\forall b\in \mathcal{R}_{p}. \tag{13}\]
In particular, we have the following commutative diagram
(14)
Note that the intersection of \(\mathcal{B}\) and \(K_{p}\) inside \(\mathcal{B}_{p}\) is \(K\).
We have the following
**Proposition 3.10**.: _Let \(\mathcal{O}\) be an order of \(K\) such that \(\mathcal{O}\otimes_{\mathbb{Z}}\mathbb{Z}_{\ell}=\mathcal{O}_{K,\ell}\). Let \(A\) be an \(n\)-dimensional abelian variety over \(\kappa\) with an embedding \(\mathcal{O}\hookrightarrow\operatorname{End}_{\overline{\mathbb{F}}_{\ell}}(A)\). Then \(A\) is superspecial._
Proof.: The special case \(\mathcal{O}=\mathcal{O}_{K}\) is [15, Theorem 1.1]. One checks easily that the proof goes through under the assumption \(\mathcal{O}\otimes_{\mathbb{Z}}\mathbb{Z}_{\ell}=\mathcal{O}_{K,\ell}\).
**Lemma 3.11**.: _For any \(g\in\mathbf{G}(\mathbb{Q}_{p})\), \(\overline{A}_{g}\) is superspecial._
Proof.: We have inclusions \(\operatorname{End}_{\mathbb{C}}(A_{g})\hookrightarrow\operatorname{End}_{ \kappa}(\overline{A}_{g})\). Therefore (10) implies
\[(K\cap\operatorname{End}_{\kappa}(\overline{A}_{g}))\otimes_{\mathbb{Z}} \mathbb{Z}_{\ell}=\mathcal{O}_{K,\ell}.\]
Now, taking into account the assumption on \(\ell\) and applying Proposition 3.10, we conclude that \(\overline{A}_{g}\) is superspecial.
**Proposition 3.12**.: _For any \(g\in\mathbf{G}(\mathbb{Q}_{p})\), the F-crystals \(H^{1}(\overline{A},\ell)\) and \(H^{1}(\overline{A}_{g},\ell)\) are isomorphic._
Proof.: Since \(p\neq\ell\), the \(\ell\)-divisible groups \(A[\ell^{\infty}]\) and \(A_{g}[\ell^{\infty}]\) are isomorphic, and so are their reductions modulo \(\mathfrak{l}\). This concludes the proof, using the equivalence of categories between \(\ell\)-divisible groups and F-crystals ([11]).
Therefore the reduction mod \(\mathfrak{l}\) induces the following map
\[\operatorname{Red}_{\mathfrak{l}}\colon\mathbf{G}^{1}(\mathbb{Q}_{p})\xrightarrow{\mathfrak{M}_{\mathbf{K}}^{x_{0}}}\mathfrak{M}_{\mathbf{K}}^{x_{0}}(\mathbf{G}^{1}(\mathbb{Q}_{p}))\xrightarrow{\operatorname{mod}\mathfrak{l}}X_{\mathbf{K}}^{\operatorname{ssp}}(\overline{\mathbb{F}}_{\ell}),\quad g\mapsto(\overline{A}_{g},\overline{\lambda}_{g},\overline{\psi}_{g}). \tag{15}\]
This can be rewritten in terms of \(\mathcal{G}^{1}(\mathbb{Q}_{p})\) instead of \(\mathbf{G}^{1}(\mathbb{Q}_{p})\) as follows: for any \(g\in\mathbf{G}^{1}(\mathbb{Q}_{p})\), the reduction mod \(\mathfrak{l}\) of the point \((A_{g},\lambda_{g},\psi_{g}=\mathbf{K}\psi_{g}^{\square})\in X_{\mathbf{K}}(\overline{\mathbb{Q}})\) is \((\overline{A}_{g},\overline{\lambda}_{g},\overline{\psi}_{g}=\mathbf{K}\overline{\psi_{g}^{\square}})\in X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell})\)
which is of the form \((A(X_{g}),\lambda(X_{g}),\psi(X_{g})=\mathbf{K}\psi(X_{g})^{\square})\) for some \(X_{g}\in\mathcal{G}^{1}(\mathbb{A}_{f})\) depending on \(g\) (_cf._ (9)).
**Proposition 3.13**.: _The following natural bijection map induced by the strong approximation for \(\mathcal{G}^{1}\)_
\[\mathcal{G}^{1,\mathbf{K}}_{\overline{\psi}_{0}}\backslash\mathcal{G}^{1}( \mathbb{Q}_{p})/\mathbf{K}^{1}_{\overline{\psi}_{0},p}\to\mathcal{G}^{1}( \mathbb{Q})\backslash\mathcal{G}^{1}(\mathbb{A}_{f})/\mathbf{K}^{1}_{\overline {\psi}_{0}}\]
_is the same as the map sending the double coset of \(\Omega^{-1}(g)\in\mathcal{G}^{1}(\mathbb{Q}_{p})\) to the double coset of \(X_{g}\). Here \(g\in\mathbf{G}^{1}(\mathbb{Q}_{p})\) and \(\Omega\) is given by (12)._
Proof.: To simplify notations, we write \(\widetilde{g}=\Omega^{-1}(g)\), \(h\) for the image of \(\widetilde{g}\) in \(\mathcal{G}^{1}(\mathbb{A}_{f})\), and \(X=X_{g}\). By construction, there are elements \(Y_{\mathbb{Q}}\in\mathcal{R}^{1}(\mathbb{Q})\) and \(Y_{f}\in\mathcal{R}^{1}(\widehat{\mathbb{Z}})\) such that
\[X=Y_{\mathbb{Q}}Y_{f}^{-1},\quad\overline{\lambda}_{g}=Y_{\mathbb{Q}}^{\vee} \overline{\lambda}_{0}Y_{\mathbb{Q}},\quad\overline{\lambda}_{g}=Y_{q}^{\vee} \overline{\lambda}_{0}Y_{q},\;\forall q.\]
Moreover \(H^{1}(\overline{A},q)\) and \(H^{1}(\overline{A}_{g},q)\) are isomorphic (either as \(\operatorname{Gal}(\overline{\mathbb{F}}_{\ell}/\kappa)\)-modules or as F-crystals) except for \(q=p\), in which case there is an isomorphism \(H^{1}(\overline{A}_{g},p)\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\simeq H^{1}( \overline{A},p)\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\), induced from the reduction map and the isomorphism \(\widetilde{g}^{-1}\colon\mathbf{T}_{p}(A_{g})\otimes\mathbb{Q}_{p}\simeq \mathbf{T}_{p}(A)\otimes\mathbb{Q}_{p}\). Moreover the polarization \(\overline{\lambda}_{g}\) transported to \(H^{1}(\overline{A},p)\) is given by \(\overline{\lambda}_{0}=\widetilde{g}^{\vee}\overline{\lambda}_{g}\widetilde{g}\). Thus one has
\[(h_{p}^{-1})^{\vee}\overline{\lambda}_{0}h_{p}^{-1}=(Y_{\mathbb{Q}}^{-1})^{ \vee}Y_{p}^{\vee}\overline{\lambda}_{0}Y_{p}Y_{\mathbb{Q}}^{-1},\quad(h_{q}^{- 1})^{\vee}\overline{\lambda}_{0}h_{q}^{-1}=(Y_{\mathbb{Q}}^{-1})^{\vee}Y_{q}^{ \vee}\overline{\lambda}_{0}Y_{q}Y_{\mathbb{Q}}^{-1},\;\forall q\neq p.\]
So there exist \(Z_{\mathbb{Q}}\in\mathcal{G}^{1}(\mathbb{Q})\) and \(Z_{f}\in\mathcal{G}^{1}(\widehat{\mathbb{Z}})\) such that \(X=Z_{\mathbb{Q}}hZ_{f}\), that is,
\[h=Z_{\mathbb{Q}}^{-1}Y_{\mathbb{Q}}XY_{f}Z_{f}^{-1}.\]
Next we show \(Z_{f}\in\mathbf{K}_{\overline{\psi}_{0}}\). For this, we look at \(\mathbf{K}\overline{\psi}_{g}^{\square}=\mathbf{K}\psi(X)^{\square}\): the isomorphism
\[\overline{\psi}_{g}^{\square}\in\operatorname{Isom}_{\overline{\lambda}_{g}}( \overline{A}_{g}[N],(\mathbb{Z}/N\mathbb{Z})^{2n})\]
is transported via \(Y_{f}Z_{f}^{-1}\) to the element
\[\overline{\psi}_{g}^{\square}Y_{f}Z_{f}^{-1}\in\operatorname{Isom}_{\overline{ \lambda}_{0}}(\overline{A}_{0}[N],(\mathbb{Z}/N\mathbb{Z})^{2n}).\]
Similarly the isomorphism
\[\psi(X)^{\square}\in\operatorname{Isom}_{\overline{\lambda}(X)}(\overline{A}( X)[N],(\mathbb{Z}/N\mathbb{Z})^{2n})\]
is transported via \(Y_{f}\) to the element
\[\psi(X)^{\square}Y_{f}\in\operatorname{Isom}_{\overline{\lambda}_{0}}( \overline{A}_{0}[N],(\mathbb{Z}/N\mathbb{Z})^{2n}).\]
Thus one has
\[\mathbf{K}\overline{\psi}_{g}^{\square}Y_{f}Z_{f}^{-1}=\mathbf{K}\psi(X)^{ \square}Y_{f}=\mathbf{K}\overline{\psi}_{g}^{\square}Y_{f}\]
(note that we have \(\psi(X)=\overline{\psi}_{g}\)). In particular, one has \(Z_{f}\in\mathbf{K}_{\overline{\psi}_{0}}\). We conclude that \(X\) and \(h\) are in the same double coset \(\mathcal{G}^{1}(\mathbb{Q})\backslash\mathcal{G}^{1}(\mathbb{A}_{f})/\mathbf{K}^{1}_{\overline{\psi}_{0}}\).
Taking into account the embeddings of \(\mathbf{T}^{1}_{K^{\circ}}(\mathbb{Q}_{p})\) into \(\mathbf{G}^{1}(\mathbb{Q}_{p})\) and \(\mathcal{G}^{1}(\mathbb{Q}_{p})\), one deduces
**Corollary 3.14**.: _For any \(t\in\mathbf{T}^{1}_{K^{\circ}}(\mathbb{Q}_{p})\) and \(g\in\mathbf{G}^{1}(\mathbb{Q}_{p})\),_
\[X_{tg}=tX_{g}.\]
We fix a finite set \(\mathcal{T}=\mathcal{T}(\ell)\) of elements in \(\mathbf{T}^{1}_{K^{\circ}}(\mathbb{Q}_{p})\) and define the following discrete and cocompact subgroup, resp., compact open subgroup of \(\mathbf{G}^{1}(\mathbb{Q}_{p})\):
\[\mathbf{G}^{1}[\ell]:=\Omega(\mathcal{G}^{1,\mathbf{K}}_{\overline{\psi}_{0}}),\quad\text{resp.,}\quad\mathbf{K}^{1}[\ell]:=\Omega(\mathbf{K}^{1}_{\overline{\psi}_{0},p}). \tag{16}\]
Then we have the following maps:
\[\begin{split}\operatorname{Red}_{\mathcal{T}}\colon&\mathbf{G}^{1}(\mathbb{Q}_{p})\to\prod_{\sigma\in\mathcal{T}}X^{\mathrm{ssp}}_{\mathbf{K}}(\overline{\mathbb{F}}_{\ell}),\quad x\mapsto\left(\operatorname{Red}_{\mathfrak{l}}\left(\sigma x\right)\right)_{\sigma\in\mathcal{T}},\\ \operatorname{pr}\colon&\mathbf{G}^{1}(\mathbb{Q}_{p})\to\mathbf{G}^{1}[\ell]\backslash\mathbf{G}^{1}(\mathbb{Q}_{p})/\mathbf{K}^{1}[\ell],\\ \operatorname{pr}_{\mathcal{T}}\colon&\mathbf{G}^{1}(\mathbb{Q}_{p})\to\prod_{\sigma\in\mathcal{T}}\mathbf{G}^{1}[\ell]\backslash\mathbf{G}^{1}(\mathbb{Q}_{p})/\mathbf{K}^{1}[\ell],\quad g\mapsto\left(\operatorname{pr}\left(\epsilon(\sigma^{-1})g\right)\right)_{\sigma\in\mathcal{T}}.\end{split} \tag{17}\]
Here the action of \(\sigma\) on \(x\) is given by (3) and the map \(\operatorname{pr}\) is the natural projection.
**Corollary 3.15**.: _We have the following commutative diagram_
\[\begin{CD}\mathbf{G}^{1}(\mathbb{Q}_{p})@>{\operatorname{pr}_{\mathcal{T}}}>{}> \prod_{\sigma\in\mathcal{T}}\mathbf{G}^{1}[\ell]\backslash\mathbf{G}^{1}( \mathbb{Q}_{p})/\mathbf{K}^{1}[\ell]\\ @V{}V{=}V@V{}V{\prod_{\sigma\in\mathcal{T}}\Theta\Omega^{-1}}V\\ \mathbf{G}^{1}(\mathbb{Q}_{p})@>{\operatorname{Red}_{\mathcal{T}}}>{}> \prod_{\sigma\in\mathcal{T}}X^{\mathrm{ssp}}_{\mathbf{K}}(\overline{\mathbb{F }}_{\ell}).\end{CD}\]
_Here \(\Theta\) is given by (9) and both vertical arrows are bijections._
Proof.: This follows from the definition of \(\operatorname{pr}_{\mathcal{T}}\) and the preceding proposition.
We rewrite the map \(\operatorname{pr}_{\mathcal{T}}\) as follows: for any \(\sigma\in\mathcal{T}\), put
\[\mathbf{G}^{1}[\ell,\sigma]:=\epsilon(\sigma^{-1})\mathbf{G}^{1}[\ell]\epsilon(\sigma) \tag{18}\]
and define \(\Pi_{\sigma}\) to be the composition map:
\[\Pi_{\sigma}\colon\mathbf{G}^{1}(\mathbb{Q}_{p})\xrightarrow{g\mapsto[\epsilon (\sigma^{-1})g]}\mathbf{G}^{1}[\ell]\backslash\mathbf{G}^{1}(\mathbb{Q}_{p})/ \mathbf{K}^{1}[\ell]\xrightarrow{[g^{\prime}]\mapsto[\epsilon(\sigma)g^{\prime }]}\mathbf{G}^{1}[\ell,\sigma]\backslash\mathbf{G}^{1}(\mathbb{Q}_{p})/ \mathbf{K}^{1}[\ell].\]
Clearly \(\Pi_{\sigma}\) is the natural projection map. Then we set
\[\Pi_{\mathcal{T}}\colon\mathbf{G}^{1}(\mathbb{Q}_{p})\to\prod_{\sigma\in \mathcal{T}}\mathbf{G}^{1}[\ell,\sigma]\backslash\mathbf{G}^{1}(\mathbb{Q}_{ p})/\mathbf{K}^{1}[\ell],\quad g\mapsto(\Pi_{\sigma}(g))_{\sigma\in\mathcal{T}}.\]
Thus \(\operatorname{Red}_{\mathcal{T}}\) is surjective if and only if \(\operatorname{pr}_{\mathcal{T}}\) is surjective, if and only if \(\Pi_{\mathcal{T}}\) is surjective.
## 4. The simultaneous superspecial reduction map
Recall that we have fixed a point \(x_{0}=(A_{0},\lambda_{0},\psi_{0})\in X_{\mathbf{K}}(\overline{\mathbb{Q}})\) which has CM by \(K\). In the previous section, we considered a single prime \(\ell\in\mathcal{S}\); the endomorphism ring of the reduction \(\overline{A}_{0}\) is \(\mathcal{R}=\mathcal{R}(\ell)=\operatorname{Mat}_{n}(R(\ell))\), which is an order in \(\mathcal{B}=\mathcal{B}(\ell)=\operatorname{Mat}_{n}(B(\ell))\). Here \(B(\ell)\) is the quaternion algebra over \(\mathbb{Q}\) ramified at \(\ell\) and \(\infty\). We have also fixed an isomorphism \(\Omega=\Omega(\ell)\colon\mathcal{B}(\ell)\otimes_{\mathbb{Q}}\mathbb{Q}_{p}\simeq\operatorname{Mat}_{2n}(\mathbb{Q}_{p})\). From this, we defined a group scheme \(\mathcal{G}=\mathcal{G}(\ell)\) over \(\operatorname{Spec}(\mathbb{Z})\), a discrete and cocompact subgroup \(\mathbf{G}^{1}[\ell,\sigma]\), resp., a compact open subgroup \(\mathbf{K}^{1}[\ell]\), of \(\mathbf{G}^{1}(\mathbb{Q}_{p})\). Now we define the \(\mathcal{S}\)-simultaneous reduction map, resp., the \(\mathcal{S}\)-simultaneous projection map
\[\operatorname{Red}=\prod_{\ell\in\mathcal{S}}\operatorname{Red}_{\mathcal{T}(\ell)},\quad\text{resp.,}\quad\Pi=\prod_{\ell\in\mathcal{S}}\Pi_{\mathcal{T}(\ell)},\]
which fit into the following commutative diagram
\[\begin{CD}\mathbf{G}^{1}(\mathbb{Q}_{p})@>{\Pi}>{}>\prod_{\ell\in\mathcal{S},\sigma\in\mathcal{T}(\ell)}\,\mathbf{G}^{1}[\ell,\sigma]\backslash\mathbf{G}^{1}(\mathbb{Q}_{p})/\mathbf{K}^{1}[\ell]\\ @V{}V{=}V@V{}V{\prod_{\ell\in\mathcal{S},\sigma\in\mathcal{T}(\ell)}\,\Theta(\ell)\circ\Omega(\ell)^{-1}\circ(\epsilon(\sigma)-)}V\\ \mathbf{G}^{1}(\mathbb{Q}_{p})@>{\mathrm{Red}}>{}>\prod_{\ell\in\mathcal{S},\sigma\in\mathcal{T}(\ell)}\,X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell})\end{CD} \tag{19}\]
Here the right vertical arrow is a bijection.
### Commensurability criterion
Recall that two subgroups \(G_{1},G_{2}\) of a group \(G\) are _commensurable_ if \(G_{1}\bigcap G_{2}\) has finite index in both \(G_{1}\) and \(G_{2}\). The _commensurator_ of \(G_{1}\) inside \(G\), denoted \(\mathrm{Comm}(G_{1})\), is the set of elements \(g\in G\) such that \(G_{1}\) and \(gG_{1}g^{-1}\) are commensurable. Suppose \(G_{2}=gG_{1}g^{-1}\) for some \(g\in G\). Then \(G_{1}\) and \(G_{2}\) are commensurable if and only if \(g\in\mathrm{Comm}(G_{1})\).
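As a standard illustration of these notions (a classical example, not needed in the sequel), take \(G=\operatorname{SL}_{2}(\mathbb{R})\), \(G_{1}=\operatorname{SL}_{2}(\mathbb{Z})\) and \(g=\operatorname{diag}(\sqrt{p},1/\sqrt{p})\in G\) for a prime \(p\); conjugation by \(g\) sends \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\) to \(\begin{pmatrix}a&pb\\ c/p&d\end{pmatrix}\), so
\[G_{1}\cap gG_{1}g^{-1}=\left\{\gamma\in\operatorname{SL}_{2}(\mathbb{Z})\;\middle|\;\gamma_{1,2}\equiv 0\ (\mathrm{mod}\ p)\right\},\]
a subgroup of index \(p+1\) in \(G_{1}\) and of finite index in \(gG_{1}g^{-1}\) as well; hence \(g\in\operatorname{Comm}(G_{1})\) even though \(gG_{1}g^{-1}\neq G_{1}\). A crucial observation in our situation is the following.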
**Theorem 4.1**.: _Fix a prime \(\ell\in\mathcal{S}\). For any \(\sigma_{1},\sigma_{2}\in\mathcal{T}(\ell)\), we have \(\sigma_{1}\sigma_{2}^{-1}\notin\mathbf{T}_{K^{\circ}}(\mathbb{Q})Z(\mathbf{G}(\mathbb{Q}_{p}))\) if and only if the subgroups \(\mathbf{G}^{1}[\ell,\sigma_{1}]\) and \(\mathbf{G}^{1}[\ell,\sigma_{2}]\) are not commensurable._
Proof.: We first prove by contradiction the 'only if' part. Write \(\sigma=\sigma_{1}\sigma_{2}^{-1}\), \(\mathcal{G}[*]=\Omega^{-1}(\mathbf{G}^{1}[*])\), \(\mathcal{G}=\mathcal{G}(\ell)\), \(\mathcal{B}=\mathcal{B}(\ell)\), etc. Then \(\mathcal{G}[\ell,\sigma_{1}]\) and \(\mathcal{G}[\ell,\sigma_{2}]\) are commensurable. Note that the conjugation map \(\tau_{\sigma}\in\mathrm{Aut}(\mathcal{G}^{1}(\mathbb{Q}_{p}))\) sending \(g\) to \(\Omega^{-1}(\epsilon(\sigma^{-1}))g\Omega^{-1}(\epsilon(\sigma))\) restricts to an automorphism \(\tau_{\sigma}\in\mathrm{Aut}(\mathcal{G}^{1}(\mathbb{Q}))\) by Margulis superrigidity theorem([14, Theorem VII.5.6]) since \(\mathcal{G}[\ell]\bigcap\mathcal{G}[\ell,\sigma]\) is Zariski dense in \(\mathcal{G}^{1}(\mathbb{Q}_{p})\), the latter fact following from the assumption that \(\mathcal{G}[\ell,\sigma_{1}]\) and \(\mathcal{G}[\ell,\sigma_{2}]\) are commensurable. Instead of applying Margulis superrigidity theorem, we can also prove this as follows: note that the elements in \(\mathcal{G}[\ell]\bigcap\mathcal{G}[\ell,\sigma]\) generate the whole \(\mathbb{Q}\)-vector space \(\mathcal{B}\), the Lie algebra of \(\mathcal{G}\) over \(\mathbb{Q}\) in the sense of [14, Chapter 7] (because this is the case for \(\mathcal{G}[\ell]\)), so \(\tau_{\sigma}\) sends \(\mathcal{B}\) to itself and \(\mathcal{G}^{1}(\mathbb{Q}_{p})\) to itself. However \(\mathcal{G}^{1}(\mathbb{Q}_{p})\bigcap\mathcal{B}=\mathcal{G}^{1}(\mathbb{Q})\), thus \(\tau_{\sigma}\) is an automorphism of \(\mathcal{G}^{1}(\mathbb{Q})\)(see for example [13, Exercise 5.2.4]).
We claim that the element \(\sigma\in\mathbf{T}_{K^{\circ}}^{1}(\mathbb{Q}_{p})\) also lies in \(Z(\mathcal{G}(\mathbb{Q}_{p}))\mathcal{G}(\mathbb{Q})\), therefore
\[\sigma\in\mathbf{T}_{K^{\circ}}^{1}(\mathbb{Q}_{p})\bigcap Z(\mathcal{G}( \mathbb{Q}_{p}))\mathcal{G}(\mathbb{Q})\subset Z(\mathcal{G}(\mathbb{Q}_{p})) \mathbf{T}_{K^{\circ}}(\mathbb{Q}),\]
which contradicts our assumption on \(\mathcal{T}\) and thus \(\mathbf{G}^{1}[\ell,\sigma_{1}]\) and \(\mathbf{G}^{1}[\ell,\sigma_{2}]\) are not commensurable. We prove the claim for the cases \(n>2\) and \(n=2\) separately:
1. Suppose \(n>2\). In this case we know that \(\tau_{\sigma}\) is of the form \(\tau_{\sigma}(g)=\mu(g)CgC^{-1}\) for some \(C\in\mathcal{G}(\mathbb{Q})\) and \(\mu\colon\mathcal{G}^{1}(\mathbb{Q})\to Z(\mathcal{G}^{1}(\mathbb{Q}))=\{\pm 1\}\) a group homomorphism ([10, p.93]). Write \[H=\mathcal{G}[\ell]\bigcap\mathcal{G}[\ell,\sigma]\bigcap\mathrm{Ker}(\mu),\] which is a subgroup of finite index of \(\mathcal{G}[\ell]\cap\mathcal{G}[\ell,\sigma]\), thus it is discrete and cocompact in \(\mathcal{G}^{1}(\mathbb{Q}_{p})\) and is Zariski dense in the latter group. Since \(\mu(H)=1\), \(\mu\) is the trivial character and so for any \(g\in\mathcal{G}^{1}(\mathbb{Q}_{p})\), \(\Omega^{-1}(\epsilon(\sigma^{-1}))g\Omega^{-1}(\epsilon(\sigma))=CgC^{-1}\). Therefore \[\Omega^{-1}(\epsilon(\sigma))C\in\mathrm{Cent}_{\mathcal{G}(\mathbb{Q}_{p})}( \mathcal{G}^{1}(\mathbb{Q}_{p}))=Z(\mathcal{G}(\mathbb{Q}_{p})).\] One deduces \[\Omega^{-1}(\epsilon(\sigma))\in Z(\mathcal{G}(\mathbb{Q}_{p}))\mathcal{G}( \mathbb{Q}).\] This shows in particular that the commensurator of \(\mathcal{G}[\ell,\sigma]\) inside \(\mathcal{G}(\mathbb{Q}_{p})\) is \(Z(\mathcal{G}(\mathbb{Q}_{p}))\mathcal{G}(\mathbb{Q})\). As a result, \(\sigma\in\mathbf{T}_{K^{\circ}}(\mathbb{Q}_{p})\) lies in \(Z(\mathcal{G}(\mathbb{Q}_{p}))\mathcal{G}(\mathbb{Q})\) (_cf._ (14)).
2. Suppose \(n=2\). We proceed by direct computation. The automorphism \(\tau_{\sigma}\) induces an automorphism of the Lie algebra \(\operatorname{Lie}(\mathcal{G}^{1})\) of \(\mathcal{G}^{1}\). We know \(\operatorname{Lie}(\mathcal{G}^{1})\) consists of \(X\in\mathcal{B}=\operatorname{Mat}_{2}(B)\) such that \(X=-X^{*}\). Write \(X=\begin{pmatrix}x&y\\ -y^{*}&z\end{pmatrix}\) for an arbitrary element in \(\operatorname{Lie}(\mathcal{G}^{1})(\mathbb{Q})\) where \(x=-x^{*},y,z=-z^{*}\in B\) and \(\Omega^{-1}(\epsilon(\sigma))=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\) where \(a,b,c,d\in B_{p}\). Thus \(\Omega^{-1}(\epsilon(\sigma^{-1}))X\Omega^{-1}(\epsilon(\sigma))\in \operatorname{Lie}(\mathcal{G}^{1})(\mathbb{Q})\) implies that the following elements all lie in \(B\): \[axa^{*},axc^{*},cxc^{*},bxb^{*},bxd^{*},dxd^{*},ayb^{*}-(ayb^{*})^{*},cyd^{*}-(cyd^{*})^{*},ayd^{*}-by^{*}c^{*}\in B.\] Suppose first \(a,b,d\neq 0\), then \((bxb^{*})^{-1}(bxd^{*})=(db^{-1})^{*}\in B\). We can thus write \(d=tb\) for some \(t\in B^{\times}\). Similarly \(c=sa\) for some \(s\in B\). Clearly \(t\neq s\). Now \[ayd^{*}-by^{*}c^{*}=(ayb^{*}-by^{*}a^{*}s^{*}(t^{*})^{-1})t^{*},\quad ayb^{*}-by^{*}a^{*}\in B.\] One deduces \(ayb^{*}\in B\) for any \(y\in B\). So \(b^{*}=a^{-1}r\) for some \(r\in B^{\times}\) (by setting \(y=1\)) and we get \(aya^{-1}\in B\) for all \(y\in B\). It is well known that all automorphisms of \(B\) are inner automorphisms, so there exists \(\widetilde{a}\in B^{\times}\) such that \(aya^{-1}=\widetilde{a}y\widetilde{a}^{-1}\) for all \(y\in B\). Therefore \(a_{1}=a\widetilde{a}^{-1}\in\operatorname{Cent}_{B_{p}}(B)=\mathbb{Q}_{p}\). It follows immediately that \(\Omega^{-1}(\epsilon(\sigma^{-1}))\in a_{1}\mathcal{B}\) for some \(a_{1}\in\mathbb{Q}_{p}^{\times}\). Suppose next \(b=0\) (thus \(a,d\neq 0\)). Then \(d\in B_{p}^{\times}\) and \(ayd^{*}\in B\) for any \(y\in B\). An argument similar to that in the preceding paragraph gives \(a=a_{1}\widetilde{a}\) for some \(a_{1}\in\mathbb{Q}_{p}^{\times}\) and \(\widetilde{a}\in B^{\times}\), and therefore \(\Omega^{-1}(\epsilon(\sigma^{-1}))\in a_{1}\mathcal{B}\). The cases \(a=0\) and \(d=0\) are treated similarly. Again we have shown that the commensurator of \(\mathcal{G}[\ell,\sigma]\) inside \(\mathcal{G}(\mathbb{Q}_{p})\) is \(\mathbb{Q}_{p}^{\times}\mathcal{G}(\mathbb{Q})\) and thus \(\sigma\in Z(\mathcal{G}(\mathbb{Q}_{p}))\mathcal{G}(\mathbb{Q})\).
For the 'if' part, suppose \(\sigma_{1}\sigma_{2}^{-1}\in\mathbf{T}_{K^{\circ}}(\mathbb{Q})Z(\mathbf{G}(\mathbb{Q}_{p}))\). Note that \(\mathcal{G}^{1}[\ell,\sigma_{1}]\) and \(\mathcal{G}^{1}[\ell,\sigma_{2}]\) are conjugate to each other (by the element \(\sigma_{1}\sigma_{2}^{-1}\)). We have seen that the commensurator \(\operatorname{Comm}(\mathcal{G}^{1}[\ell,\sigma_{1}])\) of \(\mathcal{G}^{1}[\ell,\sigma_{1}]\) inside \(\mathcal{G}(\mathbb{Q}_{p})\) is \(Z(\mathcal{G}(\mathbb{Q}_{p}))\mathcal{G}(\mathbb{Q})\), which clearly contains the element \(\Omega^{-1}(\sigma_{1}\sigma_{2}^{-1})\), and thus \(\mathcal{G}^{1}[\ell,\sigma_{1}]\) and \(\mathcal{G}^{1}[\ell,\sigma_{2}]\) are commensurable.
**Theorem 4.2**.: _For distinct \(\ell_{1},\ell_{2}\in\mathcal{S}\) and any \(\sigma_{i}\in\mathcal{T}(\ell_{i})\) (\(i=1,2\)), the subgroups \(\mathbf{G}^{1}[\ell_{1},\sigma_{1}]\) and \(\mathbf{G}^{1}[\ell_{2},\sigma_{2}]\) are not commensurable._
Proof.: To simplify notations, we write \(\Omega_{i}=\Omega(\ell_{i})\), \(\mathcal{G}_{i}=\mathcal{G}(\ell_{i})\), \(B_{i}=B(\ell_{i})\), etc.
If these two subgroups are commensurable, then they have the same commensurators inside \(\mathbf{G}^{1}(\mathbb{Q}_{p})\). However we have seen in the proof of the preceding theorem that the commensurator of \(\mathbf{G}^{1}[\ell_{1},\sigma_{1}]\) inside \(\mathbf{G}(\mathbb{Q}_{p})\) is
\[\Omega(\ell_{1})\left(Z(\mathcal{G}(\ell_{1}))(\mathbb{Q}_{p})\mathcal{G}(\ell _{1})(\mathbb{Q})\right)=\mathbb{Q}_{p}^{\times}\Omega(\ell_{1})\left(\mathcal{ G}(\ell_{1})(\mathbb{Q})\right)\]
and similarly for \(\mathbf{G}^{1}[\ell_{2},\sigma_{2}]\).
On the other hand, one can show that if \(\ell_{1}\) and \(\ell_{2}\) are distinct, then \(\mathbb{Q}_{p}^{\times}\Omega_{1}(\mathcal{G}_{1}(\mathbb{Q}))\) and \(\mathbb{Q}_{p}^{\times}\Omega_{2}(\mathcal{G}_{2}(\mathbb{Q}))\) are also distinct. Indeed, suppose they were equal. Take any \(g_{1}\in\mathcal{G}_{1}(\mathbb{Q})\); then it can be written as \(\Omega_{1}(g_{1})=b\Omega_{2}(g_{2})\) with \(b\in\mathbb{Q}_{p}^{\times}\) and \(g_{2}\in\mathcal{G}_{2}(\mathbb{Q})\). Note that the reduced trace satisfies \(\operatorname{Trd}(\Omega_{i}(g_{i}))=\operatorname{Trd}(g_{i})\in\mathbb{Q}\) for \(i=1,2\). If \(\operatorname{Trd}(g_{1})\neq 0\), then \(\operatorname{Trd}(g_{2})\neq 0\) and \(b=\operatorname{Trd}(g_{1})/\operatorname{Trd}(g_{2})\in\mathbb{Q}\), so \(\Omega_{1}(g_{1})\in\Omega_{2}(\mathcal{G}_{2}(\mathbb{Q}))\). Now for any \(g_{1}\in\mathcal{G}_{1}(\mathbb{Q})\), we can always find \(g_{1}^{\prime}\in\mathcal{G}_{1}(\mathbb{Q})\) such that \(\operatorname{Trd}(g_{1}g_{1}^{\prime})\neq 0\) and \(\operatorname{Trd}(g_{1}^{\prime})\neq 0\); writing \(g_{1}=(g_{1}g_{1}^{\prime})(g_{1}^{\prime})^{-1}\), we conclude that \(\Omega_{1}(g_{1})\in\Omega_{2}(\mathcal{G}_{2}(\mathbb{Q}))\) in general. From this, we deduce that
\[\Omega_{1}(\mathcal{G}_{1}(\mathbb{Q}))=\Omega_{2}(\mathcal{G}_{2}(\mathbb{Q})).\]
However, by (13), both \(\Omega_{1}\) and \(\Omega_{2}\) preserve the involutions, thus we have
\[\Omega_{1}(\mathcal{G}^{1}_{1}(\mathbb{Q}))=\Omega_{2}(\mathcal{G}^{1}_{2}( \mathbb{Q})).\]
Now by the classification of reductive dual pairs in the quaternion unitary group \(\Omega_{1}(\mathcal{G}_{1}(\mathbb{Q}))\) ([13, §1.19, p.13]), it contains irreducible reductive dual pairs \((\mathrm{O}_{n},B^{1}_{1})\) and \((\mathrm{O}_{n},B^{1}_{2})\), where \(\mathrm{O}_{n}\) is the orthogonal group associated to the identity matrix; thus the Hermitian space \(B^{n}_{1}\) over \(B_{1}\) and the Hermitian space \(B^{n}_{2}\) over \(B_{2}\) are isomorphic, which is impossible since \(B_{1}\) and \(B_{2}\) are not isomorphic. This contradiction shows that \(\mathbf{G}^{1}[\ell_{1},\sigma_{1}]\) and \(\mathbf{G}^{1}[\ell_{2},\sigma_{2}]\) are not commensurable.
### Proof of the main result
To keep the exposition self-contained, we reproduce part of [1, §§3.6 and 3.7]. For a group \(G\) and a finite set \(\mathcal{T}\), we write
\[\Delta\colon G\to\prod_{\sigma\in\mathcal{T}}G\]
for the diagonal map. For any \(\sigma_{0}\in\mathcal{T}\), write
\[\mathrm{pr}_{\sigma_{0}}\colon\prod_{\sigma\in\mathcal{T}}G\to G\]
for the map of projecting to the \(\sigma_{0}\)-th component. For a non-empty subset \(\mathcal{T}^{\prime}\) of \(\mathcal{T}\), we define a subgroup of \(\prod_{\sigma\in\mathcal{T}}G\)
\[G^{\mathcal{T}^{\prime}}=\left\{(g_{\sigma})\in\prod_{\sigma\in\mathcal{T}}G\;\middle|\;g_{\sigma}=1,\,\forall\sigma\notin\mathcal{T}^{\prime}\right\}\]
and set \(\Delta^{\mathcal{T}^{\prime}}\colon G\to G^{\mathcal{T}^{\prime}}\hookrightarrow\prod_{\sigma\in\mathcal{T}}G=G^{\mathcal{T}}\), the partial diagonal map.
A subgroup \(H\) of \(G^{\mathcal{T}}\) is called a _product of diagonals_ if there are _disjoint_ non-empty subsets \(\mathcal{T}_{1},\cdots,\mathcal{T}_{r}\) of \(\mathcal{T}\) such that \(H=\prod_{i=1}^{r}\Delta^{\mathcal{T}_{i}}(G)\). Then we have
**Proposition 4.3**.: _Suppose that \(G\) is simple and non-commutative. Then a subgroup \(H\) of \(G^{\mathcal{T}}\) is normalized by \(\Delta^{\mathcal{T}}(G)\) if and only if it is a product of diagonals._
Proof.: This is [1, Proposition 3.10].
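For instance (a minimal illustration of the terminology), if \(\mathcal{T}=\{1,2,3\}\), \(\mathcal{T}_{1}=\{1,2\}\) and \(\mathcal{T}_{2}=\{3\}\), then the corresponding product of diagonals is
\[\Delta^{\mathcal{T}_{1}}(G)\times\Delta^{\mathcal{T}_{2}}(G)=\{(g,g,h)\mid g,h\in G\}\subset G^{\mathcal{T}},\]
which is indeed normalized by the full diagonal \(\Delta^{\mathcal{T}}(G)=\{(g,g,g)\mid g\in G\}\).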
From now on, we assume that \(G\) is a simple and non-commutative \(p\)-adic Lie group which is generated by one-parameter adjoint unipotent subgroups (in the sense of [10]). Let \((\Gamma_{\sigma})_{\sigma\in\mathcal{T}}\) be a finite family of discrete and cocompact subgroups of \(G\). We write \(\Gamma=\prod_{\sigma\in\mathcal{T}}\Gamma_{\sigma}\), so that \(\Gamma\backslash G^{\mathcal{T}}\) is compact. We then define a partition of the finite set \(\mathcal{T}\) depending on these subgroups \(\Gamma_{\sigma}\)
\[\mathcal{T}=\bigsqcup_{i=1}^{r}\mathcal{T}_{i}\]
such that \(\sigma,\sigma^{\prime}\in\mathcal{T}_{i}\) if and only if \(\Gamma_{\sigma}\) and \(\Gamma_{\sigma^{\prime}}\) are commensurable (this is well-defined since commensurability is an equivalence relation).
**Lemma 4.4**.: _Take two elements \(\sigma,\sigma^{\prime}\in\mathcal{T}\). Write \(\Gamma_{\sigma,\sigma^{\prime}}=\Gamma_{\sigma}\times\Gamma_{\sigma^{\prime}}\) and \(\Delta^{\prime}\colon G\to G^{2}\) for the diagonal map. Then the closure of \(\Gamma_{\sigma,\sigma^{\prime}}\Delta^{\prime}(G)\) (always under the \(p\)-adic topology) is_
\[\begin{cases}G^{2},&\text{if $\Gamma_{\sigma}$ and $\Gamma_{\sigma^{\prime}}$ are not commensurable;}\\ \Gamma_{\sigma,\sigma^{\prime}}\Delta^{\prime}(G),&\text{if $\Gamma_{\sigma}$ and $ \Gamma_{\sigma^{\prime}}$ are commensurable.}\end{cases}\]
Proof.: By assumption, \(G\) is generated by one-parameter adjoint unipotent subgroups, thus we can apply Ratner's theorem on unipotent flows ([14, Theorem 2]) to get that the closure of \(\Gamma_{\sigma,\sigma^{\prime}}\Delta^{\prime}(G)\) in \(G^{2}\) is of the form \(\Gamma_{\sigma,\sigma^{\prime}}H\) for some closed subgroup \(H\) of \(G^{2}\). Clearly \(H\) is normalized by \(\Delta^{\prime}(G)\), and thus by Proposition 4.3, \(H\) is either \(\Delta^{\prime}(G)\) or \(G^{2}\) (there are only two partitions of the set \(\{\sigma,\sigma^{\prime}\}\)).
Observe that the following natural bijection is a homeomorphism
\[\left(\Gamma_{\sigma}\bigcap\Gamma_{\sigma^{\prime}}\right)\backslash G \rightarrow\left(\Delta^{\prime}(G)\bigcap\Gamma_{\sigma,\sigma^{\prime}} \right)\backslash\Delta^{\prime}(G).\]
On the other hand, since \(\Gamma_{\sigma,\sigma^{\prime}}\) is discrete in \(G^{2}\), \(\Delta^{\prime}(G)\) is open in \(\Gamma_{\sigma,\sigma^{\prime}}\Delta^{\prime}(G)\) and the following bijection is again a homeomorphism (an open continuous map)
\[\left(\Delta^{\prime}(G)\bigcap\Gamma_{\sigma,\sigma^{\prime}}\right) \backslash\Delta^{\prime}(G)\rightarrow\Gamma_{\sigma,\sigma^{\prime}} \backslash\Gamma_{\sigma,\sigma^{\prime}}\Delta^{\prime}(G).\]
In particular, \((\Gamma_{\sigma}\bigcap\Gamma_{\sigma^{\prime}})\backslash G\) is compact if and only if \(\Gamma_{\sigma,\sigma^{\prime}}\backslash\Gamma_{\sigma,\sigma^{\prime}} \Delta^{\prime}(G)\) is compact, if and only if \(\Gamma_{\sigma,\sigma^{\prime}}\backslash\Gamma_{\sigma,\sigma^{\prime}} \Delta^{\prime}(G)\) is closed in \(\Gamma_{\sigma,\sigma^{\prime}}\backslash G^{2}\).
Now if \(\Gamma_{\sigma}\) and \(\Gamma_{\sigma^{\prime}}\) are commensurable, then \((\Gamma_{\sigma}\bigcap\Gamma_{\sigma^{\prime}})\backslash G\) is compact, so \(\Gamma_{\sigma,\sigma^{\prime}}\backslash\Gamma_{\sigma,\sigma^{\prime}}\Delta^{\prime}(G)\) is closed in \(\Gamma_{\sigma,\sigma^{\prime}}\backslash G^{2}\), thus \(H=\Delta^{\prime}(G)\). If they are not commensurable, then \((\Gamma_{\sigma}\bigcap\Gamma_{\sigma^{\prime}})\backslash G\) is not compact, so \(\Gamma_{\sigma,\sigma^{\prime}}\backslash\Gamma_{\sigma,\sigma^{\prime}}\Delta^{\prime}(G)\) is not closed in \(\Gamma_{\sigma,\sigma^{\prime}}\backslash G^{2}\), and therefore we must have \(H=G^{2}\).
**Proposition 4.5**.: _The closure of \(\Gamma\Delta^{\mathcal{T}}(G)\) in \(G^{\mathcal{T}}\) is a subgroup which is a product of diagonals \(\prod_{i=1}^{r}\Delta^{\mathcal{T}_{i}}(G)\). In particular, if each \(\mathcal{T}_{i}\) contains only one element, then \(\Gamma\Delta^{\mathcal{T}}(G)\) is dense in \(G^{\mathcal{T}}\)._
Proof.: As in the previous lemma, we know the closure is of the form \(\Gamma H\) for some closed subgroup \(H\) of \(G^{\mathcal{T}}\) of the form \(H=\prod_{i=1}^{r^{\prime}}\Delta^{\mathcal{T}_{i}^{\prime}}(G)\) for some _partition_ of \(\mathcal{T}\):
\[\mathcal{T}=\bigsqcup_{i=1}^{r^{\prime}}\mathcal{T}_{i}^{\prime}.\]
Now take any two elements \(\sigma,\sigma^{\prime}\in\mathcal{T}\) and consider the projection to the components of \(\sigma\) and \(\sigma^{\prime}\):
\[\operatorname{pr}_{\sigma}\times\operatorname{pr}_{\sigma^{\prime}}\colon G^ {\mathcal{T}}\to G^{2}\]
Clearly the image of \(\Gamma H\) under this projection map is equal to the closure of \(\Gamma_{\sigma,\sigma^{\prime}}\Delta^{\prime}(G)\), where \(\Delta^{\prime}\colon G\to G^{2}\) is the diagonal map. It follows immediately from the previous lemma that \(\sigma,\sigma^{\prime}\) lie in the same \(\mathcal{T}_{i}^{\prime}\) if and only if the closure of \(\Gamma_{\sigma,\sigma^{\prime}}\Delta^{\prime}(G)\) is _not_ equal to \(G^{2}\), if and only if \(\Gamma_{\sigma}\) and \(\Gamma_{\sigma^{\prime}}\) are commensurable, if and only if \(\sigma,\sigma^{\prime}\) lie in the same \(\mathcal{T}_{i^{\prime}}\) for some \(i^{\prime}=1,\cdots,r\). Therefore each \(\mathcal{T}_{i}^{\prime}\) is contained in some \(\mathcal{T}_{i^{\prime}}\), and vice versa. So these two partitions \(\mathcal{T}=\bigsqcup_{i=1}^{r}\mathcal{T}_{i}\) and \(\mathcal{T}=\bigsqcup_{i=1}^{r^{\prime}}\mathcal{T}_{i}^{\prime}\) are the same, which finishes the proof.
We write the quotient group
\[\mathbf{PG}^{1}(\mathbb{Q}_{p})=\mathbf{G}^{1}(\mathbb{Q}_{p})/Z(\mathbf{G}^{1} (\mathbb{Q}_{p})),\]
where \(Z(\mathbf{G}^{1}(\mathbb{Q}_{p}))=\{\pm 1_{2n}\}\) is the center of \(\mathbf{G}^{1}(\mathbb{Q}_{p})\). For a subgroup \(H\) of \(\mathbf{G}^{1}(\mathbb{Q}_{p})\), we write \(\mathbf{P}H\) for the image of \(H\) under the projection map \(\mathbf{G}^{1}(\mathbb{Q}_{p})\rightarrow\mathbf{PG}^{1}(\mathbb{Q}_{p})\).
For each \(\ell\in\mathcal{S}\), we have a partition of \(\mathcal{T}(\ell)\) given by the pre-images of \(\overline{\mathcal{T}}(\ell)\) under the natural map \(\pi\colon\mathbf{T}_{K^{\circ}}^{1}(\mathbb{Q}_{p})\rightarrow\mathbf{T}_{K^{\circ}}(\mathbb{Q}_{p})/(\mathbf{T}_{K^{\circ}}(\mathbb{Q})Z(\mathbf{G}(\mathbb{Q}_{p})))\):
\[\mathcal{T}(\ell)=\bigsqcup_{i=1}^{r(\ell)}\mathcal{T}(\ell)_{i}\quad\text{ where }\sigma,\sigma^{\prime}\in\mathcal{T}(\ell)_{i}\Leftrightarrow\pi(\sigma)=\pi(\sigma^{\prime}).\]
Define the diagonal maps
\[\Delta\colon\mathbf{G}^{1}(\mathbb{Q}_{p})\to\prod_{\ell\in\mathcal{S},\sigma\in \mathcal{T}(\ell)}\mathbf{G}^{1}(\mathbb{Q}_{p}),\quad\mathbf{PG}^{1}(\mathbb{Q }_{p})\to\prod_{\ell\in\mathcal{S},\sigma\in\mathcal{T}(\ell)}\mathbf{PG}^{1}( \mathbb{Q}_{p}).\]
**Theorem 4.6**.: _The \(\mathcal{S}\)-simultaneous projection map_
\[\Pi\colon\mathbf{G}^{1}(\mathbb{Q}_{p})\to\prod_{\ell\in\mathcal{S},\sigma\in \mathcal{T}(\ell)}\mathbf{G}^{1}[\ell,\sigma]\backslash\mathbf{G}^{1}(\mathbb{ Q}_{p})/\mathbf{K}^{1}[\ell]\]
_has image equal to_
\[\prod_{\ell\in\mathcal{S}}\prod_{i=1}^{r(\ell)}\mathbf{G}^{1}[\ell,\mathcal{T }(\ell)_{i}]\backslash\left(\mathbf{G}^{1}[\ell,\mathcal{T}(\ell)_{i}] \Delta^{\mathcal{T}(\ell)_{i}}(\mathbf{G}^{1}(\mathbb{Q}_{p}))\mathbf{K}^{1}[ \ell]/\mathbf{K}^{1}[\ell]\right),\]
_where \(\mathbf{G}^{1}[\ell,\mathcal{T}(\ell)_{i}]=\prod_{\sigma\in\mathcal{T}(\ell)_ {i}}\mathbf{G}^{1}[\ell,\sigma]\)._
Proof.: Since \(p\nmid N\), \(\mathbf{K}_{p}\) contains \(Z(\mathbf{G}^{1}(\mathbb{Q}_{p}))\), so does \(\mathbf{K}^{1}[\ell]\) for any \(\ell\in\mathcal{S}\), thus the map \(\Pi\) factors through the \(\mathcal{S}\)-simultaneous projection map
\[\mathbf{PG}^{1}(\mathbb{Q}_{p})\to\prod_{\ell\in\mathcal{S},\sigma\in\mathcal{T }(\ell)}\mathbf{PG}^{1}[\ell,\sigma]\backslash\mathbf{PG}^{1}(\mathbb{Q}_{p}) /\mathbf{PK}^{1}[\ell].\]
Moreover \(\mathbf{PG}^{1}(\mathbb{Q}_{p})\) is a simple and non-commutative group and is generated by one-parameter adjoint unipotent subgroups (in the sense of [10]), and each \(\mathbf{G}^{1}[\ell,\sigma]\) is discrete and cocompact in \(\mathbf{G}^{1}(\mathbb{Q}_{p})\). Write \(\Gamma=\prod_{\ell\in\mathcal{S},\sigma\in\mathcal{T}(\ell)}\mathbf{PG}^{1}[\ell,\sigma]\). Using Proposition 4.5, we know that the closure of the image of the following simultaneous projection map
\[\mathbf{PG}^{1}(\mathbb{Q}_{p})\to\prod_{\ell\in\mathcal{S},\sigma\in \mathcal{T}(\ell)}\mathbf{PG}^{1}[\ell,\sigma]\backslash\mathbf{PG}^{1}( \mathbb{Q}_{p})=\Gamma\backslash\prod_{\ell\in\mathcal{S},\sigma\in\mathcal{T} (\ell)}\mathbf{PG}^{1}(\mathbb{Q}_{p})\]
is of the form \(\Gamma\backslash\Gamma H\) for a closed subgroup \(H\) of \(\prod_{\ell\in\mathcal{S},\sigma\in\mathcal{T}(\ell)}\mathbf{PG}^{1}(\mathbb{Q }_{p})\) which is a product of diagonals
\[H=\prod_{\ell\in\mathcal{S}}\prod_{i=1}^{r(\ell)}\Delta^{\mathcal{T}(\ell)_{i} }(\mathbf{PG}^{1}(\mathbb{Q}_{p})).\]
From this one deduces easily the theorem.
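As a sanity check on the statement, consider the extreme case in which every class \(\mathcal{T}(\ell)_{i}\) is a singleton, i.e. the elements of each \(\mathcal{T}(\ell)\) have pairwise distinct images under \(\pi\). Then each factor \(\Delta^{\mathcal{T}(\ell)_{i}}(\mathbf{G}^{1}(\mathbb{Q}_{p}))\) is a full copy of \(\mathbf{G}^{1}(\mathbb{Q}_{p})\), so the image above is the whole target:
\[\Pi\big(\mathbf{G}^{1}(\mathbb{Q}_{p})\big)=\prod_{\ell\in\mathcal{S},\,\sigma\in\mathcal{T}(\ell)}\mathbf{G}^{1}[\ell,\sigma]\backslash\mathbf{G}^{1}(\mathbb{Q}_{p})/\mathbf{K}^{1}[\ell],\]
that is, \(\Pi\), and hence by (19) the \(\mathcal{S}\)-simultaneous reduction map \(\operatorname{Red}\), is surjective in this case.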
Now we return to the superspecial locus \(X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell})\), which admits a parametrization via (19): for any \(\ell\in\mathcal{S}\) and \(\sigma\in\mathcal{T}(\ell)\), we set
\[\Theta_{\ell,\sigma}:=\Theta(\ell)\circ\Omega(\ell)^{-1}\circ(\epsilon(\sigma) -)\colon\mathbf{G}^{1}[\ell,\sigma]\backslash\mathbf{G}^{1}(\mathbb{Q}_{p})/ \mathbf{K}^{1}[\ell]\to X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{ \ell}).\]
For any non-empty subset \(\mathcal{T}^{\prime}\) of \(\mathcal{T}(\ell)\), we define the twisted diagonal map
\[\widetilde{\Delta}^{\mathcal{T}^{\prime}}\colon X_{\mathbf{K}}^{\mathrm{ssp}}( \overline{\mathbb{F}}_{\ell})\to\prod_{\sigma\in\mathcal{T}^{\prime}}X_{ \mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell})\]
to be the following composition map
\[X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell})\xrightarrow{\ \Theta_{\ell,1}^{-1}\ }\mathbf{G}^{1}[\ell]\backslash\mathbf{G}^{1}(\mathbb{Q}_{p})/\mathbf{K}^{1}[\ell]\longrightarrow\prod_{\sigma\in\mathcal{T}^{\prime}}\mathbf{G}^{1}[\ell,\sigma]\backslash\mathbf{G}^{1}(\mathbb{Q}_{p})/\mathbf{K}^{1}[\ell]\xrightarrow{\ \prod_{\sigma\in\mathcal{T}^{\prime}}\Theta_{\ell,\sigma}\ }\prod_{\sigma\in\mathcal{T}^{\prime}}X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell}).\]
We can reformulate Theorem 4.6 as follows, which gives Theorems 1.3 and 1.7:
**Corollary 4.7**.: _The image of the \(\mathcal{S}\)-simultaneous reduction map_
\[\mathfrak{M}_{\mathbf{K}}^{x_{0}}(\mathbf{G}^{1}(\mathbb{Q}_{p}))\to\prod_{\ell\in\mathcal{S},\sigma\in\mathcal{T}(\ell)}X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell}),\quad x_{g}\mapsto(A_{\sigma g}\ (\mathrm{mod}\ \mathfrak{l}))_{\ell\in\mathcal{S},\sigma\in\mathcal{T}(\ell)}\]
_is given by \(\prod_{\ell\in\mathcal{S}}\prod_{i=1}^{r(\ell)}\widetilde{\Delta}^{\mathcal{T}(\ell)_{i}}\left(X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell})\right)\). In particular, it is in bijection with \(\prod_{\ell\in\mathcal{S},\overline{\sigma}\in\overline{\mathcal{T}}(\ell)}X_{\mathbf{K}}^{\mathrm{ssp}}(\overline{\mathbb{F}}_{\ell})\) as stated in Theorem 1.7._
|
2303.08605
|
RICO: Regularizing the Unobservable for Indoor Compositional
Reconstruction
|
Recently, neural implicit surfaces have become popular for multi-view
reconstruction. To facilitate practical applications like scene editing and
manipulation, some works extend the framework with semantic masks input for the
object-compositional reconstruction rather than the holistic perspective.
Though achieving plausible disentanglement, the performance drops significantly
when processing the indoor scenes where objects are usually partially observed.
We propose RICO to address this by regularizing the unobservable regions for
indoor compositional reconstruction. Our key idea is to first regularize the
smoothness of the occluded background, which then in turn guides the foreground
object reconstruction in unobservable regions based on the object-background
relationship. Particularly, we regularize the geometry smoothness of occluded
background patches. With the improved background surface, the signed distance
function and the reversedly rendered depth of objects can be optimized to bound
them within the background range. Extensive experiments show our method
outperforms other methods on synthetic and real-world indoor scenes and prove
the effectiveness of proposed regularizations. The code is available at
https://github.com/kyleleey/RICO.
|
Zizhang Li, Xiaoyang Lyu, Yuanyuan Ding, Mengmeng Wang, Yiyi Liao, Yong Liu
|
2023-03-15T13:24:36Z
|
http://arxiv.org/abs/2303.08605v2
|
# RICO: Regularizing the Unobservable for Indoor Compositional Reconstruction
###### Abstract
Recently, neural implicit surfaces have become popular for multi-view reconstruction. To facilitate practical applications like scene editing and manipulation, some works extend the framework with semantic masks input for the object-compositional reconstruction rather than the holistic perspective. Though achieving plausible disentanglement, the performance drops significantly when processing the indoor scenes where objects are usually partially observed. We propose RICO to address this by regularizing the unobservable regions for indoor compositional reconstruction. Our key idea is to first regularize the smoothness of the occluded background, which then in turn guides the foreground object reconstruction in unobservable regions based on the object-background relationship. Particularly, we regularize the geometry smoothness of occluded background patches. With the improved background surface, the signed distance function and the reversedly rendered depth of objects can be optimized to bound them within the background range. Extensive experiments show our method outperforms other methods on synthetic and real-world indoor scenes and prove the effectiveness of proposed regularizations.
## 1 Introduction
Reconstructing 3D geometry from images is a fundamental problem in computer vision and has many downstream applications like VR/AR and game assets creation. With the advance of neural implicit representations [20], recent reconstruction methods [25, 36, 41, 43] can recover accurate geometry from multi-view images. However, existing methods typically regard the whole scene as an entirety and reconstruct everything altogether, thus preventing applications like scene editing. In indoor scenes with plenty of reconfigurable objects, a disentangled object-compositional reconstruction, i.e. decomposing the geometry into different instantiated objects and background, can be more suitable for further applications like moving the sofa in the scene.
In this paper, we aim to recover the room-level indoor scenes with decomposed geometry of individual objects and background. We assume that multi-view posed images and semantic masks that assign different labels to each instantiated object and the background are given as input. Existing object-compositional methods [9, 45, 40] concentrate more on the rendering performance rather than the underlying geometry, and thus can not be directly used for reconstruction. The most recent work ObjSDF [37] learns an object-compositional signed distance function (SDF) field by proposing a transform function between SDF values and semantic logits. Specifically speaking, ObjSDF predicts multiple SDF values at each 3D point for different semantic classes, and converts them to semantic logits, allowing for separating object SDF values from the background when
Figure 1: **Comparison** of [37] (Top) and ours (Bottom). Different objects are visualized in different colors. Previous reconstructions are connected with large artifacts. And taking the partially observed cubic object’s front and back views (in red and green rectangles respectively) as an example, [37] can only get the visible surface, while ours can complement the complete body.
supervised by semantic masks. Although achieving plausible shape disentanglement, it suffers from a common problem in indoor scenes: objects and background can only be _partially observed due to occlusions_. When the object is partially observed, e.g. a cubic object against the wall, ObjSDF can not properly reconstruct the geometry between them (see Fig. 1 top-right). The reason is that the existing works [45, 37] can only effectively regularize semantic labels and geometry of observed regions, and have little impact on the unobserved regions. When processing the indoor scenes where a large portion of objects are partially observed, the reconstruction results of these objects will be visible surface connected with the unconstrained structures (as shown in Fig. 1, see Section 3.3 for a detailed analysis). Fig. 2 shows that even with reasonable reconstruction when composing all the objects together, each object's result in the unobserved region is far from satisfactory and can hinder further applications like manipulating the object.
We propose **RICO**, which achieves proper geometry disentanglement for indoor scenes (see Fig. 1 bottom) by explicitly regularizing the unobserved regions. To be more specific, when the object is partially observed, recovering its geometry is an ill-posed problem even with corresponding masks. Thus, introducing prior regularization for unobserved regions is necessary. We exploit two types of prior knowledge for indoor scenes in this work: 1) background smoothness and 2) object-background relations. First, when one ray hits the object surface, the existing method [37] can properly regularize the geometry and appearance at the hit point, but can not account for the background surface behind this object. This drawback leads to artifacts and holes on the unobserved background surface (see Fig. 2). We propose a patch-based smoothness loss to regularize the SDF values of unobserved background regions. Then, since the background reconstruction is improved, we can leverage another strong prior: _the objects are all within the room_, i.e., we use the background surface to regularize the SDF field of objects. We design two regularization terms: an object point-SDF loss for sampled points behind the background surface and a reversed depth loss to regularize the SDF distribution along the entire ray. Both terms aim to bound the object within the background surface's range, thus preventing the aforementioned meaningless structures and making the object reconstruction a _watertight and plausible_ shape instead of an open surface with severe artifacts.
In summary, we propose RICO to realize compositional reconstruction in indoor scenes where a large portion of objects are partially observed. Our main contributions are: i) A patch-based background smoothness regularizer for unobserved background surface geometry. ii) Guided by the improved background surface, we exploit the object-background relation as prior knowledge and design objectives that effectively regularize objects' unobserved regions. iii) Extensive experiments on both real-world and synthetic datasets prove our superior reconstruction performance compared to previous works, especially for the partially observed objects.
## 2 Related Work
**Neural Implicit Representation for Reconstruction:** Recently, neural implicit representations have emerged as a popular representation for 3D reconstruction. While early approaches [19, 26] rely on 3D supervision, a few works [24, 42] exploit surface rendering to use multi-view image supervision only, but also suffer from the unstable training. Neural radiance field (NeRF) [20] adopts volume rendering technique to achieve photorealistic scene representation and stable optimization. However, NeRF's formulation can not guarantee accurate underlying geometry. Therefore, [25, 36, 41] combine the geometry representation with iso-surface (e.g. occupancy [19], SDF [36, 41]) and volume density to accurately reconstruct object-level scenes from RGB images. [8] further applies the planar regularization for scene-level reconstruction. To tackle the problem in texture-less regions, [35, 43] utilize results from pretrained normal and depth estimation networks to guide the SDF training and boost the reconstruction performance.
However, despite the promising reconstruction performance, the aforementioned methods all consider the whole scene as an entirety. Our method focuses on decomposing the scene reconstruction into the background and different foreground objects, which can be regarded as compositional scene reconstruction.
**Compositional Scene Reconstruction:** Decomposing a scene into its different components could benefit downstream applications like scene editing. Many works have been proposed to recover the scene in a compositional manner from different perspectives. [22, 44, 10, 21] detect and reconstruct different objects in the given monocular image, and predict the scene's layout at the same time. But most of
Figure 2: **ObjSDF results.** Objects of interest are dyed in blue. Despite the plausible composition, the disentangled background has artifacts and sink holes, and partially observed objects only obtain their visible surface (illustrated in ‘Object Backward’).
these methods require large-scale datasets with 3D ground truth for training. [14, 32, 18] optimize a feature field from a large pretrained model [1, 16], which enables deep-feature-based decomposition and manipulation. [38, 31] exploit the self-supervised paradigm to decompose scenes into static and dynamic parts. More works [9, 45, 40, 39, 34, 6, 15, 33] focus on recovering object-compositional scenes given semantic masks along with images. However, this line of work concerns the rendering outputs rather than the geometry, i.e. the reconstruction results are sub-optimal.
Recently, ObjSDF [37] proposes a transform function between semantic logits and SDFs of different objects, which enables optimizing SDF fields with image and semantic mask supervision, and decomposing the whole scene with accurate reconstruction. However, in indoor scenes where many objects are only partially observed, its reconstruction results are far from satisfactory and cannot be used for further applications (see Fig. 2). On the contrary, our method introduces geometry priors for the unobserved regions, yielding better compositional scene reconstruction.
**Prior Regularization in Neural Implicit Reconstruction:** In addition to the commonly used RGB image supervision, many different priors have been proposed to benefit neural implicit representations. For example, [4, 30] use explicit point clouds as depth priors, [23] employs regularization in unseen views, and [11] introduces pretrained-model feature consistency. [12, 28] adopt explicit human models as structural priors for human-centric scenes. As for surface reconstruction, [8] proposes a Manhattan-world assumption for planar regions, and [43] utilizes the normal and depth predictions from an off-the-shelf model [5] as priors to regularize texture-less regions.
Our method exploits the geometry prior in the unobserved region for compositional scene reconstruction. The proposed regularization can effectively reconstruct the objects and disentangle them from the background, even if the object itself is partially observed.
## 3 Methodology
Our goal is to recover the decomposed surface geometry of the objects and the background within a scene from image and semantic mask inputs. To this end, we first review the SDF-based neural implicit representation and how to use semantic logits for compositional reconstruction in Section 3.1. Next, we propose two types of regularization on unobserved regions to address the partial observation problem: patch-based background smoothness (Section 3.2) and the object-background relation (Section 3.3). Finally, we introduce the overall optimization procedure in Section 3.4. An overview of our method is provided in Fig. 3.
### Background
**Volume Rendering of SDF-based Implicit Surface:** For implicit reconstruction, the geometry of the scene is represented by the signed distance function (SDF) \(s(\mathbf{p})\) of each spatial point \(\mathbf{p}\), i.e. the point's signed distance to the closest surface. In practice, the SDF function is implemented as a multi-layer perceptron (MLP) network \(f(\cdot)\). The appearance of the scene is also defined as an MLP \(g(\cdot)\):
\[\begin{split}& f:\mathbf{p}\in\mathbb{R}^{3}\mapsto(s\in\mathbb{R}, \mathbf{f}\in\mathbb{R}^{256})\\ & g:(\mathbf{p}\in\mathbb{R}^{3},\mathbf{n}\in\mathbb{R}^{3}, \mathbf{v}\in\mathbb{S}^{2},\mathbf{f}\in\mathbb{R}^{256})\mapsto\mathbf{c} \in\mathbb{R}^{3}\end{split} \tag{1}\]
where \(\mathbf{f}\) is a geometry feature vector, \(\mathbf{n}\) is the normal at \(\mathbf{p}\), \(\mathbf{v}\) is the viewing direction and \(\mathbf{c}\) is the view-dependent color.
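For concreteness, a minimal PyTorch sketch of the two networks in Eq. 1 is given below. The layer widths, activations, and the absence of positional encoding are simplifying assumptions for illustration, not the exact architecture used in this work.

```python
import torch
import torch.nn as nn

class SDFNetwork(nn.Module):
    """f: p -> (s, feature). Hypothetical widths; the actual network follows [41, 36, 37]."""
    def __init__(self, feat_dim=256, hidden=256, num_sdfs=1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
        )
        self.sdf_head = nn.Linear(hidden, num_sdfs)   # k SDFs for k objects (see Sec. 3.1)
        self.feat_head = nn.Linear(hidden, feat_dim)  # geometry feature f

    def forward(self, p):
        h = self.backbone(p)
        return self.sdf_head(h), self.feat_head(h)

class ColorNetwork(nn.Module):
    """g: (p, n, v, feature) -> c, view-dependent RGB."""
    def __init__(self, feat_dim=256, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3 + 3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, p, n, v, feat):
        return self.mlp(torch.cat([p, n, v, feat], dim=-1))
```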
Figure 3: **Overview. In this work, we propose two different regularizations. We first regularize the geometry smoothness of the unobserved background regions in a sampled patch. Then, we exploit the background surface as the prior to constrain the objects’ surface. In detail, a per-point SDF loss and a reversed depth loss are introduced to regularize the manifold of objects’ SDF functions. Combined with other reconstruction losses, our method reaches a neat and disentangled compositional reconstruction in indoor scenes.**
We adopt the unbiased rendering proposed in [36] to render the image. For each camera ray \(\mathbf{r}=(\mathbf{o},\mathbf{v})\) with \(\mathbf{o}\) as the ray origin, \(n\) points \(\{\mathbf{p}(t_{i})=\mathbf{o}+t_{i}\mathbf{v}|i=0,1,\dots,n-1\}\) are sampled, and the pixel color can be approximated as:
\[\hat{\mathbf{C}}(\mathbf{r})=\sum_{i=0}^{n-1}T_{i}\alpha_{i}\mathbf{c}_{i}. \tag{2}\]
Here \(T_{i}\) is the discrete accumulated transmittance, derived as \(T_{i}=\prod_{j=0}^{i-1}(1-\alpha_{j})\), and \(\alpha_{i}\) is the discrete opacity value defined as
\[\alpha_{i}=\max\left(\frac{\Phi_{u}(s(\mathbf{p}(t_{i})))-\Phi_{u}(s(\mathbf{ p}(t_{i+1})))}{\Phi_{u}(s(\mathbf{p}(t_{i})))},0\right), \tag{3}\]
where \(\Phi_{u}(x)=(1+e^{-ux})^{-1}\) and \(u\) is a learnable parameter. By minimizing the difference between predicted and ground-truth pixel colors, we can learn the SDF and appearance function of the desired scene.
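The discrete rendering of Eqs. 2–3 can be sketched in PyTorch as below. The SDF values and colors at the sampled points are assumed to be given, and the small numerical-stability epsilons are our own additions.

```python
import torch

def render_color(sdf_vals, colors, u):
    """
    sdf_vals: (n_rays, n_pts) scene SDF at sampled points t_0 .. t_{n-1}.
    colors:   (n_rays, n_pts, 3) predicted colors at those points.
    u:        scalar tensor, learnable sharpness of Phi_u.
    Returns per-ray RGB following Eqs. 2-3 (a sketch, not the exact implementation).
    """
    phi = torch.sigmoid(u * sdf_vals)                         # Phi_u(s(p(t_i)))
    alpha = ((phi[:, :-1] - phi[:, 1:]) / (phi[:, :-1] + 1e-8)).clamp(min=0.0)
    # accumulated transmittance T_i = prod_{j<i} (1 - alpha_j)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-7], dim=-1),
        dim=-1)[:, :-1]
    weights = trans * alpha                                   # (n_rays, n_pts - 1)
    return (weights.unsqueeze(-1) * colors[:, :-1]).sum(dim=1)
```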
**Learning SDF with Semantic Logits:** In this work, we consider the compositional reconstruction of \(k\) objects given their masks. Note that, for brevity, we also treat the background as an instantiated object as in [37], and follow their network structure. In detail, for a scene with \(k\) objects, the SDF MLP \(f(\cdot)\) now outputs \(k\) SDFs at each point, and the \(j\)-th (\(1\leq j\leq k\)) SDF represents the geometry of the \(j\)-th object. Without loss of generality, we set \(j=1\) as the background category and the others as objects in Fig. 3 and the rest of the paper. The _scene_ SDF is the minimum of \(\{s_{j}\}\), which is used for sampling points along the ray and for the aforementioned volume rendering (Eq. 2). Moreover, each point's \(k\) SDFs can be transformed into semantic logits \(\mathbf{h}(\mathbf{p})\) as
\[h_{j}(\mathbf{p}) =\gamma/(1+\exp(\gamma\cdot s_{j}(\mathbf{p}))), \tag{4}\] \[\mathbf{h}(\mathbf{p}) =[h_{1}(\mathbf{p}),h_{2}(\mathbf{p}),...,h_{k}(\mathbf{p})],\]
where \(\gamma\) is a fixed parameter. Using volume rendering to accumulate the semantic logits of all the points along a ray, we can get the semantic logits \(\mathbf{H}(\mathbf{r})\in\mathbb{R}^{k}\) of each pixel. During training, the cross-entropy loss applied to \(\mathbf{H}(\mathbf{r})\) is backpropagated to the SDF values, allowing for learning the compositional geometry.
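Eq. 4 and the pixel-level semantic supervision can be sketched as follows. The rendering weights are assumed to come from the volume-rendering step above, and the index used for the background class is an implementation detail not fixed here.

```python
import torch
import torch.nn.functional as F

def semantic_loss(sdfs, weights, gt_labels, gamma=20.0):
    """
    sdfs:      (n_rays, n_pts, k) per-object SDFs at the sampled points.
    weights:   (n_rays, n_pts) volume-rendering weights T_i * alpha_i for the same points.
    gt_labels: (n_rays,) ground-truth semantic class index per pixel (long tensor).
    """
    h = gamma / (1.0 + torch.exp(gamma * sdfs))   # Eq. 4: per-point semantic logits
    H = (weights.unsqueeze(-1) * h).sum(dim=1)    # accumulate along the ray -> pixel logits
    return F.cross_entropy(H, gt_labels)          # backpropagates to the SDF values
```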
### Patch-based Background Smoothness
Although volume rendering can propagate gradients along the entire ray, the optimization mainly focuses on the surface-hitting point, as its accumulated weight can be much larger than the others'. As a result, the geometry of the points behind the first-hit surface cannot be optimized correctly. In indoor scenes, the occluded part of the background surface is invisible in all the images and can thus be reconstructed with holes and random artifacts (see Fig. 2).

Since we cannot tell the exact color, depth or normal of the occluded part, it is intractable to optimize this region w.r.t. its ground truth. Therefore, we propose to regularize the geometry of the occluded background to be _smooth_, thus preventing clearly wrong artifacts.
In detail, we regularize the smoothness of the rendered depth and normal of the background surface within a small patch region. To save computation, we randomly sample a \(\mathcal{P}\times\mathcal{P}\) patch every \(\mathcal{T}_{\mathcal{P}}\) iterations in the given image and sample points along the patch rays using the _background SDF_ only. We compute the depth map \(\hat{D}(\mathbf{r})\) and normal map \(\hat{N}(\mathbf{r})\) of the sampled patch following [43], where \(\mathbf{r}\) denotes a sampled ray in the patch. The semantic map of the patch is also computed and transformed into a patch mask \(\hat{M}(\mathbf{r})\):
\[\hat{M}(\mathbf{r})=\mathbb{1}[\arg\max(\mathbf{H}(\mathbf{r}))\neq 1], \tag{5}\]
which means the mask value is \(1\) if the rendered class is not the background, so that only the occluded background is regularized. Taking rendered depth as an example, the patch-based background smoothness loss is
\[\mathcal{L}(\hat{D})=\sum_{d=0}^{3}\sum_{m,n=0}^{\mathcal{P}-1-2^{d}}\hat{M}(\mathbf{r}_{m,n})\odot\left(|\hat{D}(\mathbf{r}_{m,n})-\hat{D}(\mathbf{r}_{m,n+2^{d}})|+|\hat{D}(\mathbf{r}_{m,n})-\hat{D}(\mathbf{r}_{m+2^{d},n})|\right). \tag{6}\]
Here the smoothness is applied at different intervals controlled by \(d\); \(m\) and \(n\) are the pixel indices within the patch, and the mask is multiplied at each position via the Hadamard product \(\odot\). The normal smoothness loss \(\mathcal{L}(\hat{N})\) can be obtained similarly. We define the overall background smoothness loss \(\mathcal{L}_{bs}\) as:
\[\mathcal{L}_{\text{bs}}=\mathcal{L}(\hat{D})+\mathcal{L}(\hat{N}). \tag{7}\]
In contrast to [23], which applies a patch-based regularization to regions visible in other views, we instead regularize the _occluded_ regions of the background.
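A minimal sketch of the patch-based smoothness term (Eqs. 5–7), assuming the patch depth and semantic logits have already been rendered from the background SDF:

```python
import torch

def background_smoothness(depth, sem_logits, num_scales=4):
    """
    depth:      (P, P) depth of the patch rendered with the background SDF only.
    sem_logits: (P, P, k) rendered semantic logits of the patch; class 0 is taken as
                background here (an assumption; the paper uses index 1).
    Implements Eq. 6 for the depth map; the normal term of Eq. 7 is analogous.
    """
    mask = (sem_logits.argmax(dim=-1) != 0).float()        # Eq. 5: only occluded background
    loss = depth.new_zeros(())
    for d in range(num_scales):                            # intervals 2^d, d = 0..3
        step = 2 ** d
        dx = (depth[:, :-step] - depth[:, step:]).abs()    # horizontal differences
        dy = (depth[:-step, :] - depth[step:, :]).abs()    # vertical differences
        loss = loss + (mask[:, :-step] * dx).sum() + (mask[:-step, :] * dy).sum()
    return loss
```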
### Object-background Relation as Regularization
With the help of the patch-based background smoothness loss, most artifacts of the background are resolved, yielding a smooth surface. We further use this smooth background surface to regularize the SDF fields of the other objects, leveraging the key prior knowledge that _all the other objects are confined to the room_, i.e., to the background surface.
In the original framework of [37], if an object is partially observed, e.g. placed against the background, the object's reconstructed surface will not be a "closed" surface (see Fig. 1 and Fig. 2). We refer to the toy-case analysis in Fig. 4 for an explanation of this unsatisfactory reconstruction. To encourage the reconstruction to be _watertight_, we design two types of regularization on the SDF fields of objects.
**Object Point-SDF Loss:** A straightforward solution is to directly regularize the objects' SDFs on every sampled
point behind the background surface. To be more specific, the regular volume rendering only guarantees a single change of the sign (positive SDF to negative) when a ray hits a visible surface. For watertight objects, the ray should hit another occluded surface (negative SDF to positive). With the prior that the object is confined within the room, the occluded object surface should be closer than the background surface, meaning that the object SDFs of points behind the background surface should all be positive (Fig. 4 (c)).
We implement an object point-SDF loss based on the above analysis. For the sampled points along the rays, we first apply a root-finding algorithm to the background SDFs of these points to find the zero-SDF ray depth \(t^{\prime}\). Then the object point-SDF loss can be formulated as
\[\mathcal{L}_{\text{op}}=\frac{1}{k-1}\sum_{j=2}^{k}\max{(0,\epsilon-\mathbf{s }_{j}(\mathbf{p}(t_{i})))\cdot\mathbb{1}[t_{i}>t^{\prime}]}, \tag{8}\]
which pushes the objects' SDFs at points behind the surface larger than a positive threshold \(\epsilon\).
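A sketch of Eq. 8 in PyTorch. The root-finding step is replaced here by a simple linear interpolation between the two samples that bracket the background SDF's sign change; this is an illustrative simplification and assumes every ray does cross the background surface.

```python
import torch

def object_point_sdf_loss(t, bg_sdf, obj_sdfs, eps=0.05):
    """
    t:        (n_rays, n_pts) ray depths of the sampled points.
    bg_sdf:   (n_rays, n_pts) background SDF at those points.
    obj_sdfs: (n_rays, n_pts, k-1) SDFs of the k-1 foreground objects.
    Assumes each ray crosses the background surface (+ -> -) at least once.
    """
    # first positive-to-negative sign change of the background SDF along each ray
    sign_change = (bg_sdf[:, :-1] > 0) & (bg_sdf[:, 1:] <= 0)
    idx = sign_change.float().argmax(dim=1)
    rows = torch.arange(t.shape[0], device=t.device)
    s0, s1 = bg_sdf[rows, idx], bg_sdf[rows, idx + 1]
    t0, t1 = t[rows, idx], t[rows, idx + 1]
    t_prime = t0 + (t1 - t0) * s0 / (s0 - s1 + 1e-8)        # zero-SDF depth t'
    # hinge on object SDFs of points behind the background surface (Eq. 8)
    behind = (t > t_prime.unsqueeze(-1)).unsqueeze(-1).float()
    return (torch.relu(eps - obj_sdfs) * behind).mean()
```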
**Reversed Depth Loss:** Although \(\mathcal{L}_{\text{op}}\) can effectively regularize the SDF field of each object, in practice we find that the reconstructed object surface can still intersect the background surface (Fig. 4 (c)). The reason is that the sampled points are discrete and, in most cases, the background surface lies between two sampled points; therefore, the sign change of the occluded surface may still occur after hitting the background surface.

Moreover, per-point optimization cannot effectively propagate to the SDF distribution of the entire ray. To optimize the entire ray's SDF distribution for better regularization, we compute a _reversed depth_ along each ray. With the help of \(\mathcal{L}_{\text{op}}\), the sign of the object SDF along a ray is now positive-negative-positive, which enables rendering a depth value backward. We first transform the ray depths \(\{t_{i}|i=0,1,\ldots,n-1\}\) into the reversed ray depths \(\{\hat{t}_{i}|i=0,1,\ldots,n-1\}\), where
\[\hat{t}_{i}=(t_{0}+t_{n-1})-t_{n-1-i}. \tag{9}\]
With the reversed ray depth values, we use the background and object SDFs, both in reversed order, to compute the accumulated weights and obtain the respective reversed depths. Note that, to compute the exactly correct depth, the points should be re-sampled along the reverse direction; here we directly reuse the sampled points to avoid computation overhead, and empirical results prove this effective. We only compute the reversed depth of a pixel if two conditions are satisfied: 1) the pixel's \(\hat{M}(\mathbf{r})=1\); 2) the SDF value of the rendered object at the furthest point is positive. Note that the second condition is usually satisfied when the object point-SDF loss is applied. By computing the reversed depth \(d_{o}\) of the hit object (determined by the pixel's rendered semantic) and \(d_{b}\) of the background, we obtain the reversed depth loss:
\[\mathcal{L}_{\text{rd}}=\max(0,d_{b}-d_{o}), \tag{10}\]
which pushes the object surface within the background as illustrated in Fig. 4 (d).
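A sketch of Eqs. 9–10, assuming the rendering weights along the reversed ray have already been computed from the flipped SDF sequences (exactly how those reversed weights are formed is left open here):

```python
import torch

def reversed_depth_loss(t, w_obj_rev, w_bg_rev):
    """
    t:          (n_rays, n_pts) forward ray depths t_0 .. t_{n-1}.
    w_obj_rev:  (n_rays, n_pts) rendering weights of the hit object's SDF taken in
                reversed point order (assumed to use the same machinery as Eq. 3).
    w_bg_rev:   (n_rays, n_pts) same for the background SDF.
    Only rays passing the two conditions above should be fed to this loss.
    """
    t_rev = (t[:, :1] + t[:, -1:]) - t.flip(dims=[1])   # Eq. 9: reversed ray depths
    d_o = (w_obj_rev * t_rev).sum(dim=1)                # reversed depth of the object
    d_b = (w_bg_rev * t_rev).sum(dim=1)                 # reversed depth of the background
    return torch.relu(d_b - d_o).mean()                 # Eq. 10
```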
### Training Details
**Loss Functions:** Monocular geometric cues are essential for indoor scene reconstruction, as shown in [43]. Following [43], we add the depth and normal consistency losses (\(\mathcal{L}_{\text{D}}\), \(\mathcal{L}_{\text{N}}\)) with pseudo ground truth from the Omnidata [5] model. In the experiments, we also add the monocular cues to [37] to obtain a stronger baseline for a fair comparison.
The SDF network is also regularized by an Eikonal [7] loss term \(\mathcal{L}_{\text{E}}\). We further use the semantic loss \(\mathcal{L}_{\text{S}}\) proposed in [37] to learn the compositional geometry. The overall loss function for compositional reconstruction is:
\[\begin{split}\mathcal{L}=&\mathcal{L}_{\text{RGB}}+ \lambda_{\text{D}}\mathcal{L}_{\text{D}}+\lambda_{\text{N}}\mathcal{L}_{\text{ N}}+\lambda_{\text{E}}\mathcal{L}_{\text{E}}+\lambda_{\text{S}}\mathcal{L}_{ \text{S}}\\ &+\lambda_{\text{bs}}\mathcal{L}_{\text{bs}}+\lambda_{\text{op}} \mathcal{L}_{\text{op}}+\lambda_{\text{rd}}\mathcal{L}_{\text{rd}}.\end{split} \tag{11}\]
We set \(\lambda_{\text{bs}},\lambda_{\text{op}},\lambda_{\text{rd}}=0.1\) in our experiments. The detailed weight settings and the calculation of the other losses can be found in the supplementary.
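As a sketch, the total objective of Eq. 11 is simply a weighted sum of the individual terms; only the 0.1 weights for the three proposed losses are from the paper, the remaining weights below are illustrative placeholders.

```python
def total_loss(losses,
               lam_D=0.1, lam_N=0.05, lam_E=0.05, lam_S=0.04,    # placeholder values
               lam_bs=0.1, lam_op=0.1, lam_rd=0.1):               # 0.1 as stated above
    """losses: dict holding the individual terms of Eq. 11, e.g. computed as in the
    sketches above. The real non-proposed weights are given in the supplementary."""
    return (losses['rgb']
            + lam_D * losses['depth'] + lam_N * losses['normal']
            + lam_E * losses['eikonal'] + lam_S * losses['semantic']
            + lam_bs * losses['bs'] + lam_op * losses['op'] + lam_rd * losses['rd'])
```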
**Implementation Details:** We implement our method in PyTorch [27]. We use the Adam [13] optimizer with a learning
Figure 4: **Toy-case analysis.** A bird's-eye view of an object against the background; the SDF and surface of the object are shown in red and the background's in blue. (a) In [37] the minimum SDF is used for volume rendering and the semantic loss, so on the object's visible surface the object SDF is optimized to 0 while the background SDF is positive, and similarly for the visible background surface. Since the scene is only observed from the left, the right part of the object surface is open and the unobserved background region is unconstrained. (b) With the smoothness prior the background surface becomes plausible; (c) the object point-SDF loss closes the object surface, but intersections remain; (d) finally, the reversed depth loss optimizes the entire ray so that the object is confined within the background.
rate of 5e-4 for 50k iterations and sample 1024 rays per iteration. The weight initialization scheme for the SDF MLP is identical to [41, 36, 37]. \(u\) is initialized as \(0.05\) and we set \(\gamma=20\) as proposed in [37]. \(\mathcal{P}\), \(\mathcal{T}_{\mathcal{P}}\) and \(\epsilon\) are set to \(32\), \(10\) and \(0.05\), respectively. All the reconstructions are obtained by running marching cubes [17] at a resolution of \(512\). More implementation details can be found in the supplementary.
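A sketch of the training loop with the stated hyper-parameters; `model`, `ray_sampler`, and `compute_losses` are hypothetical names standing in for the actual pipeline.

```python
import torch

def train(model, ray_sampler, num_iters=50_000, rays_per_iter=1024, lr=5e-4):
    """Optimization setup matching the hyper-parameters stated above."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(num_iters):
        rays, targets = ray_sampler(rays_per_iter)     # 1024 rays per iteration
        loss = model.compute_losses(rays, targets)     # hypothetical: returns Eq. 11
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Mesh extraction (sketch): evaluate an object's SDF on a 512^3 grid and run marching
# cubes at the zero level set, e.g. skimage.measure.marching_cubes(volume, level=0.0).
```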
## 4 Experiments
We first conduct an ablation study on each proposed component. Then we provide quantitative and qualitative results of object-compositional reconstruction on real-world and synthetic scenes. Finally, we show possible object manipulations enabled by our compositional reconstruction.
**Datasets & Metrics:** We consider two types of indoor datasets with multi-view RGB images and masks: 1) ScanNet [3], a real-world dataset widely used in previous works [8, 35, 37, 43]; 2) a hand-crafted synthetic dataset with five scenes, each containing \(5\sim 10\) objects. The synthetic dataset is included so that the ground-truth geometry of both occluded and non-occluded regions is available. On ScanNet we report the Chamfer Distance and F-score for evaluation, following [8, 43]. On the synthetic scenes we divide the metrics into two aspects: objects and background. For objects we report the reconstruction performance against the complete ground-truth object meshes, averaged across all the objects in all the scenes. For the background we report the reconstruction metrics and rendered depth errors of the occluded regions only, to highlight the effectiveness of our regularization; these results are also averaged across all the scenes. See the supplementary for a detailed introduction to the datasets and metrics.
**Baselines:** We mainly compare with **ObjSDF** [37] in this work, as it is the only method that focuses on the same task; a more specific discussion is provided in the supplementary. Since we focus on indoor scenes where monocular cues can help a lot [43], we add the losses proposed in [43] to our method for better performance. For a fair comparison, we also combine [37] and [43] into a stronger baseline, named **ObjSDF***. Moreover, since ObjSDF*'s object reconstructions inevitably have artifacts, we develop a post-processing method to cull the parts outside the background range and name this baseline **ObjSDF*-C**; details of how the range is set are in the supplementary. Note that our method directly generates clean watertight meshes and does not need such post-processing. In the ScanNet experiments, we also report results for MonoSDF [43] as a reference, since we can only evaluate the overall reconstruction there (details in Section 4.3).
The object point-SDF loss \(\mathcal{L}_{\text{op}}\) regularizes the SDF field to be a watertight mesh. The \(\mathcal{L}_{\text{rd}}\) further mitigates the flaws at the back by regularizing the object to be confined to the background.
### Reconstruction in Synthetic Scenes
Since the synthetic scenes are rendered with objects that have accurate object-level 3D ground-truth geometry, we compare our method with the previous object-compositional reconstruction method [37] on the \(5\) generated scenes and report the quantitative results in Tab. 2.

Tab. 2 provides detailed metrics on object-level reconstruction, and a qualitative comparison is shown in Fig. 6. The computation procedure for all the metrics is available in the supplementary. With monocular cues, ObjSDF* achieves better performance than the original ObjSDF. However, its performance on object reconstruction is still far from satisfactory, since it can only accurately recover the visible surfaces together with large irrelevant structures in indoor scenes, as pointed out in the analysis before. Though ObjSDF*-C can eliminate most of the outliers and improve performance, it still only reconstructs the visible regions as an open surface, as shown in Fig. 6. On the contrary, our regularizations help to smoothen the background and recover each object as a watertight mesh, which improves performance by a large margin and enables broader downstream applications. As shown in Fig. 6, RICO can reconstruct the unobservable regions of the objects (back views, cf. the figure caption) where the previous methods fail.
### Reconstruction in Real-world Scenes
We conduct experiments on 7 scenes of ScanNet [3] to show the effectiveness of our method on real-world data.
Figure 6: **Qualitative Results on Synthetic Scenes.** Above the blue line we show the background comparison of different methods on two scenes. Below we provide results of two scenes where only the object results are shown. In the red rectangles at the bottom of each picture, we show the back (left part) and front (right part) views of an example object. Detailed descriptions in Section 4.2.
Since ScanNet only provides ground truth for the visible surface, we follow the protocol in [8] and report the reconstruction performance of the entire scene in Tab. 3. By utilizing the rendering formulation in [36], which explicitly models the angle between the surface normal and the ray, our method achieves slightly better performance than ObjSDF* and [43] on visible surfaces.
In Fig. 7 we also provide comparisons on two scenes. Note that the ScanNet images are blurry and noisy, and the masks are sometimes inaccurate (e.g., the legs of chairs are missing), leading to less accurate reconstructions. Nevertheless, it can still be seen that our method successfully regularizes the unobservable regions to obtain watertight meshes, while the baselines always produce open surfaces. For example, the piano in the first row and the sofas in the second row are only reconstructed on their visible surfaces by the baselines.
### Object Manipulation
As mentioned above, the reconstruction results from ObjSDF* are sub-optimal for downstream applications like object manipulation. On the contrary, since RICO obtains watertight meshes, it can be easily used for such applications. In Fig. 8 we show the volume-rendered normal maps and segmentation masks before and after moving an object in the scene. An illustration of how to manipulate one object in our framework and conduct volume rendering accordingly is available in the supplementary. As illustrated, after manipulation the rendered segmentation and normal map are still clean for RICO. In contrast, the results of ObjSDF* are messy because its reconstructions are connected to artifacts that were originally outside of the scene.
## 5 Conclusion
We have presented RICO, a novel approach for compositional reconstruction in indoor scenes. Our key motivation is to regularize the unobservable regions of partially observed objects in indoor scenes. We exploit geometry smoothness for the occluded background, and then adopt the improved background as a prior to regularize the objects' geometry. Our experimental results show that our method achieves compositional reconstruction with fewer artifacts and watertight object geometry, which further facilitates downstream applications like object manipulation.
**Future work:** Currently, each object's pose is baked into its geometry. We identify learning the shape and pose separately as a potential direction for more flexible applications.
Figure 8: **Object Manipulation Cases. Here we show the volume rendered normal maps and segmentation masks in two scenes, before (top row) and after (bottom row) the manipulation.**
| | ObjSDF | ObjSDF* | Ours | MonoSDF |
| --- | --- | --- | --- | --- |
| Chamfer-\(L_{1}\) \(\downarrow\) | 0.170 | 0.092 | **0.088** | 0.090 |
| F-Score \(\uparrow\) | 0.357 | 0.567 | **0.624** | 0.610 |

Table 3: **ScanNet Reconstruction Quantitative Results.**
Figure 7: **Qualitative Results on ScanNet [3]. On the left of blue line we show the overall reconstruction from [43] as the reference. On the right we show the comparison between our method and two baselines. Similarly, the back and front views of objects with partial observations are provided in red rectangles. Detailed descriptions in Section 4.3.**